# Use Case 6: Comparing Derived Molecular Data with Proteomics
For this use case, we will be looking at the derived molecular data contained in the Endometrial dataset, and comparing it with protein data. Derived molecular data means that we created new variables based on molecular data. One example of this is the activity of a pathway based on the abundance of phosphorylation sites. A second example is inferred cell type percentages from algorithms like CIBERSORT, which are based on comparing transcriptomics data to known profiles of pure cell types.
## Step 1: Importing packages
We will start by importing the python packages we will need, including the cptac data package. We will then load the Endometrial dataset which includes the endometrial patient data as well as accessory functions that we will use to analyze the data.
```
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import cptac
en = cptac.Endometrial()
```
## Step 2: Getting data and selecting attributes
For this use case, we will be using two dataframes contained in the Endometrial dataset: `derived_molecular` and `proteomics`. We will load the derived_molecular dataframe and examine the data contained within it.
```
der_molecular = en.get_derived_molecular()
```
The derived molecular dataframe contains many different attributes that we can choose from for analysis. To view a list of these attributes, we can print out the column names of the dataframe. Here we print only the first 10 column names. To view the full list of column names without truncation, omit the slice (`[:10]`) at the end of the call. If your terminal is still abbreviating the list, first use the command `pd.set_option('display.max_seq_items', None)`.
```
der_molecular.columns.tolist()[:10]
```
For this use case, we will compare MSI status with the JAK1 protein abundance. MSI stands for [Microsatellite instability](https://en.wikipedia.org/wiki/Microsatellite_instability). The possible values for MSI status are MSI-H (high microsatellite instability) or MSS (microsatellite stable). In this context, "nan" refers to non-tumor samples. To see all of the possible values in any column, you can use the pandas function `.unique()`
```
der_molecular['MSI_status'].unique()
```
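To see how many samples fall into each category, the pandas `value_counts` method is convenient. Here is a minimal sketch on a toy stand-in for the real `MSI_status` column (the values below are invented for illustration; in practice, call `value_counts` on `der_molecular['MSI_status']`):

```python
import pandas as pd

# Toy stand-in for der_molecular['MSI_status']; the real column comes from cptac.
msi = pd.Series(['MSS', 'MSI-H', 'MSS', None, 'MSS', 'MSI-H'], name='MSI_status')

# dropna=False keeps the NaN (non-tumor) samples in the tally.
counts = msi.value_counts(dropna=False)
print(counts)
```

Passing `dropna=False` matters here because the non-tumor samples are recorded as NaN and would otherwise be silently excluded from the count.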
## Step 3: Join dataframes
We will use the `en.join_metadata_to_omics` function to join our desired molecular trait with the proteomics data.
```
joined_data = en.join_metadata_to_omics(metadata_df_name="derived_molecular", omics_df_name="proteomics",
                                        metadata_cols='MSI_status')
```
## Step 4: Plot data
Now we will use the seaborn and matplotlib libraries to create a boxplot and histogram that will allow us to visualize this data. For more information on using seaborn, see this [Seaborn tutorial](https://seaborn.pydata.org/tutorial.html).
```
msi_boxplot = sns.boxplot(x='MSI_status', y='JAK1_proteomics', data=joined_data, showfliers=False,
                          order=['MSS', 'MSI-H'])
msi_boxplot = sns.stripplot(x='MSI_status', y='JAK1_proteomics', data=joined_data, color='.3',
                            order=['MSS', 'MSI-H'])
plt.show()

msi_histogram = sns.FacetGrid(joined_data[['MSI_status', 'JAK1_proteomics']], hue="MSI_status",
                              legend_out=False, aspect=3)
msi_histogram = msi_histogram.map(sns.kdeplot, "JAK1_proteomics").add_legend(title="MSI_status")
msi_histogram.set(ylabel='Proportion')
plt.show()
```
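The plots above suggest a difference in JAK1 abundance between the two MSI groups; a two-sample t-test is one way to quantify it. A minimal sketch on synthetic abundances (the group sizes and effect size here are invented for illustration; in practice, pull the two groups from `joined_data`, e.g. `joined_data.loc[joined_data['MSI_status'] == 'MSS', 'JAK1_proteomics'].dropna()`):

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the JAK1 abundances of each MSI group.
mss = rng.normal(loc=0.0, scale=1.0, size=60)
msi_h = rng.normal(loc=1.5, scale=1.0, size=40)

t_stat, p_value = ttest_ind(mss, msi_h)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Remember to drop NaN values (the non-tumor samples) before testing the real data.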
# Dangers of Multiple Comparisons
Testing multiple hypotheses from the same data can be problematic. Exhaustively testing all pairwise relationships between variables in a data set is a commonly used, but generally misleading, form of multiple comparisons. The chance of finding false significance using such a **data dredging** approach can be surprisingly high.
In this exercise you will perform multiple comparisons on 20 **identically distributed independent (iid)** variables. Ideally, such tests should not find significant relationships, but the actual result is quite different.
To get started, execute the code in the cell below to load the required packages.
```
import pandas as pd
import numpy as np
import numpy.random as nr
from scipy.stats import ttest_ind, f_oneway
from itertools import product
```
In this exercise you will apply a t-test to all pairwise combinations of identical Normally distributed variables. We will create a data set of 20 iid standard Normal variables with 1000 samples each. Execute the code in the cell below to create this data and display the mean and variance of each variable.
```
ncolumns = 20
nr.seed(234)
normal_vars = nr.normal(size=(1000,ncolumns))
print('The means of the columns are\n', np.mean(normal_vars, axis = 0))
print('\nThe variances of the columns are\n', np.var(normal_vars, axis = 0))
```
Notice that means and variances are close to 0.0 and 1.0. As expected, there is not much difference between these variables.
Now for each pair of variables we will compute the t-statistic and p-value and append them to lists.
```
ttest_results = []
p_values = []
for i, j in product(range(ncolumns), range(ncolumns)):
    if i != j:  # We only want to test between different samples
        t1, t2 = ttest_ind(normal_vars[:, i], normal_vars[:, j])
        ttest_results.append(t1)
        p_values.append(t2)
```
How many of these t-tests will show **significance** at the 0.05 cut-off level? The loop above runs over all 20 × 19 = 380 ordered pairs (each pair of variables is tested twice), so we expect to find a number of falsely significant test results at this level. To find out, complete and execute the code in the cell below to filter the test results and print those that show significance.
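The expected number of falsely significant results under the null hypothesis is simply the significance level times the number of tests:

```python
# 20 variables give 20 * 19 = 380 ordered pairs (each pair is tested twice).
n_tests = 20 * 19
alpha = 0.05
expected_false_positives = alpha * n_tests
print(expected_false_positives)  # → 19.0
```

So even with perfectly independent null variables, roughly 19 of the 380 tests should come out "significant" by chance alone.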
```
significance_level = 0.05

def find_significant(p_values, ttest_results, significance_level):
    n_cases = 0
    for i in range(len(p_values)):
        ##### Add the missing if statement here #############
        if p_values[i] < significance_level:
            n_cases += 1
            print('SIGNIFICANT t-test, t-statistic = ', round(ttest_results[i], 2), ' and p-value = ', round(p_values[i], 4))
    print('\nNumber of falsely significant tests = ', n_cases)

find_significant(p_values, ttest_results, significance_level)
```
Notice the large number of apparently significant tests. Do you trust these results to show any important relationships in the data?
Can the Bonferroni correction help? Execute the code in the cell below to apply the Bonferroni-adjusted significance level to the p-value and t-test data.
> ### Bonferroni correction
> Several adjustments for the multiple comparisons problem have been proposed. The simplest is the **Bonferroni correction** (Holm's 1979 procedure is a related, sequential refinement). The adjustment is simple:
$$\alpha_b = \frac{\alpha}{m}, \quad m = \text{number of tests}$$
> The problem with the Bonferroni correction is the loss of statistical power as $\alpha_b$ grows smaller. For big data problems with large numbers of groups, this issue can be especially serious.
```
significance_bonferroni = significance_level / 380.0
print('With the Bonferroni correction the significance level is now = ', significance_bonferroni)
find_significant(p_values, ttest_results, significance_bonferroni)
```
Even with the Bonferroni correction we find some falsely significant tests, if only just barely!
But can we still detect a small effect with the Bonferroni correction, given that this method substantially reduces the power of the tests? Execute the code in the cell below, which compares a standard Normal to a Normal with a small mean (effect size), to find out.
```
nr.seed(567)
ttest_ind(normal_vars[:, 0], nr.normal(loc=0.01, size=1000))
```
Given the Bonforoni correction, this difference in means would not be found significant. This illustrates the downside of the correction, which may prevent detection of significant effects, while still finding false significance.
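Holm's (1979) sequential procedure is a uniformly more powerful alternative to the plain Bonferroni cut-off: sort the p-values, compare the smallest against α/m, the next against α/(m−1), and so on, stopping at the first failure. A minimal sketch (not part of the original exercise; `holm_reject` is a name chosen here for illustration):

```python
import numpy as np

def holm_reject(p_values, alpha=0.05):
    """Return a boolean array: True where the Holm procedure rejects H0."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        # Compare the (rank+1)-th smallest p-value against alpha / (m - rank).
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject

print(holm_reject([0.001, 0.02, 0.04, 0.6], alpha=0.05).sum())  # → 1
```

Applied to the 380 p-values above, this would reject at least as many hypotheses as the plain Bonferroni cut-off while controlling the same family-wise error rate.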
##### Copyright 2020, Stephen F. Elston. All rights reserved.
<a href="https://colab.research.google.com/github/hsuanchia/Image-caption/blob/main/generate_caption.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
import json,pickle, os, sys
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import backend as K
from tensorflow.keras.applications import VGG16
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.vgg16 import preprocess_input
from keras.models import load_model,Model
from PIL import Image
from tqdm import tqdm
import pickle
image_model = VGG16(include_top=True,weights='imagenet')
image_model.summary()
img_size = K.int_shape(image_model.input)[1:3]
transfer_layer = image_model.get_layer('block5_conv3')
encoder_model = Model(inputs=image_model.input,outputs=transfer_layer.output)
value_size = K.int_shape(transfer_layer.output)[1]
# Provide by @snsd0805
def preprocess_img(path):
    img = image.load_img(path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    features = encoder_model.predict(x)
    return features
max_length = 30
start = '<sos>'
end = '<end>'
```
# val_voc_5.pkl
* Contains: word_index and inv_word_index
* The 5 in the filename means the vocabulary keeps only words that appear more than 5 times across all annotations
* Depending on the data the model was trained with, a different vocabulary may be required
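The count-threshold idea can be sketched on a toy corpus. This is a hypothetical reconstruction (the real pickle was built offline; `build_vocab` is a name chosen here for illustration), reserving index 0 for padding as Keras conventionally does:

```python
from collections import Counter

def build_vocab(captions, min_count=5):
    # Count every word across all captions, then keep words above the threshold.
    counts = Counter(w for cap in captions for w in cap.split())
    words = [w for w, c in counts.items() if c > min_count]
    word_index = {w: i + 1 for i, w in enumerate(sorted(words))}  # 0 reserved for padding
    inv_word_index = {i: w for w, i in word_index.items()}
    return word_index, inv_word_index

captions = ['a dog runs'] * 6 + ['a cat sits'] * 3
word_index, inv_word_index = build_vocab(captions)
print(sorted(word_index))  # → ['a', 'dog', 'runs']
```

Words below the threshold ('cat', 'sits' here) drop out; in the `_unk` variant of the vocabulary they would instead map to an unknown token.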
```
voc_path = open('/content/drive/MyDrive/MSCOCO_2017/train_voc_5_unk.pkl','rb') # swap in the vocabulary that matches the model's training data
voc_data = pickle.load(voc_path)
word_index = voc_data['word_index']
inv_word_index = voc_data['inv_word_index']
voc_size = len(word_index) + 1
# Change this to the path of the model file you want to use
model_path = '/content/drive/MyDrive/MSCOCO_2017/model_noatt_train5000data_1.h5'
decoder_model = load_model(model_path)

# To use a pretrained encoder's output, pass img_data and an empty string for img_path
# To caption a single image, pass img_path and an empty list for img_data
def generate_caption(decoder_model, img_path, img_data, show=True):
    if img_data == []:
        img_data = preprocess_img(img_path)
    decoder_input = np.zeros((1, max_length), dtype='float32')
    token_cur = word_index[start]
    output_text = ''
    count_tokens = 0
    while token_cur != word_index[end] and count_tokens < max_length:
        decoder_input[0, count_tokens] = token_cur
        decoder_output = decoder_model.predict([img_data, decoder_input])
        token_cur = np.argmax(decoder_output[0, count_tokens])
        cur_word = inv_word_index[token_cur]
        if cur_word != end:
            output_text += " " + cur_word
        count_tokens += 1
    if show:
        test_image = plt.imread(img_path)
        plt.imshow(test_image)
        plt.show()
        print("Caption:")
        print(output_text)
        return output_text
    else:
        return output_text
# Run all of the code above first
# Then set the model path above and pass the path of the image you want to caption
generate_caption(decoder_model,'/content/drive/MyDrive/MSCOCO_2017/val2017/000000000885.jpg',[])
generate_caption(decoder_model,'/content/drive/MyDrive/eggroll.jpg',[])
# generate_caption(decoder_model,'/content/drive/MyDrive/me.jpg',[])
generate_caption(decoder_model,'/content/drive/MyDrive/MSCOCO_2017/test2017/000000001371.jpg',[])
generate_caption(decoder_model,'/content/drive/MyDrive/MSCOCO_2017/val2017/000000002006.jpg',[])
```
# Code for computing the BLEU score
```
p = open('/content/drive/MyDrive/MSCOCO_2017/new-dataset/output_14x14x512_5000_val.pkl','rb')
val_data = pickle.load(p)
val_data[0]
# Recover the image_id from the file_name
def get_imgid(file_name):
    id = ""
    f = 0
    for i in range(12):
        if file_name[i] != "0":
            f = 1
        if f == 1:
            id += file_name[i]
    return int(id)
get_imgid('000000700765.jpg')
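# Note (editor sketch): since the file names are just a zero-padded integer plus
# an extension, the helper above can be reduced to a one-liner; int() drops the
# leading zeros for us. 'get_imgid_simple' is a hypothetical alternative name.
import os

def get_imgid_simple(file_name):
    return int(os.path.splitext(file_name)[0])

get_imgid_simple('000000700765.jpg')  # → 700765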
# Generate caption predictions for every val image and write them in the required format
# The submission format is specified under "5. Image Captioning" at https://cocodataset.org/#format-data
results = list()
for data in tqdm(val_data):  # tqdm prints a nice progress bar
    one_result = dict()
    cap = generate_caption(decoder_model, "", data['feature'], show=False)  # swap in the function you use to generate captions
    one_result["image_id"] = get_imgid(data['filename'])
    one_result["caption"] = cap
    results.append(one_result)
# Write the results to a json file
model_name = os.path.basename(model_path)
save_path = "/content/drive/MyDrive/MSCOCO_2017/score/"
generated_json_path = save_path + "generated_" + model_name + ".json"
print("json saved at", generated_json_path)
with open(generated_json_path, 'w') as jsonfile:
    json.dump(results, jsonfile)
if not os.path.exists('coco-caption'):
    # Clone an unofficial fork that is compatible with Python 3
    !git clone https://github.com/davidfsemedo/coco-caption
sys.path.insert(0,"/content/coco-caption")
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap
import skimage.io as io
import pylab
pylab.rcParams['figure.figsize'] = (10.0, 8.0)
from json import encoder
encoder.FLOAT_REPR = lambda o: format(o, '.3f')
def evaluate_coco(generated_json, answer_json):
    # Work around format differences between the val2017 and 2014 annotations
    # https://github.com/tylin/coco-caption/issues/26#issuecomment-439825144
    if os.path.basename(answer_json) == "captions_val2017.json":
        modified_anno_path = "/content/modified_captions_val2017.json"
        with open(answer_json, 'r') as f:
            data = json.load(f)
        data['type'] = 'captions'
        with open(modified_anno_path, 'w') as f:
            json.dump(data, f)
        answer_anno_path = modified_anno_path
    else:
        answer_anno_path = answer_json
    coco = COCO(answer_anno_path)
    cocoRes = coco.loadRes(generated_json)
    # create cocoEval object by taking coco and cocoRes
    cocoEval = COCOEvalCap(coco, cocoRes)
    # evaluate on a subset of images by setting
    # cocoEval.params['image_id'] = cocoRes.getImgIds()
    # please remove this line when evaluating the full validation set
    cocoEval.params['image_id'] = cocoRes.getImgIds()
    # evaluate results
    cocoEval.evaluate()
    return cocoEval
answer_json_path = "/content/drive/MyDrive/MSCOCO_2017/2017_annotations/captions_val2017.json"
cocoEval = evaluate_coco(generated_json_path, answer_json_path)
# print output evaluation scores
for metric, score in cocoEval.eval.items():
    print('%s: %.3f' % (metric, score))
# Append the scores to a csv file
import csv
scoredict = {'model_name': model_name,
             'train_data': '14x14x512',  # '7x7x512' or '14x14x512' or '4096'
             'voc': 'train_voc_5_unk',   # 'train_voc_5' or 'train_voc_5_unk'
             'oov_vector': 'random',     # 'zero' or 'random'
             'unk_token': 'average',     # 'average' or 'none'
             'train_epoch': 92,          # if early stopping fired, record the epoch it actually stopped at
             'TimeDistributed_dropout': 0.5,
             'recurrent_dropout': 0,
             'predict_data_num': 2500}
fieldnames = ['model_name', 'train_data', 'voc', 'oov_vector', 'unk_token',
              'train_epoch', 'TimeDistributed_dropout', 'recurrent_dropout', 'predict_data_num']
for metric, score in cocoEval.eval.items():
    fieldnames.append(metric)
    scoredict[metric] = score
# This also stores columns and scores for
# ['Bleu_1', 'Bleu_2', 'Bleu_3', 'Bleu_4', 'METEOR', 'ROUGE_L', 'CIDEr']
score_save_path = save_path + 'integrate_score.csv'
with open(score_save_path, 'a', newline='') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    # writer.writeheader()  # run this only when creating a new file
    writer.writerow(scoredict)
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Transformer model for language understanding
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/tutorials/text/transformer">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/text/transformer.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/text/transformer.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/r2/tutorials/text/transformer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial trains a <a href="https://arxiv.org/abs/1706.03762" class="external">Transformer model</a> to translate Portuguese to English. This is an advanced example that assumes knowledge of [text generation](text_generation.ipynb) and [attention](nmt_with_attention.ipynb).
The core idea behind the Transformer model is *self-attention*—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections *Scaled dot product attention* and *Multi-head attention*.
A transformer model handles variable-sized input using stacks of self-attention layers instead of [RNNs](text_classification_rnn.ipynb) or [CNNs](../images/intro_to_cnns.ipynb). This general architecture has a number of advantages:
* It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, [StarCraft units](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/#block-8)).
* Layer outputs can be calculated in parallel, instead of a series like an RNN.
* Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see [Scene Memory Transformer](https://arxiv.org/pdf/1903.03878.pdf) for example).
* It can learn long-range dependencies. This is a challenge in many sequence tasks.
The downsides of this architecture are:
* For a time-series, the output for a time-step is calculated from the *entire history* instead of only the inputs and current hidden-state. This _may_ be less efficient.
* If the input *does* have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.
After training the model in this notebook, you will be able to input a Portuguese sentence and return the English translation.
<img src="https://www.tensorflow.org/images/tutorials/transformer/attention_map_portuguese.png" width="800" alt="Attention heatmap">
```
from __future__ import absolute_import, division, print_function, unicode_literals
# !pip install tensorflow-gpu==2.0.0-beta1
import tensorflow_datasets as tfds
import tensorflow as tf
import time
import numpy as np
import matplotlib.pyplot as plt
tokenizer = tfds.deprecated.text.Tokenizer()
```
## Setup input pipeline
Use [TFDS](https://www.tensorflow.org/datasets) to load the [Portuguese-English translation dataset](https://github.com/neulab/word-embeddings-for-nmt) from the [TED Talks Open Translation Project](https://www.ted.com/participate/translate).
This dataset contains approximately 50000 training examples, 1100 validation examples, and 2000 test examples.
```
examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
                               as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
```
Create a custom subwords tokenizer from the training dataset.
```
tokenizer_en = tfds.deprecated.text.SubwordTextEncoder.build_from_corpus(
    (en.numpy() for pt, en in train_examples), target_vocab_size=2 ** 13)
tokenizer_pt = tfds.deprecated.text.SubwordTextEncoder.build_from_corpus(
    (pt.numpy() for pt, en in train_examples), target_vocab_size=2 ** 13)
sample_string = 'Transformer is awesome.'
tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))
original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))
assert original_string == sample_string
```
The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.
```
for ts in tokenized_string:
    print('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
BUFFER_SIZE = 20000
BATCH_SIZE = 64
```
Add a start and end token to the input and target.
```
def encode(lang1, lang2):
    lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
        lang1.numpy()) + [tokenizer_pt.vocab_size + 1]
    lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
        lang2.numpy()) + [tokenizer_en.vocab_size + 1]
    return lang1, lang2
```
Note: To keep this example small and relatively fast, drop examples with a length of over 40 tokens.
```
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
    return tf.logical_and(tf.size(x) <= max_length,
                          tf.size(y) <= max_length)
```
Operations inside `.map()` run in graph mode and receive a graph tensor that does not have a numpy attribute. The `tokenizer` expects a string or Unicode symbol to encode into integers. Hence, you need to run the encoding inside a `tf.py_function`, which receives an eager tensor having a numpy attribute that contains the string value.
```
def tf_encode(pt, en):
    return tf.py_function(encode, [pt, en], [tf.int64, tf.int64])

train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# cache the dataset to memory to get a speedup while reading from it.
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(
    BATCH_SIZE, padded_shapes=([-1], [-1]))
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)
val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(
    BATCH_SIZE, padded_shapes=([-1], [-1]))
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
```
## Positional encoding
Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.
The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the *similarity of their meaning and their position in the sentence*, in the d-dimensional space.
See the notebook on [positional encoding](https://github.com/tensorflow/examples/blob/master/community/en/position_encoding.ipynb) to learn more about it. The formula for calculating the positional encoding is as follows:
$$\Large{PE_{(pos, 2i)} = sin(pos / 10000^{2i / d_{model}})} $$
$$\Large{PE_{(pos, 2i+1)} = cos(pos / 10000^{2i / d_{model}})} $$
```
def get_angles(pos, i, d_model):
    angle_rates = 1 / np.power(10000, (2 * (i // 2)) / np.float32(d_model))
    return pos * angle_rates

def positional_encoding(position, d_model):
    angle_rads = get_angles(np.arange(position)[:, np.newaxis],
                            np.arange(d_model)[np.newaxis, :],
                            d_model)
    # apply sin to even indices in the array; 2i
    sines = np.sin(angle_rads[:, 0::2])
    # apply cos to odd indices in the array; 2i+1
    cosines = np.cos(angle_rads[:, 1::2])
    pos_encoding = np.concatenate([sines, cosines], axis=-1)
    pos_encoding = pos_encoding[np.newaxis, ...]
    return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)
plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
```
## Masking
Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value `0` is present: it outputs a `1` at those locations, and a `0` otherwise.
```
def create_padding_mask(seq):
    seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
    # add extra dimensions so that we can add the padding
    # to the attention logits.
    return seq[:, tf.newaxis, tf.newaxis, :]  # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
```
The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.
This means that to predict the third word, only the first and second word will be used. Similarly to predict the fourth word, only the first, second and the third word will be used and so on.
```
def create_look_ahead_mask(size):
    mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
    return mask  # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
```
## Scaled dot product attention
<img src="https://www.tensorflow.org/images/tutorials/transformer/scaled_attention.png" width="500" alt="scaled_dot_product_attention">
The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:
$$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$
The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has small gradients, resulting in a very hard softmax.
For example, consider that `Q` and `K` have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of `dk`. Hence, *square root of `dk`* is used for scaling (and not any other number) because the matmul of `Q` and `K` should have a mean of 0 and variance of 1, so that we get a gentler softmax.
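This variance argument is easy to check numerically: for `q` and `k` with i.i.d. standard-normal entries of dimension `d_k`, the dot product has variance close to `d_k`, and dividing by the square root of `d_k` restores unit variance. A quick NumPy check (not part of the tutorial; sample counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
d_k = 512
n = 10000  # number of sampled (q, k) pairs

q = rng.normal(size=(n, d_k))
k = rng.normal(size=(n, d_k))
dots = np.sum(q * k, axis=1)  # one dot product per row

print(np.var(dots))                  # ≈ d_k = 512
print(np.var(dots / np.sqrt(d_k)))   # ≈ 1
```

Keeping the pre-softmax logits at unit variance is exactly what yields the "gentler" softmax described above.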
The mask is multiplied with -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K and is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output.
```
def scaled_dot_product_attention(q, k, v, mask):
    """Calculate the attention weights.
    q, k, v must have matching leading dimensions.
    k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
    The mask has different shapes depending on its type (padding or look ahead)
    but it must be broadcastable for addition.
    Args:
      q: query shape == (..., seq_len_q, depth)
      k: key shape == (..., seq_len_k, depth)
      v: value shape == (..., seq_len_v, depth_v)
      mask: Float tensor with shape broadcastable
            to (..., seq_len_q, seq_len_k). Defaults to None.
    Returns:
      output, attention_weights
    """
    matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)

    # scale matmul_qk
    dk = tf.cast(tf.shape(k)[-1], tf.float32)
    scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

    # add the mask to the scaled tensor.
    if mask is not None:
        scaled_attention_logits += (mask * -1e9)

    # softmax is normalized on the last axis (seq_len_k) so that the scores
    # add up to 1.
    attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

    output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

    return output, attention_weights
```
As the softmax normalization is done on K, its values decide the amount of importance given to Q.
The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words we want to focus on are kept as is and the irrelevant words are flushed out.
```
def print_out(q, k, v):
    temp_out, temp_attn = scaled_dot_product_attention(q, k, v, None)
    print('Attention weights are:')
    print(temp_attn)
    print('Output is:')
    print(temp_out)

np.set_printoptions(suppress=True)
temp_k = tf.constant([[10, 0, 0],
                      [0, 10, 0],
                      [0, 0, 10],
                      [0, 0, 10]], dtype=tf.float32)  # (4, 3)
temp_v = tf.constant([[   1, 0],
                      [  10, 0],
                      [ 100, 5],
                      [1000, 6]], dtype=tf.float32)  # (4, 2)
# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# This query aligns with a repeated key (third and fourth),
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# This query aligns equally with the first and second key,
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
```
Pass all the queries together.
```
temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32) # (3, 3)
print_out(temp_q, temp_k, temp_v)
```
## Multi-head attention
<img src="https://www.tensorflow.org/images/tutorials/transformer/multi_head_attention.png" width="500" alt="multi-head attention">
Multi-head attention consists of four parts:
* Linear layers and split into heads.
* Scaled dot-product attention.
* Concatenation of heads.
* Final linear layer.
Each multi-head attention block gets three inputs; Q (query), K (key), V (value). These are put through linear (Dense) layers and split up into multiple heads.
The `scaled_dot_product_attention` defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using `tf.transpose`, and `tf.reshape`) and put through a final `Dense` layer.
Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split each head has a reduced dimensionality, so the total computation cost is the same as a single head attention with full dimensionality.
```
class MultiHeadAttention(tf.keras.layers.Layer):
    def __init__(self, d_model, num_heads):
        super(MultiHeadAttention, self).__init__()
        self.num_heads = num_heads
        self.d_model = d_model

        assert d_model % self.num_heads == 0

        self.depth = d_model // self.num_heads

        self.wq = tf.keras.layers.Dense(d_model)
        self.wk = tf.keras.layers.Dense(d_model)
        self.wv = tf.keras.layers.Dense(d_model)

        self.dense = tf.keras.layers.Dense(d_model)

    def split_heads(self, x, batch_size):
        """Split the last dimension into (num_heads, depth).
        Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
        """
        x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
        return tf.transpose(x, perm=[0, 2, 1, 3])

    def call(self, v, k, q, mask):
        batch_size = tf.shape(q)[0]

        q = self.wq(q)  # (batch_size, seq_len, d_model)
        k = self.wk(k)  # (batch_size, seq_len, d_model)
        v = self.wv(v)  # (batch_size, seq_len, d_model)

        q = self.split_heads(q, batch_size)  # (batch_size, num_heads, seq_len_q, depth)
        k = self.split_heads(k, batch_size)  # (batch_size, num_heads, seq_len_k, depth)
        v = self.split_heads(v, batch_size)  # (batch_size, num_heads, seq_len_v, depth)

        # scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
        # attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
        scaled_attention, attention_weights = scaled_dot_product_attention(
            q, k, v, mask)

        scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3])  # (batch_size, seq_len_q, num_heads, depth)

        concat_attention = tf.reshape(scaled_attention,
                                      (batch_size, -1, self.d_model))  # (batch_size, seq_len_q, d_model)

        output = self.dense(concat_attention)  # (batch_size, seq_len_q, d_model)

        return output, attention_weights
```
Create a `MultiHeadAttention` layer to try out. At each location in the sequence, `y`, the `MultiHeadAttention` runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.
```
temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512)) # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
```
## Point wise feed forward network
Point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.
```
def point_wise_feed_forward_network(d_model, dff):
    return tf.keras.Sequential([
        tf.keras.layers.Dense(dff, activation='relu'),  # (batch_size, seq_len, dff)
        tf.keras.layers.Dense(d_model)  # (batch_size, seq_len, d_model)
    ])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
```
## Encoder and decoder
<img src="https://www.tensorflow.org/images/tutorials/transformer/transformer.png" width="600" alt="transformer">
The transformer model follows the same general pattern as a standard [sequence to sequence with attention model](nmt_with_attention.ipynb).
* The input sentence is passed through `N` encoder layers that generate an output for each word/token in the sequence.
* The decoder attends on the encoder's output and its own input (self-attention) to predict the next word.
### Encoder layer
Each encoder layer consists of sublayers:
1. Multi-head attention (with padding mask)
2. Point wise feed forward networks.
Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.
The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis. There are N encoder layers in the transformer.
```
class EncoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(EncoderLayer, self).__init__()
self.mha = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
attn_output, _ = self.mha(x, x, x, mask) # (batch_size, input_seq_len, d_model)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(x + attn_output) # (batch_size, input_seq_len, d_model)
ffn_output = self.ffn(out1) # (batch_size, input_seq_len, d_model)
ffn_output = self.dropout2(ffn_output, training=training)
out2 = self.layernorm2(out1 + ffn_output) # (batch_size, input_seq_len, d_model)
return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)
sample_encoder_layer_output = sample_encoder_layer(
tf.random.uniform((64, 43, 512)), False, None)
sample_encoder_layer_output.shape # (batch_size, input_seq_len, d_model)
```
### Decoder layer
Each decoder layer consists of sublayers:
1. Masked multi-head attention (with look ahead mask and padding mask)
2. Multi-head attention (with padding mask). V (value) and K (key) receive the *encoder output* as inputs. Q (query) receives the *output from the masked multi-head attention sublayer.*
3. Point wise feed forward networks
Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis.
There are N decoder layers in the transformer.
As Q receives the output from decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.
```
class DecoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(DecoderLayer, self).__init__()
self.mha1 = MultiHeadAttention(d_model, num_heads)
self.mha2 = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
self.dropout3 = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
# enc_output.shape == (batch_size, input_seq_len, d_model)
attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask) # (batch_size, target_seq_len, d_model)
attn1 = self.dropout1(attn1, training=training)
out1 = self.layernorm1(attn1 + x)
attn2, attn_weights_block2 = self.mha2(
enc_output, enc_output, out1, padding_mask) # (batch_size, target_seq_len, d_model)
attn2 = self.dropout2(attn2, training=training)
out2 = self.layernorm2(attn2 + out1) # (batch_size, target_seq_len, d_model)
ffn_output = self.ffn(out2) # (batch_size, target_seq_len, d_model)
ffn_output = self.dropout3(ffn_output, training=training)
out3 = self.layernorm3(ffn_output + out2) # (batch_size, target_seq_len, d_model)
return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)
sample_decoder_layer_output, _, _ = sample_decoder_layer(
tf.random.uniform((64, 50, 512)), sample_encoder_layer_output,
False, None, None)
sample_decoder_layer_output.shape # (batch_size, target_seq_len, d_model)
```
### Encoder
The `Encoder` consists of:
1. Input Embedding
2. Positional Encoding
3. N encoder layers
The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.
```
class Encoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
rate=0.1):
super(Encoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
self.pos_encoding = positional_encoding(input_vocab_size, self.d_model)
self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
seq_len = tf.shape(x)[1]
# adding embedding and position encoding.
x = self.embedding(x) # (batch_size, input_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x = self.enc_layers[i](x, training, mask)
return x # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, input_vocab_size=8500)
sample_encoder_output = sample_encoder(tf.random.uniform((64, 62)),
training=False, mask=None)
print (sample_encoder_output.shape) # (batch_size, input_seq_len, d_model)
```
### Decoder
The `Decoder` consists of:
1. Output Embedding
2. Positional Encoding
3. N decoder layers
The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.
```
class Decoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
rate=0.1):
super(Decoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
self.pos_encoding = positional_encoding(target_vocab_size, self.d_model)
self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
seq_len = tf.shape(x)[1]
attention_weights = {}
x = self.embedding(x) # (batch_size, target_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x, block1, block2 = self.dec_layers[i](x, enc_output, training,
look_ahead_mask, padding_mask)
attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
# x.shape == (batch_size, target_seq_len, d_model)
return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, target_vocab_size=8000)
output, attn = sample_decoder(tf.random.uniform((64, 26)),
enc_output=sample_encoder_output,
training=False, look_ahead_mask=None,
padding_mask=None)
output.shape, attn['decoder_layer2_block2'].shape
```
## Create the Transformer
Transformer consists of the encoder, decoder and a final linear layer. The output of the decoder is the input to the linear layer and its output is returned.
```
class Transformer(tf.keras.Model):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
target_vocab_size, rate=0.1):
super(Transformer, self).__init__()
self.encoder = Encoder(num_layers, d_model, num_heads, dff,
input_vocab_size, rate)
self.decoder = Decoder(num_layers, d_model, num_heads, dff,
target_vocab_size, rate)
self.final_layer = tf.keras.layers.Dense(target_vocab_size)
def call(self, inp, tar, training, enc_padding_mask,
look_ahead_mask, dec_padding_mask):
enc_output = self.encoder(inp, training, enc_padding_mask) # (batch_size, inp_seq_len, d_model)
# dec_output.shape == (batch_size, tar_seq_len, d_model)
dec_output, attention_weights = self.decoder(
tar, enc_output, training, look_ahead_mask, dec_padding_mask)
final_output = self.final_layer(dec_output) # (batch_size, tar_seq_len, target_vocab_size)
return final_output, attention_weights
sample_transformer = Transformer(
num_layers=2, d_model=512, num_heads=8, dff=2048,
input_vocab_size=8500, target_vocab_size=8000)
temp_input = tf.random.uniform((64, 62))
temp_target = tf.random.uniform((64, 26))
fn_out, _ = sample_transformer(temp_input, temp_target, training=False,
enc_padding_mask=None,
look_ahead_mask=None,
dec_padding_mask=None)
fn_out.shape # (batch_size, tar_seq_len, target_vocab_size)
```
## Set hyperparameters
To keep this example small and relatively fast, the values for *num_layers, d_model, and dff* have been reduced.
The values used in the base model of the transformer were: *num_layers=6*, *d_model=512*, *dff=2048*. See the [paper](https://arxiv.org/abs/1706.03762) for all the other versions of the transformer.
Note: By changing the values below, you can get the model that achieved state of the art on many tasks.
```
num_layers = 4
d_model = 128
dff = 512
num_heads = 8
input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1
```
## Optimizer
Use the Adam optimizer with a custom learning rate scheduler according to the formula in the [paper](https://arxiv.org/abs/1706.03762).
$$\Large{lrate = d_{model}^{-0.5} * min(step{\_}num^{-0.5}, step{\_}num * warmup{\_}steps^{-1.5})}$$
```
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, d_model, warmup_steps=4000):
super(CustomSchedule, self).__init__()
self.d_model = d_model
self.d_model = tf.cast(self.d_model, tf.float32)
self.warmup_steps = warmup_steps
def __call__(self, step):
arg1 = tf.math.rsqrt(step)
arg2 = step * (self.warmup_steps ** -1.5)
return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)
plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
```
## Loss and metrics
Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.
```
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
  # average only over the non-padding positions
  return tf.reduce_sum(loss_) / tf.reduce_sum(mask)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='train_accuracy')
```
## Training and checkpointing
```
transformer = Transformer(num_layers, d_model, num_heads, dff,
input_vocab_size, target_vocab_size, dropout_rate)
def create_masks(inp, tar):
# Encoder padding mask
enc_padding_mask = create_padding_mask(inp)
# Used in the 2nd attention block in the decoder.
# This padding mask is used to mask the encoder outputs.
dec_padding_mask = create_padding_mask(inp)
# Used in the 1st attention block in the decoder.
# It is used to pad and mask future tokens in the input received by
# the decoder.
look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
dec_target_padding_mask = create_padding_mask(tar)
combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
return enc_padding_mask, combined_mask, dec_padding_mask
```
Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every `n` epochs.
```
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(transformer=transformer,
optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
ckpt.restore(ckpt_manager.latest_checkpoint)
print ('Latest checkpoint restored!!')
```
The target is divided into `tar_inp` and `tar_real`. `tar_inp` is passed as an input to the decoder. `tar_real` is that same sequence shifted by 1: at each location in `tar_inp`, `tar_real` contains the next token that should be predicted.
For example, `sentence` = "SOS A lion in the jungle is sleeping EOS"
`tar_inp` = "SOS A lion in the jungle is sleeping"
`tar_real` = "A lion in the jungle is sleeping EOS"
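A quick sketch of this shift with made-up token ids (1 and 2 standing in for the SOS and EOS tokens):

```python
import numpy as np

tar = np.array([[1, 8, 6, 4, 9, 2]])  # hypothetical ids for "SOS ... EOS"
tar_inp = tar[:, :-1]   # drop the final token
tar_real = tar[:, 1:]   # drop the first token: the next-token targets
print(tar_inp.tolist(), tar_real.tolist())  # [[1, 8, 6, 4, 9]] [[8, 6, 4, 9, 2]]
```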
The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.
During training this example uses teacher-forcing (like in the [text generation tutorial](./text_generation.ipynb)). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.
As the transformer predicts each word, *self-attention* allows it to look at the previous words in the input sequence to better predict the next word.
To prevent the model from peeking at the expected output, the model uses a look-ahead mask.
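A minimal NumPy sketch of a look-ahead mask, following the usual convention (shared by the tutorial's `create_look_ahead_mask`) that 1 marks a position which must not be attended to:

```python
import numpy as np

def look_ahead_mask(size):
    # 1s above the diagonal: each position may only attend to itself and the past
    return np.triu(np.ones((size, size)), k=1)

print(look_ahead_mask(3))
# [[0. 1. 1.]
#  [0. 0. 1.]
#  [0. 0. 0.]]
```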
```
EPOCHS = 20
@tf.function
def train_step(inp, tar):
tar_inp = tar[:, :-1]
tar_real = tar[:, 1:]
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
with tf.GradientTape() as tape:
predictions, _ = transformer(inp, tar_inp,
True,
enc_padding_mask,
combined_mask,
dec_padding_mask)
loss = loss_function(tar_real, predictions)
gradients = tape.gradient(loss, transformer.trainable_variables)
optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
train_loss(loss)
train_accuracy(tar_real, predictions)
```
Portuguese is used as the input language and English is the target language.
```
for epoch in range(EPOCHS):
start = time.time()
train_loss.reset_states()
train_accuracy.reset_states()
# inp -> portuguese, tar -> english
for (batch, (inp, tar)) in enumerate(train_dataset):
train_step(inp, tar)
if batch % 500 == 0:
print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
epoch + 1, batch, train_loss.result(), train_accuracy.result()))
if (epoch + 1) % 5 == 0:
ckpt_save_path = ckpt_manager.save()
print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
ckpt_save_path))
print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1,
train_loss.result(),
train_accuracy.result()))
print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
```
## Evaluate
The following steps are used for evaluation:
* Encode the input sentence using the Portuguese tokenizer (`tokenizer_pt`), and add the start and end tokens so the input is equivalent to what the model was trained with. This is the encoder input.
* The decoder input is the `start token == tokenizer_en.vocab_size`.
* Calculate the padding masks and the look ahead masks.
* The `decoder` then outputs the predictions by looking at the `encoder output` and its own output (self-attention).
* Select the last word and calculate the argmax of that.
* Concatenate the predicted word to the decoder input and pass it to the decoder.
* In this approach, the decoder predicts the next word based on the previous words it predicted.
Note: The model used here has less capacity, to keep the example relatively fast, so the predictions may be less accurate. To reproduce the results in the paper, use the entire dataset and the base transformer model or Transformer-XL, by changing the hyperparameters above.
```
def evaluate(inp_sentence):
start_token = [tokenizer_pt.vocab_size]
end_token = [tokenizer_pt.vocab_size + 1]
# inp sentence is portuguese, hence adding the start and end token
inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
encoder_input = tf.expand_dims(inp_sentence, 0)
# as the target is english, the first word to the transformer should be the
# english start token.
decoder_input = [tokenizer_en.vocab_size]
output = tf.expand_dims(decoder_input, 0)
for i in range(MAX_LENGTH):
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
encoder_input, output)
# predictions.shape == (batch_size, seq_len, vocab_size)
predictions, attention_weights = transformer(encoder_input,
output,
False,
enc_padding_mask,
combined_mask,
dec_padding_mask)
# select the last word from the seq_len dimension
    predictions = predictions[:, -1:, :]  # (batch_size, 1, vocab_size)
predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
# return the result if the predicted_id is equal to the end token
if tf.equal(predicted_id, tokenizer_en.vocab_size+1):
return tf.squeeze(output, axis=0), attention_weights
# concatentate the predicted_id to the output which is given to the decoder
# as its input.
output = tf.concat([output, predicted_id], axis=-1)
return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
fig = plt.figure(figsize=(16, 8))
sentence = tokenizer_pt.encode(sentence)
attention = tf.squeeze(attention[layer], axis=0)
for head in range(attention.shape[0]):
ax = fig.add_subplot(2, 4, head+1)
# plot the attention weights
ax.matshow(attention[head][:-1, :], cmap='viridis')
fontdict = {'fontsize': 10}
ax.set_xticks(range(len(sentence)+2))
ax.set_yticks(range(len(result)))
ax.set_ylim(len(result)-1.5, -0.5)
ax.set_xticklabels(
['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'],
fontdict=fontdict, rotation=90)
ax.set_yticklabels([tokenizer_en.decode([i]) for i in result
if i < tokenizer_en.vocab_size],
fontdict=fontdict)
ax.set_xlabel('Head {}'.format(head+1))
plt.tight_layout()
plt.show()
def translate(sentence, plot=''):
result, attention_weights = evaluate(sentence)
predicted_sentence = tokenizer_en.decode([i for i in result
if i < tokenizer_en.vocab_size])
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(predicted_sentence))
if plot:
plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
```
You can pass different layers and attention blocks of the decoder to the `plot` parameter.
```
translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
```
## Summary
In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking and how to create a transformer.
Try using a different dataset to train the transformer. You can also create the base transformer or transformer XL by changing the hyperparameters above. You can also use the layers defined here to create [BERT](https://arxiv.org/abs/1810.04805) and train state of the art models. Furthermore, you can implement beam search to get better predictions.
### OkCupid DataSet
### Meeting 4, 28- 01- 2020
### Recap last meeting's decisions:
<ol>
<p>Meeting 3, 10- 12- 2019</p>
<li>Check all the preprocessing steps.</li>
<li>The dataset is extremely imbalanced.</li>
<li>Exclude class 1 and class 5 in order to make the dataset balanced.</li>
</ol>
### To discuss:
<ol>
<p></p>
<li> Decide about class 8</li>
<li> Readability</li>
</ol>
```
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
import seaborn as sns
df = pd.read_csv('../../data/processed/preprocessed_cupid.csv', usecols=['age', 'sex','#anwps', 'clean_text', 'isced', 'isced2'])
df = df.dropna(subset=['clean_text', 'isced'])
df.head()
#
```
# Preprocessing:
## 1- expand contractions
```
import contractions
# df['clean_textk'] = df['clean_text'].str.lower()
# def expand_contractions(text):
# expanded = contractions.fix(text)
# return expanded
# df['v'] = df.apply(lambda x: expand_contractions(x['clean_textk']), axis=1)
```
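Since the cell above is commented out, here is a minimal dictionary-based sketch of the same idea (the mapping is a tiny illustrative subset, not the full `contractions` package):

```python
CONTRACTIONS = {"don't": "do not", "i'm": "i am", "it's": "it is", "we've": "we have"}

def expand_contractions(text):
    # lowercase first, then replace any token found in the mapping
    return " ".join(CONTRACTIONS.get(w, w) for w in text.lower().split())

print(expand_contractions("I'm sure it's fine"))  # i am sure it is fine
```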
## 2- Lemmatization
```
# # from tqdm import tqdm, tqdm_notebook---- Lemmatization
# from tqdm._tqdm_notebook import tqdm_notebook
# tqdm_notebook.pandas()
# df = pd.read_csv('D:\projects\okcupid\data\processed/preprocessed_cupid.csv')
# df = df.dropna(subset=['clean_text', 'isced'])
# df["clean_text"] = df["clean_text"].progress_apply(lambda row: " ".join([w.lemma_ for w in nlp(row)]))
```
## Removing outliers
```
sns.boxplot(df["#anwps"]).set_title("Boxplot of the average number of words per sentence")
df = df[(df['#anwps']<35) & (df['#anwps']>3)]
sns.boxplot(df["#anwps"]).set_title("Boxplot of the average number of words per sentence")
```
## Imbalanced datasets
### In a dataset with highly imbalanced classes, a classifier that always "predicts" the most common class, without performing any analysis of the features, will still achieve a high accuracy.
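A quick numeric illustration of that point, with a hypothetical 90/10 class split:

```python
import numpy as np

y = np.array([0] * 90 + [1] * 10)     # 90% of samples in the majority class
majority_pred = np.zeros_like(y)      # a "classifier" that always predicts 0
accuracy = (majority_pred == y).mean()
print(accuracy)  # 0.9
```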
```
# df.isced.value_counts().plot(kind='bar', title= 'Count target')
outcome = pd.crosstab(index=df['isced'], columns='count')
outcome
cnt_isced = df['isced'].value_counts()
plt.figure(figsize=(12,4))
sns.barplot(cnt_isced.index, cnt_isced.values, alpha=0.8)
plt.ylabel('Number of Occurrences', fontsize=12)
plt.xlabel('isced', fontsize=12)
plt.xticks(rotation=90)
plt.show();
# # Remove classes 1, 5 and 8 from dataset
df = df[df['isced'].isin([7.0, 6.0, 3.0])]
# df.isced.value_counts().plot(kind='bar', title= 'Count target')
cnt_isced = df['isced'].value_counts()
plt.figure(figsize=(12,4))
sns.barplot(cnt_isced.index, cnt_isced.values, alpha=0.8)
plt.ylabel('Number of Occurrences', fontsize=12)
plt.xlabel('isced', fontsize=12)
plt.xticks(rotation=90)
plt.show();
def plot_conf(conf_matrix):
print('Confusion matrix:\n', conf_matrix)
    labels = ['3', '6', '7']
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(conf_matrix, cmap=plt.cm.Blues)
fig.colorbar(cax)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
plt.xlabel('Predicted class')
plt.ylabel('Actual class')
plt.show()
df = df.dropna(subset=['clean_text', 'isced'])
# df = df.dropna(subset=['stemmed', 'isced'])
corpus = df['clean_text']
# corpus = df['stemmed']
target = df["isced"]
# frequency encoding scikit-learn
# vectorizer = CountVectorizer(binary=False, ngram_range=(1, 2))
# vectors = vectorizer.fit_transform(corpus)
# X_train, X_val, y_train, y_val = train_test_split(vectors, target, train_size=0.75,
# test_size=0.25, random_state = 0)
X_train, X_val, y_train, y_val = train_test_split(corpus, target, train_size=0.75,
test_size=0.25, random_state = 0)
vectorizer = CountVectorizer(binary=False, ngram_range=(1, 2), lowercase=True)
# vectorizer = TfidfVectorizer(binary=False, ngram_range=(1, 2))
vectors_train = vectorizer.fit_transform(X_train)
vectors_val = vectorizer.transform(X_val)
X_train = vectors_train
X_val = vectors_val
import imblearn
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import RandomOverSampler
```
## Re-sampling Dataset
<ol>
<p>There are two ways to make our dataset balanced:</p>
<li>Under-sampling: remove samples from over-represented classes; use in the case of a huge dataset.</li>
<li>Over-sampling: add more samples from under-represented classes; use in the case of a small dataset.</li>
</ol>
<ol>
<p></p>
<img src="rep2_image/resampling.JPG">
</ol>
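What `RandomOverSampler` does can also be sketched with plain scikit-learn utilities: resample the minority class with replacement until both classes have the same count (toy data below):

```python
import numpy as np
from sklearn.utils import resample

X = np.arange(10).reshape(-1, 1)
y = np.array([0] * 8 + [1] * 2)       # 8 majority / 2 minority samples

X_min_up, y_min_up = resample(X[y == 1], y[y == 1],
                              replace=True, n_samples=8, random_state=0)
X_bal = np.vstack([X[y == 0], X_min_up])
y_bal = np.concatenate([y[y == 0], y_min_up])
print(np.bincount(y_bal))  # [8 8]
```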
```
# rus = RandomUnderSampler(random_state=42)
# X_res, y_res = rus.fit_resample(X_train, y_train)
# y_res.value_counts().plot(kind='bar', title= 'Count target')
ros = RandomOverSampler()
X_ros, y_ros = ros.fit_resample(X_train, y_train)
y_ros.value_counts().plot(kind='bar', title= 'Count target')
# cnt_isced = y_ros.value_counts()
# plt.figure(figsize=(12,4))
# sns.barplot(cnt_isced.index, cnt_isced.values, alpha=0.8)
# plt.ylabel('Number of Occurrences', fontsize=12)
# plt.xlabel('isced', fontsize=12)
# plt.xticks(rotation=90)
# plt.show();
# target_names = y_ros.unique()
```
## Naive Bayes
### - Extremely fast and simple classification algorithms
### - Suitable for very high-dimensional datasets
### - Few tunable parameters
### - Very useful as a quick-and-dirty baseline for a classification problem
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
model = make_pipeline(MultinomialNB())
model.fit(X_ros, y_ros)
# model.fit(X_train, y_train)
gnb_predictions = model.predict(X_val)
print("Final Accuracy for NB: %s"% accuracy_score(y_val, gnb_predictions))
cm = confusion_matrix(y_val, gnb_predictions)
plot_conf(cm)
from sklearn.metrics import classification_report
print(classification_report(y_val, gnb_predictions))
```
# Evaluation Metrics in Classification
<ol>
<p></p>
<img src="rep2_image/confiusion matrix1.jpg">
</ol>
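The metrics in the figure can be computed directly from a confusion matrix; a small sketch with made-up counts (rows = actual class, columns = predicted class):

```python
import numpy as np

cm = np.array([[50, 10],
               [ 5, 35]])

tp, fp, fn = cm[1, 1], cm[0, 1], cm[1, 0]
precision = tp / (tp + fp)                      # 35 / 45
recall = tp / (tp + fn)                         # 35 / 40
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 3), round(recall, 3), round(f1, 3))  # 0.778 0.875 0.824
```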
# Logistic Regression
```
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(random_state=0, multi_class='ovr', solver='liblinear').fit(X_ros, y_ros)
logistic_predictions = clf.predict(X_val)
print("Final Accuracy for LogisticRegression: %s"% accuracy_score(y_val, logistic_predictions))
cm = confusion_matrix(y_val, logistic_predictions)
plot_conf(cm)
print(classification_report(y_val, logistic_predictions))
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(random_state=0, multi_class='multinomial', solver='newton-cg').fit(X_ros, y_ros)
logistic_predictions = clf.predict(X_val)
print("Final Accuracy for LogisticRegression: %s"% accuracy_score(y_val, logistic_predictions))
cm = confusion_matrix(y_val, logistic_predictions)
plot_conf(cm)
print(classification_report(y_val, logistic_predictions))
```
## check for different settings:
<ol>
<p><b> checking for the effect of unigram/bigram/trigram and TF/TF-IDF on accuracy</b> </p>
<img src="rep2_image/pre_tabel1.jpg">
<p><b> checking for the effect of character decapitalization (lowercasing) on the accuracy</b> </p>
<img src="rep2_image/pre_tabel2.jpg">
</ol>
## check for different settings:
<ol>
<p><b> checking for the effect of stopwords and punctuation on the accuracy</b> </p>
<img src="rep2_image/pre_tabel3.jpg">
<p><b> checking for the effect of lemmatization on the accuracy</b> </p>
<img src="rep2_image/pre_tabel4.jpg">
</ol>
## TODO:
### 1- Cross validation
### 2- Grid search
### 3- merge classes 1, 3 and 5
### 4- undersample 6 to 10000
### 5- merge 7,8
### merge 1,3,5
### merge 6,7,8
### undersample ->10000
### No. misspelled words
### No. unique words
### Avg. word length -> total no. characters / no. words
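For the cross-validation and grid-search items, a hedged sketch of how they could be wired together for this task (the grid values and the toy corpus below are placeholders, not tuned choices; real use would pass `corpus` and `target`):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("vect", CountVectorizer()),
    ("clf", MultinomialNB()),
])
param_grid = {
    "vect__ngram_range": [(1, 1), (1, 2)],
    "clf__alpha": [0.1, 1.0],
}
# 3-fold cross-validated search over the small grid above
search = GridSearchCV(pipe, param_grid, cv=3, scoring="accuracy")

docs = ["good movie", "bad movie", "great film", "awful film", "nice plot", "poor plot"]
labels = [1, 0, 1, 0, 1, 0]
search.fit(docs, labels)
print(search.best_params_)
```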
# Train an SVM Classifier on MNIST Data
In this example we will load labels and pointers to the data into a Gota dataframe.
```
import (
"fmt"
mnist "github.com/petar/GoMNIST"
"github.com/kniren/gota/dataframe"
"github.com/kniren/gota/series"
"math/rand"
"os"
)
set, err := mnist.ReadSet("../datasets/mnist/images.gz", "../datasets/mnist/labels.gz")
func MNISTSetToDataframe(st *mnist.Set, maxExamples int) dataframe.DataFrame {
length := maxExamples
if length > len(st.Images) {
length = len(st.Images)
}
s := make([]string, length, length)
l := make([]int, length, length)
for i := 0; i < length; i++ {
s[i] = string(st.Images[i])
l[i] = int(st.Labels[i])
}
var df dataframe.DataFrame
images := series.Strings(s)
images.Name = "Image"
labels := series.Ints(l)
labels.Name = "Label"
df = dataframe.New(images, labels)
return df
}
set.Images[1]
df := MNISTSetToDataframe(set, 1000)
categories := []string{"tshirt", "trouser", "pullover", "dress", "coat", "sandal", "shirt", "shoe", "bag", "boot"}
func EqualsInt(s series.Series, to int) (*series.Series, error) {
eq := make([]int, s.Len(), s.Len())
ints, err := s.Int()
if err != nil {
return nil, err
}
for i := range ints {
if ints[i] == to {
eq[i] = 1
}
}
ret := series.Ints(eq)
return &ret, nil
}
func Split(df dataframe.DataFrame, trainFraction float64) (training dataframe.DataFrame, validation dataframe.DataFrame){
	perm := rand.Perm(df.Nrow())
	cutoff := int(trainFraction*float64(len(perm)))
training = df.Subset(perm[:cutoff])
validation = df.Subset(perm[cutoff:])
return training, validation
}
training, validation := Split(df, 0.75)
import (
"image"
"bytes"
"math"
"github.com/gonum/stat"
"github.com/gonum/integrate"
)
func NormalizeBytes(bs []byte) []float64 {
ret := make([]float64, len(bs), len(bs))
for i := range bs {
ret[i] = float64(bs[i])/255.
}
return ret
}
func ImageSeriesToFloats(df dataframe.DataFrame, col string) [][]float64 {
s := df.Col(col)
ret := make([][]float64, s.Len(), s.Len())
for i := 0; i < s.Len(); i++ {
b := []byte(s.Elem(i).String())
ret[i] = NormalizeBytes(b)
}
return ret
}
trainingImages := ImageSeriesToFloats(training, "Image")
validationImages := ImageSeriesToFloats(validation, "Image")
trainingOutputs := make([][]float64, len(trainingImages))
validationOutputs := make([][]float64, len(validationImages))
ltCol:= training.Col("Label")
for i := range trainingImages {
l := make([]float64, len(categories), len(categories))
val, _ := ltCol.Elem(i).Int()
l[val] = 1
trainingOutputs[i] = l
}
lvCol:= validation.Col("Label")
for i := range validationImages {
l := make([]float64, len(categories), len(categories))
val, _ := lvCol.Elem(i).Int()
l[val] = 1
validationOutputs[i] = l
}
```
## Deep Learning Classifier Using Go-Deep
```
import (
"github.com/patrikeh/go-deep"
"github.com/patrikeh/go-deep/training"
)
var (
trainingExamples []training.Example
validationExamples []training.Example
)
for i := range trainingImages {
trainingExamples = append(trainingExamples, training.Example{trainingImages[i], trainingOutputs[i]})
}
for i := range validationImages {
validationExamples = append(validationExamples, training.Example{validationImages[i], validationOutputs[i]})
}
network := deep.NewNeural(&deep.Config{
// Input size: 784 in our case (number of pixels in each image)
Inputs: len(trainingImages[0]),
// Two hidden layers of 128 neurons each, and an output layer 10 neurons (one for each class)
Layout: []int{128, 128, len(categories)},
// ReLU activation to introduce some additional non-linearity
Activation: deep.ActivationReLU,
// We need a multi-class model
Mode: deep.ModeMultiClass,
// Initialise the weights of each neuron using normally distributed random numbers
Weight: deep.NewNormal(0.5, 0.1),
Bias: true,
})
// Parameters: learning rate, momentum, alpha decay, nesterov
optimizer := training.NewSGD(0.006, 0.1, 1e-6, true)
trainer := training.NewTrainer(optimizer, 1)
trainer.Train(network, trainingExamples, validationExamples, 500) // training, validation, iterations
```
Calculate the accuracy on the validation dataset.
```
func MaxIndex(f []float64) int {
var (
curr float64
ix int = -1
)
for i := range f {
if f[i] > curr {
curr = f[i]
ix = i
}
}
return ix
}
validCorrect := 0.
for i := range validationImages {
prediction := network.Predict(validationImages[i])
if MaxIndex(prediction) == MaxIndex(validationOutputs[i]) {
validCorrect++
}
}
fmt.Printf("Validation Accuracy: %5.2f\n", validCorrect/float64(len(validationImages)))
```
## Random Forest Code for generating a baseline
```
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
import pickle
import numpy as np
```
### specify file to load data below
```
with open('../database/combined_dict_norm_all_examples.pickle','rb') as handle:
combined_dict = pickle.load(handle)
with open('../database/train_ind.pickle','rb') as handle:
train_ind = pickle.load(handle)
with open('../val_ind.pickle','rb') as handle:
val_ind = pickle.load(handle)
with open('../test_ind.pickle','rb') as handle:
test_ind = pickle.load(handle)
# `keys` must map loop indices to entries in combined_dict; here it is
# assumed (not shown in the original) to be the dictionary's key list.
keys = list(combined_dict.keys())
# Reading the data from the dictionary and creating a train variable for random forest
X = np.zeros((1,909*40))
y = np.zeros((1,50))
for i in range(len(train_ind)):
X = np.concatenate((X,combined_dict[keys[i]]['mel_spectrum'][0].flatten().reshape(1,909*40)),axis=0)
y = np.concatenate((y,combined_dict[keys[i]]['output'].reshape(1,50)))
if i%100==0:
print(i)
# remove the first blank row
X = X[1:,:]
y = y[1:,:]
print(X.shape,y.shape)
clf = RandomForestClassifier(n_estimators=100, max_depth=8,
random_state=0)
clf.fit(X, y)
```
### validation set
```
y_ = np.zeros((1,50)) # predictions on validation set
for i in range(len(val_ind)):
pred = clf.predict(combined_dict[keys[i]]['mel_spectrum'][0].flatten().reshape(1,909*40))
y_ = np.concatenate((y_,pred),axis=0)
y_ = y_[1:,:]
with open('random_forest_val_pred.pickle','wb') as handle:
pickle.dump(y_,handle)
true_labels = np.zeros((1,50))
for ind in range(len(val_ind)):
true_labels = np.concatenate((true_labels,combined_dict[keys[ind]]['output'].reshape(1,50)))
if ind%100==0:
print(ind)
true_labels = true_labels[1:,:]
print(true_labels.shape)
```
### Running an AUC test
```
auc_roc = metrics.roc_auc_score(true_labels,y_)
print(auc_roc)
```
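For intuition, `metrics.roc_auc_score` on 2-D label/score arrays averages per-class AUCs (macro by default). A single-class AUC can be computed directly from the rank statistic; this is an illustrative sketch (assumes no tied scores), not a replacement for the scikit-learn call above:

```python
def binary_auc(y_true, y_score):
    """AUC via the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs ranked in the correct order."""
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1 for p in pos for n in neg if p > n)
    return wins / (len(pos) * len(neg))

print(binary_auc([1, 0, 1, 0], [0.9, 0.2, 0.7, 0.4]))  # 1.0: perfect ranking
```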
### test set
```
y_ = np.zeros((1,50)) # predictions on test set
for i in range(len(test_ind)):
pred = clf.predict(combined_dict[keys[i]]['mel_spectrum'][0].flatten().reshape(1,909*40))
y_ = np.concatenate((y_,pred),axis=0)
y_ = y_[1:,:]
with open('random_forest_test_pred.pickle','wb') as handle:
pickle.dump(y_,handle)
true_labels = np.zeros((1,50))
for ind in range(len(test_ind)):
true_labels = np.concatenate((true_labels,combined_dict[keys[ind]]['output'].reshape(1,50)))
if ind%100==0:
print(ind)
true_labels = true_labels[1:,:]
print(true_labels.shape)
auc_roc = metrics.roc_auc_score(true_labels,y_)
print(auc_roc)
```
```
from collections import Counter
from functools import partial
import gc
from multiprocessing import Pool
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from scipy.spatial.distance import squareform
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from tqdm import trange
from make_embedding import *
%load_ext ipycache
embedding = load_pickle(r'data/specialized_embedding.pkl').toarray()
vectorizer = load_pickle(r'data/specialized_vectorizer.pkl')
words = np.array(vectorizer.get_feature_names())
del vectorizer
data = load_pickle(r'data/clean.pkl')
categories = load_json(r'categories-specialized.json')
subset = select_categories(data, categories)
del data
data = subset
gc.collect()
%%cache anova_specialized.pkl anova_pvals
_selectors = [data.domain == group for group in categories]
_PVALUE_ID = 1
anova_pvals = np.array([
f_oneway(*[
embedding[selector, i] for selector in _selectors
])[_PVALUE_ID]
for i in trange(embedding.shape[1])
])
def tukey(i):
return pairwise_tukeyhsd(embedding[:, i], data.domain)
_n = embedding.shape[1]
_ends = _n // 4, _n // 2, 3 * (_n // 4), _n
%%cache tukey_1_specialized.pkl _tukey1
with Pool(maxtasksperchild=5) as pool:
_tukey1 = list(pool.map(tukey, trange(_ends[0]), chunksize=1000))
%%cache tukey_2_specialized.pkl _tukey2
with Pool(maxtasksperchild=5) as pool:
_tukey2 = list(pool.map(tukey, trange(_ends[0], _ends[1]), chunksize=1000))
%%cache tukey_3_specialized.pkl _tukey3
with Pool(maxtasksperchild=5) as pool:
_tukey3 = list(pool.map(tukey, trange(_ends[1], _ends[2]), chunksize=1000))
%%cache tukey_4_specialized.pkl _tukey4
with Pool(maxtasksperchild=5) as pool:
_tukey4 = list(pool.map(tukey, trange(_ends[2], _ends[3]), chunksize=1000))
tukey_results = _tukey1 + _tukey2 + _tukey3 + _tukey4
anova_alpha = 0.001
tukey_alpha = 0.001
# Bonferroni correction: one ANOVA test was run per embedding column
anova_alpha /= embedding.shape[1]
np.sum(anova_pvals <= anova_alpha)
n_unique = data.domain.nunique()
expected_for_marker = n_unique - 1
markers = [
i for i, v in enumerate(tukey_results)
if anova_pvals[i] <= anova_alpha
and np.sum(v.pvalues <= tukey_alpha) == expected_for_marker
and (squareform(v.pvalues <= tukey_alpha).sum(axis=0) == expected_for_marker).any()
]
upregulated_category = [
categories[np.argmax(squareform(tukey_results[i].pvalues <= tukey_alpha).sum(axis=0) == expected_for_marker)]
for i in markers
]
upregulated_counts = Counter(upregulated_category)
len(markers)
tukey_results[markers[0]].summary()
words[markers[0]]
upregulated_counts
```
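As a sanity check on the multiple-testing logic above: the Bonferroni correction divides the family-wise alpha by the number of tests performed, so only very small p-values survive. A minimal sketch (the `m` here is illustrative, not the notebook's actual feature count):

```python
# Family-wise alpha, split across m independent hypothesis tests.
alpha = 0.001
m = 5000  # illustrative number of tested embedding columns
per_test_alpha = alpha / m  # 2e-7
pvals = [1e-9, 5e-7, 2e-4, 0.03]
significant = [p <= per_test_alpha for p in pvals]
print(significant)  # only the strongest p-value survives the correction
```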
# Word2Vec
**Learning Objectives**
1. Compile all steps into one function
2. Prepare training data for Word2Vec
3. Model and Training
4. Embedding lookup and analysis
## Introduction
Word2Vec is not a singular algorithm, rather, it is a family of model architectures and optimizations that can be used to learn word embeddings from large datasets. Embeddings learned through Word2Vec have proven to be successful on a variety of downstream natural language processing tasks.
Note: This notebook is based on [Efficient Estimation of Word Representations in Vector Space](https://arxiv.org/pdf/1301.3781.pdf) and
[Distributed
Representations of Words and Phrases and their Compositionality](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). It is not an exact implementation of the papers. Rather, it is intended to illustrate the key ideas.
These papers proposed two methods for learning representations of words:
* **Continuous Bag-of-Words Model** which predicts the middle word based on surrounding context words. The context consists of a few words before and after the current (middle) word. This architecture is called a bag-of-words model as the order of words in the context is not important.
* **Continuous Skip-gram Model** which predicts words within a certain range before and after the current word in the same sentence. A worked example of this is given below.
You'll use the skip-gram approach in this notebook. First, you'll explore skip-grams and other concepts using a single sentence for illustration. Next, you'll train your own Word2Vec model on a small dataset. This notebook also contains code to export the trained embeddings and visualize them in the [TensorFlow Embedding Projector](http://projector.tensorflow.org/).
Each learning objective will correspond to a _#TODO_ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/blob/master/courses/machine_learning/deepdive2/text_classification/solutions/word2vec.ipynb)
## Skip-gram and Negative Sampling
While a bag-of-words model predicts a word given the neighboring context, a skip-gram model predicts the context (or neighbors) of a word, given the word itself. The model is trained on skip-grams, which are n-grams that allow tokens to be skipped (see the diagram below for an example). The context of a word can be represented through a set of skip-gram pairs of `(target_word, context_word)` where `context_word` appears in the neighboring context of `target_word`.
Consider the following sentence of 8 words.
> The wide road shimmered in the hot sun.
The context words for each of the 8 words of this sentence are defined by a window size. The window size determines the span of words on either side of a `target_word` that can be considered a `context word`. Take a look at this table of skip-grams for target words based on different window sizes.
Note: For this tutorial, a window size of *n* implies n words on each side with a total window span of 2*n+1 words across a word.

The training objective of the skip-gram model is to maximize the probability of predicting context words given the target word. For a sequence of words *w<sub>1</sub>, w<sub>2</sub>, ... w<sub>T</sub>*, the objective can be written as the average log probability

where `c` is the size of the training context. The basic skip-gram formulation defines this probability using the softmax function.

where *v* and *v<sup>'</sup>* are target and context vector representations of words and *W* is the vocabulary size.
Computing the denominator of this formulation involves performing a full softmax over the entire vocabulary, which is often large (10<sup>5</sup>-10<sup>7</sup> terms).
The [Noise Contrastive Estimation](https://www.tensorflow.org/api_docs/python/tf/nn/nce_loss) loss function is an efficient approximation for a full softmax. With an objective to learn word embeddings instead of modelling the word distribution, NCE loss can be [simplified](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) to use negative sampling.
The simplified negative sampling objective for a target word is to distinguish the context word from *num_ns* negative samples drawn from noise distribution *P<sub>n</sub>(w)* of words. More precisely, an efficient approximation of full softmax over the vocabulary is, for a skip-gram pair, to pose the loss for a target word as a classification problem between the context word and *num_ns* negative samples.
A negative sample is defined as a `(target_word, context_word)` pair such that the `context_word` does not appear in the `window_size` neighborhood of the `target_word`. For the example sentence, these are a few potential negative samples (when `window_size` is 2).
```
(hot, shimmered)
(wide, hot)
(wide, sun)
```
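A candidate pair can be checked against this definition directly. The helper below is an illustrative sketch, not part of the tutorial's API:

```python
def is_negative_sample(tokens, target, context, window_size):
    """True if `context` never appears within `window_size` of any
    occurrence of `target` in the sentence."""
    for i, tok in enumerate(tokens):
        if tok != target:
            continue
        lo = max(0, i - window_size)
        hi = min(len(tokens), i + window_size + 1)
        if context in tokens[lo:hi]:
            return False
    return True

tokens = "the wide road shimmered in the hot sun".split()
print(is_negative_sample(tokens, "hot", "shimmered", 2))  # True
print(is_negative_sample(tokens, "hot", "sun", 2))        # False: within window
```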
In the next section, you'll generate skip-grams and negative samples for a single sentence. You'll also learn about subsampling techniques and train a classification model for positive and negative training examples later in the tutorial.
## Setup
```
# Use the chown command to change the ownership of repository to user.
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install -q tqdm
import io
import itertools
import numpy as np
import os
import re
import string
import tensorflow as tf
import tqdm
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Activation, Dense, Dot, Embedding, Flatten, GlobalAveragePooling1D, Reshape
from tensorflow.keras.layers.experimental.preprocessing import TextVectorization
```
This notebook uses TF2.x.
Please check your tensorflow version using the cell below.
```
# Show the currently installed version of TensorFlow
print("TensorFlow version: ",tf.version.VERSION)
SEED = 42
AUTOTUNE = tf.data.experimental.AUTOTUNE
```
### Vectorize an example sentence
Consider the following sentence:
`The wide road shimmered in the hot sun.`
Tokenize the sentence:
```
sentence = "The wide road shimmered in the hot sun"
tokens = list(sentence.lower().split())
print(len(tokens))
```
Create a vocabulary to save mappings from tokens to integer indices.
```
vocab, index = {}, 1 # start indexing from 1
vocab['<pad>'] = 0 # add a padding token
for token in tokens:
if token not in vocab:
vocab[token] = index
index += 1
vocab_size = len(vocab)
print(vocab)
```
Create an inverse vocabulary to save mappings from integer indices to tokens.
```
inverse_vocab = {index: token for token, index in vocab.items()}
print(inverse_vocab)
```
Vectorize your sentence.
```
example_sequence = [vocab[word] for word in tokens]
print(example_sequence)
```
### Generate skip-grams from one sentence
The `tf.keras.preprocessing.sequence` module provides useful functions that simplify data preparation for Word2Vec. You can use the `tf.keras.preprocessing.sequence.skipgrams` function to generate skip-gram pairs from the `example_sequence` with a given `window_size` from tokens in the range `[0, vocab_size)`.
Note: `negative_samples` is set to `0` here as batching negative samples generated by this function requires a bit of code. You will use another function to perform negative sampling in the next section.
```
window_size = 2
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
example_sequence,
vocabulary_size=vocab_size,
window_size=window_size,
negative_samples=0)
print(len(positive_skip_grams))
```
Take a look at a few positive skip-grams.
```
for target, context in positive_skip_grams[:5]:
print(f"({target}, {context}): ({inverse_vocab[target]}, {inverse_vocab[context]})")
```
### Negative sampling for one skip-gram
The `skipgrams` function returns all positive skip-gram pairs by sliding over a given window span. To produce additional skip-gram pairs that would serve as negative samples for training, you need to sample random words from the vocabulary. Use the `tf.random.log_uniform_candidate_sampler` function to sample `num_ns` negative samples for a given target word in a window. You can call the function on one skip-gram's target word and pass the context word as the true class to exclude it from being sampled.
Key point: *num_ns* (number of negative samples per positive context word) between [5, 20] is [shown to work](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) best for smaller datasets, while *num_ns* between [2,5] suffices for larger datasets.
```
# Get target and context words for one positive skip-gram.
target_word, context_word = positive_skip_grams[0]
# Set the number of negative samples per positive context.
num_ns = 4
context_class = tf.reshape(tf.constant(context_word, dtype="int64"), (1, 1))
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class, # class that should be sampled as 'positive'
num_true=1, # each positive skip-gram has 1 positive context class
num_sampled=num_ns, # number of negative context words to sample
unique=True, # all the negative samples should be unique
range_max=vocab_size, # pick index of the samples from [0, vocab_size)
seed=SEED, # seed for reproducibility
name="negative_sampling" # name of this operation
)
print(negative_sampling_candidates)
print([inverse_vocab[index.numpy()] for index in negative_sampling_candidates])
```
### Construct one training example
For a given positive `(target_word, context_word)` skip-gram, you now also have `num_ns` negative sampled context words that do not appear in the window size neighborhood of `target_word`. Batch the `1` positive `context_word` and `num_ns` negative context words into one tensor. This produces a set of positive skip-grams (labelled as `1`) and negative samples (labelled as `0`) for each target word.
```
# Add a dimension so you can use concatenation (on the next step).
negative_sampling_candidates = tf.expand_dims(negative_sampling_candidates, 1)
# Concat positive context word with negative sampled words.
context = tf.concat([context_class, negative_sampling_candidates], 0)
# Label first context word as 1 (positive) followed by num_ns 0s (negative).
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Reshape target to shape (1,) and context and label to (num_ns+1,).
target = tf.squeeze(target_word)
context = tf.squeeze(context)
label = tf.squeeze(label)
```
Take a look at the context and the corresponding labels for the target word from the skip-gram example above.
```
print(f"target_index : {target}")
print(f"target_word : {inverse_vocab[target_word]}")
print(f"context_indices : {context}")
print(f"context_words : {[inverse_vocab[c.numpy()] for c in context]}")
print(f"label : {label}")
```
A tuple of `(target, context, label)` tensors constitutes one training example for training your skip-gram negative sampling Word2Vec model. Notice that the target is of shape `(1,)` while the context and label are of shape `(1+num_ns,)`.
```
print(f"target :", target)
print(f"context :", context )
print(f"label :", label )
```
### Summary
This picture summarizes the procedure of generating training example from a sentence.

## Compile all steps into one function
### Skip-gram Sampling table
A larger dataset means a larger vocabulary, with a higher number of frequent words such as stopwords. Training examples obtained from sampling commonly occurring words (such as `the`, `is`, `on`) don't add much useful information for the model to learn from. [Mikolov et al.](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) suggest subsampling of frequent words as a helpful practice to improve embedding quality.
The `tf.keras.preprocessing.sequence.skipgrams` function accepts a sampling table argument to encode probabilities of sampling any token. You can use `tf.keras.preprocessing.sequence.make_sampling_table` to generate a word-frequency rank based probabilistic sampling table and pass it to the `skipgrams` function. Take a look at the sampling probabilities for a `vocab_size` of 10.
```
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(size=10)
print(sampling_table)
```
`sampling_table[i]` denotes the probability of sampling the i-th most common word in a dataset. The function assumes a [Zipf's distribution](https://en.wikipedia.org/wiki/Zipf%27s_law) of the word frequencies for sampling.
Key point: The `tf.random.log_uniform_candidate_sampler` already assumes that the vocabulary frequency follows a log-uniform (Zipf's) distribution. Using these distribution weighted sampling also helps approximate the Noise Contrastive Estimation (NCE) loss with simpler loss functions for training a negative sampling objective.
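For intuition, the original word2vec implementation keeps a token with probability `(sqrt(f/t) + 1) * t/f` for corpus frequency `f` and threshold `t`. This sketch assumes that formula; `make_sampling_table` instead approximates frequencies with a Zipf distribution, so its values will differ:

```python
import math

def keep_probability(word_frequency, threshold=1e-3):
    """Subsampling keep-probability: frequent words are mostly dropped,
    rare words are almost always kept (clipped to 1.0)."""
    z = word_frequency
    return min(1.0, (math.sqrt(z / threshold) + 1) * threshold / z)

print(round(keep_probability(0.05), 3))    # frequent word: well below 1.0
print(round(keep_probability(0.0005), 3))  # rare word: 1.0
```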
### Generate training data
Compile all the steps described above into a function that can be called on a list of vectorized sentences obtained from any text dataset. Notice that the sampling table is built before sampling skip-gram word pairs. You will use this function in the later sections.
```
# Generates skip-gram pairs with negative sampling for a list of sequences
# (int-encoded sentences) based on window size, number of negative samples
# and vocabulary size.
def generate_training_data(sequences, window_size, num_ns, vocab_size, seed):
# Elements of each training example are appended to these lists.
targets, contexts, labels = [], [], []
# Build the sampling table for vocab_size tokens.
sampling_table = tf.keras.preprocessing.sequence.make_sampling_table(vocab_size)
# Iterate over all sequences (sentences) in dataset.
for sequence in tqdm.tqdm(sequences):
# Generate positive skip-gram pairs for a sequence (sentence).
positive_skip_grams, _ = tf.keras.preprocessing.sequence.skipgrams(
sequence,
vocabulary_size=vocab_size,
sampling_table=sampling_table,
window_size=window_size,
negative_samples=0)
# Iterate over each positive skip-gram pair to produce training examples
# with positive context word and negative samples.
for target_word, context_word in positive_skip_grams:
context_class = tf.expand_dims(
tf.constant([context_word], dtype="int64"), 1)
negative_sampling_candidates, _, _ = tf.random.log_uniform_candidate_sampler(
true_classes=context_class,
num_true=1,
num_sampled=num_ns,
unique=True,
range_max=vocab_size,
seed=SEED,
name="negative_sampling")
# Build context and label vectors (for one target word)
negative_sampling_candidates = tf.expand_dims(
negative_sampling_candidates, 1)
context = tf.concat([context_class, negative_sampling_candidates], 0)
label = tf.constant([1] + [0]*num_ns, dtype="int64")
# Append each element from the training example to global lists.
targets.append(target_word)
contexts.append(context)
labels.append(label)
return targets, contexts, labels
```
## Prepare training data for Word2Vec
With an understanding of how to work with one sentence for a skip-gram negative sampling based Word2Vec model, you can proceed to generate training examples from a larger list of sentences!
### Download text corpus
You will use a text file of Shakespeare's writing for this tutorial. Change the following line to run this code on your own data.
```
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
```
Read text from the file and take a look at the first few lines.
```
with open(path_to_file) as f:
lines = f.read().splitlines()
for line in lines[:20]:
print(line)
```
Use the non-empty lines to construct a `tf.data.TextLineDataset` object for the next steps.
```
text_ds = tf.data.TextLineDataset(path_to_file).filter(lambda x: tf.cast(tf.strings.length(x), bool))
```
### Vectorize sentences from the corpus
You can use the `TextVectorization` layer to vectorize sentences from the corpus. Learn more about using this layer in this [Text Classification](https://www.tensorflow.org/tutorials/keras/text_classification) tutorial. Notice from the first few sentences above that the text needs to be in one case and punctuation needs to be removed. To do this, define a `custom_standardization` function that can be used in the `TextVectorization` layer.
```
# We create a custom standardization function to lowercase the text and
# remove punctuation.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
return tf.strings.regex_replace(lowercase,
'[%s]' % re.escape(string.punctuation), '')
# Define the vocabulary size and number of words in a sequence.
vocab_size = 4096
sequence_length = 10
# Use the text vectorization layer to normalize, split, and map strings to
# integers. Set output_sequence_length length to pad all samples to same length.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=vocab_size,
output_mode='int',
output_sequence_length=sequence_length)
```
Call `adapt` on the text dataset to create vocabulary.
```
vectorize_layer.adapt(text_ds.batch(1024))
```
Once the state of the layer has been adapted to represent the text corpus, the vocabulary can be accessed with `get_vocabulary()`. This function returns a list of all vocabulary tokens sorted (descending) by their frequency.
```
# Save the created vocabulary for reference.
inverse_vocab = vectorize_layer.get_vocabulary()
print(inverse_vocab[:20])
```
The `vectorize_layer` can now be used to generate vectors for each element in the `text_ds`.
```
def vectorize_text(text):
text = tf.expand_dims(text, -1)
return tf.squeeze(vectorize_layer(text))
# Vectorize the data in text_ds.
text_vector_ds = text_ds.batch(1024).prefetch(AUTOTUNE).map(vectorize_layer).unbatch()
```
### Obtain sequences from the dataset
You now have a `tf.data.Dataset` of integer encoded sentences. To prepare the dataset for training a Word2Vec model, flatten the dataset into a list of sentence vector sequences. This step is required as you would iterate over each sentence in the dataset to produce positive and negative examples.
Note: Since the `generate_training_data()` defined earlier uses non-TF python/numpy functions, you could also use a `tf.py_function` or `tf.numpy_function` with `tf.data.Dataset.map()`.
```
sequences = list(text_vector_ds.as_numpy_iterator())
print(len(sequences))
```
Take a look at a few examples from `sequences`.
```
for seq in sequences[:5]:
print(f"{seq} => {[inverse_vocab[i] for i in seq]}")
```
### Generate training examples from sequences
`sequences` is now a list of int encoded sentences. Just call the `generate_training_data()` function defined earlier to generate training examples for the Word2Vec model. To recap, the function iterates over each word from each sequence to collect positive and negative context words. The lengths of `targets`, `contexts`, and `labels` should be the same, representing the total number of training examples.
```
targets, contexts, labels = generate_training_data(
sequences=sequences,
window_size=2,
num_ns=4,
vocab_size=vocab_size,
seed=SEED)
print(len(targets), len(contexts), len(labels))
```
### Configure the dataset for performance
To perform efficient batching for the potentially large number of training examples, use the `tf.data.Dataset` API. After this step, you would have a `tf.data.Dataset` object of `(target_word, context_word), (label)` elements to train your Word2Vec model!
```
BATCH_SIZE = 1024
BUFFER_SIZE = 10000
dataset = tf.data.Dataset.from_tensor_slices(((targets, contexts), labels))
dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE, drop_remainder=True)
print(dataset)
```
Add `cache()` and `prefetch()` to improve performance.
```
dataset = dataset.cache().prefetch(buffer_size=AUTOTUNE)
print(dataset)
```
## Model and Training
The Word2Vec model can be implemented as a classifier to distinguish between true context words (from skip-grams) and false context words (obtained through negative sampling). You can perform a dot product between the embeddings of target and context words to obtain predictions for labels and compute loss against the true labels in the dataset.
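The scoring idea can be shown with plain lists before turning to Keras layers: the dot product of a target embedding with each candidate context embedding yields one logit per candidate, and training pushes the true context's logit up. The numbers below are made up purely for illustration:

```python
def dot(u, v):
    """Dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

target = [0.9, 0.1, -0.3, 0.5]
contexts = [[0.8, 0.2, -0.2, 0.4],   # true context: similar direction
            [-0.5, 0.9, 0.1, -0.7],  # negative sample
            [0.0, -0.4, 0.6, 0.1]]   # negative sample
logits = [dot(target, c) for c in contexts]
print(logits.index(max(logits)))  # 0: the true context scores highest
```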
### Subclassed Word2Vec Model
Use the [Keras Subclassing API](https://www.tensorflow.org/guide/keras/custom_layers_and_models) to define your Word2Vec model with the following layers:
* `target_embedding`: A `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a target word. The number of parameters in this layer are `(vocab_size * embedding_dim)`.
* `context_embedding`: Another `tf.keras.layers.Embedding` layer which looks up the embedding of a word when it appears as a context word. The number of parameters in this layer are the same as those in `target_embedding`, i.e. `(vocab_size * embedding_dim)`.
* `dots`: A `tf.keras.layers.Dot` layer that computes the dot product of target and context embeddings from a training pair.
* `flatten`: A `tf.keras.layers.Flatten` layer to flatten the results of `dots` layer into logits.
With the subclassed model, you can define the `call()` function that accepts `(target, context)` pairs which can then be passed into their corresponding embedding layer. Reshape the `context_embedding` to perform a dot product with `target_embedding` and return the flattened result.
Key point: The `target_embedding` and `context_embedding` layers can be shared as well. You could also use a concatenation of both embeddings as the final Word2Vec embedding.
```
class Word2Vec(Model):
def __init__(self, vocab_size, embedding_dim):
super(Word2Vec, self).__init__()
self.target_embedding = Embedding(vocab_size,
embedding_dim,
input_length=1,
name="w2v_embedding", )
self.context_embedding = Embedding(vocab_size,
embedding_dim,
input_length=num_ns+1)
self.dots = Dot(axes=(3,2))
self.flatten = Flatten()
def call(self, pair):
target, context = pair
we = self.target_embedding(target)
ce = self.context_embedding(context)
dots = self.dots([ce, we])
return self.flatten(dots)
```
### Define loss function and compile model
For simplicity, you can use `tf.keras.losses.CategoricalCrossentropy` as an alternative to the negative sampling loss. If you would like to write your own custom loss function, you can also do so as follows:
``` python
def custom_loss(x_logit, y_true):
return tf.nn.sigmoid_cross_entropy_with_logits(logits=x_logit, labels=y_true)
```
It's time to build your model! Instantiate your Word2Vec class with an embedding dimension of 128 (you could experiment with different values). Compile the model with the `tf.keras.optimizers.Adam` optimizer.
```
embedding_dim = 128
word2vec = Word2Vec(vocab_size, embedding_dim)
word2vec.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
```
Also define a callback to log training statistics for TensorBoard.
```
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="logs")
```
Train the model with `dataset` prepared above for some number of epochs.
```
word2vec.fit(dataset, epochs=20, callbacks=[tensorboard_callback])
```
TensorBoard now shows the Word2Vec model's accuracy and loss.
```
!tensorboard --bind_all --port=8081 --logdir logs
```
Run the following command in **Cloud Shell:**
<code>gcloud beta compute ssh --zone <instance-zone> <notebook-instance-name> --project <project-id> -- -L 8081:localhost:8081</code>
Make sure to replace `<instance-zone>`, `<notebook-instance-name>` and `<project-id>`.
In Cloud Shell, click *Web Preview* > *Change Port* and insert port number *8081*. Click *Change and Preview* to open the TensorBoard.

**To quit the TensorBoard, click Kernel > Interrupt kernel**.
## Embedding lookup and analysis
Obtain the weights from the model using `get_layer()` and `get_weights()`. The `get_vocabulary()` function provides the vocabulary to build a metadata file with one token per line.
```
weights = word2vec.get_layer('w2v_embedding').get_weights()[0]
vocab = vectorize_layer.get_vocabulary()
```
Create and save the vectors and metadata file.
```
out_v = io.open('vectors.tsv', 'w', encoding='utf-8')
out_m = io.open('metadata.tsv', 'w', encoding='utf-8')
for index, word in enumerate(vocab):
if index == 0: continue # skip 0, it's padding.
vec = weights[index]
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_m.write(word + "\n")
out_v.close()
out_m.close()
```
Download the `vectors.tsv` and `metadata.tsv` to analyze the obtained embeddings in the [Embedding Projector](https://projector.tensorflow.org/).
```
try:
from google.colab import files
files.download('vectors.tsv')
files.download('metadata.tsv')
except Exception as e:
pass
```
## Next steps
This tutorial has shown you how to implement a skip-gram Word2Vec model with negative sampling from scratch and visualize the obtained word embeddings.
* To learn more about word vectors and their mathematical representations, refer to these [notes](https://web.stanford.edu/class/cs224n/readings/cs224n-2019-notes01-wordvecs1.pdf).
* To learn more about advanced text processing, read the [Transformer model for language understanding](https://www.tensorflow.org/tutorials/text/transformer) tutorial.
* If you’re interested in pre-trained embedding models, you may also be interested in [Exploring the TF-Hub CORD-19 Swivel Embeddings](https://www.tensorflow.org/hub/tutorials/cord_19_embeddings_keras), or the [Multilingual Universal Sentence Encoder](https://www.tensorflow.org/hub/tutorials/cross_lingual_similarity_with_tf_hub_multilingual_universal_encoder)
* You may also like to train the model on a new dataset (there are many available in [TensorFlow Datasets](https://www.tensorflow.org/datasets)).
# Running a Federated Cycle with Synergos
In a federated learning system, there are many contributory participants, known as Worker nodes, which receive a global model to train on, with their own local dataset. The dataset does not leave the individual Worker nodes at any point, and remains private to the node.
The job to synchronize, orchestrate and initiate a federated learning cycle falls on a Trusted Third Party (TTP). The TTP pushes out the global model architecture and parameters for the individual nodes to train on, calling upon the required data based on tags, e.g. "training", which point to relevant data on the individual nodes. At no point does the TTP receive, copy or access the Worker nodes' local datasets.
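The orchestration described above can be caricatured in a few lines. This is a FedAvg-style sketch with assumed names for illustration only; Synergos' actual protocol, components and APIs are considerably richer and are not shown here:

```python
def local_update(global_weights, local_gradient, lr=0.1):
    """Each worker takes one gradient step on its own private data;
    only the updated weights leave the node, never the data."""
    return [w - lr * g for w, g in zip(global_weights, local_gradient)]

def aggregate(worker_weights):
    """The TTP averages the workers' updated weights into a new global model."""
    n = len(worker_weights)
    return [sum(ws) / n for ws in zip(*worker_weights)]

global_weights = [0.0, 0.0]
worker_gradients = [[1.0, -2.0], [3.0, 0.0]]  # computed locally, never shared
updates = [local_update(global_weights, g) for g in worker_gradients]
global_weights = aggregate(updates)
print(global_weights)  # averaged update, approximately [-0.2, 0.1]
```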

This tutorial aims to give you an understanding of how to use the synergos package to run a full federated learning cycle on a `Synergos Cluster` grid.
In a `Synergos Cluster` Grid, with the inclusion of a new director and queue component, you will be able to parallelize your jobs, where the number of concurrent jobs possible is equal to the number of sub-grids. This is done alongside all quality-of-life components supported in a `Synergos Plus` grid.
In this tutorial, you will go through the steps required by each participant (TTP and Worker), by simulating each of them locally with docker containers. Specifically, we will simulate a Director and 2 sub-grids, each of which has a TTP and 2 Workers, allowing us to perform 2 concurrent federated operations at any time.
At the end of this, we will have:
- Connected the participants
- Trained the model
- Evaluated the model
## About the Dataset and Task
The dataset used in this notebook is on leaf texture, comprising 64 predictor features and 1 target feature spanning 16 classes. The dataset is available in the same directory as this notebook. Within the dataset directory, `data1` is for Worker 1 and `data2` is for Worker 2. The task to be carried out is multi-class classification.
The dataset we have provided is a processed subset of the [original One-hundred plant species leaves dataset](https://archive.ics.uci.edu/ml/datasets/One-hundred+plant+species+leaves+data+set).
**Reference:**
- *Dua, D. and Graff, C. (2019). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.*
- *Charles Mallah, James Cope, James Orwell. Plant Leaf Classification Using Probabilistic Integration of Shape, Texture and Margin Features. Signal Processing, Pattern Recognition and Applications, in press. 2013.*
## Initiating the docker containers
Before we begin, we have to start the docker containers.
### A. Initialization via `Synergos Simulator`
In `Synergos Simulator`, a sandboxed environment has been created for you!
By running:
`docker-compose -f docker-compose-syncluster.yml up --build`
the following components will be started:
- Director
- Sub-Grid 1
- TTP_1 (Cluster)
- Worker_1_n1
- Worker_2_n1
- Sub-Grid 2
- TTP_2 (Cluster)
- Worker_1_n2
- Worker_2_n2
- Synergos UI
- Synergos Logger
- Synergos MLOps
- Synergos MQ
Refer to [this](https://github.com/aimakerspace/synergos_simulator) for all the pre-allocated host & port mappings.
### B. Manual Initialization
Firstly, pull the required docker images with the following commands:
1. Synergos Director:
`docker pull gcr.io/synergos-aisg/synergos_director:v0.1.0`
2. Synergos TTP (Cluster):
`docker pull gcr.io/synergos-aisg/synergos_ttp_cluster:v0.1.0`
3. Synergos Worker:
`docker pull gcr.io/synergos-aisg/synergos_worker:v0.1.0`
4. Synergos MLOps:
`docker pull gcr.io/synergos-aisg/synergos_mlops:v0.1.0`
5. Synergos MQ:
`docker pull gcr.io/synergos-aisg/synergos_mq:v0.1.0`
Next, in <u>separate</u> CLI terminals, run the following command(s):
**Note: For Windows users, it is advisable to use powershell or command prompt based interfaces**
#### Director
```
docker run --rm
-p 5000:5000
-v <directory leaf_textures/orchestrator_data>:/orchestrator/data
-v <directory leaf_textures/orchestrator_outputs>:/orchestrator/outputs
-v <directory leaf_textures/mlflow>:/mlflow
--name director
gcr.io/synergos-aisg/synergos_director:v0.1.0
--id ttp
--logging_variant graylog <IP Synergos Logger> <TTP port>
--queue rabbitmq <IP Synergos MQ> <AMQP port>
```
#### Sub-Grid 1
- **TTP_1**
```
docker run --rm
-p 6000:5000
-p 9020:8020
-v <directory leaf_textures/orchestrator_data>:/orchestrator/data
-v <directory leaf_textures/orchestrator_outputs>:/orchestrator/outputs
--name ttp_1
gcr.io/synergos-aisg/synergos_ttp_cluster:v0.1.0
--id ttp
--logging_variant graylog <IP Synergos Logger> <TTP port>
--queue rabbitmq <IP Synergos MQ> <AMQP port>
```
- **WORKER_1 Node 1**
```
docker run --rm
-p 5001:5000
-p 8021:8020
-v <directory leaf_textures/data1>:/worker/data
-v <directory leaf_textures/outputs_1>:/worker/outputs
--name worker_1_n1
gcr.io/synergos-aisg/synergos_worker:v0.1.0
--id worker_1_n1
--logging_variant graylog <IP Synergos Logger> <Worker port>
--queue rabbitmq <IP Synergos MQ> <AMQP port>
```
- **WORKER_2 Node 1**
```
docker run --rm
-p 5002:5000
-p 8022:8020
-v <directory leaf_textures/data2>:/worker/data
-v <directory leaf_textures/outputs_2>:/worker/outputs
--name worker_2_n1
gcr.io/synergos-aisg/synergos_worker:v0.1.0
--id worker_2_n1
--logging_variant graylog <IP Synergos Logger> <Worker port>
--queue rabbitmq <IP Synergos MQ> <AMQP port>
```
#### Sub-Grid 2
- **TTP_2**
```
docker run --rm
-p 7000:5000
-p 10020:8020
-v <directory leaf_textures/orchestrator_data>:/orchestrator/data
-v <directory leaf_textures/orchestrator_outputs>:/orchestrator/outputs
--name ttp_2
gcr.io/synergos-aisg/synergos_ttp_cluster:v0.1.0
--id ttp
--logging_variant graylog <IP Synergos Logger> <TTP port>
--queue rabbitmq <IP Synergos MQ> <AMQP port>
```
- **WORKER_1 Node 2**
```
docker run --rm
-p 5003:5000
-p 8023:8020
-v <directory leaf_textures/data1>:/worker/data
-v <directory leaf_textures/outputs_1>:/worker/outputs
--name worker_1_n2
gcr.io/synergos-aisg/synergos_worker:v0.1.0
--id worker_1_n2
--logging_variant graylog <IP Synergos Logger> <Worker port>
--queue rabbitmq <IP Synergos MQ> <AMQP port>
```
- **WORKER_2 Node 2**
```
docker run --rm
-p 5004:5000
-p 8024:8020
-v <directory leaf_textures/data2>:/worker/data
-v <directory leaf_textures/outputs_2>:/worker/outputs
--name worker_2_n2
gcr.io/synergos-aisg/synergos_worker:v0.1.0
--id worker_2_n2
--logging_variant graylog <IP Synergos Logger> <Worker port>
--queue rabbitmq <IP Synergos MQ> <AMQP port>
```
#### Synergos MLOps
```
docker run --rm
-p 5500:5500
-v <directory leaf_textures/mlflow>:/mlflow # <-- IMPT! Must be the same directory as the orchestrator's
--name synmlops
gcr.io/synergos-aisg/synergos_mlops:v0.1.0
```
#### Synergos MQ
```
docker run --rm
-p 15672:15672 # UI port
-p 5672:5672 # AMQP port
--name synergos_mq
gcr.io/synergos-aisg/synergos_mq:v0.1.0
```
#### Synergos UI
- Refer to these [instructions](https://github.com/aimakerspace/synergos_ui) to deploy `Synergos UI`.
#### Synergos Logger
- Refer to these [instructions](https://github.com/aimakerspace/synergos_logger) to deploy `Synergos Logger`.
Once each container is ready, its terminal should show a REST server running on http://0.0.0.0:5000 inside the container.
You are now ready for the next step.
## Configurations
### A. Configuring `Synergos Simulator`
All hosts & ports have already been pre-allocated!
Refer to [this](https://github.com/aimakerspace/synergos_simulator) for all the pre-allocated host & port mappings.
### B. Configuring your manual setup
In a new terminal, run `docker inspect bridge` and find the IPv4Address for each container. Ideally, the containers should have the following addresses:
- director address: `172.17.0.2`
- Sub-Grid 1
- ttp_1 address: `172.17.0.3`
- worker_1_n1 address: `172.17.0.4`
- worker_2_n1 address: `172.17.0.5`
- Sub-Grid 2
- ttp_2 address: `172.17.0.6`
- worker_1_n2 address: `172.17.0.7`
- worker_2_n2 address: `172.17.0.8`
- UI address: `172.17.0.9`
- Logger address: `172.17.0.14`
- MLOps address: `172.17.0.15`
- MQ address: `172.17.0.16`
If not, simply note down the relevant IP address of each docker container.
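As an alternative to reading the addresses off manually, the `docker inspect bridge` output is JSON and can be parsed. A minimal sketch follows; the sample payload below is a trimmed, illustrative version of the real output:

```python
import json

# Trimmed illustration of the structure `docker inspect bridge` returns
sample = json.dumps([{
    "Containers": {
        "abc123": {"Name": "director", "IPv4Address": "172.17.0.2/16"},
        "def456": {"Name": "worker_1_n1", "IPv4Address": "172.17.0.4/16"},
    }
}])

def container_ips(inspect_json):
    """Map container name -> IPv4 address, with the CIDR suffix stripped."""
    containers = json.loads(inspect_json)[0]["Containers"]
    return {c["Name"]: c["IPv4Address"].split("/")[0] for c in containers.values()}

print(container_ips(sample))
# In practice, feed it the real output, e.g.:
#   inspect_json = subprocess.check_output(["docker", "inspect", "bridge"])
```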
Run the following cells below.
**Note: For Windows users, `host` should be Docker Desktop VM's IP. Follow [this](https://stackoverflow.com/questions/58073936/how-to-get-ip-address-of-docker-desktop-vm) on instructions to find IP**
```
import time
from synergos import Driver
host = "172.20.0.2"
port = 5000
# Initiate Driver
driver = Driver(host=host, port=port)
```
## Phase 1: Registration
Submitting Orchestrator & Participant metadata
#### 1A. Orchestrator creates a collaboration
```
collab_task = driver.collaborations
collab_task.configure_logger(
host="172.20.0.14",
port=9000,
sysmetrics_port=9100,
director_port=9200,
ttp_port=9300,
worker_port=9400,
ui_port=9000,
secure=False
)
collab_task.configure_mlops(
host="172.20.0.15",
port=5500,
ui_port=5500,
secure=False
)
collab_task.configure_mq(
host="172.20.0.16",
port=5672,
ui_port=15672,
secure=False
)
collab_task.create('leaf_textures_syncluster_collaboration')
```
#### 1B. Orchestrator creates a project
```
driver.projects.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
action="classify",
incentives={
'tier_1': [],
'tier_2': [],
}
)
```
#### 1C. Orchestrator creates an experiment
```
driver.experiments.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
model=[
{
"activation": "sigmoid",
"is_input": True,
"l_type": "Linear",
"structure": {
"bias": True,
"in_features": 64,
"out_features": 32
}
},
{
"activation": "softmax",
"is_input": False,
"l_type": "Linear",
"structure": {
"bias": True,
"in_features": 32,
"out_features": 16
}
}
]
)
```
#### 1D. Orchestrator creates a run
```
driver.runs.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
run_id="leaf_textures_syncluster_run",
rounds=2,
epochs=1,
base_lr=0.0005,
max_lr=0.005,
criterion="NLLLoss"
)
```
#### 1E. Participants register their servers' configurations and roles
```
participant_resp_1 = driver.participants.create(
participant_id="worker_1",
)
display(participant_resp_1)
participant_resp_2 = driver.participants.create(
participant_id="worker_2",
)
display(participant_resp_2)
registration_task = driver.registrations
# Add and register worker_1 node
registration_task.add_node(
host='172.20.0.4',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.add_node(
host='172.20.0.7',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
participant_id="worker_1",
role="host"
)
registration_task = driver.registrations
# Add and register worker_2 node
registration_task.add_node(
host='172.20.0.5',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.add_node(
host='172.20.0.8',
port=8020,
f_port=5000,
log_msgs=True,
verbose=True
)
registration_task.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
participant_id="worker_2",
role="guest"
)
```
#### 1F. Participants register their tags for a specific project
```
# Worker 1 declares their data tags
driver.tags.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
participant_id="worker_1",
train=[["leaf_textures", "dataset", "data1", "train"]],
evaluate=[["leaf_textures", "dataset", "data1", "evaluate"]]
)
# Worker 2 declares their data tags
driver.tags.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
participant_id="worker_2",
train=[["leaf_textures", "dataset", "data2", "train"]],
evaluate=[["leaf_textures", "dataset", "data2", "evaluate"]]
)
stop!
```
## Phase 2: Alignment, Training & Optimisation
#### 2A. Perform multiple feature alignment to dynamically configure datasets and models for cross-grid compatibility
```
driver.alignments.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
verbose=False,
log_msg=False
)
# Important! MUST wait for alignment process to first complete before proceeding on
while True:
align_resp = driver.alignments.read(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project"
)
align_data = align_resp.get('data')
if align_data:
display(align_resp)
break
time.sleep(5)
```
#### 2B. Trigger training across the federated grid
```
model_resp = driver.models.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
run_id="leaf_textures_syncluster_run",
log_msg=False,
verbose=False
)
display(model_resp)
# Important! MUST wait for training process to first complete before proceeding on
while True:
train_resp = driver.models.read(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
run_id="leaf_textures_syncluster_run",
)
train_data = train_resp.get('data')
if train_data:
display(train_data)
break
time.sleep(5)
```
#### 2C. Perform hyperparameter tuning once an ideal model is found (experimental)
```
optim_parameters = {
'search_space': {
"rounds": {"_type": "choice", "_value": [1, 2]},
"epochs": {"_type": "choice", "_value": [1, 2]},
"batch_size": {"_type": "choice", "_value": [32, 64]},
"lr": {"_type": "choice", "_value": [0.0001, 0.1]},
"criterion": {"_type": "choice", "_value": ["NLLLoss"]},
"mu": {"_type": "uniform", "_value": [0.0, 1.0]},
"base_lr": {"_type": "choice", "_value": [0.00005]},
"max_lr": {"_type": "choice", "_value": [0.2]}
},
'backend': "tune",
'optimize_mode': "max",
'metric': "accuracy",
'trial_concurrency': 1,
'max_exec_duration': "1h",
'max_trial_num': 2,
'max_concurrent': 1,
'is_remote': True,
'use_annotation': True,
'auto_align': True,
'dockerised': True,
'verbose': True,
'log_msgs': True
}
driver.optimizations.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
**optim_parameters
)
driver.optimizations.read(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment"
)
```
## Phase 3: Evaluation
Validation & Predictions
#### 3A. Perform validation(s) of combination(s)
```
# Orchestrator performs post-mortem validation
driver.validations.create(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
run_id="leaf_textures_syncluster_run",
log_msg=False,
verbose=False
)
# Run this cell again after validation has completed to retrieve your validation statistics
# NOTE: You do not need to wait for validation/prediction requests to complete to proceed
driver.validations.read(
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
run_id="leaf_textures_syncluster_run",
)
```
#### 3B. Perform prediction(s) of combination(s)
```
# Worker 1 requests for inferences
driver.predictions.create(
tags={"leaf_textures_syncluster_project": [["leaf_textures", "dataset", "data1", "predict"]]},
participant_id="worker_1",
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
run_id="leaf_textures_syncluster_run",
log_msg=False,
verbose=False
)
# Run this cell again after prediction has completed to retrieve your predictions for worker 1
# NOTE: You do not need to wait for validation/prediction requests to complete to proceed
driver.predictions.read(
participant_id="worker_1",
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
run_id="leaf_textures_syncluster_run",
)
# Worker 2 requests for inferences
driver.predictions.create(
tags={"leaf_textures_syncluster_project": [["leaf_textures", "dataset", "data2", "predict"]]},
participant_id="worker_2",
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
run_id="leaf_textures_syncluster_run",
log_msg=False,
verbose=False
)
# Run this cell again after prediction has completed to retrieve your predictions for worker 2
# NOTE: You do not need to wait for validation/prediction requests to complete to proceed
driver.predictions.read(
participant_id="worker_2",
collab_id="leaf_textures_syncluster_collaboration",
project_id="leaf_textures_syncluster_project",
expt_id="leaf_textures_syncluster_experiment",
run_id="leaf_textures_syncluster_run",
)
```
## Simple k-means clustering algorithm for 2D data
This is simply meant to show the inner workings of the k-means clustering unsupervised technique with a simple dataset of two features and two classes for clear visualization.
```
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
import random
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
iris = datasets.load_iris()
X = iris.data[:, :2] # keep only the first two features
y = iris.target
# Saving only two classes within the dataset
X = X[:100]
y = y[:100]
y[0]
X[0]
# Visualizing the data - it seems that this data could be clustered into coherent groups
plt.scatter(X[:,0],X[:,1],c=y,cmap="cool")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.show()
```
### To illustrate the clustering process, I will plot the cluster centroids along with the data
- I will be using random initialization where the two cluster centroids will each be randomly initialized to an individual data point.
- The process of learning will involve:
- Stepping through all training examples and assigning each to the closest centroid, as measured by squared distance
- Updating each centroid to the average of the points assigned to it
- This process will continue iteratively until the centroids are at their optimum position.
#### Optimization Objective:
$$ J = \frac{1}{m}\sum_{i=1}^{m} \left\lVert x^{(i)} - \mu_{c^{(i)}} \right\rVert^2 $$
- $m$: number of training examples
- $x^{(i)}$: training example $i$
- $\mu_{c^{(i)}}$: the centroid assigned to training example $i$
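To make the objective concrete, here is a small numeric check of the cost on four hand-picked points (toy values chosen so the answer is easy to verify by hand):

```python
import numpy as np

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
assignments = np.array([0, 0, 1, 1])             # c(i): centroid index for each point
centroids = np.array([[0.0, 0.5], [5.0, 5.5]])   # mu_k

# J = (1/m) * sum_i ||x_i - mu_{c(i)}||^2
m = len(X)
J = np.sum(np.linalg.norm(X - centroids[assignments], axis=1) ** 2) / m
print(J)  # every point is 0.5 from its centroid, so J = 0.25
```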
```
# Defining the random initialization of the centroids for the model
# Returns the initial centroid positions, each set to a randomly chosen data point
def random_init(num_examples, num_clusters, X):
init_positions = []
for i in range(num_clusters): # initializing all centroids
cent_pos = random.randint(0,num_examples-1) # index
cent_pos = X[cent_pos] # actual centroid value
init_positions.append(cent_pos)
return init_positions
# Calculates the average of the points that are assigned to the centroid and updates centroids
def update_centroid_position(cent_one,cent_two):
"""
cents: list of two centroids of shape (1,2)
cent_one, cents_two: data points assigned to both centroids
"""
cent_one = np.array(cent_one) # change list of data to array
cent_two = np.array(cent_two)
c_one_x = np.average(cent_one[:,0])
c_one_y = np.average(cent_one[:,1])
c_two_x = np.average(cent_two[:,0])
c_two_y = np.average(cent_two[:,1])
cents = [np.array([c_one_x,c_one_y]), np.array([c_two_x,c_two_y])]
return cents
# Calculate the squared distance between centroid and example
def squared_distance(example, centroid):
dist = np.sum((example - centroid)**2)
return float(dist) # plain Python scalar (np.asscalar is deprecated)
# Assigns each data point to the closest centroid and returns the points assigned to each centroid
def cluster_assignment(X,cents):
cent_one = []
cent_two = []
for i in range(X.shape[0]):
dist_cent_one = squared_distance(X[i,:], cents[0])
dist_cent_two = squared_distance(X[i,:], cents[1])
if dist_cent_one < dist_cent_two:
cent_one.append(X[i,:]) # closer to centroid one
else:
cent_two.append(X[i,:]) # closer to centroid two
return cent_one,cent_two
# Checks the break condition: whether the centroids have converged
# Convergence is detected by checking that the centroids have (almost) stopped moving
def check_break(old_cents,new_cents):
"""
old_cents, new_cents: [cent1, cent2]
cent1.shape, cent2.shape = (1,2)
"""
sum_old_cents = np.sum(old_cents[0]+old_cents[1])
sum_new_cents = np.sum(new_cents[0]+new_cents[1])
if abs(sum_old_cents - sum_new_cents) < 0.000001:
return True
else:
return False
# Prints out the current data assigned to a centroid
# This will show how the centroids are updated and how the data assigned to them changes
def print_centroid(cent_one,cent_two,cents):
one = [1 for i in range(len(cent_one))]
two = [0 for i in range(len(cent_two))]
cent_one = np.array(cent_one)
cent_two = np.array(cent_two)
cents = np.array(cents)
plt.scatter(cent_one[:,0],cent_one[:,1],c="blue")
plt.scatter(cent_two[:,0],cent_two[:,1],c="green")
plt.scatter(cents[:,0], cents[:,1],s=90,c="black")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.show()
# Main function to run the clustering model
def clustering(X,k=2):
"""
X.shape: (100,2)
y.shape: (100,)
k = 2: number of clusters
"""
cents = random_init(X.shape[0],k, X) # list of two centroids (each shape = (1,2))
old_cents = cents
while(1): # Break when minimum is reached
# Assigning datapoints to closest centroid
cent_one,cent_two = cluster_assignment(X,cents)
print_centroid(cent_one,cent_two,cents)
# Update Centroid position
cents = update_centroid_position(cent_one,cent_two)
# Check if reached minimum
if check_break(old_cents,cents):
break
old_cents = cents
return cents,cent_one,cent_two,old_cents
cents,cent_one,cent_two,old_cents = clustering(X)
cents = np.array(cents)
cents.shape
# Visualizing the final Centroids found
plt.scatter(X[:,0],X[:,1],c=y,cmap="cool")
plt.scatter(cents[:,0], cents[:,1],s=90,c="black")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.show()
# Visualizing the Centroids found and their corresponding assigned data points
one = [1 for i in range(len(cent_one))]
two = [0 for i in range(len(cent_two))]
cent_one = np.array(cent_one)
cent_two = np.array(cent_two)
plt.scatter(cent_one[:,0],cent_one[:,1],c="blue")
plt.scatter(cent_two[:,0],cent_two[:,1],c="green")
plt.scatter(cents[:,0], cents[:,1],s=90,c="black")
plt.xlabel("Feature 1")
plt.ylabel("Feature 2")
plt.show()
```
## NMF = Not Monday night Football !
```
import pandas as pd
import numpy as np
from sklearn.decomposition import NMF
import random
from random import randint
from matplotlib import pyplot as plt
%matplotlib inline
```
# User Input
```
#importing ratings and movies csv files
PATH2 = "ratings.csv"
PATH3 = "movies.csv"
ratings, movies_ind = pd.read_csv(PATH2), pd.read_csv(PATH3)
# create an empty array the length of number of movies in system
user_ratings = np.zeros(9724)
#format ratings dataframe
del ratings['timestamp']
ratings.set_index(['userId','movieId'], inplace=True)
ratings = ratings.unstack(0)
ratings_count = ratings.count(axis=1) #count the number of ratings for each movie as a measure of popularity
top = pd.DataFrame(ratings_count.sort_values(ascending = False).head(10)) #create a dataframe of the top 10 most popular movies
top.reset_index(inplace=True)
movies_ind.set_index('movieId',inplace=True)
top_movies_g = movies_ind.loc[top['movieId']]['title'].values
```
## Of the following movies, rate all that you have seen on a scale of 1-5.
## If you have not seen a movie, rate 0.
```
#creates a list of ratings for the prompted movies
user_input = []
for i in range(0,10):
answer = int(input("How would you rate " + str(top_movies_g[i])))
if answer > 5:
answer = 5
elif answer < 0:
answer = 0
user_input.append(answer)
movies_ind.reset_index(inplace=True)
top_movies_index = movies_ind.index[top['movieId']].values
# inputs user rating into large array (9,000+ count) at appropriate indexes
for i in range(0,10):
user_ratings[top_movies_index[i]] = user_input[i]
```
# NMF Modeling
```
ratings = ratings.fillna(0)
ratings = ratings["rating"]
ratings = ratings.transpose()
ratings.head(2)
R = pd.DataFrame(ratings)
# model assumes R ~ PQ'
model = NMF(n_components=5, init='random', random_state=10)
model.fit(R)
P = model.components_ # Movie feature
Q = model.transform(R) # User features
query = user_ratings.reshape(1,-1)
t=model.transform(query)
# predicted movie ratings for the input user
outcome = np.dot(t,P)
outcome=pd.DataFrame(outcome)
outcome = outcome.transpose()
outcome['movieId'] = movies_ind['movieId']
outcome = outcome.rename(columns={0:'rating'})
# top 100 ratings from predictions list
top = outcome.sort_values(by='rating',ascending=False).head(100)
```
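Before reading too much into the recommendations, it can help to sanity-check the factorization on a tiny matrix whose structure is known. A sketch using the same `sklearn` API as above (toy data only, not the MovieLens ratings):

```python
import numpy as np
from sklearn.decomposition import NMF

# A small non-negative "ratings" matrix with an obvious two-block structure
R_toy = np.array([
    [5.0, 4.0, 0.0, 0.0],
    [4.0, 5.0, 0.0, 0.0],
    [0.0, 0.0, 5.0, 4.0],
    [0.0, 0.0, 4.0, 5.0],
])
model = NMF(n_components=2, init='random', random_state=10, max_iter=500)
W = model.fit_transform(R_toy)  # user factors, shape (4, 2)
H = model.components_           # item factors, shape (2, 4)

# The product W @ H should recover the two-block structure of R_toy
print(np.round(W @ H, 1))
```

With two components, each factor roughly captures one block of users/items, which is exactly the latent-taste interpretation the recommender relies on.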
# Selecting a Movie
```
# collects titles of the top movie predictions
top_movie_recs = movies_ind.loc[top['movieId']]['title'].values
```
# Selecting a Movie with Genre Input
```
#importing genres
PATHG = "movie_genres_years.csv"
movie_genres = pd.read_csv(PATHG)
# creates list of genres
genres = movie_genres.columns.values[3:22]
# dictionary with keys equal to genre
b,c = {}, {}
for x in genres:
key = x
value = ''
b[key],c[key] = value, value
# fills keys with list of movies that belong to respective genre
for x in genres:
li = []
for id in top['movieId']:
if id in list(movie_genres.loc[movie_genres[x] == 1]['movieId']):
li.append(movies_ind[movies_ind['movieId']==id]['title'].values)
c[x] = li
#fills keys with random choice in the list of films within a genre
for x in genres:
if len(c[x])>0:
b[x] = c[x][randint(0, len(c[x])-1)][0]
else:
b[x] = ""
# add an option for not choosing a genre
genres_for_q = np.append(genres, 'none')
from fuzzywuzzy import process
genre_answer = process.extractOne(input("What genre of film would you like to watch?"),genres_for_q)
#picks a top movie of the selected genre
for x in genres:
if genre_answer[0] == x:
if len(b[x]) == 0:
print('No ' + x + ' recommendations')
else:
print('We recommend ' + b[x])
# if they don't want a specific genre
if genre_answer[0] == 'none':
Select = top_movie_recs[randint(0, 4)]
print('We recommend ' + Select)
```
```
# OpenCV version 3.3.x, Python 2.7
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
from moviepy.editor import VideoFileClip
#image = mpimg.imread('test_images/solidWhiteRight.jpg')
image = mpimg.imread('test_images/solidWhiteCurve.jpg')
#image = mpimg.imread('test_images/solidYellowCurve2.jpg')
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image)
def resizeimage(img):
return cv2.resize(img, (960 , 540), interpolation = cv2.INTER_AREA)
def grayscale(img):
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
def canny(img, low_threshold, high_threshold):
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, [vertices], ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=10):
    left_slope = []
    right_slope = []
    left_intercept = []
    right_intercept = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            deltay = float(y2 - y1)
            deltax = float(x2 - x1)
            if deltax == 0:
                continue  # skip vertical segments to avoid division by zero
            slope_line = deltay / deltax
            # y = m*x + c  =>  c = y - m*x
            intercept = y1 - slope_line * x1
            if slope_line < 0:
                left_slope.append(slope_line)
                left_intercept.append(intercept)
            elif slope_line > 0:
                right_slope.append(slope_line)
                right_intercept.append(intercept)
    left_avg_slope = np.mean(left_slope)
    right_avg_slope = np.mean(right_slope)
    right_in = np.mean(right_intercept)
    left_in = np.mean(left_intercept)
    top_y = 350
    bot_y = 540
    # x = (y - c)/m
    top_right_x = int(abs((top_y - right_in) / right_avg_slope))
    bot_right_x = int(abs((bot_y - right_in) / right_avg_slope))
    top_left_x = int(abs((top_y - left_in) / left_avg_slope))
    bot_left_x = int(abs((bot_y - left_in) / left_avg_slope))
    cv2.line(img, (top_left_x, top_y), (bot_left_x, bot_y), color, thickness)
    cv2.line(img, (top_right_x, top_y), (bot_right_x, bot_y), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
def weighted_img(img, initial_img, alpha, beta, gamma):
    return cv2.addWeighted(initial_img, alpha, img, beta, gamma)
def process_image(image):
gray_image = grayscale(image)
#resize_image = resizeimage(gray_image)
#intial_image = resizeimage(image)
canny_image = canny(gray_image, 200, 220)
blur_image = gaussian_blur(canny_image, 7)
vertices = np.array([(100, 539), (900, 539), (600, 350), (400, 350)], dtype=np.int32)
#vertices = np.array([(100, 720), (1200, 720), (800, 400), (200, 400)], dtype=np.int32)
ROI_image = region_of_interest(blur_image, vertices)
rho = 1
theta = np.pi/180
threshold = 70
min_line_length = 10
max_line_gap = 1
hough_image = hough_lines(ROI_image, rho, theta, threshold, min_line_length, max_line_gap)
weight_image = weighted_img(hough_image, image, 0.8, 1, 0)
#weight_image = weighted_img(hough_image, intial_image, 0.8, 1, 0)
return weight_image
#yellow_output = 'test_videos_output/challenge.mp4'
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
#yellow_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
#clip2 = VideoFileClip('test_videos/challenge.mp4')
#clip2 = VideoFileClip('test_videos/solidWhiteRight.mp4')
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
yellow_clip.write_videofile(yellow_output, audio=False)
```
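The core of `draw_lines` above is the averaging-and-extrapolation step: average the slopes and intercepts of the detected segments, then solve x = (y - c)/m at the top and bottom of the region of interest. That step can be checked in isolation (the segment coordinates below are toy values, not Hough output):

```python
import numpy as np

# Toy segments (x1, y1, x2, y2), all lying on the line y = -0.7*x + 600
segments = [(100, 530, 200, 460), (220, 446, 300, 390), (150, 495, 250, 425)]

slopes, intercepts = [], []
for x1, y1, x2, y2 in segments:
    m = (y2 - y1) / (x2 - x1)
    slopes.append(m)
    intercepts.append(y1 - m * x1)  # c in y = m*x + c

m_avg, c_avg = np.mean(slopes), np.mean(intercepts)
top_y, bot_y = 350, 540            # same region-of-interest bounds as the pipeline
top_x = int((top_y - c_avg) / m_avg)
bot_x = int((bot_y - c_avg) / m_avg)
print((top_x, top_y), (bot_x, bot_y))  # endpoints of the single averaged line
```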
# 1. Unsupervised Learning
```
%matplotlib inline
import scipy
import numpy as np
import itertools
import matplotlib.pyplot as plt
```
## 1. Generating the data
First, we will generate some data for this problem. Set the number of points $N=400$, their dimension $D=2$, and the number of clusters $K=2$, and generate data from the distribution $p(x|z=k) = \mathcal{N}(\mu_k, \Sigma_k)$.
Sample $200$ data points for $k=1$ and 200 for $k=2$, with
$$
\mu_1=
\begin{bmatrix}
0.1 \\
0.1
\end{bmatrix}
\ \text{,}\
\mu_2=
\begin{bmatrix}
6.0 \\
0.1
\end{bmatrix}
\ \text{ and }\
\Sigma_1=\Sigma_2=
\begin{bmatrix}
10 & 7 \\
7 & 10
\end{bmatrix}
$$
Here, $N=400$. Since you generated the data, you already know which sample comes from which class.
Run the cell in the IPython notebook to generate the data.
```
# TODO: Run this cell to generate the data
num_samples = 400
cov = np.array([[1., .7], [.7, 1.]]) * 10
mean_1 = [.1, .1]
mean_2 = [6., .1]
x_class1 = np.random.multivariate_normal(mean_1, cov, num_samples // 2)
x_class2 = np.random.multivariate_normal(mean_2, cov, num_samples // 2)
xy_class1 = np.column_stack((x_class1, np.zeros(num_samples // 2)))
xy_class2 = np.column_stack((x_class2, np.ones(num_samples // 2)))
data_full = np.row_stack([xy_class1, xy_class2])
np.random.shuffle(data_full)
data = data_full[:, :2]
labels = data_full[:, 2]
```
Make a scatter plot of the data points showing the true cluster assignment of each point using different color codes and shape (x for first class and circles for second class):
```
# TODO: Make a scatterplot for the data points showing the true cluster assignments of each point
plt.scatter(xy_class1[:,0], xy_class1[:,1], marker='x') # first class: x markers
plt.scatter(xy_class2[:,0], xy_class2[:,1], marker='o') # second class: circle markers
```
## 2. Implement and Run K-Means algorithm
Now, we assume that the true class labels are not known. Implement the k-means algorithm for this problem.
Write two functions: `km_assignment_step`, and `km_refitting_step` as given in the lecture (Here, `km_` means k-means).
Identify the correct arguments, and the order to run them. Initialize the algorithm with
$$
\hat\mu_1=
\begin{bmatrix}
0.0 \\
0.0
\end{bmatrix}
\ \text{,}\
\hat\mu_2=
\begin{bmatrix}
1.0 \\
1.0
\end{bmatrix}
$$
and run it until convergence.
Show the resulting cluster assignments on a scatter plot either using different color codes or shape or both.
Also plot the cost vs. the number of iterations. Report your misclassification error.
```
def cost(data, R, Mu):
    N, D = data.shape
    K = Mu.shape[1]
    J = 0
    for k in range(K):
        J += np.dot(np.linalg.norm(data - np.array([Mu[:, k], ] * N), axis=1)**2, R[:, k])
    return J

# TODO: K-Means Assignment Step
def km_assignment_step(data, Mu):
    """ Compute K-Means assignment step
    Args:
        data: a NxD matrix for the data points
        Mu: a DxK matrix for the cluster means locations
    Returns:
        R_new: a NxK matrix of responsibilities
    """
    # Fill this in:
    N, D = data.shape  # Number of datapoints and dimension of datapoint
    K = Mu.shape[1]  # number of clusters
    r = np.zeros((N, K))
    for k in range(K):
        r[:, k] = np.linalg.norm(Mu[:, k] - data, axis=1)
    arg_min = np.argmin(r, axis=1)  # index of the closest cluster for each point
    R_new = np.zeros((N, K))  # one-hot responsibilities with shape (N, K)
    R_new[range(N), arg_min] = 1  # Assign to 1
    return R_new

# TODO: K-means Refitting Step
def km_refitting_step(data, R, Mu):
    """ Compute K-Means refitting step.
    Args:
        data: a NxD matrix for the data points
        R: a NxK matrix of responsibilities
        Mu: a DxK matrix for the cluster means locations
    Returns:
        Mu_new: a DxK matrix for the new cluster means locations
    """
    N, D = data.shape  # Number of datapoints and dimension of datapoint
    K = R.shape[1]  # number of clusters
    Mu_new = np.matmul(data.T, R) / np.sum(R, axis=0)
    return Mu_new

# TODO: Run this cell to call the K-means algorithm
N, D = data.shape
K = 2
max_iter = 100
class_init = np.random.binomial(1., .5, size=N)
R = np.vstack([class_init, 1 - class_init]).T
Mu = np.zeros([D, K])
Mu[:, 1] = 1.
cost_results = []
for it in range(max_iter):
    R = km_assignment_step(data, Mu)
    Mu = km_refitting_step(data, R, Mu)
    cost_results.append(cost(data, R, Mu))
class_1 = np.where(R[:, 0])
class_2 = np.where(R[:, 1])

# TODO: Make a scatterplot for the data points showing the K-Means cluster assignments of each point
plt.scatter(data[class_1, 0], data[class_1, 1], marker='x')  # first class, x shape
plt.scatter(data[class_2, 0], data[class_2, 1], marker='o')  # second class, circle shape
plt.show()

# Plot the cost vs. the number of iterations
plt.plot(cost_results)
plt.show()

# Count points whose cluster matches their true class (exact row comparison);
# if the clusters come out label-swapped, the complementary error 1 - x applies
sumForClass1 = 0
for point in data[class_1[0]]:
    if (xy_class1[:, :2] == point).all(axis=1).any():
        sumForClass1 += 1
for point in data[class_2[0]]:
    if (xy_class2[:, :2] == point).all(axis=1).any():
        sumForClass1 += 1
print("misclassification", 1 - sumForClass1 / (xy_class1.shape[0] + xy_class2.shape[0]))
```
## 3. Implement EM algorithm for Gaussian mixtures
Next, implement the EM algorithm for Gaussian mixtures.
Write three functions: `log_likelihood`, `gm_e_step`, and `gm_m_step` as given in the lecture.
Identify the correct arguments, and the order to run them.
Initialize the algorithm with the same means as in the Question 2.1 k-means initialization, covariances with $\hat\Sigma_1=\hat\Sigma_2=I$,
and $\hat\pi_1=\hat\pi_2=0.5$.
In addition to the update equations in the lecture, for the M (Maximization) step, you also need to use this following equation to update the covariance $\Sigma_k$:
$$\hat{\mathbf{\Sigma}_k} = \frac{1}{N_k} \sum^N_{n=1} r_k^{(n)}(\mathbf{x}^{(n)} - \hat{\mathbf{\mu}_k})(\mathbf{x}^{(n)} - \hat{\mathbf{\mu}_k})^{\top}$$
Run the algorithm until convergence and show the resulting cluster assignments on a scatter plot either using different color codes or shape or both.
Also plot the log-likelihood vs. the number of iterations. Report your misclassification error.
```
def normal_density(x, mu, Sigma):
    return np.exp(-.5 * np.dot(x - mu, np.linalg.solve(Sigma, x - mu))) \
        / np.sqrt(np.linalg.det(2 * np.pi * Sigma))

def log_likelihood(data, Mu, Sigma, Pi):
    """ Compute log likelihood on the data given the Gaussian Mixture Parameters.
    Args:
        data: a NxD matrix for the data points
        Mu: a DxK matrix for the means of the K Gaussian Mixtures
        Sigma: a list of size K with each element being DxD covariance matrix
        Pi: a vector of size K for the mixing coefficients
    Returns:
        L: a scalar denoting the log likelihood of the data given the Gaussian Mixture
    """
    # Fill this in:
    N, D = data.shape  # Number of datapoints and dimension of datapoint
    K = len(Pi)  # number of mixtures
    L = 0.
    for n in range(N):
        T = 0.  # likelihood of point n under the mixture (reset for every point)
        for k in range(K):
            T += Pi[k] * normal_density(data[n, :], Mu[:, k], Sigma[k])  # k-th Gaussian weighted by its mixing coefficient
        L += np.log(T)
    return L

# TODO: Gaussian Mixture Expectation Step
def gm_e_step(data, Mu, Sigma, Pi):
    """ Gaussian Mixture Expectation Step.
    Args:
        data: a NxD matrix for the data points
        Mu: a DxK matrix for the means of the K Gaussian Mixtures
        Sigma: a list of size K with each element being DxD covariance matrix
        Pi: a vector of size K for the mixing coefficients
    Returns:
        Gamma: a NxK matrix of responsibilities
    """
    # Fill this in:
    N, D = data.shape  # Number of datapoints and dimension of datapoint
    K = len(Pi)  # number of mixtures
    Gamma = np.zeros((N, K))  # zeros of shape (N,K), matrix of responsibilities
    for n in range(N):
        for k in range(K):
            Gamma[n, k] = Pi[k] * normal_density(data[n, :], Mu[:, k], Sigma[k])
        Gamma[n, :] /= np.sum(Gamma[n, :])  # Normalize by sum across second dimension (mixtures)
    return Gamma

# TODO: Gaussian Mixture Maximization Step
def gm_m_step(data, Gamma):
    """ Gaussian Mixture Maximization Step.
    Args:
        data: a NxD matrix for the data points
        Gamma: a NxK matrix of responsibilities
    Returns:
        Mu: a DxK matrix for the means of the K Gaussian Mixtures
        Sigma: a list of size K with each element being DxD covariance matrix
        Pi: a vector of size K for the mixing coefficients
    """
    # Fill this in:
    N, D = data.shape  # Number of datapoints and dimension of datapoint
    K = Gamma.shape[1]  # number of mixtures
    Nk = np.sum(Gamma, axis=0)  # Sum along first axis
    Mu = (1. / Nk) * np.matmul(data.T, Gamma)
    Sigma = [np.identity(D) for k in range(K)]
    for k in range(K):
        delta = data - Mu[:, k]
        # G is an N x N diagonal matrix with the responsibilities Gamma[:, k]
        # on its diagonal (named G rather than D to avoid shadowing the data dimension D)
        G = np.eye(N) * Gamma[:, k]
        # delta.T (D x N) dot G (N x N) dot delta (N x D) gives the D x D weighted scatter matrix
        A = np.matmul(np.matmul(delta.T, G), delta)
        Sigma[k] = (1. / Nk[k]) * A
    Pi = Nk / N
    return Mu, Sigma, Pi

# TODO: Run this cell to call the Gaussian Mixture EM algorithm
N, D = data.shape
K = 2
Mu = np.zeros([D, K])
Mu[:, 1] = 1.
Sigma = [np.eye(2), np.eye(2)]
Pi = np.ones(K) / K
Gamma = np.zeros([N, K])  # Gamma is the matrix of responsibilities
max_iter = 200
ll = []
for it in range(max_iter):
    Gamma = gm_e_step(data, Mu, Sigma, Pi)
    Mu, Sigma, Pi = gm_m_step(data, Gamma)
    ll.append(log_likelihood(data, Mu, Sigma, Pi))  # This makes the computation longer, but is good for debugging
class_1 = np.where(Gamma[:, 0] >= .5)
class_2 = np.where(Gamma[:, 1] >= .5)

# TODO: Make a scatterplot for the data points showing the Gaussian Mixture cluster assignments of each point
plt.scatter(data[class_1, 0], data[class_1, 1], marker='x')  # first class, x shape
plt.scatter(data[class_2, 0], data[class_2, 1], marker='o')  # second class, circle shape
plt.show()

# Plot the log-likelihood vs. the number of iterations
plt.plot(range(len(ll)), ll)
plt.show()

# Count points whose cluster matches their true class (exact row comparison);
# if the clusters come out label-swapped, the complementary error 1 - x applies
sumForClass1 = 0
for point in data[class_1[0]]:
    if (xy_class1[:, :2] == point).all(axis=1).any():
        sumForClass1 += 1
for point in data[class_2[0]]:
    if (xy_class2[:, :2] == point).all(axis=1).any():
        sumForClass1 += 1
print("misclassification", 1 - sumForClass1 / (xy_class1.shape[0] + xy_class2.shape[0]))
```
## 4. Comment on findings + additional experiments
Comment on the results:
* Compare the performance of k-Means and EM based on the resulting cluster assignments.
* Compare the performance of k-Means and EM based on their convergence rate. What is the bottleneck of each method?
* Experiment with 5 different data realizations (generate new data), run your algorithms, and summarize your findings. Does the algorithm performance depend on different realizations of data?
**TODO: Your written answer here**
...
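The multiple-realizations experiment can be sketched in a self-contained way. The snippet below uses a compact re-implementation of k-means rather than the notebook's own functions, so it can be run on its own; the final error bound is only a rough expectation for this amount of cluster overlap.

```python
import numpy as np

def kmeans(data, mu, iters=50):
    # Minimal k-means: alternate assignment and refitting for a fixed number of iterations
    for _ in range(iters):
        dists = np.linalg.norm(data[:, None, :] - mu[None, :, :], axis=2)
        assign = np.argmin(dists, axis=1)
        # Keep an empty cluster's mean unchanged to avoid NaNs
        mu = np.array([data[assign == k].mean(axis=0) if np.any(assign == k) else mu[k]
                       for k in range(len(mu))])
    return assign

errors = []
cov = np.array([[10.0, 7.0], [7.0, 10.0]])
for seed in range(5):
    rng = np.random.default_rng(seed)
    x1 = rng.multivariate_normal([0.1, 0.1], cov, 200)
    x2 = rng.multivariate_normal([6.0, 0.1], cov, 200)
    data = np.vstack([x1, x2])
    labels = np.repeat([0, 1], 200)
    assign = kmeans(data, np.array([[0.0, 0.0], [1.0, 1.0]]))
    err = np.mean(assign != labels)
    errors.append(min(err, 1.0 - err))  # account for an arbitrary label permutation
print(errors)
```

Re-running with other seed ranges gives a feel for how much the misclassification error varies across realizations.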
# 2. Reinforcement Learning
There are 3 files:
1. `maze.py`: defines the `MazeEnv` class, the simulation environment which the Q-learning agent will interact in.
2. `qlearning.py`: defines the `qlearn` function which you will implement, along with several helper functions. Follow the instructions in the file.
3. `plotting_utils.py`: defines several plotting and visualization utilities. In particular, you will use `plot_steps_vs_iters`, `plot_several_steps_vs_iters`, `plot_policy_from_q`
```
from qlearning import qlearn
from maze import MazeEnv
from plotting_utils import plot_steps_vs_iters, plot_several_steps_vs_iters, plot_policy_from_q
```
## 1. Basic Q Learning experiments
(a) Run your algorithm several times on the given environment. Use the following hyperparameters:
1. Number of episodes = 200
2. Alpha ($\alpha$) learning rate = 1.0
3. Maximum number of steps per episode = 100. An episode ends when the agent reaches a goal state, or uses up the maximum number of steps for the episode
4. Gamma ($\gamma$) discount factor = 0.9
5. Epsilon ($\epsilon$) for $\epsilon$-greedy = 0.1 (10% of the time). Note that we should break ties when the Q-values are zero for all the actions (as happens initially) by choosing uniformly among the actions. So there are two conditions for acting randomly: with probability epsilon, or when the Q-values are all zero.
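A minimal sketch of this action-selection rule (assuming one Q-table row per state; this is not the `qlearning.py` implementation itself):

```python
import numpy as np

def choose_action(q_row, epsilon, rng):
    # Act randomly either with probability epsilon, or when all Q-values are
    # still zero (break ties by choosing uniformly among the actions)
    if rng.random() < epsilon or np.all(q_row == 0):
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

rng = np.random.default_rng(0)
# With an all-zero Q-row, actions come out uniform even though epsilon is only 0.1
counts = np.bincount([choose_action(np.zeros(4), 0.1, rng) for _ in range(4000)], minlength=4)
```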
```
# TODO: Fill this in
num_iters = 200
alpha = 1.
gamma = .9
epsilon = 0.1
max_steps = 100
use_softmax_policy = False
# TODO: Instantiate the MazeEnv environment with default arguments
env = MazeEnv()
# TODO: Run Q-learning:
q_hat, steps_vs_iters = qlearn(env, num_iters, alpha, gamma, epsilon, max_steps, use_softmax_policy)
```
Plot the steps to goal vs training iterations (episodes):
```
# TODO: Plot the steps vs iterations
plot_steps_vs_iters(steps_vs_iters)
```
Visualize the learned greedy policy from the Q values:
```
# TODO: plot the policy from the Q value
plot_policy_from_q(q_hat, env)
```
(b) Run your algorithm by passing in a list of 2 goal locations: (1,8) and (5,6). Note: we are using 0-indexing, where (0,0) is top left corner. Report on the results.
```
# TODO: Fill this in (same as before)
num_iters = 200
alpha = 1.
gamma = .9
epsilon = .1
max_steps = 100
use_softmax_policy = False
# TODO: Set the goal
goal_locs = [(1,8), (5,6)]
env = MazeEnv(goals=goal_locs)
# TODO: Run Q-learning:
q_hat, steps_vs_iters = qlearn(env, num_iters, alpha, gamma, epsilon, max_steps, use_softmax_policy)
```
Plot the steps to goal vs training iterations (episodes):
```
# TODO: Plot the steps vs iterations
plot_steps_vs_iters(steps_vs_iters)
```
Visualize the learned greedy policy from the Q values:
```
# TODO: plot the policy from the Q values
plot_policy_from_q(q_hat, env)
```
## 2. Experiment with the exploration strategy, in the original environment
(a) Try different $\epsilon$ values in $\epsilon$-greedy exploration: We asked you to use a rate of $\epsilon$=10%, but try also 50% and 1%. Graph the results (for 3 epsilon values) and discuss the costs and benefits of higher and lower exploration rates.
```
# TODO: Fill this in (same as before)
num_iters = 200
alpha = 1.
gamma = .9
epsilon = .1
max_steps = 100
use_softmax_policy = False
# TODO: set the epsilon lists in increasing order:
epsilon_list = [0.01,0.1,0.5]
env = MazeEnv()
steps_vs_iters_list = []
for epsilon in epsilon_list:
    q_hat, steps_vs_iters = qlearn(env, num_iters, alpha, gamma, epsilon, max_steps, use_softmax_policy)
    steps_vs_iters_list.append(steps_vs_iters)
# TODO: Plot the results
label_list = ["epsilon={}".format(eps) for eps in epsilon_list]
plot_several_steps_vs_iters(steps_vs_iters_list, label_list)
```
(b) Try exploring with a policy derived from the **softmax of Q-values** described in the Q-learning lecture. Use the values $\beta \in \{1, 3, 6\}$ for your experiment, keeping $\beta$ fixed throughout training.
```
# TODO: Fill this in for Static Beta with softmax of Q-values
num_iters = 200
alpha = 1.
gamma = .9
epsilon = .1
max_steps = 100
# TODO: Set the beta
beta_list = [1,3,6]
use_softmax_policy = True
k_exp_schedule = 0. # (float) choose k such that we have a constant beta during training
env = MazeEnv()
steps_vs_iters_list = []
for beta in beta_list:
    q_hat, steps_vs_iters = qlearn(env, num_iters, alpha, gamma, epsilon, max_steps, use_softmax_policy, beta, k_exp_schedule)
    steps_vs_iters_list.append(steps_vs_iters)
label_list = ["beta={}".format(beta) for beta in beta_list]
# TODO:
plot_several_steps_vs_iters(steps_vs_iters_list, label_list)
```
(c) Instead of fixing the $\beta = \beta_0$ to the initial value, we will increase the value of $\beta$ as the number of episodes $t$ increase:
$$\beta(t) = \beta_0 e^{kt}$$
That is, the $\beta$ value is held fixed within a particular episode and updated between episodes.
Run the training again for different values of $k \in \{0.05, 0.1, 0.25, 0.5\}$, keeping $\beta_0 = 1.0$. Compare the results obtained with this approach to those obtained with a static $\beta$ value.
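As a quick sanity check of the schedule, a sketch computing $\beta(t)$ for one of the $k$ values used below:

```python
import numpy as np

def beta_schedule(beta_0, k, t):
    # Exponentially increasing exploration temperature: fixed within an
    # episode, updated between episodes
    return beta_0 * np.exp(k * t)

# With beta_0 = 1.0, larger k makes the policy act greedily much sooner
betas_k005 = [beta_schedule(1.0, 0.05, t) for t in range(200)]
```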
```
# Fill this in for Dynamic Beta
num_iters = 200
alpha = 1.
gamma = .9
epsilon = .1
max_steps = 100
# Set the beta
beta = 1.0
use_softmax_policy = True
k_exp_schedule_list = [0.05,0.1,0.25,0.5]
env = MazeEnv()
steps_vs_iters_list = []
for k_exp_schedule in k_exp_schedule_list:
    q_hat, steps_vs_iters = qlearn(env, num_iters, alpha, gamma, epsilon, max_steps, use_softmax_policy, beta, k_exp_schedule)
    steps_vs_iters_list.append(steps_vs_iters)
# Plot the steps vs iterations
label_list = ["k={}".format(k_exp_schedule) for k_exp_schedule in k_exp_schedule_list]
plot_several_steps_vs_iters(steps_vs_iters_list, label_list)
```
## 3. Stochastic Environments
(a) Make the environment stochastic (uncertain), such that the agent only has a 95% chance of moving in the chosen direction, and has a 5% chance of moving in some random direction.
```
# TODO: Implement ProbabilisticMazeEnv in maze.py
from maze import ProbabilisticMazeEnv
```
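One way such action noise could look, as a sketch only (the real `ProbabilisticMazeEnv` lives in `maze.py` and its interface may differ):

```python
import random

def noisy_action(chosen_action, num_actions, p_rand, rng):
    # With probability p_rand, the environment ignores the agent's choice and
    # moves in a uniformly random direction instead
    if rng.random() < p_rand:
        return rng.randrange(num_actions)
    return chosen_action

rng = random.Random(0)
# With p_rand = 0 the chosen action always goes through
deterministic = [noisy_action(2, 4, 0.0, rng) for _ in range(100)]
```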
(b) Change the learning rule to handle the non-determinism, and experiment with different probabilities of the environment performing a random action, $p_{rand} \in \{0.05, 0.1, 0.25, 0.5\}$, in this new rule. How does performance vary as the environment becomes more stochastic?
Use the same parameters as in the first part, except change the alpha ($\alpha$) value to be **less than 1**, e.g. 0.5.
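The non-determinism is handled by the usual temporal-difference update: with $\alpha < 1$ the estimate is an average over stochastic transitions rather than being overwritten each step. A sketch (not the `qlearning.py` code):

```python
def q_update(Q, s, a, r, s_next, alpha, gamma):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
    return Q

Q = {0: [0.0, 0.0], 1: [1.0, 0.0]}
Q = q_update(Q, 0, 1, 0.0, 1, 0.5, 0.9)  # Q[0][1] becomes 0.5 * 0.9 * 1.0 = 0.45
```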
```
# TODO: Use the same parameters as in the first part, except change alpha
import numpy as np

num_iters = 200
alpha = .5
gamma = .9
epsilon = .1
max_steps = 100
use_softmax_policy = False
# Set the environment probability of random
env_p_rand_list = [0.05, 0.1, 0.25, 0.5]
steps_vs_iters_list = []
for env_p_rand in env_p_rand_list:
    # Instantiate ProbabilisticMazeEnv with the current probability of a random
    # action (assumes the constructor accepts this probability as an argument)
    env = ProbabilisticMazeEnv(p_rand=env_p_rand)
    # Note: We will repeat for several runs of the algorithm to make the result less noisy
    avg_steps_vs_iters = np.zeros(num_iters)
    for i in range(10):
        q_hat, steps_vs_iters = qlearn(env, num_iters, alpha, gamma, epsilon, max_steps, use_softmax_policy)
        avg_steps_vs_iters += steps_vs_iters
    avg_steps_vs_iters /= 10
    steps_vs_iters_list.append(avg_steps_vs_iters)
label_list = ["env_random={}".format(env_p_rand) for env_p_rand in env_p_rand_list]
plot_several_steps_vs_iters(steps_vs_iters_list, label_list)
```
# 3. Did you complete the course evaluation?
```
yes
```
<b>The code below uses STLM with only the Capacity field to predict the RUL (STLM using one variable with multiple steps)</b>
<p>We built the model only on Battery B0005.</p>
```
import sys
import numpy as np # linear algebra
from scipy.stats import randint
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv), data manipulation as in SQL
import matplotlib.pyplot as plt # this is used for the plot the graph
import seaborn as sns # used for plot interactive graph.
from sklearn.model_selection import train_test_split # to split the data into two parts
#from sklearn.cross_validation import KFold # use for cross validation
from sklearn.preprocessing import StandardScaler # for normalization
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import Pipeline # pipeline making
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectFromModel
from sklearn import metrics # for the check the error and accuracy of the model
from sklearn.metrics import mean_squared_error,r2_score
## for Deep-learing:
import keras
from keras.layers import Dense
from keras.models import Sequential
from keras.utils import to_categorical
from keras.optimizers import SGD
from keras.callbacks import EarlyStopping
from keras.utils import np_utils
import itertools
from keras.layers import LSTM
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.layers import Dropout
df=pd.read_csv("B0005_on.csv")
#featurs=['Batt_name','cycle','amb_temp','voltage_battery','current_battery','temp_battery','current_load','voltage_load','time','Capacity','NewCap']
f1=['cycle','NewCap']
df=df[f1]
#dataset=df[(df.Batt_name =='B0005')| (df.Batt_name =='B0006') | (df.Batt_name =='B0007')|(df.Batt_name =='B0018')]
# Feature Selection
#dataset=df[(df['Batt_name']=='B0006_11')]
dataset=df#<70 need window to be 10
#dataset=dataset[f1]
data_train=dataset[(dataset['cycle']<50)]
data_set_train=data_train.iloc[:,1:2].values
data_test=dataset[(dataset['cycle']>=50)]
data_set_test=data_test.iloc[:,1:2].values
from sklearn.preprocessing import MinMaxScaler
sc=MinMaxScaler(feature_range=(0,1))
data_set_train=sc.fit_transform(data_set_train)
data_set_test=sc.transform(data_set_test)
X_train=[]
y_train=[]
#take the last 10t to predict 10t+1
for i in range(10, 49):
    X_train.append(data_set_train[i - 10:i, 0])
    y_train.append(data_set_train[i, 0])
X_train,y_train=np.array(X_train),np.array(y_train)
X_train=np.reshape(X_train,(X_train.shape[0],X_train.shape[1],1))
ln=len(data_train)
ln
```
<h1> Applied STLM </h1>
```
regress=Sequential()
regress.add(LSTM(units=200, return_sequences=True, input_shape=(X_train.shape[1],1)))
regress.add(Dropout(0.3))
regress.add(LSTM(units=200, return_sequences=True))
regress.add(Dropout(0.3))
regress.add(LSTM(units=200, return_sequences=True))
regress.add(Dropout(0.3))
regress.add(LSTM(units=200))
regress.add(Dropout(0.3))
regress.add(Dense(units=1))
regress.compile(optimizer='adam',loss='mean_squared_error')
regress.fit(X_train,y_train,epochs=200,batch_size=25)
```
<h1> Test the Model that was built by STLM</h1>
```
len(data_test)
#### predictions
data_total=pd.concat((data_train['NewCap'],data_test['NewCap']),axis=0)
inputs=data_total[len(data_total)-len(data_test)-10:].values
inputs=inputs.reshape(-1,1)
inputs=sc.transform(inputs)
X_test=[]
for i in range(10, 129):
    X_test.append(inputs[i - 10:i, 0])
X_test=np.array(X_test)
X_test=np.reshape(X_test,(X_test.shape[0],X_test.shape[1],1))
pred=regress.predict(X_test)
pred=sc.inverse_transform(pred)
pred=pred[:,0]
tests=data_test.iloc[:,1:2]
rmse = np.sqrt(mean_squared_error(tests, pred))
print('Test RMSE: %.3f' % rmse)
metrics.r2_score(tests,pred)
len(pred)
data_test['pre']=pred
#print(data_test.head(50))
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
plot_df = dataset.loc[(dataset['cycle']>=1),['cycle','NewCap']]
plot_per=data_test.loc[(data_test['cycle']>=ln),['cycle','pre']]
sns.set_style("darkgrid")
plt.figure(figsize=(8, 8))
plt.plot(plot_df['cycle'], plot_df['NewCap'], label="Actual data", color='blue')
plt.plot(plot_per['cycle'],plot_per['pre'],label="Prediction data", color='red')
#plt.plot(pred)
#Draw threshold
plt.plot([0.,168], [1.38, 1.38],dashes=[6, 2])
plt.ylabel('NewCap')
# make x-axis ticks legible
adf = plt.gca().get_xaxis().get_major_formatter()
plt.xlabel('cycle')
plt.title('Discharge B0005 (prediction)start in cycle 50 -RULe=-8, window-size=10')
actual=0
pred=0
Afil=0
Pfil=0
a=data_test['NewCap'].values
b=data_test['pre'].values
j=0
k=0
for i in range(len(a)):
    actual = a[i]
    if actual <= 1.38:
        j = i
        Afil = j
        break
for i in range(len(a)):
    pred = b[i]
    if pred < 1.38:
        k = i
        Pfil = k
        break
print("The Actual fail at cycle number: "+ str(Afil+ln))
print("The prediction fail at cycle number: "+ str(Pfil+ln))
RULerror=Pfil-Afil
print("The error of RUL= "+ str(RULerror)+ " Cycle(s)")
```
<h4> The LSTM model (one input layer, 3 hidden layers and 1 output layer, each layer with 200 neurons) gives the
results below, which show the prediction for different numbers of training cycles.</h4>
<h4> When the training dataset has fewer than 70 cycles, we need to adjust the window (lag) to 10 rather than 5.</h4>
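The trade-off between the window (lag) size and the number of training pairs can be seen in a small sketch of the sliding-window construction used above:

```python
import numpy as np

def make_windows(series, window):
    # Each window of `window` past capacity values predicts the next value,
    # reshaped to the (samples, timesteps, 1) layout the LSTM expects
    X = np.array([series[i - window:i] for i in range(window, len(series))])
    y = np.array([series[i] for i in range(window, len(series))])
    return X.reshape(len(X), window, 1), y

series = np.arange(60, dtype=float)
X5, y5 = make_windows(series, 5)
X10, y10 = make_windows(series, 10)  # a larger lag yields fewer training pairs
```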
<table>
<tr>
<td><img src="Cycle110-w5.png"></td>
<td><img src="Cycle100-w5.png"></td>
</tr>
<tr>
<td><img src="Cycle80-w5.png"></td>
</tr>
</table>
[](https://colab.research.google.com/github/eirasf/GCED-AA2/blob/main/lab2/lab2-parte2.ipynb)
# Lab 1: Neural networks from scratch with TensorFlow - Part 2
In this second part of the lab we will use TensorFlow to implement and train the same neural network that we developed with Numpy in part 1.
We will therefore need the TensorFlow library, in addition to the already-used Numpy and tensorflow_datasets.
```
# COLAB - To run on Colab, uncomment the following line
%tensorflow_version 2.x
# LOCAL - To run locally, uncomment the following line
#!pip3 install tensorflow numpy tensorflow-datasets
import tensorflow as tf
import numpy as np
import tensorflow_datasets as tfds
# Set a random seed so that results are reproducible across runs
np.random.seed(1234567)
```
We will load the `german_credit_numeric` dataset and take its first batch of 100 elements, as we did in part 1 of the lab. We will obtain two tensors (`vectores_x` and `etiquetas`) that we will use later.
```
import tensorflow_datasets as tfds
# Load the dataset
ds = tfds.load('german_credit_numeric', split='train')
tamano_lote = 100
elems = ds.batch(tamano_lote)
lote_entrenamiento = None
for elem in elems:
    lote_entrenamiento = elem
    break
vectores_x = tf.cast(lote_entrenamiento["features"], dtype=tf.float64)
etiquetas = tf.cast(lote_entrenamiento["label"], dtype=tf.float64)
```
## Declaring the model
First, we must create in TensorFlow the graph of operations that represents our model. To do so:
1. We create the variables that TF will optimize, i.e. the parameters of the model.
1. We create the graph of operations that produce the prediction from the input and the variables. In this case we will use functions that relate TF variables to tensors holding data, using TF operations.
```
# Auxiliary variables
tamano_entrada = 24
h0_size = 5
h1_size = 3
# CREATING THE VARIABLES
# TODO - Fill in the dimensions of the matrices (one possible completion, using the sizes above)
W0 = tf.Variable(np.random.randn(tamano_entrada, h0_size), dtype=tf.float64, name='W0')
b0 = tf.Variable(np.random.randn(1, h0_size), dtype=tf.float64, name='b0')
W1 = tf.Variable(np.random.randn(h0_size, h1_size), dtype=tf.float64, name='W1')
b1 = tf.Variable(np.random.randn(1, h1_size), dtype=tf.float64, name='b1')
W2 = tf.Variable(np.random.randn(h1_size, 1), dtype=tf.float64, name='W2')
b2 = tf.Variable(np.random.randn(1, 1), dtype=tf.float64, name='b2')
# We store all the variables in a list so we can access them easily later
VARIABLES = [W0, b0, W1, b1, W2, b2]
# CREATING THE GRAPH OF OPERATIONS
@tf.function
def capa_sigmoide(x, W, b):
    # TODO - Compute the output of a layer using TensorFlow operations (one possible completion)
    return tf.math.sigmoid(tf.matmul(x, W) + b)

@tf.function
def predice(x):
    # TODO - Chain the three layers (one possible completion)
    h0 = capa_sigmoide(x, W0, b0)
    h1 = capa_sigmoide(h0, W1, b1)
    y = capa_sigmoide(h1, W2, b2)
    return y

# Verification
x_test = np.random.randn(1, tamano_entrada)
y_pred = predice(x_test)
print(y_pred)
np.testing.assert_almost_equal(0.48001507, y_pred.numpy(), err_msg='Check your implementation')
```
## Training the model
The declared model can already be used to make predictions by passing the `predice` function a tensor of data (as done in the verification part of the previous cell). However, as we saw in part 1, this model is not fitted to the input data, so it will produce poor predictions.
We must find a set of values for the parameters ($\mathbf{W}_2$, $\mathbf{b}_2$, $\mathbf{W}_1$, $\mathbf{b}_1$, $\mathbf{W}_0$ and $\mathbf{b}_0$) that minimize the cost function. TensorFlow helps us optimize this process.
TensorFlow lets us configure the optimization process, so we must specify:
1. Which loss function we want. In our case we had chosen binary cross-entropy.
1. Which optimization method to use. As in part 1, we will use gradient descent.
For the moment we will create one variable for each of these two settings. With this organization, using a different loss function or a different optimization algorithm will be as simple as changing these variables.
```
fn_perdida = tf.keras.losses.BinaryCrossentropy()
optimizador = tf.keras.optimizers.SGD(learning_rate=0.01)
#optimizador = tf.keras.optimizers.Adam(0.001)
```
### The training loop
The training loop will be analogous to the one used in part 1. It consists of running a preset number (`NUM_EPOCHS`) of training steps. In each step we do the following:
1. Take the input data and compute the predictions made by the model in its current state
1. Compute the cost (the mean of the losses of each prediction)
1. Use the cost value to update each variable in the direction of its gradient
We will create a `paso_entrenamiento` function that does this work. TensorFlow will take care of computing the gradients and updating the variables. To compute gradients, TensorFlow uses a `GradientTape`. All tensor operations performed inside the environment in which this `GradientTape` is declared are recorded, which lets us obtain the gradients directly from the `GradientTape` with a single call. You can see how it works in the example.
```
@tf.function
def paso_entrenamiento(x, y):
    # Declare the GradientTape that will record the operations
    with tf.GradientTape() as tape:
        # TODO - Compute the predictions (one possible completion)
        y_pred = predice(x)
        # Compute the loss using the function chosen above
        perdida = fn_perdida(y, y_pred)
    # Querying the gradients only requires specifying two things:
    # 1. the function whose gradient we want to obtain
    # 2. the list of variables with respect to which the gradient is computed
    # The call returns a list with the gradient corresponding to each variable in the list
    gradientes = tape.gradient(perdida, VARIABLES)
    # Updating the variables only takes this call. It receives a list of (gradient, variable) tuples
    optimizador.apply_gradients(zip(gradientes, VARIABLES))
    # To be able to display the accuracy, we compute it at every step
    fallos = tf.abs(tf.reshape(y, (tamano_lote, 1)) - y_pred)
    tasa_acierto = tf.reduce_sum(1 - fallos)
    # Return these two values so we can print them whenever convenient
    return (perdida, tasa_acierto)

# TRAINING PROCESS
num_epochs = 10000
for epoch in range(num_epochs):
    perdida, tasa_error = paso_entrenamiento(vectores_x, etiquetas)
    if epoch % 100 == 99:
        print("Epoch:", epoch, 'Loss:', perdida.numpy(), 'Accuracy:', tasa_error.numpy() / tamano_lote)
```
Using TensorFlow has allowed us to abstract away the implementation details and the computation of derivatives, so we can focus on the architecture of our model.
### Representing and minimizing rules
*(You'll need to: `conda install pyrsistent networkx` and then `pip install nxpd`. You'll also need graphviz.)*
Our rules are represented as positive monotone formulae in CNF. This is flexible enough for practical purposes, while still allowing us to define a normal form suitable for diffing, etc.
A _clause_ is a set of literals, and a _formula_ a set of clauses. We sort each first by size, then lexicographically. Other than that, we just need a bunch of boilerplate functionality.
```
class Clause:
    def __init__(self, *args):
        self.literals = frozenset(args)
    def first(self):
        return next(iter(self.literals))
    def issubset(self, other):
        return self.literals.issubset(other.literals)
    def elements(self):
        return self.literals
    def __str__(self):
        return ", ".join(map(str, sorted(self.literals)))
    def _repr_pretty_(self, p, cycle):
        p.text(str(self) if not cycle else '...')
    def __len__(self):
        return len(self.literals)
    def __lt__(self, other):
        if len(self.literals) == len(other.literals):
            return sorted(self.literals) < sorted(other.literals)
        return len(self.literals) < len(other.literals)
    def __eq__(self, other):
        return self.literals == other.literals
    def __hash__(self):
        return hash(self.literals)
    def __repr__(self):
        return '<{}>'.format(''.join(self.literals))

class Formula:
    def __init__(self, *args):
        self.clauses = set(args)
    def first(self):
        return next(iter(self.clauses))
    def issubset(self, other):
        return self.clauses.issubset(other.clauses)
    def elements(self):
        return self.clauses
    def __len__(self):
        return len(self.clauses)
    def __str__(self):
        return " ".join(map(lambda x: "({})".format(x), sorted(self.clauses)))
    def _repr_pretty_(self, p, cycle):
        p.text(str(self) if not cycle else '...')
    def __lt__(self, other):
        if len(self.clauses) == len(other.clauses):
            return sorted(self.clauses) < sorted(other.clauses)
        return len(self.clauses) < len(other.clauses)
```
And some utility functions for constructing formulae...
```
def formula(*sets):
    return Formula(*map(lambda s: Clause(*s), sets))

def parse(str):
    return formula(*(set(clause) for clause in str.split(' ')))

def forms(*strs):
    return [normalize(parse(str)) for str in strs]
formula({'a', 'b', 'c'}, {'ab', 'e'}, {'a', 'ab', 'c'})
```
But from now on, we'll just use single letters for literals, so we can make use of `forms()` and `parse()`.
For any formula, we define a normal form which exists, is unique, and is equivalent to the original formula under the usual interpretation of boolean logic.
Clauses are always normal, since all literals are positive. Formulae are normalized by removing any clause subsumed by any other. A clause $c$ is _subsumed_ by a clause $s$ if $s \subseteq c$. This is the obvious $O(mn)$ algorithm. Our clauses are almost always of size 1, so this is just fine.
```
def subsumes(c, d):
    return c.literals.issubset(d.literals)

def normalize(formula):
    minimized = set()
    for c in formula.clauses:
        minimized = {s for s in minimized if not subsumes(c, s)}
        if not any(subsumes(s, c) for s in minimized):
            minimized.add(c)
    return Formula(*minimized)
form = parse('ab b a cd acd cd')
form
normalize(form)
```
Note that `forms()` above returns formulae already normalized:
```
forms('ab b a cd acd cd')
```
### Matching
The problem of matching rules to contexts amounts to testing formulae against truth-value assignments. But there are a few particularities which apply to our setting...
- we want to evaluate rules under partial assignments as well as total assignments, identifying
rules which are not yet satisfied, but remain satisfiable by future assignments.
- we're always monotonically extending partial assignments and then re-evaluating, so it seems
like a good idea to re-use the current state as a starting point for future evaluations.
- many assignments will co-exist at the same time, and each may subsequently be used as the
starting point for future evaluations.
If we use an algorithm that just freshly evaluates all the rules against each assignment from scratch, there's not much more to do. If, however, we want to use an algorithm that performs work incrementally from the prior assignment, then we also want persistent data structures so that the old and new states can co-exist without copying.
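Python's stdlib `ChainMap` sketches the kind of cheap, non-destructive extension we want (the pyrsistent package listed in the prerequisites above provides true persistent structures with structural sharing):

```python
from collections import ChainMap

base = ChainMap({'env': 'prod'})
# new_child is O(1) and leaves `base` untouched, so both assignments can
# co-exist and each can seed further extensions
extended = base.new_child({'region': 'eu'})
```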
Here are some ideas...
#### Brute force
1. Index rules by properties set in that context
2. Do nothing else until a property is requested
3. Find all rules setting that property
4. Evaluate each rule against the current assignment
5. Apply specificity logic, etc.
This might be just fine, particularly if there are lots of different properties with only a few settings each. For situations where lots of properties are queried and set in the same context, there are some simple tricks that could speed things up. For instance, a given rule could be marked as satisfied (or not) by a given assignment so that future properties queried in the same context wouldn't need to reevaluate those rules. But in the case where there are many, many settings of the same property in a large number of different rules, this seems unavoidably pretty bad.
There may be other use cases for which this approach wouldn't work as well. For instance, enumerating properties set in a given context, or enumerating all the possible values of a given key that could be added to reveal additional settings. Those are rarer, though, and it would be ok if they were more expensive, as long as they're tractable.
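A minimal sketch of the brute-force strategy (the representation here is hypothetical: rules are CNF clause-sets paired with their settings, and an assignment is just a set of true literals):

```python
# Brute-force matching sketch: index rules by the properties they set,
# then evaluate every relevant rule from scratch on each lookup.
from collections import defaultdict

class BruteForce:
    def __init__(self):
        self.by_prop = defaultdict(list)  # property name -> rules setting it

    def add_rule(self, cnf, settings):
        # cnf: frozenset of clauses (each a frozenset of literals)
        for prop in settings:
            self.by_prop[prop].append((cnf, settings))

    def lookup(self, prop, assignment):
        # A clause is satisfied iff it shares a literal with the assignment;
        # a rule is satisfied iff all its clauses are.
        results = []
        for cnf, settings in self.by_prop[prop]:
            if all(clause & assignment for clause in cnf):
                results.append(settings[prop])
        return results

bf = BruteForce()
bf.add_rule(frozenset({frozenset({'a'}), frozenset({'b', 'c'})}), {'color': 'red'})
bf.add_rule(frozenset({frozenset({'b'})}), {'color': 'blue'})
print(bf.lookup('color', {'a', 'b'}))  # ['red', 'blue']
```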
#### DAG/Rete
1. Use rules to build an immutable graph, with root nodes for each literal
2. As the context is extended with additional facts, propagate those facts through the graph,
activating child nodes as appropriate
3. When a node containing property settings is activated, add those settings to the properties
visible in the current context
4. Apply specificity logic, either separately or in step 3 when adding settings
This is most similar to the current implementation, and most similar to what I've implemented so far for CCS2. Given the requirements above, the node activation state must be kept separately from the main graph, and in a persistent data structure. Likewise the visible settings.
Querying properties and enumerating properties are both trivial. The problem of enumerating key values remains tricky, but I've already implemented one variant and it's not too bad.
There are lots of variants of this involving different approaches to building the dag. Since we're dealing with CNF terms, one possibility is to build a graph for disjunctions and one for conjunctions (or, equivalently, a single graph in two layers). The only thing remaining then is to optimize the structure of each of those graphs to amortize work done at each step.
#### Other matching approaches
There are other approaches to indexing and matching boolean formulae. Perhaps one of these would be better, but it would need to support partial matching (to find satisfiable but not yet satisfied rules), and may need to support an incremental implementation using immutable data structures, depending on its overall performance.
### The DAG approach
So let's give this approach a try... The dag will be rooted with a bunch of literals, which in actual practice consist of key/value-pattern matchable forms, but for now we'll just leave them opaque.
Nodes consist of an operation (and/or), an activation tally count, their children and their property settings. The top-level dag contains an otherwise unused node for the root-level settings.
```
from enum import Enum

class Op(Enum):
    AND = 'AND'
    OR = 'OR'

class Node:
    def __init__(self, op, specificity):
        self.op = op
        self.specificity = specificity
        self.tallyCount = 1 if self.op == Op.OR else 0
        self.children = []
    def add_link(self):
        if self.op == Op.AND:
            self.tallyCount += 1

from collections import defaultdict

class Dag:
    def __init__(self):
        self.children = defaultdict(list)
```
Here we build a single dag containing both clauses and formulae, but the design is easier to understand if we regard them as two separate graphs. Consider the graph of clauses. The invariant is that a node representing a clause $c$ needs to be reachable from exactly those root nodes representing literals $l \in c$. It must be reachable via some path from each of those root nodes, and not reachable from any others.
Subject to this, and without being too precise, the goal is to minimize the number of edges and also to minimize fan-out from any node. Intuitively, we want to minimize the duplication of work for overlapping formulae. Also, since we have no reason to believe that any particular fact will become true in the future, we also want to minimize the expected amount of work we have to do for any incremental addition of bindings to our assignment, deferring as much work as possible.
We build the dag in two phases: first we add all clauses, from smallest to largest, then all formulae likewise. For this reason, we require the complete set of formulae up-front. We use the greedy set-cover approximation algorithm in each phase. NB: Here we rely on the fact that clauses and formulae are both ordered from smallest to largest.
For reasons that will become clear below, we force every single-element clause to build a disjunction node, but single-element formulae can simply reuse the disjunction node below them as their own.
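For reference, here is the classic greedy set-cover approximation in isolation (a standalone sketch, separate from the dag-specific bookkeeping in the code that follows): repeatedly pick the subset covering the most still-uncovered elements.

```python
# Greedy set-cover approximation: at each step choose the subset that
# covers the largest number of still-uncovered elements.
def greedy_cover(universe, subsets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            break  # remaining elements are not coverable by any subset
        chosen.append(best)
        uncovered -= best
    return chosen, uncovered

cover, leftover = greedy_cover('abcde', [set('ab'), set('bcd'), set('de')])
print(cover)  # [{'b', 'c', 'd'}, {'a', 'b'}, {'d', 'e'}] (in some set order)
```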
```
import heapq
from functools import total_ordering

@total_ordering
class Rank:
    def __init__(self, elem):
        self.weight = len(elem)
        self.elem = elem
    def __eq__(self, other):
        return self.weight == other.weight and self.elem == other.elem
    def __lt__(self, other):
        # reversed comparisons so heapq pops the largest weight first
        if self.weight == other.weight:
            return self.elem > other.elem
        return self.weight > other.weight

def build(expr, op, base_children, these_children):
    # TODO need a special case for the empty formula
    if op == Op.AND and len(expr) == 1:
        return base_children[expr.first()]
    ranks = defaultdict(list)
    sizes = []
    for c in these_children:
        if c.issubset(expr):
            if len(c) == len(expr):
                return these_children[c]
            rank = Rank(c)
            sizes.append(rank)
            for l in c.elements():
                ranks[l].append(rank)
    heapq.heapify(sizes)
    covered = set()
    node = Node(op, len(expr))
    while len(sizes) and sizes[0].weight != 0:
        best = heapq.heappop(sizes).elem
        these_children[best].append(node)
        node.add_link()
        for l in best.elements():
            if l not in covered:
                covered.add(l)
                for rank in ranks[l]:
                    rank.weight -= 1
        # TODO this repeated linear heapify is no good, we need a heap that allows us to
        # shuffle elements up and down as needed
        heapq.heapify(sizes)
    for l in expr.elements() - covered:
        base_children[l].append(node)
        node.add_link()
    return node.children

def build_dag(formulae):
    dag = Dag()
    clause_children = {}
    for clause in sorted({c for f in formulae for c in f.clauses}):
        clause_children[clause] = build(clause, Op.OR, dag.children, clause_children)
    form_children = {}
    for form in sorted(formulae):
        form_children[form] = build(form, Op.AND, clause_children, form_children)
    return dag
dag = build_dag(forms('a b c', 'a b', 'b c efg', 'b d ef', 'b'))
```
Let's take a look at the result:
```
import networkx as nx
import nxpd
nxpd.nxpdParams['show'] = 'ipynb'

# t will be used later...
def draw(dag, t={}):
    G = nx.DiGraph()
    G.graph['dpi'] = 60
    G.graph['rankdir'] = 'BT'
    def add_nodes(p, ns):
        for n in ns:
            count = t.get(n, n.tallyCount)
            label = str(n.tallyCount)
            if count == 0:
                color = 'palegreen'
            elif n.op == Op.OR:
                color = 'lightblue'
            elif count != n.tallyCount:
                color = 'mistyrose'
                label = "{} / {}".format(n.tallyCount - count, label)
            else:
                color = 'pink2'
            G.add_node(n, label=label, style='filled', fillcolor=color)
            G.add_edge(p, n)
            add_nodes(n, n.children)
    for l, ns in dag.children.items():
        G.add_node(l)
        add_nodes(l, ns)
    return nxpd.draw(G)
draw(dag)
```
#### Examples
Here are a few more little examples from my notes, just to show a few features of the way these dags are built.
```
draw(build_dag(forms('a', 'b', 'c', 'a b', 'b c', 'a b c')))
draw(build_dag(forms('a', 'b', 'c', 'a b', 'b c', 'a b c', 'a c', 'a c d')))
draw(build_dag(forms('a b c d', 'a b d', 'a c d', 'b d', 'a b')))
```
This algorithm will never create an intermediate node, so it'll only discover sharing among expressions that are actually present in the input. Compare the following:
```
draw(build_dag(forms('a b c', 'b c d')))
draw(build_dag(forms('a b c', 'b c d', 'b c')))
```
But our input files themselves often contain strong hints about that, such as:
    b c {
        a {...}
        d {...}
    }
So we can get some mileage by exploiting these structures when we build the list of formulae.
We'll parse the above into a tree of formulae, all normalized:
    b c
        a b c
        b c d
For each branch, we'll check whether the parent is a subset of more than one of its children. If so, we add the parent formula to our set. Otherwise, we drop it.
So, for
    b {
        c {
            a {...}
            d {...}
        }
    }
we find this tree:
    b
        b c
            a b c
            b c d
We eliminate `b`, but retain `b c`.
For
    (a, b) {
        a {...}
        c {...}
    }
we find
    ab d
        a d
        ab c d
and since `ab d` is a subset of only one of its children, we eliminate it, saving one node.
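The heuristic itself is tiny; as a sketch (treating formulae simply as frozensets of literals), a parent survives only if it is a subset of at least two of its children:

```python
# Keep a parent formula only if it is a subset of two or more of its
# children; otherwise it can be dropped from the formula set.
def keep_parent(parent, children):
    return sum(parent <= child for child in children) >= 2

# 'b c' with children 'a b c' and 'b c d': retained
print(keep_parent(frozenset('bc'), [frozenset('abc'), frozenset('bcd')]))   # True
# a parent that is a subset of only one child: eliminated
print(keep_parent(frozenset('abd'), [frozenset('ad'), frozenset('abcd')]))  # False
```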
Note that there are cases where this still builds unused temporary nodes. For instance:
    a b c {...}
    a b {
        c d {...}
        c e {...}
    }
The `a b` node will be included in this case, optimistically hoping that it'll be used for `a b c d` and `a b c e`, but in fact both of those will take advantage of `a b c` instead, leaving `a b` with no children. Orphaned nodes like that could just be ignored, they could be cleaned out at the end, or maybe the above subset heuristic could be improved to detect them sooner.
We'll implement all of this later, when we're actually dealing with the full syntax.
#### Matching
Meanwhile, let's talk about matching against these dags. This part is simple. We just start at the correct root for the newly added literal fact, and activate whichever nodes we reach, decrementing the tally count of each. We activate a node's children only when its tally count reaches *exactly* zero. This prevents duplicate activations in the case of disjunctions. (It would also allow an easy way to poison nodes, if ever needed.) We also need to be careful not to allow duplicate literals, but forcing each single-element clause to build a one-input disjunction node accomplishes that as well.
Finally, we keep the tally counts in a persistent map so that we can fork them as discussed above.
```
from pyrsistent import m, s
class Context:
    def __init__(self, dag, tallies=m()):
        self.dag = dag
        self.tallies = tallies
    def augment(self, literal):
        tallies = self.tallies
        def activate(n):
            nonlocal tallies
            count = tallies.get(n, n.tallyCount)
            if count > 0:
                count -= 1
                tallies = tallies.set(n, count)
                if count == 0:
                    for child in n.children:
                        activate(child)
        for n in self.dag.children[literal]:
            activate(n)
        return Context(self.dag, tallies)
```
In reality, there will be some extra wrinkles pertaining to the more complex real-world structure of literals, property settings, specificities, rule-activated fact augmentations, and so on. But it's all fairly straightforward after this.
```
dag = build_dag(forms('a b c', 'a b', 'b c efg', 'b d ef', 'b'))
root = Context(dag)
c = root.augment('b')
```
Let's take a look at the state of a graph in context as well:
```
draw(dag, c.tallies)
draw(dag, c.augment('a').tallies)
draw(dag, c.augment('e').tallies)
from functools import reduce
def augmentAll(c, ls):
return reduce(lambda c, l: c.augment(l), ls, c)
draw(dag, augmentAll(root, 'bdcg').tallies)
```
As a reminder, the rules for these examples were
    forms('a b c', 'a b', 'b c efg', 'b d ef', 'b')
so those activations are exactly as expected.
This example shows the necessity of tracking dirty literals:
```
draw(dag, augmentAll(root, 'bb').tallies)
```
Rather than forcing an indirection of every literal through a disjunction node, we could also just track dirty literals explicitly. But the current approach is probably better, since we need a node on which to hang settings and constraints in any case. And it's more consistent and simpler, even if it makes the pictures a little uglier.
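For comparison, explicit dirty-literal tracking would look roughly like this (a sketch, with a plain frozenset standing in for a persistent set):

```python
# Track already-applied literals explicitly: augmenting with a duplicate
# literal is detected up front, so nothing downstream is activated twice.
class DirtyContext:
    def __init__(self, seen=frozenset()):
        self.seen = seen

    def augment(self, literal):
        if literal in self.seen:
            return self, False              # duplicate: skip propagation
        return DirtyContext(self.seen | {literal}), True

ctx = DirtyContext()
ctx, fresh1 = ctx.augment('b')
ctx, fresh2 = ctx.augment('b')
print(fresh1, fresh2)  # True False
```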
### Getting to CNF
It may be more convenient to specify rules as arbitrary boolean expressions (as is allowed in CCS1). If so, we'll convert them to CNF, and can report an error if a formula expands beyond a fixed maximum size.
```
def merge(cons, *forms):
    return cons(*frozenset().union(*(f.elements() for f in forms)))

from collections import namedtuple
Expr = namedtuple('Expr', 'op children')

def to_cnf(expr, limit=100):
    if type(expr) == str:
        return formula({expr})
    if expr.op == Op.AND:
        return merge(Formula, *map(to_cnf, expr.children))
    elif expr.op == Op.OR:
        return expand(limit, *map(to_cnf, expr.children))

def expand(limit, *forms):
    if len(forms) == 1:
        return forms[0]
    else:
        first = forms[0].elements()
        rest = expand(limit, *forms[1:]).elements()
        if len(first) * len(rest) > limit:
            raise ValueError("Expanded form would have more than {} clauses. Consider stratifying this rule.".format(limit))
        cs = (merge(Clause, c1, c2) for c1 in first for c2 in rest)
        return Formula(*cs)
merge(Formula, *forms('a b c', 'b cd', 'b'))
to_cnf(Expr(Op.OR, [Expr(Op.AND, ['a', 'b']), Expr(Op.AND, ['c', 'd'])]))
to_cnf(Expr(Op.AND, [Expr(Op.AND, ['a', 'b']), Expr(Op.AND, ['c', 'd'])]))
to_cnf(Expr(Op.AND, [Expr(Op.AND, ['a', 'b']), Expr(Op.OR, ['c', 'd'])]))
to_cnf(Expr(Op.OR, [Expr(Op.AND, ['a', 'b', 'c']), Expr(Op.AND, ['d', 'e', 'f']), Expr(Op.AND, ['g', 'h', 'i'])]))
```
We can allow the user to override a default limit to allow exploding expansions up to whatever size they really need.
```
try:
    to_cnf(Expr(Op.OR, [Expr(Op.AND, ['a', 'b', 'c']), Expr(Op.AND, ['d', 'e', 'f']), Expr(Op.AND, ['g', 'h', 'i'])]), limit=20)
except ValueError as e:
    print(e)
```
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/1_getting_started_roadmap/1_getting_started_with_monk/3)%20Dog%20Vs%20Cat%20Classifier%20Using%20Keras%20Backend.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Table of Contents
## [Install Monk](#0)
## [Importing pytorch backend](#1)
## [Creating and Managing experiments](#2)
## [Training a Cat Vs Dog image classifier](#3)
## [Validating the trained classifier](#4)
## [Running inference on test images](#5)
<a id='0'></a>
# Install Monk
## Using pip (Recommended)
- colab (gpu)
    - All backends: `pip install -U monk-colab`
- kaggle (gpu)
    - All backends: `pip install -U monk-kaggle`
- cuda 10.2
    - All backends: `pip install -U monk-cuda102`
    - Gluon backend: `pip install -U monk-gluon-cuda102`
    - Pytorch backend: `pip install -U monk-pytorch-cuda102`
    - Keras backend: `pip install -U monk-keras-cuda102`
- cuda 10.1
    - All backends: `pip install -U monk-cuda101`
    - Gluon backend: `pip install -U monk-gluon-cuda101`
    - Pytorch backend: `pip install -U monk-pytorch-cuda101`
    - Keras backend: `pip install -U monk-keras-cuda101`
- cuda 10.0
    - All backends: `pip install -U monk-cuda100`
    - Gluon backend: `pip install -U monk-gluon-cuda100`
    - Pytorch backend: `pip install -U monk-pytorch-cuda100`
    - Keras backend: `pip install -U monk-keras-cuda100`
- cuda 9.2
    - All backends: `pip install -U monk-cuda92`
    - Gluon backend: `pip install -U monk-gluon-cuda92`
    - Pytorch backend: `pip install -U monk-pytorch-cuda92`
    - Keras backend: `pip install -U monk-keras-cuda92`
- cuda 9.0
    - All backends: `pip install -U monk-cuda90`
    - Gluon backend: `pip install -U monk-gluon-cuda90`
    - Pytorch backend: `pip install -U monk-pytorch-cuda90`
    - Keras backend: `pip install -U monk-keras-cuda90`
- cpu
    - All backends: `pip install -U monk-cpu`
    - Gluon backend: `pip install -U monk-gluon-cpu`
    - Pytorch backend: `pip install -U monk-pytorch-cpu`
    - Keras backend: `pip install -U monk-keras-cpu`
## Install Monk Manually (Not recommended)
### Step 1: Clone the library
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
### Step 2: Install requirements
- Linux
    - Cuda 9.0
        - `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt`
    - Cuda 9.2
        - `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt`
    - Cuda 10.0
        - `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt`
    - Cuda 10.1
        - `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt`
    - Cuda 10.2
        - `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt`
    - CPU (Non gpu system)
        - `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt`
- Windows
    - Cuda 9.0 (Experimental support)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt`
    - Cuda 9.2 (Experimental support)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt`
    - Cuda 10.0 (Experimental support)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt`
    - Cuda 10.1 (Experimental support)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt`
    - Cuda 10.2 (Experimental support)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt`
    - CPU (Non gpu system)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt`
- Mac
    - CPU (Non gpu system)
        - `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt`
- Misc
    - Colab (GPU)
        - `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt`
    - Kaggle (GPU)
        - `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt`
### Step 3: Add to system path (Required for every terminal or kernel run)
- `import sys`
- `sys.path.append("monk_v1/");`
<a id='1'></a>
# Imports
```
#Using keras backend
# When installed using pip
from monk.keras_prototype import prototype
# When installed manually (Uncomment the following)
#import os
#import sys
#sys.path.append("monk_v1/");
#sys.path.append("monk_v1/monk/");
#from monk.keras_prototype import prototype
```
<a id='2'></a>
# Creating and managing experiments
- Provide project name
- Provide experiment name
- Create a single project for each dataset
- Inside each project multiple experiments can be created
- Every experiment can have different hyper-parameters attached to it
```
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1");
```
### This creates files and directories as per the following structure
    workspace
        |
        |--------sample-project-1 (Project name can be different)
                    |
                    |
                    |-----sample-experiment-1 (Experiment name can be different)
                                |
                                |-----experiment-state.json
                                |
                                |-----output
                                        |
                                        |------logs (All training logs and graphs saved here)
                                        |
                                        |------models (all trained models saved here)
<a id='3'></a>
# Training a Cat Vs Dog image classifier
## Quick mode training
- Using Default Function
- dataset_path
- model_name
- num_epochs
## Dataset folder structure
    parent_directory
        |
        |
        |------cats
        |        |
        |        |------img1.jpg
        |        |------img2.jpg
        |        |------.... (and so on)
        |------dogs
                 |
                 |------img1.jpg
                 |------img2.jpg
                 |------.... (and so on)
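Before training, a quick sanity check of the layout above can save a confusing error later. This small helper (hypothetical, not part of Monk) just counts files per class directory:

```python
# Count images per class sub-directory of a dataset root laid out as above.
import os

def class_counts(parent_dir):
    counts = {}
    for cls in sorted(os.listdir(parent_dir)):
        cls_dir = os.path.join(parent_dir, cls)
        if os.path.isdir(cls_dir):
            counts[cls] = len(os.listdir(cls_dir))
    return counts
```

For the structure above, `class_counts("parent_directory")` would return a dict mapping `"cats"` and `"dogs"` to their image counts.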
```
# Download dataset
import os
if not os.path.isfile("datasets.zip"):
    os.system("! wget --load-cookies /tmp/cookies.txt \"https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1rG-U1mS8hDU7_wM56a1kc-li_zHLtbq2' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1rG-U1mS8hDU7_wM56a1kc-li_zHLtbq2\" -O datasets.zip && rm -rf /tmp/cookies.txt")
if not os.path.isdir("datasets"):
    os.system("! unzip -qq datasets.zip")

gtf.Default(dataset_path="datasets/dataset_cats_dogs_train",
            model_name="resnet50",
            num_epochs=5);
#Read the summary generated once you run this cell.
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
```
<a id='4'></a>
# Validating the trained classifier
## Load the experiment in validation mode
- Set flag eval_infer as True
```
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1", eval_infer=True);
```
## Load the validation dataset
```
gtf.Dataset_Params(dataset_path="datasets/dataset_cats_dogs_eval");
gtf.Dataset();
```
## Run validation
```
accuracy, class_based_accuracy = gtf.Evaluate();
```
<a id='5'></a>
# Running inference on test images
## Load the experiment in inference mode
- Set flag eval_infer as True
```
gtf = prototype(verbose=1);
gtf.Prototype("sample-project-1", "sample-experiment-1", eval_infer=True);
```
## Select image and Run inference
```
img_name = "datasets/dataset_cats_dogs_test/0.jpg";
predictions = gtf.Infer(img_name=img_name);
#Display
from IPython.display import Image
Image(filename=img_name)
img_name = "datasets/dataset_cats_dogs_test/90.jpg";
predictions = gtf.Infer(img_name=img_name);
#Display
from IPython.display import Image
Image(filename=img_name)
```
# Market Basket Analysis
- Construct association rules
- Identify items purchased together
## Association Rules
- (Antecedent) => (Consequent)
- e.g. (Fiction) => (Biography) means that customers who buy Fiction also tend to buy Biography
```
import pandas as pd
import numpy as np
books = pd.read_csv(
"https://assets.datacamp.com/production/repositories/5654/datasets/1e5276e5a24493ec07b44fe99b46984f2ffa4488/bookstore_transactions.csv"
)
books.head()
# Convert strings to list
txl = books["Transaction"].apply(lambda x: x.split(","))
tx = list(txl)
tx
```
### Identifying Association Rules
- Set of all possible rules is large
- Most rules aren't useful
- **Restrict the set of rules**
```
# Get the number of genres
from itertools import chain
from collections import Counter
flatten = list(chain.from_iterable(tx))
categories = [*set(flatten)]
print(Counter(flatten))
print("Num of categories:", len(categories))
# Generating rules
from itertools import permutations
rules = list(permutations(categories, 2))
print(rules)
print("Num of rules:", len(rules))
```
## Computing different metrics
### Support metric
- Given a rule R
- num of transactions with R / num of all transactions
```
# Support metric for ("History", "Bookmark")
tx = [tuple(li) for li in tx] # replace lists with tuples to use Counter
count_tx = Counter(tx)
all_tx = len(tx)
count_tx[("History", "Bookmark")] / all_tx
```
### Computing that metric with pandas & numpy
```
from mlxtend.preprocessing import TransactionEncoder
encoder = TransactionEncoder().fit(tx)
# one-hot encode
onehot = encoder.transform(tx)
# convert OHE list-of-lists to DF
df = pd.DataFrame(onehot, columns=encoder.columns_)
df.head()
# Support for each single item
df.mean()
# Support for multiple items
df["fiction+poetry"] = np.logical_and(df["Fiction"], df["Poetry"])
df["history+biography"] = np.logical_and(df["History"], df["Biography"])
df["history+bookmark"] = np.logical_and(df["History"], df["Bookmark"])
df.mean()
```
### Confidence and Lift
They **refine** the support metric
#### When support is misleading
| TID | Transactions |
| --- | ------------ |
| 1 | Coffee, Milk
| 2 | Bread, Milk, Orange
| 3 | Bread, Milk
| 4 | Bread, Milk, Sugar
| 5 | Bread, Jam, Milk
- Milk and Bread are purchased together, so Milk -> Bread
- The rule above **is not informative for marketing**
- Milk and bread are both popular items, **coexistence does not necessarily imply association**
### Confidence
X => Y: Support(X and Y) / Support(X)
**Shows likelihood of people buying Y, given they bought X**
For the table above:
Confidence(Milk, Coffee) = 0.2 / 1 = 0.2
Low likelihood of association: Milk => Coffee
Confidence(Coffee, Milk) = 0.2 / 0.2 = 1
High likelihood of association: Coffee => Milk
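These numbers can be reproduced directly from the toy table (a small sketch; `toy` just re-types the five transactions above):

```python
# Reproduce the confidence calculations for the toy transaction table.
toy = [
    {"Coffee", "Milk"},
    {"Bread", "Milk", "Orange"},
    {"Bread", "Milk"},
    {"Bread", "Milk", "Sugar"},
    {"Bread", "Jam", "Milk"},
]

def support(items):
    # fraction of transactions containing all of `items`
    return sum(items <= t for t in toy) / len(toy)

def confidence(x, y):
    # likelihood of buying y, given x was bought
    return support({x, y}) / support({x})

print(confidence("Milk", "Coffee"))  # 0.2
print(confidence("Coffee", "Milk"))  # 1.0
```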
### Lift
X => Y: Support(X and Y) / (Support(X) * Support(Y))
**real X and Y / expected X and Y**
**Lift < 1 means X and Y are paired less frequently than expected under random pairing**
Expected = Condition where variables are independent
Lift >= 1 is good
For the table above:
Lift(Coffee, Milk) = 0.2 / (0.2 * 1) = 1
Association exists but the demand is just "sufficient"
#### Case Study: Support, Confidence and Lift
**Given:**
- Support(Hunger Games, Harry Potter) = 0.12
- Support(Hunger Games, Twilight) = 0.09
- Support(Harry Potter, Twilight) = 0.14
- Support(Harry Potter) = 0.477
- Support(Twilight) = 0.256
- Confidence(Potter, Twilight) = 0.29
- Confidence(Twilight, Potter) = 0.55
- Lift(Potter, Twilight) = 1.15
**Inferences:**
- Harry Potter is more popular than Twilight
- Potter people mostly don't like Twilight but some Twilight people like Potter
- Demand to buy Potter and Twilight together is stronger than expected
Twilight => Potter
### Leverage
X => Y: Support(X and Y) - Support(X) * Support(Y)
**real X and Y - expected X and Y**
**Leverage > 0 means the association creates some surplus value**
Leverage >= 0 is good
Range is -1 to 1, while Lift's range is 0 to infinity
For the table above:
Leverage(Coffee, Milk) = 0.2 - 0.2 = 0
Coffee and Milk has no leverage, association exists but the demand is just sufficient
### Conviction
X => Y: (Support(X) * Support(~Y)) / Support(X and ~Y)
OR
X => Y: (1 - Support(Y)) / (1 - Confidence(X, Y))
The ratio between the expected frequency of X appearing without Y (if X and Y were independent) and the actual frequency of X appearing without Y
**Shows Y's dependence on X**
**Conviction(Apple, Pear) = 1.01 means the rule Apple => Pear is incorrect 1% more often than expected (if variables were independent)**
For the table above:
Conviction(Coffee, Milk) = 0 / 0 = NaN
Conviction(Milk, Coffee) = 0.8 / 0.8 = 1
Rule (Milk => Coffee) is incorrect %0 more often than expected
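The lift, leverage, and conviction numbers above can likewise be checked against the toy table (a self-contained sketch; `toy` re-types the five transactions):

```python
# Reproduce lift, leverage, and conviction for the toy transaction table.
toy = [
    {"Coffee", "Milk"},
    {"Bread", "Milk", "Orange"},
    {"Bread", "Milk"},
    {"Bread", "Milk", "Sugar"},
    {"Bread", "Jam", "Milk"},
]

def support(items):
    return sum(items <= t for t in toy) / len(toy)

def lift(x, y):
    return support({x, y}) / (support({x}) * support({y}))

def leverage(x, y):
    return support({x, y}) - support({x}) * support({y})

def conviction(x, y):
    confidence = support({x, y}) / support({x})
    return (1 - support({y})) / (1 - confidence)

print(lift("Coffee", "Milk"))        # 1.0
print(leverage("Coffee", "Milk"))    # 0.0
print(conviction("Milk", "Coffee"))  # 1.0
```

Note that `conviction("Coffee", "Milk")` is undefined here (confidence is 1, so the denominator is 0), matching the NaN worked out above.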
#### Case Study: Conviction
**Given:**
- Conviction(Twilight, Potter) = 1.16
- Conviction(Potter, Twilight) = 1.05
**Inferences:**
- Twilight => Potter is incorrect 16% more often, if variables were independent
- Potter => Twilight is incorrect 5% more often, if variables were independent
### Zhang's Metric - Association and Dissociation
`(Conf(A, B) - Conf(~A, B)) / max(Conf(A, B), Conf(~A, B))`
- Range -1 to 1, **1 is perfect association and -1 is perfect dissociation**
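A quick sketch of the metric on the toy table from the confidence section (comparing conf(A→B) against conf(~A→B)): Coffee and Milk come out at exactly 0, i.e. neither associated nor dissociated, consistent with their lift of 1.

```python
# Zhang's metric on the toy transaction table.
toy = [
    {"Coffee", "Milk"},
    {"Bread", "Milk", "Orange"},
    {"Bread", "Milk"},
    {"Bread", "Milk", "Sugar"},
    {"Bread", "Jam", "Milk"},
]

def support(items):
    return sum(items <= t for t in toy) / len(toy)

def zhang(a, b):
    conf_ab = support({a, b}) / support({a})
    # confidence of ~a -> b: support(~a and b) / support(~a)
    conf_nota_b = (support({b}) - support({a, b})) / (1 - support({a}))
    return (conf_ab - conf_nota_b) / max(conf_ab, conf_nota_b)

print(zhang("Coffee", "Milk"))  # 0.0
```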
## Overview of MBA
1. Generate large set of rules
2. Filter those rules with metrics
3. Apply intuition and common sense
# Aggregation and Pruning
- Aggregation: Putting items with similar names in categories
- Pruning: Remove items with poor performance
```
# Sales data of a supermarket
sales = [
    ["apple_f", "bread_f", "cola_b", "water_b", "catfood_pf", "newspaper_p"],
    ["apple_f", "steak_m", "catfood_pf", "newspaper_p"],
    ["apple_f", "bread_f", "pork_m", "catfood_pf", "newspaper_p"],
    ["cola_b", "steak_m", "catfood_pf", "newspaper_p"],
    ["bread_f", "cola_b", "catfood_pf", "newspaper_p"],
    ["cola_b", "water_b"],
    ["bread_f", "steak_m"],
]
from mlxtend.preprocessing import TransactionEncoder
encoder = TransactionEncoder().fit(sales)
# one-hot encode
onehot = encoder.transform(sales)
df = pd.DataFrame(onehot, columns=encoder.columns_)
df.head(10)
# Aggregation
# Use endswith so that e.g. "_p" does not also match "catfood_pf"
basic_food = [it for it in df.columns if it.endswith("_f")]
beverages = [it for it in df.columns if it.endswith("_b")]
meat = [it for it in df.columns if it.endswith("_m")]
printed = [it for it in df.columns if it.endswith("_p")]
pet_food = [it for it in df.columns if it.endswith("_pf")]
basic_food = df[basic_food]
beverages = df[beverages]
meat = df[meat]
printed = df[printed]
pet_food = df[pet_food]
basic_food = (basic_food.sum(axis=1) > 0.0).values
beverages = (beverages.sum(axis=1) > 0.0).values
meat = (meat.sum(axis=1) > 0.0).values
printed = (printed.sum(axis=1) > 0.0).values
pet_food = (pet_food.sum(axis=1) > 0.0).values
df_agg = pd.DataFrame(
    np.vstack([basic_food, beverages, meat, printed, pet_food]).T,
    columns=["basic_food", "beverages", "meat", "printed", "pet_food"]
)
df_agg.head(10)
```
## The Apriori Algorithm
- Number of possible rules goes up as the dataset grows
- Can't consider every item set of length L in a database with 3461 items - **a very large number**
### Apriori Principle
- **Subsets of frequent sets are frequent**
- Retain frequent items (items that exceed some level of support)
- If candles are infrequent, any set including candles are also infrequent
- Retain frequent sets
- Prune infrequent sets
- **Does not tell about association rules**
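As a minimal, mlxtend-free illustration of the principle (toy data and a hypothetical support threshold): candidate pairs are generated only from items that are themselves frequent, which is exactly the pruning Apriori performs.

```python
# Apriori principle in miniature: an infrequent item cannot appear in any
# frequent set, so prune it before generating pair candidates.
from itertools import combinations
from collections import Counter

tx = [{"a", "b"}, {"a", "b", "c"}, {"a", "d"}, {"b", "c"}]
min_count = 2

item_counts = Counter(i for t in tx for i in t)
frequent_items = {i for i, c in item_counts.items() if c >= min_count}
# 'd' appears only once, so no pair containing 'd' is ever considered
candidates = combinations(sorted(frequent_items), 2)
frequent_pairs = {p for p in candidates
                  if sum(set(p) <= t for t in tx) >= min_count}
print(sorted(frequent_pairs))  # [('a', 'b'), ('b', 'c')]
```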
```
# Get frequent item sets with apriori algorithm
from mlxtend.frequent_patterns import apriori
frequent_itemsets = apriori(df_agg, min_support=0.1, use_colnames=True)
frequent_itemsets
# Get association rules
from mlxtend.frequent_patterns import association_rules
rules = association_rules(
frequent_itemsets,
metric="support",
min_threshold=0.5,
)
rules
# Filtering with Zhang's rule
def conf(sup, sup_antecedent):
    return sup / sup_antecedent

def zhang(rules):
    # given a -> b
    sup = np.array(rules["support"])
    ant_sup = np.array(rules["antecedent support"])
    con_sup = np.array(rules["consequent support"])
    conf_ab = conf(sup, ant_sup)
    # confidence of ~a -> b: support(~a and b) / support(~a)
    conf_nota_b = conf(con_sup - sup, 1 - ant_sup)
    return (conf_ab - conf_nota_b) / np.maximum(conf_ab, conf_nota_b)
rules["zhang"] = zhang(rules)
# Filtering rules
filtered = rules[
    (rules["leverage"] > 0.10) &
    (rules["confidence"] > 0.75)
]
filtered
```
```
import numpy as np
import scipy.linalg
import scipy.sparse
import scs
#############################################
# Generate random cone problems #
#############################################
def pos(x):
    return (x + np.abs(x)) / 2.

def gen_feasible(m, n, p_scale=0.1):
    z = np.random.randn(m)
    y = pos(z)
    s = y - z
    P = np.random.randn(n, n)
    P = p_scale * P.T @ P
    # Make problem slightly more numerically challenging:
    A = np.random.randn(m, n)
    U, S, V = np.linalg.svd(A, full_matrices=False)
    S = S**2
    S /= np.max(S)
    A = (U * S) @ V
    x = np.random.randn(n)
    c = -A.T @ y - P @ x
    b = A.dot(x) + s
    #b /= np.linalg.norm(b)
    #x /= np.linalg.norm(b)
    #c /= np.linalg.norm(c)
    #y /= np.linalg.norm(c)
    return (P, A, b, c, x, y)
def gen_infeasible(m, n, p_scale=0.1):
    # b'y < 0, A'y == 0
    z = np.random.randn(m)
    y = pos(z)  # y = s - z;
    A = np.random.randn(m, n)
    # A := A - y(A'y)' / y'y
    A = A - np.outer(y, np.transpose(A).dot(y)) / np.linalg.norm(y)**2
    b = np.random.randn(m)
    b = -b / np.dot(b, y)
    P = np.random.randn(n, n)
    P = p_scale * P.T @ P
    return (P, A, b, np.random.randn(n))
def gen_unbounded(m, n, p_scale=0.1):
    # c'x < 0, Ax + s = 0, Px = 0
    z = np.random.randn(m)
    s = pos(z)
    P = np.random.randn(n, n)
    P = p_scale * P.T @ P
    eigs, V = np.linalg.eig(P)
    i = np.argmin(eigs)
    eigs[i] = 0
    x = V[:, i]
    # Px = 0
    P = (V * eigs) @ V.T
    P = 0.5 * (P + P.T)
    A = np.random.randn(m, n)
    # A := A - (s + Ax)x' / x'x ===> Ax + s == 0
    A = A - np.outer(s + A.dot(x), x) / np.linalg.norm(x)**2
    c = np.random.randn(n)
    c = -c / np.dot(c, x)
    return (P, A, np.random.randn(m), c)
def is_optimal(P, A, b, c, x, y, tol=1e-6):
    s = b - A @ x
    if (np.linalg.norm(P @ x + c + A.T @ y) < tol and
            np.abs(y.T @ s) < tol and
            np.linalg.norm(s - pos(s)) < tol and
            np.linalg.norm(y - pos(y)) < tol):
        return True
    return False

def is_infeasible(A, b, y, tol=1e-6):
    if b.T @ y >= 0:
        return False
    y_hat = y / np.abs(b.T @ y)
    if (np.linalg.norm(y_hat - pos(y_hat)) < tol and np.linalg.norm(A.T @ y_hat) < tol):
        return True
    return False

def is_unbounded(P, A, c, x, tol=1e-6):
    if c.T @ x >= 0:
        return False
    x_hat = x / np.abs(c.T @ x)
    if np.linalg.norm(P @ x_hat) < tol and np.linalg.norm(A @ x_hat + pos(-A @ x_hat)) < tol:
        return True
    return False
# ''linear'' projection logic
class LinearProjector(object):
    def __init__(self, P, A, b, c):
        (m, n) = A.shape
        self.A = A
        self.h = np.hstack((c, b))
        self.L = scipy.linalg.cho_factor(np.eye(n) + P + A.T @ A)
        self.g = self._solve(self.h)
    def _solve(self, v):
        (m, n) = self.A.shape
        sol = np.zeros(n + m,)
        sol[:n] = scipy.linalg.cho_solve(self.L, v[:n] - self.A.T @ v[n:])
        sol[n:] = v[n:] + self.A @ sol[:n]
        return sol
    def project(self, w):
        g = self.g
        p = self._solve(w[:-1])
        _a = 1 + g.T @ g
        _b = w[:-1].T @ g - 2 * p.T @ g - w[-1]
        _c = p.T @ p - w[:-1].T @ p
        tau = (-_b + np.sqrt(_b ** 2 - 4 * _a * _c)) / 2 / _a
        return np.hstack((p - tau * g, tau))
# Douglas-Rachford splitting / ADMM for QP
def solve_qp_dr(P, A, b, c, N=1000, tol=1e-6):
    (m, n) = np.shape(A)
    lp = LinearProjector(P, A, b, c)
    u = np.zeros(n + m + 1,)
    #u[:n+m] = np.hstack((c, b))
    u[-1] = 1.
    def proj_cone(v):
        v[n:] = np.maximum(v[n:], 0.)
        return v
    use_dr = True  # slightly different DR vs ADMM
    for i in range(N):
        # for DR ut converges to sol *not* u, see Patrinos, Stella, Bemporad, 2014
        # v - ut go to zero
        dr_lam = 0.5  # \in [0,1], 0.5 = DR, 1.0 = PR
        ut = lp.project(u)
        v = proj_cone(2 * ut - u)
        u += 2 * dr_lam * (v - ut)
        x = ut[:n] / ut[-1]
        y = ut[n:-1] / ut[-1]
        if (is_optimal(P, A, b, c, x, y, tol=tol) or
                is_infeasible(A, b, y, tol=tol) or
                is_unbounded(P, A, c, x, tol=tol)):
            break
    print(i)
    return x, y
m = 1500
n = 1000
N = int(5e3)
seed = 1234
np.random.seed(seed)
(P, A, b, c) = gen_unbounded(m, n)
#P = P + 1e-7 * np.eye(n)
(x,y) = solve_qp_dr(P, A, b, c, N=N)
probdata = dict(P=scipy.sparse.csc_matrix(P), A=scipy.sparse.csc_matrix(A), b=b, c=c)
cone = dict(l=m)
sol = scs.solve(probdata, cone, normalize=True, max_iters=10 * N, acceleration_lookback = 10, eps=1e-6)
print(c.T @ sol['x'])
print(np.linalg.norm(c) * np.linalg.norm(A @ sol['x'] + sol['s']))
print(np.sqrt(max(0., sol['x'].T @ P @ sol['x'])))
print(np.linalg.norm(P - P.T))
np.random.seed(seed)
(P, A, b, c) = gen_infeasible(m, n)
#P = P + np.eye(n)
(x,y) = solve_qp_dr(P, A, b, c, N=N)
probdata = dict(P=scipy.sparse.csc_matrix(P), A=scipy.sparse.csc_matrix(A), b=b, c=c)
cone = dict(l=m)
sol = scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 10, eps=1e-6)
print(b.T @ sol['y'])
print(np.linalg.norm(b) * np.linalg.norm(A.T @ sol['y']))
np.random.seed(seed)
(P, A, b, c, _x, _y) = gen_feasible(m, n, p_scale=0.1)
#(_x,_y) = solve_qp_dr(P, A, b, c, N=int())
probdata = dict(P=scipy.sparse.csc_matrix(P), A=scipy.sparse.csc_matrix(A), b=b, c=c)
cone = dict(l=m)
sol = scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 5, eps=1e-9, scale=0.1)
x = sol['x']
y = sol['y']
s = sol['s']
print(_x.T@ P @ _x / 2 + c.T @ _x)
print(x.T@ P @ x / 2 + c.T @ x)
print(np.linalg.norm(x - _x))
print(np.linalg.norm(y - _y))
#x = _x
#y = _y
#s = b - A @ _x
print(np.linalg.norm(A @ x + s - b) / (1+np.linalg.norm(b)))
print(np.linalg.norm(P@x + A.T @ y + c) / (1+np.linalg.norm(c)))
print(abs(x.T@ P @ x + c.T @ x + b.T @ y) / (1 + abs(x.T@P @ x) + abs(c.T @ x) + abs(b.T @ y)))
np.random.seed(seed)
(P, A, b, c, _, _) = gen_feasible(m, n, p_scale=0.)
#(x,y) = solve_qp_dr(P, A, b, c, N=N)
#print(c.T @ x)
#print(x)
sol = scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 0, eps=1e-6, use_indirect=False)
print(c.T @ sol['x'])
probdata = dict(P=None, A=scipy.sparse.csc_matrix(A), b=b, c=c)
cone = dict(l=m)
sol = scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 0, eps=1e-6, use_indirect=True)
print(c.T @ sol['x'])
np.random.seed(seed)
(P, A, b, c, _, _) = gen_feasible(m, n, 0.001)
print(np.linalg.norm(A.flatten(), np.inf))
print(np.linalg.norm(P.flatten(), np.inf))
(x,y) = solve_qp_dr(P, A, b, c, N=N)
print(x)
print(y)
np.random.seed(seed)
(P, A, b, c, _, _) = gen_feasible(m, n)
probdata = dict(P=scipy.sparse.csc_matrix(P), A=scipy.sparse.csc_matrix(A), b=b, c=c)
cone = dict(l=m)
scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 5, eps=1e-6, scale=1)
scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 5, eps=1e-6, use_indirect=True, scale=1)
scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 0, eps=1e-6, use_indirect=True, scale=1)
np.random.seed(seed)
(P, A, b, c, _, _) = gen_feasible(m, n, p_scale=0.)
(x,y) = solve_qp_dr(P, A, b, c, N=N)
print(x)
probdata = dict(P=scipy.sparse.csc_matrix(P), A=scipy.sparse.csc_matrix(A), b=b, c=c)
cone = dict(l=m)
scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 10, eps=1e-6)
probdata = dict(P=None, A=scipy.sparse.csc_matrix(A), b=b, c=c)
cone = dict(l=m)
scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 10, eps=1e-6)
sol = scs.solve(probdata, cone, normalize=False, max_iters=N, acceleration_lookback = 0, eps=1e-6, use_indirect=True)
print(c.T @ sol['x'])
probdata = dict(P=None, A=scipy.sparse.csc_matrix(A), b=b, c=c)
cone = dict(l=m)
sol = scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 10, eps=1e-6, use_indirect=True)
print(c.T @ sol['x'])
np.random.seed(seed)
(P, A, b, c, _, _) = gen_feasible(m, n, p_scale=0.)
print(np.linalg.norm(A))
np.random.seed(seed)
(P, A, b, c, _, _) = gen_feasible(m, n, p_scale=0.)
(x,y) = solve_qp_dr(P, A, b, c, N=N)
print(c.T @ x)
print(x)
probdata = dict(P=scipy.sparse.csc_matrix(P), A=scipy.sparse.csc_matrix(A), b=b, c=c)
cone = dict(l=m)
scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 10, eps=1e-6, use_indirect=True)
probdata = dict(P=None, A=scipy.sparse.csc_matrix(A), b=b, c=c)
cone = dict(l=m)
sol = scs.solve(probdata, cone, normalize=True, max_iters=N, acceleration_lookback = 10, eps=1e-6, use_indirect=True)
print(c.T @ sol['x'])
```
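The project/reflect/average update in `solve_qp_dr` is generic Douglas-Rachford splitting. As a sanity check of just that pattern (a toy sketch, not the QP solver above), here it finds a point in the intersection of a line and a box:

```python
import numpy as np

def proj_line(v):
    # projection onto the line {x : x[0] == x[1]}
    m = v.mean()
    return np.array([m, m])

def proj_box(v):
    # projection onto the box [1, 2] x [1, 2]
    return np.clip(v, 1.0, 2.0)

u = np.array([5.0, -3.0])
dr_lam = 0.5  # 0.5 = plain Douglas-Rachford, as in solve_qp_dr
for _ in range(200):
    ut = proj_line(u)            # first projection
    v = proj_box(2 * ut - u)     # reflect through ut, then second projection
    u += 2 * dr_lam * (v - ut)   # averaged update of the governing sequence
x = proj_line(u)  # ut (not u) is the convergent iterate
```

After a few hundred iterations `x` lies on the segment {(t, t) : 1 <= t <= 2}, illustrating the comment in the solver that `ut`, not `u`, converges to the solution.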
#### Setup Notebook
```
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
# Put these at the top of every notebook, to get automatic reloading and inline plotting
%reload_ext autoreload
%autoreload 2
%matplotlib inline
```
# Predicting Price Movements of Cryptocurrencies - Using Convolutional Neural Networks to Classify 2D Images of Chart Data
```
# This file contains all the main external libs we'll use
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
# For downloading files
from IPython.display import FileLink, FileLinks
# For confusion matrix
from sklearn.metrics import confusion_matrix
from fastai.dataset import read_dir
import pandas as pd
from os import listdir
from os.path import isfile, join, isdir
from sklearn.metrics import confusion_matrix
from mypy.progress_bar import log_progress
PATH = 'data/btc/btcgraphs_cropped/'
TEST_DIR = ['test/', 'test_cb_new/', 'test_poloniex/']
```
# Data
```
!ls {PATH}
os.listdir(f'{PATH}train')
files = os.listdir(f'{PATH}train/DOWN')[:5]
files
img = plt.imread(f'{PATH}train/DOWN/{files[3]}')
print(f'{PATH}train/DOWN/{files[0]}')
print(f'{PATH}train/DOWN/{files[1]}')
plt.imshow(img)
FileLink(f'{PATH}train/DOWN/{files[3]}')
```
# The Steps to Follow
1. Enable data augmentation, and precompute=True
1. Use `lr_find()` to find highest learning rate where loss is still clearly improving
1. Train last layer from precomputed activations for 1-2 epochs
1. Train last layer with data augmentation (i.e. precompute=False) for 2-3 epochs with cycle_len=1
1. Unfreeze all layers
1. Set earlier layers to 3x-10x lower learning rate than next higher layer
1. Use `lr_find()` again
1. Train full network with cycle_mult=2 until over-fitting
## 0. Setup
```
arch = resnet34
sz = 480
batch_size = int(64)
```
## 1. Data Augmentation
**Not using data augmentation this time**
Starting without data augmentation because I don't think it makes sense for these graphs: we don't need to generalize to slightly different angles, since all plots will always be straight-on and square in the frame.
```
tfms = tfms_from_model(arch, sz)
data = ImageClassifierData.from_paths(PATH, bs=batch_size, tfms=tfms,
trn_name='train', val_name='valid', test_name='test')
```
## 2. Choose a Learning Rate
This first pretraining was done with only 500,000 of the 1,000,000 train/valid images.
```
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.save('00_pretrained_480')
# learn.precompute = True
learn.load('00_pretrained_480')
lrf = learn.lr_find()
```
#### Plot learning rate
```
learn.sched.plot_lr()
learn.sched.plot()
learn.save('01_2_480')
```
## 3. Train Last Layer
```
# learn.precompute = True
learn.load('01_2_480')
learn.fit(1e-4, 1, cycle_save_name='01_weights')
learn.save("02_trained_once_480")
```
TODO: Do some tests on the accuracy of training on a single epoch.
## 4. Train Last Layer with Data Augmentation
**Not actually using any augmentation; this is just a few more rounds of training.**
```
learn.precompute = True
learn.load("02_trained_once_480")
```
**TODO**
Load the entire 1,000,000 images.
```
# data = ImageClassifierData.from_paths(PATH, bs=batch_size, tfms=tfms,
# trn_name='train', val_name='valid')#, test_name='test')
# learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.precompute = False  # I don't think this makes a difference without data augmentation
learn.fit(1e-4, 3, cycle_len=1, cycle_save_name='02_weights')
learn.save("03_trained_2x_480")
# learn.precompute = False
learn.load("03_trained_2x_480")
```
More accuracy tests...
## Accuracy Test
```
data2 = ImageClassifierData.from_paths(PATH, bs=batch_size, tfms=tfms,
trn_name='train', val_name='valid', test_name='test')
learn2 = learn
learn2.set_data(data2)
log_preds = learn2.predict(is_test=True)
ans = pd.read_csv(f'{PATH}test_ans2.csv')
is_up = ans['up']
log_preds.shape
log_preds
PATH
lp = pd.DataFrame(log_preds)
lp.to_csv(f'{PATH}log_preds.csv')
log_preds
```
## 4.5
This is where I pick back up to train the whole model.
```
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.precompute = False
learn.load("03_trained_2x_480")
data_half = ImageClassifierData.from_paths(PATH, bs=int(batch_size/2/2), tfms=tfms,
trn_name='train_half', val_name='valid_half', test_name='test')
learn.set_data(data_half)
```
## 5. Unfreeze Earlier Layers
```
learn.unfreeze()
```
## 6. Choose Learning Rate for Early Layers
**3x-10x lower learning rate than next higher layer**
Using a relatively large learning rate to train the previous layers because this data set is not very similar to ImageNet. This is why I chose 3x rather than 10x.
```
lr = np.array([0.0001/9, 0.0001/3, 0.00001])
```
## 7. Use `lr_find()` Again
```
lrf2 = learn.lr_find()
learn.sched.plot_lr()
learn.sched.plot()
```
## 8. Train Full Network
```
learn.fit(lr, 3, cycle_len=1, cycle_mult=2, cycle_save_name='03_weights')
learn.save("04_fully_trained_480")
learn.load("04_fully_trained_480")
```
## 8.5 Loading Fully Trained Model
```
lr = np.array([0.0001/9, 0.0001/3, 0.00001])
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.precompute = False
# learn.unfreeze()
learn.load("04_fully_trained_480")
data_test = ImageClassifierData.from_paths(PATH, bs=int(batch_size), tfms=tfms,
trn_name='train', val_name='valid', test_name='test')
data_test_cb = ImageClassifierData.from_paths(PATH, bs=int(batch_size), tfms=tfms,
trn_name='train', val_name='valid', test_name='test_cb_new')
data_test_poloniex = ImageClassifierData.from_paths(PATH, bs=int(batch_size), tfms=tfms,
trn_name='train', val_name='valid', test_name='test_poloniex')
learn.set_data(data_test_cb)
log_preds_cb = learn.predict(is_test=True)
learn.set_data(data_test_poloniex)
log_preds_poloniex = learn.predict(is_test=True)
learn.set_data(data_test)
log_preds = learn.predict(is_test=True)
```
# Look at Results
...
```
f_cb = []
ans_cb = []
ans_poloniex = []
source_dir_cb = f'{PATH}test_cb_new/'
source_dir_poloniex = f'{PATH}test_poloniex/'
for f in listdir(source_dir_cb):
if isfile(join(source_dir_cb, f)):
f_cb.append(f)
if f[0:2] == 'UP':
ans_cb.append(1)
else:
ans_cb.append(0)
for f in listdir(source_dir_poloniex):
if isfile(join(source_dir_poloniex, f)):
if f[0:2] == 'UP':
ans_poloniex.append(1)
else:
ans_poloniex.append(0)
ans_cb = np.array(ans_cb)
ans_poloniex = np.array(ans_poloniex)
print(ans_cb[0:10])
for i in range(10):
print(str(ans_cb[i]) + " " + str(f_cb[i]))
```
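The class labels here come straight from the filename prefix (`UP...` maps to 1, anything else to 0). The same parsing in isolation, on hypothetical filenames:

```python
def label_from_filename(fname):
    # files are named e.g. 'UP_....png' or 'DOWN_....png';
    # the first two characters decide the class
    return 1 if fname[0:2] == 'UP' else 0

names = ['UP_0001.png', 'DOWN_0002.png', 'UP_0003.png']  # hypothetical
labels = [label_from_filename(f) for f in names]
```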
Load the answers to the original test set
```
ans = pd.read_csv(f'{PATH}test_ans2.csv')
ans.head()
#convert dataframe to matrix
conv_arr = ans.values
#split matrix into 3 columns each into 1d array
arr1 = np.delete(conv_arr,[1,2],axis=1)
# arr1
ans = arr1[:,1]
ans = np.array(ans, dtype=int)
# ans
```
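Since `test_ans2.csv` carries an `up` column (it is accessed as `ans['up']` in the Accuracy Test section above), the matrix delete/slice dance can also be written as a direct column read; a sketch using a hypothetical in-memory frame in place of the CSV:

```python
import pandas as pd

# stand-in for pd.read_csv(f'{PATH}test_ans2.csv') -- hypothetical contents
ans_df = pd.DataFrame({'file': ['a.png', 'b.png', 'c.png'],
                       'up':   [1, 0, 1]})
ans = ans_df['up'].to_numpy(dtype=int)  # 0 = DOWN, 1 = UP
```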
Turn all the predictions into 0 for DOWN, 1 for UP
```
preds_cb = np.argmax(log_preds_cb, axis=1)
probs_cb = np.exp(log_preds_cb[:,1])
preds_poloniex = np.argmax(log_preds_poloniex, axis=1)
probs_poloniex = np.exp(log_preds_poloniex[:,1])
preds = np.argmax(log_preds, axis=1)
probs = np.exp(log_preds[:,1])
```
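`predict` returns log-probabilities, so `argmax` along axis 1 yields the class index (0 = DOWN, 1 = UP) and `exp` of column 1 recovers the UP probability; a quick check on a small hypothetical array:

```python
import numpy as np

log_preds = np.log(np.array([[0.9, 0.1],
                             [0.2, 0.8]]))  # hypothetical log-probabilities
preds = np.argmax(log_preds, axis=1)  # predicted class per row
probs = np.exp(log_preds[:, 1])       # probability of UP per row
```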
# Analyze Results
```
data.classes
cm_cb = confusion_matrix(ans_cb, preds_cb)
cm_poloniex = confusion_matrix(ans_poloniex, preds_poloniex)
cm = confusion_matrix(ans, preds)
plot_confusion_matrix(cm, data.classes)
plot_confusion_matrix(cm_cb, data.classes)
plot_confusion_matrix(cm_poloniex, data.classes)
# cm_cb
# cm_poloniex
# cm
acc = round(((cm[0][0]+cm[1][1])/(np.sum(cm))), 4)
acc_cb = round(((cm_cb[0][0]+cm_cb[1][1])/(np.sum(cm_cb))), 4)
acc_poloniex = round(((cm_poloniex[0][0]+cm_poloniex[1][1])/(np.sum(cm_poloniex))), 4)
print("Accuracy on the original coinbase data:\n" + str(acc))
print("Accuracy on the new coinbase data:\n" + str(acc_cb))
print("Accuracy on the poloniex data:\n" + str(acc_poloniex))
```
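The accuracy expression `(cm[0][0] + cm[1][1]) / np.sum(cm)` is simply the trace of the 2x2 confusion matrix over its total; a minimal check on a hypothetical matrix:

```python
import numpy as np

cm = np.array([[40, 10],   # hypothetical confusion matrix: rows = truth,
               [ 5, 45]])  # columns = prediction
acc = round((cm[0][0] + cm[1][1]) / np.sum(cm), 4)
acc_trace = round(np.trace(cm) / cm.sum(), 4)  # equivalent form
```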
# Test on Stocks
Testing the model on 5-minute stock data, looking at the past 100 days (3908 5-min periods) for the following stocks:
- AAPL
- GOOG
- MSFT
- FB
## Load Data
```
stock_names = ['aapl', 'fb', 'goog', 'msft']
stock_names = stock_names + ['aapl2', 'fb2', 'goog2', 'msft2']
for sname in log_progress(stock_names):
var_name = "test_"+sname
exec("data_" + sname + ' = ImageClassifierData.from_paths(PATH, bs=int(batch_size), tfms=tfms, \
trn_name="train", val_name="valid", test_name="test_' + sname + '")')
```
## Predict Price Movements
Google
```
learn.set_data(data_goog)
log_preds_goog = learn.predict(is_test=True)
```
Apple
```
learn.set_data(data_aapl)
log_preds_aapl = learn.predict(is_test=True)
```
Facebook
```
learn.set_data(data_fb)
log_preds_fb = learn.predict(is_test=True)
```
Microsoft
```
learn.set_data(data_msft)
log_preds_msft = learn.predict(is_test=True)
```
Everything 2
```
learn.set_data(data_msft2)
log_preds_msft2 = learn.predict(is_test=True)
learn.set_data(data_aapl2)
log_preds_aapl2 = learn.predict(is_test=True)
learn.set_data(data_goog2)
log_preds_goog2 = learn.predict(is_test=True)
learn.set_data(data_fb2)
log_preds_fb2 = learn.predict(is_test=True)
```
### Check the Accuracies
#### Google
```
preds_goog = np.argmax(log_preds_goog, axis=1)
probs_goog = np.exp(log_preds_goog[:,1])
ans_goog = []
f_goog = []
source_dir = f'{PATH}test_goog/'
for f in listdir(source_dir):
if isfile(join(source_dir, f)):
if f[0:2] == 'UP':
ans_goog.append(1)
else:
ans_goog.append(0)
ans_goog = np.array(ans_goog)
cm_goog = confusion_matrix(ans_goog, preds_goog)
# print the accuracy
acc_goog = round(((cm_goog[0][0]+cm_goog[1][1])/(np.sum(cm_goog))), 4)
print("Accuracy on the Google stock data:\n" + str(acc_goog))
# plot the confusion matrix
plot_confusion_matrix(cm_goog, data.classes)
```
#### Apple
```
preds_aapl = np.argmax(log_preds_aapl, axis=1)
probs_aapl = np.exp(log_preds_aapl[:,1])
ans_aapl = []
source_dir = f'{PATH}test_aapl/'
for f in listdir(source_dir):
if isfile(join(source_dir, f)):
if f[0:2] == 'UP':
ans_aapl.append(1)
else:
ans_aapl.append(0)
ans_aapl = np.array(ans_aapl)
cm_aapl = confusion_matrix(ans_aapl, preds_aapl)
# print the accuracy
acc_aapl = round(((cm_aapl[0][0]+cm_aapl[1][1])/(np.sum(cm_aapl))), 4)
print("Accuracy on the Apple stock data:\n" + str(acc_aapl))
# plot the confusion matrix
plot_confusion_matrix(cm_aapl, data.classes)
```
#### Facebook
```
preds_fb = np.argmax(log_preds_fb, axis=1)
probs_fb = np.exp(log_preds_fb[:,1])
ans_fb = []
source_dir = f'{PATH}test_fb/'
for f in listdir(source_dir):
if isfile(join(source_dir, f)):
if f[0:2] == 'UP':
ans_fb.append(1)
else:
ans_fb.append(0)
ans_fb = np.array(ans_fb)
cm_fb = confusion_matrix(ans_fb, preds_fb)
# print the accuracy
acc_fb = round(((cm_fb[0][0]+cm_fb[1][1])/(np.sum(cm_fb))), 4)
print("Accuracy on the Facebook stock data:\n" + str(acc_fb))
# plot the confusion matrix
plot_confusion_matrix(cm_fb, data.classes)
```
#### Microsoft
```
preds_msft = np.argmax(log_preds_msft, axis=1)
probs_msft = np.exp(log_preds_msft[:,1])
ans_msft = []
source_dir = f'{PATH}test_msft/'
for f in listdir(source_dir):
if isfile(join(source_dir, f)):
if f[0:2] == 'UP':
ans_msft.append(1)
else:
ans_msft.append(0)
ans_msft = np.array(ans_msft)
cm_msft = confusion_matrix(ans_msft, preds_msft)
# print the accuracy
acc_msft = round(((cm_msft[0][0]+cm_msft[1][1])/(np.sum(cm_msft))), 4)
print("Accuracy on the Microsoft stock data:\n" + str(acc_msft))
# plot the confusion matrix
plot_confusion_matrix(cm_msft, data.classes)
```
#### Microsoft 2
```
preds_msft2 = np.argmax(log_preds_msft2, axis=1)
probs_msft2 = np.exp(log_preds_msft2[:,1])
ans_msft2 = []
source_dir = f'{PATH}test_msft2/'
for f in listdir(source_dir):
if isfile(join(source_dir, f)):
if f[0:2] == 'UP':
ans_msft2.append(1)
else:
ans_msft2.append(0)
ans_msft2 = np.array(ans_msft2)
cm_msft2 = confusion_matrix(ans_msft2, preds_msft2)
# print the accuracy
acc_msft2 = round(((cm_msft2[0][0]+cm_msft2[1][1])/(np.sum(cm_msft2))), 4)
print("Accuracy on the Microsoft stock data:\n" + str(acc_msft2))
# plot the confusion matrix
plot_confusion_matrix(cm_msft2, data.classes)
preds_aapl2 = np.argmax(log_preds_aapl2, axis=1)
ans_aapl2 = []
source_dir = f'{PATH}test_aapl2/'
for f in listdir(source_dir):
if isfile(join(source_dir, f)):
if f[0:2] == 'UP':
ans_aapl2.append(1)
else:
ans_aapl2.append(0)
ans_aapl2 = np.array(ans_aapl2)
cm_aapl2 = confusion_matrix(ans_aapl2, preds_aapl2)
# print the accuracy
acc_aapl2 = round(((cm_aapl2[0][0]+cm_aapl2[1][1])/(np.sum(cm_aapl2))), 4)
print("Accuracy on the Apple stock data:\n" + str(acc_aapl2))
# plot the confusion matrix
plot_confusion_matrix(cm_aapl2, data.classes)
preds_goog2 = np.argmax(log_preds_goog2, axis=1)
ans_goog2 = []
source_dir = f'{PATH}test_goog2/'
for f in listdir(source_dir):
if isfile(join(source_dir, f)):
if f[0:2] == 'UP':
ans_goog2.append(1)
else:
ans_goog2.append(0)
ans_goog2 = np.array(ans_goog2)
cm_goog2 = confusion_matrix(ans_goog2, preds_goog2)
# print the accuracy
acc_goog2 = round(((cm_goog2[0][0]+cm_goog2[1][1])/(np.sum(cm_goog2))), 4)
print("Accuracy on the Google stock data:\n" + str(acc_goog2))
# plot the confusion matrix
plot_confusion_matrix(cm_goog2, data.classes)
preds_fb2 = np.argmax(log_preds_fb2, axis=1)
ans_fb2 = []
source_dir = f'{PATH}test_fb2/'
for f in listdir(source_dir):
if isfile(join(source_dir, f)):
if f[0:2] == 'UP':
ans_fb2.append(1)
else:
ans_fb2.append(0)
ans_fb2 = np.array(ans_fb2)
cm_fb2 = confusion_matrix(ans_fb2, preds_fb2)
# print the accuracy
acc_fb2 = round(((cm_fb2[0][0]+cm_fb2[1][1])/(np.sum(cm_fb2))), 4)
print("Accuracy on the Facebook stock data:\n" + str(acc_fb2))
# plot the confusion matrix
plot_confusion_matrix(cm_fb2, data.classes)
```
### Check Class Balances
Make sure that the up/down classes are about 50/50 for each stock.
```
print("AAPL: " + str(np.sum(ans_aapl) / len(ans_aapl)))
print("GOOG: " + str(np.sum(ans_goog) / len(ans_goog)))
print(" FB: " + str(np.sum(ans_fb) / len(ans_fb)))
print("MSFT: " + str(np.sum(ans_msft) / len(ans_msft)))
```
# Momentum Strategy Accuracies
### V1
```
from scipy.ndimage.interpolation import shift
ans_aapl_momentum = shift(ans_aapl, 1)
ans_aapl_momentum
ans_aapl
def count_match(a1, a2):
cnt = 0
for i in range(len(a1)):
if a1[i] == a2[i]:
cnt += 1
return cnt
m_aapl = count_match(ans_aapl, shift(ans_aapl, 1))
m_aapl/len(ans_aapl)
m_msft = count_match(ans_msft, shift(ans_msft, 1))
m_msft/len(ans_msft)
m_msft = count_match(ans_msft, ans_msft)
m_msft/len(ans_msft)
m_goog = count_match(ans_goog, shift(ans_goog, 1))
m_goog/len(ans_goog)
m_fb = count_match(ans_fb, shift(ans_fb, 1))
m_fb/len(ans_fb)
m_cb = count_match(ans_cb, shift(ans_cb, 1))
m_cb/len(ans_cb)
m_poloniex = count_match(ans_poloniex, shift(ans_poloniex, 1))
m_poloniex/len(ans_poloniex)
m = count_match(ans, shift(ans, 1))
m/len(ans)
```
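`count_match(a, shift(a, 1))` scores the momentum baseline by comparing each label with its predecessor; note that `shift` pads index 0 with a 0, adding one spurious comparison. An equivalent vectorized form that skips the padded element:

```python
import numpy as np

def momentum_accuracy(ans):
    # fraction of periods whose label repeats the previous period's label
    ans = np.asarray(ans)
    return np.mean(ans[1:] == ans[:-1])

labels = np.array([1, 1, 0, 0, 1])  # hypothetical UP/DOWN sequence
acc = momentum_accuracy(labels)     # matches at indices 1 and 3 -> 2/4
```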
### V2
```
m_fb2 = count_match(ans_fb2, shift(ans_fb2, 1))
print(m_fb2/len(ans_fb2))
m_goog2 = count_match(ans_goog2, shift(ans_goog2, 1))
print(m_goog2/len(ans_goog2))
m_msft2 = count_match(ans_msft2, shift(ans_msft2, 1))
print(m_msft2/len(ans_msft2))
m_aapl2 = count_match(ans_aapl2, shift(ans_aapl2, 1))
print(m_aapl2/len(ans_aapl2))
```
# Interactive Data Exploration, Analysis, and Reporting
- Author: Team Data Science Process from Microsoft
- Date: 2017/03
- Supported Data Sources: CSV files on the machine where the Jupyter notebook runs or data stored in SQL server
- Output: IDEAR_Report.ipynb
This is the **Interactive Data Exploration, Analysis and Reporting (IDEAR)** tool in _**Python**_, running on Jupyter Notebook. The data can come from a CSV file on the machine where the Jupyter notebook runs or from a query running against a SQL server. A yaml file has to be pre-configured before running this tool to provide information about the data.
## Step 1: Configure and Set up IDEAR
Before you start using the functionality provided by IDEAR, you need to first [configure and set up](#setup) the utilities by providing the yaml file and loading the necessary Python modules and libraries.
## Step 2: Start using IDEAR
This tool provides various functionalities to help users explore the data and get insights through interactive visualization and statistical testing.
- [Read and Summarize the data](#read and summarize)
- [Extract Descriptive Statistics of Data](#descriptive statistics)
- [Explore Individual Variables](#individual variables)
- [Explore Interactions between Variables](#multiple variables)
- [Rank variables](#rank variables)
- [Interaction between two categorical variables](#two categorical)
- [Interaction between two numerical variables](#two numerical)
- [Interaction between numerical and categorical variables](#numerical and categorical)
- [Interaction between two numerical variables and a categorical variable](#two numerical and categorical)
- [Visualize High Dimensional Data via Projecting to Lower Dimension Principal Component Spaces](#pca)
- [Generate Data Report](#report)
After you are done with exploring the data interactively, you can choose to [show/hide the source code](#show hide codes) to make your notebook look neater.
**Note**:
- Change the working directory and yaml file before running IDEAR in Jupyter Notebook.
- Run the cells and click the *Export* button to export the code that generates the visualization/analysis results to temporary Jupyter notebooks.
- Run the last cell and click [***Generate Final Report***](#report) to create *IDEAR_Report.ipynb* in the working directory. _If you do not export code in some sections, you may see warnings complaining that some temporary Jupyter Notebook files are missing_.
- Upload *IDEAR_Report.ipynb* to Jupyter Notebook server, and run it to generate report.
## <a name="setup"></a>Global Configuration and Setting Up
```
# Set the working directory as the directory where ReportMagics.py stays
# Use \\ in your path
import os
workingDir = '.\\'
os.chdir(workingDir)
from ReportMagics import *
merged_report ='IDEAR_Report.ipynb'
%reset_all
%%add_conf_code_to_report
import os
workingDir = '.\\'
os.chdir(workingDir)
conf_file = '.\\para-adult.yaml'
Sample_Size = 10000
export_dir = '.\\tmp\\'
```
### Import necessary packages and set up environment parameters
```
import pandas as pd
import numpy as np
import os
#os.chdir(workingDir)
import collections
import matplotlib
import io
import sys
import operator
import nbformat as nbf
from IPython.core.display import HTML
from IPython.display import display
from ipywidgets import interact, interactive,fixed
from IPython.display import Javascript, display,HTML
from ipywidgets import widgets, VBox
import ipywidgets
import IPython
from IPython.display import clear_output
import scipy.stats as stats
from statsmodels.graphics.mosaicplot import mosaic
import statsmodels.api as sm
from statsmodels.formula.api import ols
import os
import errno
import seaborn as sns
from string import Template
from functools import partial
from collections import OrderedDict
# Utility Classes
from ConfUtility import *
from ReportGeneration import *
from UniVarAnalytics import *
from MultiVarAnalytics import *
!jupyter nbextension enable --py --sys-prefix widgetsnbextension
%matplotlib inline
#DEBUG=0
font={'family':'normal','weight':'normal','size':8}
matplotlib.rc('font',**font)
matplotlib.rcParams['figure.figsize'] = (12.0, 5.0)
matplotlib.rc('xtick', labelsize=9)
matplotlib.rc('ytick', labelsize=9)
matplotlib.rc('axes', labelsize=10)
matplotlib.rc('axes', titlesize=10)
sns.set_style('whitegrid')
```
### Define some functions for generating reports
```
%%add_conf_code_to_report
if not os.path.exists(export_dir):
os.makedirs(export_dir)
def gen_report(conf_md,conf_code, md, code, filename):
ReportGeneration.write_report(conf_md, conf_code, md, code, report_name=filename)
def translate_code_commands(cell, exported_cols, composite=False):
new_code_store = []
exported_cols = [each for each in exported_cols if each!='']
for each in exported_cols:
w,x,y = each.split(',')
with open('log.txt','w') as fout:
fout.write('Processing call for the column {}'.format(each))
temp=cell[0]
new_line = temp.replace('interactive','apply').replace(
"df=fixed(df)","df").replace("filename=fixed(filename)","'"+ReportMagic.var_files+"'").replace(
"col1=w1","'"+w+"'").replace("col2=w2","'"+x+"'").replace("col3=w3","'"+y+"'").replace(
"col3=fixed(w3)","'"+y+"'").replace(
"Export=w_export","False").replace("conf_dict=fixed(conf_dict)","conf_dict")
new_line = new_line.replace("df,","[df,")
new_line = new_line[:len(new_line)-1]+"])"
new_line = new_line.replace("apply(","").replace(", [", "(*[")
new_code_store.append(new_line)
return new_code_store
def add_to_report(section='', task=''):
print ('Section {}, Task {} added for report generation'.format(section ,task))
def trigger_report(widgets,export_cols_file, output_report, no_widgets=1, md_text=''):
exported_cols = []
with open(export_cols_file,'r') as fin:
for each in fin:
each = each.strip()
if each and not each.isspace():
exported_cols.append(each)
exported_cols = list(set(exported_cols))
conf_md, conf_code, md, code=%show_report
md = md_text
cell = code
new_code_store = translate_code_commands(cell,exported_cols)
gen_report(conf_md,conf_code, md, new_code_store, filename=export_dir+output_report)
def silentremove(filename):
try:
os.remove(filename)
except OSError as e: # this would be "except OSError, e:" before Python 2.6
if e.errno != errno.ENOENT: # errno.ENOENT = no such file or directory
raise # re-raise the exception if a different error occurred
def handle_change(value):
w_export.value=False
def getWidgetValue(w):
w_value = ''
try:
w_value = w.value
except:
pass
return w_value
def handle_export(widget, w1, w2, w3, export_filename='temp.ipynb',md_text=''):
print ('Export is successful!')
w1_value, w2_value, w3_value = \
getWidgetValue(w1),getWidgetValue(w2),getWidgetValue(w3)
st = ','.join(str(each) for each in [w1_value, w2_value, w3_value])
with open(filename, 'a') as fout:  # note: relies on the module-level 'filename' set in each section below
fout.write(st+'\n')
trigger_report(w1_value, filename, export_filename, False, md_text=md_text)
```
## <a name="read and summarize"></a> Read and Summarize the Data
### Read data and infer column types
```
%%add_conf_code_to_report
conf_dict = ConfUtility.parse_yaml(conf_file)
# Read in data from local file or SQL server
if 'DataSource' not in conf_dict:
df=pd.read_csv(conf_dict['DataFilePath'][0], skipinitialspace=True)
else:
import pyodbc
cnxn = pyodbc.connect('driver=ODBC Driver 11 for SQL Server;server={};database={};Uid={};Pwd={}'.format(
conf_dict['Server'], conf_dict['Database'],conf_dict['Username'],conf_dict['Password']))
df = pd.read_sql(conf_dict['Query'],cnxn)
# Making sure that we are not reading any extra column
df = df[[each for each in df.columns if 'Unnamed' not in each]]
# Sampling Data if data size is larger than 10k
df0 = df # df0 is the unsampled data. Will be used in data exploration and analysis where sampling is not needed
# However, keep in mind that your final report will always be based on the sampled data.
if Sample_Size < df.shape[0]:
df = df.sample(Sample_Size)
# change float data types
if 'FloatDataTypes' in conf_dict:
for col_name in conf_dict['FloatDataTypes']:
df[col_name] = df[col_name].astype(float)
# Getting the list of categorical columns if it was not there in the yaml file
if 'CategoricalColumns' not in conf_dict:
conf_dict['CategoricalColumns'] = list(set(list(df.select_dtypes(exclude=[np.number]).columns)))
# Getting the list of numerical columns if it was not there in the yaml file
if 'NumericalColumns' not in conf_dict:
conf_dict['NumericalColumns'] = list(df.select_dtypes(include=[np.number]).columns)
# Exclude columns that we do not need
if 'ColumnsToExclude' in conf_dict:
conf_dict['CategoricalColumns'] = list(set(conf_dict['CategoricalColumns'])-set(conf_dict['ColumnsToExclude']))
conf_dict['NumericalColumns'] = list(set(conf_dict['NumericalColumns'])-set(conf_dict['ColumnsToExclude']))
# Ordering the categorical variables according to the number of unique categories
filtered_cat_columns = []
temp_dict = {}
for cat_var in conf_dict['CategoricalColumns']:
temp_dict[cat_var] = len(np.unique(df[cat_var]))
sorted_x = sorted(temp_dict.items(), key=operator.itemgetter(1), reverse=True)  # itemgetter(1) = the unique-category count, not the column name
conf_dict['CategoricalColumns'] = [x for (x,y) in sorted_x]
ConfUtility.dict_to_htmllist(conf_dict,['Target','CategoricalColumns','NumericalColumns'])
```
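The `select_dtypes` fallback above infers categorical columns as the non-numeric ones and numerical columns as the numeric ones; the same inference on a small hypothetical frame:

```python
import numpy as np
import pandas as pd

df_demo = pd.DataFrame({'age': [25, 32, 47],
                        'income': [50.0, 64.5, 80.2],
                        'sex': ['M', 'F', 'M']})
categorical = sorted(df_demo.select_dtypes(exclude=[np.number]).columns)
numerical = sorted(df_demo.select_dtypes(include=[np.number]).columns)
```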
### Print the first n (n=5 by default) rows of the data
```
%%add_conf_code_to_report
def custom_head(df,NoOfRows):
return HTML(df.head(NoOfRows).style.set_table_attributes("class='table'").render())
i = interact(custom_head,df=fixed(df0), NoOfRows=ipywidgets.IntSlider(min=0, max=30, step=1, \
value=5, description='Number of Rows'))
```
### Print the dimensions of the data (rows, columns)
```
%%add_conf_code_to_report
print ('The data has {} Rows and {} columns'.format(df0.shape[0],df0.shape[1]))
```
### Print the column names of the data
```
%%add_conf_code_to_report
col_names = ','.join(each for each in list(df.columns))
print("The column names are:" + col_names)
```
### Print the column types
```
%%add_conf_code_to_report
print("The types of columns are:")
df.dtypes
```
## <a name="individual variable"></a>Extract Descriptive Statistics of Each Column
```
%%add_conf_code_to_report
def num_missing(x):
return len(x.index)-x.count()
def num_unique(x):
return len(np.unique(x))
temp_df = df0.describe().T
missing_df = pd.DataFrame(df0.apply(num_missing, axis=0))
missing_df.columns = ['missing']
unq_df = pd.DataFrame(df0.apply(num_unique, axis=0))
unq_df.columns = ['unique']
types_df = pd.DataFrame(df0.dtypes)
types_df.columns = ['DataType']
```
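`num_missing` counts NaNs per column and `num_unique` counts distinct values (with NaN itself counting as a value, since `np.unique` keeps it in the output); the helpers on a small hypothetical frame:

```python
import numpy as np
import pandas as pd

def num_missing(x):
    return len(x.index) - x.count()  # Series.count() excludes NaN

def num_unique(x):
    return len(np.unique(x))  # note: a NaN counts as one of the distinct values

df_demo = pd.DataFrame({'a': [1.0, np.nan, 1.0, 2.0],
                        'b': ['x', 'y', 'y', 'y']})
missing = df_demo.apply(num_missing, axis=0)
unique = df_demo.apply(num_unique, axis=0)
```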
### Print the descriptive statistics of numerical columns
```
%%add_conf_code_to_report
summary_df = temp_df.join(missing_df).join(unq_df).join(types_df)
summary_df
```
### Print the descriptive statistics of categorical columns
```
%%add_conf_code_to_report
col_names = list(types_df.index) #Get all col names
num_cols = len(col_names)
index = range(num_cols)
cat_index = []
for i in index: #Find the indices of columns in Categorical columns
if col_names[i] in conf_dict['CategoricalColumns']:
cat_index.append(i)
summary_df_cat = missing_df.join(unq_df).join(types_df.iloc[cat_index], how='inner') #Only summarize categorical columns
summary_df_cat
```
## <a name="individual variables"></a>Explore Individual Variables
### Explore the target variable
```
md_text = '## Target Variable'
filename = 'tmp/target_variables.csv'
export_filename = 'target_report2.ipynb'
if conf_dict['Target'] in conf_dict['CategoricalColumns']:
w1_value,w2_value,w3_value = '','',''
w1, w2, w3, w4 = None, None, None, None
silentremove(filename)
w1 = widgets.Dropdown(
options=[conf_dict['Target']],
value=conf_dict['Target'],
description='Target Variable:',
)
ReportMagic.var_files = filename
w_export = widgets.Button(description='Export', value='Export')
handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
w1.observe(handle_change,'value')
w_export.on_click(handle_export_partial)
%reset_report
%add_interaction_code_to_report i = interactive(TargetAnalytics.custom_barplot, df=fixed(df), \
filename=fixed(filename), col1=w1, Export=w_export)
hbox = widgets.HBox(i.children)
display(hbox)
hbox.on_displayed(TargetAnalytics.custom_barplot(df=df0, filename=filename, col1=w1.value, Export=w_export))
else:
w1_value, w2_value, w3_value = '', '', ''
w1, w2, w3, w4 = None, None, None, None
silentremove(filename)
w1 = widgets.Dropdown(
options=[conf_dict['Target']],
value=conf_dict['Target'],
description='Target Variable:',
)
w_export = widgets.Button(description='Export', value='Export')
handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
w1.observe(handle_change,'value')
w_export.on_click(handle_export_partial)
%reset_report
%add_interaction_code_to_report i = interactive(NumericAnalytics.custom_barplot, df=fixed(df), filename=fixed(filename),\
col1=w1, Export=w_export)
hbox = widgets.HBox(i.children)
display(hbox)
hbox.on_displayed(NumericAnalytics.custom_barplot(df=df, filename=filename, col1=w1.value, Export=w_export))
```
### Explore individual numeric variables and test for normality (on sampled data)
```
md_text = '## Visualize Individual Numerical Variables (on Sampled Data)'
filename = ReportMagic.var_files='tmp/numeric_variables.csv'
export_filename = 'numeric_report2.ipynb'
w1_value, w2_value, w3_value = '', '', ''
w1, w2, w3, w4 = None, None, None, None
silentremove(filename)
w1 = widgets.Dropdown(
options=conf_dict['NumericalColumns'],
value=conf_dict['NumericalColumns'][0],
description='Numeric Variable:',
)
w_export = widgets.Button(description='Export', value='Export')
handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
w1.observe(handle_change,'value')
w_export.on_click(handle_export_partial)
%reset_report
%add_interaction_code_to_report i = interactive(NumericAnalytics.custom_barplot, df=fixed(df), filename=fixed(filename),\
col1=w1, Export=w_export)
hbox = widgets.HBox(i.children)
display(hbox)
hbox.on_displayed(NumericAnalytics.custom_barplot(df=df, filename=filename, col1=w1.value, Export=w_export))
```
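The widget above only plots each sampled numeric variable; the normality test mentioned in the heading can be sketched with SciPy's Shapiro-Wilk test. The dataframe and column names below are toy stand-ins for the notebook's `df` and `conf_dict['NumericalColumns']`, not the tool's actual implementation:

```python
# Sketch: Shapiro-Wilk normality check on a sampled numeric column.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "normal_col": rng.normal(0, 1, 500),      # should look normal
    "skewed_col": rng.exponential(1.0, 500),  # should be rejected
})

def normality_report(df, col, n=500, alpha=0.05):
    # sample to keep the test cheap and within Shapiro-Wilk's intended size range
    sample = df[col].dropna().sample(min(n, len(df)), random_state=0)
    stat, p = stats.shapiro(sample)
    return {"column": col, "W": round(stat, 3), "p": p, "looks_normal": bool(p > alpha)}

for col in df.columns:
    print(normality_report(df, col))
```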
### Explore individual categorical variables (sorted by frequencies)
```
w_export = None
md_text = '## Visualize Individual Categorical Variables'
filename = ReportMagic.var_files='tmp/categoric_variables.csv'
export_filename = 'categoric_report2.ipynb'
w1_value, w2_value, w3_value = '', '', ''
w1, w2, w3, w4 = None, None, None, None
silentremove(filename)
w1 = widgets.Dropdown(
options = conf_dict['CategoricalColumns'],
value = conf_dict['CategoricalColumns'][0],
description = 'Categorical Variable:',
)
w_export = widgets.Button(description='Export')
handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
w1.observe (handle_change,'value')
w_export.on_click(handle_export_partial)
%reset_report
%add_interaction_code_to_report i = interactive(CategoricAnalytics.custom_barplot, df=fixed(df),\
filename=fixed(filename), col1=w1, Export=w_export)
hbox = widgets.HBox(i.children)
display(hbox)
hbox.on_displayed(CategoricAnalytics.custom_barplot(df=df0, filename=filename, col1=w1.value, \
Export=w_export))
```
## <a name="multiple variables"></a>Explore Interactions Between Variables
### <a name="rank variables"></a>Rank variables based on linear relationships with reference variable (on sampled data)
```
md_text = '## Rank variables based on linear relationships with reference variable (on sampled data)'
filename = ReportMagic.var_files='tmp/rank_associations.csv'
export_filename = 'rank_report2.ipynb'
silentremove(filename)
cols_list = [conf_dict['Target']] + conf_dict['NumericalColumns'] + conf_dict['CategoricalColumns'] #Make target the default reference variable
cols_list = list(OrderedDict.fromkeys(cols_list)) #remove variables that might be duplicates with target
w1 = widgets.Dropdown(
options=cols_list,
value=cols_list[0],
description='Ref Var:'
)
w2 = ipywidgets.Text(value="5", description='Top Num Vars:')
w3 = ipywidgets.Text(value="5", description='Top Cat Vars:')
w_export = widgets.Button(description='Export', value='Export')
handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
w1.observe (handle_change,'value')
w_export.on_click(handle_export_partial)
%reset_report
%add_interaction_code_to_report i = interactive(InteractionAnalytics.rank_associations, df=fixed(df), \
conf_dict=fixed(conf_dict), col1=w1, col2=w2, col3=w3, Export=w_export)
hbox = widgets.HBox(i.children)
display(hbox)
hbox.on_displayed(InteractionAnalytics.rank_associations(df=df, conf_dict=conf_dict, col1=w1.value, \
col2=w2.value, col3=w3.value, Export=w_export))
```
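`InteractionAnalytics.rank_associations` is not shown in this notebook; for numeric variables the ranking idea reduces to ordering columns by the strength of their linear relationship with the reference variable. A minimal sketch on toy data (the real helper also handles categorical variables, which this sketch omits):

```python
# Rank numeric columns by absolute Pearson correlation with a reference column.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
target = rng.normal(size=n)
df = pd.DataFrame({
    "target": target,
    "strong": target * 2 + rng.normal(scale=0.1, size=n),  # strongly related
    "weak": rng.normal(size=n),                            # unrelated
})

def rank_by_correlation(df, ref, top=5):
    corr = df.corr()[ref].drop(ref).abs()
    return corr.sort_values(ascending=False).head(top)

ranked = rank_by_correlation(df, "target")
print(ranked)
```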
### <a name="two categorical"></a>Explore interactions between categorical variables
```
md_text = '## Interaction between categorical variables'
filename = ReportMagic.var_files='tmp/cat_interactions.csv'
export_filename = 'cat_interactions_report2.ipynb'
silentremove(filename)
w1, w2, w3, w4 = None, None, None, None
if conf_dict['Target'] in conf_dict['CategoricalColumns']:
    cols_list = [conf_dict['Target']] + conf_dict['CategoricalColumns'] #Make target the default reference variable
    cols_list = list(OrderedDict.fromkeys(cols_list)) #remove variables that might be duplicates with target
else:
    cols_list = conf_dict['CategoricalColumns']
w1 = widgets.Dropdown(
options=cols_list,
value=cols_list[0],
description='Categorical Var 1:'
)
w2 = widgets.Dropdown(
options=cols_list,
value=cols_list[1],
description='Categorical Var 2:'
)
w_export = widgets.Button(description='Export', value="Export")
handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
w1.observe(handle_change,'value')
w2.observe(handle_change,'value')
w_export.on_click(handle_export_partial)
%reset_report
%add_interaction_code_to_report i = interactive(InteractionAnalytics.categorical_relations, df=fixed(df), \
filename=fixed(filename), col1=w1, col2=w2, Export=w_export)
hbox = widgets.HBox(i.children)
display(hbox)
hbox.on_displayed(InteractionAnalytics.categorical_relations(df=df0, filename=filename, col1=w1.value, \
col2=w2.value, Export=w_export))
```
### <a name="two numerical"></a>Explore interactions between numerical variables (on sampled data)
```
md_text = '## Interaction between numerical variables (on sampled data)'
filename = ReportMagic.var_files='tmp/numerical_interactions.csv'
export_filename = 'numerical_interactions_report2.ipynb'
silentremove(filename)
w1, w2, w3, w4 = None, None, None, None
if conf_dict['Target'] in conf_dict['NumericalColumns']:
    cols_list = [conf_dict['Target']] + conf_dict['NumericalColumns'] #Make target the default reference variable
    cols_list = list(OrderedDict.fromkeys(cols_list)) #remove variables that might be duplicates with target
else:
    cols_list = conf_dict['NumericalColumns']
w1 = widgets.Dropdown(
options=cols_list,
value=cols_list[0],
description='Numerical Var 1:'
)
w2 = widgets.Dropdown(
options=cols_list,
value=cols_list[1],
description='Numerical Var 2:'
)
w_export = widgets.Button(description='Export', value="Export")
handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
w1.observe(handle_change,'value')
w2.observe(handle_change,'value')
w_export.on_click(handle_export_partial)
%reset_report
%add_interaction_code_to_report i = interactive(InteractionAnalytics.numerical_relations, df=fixed(df), \
col1=w1, col2=w2, Export=w_export)
hbox = widgets.HBox(i.children)
display(hbox)
hbox.on_displayed(InteractionAnalytics.numerical_relations(df, col1=w1.value, col2=w2.value, Export=w_export))
```
### Explore correlation matrix between numerical variables
```
md_text = '## Explore correlation matrix between numerical variables'
filename = ReportMagic.var_files='tmp/numerical_corr.csv'
export_filename = 'numerical_correlations_report2.ipynb'
silentremove(filename)
w1, w2, w3, w4 = None, None, None, None
w1 = widgets.Dropdown(
options=['pearson','kendall','spearman'],
value='pearson',
description='Correlation Method:'
)
w_export = widgets.Button(description='Export', value='Export')
handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
w1.observe(handle_change,'value')
w_export.on_click(handle_export_partial)
%reset_report
%add_interaction_code_to_report i = interactive(InteractionAnalytics.numerical_correlation, df=fixed(df), conf_dict=fixed(conf_dict),\
col1=w1, Export=w_export)
hbox = widgets.HBox(i.children)
display(hbox)
hbox.on_displayed(InteractionAnalytics.numerical_correlation(df0, conf_dict=conf_dict, col1=w1.value, Export=w_export))
```
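`InteractionAnalytics.numerical_correlation` is not shown here, but the dropdown's three options map directly onto pandas' `DataFrame.corr` methods. A toy sketch of that step:

```python
# Compute a correlation matrix with each method offered by the dropdown.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.normal(size=(100, 3)), columns=["a", "b", "c"])
df["d"] = df["a"] * 3 + rng.normal(scale=0.2, size=100)  # strongly tied to 'a'

for method in ("pearson", "kendall", "spearman"):
    corr = df.corr(method=method)
    print(method, "a~d:", round(corr.loc["a", "d"], 2))
```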
### <a name="numerical and categorical"></a>Explore interactions between numerical and categorical variables
```
md_text = '## Explore interactions between numerical and categorical variables'
filename = ReportMagic.var_files = 'tmp/nc_int.csv'
export_filename = 'nc_report2.ipynb'
silentremove(filename)
w1, w2, w3, w4 = None, None, None, None
if conf_dict['Target'] in conf_dict['NumericalColumns']:
    cols_list = [conf_dict['Target']] + conf_dict['NumericalColumns'] #Make target the default reference variable
    cols_list = list(OrderedDict.fromkeys(cols_list)) #remove variables that might be duplicates with target
else:
    cols_list = conf_dict['NumericalColumns']
w1 = widgets.Dropdown(
    options=cols_list,
    value=cols_list[0],
    description='Numerical Variable:'
)
if conf_dict['Target'] in conf_dict['CategoricalColumns']:
    cols_list = [conf_dict['Target']] + conf_dict['CategoricalColumns'] #Make target the default reference variable
    cols_list = list(OrderedDict.fromkeys(cols_list)) #remove variables that might be duplicates with target
else:
    cols_list = conf_dict['CategoricalColumns']
w2 = widgets.Dropdown(
options=cols_list,
value=cols_list[0],
description='Categorical Variable:'
)
w_export = widgets.Button(description='Export')  # Button has no 'value'/'options' traits
handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
w1.observe(handle_change,'value')
w_export.on_click(handle_export_partial)
%reset_report
%add_interaction_code_to_report i = interactive(InteractionAnalytics.nc_relation, df=fixed(df), \
conf_dict=fixed(conf_dict), col1=w1, col2=w2, \
col3=fixed(w3), Export=w_export)
hbox = widgets.HBox(i.children)
display( hbox )
hbox.on_displayed(InteractionAnalytics.nc_relation(df0, conf_dict, col1=w1.value, col2=w2.value, Export=w_export))
```
### <a name="two numerical and categorical"></a>Explore interactions between two numerical variables and a categorical variable (on sampled data)
```
md_text = '## Explore interactions between two numerical variables and a categorical variable (on sampled data)'
filename = ReportMagic.var_files='tmp/nnc_int.csv'
export_filename = 'nnc_report2.ipynb'
silentremove(filename)
w1, w2, w3, w4 = None, None, None, None
if conf_dict['Target'] in conf_dict['NumericalColumns']:
    cols_list = [conf_dict['Target']] + conf_dict['NumericalColumns'] #Make target the default reference variable
    cols_list = list(OrderedDict.fromkeys(cols_list)) #remove variables that might be duplicates with target
else:
    cols_list = conf_dict['NumericalColumns']
w1 = widgets.Dropdown(
options = cols_list,
value = cols_list[0],
description = 'Numerical Var 1:'
)
w2 = widgets.Dropdown(
options = cols_list,
value = cols_list[1],
description = 'Numerical Var 2:'
)
if conf_dict['Target'] in conf_dict['CategoricalColumns']:
    cols_list = [conf_dict['Target']] + conf_dict['CategoricalColumns'] #Make target the default reference variable
    cols_list = list(OrderedDict.fromkeys(cols_list)) #remove variables that might be duplicates with target
else:
    cols_list = conf_dict['CategoricalColumns']
w3 = widgets.Dropdown(
options = cols_list,
value = cols_list[0],
description = 'Legend Cat Var:'
)
w_export = widgets.Button(description='Export')  # Button has no 'value'/'options' traits
handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
w1.observe(handle_change,'value')
w_export.on_click(handle_export_partial)
%reset_report
%add_interaction_code_to_report i = interactive(InteractionAnalytics.nnc_relation, df=fixed(df),\
conf_dict=fixed(conf_dict), col1=w1, col2=w2, col3=w3, Export=w_export)
hbox = widgets.HBox(i.children)
display(hbox)
hbox.on_displayed(InteractionAnalytics.nnc_relation(df, conf_dict, col1=w1.value,\
col2=w2.value, col3=w3.value, Export=w_export))
```
## <a name="pca"></a>Visualize numerical data by projecting to principal component spaces (on sampled data)
### Project data to 2-D principal component space (on sampled data)
```
num_numeric = len(conf_dict['NumericalColumns'])
if num_numeric > 3:
    md_text = '## Project Data to 2-D Principal Component Space'
    filename = ReportMagic.var_files = 'tmp/numerical_pca.csv'
    export_filename = 'numerical_pca_report2.ipynb'
    silentremove(filename)
    w1, w2, w3, w4, w5 = None, None, None, None, None
    if conf_dict['Target'] in conf_dict['CategoricalColumns']:
        cols_list = [conf_dict['Target']] + conf_dict['CategoricalColumns'] #Make target the default reference variable
        cols_list = list(OrderedDict.fromkeys(cols_list)) #remove variables that might be duplicates with target
    else:
        cols_list = conf_dict['CategoricalColumns']
    w1 = widgets.Dropdown(
        options=cols_list,
        value=cols_list[0],
        description='Legend Variable:',
        width=10
    )
    w2 = widgets.Dropdown(
        options=[str(x) for x in np.arange(1, num_numeric + 1)],
        value='1',
        width=1,
        description='PC at X-Axis:'
    )
    w3 = widgets.Dropdown(
        options=[str(x) for x in np.arange(1, num_numeric + 1)],
        value='2',
        description='PC at Y-Axis:'
    )
    w_export = widgets.Button(description='Export')  # Button has no 'value'/'options' traits
    handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, export_filename=export_filename, md_text=md_text)
    w1.observe(handle_change, 'value')
    w_export.on_click(handle_export_partial)
    %reset_report
    %add_interaction_code_to_report i = interactive(InteractionAnalytics.numerical_pca, df=fixed(df),\
        conf_dict=fixed(conf_dict), col1=w1, col2=w2, col3=w3, Export=w_export)
    hbox = widgets.HBox(i.children)
    display(hbox)
    hbox.on_displayed(InteractionAnalytics.numerical_pca(df, conf_dict=conf_dict, col1=w1.value, col2=w2.value,\
        col3=w3.value, Export=w_export))
```
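`InteractionAnalytics.numerical_pca` is not shown in this notebook; the projection it draws can be sketched with scikit-learn on toy data. Standardizing before PCA is an assumption about the helper's behavior, not something the notebook states:

```python
# Sketch: standardize numeric columns, fit PCA, and take two components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 5))
X[:, 1] = X[:, 0] * 2  # one redundant direction that PCA should absorb

Xs = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
proj = pca.fit_transform(Xs)
print(proj.shape)                      # one row per sample, two components
print(pca.explained_variance_ratio_)   # share of variance each PC captures
```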
### Project data to 3-D principal component space (on sampled data)
```
md_text = '## Project Data to 3-D Principal Component Space (on sampled data)'
if len(conf_dict['NumericalColumns']) > 3:
    filename = ReportMagic.var_files = 'tmp/pca3d.csv'
    export_filename = 'pca3d_report2.ipynb'
    silentremove(filename)
    if conf_dict['Target'] in conf_dict['CategoricalColumns']:
        cols_list = [conf_dict['Target']] + conf_dict['CategoricalColumns'] #Make target the default reference variable
        cols_list = list(OrderedDict.fromkeys(cols_list)) #remove variables that might be duplicates with target
    else:
        cols_list = conf_dict['CategoricalColumns']
    w1, w2, w3, w4 = None, None, None, None
    w1 = widgets.Dropdown(
        options=cols_list,
        value=cols_list[0],
        description='Legend Variable:'
    )
    w2 = ipywidgets.IntSlider(min=-180, max=180, step=5, value=30, description='Angle')
    w_export = widgets.Button(description='Export', value='Export')
    handle_export_partial = partial(handle_export, w1=w1, w2=w2, w3=w3, \
        export_filename=export_filename, md_text=md_text)
    w1.observe(handle_change, 'value')
    w_export.on_click(handle_export_partial)
    %reset_report
    %add_interaction_code_to_report i = interactive(InteractionAnalytics.pca_3d, df=fixed(df), conf_dict=fixed(conf_dict),\
        col1=w1, col2=w2, col3=fixed(w3), Export=w_export)
    hbox = widgets.HBox(i.children)
    display(hbox)
    hbox.on_displayed(InteractionAnalytics.pca_3d(df, conf_dict, col1=w1.value, col2=w2.value, Export=w_export))
```
## <a name="report"></a>Generate the Data Report
```
filenames = ['target_report2.ipynb', 'numeric_report2.ipynb', 'categoric_report2.ipynb', 'rank_report2.ipynb',
'cat_interactions_report2.ipynb', 'numerical_interactions_report2.ipynb',
'numerical_correlations_report2.ipynb', 'nc_report2.ipynb',
'nnc_report2.ipynb', 'numerical_pca_report2.ipynb', 'pca3d_report2.ipynb'
]
def merge_notebooks():
    merged = None
    for fname in filenames:
        try:
            print('Processing {}'.format(export_dir + fname))
            with io.open(export_dir + fname, 'r', encoding='utf-8') as f:
                nb = nbf.read(f, as_version=4)
            if merged is None:
                merged = nb
            else:
                merged.cells.extend(nb.cells[2:])
        except Exception:
            print('Warning: unable to find the file', export_dir + fname, ', continuing...')
    if merged is None:
        print('Warning: no section notebooks found; nothing to merge.')
        return
    if not hasattr(merged.metadata, 'name'):
        merged.metadata.name = ''
    merged.metadata.name += "_merged"
    with open(merged_report, 'w') as f:
        nbf.write(merged, f)

def gen_merged_report(b):
    merge_notebooks()

button = widgets.Button(description='Generate Final Report')
button.on_click(gen_merged_report)
display(button)
```
## <a name="show hide codes"></a>Show/Hide the Source Codes
```
# Provide the path to the yaml file relative to the working directory
display(HTML('''<style>
.widget-label { min-width: 20ex !important; }
.widget-text { min-width: 60ex !important; }
</style>'''))
#Toggle Code
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
//$( document ).ready(code_toggle);//commenting code disabling by default
</script>
<form action = "javascript:code_toggle()"><input type="submit" value="Toggle Raw Code"></form>''')
```
# GAN
```
import pandas as pd
import numpy as np
dataset = "YOUR_DATASET" # DATAFRAME
display(dataset)
train_X = "YOUR TRAINING INPUT" # NUMPY
train_Y = "YOUR TRAINING LABEL" # NUMPY
print(train_X.shape)
print(train_Y.shape)
val_X = "YOUR VALIDATION INPUT" # NUMPY
val_Y = "YOUR VALIDATION LABEL" # NUMPY
print(val_X.shape)
print(val_Y.shape)
# shuffling
import random
shuf = np.array([i for i in range(train_X.shape[0])])
random.shuffle(shuf)
train_X = train_X[shuf]
train_Y = train_Y[shuf]
print(train_X.shape)
print(train_Y.shape)
#######################################################################################
from tensorflow.config.experimental import list_physical_devices, set_memory_growth
gpus = list_physical_devices('GPU')
display(gpus)
if gpus:
    try:
        set_memory_growth(gpus[0], True)
    except RuntimeError as e:
        print(e)
#######################################################################################
#discriminator
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import "YOUR LAYER"
from tensorflow.keras.optimizers import "YOUR OPTIMIZER"
from tensorflow.keras.metrics import "YOUR METRIC"
from tensorflow.keras.backend import clear_session
clear_session()
# YOUR PARAMETERS
#######################################################################################
#######################################################################################
def create_discriminator(input_dim):
    model = Sequential(name="discriminator")  # Sequential takes the name as a keyword argument
    # MODEL
    #######################################################################################
    #######################################################################################
    model.summary()
    return model
discriminator = create_discriminator(train_X.shape[1:])
discriminator.compile(loss='mse', optimizer="YOUR OPTIMIZER", metrics=["YOUR METRIC"])
discriminator.trainable = False
#generator
from tensorflow.keras.layers import "YOUR LAYER"
z_dim = "YOUR LATENT VECTOR DIM"
# YOUR PARAMETERS
#######################################################################################
#######################################################################################
def create_generator(z_dim):
    model = Sequential(name="generator")
    # MODEL
    #######################################################################################
    #######################################################################################
    model.summary()
    return model
generator = create_generator(z_dim)
#assemble
model = Sequential(name="VanillaGAN")
model.add(generator)
model.add(discriminator)
model.compile(loss='YOUR LOSS', optimizer="YOUR OPTIMIZER")
model.summary()
#train
from tqdm import tqdm  # progress bar for the training loop

loss = []
batch_size = "YOUR BATCH SIZE"
interval = "YOUR INTERVAL"
z_label_real = np.ones((batch_size, 1))
z_label_fake = np.zeros((batch_size, 1))
for i in tqdm(range(interval)):
    # train the discriminator on a real batch (label 1) and a generated batch (label 0)
    idx = np.random.randint(0, train_X.shape[0], batch_size)
    real_X = train_X[idx]
    d_loss_real = discriminator.train_on_batch(real_X, z_label_real)
    z = np.random.normal(0, 1, (batch_size, z_dim))
    fake_X = generator.predict(z)
    d_loss_fake = discriminator.train_on_batch(fake_X, z_label_fake)
    # train the generator through the combined model, asking for 'real' outputs
    z = np.random.normal(0, 1, (batch_size, z_dim))
    g_loss = model.train_on_batch(z, z_label_real)
    if (i + 1) % 1000 == 0:
        d_loss_eval, d_AUC_eval = discriminator.evaluate(val_X, val_Y)
        print(i + 1, ": g_loss: %.6f" % (g_loss))
        print(i + 1, ": d_loss_real: %.6f" % (d_loss_real[0]))
        print(i + 1, ": d_loss_fake: %.6f" % (d_loss_fake[0]))
        print(i + 1, ": d_loss_eval: %.6f" % (d_loss_eval))
        print(i + 1, ": d_AUC_eval: %.6f" % (d_AUC_eval))
        print("=" * 80)
```
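The template above leaves the architectures blank. As one possible way to fill them in (an assumption for illustration, not the notebook's required design), a minimal fully connected discriminator/generator pair could look like this:

```python
# Hypothetical fill-in for the GAN template: small Dense networks.
# Input width (10) and z_dim (32) are arbitrary illustration values.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LeakyReLU
from tensorflow.keras.optimizers import Adam

z_dim = 32

def create_discriminator(input_dim):
    model = Sequential(name="discriminator")
    model.add(Dense(64, input_shape=input_dim))
    model.add(LeakyReLU(0.2))
    model.add(Dense(1, activation="sigmoid"))  # real/fake probability
    return model

def create_generator(z_dim, out_dim):
    model = Sequential(name="generator")
    model.add(Dense(64, input_dim=z_dim))
    model.add(LeakyReLU(0.2))
    model.add(Dense(out_dim, activation="tanh"))  # sample in [-1, 1]
    return model

discriminator = create_discriminator((10,))
discriminator.compile(loss="binary_crossentropy", optimizer=Adam(1e-4))
generator = create_generator(z_dim, 10)
fake = generator.predict(np.random.normal(size=(4, z_dim)), verbose=0)
print(fake.shape)
```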
# Importing
```
import pandas as pd
import numpy as np
import seaborn as sns  # used below for sns.set() and the palette helpers
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs, make_moons, make_circles
#import hdbscan
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import SpectralClustering, KMeans, AgglomerativeClustering
from sklearn.metrics import pairwise_distances
from scipy.cluster.hierarchy import dendrogram, linkage
#import networkx as nx
import warnings
warnings.filterwarnings("ignore")
cd ..
#import os
#os.environ["PATH"] += os.pathsep + 'C:/Program Files (x86)/Graphviz2.38/bin/'
```
# Generating Dataset
```
# generate 2d classification dataset
X, y = make_blobs(n_samples=30, centers=4, n_features=2, cluster_std=1.8, random_state=42)
X1, y1 = make_moons(n_samples=80, noise=0.05, random_state=42)
varied = make_blobs(n_samples=120,
cluster_std=[3.5,3.5,3.5],
random_state=42)[0]
plt.scatter(varied[:,0], varied[:,1])
plt.show()
```
# OPTICS
A cluster analysis method based on the OPTICS algorithm. OPTICS computes an augmented cluster-ordering of the database objects. As its authors note, its main advantage over earlier clustering algorithms is that it is not limited to one global parameter setting. Instead, the augmented cluster-ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings, and is thus a versatile basis for both automatic and interactive cluster analysis.
```
from clustviz.optics import OPTICS, plot_clust
#ClustDist, CoreDist = OPTICS(X, eps=2, minPTS=3, plot=True, plot_reach=True)
#ClustDist, CoreDist = OPTICS(X, eps=5, minPTS=3, plot=False, plot_reach=False)
#plot_clust(X, ClustDist, CoreDist, eps=2, eps_db=1.9)
#CoreDist
```
# DBSCAN
ADVANTAGES:
- DBSCAN does not require one to specify the number of clusters in the data a priori, as opposed to k-means.
- DBSCAN can find arbitrarily shaped clusters. It can even find a cluster completely surrounded by (but not connected to) a different cluster. Due to the MinPts parameter, the so-called single-link effect (different clusters being connected by a thin line of points) is reduced.
- DBSCAN has a notion of noise, and is robust to outliers.
- DBSCAN requires just two parameters and is mostly insensitive to the ordering of the points in the database. (However, points sitting on the edge of two different clusters might swap cluster membership if the ordering of the points is changed, and the cluster assignment is unique only up to isomorphism.)
- DBSCAN is designed for use with databases that can accelerate region queries, e.g. using an R* tree.
- The parameters minPts and ε can be set by a domain expert, if the data is well understood.

DISADVANTAGES:
- DBSCAN is not entirely deterministic: border points that are reachable from more than one cluster can be part of either cluster, depending on the order the data are processed. For most data sets and domains, this situation does not arise often and has little impact on the clustering result: both on core points and noise points, DBSCAN is deterministic. DBSCAN* is a variation that treats border points as noise, and this way achieves a fully deterministic result as well as a more consistent statistical interpretation of density-connected components.
- The quality of DBSCAN depends on the distance measure used in the function regionQuery(P,ε). The most common distance metric used is Euclidean distance. Especially for high-dimensional data, this metric can be rendered almost useless due to the so-called "curse of dimensionality", making it difficult to find an appropriate value for ε. This effect, however, is also present in any other algorithm based on Euclidean distance.
- DBSCAN cannot cluster data sets well with large differences in densities, since the minPts-ε combination cannot then be chosen appropriately for all clusters.
- If the data and scale are not well understood, choosing a meaningful distance threshold ε can be difficult.
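The noise notion listed among the advantages is easy to see with scikit-learn's DBSCAN, which labels noise points `-1`:

```python
# DBSCAN labels dense-region points with cluster ids and outliers with -1.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_blobs

X_demo, _ = make_blobs(n_samples=50, centers=2, cluster_std=0.5, random_state=0)
X_demo = np.vstack([X_demo, [[20.0, 20.0]]])  # one clear outlier, far from both blobs
labels = DBSCAN(eps=1.0, min_samples=4).fit_predict(X_demo)
print(sorted(set(labels)))  # the outlier's label is -1
```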
## Choosing eps
```
# we choose k as minPTS
k = 3
neigh = NearestNeighbors(n_neighbors=k)
nbrs = neigh.fit(X)
distances, indices = nbrs.kneighbors(X)
distances = np.sort(distances, axis=0)
distances = distances[:, k-1]  # last column; column 0 is each point's zero distance to itself
#plt.plot(distances)
#plt.xlabel("Points")
#plt.ylabel("Distances")
#plt.title("{0}-distance plot".format(k))
#plt.show()
#the best eps seems to be around 0.150, even if later we see that eps = 0.3 works better
```
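The commented-out plot above is the classic k-distance ("elbow") plot. A self-contained version of the same heuristic, on toy data:

```python
# k-distance heuristic for choosing DBSCAN's eps: sort each point's distance
# to its k-th neighbor and look for the knee of the resulting curve.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors

k = 3
X_demo, _ = make_blobs(n_samples=100, centers=3, cluster_std=0.6, random_state=1)
nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X_demo)  # +1 because each point is its own nearest neighbor
dist, _ = nbrs.kneighbors(X_demo)
k_dist = np.sort(dist[:, k])        # distance to the k-th true neighbor, sorted for plotting
print(k_dist[:3], k_dist[-3:])      # small inside clusters, large for fringe points
```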
## Testing
```
from clustviz.dbscan import DBSCAN, plot_clust_DB
ClustDict = DBSCAN(X, eps=3, minPTS=3, plotting=True)
#ClustDict = DBSCAN(X, eps=1.5, minPTS=3, plotting=False)
#plot_clust_DB(X, ClustDict, eps=1.5, noise_circle=True, circle_class="all")
```
# HDBSCAN
https://hdbscan.readthedocs.io/en/latest/how_hdbscan_works.html
Density-based clustering is a popular clustering paradigm. However, as the HDBSCAN paper argues, the existing methods have a number of limitations:
(i) Some methods (e.g., DBSCAN and DENCLUE) can only provide a "flat" (i.e. non-hierarchical) labeling of the data objects, based on a global density threshold. Using a single density threshold can often not properly characterize common data sets with clusters of very different densities and/or nested clusters.
(ii) Among the methods that provide a clustering hierarchy, some (e.g., gSkeletonClu) are not able to automatically simplify the hierarchy into an easily interpretable representation involving only the most significant clusters.
(iii) Many hierarchical methods, including OPTICS and gSkeletonClu, suggest only how to extract a flat partition by using a global cut/density threshold, which may not result in the most significant clusters if these clusters are characterized by different density levels.
(iv) Some methods are limited to specific classes of problems, such as networks (gSkeletonClu), and point sets in the real coordinate space (e.g., DECODE, and Generalized Single-Linkage).
(v) Most methods depend on multiple, often critical input parameters.
The authors propose a clustering approach that, to their knowledge, is unique in that it does not suffer from any of these drawbacks.
In detail, they make the following contributions:
(i) They introduce a hierarchical clustering method, called HDBSCAN, which generates a complete density-based clustering hierarchy from which a simplified hierarchy composed only of the most significant clusters can be easily extracted.
(ii) They propose a new measure of cluster stability for the purpose of extracting a set of significant clusters from possibly different levels of a simplified cluster tree produced by HDBSCAN.
(iii) They formulate the task of extracting a set of significant clusters as an optimization problem in which the overall stability of the composing clusters is maximized.
(iv) They propose an algorithm that finds the globally optimal solution to this problem.
(v) They demonstrate the advancement in density-based clustering that their approach represents on a variety of real-world data sets.
```
#!pip install hdbscan
import hdbscan  # the top-of-notebook import is commented out, so import it here
# by default min_samples is set equal to min_cluster_size
clusterer = hdbscan.HDBSCAN(min_cluster_size=3, gen_min_span_tree=True)
clusterer.fit(X)
# plt.figure(figsize = (18,8))
# clusterer.minimum_spanning_tree_.plot(edge_cmap='viridis',
# edge_alpha=0.6,
# node_size=120,
# edge_linewidth=3,
# )
# xmin, xmax, ymin, ymax = plt.axis()
# xwidth = xmax - xmin
# ywidth = ymax - ymin
# xw1 = xwidth*0.015
# yw1 = ywidth*0
# xw2 = xwidth*0.01
# yw2 = ywidth*0
# for i, txt in enumerate([i for i in range(len(X))]):
# if len(str(txt))==2:
# plt.annotate(txt, (X[:,0][i]+xw1, X[:,1][i]-yw1), fontsize=12, size=12)
# else:
# plt.annotate(txt, (X[:,0][i]+xw2, X[:,1][i]-yw2), fontsize=12, size=12)
# plt.show()
# distances in detail
dist_df = clusterer.minimum_spanning_tree_.to_pandas()
dist_df = dist_df.sort_values("distance", ascending=False)
dist_df
#they are processed in this order in the dendrogram below, starting from the whole data
#and finishing with the single points
# in case of a tie in distances the splits must be executed simultaneously (as in 69 and 68 in the dist_df)
#the scale is in log2, it proceeds from the top to the bottom
# plt.figure(figsize = (18,8))
# clusterer.single_linkage_tree_.plot(cmap='viridis', colorbar=True)
# plt.show()
#Each row of the dataframe corresponds to an edge in the tree.
#The parent and child are the ids of the parent and child nodes in the tree.
#Node ids less than the number of points in the original dataset represent individual points,
#while ids greater than the number of points are clusters.
#The lambda_val value is the value (1/distance) at which the child node
#leaves the cluster (1/distance of previous dataframe)
#The child_size is the number of points in the child node.
#clusterer.condensed_tree_.to_pandas()
# plt.figure(figsize = (18,8))
# clusterer.condensed_tree_.plot()
# clust_data = clusterer.condensed_tree_.get_plot_data()["cluster_bounds"]
# xmin, xmax, ymin, ymax = plt.axis()
# xwidth = xmax - xmin
# ywidth = ymax - ymin
# for name in list(clust_data.keys()):
# data = clust_data[name]
# x = (data[0] + data[1])/2 -xwidth*0.01
# y = (data[3])-ywidth*0.04
# plt.annotate("{0}".format(name), (x,y), fontsize=15, size=15, color="black")
# plt.show()
# plt.figure(figsize = (18,8))
# clusterer.condensed_tree_.plot(select_clusters=True, selection_palette=sns.color_palette())
# clust_data = clusterer.condensed_tree_.get_plot_data()["cluster_bounds"]
# xmin, xmax, ymin, ymax = plt.axis()
# xwidth = xmax - xmin
# ywidth = ymax - ymin
# for name in list(clust_data.keys()):
# data = clust_data[name]
# x = (data[0] + data[1])/2 -xwidth*0.01
# y = (data[3])-ywidth*0.04
# plt.annotate("{0}".format(name), (x,y), fontsize=15, size=15)
# plt.show()
#clusterer.labels_
#clusterer.probabilities_
#clusterer.outlier_scores_
# plt.figure(figsize=(18,8))
# palette = sns.color_palette()
# cluster_colors = [sns.desaturate(palette[col], sat)
# if col >= 0 else (0.5, 0.5, 0.5) for col, sat in
# zip(clusterer.labels_, clusterer.probabilities_)]
# plt.scatter(X.T[0], X.T[1], c=cluster_colors, s=400, edgecolor="black")
# xmin, xmax, ymin, ymax = plt.axis()
# xwidth = xmax - xmin
# ywidth = ymax - ymin
# xw1 = xwidth*0.008
# yw1 = ywidth*0.008
# xw2 = xwidth*0.005
# yw2 = ywidth*0.008
# for i, txt in enumerate([i for i in range(len(X))]):
# if len(str(txt))==2:
# plt.annotate(txt, (X[:,0][i]-xw1, X[:,1][i]-yw1), fontsize=12, size=12)
# else:
# plt.annotate(txt, (X[:,0][i]-xw2, X[:,1][i]-yw2), fontsize=12, size=12)
# plt.show()
```
# SPECTRAL CLUSTERING
## Initial Discussion
http://people.csail.mit.edu/dsontag/courses/ml14/notes/Luxburg07_tutorial_spectral_clustering.pdf
The ε-neighborhood graph: Here we connect all points whose pairwise distances are smaller than ε. As the distances between all connected points are roughly of the same scale (at most ε), weighting the edges would not incorporate more information about the data to the graph. Hence, the ε-neighborhood graph is usually considered as an unweighted graph.
k-nearest neighbor graphs: Here the goal is to connect vertex $v_i$ with vertex $v_j$ if $v_j$ is among the k-nearest neighbors of $v_i$. However, this definition leads to a directed graph, as the neighborhood relationship is not symmetric. There are two ways of making this graph undirected. The first is to simply ignore the directions of the edges, that is, we connect $v_i$ and $v_j$ with an undirected edge if $v_i$ is among the k-nearest neighbors of $v_j$ or if $v_j$ is among the k-nearest neighbors of $v_i$. The resulting graph is what is usually called the k-nearest neighbor graph. The second choice is to connect vertices $v_i$ and $v_j$ if both $v_i$ is among the k-nearest neighbors of $v_j$ and $v_j$ is among the k-nearest neighbors of $v_i$. The resulting graph is called the mutual k-nearest neighbor graph. In both cases, after connecting the appropriate vertices we weight the edges by the similarity of their endpoints.
The fully connected graph: Here we simply connect all points with positive similarity with each other, and we weight all edges by $s_{ij}$. As the graph should represent the local neighborhood relationships, this construction is only useful if the similarity function itself models local neighborhoods. An example of such a similarity function is the Gaussian similarity function $s(x_i, x_j) = \exp(-\|x_i - x_j\|^2 / (2\sigma^2))$, where the parameter $\sigma$ controls the width of the neighborhoods. This parameter plays a similar role as the parameter ε in the case of the ε-neighborhood graph.
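As a small numerical sketch of the fully connected construction (toy points, $\sigma = 1$):

```python
# Gaussian similarity matrix s(x_i, x_j) = exp(-||x_i - x_j||^2 / (2*sigma^2)).
import numpy as np
from sklearn.metrics import pairwise_distances

X_demo = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
sigma = 1.0
D = pairwise_distances(X_demo)          # Euclidean distances
S = np.exp(-D**2 / (2 * sigma**2))      # similarities: near 1 for close points
print(np.round(S, 3))
```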
The matrix $L$ satisfies the following properties:
1. $L$ is symmetric and positive semi-definite.
2. The smallest eigenvalue of $L$ is $0$; the corresponding eigenvector is the constant one vector $\mathbb{1}$.
3. $L$ has $n$ non-negative, real-valued eigenvalues $0 = \lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_n$.
Let $G$ be an undirected graph with non-negative weights. Then the multiplicity $k$ of the eigenvalue $0$ of $L$ equals the number of connected components $A_1, \ldots, A_k$ in the graph. The eigenspace of eigenvalue $0$ is spanned by the indicator vectors $\mathbb{1}_{A_1}, \ldots, \mathbb{1}_{A_k}$ of those components.
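This connected-components property is easy to check numerically; a small sketch on a toy graph made of two disjoint edges (the graph and the zero tolerance are illustrative choices):

```
import numpy as np

# adjacency matrix of a graph with two connected components: {0, 1} and {2, 3}
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))  # degree matrix
L = D - W                   # unnormalized graph Laplacian

eigvals = np.linalg.eigvalsh(L)  # L is symmetric, so eigenvalues are real and sorted
n_zero = int(np.sum(np.abs(eigvals) < 1e-10))
# n_zero == 2: the multiplicity of eigenvalue 0 equals the number of components
```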
$$L_{sym} := D^{−1/2} L D^{−1/2} = I − D^{−1/2} W D^{−1/2}$$
$$L_{rw} :=D^{−1} L = I − D^{−1} W. $$
The normalized Laplacians satisfy the following properties:
1. $\lambda$ is an eigenvalue of $L_{rw}$ with eigenvector $u$ if and only if $\lambda$ is an eigenvalue of $L_{sym}$ with eigenvector $w = D^{1/2} u$.
2. $\lambda$ is an eigenvalue of $L_{rw}$ with eigenvector $u$ if and only if $\lambda$ and $u$ solve the generalized eigenproblem $Lu = \lambda Du$.
3. $0$ is an eigenvalue of $L_{rw}$ with the constant one vector $\mathbb{1}$ as eigenvector. $0$ is an eigenvalue of $L_{sym}$ with eigenvector $D^{1/2}\mathbb{1}$.
4. $L_{sym}$ and $L_{rw}$ are positive semi-definite and have $n$ non-negative real-valued eigenvalues $0 = \lambda_1 \leq \ldots \leq \lambda_n$.
Let $G$ be an undirected graph with non-negative weights. Then the multiplicity $k$ of the eigenvalue $0$ of both $L_{rw}$ and $L_{sym}$ equals the number of connected components $A_{1} , . . . , A_{k}$ in the graph. For $L_{rw}$ , the eigenspace of $0$ is spanned by the indicator vectors $1_{Ai}$ of those components. For $L_{sym}$, the eigenspace of $0$ is spanned by the vectors $D^{1/2} 1_{Ai}$ .
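A quick numerical check of these properties; a sketch on a small weighted graph (the weights are illustrative) that verifies property 1 and the constant-eigenvector part of property 3:

```
import numpy as np

# small connected weighted graph (weights are illustrative)
W = np.array([[0., 2., 1.],
              [2., 0., 1.],
              [1., 1., 0.]])
d = W.sum(axis=1)
L = np.diag(d) - W                                     # unnormalized Laplacian
L_sym = np.diag(d ** -0.5) @ L @ np.diag(d ** -0.5)    # D^{-1/2} L D^{-1/2}
L_rw = np.diag(1.0 / d) @ L                            # D^{-1} L

# property 1: (lam, u) eigenpair of L_rw  <=>  (lam, D^{1/2} u) eigenpair of L_sym
lam, U = np.linalg.eig(L_rw)
u = U[:, 0]
w = np.diag(d ** 0.5) @ u
residual = np.linalg.norm(L_sym @ w - lam[0] * w)

# property 3: the constant one vector is an eigenvector of L_rw with eigenvalue 0
ones_residual = np.linalg.norm(L_rw @ np.ones(3))
```

Both residuals are zero up to floating-point error.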
```
float_formatter = lambda x: "%.3f" % x
np.set_printoptions(formatter={'float_kind':float_formatter})
sns.set()
X = np.array([
[1, 3], [2, 1], [1, 1],
[3, 2], [7, 8], [9, 8],
[9, 9], [8, 7], [13, 14],
[14, 14], [15, 16], [14, 15]
])
plt.figure(figsize=(10,3))
plt.scatter(X[:,0], X[:,1], alpha=0.7, edgecolors='b', s=250)
plt.xlabel('Weight')
plt.ylabel('Height')
plt.show()
W = pairwise_distances(X, metric="euclidean")
vectorizer = np.vectorize(lambda x: 1 if x < 5 else 0)
W = vectorizer(W)  # threshold the distance matrix into a 0/1 adjacency matrix
print(W)
def draw_graph(G):
pos = nx.spring_layout(G)
nx.draw_networkx_nodes(G, pos)
nx.draw_networkx_labels(G, pos)
nx.draw_networkx_edges(G, pos, width=1.0, alpha=0.5)
plt.show()
```
## First Example
Graph with a single connected component. We will observe a single null eigenvalue; the fact that the second smallest eigenvalue is far from zero indicates that there is a single cluster.
```
G = nx.random_graphs.erdos_renyi_graph(10, 0.5)
draw_graph(G)
W = nx.adjacency_matrix(G)
print(W.todense())
# degree matrix
D = np.diag(np.sum(np.array(W.todense()), axis=1))
print('degree matrix:')
print(D)
# Laplacian matrix
L = D - W
print('laplacian matrix:')
print(L)
e, v = np.linalg.eig(L)
# eigenvalues
print('eigenvalues:')
print(e)
# eigenvectors
print('eigenvectors:')
print(v)
fig = plt.figure(figsize=(12,3))
ax1 = plt.subplot(121)
plt.plot(e)
ax1.title.set_text('eigenvalues')
i = np.where(e < 10e-6)[0] #very low #1st column
ax2 = plt.subplot(122)
plt.plot(v[:, i[0]])
fig.tight_layout()
plt.show()
```
## Second Example
Graph with two connected components. We will observe two null eigenvalues, hence two clusters, since the third smallest eigenvalue is clearly larger than zero.
```
G = nx.Graph()
G.add_edges_from([
[1, 2], [1, 3], [1, 4], [2, 3], [2, 7], [3, 4], [4, 7], [1, 7],
[6, 5], [5, 8], [6, 8], [9, 8], [9, 6]
])
draw_graph(G)
W = nx.adjacency_matrix(G)
#print(W.todense())
# degree matrix
D = np.diag(np.sum(np.array(W.todense()), axis=1))
#print('degree matrix:')
#print(D)
# laplacian matrix
L = D - W
#print('laplacian matrix:')
#print(L)
e, v = np.linalg.eig(L)
# eigenvalues
print('eigenvalues:')
print(e)
# eigenvectors
print('eigenvectors:')
print(v)
fig = plt.figure(figsize=[18, 3])
ax1 = plt.subplot(131)
plt.plot(e)
ax1.title.set_text('eigenvalues')
i = np.where(e < 10e-6)[0] #(array([1, 6]),)
ax2 = plt.subplot(132)
plt.plot(v[:, i[0]]) #2nd column
ax2.title.set_text('first eigenvector with eigenvalue of 0')
ax3 = plt.subplot(133)
plt.plot(v[:, i[1]]) #7th column
ax3.title.set_text('second eigenvector with eigenvalue of 0')
plt.show()
new_mat = np.concatenate([v[:, i[0]], v[:, i[1]]], axis=1)
plt.scatter(np.array(new_mat.T[0])[0], np.array(new_mat.T[1])[0], alpha=0.5, s=400)
plt.show()
km = KMeans(init='k-means++', n_clusters=2)
km.fit(new_mat)
print(km.labels_)
```
Alternative way with $ L_{sym} $
```
D_pow = scipy.linalg.fractional_matrix_power(D,-0.5)
L_norm = np.matmul(np.matmul(D_pow, L), D_pow)
e, v = np.linalg.eig(L_norm)
# eigenvalues
print('eigenvalues:')
print(e)
# eigenvectors
print('eigenvectors:')
print(v)
i = np.where(e < 10e-6)[0] #(array([1, 6]),)
print(i)
new_mat = np.concatenate([v[:, i[0]], v[:, i[1]]], axis=1)
plt.scatter(np.array(new_mat.T[0])[0], np.array(new_mat.T[1])[0], alpha=0.5, s=400)
plt.show()
km = KMeans(init='k-means++', n_clusters=2)
km.fit(new_mat)
km.labels_
```
## Third Example
Graph with only one connected component, but with two visible clusters; we will observe one null eigenvalue, but the second and third smallest eigenvalues are also close to zero, so one can choose k=2 or k=3.
```
G = nx.Graph()
G.add_edges_from([
[1, 2], [1, 3], [1, 4], [2, 3],
[3, 4], [4, 5], [1, 5], [6, 7],
[7, 8], [6, 8], [6, 9], [9, 6], [7, 10], [7, 2]
])
draw_graph(G)
W = nx.adjacency_matrix(G)
print(W.todense())
D = np.diag(np.sum(np.array(W.todense()), axis=1))
L = D - W
e, v = np.linalg.eig(L)
# eigenvalues
print('eigenvalues:')
print(e)
# eigenvectors
print('eigenvectors:')
print(v)
fig = plt.figure(figsize=[18, 3])
ax1 = plt.subplot(131)
plt.plot(e)
ax1.title.set_text('eigenvalues')
i = np.where(e < 0.8)[0]
ax2 = plt.subplot(132)
plt.plot(v[:, i[0]])
ax3 = plt.subplot(133)
plt.plot(v[:, i[1]])
ax3.title.set_text('second eigenvector with eigenvalue close to 0')
plt.show()
new_mat = np.concatenate([v[:, i[0]], v[:, i[1]]], axis=1)
plt.scatter(np.array(new_mat.T[0])[0], np.array(new_mat.T[1])[0], alpha=0.5, s=400)
plt.show()
km = KMeans(init='k-means++', n_clusters=2)
km.fit(new_mat)
print(km.labels_)
new_mat = np.concatenate([v[:, i[0]], v[:, i[1]], v[:,i[2]]], axis=1)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(np.array(new_mat.T[0])[0], np.array(new_mat.T[1])[0],np.array(new_mat.T[2])[0], alpha=0.5, s=400)
plt.show()
km = KMeans(init='k-means++', n_clusters=3, random_state=42)
km.fit(new_mat)
print(km.labels_)
```
## Fourth Example
Graph with a single connected component, but with 3 or 4 visible clusters; therefore we try both k=3 and k=4.
```
G = nx.Graph()
G.add_edges_from([
[1, 2], [1, 3], [1, 4], [2, 3], [3, 4], [4, 5],
[1, 5], [6, 7], [7, 8], [6, 8], [6, 9], [9, 6],
[7, 10], [7, 2], [11, 12], [12, 13], [7, 12],
[11, 13]
])
draw_graph(G)
W = nx.adjacency_matrix(G)
#print(W.todense())
D = np.diag(np.sum(np.array(W.todense()), axis=1))
L = D - W
e, v = np.linalg.eig(L)
# eigenvalues
print('eigenvalues:')
print(e)
# eigenvectors
print('eigenvectors:')
print(v)
fig = plt.figure(figsize=[15, 6])
ax1 = plt.subplot(221)
plt.plot(e)
ax1.title.set_text('eigenvalues')
i = np.where(e < 0.8)[0]
ax2 = plt.subplot(222)
plt.plot(v[:, i[0]])
ax3 = plt.subplot(223)
plt.plot(v[:, i[1]])
ax3.title.set_text('second eigenvector with eigenvalue close to 0')
ax4 = plt.subplot(224)
plt.plot(v[:, i[2]])
ax4.title.set_text('third eigenvector with eigenvalue close to 0')
fig.tight_layout()
plt.show()
plt.figure(figsize=(3,3))
draw_graph(G)
new_mat = np.concatenate([v[:, i[0]], v[:, i[1]], v[:,i[2]]], axis=1)
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(np.array(new_mat.T[0])[0], np.array(new_mat.T[1])[0],np.array(new_mat.T[2])[0], alpha=0.5, s=400)
plt.show()
km = KMeans(init='k-means++', n_clusters=3, random_state=42)
km.fit(new_mat)
print(km.labels_)
new_mat = np.concatenate([v[:, i[0]], v[:, i[1]], v[:,i[2]], v[:,i[3]]], axis=1)
#fig = plt.figure()
#ax = fig.add_subplot(111, projection='3d')
#ax.scatter(np.array(new_mat.T[0])[0], np.array(new_mat.T[1])[0],np.array(new_mat.T[2])[0], alpha=0.5, s=400)
#plt.show()
km = KMeans(init='k-means++', n_clusters=4, random_state=42)
km.fit(new_mat)
print(km.labels_)
X, clusters = make_circles(n_samples=1000, noise=.06, factor=.5, random_state=0)
sc = SpectralClustering(n_clusters=2, affinity='nearest_neighbors', random_state=42)
sc_clustering = sc.fit(X)
plt.scatter(X[:,0], X[:,1], c=sc_clustering.labels_, cmap='rainbow', alpha=0.7, edgecolors='b')
```
# HIERARCHICAL AGGLOMERATIVE CLUSTERING
```
from clustviz.agglomerative import agg_clust
#agg_clust(X, "single")
#agg_clust(X, "complete")
#agg_clust(X, "average")
#agg_clust(X, "ward")
```
## Scipy examples
The Ward distances are not consistent with our custom implementation, but the resulting Ward clustering is.
SINGLE and COMPLETE linkage agree between the two methods, both in the distance values and in the scipy dendrogram.
AVERAGE linkage agrees between the two methods, but according to the scipy dendrogram the faster variant is preferable; the last result differs.
For WARD there is a single method; the distances do not match — they should be square-rooted and multiplied by $\sqrt{2}$ — and the last one is wrong.
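For reference, the scipy linkage calls discussed here follow this pattern; a minimal sketch on toy data (the two blobs are illustrative):

```
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.RandomState(0)
X_toy = np.vstack([rng.randn(10, 2), rng.randn(10, 2) + 10])  # two separated blobs

Z_single = linkage(X_toy, method='single')
Z_ward = linkage(X_toy, method='ward')
# each linkage matrix has n-1 merge rows: [idx1, idx2, merge distance, cluster size]

labels = fcluster(Z_ward, t=2, criterion='maxclust')  # cut into two flat clusters
```

With well-separated blobs, every linkage criterion recovers the same two flat clusters.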
```
def fancy_dendrogram(*args, **kwargs):
max_d = kwargs.pop('max_d', None)
if max_d and 'color_threshold' not in kwargs:
kwargs['color_threshold'] = max_d
annotate_above = kwargs.pop('annotate_above', 0)
ddata = dendrogram(*args, **kwargs)
if not kwargs.get('no_plot', False):
plt.title('Hierarchical Clustering Dendrogram (truncated)')
plt.xlabel('sample index or (cluster size)')
plt.ylabel('distance')
for i, d, c in zip(ddata['icoord'], ddata['dcoord'], ddata['color_list']):
x = 0.5 * sum(i[1:3])
y = d[1]
if y > annotate_above:
plt.plot(x, y, 'o', c=c)
plt.annotate("%.3g" % y, (x, y), xytext=(0, -5),
textcoords='offset points',
va='top', ha='center')
if max_d:
plt.axhline(y=max_d, c='k')
return ddata
Z = linkage(X, 'ward')
fancy_dendrogram(
Z,
truncate_mode='lastp',
p=8,
leaf_rotation=90.,
leaf_font_size=12.,
show_contracted=True,
annotate_above=1, # useful in small plots so annotations don't overlap
)
plt.show()
Z = linkage(X, 'average')
fancy_dendrogram(
Z,
truncate_mode='lastp',
p=20,
leaf_rotation=90.,
leaf_font_size=12.,
show_contracted=True,
annotate_above=0.2, # useful in small plots so annotations don't overlap
)
plt.show()
```
## Sklearn examples
```
# linkage = "ward"
# clustering = AgglomerativeClustering(n_clusters=2, linkage=linkage).fit(X)
# lab = clustering.labels_
# colors = { 0:"seagreen", 1:'beige', 2:'yellow', 3:'grey',
# 4:'pink', 5:'navy', 6:'orange', 7:'purple', 8:'salmon', 9:'olive', 10:'brown',
# 11:'tan', 12: 'plum', 13:'red', 14:'lightblue', 15:"khaki", 16:"gainsboro", 17:"peachpuff"}
# plt.figure(figsize=(14,4))
# for i in range(len(X)):
# plt.scatter(X[i,0], X[i,1], color=colors[lab[i]], s=300)
# plt.show()
```
# CURE
The clustering algorithm starts with each input point as a separate cluster, and at each successive step merges the closest pair of clusters. In order to compute the distance between a pair of clusters, for each cluster, $c$ representative points are stored. These are determined by first choosing $c$ well scattered points within the cluster, and then shrinking them toward the mean of the cluster by a fraction $\alpha$.
The distance between two clusters is then the distance between the closest pair of representative points - one belonging to each of the two clusters. Thus, only the representative points of a cluster are used to compute its distance from other clusters.
The $c$ representative points attempt to capture the physical shape and geometry of the cluster. Furthermore, shrinking the scattered points toward the mean by the fraction $\alpha$ gets rid of surface abnormalities and mitigates the effects of outliers. The reason is that outliers typically lie further away from the cluster center, so the shrinking causes them to move more toward the center, while the remaining representative points experience minimal shifts. The larger movements of the outliers thus reduce their ability to cause the wrong clusters to be merged. The parameter $\alpha$ can also be used to control the shapes of clusters. A smaller value of $\alpha$ shrinks the scattered points very little and thus favors elongated clusters. On the other hand, with larger values of $\alpha$, the scattered points get located closer to the mean, and clusters tend to be more compact.
The HEAP update phase is not optimized in this version of the algorithm, see the paper to know more about optimization details.
Choose `cure_sample_part` for the optimized version for larger datasets.
In this version, outliers are not taken into consideration; see the paper to know how to handle outliers.
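The representative-point step described above can be sketched in plain NumPy; a toy version that uses a greedy farthest-point choice for the $c$ "well scattered" points (the point set, $c$, and $\alpha$ are illustrative, and the selection heuristic is a simplification of the paper's procedure):

```
import numpy as np

def shrink_representatives(cluster_points, c=4, alpha=0.5):
    # pick c well-scattered points, then shrink them toward the cluster mean by alpha
    mean = cluster_points.mean(axis=0)
    # greedy farthest-point selection for "well scattered" representatives
    reps = [cluster_points[np.argmax(np.linalg.norm(cluster_points - mean, axis=1))]]
    while len(reps) < min(c, len(cluster_points)):
        dists = np.min(
            [np.linalg.norm(cluster_points - r, axis=1) for r in reps], axis=0)
        reps.append(cluster_points[np.argmax(dists)])
    reps = np.array(reps)
    return reps + alpha * (mean - reps)  # move each point a fraction alpha toward the mean

pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
reps = shrink_representatives(pts, c=4, alpha=0.5)
```

Each shrunken representative ends up at distance $(1-\alpha)$ times its original distance from the mean, which is exactly how outliers get pulled inward the most.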
```
from clustviz.cure import cure, plot_results_cure, cure_sample_part, chernoffBounds, demo_parameters
clusters, rep, mat_a = cure(X, 3, c=4, alpha=0.6)
#clusters, rep, mat_a = cure(X, 3, c=4, alpha=0.5, plotting=True)
#clusters, rep, mat_a = cure(varied, 3, c=7, alpha=0.85, plotting=False)
#plot_results_cure(clusters)
#_ = cure_sample_part(X, c=3, alpha=0.1, k=3)
k=5
chernoffBounds(u_min=round(len(X) / k), f=0.00001, N=len(X), k=k, d=0.05)
#pd.DataFrame(X).sample(25)
#demo_parameters()
```
# BIRCH
See the paper and the PowerPoint presentation: Balanced Iterative Reducing and Clustering using Hierarchies.
It consists of a preprocessing phase that stores the input data into a CF-tree, whose leaves are then analyzed by a clustering algorithm (not necessarily hierarchical) to derive the clustering of the data. It is useful for large datasets, and it runs in O(n) time-complexity, but the order of input data as well as the non-spherical shape of clusters may compromise its effectiveness.
In order to plot all the steps, the following has been modified:
Pyclustering cluster/birch.py:
- many "prints" have been inserted
- plot_tree_fin and plot_birch_leaves have been inserted into "insert_data" method of the birch class, adding the argument "plotting"
to both process and insert_data functions
- return_tree method has been added
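Besides the modified pyclustering version used below, scikit-learn also ships a standard BIRCH implementation; a minimal sketch (the blob data and CF-tree threshold are illustrative choices):

```
from sklearn.cluster import Birch
from sklearn.datasets import make_blobs

Xb, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)

# threshold bounds the radius of a CF-tree subcluster; the tree's leaf
# subclusters are then grouped into n_clusters final clusters
birch = Birch(threshold=0.5, branching_factor=50, n_clusters=3)
labels = birch.fit_predict(Xb)
```

Setting `n_clusters=None` instead would return the raw leaf subclusters without the final global clustering step.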
```
from clustviz.birch import birch, plot_birch_leaves, plot_tree_fin
from pyclustering.container.cftree import measurement_type
from pyclustering.cluster import cluster_visualizer
birch_instance = birch(X.tolist(), 3, diameter=4, max_node_entries=5,
type_measurement=measurement_type.CENTROID_EUCLIDEAN_DISTANCE)
birch_instance.process(plotting=True)
#plot_tree_fin(birch_instance.return_tree(), info=True)
#plot_birch_leaves(birch_instance.return_tree(), X)
#birch_instance.return_tree().show_feature_distribution()
#clusters = birch_instance.get_clusters()
# # Visualize allocated clusters
#visualizer = cluster_visualizer()
#visualizer.append_clusters(clusters, X.tolist())
#visualizer.show()
```
# PAM

```
# from clustviz.pam import KMedoids
# X = make_blobs(n_samples=100,
# cluster_std=[2., 2., 2.],
# random_state=42)[0]
# z = KMedoids(n_cluster=3, tol=0.01)
#
# z.fit(X.tolist())
```
# CLARA

```
#beware that input data dimension must be at least 40+2*n_clusters, otherwise it is useless
X = make_blobs(n_samples=125,
cluster_std=[2, 4, 2],
random_state=42)[0]
from clustviz.clara import ClaraClustering
#Clara = ClaraClustering()
#final_result = Clara.clara(pd.DataFrame(X), 2, 'fast_euclidean')
```
# CLARANS
$G_{n,k}$ is a graph where each node is a combination of $k$ points of the input dataset, i.e. a possible set of medoids. Every node is linked only to nodes that differ by exactly one element, so every node has $k(n-k)$ neighbors; a neighbor of a node is thus a set of medoids in which a single medoid is different. For each node one can compute the total cost associated with the corresponding configuration.

The algorithm takes as input numlocal (the number of iterations used to solve the problem), maxneighbor (the maximum number of neighbors examined) and the number of clusters to be formed ($k$).
Then the iteration starts with $i$ set to 1; beforehand, Mincost (the optimal cost) is set to $\infty$ and bestnode (the optimal set of medoids) is set to an empty tuple.
Now $k$ random data points are selected as the current medoids and clusters are formed around them (Euclidean distance can be used to find the nearest medoid when forming clusters).
After this a new loop starts, where $j$ is set to $1$. A random current medoid is selected, together with a random candidate data point (a random neighbor) to replace it. If the replacement yields a lower TotalCost (the sum of the distances between all the points in the clusters and their respective medoids), the replacement takes place and $j$ is not incremented; otherwise $j = j + 1$.
Once $j$ > maxneighbor, the current medoids are taken and their TotalCost is compared with Mincost. If the TotalCost is less than Mincost, then bestnode is updated with the current medoids.
$i$ is incremented afterwards; if $i$ is greater than numlocal, bestnode is produced as output, otherwise the whole process is repeated.
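The loop described above can be sketched compactly in NumPy; a toy version under the stated parameters (the data, numlocal, and maxneighbor values are illustrative, and an accepted swap restarts the neighbor count, as in the original paper):

```
import numpy as np

rng = np.random.RandomState(0)
data = np.vstack([rng.randn(15, 2), rng.randn(15, 2) + 8])  # two well-separated blobs

def total_cost(medoid_idx):
    # TotalCost: sum of distances from every point to its nearest medoid
    d = np.linalg.norm(data[:, None, :] - data[medoid_idx][None, :, :], axis=-1)
    return d.min(axis=1).sum()

def clarans(k=2, numlocal=2, maxneighbor=20):
    mincost, bestnode = np.inf, None
    for _ in range(numlocal):
        current = list(rng.choice(len(data), k, replace=False))
        j = 0
        while j < maxneighbor:
            # random neighbor: replace one random medoid with a random non-medoid
            cand = current.copy()
            cand[rng.randint(k)] = rng.choice(
                [p for p in range(len(data)) if p not in current])
            if total_cost(cand) < total_cost(current):
                current, j = cand, 0  # accept the swap, restart the neighbor count
            else:
                j += 1
        if total_cost(current) < mincost:
            mincost, bestnode = total_cost(current), current
    return bestnode, mincost

medoids, cost = clarans()
```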
In order to plot all the steps, the following has been modified:
Pyclustering cluster/clarans.py:
- many "prints" have been inserted
- plot_pam function has been inserted, and the process method has been modified with an addition of the parameter "plotting"
```
from clustviz.clarans import clarans, plot_tree_clarans
#plot_tree_clarans(pd.DataFrame(X[6:10]),2)
#z = clarans(X, 3, 3, 6).process(plotting=True)
#z.get_clusters()
```
# CHAMELEON & CHAMELEON2
## CHAMELEON
Combines initial partition of data with hierarchical clustering techniques; it modifies clusters dynamically
Step1:
- Generate a KNN graph
- because it's local, it reduces influence of noise and outliers
- provides automatic adjustment for densities
Step2:
- use METIS: a graph partitioning algorithm
- get equally-sized groups of well-connected vertices
- this produces "sub-clusters" - something that is a part of true clusters
Step3:
- recombine sub-clusters
- combine two clusters if they are relatively close and they are relatively interconnected
- so they are merged only if the new cluster will be similar to the original ones
i.e. when "self-similarity" is preserved (similar to the join operation in Scatter/Gather)
But
- Curse of Dimensionality makes similarity functions behave poorly
- distances become more uniform as dimensionality grows
- similarity between two points of high dimensionality can be misleading
- often points may be similar even though they should belong to different clusters
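The distance-uniformity effect can be checked empirically; a quick sketch comparing the relative contrast of pairwise distances in low and high dimension (the sample size and dimensions are illustrative choices):

```
import numpy as np

def relative_contrast(dim, n=200, seed=0):
    # (max - min) / min over all pairwise distances of uniform random points
    rng = np.random.RandomState(seed)
    P = rng.rand(n, dim)
    sq = (P ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * P @ P.T  # squared pairwise distances
    d = np.sqrt(np.maximum(d2[np.triu_indices(n, k=1)], 0))
    return (d.max() - d.min()) / d.min()

low = relative_contrast(2)      # low dimension: distances spread widely
high = relative_contrast(1000)  # high dimension: distances concentrate
```

The contrast shrinks sharply as the dimension grows, which is why nearest-neighbor graphs become less informative in high dimension.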
```
from clustviz.chameleon.graphtools import plot2d_data
from clustviz.chameleon.chameleon import cluster
#k = n_clusters, knn = number of nearest neighbors, m = n_clusters to reach in the initial clustering phase,
#alpha = exponent of relative closeness
df = pd.DataFrame(X1)
res, h = cluster(df, k=1, knn=15, m=10, alpha=2, plot=True)
# draws a 2-D scatter plot with clusters
print("FINAL")
plot2d_data(res)
```
## CHAMELEON2
The differences from the standard Chameleon algorithm lie in the use of a symmetric KNN graph, in the FLOOD-FILLING phase, in the modified definitions of $R_{IC}$ and $R_{CL}$, and in an automated procedure for choosing the optimal number of clusters. Here we use hMETIS to partition the graph; the official paper uses the Fiduccia-Mattheyses bisection algorithm instead.
```
from clustviz.chameleon.graphtools import plot2d_data
from clustviz.chameleon.chameleon2 import cluster2
#df = pd.DataFrame(varied)
df = pd.DataFrame(X)
#according to the paper, standard number of partitions
num_part = int(round(len(df) / max(5, round(len(df)/100))))
print("standard number of partitions:", num_part)
res, h = cluster2(df, k=3, knn=14, m=9, alpha=2, beta=1, m_fact=1000, plot=True, auto_extract=True)
# draws a 2-D scatter plot with clusters
print("\n")
print("FINAL")
plot2d_data(res)
```
# DENCLUE
```
from clustviz.denclue import DENCLUE, plot_3d_or_contour, plot_3d_both, plot_grid_rect, plot_infl
#plot_3d_or_contour(varied, s=0.75, three=True, scatter=True, prec=10)
#plot_3d_or_contour(varied, s=1.5, three=False, scatter=True, prec=10)
#plot_3d_or_contour(varied, s=2, three=False, scatter=True, prec=12)
#plot_infl(varied, s=1, xi=3)
#plot_3d_both(data=varied, s=2, xi=3, prec=10)
#plot_grid_rect(varied, s=2, cube_kind="highly_populated")
#lab = DENCLUE(data=X, s=0.5, xi=2, xi_c=3, tol=2, prec=5, plotting=True)
```
<h1>SVHN Classification using CNNs</h1>
---
# Importing Keras Modules
```
#Importing important modules
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
#Installing Tensorboard for Colab
!pip install tensorboardcolab
from google.colab import drive
drive.mount('/content/drive')
```
# Loading the Dataset
<h3>Import train and test sets of SVHN dataset</h3>
```
import h5py
import numpy as np
# Open the file as readonly
h5f = h5py.File('/content/drive/My Drive/Artificial Intelligence/SVHN_single_grey1.h5', 'r')
# Load the training, test and validation set
x_train = h5f['X_train'][:]
y_train = h5f['y_train'][:]
x_test = h5f['X_test'][:]
y_test = h5f['y_test'][:]
# Close this file
h5f.close()
```
<h4>Visualizing the dataset</h4>
(Visualize first 25 test images from the dataset using matplotlib) **2.5 Points**
```
%matplotlib inline
import matplotlib.pyplot as plt
h = 10
w = 10
rows = 5
cols = 5
fig = plt.figure(figsize=(8,8))
for i in range(1,rows*cols+1):
img = x_test[i-1]  # subplot indices are 1-based, image indices are 0-based
fig.add_subplot(rows,cols,i)
plt.imshow(img,cmap='gray')
plt.show()
```
<h3>Reshape train and test sets into shapes compatible with keras models</h3>
<h4>Keras expects data to be in the format (N_E, N_H, N_W, N_C): N_E = number of examples, N_H = height, N_W = width, N_C = number of channels.</h4>
```
# input image dimensions
img_rows, img_cols = 32, 32
#Keras expects data to be in the format (N_E.N_H,N_W,N_C)
#N_E = Number of Examples, N_H = height, N_W = Width, N_C = Number of Channels.
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
```
<h3>Pre-processing the dataset</h3>
<h4>Normalizing the input</h4>
```
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
#Normalizing the input
x_train /= 255.0
x_test /= 255.0
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
batch_size = 25
num_classes = 10
epochs = 25
```
<h4>Convert Labels from digits to one hot vectors</h4> **2.5 Points**
```
from tensorflow.keras.utils import to_categorical
y_test = to_categorical(y_test,num_classes)
y_train = to_categorical(y_train,num_classes)
```
# Building the CNN
<h4>Define the layers of model</h4> **5 Points**
```
model = Sequential()
model.add(Conv2D(filters=32,kernel_size=(3,3),input_shape=input_shape,activation='relu'))
model.add(Conv2D(filters=64,kernel_size=(3,3),activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(units=128,activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(units=num_classes,activation='softmax'))
model.summary()
```
<h4>Set Adam Optimizer and Loss function for training</h4> **2.5 Points**
```
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import categorical_crossentropy
model.compile(optimizer=Adam(lr=0.001),loss=categorical_crossentropy,metrics=['accuracy'])
```
# Training the CNN - 10 Points
<h4>Initializing the Tensorboard callback for visualization of training</h4>
```
# Importing necessary libraries for tensorboard
from tensorflow.keras.callbacks import TensorBoard
from time import time
# Setting the directory to store the loss
tensorboard = TensorBoard(log_dir=".logs/{}".format(time()))
```
<h4>Initializing Early stopping and Model chekpoint callbacks </h4>
```
# Stop early if the validation loss does not improve by at least 0.002 for more than 5 consecutive epochs
early_stopping = EarlyStopping(monitor='val_loss',min_delta=0.002,patience=5)
# Adding a ModelCheckpoint callback to the fit function, which saves the weights whenever val_loss reaches a new minimum,
# hence keeping the best weights that occurred during training
model_checkpoint = ModelCheckpoint('svhn_checkpoint_{epoch:02d}_loss{val_loss:.4f}.h5',monitor='val_loss',
save_best_only=True,save_weights_only=True,mode='auto',period=1)
```
<h4>Fit the model to the dataset</h4>
```
#Training on the dataset and adding the all the callbacks to the fit function.
#Once the training starts, results start appearing on Tensorboard after 1 epoch
history = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test),
callbacks=[tensorboard,early_stopping,model_checkpoint])
```
# Evaluating the CNN
```
training_loss = history.history['loss']
validation_loss = history.history['val_loss']
epochs_range = range(1, len(training_loss) + 1)  # early stopping may end training before `epochs`
plt.plot(epochs_range, training_loss, 'g', label="Training Loss")
plt.plot(epochs_range, validation_loss, 'r', label="Validation Loss")
plt.title("Training Loss Vs Validation Loss")
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
training_accuracy = history.history['accuracy']
validation_accuracy = history.history['val_accuracy']
epochs_range = range(1, len(training_accuracy) + 1)
plt.plot(epochs_range, training_accuracy, 'g', label='Training accuracy')
plt.plot(epochs_range, validation_accuracy, 'r', label='Validation accuracy')
plt.title('Training and Validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
<h4>Evaluate trained model on the test set</h4> ** 2.5 Points**
```
model.evaluate(x_test,y_test,verbose=2)
```
<h4>Visualize 5 test set image predictions</h4> **2.5 Points**
```
import numpy as np
fig = plt.figure(figsize=(10,10))
row = 2
col = 5
for i in range(1,row*col+1):
fig.add_subplot(row,col,i)
plt.imshow(x_test[i].reshape(32,32),cmap='gray')
label = np.argmax(model.predict(x_test[i].reshape(1,32,32,1)))
plt.xlabel(label)
plt.show()
```
# Saving the CNN
<h4>Save the trained weights and model in h5 files</h4> **2.5 Points**
```
# Save the pretrained models in a drive
model.save_weights("/content/drive/My Drive/Artificial Intelligence/svhn_checkpoint_14_loss0.3417.h5")
```
## Dependencies
```
import os
import sys
import cv2
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import multiprocessing as mp
import matplotlib.pyplot as plt
from tensorflow import set_random_seed
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
from keras import backend as K
from keras.models import Model
from keras.utils import to_categorical
from keras import optimizers, applications
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Dropout, GlobalAveragePooling2D, Input
from keras.callbacks import EarlyStopping, ReduceLROnPlateau, Callback, LearningRateScheduler
def seed_everything(seed=0):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
set_random_seed(seed)
seed = 0
seed_everything(seed)
%matplotlib inline
sns.set(style="whitegrid")
warnings.filterwarnings("ignore")
sys.path.append(os.path.abspath('../input/efficientnet/efficientnet-master/efficientnet-master/'))
from efficientnet import *
```
## Load data
```
hold_out_set = pd.read_csv('../input/aptos-split-oldnew/hold-out_5.csv')
X_train = hold_out_set[hold_out_set['set'] == 'train']
X_val = hold_out_set[hold_out_set['set'] == 'validation']
test = pd.read_csv('../input/aptos2019-blindness-detection/test.csv')
test["id_code"] = test["id_code"].apply(lambda x: x + ".png")
print('Number of train samples: ', X_train.shape[0])
print('Number of validation samples: ', X_val.shape[0])
print('Number of test samples: ', test.shape[0])
display(X_train.head())
```
# Model parameters
```
# Model parameters
FACTOR = 2
BATCH_SIZE = 8 * FACTOR
EPOCHS = 20
WARMUP_EPOCHS = 5
LEARNING_RATE = 1e-4 * FACTOR
WARMUP_LEARNING_RATE = 1e-3 * FACTOR
HEIGHT = 256
WIDTH = 256
CHANNELS = 3
TTA_STEPS = 5
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
LR_WARMUP_EPOCHS_1st = 2
LR_WARMUP_EPOCHS_2nd = 5
STEP_SIZE = len(X_train) // BATCH_SIZE
TOTAL_STEPS_1st = WARMUP_EPOCHS * STEP_SIZE
TOTAL_STEPS_2nd = EPOCHS * STEP_SIZE
WARMUP_STEPS_1st = LR_WARMUP_EPOCHS_1st * STEP_SIZE
WARMUP_STEPS_2nd = LR_WARMUP_EPOCHS_2nd * STEP_SIZE
```
# Pre-procecess images
```
old_data_base_path = '../input/diabetic-retinopathy-resized/resized_train/resized_train/'
new_data_base_path = '../input/aptos2019-blindness-detection/train_images/'
test_base_path = '../input/aptos2019-blindness-detection/test_images/'
train_dest_path = 'base_dir/train_images/'
validation_dest_path = 'base_dir/validation_images/'
test_dest_path = 'base_dir/test_images/'
# Making sure directories don't exist
if os.path.exists(train_dest_path):
shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
shutil.rmtree(test_dest_path)
# Creating train, validation and test directories
os.makedirs(train_dest_path)
os.makedirs(validation_dest_path)
os.makedirs(test_dest_path)
def crop_image(img, tol=7):
if img.ndim ==2:
mask = img>tol
return img[np.ix_(mask.any(1),mask.any(0))]
elif img.ndim==3:
gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
mask = gray_img>tol
check_shape = img[:,:,0][np.ix_(mask.any(1),mask.any(0))].shape[0]
if (check_shape == 0): # image is too dark so that we crop out everything,
return img # return original image
else:
img1=img[:,:,0][np.ix_(mask.any(1),mask.any(0))]
img2=img[:,:,1][np.ix_(mask.any(1),mask.any(0))]
img3=img[:,:,2][np.ix_(mask.any(1),mask.any(0))]
img = np.stack([img1,img2,img3],axis=-1)
return img
def circle_crop(img):
img = crop_image(img)
height, width, depth = img.shape
largest_side = np.max((height, width))
img = cv2.resize(img, (largest_side, largest_side))
height, width, depth = img.shape
x = width//2
y = height//2
r = np.amin((x, y))
circle_img = np.zeros((height, width), np.uint8)
cv2.circle(circle_img, (x, y), int(r), 1, thickness=-1)
img = cv2.bitwise_and(img, img, mask=circle_img)
img = crop_image(img)
return img
def preprocess_image(image_id, base_path, save_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
image = cv2.imread(base_path + image_id)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = circle_crop(image)
image = cv2.resize(image, (HEIGHT, WIDTH))
# image = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0,0), sigmaX), -4 , 128)
cv2.imwrite(save_path + image_id, image)
def preprocess_data(df, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
item = df.iloc[i]
image_id = item['id_code']
item_set = item['set']
item_data = item['data']
if item_set == 'train':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, train_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, train_dest_path)
if item_set == 'validation':
if item_data == 'new':
preprocess_image(image_id, new_data_base_path, validation_dest_path)
if item_data == 'old':
preprocess_image(image_id, old_data_base_path, validation_dest_path)
def preprocess_test(df, base_path=test_base_path, save_path=test_dest_path, HEIGHT=HEIGHT, WIDTH=WIDTH, sigmaX=10):
df = df.reset_index()
for i in range(df.shape[0]):
image_id = df.iloc[i]['id_code']
preprocess_image(image_id, base_path, save_path)
n_cpu = mp.cpu_count()
train_n_cnt = X_train.shape[0] // n_cpu
val_n_cnt = X_val.shape[0] // n_cpu
test_n_cnt = test.shape[0] // n_cpu
# Pre-process train set (old and new data)
pool = mp.Pool(n_cpu)
dfs = [X_train.iloc[train_n_cnt*i:train_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_train.iloc[train_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process validation set
pool = mp.Pool(n_cpu)
dfs = [X_val.iloc[val_n_cnt*i:val_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = X_val.iloc[val_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_data, [x_df for x_df in dfs])
pool.close()
# Pre-process test set
pool = mp.Pool(n_cpu)
dfs = [test.iloc[test_n_cnt*i:test_n_cnt*(i+1)] for i in range(n_cpu)]
dfs[-1] = test.iloc[test_n_cnt*(n_cpu-1):]
res = pool.map(preprocess_test, [x_df for x_df in dfs])
pool.close()
```
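The three `pool.map` calls above all rely on the same DataFrame-chunking pattern: split the frame into one contiguous slice per worker, letting the last slice absorb the remainder rows. A self-contained sketch (toy frame and worker count are illustrative):

```python
import pandas as pd

def split_for_pool(df, n_workers):
    """Split df into n_workers contiguous chunks; the last chunk absorbs the remainder."""
    chunk = df.shape[0] // n_workers
    dfs = [df.iloc[chunk * i:chunk * (i + 1)] for i in range(n_workers)]
    dfs[-1] = df.iloc[chunk * (n_workers - 1):]  # remainder rows go to the last worker
    return dfs

df = pd.DataFrame({'id_code': ['img_{}'.format(i) for i in range(10)]})
chunks = split_for_pool(df, 3)
print([len(c) for c in chunks])  # -> [3, 3, 4]
```

Each chunk can then be handed to `pool.map` exactly as done above; because the slices are disjoint and cover the whole frame, every image is preprocessed exactly once.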
# Data generator
```
datagen=ImageDataGenerator(rescale=1./255,
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
train_generator=datagen.flow_from_dataframe(
dataframe=X_train,
directory=train_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
valid_generator=datagen.flow_from_dataframe(
dataframe=X_val,
directory=validation_dest_path,
x_col="id_code",
y_col="diagnosis",
class_mode="raw",
batch_size=BATCH_SIZE,
target_size=(HEIGHT, WIDTH),
seed=seed)
test_generator=datagen.flow_from_dataframe(
dataframe=test,
directory=test_dest_path,
x_col="id_code",
batch_size=1,
class_mode=None,
shuffle=False,
target_size=(HEIGHT, WIDTH),
seed=seed)
def cosine_decay_with_warmup(global_step,
                             learning_rate_base,
                             total_steps,
                             warmup_learning_rate=0.0,
                             warmup_steps=0,
                             hold_base_rate_steps=0):
    """
    Cosine decay schedule with warm-up period.
    In this schedule, the learning rate grows linearly from warmup_learning_rate
    to learning_rate_base over warmup_steps, then transitions to a cosine decay
    schedule.
    :param global_step {int}: global step.
    :param learning_rate_base {float}: base learning rate.
    :param total_steps {int}: total number of training steps.
    :param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
    :param warmup_steps {int}: number of warmup steps. (default: {0}).
    :param hold_base_rate_steps {int}: optional number of steps to hold the base learning rate before decaying. (default: {0}).
    :returns: a float representing the learning rate.
    :raises ValueError: if warmup_learning_rate is larger than learning_rate_base, or if warmup_steps is larger than total_steps.
    """
    if total_steps < warmup_steps:
        raise ValueError('total_steps must be larger or equal to warmup_steps.')
    learning_rate = 0.5 * learning_rate_base * (1 + np.cos(
        np.pi *
        (global_step - warmup_steps - hold_base_rate_steps
        ) / float(total_steps - warmup_steps - hold_base_rate_steps)))
    if hold_base_rate_steps > 0:
        learning_rate = np.where(global_step > warmup_steps + hold_base_rate_steps,
                                 learning_rate, learning_rate_base)
    if warmup_steps > 0:
        if learning_rate_base < warmup_learning_rate:
            raise ValueError('learning_rate_base must be larger or equal to warmup_learning_rate.')
        slope = (learning_rate_base - warmup_learning_rate) / warmup_steps
        warmup_rate = slope * global_step + warmup_learning_rate
        learning_rate = np.where(global_step < warmup_steps, warmup_rate,
                                 learning_rate)
    return np.where(global_step > total_steps, 0.0, learning_rate)
class WarmUpCosineDecayScheduler(Callback):
    """Cosine decay with warmup learning rate scheduler."""
    def __init__(self,
                 learning_rate_base,
                 total_steps,
                 global_step_init=0,
                 warmup_learning_rate=0.0,
                 warmup_steps=0,
                 hold_base_rate_steps=0,
                 verbose=0):
        """
        Constructor for cosine decay with warmup learning rate scheduler.
        :param learning_rate_base {float}: base learning rate.
        :param total_steps {int}: total number of training steps.
        :param global_step_init {int}: initial global step, e.g. from a previous checkpoint.
        :param warmup_learning_rate {float}: initial learning rate for warm up. (default: {0.0}).
        :param warmup_steps {int}: number of warmup steps. (default: {0}).
        :param hold_base_rate_steps {int}: optional number of steps to hold the base learning rate before decaying. (default: {0}).
        :param verbose {int}: 0: quiet, 1: update messages. (default: {0}).
        """
        super(WarmUpCosineDecayScheduler, self).__init__()
        self.learning_rate_base = learning_rate_base
        self.total_steps = total_steps
        self.global_step = global_step_init
        self.warmup_learning_rate = warmup_learning_rate
        self.warmup_steps = warmup_steps
        self.hold_base_rate_steps = hold_base_rate_steps
        self.verbose = verbose
        self.learning_rates = []
    def on_batch_end(self, batch, logs=None):
        self.global_step = self.global_step + 1
        lr = K.get_value(self.model.optimizer.lr)
        self.learning_rates.append(lr)
    def on_batch_begin(self, batch, logs=None):
        lr = cosine_decay_with_warmup(global_step=self.global_step,
                                      learning_rate_base=self.learning_rate_base,
                                      total_steps=self.total_steps,
                                      warmup_learning_rate=self.warmup_learning_rate,
                                      warmup_steps=self.warmup_steps,
                                      hold_base_rate_steps=self.hold_base_rate_steps)
        K.set_value(self.model.optimizer.lr, lr)
        if self.verbose > 0:
            print('\nBatch %02d: setting learning rate to %s.' % (self.global_step + 1, lr))
```
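The schedule's boundary behavior can be sanity-checked with a stripped-down copy of `cosine_decay_with_warmup` (the `hold_base_rate_steps` branch is omitted for brevity, and the constants here are illustrative, not the notebook's actual hyperparameters):

```python
import numpy as np

def lr_at(step, base=1e-3, total=100, warm_lr=1e-5, warm=10):
    # Cosine decay from `base` down to 0 between step `warm` and `total`.
    lr = 0.5 * base * (1 + np.cos(np.pi * (step - warm) / float(total - warm)))
    if warm > 0:
        # Linear warmup from `warm_lr` up to `base` over the first `warm` steps.
        slope = (base - warm_lr) / warm
        lr = np.where(step < warm, slope * step + warm_lr, lr)
    return float(np.where(step > total, 0.0, lr))

print(lr_at(0))    # warmup start: returns warm_lr
print(lr_at(10))   # warmup end: peak rate, equals base
print(lr_at(101))  # past total_steps: schedule returns 0.0
```

The rate climbs linearly to its peak at the end of warmup, then follows the half-cosine down to zero at `total_steps`, which is the shape plotted in the learning-rate figure later in this notebook.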
# Model
```
def create_model(input_shape):
    input_tensor = Input(shape=input_shape)
    base_model = EfficientNetB5(weights=None,
                                include_top=False,
                                input_tensor=input_tensor)
    base_model.load_weights('../input/efficientnet-keras-weights-b0b5/efficientnet-b5_imagenet_1000_notop.h5')
    x = GlobalAveragePooling2D()(base_model.output)
    final_output = Dense(1, activation='linear', name='final_output')(x)
    model = Model(input_tensor, final_output)
    return model
```
# Train top layers
```
model = create_model(input_shape=(HEIGHT, WIDTH, CHANNELS))
for layer in model.layers:
    layer.trainable = False
for i in range(-2, 0):
    model.layers[i].trainable = True
cosine_lr_1st = WarmUpCosineDecayScheduler(learning_rate_base=WARMUP_LEARNING_RATE,
total_steps=TOTAL_STEPS_1st,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_1st,
hold_base_rate_steps=(2 * STEP_SIZE))
metric_list = ["accuracy"]
callback_list = [cosine_lr_1st]
optimizer = optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
callbacks=callback_list,
verbose=2).history
```
# Fine-tune the complete model
```
for layer in model.layers:
    layer.trainable = True
es = EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
cosine_lr_2nd = WarmUpCosineDecayScheduler(learning_rate_base=LEARNING_RATE,
total_steps=TOTAL_STEPS_2nd,
warmup_learning_rate=0.0,
warmup_steps=WARMUP_STEPS_2nd,
hold_base_rate_steps=(3 * STEP_SIZE))
callback_list = [es, cosine_lr_2nd]
optimizer = optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss='mean_squared_error', metrics=metric_list)
model.summary()
history = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=2).history
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 6))
ax1.plot(cosine_lr_1st.learning_rates)
ax1.set_title('Warm up learning rates')
ax2.plot(cosine_lr_2nd.learning_rates)
ax2.set_title('Fine-tune learning rates')
plt.xlabel('Steps')
plt.ylabel('Learning rate')
sns.despine()
plt.show()
```
# Model loss graph
```
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 14))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
# Create an empty dataframe to keep the predictions and labels
df_preds = pd.DataFrame(columns=['label', 'pred', 'set'])
train_generator.reset()
valid_generator.reset()
# Add train predictions and labels
for i in range(STEP_SIZE_TRAIN + 1):
    im, lbl = next(train_generator)
    preds = model.predict(im, batch_size=train_generator.batch_size)
    for index in range(len(preds)):
        df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'train']
# Add validation predictions and labels
for i in range(STEP_SIZE_VALID + 1):
    im, lbl = next(valid_generator)
    preds = model.predict(im, batch_size=valid_generator.batch_size)
    for index in range(len(preds)):
        df_preds.loc[len(df_preds)] = [lbl[index], preds[index][0], 'validation']
df_preds['label'] = df_preds['label'].astype('int')
def classify(x):
    if x < 0.5:
        return 0
    elif x < 1.5:
        return 1
    elif x < 2.5:
        return 2
    elif x < 3.5:
        return 3
    return 4
# Classify predictions
df_preds['predictions'] = df_preds['pred'].apply(lambda x: classify(x))
train_preds = df_preds[df_preds['set'] == 'train']
validation_preds = df_preds[df_preds['set'] == 'validation']
```
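`classify()` converts the model's continuous regression output into the five DR grades. Its thresholds are equivalent to round-half-up followed by clipping to the 0–4 range, which this self-contained sketch checks (the function is restated here, and the sample predictions are made up):

```python
import numpy as np

def classify(x):
    # Restated from the cell above so this sketch is self-contained.
    if x < 0.5: return 0
    elif x < 1.5: return 1
    elif x < 2.5: return 2
    elif x < 3.5: return 3
    return 4

preds = np.array([-0.3, 0.2, 0.49, 0.51, 1.2, 1.7, 2.5, 3.49, 3.5, 4.2, 6.0])
via_thresholds = [classify(p) for p in preds]
# Round-half-up (floor(x + 0.5), not np.round, which rounds halves to even),
# then clip to the valid grade range [0, 4].
via_round = np.clip(np.floor(preds + 0.5), 0, 4).astype(int).tolist()
print(via_thresholds == via_round)  # -> True
```

Treating the grade as a regression target and thresholding afterwards is a common trick for ordinal labels, since it lets the loss (MSE here) penalize far-off grades more than near misses.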
# Model Evaluation
## Confusion Matrix
### Original thresholds
```
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe', '4 - Proliferative DR']
def plot_confusion_matrix(train, validation, labels=labels):
    train_labels, train_preds = train
    validation_labels, validation_preds = validation
    fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
    train_cnf_matrix = confusion_matrix(train_labels, train_preds)
    validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
    train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
    validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
    train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
    validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
    sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues", ax=ax1).set_title('Train')
    sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap=sns.cubehelix_palette(8), ax=ax2).set_title('Validation')
    plt.show()
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
plot_confusion_matrix((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
## Quadratic Weighted Kappa
```
def evaluate_model(train, validation):
    train_labels, train_preds = train
    validation_labels, validation_preds = validation
    print("Train Cohen Kappa score: %.3f" % cohen_kappa_score(train_preds, train_labels, weights='quadratic'))
    print("Validation Cohen Kappa score: %.3f" % cohen_kappa_score(validation_preds, validation_labels, weights='quadratic'))
    print("Complete set Cohen Kappa score: %.3f" % cohen_kappa_score(np.append(train_preds, validation_preds), np.append(train_labels, validation_labels), weights='quadratic'))
evaluate_model((train_preds['label'], train_preds['predictions']), (validation_preds['label'], validation_preds['predictions']))
```
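A tiny worked example of why quadratic weighting matters for this ordinal task: disagreements are penalized by the squared distance between grades, so predicting grade 4 for a true grade 0 costs far more than an off-by-one error. The toy labels below are made up purely to illustrate this:

```python
from sklearn.metrics import cohen_kappa_score

y_true = [0, 1, 2, 3, 4, 0, 1, 2]
y_near = [0, 1, 2, 3, 4, 1, 1, 2]  # one error, one grade away
y_far  = [0, 1, 2, 3, 4, 4, 1, 2]  # one error, four grades away

k_near = cohen_kappa_score(y_true, y_near, weights='quadratic')
k_far = cohen_kappa_score(y_true, y_far, weights='quadratic')
print(k_near > k_far)  # -> True
```

Both prediction sets make exactly one mistake, yet the quadratic weighted kappa of the distant error is much lower, which is why this metric pairs naturally with the MSE regression objective used above.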
## Apply model to test set and output predictions
```
def apply_tta(model, generator, steps=10):
    step_size = generator.n//generator.batch_size
    preds_tta = []
    for i in range(steps):
        generator.reset()
        preds = model.predict_generator(generator, steps=step_size)
        preds_tta.append(preds)
    return np.mean(preds_tta, axis=0)
preds = apply_tta(model, test_generator, TTA_STEPS)
predictions = [classify(x) for x in preds]
results = pd.DataFrame({'id_code':test['id_code'], 'diagnosis':predictions})
results['id_code'] = results['id_code'].map(lambda x: str(x)[:-4])
# Cleaning created directories
if os.path.exists(train_dest_path):
    shutil.rmtree(train_dest_path)
if os.path.exists(validation_dest_path):
    shutil.rmtree(validation_dest_path)
if os.path.exists(test_dest_path):
    shutil.rmtree(test_dest_path)
```
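`apply_tta()` above runs the test generator several times, each pass seeing a different random augmentation, then averages the raw scores across passes before thresholding with `classify`. A minimal NumPy sketch of just the averaging step (the numbers are made up, standing in for `model.predict_generator` output):

```python
import numpy as np

# Three TTA passes over the same two test images; each pass would see a
# different random augmentation, so the raw regression scores differ slightly.
preds_tta = [
    np.array([[2.8], [0.4]]),
    np.array([[3.1], [0.6]]),
    np.array([[3.3], [0.2]]),
]
averaged = np.mean(preds_tta, axis=0)  # one smoothed score per image
print(averaged.ravel())
```

Averaging before thresholding lets the noise from individual augmentations cancel out, so a borderline score is less likely to flip grades because of a single unlucky crop or flip.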
# Predictions class distribution
```
fig, ax = plt.subplots(figsize=(24, 8.7))
sns.countplot(x="diagnosis", data=results, palette="GnBu_d").set_title('Test')
sns.despine()
plt.show()
results.to_csv('submission.csv', index=False)
display(results.head())
```
## Save model
```
model.save_weights('../working/effNetB5_img224.h5')
```
```
import os
import cv2
import functools
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
from matplotlib import gridspec
from fst_func import load_image, resize_image_to_square, crop_center, show_n
print('TF Version: ', tf.__version__)
print('TF-Hub Version: ', hub.__version__)
print('Eager Mode Enabled: ', tf.executing_eagerly())
print('Available GPU: ', tf.test.is_gpu_available())
hub_handle = 'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2'
hub_module = hub.load(hub_handle)
content_urls = dict(
sea_turtle = 'https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg',
tuebingen = 'https://upload.wikimedia.org/wikipedia/commons/0/00/Tuebingen_Neckarfront.jpg',
grace_hopper = 'https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg',
)
style_urls = dict(
kanagawa_great_wave = 'https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg',
kandinsky_composition_7 = 'https://upload.wikimedia.org/wikipedia/commons/b/b4/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg',
hubble_pillars_of_creation = 'https://upload.wikimedia.org/wikipedia/commons/6/68/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg',
van_gogh_starry_night = 'https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg',
turner_nantes = 'https://upload.wikimedia.org/wikipedia/commons/b/b7/JMW_Turner_-_Nantes_from_the_Ile_Feydeau.jpg',
munch_scream = 'https://upload.wikimedia.org/wikipedia/commons/c/c5/Edvard_Munch%2C_1893%2C_The_Scream%2C_oil%2C_tempera_and_pastel_on_cardboard%2C_91_x_73_cm%2C_National_Gallery_of_Norway.jpg',
picasso_demoiselles_avignon = 'https://upload.wikimedia.org/wikipedia/en/4/4c/Les_Demoiselles_d%27Avignon.jpg',
picasso_violin = 'https://upload.wikimedia.org/wikipedia/en/3/3c/Pablo_Picasso%2C_1911-12%2C_Violon_%28Violin%29%2C_oil_on_canvas%2C_Kr%C3%B6ller-M%C3%BCller_Museum%2C_Otterlo%2C_Netherlands.jpg',
picasso_bottle_of_rum = 'https://upload.wikimedia.org/wikipedia/en/7/7f/Pablo_Picasso%2C_1911%2C_Still_Life_with_a_Bottle_of_Rum%2C_oil_on_canvas%2C_61.3_x_50.5_cm%2C_Metropolitan_Museum_of_Art%2C_New_York.jpg',
fire = 'https://upload.wikimedia.org/wikipedia/commons/3/36/Large_bonfire.jpg',
derkovits_woman_head = 'https://upload.wikimedia.org/wikipedia/commons/0/0d/Derkovits_Gyula_Woman_head_1922.jpg',
amadeo_style_life = 'https://upload.wikimedia.org/wikipedia/commons/8/8e/Untitled_%28Still_life%29_%281913%29_-_Amadeo_Souza-Cardoso_%281887-1918%29_%2817385824283%29.jpg',
derkovtis_talig = 'https://upload.wikimedia.org/wikipedia/commons/3/37/Derkovits_Gyula_Talig%C3%A1s_1920.jpg',
amadeo_cardoso = 'https://upload.wikimedia.org/wikipedia/commons/7/7d/Amadeo_de_Souza-Cardoso%2C_1915_-_Landscape_with_black_figure.jpg'
)
content_image_size = 384
style_image_size = 256
content_images = {k: load_image(v, (content_image_size, content_image_size)) for k, v in content_urls.items()}
style_images = {k: load_image(v, (style_image_size, style_image_size)) for k, v in style_urls.items()}
style_images = {k: tf.nn.avg_pool(style_image, ksize=[3,3], strides=[1,1], padding='SAME') for k, style_image in style_images.items()}
content_name = 'tuebingen'
style_name = 'hubble_pillars_of_creation'
stylized_image = hub_module(tf.constant(content_images[content_name]),
tf.constant(style_images[style_name]))[0]
show_n([content_images[content_name], style_images[style_name], stylized_image],
titles=['Original content image', 'Style image', 'Stylized image'])
frame_size = 256
style_name = "fire"
cap = cv2.VideoCapture(0)
while True:
    flag, frame = cap.read()
    if flag:
        image_rgb_np = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        resized_image_np = resize_image_to_square(image_rgb_np, image_size=(frame_size, frame_size))
        outputs = hub_module(tf.constant(resized_image_np), tf.constant(style_images[style_name]))
        stylized_image = outputs[0]
        image_pil = tf.keras.preprocessing.image.array_to_img(stylized_image[0])
        image_bgr_np = cv2.cvtColor(np.array(image_pil), cv2.COLOR_RGB2BGR)  # back to BGR for cv2.imshow
        cv2.imshow('style transfer', image_bgr_np)
    else:
        print('Error: failed to read frame from capture device')
        break
    if cv2.waitKey(15) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```
```
import sys
import time
import os.path
from glob import glob
from datetime import datetime, timedelta
# data tools
import h5py
import numpy as np
# custom tools
sys.path.insert(0, '/glade/u/home/ksha/WORKSPACE/utils/')
sys.path.insert(0, '/glade/u/home/ksha/WORKSPACE/Analog_BC/')
sys.path.insert(0, '/glade/u/home/ksha/WORKSPACE/Analog_BC/utils/')
import data_utils as du
import analog_utils as ana
import graph_utils as gu
from namelist import *
# graph tools
import cmaps
import cartopy.crs as ccrs
import cartopy.mpl.geoaxes
import cartopy.feature as cfeature
from cartopy.io.shapereader import Reader
from cartopy.feature import ShapelyFeature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
import matplotlib.colors as colors
import matplotlib.patches as patches
from matplotlib.collections import PatchCollection
from matplotlib import ticker
import matplotlib.ticker as mticker
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
%matplotlib inline
need_publish = False
# True: publication quality figures
# False: low resolution figures in the notebook
if need_publish:
    dpi_ = fig_keys['dpi']
else:
    dpi_ = 75
```
# Data
```
# importing domain information
with h5py.File(save_dir+'BC_domain_info.hdf', 'r') as h5io:
    bc_lon = h5io['bc_lon'][...]
    bc_lat = h5io['bc_lat'][...]
    etopo_bc = h5io['etopo_bc'][...]
    land_mask_bc = h5io['land_mask_bc'][...]
# import station observations and grid point indices
with h5py.File(save_dir+'BCH_ERA5_3H_verif.hdf', 'r') as h5io:
    BCH_obs = h5io['BCH_obs'][...]
    indx = h5io['indx'][...]
    indy = h5io['indy'][...]
with h5py.File(save_dir+'BCH_wshed_groups.hdf', 'r') as h5io:
    flag_sw = h5io['flag_sw'][...]
    flag_si = h5io['flag_si'][...]
    flag_n = h5io['flag_n'][...]
year = 2019
lead = 2
en = 0
day_plot = 31
# # ERA5
# with h5py.File(ERA_dir+'ERA5_GEFS-fcst_{}.hdf'.format(year), 'r') as h5io:
# era = h5io['era_fcst'][..., lead, bc_inds[0]:bc_inds[1], bc_inds[2]:bc_inds[3]]
# era[..., land_mask_bc] = np.nan
# # AnEn-CNN hybrid
# with h5py.File(REFCST_dir+'BASE_CNN_{}_lead{}.hdf'.format(year, lead), 'r') as h5io:
# cnn = h5io['cnn_pred'][:, en, ...]
# cnn = ana.cnn_precip_fix(cnn)
# cnn[..., land_mask_bc] = np.nan
# # AnEn-SS
# grid_shape = land_mask_bc.shape
# sl = np.empty((365,)+grid_shape)
# with h5py.File(REFCST_dir + "BASE_final_dress_SS_{}_lead{}.hdf".format(year, lead), 'r') as h5io:
# sl_ = h5io['AnEn'][:, en, ...]
# sl[..., ~land_mask_bc] = sl_
# sl[..., land_mask_bc] = np.nan
# # AnEn raw
# with h5py.File(REFCST_dir + "BASE_final_dress_{}_lead{}.hdf".format(year, lead), 'r') as h5io:
# raw = h5io['AnEn'][:, en, ...]
# raw[..., land_mask_bc] = np.nan
# # ========== #
# # subsetting BCH obs into a given year
# N_days_bch = 366 + 365*3
# date_base_bch = datetime(2016, 1, 1)
# date_list_bch = [date_base_bch + timedelta(days=x) for x in np.arange(N_days_bch, dtype=np.float)]
# flag_pick = []
# for date in date_list_bch:
# if date.year == year:
# flag_pick.append(True)
# else:
# flag_pick.append(False)
# flag_pick = np.array(flag_pick)
# # ========== #
# DATA = {}
# DATA['raw'] = 8*raw[day_plot, ...]
# DATA['era'] = 8*era[day_plot, ...]
# DATA['sl'] = 8*sl[day_plot, ...]
# DATA['cnn'] = 8*cnn[day_plot, ...]
# stn_sl = DATA['sl'][indx, indy]
# stn_scnn = DATA['cnn'][indx, indy]
# stn_era = DATA['era'][indx, indy]
# BCH_obs_plot = 8*BCH_obs[flag_pick, ...][day_plot, lead, :]
# flag_nan_si = np.logical_not(np.isnan(BCH_obs_plot[flag_si]))
# flag_nan_sw = np.logical_not(np.isnan(BCH_obs_plot[flag_sw]))
# DATA['diff_sl_si'] = (stn_sl-BCH_obs_plot)[flag_si][flag_nan_si]
# DATA['diff_sl_sw'] = (stn_sl-BCH_obs_plot)[flag_sw][flag_nan_sw]
# DATA['diff_scnn_si'] = (stn_scnn-BCH_obs_plot)[flag_si][flag_nan_si]
# DATA['diff_scnn_sw'] = (stn_scnn-BCH_obs_plot)[flag_sw][flag_nan_sw]
# DATA['diff_era_si'] = (stn_era-BCH_obs_plot)[flag_si][flag_nan_si]
# DATA['diff_era_sw'] = (stn_era-BCH_obs_plot)[flag_sw][flag_nan_sw]
# np.save(save_dir+'AnEn_example.npy', DATA)
DATA = np.load(save_dir+'AnEn_example.npy', allow_pickle=True)[()]
keys = ['raw', 'era', 'sl', 'cnn']
```
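The `DATA` dictionary above is cached with `np.save` and read back with `np.load(..., allow_pickle=True)[()]`. The round-trip works because `np.save` wraps a plain dict in a 0-d object array, so `allow_pickle=True` and the trailing `[()]` are needed to unwrap it on load. A self-contained sketch (the keys and the temp path are illustrative):

```python
import os
import tempfile
import numpy as np

data = {'raw': np.arange(4.0), 'era': np.ones((2, 2))}
path = os.path.join(tempfile.mkdtemp(), 'AnEn_example.npy')  # illustrative path
np.save(path, data)  # dict is stored as a 0-d object array

loaded = np.load(path, allow_pickle=True)[()]  # [()] unwraps the 0-d object array
print(sorted(loaded.keys()))  # -> ['era', 'raw']
```

This is a convenient way to freeze the post-processed fields for a figure so the expensive reconstruction block above can stay commented out on later runs.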
# Figure
```
def aspc_cal(edge):
    return (edge[3]-edge[2])/(edge[1]-edge[0])
def gcd(a, b):
    if b == 0:
        return a
    else:
        return gcd(b, a % b)
# Cartopy map settings
scale_param = '50m' # 10m for publication quality
# US states and CAN-US boundary
PROVINCE = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale=scale_param,
facecolor='none')
def setBoxColors(bp, c, m, lw):
    plt.setp(bp['boxes'][0], color=c, linewidth=lw)
    plt.setp(bp['caps'][0], color=c, linewidth=lw)
    plt.setp(bp['caps'][1], color=c, linewidth=lw)
    plt.setp(bp['whiskers'][0], color=c, linewidth=lw)
    plt.setp(bp['whiskers'][1], color=c, linewidth=lw)
    plt.setp(bp['medians'][0], color=m, linewidth=lw)
edge_bc = [-134-1.5, -114.5-0.5, 48.25, 60]
r_bc = aspc_cal(edge_bc)
cmap_pct, A = gu.precip_cmap()
VLIM = [0, 15*6]
hist_bins = np.arange(20, 65, 2)
gray = A[0, :]
fig = plt.figure(figsize=(13, 0.5*3*13*r_bc), dpi=dpi_)
gs = gridspec.GridSpec(3, 2, height_ratios=[1, 1, 1], width_ratios=[1, 1])
ax1 = plt.subplot(gs[0, 0], projection=ccrs.PlateCarree())
ax2 = plt.subplot(gs[0, 1], projection=ccrs.PlateCarree())
ax3 = plt.subplot(gs[1, 0], projection=ccrs.PlateCarree())
ax4 = plt.subplot(gs[1, 1], projection=ccrs.PlateCarree())
ax_box1 = fig.add_axes([0.085, 0.075, 0.25, 1/5])
ax_box2 = fig.add_axes([0.45, 0.075, 0.25, 1/5])
plt.subplots_adjust(0, 0, 1, 1, hspace=0, wspace=0)
AX = [ax1, ax2, ax3, ax4]
AX_box = [ax_box1, ax_box2]
titles = [
'(a) Direct output of the AnEn with 19 Supplemental Locations (SLs)'+
'\n GEFS refcst init time: 0000 UTC 1 February 2019'+
'\n Forecast lead time: 15 hours',
'(b) The ERA5 on 1500 UTC 1 February 2019; locations of stations',
'(c) AnEn members reconstructed by the Minimal Divergence\n Schaake Shuffle (MDSS).',
'(d) Output of the CNN, taking (c), elevation, and precipitation'+
'\n climatology as inputs']
handle_text_title = []
yw = [0.9775]*4
xw = 0.0125
for n, ax in enumerate(AX):
    ax.set_extent(edge_bc, ccrs.PlateCarree())
    ax.add_feature(cfeature.COASTLINE.with_scale(scale_param), edgecolor='k', linewidth=1.5)
    # ax.add_feature(cfeature.BORDERS.with_scale(scale_param), linestyle='--', linewidth=2.5)
    # ax.add_feature(PROVINCE, edgecolor='k', linestyle=':', linewidth=2.5)
    ax.spines['geo'].set_linewidth(2.5)
    handle_text_title.append(ax.text(xw, yw[n], titles[n], ha='left', va='top',
                                     fontsize=14, transform=ax.transAxes, zorder=3))
handle_subtext = []
handle_subtext += gu.string_partial_format(fig, ax3, 0.44, 0.9185, 'left', 'top',
['SL-H15', ' (control)'], [cyan, 'k'], [14,]*2, ['bold', 'normal'])
# handle_subtext += gu.string_partial_format(fig, ax3, 0.0125, 0.8575, 'left', 'top',
# [' ', 'SL-H15', ', a control based on Hamill et al. (2015)'],
# ['k', cyan, 'k'], [14,]*3, ['normal', 'bold', 'normal'])
handle_subtext += gu.string_partial_format(fig, ax4, 0.39, 0.9185, 'left', 'top',
['. ', 'SL-CNN', ' (ours)'],
['k', blue, 'k'], [14,]*3, ['normal', 'bold', 'normal'])
# ========== #
# boxplot part
regions = ['sw', 'si']
dict_c = dict(marker='o', ms=8, mew=1.5, mfc='none')
for i, key in enumerate(['sl', 'scnn', 'era']):
    loc = i + 1.0
    for j, ax in enumerate(AX_box):
        r = regions[j]
        L = len(DATA['diff_{}_{}'.format(key, r)])
        handle_boxp = ax.boxplot(DATA['diff_{}_{}'.format(key, r)], positions=[loc,], flierprops=dict_c, widths=0.618)
        setBoxColors(handle_boxp, 'k', red, 2.5)
        ax.text(loc, -95, '{:.3f}'.format(np.mean(np.abs(DATA['diff_{}_{}'.format(key, r)]))), ha='center', fontsize=14)
for ax in [ax_box1, ax_box2]:
    ax = gu.ax_decorate_box(ax)
    ax.set_xlim([0.5, 3.5])
    ax.set_xticks([1.0, 2.0, 3.0])
    ax.tick_params(labelleft=True, labelbottom=False)
    handle_subtext += gu.string_partial_format(fig, ax, 0.05, -0.045, 'left', 'top',
                                               ['SL-H15',], [cyan,], [14,], ['bold',])
    handle_subtext += gu.string_partial_format(fig, ax, 0.375, -0.045, 'left', 'top',
                                               ['SL-CNN',], [blue,], [14,], ['bold',])
    handle_subtext += gu.string_partial_format(fig, ax, 0.745, -0.045, 'left', 'top',
                                               ['ERA5',], ['k',], [14,], ['bold',])
    ax.set_ylim([-70, 50])
    ax.axhline(0.0, xmin=0, xmax=1.0, linestyle=':', linewidth=1.5)
ax_box1.set_ylabel('Precipitation rate\ndifference [mm/day]', fontsize=14)
ax_box1.set_title('(e) Box plots of precipitation rate difference\nSouth Coast stations', fontsize=14)
ax_box2.set_title('(f) same as in (e), but for\nSouthern Interior stations', fontsize=14)
ax_box1.text(-0.15, -0.2245, 'MAE:', ha='left', va='bottom', fontsize=14, transform=ax_box1.transAxes)
# ========== #
# ========== #
# map and hist
LOCX = [0, 0.5]
LOCY = [2/3, 1/3,]
AX_hist = []
for locy in LOCY:
    for locx in LOCX:
        AX_hist.append(fig.add_axes([0.02+locx, 0.025+locy, 0.175, 0.085]))
for i, key in enumerate(keys):
    CS = AX[i].pcolormesh(bc_lon, bc_lat, DATA[key], vmin=VLIM[0], vmax=VLIM[1], cmap=cmap_pct)
    AX_hist[i].spines["bottom"].set_visible(False)
    AX_hist[i].spines["top"].set_visible(False)
    AX_hist[i].spines["left"].set_visible(True)
    AX_hist[i].spines["right"].set_visible(False)
    [j.set_linewidth(2.5) for j in AX_hist[i].spines.values()]
    AX_hist[i].xaxis.set_tick_params(labelsize=12)
    AX_hist[i].yaxis.set_tick_params(labelsize=12)
    AX_hist[i].tick_params(axis="both", which="both", pad=0, bottom=False, top=False,
                           labelbottom=True, left=False, right=False, labelleft=True)
    AX_hist[i].set_ylim([-2, 101])
    AX_hist[i].ticklabel_format(axis='y', style='sci', scilimits=(0, 1))
    AX_hist[i].set_xlim([17.5, 65.01])
    AX_hist[i].set_xticks([20, 30, 40, 50, 65])
    AX_hist[i].set_xticklabels(['20', '30', '40', '50', '[mm/day]'])
    AX_hist[i].hist(DATA[key][DATA[key]>=15], bins=hist_bins, facecolor=gray, edgecolor='k', linewidth=1.5);
    AX_hist[i].patch.set_alpha(0)
# ========== #
# for ax in [ax3, ax4]:
# ax.add_patch(patches.Circle((-131.15, 52.15), radius=0.85, edgecolor='k', linewidth=4.0, facecolor='none'))
# ax.add_patch(patches.Circle((-119.1, 52.5), radius=2, edgecolor='k', linewidth=4.0, facecolor='none'))
handle_sw = ax2.plot(bc_lon[indx, indy][flag_sw], bc_lat[indx, indy][flag_sw], 'ks', ms=6.5, mew=1.5, mfc='none')
handle_si = ax2.plot(bc_lon[indx, indy][flag_si], bc_lat[indx, indy][flag_si], 'k^', ms=8, mew=1.5, mfc='none')
ax2.contour(bc_lon[:22, 60:], bc_lat[:22, 60:], DATA['era'][:22, 60:], levels=[8.5,],
colors=('k',), linewidths=(4,), linestyles=('--',))
ax3.contour(bc_lon[:22, 62:], bc_lat[:22, 62:], DATA['sl'][:22, 62:], levels=[8.5,],
colors=('k',), linewidths=(4,), linestyles=('--',))
ax4.contour(bc_lon[:22, 62:], bc_lat[:22, 62:], DATA['cnn'][:22, 62:], levels=[8.5,],
colors=('k',), linewidths=(4,), linestyles=('--',))
ax3.text(0.885, 0.5, 'Contour of\n8.5 [mm/day]', fontsize=14, ha='center', va='bottom', transform=ax3.transAxes)
ax3.arrow(0.9, 0.46, -0.05, -0.2, head_width=0.015, head_length=0.04,
linewidth=2.5, fc='k', ec='k', transform=ax3.transAxes)
for handle in handle_text_title:
    handle.set_bbox(dict(facecolor='w', edgecolor='none', zorder=4))
for handle in handle_subtext:
    handle.set_bbox(dict(facecolor='w', pad=0, edgecolor='none', zorder=5))
# ax_base = fig.add_axes([0.935, 5/6-0.03, 0.02, 1/6+0.02])
# [j.set_linewidth(0) for j in ax_base.spines.values()]
# ax_base.tick_params(axis='both', left=False, top=False, right=False, bottom=False, \
# labelleft=False, labeltop=False, labelright=False, labelbottom=False)
# cax = inset_axes(ax_base, height='100%', width='50%', borderpad=0, loc=2)
# CBar = plt.colorbar(CS, orientation='vertical', extend='max', cax=cax)
# CBar.ax.tick_params(axis='y', labelsize=14, direction='in', length=0)
# CBar.set_label('precip rate [mm/day]', fontsize=14)
# CBar.outline.set_linewidth(2.5)
padx = 0.025
pady = 0.0175
ax_base = fig.add_axes([0.775-padx, 0.244-pady, 0.225, 0.03])
[j.set_linewidth(0.0) for j in ax_base.spines.values()]
ax_base.tick_params(axis='both', left=False, top=False, right=False, bottom=False, \
labelleft=False, labeltop=False, labelright=False, labelbottom=False)
cax = inset_axes(ax_base, height='50%', width='100%', borderpad=0, loc=2)
CBar = plt.colorbar(CS, orientation='horizontal', ticks=[0, 30, 60, 90], extend='max', cax=cax)
CBar.ax.tick_params(axis='x', labelsize=14, direction='in', length=0)
CBar.set_label('Precipitation rate [mm/day]', fontsize=14)
CBar.outline.set_linewidth(2.5)
ax_w2 = fig.add_axes([0.775-padx, 0.244+0.03-pady, 0.225, 0.0225])
ax_w2.set_axis_off()
ax_w2.text(0.5, 1, '6.0 [mm/day] per color scale', fontsize=14,
ha='center', va='top', transform=ax_w2.transAxes);
ax_lg = fig.add_axes([0.75-padx, 0.075-pady, 0.25, 0.135])
ax_lg.set_axis_off()
LG = ax_lg.legend([handle_sw[0], handle_si[0],
handle_boxp["boxes"][0], handle_boxp["medians"][0]],
["South Coast stations",
'Southern Interior stns',
'Box plot,\nIQR = 75th - 25th',
'Median'], bbox_to_anchor=(1, 1), ncol=1, prop={'size':14})
LG.get_frame().set_facecolor('white')
LG.get_frame().set_edgecolor('k')
LG.get_frame().set_linewidth(0)
ax_w1 = fig.add_axes([0.025, 2/3+0.075, 0.1, 0.03])
ax_w1.set_axis_off()
ax_w1.text(0, 1, '1D histogram\nwith bin counts', fontsize=12, ha='left', va='top', transform=ax_w1.transAxes);
if need_publish:
    # Save figure
    fig.savefig(fig_dir+'AnEn_example.png', format='png', **fig_keys)
```
# The old version
```
# VLIM = [0, 15*5]
# hist_bins = np.arange(20, 65, 2)
# gray = A[0, :]
# fig = plt.figure(figsize=(13, 0.5*3*13*r_bc), dpi=dpi_)
# gs = gridspec.GridSpec(3, 2, height_ratios=[1, 1, 1], width_ratios=[1, 1])
# ax1 = plt.subplot(gs[0, 0], projection=ccrs.PlateCarree())
# ax2 = plt.subplot(gs[0, 1], projection=ccrs.PlateCarree())
# ax3 = plt.subplot(gs[1, 0], projection=ccrs.PlateCarree())
# ax4 = plt.subplot(gs[1, 1], projection=ccrs.PlateCarree())
# ax5 = plt.subplot(gs[2, 0], projection=ccrs.PlateCarree())
# ax6 = plt.subplot(gs[2, 1], projection=ccrs.PlateCarree())
# plt.subplots_adjust(0, 0, 1, 1, hspace=0, wspace=0)
# AX = [ax1, ax2, ax3, ax4, ax5, ax6]
# titles = [
# '(a) A direct output of AnEn with 19 supplemental locations (SLs)'+
# '\n GEFS refcst init time: 0000 UTC 31 January 2019'+'\n Forecast lead time: 12 hours',
# '(b) The ERA5 reanalysis on the forecasted time ',
# '(c) Smoothed version of (a)\n Produced by Savitzky–Golay (SG) filter',
# '(d) Weighted combination of (a) and (c) ',
# '(e) Denoised version of (a)\n Produced by our CNN',
# '(f) Weighted combination of (a) and (e) ']
# #Weighted combination of (a) and (c) with quantile mapping
# handle_text_title = []
# yw = [0.9775]*6
# xw = 0.0125
# for n, ax in enumerate(AX):
# ax.set_extent(edge_bc, ccrs.PlateCarree())
# ax.add_feature(cfeature.COASTLINE.with_scale(scale_param), edgecolor='k', linewidth=1.5)
# ax.add_feature(cfeature.BORDERS.with_scale(scale_param), linestyle='--', linewidth=2.5)
# ax.add_feature(PROVINCE, edgecolor='k', linestyle=':', linewidth=2.5)
# ax.spines['geo'].set_linewidth(2.5)
# handle_text_title.append(ax.text(xw, yw[n], titles[n], ha='left', va='top',
# fontsize=14, transform=ax.transAxes, zorder=3))
# handle_subtext = []
# handle_subtext += gu.string_partial_format(fig, ax4, 0.605, 0.9775, 'left', 'top',
# ['; ', 'SL-SG'],
# ['k', cyan, 'k'], [14,]*3, ['normal', 'bold', 'normal'])
# handle_subtext += gu.string_partial_format(fig, ax6, 0.602, 0.9775, 'left', 'top',
# ['; ', 'SL-CNN',],
# ['k', blue, 'k'], [14,]*3, ['normal', 'bold', 'normal'])
# LOCX = [0, 0.5]
# LOCY = [2/3, 1/3, 0]
# AX_hist = []
# for locy in LOCY:
# for locx in LOCX:
# AX_hist.append(fig.add_axes([0.02+locx, 0.025+locy, 0.175, 0.085]))
# for i, key in enumerate(keys):
# CS = AX[i].pcolormesh(bc_lon, bc_lat, DATA[key], vmin=VLIM[0], vmax=VLIM[1], cmap=cmap_pct)
# AX_hist[i].spines["bottom"].set_visible(False)
# AX_hist[i].spines["top"].set_visible(False)
# AX_hist[i].spines["left"].set_visible(True)
# AX_hist[i].spines["right"].set_visible(False)
# [j.set_linewidth(2.5) for j in AX_hist[i].spines.values()]
# AX_hist[i].xaxis.set_tick_params(labelsize=12)
# AX_hist[i].yaxis.set_tick_params(labelsize=12)
# AX_hist[i].tick_params(axis="both", which="both", pad=0, bottom=False, top=False,
# labelbottom=True, left=False, right=False, labelleft=True)
# AX_hist[i].set_ylim([0, 101])
# AX_hist[i].ticklabel_format(axis='y', style='sci', scilimits=(0, 1))
# AX_hist[i].set_xlim([17.5, 65.01])
# AX_hist[i].set_xticks([20, 30, 40, 50, 65])
# AX_hist[i].set_xticklabels(['20', '30', '40', '50', '[mm/day]'])
# AX_hist[i].hist(DATA[key][DATA[key]>=15], bins=hist_bins, facecolor=gray, edgecolor='k', linewidth=1.5);
# AX_hist[i].patch.set_alpha(0)
# for handle in handle_text_title:
# handle.set_bbox(dict(facecolor='w', edgecolor='none', zorder=4))
# for handle in handle_subtext:
# handle.set_bbox(dict(facecolor='w', pad=0, edgecolor='none', zorder=5))
# ax_base = fig.add_axes([0.935, 5/6-0.03, 0.02, 1/6+0.02])
# [j.set_linewidth(0) for j in ax_base.spines.values()]
# ax_base.tick_params(axis='both', left=False, top=False, right=False, bottom=False, \
# labelleft=False, labeltop=False, labelright=False, labelbottom=False)
# cax = inset_axes(ax_base, height='100%', width='50%', borderpad=0, loc=2)
# CBar = plt.colorbar(CS, orientation='vertical', extend='max', cax=cax)
# CBar.ax.tick_params(axis='y', labelsize=14, direction='in', length=0)
# CBar.set_label('precip rate [mm/day]', fontsize=14)
# CBar.outline.set_linewidth(2.5)
# ax_w1 = fig.add_axes([0.025, 2/3+0.075, 0.1, 0.03])
# ax_w1.set_axis_off()
# ax_w1.text(0, 1, '1D histogram\nwith bin counts', fontsize=12, ha='left', va='top', transform=ax_w1.transAxes);
# ax_w2 = fig.add_axes([0.9, 0.835, 0.03, 0.125])
# ax_w2.set_axis_off()
# ax_w2.text(0.5, 1, '5 [mm/day]\nper color scale', fontsize=14,
# rotation=90, ha='center', va='top', transform=ax_w2.transAxes);
# if need_publish:
# # Save figure
# fig.savefig(fig_dir+'AnEn_example.png', format='png', **fig_keys)
```
# Natural Language Entity Extraction
Extracting Ground Truth Labels from Radiology Reports
<hr/>
<img src="example-report.png" width="30%" align="right" style="padding-left:20px">
In this assignment you'll learn to extract information from unstructured medical text. In particular, you'll learn about the following topics:
- Extracting disease labels from clinical reports
- Text matching
- Evaluating a labeler
- Negation detection
- Dependency parsing
- Question Answering with BERT
- Preprocessing text for input
- Extracting answers from model output
This assignment is inspired by the [work](https://arxiv.org/abs/1901.07031) done by Irvin et al and you should be well positioned to read and understand this manuscript by the end of the assignment.
<a href="https://ieeexplore.ieee.org/document/7780643">Image Credit</a>
### This assignment covers the following topics:
- [1. Extracting Labels](#1)
- [1.1 Text Matching](#1-1)
- [Exercise 1](#ex-01)
- [1.2 Evaluating The Performance](#1-2)
- [1.3 Cleanup](#1-3)
- [1.4 Finding Negative Mentions](#1-4)
- [Exercise 2](#ex-02)
- [1.5 Dependency Parsing](#1-5)
- [2. Question Answering Using BERT](#2)
- [2.1 Roadmap](#2-1)
- [2.2 Preparing The Input](#2-2)
- [Exercise 3](#ex-03)
- [2.3 Getting Answer From Model Output](#2-3)
- [Exercise 4](#ex-04)
- [Exercise 5](#ex-05)
- [2.4 Putting It All Together](#2-4)
- [2.5 Try It Out](#2-5)
## Packages
Import the following libraries for this assignment.
- `matplotlib.pyplot` - standard plotting library
- `nltk` - an NLP package
- `pandas` - we'll use this to keep track of data
- `tensorflow` - standard deep learning library
- `transformers` - convenient access to pretrained natural language models
```
import matplotlib.pyplot as plt
import nltk
import pandas as pd
import tensorflow as tf
import numpy as np
from transformers import *
```
Additionally, load the helper `util` library that we have prepared for this assignment in order to abstract some of the details.
- We do encourage you to review the `util.py` file to learn more about the utility functions behind the scene.
```
import util
from util import *
```
<a name="1"></a>
## 1 Extracting Labels
In this part of the assignment, you'll extract disease labels for patients from unstructured clinical reports. Unlike most assignments, instead of "learning" from the dataset, you will primarily build different "rules" that help us extract knowledge from natural language.
- Because there is less risk of overfitting when using a "rules-based" approach, you will just use one dataset which will also be the test set.
The test set consists of 1,000 X-ray reports that have been manually labeled by a board-certified radiologist for the presence or absence of different pathologies.
- You also have access to the extracted "Impression" section of each report, which is the radiologist's overall summary of each X-ray.
Have a look at the dataset:
```
print("test_df size: {}".format(test_df.shape))
test_df.head()
```
Here are a few example impressions
```
for i in range(3):
    print(f'\nReport Impression {i}:')
    print(test_df.loc[i, 'Report Impression'])
```
You can see that these reports are fairly unstructured, which makes information extraction challenging. Your goal will be to extract the presence or absence of different abnormalities from the raw text.
Next, see the distribution of abnormalities in the dataset. We have placed all the names of these abnormalities in a list named `CATEGORIES`, and we'll use it to label the graph below. You will also be using this list in the next coding exercises.
```
plt.figure(figsize=(12,5))
plt.barh(y=CATEGORIES, width=test_df[CATEGORIES].sum(axis=0))
plt.show()
```
You can see that pathologies like Airspace Opacity, Pleural Effusion, and Edema are present in many of the reports while Lung Lesion and Pneumonia are not as common in this dataset.
<a name="1-1"></a>
### 1.1 Text Matching
One of the easiest and surprisingly effective ways to label our dataset is to search for presence of different keywords in the impression text.
We have prepared a list of relevant keywords for each pathology for you to use in detecting the presence of each label.
- You can access these keywords for each label by calling the `get_mention_keywords(observation)` function.
Here's an example:
```
cat = CATEGORIES[2]
related_keywords = get_mention_keywords(cat)
print("Related keywords for {} are:\n{}".format(cat, ', '.join(related_keywords)))
```
<a name='ex-01'></a>
### Exercise 1: Get Labels
You can use this simple approach to start constructing labels for each report. Fill in the `get_labels()` function below.
- It takes in a report (as an array of sentences)
- It returns a dictionary that maps each category to a boolean value, which indicates the presence or absence of the abnormality.
Note that in Python, the `in` keyword can be used on a string to find substrings. For instance:
```python
s = 'hello how are you? I am fine.'
if "are you" in s:
    print(True)
else:
    print(False)
```
This outputs `True`, because the `in` keyword is able to find substrings.
Also note that whitespace and punctuation matter.
```python
s = 'hello how are you? I am fine.'
if "areyou" in s:
    print(True)
else:
    print(False)
```
This outputs `False`, because only `are you` exists in the string, not `areyou`.
Also, note that the string matching is case sensitive
```python
s = 'hello how are you? I am fine.'
if "Hello" in s:
    print(True)
else:
    print(False)
```
This returns `False` because the string contains 'hello' with a lowercase 'h', so `Hello` does not appear in it.
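Since matching is case-sensitive, a common trick is to lowercase both strings before testing membership. A minimal helper illustrating this idea (the name `phrase_in_sentence` is ours, not part of the assignment code):

```python
def phrase_in_sentence(phrase, sentence):
    """Case-insensitive substring check."""
    return phrase.lower() in sentence.lower()

print(phrase_in_sentence("Hello", "hello how are you? I am fine."))   # True
print(phrase_in_sentence("areyou", "hello how are you? I am fine."))  # False
```

Lowercasing normalizes away capitalization differences, but note that it does not fix the whitespace and punctuation issues shown above.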
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>Use the get_mention_keywords() function</li>
<li>Make sure to make the sentence as well as the phrase all lowercase before looking for the phrase inside the sentence.</li>
</ul>
</p>
</details>
```
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_labels(sentence_l):
    """
    Returns a dictionary that indicates presence of each category (from CATEGORIES)
    in the given sentences.
    hint: loop over the sentences array and use get_mention_keywords() function.

    Args:
        sentence_l (array of strings): array of strings representing impression section
    Returns:
        observation_d (dict): dictionary mapping observation from CATEGORIES array to boolean value
    """
    observation_d = {}

    ### START CODE HERE ###
    # loop through each category
    for cat in CATEGORIES:
        # Initialize the observations for all categories to be False
        observation_d[cat] = False

    # For each sentence in the list:
    for s in sentence_l: # complete this line
        # Set the characters to all lowercase, for consistent string matching
        s = s.lower()
        # for each category
        for cat in CATEGORIES: # complete this line
            # for each phrase that is related to the keyword (use the given function)
            for phrase in get_mention_keywords(cat): # complete this line
                # make the phrase all lowercase for consistent string matching
                phrase = phrase.lower()
                # check if the phrase appears in the sentence
                if phrase in s: # complete this line
                    observation_d[cat] = True
    ### END CODE HERE ###
    return observation_d

print("Test Case")
test_sentences = ["Diffuse Reticular Pattern, which can be seen with an atypical infection or chronic fibrotic change.",
                  "no Focal Consolidation."]

print("\nTest Sentences:\n")
for s in test_sentences:
    print(s)
print()

retrieved_labels = get_labels(test_sentences)
print("Retrieved labels: ")
for key, value in sorted(retrieved_labels.items(), key=lambda x: x[0]):
    print("{} : {}".format(key, value))
print()

print("Expected labels: ")
expected_labels = {'Cardiomegaly': False, 'Lung Lesion': False, 'Airspace Opacity': True, 'Edema': False, 'Consolidation': True, 'Pneumonia': True, 'Atelectasis': False, 'Pneumothorax': False, 'Pleural Effusion': False, 'Pleural Other': False, 'Fracture': False}
for key, value in sorted(expected_labels.items(), key=lambda x: x[0]):
    print("{} : {}".format(key, value))
print()

for category in CATEGORIES:
    if category not in retrieved_labels:
        print(f'Category {category} not found in retrieved labels. Please check code.')
    elif retrieved_labels[category] == expected_labels[category]:
        print(f'Labels match for {category}!')
    else:
        print(f'Labels mismatch for {category}. Please check code.')
```
##### Note
You may have noticed that the second sentence is "no Focal Consolidation", and that the consolidation label is set to True. We'll come back to this shortly.
<a name="1-2"></a>
### 1.2 Evaluating The Performance
In order to evaluate the performance of your labeler, you will use a metric called the [F1 score](https://en.wikipedia.org/wiki/F1_score).
- The F1 score is the harmonic mean of precision and recall.
- This score is a common metric that is used in information retrieval problems.
The reason that we care both about precision and recall is that only a small subset of the labels for each report are positive and the rest are negative.
- So a traditional metric such as accuracy could be very high if you just report every label as False.
- Precision and recall (summarized in the F1 score) help you measure your classification's performance for both positive cases as well as negative cases.
$$F_1 = \left( \frac{2}{ \frac{1}{recall} + \frac{1}{precision} }\right) = 2 \times \frac{precision \times recall}{precision + recall}$$
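As a quick sanity check on this formula, F1 can be computed directly from parallel lists of boolean labels. The sketch below is for illustration only (the assignment itself uses the provided `get_f1_table()` utility):

```python
def f1_score(y_true, y_pred):
    """F1 = 2 * precision * recall / (precision + recall),
    computed from parallel lists of 0/1 labels."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))          # true positives
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))    # false positives
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# 2 TP, 1 FP, 1 FN -> precision = recall = 2/3, so F1 = 2/3
print(f1_score([1, 1, 0, 1, 0], [1, 1, 1, 0, 0]))
```

Because F1 is the harmonic mean, it stays low unless both precision and recall are reasonably high, which is exactly why it suits this imbalanced labeling problem.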
We've implemented a utility function for you called `get_f1_table()` below to calculate your function's performance on the whole test set.
- This function will take advantage of modules from the `bioc` and `negbio` python packages to intelligently split your paragraph to sentences and then apply your function on each sentence.
- It then returns a table with the calculated F1 score for each category.
- You can see how this function is implemented in `util.py`.
```
get_f1_table(get_labels, test_df)
```
You can see that Airspace Opacity has the highest F1 score at 0.923 while Pneumothorax has the lowest at 0.218.
These numbers are actually not too bad given the simplicity of this rules-based implementation, but we can do a lot better.
<a name="1-3"></a>
### 1.3 Cleanup
First of all, the text that you use as input is very messy.
- You should be able to do better by doing some simple pre-processing.
- We have already implemented the `clean` function for you, which does basic text cleanup.
- Among other things, it converts patterns such as "and/or" to "or", replaces repeated whitespace with a single space, and removes redundant punctuation.
Run the following example to see how cleanup changes the input.
```
raw_text = test_df.loc[28, 'Report Impression']
print("raw text: \n\n" + raw_text)
print("cleaned text: \n\n" + clean(raw_text))
```
Note how the "and/or" in observation 1 was transformed into "or", and how the whitespace was standardized.
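To give a rough idea of what such a cleanup can look like, here is a hypothetical sketch using regular expressions. The actual `clean()` function lives in `util.py` and handles more cases; `simple_clean` below is our own toy version:

```python
import re

def simple_clean(text):
    """Toy report cleanup: 'and/or' -> 'or', collapse repeated
    punctuation, collapse repeated whitespace (incl. newlines)."""
    text = re.sub(r'\band/or\b', 'or', text)        # "and/or" -> "or"
    text = re.sub(r'([.,;:!?])\1+', r'\1', text)    # ".." -> "."
    text = re.sub(r'\s+', ' ', text)                # collapse whitespace
    return text.strip()

print(simple_clean("Consolidation  and/or   effusion..\n seen."))
# -> "Consolidation or effusion. seen."
```

Standardizing the text this way means the keyword matching in `get_labels()` has fewer surface variations to contend with.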
We have implemented a function for this which you can add to the pipeline, by passing the `cleanup=True` flag to our `get_f1_table()` function. Let's see if cleaning up the text can improve performance:
```
get_f1_table(get_labels, test_df, cleanup=True)
```
You can see a very modest improvement in Cardiomegaly and Pleural Other, but overall the impact is fairly low. In the next section you'll make a change which has a much bigger impact.
<a name="1-4"></a>
### 1.4 Finding Negative Mentions
<a name='ex-02'></a>
### Exercise 2
So far you have just been treating the presence of a keyword in the impression section as a signal for the presence of that condition in the report. This approach currently has a big problem:
- This is ignoring negations!
- For example, consider the following report: "No sign of consolidation". Currently, our matching approach calls this sentence a positive case of consolidation!
Implement your `get_labels()` function one more time.
- Use a boolean "flag" to indicate whether a negation like "no" or "not" appears in a sentence.
- Only set a label to `True` if the word "not" or "no" did not appear in the sentence.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>Use the get_mention_keywords() function.</li>
</ul>
</p>
</details>
```
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_labels_negative_aware(sentence_l):
    """
    Returns a dictionary that indicates presence of categories in
    sentences within the impression section of the report.
    Only set a label to True if no 'negative words' appeared in the sentence.
    hint: loop over the sentences array and use get_mention_keywords() function.

    Args:
        sentence_l (array of strings): array of strings representing impression section
    Returns:
        observation_d (dict): dictionary mapping observation from CATEGORIES array to boolean value
    """
    # Notice that all of the negative words are written in lowercase
    negative_word_l = ["no", "not", "doesn't", "does not", "have not", "can not", "can't", "n't"]
    observation_d = {}

    ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###

    # Initialize the observation dictionary
    # so that all categories are not marked present.
    for cat in CATEGORIES: # complete this line
        # Initialize category to not present.
        observation_d[cat] = False

    # Loop through each sentence in the list of sentences
    for s in sentence_l: # complete this line
        # make the sentence all lowercase
        s = s.lower()

        # Initialize the flag to indicate no negative mentions (yet)
        negative_flag = True

        # Go through all the negative words in the list
        for neg in negative_word_l: # complete this line
            # Check if the word is a substring in the sentence
            if neg in s: # complete this line
                # set the flag to indicate a negative mention
                negative_flag = False
                # Once a single negative mention is found,
                # you can stop checking the remaining negative words
                break # complete this line

        # When a negative word was not found in the sentence,
        # check for the presence of the diseases
        if negative_flag: # complete this line
            # Loop through the categories list
            for cat in CATEGORIES: # complete this line
                # Loop through each phrase that indicates this category
                for phrase in get_mention_keywords(cat): # complete this line
                    # make the phrase all lowercase
                    phrase = phrase.lower()
                    # Check if the phrase is a substring in the sentence
                    if phrase in s: # complete this line
                        # Set the observation dictionary
                        # to indicate the presence of this category
                        observation_d[cat] = True
    ### END CODE HERE ###
    return observation_d

print("Test Case")
test_sentences = ["Diffuse Reticular pattern, which can be seen with an atypical infection or chronic fibrotic change.",
                  "No Focal Consolidation."]

print("\nTest Sentences:\n")
for s in test_sentences:
    print(s)
print()

retrieved_labels = get_labels_negative_aware(test_sentences)
print("Retrieved labels: ")
for key, value in sorted(retrieved_labels.items(), key=lambda x: x[0]):
    print("{} : {}".format(key, value))
print()

print("Expected labels: ")
expected_labels = {'Cardiomegaly': False, 'Lung Lesion': False, 'Airspace Opacity': True, 'Edema': False, 'Consolidation': False, 'Pneumonia': True, 'Atelectasis': False, 'Pneumothorax': False, 'Pleural Effusion': False, 'Pleural Other': False, 'Fracture': False}
for key, value in sorted(expected_labels.items(), key=lambda x: x[0]):
    print("{} : {}".format(key, value))
print()

print("Test Results:")
for category in CATEGORIES:
    if category not in retrieved_labels:
        print(f'Category {category} not found in retrieved labels. Please check code.')
    elif retrieved_labels[category] == expected_labels[category]:
        print(f'Labels match for {category}!')
    else:
        print(f'Labels mismatch for {category}. Please check code.')
```
If you implemented this correctly, you should have Consolidation set to **False**, because the test sentence contains "No Focal Consolidation".
With the basic labeling method `get_labels()`, this set Consolidation to True, because it didn't look for 'negative' words.
Check how this changes your aggregate performance:
```
get_f1_table(get_labels_negative_aware, test_df, cleanup=True)
```
You should see a generally significant boost in the F1 score across the board.
- With the `get_labels()` method, Pneumothorax has an F1 score of 0.218.
- With the `get_labels_negative_aware()` method, Pneumothorax has an F1 score of 0.656.
### Fun Exercise
- Try to print some examples for which our rule does not perform well.
- Can you add other negative words to the list of negative words in order to make the average F1 score better?
In the next section, you'll see how to use some more sophisticated NLP tools to do even better!
<a name="1-5"></a>
### 1.5 Dependency Parsing
Our heuristic for detecting negation is still relatively simple and you can think of some examples that would fool it. To improve performance even further, you'll leverage a more sophisticated approach using a [dependency parser](https://nlp.stanford.edu/software/nndep.html).
<img src="nndep-example.png">
[Image Credit](https://nlp.stanford.edu/software/nndep.html)
A dependency parser extracts the underlying structure of the text, visualized in the tree diagram above.
- Using a dependency parser, you can detect the relationship between different words in the sentence and understand which word is affected by negative phrases.
Implementations of dependency parsers are very complex, but luckily there are some great off-the-shelf tools to do this.
- One example is [NegBio](https://github.com/ncbi-nlp/NegBio), a package specifically designed for finding negative and uncertain mentions in X-ray radiology reports.
- In addition to detecting negations, `negbio` can be configured to use a dependency parser that has been specifically trained for biomedical text.
- This increases our performance given that biomedical text is full of acronyms, nomenclature and medical jargon.
We've configured `negbio` to:
1. Parse sentences using the Bllip parser trained using [David McClosky’s biomedical model](https://nlp.stanford.edu/~mcclosky/biomedical.html)
2. Compute the universal dependency graph of each sentence using Stanford [CoreNLP](https://stanfordnlp.github.io/CoreNLP/)
3. Use a collection of dependency graph rules to detect the negative or positive presence of a phrase based on configurable patterns. It also comes packaged with the negation and uncertainty rules from the CheXpert project.
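To build some intuition for step 3, here is a toy, hand-rolled version of a dependency-graph negation rule. The parse is hard-coded as a list of head indices (a real pipeline gets this from a trained parser), and the single rule below is far simpler than NegBio's actual rule collection:

```python
def is_negated(tokens, heads, target):
    """Toy dependency-rule negation check (illustration only).
    tokens: list of words; heads[i]: index of token i's head in the
    parse tree (the root points to itself). The target is treated as
    negated if a negation word modifies the target itself or one of
    the target's ancestors in the tree."""
    neg_words = {"no", "not"}
    words = [w.lower() for w in tokens]
    if target.lower() not in words:
        return False
    # collect the target and all of its ancestors
    j = words.index(target.lower())
    ancestors = {j}
    while heads[j] != j:
        j = heads[j]
        ancestors.add(j)
    # a negation word attached to any of them negates the target
    return any(words[i] in neg_words and heads[i] in ancestors
               for i in range(len(tokens)))

# "No sign of consolidation", with a hand-built parse:
# 'sign' is the root; 'No' and 'of' modify 'sign'; 'consolidation' attaches to 'of'
tokens = ["No", "sign", "of", "consolidation"]
heads = [1, 1, 1, 2]
print(is_negated(tokens, heads, "consolidation"))  # True
```

Because the rule operates on the tree rather than the raw word sequence, it correctly handles negations that are several words away from the finding, which is where the simple "flag if 'no' is anywhere in the sentence" heuristic breaks down.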
The catch with this more involved method is that it takes a while (~1.5h on a fast laptop) to run the detection pipeline on the 1000-sample dataset.
- Hence, you can do your comparison on a smaller subset of the data (200 samples).
Run the following cells to get predictions using the `negbio` engine
```
sampled_test = test_df.sample(200,random_state=0)
```
The code for applying the dependency parser is implemented for you in the function `get_negbio_preds`.
- If you're interested, you can take a look at the implementation in `util.py`.
Run the next cell to extract predictions from the sampled test set.
### Note
- This should take about **5 minutes** to run.
- If you want to run it again, you may need to restart the kernel (close and re-open the assignment).
- If you run it a second time without restarting the kernel you'll get an error message like
```RuntimeError: Parser is already loaded and can only be loaded once.```
```
negbio_preds = get_negbio_preds(sampled_test)
```
Next, let's calculate the new F1 scores to see how the dependency parser does.
```
calculate_f1(sampled_test, negbio_preds)
```
And finally, let's compare all methods side by side!
```
basic = get_f1_table(get_labels, sampled_test).rename(columns={"F1": "F1 Basic"})
clean_basic = get_f1_table(get_labels, sampled_test, cleanup=True).rename(columns={"F1": "F1 Cleaned"})
negated_basic = get_f1_table(get_labels_negative_aware, sampled_test, cleanup=True).rename(columns={"F1": "F1 Negative Basic"})
negated_negbio = calculate_f1(sampled_test, negbio_preds).rename(columns={"F1": "F1 Negbio"})
joined_preds = basic.merge(clean_basic, on="Label")
joined_preds = joined_preds.merge(negated_basic, on="Label")
joined_preds = joined_preds.merge(negated_negbio, on="Label")
joined_preds
```
You should see an improvement using the heavier NLP machinery. The F1 for categories, such as Airspace Opacity, Cardiomegaly, Edema, and Pleural Effusion are already fairly high, while others leave something to be desired. To see how you can extend this to improve performance even more, you can take a look at the CheXpert Labeller paper [here]( https://arxiv.org/pdf/1901.07031.pdf).
When you're ready, move on to the next section to explore AI techniques for querying raw text.
<a name="2"></a>
## 2 Question Answering Using BERT
In the previous section, you looked at extracting disease labels from radiology reports. That could use a rule-based system because you asked a limited kind of question: "Is the disease present?".
What if you want to support any question a physician might want to ask? To do this, you'll have to use more recent artificial intelligence techniques and large datasets.
- In this section, we'll walk you through the pre- and post-processing involved in applying [BERT](https://github.com/google-research/bert) to the problem of question answering.
- After developing this infrastructure, you'll use the model to answer questions from clinical notes.
Implementing question answering can take a few steps, even using pretrained models.
- First retrieve our model and tokenizer. Recall from the lessons that the tokenizer prepares the input, mapping each word to a unique element in the vocabulary and inserting special tokens.
- Then, the model processes these tokenized inputs to produce contextual embeddings and perform tasks such as question answering.
### Load The Tokenizer
You will use the tokenizer in the next function that you implement.
```
tokenizer = AutoTokenizer.from_pretrained("./models")
```
<a name="2-1"></a>
### 2.1 Roadmap
The following function takes in a question and a passage, as well as some hyperparameters, then outputs the answer (according to our model). You will first implement helper functions that will be called by `get_model_answer`, and then complete this function using the helper functions.
```python
def get_model_answer(model, question, passage, tokenizer, max_seq_length=384):
    # prepare input
    ...
    # get scores for start of answer and end of answer
    ...
    # using scores, get most likely answer
    ...
    # using span start and end, construct answer as string
    ...
    return answer
```
<a name='2-2'></a>
### 2.2 Preparing The Input
<a name='ex-03'></a>
### Exercise 3: Prepare BERT Input
Your first task will be to prepare the raw passage and question for input into the model.
Given the strings `p` and `q`, you want to turn them into an input of the following form:
`[CLS]` `[q_token1]`, `[q_token2]`, ..., `[SEP]` `[p_token1]`, `[p_token2]`, ...
Here, the special characters `[CLS]` and `[SEP]` let the model know which part of the input is the question and which is the answer.
- The question appears between `[CLS]` and `[SEP]`.
- The answer appears after `[SEP]`
You'll also pad the input to the max input length, since BERT takes in a fixed-length input.
Fill in the function below to prepare input to BERT. You'll return three items.
- First is `input_ids`, which holds the numerical ids of each token.
- Second, you'll output the `input_mask`, which has 1's in parts of the input tensor representing input tokens, and 0's where there is padding.
- Finally, you'll output `tokens`, the output of the tokenizer (including the `[CLS]` and `[SEP]` tokens). You can see exactly what is expected in the test case below.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
<li>The format of the tokens will be <i>[CLS] &lt;question_tokens&gt; [SEP] &lt;answer_tokens&gt;</i></li>
<li>To generate a list that repeats an item, such as 'a', you can use ['a'] * 4 to get ['a', 'a', 'a', 'a'].</li>
<li>To create padding, generate a list of zeros: [0,0,...0] and add it to the list that you are padding.</li>
<li>The number of zeros to pad is the max sequence length minus the length of the list before padding.</li>
</ul>
</p>
</details>
```
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def prepare_bert_input(question, passage, tokenizer, max_seq_length=384):
    """
    Prepare question and passage for input to BERT.

    Args:
        question (string): question string
        passage (string): passage string where answer should lie
        tokenizer (Tokenizer): used for transforming raw string input
        max_seq_length (int): length of BERT input
    Returns:
        input_ids (tf.Tensor): tensor of size (1, max_seq_length) which holds
                               ids of tokens in input
        input_mask (list): list of length max_seq_length of 1s and 0s with 1s
                           in indices corresponding to input tokens, 0s in
                           indices corresponding to padding
        tokens (list): list of the actual string tokens corresponding to input_ids
    """
    # tokenize question
    question_tokens = tokenizer.tokenize(question)
    # tokenize passage
    passage_token = tokenizer.tokenize(passage)

    # get special tokens
    CLS = tokenizer.cls_token
    SEP = tokenizer.sep_token

    ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###

    # manipulate tokens to get input in correct form (not adding padding yet)
    # CLS {question_tokens} SEP {answer_tokens}
    # This should be a list of tokens
    tokens = list()
    tokens.append(CLS)
    tokens.extend(question_tokens)
    tokens.append(SEP)
    tokens.extend(passage_token)

    # Convert tokens into integer IDs.
    input_ids = tokenizer.convert_tokens_to_ids(tokens)

    # Create an input mask which has integer 1 for each token in the 'tokens' list
    input_mask = [1] * len(input_ids)

    # pad input_ids with 0s until it is the max_seq_length
    # Create padding for input_ids by creating a list of zeros [0,0,...0]
    # Add the padding to input_ids so that its length equals max_seq_length
    input_ids = input_ids + [0] * (max_seq_length - len(input_ids))

    # Do the same to pad the input_mask so its length is max_seq_length
    input_mask = input_mask + [0] * (max_seq_length - len(input_mask))

    ### END CODE HERE ###
    return tf.expand_dims(tf.convert_to_tensor(input_ids), 0), input_mask, tokens
```
You can test by running it on your sample input.
```
passage = "My name is Bob."
question = "What is my name?"
input_ids, input_mask, tokens = prepare_bert_input(question, passage, tokenizer, 20)
print("Test Case:\n")
print("Passage: {}".format(passage))
print("Question: {}".format(question))
print()
print("Tokens:")
print(tokens)
print("\nCorresponding input IDs:")
print(input_ids)
print("\nMask:")
print(input_mask)
```
##### Expected output
```
Passage: My name is Bob.
Question: What is my name?
Tokens:
['[CLS]', 'What', 'is', 'my', 'name', '?', '[SEP]', 'My', 'name', 'is', 'Bob', '.']
Corresponding input IDs:
tf.Tensor(
[[ 101 1327 1110 1139 1271 136 102 1422 1271 1110 3162 119 0 0
0 0 0 0 0 0]], shape=(1, 20), dtype=int32)
Mask:
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
```
<a name='2-3'></a>
### 2.3 Getting Answer From Model Output
<a name='ex-04'></a>
### Exercise 4: Get Span From Scores
After taking in the tokenized input, the model outputs two vectors.
- The first vector contains the scores (more formally, logits) for the starting index of the answer.
- A higher score means that index is more likely to be the start of the answer span in the passage.
- The second vector contains the score for the end index of the answer.
You want to output the span that maximizes the start score and end score.
- To be valid, the start index has to occur before the end index. Formally, you want to find:
$$\arg\max_{i <= j, mask_i=1, mask_j = 1} start\_scores[i] + end\_scores[j]$$
- In words, this formula says: compute the sum of the start score at position 'i' and the end score at position 'j', subject to the constraint that the start 'i' is either before or at the end position 'j'; then find the positions 'i' and 'j' where this sum is highest.
- Furthermore, you want to make sure that $i$ and $j$ are in the relevant parts of the input (i.e. where `input_mask` equals 1.)
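For intuition, the same search can also be written in vectorized form with numpy. This sketch is equivalent to the double loop you'll write in the exercise, but it is not the graded solution:

```python
import numpy as np

def best_span(start_scores, end_scores, input_mask):
    """Find (i, j) maximizing start_scores[i] + end_scores[j]
    subject to i <= j and input_mask[i] == input_mask[j] == 1.
    A vectorized sketch of the span search."""
    mask = np.array(input_mask) == 1
    s = np.where(mask, start_scores, -np.inf)       # invalidate masked starts
    e = np.where(mask, end_scores, -np.inf)         # invalidate masked ends
    total = s[:, None] + e[None, :]                 # total[i, j] = s[i] + e[j]
    total[np.tril_indices(len(s), k=-1)] = -np.inf  # disallow i > j
    i, j = np.unravel_index(np.argmax(total), total.shape)
    return int(i), int(j)

start = np.array([-1, 2, 0.4, -0.3, 0, 8, 10, 12.])
end = np.array([5, 1, 1, 3, 4, 10, 10, 10.])
mask = [1, 1, 1, 1, 1, 0, 0, 0]
print(best_span(start, end, mask))  # -> (1, 4)
```

Setting invalid entries to negative infinity before the `argmax` plays the same role as the `if` conditions in the loop version: those positions can never win the maximization.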
Fill in the following function to get the best start and end locations given the start and end scores and the input mask.
<details>
<summary>
<font size="3" color="darkgreen"><b>Hints</b></font>
</summary>
<p>
<ul>
If you're more comfortable manipulating data with numpy, then here are some methods you might find useful:
<li><i>{tensor}.numpy()</i> - converts a tf.Tensor object to a numpy array</li>
<li><i>np.argmax()</i> - finds the index of the maximum value in a numpy array</li>
</ul>
</p>
</details>
```
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_span_from_scores(start_scores, end_scores, input_mask, verbose=False):
    """
    Find start and end indices that maximize sum of start score
    and end score, subject to the constraint that start is before end
    and both are valid according to input_mask.

    Args:
        start_scores (list): contains scores for start positions, shape (1, n)
        end_scores (list): contains scores for end positions, shape (1, n)
        input_mask (list): 1 for valid positions and 0 otherwise
    """
    n = len(start_scores)
    max_start_i = -1
    max_end_j = -1
    max_start_val = -np.inf
    max_end_val = -np.inf
    max_sum = -np.inf

    # Find i and j that maximizes start_scores[i] + end_scores[j]
    # so that i <= j and input_mask[i] == input_mask[j] == 1

    ### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
    # set the range for i
    for i in range(n): # complete this line
        # set the range for j
        for j in range(n): # complete this line
            # i must not come after j, and both input masks should be 1
            if i <= j and input_mask[i] == input_mask[j] == 1: # complete this line
                # check if the sum of the start and end scores is greater than the previous max sum
                if start_scores[i] + end_scores[j] > max_sum: # complete this line
                    # calculate the new max sum
                    max_sum = start_scores[i] + end_scores[j]
                    # save the index of the max start score
                    max_start_i = i
                    # save the index for the max end score
                    max_end_j = j
                    # save the value of the max start score
                    max_start_val = start_scores[i]
                    # save the value of the max end score
                    max_end_val = end_scores[j]
    ### END CODE HERE ###

    if verbose:
        print(f"max start is at index i={max_start_i} and score {max_start_val}")
        print(f"max end is at index i={max_end_j} and score {max_end_val}")
        print(f"max start + max end sum of scores is {max_sum}")
    return max_start_i, max_end_j
```
We can test this out on the following sample start scores and end scores:
```
start_scores = tf.convert_to_tensor([-1, 2, 0.4, -0.3, 0, 8, 10, 12], dtype=float)
end_scores = tf.convert_to_tensor([5, 1, 1, 3, 4, 10, 10, 10], dtype=float)
input_mask = [1, 1, 1, 1, 1, 0, 0, 0]
start, end = get_span_from_scores(start_scores, end_scores, input_mask, verbose=True)
print("Expected: (1, 4) \nReturned: ({}, {})".format(start, end))
```
##### Expected output
```CPP
max start is at index i=1 and score 2.0
max end is at index i=4 and score 4.0
max start + max end sum of scores is 6.0
Expected: (1, 4)
Returned: (1, 4)
```
```
# Test 2
start_scores = tf.convert_to_tensor([0, 2, -1, 0.4, -0.3, 0, 8, 10, 12], dtype=float)
end_scores = tf.convert_to_tensor([0, 5, 1, 1, 3, 4, 10, 10, 10], dtype=float)
input_mask = [1, 1, 1, 1, 1, 0, 0, 0, 0 ]
start, end = get_span_from_scores(start_scores, end_scores, input_mask, verbose=True)
print("Expected: (1, 1) \nReturned: ({}, {})".format(start, end))
```
##### Expected output
```CPP
max start is at index i=1 and score 2.0
max end is at index i=1 and score 5.0
max start + max end sum of scores is 7.0
Expected: (1, 1)
Returned: (1, 1)
```
If your expected output differs in this second test, please check how you set the range of your for loops.
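As an aside, if you prefer the numpy route suggested in the hints, the same search can be vectorized instead of using nested loops. This is only a sketch (not part of the graded cell), and it assumes plain arrays rather than TensorFlow tensors:

```python
import numpy as np

def get_span_vectorized(start_scores, end_scores, input_mask):
    start = np.asarray(start_scores, dtype=float)
    end = np.asarray(end_scores, dtype=float)
    mask = np.asarray(input_mask, dtype=bool)
    # Outer sum: pair_scores[i, j] = start[i] + end[j]
    pair_scores = start[:, None] + end[None, :]
    # Invalidate pairs where either position is masked out
    pair_scores[~mask, :] = -np.inf
    pair_scores[:, ~mask] = -np.inf
    # Enforce i <= j by invalidating the strict lower triangle
    pair_scores[np.tril_indices_from(pair_scores, k=-1)] = -np.inf
    i, j = np.unravel_index(np.argmax(pair_scores), pair_scores.shape)
    return int(i), int(j)

print(get_span_vectorized([-1, 2, 0.4, -0.3, 0, 8, 10, 12],
                          [5, 1, 1, 3, 4, 10, 10, 10],
                          [1, 1, 1, 1, 1, 0, 0, 0]))  # (1, 4)
```

This reproduces the results of both tests above; the loop version remains easier to follow and is what the exercise asks for.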
<a name='ex-05'></a>
### Exercise 05: Post-Processing
Finally, we'll add some post-processing to get the final string.
```
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def construct_answer(tokens):
"""
Combine tokens into a string, remove some hash symbols, and leading/trailing whitespace.
Args:
tokens: a list of tokens (strings)
Returns:
out_string: the processed string.
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# join the tokens together with whitespace
out_string = ' '.join(tokens)
# replace ' ##' with empty string
out_string = out_string.replace(' ##','')
# remove leading and trailing whitespace
out_string = out_string.strip()
### END CODE HERE ###
# if there is an '@' symbol in the tokens, remove all whitespace
if '@' in tokens:
out_string = out_string.replace(' ', '')
return out_string
# Test
tmp_tokens_1 = [' ## hello', 'how ', 'are ', 'you? ']
tmp_out_string_1 = construct_answer(tmp_tokens_1)
print(f"tmp_out_string_1: {tmp_out_string_1}, length {len(tmp_out_string_1)}")
tmp_tokens_2 = ['@',' ## hello', 'how ', 'are ', 'you? ']
tmp_out_string_2 = construct_answer(tmp_tokens_2)
print(f"tmp_out_string_2: {tmp_out_string_2}, length {len(tmp_out_string_2)}")
```
##### Expected output
```CPP
tmp_out_string_1: hello how are you?, length 20
tmp_out_string_2: @hellohowareyou?, length 16
```
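The `' ##'` replacement exists because WordPiece tokenizers mark word continuations with a leading `##`; joining on spaces and then stripping the marker reassembles the original words. A minimal illustration (the token values here are made up for the example):

```python
tokens = ['play', '##ing', 'foot', '##ball']
out_string = ' '.join(tokens).replace(' ##', '').strip()
print(out_string)  # playing football
```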
<a name='2-4'></a>
### 2.4 Putting It All Together
The `get_model_answer` function takes all the functions that you've implemented and performs question-answering.
First load the pre-trained model
```
model = TFAutoModelForQuestionAnswering.from_pretrained("./models")
```
### Get Model Answer
Using the helper functions that you implemented, fill out the get_model_answer function to create your question-answering system.
- This function has been completed for you, but we recommend reviewing the code to understand what it's doing.
```
def get_model_answer(model, question, passage, tokenizer, max_seq_length=384):
"""
Identify answer in passage for a given question using BERT.
Args:
model (Model): pretrained Bert model which we'll use to answer questions
question (string): question string
passage (string): passage string
tokenizer (Tokenizer): used for preprocessing of input
max_seq_length (int): length of input for model
Returns:
answer (string): answer to input question according to model
"""
# prepare input: use the function prepare_bert_input
input_ids, input_mask, tokens = prepare_bert_input(question, passage, tokenizer, max_seq_length)
# get scores for start of answer and end of answer
# use the model returned by TFAutoModelForQuestionAnswering.from_pretrained("./models")
    # pass in the input ids that are returned by prepare_bert_input
start_scores, end_scores = model(input_ids)
# start_scores and end_scores will be tensors of shape [1,max_seq_length]
# To pass these into get_span_from_scores function,
# take the value at index 0 to get a tensor of shape [max_seq_length]
start_scores = start_scores[0]
end_scores = end_scores[0]
# using scores, get most likely answer
# use the get_span_from_scores function
span_start, span_end = get_span_from_scores(start_scores, end_scores, input_mask)
# Using array indexing to get the tokens from the span start to span end (including the span_end)
answer_tokens = tokens[span_start:span_end+1]
# Combine the tokens into a single string and perform post-processing
# use construct_answer
answer = construct_answer(answer_tokens)
return answer
```
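One detail worth noting in `get_model_answer` is the `span_end + 1` in the token slice: Python slices exclude their end index, so the `+ 1` is what keeps the final answer token. A quick check with a made-up token list:

```python
tokens = ['what', 'is', 'computational', 'complexity', 'theory']
span_start, span_end = 2, 4
answer_tokens = tokens[span_start:span_end + 1]  # slice includes index span_end
print(answer_tokens)  # ['computational', 'complexity', 'theory']
```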
<a name='2-5'></a>
### 2.5 Try It Out
Now that we've prepared all the pieces, let's try an example from the SQuAD dataset.
```
passage = "Computational complexity theory is a branch of the theory \
of computation in theoretical computer science that focuses \
on classifying computational problems according to their inherent \
difficulty, and relating those classes to each other. A computational \
problem is understood to be a task that is in principle amenable to \
being solved by a computer, which is equivalent to stating that the \
problem may be solved by mechanical application of mathematical steps, \
such as an algorithm."
question = "What branch of theoretical computer science deals with broadly \
classifying computational problems by difficulty and class of relationship?"
print("Output: {}".format(get_model_answer(model, question, passage, tokenizer)))
print("Expected: Computational complexity theory")
passage = "The word pharmacy is derived from its root word pharma which was a term used since \
the 15th–17th centuries. However, the original Greek roots from pharmakos imply sorcery \
or even poison. In addition to pharma responsibilities, the pharma offered general medical \
advice and a range of services that are now performed solely by other specialist practitioners, \
such as surgery and midwifery. The pharma (as it was referred to) often operated through a \
retail shop which, in addition to ingredients for medicines, sold tobacco and patent medicines. \
Often the place that did this was called an apothecary and several languages have this as the \
dominant term, though their practices are more akin to a modern pharmacy, in English the term \
apothecary would today be seen as outdated or only appropriate if herbal remedies were on offer \
to a large extent. The pharmas also used many other herbs not listed. The Greek word Pharmakeia \
(Greek: φαρμακεία) derives from pharmakon (φάρμακον), meaning 'drug', 'medicine' (or 'poison')."
question = "What word is the word pharmacy taken from?"
print("Output: {}".format(get_model_answer(model, question, passage, tokenizer)))
print("Expected: pharma")
```
Now let's try it on clinical notes. Below we have an excerpt of a doctor's notes for a patient with an abnormal echocardiogram (this sample is taken from [here](https://www.mtsamples.com/site/pages/sample.asp?Type=6-Cardiovascular%20/%20Pulmonary&Sample=1597-Abnormal%20Echocardiogram))
```
passage = "Abnormal echocardiogram findings and followup. Shortness of breath, congestive heart failure, \
and valvular insufficiency. The patient complains of shortness of breath, which is worsening. \
The patient underwent an echocardiogram, which shows severe mitral regurgitation and also large \
pleural effusion. The patient is an 86-year-old female admitted for evaluation of abdominal pain \
and bloody stools. The patient has colitis and also diverticulitis, undergoing treatment. \
During the hospitalization, the patient complains of shortness of breath, which is worsening. \
The patient underwent an echocardiogram, which shows severe mitral regurgitation and also large \
pleural effusion. This consultation is for further evaluation in this regard. As per the patient, \
she is an 86-year-old female, has limited activity level. She has been having shortness of breath \
for many years. She also was told that she has a heart murmur, which was not followed through \
on a regular basis."
q1 = "How old is the patient?"
q2 = "Does the patient have any complaints?"
q3 = "What is the reason for this consultation?"
q4 = "What does her echocardiogram show?"
q5 = "What other symptoms does the patient have?"
questions = [q1, q2, q3, q4, q5]
for i, q in enumerate(questions):
print("Question {}: {}".format(i+1, q))
print()
print("Answer: {}".format(get_model_answer(model, q, passage, tokenizer)))
print()
print()
```
Even without fine-tuning, the model is able to reasonably answer most of the questions! Of course, it isn't perfect. For example, it doesn't give much detail for Question 3. To improve performance, you would ideally collect a medical QA dataset and fine tune the model.
Before we begin, let's execute the cell below to display information about the CUDA driver and GPUs running on the server by running the `nvidia-smi` command. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
```
!nvidia-smi
```
## Learning objectives
The **goal** of this lab is to:
- Learn how to run the same code on both a multicore CPU and a GPU using Fortran Standard Parallelism
- Understand steps required to make a sequential code parallel using do-concurrent constructs
We do not intend to cover:
- Detailed optimization techniques and mapping of do-concurrent constructs to CUDA Fortran
# Fortran Standard Parallelism
ISO Standard Fortran 2008 introduced the DO CONCURRENT construct to allow you to express loop-level parallelism, one of the various mechanisms for expressing parallelism directly in the Fortran language.
Fortran developers have long been able to accelerate their programs using CUDA Fortran, OpenACC, or OpenMP. Now, with DO CONCURRENT support for GPUs in the NVIDIA HPC SDK, the compiler automatically accelerates DO CONCURRENT loops, letting developers benefit from acceleration on NVIDIA GPUs using ISO Standard Fortran without any extensions, directives, or non-standard libraries. You can write standard Fortran, remain fully portable to other compilers and systems, and still benefit from the full power of NVIDIA GPUs.
For the *Pair Calculation* in our code, all that's required is expressing the loops with DO CONCURRENT. The example below will introduce you to the syntax of DO CONCURRENT.
A sample vector addition subroutine is shown in the code below:
```fortran
subroutine vec_addition(x,y,n)
real :: x(:), y(:)
integer :: n, i
do i = 1, n
y(i) = x(i)+y(i)
enddo
end subroutine vec_addition
```
In order to make use of ISO Fortran DO CONCURRENT, we need to replace the `do` loop with `do concurrent`, as shown in the code below:
```fortran
subroutine vec_addition(x,y,n)
real :: x(:), y(:)
integer :: n, i
do concurrent (i = 1: n)
y(i) = x(i)+y(i)
enddo
end subroutine vec_addition
```
By changing the DO loop to DO CONCURRENT, you are telling the compiler that there are no data dependencies between the n loop iterations. This leaves the compiler free to generate instructions so that the iterations can be executed in any order, and simultaneously. The compiler will parallelize the loop even if there *are* data dependencies, resulting in race conditions and likely incorrect results; it is your responsibility to ensure that the loop is safe to parallelize.
## Nested Loop Parallelism
Nested loops are a common code pattern encountered in HPC applications. A simple example might look like the following:
```fortran
do i=2, n-1
do j=2, m-1
a(i,j) = w0 * b(i,j)
enddo
enddo
```
It is straightforward to write such patterns with a single DO CONCURRENT statement, as in the following example. It is easier to read, and the compiler has more information available for optimization.
```fortran
do concurrent(i=2 : n-1, j=2 : m-1)
a(i,j) = w0 * b(i,j)
enddo
```
Now, let's start modifying the original code by adding DO CONCURRENT. Click on the <b>[rdf.f90](../../source_code/doconcurrent/rdf.f90)</b> link and modify `rdf.f90`. Remember to **SAVE** your code after making changes, before running the cells below.
### Compile and Run for Multicore
Now that we have added DO CONCURRENT, let us try to compile the code. We will be using the NVIDIA HPC SDK for this exercise. The flags used for enabling DO CONCURRENT are as follows:
- `-stdpar` : this flag tells the compiler to enable parallel DO CONCURRENT for the respective target
- `-stdpar=multicore` will allow us to compile our code for a multicore CPU
- `-stdpar` alone will allow us to compile our code for an NVIDIA GPU (the default target is NVIDIA)
```
#Compile the code for multicore
!cd ../../source_code/doconcurrent && nvfortran -stdpar=multicore -Minfo -o rdf nvtx.f90 rdf.f90 -I/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/cuda/11.2/include -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/cuda/11.2/lib64 -lnvToolsExt
```
If you compile this code with **-Minfo**, you can see how the compiler performs the parallelization.
```
rdf:
80, Memory zero idiom, loop replaced by call to __c_mzero8
92, Generating Multicore code
92, Loop parallelized across CPU threads
```
Run the executable and validate the output.
```
#Run the multicore code
!cd ../../source_code/doconcurrent && ./rdf && cat Pair_entropy.dat
```
The output entropy value should be the following:
```
s2 : -2.452690945278331
s2bond : -24.37502820694527
```
```
#profile and see output of nsys
!cd ../../source_code/doconcurrent && nsys profile -t nvtx --stats=true --force-overwrite true -o rdf_doconcurrent_multicore ./rdf
```
Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/doconcurrent/rdf_doconcurrent_multicore.qdrep) and open it via the GUI. Have a look at the example expected profiler report below:
<img src="../images/do_concurrent_multicore.jpg">
### Compile and Run for NVIDIA GPU
Without changing the code, let us now recompile it for an NVIDIA GPU and rerun.
GPU acceleration of DO CONCURRENT is enabled with the `-stdpar` command-line option to nvfortran. If `-stdpar` is specified, almost all DO CONCURRENT loops are compiled for offloading to run in parallel on an NVIDIA GPU.
**Understand and analyze** the solution present at:
[RDF Code](../../source_code/doconcurrent/SOLUTION/rdf.f90)
Open the downloaded files for inspection.
```
#compile for Tesla GPU
!cd ../../source_code/doconcurrent && nvfortran -stdpar=gpu -Minfo -acc -o rdf nvtx.f90 rdf.f90 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/cuda/11.2/lib64 -lnvToolsExt
```
If you compile this code with **-Minfo**, you can see how the compiler performs the parallelization.
```
rdf:
80, Memory zero idiom, loop replaced by call to __c_mzero8
92, Generating Tesla code
92, Loop parallelized across CUDA thread blocks, CUDA threads(4) blockidx%y threadidx%y
Loop parallelized across CUDA thread blocks, CUDA threads(32) blockidx%x threadidx%x
```
Run the executable and validate the output.
```
#Run on NVIDIA GPU
!cd ../../source_code/doconcurrent && ./rdf && cat Pair_entropy.dat
```
The output entropy value should be the following:
```
s2 : -2.452690945278331
s2bond : -24.37502820694527
```
```
#profile and see output of nsys
!cd ../../source_code/doconcurrent && nsys profile -t nvtx,cuda --stats=true --force-overwrite true -o rdf_doconcurrent_gpu ./rdf
```
Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/doconcurrent/rdf_doconcurrent_gpu.qdrep) and open it via the GUI. Have a look at the example expected profiler report below:
<img src="../images/do_concurrent_gpu.jpg">
If you inspect the output of the profiler more closely, you can see the usage of *Unified Memory*, annotated with a green rectangle, which was explained in previous sections.
Moreover, if you compare the NVTX marker `Pair_Calculation` (from the NVTX row) in both the multicore and GPU versions, you can see how much improvement you achieved. In the *example screenshot*, we were able to reduce that range from 1.57 seconds to 26 milliseconds.
Feel free to check out the [solution](../../source_code/doconcurrent/SOLUTION/rdf.f90) to deepen your understanding, or to compare your implementation with the sample solution.
# ISO Standard Fortran Analysis
**Usage Scenarios**
- DO CONCURRENT is part of the standard language and provides a good starting point for accelerating code on accelerators such as GPUs and multicore CPUs.
**Limitations/Constraints**
It is key to understand that DO CONCURRENT is not an alternative to CUDA. It provides the highest portability and can be seen as a first step in porting to GPUs. Its general abstraction limits optimization opportunities; for example, the current DO CONCURRENT implementation depends on Unified Memory. Moreover, you have no control over thread management, which limits the achievable performance improvement.
**Which Compilers Support DO-CONCURRENT on GPUs and Multicore?**
1. NVIDIA GPU: As of January 2021, the only compiler that supports DO CONCURRENT on NVIDIA GPUs is NVIDIA's.
2. x86 Multicore: Other compilers, such as the Intel compiler, have implementations for multicore CPUs.
## Post-Lab Summary
If you would like to download this lab for later viewing, it is recommended that you go to your browser's File menu (not the Jupyter notebook File menu) and save the complete web page. This will ensure the images are copied down as well. You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.
```
%%bash
cd ..
rm -f nways_files.zip
zip -r nways_files.zip *
```
**After** executing the above zip command, you should be able to download and save the zip file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../nways_files.zip).
Let us now go back to parallelizing our code using other approaches.
**IMPORTANT**: Please click on **HOME** to go back to the main notebook for *N ways of GPU programming for MD* code.
-----
# <p style="text-align:center;border:3px; border-style:solid; border-color:#FF0000 ; padding: 1em"> <a href=../../../nways_MD_start.ipynb>HOME</a></p>
-----
# Links and Resources
[Do-Concurrent Guide](https://developer.nvidia.com/blog/accelerating-fortran-do-concurrent-with-gpus-and-the-nvidia-hpc-sdk/)
[NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)
**NOTE**: To be able to see the Nsight System profiler output, please download Nsight System latest version from [here](https://developer.nvidia.com/nsight-systems).
Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.
---
## Licensing
This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0).
## Visualizing the 2016 General Election Polls
```
from __future__ import print_function
import pandas as pd
import numpy as np
from ipywidgets import VBox, HBox
import os
codes = pd.read_csv(os.path.abspath('../data_files/state_codes.csv'))
try:
from pollster import Pollster
except ImportError:
print('Pollster not found. Installing Pollster..')
try:
import subprocess
subprocess.check_call(['pip', 'install', 'pollster==0.1.6'])
except:
print("The pip installation failed. Please manually install Pollster and re-run this notebook.")
def get_candidate_data(question):
clinton, trump, undecided, other = 0., 0., 0., 0.
for candidate in question['subpopulations'][0]['responses']:
if candidate['last_name'] == 'Clinton':
clinton = candidate['value']
elif candidate['last_name'] == 'Trump':
trump = candidate['value']
elif candidate['choice'] == 'Undecided':
undecided = candidate['value']
else:
other = candidate['value']
return clinton, trump, other, undecided
def get_row(question, partisan='Nonpartisan', end_date='2016-06-21'):
# if question['topic'] != '2016-president':
if ('2016' in question['topic']) and ('Presidential' in question['topic']):
hillary, donald, other, undecided = get_candidate_data(question)
return [{'Name': question['name'], 'Partisan': partisan, 'State': question['state'],
'Date': np.datetime64(end_date), 'Trump': donald, 'Clinton': hillary, 'Other': other,
'Undecided': undecided}]
else:
return
def analyze_polls(polls):
global data
for poll in polls:
for question in poll.questions:
resp = get_row(question, partisan=poll.partisan, end_date=poll.end_date)
if resp is not None:
                data = pd.concat([data, pd.DataFrame(resp)], ignore_index=True)
return
try:
from pollster import Pollster
pollster = Pollster()
# Getting data from Pollster. This might take a second.
raw_data = pollster.charts(topic='2016-president')
data = pd.DataFrame(columns=['Name', 'Partisan', 'State', 'Date', 'Trump', 'Clinton', 'Other',
'Undecided'])
for i in raw_data:
analyze_polls(i.polls())
except:
raise ValueError('Please install Pollster and run the functions above')
def get_state_party(code):
state = codes[codes['FIPS']==code]['USPS'].values[0]
if data[data['State']==state].shape[0] == 0:
return None
polls = data[(data['State']==state) & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
if polls.shape[0] == 0:
return None
if (polls.tail(1)['Trump'] > polls.tail(1)['Clinton']).values[0]:
return 'Republican'
else:
return 'Democrat'
def get_color_data():
color_data = {}
for i in codes['FIPS']:
color_data[i] = get_state_party(i)
return color_data
def get_state_data(code):
state = codes[codes['FIPS']==code]['USPS'].values[0]
if data[data['State']==state].shape[0] == 0:
return None
polls = data[(data['State']==state) & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
return polls
from bqplot import *
from ipywidgets import Layout
dt_x = DateScale()
sc_y = LinearScale()
time_series = Lines(scales={'x': dt_x, 'y': sc_y}, colors=['#E91D0E', '#2aa1ec'], marker='circle')
ax_x = Axis(scale=dt_x, label='Date')
ax_y = Axis(scale=sc_y, orientation='vertical', label='Percentage')
ts_fig = Figure(marks=[time_series], axes=[ax_x, ax_y], title='General Election - State Polls',
layout=Layout(min_width='650px', min_height='400px'))
sc_geo = AlbersUSA()
sc_c1 = OrdinalColorScale(domain=['Democrat', 'Republican'], colors=['#2aa1ec', '#E91D0E'])
color_data = get_color_data()
map_styles = {'color': color_data,
'scales': {'projection': sc_geo, 'color': sc_c1}, 'colors': {'default_color': 'Grey'}}
axis = ColorAxis(scale=sc_c1)
states_map = Map(map_data=topo_load('map_data/USStatesMap.json'), tooltip=ts_fig, **map_styles)
map_fig = Figure(marks=[states_map], axes=[axis],title='General Election Polls - State Wise')
def hover_callback(name, value):
polls = get_state_data(value['data']['id'])
if polls is None or polls.shape[0] == 0:
time_series.y = [0.]
return
time_series.x, time_series.y = polls['Date'].values.astype(np.datetime64), [polls['Trump'].values, polls['Clinton'].values]
ts_fig.title = str(codes[codes['FIPS']==value['data']['id']]['Name'].values[0]) + ' Polls - Presidential Election'
states_map.on_hover(hover_callback)
national = data[(data['State']=='US') & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
dt_x = DateScale()
sc_y = LinearScale()
clinton_scatter = Scatter(x=national['Date'].values.astype(np.datetime64), y=national['Clinton'],
scales={'x': dt_x, 'y': sc_y},
colors=['#2aa1ec'])
trump_scatter = Scatter(x=national['Date'].values.astype(np.datetime64), y=national['Trump'],
scales={'x': dt_x, 'y': sc_y},
colors=['#E91D0E'])
ax_x = Axis(scale=dt_x, label='Date', tick_format='%b-%Y', num_ticks=8)
ax_y = Axis(scale=sc_y, orientation='vertical', label='Percentage')
scat_fig = Figure(marks=[clinton_scatter, trump_scatter], axes=[ax_x, ax_y], title='General Election - National Polls')
```
#### Hover on the map to visualize the poll data for that state.
```
VBox([map_fig, scat_fig])
```
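Note that `get_state_party` above colors a state using only its most recent poll: it sorts the polls by date and compares the last row. The pattern reduces to something like this sketch (the poll numbers are made up for illustration):

```python
# (date, trump_pct, clinton_pct) -- hypothetical values for illustration
polls = [('2016-06-10', 42.0, 47.0),
         ('2016-05-01', 44.0, 41.0)]
polls.sort(key=lambda p: p[0])  # ISO date strings sort chronologically
latest = polls[-1]
party = 'Republican' if latest[1] > latest[2] else 'Democrat'
print(party)  # Democrat
```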
## Visualizing the County Results of the 2008 Elections
```
county_data = pd.read_csv(os.path.abspath('../data_files/2008-election-results.csv'))
winner = np.array(['McCain'] * county_data.shape[0])
winner[(county_data['Obama'] > county_data['McCain']).values] = 'Obama'
sc_geo_county = AlbersUSA()
sc_c1_county = OrdinalColorScale(domain=['McCain', 'Obama'], colors=['Red', 'DeepSkyBlue'])
color_data_county = dict(zip(county_data['FIPS'].values.astype(int), list(winner)))
map_styles_county = {'color': color_data_county,
'scales': {'projection': sc_geo_county, 'color': sc_c1_county}, 'colors': {'default_color': 'Grey'}}
axis_county = ColorAxis(scale=sc_c1_county)
county_map = Map(map_data=topo_load('map_data/USCountiesMap.json'), **map_styles_county)
county_fig = Figure(marks=[county_map], axes=[axis_county],title='US Elections 2008 - Example',
layout=Layout(min_width='800px', min_height='550px'))
names_sc = OrdinalScale(domain=['Obama', 'McCain'])
vote_sc_y = LinearScale(min=0, max=100.)
names_ax = Axis(scale=names_sc, label='Candidate')
vote_ax = Axis(scale=vote_sc_y, orientation='vertical', label='Percentage')
vote_bars = Bars(scales={'x': names_sc, 'y': vote_sc_y}, colors=['#2aa1ec', '#E91D0E'])
bar_fig = Figure(marks=[vote_bars], axes=[names_ax, vote_ax], title='Vote Margin',
layout=Layout(min_width='600px', min_height='400px'))
def county_hover(name, value):
if (county_data['FIPS'] == value['data']['id']).sum() == 0:
bar_fig.title = ''
vote_bars.y = [0., 0.]
return
votes = county_data[county_data['FIPS'] == value['data']['id']]
dem_vote = float(votes['Obama %'].values[0])
rep_vote = float(votes['McCain %'].values[0])
vote_bars.x, vote_bars.y = ['Obama', 'McCain'], [dem_vote, rep_vote]
bar_fig.title = 'Vote % - ' + value['data']['name']
county_map.on_hover(county_hover)
county_map.tooltip = bar_fig
```
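The `winner` array in the cell above is filled with a numpy boolean mask: every county starts as `'McCain'`, and the mask flips exactly the rows where Obama's count is higher. A minimal reproduction of that pattern, with made-up vote counts:

```python
import numpy as np

obama = np.array([52.0, 41.0, 60.5])
mccain = np.array([46.0, 57.0, 38.0])
winner = np.array(['McCain'] * 3)  # fixed-width string dtype holds both names
winner[obama > mccain] = 'Obama'   # boolean mask selects rows Obama won
print(winner.tolist())  # ['Obama', 'McCain', 'Obama']
```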
#### Hover on the map to visualize the voting percentage for each candidate in that county
```
county_fig
```
# Creating a Sentiment Analysis Web App
## Using PyTorch and SageMaker
_Deep Learning Nanodegree Program | Deployment_
---
Now that we have a basic understanding of how SageMaker works we will try to use it to construct a complete project from end to end. Our goal will be to have a simple web page which a user can use to enter a movie review. The web page will then send the review off to our deployed model which will predict the sentiment of the entered review.
## Instructions
Some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this notebook. You will not need to modify the included code beyond what is requested. Sections that begin with '**TODO**' in the header indicate that you need to complete or implement some portion within them. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `# TODO: ...` comment. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions for you to answer which relate to the task and your implementation. Each section where you will answer a question is preceded by a '**Question:**' header. Carefully read each question and provide your answer below the '**Answer:**' header by editing the Markdown cell.
> **Note**: Code and Markdown cells can be executed using the **Shift+Enter** keyboard shortcut. In addition, a cell can be edited by clicking it (double-click for Markdown cells) or by pressing **Enter** while it is highlighted.
## General Outline
Recall the general outline for SageMaker projects using a notebook instance.
1. Download or otherwise retrieve the data.
2. Process / Prepare the data.
3. Upload the processed data to S3.
4. Train a chosen model.
5. Test the trained model (typically using a batch transform job).
6. Deploy the trained model.
7. Use the deployed model.
For this project, you will be following the steps in the general outline with some modifications.
First, you will not be testing the model in its own step. You will still be testing the model, however, you will do it by deploying your model and then using the deployed model by sending the test data to it. One of the reasons for doing this is so that you can make sure that your deployed model is working correctly before moving forward.
In addition, you will deploy and use your trained model a second time. In the second iteration you will customize the way that your trained model is deployed by including some of your own code. In addition, your newly deployed model will be used in the sentiment analysis web app.
## Step 1: Downloading the data
As in the XGBoost in SageMaker notebook, we will be using the [IMDb dataset](http://ai.stanford.edu/~amaas/data/sentiment/).
> Maas, Andrew L., et al. [Learning Word Vectors for Sentiment Analysis](http://ai.stanford.edu/~amaas/data/sentiment/). In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, 2011.
```
%mkdir ../data
!wget -O ../data/aclImdb_v1.tar.gz http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -zxf ../data/aclImdb_v1.tar.gz -C ../data
```
## Step 2: Preparing and Processing the data
Also, as in the XGBoost notebook, we will be doing some initial data processing. The first few steps are the same as in the XGBoost example. To begin with, we will read in each of the reviews and combine them into a single input structure. Then, we will split the dataset into a training set and a testing set.
```
import os
import glob
def read_imdb_data(data_dir='../data/aclImdb'):
data = {}
labels = {}
for data_type in ['train', 'test']:
data[data_type] = {}
labels[data_type] = {}
for sentiment in ['pos', 'neg']:
data[data_type][sentiment] = []
labels[data_type][sentiment] = []
path = os.path.join(data_dir, data_type, sentiment, '*.txt')
files = glob.glob(path)
for f in files:
with open(f) as review:
data[data_type][sentiment].append(review.read())
# Here we represent a positive review by '1' and a negative review by '0'
labels[data_type][sentiment].append(1 if sentiment == 'pos' else 0)
assert len(data[data_type][sentiment]) == len(labels[data_type][sentiment]), \
"{}/{} data size does not match labels size".format(data_type, sentiment)
return data, labels
data, labels = read_imdb_data()
print("IMDB reviews: train = {} pos / {} neg, test = {} pos / {} neg".format(
len(data['train']['pos']), len(data['train']['neg']),
len(data['test']['pos']), len(data['test']['neg'])))
```
Now that we've read the raw training and testing data from the downloaded dataset, we will combine the positive and negative reviews and shuffle the resulting records.
```
from sklearn.utils import shuffle
def prepare_imdb_data(data, labels):
"""Prepare training and test sets from IMDb movie reviews."""
#Combine positive and negative reviews and labels
data_train = data['train']['pos'] + data['train']['neg']
data_test = data['test']['pos'] + data['test']['neg']
labels_train = labels['train']['pos'] + labels['train']['neg']
labels_test = labels['test']['pos'] + labels['test']['neg']
#Shuffle reviews and corresponding labels within training and test sets
data_train, labels_train = shuffle(data_train, labels_train)
data_test, labels_test = shuffle(data_test, labels_test)
    # Return the unified training data, test data, training labels, test labels
return data_train, data_test, labels_train, labels_test
train_X, test_X, train_y, test_y = prepare_imdb_data(data, labels)
print("IMDb reviews (combined): train = {}, test = {}".format(len(train_X), len(test_X)))
```
Now that we have our training and testing sets unified and prepared, we should do a quick check and see an example of the data our model will be trained on. This is generally a good idea as it allows you to see how each of the further processing steps affects the reviews and it also ensures that the data has been loaded correctly.
```
print(train_X[100])
print(train_y[100])
```
The first step in processing the reviews is to remove any html tags that appear. In addition, we wish to stem our input so that words such as *entertained* and *entertaining* are considered the same with regard to sentiment analysis.
```
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import *
import re
from bs4 import BeautifulSoup
def review_to_words(review):
    nltk.download("stopwords", quiet=True)
    stemmer = PorterStemmer()
    
    text = BeautifulSoup(review, "html.parser").get_text() # Remove HTML tags
    text = re.sub(r"[^a-zA-Z0-9]", " ", text.lower()) # Convert to lower case, strip punctuation
    words = text.split() # Split string into words
    words = [w for w in words if w not in stopwords.words("english")] # Remove stopwords
    words = [stemmer.stem(w) for w in words] # Stem each word, reusing the single stemmer instance
    
    return words
```
The `review_to_words` method defined above uses `BeautifulSoup` to remove any html tags that appear and uses the `nltk` package to tokenize the reviews. As a check to ensure we know how everything is working, try applying `review_to_words` to one of the reviews in the training set.
```
# TODO: Apply review_to_words to a review (train_X[100] or any other review)
#print(train_X[100])
review = review_to_words(train_X[100])
print(review)
```
**Question:** Above we mentioned that `review_to_words` method removes html formatting and allows us to tokenize the words found in a review, for example, converting *entertained* and *entertaining* into *entertain* so that they are treated as though they are the same word. What else, if anything, does this method do to the input?
**Answer:** In addition to removing html tags and stemming, `review_to_words` converts the text to lower case, replaces all non-alphanumeric characters with spaces, splits the text into individual words, and removes English stopwords.
The method below applies the `review_to_words` method to each of the reviews in the training and testing datasets. In addition it caches the results. This is because performing this processing step can take a long time. This way if you are unable to complete the notebook in the current session, you can come back without needing to process the data a second time.
```
import pickle
cache_dir = os.path.join("../cache", "sentiment_analysis") # where to store cache files
os.makedirs(cache_dir, exist_ok=True) # ensure cache directory exists
def preprocess_data(data_train, data_test, labels_train, labels_test,
                    cache_dir=cache_dir, cache_file="preprocessed_data.pkl"):
    """Convert each review to words; read from cache if available."""
    
    # If cache_file is not None, try to read from it first
    cache_data = None
    if cache_file is not None:
        try:
            with open(os.path.join(cache_dir, cache_file), "rb") as f:
                cache_data = pickle.load(f)
            print("Read preprocessed data from cache file:", cache_file)
        except Exception:
            pass  # unable to read from cache, but that's okay
    
    # If cache is missing, then do the heavy lifting
    if cache_data is None:
        # Preprocess training and test data to obtain words for each review
        words_train = [review_to_words(review) for review in data_train]
        words_test = [review_to_words(review) for review in data_test]
        
        # Write to cache file for future runs
        if cache_file is not None:
            cache_data = dict(words_train=words_train, words_test=words_test,
                              labels_train=labels_train, labels_test=labels_test)
            with open(os.path.join(cache_dir, cache_file), "wb") as f:
                pickle.dump(cache_data, f)
            print("Wrote preprocessed data to cache file:", cache_file)
    else:
        # Unpack data loaded from cache file
        words_train, words_test, labels_train, labels_test = (cache_data['words_train'],
                cache_data['words_test'], cache_data['labels_train'], cache_data['labels_test'])
    
    return words_train, words_test, labels_train, labels_test
# Preprocess data
train_X, test_X, train_y, test_y = preprocess_data(train_X, test_X, train_y, test_y)
```
## Transform the data
In the XGBoost notebook we transformed the data from its word representation to a bag-of-words feature representation. For the model we are going to construct in this notebook we will build a very similar feature representation. To start, we will represent each word as an integer. Of course, some of the words that appear in the reviews occur very infrequently and so likely don't carry much information for the purposes of sentiment analysis. To deal with this, we will fix the size of our working vocabulary and include only the most frequently appearing words. All of the remaining, infrequent words will be combined into a single category which, in our case, we will label as `1`.
Since we will be using a recurrent neural network, it will be convenient if the length of each review is the same. To do this, we will fix a size for our reviews and then pad short reviews with the category 'no word' (which we will label `0`) and truncate long reviews.
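Before implementing the real functions below, it may help to see this scheme on a toy scale. The three-word vocabulary and the `toy_encode` helper here are made up purely for illustration and are not the real `word_dict` or `convert_and_pad`:

```python
# Toy example: encode a sentence with a hypothetical 5-entry vocabulary.
# 0 = 'no word' (padding), 1 = 'infrequent word', 2+ = frequent words.
toy_dict = {'movi': 2, 'great': 3, 'act': 4}

def toy_encode(words, pad=6):
    encoded = [toy_dict.get(w, 1) for w in words]  # unknown words map to 1
    encoded = encoded[:pad]                        # truncate long sentences
    encoded += [0] * (pad - len(encoded))          # pad short sentences with 0
    return encoded

print(toy_encode(['great', 'movi', 'terribl', 'act']))
# [3, 2, 1, 4, 0, 0]
```

Note that 'terribl' is not in the toy vocabulary and so collapses to the 'infrequent' label `1`, while the two trailing `0`s pad the sentence out to the fixed length.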
### (TODO) Create a word dictionary
To begin with, we need to construct a way to map words that appear in the reviews to integers. Here we fix the size of our vocabulary (including the 'no word' and 'infrequent' categories) to be `5000` but you may wish to change this to see how it affects the model.
> **TODO:** Complete the implementation for the `build_dict()` method below. Note that even though the vocab_size is set to `5000`, we only want to construct a mapping for the most frequently appearing `4998` words. This is because we want to reserve the special labels `0` for 'no word' and `1` for 'infrequent word'.
```
import numpy as np
def build_dict(data, vocab_size = 5000):
    """Construct and return a dictionary mapping each of the most frequently appearing words to a unique integer."""
    
    word_count = {} # A dict storing the words that appear in the reviews along with how often they occur
    # Determine how often each word appears in `data`. Note that `data` is a list of sentences
    # and that a sentence is a list of words.
    for words in data:
        for word in words:
            if word in word_count:
                word_count[word] += 1
            else:
                word_count[word] = 1
    
    # Sort the words found in `data` so that sorted_words[0] is the most frequently appearing word
    # and sorted_words[-1] is the least frequently appearing word.
    sorted_words = [item[0] for item in sorted(word_count.items(), key=lambda x: x[1], reverse=True)]
    
    word_dict = {} # This is what we are building, a dictionary that translates words into integers
    for idx, word in enumerate(sorted_words[:vocab_size - 2]): # The -2 is so that we save room for the
        word_dict[word] = idx + 2                              # 'no word' and 'infrequent' labels
        
    return word_dict
word_dict = build_dict(train_X)
```
**Question:** What are the five most frequently appearing (tokenized) words in the training set? Does it make sense that these words appear frequently in the training set?
**Answer:** The five most frequently appearing words are 1) movi 2) film 3) one 4) like 5) time.
Yes, this makes sense: the dataset consists of movie reviews, and since `word_dict` stores words in descending order of frequency, stems such as *movi* and *film* are exactly what we would expect to see at the top.
```
# TODO: Use this space to determine the five most frequently appearing words in the training set.
#for word in list(word_dict)[0:5]:
#print(word)
list(word_dict.keys())[0:5]
```
### Save `word_dict`
Later on when we construct an endpoint which processes a submitted review we will need to make use of the `word_dict` which we have created. As such, we will save it to a file now for future use.
```
data_dir = '../data/pytorch' # The folder we will use for storing data
if not os.path.exists(data_dir): # Make sure that the folder exists
    os.makedirs(data_dir)

with open(os.path.join(data_dir, 'word_dict.pkl'), "wb") as f:
    pickle.dump(word_dict, f)
```
### Transform the reviews
Now that we have our word dictionary which allows us to transform the words appearing in the reviews into integers, it is time to make use of it and convert our reviews to their integer sequence representation, making sure to pad or truncate to a fixed length, which in our case is `500`.
```
def convert_and_pad(word_dict, sentence, pad=500):
    NOWORD = 0 # We will use 0 to represent the 'no word' category
    INFREQ = 1 # and we use 1 to represent the infrequent words, i.e., words not appearing in word_dict
    
    working_sentence = [NOWORD] * pad
    
    for word_index, word in enumerate(sentence[:pad]):
        if word in word_dict:
            working_sentence[word_index] = word_dict[word]
        else:
            working_sentence[word_index] = INFREQ
            
    return working_sentence, min(len(sentence), pad)

def convert_and_pad_data(word_dict, data, pad=500):
    result = []
    lengths = []
    
    for sentence in data:
        converted, leng = convert_and_pad(word_dict, sentence, pad)
        result.append(converted)
        lengths.append(leng)
        
    return np.array(result), np.array(lengths)
train_X, train_X_len = convert_and_pad_data(word_dict, train_X)
test_X, test_X_len = convert_and_pad_data(word_dict, test_X)
```
As a quick check to make sure that things are working as intended, check to see what one of the reviews in the training set looks like after having been processed. Does this look reasonable? What is the length of a review in the training set?
```
# Use this cell to examine one of the processed reviews to make sure everything is working as intended.
train_X[100]
```
**Question:** In the cells above we use the `preprocess_data` and `convert_and_pad_data` methods to process both the training and testing set. Why or why not might this be a problem?
**Answer:**
`preprocess_data` converts each review to a list of words, and `convert_and_pad_data` passes a fixed number of words to the neural network. Applying them to the testing set is not a problem in itself, but note that `word_dict` was built from the training data only, so any test-set word that is not among the most frequent training words gets collapsed into the 'infrequent' category. That is the right choice, however, since building the vocabulary from the test set as well would leak information about the test data into the model.
## Step 3: Upload the data to S3
As in the XGBoost notebook, we will need to upload the training dataset to S3 in order for our training code to access it. For now we will save it locally and we will upload to S3 later on.
### Save the processed training dataset locally
It is important to note the format of the data that we are saving as we will need to know it when we write the training code. In our case, each row of the dataset has the form `label`, `length`, `review[500]` where `review[500]` is a sequence of `500` integers representing the words in the review.
```
import pandas as pd
pd.concat([pd.DataFrame(train_y), pd.DataFrame(train_X_len), pd.DataFrame(train_X)], axis=1) \
    .to_csv(os.path.join(data_dir, 'train.csv'), header=False, index=False)
```
### Uploading the training data
Next, we need to upload the training data to the SageMaker default S3 bucket so that we can provide access to it while training our model.
```
import sagemaker
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()
prefix = 'sagemaker/sentiment_rnn'
role = sagemaker.get_execution_role()
input_data = sagemaker_session.upload_data(path=data_dir, bucket=bucket, key_prefix=prefix)
```
**NOTE:** The cell above uploads the entire contents of our data directory. This includes the `word_dict.pkl` file. This is fortunate as we will need this later on when we create an endpoint that accepts an arbitrary review. For now, we will just take note of the fact that it resides in the data directory (and so also in the S3 training bucket) and that we will need to make sure it gets saved in the model directory.
## Step 4: Build and Train the PyTorch Model
In the XGBoost notebook we discussed what a model is in the SageMaker framework. In particular, a model comprises three objects
- Model Artifacts,
- Training Code, and
- Inference Code,
each of which interact with one another. In the XGBoost example we used training and inference code that was provided by Amazon. Here we will still be using containers provided by Amazon with the added benefit of being able to include our own custom code.
We will start by implementing our own neural network in PyTorch along with a training script. For the purposes of this project we have provided the necessary model object in the `model.py` file, inside of the `train` folder. You can see the provided implementation by running the cell below.
```
!pygmentize train/model.py
```
The important takeaway from the implementation provided is that there are three parameters that we may wish to tweak to improve the performance of our model. These are the embedding dimension, the hidden dimension and the size of the vocabulary. We will likely want to make these parameters configurable in the training script so that if we wish to modify them we do not need to modify the script itself. We will see how to do this later on. To start we will write some of the training code in the notebook so that we can more easily diagnose any issues that arise.
First we will load a small portion of the training data set to use as a sample. It would be very time consuming to try and train the model completely in the notebook as we do not have access to a gpu and the compute instance that we are using is not particularly powerful. However, we can work on a small bit of the data to get a feel for how our training script is behaving.
```
import torch
import torch.utils.data
# Read in only the first 250 rows
train_sample = pd.read_csv(os.path.join(data_dir, 'train.csv'), header=None, names=None, nrows=250)
# Turn the input pandas dataframe into tensors
train_sample_y = torch.from_numpy(train_sample[[0]].values).float().squeeze()
train_sample_X = torch.from_numpy(train_sample.drop([0], axis=1).values).long()
# Build the dataset
train_sample_ds = torch.utils.data.TensorDataset(train_sample_X, train_sample_y)
# Build the dataloader
train_sample_dl = torch.utils.data.DataLoader(train_sample_ds, batch_size=50)
```
### (TODO) Writing the training method
Next we need to write the training code itself. This should be very similar to training methods that you have written before to train PyTorch models. We will leave any difficult aspects such as model saving / loading and parameter loading until a little later.
```
def train(model, train_loader, epochs, optimizer, loss_fn, device):
    for epoch in range(1, epochs + 1):
        model.train()
        total_loss = 0
        for batch in train_loader:
            batch_X, batch_y = batch
            
            batch_X = batch_X.to(device)
            batch_y = batch_y.to(device)
            
            optimizer.zero_grad()
            output = model(batch_X)          # forward pass (calling the model invokes forward)
            loss = loss_fn(output, batch_y)  # compare predictions against the labels
            loss.backward()
            optimizer.step()
            
            total_loss += loss.data.item()
        print("Epoch: {}, BCELoss: {}".format(epoch, total_loss / len(train_loader)))
```
Supposing we have the training method above, we will test that it is working by writing a bit of code in the notebook that executes our training method on the small sample training set that we loaded earlier. The reason for doing this in the notebook is so that we have an opportunity to fix any errors that arise early when they are easier to diagnose.
```
import torch.optim as optim
from train.model import LSTMClassifier
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = LSTMClassifier(32, 100, 5000).to(device)
optimizer = optim.Adam(model.parameters())
loss_fn = torch.nn.BCELoss()
train(model, train_sample_dl, 5, optimizer, loss_fn, device)
```
In order to construct a PyTorch model using SageMaker we must provide SageMaker with a training script. We may optionally include a directory which will be copied to the container and from which our training code will be run. When the training container is executed it will check the uploaded directory (if there is one) for a `requirements.txt` file and install any required Python libraries, after which the training script will be run.
### (TODO) Training the model
When a PyTorch model is constructed in SageMaker, an entry point must be specified. This is the Python file which will be executed when the model is trained. Inside of the `train` directory is a file called `train.py` which has been provided and which contains most of the necessary code to train our model. The only thing that is missing is the implementation of the `train()` method which you wrote earlier in this notebook.
**TODO**: Copy the `train()` method written above and paste it into the `train/train.py` file where required.
The way that SageMaker passes hyperparameters to the training script is by way of arguments. These arguments can then be parsed and used in the training script. To see how this is done take a look at the provided `train/train.py` file.
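As a sketch of what that parsing looks like (the argument names below mirror the hyperparameters we pass to the estimator, but the real parsing logic, including the SageMaker model and data directory environment variables, lives in the provided `train/train.py`):

```python
import argparse

# Minimal sketch of hyperparameter parsing in a SageMaker training script.
# Defaults here mirror the values used in this notebook; the provided
# train/train.py is the authoritative version.
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', type=int, default=10,
                    help='number of epochs to train (default: 10)')
parser.add_argument('--embedding_dim', type=int, default=32,
                    help='size of the word embeddings (default: 32)')
parser.add_argument('--hidden_dim', type=int, default=100,
                    help='size of the hidden dimension (default: 100)')
parser.add_argument('--vocab_size', type=int, default=5000,
                    help='size of the vocabulary (default: 5000)')

# SageMaker supplies these flags on the container's command line;
# here we parse a sample list to show the effect.
args = parser.parse_args(['--epochs', '10', '--hidden_dim', '200'])
print(args.epochs, args.hidden_dim, args.vocab_size)
# 10 200 5000
```

Any hyperparameter passed to the estimator below that is not listed in the parser would cause the script to fail, which is why the training script defines an argument for each tunable value.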
```
from sagemaker.pytorch import PyTorch
estimator = PyTorch(entry_point="train.py",
                    source_dir="train",
                    role=role,
                    framework_version='0.4.0',
                    train_instance_count=1,
                    train_instance_type='ml.p2.xlarge',
                    hyperparameters={
                        'epochs': 10,
                        'hidden_dim': 200,
                    })
estimator.fit({'training': input_data})
```
## Step 5: Testing the model
As mentioned at the top of this notebook, we will be testing this model by first deploying it and then sending the testing data to the deployed endpoint. We will do this so that we can make sure that the deployed model is working correctly.
## Step 6: Deploy the model for testing
Now that we have trained our model, we would like to test it to see how it performs. Currently our model takes input of the form `review_length, review[500]` where `review[500]` is a sequence of `500` integers which describe the words present in the review, encoded using `word_dict`. Fortunately for us, SageMaker provides built-in inference code for models with simple inputs such as this.
There is one thing that we need to provide, however, and that is a function which loads the saved model. This function must be called `model_fn()` and takes as its only parameter a path to the directory where the model artifacts are stored. This function must also be present in the python file which we specified as the entry point. In our case the model loading function has been provided and so no changes need to be made.
**NOTE**: When the built-in inference code is run it must import the `model_fn()` method from the `train.py` file. This is why the training code is wrapped in a main guard ( ie, `if __name__ == '__main__':` )
Since we don't need to change anything in the code that was uploaded during training, we can simply deploy the current model as-is.
**NOTE:** When deploying a model you are asking SageMaker to launch a compute instance that will wait for data to be sent to it. As a result, this compute instance will continue to run until *you* shut it down. This is important to know since the cost of a deployed endpoint depends on how long it has been running for.
In other words **If you are no longer using a deployed endpoint, shut it down!**
**TODO:** Deploy the trained model.
```
# TODO: Deploy the trained model to a SageMaker Endpoint
predictor = estimator.deploy(instance_type='ml.p2.xlarge',
                             initial_instance_count=1)
```
## Step 7 - Use the model for testing
Once deployed, we can read in the test data and send it off to our deployed model to get some results. Once we collect all of the results we can determine how accurate our model is.
```
test_X = pd.concat([pd.DataFrame(test_X_len), pd.DataFrame(test_X)], axis=1)
# We split the data into chunks and send each chunk separately, accumulating the results.
def predict(data, rows=512):
    split_array = np.array_split(data, int(data.shape[0] / float(rows) + 1))
    predictions = np.array([])
    for array in split_array:
        predictions = np.append(predictions, predictor.predict(array))
    return predictions
predictions = predict(test_X.values)
predictions = [round(num) for num in predictions]
from sklearn.metrics import accuracy_score
accuracy_score(test_y, predictions)
```
**Question:** How does this model compare to the XGBoost model you created earlier? Why might these two models perform differently on this dataset? Which do *you* think is better for sentiment analysis?
**Answer:** XGBoost runs efficiently on a CPU, while the RNN is suited to large-scale training and benefits from a GPU. XGBoost is one of Amazon's built-in algorithms, whereas here we supplied our own model and training code. In this instance the two performed almost identically: XGBoost scored 0.8497 and the RNN 0.84612.
### (TODO) More testing
We now have a trained model which has been deployed and which we can send processed reviews to and which returns the predicted sentiment. However, ultimately we would like to be able to send our model an unprocessed review. That is, we would like to send the review itself as a string. For example, suppose we wish to send the following review to our model.
```
test_review = 'The simplest pleasures in life are the best, and this film is one of them. Combining a rather basic storyline of love and adventure this movie transcends the usual weekend fair with wit and unmitigated charm.'
```
The question we now need to answer is, how do we send this review to our model?
Recall in the first section of this notebook we did a bunch of data processing to the IMDb dataset. In particular, we did two specific things to the provided reviews.
- Removed any html tags and stemmed the input
- Encoded the review as a sequence of integers using `word_dict`
In order to process the review, we will need to repeat these two steps.
**TODO**: Using the `review_to_words` and `convert_and_pad` methods from section one, convert `test_review` into a numpy array `test_data` suitable to send to our model. Remember that our model expects input of the form `review_length, review[500]`.
```
# TODO: Convert test_review into a form usable by the model and save the results in test_data
test_words = review_to_words(test_review)
review_ids, review_len = convert_and_pad(word_dict, test_words)

# The model expects each row to be `review_length` followed by the 500 encoded words
test_data = np.array([[review_len] + review_ids])
print(test_data)
```
Now that we have processed the review, we can send the resulting array to our model to predict the sentiment of the review.
```
predictor.predict(test_data)
```
Since the return value of our model is close to `1`, we can be reasonably confident that the review we submitted is positive.
### Delete the endpoint
Of course, just like in the XGBoost notebook, once we've deployed an endpoint it continues to run until we tell it to shut down. Since we are done using our endpoint for now, we can delete it.
```
estimator.delete_endpoint()
```
## Step 6 (again) - Deploy the model for the web app
Now that we know that our model is working, it's time to create some custom inference code so that we can send the model a review which has not been processed and have it determine the sentiment of the review.
As we saw above, by default the estimator which we created, when deployed, will use the entry script and directory which we provided when creating the model. However, since we now wish to accept a string as input and our model expects a processed review, we need to write some custom inference code.
We will store the code that we write in the `serve` directory. Provided in this directory is the `model.py` file that we used to construct our model, a `utils.py` file which contains the `review_to_words` and `convert_and_pad` pre-processing functions which we used during the initial data processing, and `predict.py`, the file which will contain our custom inference code. Note also that `requirements.txt` is present which will tell SageMaker what Python libraries are required by our custom inference code.
When deploying a PyTorch model in SageMaker, you are expected to provide four functions which the SageMaker inference container will use.
- `model_fn`: This function is the same function that we used in the training script and it tells SageMaker how to load our model.
- `input_fn`: This function receives the raw serialized input that has been sent to the model's endpoint and its job is to de-serialize and make the input available for the inference code.
- `output_fn`: This function takes the output of the inference code and its job is to serialize this output and return it to the caller of the model's endpoint.
- `predict_fn`: The heart of the inference script, this is where the actual prediction is done and is the function which you will need to complete.
For the simple website that we are constructing during this project, the `input_fn` and `output_fn` methods are relatively straightforward. We only require being able to accept a string as input and we expect to return a single value as output. You might imagine though that in a more complex application the input or output may be image data or some other binary data which would require some effort to serialize.
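For the plain-text case, `input_fn` and `output_fn` can be sketched as below. This is only an illustration of the contract; the authoritative versions are the ones provided in `serve/predict.py`:

```python
def input_fn(serialized_input_data, content_type):
    """De-serialize the raw request body into a Python string."""
    if content_type != 'text/plain':
        raise ValueError('Unsupported content type: ' + content_type)
    return serialized_input_data.decode('utf-8')

def output_fn(prediction_output, accept):
    """Serialize the prediction (a single number) back to the caller."""
    return str(prediction_output)

# Round-trip example: a raw utf-8 body goes in, a plain string comes back out.
review = input_fn(b'What a wonderful film!', 'text/plain')
print(review)                         # What a wonderful film!
print(output_fn(1.0, 'text/plain'))   # 1.0
```

Between these two calls, `predict_fn` is responsible for converting the string into the integer representation the model expects and running the forward pass, which is the part you will complete below.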
### (TODO) Writing inference code
Before writing our custom inference code, we will begin by taking a look at the code which has been provided.
```
!pygmentize serve/predict.py
```
As mentioned earlier, the `model_fn` method is the same as the one provided in the training code and the `input_fn` and `output_fn` methods are very simple and your task will be to complete the `predict_fn` method. Make sure that you save the completed file as `predict.py` in the `serve` directory.
**TODO**: Complete the `predict_fn()` method in the `serve/predict.py` file.
### Deploying the model
Now that the custom inference code has been written, we will create and deploy our model. To begin with, we need to construct a new PyTorchModel object which points to the model artifacts created during training and also points to the inference code that we wish to use. Then we can call the deploy method to launch the deployment container.
**NOTE**: The default behaviour for a deployed PyTorch model is to assume that any input passed to the predictor is a `numpy` array. In our case we want to send a string so we need to construct a simple wrapper around the `RealTimePredictor` class to accommodate simple strings. In a more complicated situation you may want to provide a serialization object, for example if you wanted to send image data.
```
from sagemaker.predictor import RealTimePredictor
from sagemaker.pytorch import PyTorchModel
class StringPredictor(RealTimePredictor):
    def __init__(self, endpoint_name, sagemaker_session):
        super(StringPredictor, self).__init__(endpoint_name, sagemaker_session, content_type='text/plain')

model = PyTorchModel(model_data=estimator.model_data,
                     role=role,
                     framework_version='0.4.0',
                     entry_point='predict.py',
                     source_dir='serve',
                     predictor_cls=StringPredictor)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.p2.xlarge')
```
### Testing the model
Now that we have deployed our model with the custom inference code, we should test to see if everything is working. Here we test our model by loading the first `250` positive and negative reviews, sending them to the endpoint, and then collecting the results. The reason for only sending some of the data is that the amount of time it takes for our model to process the input and then perform inference is quite long, so testing the entire data set would be prohibitive.
```
import glob
def test_reviews(data_dir='../data/aclImdb', stop=250):
    results = []
    ground = []
    
    # We make sure to test both positive and negative reviews
    for sentiment in ['pos', 'neg']:
        path = os.path.join(data_dir, 'test', sentiment, '*.txt')
        files = glob.glob(path)
        
        files_read = 0
        
        print('Starting ', sentiment, ' files')
        
        # Iterate through the files and send them to the predictor
        for f in files:
            with open(f) as review:
                # First, we store the ground truth (was the review positive or negative)
                if sentiment == 'pos':
                    ground.append(1)
                else:
                    ground.append(0)
                # Read in the review and convert to 'utf-8' for transmission via HTTP
                review_input = review.read().encode('utf-8')
                # Send the review to the predictor and store the result; the endpoint
                # returns a byte string such as b'1.0', not an integer
                result = predictor.predict(review_input)
                results.append(1 if result == b'1.0' else 0)
            
            # Sending reviews to our endpoint one at a time takes a while so we
            # only send a small number of reviews
            files_read += 1
            if files_read == stop:
                break
            
    return ground, results
ground, results = test_reviews()
from sklearn.metrics import accuracy_score
accuracy_score(ground, results)
```
As an additional test, we can try sending the `test_review` that we looked at earlier.
```
predictor.predict(test_review)
```
Now that we know our endpoint is working as expected, we can set up the web page that will interact with it. If you don't have time to finish the project now, make sure to skip down to the end of this notebook and shut down your endpoint. You can deploy it again when you come back.
## Step 7 (again): Use the model for the web app
> **TODO:** This entire section and the next contain tasks for you to complete, mostly using the AWS console.
So far we have been accessing our model endpoint by constructing a predictor object which uses the endpoint and then just using the predictor object to perform inference. What if we wanted to create a web app which accessed our model? The way things are set up currently makes that not possible since in order to access a SageMaker endpoint the app would first have to authenticate with AWS using an IAM role which included access to SageMaker endpoints. However, there is an easier way! We just need to use some additional AWS services.
<img src="Web App Diagram.svg">
The diagram above gives an overview of how the various services will work together. On the far right is the model which we trained above and which is deployed using SageMaker. On the far left is our web app that collects a user's movie review, sends it off and expects a positive or negative sentiment in return.
In the middle is where some of the magic happens. We will construct a Lambda function, which you can think of as a straightforward Python function that can be executed whenever a specified event occurs. We will give this function permission to send and receive data from a SageMaker endpoint.
Lastly, the method we will use to execute the Lambda function is a new endpoint that we will create using API Gateway. This endpoint will be a url that listens for data to be sent to it. Once it gets some data it will pass that data on to the Lambda function and then return whatever the Lambda function returns. Essentially it will act as an interface that lets our web app communicate with the Lambda function.
### Setting up a Lambda function
The first thing we are going to do is set up a Lambda function. This Lambda function will be executed whenever our public API has data sent to it. When it is executed it will receive the data, perform any sort of processing that is required, send the data (the review) to the SageMaker endpoint we've created and then return the result.
#### Part A: Create an IAM Role for the Lambda function
Since we want the Lambda function to call a SageMaker endpoint, we need to make sure that it has permission to do so. To do this, we will construct a role that we can later give the Lambda function.
Using the AWS Console, navigate to the **IAM** page and click on **Roles**. Then, click on **Create role**. Make sure that the **AWS service** is the type of trusted entity selected and choose **Lambda** as the service that will use this role, then click **Next: Permissions**.
In the search box type `sagemaker` and select the check box next to the **AmazonSageMakerFullAccess** policy. Then, click on **Next: Review**.
Lastly, give this role a name. Make sure you use a name that you will remember later on, for example `LambdaSageMakerRole`. Then, click on **Create role**.
#### Part B: Create a Lambda function
Now it is time to actually create the Lambda function.
Using the AWS Console, navigate to the AWS Lambda page and click on **Create a function**. When you get to the next page, make sure that **Author from scratch** is selected. Now, name your Lambda function, using a name that you will remember later on, for example `sentiment_analysis_func`. Make sure that the **Python 3.6** runtime is selected and then choose the role that you created in the previous part. Then, click on **Create Function**.
On the next page you will see some information about the Lambda function you've just created. If you scroll down you should see an editor in which you can write the code that will be executed when your Lambda function is triggered. In our example, we will use the code below.
```python
# We need to use the low-level library to interact with SageMaker since the SageMaker API
# is not available natively through Lambda.
import boto3

def lambda_handler(event, context):

    # The SageMaker runtime is what allows us to invoke the endpoint that we've created.
    runtime = boto3.Session().client('sagemaker-runtime')

    # Now we use the SageMaker runtime to invoke our endpoint, sending the review we were given
    response = runtime.invoke_endpoint(EndpointName = '**ENDPOINT NAME HERE**', # The name of the endpoint we created
                                       ContentType = 'text/plain',              # The data format that is expected
                                       Body = event['body'])                    # The actual review

    # The response is an HTTP response whose body contains the result of our inference
    result = response['Body'].read().decode('utf-8')

    return {
        'statusCode' : 200,
        'headers' : { 'Content-Type' : 'text/plain', 'Access-Control-Allow-Origin' : '*' },
        'body' : result
    }
```
Once you have copied and pasted the code above into the Lambda code editor, replace the `**ENDPOINT NAME HERE**` portion with the name of the endpoint that we deployed earlier. You can determine the name of the endpoint using the code cell below.
```
predictor.endpoint
```
Once you have added the endpoint name to the Lambda function, click on **Save**. Your Lambda function is now up and running. Next we need to create a way for our web app to execute the Lambda function.
### Setting up API Gateway
Now that our Lambda function is set up, it is time to create a new API using API Gateway that will trigger the Lambda function we have just created.
Using the AWS Console, navigate to **Amazon API Gateway** and then click on **Get started**.
On the next page, make sure that **New API** is selected and give the new API a name, for example, `sentiment_analysis_api`. Then, click on **Create API**.
Now we have created an API, however it doesn't currently do anything. What we want it to do is to trigger the Lambda function that we created earlier.
Select the **Actions** dropdown menu and click **Create Method**. A new blank method will be created; select its dropdown menu, choose **POST**, then click on the check mark beside it.
For the integration point, make sure that **Lambda Function** is selected and check the **Use Lambda Proxy integration** option. This option ensures that the data sent to the API is passed directly to the Lambda function with no processing. It also means that the return value must be a proper response object, as it will not be processed by API Gateway either.
Type the name of the Lambda function you created earlier into the **Lambda Function** text entry box and then click on **Save**. Click on **OK** in the pop-up box that then appears, giving permission to API Gateway to invoke the Lambda function you created.
The last step in creating the API Gateway is to select the **Actions** dropdown and click on **Deploy API**. You will need to create a new Deployment stage and name it anything you like, for example `prod`.
You have now successfully set up a public API to access your SageMaker model. Make sure to copy or write down the URL provided to invoke your newly created public API as this will be needed in the next step. This URL can be found at the top of the page, highlighted in blue next to the text **Invoke URL**.
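Before wiring up the web app, you can sanity-check the new API from Python. Below is a minimal sketch using only the standard library; the URL is a placeholder and `build_request` is a hypothetical helper of ours, not part of any AWS SDK:

```python
import urllib.request

# Hypothetical invoke URL -- replace with the one you copied from API Gateway.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod"

def build_request(url: str, review: str) -> urllib.request.Request:
    """Build a plain-text POST request like the one the web app will send."""
    return urllib.request.Request(
        url,
        data=review.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        method="POST",
    )

req = build_request(API_URL, "This movie was a delight from start to finish.")

# Actually sending the request requires the endpoint to be deployed and running:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode("utf-8"))
```

Because the Lambda uses proxy integration, the raw request body is exactly what arrives in `event['body']`.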
## Step 4: Deploying our web app
Now that we have a publicly available API, we can start using it in a web app. For our purposes, we have provided a simple static html file which can make use of the public api you created earlier.
In the `website` folder there should be a file called `index.html`. Download the file to your computer and open it in a text editor of your choice. There should be a line which contains **\*\*REPLACE WITH PUBLIC API URL\*\***. Replace this string with the URL that you wrote down in the last step and then save the file.
Now, if you open `index.html` on your local computer, your browser will behave as a local web server and you can use the provided site to interact with your SageMaker model.
If you'd like to go further, you can host this html file anywhere you'd like, for example using github or hosting a static site on Amazon's S3. Once you have done this you can share the link with anyone you'd like and have them play with it too!
> **Important Note** In order for the web app to communicate with the SageMaker endpoint, the endpoint has to actually be deployed and running. This means that you are paying for it. Make sure that the endpoint is running when you want to use the web app but that you shut it down when you don't need it, otherwise you will end up with a surprisingly large AWS bill.
**TODO:** Make sure that you include the edited `index.html` file in your project submission.
Now that your web app is working, try playing around with it and see how well it works.
**Question**: Give an example of a review that you entered into your web app. What was the predicted sentiment of your example review?
**Answer:** My input: plot is good. screenplay is fantastic. I recommend this movie.
Prediction: Your review was POSITIVE!
### Delete the endpoint
Remember to always shut down your endpoint if you are no longer using it. You are charged for the length of time that the endpoint is running so if you forget and leave it on you could end up with an unexpectedly large bill.
```
predictor.delete_endpoint()
```
## Land Cover Classification
In this tutorial, we'll learn how to apply a land cover classification model to imagery hosted in the [Planetary Computer Data Catalog](https://planetarycomputer.microsoft.com/catalog). In the process, we'll see how to:
1. Create a Dask Cluster where each worker has a GPU
2. Use the Planetary Computer's metadata query API to search for data matching certain conditions
3. Stitch many files into a mosaic of images
4. Apply a PyTorch model to chunks of an xarray DataArray in parallel.
If you're running this on the [Planetary Computer Hub](http://planetarycomputer.microsoft.com/compute), make sure to choose the **GPU - PyTorch** profile when presented with the form to choose your environment.
### Land cover background
We'll work with [NAIP](https://planetarycomputer.microsoft.com/dataset/naip) data, a collection of high-resolution aerial imagery covering the continental US. We'll apply a PyTorch model trained for land cover classification to the data. The model takes in an image and classifies each pixel into a category (e.g. "water", "tree canopy", "road", etc.). We're using a neural network trained by data scientists from Microsoft's [AI for Good](https://www.microsoft.com/en-us/ai/ai-for-good) program. We'll use the model to analyze how land cover changed over a portion of Maryland from 2013 to 2017.
### Scaling our computation
This is a somewhat large computation, and we'll handle the scale in two ways:
1. We'll use a cloud-native workflow, reading data directly from Blob Storage into memory on VMs running in Azure, skipping a slow local download step.
2. We'll process images in parallel, using multiple threads to load and preprocess the data before moving it to the GPU for prediction.
```
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
cluster = LocalCUDACluster(threads_per_worker=4)
client = Client(cluster)
print(f"/proxy/{client.scheduler_info()['services']['dashboard']}/status")
```
Make sure to open the Dask Dashboard, either by clicking the *Dashboard* link or by using the dask-labextension to lay out your workspace (See [Scale with Dask](https://planetarycomputer.microsoft.com/docs/quickstarts/scale-with-dask/#Open-the-dashboard).).
Next, we'll load the model. It's available in a public Azure Blob Storage container. We'll download the model locally and construct the [Unet](https://smp.readthedocs.io/en/latest/models.html#unet) using `segmentation_models_pytorch`.
```
import azure.storage.blob
from pathlib import Path
import segmentation_models_pytorch
import torch

p = Path("unet_both_lc.pt")
if not p.exists():
    blob_client = azure.storage.blob.BlobClient(
        account_url="https://gtclandcoverdemo.blob.core.windows.net/",
        container_name="models",
        blob_name="unet_both_lc.pt",
    )
    with p.open("wb") as f:
        f.write(blob_client.download_blob().readall())

model = segmentation_models_pytorch.Unet(
    encoder_name="resnet18",
    encoder_depth=3,
    encoder_weights=None,
    decoder_channels=(128, 64, 64),
    in_channels=4,
    classes=13,
)
model.load_state_dict(torch.load("unet_both_lc.pt", map_location="cuda:0"))
device = torch.device("cuda")
model = model.to(device)
```
### Data discovery
Suppose we've been tasked with analyzing how land use changed from 2013 to 2017 for a region of Maryland. The full NAIP dataset consists of millions of images. How do we find the few hundred files that we care about?
With the Planetary Computer's **metadata query API**, that's straightforward. First, we'll define our area of interest as a GeoJSON object.
```
area_of_interest = {
    "type": "Polygon",
    "coordinates": [
        [
            [-77.9754638671875, 38.58037909468592],
            [-76.37969970703125, 38.58037909468592],
            [-76.37969970703125, 39.812755695478124],
            [-77.9754638671875, 39.812755695478124],
            [-77.9754638671875, 38.58037909468592],
        ]
    ],
}
```
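As a quick sanity check on the geometry, you can compute the polygon's bounding box directly from the GeoJSON dictionary. This is a minimal pure-Python sketch; the `geojson_bbox` helper is ours, not part of any library:

```python
# The same rectangular area of interest over Maryland as above.
area_of_interest = {
    "type": "Polygon",
    "coordinates": [
        [
            [-77.9754638671875, 38.58037909468592],
            [-76.37969970703125, 38.58037909468592],
            [-76.37969970703125, 39.812755695478124],
            [-77.9754638671875, 39.812755695478124],
            [-77.9754638671875, 38.58037909468592],
        ]
    ],
}

def geojson_bbox(geom):
    """Return (min_lon, min_lat, max_lon, max_lat) of a GeoJSON Polygon."""
    ring = geom["coordinates"][0]  # exterior ring
    lons = [pt[0] for pt in ring]
    lats = [pt[1] for pt in ring]
    return min(lons), min(lats), max(lons), max(lats)

bbox = geojson_bbox(area_of_interest)
print(bbox)
```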
Next, we'll use `pystac_client` to query the Planetary Computer's STAC endpoint. We'll filter the results by space (to return only images touching our area of interest) and time (to return a set of images from 2013, and a second set for 2017).
```
import pystac_client
api = pystac_client.Client.open("https://planetarycomputer.microsoft.com/api/stac/v1/")
search_2013 = api.search(
    intersects=area_of_interest,
    datetime="2012-12-31T00:00:00Z/2014-01-01T00:00:00Z",
    limit=500,
    collections=["naip"],
)

search_2017 = api.search(
    intersects=area_of_interest,
    datetime="2016-12-31T00:00:00Z/2018-01-01T00:00:00Z",
    limit=500,
    collections=["naip"],
)
```
We also request the results in batches of size `500`. Each item in those results is a single STAC `Item`, which includes URLs to cloud-optimized GeoTIFF files stored in Azure Blob Storage.
### Aligning images
We have URLs to many files in Blob Storage. We want to treat all those as one big, logical dataset, so we'll use some open-source libraries to stitch them all together.
[stac-vrt](https://stac-vrt.readthedocs.io/) will take a collection of STAC items and efficiently build a [GDAL VRT](https://gdal.org/drivers/raster/vrt.html).
```
import stac_vrt
data_2013 = search_2013.get_all_items_as_dict()["features"]
data_2017 = search_2017.get_all_items_as_dict()["features"]
print("2013:", len(data_2013), "items")
print("2017:", len(data_2017), "items")
naip_2013 = stac_vrt.build_vrt(
    data_2013, block_width=512, block_height=512, data_type="Byte"
)
mosaic_2017 = stac_vrt.build_vrt(
    data_2017, block_width=512, block_height=512, data_type="Byte"
)
```
Once we have a pair of VRTs (one per year), we use [rasterio.warp](https://rasterio.readthedocs.io/en/latest/api/rasterio.warp.html) to make sure they're aligned.
```
import rasterio
a = rasterio.open(naip_2013)
naip_2017 = rasterio.vrt.WarpedVRT(
    rasterio.open(mosaic_2017), transform=a.transform, height=a.height, width=a.width
)
```
[xarray](https://xarray.pydata.org/en/stable/) provides a convenient data structure for working with large, n-dimensional, labeled datasets like this. [rioxarray](https://corteva.github.io/rioxarray/stable/) is an engine for reading datasets like this into xarray.
```
import numpy as np
import pandas as pd
import xarray as xr
import rioxarray
ds1 = rioxarray.open_rasterio(naip_2013, chunks=(4, 8192, 8192), lock=False)
ds2 = rioxarray.open_rasterio(naip_2017, chunks=(4, 8192, 8192), lock=False)
ds = xr.concat([ds1, ds2], dim=pd.Index([2013, 2017], name="time"))
ds
```
### Pre-processing for the neural network
Now we have a big dataset that's been pixel-aligned on a grid for the two time periods.
The model requires a bit of pre-processing up front. We'll define a couple of variables with the per-band mean and standard deviation for each year.
```
bands = xr.DataArray(
    [1, 2, 3, 4], name="band", dims=["band"], coords={"band": [1, 2, 3, 4]}
)

NAIP_2013_MEANS = xr.DataArray(
    np.array([117.00, 130.75, 122.50, 159.30], dtype="float32"),
    name="mean",
    coords=[bands],
)
NAIP_2013_STDS = xr.DataArray(
    np.array([38.16, 36.68, 24.30, 66.22], dtype="float32"),
    name="std",
    coords=[bands],
)
NAIP_2017_MEANS = xr.DataArray(
    np.array([72.84, 86.83, 76.78, 130.82], dtype="float32"),
    name="mean",
    coords=[bands],
)
NAIP_2017_STDS = xr.DataArray(
    np.array([41.78, 34.66, 28.76, 58.95], dtype="float32"),
    name="std",
    coords=[bands],
)

mean = xr.concat([NAIP_2013_MEANS, NAIP_2017_MEANS], dim="time")
std = xr.concat([NAIP_2013_STDS, NAIP_2017_STDS], dim="time")
```
With those constants defined, we can normalize the data by subtracting the mean and dividing by the standard deviation.
We'll also fix an issue the model had with partial chunks by dropping some pixels from the bottom-right corner.
```
# Normalize by per-year mean, std
normalized = (ds - mean) / std

# The Unet model doesn't like partial chunks, so we chop off the
# last 1-31 pixels.
slices = {}
for coord in ["y", "x"]:
    remainder = len(ds.coords[coord]) % 32
    slice_ = slice(-remainder) if remainder else slice(None)
    slices[coord] = slice_

normalized = normalized.isel(**slices)
normalized
```
### Predicting land cover for each pixel
At this point, we're ready to make predictions.
By now, hopefully your workers have come online. We'll scatter the PyTorch model out to them.
```
remote_model = client.scatter(model, broadcast=True)
del model
```
We'll apply the model to the entire dataset, taking care to not over-saturate the GPUs. The GPUs will work on relatively small "chips" which fit comfortably in memory. The prediction, which comes from `model(data)`, will happen on the GPU so that it's nice and fast.
Stepping up a level, we have Dask chunks; each chunk is just a regular NumPy array. We'll break each chunk into a bunch of chips (using `dask.array.core.slices_from_chunks`) and get a prediction for each chip.
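The chipping idea can be sketched without Dask or a GPU: tile a 2-D array with fixed-size slices, "predict" on each tile, and write the results into an output array. In this toy sketch the model step is replaced by an identity copy, and `iter_chips` is a made-up helper standing in for `slices_from_chunks`:

```python
import numpy as np

def iter_chips(shape, chip_size):
    """Yield 2-D slice tuples covering `shape` in chip_size x chip_size tiles."""
    for i in range(0, shape[0], chip_size):
        for j in range(0, shape[1], chip_size):
            yield (slice(i, i + chip_size), slice(j, j + chip_size))

data = np.arange(64 * 64, dtype="float32").reshape(64, 64)
out = np.empty_like(data)

for slc in iter_chips(data.shape, 16):
    chip = data[slc]   # in the real pipeline this chip moves to the GPU
    out[slc] = chip    # and `model(chip)` would replace this identity step
```

Only one chip is in flight at a time, which is what keeps GPU memory usage bounded in the real pipeline.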
```
import torch
import dask.array
def predict_chip(data: torch.Tensor, model) -> torch.Tensor:
    # Input is GPU, output is GPU.
    with torch.no_grad():
        result = model(data).argmax(dim=1).to(torch.uint8)
    return result.to("cpu")

def copy_and_predict_chunked(tile, model, token=None):
    has_time = tile.ndim == 4
    if has_time:
        assert tile.shape[0] == 1
        tile = tile[0]
    slices = dask.array.core.slices_from_chunks(dask.array.empty(tile.shape).chunks)
    out = np.empty(shape=tile.shape[1:], dtype="uint8")
    device = torch.device("cuda")
    for slice_ in slices:
        gpu_chip = torch.as_tensor(tile[slice_][np.newaxis, ...]).to(device)
        out[slice_[1:]] = predict_chip(gpu_chip, model).cpu().numpy()[0]
    if has_time:
        out = out[np.newaxis, ...]
    return out
```
Stepping up yet another level, we'll apply the predictions to the entire xarray DataArray. We'll use `DataArray.map_blocks` to do the prediction in parallel.
```
meta = np.array([[]], dtype="uint8")[:0]
predictions_array = normalized.data.map_blocks(
    copy_and_predict_chunked,
    meta=meta,
    drop_axis=1,
    model=remote_model,
    name="predict",
)

predictions = xr.DataArray(
    predictions_array,
    coords=normalized.drop_vars("band").coords,
    dims=("time", "y", "x"),
)
predictions
```
So there are three levels:
1. xarray DataArray, backed by a Dask Array
2. NumPy arrays, which are subsets of the Dask Array
3. Chips, which are subsets of the NumPy arrays
We can kick off some computation by calling `.compute()` on a small slice of the predictions. This should cause some activity on your Dask Dashboard.
```
predictions[:, :200, :200].compute()
```
Each element of `predictions` is an integer encoding the class the PyTorch model thinks the pixel belongs to (tree canopy, building, water, etc.).
Finally, we can compute the result we're interested in: Which pixels (spots on the earth) changed land cover over the four years, at least according to our model.
```
change = predictions.sel(time=2013) != predictions.sel(time=2017)
change
```
That's a boolean array where `True` means "this location changed". We'll mask out the `predictions` array with `change`. The value `other=0` means "no change", so `changed_predictions` has just the predictions (the integer codes) where there was a change.
```
changed_predictions = predictions.where(change, other=0)
changed_predictions
```
Again, we can kick off some computation.
```
changed_predictions[:, :200, :200].compute()
```
Now let's do some visual spot checking of our model. This does require processing the full-resolution images, so we need to limit things to something that fits in memory now.
```
middle = ds.shape[2] // 2, ds.shape[3] // 2
slice_y = slice(middle[0], middle[0] + 5_000)
slice_x = slice(middle[1], middle[1] + 5_000)
parts = [x.isel(y=slice_y, x=slice_x) for x in [ds, predictions, changed_predictions]]
ds_local, predictions_local, changed_predictions_local = dask.compute(*parts)
import matplotlib.colors
from bokeh.models.tools import BoxZoomTool
import panel
import hvplot.xarray # noqa
cmap = matplotlib.colors.ListedColormap(
    np.array(
        [
            (0, 0, 0),
            (0, 197, 255),
            (0, 168, 132),
            (38, 115, 0),
            (76, 230, 0),
            (163, 255, 115),
            (255, 170, 0),
            (255, 0, 0),
            (156, 156, 156),
            (0, 0, 0),
            (115, 115, 0),
            (230, 230, 0),
            (255, 255, 115),
            (197, 0, 255),
        ]
    )
    / 255
)
def logo(plot, element):
    plot.state.toolbar.logo = None

zoom = BoxZoomTool(match_aspect=True)

style_kwargs = dict(
    width=450,
    height=400,
    xaxis=False,
    yaxis=False,
)
kwargs = dict(
    x="x",
    y="y",
    cmap=cmap,
    rasterize=True,
    aggregator="mode",
    colorbar=False,
    tools=["pan", zoom, "wheel_zoom", "reset"],
    clim=(0, 12),
)
panel.Column(
    panel.Row(
        ds_local.sel(time=2013)
        .hvplot.rgb(
            bands="band", rasterize=True, hover=False, title="NAIP 2013", **style_kwargs
        )
        .opts(default_tools=[], hooks=[logo]),
        changed_predictions_local.sel(time=2013)
        .hvplot.image(title="Classification 2013", **kwargs, **style_kwargs)
        .opts(default_tools=[]),
    ),
    panel.Row(
        ds_local.sel(time=2017)
        .hvplot.rgb(
            bands="band",
            rasterize=True,
            hover=False,
            title="NAIP 2017",
            **style_kwargs,
        )
        .opts(default_tools=[], hooks=[logo]),
        changed_predictions_local.sel(time=2017)
        .hvplot.image(title="Classification 2017", **kwargs, **style_kwargs)
        .opts(default_tools=[]),
    ),
)
```
That visualization uses [Panel](https://panel.holoviz.org/), a Python dashboarding library. In an interactive Jupyter Notebook you can pan and zoom around the large dataset.

### Scale further
This example created a local Dask "cluster" on this single node. You can scale your computation out to a true GPU cluster with Dask Gateway by setting the `gpu=True` option when creating a cluster.
```python
import dask_gateway
N_WORKERS = 2
g = dask_gateway.Gateway()
options = g.cluster_options()
options["gpu"] = True
options["worker_memory"] = 25
options["worker_cores"] = 3
options["environment"] = {
    "DASK_DISTRIBUTED__WORKERS__RESOURCES__GPU": "1",
}
cluster = g.new_cluster(options)
client = cluster.get_client()
cluster.scale(N_WORKERS)
```
## <center>Missing data in supervised ML</center>
### <center>Andras Zsom</center>
<center>Lead Data Scientist and Adjunct Lecturer in Data Science</center>
<center>Center for Computation and Visualization</center>
<center>Brown University</center>
https://github.com/brown-ccv/ODSC-East-2021
## About me
- Born and raised in Hungary
- Astrophysics PhD at MPIA, Heidelberg, Germany
- Postdoctoral researcher at MIT (still in astrophysics at the time)
- Started at Brown in December 2015 as a Data Scientist
- Lead Data Scientist since 2017
- Adjunct Lecturer in Data Science since 2019
- Teaching the course *DATA1030: Hands-on data science*, a mandatory course in the Data Science master's program at Brown
## Data Science at Brown
- Center for Computation and Visualization (CCV) - https://ccv.brown.edu/
- Institutional Data group
- Data-driven decision support and predictive modeling for Brown’s administrative units
- Academic research on data-intensive projects
- Data science consulting for industry partners
- feel free to reach out to me: andras_zsom@brown.edu
## Data Science at AI+Training
- Supervised Machine Learning Course Series - https://app.aiplus.training/courses/supervised-machine-learning-series
- 6 courses that walk through the main steps of developing an ML pipeline
- github repo available here: https://github.com/azsom/Supervised-Learning
- Week 7 of the ODSC ML Certification - https://aiplus.training/certificates/
- this talk is course 5
## Learning Objectives
By the end of this workshop, you will be able to
- Describe the three main types of missingness patterns
- Evaluate simple approaches for handling missing values
- Apply XGBoost to a dataset with missing values
- Apply multivariate imputation
- Apply the reduced-features model (also called the pattern submodel approach)
- Decide which approach is best for your dataset
## Before we start, a few words on our dataset: kaggle house price
- good for educational purposes
- messy data that requires quite a bit of preprocessing
- a nice mixture of continuous, ordinal, and categorical features, each feature type has missing values
- lots of excellent kernels on kaggle
- check them out [here](https://www.kaggle.com/c/house-prices-advanced-regression-techniques)
- dataset and description available in repo
- let's take a look!
https://github.com/brown-ccv/ODSC-East-2021
## Missing values often occur in datasets
- survey data: not everyone answers all the questions
- medical data: not all tests/treatments/etc are performed on all patients
- sensor can be offline or malfunctioning
## Missing values are an issue for multiple reasons
#### Conceptual reason
- missing values can introduce biases
- bias: the samples (the data points) are not representative of the underlying distribution/population
- any conclusion drawn from a biased dataset is also biased.
- rich people tend not to fill out survey questions about their salaries, so the mean salary estimated from survey data tends to be lower than the true value
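The salary example is easy to simulate. Below is a toy sketch with made-up numbers, in which the probability of a missing answer grows with the salary itself (MNAR):

```python
import numpy as np

rng = np.random.default_rng(0)
salaries = rng.lognormal(mean=11, sigma=0.5, size=100_000)

# MNAR: the probability of skipping the question grows with the salary itself
p_missing = np.clip(
    (salaries - np.median(salaries)) / np.percentile(salaries, 99), 0, 1
)
observed = salaries[rng.random(salaries.size) > p_missing]

print("true mean:    ", round(salaries.mean()))
print("observed mean:", round(observed.mean()))  # systematically biased low
```

Because high earners drop out disproportionately, the mean of the observed answers sits below the true population mean.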
#### Practical reason
- missing values (NaN, NA, inf) are incompatible with sklearn
- all values in an array need to be numerical otherwise sklearn will throw a *ValueError*
- there are a few supervised ML techniques that work with missing values (e.g., XGBoost, CatBoost)
- we will cover those later today
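XGBoost is demonstrated later; as a quick preview of the idea, scikit-learn's `HistGradientBoostingRegressor` also handles missing values natively, so it can be fit on a feature matrix containing NaNs without any imputation. A toy sketch on synthetic data:

```python
import numpy as np
try:
    from sklearn.ensemble import HistGradientBoostingRegressor
except ImportError:  # older scikit-learn keeps it behind an experimental flag
    from sklearn.experimental import enable_hist_gradient_boosting  # noqa: F401
    from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=500)

# punch MCAR holes into the feature matrix -- no imputation needed
X[rng.random(X.shape) < 0.2] = np.nan

model = HistGradientBoostingRegressor(random_state=0).fit(X, y)
preds = model.predict(X)
```

Internally, such tree ensembles learn which branch a sample with a missing value should take, which is the same trick XGBoost uses.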
## <font color='LIGHTGRAY'>Learning Objectives</font>
<font color='LIGHTGRAY'>By the end of this workshop, you will be able to</font>
- **Describe the three main types of missingness patterns**
- <font color='LIGHTGRAY'>Evaluate simple approaches for handling missing values</font>
- <font color='LIGHTGRAY'>Apply XGBoost to a dataset with missing values</font>
- <font color='LIGHTGRAY'>Apply multivariate imputation</font>
- <font color='LIGHTGRAY'>Apply the reduced-features model (also called the pattern submodel approach)</font>
- <font color='LIGHTGRAY'>Decide which approach is best for your dataset</font>
# Missing data patterns
- **MCAR** - Missing Complete At Random
- some people skip some survey questions by accident
- **MAR** - Missing At Random
- males are less likely to fill out a survey on depression
- this has nothing to do with their level of depression after accounting for maleness
- **MNAR** - Missing Not At Random
- depressed people are less likely to fill out a survey on depression due to their level of depression
## MCAR test
- MCAR can be diagnosed with a statistical test ([Little, 1988](https://www.tandfonline.com/doi/abs/10.1080/01621459.1988.10478722))
- python implementation available in the [pymice](https://github.com/RianneSchouten/pymice) package or in the skipped slide
- Caveat: it can differentiate between MCAR and MAR only, it misses MNAR
```
# from the pymice package
# https://github.com/RianneSchouten/pymice
import numpy as np
import pandas as pd
import math as ma
import scipy.stats as st

def checks_input_mcar_tests(data):
    """ Checks whether the input parameter of class McarTests is correct

    Parameters
    ----------
    data:
        The input of McarTests specified as 'data'

    Returns
    -------
    bool
        True if input is correct
    """

    if not isinstance(data, pd.DataFrame):
        print("Error: Data should be a Pandas DataFrame")
        return False

    if not any(data.dtypes.values == float):
        if not any(data.dtypes.values == int):
            print("Error: Dataset cannot contain other value types than floats and/or integers")
            return False

    if not data.isnull().values.any():
        print("Error: No NaN's in given data")
        return False

    return True

def mcar_test(data):
    """ Implementation of Little's MCAR test

    Parameters
    ----------
    data: Pandas DataFrame
        An incomplete dataset with samples as index and variables as columns

    Returns
    -------
    p_value: Float
        This value is the outcome of a chi-square statistical test, testing whether the null hypothesis
        'the missingness mechanism of the incomplete dataset is MCAR' can be rejected.
    """

    if not checks_input_mcar_tests(data):
        raise Exception("Input not correct")

    dataset = data.copy()
    vars = dataset.dtypes.index.values
    n_var = dataset.shape[1]

    # mean and covariance estimates
    # ideally, this is done with a maximum likelihood estimator
    gmean = dataset.mean()
    gcov = dataset.cov()

    # set up missing data patterns
    r = 1 * dataset.isnull()
    mdp = np.dot(r, list(map(lambda x: ma.pow(2, x), range(n_var))))
    sorted_mdp = sorted(np.unique(mdp))
    n_pat = len(sorted_mdp)
    correct_mdp = list(map(lambda x: sorted_mdp.index(x), mdp))
    dataset['mdp'] = pd.Series(correct_mdp, index=dataset.index)

    # calculate statistic and df
    pj = 0
    d2 = 0
    for i in range(n_pat):
        dataset_temp = dataset.loc[dataset['mdp'] == i, vars]
        select_vars = ~dataset_temp.isnull().any()
        pj += np.sum(select_vars)
        select_vars = vars[select_vars]
        means = dataset_temp[select_vars].mean() - gmean[select_vars]
        select_cov = gcov.loc[select_vars, select_vars]
        mj = len(dataset_temp)
        parta = np.dot(means.T, np.linalg.solve(select_cov, np.identity(select_cov.shape[1])))
        d2 += mj * (np.dot(parta, means))

    df = pj - n_var

    # perform test and save output
    p_value = 1 - st.chi2.cdf(d2, df)

    return p_value
```
## MCAR, MAR, MNAR are nice in theory, pretty useless in practice
- it can be challenging to infer the missingness pattern from an incomplete dataset
- there is a statistical test to differentiate MCAR and MAR
- MNAR is difficult/impossible to diagnose to the best of my knowledge
- multiple patterns can be present in the data
- even worse, multiple patterns can be present in one feature!
- missing values in a feature can occur due to a mix of MCAR, MAR, MNAR
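To build intuition for the three mechanisms, here is a toy sketch that injects each one into the same synthetic income column (made-up data, not the house-price set):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
age = rng.uniform(18, 90, n)        # fully observed
income = rng.lognormal(10, 0.5, n)  # will receive missing values

# MCAR: every income equally likely to be missing
mcar = rng.random(n) < 0.1

# MAR: missingness depends only on the *observed* age feature
mar = rng.random(n) < np.where(age > 60, 0.3, 0.05)

# MNAR: missingness depends on the unobserved income value itself
mnar = rng.random(n) < np.where(income > np.median(income), 0.3, 0.05)

for name, mask in [("MCAR", mcar), ("MAR", mar), ("MNAR", mnar)]:
    print(f"{name}: observed-income mean = {income[~mask].mean():,.0f}")
```

In this particular sketch only the MNAR mask biases the observed income mean; the MAR mask depends on `age`, which is generated independently of income here.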
## <font color='LIGHTGRAY'>Learning Objectives</font>
<font color='LIGHTGRAY'>By the end of this workshop, you will be able to</font>
- <font color='LIGHTGRAY'>Describe the three main types of missingness patterns</font>
- **Evaluate simple approaches for handling missing values**
- <font color='LIGHTGRAY'>Apply XGBoost to a dataset with missing values</font>
- <font color='LIGHTGRAY'>Apply multivariate imputation</font>
- <font color='LIGHTGRAY'>Apply the reduced-features model (also called the pattern submodel approach)</font>
- <font color='LIGHTGRAY'>Decide which approach is best for your dataset</font>
## Simple approaches for handling missing values
- 1) categorical/ordinal features: treat missing values as another category
- missing values in categorical/ordinal features are not a big deal
- 2) continuous features: this is the tough part
- sklearn's SimpleImputer
- 3) exclude points or features with missing values
- might be OK
### 1a) Missing values in a categorical feature
- YAY - this is not an issue at all!
- Categorical feature needs to be one-hot encoded anyway
- Just replace the missing values with 'NA' or 'missing' and treat it as a separate category
### 1b) Missing values in a ordinal feature
- this can be a bit trickier but usually fine
- Ordinal encoder is applied to ordinal features
- where does 'NA' or 'missing' fit into the order of the categories?
- usually first or last
- if you can figure this out, you are done
```
# read the data
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
# Let's load the data
df = pd.read_csv('data/train.csv')
# drop the ID
df.drop(columns=['Id'],inplace=True)
# the target variable
y = df['SalePrice']
df.drop(columns=['SalePrice'],inplace=True)
# the unprocessed feature matrix
X = df.values
print(X.shape)
# the feature names
ftrs = df.columns
# let's split to train, test, and holdout
X_other, X_holdout, y_other, y_holdout = train_test_split(df, y, test_size=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X_other, y_other, test_size=0.25, random_state=0)
print(X_train.shape)
print(X_test.shape)
print(X_holdout.shape)
# collect the various features
cat_ftrs = ['MSZoning','Street','Alley','LandContour','LotConfig','Neighborhood','Condition1','Condition2',\
'BldgType','HouseStyle','RoofStyle','RoofMatl','Exterior1st','Exterior2nd','MasVnrType','Foundation',\
'Heating','CentralAir','Electrical','GarageType','PavedDrive','MiscFeature','SaleType','SaleCondition']
ordinal_ftrs = ['LotShape','Utilities','LandSlope','ExterQual','ExterCond','BsmtQual','BsmtCond','BsmtExposure',\
'BsmtFinType1','BsmtFinType2','HeatingQC','KitchenQual','Functional','FireplaceQu','GarageFinish',\
'GarageQual','GarageCond','PoolQC','Fence']
ordinal_cats = [['Reg','IR1','IR2','IR3'],['AllPub','NoSewr','NoSeWa','ELO'],['Gtl','Mod','Sev'],\
['Po','Fa','TA','Gd','Ex'],['Po','Fa','TA','Gd','Ex'],['NA','Po','Fa','TA','Gd','Ex'],\
['NA','Po','Fa','TA','Gd','Ex'],['NA','No','Mn','Av','Gd'],['NA','Unf','LwQ','Rec','BLQ','ALQ','GLQ'],\
['NA','Unf','LwQ','Rec','BLQ','ALQ','GLQ'],['Po','Fa','TA','Gd','Ex'],['Po','Fa','TA','Gd','Ex'],\
['Sal','Sev','Maj2','Maj1','Mod','Min2','Min1','Typ'],['NA','Po','Fa','TA','Gd','Ex'],\
['NA','Unf','RFn','Fin'],['NA','Po','Fa','TA','Gd','Ex'],['NA','Po','Fa','TA','Gd','Ex'],
['NA','Fa','TA','Gd','Ex'],['NA','MnWw','GdWo','MnPrv','GdPrv']]
num_ftrs = ['MSSubClass','LotFrontage','LotArea','OverallQual','OverallCond','YearBuilt','YearRemodAdd',\
'MasVnrArea','BsmtFinSF1','BsmtFinSF2','BsmtUnfSF','TotalBsmtSF','1stFlrSF','2ndFlrSF',\
'LowQualFinSF','GrLivArea','BsmtFullBath','BsmtHalfBath','FullBath','HalfBath','BedroomAbvGr',\
'KitchenAbvGr','TotRmsAbvGrd','Fireplaces','GarageYrBlt','GarageCars','GarageArea','WoodDeckSF',\
'OpenPorchSF','EnclosedPorch','3SsnPorch','ScreenPorch','PoolArea','MiscVal','MoSold','YrSold']
df[ordinal_ftrs]
# preprocess with pipeline and columntransformer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import OrdinalEncoder
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
# one-hot encoder
categorical_transformer = Pipeline(steps=[
    ('imputer', SimpleImputer(strategy='constant',fill_value='missing')),
    ('onehot', OneHotEncoder(sparse=False,handle_unknown='ignore'))])

# ordinal encoder
ordinal_transformer = Pipeline(steps=[
    ('imputer2', SimpleImputer(strategy='constant',fill_value='NA')),
    ('ordinal', OrdinalEncoder(categories = ordinal_cats))])

# standard scaler
numeric_transformer = Pipeline(steps=[
    ('scaler', StandardScaler())])

# collect the transformers
preprocessor = ColumnTransformer(
    transformers=[
        ('num', numeric_transformer, num_ftrs),
        ('cat', categorical_transformer, cat_ftrs),
        ('ord', ordinal_transformer, ordinal_ftrs)])
# fit_transform the training set
X_prep = preprocessor.fit_transform(X_train)
# little hacky, but collect feature names
feature_names = preprocessor.transformers_[0][-1] + \
list(preprocessor.named_transformers_['cat'][1].get_feature_names(cat_ftrs)) + \
preprocessor.transformers_[2][-1]
df_train = pd.DataFrame(data=X_prep,columns=feature_names)
print(df_train.shape)
# transform the test
df_test = preprocessor.transform(X_test)
df_test = pd.DataFrame(data=df_test,columns = feature_names)
print(df_test.shape)
# transform the holdout
df_holdout = preprocessor.transform(X_holdout)
df_holdout = pd.DataFrame(data=df_holdout,columns = feature_names)
print(df_holdout.shape)
df_train[ordinal_ftrs]
```
### 2) Continuous features: mean or median imputation
- Imputation means you infer the missing values from the known part of the data
- sklearn's SimpleImputer can do mean and median imputation
- USUALLY A BAD IDEA!
- MCAR: mean/median of non-missing values is the same as the mean/median of the true underlying distribution, but the variances are different
- not MCAR: the mean/median and the variance of the completed dataset will be off
- supervised ML model is too confident (MCAR) or systematically off (not MCAR)
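The variance shrinkage under MCAR can be demonstrated in a few lines; a minimal numpy sketch on synthetic data (not the house-price set):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=10_000)

# knock out ~30% of the values completely at random (MCAR)
missing = rng.random(x.size) < 0.3
x_obs = x.copy()
x_obs[missing] = np.nan

# mean imputation: fill nans with the mean of the observed values
x_imp = np.where(missing, np.nanmean(x_obs), x_obs)

# the mean is roughly preserved, but the variance shrinks by roughly
# the missing fraction, because imputed points sit exactly at the mean
print(np.mean(x), np.mean(x_imp))
print(np.var(x), np.var(x_imp))
```

A downstream model sees an artificially narrow distribution, which is exactly why it becomes overconfident.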
### 3) Exclude points or features with missing values
- easy to do with pandas
- it can be OK if only small fraction of points contain missing values (maybe a few percent?) OR the missing values are limited to one or a few features that can be dropped
- if the MCAR assumption is justified, dropping points will not introduce biases to your model
- due to the smaller sample size, the confidence of your model might suffer.
- what will you do with missing values when you deploy the model?
```
print('data dimensions:',df_train.shape)
print('the p value of the mcar test:',mcar_test(df_train))
perc_missing_per_ftr = df_train.isnull().sum(axis=0)/df_train.shape[0]
print('fraction of missing values in features:')
print(perc_missing_per_ftr[perc_missing_per_ftr > 0])
frac_missing = sum(df_train.isnull().sum(axis=1)!=0)/df_train.shape[0]
print('fraction of points with missing values:',frac_missing)
print(df_train.shape)
# by default, rows/points are dropped
df_r = df_train.dropna()
print(df_r.shape)
# drop features with missing values
df_c = df_train.dropna(axis=1)
print(df_c.shape)
```
## <font color='LIGHTGRAY'>Learning Objectives</font>
<font color='LIGHTGRAY'>By the end of this workshop, you will be able to</font>
- <font color='LIGHTGRAY'>Describe the three main types of missingness patterns</font>
- <font color='LIGHTGRAY'>Evaluate simple approaches for handling missing values</font>
- **Apply XGBoost to a dataset with missing values**
- <font color='LIGHTGRAY'>Apply multivariate imputation</font>
- <font color='LIGHTGRAY'>Apply the reduced-features model (also called the pattern submodel approach)</font>
- <font color='LIGHTGRAY'>Decide which approach is best for your dataset</font>
## XGBoost and missing values
- sklearn raises an error if the feature matrix (X) contains nans.
- XGBoost doesn't!
- If a feature with missing values is split:
- XGBoost tries to put the points with missing values to the left and right
- calculates the impurity measure for both options
- puts the points with missing values to the side with the lower impurity
- if missingness correlates with the target variable, XGBoost extracts this info!
```
import xgboost
from sklearn.model_selection import ParameterGrid
from sklearn.metrics import mean_squared_error
param_grid = {"learning_rate": [0.03],
"n_estimators": [2000],
"seed": [0],
#"n_jobs": [-1],
#"reg_alpha": [0e0,0.1,0.31622777,1.,3.16227766,10.],
#"reg_lambda": [0e0,0.1,0.31622777,1.,3.16227766,10.],
"missing": [np.nan],
#"max_depth": [1,2,3,4,5],
"colsample_bytree": [0.9],
"subsample": [0.66]}
XGB = xgboost.XGBRegressor()
XGB.set_params(**ParameterGrid(param_grid)[0])
XGB.fit(df_train,y_train,early_stopping_rounds=50,eval_set=[(df_test, y_test)], verbose=False)
print('the test RMSE:',XGB.evals_result()['validation_0']['rmse'][-1])
y_holdout_pred = XGB.predict(df_holdout)
print('the holdout RMSE:',np.sqrt(mean_squared_error(y_holdout,y_holdout_pred)))
```
## <font color='LIGHTGRAY'>Learning Objectives</font>
<font color='LIGHTGRAY'>By the end of this workshop, you will be able to</font>
- <font color='LIGHTGRAY'>Describe the three main types of missingness patterns</font>
- <font color='LIGHTGRAY'>Evaluate simple approaches for handling missing values</font>
- <font color='LIGHTGRAY'>Apply XGBoost to a dataset with missing values</font>
- **Apply multivariate imputation**
- <font color='LIGHTGRAY'>Apply the reduced-features model (also called the pattern submodel approach)</font>
- <font color='LIGHTGRAY'>Decide which approach is best for your dataset</font>
## Multivariate Imputation
- models each feature with missing values as a function of other features
- at each step, a feature with nans is designated as target variable y and the other features are treated as feature matrix X
- a regressor is trained on (X, y) for known y
- then, the regressor is used to predict the missing values of y
- in the ML pipeline:
- create n imputed datasets
- run all of them through the ML pipeline
- generate n holdout scores
- the uncertainty in the holdout scores is due to the uncertainty in imputation
- works on MCAR and MAR, fails on MNAR
- paper [here](https://www.jstatsoft.org/article/view/v045i03)
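One step of this scheme can be sketched with plain numpy: treat the feature with nans as y, fit a least-squares regressor on the complete rows, and predict the missing entries. This is a toy illustration of the idea, not the MICE implementation from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 2.0 * x1 + rng.normal(scale=0.1, size=n)  # x2 is well predicted by x1

# make ~20% of x2 missing at random
missing = rng.random(n) < 0.2
x2_obs = np.where(missing, np.nan, x2)

# designate x2 as the target: fit least squares on the complete rows only
A = np.c_[np.ones(n), x1]  # design matrix with intercept
coef, *_ = np.linalg.lstsq(A[~missing], x2_obs[~missing], rcond=None)

# then use the regressor to predict the missing entries of x2
x2_imp = x2_obs.copy()
x2_imp[missing] = A[missing] @ coef

print('max imputation error:', np.max(np.abs(x2_imp - x2)))
```

IterativeImputer, shown below, cycles this step over every feature with missing values until the imputations stabilize.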
# sklearn's IterativeImputer
```
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor
print(df_train[['LotFrontage','MasVnrArea','GarageYrBlt']].head())
imputer = IterativeImputer(estimator = RandomForestRegressor(n_estimators=10), random_state=0)
X_impute = imputer.fit_transform(df_train)
df_train_imp = pd.DataFrame(data=X_impute, columns = df_train.columns)
print(df_train_imp[['LotFrontage','MasVnrArea','GarageYrBlt']].head())
df_test_imp = pd.DataFrame(data=imputer.transform(df_test), columns = df_train.columns)
df_holdout_imp = pd.DataFrame(data=imputer.transform(df_holdout), columns = df_train.columns)
XGB.fit(df_train_imp,y_train,early_stopping_rounds=50,eval_set=[(df_test_imp, y_test)], verbose=False)
print('the test RMSE:',XGB.evals_result()['validation_0']['rmse'][-1])
y_holdout_pred = XGB.predict(df_holdout_imp)
print('the holdout RMSE:',np.sqrt(mean_squared_error(y_holdout,y_holdout_pred)))
```
## <font color='LIGHTGRAY'>Learning Objectives</font>
<font color='LIGHTGRAY'>By the end of this workshop, you will be able to</font>
- <font color='LIGHTGRAY'>Describe the three main types of missingness patterns</font>
- <font color='LIGHTGRAY'>Evaluate simple approaches for handling missing values</font>
- <font color='LIGHTGRAY'>Apply XGBoost to a dataset with missing values</font>
- <font color='LIGHTGRAY'>Apply multivariate imputation</font>
- **Apply the reduced-features model (also called the pattern submodel approach)**
- <font color='LIGHTGRAY'>Decide which approach is best for your dataset</font>
## Reduced-features model (or pattern submodel approach)
- first described in 2007 in a [JMLR article](http://www.jmlr.org/papers/v8/saar-tsechansky07a.html) as the reduced features model
- in 2018, "rediscovered" as the pattern submodel approach in [Biostatistics](https://www.ncbi.nlm.nih.gov/pubmed/30203058)
<center>My holdout set:</center>
| index | feature 1 | feature 2 | feature 3 | target var |
|------- |:---------: |:---------: |:---------: |:----------: |
| 0 | <font color='red'>NA</font> | 45 | <font color='red'>NA</font> | 0 |
| 1 | <font color='red'>NA</font> | <font color='red'>NA</font> | 8 | 1 |
| 2 | 12 | 6 | 34 | 0 |
| 3 | 1 | 89 | <font color='red'>NA</font> | 0 |
| 4 | 0 | <font color='red'>NA</font> | 47 | 1 |
| 5 | 687 | 24 | 67 | 1 |
| 6 | <font color='red'>NA</font> | 23 | <font color='red'>NA</font> | 1 |
To predict points 0 and 6, I will use train and test points that are complete in feature 2.
To predict point 1, I will use train and test points that are complete in feature 3.
To predict points 2 and 5, I will use train and test points that are complete in features 1-3.
Etc. We will train as many models as there are missingness patterns in the holdout set.
## How to determine the patterns?
```
mask = df_holdout[['LotFrontage','MasVnrArea','GarageYrBlt']].isnull()
unique_rows, counts = np.unique(mask, axis=0,return_counts=True)
print(unique_rows.shape) # 6 patterns, we will train 6 models
for i in range(len(counts)):
    print(unique_rows[i], counts[i])
def xgb_model(X_train, Y_train, X_test, Y_test, X_holdout, Y_holdout, verbose=1):
    # make into row vectors to avoid an obnoxious sklearn/xgb warning
    Y_train = np.reshape(np.array(Y_train), (1, -1)).ravel()
    Y_test = np.reshape(np.array(Y_test), (1, -1)).ravel()
    Y_holdout = np.reshape(np.array(Y_holdout), (1, -1)).ravel()
    XGB = xgboost.XGBRegressor(n_jobs=1)
    # find the best parameter set
    param_grid = {"learning_rate": [0.03],
                  "n_estimators": [2000],
                  "seed": [0],
                  #"n_jobs": [6],
                  #"reg_alpha": [0e0,0.1,0.31622777,1.,3.16227766,10.],
                  #"reg_lambda": [0e0,0.1,0.31622777,1.,3.16227766,10.],
                  "missing": [np.nan],
                  #"max_depth": [1,2,3,4,5],
                  "colsample_bytree": [0.9],
                  "subsample": [0.66]}
    pg = ParameterGrid(param_grid)
    scores = np.zeros(len(pg))
    for i in range(len(pg)):
        if verbose >= 5:
            print("Param set " + str(i + 1) + " / " + str(len(pg)))
        params = pg[i]
        XGB.set_params(**params)
        eval_set = [(X_test, Y_test)]
        XGB.fit(X_train, Y_train,
                early_stopping_rounds=50, eval_set=eval_set, verbose=False)  # with early stopping
        Y_test_pred = XGB.predict(X_test, ntree_limit=XGB.best_ntree_limit)
        scores[i] = mean_squared_error(Y_test, Y_test_pred)
    # scores are MSEs, so lower is better: pick the parameter set with the minimum test score
    best_params = np.array(pg)[scores == np.min(scores)]
    if verbose >= 4:
        print('Test set best (lowest) MSE and best parameters are:')
        print(np.min(scores))
        print(best_params)
    # test the model on the holdout set with the best parameter set
    XGB.set_params(**best_params[0])
    XGB.fit(X_train, Y_train,
            early_stopping_rounds=50, eval_set=eval_set, verbose=False)
    Y_holdout_pred = XGB.predict(X_holdout, ntree_limit=XGB.best_ntree_limit)
    if verbose >= 1:
        print('The MSE is:', mean_squared_error(Y_holdout, Y_holdout_pred))
    if verbose >= 2:
        print('The predictions are:')
        print(Y_holdout_pred)
    if verbose >= 3:
        print("Feature importances:")
        print(XGB.feature_importances_)
    return (mean_squared_error(Y_holdout, Y_holdout_pred), Y_holdout_pred, XGB.feature_importances_)
# Function: Reduced-feature XGB model
# all the inputs need to be pandas DataFrames/Series
def reduced_feature_xgb(X_train, Y_train, X_test, Y_test, X_holdout, Y_holdout):
    # find all unique patterns of missing values in the holdout set
    mask = X_holdout.isnull()
    unique_rows = np.array(np.unique(mask, axis=0))
    all_Y_holdout_pred = pd.DataFrame()
    print('there are', len(unique_rows), 'unique missing value patterns.')
    # divide the holdout set into subgroups according to the unique patterns
    for i in range(len(unique_rows)):
        print('working on unique pattern', i)
        ## generate the X_holdout subset that matches unique pattern i
        holdout_rows = [j for j in range(len(mask))
                        if np.array_equal(np.array(mask.iloc[j]), unique_rows[i])]
        sub_X_holdout = X_holdout.iloc[holdout_rows]
        sub_Y_holdout = Y_holdout.iloc[holdout_rows]
        sub_X_holdout = sub_X_holdout[X_holdout.columns[~unique_rows[i]]]
        ## choose the corresponding reduced features for the subgroup
        # 1. cut the feature columns that have nans in the corresponding sub_X_holdout
        sub_X_train = X_train[X_train.columns[~unique_rows[i]]]
        sub_X_test = X_test[X_test.columns[~unique_rows[i]]]
        # 2. cut the rows in sub_X_train and sub_X_test that have any nans
        sub_X_train = sub_X_train.dropna()
        sub_X_test = sub_X_test.dropna()
        # 3. cut sub_Y_train and sub_Y_test accordingly
        sub_Y_train = Y_train.iloc[sub_X_train.index]
        sub_Y_test = Y_test.iloc[sub_X_test.index]
        # run XGB
        sub_Y_holdout_pred = xgb_model(sub_X_train, sub_Y_train, sub_X_test,
                                       sub_Y_test, sub_X_holdout, sub_Y_holdout, verbose=0)
        sub_Y_holdout_pred = pd.DataFrame(sub_Y_holdout_pred[1], columns=['sub_Y_holdout_pred'],
                                          index=sub_Y_holdout.index)
        print('   RMSE:', np.sqrt(mean_squared_error(sub_Y_holdout, sub_Y_holdout_pred)))
        # collect the holdout predictions
        all_Y_holdout_pred = pd.concat([all_Y_holdout_pred, sub_Y_holdout_pred])
    # sort the final Y_holdout_pred according to the original Y_holdout index
    all_Y_holdout_pred = all_Y_holdout_pred.sort_index()
    Y_holdout = Y_holdout.sort_index()
    # get the global RMSE
    total_RMSE = np.sqrt(mean_squared_error(Y_holdout, all_Y_holdout_pred))
    return total_RMSE
```
### A python implementation is available on the skipped slide
```
print('final RMSE:',reduced_feature_xgb(df_train, y_train, df_test, y_test, df_holdout, y_holdout))
```
## <font color='LIGHTGRAY'>Learning Objectives</font>
<font color='LIGHTGRAY'>By the end of this workshop, you will be able to</font>
- <font color='LIGHTGRAY'>Describe the three main types of missingness patterns</font>
- <font color='LIGHTGRAY'>Evaluate simple approaches for handling missing values</font>
- <font color='LIGHTGRAY'>Apply XGBoost to a dataset with missing values</font>
- <font color='LIGHTGRAY'>Apply multivariate imputation</font>
- <font color='LIGHTGRAY'>Apply the reduced-features model (also called the pattern submodel approach)</font>
- **Decide which approach is best for your dataset**
## Which approach is best for my data?
- **XGB**: run $n$ XGB models with $n$ different seeds
- **imputation**: prepare $n$ different imputations and run $n$ XGB models on them
- **reduced-features**: run $n$ reduced-features model with $n$ different seeds
- rank the three methods based on how significantly different the corresponding mean scores are
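The ranking step can be sketched with a two-sample t statistic over the $n$ holdout scores per method; a numpy-only sketch where the score arrays are invented for illustration:

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples of scores."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

# hypothetical holdout RMSEs from n = 10 runs of each approach
xgb_scores     = [25100, 24900, 25300, 25000, 24800, 25200, 24950, 25050, 25150, 24850]
imput_scores   = [25600, 25400, 25800, 25500, 25300, 25700, 25450, 25550, 25650, 25350]
reduced_scores = [24700, 24500, 24900, 24600, 24400, 24800, 24550, 24650, 24750, 24450]

# |t| much larger than ~2 suggests the mean scores differ significantly
print('XGB vs imputation:       t =', welch_t(xgb_scores, imput_scores))
print('XGB vs reduced-features: t =', welch_t(xgb_scores, reduced_scores))
```

With real score arrays you would report the method whose mean score is best and significantly separated from the others.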
Now you can
- Describe the three main types of missingness patterns
- Evaluate simple approaches for handling missing values
- Apply XGBoost to a dataset with missing values
- Apply multivariate imputation
- Apply the reduced-features model (also called the pattern submodel approach)
- Decide which approach is best for your dataset
# <center> Thanks for your attention! </center>
EDA to Typhoon Mitigation and Response Framework (TMRF)
“Experience is a master teacher, even when it’s not our own.”
― Gina Greenlee
The Philippines' vulnerability to natural disasters stems from its geographic location within the Pacific Ring of Fire. The country is surrounded by large bodies of water and faces the Pacific Ocean, which produces 60% of the world's typhoons. Approximately twenty tropical cyclones pass through the Philippine area of responsibility each year, ten of which are typhoons and five of which are catastrophic (Brown, 2013). Due to a lack of preparedness and response capacity, families in rural areas are more likely to be hit. According to the Weather Underground (n.d.), hurricanes are becoming a global threat as they intensify and more super tropical storms emerge. As a result, every municipality should maintain a high level of safety and security. Government agencies and non-governmental organizations in the Philippines promote emergency preparedness, but they have yet to capture the public's general attention; during Typhoon Yolanda's storm surge disaster, insufficient public awareness of storm surges led to higher casualties (Commission on Audit, n.d.). The Commission on Audit also reported that the mayor of Tacloban City had stated that more lives might have been saved if storm surges had been described as tsunami-like in nature. According to the National Research Council et al. (n.d.), preparedness is the process of transforming a community's awareness of potential natural hazards into actions that strengthen its ability to respond to and recover from disasters, and preparedness proposals must address both the immediate response and the longer-term recovery and rehabilitation.
The objective of this analysis is to conduct an exploratory data analysis of the 2019 typhoons that caused the highest casualty rates in the country, together with data on the municipal governments that had the fewest affected families and individuals per typhoon in the Philippines. Moreover, a global dataset covering hurricanes in the U.S. from 2000-2022, drawn from the Centre for Research on the Epidemiology of Disasters' Emergency Events Database (EM-DAT), will be analyzed in the same manner to determine which locations in the United States had the most successful typhoon response and mitigation plans. This information will be used to construct a Typhoon Mitigation and Response Framework that may help the Philippines deal with typhoons. Integrating programs that have worked in other countries will increase the likelihood of Filipinos surviving and recovering from typhoons.
<pre>
<b>Contents of the Notebook:</b>
P. Philippines Data set 2019
p1. Analysis of the features and X variables.
p2. Importing data from the excel sheet data set to a pandas data frame.
p3. Selection of X variables to be used for the analysis.
p4. Data Cleaning
p5. Correlation Analysis of the featured X variables.
p6. Data Analysis
Format for every objective: Objective
Codes
Outputs
Analysis and Observation
a. American Data set 2000-2022
a1. Analysis of the features and X variables.
a2. Importing data from the excel sheet data set to a pandas data frame.
a3. Dataframe Normalization
a4. Selection of X variables to be used for the analysis.
a5. Data Cleaning
a6. Statistical Overview, Regression, and Correlation Analysis of the featured X variables.
a7. Data Analysis
Format for every objective: Objective
Codes
Outputs
Analysis and Observation
Recommendations
</pre>
Humanitarian Data Exchange Data set about Philippines (2019)
p1. Analysis of the features and X variables.
p2. Importing data from the excel sheet data set to a pandas data frame.
```
import pandas as pd #Importing the pandas library and renaming it as pd
import numpy as np #Importing the numpy library and renaming it as np
import matplotlib.pyplot as plt #Importing the matplotlib plotting library and renaming it as plt
data=pd.read_excel(r'200204_philippines-2019-events-data.xlsx_3FAWSAccessKeyId=AKIAXYC32WNARK756OUG_Expires=1644193427_Signature=hFTPcWroN6S3M2pX40ObWvu24p8=.xlsx', sheet_name="Tropical Cyclones")
df=pd.DataFrame(data) #convert dataset excel into dataframe
```
p3. Selection of X variables to be used for the analysis.
```
#selecting all needed and specific columns from original dataframe/dataset and creating new dataframe named new_df
new_df = df.iloc[:,[0,2,4,6,7,8,9,10,12,13,14,15,16,17,18,19,20,21,22,23]].copy()
```
p4. Data Cleaning
```
new_df.info() # info() was used to get an understanding of which aspects of the dataset need cleaning.
new_df.isnull().sum() #checking for total null values. The rows that contain null values will be subjected to cleaning.
#To clean the dataframe, the dropna() function was used to remove the rows with null values in the object (string) columns (Region, Province, City_Mun), which had three null values.
new_df = new_df.dropna(subset=['Region', 'Province', 'City_Mun'])
# After dropping all null values in the object (string) columns, isnull() was used again to check that the null row(s) were dropped. As the table below shows, all null string rows were removed. However, there are still null values in the int columns of the dataframe. These null values are also subject to cleaning.
new_df.isnull().sum() #checking for total null values
# From the results above, there are a number of int null values that need to be cleaned, and the analysis below requires all rows to have values, especially for integers. The object columns already have no null values, as they are all strings. Thus, fillna() was used to replace the remaining null values with zero for smooth data analysis.
new_df = new_df.fillna(0) # replace NaN with zero value
new_df.isnull().sum() #checking for total null values
#After these data cleaning steps, the final dataframe for analysis was created and named "new_df". The isnull() results above show that no null values remain in the dataframe, so the data analysis in the latter part of this EDA can proceed without errors.
```
p5. Correlation Analysis of the featured X variables.
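No code cell survives for this step; the pattern would be pandas' `.corr()` over the numeric columns of `new_df`. A minimal sketch on a small made-up frame (the column names follow the dataset, the values are invented):

```python
import pandas as pd

# toy stand-in for new_df: two numeric columns from the 2019 dataset
toy = pd.DataFrame({
    'Affected_PERs': [1000, 5000, 200, 8000, 300],
    'Totally damaged houses': [12, 60, 2, 95, 4],
})

# Pearson correlation matrix of the numeric features
corr = toy.corr()
print(corr)
print('r(Affected_PERs, damaged houses) =',
      corr.loc['Affected_PERs', 'Totally damaged houses'])
```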
p6. Data Analysis
A. Determine the top 5 typhoons from 2019 that brought the greatest and least number of infrastructure casualties to the Provinces in the Philippines, based on the Totally Damaged Houses x variable.
```
cyclone=new_df.groupby("Incident")
dhouse=cyclone["Totally damaged houses"].sum()
typ=pd.DataFrame(dhouse)
cycph=typ.sort_values(by="Totally damaged houses", ascending=False)
tdh=cycph.head(5)
display(tdh)
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.axis('equal')
storms = ['TY Tisoy', 'TY Ursula', 'TS Quiel', 'TS Hanna', 'TD Marilyn']
damages = [68104,60483,59,56,44]
ax.pie(damages, labels = storms,autopct='%1.2f%%')
plt.show()
```
B. Acquire the data about the Provinces who had the greatest and least number of affected individuals per typhoon (Affected_Pers).
```
#top 5 Provinces with the greatest number of individuals affected by typhoons in 2019
ByProvince= new_df.groupby('Province')
TotalData = ByProvince['Affected_PERs'].sum()
data= pd.DataFrame(TotalData)
SortedData= data.sort_values(by='Affected_PERs',ascending=False)
result= SortedData.head(5)
display(result)
#data graphing
totalaffected = [772162, 622951, 602234, 504447, 483308]
index = ['Leyte', 'Capiz', 'Northern Samar',
'Aklan', 'Western Samar']
df = pd.DataFrame({'totalaffected': totalaffected,
'Province': index}, index=index)
ax = df.plot.barh(y='totalaffected')
```
C. Get the information that shows the top 5 municipalities that were most and least affected by typhoons in 2019, based on the Affected_PERs x variable.
```
ByMuni= new_df.groupby('City_Mun')
TotalData=ByMuni['Affected_PERs'].sum()
data = pd.DataFrame(TotalData)
SortedData = data.sort_values(by='Affected_PERs',ascending=False)
result= SortedData.head(5)
display(result)
data={'Municipality':['Roxas', 'Daraga', 'Catbalogan',
'City of Tacloban', 'Catarman'],
'Affected Person':[ 168580, 126595, 122572, 119918, 106424]}
# Load data into DataFrame
df = pd.DataFrame(data = data);
#Graphing of Data
df.plot.scatter(x = 'Municipality', y = 'Affected Person', s = 100, c = 'red');
PhilData = new_df
PhilData
```
The Centre for Research on the Epidemiology of Disasters' Data set about the American Typhoons (2000-2022)
a1. Analysis of the features and X variables.
a2. Importing data from the excel sheet data set to a pandas data frame.
```
import pandas as pd #Importing the pandas library and renaming it as pd
import numpy as np #Importing numpy, used below for the heatmap mask
import seaborn as sns #Importing seaborn, used below for the correlation heatmap
import matplotlib.pyplot as plt #Importing the matplotlib plotting library and renaming it as plt
# casualties of storms in America based in EMDAT datasets
data = pd.read_excel(r'2000-2022-emdat_public_2022_04_24_query_uid-XuKaJG.xlsx', sheet_name="emdat data")
df = pd.DataFrame(data) #convert dataset excel into dataframe
display(df)
```
a3. Dataframe Normalization
```
#From the dataframe above, there are numerous disaster types, such as convective storm, tropical cyclone, extra-tropical storm, and many other classifications. The focus of this analysis is directed only at the Tropical cyclone subtype, as it provides specific names for the typhoons that hit the American continent. To achieve this, a new dataframe named 'new_df' was created from the rows whose 'Disaster Subtype' has the string value 'Tropical cyclone'.
new_df = df.loc[df['Disaster Subtype'] == 'Tropical cyclone']
```
a4. Selection of X variables to be used for the analysis.
```
#*************************NEW DATAFRAME***************************************
#selecting all needed and specific columns from original dataframe/dataset and creating new dataframe named new_df
new_df = new_df.iloc[:,[0,1,6,7,9,10, 12,34, 35, 36, 37,38, 40, 41, 42, 43, 44, 22]].copy()
```
a5. Data Cleaning
```
new_df.info() # info() function was used to get an understanding of which aspects of the dataset need cleaning.
new_df.isnull().sum() #checking for total null values
# From the results above, there are a number of null values that need to be cleaned, and the analysis below requires all rows to have values, especially for integers. The object columns already contain no null values, as they are all strings. Thus, fillna() was used to replace null values with zero for smooth data analysis.
new_df = new_df.fillna(0) # replace NaN with zero value
#After filling the null values in the valued columns of the dataframe, astype() was used to change the data types of specific columns via dictionary indexing.
convert_datatypes = {"Total_Deaths":int,
"No_Injured":int,
"No_Affected":int,
"No_Homeless":int,
"Total_Affected":int,
"Reconstruction_Costs,_Adjusted_('000_US$)": int,
"Insured_Damages_('000_US$)": int,
"Insured_Damages,_Adjusted_('000_US$)": int,
"Total_Damages_('000_US$)":int,
"Total_Damages,_Adjusted_('000_US$)": int}
new_df= new_df.astype(convert_datatypes) #converting columns datatypes
new_df.isnull().sum() #checking for total null values
#After all of these data cleaning processes, the final dataframe for analysis was created and again named "new_df". The isnull() results above show that no null values remain in the dataframe, so the data analysis in the latter part of this EDA can proceed without errors.
# First, the drop_duplicates function was used to find the exact names and number of countries included in the record, without duplicates. There are 38 countries in total.
Country_names = new_df["Country"].drop_duplicates()
#Creating a new dataframe called `Country_names` that contains the unique values of the `Country` column in the `new_df` dataframe
data = pd.DataFrame(Country_names) #Creating a new dataframe called `data` that contains the unique values of the `Country` column in the `new_df` dataframe.
data.info() # info() function was used to get an understanding of which aspects of the dataset need cleaning.
Event_names = new_df["Event Name"].drop_duplicates()
data = pd.DataFrame(Event_names)
data.info()
```
a6. Statistical Overview and Correlation Analysis of the featured X variables.
```
new_df.describe()
new_df.describe(include =['O'])
corr = new_df.corr()**2
corr.Total_Affected.sort_values(ascending=False)
## heatmap to see the correlation between features.
# Generate a mask for the upper triangle (taken from the seaborn example gallery)
mask = np.zeros_like(new_df.corr(), dtype=bool)
mask[np.triu_indices_from(mask)] = True
sns.set_style('whitegrid')
plt.subplots(figsize = (15,12))
sns.heatmap(new_df.corr(),
annot=True,
mask = mask,
cmap = 'RdBu', ## in order to reverse the bar replace "RdBu" with "RdBu_r"
linewidths=.9,
linecolor='white',
fmt='.2g',
center = 0,
square=True)
plt.title("Correlations Among Features", y = 1.03,fontsize = 20, pad = 40);
```
a7. Data Analysis
A. Determine the top 5 typhoons from 2000-2022 that brought the greatest number of casualties in America as a whole, based on the Total Affected and Total Damages, Adjusted ('000 US$) x variables.
```
from IPython.display import display_html
# Acquiring the top 5 typhoons from 2000-2022 that brought the greatest number of casualties in America as a whole, based on Total Affected.
TopTyphoonAffected = new_df.groupby('Event Name')
TotalAff = TopTyphoonAffected['Total_Affected'].sum()
data_TotalAff = pd.DataFrame(TotalAff)
TotalAff_SortedData = data_TotalAff.sort_values(by='Total_Affected',ascending=False)
TotalAff_result = TotalAff_SortedData.head(5)
# Acquiring the top 5 typhoons from 2000-2022 that brought the greatest number of casualties in America as a whole, based on Total Damages, Adjusted ('000 US$)
TopTyphoonDamage = new_df.groupby('Event Name')
TotalDam = TopTyphoonDamage["Total_Damages,_Adjusted_('000_US$)"].sum()
data_TotalDam = pd.DataFrame(TotalDam)
TotalDam_SortedData = data_TotalDam.sort_values(by="Total_Damages,_Adjusted_('000_US$)",ascending=False)
TotalDam_result = TotalDam_SortedData.head(5)
# Codes for displaying the Tables created from the code above side by side
space = "\xa0" * 20
TotalAff_result_styler = TotalAff_result.style.set_table_attributes("style='display:inline'").set_caption('\n Top 5 Typhoons that Brought Greatest \n Total Numbers of Affected People in Year 2000-2022')
TotalDam_result_styler = TotalDam_result.style.set_table_attributes("style='display:inline'").set_caption("\n Top 5 Typhoons that Brought Greatest \n Total Damages, Adjusted ('000 US$) in Year 2000-2022")
display_html(TotalAff_result_styler._repr_html_()+ space + TotalDam_result_styler._repr_html_(), raw=True)
#********************************
#graphing of data of Top 5 Typhoons that Brought Greatest Total Numbers of Affected People in Year 2000-2022
TotalAff_result.plot(kind="bar",color="gray")
plt.xlabel("Event Name")
plt.ylabel("Total Number of Affected")
plt.title("\nTop 5 Typhoons that Brought Greatest \n Total Numbers of Affected People in Year 2000-2022\n\n")
plt.show()
#graphing of data of Top 5 Typhoons that Brought Greatest Total Damages, Adjusted ('000 US$) in Year 2000-2022.
TotalDam_result.plot(kind="bar",color="magenta")
plt.xlabel("Event Name")
plt.ylabel("Total Damages, Adjusted ('000 US$) ")
plt.title("\nTop 5 Typhoons that Brought Greatest \n Total Damages, Adjusted ('000 US$) in Year 2000-2022\n\n")
plt.show()
```
B. Get the data about the top 5 countries that had the greatest and least number of people left homeless by typhoons from 2000-2022.
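No cell for objective B survives here; the pattern mirrors the other groupby blocks in this notebook, sketched on a tiny made-up frame (the real code would use `new_df` and its `No_Homeless` column):

```python
import pandas as pd

# toy stand-in for new_df with invented homeless counts
toy = pd.DataFrame({
    'Country': ['USA', 'Cuba', 'USA', 'Mexico', 'Cuba'],
    'No_Homeless': [100, 900, 250, 40, 300],
})

homeless = toy.groupby('Country')['No_Homeless'].sum()
greatest = homeless.sort_values(ascending=False).head(5)  # greatest totals
least = homeless.sort_values(ascending=True).head(5)      # least totals
print(greatest)
print(least)
```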
C. Acquire the data about the top 5 countries that had the greatest number of deaths, injured, and affected individuals from typhoons in 2000-2022.
```
#Acquiring the top 5 countries with the highest number of total deaths caused by typhoons from 2000-2022 using groupby and sum function of pandas
ByCountryDeath = new_df.groupby('Country')
TotalData = ByCountryDeath['Total_Deaths'].sum()
data = pd.DataFrame(TotalData)
SortedData = data.sort_values(by='Total_Deaths',ascending=False)
result = SortedData.head(5)
display(result)
std = new_df['Total_Deaths'].std()
mean = new_df['Total_Deaths'].mean()
print("Standard Deviation of Total Deaths Columns is:", std)
print("Mean of Total Deaths Columns is:", mean)
#Acquiring the top 5 countries with the highest number of total injured individuals caused by typhoons from 2000-2022 using the groupby and sum functions of pandas
ByCountryInjured = new_df.groupby('Country')
TotalData = ByCountryInjured['No_Injured'].sum()
data = pd.DataFrame(TotalData)
SortedData = data.sort_values(by='No_Injured',ascending=False)
result = SortedData.head(5)
display(result)
#Acquiring the top 5 countries with the highest number of total affected individuals caused by typhoons from 2000-2022 using groupby and sum function of pandas
ByCountry = new_df.groupby('Country')
TotalData = ByCountry['Total_Affected'].sum()
data = pd.DataFrame(TotalData)
SortedData = data.sort_values(by='Total_Affected',ascending=False)
result = SortedData.head(5)
display(result)
#graphing the Country and Total Affected
data = {'Cuba':20202593,'USA':11279675, 'Mexico':6176551, 'Honduras':5380420, 'Guatemala':3841847}
Country_name = list(data.keys())
Affected_values = list(data.values())
plt.bar(Country_name, Affected_values, color ='blue',
width = 0.4)
plt.xlabel("Country Name")
plt.ylabel("Total No. of Affected")
plt.title("\nTop 5 countries with the highest number of total affected individuals")
plt.show()
```
D. Acquire the data about the names of the countries that had the least number of deaths, injured, and affected individuals from typhoons in 2000-2022.
```
#Acquiring the top 11 countries with the lowest number of total deaths caused by typhoons from 2000-2022 using groupby and sum function of pandas
ByCountryDeath = new_df.groupby('Country')
TotalData = ByCountryDeath['Total_Deaths'].sum()
data = pd.DataFrame(TotalData)
SortedData = data.sort_values(by='Total_Deaths',ascending=True)
result = SortedData.head(11)
display(result)
#Acquiring the 20 countries with the lowest number of total injured individuals caused by typhoons from 2000-2022 using groupby and sum function of pandas
ByCountryInjured = new_df.groupby('Country')
TotalData = ByCountryInjured['No_Injured'].sum()
data = pd.DataFrame(TotalData)
SortedData = data.sort_values(by='No_Injured',ascending=True)
result = SortedData.head(20)
display(result)
#Acquiring the top 5 countries with the lowest number of total affected individuals caused by typhoons from 2000-2022 using groupby and sum function of pandas
ByCountry = new_df.groupby('Country')
TotalData = ByCountry['Total_Affected'].sum()
data = pd.DataFrame(TotalData)
SortedData = data.sort_values(by='Total_Affected',ascending=True)
result = SortedData.head(5)
display(result)
```
E. Get the top 5 countries most affected economically (in dollars) by typhoons from 2000 to 2022.
```
#sorting data for most affected in terms of economy
countrynames = new_df.groupby('Country')
totaldollars = countrynames["Total_Damages,_Adjusted_('000_US$)"].sum()
td_frame = pd.DataFrame(totaldollars)
SortedData_dollars = td_frame.sort_values(by="Total_Damages,_Adjusted_('000_US$)",ascending=False)
result_dollars = SortedData_dollars.head(5)
display(result_dollars)
#graphing of data
result_dollars.plot(kind="bar",color="magenta")
plt.xlabel("Country")
plt.ylabel("Total Damages, Adjusted ('000 US$)")
plt.title("\nTop 5 Most Affected Countries in Terms of Economy (dollars) \n by typhoons from the year 2000-2022\n")
plt.show()
```
F. Determine the top 5 strongest typhoons based on the variable 'Dis Mag Value', the magnitude of the disaster at its epicenter, with values in kph (kilometers per hour).
```
ByTyphoons = new_df.iloc[:,[4, 17]]
data = pd.DataFrame(ByTyphoons)
SortedData = data.sort_values(by='Dis Mag Value',ascending=False)
result = SortedData.head(5)
display(result)
```
# Feature Engineering
Practice creating new features from the GDP and population data.
You'll create a new feature gdppercapita, which is GDP divided by population. You'll then write code to create new features like GDP squared and GDP cubed.
Start by running the code below. It reads in the World Bank data, filters the data for the year 2016, and cleans the data.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# read in the projects data set and do basic wrangling
gdp = pd.read_csv('../data/gdp_data.csv', skiprows=4)
gdp.drop(['Unnamed: 62', 'Country Code', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1)
population = pd.read_csv('../data/population_data.csv', skiprows=4)
population.drop(['Unnamed: 62', 'Country Code', 'Indicator Name', 'Indicator Code'], inplace=True, axis=1)
# Reshape the data sets so that they are in long format
gdp_melt = gdp.melt(id_vars=['Country Name'],
var_name='year',
value_name='gdp')
# Use back fill and forward fill to fill in missing gdp values
gdp_melt['gdp'] = gdp_melt.sort_values('year').groupby('Country Name')['gdp'].fillna(method='ffill').fillna(method='bfill')
population_melt = population.melt(id_vars=['Country Name'],
var_name='year',
value_name='population')
# Use back fill and forward fill to fill in missing population values
population_melt['population'] = population_melt.sort_values('year').groupby('Country Name')['population'].fillna(method='ffill').fillna(method='bfill')
# merge the population and gdp data together into one data frame
df_country = gdp_melt.merge(population_melt, on=('Country Name', 'year'))
# filter data for the year 2016
df_2016 = df_country[df_country['year'] == '2016']
# filter out values that are not countries
non_countries = ['World',
'High income',
'OECD members',
'Post-demographic dividend',
'IDA & IBRD total',
'Low & middle income',
'Middle income',
'IBRD only',
'East Asia & Pacific',
'Europe & Central Asia',
'North America',
'Upper middle income',
'Late-demographic dividend',
'European Union',
'East Asia & Pacific (excluding high income)',
'East Asia & Pacific (IDA & IBRD countries)',
'Euro area',
'Early-demographic dividend',
'Lower middle income',
'Latin America & Caribbean',
'Latin America & the Caribbean (IDA & IBRD countries)',
'Latin America & Caribbean (excluding high income)',
'Europe & Central Asia (IDA & IBRD countries)',
'Middle East & North Africa',
'Europe & Central Asia (excluding high income)',
'South Asia (IDA & IBRD)',
'South Asia',
'Arab World',
'IDA total',
'Sub-Saharan Africa',
'Sub-Saharan Africa (IDA & IBRD countries)',
'Sub-Saharan Africa (excluding high income)',
'Middle East & North Africa (excluding high income)',
'Middle East & North Africa (IDA & IBRD countries)',
'Central Europe and the Baltics',
'Pre-demographic dividend',
'IDA only',
'Least developed countries: UN classification',
'IDA blend',
'Fragile and conflict affected situations',
'Heavily indebted poor countries (HIPC)',
'Low income',
'Small states',
'Other small states',
'Not classified',
'Caribbean small states',
'Pacific island small states']
# remove non countries from the data
df_2016 = df_2016[~df_2016['Country Name'].isin(non_countries)]
df_2016.reset_index(inplace=True, drop=True)
```
# Exercise 1
Create a new feature called gdppercapita in a new column. This feature should be the gdp value divided by the population.
```
# TODO: create a new feature called gdppercapita,
# which is the gdp value divided by the population value for each country
df_2016['gdppercapita'] = df_2016['gdp'] / df_2016['population']
```
# Exercise 2 (Challenge)
This next exercise is more challenging and assumes you know how to use the pandas apply() method as well as lambda functions.
Write code that creates powers of a feature. For example, given the 'gdp' column and an integer like 3, you want to append a new column with the square of gdp (gdp^2) and another column with the cube of gdp (gdp^3).
Follow the TODOs below. These functions build on each other in the following way:
create_multiples(b, k) has two inputs. The first input, b, is a floating point number. The second, k, is an integer. The output is a list of powers of b from $b^2$ up to $b^k$. For example, create_multiples(3, 4) would return the list $[3^2, 3^3, 3^4]$, or in other words $[9, 27, 81]$.
Then the column_name_generator(colname, k) function outputs a list of column names. For example, column_name_generator('gdp', 4) would output the list of strings `['gdp2', 'gdp3', 'gdp4']`.
And finally, concatenate_features(df, column, num_columns) uses the two previous functions to create the new columns and then append these new columns to the original data frame.
```
# TODO: Fill out the create_multiples function.
# The create_multiples function has two inputs: a floating point number b and an integer k.
# The output is a list of powers of b, starting from the square of b and ending at b^k.
def create_multiples(b, k):
    new_features = []
    # use a for loop to build the list of powers of b: i.e. b^2, b^3, b^4, ... up to b^k
    for i in range(2, k + 1):
        new_features.append(b ** i)
    return new_features
# TODO: Fill out the column_name_generator function.
# The function has two inputs: a string representing a column name and an integer k.
# The 'k' variable is the same as in the create_multiples function.
# The output should be a list of column names.
# For example, if the inputs are ('gdp', 4) then the output is the list of strings ['gdp2', 'gdp3', 'gdp4']
def column_name_generator(colname, k):
col_names = []
for i in range(2,k+1):
col_names.append('{}{}'.format(colname, i))
return col_names
# TODO: Fill out the concatenate_features function.
# The function has three inputs. A dataframe, a column name represented by a string, and an integer representing
# the maximum power to create when engineering features.
# If the input is (df_2016, 'gdp', 3), then the output will be the df_2016 dataframe with two new columns
# One new column will be 'gdp2' ie gdp^2, and then other column will be 'gdp3' ie gdp^3.
# HINT: There may be more than one way to do this.
# The TODOs in this section point you towards one way that works
def concatenate_features(df, column, num_columns):
    # TODO: Use the pandas apply() method to create the new features. Inside the apply method, you
    # can use a lambda function with the create_multiples function
new_features = df[column].apply(lambda x: create_multiples(x, num_columns))
# TODO: Create a dataframe from the new_features variable
# Use the column_name_generator() function to create the column names
# HINT: In the pd.DataFrame() method, you can specify column names inputting a list in the columns option
# HINT: Using new_features.tolist() might be helpful
new_features_df = pd.DataFrame(new_features.tolist(), columns = column_name_generator(column, num_columns))
    # TODO: concatenate the original data frame df with the new_features_df dataframe
    # and return the concatenated dataframe
return pd.concat([df, new_features_df], axis=1)
```
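As a quick sanity check of the behavior described above, the two helpers can be exercised on their own (re-defined here so the cell is self-contained; these mirror the TODO implementations rather than replacing them):

```python
# Minimal, self-contained versions of the two helpers described above:
# powers of b from b^2 up to b^k, and matching column names.
def create_multiples(b, k):
    return [b ** i for i in range(2, k + 1)]

def column_name_generator(colname, k):
    return ['{}{}'.format(colname, i) for i in range(2, k + 1)]

print(create_multiples(3, 4))           # [9, 27, 81]
print(column_name_generator('gdp', 4))  # ['gdp2', 'gdp3', 'gdp4']
```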
# Solution
Run the code cell below. If your code is correct, you should get a dataframe with 8 columns. Here are the first two rows of what your results should look like.
| Country Name | year | gdp | population | gdppercapita | gdp2 | gdp3 | gdp4 |
|--------------|------|--------------|------------|--------------|--------------|--------------|--------------|
| Aruba | 2016 | 2.584464e+09 | 104822.0 | 24655.737223 | 6.679453e+18 | 1.726280e+28 | 4.461509e+37 |
| Afghanistan | 2016 | 1.946902e+10 | 34656032.0 | 561.778746 | 3.790428e+20 | 7.379593e+30 | 1.436735e+41 |
There is a solution in the 16_featureengineering_exercise folder if you go to File->Open.
```
concatenate_features(df_2016, 'gdp', 4)
```
# Detail of other tries
## Outline
***
* Preprocessing data
* Tokenization and punctuation cleaning
* Hypertuning Word2Vec model
* Choose the classifier
* Hypertuning classifier
* Result
* Conclusion
## Preprocessing
***
```
# Import the necessary packages
import pandas as pd
import nltk
import numpy as np
# Check the basic information of the dataset
df = pd.read_csv('data/fake_or_real_news.csv', nrows=10000)
df.info()
df.drop('Unnamed: 0', inplace=True, axis=1)
label_trans = lambda i: 0 if i == 'FAKE' else 1
df.label = df.label.apply(label_trans)
df.head()
# Draw a histogram of text length versus frequency
import matplotlib
%matplotlib inline
df['text'].str.len().plot(kind = 'hist', bins = 1000, figsize = (12,5))
```
## Tokenization and punctuation cleaning
***
```
# Tokenize the texts for later use
from string import punctuation
texts = df.text+df.title
mapping_table = {ord(char): u' ' for char in punctuation}
tokenized = [nltk.word_tokenize(review.translate(mapping_table)) for review in texts]
# Remove punctuation tokens and stopwords from the tokenized text
def clean_text(tokenized_list):
    import string
    sw = nltk.corpus.stopwords.words('english')
    sw.extend(["“", "”", "’", "‘", "—"])
    new_list = [[token.lower() for token in tlist if token not in string.punctuation
                 and token.lower() not in sw] for tlist in tokenized_list]
    return new_list
cleaned = clean_text(tokenized)
# Import the Word2Vec Model and LogisticRegression for checking the performance of Word2Vec parameters
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import *
target_names = ['FAKE', 'REAL']
y = np.array(df['label'])
seed = 2
test_size = 0.33
# A compact, easy-to-read version of the classification report
def new_report(y0_test, y0_pred):
print (" Accuracy: {:.5f} Precision: {:.5f} Recall: {:.5f} F-1: {:.5f}"
.format(accuracy_score(y0_test, y0_pred), precision_score(y0_test, y0_pred),
recall_score(y0_test, y0_pred), f1_score(y0_test, y0_pred)))
# Build an averaged word vector for each document
def vectors_build(word_vectors, cleaned, word_model):
for i in range(0, len(df)):
word_vectors[i] = 0
for word in cleaned[i]:
word_vectors[i] += word_model[word]
if len(cleaned[i]) != 0:
word_vectors[i] = word_vectors[i] / len(cleaned[i])
return word_vectors
```
## Hypertuning Word2Vec model
***
* We tested all the parameters a Word2Vec model has, but it turns out only a few of them matter.
* For clarity, we hide the parameters left at their defaults.
* The tunable parameters include size, window, alpha, iter, batch_words, and negative.
* We did try random search over the Word2Vec parameters, but the results were not good.
* Because of the number of parameters, grid search is also hard to apply.
* Note that although we set the seed value, with workers = 4 we cannot eliminate ordering jitter from OS thread scheduling, which introduces some randomness into the results.
* In conclusion, each parameter contributes only a very small percentage to the final result.
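The random search mentioned above can be sketched as follows. The ranges are illustrative assumptions taken from the sweeps below, and each drawn configuration would still need to be trained and scored with the same train/test code used in the following cells:

```python
import random

# Illustrative parameter ranges (assumptions based on the sweeps below)
param_ranges = {
    'size': [50 * j for j in range(1, 11)],           # 50 .. 500
    'window': list(range(1, 11)),                     # 1 .. 10
    'alpha': [0.015 + 0.005 * j for j in range(10)],  # 0.015 .. 0.060
    'iter': list(range(6, 16)),                       # 6 .. 15
    'batch_words': [8000, 9000, 10000],
    'negative': list(range(5, 10)),                   # 5 .. 9
}

def sample_params(ranges, rng=random):
    """Draw one random parameter combination from the given ranges."""
    return {name: rng.choice(values) for name, values in ranges.items()}

random.seed(2)  # reproducible draw
candidate = sample_params(param_ranges)
print(candidate)  # one Word2Vec configuration to train and score
```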
### Original
The original model uses the default parameters.
```
# Try the default parameter of Word2Vec Model
word_model = Word2Vec(cleaned, seed = seed, min_count = 1, size = 100,
window = 5, alpha = 0.025, iter = 10,
batch_words = 10000, negative = 5)
word_vectors = np.zeros((len(df), 100))
word_vectors = vectors_build(word_vectors, cleaned, word_model)
x0_train, x0_test, y0_train, y0_test = train_test_split(word_vectors, y, test_size=test_size, random_state=seed)
LR0_model = LogisticRegression().fit(x0_train, y0_train)
y0_pred = LR0_model.predict(x0_test)
print("Word2Vec, original:")
new_report(y0_test, y0_pred)
```
### Changing the size
We search in steps of 50 to find the best size between 50 and 500.
```
# Testing the influence of size and use 50 as difference
for j in range(1,11):
word_model = Word2Vec(cleaned, seed = seed, min_count = 1, size = 50*j,
window = 5, alpha = 0.025, iter = 10,
batch_words = 10000, negative = 5, workers = 4)
word_vectors = np.zeros((len(df), 50*j))
word_vectors = vectors_build(word_vectors, cleaned, word_model)
x0_train, x0_test, y0_train, y0_test = train_test_split(word_vectors, y, test_size=test_size, random_state=seed)
LR0_model = LogisticRegression().fit(x0_train, y0_train)
y0_pred = LR0_model.predict(x0_test)
print("Word2Vec, size =", 50*j, ":")
new_report(y0_test, y0_pred)
```
We get the best performance at size = 350.
### Changing the window
The window parameter is the maximum distance between the current and predicted word; we vary it from 1 to 10.
```
for j in range(1,11):
word_model = Word2Vec(cleaned, seed = seed, min_count = 1, size = 350,
window = j, alpha = 0.025, iter = 10,
batch_words = 10000, negative = 5)
word_vectors = np.zeros((len(df), 350))
word_vectors = vectors_build(word_vectors, cleaned, word_model)
x0_train, x0_test, y0_train, y0_test = train_test_split(word_vectors, y, test_size=test_size, random_state=seed)
LR0_model = LogisticRegression().fit(x0_train, y0_train)
y0_pred = LR0_model.predict(x0_test)
print("Word2Vec, window =", j,":")
new_report(y0_test, y0_pred)
```
The best window size is 9, which gives the highest result across all metrics.
### Changing the alpha
The range of alpha is from 0.015 to 0.060, and the step is 0.005.
```
for j in range(1,10):
word_model = Word2Vec(cleaned, seed = seed, min_count = 1, size = 350,
window = 9, alpha = 0.01+ 0.005*j, iter = 10,
batch_words = 10000, workers = 3, negative = 5)
word_vectors = np.zeros((len(df), 350))
word_vectors = vectors_build(word_vectors, cleaned, word_model)
x0_train, x0_test, y0_train, y0_test = train_test_split(word_vectors, y, test_size=test_size, random_state=seed)
LR0_model = LogisticRegression()
LR0_model = LR0_model.fit(x0_train, y0_train)
y0_pred = LR0_model.predict(x0_test)
print("Word2Vec, alpha =",(0.01+0.005*j),":")
new_report(y0_test, y0_pred)
```
### Changing the iter
We vary the iter parameter (the number of training epochs) from 6 to 15.
```
for j in range(1,11):
word_model = Word2Vec(cleaned, size = 300, window = 5, min_count = 1,
alpha = 0.045, iter= 5 + j,
batch_words = 10000, workers = 4, negative = 5)
word_vectors = np.zeros((len(df), 300))
word_vectors = vectors_build(word_vectors, cleaned, word_model)
x0_train, x0_test, y0_train, y0_test = train_test_split(word_vectors, y, test_size=test_size, random_state=seed)
LR0_model = LogisticRegression().fit(x0_train, y0_train)
y0_pred = LR0_model.predict(x0_test)
print("Word2Vec, iter =",(5+j),":")
new_report(y0_test, y0_pred)
```
As we can see, iter does not change the final result much, but the best iteration count here is 14.
### Changing the batch_words
batch_words is the target size (in words) for batches of examples passed to worker threads. Here we try reducing this parameter.
```
for j in range(1, 4):
word_model = Word2Vec(cleaned, size = 350, window = 5, min_count = 1,
alpha = 0.04, iter = 14,
batch_words = 11000 - 1000*j, workers= 4, negative = 5)
word_vectors = np.zeros((len(df), 350))
word_vectors = vectors_build(word_vectors, cleaned, word_model)
x0_train, x0_test, y0_train, y0_test = train_test_split(word_vectors, y, test_size=test_size, random_state=seed)
LR0_model = LogisticRegression().fit(x0_train, y0_train)
print("Word2Vec, batch_words =",(11000 - 1000*j),":")
y0_pred = LR0_model.predict(x0_test)
new_report(y0_test, y0_pred)
```
Surprisingly, batch_words = 10000 gives the best result.
### Changing the negative
Last but not least, we attempted to change the negative (negative-sampling) value.
```
for j in range(1, 6):
word_model = Word2Vec(cleaned, size = 350, window = 5, min_count = 1,
alpha = 0.04, iter = 14,
batch_words = 10000, workers= 4, negative = 4+j)
word_vectors = np.zeros((len(df), 350))
word_vectors = vectors_build(word_vectors, cleaned, word_model)
x0_train, x0_test, y0_train, y0_test = train_test_split(word_vectors, y, test_size=test_size, random_state=seed)
LR0_model = LogisticRegression().fit(x0_train, y0_train)
print("Word2Vec, negative =",(4 + j),":")
y0_pred = LR0_model.predict(x0_test)
new_report(y0_test, y0_pred)
```
The best result comes at negative = 7, so we choose that as our last parameter.
### Result
* The best combination of parameters:
* size = 350, window = 5, alpha = 0.04, iter = 14, batch_words = 10000, negative = 7
* The results:
* Accuracy: 0.90626 Precision: 0.90431 Recall: 0.90778 F-1: 0.90604
## Choosing the parameter of Classifier
***
* We use logistic regression, random forest and xgboost to find the best result.
* As usual, we hide the unnecessary and unchangeable parameters in the description of model.
* We use grid search to help us find the best result.
* To be more specific, we show only the results for the best parameter set.
```
# The grid_search function
def grid_search(model, X, Y):
grid_search = GridSearchCV(model, param, scoring="accuracy", n_jobs = 1)
grid_result = grid_search.fit(X, Y)
print("Best Accuracy:", grid_result.best_score_)
print("Parameter set:", grid_result.best_params_)
model.set_params(**grid_result.best_params_)
model.fit(X, Y)
y0_pred = model.predict(x0_test)
new_report(y0_test, y0_pred)
# Use the best result from Word2Vec
word_model = Word2Vec(cleaned, size = 350, window = 5, min_count = 1, alpha = 0.04,
iter = 14, batch_words = 10000, workers= 4, negative = 7)
word_vectors = np.zeros((len(df), 350))
word_vectors = vectors_build(word_vectors, cleaned, word_model)
x0_train, x0_test, y0_train, y0_test = train_test_split(word_vectors, y, test_size=test_size, random_state=seed)
```
### LogisticRegression
**Positive**
* Computation is fast and uses little memory.
**Negative**
* Not good at handling a large number of variables.
* Tends to underfit.
```
from sklearn.model_selection import GridSearchCV
param = {
'C': [1.0, 1.5, 2.0],
'tol':[0.0001, 0.0002, 0.0003],
'penalty': ['l1', 'l2'],
'intercept_scaling': [1,2,3]
}
grid_search(LogisticRegression(random_state = 2),x0_train,y0_train)
```
### Result
* The best combination of parameters:
* 'C': 2.0, 'intercept_scaling': 3, 'penalty': 'l2', 'tol': 0.0001
* The results:
* Accuracy: 0.91470 Precision: 0.90905 Recall: 0.91682 F-1: 0.91292
### Random Forest
**Positive**
* Easy to handle high-dimensional data.
* The results are smoother.
**Negative**
* Performs well only when the number of trees is big enough.
* Difficult to handle imbalanced datasets.
```
from sklearn.ensemble import RandomForestClassifier
param = {
'n_estimators': [10, 25, 50],
'min_samples_split':[2, 3, 4],
'verbose': [0, 3],
'min_weight_fraction_leaf': [0.0, 0.1]
}
grid_search(RandomForestClassifier(random_state = 2),x0_train,y0_train)
RF0_model = RandomForestClassifier(n_estimators=100, criterion='gini', max_depth=None, min_samples_split=4,
min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto',
max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None,
bootstrap=True, oob_score=False, n_jobs=1, random_state=None, verbose=3,
warm_start=False, class_weight=None)
RF0_model = RF0_model.fit(x0_train, y0_train)
print("Word2Vec + RF:",RF0_model.score(x0_test, y0_test))
y0_pred = RF0_model.predict(x0_test)
new_report(y0_test, y0_pred)
```
### Result
* The best combination of parameters:
* 'min_samples_split': 3, 'min_weight_fraction_leaf': 0.0, 'n_estimators': 100, 'verbose': 0
* The results:
* Accuracy: 0.89240 Precision: 0.91046 Recall: 0.86936 F-1: 0.88943
### Xgboost
**Positive**
* A regularization term is added to the algorithm, decreasing complexity and making it faster.
* The regularization term also decreases the variance.
**Negative**
* Training time.
**Note**
* Because of the time cost, we did not use grid search here.
```
from xgboost import XGBClassifier
XG0_model = XGBClassifier(max_depth=7, learning_rate=0.2,
n_estimators=20, silent=True,
objective='binary:logistic', nthread=-1,
gamma=0, min_child_weight=1, max_delta_step=0,
subsample=1, colsample_bytree=1,
colsample_bylevel=1, reg_alpha=0,
reg_lambda=1, scale_pos_weight=1,
base_score=0.5, seed=0, missing=None)
XG0_model = XG0_model.fit(x0_train, y0_train)
print("Word2Vec + XG:",XG0_model.score(x0_test, y0_test))
y0_pred = XG0_model.predict(x0_test)
print(classification_report(y0_test, y0_pred, target_names=target_names))
from xgboost import XGBClassifier
XG0_model = XGBClassifier(max_depth=6, learning_rate=0.2,
n_estimators=20, silent=True,
objective='binary:logistic', nthread=-1,
gamma=0, min_child_weight=1, max_delta_step=0,
subsample=1, colsample_bytree=1,
colsample_bylevel=1, reg_alpha=0,
reg_lambda=1, scale_pos_weight=1,
base_score=0.5, seed=0, missing=None)
XG0_model = XG0_model.fit(x0_train, y0_train)
print("Word2Vec + XG:",XG0_model.score(x0_test, y0_test))
y0_pred = XG0_model.predict(x0_test)
print(classification_report(y0_test, y0_pred, target_names=target_names))
from xgboost import XGBClassifier
XG0_model = XGBClassifier(max_depth=6, learning_rate=0.3,
n_estimators=200, silent=True,
objective='binary:logistic', nthread=-1,
gamma=0, min_child_weight=2, max_delta_step=0,
subsample=1, colsample_bytree=1,
colsample_bylevel=1, reg_alpha=0,
reg_lambda=3, scale_pos_weight=1,
base_score=0.5, seed=0, missing=None)
XG0_model = XG0_model.fit(x0_train, y0_train)
print("Word2Vec + XG:",XG0_model.score(x0_test, y0_test))
y0_pred = XG0_model.predict(x0_test)
new_report(y0_test, y0_pred)
```
### Result
* The best combination of parameters:
* max_depth=6, learning_rate=0.3, n_estimators=250
* The results:
* Accuracy: 0.90866 Precision: 0.91504 Recall: 0.90010 F-1: 0.90751
## Conclusion
***
* According to the results, the best classifier for Word2Vec features is logistic regression, with over 91% accuracy.
* We cannot be sure that the LR classifier is the best, given the comparison with the previous results.
* LR is not highly stable; XGBoost is more stable when we rerun the test several times.
* Grid search is great, but with many parameters it may not be as efficient.
* The scoring metric used in the grid search is very important.
Get the data from the local database
```
# %%time
import pandas as pd
import psycopg2
import os
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# read data from local psql database into pd dataframe
try:
conn = psycopg2.connect(database='parcelDatabase', user=os.getenv(
"USER"), password=os.getenv("PASSWORD"))
print("successfully connected to database")
except:
print("I am unable to connect to the database")
df = pd.read_sql_query('select * from "svr_table_2"', con=conn)
# print(df.head())
print(df.iloc[0].situszip5)
# Drop irrelevant columns and columns related to building/development on the land.
# 'usecode' is a relevant column, but we drop it because we cannot parse it yet.
drop_cols = ['istaxableparcel', 'usecodedescchar1', 'usecodedescchar2', 'yearbuilt', 'effectiveyearbuilt', 'usecode']
df = df.drop(columns=drop_cols)
# convert columns of df to numeric
numeric_cols = ["roll_landbaseyear", "taxratearea",
"center_lat", "cluster", "situszip5",
"center_lon", "roll_landvalue", "sqftmain"]
df[numeric_cols] = df[numeric_cols].apply(pd.to_numeric)
# basic visualization of the distribution of the land base year
df.hist(column="roll_landbaseyear", range=[1970, 2025], bins=51)
df_updated_2020 = df[df["roll_landbaseyear"] == 2020]
df_updated_2021 = df[df["roll_landbaseyear"] == 2021]
print("# of parcels updated in 2020: ", len(df_updated_2020))
print("# of parcels updated in 2021: ", len(df_updated_2021))
print("total # of parcels in table: ", len(df))
!pip list
# !pip install sklearn_pandas
!pip3 install -U sklearn_pandas
```
based on this reference: https://www.kaggle.com/gauthampughazh/house-sales-price-prediction-svr/notebook
```
%%time
import plotly.graph_objects as go
import plotly.express as px
import seaborn as sns
from sklearn.impute import SimpleImputer
from sklearn_pandas import CategoricalImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor, GradientBoostingRegressor
from IPython.display import FileLink
import sys
sys.executable
# split data into test and train
train_df, test_df = train_test_split(df, test_size=0.2)
# get correlation matrix
corr_matrix = train_df.corr()
fig, ax = plt.subplots(figsize=(15, 12))
sns.heatmap(corr_matrix, vmax=0.8, square=True)
plt.show()
```
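The imports above include `SVR` and `RandomizedSearchCV`, which are not yet used in the cells shown; a hedged sketch of how they might be wired together (the synthetic features, column choices, and parameter ranges here are assumptions, not the project's actual setup):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import RandomizedSearchCV

# Synthetic stand-in for parcel features/target (assumption for illustration)
rng = np.random.RandomState(0)
X = rng.rand(200, 3)                  # e.g. center_lat, center_lon, sqftmain
y = 1000 * X[:, 0] + rng.randn(200)   # e.g. roll_landvalue

# Scale features before SVR, then randomly search a small parameter grid
pipe = make_pipeline(StandardScaler(), SVR())
search = RandomizedSearchCV(
    pipe,
    param_distributions={
        'svr__C': [0.1, 1, 10, 100],
        'svr__gamma': ['scale', 0.01, 0.1, 1],
    },
    n_iter=5,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```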
# Graph Coloring with Gaussian Boson Sampling
Here we demonstrate some graph colorings aided by GBS on Xanadu's simulator
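The subgraph search below ranks candidate node sets by subgraph density, $d(G[S]) = 2|E_S| / (|S|(|S|-1))$; a minimal check of that quantity with NetworkX (assuming it is installed):

```python
import networkx as nx

# nx.density on an induced subgraph computes 2*|E_S| / (|S| * (|S| - 1))
G = nx.complete_graph(5)                  # K5: every induced subgraph is complete
print(nx.density(G.subgraph([0, 1, 2])))  # 1.0 for a triangle

H = nx.path_graph(4)                      # path 0-1-2-3
print(nx.density(H.subgraph([0, 1, 3])))  # one edge among 3 nodes -> 1/3
```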
### Copy and paste some useful unreleased Xanadu functions for finding subgraphs
```
# Copyright 2019 Xanadu Quantum Technologies Inc.
r"""
Dense subgraph identification
=============================
**Module name:** :mod:`gbsapps.graph.dense_subgraph`
.. currentmodule:: gbsapps.graph.dense_subgraph
The frontend module for users to find dense subgraphs. The :func:`find_dense_subgraph` function
provides approximate solutions to the densest-:math:`k` subgraph problem
:cite:`arrazola2018using`, which is NP-hard. This problem considers a graph :math:`G = (V,
E)` of :math:`N` nodes :math:`V` and a list of edges :math:`E`, and sets the objective of finding a
:math:`k`-vertex subgraph with the greatest density. In this setting, subgraphs :math:`G[S]` are
defined by nodes :math:`S \subset V` and corresponding edges :math:`E_{S} \subseteq E` that
have both endpoints in :math:`S`. The density of a subgraph is given by
.. math:: d(G[S]) = \frac{2 |E_{S}|}{|S|(|S|-1)},
where :math:`|\cdot|` denotes cardinality, and the densest-:math:`k` subgraph problem can be
written
succinctly as
.. math:: {\rm argmax}_{S \in \mathcal{S}_{k}} d(G[S])
with :math:`\mathcal{S}_{k}` the set of all possible :math:`k` node subgraphs. This problem grows
combinatorially with :math:`{N \choose k}` and is NP-hard in the worst case.
The :func:`find_dense_subgraph` function provides access to heuristic algorithms for finding
approximate solutions. At present, random search is the heuristic algorithm provided, accessible
through the :func:`random_search` function. This algorithm proceeds by randomly generating a set
of :math:`k` vertex subgraphs and selecting the densest. Sampling of subgraphs can be achieved
both uniformly at random and also with a quantum sampler programmed to be biased toward
outputting dense subgraphs.
Summary
-------
.. autosummary::
METHOD_DICT
find_dense_subgraph
random_search
Code details
------------
"""
import functools
from typing import Callable, Union, Tuple
import networkx as nx
from gbsapps.graph.graph_sample import dense_subgraph_sampler_gbs, uniform_subgraph_sampler
from gbsapps.graph.functions import to_networkx_graph, graph_type
# pylint: disable=too-many-arguments
def find_dense_subgraph(
graph: graph_type,
number_of_nodes: int,
iterations: int = 1,
method: Union[str, Callable] = "random-search",
remote: bool = False,
backend: str = "gaussian",
) -> Tuple[float, list]:
"""Functionality for finding dense `node-induced subgraphs
<http://mathworld.wolfram.com/Vertex-InducedSubgraph.html>`__ of a given size.
The user is able to specify a stochastic algorithm as a method for optimization. Furthermore,
the form of randomness used (e.g., GBS sampling or uniform sampling) in the stochastic
algorithm can also be specified by setting a backend sampler. This sampling can be performed
locally or remotely.
Methods are set with the ``methods`` argument. The available methods are:
- ``"random-search"``: a simple random search algorithm where many subgraphs are selected and
the densest one is chosen (default).
Backends are set with the ``backend`` argument. They include the quantum backends
given in :meth:`gbsapps.sample.QUANTUM_BACKENDS` as well as a ``"uniform"`` sampler backend.
The available backends are:
- ``"gaussian"``: allowing subgraph samples to be simulated using the Gaussian backend of Strawberry Fields (default)
- ``"uniform"``: allowing subgraph samples to be generated with a uniform distribution using
the :func:`~gbsapps.graph.graph_sample.uniform_subgraph_sampler` function
Args:
graph (graph_type): the input graph
number_of_nodes (int): the size of desired dense subgraph
iterations (int): number of iterations to use in algorithm
method (str or function): either a string selecting from a range of available methods or a
customized callable function. Defaults to ``"random-search"``.
remote (bool): Performs sampling on a remote server if ``True``. Remote sampling is required
for sampling on hardware and is not available with the ``"uniform"`` backend. If not
specified, sampling will be performed
locally.
backend (str): requested backend for sampling; defaults to the ``"gaussian"`` backend
Returns:
(float, list): the density and list of nodes corresponding to the densest subgraph found
"""
if backend == "uniform":
sampler = uniform_subgraph_sampler
else:
sampler = functools.partial(dense_subgraph_sampler_gbs, remote=remote, backend=backend)
if not callable(method):
if method in METHOD_DICT:
method = METHOD_DICT[method]
else:
raise Exception("Optimization method must be callable or a valid string")
return method(to_networkx_graph(graph), number_of_nodes, sampler, iterations=iterations)
def random_search(
graph: nx.Graph, number_of_nodes: int, sampler: Callable, iterations: int = 1
) -> Tuple[float, list]:
"""Random search algorithm for finding dense subgraphs of a given size.
The algorithm proceeds by sampling subgraphs according to the function ``sampler``. The
sampled subgraphs are of size ``number_of_nodes``. The densest subgraph is then selected
among all the samples.
Args:
graph (nx.Graph): the input graph
number_of_nodes (int): the size of desired dense subgraph
sampler (function): a function which returns a given number of samples of subgraphs of a
given size. Must accept arguments in the form: ``(graph: nx.Graph, sampled_nodes: int,
samples: int = 1)``.
iterations (int): number of iterations to use in algorithm
Returns:
(float, list): the density and list of nodes corresponding to the densest subgraph found
"""
output_samples = sampler(graph, number_of_nodes, iterations)
density_and_samples = [(nx.density(graph.subgraph(s)), s) for s in output_samples]
return max(density_and_samples)
METHOD_DICT = {"random-search": random_search}
"""Dict[str, func]: Included methods for finding dense subgraphs. The dictionary keys are strings
describing the method, while the dictionary values are callable functions corresponding to the
method."""
# Copyright 2019 Xanadu Quantum Technologies Inc.
r"""
Graph functions
===============
**Module name:** :mod:`gbsapps.graph.functions`
.. currentmodule:: gbsapps.graph.functions
This module provides some ancillary functions for dealing with graphs. This includes
:func:`is_adjacency` to check if an input matrix is symmetric and :func:`subgraph_adjacency` to
return the adjacency matrix of a subgraph when an input graph and subset of nodes is specified.
Furthermore, the frontend :func:`~gbsapps.graph.dense_subgraph.find_dense_subgraph` function
allows users to input graphs both as a `NumPy <https://www.numpy.org/>`__ array containing the
adjacency matrix and as a `NetworkX <https://networkx.github.io/>`__ ``Graph`` object. The
:func:`to_networkx_graph` function allows both inputs to be processed into a NetworkX Graph for
ease of processing in GBSApps.
Summary
-------
.. autosummary::
is_adjacency
subgraph_adjacency
to_networkx_graph
Code details
------------
"""
from typing import Union
import networkx as nx
import numpy as np
graph_type = Union[nx.Graph, np.ndarray]
def to_networkx_graph(graph: graph_type) -> nx.Graph:
"""Converts input graph into a NetworkX graph.
Given an input graph of type ``graph_type = Union[nx.Graph, np.ndarray]``, this function
outputs a NetworkX graph of type ``nx.Graph``. The input ``np.ndarray`` must be an adjacency
matrix (i.e., satisfy :func:`is_adjacency`) and also real.
Args:
graph (graph_type): input graph to be processed
Returns:
graph: the NetworkX graph corresponding to the input
"""
if isinstance(graph, np.ndarray):
if not is_adjacency(graph) or not np.allclose(graph, graph.conj()):
raise ValueError("Adjacency matrix must be real and symmetric")
graph = nx.Graph(graph)
elif not isinstance(graph, nx.Graph):
raise TypeError("Graph is not of valid type")
return graph
def is_adjacency(mat: np.ndarray) -> bool:
"""Checks if input is an adjacency matrix, i.e., symmetric.
Args:
mat (array): input matrix to be checked
Returns:
bool: returns ``True`` if input array is an adjacency matrix and ``False`` otherwise
"""
if not isinstance(mat, np.ndarray):
raise TypeError("Input matrix must be a numpy array")
dims = mat.shape
conditions = len(dims) == 2 and dims[0] == dims[1] and dims[0] > 1 and np.allclose(mat, mat.T)
return conditions
def subgraph_adjacency(graph: graph_type, nodes: list) -> np.ndarray:
"""Give adjacency matrix of a subgraph
Given a list of nodes selecting a subgraph, this function returns the corresponding adjacency
matrix.
Args:
graph (graph_type): the input graph
nodes (list): a list of nodes used to select the subgraph
Returns:
array: the adjacency matrix of the subgraph
"""
graph = to_networkx_graph(graph)
all_nodes = graph.nodes
if not set(nodes).issubset(all_nodes):
raise ValueError(
"Must input a list of subgraph nodes that is contained within the nodes of the input "
"graph"
)
return nx.to_numpy_array(graph.subgraph(nodes))
# TODO: Functionality for plotting graphs and subgraphs
```
### Import some useful packages
```
# useful additional packages
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
```
### Draw an example graph to color
We have experimented and found that GBS really helps with triangular tessellated graphs.
```
#G = nx.triangular_lattice_graph(1, 2, with_positions=False)
G = nx.Graph()
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0), (3, 4), (4, 2), (4, 5), (5, 3), (4, 6), (6, 2), (1, 7), (7, 0), (1, 8), (8, 2)]
nodes = list(range(4))
G.add_edges_from(edges)
G.add_nodes_from(nodes)
# Let's draw this thing
colors = ['beige' for node in G.nodes()]
pos = nx.spring_layout(G)
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
```
### Start defining some functions we will need in our workflow
```
def density_func(A):
dens = (np.sum(A)/2) / ((A.shape[0]*(A.shape[0]-1))/2)
return dens
# Test
'''
_, G_new = find_dense_subgraph(G, 4)
print(G_new)
A = subgraph_adjacency(G, G_new)
print(A)
dens = density_func(A)
dens
'''
def find_least_connected(A):
min_sum = float('inf')
for i in range(A.shape[0]):
if np.sum(A[i]) < min_sum:
min_sum = np.sum(A[i])
min_element = i
return min_element
# Test
#find_least_connected(A)
def remove_node(A, n, nodes):
A_new = np.delete(A, n, 1)
A_new = np.delete(A_new, n, 0)
nodes_new = np.delete(nodes,n)
return A_new, nodes_new
# Test
'''
A_new, nodes_new = remove_node(A, find_least_connected(A), G_new)
print(A_new)
print('')
print(nodes_new)
'''
def paint(A, colors, nodes):
totalpaint = colors[nodes[0]] + colors[nodes[1]] + colors[nodes[2]]
if totalpaint == 0:
colors[nodes[0]] = 1
colors[nodes[1]] = 2
colors[nodes[2]] = 3
elif totalpaint > 0 and totalpaint < 6:
if colors[nodes[0]] == 0:
colors[nodes[0]] = 6 - totalpaint
elif colors[nodes[1]] == 0:
colors[nodes[1]] = 6 - totalpaint
elif colors[nodes[2]] == 0:
colors[nodes[2]] = 6 - totalpaint
return colors
# Test
'''
colors = np.zeros(len(G))
colors_new = paint(A_new, colors, nodes_new)
print(colors_new)
'''
def colorcheck(colors):
everythingcolored = True
i = 0
while everythingcolored and i < len(colors):
everythingcolored = colors[i] != 0
i += 1
return everythingcolored
# Test
#colorcheck([1,1,1,0])
def shrink(dens, A, nodes,colors):
while density_func(A) < dens and len(A)>0:
least_node = find_least_connected(A)
A, nodes = remove_node(A, least_node, nodes)
if A.shape[0] > 2:
colors = paint(A, colors, nodes)
return colors
return colors
# Test
'''
dens = 1
colors_new = shrink(dens, A, nodes, colors)
print(colors_new)
'''
```
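As a quick sanity check of `density_func` above, the adjacency matrix of a complete graph should give a density of 1.0. This standalone sketch re-declares the function so it runs on its own:

```python
import numpy as np

def density_func(A):
    # fraction of possible edges present in an undirected simple graph
    return (np.sum(A) / 2) / ((A.shape[0] * (A.shape[0] - 1)) / 2)

# Complete graph on 3 nodes (a triangle): every possible edge exists.
A_triangle = np.ones((3, 3)) - np.eye(3)
print(density_func(A_triangle))  # 1.0

# A 3-node path (two edges out of three possible) has density 2/3.
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(density_func(A_path))
```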
### Get coloring!
This is where the magic happens. Using the GBS sampler, we obtain, with high probability, samples of dense subgraphs of our initial graph. We then feed these samples to a heuristic that determines the best coloring of the overall graph.
```
colors = np.zeros(len(G.nodes))
dens = 1
everythingcolored = False
while not everythingcolored:
_, G_new = find_dense_subgraph(G, 4)
A = subgraph_adjacency(G, G_new)
colors = shrink(dens, A, G_new, colors)
print(colors)
everythingcolored = colorcheck(colors)
colors_final = colors
```
### Now we color our graph using the coloring we found:
```
# Let's draw this thing
colors_new = []
for color in colors:
if int(color) == 1:
colors_new.append('r')
elif int(color) == 2:
colors_new.append('b')
elif int(color) == 3:
colors_new.append('g')
pos = nx.spring_layout(G)
default_axes = plt.axes(frameon=True)
nx.draw_networkx(G, node_color=colors_new, node_size=600, alpha=.8, ax=default_axes, pos=pos)
```
### Maybe we can try some other ones...
```
m = 3
n = 3
g = nx.triangular_lattice_graph(3, 3)
nodes_new = list(range(len(g.nodes())))
print(g.edges())
# NOTE: a relabeling loop was started here but left unfinished in the
# original notebook; we simply draw the lattice graph as-is below.
pos = nx.spring_layout(g)
default_axes = plt.axes(frameon=True)
colors = ['y' for node in g.nodes()]
nx.draw_networkx(g, node_color=colors, node_size=600, alpha=.8, ax=default_axes, pos=pos)
```
# XGBoost: Principles and Applications
Author: Yang Daichuan
Date: November 2019
github: https://github.com/DrDavidS/basic_Machine_Learning
License: [MIT](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/LICENSE)
References:
- [XGBoost PPT](https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf)
- [XGBoost: A Scalable Tree Boosting System](https://arxiv.org/abs/1603.02754)
- [Introduction to Boosted Trees](https://xgboost.readthedocs.io/en/latest/tutorials/model.html)
- [XGBoost原理](https://www.zhihu.com/question/58883125/answer/206813653)
- [XGBoost原理及目标函数推导详解](https://blog.csdn.net/htbeker/article/details/91517805)
## XGBoost
In [2.10 Boosting Methods](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/02机器学习基础/2.10%20提升方法.ipynb) we briefly mentioned that, beyond scikit-learn, there are many excellent frameworks derived from boosted-tree methods, including XGBoost and LightGBM, and their use in the winning solutions of open competitions has demonstrated these frameworks' strong performance in practice.
This section therefore gives a brief introduction to the principles and usage of XGBoost.
### Overview
**XGBoost**, short for "eXtreme Gradient Boosting", is a decision-tree-based ensemble machine learning algorithm that uses a gradient boosting framework and is suited to both classification and regression problems. XGBoost shines in competitions on platforms such as Kaggle, Tianchi, and DataCastle, and can fairly be called an essential weapon for every competitor. The XGBoost project homepage is [here](https://xgboost.ai).
XGBoost was originally developed by Tianqi Chen. Chen received his bachelor's degree from the ACM class at Shanghai Jiao Tong University and his PhD from the Computer Science department of the University of Washington, where his research focused on large-scale machine learning; in 2020 he joined CMU as an assistant professor. If you are interested, see his essay [Tianqi Chen: Ten Years of Machine Learning Research](http://www.sohu.com/a/328234576_129720), this [interview](https://cosx.org/2015/06/interview-of-tianqi), and his [Zhihu profile](https://www.zhihu.com/people/crowowrk/activities).
The following figure shows the evolution from plain decision trees to the XGBoost algorithm:

If the image does not load, see [here](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/back_up_images/xgboost发展历程.jpeg).
The [XGBoost.ai](https://xgboost.ai) website offers an accessible explanation of how XGBoost works.
A brief summary follows; some of the background has been discussed before, so treat it as review. The notation below follows the website and the paper, so it may differ slightly from the earlier GBDT introduction.
### The objective function: training loss + regularization
The goal of training is to find the parameters $\theta$ that best fit the training data $x_i$ and labels $y_i$. To this end, XGBoost defines an **objective function** that tells the model how to fit the training data.
The objective function consists of two parts: the **training loss** and a **regularization term**.
$$\large {\rm obj}(\theta)=L(\theta)+\Omega(\theta)$$
where $L$ is the training loss and $\Omega$ is the regularization term.
A common choice for $L$ is the mean squared error:
$$\large L(\theta)=\sum_i(y_i-\hat y_i)^2$$
where $\hat y_i=\sum_j \theta_j x_{ij}$ is the model's prediction.
Another common loss function is the **logistic loss**, the loss we covered for logistic regression:
$$\large L(\theta) = \sum_i \left[ y_i \ln(1+e^{-\hat y_i})+(1-y_i)\ln(1+e^{\hat y_i}) \right]$$
As for the regularization term $\Omega$, it controls model complexity so that the model does not overfit; this has been discussed many times in earlier chapters and is not repeated here.
### Tree ensembles
The XGBoost model is built on **decision tree ensembles** composed of a set of **classification and regression trees (CART)**. CART splits on the Gini index; for a detailed description of CART, see the decision tree chapter of *Statistical Learning Methods*.
Here is a simple example: suppose we use CART to predict whether someone likes computer games. We feed in the age, gender, occupation, and other attributes of a family's members and obtain a decision tree:

If the image does not load, see [CART](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/back_up_images/cart.png).
We partition the family members into different leaves, then assign each leaf a score.
>Where does the **score** come from?
>
>Because we use CART here, think of it as a regression tree rather than a classification tree: the leaves hold regression predictions, not class predictions.
>
>Using real-valued scores on the leaves gives a richer interpretation than pure classification (for example, class probabilities) and also makes optimization easier, as shown later.
As mentioned before, a single decision tree is not powerful enough. Random forests, AdaBoost, and GBDT are all ensemble models: we combine the predictions of several **weak learners** (base learners) into a single **strong learner**; this is the **ensemble model**.
Here, the way we combine trees is simply to add up their scores, as shown:

If the image does not load, see [two CART](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/back_up_images/twocart.png).
Taking the two CART trees above as an example, the ensemble simply adds the two trees' leaf scores for each instance. This example also conveys an important point: the two CARTs complement each other. Mathematically:
$$\large \hat y_i=\sum^K_{k=1}f_k(x_i),\quad f_k\in \Bbb F$$
where $K$ is the number of trees, $f$ is the function of a single CART tree belonging to the function space $\Bbb F$, and $\Bbb F$ is the set of all possible CARTs.
The objective function to optimize was given at the start; here we refine it slightly:
$$\large {\rm obj}(\theta)=\sum_i^n l(y_i,\hat y_i) + \sum^K_{k=1}\Omega(f_k)$$
The left term $\sum_i^n l(y_i,\hat y_i)$ is the loss function; the right term $\sum^K_{k=1}\Omega(f_k)$ is the regularization.
Looking back for a moment: the *model* used in random forests is also a tree ensemble, so boosted trees and random forests share the same core model; they differ only slightly in how training proceeds. This was also noted when we discussed boosting algorithms.
### Tree boosting
How do we train the model? You surely know the drill by now: *define the objective function, then optimize it*!
Take the following as the objective function. Note that it always includes the regularization term, whereas the AdaBoost and GBDT covered in Section 2.10 did not involve regularization.
$$\large {\rm obj}= \sum_{i=1}^n l(y_i,\hat y_i^{(t)}) + \sum^t_{k=1}\Omega(f_k)$$
#### Additive Training
First question: what are the **parameters** of the trees? What we need to learn are the functions $f_i$, each of which contains a tree structure and its leaf scores.
Training the trees uses an additive strategy (the forward stagewise algorithm): as in the earlier boosting discussion, we adjust based on what has been learned so far and then add one more tree.
Define the prediction at step $t$ as $\large \hat y_i^{(t)}$; then:
$$
\large
\begin{equation}\begin{split}
\hat y_i^{(0)}&=0 \\
\hat y_i^{(1)}&=f_1(x_i)=\hat y_i^{(0)}+f_1(x_i)\\
\hat y_i^{(2)}&=f_1(x_i)+f_2(x_i)=\hat y_i^{(1)}+f_2(x_i)\\
\cdots\\
\hat y_i^{(t)}&=\sum^t_{k=1}f_k(x_i)=\hat y_i^{(t-1)}+f_t(x_i)
\end{split}\end{equation}
$$
One more question remains: which tree do we want at each step? Naturally, the one that improves our **objective function** the most.
At step $t$, the objective ${\rm obj}^{(t)}$ is:
$$
\large
\begin{equation}\begin{split}
{\rm obj}^{(t)} &= \sum_{i=1}^n l(y_i,\hat y_i^{(t)}) + \sum^t_{k=1}\Omega(f_k)\\
&= \sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t({x_i})) + \Omega(f_t)+\sum^{t-1}_{k=1}\Omega(f_k)\\
&= \sum_{i=1}^n l(y_i,\hat y_i^{(t-1)}+f_t({x_i})) + \Omega(f_t)+{\rm constant}
\end{split}\end{equation}
$$
> Here $\rm constant$ simply means a constant. Because of the forward stagewise algorithm, at step $t$ the trees from the first $t-1$ steps are already fixed, so their complexity is a constant.
>
> That is, $\large \sum^{t-1}_{k=1}\Omega(f_k)={\rm constant}$
If we use the **mean squared error (MSE)** as our loss function, i.e. substitute it for $l(y_i,\hat y_i^{(t)})$, the objective becomes:
$$
\large
\begin{equation}\begin{split}
{\rm obj}^{(t)} &= \sum_{i=1}^n \left(y_i - \left(\hat y_i^{(t-1)}+f_t(x_i)\right)\right)^2 + \sum^t_{k=1}\Omega(f_k)\\
&= \sum_{i=1}^n \left[ 2\left( \hat y_i^{(t-1)}-y_i \right)f_t(x_i)+f_t(x_i)^2 \right] + \Omega(f_t)+{\rm constant}
\end{split}\end{equation}
$$
> Note that $\large \hat y_i^{(t)}$ is the prediction of the ensemble consisting of the CART forest built up to step $t-1$ plus the CART tree generated at step $t$, i.e.:
>
> $$\large \hat y_i^{(t)} = \hat y_i^{(t-1)}+f_t(x_i)$$
>
>so the squared error is:
>
> $$\large \left(y_i-\hat y_i^{(t)}\right)^2$$
The MSE form is very convenient, with a first-order and a second-order term. With other losses such as the logistic loss, obtaining such a tidy form is harder. So, in the general case, we take the second-order **Taylor expansion** of the loss function:
$$\large {\rm obj}^{(t)}=\sum_{i=1}^n \left[l(y_i,\hat y_i^{(t-1)}) + g_if_t(x_i) + \frac{1}{2}h_if_t^2(x_i)\right] + \Omega(f_t)+{\rm constant}$$
where $g_i$ and $h_i$ are defined as:
$$\large g_i=\partial_{\hat y_i^{(t-1)}}l(y_i,\hat y_i^{(t-1)})$$
$$\large h_i=\partial^2_{\hat y_i^{(t-1)}}l(y_i,\hat y_i^{(t-1)})$$
i.e., the first and second partial derivatives of the loss function with respect to the prediction $\large \hat y_i^{(t-1)}$.
> The second-order Taylor approximation can be written as:
>
>$$\large f(x_0+\Delta x) \approx f(x_0)+f'(x_0)\Delta x+\frac{1}{2}f''(x_0)(\Delta x)^2$$
>
> Here the increment $\Delta x$ corresponds to adding one tree at step $t$ to the forest from step $t-1$, namely $f_t(x_i)$.
>
> As for why we take partial derivatives: $\hat y_i^{(t)}$ is what we optimize, not $y_i$; see the material on **gradient descent**.
Now we drop all the constant terms, since keeping constants during optimization serves no purpose; the objective at step $t$ becomes:
$$\large \sum^n_{i=1}\left[ g_if_t(x_i) + \frac{1}{2}h_if_t^2(x_i) \right] + \Omega(f_t)$$
This is our optimization target for the new tree.
An advantage of this formulation is that the value of the objective depends only on $g_i$ and $h_i$, the first and second derivatives of the loss. XGBoost can therefore support custom loss functions, as long as they are twice differentiable.
> A distinguishing feature of XGBoost is that it expands the loss to second order in the Taylor series, whereas GBDT uses only the first order; see the negative-gradient derivation in [2.10 Boosting Methods](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/%E6%9D%AD%E7%94%B5%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E8%AF%BE%E7%A8%8B%E5%8F%8A%E4%BB%A3%E7%A0%81/2.10%20%E6%8F%90%E5%8D%87%E6%96%B9%E6%B3%95.ipynb).
### Model complexity
We have described the training procedure, but not yet the all-important **regularization term**, so let us first define the tree complexity $\Omega(f)$.
First, refine the tree function $f(x)$ as:
$$\large f_t(x)=w_{q(x)},\quad w \in \Bbb R^T,\; q:\Bbb R^d \rightarrow \{1,2,\cdots,T\}$$
where $w$ is the vector of leaf scores, $q$ is the function that assigns each data instance to its leaf, and $T$ is the total number of leaves.
In other words, $q(x)$ outputs the index of the leaf an instance lands in, and $w_{q(x)}$ is that leaf's score.
In XGBoost, the complexity is defined as:
$$\large \Omega(f)=\gamma T+\frac{1}{2}\lambda\sum^T_{j=1}w^2_j$$
That is, the **number of leaves** $T$ plus the **L2 regularization of the leaf scores**, with $j$ indexing the leaves.
There are many ways to define model complexity, but this definition works well in practice. Many earlier decision-tree tools paid little attention to this part and left complexity control to heuristics; XGBoost gives it a formal definition, which lets us better understand the training process.
### The Structure Score
Having defined the model complexity, we can rewrite the model's objective function:
$$
\large
\begin{equation}\begin{split}
{\rm obj}^{(t)} &\approx \sum_{i=1}^n \left[ g_iw_{q(x_i)}+\frac{1}{2}h_iw^2_{q(x_i)} \right] + \gamma T+\frac{1}{2}\lambda\sum^T_{j=1}w^2_j \\
&= \sum_{j=1}^T \left[ \left( \sum_{i\in I_j}g_i \right)w_j + \frac{1}{2}\left( \sum_{i\in I_j}h_i + \lambda \right)w_j^2 \right] + \gamma T
\end{split}\end{equation}
$$
where $I_j=\{ i|q(x_i)=j \}$ is the set of indices of the samples $x_i$ assigned to leaf $j$; in other words, these are the leaf regions of the regression tree.
The second line regroups the sum over samples by leaf, since all samples on the same leaf share the same score.
Define:
$$\large G_j=\sum_{i\in I_j}g_i$$
$$\large H_j=\sum_{i\in I_j}h_i$$
so that we can further simplify the objective ${\rm obj}^{(t)}$:
$$\large {\rm obj}^{(t)}=\sum^T_{j=1}\left[ G_jw_j + \frac{1}{2}(H_j+\lambda)w_j^2 \right]+\gamma T$$
In this equation the $w_j$ are independent of one another, and $G_jw_j + \cfrac{1}{2}(H_j+\lambda)w_j^2$ is a quadratic function of $w_j$. For a given $q(x)$, differentiating with respect to $w_j$ yields the optimal value of $w_j$:
$$\large w_j^*=-\frac{G_j}{H_j+\lambda}$$
Substituting this back into ${\rm obj}^{(t)}$:
$$\large {\rm obj}^*=-\frac{1}{2}\sum^T_{j=1}\frac{G_j^2}{H_j+\lambda}+\gamma T$$
This yields the final objective ${\rm obj}^*$, also called the **scoring function**: it measures how good a tree structure $q(x)$ is, and the smaller the value, the better the structure. We use this scoring function to choose the best split points for a CART.

If the image does not load, see [struct_score](https://github.com/DrDavidS/basic_Machine_Learning/blob/master/back_up_images/struct_score.png).
As the figure shows, for a given tree structure we collect each instance's first- and second-order gradient statistics $g_i$ and $h_i$, place them on the corresponding leaves, sum the statistics on each leaf, and use the formula derived above to score the tree. This is somewhat like computing the Gini impurity of a CART decision tree, except that it additionally accounts for model complexity.
For the **Gini impurity**, see the discussion of CART in the decision tree chapter of *Statistical Learning Methods*.
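The closed-form leaf weight $w_j^*$ and score ${\rm obj}^*$ can be checked numerically. Below is a toy sketch (the data, the leaf assignment, and the $\lambda$, $\gamma$ values are all made up for illustration), using the squared loss $l=(y-\hat y)^2$, for which $g_i = 2(\hat y_i^{(t-1)} - y_i)$ and $h_i = 2$:

```python
import numpy as np

# Squared loss: g_i = 2*(yhat - y), h_i = 2 (toy data, made up for illustration).
y = np.array([1.0, 1.2, 0.8, 3.0])
yhat_prev = np.zeros(4)              # ensemble prediction after t-1 rounds
g = 2 * (yhat_prev - y)
h = np.full(4, 2.0)

# Candidate structure q(x): samples {0, 1, 2} on leaf 0, {3} on leaf 1.
leaves = [np.array([0, 1, 2]), np.array([3])]
lam, gamma = 1.0, 0.1

G = np.array([g[idx].sum() for idx in leaves])   # G_j
H = np.array([h[idx].sum() for idx in leaves])   # H_j
w_star = -G / (H + lam)                          # optimal leaf weights w_j*
obj_star = -0.5 * np.sum(G ** 2 / (H + lam)) + gamma * len(leaves)
print(w_star)    # leaf 0 pulls toward the mean of {1.0, 1.2, 0.8}, leaf 1 toward 3.0
print(obj_star)
```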
### Learning the tree structure
So far we have a way to measure the quality of a single decision tree. Ideally we would enumerate all possible trees and pick the best one, but in practice that is all but impossible, so instead we optimize the tree one level at a time.
Concretely, when we split a node into two child nodes, the gain is:
$$\large Gain=\frac{1}{2}\left[ \frac{G^2_L}{H_L+\lambda}+\frac{G^2_R}{H_R+\lambda}-\frac{(G_L+G_R)^2}{H_L+H_R+\lambda} \right]-\gamma$$
This formula reads as: the score of the new left leaf, plus the score of the new right leaf, minus the score of the original leaf, minus the regularization on the additional leaf.
An important consequence: if the gain of a split is smaller than $\gamma$, the branch should not be added; this is exactly the pruning technique of decision trees. The value of $\gamma$ is a hyperparameter set by hand.
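The gain formula can be sketched directly in code (all the statistics below are arbitrary numbers chosen for illustration); note how a homogeneous leaf yields a negative gain and would be pruned:

```python
def split_gain(G_L, H_L, G_R, H_R, lam, gamma):
    """Gain from splitting one leaf (G_L+G_R, H_L+H_R) into two children."""
    left = G_L ** 2 / (H_L + lam)
    right = G_R ** 2 / (H_R + lam)
    parent = (G_L + G_R) ** 2 / (H_L + H_R + lam)
    return 0.5 * (left + right - parent) - gamma

# Children whose gradients point in opposite directions split well...
print(split_gain(G_L=-4.0, H_L=4.0, G_R=4.0, H_R=4.0, lam=1.0, gamma=0.5))
# ...while a homogeneous leaf gains nothing and the split would be pruned.
print(split_gain(G_L=-4.0, H_L=4.0, G_R=-4.0, H_R=4.0, lam=1.0, gamma=0.5))
```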
We want to find the best split; to do so, we place all instances in sorted order, for example sorted by age, as in the figure:

We then scan candidate split points from left to right, computing the score of each resulting tree, which finds the best split efficiently.
> Limitations of the stagewise approach
>
> In a few edge cases the forward stagewise approach can fail; further reading: [Can Gradient Boosting Learn Simple Arithmetic?](http://mariofilho.com/can-gradient-boosting-learn-simple-arithmetic/)
### Closing remarks
We have now worked through the theory behind XGBoost, and that theory is implemented in the XGBoost package. Do give this excellent machine learning tool a try!
## Installing and using XGBoost
See [Get Started with XGBoost](https://xgboost.readthedocs.io/en/latest/get_started.html).
For example code, please consult the official documentation first; this notebook is to be extended.
# Climate CDS
The ERA5 database has been migrated from the ECMWF databases to the CDS databases. This means we need to migrate with it, and it is also the reason why the old program did not give the wanted results.
<hr>
Olivier den Ouden (<a href="mailto:ouden@knmi.nl">ouden@knmi.nl</a>)<br>
R&D Seismology and Acoustics @ KNMI (Royal Netherlands Meteorological Institute)<br>
### Install cds - key
Since the database has migrated, we have to move with it. This means we need a new API key to request the data; some software updates and script modifications are needed as well:
1. Create an account:
https://cds.climate.copernicus.eu/#!/home
Since ECMWF migrated the database, we need an account for the new CDS database.
2. Agree to the licence:
https://cds.climate.copernicus.eu/api-how-to
Link the account to the licence agreement.
3. Create the API key:
Log in and receive the API key. Store this key in the file $HOME/.cdsapirc (in your Unix/Linux environment).
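For reference, the resulting `$HOME/.cdsapirc` is a small plain-text file of roughly this shape (the UID and API key below are placeholders, not real credentials):

```
url: https://cds.climate.copernicus.eu/api/v2
key: <UID>:<API-KEY>
```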
## Install cds - API software
Besides a new API key, a software update is also required. To do so, run in a terminal:
<font color=red>pip install cdsapi</font>
## Download data
Now that everything is updated and the API key is stored, we can start changing the original script and get access to the CDS database!
### Variables
The variable names do not change; they are the same as before!
https://software.ecmwf.int/wiki/display/CKB/ERA5+data+documentation
|Name Parameter|Units|Short name|
|--|--|--|
|Sea temperature|K|sst|
|Air temperature (2m)|K|2t|
|Wind u speed (10m)|m s**-1|10u|
|Wind v speed (10m)|m s**-1|10v|
|Wind u speed (100m)|m s**-1|100u|
|Wind v speed (100m)|m s**-1|100v|
|Pressure|Pa|sp|
|Humidity|kg kg**-1|q|
### Product type
There are two types of product we can request: HRES (High RESolution) and EDA (Ensemble of Data Assimilations).
The difference shows up in the scripts (see below) through the stream and type of data you request:
HRES
stream = oper (operational)
type = an (analysis)
EDA
stream = enda (ensemble)
type = em (ensemble mean)
### Temporal resolution
The temporal resolution of HRES and EDA differs:
HRES = hourly data
EDA = 3hr data
### Spatial resolution
The spatial resolution differs as well:
HRES = 0.3/0.3 degrees
EDA = 0.68/0.68 degrees
## Code
I created two scripts for you.
The first script downloads an entire day of hourly HRES data (sp/2t/10u/10v).
The second script downloads an entire day of EDA data at 3-hour resolution.
```
# Modules to link the software to the API-key
import cdsapi
c = cdsapi.Client()
# HRES data download
c.retrieve('reanalysis-era5-single-levels',{
'product_type': 'reanalysis',
'format': 'netcdf',
'variable':[ '10m_u_component_of_wind',
'10m_v_component_of_wind',
'2m_temperature',
'surface_pressure'],
'year': '2015',
'month': '01',
'day': '01',
'time':[ '00:00','01:00','02:00',
'03:00','04:00','05:00',
'06:00','07:00','08:00',
'09:00','10:00','11:00',
'12:00','13:00','14:00',
'15:00','16:00','17:00',
'18:00','19:00','20:00',
'21:00','22:00','23:00']
},'HRES.nc')
# EDA data download
c.retrieve('reanalysis-era5-single-levels',{
'product_type': 'ensemble_mean',
'format': 'netcdf',
'variable':[ '10m_u_component_of_wind',
'10m_v_component_of_wind',
'2m_temperature',
'surface_pressure'],
'year': '2015',
'month': '01',
'day': '01',
'time':[ '00:00','03:00','06:00',
'09:00','12:00','15:00',
'18:00','21:00']
},'EDA.nc')
```
```
from glob import iglob
import os
import pandas as pd
import screed
import seaborn as sns
from tqdm import tqdm
```
# Change to Quest for Orthologs 2019 data directory
```
cd ~/data_sm/kmer-hashing/quest-for-orthologs/data/2019/
ls -lha
ls Eukaryota/
```
# Download orthology and transcription factor data
## Read orthologous transcription factors
```
tfs_original = pd.read_csv('opisthokont_not_human_transcription_factors_ensembl_compara.csv')
print(tfs_original.shape)
tfs_original.head()
```
## Read random subset of TFs
```
tfs_random_subset = pd.read_csv('human_transcription_factors_with_uniprot_ids_random_subset100.csv')
print(tfs_random_subset.shape)
tfs_random_subset.head()
```
## Subset TF orthology to the random subset
```
tfs = tfs_original.query('source__id in @tfs_random_subset.db_id')
print(tfs.shape)
tfs.head()
```
# Go to Quest for Orthologs fastas
## Read species metadata
```
species_metadata = pd.read_csv("species_metadata.csv")
print(species_metadata.shape)
species_metadata.head()
```
### Subset to opisthokonts
```
# Estimated opisthokonta divergence time from http://timetree.org/
t = 1105
opisthokonts = species_metadata.query('divergence_from_human_mya <= @t')
print(opisthokonts.shape)
opisthokonts.head()
opisthokonts.query('scientific_name == "Homo sapiens"')
```
## Read Gene Accession file
```
Gene mapping files (*.gene2acc)
===============================
Column 1 is a unique gene symbol that is chosen with the following order of
preference from the annotation found in:
1) Model Organism Database (MOD)
2) Ensembl or Ensembl Genomes database
3) UniProt Ordered Locus Name (OLN)
4) UniProt Open Reading Frame (ORF)
5) UniProt Gene Name
A dash symbol ('-') is used when the gene encoding a protein is unknown.
Column 2 is the UniProtKB accession or isoform identifier for the given gene
symbol. This column may have redundancy when two or more genes have identical
translations.
Column 3 is the gene symbol of the canonical accession used to represent the
respective gene group and the first row of the sequence is the canonical one.
```
```
def read_gene2acc(gene2acc, names=['maybe_ensembl_id', 'uniprot_id', 'canonical_accession']):
df = pd.read_csv(gene2acc, sep='\t', header=None, na_values='-', names=names)
return df
gene2acc = read_gene2acc('Eukaryota/UP000005640_9606.gene2acc')
# gene2acc = pd.read_csv('Eukaryota/UP000005640_9606.gene2acc', sep='\t', header=None, na_values='-', names=columns)
print(gene2acc.shape)
gene2acc.head()
gene2acc.dropna()
```
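To see what `read_gene2acc` produces without downloading the real file, here is a self-contained sketch on a made-up two-row snippet (the gene symbols and accessions are invented):

```python
import io
import pandas as pd

def read_gene2acc(gene2acc, names=['maybe_ensembl_id', 'uniprot_id', 'canonical_accession']):
    # '-' marks an unknown gene symbol, so treat it as NaN
    return pd.read_csv(gene2acc, sep='\t', header=None, na_values='-', names=names)

fake = io.StringIO("TP53\tP04637\tTP53\n-\tQ9XYZ1\tGENE2\n")
df = read_gene2acc(fake)
print(df)
print(df.dropna())  # drops the row whose gene symbol was '-'
```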
## Read ID mapping file
```
Database mapping files (*.idmapping)
====================================
These files contain mappings from UniProtKB to other databases for each
reference proteome.
The format consists of three tab-separated columns:
1. UniProtKB accession
2. ID_type:
Database name as shown in UniProtKB cross-references and supported by the ID
mapping tool on the UniProt web site (http://www.uniprot.org/mapping)
3. ID:
Identifier in the cross-referenced database.
```
```
opisthokonts.head()
opisthokonts.query('proteome_id == "UP000000437"')
```
# Get ID Mapping for uniprot ids from ENSMBL
```
dfs = []
for filename in tqdm(sorted(iglob("Eukaryota/*.idmapping"))):
# print(filename)
basename = os.path.basename(filename)
prefix = basename.split('.')[0]
species_id, taxa_id = prefix.split("_")
# print(f"{species_id=} {taxa_id=}")
if species_id in opisthokonts.proteome_id.values:
df = pd.read_csv(filename, sep='\t', header=None, names=['uniprot_id', 'id_type', 'db_id'])
df['species_id'] = species_id
        df['taxa_id'] = taxa_id
# Use only Ensembl data
# df = df.query('id_type == "Ensembl"')
print(df.shape)
dfs.append(df)
id_mapping = pd.concat(dfs, ignore_index=True)
print(id_mapping.shape)
id_mapping.head()
```
# Merge id mapping with ensembl compara tfs
```
tfs.tail()
id_mapping_for_merging = id_mapping.copy()
id_mapping_for_merging.columns = "target__" + id_mapping_for_merging.columns
id_mapping_for_merging.head()
tfs.shape
%%time
tfs_uniprot_merge_proteins = tfs.merge(id_mapping_for_merging, left_on='target__protein_id', right_on='target__db_id')
print(tfs_uniprot_merge_proteins.shape)
tfs_uniprot_merge_proteins.head()
%%time
tfs_uniprot_merge_genes = tfs.merge(id_mapping_for_merging, left_on='target__id', right_on='target__db_id')
print(tfs_uniprot_merge_genes.shape)
tfs_uniprot_merge_genes.head()
```
## Add human uniprot ids
```
human_id_mapping = pd.read_csv('Eukaryota/UP000005640_9606.idmapping', sep='\t', header=None, names=['uniprot_id', 'id_type', 'db_id'])
human_id_mapping.columns = 'source__' + human_id_mapping.columns
print(human_id_mapping.shape)
human_id_mapping.head()
%%time
tfs_uniprot_merge_proteins_with_human = tfs_uniprot_merge_proteins.merge(
human_id_mapping, left_on='source__protein_id', right_on='source__db_id', how='outer')
tfs_uniprot_merge_proteins_with_human.columns = tfs_uniprot_merge_proteins_with_human.columns.str.replace("source__", 'human__')
print(tfs_uniprot_merge_proteins_with_human.shape)
tfs_uniprot_merge_proteins_with_human.query('type == "ortholog_one2one"').head()
tfs_uniprot_merge_proteins_with_human.type.value_counts()
```
## Write merged TFs to disk
```
pwd
%time tfs_uniprot_merge_proteins_with_human.to_csv('opisthokont_not_human_transcription_factors_ensembl_compara_merged_uniprot_random_subset100.csv.gz', index=False)
%time tfs_uniprot_merge_proteins_with_human.to_parquet('opisthokont_not_human_transcription_factors_ensembl_compara_merged_uniprot_random_subset100.parquet', index=False)
```
## Make a set variable for quick membership evaluation
```
tf_orthologs = set(tfs_uniprot_merge_proteins_with_human.target__uniprot_id)
```
### Prove that the set `tf_orthologs` is faster
```
%timeit 'Q7Z761' in tf_orthologs
%timeit 'Q7Z761' in tfs_uniprot_merge_proteins_with_human.target__uniprot_id
```
#### Yep, sets are 3 orders of magnitude faster!
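One caveat about the timing above: `in` applied to a pandas `Series` checks the *index* labels, not the values, so the slow branch of the comparison is not actually testing value membership. A small sketch of the difference:

```python
import pandas as pd

s = pd.Series(['Q7Z761', 'P04637'])
print('Q7Z761' in s)         # False -- 'in' looks at the index (0, 1)
print(0 in s)                # True  -- 0 is an index label
print('Q7Z761' in set(s))    # True  -- value membership, and O(1) lookups
```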
# Read non-human proteins and subset if they are an ortholog of a TF
## Make outdir
```
ls Eukaryota
```
## How much compute is this?
Number of human transcription factor proteins in the quest for orthologs database
```
n_human_qfo_tfs = 5749
tfs_uniprot_merge_proteins_with_human.head()
tfs_uniprot_merge_proteins_with_human.query('target__species != "homo_sapiens"').target__uniprot_id.nunique()
n_not_human_qfo_tfs = 1045
n_human_qfo_tfs * n_not_human_qfo_tfs * 0.0006 / 60 / 60
```
## Read in protein fastas with screed
```
not_human_outdir = 'Eukaryota/not-human-transcription-factor-fastas-random-subset100/'
! mkdir $not_human_outdir
for filename in iglob('Eukaryota/not-human-protein-fastas/*.fasta'):
tf_records = []
basename = os.path.basename(filename)
with screed.open(filename) as records:
for record in records:
name = record['name']
record_id = name.split()[0]
uniprot_id = record_id.split('|')[1]
if uniprot_id in tf_orthologs:
tf_records.append(record)
if len(tf_records) > 0:
print(filename)
print(f"\tlen(tf_records): {len(tf_records)}")
with open(f'{not_human_outdir}/{basename}', 'w') as f:
for record in tf_records:
f.write(">{name}\n{sequence}\n".format(**record))
```
# Script to run
```
%%file qfo_human_vs_opisthokont_tfs.sh
#!/bin/bash
OUTDIR=$HOME/data_sm/kmer-hashing/quest-for-orthologs/analysis/2019/transcription-factors-random-subset100/
mkdir -p $OUTDIR/intermediates
cd $OUTDIR/intermediates
PARQUET=$OUTDIR/qfo-eukaryota-tfs-protein.parquet
EUKARYOTA=/mnt/data_sm/olga/kmer-hashing/quest-for-orthologs/data/2019/Eukaryota
HUMAN=$EUKARYOTA/human-transcription-factor-fastas-random-subset100/human_transcription_factor_proteins.fasta
NOT_HUMAN=$EUKARYOTA/not-human-transcription-factor-fastas-random-subset100/
conda activate khtools--encodings--compare-cli
time khtools compare-kmers \
--processes 120 \
--ksize-min 3 \
--ksize-max 45 \
--parquet $PARQUET \
--intermediate-parquet \
--fastas2 $HUMAN \
$NOT_HUMAN/* | tee khtools_compare-kmers.log
pwd
```
## Time estimation
taking ~1000 seconds per non-human sequence
```
n_not_human_qfo_tfs
n_not_human_qfo_tfs * 1000 / 120 / 60 / 60 / 24
```
Okay, so this will take ~4.7 days to compute running on `lrrr`
Acoustic system calibration
===========================
Since the calibration measurements may be dealing with very small values, there's potential for running into the limitations of <a href="https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html">floating-point arithmetic</a>. When implementing the computational algorithms, using dB is recommended to avoid floating-point errors.
Throughout this description, we express sensitivity (e.g. of the microphone or speaker) in units of $\frac{V}{Pa}$ (which is commonly used throughout the technical literature) rather than the notation used in the EPL cochlear function test suite which are $\frac{Pa}{V}$. Sensitivity in the context of microphones is the voltage generated by the microphone in response to a given pressure. In the context of speakers, sensitivity is the output, in Pa, produced by a given voltage. We assume that the sensitivity of the calibration microphone is uniform across all frequencies (and it generally is if you spend enough money on the microphone). Sometimes you may wish to use a cheaper microphone to record audio during experiments. Since this microphone is cheap, sensitivity will vary as a function of frequency.
```
%matplotlib inline
from scipy import signal
from scipy import integrate
import pylab as pl
import numpy as np
```
Calculating the frequency response
----------------------------------
Using a Hamming window for the signal is strongly recommended. The only exception is when measuring the sensitivity of the calibration microphone using a standard (e.g. a pistonphone that generates 114 dB SPL at 1 kHz); for such a single-tone calibration, a flattop window is best.
Speaker output
--------------
Output of speaker in Pa, $O(\omega)$, can be measured by playing a signal with known RMS voltage, $V_{speaker}(\omega)$ and measuring the voltage of a calibration microphone, $V_{cal}(\omega)$, with a known sensitivity, $S_{cal} = \frac{V_{rms}}{Pa}$.
$O(\omega) = \frac{V_{cal}(\omega)}{S_{cal}}$
Alternatively, the output can be specified in dB
$O_{dB}(\omega) = 20 \times log_{10}(\frac{V_{cal}(\omega)}{S_{cal}})$
$O_{dB}(\omega) = 20 \times log_{10}(V_{cal}(\omega))-20 \times log_{10}(S_{cal})$
Experiment microphone sensitivity
--------------------------------------
If we wish to calibrate an experiment microphone, we will record the voltage, $V_{exp}(\omega)$, at the same time we measure the speaker's output in the previous exercise. Using the known output of the speaker, we can then determine the experiment microphone sensitivity, $S_{exp}(\omega)$.
$S_{exp}(\omega) = \frac{V_{exp}(\omega)}{O(\omega)}$
$S_{exp}(\omega) = \frac{V_{exp}(\omega)}{\frac{V_{cal}(\omega)}{S_{cal}}}$
$S_{exp}(\omega) = \frac{V_{exp}(\omega) \times S_{cal}}{V_{cal}(\omega)}$
The resulting sensitivity is in $\frac{V}{Pa}$. Alternatively the sensitivity can be expressed in dB, which gives us sensitivity as dB re Pa.
$S_{exp_{dB}}(\omega) = 20 \times log_{10}(V_{exp})+20 \times log_{10}(S_{cal})-20 \times log_{10}(V_{cal})$
In-ear speaker calibration
--------------------------
Since the acoustics of the system will change once the experiment microphone is inserted in the ear (e.g. the ear canal acts as a compliance which alters the harmonics of the system), we need to recalibrate each time we reposition the experiment microphone while it's in the ear of an animal. We need to compute the speaker transfer function, $S_{s}(\omega)$, in units of $\frac{V_{rms}}{Pa}$ which will be used to compute the actual voltage needed to drive the speaker at a given level. To compute the calibration, we generate a stimulus via the digital to analog converter (DAC) with known frequency content, $V_{DAC}(\omega)$, in units of $V_{RMS}$.
The output of the speaker is measured using the experiment microphone (denoted below with the subscript $PT$, i.e., the probe-tube experiment microphone) and can be determined using the experiment microphone sensitivity
$O(\omega) = \frac{V_{PT}(\omega)}{S_{PT}(\omega)}$
The sensitivity of the speaker can then be calculated as
$S_{s}(\omega) = \frac{V_{DAC}(\omega)}{O(\omega)}$
$S_{s}(\omega) = \frac{V_{DAC}(\omega)}{\frac{V_{PT}(\omega)}{S_{PT}(\omega)}}$
$S_{s}(\omega) = \frac{V_{DAC}(\omega) \times S_{PT}(\omega)}{V_{PT}(\omega)}$
Alternatively, we can express the sensitivity as dB
$S_{s_{dB}}(\omega) = 20 \times log_{10}(V_{DAC}(\omega))+20 \times log_{10}(S_{PT}(\omega))-20 \times log_{10}(V_{PT}(\omega))$
$S_{s_{dB}}(\omega) = 20 \times log_{10}(V_{DAC}(\omega))+S_{PT_{dB}}(\omega)-20 \times log_{10}(V_{PT}(\omega))$
Generating a tone at a specific level
-------------------------------------
Given the speaker sensitivity, $S_{s}(\omega)$, we can compute the voltage at the DAC required to generate a tone at a specific amplitude in Pa, $O$.
$V_{DAC}(\omega) = S_{s}(\omega) \times O$
However, we generally prefer to express the amplitude in dB SPL.
$O_{dB SPL} = 20 \times log_{10}(\frac{O}{20 \times 10^{-6}})$
Solving for $O$.
$O = 10^{\frac{O_{dB SPL}}{20}} \times 20 \times 10^{-6}$
Substituting $O$.
$V_{DAC}(\omega) = S_{s}(\omega) \times 10^{\frac{O_{dB SPL}}{20}} \times 20 \times 10^{-6}$
Expressed in dB
$V_{DAC_{dB}}(\omega) = 20 \times log_{10}(S_{s}(\omega)) + 20 \times log_{10}(10^{\frac{O_{dB SPL}}{20}}) + 20 \times log_{10}(20 \times 10^{-6})$
$V_{DAC_{dB}}(\omega) = 20 \times log_{10}(S_{s}(\omega)) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})$
$V_{DAC_{dB}}(\omega) = S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})$
We can use the last equation to compute the voltage since it expresses the speaker calibration in units that we have calculated. However, we need to convert the voltage back to a linear scale.
$V_{DAC}(\omega) = 10^{\frac{S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})}{20}}$
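As a sketch (the sensitivity value is hypothetical), the linear route (Pa first, then volts) and the dB route from the last equation should give the same required DAC voltage:

```python
import numpy as np

S_s = 0.5           # hypothetical speaker sensitivity, Vrms per Pa
target_spl = 80.0   # desired level, dB SPL

# Linear route: Pa from dB SPL, then V = S_s * O
O = 10**(target_spl/20) * 20e-6    # 0.2 Pa
V_linear = S_s * O

# dB route (last equation above), converted back to linear
S_s_db = 20*np.log10(S_s)
V_db_route = 10**((S_s_db + target_spl + 20*np.log10(20e-6))/20)
print(V_linear, V_db_route)
```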
Estimating output at a specific $V_{rms}$
-----------------------------------------
Taking the equation above and solving for $O_{dB SPL}(\omega)$
$O_{dB SPL}(\omega) = 20 \times log_{10}(V_{DAC}) - S_{s_{dB}}(\omega) - 20 \times log_{10}(20 \times 10^{-6})$
Or, if we want to compute in Pa
$O(\omega) = \frac{V_{DAC}}{S_{s}(\omega)}$
Common calculations based on $S_{s_{dB}}(\omega)$ and $S_{PT_{dB}}(\omega)$
---------------------------------------------------------------------------
To estimate the voltage required at the DAC for a given dB SPL
$V_{DAC}(\omega) = 10^{\frac{S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})}{20}}$
To convert the microphone voltage measurement to dB SPL
$O_{dB SPL} = V_{PT_{dB}}(\omega) - S_{PT_{dB}}(\omega) - 20 \times log_{10}(20 \times 10^{-6})$
Given the dB SPL, $O_{dB SPL}(\omega)$, measured at 1 $V_{RMS}$, the sensitivity is
$S(\omega) = (10^{\frac{O_{dB SPL}(\omega)}{20}} \times 20 \times 10^{-6})^{-1}$
$S_{dB}(\omega) = - [O_{dB SPL}(\omega) + 20 \times log_{10}(20 \times 10^{-6})]$
Less common calculations
------------------------
Given sensitivity calculated using a different $V_{rms}$, $x$, (e.g. $10 V_{rms}$), compute the sensitivity at $1 V_{rms}$ (used by the attenuation calculation in the neurogen package).
$S_{dB}(\omega) = S_{dB_{1V}}(\omega) = S_{dB_{x}}(\omega) - 20 \times log_{10}x$
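For example (the starting sensitivity is hypothetical), rescaling a sensitivity measured at $10 V_{rms}$ to $1 V_{rms}$:

```python
import numpy as np

S_db_10V = -30.0   # hypothetical sensitivity measured at 10 Vrms, dB
x = 10.0           # Vrms used during the measurement
S_db_1V = S_db_10V - 20*np.log10(x)
print(S_db_1V)     # -50.0
```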
Estimating the PSD
==================
Applying a window to the signal is not always a good idea.
```
import numpy as np
import pylab as pl
from scipy import signal

fs = 10e3
t = np.arange(int(fs))/fs
frequency = 500
tone_waveform = np.sin(2*np.pi*frequency*t)
chirp_waveform = signal.chirp(t, 100, 1, 900)
clipped_waveform = np.clip(tone_waveform, -0.9, 0.9)

ax = pl.subplot(131)
ax.plot(t, tone_waveform)
ax = pl.subplot(132, sharex=ax, sharey=ax)
ax.plot(t, chirp_waveform)
ax = pl.subplot(133, sharex=ax, sharey=ax)
ax.plot(t, clipped_waveform)
ax.axis(xmin=0, xmax=0.01)
pl.tight_layout()

s = tone_waveform
for window in ('flattop', 'boxcar', 'blackman', 'hamming', 'hann'):
    w = signal.get_window(window, len(s))
    csd = np.fft.rfft(s*w/w.mean())
    psd = np.real(csd*np.conj(csd))/len(s)
    p = 10*np.log10(psd)  # PSD is a power quantity, so 10*log10
    f = np.fft.rfftfreq(len(s), fs**-1)
    pl.plot(f, p, label=window)
pl.axis(xmin=490, xmax=520)
pl.legend()

def plot_fft_windows(s):
    for window in ('flattop', 'boxcar', 'blackman', 'hamming', 'hann'):
        w = signal.get_window(window, len(s))
        csd = np.fft.rfft(s*w/w.mean())
        psd = np.real(csd*np.conj(csd))/len(s)
        p = 10*np.log10(psd)  # PSD is a power quantity, so 10*log10
        f = np.fft.rfftfreq(len(s), fs**-1)
        pl.plot(f, p, label=window)
    pl.legend()

pl.figure(); plot_fft_windows(tone_waveform); pl.axis(xmin=490, xmax=510)
pl.figure(); plot_fft_windows(chirp_waveform); pl.axis(xmin=0, xmax=1500, ymin=-100)
pl.figure(); plot_fft_windows(clipped_waveform)
```
Designing an output circuit
===========================
Speaker sensitivity is typically reported in $\frac{dB}{W}$ at a distance of 1 meter. For an $8\Omega$ speaker, $2.83V$ produces exactly $1W$. We know this because $P = I^2 \times R$ and $V = I \times R$. Solving for $I$:
$I = \sqrt{\frac{P}{R}}$ and $I = \frac{V}{R}$
$\sqrt{\frac{P}{R}} = \frac{V}{R}$
$P = \frac{V^2}{R}$
$V = R \times \sqrt{\frac{P}{R}}$
```
import numpy as np

print(0.5**2*8)  # P = I^2 * R: 0.5 A through 8 ohms dissipates 2 W
R = 8
P = 1
V = 2.83
print('Voltage is', R*np.sqrt(P/R))
print('Power is', V**2/R)
```
Let's say we have an $8\Omega$ speaker whose handling capacity is $0.5W$. If we want to achieve the *maximum* output (i.e. $0.5W$), then we need to determine the voltage that will achieve that wattage given the speaker rating.
$V = R \times \sqrt{\frac{P}{R}}$
$V = 8\Omega \times \sqrt{\frac{0.5W}{8\Omega}}$
$V = 2V$
Even if your system can generate larger values, there is *no point* in driving the speaker at values greater than $2V$. It will simply distort or get damaged. However, your system needs to be able to provide the appropriate current to drive the speaker.
$I = \sqrt{\frac{P}{R}}$
$I = \sqrt{\frac{0.5W}{8\Omega}}$
$I = 0.25A$
This is based on *nominal* specs.
So, what is the *maximum* output in *dB SPL*? Assume that the spec sheet reports $92dB$ at $0.3W$.
$10 \times log_{10}(0.5W/0.3W) = 2.2 dB$
This means that we will get only $2.2dB$ more for a total of $94.2 dB SPL$.
$10 \times log_{10}(0.1W/0.3W) = -4.7 dB$
```
P = 0.5
R = 8
print('Voltage is', R*np.sqrt(P/R))
print('Current is', np.sqrt(P/R))

# Numbers from the example above: spec sheet reports 92 dB at 0.3 W,
# speaker handles 0.5 W maximum
P_test = 0.3
P_max = 0.5
O_test = 92
dB_incr = 10*np.log10(P_max/P_test)
O_max = O_test+dB_incr
print('{:0.2f} dB increase giving {:0.2f} max output'.format(dB_incr, O_max))
```
Now that you've figured out the specs of your speaker, you need to determine whether you need a voltage divider to bring output voltage down to a safe level (especially if you are trying to use the full range of your DAC).
$V_{speaker} = V_{out} \times \frac{R_{speaker}}{R+R_{speaker}}$
Don't forget to compensate for any gain you may have built into the op-amp and buffer circuit.
$R = \frac{R_{speaker} \times (V_{out}-V_{speaker})}{V_{speaker}}$
```
P_max = 0.3  # rated long-term capacity of the speaker, W
R = 8        # speaker impedance, ohms
V = R * np.sqrt(P_max/R)
print('{:0.2f} max safe long-term voltage'.format(V))

P_max = 0.5  # rated short-term (peak) capacity of the speaker, W
R = 8
V = R * np.sqrt(P_max/R)
print('{:0.2f} max safe short-term voltage'.format(V))

R_speaker = 8
V_speaker = 2
V_out = 10
R = (R_speaker*(V_out-V_speaker))/V_speaker
print('Series divider resistor is {:.2f}'.format(R))
```
# Good details here http://www.dspguide.com/ch9/1.htm
```
def plot_fft_windows(s):
    for window in ('flattop', 'boxcar', 'hamming'):
        w = signal.get_window(window, len(s))
        csd = np.fft.rfft(s*w/w.mean())
        psd = np.real(csd*np.conj(csd))/len(s)
        p = 10*np.log10(psd)  # PSD is a power quantity, so 10*log10
        f = np.fft.rfftfreq(len(s), fs**-1)
        pl.plot(f, p, label=window)
    pl.legend()

fs = 100e3
duration = 50e-3
t = np.arange(int(duration*fs))/fs
f1 = 500
f2 = f1/1.2
print(duration*f1)
print(duration*f2)
coerced_f2 = np.round(duration*f2)/duration
print(f2, coerced_f2)
t1 = np.sin(2*np.pi*f1*t)
t2 = np.sin(2*np.pi*f2*t)
t2_coerced = np.sin(2*np.pi*coerced_f2*t)
pl.figure(); plot_fft_windows(t1); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t2); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t2_coerced); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t1+t2); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t1+t2_coerced); pl.axis(xmax=f1*2)
```
# Size of the FFT
```
n = int(50e3)
npow2 = int(2**np.ceil(np.log2(n)))
s = np.random.uniform(-1, 1, size=n)
spow2 = np.random.uniform(-1, 1, size=npow2)
%timeit np.fft.fft(s)
%timeit np.fft.fft(spow2)
```
# Ensuring reproducible generation of bandpass filtered noise
```
rs = np.random.RandomState(seed=1)
a1 = rs.uniform(-1, 1, 5000)
a2 = rs.uniform(-1, 1, 5000)
rs = np.random.RandomState(seed=1)
b1 = rs.uniform(-1, 1, 3330)
b2 = rs.uniform(-1, 1, 3330)
b3 = rs.uniform(-1, 1, 10000-6660)
np.equal(np.concatenate((a1, a2)), np.concatenate((b1, b2, b3))).all()
b, a = signal.iirfilter(7, (1e3/5000, 2e3/5000), rs=85, rp=0.3, ftype='ellip', btype='band')
zi = signal.lfilter_zi(b, a)
a1f, azf1 = signal.lfilter(b, a, a1, zi=zi)
a2f, azf2 = signal.lfilter(b, a, a2, zi=azf1)
b1f, bzf1 = signal.lfilter(b, a, b1, zi=zi)
b2f, bzf2 = signal.lfilter(b, a, b2, zi=bzf1)
b3f, bzf3 = signal.lfilter(b, a, b3, zi=bzf2)
print(np.equal(np.concatenate((a1f, a2f)), np.concatenate((b1f, b2f, b3f))).all())
pl.plot(np.concatenate((b1f, b2f, b3f)))
# Without carrying the filter state (zi) across chunks, the chunked
# results no longer match the single-pass result:
a1f = signal.lfilter(b, a, a1)
a2f = signal.lfilter(b, a, a2)
b1f = signal.lfilter(b, a, b1)
b2f = signal.lfilter(b, a, b2)
b3f = signal.lfilter(b, a, b3)
print(np.equal(np.concatenate((a1f, a2f)), np.concatenate((b1f, b2f, b3f))).all())
pl.plot(np.concatenate((b1f, b2f, b3f)))
```
# Computing noise power
```
frequency = np.fft.rfftfreq(int(200e3), 1/200e3)
flb, fub = 4e3, 64e3
mask = (frequency >= flb) & (frequency < fub)
noise_floor = 0
for sl in (56, 58, 60, 62, 64, 66, 96, 98):
    power_db = np.ones_like(frequency)*noise_floor
    power_db[mask] = sl
    power = (10**(power_db/20.0))*20e-6
    #power_sum = integrate.trapz(power**2, frequency)**0.5
    power_sum = np.sum(power**2)**0.5
    total_db = 20*np.log10(power_sum/20e-6)
    pl.semilogx(frequency, power_db)
    print(f'{total_db:.2f}dB with spectrum level at {sl:.2f}dB, expected {sl+10*np.log10(fub-flb):0.2f}dB')

frequency = np.fft.rfftfreq(int(100e3), 1/100e3)
mask = (frequency >= 4e3) & (frequency < 8e3)
for noise_floor in (-20, -10, 0, 10, 20, 30, 40, 50, 60):
    power_db = np.ones_like(frequency)*noise_floor
    power_db[mask] = 65
    power = (10**(power_db/20.0))*20e-6
    #power_sum = integrate.trapz(power**2, frequency)**0.5
    power_sum = np.sum(power**2)**0.5
    total_db = 20*np.log10(power_sum/20e-6)
    print('{}dB SPL with noise floor at {}dB SPL'.format(int(total_db), noise_floor))

# Compute power in dB then convert to power in volts
power_db = np.ones_like(frequency)*30
power_db[mask] = 65
power = (10**(power_db/20.0))*20e-6
psd = power/2*len(power)*np.sqrt(2)
phase = np.random.uniform(0, 2*np.pi, len(psd))
csd = psd*np.exp(-1j*phase)
waveform = np.fft.irfft(csd)  # renamed from `signal` to avoid shadowing scipy.signal
pl.plot(waveform)
rms = np.mean(waveform**2)**0.5
print(rms)
print('RMS power, dB SPL', 20*np.log10(rms/20e-6))

waveform = np.random.uniform(-1, 1, len(power))
rms = np.mean(waveform**2)**0.5
20*np.log10(rms/20e-6)
csd = np.fft.rfft(waveform)
psd = np.real(csd*np.conj(csd))
print(psd[:5])
psd = np.abs(csd)**2  # equivalent to the line above
print(psd[:5])
```
# Analysis of grounding
Signal cables resonate when physical length is a quarter wavelength.
```
flb, fub = 100, 100e3
# resonant frequency of cable
c = 299792458 # speed of light in m/s
l = 3 # length of cable in meters
resonant_frequency = 1/(l*4/c)
llb = c/flb/4
lub = c/fub/4
print(llb, lub)
# As shown here, since we're not running cables for 750 meters,
# we don't have an issue.
c/resonant_frequency/4.0
```
Resonance of acoustic tube
```
f = 14000.0 # Hz, cps
w = (1/f)*340.0
w*1e3 # wavelength in mm (the quarter-wave resonant length would be w/4)
length = 20e-3
period = length/340.0
frequency = 1.0/period
frequency
import numpy as np
def exp_ramp_v1(f0, k, t):
return f0*k**t
def exp_ramp_v2(f0, f1, t):
k = np.exp(np.log(f1/f0)/t[-1])
return exp_ramp_v1(f0, k, t)
t = np.arange(10e3)/10e3
f0 = 0.5e3
f1 = 50e3
e1 = exp_ramp_v2(50e3, 200e3, t)
e2 = exp_ramp_v2(0.5e3, 200e3, t)
pl.plot(t, e1)
pl.plot(t, e2)
```
# chirps
```
fs = 1000.0
f = np.linspace(1, 200, int(fs))
t = np.arange(int(fs))/fs
pl.plot(t, np.sin(2*np.pi*f.cumsum()/fs))
(2*np.pi*f[-1]*t[-1]) % (2*np.pi)
(2*np.pi*f.cumsum()[-1]/fs) % (2*np.pi)
```
# Converting band level to spectrum level
$BL = 10 \times log{\frac{I_{tot}}{I_{ref}}}$ where $I_{tot} = I_{SL} \times \Delta f$. Using the multiplication rule for logarithms, $BL = 10 \times log{\frac{I_{SL} \times 1 Hz}{I_{ref}}} + 10 \times log{\frac{\Delta f}{1 Hz}}$, which simplifies to $BL = ISL_{ave} + 10 \times log(\Delta f)$.
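As a quick numerical check of the simplified relation (the spectrum level and bandwidth below are hypothetical):

```python
import numpy as np

spectrum_level = 40.0   # hypothetical ISL_ave, dB re I_ref in a 1 Hz band
bandwidth = 1000.0      # Hz
band_level = spectrum_level + 10*np.log10(bandwidth)
print(band_level)       # 70.0
```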
# Equalizing a signal using the impulse response
```
from scipy import signal
import numpy as np
import pylab as pl

signal.iirfilter?
signal.freqs?

fs = 100e3
# Normalize the cutoff to the Nyquist frequency (fs/2) for the digital design
kwargs = dict(N=1, Wn=1e3/(fs/2), rp=0.4, rs=50, btype='highpass', ftype='ellip')
b, a = signal.iirfilter(analog=False, **kwargs)
ba, aa = signal.iirfilter(analog=True, **kwargs)
t, ir = signal.impulse((ba, aa), N=50)
w, h = signal.freqz(b, a)
pl.figure()
pl.plot(t, ir)
pl.figure()
pl.plot(w, np.abs(h))

rs = np.random.RandomState(seed=1)
noise = rs.uniform(-1, 1, 5000)
f = np.linspace(100, 25000, int(fs))
t = np.arange(int(fs))/fs
chirp = np.sin(2*np.pi*f.cumsum()/fs)
psd = np.abs(np.fft.rfft(chirp))**2
freq = np.fft.rfftfreq(len(chirp), fs**-1)
pl.semilogx(freq, 10*np.log10(psd), 'k')
chirp_ir = signal.lfilter(b, a, chirp)
psd_ir = np.abs(np.fft.rfft(chirp_ir))**2
pl.semilogx(freq, 10*np.log10(psd_ir), 'r')
#pl.axis(ymin=40, xmin=10, xmax=10000)
# Note: the pointwise reciprocal of the impulse response is *not* the
# inverse filter; a proper inverse would be designed in the frequency domain.
chirp_eq = signal.lfilter(ir**-1, 1, chirp_ir)
psd_eq = np.abs(np.fft.rfft(chirp_eq))**2
pl.semilogx(freq, 10*np.log10(psd_eq), 'g')
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Image/image_stats_by_band.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Image/image_stats_by_band.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Image/image_stats_by_band.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess

try:
    import geemap
except ImportError:
    print('geemap package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])

# Checks whether this notebook is running on Google Colab
try:
    import google.colab
    import geemap.eefolium as emap
except ImportError:
    import geemap as emap

# Authenticates and initializes Earth Engine
import ee

try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
image = ee.Image('USDA/NAIP/DOQQ/m_3712213_sw_10_1_20140613')
Map.setCenter(-122.466123, 37.769833, 17)
Map.addLayer(image, {'bands': ['N', 'R','G']}, 'NAIP')
geometry = image.geometry()
means = image.reduceRegions(geometry, ee.Reducer.mean().forEachBand(image), 10)
print(means.getInfo())
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
**Note**: Click on "*Kernel*" > "*Restart Kernel and Clear All Outputs*" in [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) *before* reading this notebook to reset its output. If you cannot run this file on your machine, you may want to open it [in the cloud <img height="12" style="display: inline-block" src="../static/link/to_mb.png">](https://mybinder.org/v2/gh/webartifex/intro-to-python/develop?urlpath=lab/tree/06_text/00_content.ipynb).
# Chapter 6: Text & Bytes
In this chapter, we continue the study of the built-in data types. The next layer on top of numbers consists of **textual data** that are modeled primarily with the `str` type in Python. `str` objects are more complex than the numeric objects in [Chapter 5 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/05_numbers/00_content.ipynb) as they *consist* of an *arbitrary* and possibly large number of *individual* characters that may be chosen from *any* alphabet in the history of humankind. Luckily, Python abstracts away most of this complexity from us. However, after looking at the `str` type in great detail, we briefly introduce the `bytes` type at the end of this chapter to understand how characters are modeled in memory.
## The `str` Type
To create a `str` object, we use the *literal* notation and type the text between enclosing **double quotes** `"`.
```
text = "Lorem ipsum dolor sit amet."
```
Like everything in Python, `text` is an object with an *identity*, a *type*, and a *value*.
```
id(text)
type(text)
```
As seen before, a `str` object evaluates to itself in a literal notation with enclosing **single quotes** `'`.
In [Chapter 1 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/01_elements/00_content.ipynb#Value-/-(Semantic)-"Meaning"), we specify the double quotes `"` convention this book follows. Yet, single quotes `'` and double quotes `"` are *perfect* substitutes. We could use the reverse convention, as well. As [this discussion <img height="12" style="display: inline-block" src="../static/link/to_so.png">](https://stackoverflow.com/questions/56011/single-quotes-vs-double-quotes-in-python) shows, many programmers have *strong* opinions about such conventions. Consequently, the discussion was "closed as not constructive" by the moderators.
```
text
```
As the single quote `'` is often used in the English language as a shortener, we could make an argument in favor of using the double quotes `"`: There are possibly fewer situations like the two code cells below, where we must **escape** the kind of quote used as the `str` object's delimiter with a backslash `\` inside the text (cf., also the "*Unicode & (Special) Characters*" section further below). However, double quotes `"` are often used as well, for example, to indicate a quote like the one by [Albert Einstein](https://de.wikipedia.org/wiki/Albert_Einstein) below. So, such arguments are not convincing.
Many proponents of the single quote `'` usage claim that double quotes `"` cause more **visual noise** on the screen. However, this argument is also not convincing as, for example, one could claim that *two* single quotes `''` look so similar to *one* double quote `"` that a reader may confuse an *empty* `str` object with a missing closing quote `"`. With the double quotes `"` convention we at least avoid such confusion (i.e., empty `str` objects are written as `""`).
This discussion is an excellent example of a [flame war <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Flaming_%28Internet%29#Flame_war) in the programming world: Everyone has an opinion and the discussion leads to *no* result.
```
"Einstein said, \"If you can't explain it, you don't understand it.\""
'Einstein said, "If you can\'t explain it, you don\'t understand it."'
```
An *important* fact to know is that enclosing quotes of either kind are *not* part of the `str` object's *value*! They are merely *syntax* indicating the literal notation.
So, printing out the sentence with the built-in [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print) function does the same in both cases.
```
print("Einstein said, \"If you can't explain it, you don't understand it.\"")
print('Einstein said, "If you can\'t explain it, you don\'t understand it."')
```
As an alternative to the literal notation, we may use the built-in [str() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str) constructor to cast non-`str` objects as `str` ones. As [Chapter 11 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/11_classes/00_content.ipynb) reveals, basically any object in Python has a **text representation**. Because of that we may also pass `list` objects, the boolean `True` and `False`, or `None` to [str() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str).
```
str(42)
str(42.87)
str([1, 2, 3])
str(True)
str(False)
str(None)
```
### User Input
As shown in the "*Guessing a Coin Toss*" example in [Chapter 4 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/04_iteration/03_content.ipynb#Example:-Guessing-a-Coin-Toss), the built-in [input() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#input) function displays a prompt to the user and returns whatever is entered as a `str` object. [input() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#input) is in particular valuable when writing command-line tools.
```
user_input = input("Whatever you enter is put in a new string: ")
type(user_input)
user_input
```
### Reading Files
A more common situation where we obtain `str` objects is when reading the contents of a file with the [open() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#open) built-in. In its simplest usage form, to open a [text file <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Text_file) file, we pass in its path (i.e., "filename") as a `str` object.
```
file = open("lorem_ipsum.txt")
```
[open() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#open) returns a **[proxy <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Proxy_pattern)** object of type `TextIOWrapper` that allows us to interact with the file on disk. `mode='r'` shows that we opened the file in read-only mode and `encoding='UTF-8'` is explained in detail in the [The `bytes` Type <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/06_text/01_content.ipynb#The-bytes-Type) section at the end of this chapter.
```
type(file)
file
```
`TextIOWrapper` objects come with plenty of type-specific methods and attributes.
```
file.readable()
file.writable()
file.name
file.encoding
```
So far, we have not yet read anything from the file (i.e., from disk)! That is intentional as, for example, the file could contain more data than could fit into our computer's memory. Therefore, we have to explicitly instruct the `file` object to read some of or all the data in the file.
One way to do that, is to simply loop over the `file` object with the `for` statement as shown next: In each iteration, `line` is assigned the next line in the file. Because we may loop over `TextIOWrapper` objects, they are *iterables*.
```
for line in file:
    print(line)
```
Once we looped over the `file` object, it is **exhausted**: We can *not* loop over it a second time. So, the built-in [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print) function is *never* called in the code cell below!
```
for line in file:
    print(line)
```
After the `for`-loop, the `line` variable is still set and references the *last* line in the file. We verify that it is indeed a `str` object.
```
line
type(line)
```
An *important* observation is that the `file` object is still associated with an *open* **[file descriptor <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/File_descriptor)**. Without going into any technical details, we note that an operating system can only handle a *limited* number of "open files" at the same time, and, therefore, we should always *close* the file once we are done processing it.
`TextIOWrapper` objects have a `closed` attribute on them that indicates if the associated file descriptor is still open or has been closed. We can "manually" close any `TextIOWrapper` object with the [close() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/io.html#io.IOBase.close) method.
```
file.closed
file.close()
file.closed
```
The more Pythonic way is to use [open() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#open) within the compound `with` statement (cf., [reference <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/reference/compound_stmts.html#the-with-statement)): In the example below, the indented code block is said to be executed within the **context** of the `file` object that now plays the role of a **[context manager <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/reference/datamodel.html#with-statement-context-managers)**. Many different kinds of context managers exist in Python with different applications and purposes. Context managers returned from [open() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#open) mainly ensure that file descriptors get automatically closed after the last line in the code block is executed.
```
with open("lorem_ipsum.txt") as file:
    for line in file:
        print(line)
file.closed
```
Using syntax familiar from [Chapter 3 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/03_conditionals/00_content.ipynb#The-try-Statement) to explain what the `with open(...) as file:` does above, we provide an alternative formulation with a `try` statement below: The `finally`-branch is *always* executed, even if an exception is raised inside the `for`-loop. Therefore, `file` is sure to be closed too. However, this formulation is somewhat less expressive.
```
try:
    file = open("lorem_ipsum.txt")
    for line in file:
        print(line)
finally:
    file.close()
file.closed
```
As an alternative to reading the contents of a file by looping over a `TextIOWrapper` object, we may also call one of the methods they come with.
For example, the [read() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/io.html#io.TextIOBase.read) method takes a single `size` argument of type `int` and returns a `str` object with the specified number of characters.
```
file = open("lorem_ipsum.txt")
file.read(11)
```
When we call [read() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/io.html#io.TextIOBase.read) again, the returned `str` object begins where the previous one left off. This is because `TextIOWrapper` objects like `file` simply store a position at which the associated file on disk is being read. In other words, `file` is like a **cursor** pointing into a file.
```
file.read(11)
```
On the contrary, the [readline() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/io.html#io.TextIOBase.readline) method keeps reading until it hits a **newline character**. These are shown in `str` objects as `"\n"`.
```
file.readline()
```
When we call [readline() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/io.html#io.TextIOBase.readline) again, we obtain the next line.
```
file.readline()
```
Lastly, the [readlines() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/io.html#io.IOBase.readlines) method returns a `list` object that holds *all* lines in the `file` from the current position to the end of the file. The latter position is often abbreviated as **EOF** in the documentation. Let's always remember that [readlines() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/io.html#io.IOBase.readlines) has the potential to crash a computer with a `MemoryError`.
```
file.readlines()
```
Calling [readlines() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/io.html#io.IOBase.readlines) a second time, is as pointless as looping over `file` a second time.
```
file.readlines()
file.close()
```
Because every `str` object created by reading the contents of a file in any of the ways shown in this section ends with a `"\n"`, we see empty lines printed between each `line` in the `for`-loops above. To print the entire text without empty lines in between, we pass an `end=""` argument to the [print() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#print) function.
```
with open("lorem_ipsum.txt") as file:
    for line in file:
        print(line, end="")
```
## A String of Characters
A **sequence** is yet another *abstract* concept (cf., the "*Containers vs. Iterables*" section in [Chapter 4 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/04_iteration/02_content.ipynb#Containers-vs.-Iterables)).
It unifies *four* [orthogonal <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/Orthogonality) (i.e., "independent") concepts into one bigger idea: Any data type, such as `str`, is considered a sequence if it
1. **contains**
2. a **finite** number of other objects that
3. can be **iterated** over
4. in a *predictable* **order**.
[Chapter 7 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/07_sequences/00_content.ipynb#Collections-vs.-Sequences) formalizes these concepts in great detail. Here, we keep our focus on the `str` type that historically received its name as it models a **[string of characters <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/String_%28computer_science%29)**. *String* is simply another term for *sequence* in the computer science literature.
Another example of a sequence is the `list` type. Because of that, `str` objects may be treated like `list` objects in many situations.
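Because both types implement the sequence concept, many operations read identically on both. A minimal sketch (not from the original chapter), re-creating the chapter's running example `text`:

```
# The same sequence operations work on a str and on the list of its characters.
text = "Lorem ipsum dolor sit amet."
chars = list(text)

assert len(text) == len(chars) == 27
assert text[0] == chars[0] == "L"
assert text[-1] == chars[-1] == "."
assert ("m" in text) == ("m" in chars)
```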
Below, the built-in [len() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#len) function tells us how many characters make up `text`. [len() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#len) would not work with an "infinite" object. As anything modeled in a program must fit into a computer's finite memory, there cannot exist truly infinite objects; however, [Chapter 8 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/08_mfr/00_content.ipynb#Iterators-vs.-Iterables) introduces specialized iterable data types that can be used to model an *infinite* series of "things" and that, consequently, have no concept of "length."
```
text
len(text)
```
Being iterable, we may loop over `text` and do something with the individual characters, for example, print them out with extra space in between them. If it were not for the appropriately chosen name of the `text` variable, we could not tell what *concrete* type of object the `for` statement is looping over.
```
for character in text:
print(character, end=" ")
```
With the [reversed() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/functions.html#reversed) built-in, we may loop over `text` in reversed order. Reversing `text` only works as it has a forward order to begin with.
```
for character in reversed(text):
print(character, end=" ")
```
Being a container, we may check if a given `str` object is contained in `text` with the `in` operator, which has *two* distinct usages: First, it checks if a *single* character is contained in a `str` object. Second, it may also check if a shorter `str` object, then called a **substring**, is contained in a longer one.
```
"L" in text
"ipsum" in text
"veni, vidi, vici" in text
```
## Indexing
As `str` objects are *ordered* and *finite*, we may **index** into them to obtain individual characters with the **indexing operator** `[]`. This is analogous to how we obtained individual elements of a `list` object in [Chapter 1 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/01_elements/03_content.ipynb#Who-am-I?-And-how-many?).
```
text[0]
text[1]
```
The index must be of type `int`; otherwise, we get a `TypeError`.
```
text[1.0]
```
The last index is one less than the above "length" of the `str` object as we start counting at `0`.
```
text[26] # == text[len(text) - 1]
```
An `IndexError` is raised whenever the index is out of range.
```
text[27] # == text[len(text)]
```
We may use *negative* indexes to start counting from the end of the `str` object, as shown in the figure below. Note how this only works because sequences are *finite*.
| Index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10| 11| 12| 13| 14| 15| 16| 17| 18| 19| 20| 21| 22| 23| 24| 25| 26|
|:---------:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|**Reverse**|-27|-26|-25|-24|-23|-22|-21|-20|-19|-18|-17|-16|-15|-14|-13|-12|-11|-10|-9 |-8 |-7 |-6 |-5 |-4 |-3 |-2 |-1 |
| **Character** |`L`|`o`|`r`|`e`|`m`|` `|`i`|`p`|`s`|`u`|`m`|` `|`d`|`o`|`l`|`o`|`r`|` `|`s`|`i`|`t`|` `|`a`|`m`|`e`|`t`|`.`|
```
text[-1]
text[-27] # == text[-len(text)]
```
One reason why programmers like to start counting at `0` is that a positive index and its *corresponding* negative index always add up to the length of the sequence. Here, `6` and `21` add to `27`.
```
text[6]
text[-21]
```
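This correspondence holds for every valid index, not just `6` and `-21`. A small sketch verifying it programmatically, re-creating the running example:

```
text = "Lorem ipsum dolor sit amet."

# A positive index i and its negative counterpart i - len(text)
# always refer to the same character.
for i in range(len(text)):
    assert text[i] == text[i - len(text)]
```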
## Slicing
A **slice** is a substring of a `str` object.
The **slicing operator** is a generalization of the indexing operator: We put one, two, or three integers within the brackets `[]`, separated by colons `:`. The three integers are then referred to as the *start*, *stop*, and *step* values.
Let's start with two integers, *start* and *stop*. Whereas the character at the *start* position is included in the returned `str` object, the one at the *stop* position is not. If both *start* and *stop* are positive, the difference "*stop* minus *start*" tells us how many characters the resulting slice has: Below, `5 - 0 == 5` implies that `"Lorem"` consists of `5` characters. Colloquially speaking, `text[0:5]` means "take the first `5` characters of `text`."
```
text[0:5]
text[12:len(text)]
```
If left out, *start* defaults to `0` and *stop* to the length of the `str` object (i.e., the end).
```
text[:5]
text[12:]
```
Not including the character at the *stop* position makes working with individual slices easier as they add up to the original `str` object again (cf., the "*String Operations*" section below regarding the overloaded `+` operator).
```
text[:5] + text[5:]
```
Slicing and indexing make it easy to obtain shorter versions of the original `str` object. A common application is to **parse** meaningful substrings out of raw text data.
```
text[:11] + text[-10:]
```
By combining a positive *start* with a negative *stop* index, we specify both ends of the slice *relative* to the ends of the entire `str` object. So, colloquially speaking, `[6:-10]` below means "drop the first six and last ten characters." The length of the resulting slice can then *not* be calculated from the indexes alone; it also depends on the length of the original `str` object!
```
text[6:-10]
```
For convenience, the indexes do not need to lie within the range from `0` to `len(text)` when slicing. So, no `IndexError` is raised here.
```
text[-999:999]
```
By leaving out both *start* and *stop*, we take a "full" slice that is essentially a *copy* of the original `str` object.
```
text[:]
```
A *step* value of `i` can be used to obtain only every `i`th character.
```
text[::2]
```
A negative *step* size of `-1` reverses the order of the characters.
```
text[::-1]
```
## Immutability
Whereas elements of a `list` object *may* be *re-assigned*, as shortly hinted at in [Chapter 1 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/01_elements/03_content.ipynb#Who-am-I?-And-how-many?), this is *not* allowed for the individual characters of `str` objects. Once created, they can *not* be changed. Formally, we say that `str` objects are **immutable**. In that regard, they are like the numeric types in [Chapter 5 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/05_numbers/00_content.ipynb).
On the contrary, objects that may be changed after creation, are called **mutable**. We already saw in [Chapter 1 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/01_elements/03_content.ipynb#Who-am-I?-And-how-many?) how mutable objects are more difficult to reason about for a beginner, in particular, if more than one variable references it. Yet, mutability does have its place in a programmer's toolbox, and we revisit this idea in the next chapters.
The `TypeError` indicates that `str` objects are *immutable*: Assignment to an index or a slice are *not* supported.
```
text[0] = "X"
text[:5] = "random"
```
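Because in-place assignment fails, the idiomatic workaround is to build a *new* `str` object from slices. A minimal sketch (not from the original chapter), re-creating the running example:

```
text = "Lorem ipsum dolor sit amet."

# "Replacing" the first character really means creating a new str object
# via concatenation of slices; the original remains untouched.
changed = "X" + text[1:]

assert changed == "Xorem ipsum dolor sit amet."
assert text == "Lorem ipsum dolor sit amet."
```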
## String Methods
Objects of type `str` come with many **methods** bound on them (cf., the [documentation <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#string-methods) for a full list). As seen before, they work like *normal* functions and are accessed via the **dot operator** `.`. Calling a method is also referred to as **method invocation**.
The [.find() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.find) method returns the index of the first occurrence of a character or a substring. If no match is found, it returns `-1`. A mirrored version searching from the right called [.rfind() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.rfind) exists as well. The [.index() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.index) and [.rindex() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.rindex) methods work in the same way but raise a `ValueError` if no match is found. So, we can control if a search fails *silently* or *loudly*.
```
text
text.find("a")
text.find("b")
text.find("dolor")
```
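The difference between failing *silently* and failing *loudly* can be made explicit with a `try` statement. A short sketch, again re-creating the running example:

```
text = "Lorem ipsum dolor sit amet."

# .find() signals "no match" with the sentinel value -1 ...
assert text.find("b") == -1

# ... whereas .index() raises a ValueError we must handle.
try:
    text.index("b")
except ValueError:
    print("no match found")
```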
[.find() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.find) takes optional *start* and *end* arguments that allow us to find occurrences other than the first one.
```
text.find("o")
text.find("o", 2)
text.find("o", 2, 12)
```
The [.count() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.count) method does what we expect.
```
text
text.count("l")
```
As [.count() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.count) is *case-sensitive*, we must **chain** it with the [.lower() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.lower) method to get the count of all `"L"`s and `"l"`s.
```
text.lower().count("l")
```
Alternatively, we can use the [.upper() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.upper) method and search for `"L"`s.
```
text.upper().count("L")
```
Because `str` objects are *immutable*, [.upper() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.upper) and [.lower() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.lower) return *new* `str` objects, even if they do *not* change the value of the original `str` object.
```
example = "random"
id(example)
lower = example.lower()
id(lower)
```
`example` and `lower` are *different* objects with the *same* value.
```
example is lower
example == lower
```
Besides [.upper() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.upper) and [.lower() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.lower), there also exist the [.title() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.title) and [.swapcase() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.swapcase) methods.
```
text.lower()
text.upper()
text.title()
text.swapcase()
```
Another popular string method is [.split() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.split): It separates a longer `str` object into smaller ones collected in a `list` object. By default, groups of contiguous whitespace characters are used as the *separator*.
As an example, we use [.split() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.split) to print out the individual words in `text` with more whitespace in between them.
```
text.split()
for word in text.split():
print(word, end=" ")
```
The opposite of splitting is done with the [.join() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.join) method. It is typically invoked on a `str` object that represents a separator (e.g., `" "` or `", "`) and connects the elements provided by an *iterable* argument (e.g., `words` below) into one *new* `str` object.
```
words = ["This", "will", "become", "a", "sentence."]
sentence = " ".join(words)
sentence
```
As the `str` object `"abcde"` below is an *iterable* itself, its characters (!) are joined together with a space `" "` in between.
```
" ".join("abcde")
```
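A handy consequence of these defaults is that chaining `.split()` and `.join()` normalizes messy whitespace in one line. A sketch (not from the original chapter):

```
messy = "Lorem   ipsum \n dolor\tsit  amet."

# .split() without arguments treats any run of whitespace as one separator,
# so re-joining the pieces with single spaces cleans up the text.
clean = " ".join(messy.split())

assert clean == "Lorem ipsum dolor sit amet."
```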
The [.replace() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.replace) method creates a *new* `str` object with parts of the original `str` object potentially replaced.
```
sentence.replace("will become", "is")
```
Note how `sentence` itself remains unchanged. Bound to an immutable object, [.replace() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.replace) must create *new* objects.
```
sentence
```
As seen previously, the [.strip() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.strip) method is often helpful for cleaning text data from unreliable sources, such as user input, of unnecessary leading and trailing whitespace. The [.lstrip() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.lstrip) and [.rstrip() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.rstrip) methods are specialized versions of it.
```
" text with whitespace ".strip()
" text with whitespace ".lstrip()
" text with whitespace ".rstrip()
```
When justifying a `str` object for output, the [.ljust() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.ljust) and [.rjust() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.rjust) methods may be helpful.
```
sentence.ljust(40)
sentence.rjust(40)
```
Similarly, the [.zfill() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.zfill) method can be used to pad a `str` representation of a number with leading `0`s for justified output.
```
"42.87".zfill(10)
"-42.87".zfill(10)
```
## String Operations
As mentioned in [Chapter 1 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/01_elements/00_content.ipynb#Operator-Overloading), the `+` and `*` operators are *overloaded* and used for **string concatenation**. They always create *new* `str` objects. That has nothing to do with the `str` type's immutability, but is the default behavior of operators.
```
"Hello " + text[:4]
5 * text[:12] + "..."
```
### String Comparison
The *relational* operators also work with `str` objects, another example of operator overloading. Comparison is done one character at a time in a pairwise fashion until the first pair differs or one operand ends. However, `str` objects are sorted in a "weird" way. For example, all upper case characters come before all lower case characters. The reason for that is given in the "*Characters are Numbers with a Convention*" sub-section in the [second part <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/06_text/02_content.ipynb#Characters-are-Numbers-with-a-Convention) of this chapter.
```
"Apple" < "Banana"
"apple" < "Banana"
"apple" < "Banana".lower()
```
Below is an example with typical German last names that shows how characters other than the first decide the ordering.
```
"Mai" < "Maier" < "Mayer" < "Meier" < "Meyer"
```
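This "weird" ordering also affects sorting. A common remedy, sketched below (not from the original chapter), is to lower-case each `str` object during the comparison via the `key` argument of the built-in `sorted()` function:

```
fruits = ["apple", "Banana", "cherry", "Apricot"]

# Plain sorting puts all names starting with an upper case letter first ...
assert sorted(fruits) == ["Apricot", "Banana", "apple", "cherry"]

# ... whereas comparing lower-cased versions yields the ordering
# a human would expect.
assert sorted(fruits, key=str.lower) == ["apple", "Apricot", "Banana", "cherry"]
```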
## String Interpolation
Often, we want to use `str` objects as templates in the source code that are filled in with concrete values only at runtime. This approach is called **string interpolation**. There are three ways to do that in Python.
### f-strings
**[Formatted string literals <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/reference/lexical_analysis.html#formatted-string-literals)**, or **f-strings** for short, are the most recently added (cf., [PEP 498 <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://www.python.org/dev/peps/pep-0498/) in 2016) and most readable way: We simply prepend a `str` in its literal notation with an `f`, and put variables, or more generally, expressions, within curly braces `{}`. These are then filled in when the string literal is evaluated.
```
name = "Alexander"
time_of_day = "morning"
f"Hello {name}! Good {time_of_day}."
```
Separated by a colon `:`, various formatting options are available. In the beginning, the ability to round numbers for output may be particularly useful: We achieve this by adding `:.2f` after the variable name inside the curly braces, which formats the number in fixed-point notation with two decimal places. The `:.2f` is a so-called format specifier, and there exists a whole **[format specification mini-language <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/string.html#formatspec)** that governs how specifiers work.
```
pi = 3.141592653
f"Pi is {pi:.2f}"
```
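The mini-language covers much more than rounding. A few further specifiers from the standard set, sketched here as examples (not from the original chapter):

```
pi = 3.141592653

assert f"{1234567:,}" == "1,234,567"   # group digits with thousands separators
assert f"{pi:>10.2f}" == "      3.14"  # right-align in a field of width 10
assert f"{0.25:.1%}" == "25.0%"        # render a ratio as a percentage
```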
### [format() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.format) Method
`str` objects also provide a [.format() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.format) method that accepts an arbitrary number of *positional* arguments that are inserted into the `str` object in the same order, replacing empty curly braces `{}`. String interpolation with the [.format() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.format) method is the more traditional approach and probably still the most common one today. While f-strings are the recommended way going forward, usage of the [.format() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.format) method is unlikely to decline any time soon.
```
"Hello {}! Good {}.".format(name, time_of_day)
```
We may use index numbers inside the curly braces if the order is different in the `str` object.
```
"Good {1}, {0}".format(name, time_of_day)
```
The [.format() <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#str.format) method may alternatively be used with *keyword* arguments as well. Then, we must put the keywords' names within the curly brackets.
```
"Hello {name}! Good {time}.".format(name=name, time=time_of_day)
```
Format specifiers work as in the f-string case.
```
"Pi is {:.2f}".format(pi)
```
### `%` Operator
The `%` operator that we saw in the context of modulo division in [Chapter 1 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/01_elements/00_content.ipynb#%28Arithmetic%29-Operators) is overloaded with string interpolation when its first operand is a `str` object. The second operand consists of all expressions to be filled in. Format specifiers work with a `%` instead of curly braces and according to a different set of rules referred to as **[printf-style string formatting <img height="12" style="display: inline-block" src="../static/link/to_py.png">](https://docs.python.org/3/library/stdtypes.html#printf-style-string-formatting)**. So, `{:.2f}` becomes `%.2f`.
This way of string interpolation is the oldest and originates from the [C language <img height="12" style="display: inline-block" src="../static/link/to_wiki.png">](https://en.wikipedia.org/wiki/C_%28programming_language%29). It is still widespread, but we should prefer one of the other two ways. We show it here mainly for completeness' sake.
```
"Pi is %.2f" % pi
```
To insert more than one expression, we must list them in order and between parentheses `(` and `)`. As [Chapter 7 <img height="12" style="display: inline-block" src="../static/link/to_nb.png">](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/07_sequences/00_content.ipynb#The-tuple-Type) reveals, this literal syntax creates an object of type `tuple`. Also, to format an expression as text, we use the format specifier `%s`.
```
"Hello %s! Good %s." % (name, time_of_day)
```
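printf-style interpolation also accepts a mapping as its second operand, with the keys named inside the placeholders. A short sketch of this standard behavior (not from the original chapter):

```
name = "Alexander"
time_of_day = "morning"

# %(key)s placeholders look up their values in the dict on the right.
greeting = "Hello %(name)s! Good %(time)s." % {"name": name, "time": time_of_day}

assert greeting == "Hello Alexander! Good morning."
```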
### Replicates and linear regression
```
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import OneHotEncoder
import numpy as np
import seaborn as sns
import scipy.stats as stats
import pandas as pd
# N = 1000
# num_boot = 10000
# num_donors = 3
# donors = np.random.choice(['a','b','c'], size=N)
# endog = np.random.randint(2, size=N).reshape(-1,1)
# donor_indicator = OneHotEncoder().fit_transform(donors.reshape(-1,1)).toarray()
# X = np.hstack([endog, donor_indicator])
# beta = np.array([50, 40, 20, 30]).reshape(-1,1)
# noise = stats.norm.rvs(loc=0*np.ones(N)*(donors=='a')*(endog.reshape(-1)==1), scale=100,).reshape(-1,1)
# noise = stats.norm.rvs(scale=100, size=N).reshape(-1,1)
# exog = X@beta + noise
N = 20
num_boot = 20000
num_donors = 3
donors = np.random.choice(['a','b','c'], size=N)
endog = np.random.randint(2, size=N).reshape(-1,1)
donor_indicator = OneHotEncoder().fit_transform(donors.reshape(-1,1)).toarray()
X = np.hstack([endog, donor_indicator])
beta = np.array([50, 10, 20, 30]).reshape(-1,1)
exog = X@beta + \
60*np.ones(N)*(donors=='a')*(endog.reshape(-1)==1) + \
25*np.ones(N)*(donors=='b')*(endog.reshape(-1)==1) + \
stats.norm.rvs(loc=0, scale=20, size=N).reshape(-1,1)
```
### Code for actual linear regression
```
linreg_coefs = np.zeros(num_boot)
# X = np.hstack([X, X[:, [0]]*X[:, [1]], X[:, [0]]*X[:, [2]], X[:, [0]]*X[:, [3]]])
for b in range(num_boot):
boot_idxs = np.random.choice(N, size=N)
linreg_coefs[b] = LinearRegression(fit_intercept=False).fit(X[boot_idxs],exog[boot_idxs]).coef_[0,0]
```
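A bootstrap distribution like `linreg_coefs` is typically summarized with a percentile confidence interval. A self-contained sketch of that step on simulated stand-in data (the array below mimics, but is not, the coefficients collected above):

```
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the bootstrap coefficients collected in the loop above.
linreg_coefs = rng.normal(loc=50, scale=5, size=20_000)

# 95% percentile bootstrap confidence interval.
lower, upper = np.percentile(linreg_coefs, [2.5, 97.5])
print(lower, upper)

assert lower < 50 < upper
```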
### Code for separate regression
```
donor_data = []
for donor in ['a', 'b', 'c']:
X_donor = endog[donors==donor].copy()
exog_donor = exog[donors==donor].copy()
donor_data.append((X_donor, exog_donor))
weighted_coefs = np.zeros(num_boot)
for b in range(num_boot):
for X_donor, exog_donor in donor_data:
boot_idxs = np.random.choice(X_donor.shape[0], size=X_donor.shape[0])
weighted_coefs[b] += LinearRegression(fit_intercept=True).fit(X_donor[boot_idxs],exog_donor[boot_idxs]).coef_[0,0]*X_donor.shape[0]
weighted_coefs[b] /= N
sns.kdeplot(linreg_coefs)
sns.kdeplot(weighted_coefs)
X
cov_idx = 0
strata = np.delete(X, 0, axis=1)
uniq_strata, indices, inverse = np.unique(np.delete(X, 0, axis=1), axis=0, return_inverse=True, return_index=True)
indices
inverse
np.where(inverse==0)[0]
import scipy.sparse as sparse
sparse.eye(5).tocsr()[:, 0]
np.concatenate([np.array([1, 2, 3]), np.array([5, 3])])
sparse.vstack([sparse.eye(5), sparse.eye(5)])
def a():
return 'a', 'b', 'c'
b = {1:a()}
b
('xyz' if 1 == 1 else 'abc')
strata_idx = np.all(strata == uniq_strata[0], axis=1)
X[strata_idx][:, [0,1]].shape
df = pd.DataFrame(uniq_strata, columns=['A', 'B', 'C'])
from patsy import dmatrix
N = 10
df = pd.DataFrame()
df['treatment'] = np.random.choice(['A', 'B', 'C', 'D'], size=N)
df['covariate'] = np.random.choice(2, size=N)
design_df = df.copy()
formula_like = '1 + treatment + covariate'
dmat = dmatrix(formula_like, design_df)
design_matrix_cols = dmat.design_info.column_names.copy()
design_matrix = np.array(dmat)
del dmat
df
design_matrix_cols
np.unique(design_matrix, axis=0)
```
# **Distilling Knowledge In Multiple Students Using GANs**
```
# %tensorflow_version 1.x
# !pip install --upgrade opencv-python==3.4.2.17
import numpy as np
import tensorflow as tf
import tensorflow.keras
import tensorflow.keras.backend as K
# import os
from tensorflow.keras.datasets import fashion_mnist,mnist,cifar10
# import keras.backend as K
from tensorflow.keras.layers import Conv2D,Activation,BatchNormalization,UpSampling2D,Embedding,ZeroPadding2D, Input, Flatten, Dense, Reshape, LeakyReLU, Dropout,MaxPooling2D
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.optimizers import Adam, SGD, RMSprop
from tensorflow.keras import regularizers
from tensorflow.keras.utils import Progbar
from tensorflow.keras.initializers import RandomNormal
import random
from sklearn.model_selection import train_test_split
# from keras.utils import np_utils
from tensorflow.keras import utils as np_utils
nb_classes = 10
batch_size = 128
maxepoches = 250
learning_rate = 0.1
lr_decay = 1e-6
lr_drop = 20
def lr_scheduler(epoch):
return learning_rate * (0.5 ** (epoch // lr_drop))
reduce_lr = tf.keras.callbacks.LearningRateScheduler(lr_scheduler)
#Loading and splitting the dataset into train, validation and test
(X_Train, y_Train), (X_test, y_test) = cifar10.load_data()
X_train, X_val, y_train, y_val = train_test_split(X_Train, y_Train, test_size=0.20)
# convert y_train and y_test to categorical binary values
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_val = np_utils.to_categorical(y_val, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)
X_Train.shape
# Reshape them to batch_size, width,height,#channels
X_train = X_train.reshape(40000, 32, 32, 3)
X_val = X_val.reshape(10000, 32, 32, 3)
X_test = X_test.reshape(10000, 32, 32, 3)
X_train = X_train.astype('float32')
X_val = X_val.astype('float32')
X_test = X_test.astype('float32')
# Normalize the values
X_train /= 255
X_val /= 255
X_test /= 255
# Teacher Network -- VGG16
init=RandomNormal(mean=0,stddev=0.02)
input_shape = (32, 32, 3) # Input shape of each image
weight_decay = 0.0005
def build_model():
model = Sequential()
model.add(Conv2D(64, (3, 3), padding='same',
input_shape=input_shape,kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))
model.add(Conv2D(64, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2),padding='same'))
model.add(Conv2D(128, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(128, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2),padding='same'))
model.add(Conv2D(256, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(256, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(256, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2),padding='same'))
model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2),padding='same'))
model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))
model.add(Conv2D(512, (3, 3), padding='same',kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2),padding='same'))
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(256,kernel_regularizer=regularizers.l2(weight_decay), name='dense_1'))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense(10, name='dense_2'))
model.add(Activation('softmax'))
return model
teacher = build_model()
sgd = SGD(lr=learning_rate, decay=lr_decay, momentum=0.9, nesterov=True)
teacher.compile(loss='categorical_crossentropy',optimizer=sgd, metrics=['accuracy'])
# teacher.fit(X_train,Y_train,batch_size=128,epochs=150,verbose=1,callbacks=[reduce_lr],validation_data=(X_val,Y_val))
teacher.load_weights("Cifar10_Teacher.h5")
loss, acc =teacher.evaluate(X_test, y_test, verbose=1)
loss, acc
#Collect the dense vector from the previous layer output and store it in a different model
teacher_WO_Softmax = Model(teacher.input, teacher.get_layer('dense_1').output)
#Extracting dense representation from the teacher network
train_dense = teacher_WO_Softmax.predict(X_train)
# val_dense = teacher_WO_Softmax.predict(X_val)
#Splitting the training dense vector among N students(in this case 2)
# 2 Students Case
# --------------------------------------------
s1Train=train_dense[:,:128]
s2Train=train_dense[:,128:]
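# Sketch (not part of the original notebook): the two-student slicing above
# generalizes to N students with np.split, provided the dense width (256 here)
# is divisible by N. Tiny self-contained check on a stand-in array:
import numpy as np
_demo_dense = np.arange(12).reshape(2, 6)        # stand-in for train_dense
_demo_chunks = np.split(_demo_dense, 2, axis=1)  # one equal chunk per student
assert _demo_chunks[0].shape == (2, 3) and _demo_chunks[1].shape == (2, 3)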
def define_model(name):
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(32, 32, 3), name=name))
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.3))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(16, activation='relu', kernel_initializer='he_uniform'))
model.add(Dropout(0.2))
model.add(Dense(128, activation='relu', kernel_initializer='he_uniform',name='req'+name))
# compile model
# opt = SGD(lr=0.001, momentum=0.9)
model.compile(optimizer='adam', loss='mse', metrics=['accuracy'])
return model
student1 = define_model('s1')
student1.summary()
# np (numpy) is used by the label-smoothing helpers below
BATCH_SIZE=32
def smooth_real_labels(y):
return y - 0.3+(np.random.random(y.shape)*0.5)
def smooth_fake_labels(y):
return y + (0.3 * np.random.random(y.shape))
def build_gan(gen,disc):
disc.trainable = False
input= Input(shape=input_shape)
output = gen(input)
output2= disc(output)
gan=Model(input,output2)
gan.compile('adam',loss=['binary_crossentropy','mse'],metrics=['accuracy'])
return gan
def build_sdiscriminator():
# update the first line according to your dense chunk
input2 = Input(shape=(128,),name='input')
inp=Dense(128)(input2)
leaky_relu = LeakyReLU(alpha=0.2)(inp)
conv3 = Dense(128,activation='relu')(leaky_relu)
b_n = BatchNormalization()(conv3)
# leaky_relu = LeakyReLU(alpha=0.2)(b_n)
conv3 = Dense(128,activation='relu')(leaky_relu)
b_n = BatchNormalization()(conv3)
# leaky_relu = LeakyReLU(alpha=0.2)(b_n)
conv3 = Dense(128,activation='relu')(b_n)
b_n = BatchNormalization()(conv3)
# leaky_relu = LeakyReLU(alpha=0.2)(b_n)
conv4 = Dense(256,activation='relu')(b_n)
b_n = BatchNormalization()(conv4)
# leaky_relu = LeakyReLU(alpha=0.2)(b_n)
conv4 = Dense(256,activation='relu')(b_n)
b_n = BatchNormalization()(conv4)
leaky_relu = LeakyReLU(alpha=0.2)(b_n)
conv4 = Dense(512)(leaky_relu)
b_n = BatchNormalization()(conv4)
leaky_relu = LeakyReLU(alpha=0.2)(b_n)
conv4 = Dense(512,activation='relu')(b_n)
b_n = BatchNormalization()(conv4)
leaky_relu = LeakyReLU(alpha=0.2)(b_n)
conv4 = Dense(1024)(leaky_relu)
b_n = BatchNormalization()(conv4)
leaky_relu = LeakyReLU(alpha=0.2)(b_n)
dense = Dense(1,activation='sigmoid')(b_n)
output2=Dense(128)(b_n)
disc = Model(input2,[dense,output2])
disc.compile(optd,loss=['binary_crossentropy','mse'],metrics=['accuracy'])
return disc
def training(generator,discriminator,gan,features,epo=20):
# Setup Models here
BATCH_SIZE = 128
discriminator.trainable = True
total_size = X_train.shape[0]
indices = np.arange(0,total_size ,BATCH_SIZE)
all_disc_loss = []
all_gen_loss = []
all_class_loss=[]
if total_size % BATCH_SIZE:
indices = indices[:-1]
for e in range(epo):
progress_bar = Progbar(target=len(indices))
np.random.shuffle(indices)
epoch_gen_loss = []
epoch_disc_loss = []
epoch_class_loss= []
for i,index in enumerate(indices):
# Write your code here
inputs=X_train[index:index+BATCH_SIZE]
real_image = features[index:index+BATCH_SIZE]
y_train = features[index:index+BATCH_SIZE]
y_real = np.ones((BATCH_SIZE,1))
y_fake = np.zeros((BATCH_SIZE,1))
#Generator Training
fake_images = generator.predict_on_batch(inputs)
#Disrciminator Training
disc_real_loss1,_,disc_real_loss2,_,_= discriminator.train_on_batch(real_image,[y_real,y_train])
disc_fake_loss1,_,disc_fake_loss2,_,_= discriminator.train_on_batch(fake_images,[y_fake,y_train])
#Gans Training
discriminator.trainable = False
gan_loss,_,gan_loss2,_,_ = gan.train_on_batch(inputs, [y_real,y_train])
gan_loss,_,gan_loss2,_,_ = gan.train_on_batch(inputs, [y_real,y_train])
gan_loss,_,gan_loss2,_,_ = gan.train_on_batch(inputs, [y_real,y_train])
gan_loss,_,gan_loss2,_,_ = gan.train_on_batch(inputs, [y_real,y_train])
discriminator.trainable = True
disc_loss = (disc_fake_loss1 + disc_real_loss1)/2
epoch_disc_loss.append(disc_loss)
progress_bar.update(i+1)
epoch_gen_loss.append((gan_loss))
avg_epoch_disc_loss = np.array(epoch_disc_loss).mean()
avg_epoch_gen_loss = np.array(epoch_gen_loss).mean()
all_disc_loss.append(avg_epoch_disc_loss)
all_gen_loss.append(avg_epoch_gen_loss)
print("Epoch: %d | Discriminator Loss: %f | Generator Loss: %f | " % (e+1,avg_epoch_disc_loss,avg_epoch_gen_loss))
return generator
# Reported results in the paper were achieved by training the network for 90+ epochs
```
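As an aside, the label-smoothing helpers defined above can be sanity-checked in isolation. A minimal, self-contained sketch (restating the two functions so it runs on its own): real labels are jittered into [0.7, 1.2) and fake labels into [0.0, 0.3), which keeps the discriminator from becoming over-confident.

```python
import numpy as np

def smooth_real_labels(y):
    # real labels 1 -> uniform in [0.7, 1.2)
    return y - 0.3 + (np.random.random(y.shape) * 0.5)

def smooth_fake_labels(y):
    # fake labels 0 -> uniform in [0.0, 0.3)
    return y + (0.3 * np.random.random(y.shape))

y_real = smooth_real_labels(np.ones((4, 1)))
y_fake = smooth_fake_labels(np.zeros((4, 1)))
print(((y_real >= 0.7) & (y_real < 1.2)).all())  # True
print((y_fake < 0.3).all())                      # True
```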
**2 Students**
```
optd = Adam(lr=0.0002)
opt = Adam(lr=0.0002)
discriminator1 = build_sdiscriminator()
discriminator2 = build_sdiscriminator()
s1=define_model("s1")
s2=define_model("s2")
gan1 = build_gan(s1,discriminator1)
gan2 = build_gan(s2,discriminator2)
s1 = training(s1,discriminator1,gan1,s1Train,epo=90)
s2 = training(s2,discriminator2,gan2,s2Train,epo=98)
o1=s1.get_layer("reqs1").output
o2=s2.get_layer("reqs2").output
output=tensorflow.keras.layers.concatenate([o1,o2])
output=Activation('relu')(output)
output2=Dropout(0.5)(output) # For regularization
output3=Dense(10,activation="softmax", name="d1")(output2)
mm2=Model([s1.get_layer("s1").input,s2.get_layer("s2").input], output3)
my_weights=teacher.get_layer('dense_2').get_weights()
mm2.get_layer('d1').set_weights(my_weights)
i=0
for l in mm2.layers[:-2]:
l.trainable=False
# print(l)
mm2.compile(loss='categorical_crossentropy',
optimizer=Adam(learning_rate=0.0002),
metrics=['accuracy'])
# Without finetune
batch_size = 256
mm2_history=mm2.fit([X_train,X_train], Y_train,
batch_size=batch_size,
epochs=60,
verbose=1,
validation_data=([X_val,X_val], Y_val))
l,a = mm2.evaluate([X_test,X_test], y_test)
l, a
import matplotlib.pyplot as plt
plt.plot(mm2_history.history['accuracy'])
plt.plot(mm2_history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(mm2_history.history['loss'])
plt.plot(mm2_history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
```
# Mount Drive & Login to Wandb
```
from google.colab import drive
from getpass import getpass
import urllib
import os
# Mount drive
drive.mount('/content/drive')
!pip install wandb -qqq
!wandb login
```
# Install dependencies
```
!rm -r pearl
!git clone https://github.com/PAL-ML/PEARL_v1.git pearl
%cd pearl
!pip install -r requirements.txt
%cd ..
!pip install git+git://github.com/ankeshanand/pytorch-a2c-ppo-acktr-gail
!pip install git+git://github.com/mila-iqia/atari-representation-learning.git
!pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex
!pip install ftfy regex tqdm
!pip install git+https://github.com/openai/CLIP.git
!pip install git+git://github.com/openai/baselines
! wget http://www.atarimania.com/roms/Roms.rar
! unrar x Roms.rar
! unzip ROMS.zip
! python -m atari_py.import_roms /content/ROMS
```
# Imports
```
# ML libraries
import torch.nn as nn
import torch
import pearl.src.benchmark.colab_data_preprocess as data_utils
from pearl.src.benchmark.probe_training_wrapper import run_probe_training
from pearl.src.benchmark.encoder_training_wrapper import run_encoder_training
from pearl.src.methods.encoders import LinearSTDIMEncoder
from pearl.src.benchmark.utils import appendabledict, load_encoder_from_checkpoint
# Models
import clip
# Data processing
from PIL import Image
from torchvision.transforms import Compose, Resize, Normalize
# Misc
import numpy as np
import wandb
import os
# Plotting
import seaborn as sns
import matplotlib.pyplot as plt
```
# Helper function(s)
```
def get_trained_encoder(rep_encoder, tr_episodes, val_episodes, config, wandb, method=None, train_encoder=False):
if train_encoder:
trained_encoder = run_encoder_training(rep_encoder, tr_episodes, val_episodes, config, wandb, method=method)
else:
trained_encoder = load_encoder_from_checkpoint(config['encoder_models_dir'], config['model_name'], LinearSTDIMEncoder, input_size=rep_encoder.input_size, output_size=rep_encoder.feature_size, to_train=False)
return trained_encoder
```
# Initialization & constants
General
```
env_name = "BreakoutNoFrameskip-v4"
collect_mode = "random_agent" # random_agent or ppo_agent
steps = 50000
training_input = "embeddings" # embeddings or images
input_resolution1 = "full-image"
input_resolution2 = "2x2patch" # full-image, 4x4patch, 2x2patch
num_patches = 4
```
Encoder params
```
input_size = 512 * (num_patches + 1)
feature_size = 512 * (num_patches + 1)
e_lr = 3e-4
e_batch_size = 64
e_num_epochs = 100
e_patience = 15
encoder_training_method = "sdim"
encoder_type = "linear-2x2"
model_name = encoder_training_method + '-' + encoder_type
train_encoder = True
use_encoder = True
trained_encoder = None
```
Probe Params
```
p_lr = 3e-4
p_batch_size = 64
p_num_epochs = 100
p_patience = 15
probe_type = "linear"
probe_name = encoder_training_method + "-2x2-" + probe_type
```
Paths
```
data_path_suffix = "-latest-04-05-2021"
probe_models_dir = os.path.join("drive/MyDrive/PAL_HILL_2021/Atari-RL/Models/probes", probe_name, env_name)
encoder_models_dir = os.path.join("drive/MyDrive/PAL_HILL_2021/Atari-RL/Models/encoders", encoder_type, env_name)
data_dir = os.path.join("/content/drive/MyDrive/PAL_HILL_2021/Atari-RL/Images_Labels_Clip_embeddings", env_name + data_path_suffix)
if not os.path.exists(probe_models_dir):
os.makedirs(probe_models_dir)
if not os.path.exists(encoder_models_dir):
os.makedirs(encoder_models_dir)
```
Wandb
```
wandb.init(project='atari-clip')
config = wandb.config
config.game = "{}-full-sdim-2x2-linear-probe".format(env_name.replace("NoFrameskip-v4", ""))
wandb.run.name = "{}_full_sdim_2x2_linear_may_29".format(env_name.replace("NoFrameskip-v4", ""))
```
# Get episode data
full-images
```
tr_episodes_full, val_episodes_full,\
tr_labels_full, val_labels_full,\
test_episodes_full, test_labels_full = data_utils.get_data(training_input, data_dir, env_name=env_name, steps=steps, collect_mode=collect_mode, color=True, input_resolution=input_resolution1)
```
2x2 patches
```
tr_episodes_2patch, val_episodes_2patch, \
tr_labels_2patch, val_labels_2patch, \
test_episodes_2patch, test_labels_2patch = data_utils.get_data(training_input, data_dir, env_name=env_name, steps=steps, collect_mode=collect_mode, color=True, input_resolution=input_resolution2)
tr_episodes = data_utils.concat_patch_embeddings_with_full_img(tr_episodes_2patch, tr_episodes_full, num_patches=num_patches)
val_episodes = data_utils.concat_patch_embeddings_with_full_img(val_episodes_2patch, val_episodes_full, num_patches=num_patches)
test_episodes = data_utils.concat_patch_embeddings_with_full_img(test_episodes_2patch, test_episodes_full, num_patches=num_patches)
```
# Get trained encoder
Instantiate encoder
```
rep_encoder = LinearSTDIMEncoder(input_size, feature_size, num_patches, log=False)
```
Get encoder (Train/Retrieve from drive)
```
config = {}
config['epochs'] = e_num_epochs
config['lr'] = e_lr
config['patience'] = e_patience
config['batch_size'] = e_batch_size
config['save_dir'] = encoder_models_dir
config['model_name'] = model_name
config['env_name'] = env_name
trained_encoder = get_trained_encoder(rep_encoder, tr_episodes, val_episodes, config, wandb, method=encoder_training_method, train_encoder=train_encoder)
```
# Run probe training
```
run_probe_training(training_input, trained_encoder, probe_type, p_num_epochs, p_lr, p_patience, wandb, probe_models_dir, p_batch_size,
tr_episodes, val_episodes, tr_labels_full, val_labels_full, test_episodes, test_labels_full, use_encoder=use_encoder)
```
# Example: CanvasXpress dotplot Chart No. 4
This example page demonstrates how to use the Python package to create a chart that matches the CanvasXpress online example located at:
https://www.canvasxpress.org/examples/dotplot-4.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.
Everything required for the chart to render is included in the code below. Simply run the code block.
```
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="dotplot4",
data={
"y": {
"vars": [
"Women",
"Men"
],
"smps": [
"MIT",
"Stanford",
"Harvard",
"U.Penn",
"Princeton",
"Chicago",
"Georgetown",
"Tufts",
"Yale",
"Columbia",
"Duke",
"Dartmouth",
"NYU",
"Notre Dame",
"Cornell",
"Michigan",
"Brown",
"Berkeley",
"Emory",
"UCLA",
"SoCal"
],
"data": [
[
94,
96,
112,
92,
90,
78,
94,
76,
79,
86,
93,
84,
67,
73,
80,
62,
72,
71,
68,
64,
72
],
[
152,
151,
165,
141,
137,
118,
131,
112,
114,
119,
124,
114,
94,
100,
107,
84,
92,
88,
82,
78,
81
]
]
},
"z": {
"Connect": [
1,
1
]
}
},
config={
"axisAlgorithm": "wilkinson",
"connectBy": "Connect",
"dotplotType": "stacked",
"graphType": "Dotplot",
"showTransition": False,
"smpTitle": "School",
"sortDir": "descending",
"theme": "CanvasXpress",
"title": "Gender Earnings Disparity",
"xAxis2Title": "Annual Salary",
"xAxisMinorTicks": False,
"xAxisShow": False,
"xAxisTickFormat": "$%sK",
"xAxisTitle": "Annual Salary"
},
width=613,
height=613,
events=CXEvents(),
after_render=[
[
"sortSamplesByVariable",
[
"Men",
None,
None
]
]
],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="dotplot_4.html")
```
# Exploring the Capabilities of the STMicroelectonics Sensor Tile
<div style="text-align: center; ">
<figure>
<img src="img/sensor_tile.png" alt="STEVAL-STWINKT1" style="background:none; border:none; box-shadow:none; text-align:center" width="400px"/>
</figure>
</div>
## Sensing Elements
The STEVAL-STWINKT1 sensor tile contains nine sensing elements (as shown in the figure below):
* **Vibrometer (IIS3DWB)**: Max Sampling Freq = 26.667 kHz
* **3D Accelerometer (IIS2DH)**: Max Sampling Freq = 1344 Hz
* **6-Axis Inertial-Measurement-Unit (ISM330DHCX)**: Max Sampling Freq = 6667 Hz
* **Humidity and Temperature (HTS221)**: Max Sampling Freq = 12.5 Hz
* **Temperature (STTS751)**: Max Sampling Freq = 4 Hz
* **Pressure (LPS22HH)**: Max Sampling Freq = 200 Hz, sensor contains both pressure and temperature
* **3D Magnetometer (IIS2MDC)**: Max Sampling Freq = 100 Hz
* **Analog Microphone (MP23ABS1)**: Max Sampling Freq = 192 kHz
* **Digital Microphone (IMP34DT05)**: Max Sampling Freq = 48 kHz, Suitable for vibration monitoring

## Collecting and Analyzing Data
STMicroelectronics provides [high-speed data-logging firmware](https://www.st.com/en/embedded-software/fp-sns-datalog1.html?ecmp=tt9470_gl_link_feb2019&rt=um&id=UM2621) and an associated Python package for analyzing the data.
I collected data from the fan, as shown in the picture below.
<div style="text-align: center; ">
<figure>
<img src="img/fan.jpg" alt="fan and sensor tile" style="background:none; border:none; box-shadow:none; text-align:center" width="500px"/>
<div style="text-align: center; ">
<figcaption style="color:grey; font-size:smaller"> The fan configuration for collecting data.</figcaption>
</div>
</figure>
</div>
Import appropriate packages, and if running in Google Colab, download the data and the HSD module.
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
from scipy import signal, fftpack
import os
import pathlib
from pathlib import Path
import zipfile # for extracting data zip files
# to clear outputs from cells
from IPython.display import clear_output
root_dir = Path.cwd() # set the root directory as a Pathlib path
folder_raw_data = root_dir / 'data/' # raw data folder that holds the .zip .mat files for milling data
# NEED TO RUN THIS IF RUNNING IN GOOGLE COLAB
# if the data folder does not exist, then you are likely
# in a google colab environment. In that case, we will create the
# data and processed data folders and download the appropriate files
if folder_raw_data.exists() == False:
pathlib.Path(folder_raw_data).mkdir(parents=True, exist_ok=True)
os.chdir(folder_raw_data)# move to raw data folder
!wget 'https://github.com/tvhahn/High-Speed-Datalog/raw/master/data/STWIN_00020_ls_window_sill.zip'
!wget 'https://github.com/tvhahn/High-Speed-Datalog/raw/master/data/STWIN_00019_hs_window_sill.zip'
!wget 'https://github.com/tvhahn/High-Speed-Datalog/raw/master/data/STWIN_00010_shaking.zip'
# change directory back to root and download the HSD SDK (sdk.zip)
os.chdir(root_dir)
!wget 'https://github.com/tvhahn/High-Speed-Datalog/raw/master/sdk.zip'
# extract sdk from the zip file
with zipfile.ZipFile('sdk.zip', 'r') as zip_ref:
zip_ref.extractall()
clear_output(wait=False)
```
Import the high-speed data-logger package. Set the location of the collected data and unzip the file.
```
from HSD import HSDatalog as HSD
# extract data from the zip file
with zipfile.ZipFile(folder_raw_data / 'STWIN_00020_ls_window_sill.zip', 'r') as zip_ref:
zip_ref.extractall(folder_raw_data)
```
Initialize the HSDatalog class
```
acq_folder = folder_raw_data / 'STWIN_00020_ls_window_sill' # this is for the fan at low speed
hsd = HSD.HSDatalog(acquisition_folder=acq_folder)
```
HSDatalog allows you to obtain all the information regarding the acquisition and the board that generated it.
```
active_sensor_list = hsd.getSensorList(only_active=True)
# print information on each sensor
for sens in active_sensor_list:
sensor_name = sens.name
sensor = hsd.getSensor(sensor_name)
print ("Sensor: {}".format(sensor.name))
# can get the capabilities (descriptor) for each sensor
s_descriptor_list = sensor.sensor_descriptor.sub_sensor_descriptor
# can get configuration of sensor
s_status_list = sensor.sensor_status.sub_sensor_status
for i, s in enumerate(s_descriptor_list):
print(f" --> {s.sensor_type} - ODR: {s_status_list[i].odr} , FS: {s_status_list[i].fs} , SamplesPerTs {s_status_list[i].samples_per_ts} , Unit: {s.unit}")
```
We can also get the list of sensor data files in your selected acquisition folder:
```
file_names = hsd.getDataFileList()
print(file_names)
```
## Fast-Fourier Transform Low-Speed Data
We will get an FFT of the z-axis acceleration for the fan running at low speed. I collected data for about 2 minutes.
```
# extract data from the zip file
with zipfile.ZipFile(folder_raw_data / 'STWIN_00020_ls_window_sill.zip', 'r') as zip_ref:
zip_ref.extractall(folder_raw_data)
# load the data
acq_folder = folder_raw_data / 'STWIN_00020_ls_window_sill' # this is for the fan at low speed
hsd = HSD.HSDatalog(acquisition_folder=acq_folder)
# select the sensor
sensor_name = "IIS3DWB"
sensor_type = "ACC"
# HSD method for extracting info into pandas dataframe
df = hsd.get_dataFrame(sensor_name, sensor_type)
print('Duration (s):', df['Time'].iloc[-1])
df.head()
```
Before applying the FFT, we will first detrend and then window the signal. This is how:
```
# practice detrending
fig, axes = plt.subplots(1, 1, figsize=(15, 8))
plt.plot(df['A_z [g]'], alpha=0.5, label='original signal')
y_detrend = signal.detrend(df['A_z [g]'], type="linear")
plt.plot(y_detrend, alpha=0.5, label='detrended signal')
# apply either a hamming or kaiser windowing function
y_detrend *= np.hamming(len(y_detrend))
# y_detrend *= np.kaiser(len(y_detrend), 8)
plt.plot(y_detrend, alpha=0.5, label='windowed signal')
plt.legend(loc='center left')
```
Create a generic function for plotting the FFT.
```
def create_fft(df, x_name='Time', y_name='A_z [g]', sample_freq=26667.0, show_plot=True, window='hamming', beta=8):
'''Create FFT plot from a pandas dataframe'''
y = df[y_name].to_numpy(dtype="float64") # convert to a numpy array
x = df[x_name].to_numpy(dtype="float64")
# parameters for plot
T = 1.0 / sample_freq # sample spacing
N = len(y) # number of sample points
# do some preprocessing of the current signal
y_detrend = y - np.mean(y)
y_detrend = signal.detrend(y_detrend, type="constant") # detrended signal
if window == 'hamming':
y_detrend *= np.hamming(N) # apply a hamming window. Why? https://dsp.stackexchange.com/a/11323
else:
y_detrend *= np.kaiser(len(y_detrend), beta)
# FFT on time domain signal
yf = fftpack.rfft(y_detrend)
yf = 2.0 / N * np.abs(yf[: int(N / 2.0)])
xf = np.linspace(0.0, 1.0 / (2.0 * T), N // 2)/2
if show_plot:
# setup the seaborn plot
sns.set(font_scale=1.0, style="whitegrid")
fig, axes = plt.subplots(2, 1, figsize=(12, 8), sharex=False, sharey=False)
fig.tight_layout(pad=5.0)
pal = sns.cubehelix_palette(6, rot=-0.25, light=0.7) # pick nice color for plot
# plot time domain signal
axes[0].plot(x, y, marker="", label="Best model", color=pal[3], linewidth=0.8)
axes[0].set_title("Time Domain", fontdict={"fontweight": "normal"})
axes[0].set_xlabel("Time (seconds)")
axes[0].set_ylabel("Acceleration, g")
# axes[0].set_yticklabels([])
# plot the frequency domain signal
axes[1].plot(xf, yf, marker="", label="Best model", color=pal[3], linewidth=0.8)
axes[1].set_title("Frequency Domain", fontdict={"fontweight": "normal"})
axes[1].set_xlabel("Frequency (Hz)")
axes[1].set_ylabel("Acceleration, g")
plt.ticklabel_format(axis="y", style="sci", scilimits=(0,0))
# clean up the sub-plots to make everything pretty
for ax in axes.flatten():
ax.yaxis.set_tick_params(labelleft=True, which="major")
ax.grid(False)
# in case you want to save the figure (just uncomment)
# plt.savefig('time_freq_domains.svg',dpi=600,bbox_inches = "tight")
plt.show()
return xf, yf
xf, yf = create_fft(df, x_name='Time', y_name='A_z [g]', sample_freq=26667.0, show_plot=True, window='hamming')
```
The frequency-domain plot above shows a number of frequency peaks, along with additional sidebands. Let's zoom in.
```
def plot_freq(xf, yf, max_freq_to_plot=1000, peak_height=0.0001, peak_distance=100):
# select the index number where xf is less than a certain freq
i = np.where(xf<max_freq_to_plot)[0][-1]
peak_distance_index = peak_distance * i / max_freq_to_plot
# setup the seaborn plot
sns.set(font_scale=1.0, style="whitegrid")
fig, axes = plt.subplots(1, 1, figsize=(12, 8), sharex=False, sharey=False)
fig.tight_layout(pad=5.0)
pal = sns.cubehelix_palette(6, rot=-0.25, light=0.7) # pick nice color for plot
# plot the frequency domain signal
axes.plot(xf[:i], yf[:i], marker="", label="Best model", color=pal[3], linewidth=0.8)
axes.set_title("Frequency Domain", fontdict={"fontweight": "normal"})
axes.set_xlabel("Frequency (Hz)")
axes.set_ylabel("Acceleration, g")
axes.yaxis.set_tick_params(labelleft=True, which="major")
axes.grid(False)
peaks, _ = signal.find_peaks(yf[:i], height=peak_height, distance=peak_distance_index)
plt.plot(xf[peaks], yf[peaks], "x", color='#d62728', markersize=10)
for p in peaks:
axes.text(
x=xf[p]+max_freq_to_plot/50.0,
y=yf[p],
s=f"{xf[p]:.1f} Hz",
horizontalalignment="left",
verticalalignment="center",
size=12,
color="#d62728",
rotation="horizontal",
weight="normal",
)
plt.ticklabel_format(axis="y", style="sci", scilimits=(0,0))
plt.show()
plot_freq(xf, yf, max_freq_to_plot=1500, peak_height=0.0001, peak_distance=100)
```
It's tough to say what all of the peaks represent. Here is what we know:
* The operating speed of the fan, according to the technical documentation, is 1300 RPM (21.7 Hz). However, the documentation does not state whether the 1300 RPM applies to the low, medium, or high speed.
* The fan has 7 blades. 7 x 21.7 ≈ 152 Hz, which is in the same range as the 137.4 Hz peak on the plot. Therefore, 137.4 Hz is likely the blade-pass frequency.
Based on these deductions, the **rotational speed of the fan at low speed is likely 19.6 Hz**.
There are two small bearings in the fan. Some of the peaks and their sidebands are likely related to the ball-pass frequencies.
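The arithmetic behind these deductions can be checked directly. A short sketch (the 1300 RPM figure is from the fan's documentation; 137.4 Hz is the peak above, and 164.7 Hz is the peak from the high-speed run later in the notebook):

```python
# Blade-pass frequency = number of blades x rotational frequency,
# so rotational frequency = observed peak / number of blades.
blades = 7
rated_hz = 1300 / 60.0              # documented speed ≈ 21.7 Hz
low_speed_hz = 137.4 / blades       # ≈ 19.6 Hz at low speed
high_speed_hz = 164.7 / blades      # ≈ 23.5 Hz at high speed
print(round(rated_hz, 1), round(low_speed_hz, 1), round(high_speed_hz, 1))
# 21.7 19.6 23.5
```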
## Fast-Fourier Transform High-Speed Data
Now let's see how that changes when the fan is running at a "high" speed.
```
# extract data from the zip file
with zipfile.ZipFile(folder_raw_data / 'STWIN_00019_hs_window_sill.zip', 'r') as zip_ref:
zip_ref.extractall(folder_raw_data)
# load the data
acq_folder = folder_raw_data / 'STWIN_00019_hs_window_sill' # this is for the fan at low speed
hsd = HSD.HSDatalog(acquisition_folder=acq_folder)
sensor_name = "IIS3DWB"
sensor_type = "ACC"
# HSD method for extracting info into pandas dataframe
df = hsd.get_dataFrame(sensor_name, sensor_type)
print('Duration (s):', df['Time'].iloc[-1])
xf, yf = create_fft(df, x_name='Time', y_name='A_z [g]', sample_freq=26667.0, show_plot=True, window='hamming', beta=0)
plot_freq(xf, yf, max_freq_to_plot=1500, peak_height=0.000125,peak_distance=100)
```
The 164.7 Hz peak is likely the blade-pass frequency, which implies a **rotational speed of the fan at high speed of about 23.5 Hz** (164.7 / 7).
Most interesting is the large spike at 657 Hz. My guess is that this relates to some sort of natural frequency of the system.
## Fast-Fourier Transform when Shaking Sensor Tile
Now let's look at the data from when we were shaking the sensor tile up and down (giving the sensor a shake about every second or so).
```
# extract data from the zip file
with zipfile.ZipFile(folder_raw_data / 'STWIN_00010_shaking.zip', 'r') as zip_ref:
zip_ref.extractall(folder_raw_data)
# load the data
acq_folder = folder_raw_data / 'STWIN_00010_shaking' # this is for the fan at low speed
hsd = HSD.HSDatalog(acquisition_folder=acq_folder)
sensor_name = "IIS3DWB"
sensor_type = "ACC"
# HSD method for extracting info into pandas dataframe
df = hsd.get_dataFrame(sensor_name, sensor_type)
print('Duration (s):', df['Time'].iloc[-1])
xf, yf = create_fft(df, x_name='Time', y_name='A_z [g]', sample_freq=26667.0, show_plot=True, window='hamming', beta=0)
plot_freq(xf, yf, max_freq_to_plot=15, peak_height=0.02,peak_distance=1.2)
```
Sure enough, I was shaking the sensor tile at about 2 Hz. The harmonics of this, in my estimation, are due to the lack of rigidity in the system. The screws on the sensor tile were not 100% tight, and the battery in the sensor tile case can "clack" around.
## Other Visualizations
The HSDatalog class also has some built-in methods for quickly visualizing the data (but we will build our own too).
```
hsd.get_sensorPlot(sensor_name, sensor_type)
```
We can plot all the sensors the same way.
```
active_sensor_list = hsd.getSensorList(only_active=True)
plots = []
for s in active_sensor_list:
for ssd in s.sensor_descriptor.sub_sensor_descriptor:
hsd.get_sensorPlot(s.name, ssd.sensor_type)
```
When we talk about quantum computing, we actually talk about several different paradigms. The most common one is gate-model quantum computing, in the vein we discussed in the previous notebook. In this case, gates are applied on qubit registers to perform arbitrary transformations of quantum states made up of qubits.
The second most common paradigm is quantum annealing. This paradigm is often also referred to as adiabatic quantum computing, although there are subtle differences. Quantum annealing solves a more specific problem -- universality is not a requirement -- which makes it an easier, albeit still difficult, engineering challenge to scale up. As of 2018, the technology had reached about 2000 superconducting qubits, compared to fewer than 100 qubits on gate-model quantum computers. D-Wave Systems has been building superconducting quantum annealers for over a decade and holds the record for the number of qubits -- 2048. More recently, an IARPA project was launched to build novel superconducting quantum annealers. A quantum optics implementation was also made available by QNNcloud that implements a coherent Ising model. Its restrictions are different from those of superconducting architectures.
Gate-model quantum computing is conceptually easier to understand: it is the generalization of digital computing. Instead of deterministic logical operations of bit strings, we have deterministic transformations of (quantum) probability distributions over bit strings. Quantum annealing requires some understanding of physics, which is why we introduced classical and quantum many-body physics in a previous notebook. Over the last few years, quantum annealing inspired gate-model algorithms that work on current and near-term quantum computers (see the notebook on variational circuits). So in this sense, it is worth developing an understanding of the underlying physics model and how quantum annealing works, even if you are only interested in gate-model quantum computing.
While there is a plethora of quantum computing languages, frameworks, and libraries for the gate-model, quantum annealing is less well-established. D-Wave Systems offers an open source suite called Ocean. A vendor-independent solution is XACC, an extensible compilation framework for hybrid quantum-classical computing architectures, but the only quantum annealer it maps to is that of D-Wave Systems. Since XACC is a much larger initiative that extends beyond annealing, we choose a few much simpler packages from Ocean to illustrate the core concepts of this paradigm. However, before diving into the details of quantum annealing, it is worth taking a slight detour to connect the unitary evolution we discussed in a closed system and in the gate-model paradigm and the Hamiltonian describing a quantum many-body system. We also briefly discuss the adiabatic theorem, which provides the foundation why quantum annealing would work at all.
# Unitary evolution and the Hamiltonian
We introduced the Hamiltonian as an object describing the energy of a classical or quantum system. Something more is true: it gives a description of a system evolving with time. This formalism is expressed by the Schrödinger equation:
$$
i\hbar {\frac {d}{dt}}|\psi(t)\rangle = H|\psi(t)\rangle,
$$
where $\hbar$ is the reduced Planck constant. Previously we said that it is a unitary operator that evolves state. That is exactly what we get if we solve the Schrödinger equation for some time $t$: $U = \exp(-i Ht/\hbar)$. Note that we used that the Hamiltonian does not depend on time. In other words, every unitary we talked about so far has some underlying Hamiltonian.
The Schrödinger equation in the above form is the time-dependent variant: the state depends on time. The time-independent Schrödinger equation reflects what we said about the Hamiltonian describing the energy of the system:
$$
H|\psi \rangle =E|\psi \rangle,
$$
where $E$ is the total energy of the system.
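A toy numerical illustration of the connection between the Hamiltonian and the unitary (working in units where $\hbar = 1$ for simplicity): for a time-independent $H$, solving the Schrödinger equation gives $U = \exp(-iHt)$, which is always unitary.

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[0, 1], [1, 0]])   # Pauli-X as a toy Hamiltonian
t = 0.3
U = expm(-1j * H * t)            # the unitary generated by H over time t
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True: U is unitary
# For Pauli-X (which squares to the identity), U has the closed form
# cos(t) I - i sin(t) X:
print(np.allclose(U, np.cos(t) * np.eye(2) - 1j * np.sin(t) * H))  # True
```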
# The adiabatic theorem and adiabatic quantum computing
An adiabatic process means that conditions change slowly enough for the system to adapt to the new configuration. For instance, in a quantum mechanical system, we can start from some Hamiltonian $H_0$ and slowly change it to some other Hamiltonian $H_1$. The simplest change could be a linear schedule:
$$
H(t) = (1-t) H_0 + t H_1,
$$
for $t\in[0,1]$ on some time scale. This Hamiltonian depends on time, so solving the Schrödinger equation is considerably more complicated. The adiabatic theorem says that if the change in the time-dependent Hamiltonian occurs slowly, the resulting dynamics remain simple: starting close to an eigenstate, the system remains close to
an eigenstate. This implies that if the system started in the ground state, if certain conditions are met, the system stays in the ground state.
We call the energy difference between the ground state and the first excited state the gap. If $H(t)$ has a positive gap for each $t$ during the transition and the change happens slowly, then the system stays in the ground state. If we denote the time-dependent gap by $\Delta(t)$, a coarse approximation of the speed limit scales as $1/\min(\Delta(t))^2$.
This theorem allows something highly unusual. We can reach the ground state of an easy-to-solve quantum many-body system, and then change the Hamiltonian to one describing a system we are interested in. For instance, we could start with the Hamiltonian $\sum_i \sigma^X_i$ -- its ground state is just the equal superposition. Let's see this on two sites:
```
import numpy as np
np.set_printoptions(precision=3, suppress=True)
X = np.array([[0, 1], [1, 0]])
IX = np.kron(np.eye(2), X)
XI = np.kron(X, np.eye(2))
H_0 = - (IX + XI)
λ, v = np.linalg.eigh(H_0)
print("Eigenvalues:", λ)
print("Eigenstate for lowest eigenvalue", v[:, 0])
```
Then we could turn this Hamiltonian slowly into a classical Ising model and read out the global solution.
<img src="../figures/annealing_process.svg" alt="Annealing process" style="width: 400px;"/>
Adiabatic quantum computation exploits this phenomenon and is able to perform universal calculations with the final Hamiltonian being $H=-\sum_{<i,j>} J_{ij} \sigma^Z_i \sigma^Z_{j} - \sum_i h_i \sigma^Z_i - \sum_{<i,j>} g_{ij} \sigma^X_i\sigma^X_j$. Note that this is not the transverse-field Ising model: the last term is an X-X interaction. If a quantum computer respects the speed limit, guarantees a finite gap, and implements this Hamiltonian, then it is equivalent to the gate model with some overhead.
The quadratic scaling in the gap does not appear too bad, so can we solve NP-hard problems faster with this paradigm? It is unlikely. The gap is highly problem-dependent, and genuinely difficult problems tend to have an exponentially small gap. The speed limit then scales as the inverse square of an exponentially small quantity, so the overall time required becomes exponentially large.
# Quantum annealing
A theoretical obstacle to adiabatic quantum computing is that calculating the speed limit is clearly not trivial; in fact, it is harder than solving the original problem of finding the ground state of some Hamiltonian of interest. Engineering constraints also apply: the qubits decohere, the environment has finite temperature, and so on. *Quantum annealing* drops the strict requirements and instead of respecting speed limits, it repeats the transition (the annealing) over and over again. Having collected a number of samples, we pick the spin configuration with the lowest energy as our solution. There is no guarantee that this is the ground state.
Quantum annealing has a slightly different software stack than gate-model quantum computers. Instead of a quantum circuit, the level of abstraction is the classical Ising model -- the problem we are interested in solving must be in this form. Then, just like superconducting gate-model quantum computers, superconducting quantum annealers also suffer from limited connectivity. In this case, it means that if our problem's connectivity does not match that of the hardware, we have to find a graph minor embedding. This will combine several physical qubits into a logical qubit. The workflow is summarized in the following diagram [[1](#1)]:
<img src="../figures/quantum_annealing_workflow.png" alt="Software stack on a quantum annealer" style="width: 400px;"/>
A possible classical solver for the Ising model is the simulated annealer that we have seen before:
```
import dimod
J = {(0, 1): 1.0, (1, 2): -1.0}
h = {0:0, 1:0, 2:0}
model = dimod.BinaryQuadraticModel(h, J, 0.0, dimod.SPIN)
sampler = dimod.SimulatedAnnealingSampler()
response = sampler.sample(model, num_reads=10)
print("Energy of samples:")
print([solution.energy for solution in response.data()])
```
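The sample-selection step -- keeping the lowest-energy configuration -- can be sketched without an annealer. Using dimod's sign convention $E(s) = \sum_i h_i s_i + \sum_{(i,j)} J_{ij} s_i s_j$ for the same problem as above:

```python
# The sample-selection step without an annealer: evaluate the Ising energy
# E(s) = sum_i h_i s_i + sum_{(i,j)} J_ij s_i s_j (dimod's sign convention)
# for candidate spin configurations and keep the lowest one.
from itertools import product

J = {(0, 1): 1.0, (1, 2): -1.0}
h = {0: 0.0, 1: 0.0, 2: 0.0}

def ising_energy(spins):
    energy = sum(h[i] * s for i, s in enumerate(spins))
    energy += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return energy

# brute force over all 2^3 configurations stands in for the annealing samples
best = min(product([-1, 1], repeat=3), key=ising_energy)
print(best, ising_energy(best))
```

On real annealing output we would of course not enumerate all configurations, only the returned samples.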
Let's take a look at the minor embedding problem. This part is NP-hard in itself, so we normally use probabilistic heuristics to find an embedding. For instance, many generations of the quantum annealers produced by D-Wave Systems have unit cells containing a $K_{4,4}$ bipartite fully connected graph, with two remote connections from each qubit going to qubits in neighbouring unit cells. A unit cell with its local and remote connections indicated is depicted in the following figure:
<img src="../figures/unit_cell.png" alt="Unit cell in Chimera graph" style="width: 80px;"/>
This is called the Chimera graph. The current largest hardware has 2048 qubits, consisting of $16\times 16$ unit cells of 8 qubits each. The Chimera graph is available as a `networkx` graph in the package `dwave_networkx`. We draw a smaller version, consisting of $2\times 2$ unit cells.
```
import matplotlib.pyplot as plt
import dwave_networkx as dnx
%matplotlib inline
connectivity_structure = dnx.chimera_graph(2, 2)
dnx.draw_chimera(connectivity_structure)
plt.show()
```
Let's create a graph that certainly does not fit this connectivity structure: the complete graph $K_9$ on nine nodes.
```
import networkx as nx
G = nx.complete_graph(9)
plt.axis('off')
nx.draw_networkx(G, with_labels=False)
import minorminer
embedded_graph = minorminer.find_embedding(G.edges(), connectivity_structure.edges())
```
Let's plot this embedding:
```
dnx.draw_chimera_embedding(connectivity_structure, embedded_graph)
plt.show()
```
Qubits that have the same colour correspond to a logical node in the original problem defined by the $K_9$ graph. Qubits combined in such a way form a chain. Even though our problem has only 9 variables (nodes), we used almost all 32 available qubits of the toy Chimera graph. Let's find the maximum chain length:
```
max_chain_length = 0
for _, chain in embedded_graph.items():
    if len(chain) > max_chain_length:
        max_chain_length = len(chain)
print(max_chain_length)
```
The chain on the hardware is implemented by having strong couplings between the elements in a chain -- in fact, twice as strong as what the user can set. Nevertheless, long chains can break, which means we receive inconsistent results. In general, we prefer shorter chains, so we do not waste physical qubits and we obtain more reliable results.
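The embedding returned by `minorminer` is a dict mapping each logical node to its chain of physical qubits, so chain statistics are one-liners. The embedding below is a made-up example, not real `minorminer` output:

```python
# A (made-up) minor embedding: each logical node maps to a chain of
# physical qubits; summarize the chain lengths.
embedding = {0: [3, 7, 12], 1: [4], 2: [5, 9], 3: [10, 11, 14, 15]}

chain_lengths = [len(chain) for chain in embedding.values()]
print("physical qubits used:", sum(chain_lengths))  # 10
print("longest chain:", max(chain_lengths))         # 4
```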
# References
[1] M. Fingerhuth, T. Babej, P. Wittek. (2018). [Open source software in quantum computing](https://doi.org/10.1371/journal.pone.0208561). *PLOS ONE* 13(12):e0208561. <a id='1'></a>
# Deep Learning for Text
```
import sys
import os
import pandas as pd
# FOLDERS
package_path = os.path.dirname(os.getcwd())
data_path = os.path.join(package_path, 'data')
experiments_path = os.path.join(package_path, 'experiments')
# LOAD DATA
input_file = os.path.join(data_path, 'train.json')
df = pd.read_json(input_file)
df.head()
```
The `ingredients` entries in the dataframe are lists, and each **list** element holds an individual ingredient of a particular recipe. To make the tokenization process easier, we will convert each recipe into a single **string**, so let's create a new dataframe column called `ingredients_str`.
```
df['ingredients_str'] = [', '.join(i).strip() for i in df['ingredients']]
df['ingredients_str'].head()
```
Now each recipe is a **string** with the ingredients separated by commas, so it should be easier to proceed with the tokenization process.
## Using *Keras* for word level representations.
The `Keras` library has a class that handles the tokenization of text documents. Next, we are going to use the `Tokenizer` class to create **word-level representations**:
Note that for this problem the expression **word-level representation** does not mean that we are going to tokenize each word separately. What we are going to do is tokenize each ingredient inside a particular recipe, even if it spans more than one word. The example below should clarify this.
**Task:** Tokenize the following recipe: *ground black pepper, cold water*
A standard word tokenizer would produce:
* ['ground', 'black', 'pepper', 'cold', 'water']
But for this problem we will tokenize the recipe in the following way:
* ['ground black pepper', 'cold water']
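The intended ingredient-level split can be reproduced with plain Python string operations:

```python
# The ingredient-level split described above, in plain Python: the comma is
# the token boundary, so multi-word ingredients stay intact.
recipe = "ground black pepper, cold water"
tokens = [ingredient.strip() for ingredient in recipe.split(',')]
print(tokens)  # → ['ground black pepper', 'cold water']
```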
```
from keras.preprocessing.text import Tokenizer
# create a Tokenizer object keeping the 6000 most frequent tokens
tokenizer = Tokenizer(num_words=6000, split=', ', lower=True)
# builds the word index
tokenizer.fit_on_texts(df['ingredients'])
# recover the word index
word_index = tokenizer.word_index
# turn strings into lists of integer indices
sequences = tokenizer.texts_to_sequences(df['ingredients'])
for key in list(word_index.keys())[:5]:
    print('word_index[%s] = %s' % (key, word_index[key]))
```
As we can see above, `word_index` is a dictionary with an integer index for each word/ingredient present in the dataset. The words/ingredients are automatically ordered by frequency, with the most common first.
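An illustrative sketch of how such a frequency-ranked index can be built with the standard library (Keras does something similar internally): the most common token gets index 1, the next index 2, and so on.

```python
# Frequency-ranked token index built with the standard library:
# the most common token gets index 1, the next index 2, and so on.
from collections import Counter

recipes = [["salt", "water"], ["salt", "flour"], ["salt", "water", "sugar"]]
counts = Counter(token for recipe in recipes for token in recipe)
word_index = {token: i + 1 for i, (token, _) in enumerate(counts.most_common())}
print(word_index)
```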
### TF-IDF Representation
Let's build our model based on the **TF-IDF** representation for text data.
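Before handing this to Keras, it may help to see the classic TF-IDF formula on a toy corpus. Keras' `mode='tfidf'` uses a smoothed variant, so the exact numbers differ, but the intuition is the same: terms frequent in a document yet rare in the corpus score highest.

```python
# Toy example of the classic TF-IDF weighting: tf * log(N / df).
import math

docs = [["salt", "water"], ["salt", "flour"], ["salt", "water", "sugar"]]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)      # term frequency in the document
    df = sum(term in d for d in corpus)  # number of documents containing the term
    idf = math.log(len(corpus) / df)     # inverse document frequency
    return tf * idf

print(tf_idf("salt", docs[0], docs))   # appears everywhere → weight 0.0
print(tf_idf("sugar", docs[2], docs))  # rare term → positive weight
```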
```
# directly get the representations
tfidf_data = tokenizer.texts_to_matrix(df['ingredients'], mode='tfidf')
n_samples = tfidf_data.shape[0]
n_features = tfidf_data.shape[1]
print('The training dataset with the new representation has:')
print(' - %i entries/recipes' % n_samples)
print(' - %i features/ingredients' % n_features)
```
## Create Cross Validation Partitions
As we know from the **EDA notebook** of this dataset, the labels are unbalanced. So we are going to use the `StratifiedKFold` class from the **Sklearn** library to create folds that preserve the percentage of entries for each label.
```
# construct the target vector
import numpy as np
from sklearn.preprocessing import LabelBinarizer, LabelEncoder
# categorical target (one-hot encoded)
lb = LabelBinarizer()
target_cat = lb.fit_transform(df['cuisine'])
# integer target, used in the StratifiedKfold class
# in order to make each fold with balanced classes
le = LabelEncoder()
target = le.fit_transform(df['cuisine'])
n_classes = len(np.unique(target))
print('The dataset has %i unique classes.' % n_classes)
from sklearn.model_selection import StratifiedKFold
n_splits = 5
seed = 2018
folds = list(StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed).split(tfidf_data, target))
for i, fold in enumerate(folds):
    percentage_trn = (len(fold[0]) / n_samples) * 100
    percentage_val = (len(fold[1]) / n_samples) * 100
    print('Fold #%i has %1.0f%% events for training and %1.0f%% for validation.' % (i+1, percentage_trn, percentage_val))
```
We can see that the percentage of events in training and validation sets are the same for every fold.
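As a self-contained sanity check of this stratification behaviour (synthetic labels, not the recipe data): with an 80/20 class split and 5 folds, every validation fold keeps 20% positives.

```python
# StratifiedKFold preserves class proportions in every fold.
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.zeros((100, 1))
y = np.array([0] * 80 + [1] * 20)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for _, val_idx in skf.split(X, y):
    print(y[val_idx].mean())  # 0.2 in every fold
```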
## Create a Model
Now it's time to create a Deep Neural Network Model to train our dataset. The model architecture can be inferred from the code below.
```
from keras.models import Sequential
from keras.layers import Flatten, Dense, Dropout, BatchNormalization
from keras.optimizers import Adam
import keras.backend as K
def load_model():
    """
    Function to create the Neural Network Model
    """
    K.clear_session()
    # creating the Deep Neural Net Model
    model = Sequential()
    # layer 1
    model.add(Dense(units=128,
                    activation='relu',
                    input_shape=(tfidf_data.shape[1], )))
    model.add(BatchNormalization())
    model.add(Dropout(0.7))
    # layer 2
    model.add(Dense(units=64,
                    activation='relu'))
    model.add(BatchNormalization())
    model.add(Dropout(0.7))
    # output layer
    model.add(Dense(units=n_classes,
                    activation='softmax'))
    # compile model
    model.compile(loss='categorical_crossentropy',
                  optimizer=Adam(lr=0.005),
                  metrics=['acc'])
    return model
```
Add the `EarlyStopping` callback to monitor the validation loss and avoid overfitting.
```
from keras.callbacks import EarlyStopping, ModelCheckpoint
# add callbacks to the model
early_stop = EarlyStopping(monitor='val_loss', patience=3)
weights_file = os.path.join(experiments_path, 'deep_model.hdf5')
checkpoint = ModelCheckpoint(weights_file, monitor='val_loss', save_best_only=True)
callbacks = [early_stop, checkpoint]
cv_scores = []
cv_hist = []
# train the model
for fold, (trn_idx, val_idx) in enumerate(folds):
    print('>> Fold #%i <<' % (fold + 1))
    # get training and validation data folds
    X_trn = tfidf_data[trn_idx, :]
    y_trn = target_cat[trn_idx, :]
    X_val = tfidf_data[val_idx, :]
    y_val = target_cat[val_idx, :]
    print(' Training on %i examples.' % X_trn.shape[0])
    print(' Validating on %i examples.' % X_val.shape[0])
    model = load_model()
    # serialize model to JSON
    if fold == 0:
        model_json = model.to_json()
        model_file = os.path.join(experiments_path, 'deep_model.json')
        with open(model_file, 'w') as json_file:
            json_file.write(model_json)
    hist = model.fit(X_trn, y_trn,
                     validation_data=(X_val, y_val),
                     batch_size=32,
                     epochs=100,
                     callbacks=callbacks,
                     verbose=0)
    scores = model.evaluate(X_val, y_val)
    print(' This model has %1.2f validation accuracy.\n' % scores[1])
    cv_scores.append(scores)
    cv_hist.append(hist)
val_acc = []
for metric in cv_scores:
    val_acc.append(metric[1])
print('Accuracy = %1.4f +- %1.4f' % (np.mean(val_acc), np.std(val_acc)))
```
## Getting the Best Model
Now let's get the best trained model and make some plots.
```
model.load_weights(weights_file)
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(8,5))
plt.plot(model.history.history['val_loss'], label='Validation Loss')
plt.plot(model.history.history['loss'], label='Training Loss')
plt.ylabel('Loss')
plt.xlabel('Epochs')
plt.title('History')
plt.legend()
plt.show()
fig = plt.figure(figsize=(8,5))
plt.plot(model.history.history['val_acc'], label='Validation Accuracy')
plt.plot(model.history.history['acc'], label='Training Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.title('History')
plt.legend()
plt.show()
```
## Making Submission File
```
input_file = os.path.join(data_path, 'test.json')
df = pd.read_json(input_file)
# directly get the representations
tfidf_data = tokenizer.texts_to_matrix(df['ingredients'], mode='tfidf')
n_samples = tfidf_data.shape[0]
n_features = tfidf_data.shape[1]
print('The test dataset with the new representation has:')
print(' - %i entries/recipes' % n_samples)
print(' - %i features/ingredients' % n_features)
# predict classes using test data
predict = model.predict_classes(tfidf_data)
# map each integer to the string labels
cat = pd.factorize(le.classes_)
# create the column
df['cuisine'] = cat[1][predict]
submissions_path = os.path.join(package_path, 'submissions')
submissions_file = os.path.join(submissions_path, 'deep_model.csv')
df[['id', 'cuisine']].to_csv(submissions_file, index=False)
```
## Dependencies
```
import json, glob
import numpy as np
import pandas as pd
import tensorflow as tf
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
```
# Load data
```
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
```
# Model parameters
```
input_base_path = '/kaggle/input/230-tweet-train-5fold-roberta-reference-hf-exp3/'
with open(input_base_path + 'config.json') as json_file:
config = json.load(json_file)
config
vocab_path = input_base_path + 'vocab.json'
merges_path = input_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
# vocab_path = base_path + 'roberta-base-vocab.json'
# merges_path = base_path + 'roberta-base-merges.txt'
config['base_model_path'] = base_path + 'roberta-base-tf_model.h5'
config['config_path'] = base_path + 'roberta-base-config.json'
model_path_list = glob.glob(input_base_path + 'model' + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep = '\n')
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
                                  lowercase=True, add_prefix_space=True)
```
# Pre process
```
test['text'].fillna('', inplace=True)
test['text'] = test['text'].apply(lambda x: x.lower())
test['text'] = test['text'].apply(lambda x: x.strip())
x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)
```
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
    input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
    attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
    base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
    last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
    logits = layers.Dense(2, name="qa_outputs")(last_hidden_state)
    start_logits, end_logits = tf.split(logits, 2, axis=-1)
    start_logits = tf.squeeze(start_logits, axis=-1)
    end_logits = tf.squeeze(end_logits, axis=-1)
    model = Model(inputs=[input_ids, attention_mask], outputs=[start_logits, end_logits])
    return model
```
# Make predictions
```
NUM_TEST_IMAGES = len(test)
test_start_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_IMAGES, config['MAX_LEN']))
for model_path in model_path_list:
    print(model_path)
    model = model_fn(config['MAX_LEN'])
    model.load_weights(model_path)
    test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE']))
    test_start_preds += test_preds[0]
    test_end_preds += test_preds[1]
```
# Post process
```
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
# Post-process
test["selected_text"] = test.apply(lambda x: ' '.join([word for word in x['selected_text'].split() if word in x['text'].split()]), axis=1)
test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1)
test['selected_text'].fillna(test['text'], inplace=True)
```
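The word-filtering step above can be illustrated in isolation (the strings here are hypothetical): keep only the words of the predicted span that also occur in the original tweet.

```python
# Keep only words of the predicted span that also occur in the tweet.
text = "i love the new phone so much"
selected = "love the newphone"  # a noisy decoded span

filtered = ' '.join(w for w in selected.split() if w in text.split())
print(filtered)  # → 'love the'
```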
# Visualize predictions
```
test['text_len'] = test['text'].apply(lambda x : len(x))
test['label_len'] = test['selected_text'].apply(lambda x : len(x))
test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))
test['label_wordCnt'] = test['selected_text'].apply(lambda x : len(x.split(' ')))
test['text_tokenCnt'] = test['text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['label_tokenCnt'] = test['selected_text'].apply(lambda x : len(tokenizer.encode(x).ids))
test['jaccard'] = test.apply(lambda x: jaccard(x['text'], x['selected_text']), axis=1)
display(test.head(10))
display(test.describe())
```
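The `jaccard` helper used above comes from the imported utility scripts. A common word-level definition of the metric looks like this -- treat it as an illustrative sketch (running it would shadow the imported helper, and the helper may differ in details):

```python
# Word-level Jaccard similarity: |intersection| / |union| of word sets.
def jaccard(str1, str2):
    a = set(str1.lower().split())
    b = set(str2.lower().split())
    c = a & b
    return len(c) / (len(a) + len(b) - len(c))

print(jaccard("hello world", "hello"))  # → 0.5
```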
# Test set predictions
```
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test['selected_text']
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
<a href="https://colab.research.google.com/github/krakowiakpawel9/machine-learning-bootcamp/blob/master/unsupervised/03_association_rules/01_apriori.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### scikit-learn
Library website: [https://scikit-learn.org](https://scikit-learn.org)
Documentation/User Guide: [https://scikit-learn.org/stable/user_guide.html](https://scikit-learn.org/stable/user_guide.html)
The fundamental machine learning library for Python.
To install the scikit-learn library, use the command below:
```
!pip install scikit-learn
```
To upgrade the scikit-learn library to the latest version, use the command below:
```
!pip install --upgrade scikit-learn
```
This course was created using version `0.22.1`.
### Table of contents:
1. [Importing libraries](#0)
2. [Generating the data](#1)
3. [Preparing the data](#2)
4. [The Apriori algorithm](#3)
### <a name='0'></a> Importing libraries
```
import pandas as pd
import numpy as np
```
### <a name='1'></a> Generating the data
```
data = {'produkty': ['chleb jajka mleko', 'mleko ser', 'chleb masło ser', 'chleb jajka']}
transactions = pd.DataFrame(data=data, index=[1, 2, 3, 4])
transactions
```
### <a name='2'></a> Preparing the data
```
# expand the column into a DataFrame object
expand = transactions['produkty'].str.split(expand=True)
expand
# extract the names of all products
products = []
for col in expand.columns:
    for product in expand[col].unique():
        if product is not None and product not in products:
            products.append(product)
products.sort()
print(products)
transactions_encoded = np.zeros((len(transactions), len(products)), dtype='int8')
transactions_encoded
# 0-1 encoding
for row in zip(range(len(transactions)), transactions_encoded, expand.values):
    for idx, product in enumerate(products):
        if product in row[2]:
            transactions_encoded[row[0], idx] = 1
transactions_encoded
transactions_encoded_df = pd.DataFrame(transactions_encoded, columns=products)
transactions_encoded_df
```
### <a name='3'></a> The Apriori algorithm
```
from mlxtend.frequent_patterns import apriori, association_rules
supports = apriori(transactions_encoded_df, min_support=0.0, use_colnames=True)
supports
supports = apriori(transactions_encoded_df, min_support=0.3, use_colnames=True)
supports
rules = association_rules(supports, metric='confidence', min_threshold=0.65)
rules = rules.iloc[:, [0, 1, 4, 5, 6]]
rules
rules.sort_values(by='lift', ascending=False)
```
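To build intuition for the numbers reported above, the support and confidence of a single rule, e.g. {ser} → {mleko}, can be computed by hand on the same four transactions:

```python
# Manual check of support and confidence for the rule {ser} -> {mleko}.
transactions = [
    {'chleb', 'jajka', 'mleko'},
    {'mleko', 'ser'},
    {'chleb', 'masło', 'ser'},
    {'chleb', 'jajka'},
]
n = len(transactions)
support_ser = sum('ser' in t for t in transactions) / n              # 2/4 = 0.5
support_both = sum({'ser', 'mleko'} <= t for t in transactions) / n  # 1/4 = 0.25
confidence = support_both / support_ser
print(confidence)  # → 0.5
```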
```
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.cm as cm
from PIL import Image
from skimage.measure import profile_line
from scipy import sparse
from pymatreader import read_mat
import pandas as pd
import cProfile
from util import get_path
from extract_graph import generate_graph_tab_from_skeleton,generate_nx_graph_from_skeleton
import networkx as nx
from random import randrange
import math
import cv2
plate=13
date1='0703_1157'
date2='0703_1557'
date3='0703_1957'
row=6
column=10
imtab1=np.load(f'Data/imbackrem_{date1}_{plate}_{row}_{column}.npy')
imtab2=np.load(f'Data/imbackrem_{date2}_{plate}_{row}_{column}.npy')
imtab3=np.load(f'Data/imbackrem_{date3}_{plate}_{row}_{column}.npy')
skeleton1=np.load(f'Data/skeletonizedpruned_{date1}_{plate}_{row}_{column}.npy')
skeleton2=np.load(f'Data/skeletonizedpruned_{date2}_{plate}_{row}_{column}.npy')
skeleton3=np.load(f'Data/skeletonizedpruned_{date3}_{plate}_{row}_{column}.npy')
graph_tab1=generate_graph_tab_from_skeleton(skeleton1)
graph_tab2=generate_graph_tab_from_skeleton(skeleton2)
graph_tab3=generate_graph_tab_from_skeleton(skeleton3)
nx_graph1,pos1=generate_nx_graph_from_skeleton(skeleton1)
nx_graph2,pos2=generate_nx_graph_from_skeleton(skeleton2)
for node in nx_graph1.nodes:
    if nx_graph1.degree[node] == 1:
        for edge in nx_graph1.edges(node):
            data = nx_graph1.get_edge_data(*edge)
            if data['weight'] < 9:
                is_artefact = True
                for edge_2nd in nx_graph1.edges(edge[0]):
                    data = nx_graph1.get_edge_data(*edge_2nd)
for edge in nx_graph1.edges:
    data = nx_graph1.get_edge_data(*edge)
    if data['weight'] < 9:
        is_artefact = True
        for edge in nx_graph1.edges(edge[0]):
            break
[(pos1[info[0]],pos1[info[1]],info[2]) for info in nx_graph1.edges.data('weight') if info[2]<20]
plt.hist([info[2] for info in nx_graph1.edges.data('weight') if info[2]<30],10)
for node in nx_graph1.nodes:
    if nx_graph1.degree[node] == 3:
        for edge in nx_graph1.edges(node):
            data = nx_graph1.get_edge_data(*edge)
            if data['weight'] < 9:
                is_artefact = True
                for edge_2nd in nx_graph1.edges(edge[0]):
                    data = nx_graph1.get_edge_data(*edge_2nd)
for node in nx_graph1.nodes:
    if nx_graph1.degree[node] == 2:
        neighbours = [neigh for neigh in nx_graph1.neighbors(node)]
for n in nx_graph1.neighbors(13):
print(n)
def node_dist(node1, node2, nx_graph_tm1, nx_graph_t, pos_tm1, pos_t):
    sparse_cross1 = sparse.dok_matrix((3000, 4096), dtype=bool)
    sparse_cross2 = sparse.dok_matrix((3000, 4096), dtype=bool)
    for edge in nx_graph_tm1.edges(node1):
        list_pixel = nx_graph_tm1.get_edge_data(*edge)['pixel_list']
        if (pos_tm1[node1] != list_pixel[0]).any():
            list_pixel = list(reversed(list_pixel))
        print(list_pixel[0] == pos_tm1[node1])
        print(list_pixel[0], pos_tm1[node1])
        for pixel in list_pixel[:20]:
            sparse_cross1[pixel] = 1
    for edge in nx_graph_t.edges(node2):
        list_pixel = nx_graph_t.get_edge_data(*edge)['pixel_list']
        if (pos_t[node2] != list_pixel[0]).any():
            list_pixel = list(reversed(list_pixel))
        print(list_pixel[0] == pos_t[node2])
        for pixel in list_pixel[:20]:
            sparse_cross2[pixel] = 1
    kernel = np.ones((3, 3), np.uint8)
    dilation1 = cv2.dilate(sparse_cross1.todense().astype(np.uint8), kernel, iterations=3)
    dilation2 = cv2.dilate(sparse_cross2.todense().astype(np.uint8), kernel, iterations=3)
    plt.imshow(dilation1[2250:2350, 1400:1500])
    plt.imshow(dilation2[2250:2350, 1400:1500], alpha=0.5)
    plt.show()
    return np.linalg.norm(dilation1 - dilation2)
pos1[26]
pos2[34]
node_dist(25,34,nx_graph1,nx_graph2,pos1,pos2)
corresp={}
dist={}
ambiguous=[]
remaining_nodes=[]
for node1 in nx_graph1.nodes:
    mini = np.inf
    if nx_graph1.degree[node1] >= 3:
        for node2 in nx_graph2.nodes:
            distance = math.dist(pos1[node1], pos2[node2])
            if distance < mini:
                mini = distance
                identifier = node2
        if mini < 10:
            if identifier in corresp.values():
                ambiguous.append(node1)
                print(node1, 'node identified two times')
            corresp[node1] = identifier
        else:
            print(node1, mini, 'node not identified')
            remaining_nodes.append(node1)
        dist[node1] = mini
for node in ambiguous:
    identifier = corresp[node]
    candidates = [nod for nod in corresp.keys() if corresp[nod] == identifier]
    for candidate in candidates:
        identified_neighbours = [corresp[neighbour] for neighbour in nx_graph1.neighbors(node)]
    print(candidates)
[node for node in corresp.keys() if corresp[node]==34]
corresp[25]
pos2[34]
dist[25]
plt.hist(list(dist.values()),5)
def dist_branch(pixel_list_tm1, pixel_list_t):
    squared_dist = 0
    length = min(len(pixel_list_t), len(pixel_list_tm1))
    for i in range(length):
        squared_dist += math.dist(pixel_list_t[i], pixel_list_tm1[i])
    return squared_dist / length
for indextm1, row in graph_tab1.iterrows():
    pixel_list_tm1 = row['pixel_list']
    pixel_list_tm1_reversed = list(reversed(pixel_list_tm1))
    mini = np.inf
    index_id = 0
    reverse = False
    for index_t, row_t in graph_tab2.iterrows():
        pixel_list_t = row_t['pixel_list']
        pixel_list_t_reversed = list(reversed(pixel_list_t))
        distance = dist_branch(pixel_list_tm1, pixel_list_t)
        distance_rev = dist_branch(pixel_list_tm1, pixel_list_t_reversed)
        if mini > distance:
            mini = distance
            index_id = index_t
            reverse = False
        if mini > distance_rev:
            mini = distance_rev
            index_id = index_t
            reverse = True
    break
mini
reverse
index_id
graph_tab1.loc[[0]]
graph_tab2.loc[[6]]
np.inf>3.5
for index, row in graph_tab1.iterrows():
    pixel_list1 = row['pixel_list']
    break
graph_tab1
for index, row in graph_tab2.iterrows():
    pixel_list2 = row['pixel_list']
    break
dist_branch(pixel_list1,list(reversed(pixel_list1)))
```
Making Peter happy
```
from glob import glob
import datetime
import numpy as np
from astropy.table import Table
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.stats import spearmanr
from scipy.stats import ks_2samp
from scipy.stats import mannwhitneyu
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib.ticker import MultipleLocator
sns.set(context='talk', style='ticks', font='serif', color_codes=True)
HR = pd.read_csv('../data/campbell_local.tsv', sep='\t', usecols=['SNID', 'redshift', 'hr', 'err_mu'], index_col='SNID')
HR.rename(columns={'err_mu': 'hr uncert'}, inplace=True)
HR = HR[HR['redshift']<0.2]
HR = HR[HR['hr']<0.7]
HR.describe()
t = Table.read('../data/SDSS_Photometric_SNe_Ia.fits')
salt = t['CID','Z','X1','X1_ERR','COLOR','COLOR_ERR'].to_pandas()
salt.columns = salt.columns.str.lower()
salt.rename(columns={'cid': 'SNID', 'z': 'redshift'}, inplace=True)
salt.set_index('SNID', inplace=True)
salt.describe()
galaxy = pd.read_csv('../resources/kcorrect_stellarmass.csv', usecols=['GAL', 'redshift', 'stellarmass'], index_col='GAL')
galaxy.rename(columns={'redshift': 'gal redshift', 'stellarmass': 'stellar mass'}, inplace=True)
galaxy.describe()
age = pd.read_csv('../resources/ages_campbell.tsv', sep='\t', skiprows=[1],
                  usecols=['# sn id', 'age'], dtype={'age': np.float64, '# sn id': int})
age.rename(columns={'# sn id': 'SNID'}, inplace=True)
age.set_index('SNID', inplace=True)
age.describe()
data = pd.concat([HR, salt, galaxy, age], axis=1)
data.dropna(inplace=True)
data['stellar mass'] = np.log10(data['stellar mass'])
data.describe()
```
Test non-PCA generated parameters
```
features = ['x1', 'color', 'stellar mass', 'age']
y = data.loc[:, features].values
scaler = StandardScaler()
scaler.fit(y) # get the needed transformation off of y
y = scaler.transform(y) # transform y
y.shape
coefficients = [-0.557, -0.103, -0.535, -0.627]
delta_alpha = 0.06
def to_pc(data):
    """need input to be a Nx4 numpy array
    """
    x, c, m, a = data[:, 0], data[:, 1], data[:, 2], data[:, 3]
    return coefficients[0]*x + coefficients[1]*c + coefficients[2]*m + coefficients[3]*a
# pc = to_pc(data[['x1', 'color', 'stellar mass', 'age']].values)
pc = to_pc(y)
spearmanr(pc, data['hr']-delta_alpha*data["x1"])
(m, b), cov = np.polyfit(pc, data['hr']-delta_alpha*data["x1"], 1, full=False, cov=True)
print(m, b)
print(cov)
print(np.sqrt(cov[0,0]), np.sqrt(cov[1,1]))
fig = plt.figure()
#fix axes major spacing & size
ax = plt.gca()
ax.get_yaxis().set_major_locator(MultipleLocator(0.2))
ax.set_ylim(-0.67, 0.67)
ax.get_xaxis().set_major_locator(MultipleLocator(1))
ax.set_xlim(-3.5, 3.5)
#set axes ticks and gridlines
ax.tick_params(axis='both', top='on', right='on', direction='in')
ax.grid(which='major', axis='both', color='0.90', linestyle='-')
ax.set_axisbelow(True)
sns.regplot(pc, data['hr'], marker='', color='grey', ax=ax)
plt.scatter(pc, data['hr'], marker='.', c=data['x1'],
cmap="RdBu", vmin=-3.0, vmax=3.0, edgecolor='k', zorder=10)
plt.xlabel(r"$-0.557 x'_1 - 0.103 c' - 0.535 m' - 0.627 a'$", fontsize=17)
plt.ylabel('Hubble residual [mag]', fontsize=17)
cax = fig.add_axes([0.98, 0.197, 0.02, 0.729])
cax.tick_params(axis='y', direction='in')
cax.set_axisbelow(False) # bring tick marks above coloring
plt.colorbar(label=r"$x_1$", cax=cax)
sp_r, sp_p = spearmanr(pc, data['hr'])
ax.text(-2.9, 0.42, f"Spearman's correlation: {sp_r:.2f}\np: {sp_p:.2e}",
{'fontsize':12})
fig.set_tight_layout({'pad': 1.5})
plt.savefig(f'HRvPCalt.pdf', bbox_inches='tight')
plt.show()
```
<a href="https://colab.research.google.com/github/bkkaggle/pytorch-CycleGAN-and-pix2pix/blob/master/pix2pix.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Install
```
!git clone https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
import os
os.chdir('pytorch-CycleGAN-and-pix2pix/')
!pip install -r requirements.txt
```
# Datasets
Download one of the official datasets with:
- `bash ./datasets/download_pix2pix_dataset.sh [cityscapes, night2day, edges2handbags, edges2shoes, facades, maps]`
Or use your own dataset by creating the appropriate folders and adding in the images. Follow the instructions [here](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/datasets.md#pix2pix-datasets).
```
!bash ./datasets/download_pix2pix_dataset.sh facades
```
# Pretrained models
Download one of the official pretrained models with:
- `bash ./scripts/download_pix2pix_model.sh [edges2shoes, sat2map, map2sat, facades_label2photo, and day2night]`
Or add your own pretrained model to `./checkpoints/{NAME}_pretrained/latest_net_G.pt`
```
!bash ./scripts/download_pix2pix_model.sh facades_label2photo
```
# Training
- `python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA`
Change the `--dataroot` and `--name` to your own dataset's path and model's name. Use `--gpu_ids 0,1,..` to train on multiple GPUs and `--batch_size` to change the batch size. Add `--direction BtoA` if you want to train a model to transform from class B to A.
```
!python train.py --dataroot ./datasets/facades --name facades_pix2pix --model pix2pix --direction BtoA
```
# Testing
- `python test.py --dataroot ./datasets/facades --direction BtoA --model pix2pix --name facades_pix2pix`
Change the `--dataroot`, `--name`, and `--direction` to be consistent with your trained model's configuration and how you want to transform images.
> from https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix:
> Note that we specified --direction BtoA as Facades dataset's A to B direction is photos to labels.
> If you would like to apply a pre-trained model to a collection of input images (rather than image pairs), please use --model test option. See ./scripts/test_single.sh for how to apply a model to Facade label maps (stored in the directory facades/testB).
> See a list of currently available models at ./scripts/download_pix2pix_model.sh
```
!ls checkpoints/
!python test.py --dataroot ./datasets/facades --direction BtoA --model pix2pix --name facades_label2photo_pretrained
```
# Visualize
```
import matplotlib.pyplot as plt

# Show the three results side by side; calling plt.imshow repeatedly in one
# cell draws into the same axes, so only the last image would be displayed.
fig, axes = plt.subplots(1, 3, figsize=(15, 5))
for ax, name in zip(axes, ['100_fake_B', '100_real_A', '100_real_B']):
    ax.imshow(plt.imread(f'./results/facades_label2photo_pretrained/test_latest/images/{name}.png'))
    ax.set_title(name)
    ax.axis('off')
```
```
import sys
sys.path.append('../')
%load_ext autoreload
%autoreload 2
import sklearn
import copy
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
# from viz import viz
from bokeh.plotting import figure, show, output_notebook, output_file, save
from functions import merge_data
from sklearn.model_selection import RandomizedSearchCV
import load_data
import data_new
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from fit_and_predict import fit_and_predict
preds_df = pd.read_pickle("ensemble_predictions_04_14.pkl")
preds_df
def l1(arr1, arr2):
    """Mean absolute error between two equal-length arrays."""
    return np.mean(np.abs(np.asarray(arr1) - np.asarray(arr2)))
outcome = preds_df['tot_deaths'].values
residuals = {}
for days_ahead in [3, 5, 7, 10]:
for lower_threshold in [10, 100]:
colname = f'{days_ahead} day, deaths>={lower_threshold}'
residuals[colname] = []
for method in ['exponential', 'shared_exponential', 'advanced_shared_model',
'ensemble', 'ensemble_sal', 'ensemble_al','linear',
#'ensemble_no_demographic',
#'simple_ensemble',
#'ensemble_no_exponential', 'demographics',
'ensemble_shared_linear']:
key = f'predicted_deaths_{method}_{days_ahead}'
preds = np.array([p[-1] for p in preds_df[key][outcome > lower_threshold]])
residuals[colname].append(l1(outcome[outcome > lower_threshold],preds))
def highlight_min(s):
    '''
    Highlight the minimum in a Series yellow (lower residual is better).
    '''
    is_min = s == s.min()
    return ['background-color: yellow' if v else '' for v in is_min]
res_df = pd.DataFrame(residuals, index=['exponential', 'shared_exponential', 'advanced_shared_model',
'ensemble', 'ensemble_sal', 'ensemble_al','linear',
#'ensemble_no_demographic',
#'simple_ensemble',
#'ensemble_no_exponential', 'demographics',
'ensemble_shared_linear'])
res_df = res_df.astype(float).round(2)
res_df.style.highlight_min().format("{:.2f}")
print(res_df.to_latex(index=True))
print(np.mean(outcome[outcome>=1]), np.mean(outcome[outcome>=10]), np.mean(outcome[outcome>=100]))
res_df = pd.DataFrame(residuals, index=['with_exponential', 'without_exponential'])
res_df = res_df.astype(float).round(2)
res_df.style.highlight_min().format("{:.2f}")
import plotly.express as px
outcome = np.array([preds_df['deaths'].values[i][-1] for i in range(len(preds_df))])
preds_df['true_outcome'] = outcome
print(np.array([p[-1] for p in preds_df['predicted_deaths_ensemble_3']]))
preds_df['3_day_ahead_pred'] = [p[-1] for p in preds_df['predicted_deaths_ensemble_3']]
preds_df = preds_df[preds_df.true_outcome > 10]
fig = px.scatter(preds_df, x='true_outcome', y='3_day_ahead_pred')
preds_df.keys()
preds_df_2 = preds_df[preds_df.true_outcome > 80]
fig = px.scatter(preds_df_2, x='true_outcome', y='3_day_ahead_pred', text='CountyNamew/StateAbbrev')
fig.update_traces(textposition='bottom center')
fig.update_layout(xaxis_type="log", yaxis_type="log")
fig.add_shape(
# Line reference to the axes
type="line",
xref="x",
yref="y",
x0=80,
y0=80,
x1=320,
y1=320,
line=dict(
color="LightSeaGreen",
width=3,
),
)
fig.update_layout(
    title={'text': "Actual deaths by 3/29 vs. our predictions on 3/26",
           'x': 0.5, 'xanchor': 'center', 'yanchor': 'top'},
    xaxis_title="Actual deaths",
    yaxis_title="3 day ahead prediction",
    font=dict(family='sans-serif', size=12),
)
fig.show()
preds_df['CountyName'] = preds_df['CountyNamew/StateAbbrev']
preds_df_3 = preds_df[preds_df.CountyName.isin(['Wayne, MI',
'Orleans, LA',
'Los Angeles, CA',
'Santa Clara, CA',
'Snohomish, WA',
'Dougherty, GA'])]
fig = px.scatter(preds_df_3, x='true_outcome', y='3_day_ahead_pred', text='CountyNamew/StateAbbrev')
fig.update_traces(textposition='bottom center')
fig.update_layout(xaxis_type="log", yaxis_type="log")
fig.add_shape(
# Line reference to the axes
type="line",
xref="x",
yref="y",
x0=10,
y0=10,
x1=80,
y1=80,
line=dict(
color="LightSeaGreen",
width=3,
),
)
fig.update_layout(
    title={'text': "Recorded deaths by 3/29 vs. our predictions on 3/26",
           'x': 0.5, 'xanchor': 'center', 'yanchor': 'top'},
    xaxis_title="Actual deaths",
    yaxis_title="3 day ahead prediction",
    font=dict(family='sans-serif', size=12),
)
df_county = pd.read_csv("../data_new/county_data_abridged.csv")
df_county = df_county.iloc[:3243]
df = load_data.load_county_level(data_dir = '../data/')
df_county['countyFIPS'] = df_county['countyFIPS'].astype(np.int64)
df = df.merge(df_county, how='left', on='countyFIPS')
df = df.sort_values('#Deaths_4/7/2020', ascending=False)
df['deaths'].values[0]
w = 3
f1, f2 = "cases", "deaths"
#deltas = np.zeros(len(dfg.groups.keys()))
#deltas_smooth = np.zeros(len(dfg.groups.keys()))
plt.figure(figsize=[30, 20])
for j in range(20):
#group = dfg.get_group(grp).reset_index()
plt.subplot(4, 5, j+1)
y = np.diff(df[f1].values[j])[50:]
y = pd.Series(y)
plt.plot(y, color='b', alpha=0.3)
#smooth_y = y.rolling(window=w)
smooth_y = y.rolling(3).mean()
plt.plot(smooth_y, color='b', linewidth=3., label=f'{f1}-{w}-day-MA')
x1, x2 = np.argmax(y), np.argmax(smooth_y)
plt.axvline(x= x2, color='b', alpha=0.6, linestyle='--', linewidth=3.)
plt.legend(loc='upper left')
ax = plt.gca().twinx()
#ax = plt.gca()
y = np.diff(df[f2].values[j])[50:]
ax.plot(y, color='r', alpha=0.3)
y = pd.Series(y)
#smooth_y = y.rolling(window=5)
#smooth_y = smooth_y.mean()
smooth_y = y.rolling(3).mean()
ax.plot(smooth_y, color='r', linewidth=3., label=f'{f2}-{w}-day-MA')
x3, x4 = np.argmax(y), np.argmax(smooth_y)
ax.axvline(x= x4, color='r', alpha=0.6, linestyle='--', linewidth=3.)
plt.legend(loc='lower left')
#deltas_smooth[j] = x4-x1
#deltas[j] = x3-x1
#plt.title('%s, Lag-smooth = %d, Lag-Raw = %d'%(df['CountyNamew/StateAbbrev'].iloc[j], x4-x2, x3-x1))
plt.title('%s, SVI %.3f'%(df['CountyNamew/StateAbbrev'].iloc[j],
df['SVIPercentile'].iloc[j]))
plt.figure(figsize=[30, 20])
for j in range(20):
#group = dfg.get_group(grp).reset_index()
plt.subplot(4, 5, j+1)
cases = np.array(df['cases'].values[j])[50:]
deaths = np.array(df['deaths'].values[j])[50:]
t = len(cases)
death_rate = deaths/np.maximum(cases, np.ones(t))
death_rate_7_day_lag = deaths[7:]/np.maximum(cases[:-7], np.ones(t-7))
death_rate_10_day_lag = deaths[10:]/np.maximum(cases[:-10], np.ones(t-10))
plt.plot(death_rate_7_day_lag[-10:], color='b', linewidth=3., label='7_day_lag_death_rate')
plt.plot(death_rate_10_day_lag[-10:], color='r', linewidth=3., label='10_day_lag_death_rate')
plt.plot(death_rate[-10:], color='black', linewidth=3., alpha=.5, label='death_rate')
plt.legend(loc='upper left')
#plt.title('%s, pop density %.1f'%(df['CountyNamew/StateAbbrev'].iloc[j],
# df['PopulationDensityperSqMile2010_x'].iloc[j]))
plt.title('%s, SVI %.3f'%(df['CountyNamew/StateAbbrev'].iloc[j],
df['SVIPercentile'].iloc[j]))
emerging = ['Dougherty, GA', 'Cook, IL', 'Wayne, MI', 'Oakland, MI', 'Orleans, LA',
'Lucas, OH', 'Hall, NE', 'Navajo, AZ', 'Greene, MO', 'Christian, MO',
'Chambers, AL', 'Lauderdale, MS']
df_emerging = df[df['CountyNamew/StateAbbrev'].isin(np.array(emerging))]
df_emerging
plt.figure(figsize=[30, 20])
for j in range(12):
#group = dfg.get_group(grp).reset_index()
plt.subplot(5, 5, j+1)
y = np.diff(df_emerging[f1].values[j])[50:]
y = pd.Series(y)
plt.plot(y, color='b', alpha=0.3)
#smooth_y = y.rolling(window=w)
smooth_y = y.rolling(3).mean()
plt.plot(smooth_y, color='b', linewidth=3., label=f'{f1}-{w}-day-MA')
x1, x2 = np.argmax(y), np.argmax(smooth_y)
plt.axvline(x= x2, color='b', alpha=0.6, linestyle='--', linewidth=3.)
plt.legend(loc='upper left')
ax = plt.gca().twinx()
#ax = plt.gca()
y = np.diff(df_emerging[f2].values[j])[50:]
ax.plot(y, color='r', alpha=0.3)
y = pd.Series(y)
#smooth_y = y.rolling(window=5)
#smooth_y = smooth_y.mean()
smooth_y = y.rolling(3).mean()
ax.plot(smooth_y, color='r', linewidth=3., label=f'{f2}-{w}-day-MA')
x3, x4 = np.argmax(y), np.argmax(smooth_y)
ax.axvline(x= x4, color='r', alpha=0.6, linestyle='--', linewidth=3.)
plt.legend(loc='lower left')
#deltas_smooth[j] = x4-x1
#deltas[j] = x3-x1
#plt.title('%s, pop density %.1f'%(df_emerging['CountyName'].iloc[j], df_emerging['PopulationDensityperSqMile2010'].iloc[j]))
plt.title('%s, SVI %.3f'%(df_emerging['CountyNamew/StateAbbrev'].iloc[j],
df_emerging['SVIPercentile'].iloc[j]))
plt.figure(figsize=[30, 20])
for j in range(12):
#group = dfg.get_group(grp).reset_index()
plt.subplot(4, 5, j+1)
cases = np.array(df_emerging['cases'].values[j])[50:]
deaths = np.array(df_emerging['deaths'].values[j])[50:]
t = len(cases)
death_rate = deaths/np.maximum(cases, np.ones(t))
death_rate_7_day_lag = deaths[7:]/np.maximum(cases[:-7], np.ones(t-7))
death_rate_10_day_lag = deaths[10:]/np.maximum(cases[:-10], np.ones(t-10))
plt.plot(death_rate_7_day_lag[-10:], color='b', linewidth=3., label='7_day_lag_death_rate')
plt.plot(death_rate_10_day_lag[-10:], color='r', linewidth=3., label='10_day_lag_death_rate')
plt.plot(death_rate[-10:], color='black', linewidth=3., alpha=.5, label='death_rate')
#plt.ylim((0, 1))
plt.legend(loc='upper left')
#plt.title('%s, pop density %.1f'%(df_emerging['CountyName_x'].iloc[j],
# df_emerging['PopulationDensityperSqMile2010_x'].iloc[j]))
plt.title('%s, SVI %.3f'%(df_emerging['CountyNamew/StateAbbrev'].iloc[j],
df_emerging['SVIPercentile'].iloc[j]))
df_al = df[df['StateNameAbbreviation'].isin(np.array(['AL']))]
df_ga = df[df['StateNameAbbreviation'].isin(np.array(['GA']))]
plt.figure(figsize=[10, 10])
plt.subplot(221)
plt.hist(df_al.SVIPercentile)
x1 = df_emerging['SVIPercentile'].iloc[6]
plt.axvline(x=x1, color='r', alpha=0.6, linestyle='--', linewidth=3., label='Chambers')
plt.legend()
plt.title("SVI in Alabama")
plt.subplot(222)
plt.hist(df_al.PopulationDensityperSqMile2010_x)
x1 = df_emerging['PopulationDensityperSqMile2010_x'].iloc[6]
plt.axvline(x=x1, color='r', alpha=0.6, linestyle='--', linewidth=3., label='Chambers')
plt.legend()
plt.title("Population density in Alabama")
plt.subplot(223)
plt.hist(df_ga.SVIPercentile)
x1 = df_emerging['SVIPercentile'].iloc[4]
plt.axvline(x=x1, color='r', alpha=0.6, linestyle='--', linewidth=3., label='Dougherty')
plt.legend()
plt.title("SVI in Georgia")
plt.subplot(224)
plt.hist(df_ga.PopulationDensityperSqMile2010_x)
x1 = df_emerging['PopulationDensityperSqMile2010_x'].iloc[4]
plt.axvline(x=x1, color='r', alpha=0.6, linestyle='--', linewidth=3., label='Dougherty')
plt.legend()
plt.title("Population density in Georgia")
pop_density = df['PopulationDensityperSqMile2010_x']
SVI = df['SVIPercentile']
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
plt.scatter(x = pop_density, y=SVI, alpha=.3, s=500.0*df['tot_deaths']/df['tot_cases'])
plt.xlim((0, 500))
plt.xlabel("population density")
plt.ylabel("SVI percentile")
plt.title("death rate by county")
plt.subplot(1, 3, 2)
plt.scatter(x = pop_density, y=SVI, alpha=.3, s=5e5*df['tot_deaths']/df['PopulationEstimate2018_x'])
plt.xlim((0, 500))
plt.xlabel("population density")
plt.ylabel("SVI percentile")
plt.title("deaths per 10k population")
plt.subplot(1, 3, 3)
plt.scatter(x = pop_density, y=SVI, alpha=.3, s=5*df['tot_deaths'])
plt.xlim((0, 500))
plt.xlabel("population density")
plt.ylabel("SVI percentile")
plt.title("deaths")
plt.show()
rural_counties = pd.read_csv("../county_classifications.csv", encoding="iso-8859-1")
rural_counties['countyFIPS'] = rural_counties['FIPStxt']
df = load_data.load_county_level(data_dir = '../data/')
df = pd.merge(df, rural_counties, how='left', on='countyFIPS')
df['is_rural'] = df['RuralUrbanContinuumCode2013'] > 3.0
df_rural = df[df['is_rural']]
df_rural['tot_cases']
df_rural['CountyName'].values[:10]
w = 3
f1, f2 = "cases", "deaths"
#deltas = np.zeros(len(dfg.groups.keys()))
#deltas_smooth = np.zeros(len(dfg.groups.keys()))
plt.figure(figsize=[30, 20])
for j in range(10):
#group = dfg.get_group(grp).reset_index()
plt.subplot(4, 5, j+1)
y = np.diff(df_rural[f1].values[j])[50:]
y = pd.Series(y)
plt.plot(y, color='b', alpha=0.3)
#smooth_y = y.rolling(window=w)
smooth_y = y.rolling(3).mean()
plt.plot(smooth_y, color='b', linewidth=3., label=f'{f1}-{w}-day-MA')
x1, x2 = np.argmax(y), np.argmax(smooth_y)
plt.axvline(x= x2, color='b', alpha=0.6, linestyle='--', linewidth=3.)
plt.legend(loc='upper left')
ax = plt.gca().twinx()
#ax = plt.gca()
y = np.diff(df_rural[f2].values[j])[50:]
ax.plot(y, color='r', alpha=0.3)
y = pd.Series(y)
#smooth_y = y.rolling(window=5)
#smooth_y = smooth_y.mean()
smooth_y = y.rolling(3).mean()
ax.plot(smooth_y, color='r', linewidth=3., label=f'{f2}-{w}-day-MA')
x3, x4 = np.argmax(y), np.argmax(smooth_y)
ax.axvline(x= x4, color='r', alpha=0.6, linestyle='--', linewidth=3.)
plt.legend(loc='lower left')
#deltas_smooth[j] = x4-x1
#deltas[j] = x3-x1
#plt.title('%s, Lag-smooth = %d, Lag-Raw = %d'%(df['CountyNamew/StateAbbrev'].iloc[j], x4-x2, x3-x1))
plt.title('%s'%(df_rural['CountyNamew/StateAbbrev'].iloc[j]))
#df_rural['SVIPercentile'].iloc[j]))
# Define the subset used by the scatter plots below: rural counties with
# enough cases and at least one hospital, plus the senior-population share.
df_rural_subset = copy.deepcopy(df_rural[(df_rural['tot_cases'].values > 10) * (df_rural['#Hospitals'].values > 0)])
df_rural_subset['Pop>652010'] = (df_rural_subset['PopMale65-742010'].values + df_rural_subset['PopFmle65-742010'].values
                                 + df_rural_subset['PopMale75-842010'].values + df_rural_subset['PopFmle75-842010'].values
                                 + df_rural_subset['PopMale>842010'].values + df_rural_subset['PopFmle>842010'].values)
df_rural_subset['Pop>65percentage'] = df_rural_subset['Pop>652010'] / df_rural_subset['PopulationEstimate2018']
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
plt.scatter(df_rural_subset['#ICU_beds'], df_rural_subset['tot_deaths']/(df_rural_subset['tot_cases'] + 1), alpha=.3)
plt.xlabel("#ICU_beds")
plt.ylabel("death rate")
plt.title("Death rate of rural counties")
plt.subplot(1, 3, 2)
plt.scatter(df_rural_subset['#Hospitals'], df_rural_subset['tot_deaths']/(df_rural_subset['tot_cases'] + 1), alpha=.3)
plt.xlabel("#Hospitals")
plt.ylabel("death rate")
plt.title("Death rate of rural counties")
plt.subplot(1, 3, 3)
plt.scatter(df_rural_subset['Pop>65percentage'], df_rural_subset['tot_deaths']/(df_rural_subset['tot_cases'] + 1), alpha=.3)
plt.xlabel("Senior population %")
plt.ylabel("death rate")
plt.title("Pop>65percentage")
plt.show()
df_rural_subset = copy.deepcopy(df_rural[(df_rural['tot_cases'].values > 10) * (df_rural['#Hospitals'].values > 0)])
df_rural_subset['Pop>652010'] = df_rural_subset['PopMale65-742010'].values + df_rural_subset['PopFmle65-742010'].values + df_rural_subset['PopMale75-842010'].values + df_rural_subset['PopFmle75-842010'].values + df_rural_subset['PopMale>842010'].values + df_rural_subset['PopFmle>842010'].values
df_rural_subset['Pop>65percentage'] = df_rural_subset['Pop>652010']/df_rural_subset['PopulationEstimate2018']
```
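The peak-day detection repeated in the plots above (a rolling mean followed by an `argmax` dashed line) can be condensed into a small helper. A minimal sketch with toy numbers, using the notebook's 3-day window:

```python
import numpy as np
import pandas as pd

def smoothed_peak_day(daily_counts, window=3):
    """Index of the peak of a rolling-mean-smoothed daily series (NaNs skipped)."""
    smooth = pd.Series(daily_counts).rolling(window).mean()
    return int(smooth.idxmax())

# Toy daily-case series: the raw spike is on day 6, but the 3-day
# moving average peaks one day later.
daily = np.array([1, 2, 1, 4, 6, 9, 30, 8, 7, 3, 2])
print(int(np.argmax(daily)), smoothed_peak_day(daily))  # 6 7
```

Smoothing shifts and spreads sharp spikes, which is why the raw and smoothed peak markers in the county plots do not always coincide.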
## Neighboring counties
### Bay area
```
df = load_data.load_county_level(data_dir = '../data/')
commute_neighborhood = pd.read_csv("commute_neighborhood.csv")
county_to_fips = dict(zip(df['CountyNamew/StateAbbrev'], df['countyFIPS']))
fips_to_county = dict(zip(df['countyFIPS'], df['CountyNamew/StateAbbrev']))
commute_neighborhood['Resident County'] = [fips_to_county[fips] if fips in fips_to_county else 'NA' \
for fips in commute_neighborhood['Resident County FIPS'] \
]
commute_neighborhood['Work County'] = [fips_to_county[fips] if fips in fips_to_county else 'NA' \
for fips in commute_neighborhood['Work County FIPS'] \
]
from collections import defaultdict
weights_out = defaultdict(dict)
weights_in = defaultdict(dict)
for i in range(len(commute_neighborhood)):
resident = commute_neighborhood['Resident County'].values[i]
work = commute_neighborhood['Work County'].values[i]
raw = commute_neighborhood['raw_weight'].values[i]
weights_out[resident][work] = raw
weights_in[work][resident] = raw
def plot_neigh_hist(county_name):
sum_weights = defaultdict(int)
all_neighbors = set(weights_out[county_name].keys()).union(set(weights_in[county_name].keys()))
for neigh in all_neighbors:
if neigh in weights_out[county_name]:
sum_weights[neigh] += weights_out[county_name][neigh]
if neigh in weights_in[county_name]:
sum_weights[neigh] += weights_in[county_name][neigh]
sum_weights.pop(county_name, None)  # drop self-commute weight if present
sorted_neighbors = sorted(sum_weights.items(), key=lambda x: -x[1])
n_top_neigh = 6
n_top_neigh = min(n_top_neigh, len(sorted_neighbors))
x = np.arange(n_top_neigh)
labels = [sorted_neighbors[i][0] for i in range(n_top_neigh)]
out_pop, in_pop = [], []
for i in range(n_top_neigh):
if sorted_neighbors[i][0] in weights_out[county_name]:
out_pop.append(weights_out[county_name][sorted_neighbors[i][0]])
else:
out_pop.append(0)
if sorted_neighbors[i][0] in weights_in[county_name]:
in_pop.append(weights_in[county_name][sorted_neighbors[i][0]])
else:
in_pop.append(0)
width = 0.35
fig, ax = plt.subplots(figsize=(15, 5))
rects1 = ax.bar(x - width/2, in_pop, width, label='In')
rects2 = ax.bar(x + width/2, out_pop, width, label='Out')
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend()
bay_area_counties = ['Alameda, CA',
'Contra Costa, CA',
'Marin, CA',
'Napa, CA',
'San Francisco, CA',
'San Mateo, CA',
'Santa Clara, CA',
'Solano, CA',
'Sonoma, CA']
df_bay_area = df[df['CountyNamew/StateAbbrev'].isin(np.array(bay_area_counties))]
```
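The in/out commute bookkeeping above can be summarized per county. A minimal sketch of the same aggregation with hypothetical county names and flows (the real data comes from `commute_neighborhood.csv`):

```python
from collections import defaultdict

# Hypothetical commute flows: (resident county, work county, raw weight).
flows = [("A", "B", 100.0), ("B", "A", 40.0), ("A", "C", 10.0)]

weights_out, weights_in = defaultdict(dict), defaultdict(dict)
for resident, work, raw in flows:
    weights_out[resident][work] = raw
    weights_in[work][resident] = raw

def total_interaction(county):
    """Total in + out commute weight with each neighbor, as in plot_neigh_hist."""
    neighbors = set(weights_out[county]) | set(weights_in[county])
    neighbors.discard(county)  # drop self-commute
    return {n: weights_out[county].get(n, 0) + weights_in[county].get(n, 0)
            for n in neighbors}

print(total_interaction("A"))  # B: 100 + 40 = 140.0, C: 10.0
```

Sorting this dictionary by value gives the top-neighbor bars drawn by `plot_neigh_hist`.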
### Grand Island, Nebraska
```
df_NE = df[df['StateNameAbbreviation'].isin(np.array(['NE', 'SD']))]
plt.figure(figsize=(15, 10))
for i in range(len(df_NE)):
daily_cases = np.diff(df_NE['cases'].iloc[i])
smooth_y = pd.Series(daily_cases).rolling(5).mean()
if smooth_y.max() > 10:
plt.plot(smooth_y[30:], linewidth=3., label=df_NE['CountyNamew/StateAbbrev'].iloc[i])
plt.text(len(smooth_y)-1, smooth_y.values[-1], s=df_NE['CountyNamew/StateAbbrev'].iloc[i])
elif smooth_y.max() > 5:
plt.plot(smooth_y[30:], linewidth=1., alpha=.3, label=df_NE['CountyNamew/StateAbbrev'].iloc[i])
plt.text(len(smooth_y)-1, smooth_y.values[-1], s=df_NE['CountyNamew/StateAbbrev'].iloc[i])
#plt.axvline(x=55, color='r', alpha=0.6, linestyle='--', linewidth=3.)
plt.xlim((30, len(smooth_y) + 6))
plt.xlabel("Days since first case in US")
plt.ylabel("Daily cases")
#plt.legend()
plot_neigh_hist('Hall, NE')
plot_neigh_hist('Douglas, NE')
```
### Louisiana
```
LA_counties = ['Jefferson, LA',
'Orleans, LA',
'East Baton Rouge, LA',
'St. Landry, LA',
'St. Tammany, LA',
'Lafayette, LA',
'Washington, LA',
'Evangeline, LA']
df_LA = df[df['CountyNamew/StateAbbrev'].isin(np.array(LA_counties))]
plt.figure(figsize=(15, 10))
for i in range(len(df_LA)):
daily_cases = np.diff(df_LA['cases'].iloc[i])
smooth_y = pd.Series(daily_cases).rolling(5).mean()
if smooth_y.max() > 10:
plt.plot(smooth_y[30:], linewidth=3., label=df_LA['CountyNamew/StateAbbrev'].iloc[i])
plt.text(len(smooth_y)-1, smooth_y.values[-1], s=df_LA['CountyNamew/StateAbbrev'].iloc[i])
elif smooth_y.max() > 5:
plt.plot(smooth_y[30:], linewidth=1., alpha=.3, label=df_LA['CountyNamew/StateAbbrev'].iloc[i])
plt.text(len(smooth_y)-1, smooth_y.values[-1], s=df_LA['CountyNamew/StateAbbrev'].iloc[i])
#plt.axvline(x=55, color='r', alpha=0.6, linestyle='--', linewidth=3.)
plt.xlim((30, len(smooth_y) + 6))
plt.xlabel("Days since first case in US")
plt.ylabel("Daily cases")
#plt.legend()
plot_neigh_hist('St. Landry, LA')
plot_neigh_hist('Lafayette, LA')
```
## Clustering
```
from sklearn.cluster import KMeans  # needed by state_wise_clustering below

counties = weights_out.keys()
counties = np.array(list(counties))
n = len(counties)
weights_matrix = np.zeros((n, n))
counties_index_dict = dict(zip(counties, range(n)))
for resident in weights_out:
for work in weights_out[resident]:
weights_matrix[counties_index_dict[resident], counties_index_dict[work]] = weights_out[resident][work]
def state_wise_clustering(states, K):
#state = 'CA'
state_counties = np.array([i for i, county in enumerate(counties) if county[-2:] in states])
weights_state = weights_matrix[state_counties,:][:,state_counties]
num_state_counties = len(state_counties)
colsums = np.sum(weights_state, axis=0)
rowsums = np.sum(weights_state, axis=1)
allsum = np.mean(colsums)
colsums_reg = colsums + allsum
rowsums_reg = rowsums + allsum
for i in range(num_state_counties):
for j in range(num_state_counties):
weights_state[i, j] /= np.sqrt(rowsums_reg[i] * colsums_reg[j])
w_svd = np.linalg.svd(weights_state)
xl_svd, xr_svd = w_svd[0][:,:K], np.transpose(w_svd[2][:K,:])
for i in range(num_state_counties):
xl_svd[i,:] = xl_svd[i,:]/np.linalg.norm(xl_svd[i,:], 2)
xr_svd[i,:] = xr_svd[i,:]/np.linalg.norm(xr_svd[i,:], 2)
km = KMeans(
n_clusters=K,
random_state=0
)
km.fit_predict(xl_svd)
left_labels = copy.deepcopy(km.labels_)
km.fit_predict(xr_svd)
right_labels = copy.deepcopy(km.labels_)
return state_counties, left_labels, right_labels, w_svd[1]
```
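`state_wise_clustering` above is a regularized spectral co-clustering: degree-normalize the commute matrix, take its top-K singular vectors, row-normalize them, and run k-means on the result. A numpy-only sketch of the normalization and embedding steps (the k-means step is omitted; the toy matrix is made up):

```python
import numpy as np

def spectral_embedding(W, K):
    """Regularized degree normalization + rank-K SVD embedding of a flow matrix."""
    rowsums = W.sum(axis=1)
    colsums = W.sum(axis=0)
    tau = colsums.mean()  # regularizer, as in the notebook (mean column sum)
    W_norm = W / np.sqrt(np.outer(rowsums + tau, colsums + tau))
    U, s, Vt = np.linalg.svd(W_norm)
    X = U[:, :K]
    return X / np.linalg.norm(X, axis=1, keepdims=True)  # row-normalize the embedding

# Toy 4-county flow matrix with two tightly coupled pairs.
W = np.array([[0., 9., 1., 0.],
              [8., 0., 0., 1.],
              [1., 0., 0., 9.],
              [0., 1., 8., 0.]])
X = spectral_embedding(W, K=2)
print(X.shape)  # (4, 2); each row is a unit vector ready for k-means
```

The `tau` regularization keeps low-degree counties from dominating the normalization, which matters for sparse commute matrices.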
### Northern California
```
states = ['CA']
K = 4
state_counties, left_labels, right_labels, svs = state_wise_clustering(states=states, K=K)
if len(states) == 1:
for i in range(K):
cluster = np.array(list(counties))[state_counties[left_labels == i]]
print(f"Sending cluster #{i}:\n" + ', '.join([c[:-4] for c in cluster]) + "\n")
else:
for i in range(K):
cluster = np.array(list(counties))[state_counties[left_labels == i]]
print(f"Sending cluster #{i}:\n" + ', '.join(cluster) + "\n")
bay_area_counties = np.array(list(counties))[state_counties[left_labels == 1]]
df_bay_area = df[df['CountyNamew/StateAbbrev'].isin(np.array(bay_area_counties))]
plt.figure(figsize=(15, 10))
for i in range(len(df_bay_area)):
daily_cases = np.diff(df_bay_area['cases'].iloc[i])
smooth_y = pd.Series(daily_cases).rolling(5).mean()
if smooth_y.max() > 20:
plt.plot(smooth_y[30:], linewidth=3., label=df_bay_area['CountyNamew/StateAbbrev'].iloc[i])
else:
plt.plot(smooth_y[30:], linewidth=1., alpha=.3, label=df_bay_area['CountyNamew/StateAbbrev'].iloc[i])
plt.text(len(smooth_y)-1, smooth_y.values[-1], s=df_bay_area['CountyNamew/StateAbbrev'].iloc[i])
plt.axvline(x=55, color='r', alpha=0.6, linestyle='--', linewidth=3.)
plt.xlim((30, len(smooth_y) + 6))
plt.xlabel("Days since first case in US")
plt.ylabel("Daily cases")
#plt.legend()
plot_neigh_hist('Santa Clara, CA')
plot_neigh_hist('Alameda, CA')
```
### NY, NJ, CT
```
states = ['NY', 'NJ', 'CT']
K = 16
state_counties, left_labels, right_labels, svs = state_wise_clustering(states=states, K=K)
if len(states) == 1:
for i in range(K):
cluster = counties[state_counties[left_labels == i]]
print(f"Sending cluster #{i}:\n" + ', '.join([c[:-4] for c in cluster]) + "\n")
else:
for i in range(K):
cluster = counties[state_counties[left_labels == i]]
print(f"Sending cluster #{i}:\n" + ', '.join(cluster) + "\n")
ny_counties = counties[state_counties[left_labels == 1]]
df_ny = df[df['CountyNamew/StateAbbrev'].isin(np.array(ny_counties))]
plt.figure(figsize=(15, 10))
for i in range(len(df_ny)):
daily_cases = 1e4*np.diff(df_ny['cases'].iloc[i]/df_ny['PopulationEstimate2018'].iloc[i])
smooth_y = pd.Series(daily_cases).rolling(5).mean()
if smooth_y.max() > 1:
plt.plot(smooth_y[30:], linewidth=3., label=df_ny['CountyNamew/StateAbbrev'].iloc[i])
plt.text(len(smooth_y)-1, smooth_y.values[-1], s=df_ny['CountyNamew/StateAbbrev'].iloc[i])
#elif smooth_y.max() > 20:
# plt.plot(smooth_y[30:], linewidth=1., alpha=.3, label=df_socal['CountyNamew/StateAbbrev'].iloc[i])
# plt.text(len(smooth_y)-1, smooth_y.values[-1], s=df_socal['CountyNamew/StateAbbrev'].iloc[i])
plt.axvline(x=58, color='r', alpha=0.6, linestyle='--', linewidth=3.)
plt.xlim((30, len(smooth_y) + 6))
plt.xlabel("Days since first case in US")
plt.ylabel("Daily cases (per 10k population)")
plt.legend()
ny_counties = counties[state_counties[left_labels == 7]]
df_ny = df[df['CountyNamew/StateAbbrev'].isin(np.array(ny_counties))]
plt.figure(figsize=(15, 10))
for i in range(len(df_ny)):
daily_cases = 1e4*np.diff(df_ny['cases'].iloc[i]/df_ny['PopulationEstimate2018'].iloc[i])
smooth_y = pd.Series(daily_cases).rolling(5).mean()
if smooth_y.max() > 1:
plt.plot(smooth_y[30:], linewidth=3., label=df_ny['CountyNamew/StateAbbrev'].iloc[i])
plt.text(len(smooth_y)-1, smooth_y.values[-1], s=df_ny['CountyNamew/StateAbbrev'].iloc[i])
#elif smooth_y.max() > 20:
# plt.plot(smooth_y[30:], linewidth=1., alpha=.3, label=df_socal['CountyNamew/StateAbbrev'].iloc[i])
# plt.text(len(smooth_y)-1, smooth_y.values[-1], s=df_socal['CountyNamew/StateAbbrev'].iloc[i])
#plt.axvline(x=55, color='r', alpha=0.6, linestyle='--', linewidth=3.)
plt.xlim((30, len(smooth_y) + 6))
plt.xlabel("Days since first case in US")
plt.ylabel("Daily cases (per 10k population)")
plt.legend()
plot_neigh_hist('New York, NY')
plot_neigh_hist('Westchester, NY')
```
### Louisiana
```
states = ['LA']
K = 8
state_counties, left_labels, right_labels, svs = state_wise_clustering(states=states, K=K)
if len(states) == 1:
for i in range(K):
cluster = counties[state_counties[left_labels == i]]
print(f"Sending cluster #{i}:\n" + ', '.join([c[:-4] for c in cluster]) + "\n")
else:
for i in range(K):
cluster = counties[state_counties[left_labels == i]]
print(f"Sending cluster #{i}:\n" + ', '.join(cluster) + "\n")
df_LA = df[df['CountyNamew/StateAbbrev'].isin(np.array(counties[state_counties[left_labels == 0]]))]
plt.figure(figsize=(15, 10))
for i in range(len(df_LA)):
daily_cases = 1e4*np.diff(df_LA['cases'].iloc[i]/df_LA['PopulationEstimate2018'].iloc[i])
smooth_y = pd.Series(daily_cases).rolling(5).mean()
if smooth_y.max() > 5:
plt.plot(smooth_y[30:], linewidth=3., label=df_LA['CountyNamew/StateAbbrev'].iloc[i])
plt.text(len(smooth_y)-1, smooth_y.values[-1], s=df_LA['CountyNamew/StateAbbrev'].iloc[i])
#plt.axvline(x=55, color='r', alpha=0.6, linestyle='--', linewidth=3.)
plt.xlim((30, len(smooth_y) + 6))
plt.xlabel("Days since first case in US")
plt.ylabel("Daily cases (per 10k population)")
plt.legend()
```
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
from ChangTools.plotting import prettyplot
from ChangTools.plotting import prettycolors
import env
import corner as DFM
# read in illustris SFH file from Tijske
dat = h5py.File('binsv2all1e8Msunh_z0.hdf5', 'r')
dat.keys()
# current stellar mass [10^10 Msun]
galpop = {}
galpop['M*'] = dat['CurrentStellarMass'][()].flatten() * 1e10  # .value was removed in h5py 3.x; [()] reads the full dataset
# formed stellar mass is in a grid of time bins and metallicity
# Time bin edges [Gyr]
t_bins = np.array([0.0, 0.005, 0.015, 0.025, 0.035, 0.045, 0.055, 0.065, 0.075, 0.085, 0.095,
                   0.125, 0.175, 0.225, 0.275, 0.325, 0.375, 0.425, 0.475, 0.55, 0.65, 0.75, 0.85, 0.95,
                   1.125, 1.375, 1.625, 1.875, 2.125, 2.375, 2.625, 2.875, 3.125, 3.375, 3.625, 3.875,
                   4.25, 4.75, 5.25, 5.75, 6.25, 6.75, 7.25, 7.75, 8.25, 8.75, 9.25, 9.75,
                   10.25, 10.75, 11.25, 11.75, 12.25, 12.75, 13.25, 13.75])
sfh_grid = dat['FormedStellarMass'][()]
dM_t = np.sum(sfh_grid, axis=1) # sum up all the metallicities so you have delta M* in a grid of galaxies and time
galpop['sfr'] = (1e10 * (dM_t[:,0] + dM_t[:,1])/(0.015 * 1e9)).flatten() # 'current' SFR averaged over the last 0.015 Gyr
sfh_gt = 1e10 * dM_t  # formed mass per galaxy and time bin [Msun]; assumed definition for "sfh_gt" used in the plots below
fig = plt.figure()
sub = fig.add_subplot(111)
DFM.hist2d(np.log10(galpop['M*']), np.log10(galpop['sfr']), range=[[7.5, 12.], [-2., 1.5]], color='#1F77B4')
m_arr = np.arange(6., 12.1, 0.1)
sub.plot(m_arr, 0.9 * (m_arr - 10.5) + 0.15, c='k', lw=2, ls='--')
sub.set_xlabel('Stellar Mass')
sub.set_ylabel('SFR')
```
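The "current" SFR above is just the mass formed in the most recent time bins divided by their combined width. A toy sketch of this binned-SFH-to-SFR conversion (the numbers are made up; the real values come from the `FormedStellarMass` grid):

```python
import numpy as np

# Toy formed-mass history for one galaxy: bin edges in Gyr and mass
# formed per bin in Msun, mirroring t_bins and the summed dM_t grid.
t_edges = np.array([0.0, 0.005, 0.015, 0.025])  # Gyr
dM = np.array([5.0e7, 1.0e8, 2.0e8])            # Msun formed in each bin

# "Current" SFR: mass formed over the last 0.015 Gyr, in Msun/yr.
sfr_recent = (dM[0] + dM[1]) / (0.015 * 1e9)
print(sfr_recent)  # 10.0

# Full SFR history: mass per bin divided by bin width (Gyr -> yr).
sfr_history = dM / (np.diff(t_edges) * 1e9)
print(sfr_history)  # [10. 10. 20.]
```

The same per-bin division is what turns `sfh_gt` rows into the SFR tracks plotted below.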
Star-forming galaxies at $z \sim 0$
```
logm = np.log10(galpop['M*'])
logsfr = np.log10(galpop['sfr'])
sfrcut = 0.9 * (logm - 10.5) + 0.15
is_SF = np.where(logsfr > sfrcut)
fig = plt.figure()
sub = fig.add_subplot(111)
DFM.hist2d(logm[is_SF], logsfr[is_SF], range=[[7.5, 12.], [-2., 1.5]], color='#1F77B4')
sub.set_xlabel('Stellar Mass')
sub.set_xlim([7.5, 12.])
sub.set_ylabel('SFR')
sub.set_ylim([-2., 1.5])
prettyplot()
fig = plt.figure(figsize=(10,5))
bkgd = fig.add_subplot(111, frameon=False)
sub = fig.add_subplot(121)
for i in range(10):
dm_i = sfh_gt[i,:]
sfr_i = dm_i[:-1]/(t_bins[1:] - t_bins[:-1])
sub.plot(t_bins[:-1], np.log10(sfr_i))
sub.set_xlim([0,13.75])
sub = fig.add_subplot(122)
for i in range(5):
dm_i = sfh_gt[i,:]
sfr_i = dm_i[:-1]/(t_bins[1:] - t_bins[:-1])
sub.plot(t_bins[:-1], np.log10(sfr_i), lw=2)
sub.set_xlim([0., 8.])
bkgd.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
bkgd.set_xlabel(r'$\mathtt{t \;\;[Gyr]}$', labelpad=20, fontsize=30)
bkgd.set_ylabel(r'$\mathtt{log \;SFR \;\;[M_\odot/yr^{-1}]}$', labelpad=20, fontsize=30)
```
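The star-forming selection above keeps galaxies lying above a sloped line in the log M*–log SFR plane. A minimal sketch of the same cut:

```python
import numpy as np

def is_star_forming(log_mstar, log_sfr):
    """The notebook's main-sequence cut: logSFR > 0.9*(logM* - 10.5) + 0.15."""
    return log_sfr > 0.9 * (log_mstar - 10.5) + 0.15

# At logM* = 10.5 the threshold is 0.15; at logM* = 9.0 it drops to -1.2.
logm = np.array([10.5, 10.5, 9.0])
logsfr = np.array([0.5, -1.0, 0.0])
print(is_star_forming(logm, logsfr))  # [ True False  True]
```

Applying this to the full arrays reproduces the `is_SF` index used in the panels that follow.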
### SFH of star-forming galaxies
High stellar mass
```
fig = plt.figure(figsize=(10,5))
bkgd = fig.add_subplot(111, frameon=False)
sub = fig.add_subplot(121)
for i in is_SF[0][:20]:
dm_i = sfh_gt[i,:]
sfr_i = dm_i[:-1]/(t_bins[1:] - t_bins[:-1])
sub.plot(t_bins[:-1], np.log10(sfr_i))
sub.set_xlim([0,13.75])
sub = fig.add_subplot(122)
for i in is_SF[0][:8]:
dm_i = sfh_gt[i,:]
sfr_i = dm_i[:-1]/(t_bins[1:] - t_bins[:-1])
sub.plot(t_bins[:-1], np.log10(sfr_i), lw=2)
sub.set_xlim([0., 8.])
bkgd.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
bkgd.set_xlabel(r'$\mathtt{t \;\;[Gyr]}$', labelpad=20, fontsize=30)
bkgd.set_ylabel(r'$\mathtt{log \;SFR \;\;[M_\odot/yr^{-1}]}$', labelpad=20, fontsize=30)
fig = plt.figure(figsize=(10,5))
bkgd = fig.add_subplot(111, frameon=False)
sub = fig.add_subplot(121)
for i in is_SF[0][-8:]:
dm_i = sfh_gt[i,:]
sfr_i = dm_i[:-1]/(t_bins[1:] - t_bins[:-1])
sub.plot(t_bins[:-1], np.log10(sfr_i))
sub.set_xlim([0,13.75])
sub = fig.add_subplot(122)
for i in is_SF[0][-8:]:
dm_i = sfh_gt[i,:]
sfr_i = dm_i[:-1]/(t_bins[1:] - t_bins[:-1])
sub.plot(t_bins[:-1], np.log10(sfr_i), lw=2)
sub.set_xlim([0., 8.])
bkgd.tick_params(labelcolor='none', top=False, bottom=False, left=False, right=False)
bkgd.set_xlabel(r'$\mathtt{t \;\;[Gyr]}$', labelpad=20, fontsize=30)
bkgd.set_ylabel(r'$\mathtt{log \;SFR \;\;[M_\odot/yr^{-1}]}$', labelpad=20, fontsize=30)
fig = plt.figure()
sub = fig.add_subplot(111)
DFM.hist2d(logm[is_SF], logsfr[is_SF], range=[[7.5, 12.], [-2., 1.5]], color='#1F77B4')
sub.set_xlabel('Stellar Mass')
sub.set_xlim([7.5, 12.])
sub.set_ylabel('SFR')
sub.set_ylim([-2., 1.5])
```
<!--NOTEBOOK_HEADER-->
*This notebook contains material from [PyRosetta](https://RosettaCommons.github.io/PyRosetta.notebooks);
content is available [on Github](https://github.com/RosettaCommons/PyRosetta.notebooks.git).*
<!--NAVIGATION-->
< [Getting spatial features from a Pose](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.04-Getting-Spatial-Features-from-Pose.ipynb) | [Contents](toc.ipynb) | [Index](index.ipynb) | [Visualization with the `PyMOLMover`](http://nbviewer.jupyter.org/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.06-Visualization-and-PyMOL-Mover.ipynb) ><p><a href="https://colab.research.google.com/github/RosettaCommons/PyRosetta.notebooks/blob/master/notebooks/02.05-Protein-Geometry.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open in Google Colaboratory"></a>
# Protein Geometry
Keywords: pose_from_sequence(), bond_angle(), set_phi(), set_psi(), xyz()
```
# Notebook setup
import sys
if 'google.colab' in sys.modules:
!pip install pyrosettacolabsetup
import pyrosettacolabsetup
pyrosettacolabsetup.setup()
print ("Notebook is set for PyRosetta use in Colab. Have fun!")
from pyrosetta import *
from pyrosetta.teaching import *
init()
```
**From previous section:**
Make sure you are in the directory with the pdb files:
`cd google_drive/My\ Drive/student-notebooks/`
```
pose = pose_from_pdb("inputs/5tj3.pdb")
resid = pose.pdb_info().pdb2pose('A', 28)
res_28 = pose.residue(resid)
N28 = AtomID(res_28.atom_index("N"), resid)
CA28 = AtomID(res_28.atom_index("CA"), resid)
C28 = AtomID(res_28.atom_index("C"), resid)
```
## Rosetta Database Files
Let's take a look at Rosetta's ideal values for this amino acid's bond lengths and see how they compare. First find PyRosetta's database directory on your computer (hint: its path is printed when you run `init()` at the beginning of this Jupyter notebook). Here's an example:
```
from IPython.display import Image
Image('./Media/init-path.png',width='700')
```
Head to the subdirectory `chemical/residue_type_sets/fa_standard/` to find the residue you're looking at. Let's look at valine, which can be found in the `l-caa` folder, since it is a standard amino acid. The `ICOOR_INTERNAL` lines will provide torsion angles, bond angles, and bond lengths between subsequent atoms in this residue. From this you should be able to deduce Rosetta's ideal $N$-$C_\alpha$ and $C_\alpha$-$C$ bond lengths.
These ideal values would for instance be used if we generated a new pose from an amino acid sequence. In fact, let's try that here:
```
one_res_seq = "V"
pose_one_res = pose_from_sequence(one_res_seq)
print(pose_one_res.sequence())
N_xyz = pose_one_res.residue(1).xyz("N")
CA_xyz = pose_one_res.residue(1).xyz("CA")
C_xyz = pose_one_res.residue(1).xyz("C")
print((CA_xyz - N_xyz).norm())
print((CA_xyz - C_xyz).norm())
```
Now let's figure out how to get angles in the protein. If the `Conformation` class has the angle we're looking for, we can use the `AtomID` objects we've already created:
```
angle = pose.conformation().bond_angle(N28, CA28, C28)
print(angle)
```
Notice that `.bond_angle()` gives us the angle in radians. We can compute the above angle in degrees:
```
import math
angle*180/math.pi
```
Note how this compares to the expected angle based on a tetrahedral geometry for the $C_\alpha$ carbon.
### Exercise 5: Calculating psi angle
Try to calculate this angle using the xyz atom positions for N, CA, and C of residue A:28 in the protein. You can use the `Vector` function `v3 = v1.dot(v2)` along with `v1.norm()`. The vector angle between two vectors BA and BC is $\cos^{-1}(\frac{BA \cdot BC}{|BA| |BC|})$.
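As a plain-NumPy sketch of that formula (outside PyRosetta; with a `Pose` you would use the `xyz()` Vectors and their `dot` and `norm` methods), the angle at vertex B can be computed like this:

```python
import math
import numpy as np

def vector_angle(a, b, c):
    """Angle in degrees at vertex b, from cos(theta) = (BA . BC) / (|BA| |BC|)."""
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_theta = ba.dot(bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return math.degrees(math.acos(cos_theta))

# Sanity check on a right angle
print(vector_angle([1, 0, 0], [0, 0, 0], [0, 1, 0]))  # 90.0
```

Applying the same function to the N, CA, and C coordinates of residue A:28 should reproduce the `bond_angle()` result (after converting radians to degrees).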
## Manipulating Protein Geometry
We can also alter the geometry of the protein, with particular interest in manipulating the protein backbone and $\chi$ dihedrals.
### Exercise 6: Changing phi/psi angles
Perform each of the following manipulations, and give the coordinates of the CB atom of Pose residue 2 afterward.
- Set the $\phi$ of residue 2 to -60
- Set the $\psi$ of residue 2 to -43
```
# three alanines
tripeptide = pose_from_sequence("AAA")
orig_phi = tripeptide.phi(2)
orig_psi = tripeptide.psi(2)
print("original phi:", orig_phi)
print("original psi:", orig_psi)
# print the xyz coordinates of the CB atom of residue 2 here BEFORE setting
### BEGIN SOLUTION
print("xyz coordinates:", tripeptide.residue(2).xyz("CB"))
### END SOLUTION
# set the phi and psi here
### BEGIN SOLUTION
tripeptide.set_phi(2, -60)
tripeptide.set_psi(2, -43)
print("new phi:", tripeptide.phi(2))
print("new psi:", tripeptide.psi(2))
### END SOLUTION
# print the xyz coordinates of the CB atom of residue 2 here AFTER setting
### BEGIN SOLUTION
print("xyz coordinates:", tripeptide.residue(2).xyz("CB"))
### END SOLUTION
# did changing the phi and psi angle change the xyz coordinates of the CB atom of alanine 2?
```
By printing the pose (see the command below), we can see that the whole protein is in a single chain from residue 1 to 524 (or 519, depending on whether the pose was cleaned).
The `FOLD_TREE` controls how changes to residue geometry propagate through the protein (left to right in the FoldTree chain). We will go over the FoldTree in another lecture, but based on how you think perturbing the backbone of a protein structure affects the overall protein conformation, consider this question: if you changed a torsion angle for residue 5, would the Cartesian coordinates for residue 7 change? What about the coordinates for residue 3?
Try looking at the pose in PyMOL before and after you set the backbone $\phi$ and $\psi$ for a chosen residue.
```
print(pose)
```
# Visualize Solar Radiation Data
The data in this notebook come from the [National Solar Radiation Data Base](http://rredc.nrel.gov/solar/old_data/nsrdb/), specifically the [1991 - 2010 update to the National Solar Radiation Database](http://rredc.nrel.gov/solar/old_data/nsrdb/1991-2010/). The data set consists of CSV files [measured at USAF weather stations](http://rredc.nrel.gov/solar/old_data/nsrdb/1991-2010/hourly/list_by_USAFN.html).
## Setup
Run the `download_sample_data.py` script to download Lidar from [Puget Sound LiDAR consortium](http://pugetsoundlidar.ess.washington.edu) and other example data sets.
From your local clone of the `datashader` repository:
```
cd examples
conda env create -f environment.yml
source activate ds
python download_sample_data.py
```
Note: on Windows, replace `source activate ds` with `activate ds`.
```
import glob
import os
import re
from collections import defaultdict
from dask.distributed import Client
from holoviews.operation import decimate
from holoviews.operation.datashader import dynspread
import dask
import dask.dataframe as dd
import holoviews as hv
import numpy as np
import pandas as pd
hv.notebook_extension('bokeh')
decimate.max_samples=1000
dynspread.max_px=20
dynspread.threshold=0.5
client = Client()
NUM_STATIONS = None # set to an integer to limit to a subset of SOLAR_FILES
SOLAR_FNAME_PATTERN = os.path.join('..', 'data', '72*', '*solar.csv')
SOLAR_FILES = glob.glob(SOLAR_FNAME_PATTERN)
META_FILE = os.path.join('..', 'data', 'NSRDB_StationsMeta.csv')
get_station_yr = lambda fname: tuple(map(int, os.path.basename(fname).split('_')[:2]))
STATION_COMBOS = defaultdict(lambda: [])
for fname in SOLAR_FILES:
    k, v = get_station_yr(fname)
    STATION_COMBOS[k].append([v, fname])
choices = tuple(STATION_COMBOS)
if NUM_STATIONS:
    choices = choices[:NUM_STATIONS]
STATION_COMBOS = {k: STATION_COMBOS[k] for k in choices}
files_for_station = lambda station: [x[1] for x in STATION_COMBOS[station]]
station_year_files = lambda station, year: [x for x in files_for_station(station) if '_{}_'.format(year) in x]
def clean_col_names(dframe):
    cols = [re.sub('_$', '', re.sub('[/:\(\)_\s^-]+', '_', col.replace('%', '_pcent_'))).lower()
            for col in dframe.columns]
    dframe.columns = cols
    return dframe
meta_df = clean_col_names(pd.read_csv(META_FILE, index_col='USAF'))
meta_df.loc[list(STATION_COMBOS)]
keep_cols = ['date', 'y', 'x', 'julian_hr', 'year', 'usaf', 'month', 'hour']
@dask.delayed
def read_one_fname(usaf_station, fname):
    dframe = clean_col_names(pd.read_csv(fname))
    station_data = meta_df.loc[usaf_station]
    hour_offset = dframe.hh_mm_lst.map(lambda x: pd.Timedelta(hours=int(x.split(':')[0])))
    keep = keep_cols + [col for col in dframe.columns
                        if ('metstat' in col or col in keep_cols)
                        and 'flg' not in col]
    dframe['date'] = pd.to_datetime(dframe.yyyy_mm_dd) + hour_offset
    dframe['month'] = dframe.date.dt.month
    dframe['hour'] = dframe.date.dt.hour
    dframe['usaf'] = usaf_station
    dframe['y'], dframe['x'] = station_data.nsrdb_lat_dd, station_data.nsrdb_lon_dd
    dframe['julian_hr'] = dframe.date.dt.hour + (dframe.date.dt.dayofyear - 1) * 24
    dframe['year'] = dframe.date.dt.year
    dframe[dframe <= -999] = np.NaN
    return dframe.loc[:, keep]
def read_one_station(station):
    '''Read one USAF station's 1991 to 2010 CSVs - dask.delayed for each year'''
    files = files_for_station(station)
    return dd.from_delayed([read_one_fname(station, fname) for fname in files]).compute()
example_usaf = tuple(STATION_COMBOS)[0]
df = read_one_station(example_usaf)
df.head()
df.describe()
desc = df.date.describe()
desc
```
The next cell makes some labels for the time series groupby operations' plots and boxplots.
```
direct, dif_h, glo_h = ('Direct Normal',
'Diffuse Horizontal',
'Global Horizontal',)
labels = {}
watt_hrs_m2_cols = [col for col in df.columns if 'wh_m_2' in col and not 'suny' in col]
for col in watt_hrs_m2_cols:
    label_1 = "Clear Sky " if 'csky' in col else "Measured "
    label_2 = direct if '_dir_' in col else glo_h if '_glo_' in col else dif_h
    labels[col] = label_1 + label_2
labels
def get_station_quantiles(station=None, grouper='julian_hr', usaf_data=None):
    '''Given a station name or dataframe do groupby on time bins

    Parameters:
        station: Integer name of a USAF weather station
                 (folder names holding years' CSVs)
        grouper: One of "julian_hr", "hour", "month", "month_hour"
                 (Note that julian_hr does not standardize relative to leap
                 years: non-leap years have 8760 hrs, leap years 8784 hrs)
        usaf_data: Give CSVs' dataframe instead of station name
    Returns:
        summary_df: Dataframe with 25%, 50%, 75% for each column
    '''
    if usaf_data is None:
        usaf_data = read_one_station(station)
    if grouper == 'hour':
        group_var = usaf_data.date.dt.hour
    elif grouper == 'month':
        group_var = usaf_data.date.dt.month
    elif grouper == 'month_hour':
        group_var = [usaf_data.date.dt.month, usaf_data.date.dt.hour]
    else:
        group_var = grouper
    usaf_data = usaf_data.groupby(group_var)
    usaf_data = usaf_data[keep_cols + watt_hrs_m2_cols]
    low = usaf_data.quantile(0.25)
    median = usaf_data.median()
    hi = usaf_data.quantile(0.75)
    median[grouper] = median.index.values
    median['usaf'] = station
    # For the low, hi quartiles subset the columns
    # for smaller joins - do not include 3 copies of x, y, date, etc.
    join_arg_cols = [col for col in low.columns if col not in keep_cols]
    summary_df = median.join(low[join_arg_cols],
                             rsuffix='_low').join(hi[join_arg_cols], rsuffix='_hi')
    return summary_df
```
Get Julian day of year summary for one USAF station using `pandas.DataFrame.groupby`.
```
julian_summary = get_station_quantiles(station=example_usaf, grouper='julian_hr',)
julian_summary.head()
```
The function `get_station_quantiles` returns a `DataFrame` with
* spatial coordinates `x` and `y`
* columns related to clear sky solar radiation (columns with `_csky_` as a token)
* measured solar radiation (columns without `_csky_` as a token)
* some date / time related columns helpful for `groupby` operations
```
julian_summary.columns
def plot_gen(station=None, grouper='julian_hr', usaf_data=None):
    '''Given a station name or dataframe do groupby on time bins

    Parameters:
        station: Integer name of a USAF weather station
                 (folder names holding years' CSVs)
        grouper: One of "julian_hr", "hour", "month", "month_hour"
        usaf_data: Give CSVs' dataframe instead of station name
    Returns:
        curves: Dictionary of hv.Curve objects showing
                25%, 50%, 75% percentiles
    '''
    summary_df = get_station_quantiles(station=station,
                                       grouper=grouper,
                                       usaf_data=usaf_data)
    curves = {}
    kw = dict(style=dict(s=2, alpha=0.5))
    for col, label in labels.items():
        dates = pd.date_range(start=pd.Timestamp('2001-01-01'),
                              freq='H',
                              periods=summary_df.shape[0])
        median_col = summary_df[col]
        low_col = summary_df[col + '_low']
        hi_col = summary_df[col + '_hi']
        hi = hv.Curve((dates, hi_col), label=label + ' (upper quartile)')(**kw)
        low = hv.Curve((dates, low_col), label=label + ' (lower quartile)')(**kw)
        median = hv.Curve((dates, median_col), label=label)(**kw)
        plot_id = tuple(col.replace('metstat_', '').replace('_wh_m_2', '').split('_'))
        curves[plot_id] = low * median * hi
        curves[plot_id].group = labels[col]
    return curves
```
Run `plot_gen` (function above) with an example USAF station to get a dictionary of `holoviews.Curve` objects that have been combined with the overloaded `holoviews` `*` operator for `Curves` or other `holoviews.element` objects. The `*` operator is used to show 25%, 50%, and 75% time series.
```
hour_of_year = plot_gen(station=example_usaf)
```
Now we have a dictionary with short keys for different plots of 25%, 50%, 75% of:
* `(glo,)`: Measured Global Horizontal
* `(dir,)`: Measured Direct Normal
* `(dif,)`: Measured Diffuse Horizontal
* `('csky', 'glo')`: Clear Sky Global Horizontal
* `('csky', 'dir')`: Clear Sky Direct Normal
* `('csky', 'dif')`: Clear Sky Diffuse Horizontal
```
list(hour_of_year)
%%opts Curve [width=700 height=500]
%%opts Layout [tabs=True]
hour_of_year[('dir',)] + hour_of_year[('csky', 'dir')]
%%opts Curve [width=700 height=500 ]
%%opts Layout [tabs=True]
hour_of_year[('glo',)] + hour_of_year[('csky', 'glo')] + hour_of_year[('dif',)] + hour_of_year[('csky', 'dif',)]
```
The next cells repeat the groupby operations for hour of day.
```
usaf_data = read_one_station(example_usaf)
hour_of_day = plot_gen(grouper='hour', usaf_data=usaf_data)
%%opts Curve [width=700 height=500]
%%opts Layout [tabs=True]
hour_of_day[('dir',)] + hour_of_day[('csky', 'dir')]
```
When grouping by hour of day or month of year, the number of groups on the horizontal axis is small enough for box plots to show distributions legibly. The next cell uses `holoviews.BoxWhisker` plots to show the direct normal radiation.
```
%%opts BoxWhisker [width=600 height=600]
%%opts Layout [tabs=True]
(hv.BoxWhisker(usaf_data, kdims=['hour'], vdims=['metstat_dir_wh_m_2'],
group='Direct Normal - Hour of Day') +
hv.BoxWhisker(usaf_data, kdims=['month'], vdims=['metstat_dir_wh_m_2'],
group='Direct Normal - Month of Year'))
```
# Exercise Set 6: Data Structuring 2
*Afternoon, August 15, 2018*
In this Exercise Set we will continue working with the weather data you downloaded and saved in Exercise Set 4.
> **_Note_**: to solve the bonus exercises in this exercise set you will need to apply the `.groupby()` method a few times. This has not yet been covered in the lectures (you will see it tomorrow).
>
> `.groupby()` is a method of pandas dataframes, meaning we can call it like so: `data.groupby('colname')`. The method groups your dataset by a specified column, and applies any following changes within each of these groups. For a more detailed explanation see [this link](https://www.tutorialspoint.com/python_pandas/python_pandas_groupby.htm). The [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) might also be useful.
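As a minimal illustration of the pattern on toy data (not the NOAA set):

```python
import pandas as pd

toy = pd.DataFrame({'station': ['A', 'A', 'B'],
                    'temp': [10.0, 14.0, 20.0]})
# Mean temperature within each station group
means = toy.groupby('station')['temp'].mean()
print(means['A'], means['B'])  # 12.0 20.0
```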
First load in the required modules and set up the plotting library:
```
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
```
## Exercise Section 6.1: Weather, part 2
This section is the second part of three that analyzes NOAA data. The first part is Exercise Section 4.1, the last part is Exercise Section 7.2.
> **Ex. 6.1.1:** Load the CSV data you stored yesterday as part of Exercise Section 4.1. If you didn't manage to save the CSV file, you can use the code in [this gist](https://gist.github.com/Kristianuruplarsen/be3a14b226fc4c4d7b62c39de70307e4) to load in the NOAA data.
```
# [Answer to Ex. 6.1.1]
saved_data_yesterday = False # Change this if you saved the data yesterday. Either solution is fine
# Solution 1
if saved_data_yesterday:
    df_select = pd.read_csv('df_sorted.csv')  # adjust to the path where you saved the file
    df_select.head()
# Solution 2 (using the gist)
else:
    url = 'https://www1.ncdc.noaa.gov/pub/data/ghcn/daily/by_year/1864.csv.gz'
    df_weather = pd.read_csv(url,
                             compression='gzip',
                             header=None).iloc[:, :4]
    df_weather.columns = ['station', 'datetime', 'obs_type', 'obs_value']
    df_weather['obs_value'] = df_weather['obs_value'] / 10
    df_select = df_weather[(df_weather.station == 'ITE00100550') & (df_weather.obs_type == 'TMAX')].copy()
    df_select['TMAX_F'] = 32 + 1.8 * df_select['obs_value']
    df_sorted = df_select.reset_index(drop=True).sort_values(by=['obs_value'])
```
> **Ex. 6.1.2:** Convert the date formatted as string to datetime. Make a new column with the month for each observation.
```
# [Answer to Ex. 6.1.2]
# datetime column
df_select['datetime_dt'] = pd.to_datetime(df_select['datetime'], format = '%Y%m%d')
# month column
df_select['month'] = df_select.datetime_dt.dt.month
df_select.head()
```
> **Ex. 6.1.3:** Set the datetime variable as temporal index and make a timeseries plot.
> _Hint:_ for this you need to know a few methods of the pandas DataFrames and pandas Series objects. Look up `.set_index()` and `.plot()`.
```
# [Answer to Ex. 6.1.3]
df_select\
.set_index('datetime_dt')\
.obs_value\
.plot(figsize=[11,6])
plt.show()
```
> **Ex. 6.1.4:** Extract the country code from the station name into a separate column.
> _Hint:_ The station column contains a GHCND ID, given to each weather station by NOAA. The format of these IDs is a 2-3 letter country code, followed by an integer identifying the specific station. A simple approach is to assume a fixed length of the country ID. A more complex way would be to use the [`re`](https://docs.python.org/2/library/re.html) module.
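One possible sketch of the regex approach (the station IDs below are illustrative only; the full answer is left for assignment 1):

```python
import pandas as pd

# Illustrative GHCND IDs: a letter prefix (country code) followed by digits
stations = pd.Series(['ITE00100550', 'USW00094728'])
# Take the leading non-digit characters; a fixed-length slice like
# stations.str[:2] would be the simpler alternative mentioned in the hint
codes = stations.str.extract(r'^(\D+)', expand=False)
print(codes.tolist())  # ['ITE', 'USW']
```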
```
# [Answer to Ex. 6.1.4]
# This will be in assignment 1
```
> **Ex. 6.1.5:** Make a function that downloads and formats the weather data according to previous exercises in Exercise Section 4.1, 6.1. You should use data for ALL stations but still only select maximal temperature. _Bonus:_ To validate that your function works plot the temperature curve for each country in the same window. Use `plt.legend()` to add a legend.
```
# [Answer to Ex. 6.1.5]
# This will be in assignment 1
```
## Exercise Section 6.2:
In this section we will use [this dataset](https://archive.ics.uci.edu/ml/datasets/Adult) from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets.html) to practice some basic operations on pandas dataframes.
> **Ex. 6.2.1:** This link `'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'` leads to a comma-separated file with income data from a US census. Load the data into a pandas dataframe and show the 25th to 35th row.
> _Hint #1:_ There are no column names in the dataset. Use the list `['age','workclass', 'fnlwgt', 'educ', 'educ_num', 'marital_status', 'occupation','relationship', 'race', 'sex','capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'wage']` as names.
> _Hint #2:_ When you read in the csv, you might find that pandas includes whitespace in all of the cells. To get around this include the argument `skipinitialspace = True` to `read_csv()`.
```
# [Answer to Ex. 6.2.1]
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
head = ['age','workclass', 'fnlwgt', 'educ', 'educ_num', 'marital_status', 'occupation','relationship', 'race', 'sex',
'capital_gain', 'capital_loss', 'hours_per_week', 'native_country', 'wage']
df = pd.read_csv(url,
sep = ',',
skipinitialspace = True, # Removes whitespace
names = head,
index_col = False # dont use 'age' as the index (!)
)
df.iloc[24:35]
```
> **Ex. 6.2.2:** What is the missing value sign in this dataset? Replace all missing values with NA's understood by pandas. Then proceed to drop all rows containing any missing values with the `dropna` method. How many rows are removed in this operation?
> _Hint 1:_ if this doesn't work as expected you might want to take a look at the hint for 6.2.1 again.
> _Hint 2:_ The NaN method from NumPy might be useful
```
# [Answer to Ex. 6.2.2]
# The data uses '?' for NA's. To replace them we can simply run
from numpy import NaN
df.replace('?', NaN, inplace = True)
df_clean = df.dropna()
print("we've dropped {} rows".format(len(df) - len(df_clean)))
```
> **Ex. 6.2.3:** (_Bonus_) Is there any evidence of a gender-wage-gap in the data? Create a table showing the percentage of men and women earning more than 50K a year.
```
# [Answer to Ex. 6.2.3]
# first we create a dummy for earning more than 50K
df_clean['HighWage'] = (df_clean['wage'] == '>50K').astype(int)
# then group by sex and calculate the mean
df_clean[['sex', 'HighWage']]\
.groupby('sex')\
.mean()
```
> **Ex. 6.2.4:** (_Bonus_) Group the data by years of education (`educ_num`) and marital status. Now plot the share of individuals who earn more than 50K for the two groups 'Divorced' and 'Married-civ-spouse' (normal marriage). Your final result should look like this:

> _Hint:_ the `.query()` method is extremely useful for filtering data.
```
# [Answer to Ex. 6.2.4]
df_clean[['marital_status', 'HighWage', 'educ_num']]\
.groupby(['marital_status', 'educ_num'])\
.mean()\
.reset_index()\
.query("marital_status == 'Divorced' | marital_status == 'Married-civ-spouse'")\
.set_index('educ_num')\
.groupby('marital_status')\
.HighWage\
.plot()
plt.xlabel('Years of education')
plt.ylabel('Share earning more than 50K')
plt.legend()
```
# Spectral Representation Method
Author: Lohit Vandanapu
Date: August 19, 2018
Last Modified: May 09, 2019
In this example, the Spectral Representation Method is used to generate stochastic processes from a prescribed Power Spectrum and associated Cross Spectral Density. This example illustrates how to use the SRM class for the 'n'-dimensional, 'm'-variable case and compares the statistics of the generated stochastic processes with the expected values.
Import the necessary libraries. Here we import standard libraries such as numpy and matplotlib, but also need to import the SRM class from the StochasticProcesses module of UQpy.
```
from UQpy.StochasticProcess import SRM
import numpy as np
import matplotlib.pyplot as plt
from pylab import *
plt.style.use('seaborn')
```
The input parameters necessary for the generation of the stochastic processes are given below:
```
# Number of samples
n_sim = 100
# Number of Dimensions
n = 2
# Number of Variables
m = 3
# Input Data
# Time
T = 10 # Simulation Time
dt = 0.1
nt = int(T / dt) + 1
t = np.linspace(0, T, nt)
# Frequency
nw = 100
W = np.array([1.5, 2.5])
dw = W / (nw - 1)
x_list = [np.linspace(dw[i], W[i], nw) for i in range(n)]
xy_list = np.array(np.meshgrid(*x_list, indexing='ij'))
```
Check that the input parameters satisfy the sampling condition, to prevent aliasing:
```
t_u = 2*np.pi/2/W
if (dt > t_u).any():
    print('Error: dt is too large and will cause aliasing')
```
Defining the Power Spectral Density Function (S) and the Cross Spectral Density (g)
```
S_11 = 125 / 4 * np.linalg.norm(xy_list, axis=0) ** 2 * np.exp(-5 * np.linalg.norm(xy_list, axis=0))
S_22 = 125 / 4 * np.linalg.norm(xy_list, axis=0) ** 2 * np.exp(-3 * np.linalg.norm(xy_list, axis=0))
S_33 = 125 / 4 * np.linalg.norm(xy_list, axis=0) ** 2 * np.exp(-7 * np.linalg.norm(xy_list, axis=0))
g_12 = np.exp(-0.1757 * np.linalg.norm(xy_list, axis=0))
g_13 = np.exp(-3.478 * np.linalg.norm(xy_list, axis=0))
g_23 = np.exp(-3.392 * np.linalg.norm(xy_list, axis=0))
S_list = np.array([S_11, S_22, S_33])
g_list = np.array([g_12, g_13, g_23])
# Assembly of S_jk
S_sqrt = np.sqrt(S_list)
S_jk = np.einsum('i...,j...->ij...', S_sqrt, S_sqrt)
# Assembly of g_jk
g_jk = np.zeros_like(S_jk)
counter = 0
for i in range(m):
    for j in range(i + 1, m):
        g_jk[i, j] = g_list[counter]
        counter = counter + 1
g_jk = np.einsum('ij...->ji...', g_jk) + g_jk
for i in range(m):
    g_jk[i, i] = np.ones_like(S_jk[0, 0])
S = S_jk * g_jk
SRM_object = SRM(n_sim, S, [dt, dt], dw, [nt, nt], nw)
samples = SRM_object.samples
samples
t_list = [t for _ in range(n)]
tt_list = np.array(np.meshgrid(*t_list, indexing='ij'))
fig1 = plt.figure()
plt.title('2d random field with a prescribed Power Spectrum - 1st variable')
pcm = pcolor(tt_list[0], tt_list[1], samples[0, 0, :, :], cmap='RdBu_r')
plt.colorbar(pcm, extend='both', orientation='vertical')
plt.xlabel('$X_{1}$')
plt.ylabel('$X_{2}$')
plt.show()
fig2 = plt.figure()
plt.title('2d random field with a prescribed Power Spectrum - 2nd variable')
pcm = pcolor(tt_list[0], tt_list[1], samples[0, 1, :, :], cmap='RdBu_r')
plt.colorbar(pcm, extend='both', orientation='vertical')
plt.xlabel('$X_{1}$')
plt.ylabel('$X_{2}$')
plt.show()
fig3 = plt.figure()
plt.title('2d random field with a prescribed Power Spectrum - 3rd variable')
pcm = pcolor(tt_list[0], tt_list[1], samples[0, 2, :, :], cmap='RdBu_r')
plt.colorbar(pcm, extend='both', orientation='vertical')
plt.xlabel('$X_{1}$')
plt.ylabel('$X_{2}$')
plt.show()
print('The mean of the samples is ', np.mean(samples), 'whereas the expected mean is 0.000')
print('The variance of the samples is ', np.var(samples), 'whereas the expected variance is ', np.sum(S_list)*np.prod(dw)*(2**n)/m)
```
# Gradient Boosting Regressor
### Required Packages
```
import warnings
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as se
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error
warnings.filterwarnings('ignore')
```
### Initialization
Filepath of CSV file
```
#filepath
file_path = ""
```
List of features which are required for model training .
```
#x_values
features=[]
```
Target feature for prediction.
```
#y_value
target = ''
```
### Data Fetching
Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.
We will use the pandas library to read the CSV file from its storage path, and the `head` function to display the first few rows.
```
df=pd.read_csv(file_path)
df.head()
```
### Feature Selections
Feature selection is the process of reducing the number of input variables when developing a predictive model. It is used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model.
We will assign all the required input features to X and target/outcome to Y.
```
X = df[features]
Y = df[target]
```
### Data Preprocessing
Since the majority of the machine learning models in the Sklearn library don't handle string category data and null values, we have to explicitly remove or replace them. The snippet below defines functions that fill null values if any exist and one-hot encode string-class columns.
```
def NullClearner(df):
    if(isinstance(df, pd.Series) and (df.dtype in ["float64", "int64"])):
        df.fillna(df.mean(), inplace=True)
        return df
    elif(isinstance(df, pd.Series)):
        df.fillna(df.mode()[0], inplace=True)
        return df
    else:
        return df

def EncodeX(df):
    return pd.get_dummies(df)
```
Calling preprocessing functions on the feature and target set.
```
x = X.columns.to_list()
for i in x:
    X[i] = NullClearner(X[i])
X = EncodeX(X)
Y = NullClearner(Y)
X.head()
```
#### Correlation Map
In order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
```
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
```
### Data Splitting
The train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
```
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 123)#performing datasplitting
```
### Model
Gradient Boosting builds an additive model in a forward stage-wise fashion; it allows for the optimization of arbitrary differentiable loss functions. In each stage a regression tree is fit on the negative gradient of the given loss function.
#### Model Tuning Parameters
1. loss : {‘ls’, ‘lad’, ‘huber’, ‘quantile’}, default=’ls’
> Loss function to be optimized. ‘ls’ refers to least squares regression. ‘lad’ (least absolute deviation) is a highly robust loss function solely based on order information of the input variables. ‘huber’ is a combination of the two. ‘quantile’ allows quantile regression (use `alpha` to specify the quantile).
2. learning_ratefloat, default=0.1
> Learning rate shrinks the contribution of each tree by learning_rate. There is a trade-off between learning_rate and n_estimators.
3. n_estimators : int, default=100
> The number of boosting stages to perform. Gradient boosting is fairly robust to over-fitting, so a large number usually results in better performance.
4. criterion : {‘friedman_mse’, ‘mse’, ‘mae’}, default=’friedman_mse’
> The function to measure the quality of a split. Supported criteria are ‘friedman_mse’ for the mean squared error with improvement score by Friedman, ‘mse’ for mean squared error, and ‘mae’ for the mean absolute error. The default value of ‘friedman_mse’ is generally the best as it can provide a better approximation in some cases.
5. max_depth : int, default=3
> The maximum depth of the individual regression estimators. The maximum depth limits the number of nodes in the tree. Tune this parameter for best performance; the best value depends on the interaction of the input variables.
6. max_features : {‘auto’, ‘sqrt’, ‘log2’}, int or float, default=None
> The number of features to consider when looking for the best split:
7. random_state : int, RandomState instance or None, default=None
> Controls the random seed given to each tree estimator at each boosting iteration, as well as the random permutation of the features considered at each split (when `max_features < n_features`).
8. verbose : int, default=0
> Controls the verbosity when fitting and predicting.
9. n_iter_no_change : int, default=None
> <code>n_iter_no_change</code> is used to decide if early stopping will be used to terminate training when validation score is not improving. By default it is set to None to disable early stopping. If set to a number, it will set aside <code>validation_fraction</code> of the training data as a validation set and terminate training when the validation score has not improved in the previous <code>n_iter_no_change</code> iterations.
10. tol : float, default=1e-4
> Tolerance for the early stopping. When the loss is not improving by at least tol for <code>n_iter_no_change</code> iterations (if set to a number), the training stops.
```
# Build Model here
model = GradientBoostingRegressor(random_state = 123)
model.fit(X_train, y_train)
```
#### Model Accuracy
We will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.
> **score**: The **score** function returns the coefficient of determination <code>R<sup>2</sup></code> of the prediction.
```
print("Accuracy score {:.2f} %\n".format(model.score(X_test,y_test)*100))
```
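Under the hood, `score` (and `r2_score` below) computes R² = 1 - SS_res / SS_tot, which can be sketched directly on small example arrays:

```python
import numpy as np

# R^2 = 1 - SS_res / SS_tot, the quantity returned by model.score
y_true = np.array([3.0, 5.0, 7.0])
y_hat = np.array([2.5, 5.0, 7.5])
ss_res = np.sum((y_true - y_hat) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
print(1 - ss_res / ss_tot)  # 0.9375
```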
> **r2_score**: The **r2_score** function computes the coefficient of determination R², the proportion of the variance in the target that is explained by the model.
> **mae**: The **mean absolute error** function calculates the average absolute distance between the real data and the predicted data.
> **mse**: The **mean squared error** function averages the squared errors, penalizing the model more heavily for large errors.
```
y_pred=model.predict(X_test)
print("R2 Score: {:.2f} %".format(r2_score(y_test,y_pred)*100))
print("Mean Absolute Error {:.2f}".format(mean_absolute_error(y_test,y_pred)))
print("Mean Squared Error {:.2f}".format(mean_squared_error(y_test,y_pred)))
```
#### Feature Importances
The Feature importance refers to techniques that assign a score to features based on how useful they are for making the prediction.
```
plt.figure(figsize=(8,6))
n_features = len(X.columns)
plt.barh(range(n_features), model.feature_importances_, align='center')
plt.yticks(np.arange(n_features), X.columns)
plt.xlabel("Feature importance")
plt.ylabel("Feature")
plt.ylim(-1, n_features)
```
#### Prediction Plot
First, we plot the first 20 actual test observations, with the record number on the x-axis and the target value on the y-axis. We then overlay the model's predictions for the same records.
```
plt.figure(figsize=(14,10))
plt.plot(range(20),y_test[0:20], color = "green")
plt.plot(range(20),model.predict(X_test[0:20]), color = "red")
plt.legend(["Actual","prediction"])
plt.title("Predicted vs True Value")
plt.xlabel("Record number")
plt.ylabel(target)
plt.show()
```
#### Creator: Thilakraj Devadiga , Github: [Profile](https://github.com/Thilakraj1998)
| github_jupyter |
# Building an even Simpler GAN
```
import numpy as np
from math import *
import theano
import theano.tensor as T
import lasagne
import matplotlib.pyplot as plt
%matplotlib inline
```
## Define the GAN (Generative Adversarial Network)
```
# Defining the GAN
def iGaussian(x):
mx = T.minimum(T.maximum(x,-5),5)
return 0.001 + 0.998*T.exp(-mx**2) # Some margins on the gaussian to prevent nans
latent = T.matrix()
invar = T.matrix()
targ = T.ivector() # Targ == 0 is fake
input = lasagne.layers.InputLayer((None, 1), input_var = latent)
gen_out = lasagne.layers.DenseLayer(input, num_units = 1, nonlinearity = None)
dinput = lasagne.layers.InputLayer((None, 1), input_var = invar)
disc_out = lasagne.layers.DenseLayer(dinput, num_units = 1, nonlinearity = iGaussian)
dout = lasagne.layers.get_output(disc_out)
dloss = T.mean(-(1-targ)*T.log(1-dout[:,0]) - targ*T.log(dout[:,0]))
dgradW = theano.grad(dloss, disc_out.W)
dgradB = theano.grad(dloss, disc_out.b)
process_disc = theano.function([invar, targ], [dout, dloss, dgradW, dgradB], allow_input_downcast = True)
disc_out.input_layer = gen_out
gout, dout2 = lasagne.layers.get_output([gen_out,disc_out])
gloss = T.mean(T.log(1-dout2[:,0]))
ggradW = theano.grad(gloss, gen_out.W)
ggradB = theano.grad(gloss, gen_out.b)
process_gen = theano.function([latent], [gout, gloss, ggradW, ggradB], allow_input_downcast = True)
```
## Generate data set and evaluate gradients
**Change NSAMPLES to alter amount of data**
**Change NOISESAMPLES to look at parameter noise**
**Change PARAMNOISE to change the magnitude of parameter noise**
```
NSAMPLES = 2500
NOISESAMPLES = 1
PARAMNOISE = 0
Z = np.random.randn(NSAMPLES,1)
y = np.random.randn(NSAMPLES,1)
def prepDiscData():
y2 = process_gen(Z)[0]
#data = np.vstack([y,y2])
data = y
targ = np.zeros(data.shape[0])
targ[0:y.shape[0]] = 1
return data,targ
def evalNetwork(x):
dx = np.zeros(4)
l = np.zeros(2)
    # Set up to apply noise and measure gradients in the bias terms. x[0] and x[1] are the weight matrices
for i in range(NOISESAMPLES):
disc_out.W.set_value(np.array([[x[0]]],dtype=np.float32))
gen_out.W.set_value(np.array([[x[1]]],dtype=np.float32))
disc_out.b.set_value(np.array([x[2]+np.random.randn()*PARAMNOISE],dtype=np.float32))
gen_out.b.set_value(np.array([x[3]+np.random.randn()*PARAMNOISE],dtype=np.float32))
d,t = prepDiscData()
do,dl,dgw,dgb = process_disc(d,t)
go,gl,ggw,ggb = process_gen(Z)
l += np.array([dl,gl])
dx[0] += np.array(dgw)[0,0]
dx[1] += np.array(ggw)[0,0]
dx[2] += np.array(dgb)[0]
dx[3] += np.array(ggb)[0]
return x,l/float(NOISESAMPLES),dx/float(NOISESAMPLES)
```
## Generate the mesh of gradients to make a flow diagram
```
xx,yy = np.meshgrid(np.arange(-5.0,5.0,0.1), np.arange(-5.0,5.0,0.1))
xx = xx.ravel()
yy = yy.ravel()
dloss = np.zeros(xx.shape[0])
gloss = np.zeros(yy.shape[0])
ddb = np.zeros(xx.shape[0])
dgb = np.zeros(xx.shape[0])
for i in range(xx.shape[0]):
x,l,dx = evalNetwork(np.array([1,1,xx[i],yy[i]]))
dloss[i] = l[0]
gloss[i] = l[1]
ddb[i] = dx[2] # This is set to look at the bias gradients, but use dx[0] and dx[1] for the weight gradients
dgb[i] = dx[3] # ...
R = int(sqrt(xx.shape[0]))
xx = xx.reshape((R,R))
yy = yy.reshape((R,R))
dloss = dloss.reshape((R,R))
gloss = gloss.reshape((R,R))
ddb = ddb.reshape((R,R))
dgb = dgb.reshape((R,R))
plt.streamplot(xx,yy,-ddb,-dgb, density=2)
plt.gcf().set_size_inches((8,8))
plt.show()
# Plot the magnitude of the gradient
plt.contourf(xx, yy, np.sqrt(dgb**2 + ddb**2))
plt.colorbar()
plt.gcf().set_size_inches((8, 8))
plt.show()
```
| github_jupyter |
## https://github.com/timestocome
## The ergodic hypothesis is a key analytical device of equilibrium statistical mechanics. It underlies the assumption that the time average and the expectation value of an observable are the same.
### This 'fix' for economic theory changes everything from gambles to Ponzi schemes
https://phys-org.cdn.ampproject.org/v/s/phys.org/news/2019-12-economic-theory-gambles-ponzi-schemes.amp?usqp=mq331AQCKAE%3D&_js_v=0.1#referrer=https%3A%2F%2Fwww.google.com&_tf=From%20%251%24s&share=https%3A%2F%2Fphys.org%2Fnews%2F2019-12-economic-theory-gambles-ponzi-schemes.html
### Paper, open access
https://www.nature.com/articles/s41567-019-0732-0
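The distinction the paper draws shows up directly in this gamble: the ensemble (expectation) view of a single round is positive, but repeated play compounds multiplicatively, so the time-average growth rate is governed by the geometric mean of the multipliers, which here is negative. A minimal sketch:

```python
import numpy as np

# One round: a win multiplies savings by 1.5, a loss by 0.6, each with probability 1/2
win_mult, loss_mult = 1.5, 0.6

# Ensemble view: the expected single-round multiplier
ensemble_avg = 0.5 * win_mult + 0.5 * loss_mult
print(ensemble_avg)  # 1.05 -> looks like a 5% edge

# Time-average view: over many rounds, growth per round is the geometric mean
time_avg_growth = np.sqrt(win_mult * loss_mult)
print(time_avg_growth)  # ~0.9487 -> savings shrink ~5% per round in the long run

assert ensemble_avg > 1 > time_avg_growth
```

This is why the simulations below peak early and then decay toward zero even though each individual trade has positive expected value.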
```
import numpy as np
import matplotlib.pyplot as plt
# win is + 50%, loss is -40%
# if 50/50 then should gain 5% from investments
# $50 * 1.5 + $50 * .6 = $105.
# trade it all every time
def win(c): return c * 1.5
def loss(c): return c * .6
```
## run one simulation: begin with $10k in savings and make 1000 trades
```
# run 1 simulation
start_cash = 10000
savings = start_cash
n_trades = 1000
history = []
for i in range(n_trades):
if np.random.randint(100) % 2 == 0:
savings = savings * 1.5
else:
savings = savings * .6
history.append(savings)
history = np.asarray(history)
time_to_max = np.argmax(history)
time_to_min = 0
max_value = np.max(history)
for i in range(len(history)):
if history[i] > 1: time_to_min += 1
else: break
print( 'Starting Savings $%d \nTime to max %d trades, value %.0f, Time to $1 %d trades' % (start_cash, time_to_max, max_value, time_to_min))
# plot
plt.figure(figsize=(20, 10))
plt.plot(history)
plt.show()
```
## try 10 runs, maybe that one was just a fluke
```
savings = start_cash
n_simulations = 10
n_trades = 1000
history = []
simulations = []
for i in range(n_simulations):
savings = start_cash
history = []
for j in range(n_trades):
if np.random.randint(100) % 2 == 0:
savings = savings * 1.5
else:
savings = savings * .6
history.append(savings)
simulations.append(history)
print(len(simulations))
print(len(simulations[0]), len(simulations[4]))
# quit while you're ahead
time_to_max = []
time_to_min = []
best = []
for i in range(n_simulations):
history = np.asarray(simulations[i])
time_to_bottom = 0
time_to_max.append(np.argmax(history))
best.append(np.max(history))
for j in range(len(history)):
if history[j] > 1: time_to_bottom += 1
time_to_min.append(time_to_bottom)
avg_best = sum(best)/len(best)
avg_time_to_max = sum(time_to_max)/len(time_to_max)
avg_time_to_min = sum(time_to_min)/len(time_to_min)
best_best = max(best)
print('Avg peak $%.0f, best peak $%0.f\nAvg time to peak %d trades, Avg time to $1 %d trades' %(avg_best, best_best, avg_time_to_max, avg_time_to_min))
# plot
colors = plt.cm.rainbow(np.linspace(0, 1, n_simulations))
plt.figure(figsize=(20, 10))
plt.ylim(0, avg_best)
for i in range(n_simulations):
plt.plot(simulations[i], c=colors[i])
plt.show()
```
## repeat the baseline: gain 50%, lose 40%, 50/50
```
# change the odds....
# run 1 simulation
start_cash = 10000
savings = start_cash
n_trades = 1000
history = []
for i in range(n_trades):
if np.random.randint(100) % 2 == 0:
savings = savings * 1.5
else:
savings = savings * .6
history.append(savings)
history = np.asarray(history)
time_to_max = np.argmax(history)
time_to_min = 0
max_value = np.max(history)
for i in range(len(history)):
if history[i] > 1: time_to_min += 1
else: break
print('Gain 50%, lose 40% odds 50/50')
print( 'Starting Savings $%d \nTime to max %d trades, value %.0f, Time to $1 %d trades' % (start_cash, time_to_max, max_value, time_to_min))
# plot
plt.figure(figsize=(20, 10))
plt.plot(history)
plt.show()
```
## Gain 60%, lose 40% 50/50
```
# run 1 simulation
start_cash = 10000
savings = start_cash
n_trades = 1000
history = []
for i in range(n_trades):
if np.random.randint(100) % 2 == 0:
savings = savings * 1.6
else:
savings = savings * .6
history.append(savings)
history = np.asarray(history)
time_to_max = np.argmax(history)
time_to_min = 0
max_value = np.max(history)
for i in range(len(history)):
if history[i] > 1: time_to_min += 1
else: break
print('Gain 60%, lose 40% odds 50/50')
print( 'Starting Savings $%d \nTime to max %d trades, value %.0f, Time to $1 %d trades' % (start_cash, time_to_max, max_value, time_to_min))
# plot
plt.figure(figsize=(20, 10))
plt.plot(history)
plt.show()
```
## Gain 70% lose 40%, 50/50
```
# run 1 simulation
start_cash = 10000
savings = start_cash
n_trades = 1000
history = []
for i in range(n_trades):
if np.random.randint(100) % 2 == 0:
savings = savings * 1.7
else:
savings = savings * .6
history.append(savings)
history = np.asarray(history)
time_to_max = np.argmax(history)
time_to_min = 0
max_value = np.max(history)
for i in range(len(history)):
if history[i] > 1: time_to_min += 1
else: break
print('Gain 70%, lose 40% odds 50/50')
print( 'Starting Savings $%d \nTime to max %d trades, value %.0f, Time to $1 %d trades' % (start_cash, time_to_max, max_value, time_to_min))
# plot
plt.figure(figsize=(20, 10))
plt.plot(history)
plt.show()
```
## Gain 50%, lose 30%, 50/50
```
# run 1 simulation
start_cash = 10000
savings = start_cash
n_trades = 1000
history = []
for i in range(n_trades):
if np.random.randint(100) % 2 == 0:
savings = savings * 1.5
else:
savings = savings * .7
history.append(savings)
history = np.asarray(history)
time_to_max = np.argmax(history)
time_to_min = 0
max_value = np.max(history)
for i in range(len(history)):
if history[i] > 1: time_to_min += 1
else: break
print('Gain 50%, lose 30% odds 50/50')
print( 'Starting Savings $%d \nTime to max %d trades, value %.0f, Time to $1 %d trades' % (start_cash, time_to_max, max_value, time_to_min))
# plot
plt.figure(figsize=(20, 10))
plt.plot(history)
plt.show()
```
## looks like cutting losses matters more than gains
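The per-round geometric growth rates of the four parameter sets tried above back this observation up: trimming the loss from 40% to 30% helps long-run growth more than raising the gain from 50% all the way to 70%.

```python
import numpy as np

# (win multiplier, loss multiplier) for each scenario above, each at 50/50 odds
scenarios = {
    "gain 50%, lose 40%": (1.5, 0.6),
    "gain 60%, lose 40%": (1.6, 0.6),
    "gain 70%, lose 40%": (1.7, 0.6),
    "gain 50%, lose 30%": (1.5, 0.7),
}

# Long-run growth per trade is the geometric mean of the two multipliers
growth = {name: np.sqrt(w * l) for name, (w, l) in scenarios.items()}
for name, g in growth.items():
    print(f"{name}: geometric growth per trade = {g:.4f}")

# Cutting losses to 30% beats even a 70% gain paired with 40% losses
assert growth["gain 50%, lose 30%"] > growth["gain 70%, lose 40%"]
```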
| github_jupyter |
# Hybrid System
In this work, the content-based recommender is combined with the SVD++ rating predictor to give high rating recommendations based on the particular user.
```
import pandas as pd
import numpy as np
import joblib
from surprise import Reader, Dataset, SVDpp
from surprise.model_selection import KFold
from surprise import accuracy
```
## Load Similarity Matrix Created by Content-Based Recommender System
```
with open("hybrid/svdpp.joblib", "rb") as f:
svd = joblib.load(f)
dists = np.load("contentSim/metaSim.npy")
md = pd.read_csv("contentSim/metaFeatures.csv")
md.head()
```
## Load Rating Data
```
ratings = pd.read_csv("../data/movies/ratings_small.csv")
reader = Reader(rating_scale=(1, 5))
data = Dataset.load_from_df(ratings[["userId", "movieId", "rating"]], reader)
```
## Train a Rating Predictor Using SVD++
```
svd = SVDpp(n_factors=10, n_epochs=20, verbose=True)
kf = KFold(n_splits=3)
for k, (trainSet, validSet) in enumerate(kf.split(data)):
print(f"\nFold {k+1}")
svd.fit(trainSet)
accuracy.mse(svd.test(validSet))
```
## Train the Entire Dataset and Save Model
```
trainSet = data.build_full_trainset()
svd.fit(trainSet)
with open("hybrid/svdpp.joblib", "wb") as f:
joblib.dump(svd, f)
```
## Link `tmdbId` with `movieId`
The rating predictor was trained using `movieId` as the item identifier; the similarity matrix, however, is indexed by the row order of the metadata dataframe, so we need to link the two.
```
links = pd.read_csv("../data/movies/links_small.csv")[["movieId", "tmdbId"]]
links = links[links["tmdbId"].notnull()]
links["movieId"] = links["movieId"].astype("int")
links["tmdbId"] = links["tmdbId"].astype("int")
# Rename the columns
links.columns = ["movieId", "id"]
# Merge using `id`, so we have integrated `movieId` column to the metadata
md = md.merge(links[["movieId", "id"]], on="id")
md.head()
```
## Make Recommendations
First, we find the movies most similar to the given `title`. For these candidate movies and the particular `userId`, we predict ratings, sort them in descending order, and choose the top-k. For `userId=1` and `userId=100` we get different rankings, i.e. personalized recommendations!
```
def getHybridTopkRecommendations(userId, title, metadataDf, similarities, simTopk=25, topk=5):
idx = metadataDf.index[metadataDf["title"] == title].tolist()
if len(idx) == 0:
raise ValueError("Title not found!")
# Choose 1st item and its similarity arr
idx = idx[0]
sim = similarities[idx]
# Set similarity of the given title to the minimum
sim[idx] = sim.min()
# Desc sort
indices = np.argpartition(-sim, 1+simTopk)[1:1+simTopk]
movies = metadataDf.iloc[indices, :][["id", "title", "movieId"]]
# Predict ratings and desc sort
movies["est_rating"] = movies["movieId"].apply(lambda x: svd.predict(userId, x).est)
movies = movies.sort_values("est_rating", ascending=False)
movies = movies.iloc[:topk, :]
movieRatingPair = dict(zip(movies["title"], movies["est_rating"]))
print(f"\nUser {userId} liked {title}. He/She may like...")
for i, (title, rating) in enumerate(movieRatingPair.items()):
print(f"Top{i+1}: {title} {rating:.3f}")
return
userId=1
getHybridTopkRecommendations(userId, "The Shawshank Redemption", md, dists)
getHybridTopkRecommendations(userId, "The Truman Show", md, dists)
getHybridTopkRecommendations(userId, "The Godfather", md, dists)
userId=100
getHybridTopkRecommendations(userId, "The Shawshank Redemption", md, dists)
getHybridTopkRecommendations(userId, "The Truman Show", md, dists)
getHybridTopkRecommendations(userId, "The Godfather", md, dists)
```
| github_jupyter |
# Basic Python Semantics: Operators
In the previous section, we began to look at the semantics of Python variables and objects; here we'll dig into the semantics of the various *operators* included in the language.
By the end of this section, you'll have the basic tools to begin comparing and operating on data in Python.
## Arithmetic Operations
Python implements seven basic binary arithmetic operators, two of which can double as unary operators.
They are summarized in the following table:
| Operator | Name | Description |
|--------------|----------------|--------------------------------------------------------|
| ``a + b`` | Addition | Sum of ``a`` and ``b`` |
| ``a - b`` | Subtraction | Difference of ``a`` and ``b`` |
| ``a * b`` | Multiplication | Product of ``a`` and ``b`` |
| ``a / b`` | True division | Quotient of ``a`` and ``b`` |
| ``a // b`` | Floor division | Quotient of ``a`` and ``b``, removing fractional parts |
| ``a % b`` | Modulus | Integer remainder after division of ``a`` by ``b`` |
| ``a ** b`` | Exponentiation | ``a`` raised to the power of ``b`` |
| ``-a`` | Negation | The negative of ``a`` |
| ``+a`` | Unary plus | ``a`` unchanged (rarely used) |
These operators can be used and combined in intuitive ways, using standard parentheses to group operations.
For example:
```
# addition, subtraction, multiplication
(4 + 8) * (6.5 - 3)
```
Floor division is true division with fractional parts truncated:
```
# True division
print(11 / 2)
# Floor division
print(11 // 2)
```
The floor division operator was added in Python 3; if you are working in Python 2, be aware that the standard division operator (``/``) acts like floor division for integers and like true division for floating-point numbers. This is a holdover from early C language standards.
## Assignment Operations
We've seen that variables can be assigned with the "``=``" operator, and the values stored for later use. For example:
```
a = 24
print(a)
```
We can use these variables in expressions with any of the operators mentioned earlier.
For example, to add 2 to ``a`` we write:
```
a + 2
```
We might want to update the variable ``a`` with this new value; in this case, we could combine the addition and the assignment and write ``a = a + 2``.
Because this type of combined operation and assignment is so common, Python includes built-in update operators for all of the arithmetic operations:
```
a += 2 # equivalent to a = a + 2
print(a)
```
There are **augmented assignment operators** corresponding to each of the binary operators listed earlier; for example, ``a += b`` is equivalent to ``a = a + b``. These allow for more compact code for common operations.
For mutable objects like lists, arrays, or DataFrames, these augmented assignment operations are actually *subtly different* than their more verbose counterparts: they modify the contents of the original object rather than creating a new object to store the result.
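For example, with lists the two forms behave differently: ``+=`` mutates the existing list in place, while ``a = a + b`` builds a brand-new list and rebinds the name ``a`` to it.

```python
a = [1, 2]
alias = a          # a second name for the same list object
a += [3]           # in-place: extends the existing list
print(alias)       # [1, 2, 3] -- the alias sees the change

a = [1, 2]
alias = a
a = a + [3]        # creates a new list and rebinds a
print(alias)       # [1, 2] -- the alias still points at the old list
print(a)           # [1, 2, 3]
```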
## Comparison Operations
Another type of operation which can be very useful is comparison of different values.
For this, Python implements standard comparison operators, which return Boolean values ``True`` and ``False``.
The comparison operations are listed in the following table:
| Operation | Description | | Operation | Description |
|---------------|-----------------------------------|-|---------------|--------------------------------------|
| ``a == b`` | ``a`` equal to ``b`` | | ``a != b`` | ``a`` not equal to ``b`` |
| ``a < b`` | ``a`` less than ``b`` | | ``a > b`` | ``a`` greater than ``b`` |
| ``a <= b`` | ``a`` less than or equal to ``b`` | | ``a >= b`` | ``a`` greater than or equal to ``b`` |
These comparison operators can be combined with other operators to express a virtually limitless range of tests on numbers.
For example, we can check if a number is odd by checking that the modulus with 2 returns 1:
```
# is 25 odd?
25 % 2 == 1
# is 66 odd?
66 % 2 == 1
```
We can string together multiple comparisons to check more complicated relationships:
```
# check if a is between 15 and 30
a = 25
15 < a < 30
```
## Boolean Operations
When working with Boolean values, Python provides operators to combine the values using the standard concepts of "and", "or", and "not".
Conveniently, these operators are the actual words ``and``, ``or``, and ``not``:
```
x = 4
(x < 6) and (x > 2)
(x > 10) or (x % 2 == 0)
not (x < 6)
```
Boolean algebra aficionados might notice that the XOR operator is not included; this can of course be constructed in several ways from a compound statement of the other operators.
Otherwise, a clever trick you can use for XOR of Boolean values is the following:
```
# (x > 1) xor (x < 10)
(x > 1) != (x < 10)
```
These sorts of Boolean operations will become extremely useful when we begin discussing **control flow statements** such as conditionals and loops.
## Identity and Membership Operators
Like ``and``, ``or``, and ``not``, Python also contains prose-like operators to check for identity and membership.
They are the following:
| Operator | Description |
|---------------|---------------------------------------------------|
| ``a is b`` | True if ``a`` and ``b`` are identical objects |
| ``a is not b``| True if ``a`` and ``b`` are not identical objects |
| ``a in b`` | True if ``a`` is a member of ``b`` |
| ``a not in b``| True if ``a`` is not a member of ``b`` |
### Identity Operators: "``is``" and "``is not``"
The identity operators, "``is``" and "``is not``" check for *object identity*.
Object identity is different than equality, as we can see here:
```
a = [1, 2, 3]
b = [1, 2, 3]
a == b
a is b
a is not b
```
What do identical objects look like? Here is an example:
```
a = [1, 2, 3]
b = a
a is b
```
The difference between the two cases here is that in the first, ``a`` and ``b`` point to *different objects*, while in the second they point to the *same object*.
As we saw in the previous section, Python variables are pointers. The "``is``" operator checks whether the two variables are pointing to the same container (object), rather than referring to what the container contains.
* With this in mind, in most cases when a beginner is tempted to use "``is``" what they really mean is ``==``.
### Membership operators
Membership operators check for membership within compound objects.
So, for example, we can write:
```
1 in [1, 2, 3]
2 not in [1, 2, 3]
```
These membership operations are an example of what makes Python so easy to use compared to lower-level languages such as C.
In C, membership would generally be determined by manually constructing a loop over the list and checking for equality of each value.
In Python, you just type what you want to know, in a manner reminiscent of straightforward English prose.
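For comparison, the manual loop that ``in`` replaces might look like this, written here in Python itself:

```python
def contains(sequence, value):
    """C-style membership test: loop over the sequence and compare each element."""
    for item in sequence:
        if item == value:
            return True
    return False

# Behaves just like the built-in operator
print(contains([1, 2, 3], 2))      # True
print(contains([1, 2, 3], 5))      # False
print(2 in [1, 2, 3])              # True
```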
| github_jupyter |
# Double 7's (Short Term Trading Strategies that Work)
1. The SPY is above its 200-day (or X-day) moving average.
2. If the SPY closes at an X-day low, buy.
3. If the SPY closes at an X-day high, sell your long position.
Optimize: period, sma, stop loss percent, margin.
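Before running the optimizer, the three rules can be sketched directly in pandas. This is a simplified illustration on a synthetic close-price series, independent of the `strategy` module used below (the function name and parameters here are made up, not pinkfish's API):

```python
import pandas as pd

def double_seven_signals(close: pd.Series, period: int = 7, sma: int = 200) -> pd.DataFrame:
    """Return buy/sell flags for the Double 7's rules on a close-price series."""
    regime = close > close.rolling(sma).mean()                 # rule 1: above the moving average
    buy = regime & (close <= close.rolling(period).min())      # rule 2: close at an X-day low
    sell = close >= close.rolling(period).max()                # rule 3: close at an X-day high
    return pd.DataFrame({"buy": buy, "sell": sell})

# Tiny synthetic example: an uptrend with a pullback, using short windows
prices = pd.Series([5, 6, 7, 8, 9, 10, 11, 10, 9, 10, 11, 12], dtype=float)
signals = double_seven_signals(prices, period=3, sma=8)
print(signals)  # the pullback low (index 8) triggers a buy; new highs trigger sells
```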
```
import datetime
import matplotlib.pyplot as plt
import pandas as pd
from talib.abstract import *
import pinkfish as pf
import strategy
# Format price data
pd.options.display.float_format = '{:0.2f}'.format
%matplotlib inline
# Set size of inline plots
'''note: rcParams can't be in same cell as import matplotlib
or %matplotlib inline
%matplotlib notebook: will lead to interactive plots embedded within
the notebook, you can zoom and resize the figure
%matplotlib inline: only draw static images in the notebook
'''
plt.rcParams["figure.figsize"] = (10, 7)
```
Some global data
```
#symbol = '^GSPC'
symbol = 'SPY'
#symbol = 'DIA'
#symbol = 'QQQ'
#symbol = 'IWM'
#symbol = 'TLT'
#symbol = 'GLD'
#symbol = 'AAPL'
#symbol = 'BBRY'
#symbol = 'GDX'
capital = 10000
start = datetime.datetime(1900, 1, 1)
#start = datetime.datetime(*pf.SP500_BEGIN)
end = datetime.datetime.now()
#end = datetime.datetime(2019, 1, 1)
```
Define Optimizations
```
# Pick one
optimize_period = False
optimize_sma = True
optimize_pct = False
# Define high low trade periods ranges
if optimize_period:
Xs = range(2, 15, 1)
Xs = [str(X) for X in Xs]
# Define SMAs ranges
elif optimize_sma:
Xs = range(20, 210, 10)
Xs = [str(X) for X in Xs]
# Define stop loss percentage ranges
elif optimize_pct:
Xs = range(5, 31, 1)
Xs = [str(X) for X in Xs]
options = {
'use_adj' : False,
'use_cache' : True,
'stop_loss_pct' : 1.0,
'margin' : 1.0,
'period' : 7,
'sma' : 200,
'use_regime_filter' : True
}
```
Run Strategy
```
strategies = pd.Series(dtype=object)
for X in Xs:
print(X, end=" ")
if optimize_period:
options['period'] = int(X)
elif optimize_sma:
options['sma'] = int(X)
elif optimize_pct:
options['stop_loss_pct'] = int(X)/100
strategies[X] = strategy.Strategy(symbol, capital, start, end, options)
strategies[X].run()
```
Summarize results
```
metrics = ('annual_return_rate',
'max_closed_out_drawdown',
'annualized_return_over_max_drawdown',
'drawdown_recovery_period',
'expected_shortfall',
'best_month',
'worst_month',
'sharpe_ratio',
'sortino_ratio',
'monthly_std',
'pct_time_in_market',
'total_num_trades',
'pct_profitable_trades',
'avg_points')
df = pf.optimizer_summary(strategies, metrics)
df
```
Bar graphs
```
pf.optimizer_plot_bar_graph(df, 'annual_return_rate')
pf.optimizer_plot_bar_graph(df, 'sharpe_ratio')
pf.optimizer_plot_bar_graph(df, 'max_closed_out_drawdown')
```
Run Benchmark
```
s = strategies[Xs[0]]
benchmark = pf.Benchmark(symbol, capital, s.start, s.end)
benchmark.run()
```
Equity curve
```
if optimize_period: Y = '7'
elif optimize_sma: Y = '70'
elif optimize_pct: Y = '15'
pf.plot_equity_curve(strategies[Y].dbal, benchmark=benchmark.dbal)
```
Compare parameter values.
```
labels = []
for strategy in strategies:
if optimize_period:
label = strategy.options['period']
elif optimize_sma:
label = strategy.options['sma']
elif optimize_pct:
label = strategy.options['stop_loss_pct']
labels.append(label)
pf.plot_equity_curves(strategies[:], labels)
```
Compare optimization with baseline vanilla values.
```
index = None
if optimize_period:
index = ['6', '7']
elif optimize_sma:
index = ['70', '200']
elif optimize_pct:
index = ['1.0', '0.15']
pf.plot_equity_curves(strategies[index], labels=index)
```
| github_jupyter |
```
# Import all the necessary files!
import os
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import Model
# Download the inception v3 weights
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5 \
-O /tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
# Import the inception model
from tensorflow.keras.applications.inception_v3 import InceptionV3
# Create an instance of the inception model from the local pre-trained weights
local_weights_file = '/tmp/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5'
pre_trained_model = InceptionV3(input_shape = (150, 150, 3),
include_top = False,
weights = None)
pre_trained_model.load_weights(local_weights_file)
# Make all the layers in the pre-trained model non-trainable
for layer in pre_trained_model.layers:
layer.trainable = False
# Print the model summary
pre_trained_model.summary()
# Expected Output is extremely large, but should end with:
#batch_normalization_v1_281 (Bat (None, 3, 3, 192) 576 conv2d_281[0][0]
#__________________________________________________________________________________________________
#activation_273 (Activation) (None, 3, 3, 320) 0 batch_normalization_v1_273[0][0]
#__________________________________________________________________________________________________
#mixed9_1 (Concatenate) (None, 3, 3, 768) 0 activation_275[0][0]
# activation_276[0][0]
#__________________________________________________________________________________________________
#concatenate_5 (Concatenate) (None, 3, 3, 768) 0 activation_279[0][0]
# activation_280[0][0]
#__________________________________________________________________________________________________
#activation_281 (Activation) (None, 3, 3, 192) 0 batch_normalization_v1_281[0][0]
#__________________________________________________________________________________________________
#mixed10 (Concatenate) (None, 3, 3, 2048) 0 activation_273[0][0]
# mixed9_1[0][0]
# concatenate_5[0][0]
# activation_281[0][0]
#==================================================================================================
#Total params: 21,802,784
#Trainable params: 0
#Non-trainable params: 21,802,784
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
# Expected Output:
# ('last layer output shape: ', (None, 7, 7, 768))
# Define a Callback class that stops training once accuracy reaches 99.9%
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if logs.get('accuracy', 0) > 0.999:
print("\nReached 99.9% accuracy so cancelling training!")
self.model.stop_training = True
from tensorflow.keras.optimizers import RMSprop
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a dropout rate of 0.2
x = layers.Dropout(0.2)(x)
# Add a final sigmoid layer for classification
x = layers.Dense (1, activation='sigmoid')(x)
model = Model( pre_trained_model.input, x)
model.compile(optimizer = RMSprop(learning_rate=0.0001),
loss = 'binary_crossentropy',
metrics = ['accuracy'])
model.summary()
# Expected output will be large. Last few lines should be:
# mixed7 (Concatenate) (None, 7, 7, 768) 0 activation_248[0][0]
# activation_251[0][0]
# activation_256[0][0]
# activation_257[0][0]
# __________________________________________________________________________________________________
# flatten_4 (Flatten) (None, 37632) 0 mixed7[0][0]
# __________________________________________________________________________________________________
# dense_8 (Dense) (None, 1024) 38536192 flatten_4[0][0]
# __________________________________________________________________________________________________
# dropout_4 (Dropout) (None, 1024) 0 dense_8[0][0]
# __________________________________________________________________________________________________
# dense_9 (Dense) (None, 1) 1025 dropout_4[0][0]
# ==================================================================================================
# Total params: 47,512,481
# Trainable params: 38,537,217
# Non-trainable params: 8,975,264
# Get the Horse or Human dataset
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip -O /tmp/horse-or-human.zip
# Get the Horse or Human Validation dataset
!wget --no-check-certificate https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip -O /tmp/validation-horse-or-human.zip
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import zipfile
local_zip = '//tmp/horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/training')
zip_ref.close()
local_zip = '//tmp/validation-horse-or-human.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp/validation')
zip_ref.close()
# Define the directories (needed before listing their contents)
train_dir = '/tmp/training'
validation_dir = '/tmp/validation'
train_horses_dir = os.path.join(train_dir, 'horses') # Directory with our training horse pictures
train_humans_dir = os.path.join(train_dir, 'humans') # Directory with our training human pictures
validation_horses_dir = os.path.join(validation_dir, 'horses') # Directory with our validation horse pictures
validation_humans_dir = os.path.join(validation_dir, 'humans') # Directory with our validation human pictures
train_horses_fnames = os.listdir(train_horses_dir)
train_humans_fnames = os.listdir(train_humans_dir)
validation_horses_fnames = os.listdir(validation_horses_dir)
validation_humans_fnames = os.listdir(validation_humans_dir)
print(len(train_horses_fnames))
print(len(train_humans_fnames))
print(len(validation_horses_fnames))
print(len(validation_humans_fnames))
# Expected Output:
# 500
# 527
# 128
# 128
# Define our example directories and files
train_dir = '/tmp/training'
validation_dir = '/tmp/validation'
# Add our data-augmentation parameters to ImageDataGenerator
train_datagen = ImageDataGenerator(rescale = 1./255.,
rotation_range = 40,
width_shift_range = 0.2,
height_shift_range = 0.2,
shear_range = 0.2,
zoom_range = 0.2,
horizontal_flip = True)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator( rescale = 1.0/255. )
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(train_dir,
batch_size = 20,
class_mode = 'binary',
target_size = (150, 150))
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory( validation_dir,
batch_size = 20,
class_mode = 'binary',
target_size = (150, 150))
# Expected Output:
# Found 1027 images belonging to 2 classes.
# Found 256 images belonging to 2 classes.
# Run this and see how many epochs it should take before the callback
# fires, and stops training at 99.9% accuracy
# (It should take less than 100 epochs)
callbacks = myCallback()
history = model.fit(
train_generator,
validation_data = validation_generator,
steps_per_epoch = 50,
epochs = 100,
validation_steps = 13,
verbose = 2,
callbacks=[callbacks])
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
```
| github_jupyter |
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
from torch.utils.data import Dataset, DataLoader
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark= False
m = 500 # 5, 50, 100, 500, 2000
train_size = 100 # 100, 500, 2000, 10000
desired_num = train_size + 1000
tr_i = 0
tr_j = train_size
tr_k = desired_num
tr_i, tr_j, tr_k
```
# Generate dataset
```
np.random.seed(12)
y = np.random.randint(0,3,500)
idx= []
for i in range(3):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((500,))
np.random.seed(12)
x[idx[0]] = np.random.uniform(low =-1,high =0,size= sum(idx[0]))
x[idx[1]] = np.random.uniform(low =0,high =1,size= sum(idx[1]))
x[idx[2]] = np.random.uniform(low =2,high =3,size= sum(idx[2]))
x[idx[0]][0], x[idx[2]][5]
print(x.shape,y.shape)
idx= []
for i in range(3):
idx.append(y==i)
for i in range(3):
y= np.zeros(x[idx[i]].shape[0])
plt.scatter(x[idx[i]],y,label="class_"+str(i))
plt.legend()
bg_idx = [ np.where(idx[2] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
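# Standalone illustration of the background-referenced standardisation above:
# the data are centred and scaled with the mean/std of the background class
# only (made-up toy values, not the dataset itself).
import numpy as np
_x_demo = np.array([0.0, 1.0, 2.0, 4.0])
_bg_demo = np.array([2, 3])  # indices of the background samples
_x_demo = _x_demo - _x_demo[_bg_demo].mean()
_x_demo = _x_demo / _x_demo[_bg_demo].std()
# the background samples now have zero mean and unit std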
for i in range(3):
y= np.zeros(x[idx[i]].shape[0])
plt.scatter(x[idx[i]],y,label="class_"+str(i))
plt.legend()
foreground_classes = {'class_0','class_1' }
background_classes = {'class_2'}
fg_class = np.random.randint(0,2)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3) # always 2, since class_2 is the only background class
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
a.shape
np.reshape(a,(m,1))
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,2)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(2,3)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(np.reshape(a,(m,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
mosaic_list_of_images.shape
mosaic_list_of_images.shape, mosaic_list_of_images[0]
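# Shape sanity check (standalone, toy sizes): concatenating n column vectors
# of shape (m, 1) along axis=1 and transposing gives an (n, m) array, i.e.
# one mosaic per row, which is the layout built above.
import numpy as np
_cols = [np.zeros((4, 1)) for _ in range(3)]
assert np.concatenate(_cols, axis=1).T.shape == (3, 4)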
for j in range(m):
print(mosaic_list_of_images[0][j])
def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number, m):
"""
mosaic_dataset : each data point is a mosaic of m values (here 1-D points rather than 9 images of 32 x 32)
labels : mosaic_dataset labels
foreground_index : list of the index at which the foreground point sits in each mosaic, used for the weighted average
dataset_number : sets the ratio of the foreground point in the average; for "j" the ratios are fg_ratio = j/m and bg_ratio = (m-j)/((m-1)*m)
"""
avg_image_dataset = []
cnt = 0
counter = np.zeros(m)
for i in range(len(mosaic_dataset)):
img = torch.zeros([1], dtype=torch.float64)
np.random.seed(int(dataset_number*10000 + i))
give_pref = foreground_index[i] #np.random.randint(0,9)
# print("outside", give_pref,foreground_index[i])
for j in range(m):
if j == give_pref:
img = img + mosaic_dataset[i][j]*dataset_number/m #2 is data dim
else :
img = img + mosaic_dataset[i][j]*(m-dataset_number)/((m-1)*m)
if give_pref == foreground_index[i] :
# print("equal are", give_pref,foreground_index[i])
cnt += 1
counter[give_pref] += 1
else :
counter[give_pref] += 1
avg_image_dataset.append(img)
print("number of correct averaging happened for dataset "+str(dataset_number)+" is "+str(cnt))
print("the averaging are done as ", counter)
return avg_image_dataset , labels , foreground_index
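# Sanity check on the weighting used above (illustrative, standalone): the
# foreground point gets weight dataset_number/m and each of the (m-1)
# background points gets (m-dataset_number)/((m-1)*m); the m weights always
# sum to 1, and for dataset_number=1 the average is uniform (all weights 1/m).
m_demo, d_demo = 500, 1
w_fg = d_demo / m_demo
w_bg = (m_demo - d_demo) / ((m_demo - 1) * m_demo)
assert abs(w_fg + (m_demo - 1) * w_bg - 1.0) < 1e-12
assert abs(w_fg - 1 / m_demo) < 1e-12 and abs(w_bg - 1 / m_demo) < 1e-12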
avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:tr_j], mosaic_label[0:tr_j], fore_idx[0:tr_j] , 1, m)
test_dataset , labels , fg_index = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[tr_j : tr_k], mosaic_label[tr_j : tr_k], fore_idx[tr_j : tr_k] , m, m)
avg_image_dataset_1 = torch.stack(avg_image_dataset_1, axis = 0)
# mean = torch.mean(avg_image_dataset_1, keepdims= True, axis = 0)
# std = torch.std(avg_image_dataset_1, keepdims= True, axis = 0)
# avg_image_dataset_1 = (avg_image_dataset_1 - mean) / std
# print(torch.mean(avg_image_dataset_1, keepdims= True, axis = 0))
# print(torch.std(avg_image_dataset_1, keepdims= True, axis = 0))
# print("=="*40)
test_dataset = torch.stack(test_dataset, axis = 0)
# mean = torch.mean(test_dataset, keepdims= True, axis = 0)
# std = torch.std(test_dataset, keepdims= True, axis = 0)
# test_dataset = (test_dataset - mean) / std
# print(torch.mean(test_dataset, keepdims= True, axis = 0))
# print(torch.std(test_dataset, keepdims= True, axis = 0))
# print("=="*40)
x1 = (avg_image_dataset_1).numpy()
y1 = np.array(labels_1)
# idx1 = []
# for i in range(3):
# idx1.append(y1 == i)
# for i in range(3):
# z = np.zeros(x1[idx1[i]].shape[0])
# plt.scatter(x1[idx1[i]],z,label="class_"+str(i))
# plt.legend()
plt.scatter(x1[y1==0], y1[y1==0]*0, label='class 0')
plt.scatter(x1[y1==1], y1[y1==1]*0, label='class 1')
# plt.scatter(x1[y1==2], y1[y1==2]*0, label='class 2')
plt.legend()
plt.title("dataset1 CIN with alpha = 1/"+str(m))
x1 = (avg_image_dataset_1).numpy()
y1 = np.array(labels_1)
idx_1 = y1==0
idx_2 = np.where(idx_1==True)[0]
idx_3 = np.where(idx_1==False)[0]
color = ['#1F77B4','orange', 'brown']
true_point = len(idx_2)
plt.scatter(x1[idx_2[:25]], y1[idx_2[:25]]*0, label='class 0', c= color[0], marker='o')
plt.scatter(x1[idx_3[:25]], y1[idx_3[:25]]*0, label='class 1', c= color[1], marker='o')
plt.scatter(x1[idx_3[50:75]], y1[idx_3[50:75]]*0, c= color[1], marker='o')
plt.scatter(x1[idx_2[50:75]], y1[idx_2[50:75]]*0, c= color[0], marker='o')
plt.legend()
plt.xticks( fontsize=14, fontweight = 'bold')
plt.yticks( fontsize=14, fontweight = 'bold')
plt.xlabel("X", fontsize=14, fontweight = 'bold')
# plt.savefig(fp_cin+"ds1_alpha_04.png", bbox_inches="tight")
# plt.savefig(fp_cin+"ds1_alpha_04.pdf", bbox_inches="tight")
avg_image_dataset_1[0:10]
x1 = (test_dataset).numpy()/m
y1 = np.array(labels)
# idx1 = []
# for i in range(3):
# idx1.append(y1 == i)
# for i in range(3):
# z = np.zeros(x1[idx1[i]].shape[0])
# plt.scatter(x1[idx1[i]],z,label="class_"+str(i))
# plt.legend()
plt.scatter(x1[y1==0], y1[y1==0]*0, label='class 0')
plt.scatter(x1[y1==1], y1[y1==1]*0, label='class 1')
# plt.scatter(x1[y1==2], y1[y1==2]*0, label='class 2')
plt.legend()
plt.title("test dataset1 ")
test_dataset.numpy()[0:10]/m
test_dataset = test_dataset/m
test_dataset.numpy()[0:10]
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
avg_image_dataset_1[0].shape, avg_image_dataset_1[0]
batch = 200
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(test_dataset, labels )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(1,50)
self.linear2 = nn.Linear(50,2)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
torch.nn.init.xavier_normal_(self.linear2.weight)
torch.nn.init.zeros_(self.linear2.bias)
def forward(self,x):
x = F.relu(self.linear1(x))
x = (self.linear2(x))
return x
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = model(inputs)
loss = criter(outputs, labels)
r_loss += loss.item()
return r_loss/(i+1)
def test_all(number, testloader,net):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
pred = np.concatenate(pred, axis = 0)
out = np.concatenate(out, axis = 0)
print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred) )
print("correct: ", correct, "total ", total)
print('Accuracy of the network on the %d test dataset %d: %.2f %%' % (total, number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list, lr_list):
final_loss = []
for LR in lr_list:
print("--"*20, "Learning Rate used is", LR)
torch.manual_seed(12)
net = Whatnet().double()
net = net.to("cuda")
criterion_net = nn.CrossEntropyLoss()
optimizer_net = optim.Adam(net.parameters(), lr = LR ) #, momentum=0.9)
acti = []
loss_curi = []
epochs = 1500
running_loss = calculate_loss(trainloader,net,criterion_net)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_net.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion_net(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_net.step()
running_loss = calculate_loss(trainloader,net,criterion_net)
if(epoch%200 == 0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %.2f %%' % (total, 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
final_loss.append(loss_curi)
return final_loss
train_loss_all=[]
testloader_list= [ testloader_1 ]
lr_list = [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1, 0.5 ]
fin_loss = train_all(trainloader_1, 1, testloader_list, lr_list)
train_loss_all.append(fin_loss)
%matplotlib inline
len(fin_loss)
for i,j in enumerate(fin_loss):
plt.plot(j,label ="LR = "+str(lr_list[i]))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
| github_jupyter |
```
import os
import sys
import uuid
import cv2
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 300
import glob
import json
import requests
import copy
from time import sleep
import pyperclip
k="/opt/share/nginx/upload/1fa348d3-5607-4f58-9c34-a94cd1c928e8.jpg"
page_path = '/'.join(k.split('/')[-4:])
page_path
os.environ['GOOGLE_APPLICATION_CREDENTIALS']='/home/naresh/Documents/anuvaad-f7a059c268e4_new.json'
nb_dir = '/'.join(os.getcwd().split('/')[:-1])
sys.path.append(nb_dir)
sys.path.append(os.path.split(nb_dir)[0])
import config
import src.utilities.app_context as app_context
app_context.init()
from src.services.main import TesseractOCR
class Draw:
def __init__(self,input_json,save_dir,regions,prefix='',color= (255,0,0),thickness=5):
self.json = input_json
self.save_dir = save_dir
self.regions = regions
self.prefix = prefix
self.color = color
self.thickness=thickness
#self.draw_region__sub_children()
self.draw_region_children()
#self.draw_region()
def get_coords(self,page_index):
return self.json['rsp']['outputs'][0]['pages'][page_index][self.regions]
#return self.json['outputs'][0]['pages'][page_index][self.regions]
def get_page_count(self):
return(self.json['rsp']['outputs'][0]['page_info'])
#return(self.json['outputs'][0]['page_info'])
def get_page(self,page_index):
page_path = self.json['rsp']['outputs'][0]['page_info'][page_index]
page_path = page_path.split('upload')[1] #'/'.join(page_path.split('/')[1:])
print(page_path)
return download_file(download_url,headers,page_path,f_type='image')
def draw_region_children(self):
for page_index in range(len(self.get_page_count())) :
nparr = np.frombuffer(self.get_page(page_index), np.uint8)  # np.fromstring is deprecated for binary input
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 2
# Blue color in BGR
color = (0 ,255,0)
# Line thickness of 2 px
thickness = 3
# Using cv2.putText() method
for region_index,region in enumerate(self.get_coords(page_index)) :
try:
ground = region['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
#print(pts)
region_color = (0,0,255)
cv2.polylines(image, [np.array(pts)],True, region_color, self.thickness)
cv2.putText(image, str(region['class']), (pts[0][0]+40,pts[0][1]+40), font,
fontScale, color, thickness, cv2.LINE_AA)
for line_index, line in enumerate(region['regions']):
ground = line['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
line_color = (255 ,0,0)
cv2.polylines(image, [np.array(pts)],True, line_color, self.thickness -2)
cv2.putText(image, str(line_index), (pts[0][0],pts[0][1]), font,
fontScale, color, thickness, cv2.LINE_AA)
except Exception as e:
print(str(e))
print(region)
image_path = os.path.join(self.save_dir , '{}_{}.png'.format(self.regions,page_index))
cv2.imwrite(image_path , image)
def draw_region__sub_children(self):
for page_index in range(len(self.get_page_count())) :
nparr = np.frombuffer(self.get_page(page_index), np.uint8)  # np.fromstring is deprecated for binary input
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
#image = cv2.imread("/home/naresh/anuvaad/anuvaad-etl/anuvaad-extractor/document-processor/ocr/ocr-gv-server/upload/test_vision/images/0568ed39-a598-4d90-a5a7-e176fcec1ae1.jpg")
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 2
# Blue color in BGR
color = (0 ,255,0)
# Line thickness of 2 px
thickness = 3
# Using cv2.putText() method
for region_index,region in enumerate(self.get_coords(page_index)) :
try:
ground = region['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
#print(pts)
region_color = (0,0,255)
cv2.polylines(image, [np.array(pts)],True, region_color, self.thickness)
#cv2.putText(image, str(region['class']), (pts[0][0],pts[0][1]), font,
#fontScale, (255,0,0), thickness, cv2.LINE_AA)
for line_index, line in enumerate(region['regions']):
ground = line['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y']) ])
line_color = (255,0,0)
# if str(line['class'])=='CELL_TEXT':
cv2.polylines(image, [np.array(pts)],True, line_color, self.thickness -2)
# cv2.putText(image, str(line['class']), (pts[0][0],pts[0][1]), font,
# fontScale, (255,0,0), thickness, cv2.LINE_AA)
for word_index, word in enumerate(line['regions']):
ground = word['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
word_color = (0,255,0)
cv2.polylines(image, [np.array(pts)],True, word_color, self.thickness -2)
#v2.putText(image, str(word_index), (pts[0][0],pts[0][1]), font,
#fontScale-1,(0,255,0), thickness, cv2.LINE_AA)
except Exception as e:
print(str(e))
print(region)
image_path = os.path.join(self.save_dir , '{}_{}_{}.png'.format(self.prefix,self.regions,page_index))
print(image_path)
#print(image)
cv2.imwrite(image_path , image)
def draw_region(self):
for page_index in range(len(self.get_page_count())) :
nparr = np.frombuffer(self.get_page(page_index), np.uint8)  # np.fromstring is deprecated for binary input
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
for region in self.get_coords(page_index) :
ground = region['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
cv2.polylines(image, [np.array(pts)],True, self.color, self.thickness)
image_path = os.path.join(self.save_dir , '{}_{}.png'.format(self.regions,page_index))
cv2.imwrite(image_path , image)
def download_file(download_url,headers,outputfile,f_type='json'):
download_url =download_url+str(outputfile)
res = requests.get(download_url,headers=headers)
if f_type == 'json':
return res.json()
else :
return res.content
download_url ="https://auth.anuvaad.org/download/"
upload_url = 'https://auth.anuvaad.org/anuvaad-api/file-uploader/v0/upload-file'
headers = {
'auth-token' : 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyTmFtZSI6ImRoaXJhai5kYWdhQHRhcmVudG8uY29tIiwicGFzc3dvcmQiOiJiJyQyYiQxMiREbUo2QkhyLllNL1NBWjJoUklQWVAuRGVMQkRWY3JGdnRvWm01MUVscExzRk1GRnJETHpMdSciLCJleHAiOjE2MTI5NTI4Nzh9.-qFs0A2mRPWT_mNDysUgRilHHhj_L4pyBEoTH8742zs'}
def draw_region(page_path,corrds,color= (255,0,0),thickness=5, save=False):
if type(page_path) == str :
image = cv2.imread(page_path)
else :
image = page_path
for region in corrds :
ground = region['boundingBox']['vertices']
#start_point = (ground[0]['x'],ground[0]['y'])
#end_point = (ground[2]['x'], ground[2]['y'])
#cv2.rectangle(image, start_point, end_point, color,thickness)
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
cv2.polylines(image, [np.array(pts)],True, color, thickness)
plt.imshow(image)
if save:
cv2.imwrite(str(uuid.uuid1()) + '.png' , image)
#return image
def draw_region_children(page_path,corrds,color= (255,0,0),thickness=5, save=False):
if type(page_path) == str :
image = cv2.imread(page_path)
else :
image = page_path
for region_index, region in enumerate(corrds) :
try:
ground = region['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
#print(pts)
region_color = (0 ,0,125+ 130*(region_index/ len(corrds)))
cv2.polylines(image, [np.array(pts)],True, region_color, thickness)
for line_index, line in enumerate(region['children']):
ground = line['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
line_color = (125 + 130*(region_index/ len(corrds)) ,0,0)
cv2.polylines(image, [np.array(pts)],True, line_color, thickness -2)
except Exception as e:
print(str(e))
print(region)
plt.imshow(image)
if save:
cv2.imwrite(str(uuid.uuid1()) + '.png' , image)
#return image
#base_dir = '/home/dhiraj/Documents/Anuwad/anuvaad/anuvaad-etl/anuvaad-extractor/block-merger/src/notebooks/sample-data/input'
base_dir ='/home/naresh/anuvaad/anuvaad-etl/anuvaad-extractor/document-processor/ocr/ocr-tesseract-server/upload'
#filename = 'ncert.pdf'
#filename = '0-16080245837039561.json'
#filename = '0-16067318061936076.json'
#filename = '37429_ld.json'
#filename = '20695_ld.json'
#config.BASE_DIR = base_dir
filename = 's1_en.json'
file_format = 'PDF'
language = 'hi'
def get_app_context(filename):
app_context.application_context = { "input":{
"inputs": [
{
"file": {
"identifier": "string",
"name": filename,
"type": "json"
},
"config": {
"OCR": {
"option": "HIGH_ACCURACY",
"language": "en",
"craft_line": "False",
"craft_word": "False",
}
}
}
],
"outputs": [
{
"file": {
"identifier": "string",
"name": filename,
"type": "json"
},
"config": {
"OCR": {
"option": "HIGH_ACCURACY",
"language": "en",
"craft_line": "False"
}
}
}
]}
,
"jobID": "BM-15913540488115873",
"state": "INITIATED",
"status": "STARTED",
"stepOrder": 0,
"workflowCode": "abc",
"taskID":"aaabbbba",
"tool": "BM",
"message":"layout",
"metadata": {
"module": "WORKFLOW-MANAGER",
"receivedAt": 15993163946431696,
"sessionID": "4M1qOZj53tIZsCoLNzP0oP",
"userID": "d4e0b570-b72a-44e5-9110-5fdd54370a9d"
}
}
return app_context
resp = TesseractOCR(get_app_context(filename),base_dir)
resp
resp
def save_json(path,res):
with open(path, "w", encoding='utf8') as write_file:
json.dump(res, write_file,ensure_ascii=False )
save_json("/home/naresh/table_gv2.json",resp)
pyperclip.copy(json.dumps(resp))
base_dir = '/home/dhiraj/Documents/Anuwad/testing/Word_detector/output/test_block_segmenter/json'
json_names = [ j.split('/')[-1] for j in glob.glob(base_dir + '/*.json')]
res_list = []
for json_name in json_names:
res_list.append( get_segmented_regions(get_app_context(json_name),base_dir))
#output_idr= '/home/dhiraj/Documents/Anuwad/testing/Word_detector/output/test_block_segmenter/images'
output_idr= '/home/naresh/Tarento/testing_document_processor/test_google_vision/'
# for index, res in enumerate([resp]):
# Draw(res,output_idr,regions='regions',prefix=str(res[index].split('.')[0]))
#load json
# path ="/home/naresh/dynamic_crop/tamil_good_40_no_topcorrection_eval_pipeline_2.json"
# with open(path,'r') as f:
# resp = json.load(f)
for index, res in enumerate([resp]):
Draw(res,output_idr,regions='regions')
out_path= '/home/naresh/Tarento/testing_document_processor/result/odia_3_singlecolumn'
#Draw(resp,out_path,regions='regions')
import json
json_path = '/home/naresh/anuvaad/anuvaad-etl/anuvaad-extractor/document-processor/ocr/ocr-tesseract-server/upload/hin_dc.json'
dump_path ='/home/naresh/anuvaad/anuvaad-etl/anuvaad-extractor/document-processor/ocr/ocr-tesseract-server/upload/9e360df1-76ab-47e8-abb1-9402f767c441_be0a3c41-411a-4236-b71f-1a02577f3bfd/images/'
with open(json_path,'r') as j_file:
j_data = json.load(j_file)
for page in j_data['outputs'][0]['page_info']:
page_path = '/'.join(page.split('/')[-3:])
print(page_path)
image_bin = download_file(download_url,headers,page_path,f_type='image')
#print(image_bin)
save_path = base_dir +"/" + page_path
#save_path = "/home/naresh/anuvaad/anuvaad-etl/anuvaad-extractor/document-processor/ocr/ocr-gv-server/upload/test_vision/images/0568ed39-a598-4d90-a5a7-e176fcec1ae1.jpg"
f = open(save_path, 'w+b')
f.write(image_bin)
f.close()
p1 = ((483, 1124), (620, 1154), (614, 1182), (477, 1152), (483, 1124))
p2 = ((493, 1161), (657, 1159), (662, 1158), (499, 1164), (493, 1161))
from shapely.geometry import Polygon
from rtree import index
def get_polygon(region):
points = []
vertices = region['vertices']
for point in vertices:
points.append((point['x'], point['y']))
poly = Polygon(points)
return poly
path ="/home/naresh/table_gv.json"
with open(path,'r+') as f:
resp = json.load(f)
resp['rsp']['outputs'][0]['pages'][0]['regions'][14]['regions'][5]['regions']
k=[1,"2",3,4,7]
import pandas as pd
x = pd.DataFrame(k)
from PIL import Image
import pytesseract
from pytesseract import Output
import sys
from pdf2image import convert_from_path
import os
import cv2
filename = "/home/naresh/line_crop_tamil/a94ff748-5be1-4b51-9db7-c417ad41f8e8_2_18.tif"
img = cv2.imread(filename)
#text,temp_dict1 = get_document_bounds(img)
#text = pytesseract.image_to_data(img, lang='hind',config="--psm 10 --oem 3 \
#-c tessedit_char_whitelist=0123456789.,)(|/।;:-@#$%&`!?-_'' ") ### hin_v50.731_45301
#text = pytesseract.image_to_data(img, lang='Devanagari', config='--psm 8',output_type=Output.DATAFRAME)
temp_df = pytesseract.image_to_data(Image.open(filename),config='--psm 7',lang='Devanagari',output_type=Output.DATAFRAME)
#temp_df = pytesseract.image_to_string(cv2.imread(filename),lang='hin_v50.731_45301')
temp_df = temp_df[temp_df.text.notnull()]
temp_df = temp_df.reset_index()
print(temp_df)
temp_df["text"] = temp_df.text.astype(str)
txt = pytesseract.image_to_string(cv2.imread(filename),config='--psm 7',lang='Devanagari')
print(txt)
#txt = '6'
temp_df["text"][0] = txt
k = temp_df.text[0]
print(k)
temp_df['text'][0]
isinstance(temp_df, str)
#to do in 1.5 also
# 1. add remove overalp logic
# 2. text removal part from image should be off
# 3. update dynamic margin logic
# 4.
k=1
if k==0 or \
k==1:
print("x")
```
| github_jupyter |
```
import xarray as xr
import pandas as pd
import numpy as np
import seaborn as sns
import sys
sys.path.append('./..')
from refuelplot import *
setup()
sns.set_style("darkgrid")
from paths_zaf import *
def read_ZAFprod():
'''
function for reading production data from csv
replace month names and convert to datetime format
'''
files = ['/Hourly_Distribution_data_NZ.csv',
'/Hourly_Electricity_Production_[Load_Factor_[[%]]_data_EasternCape.csv',
'/Hourly_Electricity_Production_[Load_Factor_[[%]]_data_NorthernCape.csv',
'/Hourly_Electricity_Production_[Load_Factor_[[%]]_data_WesternCape.csv']
regions = ['ZAF','Eastern Cape','Northern Cape','Western Cape']
months = {' Januar ':'01.',
' Februar ':'02.',
' März ':'03.',
' April ':'04.',
' Mai ':'05.',
' Juni ':'06.',
' Juli ':'07.',
' August ':'08.',
' September ':'09.',
' Oktober ':'10.',
' November ':'11.',
' Dezember ':'12.'}
starts = [None,'2014-06-15','2017-10-10',None] # cut bad starts of time series
prod_ZAF = []
for (file,region,start) in zip(files,regions,starts):
ZAF = pd.read_csv(zaf_path + file,sep=';',parse_dates=[1])
ZAF.columns = ['year','date','technology','CF']
ZAF = ZAF[ZAF.technology=='Onshore Wind\r\n'].drop(['technology','year'],axis=1)
for month in months:
ZAF.date = ZAF.date.str.replace(month,months.get(month)) # replace months by numbers
ZAFdf = pd.Series(ZAF.CF.values,
index=pd.to_datetime(ZAF.date).values,
name=region).sort_index()
if(start != None):
ZAFdf = ZAFdf[start:]
prod_ZAF.append(ZAFdf)
return(pd.concat(prod_ZAF,axis=1).tz_localize('Africa/Johannesburg'))
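# Mini-demo (standalone, made-up sample strings) of the month-name
# replacement above: German month names are swapped for zero-padded numbers
# so the dates can then be parsed by pandas.
import pandas as pd
_months_demo = {' Januar ': '01.', ' Dezember ': '12.'}
_dates = pd.Series(['15. Januar 2014', '31. Dezember 2019'])
for _mn in _months_demo:
    _dates = _dates.str.replace(_mn, _months_demo[_mn], regex=False)
# _dates is now ['15.01.2014', '31.12.2019'], parseable with dayfirst=True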
def load_results(dataset,gwa,regions):
'''
function for loading simulation results
dataset is either MERRA2 or ERA5
gwa is none, GWA2 or GWA3
'''
if gwa == 'GWA2':
rpath = results_path + '/results_GWA2/'
else:
rpath = results_path + '/'
if gwa == 'none':
file = 'windpower_ZAF_'+dataset+'.nc'
else:
file = 'windpower_ZAF_'+dataset+'_GWA.nc'
ZAF = xr.open_dataset(rpath + file).wp.to_dataframe().reset_index().set_index(['time','location']).unstack()
# adapt datetime index of MERRA data (some hours are shifted by 30 min)
if dataset == 'MERRA2':
ZAF.index.values[ZAF.index.minute!=0] = ZAF.index.values[ZAF.index.minute!=0] - np.timedelta64(30,'m')
# sum up per region
ZAF = ZAF.groupby(regions,axis=1).sum(axis=1).tz_localize('UTC')
ZAF['ZAF'] = ZAF.sum(axis=1)
return(ZAF)
def get_cap_df(cap,comdate):
'''
function for getting hourly capacities
cap is numpy array of installed capacities
comdate is numpy array of commissioning dates in datetime format
'''
com = pd.DataFrame({'capacity': cap}).groupby(comdate).sum()
cap_cum = com.capacity.cumsum()
# if only years given for commissioning dates -> gradual capacity increase over year, full capacity at end of year
dr = pd.date_range('1/1/2013','31/12/2019 23:00:00',freq = 'h')
cap_ts = pd.Series(dr.map(cap_cum),index = dr)
cap_ts[0] = cap_cum[cap_cum.index<=pd.Timestamp('2013-01-01')].max()
if type(comdate[0]) == np.int64:
return(cap_ts.interpolate(method='linear'))
else:
return(cap_ts.fillna(method='ffill'))
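# Standalone mini-example of the cumulative-capacity idea used by get_cap_df
# (illustrative values, not real park data): capacities are grouped by
# commissioning date, cumulatively summed, then forward-filled onto an
# hourly index, with zero capacity before the first commissioning.
import numpy as np
import pandas as pd
_cap = np.array([50.0, 100.0])
_com = pd.to_datetime(['2014-06-01', '2016-03-01'])
_cum = pd.Series(_cap, index=_com).groupby(level=0).sum().cumsum()
_dr = pd.date_range('2013-01-01', '2016-12-31 23:00', freq='h')
_cap_ts = pd.Series(_dr.map(_cum), index=_dr).ffill().fillna(0.0)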
def gcdH(region):
'''
function to get capacity time series for a region
'''
cap = windparks[windparks.Area==region].Capacity.values
com = windparks[windparks.Area==region].commissioning.astype(np.datetime64).values
cdf = get_cap_df(cap,com).tz_localize('UTC')
cdf.name = region
return(cdf)
def getH(region):
'''
analyse hourly wind power generation for a region
'''
comph = pd.DataFrame({'MERRA2':ZAFm[region],
'ERA5':ZAFe[region],
'MERRA2_GWA2':ZAFmg2[region],
'ERA5_GWA2':ZAFeg2[region],
'MERRA2_GWA3':ZAFmg3[region],
'ERA5_GWA3':ZAFeg3[region]})
# get capacities
caph = capdfH[region]
# calculate capacity factors
cfh = comph.div(caph,axis=0).tz_convert('Africa/Johannesburg')
# add observed data
cfh['obs'] = cfh.index.map(ZAFh[region])
# remove capacity factors > 1 and lines with missing data
cfh = cfh.mask(cfh>1).dropna()
return(cfh)
def getD(region):
'''
analyse daily wind power generation for a region
'''
# mask for masking simulated data and capacities
# (to only use timespans where also observed data are available)
mask = (ZAFh[region].notna()*capdfH[region].notna()).replace(0,np.nan)
comph = pd.DataFrame({'MERRA2':ZAFm[region].tz_convert('Africa/Johannesburg')*mask,
'ERA5':ZAFe[region].tz_convert('Africa/Johannesburg')*mask,
'MERRA2_GWA2':ZAFmg2[region].tz_convert('Africa/Johannesburg')*mask,
'ERA5_GWA2':ZAFeg2[region].tz_convert('Africa/Johannesburg')*mask,
'MERRA2_GWA3':ZAFmg3[region].tz_convert('Africa/Johannesburg')*mask,
'ERA5_GWA3':ZAFeg3[region].tz_convert('Africa/Johannesburg')*mask})
# get capacities and mask
caph = capdfH[region].tz_convert('Africa/Johannesburg')*mask
# aggregate daily
capd = caph.resample('D').sum()
compd = comph.resample('D').sum()
# calculate capacity factors
cfd = compd.div(capd,axis=0)
# add observed CFs
cfd['obs'] = cfd.index.map((ZAFh[region]*mask).resample('D').mean())
# remove capacity factors > 1 and missing data
cfd = cfd.mask(cfd>1).dropna()
return(cfd)
def getM(region):
'''
analyse monthly wind power generation for a region
'''
# mask for masking simulated data and capacities
# (to only use timespans where also observed data are available)
mask = (ZAFh[region].notna()*capdfH[region].notna()).replace(0,np.nan)
comph = pd.DataFrame({'MERRA2':ZAFm[region].tz_convert('Africa/Johannesburg')*mask,
'ERA5':ZAFe[region].tz_convert('Africa/Johannesburg')*mask,
'MERRA2_GWA2':ZAFmg2[region].tz_convert('Africa/Johannesburg')*mask,
'ERA5_GWA2':ZAFeg2[region].tz_convert('Africa/Johannesburg')*mask,
'MERRA2_GWA3':ZAFmg3[region].tz_convert('Africa/Johannesburg')*mask,
'ERA5_GWA3':ZAFeg3[region].tz_convert('Africa/Johannesburg')*mask})
# get capacities and mask
caph = capdfH[region].tz_convert('Africa/Johannesburg')*mask
# aggregate monthly
capm = caph.resample('M').sum()
compm = comph.resample('M').sum()
# calculate capacity factors
cfm = compm.div(capm,axis=0)
# add observed data
cfm['obs'] = cfm.index.map((ZAFh[region]*mask).resample('M').mean())
# remove capacity factors > 1 and missing data
cfm = cfm.mask(cfm>1).dropna()
return(cfm)
## Analysis
# load windpark data
print('prepare windparks')
windparks = pd.read_csv(zaf_path + "/windparks_ZAF.csv", parse_dates=['commissioning'])
# load observed data
print('load observed data')
ZAFh = read_ZAFprod()
# get capacitiy time series for all parks
print('get capacities')
capes = ['Eastern Cape','Western Cape','Northern Cape']
capdfCH = pd.Series(capes,index = capes).apply(gcdH).transpose()
capdfZH = get_cap_df(windparks.Capacity.values,windparks.commissioning.astype(np.datetime64).values).tz_localize('UTC')
capdfZH.name = 'ZAF'
capdfH = pd.concat([capdfCH,capdfZH],axis = 1)
# load simulated data
print('load simulated data')
ZAFm = load_results('MERRA2','none',windparks.Area.values)
ZAFmg2 = load_results('MERRA2','GWA2',windparks.Area.values)
ZAFmg3 = load_results('MERRA2','GWA3',windparks.Area.values)
ZAFe = load_results('ERA5','none',windparks.Area.values)
ZAFeg2 = load_results('ERA5','GWA2',windparks.Area.values)
ZAFeg3 = load_results('ERA5','GWA3',windparks.Area.values)
ZAFH = getH('ZAF')
ZAFM = getM('ZAF')
fig, axs = plt.subplots(1,2,figsize=(14,4))
fig.suptitle('ZAF monthly',y=1.001)
sims = [['obs','MERRA2','MERRA2_GWA2','MERRA2_GWA3'],['obs','ERA5','ERA5_GWA2','ERA5_GWA3']]
for i in range(len(sims)):
ZAFM[sims[i]].plot(ax = axs[i])
```
Why is GWA2 closer to ERA5 but GWA3 closer to MERRA2? Shouldn't it be the other way round? (GWA2 is based on MERRA2, GWA3 is based on ERA5.)
```
NCM = getM('Northern Cape')
fig, axs = plt.subplots(1,2,figsize=(14,4))
fig.suptitle('Northern Cape monthly',y=1.001)
sims = [['obs','MERRA2','MERRA2_GWA2','MERRA2_GWA3'],['obs','ERA5','ERA5_GWA2','ERA5_GWA3']]
for i in range(len(sims)):
NCM[sims[i]].plot(ax = axs[i])
ECM = getM('Eastern Cape')
fig, axs = plt.subplots(1,2,figsize=(14,4))
fig.suptitle('Eastern Cape monthly',y=1.001)
sims = [['obs','MERRA2','MERRA2_GWA2','MERRA2_GWA3'],['obs','ERA5','ERA5_GWA2','ERA5_GWA3']]
for i in range(len(sims)):
ECM[sims[i]].plot(ax = axs[i])
WCM = getM('Western Cape')
fig, axs = plt.subplots(1,2,figsize=(14,4))
fig.suptitle('Western Cape monthly',y=1.001)
sims = [['obs','MERRA2','MERRA2_GWA2','MERRA2_GWA3'],['obs','ERA5','ERA5_GWA2','ERA5_GWA3']]
for i in range(len(sims)):
WCM[sims[i]].plot(ax = axs[i])
# analyse results
# hourly
print('analyse hourly')
resZAFh = pd.concat(pd.Series(['ZAF']+capes).apply(analyse_ZAFh).to_list(),axis=0)
# daily
print('analyse daily')
resZAFd = pd.concat(pd.Series(np.unique(['ZAF']+capes)).apply(analyse_ZAFd).to_list(),axis=0)
# monthly
print('analyse monthly')
resZAFm = pd.concat(pd.Series(np.unique(['ZAF']+capes)).apply(analyse_ZAFm).to_list(),axis=0)
# tidy and merge results
print('tidy and merge results')
rZAFh = tidy_res(resZAFh,'h')
rZAFd = tidy_res(resZAFd,'d')
rZAFm = tidy_res(resZAFm,'m')
results = pd.concat([rZAFh,rZAFd,rZAFm],axis=0)
results['ds'] = results.dataset.str.split('_').apply(lambda x: x[0])
# append 'none' inside the lambda so rows without a GWA suffix still yield a value
results['GWA'] = results.dataset.str.split('_').apply(lambda x: (x + ['none'])[1])
# save results
print('save results')
results.to_csv(results_path + '/statZAF.csv')
```
| github_jupyter |
# MaterialsCoord benchmarking – ternary materials scores
Benchmark and plot the results of the near neighbor algorithms on ternary structures.
*Written using:*
- MaterialsCoord==0.2.0
*Authors: Hillary Pan, Alex Ganose (03/30/20)*
---
First, let's initialize the near neighbor methods we are interested in.
```
from pymatgen.analysis.local_env import BrunnerNN_reciprocal, EconNN, JmolNN, \
MinimumDistanceNN, MinimumOKeeffeNN, MinimumVIRENN, \
VoronoiNN, CrystalNN
nn_methods = [
MinimumDistanceNN(), MinimumOKeeffeNN(), MinimumVIRENN(), JmolNN(),
EconNN(tol=0.5), BrunnerNN_reciprocal(), VoronoiNN(tol=0.5), CrystalNN()
]
```
Next, import the benchmark and choose the ternary structure sets.
```
from materialscoord.core import Benchmark
structure_groups = ["A2BX4", "ABX3", "ABX4"]
bm = Benchmark.from_structure_group(structure_groups)
```
Calculate the benchmark scores for each algorithm for the cation sites.
```
cation_scores = bm.score(nn_methods, site_type="cation")
cation_scores
```
Plot the cation results.
```
from pathlib import Path
from materialscoord import structure_mapping
from materialscoord.plot import plot_benchmark_scores
nn_method_mapping = {"BrunnerNN_reciprocal": "BrunnerNN"}
plt = plot_benchmark_scores(
cation_scores,
structure_mapping=structure_mapping,
nn_method_mapping=nn_method_mapping
)
plt.savefig(Path("plots", "ternary-cation.pdf"), bbox_inches='tight')
plt.show()
```
Calculate the benchmark scores for each algorithm for the anion sites.
```
anion_scores = bm.score(nn_methods, site_type="anion")
anion_scores
```
Plot the anion results.
```
plt = plot_benchmark_scores(
anion_scores,
structure_mapping=structure_mapping,
nn_method_mapping=nn_method_mapping
)
plt.savefig(Path("plots", "ternary-anion.pdf"), bbox_inches='tight')
plt.show()
```
## Cation–Anion coordination only
The performance of several of the near neighbor methods is strongly affected if only cation to anion bonding is considered. The benchmarking object has an option (`cation_anion`) to limit bonds to those between sites of opposing charge. Here, we run the benchmark on ternary structures with `cation_anion=True` set.
```
cation_scores = bm.score(nn_methods, site_type="cation", cation_anion=True)
nn_method_mapping = {"BrunnerNN_reciprocal": "BrunnerNN"}
plt = plot_benchmark_scores(
cation_scores,
structure_mapping=structure_mapping,
nn_method_mapping=nn_method_mapping
)
plt.savefig(Path("plots", "ternary-cation-cation-anion.pdf"), bbox_inches='tight')
plt.show()
```
Do the same for anion scores.
```
anion_scores = bm.score(nn_methods, site_type="anion", cation_anion=True)
nn_method_mapping = {"BrunnerNN_reciprocal": "BrunnerNN"}
plt = plot_benchmark_scores(
anion_scores,
structure_mapping=structure_mapping,
nn_method_mapping=nn_method_mapping
)
plt.savefig(Path("plots", "ternary-anion-cation-anion.pdf"), bbox_inches='tight')
plt.show()
```
| github_jupyter |
```
from typing import List
import pandas as pd
%load_ext autoreload
%autoreload 2
# export
def parse_file(fname: str) -> pd.DataFrame:
with open(fname, "r") as f:
contents = f.readlines()
lines = []
line = ""
for c in contents:
line += c
if c == "\n":
line_dict = parse_string(line)
lines.append(line_dict)
line = ""
return pd.DataFrame(lines)
# export
def parse_string(line: str):
line_values = line.split("\n")
meta, uid, sentiment = line_values[0].split("\t")
words = []
for line_content in line_values[1:]:
words.append(line_content.split("\t")[0])
words = " ".join(words)
line_dict = {"uid": uid, "sentiment": sentiment, "text": words}
return line_dict
from pathlib import Path
datapath = Path("../data")
data_raw = datapath/"raw"
data_interim = datapath/"interim"
data_processed = datapath/"processed"
cleanlab_datapath = datapath/"cleanlab"
```
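As a quick sanity check, here is `parse_string` applied to a single synthetic record. The tab-separated header fields and the one-token-per-line body are assumptions about the CoNLL-style input; the helper is redefined here (with empty trailing lines filtered out) so the snippet runs on its own:

```python
# Standalone redefinition of parse_string for a quick check; empty lines
# are filtered so a trailing newline does not add a blank "word".
def parse_string(line: str):
    line_values = line.split("\n")
    meta, uid, sentiment = line_values[0].split("\t")
    words = [lv.split("\t")[0] for lv in line_values[1:] if lv]
    return {"uid": uid, "sentiment": sentiment, "text": " ".join(words)}

record = "meta_1\tuid_42\tpositive\nGreat\tO\nphone\tO\n"
print(parse_string(record))
# {'uid': 'uid_42', 'sentiment': 'positive', 'text': 'Great phone'}
```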
# Valid Data
```
valid_df = parse_file(data_raw/"dev_3k_split_conll.txt")
len(valid_df)
valid_df.head()
valid_df.to_json(data_interim/"valid.json", orient="records")
valid_df.describe()
```
# Trial Data
```
trial_df = parse_file(data_raw/"trial.txt"); len(trial_df)
trial_df.head()
trial_df.to_json(data_raw/"trial.json", orient="records")
```
## Train
### Previous was Trial Text
```
train_df = parse_file(data_raw/"train.txt"); len(train_df)
train_df.head()
train_df.describe()
train_df.to_json(data_raw/"train.json", orient="records")
```
# Create Train Large
```
import pandas as pd
trial = pd.read_json(data_raw/"trial.json")
train = pd.read_json(data_raw/"train.json")
trial.head(), train.head()
df = pd.concat([trial, train])
len(df)
df.to_json(data_interim/"train-large.json", orient="records")
```
# Test Data
```
# export
def parse_file_test(fname: str) -> pd.DataFrame:
with open(fname, "r") as f:
contents = f.readlines()
lines = []
line = ""
for c in contents:
line += c
if c == "\n":
line_dict = parse_string_test(line)
lines.append(line_dict)
line = ""
return pd.DataFrame(lines)
# export
def parse_string_test(line: str):
line_values = line.split("\n")
meta, uid = line_values[0].split("\t")
words = []
for line_content in line_values[1:]:
words.append(line_content.split("\t")[0])
words = " ".join(words)
line_dict = {"uid": uid, "text": words}
return line_dict
test_df = parse_file_test(data_raw/"test.txt")
len(test_df)
test_df.head()
test_df.to_json(data_interim/"final_test.json", orient="records")
```
| github_jupyter |
# Practical 2 - Loops and conditional statements
In today's practical we are going to continue practicing working with loops whilst also moving on to the use of conditional statements.
<div class="alert alert-block alert-success">
<b>Objectives:</b> The objectives of today's practical are:
- 1) [Loops: FOR loops continued](#Part1)
* [Exercise 1: Cycling through arrays and modifying values](#Exercise1)
- 2) [Conditional statements: IF, ELSE and ELIF](#Part2)
    * [Exercise 2: Modify a loop to implement one of three equations according to a condition being met](#Exercise2)
- 3) [Nested loops: Working with more than 1 dimension](#Part3)
* [Exercise 3: Print out the result from a nested loop according to a condition being met](#Exercise3)
* [Exercise 4: Print out which variables match a condition](#Exercise4)
* [Exercise 5: Repeat Bob Newby's code breaking nested loops to crack the code in the Hawkins lab](#Exercise5)
Please note that you should not feel pressured to complete every exercise in class. These practicals are designed for you to take outside of class and continue working on them. Proposed solutions to all exercises can be found in the 'Solutions' folder.
</div>
<div class="alert alert-block alert-warning">
<b>Please note:</b> After reading the instructions and aims of any exercise, search the code snippets for a note that reads ------'INSERT CODE HERE'------ to identify where you need to write your code
</div>
## 1) Loops: FOR loops continued <a name="Part1">
Let us jump straight into our first exercise, following on from the previous practical.
<div class="alert alert-block alert-success">
<b> Exercise 1: Cycling through arrays and modifying values. <a name="Exercise1"> </b> Create a loop that implements the function:
\begin{eqnarray}
Y = X^{2.8/X}
\end{eqnarray}
Where <code> x </code> is an array of 75 values from 11 to 85. Print the final 4 values of the <code> x </code> and <code> Y </code> array one-by-one. Your output should be:
```python
The 72nd element of x is 82
The 72nd element of y is 1.1623843156991178
The 73rd element of x is 83
The 73rd element of y is 1.1607534518329277
The 74th element of x is 84
The 74th element of y is 1.1591580160090038
The 75th element of x is 85
The 75th element of y is 1.157596831308393
```
</div>
```
# Initialise an empty list for both 'x' and 'y'
x = []
y = []
# Now loop through 75 values and append each list accordingly.
# One list contains values for 'x', the other for 'y'.
# Please note the operator ** is needed to raise one number to another [e.g 2**3]
#------'INSERT CODE HERE'------
for step in range(75):
# Append a value to our x array
x.append(10+(step+1))
# Append a value to our y array
y.append(x[step]**(2.8/x[step]))
#------------------------------
# Print the last four values from both our x and y arrays
print("The 72nd element of x is ",x[71])
print("The 72nd element of y is ",y[71])
print("The 73rd element of x is ",x[72])
print("The 73rd element of y is ",y[72])
print("The 74th element of x is ",x[73])
print("The 74th element of y is ",y[73])
print("The 75th element of x is ",x[74])
print("The 75th element of y is ",y[74])
```
## 2) Conditional statements: The IF, ELIF and ELSE statements <a name="Part2">
Once we have information stored in an array, or wish to generate information iteratively, we start to use a combination of **loops** and **conditional statements**. Conditional statements allow us to develop software that can be responsive to certain conditions. For example, in the following control flow diagram we define a set of instructions that initiate a variable and start a loop that adds 3 to this variable at each iteration. However, at each iteration we also check the value of said variable and, if it becomes equal to or greater than 30, we stop the program.

The following table lists the Python equivalent of common mathematical symbols to check numerical values.
| Meaning | Math Symbol | Python Symbols |
| --- | --- | --- |
| Less than | < | < |
| Greater than | > | > |
| Less than or equal | ≤ | <= |
| Greater than or equal | ≥ | >= |
| Equals | = | == |
| Not equal | ≠ | != |
<div class="alert alert-block alert-danger">
<b> Warning </b> The obvious choice for equals, a single equal sign, is not used to check for equality. It is a common error to use only one equal sign when you mean to test for equality, and not make an assignment!
</div>
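A short example of the symbols from the table, including the equality check with its two equal signs:

```python
a = 7
b = 7
print(a == b)  # True: equality uses two equal signs
print(a != b)  # False
print(a <= b)  # True
print(a < b)   # False
# Note: a = b would instead ASSIGN the value of b to a, not compare them
```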
How do we implement checks using these symbols? This is where we use the <code> IF </code>, <code> ELIF </code> and <code> ELSE </code> statements. Let us start with an example.
```python
# Initialise two variables with integer values
x=3
y=5
# Now use an IF statement to check the relative values of x and y, then act accordingly
if x > y:
print("X is greater than Y")
if x < y:
    print("X is less than Y")
# Output: X is less than Y
```
Once again, notice how we have introduced a statement that ends with a colon : and thus requires the next line to be indented. We also use specific symbols to check whether one value is greater than [>] or less than [<] another. Within each condition check, depending on which is true, we print a message to the screen.
Rather than use two <code> IF </code> statements, we could combine these checks using an <code> ELSE </code> statement as follows:
```python
# Initialise two variables with integer values
x=3
y=5
# Now use an IF statement to check the relative values of x and y, then act accordingly
if x > y:
print("X is greater than Y")
else:
    print("X is less than Y")
# Output: X is less than Y
```
There are a huge number of examples we could work on here, but to begin let us build on the first exercise. In the following code we again have two variables <code> x </code> and <code> y </code>. Each has 50 elements. Let us assume that we want to implement two functions: one that is used if our <code> x </code> value is *less than or equal* to 20, the other if <code> x </code> is *greater than* 20. We can use a combination of the <code> IF </code> and <code> ELSE </code> statements.
- If $X$ is *less than or equal* to 20, $ Y = \frac{X}{12.5} $
- Otherwise [else], $Y = X^{12.5} $
Let us see this implemented as code below. Read through the syntax and if you do not understand, please ask.
<div class="alert alert-block alert-danger">
<b>Indexing </b> Once again, notice how we have introduced a statement that ends with a colon : and thus requires the next line to be indented.
</div>
```
# Initialise an empty list for both 'x' and 'y'
x = []
y = []
for step in range(50):
# Append a value to our x array
x.append(step+1)
# Now add a conditional statement to check the value of x
# Notice our additional indentation
if x[step] <= 20:
# Append a value to our y array
y.append(x[step]/12.5)
else:
# Append a value to our y array
y.append(x[step]**12.5)
# Print the first and last four values from both our x and y arrays
# First four
print("The 1st element of x is ",x[0])
print("The 1st element of y is ",y[0])
print("The 2nd element of x is ",x[1])
print("The 2nd element of y is ",y[1])
print("The 3rd element of x is ",x[2])
print("The 3rd element of y is ",y[2])
print("The 4th element of x is ",x[3])
print("The 4th element of y is ",y[3])
# Last four
print("The 47th element of x is ",x[46])
print("The 47th element of y is ",y[46])
print("The 48th element of x is ",x[47])
print("The 48th element of y is ",y[47])
print("The 49th element of x is ",x[48])
print("The 49th element of y is ",y[48])
print("The 50th element of x is ",x[49])
print("The 50th element of y is ",y[49])
```
### The AND statement
Once we move beyond two mutually exclusive conditions, we can also use the <code> ELIF </code> statements. However we need to be careful that we are assigning correct boundaries on our conditions. For example, let us assume we have been tasked with creating an array <code> X </code> that contains values from 1 to 200 and we want to implement 3 equations according to the following rules:
- If X is less than 20, use: $ Y = \frac{X}{1.56} $
- If X is greater than or equal to 20, but less than 60 use: $ Y = X^{0.35} $
- If X is greater than or equal to 60 use: $ Y = 4.5*X $
Below we implement two different versions of a loop using the conditional statements introduced earlier:
```python
# Version 1
if x[step] < 20:
<<action>>
elif x[step] >= 20:
<<action>>
elif x[step] >= 60:
<<action>>
```
```python
# Version 2
if x[step] < 20:
<<action>>
elif x[step] >= 20 and x[step] < 60:
<<action>>
elif x[step] >= 60:
<<action>>
```
The first version will run, but it will produce incorrect results. Why is that? If you follow the code instructions, as the Python interpreter would, once <code> x[step] </code> is greater than or equal to 20 the second conditional statement will always be true. As a result, the interpreter never has to move on to the third. In the second version, however, the second conditional will no longer be true once <code> x[step] </code> is greater than or equal to 60.
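We can confirm this reasoning on a single value before plotting. For `x = 100`, the rules say Y should be 4.5 × 100 = 450, yet version 1 never reaches its third branch:

```python
x = 100

# Version 1: the unbounded second branch catches every value >= 20
if x < 20:
    y1 = x / 1.56
elif x >= 20:
    y1 = x ** 0.35
elif x >= 60:
    y1 = 4.5 * x

# Version 2: the second branch is bounded with 'and'
if x < 20:
    y2 = x / 1.56
elif x >= 20 and x < 60:
    y2 = x ** 0.35
elif x >= 60:
    y2 = 4.5 * x

print(y2)  # 450.0, as the rules require
print(y1)  # roughly 5.01: version 1 wrongly applied X**0.35
```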
Let us run both versions and plot the results so you can see the difference. In the following code I will create two <code> Y </code> arrays, one for each loop variant. A line plot will be produced where you should see a step change in values according to these rules. Do not worry about the syntax or module used to create the plot, we will visit this throughout the course.
```
# Initialise an empty list for both 'x' and 'y'
x = []
y_version1 = []
y_version2 = []
for step in range(200):
# Append a value to our x array
x.append(step+1)
# Version 1
if x[step] < 20:
# Append a value to our y array
y_version1.append(x[step]/1.56)
elif x[step] >= 20:
# Append a value to our y array
y_version1.append(x[step]**0.35)
elif x[step] >= 60:
y_version1.append(4.5*x[step])
# Version 2
if x[step] < 20:
# Append a value to our y array
y_version2.append(x[step]/1.56)
elif x[step] >= 20 and x[step] < 60:
# Append a value to our y array
y_version2.append(x[step]**0.35)
elif x[step] >= 60:
y_version2.append(4.5*x[step])
# Plot results
import matplotlib.pyplot as plt # Import Matplotlib so we can plot results
import numpy as np # The Numpy package - more soon!!
fig = plt.figure(figsize=(8,8))
ax = plt.axes()
ax.plot(np.array(x),np.log(y_version1),label='Version 1')
ax.plot(np.array(x),np.log(y_version2),label='Version 2')
ax.set_title('Y as a function of X')
ax.legend(loc='upper left')
ax.set_ylabel('Y')
ax.set_xlabel('X')
```
<div class="alert alert-block alert-success">
<b> Exercise 2: Modify a loop to implement one of three equations according to a condition being met <a name="Exercise2"> </b> In this case, let us assume an array X contains values from 1 to 1000 and we want to implement 3 equations according to the following rules:
- If X is less than or equal to 250, $ Y = X^{1.03} $
- If X is greater than 250, but less than 690 $ Y = X^{1.2} $
- If X is greater than or equal to 690, $ Y = X^{2.5} $
This is the first graph we have created. Don't worry about the syntax for now, we will produce graphs in every practical following this.
Your output should look like the following:

</div>
```
# Initialise an empty list for both 'x' and 'y'
x = []
y = []
for step in range(1000):
# Append a value to our x array
x.append(step+1)
#------'INSERT CODE HERE'------
# Now add a conditional statement to check the value of x
# Notice our additional indentation
if x[step] <= 250:
# Append a value to our y array
y.append(x[step]**1.03)
elif x[step] > 250 and x[step] < 690:
y.append(x[step]**1.2)
elif x[step] >= 690:
# Append a value to our y array
y.append(x[step]**2.5)
#------------------------------
# Print the first and last four values from both our x and y arrays
#Import plotting package
import matplotlib.pyplot as plt # Import Matplotlib so we can plot results
import numpy as np # The Numpy package - more soon!!
fig = plt.figure(figsize=(8,8))
ax = plt.axes()
ax.plot(np.array(x),np.log(y))
ax.set_title('Y as a function of X')
ax.set_ylabel('Y')
ax.set_xlabel('X')
```
## 3) Nested loops: Working with more than 1 dimension <a name="Part3">
In many applications we want to work with more than one variable at a time, often in a two [or more] dimensional setting. We can combine <code> FOR </code> loops on any number of levels. For example, take the following hypothetical example:
```python
for [first iterating variable] in [outer loop]: # Outer loop
[do something] # Optional
for [second iterating variable] in [nested loop]: # Nested loop
[do something]
```
Notice how we have our first, or 'outer', loop cycling through our first iterating variable. As we cycle through this variable, we then 'do something' as a direct consequence. However, directly following this action, we cycle through a second iterating variable as part of our 'nested loop'. In other words, we have a loop that is nested within our first, or outer, loop.
<div class="alert alert-block alert-danger">
<b>Indexing </b> Once again, notice how we have introduced a statement that ends with a colon : and thus requires the next line to be indented.
</div>
Let us run an example of cycling through a list of words. In this case we are not using the
```python
range()
```
function as we are not dealing with numeric examples or cycling through integers.
```
# Create two lists of words
list1 = ['Hello','Goodbye']
list2 = ['George','Frank','Susan','Sarah']
for word1 in list1:
for word2 in list2:
print(word1)
print(word2)
```
It turns out we can make the output easier to read by joining each pair of words together, with a space ' ' in between, as:
```
# Create two lists of words
list1 = ['Hello','Goodbye']
list2 = ['George','Frank','Susan','Sarah']
for word1 in list1:
for word2 in list2:
print(word1+' '+word2)
```
We will not be exploring the rich text processing power of Python in this course, but if you are interested there are some great examples to follow on the [internet](https://towardsdatascience.com/gentle-start-to-natural-language-processing-using-python-6e46c07addf3). The important lesson here is noticing how we deal with a nested loop. Please also note that we can use a <code> FOR </code> loop to iterate on members of any list, whether they are numeric or string values.
Again we can use conditional statements to modify our output. What if we only wanted to output entries involving Susan? We can add a conditional statement as follows:
```
# Create two lists of words
list1 = ['Hello','Goodbye']
list2 = ['George','Frank','Susan','Sarah']
for word1 in list1:
for word2 in list2:
if word2 == "Susan":
print(word1+' '+word2)
```
<div class="alert alert-block alert-success">
<b> Exercise 3: Print out the result from a nested loop according to a condition being met <a name="Exercise3"> </b>
In this exercise we have three lists with the following entries:
```python
list1 = ['Maths','Physics','Programming','Chemistry']
list2 = ['is','can be','is not']
list3 = ['enjoyable','awful!','ok, I guess','....']
```
Your task is to create a triple nested loop and only print out when the word in list1 is 'Physics' and the entry in list2 is 'can be'.
There are multiple ways to achieve this.
Your results should look like the following:
```python
Physics can be enjoyable
Physics can be awful!
Physics can be ok, I guess
Physics can be ....
```
</div>
```
# Create three lists of words
#------'INSERT CODE HERE'------
list1 = ['Maths','Physics','Programming','Chemistry']
list2 = ['is','can be','is not']
list3 = ['enjoyable','awful!','ok, I guess','....']
for word1 in list1:
for word2 in list2:
for word3 in list3:
if word1 == "Physics" and word2 == "can be":
print(word1+' '+word2+' '+word3)
#------------------------------
```
<div class="alert alert-block alert-success">
<b> Exercise 4: Print out which variables match a condition <a name="Exercise4"> </b>
In this exercise we have two variables, <code> x </code> and <code> y </code> taking on a value from two loops that cycle through 80 values.
```python
for x in range(80):
for y in range(80):
[do something]
```
Your task is to identify which combinations of <code> x </code> and <code> y </code>, when passed through the function:
\begin{eqnarray}
Z = Y+X^{2}
\end{eqnarray}
produce a value of <code> Z </code> = 80
</div>
```
#------'INSERT CODE HERE'------
for x in range(80):
for y in range(80):
z = y + x**2.0
if z == 80:
print('x = ', x)
print('y = ', y)
#------------------------------
```
<div class="alert alert-block alert-success">
<b> Exercise 5: Repeat Bob Newby's code breaking nested loops to crack the code in the Hawkins lab <a name="Exercise5"> </b>
In this exercise we imagine that we are tasked with finding the value of a password that is different every time the program is executed. This will be generated by an internal Python function and used to create a string of 5 digits. We then have to create a 5-level nested loop that combines 5 different numbers into one word; when this matches the string generated by the internal Python function, the attempted (and thus correct) password is printed to the screen.
The code box below provides you with indented lines in which to enter the rest of the code required. The first loop is provided. As part of the 5th loop you will need to combine all of the individual numbers, as strings, into one word and then check if this is the same as the internally generated password. You can use the following commands for this, assuming that you call each letter letter1, letter2, etc.
```python
password_attempt = letter1+letter2+letter3+letter4+letter5
if password_attempt == password_string:
print("Passwords match!, attempted password = ",password_attempt)
```
Once you have finished, why not see how many steps have been taken to arrive at the correct password?
</div>
```
# The following imports a module [see Practical 3] and then creates a string of a random number of 5 digits
from random import randint
n = 5
password_string = ''.join(["{}".format(randint(0, 9)) for num in range(n)])
print("password = ", password_string)
# Now create a 5 level nested loop which prints when the successful password has been met
#------'INSERT CODE HERE'------
# First loop
for step1 in range(10):
letter1 = str(step1) # Convert number to a string
# Second loop
for step2 in range(10):
letter2 = str(step2)
# Third loop
for step3 in range(10):
letter3 = str(step3)
# Fourth loop
for step4 in range(10):
letter4 = str(step4)
# Fifth loop
for step5 in range(10):
letter5 = str(step5)
password_attempt = letter1+letter2+letter3+letter4+letter5
if password_attempt == password_string:
print("Passwords match!, attempted password = ",password_attempt)
#------------------------------
```
| github_jupyter |
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(rc={'figure.figsize':(12,8)})
confirmed=pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_confirmed_global.csv")
recovered =pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_recovered_global.csv")
death=pd.read_csv("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv")
confirmed.index=confirmed["Country/Region"]
recovered.index=recovered["Country/Region"]
death.index=death["Country/Region"]
#Worldwide
def worldStats():
worldConfirmed=confirmed.iloc[:,-1].sum()
worldRecovered=recovered.iloc[:,-1].sum()
worldDeath=death.iloc[:,-1].sum()
print ("Total cases: {}M".format(round(worldConfirmed/1000000,1)))
print ("Recovered: {}M".format(round(worldRecovered/1000000,1)))
print("Deaths: {}M".format(round(worldDeath/1000000,1)))
sns.barplot(x=["Total cases","Recovered","Deaths"],y=[worldConfirmed,worldRecovered,worldDeath])
worldStats()
def localStats():
total=confirmed.iloc[:,-1]["Egypt"]
localRecovered=recovered.iloc[:,-1]["Egypt"]
localDeath=death.iloc[:,-1]["Egypt"]
print ("Total cases in Egypt:",total)
print ("Recovered in Egypt: ",localRecovered)
print("Deaths in Egypt: ", localDeath)
sns.barplot(x=["Total cases","Recovered","Deaths"],y=[total,localRecovered,localDeath])
localStats()
def plotNewCases(n):
'''plots daily new cases, takes 1 argument: number of days to show'''
tail=list(confirmed.loc["Egypt"].tail(n+1))
new=[]
i=0
while i<len(tail)-1:
new.append(tail[i+1]-tail[i])
i=i+1
t=confirmed.loc["Egypt"].tail(n).index
sns.pointplot(x=t,y=new,color="r")
plotNewCases(100)
def plotRecovered(n):
'''plots daily recovery, takes 1 argument: number of days to show'''
tail=list(recovered.loc["Egypt"].tail(n+1))
new=[]
i=0
while i<len(tail)-1:
new.append(tail[i+1]-tail[i])
i=i+1
t=recovered.loc["Egypt"].tail(n).index
sns.barplot(x=t,y=new,palette="Blues_d")
plotRecovered(100)
def plotTotal(n):
""" plots Egypt total cases, takes 1 argument: number of days to show"""
totaln=confirmed.loc["Egypt"].tail(n)
sns.pointplot(x=totaln.index,y=totaln,color="r")
plotTotal(100)
```
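The while-loop differencing used in `plotNewCases` and `plotRecovered` can be checked in isolation on a small synthetic cumulative series (the numbers here are made up for illustration):

```python
cumulative = [10, 12, 15, 20, 26]  # assumed cumulative totals over 5 days
new = []
i = 0
while i < len(cumulative) - 1:
    new.append(cumulative[i + 1] - cumulative[i])
    i = i + 1
print(new)  # [2, 3, 5, 6]: the daily increments
```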
| github_jupyter |
## TF v1 implementation of logistic regression for book DLWithTF
```
import numpy as np
np.random.seed(456)
import tensorflow as tf
tf.set_random_seed(456)
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from scipy.special import logit
# Generate synthetic data
NN = 100
# Zeros form a Gaussian centered at (-1, -1)
x_zeros = np.random.multivariate_normal(
mean=np.array((-1, -1)), cov=.1*np.eye(2), size=(NN//2,))
y_zeros = np.zeros((NN//2,))
# Ones form a Gaussian centered at (1, 1)
x_ones = np.random.multivariate_normal(
mean=np.array((1, 1)), cov=.1*np.eye(2), size=(NN//2,))
y_ones = np.ones((NN//2,))
x_np = np.vstack([x_zeros, x_ones])
y_np = np.concatenate([y_zeros, y_ones])
# Save image of the data distribution
plt.xlabel(r"$x_1$")
plt.ylabel(r"$x_2$")
plt.title("Toy Logistic Regression Data")
# Plot Zeros
plt.scatter(x_zeros[:, 0], x_zeros[:, 1], color="blue")
plt.scatter(x_ones[:, 0], x_ones[:, 1], color="red")
plt.savefig("logistic_data.png")
# Generate tensorflow graph
with tf.name_scope("placeholders"):
x = tf.placeholder(tf.float32, (NN, 2))
y = tf.placeholder(tf.float32, (NN,))
with tf.name_scope("weights"):
W = tf.Variable(tf.random_normal((2, 1)))
b = tf.Variable(tf.random_normal((1,)))
with tf.name_scope("prediction"):
y_logit = tf.squeeze(tf.matmul(x, W) + b)
# the sigmoid gives the class probability of 1
y_one_prob = tf.sigmoid(y_logit)
# Rounding P(y=1) will give the correct prediction.
y_pred = tf.round(y_one_prob)
with tf.name_scope("loss"):
# Compute the cross-entropy term for each datapoint
entropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=y_logit, labels=y)
# Sum all contributions
l = tf.reduce_sum(entropy)
with tf.name_scope("optim"):
train_op = tf.train.AdamOptimizer(.01).minimize(l)
with tf.name_scope("summaries"):
tf.summary.scalar("loss", l)
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter('/tmp/logistic-train', tf.get_default_graph())
n_steps = 1000
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Train model
for i in range(n_steps):
feed_dict = {x: x_np, y: y_np}
_, summary, loss = sess.run([train_op, merged, l], feed_dict=feed_dict)
print("loss: %f" % loss)
train_writer.add_summary(summary, i)
# Get weights
w_final, b_final = sess.run([W, b])
# Make Predictions
y_pred_np = sess.run(y_pred, feed_dict={x: x_np})
score = accuracy_score(y_np, y_pred_np)
print("Classification Accuracy: %f" % score)
```
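For intuition, the numerically stable form that `tf.nn.sigmoid_cross_entropy_with_logits` documents, max(x, 0) - x*z + log(1 + exp(-|x|)) for logit x and label z, can be sketched with the standard library. This is a hand-rolled illustration, not TensorFlow's implementation:

```python
import math

# Stable per-example sigmoid cross-entropy, following the documented formula
# max(x, 0) - x * z + log(1 + exp(-|x|)) for logit x and label z.
def sigmoid_xent(logit, label):
    return max(logit, 0.0) - logit * label + math.log1p(math.exp(-abs(logit)))

print(round(sigmoid_xent(0.0, 1.0), 4))  # 0.6931, i.e. log(2): a 50/50 guess
```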
| github_jupyter |
# T81-558: Applications of Deep Neural Networks
**Module 8: Kaggle Data Sets.**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), School of Engineering and Applied Science, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module Video Material
Main video lecture:
* [Part 8.1: Introduction to Kaggle](https://www.youtube.com/watch?v=XpGI4engRjQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN&index=24)
* [Part 8.2: Simple Kaggle Solution for Keras](https://www.youtube.com/watch?v=AA3KFxjPxCo&index=25&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN)
* [Part 8.3: Overview of this Semester's Kaggle Assignment](https://www.youtube.com/watch?v=GaKo-9c532c)
# Helpful Functions
You will see these at the top of every module. They are simply a set of reusable functions that we will make use of; each of them will be explained in greater detail as the course progresses, and Class 4 contains a complete overview of these functions.
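As a quick illustration of one of these helpers, `hms_string` (reproduced here so the snippet stands alone) formats an elapsed time in seconds:

```python
# Nicely formatted time string, as defined in the helper cell below.
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return "{}:{:>02}:{:>05.2f}".format(h, m, s)

print(hms_string(3725.5))  # 1:02:05.50
```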
```
from sklearn import preprocessing
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import shutil
import os
import requests
import base64
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name, x)
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1
# at every location where the original column (name) matches each of the target_values. One column is added for
# each target value.
def encode_text_single_dummy(df, name, target_values):
for tv in target_values:
l = list(df[name].astype(str))
l = [1 if str(x) == str(tv) else 0 for x in l]
name2 = "{}-{}".format(name, tv)
df[name2] = l
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name] - mean) / sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert all missing values in the specified column to the default
def missing_default(df, name, default_value):
df[name] = df[name].fillna(default_value)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df, target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
dummies = pd.get_dummies(df[target])
return df[result].values.astype(np.float32), dummies.values.astype(np.float32)
else:
# Regression
return df[result].values.astype(np.float32), df[[target]].values.astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# Regression chart.
def chart_regression(pred,y,sort=True):
t = pd.DataFrame({'pred' : pred, 'y' : y.flatten()})
if sort:
t.sort_values(by=['y'],inplace=True)
a = plt.plot(t['y'].tolist(),label='expected')
b = plt.plot(t['pred'].tolist(),label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
# Remove all rows where the specified column is +/- sd standard deviations
def remove_outliers(df, name, sd):
drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))]
df.drop(drop_rows, axis=0, inplace=True)
# Encode a column to a range between normalized_low and normalized_high.
def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1,
data_low=None, data_high=None):
if data_low is None:
data_low = min(df[name])
data_high = max(df[name])
df[name] = ((df[name] - data_low) / (data_high - data_low)) \
* (normalized_high - normalized_low) + normalized_low
# This function submits an assignment. You can submit an assignment as many times as you like; only the final
# submission counts. The parameters are as follows:
# data - Pandas dataframe output.
# key - Your student key that was emailed to you.
# no - The assignment class number.
# source_file - The full path to your Python or IPYNB file. This must have "_class1" as part of its name.
#   The number must match your assignment number. For example, "_class2" for class assignment #2.
def submit(data,key,no,source_file=None):
if source_file is None and '__file__' not in globals(): raise Exception('Must specify a filename when running from a Jupyter notebook.')
if source_file is None: source_file = __file__
suffix = '_class{}'.format(no)
if suffix not in source_file: raise Exception('{} must be part of the filename.'.format(suffix))
with open(source_file, "rb") as image_file:
encoded_python = base64.b64encode(image_file.read()).decode('ascii')
ext = os.path.splitext(source_file)[-1].lower()
if ext not in ['.ipynb','.py']: raise Exception("Source file extension {} must be .py or .ipynb".format(ext))
r = requests.post("https://api.heatonresearch.com/assignment-submit",
headers={'x-api-key':key}, json={'csv':base64.b64encode(data.to_csv(index=False).encode('ascii')).decode("ascii"),
'assignment': no, 'ext':ext, 'py':encoded_python})
if r.status_code == 200:
print("Success: {}".format(r.text))
else: print("Failure: {}".format(r.text))
```
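To see what one of these helpers actually does, here is a tiny standalone example of `encode_numeric_zscore`. The helper is re-declared so the snippet runs on its own, and the three-row dataframe is made up:

```python
import pandas as pd

# Re-declare the z-score helper from above so this snippet is self-contained.
def encode_numeric_zscore(df, name, mean=None, sd=None):
    if mean is None:
        mean = df[name].mean()
    if sd is None:
        sd = df[name].std()
    df[name] = (df[name] - mean) / sd

df = pd.DataFrame({'height': [150.0, 160.0, 170.0]})
encode_numeric_zscore(df, 'height')
print(df['height'].tolist())  # [-1.0, 0.0, 1.0] using pandas' sample standard deviation
```

After the call, the column has mean 0 and unit standard deviation, which is the form neural networks train on most easily.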
# What is Kaggle?
[Kaggle](http://www.kaggle.com) runs competitions in which data scientists compete to provide the best model for a data set. The capstone project of this chapter features Kaggle’s [Titanic data set](https://www.kaggle.com/c/titanic-gettingStarted). Before we get started with the Titanic example, it’s important to be aware of some Kaggle guidelines. First, most competitions end on a specific date. The organizers have currently scheduled the Titanic competition to end on December 31, 2016; however, they have already extended the deadline several times, and an extension beyond 2016 is also possible. Second, the Titanic data set is considered a tutorial data set. In other words, there is no prize, and your score in the competition does not count towards becoming a Kaggle Master.
# Kaggle Ranks
Kaggle ranks are achieved by earning gold, silver and bronze medals.
* [Kaggle Top Users](https://www.kaggle.com/rankings)
* [Current Top Kaggle User's Profile Page](https://www.kaggle.com/stasg7)
* [Jeff Heaton's (your instructor) Kaggle Profile](https://www.kaggle.com/jeffheaton)
* [Current Kaggle Ranking System](https://www.kaggle.com/progression)
# Typical Kaggle Competition
A typical Kaggle competition will have several components. Consider the Titanic tutorial:
* [Competition Summary Page](https://www.kaggle.com/c/titanic)
* [Data Page](https://www.kaggle.com/c/titanic/data)
* [Evaluation Description Page](https://www.kaggle.com/c/titanic/details/evaluation)
* [Leaderboard](https://www.kaggle.com/c/titanic/leaderboard)
## How Kaggle Competitions are Scored
Kaggle is provided with a data set by the competition sponsor. This data set is divided up as follows:
* **Complete Data Set** - The full data set provided by the competition sponsor, which Kaggle divides into the parts below.
* **Training Data Set** - You are provided both the inputs and the outcomes for the training portion of the data set.
* **Test Data Set** - You are provided the complete test data set; however, you are not given the outcomes. Your submission is your predicted outcomes for this data set.
* **Public Leaderboard** - You are not told what part of the test data set contributes to the public leaderboard. Your public score is calculated based on this part of the data set.
* **Private Leaderboard** - You are not told what part of the test data set contributes to the private leaderboard. Your final score/rank is calculated based on this part. You do not see your private leaderboard score until the end of the competition.

## Preparing a Kaggle Submission
Code need not be submitted to Kaggle. For competitions, you are scored entirely on the accuracy of your submission file. A Kaggle submission file is always a CSV file that contains the **Id** of the row you are predicting and the answer. For the Titanic competition, a submission file looks something like this:
```
PassengerId,Survived
892,0
893,1
894,1
895,0
896,0
897,1
...
```
The above file gives a prediction for each passenger. You should only predict on IDs that are in the test file, and likewise, you should provide a prediction for every row in the test file. Some competitions use different answer formats. For example, a multi-classification competition will usually have a column for each class, holding your prediction for that class.
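A submission file in this format can be produced with a couple of lines of pandas. The passenger IDs and predictions below are purely illustrative:

```python
import pandas as pd

# Hypothetical predictions for three test-set passengers (IDs are illustrative).
sub = pd.DataFrame({'PassengerId': [892, 893, 894],
                    'Survived': [0, 1, 1]})
sub.to_csv('submission.csv', index=False)  # index=False keeps exactly two columns in the file
```

Upload the resulting `submission.csv` on the competition's submit page; Kaggle matches your rows to its hidden answers by the ID column.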
# Select Kaggle Competitions
There have been many interesting competitions on Kaggle; these are some of my favorites.
## Predictive Modeling
* [Otto Group Product Classification Challenge](https://www.kaggle.com/c/otto-group-product-classification-challenge)
* [Galaxy Zoo - The Galaxy Challenge](https://www.kaggle.com/c/galaxy-zoo-the-galaxy-challenge)
* [Practice Fusion Diabetes Classification](https://www.kaggle.com/c/pf2012-diabetes)
* [Predicting a Biological Response](https://www.kaggle.com/c/bioresponse)
## Computer Vision
* [Diabetic Retinopathy Detection](https://www.kaggle.com/c/diabetic-retinopathy-detection)
* [Cats vs Dogs](https://www.kaggle.com/c/dogs-vs-cats)
* [State Farm Distracted Driver Detection](https://www.kaggle.com/c/state-farm-distracted-driver-detection)
## Time Series
* [The Marinexplore and Cornell University Whale Detection Challenge](https://www.kaggle.com/c/whale-detection-challenge)
## Other
* [Helping Santa's Helpers](https://www.kaggle.com/c/helping-santas-helpers)
# Iris as a Kaggle Competition
If the Iris data were used as a Kaggle competition, you would be given the following three files:
* [kaggle_iris_test.csv](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/data/kaggle_iris_test.csv) - The data that Kaggle will evaluate you on. Contains only input, you must provide answers. (contains x)
* [kaggle_iris_train.csv](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/data/kaggle_iris_train.csv) - The data that you will use to train. (contains x and y)
* [kaggle_iris_sample.csv](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/data/kaggle_iris_sample.csv) - A sample submission for Kaggle. (contains x and y)
Important features of the Kaggle iris files (which differ from the files we have seen previously):
* The iris species is already index encoded.
* Your training data is in a separate file.
* You will load the test data to generate a submission file.
The following program generates a submission file for "Iris Kaggle". You can use it as a starting point for assignment 3.
```
import os
import pandas as pd
from sklearn.model_selection import train_test_split
import tensorflow as tf
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.callbacks import EarlyStopping
path = "./data/"
filename_train = os.path.join(path,"kaggle_iris_train.csv")
filename_test = os.path.join(path,"kaggle_iris_test.csv")
filename_submit = os.path.join(path,"kaggle_iris_submit.csv")
df_train = pd.read_csv(filename_train,na_values=['NA','?'])
# Encode feature vector
encode_numeric_zscore(df_train,'petal_w')
encode_numeric_zscore(df_train,'petal_l')
encode_numeric_zscore(df_train,'sepal_w')
encode_numeric_zscore(df_train,'sepal_l')
df_train.drop('id', axis=1, inplace=True)
num_classes = df_train['species'].nunique()
print("Number of classes: {}".format(num_classes))
# Create x & y for training
# Create the x-side (feature vectors) of the training
x, y = to_xy(df_train,'species')
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=45)
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10))
model.add(Dense(y.shape[1],activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto')
model.fit(x_train,y_train,validation_data=(x_test,y_test),callbacks=[monitor],verbose=0,epochs=1000)
from sklearn import metrics
# Calculate multi log loss error
pred = model.predict(x_test)
score = metrics.log_loss(y_test, pred)
print("Log loss score: {}".format(score))
# Generate Kaggle submit file
# Encode feature vector
df_test = pd.read_csv(filename_test,na_values=['NA','?'])
encode_numeric_zscore(df_test,'petal_w')
encode_numeric_zscore(df_test,'petal_l')
encode_numeric_zscore(df_test,'sepal_w')
encode_numeric_zscore(df_test,'sepal_l')
ids = df_test['id']
df_test.drop('id', axis=1, inplace=True)
x = df_test.values.astype(np.float32)
# Generate predictions
pred = model.predict(x)
#pred
# Create submission data set
df_submit = pd.DataFrame(pred)
df_submit.insert(0,'id',ids)
df_submit.columns = ['id','species-0','species-1','species-2']
df_submit.to_csv(filename_submit, index=False)
print(df_submit)
```
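The score printed above is multiclass log loss. As a sanity check of what `sklearn.metrics.log_loss` computes, here is a tiny hand-checkable example (the probabilities are made up):

```python
import numpy as np
from sklearn import metrics

# Two samples, three classes: one-hot targets vs. predicted probabilities.
y_true = np.array([[1, 0, 0],
                   [0, 1, 0]])
probs = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.7, 0.1]])

score = metrics.log_loss(y_true, probs)
# log loss = -mean(log of the probability assigned to the true class)
#          = -(log 0.8 + log 0.7) / 2  ≈ 0.2899
```

Lower is better; a model that assigns probability 1.0 to every correct class would score 0.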
# Kaggle Project
Kaggle competition site for current semester (Spring 2019):
* [Spring 2019 Kaggle Assignment](https://www.kaggle.com/c/applications-of-deep-learningwustl-spring-2019)
Previous Kaggle competition sites for this class (for your reference, do not use):
* [Fall 2018 Kaggle Assignment](https://www.kaggle.com/c/wustl-t81-558-washu-deep-learning-fall-2018)
* [Spring 2018 Kaggle Assignment](https://www.kaggle.com/c/wustl-t81-558-washu-deep-learning-spring-2018)
* [Fall 2017 Kaggle Assignment](https://www.kaggle.com/c/wustl-t81-558-washu-deep-learning-fall-2017)
* [Spring 2017 Kaggle Assignment](https://inclass.kaggle.com/c/applications-of-deep-learning-wustl-spring-2017)
* [Fall 2016 Kaggle Assignment](https://inclass.kaggle.com/c/wustl-t81-558-washu-deep-learning-fall-2016)
# Module 8 Assignment
You can find the eighth assignment here: [assignment 8](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/assignments/assignment_yourname_class8.ipynb)
# Data augmentation
```
from nb_200 import *
import random
import pickle
```
## Get the data
```
device = torch.device('cuda', 0)
class PetsData(DataBlock):
types = Image,Category
get_items = lambda source, self: [get_image_files(source)[0]]*100
split = random_splitter()
label_func = re_labeller(pat = r'/([^/]+)_\d+.jpg$')
class CamvidData(DataBlock):
types = Image,SegmentMask
get_items = lambda source,self: [get_image_files(source/'images')[0]] * 100
split = random_splitter()
label_func = lambda o,self: self.source/'labels'/f'{o.stem}_P{o.suffix}'
class BiwiData(DataBlock):
types = Image,Points
def __init__(self, source, *args, **kwargs):
super().__init__(source, *args, **kwargs)
self.fn2ctr = pickle.load(open(source/'centers.pkl', 'rb'))
get_items = lambda source, self: [get_image_files(source/'images')[0]] * 100
split = random_splitter()
label_func = lambda o,self: [[0, 0], [120, 0], [0, 160], [120,160]]
class CocoData(DataBlock):
types = Image,BBox
def __init__(self, source, *args, **kwargs):
super().__init__(source, *args, **kwargs)
images, lbl_bbox = get_annotations(source/'train.json')
self.img2bbox = dict(zip(images, lbl_bbox))
get_items = lambda source, self: [get_image_files(source/'train')[18]] * 100
split = random_splitter()
label_func = lambda o,self: self.img2bbox[o.name]
def databunch(self, ds_tfms=None, dl_tfms=None, bs=64, tfm_kwargs=None, **kwargs):
return super().databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=bs, tfm_kwargs=tfm_kwargs,
collate_fn=bb_pad_collate, **kwargs)
ds_tfms = [DecodeImg(), ResizeFixed(128), ToByteTensor()]
dl_tfms = [Cuda(device), ToFloatTensor()]
pets_src = untar_data(URLs.PETS)
camvid_src = untar_data(URLs.CAMVID_TINY)
biwi_src = untar_data(URLs.BIWI_SAMPLE)
coco_src = untar_data(URLs.COCO_TINY)
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco.show_batch()
```
## Flip and dihedral with PIL
```
# export
class Flip(ImageTransform):
_data_aug=True
def __init__(self, p=0.5): self.p = p
def randomize(self): self.do = random.random() < self.p
def apply(self, x):
return x.transpose(PIL.Image.FLIP_LEFT_RIGHT) if self.do else x
def apply_point(self, x):
if self.do: x[...,0] = -x[...,0]
return x
def apply_bbox(self, x): return (self.apply_point(x[0].view(-1,2)).view(-1,4), x[1])
ds_tfms = [DecodeImg(), Flip(), ResizeFixed(128), ToByteTensor()]
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco.show_batch()
# export
class Dihedral(ImageTransform):
_data_aug=True
def __init__(self, p=0.5): self.p = p
def randomize(self):
self.idx = random.randint(0,7) if random.random() < self.p else 0
def apply(self, x): return x if self.idx==0 else x.transpose(self.idx-1)
def apply_point(self, x):
if self.idx in [1, 3, 4, 7]: x[...,0] = -x[...,0]
if self.idx in [2, 4, 5, 7]: x[...,1] = -x[...,1]
if self.idx in [3, 5, 6, 7]: x = x.flip(1)
return x
def apply_bbox(self, x):
pnts = self.apply_point(x[0].view(-1,2)).view(-1,2,2)
tl,dr = pnts.min(dim=1)[0],pnts.max(dim=1)[0]
return [torch.cat([tl, dr], dim=1), x[1]]
ds_tfms = [DecodeImg(), Dihedral(), ResizeFixed(128), ToByteTensor()]
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco.show_batch()
```
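In the normalized [-1, 1] coordinate space used by these transforms, a horizontal flip of a point target is just a sign change on the x coordinate — which is what `apply_point` does above. A minimal numpy sketch:

```python
import numpy as np

# Points in the normalized [-1, 1] coordinate space used by the transforms above.
pts = np.array([[-0.5,  0.2],
                [ 1.0, -1.0]])
flipped = pts.copy()
flipped[:, 0] = -flipped[:, 0]  # a horizontal flip only negates x; y is untouched
```

The bounding-box version reuses the same operation by viewing each box as two corner points.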
## Affine and coords on the GPU
This is the main transform: it applies the affine and coordinate transforms together, so only one interpolation is done. The implementation differs for each type of target.
```
# export
def clip_remove_empty(bbox, label):
bbox = torch.clamp(bbox, -1, 1)
empty = ((bbox[...,2] - bbox[...,0])*(bbox[...,3] - bbox[...,1]) < 0.)
if isinstance(label, torch.Tensor): label[empty] = 0
else:
for i,m in enumerate(empty):
if m: label[i] = 0
return [bbox, label]
# export
class AffineAndCoordTfm(ImageTransform):
_data_aug=True
def __init__(self, aff_tfms, coord_tfms, size=None, mode='bilinear', padding_mode='reflection'):
self.aff_tfms,self.coord_tfms,self.mode,self.padding_mode = aff_tfms,coord_tfms,mode,padding_mode
self.size = None if size is None else (size,size) if isinstance(size, int) else tuple(size)
def randomize(self):
for t in self.aff_tfms+self.coord_tfms: t.randomize(self.x)
def _get_affine_mat(self):
aff_m = torch.eye(3, dtype=self.x.dtype, device=self.x.device)
aff_m = aff_m.unsqueeze(0).expand(self.x.size(0), 3, 3)
ms = [tfm() for tfm in self.aff_tfms]
ms = [m for m in ms if m is not None]
for m in ms: aff_m = aff_m @ m
return aff_m
def apply(self, x):
bs = x.size(0)
size = tuple(x.shape[-2:]) if self.size is None else self.size
size = (bs,x.size(1)) + size
coords = F.affine_grid(self._get_affine_mat()[:,:2], size)
coords = apply_all(coords, self.coord_tfms)
return F.grid_sample(x, coords, mode=self.mode, padding_mode=self.padding_mode)
def apply_mask(self, y):
self.old_mode,self.mode = self.mode,'nearest'
res = self.apply(y.float())
self.mode = self.old_mode
return res.long()
def apply_point(self, y):
m = self._get_affine_mat()[:,:2]
y = (y - m[:,:,2].unsqueeze(1)) @ torch.inverse(m[:,:2,:2].transpose(1,2))
return apply_all(y, self.coord_tfms, filter_kwargs=True, invert=True)
def apply_bbox(self, y):
bbox,label = y
bs,n = bbox.shape[:2]
pnts = stack([bbox[...,:2], stack([bbox[...,0],bbox[...,3]],dim=2),
stack([bbox[...,2],bbox[...,1]],dim=2), bbox[...,2:]], dim=2)
pnts = self.apply_point(pnts.view(bs, 4*n, 2))
pnts = pnts.view(bs, n, 4, 2)
tl,dr = pnts.min(dim=2)[0],pnts.max(dim=2)[0]
return clip_remove_empty(torch.cat([tl, dr], dim=2), label)
```
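`AffineAndCoordTfm` composes 3×3 affine matrices and applies them through `F.affine_grid`. As a standalone check of the underlying 2D rotation math (numpy instead of torch for simplicity; the angle is arbitrary):

```python
import numpy as np

theta = np.deg2rad(90)
# Standard 2D rotation matrix.
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
pt = np.array([1.0, 0.0])
rotated = rot @ pt  # (1, 0) rotated 90 degrees counter-clockwise is (0, 1)
```

The notebook's `Rotation` class builds a batch of such matrices (padded to 3×3 homogeneous form) with a random angle per image.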
### Affine
```
# export
import math
from torch import stack, zeros_like as t0, ones_like as t1
from torch.distributions.bernoulli import Bernoulli
```
#### rotate
```
# export
def mask_tensor(x, p=0.5, neutral=0.):
if p==1.: return x
if neutral != 0: x.add_(-neutral)
mask = x.new_empty(*x.size()).bernoulli_(p)
x.mul_(mask)
return x.add_(neutral) if neutral != 0 else x
# export
def masked_uniform(x, a, b, *sz, p=0.5, neutral=0.):
return mask_tensor(x.new_empty(*sz).uniform_(a,b), p=p, neutral=neutral)
# export
class Rotation():
def __init__(self, degrees=10., p=0.5):
self.range,self.p = (-degrees,degrees),p
def randomize(self, x):
thetas = masked_uniform(x, *self.range, x.size(0), p=self.p) * math.pi/180
self.mat = stack([stack([thetas.cos(), thetas.sin(), t0(thetas)], dim=1),
stack([-thetas.sin(), thetas.cos(), t0(thetas)], dim=1),
stack([t0(thetas), t0(thetas), t1(thetas)], dim=1)], dim=1)
def __call__(self): return self.mat
ds_tfms = [DecodeImg(), ResizeFixed(128), ToByteTensor()]
dl_tfms = [Cuda(device), ToFloatTensor(), AffineAndCoordTfm([Rotation(30.)], [])]
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco.show_batch()
```
#### flip and dihedral affine
```
# export
class FlipAffine():
def __init__(self, p=0.5):
self.p=p
def randomize(self, x):
mask = -2*x.new_empty(x.size(0)).bernoulli_(self.p)+1
self.mat = stack([stack([mask, t0(mask), t0(mask)], dim=1),
stack([t0(mask), t1(mask), t0(mask)], dim=1),
stack([t0(mask), t0(mask), t1(mask)], dim=1)], dim=1)
def __call__(self): return self.mat
# export
class DihedralAffine():
def __init__(self, p=0.5):
self.p=p
def randomize(self, x):
idx = mask_tensor(torch.randint(0, 8, (x.size(0),), device=x.device), p=self.p)
xs = 1 - 2*(idx & 1)
ys = 1 - (idx & 2)
m0,m1 = (idx<4).long(),(idx>3).long()
self.mat = stack([stack([xs*m0, xs*m1, t0(xs)], dim=1),
stack([ys*m1, ys*m0, t0(xs)], dim=1),
stack([t0(xs), t0(xs), t1(xs)], dim=1)], dim=1).float()
def __call__(self): return self.mat
dl_tfms = [Cuda(device), ToFloatTensor(), AffineAndCoordTfm([DihedralAffine()], [])]
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco.show_batch()
```
#### zoom
```
class Zoom():
def __init__(self, max_zoom=1.1, p=0.5):
self.range,self.p = (1,max_zoom),p
def randomize(self, x):
s = 1/masked_uniform(x, *self.range, x.size(0), p=self.p, neutral=1.)
col_pct = x.new_empty(x.size(0)).uniform_(0.,1.)
row_pct = x.new_empty(x.size(0)).uniform_(0.,1.)
col_c = (1-s) * (2*col_pct - 1)
row_c = (1-s) * (2*row_pct - 1)
self.mat = stack([stack([s, t0(s), col_c], dim=1),
stack([t0(s), s, row_c], dim=1),
stack([t0(s), t0(s), t1(s)], dim=1)], dim=1)
def __call__(self): return self.mat
dl_tfms = [Cuda(device), ToFloatTensor(), AffineAndCoordTfm([Zoom()], [])]
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco.show_batch()
```
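The zoom matrix above acts on the sampling grid, so a scale factor below 1 samples a smaller region of the source image — a zoom in. A small numpy sketch of that convention (the scale value is arbitrary, and the crop is centered so the translation terms are zero):

```python
import numpy as np

s = 0.5  # sampling-grid scale; s < 1 means the output samples a smaller region (zoom in)
mat = np.array([[s,   0.0, 0.0],
                [0.0, s,   0.0],
                [0.0, 0.0, 1.0]])
corner = np.array([1.0, 1.0, 1.0])  # bottom-right corner in homogeneous normalized coords
sampled = mat @ corner              # the output corner reads the source at (0.5, 0.5)
```

This is why `Zoom.randomize` inverts the drawn zoom factor (`s = 1/…`) before building the matrix.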
### Coordinates
#### warping
```
# export
def find_coeffs(p1, p2):
matrix = []
p = p1[:,0,0]
#The equations we'll need to solve.
for i in range(p1.shape[1]):
matrix.append(stack([p2[:,i,0], p2[:,i,1], t1(p), t0(p), t0(p), t0(p), -p1[:,i,0]*p2[:,i,0], -p1[:,i,0]*p2[:,i,1]]))
matrix.append(stack([t0(p), t0(p), t0(p), p2[:,i,0], p2[:,i,1], t1(p), -p1[:,i,1]*p2[:,i,0], -p1[:,i,1]*p2[:,i,1]]))
#The 8 scalars we seek are solution of AX = B
A = stack(matrix).permute(2, 0, 1)
B = p1.view(p1.shape[0], 8, 1)
return torch.solve(B,A)[0]
# export
def apply_perspective(coords, coeffs):
sz = coords.shape
coords = coords.view(sz[0], -1, 2)
coeffs = torch.cat([coeffs, t1(coeffs[:,:1])], dim=1).view(coeffs.shape[0], 3,3)
coords = coords @ coeffs[...,:2].transpose(1,2) + coeffs[...,2].unsqueeze(1)
coords.div_(coords[...,2].unsqueeze(-1))
return coords[...,:2].view(*sz)
# export
class Warp():
def __init__(self, magnitude=0.2, p=0.5):
self.coeffs,self.magnitude,self.p = None,magnitude,p
def randomize(self, x):
up_t = masked_uniform(x, -self.magnitude, self.magnitude, x.size(0), p=self.p)
lr_t = masked_uniform(x, -self.magnitude, self.magnitude, x.size(0), p=self.p)
orig_pts = torch.tensor([[-1,-1], [-1,1], [1,-1], [1,1]], dtype=x.dtype, device=x.device)
self.orig_pts = orig_pts.unsqueeze(0).expand(x.size(0),4,2)
targ_pts = stack([stack([-1-up_t, -1-lr_t]), stack([-1+up_t, 1+lr_t]),
stack([ 1+up_t, -1+lr_t]), stack([ 1-up_t, 1-lr_t])])
self.targ_pts = targ_pts.permute(2,0,1)
def __call__(self, x, invert=False):
coeffs = find_coeffs(self.targ_pts, self.orig_pts) if invert else find_coeffs(self.orig_pts, self.targ_pts)
return apply_perspective(x, coeffs)
dl_tfms = [Cuda(device), ToFloatTensor(), AffineAndCoordTfm([Rotation()], [Warp()])]
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco.show_batch()
```
## Lighting transforms
```
# export
def logit(x):
"Logit of `x`, clamped to avoid inf."
x = x.clamp(1e-7, 1-1e-7)
return -(1/x-1).log()
# export
class LightingTransform(ImageTransform):
_order = 15
_data_aug=True
def __init__(self, tfms): self.tfms=listify(tfms)
def randomize(self):
for t in self.tfms: t.randomize(self.x)
def apply(self,x): return torch.sigmoid(apply_all(logit(x), self.tfms))
def apply_mask(self, x): return x
# export
from math import log
def masked_log_uniform(x, a, b, *sz, p=0.5, neutral=0.):
return torch.exp(masked_uniform(x, log(a), log(b), *sz, p=p, neutral=neutral))
# export
class Brightness():
"Apply `change` in brightness of image `x`."
def __init__(self, max_lighting=0.2, p=0.75):
self.p = p
self.range = (0.5*(1-max_lighting), 0.5*(1+max_lighting))
def randomize(self, x):
self.change = masked_uniform(x, *self.range, x.size(0), *([1]*(x.dim()-1)), p=self.p, neutral=0.5)
def __call__(self, x): return x.add_(self.change)
class Contrast():
"Apply `change` in contrast of image `x`."
def __init__(self, max_lighting=0.2, p=0.75):
self.p = p
self.range = (1-max_lighting, 1/(1-max_lighting))
def randomize(self, x):
self.change = masked_log_uniform(x, *self.range, x.size(0), *([1]*(x.dim()-1)), p=self.p)
def __call__(self, x): return x.mul_(self.change)
dl_tfms = [Cuda(device), ToFloatTensor(), LightingTransform([Brightness(1), Contrast(0.5)])]
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco.show_batch()
```
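The lighting transforms operate in logit space so that several adjustments can compose before a single final sigmoid. A numpy sketch of the clamp–logit–sigmoid round trip used above:

```python
import numpy as np

def logit(x):
    x = np.clip(x, 1e-7, 1 - 1e-7)  # same clamp as the torch version, to avoid inf
    return -np.log(1 / x - 1)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

px = np.array([0.1, 0.25, 0.5, 0.9])   # pixel values in [0, 1]
round_trip = sigmoid(logit(px))        # recovers the original pixel values
```

Brightness adds a constant in logit space and contrast multiplies there; the final sigmoid guarantees the result stays in [0, 1].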
## All at once
```
ds_tfms = [DecodeImg(), ResizeFixed(224), ToByteTensor(), Flip()]
dl_tfms = [Cuda(device), ToFloatTensor(), LightingTransform([Brightness(), Contrast()]),
AffineAndCoordTfm([Rotation(), Zoom()], [Warp()])]
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
%timeit -n 10 _ = coco.one_batch(0)
```
## Crops and pads
### On the CPU
#### crop
```
class CenterCrop(ImageTransform):
_order = 12
def __init__(self, size):
if isinstance(size,int): size=(size,size)
self.size = (size[1],size[0])
def randomize(self):
w,h = self.x.size
self.tl = ((w-self.size[0])//2, (h-self.size[1])//2)
def apply(self, x):
return x.crop((self.tl[0],self.tl[1],self.tl[0]+self.size[0],self.tl[1]+self.size[1]))
def apply_point(self, y):
old_sz,new_sz,tl = map(lambda o: tensor(o).float(), (self.x.size,self.size,self.tl))
return (y + 1) * old_sz/new_sz - tl * 2/new_sz - 1
def apply_bbox(self, y):
bbox,label = y
bbox = self.apply_point(bbox.view(-1,2)).view(-1,4)
return clip_remove_empty(bbox, label)
class RandomCrop(CenterCrop):
def randomize(self):
w,h = self.x.size
if self.filt != 0: self.tl = ((w-self.size[0])//2, (h-self.size[1])//2)  # validation: center crop
else: self.tl = (random.randint(0,w-self.size[0]), random.randint(0,h-self.size[1]))
ds_tfms = [DecodeImg(), RandomCrop(100), ToByteTensor()]
dl_tfms = [Cuda(device), ToFloatTensor()]
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco.show_batch()
```
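The top-left corner arithmetic for a center crop is easy to check in isolation. Assuming a w×h image and a cw×ch crop, the corner is ((w−cw)//2, (h−ch)//2):

```python
def center_tl(w, h, cw, ch):
    # Top-left corner of a (cw x ch) crop centered in a (w x h) image.
    return ((w - cw) // 2, (h - ch) // 2)

print(center_tl(200, 100, 100, 50))  # (50, 25)
```

Note the parentheses: `(w - cw) // 2`, not `w - cw // 2` — the margin must be halved after subtraction.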
#### pad
```
import torchvision.transforms.functional as tvfunc
class Pad(CenterCrop):
_order = 15
_pad_modes = {'zeros': 'constant', 'border': 'replicate', 'reflection': 'reflect'}
def __init__(self, size, mode='zeros'):
if isinstance(size,int): size=(size,size)
self.size,self.mode = size,self._pad_modes[mode]
def randomize(self):
ph,pw = self.size[0]-self.x.size[1],self.size[1]-self.x.size[0]
self.tl = (-ph//2,-pw//2)
self.pad = (pw//2,ph//2,pw-pw//2,ph-ph//2)
def apply(self, x): return tvfunc.pad(x, self.pad, padding_mode=self.mode)
class RandomPad(Pad):
def randomize(self):
ph,pw = self.size[0]-self.x.size[1],self.size[1]-self.x.size[0]
c,r = random.randint(0,ph),random.randint(0,pw)
self.tl = (-r,-c)
self.pad = (r,c,pw-r,ph-c)
ds_tfms = [DecodeImg(), RandomPad(150, mode='reflection'), ToByteTensor()]
dl_tfms = [Cuda(device), ToFloatTensor()]
pets = PetsData (pets_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
camvid = CamvidData(camvid_src).databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
biwi = BiwiData (biwi_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco = CocoData (coco_src) .databunch(ds_tfms=ds_tfms, dl_tfms=dl_tfms, bs=16)
coco.show_batch()
```
# Decision Trees
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/rickiepark/hg-mldl/blob/master/5-1.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
## Classifying Wine with Logistic Regression
```
import pandas as pd
wine = pd.read_csv('https://bit.ly/wine_csv_data')
wine.head()
wine.info()
wine.describe()
data = wine[['alcohol', 'sugar', 'pH']].to_numpy()
target = wine['class'].to_numpy()
from sklearn.model_selection import train_test_split
train_input, test_input, train_target, test_target = train_test_split(
data, target, test_size=0.2, random_state=42)
print(train_input.shape, test_input.shape)
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
ss.fit(train_input)
train_scaled = ss.transform(train_input)
test_scaled = ss.transform(test_input)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(train_scaled, train_target)
print(lr.score(train_scaled, train_target))
print(lr.score(test_scaled, test_target))
```
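Note that the scaler above is fit on the training set only and then applied to the test set. A minimal sketch of why that matters — the test value is scaled with the *training* statistics (the numbers are made up):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

train = np.array([[1.0], [3.0]])  # mean 2.0, population std 1.0
test = np.array([[2.0]])

ss = StandardScaler().fit(train)  # statistics come from the training set only
print(ss.transform(test))         # [[0.]] -- scaled with the train mean and std
```

Fitting the scaler on the test set (or on the combined data) would leak test-set information into preprocessing.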
### Models That Are Easy or Hard to Explain
```
print(lr.coef_, lr.intercept_)
```
## Decision Trees
```
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier(random_state=42)
dt.fit(train_scaled, train_target)
print(dt.score(train_scaled, train_target))
print(dt.score(test_scaled, test_target))
import matplotlib.pyplot as plt
from sklearn.tree import plot_tree
plt.figure(figsize=(10,7))
plot_tree(dt)
plt.show()
plt.figure(figsize=(10,7))
plot_tree(dt, max_depth=1, filled=True, feature_names=['alcohol', 'sugar', 'pH'])
plt.show()
```
### Pruning
```
dt = DecisionTreeClassifier(max_depth=3, random_state=42)
dt.fit(train_scaled, train_target)
print(dt.score(train_scaled, train_target))
print(dt.score(test_scaled, test_target))
plt.figure(figsize=(20,15))
plot_tree(dt, filled=True, feature_names=['alcohol', 'sugar', 'pH'])
plt.show()
dt = DecisionTreeClassifier(max_depth=3, random_state=42)
dt.fit(train_input, train_target)
print(dt.score(train_input, train_target))
print(dt.score(test_input, test_target))
plt.figure(figsize=(20,15))
plot_tree(dt, filled=True, feature_names=['alcohol', 'sugar', 'pH'])
plt.show()
print(dt.feature_importances_)
```
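The tree chooses splits that reduce Gini impurity, which is what the impurity values shown in the plotted nodes mean. A quick hand computation for a two-class node:

```python
def gini(counts):
    # Gini impurity of a node from its per-class sample counts.
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

print(gini([10, 0]), gini([5, 5]))  # 0.0 0.5 -- a pure node vs. a maximally mixed one
```

`feature_importances_` above aggregates, per feature, how much impurity each split on that feature removed.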
## Exercises
```
dt = DecisionTreeClassifier(min_impurity_decrease=0.0005, random_state=42)
dt.fit(train_input, train_target)
print(dt.score(train_input, train_target))
print(dt.score(test_input, test_target))
plt.figure(figsize=(20,15))
plot_tree(dt, filled=True, feature_names=['alcohol', 'sugar', 'pH'])
plt.show()
```
```
%%html
<link href="https://fonts.googleapis.com/css?family=Open+Sans" rel="stylesheet">
<style>#notebook-container{font-size: 13pt;font-family:'Open Sans', sans-serif;} div.text_cell{max-width: 104ex;}</style>
%pylab inline
```
# 2D Rendering
To render in 2D we will be using vectors with $(x, y)$ coordinates. We will then explore matrices and linear transformations.

## Python Image Library
```
from PIL import *
```
We have a pixel grid with $(x,y)$ coordinates that is initialized with a width $w$ and a height $h$. The coordinate $(0,0)$ is in the top-left corner. We want to correct this so that our coordinate system starts in the bottom-left corner.
We want additional functions to help with drawing shapes:
* Drawing a point
* Drawing a rectangle between two points
* Drawing a line between two points
* This is first done with a simple line $y=mx+b$ and then calculate the $y$ values for that particular range.
* There is a better algorithm, named the Bresenham algorithm (https://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm) that should be implemented later. Hey, it's later, and implemented.
* Drawing a circle
* Filling a triangle
```
class Drawing():
    def __init__(self, w, h, bg_color = 'black'):
        self.width = w
        self.height = h
        self.img = Image.new('RGB', (w, h), bg_color)
        self.color = (255, 255, 255)
        self.pixels = self.img.load()
    def SetColor(self, r, g, b):
        self.color = (r, g, b)
    def SetPixel(self, x, y):
        # Pixel access needs integer coordinates.
        x = int(x); y = int(y)
        # Clip any attempts at drawing outside of the range.
        if x < 0 or x > self.width - 1 or y < 0 or y > self.height - 1: return
        # Flip the y-value because PIL puts (0,0) in the top-left corner.
        self.pixels[x, self.height-y-1] = self.color
    def DrawPoint(self, x, y, size = 1):
        if size < 1: raise ValueError('Parameter \'size\' must be positive.')
        if size == 1: self.SetPixel(x, y); return
        half = size // 2
        for i in range(size):
            for j in range(size):
                self.SetPixel(x-half+i+1, y-half+j+1)
    def DrawRectangle(self, x1, y1, x2, y2):
        if x1 > x2: raise ValueError('x1 > x2')
        if y1 > y2: raise ValueError('y1 > y2')
        for i in range(x2 - x1):
            for j in range(y2 - y1):
                self.SetPixel(x1 + i, y1 + j)
    """ Simple line drawing algorithm """
    def DrawLine(self, x1, y1, x2, y2):
        # Make sure (x1, y1) is the leftmost point; swap the points if not.
        if x1 > x2:
            x1, x2 = x2, x1
            y1, y2 = y2, y1
        # A vertical line has an undefined slope, so handle it separately.
        if x2 - x1 == 0:
            ymin = min(y1, y2)
            ymax = max(y1, y2)
            for i in range(math.ceil(ymax - ymin)):
                self.SetPixel(x1, i + ymin)
            return
        # Draw the line pixel by pixel along the x-axis.
        m = (y2 - y1) / (x2 - x1)
        r = x2 - x1
        for i in range(math.ceil(r)):
            x = x1 + i
            y = m * (x - x1) + y1
            self.SetPixel(round(x), round(y))
    """ Drawing a line with the Bresenham algorithm """
    def DrawBLine(self, x1, y1, x2, y2):
        if abs(y2 - y1) < abs(x2 - x1):
            if x1 > x2:
                self.DrawBLineLow(int(x2), int(y2), int(x1), int(y1))
            else:
                self.DrawBLineLow(int(x1), int(y1), int(x2), int(y2))
        else:
            if y1 > y2:
                self.DrawBLineHigh(int(x2), int(y2), int(x1), int(y1))
            else:
                self.DrawBLineHigh(int(x1), int(y1), int(x2), int(y2))
    """ Drawing the BLine segment on a shallow slope """
    def DrawBLineLow(self, x1, y1, x2, y2):
        dx = x2 - x1
        dy = y2 - y1
        yi = 1
        if dy < 0:
            yi = -1
            dy = -dy
        D = 2*dy - dx
        y = y1
        for x in range(math.ceil(x2 - x1)):
            self.SetPixel(round(x1 + x), round(y))
            if D > 0:
                y = y + yi
                D = D - 2*dx
            D = D + 2*dy
    """ Drawing the BLine segment on a steep slope """
    def DrawBLineHigh(self, x1, y1, x2, y2):
        dx = x2 - x1
        dy = y2 - y1
        xi = 1
        if dx < 0:
            xi = -1
            dx = -dx
        D = 2*dx - dy
        x = x1
        for y in range(math.ceil(y2 - y1)):
            self.SetPixel(x, y1 + y)
            if D > 0:
                x = x + xi
                D = D - 2*dy
            D = D + 2*dx
    def DrawCircle(self, px, py, r):
        n = 50
        theta = 2*pi/n
        coords = []
        # Compute n points on the circle.
        for i in range(n):
            x = r * cos(i * theta) + px
            y = r * sin(i * theta) + py
            coords.append((x, y))
        # Connect all the points.
        for i in range(1, len(coords)):
            self.DrawBLine(coords[i-1][0], coords[i-1][1], coords[i][0], coords[i][1])
        # Close the circle by connecting the first and last point.
        self.DrawBLine(coords[0][0], coords[0][1], coords[n-1][0], coords[n-1][1])
```
Let's test the drawing class.
```
drawing01 = Drawing(300, 200)
drawing01.DrawPoint(25, 25, 3)
drawing01.SetColor(255, 0, 0)
drawing01.DrawPoint(25, 25)
drawing01.DrawPoint(100, 100, 20)
drawing01.SetColor(0, 255, 0)
drawing01.DrawRectangle(50, 50, 60, 80)
drawing01.SetColor(0, 0, 255)
drawing01.DrawLine(10, 10, 290, 190)
drawing01.SetColor(0, 255, 255)
drawing01.DrawBLine(10, 190, 290, 10)
drawing01.SetColor(0, 255, 0)
drawing01.DrawLine(40, 10, 240, 30)
drawing01.DrawBLine(40, 15, 240, 35)
drawing01.DrawBLine(265, 160, 275, 30)
drawing01.SetColor(255, 0, 0)
drawing01.DrawLine(260, 160, 270, 30)
drawing01.DrawCircle(150, 100, 30)
drawing01.img
```
Now we can create simple drawings. Let's continue to the math.
## Transformations
We will have a list of vertices $V$ which gives all the points for the object. Those are then scaled, rotated, and translated. We create a homogeneous coordinate that we need for the $3\times3$ matrix multiplication. This simply means there is a $w$ element, which is used for translations. So each vertex is represented as:
$$\vec{v} = \begin{bmatrix}x\\y\\w=1\end{bmatrix}$$
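As a quick sketch of why the extra $w$ element matters, here is a translation applied through a matrix multiplication (hypothetical numbers; `Mt` follows the row-vector convention used later in this notebook, with the offsets in the bottom row):

```python
import numpy as np

# The point (3, 4) as a homogeneous vector with w = 1.
v = np.array([3, 4, 1])
# Translation by (10, 20): offsets go in the bottom row (row-vector convention).
Mt = np.identity(3)
Mt[2, 0] = 10
Mt[2, 1] = 20
# Same pattern as the notebook uses later: np.dot(matrix.T, vertex).
vt = np.dot(Mt.T, v)
print(vt)  # the point moves to (13, 24) and w stays 1
```

Without the $w=1$ element, a plain matrix multiplication could only scale and rotate; the homogeneous coordinate is what lets a matrix add a constant offset.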
### Cube
We create a normalized unit cube, which we can scale, then rotate, and finally translate. Note that the order is important.
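Why the order matters can be seen with a quick sketch (hypothetical values: a 2x scale and a +10 shift along x, in the row-vector convention used in this notebook):

```python
import numpy as np

# Scaling then translating is not the same as translating then scaling.
S = np.diag([2.0, 2.0, 1.0])   # scale by 2 in x and y
T = np.identity(3)
T[2, 0] = 10.0                 # translate by 10 in x (row-vector convention)
v = np.array([1.0, 0.0, 1.0])  # the point (1, 0) with w = 1

scale_then_translate = np.dot(np.dot(S, T).T, v)  # x = 1*2 + 10 = 12
translate_then_scale = np.dot(np.dot(T, S).T, v)  # x = (1 + 10)*2 = 22
print(scale_then_translate[0], translate_then_scale[0])
```

In the second case the translation gets scaled along with the point, which is exactly the artifact we want to avoid by scaling first.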
```
cube_vertices = []
for x in [-1, 1]:
    for y in [-1, 1]:
        cube_vertices.append(np.array([x, y, 1]))
cube_vertices
```
### Scaling
To scale the homogeneous coordinate of the vertex $\vec{v}$ we multiply it with a scaling matrix. This will set the width $w$ and the height $h$.
$$ M_{s} = \begin{bmatrix}w & 0 & 0 \\ 0 & h & 0 \\ 0 & 0 & 1\end{bmatrix} $$
```
scaleMatrix = np.identity(3)
scaleMatrix[0,0] = 20
scaleMatrix[1,1] = 20
scaleMatrix
def create_scaling_matrix(w, h):
    # Build a fresh matrix instead of mutating the global scaleMatrix.
    m = np.identity(3)
    m[0,0] = w
    m[1,1] = h
    return m
```
### Rotation
To rotate the homogeneous coordinate of the vertex we use a rotation matrix.
$$ M_{r} = \begin{bmatrix}\cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1\end{bmatrix} $$
```
def create_rotation_matrix(angle):
    m = np.zeros([3,3])
    m[0, 0] = cos(angle)
    m[0, 1] = -sin(angle)
    m[1, 0] = sin(angle)
    m[1, 1] = cos(angle)
    m[2, 2] = 1
    return m
rotationMatrix = create_rotation_matrix(0)
rotationMatrix
```
### Translation
To move the homogeneous coordinate of the vertex we use the extra column in our matrix.
$$ M_{t} = \begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ x & y & 1\end{bmatrix} $$
Because $w=1$ in the coordinate $[x,y,w=1]$, this matrix will add the $x$ and $y$ positions.
```
translationMatrix = np.identity(3)
translationMatrix[2,0] = 100
translationMatrix[2,1] = 100
translationMatrix
def create_translation_matrix(x, y):
    translationMatrix = np.identity(3)
    translationMatrix[2,0] = x
    translationMatrix[2,1] = y
    return translationMatrix
```
### Applying all the transformations
```
M = np.dot(scaleMatrix, rotationMatrix)
np.dot(M, translationMatrix)
```
To get the transformed coordinates we multiply it with all the matrices.
$$\vec{v_t} = \left(M_s \cdot M_r \cdot M_t\right) \cdot \vec{v}$$
```
scene = Drawing(400, 300)
transformed_vertices = []
rotationMatrix = create_rotation_matrix(pi/4)
# Multiply all the matrices to combine the transformations.
M = np.dot(scaleMatrix, rotationMatrix)
transform = np.dot(M, translationMatrix)
for vertex in cube_vertices:
    # Apply the combined transformation to the vertex.
    translatedVertex = np.dot(transform.T, vertex)
    transformed_vertices.append(translatedVertex)
for i in range(len(transformed_vertices)):
    x = transformed_vertices[i][0]
    y = transformed_vertices[i][1]
    scene.SetColor(255,0,0)
    scene.DrawPoint(x, y, 3)
scene.img
```
## Meshes
An object consists of multiple vertices. To move them all around at once, we represent them as a mesh. The mesh will have a position, scale, and rotation. This will be applied to all the vertices with the transformation matrices.
```
class Mesh():
    def __init__(self, name):
        self.name = name
        self.vertices = []
        self.edges = []
        self.position = np.zeros([1,2])
        self.scale = 1
        self.rotation = 0
def create_cube(name, x, y, scale, rotation):
    mesh = Mesh(name)
    mesh.vertices.append(np.array([-1,  1]))
    mesh.vertices.append(np.array([ 1,  1]))
    mesh.vertices.append(np.array([ 1, -1]))
    mesh.vertices.append(np.array([-1, -1]))
    mesh.edges.append([3,0])
    mesh.edges.append([0,1])
    mesh.edges.append([1,2])
    mesh.edges.append([2,3])
    mesh.position = np.array([x,y])
    mesh.scale = scale
    mesh.rotation = rotation
    return mesh
def draw_meshes(meshes, w, h):
    scene = Drawing(w, h)
    scene.SetColor(255, 0, 0)
    for mesh in meshes:
        scaleMatrix = create_scaling_matrix(mesh.scale, mesh.scale)
        rotationMatrix = create_rotation_matrix(mesh.rotation)
        translationMatrix = create_translation_matrix(mesh.position[0], mesh.position[1])
        translatedVertices = []
        scene.SetColor(255, 0, 0)
        for vertex in mesh.vertices:
            # Scale, then rotate, then translate -- in that order.
            scaledVertex = np.dot(scaleMatrix.T, np.array([vertex[0], vertex[1], 1]))
            rotatedVertex = np.dot(rotationMatrix.T, scaledVertex)
            translatedVertex = np.dot(translationMatrix.T, rotatedVertex)
            translatedVertices.append(translatedVertex)
            scene.DrawPoint(translatedVertex[0], translatedVertex[1], 2)
        scene.SetColor(100, 100, 100)
        for (a,b) in mesh.edges:
            scene.DrawBLine(translatedVertices[a][0], translatedVertices[a][1], translatedVertices[b][0], translatedVertices[b][1])
    return scene.img
```
### Scaling & translation
```
meshes = []
for i in range(4):
    meshes.append(create_cube('sqr_{}'.format(i), 100 * (i + 1) + 20 + (2*i), 100, 20 + (5*i), 0))
draw_meshes(meshes, 565, 200)
```
### Rotation
```
meshes = []
for i in range(6):
    meshes.append(create_cube('sqr_{}'.format(i), 100 * (i + 1), 100, 20, i*pi/10))
draw_meshes(meshes, 700, 200)
```
## Geometry
Now let's take a different route. With geometry we can trace out rays from a point and see if they hit anything in the scene. We will first create a point with coordinates $(p_x, p_y)$.
```
# Coordinates of our point (px, py)
px = py = 100
```
We also create a circle with coordinates $(c_x, c_y)$ and a radius $c_r$.
```
# Coordinates and radius of our circle (cx, cy, cr)
cx = 500
cy = 100
cr = 50
scene = Drawing(600, 200)
scene.DrawPoint(px, py, 5)
scene.DrawCircle(cx, cy, cr)
scene.DrawPoint(cx, cy, 2)
scene.img
```
Now we place a line with two points in front of the point, obstructing the view of the circle. The coordinates of the two points that define the line are: $A=(a_x, a_y), B=(b_x, b_y)$.
```
scene.SetColor(100, 100, 100)
ax = 150
ay = 125
bx = 150
by = 75
scene.DrawBLine(ax, ay, bx, by)
scene.img
```
Now we grab a point $Q$ on our line, and draw a line from $P$ to $Q$.
```
qx = ax
qy = by + 3 * (ay - by) / 5
scene.SetColor(255, 0, 0)
scene.DrawPoint(qx, qy, 3)
scene.DrawBLine(px, py, qx, qy)
scene.img
```
Now we can get the formula for the line $l$, and find the $y$ on the right end of the screen.
$$y=\frac{\Delta y}{\Delta x}(x-p_x)+p_y$$
```
lm = (qy - py) / (qx - px)
ly = lm * (600 - px) + py
scene.DrawPoint(600-2, ly, 4)
scene.SetColor(255, 150, 150)
scene.DrawLine(qx, qy, 600, ly)
scene.img
```
We can see that we intersect the circle twice, but we need to actually check this mathematically. To do this we will draw a line $k$ that is perpendicular to $l$ and goes through the center of the circle. Then we check whether the distance between the line and the point is smaller than the radius of the circle; if so, we have hit it.
To get the slope of line $k$ we take the normal of $l$, which means that $k\perp l$. This is done with:
$$k = \dfrac{\Delta y}{\Delta x} \xrightarrow{\text{normal}} k_n = -\dfrac{\Delta x}{\Delta y}$$
```
km = -(qx - px) / (qy - py)
km
```
If we use the slope $k_m$ and simply add our point $(c_x, c_y)$ we have a line that is perpendicular to $l$ and goes through the circle. Now to find the top point $R$ in the image (where $y=200$, because the height is $200$) we simply solve for $x$ with a given $y$:
$$ \begin{align} y &= k_m(x - c_x) + c_y \\ k_m\cdot x &= k_m\cdot c_x - c_y + y \\ x &= \dfrac{k_m\cdot c_x - c_y + y}{k_m}\end{align}$$
To draw the line in the image we need to find the $x$ value that corresponds to the $y$ value at the top of our screen. We can figure this out with the equation above by setting $y=200$.
```
ry = 200
rx = (km * cx - cy + ry) / km
rx, ry
scene.DrawPoint(rx, ry-1, 2)
scene.img
```
For the bottom point we use the same formula and set $y=0$.
```
qy = 0
qx = (km * cx - cy + qy) / km
qx, qy
```
And finally draw a line between them.
```
scene.SetColor(255, 200, 200)
scene.DrawPoint(qx, qy, 2)
scene.DrawBLine(rx, ry, qx, qy)
scene.img
```
We can see that the line $k$ we drew is perpendicular to $l$ and goes through $C$. Now we need to figure out the point $I$ where the lines $l$ and $k$ intersect. We simply set $l=k$ and solve for $x$.
$$\begin{align} l &= k \\ l_m(x-p_x)+p_y &= k_m(x-q_x)+q_y \\ l_m\cdot x-l_m \cdot p_x + p_y &= k_m\cdot x - k_m\cdot q_x + q_y \\ l_m\cdot x - k_m \cdot x &= l_m\cdot p_x - p_y - k_m \cdot q_x + q_y \end{align}$$
Now we factor out $x$ on the left side, and divide by $l_m - k_m$ to isolate $x$.
$$x = \dfrac{l_m \cdot p_x - p_y - k_m \cdot q_x + q_y}{l_m - k_m}$$
We can now use this formula to get the $x$ value where the two lines intersect.
```
ix = (lm * px - py - km * qx + qy) / (lm - km)
ix
```
Fill in the $x$ value into any of those lines to get the $y$ value.
```
iy = lm * (ix - px) + py
iy
```
Finally we connect them with a blue line.
```
scene.SetColor(0, 100, 255)
scene.DrawPoint(ix, iy, 4)
scene.DrawPoint(cx, cy, 4)
scene.DrawBLine(ix, iy, cx, cy)
scene.img
```
Now we can calculate the length of the blue line with $x=\sqrt{(c_x-i_x)^2+(c_y-i_y)^2}$.
```
length = sqrt((cx - ix)**2 + (cy - iy)**2)
length
```
To finally conclude if we hit the circle, we check if this distance is smaller than the radius of the circle.
```
length < cr
```
We have hit the circle! Here is an image with annotations of all the points and lines we have so far.

## Vector Geometry
To make all the calculations a lot easier we are going to use vector geometry. This will also help us find out where our line intersects the circle.
We are going to recreate the scene but this time we are going to use vectors. Also, we are going to add a few classes that will help us keep track of all our points and lines.
### Points
A point is a row vector $[x, y]$.
```
def create_point(x, y):
    return np.array([x, y])
def distance(p, q):
    return sqrt((q[0] - p[0])**2 + (q[1] - p[1])**2)
def normal(v):
    return np.array([v[1], -v[0]])
```
### Lines
A line has a position which is a point, and a direction which is a row vector. Lines are created between two points.
```
class Line():
    def __init__(self, pos, direction):
        self.position = np.array([pos[0], pos[1]])
        self.direction = np.array([direction[0], direction[1]])
    def __repr__(self):
        return 'Line (position: {}, direction: {})'.format(self.position, self.direction)
    # Find the slope of the line.
    def Slope(self):
        if self.direction[0] == 0: return 0
        return self.direction[1] / self.direction[0]
    # Find the y value of a point on the line for a given value of x.
    def PointY(self, x):
        return self.Slope() * (x - self.position[0]) + self.position[1]
    # Find the x value of a point on the line for a given value of y.
    def PointX(self, y):
        m = self.Slope()
        return (y + m * self.position[0] - self.position[1]) / m
def create_line(p1, p2):
    return Line(p1, p2 - p1)
```
#### Intersection between lines
To find where two lines intersect we calculate the intersection and return them as a tuple $(x,y)$.
```
def line_intersect(l, k):
    lm = l.Slope()
    km = k.Slope()
    x = (lm * l.position[0] - l.position[1] - km * k.position[0] + k.position[1]) / (lm - km)
    return create_point(x, l.PointY(x))
```
### Circles
A circle has a position which is a point and a radius $r$ which is a float.
```
class Circle():
    def __init__(self, pos, radius, color = (255, 255, 255)):
        self.position = pos
        self.radius = radius
        self.color = color
        self.diffusion = 1
        self.reflection = 1
def create_circle(p, r):
    return Circle(p, r)
```
### Our scene revisited
We will now reconstruct the scene with the classes from above.
```
P = create_point(100, 100)
A = create_point(150, 125)
B = create_point(150, 75)
Q = create_point(A[0], (A[1] + B[1] + 10) / 2)
C = create_point(500, 100)
circle = create_circle(C, 75)
P, Q, C
pointWidth = 4
scene = Drawing(600, 200)
scene.SetColor(0, 0, 255)
scene.DrawBLine(P[0], P[1], Q[0], Q[1])
scene.SetColor(255,255,255)
scene.DrawPoint(P[0], P[1], pointWidth)
scene.SetColor(100, 100, 100)
scene.DrawBLine(A[0], A[1], B[0], B[1])
scene.SetColor(255,255,255)
scene.DrawPoint(Q[0], Q[1], pointWidth)
scene.DrawPoint(C[0], C[1], pointWidth)
scene.DrawCircle(C[0], C[1], circle.radius)
scene.img
PQ = create_line(P, Q)
PQ
scene.SetColor(255, 0, 0)
scene.DrawBLine(Q[0], Q[1], 600, PQ.PointY(600))
scene.img
PQ
normal(PQ.direction)
k = create_line(C, 0)
k.direction = normal(PQ.direction)
k
scene.SetColor(100, 100, 100)
scene.DrawBLine(k.PointX(200), 200, k.PointX(0), 0)
scene.img
I = line_intersect(PQ, k)
I
scene.SetColor(255, 255, 255)
scene.DrawPoint(I[0], I[1], pointWidth)
scene.img
d = distance(C, I)
d
```

```
thc = sqrt(circle.radius**2 - d**2)
thc
tca = create_line(P, I)
tca
kL = Line(np.array([k.position[0], k.position[1]]), k.direction)
kL.position[0] -= thc
kR = Line(k.position, k.direction)
kR.position[0] += thc
k, kL, kR
scene.SetColor(50, 50, 50)
scene.DrawBLine(kL.PointX(200), 200, kL.PointX(0), 0)
scene.DrawBLine(kR.PointX(200), 200, kR.PointX(0), 0)
P1 = line_intersect(PQ, kL)
P2 = line_intersect(PQ, kR)
scene.SetColor(0, 255, 0)
scene.DrawBLine(C[0], C[1], P1[0], P1[1])
scene.DrawBLine(P2[0], P2[1], C[0], C[1])
scene.DrawBLine(P1[0], P1[1], P2[0], P2[1])
scene.DrawBLine(C[0], C[1], I[0], I[1])
scene.SetColor(255, 255 ,255)
scene.DrawPoint(P1[0], P1[1], pointWidth)
scene.DrawPoint(P2[0], P2[1], pointWidth)
scene.img
```
## 2D Raytracing
### Vec2
This class implements vector properties.
* Position $x, y$
* Normal vector
* Length of the vector
* Vector addition/subtraction
* Vector multiplication
* Vector dot product
```
class Vec2():
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def normal(self):
        return Vec2(-self.y, self.x)
    def length(self):
        return sqrt(self.x**2 + self.y**2)
    def __add__(self, v):
        if type(v) is Vec2: return Vec2(self.x + v.x, self.y + v.y)
        elif type(v) in [int, float]: return Vec2(self.x + v, self.y + v)
        else: raise ValueError('Type mismatch \'{}\' for vector addition.'.format(type(v)))
    def __sub__(self, v):
        if type(v) is Vec2: return Vec2(self.x - v.x, self.y - v.y)
        elif type(v) in [int, float]: return Vec2(self.x - v, self.y - v)
        else: raise ValueError('Type mismatch \'{}\' for vector subtraction.'.format(type(v)))
    def __mul__(self, v):
        # Only scalar multiplication is supported.
        if type(v) in [int, float]: return Vec2(self.x * v, self.y * v)
        else: raise ValueError('Type mismatch \'{}\' for vector-scalar multiplication.'.format(type(v)))
    __rmul__ = __mul__
    def __repr__(self):
        return '({}, {})'.format(self.x, self.y)
    def dot(self, v):
        return self.x * v.x + self.y * v.y
    def transform(self, m):
        v = np.array([self.x, self.y, 1])
        tv = np.dot(m, v)
        return Vec2(tv[0], tv[1])
```
### Camera
The camera has all the properties for finding rays from our view port.

```
class Camera():
    def __init__(self, x, y, filmSize, focalLength):
        self.position = Vec2(x, y)
        self.filmSize = filmSize
        self.focalLength = focalLength
        self.focalPoint = Vec2(x - focalLength, y)
        self.filmTopPoint = Vec2(x, y + filmSize / 2)
        self.filmBottomPoint = Vec2(x, y - filmSize / 2)
```
### Ray
A ray has a position $(x, y)$ and a direction, which is also a vector. To get the ray's endpoint, add the direction to the position.
```
class Ray():
    def __init__(self, position, direction):
        self.position = Vec2(position.x, position.y)
        self.direction = Vec2(direction.x, direction.y)
    def __repr__(self):
        return 'Ray - P: {}, D: {}'.format(self.position, self.direction)
```
### Drawing helpers
A few helpers for drawing vectors, lines, and circles in a given color.
```
red   = (255, 0  , 0  )
green = (0  , 255, 0  )
blue  = (0  , 0  , 255)
gray  = (100, 100, 100)
white = (255, 255, 255)
def draw_vec(scene, v, color):
    originalColor = scene.color
    scene.SetColor(color[0], color[1], color[2])
    scene.DrawPoint(v.x, v.y, 2)
    scene.SetColor(originalColor[0], originalColor[1], originalColor[2])
def draw_line(v, u, color):
    originalColor = scene.color
    scene.SetColor(color[0], color[1], color[2])
    scene.DrawBLine(v.x, v.y, u.x, u.y)
    scene.SetColor(originalColor[0], originalColor[1], originalColor[2])
def draw_circle(position, r, color):
    originalColor = scene.color
    scene.SetColor(color[0], color[1], color[2])
    scene.DrawCircle(position.x, position.y, r)
    scene.DrawPoint(position.x, position.y, 5)
    # Restore the previous drawing color.
    scene.SetColor(originalColor[0], originalColor[1], originalColor[2])
```
### Trace
The main function of the algorithm. Here we trace the ray $\vec{PQ}$ that is given as input, test it against all the circles for intersections, and return the correct color.

* $P$ is `ray.position`
* $Q$ is `ray.position + ray.direction`
* $I$ is `intersectionPoint`
* $C$ is `circle.position`
* $r$ is `circle.radius`
We want to find the closest point $S$ and the color of the circle.
* First we want to create a line $k \perp l$ through $C$.
* Find point $I$, this is where the lines $k$ and $l$ intersect.
* With points $C$ and $I$ we can find length $d$ which is $||\ \vec{CI}\ ||$ (or `(I - C).length()`).
* $d > r$: ray does not intersect with the circle, we are done.
* $d < r$: ray goes through the circle, find the points $S$ and $S'$.
* Find the color of the circle and save it if the previous color is found further away.
* Find $h$ which is $\sqrt{r^2-d^2}$ by Pythagorean theorem.
* Create a new line $k_l$ from $k$ which is also perpendicular to $l$ and goes through $(k_{lx} - h, k_{ly})$.
* Find $S$ which is the intersect of $l$ and $k_l$.
* Do the same for $S'$ by finding $k_r$.
* Calculate the distance for $||\ \vec{PS}\ ||$ and $||\ \vec{PS'}\ ||$.
* Return the closest point $S$.
```
def trace(scene, ray, showTraces = False):
    resultColor = (0, 0, 0)
    resultDistance = scene.width
    for circle in circles:
        #if showTraces:
        #    screenEdge = Vec2(800, solve_line_y(ray, 800))
        # Together all the rays form the grey camera triangle in the scene.
        draw_line(ray.position, ray.position + ray.direction, gray)
        # Calculate the line k that is perpendicular to our ray l and goes
        # through the circle's center C.
        k = Ray(circle.position, ray.direction.normal())
        intersectionPoint = intersect(ray, k)
        if intersectionPoint == False: continue
        # Find the distance d between the intersection point of our ray and the normal
        # and the center of the circle. If this is smaller than the circle's radius, we have hit it.
        d = (intersectionPoint - circle.position).length()
        # Calculate a color for this circle (see the gradient colored circles section).
        cvalue = int(255 - 255 * 0.7 / circle.radius * d)
        color = (0, 0, 0)
        if circle.color == red  : color = (cvalue, 0, 0)
        if circle.color == green: color = (0, cvalue, 0)
        if circle.color == blue : color = (0, 0, cvalue)
        # If the distance is smaller than the radius it is a hit.
        if d < circle.radius:
            # Find h which is the distance from the circle edge S to the intersection point I.
            h = sqrt(circle.radius**2 - d**2)
            # Create a new line that is normal to the ray and goes through I-h.
            kL = Ray(k.position, k.direction)
            kL.position.x -= h
            # Find where they intersect to get the left point on the circle where the
            # line intersected.
            SL = intersect(ray, kL)
            # Same trick, but now for the right point; this is I+h.
            kR = Ray(k.position, k.direction)
            kR.position.x += h
            # Find where they intersect to get the right intersection point.
            SR = intersect(ray, kR)
            # Find the closest point.
            S = SL if (ray.position - SL).length() < (ray.position - SR).length() else SR
            if showTraces:
                #draw_vec(scene, intersectionPoint, color)
                #draw_line(circle.position, intersectionPoint, color)
                draw_vec(scene, S, color)
                draw_line(intersectionPoint, ray.position + ray.direction, color)
        # Save the color of the closest circle to our origin which we have hit.
        if d < circle.radius and d < resultDistance:
            resultColor = color
            resultDistance = d
    return resultColor
```
### Intersect
Used to find the point where the two lines $k$ and $l$ intersect. We simply put $l=k$ and solve for $x$.
$$\begin{align} l &= k \\ l_m(x-p_x)+p_y &= k_m(x-q_x)+q_y \\ l_m\cdot x-l_m \cdot p_x + p_y &= k_m\cdot x - k_m\cdot q_x + q_y \\ l_m\cdot x - k_m \cdot x &= l_m\cdot p_x - p_y - k_m \cdot q_x + q_y \end{align}$$
Now we factor out $x$ on the left side, and divide by $l_m - k_m$ to isolate $x$.
$$x = \dfrac{l_m \cdot p_x - p_y - k_m \cdot q_x + q_y}{l_m - k_m}$$
We can now use this formula to get the $x$ value where the two lines intersect. Finally we use the value to find the $y$ coordinate by plugging in the found $x$ into any of the two lines.
```
def intersect(k, l):
    m = k.direction.y / k.direction.x
    # Special case: l is a vertical line (this assumes l is the normal of a horizontal k).
    if l.direction.x == 0: return Vec2(l.position.x, k.position.y)
    n = l.direction.y / l.direction.x
    if m - n == 0: return False
    x = (m * k.position.x - n * l.position.x + l.position.y - k.position.y) / (m - n)
    y = solve_line_y(k, x) if k.direction.y != 0 else k.position.y
    return Vec2(x, y)
```
### Finding y for a given x (ray)
Solving a simple line equation for $y$.
$$ y = m(x-x_0)+y_0$$
```
def solve_line_y(ray, x):
    m = ray.direction.y / ray.direction.x
    return m * x - m * ray.position.x + ray.position.y
```
### Finding x for a given y (ray)
Solving with the inverse function for $x$.
$$ \begin{align} y &= m(x-x_0)+y_0 \\ y &= mx-mx_0+y_0 \\ mx &= y + mx_0-y_0 \\ x &= \dfrac{y+mx_0-y_0}{m} \end{align}$$
```
def solve_line_x(ray, y):
    m = ray.direction.y / ray.direction.x
    return (y + m * ray.position.x - ray.position.y) / m
```
### Gradient colored circles
To get nicer colors we can vary the light brightness with distance. We start from the inverse-square law, rescale the result to the interval $[0, 255]$, and finally compensate for circles with different diameters by scaling by the radius $r$. The brightness can be tuned with `reduced`: the higher its value, the dimmer the circle appears.
$$ \text{color brightness} = 255 - \dfrac{255 \cdot \text{reduced}}{r} \sqrt{(q_x-p_x)^2+(q_y-p_y)^2}$$
```
def draw_gradient_circle(scene, x, y, r, reduced, color = red):
    P = Vec2(x, y)
    # S is the bottom-left corner of the bounding box around the circle.
    S = Vec2(x - r, y - r)
    for i in range(r*2):
        for j in range(r*2):
            Q = Vec2(S.x + i, S.y + j)
            PQ = Q - P
            d = PQ.length()
            if d > r: continue
            if color == red:   scene.SetColor(int(255 - 255 * reduced / r * d), 0, 0)
            if color == green: scene.SetColor(0, int(255 - 255 * reduced / r * d), 0)
            if color == blue:  scene.SetColor(0, 0, int(255 - 255 * reduced / r * d))
            scene.DrawPoint(Q.x, Q.y)
scene = Drawing(600, 200)
reduced = 1
draw_gradient_circle(scene, 100, 100, 100, reduced, red)
draw_gradient_circle(scene, 300, 100, 100, reduced, green)
draw_gradient_circle(scene, 500, 100, 100, reduced, blue)
scene.img
scene.img
```
To give an idea of how `reduced` behaves, here it is over a range of values.
```
scene = Drawing(800, 100)
for i in range(10):
    reduced = 0.1 * (i + 5)
    draw_gradient_circle(scene, 50 + (75 * i), 50, 25, reduced, green)
    print('Light {}: reduced = {}'.format(i, round(reduced * 10) / 10))
scene.img
```
### Main loop
Here we loop over all the points on our film; each point, together with the focal point, defines a ray. The ray is then passed to `trace`, which tests it against all the circles.
```
# Viewport (this will be our coordinate system with x: [0, w-1] and y: [0, h-1] starting in the bottom left corner)
width = 800
height = 400
# Settings for the camera. This should be moved into the camera class sometime. The
# camera is centered on the midpoint between the film top and bottom. The focal point
# is moved into the -x direction. The colors of the objects that are found with
# the ray tracing are printed onto the film. The film resolution 'filmSize' will
# determine how many rays we are going to trace.
filmSize = 200
focalLength = 150
camera = Camera(200, 200, filmSize, focalLength)
# Remember the matrices? Let's rotate the camera with them.
#
# (Bug: the film is drawn from the bottom film point up into the y direction until y+focalLength;
# instead, it should interpolate between the two points and return 'focalLength' amount of points.
#
# Also, the rotation goes horribly wrong because we have already scaled and translated by drawing it
# directly. To fix this it must scale while it is still centered on the origin, then
# we rotate it, and finally translate to the correct position.)
#rotationMatrix = create_rotation_matrix(0)
#focalPoint = focalPoint.transform(rotationMatrix)
#filmTopPoint = filmTopPoint.transform(rotationMatrix)
#filmBottomPoint = filmBottomPoint.transform(rotationMatrix)
# Drawing two scenes. The first is our ray tracing scene, and the second is our film.
scene = Drawing(width, height)
scene2 = Drawing(camera.filmSize, 20)
# Add three differently colored and placed circles to the scene.
circles = [Circle(Vec2(600, 200), 100, red), Circle(Vec2(350, 100), 25, green), Circle(Vec2(400, 325), 50, blue)]
# Draw our camera into the scene.
draw_vec(scene, camera.focalPoint, white)
draw_vec(scene, camera.filmTopPoint, white)
draw_vec(scene, camera.filmBottomPoint, white)
draw_line(camera.filmTopPoint, camera.filmBottomPoint, gray)
# Draw all the circles into the scene.
for circle in circles:
    draw_gradient_circle(scene, circle.position.x, circle.position.y, circle.radius, 0.7, circle.color)
    # Alternatively: draw_circle(circle.position, circle.radius, circle.color)
# For every point on the film we want to trace a ray.
for i in range(camera.filmSize):
    # Create a point from the current pixel on our film.
    filmPoint = Vec2(camera.filmBottomPoint.x, camera.filmBottomPoint.y + i)
    # Find the ray between the focal point and the point on the film.
    ray = Ray(camera.focalPoint, filmPoint - camera.focalPoint)
    # Trace the ray to find the color for this pixel on the film.
    color = trace(scene, ray, i % 1 == 0)
    # If there is no object found.
    if color == False: continue
    # Set the color that is found by the ray.
    scene.SetColor(color[0], color[1], color[2])
    scene2.SetColor(color[0], color[1], color[2])
    # Draw the color on the camera film in the first scene for visual purposes.
    for j in range(3):
        scene.DrawPoint(camera.filmBottomPoint.x - j, camera.filmBottomPoint.y + i)
    # Draw the color onto the film in scene2 (and a larger copy below the camera in the first scene).
    for j in range(20):
        scene.DrawPoint(camera.filmBottomPoint.x - camera.focalLength + i, camera.filmBottomPoint.y - j - 50)
        scene2.DrawPoint(i, j)
scene.img
```
And without the tracing lines.

#### Projected film
This is what gets registered onto the film.
```
scene2.img
```
## Reflections
We will again start with a simple scene and trace a ray manually. This time we are going to find the angle of reflection for our ray. We can then trace again with this new ray, find another color, and blend it into the colors we have already found; this makes it a recursive algorithm.

To find the angle between two lines we use the fact that:
$$ \theta = \tan^{-1}\left(\dfrac{|\ \vec{l_d}\cdot\vec{m_d}|\ }{||\ \vec{l_d}\ ||\cdot||\ \vec{m_d}\ ||}\right) $$
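As an aside, once the surface normal at the hit point is known, the reflected direction can also be computed directly with the standard vector formula $\vec{r} = \vec{d} - 2(\vec{d}\cdot\vec{n})\,\vec{n}$. This is a sketch of that alternative, not the geometric construction used below:

```python
import numpy as np

# Reflect direction d about the unit normal n: r = d - 2*(d.n)*n.
def reflect(d, n):
    n = n / np.linalg.norm(n)   # make sure the normal has unit length
    return d - 2 * np.dot(d, n) * n

# A ray heading down-right bounces off a horizontal surface and heads up-right.
print(reflect(np.array([1.0, -1.0]), np.array([0.0, 1.0])))
```

Both approaches produce the same reflected ray; the geometric construction below is simply more instructive to draw.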
```
def find_angle(l, m):
    return arctan(abs(m.direction.dot(l.direction)) / (m.direction.length() * l.direction.length()))
scene = Drawing(600, 300)
# Circle
C = Vec2(500, 100)
Cr = 50
circle = Circle(C, Cr, red)
draw_gradient_circle(scene, circle.position.x, circle.position.y, circle.radius, 0.6, red)
#draw_circle(circle.position, circle.radius, red)
# Points P and Q
P = Vec2(50, 50)
Q = Vec2(30, 5)
draw_vec(scene, P, white)
l = Ray(P, Q)
draw_line(P, P + Q, red)
# Trace ray PQ
k = Ray(C, l.direction.normal())
I = intersect(l, k)
# Find intersection point S for the blue line with the circle
d = (C-I).length()
if d > Cr: raise ValueError('d > Cr: the line does not intersect the circle.')
h = sqrt(circle.radius**2 - d**2)
kL = Ray(k.position, k.direction)
kL.position.x -= h
S = intersect(l, kL)
draw_vec(scene, S, white)
# Draw right triangle with d, h, S
draw_line(P + Q, S, red)
draw_line(S, I, blue)
draw_line(I, C, blue)
CS = (S-C)
draw_vec(scene, CS + C, gray)
draw_line(CS + C, C, gray)
m = Ray(C, CS)
my = solve_line_y(m, 0)
draw_line(CS + C, Vec2(0, my), gray)
angle = find_angle(l, m)
print('angleLM Θ: {} rad / {} deg'.format(angle, angle * 180 / pi))
# Draw inner triangle construction
p = Ray(S + m.direction, m.direction.normal())
B1 = intersect(m, p)
B2 = intersect(p, l)
draw_vec(scene, B1, white)
draw_vec(scene, B2, white)
draw_line(B2, S + m.direction, white)
# Find B3 (green dot)
Bx = (B1 - B2).x + B1.x
By = solve_line_y(p, Bx)
B3 = Vec2(Bx, By)
draw_vec(scene, B3, green)
draw_line(B1, B3, green)
# The reflection line of l starts at S and goes through B3.
SB3 = Ray(S, B3 - S)
draw_line(SB3.position, SB3.position + SB3.direction, red)
draw_line(SB3.position + SB3.direction, Vec2(0, solve_line_y(SB3, 0)), red)
# Show that both angles are equal.
angleSB3 = find_angle(SB3, m)
print('angleSB3 Θ: {} rad / {} deg'.format(angleSB3, angleSB3 * 180 / pi))
# Draw normal line
normal = Ray(S, p.direction)
nL = Vec2(0, solve_line_y(normal, 0))
nR = Vec2(600, solve_line_y(normal, 600))
draw_line(nL, nR, (50, 50, 50))
scene.img
for angle in [i * pi / 2 for i in range(4)]:
print('angle Θ = {} => x = {}, y = {}'.format(round(angle*100)/100, round(cos(angle)), round(sin(angle))))
```
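As a sanity check on the geometric construction above, the reflected direction can also be computed in closed form as $\vec{r} = \vec{d} - 2(\vec{d}\cdot\hat{n})\hat{n}$, and the incidence and reflection angles compared with the arccos formula. This is a standalone NumPy sketch; the vectors are illustrative, not taken from the scene above.

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d about unit normal n: r = d - 2*(d.n)*n."""
    n = n / np.linalg.norm(n)
    return d - 2.0 * np.dot(d, n) * n

def find_angle(u, v):
    """Unsigned angle between two lines: theta = arccos(|u.v| / (|u| |v|))."""
    c = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, 0.0, 1.0))

d = np.array([1.0, -1.0])   # incoming ray direction
n = np.array([0.0, 1.0])    # surface normal at the hit point
r = reflect(d, n)           # -> [1., 1.]

# Law of reflection: angle of incidence equals angle of reflection (pi/4 here).
print(find_angle(d, n), find_angle(r, n))
```

This closed-form reflection is what the triangle construction in the scene computes geometrically.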
```
import csv
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
truncate_val = 180
bcbaseline_paths = ['/usr/local/google/home/abhishekunique/sim_franka/corl_data/20210616-00h38m-final-baselineb-bcwindow3-evalwithgoalreaching-2elements/test-0/test-0/06-16-dev-example-awac-script/06-16-dev-example-awac-script_2021_06_16_07_42_05_0000--s-39287/progress.csv',
'/usr/local/google/home/abhishekunique/sim_franka/corl_data/20210616-00h38m-final-baselineb-bcwindow3-evalwithgoalreaching-2elements/test-1/test-1/06-16-dev-example-awac-script/06-16-dev-example-awac-script_2021_06_16_07_41_17_0000--s-47878/progress.csv',
'/usr/local/google/home/abhishekunique/sim_franka/corl_data/20210616-00h38m-final-baselineb-bcwindow3-evalwithgoalreaching-2elements/test-2/test-2/06-16-dev-example-awac-script/06-16-dev-example-awac-script_2021_06_16_07_42_13_0000--s-20347/progress.csv']
overall_vals_bcbaseline = [[] for _ in range(len(bcbaseline_paths))]
for j, path in enumerate(bcbaseline_paths):
with open(path, mode='r') as csv_file:
csv_reader = csv.DictReader(csv_file)
line_count = 0
for row in csv_reader:
if line_count == 0:
print(f'Column names are {", ".join(row)}')
line_count += 1
overall_vals_bcbaseline[j].append(float(row['eval_exhaustivess/overall_eval']))
line_count += 1
print(f'Processed {line_count} lines.')
for j, ova in enumerate(overall_vals_bcbaseline):
ova = np.asarray(ova[:truncate_val])
overall_vals_bcbaseline[j] = ova
overall_vals_bcbaseline = np.array(overall_vals_bcbaseline)
import csv
graphsearch_paths = ['/usr/local/google/home/abhishekunique/sim_franka/simplified_awac/20210617-06h42m-dontupdategraph-hailmary-2elementsim-jun17/test-0/test-0/06-17-dev-example-awac-script/06-17-dev-example-awac-script_2021_06_17_13_47_28_0000--s-538/progress.csv',
'/usr/local/google/home/abhishekunique/sim_franka/simplified_awac/20210617-06h42m-dontupdategraph-hailmary-2elementsim-jun17/test-1/test-1/06-17-dev-example-awac-script/06-17-dev-example-awac-script_2021_06_17_13_47_44_0000--s-50593/progress.csv',
'/usr/local/google/home/abhishekunique/sim_franka/simplified_awac/20210617-06h42m-dontupdategraph-hailmary-2elementsim-jun17/test-2/test-2/06-17-dev-example-awac-script/06-17-dev-example-awac-script_2021_06_17_13_47_18_0000--s-28179/progress.csv']
overall_vals_graphsearch = [[] for _ in range(len(graphsearch_paths))]
for j, path in enumerate(graphsearch_paths):
with open(path, mode='r') as csv_file:
csv_reader = csv.DictReader(csv_file)
line_count = 0
for row in csv_reader:
if line_count == 0:
print(f'Column names are {", ".join(row)}')
line_count += 1
overall_vals_graphsearch[j].append(float(row['eval_exhaustivess/overall_eval']))
line_count += 1
print(f'Processed {line_count} lines.')
for j, ova in enumerate(overall_vals_graphsearch):
ova = np.asarray(ova[:truncate_val])
overall_vals_graphsearch[j] = ova
overall_vals_graphsearch = np.array(overall_vals_graphsearch)
import csv
nopretrain_paths = ['/usr/local/google/home/abhishekunique/sim_franka/simplified_awac/20210616-02h02m-final-baselinee-nopretraining-evalwithgoalreaching-2elements/test-0/test-0/06-16-dev-example-awac-script/06-16-dev-example-awac-script_2021_06_16_09_06_47_0000--s-4794/progress.csv',
'/usr/local/google/home/abhishekunique/sim_franka/simplified_awac/20210616-02h02m-final-baselinee-nopretraining-evalwithgoalreaching-2elements/test-1/test-1/06-16-dev-example-awac-script/06-16-dev-example-awac-script_2021_06_16_09_06_23_0000--s-31004/progress.csv',
'/usr/local/google/home/abhishekunique/sim_franka/simplified_awac/20210616-02h02m-final-baselinee-nopretraining-evalwithgoalreaching-2elements/test-2/test-2/06-16-dev-example-awac-script/06-16-dev-example-awac-script_2021_06_16_09_06_17_0000--s-34738/progress.csv']
overall_vals_nopretrain = [[] for _ in range(len(nopretrain_paths))]
for j, path in enumerate(nopretrain_paths):
with open(path, mode='r') as csv_file:
csv_reader = csv.DictReader(csv_file)
line_count = 0
for row in csv_reader:
if line_count == 0:
print(f'Column names are {", ".join(row)}')
line_count += 1
overall_vals_nopretrain[j].append(float(row['eval_exhaustivess/overall_eval']))
line_count += 1
print(f'Processed {line_count} lines.')
for j, ova in enumerate(overall_vals_nopretrain):
ova = np.asarray(ova[:truncate_val])
overall_vals_nopretrain[j] = ova
overall_vals_nopretrain = np.array(overall_vals_nopretrain)
overall_vals_graphsearch.shape
import seaborn as sns
sns.set_theme()
# for dapg_line in overall_vals_DAPG:
plt.plot(np.mean(np.array(overall_vals_bcbaseline), axis=0), color='r', marker='o', label='BC baseline')
plt.fill_between(range(truncate_val),
np.mean(np.array(overall_vals_bcbaseline), axis=0) - np.std(np.array(overall_vals_bcbaseline), axis=0),
np.mean(np.array(overall_vals_bcbaseline), axis=0) + np.std(np.array(overall_vals_bcbaseline), axis=0),
color='r', alpha=0.5)
# for sac_line in overall_vals_SAC:
plt.plot(np.mean(np.array(overall_vals_graphsearch), axis=0), color='b', marker='x', label='Graph Search')
plt.fill_between(range(truncate_val),
np.mean(np.array(overall_vals_graphsearch), axis=0) - np.std(np.array(overall_vals_graphsearch), axis=0),
np.mean(np.array(overall_vals_graphsearch), axis=0) + np.std(np.array(overall_vals_graphsearch), axis=0),
color='b', alpha=0.5)
# for awac_line in overall_vals_AWAC:
plt.plot(np.mean(np.array(overall_vals_nopretrain), axis=0), color='g', marker='+', label='No Pretraining')
plt.fill_between(range(truncate_val),
np.mean(np.array(overall_vals_nopretrain), axis=0) - np.std(np.array(overall_vals_nopretrain), axis=0),
np.mean(np.array(overall_vals_nopretrain), axis=0) + np.std(np.array(overall_vals_nopretrain), axis=0),
color='g', alpha=0.5)
plt.ylabel('Success Rate')
plt.xlabel('Training Steps (x1000)')
plt.title('Success Rate of Accomplishing Individual 1-Step Transitions')
plt.savefig("comparisons_resetfreetraining_edgecommparisons.png",bbox_inches="tight")
```
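The three nearly identical loading loops above can be collapsed into a single helper. This is a sketch, not part of the original notebook; the metric key and truncation length default to the values used in the cells above.

```python
import csv
import numpy as np

def load_runs(paths, key='eval_exhaustivess/overall_eval', truncate=180):
    """Read one metric column from each progress.csv and stack into a (runs, steps) array."""
    runs = []
    for path in paths:
        with open(path, mode='r') as csv_file:
            vals = [float(row[key]) for row in csv.DictReader(csv_file)]
        runs.append(np.asarray(vals[:truncate]))
    return np.array(runs)
```

With this helper, each block above reduces to e.g. `overall_vals_bcbaseline = load_runs(bcbaseline_paths)`.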
# Fine-mapping with PolyFun
## Aim
This notebook implements commands for [a functionally-informed fine-mapping workflow using the PolyFun method](https://github.com/omerwe/polyfun/wiki).
## Methods Overview
`PolyFun` offers the following features:
1. Using and/or creating Functional Annotations
2. Estimating Functional Enrichment Using `S-LDSC`
3. Using and/or Computing Prior Causal Probabilities from (1)
4. Functionally Informed Fine Mapping with Finemapper
5. Polygenic Localization with `PolyLoc`
**Notice: this workflow does not implement step 5, `PolyLoc`.**
## Input
1) GWAS summary statistics including the following variables:
- variant_id - variant ID
- P - p-value
- CHR - chromosome number
- BP - base pair position
- A1 - The effect allele (i.e., the sign of the effect size is with respect to A1)
- A2 - the second allele
- MAF - minor allele frequency
- BETA - effect size
- SE - effect size standard error
2) Functional annotation files including the following columns:
- CHR - chromosome number
- BP - base pair position (in hg19 coordinates)
- SNP - dbSNP reference number
- A1 - The effect allele
- A2 - the second allele
- Arbitrary additional columns representing annotations
3) A `.l2.M` white-space delimited file containing a single line with the sums of the columns of each annotation
4) LD-score files
- Strongly recommended that LD-score files include A1,A2 columns
5) LD information, taken from one of three possible data sources:
- plink files with genotypes from a reference panel
- bgen file with genotypes from a reference panel
- pre-computed LD matrix
(Optional if (4) is already obtained and there are no plans to compute prior causal probabilities non-parametrically.)
6) LD-score weight files.
- Strongly recommended that weight files include A1,A2 columns
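Before running the workflow, it can save a failed job to verify that the summary-statistics file carries the columns listed in (1). The helper below is a hypothetical sketch (PolyFun's own munging script performs stricter validation):

```python
import pandas as pd

REQUIRED_COLS = ['variant_id', 'P', 'CHR', 'BP', 'A1', 'A2', 'MAF', 'BETA', 'SE']

def check_sumstats_columns(path, sep='\t'):
    """Raise ValueError if the summary-statistics file lacks a required column."""
    cols = set(pd.read_csv(path, sep=sep, nrows=0).columns)  # header only
    missing = [c for c in REQUIRED_COLS if c not in cols]
    if missing:
        raise ValueError(f'summary statistics file is missing columns: {missing}')
```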
## Output
A `.gz` file containing input summary statistics columns and additionally the following columns:
- PIP - posterior causal probability
- BETA_MEAN - posterior mean of causal effect size (in standardized genotype scale)
- BETA_SD - posterior standard deviation of causal effect size (in standardized genotype scale)
- CREDIBLE_SET - the index of the first (typically smallest) credible set that the SNP belongs to (0 means none).
## Workflow
Step 1 and 2 are optional if using pre-computed prior causal probabilities
### Step 1: Obtain functional annotations
For each chromosome, the following files need to be obtained:
1) A `.gz` or `.parquet` annotations file containing the following columns:
- CHR - chromosome number
- BP - base pair position
- SNP - dbSNP reference number
- A1 - The effect allele
- A2 - the second allele
- Arbitrary additional columns representing annotations
2) A `.l2.M` white-space delimited file containing a single line with the sums of the columns of each annotation
3) (Optional) A `.l2.M_5_50` file, identical to the `.l2.M` file but containing only common SNPs (MAF between 5% and 50%)
The above files can be obtained either by using existing functional annotation files, or by creating your own through other software such as `TORUS`.
Example of existing functional annotation files: functional annotations for ~19 million UK Biobank imputed SNPs with MAF>0.1%, based on the baseline-LF 2.2.UKB annotations.
Download (30G): https://data.broadinstitute.org/alkesgroup/LDSCORE/baselineLF_v2.2.UKB.polyfun.tar.gz
### Step 2: Compute LD-scores for annotations
Precomputed LD-score files can be used. LD-score files can also be generated through the methods below:
#### Method 1: Compute with reference panel of sequenced individuals
The reference panel should have at least 3,000 sequenced individuals from the target population.
```
[global]
parameter: container = "/mnt/mfs/statgen/containers/xqtl_pipeline_sif/polyfun.sif"
parameter: wd = path("./")
parameter: exe_dir = "/usr/local/bin/"
parameter: name = "demo"
parameter: genoFile = path("./")
parameter: annot_file = path("./")
parameter: sumstats = path("./")
[ld_score]
input: annot_file, genoFile
output: f'{wd:a}/{name}.ref.ldscore.parquet'
task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '30G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout' , container = container
python $[exe_dir]/compute_ldscores.py \
--bfile $[_input[1]:n] \
--annot $[_input[0]] \
--out $[_output]
```
#### Method 2: Compute with pre-computed UK Biobank LD matrices
Matrices download: https://data.broadinstitute.org/alkesgroup/UKBB_LD
```
[ld_score_ukb]
input: annot_file
output: f'{wd:a}/{name}.ukb.ldscore.parquet'
task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '30G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout' , container = container
python $[exe_dir]/compute_ldscores_from_ld.py \
--annot $[_input[0]] \
--ukb \
--out $[_output]
```
#### Method 3: Compute with own pre-computed LD matrices
Own pre-computed LD matrices should be in `.bcor` format.
```
[ld_score_own]
parameter: sample_size = int
parameter: bcor_files = paths
input: annot_file,bcor_files
output: f'{wd:a}/{name}.original.ldscore.parquet'
task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '30G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout' , container = container
python $[exe_dir]/compute_ldscores_from_ld.py $[_input[1]] \
--annot $[_input[0]] \
--out $[_output] \
--n $[sample_size] \
```
### Step 3: Compute Prior Causal Probabilities
#### Method 1: Use precomputed prior causal probabilities
Use precomputed prior causal probabilities of 19 million imputed UK Biobank SNPs with MAF>0.1%, based on a meta-analysis of 15 UK Biobank traits.
```
[prior_causal_prob]
input: sumstats
output: f'{wd:a}/{name}.pcp.gz'
task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '30G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout', container = container
python $[exe_dir]/extract_snpvar.py \
--sumstats $[_input] \
--out $[_output] \
--allow-missing
```
#### Method 2: Compute via L2-regularized extension of S-LDSC (preferred)
Compute via an L2-regularized extension of stratified LD-score regression (S-LDSC). Use the annotation and LD-score files produced in Steps 1 and 2.
1) Create a munged summary statistics file in a PolyFun-friendly parquet format.
```
[munged_sumstats]
parameter: sample_size = 472868
parameter: min_info = 0.6
parameter: min_maf = 0.01
input: sumstats
output: f'{wd:a}/{name}.sumstats_munged.parquet'
task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '127G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout' , container = container
python $[exe_dir]/munge_polyfun_sumstats.py \
--sumstats $[_input] \
--n $[sample_size] \
--out $[_output] \
--min-info $[min_info] \
--min-maf $[min_maf]
```
2) Run PolyFun with L2-regularized S-LDSC
- Require at least 45 GB of mem
```
[L2_SLDSC]
# a ld score file with surfix l2.ldscore.parquet
parameter: ref_ld = path
# another ld score file with surfix l2.ldscore.parquet, different from ref_ld
parameter: ref_wgt = ref_ld
parameter: partitions = ""
input: ref_ld, ref_wgt,output_from("munged_sumstats")
# parameter: sumstat = _input[2]
output: f'{wd:a}/{name}.ldsldsc.parquent'
task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '127G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout' , container = container
python $[exe_dir]/polyfun.py \
--compute-h2-L2 \
--output-prefix $[_output] \
--sumstats $[_input[2]] \
--ref-ld-chr $[_input[0]:nnnn]. \
--w-ld-chr $[_input[1]:nnnn]. \
--allow-missing $["" if partitions else "--no-partitions"]
```
#### Method 3: Compute Non-parametrically
1) Create a munged summary statistics file in a PolyFun-friendly parquet format.
Duplicated cells are commented out; the input of `[ld_snpbin]` is the output from `[L2_SLDSC]`.
```
#[munged_sumstats2]
#parameter: sumstats = AD_sumstats_Jansenetal_2019sept.txt.gz
#parameter: sample_size = int
#parameter: container = none
#bash: container = container
# mkdir -p SLDSC_output
# python munge_polyfun_sumstats.py \
# --sumstats sumstats \
# --n sample_size \
# --out /SLDSC_output/sumstats_munged.parquet \
# --min-info 0 \
# --min-maf 0
```
2) Run PolyFun with L2-regularized S-LDSC
```
# [L2_regu_SLDSC2]
#
# parameter: container = none
# parameter: ref_ld = example_data/annotations.
# parameter: ref_wgt = example_data/weights.
# bash: container=container
# python polyfun.py \
# --compute-h2-L2 \
# --output-prefix output/testrun \
# --sumstats example_data/sumstats.parquet \
# --ref-ld-chr ref_ld \
# --w-ld-chr ref_wgt
```
3) Compute LD-scores for each SNP bin
```
[ld_snpbin]
depends: sos_step("L2_SLDSC")
parameter: chrom = int
input: annot_file, genoFile
output: f'{wd:a}/{name}.snpbin.ldscore.parquet'
task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '30G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout' , container = container
python $[exe_dir]/polyfun.py \
--compute-ldscores \
--bfile-chr $[_input[1]:n] \
--output-prefix $[_output] \
--chr $[chrom]
```
4) Re-estimate per-SNP heritabilities via S-LDSC
```
#[re_SLDSC]
#bash:
# python polyfun.py \
# --compute-h2-bins \
# --output-prefix output/testrun \
# --sumstats example_data/sumstats.parquet \
# --w-ld-chr example_data/weights.
[L2_SLDSC_bins]
parameter: ref_ld = path
parameter: ref_wgt = ref_ld
parameter: partitions = ""
input: ref_ld, ref_wgt,output_from("munged_sumstats")
output: f'{wd:a}/{name}.txt.gz'
task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '30G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout' , container = container
python $[exe_dir]/polyfun.py \
--compute-h2-bins \
--output-prefix $[_output] \
--sumstats $[_input[2]] \
--ref-ld-chr $[_input[0]]\
--w-ld-chr $[_input[1]]
```
### Step 4: Functionally informed fine mapping with finemapper
The input summary statistics file must have a `SNPVAR` column (per-SNP heritability) to perform functionally-informed fine-mapping. To fine-map without annotations, pass the additional flag `--non-funct`; the summary statistics file then does not require the `SNPVAR` column.
```
[fine_mapping]
parameter: sample_size = 383290
parameter: chrom = 1
parameter: start = 46000001
parameter: end = 49000001
#parameter: output_path = "output/finemap.1.46000001.49000001.gz"
parameter: max_num_causal = 5
input: genoFile,sumstats
output: f'{wd:a}/output/finemap.{chrom}.{start}.{end}.gz'
task: trunk_workers = 1, trunk_size = 1, walltime = '24h', mem = '30G', tags = f'{step_name}_{_output[0]:bn}'
bash: expand = "$[ ]", stderr = f'{_output}.stderr', stdout = f'{_output}.stdout' , container = container
python $[exe_dir]/finemapper.py \
--geno $[_input[0]:n] \
--sumstats $[_input[1]] \
--n $[sample_size] \
--chr $[chrom] \
--start $[start] \
--end $[end] \
--method susie \
--max-num-causal $[max_num_causal] \
--cache-dir $[_output:d]/cache \
--out $[_output]
```
## Minimal Working Example
### Example 1: Functionally-informed fine-mapping using summary statistics file with precomputed prior causal probabilities
```
nohup sos run ~/GIT/xqtl-pipeline/pipeline/integrative_analysis/SuSiE_Ann/polyfun.ipynb prior_causal_prob \
--sumstats /home/at3535/polyfun/AD_sumstats_Jansenetal_2019sept.txt.gz \
-J 200 -q csg \
-c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml &
nohup sos run ~/GIT/xqtl-pipeline/pipeline/integrative_analysis/SuSiE_Ann/polyfun.ipynb fine_mapping \
--sumstats /home/at3535/polyfun/AD_sumstats_Jansenetal_2019sept.txt.gz \
--genoFile /mnt/mfs/statgen/ROSMAP_xqtl/dataset/snvCombinedPlink/chr1.bed \
-J 200 -q csg \
-c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml &
```
### Example 2: Functionally-informed fine-mapping using summary statistics file generated from pre-obtained annotation and LD-score files
```
nohup sos run ~/GIT/xqtl-pipeline/pipeline/integrative_analysis/SuSiE_Ann/polyfun.ipynb munged_sumstats \
--sumstats /home/at3535/polyfun/GCST90012877_buildGRCh37_colrenamed.txt.gz \
-J 200 -q csg -c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml &
nohup sos run ~/GIT/xqtl-pipeline/pipeline/integrative_analysis/SuSiE_Ann/polyfun.ipynb L2_SLDSC \
--sumstats demo.sumstats_munged.parquet \
--ref_ld /mnt/mfs/statgen/tl3030/baselineLF2.2.UKB/baselineLF2.2.UKB.1.l2.ldscore.parquet \
-J 200 -q csg -c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml &
nohup sos run ~/GIT/xqtl-pipeline/pipeline/integrative_analysis/SuSiE_Ann/polyfun.ipynb fine_mapping \
--sumstats demo.ldsldsc.parquent \
--genoFile /mnt/mfs/statgen/ROSMAP_xqtl/dataset/snvCombinedPlink/chr1.bed \
-J 200 -q csg -c /home/hs3163/GIT/ADSPFG-xQTL/code/csg.yml &
```
### Summary
#### Example 1:
```
bash:
gzcat output/finemap.1.46000001.49000001.gz | head
import numpy as np
import pandas as pd
data = pd.read_csv('output/finemap.1.46000001.49000001.gz', sep="\t")
data.head(5)
num_var_cs = np.count_nonzero(data['CREDIBLE_SET'])
total_cs = len(data.CREDIBLE_SET.unique())- 1
avg_var_cs = float(num_var_cs) / total_cs
pip50 = sum(1 for i in data['PIP'] if i >0.5)
pip95 = sum(1 for i in data['PIP'] if i >0.95)
result = "Number of variants with PIP > 0.5: " + str(pip50) + "\n" + "Number of variants with PIP > 0.95: " + str(pip95) + "\n" \
+ "Number of variants that have credible sets: " + str(num_var_cs) + "\n" \
+ "Number of unique credible sets: " + str(total_cs) + "\n" \
+ "Average number of variants per credible set: " + str(avg_var_cs)
with open('results.txt', 'a') as the_file:
the_file.write(result)
with open('results.txt') as f:
contents = f.readlines()
print(contents)
```
#### Example 2:
```
bash:
gzcat output/finemap.1.46000001.49000001.gz | head
import numpy as np
import pandas as pd
data = pd.read_csv('output/finemap.1.46000001.49000001.gz', sep="\t")
data.head(5)
num_var_cs = np.count_nonzero(data['CREDIBLE_SET'])
total_cs = len(data.CREDIBLE_SET.unique())- 1
avg_var_cs = float(num_var_cs) / total_cs
pip50 = sum(1 for i in data['PIP'] if i >0.5)
pip95 = sum(1 for i in data['PIP'] if i >0.95)
result = "Number of variants with PIP > 0.5: " + str(pip50) + "\n" + "Number of variants with PIP > 0.95: " + str(pip95) + "\n" \
+ "Number of variants that have credible sets: " + str(num_var_cs) + "\n" \
+ "Number of unique credible sets: " + str(total_cs) + "\n" \
+ "Average number of variants per credible set: " + str(avg_var_cs)
with open('results.txt', 'a') as the_file:
the_file.write(result)
with open('results.txt') as f:
contents = f.readlines()
print(contents)
```
##### Copyright © 2020 The TensorFlow Authors.
<font size=-1>Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>
# Create a TFX pipeline using templates
## Introduction
This document will provide instructions to create a TensorFlow Extended (TFX) pipeline
using *templates* which are provided with TFX Python package.
Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.
You will build a pipeline using [Taxi Trips dataset](
https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew)
released by the City of Chicago. We strongly encourage you to try building
your own pipeline using your dataset by utilizing this pipeline as a baseline.
## Step 1. Set up your environment.
AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.
**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.
**NOTE:** There might be some errors during package installation. For example:
>"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment.
Install `tfx`, `kfp`, and `skaffold`, and add installation path to the `PATH` environment variable.
```
# Install tfx and kfp Python packages.
import sys
!{sys.executable} -m pip install --user --upgrade -q tfx==0.22.0
!{sys.executable} -m pip install --user --upgrade -q kfp==0.5.1
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
```
Let's check the versions of TFX.
```
!python3 -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
```
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).
Let's set some environment variables to use Kubeflow Pipelines.
First, get your GCP project ID.
```
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
```
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.
>**NOTE: You MUST set your ENDPOINT value below.**
```
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
```
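If you only have the full dashboard URL, the hostname portion can be extracted programmatically rather than by hand. This is a small sketch using the example URL quoted above:

```python
from urllib.parse import urlparse

def endpoint_from_url(url):
    """Return just the hostname portion of a KFP dashboard URL."""
    return urlparse(url).netloc

dashboard = 'https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start'
print(endpoint_from_url(dashboard))
# 1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com
```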
Set the image name as `tfx-pipeline` under the current GCP project.
```
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
```
And, it's done. We are ready to create a pipeline.
## Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
```
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"imported",PIPELINE_NAME)
```
TFX includes the `taxi` template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point.
The `tfx template copy` CLI command copies predefined template files into your project directory.
```
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
```
Change the working directory context in this notebook to the project directory.
```
%cd {PROJECT_DIR}
```
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created.
## Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).
Here is a brief introduction to each of the Python files.
- `pipeline` - This directory contains the definition of the pipeline
- `configs.py` — defines common constants for pipeline runners
- `pipeline.py` — defines TFX components and a pipeline
- `models` - This directory contains ML model definitions.
- `features.py`, `features_test.py` — defines features for the model
- `preprocessing.py`, `preprocessing_test.py` — defines preprocessing
jobs using `tf.Transform`
- `estimator` - This directory contains an Estimator based model.
- `constants.py` — defines constants of the model
- `model.py`, `model_test.py` — defines DNN model using TF estimator
- `keras` - This directory contains a Keras based model.
- `constants.py` — defines constants of the model
- `model.py`, `model_test.py` — defines DNN model using Keras
- `beam_dag_runner.py`, `kubeflow_dag_runner.py` — define runners for each orchestration engine
You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.
You can run unit tests by supplying the module name of test files with `-m` flag. You can usually get a module name by deleting `.py` extension and replacing `/` with `.`. For example:
```
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
```
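The path-to-module conversion rule described above (drop the `.py` extension, replace `/` with `.`) can be expressed as a small helper. This is an illustrative sketch, not part of the template:

```python
def module_name(test_path):
    """Convert a test-file path into the module name used with `python -m`."""
    if test_path.endswith('.py'):
        test_path = test_path[:-3]   # drop the .py extension
    return test_path.replace('/', '.')

print(module_name('models/features_test.py'))     # models.features_test
print(module_name('models/keras/model_test.py'))  # models.keras.model_test
```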
## Step 4. Run your first TFX pipeline
Components in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be `<your-project-id>-kubeflowpipelines-default`.
Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
```
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/data.csv
```
Let's create a TFX pipeline using the `tfx pipeline create` command.
>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
```
!tfx pipeline create \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
```
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.
A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.
NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.
Now start an execution run with the newly created pipeline using the `tfx run create` command.
```
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
```
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run.
However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.
For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.
>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard.
One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting).
## Step 5. Add components for data validation.
In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).
>**Double-click to change directory to `pipeline` and double-click again to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.
You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
```
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
```
### Check pipeline outputs
Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline.
## Step 6. Add components for training.
In this step, you will add components for training and model validation including `Transform`, `Trainer`, `ResolverNode`, `Evaluator`, and `Pusher`.
>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `ResolverNode`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`.)
As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
```
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
```
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!
## Step 7. (*Optional*) Try BigQueryExampleGen
[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.
>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function.
We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.
>**Double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the region value in this file with the correct values for your GCP project.
>**Note: You MUST set your GCP region in the `configs.py` file before proceeding.**
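As a rough, hypothetical sketch of what the relevant section of `configs.py` might look like after uncommenting — the project, region, bucket, and query below are placeholders you must replace, not values from this tutorial:

```python
# Hypothetical excerpt of pipeline/configs.py -- all values are placeholders.
GOOGLE_CLOUD_REGION = 'us-central1'

BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [
    '--project=my-gcp-project',           # GCP project that runs the BigQuery jobs
    '--temp_location=gs://my-bucket/tmp'  # staging area for intermediate files
]

BIG_QUERY_QUERY = """
    SELECT col_a, col_b
    FROM `my-gcp-project.my_dataset.my_table`
"""
```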
>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.
>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.
Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in steps 5 and 6.
```
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
```
## Step 8. (*Optional*) Try Dataflow with KFP
Several [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.
>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, and `DATAFLOW_BEAM_PIPELINE_ARGS`.
>**Double-click to open `pipeline.py`**. Change the value of `enable_cache` to `False`.
>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.
>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out current `beam_pipeline_args` that you added in Step 7.)
Note that we deliberately disabled caching. Because we have already run the pipeline successfully, every component would return its cached execution result if caching were enabled, and nothing new would run.
Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in steps 5 and 6.
```
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
```
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow).
Please reset `enable_cache` to `True` to benefit from caching execution results.
>**Double-click to open `pipeline.py`**. Reset the value of `enable_cache` to `True`.
## Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFP
TFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.
>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.
>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.
>**Change directory one level up, and double-click to open `kubeflow_dag_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.
Update the pipeline and create an execution run as we did in steps 5 and 6.
```
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
```
You can find your training jobs in [Cloud AI Platform Jobs](https://console.cloud.google.com/ai-platform/jobs). If your pipeline completed successfully, you can find your model in [Cloud AI Platform Models](https://console.cloud.google.com/ai-platform/models).
## Step 10. Ingest YOUR data to the pipeline
We made a pipeline for a model using the Chicago Taxi dataset. Now it's time to put your data into the pipeline.
Your data can be stored anywhere your pipeline can access, including GCS, or BigQuery. You will need to modify the pipeline definition to access your data.
1. If your data is stored in files, modify the `DATA_PATH` in `kubeflow_dag_runner.py` or `beam_dag_runner.py` and set it to the location of your files. If your data is stored in BigQuery, modify `BIG_QUERY_QUERY` in `pipeline/configs.py` to correctly query for your data.
1. Add features in `models/features.py`.
1. Modify `models/preprocessing.py` to [transform input data for training](https://www.tensorflow.org/tfx/guide/transform).
1. Modify `models/keras/model.py` and `models/keras/constants.py` to [describe your ML model](https://www.tensorflow.org/tfx/guide/trainer).
- You can use an estimator based model, too. Change `RUN_FN` constant to `models.estimator.model.run_fn` in `pipeline/configs.py`.
Please see the [Trainer component guide](https://www.tensorflow.org/tfx/guide/trainer) for more information.
## Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Alternatively, you can clean up individual resources by visiting each console:
- [Google Cloud Storage](https://console.cloud.google.com/storage)
- [Google Container Registry](https://console.cloud.google.com/gcr)
- [Google Kubernetes Engine](https://console.cloud.google.com/kubernetes)
| github_jupyter |
# Gaussian process regression in PyMC
Author: [Nipun Batra](https://nipunbatra.github.io/)
```
import numpy as np
import matplotlib.pyplot as plt
import pymc3 as pm
from matplotlib import rc
import arviz as az
import warnings
warnings.filterwarnings('ignore')
rc('font', size=16)
```
We will use PyMC to do Gaussian process regression.
Let us define the RBF kernel as the following,
```
def kernel(a, b, lengthscale, std):
    """
    RBF (squared exponential) kernel.
    Borrowed from Nando de Freitas's lecture code
    https://www.cs.ubc.ca/~nando/540-2013/lectures/gp.py
    """
    sqdist = np.sum(a**2,1).reshape(-1,1) + np.sum(b**2,1) - 2*np.dot(a, b.T)
    return std**2*np.exp(-.5 * (1/lengthscale) * sqdist)
```
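Written out, the kernel implemented above is

$$k(\mathbf{a}, \mathbf{b}) = \sigma^2 \exp\!\left(-\frac{1}{2\ell}\,\lVert \mathbf{a}-\mathbf{b} \rVert^2\right),$$

where $\sigma$ is the standard deviation argument and $\ell$ the length-scale argument. Note that this implementation divides the squared distance by $\ell$ directly rather than the more common $\ell^2$.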
We generate a synthetic dataset from a known distribution.
```
# From GPY tutorial
np.random.seed(0)
n_train = 20
X = np.random.uniform(-3.,3.,(n_train, 1))
Y = (np.sin(X) + np.random.randn(n_train, 1)*0.1).flatten()
plt.scatter(X[:, 0], Y);
plt.xlabel('x')
plt.ylabel('y');
```
We can define a Gaussian process model in PyMC as follows:
```
basic_model = pm.Model()
with basic_model:
# Priors for unknown model parameters
# Variance
kernel_std = pm.Lognormal("kernel_std", 0, 0.1)
# Length scale
kernel_ls = pm.Lognormal("kernel_ls", 0, 1)
noise_sigma = pm.Lognormal("noise_sigma", 0, 1)
K = kernel(X, X, kernel_ls, kernel_std)
K += np.eye(X.shape[0]) * np.power(noise_sigma, 2)
y = pm.MvNormal("y", mu = 0, cov = K, observed = Y)
pm.model_to_graphviz(basic_model.model)
```
Let us get the MAP estimate of the parameters.
```
map_estimate = pm.find_MAP(model=basic_model)
map_estimate
```
Now, we draw a large number of samples from the posterior.
```
with basic_model:
# draw 2000 posterior samples per chain
trace = pm.sample(2000,return_inferencedata=False,tune=2000)
```
We can visualize the posterior distribution as follows:
```
az.plot_trace(trace);
```
Let us predict at new input locations.
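The `post` function in the next cell implements the standard Gaussian process posterior predictive equations. With $K = k(X, X) + \sigma_n^2 I$, $K_* = k(X, X_*)$ and $K_{**} = k(X_*, X_*)$, it draws a sample from

$$\mathcal{N}\!\left(K_*^\top K^{-1}\mathbf{y},\; K_{**} - K_*^\top K^{-1} K_*\right).$$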
```
test_x = np.linspace(-3, 3, 100).reshape(-1, 1)
train_x = X
train_y = Y
def post(train_x, train_y, test_x, kernel, kernel_ls, kernel_std, noise):
N = len(train_x)
K = kernel(train_x, train_x, kernel_ls, kernel_std)+noise**2*np.eye(len(train_x))
N_star = len(test_x)
K_star = kernel(train_x, test_x, kernel_ls, kernel_std)
K_star_star = kernel(test_x, test_x, kernel_ls, kernel_std)
posterior_mu = K_star.T@np.linalg.inv(K)@(train_y)
posterior_sigma = K_star_star - K_star.T@np.linalg.inv(K)@K_star
# Instead of size = 1, we can also sample multiple times given a single length scale, kernel_std and noise
return np.random.multivariate_normal(posterior_mu, posterior_sigma, size=1)
# Make predictions at new locations.
train_y = Y
n_samples = 500
preds = np.stack([post(train_x, train_y, test_x=test_x, kernel=kernel, kernel_ls=trace['kernel_ls'][b],
kernel_std=trace['kernel_std'][b],
noise=trace['noise_sigma'][b])
for b in range(n_samples)])
```
The figure below shows the mean and variance estimate of the posterior.
```
ci = 95
ci_lower = (100 - ci) / 2
ci_upper = (100 + ci) / 2
preds_mean = preds.reshape(n_samples, len(test_x)).mean(0)
preds_lower = np.percentile(preds, ci_lower, axis=0)
preds_upper = np.percentile(preds, ci_upper, axis=0)
plt.plot(test_x,preds.reshape(n_samples, len(test_x)).mean(axis=0), label='predictive mean')
plt.scatter(train_x, train_y, c='black', zorder=3, label='data')
plt.fill_between(test_x.flatten(), preds_upper.flatten(), preds_lower.flatten(), alpha=.3, label='95% CI');
plt.legend(bbox_to_anchor=(1,1));
plt.xlabel('x');plt.ylabel('y');
```
| github_jupyter |
# `006-compute-grad`
Task: compute the gradient of a function
## Setup
```
import torch
from torch import tensor
import matplotlib.pyplot as plt
%matplotlib inline
```
## Task
Suppose we have a dataset with just a single feature `x` and continuous outcome variable `y`.
```
torch.manual_seed(0)
x = torch.rand(100)
noise = torch.rand_like(x) * .5
y_true = 4 * x + noise - 1
plt.scatter(x, y_true);
```
Let's fit a line to that!
In linear regression, we predict an output by computing `y_pred = weights * x + bias`.
We set `weights` and `bias` in a way that minimizes the mean squared error `mse_loss = (y_pred - y_true).pow(2).mean()`.
Let's set `weights` and `bias` to some arbitrary values and see what `mse_loss` comes out to be.
```
weights = tensor([3.5])
bias = tensor(0.0)
y_pred = weights * x + bias
plt.scatter(x, y_true); plt.plot(x, y_pred, 'r')
mse_loss = (y_pred - y_true).pow(2).mean()
mse_loss
```
Let's find what changes we could make to `weights` or `bias` that would reduce `mse_loss`. **Your task**:
1. Use PyTorch to compute the gradient of `weights` with respect to `mse_loss`.
2. Use that gradient to work out (by hand!) a new fixed value for `weights`.
3. Recompute `mse_loss` and see that it does go down. (If it doesn't, reconsider what you did in Step 2.)
4. Repeat the 3 steps above for `bias`.
## Solution
```
# Your code here
```
### weights
```
weights = tensor([3.5], requires_grad=True)
bias = tensor(0.0)
y_pred = weights * x + bias
mse_loss = (y_pred - y_true).pow(2).mean()
mse_loss.backward()
weights.grad
```
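As a sanity check, the reported gradient should match the hand derivation. With $L = \frac{1}{N}\sum_{i}(w x_i + b - y_i)^2$,

$$\frac{\partial L}{\partial w} = \frac{2}{N}\sum_{i=1}^{N} (w x_i + b - y_i)\, x_i,$$

evaluated here at $w = 3.5$, $b = 0$.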
Let's use a learning rate of 0.1. And remember we want loss to go *down*, so we need to step *opposite* the gradient.
```
3.5 - .4150 * .1
weights = tensor(3.4585)
bias = tensor(0.0)
y_pred = weights * x + bias
mse_loss = (y_pred - y_true).pow(2).mean()
mse_loss
```
### bias
```
weights = tensor(3.4585)
bias = tensor(0.0, requires_grad=True)
y_pred = weights * x + bias
mse_loss = (y_pred - y_true).pow(2).mean()
mse_loss.backward()
bias.grad
weights = tensor(3.4585)
bias = tensor(-.10119)
y_pred = weights * x + bias
mse_loss = (y_pred - y_true).pow(2).mean()
mse_loss
```
## Analysis
What is the true value of `weights`? Did your first change to `weights` get you closer to that value? Explain.
## Extension (optional)
1. Put the code above in a loop that changes `weights` and `bias` to minimize `mse_loss`.
2. What is the minimum value that `mse_loss` could possibly take in this situation?
3. Would it be possible to also change `x` or `y` to reduce `mse_loss`? Can you think of a situation where you might want to do that?
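For the first extension, here is one possible sketch of such a loop. It recreates the data so the cell stands alone; the learning rate and step count are arbitrary choices, not prescribed values.

```python
import torch

# Recreate the dataset from the top of the notebook so this cell stands alone.
torch.manual_seed(0)
x = torch.rand(100)
noise = torch.rand_like(x) * .5
y_true = 4 * x + noise - 1

weights = torch.tensor([1.0], requires_grad=True)
bias = torch.tensor(0.0, requires_grad=True)
lr = 0.1
for step in range(500):
    y_pred = weights * x + bias
    mse_loss = (y_pred - y_true).pow(2).mean()
    mse_loss.backward()
    with torch.no_grad():
        # Step opposite the gradient, then clear it for the next iteration.
        weights -= lr * weights.grad
        bias -= lr * bias.grad
    weights.grad.zero_()
    bias.grad.zero_()
print(weights.item(), bias.item(), mse_loss.item())
```

With enough steps, `weights` should approach the true slope of 4 and `bias` should absorb the `-1` offset plus the mean of the noise.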
| github_jupyter |
# To make a better wedge
This notebook is an update to the notebook entitled "To make a wedge" featured in the blog post, [To make a wedge](https://agilescientific.com/blog/2013/12/12/to-make-a-wedge.html?rq=wedge), on December 12, 2013.
Start by importing Numpy and Matplotlib's pyplot module in the usual way:
```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
```
Import the ricker wavelet function from [bruges](https://github.com/agile-geoscience/bruges):
```
from bruges.filters import ricker
```
## Make a wedge
```
from IPython.display import Image
```
Let's make a more generic wedge that will handle any 3 layer case we want to make.
```
Image('images/generic_wedge.png', width=600)
defaults = {'ta1':150, 'tb1':30, 'dta':50, 'dtb':50,
'xa1':100, 'xa2':100, 'dx':1,
'mint':0, 'maxt': 600, 'dt':1,
'minx':0, 'maxx': 500}
def make_upper_boundary(**kw):
x = kw['maxx']-kw['minx']
t0 = kw['ta1']
x2 = np.arange(1, x-(kw['xa2']+kw['xa1']), kw['dx'])
m2 = kw['dta']/x2[-1]
seg1 = np.ones(int(kw['xa1']/kw['dx']))
seg3 = np.ones(int(kw['xa2']/kw['dx']))
seg2 = x2 * m2
interface = t0 + np.concatenate((seg1, seg2, kw['dta']+seg3))
return interface
def make_lower_boundary(**kw):
x = kw['maxx']-kw['minx']
t1 = kw['ta1'] + kw['tb1']
x2 = np.arange(1, x-(kw['xa2']+kw['xa1']), kw['dx'])
m2 = (kw['dta']+kw['dtb'])/x2[-1]
seg1 = np.ones(int(kw['xa1']/kw['dx']))
seg3 = np.ones(int(kw['xa2']/kw['dx']))
seg2 = x2 * m2
interface = t1 + np.concatenate((seg1, seg2, seg2[-1]+seg3))
return interface
def make_wedge(kwargs):
upper_interface = make_upper_boundary(**kwargs)
lower_interface = make_lower_boundary(**kwargs)
return upper_interface, lower_interface
def plot_interfaces(ax, upper, lower, **kw):
ax.plot(upper,'-r')
ax.plot(lower,'-b')
ax.set_ylim(0,600)
ax.set_xlim(kw['minx'],kw['maxx'])
ax.invert_yaxis()
upper, lower = make_wedge(defaults)
f = plt.figure()
ax = f.add_subplot(111)
plot_interfaces(ax, upper, lower, **defaults)
def make_meshgrid(**kw):
    upper, lower = make_wedge(kw)
t = np.arange(kw['mint'], kw['maxt']-1, kw['dt'])
x = np.arange(kw['minx'], kw['maxx']-1, kw['dx'])
xv, yv = np.meshgrid(x, t, sparse=False, indexing='ij')
return xv, yv
xv, yv = make_meshgrid(**defaults)
conditions = {'upper': yv.T < upper,
'middle': (yv.T >= upper) & (yv.T <= lower),
'lower': yv.T > lower
}
labels = {'upper': 1, 'middle':2, 'lower': 3}
d = yv.T.copy()
for name, cond in conditions.items():
d[cond] = labels[name]
plt.imshow(d, cmap='copper')
vp = np.array([3300., 3200., 3300.])
rho = np.array([2600., 2550., 2650.])
AI = vp*rho
AI
model = d.copy()
model[model == 1] = AI[0]
model[model == 2] = AI[1]
model[model == 3] = AI[2]
def wvlt(f):
    return ricker(0.512, 0.001, f)
def conv(a):
    # Note: `f` here is read from the enclosing (global) scope; it is set by
    # the plotting loop below before apply_along_axis calls this function.
    return np.convolve(wvlt(f), a, mode='same')
plt.imshow(model, cmap='Spectral')
plt.colorbar()
plt.title('Impedances')
```
# Plotting the synthetic
```
# These are just some plotting parameters
rc_params = {'cmap':'RdBu',
'vmax':0.05,
'vmin':-0.05,
'aspect':0.75}
txt_params = {'fontsize':12, 'color':'black',
'horizontalalignment':'center',
'verticalalignment':'center'}
tx = [0.85*defaults['maxx'],0.85*defaults['maxx'],0.85*defaults['maxx']]
ty = [(defaults['ta1'] + defaults['dta'])/2,
defaults['ta1'] + defaults['dta'] + (defaults['dtb']/1.33),
defaults['maxt']-(defaults['maxt'] - defaults['ta1'] - defaults['dta'] - defaults['dtb'])/2]
rock_names = ['shale1', 'sand', 'shale2']
defaults['ta1'], defaults['dta'], defaults['dtb']/1.25
rc = (model[1:] - model[:-1]) / (model[1:] + model[:-1])
```
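The last line above computes the normal-incidence reflection coefficient at each time sample from the acoustic impedance model:

$$R_i = \frac{AI_{i+1} - AI_i}{AI_{i+1} + AI_i}.$$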
We can make use of the awesome `apply_along_axis` in Numpy to avoid looping over all the traces. https://docs.scipy.org/doc/numpy/reference/generated/numpy.apply_along_axis.html
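As a minimal, self-contained illustration of the idea — on a toy array, not the wedge model — `apply_along_axis` runs a 1-D function down every column in one call:

```python
import numpy as np

# Convolve a 3-sample kernel down every column ("trace") of a 2-D array.
arr = np.zeros((10, 4))
arr[5, :] = 1.0  # a spike in each column
kern = np.array([1.0, 2.0, 1.0])
out = np.apply_along_axis(lambda t: np.convolve(kern, t, mode='same'), 0, arr)
print(out[4:7, 0])  # the spike smeared by the kernel: [1. 2. 1.]
```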
```
freqs = np.array([7,14,21])
fig, axs = plt.subplots(1,len(freqs), figsize=(len(freqs)*5,6))
for i, f in enumerate(freqs):
axs[i].imshow(np.apply_along_axis(conv, 0, rc), **rc_params)
[axs[i].text(tx[j], ty[j], rock_names[j], **txt_params) for j in range(3)]
plot_interfaces(axs[i], upper, lower, **defaults)
axs[i].set_ylim(defaults['maxt'],defaults['mint'])
axs[i].set_title( f'{f} Hz wavelet' )
axs[i].grid(alpha=0.5)
```
| github_jupyter |
### Reconstruction with a custom network.
This notebook extends the last notebook to simultaneously train a decoder network, which translates from embedding back into dataspace. It also shows you how to use validation data for the reconstruction network during training.
### load data
```
import tensorflow as tf
tf.__version__
from tensorflow.keras.datasets import mnist
(train_images, Y_train), (test_images, Y_test) = mnist.load_data()
train_images = train_images.reshape((train_images.shape[0], -1))/255.
test_images = test_images.reshape((test_images.shape[0], -1))/255.
```
### define the encoder network
```
import tensorflow as tf
tf.__version__
tf.config.list_physical_devices('GPU')
dims = (28,28, 1)
n_components = 2
encoder = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=dims),
tf.keras.layers.Conv2D(
filters=32, kernel_size=3, strides=(2, 2), activation="relu", padding="same"
),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation="relu", padding="same"
),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=128, activation="relu"),
tf.keras.layers.Dense(units=128, activation="relu"),
tf.keras.layers.Dense(units=n_components),
])
encoder.summary()
decoder = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(n_components)),
tf.keras.layers.Dense(units=128, activation="relu"),
tf.keras.layers.Dense(units=7 * 7 * 128, activation="relu"),
tf.keras.layers.Reshape(target_shape=(7, 7, 128)),
tf.keras.layers.Conv2DTranspose(
filters=64, kernel_size=3, strides=(2, 2), padding="SAME", activation="relu"
),
tf.keras.layers.Conv2DTranspose(
filters=32, kernel_size=3, strides=(2, 2), padding="SAME", activation="relu"
),
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=(1, 1), padding="SAME", activation="sigmoid"
)
])
decoder.summary()
```
### create parametric umap model
```
from umap.parametric_umap import ParametricUMAP
embedder = ParametricUMAP(
encoder=encoder,
decoder=decoder,
dims=dims,
n_components=n_components,
    n_training_epochs=1, # dictates how many total training epochs to run
    n_epochs = 50, # dictates how many times edges are trained per 'epoch' to keep consistent with non-parametric UMAP
parametric_reconstruction= True,
reconstruction_validation=test_images,
parametric_reconstruction_loss_fcn = tf.keras.losses.MSE,
verbose=True,
)
train_images.shape
embedding = embedder.fit_transform(train_images)
```
### plot reconstructions
```
test_images_recon = embedder.inverse_transform(embedder.transform(test_images.reshape(len(test_images), 28,28,1)))
import numpy as np
import matplotlib.pyplot as plt
np.min(test_images), np.max(test_images)
nex = 10
fig, axs = plt.subplots(ncols=10, nrows=2, figsize=(nex, 2))
for i in range(nex):
axs[0, i].matshow(np.squeeze(test_images[i].reshape(28, 28, 1)), cmap=plt.cm.Greys)
axs[1, i].matshow(
np.squeeze(test_images_recon[i].reshape(28, 28, 1)),
cmap=plt.cm.Greys, vmin = 0, vmax = 1
)
for ax in axs.flatten():
ax.axis("off")
```
### plot results
```
embedding = embedder.embedding_
import matplotlib.pyplot as plt
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
embedding[:, 0],
embedding[:, 1],
c=Y_train.astype(int),
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("UMAP in Tensorflow embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
```
### plotting loss
```
embedder._history.keys()
fig, axs = plt.subplots(ncols=2, figsize=(10,5))
ax = axs[0]
ax.plot(embedder._history['loss'])
ax.set_ylabel('Cross Entropy')
ax.set_xlabel('Epoch')
ax = axs[1]
ax.plot(embedder._history['reconstruction_loss'], label='train')
ax.plot(embedder._history['val_reconstruction_loss'], label='valid')
ax.legend()
ax.set_ylabel('MSE')
ax.set_xlabel('Epoch')
```
| github_jupyter |
<a href="https://colab.research.google.com/github/yohanesnuwara/geostatistics/blob/main/project_notebooks/gullfaks_python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
!pip install geostatspy
import geostatspy.GSLIB as GSLIB
import geostatspy.geostats as geostats
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual, ToggleButtons
import ipywidgets as widgets
plt.style.use("classic")
!git clone https://github.com/yohanesnuwara/geostatistics
# Load well top data
colnames = ["UTM X", "UTM Y", "TVD", "Top Name", "Well Name"]
welltop = pd.read_csv("/content/geostatistics/data/gullfaks/welltops.txt", sep=" ",
header=17, usecols=[0,1,2,5,6], names=colnames)
# Invert -TVD to +TVD
welltop["TVD"] = welltop["TVD"] * -1
welltop.head(10)
# Load seismic horizon data
colnames = ["A", "B", "UTM X", "UTM Y", "TWT"]
seis_creta = np.loadtxt("/content/geostatistics/data/gullfaks/Base Cretaceous")
seis_creta = pd.DataFrame(seis_creta, columns=colnames)
seis_etive = np.loadtxt("/content/geostatistics/data/gullfaks/Top Etive")
seis_etive = pd.DataFrame(seis_etive, columns=colnames)
seis_ness = np.loadtxt("/content/geostatistics/data/gullfaks/Top Ness")
seis_ness = pd.DataFrame(seis_ness, columns=colnames)
seis_tarb = np.loadtxt("/content/geostatistics/data/gullfaks/Top Tarbert")
seis_tarb = pd.DataFrame(seis_tarb, columns=colnames)
seis_creta.head()
# Separate the well top w.r.t. each top names
topnames = ["Base Cretaceous", "Top Etive", "Top Ness", "Top Tarbert"]
dfs = []
for i in topnames:
mask = welltop["Top Name"] == i
df = welltop[mask]
dfs.append(df)
well_creta, well_etive, well_ness, well_tarb = dfs
well_creta.head()
```
## Well and seismic top visualization
```
plt.figure(figsize=(14,9))
for i in range(len(topnames)):
x = dfs[i]["UTM X"].values
y = dfs[i]["UTM Y"].values
z = dfs[i]["TVD"].values
plt.subplot(2,2,i+1)
plt.scatter(x, y, c=z, s=z/25)
# Annotate with well names
wellname = dfs[i]["Well Name"].values
for j, txt in enumerate(wellname):
plt.annotate(txt, (x[j], y[j]))
plt.colorbar()
plt.clim(1600,2300)
plt.title(topnames[i])
plt.xlabel("UTM X"); plt.ylabel("UTM Y")
plt.grid()
plt.tight_layout(1.3)
plt.show()
lag_dist = 1000
seis_tops = [seis_creta, seis_etive, seis_ness, seis_tarb]
plt.figure(figsize=(14,9))
for i in range(len(topnames)):
x, y, z = seis_tops[i]["UTM X"].values, seis_tops[i]["UTM Y"].values, \
seis_tops[i]["TWT"].values
plt.subplot(2,2,i+1)
plt.scatter(x, y, c=z, edgecolor=None, linewidth=0)
plt.colorbar()
plt.clim(1600,2300)
plt.title(topnames[i])
plt.xlabel("UTM X"); plt.ylabel("UTM Y")
plt.grid()
plt.tight_layout(1.3)
plt.show()
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(10,7))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, c=z, linewidth=0)
ax.set_zlim(2300, 1600)
plt.show()
from scipy.interpolate import griddata
x, y, z = seis_tops[i]["UTM X"].values, seis_tops[i]["UTM Y"].values, seis_tops[i]["TWT"].values
xi = np.arange(x.min(), x.max(), (x.max()-x.min()) / 100)
yi = np.arange(y.min(), y.max(), (y.max()-y.min()) / 100)
xi, yi = np.meshgrid(xi, yi)
# interpolate
zi = griddata((x, y), z, (xi.T, yi.T), method='linear')
plt.imshow(zi.T, extent=(x.min(), x.max(), y.min(), y.max()))
plt.colorbar()
plt.clim(1600,2300)
plt.show()
```
## KDE plot
```
plt.figure(figsize=(16,5))
# KDE plot of well tops
plt.subplot(1,2,1)
for i in range(len(topnames)):
z = dfs[i]["TVD"].values
sns.kdeplot(z, label=topnames[i])
plt.title("Kernel Density Estimate Plot of Well Tops")
plt.xlabel("TVD")
plt.legend()
# KDE plot of seismic tops
seis_tops = [seis_creta, seis_etive, seis_ness, seis_tarb]
plt.subplot(1,2,2)
for i in range(len(topnames)):
z = seis_tops[i]["TWT"].values
sns.kdeplot(z, label=topnames[i])
plt.title("Kernel Density Estimate Plot of Seismic Horizon")
plt.xlabel("TWT")
plt.xlim(1400,2600)
plt.legend()
plt.show()
a = [1,2,3,4]
b = ['Base Cretaceous', 'Top Etive', 'Top Ness', 'Top Tarbert']
x = 'Top Ness'
for i in range(len(b)):
if x==b[i]:
print(a[i])
```
## Variogram
```
@interact
def f(horizon=topnames):
for i in range(len(topnames)):
if horizon == topnames[i]:
df = dfs[i]
return df
```
Tightening `atol` (the azimuth tolerance) to 16–20 degrees reveals anisotropy in the spatial continuity of the well tops along the 135-degree direction.
```
from ipywidgets import interact, Dropdown
tmin, tmax, bandh, isill = -9999, 9999, 9999, 1
maxdist = 7000 # Maximum distance
atol = 90 # Azimuth tolerance
lagdist = 200 # Lag distance
lagtol, nlag = .5 * lagdist, int(maxdist / lagdist)
atol = widgets.FloatSlider(value=atol, min=0, max=180)
lagdist = widgets.FloatSlider(value=lagdist, min=10, max=1.5*lagdist)
lagtol = widgets.FloatSlider(value=lagtol, min=10, max=1.5*lagtol)
nlag = widgets.IntSlider(value=nlag, min=.5*nlag, max=1.5*nlag)
@interact
def plot_variograms(horizon=topnames, atol=atol, lagdist=lagdist, lagtol=lagtol,
                    nlag=nlag):
for i in range(len(topnames)):
if horizon == topnames[i]:
df = dfs[i]
azis = [0, 45, 60, 90, 135, 150]
plt.figure(figsize=(15,7))
for i in range(len(azis)):
plt.subplot(2,3,i+1)
# Variogram calculation
            lag, gamma, npair = geostats.gamv(df, "UTM X", "UTM Y", "TVD", tmin=tmin,
                                              tmax=tmax, xlag=lagdist, xltol=lagtol,
                                              nlag=nlag, azm=azis[i],
                                              atol=atol, bandwh=bandh, isill=isill)
plt.scatter(lag, gamma)
plt.axhline(1)
plt.suptitle("Variogram of Well Tops", size=20, y=1.04)
plt.title("Azimuth {}".format(azis[i]))
plt.xlim(xmin=0); plt.ylim(ymin=0)
plt.grid()
plt.tight_layout(1.3)
plt.show()
# Example variogram model -- nug, cc1, hmaj1, hmin1 below are placeholder
# values that should come from fitting the experimental variograms above:
# vario_kri = GSLIB.make_variogram(nug=0.0, nst=1, it1=1, cc1=1.0, azi1=135,
#                                  hmaj1=3000, hmin1=1500)
help(GSLIB.make_variogram)
```
## Misc
```
tmin = -9999.; tmax = 9999.;
lag_dist = 200; lag_tol = 200;
nlag = 50; bandh = 9999.9; azi = 75; atol = 90.0; isill = 1
lag_dist = 700; lag_tol = 500; nlag = 20
lag, gamma, npair = geostats.gamv(well_creta,"UTM X","UTM Y","TVD",tmin,tmax,lag_dist,lag_tol,nlag,azi,atol,bandh,isill)
plt.scatter(lag, gamma)
plt.xlim(xmin=0); plt.ylim(ymin=0)
plt.show()
```
Updating a field in ipywidgets
```
from ipywidgets import interact, Dropdown
geo = {'Base Cretaceous': 50,
'Top Etive': 90}
countryW = Dropdown(options = geo.keys())
# cityW = Dropdown()
cityW = widgets.FloatSlider(min=10, max=1000)
@interact(country = countryW, city = cityW)
def print_city(country, city):
cityW.value = geo[country] # Here is the trick, i.e. update cityW.options based on country, namely countryW.value.
# print(country, city)
from ipywidgets import interact, Dropdown
geo = {'USA':['CHI','NYC'],'Russia':['MOW','LED']}
countryW = Dropdown(options = geo.keys())
cityW = Dropdown()
@interact(country = countryW, city = cityW)
def print_city(country, city):
cityW.options = geo[country] # Here is the trick, i.e. update cityW.options based on country, namely countryW.value.
# print(country, city)
```
| github_jupyter |
### License
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
# 1. Installing Dependencies
Run this cell to install all dependencies needed by the notebook. You can skip it if you have already run it in the current environment.
```
import sys
!{sys.executable} -m pip install -r requirements.txt
!jupyter nbextension enable --py widgetsnbextension
!jupyter serverextension enable voila --sys-prefix
```
# 2. Setting Environment Variables
<ul>
<li> Make sure you have set your environment variables in the `var.env` file.
<li> Pick the date range for your analysis.
<li> After resetting any environment variables, you need to restart the kernel because otherwise they will not be loaded by Jupyter. To restart, go to the 'Kernel' menu and choose 'Restart'.
<li> Run all the cells in this section.
<li> Make sure the environment variables are set correctly.
</ul>
```
%reload_ext autoreload
%autoreload 2
from dotenv import load_dotenv
load_dotenv('var.env')
import os
from itables import init_notebook_mode
if os.getenv('IS_INTERACTIVE_TABLES_MODE') == 'TRUE':
init_notebook_mode(all_interactive=True)
from IPython.display import display
import ipywidgets as widgets
from ipywidgets import HBox
start_date_picker = widgets.DatePicker(description='Start Date')
end_date_picker = widgets.DatePicker(description='End Date')
date_pickers = HBox(children=[start_date_picker, end_date_picker])
display(date_pickers)
os.environ['START_DATE'] = str(start_date_picker.value)
os.environ['END_DATE'] = str(end_date_picker.value)
print('--- LOADED ENVIRONMENT VARIABLES ---')
print(f"INPUT_PROJECT_ID: {os.getenv('INPUT_PROJECT_ID')}")
print(f"INPUT_DATASET_ID: {os.getenv('INPUT_DATASET_ID')}")
print(f"INPUT_AUDIT_LOGS_TABLE_ID: {os.getenv('INPUT_AUDIT_LOGS_TABLE_ID')}")
print(f"IS_AUDIT_LOGS_INPUT_TABLE_PARTITIONED: {os.getenv('IS_AUDIT_LOGS_INPUT_TABLE_PARTITIONED')}")
print(f"OUTPUT_PROJECT_ID: {os.getenv('OUTPUT_PROJECT_ID')}")
print(f"OUTPUT_DATASET_ID: {os.getenv('OUTPUT_DATASET_ID')}")
print(f"OUTPUT_TABLE_SUFFIX: {os.getenv('OUTPUT_TABLE_SUFFIX')}")
print(f"LOCATION: {os.getenv('LOCATION')}")
print(f"START_DATE: {os.getenv('START_DATE')}")
print(f"END_DATE: {os.getenv('END_DATE')}")
print(f"IS_INTERACTIVE_TABLES_MODE: {os.getenv('IS_INTERACTIVE_TABLES_MODE')}")
```
# 3. Creating Tables for Current Analysis Environment
Run the cell below to create the tables that are necessary for the analysis
```
from src.bq_query import BQQuery
try:
BQQuery.create_functions_for_pipeline_analysis()
BQQuery.create_tables_for_pipeline_analysis()
except Exception as e:
print('Unable to create tables, do not continue with the analysis')
print(e)
```
# 4. Getting Analysis Result
### Get the tables with highest discrepancy on write vs read frequency throughout the data warehouse
This will list the tables with the highest discrepancy between write and read frequency.
1. Run the cell
2. Use the text box to set the limit on how many tables to display (positive values only).
3. Click 'Run' and wait until the result is retrieved.
```
import src.pipeline_analysis as pipeline_analysis
import ipywidgets as widgets
from IPython.display import display
import pandas as pd
limited_imbalance_tables = []
limited_imbalance_tables_df = pd.DataFrame()
def get_limited_imbalance_tables_df(limit):
global limited_imbalance_tables, limited_imbalance_tables_df
limited_imbalance_tables_df = pipeline_analysis.get_tables_read_write_frequency_df(limit)
limited_imbalance_tables = limited_imbalance_tables_df['Table'].tolist()
return limited_imbalance_tables_df
widgets.interact_manual.opts['manual_name'] = 'Run'
widgets.interact_manual(get_limited_imbalance_tables_df, limit=widgets.IntText(value=3));  # trailing ';' suppresses the returned widget's repr
```
### Get the pipeline graph data of the table
This will generate a pipeline graph file, in HTML format, under the `pipeline_graph` directory. It may take some time to run and generate.
1. Choose the table of interest, i.e. the table whose pipeline graph you want to explore further.
2. Click 'Run' and wait until the run is finished (indicated by a non-grayed-out box).
3. Run the next cell to display the graph
```
def visualise_table_pipelines(table):
pipeline_analysis.display_pipelines_of_table(table)
widgets.interact_manual(visualise_table_pipelines, table=widgets.Dropdown(options=limited_imbalance_tables + [''], value='', description='Table:'));
```
### Display the pipeline graph of the table
Display the pipeline graph of the table. The thickness of each edge indicates its frequency relative to the other edges in the current graph.
1. Run the cell to display the pipeline graph of the table in the iFrame below
2. You can click on the different nodes of the graph, each representing a different table that is part of the pipeline of this table of interest. Clicking a node displays more information about that table.
```
from IPython.display import IFrame,HTML, display
display(IFrame('./pipeline_graph/index.html', width="1000", height="800"))
```
| github_jupyter |
```
import matplotlib
matplotlib.use('Agg')
%matplotlib inline
import matplotlib.pyplot as plt
# plt.switch_backend('Agg')
plt.rcParams['image.cmap'] = 'gray'
import numpy as np
import os
from glob import glob
#import nrrd
import SimpleITK as sitk
SMALL_SIZE = 14
MEDIUM_SIZE = 16
BIGGER_SIZE = 18
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
from ipywidgets import interact, interactive
from ipywidgets import widgets
import scipy
from scipy.ndimage import zoom  # scipy.ndimage.interpolation is deprecated; import from scipy.ndimage directly
import math
from skimage import filters
from skimage import exposure
from skimage.morphology import binary_opening
def myshow(img, title=None, margin=0.05, dpi=80 ):
nda = sitk.GetArrayFromImage(img)
spacing = img.GetSpacing()
slicer = False
if nda.ndim == 3:
# fastest dim, either component or x
c = nda.shape[-1]
# if the number of components is 3 or 4, consider it an RGB image
if c not in (3, 4):
slicer = True
elif nda.ndim == 4:
c = nda.shape[-1]
if not c in (3,4):
raise RuntimeError("Unable to show 3D-vector Image")
# take a z-slice
slicer = True
if (slicer):
ysize = nda.shape[1]
xsize = nda.shape[2]
else:
ysize = nda.shape[0]
xsize = nda.shape[1]
# Make a figure big enough to accommodate an axis of xpixels by ypixels
# as well as the ticklabels, etc...
figsize = (1 + margin) * ysize / dpi, (1 + margin) * xsize / dpi
def callback(z=None):
extent = (0, xsize*spacing[1], ysize*spacing[0], 0)
fig = plt.figure(figsize=figsize, dpi=dpi)
# Make the axis the right size...
ax = fig.add_axes([margin, margin, 1 - 2*margin, 1 - 2*margin])
plt.set_cmap("gray")
if z is None:
ax.imshow(nda,extent=extent,interpolation=None)
else:
ax.imshow(nda[z,...],extent=extent,interpolation=None)
if title:
plt.title(title)
plt.show()
if slicer:
interact(callback, z=(0,nda.shape[0]-1))
else:
callback()
def myshow3d(img, xslices=[], yslices=[], zslices=[], title=None, margin=0.05, dpi=80):
size = img.GetSize()
img_xslices = [img[s,:,:] for s in xslices]
img_yslices = [img[:,s,:] for s in yslices]
img_zslices = [img[:,:,s] for s in zslices]
maxlen = max(len(img_xslices), len(img_yslices), len(img_zslices))
img_null = sitk.Image([0,0], img.GetPixelID(), img.GetNumberOfComponentsPerPixel())
img_slices = []
d = 0
if len(img_xslices):
img_slices += img_xslices + [img_null]*(maxlen-len(img_xslices))
d += 1
if len(img_yslices):
img_slices += img_yslices + [img_null]*(maxlen-len(img_yslices))
d += 1
if len(img_zslices):
img_slices += img_zslices + [img_null]*(maxlen-len(img_zslices))
d +=1
if maxlen != 0:
if img.GetNumberOfComponentsPerPixel() == 1:
img = sitk.Tile(img_slices, [maxlen,d])
#TODO check in code to get Tile Filter working with VectorImages
else:
img_comps = []
for i in range(0,img.GetNumberOfComponentsPerPixel()):
img_slices_c = [sitk.VectorIndexSelectionCast(s, i) for s in img_slices]
img_comps.append(sitk.Tile(img_slices_c, [maxlen,d]))
img = sitk.Compose(img_comps)
myshow(img, title, margin, dpi)
def npresample(imgs, spacing, new_spacing, order=2): # new_spacing is ordered (z, y, x), consistent with imgs
assert len(imgs.shape) == 3
# print(spacing, new_spacing, imgs.shape, type(spacing[0]), type(new_spacing[0]))
new_shape = np.round(imgs.shape * spacing / new_spacing)
# print(new_shape)
true_spacing = spacing * imgs.shape / new_shape
resize_factor = spacing / new_spacing
# print(resize_factor)
imgs = zoom(imgs, resize_factor, mode = 'nearest',order=order)
# print(imgs.shape)
return np.array(imgs), true_spacing
def getminmaxannotation(segpth, spacing, newspacing=None):
minxlst, minylst, minzlst, maxxlst, maxylst, maxzlst = [], [], [], [], [], []
for gtfnm in ['BrainStem', 'Chiasm', 'Mandible', 'OpticNerve_L', 'OpticNerve_R', 'Parotid_L', \
'Parotid_R', 'Submandibular_L', 'Submandibular_R']:
if not os.path.exists(segpth+gtfnm+'.nrrd'):
print('miss', segpth, gtfnm)  # report the path; fnm is not defined in this function's scope
continue
sitkimggt = sitk.ReadImage(segpth+gtfnm+'.nrrd')
data = sitk.GetArrayFromImage(sitkimggt)
# spacing = sitkimggt.GetSpacing()
zflg, xflg, yflg = True, True, True
minxlst.append(0)
minylst.append(0)
minzlst.append(0)
maxxlst.append(data.shape[1])
maxylst.append(data.shape[2])
maxzlst.append(data.shape[0])
for zidx in range(data.shape[0]):
if zflg and data[zidx, :, :].sum() != 0:
minzlst[-1] = zidx
zflg = False
elif zflg is False and data[zidx, :, :].sum() == 0:
maxzlst[-1] = zidx
break
for yidx in range(data.shape[1]):
if yflg and data[minzlst[-1]:maxzlst[-1], yidx, :].sum() != 0:
minylst[-1] = yidx
yflg = False
elif yflg is False and data[minzlst[-1]:maxzlst[-1], yidx, :].sum() == 0:
maxylst[-1] = yidx
break
for xidx in range(data.shape[2]):
if xflg and data[minzlst[-1]:maxzlst[-1], minylst[-1]:maxylst[-1], xidx].sum() != 0:
minxlst[-1] = xidx
xflg = False
elif xflg is False and data[minzlst[-1]:maxzlst[-1], minylst[-1]:maxylst[-1], xidx].sum() == 0:
maxxlst[-1] = xidx
break
if newspacing is not None:
minxlst[-1] = int(round(minxlst[-1]*1.0 * spacing[0] / newspacing[0]))
maxxlst[-1] = int(round(maxxlst[-1]*1.0 * spacing[0] / newspacing[0]))+1
minylst[-1] = int(round(minylst[-1]*1.0 * spacing[1] / newspacing[1]))
maxylst[-1] = int(round(maxylst[-1]*1.0 * spacing[1] / newspacing[1]))+1
minzlst[-1] = int(round(minzlst[-1]*1.0 * spacing[2] / newspacing[2]))
maxzlst[-1] = int(round(maxzlst[-1]*1.0 * spacing[2] / newspacing[2]))+1
print(gtfnm, minzlst[-1], maxzlst[-1], minylst[-1], maxylst[-1], minxlst[-1], maxxlst[-1])
print('minz %d, maxz %d, miny %d, maxy %d, minx %d, maxx %d' % (min(minzlst), max(maxzlst), min(minylst), \
max(maxylst), min(minxlst), max(maxxlst)))
return min(minzlst), max(maxzlst), min(minylst), max(maxylst), min(minxlst), max(maxxlst)
def imhist3d(im, nbin=1024):
# calculates normalized histogram of an image
m, n, z = im.shape
h = [0.0] * (nbin + 1)
for i in range(m):
for j in range(n):
for k in range(z):
if im[i, j, k] != 0:
h[int(round(im[i, j, k]*nbin))]+=1
return np.array(h)/(sum(h))
def cumsum(h):
# finds cumulative sum of a numpy array, list
return [sum(h[:i+1]) for i in range(len(h))]
def histeq(im, nbin=1024):
#calculate Histogram
h = imhist3d(im, nbin=nbin)
cdf = np.array(cumsum(h)) #cumulative distribution function
sk = np.array(cdf) #finding transfer function values
s1, s2, s3 = im.shape
Y = np.zeros_like(im)
# applying transfered values for each pixels
for i in range(s1):
for j in range(s2):
for k in range(s3):
if im[i,j,k] == 0:
Y[i,j,k] = 0
else:
Y[i, j, k] = sk[int(round(im[i, j, k]*nbin))]
H = imhist3d(Y)
#return transformed image, original and new histogram,
# and transform function
return Y , h, H, sk, cumsum(imhist3d(Y, nbin=nbin))
path = '/data/wtzhu/dataset/HNPETCTclean/'
pidlst = [pid for pid in os.listdir(path)]
minxspace, minyspace, minzspace = 0.7599999904632568, 0.7599999904632568, 1.249632716178894
structurefnmlst = ('BrainStem', 'Chiasm', 'Mandible', 'OpticNerve_L', 'OpticNerve_R', 'Parotid_L', 'Parotid_R', \
'Submandibular_L', 'Submandibular_R')
idx = 0
fnm = pidlst[idx]
print('processing %s' % fnm)
assert not fnm.startswith('HN-HMR')
sitkimg = sitk.ReadImage(path+fnm+'/img.nrrd')
nparr = sitk.GetArrayFromImage(sitkimg)
# plt.hist(nparr.reshape((-1)), bins=1000)
# plt.show()
minv = -1024.0
nparr[nparr < minv] = minv
maxv = min(-256, nparr.max())  # uncommented: maxv must be defined before the clipping below
nparr[nparr > maxv] = maxv
nparr = (nparr - minv)*1.0 / (1.0*(maxv - minv))
minz0, maxz0, miny0, maxy0, minx0, maxx0 = getminmaxannotation(path+fnm+'/structures/', \
np.array([minxspace, minyspace, minzspace]), \
np.array([minxspace, minyspace, minzspace]))
minz, maxz, miny, maxy, minx, maxx = 35, 90, 90, 300, 170, 350
assert minz < minz0 and maxz > maxz0 and miny < miny0 and maxy > maxy0 and minx < minx0 and maxx > maxx0
nparrplt = np.array(nparr)
nparrplt[:, maxy, :] = 1
nparrplt[:, miny, :] = 1
nparrplt[:, :, maxx] = 1
nparrplt[:, :, minx] = 1
myshow3d(sitk.GetImageFromArray(nparrplt))
nparr = np.array(nparr[minz:maxz, miny:maxy, minx:maxx])
myshow3d(sitk.GetImageFromArray(nparr))
# minv = nparr.min()
# maxv = nparr.max()
# nparr = (nparr - minv)*1.0 / (1.0*(maxv-minv))
# myshow3d(sitk.GetImageFromArray(nparr))
for sfnm in os.listdir(path+fnm+'/structures/'):
snparr = sitk.GetArrayFromImage(sitk.ReadImage(path+fnm+'/structures/'+sfnm))
assert snparr.min() == 0 and snparr.max() == 1
snparr = np.array(snparr[minz:maxz, miny:maxy, minx:maxx])
print(sfnm)
s_nparr = np.array(nparr)
s_nparr[snparr == 1] -= 0.2
fig = plt.figure()
myshow3d(sitk.GetImageFromArray(s_nparr))
np.save(path+fnm+'/img_crp.npy', nparr)
for sfnm in os.listdir(path+fnm+'/structures/'):
snparr = sitk.GetArrayFromImage(sitk.ReadImage(path+fnm+'/structures/'+sfnm))
assert snparr.min() == 0 and snparr.max() == 1
snparr = np.array(snparr[minz:maxz, miny:maxy, minx:maxx])
np.save(path+fnm+'/structures/'+sfnm[:-len('.nrrd')]+'_crp.npy', snparr)
```
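As a quick sanity check of the shape arithmetic inside `npresample` above (the spacing values below are arbitrary examples): the new grid size is the old size scaled by the ratio of old to new voxel spacing, and `true_spacing` is the spacing actually achieved after rounding.

```
import numpy as np

shape = np.array([10, 20, 20])            # z, y, x voxel counts
spacing = np.array([2.5, 1.0, 1.0])       # current voxel spacing
new_spacing = np.array([1.25, 1.0, 1.0])  # target spacing
new_shape = np.round(shape * spacing / new_spacing)
true_spacing = spacing * shape / new_shape  # spacing actually achieved after rounding
print(new_shape.astype(int))  # [20 20 20]
```

Halving the z spacing doubles the z dimension, exactly as `zoom` with `resize_factor = spacing / new_spacing` produces.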
| github_jupyter |
```
%matplotlib inline
```
Training a Classifier
=====================
This is it. You have seen how to define neural networks, compute loss and make
updates to the weights of the network.
Now you might be thinking,
What about data?
----------------
Generally, when you have to deal with image, text, audio or video data,
you can use standard python packages that load data into a numpy array.
Then you can convert this array into a ``torch.*Tensor``.
- For images, packages such as Pillow, OpenCV are useful
- For audio, packages such as scipy and librosa
- For text, either raw Python or Cython based loading, or NLTK and
SpaCy are useful
Specifically for vision, we have created a package called
``torchvision``, that has data loaders for common datasets such as
Imagenet, CIFAR10, MNIST, etc. and data transformers for images, viz.,
``torchvision.datasets`` and ``torch.utils.data.DataLoader``.
This provides a huge convenience and avoids writing boilerplate code.
For this tutorial, we will use the CIFAR10 dataset.
It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,
‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of
size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.
.. figure:: /_static/img/cifar10.png
:alt: cifar10
cifar10
Training an image classifier
----------------------------
We will do the following steps in order:
1. Load and normalize the CIFAR10 training and test datasets using
``torchvision``
2. Define a Convolutional Neural Network
3. Define a loss function
4. Train the network on the training data
5. Test the network on the test data
1. Loading and normalizing CIFAR10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using ``torchvision``, it’s extremely easy to load CIFAR10.
```
import torch
import torchvision
import torchvision.transforms as transforms
```
The outputs of torchvision datasets are PILImage images of range [0, 1].
We transform them to Tensors of normalized range [-1, 1].
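Concretely, `Normalize` maps each channel value `x` to `(x - mean) / std`; with mean and std both 0.5 this sends [0, 1] onto [-1, 1], as a quick plain-Python sketch shows:

```
# Normalize maps x -> (x - mean) / std per channel;
# with mean = std = 0.5 the range [0, 1] becomes exactly [-1, 1].
normalize = lambda x: (x - 0.5) / 0.5
print([normalize(v) for v in (0.0, 0.5, 1.0)])  # [-1.0, 0.0, 1.0]
```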
```
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
Let us show some of the training images, for fun.
```
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() was removed; use the next() builtin
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
2. Define a Convolutional Neural Network
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Copy the neural network from the Neural Networks section before and modify it to
take 3-channel images (instead of 1-channel images as it was defined).
```
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
```
3. Define a Loss function and optimizer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Let's use a Classification Cross-Entropy loss and SGD with momentum.
```
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```
4. Train the network
^^^^^^^^^^^^^^^^^^^^
This is when things start to get interesting.
We simply have to loop over our data iterator, and feed the inputs to the
network and optimize.
```
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
```
5. Test the network on the test data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
We have trained the network for 2 passes over the training dataset.
But we need to check if the network has learnt anything at all.
We will check this by predicting the class label that the neural network
outputs, and checking it against the ground-truth. If the prediction is
correct, we add the sample to the list of correct predictions.
Okay, first step. Let us display an image from the test set to get familiar.
```
dataiter = iter(testloader)
images, labels = next(dataiter)  # dataiter.next() was removed; use the next() builtin
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
```
Okay, now let us see what the neural network thinks these examples above are:
```
outputs = net(images)
```
The outputs are energies for the 10 classes.
The higher the energy for a class, the more the network
thinks that the image is of the particular class.
So, let's get the index of the highest energy:
```
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
```
The results seem pretty good.
Let us look at how the network performs on the whole dataset.
```
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
```
That looks waaay better than chance, which is 10% accuracy (randomly picking
a class out of 10 classes).
Seems like the network learnt something.
Hmmm, what are the classes that performed well, and the classes that did
not perform well:
```
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
```
Okay, so what next?
How do we run these neural networks on the GPU?
Training on GPU
----------------
Just like how you transfer a Tensor onto the GPU, you transfer the neural
net onto the GPU.
Let's first define our device as the first visible cuda device if we have
CUDA available:
```
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
```
The rest of this section assumes that ``device`` is a CUDA device.
Then these methods will recursively go over all modules and convert their
parameters and buffers to CUDA tensors:
.. code:: python
net.to(device)
Remember that you will have to send the inputs and targets at every step
to the GPU too:
.. code:: python
inputs, labels = data[0].to(device), data[1].to(device)
Why don't I notice a MASSIVE speedup compared to CPU? Because your network
is really small.
**Exercise:** Try increasing the width of your network (argument 2 of
the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` –
they need to be the same number), see what kind of speedup you get.
**Goals achieved**:
- Understanding PyTorch's Tensor library and neural networks at a high level.
- Train a small neural network to classify images
Training on multiple GPUs
-------------------------
If you want to see even more MASSIVE speedup using all of your GPUs,
please check out :doc:`data_parallel_tutorial`.
Where do I go next?
-------------------
- :doc:`Train neural nets to play video games </intermediate/reinforcement_q_learning>`
- `Train a state-of-the-art ResNet network on imagenet`_
- `Train a face generator using Generative Adversarial Networks`_
- `Train a word-level language model using Recurrent LSTM networks`_
- `More examples`_
- `More tutorials`_
- `Discuss PyTorch on the Forums`_
- `Chat with other users on Slack`_
| github_jupyter |
**[Python Home Page](https://www.kaggle.com/learn/python)**
---
# Try It Yourself
Think you are ready to use Booleans and Conditionals? Try it yourself and find out.
To get started, **run the setup code below** before writing your own code (and if you leave this notebook and come back later, don't forget to run the setup code again).
```
from learntools.core import binder; binder.bind(globals())
from learntools.python.ex3 import *
print('Setup complete.')
```
# Exercises
## 1.
Many programming languages have [`sign`](https://en.wikipedia.org/wiki/Sign_function) available as a built-in function. Python doesn't, but we can define our own!
In the cell below, define a function called `sign` which takes a numerical argument and returns -1 if it's negative, 1 if it's positive, and 0 if it's 0.
```
# Your code goes here. Define a function called 'sign'
def sign(num):
    if num < 0:
        return -1
    elif num > 0:
        return 1
    else:
        return 0
# Check your answer
q1.check()
#q1.solution()
```
## 2.
We've decided to add "logging" to our `to_smash` function from the previous exercise.
```
def to_smash(total_candies):
"""Return the number of leftover candies that must be smashed after distributing
the given number of candies evenly between 3 friends.
>>> to_smash(91)
1
"""
print("Splitting", total_candies, "candies")
return total_candies % 3
to_smash(91)
```
What happens if we call it with `total_candies = 1`?
```
to_smash(1)
```
That isn't great grammar!
Modify the definition in the cell below to correct the grammar of our print statement. (If there's only one candy, we should use the singular "candy" instead of the plural "candies")
```
def to_smash(total_candies):
"""Return the number of leftover candies that must be smashed after distributing
the given number of candies evenly between 3 friends.
>>> to_smash(91)
1
"""
    if total_candies != 1:
        print("Splitting", total_candies, "candies")
    else:
        print("Splitting", total_candies, "candy")
return total_candies % 3
to_smash(91)
to_smash(1)
# Check your answer (Run this code cell to receive credit!)
#q2.solution()
```
## 3. <span title="A bit spicy" style="color: darkgreen ">🌶️</span>
In the main lesson we talked about deciding whether we're prepared for the weather. I said that I'm safe from today's weather if...
- I have an umbrella...
- or if the rain isn't too heavy and I have a hood...
- otherwise, I'm still fine unless it's raining *and* it's a workday
The function below uses our first attempt at turning this logic into a Python expression. I claimed that there was a bug in that code. Can you find it?
To prove that `prepared_for_weather` is buggy, come up with a set of inputs where either:
- the function returns `False` (but should have returned `True`), or
- the function returned `True` (but should have returned `False`).
To get credit for completing this question, your code should return a <font color='#33cc33'>Correct</font> result.
```
def prepared_for_weather(have_umbrella, rain_level, have_hood, is_workday):
# Don't change this code. Our goal is just to find the bug, not fix it!
return have_umbrella or rain_level < 5 and have_hood or not rain_level > 0 and is_workday
# Change the values of these inputs so they represent a case where prepared_for_weather
# returns the wrong answer.
have_umbrella = False
rain_level = 0.0
have_hood = False
is_workday = False
# Check what the function returns given the current values of the variables above
actual = prepared_for_weather(have_umbrella, rain_level, have_hood, is_workday)
print(actual)
# Check your answer
q3.check()
#q3.hint()
#q3.solution()
```
## 4.
The function `is_negative` below is implemented correctly - it returns True if the given number is negative and False otherwise.
However, it's more verbose than it needs to be. We can actually reduce the number of lines of code in this function by *75%* while keeping the same behaviour.
See if you can come up with an equivalent body that uses just **one line** of code, and put it in the function `concise_is_negative`. (HINT: you don't even need Python's ternary syntax)
```
def is_negative(number):
return number < 0
def concise_is_negative(number):
    return number < 0  # one-line body
# Check your answer
q4.check()
q4.hint()
#q4.solution()
```
## 5.
The boolean variables `ketchup`, `mustard` and `onion` represent whether a customer wants a particular topping on their hot dog. We want to implement a number of boolean functions that correspond to some yes-or-no questions about the customer's order. For example:
```
def onionless(ketchup, mustard, onion):
"""Return whether the customer doesn't want onions.
"""
return not onion
```
For each of the remaining functions, fill in the body to match the English description in the docstring.
```
def wants_all_toppings(ketchup, mustard, onion):
"""Return whether the customer wants "the works" (all 3 toppings)
"""
    return ketchup and mustard and onion
# Check your answer
q5.a.check()
#q5.a.hint()
#q5.a.solution()
def wants_plain_hotdog(ketchup, mustard, onion):
"""Return whether the customer wants a plain hot dog with no toppings.
"""
    return not (ketchup or mustard or onion)
# Check your answer
q5.b.check()
#q5.b.hint()
#q5.b.solution()
def exactly_one_sauce(ketchup, mustard, onion):
"""Return whether the customer wants either ketchup or mustard, but not both.
(You may be familiar with this operation under the name "exclusive or")
"""
    return (ketchup and not mustard) or (mustard and not ketchup)
# Check your answer
q5.c.check()
#q5.c.hint()
#q5.c.solution()
```
## 6. <span title="A bit spicy" style="color: darkgreen ">🌶️</span>
We’ve seen that calling `bool()` on an integer returns `False` if it’s equal to 0 and `True` otherwise. What happens if we call `int()` on a bool? Try it out in the notebook cell below.
Can you take advantage of this to write a succinct function that corresponds to the English sentence "does the customer want exactly one topping?"?
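As a nudge in that direction (this sketches one possible answer, so try the cell yourself first): since `True` behaves as 1 and `False` as 0 in arithmetic, summing booleans counts how many are true.

```
print(int(True), int(False))  # 1 0

def exactly_one_topping(ketchup, mustard, onion):
    # Summing booleans counts how many are True
    return (ketchup + mustard + onion) == 1
```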
```
def exactly_one_topping(ketchup, mustard, onion):
"""Return whether the customer wants exactly one of the three available toppings
on their hot dog.
"""
    return (ketchup and not mustard and not onion) or (mustard and not ketchup and not onion) or (onion and not ketchup and not mustard)
# Check your answer
q6.check()
#q6.hint()
#q6.solution()
```
## 7. <span title="A bit spicy" style="color: darkgreen ">🌶️</span> (Optional)
In this problem we'll be working with a simplified version of [blackjack](https://en.wikipedia.org/wiki/Blackjack) (aka twenty-one). In this version there is one player (who you'll control) and a dealer. Play proceeds as follows:
- The player is dealt two face-up cards. The dealer is dealt one face-up card.
- The player may ask to be dealt another card ('hit') as many times as they wish. If the sum of their cards exceeds 21, they lose the round immediately.
- The dealer then deals additional cards to himself until either:
- the sum of the dealer's cards exceeds 21, in which case the player wins the round
- the sum of the dealer's cards is greater than or equal to 17. If the player's total is greater than the dealer's, the player wins. Otherwise, the dealer wins (even in case of a tie).
When calculating the sum of cards, Jack, Queen, and King count for 10. Aces can count as 1 or 11 (when referring to a player's "total" above, we mean the largest total that can be made without exceeding 21. So e.g. A+8 = 19, A+8+8 = 17)
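That ace rule can be sketched in a few lines (`hand_total` is a hypothetical helper for illustration, not part of the exercise API; aces are passed in with value 1):

```
def hand_total(cards):
    """Best blackjack total: count an ace as 11 whenever that keeps the sum <= 21."""
    total = sum(cards)               # start with every ace counted low (as 1)
    for _ in range(cards.count(1)):  # try to upgrade each ace from 1 to 11
        if total + 10 <= 21:
            total += 10
    return total

print(hand_total([1, 8]), hand_total([1, 8, 8]))  # 19 17
```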
For this problem, you'll write a function representing the player's decision-making strategy in this game. We've provided a very unintelligent implementation below:
```
def should_hit(dealer_total, player_total, player_low_aces, player_high_aces):
"""Return True if the player should hit (request another card) given the current game
state, or False if the player should stay.
When calculating a hand's total value, we count aces as "high" (with value 11) if doing so
doesn't bring the total above 21, otherwise we count them as low (with value 1).
For example, if the player's hand is {A, A, A, 7}, we will count it as 11 + 1 + 1 + 7,
and therefore set player_total=20, player_low_aces=2, player_high_aces=1.
"""
return False
```
This very conservative agent *always* sticks with the hand of two cards that they're dealt.
We'll be simulating games between your player agent and our own dealer agent by calling your function.
Try running the function below to see an example of a simulated game:
```
q7.simulate_one_game()
```
The real test of your agent's mettle is their average win rate over many games. Try calling the function below to simulate 50000 games of blackjack (it may take a couple seconds):
```
q7.simulate(n_games=50000)
```
Our dumb agent that completely ignores the game state still manages to win shockingly often!
Try adding some more smarts to the `should_hit` function and see how it affects the results.
```
def should_hit(dealer_total, player_total, player_low_aces, player_high_aces):
"""Return True if the player should hit (request another card) given the current game
state, or False if the player should stay.
When calculating a hand's total value, we count aces as "high" (with value 11) if doing so
doesn't bring the total above 21, otherwise we count them as low (with value 1).
For example, if the player's hand is {A, A, A, 7}, we will count it as 11 + 1 + 1 + 7,
and therefore set player_total=20, player_low_aces=2, player_high_aces=1.
"""
return False
q7.simulate(n_games=50000)
```
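For instance, a slightly less naive strategy might hit more aggressively on soft hands, where a high ace can absorb a would-be bust (the thresholds below are arbitrary illustrations, not an optimal policy):

```
def should_hit(dealer_total, player_total, player_low_aces, player_high_aces):
    # With a high ace in hand, a bust card just demotes the ace from 11 to 1,
    # so hitting is cheap; otherwise stay once the total gets risky.
    if player_high_aces > 0:
        return player_total <= 17
    return player_total <= 14
```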
# Keep Going
Learn about **[lists and tuples](https://www.kaggle.com/colinmorris/lists)** to handle multiple items of data in a systematic way.
---
**[Python Home Page](https://www.kaggle.com/learn/python)**
*Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161283) to chat with other Learners.*
| github_jupyter |
```
#Import Libraries
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import median_absolute_error
#----------------------------------------------------
#Applying Linear Regression Model
LinearRegressionModel = LinearRegression(fit_intercept=True, normalize=True,copy_X=True,n_jobs=-1)
LinearRegressionModel.fit(X_train, y_train)
#Calculating Details
print('Linear Regression Train Score is : ' , LinearRegressionModel.score(X_train, y_train))
print('Linear Regression Test Score is : ' , LinearRegressionModel.score(X_test, y_test))
print('Linear Regression Coef is : ' , LinearRegressionModel.coef_)
print('Linear Regression intercept is : ' , LinearRegressionModel.intercept_)
print('----------------------------------------------------')
#Calculating Prediction
y_pred = LinearRegressionModel.predict(X_test)
print('Predicted Value for Linear Regression is : ' , y_pred[:10])
#----------------------------------------------------
#Calculating Mean Absolute Error
MAEValue = mean_absolute_error(y_test, y_pred, multioutput='uniform_average') # it can be raw_values
print('Mean Absolute Error Value is : ', MAEValue)
#----------------------------------------------------
#Calculating Mean Squared Error
MSEValue = mean_squared_error(y_test, y_pred, multioutput='uniform_average') # it can be raw_values
print('Mean Squared Error Value is : ', MSEValue)
#----------------------------------------------------
#Calculating Median Absolute Error
MdAEValue = median_absolute_error(y_test, y_pred)
print('Median Absolute Error Value is : ', MdAEValue )
#Import Libraries
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import median_absolute_error
#load boston data
BostonData = load_boston()
#X Data
X = BostonData.data
#Y Data
y = BostonData.target
#Splitting data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 50, shuffle = True)
#Applying Linear Regression Model
LinearRegressionModel = LinearRegression(fit_intercept = True, normalize = True, copy_X = True, n_jobs = -1)
LinearRegressionModel.fit(X_train, y_train)
print('Linear Regression Train Score is : ' , LinearRegressionModel.score(X_train, y_train))
print('Linear Regression Test Score is : ' , LinearRegressionModel.score(X_test, y_test))
print('Linear Regression Coef is : ' , LinearRegressionModel.coef_)
print('Linear Regression intercept is : ' , LinearRegressionModel.intercept_)
#Calculating Prediction
y_pred = LinearRegressionModel.predict(X_test)
print('Predicted Value for Linear Regression is : ' , y_pred[:5])
print(y_test[:5])
#Calculating Mean Absolute Error
MAEValue = mean_absolute_error(y_test, y_pred, multioutput='uniform_average') # it can be raw_values
print('Mean Absolute Error Value is : ', MAEValue)
#Calculating Mean Squared Error
MSEValue = mean_squared_error(y_test, y_pred, multioutput='uniform_average') # it can be raw_values
print('Mean Squared Error Value is : ', MSEValue)
#Calculating Median Absolute Error
MdAEValue = median_absolute_error(y_test, y_pred)
print('Median Absolute Error Value is : ', MdAEValue )
```
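The three error metrics above can be verified by hand. This stdlib-only sketch shows exactly what each sklearn call computes, using small made-up numbers rather than the Boston results:

```python
# Manual versions of the three sklearn metrics used above.
def mean_absolute_error_manual(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mean_squared_error_manual(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def median_absolute_error_manual(y_true, y_pred):
    errs = sorted(abs(t - p) for t, p in zip(y_true, y_pred))
    n = len(errs)
    mid = n // 2
    return errs[mid] if n % 2 else (errs[mid - 1] + errs[mid]) / 2

y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]
print(mean_absolute_error_manual(y_true, y_pred))    # 0.5
print(mean_squared_error_manual(y_true, y_pred))     # 0.375
print(median_absolute_error_manual(y_true, y_pred))  # 0.5
```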
| github_jupyter |
```
# Imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
import osmnx as ox
import geopandas as gpd
import itertools
import time
import random
import ast
# Utils & Functions
from utils import *
from hrga import *
from tndp import *
from export import *
from plot import *
```
### Graph Structure for Instance 1 (Toy Instance)
Example taken from Guan et al. (2006): a 9-node network used to test the basic functioning of the model and its constraints. There is only one possible path from any origin to any destination.
```
# Generating Mandl's Network
# Create graph
G = nx.Graph()
# Node Positions
node_positions = [(-2, 1), (-2, -1), (-1, 0), (0, 1), (0, 0), (0, -1), (1, 0), (2, 1), (2, -1)]
# Add nodes
for i in range(9):
G.add_node(i+1, pos=node_positions[i])
# Add edges
edge_list = [(1, 3), (2, 3), (3, 5), (5, 4), (5, 6), (5, 7), (7, 8), (7, 9)]
edge_lengths = [ 10, 10, 10, 10, 10, 10, 10, 10]
for i in range(len(edge_list)):
G.add_edge(*edge_list[i], length=edge_lengths[i])
# Plot Graph
f, ax = plt.subplots(figsize=(10, 5))
pos = nx.get_node_attributes(G, 'pos')
node_plot = nx.draw_networkx_nodes(G, pos, node_color='white', node_size=500)
edges_plot = nx.draw_networkx_edges(G, pos, width=2)
labels_plot = nx.draw_networkx_labels(G, pos)
# Change border
node_plot.set_edgecolor('k')
node_plot.set_linewidth(2)
# Display
plt.show()
```
#### Generating All Possible Routes
Generate all possible routes by enumerating the shortest path from every origin $i$ to every destination $j$. In a more realistic setting we would consider the $k$ shortest paths; this graph, however, has only one possible path between any origin and destination.
```
# Initialize List
routes = []
# Iterate
for origin in list(G.nodes()):
for destination in list(G.nodes()):
# If origin is not destination...
if origin != destination:
routes.append(nx.shortest_path(G, origin, destination, weight='length'))
# Apply function
unique = unique_routes(routes)
print(f'Number of feasible routes (after removing symmetrical routes): {len(unique)}')
```
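The `unique_routes` helper lives in the local `utils` module; a minimal stdlib sketch of what it likely does, treating a route and its reverse as the same route, is:

```python
def unique_routes_sketch(routes):
    """Deduplicate routes, treating [1, 3, 5] and [5, 3, 1] as one route."""
    seen = set()
    unique = []
    for route in routes:
        # Canonical key: the lexicographically smaller of the two orientations
        key = min(tuple(route), tuple(reversed(route)))
        if key not in seen:
            seen.add(key)
            unique.append(route)
    return unique

print(unique_routes_sketch([[1, 3, 5], [5, 3, 1], [1, 3]]))  # [[1, 3, 5], [1, 3]]
```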
#### Generate Demand Matrix
Generate a demand matrix for all OD pairs.
```
# Initialize matrix with full demand
demand_matrix = np.full((len(G.nodes()), len(G.nodes())), 5)
# Remove diagonal
np.fill_diagonal(demand_matrix, 0)
# Print
print(demand_matrix)
```
#### Generate $\Delta$ Matrix
Generate the $\Delta_{er}$ matrix, which indicates whether edge $e \in E$ is part of route $r \in R$.
```
DELTA = generate_DELTA_matrix(G, routes)
print(f'DELTA Matrix: \n')
print(DELTA)
```
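`generate_DELTA_matrix` also comes from the local `utils` module. A minimal stdlib sketch of the construction, assuming routes are node sequences and edges are unordered node pairs, is:

```python
def generate_DELTA_sketch(edges, routes):
    """DELTA[r][e] = 1 if edge e (an unordered node pair) lies on route r."""
    norm = [frozenset(e) for e in edges]
    DELTA = []
    for route in routes:
        # Consecutive node pairs along the route, direction-insensitive
        route_edges = {frozenset(pair) for pair in zip(route, route[1:])}
        DELTA.append([1 if e in route_edges else 0 for e in norm])
    return DELTA

edges = [(1, 3), (3, 5), (5, 7)]
print(generate_DELTA_sketch(edges, [[1, 3, 5], [5, 7]]))  # [[1, 1, 0], [0, 0, 1]]
```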
#### Enumerate Paths
Since we are working with a small instance and every origin-destination pair has only one possible path, we enumerate all paths from every origin to every destination. For an instance that allowed multiple paths, we would instead compute the $k$ shortest paths per OD pair.
Since there is only one path from any origin to any destination, the set of possible paths is identical to the set of routes.
```
# Generate OD Pairs List
od_pairs = generate_od_pair_list(G, demand_matrix)
# Generate Paths
paths, num_paths = generate_paths(G, od_pairs)
```
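For richer instances, the $k$ shortest simple paths mentioned above could be sketched without external libraries by enumerating simple paths and keeping the $k$ cheapest. This naive version is fine for toy graphs; real code would use Yen's algorithm or `networkx.shortest_simple_paths`:

```python
def k_shortest_paths(adj, origin, destination, k):
    """adj maps node -> {neighbor: edge_length}; returns the k cheapest simple paths."""
    results = []

    def dfs(node, path, cost):
        if node == destination:
            results.append((cost, path[:]))
            return
        for nbr, w in adj[node].items():
            if nbr not in path:  # keep the path simple
                path.append(nbr)
                dfs(nbr, path, cost + w)
                path.pop()

    dfs(origin, [origin], 0)
    return [p for _, p in sorted(results)[:k]]

adj = {1: {2: 1, 3: 4}, 2: {1: 1, 3: 1}, 3: {1: 4, 2: 1}}
print(k_shortest_paths(adj, 1, 3, 2))  # [[1, 2, 3], [1, 3]]
```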
#### Generate $\delta$ Tensor
The $\delta_{ep}^{od}$ tensor indicates if the edge $e$ is used on path $p$ chosen from origin $o$ to destination $d$. It allows for the coupling of routes and paths in the optimization model.
This matrix has shape $[od][k][e]$
```
# Generate Matrix
delta = generate_delta_matrix(G, od_pairs, num_paths, paths)
#delta
```
#### Generate Outputs
```
# Nodes
node_ids = list(G.nodes())
node_x = [G.nodes[node]['pos'][0] for node in G.nodes()]
node_y = [G.nodes[node]['pos'][1] for node in G.nodes()]
# Data
node_data = {
'id' : node_ids,
'x' : node_x,
'y' : node_y,
}
# Dataframe
nodes_df = pd.DataFrame(data=node_data)
# Print
#nodes_df
# Edges
edge_ids = [id + 1 for id in range(len(G.edges()))]
edge_origins = [u for u, v in G.edges()]
edge_destinations = [v for u, v in G.edges()]
edge_lengths = [G[u][v]['length'] for u, v in G.edges()]
edge_travel_time = [G[u][v]['length'] for u, v in G.edges()]
edge_capacity = [10 for edge in range(len(G.edges()))]
# Data
edge_data = {
'id' : edge_ids,
'origin' : edge_origins,
'destination' : edge_destinations,
'length' : edge_lengths,
'travel_time' : edge_travel_time,
'capacity' : edge_capacity
}
# Dataframe
edges_df = pd.DataFrame(data=edge_data)
# Print
#edges_df
# OD Matrix
od_ids = [id + 1 for id in range(len(od_pairs))]
od_origin = [int(od.split('-')[0]) for od in od_pairs]
od_destination = [int(od.split('-')[1]) for od in od_pairs]
od_demand = [demand_matrix[int(od.split('-')[0]) - 1][int(od.split('-')[1]) - 1] for od in od_pairs]
od_paths = [num_paths[od] for od in od_pairs]
# Data
od_data = {
'id' : od_ids,
'origin' : od_origin,
'destination' : od_destination,
'demand' : od_demand,
'paths' : od_paths
}
# Dataframe
od_df = pd.DataFrame(data=od_data)
#od_df
# Routes
route_ids = [id + 1 for id in range(len(routes))]
route_sequences = [str(sequence) for sequence in routes]
route_length = [nx.path_weight(G, route, weight='length') for route in routes]
route_stops = [len(route) for route in routes]
# Data
route_data = {
'id' : route_ids,
'sequence' : route_sequences,
'length' : route_length,
'num_stops' : route_stops
}
route_df = pd.DataFrame(data=route_data)
#route_df
k = 10
# Paths
paths_id = [id + 1 for id in range(len(paths) * k)]
paths_sequence = [str(paths[od][i]) if i < len(paths[od]) else '' for od in paths for i in range(k)]
paths_length = [nx.path_weight(G, paths[od][i], weight='length') if i < len(paths[od]) else 0 for od in paths for i in range(k)]
paths_stops = [len(paths[od][i]) if i < len(paths[od]) else 0 for od in paths for i in range(k)]
# Data
paths_data = {
'id' : paths_id,
'sequence' : paths_sequence,
'length' : paths_length,
'num_stops' : paths_stops
}
paths_df = pd.DataFrame(data=paths_data)
#paths_df
# DELTA Matrix
DELTA_value = [edge for route in DELTA for edge in route]
edge_ids = [id + 1 for id in range(len(G.edges()))] * len(routes)
route_ids = [id + 1 for id in range(len(routes)) for i in range(len(G.edges()))]
# Data
DELTA_data = {
'route_id' : route_ids,
'edge_id' : edge_ids,
'DELTA' : DELTA_value
}
# Dataframe
DELTA_df = pd.DataFrame(data=DELTA_data)
#DELTA_df
k = 10
# delta Matrix
delta_value = [edge for od in delta for path in od for edge in path]
od_ids = [id + 1 for id in range(len(od_pairs)) for i in range(k) for j in range(len(G.edges()))]
path_ids = [id + 1 for id in range(k) for edge in range(len(G.edges()))] * len(od_pairs)
edge_ids = [id + 1 for id in range(len(G.edges()))] * k * len(od_pairs)
od_origins = [int(od_pairs[id].split('-')[0]) for id in range(len(od_pairs)) for i in range(k) for j in range(len(G.edges()))]
od_destinations = [int(od_pairs[id].split('-')[1]) for id in range(len(od_pairs)) for i in range(k) for j in range(len(G.edges()))]
# Data
delta_data = {
'od' : od_ids,
'path' : path_ids,
'edge' : edge_ids,
'delta' : delta_value,
'origin' : od_origins,
'destination' : od_destinations
}
# DataFrame
delta_df = pd.DataFrame(data=delta_data)
#delta_df
# CSVs
# nodes_df.to_csv('exports/instance_01/nodes.csv', index=False, sep=',')
# edges_df.to_csv('exports/instance_01/edges.csv', index=False, sep=',')
# od_df.to_csv('exports/instance_01/od_matrix.csv', index=False, sep=',')
# DELTA_df.to_csv('exports/instance_01/big_delta.csv', index=False, sep=',')
# delta_df.to_csv('exports/instance_01/small_delta.csv', index=False, sep=',')
# route_df.to_csv('exports/instance_01/routes.csv', index=False, sep=',')
# paths_df.to_csv('exports/instance_01/paths.csv', index=False, sep=',')
```
#### Preprocess Y Variable
```
# Vectors
y_fix_od = []
y_fix_r = []
# Iterate over OD pairs
for _, row in od_df.iterrows():
    # Iterate over routes
    for r, route in enumerate(routes):
        # If neither origin nor destination is in the route, add to the vectors
        if (row['origin'] not in route) and (row['destination'] not in route):
            y_fix_od.append(row['id'])
            y_fix_r.append(r+1)
# Print Results
print(f'y_od_r original size: {od_df.shape[0] * len(routes)}')
print(f'fixed_variables: {len(y_fix_od)}')
print(f'y_od_r simplified size: {(od_df.shape[0] * len(routes)) - len(y_fix_od)}')
```
#### Reduce Dimensionality of $\delta$ Matrix
```
# Initialize W Vector
od_path_edge = []
start_index = []
finish_index = []
# Counter
i = 0
# Rename edges_df
df = edges_df
# Iterate over OD pairs
for j, od in enumerate(od_pairs):
i += 1
if j % 100 == 0:
print(f'iterating over od pair {j+1}/{len(od_pairs)}' )
# Iterate over paths
    for k in range(10):  # at most 10 candidate paths per OD pair
# If index in range
if k < len(paths[od]):
# Convert path to edges
edges_path = path_nodes_to_edges(paths[od][k])
# Add Starting Index
start_index.append(i)
            # Advance the counter past this path's edges
            i += len(edges_path) - 1
            # Add Ending Index
            finish_index.append(i)
# Iterate over edges
for edge in edges_path:
# Find edge id
edge_id = df[((df.origin == edge[0]) & (df.destination == edge[1])) | ((df.origin == edge[1]) & (df.destination == edge[0]))].id.values[0]
# Add id to list
od_path_edge.append(edge_id)
# If out of range...
else:
# Add Starting Index
start_index.append(i)
# Add Ending Index
finish_index.append(i)
print(f'Length of vector: {len(od_path_edge)}')
k = 10
# delta Matrix
od_ids = [id + 1 for id in range(len(od_pairs)) for i in range(k)]
path_ids = [id + 1 for id in range(k)] * len(od_pairs)
path_size = [len(paths[od][i]) if i < len(paths[od]) else 0 for od in od_pairs for i in range(k)]
od_origins = [int(od_pairs[id].split('-')[0]) for id in range(len(od_pairs)) for i in range(k)]
od_destinations = [int(od_pairs[id].split('-')[1]) for id in range(len(od_pairs)) for i in range(k)]
# Data
reduced_delta_data = {
'od' : od_ids,
'path' : path_ids,
'path_size' : path_size,
'w_start' : start_index,
'w_finish' : finish_index,
'origin' : od_origins,
'destination' : od_destinations
}
# DataFrame
reduced_delta_df = pd.DataFrame(data=reduced_delta_data)
reduced_delta_df
orig = delta_df.shape[0]
redu = len(od_path_edge)
print(f'Expected Reduction in Dimensionality: from {orig} elements to {redu} elements. A {((1 - (redu/orig)) * 100):.2f} % reduction')
```
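The flat `od_path_edge` vector plus the `w_start`/`w_finish` columns form a CSR-like layout: recovering the edges of one (od, path) row is just a slice. A sketch of the lookup with hypothetical data, using 1-based inclusive indices as in the export above:

```python
def path_edges(flat, w_start, w_finish, row):
    """Recover the edge ids of row `row` from the flattened, 1-based CSR-style layout."""
    s, f = w_start[row], w_finish[row]
    return flat[s - 1:f]

flat = [1, 2, 5, 3]            # hypothetical edge ids for two paths
w_start, w_finish = [1, 4], [3, 4]
print(path_edges(flat, w_start, w_finish, 0))  # [1, 2, 5]
print(path_edges(flat, w_start, w_finish, 1))  # [3]
```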
#### Reduce Dimensionality of $\Delta$ Matrix
```
# Initialize Vectors
route_list = []
edge_list = []
start_pos = []
end_pos = []
# Initialize Counter
e_start = 0
# Iterate over edges
for i, edge in enumerate(list(G.edges())):
# Append Data
edge_list.append(i+1)
start_pos.append(e_start+1)
# Iterate over routes
for route in routes:
# Transform route from nodes to edges
route_edges = path_nodes_to_edges(route)
# If edge in route
if (edge in route_edges) or (edge[::-1] in route_edges):
# Add 1 to counter
e_start += 1
# Find route ID
route_id = route_df[route_df.sequence == str(route)].id.values[0]
# Append Data
route_list.append(route_id)
# Append Finish
end_pos.append(e_start)
len(route_list)
```
### Manual Array Initialization
```
sizes = [len(G.nodes()), len(G.edges()), len(od_pairs), len(routes), k, len(od_path_edge), len(route_list), len(y_fix_od)]
print(f'OD Pairs = {len(od_pairs)}')
print(f'Routes = {len(routes)}')
print(f'Nodes = {len(G.nodes())}')
print(f'Edges = {len(G.edges())}')
print(f'Max_Paths = {k}')
# return_xpress_int_txt(sizes, filename='exports/instance_01/array_sizes.txt')
```
**Node Data**
```
# return_xpress_int_txt(nodes_df.id, filename='exports/instance_01/node_id.txt')
# return_xpress_str_txt(nodes_df.id, filename='exports/instance_01/node_name.txt')
```
**Edge Data**
```
uv = list(edges_df.origin.values) + list(edges_df.destination.values)
# return_xpress_int_txt(edges_df.id, filename='exports/instance_01/edge_id.txt')
# return_xpress_int_txt(uv, filename='exports/instance_01/edge_uv.txt')
# return_xpress_int_txt(edges_df.length, filename='exports/instance_01/edge_length.txt')
# return_xpress_int_txt(edges_df.travel_time, filename='exports/instance_01/edge_time.txt')
# return_xpress_int_txt(edges_df.capacity, filename='exports/instance_01/edge_capacity.txt')
```
**OD Matrix Data**
```
# return_xpress_int_txt(od_df.origin, filename='exports/instance_01/origins.txt')
# return_xpress_int_txt(od_df.destination, filename='exports/instance_01/destinations.txt')
# return_xpress_int_txt(od_df.demand, filename='exports/instance_01/demand.txt')
```
**Routes**
```
frequencies = [1 for route in routes]
# return_xpress_str_txt(route_df.sequence.values, filename='exports/instance_01/route_sequence.txt')
# return_xpress_int_txt(route_df.length.values, filename='exports/instance_01/route_length.txt')
# return_xpress_int_txt(frequencies, filename='exports/instance_01/route_frequency.txt')
```
**Paths**
```
# return_xpress_str_txt(paths_df.sequence.values, filename='exports/instance_01/path_sequence.txt')
# return_xpress_int_txt(paths_df.length, filename='exports/instance_01/path_length.txt')
```
**Y Preprocessing**
```
# return_xpress_int_txt(y_fix_od, filename='exports/instance_01/yfix_origin_destination.txt')
# return_xpress_int_txt(y_fix_r, filename='exports/instance_01/yfix_routes.txt')
```
**$\delta$ Dimensionality Redux**
```
# return_xpress_int_txt(od_path_edge, filename='exports/instance_01/small_delta.txt')
# return_xpress_int_txt(reduced_delta_df.w_start, filename='exports/instance_01/small_delta_start.txt')
# return_xpress_int_txt(reduced_delta_df.w_finish, filename='exports/instance_01/small_delta_finish.txt')
```
**$\Delta $ Dimensionality Redux**
```
# return_xpress_int_txt(route_list, filename='exports/instance_01/big_delta.txt')
# return_xpress_int_txt(start_pos, filename='exports/instance_01/big_delta_start.txt')
# return_xpress_int_txt(end_pos, filename='exports/instance_01/big_delta_finish.txt')
```
| github_jupyter |
**This notebook handles the data preprocessing.**
```
import os
print('Number of rows in user_artist_data.txt:')
print(os.popen('cat user_artist_data.txt | wc -l').read())
print('Format of user_artist_data.txt:')
print(os.popen('head -5 user_artist_data.txt').read())
```
We can see that `user_artist_data.txt` contains about 24 million rows. A valid row should have the format `user_id artist_id times`, so we use a shell one-liner to count how many rows actually match this format.
```
print(os.popen('grep -Ec "^[0-9]+ [0-9]+ [0-9]+$" user_artist_data.txt').read())
```
The number of valid rows equals the total row count, so `user_artist_data.txt` needs no format-related preprocessing.
```
print('Number of rows in artist_data.txt:')
print(os.popen('cat artist_data.txt | wc -l').read())
print('Format of artist_data.txt:')
print(os.popen('head -5 artist_data.txt').read())
# Oddly, when run from PyCharm, Python can invoke the shell directly and escape characters work
```
Similarly for `artist_data.txt`: the file contains about 1.8 million rows, and a valid row should have the format `artist_id <tab> artist_name`. Again we want to count the valid rows with a shell script.
Because neither Python escaping nor ordinary terminal input passes a literal `<tab>` through correctly, we run the following command directly in a terminal:
```shell
grep -Ec "^[0-9]+<Control-v><tab>[^<Control-v><tab>^M]+$"
```
Here `^M` denotes a carriage return. The command reports 1,848,063 matches, so not all rows are valid.
We therefore split the file by format:
- rows in the correct format go to `artist_correct_format_data.txt`
- rows in the wrong format go to `artist_wrong_format_data.txt`
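Instead of wrestling with literal tabs in the shell, the same split can be done in Python. A sketch (the regex mirrors the grep pattern above):

```python
import re

def split_by_format(lines, pattern):
    """Partition lines into (correct, wrong) according to a full-line regex."""
    ok = re.compile(pattern)
    correct, wrong = [], []
    for line in lines:
        (correct if ok.fullmatch(line.rstrip('\n')) else wrong).append(line)
    return correct, wrong

rows = ['123\tRadiohead\n', 'bad row\n']
good, bad = split_by_format(rows, r'[0-9]+\t[^\t\r]+')
print(len(good), len(bad))  # 1 1
```

Writing `good` to `artist_correct_format_data.txt` and `bad` to `artist_wrong_format_data.txt` reproduces the split described above.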
```
print('Number of rows in artist_alias.txt:')
print(os.popen('cat artist_alias.txt | wc -l').read())
print('Format of artist_alias.txt:')
print(os.popen('head -5 artist_alias.txt').read())
```
Likewise, `artist_alias.txt` contains just over 190,000 rows; a valid row has the format `artist_id <tab> artist_correct_id`.
For the same reason as above, we run the check directly in the shell:
```shell
grep -Ec "^[0-9]+<control-v><tab>[0-9]+$" artist_alias.txt
```
The result is 193,027, so some malformed rows exist here as well. As before:
- rows in the correct format go to `artist_correct_format_alias.txt`
- rows in the wrong format go to `artist_wrong_format_alias.txt`
```
# Map from user_id to a matrix index
user_to_index = dict()
# Map from artist_id to a matrix index
artist_to_index = dict()
# Number of users and artists seen so far
user_number = 0
artist_number = 0
artist_alias = dict()   # wrong_id -> correct_id
artist_data = dict()    # artist_id -> artist_name
user_artist = dict()    # (user_index, artist_index) -> times
with open('./artist_correct_format_alias.txt', 'r') as f:
for line in f.readlines():
line = line.replace('\n', '')
wrong, correct = line.split('\t', 2)
artist_alias[wrong] = correct
with open('./artist_correct_format_data.txt', 'r') as f:
for line in f.readlines():
line = line.replace('\n', '')
artist_id, artist_name = line.split('\t', 2)
artist_data[artist_id] = artist_name
        # If a duplicate artist_id appears, the last name read wins, but the index keeps the first occurrence
if artist_id not in artist_to_index:
artist_to_index[artist_id] = artist_number
artist_number = artist_number + 1
with open('./user_artist_data.txt', 'r') as f:
for line in f.readlines():
line = line.replace('\n', '')
user_id, artist_id, times = line.split(' ', 3)
user_index = -1
artist_index = -1
if artist_id in artist_alias:
artist_id = artist_alias.get(artist_id)
if artist_id not in artist_to_index:
            continue  # some artist_ids never appear in artist_data.txt
else:
artist_index = artist_to_index.get(artist_id)
if user_id in user_to_index:
user_index = user_to_index.get(user_id)
else:
user_to_index[user_id] = user_number
user_index = user_number
user_number = user_number + 1
        # If a duplicate (user_index, artist_index) pair appears, keep the last valid value
user_artist[(user_index, artist_index)] = eval(times)
with open('./out.txt', 'w') as f:
for item in user_artist.items():
f.write(str(item[0][0]) + ',' + str(item[0][1]) + ',' + str(item[1]) + '\n')
with open('./user_id_to_index.txt', 'w') as f:
for item in user_to_index.items():
f.write(item[0] + ',' + str(item[1]) + '\n')
with open('./artist_id_to_index.txt', 'w') as f:
for item in artist_to_index.items():
f.write(item[0] + ',' + str(item[1]) + '\n')
```
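The id-to-index interning pattern used above for both users and artists can be written more compactly with `dict.setdefault`, avoiding the separate counter variables. A sketch:

```python
def intern_id(mapping, key):
    """Return a stable dense index for key, assigning the next free index on first sight."""
    return mapping.setdefault(key, len(mapping))

user_to_index = {}
print(intern_id(user_to_index, 'u42'))  # 0
print(intern_id(user_to_index, 'u7'))   # 1
print(intern_id(user_to_index, 'u42'))  # 0 (already interned)
```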
| github_jupyter |
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
import pandas as pd
import nltk.classify.util
from nltk.corpus import stopwords
from nltk.corpus import wordnet
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
#!pip install textblob
from textblob import TextBlob
import numpy as np
import re
import csv
data = pd.read_excel('BankReviews.xlsx')
data
nltk.download('stopwords')
#start process_tweet
def processTweet(tweet):
# process the tweets
#Convert to lower case
tweet = tweet.lower()
#Convert www.* or https?://* to URL
tweet = re.sub('((www\.[^\s]+)|(https?://[^\s]+))','',tweet)
#Convert @username to AT_USER
tweet = re.sub('@[^\s]+','',tweet)
#Remove additional white spaces
tweet = re.sub('[\s]+', ' ', tweet)
#Replace #word with word
# #NIKE ---> NIKE
# and before doing that, fetch that word and
# keep it aside as that could be our topic word!
# 0 1
tweet = re.sub(r'#([^\s]+)', r'\1', tweet)
tweet = re.sub(r'[\.!:\?\-\'\"\\/]', r'', tweet)
#trim
tweet = tweet.strip('\'"')
return tweet
#initialize stopWords
stopWords = []
def replaceTwoOrMore(s):
    #collapse 3 or more repetitions of a character into a single character
pattern = re.compile(r"(.)\1{2,}", re.DOTALL)
return pattern.sub(r"\1", s)
#start getStopWordList
def getStopWordList(stopWordListFileName):
#read the stopwords file and build a list
stopWords = []
fp = open(stopWordListFileName, 'r')
line = fp.readline()
while line:
word = line.strip()
stopWords.append(word)
line = fp.readline()
fp.close()
return stopWords
st = open('Files/StopWords.txt', 'r')
stopWords = getStopWordList('Files/StopWords.txt')
def getFeatureVector(tweet):
featureVector = []
#split tweet into words
words = tweet.split()
for w in words:
        #collapse repeated characters (3+ in a row become 1)
w = replaceTwoOrMore(w)
#strip punctuation
w = w.strip('\'"?,.')
        #check if the word starts with a letter
val = re.search(r"^[a-zA-Z][a-zA-Z0-9]*$", w)
#ignore if it is a stop word
if(w in stopWords or val is None):
continue
else:
featureVector.append(w.lower())
return list(set(featureVector))
def getFeatureVector_forDF(tweet):
featureVector = []
#split tweet into words
words = tweet.split()
for w in words:
        #collapse repeated characters (3+ in a row become 1)
w = replaceTwoOrMore(w)
#strip punctuation
w = w.strip('\'"?,.')
#check if the word stats with an alphabet
val = re.search(r"^[a-zA-Z][a-zA-Z0-9]*$", w)
#ignore if it is a stop word
if(w in stopWords or val is None):
continue
else:
featureVector.append(w.lower())
return " ".join(list(set(featureVector)))
data["Clean_Review"]=data.Reviews.apply(processTweet)
data["Clean_Review"]=data.Clean_Review.apply(getFeatureVector_forDF)
data.head()
data.Clean_Review.iloc[502]
### Calculating Sentiment analysis using Textblob module
data['sentiment'] = data["Clean_Review"].apply(lambda x: TextBlob(x).sentiment.polarity)
data.head()
data.sentiment.min()
data['sentiment_cat'] = np.where(data.sentiment>0.1,'Positive',
np.where(data.sentiment<0.0,'Negative',
'Neutral'))
data[500:510]
data.Reviews.iloc[501]
cleaneddata=data.copy()
cleaneddata.drop("Reviews", inplace=True, axis=1)
cleaneddata.head()
cleaneddata.to_csv("BankReviewsCleaned.csv")
cleaneddata.sentiment_cat.value_counts()
cleaneddata.Stars.value_counts()
import nltk.classify.util
from nltk.classify import NaiveBayesClassifier
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.corpus import wordnet
#start extract_features
def extract_features(tweet):
tweet_words = set(tweet)
features = {}
for word in featureList:
features['contains(%s)' % word] = (word in tweet_words)
return features
#Read the tweets one by one and process them
inpTweets = csv.reader(open('BankReviewsCleaned.csv', 'r',encoding="Latin-1"), delimiter=',', quotechar='|')
featureList = []
# Get tweet words
import time
tweets = []
t_num=0
for row in inpTweets:
if(t_num==0):
t_num+=1
else:
sentiment = row[6]
tweet = row[4]
processedTweet = processTweet(tweet)
featureVector = getFeatureVector(processedTweet)
featureList.extend(featureVector)
tweets.append((featureVector, sentiment));
#end loop
# Remove featureList duplicates
featureList = list(set(featureList))
# Extract feature vector for all tweets in one shot
training_set = nltk.classify.util.apply_features(extract_features, tweets)
training_set[30]
# Each training example is a (feature-dict, label) pair, e.g.:
# ({'contains(good)': True, 'contains(slow)': False, ...}, 'Positive')
```
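The structure that `apply_features` produces, and that the Naive Bayes classifier consumes, is a list of (feature-dict, label) pairs. A stdlib sketch of `extract_features` on a toy vocabulary (names here are illustrative, not the notebook's actual feature list):

```python
feature_list = ['good', 'bad', 'slow']  # global vocabulary, like featureList above

def extract_features_sketch(tweet_words):
    """One boolean 'contains(word)' feature per vocabulary word."""
    words = set(tweet_words)
    return {f'contains({w})': (w in words) for w in feature_list}

example = extract_features_sketch(['good', 'service'])
print(example)
# {'contains(good)': True, 'contains(bad)': False, 'contains(slow)': False}
```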
### Classifier Algorithm : Naive Bayes Classifier
```
NBClassifier = nltk.NaiveBayesClassifier.train(training_set)
# now run this part to save your newly created model
import pickle
pickle_out = open("bank1_Save.pickle","wb")
pickle.dump(NBClassifier, pickle_out)
pickle_out.close()
# Load when required
import pickle
pickle_in = open("bank1_Save.pickle","rb")
NBClassifier = pickle.load(pickle_in)
```
## Classification using Ratings
```
#Read the tweets one by one and process them
inpTweets = csv.reader(open('BankReviewsCleaned.csv', 'r',encoding="Latin-1"), delimiter=',', quotechar='|')
featureList = []
# Get tweet words
import time
tweets = []
t_num=0
for row in inpTweets:
if(t_num==0):
t_num+=1
else:
rating=row[2]
tweet = row[4]
processedTweet = processTweet(tweet)
featureVector = getFeatureVector(processedTweet)
featureList.extend(featureVector)
tweets.append((featureVector, rating));
#end loop
# Remove featureList duplicates
featureList = list(set(featureList))
# Extract feature vector for all tweets in one shot
training_set_rating = nltk.classify.util.apply_features(extract_features, tweets)
data.Clean_Review.iloc[6]
list(training_set_rating[6])[0]['contains(expressing)']
for i in list(training_set_rating[6])[0]:
if(list(training_set_rating[6])[0][i]==True):
print(i)
NBClassifier_ratings = nltk.NaiveBayesClassifier.train(training_set_rating)
# now run this part to save your newly created model
import pickle
pickle_out = open("bankRating_Save.pickle","wb")
pickle.dump(NBClassifier_ratings, pickle_out)
pickle_out.close()
# Load when required
import pickle
pickle_in_rating = open("bankRating_Save.pickle","rb")
NBClassifier_rating = pickle.load(pickle_in_rating)
```
### Testing of model with test data
```
data.Reviews.iloc[200]
test=data.Reviews.iloc[200]
test_tweet = '''This was the best experience ever. It was like they had never gone through the process before. I could get a complete list of the documents required. Every other day it was a new request. The appraiser that they hired made no mistakes I had to send in corrections and then during the underwriting process they had to go back and make more corrections. Wyndum would continue to ask for copies of documents over and over again. There were many documents that I had to send 3 and 4 times because they them or was filed correctly, we went through 4 different people through the process. I would always use them. amazing experience'''
processedTestTweet = processTweet(test_tweet)
feature_words = extract_features(getFeatureVector(processedTestTweet))
NBClassifier_rating.classify(feature_words)
```
Submitted By, Pranjal Saxena <a>https://www.linkedin.com/in/pranjalai/ </a> <br>
pranjal.saxena2012@gmail.com
| github_jupyter |
# Multi-Strategy, Multi-Ticker Backtest Example
```
from collections import defaultdict
import OnePy as op
from OnePy.custom_module.cleaner_talib import Talib
from OnePy.custom_module.cleaner_sma import SMA
class SmaStrategy(op.StrategyBase):
def __init__(self):
super().__init__()
self.sma1 = SMA(3, 40).calculate
self.sma2 = SMA(5, 40).calculate
def handle_bar(self):
for ticker in self.env.tickers:
if self.sma1(ticker) > self.sma2(ticker):
self.buy(100, ticker, takeprofit=15,
stoploss=100)
else:
self.sell(20, ticker)
class BBANDS(op.StrategyBase):
def __init__(self):
super().__init__()
self.sma = Talib(ind='sma', frequency='D',
params=dict(timeperiod=20)).calculate
self.bbands = Talib(ind='BBANDS', frequency='D',
params=dict(timeperiod=20,
nbdevup=2,
nbdevdn=2,
matype=0),
buffer_day=30).calculate
self.switch_long = defaultdict(bool)
self.switch_short = defaultdict(bool)
self.params = dict(
position=100,
takeprofit_pct=0.01,
)
self.finished = defaultdict(list)
def handle_bar(self):
position = self.params['position']
takeprofit_pct = self.params['takeprofit_pct']
for ticker in self.env.tickers:
upperband = self.bbands(ticker)['upperband']
middleband = self.bbands(ticker)['middleband']
lowerband = self.bbands(ticker)['lowerband']
cur_price = self.cur_price(ticker)
sma = self.sma(ticker)
if cur_price > upperband > sma:
if ticker not in self.finished['long']:
self.buy(position, ticker, price=cur_price -
0.01, trailingstop_pct=0.05)
self.finished['long'].append(ticker)
elif cur_price < lowerband < sma:
if ticker not in self.finished['short']:
self.short(position, ticker, price=cur_price +
0.01, trailingstop_pct=0.05)
self.finished['short'].append(ticker)
TICKER_LIST = ['000001', '000002']  # multiple tickers
INITIAL_CASH = 20000
FREQUENCY = 'D'
START, END = '2012-08-07', '2018-08-07'
go = op.backtest.stock(TICKER_LIST, FREQUENCY, INITIAL_CASH, START, END)
# Register multiple strategies
SmaStrategy()
BBANDS()
go.output.show_setting()  # check that the strategies were registered
go.sunny()
go.output.summary2()
```
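`SmaStrategy` above relies on OnePy's `SMA` cleaner; the crossover signal itself can be sketched without the framework, using the same fast/slow window sizes of 3 and 5 (a library-free illustration, not OnePy's implementation):

```python
def sma(prices, window):
    """Simple moving average of the trailing `window` prices."""
    return sum(prices[-window:]) / window

def crossover_signal(prices, fast=3, slow=5):
    """'buy' when the fast SMA is above the slow SMA, otherwise 'sell'."""
    if len(prices) < slow:
        return None  # not enough history yet
    return 'buy' if sma(prices, fast) > sma(prices, slow) else 'sell'

print(crossover_signal([1, 1, 1, 2, 3]))  # 'buy'  (fast SMA 2.0 > slow SMA 1.6)
print(crossover_signal([3, 2, 1, 1, 1]))  # 'sell'
```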
| github_jupyter |
```
import numpy as np
import ngmix
import matplotlib.pyplot as plt
import piff
import galsim
import seaborn as sns
%matplotlib notebook
%load_ext autoreload
%autoreload 2
from matts_misc.piff_wl_sims.des_piff import DES_Piff
%%time
psf_model = DES_Piff(
file_name="/Users/Matt/DESDATA/y3_piff/y3a1-v29/242414/D00242414_r_c16_r2362p01_piff.fits")
x = 1
y = 2048
psf_obj = psf_model.getPSF(galsim.PositionD(x, y), None)
interp_psf_obj = psf_model._draw(galsim.PositionD(x, y))
psfm = psf_obj.drawImage(nx=19, ny=19, scale=0.25)
psfi = interp_psf_obj.drawImage(nx=19, ny=19, scale=0.25)
nse = np.std(psfi.array[:, 0])
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(12, 4))
axs[0].imshow(np.arcsinh(psfi.array/nse))
axs[1].imshow(np.arcsinh(psfm.array/nse))
sns.heatmap((psfi.array - psfm.array)/nse, ax=axs[2], square=True)
from galsim.hsm import FindAdaptiveMom
print(FindAdaptiveMom(psfm).observed_shape)
print(FindAdaptiveMom(psfi).observed_shape)
psf_model1 = DES_Piff(
file_name="/Users/Matt/DESDATA/y3_piff/y3a1-v29/242414/D00242414_r_c16_r2362p01_piff.fits")
psf_model2 = DES_Piff(
file_name="/Users/Matt/DESDATA/y3_piff/y3a1-v29/242414/D00242414_r_c16_r2362p01_piff.fits")
x = 10.5
y = 200.6
psf_obj1 = psf_model1.getPSF(galsim.PositionD(x, y), None)
psf_obj2 = psf_model2.getPSF(galsim.PositionD(x, y), None)
psfm1 = psf_obj1.drawImage(nx=19, ny=19, scale=0.25)
psfm2 = psf_obj2.drawImage(nx=19, ny=19, scale=0.25)
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(12, 4))
axs[0].imshow(psfm1.array)
axs[1].imshow(psfm2.array)
sns.heatmap((psfm2.array - psfm1.array), ax=axs[2], square=True)
print('Linf:', np.max(np.abs(psfm1.array - psfm2.array)))
psf_model = piff.read("/Users/Matt/DESDATA/y3_piff/y3a1-v29/242414/D00242414_r_c16_r2362p01_piff.fits")
img = galsim.Image(17, 17, scale=0.25)
image_pos = galsim.PositionD(x=10, y=11)
dx = image_pos.x - int(image_pos.x + 0.5)
dy = image_pos.y - int(image_pos.y + 0.5)
img = psf_model.draw(
image_pos.x,
image_pos.y,
image=img,
offset=(-dx, -dy))
nse = np.std(np.concatenate([img.array[0, :], img.array[-1, :]]))
wgt = np.ones_like(img.array) / nse**2
jac = ngmix.DiagonalJacobian(x=8, y=8, scale=0.25)
obs = ngmix.Observation(image=img.array, weight=wgt, jacobian=jac)
from ngmix.fitting import LMSimple, FitterBase
lm = LMSimple(obs, 'turb')
lm.go(am.get_result()['pars'])  # 'am' is an adaptive-moments fitter from an earlier cell (not shown); its pars seed the LM fit
gm = lm.get_gmix()
model_img = gm.make_image((17, 17), jacobian=obs.jacobian)
sns.heatmap((model_img - obs.image) * np.sqrt(obs.weight))
loc = 5
plt.figure()
plt.plot(obs.image[loc, :], label='data', color='b')
plt.plot(obs.image[loc, :] + nse, 'b:', label='data')
plt.plot(obs.image[loc, :] - nse, 'b:', label='data')
plt.plot(model_img[loc, :], label='model', color='r')
plt.legend()
loc = 5
plt.figure()
plt.plot(obs.image[:, loc], label='data', color='b')
plt.plot(obs.image[:, loc] + nse, 'b:', label='data')
plt.plot(obs.image[:, loc] - nse, 'b:', label='data')
plt.plot(model_img[:, loc], label='model', color='r')
plt.legend()
pm = psf_model.wcs[0]
```
| github_jupyter |