# NWB-Adapter (Simple)
This notebook demonstrates how to use the new `dj.AttributeAdapter` feature to work with `NWB` objects.
```
# import datajoint, nwb modules
%matplotlib inline
import datajoint as dj
import os
import pynwb
from pynwb import NWBFile, NWBHDF5IO
from datetime import datetime
from dateutil.tz import tzlocal
import json
import numpy as np
import pathlib
import warnings
warnings.filterwarnings('ignore')
os.environ['DJ_SUPPORT_ADAPTED_TYPES'] = 'TRUE'
os.environ['DJ_SUPPORT_FILEPATH_MANAGEMENT'] = 'TRUE'
```
# Objective
+ Build `dj.AttributeAdapter` for ***NWBFile*** object
+ Use this in a DataJoint table
+ Demo `.insert()` and `.fetch()`
## Step 1 - Create a DataJoint AttributeAdapter for NWB object
We need to define a class inherited from `dj.AttributeAdapter` and instantiate it as a variable named ***nwb_file***.
```
exported_nwb_dir = dj.config['stores']['nwbstore']['location']
exported_nwb_dir
class NWBFileAdapter(dj.AttributeAdapter):
attribute_type = 'filepath@nwbstore' # nwbstore is some directory (either local or on cloud)
def put(self, nwb):
save_file_name = ''.join([nwb.identifier, '.nwb'])
# save the file
with NWBHDF5IO(os.path.join(exported_nwb_dir, save_file_name), mode='w') as io:
io.write(nwb)
print(f'Write NWB 2.0 file: {save_file_name}')
# return the filepath to be inserted into DataJoint tables
return os.path.join(exported_nwb_dir, save_file_name)
def get(self, path):
# read the nwb filepath and return an nwb file object back to the user
return NWBHDF5IO(path, mode='r').read()
```
#### Instantiate for use as a DataJoint type
```
nwb_file = NWBFileAdapter()
```
## Step 2 - Create a new schema and the ***NWB*** table
This ***NWB*** table uses `nwb_id` as its primary key and is designed to store one NWB object (NWBFile) per row
```
schema = dj.schema('demo_nwb_adapter')
@schema
class NWB(dj.Manual):
definition = """
nwb_id: int
---
nwb: <nwb_file>
"""
NWB()
```
Note that the table definition above sets the ***nwb*** attribute to be of type ***< nwb_file >***.
This is why we defined ***nwb_file*** as an instance of ***NWBFileAdapter*** - see Step 1
## Step 3 - Build an NWBFile
Here, we build a very simple NWB object using the `pynwb` package, for the sake of demonstration
```
# -- create an NWBFile (pynwb.file.NWBFile)
nwb = NWBFile(identifier='nwb_01',
session_description='',
session_start_time=datetime.strptime('2019-10-20', '%Y-%m-%d'),
file_create_date=datetime.now(tzlocal()),
experimenter='John Smith')
# -- add subject
nwb.subject = pynwb.file.Subject(subject_id='animal_01', sex='F')
nwb
```
## Step 4 - Insert to the ***NWB*** table
```
NWB()
NWB.insert1({'nwb_id': 0, 'nwb': nwb})
NWB()
```
### Now, fetch that NWB file back
```
fetched_nwb = (NWB & 'nwb_id=0').fetch1('nwb')
fetched_nwb
```
### Let's also look at the directory where all the NWB files are generated (configured in the `nwbstore`)
```
os.listdir(exported_nwb_dir)
```
## This concludes the basic showcase of using `dj.AttributeAdapter` to work with `NWB` objects
Continue further to see more examples, but the core usage is demonstrated above
```
# -- create NWB
nwb2 = NWBFile(identifier='nwb_02',
session_description='',
session_start_time=datetime.strptime('2019-10-20', '%Y-%m-%d'),
file_create_date=datetime.now(tzlocal()),
experimenter='John Smith')
# -- add subject
nwb2.subject = pynwb.file.Subject(
subject_id='animal_01',
sex='F')
# -- create NWB
nwb3 = NWBFile(identifier='nwb_03',
session_description='',
session_start_time=datetime.strptime('2019-10-20', '%Y-%m-%d'),
file_create_date=datetime.now(tzlocal()),
experimenter='John Smith')
# -- add subject
nwb3.subject = pynwb.file.Subject(
subject_id='animal_01',
sex='F')
NWB.insert([{'nwb_id': 2, 'nwb': nwb2},
{'nwb_id': 3, 'nwb': nwb3}])
NWB()
fetch_nwb3 = (NWB & 'nwb_id=3').fetch1('nwb')
fetch_nwb3
```
# How to Use the Keras Functional API for Deep Learning
Keras makes creating deep learning models fast and simple.
The sequential API allows you to create models for most problems by stacking layers one at a time. While this approach is simple and covers the construction of many deep learning network architectures, it has limitations: it does not let you create models with shared layers, or networks with multiple inputs or outputs.
The functional API in Keras is an alternative way of creating network models that offers more flexibility, including the ability to create more complex models.
In this article, you will learn how to use the more flexible functional API in Keras to define deep learning models.
After working through the examples in this article, you will know:
* The difference between the Sequential and functional APIs.
* How to use the functional API to define simple multilayer perceptron (MLP), convolutional neural network (CNN), and recurrent neural network (RNN) models.
* How to define more complex models with shared layers and multiple inputs and outputs.
```
# environment for this Jupyter Notebook
import platform
import tensorflow
import keras
print("Platform: {}".format(platform.platform()))
print("Tensorflow version: {}".format(tensorflow.__version__))
print("Keras version: {}".format(keras.__version__))
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from IPython.display import Image
```
## 1. Keras Sequential Models
Keras provides a Sequential model API.
It is a relatively simple way to create deep learning models: we create an instance of the Keras Sequential class, then create model layers and add them to it.
For example, multiple layers can be defined and passed to Sequential at once as an array:
```
from keras.models import Sequential
from keras.layers import Dense
# build the model
model = Sequential([Dense(2, input_shape=(1,)), Dense(1)])
```
Of course, we can also add the layers one at a time:
```
from keras.models import Sequential
from keras.layers import Dense
# build the model
model = Sequential()
model.add(Dense(2, input_shape=(1,)))
model.add(Dense(1))
```
The Sequential model API is very useful and convenient in most situations, but it has some limitations.
For example, it cannot express network topologies that have multiple different inputs, produce multiple outputs, or form complex models that reuse shared layers.
## 2. Building Models with the Keras Functional API
The Keras functional API provides a more flexible way to build network models.
It allows you to define models with multiple inputs or outputs, as well as models that share layers. Beyond that, it allows you to define ad-hoc acyclic network graphs.
Models are defined by creating layer instances and connecting them directly to each other in pairs, then defining a Model that specifies which layers serve as the inputs and outputs of the model.
Let's look at the three unique aspects of the Keras functional API in turn:
### 2.1 Defining Input
Unlike the Sequential model, you must create a standalone Input layer instance and define the shape of the input data tensor.
The Input layer takes a tensor shape argument, a tuple that declares the dimensions of the input tensor.
For example, to flatten each MNIST image (28x28) into a one-dimensional (784) tensor as the input to a multilayer perceptron (MLP):
```
from keras.layers import Input
mnist_input = Input(shape=(784,))
```
### 2.2 Connecting Layers
The layers in a model are connected pairwise, like Lego bricks with a stud on one side and a socket on the other: the output of one layer is fed into the input of another.
This is done by specifying where the input comes from when defining each new layer. Bracket notation is used, so that after a layer is created, the layer that acts as its input is specified.
Let's make this concrete with a short example. We can create the input layer as above, then create a hidden Dense layer that receives its input from the input layer.
```
from keras.layers import Input
from keras.layers import Dense
mnist_input = Input(shape=(784,))
hidden = Dense(512)(mnist_input)
```
It is this layer-by-layer connection style that gives the functional API its flexibility. You can see how easy it is to start building dynamic neural networks.
### 2.3 Creating the Model
After creating all the model layers and connecting them together, you must define a Model instance.
As with the Sequential API, this model is what you use to summarize, fit, evaluate, and predict.
Keras provides a Model class that you can use to create a model from your layers. It only requires you to specify the model's first input layer and its final output layer. For example:
```
from keras.layers import Input
from keras.layers import Dense
from keras.models import Model
mnist_input = Input(shape=(784,))
hidden = Dense(512)(mnist_input)
model = Model(inputs=mnist_input, outputs=hidden)
```
Now that we know all the key parts of the Keras functional API, let's work through defining a series of different models.
Each example is executable and prints the network structure and produces a network diagram. I recommend doing this for your own models, to make explicit exactly what network structure you have defined.
I hope these examples serve as templates when you use the functional API to define your own models in the future.
## 3. Standard Network Models
When starting out with the functional API, it helps to first see how some standard neural network models are defined.
In this section we will define a simple multilayer perceptron (MLP), a convolutional neural network (CNN), and a recurrent neural network (RNN).
These examples will provide a foundation for understanding more complex network constructions later.
### 3.1 Multilayer Perceptron
Let's define a multilayer perceptron (MLP) model for multi-class classification.
The model has 784 inputs, 3 hidden layers with 512, 216, and 128 hidden neurons, and an output layer with 10 outputs.
The `relu` activation function is used in each hidden layer, and the `softmax` activation function is used in the output layer for multi-class classification.
```
# multilayer perceptron (MLP) model
from keras.models import Model
from keras.layers import Input, Dense
from keras.utils import plot_model
mnist_input = Input(shape=(784,), name='input')
hidden1 = Dense(512, activation='relu', name='hidden1')(mnist_input)
hidden2 = Dense(216, activation='relu', name='hidden2')(hidden1)
hidden3 = Dense(128, activation='relu', name='hidden3')(hidden2)
output = Dense(10, activation='softmax', name='output')(hidden3)
model = Model(inputs=mnist_input, outputs=output)
# print the network structure
model.summary()
# generate the network topology diagram
plot_model(model, to_file='multilayer_perceptron_graph.png')
# display the network topology diagram
Image('multilayer_perceptron_graph.png')
```
### 3.2 Convolutional Neural Network (CNN)
We will define a convolutional neural network for image classification.
The model takes grayscale 28×28 images as input, followed by a sequence of two convolutional and pooling layers as a feature extractor,
then a fully connected layer to interpret the features, and an output layer with `softmax` activation for 10-class prediction.
```
# convolutional neural network (CNN)
from keras.models import Model
from keras.layers import Input, Dense, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPool2D
from keras.utils import plot_model
mnist_input = Input(shape=(28, 28, 1), name='input')
conv1 = Conv2D(128, kernel_size=4, activation='relu', name='conv1')(mnist_input)
pool1 = MaxPool2D(pool_size=(2, 2), name='pool1')(conv1)
conv2 = Conv2D(64, kernel_size=4, activation='relu', name='conv2')(pool1)
pool2 = MaxPool2D(pool_size=(2, 2), name='pool2')(conv2)
flat = Flatten(name='flat')(pool2)  # flatten the feature maps before the fully connected layer
hidden1 = Dense(64, activation='relu', name='hidden1')(flat)
output = Dense(10, activation='softmax', name='output')(hidden1)
model = Model(inputs=mnist_input, outputs=output)
# print the network structure
model.summary()
# generate the network topology diagram
plot_model(model, to_file='convolutional_neural_network.png')
# display the network topology diagram
Image('convolutional_neural_network.png')
```
### 3.3 Recurrent Neural Network (RNN)
We will define a long short-term memory (LSTM) recurrent neural network for image classification.
The model expects 784 time steps of one feature as input. It has a single LSTM hidden layer to extract features from the sequence,
followed by a fully connected layer to interpret the LSTM output, followed by an output layer for 10-class prediction.
```
# recurrent neural network (RNN)
from keras.models import Model
from keras.layers import Input, Dense
from keras.layers.recurrent import LSTM
from keras.utils import plot_model
mnist_input = Input(shape=(784, 1), name='input') # treat each pixel as one step in an ordered sequence of time_steps
lstm1 = LSTM(128, name='lstm1')(mnist_input)
hidden1 = Dense(128, activation='relu', name='hidden1')(lstm1)
output = Dense(10, activation='softmax', name='output')(hidden1)
model = Model(inputs=mnist_input, outputs=output)
# print the network structure
model.summary()
# generate the network topology diagram
plot_model(model, to_file='recurrent_neural_network.png')
# display the network topology diagram
Image('recurrent_neural_network.png')
```
## 4. Shared Layer Models
Multiple layers can share the output of one layer, using it as their input.
For example, one input might feed multiple different feature extraction layers, or multiple layers might interpret the output of one feature extraction layer.
Let's look at both examples.
### 4.1 Shared Input Layer
We define multiple convolutional layers with different kernel sizes to interpret an image input.
The model takes 28×28-pixel grayscale images. Two CNN feature-extraction submodels share this input; the first has a kernel size of 4 and the second a kernel size of 8.
The outputs of these feature-extraction submodels are flattened into vectors and concatenated into one long vector, which is then passed to a fully connected layer
before 10-class prediction at the final output layer.
```
# shared input layer
from keras.models import Model
from keras.layers import Input, Dense, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPool2D
from keras.layers.merge import concatenate
from keras.utils import plot_model
# input layer
mnist_input = Input(shape=(28, 28, 1), name='input')
# first feature extraction layer
conv1 = Conv2D(32, kernel_size=4, activation='relu', name='conv1')(mnist_input) # <-- shares the input
pool1 = MaxPool2D(pool_size=(2, 2), name='pool1')(conv1)
flat1 = Flatten()(pool1)
# second feature extraction layer
conv2 = Conv2D(16, kernel_size=8, activation='relu', name='conv2')(mnist_input) # <-- shares the input
pool2 = MaxPool2D(pool_size=(2, 2), name='pool2')(conv2)
flat2 = Flatten()(pool2)
# concatenate the outputs of the two feature extraction layers
merge = concatenate([flat1, flat2])
# fully connected layer
hidden1 = Dense(64, activation='relu', name='hidden1')(merge)
# output layer
output = Dense(10, activation='softmax', name='output')(hidden1)
# assemble the whole network with Model
model = Model(inputs=mnist_input, outputs=output)
# print the network structure
model.summary()
# plot graph
plot_model(model, to_file='shared_input_layer.png')
# display the network topology diagram
Image('shared_input_layer.png')
```
### 4.2 Shared Feature Extraction Layer
We will use two parallel submodels to interpret the output of an LSTM feature extractor for sequence classification.
The input to the model is 784 time steps of 1 feature. An LSTM layer with 128 memory cells interprets this sequence. The first interpretation model is a shallow, singly connected layer;
the second is a deep 3-layer model. The outputs of both interpretation models are concatenated into one long vector, which is passed to the output layer that makes a 10-class classification prediction.
```
from keras.models import Model
from keras.layers import Input, Dense
from keras.layers.recurrent import LSTM
from keras.layers.merge import concatenate
from keras.utils import plot_model
# input layer
mnist_input = Input(shape=(784, 1), name='input') # treat each pixel as one step in an ordered sequence of time_steps
# feature extraction layer
extract1 = LSTM(128, name='lstm1')(mnist_input)
# first interpretation layer
interp1 = Dense(10, activation='relu', name='interp1')(extract1) # <-- shares the LSTM output
# second interpretation layer
interp21 = Dense(64, activation='relu', name='interp21')(extract1) # <-- shares the LSTM output
interp22 = Dense(32, activation='relu', name='interp22')(interp21)
interp23 = Dense(16, activation='relu', name='interp23')(interp22)
# concatenate the outputs of the two interpretation layers
merge = concatenate([interp1, interp23], name='merge')
# output layer
output = Dense(10, activation='softmax', name='output')(merge)
# assemble the whole network with Model
model = Model(inputs=mnist_input, outputs=output)
# print the network structure
model.summary()
# plot graph
plot_model(model, to_file='shared_feature_extractor.png')
# display the network topology diagram
Image('shared_feature_extractor.png')
```
## 5. Multiple Input and Output Models
The functional API can also be used to develop more complex models that have multiple inputs or multiple outputs.
### 5.1 Multiple Input Model
We will develop an image classification model that takes two versions of an image as input, each of a different size: a 64×64 grayscale version and a 32×32 color version. Separate CNN feature-extraction submodels operate on each input, and the results of the two submodels are concatenated for interpretation and final prediction.
Note that when creating the Model() instance, we define the two input layers as an array.
```
# multiple input model
from keras.models import Model
from keras.layers import Input, Dense, Flatten
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPool2D
from keras.layers.merge import concatenate
from keras.utils import plot_model
# first input layer
img_gray_bigsize = Input(shape=(64, 64, 1), name='img_gray_bigsize')
conv11 = Conv2D(32, kernel_size=4, activation='relu', name='conv11')(img_gray_bigsize)
pool11 = MaxPool2D(pool_size=(2, 2), name='pool11')(conv11)
conv12 = Conv2D(16, kernel_size=4, activation='relu', name='conv12')(pool11)
pool12 = MaxPool2D(pool_size=(2, 2), name='pool12')(conv12)
flat1 = Flatten()(pool12)
# second input layer
img_rgb_smallsize = Input(shape=(32, 32, 3), name='img_rgb_smallsize')
conv21 = Conv2D(32, kernel_size=4, activation='relu', name='conv21')(img_rgb_smallsize)
pool21 = MaxPool2D(pool_size=(2, 2), name='pool21')(conv21)
conv22 = Conv2D(16, kernel_size=4, activation='relu', name='conv22')(pool21)
pool22 = MaxPool2D(pool_size=(2, 2), name='pool22')(conv22)
flat2 = Flatten()(pool22)
# concatenate the outputs of the two feature extraction branches
merge = concatenate([flat1, flat2])
# interpret the features with hidden fully connected layers
hidden1 = Dense(128, activation='relu', name='hidden1')(merge)
hidden2 = Dense(64, activation='relu', name='hidden2')(hidden1)
# output layer
output = Dense(10, activation='softmax', name='output')(hidden2)
# assemble the whole network with Model
model = Model(inputs=[img_gray_bigsize, img_rgb_smallsize], outputs=output)
# print the network structure
model.summary()
# plot graph
plot_model(model, to_file='multiple_inputs.png')
# display the network topology diagram
Image('multiple_inputs.png')
```
### 5.2 Multiple Output Model
We will develop a model that makes two different types of predictions. Given an input sequence of 784 time steps of one feature, the model both classifies the sequence and outputs a new sequence of the same length.
An LSTM layer interprets the input sequence and returns the hidden state for each time step. The first output model creates a stacked LSTM, interprets these features, and makes a multi-class prediction. The second output model uses the same output layer to make a multi-class prediction for each input time step.
```
# multiple output model
from keras.models import Model
from keras.layers import Input, Dense
from keras.layers.recurrent import LSTM
from keras.layers.wrappers import TimeDistributed
from keras.utils import plot_model
# input layer
mnist_input = Input(shape=(784, 1), name='input') # treat each pixel as one step in an ordered sequence of time_steps
# feature extraction layer
extract = LSTM(64, return_sequences=True, name='extract')(mnist_input)
# classification output
class11 = LSTM(32, name='class11')(extract)
class12 = Dense(32, activation='relu', name='class12')(class11)
output1 = Dense(10, activation='softmax', name='output1')(class12)
# sequence output
output2 = TimeDistributed(Dense(10, activation='softmax'), name='output2')(extract)
# assemble the whole network with Model
model = Model(inputs=mnist_input, outputs=[output1, output2])
# print the network structure
model.summary()
# plot graph
plot_model(model, to_file='multiple_outputs.png')
# display the network topology diagram
Image('multiple_outputs.png')
```
## 6. Best Practices
Here are some tips to help you get the most out of defining your own models with the functional API.
* **Consistent variable names.** Use the same variable names for the input (visible) and output layers, and even for the hidden layers (hidden1, hidden2). This helps connect many layers together correctly.
* **Review layer summaries.** Always print the model summary and review the layer outputs to ensure the model is connected together as you expect.
* **Review the network diagram.** Always create a plot of the network topology whenever possible and review it to ensure everything is connected as you intended.
* **Name the layers.** You can assign names to layers, which makes the model summary and network diagram easier to read. For example: Dense(1, name='hidden1').
* **Separate submodels.** Consider developing submodels separately and combining them at the end.
## Conclusion
Some interesting take-aways from this article:
* Keras can be used very flexibly to build complex deep learning networks
* Essentially every deep learning network topology can be traced back to a paper
* Understanding the principles and applications of each network topology is the best way to strengthen your skills
References:
* [How to Use the Keras Functional API for Deep Learning](https://machinelearningmastery.com/keras-functional-api-deep-learning/)
* [Keras website](http://keras.io/)
# Hong Kongese Language Identifier
This notebook contains modifications to make it run with the Hong Kongese language identification dataset. The only difference is that we do not load the English vectors because they will be useless on Hong Kongese.
This notebook uses a dataset with 63 each of Hong Kongese and Standard Chinese articles.
# 5 - Multi-class Sentiment Analysis
In all of the previous notebooks we have performed sentiment analysis on a dataset with only two classes, positive or negative. When we have only two classes our output can be a single scalar, bound between 0 and 1, that indicates what class an example belongs to. When we have more than 2 classes, our output must be a $C$ dimensional vector, where $C$ is the number of classes.
In this notebook, we'll be performing classification on a dataset with 6 classes. Note that this dataset isn't actually a sentiment analysis dataset, it's a dataset of questions and the task is to classify what category the question belongs to. However, everything covered in this notebook applies to any dataset with examples that contain an input sequence belonging to one of $C$ classes.
Below, we setup the fields, and load the dataset.
The first difference is that we do not need to set the `dtype` in the `LABEL` field. When doing a multi-class problem, PyTorch expects the labels to be numericalized `LongTensor`s.
The second difference is that we use `TREC` instead of `IMDB` to load the `TREC` dataset. The `fine_grained` argument allows us to use the fine-grained labels (of which there are 50 classes) or not (in which case they'll be 6 classes). You can change this how you please.
```
import torch
from torchtext import data
from torchtext import datasets
import random
SEED = 1234
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
DATASET="63"
```
Custom tokenizer to simply split at character level.
```
def tokenizer(text): # create a tokenizer function
return list(map(str, text.replace(" ", "")))
```
Load dataset from data/language directory.
```
TEXT = data.Field(tokenize=tokenizer)
LABEL = data.LabelField()
fields = {'language': ('label', LABEL), 'text': ('text', TEXT)}
train_data, valid_data, test_data = data.TabularDataset.splits(
path = 'data/language/' + DATASET,
train = 'train.json',
validation = 'valid.json',
test = 'test.json',
format = 'json',
fields = fields
)
```
Let's look at one of the examples in the training set.
```
vars(train_data[-1])
```
Next, we'll build the vocabulary. As this dataset is small, it also has a small vocabulary, which means we do not need to set a `max_size` on the vocabulary as before.
```
TEXT.build_vocab(train_data)
LABEL.build_vocab(train_data)
```
Next, we can check the labels.
The 6 labels (for the non-fine-grained case) correspond to the 6 types of questions in the dataset:
- `HUM` for questions about humans
- `ENTY` for questions about entities
- `DESC` for questions asking you for a description
- `NUM` for questions where the answer is numerical
- `LOC` for questions where the answer is a location
- `ABBR` for questions asking about abbreviations
```
print(LABEL.vocab.stoi)
```
As always, we set up the iterators.
```
BATCH_SIZE = 64
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
train_iterator, valid_iterator, test_iterator = data.BucketIterator.splits(
(train_data, valid_data, test_data),
batch_size=BATCH_SIZE,
device=device,
sort_key=lambda x: len(x.text), # the BucketIterator needs to be told what function it should use to group the data.
sort_within_batch=False)
```
We'll be using the CNN model from the previous notebook, however any of the models covered in these tutorials will work on this dataset. The only difference is now the `output_dim` will be $C$ instead of $1$.
```
import torch.nn as nn
import torch.nn.functional as F
class CNN(nn.Module):
def __init__(self, vocab_size, embedding_dim, n_filters, filter_sizes, output_dim, dropout):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.convs = nn.ModuleList([
nn.Conv2d(in_channels = 1, out_channels = n_filters,
kernel_size = (fs, embedding_dim))
for fs in filter_sizes
])
self.fc = nn.Linear(len(filter_sizes) * n_filters, output_dim)
self.dropout = nn.Dropout(dropout)
def forward(self, text):
#text = [sent len, batch size]
text = text.permute(1, 0)
#text = [batch size, sent len]
embedded = self.embedding(text)
#embedded = [batch size, sent len, emb dim]
embedded = embedded.unsqueeze(1)
#embedded = [batch size, 1, sent len, emb dim]
conved = [F.relu(conv(embedded)).squeeze(3) for conv in self.convs]
#conv_n = [batch size, n_filters, sent len - filter_sizes[n]]
pooled = [F.max_pool1d(conv, conv.shape[2]).squeeze(2) for conv in conved]
#pooled_n = [batch size, n_filters]
cat = self.dropout(torch.cat(pooled, dim=1))
#cat = [batch size, n_filters * len(filter_sizes)]
return self.fc(cat)
```
We define our model, making sure to set `OUTPUT_DIM` to $C$. We can get $C$ easily by using the size of the `LABEL` vocab, much like we used the length of the `TEXT` vocab to get the size of the vocabulary of the input.
```
INPUT_DIM = len(TEXT.vocab)
EMBEDDING_DIM = 100
N_FILTERS = 100
FILTER_SIZES = [2,3,4]
OUTPUT_DIM = len(LABEL.vocab)
DROPOUT = 0.5
model = CNN(INPUT_DIM, EMBEDDING_DIM, N_FILTERS, FILTER_SIZES, OUTPUT_DIM, DROPOUT)
```
Another difference from the previous notebooks is our loss function (aka criterion). Before we used `BCEWithLogitsLoss`, however now we use `CrossEntropyLoss`. Without going into too much detail, `CrossEntropyLoss` performs a *softmax* function over our model outputs and the loss is given by the *cross entropy* between that and the label.
Generally:
- `CrossEntropyLoss` is used when our examples exclusively belong to one of $C$ classes
- `BCEWithLogitsLoss` is used when our examples exclusively belong to only 2 classes (0 and 1) and is also used in the case where our examples belong to between 0 and $C$ classes (aka multilabel classification).
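As a quick sanity check of this relationship, `CrossEntropyLoss` on raw logits gives the same value as applying `log_softmax` followed by `NLLLoss`. This small standalone sketch uses made-up logits and is independent of the model in this notebook:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# made-up batch of model outputs: [batch size, n classes]
logits = torch.tensor([[5.1, 0.3, 0.1, 2.1, 0.2, 0.6],
                       [0.2, 3.0, 0.1, 0.4, 1.5, 0.0]])
labels = torch.tensor([0, 1])  # class indices, a LongTensor

# CrossEntropyLoss applies log-softmax internally...
ce = nn.CrossEntropyLoss()(logits, labels)
# ...so it matches NLLLoss applied to explicitly log-softmaxed outputs
nll = nn.NLLLoss()(F.log_softmax(logits, dim=1), labels)
assert torch.allclose(ce, nll)
```

This is also why the model's final layer returns raw logits rather than applying a softmax itself.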
```
import torch.optim as optim
optimizer = optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()
model = model.to(device)
criterion = criterion.to(device)
```
Before, we had a function that calculated accuracy in the binary label case, where we said if the value was over 0.5 then we would assume it is positive. In the case where we have more than 2 classes, our model outputs a $C$ dimensional vector, where the value of each element is the belief that the example belongs to that class.
For example, in our labels we have: 'HUM' = 0, 'ENTY' = 1, 'DESC' = 2, 'NUM' = 3, 'LOC' = 4 and 'ABBR' = 5. If the output of our model was something like: **[5.1, 0.3, 0.1, 2.1, 0.2, 0.6]** this means that the model strongly believes the example belongs to class 0, a question about a human, and slightly believes the example belongs to class 3, a numerical question.
We calculate the accuracy by performing an `argmax` to get the index of the maximum value in the prediction for each element in the batch, and then counting how many times this equals the actual label. We then average this across the batch.
```
def categorical_accuracy(preds, y):
"""
Returns accuracy per batch, i.e. if you get 8/10 right, this returns 0.8, NOT 8
"""
max_preds = preds.argmax(dim=1, keepdim=True) # get the index of the max probability
correct = max_preds.squeeze(1).eq(y)
return correct.sum()/torch.FloatTensor([y.shape[0]])
```
The training loop is similar to before, without the need to `squeeze` the model predictions as `CrossEntropyLoss` expects the input to be **[batch size, n classes]** and the label to be **[batch size]**.
The label needs to be a `LongTensor`, which it is by default as we did not set the `dtype` to a `FloatTensor` as before.
```
def train(model, iterator, optimizer, criterion):
epoch_loss = 0
epoch_acc = 0
model.train()
for batch in iterator:
optimizer.zero_grad()
predictions = model(batch.text)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
The evaluation loop is, again, similar to before.
```
def evaluate(model, iterator, criterion):
epoch_loss = 0
epoch_acc = 0
model.eval()
with torch.no_grad():
for batch in iterator:
predictions = model(batch.text)
loss = criterion(predictions, batch.label)
acc = categorical_accuracy(predictions, batch.label)
epoch_loss += loss.item()
epoch_acc += acc.item()
return epoch_loss / len(iterator), epoch_acc / len(iterator)
```
Next, we train our model.
```
%%time
N_EPOCHS = 30
lowest_valid_loss = 100
for epoch in range(N_EPOCHS):
train_loss, train_acc = train(model, train_iterator, optimizer, criterion)
valid_loss, valid_acc = evaluate(model, valid_iterator, criterion)
# save the model with the lowest validation loss for use later
saved = False
if valid_loss < lowest_valid_loss:
lowest_valid_loss = valid_loss
with open("./models/language-identifier-" + DATASET + "-best.pt", 'wb') as fb:
saved = True
torch.save(model, fb)
print(f'| Epoch: {epoch+1:02} | Train Loss: {train_loss:.3f} | Train Acc: {train_acc*100:.2f}% | Val. Loss: {valid_loss:.3f} | Val. Acc: {valid_acc*100:.2f}% | Saved: {saved}')
with open("./models/language-identifier-" + DATASET + "-final.pt", 'wb') as ff:
torch.save(model, ff)
```
For the non-fine-grained case, we should get an accuracy of around 90%. For the fine-grained case, we should get around 70%.
```
test_loss, test_acc = evaluate(model, test_iterator, criterion)
print(f'| Test Loss: {test_loss:.3f} | Test Acc: {test_acc*100:.2f}% |')
```
Deep learning models tend to overfit the training data if trained for too many epochs. We'll compare with the best model we've found.
```
with open("./models/language-identifier-" + DATASET + "-best.pt", 'rb') as fbl:
best_model = torch.load(fbl)
best_model_test_loss, best_model_test_acc = evaluate(best_model, test_iterator, criterion)
print(f'| Test Loss: {best_model_test_loss:.3f} | Test Acc: {best_model_test_acc*100:.2f}% |')
```
Choose the model with the lowest loss.
```
if test_loss > best_model_test_loss:
print("Will use best_model.")
selected_model = best_model
else:
print("Will use final model.")
selected_model = model
```
Similar to how we made a function to predict sentiment for any given sentence, we can now make a function that will predict the class of a given input.
The only difference here is that instead of using a sigmoid function to squash the input between 0 and 1, we use the `argmax` to get the highest predicted class index. We then use this index with the label vocab to get the human readable label.
```
def predict_sentiment(sentence, trained_model, min_len=4):
tokenized = tokenizer(sentence)
if len(tokenized) < min_len:
tokenized += ['<pad>'] * (min_len - len(tokenized))
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(1)
preds = trained_model(tensor)
print(preds)
max_preds = preds.argmax(dim=1)
return max_preds.item()
```
Now, let's try it out on a few different questions...
```
pred_class = predict_sentiment("特朗普上周四(7日)曾表示,在3月1日達成貿易協議的最後期限前,他不會與中國國家主席習近平會晤。", selected_model)
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')
pred_class = predict_sentiment("喺未有互聯網之前,你老母叫你做人唔好太高眼角,正正常常嘅男人嫁出去就算。", selected_model)
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')
pred_class = predict_sentiment("I need to get some food.", selected_model)
print(f'Predicted class is: {pred_class} = {LABEL.vocab.itos[pred_class]}')
def range_predictions(prelist, trained_model, min_len=4):
min_len = 4
predict_hky = {}
predict_zh = {}
predict_en = {}
for token in prelist:
tokenized = tokenizer(token)
tokenized_len = len(tokenized)
if tokenized_len < min_len:
tokenized += ['<pad>'] * (min_len - tokenized_len)
indexed = [TEXT.vocab.stoi[t] for t in tokenized]
tensor = torch.LongTensor(indexed).to(device)
tensor = tensor.unsqueeze(1)
preds = trained_model(tensor)
max_preds = preds.argmax(dim=1)
if LABEL.vocab.itos[max_preds.item()] == 'hky':
predict_hky[token] = preds.data[0][max_preds.item()].item()
elif LABEL.vocab.itos[max_preds.item()] == 'zh':
predict_zh[token] = preds.data[0][max_preds.item()].item()
else:
predict_en[token] = preds.data[0][max_preds.item()].item()
return predict_hky, predict_zh, predict_en
predict_hky, predict_zh, predict_en = range_predictions(TEXT.vocab.itos, selected_model, min_len=4)
sorted_by_value = sorted(predict_hky.items(), key=lambda kv: kv[1])
sorted_by_value.reverse()
for i in range(5):
if i < len(sorted_by_value):
print(sorted_by_value[i])
else:
break
sorted_by_value = sorted(predict_zh.items(), key=lambda kv: kv[1])
sorted_by_value.reverse()
for i in range(5):
if i < len(sorted_by_value):
print(sorted_by_value[i])
else:
break
sorted_by_value = sorted(predict_en.items(), key=lambda kv: kv[1])
sorted_by_value.reverse()
for i in range(5):
if i < len(sorted_by_value):
print(sorted_by_value[i])
else:
break
```
Check what kind of articles are incorrect.
```
with torch.no_grad():
for batch in test_iterator:
predictions = selected_model(batch.text)
max_preds = predictions.argmax(dim=1)
wrong_preds = (max_preds.eq(batch.label) == 0).nonzero()
for wrong_pred in wrong_preds:
wrong_idx = wrong_pred.item()
incorrect_prediction = max_preds[wrong_idx].item()
correct_label = batch.label[wrong_idx].item()
print("Predicted \"" + LABEL.vocab.itos[incorrect_prediction] + "\" but should be \"" + LABEL.vocab.itos[correct_label] + "\".")
text_i = batch.text[:,wrong_idx].tolist()
text_striped_idx = [x for x in text_i if x != TEXT.vocab.stoi['<pad>']]
full_text = list(map(lambda x: TEXT.vocab.itos[x], text_striped_idx))
print("Article: ")
print("".join(full_text))
print()
```
# t-SNE Cell Line Clustering with Phosphorylation Data
I'll use t-SNE to get an overview of cell line clustering before and after different normalization procedures for the CST data. I will judge the quality of clustering based on the following criteria:
* Cell lines should cluster based on their histology, NSCLC and SCLC
* The NSCLC and SCLC clusters should be well separated
* Each histological cluster should be compact
* Cell lines should not cluster based on plex
### Imports and Function Definitions
```
from copy import deepcopy
import tsne_fun
from clustergrammer import Network
net = deepcopy(Network())
```
# Clustering Original Data (including missing values)
First, I will cluster the cell lines (using t-SNE) based on the original phosphorylation data - including phosphorylation with missing values. Since cell lines from the same plex share the same missing data we would expect that including missing data (which is ultimately set to zeros) will cause the cell lines to artificially cluster based on their plexes.
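To see why zero-filled shared missing values can pull same-plex cell lines together, here is a toy similarity check; the values and the helper function are made up for this sketch and are not part of the CST data pipeline:

```python
import numpy as np

# two cell lines from the same plex: the same PTMs are missing (NaN)
a = np.array([np.nan, np.nan, 1.0, 5.0])
b = np.array([np.nan, np.nan, 5.0, 1.0])
# a cell line from another plex, with different PTMs missing
c = np.array([1.0, 5.0, np.nan, np.nan])

def cos_after_zero_fill(u, v):
    """Cosine similarity after setting missing values to zero."""
    u, v = np.nan_to_num(u), np.nan_to_num(v)
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# a and b agree on where the zeros are, so they look similar
# even though their measured values disagree
print(cos_after_zero_fill(a, b))  # ≈ 0.38
print(cos_after_zero_fill(a, c))  # 0.0 (no overlapping measured PTMs)
```

The shared zeros dominate the similarity, which is exactly the artificial plex clustering described above.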
```
tsne_fun.normalize_and_make_tsne(skl_version=False)
```
### Conclusions
The left t-SNE plot shows cell lines colored based on histology: SCLC is green and NSCLC is red. The cell lines appear to largely cluster based on shared histology. The right t-SNE plot shows cell lines colored based on their plex; there are 5 cell lines per plex. The cell lines cluster almost exclusively into groups that belong to the same plex.
From these figures we can see that two plexes are composed entirely of SCLC cell lines. We can also see that there is a large batch effect that needs to be corrected. The clusters of cell lines within a single plex-batch are frequently stretched out into 'lines'. This stretching of the plex-clusters is likely due to the systematic differences in the cell line distributions (see boxplots from [CST_PTM_Data_Overview.ipynb](http://localhost:8889/notebooks/notebooks/CST_PTM_Data_Overview.ipynb)). The two primary drivers of cell line clustering appear to be 1) plex and 2) systematic differences in cell line distributions.
# Normalization and Filtering Procedures
We will try the following procedures individually and in combination and check their effect on cell line clustering
* Normalize the cell line distributions so that they are more similar (e.g. quantile normalization)
* Z-score the PTM distributions to highlight differences between the cell lines
* Filter out PTMs with missing data
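The notebook delegates these steps to `tsne_fun`, whose source is not shown here. As a rough sketch, plain column quantile normalization (without the missing-value handling the author mentions) could look like the following; the function name is hypothetical:

```python
import numpy as np

def quantile_normalize_columns(mat):
    """Force every column (cell line) to share the same distribution:
    each entry is replaced by the mean, across columns, of the values
    at its within-column rank.  (Sketch only; ties are broken by order,
    and the notebook's tsne_fun also handles missing values.)"""
    ranks = np.argsort(np.argsort(mat, axis=0), axis=0)  # rank of each entry in its column
    row_means = np.sort(mat, axis=0).mean(axis=1)        # mean distribution across columns
    return row_means[ranks]

mat = np.array([[5.0, 4.0, 3.0],
                [2.0, 1.0, 4.0],
                [3.0, 4.0, 6.0],
                [4.0, 2.0, 8.0]])
normed = quantile_normalize_columns(mat)
# after normalization, every column has identical sorted values
```

The key property is that the rank ordering within each column is preserved while the column distributions become identical, which removes the systematic distribution differences described above.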
# Clustering with Column Quantile Normalization (QN)
Based on the figures from [CST_PTM_Data_Overview.ipynb](http://localhost:8889/notebooks/notebooks/CST_PTM_Data_Overview.ipynb) it is clear that the distributions of PTM measurements in the cell lines are systematically different. Since the differences in the PTM distributions across the cell lines are larger than we would expect biologically, we can try to use a normalization procedure to bring these distributions closer to each other. Here I will use Quantile Normalization (more specifically, a slightly modified version set up to deal with the missing values) on the cell line columns and see what effect this has on the cell line clustering.
```
tsne_fun.normalize_and_make_tsne(qn_col=True, skl_version=False)
```
### Conclusions
After applying quantile normalization, the cell lines still largely cluster based on histology and plex. A noticeable difference after QN-column normalization is that the plex-groups are no longer stretched into 'lines' and now form compact clusters. This is likely because QN-column normalization has removed the systematic differences in cell line distributions within a plex, which causes them to cluster more compactly.
# Clustering with Row Z-score
We can see from heatmaps of the original PTM data that PTMs largely have consistent values across all cell lines, e.g. if a PTM is up-regulated in one cell line then it is likely up-regulated in all cell lines and vice-versa. We can Z-score the PTM rows to highlight the relative differences in PTM levels across all cell lines. Here we will see how this affects cell line clustering.
```
tsne_fun.normalize_and_make_tsne(zscore_row=True, skl_version=False)
```
### Conclusions
Z-scoring the rows alone appears to worsen cell line clustering based on histology (left) and also to worsen cell line clustering based on plexes. Since Z-scoring of the rows was performed without quantile normalization of the columns, this leaves cell lines with uniformly high PTMs and cell lines with uniformly low PTMs; I suspect that the cell lines are arranged according to their average PTM values in the above plots.
# Clustering with Column QN and Row Z-score
The normalization procedure that should be performed is one that first removes the systematic bias in the cell line distributions and then highlights the relative differences in PTM levels across the cell lines. Here we will first perform cell line column quantile normalization (QN) and second perform PTM row Z-score normalization.
```
tsne_fun.normalize_and_make_tsne(qn_col=True, zscore_row=True, skl_version=False)
```
### Conclusions: Histology clustering is improved and plexes appear slightly more mixed
On the left we can see a distinct clustering of the cell lines based on their histology, with only two SCLC cell lines not clustering with the others. On the right we can see that the plexes in both SCLC and NSCLC are clustered more closely to each other and appear more mixed together than in the original plot with no normalization.
We have not yet removed any PTMs with missing values and doing so should reduce the plex clustering.
# Clustering with no missing data
Plexes have common missing data, which is why clustering the cell lines using phosphorylation data that has missing data will result in cell lines that cluster based on their plex. We can try clustering the cell lines using only the phosphorylations that have been measured in all cell lines to see if we still see the same pattern of cell line clustering.
```
tsne_fun.normalize_and_make_tsne(filter_missing=True, skl_version=False)
```
### Conclusion
Filtering for PTMs (phosphorylations in this case) that are measured in all cell lines leaves us with 513 PTMs. The resulting tsne figure shows that cell lines cluster according to their histology and that their plex clustering is noticeably worse, which is what we expect since missing data drives plex clustering. The cell lines also appear to cluster into three groups: one SCLC group and two NSCLC groups.
The SCLC cluster appears to be stretched out, which is likely caused by cell line PTM distribution differences.
# Clustering with no missing data and QN columns
Here I am first performing QN normalization on the entire dataset (5798 phosphorylations) and then filtering for PTMs that were measured in all cell lines. The results do not appear to change much if filtering is done before QN (not shown).
```
tsne_fun.normalize_and_make_tsne(qn_col=True, filter_missing=True, skl_version=False)
```
### Conclusions
We can see that cell lines cluster according to their histology and that plex clustering is noticeably worse, as we expect. The cell lines form two large, well-separated clusters which consist almost exclusively of NSCLC and SCLC cell lines. The SCLC cluster is also more compact than the NSCLC cluster. The SCLC cluster is likewise more compact than when only PTM filtering is performed (see previous image) because the systematic differences in cell line distributions have been removed with QN.
# Clustering with no missing data and Z-score rows
```
tsne_fun.normalize_and_make_tsne(zscore_row=True, filter_missing=True, skl_version=False)
```
### Conclusions
Z-scoring the PTM rows and filtering for missing data shows histology clustering and reduced plex clustering. As we saw when performing Z-score PTM-row normalization alone (without column QN normalization), we get stretched clusters.
# Clustering with no missing data QN columns and Z-score rows
Finally, we can perform QN-column normalization, Z-score row normalization, and only include PTMs that were measured in all cell lines.
```
tsne_fun.normalize_and_make_tsne(qn_col=True, zscore_row=True, filter_missing=True, skl_version=False)
```
### Conclusions
With these two normalizations followed by filtering we see clustering based on histology and reduced clustering based on plex. The SCLC cluster is more compact than before (see previous figure) and there is a large separation between the SCLC and NSCLC clusters. It is difficult to compare plex clustering in this figure to the previous figures. However, we can see that NSCLC cell lines form a more compact cluster than they did in the case with PTM filtering alone.
I would argue that this clustering agrees best with what we would expect biologically, for the following reasons:
* NSCLC and SCLC cell lines are very clearly separated
* SCLC cell lines are clustered very compactly
* NSCLC cell lines are clustered more compactly than in other cases
* plex clustering is reduced
# Pre-lab: Stats and Genomic Databases
## I. Statistics Primer and Review
Often in bioinformatics, we need to remind ourselves to think somewhat in statistical or probabilistic terms.
If you need a refresher, head to Canvas and review the slides and recorded lecture "Statistical Inference in Bioinformatics" (recorded by Shane Jensen, statistics). This is a **very** cursory review, not designed to be comprehensive. Then, answer the following questions.
Q1. Describe, in statistical terms, the concept of a "null hypothesis". In what ways can a null hypothesis be utilized?
Q2. What is a test statistic, and how is it used?
Q3. If we reject the null hypothesis at the alpha = 5% level, what does that mean?
Q4. Explain the issue of multiple hypothesis testing and its implications. Give two statistical procedures to address this concern, and describe how to employ them.
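For reference when answering Q4, the two most common correction procedures (Bonferroni and Benjamini-Hochberg) can be sketched in a few lines of Python; the function names and vectorized style here are illustrative, not from any particular library:

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    # Reject H0_i when p_i < alpha / m; controls the family-wise error rate.
    p = np.asarray(pvals)
    return p < alpha / len(p)

def benjamini_hochberg(pvals, alpha=0.05):
    # Find the largest i with p_(i) <= (i/m) * alpha and reject the i
    # smallest p-values; controls the false discovery rate.
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

For example, with p-values (0.001, 0.01, 0.03, 0.5) at alpha = 0.05, Bonferroni rejects the first two while Benjamini-Hochberg rejects the first three, illustrating that Bonferroni is the more conservative of the two.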
## II. Databases
One of the first steps in any computational project is to determine what data **already exists** that one can utilize to address scientific questions or gather information. Take 10 minutes to web browse and investigate one or more of the databases given below.
**List of Genomic Databases**
NCBI Entrez - http://www.ncbi.nlm.nih.gov/sites/gquery - huge database that encompasses other databases, including:
- PubMed for Journal Articles - http://www.ncbi.nlm.nih.gov/pubmed/
- GenBank for Raw Sequence - http://www.ncbi.nlm.nih.gov/genbank/
- RefSeq for Non-Redundant Sequence - http://www.ncbi.nlm.nih.gov/RefSeq/
- OMIM for Genetic Diseases - http://www.ncbi.nlm.nih.gov/omim?db=omim
- dbSNP for Polymorphisms - http://www.ncbi.nlm.nih.gov/snp?db=snp
- GEO for Gene Expression Data - http://www.ncbi.nlm.nih.gov/geo/
ExPASy - http://expasy.org/ - Another large database encompassing other databases:
- Uniprot for Protein Sequence/Annotation - http://www.uniprot.org/
- PROSITE for Protein Sequence Patterns - http://prosite.expasy.org/
- ENSEMBL - http://useast.ensembl.org/index.html - An alternative to RefSeq and UniProt
- GeneCards - http://www.genecards.org/ - Gene-centered portal to information from many other databases
- ENCODE - http://www.genome.gov/10005107 - Encyclopedia of DNA Elements
- HapMap - http://hapmap.ncbi.nlm.nih.gov/ - Database of human variation across populations
- ExAC - http://exac.broadinstitute.org - database of human coding mutational variation across populations
- Gene Ontology (GO) - http://www.geneontology.org/ - Hierarchy of gene annotations
- MGED - http://www.mged.org/ - Database of gene expression/microarray results
This list is by no means complete; for more databases, see the most recent Database Summary Paper Alpha List: http://www.oxfordjournals.org/nar/database/a/
In particular, the in-class activity will focus on the UCSC Genome Browser
- http://genome.ucsc.edu/
a portal that uses a Track-based system to summarize information from many databases of genomic sequence and annotations.
Spend 5 minutes on your own exploring this rich resource.
## _*Using Algorithm Concatenation in Qiskit Aqua*_
This notebook demonstrates how to use the `Qiskit Aqua` library to realize algorithm concatenation. In particular, we experiment with chaining the executions of VQE and IQPE by first running VQE and then preparing IQPE's initial state using the variational form as produced by VQE upon its termination.
```
import numpy as np
from qiskit_aqua.input import get_input_instance
from qiskit_aqua import Operator, run_algorithm, get_algorithm_instance, get_variational_form_instance, get_optimizer_instance
from qiskit_aqua.algorithms.components.initial_states.varformbased import VarFormBased
```
Here an Operator instance is created for our Hamiltonian, for which we are going to estimate the ground energy level. In this case the Paulis are from a previously computed Hamiltonian, for simplicity.
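Written out, the Pauli dictionary below corresponds to the two-qubit Hamiltonian (coefficients rounded to four decimals):

$$ H = -1.0524\, II + 0.3979\, ZI - 0.3979\, IZ - 0.0113\, ZZ + 0.1809\, XX $$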
```
pauli_dict = {
'paulis': [{"coeff": {"imag": 0.0, "real": -1.052373245772859}, "label": "II"},
{"coeff": {"imag": 0.0, "real": 0.39793742484318045}, "label": "ZI"},
{"coeff": {"imag": 0.0, "real": -0.39793742484318045}, "label": "IZ"},
{"coeff": {"imag": 0.0, "real": -0.01128010425623538}, "label": "ZZ"},
{"coeff": {"imag": 0.0, "real": 0.18093119978423156}, "label": "XX"}
]
}
qubitOp = Operator.load_from_dict(pauli_dict)
```
We can now use the Operator without regard to how it was created. First we will use the ExactEigensolver to compute the reference ground energy level.
```
algorithm_cfg = {
'name': 'ExactEigensolver',
}
params = {
'algorithm': algorithm_cfg
}
algo_input = get_input_instance('EnergyInput')
algo_input.qubit_op = qubitOp
result_reference = run_algorithm(params,algo_input)
print('The reference ground energy level is {}.'.format(result_reference['energy']))
```
Having established the reference ground energy, we next carry on with our experiment. First we configure a VQE algorithm instance. The idea is that we can set a termination condition such that the VQE instance returns relatively quickly with a rough estimation result.
```
np.random.seed(0)
var_form_depth = 3
var_form = get_variational_form_instance('RYRZ')
var_form.init_args(algo_input.qubit_op.num_qubits, var_form_depth)
spsa_max_trials=10
optimizer = get_optimizer_instance('SPSA')
optimizer.init_args(max_trials=spsa_max_trials)
vqe_mode = 'paulis'
vqe = get_algorithm_instance('VQE')
vqe.setup_quantum_backend(backend='qasm_simulator')
vqe.init_args(algo_input.qubit_op, vqe_mode, var_form, optimizer)
result_vqe = vqe.run()
print('VQE estimated the ground energy to be {}.'.format(result_vqe['energy']))
```
As previously indicated, the energy estimation result is rather rough; it is far from being an acceptable final estimate. But it is close enough that the accompanying variational form might be a reasonably good approximation to the ground eigenstate, which means the corresponding wave function can serve as the initial state for the IQPE execution that follows. We next prepare such an initial state.
```
state_in = VarFormBased()
state_in.init_args(var_form, result_vqe['opt_params'])
```
With the VQE-generated quantum state wave function serving as the chaining piece and prepared as initial state, we now go ahead with configuring and running an IQPE instance.
```
iqpe = get_algorithm_instance('IQPE')
iqpe.setup_quantum_backend(backend='qasm_simulator', shots=100)
num_time_slices = 50
num_iterations = 11
iqpe.init_args(
algo_input.qubit_op, state_in, num_time_slices, num_iterations,
paulis_grouping='random',
expansion_mode='suzuki',
expansion_order=2,
)
result_iqpe = iqpe.run()
print("Continuing with VQE's result, IQPE estimated the ground energy to be {}.".format(result_iqpe['energy']))
```
As seen, the final ground energy estimate produced by IQPE is much more accurate than the intermediate result produced by VQE.
<a href="https://colab.research.google.com/github/markf94/SDSS2020_quantum_workshop/blob/master/tutorial_I_quantum_bits_and_gates_SDSS2020.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Tutorial 1: Quantum bits & gates with Quil
In this 30min tutorial, you will learn:
* _How to initialize and manipulate a qubit_
* _How to construct, run and measure simple quantum circuits using the Quil language_
* _What the X, MEASURE, H and CNOT operations do_
This tutorial is all about [Quil](https://github.com/rigetti/quil) which is the quantum instruction language used by [Rigetti](https://www.rigetti.com/).
### IMPORTANT!
Make sure that you are running your own local instances of the Rigetti Quantum Virtual Machine (QVM) simulator and the Quilc compiler. Check the `README.md` of this repository to find out how to do this. Otherwise this notebook won't work for you.
First, we need to make sure that the `pyquil` library is installed.
```
%matplotlib inline
!pip install pyquil matplotlib
```
Don't worry about these two next functions for now. They are making your life a bit easier. You can revisit and study them later.
```
from pyquil import get_qc, Program
from pyquil.api import WavefunctionSimulator
simulator = WavefunctionSimulator()
qvm = get_qc('5q-qvm')
def execute(quil_program, trials=100, silent=False, raw=False):
"""
Thin function that takes a low-level Quil program and returns the
resulting probability distribution.
"""
results = [tuple(qvm.run(Program(quil_program))[0]) for _ in range(trials)]
if not silent:
observed_results = set(results)
for result in sorted(observed_results):
bitstring = ''.join(reversed(list(map(str, result))))
print(f'|{bitstring}> state: {results.count(result)/len(results)} [{results.count(result)}/{len(results)}]')
if raw:
print(f'Results: {results}')
def plot(quil_program):
return simulator.wavefunction(Program(quil_program)).plot()
```
# Qubit
The carrier of information in quantum computing circuits is the qubit, usually denoted in Dirac notation as
$$
\newcommand{\ket}[1]{\left|{#1}\right\rangle}
\newcommand{\bra}[1]{\left\langle{#1}\right|}
$$
$$ \ket{\psi} = \alpha \ket{0} + \beta \ket{1} $$
where
$$ \alpha,\beta \in \mathbb{C} $$
and
$$
\ket{0} = \begin{bmatrix}
1 \\
0 \\
\end{bmatrix}, \,\,\, \ket{1} = \begin{bmatrix}
0 \\
1 \\
\end{bmatrix}
$$
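Because a measurement yields $\ket{0}$ with probability $|\alpha|^2$ and $\ket{1}$ with probability $|\beta|^2$, the amplitudes must satisfy the normalization condition

$$ |\alpha|^2 + |\beta|^2 = 1 $$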
# Quantum gates and measurement
Quantum circuits are composed of two fundamental building blocks - quantum gates and the measurement operation. Here we introduce the first two single qubit operations:
## X gate
The X gate serves as a quantum version of the NOT operator: it swaps the probability amplitudes of the |0> and |1> states of the qubit it is applied to.
$$
X =
\begin{bmatrix}
0 & 1 \\
1 & 0 \\
\end{bmatrix}
$$
In a circuit diagram we draw:
<img src=https://upload.wikimedia.org/wikipedia/commons/4/43/Qcircuit_X.svg width="200">
In the Quil language, we implement X as:
```
X <qubit>
```
The statement above applies X gate to qubit `<qubit>` i.e.
```
X 2
```
applies the X gate on qubit 2.
## Measurement operation
To read out the state of the qubit, we *measure* it, which forces it to collapse to one of its basis states.
In a circuit diagram we draw:
<img src="https://upload.wikimedia.org/wikipedia/commons/a/a7/Quantum_circuit_measurement_symbol.png" width="150">
To measure a qubit in Quil, we use a `MEASURE` operation with the following syntax:
```
MEASURE <qubit> ro[<bit>]
```
where `<qubit>` is the qubit number, `ro` is the name of the classical register (readout) and `<bit>` is the index of the classical register to store the measurement result in.
However, before you can write to the classical register `ro` you have to initialize it! In Quil, this usually happens at the **very top of the file**:
```
DECLARE ro BIT[<num_bits>]
```
where `num_bits` is the number of classical bits we want in that classical register `ro`.
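Putting the `DECLARE`, `X`, and `MEASURE` pieces together, a complete program that flips qubit 0 and reads the result into a one-bit register looks like this (the register size of 1 is simply what this single-qubit sketch needs):

```
DECLARE ro BIT[1]
X 0
MEASURE 0 ro[0]
```

Since X deterministically flips the initial state, measuring should always yield the $\ket{1}$ state.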
#### Exercise 1.1
Qubits in the circuits are always initialized to the same state. Conduct an experiment to figure out what the initial state of the qubits in the simulator is.
```
execute("""
# TODO: write Quil code here
""")
```
#### Exercise 1.2
Create a quantum circuit that always produces state $\ket{01}$.
```
execute("""
# TODO: write Quil code here
""")
```
**Hint:**
In quantum computing we count qubits from right to left. This means a quantum state with three qubits is written down like this:
$$\ket{q_2,q_1,q_0}$$
If you're curious why this is the case you can check out [bonus exercise X2](https://colab.research.google.com/drive/1_LwrzKKgxliYmp6RICc6a9jG4AFt7BcW?authuser=1#scrollTo=mBD5Gc4ghmfW&line=1&uniqifier=1) if you have time at the end of this tutorial.
## H gate
H gate is often used to put basis states into uniform superposition:
$$
H = \frac{1}{\sqrt{2}}
\begin{bmatrix}
1 & 1 \\
1 & -1 \\
\end{bmatrix}
$$
A uniform superposition is a quantum state with equal probability for all bitstrings e.g.
$$\ket{\psi} = \frac{1}{\sqrt{2}} \ket{0} + \frac{1}{\sqrt{2}} \ket{1}$$
Similar to the X gate (or any single qubit gate, really), we implement H as:
```
H <qubit>
```
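Concretely, applying the H gate to the computational basis states gives

$$ H\ket{0} = \frac{1}{\sqrt{2}}(\ket{0} + \ket{1}), \qquad H\ket{1} = \frac{1}{\sqrt{2}}(\ket{0} - \ket{1}) $$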
#### Exercise 1.3
Figure out what the problem with the following circuit is and implement a fix.
```
execute("""
H 0
MEASURE 0
""")
```
#### Exercise 1.4
Write a Quil program that creates a uniform superposition over all 2-bit strings:
$$\ket{\psi} = \frac{1}{2} (\ket{00} + \ket{01} + \ket{10} + \ket{11})$$
```
execute("""
# TODO: write Quil code here
""")
```
#### Exercise 1.5
Plot a wavefunction of the program above. Why does it look different than the sample distribution? What program should we plot to mirror the sample distribution above?
```
plot("""
# TODO: write Quil code here
""")
plot("""
# TODO: write Quil code here
""")
```
#### Exercise 1.6
Implement a fair quantum 8-sided dice.
```
execute("""
# TODO: write Quil code here
""", trials=1000)
```
## C-NOT gate
The $CNOT$ gate is the first 2-qubit gate we encounter. This gate applies the NOT operation to the second (target) qubit only if the first (control) qubit is $\ket{1}$.
$$
CNOT =
\begin{bmatrix}
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
\end{bmatrix}
$$
In a circuit diagram we draw it as
<img src=https://i.stack.imgur.com/kHu5I.png width=150>
In the Quil language, we implement CNOT as:
```
CNOT <control_qubit> <qubit>
```
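Reading the matrix above in the basis $\{\ket{00}, \ket{01}, \ket{10}, \ket{11}\}$, with the control qubit written first, CNOT maps the basis states as

$$ \ket{00} \to \ket{00}, \quad \ket{01} \to \ket{01}, \quad \ket{10} \to \ket{11}, \quad \ket{11} \to \ket{10} $$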
#### Exercise 1.7
Implement a circuit with two qubits and a CNOT gate. Test this circuit on initial states $\ket{00}$, $\ket{01}$, $\ket{10}$ and $\ket{11}$.
```
print('Initialized as |00>')
execute("""
# TODO: write Quil code here
""")
print('Initialized as |01>')
execute("""
# TODO: write Quil code here
""")
print('Initialized as |10>')
execute("""
# TODO: write Quil code here
""")
print('Initialized as |11>')
execute("""
# TODO: write Quil code here
""")
```
#### Exercise 1.8
Write a program that constructs the following entangled pair of qubits:
$$\ket{\psi} = \frac{1}{\sqrt{2}} \ket{00} + \frac{1}{\sqrt{2}} \ket{11}$$
An entangled pair of qubits always collapses to the same basis state when measured. For example, if you measure qubit 0 in the `0` state then you immediately know that qubit 1 must be in the `0` state too!
```
execute("""
# TODO: write Quil code here
""")
```
## Bonus exercises
#### Bonus Exercise X1:
Check out the [documentation page about Quil gates and instructions](http://docs.rigetti.com/en/stable/apidocs/gates.html). Build simple circuits and vary their inputs (flip some qubits at the beginning) to try and understand the following gates:
`Y`, `CCNOT` and `SWAP`
#### Bonus Exercise X2:
Read the paper with the title ['Someone shouts, “$\ket{01000}$!” Who is excited?'](https://arxiv.org/pdf/1711.02086.pdf) by Rigetti staff member Robert Smith to gain a deeper understanding why we label qubits the reverse way ($\ket{q_N, ..., q_1, q_0}$ rather than $\ket{q_0, ..., q_{N-1}, q_N}$).
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Using Azure Machine Learning Pipelines for Batch Inference
In this notebook, we will demonstrate how to make predictions on large quantities of data asynchronously using the ML pipelines with Azure Machine Learning. Batch inference (or batch scoring) provides cost-effective inference, with unparalleled throughput for asynchronous applications. Batch prediction pipelines can scale to perform inference on terabytes of production data. Batch prediction is optimized for high throughput, fire-and-forget predictions for a large collection of data.
> **Note**
This notebook uses public preview functionality (ParallelRunStep). Please install azureml-contrib-pipeline-steps package before running this notebook. Pandas is used to display job results.
```
pip install azureml-contrib-pipeline-steps pandas
```
> **Tip**
If your system requires low-latency processing (to process a single document or small set of documents quickly), use [real-time scoring](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-consume-web-service) instead of batch prediction.
In this example, we will take a digit identification model already trained on the MNIST dataset using the [AzureML training with deep learning example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/training-with-deep-learning/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb), and run that trained model on some of the MNIST test images in batch.
The input dataset used for this notebook differs from a standard MNIST dataset in that it has been converted to PNG images to demonstrate the use of files as inputs to Batch Inference. A sample of PNG-converted images of the MNIST dataset was taken from [this repository](https://github.com/myleott/mnist_png).
The outline of this notebook is as follows:
- Create a DataStore referencing MNIST images stored in a blob container.
- Register the pretrained MNIST model into the model registry.
- Use the registered model to do batch inference on the images in the data blob container.
## Prerequisites
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first. This sets you up with a working config file that has information on your workspace, subscription id, etc.
### Connect to workspace
Create a workspace object from the existing workspace. Workspace.from_config() reads the file config.json and loads the details into an object named ws.
```
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep = '\n')
```
### Create or Attach existing compute resource
By using Azure Machine Learning Compute, a managed service, data scientists can train machine learning models on clusters of Azure virtual machines. Examples include VMs with GPU support. In this tutorial, you create Azure Machine Learning Compute as your training environment. The code below creates the compute clusters for you if they don't already exist in your workspace.
**Creation of compute takes approximately 5 minutes. If the AmlCompute with that name is already in your workspace the code will skip the creation process.**
```
import os
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException
# choose a name for your cluster
compute_name = os.environ.get("AML_COMPUTE_CLUSTER_NAME", "cpu-cluster")
compute_min_nodes = os.environ.get("AML_COMPUTE_CLUSTER_MIN_NODES", 0)
compute_max_nodes = os.environ.get("AML_COMPUTE_CLUSTER_MAX_NODES", 4)
# This example uses CPU VM. For using GPU VM, set SKU to STANDARD_NC6
vm_size = os.environ.get("AML_COMPUTE_CLUSTER_SKU", "STANDARD_D2_V2")
if compute_name in ws.compute_targets:
compute_target = ws.compute_targets[compute_name]
if compute_target and type(compute_target) is AmlCompute:
print('found compute target. just use it. ' + compute_name)
else:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size = vm_size,
min_nodes = compute_min_nodes,
max_nodes = compute_max_nodes)
# create the cluster
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
# can poll for a minimum number of nodes and for a specific timeout.
# if no min node count is provided it will use the scale settings for the cluster
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
# For a more detailed view of current AmlCompute status, use get_status()
print(compute_target.get_status().serialize())
```
### Create a datastore containing sample images
The input dataset used for this notebook differs from a standard MNIST dataset in that it has been converted to PNG images to demonstrate the use of files as inputs to Batch Inference. A sample of PNG-converted images of the MNIST dataset was taken from [this repository](https://github.com/myleott/mnist_png).
We have created a public blob container `sampledata` on an account named `pipelinedata`, containing these images from the MNIST dataset. In the next step, we create a datastore with the name `images_datastore`, which points to this blob container. In the call to `register_azure_blob_container` below, setting the `overwrite` flag to `True` overwrites any datastore that was created previously with that name.
This step can be changed to point to your blob container by providing your own `datastore_name`, `container_name`, and `account_name`.
```
from azureml.core.datastore import Datastore
account_name = "pipelinedata"
datastore_name = "mnist_datastore"
container_name = "sampledata"
mnist_data = Datastore.register_azure_blob_container(ws,
datastore_name=datastore_name,
container_name= container_name,
account_name=account_name,
overwrite=True)
```
Next, let's specify the default datastore for the outputs.
```
def_data_store = ws.get_default_datastore()
```
### Create a FileDataset
A [FileDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.filedataset?view=azure-ml-py) references single or multiple files in your datastores or public urls. The files can be of any format. FileDataset provides you with the ability to download or mount the files to your compute. By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. The data remains in its existing location, so no extra storage cost is incurred.
```
from azureml.core.dataset import Dataset
mnist_ds_name = 'mnist_sample_data'
path_on_datastore = mnist_data.path('mnist')
input_mnist_ds = Dataset.File.from_files(path=path_on_datastore, validate=False)
registered_mnist_ds = input_mnist_ds.register(ws, mnist_ds_name, create_new_version=True)
named_mnist_ds = registered_mnist_ds.as_named_input(mnist_ds_name)
```
### Intermediate/Output Data
Intermediate data (or output of a Step) is represented by [PipelineData](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelinedata?view=azure-ml-py) object. PipelineData can be produced by one step and consumed in another step by providing the PipelineData object as an output of one step and the input of one or more steps.
**Constructing PipelineData**
- name: [Required] Name of the data item within the pipeline graph
- datastore_name: Name of the Datastore to write this output to
- output_name: Name of the output
- output_mode: Specifies "upload" or "mount" modes for producing output (default: mount)
- output_path_on_compute: For "upload" mode, the path to which the module writes this output during execution
- output_overwrite: Flag to overwrite pre-existing data
```
from azureml.pipeline.core import Pipeline, PipelineData
output_dir = PipelineData(name="inferences",
datastore=def_data_store,
output_path_on_compute="mnist/results")
```
### Download the Model
Download and extract the model from https://pipelinedata.blob.core.windows.net/mnist-model/mnist-tf.tar.gz to "models" directory
```
import tarfile
import urllib.request
# create directory for model
model_dir = 'models'
if not os.path.isdir(model_dir):
os.mkdir(model_dir)
url="https://pipelinedata.blob.core.windows.net/mnist-model/mnist-tf.tar.gz"
response = urllib.request.urlretrieve(url, "model.tar.gz")
tar = tarfile.open("model.tar.gz", "r:gz")
tar.extractall(model_dir)
os.listdir(model_dir)
```
### Register the model with Workspace
A registered model is a logical container for one or more files that make up your model. For example, if you have a model that's stored in multiple files, you can register them as a single model in the workspace. After you register the files, you can then download or deploy the registered model and receive all the files that you registered.
Using tags, you can track useful information such as the name and version of the machine learning library used to train the model. Note that tags must be alphanumeric. Learn more about registering models [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#registermodel)
```
from azureml.core.model import Model
# register downloaded model
model = Model.register(model_path = "models/",
model_name = "mnist", # this is the name the model is registered as
tags = {'pretrained': "mnist"},
description = "Mnist trained tensorflow model",
workspace = ws)
```
### Using your model to make batch predictions
To use the model to make batch predictions, you need an **entry script** and a list of **dependencies**:
#### An entry script
This script accepts requests, scores the requests by using the model, and returns the results.
- __init()__ - Typically this function loads the model into a global object. This function is run only once at the start of batch processing, per worker node/process. The init method can make use of the following environment variables (ParallelRunStep input):
1. AZUREML_BI_OUTPUT_PATH – output folder path
- __run(mini_batch)__ - The method to be parallelized. Each invocation will have one minibatch.<BR>
__mini_batch__: Batch inference will invoke the run method and pass either a list or a Pandas DataFrame as an argument to the method. Each entry in mini_batch will be a filepath if the input is a FileDataset, or a Pandas DataFrame if the input is a TabularDataset.<BR>
__run__ method response: the run() method should return a Pandas DataFrame or an array. For the append_row output_action, these returned elements are appended into the common output file. For summary_only, the contents of the elements are ignored. For all output actions, each returned output element indicates one successful inference of an input element in the input mini-batch.
User should make sure that enough data is included in inference result to map input to inference. Inference output will be written in output file and not guaranteed to be in order, user should use some key in the output to map it to input.
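The shape described above can be sketched as a minimal entry script. The "model" below is a hypothetical placeholder so the sketch is self-contained; a real script would load the registered model inside `init()`:

```python
# Minimal entry-script sketch for ParallelRunStep.
# The lambda "model" is a placeholder, not a real trained model.
import os

model = None  # global object, populated once per worker process


def init():
    # Runs only once per worker node/process, before any mini-batch arrives.
    global model
    model = lambda path: len(os.path.basename(path)) % 10  # placeholder "model"


def run(mini_batch):
    # mini_batch is a list of file paths when the input is a FileDataset.
    # Return one element per successfully processed input, including a key
    # (here the filename) that maps each result back to its input.
    results = []
    for file_path in mini_batch:
        prediction = model(file_path)
        results.append(f"{os.path.basename(file_path)}: {prediction}")
    return results
```

Returning `"filename: prediction"` strings keeps the append_row output file self-describing even though row order is not guaranteed.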
#### Dependencies
Helper scripts or Python/Conda packages required to run the entry script or model.
You also need a deployment configuration for the compute target that hosts the deployed model. This configuration describes things like the memory and CPU requirements needed to run the model.
These items are encapsulated into an inference configuration and a deployment configuration. The inference configuration references the entry script and other dependencies. You define these configurations programmatically when you use the SDK to perform the deployment. You define them in JSON files when you use the CLI.
```
import os
scripts_folder = "Code"
script_file = "digit_identification.py"
# peek at contents
with open(os.path.join(scripts_folder, script_file)) as inference_file:
print(inference_file.read())
```
## Build and run the batch inference pipeline
The data, models, and compute resource are now available. Let's put all these together in a pipeline.
### Specify the environment to run the script
Specify the conda dependencies for your script. This will allow us to install pip packages as well as configure the inference environment.
```
from azureml.core import Environment
from azureml.core.runconfig import CondaDependencies, DEFAULT_CPU_IMAGE
batch_conda_deps = CondaDependencies.create(pip_packages=["tensorflow==1.13.1", "pillow"])
batch_env = Environment(name="batch_environment")
batch_env.python.conda_dependencies = batch_conda_deps
batch_env.docker.enabled = True
batch_env.docker.base_image = DEFAULT_CPU_IMAGE
```
### Create the configuration to wrap the inference script
```
from azureml.contrib.pipeline.steps import ParallelRunStep, ParallelRunConfig
parallel_run_config = ParallelRunConfig(
source_directory=scripts_folder,
entry_script=script_file,
mini_batch_size="5",
error_threshold=10,
output_action="append_row",
environment=batch_env,
compute_target=compute_target,
node_count=2)
```
### Create the pipeline step
Create the pipeline step using the script, environment configuration, and parameters. Specify the compute target you already attached to your workspace as the target of execution of the script. We will use ParallelRunStep to create the pipeline step.
```
parallelrun_step = ParallelRunStep(
name="predict-digits-mnist",
parallel_run_config=parallel_run_config,
inputs=[ named_mnist_ds ],
output=output_dir,
models=[ model ],
arguments=[ ],
allow_reuse=True
)
```
### Run the pipeline
At this point you can run the pipeline and examine the output it produces. The `Experiment` object is used to track the pipeline run.
```
from azureml.core import Experiment
from azureml.pipeline.core import Pipeline
pipeline = Pipeline(workspace=ws, steps=[parallelrun_step])
experiment = Experiment(ws, 'digit_identification')
pipeline_run = experiment.submit(pipeline)
```
### Monitor the run
```
from azureml.widgets import RunDetails
RunDetails(pipeline_run).show()
```
### Optional: View detailed logs (streaming)
```
pipeline_run.wait_for_completion(show_output=True)
```
### View the prediction results per input image
In the scoring script above you can see that a result list with the filename and the prediction result gets returned. These are written to the DataStore specified in the `PipelineData` object as the output data, which in this case is called *inferences*. This contains the outputs from all of the worker nodes used in the compute cluster. You can download this data to view the results; the code below shows the first 10 rows.
```
import pandas as pd
import shutil
# remove previous run results, if present
shutil.rmtree("mnist_results", ignore_errors=True)
batch_run = next(pipeline_run.get_children())
batch_output = batch_run.get_output_data("inferences")
batch_output.download(local_path="mnist_results")
for root, dirs, files in os.walk("mnist_results"):
for file in files:
if file.endswith('parallel_run_step.txt'):
result_file = os.path.join(root,file)
df = pd.read_csv(result_file, delimiter=":", header=None)
df.columns = ["Filename", "Prediction"]
print("Prediction has ", df.shape[0], " rows")
df.head(10)
```
## Cleanup Compute resources
For recurring jobs, it may be wise to keep the compute resources and allow the compute nodes to scale down to 0. However, since this is just a single-run job, we are free to release the allocated compute resources.
```
# uncomment below and run if compute resources are no longer needed
# compute_target.delete()
```
| github_jupyter |
# Astropy Units, Quantities, and Constants
Astropy includes a powerful framework for units that allows users to attach units to scalars and arrays. These quantities can be manipulated or combined, keeping track of the units.
For more information about the features presented below, please see the
[astropy.units](http://docs.astropy.org/en/stable/units/index.html) docs.
Also note that this tutorial assumes you have little or no knowledge of astropy units. If you're moderately familiar with them and are interested in some more complex examples, you might instead prefer the Astropy tutorial on ["Using Astropy Quantities for astrophysical calculations"](http://www.astropy.org/astropy-tutorials/Quantities.html). (The file with that tutorial is also included next to this one.)
## Representing units
First, we need to import the astropy units subpackage (**`astropy.units`**). Because we probably want to use units in many expressions, it is most concise to rename the subpackage as **`u`**. This is the standard convention, but note that this will conflict with any variable called **`u`**:
```
import astropy.units as u
```
Units can then be accessed simply as **`u.<unit>`**. For example, the meter unit is:
```
u.m
```
Units have docstrings, which give some explanatory text about them:
```
u.m.__doc__
u.pc.__doc__
```
and a physical type:
```
u.m.physical_type
u.s.physical_type
```
Many units also have aliases:
```
u.m.aliases
u.meter
u.arcsec.aliases
u.arcsecond
```
SI and cgs units are available by default, but Imperial units require the **`imperial`** prefix:
```
# this is not defined
u.inch
# use this
u.imperial.inch
```
Please see the complete list of [available units](https://astropy.readthedocs.org/en/stable/units/index.html#module-astropy.units.si).
## Composite units
Composite units are created using Python numeric operators, e.g. "`*`" (multiplication), "`/`" (division), and "`**`" (power).
```
u.km / u.s
u.imperial.mile / u.h
(u.eV * u.Mpc) / u.Gyr
u.cm**3
u.m / u.kg / u.s**2
```
## ``Quantity`` objects
The most useful feature of units is the ability to attach them to scalars or arrays, creating `Quantity` objects. A `Quantity` object contains both a value and a unit. The easiest way to create a `Quantity` object is simply by multiplying the value with its unit.
```
3.7 * u.au # Quantity object
```
A completely equivalent (but more verbose) way of doing the same thing is to use the `Quantity` object's initializer, demonstrated below. In general, the simpler form (above) is preferred, as it is closer to how such a quantity would actually be written in text. The initializer form has more options, though, which you can learn about from the [astropy reference documentation on Quantity](http://docs.astropy.org/en/stable/api/astropy.units.quantity.Quantity.html).
```
u.Quantity(3.7, unit=u.au)
```
Where quantities really shine is when you make an array `Quantity` object.
```
# we need to import numpy, and the short-name "np" is the standard practice pretty much everywhere
import numpy as np
x = np.array([1.2, 6.8, 3.7]) * u.pc / u.year
x
```
## `Quantity` attributes
The units and value of a `Quantity` can be accessed separately via the ``value`` and ``unit`` attributes:
```
q = 5. * u.Mpc
q
q.value
q.unit
x = np.array([1.2, 6.8, 3.7]) * u.pc / u.year
x
x.value
x.unit
```
## `Quantity` Arithmetic Operations
"`*`" (multiplication), "`/`" (division), and "`**`" (power) operations can be performed on `Quantity` objects with `float`/`int` values.
```
q = 3.1 * u.km
q * 2
q / 2.
q ** 2
```
## Combining Quantities
Quantities can be combined using Python numeric operators:
```
q1 = 3. * u.m / u.s
q1
q2 = 5. * u.cm / u.s / u.g**2
q2
q1 * q2
q1 / q2 # note the "second" unit cancelled out
q1 ** 2
x = np.array([1.2, 6.8, 3.7]) * u.pc / u.year
x * 3 # elementwise multiplication
```
When adding or subtracting quantities, the units must be **compatible** (not necessarily identical).
```
# Add two quantities
(3 * u.m) + (5 * u.m)
```
Here we add two distance quantities that do not have identical units:
```
(3 * u.km) + (5 * u.cm)
# this will fail because the units are not compatible
(3 * u.km) + (5. * u.km / u.s)
```
## Converting units
Units can be converted to other equivalent units.
```
q = 2.5 * u.year
q
q.to(u.s)
(7. * u.deg**2).to(u.sr)
(55. * u.imperial.mile / u.h).to(u.km / u.h)
q1 = 3. * u.m / u.s
q2 = 5. * u.cm / u.s / u.g**2
q1 * q2
(q1 * q2).to(u.m**2 / u.kg**2 / u.s**2)
```
**Important Note**: Converting a unit (not a `Quantity`) gives only the scale factor:
```
(u.Msun).to(u.kg)
```
To keep the units, use a `Quantity` (value and unit) object:
```
(1. * u.Msun).to(u.kg)
```
## Decomposing units
The units of a `Quantity` object can be decomposed into a set of base units using the
``decompose()`` method. By default, units will be decomposed to SI unit bases:
```
q = 8. * u.cm * u.pc / u.g / u.year**2
q
q.decompose()
```
To decompose into cgs unit bases:
```
q.decompose(u.cgs.bases)
u.cgs.bases
u.si.bases
```
Units will not cancel out unless they are identical:
```
q = 7 * u.m / (7 * u.km)
q
```
But they will cancel by using the `decompose()` method:
```
x = q.decompose()
x # this is a "dimensionless" Quantity
repr(x.unit)
```
## Integration with Numpy functions
Most [Numpy](http://www.numpy.org) functions understand `Quantity` objects:
```
np.sin(30) # np.sin assumes the input is in radians
np.sin(30 * u.degree) # awesome!
q = 100 * u.kg * u.kg
np.sqrt(q)
x = np.arange(10) * u.km
x
np.mean(x)
```
Some numpy ufuncs require dimensionless quantities.
```
np.log10(4 * u.m) # this doesn't make sense
np.log10(4 * u.m / (4 * u.km)) # note the units cancelled
```
Care needs to be taken with dimensionless units.
For example, passing ordinary values to an inverse trigonometric function gives a result without units:
```
np.arcsin(1.0)
```
`u.dimensionless_unscaled` creates a ``Quantity`` with a "dimensionless unit" and therefore gives a result *with* units:
```
np.arcsin(1.0 * u.dimensionless_unscaled)
np.arcsin(1.0 * u.dimensionless_unscaled).to(u.degree)
```
**Important Note:** In-place array operations do not work with units.
For `numpy < 1.13` this example will silently drop the units.
For `numpy >= 1.13` this example will raise an error.
```
a = np.arange(10.)
a *= 1.0 * u.kg # in-place operator
a
```
Assign to a *new* array instead:
```
a = a * 1.0 * u.kg
a
```
Also, Quantities lose their units with some Numpy operations, e.g.:
* np.append
* np.dot
* np.hstack
* np.vstack
* np.where
* np.choose
* np.vectorize
See [Quantity Known Issues](http://docs.astropy.org/en/stable/known_issues.html#quantities-lose-their-units-with-some-operations) for more details.
## Defining new units
You can also define custom units for something that isn't built in to astropy.
Let's define a unit called **"sol"** that represents a Martian day.
```
sol = u.def_unit('sol', 1.0274912510 * u.day)
(1. * u.yr).to(sol) # 1 Earth year in Martian sol units
```
Now let's define Mark Watney's favorite unit, the [**Pirate-Ninja**](https://en.wikipedia.org/wiki/List_of_humorous_units_of_measurement#Pirate_Ninja):
```
pirate_ninja = u.def_unit('☠️👤', 1.0 * u.kW * u.hr / sol)
5.2 * pirate_ninja
# Mars oxygenator power requirement for 6 people
(44.1 * pirate_ninja).to(u.W)
```
## Using physical constants
The [astropy.constants](http://docs.astropy.org/en/v0.2.1/constants/index.html) module contains physical constants relevant for astronomy. They are defined as ``Quantity`` objects using the ``astropy.units`` framework.
```
from astropy.constants import G, c, R_earth
G
c
R_earth
```
Constants are Quantities, thus they can be converted to other units:
```
R_earth.to(u.km)
```
Please see the complete list of [available physical constants](http://docs.astropy.org/en/stable/constants/index.html#module-astropy.constants). Additions are welcome!
## Equivalencies
Equivalencies can be used to convert quantities that are not strictly the same physical type, but in a specific context are interchangeable. A familiar physics example is the mass-energy equivalency: strictly these are different physical types, but it is often understood that you can convert between the two using $E=mc^2$:
```
from astropy.constants import m_p # proton mass
# this raises an error because mass and energy are different units
(m_p).to(u.eV)
# this succeeds, using equivalencies
(m_p).to(u.MeV, u.mass_energy())
```
This concept extends further in `astropy.units` to include some common practical astronomy situations where the units have no direct physical connection, but it is often useful to have a "quick shorthand". For example, astronomical spectra are often given as a function of wavelength, frequency, or even energy of the photon. Suppose you want to find the Lyman-limit wavelength:
```
# this raises an error
(13.6 * u.eV).to(u.Angstrom)
```
Normally, one can convert `u.eV` only to the following units:
```
u.eV.find_equivalent_units()
```
But by using a spectral equivalency, one can also convert `u.eV` to the following units:
```
u.eV.find_equivalent_units(equivalencies=u.spectral())
(13.6 * u.eV).to(u.Angstrom, equivalencies=u.spectral())
```
Or if you remember the 21cm HI line, but can't remember the frequency, you could do:
```
(21. * u.cm).to(u.GHz, equivalencies=u.spectral())
```
To go one step further, the units of a spectrum's *flux* are further complicated by being dependent on the units of the spectrum's "x-axis" (i.e., $f_{\lambda}$ for flux per unit wavelength or $f_{\nu}$ for flux per unit frequency). `astropy.units` supports this use case, but it is necessary to supply the location in the spectrum where the conversion is done:
```
q = (1e-18 * u.erg / u.s / u.cm**2 / u.AA)
q
q.to(u.uJy, equivalencies=u.spectral_density(1. * u.um))
```
There's a lot of flexibility with equivalencies, including a variety of other useful built-in equivalencies. So if you want to know more, you might want to check out the [equivalencies narrative documentation](http://docs.astropy.org/en/stable/units/equivalencies.html) or the [astropy.units.equivalencies reference docs](http://docs.astropy.org/en/stable/units/index.html#module-astropy.units.equivalencies).
# Putting it all together
## A simple example
Let's estimate the (circular) orbital speed of the Earth around the Sun using Kepler's Law:
$$v = \sqrt{\frac{G M_{\odot}}{r}}$$
```
from astropy.constants import G
v = np.sqrt(G * 1 * u.M_sun / (1 * u.au))
v
```
That's a velocity unit... but it sure isn't obvious when you look at it!
Let's use a variety of the available quantity methods to get something more sensible:
```
v.decompose() # remember the default uses SI bases
v.decompose(u.cgs.bases)
v.to(u.km / u.s)
```
## Exercise 1
The *James Webb Space Telescope (JWST)* will be located at the second Sun-Earth Lagrange (L2) point:
☀️ 🌎 L2 *(not to scale)*
L2 is located at a distance from the Earth (opposite the Sun) of approximately:
$$ r \approx R \left(\frac{M_{earth}}{3 M_{sun}}\right) ^{(1/3)} $$
where $R$ is the Sun-Earth distance.
Calculate the Earth-L2 distance in kilometers and miles.
*Hints*:
* $M_{earth}$ and $M_{sun}$ are defined [constants](http://docs.astropy.org/en/stable/constants/#reference-api)
* the mile unit is defined as ``u.imperial.mile`` (see [imperial units](http://docs.astropy.org/en/v0.2.1/units/index.html#module-astropy.units.imperial))
```
# answer here (km)
# answer here (mile)
```
## Exercise 2
The L2 point is about 1.5 million kilometers away from the Earth opposite the Sun.
The total mass of the *James Webb Space Telescope (JWST)* is about 6500 kg.
Using the value you obtained above for the Earth-L2 distance, calculate the gravitational force in Newtons between
* *JWST* (at L2) and the Earth
* *JWST* (at L2) and the Sun
*Hint*: the gravitational force between two masses separated by a distance *r* is:
$$ F_g = \frac{G m_1 m_2}{r^2} $$
```
# answer here (Earth)
# answer here (Sun)
```
## Advanced Example: little *h*
For this example, we'll consider how to support the practice of defining units in terms of the dimensionless Hubble constant $h=h_{100}=\frac{H_0}{100 \, {\rm km/s/Mpc}} $.
We use the name 'h100' to differentiate from "h" as in "hours".
```
# define the h100 and h70 units
h100 = u.def_unit(['h100', 'littleh'])
h70 = u.def_unit('h70', h100 * 100. / 70)
# add as equivalent units
u.add_enabled_units([h100, h70])
h100.find_equivalent_units()
```
Define the Hubble constant ($H_0$) in terms of ``h100``:
```
H = 100 * h100 * u.km / u.s / u.Mpc
H
```
Now compute the Hubble time ($1 / H_0$) for h = h100 = 1:
```
t_H = (1/H).to(u.Gyr / h100)
t_H
```
and for h = 0.7:
```
t_H.to(u.Gyr / h70)
```
| github_jupyter |
# Illustrates function iteration, Newton, and secant methods
**Randall Romero Aguilar, PhD**
This demo is based on the original Matlab demo accompanying the <a href="https://mitpress.mit.edu/books/applied-computational-economics-and-finance">Computational Economics and Finance</a> 2001 textbook by Mario Miranda and Paul Fackler.
Original (Matlab) CompEcon file: **demslv06.m**
Running this file requires the Python version of CompEcon. This can be installed with pip by running
!pip install compecon --upgrade
<i>Last updated: 2021-Oct-01</i>
<hr>
```
from compecon.demos import demo
import numpy as np
import matplotlib.pyplot as plt
```
### Function Iteration
```
def g(x):
return (x + 0.2)**0.5
xmin, xmax = 0.0, 1.4
xinit, xstar = 0.3, 0.5*(1 + np.sqrt(1.8))
xx = np.linspace(xmin,xmax)
yy = g(xx)
n = 21
z = np.zeros(n)
z[0] = xinit
for k in range(n-1):
z[k+1] = g(z[k])
x, y = z[:-1], z[1:]
fig1 = demo.figure('Function Iteration','','',[xmin,xmax],[xmin,xmax], figsize=[6,6])
ax = plt.gca()
ax.set_aspect(1)
ax.set_xticks( x[:3].tolist() + [xstar])
ax.set_xticklabels(['$x_0$', '$x_1$', '$x_2$', '$x^*$'])
ax.set_yticks( y[:3].tolist() + [xstar])
ax.set_yticklabels(['$x_1$', '$x_2$', '$x_3$', '$x^*$'])
for xi in ax.get_xticks():
plt.plot([xi,xi], [xmin, xi], 'w--')
for yi in ax.get_yticks():
plt.plot([xmin,yi], [yi, yi], 'w--')
demo.bullet(xstar,xstar,spec='w.',ms=20)
plt.plot(xx,xx,'k-', linewidth=1)
plt.plot(xx,yy,linewidth=4)
plt.step(x,x,'r')
for xi,yi in zip(x[:4], y[:4]):
demo.bullet(xi,xi,spec='g.',ms=18)
demo.bullet(xi,yi,spec='r.',ms=18)
demo.text(xmin+0.1,xmin+0.05,'45°',ha='left',fs=11)
demo.text(xmax-0.05,g(xmax)-0.08,'g',ha='left',fs=18,color='b')
```
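The fixed point plotted above can also be checked numerically: iterating $g$ from $x_0 = 0.3$ converges to $x^* = (1 + \sqrt{1.8})/2$, since $|g'(x^*)| < 1$ there. A minimal sketch without the plotting:

```python
def g(x):
    return (x + 0.2)**0.5

x = 0.3
for _ in range(50):
    x = g(x)  # repeated application converges because |g'(x*)| < 1

print(x)  # ≈ 0.5*(1 + 1.8**0.5) ≈ 1.17082
```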
### Newton's Method
```
def f(x):
return x**5 - 3, 5*x**4
xmin, xmax = 1.0, 2.55
xinit, xstar = xmax-0.05, 3**(1/5)
xx = np.linspace(xmin, xmax)
yy, dyy = f(xx)
n = 5
x, y = np.zeros(n), np.zeros(n)
x[0] = xinit
for k in range(n-1):
y[k], dlag = f(x[k])
x[k+1] = x[k] - y[k]/dlag
fig2 = demo.figure("Newton's Method",'','',[xmin,xmax], figsize=[9,6])
ax = plt.gca()
ax.set_xticks( x[:4].tolist() + [xstar])
ax.set_xticklabels(['$x_0$', '$x_1$', '$x_2$','$x_3$', '$x^*$'])
ax.set_yticks([])
plt.plot(xx,yy)
plt.hlines(0,xmin, xmax, colors='k')
demo.text(xinit,f(xinit+0.03)[0],'f',fs=18,color='b')
demo.bullet(xstar,0,spec='r*',ms=18)
for xi,xinext,yi in zip(x,x[1:],y):
plt.plot([xi,xi],[0,yi],'w--')
plt.plot([xi,xinext],[yi, 0],'r-')
demo.bullet(xi,yi,spec='r.',ms=18)
demo.bullet(xinext,0,spec='g.',ms=18)
```
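The Newton update $x_{k+1} = x_k - f(x_k)/f'(x_k)$ used in the figure above can be verified without any plotting; from $x_0 = 2.5$ it reaches the root $3^{1/5}$ in a handful of iterations:

```python
def f(x):
    return x**5 - 3, 5*x**4   # function value and its derivative

x = 2.5
for _ in range(12):
    fx, dfx = f(x)
    x -= fx / dfx             # Newton step

print(x)  # ≈ 3**(1/5) ≈ 1.24573
```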
### Secant Method
```
def f(x):
return x**5 - 3
xmin, xmax = 1.0, 2.55
xinit, xstar = xmax-0.05, 3**(1/5)
xx = np.linspace(xmin, xmax)
yy = f(xx)
n = 4
x = np.zeros(n)
x[:2] = xinit, xinit-0.25
y = f(x)
for i in range(2,n):
x[i] = x[i-1] - y[i-1]*(x[i-1]-x[i-2]) / (y[i-1]-y[i-2])
y[i] = f(x[i])
fig3 = demo.figure("Secant Method",'','',[xmin,xmax], figsize=[9,6])
ax = plt.gca()
ax.set_xticks( x[:4].tolist() + [xstar])
ax.set_xticklabels(['$x_0$', '$x_1$', '$x_2$','$x_3$', '$x^*$'])
ax.set_yticks([])
plt.plot(xx,yy)
plt.hlines(0,xmin, xmax, colors='k')
demo.text(xinit,f(xinit+0.03),'f',fs=18,color='b')
for xi,yi in zip(x,y):
plt.plot([xi,xi],[0,yi],'w--')
demo.bullet(xi,yi,spec='r.',ms=18)
demo.bullet(xi,0,spec='g.',ms=18)
for xi,xinext,yi in zip(x,x[2:],y):
plt.plot([xi,xinext],[yi, 0],'r-')
demo.bullet(xstar,0,spec='r*',ms=18)
```
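The secant update replaces the derivative with a finite-difference slope through the last two iterates; a minimal numerical check of the same root:

```python
def f(x):
    return x**5 - 3

x_prev, x_curr = 2.5, 2.25    # two starting points
for _ in range(25):
    f_prev, f_curr = f(x_prev), f(x_curr)
    if f_curr == f_prev:      # converged; avoid a zero denominator
        break
    # secant step: slope estimated from the last two iterates
    x_prev, x_curr = x_curr, x_curr - f_curr * (x_curr - x_prev) / (f_curr - f_prev)

print(x_curr)  # ≈ 3**(1/5)
```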
| github_jupyter |
<a href="https://colab.research.google.com/github/towardsai/tutorials/blob/master/neural_networks_tutorial_part_2/neural_networks_tutorial_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Building Neural Networks from Scratch with Python Code and Math in Detail - II
* Tutorial: https://towardsai.net/p/machine-learning/building-neural-networks-with-python-code-and-math-in-detail-ii-bbe8accbf3d1
* Github: https://github.com/towardsai/tutorials/tree/master/neural_networks_tutorial_part_2
```
# Import required libraries :
import numpy as np
# Define input features :
input_features = np.array([[0,0],[0,1],[1,0],[1,1]])
print (input_features.shape)
print (input_features)
# Define target output :
target_output = np.array([[0,1,1,1]])
# Reshaping our target output into vector :
target_output = target_output.reshape(4,1)
print(target_output.shape)
print (target_output)
# Define weights :
# 6 for hidden layer
# 3 for output layer
# 9 total
weight_hidden = np.array([[0.1,0.2,0.3],
[0.4,0.5,0.6]])
weight_output = np.array([[0.7],[0.8],[0.9]])
# Learning Rate :
lr = 0.05
# Sigmoid function :
def sigmoid(x):
return 1/(1+np.exp(-x))
# Derivative of sigmoid function :
def sigmoid_der(x):
return sigmoid(x)*(1-sigmoid(x))
for epoch in range(200000):
# Input for hidden layer :
input_hidden = np.dot(input_features, weight_hidden)
# Output from hidden layer :
output_hidden = sigmoid(input_hidden)
# Input for output layer :
input_op = np.dot(output_hidden, weight_output)
# Output from output layer :
output_op = sigmoid(input_op)
#==========================================================
# Phase1
# Calculating Mean Squared Error :
error_out = ((1 / 2) * (np.power((output_op - target_output), 2)))
print(error_out.sum())
# Derivatives for phase 1 :
derror_douto = output_op - target_output
douto_dino = sigmoid_der(input_op)
dino_dwo = output_hidden
derror_dwo = np.dot(dino_dwo.T, derror_douto * douto_dino)
#===========================================================
# Phase 2
# derror_w1 = derror_douth * douth_dinh * dinh_dw1
# derror_douth = derror_dino * dino_outh
# Derivatives for phase 2 :
derror_dino = derror_douto * douto_dino
dino_douth = weight_output
derror_douth = np.dot(derror_dino , dino_douth.T)
douth_dinh = sigmoid_der(input_hidden)
dinh_dwh = input_features
derror_wh = np.dot(dinh_dwh.T, douth_dinh * derror_douth)
# Update Weights
weight_hidden -= lr * derror_wh
    weight_output -= lr * derror_dwo
# Final hidden layer weight values :
print (weight_hidden)
# Final output layer weight values :
print (weight_output)
# Predictions :
#Taking inputs :
single_point = np.array([1,1])
#1st step :
result1 = np.dot(single_point, weight_hidden)
#2nd step :
result2 = sigmoid(result1)
#3rd step :
result3 = np.dot(result2,weight_output)
#4th step :
result4 = sigmoid(result3)
print(result4)
#=================================================
#Taking inputs :
single_point = np.array([0,0])
#1st step :
result1 = np.dot(single_point, weight_hidden)
#2nd step :
result2 = sigmoid(result1)
#3rd step :
result3 = np.dot(result2,weight_output)
#4th step :
result4 = sigmoid(result3)
print(result4)
#=====================================================
#Taking inputs :
single_point = np.array([1,0])
#1st step :
result1 = np.dot(single_point, weight_hidden)
#2nd step :
result2 = sigmoid(result1)
#3rd step :
result3 = np.dot(result2,weight_output)
#4th step :
result4 = sigmoid(result3)
print(result4)
# Import required libraries :
import numpy as np

# Define input features :
input_features = np.array([[0,0],[0,1],[1,0],[1,1]])
print (input_features.shape)
print (input_features)

# Define target output :
target_output = np.array([[0,1,1,1]])
# Reshaping our target output into vector :
target_output = target_output.reshape(4,1)
print(target_output.shape)
print (target_output)

# Define weights :
weights = np.array([[0.1],[0.2]])
print(weights.shape)
print (weights)

# Define learning rate :
lr = 0.05

# Sigmoid function :
def sigmoid(x):
    return 1/(1+np.exp(-x))

# Derivative of sigmoid function :
def sigmoid_der(x):
    return sigmoid(x)*(1-sigmoid(x))

# Main logic for neural network :
# Running our code 10000 times :
for epoch in range(10000):
    inputs = input_features
    # Feedforward input :
    pred_in = np.dot(inputs, weights)
    # Feedforward output :
    pred_out = sigmoid(pred_in)
    # Backpropagation : calculating error
    error = pred_out - target_output
    x = error.sum()
    print(x)
    # Calculating derivatives :
    dcost_dpred = error
    dpred_dz = sigmoid_der(pred_out)
    # Multiplying individual derivatives :
    z_delta = dcost_dpred * dpred_dz
    # Multiplying with the 3rd individual derivative and updating :
    inputs = input_features.T
    weights -= lr * np.dot(inputs, z_delta)

# Predictions :
# Taking inputs :
single_point = np.array([1,0])
# 1st step :
result1 = np.dot(single_point, weights)
# 2nd step :
result2 = sigmoid(result1)
# Print final result
print(result2)

#====================================
# Taking inputs :
single_point = np.array([0,0])
# 1st step :
result1 = np.dot(single_point, weights)
# 2nd step :
result2 = sigmoid(result1)
# Print final result
print(result2)

#===================================
# Taking inputs :
single_point = np.array([1,1])
# 1st step :
result1 = np.dot(single_point, weights)
# 2nd step :
result2 = sigmoid(result1)
# Print final result
print(result2)
# Import required libraries :
import numpy as np
# Define input features :
input_features = np.array([[0,0],[0,1],[1,0],[1,1]])
print (input_features.shape)
print (input_features)
# Define target output :
target_output = np.array([[0,1,1,0]])
# Reshaping our target output into vector :
target_output = target_output.reshape(4,1)
print(target_output.shape)
print (target_output)
# Define weights :
# 8 for hidden layer
# 4 for output layer
# 12 total
weight_hidden = np.random.rand(2,4)
weight_output = np.random.rand(4,1)
# Learning Rate :
lr = 0.05
# Sigmoid function :
def sigmoid(x):
return 1/(1+np.exp(-x))
# Derivative of sigmoid function :
def sigmoid_der(x):
return sigmoid(x)*(1-sigmoid(x))
# Main logic :
for epoch in range(200000):
# Input for hidden layer :
input_hidden = np.dot(input_features, weight_hidden)
# Output from hidden layer :
output_hidden = sigmoid(input_hidden)
# Input for output layer :
input_op = np.dot(output_hidden, weight_output)
# Output from output layer :
output_op = sigmoid(input_op)
#========================================================================
# Phase1
# Calculating Mean Squared Error :
error_out = ((1 / 2) * (np.power((output_op - target_output), 2)))
print(error_out.sum())
# Derivatives for phase 1 :
derror_douto = output_op - target_output
douto_dino = sigmoid_der(input_op)
dino_dwo = output_hidden
derror_dwo = np.dot(dino_dwo.T, derror_douto * douto_dino)
# ========================================================================
# Phase 2
# derror_w1 = derror_douth * douth_dinh * dinh_dw1
# derror_douth = derror_dino * dino_outh
# Derivatives for phase 2 :
derror_dino = derror_douto * douto_dino
dino_douth = weight_output
derror_douth = np.dot(derror_dino , dino_douth.T)
douth_dinh = sigmoid_der(input_hidden)
dinh_dwh = input_features
derror_dwh = np.dot(dinh_dwh.T, douth_dinh * derror_douth)
# Update Weights
weight_hidden -= lr * derror_dwh
weight_output -= lr * derror_dwo
# Final values of weight in hidden layer :
print (weight_hidden)
# Final values of weight in output layer :
print (weight_output)
#Taking inputs :
single_point = np.array([0,-1])
#1st step :
result1 = np.dot(single_point, weight_hidden)
#2nd step :
result2 = sigmoid(result1)
#3rd step :
result3 = np.dot(result2,weight_output)
#4th step :
result4 = sigmoid(result3)
print(result4)
#Taking inputs :
single_point = np.array([0,5])
#1st step :
result1 = np.dot(single_point, weight_hidden)
#2nd step :
result2 = sigmoid(result1)
#3rd step :
result3 = np.dot(result2,weight_output)
#4th step :
result4 = sigmoid(result3)
print(result4)
#Taking inputs :
single_point = np.array([1,1.2])
#1st step :
result1 = np.dot(single_point, weight_hidden)
#2nd step :
result2 = sigmoid(result1)
#3rd step :
result3 = np.dot(result2,weight_output)
#4th step :
result4 = sigmoid(result3)
print(result4)
```
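The analytic derivative `sigmoid_der` that drives every chain-rule step above can be sanity-checked against a central finite difference; a minimal, self-contained sketch:

```python
import numpy as np

def sigmoid(x):
    return 1/(1+np.exp(-x))

def sigmoid_der(x):
    return sigmoid(x)*(1-sigmoid(x))

# Central finite difference approximates the derivative to O(eps**2).
x = np.linspace(-3, 3, 13)
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(np.max(np.abs(numeric - sigmoid_der(x))))  # very small
```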
| github_jupyter |
<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/40_ipywidgets.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
Uncomment the following line to install [geemap](https://geemap.org) if needed.
```
# !pip install geemap
```
# How to add interactive widgets to the map
## Import libraries
```
import ee
import geemap
import ipyleaflet
import ipywidgets as widgets
```
## Create an interactive map
```
Map = geemap.Map(center=[40, -100], zoom=4)
Map
```
## Add Earth Engine data
### Add raster data
```
dem = ee.Image('USGS/SRTMGL1_003')
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}
Map.addLayer(dem, vis_params, 'DEM')
```
### Add vector data
```
fc = ee.FeatureCollection('TIGER/2018/States')
Map.addLayer(fc, {}, 'US States')
```
## Change layer opacity
```
Map
dem_layer = Map.find_layer('DEM')
dem_layer.interact(opacity=(0, 1, 0.1))
vector_layer = Map.find_layer('US States')
vector_layer.interact(opacity=(0, 1, 0.1))
```
## Widget list
https://ipywidgets.readthedocs.io/en/latest/examples/Widget%20List.html
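Beyond reading `.value` directly, widgets can be wired together with `observe` callbacks that fire whenever a trait changes; a minimal sketch using only `ipywidgets` (the widget names here are illustrative):

```python
import ipywidgets as widgets

slider = widgets.IntSlider(value=2000, min=1984, max=2020, description='Year:')
label = widgets.Label()

def on_change(change):
    # 'change' carries the trait update; change['new'] holds the new value.
    label.value = f"Selected year: {change['new']}"

# Only react to changes of the 'value' trait.
slider.observe(on_change, names='value')
```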
### Numeric widgets
#### IntSlider
```
int_slider = widgets.IntSlider(
value=2000,
min=1984,
max=2020,
step=1,
description='Year:'
)
int_slider
int_slider.value
```
#### FloatSlider
```
float_slider = widgets.FloatSlider(
value=0,
min=-1,
max=1,
step=0.05,
description='Threshold:'
)
float_slider
float_slider.value
```
#### IntProgress
```
int_progress = widgets.IntProgress(
value=7,
min=0,
max=10,
step=1,
description='Loading:',
bar_style='', # 'success', 'info', 'warning', 'danger' or ''
orientation='horizontal'
)
int_progress
int_text = widgets.IntText(
value=7,
description='Any:',
)
int_text
float_text = widgets.FloatText(
value=7.5,
description='Any:',
)
float_text
```
### Boolean widgets
#### ToggleButton
```
toggle_button = widgets.ToggleButton(
value=False,
description='Click me',
disabled=False,
button_style='success', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check' # (FontAwesome names without the `fa-` prefix)
)
toggle_button
toggle_button.value
```
#### Checkbox
```
checkbox = widgets.Checkbox(
value=False,
description='Check me',
disabled=False,
indent=False
)
checkbox
checkbox.value
```
### Selection widgets
#### Dropdown
```
dropdown = widgets.Dropdown(
options=['USA', 'Canada', 'Mexico'],
value='Canada',
description='Country:'
)
dropdown
dropdown.value
```
#### RadioButtons
```
radio_buttons = widgets.RadioButtons(
options=['USA', 'Canada', 'Mexico'],
value='Canada',
description='Country:'
)
radio_buttons
radio_buttons.value
```
### String widgets
#### Text
```
text = widgets.Text(
value='USA',
placeholder='Enter a country name',
description='Country:',
disabled=False
)
text
text.value
```
#### Textarea
```
widgets.Textarea(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
```
#### HTML
```
widgets.HTML(
value="Hello <b>World</b>",
placeholder='Some HTML',
description='Some HTML',
)
widgets.HTML(
value='<img src="https://earthengine.google.com/static/images/earth-engine-logo.png" width="100" height="100">'
)
```
### Button
```
button = widgets.Button(
description='Click me',
button_style='info', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me',
icon='check' # (FontAwesome names without the `fa-` prefix)
)
button
```
### Date picker
```
date_picker = widgets.DatePicker(
description='Pick a Date',
disabled=False
)
date_picker
date_picker.value
```
### Color picker
```
color_picker = widgets.ColorPicker(
concise=False,
description='Pick a color',
value='blue',
disabled=False
)
color_picker
color_picker.value
```
### Output widget
```
out = widgets.Output(layout={'border': '1px solid black'})
out
with out:
for i in range(10):
print(i, 'Hello world!')
from IPython.display import YouTubeVideo
out.clear_output()
with out:
display(YouTubeVideo('7qRtsTCnnSM'))
out
out.clear_output()
with out:
display(widgets.IntSlider())
out
```
## Add a widget to the map
```
Map = geemap.Map()
dem = ee.Image('USGS/SRTMGL1_003')
fc = ee.FeatureCollection('TIGER/2018/States')
vis_params = {
'min': 0,
'max': 4000,
'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}
Map.addLayer(dem, vis_params, 'DEM')
Map.addLayer(fc, {}, 'US States')
Map
output_widget = widgets.Output(layout={'border': '1px solid black'})
output_control = ipyleaflet.WidgetControl(widget=output_widget, position='bottomright')
Map.add_control(output_control)
with output_widget:
print('Nice map!')
output_widget.clear_output()
logo = widgets.HTML(
value='<img src="https://earthengine.google.com/static/images/earth-engine-logo.png" width="100" height="100">'
)
with output_widget:
display(logo)
def handle_interaction(**kwargs):
latlon = kwargs.get('coordinates')
if kwargs.get('type') == 'click':
Map.default_style = {'cursor': 'wait'}
xy = ee.Geometry.Point(latlon[::-1])
selected_fc = fc.filterBounds(xy)
with output_widget:
output_widget.clear_output()
try:
name = selected_fc.first().get('NAME').getInfo()
usps = selected_fc.first().get('STUSPS').getInfo()
Map.layers = Map.layers[:4]
geom = selected_fc.geometry()
layer_name = name + '-' + usps
Map.addLayer(ee.Image().paint(geom, 0, 2), {'palette': 'red'}, layer_name)
print(layer_name)
except Exception as e:
print('No feature could be found')
Map.layers = Map.layers[:4]
Map.default_style = {'cursor': 'pointer'}
Map.on_interaction(handle_interaction)
```
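The `on_interaction` callback pattern used above can be sketched without any mapping libraries. The `EventSource` class below is illustrative (it is not part of ipyleaflet's API); it simply shows how registered handlers receive the event payload as keyword arguments, which is why `handle_interaction` is written with `**kwargs`:

```python
# Minimal event-dispatch sketch: handlers receive the event payload as
# keyword arguments, mirroring how ipyleaflet invokes interaction callbacks.
class EventSource:
    def __init__(self):
        self._handlers = []

    def on_interaction(self, handler):
        # Register a callback to be invoked for every interaction event.
        self._handlers.append(handler)

    def fire(self, **kwargs):
        # Deliver the event payload to every registered handler.
        for handler in self._handlers:
            handler(**kwargs)

clicks = []

def handle_interaction(**kwargs):
    # Only react to click events, like the map handler above.
    if kwargs.get('type') == 'click':
        clicks.append(kwargs.get('coordinates'))

source = EventSource()
source.on_interaction(handle_interaction)
source.fire(type='mousemove', coordinates=(46.7, -117.2))
source.fire(type='click', coordinates=(46.7, -117.2))
print(clicks)  # only the click event is recorded
```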
# Unified API: Train and evaluation
This notebook presents the solution for training and evaluating both the __GANITE__ and __CMGP__ algorithms over the [Twins](https://bitbucket.org/mvdschaar/mlforhealthlabpub/src/master/data/twins/) dataset, using a unified API.
For details about each algorithm, please refer to their dedicated notebooks:
- [GANITE(Tensorflow) notebook](https://github.com/bcebere/ite-api/blob/main/notebooks/ganite_train_evaluation.ipynb).
- [GANITE(PyTorch) notebook](https://github.com/bcebere/ite-api/blob/main/notebooks/ganite_pytorch_train_evaluation.ipynb).
- [CMGP notebook](https://github.com/bcebere/ite-api/blob/main/notebooks/cmgp_train_evaluation.ipynb).
## Setup
First, make sure that all the dependencies are installed in the current environment.
```
pip install -r requirements.txt
pip install .
```
Next, we import all the dependencies necessary for the task.
```
# Double check that we are using the correct interpreter.
import sys
print(sys.executable)
# Import depends
from ite.algs.model import Model # the unified API
import ite.datasets as ds
from matplotlib import pyplot as plt
from IPython.display import HTML, display
import tabulate
```
## Load the Dataset
The example is done using the [Twins](https://bitbucket.org/mvdschaar/mlforhealthlabpub/src/master/data/twins/) dataset.
Next, we load the dataset, process the data, and sample a training set and a test set.
The logic is implemented [here](https://github.com/bcebere/ite-api/tree/main/src/ite/datasets), and it is adapted from the original [GANITE pre-processing implementation](https://bitbucket.org/mvdschaar/mlforhealthlabpub/src/master/alg/ganite/data_preprocessing_ganite.py).
For CMGP, we have to downsample to 1000 training items. The other algorithms use the full dataset without downsampling.
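The split-and-downsample step performed by `ds.load` can be sketched with NumPy (a sketch only — `split_dataset` and its arguments are illustrative, and the real loader's behavior may differ):

```python
import numpy as np

def split_dataset(X, train_ratio=0.8, downsample=None, seed=0):
    """Shuffle, split by ratio, and optionally downsample the training set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_train = int(train_ratio * len(X))
    train_idx, test_idx = idx[:n_train], idx[n_train:]
    if downsample is not None:
        # Keep at most `downsample` training rows (e.g. 1000 for CMGP).
        train_idx = train_idx[:downsample]
    return X[train_idx], X[test_idx]

X = np.arange(5000).reshape(2500, 2)
train, test = split_dataset(X, train_ratio=0.8, downsample=1000)
print(train.shape, test.shape)  # (1000, 2) (500, 2)
```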
```
train_ratio = 0.8
full_dataloader = ds.load("twins", train_ratio)
cmgp_dataloader = ds.load("twins", train_ratio, downsample=1000)
```
## Load and train GANITE(Tensorflow version)
The constructor requires the name of the chosen algorithm for the first parameter - `GANITE`.
The constructor supports the same parameters as the "native" version:
- `dim`: The number of features in X.
- `dim_outcome`: The number of potential outcomes.
- `dim_hidden`: hyperparameter for tuning the size of the hidden layer.
- `depth`: hyperparameter for the number of hidden layers in the generator and inference blocks.
- `num_iterations`: hyperparameter for the number of training epochs.
- `alpha`: hyperparameter used for the Generator block loss.
- `beta`: hyperparameter used for the ITE block loss.
- `num_discr_iterations`: number of iterations executed by the discriminator.
The hyperparameters used in this experiment are from Table 7 in the paper.
```
dim = len(full_dataloader[0][0])
dim_outcome = full_dataloader[-1].shape[1]
ganite_model = Model(
"GANITE",
dim,
dim_outcome,
dim_hidden=8,
num_iterations=10000,
alpha=2,
beta=2,
minibatch_size=128,
num_discr_iterations=3,
depth=5,
)
ganite_tf_metrics = ganite_model.train(*full_dataloader)
ganite_tf_metrics.print()
ganite_tf_metrics.plot(plt, thresholds = [0.2, 0.25, 0.3, 0.35])
```
## Load and train GANITE(PyTorch version)
The constructor requires the name of the chosen algorithm for the first parameter - `GANITE_TORCH`.
The constructor supports the same parameters as the "native" version:
- `dim`: The number of features in X.
- `dim_outcome`: The number of potential outcomes.
- `dim_hidden`: hyperparameter for tuning the size of the hidden layer.
- `depth`: hyperparameter for the number of hidden layers in the generator and inference blocks.
- `num_iterations`: hyperparameter for the number of training epochs.
- `alpha`: hyperparameter used for the Generator block loss.
- `beta`: hyperparameter used for the ITE block loss.
- `num_discr_iterations`: number of iterations executed by the discriminator.
The hyperparameters used in this experiment are computed using the [hyperparameter tuning notebook](https://github.com/bcebere/ite-api/blob/main/notebooks/hyperparam_tuning.ipynb).
```
ganite_torch_model = Model(
"GANITE_TORCH",
dim,
dim_outcome,
dim_hidden=30,
num_iterations=3000,
alpha=1,
beta=10,
minibatch_size=256,
num_discr_iterations=6,
depth=5,
)
ganite_torch_metrics = ganite_torch_model.train(*full_dataloader)
ganite_torch_metrics.print()
ganite_torch_metrics.plot(plt, thresholds = [0.2, 0.25, 0.3, 0.35])
```
## Load and train CMGP
The constructor requires the name of the chosen algorithm for the first parameter - `CMGP`.
The constructor supports the following parameters:
- `dim`: The number of features in X.
- `dim_outcome`: The number of potential outcomes.
- `max_gp_iterations`: Maximum number of GP iterations before stopping the training.
```
cmgp_model = Model(
"CMGP",
dim=dim,
dim_outcome=dim_outcome,
    max_gp_iterations=1000, # (optional) Maximum number of iterations for the Gaussian Process
)
for experiment in range(5):
cmgp_dataloader = ds.load("twins", train_ratio, downsample=1000)
cmgp_metrics = cmgp_model.train(*cmgp_dataloader)
cmgp_metrics.print()
cmgp_metrics.plot(plt, thresholds = [0.2, 0.25, 0.3, 0.35])
```
## Evaluate the models on the test set
We evaluate each algorithm on their test sets, using the [PEHE](https://github.com/bcebere/ite-api/blob/main/src/ite/utils/numpy.py#L19) and [ATE](https://github.com/bcebere/ite-api/blob/main/src/ite/utils/numpy.py#L58) metrics.
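For reference, the two metrics can be computed from true and predicted potential outcomes roughly as follows. This is a NumPy sketch of the standard definitions; the linked implementations are authoritative:

```python
import numpy as np

def sqrt_pehe(y0_true, y1_true, y0_pred, y1_pred):
    # PEHE: expected squared error of the estimated individual treatment effect.
    te_true = y1_true - y0_true
    te_pred = y1_pred - y0_pred
    return np.sqrt(np.mean((te_true - te_pred) ** 2))

def ate_error(y0_true, y1_true, y0_pred, y1_pred):
    # ATE error: gap between the average true and predicted treatment effects.
    return np.abs(np.mean(y1_true - y0_true) - np.mean(y1_pred - y0_pred))

y0_true = np.array([0.0, 0.0, 1.0])
y1_true = np.array([1.0, 1.0, 1.0])
y0_pred = np.array([0.0, 0.5, 1.0])
y1_pred = np.array([1.0, 1.0, 1.0])
print(sqrt_pehe(y0_true, y1_true, y0_pred, y1_pred))
print(ate_error(y0_true, y1_true, y0_pred, y1_pred))
```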
```
for scenario in ["in-sample", "out-sample"]:
print(f"{scenario} metrics:")
results = [
[
"GANITE",
"{:0.3f} +/- {:0.3f}".format(
*ganite_tf_metrics.mean_confidence_interval("sqrt_PEHE", f"ITE Block {scenario} metrics")
),
"{:0.3f} +/- {:0.3f}".format(
*ganite_tf_metrics.mean_confidence_interval("ATE", f"ITE Block {scenario} metrics")
),
"{:0.3f} +/- {:0.3f}".format(
*ganite_tf_metrics.mean_confidence_interval("MSE", f"ITE Block {scenario} metrics")
),
],
[
"GANITE_TORCH",
"{:0.3f} +/- {:0.3f}".format(
*ganite_torch_metrics.mean_confidence_interval("sqrt_PEHE", f"ITE Block {scenario} metrics")
),
"{:0.3f} +/- {:0.3f}".format(
*ganite_torch_metrics.mean_confidence_interval("ATE", f"ITE Block {scenario} metrics")
),
"{:0.3f} +/- {:0.3f}".format(
*ganite_torch_metrics.mean_confidence_interval("MSE", f"ITE Block {scenario} metrics")
),
],
[
"CMGP",
"{:0.3f} +/- {:0.3f}".format(
*cmgp_metrics.mean_confidence_interval("sqrt_PEHE", f"{scenario} metrics")
),
"{:0.3f} +/- {:0.3f}".format(*cmgp_metrics.mean_confidence_interval("ATE", f"{scenario} metrics")),
"{:0.3f} +/- {:0.3f}".format(*cmgp_metrics.mean_confidence_interval("MSE", f"{scenario} metrics")),
],
]
display(
HTML(tabulate.tabulate(results, headers=["Model", "sqrt_PEHE", "ATE", "MSE"], tablefmt="html"))
)
```
## References
1. Jinsung Yoon, James Jordon, Mihaela van der Schaar, "GANITE: Estimation of Individualized Treatment Effects using Generative Adversarial Nets", International Conference on Learning Representations (ICLR), 2018 ([Paper](https://openreview.net/forum?id=ByKWUeWA-)).
2. [GANITE Reference implementation](https://bitbucket.org/mvdschaar/mlforhealthlabpub/src/master/alg/ganite/).
3. Ahmed M. Alaa, Mihaela van der Schaar, "Bayesian Inference of Individualized Treatment
Effects using Multi-task Gaussian Processes", NeurIPS, 2017 ([Paper](https://arxiv.org/pdf/1704.02801.pdf)).
4. [CMGP Reference implementation](https://bitbucket.org/mvdschaar/mlforhealthlabpub/src/master/alg/causal_multitask_gaussian_processes_ite/).
5. [Clairvoyance: a Unified, End-to-End AutoML Pipeline for Medical Time Series](https://github.com/vanderschaarlab/clairvoyance).
```
!pip install datatable
!pip install wget
!pip install tensorflow
import pandas as pd
import os
from keras.preprocessing.text import Tokenizer
import datatable as dt
import wget
import tensorflow as tf
from sklearn import metrics
tf.__version__
df = pd.read_csv('https://raw.githubusercontent.com/QSS-Analytics/Datasets/master/nlp.csv')
os.listdir('/content')
wget.download('https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.az.300.vec.gz')
wgt2 = dt.fread('cc.az.300.vec.gz')
del wgt2[0,:]
wgt2.head(10)
max_words = 10000
maxlen = 30
word_seq = Tokenizer(num_words=max_words)
word_seq.fit_on_texts(df['title'])
x_train = word_seq.texts_to_sequences(df['title'])
print(x_train[2])
from keras.preprocessing.sequence import pad_sequences
x_train = pad_sequences(x_train, maxlen=maxlen)
print(x_train[0, :])
words_indices = word_seq.word_index
words_indices
words_indices2 = pd.DataFrame(list(words_indices.items()), columns=['C0', 'indices'])
words_indices2
new_row = pd.DataFrame({'C0': ['no'],'indices':[0]})
new_row
words_indices2 = pd.concat([words_indices2, new_row], ignore_index=True)
from keras.models import Sequential
from keras import layers
words_indices2.head(4)
words_indices2.loc[words_indices2['C0'] == 'bal']
pandas_form = wgt2.to_pandas()
pandas_form.head(7)
words_indices2 = pd.merge(words_indices2,pandas_form,how='left')
words_indices2.head(3)
words_indices2 = words_indices2.fillna(0)
words_indices2 = words_indices2.iloc[:,2:302].to_numpy()
from keras.layers import Input
from keras.layers.embeddings import Embedding
from keras.layers import Dense
from keras.layers import SpatialDropout1D
from keras.layers import Bidirectional
from keras.layers import GRU
from keras.layers import GlobalAveragePooling1D
from keras.layers import GlobalMaxPooling1D
from keras.layers import Concatenate
from keras.layers import Layer
from keras.models import Model
from keras import backend as K
embed_size=300
def auc(y_true, y_pred):
auc = tf.metrics.auc(y_true, y_pred)[1]
K.get_session().run(tf.local_variables_initializer())
return auc
sequence_input = Input(shape=(maxlen,), name = "input")
x = Embedding(input_dim = int(6922), output_dim = 300, input_length = maxlen, weights=[words_indices2],trainable = False)(sequence_input)
x = SpatialDropout1D(0.2)(x)
x = Bidirectional(GRU(80, return_sequences=True))(x)
avg_pool = GlobalAveragePooling1D()(x)
max_pool = GlobalMaxPooling1D()(x)
x = Concatenate(axis=1)([avg_pool, max_pool])
preds = Dense(1, activation="sigmoid")(x)
model = Model(sequence_input, preds)
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=[auc])
model.summary()
y_train = df.iloc[:,1].to_numpy()
batch_size = 32
epochs=20
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split = 0.1,verbose=1)
```
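The embedding-matrix construction above (left-merging the tokenizer's word index with the pretrained vectors and zero-filling words missing from the vector file) can be reproduced on a toy vocabulary with plain pandas — the words and vectors below are made up for illustration:

```python
import numpy as np
import pandas as pd

# Toy word index (word -> integer id), as produced by a tokenizer.
word_index = pd.DataFrame({'C0': ['salam', 'dünya', 'bal'], 'indices': [1, 2, 3]})

# Toy pretrained vectors: first column is the word, the rest are dimensions.
pretrained = pd.DataFrame({'C0': ['salam', 'bal'],
                           'd1': [0.1, 0.3], 'd2': [0.2, 0.4]})

# A left merge keeps every vocabulary word; words absent from the pretrained
# table ('dünya' here) get NaN vectors, which we then zero-fill.
merged = pd.merge(word_index, pretrained, how='left').fillna(0)
embedding_matrix = merged.iloc[:, 2:].to_numpy()
print(embedding_matrix)
```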
## Using a custom botorch model with Ax
In this tutorial, we illustrate how to use a custom BoTorch model within Ax's `SimpleExperiment` API. This allows us to harness the convenience of Ax for running Bayesian Optimization loops, while at the same time maintaining full flexibility in terms of the modeling.
Acquisition functions and strategies for optimizing acquisitions can be swapped out in much the same fashion. See for example the tutorial for [Implementing a custom acquisition function](./custom_acquisition).
If you want to do something non-standard, or would like to have full insight into every aspect of the implementation, please see [this tutorial](./closed_loop_botorch_only) for how to write your own full optimization loop in BoTorch.
### Implementing the custom model
For this tutorial, we implement a very simple gpytorch Exact GP Model that uses an RBF kernel (with ARD) and infers a (homoskedastic) noise level.
Model definition is straightforward - here we implement a gpytorch `ExactGP` that also inherits from `GPyTorchModel` -- this adds all the api calls that botorch expects in its various modules.
*Note:* botorch also allows implementing other custom models as long as they follow the minimal `Model` API. For more information, please see the [Model Documentation](../docs/models).
```
from botorch.models.gpytorch import GPyTorchModel
from gpytorch.distributions import MultivariateNormal
from gpytorch.means import ConstantMean
from gpytorch.models import ExactGP
from gpytorch.kernels import RBFKernel, ScaleKernel
from gpytorch.likelihoods import GaussianLikelihood
from gpytorch.mlls import ExactMarginalLogLikelihood
from gpytorch.priors import GammaPrior
class SimpleCustomGP(ExactGP, GPyTorchModel):
def __init__(self, train_X, train_Y):
# squeeze output dim before passing train_Y to ExactGP
super().__init__(train_X, train_Y.squeeze(-1), GaussianLikelihood())
self.mean_module = ConstantMean()
self.covar_module = ScaleKernel(
base_kernel=RBFKernel(ard_num_dims=train_X.shape[-1]),
)
self.to(train_X) # make sure we're on the right device/dtype
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return MultivariateNormal(mean_x, covar_x)
```
#### Define a factory function to be used with Ax's BotorchModel
Ax's `BotorchModel` internally breaks down the different components of Bayesian Optimization (model generation & fitting, defining acquisition functions, and optimizing them) into a functional api.
Depending on which of these components we want to modify, we can pass in an associated custom factory function to the `BotorchModel` constructor. In order to use a custom model, we have to implement a model factory function that, given data according to Ax's api specification, instantiates and fits a BoTorch Model object.
The call signature of this factory function is the following:
```python
def get_and_fit_gpytorch_model(
Xs: List[Tensor],
Ys: List[Tensor],
Yvars: List[Tensor],
state_dict: Optional[Dict[str, Tensor]] = None,
**kwargs: Any,
) -> Model:
```
where
- the `i`-th element of `Xs` are the training features for the i-th outcome as an `n_i x d` tensor (in our simple example, we only have one outcome)
- similarly, the `i`-th element of `Ys` and `Yvars` are the observations and associated observation variances for the `i`-th outcome as `n_i x 1` tensors
- `state_dict` is an optional PyTorch module state dict that can be used to initialize the model's parameters to pre-specified values
The function must return a botorch `Model` object. What happens inside the function is up to you.
Using botorch's `fit_gpytorch_model` utility function, model-fitting is straightforward for this simple model (you may have to use your own custom model fitting loop when working with more complex models - see the tutorial for [Fitting a model with torch.optim](fit_model_with_torch_optimizer)).
```
from botorch.fit import fit_gpytorch_model
def _get_and_fit_simple_custom_gp(Xs, Ys, **kwargs):
model = SimpleCustomGP(Xs[0], Ys[0])
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_model(mll)
return model
```
### Set up the optimization problem in Ax
Ax's `SimpleExperiment` API requires an evaluation function that is able to compute all the metrics required in the experiment. This function needs to accept a set of parameter values as a dictionary. It should produce a dictionary of metric names to tuples of mean and standard error for those metrics.
For this tutorial, we use the Branin function, a simple synthetic benchmark function in two dimensions. In an actual application, this could be arbitrarily complicated (e.g., this function could run some costly simulation, conduct some A/B tests, or kick off some ML model training job with the given parameters).
```
import random
import numpy as np
def branin(parameterization, *args):
x1, x2 = parameterization["x1"], parameterization["x2"]
y = (x2 - 5.1 / (4 * np.pi ** 2) * x1 ** 2 + 5 * x1 / np.pi - 6) ** 2
y += 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10
# let's add some synthetic observation noise
y += random.normalvariate(0, 0.1)
return {"branin": (y, 0.0)}
```
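Dropping the synthetic noise term, we can sanity-check the implementation against Branin's known global minima, where f(x*) ≈ 0.3979 at (-π, 12.275), (π, 2.275), and (9.42478, 2.475):

```python
import numpy as np

def branin_noiseless(x1, x2):
    # Same expression as the evaluation function above, minus the noise term.
    y = (x2 - 5.1 / (4 * np.pi ** 2) * x1 ** 2 + 5 * x1 / np.pi - 6) ** 2
    y += 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10
    return y

minima = [(-np.pi, 12.275), (np.pi, 2.275), (9.42478, 2.475)]
for x1, x2 in minima:
    print(f"f({x1:.4f}, {x2:.4f}) = {branin_noiseless(x1, x2):.4f}")
```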
We need to define a search space for our experiment that defines the parameters and the set of feasible values.
```
from ax import ParameterType, RangeParameter, SearchSpace
search_space = SearchSpace(
parameters=[
RangeParameter(
name="x1", parameter_type=ParameterType.FLOAT, lower=-5, upper=10
),
RangeParameter(
name="x2", parameter_type=ParameterType.FLOAT, lower=0, upper=15
),
]
)
```
Third, we make a `SimpleExperiment` — note that the `objective_name` needs to be one of the metric names returned by the evaluation function.
```
from ax import SimpleExperiment
exp = SimpleExperiment(
name="test_branin",
search_space=search_space,
evaluation_function=branin,
objective_name="branin",
minimize=True,
)
```
We use the Sobol generator to create 5 (quasi-)random initial points in the search space. Calling `new_batch_trial` will cause Ax to evaluate the underlying `branin` function at the generated points, and automatically keep track of the results.
```
from ax.modelbridge import get_sobol
sobol = get_sobol(exp.search_space)
exp.new_batch_trial(generator_run=sobol.gen(5))
```
To run our custom botorch model inside the Ax optimization loop, we can use the `get_botorch` factory function from `ax.modelbridge.factory`. Any keyword arguments given to this function are passed through to the `BotorchModel` constructor. To use our custom model, we just need to pass our newly minted `_get_and_fit_simple_custom_gp` function to `get_botorch` using the `model_constructor` argument.
**Note:** `get_botorch` by default automatically applies a number of parameter transformations (e.g. to normalize input data or standardize output data). This is typically what you want for standard use cases with continuous parameters. If your model expects raw parameters, make sure to pass in `transforms=[]` to prevent any transformations from taking place. See **TODO: UPDATE LINK** the [Ax documentation](Ax/docs/models.html#transforms) for additional information on how transformations in Ax work.
#### Run the optimization loop
We're ready to run the Bayesian Optimization loop.
```
from ax.modelbridge.factory import get_botorch
for i in range(5):
print(f"Running optimization batch {i+1}/5...")
model = get_botorch(
experiment=exp,
data=exp.eval(),
search_space=exp.search_space,
model_constructor=_get_and_fit_simple_custom_gp,
)
batch = exp.new_trial(generator_run=model.gen(1))
print("Done!")
```
# CLX LODA Anomaly Detection
This is an introduction to CLX LODA Anomaly Detection.
## Introduction
Anomaly detection is an important problem that has been studied across a wide range of areas and application domains. Many anomaly detection algorithms are generic, while many others are developed specifically for a domain of interest. In practice, several ensemble-based anomaly detection algorithms have been shown to have superior performance on many benchmark datasets, such as Isolation Forest, Lightweight Online Detector of Anomalies (LODA), and ensembles of Gaussian mixture models.
The LODA algorithm is one of the good performing generic anomaly detection algorithms. LODA detects anomalies in a dataset by computing the likelihood of data points using an ensemble of one-dimensional histograms.
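The core idea can be sketched in a few lines of NumPy: project the data onto random one-dimensional directions, fit a histogram per projection, and score each point by its average negative log likelihood. This is an illustrative sketch, not the `clx` implementation:

```python
import numpy as np

def loda_sketch(X, n_random_cuts=100, n_bins=10, seed=0):
    """Toy LODA scorer: higher score = more anomalous."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(n)
    for _ in range(n_random_cuts):
        w = rng.standard_normal(d)      # random projection direction
        z = X @ w                       # one-dimensional projection
        hist, edges = np.histogram(z, bins=n_bins, density=True)
        idx = np.clip(np.searchsorted(edges, z, side='right') - 1, 0, n_bins - 1)
        # Accumulate negative log likelihood under the histogram density.
        scores += -np.log(hist[idx] + 1e-12)
    return scores / n_random_cuts

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
X[0] += 10                              # plant one obvious outlier
scores = loda_sketch(X)
print(scores[0] > scores[1:].mean())    # the planted outlier scores highest
```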
## How to train LODA Anomaly Detection model
First initialize your new model
```
from clx.analytics.loda import Loda
# n_bins: Number of bins in each histogram
# n_random_cuts: Number of random cut projections
loda_ad = Loda(n_bins=None, n_random_cuts=100)
```
Next, train your LODA anomaly detector. The example below uses random data with `100 measurements, each with 5 features`, for demonstration only. Ideally you will want a larger training set. For an in-depth example, view this Jupyter [Notebook](https://github.com/rapidsai/clx/blob/main/notebooks/loda_anomaly_detection/LODA_anomaly_detection.ipynb).
```
import cupy as cp
x = cp.random.randn(100,5)
loda_ad.fit(x)
```
## Evaluate detector
```
score = loda_ad.score(x)  # generate negative log likelihood scores
# Scores range between 0 and +inf; here we use negative log likelihood values as the score.
score
```
## Explanation of anomalies
To explain the cause of anomalies, LODA utilizes the contributions of each feature across the histograms.
```
feature_explanation = loda_ad.explain(x[5])
print("Feature importance scores: {}".format(feature_explanation.ravel()))
```
## Conclusion
This example shows a GPU implementation of the LODA algorithm for anomaly detection and explanation. Users can experiment with other datasets, evaluate the model implementation to identify anomalies, and explain the features using RAPIDS.
## Reference
- [Loda: Lightweight on-line detector of anomalies](https://link.springer.com/article/10.1007/s10994-015-5521-0)
- [PyOD: A Python Toolbox for Scalable Outlier Detection](https://www.jmlr.org/papers/volume20/19-011/19-011.pdf)
- [Anomaly Detection in the Presence of Missing Values](https://arxiv.org/pdf/1809.01605.pdf)
- https://archive.ics.uci.edu/ml/datasets/Statlog+%28Shuttle%29
# Lesson 5 Class Exercises: Tidy Data
With these class exercises we learn a few new things. When new knowledge is introduced you'll see the icon shown on the right:
<span style="float:right; margin-left:10px; clear:both;"></span>
## Tidy Summary:
### Rules for Tidy data
+ Each variable forms a unique column in the data frame.
+ Each observation forms a row in the data frame.
+ Each **type** of observational unit needs its own table.
### Spotting messy data
1. Column headers are values, not variable names.
2. Multiple variables are stored in one column.
3. Variables are stored in both rows and columns.
4. Multiple types of observational units are stored in the same table.
5. A single observational unit is stored in multiple tables.
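Rule 1 is the most common problem, and `pd.melt` is its standard fix. Here is a tiny example (with made-up data) where the column headers `2019`/`2020` are really values of a `year` variable:

```python
import pandas as pd

# Messy: the year values are stored as column headers.
messy = pd.DataFrame({'country': ['USA', 'Canada'],
                      '2019': [10, 20],
                      '2020': [30, 40]})

# Melt the year columns into a single (year, value) pair per observation.
tidy = messy.melt(id_vars='country', var_name='year', value_name='value')
print(tidy)
```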
## Get Started
Import the Numpy and Pandas packages
```
import numpy as np
import pandas as pd
```
## Exercise 1: Review of Tidy Practice
### Task 1: Task 3b from the Practice Notebook
Download the [PI_DataSet.txt](https://hivdb.stanford.edu/download/GenoPhenoDatasets/PI_DataSet.txt) file from [HIV Drug Resistance Database](https://hivdb.stanford.edu/pages/genopheno.dataset.html). Store the file in the same directory as the practice notebook for this assignment.
Here is the meaning of data columns:
- SeqID: a numeric identifier for a unique HIV isolate protease sequence. Note: disruption of the protease inhibits HIV’s ability to reproduce.
- The Next 8 columns are identifiers for unique protease inhibitor class drugs.
- The values in these columns are the fold resistance over wild type (the HIV strain susceptible to all drugs).
- Fold change is the ratio of the drug concentration needed to inhibit the isolate to the concentration needed to inhibit the wild type.
- The latter columns, with P as a prefix, are the positions of the amino acids in the protease.
- '-' indicates consensus.
- '.' indicates no sequence.
- '#' indicates an insertion.
    - '~' indicates a deletion.
- '*' indicates a stop codon
- a letter indicates one letter Amino Acid substitution.
- two and more amino acid codes indicates a mixture.
Import this dataset into your notebook, view the top few rows of the data and respond to these questions:
```
hiv = pd.read_csv('https://hivdb.stanford.edu/download/GenoPhenoDatasets/PI_DataSet.txt', sep="\t")
hiv.head()
```
What are the variables?
What are the observations?
What are the values?
What is the observational unit?
What makes this dataset untidy?
### Task 2: Task 3c from the practice notebook
Use the data retrieved from Task 3b to generate a data frame containing a tidied set of values for drug concentration fold change. Be sure to:
- Remove the all columns but the SeqID and the protease inhibitors.
- Tidy the data and set the column names as ‘SeqID’, ‘Drug’ and ‘Fold_change’.
- Order the data frame first by sequence ID and then by Drug name
- Reset the row indexes
- Display the first 10 elements.
```
# Keep only the wanted columns
hiv_reduced = hiv[['SeqID', 'FPV','ATV','IDV','LPV','NFV','SQV','TPV','DRV']]
hiv_reduced.head()
# Melt the data and make sure to set the column names as instructed
hivrm = hiv_reduced.melt(id_vars='SeqID', var_name='Drug', value_name='Fold_Change')
hivrm.head()
# Order the data first by SeqID then by Drug
hivrm = hivrm.sort_values(['SeqID', 'Drug'])
hivrm.head()
# Reset the index and display the first 10 rows
hivrm.reset_index(inplace=True, drop=True)
hivrm.head(10)
```
### Task 3: Tidy everything
In Task 2 above we only tidied up the drug fold change. But, now let's tidy up the rest of the table.
+ The other observable units are the amino acid sequences and the mutation list. Create a separate tidy table for each unit.
+ For the amino acid position variant table, be sure to remove the 'P' from the amino acid position and order the rows by SeqID then by position
```
# There are three ways we can include only the columns we want:
# Method #1
hiv_aa = hiv.loc[:,~hiv.columns.isin(['FPV', 'ATV', 'IDV', 'LPV', 'NFV', 'SQV', 'TPV', 'DRV', 'CompMutList'])]
# Method #2
hiv_aa = hiv.drop(['FPV', 'ATV', 'IDV', 'LPV', 'NFV', 'SQV', 'TPV', 'DRV', 'CompMutList'], axis=1)
# Method #3
hiv_aa = hiv[hiv.columns.difference(['FPV', 'ATV', 'IDV', 'LPV', 'NFV', 'SQV', 'TPV', 'DRV', 'CompMutList'])]
aaseq = hiv.drop(['FPV', 'ATV', 'IDV', 'LPV', 'NFV', 'SQV', 'TPV', 'DRV', 'CompMutList'], axis=1)
aaseq = pd.melt(aaseq, id_vars=['SeqID'],
var_name='Position', value_name = 'AADiff')
aaseq.head()
aaseq['Position'] = aaseq['Position'].str.replace('P','').astype('int')
aaseq.dtypes
aaseq = aaseq.sort_values(['SeqID','Position'])
aaseq.reset_index(drop=True, inplace=True)
aaseq.head()
mutnames = hiv[['SeqID','CompMutList']]
mutnames.head()
```
## Exercise 2: More Tidy Practice
Let's revisit the weather data from the Tidy paper, which contains the daily weather records for five months in 2010 for the MX17004 weather station in Mexico. Each day of the month has its own column (e.g. d1, d2, d3, etc.). The example data only provides the first 8 days. Run the following code to get the data into the notebook:
```python
data = [['MX17004',2010,1,'tmax',None,None,None,None,None,None,None,None],
['MX17004',2010,1,'tmin',None,None,None,None,None,None,None,None],
['MX17004',2010,2,'tmax',None,27.3,24.1,None,None,None,None,None],
['MX17004',2010,2,'tmin',None,14.4,14.4,None,None,None,None,None],
['MX17004',2010,3,'tmax',None,None,None,None,32.1,None,None,None],
['MX17004',2010,3,'tmin',None,None,None,None,14.2,None,None,None],
['MX17004',2010,4,'tmax',None,None,None,None,None,None,None,None],
['MX17004',2010,4,'tmin',None,None,None,None,None,None,None,None],
['MX17004',2010,5,'tmax',None,None,None,None,None,None,None,None],
['MX17004',2010,5,'tmin',None,None,None,None,None,None,None,None]]
headers = ['id','year','month','element','d1','d2','d3','d4','d5','d6','d7','d8']
weather = pd.DataFrame(data, columns=headers)
weather
```
```
data = [['MX17004',2010,1,'tmax',None,None,None,None,None,None,None,None],
['MX17004',2010,1,'tmin',None,None,None,None,None,None,None,None],
['MX17004',2010,2,'tmax',None,27.3,24.1,None,None,None,None,None],
['MX17004',2010,2,'tmin',None,14.4,14.4,None,None,None,None,None],
['MX17004',2010,3,'tmax',None,None,None,None,32.1,None,None,None],
['MX17004',2010,3,'tmin',None,None,None,None,14.2,None,None,None],
['MX17004',2010,4,'tmax',None,None,None,None,None,None,None,None],
['MX17004',2010,4,'tmin',None,None,None,None,None,None,None,None],
['MX17004',2010,5,'tmax',None,None,None,None,None,None,None,None],
['MX17004',2010,5,'tmin',None,None,None,None,None,None,None,None]]
headers = ['id','year','month','element','d1','d2','d3','d4','d5','d6','d7','d8']
weather = pd.DataFrame(data, columns=headers)
weather
```
What makes this dataset untidy?
The solution for how to tidy this data is in the notebook from Lesson 5. However, we're going to try a slightly different approach. It uses the same steps but in a different order.
First melt the data appropriately to get the day as its own column. Name the melted dataframe `weather_melted`. Remove the `d` from the beginning of the day and convert it to an integer. Print the first 5 rows:
```
weather_melted = pd.melt(weather, id_vars=['id', 'year', 'month', 'element'], var_name='day', value_name='temperature')
weather_melted['day'] = weather_melted['day'].str.replace('d', '').astype('int')
weather_melted.head()
```
Now that we have the day melted, next, pivot so that we have two variables tmax and tmin as their own columns. Name the resulting dataframe `weather_pivoted`. Print the top few rows.
```
weather_melted['temperature'] = weather_melted['temperature'].astype('float')
weather_pivoted = pd.pivot_table(weather_melted, index=['id', 'year', 'month', 'day'], columns='element', values=['temperature'])
weather_pivoted.head()
```
Notice that we have multi-level indexing. Reduce this to a typical one-level index using the `reset_index` function.
```
weather_pivoted.reset_index(inplace=True)
weather_pivoted.head()
```
Notice, however, we still have MultiIndexing on the column. We can remove this by simply resetting the column names.
```
weather_pivoted.columns = ['id', 'year', 'month', 'day', 'tmax', 'tmin']
weather_pivoted.head()
```
<span style="float:right; margin-left:10px; clear:both;"></span>
Finally, let's convert the year, month and day to a datetime object. Previously, when we wanted to convert the date in a string to a `datetime` object we used the `pd.to_datetime` function. However, our date is spread across three different columns and is not a string. In the Tidy Data lesson we did this using the `datetime` module, but it was not well explained. Let's look at this more deeply.
The [`datetime` module](https://docs.python.org/3/library/datetime.html) provides a variety of functions for working with dates. The function that will most help us is the `datetime.datetime` function. See [documentation here](https://docs.python.org/3/library/datetime.html#datetime.datetime). We can use this function to create the `datetime` objects that we need. But this is a Python module and not a Pandas module. So, it does not accept a Series. We must therefore use the `apply` function of the Pandas dataframe. Remember that the `apply` function takes the name of a function or a function itself! Review the following code.
```python
import datetime
def create_date(row):
return datetime.datetime(year=row["year"], month=int(row["month"]), day=row["day"])
melted_weather["date"] = melted_weather.apply(lambda row: create_date(row), axis=1)
```
When the `apply` function was first introduced in the [L04-Pandas_Part2.ipynb Lesson](./L04-Pandas_Part2-Practice.ipynb#4.2-Apply) we supplied function names like `print` or `np.sum`. That worked because by default, with `apply`, the function is applied across rows (i.e. down each column). We need to calculate the date, which spans multiple columns. We can provide the `axis=1` argument to `apply`, but we only need 3 columns to form a date, and our melted/pivoted dataframe contains more than just those three date-specific columns.
To solve this challenge, we have to create our own function to give to the `apply` function. In the code above, the `create_date` function provides this functionality. Here, the function receives a Series object we call `row` and inside the function we call the `datetime.datetime` function and pass in the corresponding values from the row that can be used to make the `datetime` object.
```
import datetime
def create_date(row):
return datetime.datetime(year=row["year"], month=int(row["month"]), day=row["day"])
weather_pivoted["date"] = weather_pivoted.apply(lambda row: create_date(row), axis=1)
weather_pivoted.head()
```
## Exercise 3: More Tidy Practice
Consider the following billboard dataset described in the Tidy paper. This dataset contains the weekly rank of songs from the moment they enter the Billboard Top 100 to the subsequent 75 weeks. First load the data. You'll find it in the data directory here: `../data/billboard.csv`. Save the data with the name `billboard`. List the top 10 lines:
```
billboard = pd.read_csv("../data/billboard.csv", encoding="mac_latin2")
billboard.head(10)
```
Do a quick review of the data
+ List the columns.
+ List the data types.
+ Are there missing values? Should we worry about missing values?
+ Are there duplicates? Should we worry about any duplicates?
+ What fields are meant to be categorical? And for those check the categories to make sure there is nothing unexpected there.
```
billboard.columns
billboard.dtypes
billboard.isna().sum()
billboard.duplicated().sum()
billboard['genre'].unique()
```
What makes this data untidy?
Let's tidy this data into a variable named `billboard_tidy`
```
id_vars = ["year",
"artist.inverted",
"track",
"time",
"genre",
"date.entered",
"date.peaked"]
billboard_tidy = pd.melt(frame=billboard,id_vars=id_vars, var_name="week", value_name="rank")
billboard_tidy.head()
```
Perform the following:
1. Remove rows with missing values
2. Convert the week to an actual number
3. Convert the rank column to an integer
```
billboard_tidy = billboard_tidy.dropna()
# convert week to number
billboard_tidy["week"] = billboard_tidy['week'].str.extract(r'(\d+)', expand=False).astype(int)
# convert rank to an integer
billboard_tidy["rank"] = billboard_tidy["rank"].astype(int)
billboard_tidy.head()
```
Next, calculate the actual date for the rank. We have the date entered, we just need to add the number of days (in weeks) to the date entered to get the actual date for the rank. We haven't learned all of the date time functions, but here's some hints:
- `pd.to_timedelta`: converts a value into a time difference, expressed in different units (e.g. days, hours, minutes, seconds), which can be added to a datetime
- `pd.DateOffset`: represents a calendar-aware offset (e.g. `weeks=1`) that can be added to or subtracted from dates
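As a minimal sketch of the same week arithmetic using only the standard library (the date below is illustrative): week *n* on the chart is the entry date plus *n* − 1 weeks.

```python
from datetime import datetime, timedelta

def rank_date(date_entered: str, week: int) -> str:
    """Date of chart week `week`: the entry date plus (week - 1) weeks."""
    entered = datetime.strptime(date_entered, "%Y-%m-%d")
    return (entered + timedelta(weeks=week - 1)).strftime("%Y-%m-%d")

print(rank_date("2000-02-26", 1))  # -> 2000-02-26 (week 1 is the entry week)
print(rank_date("2000-02-26", 3))  # -> 2000-03-11
```

The pandas expression below does the same thing vectorised: add `week` weeks as a timedelta, then subtract one week with `pd.DateOffset`.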
```
# Create "date" columns
billboard_tidy['date'] = pd.to_datetime(billboard_tidy['date.entered']) + pd.to_timedelta(billboard_tidy['week'], unit='w') - pd.DateOffset(weeks=1)
billboard_tidy.head()
```
# Forecasting with an LSTM
## Setup
```
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
keras = tf.keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
def sequential_window_dataset(series, window_size):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=window_size, drop_remainder=True)
ds = ds.flat_map(lambda window: window.batch(window_size + 1))
ds = ds.map(lambda window: (window[:-1], window[1:]))
return ds.batch(1).prefetch(1)
time = np.arange(4 * 365 + 1)
slope = 0.05
baseline = 10
amplitude = 40
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
class ResetStatesCallback(keras.callbacks.Callback):
def on_epoch_begin(self, epoch, logs):
self.model.reset_states()
```
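To make the `sequential_window_dataset` pipeline above concrete, here is a plain-Python sketch of what it yields (ignoring the tensor reshaping and batching): consecutive non-overlapping windows in which the target sequence is the input sequence shifted one step ahead.

```python
def sequential_windows(series, window_size):
    """Mimic sequential_window_dataset: windows of window_size + 1 elements,
    shifted by window_size (so they don't overlap), short remainders dropped,
    each split into (inputs, one-step-ahead targets)."""
    pairs = []
    for start in range(0, len(series) - window_size, window_size):
        window = series[start:start + window_size + 1]
        pairs.append((window[:-1], window[1:]))
    return pairs

pairs = sequential_windows(list(range(10)), window_size=3)
# -> [([0, 1, 2], [1, 2, 3]), ([3, 4, 5], [4, 5, 6]), ([6, 7, 8], [7, 8, 9])]
```

The one-step shift between inputs and targets is what lets the stateful LSTM below learn to predict the next value at every time step, not just at the end of a window.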
## LSTM RNN Forecasting
```
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = sequential_window_dataset(x_train, window_size)
model = keras.models.Sequential([
keras.layers.LSTM(100, return_sequences=True, stateful=True, # a generic RNN with an LSTM cell would also work, but it would be slower
batch_input_shape=[1, None, 1]), # the LSTM layer is optimized with NVIDIA's cuDNN library
keras.layers.LSTM(100, return_sequences=True, stateful=True),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200.0)
])
lr_schedule = keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
reset_states = ResetStatesCallback()
optimizer = keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100,
callbacks=[lr_schedule, reset_states])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 30])
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
window_size = 30
train_set = sequential_window_dataset(x_train, window_size)
valid_set = sequential_window_dataset(x_valid, window_size)
model = keras.models.Sequential([
keras.layers.LSTM(100, return_sequences=True, stateful=True,
batch_input_shape=[1, None, 1]),
keras.layers.LSTM(100, return_sequences=True, stateful=True),
keras.layers.Dense(1),
keras.layers.Lambda(lambda x: x * 200.0)
])
optimizer = keras.optimizers.SGD(lr=5e-7, momentum=0.9)
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
reset_states = ResetStatesCallback()
model_checkpoint = keras.callbacks.ModelCheckpoint(
"my_checkpoint.h5", save_best_only=True)
early_stopping = keras.callbacks.EarlyStopping(patience=50)
model.fit(train_set, epochs=500,
validation_data=valid_set,
callbacks=[early_stopping, model_checkpoint, reset_states])
model = keras.models.load_model("my_checkpoint.h5")
rnn_forecast = model.predict(series[np.newaxis, :, np.newaxis])
rnn_forecast = rnn_forecast[0, split_time - 1:-1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
```
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/ImageCollection/filtering_by_calendar_range.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/ImageCollection/filtering_by_calendar_range.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/ImageCollection/filtering_by_calendar_range.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except ImportError:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Add Earth Engine dataset
roi = ee.Geometry.Point([-99.2182, 46.7824])
# find images acquired during June and July
collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_TOA') \
.filterBounds(roi) \
.filter(ee.Filter.calendarRange(6, 7, 'month')) \
.sort('DATE_ACQUIRED')
print(collection.size().getInfo())
first = collection.first()
propertyNames = first.propertyNames()
print(propertyNames.getInfo())
time_start = ee.Date(first.get('system:time_start')).format("YYYY-MM-dd")
print(time_start.getInfo())
```
## Display Earth Engine data layers
```
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
```
# [WIP] Normalize data
The flow values are much larger than the rainfall values. It is worth trying to normalize both series and then applying the same models to the data.
```
import datetime
import time
import calendar
import json
import numpy as np
import pandas as pd
from sklearn import tree
import matplotlib.pyplot as plt
from matplotlib import rcParams
```
# Load project
```
project_folder = '../../datasets/thorium-medium/'
with open(project_folder + 'project.json', 'r') as file:
project = json.load(file)
print(json.dumps(project, indent=4))
flow = pd.read_csv(project_folder + 'flow1.csv', parse_dates=['time'])
flow = flow.set_index('time')['flow'].fillna(0)
flow = flow.resample('5T').pad()
rainfall = pd.read_csv(project_folder + 'rainfall1.csv', parse_dates=['time'])
rainfall = rainfall.set_index('time')['rainfall'].fillna(0)
rainfall = rainfall.resample('5T').pad()
input_data = pd.concat([flow, rainfall], axis=1).dropna()
input_data = input_data['2015-06-02':'2017-11-09']
print(input_data.head())
print(input_data.tail())
```
## Normalize data
```
input_data['flow_normalized'] = (input_data.flow - np.mean(input_data.flow)) / np.std(input_data.flow)
input_data['rainfall_normalized'] = (input_data.rainfall - np.mean(input_data.rainfall)) / np.std(input_data.rainfall)
input_data.head()
```
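As a quick sanity check of the z-score transform above (note that `np.std` uses the population standard deviation by default), a standardised series should end up with mean 0 and variance 1. A minimal pure-Python sketch with illustrative values:

```python
def zscore(values):
    """Standardise to zero mean and unit variance, mirroring
    (x - mean(x)) / std(x) with the population standard deviation."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

normalized = zscore([2.0, 4.0, 6.0, 8.0])
# mean 5.0, population std ~2.236 -> roughly [-1.342, -0.447, 0.447, 1.342]
```

After this transform the flow and rainfall series live on a comparable scale, which is the whole point of normalizing them before modelling.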
## Feature extractions
```
# Process time
input_data['minutes_of_day'] = input_data.index.map(lambda x: x.hour*60+x.minute)
input_data['hour'] = input_data.index.map(lambda x: x.hour)
# Process rainfall
rolling_rain = input_data.rainfall_normalized.rolling(12).mean()
for i in range(6):
key = 'rain_{}h'.format(i+1)
input_data[key] = rolling_rain.shift(i*288)
input_data = input_data.dropna()
input_x = input_data[['minutes_of_day'] + ['rain_{}h'.format(i+1) for i in range(6)]]
input_y = input_data.flow_normalized
print(input_x.head())
print(input_y.head())
```
## Helper functions
Helper functions for building training and test sets and calculating score
## Calculate score
```
class PredictionModel:
"""Mean model as a reference baseline"""
def fit(self, X, y):
self.mean = np.mean(y)
def predict(self, X):
return np.ones(X.shape[0]) * self.mean
def loss(y_hat, y):
"""
https://en.wikipedia.org/wiki/Mean_absolute_percentage_error
"""
return 100.0 * np.sum(np.abs((y-y_hat) / y)) / y.shape[0]
def split_data(split_day):
"""Get all data up to given day"""
end_day = split_day - pd.Timedelta('1 min')
next_day = split_day + pd.Timedelta(1, 'D')
train_x = input_x[:end_day]
train_y = input_y[:end_day]
test_x = input_x[next_day: next_day+pd.Timedelta('1439 min')]
test_y = input_y[next_day: next_day+pd.Timedelta('1439 min')]
return train_x.values, train_y.values, test_x.values, test_y.values
def evaluate_day(model, split_day):
"""Evaluate data for single day"""
train_x, train_y, test_x, test_y = split_data(split_day)
model.fit(train_x, train_y)
y_hat = model.predict(test_x)
return loss(y_hat, test_y)
def evaluate_model(model, flow, rain, start_day):
    """
    Evaluate the model on all days starting from start_day.
    Returns the 95th percentile error as the model score.
    """
    last_day = flow.index[-1] - pd.Timedelta(1, 'D')
    split_day = start_day
    costs = []
    while split_day < last_day:
        cost = evaluate_day(model, split_day)
        costs.append(cost)
        split_day += pd.Timedelta(1, 'D')
    return np.percentile(costs, 95), costs
score = evaluate_day(PredictionModel(), pd.Timestamp('2017-01-01'))
print('MeanModel score: {:.2f}% (expected: 22.69%)'.format(score))
```
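As a sanity check of the MAPE formula used in `loss` above, here is the same computation in plain Python on a tiny example (the numbers are illustrative):

```python
def mape(y_hat, y):
    """Mean absolute percentage error, the metric implemented by `loss` above."""
    return 100.0 * sum(abs((yi - pi) / yi) for pi, yi in zip(y_hat, y)) / len(y)

print(mape([90.0, 110.0], [100.0, 100.0]))  # -> 10.0 (each prediction is off by 10%)
```

Note that MAPE is undefined when the true value is zero, which is why it is applied to the flow series rather than the often-zero rainfall series.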
## Plot prediction
```
def plot_prediction(model, day, show_rain=True):
rcParams['figure.figsize'] = 12, 8
ts = pd.Timestamp(day)
    pred = model.predict(input_x[ts: ts + pd.Timedelta('1439 min')].values)
if show_rain:
rcParams['figure.figsize'] = 12, 8
fig = plt.figure()
ax = plt.subplot(211)
ax.plot(rainfall[ts: ts+pd.Timedelta('1439 min')].values, label='Rainfall')
plt.title(day)
ax = plt.subplot(212)
ax.plot(pred, label='Prediction')
ax.plot(flow[ts: ts+pd.Timedelta('1439 min')].values, label='Flow')
else:
rcParams['figure.figsize'] = 12, 4
fig = plt.figure()
ax = plt.subplot(111)
plt.title(day)
ax.plot(pred, label='Prediction')
ax.plot(flow[ts: ts+pd.Timedelta('1439 min')].values, label='Flow')
ax.legend()
plt.show()
```
# Models
## Decision Tree Regressor
Our first non-linear model; it should improve on the linear model.
Decision trees handle categorical data well, so we should expect a better-looking prediction here.
```
from sklearn import tree

class DTModel:
    """Decision tree regressor exposing the fit(X, y) / predict(X)
    interface expected by evaluate_day."""
    def __init__(self):
        self.clf = tree.DecisionTreeRegressor()

    def fit(self, X, y):
        self.clf.fit(X, y)

    def predict(self, X):
        return self.clf.predict(X)
start_time = time.time()
dt_model = DTModel()
score, costs = evaluate_model(dt_model, input_data.flow, input_data.rainfall, pd.Timestamp('2017-01-01'))
print('DTModel 2h score: {:.2f}%'.format(score))
print('Feature importance: {}'.format(dt_model.clf.feature_importances_))
print("Calculated in {:.3f} seconds".format(time.time() - start_time))
plot_prediction(dt_model, '2017-04-13', show_rain=False)
plot_prediction(dt_model, '2017-05-01')
```
The **dt_model** was already trained on data that includes 2017-05-01, and it looks like it simply memorises the data
(at least some points).
Let's check what the prediction looks like if we train the model only on historical data.
```
model = DTModel()
evaluate_day(model, pd.Timestamp('2017-05-01'))
plot_prediction(model, '2017-05-01')
# DTModelHour is not defined in this (WIP) notebook yet:
# model = DTModelHour()
# evaluate_day(model, pd.Timestamp('2017-05-01'))
# plot_prediction(model, '2017-05-01')
```
# Coloured wind and geopotential
This is a one-cell notebook example that shows how to plot coloured wind flags and geopotential height from GRIB files using Magics.
The simplest way to colour wind is by intensity, and it is the default method in Magics. We will use the advanced wind plotting method by setting **wind_advanced_method** to **"on"** and then defining colours and levels, similar to setting contours.
You can find the list of all **mwind** parameters [in the Magics documentation](https://confluence.ecmwf.int/display/MAGP/Wind+Plotting "Wind parameters").
### Installing Magics
If you don't have Magics installed, run the next cell to install Magics using conda.
```
# Install Magics in the current Jupyter kernel
import sys
!conda install --yes --prefix {sys.prefix} Magics
import Magics.macro as magics
# Setting the projection
europe = magics.mmap(
subpage_clipping = "on",
subpage_lower_left_latitude = 21.51,
subpage_lower_left_longitude = -37.27,
subpage_upper_right_latitude = 51.28,
subpage_upper_right_longitude = 65.,
subpage_map_projection = "polar_stereographic",
page_id_line = "off")
# Defining the coastlines
coast = magics.mcoast(
map_coastline_resolution = "high",
map_coastline_colour = "tan",
map_coastline_land_shade = "on",
map_coastline_land_shade_colour = "cream",
map_grid = "on",
map_grid_line_style = "dot",
map_grid_colour = "tan"
)
# Load the grib data
wind_from_grib = magics.mgrib(
grib_input_file_name = '../../data/ghtuv.grib',
grib_wind_position_1 = 3,
grib_wind_position_2 = 4)
gh = magics.mgrib(
grib_input_file_name = '../../data/ghtuv.grib',
grib_field_position = 1)
# Defining Wind flags
coloured_flags = magics.mwind(
legend = 'off',
wind_field_type = 'flags',
wind_flag_origin_marker = 'off',
wind_flag_length = 0.6,
wind_thinning_factor = 2.,
wind_advanced_method = 'on',
wind_advanced_colour_selection_type = 'interval',
wind_advanced_colour_level_interval = 10.0,
wind_advanced_colour_reference_level = 20.0,
wind_advanced_colour_max_value = 140.0,
wind_advanced_colour_min_value = 0.0,
wind_advanced_colour_table_colour_method = 'calculate',
wind_advanced_colour_direction = 'anti_clockwise',
wind_advanced_colour_min_level_colour = 'sky',
wind_advanced_colour_max_level_colour = 'burgundy')
#Defining the contour for geopotential height
gh_cont = magics.mcont(contour_automatic_setting = "ecmwf")
# Plotting
magics.plot(europe, coast, wind_from_grib, coloured_flags, gh, gh_cont)
```
<center>
<img src="../../img/ods_stickers.jpg">
## Open Machine Learning Course. Session #2
</center>
Authors of the material: Yury Isakov and Yury Kashnitsky. The material is distributed under the [Creative Commons CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. It may be used for any purpose (edited, corrected, and used as a basis) except commercial ones, with mandatory attribution of the authors.
# <center>Topic 4. Linear models for classification and regression
## <center>Practice. User identification with logistic regression
Here we will reproduce a couple of benchmarks from our competition and get inspired to beat the third benchmark, as well as the other participants. There is no web form for submitting answers here; the reference point is the competition [leaderboard](https://www.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2/leaderboard).
```
import pickle
import numpy as np
import pandas as pd
from tqdm import tqdm_notebook
from scipy.sparse import csr_matrix, hstack
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
```
### 1. Loading and transforming the data
Register on [Kaggle](www.kaggle.com) if you haven't already, go to the competition [page](https://inclass.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2) and download the data. First, let's load the training and test sets and take a look at the data.
```
# load the training and test sets
train_df = pd.read_csv('../../data/train_sessions.csv',
index_col='session_id')
test_df = pd.read_csv('../../data/test_sessions.csv',
index_col='session_id')
# convert the time1, ..., time10 columns to datetime format
times = ['time%s' % i for i in range(1, 11)]
train_df[times] = train_df[times].apply(pd.to_datetime)
test_df[times] = test_df[times].apply(pd.to_datetime)
# sort the data by time
train_df = train_df.sort_values(by='time1')
# look at the first rows of the training set
train_df.head()
```
The training set contains the following features:
- site1 – index of the first site visited in the session
- time1 – time of the visit to the first site in the session
- ...
- site10 – index of the 10th site visited in the session
- time10 – time of the visit to the 10th site in the session
- target – the target variable: 1 for Alice's sessions, 0 for other users' sessions
User sessions are extracted so that they cannot be longer than half an hour or 10 sites. That is, a session is considered over either once the user has visited 10 sites in a row or once the session has lasted more than 30 minutes.
There are missing values in the table, which means a session consists of fewer than 10 sites. Let's replace the missing values with zeros and cast the features to integer type. Let's also load the site dictionary and see what it looks like:
```
# cast the site1, ..., site10 columns to integer and replace missing values with zeros
sites = ['site%s' % i for i in range(1, 11)]
train_df[sites] = train_df[sites].fillna(0).astype('int')
test_df[sites] = test_df[sites].fillna(0).astype('int')
# load the site dictionary
with open(r"../../data/site_dic.pkl", "rb") as input_file:
site_dict = pickle.load(input_file)
# dataframe of the site dictionary
sites_dict_df = pd.DataFrame(list(site_dict.keys()),
index=list(site_dict.values()),
columns=['site'])
print('total number of sites:', sites_dict_df.shape[0])
sites_dict_df.head()
```
Let's extract the target variable and concatenate the sets so that we can convert them to a sparse format together.
```
# our target variable
y_train = train_df['target']
# combined table of the original data
full_df = pd.concat([train_df.drop('target', axis=1), test_df])
# index that separates the training set from the test set
idx_split = train_df.shape[0]
```
For the very first model we will use only the sites visited in a session (ignoring the time features). The idea behind this choice of data is: *Alice has her favourite sites, and the more often we see those sites in a session, the more likely it is Alice's session, and vice versa.*
Let's prepare the data: from the whole table, select only the `site1, site2, ... , site10` features. Recall that missing values have been replaced with zero. Here is what the first rows of the table look like:
```
# table with indices of the sites visited in each session
full_sites = full_df[sites]
full_sites.head()
full_sites.shape
```
Sessions are sequences of site indices, and data in this form is inconvenient for linear methods. In line with our hypothesis (Alice has favourite sites), we need to transform this table so that each possible site has its own feature (column) whose value equals the number of visits to that site within the session. This takes two lines:
```
from scipy.sparse import csr_matrix
csr_matrix?
# flattened sequence of site indices
sites_flatten = full_sites.values.flatten()
# the matrix we are after
full_sites_sparse = csr_matrix(([1] * sites_flatten.shape[0],
sites_flatten,
range(0, sites_flatten.shape[0] + 10, 10)))[:, 1:]
full_sites_sparse.shape
X_train_sparse = full_sites_sparse[:idx_split]
X_test_sparse = full_sites_sparse[idx_split:]
X_train_sparse.shape, y_train.shape
X_test_sparse.shape
```
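To see why the two-line `csr_matrix` trick above produces visit counts, here is a pure-Python sketch of how a CSR triplet `(data, indices, indptr)` expands to a dense matrix: duplicate column indices within a row are summed, so repeated site ids become counts (the toy session data below is illustrative):

```python
def csr_to_dense(data, indices, indptr, n_cols):
    """Expand a CSR triplet into a dense matrix; within each row,
    entries that share a column index are summed together."""
    dense = []
    for row in range(len(indptr) - 1):
        counts = [0] * n_cols
        for k in range(indptr[row], indptr[row + 1]):
            counts[indices[k]] += data[k]
        dense.append(counts)
    return dense

# Two toy "sessions" of 3 visits each: site ids (1, 2, 2) and (3, 3, 0), 0 = padding
flat = [1, 2, 2, 3, 3, 0]
dense = csr_to_dense([1] * 6, flat, [0, 3, 6], n_cols=4)
# -> [[0, 1, 2, 0], [1, 0, 0, 2]]; dropping column 0 removes the padding counts
```

This is exactly what the `[:, 1:]` slice in the cell above does: it discards the column for site id 0, i.e. the padding.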
Another advantage of using sparse matrices is that there are specialised implementations of both matrix operations and machine learning algorithms for them, which can speed things up considerably thanks to the structure of the data. This applies to logistic regression as well. Now everything is ready to build our first model.
### 2. Building the first model
So, we have an algorithm and data for it. Let's build our first model using the [logistic regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) implementation from the `sklearn` package with default parameters. We will use the first 90% of the data for training (the training set is sorted by time) and the remaining 10% for validation.
**Write a simple function that returns the model's quality on the hold-out set, and train our first classifier.**
```
def get_auc_lr_valid(X, y, C=1.0, ratio = 0.9, seed=17):
'''
X, y – the dataset
ratio – proportion of the data used for training
C, seed – regularization coefficient and random_state
of the logistic regression
'''
train_len = int(ratio * X.shape[0])
X_train = X[:train_len, :]
X_valid = X[train_len:, :]
y_train = y[:train_len]
y_valid = y[train_len:]
logit = LogisticRegression(C=C, n_jobs=-1, random_state=seed)
logit.fit(X_train, y_train)
pred = logit.predict_proba(X_valid)[:, 1]
return roc_auc_score(y_valid, pred)
```
**Check what ROC AUC you get on the hold-out set.**
```
%%time
get_auc_lr_valid(X_train_sparse, y_train)
```
We will treat this model as our first baseline. To build a model for predictions on the test set, **we need to retrain the model on the entire training set** (so far our model has only been trained on part of the data), which will improve its generalisation ability:
```
# function for writing predictions to a file
def write_to_submission_file(predicted_labels, out_file,
target='target', index_label="session_id"):
predicted_df = pd.DataFrame(predicted_labels,
index = np.arange(1, predicted_labels.shape[0] + 1),
columns=[target])
predicted_df.to_csv(out_file, index_label=index_label)
```
**Train the model on the whole training set, make predictions for the test set and submit them to the competition.**
```
%%time
logit = LogisticRegression(n_jobs=-1, random_state=17)
logit.fit(X_train_sparse, y_train)
test_pred = logit.predict_proba(X_test_sparse)[:, 1]
test_pred.shape
pd.Series(test_pred, index=range(1, test_pred.shape[0] + 1), name='target')\
.to_csv('benchmark1.csv', header=True, index_label='session_id')
!head benchmark1.csv
```
If you carry out these steps and upload the answer on the competition [page](https://inclass.kaggle.com/c/catch-me-if-you-can-intruder-detection-through-webpage-session-tracking2), you will reproduce the first benchmark, "Logit".
### 3. Improving the model: building new features
Create a feature of the form YYYYMM from the date of the session, e.g. 201407 means year 2014, month 7. This way we account for a monthly [linear trend](http://people.duke.edu/~rnau/411trend.htm) over the whole period of the provided data.
```
new_feat_train = pd.DataFrame(index=train_df.index)
new_feat_test = pd.DataFrame(index=test_df.index)
new_feat_train['year_month'] = train_df['time1'].apply(lambda ts: 100 * ts.year + ts.month)
new_feat_test['year_month'] = test_df['time1'].apply(lambda ts: 100 * ts.year + ts.month)
new_feat_train.head()
```
Add the new feature, scaling it beforehand with `StandardScaler`, and compute the ROC AUC on the hold-out set again.
```
scaler = StandardScaler()
scaler.fit(new_feat_train['year_month'].values.reshape(-1, 1))
new_feat_train['year_month_scaled'] = scaler.transform(new_feat_train['year_month'].values.reshape(-1, 1))
new_feat_test['year_month_scaled'] = scaler.transform(new_feat_test['year_month'].values.reshape(-1, 1))
new_feat_train.head()
X_train_sparse_new = csr_matrix(hstack([X_train_sparse,
new_feat_train['year_month_scaled'].values.reshape(-1, 1)]))
X_test_sparse_new = csr_matrix(hstack([X_test_sparse,
new_feat_test['year_month_scaled'].values.reshape(-1,1)]))
X_train_sparse.shape, X_train_sparse_new.shape
get_auc_lr_valid(X_train_sparse_new, y_train)
```
**Add two new features: start_hour and morning.**
The `start_hour` feature is the hour at which the session started (from 0 to 23), and the binary `morning` feature equals 1 if the session started in the morning and 0 if it started later (we consider it morning if `start_hour` is 11 or less).
**Compute the ROC AUC on the hold-out set for the data with:**
- sites, `start_month` and `start_hour`
- sites, `start_month` and `morning`
- sites, `start_month`, `start_hour` and `morning`
```
new_feat_train['start_hour'] = train_df['time1'].apply(lambda ts: ts.hour)
new_feat_test['start_hour'] = test_df['time1'].apply(lambda ts: ts.hour)
new_feat_train['morning'] = new_feat_train['start_hour'].apply(lambda sh: 1 if sh <= 11 else 0)
new_feat_test['morning'] = new_feat_test['start_hour'].apply(lambda sh: 1 if sh <= 11 else 0)
new_feat_train.head()
X_train_sparse_new2 = csr_matrix(hstack([X_train_sparse_new, new_feat_train['start_hour'].values.reshape(-1, 1)]))
X_train_sparse_new2 = csr_matrix(hstack([X_train_sparse_new2, new_feat_train['morning'].values.reshape(-1, 1)]))
X_test_sparse_new2 = csr_matrix(hstack([X_test_sparse_new, new_feat_test['start_hour'].values.reshape(-1, 1)]))
X_test_sparse_new2 = csr_matrix(hstack([X_test_sparse_new2, new_feat_test['morning'].values.reshape(-1, 1)]))
X_train_sparse_new.shape, X_train_sparse_new2.shape
%%time
get_auc_lr_valid(X_train_sparse_new2, y_train)
```
### 4. Tuning the regularization coefficient
So, we have introduced features that improve the quality of our model over the first baseline. Can we achieve an even better metric value? Once the training and test sets have been formed, it almost always makes sense to tune the hyperparameters — characteristics of the model that do not change during training. For example, in week 3 you studied decision trees: the depth of a tree is a hyperparameter, while the feature on which a split happens and its threshold are not. In the logistic regression we are using, the weight of each feature changes during training until its optimal value is found, while the regularization coefficient stays constant. This is the hyperparameter we will now optimise.
Compute the quality on the hold-out set with the default regularization coefficient `C=1`:
```
get_auc_lr_valid(X_train_sparse_new2, y_train, C=1)
```
Let's try to beat this result by optimising the regularization coefficient. Take a set of possible values of C and compute the metric on the hold-out set for each of them.
Find the `C` from `np.logspace(-3, 1, 10)` that maximises ROC AUC on the hold-out set.
```
scores = [(C, get_auc_lr_valid(X_train_sparse_new2, y_train, C=C)) for C in np.logspace(-3, 1, 10)]
scores
maxC = max(scores, key=lambda item: item[1])
print('Max C = ', maxC)
```
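The manual loop above can also be expressed with scikit-learn's `GridSearchCV`. The sketch below is an illustration under stated assumptions: it uses a synthetic toy dataset (`X_toy`, `y_toy`) as a stand-in for `X_train_sparse_new2` and `y_train`, and plain 3-fold CV rather than the hold-out split used in this notebook:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Toy stand-in data; in this notebook you would pass X_train_sparse_new2, y_train
X_toy, y_toy = make_classification(n_samples=200, random_state=17)

# Scan the same grid of C values, scoring by ROC AUC
logit = LogisticRegression(random_state=17, solver='liblinear')
grid = GridSearchCV(logit, {'C': np.logspace(-3, 1, 10)},
                    scoring='roc_auc', cv=3)
grid.fit(X_toy, y_toy)
print(grid.best_params_['C'], grid.best_score_)
```

`grid.best_params_` then plays the role of `maxC` above.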
Finally, train the model with the optimal regularization coefficient you found and with the engineered features `start_hour`, `start_month`, and `morning`. If you did everything correctly and submit this solution, you will reproduce the second benchmark of the competition.
```
%%time
logit = LogisticRegression(n_jobs=-1, random_state=17, C=maxC[0])
logit.fit(X_train_sparse_new2, y_train)
test_pred = logit.predict_proba(X_test_sparse_new2)[:, 1]
test_pred.shape
write_to_submission_file(test_pred, 'benchmark2.csv')
!head benchmark2.csv
```
## Model Output and Metric Research
```
import keras
import tensorflow as tf
import keras.backend as K
import numpy as np
import cv2
import matplotlib.pyplot as plt
img = cv2.imread("./dataset/selfie/training/00694.png", cv2.IMREAD_COLOR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (256,256))
img = np.concatenate([img, np.ones((256,256,1))], axis=-1)[np.newaxis,:,:,:]
img = img / 255.
mask = cv2.imread("./dataset/selfie/training/00694_matte.png", cv2.IMREAD_GRAYSCALE)
mask = cv2.resize(mask, (256,256))
mask = mask /255.
plt.figure(figsize=(19,7))
plt.subplot(121)
plt.imshow(img.squeeze(0)[:,:,:3])
plt.axis("off")
plt.subplot(122)
plt.imshow(mask)
plt.axis("off")
plt.tight_layout()
plt.show()
```
## Thresholded mask
```
plt.figure(figsize=(19,7))
plt.subplot(121)
plt.imshow(img.squeeze(0)[:,:,:3])
plt.axis("off")
plt.subplot(122)
plt.imshow(cv2.threshold(mask, 150, 255, cv2.THRESH_BINARY)[1])
plt.axis("off")
mask_ = tf.constant(cv2.threshold(mask, 20, 255, cv2.THRESH_BINARY)[1]/ 255.)
pred_ = tf.constant(cv2.threshold(mask, 10, 255, cv2.THRESH_BINARY)[1] / 255.)
mask_ = tf.reshape(mask_, (1, 256, 256, 1))
pred_ = tf.reshape(pred_, (1, 256, 256, 1))
def iou_coef(y_true, y_pred, smooth=1):
y_true = tf.cast(y_true, dtype=tf.float32)
y_pred = tf.cast(y_pred, dtype=tf.float32)
threshold = tf.constant(0.5, dtype=tf.float32)
y_true = tf.cast(y_true > threshold, dtype=tf.float32)
y_pred = tf.cast(y_pred > threshold, dtype=tf.float32)
intersection = K.sum(K.abs(y_true * y_pred), axis=[1,2,3])
union = K.sum(y_true,[1,2,3])+K.sum(y_pred,[1,2,3])-intersection
iou = K.mean((intersection + smooth) / (union + smooth), axis=0)
return iou
def dice_loss(y_true, y_pred):
y_true = tf.cast(y_true, dtype=tf.float32)
y_pred = tf.cast(y_pred, dtype=tf.float32)
numerator = 2 * tf.reduce_sum(y_true * y_pred, axis=(1,2,3))
denominator = tf.reduce_sum(y_true + y_pred, axis=(1,2,3))
return 1 - numerator / denominator
def focal_loss(alpha=0.25, gamma=2):
def focal_loss_with_logits(logits, targets, alpha, gamma, y_pred):
weight_a = alpha * (1 - y_pred) ** gamma * targets
weight_b = (1 - alpha) * y_pred ** gamma * (1 - targets)
return (tf.log1p(tf.exp(-tf.abs(logits))) + tf.nn.relu(-logits)) * (weight_a + weight_b) + logits * weight_b
def loss(y_true, y_pred):
y_true = tf.cast(y_true, dtype=tf.float32)
y_pred = tf.cast(y_pred, dtype=tf.float32)
y_pred = tf.clip_by_value(y_pred, tf.keras.backend.epsilon(), 1 - tf.keras.backend.epsilon())
logits = tf.log(y_pred / (1 - y_pred))
loss = focal_loss_with_logits(logits=logits, targets=y_true, alpha=alpha, gamma=gamma, y_pred=y_pred)
# or reduce_sum and/or axis=-1
return tf.reduce_mean(loss)
return loss
def ce_dl_combined_loss(y_true, y_pred):
y_true = tf.cast(y_true, dtype=tf.float32)
y_pred = tf.cast(y_pred, dtype=tf.float32)
def dice_loss(y_true, y_pred):
numerator = 2 * tf.reduce_sum(y_true * y_pred, axis=(1,2,3))
denominator = tf.reduce_sum(y_true + y_pred, axis=(1,2,3))
return tf.reshape(1 - numerator / denominator, (-1, 1, 1))
return tf.reduce_mean(keras.losses.binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred))
def ce_dice_focal_combined_loss(y_true, y_pred):
y_true = tf.cast(y_true, dtype=tf.float32)
y_pred = tf.cast(y_pred, dtype=tf.float32)
def dice_loss(y_true, y_pred):
numerator = 2 * tf.reduce_sum(y_true * y_pred, axis=(1,2,3))
denominator = tf.reduce_sum(y_true + y_pred, axis=(1,2,3))
return tf.reshape(1 - numerator / denominator, (-1, 1, 1))
focal = focal_loss()(y_true, y_pred)
dl_ce = tf.reduce_mean(keras.losses.binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred))
return tf.add(tf.multiply(focal, 0.5), tf.multiply(dl_ce, 0.5))
iou = iou_coef(mask_, pred_)
dice = dice_loss(mask_, pred_)
focal = focal_loss()(mask_, pred_)
com = ce_dl_combined_loss(mask_, pred_)
full_com = ce_dice_focal_combined_loss(mask_, pred_)
with tf.Session() as sess:
dice_ = sess.run(dice)
focal_ = sess.run(focal)
iou_ = sess.run([iou,])
com_ = sess.run(com)
full_com_ = sess.run(full_com)
```
### These are the proper values of the losses and metrics
```
dice_, focal_, iou_, com_, full_com_
(com_ + focal_) / 2
full_com_
```
## Credit Card Kaggle- Handle Imbalanced Dataset
### Context
It is important that credit card companies are able to recognize fraudulent credit card transactions so that customers are not charged for items that they did not purchase.
### Content
The dataset contains transactions made by credit cards in September 2013 by European cardholders. It presents transactions that occurred over two days, with 492 frauds out of 284,807 transactions. The dataset is highly imbalanced: the positive class (frauds) accounts for 0.172% of all transactions.
It contains only numerical input variables which are the result of a PCA transformation. Unfortunately, due to confidentiality issues, we cannot provide the original features or more background information about the data. Features V1, V2, ... V28 are the principal components obtained with PCA; the only features not transformed with PCA are 'Time' and 'Amount'. Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. Feature 'Amount' is the transaction amount, which can be used for example-dependent cost-sensitive learning. Feature 'Class' is the response variable and takes value 1 in case of fraud and 0 otherwise.
### Inspiration
Identify fraudulent credit card transactions.
Given the class imbalance ratio, we recommend measuring performance with the Area Under the Precision-Recall Curve (AUPRC); accuracy read off a confusion matrix is not meaningful for imbalanced classification.
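As a minimal illustration (using scikit-learn, with hypothetical toy labels and scores), average precision summarizes the precision-recall curve in a single AUPRC-style number:

```python
import numpy as np
from sklearn.metrics import average_precision_score

# Hypothetical imbalanced labels (2 frauds out of 10) and predicted fraud scores
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_score = np.array([0.1, 0.2, 0.1, 0.3, 0.2, 0.1, 0.4, 0.2, 0.9, 0.35])

# Average precision: sum of precision values weighted by recall increments
auprc = average_precision_score(y_true, y_score)
print(auprc)  # ≈ 0.833
```

Note that, unlike plain accuracy, this score drops sharply when high-scoring negatives crowd out the rare positives.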
### Acknowledgements
The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available on https://www.researchgate.net/project/Fraud-detection-5 and the page of the DefeatFraud project.
```
import numpy as np
import pandas as pd
import sklearn
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import classification_report,accuracy_score
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM
from pylab import rcParams
rcParams['figure.figsize'] = 14, 8
RANDOM_SEED = 42
LABELS = ["Normal", "Fraud"]
data = pd.read_csv('creditcard.csv',sep=',')
data.head()
data.info()
#Create independent and Dependent Features
columns = data.columns.tolist()
# Filter the columns to remove data we do not want
columns = [c for c in columns if c not in ["Class"]]
# Store the variable we are predicting
target = "Class"
# Define a random state
state = np.random.RandomState(42)
X = data[columns]
Y = data[target]
X_outliers = state.uniform(low=0, high=1, size=(X.shape[0], X.shape[1]))
# Print the shapes of X & Y
print(X.shape)
print(Y.shape)
```
## Exploratory Data Analysis
```
data.isnull().values.any()
count_classes = pd.value_counts(data['Class'], sort = True)
count_classes.plot(kind = 'bar', rot=0)
plt.title("Transaction Class Distribution")
plt.xticks(range(2), LABELS)
plt.xlabel("Class")
plt.ylabel("Frequency")
## Get the Fraud and the normal dataset
fraud = data[data['Class']==1]
normal = data[data['Class']==0]
print(fraud.shape,normal.shape)
from imblearn.under_sampling import NearMiss
# Implementing Undersampling for Handling Imbalanced
nm = NearMiss()  # NearMiss is deterministic; newer imblearn versions no longer accept random_state
X_res, y_res = nm.fit_resample(X, Y)  # called fit_sample() in imblearn versions before 0.4
X_res.shape,y_res.shape
from collections import Counter
print('Original dataset shape {}'.format(Counter(Y)))
print('Resampled dataset shape {}'.format(Counter(y_res)))
```
For greyscale image data where pixel values can be interpreted as degrees of blackness on a white background, like handwritten digit recognition, the Bernoulli Restricted Boltzmann machine model (BernoulliRBM) can perform effective non-linear feature extraction.
In order to learn good latent representations from a small dataset, we artificially generate more labeled data by perturbing the training data with linear shifts of 1 pixel in each direction.
This example shows how to build a classification pipeline with a BernoulliRBM feature extractor and a LogisticRegression classifier. The hyperparameters of the entire model (learning rate, hidden layer size, regularization) were optimized by grid search, but the search is not reproduced here because of runtime constraints.
Logistic regression on raw pixel values is presented for comparison. The example shows that the features extracted by the BernoulliRBM help improve the classification accuracy.
#### New to Plotly?
Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).
<br>You can set up Plotly to work in [online](https://plot.ly/python/getting-started/#initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/#initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/#start-plotting-online).
<br>We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started!
### Version
```
import sklearn
sklearn.__version__
```
### Imports
This tutorial imports [train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html#sklearn.model_selection.train_test_split), [BernoulliRBM](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.BernoulliRBM.html#sklearn.neural_network.BernoulliRBM) and [Pipeline](http://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html#sklearn.pipeline.Pipeline).
```
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import convolve
from sklearn import linear_model, datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
```
### Setting up
```
def nudge_dataset(X, Y):
"""
This produces a dataset 5 times bigger than the original one,
by moving the 8x8 images in X around by 1px to left, right, down, up
"""
direction_vectors = [
[[0, 1, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[1, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 1],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 1, 0]]]
shift = lambda x, w: convolve(x.reshape((8, 8)), mode='constant',
weights=w).ravel()
X = np.concatenate([X] +
[np.apply_along_axis(shift, 1, X, vector)
for vector in direction_vectors])
Y = np.concatenate([Y for _ in range(5)], axis=0)
return X, Y
# Load Data
digits = datasets.load_digits()
X = np.asarray(digits.data, 'float32')
X, Y = nudge_dataset(X, digits.target)
X = (X - np.min(X, 0)) / (np.max(X, 0) + 0.0001) # 0-1 scaling
X_train, X_test, Y_train, Y_test = train_test_split(X, Y,
test_size=0.2,
random_state=0)
# Models we will use
logistic = linear_model.LogisticRegression()
rbm = BernoulliRBM(random_state=0, verbose=True)
classifier = Pipeline(steps=[('rbm', rbm), ('logistic', logistic)])
```
### Training
```
# Hyper-parameters. These were set by cross-validation,
# using a GridSearchCV. Here we are not performing cross-validation to
# save time.
rbm.learning_rate = 0.06
rbm.n_iter = 20
# More components tend to give better prediction performance, but larger
# fitting time
rbm.n_components = 100
logistic.C = 6000.0
# Training RBM-Logistic Pipeline
classifier.fit(X_train, Y_train)
# Training Logistic regression
logistic_classifier = linear_model.LogisticRegression(C=100.0)
logistic_classifier.fit(X_train, Y_train)
```
### Evaluation
```
print()
print("Logistic regression using RBM features:\n%s\n" % (
metrics.classification_report(
Y_test,
classifier.predict(X_test))))
print("Logistic regression using raw pixel features:\n%s\n" % (
metrics.classification_report(
Y_test,
logistic_classifier.predict(X_test))))
```
### Plot Results
```
fig = tools.make_subplots(rows=10, cols=10,
print_grid=False)
def matplotlib_to_plotly(cmap, pl_entries):
h = 1.0/(pl_entries-1)
pl_colorscale = []
for k in range(pl_entries):
        C = list(map(np.uint8, np.array(cmap(k*h)[:3])*255))  # list() needed under Python 3, where map is lazy
        pl_colorscale.append([k*h, 'rgb'+str((C[0], C[1], C[2]))])
return pl_colorscale
cmap = matplotlib_to_plotly(plt.cm.gray, 5)
for i, comp in enumerate(rbm.components_):
trace = go.Heatmap(z=comp.reshape((8, 8)),
colorscale=cmap,
showscale=False)
    fig.append_trace(trace, i//10+1, i%10+1)  # integer division so row/col indices stay ints under Python 3
for i in map(str,range(1, 101)):
y = 'yaxis'+ i
x = 'xaxis' + i
fig['layout'][y].update(autorange='reversed',
showticklabels=False, ticks='')
fig['layout'][x].update(showticklabels=False, ticks='')
fig['layout'].update(height=1000,
title='100 components extracted by RBM')
py.iplot(fig)
```
### License
Authors:
Yann N. Dauphin, Vlad Niculae, Gabriel Synnaeve
License:
BSD
```
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Restricted Boltzmann Machine Features for Digit Classification.ipynb', 'scikit-learn/plot-rbm-logistic-classification/', 'Restricted Boltzmann Machine Features for Digit Classification | plotly',
' ',
title = 'Restricted Boltzmann Machine Features for Digit Classification | plotly',
name = 'Restricted Boltzmann Machine Features for Digit Classification ',
has_thumbnail='true', thumbnail='thumbnail/rbm.jpg',
language='scikit-learn', page_type='example_index',
display_as='neural_networks', order=2,
ipynb= '~Diksha_Gabha/3493')
```
##### Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TFP Probabilistic Layers: Regression
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/master/tensorflow_probability/examples/jupyter_notebooks/Probabilistic_Layers_Regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this example we show how to fit regression models using TFP's "probabilistic layers."
### Dependencies & Prerequisites
```
#@title Install { display-mode: "form" }
TF_Installation = 'TF2 Nightly (GPU)' #@param ['TF2 Nightly (GPU)', 'TF2 Stable (GPU)', 'TF1 Nightly (GPU)', 'TF1 Stable (GPU)','System']
if TF_Installation == 'TF2 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu-2.0-preview
print('Installation of `tf-nightly-gpu-2.0-preview` complete.')
elif TF_Installation == 'TF2 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu==2.0.0-alpha0
print('Installation of `tensorflow-gpu==2.0.0-alpha0` complete.')
elif TF_Installation == 'TF1 Nightly (GPU)':
!pip install -q --upgrade tf-nightly-gpu
print('Installation of `tf-nightly-gpu` complete.')
elif TF_Installation == 'TF1 Stable (GPU)':
!pip install -q --upgrade tensorflow-gpu
print('Installation of `tensorflow-gpu` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "Nightly" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
#@title Import { display-mode: "form" }
from pprint import pprint
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
sns.reset_defaults()
#sns.set_style('whitegrid')
#sns.set_context('talk')
sns.set_context(context='talk',font_scale=0.7)
%matplotlib inline
tfd = tfp.distributions
```
### Make things Fast!
Before we dive in, let's make sure we're using a GPU for this demo.
To do this, select "Runtime" -> "Change runtime type" -> "Hardware accelerator" -> "GPU".
The following snippet will verify that we have access to a GPU.
```
if tf.test.gpu_device_name() != '/device:GPU:0':
print('WARNING: GPU device not found.')
else:
print('SUCCESS: Found GPU: {}'.format(tf.test.gpu_device_name()))
```
Note: if for some reason you cannot access a GPU, this colab will still work. (Training will just take longer.)
## Motivation
Wouldn't it be great if we could use TFP to specify a probabilistic model then simply minimize the negative log-likelihood, i.e.,
```
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
```
Well not only is it possible, but this colab shows how! (In context of linear regression problems.)
```
#@title Synthesize dataset.
w0 = 0.125
b0 = 5.
x_range = [-20, 60]
def load_dataset(n=150, n_tst=150):
np.random.seed(43)
def s(x):
g = (x - x_range[0]) / (x_range[1] - x_range[0])
return 3 * (0.25 + g**2.)
x = (x_range[1] - x_range[0]) * np.random.rand(n) + x_range[0]
eps = np.random.randn(n) * s(x)
y = (w0 * x * (1. + np.sin(x)) + b0) + eps
x = x[..., np.newaxis]
x_tst = np.linspace(*x_range, num=n_tst).astype(np.float32)
x_tst = x_tst[..., np.newaxis]
return y, x, x_tst
y, x, x_tst = load_dataset()
```
### Case 1: No Uncertainty
```
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 1: No uncertainty.
w = np.squeeze(model.layers[-2].kernel.numpy())
b = np.squeeze(model.layers[-2].bias.numpy())
plt.figure(figsize=[6, 1.5]) # inches
#plt.figure(figsize=[8, 5]) # inches
plt.plot(x, y, 'b.', label='observed');
plt.plot(x_tst, yhat.mean(),'r', label='mean', linewidth=4);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig1.png', bbox_inches='tight', dpi=300)
```
### Case 2: Aleatoric Uncertainty
```
# Build model.
model = tf.keras.Sequential([
tf.keras.layers.Dense(1 + 1),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.05 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 2: Aleatoric Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
m = yhat.mean()
s = yhat.stddev()
plt.plot(x_tst, m, 'r', linewidth=4, label='mean');
plt.plot(x_tst, m + 2 * s, 'g', linewidth=2, label=r'mean + 2 stddev');
plt.plot(x_tst, m - 2 * s, 'g', linewidth=2, label=r'mean - 2 stddev');
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig2.png', bbox_inches='tight', dpi=300)
```
### Case 3: Epistemic Uncertainty
```
# Specify the surrogate posterior over `keras.layers.Dense` `kernel` and `bias`.
def posterior_mean_field(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
c = np.log(np.expm1(1.))
return tf.keras.Sequential([
tfp.layers.VariableLayer(2 * n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t[..., :n],
scale=1e-5 + tf.nn.softplus(c + t[..., n:])),
reinterpreted_batch_ndims=1)),
])
# Specify the prior over `keras.layers.Dense` `kernel` and `bias`.
def prior_trainable(kernel_size, bias_size=0, dtype=None):
n = kernel_size + bias_size
return tf.keras.Sequential([
tfp.layers.VariableLayer(n, dtype=dtype),
tfp.layers.DistributionLambda(lambda t: tfd.Independent(
tfd.Normal(loc=t, scale=1),
reinterpreted_batch_ndims=1)),
])
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(lambda t: tfd.Normal(loc=t, scale=1)),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 3: Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.clf();
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 25:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=0.5)
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig3.png', bbox_inches='tight', dpi=300)
```
### Case 4: Aleatoric & Epistemic Uncertainty
```
# Build model.
model = tf.keras.Sequential([
tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable, kl_weight=1/x.shape[0]),
tfp.layers.DistributionLambda(
lambda t: tfd.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:]))),
])
# Do inference.
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=negloglik)
model.fit(x, y, epochs=1000, verbose=False);
# Profit.
[print(np.squeeze(w.numpy())) for w in model.weights];
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 4: Both Aleatoric & Epistemic Uncertainty
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
yhats = [model(x_tst) for _ in range(100)]
avgm = np.zeros_like(x_tst[..., 0])
for i, yhat in enumerate(yhats):
m = np.squeeze(yhat.mean())
s = np.squeeze(yhat.stddev())
if i < 15:
plt.plot(x_tst, m, 'r', label='ensemble means' if i == 0 else None, linewidth=1.)
plt.plot(x_tst, m + 2 * s, 'g', linewidth=0.5, label='ensemble means + 2 ensemble stdev' if i == 0 else None);
plt.plot(x_tst, m - 2 * s, 'g', linewidth=0.5, label='ensemble means - 2 ensemble stdev' if i == 0 else None);
avgm += m
plt.plot(x_tst, avgm/len(yhats), 'r', label='overall mean', linewidth=4)
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig4.png', bbox_inches='tight', dpi=300)
```
### Case 5: Functional Uncertainty
```
#@title Custom PSD Kernel
class RBFKernelFn(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(RBFKernelFn, self).__init__(**kwargs)
dtype = kwargs.get('dtype', None)
self._amplitude = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='amplitude')
self._length_scale = self.add_variable(
initializer=tf.constant_initializer(0),
dtype=dtype,
name='length_scale')
def call(self, x):
# Never called -- this is just a layer so it can hold variables
# in a way Keras understands.
return x
@property
def kernel(self):
return tfp.math.psd_kernels.ExponentiatedQuadratic(
amplitude=tf.nn.softplus(0.1 * self._amplitude),
length_scale=tf.nn.softplus(5. * self._length_scale)
)
# For numeric stability, set the default floating-point dtype to float64
tf.keras.backend.set_floatx('float64')
# Build model.
num_inducing_points = 40
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=[1]),
tf.keras.layers.Dense(1, kernel_initializer='ones', use_bias=False),
tfp.layers.VariationalGaussianProcess(
num_inducing_points=num_inducing_points,
kernel_provider=RBFKernelFn(),
event_shape=[1],
inducing_index_points_initializer=tf.constant_initializer(
np.linspace(*x_range, num=num_inducing_points,
dtype=x.dtype)[..., np.newaxis]),
unconstrained_observation_noise_variance_initializer=(
tf.constant_initializer(np.array(0.54).astype(x.dtype))),
),
])
# Do inference.
batch_size = 32
loss = lambda y, rv_y: rv_y.variational_loss(
y, kl_weight=np.array(batch_size, x.dtype) / x.shape[0])
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.01), loss=loss)
model.fit(x, y, batch_size=batch_size, epochs=1000, verbose=False)
# Profit.
yhat = model(x_tst)
assert isinstance(yhat, tfd.Distribution)
#@title Figure 5: Functional Uncertainty
y, x, _ = load_dataset()
plt.figure(figsize=[6, 1.5]) # inches
plt.plot(x, y, 'b.', label='observed');
num_samples = 7
for i in range(num_samples):
sample_ = yhat.sample().numpy()
plt.plot(x_tst,
sample_[..., 0].T,
'r',
linewidth=0.9,
label='ensemble means' if i == 0 else None);
plt.ylim(-0.,17);
plt.yticks(np.linspace(0, 15, 4)[1:]);
plt.xticks(np.linspace(*x_range, num=9));
ax=plt.gca();
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
#ax.spines['left'].set_smart_bounds(True)
#ax.spines['bottom'].set_smart_bounds(True)
plt.legend(loc='center left', fancybox=True, framealpha=0., bbox_to_anchor=(1.05, 0.5))
plt.savefig('/tmp/fig5.png', bbox_inches='tight', dpi=300)
```
### Quickstart
To run the code below:
1. Click on the cell to select it.
2. Press `SHIFT+ENTER` on your keyboard or press the play button
(<button class='fa fa-play icon-play btn btn-xs btn-default'></button>) in the toolbar above.
Feel free to create new cells using the plus button
(<button class='fa fa-plus icon-plus btn btn-xs btn-default'></button>), or pressing `SHIFT+ENTER` while this cell
is selected.
# Example 4 (neural pitch processing with audio input)
This example is a crude "pitch detector" network that performs autocorrelation of an audio signal. It works with coincidence detector neurons that receive two copies of the input (which has been transformed into spikes by an equally crude "periphery neuron" model), with a certain delay between the two inputs. Depending on their respective delay, neurons are sensitive to different periodicities.
The example shows how Brian's high-level model descriptions can be seamlessly combined with low-level code in a target language, in this case C++. Such code can be necessary to extend Brian's functionality without sacrificing performance, e.g. in applications that require real-time processing of external stimuli.
In this code, we use this mechanism to provide two possible sources of audio input: a pre-recorded audio file (`use_microphone = False`) or real-time input from a microphone (`use_microphone = True`). For the latter, the `portaudio` library needs to be installed. Also note that access to the computer's microphone is not possible when running the notebook on an external server such as [mybinder](https://mybinder.org).
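As a side note, the core idea of the detector (find the lag at which the signal best matches a delayed copy of itself) can be sketched independently of Brian in plain NumPy; the sine test signal, sample rate, and frequency range below are illustrative assumptions:

```python
import numpy as np

def estimate_pitch(signal, sample_rate=44100.0, fmin=50.0, fmax=1000.0):
    """Crude autocorrelation pitch estimate: pick the lag with maximal
    self-similarity inside the allowed period range."""
    signal = signal - signal.mean()
    # Full autocorrelation, keeping non-negative lags only
    ac = np.correlate(signal, signal, mode='full')[len(signal) - 1:]
    lag_min = int(sample_rate / fmax)  # shortest period considered
    lag_max = int(sample_rate / fmin)  # longest period considered
    best_lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return sample_rate / best_lag

# A 100 ms 220 Hz sine; the estimate should land close to 220 Hz
t = np.arange(0, 0.1, 1 / 44100)
print(estimate_pitch(np.sin(2 * np.pi * 220 * t)))
```

The spiking network below performs essentially this computation, but with delays implemented by synaptic propagation and the maximum read out from coincidence-detector firing rates.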
```
from brian2 import *
import os
```
We'll use the high-performance C++ standalone mode, otherwise we are not guaranteed that processing is faster than realtime (necessary when using microphone input).
```
set_device('cpp_standalone', directory='example_4')
```
We first set a few global parameters of the model:
```
sample_rate = 44.1*kHz
buffer_size = 128
defaultclock.dt = 1/sample_rate
runtime = 4.5*second
# Receptor neurons ("ear")
max_delay = 20*ms # 50 Hz
tau_ear = 1*ms
tau_th = 5*ms
# Coincidence detectors
min_freq = 50*Hz
max_freq = 1000*Hz
num_neurons = 300
tau = 1*ms
sigma = .1
```
The model equations (see code further down below) refer to a `get_sample` function that returns the next available audio sample. Since we cannot express this function as mathematical equations, we directly provide its implementation in the target language C++. Since the code is relatively long we do not include it directly here, but instead store the source code in a separate file. We provide two such files: `sound_from_mic.cpp` for sound input from a microphone (used when `use_microphone` is set to `True`), and `sound_from_file.cpp` for sound read from an uncompressed WAV audio file. These files will be compiled when needed because we provide their names as arguments to the `sources` keyword. Similarly, we include the function declaration that is present in the `sound_input.h` header file (identical for both cases) by passing its name to the `headers` keyword. We further customize the code that gets compiled by providing preprocessor macros via the `define_macros` keyword. Finally, since we are making use of the `portaudio` library for microphone input, we link against it via the `libraries` keyword:
```
use_microphone = False
if use_microphone:
    # Now comes the connection to the microphone code.
    @implementation('cpp', '//actual code in sound_from_mic.cpp',
                    sources=[os.path.abspath('sound_from_mic.cpp')],
                    headers=['"{}"'.format(os.path.abspath('sound_input.h'))],
                    libraries=['portaudio'],
                    define_macros=[('BUFFER_SIZE', buffer_size),
                                   ('SAMPLE_RATE', sample_rate/Hz)])
    @check_units(t=second, result=1)
    def get_sample(t):
        raise NotImplementedError('Use a C++-based code generation target.')
else:
    # Instead of using the microphone, use a sound file
    @implementation('cpp', '//actual code in sound_from_file.cpp',
                    sources=[os.path.abspath('sound_from_file.cpp')],
                    headers=['"{}"'.format(os.path.abspath('sound_input.h'))],
                    define_macros=[('FILENAME', r'\"{}\"'.format(os.path.abspath('scale_flute.wav')))])
    @check_units(t=second, result=1)
    def get_sample(t):
        raise NotImplementedError('Use a C++-based code generation target.')
```
We now specify our neural and synaptic model, making use of the `get_sample` function as if it were one of the standard functions provided by Brian:
```
gain = 50
# Note that the `get_sample` function does not actually make use of the time `t` that it is given, for simplicity
# it assumes that it is called only once per time step. This is actually enforced by using our `constant over dt`
# feature -- the variable `sound` can be used in several places (which is important here, since we want to
# record it as well):
eqs_ear = '''
dx/dt = (sound - x)/tau_ear: 1 (unless refractory)
dth/dt = (0.1*x - th)/tau_th : 1
sound = clip(get_sample(t), 0, inf) : 1 (constant over dt)
'''
receptors = NeuronGroup(1, eqs_ear, threshold='x>th', reset='x=0; th = th*2.5 + 0.01',
refractory=2*ms, method='exact')
receptors.th = 1
sound_mon = StateMonitor(receptors, 'sound', record=0)
eqs_neurons = '''
dv/dt = -v/tau+sigma*(2./tau)**.5*xi : 1
freq : Hz (constant)
'''
neurons = NeuronGroup(num_neurons, eqs_neurons, threshold='v>1', reset='v=0',
method='euler')
neurons.freq = 'exp(log(min_freq/Hz)+(i*1.0/(num_neurons-1))*log(max_freq/min_freq))*Hz'
synapses = Synapses(receptors, neurons, on_pre='v += 0.5',
multisynaptic_index='k')
synapses.connect(n=2) # one synapse without delay; one with delay
synapses.delay['k == 1'] = '1/freq_post'
```
We record the spikes of the "pitch detector" neurons, and run the simulation:
```
spikes = SpikeMonitor(neurons)
run(runtime)
```
After the simulation has run, we plot the raw sound input as well as its spectrogram, and the spiking activity of the detector neurons:
```
from plotly import tools
from plotly.offline import iplot, init_notebook_mode
import plotly.graph_objs as go
from scipy.signal import spectrogram
init_notebook_mode(connected=True)
fig = tools.make_subplots(5, 1, shared_xaxes=True,
specs=[[{}], [{'rowspan': 2}], [None], [{'rowspan': 2}], [None]],
print_grid=False
# subplot_titles=('Raw sound signal', 'Spectrogram of sound signal',
# 'Spiking activity')
)
trace = go.Scatter(x=sound_mon.t/second,
y=sound_mon.sound[0],
name='sound signal',
mode='lines',
line={'color':'#1f77b4'},
showlegend=False
)
fig.append_trace(trace, 1, 1)
f, t, Sxx = spectrogram(sound_mon.sound[0], fs=sample_rate/Hz, nperseg=2**12, window='hamming')
trace = go.Heatmap(x=t, y=f, z=10*np.log10(Sxx), showscale=False,
colorscale='Viridis', name='PSD')
fig.append_trace(trace, 2, 1)
trace = go.Scatter(x=spikes.t/second,
y=neurons.freq[spikes.i]/Hz,
marker={'symbol': 'line-ns', 'line': {'width': 1, 'color':'#1f77b4'},
'color':'#1f77b4'},
mode='markers',
name='spikes', showlegend=False)
fig.append_trace(trace, 4, 1)
fig['layout'].update(xaxis={'title': 'time (in s)',
'range': (0.4, runtime/second)},
yaxis1={'title': 'amplitude',
'showticklabels': False},
yaxis2={'type': 'log',
'range': (0.9*np.log10(min_freq/Hz), 1.1*np.log10(500)),
'title': 'Frequency (Hz)'},
yaxis3={'type': 'log',
'range': (0.9*np.log10(min_freq/Hz), 1.1*np.log10(500)),
'title': 'Preferred\nFrequency (Hz)'})
iplot(fig)
```
If you ran the above code for the pre-recorded sound file, you should clearly see that four separate, ascending notes were played.
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Algorithms/landsat_simple_composite.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/landsat_simple_composite.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Algorithms/landsat_simple_composite.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Algorithms/landsat_simple_composite.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The magic command `%%capture` can be used to hide output from a specific cell.
```
# %%capture
# !pip install earthengine-api
# !pip install geehydro
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()`
if you are running this notebook for the first time or if you are getting an authentication error.
```
# ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# Load a raw Landsat 5 ImageCollection for a single year.
collection = ee.ImageCollection('LANDSAT/LT05/C01/T1') \
.filterDate('2010-01-01', '2010-12-31')
# Create a cloud-free composite with default parameters.
composite = ee.Algorithms.Landsat.simpleComposite(collection)
# Create a cloud-free composite with custom parameters for
# cloud score threshold and percentile.
customComposite = ee.Algorithms.Landsat.simpleComposite(**{
'collection': collection,
'percentile': 75,
'cloudScoreRange': 5
})
# Display the composites.
Map.setCenter(-122.3578, 37.7726, 10)
Map.addLayer(composite, {'bands': ['B4', 'B3', 'B2'], 'max': 128}, 'TOA composite')
Map.addLayer(customComposite, {'bands': ['B4', 'B3', 'B2'], 'max': 128},
'Custom TOA composite')
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
# TF Complex Screening
Covariance screening process.
The process involves:
- Correlation testing with every other gene
- Find potential gene-TF1-TF2 trios by finding overlaps of genes
- Conditional independence testing (for every level of gene, are TF1 and TF2 independent?)
- Causal inference (direction of arrows), for TF1-KO, does the correlation between TF1 and G decrease?
- Mediation test, for TF1-KO, does the correlation between TF2 and G decrease?
- If all of the tests pass, we retain it for our final graph.
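The differential-correlation steps above can be sketched with Fisher's z-transform. This is a minimal, illustrative sketch using the standard formulation with variance 1/(n-3) per sample (note that the notebook's own `differential_correlation` helper later on uses a slightly different variance term); the function name and example numbers here are hypothetical:

```python
import math

def fisher_z_diff(r1, n1, r2, n2):
    """Two-sided z-test for a difference between two independent correlations."""
    # The Fisher transform stabilises the variance of a correlation estimate.
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return z, p

# e.g. a correlation that drops from 0.5 (200 WT cells) to 0.1 (150 KO cells)
z, p = fisher_z_diff(0.5, 200, 0.1, 150)
```

A large positive `z` (small `p`) indicates the correlation dropped significantly after the knockout, which is the signal used by the causal-inference and mediation steps.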
### Import
```
from IPython.core.display import display, HTML
import warnings
warnings.filterwarnings('ignore')
display(HTML("<style>.container { width:100% !important; }</style>"))
%matplotlib inline
repo_path = '/Users/mincheolkim/Github/'
data_path = '/Users/mincheolkim/Documents/'
import sys
sys.path.append(repo_path + 'scVI')
sys.path.append(repo_path + 'scVI-extensions')
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests
import imp
import numpy as np
import pandas as pd
from scipy.stats import pearsonr, spearmanr
import seaborn as sns
import pickle as pkl
import networkx as nx
import matplotlib.pyplot as plt
%matplotlib inline
import scvi_extensions.dataset.supervised_data_loader as sdl
import scvi_extensions.dataset.cropseq as cs
import scvi_extensions.inference.supervised_variational_inference as svi
import scvi_extensions.hypothesis_testing.mean as mn
import scvi_extensions.hypothesis_testing.variance as vr
import scvi_extensions.dataset.label_data_loader as ldl
import scanpy.api as sc
import itertools
%matplotlib inline
"""
process.py
This file contains the high level calls for the steps in the TF interaction analysis.
"""
def extract_ko_gene(adata, guide_cov='guide_cov'):
    """ Given an anndata object, extract the KO target gene from the guide. """
    adata.obs['ko_gene_cov'] = adata.obs[guide_cov].str.extract(r'^([^.]*).*')

def filter_gene_list(adata, guide_de_fname, min_count=60):
    """ Filter the gene list by a DE by guide analysis. """
    # Identify transcription factors that were knocked out.
    # These are included automatically, as long as they were in the original 3k vargene list.
    guide_de_df = pd.read_csv(guide_de_fname, sep='\t')
    tfs = set(guide_de_df['cluster'].str.extract(r'^([^.]*).*').iloc[:, 0].unique().tolist())
    tfs &= set(adata.var.index.tolist())
    tfs = list(tfs)
    print('Number of eligible TFs:', len(tfs))
    # Get a list of genes that are differentially expressed in many of the guides.
    counts = guide_de_df.gene.value_counts()
    de_genes = counts[counts > min_count].index.tolist()  # use the min_count argument, not a hard-coded 60
    # Filter the annotated data object
    all_genes = de_genes + tfs
    adata._inplace_subset_var(all_genes)
    adata.var_names_make_unique()  # operate on the passed-in object, not a global
```
### Define my TFs
```
tfs_considered = ['IRF1', 'JUN', 'JUNB', 'MYC', 'NFKBIA', 'SMAD2', 'SP1', 'STAT1', 'STAT3', 'STAT5A', 'STAT6']
```
### Read into scanpy
Filter genes to the DE genes identified above.
```
embedded_adata = sc.read('/Users/mincheolkim/Documents/nsnp20.raw.sng.guide_sng.norm.h5ad')
embedded_adata.obs['ko_gene_cov'] = embedded_adata.obs['guide_cov'].str.extract(r'^([^.]*).*')
embedded_adata
```
### Read in DE genes and filter the gene list in annotated data
```
de_filename = '/Users/mincheolkim/Documents/nsnp20.raw.sng.guide_sng.norm.vs0.igtb.guide.de.seurate.txt.meta.guide.meta.sig.txt'
filter_gene_list(embedded_adata, de_filename)
```
### Transcription factors
```
for tf in tfs_considered:
    print(tf)
```
### Read known interactions from String DB
I put in the list of TFs above, and got the interaction map. From the interaction map, I only retain those with experimentally determined interaction.
```
known_interactions = pd.read_csv('/Users/mincheolkim/Documents/string_interactions (1).tsv', sep='\t')\
.query('experimentally_determined_interaction > 0')
edge_set = set()
truth_graph = nx.Graph()
for idx, row in known_interactions.iterrows():
    tf1, tf2 = sorted([row['#node1'], row['node2']])
    truth_graph.add_edge(tf1, tf2)
    edge_set.add(tf1 + '_' + tf2)
```
### Correlation testing for each of these TFs
For each TF, find genes that correlate with it.
```
hits = {}
for tf in tfs_considered:
    print('Getting correlated genes for {}..'.format(tf))
    corrs = []
    pvals = []
    for gene in embedded_adata.var.index.tolist():
        corr, pval = spearmanr(
            embedded_adata.X[(embedded_adata.obs.guide_cov == "0").values, embedded_adata.var.index.tolist().index(tf)],
            embedded_adata.X[(embedded_adata.obs.guide_cov == "0").values, embedded_adata.var.index.tolist().index(gene)])
        corrs.append(corr)
        pvals.append(pval)
    hits[tf] = embedded_adata.var.index[(np.array(pvals) < 0.05/(len(tfs_considered)*embedded_adata.shape[1])) & (np.array(corrs) > 0)].tolist()

for tf, genes in hits.items():
    print(tf, len(genes))
```
### Get direction of the correlation by using KO information
```
directed_hits = {}
for tf in tfs_considered:
    directed_hits[tf] = []
    print('Getting correlated genes for {}..'.format(tf))
    for gene in hits[tf]:
        corr, pval = spearmanr(
            embedded_adata.X[(embedded_adata.obs.ko_gene_cov == tf).values, embedded_adata.var.index.tolist().index(tf)],
            embedded_adata.X[(embedded_adata.obs.ko_gene_cov == tf).values, embedded_adata.var.index.tolist().index(gene)])
        if pval > 0.05/(len(tfs_considered)*embedded_adata.shape[1]):
            directed_hits[tf].append(gene)

for tf, genes in directed_hits.items():
    print(tf, len(genes))
```
### For each TF pair, find the genes that they both are correlated with
```
tf_gene_overlaps = {}
tf_gene_overlap_counts = {}
for tf1, tf2 in itertools.combinations(sorted(directed_hits.keys()), 2):
    overlap = set(directed_hits[tf1]) & set(directed_hits[tf2])
    if len(overlap) > 0:
        tf_gene_overlaps[tf1 + '_' + tf2] = overlap
        tf_gene_overlap_counts[tf1 + '_' + tf2] = len(overlap)

tf_gene_overlap_counts
```
### For each TF pair, perform differential correlation analysis with target gene
```
def correlation(g1, g2, label, label_col='ko_gene_cov'):
    return spearmanr(
        embedded_adata.X[(embedded_adata.obs[label_col] == label).values, embedded_adata.var.index.tolist().index(g1)],
        embedded_adata.X[(embedded_adata.obs[label_col] == label).values, embedded_adata.var.index.tolist().index(g2)])

def differential_correlation(g1, g2, label_1, label_2, label_col='ko_gene_cov'):
    corr_1, pval_1 = correlation(g1, g2, label_1, label_col)
    corr_2, pval_2 = correlation(g1, g2, label_2, label_col)
    n_1 = (embedded_adata.obs[label_col] == label_1).sum()
    n_2 = (embedded_adata.obs[label_col] == label_2).sum()
    return (np.arctanh(corr_1) - np.arctanh(corr_2))/(np.sqrt(np.absolute((1/n_1) - (1/n_2))))

all_stats = {}
for pair, genes in tf_gene_overlaps.items():
    tf1, tf2 = pair.split('_')
    all_stats[pair] = {}
    for gene in genes:
        if gene == tf1 or gene == tf2:
            continue
        all_stats[pair][gene] = (
            differential_correlation(gene, tf2, '0', tf1),
            differential_correlation(gene, tf1, '0', tf2))
```
### Generate null distribution
```
embedded_adata.obs['shuffled_ko_gene_cov'] = embedded_adata.obs['ko_gene_cov'].values[np.random.permutation(len(embedded_adata.obs))]
null_stats = []
for pair, genes in tf_gene_overlaps.items():
    tf1, tf2 = pair.split('_')
    for gene in genes:
        null_stats.append(differential_correlation(gene, tf2, '0', tf1, label_col='shuffled_ko_gene_cov'))
        null_stats.append(differential_correlation(gene, tf1, '0', tf2, label_col='shuffled_ko_gene_cov'))
null_stats = np.array(null_stats)
```
### For each TF pair, keep the genes that are regulated by their complex
```
len(null_stats)
1/2078
plt.hist(null_stats)
# Count tests
num_tests = 0
for pair, gene_list in all_stats.items():
    num_tests += len(gene_list)
num_tests

pval_cutoff = 0.05/num_tests

def emp_pval(val, null_stats):
    return (null_stats >= val).mean()

complex_regulators = {}
for pair, gene_stats in all_stats.items():
    for gene, stats in gene_stats.items():
        s1, s2 = stats
        if s1 < 0 or s2 < 0:
            continue
        pval_1 = emp_pval(s1, null_stats)
        pval_2 = emp_pval(s2, null_stats)
        if pval_1 < pval_cutoff and pval_2 < pval_cutoff:
            if pair not in complex_regulators.keys():
                complex_regulators[pair] = []
            complex_regulators[pair].append(gene)

G = nx.Graph()
for pair in complex_regulators.keys():
    tf1, tf2 = pair.split('_')
    G.add_edge(tf1, tf2, weight=len(complex_regulators[pair]), color='b' if pair in edge_set else 'y')
pos = nx.circular_layout(truth_graph)
len(G.edges)
(11*10)/2
plt.figure(figsize=(10, 10))
colors = [G[u][v]['color'] for u,v in G.edges]
weights = [G[u][v]['weight'] for u,v in G.edges]
nx.draw_networkx(G, pos=pos, with_labels=True, node_color='g', node_size=4200, edge_color=colors, width=8, font_color='w', font_size=15);
plt.axis('off');
plt.title('Predicted interactions', fontsize=20)
#plt.savefig('/Users/mincheolkim/Documents/predicted_tf_interaction_map.png')
```
### Shift in distribution graph
```
# calculate correlations in WT cells
corrs = np.array([correlation('STAT3', gene, '0', label_col='ko_gene_cov')[0] for gene in hits['STAT3']])
ko_corrs = np.array([correlation('STAT3', gene, 'STAT6', label_col='ko_gene_cov')[0] for gene in hits['STAT3']])
# calculate correlations in WT cells
corrs_2 = np.array([correlation('STAT6', gene, '0', label_col='ko_gene_cov')[0] for gene in hits['STAT6']])
ko_corrs_2 = np.array([correlation('STAT6', gene, 'STAT3', label_col='ko_gene_cov')[0] for gene in hits['STAT6']])
plt.figure(figsize=(15, 3))
plt.subplot(1, 2, 1)
sns.distplot(corrs, kde=True, bins=40, hist=False, kde_kws={"shade": True})
sns.distplot(ko_corrs, kde=True, bins=40, hist=False, kde_kws={"shade": True})
plt.title('Distribution of correlations between STAT3 and other genes')
plt.legend(['WT', 'STAT6 KO'])
#plt.savefig('/Users/mincheolkim/Documents/correlation_distribution_shift.png')
plt.subplot(1, 2, 2)
sns.distplot(corrs_2, kde=True, bins=40, hist=False, kde_kws={"shade": True})
sns.distplot(ko_corrs_2, kde=True, bins=40, hist=False, kde_kws={"shade": True})
plt.title('Distribution of correlations between STAT6 and other genes')
plt.legend(['WT', 'STAT3 KO'])
```
### Boxplot
The final gene set whose expression is mutually mediated by STAT3 and STAT6.
```
print(complex_regulators['STAT3_STAT6'])
corrs_6 = np.array([correlation('STAT3', gene, '0', label_col='ko_gene_cov')[0] for gene in complex_regulators['STAT3_STAT6']])
ko_corrs_6 = np.array([correlation('STAT3', gene, 'STAT6', label_col='ko_gene_cov')[0] for gene in complex_regulators['STAT3_STAT6']])
corrs_3 = np.array([correlation('STAT6', gene, '0', label_col='ko_gene_cov')[0] for gene in complex_regulators['STAT3_STAT6']])
ko_corrs_3 = np.array([correlation('STAT6', gene, 'STAT3', label_col='ko_gene_cov')[0] for gene in complex_regulators['STAT3_STAT6']])
boxplot_df = pd.DataFrame()
from scipy.stats import ks_2samp, wilcoxon
print(ks_2samp(corrs_6, ko_corrs_6))
print(wilcoxon(corrs_6, ko_corrs_6))
print(ks_2samp(corrs_3, ko_corrs_3))
print(wilcoxon(corrs_3, ko_corrs_3))
boxplot_df['wt_stat3_gene'] = corrs_6
boxplot_df['stat6ko_stat3_gene'] = ko_corrs_6
boxplot_df['wt_stat6_gene'] = corrs_3
boxplot_df['stat3ko_stat6_gene'] = ko_corrs_3
plt.figure(figsize=(8, 5))
plt.title('Correlations between STAT3/6 and dependent genes')
sns.boxplot(data=boxplot_df)
```
# Broadcasts
This notebook explains the different types of broadcast available in PyBaMM.
Understanding of the [expression_tree](./expression-tree.ipynb) and [discretisation](../spatial_methods/finite-volumes.ipynb) notebooks is assumed.
```
%pip install pybamm -q # install PyBaMM if it is not installed
import pybamm
import numpy as np
```
We also explicitly set up the discretisation that is used for this notebook. We use a small number of points in each domain, in order to easily visualise the results.
```
var = pybamm.standard_spatial_vars
geometry = {
"negative electrode": {var.x_n: {"min": pybamm.Scalar(0), "max": pybamm.Scalar(1)}},
"negative particle": {var.r_n: {"min": pybamm.Scalar(0), "max": pybamm.Scalar(1)}},
}
submesh_types = {
"negative electrode": pybamm.Uniform1DSubMesh,
"negative particle": pybamm.Uniform1DSubMesh,
}
var_pts = {var.x_n: 5, var.r_n: 3}
mesh = pybamm.Mesh(geometry, submesh_types, var_pts)
spatial_methods = {
"negative electrode": pybamm.FiniteVolume(),
"negative particle": pybamm.FiniteVolume(),
}
disc = pybamm.Discretisation(mesh, spatial_methods)
```
## Primary broadcasts
Primary broadcasts are used to broadcast from a "larger" scale to a "smaller" scale, for example broadcasting temperature T(x) from the electrode to the particles, or broadcasting current collector current i(y, z) from the current collector to the electrodes.
To demonstrate this, we first create a variable `T` on the negative electrode domain, discretise it, and evaluate it with a simple linear vector
```
T = pybamm.Variable("T", domain="negative electrode")
disc.set_variable_slices([T])
disc_T = disc.process_symbol(T)
disc_T.evaluate(y=np.linspace(0,1,5))
```
We then broadcast `T` onto the "negative particle" domain (using primary broadcast as we are going from the larger electrode scale to the smaller particle scale), and discretise and evaluate the resulting object.
```
primary_broad_T = pybamm.PrimaryBroadcast(T, "negative particle")
disc_T = disc.process_symbol(primary_broad_T)
disc_T.evaluate(y=np.linspace(0,1,5))
```
The broadcasted object makes 3 (since the r-grid has 3 points) copies of each element of `T` and stacks them all up to give an object with size 3x5=15. In the resulting vector, the first 3 entries correspond to the 3 points in the r-domain at the first x-grid point (where T=0 uniformly in r), the next 3 entries correspond to the next 3 points in the r-domain at the second x-grid point (where T=0.25 uniformly in r), and so on.
## Secondary broadcasts
Secondary broadcasts are used to broadcast from a "smaller" scale to a "larger" scale, for example broadcasting SPM particle concentrations c_s(r) from the particles to the electrodes. Note that this wouldn't be used to broadcast particle concentrations in the DFN, since these already depend on both x and r.
To demonstrate this, we first create a variable `c_s` on the negative particle domain, discretise it, and evaluate it with a simple linear vector
```
c_s = pybamm.Variable("c_s", domain="negative particle")
disc.set_variable_slices([c_s])
disc_c_s = disc.process_symbol(c_s)
disc_c_s.evaluate(y=np.linspace(0,1,3))
```
We then broadcast `c_s` onto the "negative electrode" domain (using secondary broadcast as we are going from the smaller particle scale to the large electrode scale), and discretise and evaluate the resulting object.
```
secondary_broad_c_s = pybamm.SecondaryBroadcast(c_s, "negative electrode")
disc_broad_c_s = disc.process_symbol(secondary_broad_c_s)
disc_broad_c_s.evaluate(y=np.linspace(0,1,3))
```
The broadcasted object makes 5 (since the x-grid has 5 points) identical copies of the whole variable `c_s` to give an object with size 5x3=15. In the resulting vector, the first 3 entries correspond to the 3 points in the r-domain at the first x-grid point (where c_s varies in r), the next 3 entries correspond to the next 3 points in the r-domain at the second x-grid point (where c_s varies in r), and so on.
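In plain NumPy terms (a sketch of the resulting vectors only, not of PyBaMM's internals), the two broadcasts correspond to `np.repeat` and `np.tile` respectively:

```python
import numpy as np

T = np.linspace(0, 1, 5)    # electrode-scale variable (5 x-points)
c_s = np.linspace(0, 1, 3)  # particle-scale variable (3 r-points)

# Primary broadcast: each x-value is repeated for all 3 r-points,
# so each block of 3 entries is constant in r.
primary = np.repeat(T, 3)

# Secondary broadcast: the whole r-profile is copied for each of the 5 x-points,
# so each block of 3 entries varies in r.
secondary = np.tile(c_s, 5)

print(primary.shape, secondary.shape)  # both (15,)
```

Comparing `primary` and `secondary` side by side makes the block structure described above easy to see.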
## References
The relevant papers for this notebook are:
```
pybamm.print_citations()
```
# Create the Training datasets
1. Upload your training datasets for annotation.
Here are examples of CLI commands used to upload images from the *datasets* folder to **{S3_DATASET_BUCKET}**
```aws s3 sync ./datasets/ s3://{S3_DATASET_BUCKET}/datasets/```
2. If you already have the manifest files, modify the path to the image files in the manifest file and upload it to S3
Use the following commands to replace the placeholder **S3_BUCKET_NAME** with the real bucket name **{S3_DATASET_BUCKET}**
```sed -i -e "s/S3_BUCKET_NAME/{S3_DATASET_BUCKET}/g" ./output.manifest```
```aws s3 cp ./output.manifest s3://{S3_DATASET_BUCKET}/datasets/```
3. Run the following notebook cells to create datasets in Amazon Rekognition from the uploaded manifest file.
4. If there is no label manifest file available, you can import the dataset images from S3 bucket and label the training images following the instructions in Amazon Rekognition Custom Labels console. Detail instructions can be found at this [document](https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/creating-datasets.html)
```
!pip install botocore --upgrade
!pip install boto3 --upgrade
import boto3
import argparse
import logging
import time
import json
from botocore.exceptions import ClientError
logger = logging.getLogger(__name__)
rek_client=boto3.client('rekognition')
def create_dataset(rek_client, project_arn, dataset_type, bucket, manifest_file):
    """
    Creates an Amazon Rekognition Custom Labels dataset.
    :param rek_client: The Amazon Rekognition Custom Labels Boto3 client.
    :param project_arn: The ARN of the project in which you want to create a dataset.
    :param dataset_type: The type of the dataset that you want to create (train or test).
    :param bucket: The S3 bucket that contains the manifest file.
    :param manifest_file: The path and filename of the manifest file.
    """
    try:
        # Create the dataset
        logger.info(f"Creating {dataset_type} dataset for project {project_arn}")
        dataset_type = dataset_type.upper()
        dataset_source = json.loads(
            '{ "GroundTruthManifest": { "S3Object": { "Bucket": "'
            + bucket
            + '", "Name": "'
            + manifest_file
            + '" } } }'
        )
        response = rek_client.create_dataset(
            ProjectArn=project_arn, DatasetType=dataset_type, DatasetSource=dataset_source
        )
        dataset_arn = response['DatasetArn']
        logger.info(f"dataset ARN: {dataset_arn}")
        finished = False
        while not finished:
            dataset = rek_client.describe_dataset(DatasetArn=dataset_arn)
            status = dataset['DatasetDescription']['Status']
            if status == "CREATE_IN_PROGRESS":
                logger.info(f"Creating dataset: {dataset_arn}")
                time.sleep(5)
                continue
            if status == "CREATE_COMPLETE":
                logger.info(f"Dataset created: {dataset_arn}")
                finished = True
                continue
            if status == "CREATE_FAILED":
                logger.exception(f"Dataset creation failed: {status} : {dataset_arn}")
                raise Exception(f"Dataset creation failed: {status} : {dataset_arn}")
            logger.exception(f"Failed. Unexpected state for dataset creation: {status} : {dataset_arn}")
            raise Exception(f"Failed. Unexpected state for dataset creation: {status} : {dataset_arn}")
        return dataset_arn
    except ClientError as err:
        logger.exception(f"Couldn't create dataset: {err.response['Error']['Message']}")
        raise
def train_model(rek_client, project_arn, version_name, bucket, manifest_file, output_folder, tag_key=None, tag_key_value=None):
    """
    Trains an Amazon Rekognition Custom Labels model.
    :param rek_client: The Amazon Rekognition Custom Labels Boto3 client.
    :param project_arn: The ARN of the project in which you want to train a model.
    :param version_name: A version for the model.
    :param bucket: The S3 bucket that hosts the manifest file and the training output.
    :param manifest_file: The path and filename of the training manifest file.
    :param output_folder: The path for the training output within bucket.
    :param tag_key: The name of a tag to attach to the model. Pass None to exclude.
    :param tag_key_value: The value of the tag. Pass None to exclude.
    """
    try:
        # Train the model
        status = ""
        logger.info(f"training model version {version_name} for project {project_arn}")
        dataset_source = json.loads(
            '{ "GroundTruthManifest": { "S3Object": { "Bucket": "'
            + bucket
            + '", "Name": "'
            + manifest_file
            + '" } } }'
        )
        output_config = json.loads(
            '{"S3Bucket": "'
            + bucket
            + '", "S3KeyPrefix": "'
            + output_folder
            + '" } '
        )
        tags = {}
        if tag_key is not None and tag_key_value is not None:
            tags = json.loads(
                '{"' + tag_key + '":"' + tag_key_value + '"}'
            )
        response = rek_client.create_project_version(
            ProjectArn=project_arn,
            VersionName=version_name,
            OutputConfig=output_config,
            TrainingData={'Assets': [dataset_source]},
            TestingData={'AutoCreate': True},
            Tags=tags
        )
        logger.info(f"Started training: {response['ProjectVersionArn']}")
        # Wait for the project version training to complete
        project_version_training_completed_waiter = rek_client.get_waiter('project_version_training_completed')
        project_version_training_completed_waiter.wait(ProjectArn=project_arn,
                                                       VersionNames=[version_name])
        # Get the completion status
        describe_response = rek_client.describe_project_versions(ProjectArn=project_arn,
                                                                 VersionNames=[version_name])
        for model in describe_response['ProjectVersionDescriptions']:
            logger.info("Status: " + model['Status'])
            logger.info("Message: " + model['StatusMessage'])
            status = model['Status']
        logger.info("finished training")
        return response['ProjectVersionArn'], status
    except ClientError as err:
        logger.exception(f"Couldn't train model: {err.response['Error']['Message']}")
        raise
project_name = 'MRE-workshop-project'
DATA_BUCKET = 'S3_BUCKET_NAME'
MANIFEST = 'dataset/output.manifest'
response=rek_client.create_project(ProjectName=project_name)
project_arn = response['ProjectArn']
#dataset_arn=create_dataset(rek_client, project_arn, 'TRAIN', DATA_BUCKET, MANIFEST)
version_name = 'VERSION_NAME'
OUTPUT = 'OUTPUT_FOLDER'
model_arn, status=train_model(rek_client, project_arn, version_name, DATA_BUCKET, MANIFEST, OUTPUT)
```
# Wait for training to finish
When training is done, you will find the ARN for this model in the Amazon Rekognition Custom Labels console and you can start/stop your model from there.
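Once the model is running, a minimal sketch of classifying an image with `detect_custom_labels` might look like the following (the client, model ARN, bucket, and image key are passed in as placeholders you would substitute with your own values):

```python
def classify_image(rek_client, model_arn, bucket, image_key, min_confidence=50):
    """Run inference with a running Custom Labels model on one image stored in S3.

    Assumes the model has already been started, e.g. with
    rek_client.start_project_version(ProjectVersionArn=model_arn, MinInferenceUnits=1)
    """
    response = rek_client.detect_custom_labels(
        ProjectVersionArn=model_arn,
        Image={'S3Object': {'Bucket': bucket, 'Name': image_key}},
        MinConfidence=min_confidence)
    # Each detected label carries a name and a confidence score.
    return [(label['Name'], label['Confidence']) for label in response['CustomLabels']]
```

Remember to call `rek_client.stop_project_version(ProjectVersionArn=model_arn)` when you are done, since a running model accrues charges.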
# 3 Loops
Sometimes when programming you might need to do repetitive tasks. For example, doing a given operation with an increasingly larger number.
Python offers loops as a way to make your life easier, and your code much smaller.
## 3.1 While loops
Assume `number = 1` and you would like to add 1 until you reach 10. At each incrementation of `number` you want to print the value of the variable `number` to the screen. You could do it explicitly in the following manner:
```
number = 1
print("number is equal to %d now." % number)
number +=1
print("number is equal to %d now." % number)
number +=1
print("number is equal to %d now." % number)
number +=1
print("number is equal to %d now." % number)
number +=1
print("number is equal to %d now." % number)
number +=1
print("number is equal to %d now." % number)
number +=1
print("number is equal to %d now." % number)
number +=1
print("number is equal to %d now." % number)
number +=1
print("number is equal to %d now." % number)
number +=1
print("number is equal to %d now." % number)
number +=1
```
This will easily become tedious as the number of increments increases. Instead you can save yourself some time and just program this as a loop:
```
number = 1
while number <= 10:
    print("number is equal to %d now." % number)
    number += 1
print("End of loop")
```
The `while` loop is a useful construct which allows you to repeat a task while a certain statement is `True`.
Here, this statement is `number <= 10`.
The colon (`:`) after the statement marks the beginning of the task(s) that are to be performed in the loop.
Once the statement becomes `False`, the loop will stop being executed and the code will move on to its next section (in this case we print "End of loop").
**Important note:** As you can see from the code above, the task(s) to be done in the loop are indented. Indentation is how python determines if a block of code belongs to a given method statement.
For example, in the above code the presence of an indentation for:
```python
    print("number is equal to %d now." % number)
    number += 1
```
Means that they belong to the `while` statement.
However, since the:
```python
print("End of loop")
```
does not have indentation, it does not get executed as part of the `while` statement.
If the code was instead changed to indent the second `print` statement, you would see that it would be executed at every iteration of the `while` loop.
```
number = 1
while number <= 10:
    print("number is equal to %d now." % number)
    number += 1
    print("End of loop")
```
### Aside
`'%d' % number` is an example of string formatting: here we are 'inserting' the value of a variable into a string (use `%d` for a number and `%s` for a string). To insert multiple variables, you can use e.g. `'%d %s' % (num, str)`. This is 'old-style' formatting; you can also use the 'new-style' formatting: `'{} {}'.format(var1, var2)`. The new-style formatting lets us index our entries, which can be convenient.
```
print('This is %d example of a formatted %s' % (1, 'string'))
print('This is the {}nd example of a formatted {}.'.format(2, 'string'))
# indexing with new-style
print('This is the {A} thing, here is the {B}, the {C}, and now the {A} again'.format(A='first', C='third', B='second'))
```
Recent versions of Python (3.6 onwards) have introduced a cleaner formatting method named f-strings (format strings, `f""` or `f''`). These are extremely convenient for string formatting:
```
x = 1
print(f"'x' is of type {type(x)} and contains the value {x}")
```
Thus, by prepending our string with the letter `f`, we can use variables and valid Python expressions within `{}`. In the rest of the course we will prefer this formatting style since it greatly improves readability, and we strongly recommend the use of f-strings over other types of formatting.
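For instance, any valid expression and an optional format specifier can go inside the braces:

```python
count = 3
print(f"{count} squared is {count ** 2}")  # expressions are evaluated
print(f"pi is roughly {22 / 7:.3f}")       # format spec: 3 decimal places
print(f"{'loops':>10}")                    # right-align in a 10-character field
```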
### Exercise 3.1.1
See what happens in the above example if you remove the colon or don't use indentation.
```
# Exercise 3.1.1
# Removing the colon after the while condition raises a SyntaxError,
# so none of this cell runs:
number = 1
while number <= 10
    print("number is equal to %d now." % number)
    number += 1
print("End of loop")
```
A problem with `while` loops is that the condition under which they run can remain `True` forever, in which case they never stop. Without the line `number += 1` in the above example, the loop would run forever and always print "number is equal to 1 now." to the screen. Therefore, it is important to make sure that `while` loops terminate at some stage.
`while` statements are particularly useful when you are iterating over a task an unknown number of times. An example of this would be iteratively solving an equation for a given input until you reach a convergence limit.
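For example, a `while` loop of this kind might repeatedly halve a value until it falls below a tolerance. The numbers below are made up purely for illustration, and the iteration cap guards against the loop running forever:

```python
value = 100.0          # made-up starting value
tolerance = 1.0        # made-up convergence limit
iterations = 0
max_iterations = 1000  # safety cap: guarantees the loop terminates
while value >= tolerance and iterations < max_iterations:
    value = value / 2
    iterations += 1
print(f"Converged to {value} after {iterations} iterations")
# -> Converged to 0.78125 after 7 iterations
```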
However, if you already know how many times you want to execute a set of tasks, `for` loops are better suited.
## 3.2 For loops
For loops take an *iterator statement* and loop over it until the iteration is exhausted.
To demonstrate this, you can equally write the above example as a `for` loop:
```
for number in range(1, 11):
    print(f"number is equal to {number} now.")
```
Here `number in range(1, 11)` is the *iterator statement*, which generates values of `number` that progressively increase from 1 to 10 as the for loop is executed.
**Note:** the range goes to 11, but the loop stops at 10. This is one place where Python is not very intuitive: if you define a range, Python will not include its last value. Since `number` is an integer here, the last integer before 11 is 10.
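You can see the exclusive endpoint directly by converting a range to a list:

```python
print(list(range(1, 11)))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] -- 11 is excluded
print(len(range(1, 11)))   # 10 values
```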
### Exercise 3.2.1
Use either a `while` loop or a `for` loop to print out the first 10 entries in the 6 times table, i.e.:
- 1x6=6
- 2x6=12
- ...
- 10x6=60
```
# Exercise 3.2.1
# Using a while loop
number = 1
while number <= 10:
    print(f"{number}x6={6*number}")
    number += 1
# Using a for loop
for number in range(1, 11):
    print(f"{number}x6={6*number}")
```
You can also use loops within loops - as many as you like. Just keep good track using indentations!
### Exercise 3.2.2
Use nested loops to calculate the matrix product of the two matrices:
\begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
and
\begin{bmatrix}
-3 & 1 \\
4 & -2
\end{bmatrix}
**Note:** You can define each matrix as a numpy array (see section 10). You might want to work through section 10 first and then come back and try this exercise.
```
# Exercise 3.2.2
import numpy as np
# set the two matrices:
A = np.array([[1,2], [3,4]])
print(A)
B = np.array([[-3,1],[4,-2]])
print(B)
# make an array of the same size (i.e. 2 x 2), currently filled with zeroes, to store the product:
product = np.zeros((2,2))
# loop through the row numbers of the product:
for i in range(2):
    # loop through the column numbers of the product:
    for j in range(2):
        # we get the (i, j) entry of the product from the ith row of A and the jth column of B:
        product[i, j] = sum(A[i,:] * B[:,j])
print('The product is:\n', product)
# There are other (easier) ways of going about this, e.g. numpy provides a function for it,
# but this exercise demonstrates how we can use nested loops!
np.dot(A, B)
```
## 3.3 Range
As shown above, if you want to loop over a range of numbers, you can loop over `range`. `range` can be called with the arguments `(start, stop)` in the manner `range(start, stop)`. Values will be assigned starting with the value of `start` and ending with the value of `stop - 1`. If `start` is omitted, `start=0` is assumed:
```
for i in range(5):
    print(i)
for i in range(-5, 5):
    print(i)
```
The `range` function accepts a third argument that lets you modify the step size (default `1`). The full signature of `range` is therefore `range(start, stop, step)`.
```
for i in range(-5, 5, 2):
    print(i)
```
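The step can also be negative, which counts downwards; `stop` is still excluded:

```python
for i in range(10, 0, -2):
    print(i)  # prints 10, 8, 6, 4, 2
```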
## Review:
In this section, we have covered:
- Using loops to do repetitive tasks.
- The `while` loop construct.
- The `for` loop construct.
- Using `range` to generate numbers over a given range.
- Formatting strings.
# Parallelization with TFDS
We'll go back to the classic cats versus dogs example, but instead of just naively loading the data to train a model, we will parallelize various stages of the Extract, Transform and Load (ETL) process. In particular, we will perform the following tasks:
1. Parallelize the extraction of the stored TFRecords of the cats_vs_dogs dataset by using the interleave operation.
2. Parallelize the transformation during the preprocessing of the raw dataset by using the map operation.
3. Cache the processed dataset in memory by using the cache operation for faster retrieval.
4. Parallelize the loading of the cached dataset during the training cycle by using the prefetch operation.
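Conceptually, the prefetch step in task 4 overlaps data preparation with consumption: a producer keeps a small buffer of ready batches while the consumer works on the current one. Outside of `tf.data` (which implements all of this internally), the same idea can be sketched with the standard library; the batch names here are purely illustrative:

```python
import queue
import threading

def producer(buffer, n_batches):
    # Simulates the input pipeline: prepares batches ahead of time.
    for i in range(n_batches):
        buffer.put(f"batch-{i}")
    buffer.put(None)  # sentinel: no more data

def consumer(buffer, results):
    # Simulates the training loop: consumes whichever batch is ready next.
    while True:
        batch = buffer.get()
        if batch is None:
            break
        results.append(batch)

buffer = queue.Queue(maxsize=2)  # like a small prefetch buffer
results = []
worker = threading.Thread(target=producer, args=(buffer, 5))
worker.start()
consumer(buffer, results)
worker.join()
print(results)
```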
## Setup
```
import multiprocessing
import tensorflow as tf
import tensorflow_datasets as tfds
from os import getcwd
```
## Create and Compile the Model
```
def create_model():
    input_layer = tf.keras.layers.Input(shape=(224, 224, 3))
    base_model = tf.keras.applications.MobileNetV2(input_tensor=input_layer,
                                                   weights='imagenet',
                                                   include_top=False)
    base_model.trainable = False
    x = tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
    x = tf.keras.layers.Dense(2, activation='softmax')(x)
    model = tf.keras.models.Model(inputs=input_layer, outputs=x)
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['acc'])
    return model
```
## Naive Approach
Just for comparison, let's start by using the naive approach to Extract, Transform, and Load the data to train the model defined above. By naive approach we mean that we won't apply any of the new concepts of parallelization.
```
dataset_name = 'cats_vs_dogs'
filePath = f"{getcwd()}/../tmp2"
dataset, info = tfds.load(name=dataset_name, split=tfds.Split.TRAIN, with_info=True, data_dir=filePath)
print(info.version)
def preprocess(features):
    image = features['image']
    image = tf.image.resize(image, (224, 224))
    image = image / 255.0
    return image, features['label']
train_dataset = dataset.map(preprocess).batch(32)
```
The next step will be to train the model using the following code:
```python
model = create_model()
model.fit(train_dataset, epochs=5)
```
```
# Let's train using the default (naive) pipeline
model = create_model()
model.fit(train_dataset, epochs=5)
```
# Parallelize Various Stages of the ETL Processes
The following exercises are about parallelizing various stages of the Extract, Transform and Load processes. In particular, you will perform the following tasks:
1. Parallelize the extraction of the stored TFRecords of the cats_vs_dogs dataset by using the interleave operation.
2. Parallelize the transformation during the preprocessing of the raw dataset by using the map operation.
3. Cache the processed dataset in memory by using the cache operation for faster retrieval.
4. Parallelize the loading of the cached dataset during the training cycle by using the prefetch operation.
We start by creating a dataset of strings corresponding to the `file_pattern` of the TFRecords of the cats_vs_dogs dataset.
```
# Here we are loading TFRecords of the cats_vs_dogs dataset, which we downloaded from TFDS previously
file_pattern = f'{getcwd()}/../tmp2/{dataset_name}/{info.version}/{dataset_name}-train.tfrecord*'
files = tf.data.Dataset.list_files(file_pattern)
```
Let's recall that the TFRecord format is a simple format for storing a sequence of binary records. This is very useful because serializing the data and storing it in a set of files (100-200 MB each) that can each be read linearly greatly increases the efficiency of reading the data.
Since we will use it later, we should also recall that a `tf.Example` message (or protobuf) is a flexible message type that represents a `{"string": tf.train.Feature}` mapping.
## Parallelize Extraction
In the cell below you will use the [interleave](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#interleave) operation with certain [arguments](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#args_38) to parallelize the extraction of the stored TFRecords of the cats_vs_dogs dataset.
Recall that `tf.data.experimental.AUTOTUNE` will delegate the decision about what level of parallelism to use to the `tf.data` runtime.
```
# Parallelize the extraction of the stored TFRecords of
# the cats_vs_dogs dataset by using the interleave operation with
# cycle_length = 4 and the number of parallel calls set to tf.data.experimental.AUTOTUNE.
train_dataset = files.interleave(tf.data.TFRecordDataset,
                                 cycle_length=4,
                                 num_parallel_calls=tf.data.experimental.AUTOTUNE)
```
## Parse and Decode
At this point the `train_dataset` contains serialized `tf.train.Example` messages. When iterated over, it returns these as scalar string tensors. The sample output for one record is given below:
```
<tf.Tensor: id=189, shape=(), dtype=string, numpy=b'\n\x8f\xc4\x01\n\x0e\n\x05label\x12\x05\x1a\x03\n\x01\x00\n,\n\x0eimage/filename\x12\x1a\n\x18\n\x16PetImages/Cat/4159.jpg\n\xcd\xc3\x01\n\x05image\x12...\xff\xd9'>
```
In order to be able to use these tensors to train our model, we must first parse them and decode them. We can parse and decode these string tensors by using a function. In the cell below you will create a `read_tfrecord` function that will read the serialized `tf.train.Example` messages and decode them. The function will also normalize and resize the images after they have been decoded.
In order to parse the `tf.train.Example` messages we need to create a `feature_description` dictionary. This is needed because TFDS uses graph execution and therefore requires this description to build the shape and type signature of the features. The basic structure of the `feature_description` dictionary looks like this:
```python
feature_description = {'feature': tf.io.FixedLenFeature([], tf.Dtype, default_value)}
```
The number of features in your `feature_description` dictionary will vary depending on your dataset. In our particular case, the features are `'image'` and `'label'` and can be seen in the sample output of the string tensor above. Therefore, our `feature_description` dictionary will look like this:
```python
feature_description = {
    'image': tf.io.FixedLenFeature((), tf.string, ""),
    'label': tf.io.FixedLenFeature((), tf.int64, -1),
}
```
where we have given the default values of `""` and `-1` to the `'image'` and `'label'` respectively.
The next step will be to parse the serialized `tf.train.Example` message using the `feature_description` dictionary given above. This can be done with the following code:
```python
example = tf.io.parse_single_example(serialized_example, feature_description)
```
Finally, we can decode the image by using:
```python
image = tf.io.decode_jpeg(example['image'], channels=3)
```
```
def read_tfrecord(serialized_example):
    # Create the feature description dictionary
    feature_description = {
        'image': tf.io.FixedLenFeature((), tf.string, ""),
        'label': tf.io.FixedLenFeature((), tf.int64, -1),
    }
    # Parse the serialized_example and decode the image
    example = tf.io.parse_single_example(serialized_example, feature_description)
    image = tf.io.decode_jpeg(example['image'], channels=3)
    image = tf.cast(image, tf.float32)
    # Normalize the pixels in the image
    image = image / 255.0
    # Resize the image to (224, 224) using tf.image.resize
    image = tf.image.resize(image, [224, 224])
    return image, example['label']
```
## Parallelize Transformation
You can now apply the `read_tfrecord` function to each item in the `train_dataset` by using the `map` method. You can parallelize the transformation of the `train_dataset` by using the `map` method with the `num_parallel_calls` set to the number of CPU cores.
```
# Get the number of CPU cores.
cores = multiprocessing.cpu_count()
print(cores)
# Parallelize the transformation of the train_dataset by using
# the map operation with the number of parallel calls set to
# the number of CPU cores.
train_dataset = train_dataset.map(read_tfrecord, num_parallel_calls=cores)
```
## Cache the Dataset
```
# Cache the processed train_dataset (not the raw dataset) in memory for faster retrieval.
train_dataset = train_dataset.cache()
```
## Parallelize Loading
```
# Shuffle and batch the train_dataset. Use a buffer size of 1024
# for shuffling and a batch size 32 for batching.
train_dataset = train_dataset.shuffle(1024).batch(32)
# Parallelize the loading by prefetching the train_dataset.
# Set the prefetching buffer size to tf.data.experimental.AUTOTUNE.
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)
```
The next step will be to train your model using the following code:
```python
model = create_model()
model.fit(train_dataset, epochs=5)
```
Due to the parallelization of the various stages of the ETL processes, you should see a decrease in training time compared to the naive approach at the beginning of the notebook.
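If you want to quantify the speedup yourself, wrapping each training run in a small timer is enough. This is a stdlib sketch; the commented-out calls show the intended (hypothetical) usage with the pipelines above:

```python
import time

def timed(fn, *args, **kwargs):
    # Run fn and return its result together with the wall-clock duration.
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Hypothetical usage with the models in this notebook:
#   _, naive_seconds = timed(model.fit, naive_dataset, epochs=5)
#   _, tuned_seconds = timed(model.fit, train_dataset, epochs=5)
result, elapsed = timed(sum, range(1_000_000))
print(f"sum took {elapsed:.4f} s")
```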
```
model = create_model()
model.fit(train_dataset, epochs=5)
```
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import ktrain
from ktrain import vision as vis
```
# Image Regression: Age Prediction
In this example, we will build a model that predicts the age of a person given the person's photo.
## Download the Dataset
From [this blog post](https://medium.com/analytics-vidhya/fastai-image-regression-age-prediction-based-on-image-68294d34f2ed) by Abhik Jha, we see that there are several face datasets with age annotations from which to choose:
- [UTK Face Dataset](http://aicip.eecs.utk.edu/wiki/UTKFace)
- [IMDb-Wiki Face Dataset](https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/)
- [Appa Real Face Dataset](http://chalearnlap.cvc.uab.es/dataset/26/description/)
In this notebook, we use the UTK Face Dataset. Download the data from [http://aicip.eecs.utk.edu/wiki/UTKFace](http://aicip.eecs.utk.edu/wiki/UTKFace) and extract each of the three zip files to the same folder.
## STEP 1: Load and Preprocess the Dataset
The target **age** attribute in this dataset is encoded in the filename. More specifically, filenames are of the form:
```
[age]_[gender]_[race]_[date&time].jpg
```
where
- `[age]` is an integer from 0 to 116, indicating the age
- `[gender]` is either 0 (male) or 1 (female)
- `[race]` is an integer from 0 to 4, denoting White, Black, Asian, Indian, and Others (like Hispanic, Latino, Middle Eastern).
- `[date&time]` is in the format of yyyymmddHHMMSSFFF, showing the date and time an image was collected to UTKFace
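As an aside, the full naming scheme can be unpacked with a single regular expression. The helper below is hypothetical (the notebook itself only extracts the age, as shown next), but it illustrates the filename format:

```python
import re

# Hypothetical helper, not part of the notebook's pipeline:
# splits a UTKFace filename into its annotated fields.
FIELDS = re.compile(r'(\d+)_(\d+)_(\d+)_(\d+)\.jpg$')

def parse_utkface_name(fname):
    m = FIELDS.search(fname)
    if m is None:
        raise ValueError(f"unexpected filename: {fname}")
    age, gender, race, stamp = m.groups()
    return {'age': int(age), 'gender': int(gender),
            'race': int(race), 'timestamp': stamp}

print(parse_utkface_name('40_1_0_20170117134417715.jpg'))
```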
We are only interested in extracting the age for use as a numerical target. Let us first construct a regular expression to extract the age from the filename. Then, we can supply the pattern to `images_from_fname` to load and preprocess the dataset. Supplying `is_regression=True` is important here, as it tells *ktrain* that the integer targets representing age should be treated as a numerical target, as opposed to a class label.
```
# build a regular expression that extracts the age from file name
PATTERN = r'([^/]+)_\d+_\d+_\d+.jpg$'
import re
p = re.compile(PATTERN)
r = p.search('/hello/world/40_1_0_20170117134417715.jpg')
print("Extracted Age: %s" % int(r.group(1)))
```
Set `DATADIR` to the folder where you extracted all the images.
```
DATADIR='data/age_estimation/images'
data_aug = vis.get_data_aug(horizontal_flip=True)
(train_data, val_data, preproc) = vis.images_from_fname(DATADIR, pattern=PATTERN, data_aug=data_aug,
                                                        is_regression=True, random_state=42)
```
From the warnings above, we see that a few filenames in the dataset are constructed incorrectly. For instance, the first filename incorrectly has two consecutive underscore characters after the age attribute. Although the age attribute appears to be intact despite the errors and we could modify the regular expression to process these files, we will ignore them in this demonstration.
## STEP 2: Create a Model and Wrap in `Learner`
We use the `image_regression_model` function to create a `ResNet50` model. By default, the model freezes all layers except the final randomly-initialized dense layer.
```
vis.print_image_regression_models()
model = vis.image_regression_model('pretrained_resnet50', train_data, val_data)
# wrap model and data in Learner object
learner = ktrain.get_learner(model=model, train_data=train_data, val_data=val_data,
                             workers=8, use_multiprocessing=False, batch_size=64)
```
## STEP 3: Estimate Learning Rate
We will select a learning rate associated with falling loss from the plot displayed.
```
learner.lr_find(max_epochs=2)
learner.lr_plot()
```
From the plot above, we choose a learning rate of `1e-4`.
## STEP 4: Train Model
We will begin by training the model for 3 epochs using a [1cycle](https://arxiv.org/abs/1803.09820) learning rate policy.
```
learner.fit_onecycle(1e-4, 3)
learner.freeze(15) # unfreeze all but the first 15 layers
learner.fit_onecycle(1e-4, 2)
```
After only 5 epochs, our validation MAE is 6.57. That is, on average, our age predictions are off by about 6 and a half years. Since it does not appear that we are overfitting yet, we could train for longer to improve further, but we will stop here for now.
## Make Predictions
Let's make predictions on individual photos. We could either randomly select from the entire image directory or select just from the validation images.
```
# get a Predictor instance that wraps model and Preprocessor object
predictor = ktrain.get_predictor(learner.model, preproc)
# get some random file names of images
!!ls {DATADIR} | sort -R | head -10
# how to get validation filepaths
val_data.filenames[10:20]
def show_prediction(fname):
    fname = DATADIR+'/'+fname
    predicted = round(predictor.predict_filename(fname)[0])
    actual = int(p.search(fname).group(1))
    vis.show_image(fname)
    print('predicted:%s | actual: %s' % (predicted, actual))
show_prediction('25_1_3_20170104231305129.jpg')
show_prediction('10_0_0_20170110225013755.jpg')
show_prediction('71_0_0_20170111204446510.jpg')
predictor.save('/tmp/age_predictor')
reloaded_predictor = ktrain.load_predictor('/tmp/age_predictor')
reloaded_predictor.predict_filename(DATADIR+'/60_0_3_20170119205726487.jpg')
vis.show_image(DATADIR+'/60_0_3_20170119205726487.jpg')
```
## Information
Alternative libraries:
https://github.com/studioimaginaire/phue (using this one)
https://github.com/issackelly/python-hue
https://github.com/sontek/bulby
Executing the notebook:
https://nbconvert.readthedocs.io/en/latest/execute_api.html
```
from phue import Bridge
from datetime import datetime
import dateutil.parser
import time
bridge = Bridge('192.168.1.2')
bridge.connect()
# Protocol
# { "9:30 - 10:00" : taskObjectCallable }
def InterpretSchedule(entry: str) -> "('room', start time, end time)":
    """String specification to meaningful tuple.
    Input examples:
        "10:00 - 11:00" -> (start time, end time)
        "10:00"         -> (start time, end of day)
    """
    def process(x):
        return dateutil.parser.parse(x)
    times = [process(x.strip()) for x in entry.split('-')]
    if len(times) == 1:
        return (times[0], dateutil.parser.parse("23:59"))
    else:
        return (times[0], times[1])
def ProcessScheduleEntry(entry, task):
    start, end = InterpretSchedule(entry)
    return (start, end, task)
def FormQueue(entries):
    queue = [ProcessScheduleEntry(k, v) for k, v in entries]
    return sorted(queue)
# Flow Control
def ProcessTasks(taskQueue, api, bridge):
    i = 0
    while i < len(taskQueue):
        start_t, end_t, task = taskQueue[i]
        current_t = datetime.now()
        if current_t > end_t:  # missed it
            print("Task deleted: ", task, "at", datetime.now().isoformat())
            del taskQueue[i]
            continue
        if current_t >= start_t:  # time for the task
            if task(api, bridge):
                print("Task complete: ", task, "at", datetime.now().isoformat())
                del taskQueue[i]
                continue
            else:  # not successful, leave it in the queue
                #print("could not finish task", task)
                i = i + 1
                continue
        if current_t < start_t:  # sorted list, so no need to dive deeper
            break
# Light control methods
def GetLightId(api, name) -> int:
    for _id, light in api['lights'].items():
        if light['name'] == name:
            return int(_id)
    raise Exception("Light not found with name: " + name)
def GetLightAPI(api, name):
    return api['lights'][str(GetLightId(api, name))]
def IsLightOn(api, name) -> bool:
    return GetLightAPI(api, name)['state']['on']
def SetColor(bridge, name, color, transitiontime):
    "Color is mapped from 0 (red) to 1 (blue)."
    assert color >= 0.0 and color <= 1.0
    _min, _max = sorted(GetLightAPI(bridge.get_api(), name)['capabilities']['control']['ct'].values())
    color_hue_notation = int(_max - color*(_max - _min))
    bridge.set_light(name, 'ct', color_hue_notation, transitiontime=transitiontime)
def HueColor(color):
    "Hardcoded color translation."
    return int(454 - color*(454 - 153))
def SetBrightness(bridge, name, brightness, transitiontime):
    assert brightness >= 0.0 and brightness <= 1.0
    brightness_hue_notation = int(brightness*(255 - 0))
    bridge.set_light(name, 'bri', brightness_hue_notation, transitiontime=transitiontime)
def SetPower(bridge, name, on: bool):
    light_id = GetLightId(bridge.get_api(), name)
    bridge.set_light(light_id, 'on', on, transitiontime=0)
class Lights:
    """Light transition class - for ambience setting."""
    def __init__(self, name, color=None, brightness=None, transitiontime=0):
        self.name = name
        self.color = color
        self.brightness = brightness
        self.transitiontime = int(transitiontime * 10)  # Hue takes deciseconds, we want seconds
    def __call__(self, api, bridge):
        if not IsLightOn(api, self.name):
            return False
        command = {'transitiontime': self.transitiontime, 'on': True}
        if self.color is not None:
            command['ct'] = HueColor(self.color)
        if self.brightness is not None:
            brightness_hue_notation = int(self.brightness*(255 - 0))
            command['bri'] = brightness_hue_notation
        bridge.set_light(self.name, command)
        return True
    def __repr__(self):
        return "Lights("+self.name+", color="+str(self.color)+", brightness="+str(self.brightness)+")"
    def __lt__(self, other):
        return str(self) < str(other)
class WakeupLights:
    """Wakeup morning routine for the lights.
    Slowly increase the brightness in the morning to wake up the user.
    """
    def __init__(self, name, transitiontime):
        self.name = name
        self.transitiontime = int(transitiontime * 10)  # Hue takes deciseconds, we want seconds
    def __call__(self, api, bridge):
        if IsLightOn(api, self.name):
            # only do something if the light is off
            return True
        light_id = GetLightId(api, self.name)
        # direct access to reduce time lag between calls
        command = {'transitiontime': 0, 'on': True, 'bri': 0, 'ct': HueColor(0.8)}
        bridge.set_light(self.name, command)
        # wake up slowly
        SetBrightness(bridge, self.name, 1.0, self.transitiontime)
        return True
    def __repr__(self):
        return "WakeupLights("+self.name+")"
    def __lt__(self, other):
        return str(self) < str(other)
class Switch:
    """Switch transition class - for ambience setting."""
    def __init__(self, name, on):
        self.name = name
        self.on = on
    def __call__(self, api, bridge):
        command = {'transitiontime': 0, 'on': self.on}
        bridge.set_light(self.name, command)
        return True
    def __repr__(self):
        return "Switch("+self.name+", on="+str(self.on)+")"
    def __lt__(self, other):
        return str(self) < str(other)
def Weekend():
    return [
        # morning
        #["07:00 - 08:00", Switch("On/Off plug 1", on=True)],  # Christmas lights
        ["07:00 - 18:00", Lights("Living room", color=0.8, brightness=1.0, transitiontime=10)],
        ["07:00 - 18:00", Lights("Hall way", color=0.8, brightness=1.0, transitiontime=10)],
        ["08:00 - 18:00", Lights("Bedroom", color=0.8, brightness=1.0, transitiontime=10)],
        #["09:00 - 10:00", Switch("On/Off plug 1", on=False)],  # Christmas lights
        # early evening
        ["18:00 - 20:00", Lights("Living room", color=0.6, brightness=1.0, transitiontime=10)],
        ["18:00 - 20:00", Lights("Hall way", color=0.6, brightness=1.0, transitiontime=10)],
        ["18:00 - 20:00", Lights("Bedroom", color=0.6, brightness=1.0, transitiontime=10)],
        #["18:00 - 20:00", Switch("On/Off plug 1", on=True)],  # Christmas lights
        # evening
        ["20:00 - 22:00", Lights("Living room", color=0.3, transitiontime=60)],
        ["20:00 - 22:00", Lights("Hall way", color=0.3, transitiontime=60)],
        ["20:00 - 22:00", Lights("Bedroom", color=0.3, transitiontime=60)],
        # late evening
        #["22:00 - 23:30", Switch("On/Off plug 1", on=False)],  # Christmas lights
        ["22:00 - 23:30", Lights("Living room", color=0.0, brightness=0.8, transitiontime=2*60)],
        ["22:00 - 23:30", Lights("Hall way", color=0.0, brightness=0.8, transitiontime=2*60)],
        ["22:00 - 23:30", Lights("Bedroom", color=0.0, brightness=0.8, transitiontime=2*60)],
        # night
        ["23:30 - 23:59", Lights("Living room", color=0.0, brightness=0.5, transitiontime=2*60)],
        ["23:30 - 23:59", Lights("Hall way", color=0.0, brightness=0.5, transitiontime=2*60)],
        ["23:30 - 23:59", Lights("Bedroom", color=0.0, brightness=0.5, transitiontime=2*60)],
    ]
def Weekday():
    return [
        # ["07:00 - 08:00", WakeupLights("Bedroom", transitiontime=int(15*60))],
    ] + Weekend()
def IsNewDay() -> bool:
    return datetime.now().hour < 2
# Main
queue = []
i = 0
if not queue:  # and (IsNewDay() or i==0):  # add time check
    is_weekend = datetime.today().weekday() >= 5
    if is_weekend:
        queue = FormQueue(Weekend())
        print("Filled the queue with Weekend() at", datetime.now().isoformat())
    else:
        queue = FormQueue(Weekday())
        print("Filled the queue with Weekday() at", datetime.now().isoformat())
while queue:
    try:
        api = bridge.get_api()
        ProcessTasks(queue, api, bridge)
    except Exception as e:
        print("An exception occurred on", datetime.now().isoformat(), ":", e)
    i = i + 1
    time.sleep(5)
```
##### Copyright 2019 Google LLC
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Graph regularization for sentiment classification using synthesized graphs
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_lstm_imdb"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/neural_structured_learning/tutorials/graph_keras_lstm_imdb.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
</table>
## Overview
This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary* classification, an important and widely applicable kind of machine learning problem.
We will demonstrate the use of graph regularization in this notebook by building a graph from the given input. The general recipe for building a graph-regularized model using the Neural Structured Learning (NSL) framework when the input does not contain an explicit graph is as follows:
1. Create embeddings for each text sample in the input. This can be done using pre-trained models such as [word2vec](https://arxiv.org/pdf/1310.4546.pdf), [Swivel](https://arxiv.org/abs/1602.02215), [BERT](https://arxiv.org/abs/1810.04805), etc.
2. Build a graph based on these embeddings by using a similarity metric such as the 'L2' distance, 'cosine' distance, etc. Nodes in the graph correspond to samples, and edges in the graph correspond to the similarity between pairs of samples.
3. Generate training data from the above synthesized graph and sample features. The resulting training data will contain neighbor features in addition to the original node features.
4. Create a neural network as a base model using the Keras sequential, functional, or subclass API.
5. Wrap the base model with the GraphRegularization wrapper class, which is provided by the NSL framework, to create a new graph Keras model. This new model will include a graph regularization loss as a regularization term in its training objective.
6. Train and evaluate the graph Keras model.
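To build intuition for step 2 of this recipe, here is a toy, plain-Python sketch of constructing a graph from embeddings with cosine similarity. This is for illustration only; for real datasets NSL provides graph-building utilities, and the embedding values and threshold below are made up:

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def build_graph(embeddings, threshold=0.8):
    # Keep an edge (i, j, sim) whenever two sample embeddings are similar enough.
    edges = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            sim = cosine_similarity(embeddings[i], embeddings[j])
            if sim >= threshold:
                edges.append((i, j, sim))
    return edges

toy_embeddings = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(build_graph(toy_embeddings))  # only samples 0 and 1 are near-duplicates
```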
**Note:** We expect that it will take readers about 1 hour to go through this tutorial.
## Requirements
1. Install the Neural Structured Learning package.
2. Install tensorflow-hub.
```
!pip install --quiet neural-structured-learning
!pip install --quiet tensorflow-hub
```
## Dependencies and imports
```
import matplotlib.pyplot as plt
import numpy as np
import neural_structured_learning as nsl
import tensorflow as tf
import tensorflow_hub as hub
# Resets notebook state
tf.keras.backend.clear_session()
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print(
    "GPU is",
    "available" if tf.config.list_physical_devices("GPU") else "NOT AVAILABLE")
```
## IMDB dataset
The [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.
In this tutorial, we will use a preprocessed version of the IMDB dataset.
### Download the preprocessed IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed such that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset (or uses a cached copy if it has already been downloaded):
```
imdb = tf.keras.datasets.imdb
(pp_train_data, pp_train_labels), (pp_test_data, pp_test_labels) = (
imdb.load_data(num_words=10000))
```
The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the vocabulary manageable.
### Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer value of either 0 or 1, where 0 is a negative review and 1 is a positive review.
```
print('Training entries: {}, labels: {}'.format(
len(pp_train_data), len(pp_train_labels)))
training_samples_count = len(pp_train_data)
```
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
```
print(pp_train_data[0])
```
Movie reviews may be different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
```
len(pp_train_data[0]), len(pp_train_data[1])
```
### Convert the integers back to words
It may be useful to know how to convert integers back to the corresponding text. Here, we'll create a helper function to query a dictionary object that contains the integer-to-string mapping:
```
def build_reverse_word_index():
    # A dictionary mapping words to an integer index
    word_index = imdb.get_word_index()
    # The first indices are reserved
    word_index = {k: (v + 3) for k, v in word_index.items()}
    word_index['<PAD>'] = 0
    word_index['<START>'] = 1
    word_index['<UNK>'] = 2  # unknown
    word_index['<UNUSED>'] = 3
    return dict((value, key) for (key, value) in word_index.items())
reverse_word_index = build_reverse_word_index()
def decode_review(text):
    return ' '.join([reverse_word_index.get(i, '?') for i in text])
```
Now we can use the `decode_review` function to display the text of the first review:
```
decode_review(pp_train_data[0])
```
## Graph construction
Graph construction involves creating embeddings for text samples and then using a similarity function to compare the embeddings.
Before proceeding further, we first create a directory to store the artifacts created by this tutorial:
```
!mkdir -p /tmp/imdb
```
### Create sample embeddings
We will use pretrained Swivel embeddings to create embeddings in the `tf.train.Example` format for each sample in the input. We will store the resulting embeddings in the `TFRecord` format, along with an additional feature that represents the ID of each sample. This is important and will allow us to match sample embeddings with corresponding nodes in the graph later.
```
pretrained_embedding = 'https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1'
hub_layer = hub.KerasLayer(
pretrained_embedding, input_shape=[], dtype=tf.string, trainable=True)
def _int64_feature(value):
"""Returns int64 tf.train.Feature."""
return tf.train.Feature(int64_list=tf.train.Int64List(value=value.tolist()))
def _bytes_feature(value):
"""Returns bytes tf.train.Feature."""
return tf.train.Feature(
bytes_list=tf.train.BytesList(value=[value.encode('utf-8')]))
def _float_feature(value):
"""Returns float tf.train.Feature."""
return tf.train.Feature(float_list=tf.train.FloatList(value=value.tolist()))
def create_embedding_example(word_vector, record_id):
"""Create tf.Example containing the sample's embedding and its ID."""
text = decode_review(word_vector)
# Shape = [batch_size,].
sentence_embedding = hub_layer(tf.reshape(text, shape=[-1,]))
# Flatten the sentence embedding back to 1-D.
sentence_embedding = tf.reshape(sentence_embedding, shape=[-1])
features = {
'id': _bytes_feature(str(record_id)),
'embedding': _float_feature(sentence_embedding.numpy())
}
return tf.train.Example(features=tf.train.Features(feature=features))
def create_embeddings(word_vectors, output_path, starting_record_id):
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(output_path) as writer:
for word_vector in word_vectors:
example = create_embedding_example(word_vector, record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features containing embeddings for training data in
# TFRecord format.
create_embeddings(pp_train_data, '/tmp/imdb/embeddings.tfr', 0)
```
### Build a graph
Now that we have the sample embeddings, we will use them to build a similarity graph: nodes in this graph will correspond to samples, and edges in this graph will correspond to similarity between pairs of nodes.
Neural Structured Learning provides a graph-building library to build a graph based on sample embeddings. It uses [**cosine similarity**](https://en.wikipedia.org/wiki/Cosine_similarity) as the similarity measure to compare embeddings and build edges between them. It also allows us to specify a similarity threshold, which can be used to discard dissimilar edges from the final graph. In this example, using 0.99 as the similarity threshold and 12345 as the random seed, we end up with a graph that has 429,415 bi-directional edges. Here we're using the graph builder's support for [locality-sensitive hashing (LSH)](https://en.wikipedia.org/wiki/Locality-sensitive_hashing) to speed up graph building. For details on using the graph builder's LSH support, see the [`build_graph_from_config`](https://www.tensorflow.org/neural_structured_learning/api_docs/python/nsl/tools/build_graph_from_config) API documentation.
```
graph_builder_config = nsl.configs.GraphBuilderConfig(
similarity_threshold=0.99, lsh_splits=32, lsh_rounds=15, random_seed=12345)
nsl.tools.build_graph_from_config(['/tmp/imdb/embeddings.tfr'],
'/tmp/imdb/graph_99.tsv',
graph_builder_config)
```
Each bi-directional edge is represented by two directed edges in the output TSV file, so the file contains 429,415 * 2 = 858,830 total lines:
```
!wc -l /tmp/imdb/graph_99.tsv
```
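Each line of the generated TSV encodes one directed edge, laid out as `source_id<TAB>target_id<TAB>edge_weight` (this layout is an assumption based on typical `nsl.tools` output; check the first lines of your own `graph_99.tsv`). A minimal sketch of why one undirected edge contributes two lines:

```python
# Hypothetical illustration: an undirected edge between samples "0" and "12"
# with cosine similarity 0.993 is written to disk as two directed TSV lines.
def undirected_edge_to_tsv_lines(src, dst, weight):
    """Expand one undirected edge into the two directed lines stored on disk."""
    return [f"{src}\t{dst}\t{weight}", f"{dst}\t{src}\t{weight}"]

lines = undirected_edge_to_tsv_lines("0", "12", 0.993)
print(lines)  # two directed edges per undirected edge

# This doubling is why the file has 429415 * 2 = 858830 lines.
assert 429415 * 2 == 858830
```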
**Note:** Graph quality and, by extension, embedding quality are very important for graph regularization. While we have used Swivel embeddings in this notebook, using BERT embeddings, for instance, will likely capture review semantics more accurately. We encourage users to use embeddings of their choice as appropriate to their needs.
## Sample features
We create sample features for our problem using the `tf.train.Example` format and persist them in the `TFRecord` format. Each sample will include the following three features:
1. **id**: The node ID of the sample.
2. **words**: An int64 list containing word IDs.
3. **label**: A singleton int64 identifying the target class of the review.
```
def create_example(word_vector, label, record_id):
"""Create tf.Example containing the sample's word vector, label, and ID."""
features = {
'id': _bytes_feature(str(record_id)),
'words': _int64_feature(np.asarray(word_vector)),
'label': _int64_feature(np.asarray([label])),
}
return tf.train.Example(features=tf.train.Features(feature=features))
def create_records(word_vectors, labels, record_path, starting_record_id):
record_id = int(starting_record_id)
with tf.io.TFRecordWriter(record_path) as writer:
for word_vector, label in zip(word_vectors, labels):
example = create_example(word_vector, label, record_id)
record_id = record_id + 1
writer.write(example.SerializeToString())
return record_id
# Persist TF.Example features (word vectors and labels) for training and test
# data in TFRecord format.
next_record_id = create_records(pp_train_data, pp_train_labels,
'/tmp/imdb/train_data.tfr', 0)
create_records(pp_test_data, pp_test_labels, '/tmp/imdb/test_data.tfr',
next_record_id)
```
## Augment training data with graph neighbors
Since we have the sample features and the synthesized graph, we can generate the augmented training data for Neural Structured Learning. The NSL framework provides a library to combine the graph and the sample features to produce the final training data for graph regularization. The resulting training data will include the original sample features as well as the features of their corresponding neighbors.
In this tutorial, we consider undirected edges and use a maximum of 3 neighbors per sample to augment the training data with graph neighbors.
```
nsl.tools.pack_nbrs(
'/tmp/imdb/train_data.tfr',
'',
'/tmp/imdb/graph_99.tsv',
'/tmp/imdb/nsl_train_data.tfr',
add_undirected_edges=True,
max_nbrs=3)
```
## Base model
We are now ready to build a base model without graph regularization. In order to build this model, we can either use the embeddings that were used in building the graph, or we can learn new embeddings jointly along with the classification task. For the purpose of this notebook, we will do the latter.
### Global variables
```
NBR_FEATURE_PREFIX = 'NL_nbr_'
NBR_WEIGHT_SUFFIX = '_weight'
```
### Hyperparameters
We will use an instance of `HParams` to hold various hyperparameters and constants used for training and evaluation. We briefly describe each of them below:
- **num_classes**: There are 2 classes -- *positive* and *negative*.
- **max_seq_length**: The maximum number of words considered from each movie review in this example.
- **vocab_size**: The size of the vocabulary considered for this example.
- **distance_type**: The distance metric used to regularize the sample with its neighbors.
- **graph_regularization_multiplier**: Controls the relative weight of the graph regularization term in the overall loss function.
- **num_neighbors**: The number of neighbors used for graph regularization. This value has to be less than or equal to the `max_nbrs` argument used above when invoking `nsl.tools.pack_nbrs`.
- **num_fc_units**: The number of units in the fully connected layer of the neural network.
- **train_epochs**: The number of training epochs.
- **batch_size**: The batch size used for training and evaluation.
- **eval_steps**: The number of batches to process before deeming evaluation complete. If set to `None`, all instances in the test set are evaluated.
```
class HParams(object):
"""Hyperparameters used for training."""
def __init__(self):
### dataset parameters
self.num_classes = 2
self.max_seq_length = 256
self.vocab_size = 10000
### neural graph learning parameters
self.distance_type = nsl.configs.DistanceType.L2
self.graph_regularization_multiplier = 0.1
self.num_neighbors = 2
### model architecture
self.num_embedding_dims = 16
self.num_lstm_dims = 64
self.num_fc_units = 64
### training parameters
self.train_epochs = 10
self.batch_size = 128
### eval parameters
self.eval_steps = None # All instances in the test set are evaluated.
HPARAMS = HParams()
```
### Prepare the data
The reviews -- arrays of integers -- must be converted to tensors before being fed into the neural network. This conversion can be done in a couple of ways:
- Convert the arrays into vectors of `0`s and `1`s indicating word occurrence, similar to a one-hot encoding. For example, the sequence `[3, 5]` would become a `10000`-dimensional vector that is all zeros except for indices `3` and `5`, which are ones. Then, make this the first layer in the network -- a `Dense` layer -- that can handle floating-point vector data. This approach is memory intensive, though, requiring a matrix of size `num_words * num_reviews`.
- Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in the network.
In this tutorial, we will use the second approach.
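The first (multi-hot) approach described above can be sketched with NumPy as follows. This is an illustration only -- the tutorial itself uses the padding approach instead:

```python
import numpy as np

def multi_hot_encode(sequences, dimension=10000):
    """Turn lists of word indices into 0/1 word-occurrence vectors."""
    results = np.zeros((len(sequences), dimension), dtype=np.float32)
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0  # set the positions of words that occur
    return results

encoded = multi_hot_encode([[3, 5]], dimension=10000)
print(encoded.shape)                 # (1, 10000)
print(encoded[0, 3], encoded[0, 5])  # 1.0 1.0 -- only indices 3 and 5 are set
```

Note the memory cost: encoding all 25,000 training reviews this way needs a `25000 x 10000` float matrix, which is why the padding approach is preferred here.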
Since the movie reviews must be the same length, we will use the `pad_sequence` function defined below to standardize the lengths.
```
def make_dataset(file_path, training=False):
"""Creates a `tf.data.TFRecordDataset`.
Args:
file_path: Name of the file in the `.tfrecord` format containing
`tf.train.Example` objects.
training: Boolean indicating if we are in training mode.
Returns:
An instance of `tf.data.TFRecordDataset` containing the `tf.train.Example`
objects.
"""
def pad_sequence(sequence, max_seq_length):
"""Pads the input sequence (a `tf.SparseTensor`) to `max_seq_length`."""
pad_size = tf.maximum([0], max_seq_length - tf.shape(sequence)[0])
padded = tf.concat(
[sequence.values,
tf.fill((pad_size), tf.cast(0, sequence.dtype))],
axis=0)
# The input sequence may be larger than max_seq_length. Truncate down if
# necessary.
return tf.slice(padded, [0], [max_seq_length])
def parse_example(example_proto):
"""Extracts relevant fields from the `example_proto`.
Args:
example_proto: An instance of `tf.train.Example`.
Returns:
A pair whose first value is a dictionary containing relevant features
and whose second value contains the ground truth labels.
"""
# The 'words' feature is a variable length word ID vector.
feature_spec = {
'words': tf.io.VarLenFeature(tf.int64),
'label': tf.io.FixedLenFeature((), tf.int64, default_value=-1),
}
# We also extract corresponding neighbor features in a similar manner to
# the features above during training.
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
nbr_weight_key = '{}{}{}'.format(NBR_FEATURE_PREFIX, i,
NBR_WEIGHT_SUFFIX)
feature_spec[nbr_feature_key] = tf.io.VarLenFeature(tf.int64)
# We assign a default value of 0.0 for the neighbor weight so that
# graph regularization is done on samples based on their exact number
# of neighbors. In other words, non-existent neighbors are discounted.
feature_spec[nbr_weight_key] = tf.io.FixedLenFeature(
[1], tf.float32, default_value=tf.constant([0.0]))
features = tf.io.parse_single_example(example_proto, feature_spec)
# Since the 'words' feature is a variable length word vector, we pad it to a
# constant maximum length based on HPARAMS.max_seq_length
features['words'] = pad_sequence(features['words'], HPARAMS.max_seq_length)
if training:
for i in range(HPARAMS.num_neighbors):
nbr_feature_key = '{}{}_{}'.format(NBR_FEATURE_PREFIX, i, 'words')
features[nbr_feature_key] = pad_sequence(features[nbr_feature_key],
HPARAMS.max_seq_length)
labels = features.pop('label')
return features, labels
dataset = tf.data.TFRecordDataset([file_path])
if training:
dataset = dataset.shuffle(10000)
dataset = dataset.map(parse_example)
dataset = dataset.batch(HPARAMS.batch_size)
return dataset
train_dataset = make_dataset('/tmp/imdb/nsl_train_data.tfr', True)
test_dataset = make_dataset('/tmp/imdb/test_data.tfr')
```
### Build the model
A neural network is created by stacking layers -- this requires two main architectural decisions:
- How many layers to use in the model?
- How many *hidden units* to use for each layer?
In this example, the input data consists of arrays of word indices. The labels to predict are either 0 or 1.
We will use a bi-directional LSTM as our base model in this tutorial.
```
# This function exists as an alternative to the bi-LSTM model used in this
# notebook.
def make_feed_forward_model():
"""Builds a simple 2 layer feed forward neural network."""
inputs = tf.keras.Input(
shape=(HPARAMS.max_seq_length,), dtype='int64', name='words')
embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size, 16)(inputs)
pooling_layer = tf.keras.layers.GlobalAveragePooling1D()(embedding_layer)
dense_layer = tf.keras.layers.Dense(16, activation='relu')(pooling_layer)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(dense_layer)
return tf.keras.Model(inputs=inputs, outputs=outputs)
def make_bilstm_model():
"""Builds a bi-directional LSTM model."""
inputs = tf.keras.Input(
shape=(HPARAMS.max_seq_length,), dtype='int64', name='words')
embedding_layer = tf.keras.layers.Embedding(HPARAMS.vocab_size,
HPARAMS.num_embedding_dims)(
inputs)
lstm_layer = tf.keras.layers.Bidirectional(
tf.keras.layers.LSTM(HPARAMS.num_lstm_dims))(
embedding_layer)
dense_layer = tf.keras.layers.Dense(
HPARAMS.num_fc_units, activation='relu')(
lstm_layer)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(dense_layer)
return tf.keras.Model(inputs=inputs, outputs=outputs)
# Feel free to use an architecture of your choice.
model = make_bilstm_model()
model.summary()
```
The layers are stacked sequentially to build the classifier:
1. The first layer is an `Input` layer, which takes the integer-encoded vocabulary.
2. The next layer is an `Embedding` layer, which takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array; the resulting dimensions are `(batch, sequence, embedding)`.
3. Next, a bidirectional LSTM layer returns a fixed-length output vector for each example.
4. This fixed-length output vector is piped through a fully connected (`Dense`) layer with 64 hidden units.
5. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability or confidence level.
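The shape bookkeeping in the layer stack above can be traced without running TensorFlow. This is a hedged sketch using the `HParams` values from this notebook; the bi-LSTM output dimension is `2 * num_lstm_dims` because the forward and backward passes are concatenated:

```python
batch = 128  # HPARAMS.batch_size
seq = 256    # HPARAMS.max_seq_length
emb = 16     # HPARAMS.num_embedding_dims
lstm = 64    # HPARAMS.num_lstm_dims
fc = 64      # HPARAMS.num_fc_units

# Expected output shape after each layer of the bi-LSTM classifier.
shapes = [
    ('Input', (batch, seq)),
    ('Embedding', (batch, seq, emb)),           # adds the embedding dimension
    ('Bidirectional LSTM', (batch, 2 * lstm)),  # fixed-length vector per example
    ('Dense (relu)', (batch, fc)),
    ('Dense (sigmoid)', (batch, 1)),            # one probability per review
]
for name, shape in shapes:
    print(f'{name:20s} -> {shape}')
```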
### Hidden units
The above model has two intermediate or "hidden" layers between the input and output, excluding the `Embedding` layer. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer -- in other words, the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space) and/or more layers, the network can learn more complex representations. However, this makes the network more computationally expensive and may lead to learning unwanted patterns -- patterns that improve performance on training data but not on the test data. This is called *overfitting*.
### Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function.
```
model.compile(
optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
### Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart a fraction of the original training data. (Why not use the testing set now? Our goal is to develop and tune the model using only the training data, then use the test data just once to evaluate its accuracy.)
In this tutorial, we take roughly 10% of the initial training samples (10% of 25000) as labeled data for training and the remainder as validation data. Since the initial train/test split was 50/50 (25000 samples each), the effective train/validation/test split we now have is 5/45/50.
Note that `train_dataset` has already been batched and shuffled.
```
validation_fraction = 0.9
validation_size = int(validation_fraction *
int(training_samples_count / HPARAMS.batch_size))
print(validation_size)
validation_dataset = train_dataset.take(validation_size)
train_dataset = train_dataset.skip(validation_size)
```
### Train the model
Train the model in mini-batches. While training, monitor the model's loss and accuracy on the validation set:
```
history = model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
verbose=1)
```
### Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number representing the error; lower values are better) and accuracy.
```
results = model.evaluate(test_dataset, steps=HPARAMS.eval_steps)
print(results)
```
### Create a graph of accuracy/loss over time
`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
```
history_dict = history.history
history_dict.keys()
```
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
```
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-b0" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
```
Notice that the training loss *decreases* with each epoch and the training accuracy *increases* with each epoch. This is expected when using gradient descent optimization -- it should minimize the desired quantity on every iteration.
## Graph regularization
We are now ready to try graph regularization using the base model built above. We will use the `GraphRegularization` wrapper class provided by the Neural Structured Learning framework to wrap the base (bi-LSTM) model to include graph regularization. The rest of the steps for training and evaluating the graph-regularized model are similar to those of the base model.
### Create a graph-regularized model
To assess the incremental benefit of graph regularization, we create a new base model instance. This is because `model` has already been trained for a few iterations, and reusing this trained model to create the graph-regularized model would not be a fair comparison for `model`.
```
# Build a new base LSTM model.
base_reg_model = make_bilstm_model()
# Wrap the base model with graph regularization.
graph_reg_config = nsl.configs.make_graph_reg_config(
max_neighbors=HPARAMS.num_neighbors,
multiplier=HPARAMS.graph_regularization_multiplier,
distance_type=HPARAMS.distance_type,
sum_over_axis=-1)
graph_reg_model = nsl.keras.GraphRegularization(base_reg_model,
graph_reg_config)
graph_reg_model.compile(
optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```
### Train the model
```
graph_reg_history = graph_reg_model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=HPARAMS.train_epochs,
verbose=1)
```
### Evaluate the model
```
graph_reg_results = graph_reg_model.evaluate(test_dataset, steps=HPARAMS.eval_steps)
print(graph_reg_results)
```
### Create a graph of accuracy/loss over time
```
graph_reg_history_dict = graph_reg_history.history
graph_reg_history_dict.keys()
```
There are five entries in total in the dictionary: training loss, training accuracy, training graph loss, validation loss, and validation accuracy. We can plot them all together for comparison. Note that the graph loss is only computed during training.
```
acc = graph_reg_history_dict['accuracy']
val_acc = graph_reg_history_dict['val_accuracy']
loss = graph_reg_history_dict['loss']
graph_loss = graph_reg_history_dict['scaled_graph_loss']
val_loss = graph_reg_history_dict['val_loss']
epochs = range(1, len(acc) + 1)
plt.clf() # clear figure
# "-r^" is for solid red line with triangle markers.
plt.plot(epochs, loss, '-r^', label='Training loss')
# "-gD" is for solid green line with diamond markers.
plt.plot(epochs, graph_loss, '-gD', label='Training graph loss')
# "-b0" is for solid blue line with circle markers.
plt.plot(epochs, val_loss, '-bo', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, '-r^', label='Training acc')
plt.plot(epochs, val_acc, '-bo', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='best')
plt.show()
```
## The power of semi-supervised learning
Semi-supervised learning -- and more specifically, graph regularization in the context of this tutorial -- can be really powerful when the amount of training data is small. The lack of training data is compensated for by leveraging similarity among the training samples, which is not possible in traditional supervised learning.
We define the ***supervision ratio*** as the ratio of training samples to the total number of samples, which includes training, validation, and test samples. In this notebook, we have used a supervision ratio of 0.05 (i.e., 5% of the labeled data) for training both the base model and the graph-regularized model. We illustrate the impact of the supervision ratio on model accuracy in the cell below.
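The 0.05 supervision ratio used above can be reproduced from the 5/45/50 split of the 50,000 reviews (a small sanity check, not part of the original notebook):

```python
# 5/45/50 split of the 50,000 IMDB reviews (train / validation / test)
train_samples = 2500        # 5% of 50,000 used as labeled training data
validation_samples = 22500  # 45%
test_samples = 25000        # 50%

total = train_samples + validation_samples + test_samples
supervision_ratio = train_samples / total
print(supervision_ratio)  # 0.05
```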
```
# Accuracy values for both the Bi-LSTM model and the feed forward NN model have
# been precomputed for the following supervision ratios.
supervision_ratios = [0.3, 0.15, 0.05, 0.03, 0.02, 0.01, 0.005]
model_tags = ['Bi-LSTM model', 'Feed Forward NN model']
base_model_accs = [[84, 84, 83, 80, 65, 52, 50], [87, 86, 76, 74, 67, 52, 51]]
graph_reg_model_accs = [[84, 84, 83, 83, 65, 63, 50],
[87, 86, 80, 75, 67, 52, 50]]
plt.clf() # clear figure
fig, axes = plt.subplots(1, 2)
fig.set_size_inches((12, 5))
for ax, model_tag, base_model_acc, graph_reg_model_acc in zip(
axes, model_tags, base_model_accs, graph_reg_model_accs):
# "-r^" is for solid red line with triangle markers.
ax.plot(base_model_acc, '-r^', label='Base model')
# "-gD" is for solid green line with diamond markers.
ax.plot(graph_reg_model_acc, '-gD', label='Graph-regularized model')
ax.set_title(model_tag)
ax.set_xlabel('Supervision ratio')
ax.set_ylabel('Accuracy(%)')
ax.set_ylim((25, 100))
ax.set_xticks(range(len(supervision_ratios)))
ax.set_xticklabels(supervision_ratios)
ax.legend(loc='best')
plt.show()
```
Observe that as the supervision ratio decreases, model accuracy also decreases. This is true for both the base model and the graph-regularized model, regardless of the model architecture used. However, notice that the graph-regularized model performs better than the base model for both architectures. In particular, for the Bi-LSTM model, when the supervision ratio is 0.01, the accuracy of the graph-regularized model is **~20%** higher than that of the base model. This is primarily because of semi-supervised learning in the graph-regularized model, where structural similarity among training samples is used in addition to the training samples themselves.
## Conclusion
We have demonstrated the use of graph regularization with the Neural Structured Learning (NSL) framework even when the input does not contain an explicit graph. We considered the task of sentiment classification of IMDB movie reviews, for which we synthesized a similarity graph based on review embeddings. We encourage users to experiment further by varying the hyperparameters, the amount of supervision, and the model architecture.
# Tutorial Outline
**Tutorial contents**:
- Adding new yield tables
- Choosing element set
- Training of a neural network
- Running MCMC analysis
- Computing Bayes/LOO-CV scores
- Plotting LOO-CV element predictions
The above are based on the Philcox & Rybizki (2017, submitted) paper which should be cited when using this code. This is based on the $\mathit{Chempy}$ software, described in Rybizki et al. (2017, [arXiv:1702.08729](https://arxiv.org/abs/1702.08729)) and full tutorials for this can be found [here](https://github.com/jan-rybizki/Chempy/tree/master/tutorials).
**Requirements**:
Before running this tutorial, the $\mathit{ChempyScoring}$ code and its dependencies must be installed as described [here](https://github.com/oliverphilcox/ChempyScoring/blob/master/requirements.txt).
The authors [Oliver Philcox](mailto:ohep2@cam.ac.uk) and [Jan Rybizki](mailto:rybizki@mpia.de) are happy to assist with any problems which may arise.
## Step 1: Load Yield Tables
First we must load in the Nucleosynthetic yield table to be tested. Here we will test the SN2 net yields of Portinari et al. (1998, [arXiv:astro-ph/9711337](https://arxiv.org/abs/astro-ph/9711337)). These include elements for stars of mass 6-120Msun, as used in the Illustris simulation. These are constructed from the text files in the `Chempy/input/yields/Portinari_1998/` directory.
The yield tables provide data for 11 masses and five metallicities. The following are important considerations:
- Yield tables must be **net** yields, such that they sum to 0. These are obtained either directly from the input tables (as below) or from gross yields by subtracting the initial abundances (e.g. `net_yield` = [`gross_yield` - `initial_mass_fraction` * `ejected_mass`] / `initial_mass`)
- Output yields must be in fractions of the initial stellar mass (not the ejected mass)
- The `mass_in_remnants` field is the fraction of the initial stellar mass left as a remnant
- The `unprocessed_mass_in_winds` field is $1\,-$ `mass_in_remnants` $-\,\Sigma$(elemental yields). The last term accounts for the yields not quite summing to zero due to missing elements etc.
- The output table must be in the form shown below for compatibility with $\mathit{Chempy}$.
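The gross-to-net conversion described in the bullet points above can be sketched as follows. The numbers are made up purely for illustration; only the arithmetic mirrors the formulas (net yield from gross yield, fractional remnant mass, and unprocessed winds absorbing the residual so the row sums to 1):

```python
# Hypothetical gross-yield record for a single star (all masses in Msun)
initial_mass = 20.0
ejected_mass = 17.0
gross_yield = {'C': 0.30, 'O': 1.10}              # ejected mass per element
initial_mass_fraction = {'C': 0.003, 'O': 0.010}  # composition at birth

# Net yield in fractions of the *initial* stellar mass
net_yield = {el: (gross_yield[el] - initial_mass_fraction[el] * ejected_mass)
             / initial_mass for el in gross_yield}

# Fraction of the initial mass locked in the remnant
mass_in_remnants = (initial_mass - ejected_mass) / initial_mass

# Unprocessed winds absorb whatever keeps the full row summing to 1
unprocessed_mass_in_winds = 1.0 - mass_in_remnants - sum(net_yield.values())

print(net_yield, mass_in_remnants, unprocessed_mass_in_winds)
```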
To add this into *Chempy* we add a `Portinari_net()` function to the `SN2_feedback()` class in `yields.py` as shown below:
```
## NB: The Portinari_net definition should be inserted into the yields.py file
from Chempy import localpath # For file locations
import numpy as np
class SN2_feedback(object):
def __init__(self):
"""
This is the object that holds the feedback table for SN2 stars.
The different methods load different tables from the literature.
They are in the input/yields/ folder.
"""
def Portinari_net(self):
'''
Loading the yield table from Portinari1998.
These are presented as net yields in fractions of initial stellar mass.
'''
# Define metallicities in table
self.metallicities = [0.0004,0.004,0.008,0.02,0.05]
# Load one table
x = np.genfromtxt(localpath + 'input/yields/Portinari_1998/0.02.txt',names=True)
# Define masses and elements in yield tables
self.masses = list(x['Mass']) # In solar masses
self.elements = list(x.dtype.names[3:])
self.table = {} # Output dictionary for yield tables
for metallicity in self.metallicities:
additional_keys = ['Mass', 'mass_in_remnants','unprocessed_mass_in_winds']
names = additional_keys + self.elements # These are fields in dictionary
# Create empty record array of correct size
base = np.zeros(len(self.masses))
list_of_arrays = []
for i in range(len(names)):
list_of_arrays.append(base)
yield_subtable = np.core.records.fromarrays(list_of_arrays,names=names)
# Add mass field to subtable (in solar masses)
yield_subtable['Mass'] = np.array(self.masses)
            # Read in yield table
x = np.genfromtxt(localpath + 'input/yields/Portinari_1998/%s.txt' %(metallicity),names=True)
# Read in element yields
for item in self.elements:
yield_subtable[item] = np.divide(x[item],x['Mass']) # Yields must be in mass fraction
# Add fractional mass in remnants
yield_subtable['mass_in_remnants'] = np.divide(x['Mass'] - x['ejected_mass'], x['Mass'])
# Add unprocessed mass as 1-remnants (with correction if summed net yields are not exactly zero)
for i,item in enumerate(self.masses):
yield_subtable['unprocessed_mass_in_winds'][i] = 1. - (yield_subtable['mass_in_remnants'][i] + sum(list(yield_subtable[self.elements][i])))
# Add subtable to output table
self.table[metallicity] = yield_subtable
```
We can now test the new yield table (using C as an example element).
```
# Define correct yield table
from Chempy.wrapper import SN2_feedback
basic_sn2 = SN2_feedback()
getattr(basic_sn2, 'Portinari_net')()
print("C Yields")
for metallicity in basic_sn2.metallicities:
print("\n Metallicity = %.2e" %(metallicity))
for i in range(len(basic_sn2.masses)):
print("Mass = %d, Yield = %.6e" %(basic_sn2.masses[i],basic_sn2.table[metallicity]['C'][i]))
```
### Pre-implemented Yield Sets
In the code, we have implemented the (net) tables from the following papers:
CC-SN:
- Chieffi & Limongi 2004 (`chieffi04_net`)
- Ritter et al. 2017 (in prep.) (`NuGrid_net`)
- Nomoto et al. 2013 (`Nomoto2013_net`)
- Portinari 1998 (`Portinari_net`)
- West & Heger 2017 (in prep.) (`West17_net`)
- TNG tables (Pillepich et al. 2017) (`TNG_net`)
SN Ia (Gross yields):
- Iwamoto et al. 1999 (`Iwamoto`)
- Theilemann et al. 2003 (`Thielemann`)
- Seitenzahl et al. 2013 (`Seitenzahl`)
- TNG tables (Pillepich et al. 2017) (`TNG`)
AGB:
- Karakas 2010 (`Karakas_net_yield`)
- Karakas 2016 (`Karakas16_net`)
- Ventura 2013 (`Ventura_net`)
- TNG tables (Pillepich et al. 2017) (`TNG_net`)
## Step 2: Choice of Elements
We are free to use any set of chemical elements in this analysis. The set should be chosen to match the simulation. The 28 elements up to Ge (excluding Li, Be and B) and the IllustrisTNG elements (Pillepich et al. 2017) were used in Philcox & Rybizki (2017).
To select the required elements, we modify the `Chempy/parameter.py` file.
Both `elements_to_trace` **and** `initial_neural_names` must be changed here.
- `elements_to_trace` contains all elements in the proto-solar data-file, including B, Be, Li and H, which are not predicted directly by the neural network.
- `initial_neural_names` contains only the elements predicted by the network (as [X/Fe] or [Fe/H] abundances).
In addition, if extra elements are added, it should be checked that they are predicted by the yield tables and feature in the `Chempy/input/stars/proto_sun_all.npy` observational data-set.
```
## Modify these lines in Chempy/parameter.py
# This field should contain all required elements (and B,Be,Li,H) in alphabetical order
elements_to_trace = ['Al', 'Ar', 'B', 'Be', 'C', 'Ca', 'Cl', 'Co', 'Cr', 'Cu', 'F', 'Fe', 'Ga', 'Ge', 'H', 'He', 'K', 'Li', 'Mg', 'Mn', 'N', 'Na', 'Ne', 'Ni', 'O', 'P', 'S', 'Sc', 'Si', 'Ti', 'V', 'Zn']
# This field contains names of elements predicted by neural network
initial_neural_names = ['Al', 'Ar', 'C', 'Ca', 'Cl', 'Co', 'Cr', 'Cu', 'F', 'Fe', 'Ga', 'Ge', 'He', 'K', 'Mg', 'Mn', 'N', 'Na', 'Ne', 'Ni', 'O', 'P', 'S', 'Sc', 'Si', 'Ti', 'V', 'Zn']
```
## Step 3: Create Neural Network Dataset
Now that the yield set has been implemented, we must create a training data-set for the neural network.
Firstly it is important to change the `parameter.py` file such that *Chempy* uses the correct yields. Here we add the new SN2 yield table name and set *Chempy* to use it by default.
```
## Modify these lines in Chempy/parameter.py to add new yield set
yield_table_name_sn2_list = ['chieffi04','Nugrid','Nomoto2013','Portinari', 'chieffi04_net', 'Nomoto2013_net','NuGrid_net','West17_net','TNG_net']#'Frischknecht16_net'
yield_table_name_sn2_index = 3
yield_table_name_sn2 = yield_table_name_sn2_list[yield_table_name_sn2_index]
```
We can test this as follows:
```
# Load parameter file
from Chempy.parameter import ModelParameters
a = ModelParameters()
# Print new SN2 yield table name
print(a.yield_table_name_sn2)
```
We must also set the list of parameters to optimize over to include only the 5 free *Chempy* parameters (i.e. not $\beta$). This will be changed later in the analysis, but must be done at this point, else the `training_data()` routine will fail.
**Here we can also change the free parameters (e.g. to fix $\alpha_\mathrm{IMF}$).**
For compatibility reasons, $\beta$ (when later added) is included in the `SSP_parameters` definition.
```
# Modify these lines in Chempy/parameter.py file
SSP_parameters = [-2.29,-2.75] # Prior values
SSP_parameters_to_optimize = ['high_mass_slope','log10_N_0'] # Parameter names
assert len(SSP_parameters) == len(SSP_parameters_to_optimize)
ISM_parameters = [-0.3,0.55,0.5]
ISM_parameters_to_optimize = ['log10_starformation_efficiency', 'log10_sfr_scale', 'outflow_feedback_fraction']
assert len(ISM_parameters) == len(ISM_parameters_to_optimize)
```
We must also turn OFF the neural network predictions:
```
# In Chempy/parameter.py
UseNeural=False
# To test
a.UseNeural
```
Data-sets can be created using the `Chempy.neural` module as follows and are saved in the `Neural/` directory.
This uses multiprocessing to create a training data-set using 10 values of each of the 5 free *Chempy* parameters (this value can be changed via `training_size` in `parameter.py`). The results are written to file as `Neural/training_abundances.npy` (abundance output) and `Neural/training_norm_grid.npy` (normalised input).
```
from Chempy.neural import training_data
#training_data()
```
The above was run on a 64-core machine, taking 45 minutes. This is the most computationally expensive section, as $\mathit{Chempy}$ must be run $10^5$ times, so multiprocessing with as many cores as possible is used to speed this up.
## Step 4: Train Neural Network
Next we must train the neural network using the previously constructed data-sets. This can be done simply with the `create_network()` function in `Chempy/neural.py`. This creates and trains a 30-neuron network over 1000 training epochs, using a default learning rate of 0.007 (optimised via a validation data-set).
```
from Chempy.neural import create_network
#create_network(Plot=True)
```
The above was run on an 8-core machine taking 20 minutes. The trained network is saved as `Neural/neural_model.npz`. A loss plot is also produced, showing the network loss function against training epoch (if `Plot=True`).
Using the `neural_output()` function from `Chempy/neural.py`, we can simulate the output of *Chempy* for any set of input parameters:
```
from Chempy.parameter import ModelParameters
a = ModelParameters()
from Chempy.neural import neural_output
# Here we compute the neural network predictions for the prior values of the free parameters (a.p0)
# Here the output is for the Chieffi & Limongi yield tables and all 28 elements
output = neural_output(a.p0)
print("Element \t Predicted abundance")
print("------------------------------------")
for i in range(len(output)):
print(a.initial_neural_names[i],"\t\t",output[i])
```
## Step 5: Run MCMC analysis
We can now run MCMC on the yield set and produce optimal parameters and a corner plot. This is done as shown below, using a trained neural network.
Whichever neural network is in the `Neural/` directory will be used.
Before running this analysis, we must add the $\beta$ parameter to the `parameter.py` script and turn the neural network **on**. Here we optimize for $\log_{10}\beta$ using a prior of $1.0\pm0.5$, but this can be changed into linear space by using the `beta_param` identifier instead.
```
# Modify these lines in Chempy/parameter.py file
SSP_parameters = [1.0, -2.29,-2.75] # Prior values
SSP_parameters_to_optimize = ['log10_beta','high_mass_slope','log10_N_0'] # Parameter names (or use beta_param)
assert len(SSP_parameters) == len(SSP_parameters_to_optimize)
# These lines set the priors:
priors = {
'beta_param' : (5.0,2.5,0),
'log10_beta' : (1.0,0.5,0)} # (continued for all parameters)
UseNeural = True # Use the neural network
# This is a wrapper for MCMC (with initial global optimisation to speed up convergence)
from Chempy.wrapper import single_star_optimization
single_star_optimization() # NB: if not using a neural network, multi_star_optimization() should be used
```
Now we restructure the MCMC chain found in the `mcmc/` directory and print the posterior predictions. The parameter names can be taken from the `mcmc/parameter_names.npy` file.
```
from Chempy.plot_mcmc import restructure_chain
restructure_chain('mcmc/') # Restructure the MCMC chain
# Print median and 1-sigma values of the posterior PDF
positions = np.load('mcmc/posteriorPDF.npy')
med = np.median(positions,axis=0) # Median
up = np.percentile(positions,100-15.865,axis=0) # Upper 1 sigma bound
low = np.percentile(positions,15.865,axis=0) # Lower 1 sigma bound
print('\nMCMC Parameters \n-----------------')
for i in range(6):
print('%.2f +%.2f -%.2f' %(med[i],up[i]-med[i],med[i]-low[i]))
# Now create the corner plot (saved as mcmc/parameter_space_sorted.png)
%pylab inline
from Chempy.plot_mcmc import plot_mcmc_chain_with_prior
plot_mcmc_chain_with_prior('mcmc/',use_prior=True,only_first_star=False,plot_true_parameters=False,plot_only_SSP_parameter=False)
```
## Step 6: Compute Overall Scores
We can now compute both the Bayes and LOO-CV scores for the yield table. Before computation, we must have the `UseNeural` switch set to **on** and have $\beta$ listed in the parameters to optimize as in Step 4.
The code for scoring is in the `overall_scores.py` file.
First, the Bayes score. This outputs the Bayesian score (in linear space) along with an estimate of its error deriving from the Monte Carlo integration only. Scores are saved in the `OverallScores/` directory as `Bayes_score_[YIELD SET].npz`. Here we use the Chieffi & Limongi (2004) yields for illustration.
```
from Chempy.overall_scores import overall_Bayes
#overall_Bayes() # This takes around 10 minutes to run on an 8-core machine
# To see the output:
dat=np.load('OverallScores/Bayes_score - chieffi04_net, Karakas_net_yield, Seitenzahl.npz')
print('Log10 Bayes Score: %.3f +/- %.3f' %(np.log10(dat['score']),dat['score_err']/(dat['score']*np.log(10))))
```
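The conversion printed above uses first-order error propagation for $y=\log_{10}x$, i.e. $\sigma_y = \sigma_x/(x\ln 10)$. A minimal sketch with made-up score values (the numbers are illustrative, not a real Chempy output):

```python
import numpy as np

def log10_with_error(score, score_err):
    # First-order error propagation for y = log10(x):
    # sigma_y = sigma_x / (x * ln(10))
    return np.log10(score), score_err / (score * np.log(10))

# Hypothetical linear-space score with a Monte Carlo error estimate
log_score, log_err = log10_with_error(1e-3, 5e-5)
```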
We now calculate the CV score. This is computed ten times by default to account for the aforementioned scatter. This takes around 2 hours on an 8-core machine. Scores are saved as `OverallScores/CV_normalised_element_predictions_[YIELD TABLE].npz`
```
from Chempy.overall_scores import CV_element_predictions
#CV_element_predictions() # Run on a faster machine
# To see the output:
dat = np.load('OverallScores/CV_normalised_element_predictions_West17_net.npz')
# Load normalised scores (already in log-space)
scores = dat['normalised_scores']
# Print score and error
print('Normalised West & Heger (2017, in prep.) LOO-CV score = %.2f + %.2f - %.2f' %(np.median(scores),np.percentile(scores,100-15.865)-np.median(scores),np.median(scores)-np.percentile(scores,15.865)))
```
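The summary above reports the median with asymmetric 1-sigma bounds taken from the 15.865th and 84.135th percentiles of the score distribution. A minimal sketch of that summary on synthetic samples (the normal toy data is just for illustration):

```python
import numpy as np

def summarize(samples):
    # Median with asymmetric 1-sigma bounds from the
    # 15.865th and (100 - 15.865)th percentiles
    med = np.median(samples)
    up = np.percentile(samples, 100 - 15.865) - med
    low = med - np.percentile(samples, 15.865)
    return med, up, low

rng = np.random.default_rng(0)
med, up, low = summarize(rng.normal(0.0, 1.0, 100_000))
# For a standard normal, both bounds should be close to 1
```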
## Step 7: Visualising Best Element Predictions
We can now plot the elemental predictions from the LOO-CV scores (using the element mean and width found from MCMC with that element excluded - see paper for further details).
```
# Load mean + sigma + element names
pred_mean = np.mean(dat['mean'],axis=0)
pred_sigma = np.mean(dat['sigma'],axis=0) # Taking mean over all 10 iterations
elements = dat['elements']
# Load in proto-solar data in same element order
ps_dat = np.load('Chempy/input/stars/Proto-sun_all.npy')
ps_nam = ps_dat.dtype.names
ps_abun = []; ps_err = []
for el in elements:
for i in range(len(ps_nam)):
if ps_nam[i] == el:
ps_abun.append(ps_dat[0][i])
ps_err.append(ps_dat[1][i])
# Configure plot
plt.figure(figsize=(20,10))
large_text = 28
text_size = 22
marker_size= 10
small_text = 16
plt.rc('font', family='serif',size = large_text)
plt.rc('xtick', labelsize=small_text)
plt.rc('ytick', labelsize=text_size)
plt.rc('axes', labelsize=text_size, lw=1.0)
plt.rc('lines', linewidth = 2)
plt.rcParams['ytick.major.pad']='8'
plt.rcParams['text.latex.preamble']=[r"\usepackage{libertine}"]
params = {'text.usetex' : True,
'font.family' : 'libertine',
'text.latex.unicode': True,
}
plt.rcParams.update(params)
# Plot both sets of data
plt.errorbar(np.arange(len(elements)),pred_mean,yerr=pred_sigma,fmt='x',c='k',alpha=0.5,label='LOO-CV Predictions')
plt.errorbar(np.arange(len(elements)),ps_abun,yerr=ps_err,fmt='o',c='b',label='Observations')
plt.legend()
plt.xlabel('Element')
plt.ylabel('[X/Fe] abundance')
# Change x-axis to element names
ax = plt.gca()
la=plt.setp(ax,xticks=np.arange(len(elements)), xticklabels=elements)
```
This completes the tutorial.
# Validation script for Imagenet models
## Overview
Use this notebook to verify the accuracy of a trained ONNX model on the validation set of ImageNet dataset.
## Models Support in This Demo
* SqueezeNet
* VGG
* ResNet
* MobileNet
## Prerequisites
Dependencies:
* Protobuf compiler - `sudo apt-get install protobuf-compiler libprotoc-dev` (required for ONNX. This will work for any linux system. For detailed installation guidelines head over to [ONNX documentation](https://github.com/onnx/onnx#installation))
* ONNX - `pip install onnx`
* MXNet - `pip install mxnet-cu90mkl --pre -U` (tested on this version GPU, can use other versions. `--pre` indicates a pre build of MXNet which is required here for ONNX version compatibility. `-U` uninstalls any existing MXNet version allowing for a clean install)
* numpy - `pip install numpy`
* matplotlib - `pip install matplotlib`
* gluoncv - `pip install gluoncv` (for ImageNet data preparation)
In order to do validate accuracy with a python script:
* Generate the script : In Jupyter Notebook browser, go to File -> Download as -> Python (.py)
* Run the script: `python imagenet_validation.py`
The ImageNet dataset must be downloaded and extracted in the required directory structure. Refer to the guidelines in [imagenet_prep](imagenet_prep.md).
### Import dependencies
Verify that all dependencies are installed using the cell below. Continue if no errors are encountered; warnings can be ignored.
```
import matplotlib
import mxnet as mx
import numpy as np
from mxnet import gluon, nd
from mxnet.gluon.data.vision import transforms
from gluoncv.data import imagenet
from collections import namedtuple
import multiprocessing
from mxnet.contrib.onnx.onnx2mx.import_model import import_model
```
### Set context, paths and parameters
```
# Determine and set context
if len(mx.test_utils.list_gpus())==0:
ctx = [mx.cpu()]
else:
ctx = [mx.gpu(0)]
# path to imagenet dataset folder
data_dir = '/home/ubuntu/imagenet/img_dataset/'
# batch size (set to 1 for cpu)
batch_size = 128
# number of preprocessing workers
num_workers = multiprocessing.cpu_count()
# path to ONNX model file
model_path = 'squeezenet1.1.onnx'
```
### Import ONNX model
Import a model from ONNX to MXNet symbols and params using `import_model`
```
sym, arg_params, aux_params = import_model(model_path)
```
### Define evaluation metrics
Top-1 and top-5 accuracy.
```
# Define evaluation metrics
acc_top1 = mx.metric.Accuracy()
acc_top5 = mx.metric.TopKAccuracy(5)
```
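Top-1 accuracy counts a sample as correct when the true label is the highest-scoring class; top-k counts it as correct when the true label is among the k highest-scoring classes. A minimal numpy sketch of the idea (independent of MXNet's metric classes):

```python
import numpy as np

def topk_accuracy(scores, labels, k):
    # Fraction of rows whose true label is among the k highest scores.
    # scores: (N, C) class scores, labels: (N,) integer labels
    topk = np.argsort(scores, axis=1)[:, -k:]
    hits = (topk == labels[:, None]).any(axis=1)
    return hits.mean()

scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2],
                   [0.2, 0.3, 0.5]])
labels = np.array([1, 1, 2])
top1 = topk_accuracy(scores, labels, 1)   # rows 0 and 2 are correct
top2 = topk_accuracy(scores, labels, 2)   # row 1 is also within the top 2
```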
### Preprocess images
For each image: resize to 256x256, take a center crop of 224x224, and normalize.
```
# Define image transforms
normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
transform_test = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
normalize
])
# Load and process input
val_data = gluon.data.DataLoader(
imagenet.classification.ImageNet(data_dir, train=False).transform_first(transform_test),
batch_size=batch_size, shuffle=False, num_workers=num_workers)
```
### Load network for validation
Use `mx.mod.Module` to define the network architecture and bind the parameter values using `mod.set_params`. `mod.bind` tells the network the shape of input and labels to expect.
```
# Load module
mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))],
label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params, allow_missing=True)
```
### Compute evaluations
Perform forward pass over each batch and generate evaluations
```
# Compute evaluations
Batch = namedtuple('Batch', ['data'])
acc_top1.reset()
acc_top5.reset()
num_batches = int(50000/batch_size)
print('[0 / %d] batches done'%(num_batches))
# Loop over batches
for i, batch in enumerate(val_data):
# Load batch
data = gluon.utils.split_and_load(batch[0], ctx_list=ctx, batch_axis=0)
label = gluon.utils.split_and_load(batch[1], ctx_list=ctx, batch_axis=0)
# Perform forward pass
mod.forward(Batch([data[0]]))
outputs=mod.get_outputs()
# Update accuracy metrics
acc_top1.update(label, outputs)
acc_top5.update(label, outputs)
if (i+1)%50==0:
print('[%d / %d] batches done'%(i+1,num_batches))
```
### Print results
top1 and top5 accuracy of the model on the validation set are shown in the output
```
# Print results
_, top1 = acc_top1.get()
_, top5 = acc_top5.get()
print("Top-1 accuracy: {}, Top-5 accuracy: {}".format(top1, top5))
```
# Autogradient Tutorial
This notebook documents the usage and application of the autogradient python file to perform various machine learning algorithms with ease.
* <b>Step 1 </b>: Import all class files from autogradient python script.
* <b>Step 2 </b>: Use the '<b>Tensor</b>' class file to initialize all variables.
* <b>Step 3 </b>: Use the math functions available in the '<b>op</b>' class file to perform necessary computations and obtain the final cost function.
* <b>Step 4 </b>: Use the '<b>graph</b>' class file to specify the function to be differentiated and the input variables it is differentiated with respect to, and store the graph object.
* <b>Step 5 </b>: Obtain gradients by using '<b>get_grad()</b>' function belonging to graph class file using the previously created graph object.
* <b>Step 6 </b>: To recompute the cost function after updating or changing values of the variables use the function '<b>recompute_graph()</b>' of the graph class from the previously created graph object.
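The workflow above is reverse-mode automatic differentiation: the forward pass builds a computation graph, and gradients flow backwards through it. As a self-contained illustration of the idea only (this is **not** the autogradient API, just a minimal scalar sketch):

```python
# Minimal reverse-mode autodiff on scalars -- illustrative only,
# not the autogradient API used below.
class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # (parent_var, local_gradient) pairs
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self) into each ancestor's .grad
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x          # z = x*y + x,  dz/dx = y + 1 = 5,  dz/dy = x = 3
z.backward()
```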
```
import numpy as np
from autogradient import *
```
## Creating dummy dataset
```
# Creating dummy data
import numpy as np
import matplotlib.pyplot as plt
n = 20 # Size of dataset
o = 2 # No of classes
m = 2 # Feature vector size
np.random.seed(6)
X_val = np.random.rand(m,n)
y = np.round(np.random.rand(n)*(o-1)).astype(int)
Y_val = np.zeros(shape = (o,X_val.shape[1]))
Y_val[y,range(n)] = 1
plt.scatter(X_val[0,:],X_val[1,:],c=y[0:],cmap=plt.cm.Spectral,edgecolors='k')
plt.show()
X = Tensor(X_val)
Y = Tensor(Y_val)
```
## 4 - Layer neural network Example
```
# 4-layer neural-net example (2 hidden layers) #
np.random.seed(1)
h1 = 30 # Number of neurons in hidden layers - 1
h2 = 30 # Number of neurons in hidden layers - 2
# Intializing variables
W0 = Tensor(np.random.rand(h1,m)-0.5)
b0 = Tensor(np.random.rand(h1,1)-0.5)
W1 = Tensor(np.random.rand(h2,h1)-0.5)
b1 = Tensor(np.random.rand(h2,1)-0.5)
W2 = Tensor(np.random.rand(o,h2)-0.5)
b2 = Tensor(np.random.rand(o,1)-0.5)
# Constructing cost function
h1 = W0.dot(X) + b0
a1 = h1.RelU()
h2 = W1.dot(a1)+ b1
a2 = h2.RelU()
h3 = W2.dot(a2)+b2
a3 = h3.softmax_crossentropy_loss(Y,axis=0) # - Final cost function
# Pruning graphs to obtain gradients
gr = graph(a3,[W0,W1,b0,b1])
alpha = 0.4
loss = []
# Training over 1500 epochs
for i in range(1500):
w0_grad,w1_grad,b0_grad,b1_grad = gr.get_grad() #getting gradients
W0.value-=w0_grad/n*alpha
W1.value-=w1_grad/n*alpha
    b0.value-=b0_grad/n*alpha
    b1.value-=b1_grad/n*alpha
#Recomputing the cost function
gr.recompute_graph()
# For plotting purposes.
loss.append(a3.value[0])
print('Accuracy :' , 100 -np.sum(np.abs(np.round(h3.cache) - Y.value))/n*100,'%')#
#plotting loss as a function of epochs
plt.plot(loss)
plt.xlabel('No of epochs')
plt.ylabel('loss')
plt.show()
```
## Deep neural network Example
```
# L - layer neural-net Example #
np.random.seed(5)
hid_lay = [m,20,20,20,o]# 3 hidden layers
# Intializing variables
X = Tensor(X_val)
Y = Tensor(Y_val)
W = []
b =[]
c = len(hid_lay)-1
for i in range(1,c+1):
W.append(Tensor(np.random.rand(hid_lay[i],hid_lay[i-1])-0.5))
b.append(Tensor(np.random.rand(hid_lay[i],1)-0.5))
# Constructing cost function
act=[X]
hid=[]
for i in range(c):
    hid.append(W[i].dot(act[i]) + b[i])
    act.append(hid[i].RelU())
cost = hid[c-1].softmax_crossentropy_loss(Y,axis=0) # - Final cost function
# concatenating weights and biases
weights = W+b
# pruning comp graph
gr = graph(cost,weights)
# Training over 6000 epochs
alpha = 0.1
loss = []
for i in range(6000):
weight_grad = gr.get_grad() #getting gradients
# Updating weights
for j in range(c):
W[j].value-=weight_grad[j]/n*alpha
b[j].value-=weight_grad[c+j]/n*alpha
alpha=0.15/(i//3000+1)
#for plotting purposes.
loss.append(cost.value[0])
#Recomputing the cost function
gr.recompute_graph()
print('Accuracy :' , 100 -np.sum(np.abs(np.round(hid[c-1].cache) - Y.value))/n*100,'%')
#plotting loss as a function of epochs
plt.plot(loss)
plt.xlabel('No of epochs')
plt.ylabel('loss')
plt.show()
```
## Linear regression example
```
# Linear regression Example #
# Creating dummy dataset
np.random.seed(1)
x = Tensor(np.array(range(10)))
y = Tensor(x.value*3+np.random.rand(10)*1)
#Initializing variables
w = Tensor(0)
# Constructing cost function
cost = op.mse(x*w,y)
# constructing graph object
gr = graph(cost,[w])
# Training
loss = []
alpha = 0.2
for i in range(20):
# getting gradients
w_grad = gr.get_grad()
w.value-=w_grad[0]*alpha
# recomputing cost function
gr.recompute_graph()
loss.append(cost.value[0])
print('Error',(w.value-3)/3*100,'%')
#plotting loss as a function of epochs
plt.plot(loss)
plt.xlabel('No of epochs')
plt.ylabel('loss')
plt.show()
```
```
%load_ext autoreload
%autoreload 2
import sys
%config Completer.use_jedi = False
# Libraries
# ==============================================================================
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg
```
A trained model may be deployed in production in order to generate predictions regularly. Suppose predictions have to be generated on a weekly basis, for example, every Monday.
By default, when using the `predict` method on a trained forecaster object, predictions start right after the last training observation. Therefore, the model could be retrained weekly, just before the first prediction is needed, and call the predict method. This strategy, although simple, may not be possible to use for several reasons:
+ Model training is very expensive and cannot be run as often.
+ The history with which the model was trained is no longer available.
+ The prediction frequency is so high that there is no time to train the model between predictions.
In these scenarios, the model must be able to predict at any time, even if it has not been recently trained.
Every model generated using skforecast has the `last_window` argument in its `predict` method. Using this argument, it is possible to provide only the past values needed to create the autoregressive predictors (lags) and thus generate the predictions without retraining the model.
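Conceptually, a recursive forecaster only needs the most recent `max(lags)` observations to start predicting: each prediction is appended to the window and reused as a lag for the next step. A minimal sketch of that mechanism (not skforecast's actual implementation; the one-step "model" is a made-up toy):

```python
def recursive_forecast(last_window, predict_one, lags, steps):
    # Recursive multi-step forecast: each new prediction is appended
    # to the window and reused as a lag for the next step.
    # last_window must contain at least max(lags) values.
    window = list(last_window)
    preds = []
    for _ in range(steps):
        x = [window[-lag] for lag in lags]   # lag 1 = most recent value
        y_hat = predict_one(x)
        preds.append(y_hat)
        window.append(y_hat)
    return preds

# Toy "model": predict the mean of the lags (illustration only)
preds = recursive_forecast([1.0, 2.0, 3.0],
                           predict_one=lambda x: sum(x) / len(x),
                           lags=[1, 2], steps=2)
```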
```
# Download data
# ==============================================================================
url = ('https://raw.githubusercontent.com/JoaquinAmatRodrigo/skforecast/master/data/h2o.csv')
data = pd.read_csv(url, sep=',', header=0, names=['y', 'date'])
# Data preprocessing
# ==============================================================================
data['date'] = pd.to_datetime(data['date'], format='%Y/%m/%d')
data = data.set_index('date')
data = data.asfreq('MS')
data_train = data.loc[:'2005-01-01']
print(data_train.tail().to_markdown())
# Train forecaster
# ==============================================================================
forecaster = ForecasterAutoreg(
regressor = RandomForestRegressor(random_state=123),
lags = 5
)
forecaster.fit(y=data_train['y'])
# Predict
# ==============================================================================
forecaster.predict(steps=3)
```
As expected, predictions follow directly from the end of training data.
When `last_window` is provided, the forecaster uses this data to generate the lags needed as predictors and starts the prediction afterwards.
```
# Predict
# ==============================================================================
forecaster.predict(steps=3, last_window=data['y'].tail(5))
```
Since the provided `last_window` contains the last five values of the series (2008-02-01 to 2008-06-01), the forecaster is able to create the needed lags and predict the following steps.
> **⚠ WARNING:**
> It is important to note that the length of `last_window` must be enough to include the maximum lag used by the forecaster. For example, if the forecaster uses lags 1, 24, 48, `last_window` must include the last 48 values of the series.
# Measurement of the neutron flux distribution in a subcritical nuclear reactor
## Theoretical introduction
### Reactor
In this experiment we study the neutron distribution in a subcritical nuclear reactor ($k_{eff} \approx 0.8$). The reactor consists of rods of natural uranium $U_3O_8(UO_2 . 2UO_3)$, with light water $H_2O$ as moderator (and reflector). The uranium is held in aluminium tubes to prevent leakage of radioactive material.
A 5 Ci $\ ^{241}AmBe$ neutron source is placed at the center of the reactor.
The source neutrons are thermalized as they collide with the hydrogen nuclei in the water, and a fraction of them subsequently reacts with $\ ^{235}U$, inducing fission and releasing additional neutrons. The water also acts as a reflector, so that part of the neutrons that would normally escape the reactor are reflected back.
### Neutron diffusion
Assuming the neutrons are monoenergetic, we can use the one-group diffusion method. According to this method, when the reactor operates under steady-state conditions, the neutron balance is:
$$ created - escaped - absorbed = 0 $$
or
$$ D\nabla^2\Phi - \Sigma_a\Phi + S = 0$$
where $\Phi$ is the thermal-neutron flux, $D$ the diffusion coefficient, $\Sigma_a$ the macroscopic absorption cross-section, and $S$ the number of neutrons produced per unit volume per unit time.
Solving the above equation we arrive at:
$$ \Phi(r,z)=A\, J_0(2.405\, r/R_{ex})\, e^{-\gamma z} $$
or
$$\ln\Phi(r,z) = \ln[A\, J_0(2.405\, r/R_{ex})] -\gamma z \qquad (1)$$
For a given $r=r_0$, $\ln\Phi$ as a function of height $z$ is a straight line with slope $-\gamma$. Therefore, the height over which the flux at $r_0$ drops to $1/e$ of its initial value is $l=1/\gamma$, called the _relaxation length_.
### Particularities of this experiment
Due to maintenance of the fuel rods, the reactor contained only the $\ ^{241}AmBe$ source and the water, without any fuel rods. It was also assumed that the activation of the $In$ is due exclusively to thermal neutrons.
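Since $\ln\Phi$ is linear in $z$, the relaxation length follows from a straight-line fit of $\ln\Phi$ against $z$. A quick sketch on noiseless synthetic data (the flux values are invented for illustration):

```python
import numpy as np

# Synthetic flux falling off as exp(-z / l) with l = 12 cm
z = np.linspace(0.0, 40.0, 9)            # heights in cm
l_true = 12.0
phi = 5.0e4 * np.exp(-z / l_true)

# Fit ln(phi) = a + b*z; the slope is b = -gamma, so l = -1/b
b, a = np.polyfit(z, np.log(phi), 1)
l_fit = -1.0 / b
```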
---
## Experiment
Indium ($In$) foils were placed in the reactor at a fixed position, but each at a different height. Natural $In$ contains 95.7(1)% $\ ^{115}In$. After 4-5 hours of irradiation (about 5 half-lives of $\ ^{116}In$, so that the activity reaches about 99(1)% of its maximum), the foils were removed.
The background rate without $In$ was $8\ cpm$ and with **non**-activated $In$ it was $9\ cpm$; this is subtracted from the true rate.
Our group measured the apparent rates R' of sources 3, 6 and 9. The R' were converted to true rates via the relation:
$$ R = \frac{R'}{1-\tau R'} $$
where $\tau=50\ \mu s$ is the dead time of the Geiger detector.
The efficiency of the Geiger counter at these $\beta^-$ energies is $\varepsilon=100\%$ and the geometry factor is $g = 0.4$. The activity $A'$ is given by the relation:
$$ A' = \frac{R}{\varepsilon\cdot g} $$
The activity per gram $A$ is given by the relation:
$$A = \frac{A'}{m}$$
where $m$ is the mass of the $In$ in grams.
We consider that the computed activities correspond to the middle of the measurement interval. E.g. if $t_{exit}=18\ min$ is the time since removal from the reactor and $t_0=60\ s$ the duration of the measurement, then the activity corresponds to time $t=18.5\ min$.
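A quick sketch of this non-paralyzable dead-time correction (the observed rate is an arbitrary example value):

```python
def true_rate(observed_rate, dead_time):
    # Non-paralyzable dead-time correction: R = R' / (1 - tau * R')
    return observed_rate / (1.0 - dead_time * observed_rate)

# e.g. 120 cps observed with tau = 50 microseconds
R = true_rate(120.0, 50e-6)   # slightly above 120 cps
```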
The thermal-neutron flux is given by the relation:
$$ \Phi_\theta = \frac {A_\theta} {\sigma \cdot M (1-e^{-\lambda t_{irr} }) }$$
where:
- $t_{irr}$ is the irradiation time in the reactor
- $\sigma = 162\ barn$
- $M = N_A\cdot m/A$, with $A=115$
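A sketch of this flux formula with illustrative numbers (the activity value is made up; the half-life, cross-section, and isotopic fraction match the experiment's values, with $\sigma$ kept in cm$^2$ and $M$ computed for 1 g of natural indium):

```python
import numpy as np

def thermal_flux(A_th, decay_const, t_irr, n_atoms, sigma):
    # Phi = A / (sigma * M * (1 - exp(-lambda * t_irr)));
    # the (1 - exp(...)) factor corrects for incomplete saturation
    return A_th / (sigma * n_atoms * (1.0 - np.exp(-decay_const * t_irr)))

half_life = 54.29 * 60.0                  # s, In-116m
lam = np.log(2) / half_life
sigma = 162e-24                           # cm^2 (162 barn)
n_atoms = 0.957 * 6.022e23 * 1.0 / 115    # In-115 atoms in 1 g of natural In

# Hypothetical activity of 100 Bq after 4.5 h of irradiation
phi = thermal_flux(A_th=100.0, decay_const=lam, t_irr=4.5 * 3600,
                   n_atoms=n_atoms, sigma=sigma)   # n / cm^2 / s
```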
```
import numpy
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
import scipy.optimize
import scipy.constants
import sympy
sympy.init_printing()
# Used for latex-labels in plots .
# (if you don't have the dependencies from http://stackoverflow.com/a/37218925/4230591 installed,
# it will raise an error during plotting)
from matplotlib import rc
rc('text', usetex=True)
# ====================================================================================================
# CONSTANTS AND MISC DATA
#
# One-letter variables should not be used to store data,
# since they'll be needed for `sympy.symbols()` to display functions.
# ====================================================================================================
t_count = 60 # secs
t_count_in_mins = t_count / 60.
t_exit_reactor = 18 * 60.
t_end = 66 * 60.
t_between_measurements = 2 * 60.
R_background = 8 /60.
R_background_In = 9 /60.
m_In = 1 /1000 # 1gr
indium_115_ratio = .957
M_constant = scipy.constants.N_A * m_In * indium_115_ratio / 115
irradiation_start = '3:00'
irradiation_t = 4.5 * 3600 # 4-5h
g_times_e = 0.4
τ_constant=50 * 10**-6
half_life = 54.29 * 60. #sec
decay_constant = numpy.log(2) / half_life
σ_In = 162 * 1e-28  # m^2 (1 barn = 1e-28 m^2)
# =========================================================
# IRRELEVANT DATA (not used in calculations)
# =========================================================
#
# ~40μSv/h near reactor
# AmBe ~10e+6 n/s
# k_eff = ~.8
# v_geiger = 500V
# =========================================================
# R
# =========================================================
R_αληθής, σ_R_αληθής, R_ψευδής, τ, σ_R_ψευδής, t = sympy.symbols("R, \sigma_R, R' tau, \sigma_R', t")
_R_αληθής_f = lambda R_ψευδής, τ: R_ψευδής/(1-R_ψευδής*τ)
R_αληθής_f = sympy.lambdify((R_ψευδής, τ), _R_αληθής_f(R_ψευδής, τ), (sympy, numpy))
print('\nΟ αληθής ρυθμός R δίνεται από την σχέση:')
sympy.Eq(R_αληθής, R_αληθής_f(R_ψευδής, τ))
# =========================================================
# R' error
# =========================================================
_σ_R_ψευδής_f = lambda R_ψευδής, t: sympy.sqrt(R_ψευδής) / t
# (Currently `lambdify` can't handle both arrays *and* sympy functions,)
# (therefor one function is used for pretty printing and a different one for calculations.)
σ_R_ψευδής_f = sympy.lambdify((R_ψευδής, t), _σ_R_ψευδής_f(R_ψευδής, t), modules=['numpy'])
print('\nΤο σφάλμα στον ψευδή ρυθμό δίνεται από την σχέση:')
sympy.Eq(σ_R_ψευδής, _σ_R_ψευδής_f(R_ψευδής, t))
# =========================================================
# R error
# =========================================================
_err_R_αληθής_sq = sympy.diff(R_αληθής_f(R_ψευδής, τ), R_ψευδής)**2 * σ_R_ψευδής**2
_err_R_αληθής = sympy.sqrt(_err_R_αληθής_sq)
err_R_αληθής_f = sympy.lambdify((R_ψευδής, σ_R_ψευδής, τ), _err_R_αληθής, modules=['numpy'])
print('\nΤο σφάλμα στον αληθή ρυθμό δίνεται από την σχέση:')
sympy.Eq(σ_R_αληθής, _err_R_αληθής)
# ====================================================================================================
# DATA (sources 3, 6, 9)
# ====================================================================================================
df3 = pd.read_csv('source3', names=['t (min)', "R'(cpm)"])
df6 = pd.read_csv('source6', names=['t (min)', "R'(cpm)"])
df9 = pd.read_csv('source9', names=['t (min)', "R'(cpm)"])
dfs_3_6_9 = {3: df3, 6: df6, 9: df9}
for source_n, _df in dfs_3_6_9.items():
# the measured rate is considered to have occured exactly in the middle of the measurement's duration
_df['t (min)'] += t_count_in_mins / 2
_df['t (s)'] = _df['t (min)'] * 60
_df["R'(cps)"] = _df["R'(cpm)"] / 60
_df["R(cps)"] = R_αληθής_f(_df["R'(cps)"], τ_constant) - R_background_In
# errors
_df["error σR'(cps)"] = σ_R_ψευδής_f(_df["R'(cpm)"], 60)
_df["error σR(cps)"] = err_R_αληθής_f(_df["R(cps)"], _df["error σR'(cps)"], τ_constant)
_df["relative σR %"] = _df["error σR(cps)"] / _df["R(cps)"] * 100
print('\n' + '='*70)
print('Πηγή {}:\n'.format(source_n))
print(_df)
source_label = r'source: {}'.format(source_n)
plt.plot(_df['t (s)'], _df["R(cps)"], marker='.', label=source_label)
plt.ylim(ymin=0, ymax=160)
plt.xlabel('t(s)')
plt.ylabel('R(cps)')
plt.legend(loc='upper right')
plt.title('FIGURE 1:\nRates vs time of sources 3,6 and 9. \n(Errorbars were omitted since they are too small.)')
plt.grid('on')
plt.show()
```
### Calculation of the rate R at the moment the In exits the reactor
[Considering that the error in the rates](http://www.mathworks.com/help/stats/examples/pitfalls-in-fitting-nonlinear-models-by-transforming-to-linearity.html) is multiplicative rather than additive, we use a least-squares method after converting the exponential function to a linear one.
```
# ====================================================================================================
# Linear fit
# ====================================================================================================
_f = lambda x, a, b: a+x*b
lin_popt, lin_pcov = scipy.optimize.curve_fit(_f, df3['t (s)'], scipy.log(df3['R(cps)']),
sigma=_df["error σR(cps)"], absolute_sigma=True)
lin_perr = numpy.sqrt(numpy.diag(lin_pcov))
print('a = {:.6}, b = {:.6}'.format(*lin_popt))
print('Ro = {:.4}'.format(scipy.e**lin_popt[0]))
print('λ/b = {:.1%}'.format(-decay_constant/lin_popt[1]))
print('σφάλμα στο Ro: {:.4}'.format(scipy.e**lin_perr[0]))
# (using python variables in markdown is not supported yet, therefor it has to be done manually)
```
The rate at the end of the irradiation is $R_0 = 193(3) cps$.
```
# ====================================================================================================
# Non linear fit (not used in further calculations; displayed just for comparison)
# ====================================================================================================
def exp_f(x, a,b):
return a * scipy.e**(b*x)
exp_popt, _ = scipy.optimize.curve_fit(exp_f, df3['t (s)'], df3['R(cps)'],sigma=_df["error σR(cps)"], p0=(300, -decay_constant))
print('a = {:.6}, b = {:.6}'.format(*exp_popt))
print('λ/b = {:.6}'.format(-decay_constant/exp_popt[1]))
```
## Neutron flux distribution as a function of height z
First we compute the activity of each source at the end of the irradiation, taking into account the geometry factor $g=0.4$ and the efficiency $\varepsilon=1$ of the Geiger counter at these $\beta^-$ energies.
```
# ============================================================================================
# DATA (all sources' R at the end of irradiation)
# ============================================================================================
df_all = pd.read_csv('all_sources', names=['z (cm)', "R(cps)"])
df_all['A(Bq)'] = df_all["R(cps)"] / g_times_e
```
The plot below shows the neutron flux as a function of height. $z=0$ corresponds to the height of the $\ ^{241}AmBe$ source.
The maximum flux is observed at the height of the $\ ^{241}AmBe$ source. The flux is not symmetric, due to the construction of the reactor. The top surface of the reactor is open (air), while at the bottom lie its metal base and the concrete. As a result, neutrons escape freely through the top surface as soon as they exit the water, while at the bottom surface part of the neutrons are backscattered by the aforementioned materials.
```
# ============================================================================================
# Neutron flux formula
# ============================================================================================
A_θ, t , λ, σ, t_irr, M, Φ = sympy.symbols('A_th, t, \lambda, \sigma, t_irr, M, \Phi')
_flux_f = lambda A_θ, λ, t_irr, M, σ: A_θ / (σ * M * (1-sympy.E**(-λ * t_irr)))
flux_f = sympy.lambdify((A_θ, λ, t_irr, M, σ), _flux_f(A_θ, λ, t_irr, M, σ))
sympy.Eq(Φ, _flux_f(A_θ, λ, t_irr, M, σ))
# ============================================================================================
# FLUX CALCULATION
# ============================================================================================
# flux unit: n per cm^3 per sec
df_all['Φ'] = flux_f(df_all["R(cps)"], decay_constant, irradiation_t, M_constant, σ_In)/ (100**3)
df_all['Φ'] = df_all['Φ'].astype(scipy.float64, copy=False)
print(df_all)
# --------------------------------------------------------------------
plt.plot(df_all['Φ'],df_all['z (cm)'])
plt.xlabel('flux$(n/cm^3/s)$')
plt.ylabel('height (cm)')
plt.title('FIGURE 2: \nThermal neutron flux')
plt.grid('on')
plt.show()
```
## Relaxation length
```
# ====================================================================================================
# Least squares
# ====================================================================================================
_f = lambda x, a, b: a+x*b
# Converting flux to n/m^3/s first, and z to meters.
# (No errors were given for Φ)
lin_popt, _ = scipy.optimize.curve_fit(_f, df_all['z (cm)']/100, scipy.log(df_all['Φ']*(100**3)))
print('a = {:.6}, b = {:.6}'.format(*lin_popt))
relaxation_length = -1/lin_popt[1]
print('μήκος χαλάρωσης: {:.5} cm'.format(relaxation_length * 100))
```
The relaxation length is found to be 12.0 cm.
(Since we were not given the errors of the other sources, we do not take them into account in the calculations and cannot estimate the error of the relaxation length.)
## Possible sources of error
### Backscattering of $\beta^-$ particles by the material beneath the source
Backscattering of the $\beta^-$ by the mounting base, the bench and the materials around the detector, including the position of the experimenter during the measurements, probably caused an overestimation of the rates.
### Assigning the rate to the middle of the measurement interval
Each rate was assigned to the middle of the measurement interval $t_0$ (i.e. $t_0 = t_1 + \frac{t_2-t_1}{2}$). However, due to the exponential decay of the activity, the computed activity actually corresponds to a time before the midpoint.
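The size of this bias can be checked numerically: for an exponential decay, the time at which the instantaneous activity equals the interval-averaged activity lies slightly before the midpoint. A sketch using this experiment's half-life and a 60 s measurement:

```python
import numpy as np

lam = np.log(2) / (54.29 * 60.0)     # In-116m decay constant, 1/s
t1, t2 = 18 * 60.0, 19 * 60.0        # a 60 s measurement starting at t = 18 min

# Mean of A(t) = exp(-lam * t) over [t1, t2] (taking A0 = 1)
mean_A = (np.exp(-lam * t1) - np.exp(-lam * t2)) / (lam * (t2 - t1))
# Time at which the instantaneous activity equals that mean
t_eff = -np.log(mean_A) / lam
midpoint = (t1 + t2) / 2.0           # t_eff is a fraction of a second earlier
```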

### Overestimation of the geometry factor due to the proximity of source and detector
At sufficiently large distances the geometry factor can ignore the dimensions of the detector and be used as-is to compute the number of particles passing through it.
At very short distances, however, particles incident at large angles (beam 2 in the figure below) traverse less detector material, in contrast to particles incident perpendicular to its surface (beam 1).

As a result, some particles at large angles do not cause enough ionization to be registered by the detector. This effect may not be as important for directly ionizing radiation, such as the $\beta^-$ of this experiment, but it is nevertheless not negligible.
### Simultaneous handling/measurement of other sources at neighbouring Geiger counters
Other measurements were carried out at the same time as ours at a distance of less than 2 meters, probably increasing the background considerably. This error may have particularly affected the measurements of sources with small activities (e.g. source 9).
```
plt.plot(df9['t (s)'], df9["R(cps)"], marker='.', label='Source 9')
plt.errorbar(df9['t (s)'], df9["R(cps)"], df9["error σR(cps)"], label=None)
plt.ylim(ymin=0)
plt.xlabel('t(s)')
plt.ylabel('R(cps)')
plt.legend(loc='upper right')
plt.title('FIGURE 3: \nRate of source 9.')
plt.grid('on')
plt.show()
```
### Non-thermal neutrons
Since no $Cd$ was used, non-thermal neutrons are also absorbed by the $\ ^{115}In$. As a result the measured rate increases, since many of these neutrons would have escaped or been absorbed before being thermalized, and would therefore not have been available as thermal neutrons.
## References
The following references were used in writing this report:
1. Η ΠΥΡΗΝΙΚΗ ΦΥΣΙΚΗ ΣΤΟ ΕΡΓΑΣΤΗΡΙΟ - ΦΟΙΤΗΤΙΚΕΣ ΑΣΚΗΣΕΙΣ, X. ΕΛΕΥΘΕΡΙΑΔΗΣ, Μ. ΖΑΜΑΝΗ, Α. ΛΙΟΛΙΟΣ, Μ. ΜΑΝΩΛΟΠΟΥΛΟΥ, X. ΠΕΤΡΙΔΟΥ, Η. ΣΑΒΒΙΔΗΣ, Εκδόσεις: COPY CITY
2. https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html
# Fire up graphlab create
```
import sys
sys.path.append('C:\Anaconda2\envs\dato-env\Lib\site-packages')
import graphlab
```
# Load some house value vs. crime rate data
Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
```
sales = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv/')
sales
```
#Exploring the data
The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
```
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
```
# Fit the regression model using crime as the feature
```
crime_model = graphlab.linear_regression.create(sales, target='HousePrice', features=['CrimeRate'],validation_set=None,verbose=False)
```
# Let's see what our fit looks like
Matplotlib is a Python plotting library we can use to visualize the fit. You can install it with:
'pip install matplotlib'
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(sales['CrimeRate'],sales['HousePrice'],'.',
sales['CrimeRate'],crime_model.predict(sales),'-')
```
Above: blue dots are original data, green line is the fit from the simple regression.
# Remove Center City and redo the analysis
Center City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's remove this datapoint and see what happens.
```
sales_noCC = sales[sales['MilesPhila'] != 0.0]
sales_noCC.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
```
### Refit our simple regression model on this modified dataset:
```
crime_model_noCC = graphlab.linear_regression.create(sales_noCC, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
```
### Look at the fit:
```
plt.plot(sales_noCC['CrimeRate'],sales_noCC['HousePrice'],'.',
sales_noCC['CrimeRate'],crime_model_noCC.predict(sales_noCC),'-')
```
# Compare coefficients for full-data fit versus no-Center-City fit
Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
```
crime_model.get('coefficients')
crime_model_noCC.get('coefficients')
```
Above: We see that for the "no Center City" version, per unit increase in crime, the predicted decrease in house prices is 2,287. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different!
### High leverage points:
Center City is said to be a "high leverage" point because it is at an extreme x value where there are not other observations. As a result, recalling the closed-form solution for simple regression, this point has the *potential* to dramatically change the least squares line since the center of x mass is heavily influenced by this one point and the least squares line will try to fit close to that outlying (in x) point. If a high leverage point follows the trend of the other data, this might not have much effect. On the other hand, if this point somehow differs, it can be strongly influential in the resulting fit.
### Influential observations:
An influential observation is one where the removal of the point significantly changes the fit. As discussed above, high leverage points are good candidates for being influential observations, but need not be. Other observations that are *not* leverage points can also be influential observations (e.g., strongly outlying in y even if x is a typical value).
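To make these ideas concrete, here is a small numpy sketch (toy data, not the Philadelphia dataset) computing the leverage of each observation in a simple regression with an intercept; the point at the extreme x value dominates:

```python
import numpy as np

# Toy data: index 0 plays the role of Center City,
# sitting at an extreme x value far from the other observations.
x = np.array([50.0, 1.0, 2.0, 3.0, 4.0, 5.0])

# Leverage of each point in simple regression with an intercept:
# h_i = 1/n + (x_i - x_bar)^2 / sum_j (x_j - x_bar)^2
n = len(x)
h = 1.0 / n + (x - x.mean())**2 / ((x - x.mean())**2).sum()
print(h)  # the extreme-x point has by far the largest leverage
```

For simple regression the leverages always sum to 2 (one per estimated coefficient), so a single point with leverage close to 1 is soaking up almost all of the fit's flexibility.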
# Remove high-value outlier neighborhoods and redo analysis
Based on the discussion above, a question is whether the outlying high-value towns are strongly influencing the fit. Let's remove them and see what happens.
```
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
crime_model_nohighend = graphlab.linear_regression.create(sales_nohighend, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
```
### Do the coefficients change much?
```
crime_model_noCC.get('coefficients')
crime_model_nohighend.get('coefficients')
```
Above: We see that removing the outlying high-value neighborhoods has *some* effect on the fit, but not nearly as much as our high-leverage Center City datapoint.
# How does a pendulum move?
> Grades: https://docs.google.com/spreadsheets/d/1QYtUajeyHoE-2jEKZ7yilkzu923br28Rqg0FCnQ1zHk/edit?usp=sharing
> A system of any kind (mechanical, electrical, pneumatic, etc.) is said to be a harmonic oscillator if, when released away from its equilibrium position, it returns toward it describing sinusoidal oscillations, or damped sinusoidal oscillations, around that stable position.
- https://es.wikipedia.org/wiki/Oscilador_armónico
References:
- http://matplotlib.org
- https://seaborn.pydata.org
- http://www.numpy.org
- http://ipywidgets.readthedocs.io/en/latest/index.html
**This is really the study of oscillations.**
___
<div>
<img style="float: left; margin: 0px 0px 15px 15px;" src="http://images.iop.org/objects/ccr/cern/51/3/17/CCast2_03_11.jpg" width="400px" height="100px" />
</div>
```
from IPython.display import YouTubeVideo
YouTubeVideo('k5yTVHr6V14')
```
The simplest systems to study in oscillations are the `mass-spring` system and the `simple pendulum`.
<div>
<img style="float: left; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/7/76/Pendulum.jpg" width="150px" height="50px" />
<img style="float: right; margin: 15px 15px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/ko/9/9f/Mass_spring.png" width="200px" height="100px" />
</div>
\begin{align}
\frac{d^2 x}{dt^2} + \omega_{0}^2 x &= 0, \quad \omega_{0} = \sqrt{\frac{k}{m}}\notag\\
\frac{d^2 \theta}{dt^2} + \omega_{0}^{2}\, \theta &= 0, \quad\mbox{where}\quad \omega_{0}^2 = \frac{g}{l}
\end{align}
___
## The `mass-spring` system
The solution of this `mass-spring` system is explained in terms of Newton's second law. In this case the mass remains constant and we only consider the $x$ direction. Then,
\begin{equation}
F = m \frac{d^2x}{dt^2}.
\end{equation}
What is the force? **Hooke's law!**
\begin{equation}
F = -k x, \quad k > 0.
\end{equation}
We see that the force opposes the displacement and its magnitude is proportional to it, where $k$ is the elastic (restoring) constant of the spring.
A model of the `mass-spring` system is therefore described by the following **differential equation**:
\begin{equation}
\frac{d^2x}{dt^2} + \frac{k}{m}x = 0,
\end{equation}
whose solution is written as
\begin{equation}
x(t) = A \cos(\omega_{o} t) + B \sin(\omega_{o} t)
\end{equation}
And its first derivative (the velocity) would be
\begin{equation}
\frac{dx(t)}{dt} = \omega_{0}[- A \sin(\omega_{0} t) + B\cos(\omega_{0}t)]
\end{equation}
<font color=red> See on the board what it means to be a solution of the differential equation.</font>
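As a quick numerical sanity check (a sketch added here, not part of the original derivation), we can verify that this $x(t)$ satisfies $\ddot{x} + \omega_0^2 x = 0$ by approximating the second derivative with central differences:

```python
import numpy as np

# Check numerically that x(t) = A cos(w0 t) + B sin(w0 t)
# satisfies x'' + w0^2 x = 0, using the same parameter values as below.
A, B, w0 = .5, .1, .5
t = np.linspace(0, 50, 5001)
dt = t[1] - t[0]
x = A*np.cos(w0*t) + B*np.sin(w0*t)

# second derivative via central differences
d2x = (x[2:] - 2*x[1:-1] + x[:-2]) / dt**2
residual = d2x + w0**2 * x[1:-1]
print(np.abs(residual).max())  # ~0 up to discretization error
```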
### **What do the plots of $x$ vs $t$ and $\frac{dx}{dt}$ vs $t$ look like?**
_This instruction makes the plots appear inside this environment._
```
%matplotlib inline
```
_This is the library with all the instructions for making plots._
```
import matplotlib.pyplot as plt
import matplotlib as mpl
label_size = 14
mpl.rcParams['xtick.labelsize'] = label_size
mpl.rcParams['ytick.labelsize'] = label_size
```
_And this is the library with all the necessary mathematical functions._
```
import numpy as np
# Define the functions to plot
A, B, w0 = .5, .1, .5 # Parameters
t = np.linspace(0, 50, 100) # Time vector from 0 to 50 with 100 points
x = A*np.cos(w0*t)+B*np.sin(w0*t) # Position
dx = w0*(-A*np.sin(w0*t)+B*np.cos(w0*t)) # Velocity
# Plot
plt.figure(figsize = (7, 4)) # Figure window with a given size
plt.plot(t, x, '-', lw = 1, ms = 1,
label = '$x(t)$') # Legend label
plt.plot(t, dx, 'ro-', lw = 1, ms = 4,
label = r'$\dot{x}(t)$')
plt.legend(loc='best')
plt.xlabel('$t$', fontsize = 20) # x-axis label
plt.show()
# Colors, labels and other formatting
plt.figure(figsize = (7, 4))
plt.scatter(t, x, lw=0, c = 'red',
label = '$x(t)$') # Scatter plot with points
plt.plot(t, x, 'r-', lw = 1) # Regular line plot
plt.scatter(t, dx, lw = 0, c = 'b',
label = r'$\frac{dx}{dt}$') # With the r prefix, backslashes are treated literally, not as escapes
plt.plot(t, dx, 'b-', lw = 1)
plt.xlabel('$t$', fontsize = 20)
plt.legend(loc = 'best') # Legend with the plot labels
plt.show()
```
And if we consider a set of oscillation frequencies, then
```
frecuencias = np.array([.1, .2 , .5, .6]) # Vector of different frequencies
plt.figure(figsize = (7, 4)) # Figure window with a given size
# Plot for each frequency
for w0 in frecuencias:
    x = A*np.cos(w0*t)+B*np.sin(w0*t)
    plt.plot(t, x, 'D-')
plt.xlabel('$t$', fontsize = 16) # x-axis label
plt.ylabel('$x(t)$', fontsize = 16) # y-axis label
plt.title('Oscillations', fontsize = 16) # Plot title
plt.show()
```
These colors are matplotlib's defaults; however, there is another library dedicated, among other things, to the presentation of plots.
```
import seaborn as sns
sns.set(style='ticks', palette='Set2')
frecuencias = np.array([.1, .2 , .5, .6])
plt.figure(figsize = (7, 4))
for w0 in frecuencias:
    x = A*np.cos(w0*t)+B*np.sin(w0*t)
    plt.plot(t, x, 'o-',
             label = '$\omega_0 = %s$'%w0) # Label each curve with its frequency (float-to-string conversion)
plt.xlabel('$t$', fontsize = 16)
plt.ylabel('$x(t)$', fontsize = 16)
plt.title('Oscillations', fontsize = 16)
plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5), prop={'size': 10})
plt.show()
```
If we want to manipulate things a bit more interactively, we use the following:
```
from ipywidgets import *
def masa_resorte(t = 0):
    A, B, w0 = .5, .1, .5 # Parameters
    x = A*np.cos(w0*t)+B*np.sin(w0*t) # Position
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.plot(x, [0], 'ko', ms = 10)
    ax.set_xlim(xmin = -0.6, xmax = .6)
    ax.axvline(x=0, color = 'r')
    ax.axhline(y=0, color = 'grey', lw = 1)
    fig.canvas.draw()

interact(masa_resorte, t = (0, 50,.01));
```
The option above will generally be slow, so it is advisable to use `interact_manual`.
```
def masa_resorte(t = 0):
    A, B, w0 = .5, .1, .5 # Parameters
    x = A*np.cos(w0*t)+B*np.sin(w0*t) # Position
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.plot(x, [0], 'ko', ms = 10)
    ax.set_xlim(xmin = -0.6, xmax = .6)
    ax.axvline(x=0, color = 'r')
    ax.axhline(y=0, color = 'grey', lw = 1)
    fig.canvas.draw()

interact_manual(masa_resorte, t = (0, 50,.01));
```
___
## Simple pendulum
Now, if we turn our attention to the motion of a simple pendulum _(small oscillations)_, the differential equation to solve has the same form:
\begin{equation}
\frac{d^2 \theta}{dt^2} + \omega_{0}^{2}\, \theta = 0, \quad\mbox{where}\quad \omega_{0}^2 = \frac{g}{l}.
\end{equation}
The most obvious difference is how we have defined $\omega_{0}$. This means that,
\begin{equation}
\theta(t) = A\cos(\omega_{0} t) + B\sin(\omega_{0}t)
\end{equation}
If we plot the equation above we will find behavior very similar to the one already discussed. That is why we will now look at the motion in the $xy$ plane, that is,
\begin{align}
x &= l \sin(\theta), \quad
y = -l \cos(\theta)
\end{align}
```
# Define a function that returns theta given the parameters and the time
def theta_t(a, b, g, l, t):
    omega_0 = np.sqrt(g/l)
    return a * np.cos(omega_0 * t) + b * np.sin(omega_0 * t)
# Interactive plot of the pendulum
def pendulo_simple(t = 0):
    fig = plt.figure(figsize = (5,5))
    ax = fig.add_subplot(1, 1, 1)
    x = 2 * np.sin(theta_t(.4, .6, 9.8, 2, t))
    y = - 2 * np.cos(theta_t(.4, .6, 9.8, 2, t))
    ax.plot(x, y, 'ko', ms = 10)
    ax.plot([0], [0], 'rD')
    ax.plot([0, x ], [0, y], 'k-', lw = 1)
    ax.set_xlim(xmin = -2.2, xmax = 2.2)
    ax.set_ylim(ymin = -2.2, ymax = .2)
    fig.canvas.draw()

interact_manual(pendulo_simple, t = (0, 10,.01));
```
### Initial conditions
What actually has to be solved is,
\begin{equation}
\theta(t) = \theta(0) \cos(\omega_{0} t) + \frac{\dot{\theta}(0)}{\omega_{0}} \sin(\omega_{0} t)
\end{equation}
> **Activity.** Modify the previous program to incorporate the initial conditions.
```
# Solution:
def theta_t(theta0, dtheta0, g, l, t):
    omega_0 = np.sqrt(g/l)
    a = theta0
    b = dtheta0/omega_0
    return a * np.cos(omega_0 * t) + b * np.sin(omega_0 * t)

def pendulo_simple(t = 0):
    fig = plt.figure(figsize = (5,5))
    ax = fig.add_subplot(1, 1, 1)
    x = 2 * np.sin(theta_t(np.pi/4, 0, 9.8, 2, t))
    y = - 2 * np.cos(theta_t(np.pi/4, 0, 9.8, 2, t))
    ax.plot(x, y, 'ko', ms = 10)
    ax.plot([0], [0], 'rD')
    ax.plot([0, x ], [0, y], 'k-', lw = 1)
    ax.set_xlim(xmin = -2.2, xmax = 2.2)
    ax.set_ylim(ymin = -2.2, ymax = .2)
    fig.canvas.draw()

interact_manual(pendulo_simple, t = (0, 10,.01));
```
### Phase plane $(x, \frac{dx}{dt})$
The position and velocity for the `mass-spring` system are written as:
\begin{align}
x(t) &= x(0) \cos(\omega_{o} t) + \frac{\dot{x}(0)}{\omega_{0}} \sin(\omega_{o} t)\\
\dot{x}(t) &= -\omega_{0}x(0) \sin(\omega_{0} t) + \dot{x}(0)\cos(\omega_{0}t)
\end{align}
```
k = 3 # elastic constant [N/m]
m = 1 # [kg]
omega_0 = np.sqrt(k/m)
x_0 = .5
dx_0 = .1
t = np.linspace(0, 15, 300)
x_t = x_0 *np.cos(omega_0 *t) + (dx_0/omega_0) * np.sin(omega_0 *t)
dx_t = -omega_0 * x_0 * np.sin(omega_0 * t) + dx_0 * np.cos(omega_0 * t)
plt.figure(figsize = (7, 4))
plt.plot(t, x_t, label = '$x(t)$', lw = 4)
#plt.plot(t, dx_t, label = '$\dot{x}(t)$', lw = 1)
plt.plot(t, dx_t/omega_0, label = '$\dot{x}(t)$', lw = 4) # Show that after scaling, the amplitude stays the same
plt.legend(loc='center left', bbox_to_anchor=(1.01, 0.5), prop={'size': 14})
plt.xlabel('$t$', fontsize = 18)
plt.show()
plt.figure(figsize = (5, 5))
plt.plot(x_t, dx_t/omega_0, 'ro', ms = 2)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
plt.show()
plt.figure(figsize = (5, 5))
plt.scatter(x_t, dx_t/omega_0, cmap = 'viridis', c = dx_t, s = 8, lw = 0)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
plt.show()
```
#### Multiple initial conditions
```
k = 3 # elastic constant [N/m]
m = 1 # [kg]
omega_0 = np.sqrt(k/m)
t = np.linspace(0, 50, 50)
x_0s = np.array([.7, .5, .25, .1])
dx_0s = np.array([.2, .1, .05, .01])
cmaps = np.array(['viridis', 'inferno', 'magma', 'plasma'])
plt.figure(figsize = (6, 6))
for indx, x_0 in enumerate(x_0s):
    x_t = x_0 *np.cos(omega_0 *t) + (dx_0s[indx]/omega_0) * np.sin(omega_0 *t)
    dx_t = -omega_0 * x_0 * np.sin(omega_0 * t) + dx_0s[indx] * np.cos(omega_0 * t)
    plt.scatter(x_t, dx_t/omega_0, cmap = cmaps[indx],
                c = dx_t, s = 10,
                lw = 0)
plt.xlabel('$x(t)$', fontsize = 18)
plt.ylabel('$\dot{x}(t)/\omega_0$', fontsize = 18)
#plt.legend(loc='center left', bbox_to_anchor=(1.05, 0.5))
```
Trajectories of the simple harmonic oscillator in phase space $(x,\, \dot{x}\,/\omega_0)$ for different values of the energy.
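One way to see why these curves are closed ellipses is to check, with the same parameter values as above, that the total energy is constant along each trajectory (a sketch, not part of the original notebook):

```python
import numpy as np

# The total energy E = (1/2) m xdot^2 + (1/2) k x^2 is constant
# along each trajectory, which is why the phase-space curves are ellipses.
k, m = 3.0, 1.0
omega_0 = np.sqrt(k/m)
x_0, dx_0 = .5, .1
t = np.linspace(0, 15, 300)
x_t = x_0*np.cos(omega_0*t) + (dx_0/omega_0)*np.sin(omega_0*t)
dx_t = -omega_0*x_0*np.sin(omega_0*t) + dx_0*np.cos(omega_0*t)

E = 0.5*m*dx_t**2 + 0.5*k*x_t**2
print(E.max() - E.min())  # ~0: energy is conserved
```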
# Announcements
## 1. Second-module exam: available Wednesday, October 17; to be turned in Tuesday, October 23.
## 2. Second-module project due Monday, October 29.
<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Lázaro Alonso. Modified by Esteban Jiménez Rodríguez
</footer>
# Data Science
### Exploring the Iris Dataset
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
```
### Load Data
Load the data from CSV file into a Pandas dataframe, and print the top few rows.
```
data = pd.read_csv('iris.data')
data.head()
```
### Customize columns
Drop the redundant id column, and rename Attribute columns to integers. Save column names for use later.
```
data = data.drop('id', axis=1)
cols = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width']
data.rename(columns = {cols[0]:0, cols[1]:1, cols[2]:2, cols[3]:3}, inplace=True)
data.loc[::50]
```
### Statistical Overview
Show shape of dataframe and statistical overview of attribute columns.
```
print(data.shape)
data.describe()
# same as data['species'].value_counts()
data.species.value_counts()
```
### Histograms
Histograms are useful for showing how the data is distributed. They're ridiculously easy to use, but can only show two axes.
```
plt.hist(data[0])
```
Here we give 4 columns of data to the Histogram maker, and it automatically color codes them.
```
plt.hist([data[0], data[1], data[2], data[3]])
```
To add a Legend we need to add labels to the Histogram builder as a list of column names, and call the legend function.
```
plt.hist([data[0], data[1], data[2], data[3]], label=[cols[0],cols[1],cols[2],cols[3]])
plt.legend()
```
Or we can make 4 separate calls to the Histogram builder and get 4 overlapping plots.
```
plt.hist(data[0])
plt.hist(data[1])
plt.hist(data[2])
plt.hist(data[3])
```
We can use alpha to control the opacity of plots. An alpha of 1 is opaque; an alpha of 0 is transparent.
```
plt.hist(data[0])
plt.hist(data[1], alpha=1)
plt.hist(data[2], alpha=0.6)
plt.hist(data[3], alpha=0.5)
```
We can also plot the 4 columns on separate subplots to make it more readable. This is very readable, but beware that each plot automatically scales its axes to the data.
```
fig, ax = plt.subplots(2, 2, figsize=(8, 4))
ax[0, 0].hist(data[0])
ax[0, 1].hist(data[1])
ax[1, 0].hist(data[2])
ax[1, 1].hist(data[3])
plt.show()
```
Adding titles to the previous plot makes it more readable.
```
fig, ax = plt.subplots(2, 2, figsize=(8, 6))
ax[0, 0].hist(data[0])
ax[0, 1].hist(data[1])
ax[1, 0].hist(data[2])
ax[1, 1].hist(data[3])
ax[0, 0].set_title(cols[0])
ax[0, 1].set_title(cols[1])
ax[1, 0].set_title(cols[2])
ax[1, 1].set_title(cols[3])
plt.show()
```
### Scatter Plots
These are probably more useful for this dataset because they can show clusters by species. The most basic scatter plot does not distinguish species.
```
plt.scatter(
data[0],
data[1],)
```
Adding color coding by species allows us to see clustering for 2 attributes for each species. Here setosa is secluded, but virginica and versicolor overlap.
```
colors = {'Iris-setosa':'red', 'Iris-virginica':'blue', 'Iris-versicolor':'green'}
plt.scatter(
data[2],
data[3],
c=data['species'].map(colors))
```
Adding labels to the x and y axes is useful, but we can see the data for virginica and versicolor still overlap. If we could find 1 attribute where there's no overlap for these 2 species then we could use those to definitively distinguish them. But unfortunately all 4 attributes have some overlap.
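That overlap claim can be checked programmatically. Here is a hedged sketch on a tiny made-up frame with the same shape as `data` (the real frame could be substituted); two species overlap on an attribute exactly when each one's minimum is at most the other's maximum:

```python
import pandas as pd

# Hypothetical miniature frame in the same shape as `data` above (made-up values),
# used to sketch a per-attribute range-overlap check between two species.
df = pd.DataFrame({
    0: [5.1, 4.9, 5.8, 6.3, 6.0, 7.0],
    'species': ['Iris-setosa', 'Iris-setosa',
                'Iris-virginica', 'Iris-virginica',
                'Iris-versicolor', 'Iris-versicolor'],
})
ranges = df.groupby('species')[0].agg(['min', 'max'])

# Two species overlap on an attribute iff each one's min is <= the other's max
a, b = 'Iris-virginica', 'Iris-versicolor'
overlap = (ranges.loc[a, 'min'] <= ranges.loc[b, 'max']) and \
          (ranges.loc[b, 'min'] <= ranges.loc[a, 'max'])
print(a, 'vs', b, '-> ranges overlap:', overlap)
```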
```
plt.scatter(
data[0],
data[2],
c=data['species'].map(colors))
plt.xlabel(cols[0])
plt.ylabel(cols[2])
```
Here we add a title to the plot, and show attributes 1 and 3.
```
plt.scatter(
data[1],
data[3],
c=data['species'].map(colors))
plt.xlabel(cols[1])
plt.ylabel(cols[3])
plt.title('Iris Data Scatter Plot')
```
### Correlation
We can see the correlation between attributes. A correlation close to 1 helps us distinguish between species. Low correlation doesn't help us.
```
data.corr()
```
### Box and Whisker Plots
Box plots show the distribution of data over an attribute by showing the 25th, 50th (median) and 75th percentiles. Again, the simplest plots are not very useful, but when we add labels and color coding the plots are revealing.
```
plt.boxplot([data[0], data[1], data[2], data[3]])
```
This shows a boxplot for one attribute, sorted by species. For this attribute we can see a big overlap between the 3 species, so it's not very useful for distinguishing. An iris with 5.5 or 6.0 for this attribute could be any of the 3 species.
```
data.boxplot(column=[0], by=['species'])
```
It's tricky to do subplots, but worth it. We can see setosa has smaller petals than the other 2 species. And versicolor has, on average, smaller sepals and smaller petals than virginica; but there is some overlap.
```
fig, ax = plt.subplots(2, 2, figsize=(8, 6))
A = [data[0][data.species == 'Iris-setosa'], data[0][data.species == 'Iris-virginica'], data[0][data.species == 'Iris-versicolor']]
B = [data[1][data.species == 'Iris-setosa'], data[1][data.species == 'Iris-virginica'], data[1][data.species == 'Iris-versicolor']]
C = [data[2][data.species == 'Iris-setosa'], data[2][data.species == 'Iris-virginica'], data[2][data.species == 'Iris-versicolor']]
D = [data[3][data.species == 'Iris-setosa'], data[3][data.species == 'Iris-virginica'], data[3][data.species == 'Iris-versicolor']]
ax[0, 0].boxplot(A, widths = 0.7)
ax[0, 0].set_title(cols[0])
ax[0, 1].boxplot(B, widths = 0.7)
ax[0, 1].set_title(cols[1])
ax[1, 0].boxplot(C, widths = 0.7)
ax[1, 0].set_title(cols[2])
ax[1, 1].boxplot(D, widths = 0.7)
ax[1, 1].set_title(cols[3])
```
This plot does an awesome job of showing distributions of all 4 attributes for all 3 species. 12 box plots in 1 graph! The color coding makes it more readable.
```
def set_color(bp):
    plt.setp(bp['boxes'][0], color='blue')
    plt.setp(bp['boxes'][1], color='red')
    plt.setp(bp['boxes'][2], color='green')
A = [data[0][data.species == 'Iris-setosa'], data[0][data.species == 'Iris-virginica'], data[0][data.species == 'Iris-versicolor']]
B = [data[1][data.species == 'Iris-setosa'], data[1][data.species == 'Iris-virginica'], data[1][data.species == 'Iris-versicolor']]
C = [data[2][data.species == 'Iris-setosa'], data[2][data.species == 'Iris-virginica'], data[2][data.species == 'Iris-versicolor']]
D = [data[3][data.species == 'Iris-setosa'], data[3][data.species == 'Iris-virginica'], data[3][data.species == 'Iris-versicolor']]
# add this to remove outlier symbols: 0, '',
bp = plt.boxplot(A, 0, '', positions = [1, 2, 3], widths = 0.7)
set_color(bp)
bp = plt.boxplot(B, 0, '', positions = [5, 6, 7], widths = 0.7)
set_color(bp)
bp = plt.boxplot(C, 0, '', positions = [9, 10, 11], widths = 0.7)
set_color(bp)
bp = plt.boxplot(D, 0, '', positions = [13, 14, 15], widths = 0.7)
set_color(bp)
ax = plt.axes()
ax.set_xticks([2, 6, 10, 14])
ax.set_xticklabels(cols)
plt.show()
```
```
# setup notebook if it is run on Google Colab, cwd = notebook file location
try:
    # change notebook_path if this notebook is in a different subfolder of Google Drive
    notebook_path = "Projects/QuantumFlow/notebooks"
    import os
    from google.colab import drive
    drive.mount('/content/gdrive')
    os.chdir("/content/gdrive/My Drive/" + notebook_path)
    %tensorflow_version 2.x
    !pip install -q ruamel.yaml
except:
    pass
# imports
import tensorflow as tf
# setup paths and variables for shared code (../quantumflow) and data (../data)
import sys
sys.path.append('../')
data_dir = "../data"
# import shared code, must run 0_create_shared_project_files.ipynb first!
from quantumflow.utils import load_hyperparameters, QFDataset
```
## Kernel Ridge Regression
### Paper:
$$T^{\text{ML}}(\mathbf{n}) = \not{\bar{T}}\sum_{j=1}^{M}\alpha_j k(\mathbf{n}_j, \mathbf{n})$$
$$k(\mathbf{n}, \mathbf{n}') = \text{exp}(-\| \mathbf{n} - \mathbf{n}'\|^2/(2\sigma^2))$$
$$\text{Optimize}:~~~~\mathcal{C}(\mathbf{\alpha}) = \sum_{j=1}^{M}\ (T_j^{\text{ML}} - T_j)^2 + \lambda \|\alpha\|^2$$
---
### Sklearn:
$$T^{\text{ML}}(\mathbf{n}) = 1\sum_{j=1}^{M}\omega_j \tilde{k}(\mathbf{n}_j, \mathbf{n})$$
$$\tilde{k}(\mathbf{n}, \mathbf{n}') = \text{exp}(-\gamma~\| \mathbf{n} - \mathbf{n}'\|^2)$$
$$\text{Optimize}:~~~~\mathcal{C}(\mathbf{\omega}) = \sum_{j=1}^{M}\ (T_j^{\text{ML}} - T_j)^2 + \tilde{\alpha} \|\omega\|^2$$
---
$$\omega = \bar{T} \alpha$$
$$\gamma = \frac{1}{2\sigma^2}$$
$$\tilde{\alpha} = \frac{1}{\not{\bar{T}}^2} \lambda$$
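To make the parameter mapping concrete, here is a minimal numpy-only sketch (toy data and assumed hyperparameter values, standing in for sklearn's `KernelRidge`) of the fit in the sklearn parameterization: the weights $\omega$ solve $(\tilde{K} + \tilde{\alpha} I)\,\omega = T$ with $\gamma = 1/(2\sigma^2)$:

```python
import numpy as np

# Toy "densities" and targets (assumed values, not the paper's dataset).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))      # 20 samples on 5 grid points
y = np.sum(X**2, axis=1)          # toy kinetic-energy targets

sigma, lam = 2.0, 1e-8
gamma = 1.0 / (2.0 * sigma**2)    # gamma = 1/(2 sigma^2)

# RBF Gram matrix and the kernel ridge weights omega = (K + lam I)^-1 y
sq_dists = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
K = np.exp(-gamma * sq_dists)
omega = np.linalg.solve(K + lam * np.eye(len(X)), y)

y_pred = K @ omega                # training-set predictions
print(np.abs(y_pred - y).max())   # small for tiny regularization
```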
```
class KRRKineticEnergyFunctional(tf.Module):
    def __init__(self, X_train, y_train, m, l, alpha=None, lambda_=None, gamma=None, sigma=None):
        super(KRRKineticEnergyFunctional, self).__init__()
        from sklearn.kernel_ridge import KernelRidge

        if alpha is None:
            alpha = lambda_
        if gamma is None:
            gamma = 1/(2*sigma**2)

        model = KernelRidge(alpha=alpha, kernel='rbf', gamma=gamma)
        model.fit(X_train, y_train)

        self.X_train = tf.Variable(initial_value=X_train)
        self.weights = tf.Variable(initial_value=model.dual_coef_)
        self.gamma = tf.Variable(initial_value=gamma, dtype=self.X_train.dtype)
        self.m = tf.Variable(initial_value=m)
        self.l = tf.Variable(initial_value=l)

    def rbf_kernel(self, X):
        return tf.exp(-self.gamma*tf.reduce_sum(tf.square(tf.expand_dims(X, axis=2) - tf.expand_dims(tf.transpose(self.X_train), axis=0)), axis=1))

    def derivative(self, X):
        h = 1/(self.X_train.shape[1]-1)
        return -1/h*tf.reduce_sum(tf.expand_dims(self.weights, axis=0)*2*self.gamma* \
            (tf.expand_dims(X, axis=2) - tf.expand_dims(tf.transpose(self.X_train), axis=0))* \
            tf.expand_dims(self.rbf_kernel(X), axis=1), axis=2)

    def kinetic_energy(self, kernel):
        return tf.reduce_sum(tf.expand_dims(self.weights, axis=0)*kernel, axis=1, name='kinetic_energy')

    @tf.function
    def __call__(self, X):
        return {'kinetic_energy': self.kinetic_energy(self.rbf_kernel(X))}

    @tf.function
    def projection_subspace(self, X):
        metric = tf.reduce_sum(tf.square(tf.expand_dims(X, axis=2) - tf.expand_dims(tf.transpose(self.X_train), axis=0)), axis=1)
        _, closest_indices = tf.math.top_k(-metric, k=self.m)
        X_closest = tf.gather(self.X_train, closest_indices)
        X_diff = tf.expand_dims(X, axis=1) - X_closest
        C = tf.linalg.matmul(X_diff, X_diff, transpose_a=True)/tf.cast(self.m, X.dtype)
        eigen_vals, eigen_vecs = tf.linalg.eigh(C)
        return eigen_vecs[:, :, -self.l:]

    @tf.function
    def projection_matrix(self, X):
        largest_eigen_vecs = self.projection_subspace(X)
        return tf.linalg.matmul(largest_eigen_vecs, largest_eigen_vecs, transpose_b=True)

    @tf.function
    def project(self, X, functional_derivative):
        projection_subspace = self.projection_subspace(X)
        return tf.reduce_sum(tf.linalg.matmul(projection_subspace, tf.linalg.matmul(projection_subspace, tf.expand_dims(functional_derivative, axis=2), transpose_a=True)), axis=-1)

    def signatures(self, dataset_train):
        return {'serving_default': self.__call__.get_concrete_function(tf.TensorSpec([None, dataset_train.discretisation_points], dataset_train.dtype, name='density')),
                'projection_subspace': self.projection_subspace.get_concrete_function(tf.TensorSpec([None, dataset_train.discretisation_points], dataset_train.dtype, name='density')),
                'projection_matrix': self.projection_matrix.get_concrete_function(tf.TensorSpec([None, dataset_train.discretisation_points], dataset_train.dtype, name='density')),
                'project': self.project.get_concrete_function(tf.TensorSpec([None, dataset_train.discretisation_points], dataset_train.dtype, name='density'),
                                                              tf.TensorSpec([None, dataset_train.discretisation_points], dataset_train.dtype, name='functional_derivative'))
               }
data_dir = "../data"
experiment = 'ke_krr'
base_dir = os.path.join(data_dir, experiment)
if not os.path.exists(base_dir): os.makedirs(base_dir)
file_hyperparams = os.path.join(base_dir, "hyperparams.config")
%%writefile $file_hyperparams
default: &DEFAULT
run_name: default
dataset_train: recreate/dataset_paper.hdf5
dataset_test: recreate/dataset_test.hdf5
dtype: float64
predict_batch_size: 100
N: 1
features: ['density']
targets: ['kinetic_energy']
von_weizsaecker_split: False
model_kwargs:
lambda_: 12.0E-14
sigma: 43
m: 30
l: 5
allN: &ALLN
<<: *DEFAULT
N: all
model_kwargs:
lambda_: 3.2E-14
sigma: 47
m: 30
l: 5
vW:
<<: *DEFAULT
N: 2
von_weizsaecker_split: True
vW_allN:
<<: *DEFAULT
N: all
von_weizsaecker_split: True
von_weizsaecker_factor: 0.01
model_kwargs:
lambda_: 3.2E-14
sigma: 47
m: 30
l: 5
run_name = 'default'
params = load_hyperparameters(file_hyperparams, run_name=run_name)
dataset_train = QFDataset(os.path.join(data_dir, params['dataset_train']), params)
model = KRRKineticEnergyFunctional(X_train=dataset_train.density, y_train=dataset_train.kinetic_energy, **params['model_kwargs'])
params['export_dir'] = os.path.join(data_dir, experiment, run_name, 'saved_model')
if not os.path.exists(params['export_dir']): os.makedirs(params['export_dir'])
tf.saved_model.save(model, params['export_dir'], signatures=model.signatures(dataset_train))
```
# Analysis
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
figsize = (20, 3)
dpi = None
dataset_test = QFDataset(os.path.join(data_dir, params['dataset_test']), params)
dataset_sample = QFDataset(os.path.join(data_dir, 'recreate/dataset_sample.hdf5'), params)
y_predict = tf.concat([model(dataset_test.density[i*params['predict_batch_size']:(i+1)*params['predict_batch_size']])['kinetic_energy'] for i in range(len(dataset_test.density)//params['predict_batch_size'])], axis=0)
absolute_error = np.abs(y_predict - dataset_test.kinetic_energy)
MAE = np.mean(absolute_error)
ae_std = np.std(absolute_error)
ae_max = np.max(absolute_error)
kcalmol_per_hartree = 627.51
print("MAE:", MAE*kcalmol_per_hartree, "kcal/mol")
print("std:", ae_std*kcalmol_per_hartree, "kcal/mol")
print("max:", ae_max*kcalmol_per_hartree, "kcal/mol")
print("\nrelative error:", np.mean(absolute_error/dataset_test.kinetic_energy))
import pandas as pd
paper_weights = 10**7*pd.read_csv('1b_paper_potentials.txt', delimiter=' ')['αj'].values
print('Kernel Ridge: ', model.weights.numpy()[:4], '...')
print('Paper Weights:', paper_weights[:4], '...')
print('Deviation:', np.mean(np.abs(model.weights.numpy() - paper_weights)/np.abs(paper_weights)))
plt.figure(figsize=figsize, dpi=dpi)
plt.hist(model.weights.numpy(), bins=100, label="weights")
plt.hist(model.weights.numpy() - paper_weights, bins=50, label="error")
#plt.title("Distribution of weights")
plt.xlabel('α parameters')
plt.ylabel('count')
plt.ylim([0, 10])
plt.legend()
plt.show()
```
## functional derivative
$$ \frac{1}{\Delta x} \nabla T^\text{ML}(\mathbf{n}) = \bar{T}\sum_{j=1}^{M}\alpha_j'(\mathbf{n}_j - \mathbf{n})k(\mathbf{n}_j, \mathbf{n}) = -\frac{1}{h} \sum_{j=1}^{M}\omega_j \gamma 2(\mathbf{n} - \mathbf{n}_j)k(\mathbf{n}_j, \mathbf{n})$$
```
prediction_derivative = model.derivative(dataset_sample.density)
plt.figure(figsize=figsize, dpi=dpi)
plt.plot(dataset_sample.x, tf.transpose(prediction_derivative), 'r', label='MLA')
plt.plot(dataset_sample.x, tf.transpose(dataset_sample.derivative), '--k', label='Exact')
plt.ylim([-40, 40])
plt.grid(True)
plt.xlabel('x / bohr')
plt.ylabel('functional derivative')
plt.legend(loc='best')
plt.show()
prediction_derivative_proj = model.project(dataset_sample.density, prediction_derivative)
derivative_proj = model.project(dataset_sample.density, dataset_sample.derivative)
plt.figure(figsize=figsize, dpi=dpi)
plt.plot(dataset_sample.x, tf.transpose(prediction_derivative_proj), 'r', label="MLA")
plt.plot(dataset_sample.x, tf.transpose(derivative_proj), '--k', label='Exact (projected functional derivative)')
plt.plot(dataset_sample.x, tf.transpose(dataset_sample.derivative), '--g', label='Actual functional derivative')
plt.ylim([-10, 25])
plt.xlabel('x / bohr')
plt.ylabel('functional derivative')
plt.legend(loc='best')
plt.grid()
plt.show()
X_train = dataset_train.density
X = dataset_sample.potential
m = 30
l = 5
metric = tf.reduce_sum(tf.square(tf.expand_dims(X, axis=2) - tf.expand_dims(tf.transpose(X_train), axis=0)), axis=1)
_, closest_indices = tf.math.top_k(-metric, k=m)
X_closest = tf.gather(X_train, closest_indices)
X_diff = tf.expand_dims(X, axis=1) - X_closest
C = tf.linalg.matmul(X_diff, X_diff, transpose_a=True)/tf.cast(m, X.dtype)
eigen_vals, eigen_vecs = tf.linalg.eigh(C)
largest_eigen_vecs = eigen_vecs[:, :, -l:]
P_ml = tf.linalg.matmul(largest_eigen_vecs, largest_eigen_vecs, transpose_b=True)
potential_proj = tf.linalg.matvec(P_ml, dataset_sample.potential)
plt.figure(figsize=figsize, dpi=dpi)
plt.plot(dataset_sample.x, largest_eigen_vecs[0])
plt.grid()
plt.show()
plt.figure(figsize=figsize, dpi=dpi)
plt.plot(dataset_sample.x, tf.transpose(potential_proj))
plt.grid()
plt.show()
```
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from mpl_toolkits.basemap import Basemap
from shapely.geometry import Point, Polygon, MultiPoint, MultiPolygon
from shapely.prepared import prep
import fiona
from matplotlib.collections import PatchCollection
from descartes import PolygonPatch
import json
import datetime
with open('data/mwrd_cso.geojson', 'r') as fh:
raw = json.loads(fh.read())
features_raw = raw['features']
features_raw[0:1]
features = []
for feature in features_raw:
    features.append({'lat': feature['geometry']['coordinates'][1],
                     'lon': feature['geometry']['coordinates'][0],
                     'waterway_reach': feature['properties']['WATERWAY_REACH'],
                     'associated_plant': feature['properties']['ASSOCIATED_PLANT'],
                     'tarp_connection': feature['properties']['REPRESENTED_OUTFALL']})
features[0:5]
outfall_points = pd.DataFrame(features)
outfall_points.reset_index(drop=True, inplace=True)
outfall_points = outfall_points.groupby('tarp_connection').first()
outfall_points.head()
csos = pd.read_csv('data/merged_cso_data.csv')
csos['Open date/time'] = pd.to_datetime(csos['Open date/time'])
csos['Close date/time'] = pd.to_datetime(csos['Close date/time'])
csos['Duration'] = csos['Close date/time'] - csos['Open date/time']
outfall_cumulative = pd.DataFrame(csos.groupby(by=['Outfall Structure'])['Duration'].sum().sort_values(ascending=False))
outfall_cumulative['Duration_mins'] = outfall_cumulative['Duration'] / np.timedelta64(1, 'm')
outfall_cumulative.head()
outfalls_with_durations = list(outfall_cumulative.index)
outfalls_with_durations[:5]
outfalls_with_locations = list(outfall_points.index)
outfalls_with_locations[:5]
# Overflows where we don't have lat/lon
len(set(outfalls_with_durations) - set(outfalls_with_locations))
# Locations where we don't have CSO data
len(set(outfalls_with_locations) - set(outfalls_with_durations))
type(outfalls_with_locations)
outfalls_with_both = list(set(outfalls_with_locations).intersection(outfalls_with_durations))
outfalls_with_both[:5]
data_points = pd.concat([outfall_cumulative, outfall_points], axis=1, join='inner')
data_points['name'] = data_points.index
data_points.head()
data_points.to_json(orient='records')
# make sure the value of resolution is a lowercase L,
# for 'low', not a numeral 1
my_map = Basemap(projection='merc', lat_0=lat0, lon_0=lon0,
resolution = 'h', area_thresh = 0.01,
llcrnrlon=lon_min, llcrnrlat=lat_min,
urcrnrlon=lon_max, urcrnrlat=lat_max)
my_map.drawcoastlines()
my_map.drawrivers()
my_map.drawmapboundary()
plt.show()
# make sure the value of resolution is a lowercase L,
# for 'low', not a numeral 1
my_map = Basemap(projection='merc', lat_0=lat0, lon_0=lon0,
resolution = 'h', area_thresh = 0.01,
llcrnrlon=lon_min, llcrnrlat=lat_min,
urcrnrlon=lon_max, urcrnrlat=lat_max)
my_map.drawcoastlines()
my_map.drawrivers()
my_map.drawmapboundary()
min_marker_size = 2.5
for index, datapoint in data_points.iterrows():
x,y = my_map(float(datapoint['lon']), float(datapoint['lat']))
msize = datapoint['Duration_mins'] * min_marker_size
my_map.plot(x, y, 'ro', markersize=msize)
print(index)
plt.show()
my_map = Basemap(projection='merc', lat_0=lat0, lon_0=lon0,
resolution = 'h', area_thresh = 0.01,
llcrnrlon=lon_min, llcrnrlat=lat_min,
urcrnrlon=lon_max, urcrnrlat=lat_max)
my_map.drawcoastlines()
my_map.drawrivers()
my_map.drawmapboundary()
min_marker_size = 2.5
for index, datapoint in data_points.iterrows():
    x, y = my_map(float(datapoint['lon']), float(datapoint['lat']))
    msize = datapoint['Duration_mins'] * min_marker_size
    my_map.plot(x, y, 'ro', markersize=msize)
plt.show()
```
# Part 3 : Mitigate Bias, Train another unbiased Model and Put in the Model Registry
<a id='aup-overview'></a>
## [Overview](./0-AutoClaimFraudDetection.ipynb)
* [Notebook 0 : Overview, Architecture and Data Exploration](./0-AutoClaimFraudDetection.ipynb)
* [Notebook 1: Data Prep, Process, Store Features](./1-data-prep-e2e.ipynb)
* [Notebook 2: Train, Check Bias, Tune, Record Lineage, and Register a Model](./2-lineage-train-assess-bias-tune-registry-e2e.ipynb)
* **[Notebook 3: Mitigate Bias, Train New Model, Store in Registry](./3-mitigate-bias-train-model2-registry-e2e.ipynb)**
* **[Architecture](#train2)**
* **[Develop a second model](#second-model)**
* **[Analyze the Second Model for Bias](#analyze-second-model)**
* **[View Results of Clarify Bias Detection Job](#view-second-clarify-job)**
* **[Configure and Run Clarify Explainability Job](#explainability)**
* **[Create Model Package for the Second Trained Model](#model-package)**
* [Notebook 4: Deploy Model, Run Predictions](./4-deploy-run-inference-e2e.ipynb)
* [Notebook 5 : Create and Run an End-to-End Pipeline to Deploy the Model](./5-pipeline-e2e.ipynb)
This notebook shows how to detect bias using Clarify, mitigate it using SMOTE, train another model, and place it in the model registry along with the lineage of all the artifacts created along the way (data, code, and model metadata).
### Import libraries
```
import json
import time
import boto3
import sagemaker
import numpy as np
import pandas as pd
import awswrangler as wr
import matplotlib.pyplot as plt
from imblearn.over_sampling import SMOTE
from sagemaker.xgboost.estimator import XGBoost
from model_package_src.inference_specification import InferenceSpecification
%matplotlib inline
```
### Load stored variables
If you have run this notebook before, you can reuse the resources already created in AWS. Run the cell below to load the previously stored variables; you should see a printout of the existing variables. If nothing is printed, this is probably the first time you have run the notebook.
```
%store -r
%store
```
**<font color='red'>Important</font>: You must run the previous notebooks before the variables can be retrieved with the StoreMagic command.**
### Set region, boto3 and SageMaker SDK variables
```
#You can change this to a region of your choice
import sagemaker
region = sagemaker.Session().boto_region_name
print("Using AWS Region: {}".format(region))
boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client("s3", region_name=region)
sagemaker_boto_client = boto_session.client("sagemaker")
sagemaker_session = sagemaker.session.Session(
boto_session=boto_session, sagemaker_client=sagemaker_boto_client
)
sagemaker_role = sagemaker.get_execution_role()
account_id = boto3.client("sts").get_caller_identity()["Account"]
# variables used for parameterizing the notebook run
model_2_name = f"{prefix}-xgboost-post-smote"
train_data_upsampled_s3_path = f"s3://{bucket}/{prefix}/data/train/upsampled/train.csv"
bias_report_2_output_path = f"s3://{bucket}/{prefix}/clarify-output/bias-2"
explainability_output_path = f"s3://{bucket}/{prefix}/clarify-output/explainability"
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
clarify_instance_count = 1
clarify_instance_type = "ml.c5.xlarge"
```
<a id ='train2'> </a>
## Architecture for this ML Lifecycle Stage : Train, Check Bias, Tune, Record Lineage, Register Model
[overview](#aup-overview)
___

<a id='second-model'></a>
## Develop a second model
[overview](#aup-overview)
___
For this second model, we use SMOTE to correct the gender imbalance in the dataset and train another model with XGBoost. This model is also stored in the registry and ultimately approved for deployment.
```
train = pd.read_csv("data/train.csv")
test = pd.read_csv("data/test.csv")
train
test
```
<a id='smote'></a>
### Resolve class imbalance using SMOTE
To handle the imbalance, we can oversample (i.e., upsample) the minority class using [SMOTE (Synthetic Minority Over-sampling Technique)](https://arxiv.org/pdf/1106.1813.pdf). If you get an ImportError when importing SMOTE after installing the imbalanced-learn module, restart the kernel.
#### Gender balance before SMOTE
```
gender = train['customer_gender_female']
gender.value_counts()
```
#### Gender balance after SMOTE
```
train.head()
sm = SMOTE(random_state=42)
train_data_upsampled, gender_res = sm.fit_resample(train, gender)
train_data_upsampled['customer_gender_female'].value_counts()
```
### Train new model
```
train_data_upsampled.to_csv("data/upsampled_train.csv", index=False)
s3_client.upload_file(
Filename="data/upsampled_train.csv",
Bucket=bucket,
Key=f"{prefix}/data/train/upsampled/train.csv",
)
xgb_estimator = XGBoost(
entry_point="xgboost_starter_script.py",
hyperparameters=hyperparameters,
role=sagemaker_role,
instance_count=train_instance_count,
instance_type=train_instance_type,
framework_version="1.0-1",
)
if 'training_job_2_name' not in locals():
xgb_estimator.fit(inputs = {'train': train_data_upsampled_s3_path})
training_job_2_name = xgb_estimator.latest_training_job.job_name
%store training_job_2_name
else:
print(f'Using previous training job: {training_job_2_name}')
```
### Register artifacts
```
training_job_2_info = sagemaker_boto_client.describe_training_job(
TrainingJobName=training_job_2_name
)
```
#### Code artifact
```
# return any existing artifacts that match our training job's code ARN
code_s3_uri = training_job_2_info["HyperParameters"]["sagemaker_submit_directory"]
list_response = list(
sagemaker.lineage.artifact.Artifact.list(
source_uri=code_s3_uri, sagemaker_session=sagemaker_session
)
)
# use the existing artifact if it's already been created, otherwise create a new artifact
if list_response:
code_artifact = list_response[0]
print(f"Using existing artifact: {code_artifact.artifact_arn}")
else:
code_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name="TrainingScript",
source_uri=code_s3_uri,
artifact_type="Code",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {code_artifact.artifact_arn}: SUCCESSFUL")
```
#### Training data artifact
```
training_data_s3_uri = training_job_2_info['InputDataConfig'][0]['DataSource']['S3DataSource']['S3Uri']
list_response = list(
sagemaker.lineage.artifact.Artifact.list(
source_uri=training_data_s3_uri, sagemaker_session=sagemaker_session
)
)
if list_response:
training_data_artifact = list_response[0]
print(f"Using existing artifact: {training_data_artifact.artifact_arn}")
else:
training_data_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name="TrainingData",
source_uri=training_data_s3_uri,
artifact_type="Dataset",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {training_data_artifact.artifact_arn}: SUCCESSFUL")
```
#### Model artifact
```
trained_model_s3_uri = training_job_2_info["ModelArtifacts"]["S3ModelArtifacts"]
list_response = list(
sagemaker.lineage.artifact.Artifact.list(
source_uri=trained_model_s3_uri, sagemaker_session=sagemaker_session
)
)
if list_response:
model_artifact = list_response[0]
print(f"Using existing artifact: {model_artifact.artifact_arn}")
else:
model_artifact = sagemaker.lineage.artifact.Artifact.create(
artifact_name="TrainedModel",
source_uri=trained_model_s3_uri,
artifact_type="Model",
sagemaker_session=sagemaker_session,
)
print(f"Create artifact {model_artifact.artifact_arn}: SUCCESSFUL")
```
### Set artifact associations
```
trial_component = sagemaker_boto_client.describe_trial_component(
TrialComponentName=training_job_2_name + "-aws-training-job"
)
trial_component_arn = trial_component["TrialComponentArn"]
```
#### Input artifacts
```
input_artifacts = [code_artifact, training_data_artifact]
for a in input_artifacts:
try:
sagemaker.lineage.association.Association.create(
source_arn=a.artifact_arn,
destination_arn=trial_component_arn,
association_type="ContributedTo",
sagemaker_session=sagemaker_session,
)
        print(f"Associate {trial_component_arn} and {a.artifact_arn}: SUCCESSFUL\n")
except:
print(f"Association already exists between {trial_component_arn} and {a.artifact_arn}.\n")
```
#### Output artifacts
```
output_artifacts = [model_artifact]
for artifact in output_artifacts:
    try:
        sagemaker.lineage.association.Association.create(
            source_arn=artifact.artifact_arn,
            destination_arn=trial_component_arn,
            association_type="Produced",
            sagemaker_session=sagemaker_session,
        )
        print(f"Associate {trial_component_arn} and {artifact.artifact_arn}: SUCCESSFUL\n")
    except:
        print(f"Association already exists between {trial_component_arn} and {artifact.artifact_arn}.\n")
```
<pre>
</pre>
<a id ='analyze-second-model'></a>
## Analyze the second model for bias and explainability
[overview](#aup-overview)
___
Amazon SageMaker Clarify provides tools that help explain how machine learning (ML) models make predictions. These tools can help ML modelers, developers, and other internal stakeholders understand model characteristics as a whole before deployment, and debug the predictions the model makes after deployment. Transparency about how an ML model arrives at its predictions is also important to consumers and regulators, who need to trust the model's predictions in order to accept decisions based on them. SageMaker Clarify uses a model-agnostic feature attribution approach. You can use it to understand why a model made a prediction after training, and to provide per-instance explanations during inference. The implementation includes a scalable and efficient implementation of SHAP ([see paper](https://papers.nips.cc/paper/2017/file/8a20a8621978632d76c43dfd28b67767-Paper.pdf)), based on the concept of Shapley values from cooperative game theory, which assigns each feature an importance value for a particular prediction.
### Create model from estimator
```
model_matches = sagemaker_boto_client.list_models(NameContains=model_2_name)['Models']
if not model_matches:
model_2 = sagemaker_session.create_model_from_job(
name=model_2_name,
training_job_name=training_job_2_info['TrainingJobName'],
role=sagemaker_role,
image_uri=training_job_2_info['AlgorithmSpecification']['TrainingImage'])
%store model_2_name
else:
print(f"Model {model_2_name} already exists.")
```
<a id='bias-v1'></a>
### Check for data set bias and model bias
SageMaker lets you check for both pre-training and post-training bias. Pre-training metrics show existing bias in the data itself, while post-training metrics show bias in the model's predictions. Using the SageMaker SDK, you can specify which groups you want to check for bias and which metrics to display.
To run the full Clarify job, you need to un-comment the code in the cell below. The job takes about 15 minutes to run. To save time, if the bias job has not been run, you can load the pre-generated results and view them in the next cell.
```
clarify_processor = sagemaker.clarify.SageMakerClarifyProcessor(
role=sagemaker_role,
instance_count=1,
instance_type="ml.c4.xlarge",
sagemaker_session=sagemaker_session,
)
bias_data_config = sagemaker.clarify.DataConfig(
s3_data_input_path=train_data_upsampled_s3_path,
s3_output_path=bias_report_2_output_path,
label="fraud",
headers=train.columns.to_list(),
dataset_type="text/csv",
)
model_config = sagemaker.clarify.ModelConfig(
model_name=model_2_name,
instance_type=train_instance_type,
instance_count=1,
accept_type="text/csv",
)
predictions_config = sagemaker.clarify.ModelPredictedLabelConfig(probability_threshold=0.5)
bias_config = sagemaker.clarify.BiasConfig(
label_values_or_threshold=[0],
facet_name="customer_gender_female",
facet_values_or_threshold=[1],
)
# # un-comment the code below to run the whole job
# if 'clarify_bias_job_2_name' not in locals():
# clarify_processor.run_bias(
# data_config=bias_data_config,
# bias_config=bias_config,
# model_config=model_config,
# model_predicted_label_config=predictions_config,
# pre_training_methods='all',
# post_training_methods='all')
# clarify_bias_job_2_name = clarify_processor.latest_job.name
# %store clarify_bias_job_2_name
# else:
# print(f'Clarify job {clarify_bias_job_2_name} has already run successfully.')
```
<a id ='view-second-clarify-job'></a>
## View results of Clarify job
[overview](#aup-overview)
___
Running Clarify on a dataset or model can take about 15 minutes. If you don't have time to run the job, you can view the pre-generated results included with this demo. Otherwise, you can run the job by un-commenting the code in the cell above.
```
if "clarify_bias_job_2_name" in locals():
s3_client.download_file(
Bucket=bucket,
Key=f"{prefix}/clarify-output/bias-2/analysis.json",
Filename="clarify_output/bias_2/analysis.json",
)
print(f"Downloaded analysis from previous Clarify job: {clarify_bias_job_2_name}\n")
else:
print(f"Loading pre-generated analysis file...\n")
with open("clarify_output/bias_1/analysis.json", "r") as f:
bias_analysis = json.load(f)
results = bias_analysis["pre_training_bias_metrics"]["facets"]["customer_gender_female"][0][
"metrics"
][1]
print(json.dumps(results, indent=4))
with open("clarify_output/bias_2/analysis.json", "r") as f:
bias_analysis = json.load(f)
results = bias_analysis["pre_training_bias_metrics"]["facets"]["customer_gender_female"][0][
"metrics"
][1]
print(json.dumps(results, indent=4))
```
<a id ='explainability' ></a>
## Configure and run explainability job
[overview](#aup-overview)
___
To run the full Clarify job, you need to un-comment the code in the cell below. The job takes about 15 minutes to run. To save time, if the explainability job has not been run, you can load the pre-generated results and view them in the next cell.
```
model_config = sagemaker.clarify.ModelConfig(
model_name=model_2_name,
instance_type=train_instance_type,
instance_count=1,
accept_type="text/csv",
)
shap_config = sagemaker.clarify.SHAPConfig(
baseline=[train.median().values[1:].tolist()], num_samples=100, agg_method="mean_abs"
)
explainability_data_config = sagemaker.clarify.DataConfig(
s3_data_input_path=train_data_upsampled_s3_path,
s3_output_path=explainability_output_path,
label="fraud",
headers=train.columns.to_list(),
dataset_type="text/csv",
)
# un-comment the code below to run the whole job
# if 'clarify_expl_job_name' not in locals():
# clarify_processor.run_explainability(
# data_config=explainability_data_config,
# model_config=model_config,
# explainability_config=shap_config)
# clarify_expl_job_name = clarify_processor.latest_job.name
# %store clarify_expl_job_name
# else:
# print(f'Clarify job {clarify_expl_job_name} has already run successfully.')
```
### View Clarify explainability results (shortcut)
Running Clarify on a dataset or model can take about 15 minutes. If you don't have time to run the job, you can view the pre-generated results included with this demo. Otherwise, you can run the job by un-commenting the code in the cell above.
```
if "clarify_expl_job_name" in locals():
s3_client.download_file(
Bucket=bucket,
Key=f"{prefix}/clarify-output/explainability/analysis.json",
Filename="clarify_output/explainability/analysis.json",
)
print(f"Downloaded analysis from previous Clarify job: {clarify_expl_job_name}\n")
else:
print(f"Loading pre-generated analysis file...\n")
with open("clarify_output/explainability/analysis.json", "r") as f:
analysis_result = json.load(f)
shap_values = pd.DataFrame(analysis_result["explanations"]["kernel_shap"]["label0"])
importances = shap_values["global_shap_values"].sort_values(ascending=False)
fig, ax = plt.subplots()
n = 5
y_pos = np.arange(n)
importance_scores = importances.values[:n]
y_label = importances.index[:n]
ax.barh(y_pos, importance_scores, align="center")
ax.set_yticks(y_pos)
ax.set_yticklabels(y_label)
ax.invert_yaxis()
ax.set_xlabel("SHAP Value (impact on model output)");
```
To view the auto-generated SageMaker Clarify report, run the following code and open the report via the output link.
```
from IPython.display import FileLink, FileLinks
display(
"Click link below to view the SageMaker Clarify report", FileLink("clarify_output/report.pdf")
)
```
### What is SHAP?
SHAP is the method used to compute the explanations in this solution. Unlike other feature-attribution methods such as single-feature permutation, SHAP tries to isolate the effect of a single feature by examining all possible feature combinations.
[SHAP](https://github.com/slundberg/shap) (Lundberg et al. 2017) stands for SHapley Additive exPlanations. 'Shapley' relates to a game-theoretic concept called [Shapley values](https://en.wikipedia.org/wiki/Shapley_value), which is used to create the explanations. A Shapley value describes the marginal contribution of each 'player' when all possible 'coalitions' are considered. Used in a machine learning context, a Shapley value describes the marginal contribution of each feature when all possible feature sets are considered. 'Additive' relates to the fact that these Shapley values can be summed to give the final model prediction.
For example, we might start with a baseline credit-default risk of 10%. Given a set of features, we can calculate the Shapley value for each feature. Summing all the Shapley values might give a cumulative value of +30%. Given the same feature set, we would therefore expect the model to return a credit-default risk of 40% (i.e., 10% + 30%).
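This additive reconstruction can be sketched in a few lines of Python. The feature names and Shapley values below are purely hypothetical, chosen only to reproduce the 10% + 30% = 40% example:

```python
# Sketch of SHAP's additivity property: the baseline value plus the
# per-feature Shapley contributions reconstructs the model's prediction.
# Feature names and values here are hypothetical, not from a real model.
base_risk = 0.10                   # baseline credit-default risk
shap_contributions = {             # hypothetical per-feature Shapley values
    "income": -0.05,
    "debt_ratio": 0.20,
    "payment_history": 0.15,
}
predicted_risk = base_risk + sum(shap_contributions.values())
print(f"Predicted default risk: {predicted_risk:.0%}")
```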
<a id='model-package' ></a>
## Create Model Package for the Second Trained Model
[overview](#aup-overview)
___
#### Create and upload second model metrics report
```
model_metrics_report = {"binary_classification_metrics": {}}
for metric in training_job_2_info["FinalMetricDataList"]:
stat = {metric["MetricName"]: {"value": metric["Value"], "standard_deviation": "NaN"}}
model_metrics_report["binary_classification_metrics"].update(stat)
with open("training_metrics.json", "w") as f:
json.dump(model_metrics_report, f)
metrics_s3_key = (
f"{prefix}/training_jobs/{training_job_2_info['TrainingJobName']}/training_metrics.json"
)
s3_client.upload_file(Filename="training_metrics.json", Bucket=bucket, Key=metrics_s3_key)
```
#### Define inference specification
```
mp_inference_spec = InferenceSpecification().get_inference_specification_dict(
ecr_image=training_job_2_info["AlgorithmSpecification"]["TrainingImage"],
supports_gpu=False,
supported_content_types=["text/csv"],
supported_mime_types=["text/csv"],
)
mp_inference_spec["InferenceSpecification"]["Containers"][0]["ModelDataUrl"] = training_job_2_info[
"ModelArtifacts"
]["S3ModelArtifacts"]
```
#### Define model metrics
```
model_metrics = {
"ModelQuality": {
"Statistics": {
"ContentType": "application/json",
"S3Uri": f"s3://{bucket}/{metrics_s3_key}",
}
},
"Bias": {
"Report": {
"ContentType": "application/json",
"S3Uri": f"{explainability_output_path}/analysis.json",
}
},
}
```
#### Register second model package to Model Package Group
```
mp_input_dict = {
"ModelPackageGroupName": mpg_name,
"ModelPackageDescription": "XGBoost classifier to detect insurance fraud with SMOTE.",
"ModelApprovalStatus": "PendingManualApproval",
"ModelMetrics": model_metrics,
}
mp_input_dict.update(mp_inference_spec)
mp2_response = sagemaker_boto_client.create_model_package(**mp_input_dict)
mp2_arn = mp2_response["ModelPackageArn"]
%store mp2_arn
```
#### Check status of model package creation
```
mp_info = sagemaker_boto_client.describe_model_package(
ModelPackageName=mp2_response["ModelPackageArn"]
)
mp_status = mp_info["ModelPackageStatus"]
while mp_status not in ["Completed", "Failed"]:
time.sleep(5)
mp_info = sagemaker_boto_client.describe_model_package(
ModelPackageName=mp2_response["ModelPackageArn"]
)
mp_status = mp_info["ModelPackageStatus"]
print(f"model package status: {mp_status}")
print(f"model package status: {mp_status}")
```
### View both models in the registry
```
sagemaker_boto_client.list_model_packages(ModelPackageGroupName=mpg_name)["ModelPackageSummaryList"]
```
___
### Next Notebook: [Deploy Model, Run Predictions](./4-deploy-run-inference-e2e.ipynb)
# Learning languages from a single message
The aim of this notebook is to showcase an application of the shortest path model to the problem of distinguishing between two languages, on the dataset of snippets taken from the [European Parliament Proceedings](https://www.statmt.org/europarl/).
As we will see in this notebook, the model can consistently use just a single example of a short snippet from one language to build a near-perfect classifier distinguishing that language from some other language.
First, of course, we import all the relevant libraries and set up some minor details. Note that this notebook, as it is currently written, takes quite a long time to execute, because of the many draws performed in each experiment in order to get decent bounds on the accuracy of the various scenarios. If you want the notebook to execute in a manageable time, change the NUMBER_OF_DRAWS variable below to something like 3.
```
from tqdm.notebook import tqdm
from statistics import mean
from itertools import combinations
from collections import defaultdict
from random import seed
from shapaclass.model import ShortestPathModel
from demo.dataset_utils.sample_dataset import sample_dataset
import matplotlib.pyplot as plt
import matplotlib.cm as cm
plt.style.use('ggplot')
seed(42)
NUMBER_OF_DRAWS = 25
```
## Differentiating between English and French
For our first example, we're going to take 1000 examples from the English and French corpora, each example 10 words long, and see how well our model differentiates between these languages.
We do this by using the _sample_dataset_ function, which is implemented so as to ensure that the sampling procedure has various properties we'd like it to have, such as not taking different examples from overlapping portions of the corpora.
```
language_dict = {'en' : 'English', 'fr': 'French',
'it': 'Italian', 'de': 'German',
'fi': 'Finnish'}
english = sample_dataset(n=1000,
length=10,
language='en')
french = sample_dataset(n=1000,
length=10,
language='fr')
```
Now we define the crucial part of our model – the weight function. This function is the crux of this whole model, but it is very simple, owing to the fact that the domain is very simple. So, for two strings $s_1$ and $s_2$, we define
$$
\operatorname{weight}(s_1, s_2, p)=
\begin{cases}
\text{(number of words shared by } s_1 \text{ and } s_2 \text{)}^{-p} & \text{ if } s_1 \text{ and } s_2 \text{ share at least one word,}\\
\infty & \text{ otherwise.}\\
\end{cases}
$$
```
def weights(string1, string2, p=2):
intersection = [x for x in string1 if x in string2]
if len(intersection) == 0:
return float('inf')
else:
return 1/(len(intersection) ** p)
```
Now using the model is as simple as defining an object, passing it the weight function, calling _prepare_data_ with the dataset generated above, and calling _fit_predict_! :)
```
model = ShortestPathModel(weight_fn=weights)
model.prepare_data(anchor_class=english,
other_class=french)
model.fit_predict()
print(f'Model\'s accuracy is {100*round(model.accuracy_, 4)}%.')
```
## Differentiating between English, French, German, Italian and Finnish
Now we'll do a somewhat more extensive experiment. For each (unordered) pair of languages from the set {english, french, german, italian, finnish}, we're going to compute 25 different samples of 500 snippets of (word) length 10, then fit our model and record its accuracy.
Note that when we draw 500 messages of length 10, we're drawing 5,000 words, which is a small subset of the dataset containing about 3 million words for each language, providing assurance that there will be little overlap between different samples.
(The justification for using unordered pairs is that the results should, in aggregate, look pretty similar regardless of whether we're using, e.g., an English message to differentiate it from French, or a French message to differentiate it from English, since the 'structure' they're learning from is the same: they're operating on the same graph, so while local discrepancies are possible, globally the results should be pretty similar.)
We'll, of course, plot the results.
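Enumerating the unordered pairs is a one-liner with `itertools.combinations`; for five languages it yields $\binom{5}{2} = 10$ comparisons:

```python
from itertools import combinations

languages = ['en', 'fr', 'it', 'de', 'fi']
# Each unordered pair appears exactly once: ('en', 'fr') but not ('fr', 'en')
pairs = list(combinations(languages, r=2))
print(len(pairs))  # 10
```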
```
accuracies = defaultdict(list)
for language_anchor, language_other in tqdm(combinations(language_dict.keys(),
r=2)
):
if language_anchor == language_other:
continue
for _ in range(NUMBER_OF_DRAWS):
anchor = sample_dataset(n=500,
length=10,
language=language_anchor)
other = sample_dataset(n=500,
length=10,
language=language_other)
model = ShortestPathModel(weight_fn=weights)
model.prepare_data(anchor, other)
model.fit_predict()
accuracies[(language_anchor, language_other)].append(model.accuracy_)
# just preparing the data for the matplotlib boxplot
# 1) ordering the language comparisons by mean
accuracies = {k: v for k, v in sorted(accuracies.items(),
key=lambda item: mean(item[1])
)
}
# 2) making the labels more readable
labels = [f'{language_dict[l]}/\n{language_dict[q]}'
for l, q in accuracies.keys() ]
fig, ax = plt.subplots(figsize=(12, 7))
fig.subplots_adjust(bottom=0.1)
bplot = ax.boxplot(accuracies.values(), labels=labels, patch_artist=True)
cmap = cm.ScalarMappable(cmap='rainbow')
data_mean = [mean(x) for x in accuracies.values()]
for patch, color in zip(bplot['boxes'], cmap.to_rgba(data_mean)):
patch.set_facecolor(color)
ax.set_title(f"Accuracies in various language pairs, "
f"judged by {NUMBER_OF_DRAWS} trials each")
ax.set_xlabel('Language pairs')
ax.set_ylabel('Accuracy')
ax.set_ylim([0,1])
plt.show()
```
Or zooming in slightly, given how high most of these accuracies are:
```
fig, ax = plt.subplots(figsize=(12, 7))
fig.subplots_adjust(bottom=0.1)
bplot = ax.boxplot(accuracies.values(), labels=labels, patch_artist=True)
cmap = cm.ScalarMappable(cmap='rainbow')
data_mean = [mean(x) for x in accuracies.values()]
for patch, color in zip(bplot['boxes'], cmap.to_rgba(data_mean)):
patch.set_facecolor(color)
ax.set_title("Accuracies in various language pairs, "
f"judged by {NUMBER_OF_DRAWS} trials each")
ax.set_xlabel('Language pairs')
ax.set_ylabel('Accuracy')
plt.show()
languages = list(language_dict.keys())
accuracy_matrix = [[1 for _ in range(len(languages))]
for _ in range(len(languages))]
for i in range(len(languages)):
for j in range(len(languages)):
if (languages[i],languages[j]) in accuracies.keys():
accuracy_matrix[i][j] = round(
mean(
accuracies[(languages[i],
languages[j])]
), 2
)
accuracy_matrix[j][i] = accuracy_matrix[i][j]
fig, ax = plt.subplots(figsize=(12, 7))
im = ax.imshow(accuracy_matrix, cmap=cm.autumn_r)
# We want to show all ticks...
ax.set_xticks(range(len(languages)))
ax.set_yticks(range(len(languages)))
# ... and label them with the respective list entries
ax.set_xticklabels(language_dict.values())
ax.set_yticklabels(language_dict.values())
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
for i in range(len(languages)):
for j in range(len(languages)):
text = ax.text(j, i, accuracy_matrix[i][j],
ha="center", va="center", color="black")
ax.grid(False)
ax.set_title("Accuracy in distinguishing language pairs (symmetric)")
cbar = fig.colorbar(im,
ticks=[min([min(x) for x in accuracy_matrix]),
max([max(x) for x in accuracy_matrix])])
fig.tight_layout()
plt.show()
```
As we can see, all the mean accuracies are above $93\%$ with the exception of French and Italian. This is not surprising, given that they have a very high [lexical similarity](https://en.wikipedia.org/wiki/Lexical_similarity#Indo-European_languages). For this reason we're going to drop all the other language pairs and focus exclusively on French and Italian from now on.
## Playing with hyperparameters
In the above example, we've seen that the model works well when given 500 snippets of length 10 in each language, and with the similarity function of inverse of the squared difference of the number of words shared. So there are a few things we can play with: the number of snippets, their length, and the similarity function (in which, for simplicity, we'll only change the "squared" part).
### Vary the length of messages
Let's see how the algorithm depends on the length of snippets. We'll try the following lengths: 2, 3, 5, 8, 10, 12, 15, 20, 25, 40, 80. For each of these we'll draw 20 samples and evaluate the accuracy on each of them, boxplotting the results again.
```
lengths = [2, 3, 5, 8, 10, 12,
15, 20, 25, 40, 80]
accuracies = defaultdict(list)
for length in tqdm(lengths):
for _ in range(NUMBER_OF_DRAWS):
french = sample_dataset(n=500, length=length, language='fr')
italian = sample_dataset(n=500, length=length, language='it')
model = ShortestPathModel(weights)
model.prepare_data(french, italian)
model.fit_predict()
accuracies[length].append(model.accuracy_)
fig, ax = plt.subplots(figsize=(12, 7))
fig.subplots_adjust(bottom=0.1)
bplot = ax.boxplot(accuracies.values(), labels=accuracies.keys(),
patch_artist=True)
cmap = cm.ScalarMappable(cmap='rainbow')
data_mean = [mean(x) for x in accuracies.values()]
for patch, color in zip(bplot['boxes'], cmap.to_rgba(data_mean)):
patch.set_facecolor(color)
ax.set_title("Accuracies in French/Italian differentiation by "
f"snippet length, judged by {NUMBER_OF_DRAWS} trials each")
ax.set_xlabel('Snippet length')
ax.set_ylabel('Accuracy')
ax.set_ylim([0,1+0.00285])
plt.show()
```
As we can see, the model does barely better than random when each snippet consists of 2 words – makes sense, there's barely any signal there. Then it quickly rises, and with long snippets it is basically perfect.
### How few/many is too few/many?
Let us try to vary the number of snippets drawn. We'll experiment with sizes: 10, 20, 30, 60, 100, 200, 500, 1000, 1500. Note that the memory requirement of the algorithm rises quadratically with the size of the sample: this is because the number of edges in a complete graph is $n(n-1)/2$, so the expected growth is on the order of $n^2$. Various other relevant quantities in the algorithm also grow quadratically.
We'll also fix the length at 15 words per message, as we'd seen that with 1000 examples per class and that length the algorithm works reasonably well but is well short of perfect, so it is interesting to see how the accuracy varies with the number of examples.
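As a quick sanity check on the quadratic growth, the complete-graph edge counts for a few of the sample sizes used here can be tabulated directly:

```python
# A complete graph on n snippets has n*(n-1)//2 edges, so the graph size
# (and hence memory) roughly quadruples when the sample size doubles.
sample_sizes = [10, 100, 500, 1000, 1500]
edge_counts = {n: n * (n - 1) // 2 for n in sample_sizes}
for n, e in edge_counts.items():
    print(f"n={n:5d}: {e:8d} edges")
```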
```
sample_sizes = [10, 20, 30, 60, 100,
200, 500, 1000, 1500]
accuracies = defaultdict(list)
for size in tqdm(sample_sizes):
for _ in range(NUMBER_OF_DRAWS):
french = sample_dataset(n=size, length=15, language='fr')
italian = sample_dataset(n=size, length=15, language='it')
model = ShortestPathModel(weights)
model.prepare_data(french, italian)
model.fit_predict()
accuracies[size].append(model.accuracy_)
fig, ax = plt.subplots(figsize=(12, 7))
fig.subplots_adjust(bottom=0.1)
bplot = ax.boxplot(accuracies.values(), labels=accuracies.keys(), patch_artist=True)
cmap = cm.ScalarMappable(cmap='rainbow')
data_mean = [mean(x) for x in accuracies.values()]
for patch, color in zip(bplot['boxes'], cmap.to_rgba(data_mean)):
patch.set_facecolor(color)
ax.set_title(f"Accuracies in French/Italian differentiation by "
f"sample size, judged by {NUMBER_OF_DRAWS} trials each")
ax.set_xlabel('Sample size')
ax.set_ylabel('Accuracy')
ax.set_ylim([0,1+0.0025])
plt.show()
```
Evidently, accuracy does not depend strongly on the sample size, except if it is _really_ small, in which case the variance rises and the model might sometimes do quite poorly.
### Weighing on important things
As we said earlier in this notebook, the crucial part of this model is the weight function; it makes or breaks the model.
We're only going to run smallish experiments with the weight function; in particular we'll vary the exponent $p$ to which the number of words shared between examples is raised. Roughly speaking, if $p$ is small (near $0$) we expect it to make little difference whether two messages share, say, 1 or 3 words, whereas a high $p$ means it makes a lot of difference.
We fix the length parameter at 15 and the sample size at 500. As for $p$, we'll try the values 0, 0.1, 0.5, 1, 2, 5 and 10. (The interpretation of $p=0$ is that two snippets are connected iff they share at least one word, but all edges carry the same weight.)
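The actual `weights` implementation was defined earlier in the notebook and isn't repeated here, but the quantity being tuned – the number of shared words raised to the power $p$ – behaves roughly like this (names are illustrative, and whether the result is used directly as an edge weight or inverted into a distance depends on the implementation):

```python
def shared_word_weight(snippet_a, snippet_b, p=1.0):
    # count the words the two snippets have in common
    shared = len(set(snippet_a.split()) & set(snippet_b.split()))
    # p = 0 flattens every positive count to 1; large p amplifies differences
    return shared ** p if shared else 0.0

shared_word_weight('le chat noir', 'il gatto nero', p=2)   # no shared words -> 0.0
shared_word_weight('le chat noir', 'le chien noir', p=2)   # two shared words -> 4
```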
```
p_vals = [0, 0.1, 0.5, 1, 2, 5, 10]
accuracies = defaultdict(list)
for p in tqdm(p_vals):
for _ in range(NUMBER_OF_DRAWS):
french = sample_dataset(n=500, length=15, language='fr')
italian = sample_dataset(n=500, length=15, language='it')
model = ShortestPathModel(weight_fn=lambda x, y : weights(x, y, p=p))
model.prepare_data(french, italian)
model.fit_predict()
accuracies[p].append(model.accuracy_)
fig, ax = plt.subplots(figsize=(12, 7))
fig.subplots_adjust(bottom=0.1)
bplot = ax.boxplot(accuracies.values(), labels=accuracies.keys(), patch_artist=True)
cmap = cm.ScalarMappable(cmap='rainbow')
data_mean = [mean(x) for x in accuracies.values()]
for patch, color in zip(bplot['boxes'], cmap.to_rgba(data_mean)):
patch.set_facecolor(color)
ax.set_title("Accuracies in French/Italian differentiation by the "
f"parameter p, judged by {NUMBER_OF_DRAWS} trials each")
ax.set_xlabel('p')
ax.set_ylabel('Accuracy')
ax.set_ylim([0,1+0.0025])
plt.show()
```
As we can see, the model with $p=0$ performs quite poorly – only outliers rise above 50%. This is good news, because it means our weight function is indeed doing some heavy lifting. It is also difficult to discern whether there is any notable difference between the higher values of $p$; investigating that question would require additional experiments.
## Wrapping it up
In all of the experiments we varied one parameter while leaving the others fixed. This is not the complete investigation of the hyperparameters one might wish for, but mapping out their full interactions is somewhat out of scope for this notebook – the main idea was simply to check whether the algorithm works at all.
We can conclude that it does: it separates the two classes with fairly high accuracy, though in most cases considerably below $100\%$. To see how, why and when it fails, however, one would have to actually look at the failures of the model, which we haven't done (yet?).
<a href="https://colab.research.google.com/github/facebookresearch/habitat-sim/blob/master/examples/tutorials/colabs/ReplicaCAD_quickstart.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Habitat-sim ReplicaCAD Quickstart
This brief Colab tutorial demonstrates loading the ReplicaCAD dataset in Habitat-sim from a SceneDataset and rendering a short video of agent navigation with physics simulation.
```
# @title Installation { display-mode: "form" }
# @markdown (double click to show code).
!curl -L https://raw.githubusercontent.com/facebookresearch/habitat-sim/master/examples/colab_utils/colab_install.sh | NIGHTLY=true bash -s
# @title Path Setup and Imports { display-mode: "form" }
# @markdown (double click to show code).
%cd /content/habitat-sim
## [setup]
import os
import sys
import git
import magnum as mn
import habitat_sim
from habitat_sim.utils import viz_utils as vut
try:
import ipywidgets as widgets
from IPython.display import display as ipydisplay
# For using jupyter/ipywidget IO components
HAS_WIDGETS = True
except ImportError:
HAS_WIDGETS = False
if "google.colab" in sys.modules:
os.environ["IMAGEIO_FFMPEG_EXE"] = "/usr/bin/ffmpeg"
repo = git.Repo(".", search_parent_directories=True)
dir_path = repo.working_tree_dir
%cd $dir_path
data_path = os.path.join(dir_path, "data")
# fmt: off
output_directory = "examples/tutorials/replica_cad_output/" # @param {type:"string"}
# fmt: on
output_path = os.path.join(dir_path, output_directory)
if not os.path.exists(output_path):
os.mkdir(output_path)
# define some globals the first time we run.
if "sim" not in globals():
global sim
sim = None
global obj_attr_mgr
obj_attr_mgr = None
global stage_attr_mgr
stage_attr_mgr = None
global rigid_obj_mgr
rigid_obj_mgr = None
# @title Define Configuration Utility Functions { display-mode: "form" }
# @markdown (double click to show code)
# @markdown This cell defines a number of utility functions used throughout the tutorial to make simulator reconstruction easy:
# @markdown - make_cfg
# @markdown - make_default_settings
# @markdown - make_simulator_from_settings
def make_cfg(settings):
sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.gpu_device_id = 0
sim_cfg.scene_dataset_config_file = settings["scene_dataset"]
sim_cfg.scene_id = settings["scene"]
sim_cfg.enable_physics = settings["enable_physics"]
# Specify the location of the scene dataset
if "scene_dataset_config" in settings:
sim_cfg.scene_dataset_config_file = settings["scene_dataset_config"]
if "override_scene_light_defaults" in settings:
sim_cfg.override_scene_light_defaults = settings[
"override_scene_light_defaults"
]
if "scene_light_setup" in settings:
sim_cfg.scene_light_setup = settings["scene_light_setup"]
# Note: all sensors must have the same resolution
sensor_specs = []
color_sensor_1st_person_spec = habitat_sim.CameraSensorSpec()
color_sensor_1st_person_spec.uuid = "color_sensor_1st_person"
color_sensor_1st_person_spec.sensor_type = habitat_sim.SensorType.COLOR
color_sensor_1st_person_spec.resolution = [
settings["height"],
settings["width"],
]
color_sensor_1st_person_spec.position = [0.0, settings["sensor_height"], 0.0]
color_sensor_1st_person_spec.orientation = [
settings["sensor_pitch"],
0.0,
0.0,
]
color_sensor_1st_person_spec.sensor_subtype = habitat_sim.SensorSubType.PINHOLE
sensor_specs.append(color_sensor_1st_person_spec)
# Here you can specify the amount of displacement in a forward action and the turn angle
agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = sensor_specs
return habitat_sim.Configuration(sim_cfg, [agent_cfg])
def make_default_settings():
settings = {
"width": 1280, # Spatial resolution of the observations
"height": 720,
"scene_dataset": "data/replica_cad/replicaCAD.scene_dataset_config.json", # dataset path
"scene": "NONE", # Scene path
"default_agent": 0,
"sensor_height": 1.5, # Height of sensors in meters
"sensor_pitch": 0, # sensor pitch (x rotation in rads)
"seed": 1,
"enable_physics": True, # enable dynamics simulation
}
return settings
def make_simulator_from_settings(sim_settings):
cfg = make_cfg(sim_settings)
# clean-up the current simulator instance if it exists
global sim
global obj_attr_mgr
global prim_attr_mgr
global stage_attr_mgr
global rigid_obj_mgr
global metadata_mediator
    if sim is not None:
sim.close()
# initialize the simulator
sim = habitat_sim.Simulator(cfg)
# Managers of various Attributes templates
obj_attr_mgr = sim.get_object_template_manager()
obj_attr_mgr.load_configs(str(os.path.join(data_path, "objects/example_objects")))
prim_attr_mgr = sim.get_asset_template_manager()
stage_attr_mgr = sim.get_stage_template_manager()
# Manager providing access to rigid objects
rigid_obj_mgr = sim.get_rigid_object_manager()
# get metadata_mediator
metadata_mediator = sim.metadata_mediator
# UI-populated handles used in various cells. Need to initialize to valid
# value in case IPyWidgets are not available.
# Holds the user's desired scene handle
global selected_scene
selected_scene = "NONE"
# [/setup]
# @title Define Simulation Utility Function { display-mode: "form" }
# @markdown (double click to show code)
def simulate(sim, dt=1.0, get_frames=True):
# simulate dt seconds at 60Hz to the nearest fixed timestep
print("Simulating {:.3f} world seconds.".format(dt))
observations = []
start_time = sim.get_world_time()
while sim.get_world_time() < start_time + dt:
sim.step_physics(1.0 / 60.0)
if get_frames:
observations.append(sim.get_sensor_observations())
return observations
# @title Define Colab GUI Utility Functions { display-mode: "form" }
# @markdown (double click to show code)
# @markdown This cell provides utility functions to build and manage IPyWidget interactive components.
# Event handler for dropdowns displaying file-based object handles
def on_scene_ddl_change(ddl_values):
global selected_scene
selected_scene = ddl_values["new"]
return selected_scene
# Build a dropdown list holding obj_handles and set its event handler
def set_handle_ddl_widget(scene_handles, sel_handle, on_change):
sel_handle = scene_handles[0]
descStr = "Available Scenes:"
style = {"description_width": "300px"}
obj_ddl = widgets.Dropdown(
options=scene_handles,
value=sel_handle,
description=descStr,
style=style,
disabled=False,
layout={"width": "max-content"},
)
obj_ddl.observe(on_change, names="value")
return obj_ddl, sel_handle
def set_button_launcher(desc):
button = widgets.Button(
description=desc,
layout={"width": "max-content"},
)
return button
# Builds widget-based UI components
def build_widget_ui(metadata_mediator):
# Holds the user's desired scene
global selected_scene
selected_scene = "NONE"
# Construct DDLs and assign event handlers
# All file-based object template handles
scene_handles = metadata_mediator.get_scene_handles()
# If not using widgets, set as first available handle
if not HAS_WIDGETS:
selected_scene = scene_handles[0]
return
# Build widgets
scene_obj_ddl, selected_scene = set_handle_ddl_widget(
scene_handles,
selected_scene,
on_scene_ddl_change,
)
# Display DDLs
ipydisplay(scene_obj_ddl)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--no-display", dest="display", action="store_false")
parser.add_argument("--no-make-video", dest="make_video", action="store_false")
parser.set_defaults(show_video=True, make_video=True)
args, _ = parser.parse_known_args()
show_video = args.display
display = args.display
make_video = args.make_video
else:
show_video = False
make_video = False
display = False
```
# View ReplicaCAD in Habitat-sim
Use the code in this section to view assets in the Habitat-sim engine.
```
# [initialize]
# @title Initialize Simulator{ display-mode: "form" }
sim_settings = make_default_settings()
make_simulator_from_settings(sim_settings)
# [/initialize]
# @title Select a SceneInstance: { display-mode: "form" }
# @markdown Select a scene from the dropdown and then run the next cell to load and simulate that scene and produce a visualization of the result.
build_widget_ui(sim.metadata_mediator)
```
## Load the Selected Scene and Simulate!
This cell will load the scene selected above, simulate, and produce a visualization.
```
global selected_scene
if sim_settings["scene"] != selected_scene:
sim_settings["scene"] = selected_scene
make_simulator_from_settings(sim_settings)
observations = []
start_time = sim.get_world_time()
while sim.get_world_time() < start_time + 4.0:
sim.agents[0].scene_node.rotate(mn.Rad(mn.math.pi_half / 60.0), mn.Vector3(0, 1, 0))
sim.step_physics(1.0 / 60.0)
if make_video:
observations.append(sim.get_sensor_observations())
# video rendering of carousel view
video_prefix = "ReplicaCAD_scene_view"
if make_video:
vut.make_video(
observations,
"color_sensor_1st_person",
"color",
output_path + video_prefix,
open_vid=show_video,
video_dims=[1280, 720],
)
```
```
import pandas as pd
import similaripy as sim
from scipy import *
from scipy.sparse import *
from tqdm.auto import tqdm
import editdistance
import numpy as np
import re
import string as string_lib
# first load the data
df_train = pd.read_csv("../dataset/original/train.csv", escapechar="\\")
df_test = pd.read_csv("../dataset/original/test.csv", escapechar="\\")
# ALWAYS sort the data by record_id
df_train = df_train.sort_values(by=['record_id']).reset_index(drop=True)
df_test = df_test.sort_values(by=['record_id']).reset_index(drop=True)
df_train.name = df_train.name.astype(str)
def clean(string):
string = string.encode("ascii", errors="ignore").decode() #remove non ascii chars
string = string.lower() #make lower case
string = string.translate(str.maketrans('', '', string_lib.punctuation)) # remove punctuation
chars_to_remove = [")","(",".","|","[","]","{","}","'"]
rx = '[' + re.escape(''.join(chars_to_remove)) + ']'
string = re.sub(rx, '', string) #remove the list of chars defined above
string = string.replace('&', ' ')
string = string.replace(',', ' ')
string = string.replace('-', ' ')
#string = string.title() # normalise case - capital at start of each word
string = re.sub(' +',' ',string).strip() # get rid of multiple spaces and replace with a single space
return string
col = [clean(x) for x in tqdm(list(set(df_train.name)))]
col
col_words = [x.split(' ') for x in tqdm(col)]
col_words
col_exp = [y for x in col_words for y in x]
col_exp
from collections import Counter
top_50_words = [x[0] for x in Counter(col_exp).most_common()[:50]]
top_50_words
cleaned_col = [[y for y in x if y not in top_50_words] for x in tqdm(col_words)]
cleaned_col
[ ' '.join(x) for x in cleaned_col]
def ngrams(string, n=2):
string = string.encode("ascii", errors="ignore").decode() #remove non ascii chars
string = string.lower() #make lower case
string = string.translate(str.maketrans('', '', string_lib.punctuation)) # remove punctuation
chars_to_remove = [")","(",".","|","[","]","{","}","'"]
rx = '[' + re.escape(''.join(chars_to_remove)) + ']'
string = re.sub(rx, '', string) #remove the list of chars defined above
string = string.replace('&', ' ')
string = string.replace(',', ' ')
string = string.replace('-', ' ')
string = string.title() # normalise case - capital at start of each word
string = re.sub(' +',' ',string).strip() # get rid of multiple spaces and replace with a single space
string = ' '+ string +' ' # pad names for ngrams...
string = re.sub(r'[,-./]|\sBD',r'', string)
ngrams = zip(*[string[i:] for i in range(n)])
return [''.join(ngram) for ngram in ngrams]
ngrams('NINGBO SUNRISE ENTERPRISES UNITED CO., LTD.')
from sklearn.feature_extraction.text import TfidfVectorizer
org_names = list(df_train['name'])
vectorizer = TfidfVectorizer(min_df=1, analyzer=ngrams)
tf_idf_matrix = vectorizer.fit_transform(org_names)
tf_idf_matrix.shape[1]
cos_sim = sim.cosine(tf_idf_matrix, tf_idf_matrix.T, k = 300)
save_npz('tfidf_300.npz', cos_sim.tocsr())
tf_idf_matrix[1].data
similarity = load_npz('tfidf_300.npz')
similarity[2].data.argsort()[::-1]
```
```
import io
import pickle
from copy import deepcopy
from matplotlib import pyplot as plt
from PIL import Image
import subprocess
from subprocess import Popen, PIPE
import random
import imageio
import numpy as np
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
def plot_to_PIL(fig, dpi=100):
buf = io.BytesIO()
fig.savefig(buf, format='png', dpi=dpi)
buf.seek(0)
pil_img = deepcopy(Image.open(buf))
buf.close()
return pil_img
def interpolate_images(img1, img2, factor):
    # Cross-fades between two images
    # factor 0 means only img1 is visible
    # factor 1 means only img2 is visible
    # Assumes both images have the same dimensions - if they don't, that's on you
    # Also assumes both images' feature values lie in [0, 1] (which is also a property of the SOM weights)
new_image = [
[
[img1[i][j][0]*(1-factor) + img2[i][j][0]*factor,
img1[i][j][1]*(1-factor) + img2[i][j][1]*factor,
img1[i][j][2]*(1-factor) + img2[i][j][2]*factor] for j in range(len(img1[0]))
] for i in range(len(img1))
]
return new_image
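# A vectorized equivalent of the cross-fade above using numpy (a sketch: the
# list version is what the rest of this notebook actually calls, this one just
# shows the same arithmetic in one line; assumes equal shapes, values in [0, 1])
def interpolate_images_np(img1, img2, factor):
    a, b = np.asarray(img1), np.asarray(img2)
    return (a * (1 - factor) + b * factor).tolist()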
def random_image(dimension):
new_image = [[[random.random(),random.random(),random.random()] for j in range(dimension)] for i in range(dimension)]
return new_image
# image is an NxN array in which every element is a vector [x1, x2, x3]
def plot_som(image, meta_this_epoch=None, meta_max_epoch=None, meta_name=None):
fig = plt.Figure(figsize=[6, 6])
canvas = FigureCanvas(fig)
ax = fig.add_subplot(111)
fig.patch.set_facecolor('white')
ax.axes.xaxis.set_visible(False)
ax.axes.yaxis.set_visible(False)
fig.subplots_adjust(left = 0, right = 1, bottom = 0.1, top = 0.9)
if meta_this_epoch is not None and meta_max_epoch is not None:
        ax.set_title("\"{}\" {:.1f}%/{} epochs".format(meta_name, 100.0*((meta_this_epoch+1)/meta_max_epoch), 2000))
ax.imshow(image, interpolation='gaussian')
#ax.axis('tight')
#ax.axis('off')
return fig
# image_list is an array consisting of "images" of the kind described above
def som_animation(image_list, frames_per_second=30, seconds_per_epoch=1, name='correcthorsebatterystaple'):
previous_image = None
#fps, duration = frames_per_epoch, frames_per_epoch * (len(image_list)-1)
#p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'mpeg4', '-qscale', '5', '-r', str(fps), 'video.mp4'], stdin=PIPE, stdout=PIPE, shell=True)
writer = imageio.get_writer('{}.mp4'.format(name), fps=frames_per_second)
for idx, img in enumerate(image_list):
print("Rendering {}/{}".format(idx, len(image_list)))
if previous_image is None:
            # Set the first image, from which the transition will start
previous_image = img
continue
for i in range(round(frames_per_second * seconds_per_epoch)):
interp_img = interpolate_images(previous_image, img, i/(frames_per_second * seconds_per_epoch))
fig = plot_som(interp_img, meta_this_epoch=idx, meta_max_epoch=len(image_list), meta_name=name)
pil_fig = plot_to_PIL(fig)
#pil_fig.save(p.stdin, 'PNG')
fig.clf()
#img = createFrame(i)
writer.append_data(np.array(pil_fig))
previous_image = img
writer.close()
#p.stdin.close()
#p.wait()
# Create a video for every experiment
data = ["rate_0.1.pickle", "rate_0.3.pickle", "rate_0.5.pickle", "rate_0.7.pickle", "rate_0.9.pickle",
"size_10.pickle", "size_30.pickle", "size_50.pickle"]
for d in data:
list_x = pickle.load(open(d, "rb"))
name = d.removesuffix(".pickle")
plot_som(list_x[-1])
som_animation(list_x, 30, 0.1, name=name)
# Load epochs from SOM training
# list_x = pickle.load( open( "all_epochs.pickle", "rb" ) )
#list_x = [random_image(100) for i in range(2)]
# plot_som(list_x[-1])
#som_animation(list_x, 30, 0.05)
```
# Calibration
Evaluating calibration methods on convolutional neural networks.
```
import numpy as np
import pandas as pd
from os.path import join
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from cal_methods import TemperatureScaling, evaluate, softmax, cal_results
```
Paths to files with logits.
```
PATH = '/Users/wildflowerlyi/Desktop/Github/NN_calibration/'
files = ('resnet_cifar/probs_resnet110_c10clip_logits.p'
,'resnet_cifar/probs_resnet110_c10clip_aug_logits.p'
,'resnet_cifar/probs_resnet110_c10clip_aug_2250_logits.p'
,'resnet_cifar/probs_resnet110_c10clip_aug_1125_logits.p'
,'resnet_cifar/probs_resnet110_c10clip_aug_560_logits.p'
,'resnet_cifar/probs_resnet110_c10clip_aug_interpol2_logits.p'
,'resnet_cifar/probs_resnet110_c10clip_aug_interpol2_2250_logits.p'
,'resnet_cifar/probs_resnet110_c10clip_aug_interpol2_1125_logits.p'
,'resnet_cifar/probs_resnet110_c10clip_aug_interpol2_560_logits.p'
)
```
### Isotonic Regression
```
df_iso = cal_results(IsotonicRegression, PATH, files, {'y_min':0, 'y_max':1}, approach = "single")
```
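`cal_results` does the bookkeeping here; at its core, isotonic calibration fits a monotone, piecewise-constant map from predicted confidence to observed correctness. A minimal sketch of that core idea (the variable names and synthetic data are illustrative, not the `cal_methods` interface):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.RandomState(0)
conf = rng.uniform(0.5, 1.0, size=2000)                # model confidences
correct = (rng.uniform(size=2000) < conf ** 2) * 1.0   # overconfident model: accuracy < confidence
iso = IsotonicRegression(y_min=0, y_max=1, out_of_bounds='clip')
calibrated = iso.fit_transform(conf, correct)          # monotone map conf -> P(correct)
```

The `y_min`/`y_max` bounds are the same ones passed to `cal_results` above.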
### Temperature scaling
```
df_temp_scale = cal_results(TemperatureScaling, PATH, files, approach = "all")
```
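`TemperatureScaling` itself lives in `cal_methods` and isn't shown in this notebook; the underlying method is a single scalar $T$ that divides every logit before the softmax, chosen to minimize the negative log-likelihood on held-out data. A hedged sketch of that idea (not the `cal_methods` implementation):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax_T(logits, T):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)        # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(logits, labels):
    # negative log-likelihood of the true labels as a function of T
    def nll(T):
        p = softmax_T(logits, T)
        return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return minimize_scalar(nll, bounds=(0.05, 20.0), method='bounded').x

# an overconfident toy model: huge logit margins, but only 70% actually correct
logits = np.tile(np.array([[10.0, 0.0]]), (100, 1))
labels = np.array([0] * 70 + [1] * 30)
T = fit_temperature(logits, labels)             # T > 1 softens the predictions
```

Because there is only one parameter, temperature scaling can change confidences but never the argmax, so Error stays fixed while ECE/MCE/Loss can improve.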
#### Calibrated scores for CIFAR datasets.
```
df_iso
df_temp_scale
```
## Dataframe with results
```
dfs = [df_iso, df_temp_scale]
names = ["Name", "Uncalibrated", "Isotonic Regression", "Temperature Scaling"]
def get_dataframe(dfs, column, names):
df_res = pd.DataFrame(columns=names)
    for i in range(1, len(dfs[0]), 2):
name = dfs[0].iloc[i-1]["Name"] # Get name of method
uncalibrated = dfs[0].iloc[i-1][column] # Get uncalibrated score
row = [name, uncalibrated] # Add scores to row
for df in dfs:
row.append(df.iloc[i][column])
df_res.loc[(i-1)//2] = row
df_res.set_index('Name', inplace = True)
return df_res
df_error = get_dataframe(dfs, "Error", names)
df_ece = get_dataframe(dfs, "ECE", names)
df_mce = get_dataframe(dfs, "MCE", names)
df_loss = get_dataframe(dfs, "Loss", names)
```
## Scores
```
def highlight_min(s):
'''
    Highlight the min in a Series yellow.
'''
    is_min = s == s.min()
    return ['background-color: yellow' if v else '' for v in is_min]
```
## Error Rate
```
df_error.style.apply(highlight_min, axis = 1)
```
## ECE
```
df_ece.style.apply(highlight_min, axis = 1)
```
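The ECE numbers above come from `cal_methods.evaluate`. As a reminder of the metric: ECE bins predictions by confidence and averages the per-bin gap between accuracy and mean confidence, weighted by bin population. A sketch with equal-width bins (the bin count is a free choice; 15 is common in the calibration literature):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=15):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - conf[in_bin].mean())
            ece += in_bin.mean() * gap              # weight by bin population
    return ece
```

MCE is the same construction with a max over bins instead of the weighted average.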
## MCE
```
df_mce.style.apply(highlight_min, axis = 1)
```
## Loss
```
df_loss.style.apply(highlight_min, axis = 1)
```
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('ggplot')
df = pd.read_csv('raw_data.csv', index_col=0)
df.head()
df.tail()
df.iloc[:, 2:].describe()
df.iloc[:, 2:].describe().to_csv('descriptive_stats.csv')
df['num_of_features'].value_counts()
plt.hist(df['num_of_features'])
plt.xlabel('Number of Features')
plt.ylabel('Frequency')
plt.title('Number of Features Returned in Optimum')
plt.savefig('figures/HIST_Num_of_Features.svg')
plt.hist(df['a_fitness'])
plt.xlabel('a_fitness')
plt.ylabel('Frequency')
plt.title('Fitness of Returned Optimum')
plt.savefig('figures/HIST_a_fitness.svg')
plt.hist(df['training_accuracy'])
plt.xlabel('Accuracy')
plt.ylabel('Frequency')
plt.title('Accuracy on TRAINING DATA by Returned Optimum')
plt.savefig('figures/HIST_a_score.svg')
plt.hist(df['test_accuracy'])
plt.xlabel('Accuracy')
plt.ylabel('Frequency')
plt.title('Accuracy on TEST DATA by Returned Optimum')
plt.savefig('figures/HIST_test_a_score.svg')
plt.scatter(df['num_of_features'], df['test_accuracy'])
plt.xlabel('Number of Features')
plt.ylabel('Accuracy on TEST DATA')
```
How very interesting. There appear to be two distinct groups, each covering a large range of accuracy. One is centered on a very low number of features, the other is centered on a pretty high number of features. There are two things that I don't understand here:
* Why there are two groups, centered at 2 and about 78
* what is causing the spread for the two groups individually
```
plt.scatter(df['vmin'], df['test_accuracy'])
plt.xlabel('Minimum Velocity')
plt.ylabel('Accuracy on TEST DATA')
```
Huh – there doesn't appear to be any correlation between where the velocity clipping is and the final accuracy.
```
plt.scatter(df['vmin'], df['training_accuracy'])
plt.xlabel('Minimum Velocity')
plt.ylabel('Accuracy on TRAINING DATA')
```
Two distinct groups again. Fascinating: there is a clean cutoff at the (-2, 2) velocity range, outside of which the accuracy on training data drops significantly. However, this is not the case for accuracy on test data.
```
plt.scatter(df['num_of_features'], df['training_accuracy'])
plt.xlabel('Number of Features')
plt.ylabel('Accuracy on TRAINING DATA')
plt.scatter(df['vmin'], df['num_of_features'])
plt.xlabel('Minimum Velocity')
plt.ylabel('Number of Features')
```
THERE it is.
```
sns.pairplot(df)
plt.gcf().set_size_inches(20, 20)
plt.savefig('figures/pairplot.svg')
```
The 2 linear plots are nothing, just vmin and vmax
The ones with two clear groups are interesting. I think the ones with stratified groups are a side effect of the ones with a sigmoid curve. I wonder if this is an artifact of the logistic function used in the COMB-PSO algorithm.
Disappointingly, none of the variables produce any kind of helpful pattern in test_accuracy.
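The logistic-function hunch can be checked directly. In binary PSO variants, each bit is switched on with probability $\sigma(v)$, so clipping velocities to $[v_{min}, v_{max}]$ puts a hard floor (and ceiling) on that probability. A sketch, assuming the standard sigmoid transfer function (I haven't verified this against the COMB-PSO source):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# the lowest attainable probability of selecting any given feature
for vmin in [-6.0, -4.0, -2.0, -1.0]:
    print(f"vmin = {vmin:+.1f}  ->  floor on P(bit = 1) = {sigmoid(vmin):.3f}")
```

With vmin clipped at -2 the floor is already about 0.12, so bits can rarely all switch off – which would explain why runs with vmin in (-2, 0) keep many features, while more negative clipping lets the selection collapse to a handful.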
```
plt.scatter(df['vmin'], df['num_of_features'])
plt.xlabel('Minimum Velocity')
plt.ylabel('Number of Features')
plt.savefig('figures/Tuning_VBounds_SCAT_vmin_num.svg')
plt.scatter(df['vmin'], df['a_fitness'])
plt.xlabel('Minimum Velocity')
plt.ylabel('Fitness Score')
plt.savefig('figures/Tuning_VBounds_SCAT_vmin_afit.svg')
plt.scatter(df['vmin'], df['training_accuracy'])
plt.xlabel('Minimum Velocity')
plt.ylabel('Training Score')
plt.savefig('figures/Tuning_VBounds_SCAT_vmin_train_score.svg')
plt.scatter(df['vmin'], df['test_accuracy'])
plt.xlabel('Minimum Velocity')
plt.ylabel('Test Score')
plt.savefig('figures/Tuning_VBounds_SCAT_vmin_test_score.svg')
df['overfitting'] = df['training_accuracy']-df['test_accuracy']
df.head()
plt.scatter(df['vmin'], df['overfitting'])
plt.xlabel('Minimum Velocity')
plt.ylabel('Training - Test Score')
plt.title('Overfitting')
plt.savefig('figures/Tuning_VBounds_overfitting.svg')
```
It looks like with these other parameters, I need to choose a velocity minimum above -2.0
```
import requests
import pandas as pd
import numpy as np
import re
import nltk
import matplotlib.pyplot as plt
%matplotlib inline
test = input("What would you like to know about today? ")
stonks = ['AMD','AAPL','INTC']
url = 'https://newsapi.org/v2/everything?'
# Specify the query and number of returns
parameters = {
'qInTitle': test, # query phrase
'sortBy': 'popularity', # articles from popular sources and publishers come first
'pageSize': 100, # maximum is 100 for developer version
'apiKey': '32dc1d81f85d44cd959b4428b0308bdd', # your own API key
}
# Make the request
response = requests.get(url, params=parameters)
# Convert the response to JSON format and store it in dataframe
data = pd.DataFrame(response.json())
news_df = pd.concat([data['articles'].apply(pd.Series)], axis=1)
# Select data
final_news = news_df.loc[:,['publishedAt','title']]
# Filter to articles from the last 30 days
final_news['publishedAt'] = pd.to_datetime(final_news['publishedAt'])
final_news['publishedAt'] = final_news['publishedAt'].apply(lambda x: x.replace(tzinfo=None)) #removes timezone
final_news = final_news[pd.to_datetime('now')-final_news['publishedAt']<=pd.to_timedelta(30, unit='d')]
final_news.sort_values(by='publishedAt',inplace=True)
final_news.head()
data = pd.read_csv("all-data.csv",delimiter=',',encoding='latin-1',header=None)
data.columns = ["label", "text"]
data.head()
features = data.text.values
labels = data.label.values
processed_features = []
def process2():
    # Alternative implementation that processes the whole column in a loop;
    # just a different way to structure it - either approach works.
features = data.text.values
for text in features:
processed_features.append(process1(text))
def process1(features):
# Remove all the special characters
processed_feature = re.sub(r'\W', ' ', str(features))
# remove all single characters
processed_feature= re.sub(r'\s+[a-zA-Z]\s+', ' ', processed_feature)
# Remove single characters from the start
processed_feature = re.sub(r'\^[a-zA-Z]\s+', ' ', processed_feature)
# Substituting multiple spaces with single space
processed_feature = re.sub(r'\s+', ' ', processed_feature, flags=re.I)
# Removing prefixed 'b'
processed_feature = re.sub(r'^b\s+', '', processed_feature)
# Converting to Lowercase
return processed_feature.lower()
data.text = data.text.apply(process1).tolist()
#this is for installing nltk stopwords
import nltk
import ssl
try:
_create_unverified_https_context = ssl._create_unverified_context
except AttributeError:
pass
else:
ssl._create_default_https_context = _create_unverified_https_context
#nltk.download()
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer
#Use TF-IDF to vectorize our words
#TF = freq of word in doc / total words in doc
#IDF = log(total # of docs/# of docs containing word)
vectorizer = TfidfVectorizer (max_features=2500, min_df=7, max_df=0.8, stop_words=stopwords.words('english'))
#max_features specifies 2500 most frequent words
#max_df specifies words that occur in a max of 80% of docs
#min_df specifies words that occur in at least 7 docs
#stopwords are excluded, e.g. "it" or "am"
processed_features = vectorizer.fit_transform(data.text).toarray()
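# Quick sanity check of the TF-IDF idea on a toy corpus (illustrative only).
# Note: sklearn's default is a *smoothed* idf, ln((1 + n_docs)/(1 + df)) + 1,
# and each row is l2-normalized, so the numbers differ from the plain
# log(n_docs/df) formula sketched above - but the ordering is the same:
# the rarer a word, the higher its idf.
_toy = ["good profit growth", "profit warning issued", "growth slows"]
_toy_vec = TfidfVectorizer()
_toy_vec.fit(_toy)
# "profit" (in 2 of 3 docs) gets a lower idf than "warning" (in 1 of 3)
print(_toy_vec.idf_[_toy_vec.vocabulary_['profit']],
      _toy_vec.idf_[_toy_vec.vocabulary_['warning']])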
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(processed_features, labels, test_size=0.2, random_state=0)
from sklearn.ensemble import RandomForestClassifier
text_classifier = RandomForestClassifier(n_estimators=200, random_state=0)
text_classifier.fit(X_train, y_train)
predictions = text_classifier.predict(X_test)
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
print(confusion_matrix(y_test,predictions))
print(classification_report(y_test,predictions))
print(accuracy_score(y_test, predictions))
#instead, we can create new processed_features by concatenating the two datasets
#together and vectorizing
processed_features = vectorizer.fit_transform(pd.concat([data.text, final_news.title])).toarray()
#now this stuff becomes our modelling dataset
processed_features[:data.shape[0]]
#and this is our predicting dataset
processed_features[data.shape[0]:]
#X_train, X_test, y_train, y_test = train_test_split(processed_features[:data.shape[0]], labels, test_size=0.2, random_state=0)
X_train = processed_features[:data.shape[0]]
y_train = labels
text_classifier = RandomForestClassifier(n_estimators=200, random_state=0)
text_classifier.fit(X_train, y_train)
predictions = text_classifier.predict(processed_features[data.shape[0]:])
predictions
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Custom layers
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/customization/custom_layers"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />TensorFlow.org에서 보기</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ko/tutorials/customization/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ko/tutorials/customization/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />깃허브(GitHub) 소스 보기</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/ko/tutorials/customization/custom_layers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />노트북 다운로드</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Community translations are best-effort, so despite those efforts there is no guarantee that the content exactly matches the latest [official English documentation](https://www.tensorflow.org/?hl=en).
If you can improve this translation, please send a pull request to the
[tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository.
To volunteer to translate or review documentation, please email
[docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs).
To build neural networks, we recommend the high-level API `tf.keras`. Most TensorFlow APIs can be used with eager execution.
```
from __future__ import absolute_import, division, print_function, unicode_literals
!pip install tensorflow==2.0.0-alpha0
import tensorflow as tf
```
## Layers: common sets of useful operations
Most of the time when writing code for machine learning, you want to operate at a higher level of abstraction than individual operations and the manipulation of individual variables.
Many machine learning models are expressible as the composition and stacking of relatively simple layers. TensorFlow also provides many standard layers, so you can easily write your own application-specific layers either from scratch or as a composition of existing layers.
TensorFlow includes the full [Keras](https://keras.io) API in the tf.keras package. Keras layers are very useful when building models.
```
# In the tf.keras.layers package, layers are objects. To construct a layer,
# simply construct the object. Most layers take as a first argument the
# number of output dimensions (size) or channels.
layer = tf.keras.layers.Dense(100)
# The number of input dimensions is often unnecessary, as it can be inferred
# the first time the layer is used; in some complex models it can be useful
# to provide it manually.
layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
```
The full list of pre-existing layers can be seen in [the documentation](https://www.tensorflow.org/api_docs/python/tf/keras/layers). It includes Dense (a fully connected layer), Conv2D, LSTM, BatchNormalization, Dropout, and many others.
```
# To use a layer, simply call it.
layer(tf.zeros([10, 5]))
# Layers have many useful attributes. For example, you can inspect all the
# variables in a layer using `layer.variables`, and the trainable ones using
# `layer.trainable_variables`. A fully connected layer has variables for its
# weights and biases.
layer.variables
# The variables are also accessible through nice accessors on the object.
layer.kernel, layer.bias
```
## Implementing custom layers
The best way to implement your own layer is extending the tf.keras.Layer class and implementing:
* `__init__`, where you take the parameters the layer needs
* `build`, where you know the shapes of the input tensors and can do the rest of the initialization
* `call`, where you do the forward computation
Note that you don't have to wait until `build` is called to create your variables; you can also create them in `__init__`. However, the advantage of creating them in `build` is that it enables late variable creation based on the shape of the inputs the layer will operate on. Creating variables in `__init__`, on the other hand, means that the shapes required to create the variables must be explicitly specified.
```
class MyDenseLayer(tf.keras.layers.Layer):
def __init__(self, num_outputs):
super(MyDenseLayer, self).__init__()
self.num_outputs = num_outputs
def build(self, input_shape):
    # add_weight replaces add_variable, which is deprecated
    self.kernel = self.add_weight("kernel",
                                  shape=[int(input_shape[-1]),
                                         self.num_outputs])
def call(self, input):
return tf.matmul(input, self.kernel)
layer = MyDenseLayer(10)
print(layer(tf.zeros([10, 5])))
print(layer.trainable_variables)
```
Much of the time, however, code is easier to read and maintain if it uses standard layers wherever possible, since other readers will already know how those layers behave. If you want to use a layer that is not present in `tf.keras.layers`, consider filing a [GitHub issue](http://github.com/tensorflow/tensorflow/issues/new) or, even better, sending a pull request!
## Models: composing layers
Many of the interesting things in machine learning models are implemented by composing existing layers. For example, each residual block in a ResNet is a composition of convolutions, batch normalizations, and a shortcut.
The main class used when creating a model that contains other layers is tf.keras.Model. The following implements one by inheriting from tf.keras.Model:
```
class ResnetIdentityBlock(tf.keras.Model):
def __init__(self, kernel_size, filters):
super(ResnetIdentityBlock, self).__init__(name='')
filters1, filters2, filters3 = filters
self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
self.bn2a = tf.keras.layers.BatchNormalization()
self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
self.bn2b = tf.keras.layers.BatchNormalization()
self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))
self.bn2c = tf.keras.layers.BatchNormalization()
def call(self, input_tensor, training=False):
x = self.conv2a(input_tensor)
x = self.bn2a(x, training=training)
x = tf.nn.relu(x)
x = self.conv2b(x)
x = self.bn2b(x, training=training)
x = tf.nn.relu(x)
x = self.conv2c(x)
x = self.bn2c(x, training=training)
x += input_tensor
return tf.nn.relu(x)
block = ResnetIdentityBlock(1, [1, 2, 3])
print(block(tf.zeros([1, 2, 3, 3])))
print([x.name for x in block.trainable_variables])
```
Much of the time, however, models composed of many layers simply call one layer after the other in order. This can be done in very little code using tf.keras.Sequential:
```
my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1),
input_shape=(
None, None, 3)),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(2, 1,
padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(3, (1, 1)),
tf.keras.layers.BatchNormalization()])
my_seq(tf.zeros([1, 2, 3, 3]))
```
# Next steps
Now you can go back to the previous notebook and adapt the linear regression example to use layers and models, giving it a better structure.
# The Stanford Sentiment Treebank
The Stanford Sentiment Treebank consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence. We use the two-way (positive/negative) class split, and use only sentence-level labels.
```
from IPython.display import display, Markdown
with open('../../doc/env_variables_setup.md', 'r') as fh:
content = fh.read()
display(Markdown(content))
```
## Import Packages
```
import tensorflow as tf
from transformers import (
BertConfig,
BertTokenizer,
XLMRobertaTokenizer,
TFBertModel,
TFXLMRobertaModel,
)
import os
from datetime import datetime
import sys
from absl import logging
from absl import flags
from absl import app
import logging as logger
tf.get_logger().propagate = False
```
## Import local packages
```
import preprocessing.preprocessing as pp
import utils.model_metrics as mm
import utils.model_utils as mu
import model.tf_custom_bert_classification.model as tf_custom_bert
import model.tf_bert_classification.model as tf_bert
import importlib
importlib.reload(pp);
importlib.reload(mm);
importlib.reload(mu);
importlib.reload(tf_bert);
importlib.reload(tf_custom_bert);
```
## Check configuration
```
print(tf.version.GIT_VERSION, tf.version.VERSION)
print(tf.keras.__version__)
gpus = tf.config.list_physical_devices('GPU')
if len(gpus)>0:
for gpu in gpus:
print('Name:', gpu.name, ' Type:', gpu.device_type)
else:
print('No GPU available !!!!')
```
## Define Paths
```
try:
data_dir=os.environ['PATH_DATASETS']
except KeyError:
print('missing PATH_DATASETS')
try:
tensorboard_dir=os.environ['PATH_TENSORBOARD']
except KeyError:
print('missing PATH_TENSORBOARD')
try:
savemodel_dir=os.environ['PATH_SAVE_MODEL']
except KeyError:
print('missing PATH_SAVE_MODEL')
```
## Read data from TFRecord files [local training of the model]
```
# Path of the directory with TFRecord files
tfrecord_data_dir=data_dir+'/tfrecord/sst2/'
```
## Define parameters of the model
```
# models
MODELS = [(TFBertModel, BertTokenizer, 'bert-base-multilingual-uncased'),
(TFXLMRobertaModel, XLMRobertaTokenizer, 'jplu/tf-xlm-roberta-base')]
model_index = 0 # BERT
model_class = MODELS[model_index][0]        # e.g. TFBertModel
tokenizer_class = MODELS[model_index][1]    # e.g. BertTokenizer
pretrained_weights = MODELS[model_index][2] # e.g. 'bert-base-multilingual-uncased'
number_label = 2
```
## Train the model locally with AI Platform Training (for tests)
```
#savemodel_path = os.path.join(savemodel_dir, 'saved_model')
pretrained_model_dir=savemodel_dir+'/pretrained_model/'+pretrained_weights
model_name='tf_bert_classification'
# train locally
os.environ['EPOCH'] = '1'
os.environ['STEPS_PER_EPOCH_TRAIN'] = '1'
os.environ['BATCH_SIZE_TRAIN'] = '32'
os.environ['STEPS_PER_EPOCH_EVAL'] = '1'
os.environ['BATCH_SIZE_EVAL'] = '64'
os.environ['TRAINER_PACKAGE_PATH'] = os.environ['PYTHONPATH']
os.environ['MAIN_TRAINER_MODULE'] = 'model.'+model_name+'.task'
os.environ['INPUT_EVAL_TFRECORDS'] = tfrecord_data_dir+'/valid'
os.environ['INPUT_TRAIN_TFRECORDS'] = tfrecord_data_dir+'/train'
os.environ['OUTPUT_DIR'] = savemodel_dir
os.environ['PRETRAINED_MODEL_DIR']= pretrained_model_dir
%%bash
# Use Cloud Machine Learning Engine to train the model in local file system
gcloud ai-platform local train \
--module-name=$MAIN_TRAINER_MODULE \
--package-path=$TRAINER_PACKAGE_PATH \
-- \
--epochs=$EPOCH \
--steps_per_epoch_train=$STEPS_PER_EPOCH_TRAIN \
--batch_size_train=$BATCH_SIZE_TRAIN \
--steps_per_epoch_eval=$STEPS_PER_EPOCH_EVAL \
--batch_size_eval=$BATCH_SIZE_EVAL \
--input_eval_tfrecords=$INPUT_EVAL_TFRECORDS \
--input_train_tfrecords=$INPUT_TRAIN_TFRECORDS \
--output_dir=$OUTPUT_DIR \
--pretrained_model_dir=$PRETRAINED_MODEL_DIR \
--verbosity_level='INFO'
```
## Debug the model's functions
```
strategy = tf.distribute.MirroredStrategy()
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
# reset
tf.keras.backend.clear_session()
# create and compile the Keras model in the context of strategy.scope
with strategy.scope():
model=tf_bert.create_model(pretrained_weights,
pretrained_model_dir=pretrained_model_dir,
num_labels=number_label,
learning_rate=3e-5,
epsilon=1e-08)
model.summary()
model.inputs
# define default parameters
BATCH_SIZE_TRAIN = 32
BATCH_SIZE_TEST = 32
BATCH_SIZE_VALID = 64
EPOCHS = 1
STEP_EPOCH_TRAIN = 5
STEP_EPOCH_VALID = 1
# Using function
train_files = tfrecord_data_dir+'/'+model.name+'/train'
test_files = tfrecord_data_dir+'/'+model.name+'/test'
valid_files = tfrecord_data_dir+'/'+model.name+'/valid'
train_dataset = tf_bert.build_dataset(train_files, BATCH_SIZE_TRAIN)
test_dataset = tf_bert.build_dataset(test_files, BATCH_SIZE_TEST)
valid_dataset = tf_bert.build_dataset(valid_files, BATCH_SIZE_VALID)
train_dataset=train_dataset.repeat(EPOCHS+1)
for i in valid_dataset:
print(i)
break
FLAGS = flags.FLAGS
def del_all_flags(FLAGS):
flags_dict = FLAGS._flags()
keys_list = [keys for keys in flags_dict]
for keys in keys_list:
FLAGS.__delattr__(keys)
del_all_flags(flags.FLAGS)
# to avoid crashes in Notebook
flags.DEFINE_string('f', '', 'kernel') # just for jupyter notebook, to avoid "UnrecognizedFlagError: Unknown command line flag 'f'"
# to avoid crashes with absl
flags.DEFINE_enum('verbosity', 'INFO', ['VERBOSE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'FATAL'], 'verbosity in the logfile')
# default parameters for training the model
# compute and save accuracy and loss after N steps
N_STEPS_HISTORY = 10
# hyper parameters
# adam parameters
LEARNING_RATE = 3e-5
EPSILON = 1e-08
# learning rate decay parameters
DECAY_LR = 0.95
DECAY_TYPE = 'exponential'
N_BATCH_DECAY = 2
# number of classes
NUM_CLASSES = 2
# BERT maximum length; be careful, BERT's max sequence length is 512!
MAX_LENGTH = 512
# get parameters for the training
flags.DEFINE_float('learning_rate', LEARNING_RATE, 'learning rate')
flags.DEFINE_float('decay_learning_rate', DECAY_LR, 'decay of the learning rate, e.g. 0.9')
flags.DEFINE_float('epsilon', EPSILON, 'epsilon')
flags.DEFINE_integer('epochs', EPOCHS, 'The number of epochs to train')
flags.DEFINE_integer('steps_per_epoch_train', STEP_EPOCH_TRAIN, 'The number of steps per epoch to train')
flags.DEFINE_integer('batch_size_train', BATCH_SIZE_TRAIN, 'Batch size for training')
flags.DEFINE_integer('steps_per_epoch_eval', STEP_EPOCH_VALID, 'The number of steps per epoch to evaluate')
flags.DEFINE_integer('batch_size_eval', BATCH_SIZE_VALID, 'Batch size for evaluation')
flags.DEFINE_integer('num_classes', NUM_CLASSES, 'number of classes in our model')
flags.DEFINE_integer('n_steps_history', N_STEPS_HISTORY, 'number of step for which we want custom history')
flags.DEFINE_integer('n_batch_decay', N_BATCH_DECAY, 'number of batches after which the learning rate gets update')
flags.DEFINE_string('decay_type', DECAY_TYPE, 'type of decay for the learning rate: exponential, stepwise, timebased, or constant')
flags.DEFINE_string('input_train_tfrecords', None, 'input folder of tfrecords training data')
flags.DEFINE_string('input_eval_tfrecords', None, 'input folder of tfrecords evaluation data')
flags.DEFINE_string('output_dir', None, 'gs blob where are stored all the output of the model')
flags.DEFINE_string('pretrained_model_dir', None, 'directory containing the pretrained model')
flags.DEFINE_enum('verbosity_level', 'INFO', ['VERBOSE', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'FATAL'], 'verbosity in the logfile')
flags.DEFINE_boolean('use_tpu', False, 'activate TPU for training')
flags.DEFINE_boolean('use_decay_learning_rate', False, 'activate decay learning rate')
flags.DEFINE_boolean('is_hyperparameter_tuning', False, 'automatic and internal flag')
FLAGS(sys.argv);
print(FLAGS)
history_test=tf_bert.train_and_evaluate(model,
num_epochs=1,
steps_per_epoch=2,
train_data=train_dataset,
validation_steps=1,
eval_data=valid_dataset,
n_steps_history=1,
output_dir=savemodel_dir,
FLAGS=FLAGS,
decay_type='exponential',
learning_rate=3e-5,
s=0.95,
n_batch_decay=2,
metric_accuracy='NotDefined')
for i in train_dataset:
print(i)
break
model.summary()
model.inputs
model.outputs
#os.environ['MODEL_LOCAL']=savemodel_path+'/'+model.name
os.environ['MODEL_LOCAL']=savemodel_dir+'/saved_model/'+model.name
#os.environ['MODEL_LOCAL']
!ls -la $MODEL_LOCAL
%%bash
saved_model_cli show --dir $MODEL_LOCAL --tag_set serve --signature_def serving_default
model.evaluate(test_dataset)
train_dataset
from tensorflow.python.data.ops import dataset_ops
dataset_ops.get_legacy_output_shapes(train_dataset)
pp.print_info_data(train_dataset)
```
```
import pandas as pd
import sqlite3
import gensim
import nltk
import glob
import json
import pickle
## Helpers
def save_pkl(target_object, filename):
with open(filename, "wb") as file:
pickle.dump(target_object, file)
def load_pkl(filename):
return pickle.load(open(filename, "rb"))
def save_json(target_object, filename):
with open(filename, 'w') as file:
json.dump(target_object, file)
def load_json(filename):
with open(filename, 'r') as file:
data = json.load(file)
return data
```
## Preparing Data
In this step, we are going to load data from disk into memory and format it properly so that we can process it in the next "preprocessing" stage.
```
# Loading metadata from the training database
con = sqlite3.connect("F:/FMR/data.sqlite")
db_documents = pd.read_sql_query("SELECT * from documents", con)
db_authors = pd.read_sql_query("SELECT * from authors", con)
data = db_documents # just a handy alias
data.head()
```
## Loading Tokenised Full Text
In the previous tutorial (Jupyter notebook), we generated a bunch of .json files storing our tokenised full texts. Now we are going to load them.
```
tokenised = load_json("abstract_tokenised.json")
# Let's have a peek
tokenised["acis2001/1"][:10]
```
# Preprocessing Data for Gensim and Finetuning
In this stage, we preprocess the data so that it can be read by Gensim. Then we will further clean up the data to better train the model.
First of all, we need a dictionary of our corpus, i.e., the whole collection of our full texts. However, some documents in our dataset are written in other languages. We need to stick with one language (in this example, English) in order to best train the model, so let's filter the others out first.
## Language Detection
`TextBlob` ships with a handy API wrapper around Google's language-detection service. We will store the `id` of these non-English documents in a list called `non_en` and save it as a pickled file for later use.
```
from textblob import TextBlob
non_en = [] # a list of ids of the documents in other languages
count = 0
for id_, entry in data.iterrows():
count += 1
try:
lang = TextBlob(entry["title"] + " " + entry["abstract"]).detect_language()
    except Exception:
        # detect_language calls Google's web service; re-raise so failures are visible
        raise
if lang != 'en':
non_en.append(id_)
print(lang, data.iloc[id_]["title"])
if (count % 100) == 0:
print("Progress: ", count)
save_pkl(non_en, "non_en.list.pkl")
non_en = load_pkl("non_en.list.pkl")
# Convert our dict-based structure to a list-based structure readable by Gensim
# and, at the same time, filter out those non-English documents.
# Note that non_en stores dataframe indices, so map them to submission paths first.
non_en_paths = set(data.loc[non_en, "submission_path"])
tokenised_list = [tokenised[i] for i in data["submission_path"] if i not in non_en_paths]
```
Although we tried to handle hyphenation in the previous tutorial, some hyphenated tokens still slipped through. The most convenient way to deal with them is to remove them from the corpus and rebuild the dictionary, then re-apply our previous filter.
```
def remove_hyphenation(l):
return [i.replace("- ", "").replace("-", "") for i in l]
tokenised_list = [remove_hyphenation(i) for i in tokenised_list]
```
## Lemmatization
Before building the vocabulary, we need to unify variants of the same phrase. For example, "technologies" should be mapped to "technology". This process is called lemmatization.
```
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
def lemmatize(l):
return [" ".join([lemmatizer.lemmatize(token)
for token
in phrase.split(" ")])
for phrase in l]
def lemmatize_all(tokenised):
# Lemmatize the documents.
lemmatized = [lemmatize(entry) for entry in tokenised]
return lemmatized
" ".join([lemmatizer.lemmatize(token)
for token
in 'assistive technologies'.split(" ")])
tokenised_list = lemmatize_all(tokenised_list)
# In case we need it in the future
save_json(tokenised_list, "abstract_lemmatized.json")
# To load it:
tokenised_list = load_json("abstract_lemmatized.json")
```
Then we can create our lemmatized vocabulary.
```
from gensim.corpora import Dictionary
# Create a dictionary for all the documents. This might take a while.
dictionary = Dictionary(tokenised_list)
# Let's see what's inside, note the spelling :)
# But there is really nothing we can do with that.
dictionary[0]
len(dictionary)
```
Obviously our vocabulary is far too large. This is because the algorithm behind TextBlob's noun-phrase extraction is not very robust in complicated scenarios. Let's see what we can do about this.
## Filtering Vocabulary
First of all, let's rule out the most obvious ones: words and phrases that appear in too many documents, and ones that appear in only a single document. Gensim provides a convenient built-in function to filter them out:
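Under the hood this kind of filtering is just a document-frequency count. The following plain-Python sketch illustrates the same idea (it is not Gensim's actual implementation; note that `no_below` is an absolute document count while `no_above` is a fraction of the corpus, matching `filter_extremes`):

```python
from collections import Counter

def filter_extremes(docs, no_below=2, no_above=0.5):
    """Keep tokens appearing in at least no_below documents and in at most
    a no_above fraction of all documents."""
    # Document frequency: count each token once per document.
    df = Counter(tok for doc in docs for tok in set(doc))
    limit = no_above * len(docs)
    return {tok for tok, n in df.items() if no_below <= n <= limit}

docs = [["a", "b"], ["a", "b"], ["a", "c"], ["a", "d"]]
# "a" is in every document (too common), "c"/"d" in only one (too rare);
# only "b" (2 of 4 documents) survives.
print(filter_extremes(docs))  # {'b'}
```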
```
# remove tokens that appear in fewer than 2 documents or in more than 50% of the documents.
dictionary.filter_extremes(no_below=2, no_above=0.5, keep_n=None)
len(dictionary)
```
Now we have drastically reduced the size of the vocabulary from 2936116 to 102508. However, this is not enough. For example:
```
# Helpers
display_limit = 10
def shorter_than(n):
bad = []
count = 0
for i in dictionary:
if len(dictionary[i]) < n:
count += 1
if count < display_limit:
print(dictionary[i])
bad.append(i)
print(count)
return bad
def if_in(symbol):
bad = []
count = 0
for i in dictionary:
if symbol in dictionary[i]:
count += 1
if count < display_limit:
print(dictionary[i])
bad.append(i)
print(count)
return bad
def more_than(symbol, n):
bad = []
count = 0
for i in dictionary:
if dictionary[i].count(symbol) > n:
count += 1
if count < display_limit:
print(dictionary[i])
bad.append(i)
print(count)
return bad
bad = shorter_than(3)
```
We have 752 such meaningless tokens in our vocabulary. Presumably this is because, during extraction from the PDFs, some mathematical equations were parsed as plain text.
Now we are going to remove them:
```
dictionary.filter_tokens(bad_ids=bad)
display_limit = 10
bad = if_in("*")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("<")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in(">")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("%")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("/")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("[")
bad += if_in("]")
bad += if_in("}")
bad += if_in("{")
dictionary.filter_tokens(bad_ids=bad)
display_limit = 20
bad = more_than(" ", 3)
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("- ") # verify that there is no hyphenation problem
bad = if_in("quarter")
dictionary.filter_tokens(bad_ids=bad)
```
### Removing Names & Locations
There are a lot of citations and references in the PDFs, and they are extremely difficult to recognise given that they come in many variants.
We will demonstrate how to identify these names and locations in another tutorial (see TOC) using a Stanford NLP library; eventually we obtain lists of names and locations in `names.json` and `locations.json` respectively.
```
names = load_json("names.json")
name_ids = [i for i, v in dictionary.iteritems() if v in names]
dictionary.filter_tokens(bad_ids=name_ids)
locations = load_json("locations.json")
location_ids = [i for i, v in dictionary.iteritems() if v in locations]
dictionary.filter_tokens(bad_ids=location_ids)
locations[:10]
names[:15] # not looking good, but it seems like it won't do much harm either
```
# Building the Corpus in Gensim Format
Since we already have a dictionary, each distinct token can be expressed as an *id* in the dictionary. We can then compress the corpus using this new representation, converting each document into a BoW (bag of words).
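What `doc2bow` produces can be illustrated with plain Python: map each token to its dictionary id, count occurrences, and emit a sparse list of `(id, count)` pairs. The toy id mapping below is made up for illustration:

```python
from collections import Counter

# Toy id mapping standing in for the Gensim Dictionary.
token2id = {"information": 0, "system": 1, "user": 2}

def to_bow(tokens, token2id):
    """Bag-of-words encoding: sorted (token_id, count) pairs; out-of-vocabulary
    tokens are dropped, just as with doc2bow."""
    counts = Counter(token2id[t] for t in tokens if t in token2id)
    return sorted(counts.items())

print(to_bow(["information", "system", "information", "adoption"], token2id))
# [(0, 2), (1, 1)]
```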
```
corpus = [dictionary.doc2bow(l) for l in tokenised_list]
# Save it for future usage
from gensim.corpora.mmcorpus import MmCorpus
MmCorpus.serialize("aisnet_abstract_np_cleaned.mm", corpus)
# Also save the dictionary
dictionary.save("aisnet_abstract_np_cleaned.ldamodel.dictionary")
# To load the corpus:
from gensim.corpora.mmcorpus import MmCorpus
corpus = MmCorpus("aisnet_abstract_np_cleaned.mm")
# To load the dictionary:
from gensim.corpora import Dictionary
dictionary = Dictionary.load("aisnet_abstract_np_cleaned.ldamodel.dictionary")
```
# Train the LDA Model
Now that we have the dictionary and the corpus, we are ready to train our LDA model. We take an LDA model with 150 topics as an example.
```
# Train LDA model.
from gensim.models import LdaModel
# Set training parameters.
num_topics = 150
chunksize = 2000
passes = 1
iterations = 150
eval_every = None # Don't evaluate model perplexity, takes too much time.
# Make a index to word dictionary.
print("Dictionary test: " + dictionary[0]) # This is only to "load" the dictionary.
id2word = dictionary.id2token
model = LdaModel(corpus=corpus, id2word=id2word, chunksize=chunksize, \
alpha='auto', eta='auto', \
iterations=iterations, num_topics=num_topics, \
passes=passes, eval_every=eval_every)
# Save the LDA model
model.save("aisnet_abstract_150_cleaned.ldamodel")
```
# Visualize the LDA Model
There is a convenient library called `pyLDAvis` that allows us to visualize our trained LDA model.
```
from gensim.models import LdaModel
model = LdaModel.load("aisnet_abstract_150_cleaned.ldamodel")
import pyLDAvis.gensim
vis = pyLDAvis.gensim.prepare(model, corpus, dictionary)
pyLDAvis.display(vis)
```
# Board games
First we import the necessary modules:
- **pygame** and all the constants defined in _pygame.locals_
- **numpy** to represent and simulate the game
- **matplotlib.pyplot** to display the board in Jupyter
Finally the pygame module needs to be initialized.
```
import pygame
from pygame.locals import *
import numpy as np
import matplotlib.pyplot as plt
pygame.init()
def show(surf, **kwargs):
img = pygame.surfarray.pixels3d(surf)
plt.imshow(np.transpose(img, (1, 0, 2)), **kwargs)
plt.tick_params(axis='both', bottom=False, top=True,
labelbottom=False, labeltop=True)
```
The main colors are defined.
```
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)
YELLOW = (255, 255, 0)
CYAN = (0, 255, 255)
MAGENTA = (255, 0, 255)
GRAY = (127, 127, 127)
```
## Defining the screen
```
SCREEN = pygame.Rect(0, 0, 300, 120)
s = pygame.Surface(SCREEN.size)
s.fill(WHITE)
print(s.get_rect())
show(s)
```
## Grid-based games : Board class
The Board class creates a $n\times m$ grid. Each tile can be colored, have a label or be an image.
```
class Board():
def __init__(self, nx=4, ny=4, dx=20, dy=20, x0=10, y0=10, gap=2, bg=WHITE):
""" Create a (nx, ny) grid of tile-size (dx, dy) placed at (x0, y0) on the screen. """
self.nx = nx
self.ny = ny
self.dx = dx
self.dy = dy
self.gap = gap
self.rect = pygame.Rect(x0, y0, nx*dx+gap, ny*dy+gap)
self.image = pygame.Surface(self.rect.size)
self.image.fill(bg)
self.grid_col = GRAY
self.bg = bg
self.color_list = (WHITE, BLACK, RED, GREEN, BLUE, YELLOW, CYAN, MAGENTA, GRAY)
self.colors = np.random.randint(8, size=(nx, ny))
self.grid = np.zeros((nx, ny))
self.font = pygame.font.SysFont('Calibri', dy//2, True)
self.labels = np.arange(nx*ny).reshape((nx, ny))
self.image_list = []
self.images = np.zeros((nx, ny), dtype='int')
#self.set_tile(1, 1, RED)
def get_tile_rect(self, x, y):
" Return a rectangle Rect for tile (x, y)."
x0 = self.gap+x*self.dx
y0 = self.gap+y*self.dy
w = self.dx-self.gap
h = self.dy-self.gap
return pygame.Rect(x0, y0, w, h)
def draw(self):
" Draw the whole boardgame on screen s"
s.blit(self.image, self.rect)
def set_cols(self):
col = self.grid_col
for i in range(self.nx+1):
x = i*self.dx
rect = pygame.Rect(x, 0, self.gap, self.rect.h)
pygame.draw.rect(self.image, col, rect, 0)
def set_rows(self):
col = self.grid_col
for i in range(self.ny+1):
y = i*self.dy
pygame.draw.rect(self.image, col, pygame.Rect(0, y, self.rect.w, self.gap), 0)
def set_grid(self):
self.set_rows()
self.set_cols()
def set_tile(self, x, y, col):
rect = self.get_tile_rect(x, y)
pygame.draw.rect(self.image, col, rect, 0)
def set_tiles(self):
for x in range(self.nx):
for y in range(self.ny):
col = self.color_list[self.colors[x, y]]
self.set_tile(x, y, col)
def set_label(self, x, y, label):
text = self.font.render(label, False, BLACK)
text_rect = text.get_rect()
text_rect.center = self.get_tile_rect(x, y).center
self.image.blit(text, text_rect)
def set_labels(self):
for x in range(self.nx):
for y in range(self.ny):
self.set_label(x, y, str(self.labels[x, y]))
def add_image(self, path):
img = pygame.image.load(path)
self.image_list.append(img)
def set_image(self, x, y, i):
img = self.image_list[i]
img_rect = img.get_rect()
img_rect.center = self.get_tile_rect(x, y).center
self.image.blit(img, img_rect)
def set_images(self):
for x in range(self.nx):
for y in range(self.ny):
self.set_image(x, y, self.images[x, y])
board = Board(6, 4, dx=20, dy=20, gap=3)
s.fill(WHITE)
board.set_grid()
board.set_tile(3, 3, RED)
board.set_tile(4, 3, BLUE)
board.set_labels()
board.draw()
show(board.image)
```
The Board class produces a board-sized image. It can be drawn within a screen surface together with other elements (score, message).
```
s.fill(WHITE)
board.draw()
show(s)
board = Board(8, 4, gap=4)
board.set_rows()
show(board.image)
board = Board(8, 4, gap=4)
board.set_cols()
show(board.image)
board = Board(8, 4, gap=4)
board.set_grid()
show(board.image)
board = Board(8, 8)
s.fill(WHITE)
board.set_tiles()
board.draw()
show(board.image)
b2 = Board(6, 4)
b2.set_tiles()
show(b2.image)
b2 = Board(6, 4, dx=34, dy=34)
b2.add_image('panda.png')
b2.add_image('flower.png')
b2.set_tiles()
b2.set_image(1,2,0)
b2.set_image(1,3,1)
b2.set_image(2,3,1)
show(b2.image)
b2 = Board(6, 4, dx=34, dy=34, bg=GREEN)
b2.add_image('panda.png')
b2.add_image('flower.png')
b2.images = np.random.randint(2, size=(6, 4))
b2.set_images()
show(b2.image)
b2.images.transpose()
```
## Snake
```
class Snake(Board):
def __init__(self, *args):
super().__init__(*args)
self.snake = [[2, 2], [2,3], [3,3], [3, 4]]
self.apple = self.get_rand_pos()
self.vect = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]])
self.dir = 0
def get_rand_pos(self):
x = np.random.randint(self.nx)
y = np.random.randint(self.ny)
return (x, y)
def set(self):
self.set_tile(self.apple[0], self.apple[1], RED)
def set_snake(self):
for i in self.snake:
self.set_tile(i[0], i[1], GREEN)
def move_snake(self):
s0 = self.snake[0] + self.vect[self.dir]
self.snake.insert(0, s0)
self.snake.pop()
b3 = Snake(20, 12)
b3.set_grid()
b3.set_snake()
b3.set()
show(b3.image)
[2, 0] in b3.snake
b3.move_snake()
b3.set_snake()
show(b3.image)
```
## Tetris
## Minesweeper
```
class Mines(Board):
def __init__(self, mines=3, *args):
super().__init__(*args)
self.set_grid()
self.add_image("mines.png")
self.set_image(2, 2, 0)
self.mines = mines
    def set_mines(self):
        # Lay out self.mines mines at random positions on an (nx, ny) 0/1 grid.
        a = np.array([1]*self.mines + [0]*(self.nx*self.ny - self.mines))
        np.random.shuffle(a)
        a.resize(self.nx, self.ny)
        return a
m = Mines(3, 15, 10, 32, 32)
show(m.image)
```
Creating n random mines
```
nx = 6
ny = 4
mines = 10
a = np.array([1]*mines + [0]*(nx*ny-mines))
np.random.shuffle(a)
a.resize(nx, ny)
a
a[0:3, 0:3]
import sys, pygame
pygame.init()
size = width, height = 320, 240
speed = [2, 2]
black = 0, 0, 0
screen = pygame.display.set_mode(size)
ball = pygame.image.load("flower.png")
ballrect = ball.get_rect()
screen.fill(black)
screen.blit(ball, ballrect)
pygame.display.flip()
```
# inode Size Histograms
Plot the size and mass distribution of inodes of different types.
```
%matplotlib inline
import os
import sqlite3
import matplotlib
matplotlib.rcParams['font.size'] = 16
import matplotlib.pyplot
import pandas
import numpy
import fsanalysis.histogram as histogram
TO_BYTE = 1
TO_KIB = 2**(-10)
TO_MIB = 2**(-20)
TO_GIB = 2**(-30)
TO_TIB = 2**(-40)
TO_PIB = 2**(-50)
```
Note that `INPUT_DB_FILES['used_capacity']` below is defined as the result of, e.g.,
    $ sqlite3 cscratch_20181109_sizebytype.sqlite "SELECT SUM(size) FROM files"
The values are hard-coded here because the actual `*_sizebytype.sqlite` files are large and may not be available on the system where Jupyter is running.
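The same aggregate can be reproduced from Python with the stdlib `sqlite3` module. The table and column names below follow the shell command above, and we run against a throwaway in-memory database since the real files may not be present:

```python
import sqlite3

# Build a tiny in-memory stand-in for a *_sizebytype.sqlite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (size INTEGER)")
conn.executemany("INSERT INTO files (size) VALUES (?)", [(100,), (250,), (4096,)])

# This is the query whose result is hard-coded in INPUT_DB_FILES above.
(used_capacity,) = conn.execute("SELECT SUM(size) FROM files").fetchone()
print(used_capacity)  # 4446
conn.close()
```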
```
# used_capacity below is from executing `SELECT sum(size) FROM entries`
INPUT_DB_FILES = {
'cscratch_20181109': {
'filename': 'datasets/cscratch_20181109_sizebytype.sqlite',
'used_capacity': 22880138323554001,
},
'cscratch_20190115': {
'filename': 'datasets/cscratch_20190115_sizebytype.sqlite',
'used_capacity': 24794479198206006,
},
}
# Types of inodes in our dataframe
INODE_TYPES = ['files', 'dirs', 'symlinks', 'blks', 'chrs', 'fifos', 'socks']
NON_FILE_INODE_TYPES = INODE_TYPES[1:]
DECODER_RING = {
"cscratch": "cscratch/Lustre",
"cscratch_20181109": "cscratch/Lustre (Nov 2018)",
"cscratch_20190115": "cscratch/Lustre (Jan 2019)",
}
ALPHA=1.0 # how transparent to make each file system's color in the plots
def humanize_units(bytect):
"""Helper function to convert bytes into base-2 units"""
for units in [(2**50, "PiB"), (2**40, "TiB"), (2**30, "GiB"), (2**20, "MiB"), (2**10, "KiB")]:
if abs(bytect) >= units[0]:
return bytect / units[0], units[1]
return bytect, "bytes" if bytect != 1 else "byte"
def humanize_units_generic_base10(count, long=False):
"""Helper function to convert counts into base-10 units"""
for units in [(10.0**12, "T", "trillion"), (10.0**9, "B", "billion"), (10.0**6, "M", "million"), (10.0**3, "K", "thousand")]:
if abs(count) >= units[0]:
if long:
return count / units[0], units[2]
else:
return count / units[0], units[1]
return count, ""
dataframes = {}
for fsname, config in INPUT_DB_FILES.items():
# Either read a cached version of the file size distribution, or recalculate and cache it
cached_histogram = config['filename'].replace('.sqlite', '_hist.csv')
if os.path.isfile(cached_histogram):
print("Reading cached histogram from %s" % cached_histogram)
dataframes[fsname] = pandas.read_csv(cached_histogram, index_col='bin_size')
else:
conn = sqlite3.connect(config['filename'])
print("Generating histogram from %s" % config['filename'])
dataframes[fsname] = histogram.histogram_dataframe(conn, INODE_TYPES)
conn.close()
print("Writing cached histogram to %s" % cached_histogram)
        dataframes[fsname].to_csv(cached_histogram)
fsnames = dataframes.keys()
# Calculate mass of each histogram bin
dict_to_df = {}
for fsname in dataframes.keys():
dict_to_df[fsname] = (dataframes[fsname]['num_files'] * (dataframes[fsname].index.values)).copy()
inode_mass_df = pandas.DataFrame(dict_to_df)
# Count the number of inodes in each bin
dict_to_df = {}
for fsname in dataframes.keys():
dict_to_df[fsname] = dataframes[fsname]['num_files']
inode_ct_df = pandas.DataFrame(dict_to_df)
```
## Plot file system mass distribution
This histogram includes only _file_ inodes.
```
COL = 'cscratch_20190115'
COL_DATE = COL.split('_')[1]
BAR_PARAMS = dict(width=1.0, edgecolor='black', color='C0', alpha=ALPHA, label=DECODER_RING[COL])
fig, axes = matplotlib.pyplot.subplots(nrows=2, ncols=1, figsize=(8, 6), sharex=True)
fig.subplots_adjust(hspace=0.0, wspace=0.0)
# draw plot - inode distribution
ax = axes[0]
plot_df = (inode_ct_df / inode_ct_df.sum())
plot_df.index = ["%d %s" % humanize_units(x) for x in plot_df.index.values]
plot_params = BAR_PARAMS.copy()
plot_params.update(dict())
plot_df[COL].plot.bar(ax=ax, **plot_params)
ax.set_ylabel("Fraction\ntotal inodes")
ax.set_title("(a) File size distribution", x=0.02, y=0.85,
ha='left', transform=ax.transAxes, backgroundcolor='#FFFFFFFF')
# draw plot - mass distribution
ax = axes[1]
# normalize to mass of each storage system
plot_df = (inode_mass_df / inode_mass_df.sum())
plot_df.index = ["%d %s" % humanize_units(x) for x in plot_df.index.values]
plot_params = BAR_PARAMS.copy()
plot_params.update(dict())
plot_df[COL].plot.bar(ax=ax, **plot_params)
# Relabel x axis
new_xticks = []
new_labels = []
min_x = None
max_x = None
for index, label in enumerate(ax.get_xticklabels()):
    if ((index+1) % 4) == 0 or index == 0:
        new_xticks.append(index)
        new_labels.append(label.get_text())
    if min_x is None or (plot_df.iloc[index].sum() > 0 and index < min_x):
        min_x = index
    if max_x is None or (plot_df.iloc[index].sum() > 0 and index > max_x):
        max_x = index
ax.set_ylabel("Fraction\ntotal capacity")
ax.set_ylim(-0.005, 0.14)
ax.set_xticks(new_xticks)
ax.set_xticklabels(new_labels, rotation=30, ha='right')
ax.set_xlabel("File size")
ax.set_xlim(min_x - 1, max_x + 2)
# set minor ticks for every bin
axes[0].xaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(1))
axes[1].xaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(1))
ax.set_title("(b) File mass distribution", x=0.02, y=0.85,
ha='left', transform=ax.transAxes, backgroundcolor='#FFFFFFFF')
for ax in axes:
    ax.grid()
    ax.set_axisbelow(True)
    ax.set_ylim(-0.005, 0.1299)
    majtick = matplotlib.ticker.MultipleLocator(0.04)
    mintick = matplotlib.ticker.MultipleLocator(0.01)
    majtickfmt = matplotlib.ticker.FormatStrFormatter("%.2f")
    ax.yaxis.set_major_locator(majtick)
    ax.yaxis.set_minor_locator(mintick)
    ax.yaxis.set_major_formatter(majtickfmt)
output_file = 'cscratch_file_size_and_mass_hist_%s.pdf' % COL_DATE
fig.savefig(output_file, dpi=200, bbox_inches='tight', transparent=True)
print("Wrote output to", output_file)
```
The HPC-IODC version of the paper does not include the file size distribution, so re-plot it here. The code above should probably be refactored so there isn't so much copy-paste duplication, but deadlines necessitated sloppy coding. Sorry!
```
fig, ax = matplotlib.pyplot.subplots(figsize=(8, 3))
plot_df = (inode_ct_df / inode_ct_df.sum())
plot_df.index = ["%d %s" % humanize_units(x) for x in plot_df.index.values]
plot_params = BAR_PARAMS.copy()
plot_df[COL].plot.bar(ax=ax, **plot_params)
ax.set_ylabel("Fraction\ntotal inodes")
# Relabel x axis
new_xticks = []
new_labels = []
min_x = None
max_x = None
for index, label in enumerate(ax.get_xticklabels()):
    if ((index+1) % 4) == 0 or index == 0:
        new_xticks.append(index)
        new_labels.append(label.get_text())
    if min_x is None or (plot_df.iloc[index].sum() > 0 and index < min_x):
        min_x = index
    if max_x is None or (plot_df.iloc[index].sum() > 0 and index > max_x):
        max_x = index
ax.set_ylabel("Fraction\ntotal capacity")
ax.set_ylim(-0.005, 0.14)
ax.set_xticks(new_xticks)
ax.set_xticklabels(new_labels, rotation=30, ha='right')
ax.set_xlabel("File size")
ax.set_xlim(min_x - 1, max_x + 2)
# set minor ticks for every bin
ax.xaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(1))
ax.grid()
ax.set_axisbelow(True)
ax.set_ylim(-0.005, 0.1299)
majtick = matplotlib.ticker.MultipleLocator(0.04)
mintick = matplotlib.ticker.MultipleLocator(0.01)
majtickfmt = matplotlib.ticker.FormatStrFormatter("%.2f")
ax.yaxis.set_major_locator(majtick)
ax.yaxis.set_minor_locator(mintick)
ax.yaxis.set_major_formatter(majtickfmt)
output_file = 'cscratch_file_size_hist_%s.pdf' % COL_DATE
fig.savefig(output_file, dpi=200, bbox_inches='tight', transparent=True)
print("Wrote output to", output_file)
```
## Distribution of MDT mass from non-file inodes
The following distribution shows the MDT mass required by non-file inodes. Directories can be very large if they contain many child inodes. The other non-file inode types are relatively uninteresting.
```
BAR_PARAMS = dict(width=1.0, edgecolor='black', alpha=ALPHA, label=DECODER_RING[COL])
label_map = {
'num_dirs': "Directories",
'num_symlinks': 'Symlinks',
'num_blks': 'Block devices',
'num_chrs': 'Character devices',
'num_fifos': 'FIFOs',
'num_socks': 'Sockets'
}
#fig, ax = matplotlib.pyplot.subplots(figsize=(8, 4))
plot_df = dataframes['cscratch_20190115'][[x for x in dataframes['cscratch_20190115'] if x != "num_files"]]
plot_df /= plot_df.sum().sum()
plot_df.index = ["%d %s" % humanize_units(x) for x in plot_df.index.values]
plot_df = plot_df.loc[:, (plot_df != 0).any(axis=0)] # drop all zero columns
plot_df.columns = [label_map.get(x, x) for x in plot_df.columns]
fig, axes = matplotlib.pyplot.subplots(nrows=2, ncols=1, figsize=(8, 4), sharex=True)
fig.subplots_adjust(
hspace=0.05,
wspace=0.0
)
# The log scale part
ax = axes[1]
plot_df.plot.bar(stacked=True, ax=ax, **BAR_PARAMS)
ax.set_yscale('log')
ax.legend().set_visible(False)
# The linear scale part
ax = axes[0]
plot_df.plot.bar(stacked=True, ax=ax, **BAR_PARAMS)
ax.set_ylim(0.1, 1)
ax.yaxis.set_major_locator(matplotlib.ticker.MultipleLocator(0.2))
ax.yaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(0.1))
ax.yaxis.set_major_formatter(matplotlib.ticker.FormatStrFormatter("%.2f"))
# The bottom frame (regardless of what it is)
ax = axes[1]
# Relabel x axis
if True:
    new_xticks = []
    new_labels = []
    min_x = None
    max_x = None
    for index, label in enumerate(ax.get_xticklabels()):
        if ((index+1) % 4) == 0 or index == 0:
            new_xticks.append(index)
            new_labels.append(label.get_text())
        if min_x is None or (plot_df.iloc[index].sum() > 0 and index < min_x):
            min_x = index
        if max_x is None or (plot_df.iloc[index].sum() > 0 and index > max_x):
            max_x = index
    ax.set_xticks(new_xticks)
    ax.set_xticklabels(new_labels, rotation=30, ha='right')
    ax.set_xlabel("Size")
    ax.set_xlim(min_x - 1, max_x + 2)
    ax.xaxis.set_minor_locator(matplotlib.ticker.MultipleLocator(1))
ax.tick_params(axis='x', which='major', length=8)
ax.set_ylim(1e-9, 0.1)
for ax in axes:
    ax.grid()
    ax.set_axisbelow(True)
fig.text(0.02, 0.5,
"Fraction of non-file inodes",
verticalalignment='center',
horizontalalignment='center',
rotation='vertical')
# draw break between axes
axes[0].spines['bottom'].set_visible(False)
axes[1].spines['top'].set_visible(False)
axes[0].xaxis.tick_top()
axes[0].tick_params(labeltop=False)
axes[1].xaxis.tick_bottom()
chop=0.015
axes[0].plot((-chop, chop), (-chop, chop), transform=axes[0].transAxes, color='k', linewidth=1.0, clip_on=False)
axes[0].plot((1-chop, 1+chop), (-chop, chop), transform=axes[0].transAxes, color='k', linewidth=1.0, clip_on=False)
axes[1].plot((-chop, chop), (1-chop, 1+chop), transform=axes[1].transAxes, color='k', linewidth=1.0, clip_on=False)
axes[1].plot((1-chop, 1+chop), (1-chop, 1+chop), transform=axes[1].transAxes, color='k', linewidth=1.0, clip_on=False)
# save output
output_file = 'cscratch_all_inode_hist_%s.pdf' % COL_DATE
fig.savefig(output_file, dpi=200, bbox_inches='tight', transparent=True)
print("Wrote output to", output_file)
```
# 01 - Getting Started
The main application for `scikit-gstat` is variogram analysis and [Kriging](https://en.wikipedia.org/wiki/Kriging). This tutorial will guide you through the most basic functionality of `scikit-gstat`. There are other tutorials that explain specific methods or attributes in `scikit-gstat` in more detail.
#### What you will learn in this tutorial
* How to instantiate `Variogram` and `OrdinaryKriging`
* How to read a variogram
* Perform an interpolation
* Most basic plotting
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pprint import pprint
plt.style.use('ggplot')
```
The `Variogram` and `OrdinaryKriging` classes can be loaded directly from `skgstat`. This is the name of the Python module.
```
from skgstat import Variogram, OrdinaryKriging
```
As of the current version, there are some deprecated attributes and methods in the `Variogram` class. They do not raise `DeprecationWarning`s, but rather print a warning message to the screen. You can suppress these warnings by setting an `SKG_SUPPRESS` environment variable
```
%set_env SKG_SUPPRESS=true
```
## 1.1 Load data
You can find a prepared example data set in the `./data` subdirectory. This example is extracted from a generated Gaussian random field. We can expect the field to be stationary and show a nice spatial dependence, because it was created that way.
We can load one of the examples and have a look at the data:
```
data = pd.read_csv('./data/sample_sr.csv')
print("Loaded %d rows and %d columns" % data.shape)
data.head()
```
Get a first overview of your data by plotting the `x` and `y` coordinates and visually inspect how the `z` spread out.
```
fig, ax = plt.subplots(1, 1, figsize=(9, 9))
art = ax.scatter(data.x,data.y, s=50, c=data.z, cmap='plasma')
plt.colorbar(art);
```
We can already see a lot from here:
* The small values seem to concentrate on the upper left and lower right corner
* Larger values are arranged like a band from lower left to upper right corner
* To me, each of these blobs seems to have a diameter of something like 30 or 40 units.
* The distance between the minimum and maximum seems to be not more than 60 or 70 units.
These are already very important insights.
## 1.2 Build a Variogram
As a quick reminder, the variogram relates the pair-wise separating distances of `coordinates` to the *semi-variance* of the corresponding `values` pairs. The default estimator used is the Matheron estimator:
$$ \gamma (h) = \frac{1}{2N(h)} * \sum_{i=1}^{N(h)}(Z(x_i) - Z(x_{i + h}))^2 $$
For more details, please refer to the [User Guide](https://mmaelicke.github.io/scikit-gstat/userguide/variogram.html#experimental-variograms)
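To make the estimator concrete, here is a minimal NumPy sketch of the Matheron estimator for a single lag class (an illustration only, not scikit-gstat's actual implementation):

```python
import numpy as np

def matheron(pair_diffs):
    """gamma(h) = 1 / (2 N(h)) * sum_i (Z(x_i) - Z(x_{i+h}))^2,
    given the value differences of the N(h) point pairs in one lag class."""
    pair_diffs = np.asarray(pair_diffs, dtype=float)
    return np.sum(pair_diffs ** 2) / (2 * len(pair_diffs))

# Two pairs with value differences 1 and 3: (1 + 9) / (2 * 2) = 2.5
gamma = matheron([1.0, 3.0])
```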
The `Variogram` class takes at least two arguments. The `coordinates` and the `values` observed at these locations.
You should also set the `normalize` parameter explicitly, as its default value changes to `False` in version `0.2.8`. This attribute affects only the plotting, not the variogram values.
Additionally, the number of bins is set to 15, because we have fairly many observations and the default value of 10 is unnecessarily small. The `maxlag` parameter sets the maximum distance for the last bin. We know from the plot above that more than 60 units is not really meaningful.
```
V = Variogram(data[['x', 'y']].values, data.z.values, normalize=False, maxlag=60, n_lags=15)
fig = V.plot(show=False)
```
The upper subplot shows the histogram for the count of point-pairs in each lag class. You can see various things here:
* As expected, there is a clear spatial dependency, because semi-variance increases with distance (blue dots)
* The default `spherical` variogram model is well fitted to the experimental data
* The shape of the dependency is **not** captured quite well, but fair enough for this example
The sill of the variogram should correspond with the field variance. The field is unknown, but we can compare the sill to the *sample* variance:
```
print('Sample variance: %.2f Variogram sill: %.2f' % (data.z.var(), V.describe()['sill']))
```
The `describe` method will return the most important parameters as a dictionary, and we can simply print the variogram object to the screen to see all parameters.
```
pprint(V.describe())
print(V)
```
## 1.3 Kriging
The Kriging class will now use the Variogram from above to estimate the Kriging weights for each grid cell. This is done by solving a linear equation system. For an unobserved location $s_0$, we can use the distances to 5 observation points and build the system like:
$$
\begin{pmatrix}
\gamma(s_1, s_1) & \gamma(s_1, s_2) & \gamma(s_1, s_3) & \gamma(s_1, s_4) & \gamma(s_1, s_5) & 1\\
\gamma(s_2, s_1) & \gamma(s_2, s_2) & \gamma(s_2, s_3) & \gamma(s_2, s_4) & \gamma(s_2, s_5) & 1\\
\gamma(s_3, s_1) & \gamma(s_3, s_2) & \gamma(s_3, s_3) & \gamma(s_3, s_4) & \gamma(s_3, s_5) & 1\\
\gamma(s_4, s_1) & \gamma(s_4, s_2) & \gamma(s_4, s_3) & \gamma(s_4, s_4) & \gamma(s_4, s_5) & 1\\
\gamma(s_5, s_1) & \gamma(s_5, s_2) & \gamma(s_5, s_3) & \gamma(s_5, s_4) & \gamma(s_5, s_5) & 1\\
1 & 1 & 1 & 1 & 1 & 0 \\
\end{pmatrix} *
\begin{bmatrix}
\lambda_1 \\
\lambda_2 \\
\lambda_3 \\
\lambda_4 \\
\lambda_5 \\
\mu \\
\end{bmatrix} =
\begin{pmatrix}
\gamma(s_0, s_1) \\
\gamma(s_0, s_2) \\
\gamma(s_0, s_3) \\
\gamma(s_0, s_4) \\
\gamma(s_0, s_5) \\
1 \\
\end{pmatrix}
$$
For more information, please refer to the [User Guide](https://mmaelicke.github.io/scikit-gstat/userguide/kriging.html#kriging-equation-system)
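The system above can be built and solved directly with NumPy. The sketch below is hand-rolled for illustration (`OrdinaryKriging` does this internally, with additional logic); it assumes the semivariances between observations and to the target location have already been evaluated from a fitted variogram model:

```python
import numpy as np

def kriging_weights(gamma_obs, gamma_target):
    """Solve the ordinary-kriging system for n observations.

    gamma_obs:    (n, n) semivariances between observation points
    gamma_target: (n,) semivariances from each observation to s0
    Returns the kriging weights lambda_i and the Lagrange multiplier mu.
    """
    n = gamma_obs.shape[0]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma_obs
    A[n, n] = 0.0                      # Lagrange-multiplier corner
    b = np.append(gamma_target, 1.0)   # last equation: weights sum to 1
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n]

# Two observations placed symmetrically about s0 receive equal weights
weights, mu = kriging_weights(np.array([[0.0, 1.0], [1.0, 0.0]]),
                              np.array([0.5, 0.5]))
```

The unbiasedness constraint (the row and column of ones) forces the weights to sum to one, which is why the estimate is a proper weighted mean of the neighbors.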
Consequently, the `OrdinaryKriging` class needs a `Variogram` object as a mandatory attribute. Two very important optional attributes are `min_points` and `max_points`. They will limit the size of the Kriging equation system. As we have 200 observations, we can require at least 5 neighbors within the range. More than 15 will only unnecessarily slow down the computation. The `mode='exact'` attribute will advise the class to build and solve the system above for each location.
```
ok = OrdinaryKriging(V, min_points=5, max_points=15, mode='exact')
```
The `transform` method will apply the interpolation for passed arrays of coordinates. It requires each dimension as a single 1D array. We can easily build a meshgrid of 100x100 coordinates and pass them to the interpolator. To receive a 2D result, we can simply reshape the result. The Kriging error will be available as the `sigma` attribute of the interpolator.
```
# build the target grid
xx, yy = np.mgrid[0:99:100j, 0:99:100j]
field = ok.transform(xx.flatten(), yy.flatten()).reshape(xx.shape)
s2 = ok.sigma.reshape(xx.shape)
```
And finally, we can plot the result.
```
fig, axes = plt.subplots(1, 2, figsize=(16, 8))
art = axes[0].matshow(field.T, origin='lower', cmap='plasma')
axes[0].set_title('Interpolation')
axes[0].plot(data.x, data.y, '+k')
axes[0].set_xlim((0,100))
axes[0].set_ylim((0,100))
plt.colorbar(art, ax=axes[0])
art = axes[1].matshow(s2.T, origin='lower', cmap='YlGn_r')
axes[1].set_title('Kriging Error')
plt.colorbar(art, ax=axes[1])
axes[1].plot(data.x, data.y, '+w')
axes[1].set_xlim((0,100))
axes[1].set_ylim((0,100));
```
From the Kriging error map, you can see that the interpolation is very certain close to the observation points, but the error is rather high in areas with only little coverage (like the upper left corner).
# Assembling thermal circuits
Objectives:
* Assemble complex circuits.
* Simulate the assembled circuits.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import dm4bem
import tuto
```
## Defining the problem of circuit assembling
The problem of assembling thermal circuits can be stated as:
Given a number of thermal circuits, $TC_1$, $TC_2$, ... , $TC_n$, and knowing that some of their nodes are common, find the assembled circuit TC (see figure below).

> Example of the problem of assembling thermal circuits: given four circuits, assemble them knowing the common nodes. The heat-flow sources and the capacities in the nodes are divided for each circuit
From conservation of energy, it results that if there is a flow source in a node, it needs to be the sum of the sources of each circuit; a simple solution is to divide it equally among the circuits. Likewise, if there is a common capacity in a node, it needs to be the sum of the capacities of each circuit, which can also be divided equally among the circuits.
To exemplify the procedure, we will use the same model from tutorial `t03CubeFB` representing a room with insulated concrete wall and a glass wall. The room is ventilated and the temperature is controlled by a P-controller to which additional load is added. The toy model is used to show specific aspects of the assembling procedure, not for correctness of the modelling.

> Elementary models that will be assembled

> Algebraic representation of the thermal circuit: a) assembly-connectivity matrix; b) elementary thermal circuits.
We would like to construct separate models for concrete wall, glass wall, ventilation, and room air and to assemble them into one model (see figure above).
## Numbering the circuits
### Numbering elementary circuits
In principle, the numbering of the nodes and branches can be done arbitrarily. The connections are indicated by the oriented incidence matrix **A**. Since numbering becomes tedious for large circuits, the following rules may be adopted:
- number the nodes in order (e.g. from left to right);
- number the branches in increasing order of nodes and orient them from the lower to the higher node. Note: reference temperature is node 0.
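A small numerical illustration of these rules (a hypothetical two-branch chain, not one of the tutorial's circuits): with branches oriented from the lower to the higher node and node 0 as reference, each row of $A$ holds $-1$ at the branch's lower node and $+1$ at its higher node (the reference column is dropped), so $A\theta$ yields the temperature differences across the branches.

```python
import numpy as np

# Chain: reference node 0 -> node 1 -> node 2
# branch 0: node 0 -> node 1, branch 1: node 1 -> node 2
A = np.array([[1, 0],     # +1 at node 1; the reference column is dropped
              [-1, 1]])   # -1 at node 1, +1 at node 2

theta = np.array([10.0, 20.0])  # temperatures at nodes 1 and 2 (ref = 0)
dtheta = A @ theta              # temperature difference across each branch
```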
### Numbering the assembled circuit
When assembling the thermal circuits, some nodes are put in common. Therefore, the number of nodes in the assembled circuit will be smaller than the sum of the nodes of elementary circuits. The number of branches will not change. The nodes and the branches of the assembled circuit will be in the order of assembling.
**Local and global indexing of nodes**

| Thermal circuit   | TC1       | TC2   | TC3 | TC4 |
|-------------------|-----------|-------|-----|-----|
| Local node index  | 1 2 3 4 5 | 1 2 3 | 1 2 | 1   |
| Global node index | 1 2 3 4 5 | 5 6 7 | 8 6 | 7   |
**Local and global indexing of branches**

| Thermal circuit     | TC1       | TC2   | TC3  | TC4   |
|---------------------|-----------|-------|------|-------|
| Local branch index  | 1 2 3 4 5 | 1 2 3 | 1 2  | 1 2   |
| Global branch index | 1 2 3 4 5 | 6 7 8 | 9 10 | 11 12 |
The assembling of the circuits is indicated by the assembling matrix. Each row of this matrix has four elements that indicate two nodes that will be put together:
- number of circuit 1
- node of circuit 1
- number of circuit 2
- node of circuit 2
For our example, the assembling matrix is:
$$\mathbf{A_{ss}} =\begin{bmatrix}
0 & 4 & 1 & 0\\
1 & 1 & 2 & 1\\
1 & 2 & 3 & 0
\end{bmatrix}$$
The description of the disassembled circuits, given by the dictionary $TC_d = \{TC_0,…,TC_n\}$ of dictionaries $TC_i=\{A_i,G_i,b_i,C_i,f_i,y_i\}$, and the assembling matrix $A_{ss}$ contain all the necessary information for obtaining the assembled circuit.
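A toy sketch of what merging does to the incidence matrices (illustration only; `dm4bem.TCAss` implements the general procedure): the branches of all circuits are stacked, while the columns of merged nodes are shared.

```python
import numpy as np

# Circuit 1: two branches over nodes {1, 2}; circuit 2: one branch over {1', 2'}
A1 = np.array([[1, 0],      # branch: reference -> node 1
               [-1, 1]])    # branch: node 1 -> node 2
A2 = np.array([[-1, 1]])    # branch: node 1' -> node 2'

# Assembling: node 2 of circuit 1 is merged with node 1' of circuit 2,
# giving 3 global nodes (1, 2=1', 3=2') and 3 branches in total.
A = np.zeros((3, 3))
A[:2, :2] = A1              # circuit 1 keeps its node columns
A[2:, 1:] = A2              # circuit 2's columns start at the merged node

theta = np.array([10.0, 20.0, 30.0])
dtheta = A @ theta          # differences across all three branches
```

The branch count is preserved (2 + 1 = 3 rows) while the node count shrinks from 4 to 3, exactly as described above.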
## Procedure for assembling
The assembling is implemented in function `thermal_circuit(Kp)`.
$K_p$ is the gain of the P-controller:
* If $K_p \rightarrow \infty$, then the controller tends towards perfection, i.e., the indoor temperature tends towards its set-point.
* If $K_p \rightarrow 0$, then the controller is ineffective, i.e., the indoor temperature is in free-floating.
The thermal conductances and capacities are:
The elementary thermal circuits are described by the matrices and vectors $A, G, b, C, f, y$.
The thermal circuit $TC_0$ (in red in the figure) is formed by convection on the outside of the wall and conduction in the meshes of concrete and insulation. The numbers of flow branches `nq` and of temperature nodes `nt` are:
The incidence matrix $A$ is a difference operator for temperatures:
The conductance matrix $G$ contains `nc` concrete meshes and `ni` insulation meshes. The conductances of the outdoor convection, conduction in concrete, and conduction in insulation are stacked horizontally. Then, $G$ is obtained as a diagonal matrix.
There is only one branch with a temperature source: branch `b[0]`.
The capacity matrix is:
There are two nodes with heat flow sources, the first and the last node: `f[0]`, `f[-1]`.
There is no temperature from circuit $TC_{d0}$ in the output vector:
The circuit $TC_{d0}$ is a dictionary having the matrices and vectors $A, G, b, C, f, y$:
The thermal circuits $TC_{d1}, TC_{d2}$ and $TC_{d3}$ are constructed similarly:
The *elementary* disassembled circuits $TC_{d0}, \dots, TC_{d3}$ are put together in a dictionary. Note that an *elementary* circuit may be used more than once in the dictionary.
The assembly matrix `AssX` indicates how the circuits are connected. For example, the 1st row of matrix `AssX` indicates that the last node (`nt - 1`) of circuit $TC_{d0}$ is merged with the 1st node of circuit $TC_{d1}$; the 1st node (i.e., node no. 0) of circuit $TC_{d1}$ is deleted.
Finally, the disassembled thermal circuit `TCd` is assembled according to the connections indicated by the assembly matrix `AssX`, using the function `dm4bem.TCAss`. The result is an assembled thermal circuit with matrices and vectors $A, G, b, C, f, y$.
## Simulation
### Free-floating
Let's consider that the P-controller is not effective, i.e. $K_p \rightarrow 0$.
```
Kp = 1e-3 # no controller Kp -> 0
```
The assembled thermal circuit `TCa` is obtained as shown above:
```
TCa = tuto.thermal_circuit(Kp)
```
The thermal circuit `TCa` is converted to state-space representation:
```
# Thermal circuit -> state-space
[As, Bs, Cs, Ds] = dm4bem.tc2ss(
TCa['A'], TCa['G'], TCa['b'], TCa['C'], TCa['f'], TCa['y'])
# Maximum time-step
dtmax = min(-2. / np.linalg.eig(As)[0])
print(f'Maximum time step: {dtmax:.2f} s')
```
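The bound $-2 / \lambda$ comes from the stability of the explicit Euler scheme: for a mode $\dot\theta = \lambda \theta$ with $\lambda < 0$, the update $\theta_{k+1} = (1 + \lambda \, \Delta t)\,\theta_k$ stays bounded only while $|1 + \lambda \Delta t| \le 1$, i.e. $\Delta t \le -2/\lambda$. A toy check with a diagonal state matrix (made-up time constants, not the assembled circuit's):

```python
import numpy as np

# Two decoupled modes with time constants 100 s and 10 s
As_toy = np.diag([-0.01, -0.1])

# Same formula as in the cell above: the fastest mode limits the step,
# here to -2 / (-0.1) = 20 s
dtmax_toy = min(-2. / np.linalg.eig(As_toy)[0])
```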
We will choose a time step for integration slightly smaller than the maximum time step:
```
dt = 400 # [s] simulation time step
```
#### Step response
```
duration = 3600 * 24 * 1 # [s]
tuto.step_response(duration, dt, As, Bs, Cs, Ds)
```
#### Simulation with weather data
```
filename = 'FRA_Lyon.074810_IWEC.epw'
start_date = '2000-01-03 12:00:00'
end_date = '2000-03-04 18:00:00'
tuto.P_control(filename, start_date, end_date, dt,
As, Bs, Cs, Ds, Kp)
```
### Perfect controller
Let's consider that the controller is perfect, i.e. $K_p \rightarrow \infty$.
```
Kp = 1e3  # P-controller gain, Kp -> ∞
```
The maximum timestep in this case is:
```
TCa = tuto.thermal_circuit(Kp)
[As, Bs, Cs, Ds] = dm4bem.tc2ss(
TCa['A'], TCa['G'], TCa['b'], TCa['C'], TCa['f'], TCa['y'])
dtmax = min(-2. / np.linalg.eig(As)[0])
print(f'Maximum time step: {dtmax:.2f} s')
```
#### Simulation with weather data
```
dt = 50
tuto.P_control(filename, start_date, end_date, dt,
As, Bs, Cs, Ds, Kp)
```
<a href="https://colab.research.google.com/github/cuongvng/neural-networks-with-PyTorch/blob/master/CNNs/GoogleNet/GoogleNet.ipynb" target="_parent">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
```
!pip install git+https://github.com/cuongvng/neural-networks-with-PyTorch.git
!nvidia-smi
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import sys
sys.path.append("../..")
from utils.training_helpers import train_cnn
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```
Load CIFAR dataset
```
transform = transforms.Compose([
transforms.Resize((224,224)),
transforms.ToTensor() # convert data from PIL image to tensor
])
cifar_train = torchvision.datasets.CIFAR10(root="../data/", train=True,
transform=transform,
target_transform=None,
download=True)
cifar_test = torchvision.datasets.CIFAR10(root="../data/", train=False,
transform=transform,
target_transform=None,
download=True)
batch_size = 128
train_loader = torch.utils.data.DataLoader(
dataset=cifar_train,
batch_size=128,
shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
dataset=cifar_test,
batch_size=128,
shuffle=True,
)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
Now let's implement the network architecture.
The core of GoogleNet is a block called `Inception`.
It processes the input in parallel along 4 separate paths with different convolutional layers, then concatenates the results of the 4 paths to yield the final output.
This kind of architecture lets us capture spatial information at different scales, thereby increasing the expressiveness and robustness of the model.
<figure>
<center><img src="https://github.com/d2l-ai/d2l-en/blob/master/img/inception.svg?raw=1"/></center>
<center><figcaption class="center">Inception block</figcaption></center>
</figure>
```
class Inception(nn.Module):
    def __init__(self, in_channels, out_channels_p1, out_channels_p2, out_channels_p3, out_channels_p4):
        super(Inception, self).__init__()
        self.path1 = nn.Conv2d(in_channels=in_channels, out_channels=out_channels_p1, kernel_size=1)
        self.path2 = nn.Sequential(
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels_p2[0], kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=out_channels_p2[0], out_channels=out_channels_p2[1], kernel_size=3, padding=1),
            nn.ReLU()
        )
        self.path3 = nn.Sequential(
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels_p3[0], kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=out_channels_p3[0], out_channels=out_channels_p3[1], kernel_size=5, padding=2),
            nn.ReLU()
        )
        self.path4 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_channels=in_channels, out_channels=out_channels_p4, kernel_size=1),
            nn.ReLU()
        )

    def forward(self, X):
        X1 = self.path1(X)
        X2 = self.path2(X)
        X3 = self.path3(X)
        X4 = self.path4(X)
        # Concatenate results of the 4 paths over the channel dimension
        return torch.cat((X1, X2, X3, X4), dim=1)
```
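Before wiring blocks together, it helps to check the channel bookkeeping: the output channel count of an Inception block is simply the sum of the four paths' final channels (a throwaway helper for illustration, not part of the network code):

```python
def inception_out_channels(p1, p2, p3, p4):
    """Channels after concatenating the four paths along dim=1."""
    return p1 + p2[1] + p3[1] + p4

# First Inception of block 3: 64 + 128 + 32 + 32 = 256, which must match
# the in_channels of the second Inception in that block.
c3 = inception_out_channels(64, (96, 128), (16, 32), 32)
```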
Now let's see where the Inception blocks are located in GoogleNet
<figure>
<center><img src="https://github.com/d2l-ai/d2l-en/blob/master/img/inception-full.svg?raw=1"/></center>
<center><figcaption class="center">GoogleNet Architecture</figcaption></center>
</figure>
As we can see, there are 9 Inception blocks in total in GoogleNet, placed between pooling layers.
Let's translate the diagram into code
```
class GoogleNet(nn.Module):
    def __init__(self):
        super(GoogleNet, self).__init__()
        # Block 1: 7x7 Conv and 3x3 MaxPool
        self.block1 = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        # Block 2: 1x1 Conv, 3x3 Conv and 3x3 MaxPool
        self.block2 = nn.Sequential(
            nn.Conv2d(in_channels=64, out_channels=64, kernel_size=1),
            nn.ReLU(),
            nn.Conv2d(in_channels=64, out_channels=192, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        # Block 3: 2 Inception blocks and 3x3 MaxPool
        self.block3 = nn.Sequential(
            Inception(in_channels=192,  # the previous output channel (of 3x3 Conv)
                      out_channels_p1=64,
                      out_channels_p2=(96, 128),
                      out_channels_p3=(16, 32),
                      out_channels_p4=32),
            Inception(in_channels=256,  # sum of the previous output channels: 64 + 128 + 32 + 32 = 256
                      out_channels_p1=128,
                      out_channels_p2=(128, 192),
                      out_channels_p3=(32, 96),
                      out_channels_p4=64),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        # Block 4: 5 Inception blocks and 3x3 MaxPool
        self.block4 = nn.Sequential(
            Inception(128+192+96+64, 192, (96, 208), (16, 48), 64),
            Inception(192+208+48+64, 160, (112, 224), (24, 64), 64),
            Inception(160+224+64+64, 128, (128, 256), (24, 64), 64),
            Inception(128+256+64+64, 112, (144, 288), (32, 64), 64),
            Inception(112+288+64+64, 256, (160, 320), (32, 128), 128),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        # Block 5: 2 Inception blocks
        self.block5 = nn.Sequential(
            Inception(256+320+128+128, 256, (160, 320), (32, 128), 128),
            Inception(256+320+128+128, 384, (192, 384), (48, 128), 128),
        )
        # The Global Average Pooling layer will be replaced by an average pooling function in the `forward` method.
        # The reason why I do not declare it as a nn layer here is that for Global Pooling, the kernel size is the
        # shape (height, width) of the input tensor, thus it outputs an 1x1 tensor (over the channel dimension).
        # And since I do not know (or just be lazy to calculate) its kernel size, I will print the input shape
        # at the `forward` method to get the kernel size.
        # The Dense Layer will output 10 classes, thus it has 10 hidden units.
        # The input shape is the number of channels after block 5: 384+384+128+128 = 1024
        self.fc = nn.Linear(in_features=1*1*1024, out_features=10)

    def forward(self, X):
        X = X.type(torch.float)
        X = self.block1(X)
        X = self.block2(X)
        X = self.block3(X)
        X = self.block4(X)
        X = self.block5(X)
        # print(X.shape)
        X = F.avg_pool2d(X, kernel_size=(X.shape[2], X.shape[3]))
        X = torch.flatten(X, start_dim=1)
        X = self.fc(X)
        return X
net = GoogleNet()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(),lr=0.001)
n_epochs = 10
train_cnn(net, device, train_loader, test_loader, optimizer, criterion, n_epochs)
```
# twitter: @adamhajari
# github: github.com/adamhajari/spyre
# this notebook: http://bit.ly/pydata2015_spyre
## Before we start
make sure you have the latest version of spyre
````
pip install --upgrade dataspyre
````
**there have been recent changes to spyre, so if you installed more than a day ago, go ahead and upgrade**
## Who Am I?
```
Adam Hajari
Data Scientist on the Next Big Sound team at Pandora
adam@nextbigsound.com
@adamhajari
```
# Simple Interactive Web Applications with Spyre
Spyre is a web application framework for turning static data tables and plots into interactive web apps. Spyre was motivated by <a href="http://shiny.rstudio.com/">Shiny</a>, a similar framework for R created by the developers of RStudio.
## Where does Spyre Live?
GitHub: <a href='https://github.com/adamhajari/spyre'>github.com/adamhajari/spyre</a>
Live example of a spyre app:
- <a href='http://adamhajari.com'>adamhajari.com</a>
- <a href='http://dataspyre.herokuapp.com'>dataspyre.herokuapp.com</a>
- <a href='https://spyre-gallery.herokuapp.com'>spyre-gallery.herokuapp.com</a>
## Installing Spyre
Spyre depends on:
- cherrypy (server and backend)
- jinja2 (html and javascript templating)
- matplotlib (displaying plots and images)
- pandas (for working within tabular data)
Assuming you don't have any issues with the above dependencies, you can install spyre via pip:
```bash
$ pip install dataspyre
```
## Launching a Spyre App
Spyre's server module has an App class that every Spyre app needs to inherit. Use the app's launch() method to deploy your app.
```
from spyre import server
class SimpleApp(server.App):
    title = "Simple App"

app = SimpleApp()
app.launch()  # launching from ipython notebook is not recommended
```
If you put the above code in a file called simple_app.py you can launch the app from the command line with
```
$ python simple_app.py
```
Make sure you uncomment the last line first.
## A Very Simple Example
There are two variables of the App class that need to be overridden to create the UI for a Spyre app: inputs and outputs (plus a third optional type called controls that we'll get to later). All three variables are lists of dictionaries that specify each component's properties. For instance, to create a text box input, override the App's inputs variable:
```
from spyre import server
class SimpleApp(server.App):
    inputs = [{"type": "text",
               "key": "words",
               "label": "write here",
               "value": "hello world"}]

app = SimpleApp()
app.launch()
```
Now let's add an output. We first need to list all of our outputs and their attributes in the outputs variable.
```
from spyre import server
class SimpleApp(server.App):
    inputs = [{"type": "text",
               "key": "words",
               "label": "write here",
               "value": "hello world"}]

    outputs = [{"type": "html",
                "id": "some_html"}]

app = SimpleApp()
app.launch()
```
To generate the output, we can override a server.App method specific to that output type. In the case of html output, we override the getHTML method. Each output method should return an object specific to that output type; for html output, we just return a string.
```
from spyre import server
class SimpleApp(server.App):
    title = "Simple App"

    inputs = [{"type": "text",
               "key": "words",
               "label": "write here",
               "value": "hello world"}]

    outputs = [{"type": "html",
                "id": "some_html"}]

    def getHTML(self, params):
        words = params['words']
        return "here are the words you wrote: <b>%s</b>" % words

app = SimpleApp()
app.launch()
```
Great. We've got inputs *and* outputs, but we're not quite finished. As it is, the content of our output is static. That's because the output doesn't know when it needs to get updated. We can fix this in one of two ways:
1. We can add a button to our app and tell our output to update whenever the button is pressed.
2. We can add an `action_id` to our input that references the output that we want refreshed when the input value changes.
Let's see what the first approach looks like.
```
from spyre import server
class SimpleApp(server.App):
    title = "Simple App"

    inputs = [{"type": "text",
               "key": "words",
               "label": "write here",
               "value": "hello world"}]

    outputs = [{"type": "html",
                "id": "some_html",
                "control_id": "button1"}]

    controls = [{"type": "button",
                 "label": "press to update",
                 "id": "button1"}]

    def getHTML(self, params):
        words = params['words']
        return "here are the words you wrote: <b>%s</b>" % words

app = SimpleApp()
app.launch()
```
Our app now has a button with id "button1", and our output references our control's id, so that when we press the button we update the output with the most current input values.
<img src="input_output_control.png">
Is a button a little overkill for this simple app? Yeah, probably. Let's get rid of it and have the output update just by changing the value in the text box. To do this we'll add an `action_id` attribute to our input dictionary that references the output's id.
```
from spyre import server
class SimpleApp(server.App):
title = "Simple App"
inputs = [{ "type":"text",
"key":"words",
"label": "write here",
"value":"look ma, no buttons",
"action_id":"some_html"}]
outputs = [{"type":"html",
"id":"some_html"}]
def getHTML(self, params):
words = params['words']
return "here are the words you wrote: <b>%s</b>"%words
app = SimpleApp()
app.launch()
```
Now the output gets updated with a change to the input.
<img src="no_control.png">
## Another Example
Let's suppose you've written a function to grab historical stock price data from the web. Your function returns a pandas dataframe.
```
%pylab inline
from googlefinance.client import get_price_data
def getData(params):
ticker = params['ticker']
if ticker == 'empty':
ticker = params['custom_ticker'].upper()
xchng = "NASD"
param = {
'q': ticker, # Stock symbol (ex: "AAPL")
'i': "86400", # Interval size in seconds ("86400" = 1 day intervals)
'x': xchng, # Stock exchange symbol on which stock is traded (ex: "NASD")
'p': "3M" # Period (Ex: "1Y" = 1 year)
}
df = get_price_data(param)
return df.drop('Volume', axis=1)
params = {'ticker':'GOOG'}
df = getData(params)
df.head()
```
Let's turn this into a spyre app. We'll use a dropdown menu input this time and start by displaying the data in a table. In the previous example we overrode the `getHTML` method and had it return a string to generate HTML output. To get a table output we need to override the `getData` method and have it return a pandas dataframe (conveniently, we've already done that!)
```
from spyre import server
from googlefinance.client import get_price_data
server.include_df_index = True
class StockExample(server.App):
title = "Historical Stock Prices"
inputs = [{
"type": 'dropdown',
"label": 'Company',
"options": [
{"label": "Google", "value": "GOOG"},
{"label": "Amazon", "value": "AMZN"},
{"label": "Apple", "value": "AAPL"}
],
"key": 'ticker',
"action_id": "table_id"
}]
outputs = [{
"type": "table",
"id": "table_id"
}]
def getData(self, params):
ticker = params['ticker']
xchng = "NASD"
param = {
'q': ticker, # Stock symbol (ex: "AAPL")
'i': "86400", # Interval size in seconds ("86400" = 1 day intervals)
'x': xchng, # Stock exchange symbol on which stock is traded (ex: "NASD")
'p': "3M" # Period (Ex: "1Y" = 1 year)
}
df = get_price_data(param)
return df.drop('Volume', axis=1)
app = StockExample()
app.launch()
```
One really convenient feature of pandas is that you can plot directly from a dataframe using the plot method.
```
df.plot()
```
Let's take advantage of this convenience and add a plot to our app. To generate a plot output, we need to add another dictionary to our list of outputs.
```
from spyre import server
from googlefinance.client import get_price_data
server.include_df_index = True
class StockExample(server.App):
title = "Historical Stock Prices"
inputs = [{
"type": 'dropdown',
"label": 'Company',
"options": [
{"label": "Google", "value": "GOOG"},
{"label": "Amazon", "value": "AMZN"},
{"label": "Apple", "value": "AAPL"}
],
"key": 'ticker',
}]
outputs = [{
"type": "plot",
"id": "plot",
"control_id": "update_data"
}, {
"type": "table",
"id": "table_id",
"control_id": "update_data"
}]
controls = [{
"type": "button",
"label": "get stock data",
"id": "update_data"
}]
def getData(self, params):
ticker = params['ticker']
xchng = "NASD"
param = {
'q': ticker, # Stock symbol (ex: "AAPL")
'i': "86400", # Interval size in seconds ("86400" = 1 day intervals)
'x': xchng, # Stock exchange symbol on which stock is traded (ex: "NASD")
'p': "3M" # Period (Ex: "1Y" = 1 year)
}
df = get_price_data(param)
return df.drop('Volume', axis=1)
app = StockExample()
app.launch()
```
Notice that we didn't have to add a new method for our plot output. `getData` is pulling double duty here, serving the data for both our table and our plot. If you want to alter the data or the plot object, you can do so by overriding the `getPlot` method. Under the hood, if you don't specify a `getPlot` method for your plot output, server.App's built-in `getPlot` method will look for a `getData` method and simply return the result of calling `plot()` on its dataframe.
```
from spyre import server
from googlefinance.client import get_price_data
server.include_df_index = True
class StockExample(server.App):
title = "Historical Stock Prices"
inputs = [{
"type": 'dropdown',
"label": 'Company',
"options": [
{"label": "Google", "value": "GOOG"},
{"label": "Amazon", "value": "AMZN"},
{"label": "Apple", "value": "AAPL"}
],
"key": 'ticker',
}]
outputs = [{
"type": "plot",
"id": "plot",
"control_id": "update_data"
}, {
"type": "table",
"id": "table_id",
"control_id": "update_data"
}]
controls = [{
"type": "button",
"label": "get stock data",
"id": "update_data"
}]
def getData(self, params):
ticker = params['ticker']
xchng = "NASD"
param = {
'q': ticker, # Stock symbol (ex: "AAPL")
'i': "86400", # Interval size in seconds ("86400" = 1 day intervals)
'x': xchng, # Stock exchange symbol on which stock is traded (ex: "NASD")
'p': "3M" # Period (Ex: "1Y" = 1 year)
}
df = get_price_data(param)
return df.drop('Volume', axis=1)
def getPlot(self, params):
df = self.getData(params)
plt_obj = df.plot()
plt_obj.set_ylabel("Price")
plt_obj.set_xlabel("Date")
plt_obj.set_title(params['ticker'])
return plt_obj
app = StockExample()
app.launch()
```
Finally, we'll put each of the outputs in separate tabs and add an `action_id` to the dropdown input that references the "update_data" control. Now a change to the input state triggers the button to be "clicked". This makes an actual button superfluous, so we'll change the control type to "hidden".
```
from spyre import server
from googlefinance.client import get_price_data
server.include_df_index = True
class StockExample(server.App):
title = "Historical Stock Prices"
inputs = [{
"type": 'dropdown',
"label": 'Company',
"options": [
{"label": "Google", "value": "GOOG"},
{"label": "Amazon", "value": "AMZN"},
{"label": "Apple", "value": "AAPL"}
],
"key": 'ticker',
"action_id": "update_data"
}]
tabs = ["Plot", "Table"]
outputs = [{
"type": "plot",
"id": "plot",
"control_id": "update_data",
"tab": "Plot"
}, {
"type": "table",
"id": "table_id",
"control_id": "update_data",
"tab": "Table"
}]
controls = [{
"type": "hidden",
"label": "get stock data",
"id": "update_data"
}]
def getData(self, params):
ticker = params['ticker']
xchng = "NASD"
param = {
'q': ticker, # Stock symbol (ex: "AAPL")
'i': "86400", # Interval size in seconds ("86400" = 1 day intervals)
'x': xchng, # Stock exchange symbol on which stock is traded (ex: "NASD")
'p': "3M" # Period (Ex: "1Y" = 1 year)
}
df = get_price_data(param)
return df.drop('Volume', axis=1)
def getPlot(self, params):
df = self.getData(params)
plt_obj = df.plot()
plt_obj.set_ylabel("Price")
plt_obj.set_xlabel("Date")
plt_obj.set_title(params['ticker'])
return plt_obj
app = StockExample()
app.launch()
```
<img src='two_outputs.png'>
## A few more things you can try
- there's a "download" output type that uses either the getData method or a getDownload method
- tables can be sortable. Just add a "sortable" key to the table output dictionary and set its value to `True`
- there are a couple of great Python libraries that produce JavaScript plots (Bokeh and Vincent). You can throw them into a getHTML method to add JavaScript plots to your spyre app (hoping to add a "bokeh" output type soon to make this integration a little easier).
- you can link input values
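As a sketch of how the first two items might fit together (the `sortable` key follows the convention above; `getDownload`'s exact signature may differ between spyre versions, and the dataframe here is made up). The pieces are shown standalone so the sketch runs without a web server; in a real app they would live on a `server.App` subclass:

```python
import pandas as pd

# In a real app these would be attributes/methods of a server.App subclass;
# they are shown standalone here so the sketch runs without a web server.
outputs = [{"type": "table", "id": "table_id", "sortable": True},
           {"type": "download", "id": "download_id"}]

def getData(params):
    # toy dataframe standing in for real app data
    return pd.DataFrame({"x": [1, 2, 3], "y": [9, 4, 1]})

def getDownload(params):
    # the download output falls back to getData unless this is defined;
    # override it when the downloadable file should differ from the table
    return getData(params).describe()

print(getDownload({}))
```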
## Deploying
- Heroku ([blog post on setting up](http://adamhajari.github.io/2015/04/21/deploying-a-spyre-app-on-heroku.html), free!)
- [pythonanywhere](https://www.pythonanywhere.com/) (free!)
- Digital Ocean (\$5/month)
- AWS (~\$10/month maybe?)
## More Examples On GitHub
## A couple of tricks
- you can either name your output methods using the getType convention *or* you can name the method after the output id. This is useful if you've got multiple outputs of the same type.
- if multiple outputs use the same data and it takes a long time to generate that data, there's a trick for caching data so you only have to load it once. See the stocks_example app in the examples directory of the git repo to see how (*Warning*: it's kind of hacky)
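As an illustration of the first trick, the id-based lookup can be mimicked with `getattr` (this is a toy stand-in, not spyre's actual internals; `FakeApp` and `render_output` are made-up names):

```python
class FakeApp:
    # two outputs of the same type, distinguished by id
    outputs = [{"type": "html", "id": "greeting_html"},
               {"type": "html", "id": "farewell_html"}]

    def greeting_html(self, params):   # method name matches the output id
        return "hello, %s" % params["name"]

    def farewell_html(self, params):
        return "goodbye, %s" % params["name"]

def render_output(app, output_id, params):
    # resolve the handler by output id, the way the trick above relies on
    handler = getattr(app, output_id)
    return handler(params)

print(render_output(FakeApp(), "greeting_html", {"name": "world"}))
```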
# TIV.lib Example Notebook
### Gilberto Bernardes, António Ramires
In this notebook, we present example code for the TIV.lib library, a Python library for content-based tonal description of musical audio signals that implements the Tonal Interval Vector space. Its main novelty lies in the DFT-based, perceptually inspired Tonal Interval Vector space, from which multiple instantaneous and global representations, descriptors, and metrics can be computed, e.g., harmonic changes, dissonance, diatonicity, and musical key.
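As a miniature of the core idea before we set anything up: the TIV is, at heart, a weighted DFT of a normalized chroma vector. The sketch below is illustrative only; the unit weights are a placeholder assumption, not TIV.lib's actual coefficients.

```python
import numpy as np

# Illustrative weights only -- TIV.lib uses perceptually derived coefficients.
ILLUSTRATIVE_WEIGHTS = np.ones(6)

def chroma_to_tiv(chroma, weights=ILLUSTRATIVE_WEIGHTS):
    """Toy DFT-based tonal interval vector from a 12-bin chroma vector."""
    chroma = np.asarray(chroma, dtype=float)
    fft = np.fft.fft(chroma)
    # bin 0 is the total energy; bins 1..6 carry the interval content
    return weights * fft[1:7] / fft[0]

# binary chroma of a C major triad (C, E, G)
c_major = np.zeros(12)
c_major[[0, 4, 7]] = 1.0
tiv = chroma_to_tiv(c_major)
print(np.abs(tiv))  # six complex coefficients -> six magnitudes
```

Note that circularly shifting the chroma (a transposition) only rotates the phases of the coefficients, leaving the magnitudes unchanged, which is the property exploited for pitch shifting later in this notebook.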
### Setup
Run the following cell to install TIV.lib and Essentia. Essentia is used to extract the Harmonic Pitch Class Profiles from an audio file.
```
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
!git clone https://github.com/aframires/TIVlib.git
%cd TIVlib
!pip install essentia
%cd ../
```
Import TIV.lib and numpy, and then create the function to extract the HPCP from an audio file.
```
from TIVlib import TIV as tiv
import numpy as np
from essentia.standard import MonoLoader, Windowing, Spectrum, SpectralPeaks, FrameGenerator, HPCP
def file_to_hpcp(filename):
audio = MonoLoader(filename=filename)()
windowing = Windowing(type='blackmanharris62')
spectrum = Spectrum()
spectral_peaks = SpectralPeaks(orderBy='magnitude',
magnitudeThreshold=0.001,
maxPeaks=5,
minFrequency=20,
maxFrequency=8000)
hpcp = HPCP(maxFrequency=8000,normalized='none')
spec_group = []
hpcp_group = []
for frame in FrameGenerator(audio,frameSize=1024,hopSize=512):
windowed = windowing(frame)
fft = spectrum(windowed)
frequencies, magnitudes = spectral_peaks(fft)
final_hpcp = hpcp(frequencies, magnitudes)
spec_group.append(fft)
hpcp_group.append(final_hpcp)
mean_hpcp = np.mean(np.array(hpcp_group).T, axis = 1)
#Rotate the HPCP so that it starts in C
mean_hpcp = np.roll(mean_hpcp,-3)
return mean_hpcp
```
# Examples of the feature extraction of TIV.lib
Here we show example code on how to load audio files as HPCP. From this HPCP, we create a TIV object which then enables all the feature extraction.
```
#Reference: CMaj chord
c_maj = "./audio_files/looperman-l-1998259-0109381-adamsouth22-all-i-need-jazz-piano.wav"
#Consonant chords: GMaj, Amin
g_maj = "./audio_files/343416__sss-samples__pinao-melody-g-major.wav"
a_min = "./audio_files/254729__zuluonedrop__01-piano-al1.wav"
#Dissonant chords: C#Maj, D#min
c_sharp_maj = "./audio_files/looperman-l-0159051-0073982-minor2go-guitars-unlimited-its-you.wav"
d_sharp_min = "./audio_files/looperman-l-2921269-0190991-wavy-mallets-13.wav"
#Calculate the HPCP for each of the chords
c_maj_hpcp = file_to_hpcp(c_maj)
g_maj_hpcp = file_to_hpcp(g_maj)
a_min_hpcp = file_to_hpcp(a_min)
c_sharp_maj_hpcp = file_to_hpcp(c_sharp_maj)
d_sharp_min_hpcp = file_to_hpcp(d_sharp_min)
#Calculate the TIV for each HPCP
c_maj_tiv = tiv.from_pcp(c_maj_hpcp)
g_maj_tiv = tiv.from_pcp(g_maj_hpcp)
a_min_tiv = tiv.from_pcp(a_min_hpcp)
c_sharp_maj_tiv = tiv.from_pcp(c_sharp_maj_hpcp)
d_sharp_min_tiv = tiv.from_pcp(d_sharp_min_hpcp)
```
In this section we show how to extract the features related to the TIV weights.
```
#Examples on the magnitude and phase features
print("CMaj TIV.mag")
print(c_maj_tiv.mags())
print("TIV weights")
print(c_maj_tiv.weights)
print("CMaj TIV.phases")
print(c_maj_tiv.phases())
print("CMaj diatonicity")
print(c_maj_tiv.diatonicity())
print("CMaj wholetoneness")
print(c_maj_tiv.wholetoneness())
print("CMaj chromaticity")
print(c_maj_tiv.chromaticity())
```
Here we show how the key can be estimated from a TIV using the library. Two modes are provided, based on the Shaath and Temperley key profiles.
```
#Example on extracting keys
#Values are not the same as before
print("CMaj key from the Temperley and Shaath profiles")
print(c_maj_tiv.key(mode='temperley'))
print(c_maj_tiv.key(mode='shaath'))
print("GMaj key from the Temperley and Shaath profiles")
print(g_maj_tiv.key(mode='temperley'))
print(g_maj_tiv.key(mode='shaath'))
print("Amin key from the Temperley and Shaath profiles")
print(a_min_tiv.key(mode='temperley'))
print(a_min_tiv.key(mode='shaath'))
print("C#Maj key from the Temperley and Shaath profiles")
print(c_sharp_maj_tiv.key(mode='temperley'))
print(c_sharp_maj_tiv.key(mode='shaath'))
print("D#min key from the Temperley and Shaath profiles")
print(d_sharp_min_tiv.key(mode='temperley'))
print(d_sharp_min_tiv.key(mode='shaath'))
```
In this section, we show that the algorithm is able to infer consonance and dissonance from the audio. Chord combinations that are more consonant (CMaj + GMaj) have a lower dissonance value than dissonant ones (CMaj + C#Maj).
```
#Example of combining sounds and evaluating their dissonance
combined_GM = tiv.combine(c_maj_tiv, g_maj_tiv)
combined_Am = tiv.combine(c_maj_tiv, a_min_tiv)
combined_CSM = tiv.combine(c_maj_tiv, c_sharp_maj_tiv)
combined_DSm = tiv.combine(c_maj_tiv, d_sharp_min_tiv)
print("Dissonance CMaj + GMaj")
print(tiv.dissonance(combined_GM))
print("Dissonance CMaj + Amin")
print(tiv.dissonance(combined_Am))
print("Dissonance CMaj + C#Maj")
print(tiv.dissonance(combined_CSM))
print("Dissonance CMaj + D#min")
print(tiv.dissonance(combined_DSm))
```
Finally, we show an example of the distance metrics made available through TIV.lib. Again, we can see that sounds with similar harmonic content are closer than ones with very different content.
```
#Example on calculating the distances between sounds
print("Euclidean and cosine distance between CMaj and GMaj")
print(tiv.euclidean(c_maj_tiv,g_maj_tiv))
print(tiv.cosine(c_maj_tiv,g_maj_tiv))
print("Euclidean and cosine distance between CMaj and AMin")
print(tiv.euclidean(c_maj_tiv,a_min_tiv))
print(tiv.cosine(c_maj_tiv,a_min_tiv))
print("Euclidean and cosine distance between CMaj and C#Maj")
print(tiv.euclidean(c_maj_tiv,c_sharp_maj_tiv))
print(tiv.cosine(c_maj_tiv,c_sharp_maj_tiv))
print("Euclidean and cosine distance between CMaj and D#min")
print(tiv.euclidean(c_maj_tiv,d_sharp_min_tiv))
print(tiv.cosine(c_maj_tiv,d_sharp_min_tiv))
```
TIV objects can be plotted to better understand the intervals present.
```
c_maj_tiv.plot_tiv(title="C_MAJ")
```
TIV pitch shifting consists simply of rotating the phases of the vectors.
```
c_maj_tiv_pshift_1 = c_maj_tiv.transpose(1)
c_maj_tiv_pshift_1.plot_tiv(title="C_MAJ_TRANSPOSED 1 SEMITONE")
```
Get the maximum compatibility between two TIVs
```
pitch_shift, compt = c_maj_tiv.get_max_compatibility(c_maj_tiv_pshift_1)
print("Maximum compatibility %s, for pitch shift: %s" % (compt, pitch_shift))
```
For a full description of the features available in the TIV.lib, please refer to the TIV website: https://sites.google.com/site/tonalintervalspace/home
# CS 224D Assignment #2
# Part [0]: Warmup: Boolean Logic
To appreciate the power of neural networks to learn complex patterns, it can help to revisit a classic example. It is well-known that a single linear classifier cannot represent the XOR function $x \oplus y$, depicted below*: there is no way to draw a single line that can separate the red and magenta (square) points from the blue and cyan (circle) points.
*Gaussian noise is added to make the point clouds more illustrative; however, you can perform your analysis as if all points were truly boolean variables $(x,y) \in \{0,1\}^2$.
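As a quick numerical check of the claim (for the noiseless boolean points), a brute-force search over a grid of linear classifiers finds none that separates XOR:

```python
import itertools
import numpy as np

# the four boolean XOR points and their labels
pts = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
labels = np.array([0, 1, 1, 0])

def separates(w, b):
    # a linear classifier predicts 1 wherever w.x + b > 0
    return np.array_equal((pts @ w + b > 0).astype(int), labels)

# coarse grid over both weights and the bias; no setting separates XOR
found = any(separates(np.array([w1, w2]), b)
            for w1, w2, b in itertools.product(np.linspace(-2, 2, 21), repeat=3))
print(found)  # False
```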
```
from numpy import *
import numpy as np  # some cells below reference numpy via the np alias
from matplotlib.pyplot import *
import seaborn as sns
sns.set(context='paper', style='whitegrid')
%matplotlib inline
matplotlib.rcParams['savefig.dpi'] = 100
%load_ext autoreload
%autoreload 2
colors = list('rbcm')
markers = list('soos')
def show_pts(data):
for i in range(4):
idx = (arange(npts) % 4 == i)
plot(data[0,idx], data[1,idx], linestyle = 'None',
marker=markers[i],
color=colors[i], alpha=0.5)
gca().set_aspect('equal')
def show_pts_1d(data):
for i in range(4):
idx = (arange(npts) % 4 == i)
plot(data[idx], marker=markers[i], linestyle = 'None',
color=colors[i], alpha=0.5)
gca().set_aspect(npts/4.0)
#### Copy in your implementation from Assignment #1 ####
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
#### or if the starter code is posted, uncomment the line below ####
# from nn.math import sigmoid
npts = 4 * 40; random.seed(10)
x = random.randn(npts)*0.1 + array([i & 1 for i in range(npts)])
y = random.randn(npts)*0.1 + array([(i & 2) >> 1 for i in range(npts)])
data = vstack([x,y])
figure(figsize=(8,8)); show_pts(data); ylim(-0.5, 1.5); xlim(-0.5, 1.5)
xlabel("x"); ylabel("y"); title("Input Data")
```
A two-layer neural network, however, can separate this pattern easily. Below, we give you a simple dataset in two dimensions that represents a noisy version of the XOR pattern. Your task is to hand-pick weights for a *very* simple two-layer network, such that it can separate the red/magenta points from the blue/cyan points.
The network uses the following equations, for $W \in \mathbb{R}^{2\times2}$ and $U \in \mathbb{R}^{2}$:
$$ h = \sigma(z\cdot(Wx + b_1)) $$
$$ p = \sigma(z\cdot(Uh + b_2)) $$
where $z$ controls how steep the sigmoid function is; higher values make it approach a step function.
```
x = linspace(-1, 1, 100); figure(figsize=(8,8))
plot(x, sigmoid(x), 'k', label="$\sigma(x)$");
plot(x, sigmoid(5*x), 'b', label="$\sigma(5x)$");
plot(x, sigmoid(15*x), 'g', label="$\sigma(15x)$");
legend(loc='upper left'); xlabel('x');
```
In the area below, enter values for $W$, $b_1$, $U$, and $b_2$ that will properly place blue and cyan above the dashed line, and red and magenta below.
*Hint:* think about how you can make the data linearly separable after going through the hidden layer. Then find a direction $U$ along which you can separate it!
*Hint:* It may help to think about each "neuron" (i.e. row of $W$ or $U$) separately.
```
W = zeros((2,2))
b1 = zeros((2,1))
U = zeros(2)
b2 = 0
#### YOUR CODE HERE ####
# XOR Table
# A B | XOR
# ----------
# 0 0 | 0
# 1 0 | 1
# 0 1 | 1
# 1 1 | 0
# W.dot(data)
W = np.array( [ [1, -1],
[-1, 1] ] )
U = np.array( [ 1, 1] )
b1 = np.array([ [-0.5],
[-0.5] ])
b2 = -0.5
z = 10 # control gate steepness
figure(figsize=(5,5))
subplot(1,2,1)
show_pts(data)
ylim(-0.5, 1.5)
xlim(-0.5, 1.5)
xlabel("$x_1$"); ylabel("$x_2$")
title("Data (inputs)")
tight_layout()
#verification plot
#### END YOUR CODE ####
# Feed-forward computation
h = sigmoid(z*(W.dot(data) + b1))
p = sigmoid(z*(U.dot(h) + b2))
# Plot hidden layer
figure(figsize=(5,5))
subplot(1,2,1); show_pts(h)
title("Hidden Layer"); xlabel("$h_1$"); ylabel("$h_2$")
ylim(-0.1, 1.1); xlim(-0.1, 1.1)
# Plot predictions
subplot(1,2,2); show_pts_1d(p)
title("Output"); ylabel("Prediction"); xticks([])
axhline(0.5, linestyle='--', color='k')
tight_layout()
# npts = 4 * 40; random.seed(10)
# x = random.randn(npts)*0.1 + array([i & 1 for i in range(npts)])
# # i & 1 alternates 0, 1
# y = random.randn(npts)*0.1 + array([(i & 2) >> 1 for i in range(npts)])
# # (i & 2) alternates 0, 0, 2, 2, 0, 0, 2, 2 (requires 'bitwise shifting >>' to make it into sequence of 0, 1)
# # Want to create an AND gate, an OR gate and a NOT operator:
# # AND Table
# # A B | AND
# # ----------
# # 0 0 | 0
# # 1 0 | 0
# # 0 1 | 0
# # 1 1 | 1
# # OR Table
# # A B | OR
# # ----------
# # 0 0 | 0
# # 1 0 | 1
# # 0 1 | 1
# # 1 1 | 1
# # NOT Table
# # A | NOT
# # ----------
# # 0 | 1
# # 1 | 0
# Start with 4 sets of numbers:
A_0 = random.randn(npts) * 0.1
B_0 = random.randn(npts) * 0.1
A_1 = random.randn(npts) * 0.1 + 1
B_1 = random.randn(npts) * 0.1 + 1
W = zeros((2,2))
b1 = zeros((2,1))
U = zeros(2)
b2 = 0
W = np.array( [ [1, 0],
[0, 1] ] )
b1 = np.array([ [-0.5],
[-0.5] ])
U = np.array( [ 1, 1] )
b2 = -0.5
z = 20 # control gate steepness
figure(figsize=(5,5))
subplot(2,2,1);
data = np.vstack([A_0, B_0])
h = sigmoid(z*(W.dot(data) + b1))
show_pts(h)
ylim(-0.1, 1.1); xlim(-0.1, 1.1)
#
subplot(2,2,2);
data = np.vstack([A_1, B_0])
h = sigmoid(z*(W.dot(data) + b1))
show_pts(h)
ylim(-0.1, 1.1); xlim(-0.1, 1.1)
subplot(2,2,3);
data = np.vstack([A_0, B_1])
h = sigmoid(z*(W.dot(data) + b1))
show_pts(h)
ylim(-0.1, 1.1); xlim(-0.1, 1.1)
subplot(2,2,4);
data = np.vstack([A_1, B_1])
h = sigmoid(z*(W.dot(data) + b1))
show_pts(h)
ylim(-0.1, 1.1); xlim(-0.1, 1.1)
figure(figsize=(5,5))
subplot(2,2,1);
data = np.vstack([A_0, B_0])
h = sigmoid(z*(W.dot(data) + b1))
p = sigmoid(z*(U.dot(h) + b2))
hist(p, bins=20, range=[-1,1])
subplot(2,2,2);
data = np.vstack([A_1, B_0])
h = sigmoid(z*(W.dot(data) + b1))
p = sigmoid(z*(U.dot(h) + b2))
hist(p, bins=20, range=[-1,1])
subplot(2,2,3);
data = np.vstack([A_0, B_1])
h = sigmoid(z*(W.dot(data) + b1))
p = sigmoid(z*(U.dot(h) + b2))
hist(p, bins=20, range=[-1,1])
subplot(2,2,4);
data = np.vstack([A_1, B_1])
h = sigmoid(z*(W.dot(data) + b1))
p = sigmoid(z*(U.dot(h) + b2))
hist(p, bins=20, range=[-1,1]);
#plots showing the vector flow
x_coor = np.linspace(-0.1,1.1,30)
y_coor = np.linspace(-0.1,1.1,30)
xy_array = np.dstack(np.meshgrid(x_coor, y_coor)).reshape(-1, 2)
xy_array = xy_array.T
#test = [(x,y) for x in x_coor for y in y_coor]
figure(figsize=(5,5))
scatter(xy_array[0], xy_array[1])
ylim(-0.2, 1.2); xlim(-0.2, 1.2)
W = zeros((2,2))
b1 = zeros((2,1))
U = zeros(2)
b2 = 0
#### YOUR CODE HERE ####
# XOR Table
# A B | XOR
# ----------
# 0 0 | 0
# 1 0 | 1
# 0 1 | 1
# 1 1 | 0
# W.dot(data)
W = np.array( [ [1, 0],
[0, 1] ] )
U = np.array( [ 0, 0] )
b1 = np.array([ [0],
[0] ])
b2 = 0
z = 20 # control gate steepness
#### END YOUR CODE ####
# Feed-forward computation
h = sigmoid(z*(W.dot(xy_array) + b1))
p = sigmoid(z*(U.dot(h) + b2))
# Plot hidden layer
figure(figsize=(5,5))
scatter(h[0], h[1])#show_pts(h)
ylim(-0.2, 1.2); xlim(-0.2, 1.2)
figure(figsize=(5,5))
diff = h - xy_array
X = xy_array[0]
Y = xy_array[1]
# diff = diff / np.max(np.abs(diff),axis=0)
UN = diff[0]
VN = diff[1]
quiver(X,Y, UN, VN, color='red', headlength=5, angles='xy')
show()
figure(figsize=(5,5))
Q = quiver(X, Y, UN, VN, units='inches', angles='xy')
show()
#
x = np.linspace(-0.1,1,20)
y = x ** 2 - 0.5 * x
points = np.vstack([x,y])
figure(figsize=(5,5))
scatter(points[0], points[1])
h_line = sigmoid(z*(W.dot(points) + b1))
diff_line = h_line - points
X = points[0]
Y = points[1]
UN = diff_line[0]
VN = diff_line[1]
quiver(X,Y, UN, VN, color='red', angles='xy', width=0.004)
ylim(-0.2,1.1)
show()
x = np.linspace(-0.1,1,20)
y = np.linspace(0,0,20)
points = np.vstack([x,y])
figure(figsize=(5,5))
h_line = sigmoid(z*(W.dot(points) + b1))
diff_line = h_line - points
X = points[0]
Y = points[1]
UN = diff_line[0]
VN = diff_line[1]
quiver(X,Y, UN, VN, color='red', units='x', angles='xy', width=0.004)
scatter(points[0], points[1])
ylim(-0.01,0.01)
show()
figure(figsize=(5,5))
x = np.linspace(0.5,0.5,20)
y = np.linspace(-0.1,1,20)
points = np.vstack([x,y])
h_line = sigmoid(z*(W.dot(points) + b1))
diff_line = h_line - points
X = points[0]
Y = points[1]
UN = diff_line[0]
VN = diff_line[1]
quiver(X,Y, UN, VN, color='red', units='x', angles='xy', width=0.004)
scatter(points[0], points[1])
xlim(-0.2,1.1)
ylim(-0.2,1.1)
show()
#notice that varying z slowly will display the warping
x = np.hstack([np.linspace(x/19-0.1,x/19-0.1,20) for x in range(20)])
y = np.hstack([np.linspace(-0.1,1,20) for x in range(20)])
#careful, U,W, b1, b2 can change
W = np.array( [ [1, 0],
[0, 1] ] )
b1 = np.array([ [-0.5],
[-0.5] ])
U = np.array( [ 1, 1] )
b2 = -0.5
z = 30
points = np.vstack([x,y])
figure(figsize=(5,5))
scatter(points[0,0::20],points[1,0::20], color='r');
scatter(points[0,1::20],points[1,1::20], color='b');
scatter(points[0,2::20],points[1,2::20], color='g');
scatter(points[0,3::20],points[1,3::20], color='c');
plot(points[0,3::20],points[1,3::20], color='c');
xlim(-0.2,1.1)
ylim(-0.2,1.1)
## ALL values (projected flow)
h_all = sigmoid(z * (W.dot(points) + b1))
p_all = sigmoid(z*(U.dot(h_all) + b2))
origin_all = np.linspace(0,0,len(p_all))
p_0_all = np.vstack([p_all, origin_all])
figure(figsize=(5,5))
case = 0
diff_line = p_0_all - points
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = np.zeros_like(UN)#diff_line[1,case::20]
quiver(X,Y, UN, VN, color='r', angles='xy', width=0.004)
scatter(p_0_all[0,case::20], p_0_all[1,case::20])
case = 4
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = np.zeros_like(UN)#VN = diff_line[1,case::20]
quiver(X,Y, UN, VN, color='b', angles='xy', width=0.004)
scatter(p_0_all[0,case::20], p_0_all[1,case::20])
case = 9
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = np.zeros_like(UN)#VN = diff_line[1,case::20]
quiver(X,Y, UN, VN, color='g', angles='xy', width=0.004)
scatter(p_0_all[0,case::20], p_0_all[1,case::20])
case = 14
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = np.zeros_like(UN)#VN = diff_line[1,case::20]
quiver(X,Y, UN, VN, color='c', angles='xy', width=0.004)
scatter(p_0_all[0,case::20], p_0_all[1,case::20])
case = 19
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = np.zeros_like(UN)#VN = diff_line[1,case::20]
quiver(X,Y, UN, VN, color='y', angles='xy', width=0.004)
scatter(p_0_all[0,case::20], p_0_all[1,case::20])
xlim(-0.2,1.1)
ylim(-0.2,1.1);
## ALL values (flow)
h_all = sigmoid(z * (W.dot(points) + b1))
p_all = sigmoid(z*(U.dot(h_all) + b2))
origin_all = np.linspace(0,0,len(p_all))
p_0_all = np.vstack([p_all, origin_all])
figure(figsize=(5,5))
case = 0
diff_line = p_0_all - points
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = diff_line[1,case::20]
quiver(X,Y, UN, VN, color='r', angles='xy', width=0.004)
scatter(p_0_all[0,case::20], p_0_all[1,case::20])
case = 4
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = diff_line[1,case::20]
quiver(X,Y, UN, VN, color='b', angles='xy', width=0.004)
scatter(p_0_all[0,case::20], p_0_all[1,case::20])
case = 9
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = diff_line[1,case::20]
quiver(X,Y, UN, VN, color='g', angles='xy', width=0.004)
scatter(p_0_all[0,case::20], p_0_all[1,case::20])
case = 14
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = diff_line[1,case::20]
quiver(X,Y, UN, VN, color='c', angles='xy', width=0.004)
scatter(p_0_all[0,case::20], p_0_all[1,case::20])
case = 19
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = diff_line[1,case::20]
quiver(X,Y, UN, VN, color='y', angles='xy', width=0.004)
scatter(p_0_all[0,case::20], p_0_all[1,case::20])
xlim(-0.2,1.1)
ylim(-0.2,1.1);
#Single case
XY = points[:,::20]
h = sigmoid(z*(W.dot(XY) + b1))
p = sigmoid(z*(U.dot(h) + b2))
origin = np.linspace(0,0.,20)
p_0 = np.vstack([p,origin])
figure(figsize=(5,5))
hist(p, bins=51, range=[-1.01,1.01])
figure(figsize=(5,5))
diff_line = p_0 - XY
X = XY[0]
Y = XY[1]
UN = diff_line[0]
VN = diff_line[1]
quiver(X, Y, UN, VN, color='r', angles='xy', width=0.004)
scatter(p_0[0,:], p_0[1,:])
xlim(-0.2,1.1)
ylim(-0.2,1.1)
#notice that varying z slowly will display the warping
x = np.hstack([np.linspace(x/19-0.1,x/19-0.1,20) for x in range(20)])
y = np.hstack([np.linspace(-0.1,1,20) for x in range(20)])
#careful, U,W, b1, b2 can change
W = np.array( [ [1, -1],
[-1, 1] ] )
U = np.array( [ 1, 1] )
b1 = np.array([ [-0.5],
[-0.5] ])
b2 = -0.5
z = 15 # control gate steepness
points = np.vstack([x,y])
## ALL values (projected flow)
h_all = sigmoid(z * (W.dot(points) + b1))
p_all = sigmoid(z * (U.dot(h_all) + b2))
origin_all = np.linspace(0,0,len(p_all))
p_0_all = np.vstack([p_all, origin_all])
figure(figsize=(5,5))
case = 19
diff_line = h_all - points
X = points[0,case::20]
Y = points[1,case::20]
UN = diff_line[0,case::20]
VN = diff_line[1,case::20]
quiver(X,Y, UN, VN, color='b', angles='xy', width=0.004)
plot(points[0,case::20], points[1,case::20], color='r', marker='.')
plot(h_all[0,case::20], h_all[1,case::20], color='r', marker='.')
xlim(-0.2,1.1)
ylim(-0.2,1.1);
figure(figsize=(5,5))
sns.kdeplot(points[0], points[1], cmap="Blues", shade=True, shade_lowest=False, bw=0.05)
#sns.kdeplot(points[0], points[1], cmap="Blues", shade=True, shade_lowest=False)
xlim(-0.2,1.1)
ylim(-0.2,1.1);
figure(figsize=(5,5))
sns.kdeplot(h_all[0,:], h_all[1,:], cmap="Blues", shade=True, shade_lowest=False, bw=0.05)
xlim(-0.2,1.1)
ylim(-0.2,1.1);
figure(figsize=(5,5))
#sns.distplot(p_all[::20], hist=False, rug=True, color="r");
sns.kdeplot(p_all[::20], color="r", bw=0.05);
```
# [Multi-class classification with focal loss for imbalanced datasets](https://www.dlology.com/blog/multi-class-classification-with-focal-loss-for-imbalanced-datasets/)
## focal loss model
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from tensorflow import keras
np.random.seed(42)
# create a data frame containing your data; each column can be accessed by df['column name']
dataset = pd.read_csv('../input/PS_20174392719_1491204439457_log.csv')
del dataset['nameDest']
del dataset['nameOrig']
del dataset['type']
dataset.head()
dataset['isFraud'].value_counts()
def feature_normalize(dataset):
mu = np.mean(dataset, axis=0)
sigma = np.std(dataset, axis=0)
return (dataset - mu) / sigma
#Splitting the Training/Test Data
from sklearn.model_selection import train_test_split
X, y = dataset.iloc[:,:-2], dataset.iloc[:, -2]
y = keras.utils.to_categorical(y, num_classes=2)
X = feature_normalize(X.to_numpy())  # DataFrame.as_matrix() was removed in pandas 1.0
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1)
from tensorflow.keras.models import Sequential
import tensorflow as tf
model = Sequential()
from tensorflow.keras.layers import Dense
input_dim = X_train.shape[1]
nb_classes = y_train.shape[1]
model.add(Dense(10, input_dim=input_dim, activation='relu', name='input'))
model.add(Dense(20, activation='relu', name='fc1'))
model.add(Dense(10, activation='relu', name='fc2'))
model.add(Dense(nb_classes, activation='softmax', name='output'))
from tensorflow import keras
class FocalLoss(keras.losses.Loss):
def __init__(self, gamma=2., alpha=4.,
reduction=keras.losses.Reduction.AUTO, name='focal_loss'):
"""Focal loss for multi-classification
FL(p_t)=-alpha(1-p_t)^{gamma}ln(p_t)
Notice: y_pred is probability after softmax
gradient is d(Fl)/d(p_t) not d(Fl)/d(x) as described in paper
d(Fl)/d(p_t) * [p_t(1-p_t)] = d(Fl)/d(x)
Focal Loss for Dense Object Detection
https://arxiv.org/abs/1708.02002
Keyword Arguments:
gamma {float} -- (default: {2.0})
alpha {float} -- (default: {4.0})
"""
super(FocalLoss, self).__init__(reduction=reduction,
name=name)
self.gamma = float(gamma)
self.alpha = float(alpha)
def call(self, y_true, y_pred):
"""
Arguments:
y_true {tensor} -- ground truth labels, shape of [batch_size, num_cls]
y_pred {tensor} -- model's output, shape of [batch_size, num_cls]
Returns:
[tensor] -- loss.
"""
epsilon = 1.e-9
y_true = tf.convert_to_tensor(y_true, tf.float32)
y_pred = tf.convert_to_tensor(y_pred, tf.float32)
model_out = tf.add(y_pred, epsilon)
ce = tf.multiply(y_true, -tf.math.log(model_out))
weight = tf.multiply(y_true, tf.pow(
tf.subtract(1., model_out), self.gamma))
fl = tf.multiply(self.alpha, tf.multiply(weight, ce))
reduced_fl = tf.reduce_max(fl, axis=1)
return tf.reduce_mean(reduced_fl)
model.compile(loss=FocalLoss(alpha=1),
optimizer='nadam',
metrics=['accuracy'])
model.summary()
# class_weight = {0 : 1.,
# 1: 20.}
y_train.shape
model.fit(X_train, y_train, epochs=3, batch_size=1000)
score = model.evaluate(X_test, y_test, batch_size=1000)
score
%matplotlib inline
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(font_scale=2)
predictions = model.predict(X_test, batch_size=1000)
LABELS = ['Normal','Fraud']
max_test = np.argmax(y_test, axis=1)
max_predictions = np.argmax(predictions, axis=1)
confusion_matrix = metrics.confusion_matrix(max_test, max_predictions)
plt.figure(figsize=(5, 5))
sns.heatmap(confusion_matrix, xticklabels=LABELS, yticklabels=LABELS, annot=True, fmt="d", annot_kws={"size": 20});
plt.title("Confusion matrix", fontsize=20)
plt.ylabel('True label', fontsize=20)
plt.xlabel('Predicted label', fontsize=20)
plt.show()
```
### Total misclassified labels
```
error_count = confusion_matrix.sum() - np.trace(confusion_matrix)
error_count
```
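The weighting behaviour of the focal loss can also be checked numerically. For a single true-class probability $p_t$, ${\rm FL}(p_t) = -\alpha(1-p_t)^\gamma \ln(p_t)$, so well-classified examples are down-weighted relative to plain cross-entropy. A minimal standalone NumPy sketch (independent of the Keras class above):

```python
import numpy as np

def focal_loss(p_t, gamma=2.0, alpha=1.0):
    """FL(p_t) = -alpha * (1 - p_t)^gamma * ln(p_t)."""
    return -alpha * (1.0 - p_t) ** gamma * np.log(p_t)

def cross_entropy(p_t):
    """Plain cross-entropy for the true class: -ln(p_t)."""
    return -np.log(p_t)

# With gamma=2, an easy example (p_t=0.9) keeps only (1-0.9)^2 = 1% of its
# cross-entropy loss, while a hard one (p_t=0.1) keeps (1-0.1)^2 = 81%.
easy_ratio = focal_loss(0.9) / cross_entropy(0.9)  # -> 0.01
hard_ratio = focal_loss(0.1) / cross_entropy(0.1)  # -> 0.81
```

This is why focal loss helps on heavily imbalanced data such as fraud detection: the abundant, easily classified majority class contributes far less to the total loss.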
# Tutorial-IllinoisGRMHD: The conservative to primitive algorithm
## Authors: Leo Werneck & Zach Etienne
<font color='red'>**This module is currently under development**</font>
## In this tutorial module we explain the algorithm used to get the primitive variables out of the conservative ones
### Required and recommended citations:
* **(Required)** Etienne, Z. B., Paschalidis, V., Haas R., Mösta P., and Shapiro, S. L. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes. Class. Quantum Grav. 32 (2015) 175009. ([arxiv:1501.07276](http://arxiv.org/abs/1501.07276)).
* **(Required)** Noble, S. C., Gammie, C. F., McKinney, J. C., Del Zanna, L. Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. Astrophysical Journal, 641, 626 (2006) ([astro-ph/0512420](https://arxiv.org/abs/astro-ph/0512420)).
* **(Recommended)** Del Zanna, L., Bucciantini N., Londrillo, P. An efficient shock-capturing central-type scheme for multidimensional relativistic flows - II. Magnetohydrodynamics. A&A 400 (2) 397-413 (2003). DOI: 10.1051/0004-6361:20021641 ([astro-ph/0210618](https://arxiv.org/abs/astro-ph/0210618)).
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This module is organized as follows
0. [Step 0](#src_dir): **Source directory creation**
1. [Step 1](#introduction): **Introduction**
1. [Step 2](#driver_conserv_to_prims): **`driver_conserv_to_prims.C`**
1. [Step 2.a](#adm_to_bssn__enforcing_detgammabar_equal_one): *Converting ADM quantities to BSSN quantities and enforcing $\bar\gamma=1$*
1. [Step 2.b](#equatorial_symmetry): *Applying equatorial symmetry*
1. [Step 2.c](#variable_setup): *Setting up the variables needed by `HARM`*
1. [Step 2.c.i](#variable_setup__bssn): BSSN quantities
1. [Step 2.c.ii](#variable_setup__prims): Primitives
1. [Step 2.c.iii](#variable_setup__conservs): Conservatives
1. [Step 2.c.iv](#variable_setup__lapse_and_psi): Lapse function and conformal factor
1. [Step 2.c.v](#variable_setup__phys_metric): Physical spatial metric
1. [Step 2.c.vi](#variable_setup__betadown_and_beta2): $\beta_{i}$ and $\beta^{2} \equiv \beta_{i}\beta^{i}$
1. [Step 2.c.vii](#variable_setup__adm_4metric): The ADM 4-metric, $g_{\mu\nu}$, and its inverse, $g^{\mu\nu}$
1. [Step 2.c.viii](#variable_setup__temp_conservs): Temporary storage for current values of the conservative variables
1. [Step 2.d](#conserv_to_prim__driver): *Determining the primitive variables from the conservative variables*
1. [Step 2.e](#enforce_limits_on_primitives_and_recompute_conservs): *Enforcing physical limits on primitives and recomputing the conservative variables*
1. [Step 2.f](#updating_conservs_and_prims_gfs): *Updating conservative and primitive gridfunctions*
1. [Step 2.g](#diagnostics_and_debugging_tools): *Diagnostics and debugging tools*
1. [Step 3](#harm_primitives_lowlevel): **`harm_primitives_lowlevel.C`**
1. [Step 3.a](#variables_needed_by_harm): *Setting up the variables needed by `HARM`*
1. [Step 3.a.i](#variables_needed_by_harm__detg): ${\rm detg}$
1. [Step 3.a.ii](#variables_needed_by_harm__bi_harm): $B^{i}_{\rm HARM}$
1. [Step 3.a.iii](#variables_needed_by_harm__init_rhob_pressure_vi): Initializing $\rho_{b}$, $P$, and $v^{i}$
1. [Step 3.a.iv](#variables_needed_by_harm__original_conserv): Storing the original values of the conservative variables
1. [Step 3.a.v](#variables_needed_by_harm__guessing_rhob_pressure_vi): Guessing $\rho_{b}$, $P$, and $v^{i}$
1. [Step 3.a.vi](#variables_needed_by_harm__conservs): Writing $\boldsymbol{C}_{\rm HARM}$ in terms of $\boldsymbol{C}_{\rm IGM}$
1. [Step 3.a.vii](#variables_needed_by_harm__prims): Writing $\boldsymbol{P}_{\rm HARM}$ in terms of $\boldsymbol{P}_{\rm IGM}$
1. [Step 3.b](#calling_harm_conservs_to_prims_solver): *Calling the `HARM` conservative-to-primitive solver*
1. [Step 3.c](#font_fix): *Applying the Font *et al.* fix, if the inversion fails*
1. [Step 3.d](#compute_utconi): *Computing $\tilde{u}^{i}$*
1. [Step 3.e](#limiting_velocities): *Limiting velocities*
1. [Step 3.f](#primitives): *Setting the primitives*
1. [Step 4](#font_fix_hybrid_eos): **`font_fix_hybrid_EOS.C`**
1. [Step 4.a](#font_fix_hybrid_eos__basic_quantities): Computing the basic quantities needed by the algorithm
1. [Step 4.b](#font_fix_hybrid_eos__sdots): Basic check: looking at $\tilde{S}^{2}$
1. [Step 4.c](#font_fix_hybrid_eos__initial_guesses): Initial guesses for $W$, $S_{{\rm fluid}}^{2}$, and $\rho$
1. [Step 4.d](#font_fix_hybrid_eos__main_loop): The main loop
1. [Step 4.e](#font_fix_hybrid_eos__outputs): Output $\rho_{b}$ and $u_{i}$
1. [Step 5](#harm_primitives_headers): **`harm_primitives_headers.h`**
1. [Step 6](#code_validation): **Code validation**
1. [Step 6.a](#driver_conserv_to_prims_validation): *`driver_conserv_to_prims.C`*
1. [Step 6.b](#harm_primitives_lowlevel_validation): *`harm_primitives_lowlevel.C`*
1. [Step 6.c](#font_fix_gamma_law_validation): *`font_fix_gamma_law.C`*
1. [Step 6.d](#harm_primitives_headers_validation): *`harm_primitives_headers.h`*
1. [Step 7](#latex_pdf_output): **Output this notebook to $\LaTeX$-formatted PDF file**
<a id='src_dir'></a>
# Step 0: Source directory creation \[Back to [top](#toc)\]
$$\label{src_dir}$$
We will now use the [cmdline_helper.py NRPy+ module](Tutorial-Tutorial-cmdline_helper.ipynb) to create the source directory within the `IllinoisGRMHD` NRPy+ directory, if it does not exist yet.
```
# Step 0: Creation of the IllinoisGRMHD source directory
# Step 0a: Add NRPy's directory to the path
# https://stackoverflow.com/questions/16780014/import-file-from-parent-directory
import os,sys
nrpy_dir_path = os.path.join("..","..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step 0b: Load up cmdline_helper and create the directory
import cmdline_helper as cmd
IGM_src_dir_path = os.path.join("..","src")
cmd.mkdir(IGM_src_dir_path)
# Step 0c: Create the output file path
outfile_path__driver_conserv_to_prims__C = os.path.join(IGM_src_dir_path,"driver_conserv_to_prims.C")
outfile_path__harm_primitives_lowlevel__C = os.path.join(IGM_src_dir_path,"harm_primitives_lowlevel.C")
outfile_path__font_fix_hybrid_EOS__C = os.path.join(IGM_src_dir_path,"font_fix_hybrid_EOS.C")
outfile_path__harm_primitives_headers__h = os.path.join(IGM_src_dir_path,"harm_primitives_headers.h")
```
<a id='introduction'></a>
# Step 1: Introduction \[Back to [top](#toc)\]
$$\label{introduction}$$
<a id='driver_conserv_to_prims'></a>
# Step 2: `driver_conserv_to_prims.C` \[Back to [top](#toc)\]
$$\label{driver_conserv_to_prims}$$
We start here by creating the `driver_conserv_to_prims.C` file and loading all the files it uses. Among these are the following `IllinoisGRMHD` files:
1. `harm_primitives_headers.h`: we will discuss this file [in step 4 of this tutorial module](#harm_primitives_headers).
1. `inlined_functions.C`: this file is discussed in the [inlined_functions NRPy tutorial module](Tutorial-IllinoisGRMHD__inlined_functions.ipynb)
1. `apply_tau_floor__enforce_limits_on_primitives_and_recompute_conservs.C`: this file is discussed in the [apply_tau_floor__enforce_limits_on_primitives_and_recompute_conservs NRPy tutorial module](Tutorial-IllinoisGRMHD__apply_tau_floor__enforce_limits_on_primitives_and_recompute_conservs.ipynb)
```
%%writefile $outfile_path__driver_conserv_to_prims__C
/* We evolve forward in time a set of functions called the
* "conservative variables", and any time the conserv's
* are updated, we must solve for the primitive variables
* (rho, pressure, velocities) using a Newton-Raphson
* technique, before reconstructing & evaluating the RHSs
* of the MHD equations again.
*
* This file contains the driver routine for this Newton-
* Raphson solver. Truncation errors in conservative
* variables can lead to no physical solutions in
* primitive variables. We correct for these errors here
* through a number of tricks described in the appendices
* of http://arxiv.org/pdf/1112.0568.pdf.
*
* This is a wrapper for the 2d solver of Noble et al. See
* harm_utoprim_2d.c for references and copyright notice
* for that solver. This wrapper was primarily written by
* Zachariah Etienne & Yuk Tung Liu, in 2011-2013.
*
* For optimal compatibility, this wrapper is licensed under
* the GPL v2 or any later version.
*
* Note that this code assumes a simple gamma law for the
* moment, though it would be easy to extend to a piecewise
* polytrope. */
// Standard #include's
#include <iostream>
#include <iomanip>
#include <fstream>
#include <cmath>
#include <ctime>
#include <cstdlib>
#ifdef ENABLE_STANDALONE_IGM_C2P_SOLVER
#include "standalone_conserv_to_prims_main_function.h"
#else
#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"
#include "Symmetry.h"
#include "IllinoisGRMHD_headers.h"
#include "harm_primitives_headers.h"
#include "harm_u2p_util.c"
#include "inlined_functions.C"
#include "apply_tau_floor__enforce_limits_on_primitives_and_recompute_conservs.C"
extern "C" void IllinoisGRMHD_conserv_to_prims(CCTK_ARGUMENTS) {
DECLARE_CCTK_ARGUMENTS;
DECLARE_CCTK_PARAMETERS;
// We use proper C++ here, for file I/O later.
using namespace std;
#endif
/**********************************
* Piecewise Polytropic EOS Patch *
* Setting up the EOS struct *
**********************************/
/*
* The short piece of code below takes care
* of initializing the EOS parameters.
* Please refer to the "inlined_functions.C"
* source file for the documentation on the
* function.
*/
eos_struct eos;
initialize_EOS_struct_from_input(eos);
```
<a id='adm_to_bssn__enforcing_detgammabar_equal_one'></a>
## Step 2.a: Converting ADM quantities to BSSN quantities and enforcing $\bar\gamma=1$ \[Back to [top](#toc)\]
$$\label{adm_to_bssn__enforcing_detgammabar_equal_one}$$
Here we compute the conformal metric, $\bar\gamma_{ij}$, and its inverse, $\bar\gamma^{ij}$, from the physical metric $\gamma_{ij}$. We also compute $\phi$, the conformal factor, and $\psi\equiv e^{\phi}$. Finally, we enforce the constraint $\bar\gamma = \det\left(\bar\gamma_{ij}\right) = 1$. The entire procedure is explained in detail in the [convert ADM to BSSN and enforce determinant constraint NRPy tutorial module](Tutorial-IllinoisGRMHD__convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij.ipynb).
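The conversion can be sketched in NumPy (an illustrative standalone example, not the ETK routine itself): with $\phi = \frac{1}{12}\ln\det\gamma_{ij}$, the conformal metric $\bar\gamma_{ij} = e^{-4\phi}\gamma_{ij}$ has unit determinant by construction.

```python
import numpy as np

# Hypothetical physical 3-metric gamma_{ij} (symmetric, positive definite)
gamma = np.array([[1.2, 0.1, 0.0],
                  [0.1, 1.1, 0.2],
                  [0.0, 0.2, 1.3]])

phi = np.log(np.linalg.det(gamma)) / 12.0  # conformal exponent: e^{12 phi} = det(gamma)
psi = np.exp(phi)                          # conformal factor psi = e^phi
gammabar = np.exp(-4.0 * phi) * gamma      # conformal metric (cf. gtij)
gammabar_up = np.linalg.inv(gammabar)      # inverse conformal metric (cf. gtupij)
```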
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
// These BSSN-based variables are not evolved, and so are not defined anywhere that the grid has moved.
// Here we convert ADM variables (from ADMBase) to the BSSN-based variables expected by this routine.
IllinoisGRMHD_convert_ADM_to_BSSN__enforce_detgtij_eq_1__and_compute_gtupij(cctkGH,cctk_lsh, gxx,gxy,gxz,gyy,gyz,gzz,alp,
gtxx,gtxy,gtxz,gtyy,gtyz,gtzz,
gtupxx,gtupxy,gtupxz,gtupyy,gtupyz,gtupzz,
phi_bssn,psi_bssn,lapm1);
```
<a id='equatorial_symmetry'></a>
## Step 2.b: Applying equatorial symmetry \[Back to [top](#toc)\]
$$\label{equatorial_symmetry}$$
We then use the [CartGrid3D ETK thorn](https://einsteintoolkit.org/thornguide/CactusBase/CartGrid3D/documentation.html) to apply equatorial symmetry to our problem.
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
#ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
if(CCTK_EQUALS(Symmetry,"equatorial")) {
// SET SYMMETRY GHOSTZONES ON ALL CONSERVATIVE VARIABLES!
int ierr=0;
ierr+=CartSymGN(cctkGH,"IllinoisGRMHD::grmhd_conservatives");
// FIXME: UGLY. Filling metric ghostzones is needed for, e.g., Cowling runs.
ierr+=CartSymGN(cctkGH,"lapse::lapse_vars");
ierr+=CartSymGN(cctkGH,"bssn::BSSN_vars");
ierr+=CartSymGN(cctkGH,"bssn::BSSN_AH");
ierr+=CartSymGN(cctkGH,"shift::shift_vars");
if(ierr!=0) CCTK_VError(VERR_DEF_PARAMS,"IllinoisGRMHD ERROR (grep for it, foo!) :(");
}
#endif
```
<a id='variable_setup'></a>
## Step 2.c: Setting up the variables needed by `HARM` \[Back to [top](#toc)\]
$$\label{variable_setup}$$
We will now set up all the necessary variables to start the conservative to primitive algorithm. We begin by declaring useful debugging variables.
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
//Start the timer, so we can benchmark the primitives solver during evolution.
// Slower solver -> harder to find roots -> things may be going crazy!
//FIXME: Replace this timing benchmark with something more meaningful, like the avg # of Newton-Raphson iterations per gridpoint!
/*
struct timeval start, end;
long mtime, seconds, useconds;
gettimeofday(&start, NULL);
*/
int failures=0,font_fixes=0,vel_limited_ptcount=0;
int pointcount=0;
int failures_inhoriz=0;
int pointcount_inhoriz=0;
int pressure_cap_hit=0;
CCTK_REAL error_int_numer=0,error_int_denom=0;
int imin=0,jmin=0,kmin=0;
int imax=cctk_lsh[0],jmax=cctk_lsh[1],kmax=cctk_lsh[2];
int rho_star_fix_applied=0;
long n_iter=0;
```
<a id='variable_setup__bssn'></a>
### Step 2.c.i: BSSN quantities \[Back to [top](#toc)\]
$$\label{variable_setup__bssn}$$
We start by loading the BSSN variables $\left\{\phi, \bar\gamma_{ij}, \alpha-1, \beta^{i}, \bar\gamma^{ij}\right\}$ into a new array called $\rm METRIC$.
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
#pragma omp parallel for reduction(+:failures,vel_limited_ptcount,font_fixes,pointcount,failures_inhoriz,pointcount_inhoriz,error_int_numer,error_int_denom,pressure_cap_hit,rho_star_fix_applied,n_iter) schedule(static)
for(int k=kmin;k<kmax;k++)
for(int j=jmin;j<jmax;j++)
for(int i=imin;i<imax;i++) {
int index = CCTK_GFINDEX3D(cctkGH,i,j,k);
int ww;
CCTK_REAL METRIC[NUMVARS_FOR_METRIC],dummy=0;
ww=0;
// FIXME: NECESSARY?
//psi_bssn[index] = exp(phi[index]);
METRIC[ww] = phi_bssn[index];ww++;
METRIC[ww] = dummy; ww++; // Don't need to set psi.
METRIC[ww] = gtxx[index]; ww++;
METRIC[ww] = gtxy[index]; ww++;
METRIC[ww] = gtxz[index]; ww++;
METRIC[ww] = gtyy[index]; ww++;
METRIC[ww] = gtyz[index]; ww++;
METRIC[ww] = gtzz[index]; ww++;
METRIC[ww] = lapm1[index]; ww++;
METRIC[ww] = betax[index]; ww++;
METRIC[ww] = betay[index]; ww++;
METRIC[ww] = betaz[index]; ww++;
METRIC[ww] = gtupxx[index]; ww++;
METRIC[ww] = gtupyy[index]; ww++;
METRIC[ww] = gtupzz[index]; ww++;
METRIC[ww] = gtupxy[index]; ww++;
METRIC[ww] = gtupxz[index]; ww++;
METRIC[ww] = gtupyz[index]; ww++;
```
<a id='variable_setup__prims'></a>
### Step 2.c.ii: Primitives \[Back to [top](#toc)\]
$$\label{variable_setup__prims}$$
Then we load our current known values for the primitive variables $\left\{\rho_{b}, P, v^{i}, B^{i}\right\}$ into a new array called $\rm PRIMS$.
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
CCTK_REAL PRIMS[MAXNUMVARS];
ww=0;
PRIMS[ww] = rho_b[index]; ww++;
PRIMS[ww] = P[index]; ww++;
PRIMS[ww] = vx[index]; ww++;
PRIMS[ww] = vy[index]; ww++;
PRIMS[ww] = vz[index]; ww++;
PRIMS[ww] = Bx[index]; ww++;
PRIMS[ww] = By[index]; ww++;
PRIMS[ww] = Bz[index]; ww++;
```
<a id='variable_setup__conservs'></a>
### Step 2.c.iii: Conservatives \[Back to [top](#toc)\]
$$\label{variable_setup__conservs}$$
Then we load our current known values for the conservative variables $\left\{\rho_{\star}, \tilde{S}_{i}, \tilde{\tau}\right\}$ into a new array called $\rm CONSERVS$.
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
CCTK_REAL CONSERVS[NUM_CONSERVS] = {rho_star[index], mhd_st_x[index],mhd_st_y[index],mhd_st_z[index],tau[index]};
```
<a id='variable_setup__lapse_and_psi'></a>
### Step 2.c.iv: Lapse function and conformal factor \[Back to [top](#toc)\]
$$\label{variable_setup__lapse_and_psi}$$
Then we load the lapse function, $\alpha$, and the $\psi$-related variables into the $\rm METRIC\_LAP\_PSI4$ array. Notice that this is done using the ${\rm SET\_LAPSE\_PSI4}()$ macro, which is defined in the `IllinoisGRMHD_headers.h` file from `IllinoisGRMHD`. The macro itself is quite simple:
```c
#define SET_LAPSE_PSI4(array_name,METRIC) { \
array_name[LAPSE] = METRIC[LAPM1]+1.0; \
array_name[PSI2] = exp(2.0*METRIC[PHI]); \
array_name[PSI4] = SQR(array_name[PSI2]); \
array_name[PSI6] = array_name[PSI4]*array_name[PSI2]; \
array_name[PSIM4] = 1.0/array_name[PSI4]; \
array_name[LAPSEINV] = 1.0/array_name[LAPSE]; \
}
```
defining the quantities $\left\{\alpha, \psi^{2}, \psi^{4}, \psi^{6}, \psi^{-4},\alpha^{-1}\right\}$, where $\psi=e^{\phi}$.
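For reference, a direct Python transcription of the macro (a sketch; the function name and dictionary keys are illustrative, mirroring the C macro's index names):

```python
import math

def set_lapse_psi4(metric_lapm1, metric_phi):
    """Python mirror of SET_LAPSE_PSI4: builds
    {alpha, psi^2, psi^4, psi^6, psi^-4, 1/alpha} from lapm1 and phi."""
    lapse = metric_lapm1 + 1.0        # alpha = (alpha - 1) + 1
    psi2 = math.exp(2.0 * metric_phi) # psi^2 = e^{2 phi}
    psi4 = psi2 * psi2
    return {
        "LAPSE":    lapse,
        "PSI2":     psi2,
        "PSI4":     psi4,
        "PSI6":     psi4 * psi2,
        "PSIM4":    1.0 / psi4,
        "LAPSEINV": 1.0 / lapse,
    }
```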
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
CCTK_REAL METRIC_LAP_PSI4[NUMVARS_METRIC_AUX];
SET_LAPSE_PSI4(METRIC_LAP_PSI4,METRIC);
```
<a id='variable_setup__phys_metric'></a>
### Step 2.c.v: Physical spatial metric \[Back to [top](#toc)\]
$$\label{variable_setup__phys_metric}$$
Then we set up the physical spatial metric and its inverse through the relations
$$
\boxed{
\begin{align}
\gamma_{ij} &= \psi^{4} \bar\gamma_{ij}\\
\gamma^{ij} &= \psi^{-4}\bar\gamma^{ij}
\end{align}
}\ .
$$
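In NumPy (illustrative values), rescaling the conformal pair by $\psi^{\pm4}$ yields a consistent physical metric/inverse pair, since the scalar factors cancel:

```python
import numpy as np

# Illustrative conformal metric and its inverse (any SPD matrix works
# for checking the psi^{+-4} scaling)
gammabar = np.array([[1.1, 0.1, 0.0],
                     [0.1, 1.0, 0.2],
                     [0.0, 0.2, 0.95]])
gammabar_up = np.linalg.inv(gammabar)
psi4 = np.exp(4.0 * 0.05)            # psi^4 for a sample phi = 0.05

gamma_phys    = psi4 * gammabar      # gamma_{ij} = psi^4  * gammabar_{ij}
gamma_phys_up = gammabar_up / psi4   # gamma^{ij} = psi^-4 * gammabar^{ij}
```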
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
CCTK_REAL METRIC_PHYS[NUMVARS_FOR_METRIC];
METRIC_PHYS[GXX] = METRIC[GXX]*METRIC_LAP_PSI4[PSI4];
METRIC_PHYS[GXY] = METRIC[GXY]*METRIC_LAP_PSI4[PSI4];
METRIC_PHYS[GXZ] = METRIC[GXZ]*METRIC_LAP_PSI4[PSI4];
METRIC_PHYS[GYY] = METRIC[GYY]*METRIC_LAP_PSI4[PSI4];
METRIC_PHYS[GYZ] = METRIC[GYZ]*METRIC_LAP_PSI4[PSI4];
METRIC_PHYS[GZZ] = METRIC[GZZ]*METRIC_LAP_PSI4[PSI4];
METRIC_PHYS[GUPXX] = METRIC[GUPXX]*METRIC_LAP_PSI4[PSIM4];
METRIC_PHYS[GUPXY] = METRIC[GUPXY]*METRIC_LAP_PSI4[PSIM4];
METRIC_PHYS[GUPXZ] = METRIC[GUPXZ]*METRIC_LAP_PSI4[PSIM4];
METRIC_PHYS[GUPYY] = METRIC[GUPYY]*METRIC_LAP_PSI4[PSIM4];
METRIC_PHYS[GUPYZ] = METRIC[GUPYZ]*METRIC_LAP_PSI4[PSIM4];
METRIC_PHYS[GUPZZ] = METRIC[GUPZZ]*METRIC_LAP_PSI4[PSIM4];
```
<a id='variable_setup__betadown_and_beta2'></a>
### Step 2.c.vi: $\beta_{i}$ and $\beta^{2} \equiv \beta_{i}\beta^{i}$ \[Back to [top](#toc)\]
$$\label{variable_setup__betadown_and_beta2}$$
We then evaluate
$$
\beta_{i} = \gamma_{ij}\beta^{j} \implies
\boxed{
\left\{
\begin{align}
\beta_{x} &= \gamma_{xx}\beta^{x} + \gamma_{xy}\beta^{y} + \gamma_{xz}\beta^{z}\\
\beta_{y} &= \gamma_{yx}\beta^{x} + \gamma_{yy}\beta^{y} + \gamma_{yz}\beta^{z}\\
\beta_{z} &= \gamma_{zx}\beta^{x} + \gamma_{zy}\beta^{y} + \gamma_{zz}\beta^{z}
\end{align}
\right.
}\ ,
$$
and
$$
\boxed{\beta^{2} \equiv \beta_{i}\beta^{i} = \beta_{x}\beta^{x} + \beta_{y}\beta^{y} + \beta_{z}\beta^{z}}\ .
$$
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
CCTK_REAL TUPMUNU[10],TDNMUNU[10];
CCTK_REAL shift_xL = METRIC_PHYS[GXX]*METRIC[SHIFTX] + METRIC_PHYS[GXY]*METRIC[SHIFTY] + METRIC_PHYS[GXZ]*METRIC[SHIFTZ];
CCTK_REAL shift_yL = METRIC_PHYS[GXY]*METRIC[SHIFTX] + METRIC_PHYS[GYY]*METRIC[SHIFTY] + METRIC_PHYS[GYZ]*METRIC[SHIFTZ];
CCTK_REAL shift_zL = METRIC_PHYS[GXZ]*METRIC[SHIFTX] + METRIC_PHYS[GYZ]*METRIC[SHIFTY] + METRIC_PHYS[GZZ]*METRIC[SHIFTZ];
CCTK_REAL beta2L = shift_xL*METRIC[SHIFTX] + shift_yL*METRIC[SHIFTY] + shift_zL*METRIC[SHIFTZ];
```
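The index lowering and contraction above amount to one matrix-vector product and one dot product; a standalone NumPy sketch with illustrative values:

```python
import numpy as np

# Illustrative physical 3-metric gamma_{ij} and shift vector beta^i
gamma = np.array([[1.2, 0.1, 0.0],
                  [0.1, 1.1, 0.2],
                  [0.0, 0.2, 1.3]])
beta_up = np.array([0.05, -0.02, 0.01])

beta_dn = gamma @ beta_up   # beta_i = gamma_{ij} beta^j
beta2 = beta_dn @ beta_up   # beta^2 = beta_i beta^i
```

Since $\gamma_{ij}$ is positive definite, $\beta^{2}\geq 0$ always.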
<a id='variable_setup__adm_4metric'></a>
### Step 2.c.vii: The ADM 4-metric, $g_{\mu\nu}$, and its inverse, $g^{\mu\nu}$ \[Back to [top](#toc)\]
$$\label{variable_setup__adm_4metric}$$
We then setup the ADM 4-metric and its inverse. We refer the reader to eqs. (2.119) and (2.122) from [Baumgarte & Shapiro's Numerical Relativity (2010)](https://books.google.com/books/about/Numerical_Relativity.html?id=dxU1OEinvRUC), which are repeated here for the sake of the reader (in reverse order)
$$
\boxed{g_{\mu\nu} =
\begin{pmatrix}
-\alpha^{2} + \beta_{\ell}\beta^{\ell} & \beta_{i}\\
\beta_{j} & \gamma_{ij}
\end{pmatrix}}\ .
$$
and
$$
\boxed{
g^{\mu\nu} =
\begin{pmatrix}
-\alpha^{-2} & \alpha^{-2}\beta^{i}\\
\alpha^{-2}\beta^{j} & \gamma^{ij} - \alpha^{-2}\beta^{i}\beta^{j}
\end{pmatrix}
}\ .
$$
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
// Compute 4-metric, both g_{\mu \nu} and g^{\mu \nu}.
// This is for computing T_{\mu \nu} and T^{\mu \nu}. Also the HARM con2prim lowlevel function requires them.
CCTK_REAL g4dn[4][4],g4up[4][4];
g4dn[0][0] = -SQR(METRIC_LAP_PSI4[LAPSE]) + beta2L;
g4dn[0][1] = g4dn[1][0] = shift_xL;
g4dn[0][2] = g4dn[2][0] = shift_yL;
g4dn[0][3] = g4dn[3][0] = shift_zL;
g4dn[1][1] = METRIC_PHYS[GXX];
g4dn[1][2] = g4dn[2][1] = METRIC_PHYS[GXY];
g4dn[1][3] = g4dn[3][1] = METRIC_PHYS[GXZ];
g4dn[2][2] = METRIC_PHYS[GYY];
g4dn[2][3] = g4dn[3][2] = METRIC_PHYS[GYZ];
g4dn[3][3] = METRIC_PHYS[GZZ];
CCTK_REAL alpha_inv_squared=SQR(METRIC_LAP_PSI4[LAPSEINV]);
g4up[0][0] = -1.0*alpha_inv_squared;
g4up[0][1] = g4up[1][0] = METRIC[SHIFTX]*alpha_inv_squared;
g4up[0][2] = g4up[2][0] = METRIC[SHIFTY]*alpha_inv_squared;
g4up[0][3] = g4up[3][0] = METRIC[SHIFTZ]*alpha_inv_squared;
g4up[1][1] = METRIC_PHYS[GUPXX] - METRIC[SHIFTX]*METRIC[SHIFTX]*alpha_inv_squared;
g4up[1][2] = g4up[2][1] = METRIC_PHYS[GUPXY] - METRIC[SHIFTX]*METRIC[SHIFTY]*alpha_inv_squared;
g4up[1][3] = g4up[3][1] = METRIC_PHYS[GUPXZ] - METRIC[SHIFTX]*METRIC[SHIFTZ]*alpha_inv_squared;
g4up[2][2] = METRIC_PHYS[GUPYY] - METRIC[SHIFTY]*METRIC[SHIFTY]*alpha_inv_squared;
g4up[2][3] = g4up[3][2] = METRIC_PHYS[GUPYZ] - METRIC[SHIFTY]*METRIC[SHIFTZ]*alpha_inv_squared;
g4up[3][3] = METRIC_PHYS[GUPZZ] - METRIC[SHIFTZ]*METRIC[SHIFTZ]*alpha_inv_squared;
```
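As a sanity check, the 4-metric assembly above can be reproduced in NumPy (illustrative ADM data, not thorn variables) and verified against the defining property $g_{\mu\lambda}g^{\lambda\nu} = \delta_{\mu}^{\ \nu}$:

```python
import numpy as np

# Illustrative ADM data: lapse alpha, shift beta^i, physical 3-metric gamma_{ij}
alpha = 1.1
beta_up = np.array([0.05, -0.02, 0.01])
gamma = np.array([[1.2, 0.1, 0.0],
                  [0.1, 1.1, 0.2],
                  [0.0, 0.2, 1.3]])
gamma_up = np.linalg.inv(gamma)
beta_dn = gamma @ beta_up     # beta_i
beta2 = beta_dn @ beta_up     # beta_i beta^i
ainv2 = 1.0 / alpha**2        # alpha^{-2}

g4dn = np.empty((4, 4))       # g_{mu nu}
g4dn[0, 0] = -alpha**2 + beta2
g4dn[0, 1:] = beta_dn
g4dn[1:, 0] = beta_dn
g4dn[1:, 1:] = gamma

g4up = np.empty((4, 4))       # g^{mu nu}
g4up[0, 0] = -ainv2
g4up[0, 1:] = beta_up * ainv2
g4up[1:, 0] = beta_up * ainv2
g4up[1:, 1:] = gamma_up - ainv2 * np.outer(beta_up, beta_up)
```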
<a id='variable_setup__temp_conservs'></a>
### Step 2.c.viii: Temporary storage for current values of the conservative variables \[Back to [top](#toc)\]
$$\label{variable_setup__temp_conservs}$$
Instead of declaring new variables to store the currently known values of the conservative variables $\left\{\rho_{\star}, \tilde{S}_{i}, \tilde{\tau}\right\}$, we will simply use the flux variables $\left\{\rho_{\star}^{\rm flux}, \tilde{S}_{i}^{\rm flux}, \tilde{\tau}^{\rm flux}\right\}$ which are used by `IllinoisGRMHD` as temporary storage. This is done for debugging purposes.
We also store the original values in the variables $\left\{\rho_{\star}^{\rm orig}, \tilde{S}_{i}^{\rm orig}, \tilde{\tau}^{\rm orig}\right\}$.
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
//FIXME: might slow down the code.
if(isnan(CONSERVS[RHOSTAR]*CONSERVS[STILDEX]*CONSERVS[STILDEY]*CONSERVS[STILDEZ]*CONSERVS[TAUENERGY]*PRIMS[BX_CENTER]*PRIMS[BY_CENTER]*PRIMS[BZ_CENTER])) {
CCTK_VInfo(CCTK_THORNSTRING,"NAN FOUND: i,j,k = %d %d %d, x,y,z = %e %e %e , index=%d st_i = %e %e %e, rhostar = %e, tau = %e, Bi = %e %e %e, gij = %e %e %e %e %e %e, Psi6 = %e",
i,j,k,x[index],y[index],z[index],index,
CONSERVS[STILDEX],CONSERVS[STILDEY],CONSERVS[STILDEZ],CONSERVS[RHOSTAR],CONSERVS[TAUENERGY],
PRIMS[BX_CENTER],PRIMS[BY_CENTER],PRIMS[BZ_CENTER],METRIC_PHYS[GXX],METRIC_PHYS[GXY],METRIC_PHYS[GXZ],METRIC_PHYS[GYY],METRIC_PHYS[GYZ],METRIC_PHYS[GZZ],METRIC_LAP_PSI4[PSI6]);
}
// Here we use _flux variables as temp storage for original values of conservative variables.. This is used for debugging purposes only.
rho_star_flux[index] = CONSERVS[RHOSTAR];
st_x_flux[index] = CONSERVS[STILDEX];
st_y_flux[index] = CONSERVS[STILDEY];
st_z_flux[index] = CONSERVS[STILDEZ];
tau_flux[index] = CONSERVS[TAUENERGY];
CCTK_REAL rho_star_orig = CONSERVS[RHOSTAR];
CCTK_REAL mhd_st_x_orig = CONSERVS[STILDEX];
CCTK_REAL mhd_st_y_orig = CONSERVS[STILDEY];
CCTK_REAL mhd_st_z_orig = CONSERVS[STILDEZ];
CCTK_REAL tau_orig = CONSERVS[TAUENERGY];
```
<a id='conserv_to_prim__driver'></a>
## Step 2.d: Determining the primitive variables from the conservative variables \[Back to [top](#toc)\]
$$\label{conserv_to_prim__driver}$$
In this part of the code we determine the primitive variables from the conservative variables. Please note that this is only the driver function; the algorithm itself is discussed in [Step 3](#harm_primitives_lowlevel) below.
We start by calling the `apply_tau_floor()` function to ensure that the value of $\tilde\tau$ is not out of physical range. This function is discussed in the [apply $\tilde\tau$ floor and enforce limits on primitives NRPy+ tutorial module](Tutorial-IllinoisGRMHD__apply_tau_floor__enforce_limits_on_primitives_and_recompute_conservs.ipynb).
Notice that this is only performed when $\rho_{\star}>0$. If this is not the case, we fix the issue by setting $\rho_{b} = \rho_{b}^{\rm atm}$, and the conservative-to-primitive algorithm is skipped altogether.
When $\rho_{\star}>0$, we call the conservative-to-primitive algorithm through the `harm_primitives_gammalaw_lowlevel()` function, described in [Step 3](#harm_primitives_lowlevel).
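The per-gridpoint control flow just described can be sketched in Python (all names and signatures here are illustrative, not `IllinoisGRMHD`'s actual C API):

```python
def conserv_to_prim_point(rho_star, solver, prims, rho_b_atm, K, Gamma,
                          max_attempts=3):
    """Sketch of the driver logic: retry the inversion a few times when
    rho_star > 0; otherwise reset to a cold polytropic atmosphere."""
    if rho_star > 0.0:
        # (apply_tau_floor() would be called here first)
        for _ in range(max_attempts):
            if solver(prims):           # e.g. the Noble et al. 2D solver
                return True             # inversion succeeded
        return False                    # failure; caller may try the Font fix
    # rho_star <= 0: set atmosphere values and skip the inversion entirely
    prims["rho_b"] = rho_b_atm
    prims["P"] = K * rho_b_atm**Gamma   # P = P_cold for the polytropic piece
    # (IllinoisGRMHD sets v^i = -beta^i here; we use 0 in this sketch)
    prims["vx"] = prims["vy"] = prims["vz"] = 0.0
    return True
```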
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
int check=0;
struct output_stats stats;
stats.n_iter=0;
stats.vel_limited=0;
stats.failure_checker=0;
if(CONSERVS[RHOSTAR]>0.0) {
// Apply the tau floor
apply_tau_floor(index,tau_atm,rho_b_atm,Psi6threshold,PRIMS,METRIC,METRIC_PHYS,METRIC_LAP_PSI4,stats,eos, CONSERVS);
stats.font_fixed=0;
for(int ii=0;ii<3;ii++) {
check = harm_primitives_gammalaw_lowlevel(index,i,j,k,x,y,z,METRIC,METRIC_PHYS,METRIC_LAP_PSI4,
CONSERVS,PRIMS, g4dn,g4up, stats,eos);
if(check==0) ii=4;
else stats.failure_checker+=100000;
}
} else {
stats.failure_checker+=1;
// Set to atmosphere if rho_star<0.
//FIXME: FOR GAMMA=2 ONLY:
PRIMS[RHOB] = rho_b_atm;
/* Set P = P_cold */
int polytropic_index = find_polytropic_K_and_Gamma_index(eos, rho_b_atm);
CCTK_REAL K_ppoly_tab = eos.K_ppoly_tab[polytropic_index];
CCTK_REAL Gamma_ppoly_tab = eos.Gamma_ppoly_tab[polytropic_index];
PRIMS[PRESSURE] = K_ppoly_tab*pow(rho_b_atm,Gamma_ppoly_tab);
PRIMS[VX] =-METRIC[SHIFTX];
PRIMS[VY] =-METRIC[SHIFTY];
PRIMS[VZ] =-METRIC[SHIFTZ];
rho_star_fix_applied++;
}
```
<a id='enforce_limits_on_primitives_and_recompute_conservs'></a>
## Step 2.e: Enforcing physical limits on primitives and recomputing the conservative variables \[Back to [top](#toc)\]
$$\label{enforce_limits_on_primitives_and_recompute_conservs}$$
We now enforce that the values of the primitive variables are physically meaningful. This is done by calling the `IllinoisGRMHD_enforce_limits_on_primitives_and_recompute_conservs()` function, which is described in the [enforce physical limits on primitives and recompute conservatives NRPy+ tutorial module](Tutorial-IllinoisGRMHD__apply_tau_floor__enforce_limits_on_primitives_and_recompute_conservs.ipynb).
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
// Enforce limits on primitive variables and recompute conservatives.
static const int already_computed_physical_metric_and_inverse=1;
IllinoisGRMHD_enforce_limits_on_primitives_and_recompute_conservs(already_computed_physical_metric_and_inverse,PRIMS,stats,eos,METRIC,g4dn,g4up, TUPMUNU,TDNMUNU,CONSERVS);
```
<a id='updating_conservs_and_prims_gfs'></a>
## Step 2.f: Updating conservative and primitive gridfunctions \[Back to [top](#toc)\]
$$\label{updating_conservs_and_prims_gfs}$$
Then we update the corresponding conservative and primitive gridfunctions with the updated values we just computed.
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
rho_star[index] = CONSERVS[RHOSTAR];
mhd_st_x[index] = CONSERVS[STILDEX];
mhd_st_y[index] = CONSERVS[STILDEY];
mhd_st_z[index] = CONSERVS[STILDEZ];
tau[index] = CONSERVS[TAUENERGY];
// Set primitives, and/or provide a better guess.
rho_b[index] = PRIMS[RHOB];
P[index] = PRIMS[PRESSURE];
vx[index] = PRIMS[VX];
vy[index] = PRIMS[VY];
vz[index] = PRIMS[VZ];
```
<a id='diagnostics_and_debugging_tools'></a>
## Step 2.g: Diagnostics and debugging tools \[Back to [top](#toc)\]
$$\label{diagnostics_and_debugging_tools}$$
Now we append useful diagnostics and debugging tools for our code to the file.
```
%%writefile -a $outfile_path__driver_conserv_to_prims__C
if(update_Tmunu) {
ww=0;
eTtt[index] = TDNMUNU[ww]; ww++;
eTtx[index] = TDNMUNU[ww]; ww++;
eTty[index] = TDNMUNU[ww]; ww++;
eTtz[index] = TDNMUNU[ww]; ww++;
eTxx[index] = TDNMUNU[ww]; ww++;
eTxy[index] = TDNMUNU[ww]; ww++;
eTxz[index] = TDNMUNU[ww]; ww++;
eTyy[index] = TDNMUNU[ww]; ww++;
eTyz[index] = TDNMUNU[ww]; ww++;
eTzz[index] = TDNMUNU[ww];
}
//Finally, we set h, the enthalpy:
//CCTK_REAL eps = P[index]/rho_b[index]/(GAMMA-1.0);
//h[index] = 1.0 + P[index]/rho_b[index] + eps;
/***************************************************************************************************************************/
// DIAGNOSTICS:
//Pressure cap hit?
/* FIXME
CCTK_REAL P_cold = rho_b[index]*rho_b[index];
if(P[index]/P_cold > 0.99*1e3 && rho_b[index]>100.0*rho_b_atm) {
if(exp(phi[index]*6.0) <= Psi6threshold) pressure_cap_hit++;
}
*/
//Now we compute the difference between original & new conservatives, for diagnostic purposes:
error_int_numer += fabs(tau[index] - tau_orig) + fabs(rho_star[index] - rho_star_orig) +
fabs(mhd_st_x[index] - mhd_st_x_orig) + fabs(mhd_st_y[index] - mhd_st_y_orig) + fabs(mhd_st_z[index] - mhd_st_z_orig);
error_int_denom += tau_orig + rho_star_orig + fabs(mhd_st_x_orig) + fabs(mhd_st_y_orig) + fabs(mhd_st_z_orig);
if(stats.font_fixed==1) font_fixes++;
vel_limited_ptcount+=stats.vel_limited;
if(check!=0) {
failures++;
if(exp(METRIC[PHI]*6.0)>Psi6threshold) {
failures_inhoriz++;
pointcount_inhoriz++;
}
}
pointcount++;
/***************************************************************************************************************************/
failure_checker[index] = stats.failure_checker;
n_iter += stats.n_iter;
}
/*
gettimeofday(&end, NULL);
seconds = end.tv_sec - start.tv_sec;
useconds = end.tv_usec - start.tv_usec;
mtime = ((seconds) * 1000 + useconds/1000.0) + 0.999; // We add 0.999 since mtime is a long int; this rounds up the result before setting the value. Here, rounding down is incorrect.
solutions per second: cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2] / ((CCTK_REAL)mtime/1000.0),
*/
if(CCTK_Equals(verbose, "essential") || CCTK_Equals(verbose, "essential+iteration output")) {
CCTK_VInfo(CCTK_THORNSTRING,"C2P: Lev: %d NumPts= %d | Fixes: Font= %d VL= %d rho*= %d | Failures: %d InHoriz= %d / %d | Error: %.3e, ErrDenom: %.3e | %.2f iters/gridpt",
(int)GetRefinementLevel(cctkGH),
pointcount,font_fixes,vel_limited_ptcount,rho_star_fix_applied,
failures,
failures_inhoriz,pointcount_inhoriz,
error_int_numer/error_int_denom,error_int_denom,
(double)n_iter/( (double)(cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2]) ));
}
if(pressure_cap_hit!=0) {
//CCTK_VInfo(CCTK_THORNSTRING,"PRESSURE CAP HIT %d TIMES! Outputting debug file!",pressure_cap_hit);
}
// Very useful con2prim debugger. If the primitives (con2prim) solver fails, this will output all data needed to
// debug where and why the solver failed. Strongly suggested for experimenting with new fixes.
if(conserv_to_prims_debug==1 && error_int_numer/error_int_denom > 0.05) {
ofstream myfile;
char filename[100];
srand(time(NULL));
sprintf(filename,"primitives_debug-%e.dat",error_int_numer/error_int_denom);
//Alternative, for debugging purposes as well:
//srand(time(NULL));
//sprintf(filename,"primitives_debug-%d.dat",rand());
myfile.open (filename, ios::out | ios::binary);
//myfile.open ("data.bin", ios::out | ios::binary);
myfile.write((char*)cctk_lsh, 3*sizeof(int));
myfile.write((char*)&GAMMA_SPEED_LIMIT, 1*sizeof(CCTK_REAL));
myfile.write((char*)&rho_b_max, 1*sizeof(CCTK_REAL));
myfile.write((char*)&rho_b_atm, 1*sizeof(CCTK_REAL));
myfile.write((char*)&tau_atm, 1*sizeof(CCTK_REAL));
myfile.write((char*)&Psi6threshold, 1*sizeof(CCTK_REAL));
myfile.write((char*)&update_Tmunu, 1*sizeof(int));
myfile.write((char*)&neos, 1*sizeof(int));
myfile.write((char*)&Gamma_th, 1*sizeof(CCTK_REAL));
myfile.write((char*)&K_ppoly_tab0, 1*sizeof(CCTK_REAL));
myfile.write((char*)Gamma_ppoly_tab_in, neos*sizeof(CCTK_REAL));
myfile.write((char*)rho_ppoly_tab_in, (neos-1)*sizeof(CCTK_REAL));
int fullsize=cctk_lsh[0]*cctk_lsh[1]*cctk_lsh[2];
myfile.write((char*)x, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)y, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)z, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char *)failure_checker, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)eTtt, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)eTtx, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)eTty, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)eTtz, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)eTxx, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)eTxy, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)eTxz, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)eTyy, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)eTyz, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)eTzz, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)alp, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)gxx, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)gxy, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)gxz, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)gyy, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)gyz, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)gzz, fullsize*sizeof(CCTK_REAL));
myfile.write((char *)psi_bssn, fullsize*sizeof(CCTK_REAL));
myfile.write((char*)phi_bssn, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtxx, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtxy, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtxz, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtyy, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtyz, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtzz, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtupxx, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtupxy, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtupxz, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtupyy, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtupyz, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)gtupzz, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)betax, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)betay, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)betaz, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)lapm1, (fullsize)*sizeof(CCTK_REAL));
// HERE WE USE _flux variables as temp storage for original values of conservative variables.. This is used for debugging purposes only.
myfile.write((char*)tau_flux, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)st_x_flux, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)st_y_flux, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)st_z_flux, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)rho_star_flux, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)Bx, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)By, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)Bz, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)vx, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)vy, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)vz, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)P, (fullsize)*sizeof(CCTK_REAL));
myfile.write((char*)rho_b,(fullsize)*sizeof(CCTK_REAL));
int checker=1063; myfile.write((char*)&checker,sizeof(int));
myfile.close();
CCTK_VInfo(CCTK_THORNSTRING,"Finished writing %s",filename);
}
#ifdef ENABLE_STANDALONE_IGM_C2P_SOLVER
return 0; // int main() requires an integer be returned
#endif
}
#include "harm_primitives_lowlevel.C"
```
<a id='harm_primitives_lowlevel'></a>
# Step 3: `harm_primitives_lowlevel.C` \[Back to [top](#toc)\]
$$\label{harm_primitives_lowlevel}$$
We will now begin documenting the `harm_primitives_lowlevel.C` code. We will start by declaring the `harm_primitives_gammalaw_lowlevel()` function and loading up the EOS parameters:
$$
\begin{align}
{\rm kpoly} &:= \kappa\ ,\\
{\rm gamma} &:= \Gamma\ ,
\end{align}
$$
where we are currently assuming the single polytrope EOS
$$
P = \kappa \rho_{b}^{\Gamma}\ .
$$
<a id='variables_needed_by_harm'></a>
## Step 3.a: Setting up the variables needed by `HARM` \[Back to [top](#toc)\]
$$\label{variables_needed_by_harm}$$
In this section we will describe the final set of manipulations that are required *before* calling the conservative-to-primitive algorithm. These manipulations are necessary because we make use of the [`HARM` software](https://arxiv.org/abs/astro-ph/0301509), which uses a different set of conservative/primitives variables than `IllinoisGRMHD`.
Notice that `IllinoisGRMHD` uses the set of conservative variables $\boldsymbol{C}_{\rm IGM} = \left\{\rho_{\star},\tilde{\tau},\tilde{S}_{i},\tilde{B}^{i}\right\}$ and the primitive variables $\boldsymbol{P}_{\rm IGM} = \left\{\rho_{b},P,v^{i},B^{i}\right\}$, while `HARM` makes use of the conservative variables $\boldsymbol{C}_{\rm HARM} = \left\{\sqrt{-g}\rho_{b}u^{0},\sqrt{-g}\left(T^{0}_{\ 0}+\rho_{b}u^{0}\right),\sqrt{-g}T^{0}_{\ i},\sqrt{-g}B^{i}_{\rm HARM}\right\}$ and primitive variables $\boldsymbol{P}_{\rm HARM} = \left\{\rho_{b},u,\tilde{u}^{i},B^{i}_{\rm HARM}\right\}$. The key step here is then writing `HARM`'s variables in terms of `IllinoisGRMHD`'s variables.
<a id='variables_needed_by_harm__detg'></a>
### Step 3.a.i: ${\rm detg}$ \[Back to [top](#toc)\]
$$\label{variables_needed_by_harm__detg}$$
The first variable we will need to compute is ${\rm detg} := \sqrt{-g}$, where $g=\det(g_{\mu\nu})$, with $g_{\mu\nu}$ the [physical ADM 4-metric evaluated above](#variable_setup__adm_4metric). To relate $\sqrt{-g}$ with `IllinoisGRMHD`'s variables, we will make use of the well known relation (see eq. 2.124 in [Baumgarte & Shapiro's Numerical Relativity](https://books.google.com/books/about/Numerical_Relativity.html?id=dxU1OEinvRUC))
$$
\boxed{\sqrt{-g} = \alpha\sqrt{\gamma} = \alpha\psi^{6}}\ .
$$
```
%%writefile $outfile_path__harm_primitives_lowlevel__C
inline int harm_primitives_gammalaw_lowlevel(const int index,const int i,const int j,const int k,CCTK_REAL *X,CCTK_REAL *Y,CCTK_REAL *Z,
CCTK_REAL *METRIC,CCTK_REAL *METRIC_PHYS,CCTK_REAL *METRIC_LAP_PSI4,
CCTK_REAL *CONSERVS,CCTK_REAL *PRIMS,
CCTK_REAL g4dn[NDIM][NDIM],CCTK_REAL g4up[NDIM][NDIM],
struct output_stats &stats, eos_struct &eos) {
#ifndef ENABLE_STANDALONE_IGM_C2P_SOLVER
DECLARE_CCTK_PARAMETERS;
#endif
```
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
// declare some variables for HARM.
CCTK_REAL U[NPR];
CCTK_REAL prim[NPR];
CCTK_REAL detg = METRIC_LAP_PSI4[LAPSE]*METRIC_LAP_PSI4[PSI6]; // == alpha sqrt{gamma} = alpha Psi^6
// Check to see if the metric is positive-definite.
// Note that this will slow down the code, and if the metric doesn't obey this, the run is probably too far gone to save,
// though if it happens deep in the horizon, it might resurrect the run.
/*
CCTK_REAL lam1,lam2,lam3;
CCTK_REAL M11 = METRIC[GXX], M12=METRIC[GXY], M13=METRIC[GXZ], M22=METRIC[GYY], M23=METRIC[GYZ], M33=METRIC[GZZ];
eigenvalues_3by3_real_sym_matrix(lam1, lam2, lam3,M11, M12, M13, M22, M23, M33);
if (lam1 < 0.0 || lam2 < 0.0 || lam3 < 0.0) {
    // Metric is not positive-definite, reset the physical metric to be conformally-flat.
METRIC_PHYS[GXX] = METRIC_LAP_PSI4[PSI4];
METRIC_PHYS[GXY] = 0.0;
METRIC_PHYS[GXZ] = 0.0;
METRIC_PHYS[GYY] = METRIC_LAP_PSI4[PSI4];
METRIC_PHYS[GYZ] = 0.0;
METRIC_PHYS[GZZ] = METRIC_LAP_PSI4[PSI4];
METRIC_PHYS[GUPXX] = METRIC_LAP_PSI4[PSIM4];
METRIC_PHYS[GUPXY] = 0.0;
METRIC_PHYS[GUPXZ] = 0.0;
METRIC_PHYS[GUPYY] = METRIC_LAP_PSI4[PSIM4];
METRIC_PHYS[GUPYZ] = 0.0;
METRIC_PHYS[GUPZZ] = METRIC_LAP_PSI4[PSIM4];
}
*/
```
<a id='variables_needed_by_harm__bi_harm'></a>
### Step 3.a.ii: $B^{i}_{\rm HARM}$ \[Back to [top](#toc)\]
$$\label{variables_needed_by_harm__bi_harm}$$
Now we must relate `HARM`'s $B^{i}_{\rm HARM}$ with the variables used by `IllinoisGRMHD`. We can start by looking at eqs. (23), (24), and (31) in [Duez *et al.* (2005)](https://arxiv.org/pdf/astro-ph/0503420.pdf) to find the following useful relations
$$
{\rm IllinoisGRMHD:}\
\left\{
\begin{align}
\sqrt{4\pi}b^{0} &= \frac{u_{i}B^{i}}{\alpha}\ ,\\
\sqrt{4\pi}b^{i} &= \frac{B^{i}/\alpha + \sqrt{4\pi}b^{0}u^{i}}{u^{0}}\ .
\end{align}
\right.
$$
Now, if we look at eqs. (16) and (17) in [Gammie & McKinney (2003)](https://arxiv.org/pdf/astro-ph/0301509.pdf) we find the following relations
$$
{\rm HARM:}\
\left\{
\begin{align}
b^{0} &= u_{i}B^{i}_{\rm HARM}\ ,\\
b^{i} &= \frac{B^{i}_{\rm HARM} + b^{0}u^{i}}{u^{0}}\ ,
\end{align}
\right.
$$
from which we can then find the relation
$$
\boxed{B^{i}_{\rm HARM} = \frac{B^{i}}{\alpha\sqrt{4\pi}}}\ .
$$
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
// Note that ONE_OVER_SQRT_4PI gets us to the object
// referred to as B^i in the Noble et al paper (and
// apparently also in the comments to their code).
// This is NOT the \mathcal{B}^i, which differs by
// a factor of the lapse.
CCTK_REAL BxL_over_alpha_sqrt_fourpi = PRIMS[BX_CENTER]*METRIC_LAP_PSI4[LAPSEINV]*ONE_OVER_SQRT_4PI;
CCTK_REAL ByL_over_alpha_sqrt_fourpi = PRIMS[BY_CENTER]*METRIC_LAP_PSI4[LAPSEINV]*ONE_OVER_SQRT_4PI;
CCTK_REAL BzL_over_alpha_sqrt_fourpi = PRIMS[BZ_CENTER]*METRIC_LAP_PSI4[LAPSEINV]*ONE_OVER_SQRT_4PI;
```
<a id='variables_needed_by_harm__init_rhob_pressure_vi'></a>
### Step 3.a.iii: Initializing $\rho_{b}$, $P$, and $v^{i}$ \[Back to [top](#toc)\]
$$\label{variables_needed_by_harm__init_rhob_pressure_vi}$$
Here we simply initialize $\rho_{b}$, $P$, and $v^{i}$.
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
CCTK_REAL rho_b_oldL = PRIMS[RHOB];
CCTK_REAL P_oldL = PRIMS[PRESSURE];
CCTK_REAL vxL = PRIMS[VX];
CCTK_REAL vyL = PRIMS[VY];
CCTK_REAL vzL = PRIMS[VZ];
```
<a id='variables_needed_by_harm__original_conserv'></a>
### Step 3.a.iv: Storing the original values of the conservative variables \[Back to [top](#toc)\]
$$\label{variables_needed_by_harm__original_conserv}$$
Here we store the currently known values of $\left\{\rho_{\star},\tilde{\tau},\tilde{S}_{i}\right\}$.
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
/*
-- Driver for new prim. var. solver. The driver just translates
between the two sets of definitions for U and P. The user may
wish to alter the translation as they see fit.
// / rho u^t \ //
// U = | T^t_t + rho u^t | * sqrt(-det(g_{\mu\nu})) //
// | T^t_i | //
// \ B^i / //
// //
// / rho \ //
// P = | uu | //
// | \tilde{u}^i | //
// \ B^i / //
(above equations have been fixed by Yuk Tung & Zach)
*/
// U[NPR] = conserved variables (current values on input/output);
// g4dn[NDIM][NDIM] = covariant form of the 4-metric ;
// g4up[NDIM][NDIM] = contravariant form of the 4-metric ;
// gdet = sqrt( - determinant of the 4-metric) ;
// prim[NPR] = primitive variables (guess on input, calculated values on
// output if there are no problems);
// U[1] =
// U[2-4] = stildei + rhostar
CCTK_REAL rho_star_orig = CONSERVS[RHOSTAR];
CCTK_REAL mhd_st_x_orig = CONSERVS[STILDEX];
CCTK_REAL mhd_st_y_orig = CONSERVS[STILDEY];
CCTK_REAL mhd_st_z_orig = CONSERVS[STILDEZ];
CCTK_REAL tau_orig = CONSERVS[TAUENERGY];
```
<a id='variables_needed_by_harm__guessing_rhob_pressure_vi'></a>
### Step 3.a.v: Guessing $\rho_{b}$, $P$, and $v^{i}$ \[Back to [top](#toc)\]
$$\label{variables_needed_by_harm__guessing_rhob_pressure_vi}$$
We will now start preparing the primitives for the conservative-to-primitive algorithm, which employs the [Newton-Raphson method](https://en.wikipedia.org/wiki/Newton%27s_method).
We offer the algorithm the initial guess
$$
\boxed{
{\rm Guess\ \#1:}\
\left\{
\begin{align}
\rho_{b}^{\rm guess} &= \frac{\rho_{\star}}{\psi^{6}}\\
P_{\rm guess} &= \kappa \left(\rho_{b}^{\rm guess}\right)^{\Gamma}\\
u^{0} &= \frac{1}{\alpha}\\
v^{i} &= -\beta^{i}
\end{align}
\right.
}\ .
$$
If this initial guess causes the Newton-Raphson method to fail, we try a second guess
$$
\boxed{
{\rm Guess\ \#2:}\
\left\{
\begin{align}
\rho_{b}^{\rm guess} &= 100\rho_{b}^{\rm atm}\\
P_{\rm guess} &= \kappa \left(\rho_{b}^{\rm guess}\right)^{\Gamma}\\
u^{0} &= \frac{1}{\alpha}\\
v^{i} &= -\beta^{i}
\end{align}
\right.
}\ .
$$
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
// Other ideas for setting the gamma speed limit
//CCTK_REAL GAMMA_SPEED_LIMIT = 100.0;
//if(METRIC_LAP_PSI4[PSI6]>Psi6threshold) GAMMA_SPEED_LIMIT=500.0;
//if(METRIC_LAP_PSI4[PSI6]>Psi6threshold) GAMMA_SPEED_LIMIT=100.0;
//FIXME: Only works if poisoning is turned on. Otherwise will access unknown memory. This trick alone speeds up the whole code (Cowling) by 2%.
//int startguess=0;
//if(std::isnan(PRIMS[VX])) startguess=1;
int startguess=1;
CCTK_REAL u0L=1.0;
CCTK_REAL K_ppoly_tab,Gamma_ppoly_tab;
for(int which_guess=startguess;which_guess<3;which_guess++) {
int check;
if(which_guess==1) {
//Use a different initial guess:
rho_b_oldL = CONSERVS[RHOSTAR]/METRIC_LAP_PSI4[PSI6];
/**********************************
* Piecewise Polytropic EOS Patch *
* Finding Gamma_ppoly_tab and K_ppoly_tab *
**********************************/
/* Here we use our newly implemented
* find_polytropic_K_and_Gamma() function
* to determine the relevant polytropic
* Gamma and K parameters to be used
* within this function.
*/
int polytropic_index = find_polytropic_K_and_Gamma_index(eos,rho_b_oldL);
K_ppoly_tab = eos.K_ppoly_tab[polytropic_index];
Gamma_ppoly_tab = eos.Gamma_ppoly_tab[polytropic_index];
// After that, we compute P_cold
P_oldL = K_ppoly_tab*pow(rho_b_oldL,Gamma_ppoly_tab);
u0L = METRIC_LAP_PSI4[LAPSEINV];
vxL = -METRIC[SHIFTX];
vyL = -METRIC[SHIFTY];
vzL = -METRIC[SHIFTZ];
}
if(which_guess==2) {
//Use atmosphere as initial guess:
rho_b_oldL = 100.0*rho_b_atm;
/**********************************
* Piecewise Polytropic EOS Patch *
* Finding Gamma_ppoly_tab and K_ppoly_tab *
**********************************/
/* Here we use our newly implemented
* find_polytropic_K_and_Gamma() function
* to determine the relevant polytropic
* Gamma and K parameters to be used
* within this function.
*/
int polytropic_index = find_polytropic_K_and_Gamma_index(eos,rho_b_oldL);
K_ppoly_tab = eos.K_ppoly_tab[polytropic_index];
Gamma_ppoly_tab = eos.Gamma_ppoly_tab[polytropic_index];
// After that, we compute P_cold
P_oldL = K_ppoly_tab*pow(rho_b_oldL,Gamma_ppoly_tab);
u0L = METRIC_LAP_PSI4[LAPSEINV];
vxL = -METRIC[SHIFTX];
vyL = -METRIC[SHIFTY];
vzL = -METRIC[SHIFTZ];
}
```
<a id='variables_needed_by_harm__conservs'></a>
### Step 3.a.vi: Writing $\boldsymbol{C}_{\rm HARM}$ in terms of $\boldsymbol{C}_{\rm IGM}$ \[Back to [top](#toc)\]
$$\label{variables_needed_by_harm__conservs}$$
We will now relate $\boldsymbol{C}_{\rm HARM}$ to $\boldsymbol{C}_{\rm IGM}$. As previously mentioned, we have
$$
\begin{align}
\boldsymbol{C}_{\rm IGM} &= \left\{\rho_{\star},\tilde{\tau},\tilde{S}_{i},\tilde{B}^{i}\right\}\ ,\\
\boldsymbol{C}_{\rm HARM} &= \left\{\sqrt{-g}\rho_{b}u^{0},\sqrt{-g}\left(T^{0}_{\ 0}+\rho_{b}u^{0}\right),\sqrt{-g}T^{0}_{\ i},\sqrt{-g}B^{i}_{\rm HARM}\right\} \equiv \left\{\rho_{\rm HARM},\tilde{\tau}_{\rm HARM},\tilde{S}_{i}^{\rm HARM},\tilde{B}^{i}_{\rm HARM}\right\}\ .
\end{align}
$$
The following relations immediately hold
$$
\boxed{
\begin{align}
\rho^{\rm HARM}_{\star} &= \rho_{\star}\\
\tilde{S}_{i}^{\rm HARM} &= \tilde{S}_{i}
\end{align}
}\ .
$$
As previously derived in [Step 3.a.ii](#variables_needed_by_harm__bi_harm), we then have
$$
\boxed{\tilde{B}^{i}_{\rm HARM} = \underbrace{\left(\alpha\sqrt{\gamma}\right)}_{\rm detg}\underbrace{\left(\frac{B^{i}}{\alpha\sqrt{4\pi}}\right)}_{\rm BiL\_over\_alpha\_sqrt\_fourpi}}\ .
$$
Finally, we must relate $\tilde{\tau}_{\rm HARM} = \sqrt{-g}\left(T^{0}_{\ 0}+\rho_{b}u^{0}\right)$ to the `IllinoisGRMHD` variables. First, consider the identity
$$
\begin{align}
S^{i} &= -\gamma^{i}_{\ \mu}n_{\nu}T^{\mu\nu}\\
&= -\left(\delta^{i}_{\mu} + n^{i}n_{\mu}\right)n_{\nu}T^{\mu\nu}\\
&= -\delta^{i}_{\mu}n_{\nu}T^{\mu\nu} - n^{i}n_{\mu}n_{\nu}T^{\mu\nu}\\
&= -n_{0}T^{i0} - n^{i}n_{0}n_{0}T^{00}\\
&= \alpha T^{i0} - \left(-\frac{\beta^{i}}{\alpha}\right)\alpha^{2}T^{00}\\
\implies S^{i}&= \alpha\left[\beta^{i}T^{00} + T^{i0}\right]\ .
\end{align}
$$
Consider, also, the identity
$$
\tilde{\tau} = \alpha^{2}\sqrt{\gamma}T^{00} - \rho_{\star} \implies T^{00} = \frac{\left(\tilde\tau + \rho_{\star}\right)}{\alpha^{2}\sqrt{\gamma}}\ .
$$
Then
$$
\begin{align}
T^{0}_{0} &= g_{0\mu}T^{0\mu}\\
&= g_{00}T^{00} + g_{0i}T^{0i}\\
&= \left(-\alpha^{2}+\beta_{\ell}\beta^{\ell}\right)T^{00} + \beta_{i}T^{0i}\\
&= -\alpha^{2}T^{00} + \beta_{i}\left[\beta^{i}T^{00} + T^{i0}\right]\\
&= -\alpha^{2}\left[\frac{\left(\tilde\tau + \rho_{\star}\right)}{\alpha^{2}\sqrt{\gamma}}\right] + \frac{\beta^{i}S_{i}}{\alpha}\\
\implies T^{0}_{0} &= -\frac{\tilde\tau+\rho_{\star}}{\sqrt{\gamma}} + \frac{\beta^{i}S_{i}}{\alpha}\ .
\end{align}
$$
Finally, we can compute
$$
\begin{align}
\tilde\tau_{\rm HARM} &= \sqrt{-g}\left(T^{0}_{0} + \rho_{b}u^{0}\right)\\
&= \alpha\sqrt{\gamma}T^{0}_{0} + \alpha\sqrt{\gamma}\rho_{b}u^{0}\\
&= \alpha\sqrt{\gamma}\left[-\frac{\tilde\tau+\rho_{\star}}{\sqrt{\gamma}} + \frac{\beta^{i}S_{i}}{\alpha}\right] + \rho_{\star}\\
&= -\alpha\tilde{\tau} -\alpha\rho_{\star} + \beta^{i}\left(\sqrt{\gamma}S_{i}\right) + \rho_{\star}\\
\implies &\boxed{\tilde\tau_{\rm HARM} = -\alpha\tilde{\tau} - \left(\alpha-1\right)\rho_{\star} + \beta^{i}\tilde{S}_{i}}\ .
\end{align}
$$
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
// Fill the array of conserved variables according to the wishes of Utoprim_2d.
U[RHO] = CONSERVS[RHOSTAR];
U[UU] = -CONSERVS[TAUENERGY]*METRIC_LAP_PSI4[LAPSE] - (METRIC_LAP_PSI4[LAPSE]-1.0)*CONSERVS[RHOSTAR] +
METRIC[SHIFTX]*CONSERVS[STILDEX] + METRIC[SHIFTY]*CONSERVS[STILDEY] + METRIC[SHIFTZ]*CONSERVS[STILDEZ] ; // note the minus sign on tau
U[UTCON1] = CONSERVS[STILDEX];
U[UTCON2] = CONSERVS[STILDEY];
U[UTCON3] = CONSERVS[STILDEZ];
U[BCON1] = detg*BxL_over_alpha_sqrt_fourpi;
U[BCON2] = detg*ByL_over_alpha_sqrt_fourpi;
U[BCON3] = detg*BzL_over_alpha_sqrt_fourpi;
```
<a id='variables_needed_by_harm__prims'></a>
### Step 3.a.vii: Writing $\boldsymbol{P}_{\rm HARM}$ in terms of $\boldsymbol{P}_{\rm IGM}$ \[Back to [top](#toc)\]
$$\label{variables_needed_by_harm__prims}$$
Keep in mind that for the primitive variables we are not interested in providing exact relations. Instead, we provide an educated guess in the hope that the Newton-Raphson method then converges to the correct values. With that in mind, the primitive variables are initialized to:
$$
\boxed{
\begin{align}
\rho_{\rm HARM} &= \rho_{b}\\
u &= \frac{P}{\Gamma_{\rm poly} - 1}\\
\tilde{u}^{i} &= u^{0}\left(v^{i}+\beta^{i}\right)\\
B^{i}_{\rm HARM} &= \frac{B^{i}}{\alpha\sqrt{4\pi}}
\end{align}\ ,
}
$$
where $\Gamma_{\rm poly}$ stands for the *local* polytropic $\Gamma$-factor.
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
CCTK_REAL uL = P_oldL/(Gamma_ppoly_tab - 1.0);
CCTK_REAL utxL = u0L*(vxL + METRIC[SHIFTX]);
CCTK_REAL utyL = u0L*(vyL + METRIC[SHIFTY]);
CCTK_REAL utzL = u0L*(vzL + METRIC[SHIFTZ]);
prim[RHO] = rho_b_oldL;
prim[UU] = uL;
prim[UTCON1] = utxL;
prim[UTCON2] = utyL;
prim[UTCON3] = utzL;
prim[BCON1] = BxL_over_alpha_sqrt_fourpi;
prim[BCON2] = ByL_over_alpha_sqrt_fourpi;
prim[BCON3] = BzL_over_alpha_sqrt_fourpi;
```
<a id='calling_harm_conservs_to_prims_solver'></a>
## Step 3.b: Calling the `HARM` conservative-to-primitive solver \[Back to [top](#toc)\]
$$\label{calling_harm_conservs_to_prims_solver}$$
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
/*************************************************************/
// CALL HARM PRIMITIVES SOLVER:
check = Utoprim_2d(eos, U, g4dn, g4up, detg, prim,stats.n_iter);
// Note that we have modified this solver, so that nearly 100%
// of the time it yields either a good root, or a root with
// negative epsilon (i.e., pressure).
/*************************************************************/
```
<a id='font_fix'></a>
## Step 3.c: Applying the Font *et al.* fix, if the inversion fails \[Back to [top](#toc)\]
$$\label{font_fix}$$
If the conservative-to-primitive solver fails to converge, we apply the procedure suggested by [Font *et al.* (1998)](https://arxiv.org/abs/gr-qc/9811015). The algorithm can be summarized as the requirement that
$$
P=P_{\rm cold} = \kappa \rho^{\Gamma_{\rm cold}}_{b}\ ,
$$
and then recomputing the velocities $u_{i}$. We will describe this procedure in detail in [Step 4](#font_fix_gamma_law__c) below.
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
// Use the new Font fix subroutine
int font_fix_applied=0;
if(check!=0) {
font_fix_applied=1;
CCTK_REAL u_xl=1e100, u_yl=1e100, u_zl=1e100; // Set to insane values to ensure they are overwritten.
/************************
* New Font fix routine *
************************/
check = font_fix__hybrid_EOS(u_xl,u_yl,u_zl, CONSERVS,PRIMS,METRIC_PHYS,METRIC_LAP_PSI4, eos);
```
<a id='compute_utconi'></a>
## Step 3.d: Compute $\tilde{u}^{i}$ \[Back to [top](#toc)\]
$$\label{compute_utconi}$$
Now we evaluate
$$
\tilde{u}^{i} = \gamma^{ij}u_{i} \implies
\boxed{
\left\{
\begin{align}
\tilde{u}^{x} &= \gamma^{xx}u_{x} + \gamma^{xy}u_{y} + \gamma^{xz}u_{z}\\
\tilde{u}^{y} &= \gamma^{yx}u_{x} + \gamma^{yy}u_{y} + \gamma^{yz}u_{z}\\
\tilde{u}^{z} &= \gamma^{zx}u_{x} + \gamma^{zy}u_{y} + \gamma^{zz}u_{z}
\end{align}
\right.
}
$$
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
//Translate to HARM primitive now:
prim[UTCON1] = METRIC_PHYS[GUPXX]*u_xl + METRIC_PHYS[GUPXY]*u_yl + METRIC_PHYS[GUPXZ]*u_zl;
prim[UTCON2] = METRIC_PHYS[GUPXY]*u_xl + METRIC_PHYS[GUPYY]*u_yl + METRIC_PHYS[GUPYZ]*u_zl;
prim[UTCON3] = METRIC_PHYS[GUPXZ]*u_xl + METRIC_PHYS[GUPYZ]*u_yl + METRIC_PHYS[GUPZZ]*u_zl;
if (check==1) {
CCTK_VInfo(CCTK_THORNSTRING,"Font fix failed!");
CCTK_VInfo(CCTK_THORNSTRING,"i,j,k = %d %d %d, stats.failure_checker = %d x,y,z = %e %e %e , index=%d st_i = %e %e %e, rhostar = %e, Bi = %e %e %e, gij = %e %e %e %e %e %e, Psi6 = %e",i,j,k,stats.failure_checker,X[index],Y[index],Z[index],index,mhd_st_x_orig,mhd_st_y_orig,mhd_st_z_orig,rho_star_orig,PRIMS[BX_CENTER],PRIMS[BY_CENTER],PRIMS[BZ_CENTER],METRIC_PHYS[GXX],METRIC_PHYS[GXY],METRIC_PHYS[GXZ],METRIC_PHYS[GYY],METRIC_PHYS[GYZ],METRIC_PHYS[GZZ],METRIC_LAP_PSI4[PSI6]);
}
}
stats.failure_checker+=font_fix_applied*10000;
stats.font_fixed=font_fix_applied;
/*************************************************************/
```
<a id='limiting_velocities'></a>
## Step 3.e: Limiting velocities \[Back to [top](#toc)\]
$$\label{limiting_velocities}$$
The procedure we follow here is similar to the one discussed in the [inlined functions tutorial module](Tutorial-IllinoisGRMHD__inlined_functions.ipynb).
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
if(check==0) {
//Now that we have found some solution, we first limit velocity:
//FIXME: Probably want to use exactly the same velocity limiter function here as in mhdflux.C
CCTK_REAL utx_new = prim[UTCON1];
CCTK_REAL uty_new = prim[UTCON2];
CCTK_REAL utz_new = prim[UTCON3];
//Velocity limiter:
CCTK_REAL gijuiuj = METRIC_PHYS[GXX]*SQR(utx_new ) +
2.0*METRIC_PHYS[GXY]*utx_new*uty_new + 2.0*METRIC_PHYS[GXZ]*utx_new*utz_new +
METRIC_PHYS[GYY]*SQR(uty_new) + 2.0*METRIC_PHYS[GYZ]*uty_new*utz_new +
METRIC_PHYS[GZZ]*SQR(utz_new);
CCTK_REAL au0m1 = gijuiuj/( 1.0+sqrt(1.0+gijuiuj) );
u0L = (au0m1+1.0)*METRIC_LAP_PSI4[LAPSEINV];
// *** Limit velocity
stats.vel_limited=0;
if (au0m1 > 0.9999999*(GAMMA_SPEED_LIMIT-1.0)) {
CCTK_REAL fac = sqrt((SQR(GAMMA_SPEED_LIMIT)-1.0)/(SQR(1.0+au0m1) - 1.0));
utx_new *= fac;
uty_new *= fac;
utz_new *= fac;
gijuiuj = gijuiuj * SQR(fac);
au0m1 = gijuiuj/( 1.0+sqrt(1.0+gijuiuj) );
// Reset rho_b and u0
u0L = (au0m1+1.0)*METRIC_LAP_PSI4[LAPSEINV];
prim[RHO] = rho_star_orig/(METRIC_LAP_PSI4[LAPSE]*u0L*METRIC_LAP_PSI4[PSI6]);
stats.vel_limited=1;
stats.failure_checker+=1000;
} //Finished limiting velocity
```
<a id='primitives'></a>
## Step 3.f: Setting the primitives \[Back to [top](#toc)\]
$$\label{primitives}$$
Finally, we update the primitives,
$$
\boxed{
\begin{align}
\rho_{b} &= \rho\\
P &= \left(\Gamma - 1\right)u = P_{\rm cold}\\
v^{i} &= \frac{\tilde{u}^{i}}{u^{0}} - \beta^{i}
\end{align}
}
$$
```
%%writefile -a $outfile_path__harm_primitives_lowlevel__C
//The Font fix only sets the velocities. Here we set the pressure & density HARM primitives.
if(font_fix_applied==1) {
prim[RHO] = rho_star_orig/(METRIC_LAP_PSI4[LAPSE]*u0L*METRIC_LAP_PSI4[PSI6]);
//Next set P = P_cold:
CCTK_REAL P_cold;
/**********************************
* Piecewise Polytropic EOS Patch *
* Finding Gamma_ppoly_tab and K_ppoly_tab *
**********************************/
/* Here we use our newly implemented
* find_polytropic_K_and_Gamma() function
* to determine the relevant polytropic
* Gamma and K parameters to be used
* within this function.
*/
int polytropic_index = find_polytropic_K_and_Gamma_index(eos,prim[RHO]);
K_ppoly_tab = eos.K_ppoly_tab[polytropic_index];
Gamma_ppoly_tab = eos.Gamma_ppoly_tab[polytropic_index];
// After that, we compute P_cold
P_cold = K_ppoly_tab*pow(prim[RHO],Gamma_ppoly_tab);
prim[UU] = P_cold/(Gamma_ppoly_tab-1.0);
} //Finished setting remaining primitives if there was a Font fix.
/* Set rho_b */
PRIMS[RHOB] = prim[RHO];
/***************
* PPEOS Patch *
* Hybrid EOS *
***************
*/
/* We now compute the pressure as a function
* of rhob, P_cold, eps_cold, and u = rhob*eps,
* using the function pressure_rho0_u(), which
* implements the equation:
* .-------------------------------------------------------------.
* | p(rho_b,u) = P_cold + (Gamma_th - 1)*(u - rho_b * eps_cold) |
* .-------------------------------------------------------------.
*/
PRIMS[PRESSURE] = pressure_rho0_u(eos, prim[RHO],prim[UU]);
/* Already set u0L. */
PRIMS[VX] = utx_new/u0L - METRIC[SHIFTX];
PRIMS[VY] = uty_new/u0L - METRIC[SHIFTY];
PRIMS[VZ] = utz_new/u0L - METRIC[SHIFTZ];
return 0;
} else {
//If we didn't find a root, then try again with a different guess.
}
}
CCTK_VInfo(CCTK_THORNSTRING,"Couldn't find root from: %e %e %e %e %e, rhob approx=%e, rho_b_atm=%e, Bx=%e, By=%e, Bz=%e, gij_phys=%e %e %e %e %e %e, alpha=%e",
tau_orig,rho_star_orig,mhd_st_x_orig,mhd_st_y_orig,mhd_st_z_orig,rho_star_orig/METRIC_LAP_PSI4[PSI6],rho_b_atm,PRIMS[BX_CENTER],PRIMS[BY_CENTER],PRIMS[BZ_CENTER],METRIC_PHYS[GXX],METRIC_PHYS[GXY],METRIC_PHYS[GXZ],METRIC_PHYS[GYY],METRIC_PHYS[GYZ],METRIC_PHYS[GZZ],METRIC_LAP_PSI4[LAPSE]);
return 1;
}
//#include "harm_u2p_util.c"
#include "harm_utoprim_2d.c"
#include "eigen.C"
#include "font_fix_hybrid_EOS.C"
```
<a id='font_fix_hybrid_eos'></a>
# Step 4: `font_fix_hybrid_EOS.C` \[Back to [top](#toc)\]
$$\label{font_fix_hybrid_eos}$$
### Polytropic EOSs
The [Font *et al.*](https://arxiv.org/pdf/gr-qc/9811015.pdf) algorithm (henceforth Font Fix algorithm) can be summarized as follows. Font fixes occur in the atmospheric region, so we start by assuming that $P$ is given only by its cold part, i.e.
$$
P = P_{\rm cold} = K_{\rm atm} \rho_{b}^{\Gamma_{\rm atm}}\ ,
$$
where $K_{\rm atm}$ and $\Gamma_{\rm atm}$ are the constants used in the atmosphere. Then, the specific internal energy is given by
$$
\begin{align}
\epsilon &= \epsilon_{\rm cold}\\
&= \int d\rho \frac{P_{\rm cold}}{\rho^{2}}\\
&= K_{\rm atm}\int d\rho \rho^{\Gamma_{\rm atm}-2}\\
&= \frac{K_{\rm atm}\rho^{\Gamma_{\rm atm}-1}}{\Gamma_{\rm atm}-1}\ .
\end{align}
$$
Having computed $P$ and $\epsilon$, we can compute the enthalpy, $h$, giving
$$
\begin{align}
h &= 1 + \epsilon + \frac{P}{\rho}\\
&= 1 + \frac{K_{\rm atm}\rho^{\Gamma_{\rm atm}-1}}{\Gamma_{\rm atm}-1} + K_{\rm atm}\rho^{\Gamma_{\rm atm} - 1}\\
&= 1 + \left(\frac{1}{\Gamma_{\rm atm}-1}+1\right)K_{\rm atm}\rho^{\Gamma_{\rm atm} - 1}\\
\implies &\boxed{ h = 1 + \left(\frac{\Gamma_{\rm atm}}{\Gamma_{\rm atm}-1}\right)K_{\rm atm} \rho^{\Gamma_{\rm atm}-1} }\ .
\end{align}
$$
We then run an iterative process that updates $\rho$ in a consistent way, based on the value of $h$. We now describe this process and its implementation.
<a id='font_fix_hybrid_eos__basic_quantities'></a>
## Step 4.a: Computing the basic quantities needed by the algorithm \[Back to [top](#toc)\]
$$\label{font_fix_hybrid_eos__basic_quantities}$$
We start by computing all basic quantities needed by the Font Fix algorithm:
$$
\boxed{
\begin{align}
\bar{B}^{i} &= \frac{B^i}{\sqrt{4\pi}}\\
\bar{B}_{i} &= \gamma_{ij}\bar{B}^{j}\\
\bar{B}^{2} &= \bar{B}_{i}\bar{B}^{i}\\
\bar{B} &= \sqrt{\bar{B}^{2}}\\
\bar{B}\cdot\tilde{S} &= \bar{B}^{i}\tilde{S}_{i}\\
\left(\bar{B}\cdot\tilde{S}\right)^{2} & \\
\hat{\bar{B}}\cdot\tilde{S} &= \hat{\bar{B}}^{i}\tilde{S}_{i} \equiv \left(\frac{\bar{B}^{i}}{\bar{B}}\right)\tilde{S}_{i}\\
\tilde{S}\cdot\tilde{S} &= \gamma^{ij}\tilde{S}_{i}\tilde{S}_{j}
\end{align}
}\ .
$$
```
%%writefile $outfile_path__font_fix_hybrid_EOS__C
/**********************************
* Piecewise Polytropic EOS Patch *
* Font fix: function call *
**********************************/
inline int font_fix__hybrid_EOS(CCTK_REAL &u_x, CCTK_REAL &u_y, CCTK_REAL &u_z,CCTK_REAL *CONSERVS,CCTK_REAL *PRIMS,CCTK_REAL *METRIC_PHYS,CCTK_REAL *METRIC_LAP_PSI4, eos_struct eos) {
CCTK_REAL Bxbar = PRIMS[BX_CENTER]*ONE_OVER_SQRT_4PI;
CCTK_REAL Bybar = PRIMS[BY_CENTER]*ONE_OVER_SQRT_4PI;
CCTK_REAL Bzbar = PRIMS[BZ_CENTER]*ONE_OVER_SQRT_4PI;
CCTK_REAL Bbar_x = METRIC_PHYS[GXX]*Bxbar + METRIC_PHYS[GXY]*Bybar + METRIC_PHYS[GXZ]*Bzbar;
CCTK_REAL Bbar_y = METRIC_PHYS[GXY]*Bxbar + METRIC_PHYS[GYY]*Bybar + METRIC_PHYS[GYZ]*Bzbar;
CCTK_REAL Bbar_z = METRIC_PHYS[GXZ]*Bxbar + METRIC_PHYS[GYZ]*Bybar + METRIC_PHYS[GZZ]*Bzbar;
CCTK_REAL B2bar = Bxbar*Bbar_x + Bybar*Bbar_y + Bzbar*Bbar_z;
CCTK_REAL Bbar = sqrt(B2bar);
CCTK_REAL check_B_small = fabs(Bxbar)+fabs(Bybar)+fabs(Bzbar);
if (check_B_small>0 && check_B_small<1.e-150) {
// need to compute B2bar specially to prevent floating-point underflow
CCTK_REAL Bmax = fabs(Bxbar);
if (Bmax < fabs(Bybar)) Bmax=fabs(Bybar);
if (Bmax < fabs(Bzbar)) Bmax=fabs(Bzbar);
CCTK_REAL Bxtmp=Bxbar/Bmax, Bytemp=Bybar/Bmax, Bztemp=Bzbar/Bmax;
CCTK_REAL B_xtemp=Bbar_x/Bmax, B_ytemp=Bbar_y/Bmax, B_ztemp=Bbar_z/Bmax;
Bbar = sqrt(Bxtmp*B_xtemp + Bytemp*B_ytemp + Bztemp*B_ztemp)*Bmax;
}
CCTK_REAL BbardotS = Bxbar*CONSERVS[STILDEX] + Bybar*CONSERVS[STILDEY] + Bzbar*CONSERVS[STILDEZ];
CCTK_REAL BbardotS2 = BbardotS*BbardotS;
CCTK_REAL hatBbardotS = BbardotS/Bbar;
if (Bbar<1.e-300) hatBbardotS = 0.0;
CCTK_REAL Psim6 = 1.0/METRIC_LAP_PSI4[PSI6];
// Limit hatBbardotS
//CCTK_REAL max_gammav = 100.0;
//CCTK_REAL rhob_max = CONSERVS[RHOSTAR]*Psim6;
//CCTK_REAL hmax = 1.0 + gam_gamm1_kpoly*pow(rhob_max,gam1);
//CCTK_REAL abs_hatBbardotS_max = sqrt(SQR(max_gammav)-1.0)*CONSERVS[RHOSTAR]*hmax;
//if (fabs(hatBbardotS) > abs_hatBbardotS_max) {
// CCTK_REAL fac_reduce = abs_hatBbardotS_max/fabs(hatBbardotS);
// CCTK_REAL hatBbardotS_max = hatBbardotS*fac_reduce;
// CCTK_REAL Bbar_inv = 1.0/Bbar;
// CCTK_REAL hat_Bbar_x = Bbar_x*Bbar_inv;
// CCTK_REAL hat_Bbar_y = Bbar_y*Bbar_inv;
// CCTK_REAL hat_Bbar_z = Bbar_z*Bbar_inv;
// CCTK_REAL sub_fact = hatBbardotS_max - hatBbardotS;
// CONSERVS[STILDEX] += sub_fact*hat_Bbar_x;
// CONSERVS[STILDEY] += sub_fact*hat_Bbar_y;
// CONSERVS[STILDEZ] += sub_fact*hat_Bbar_z;
// hatBbardotS = hatBbardotS_max;
// BbardotS *= fac_reduce;
// BbardotS2 = BbardotS*BbardotS;
//}
CCTK_REAL sdots = METRIC_PHYS[GUPXX]*SQR(CONSERVS[STILDEX]) + METRIC_PHYS[GUPYY]*SQR(CONSERVS[STILDEY]) + METRIC_PHYS[GUPZZ]*SQR(CONSERVS[STILDEZ])
+ 2.0*( METRIC_PHYS[GUPXY]*CONSERVS[STILDEX]*CONSERVS[STILDEY] + METRIC_PHYS[GUPXZ]*CONSERVS[STILDEX]*CONSERVS[STILDEZ]
+ METRIC_PHYS[GUPYZ]*CONSERVS[STILDEY]*CONSERVS[STILDEZ]);
```
<a id='font_fix_hybrid_eos__sdots'></a>
## Step 4.b: Basic check: looking at $\tilde{S}^{2}$ \[Back to [top](#toc)\]
$$\label{font_fix_hybrid_eos__sdots}$$
We start by looking at the dot product $\tilde{S}^{2}$. Recall that
$$
\tilde{S}_{i} = \left(\rho_{\star}h + \alpha\sqrt{\gamma}\, u^{0}b^{2}\right)u_{i}-\alpha\sqrt{\gamma}\, b^{0}b_{i}\ .
$$
If $\tilde{S}^{2} = 0$, then we must be in a region where $u_{i} = 0 = b_{i}$. In this case, we return
$$
\begin{align}
\rho_{b} &= \psi^{-6}\rho_{\star}\ ,\\
u^{i} &= 0\ ,
\end{align}
$$
and terminate the function call.
```
%%writefile -a $outfile_path__font_fix_hybrid_EOS__C
CCTK_REAL rhob;
if (sdots<1.e-300) {
rhob = CONSERVS[RHOSTAR]*Psim6;
u_x=0.0; u_y=0.0; u_z=0.0;
return 0;
}
/* This test has some problem.
if (fabs(BbardotS2 - sdots*B2bar) > 1e-8) {
CCTK_VInfo(CCTK_THORNSTRING,"(Bbar dot S)^2, Bbar^2 * sdotS, %e %e",SQR(BbardotS),sdots*B2bar);
CCTK_VInfo(CCTK_THORNSTRING,"Cauchy-Schwartz inequality is violated!");
}
*/
```
<a id='font_fix_hybrid_eos__initial_guesses'></a>
## Step 4.c: Initial guesses for $W$, $S_{{\rm fluid}}^{2}$, and $\rho$ \[Back to [top](#toc)\]
$$\label{font_fix_hybrid_eos__initial_guesses}$$
If $\tilde{S}^{2} \neq 0$, then we move on to the iterative procedure previously mentioned. We start by setting the initial data based on eqs. (A52), (A53), and (A59) found in [Appendix A of Etienne *et al.* (2012)](https://arxiv.org/pdf/1112.0568.pdf):
$$
\boxed{
\begin{align}
W_{0} &= \psi^{-6}\sqrt{\left(\hat{\bar{B}}\cdot\tilde{S}\right)^{2} + \rho_{\star}^{2}}\\
S_{{\rm fluid},0}^{2} &=\frac{W_{0}^{2}\left(\tilde{S}\cdot\tilde{S}\right)+\left(\bar{B}\cdot\tilde{S}\right)^{2}\left(\bar{B}^{2} + 2W_{0}\right)}{\left(W_{0} + \bar{B}^{2}\right)^{2}}\\
\rho_{0} &= \frac{\psi^{-6}\rho_{\star}}{\sqrt{1+\frac{S_{{\rm fluid},0}^{2}}{\rho_{\star}^{2}}}}
\end{align}
}\ .
$$
```
%%writefile -a $outfile_path__font_fix_hybrid_EOS__C
// Initial guess for W, S_fluid and rhob
CCTK_REAL W0 = sqrt( SQR(hatBbardotS) + SQR(CONSERVS[RHOSTAR]) ) * Psim6;
CCTK_REAL Sf20 = (SQR(W0)*sdots + BbardotS2*(B2bar + 2.0*W0))/SQR(W0+B2bar);
CCTK_REAL rhob0 = CONSERVS[RHOSTAR]*Psim6/sqrt(1.0+Sf20/SQR(CONSERVS[RHOSTAR]));
```
<a id='font_fix_hybrid_eos__main_loop'></a>
## Step 4.d: The main loop \[Back to [top](#toc)\]
$$\label{font_fix_hybrid_eos__main_loop}$$
We now perform the following iterative process, which is again described in [Appendix A of Etienne *et al.* (2012)](https://arxiv.org/pdf/1112.0568.pdf). We refer the reader to eqs. (A60), (A61), and (A62).
1. Store the previously computed values of $W_{n}$, $S_{{\rm fluid},n}^{2}$, and $\rho_{n}$
2. Compute $h = 1 + \epsilon_{\rm cold} + P_{\rm cold}/\rho_{n}$
3. Set
$$
\boxed{\rho_{n+1} = \psi^{-6}\rho_{\star}\left(1 + \frac{S_{{\rm fluid},n}^{2}}{\left(\rho_{\star} h_{n}\right)^{2}}\right)^{-1/2}}
$$
4. For a given value of $n$, perform steps 1 (for $\rho$), 2, and 3 until $\left|\rho_{n+1}-\rho_{n}\right| < \rho_{n+1}\epsilon$, where $\epsilon$ is a user-given tolerance
5. After convergence is obtained, update:
$$
\boxed{
\begin{align}
h_{n+1} &= 1 + \epsilon_{\rm cold} + P_{\rm cold}/\rho_{n+1}\\
W_{n+1} &= \psi^{-6}\sqrt{S_{{\rm fluid},n}^{2} + \rho_{\star}^{2} h_{n+1}^{2}}\\
S_{{\rm fluid},n+1}^{2} &= \frac{W^{2}_{n+1}\left(\tilde{S}\cdot\tilde{S}\right) + \left(\bar{B}\cdot\tilde{S}\right)^{2}\left(\bar{B}^{2} + 2W_{n+1}\right)}{\left(W_{n+1} + \bar{B}^{2}\right)^{2}}
\end{align}
}\ .
$$
6. Repeat steps 1 through 5 until both $\left|W_{n+1}-W_{n}\right| < W_{n+1}\epsilon$ *and* $\left|S^{2}_{{\rm fluid},n+1}-S^{2}_{{\rm fluid},n}\right| < S^{2}_{{\rm fluid},n+1}\epsilon$ hold, or until we reach the maximum number of iterations
7. If font fix fails, increase the tolerance and try again.
This is done using the function `font_fix__rhob_loop()`, which is documented in the [inlined functions tutorial notebook](Tutorial-IllinoisGRMHD__inlined_functions.ipynb).
```
%%writefile -a $outfile_path__font_fix_hybrid_EOS__C
//****************************************************************
// FONT FIX
// Impose Font fix when HARM primitives solver fails to find
// acceptable set of primitives.
//****************************************************************
/* Set the maximum number of iterations */
int maxits = 500;
/* Set the allowed tolerance */
CCTK_REAL tol = 1.e-15;
/* Declare basic variables */
int font_fix_status;
/**********************
* FONT FIX MAIN LOOP *
**********************
* Perform the font fix routine until convergence
* is obtained and the algorithm returns with no
* error. Every time the Font fix fails, increase
* the tolerance by a factor of 10.
*/
int font_fix_attempts = 5;
CCTK_REAL font_fix_tol_factor = 10.0;
for(int n=0; n<font_fix_attempts; n++) {
tol *= pow(font_fix_tol_factor,n);
font_fix_status = font_fix__rhob_loop(maxits,tol, W0,Sf20,Psim6,sdots,BbardotS2,B2bar, CONSERVS,eos, rhob0,rhob);
rhob0 = rhob;
if(font_fix_status==0) break;
}
```
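The iteration above can also be sketched in plain Python as a cross-check. Everything here is an illustrative assumption: a simple $\Gamma$-law cold EOS stands in for IllinoisGRMHD's hybrid EOS, the constants `K` and `Gamma` are made up, and the helper `font_fix_rhob_loop()` is a hypothetical stand-in, not the thorn's `font_fix__rhob_loop()`.

```python
import numpy as np

# Illustrative Gamma-law cold EOS (assumed; NOT IllinoisGRMHD's hybrid EOS).
K, Gamma = 1.0, 2.0

def h_cold(rho):
    """h = 1 + eps_cold + P_cold/rho for a simple Gamma-law cold EOS."""
    P = K * rho**Gamma
    eps = P / ((Gamma - 1.0) * rho)
    return 1.0 + eps + P / rho

def font_fix_rhob_loop(rho_star, psim6, sdots, BbardotS2, B2bar,
                       tol=1e-15, maxits=500):
    """Fixed-point scheme of eqs. (A52)-(A62); returns (converged, rho)."""
    hatBbardotS2 = BbardotS2 / B2bar if B2bar > 0.0 else 0.0
    W = psim6 * np.sqrt(hatBbardotS2 + rho_star**2)                   # W_0
    Sf2 = (W**2*sdots + BbardotS2*(B2bar + 2.0*W)) / (W + B2bar)**2   # S_fluid,0^2
    rho = psim6 * rho_star / np.sqrt(1.0 + Sf2 / rho_star**2)         # rho_0
    for _ in range(maxits):
        W_old, Sf2_old = W, Sf2
        for _ in range(maxits):          # inner rho iteration (steps 1-4)
            rho_old = rho
            rho = psim6 * rho_star / np.sqrt(1.0 + Sf2 / (rho_star * h_cold(rho))**2)
            if abs(rho - rho_old) < tol * rho:
                break
        h = h_cold(rho)                  # step 5: update h, W, S_fluid^2
        W = psim6 * np.sqrt(Sf2 + (rho_star * h)**2)
        Sf2 = (W**2*sdots + BbardotS2*(B2bar + 2.0*W)) / (W + B2bar)**2
        if (abs(W - W_old) < tol * W and
                abs(Sf2 - Sf2_old) < tol * max(Sf2, 1e-300)):
            return True, rho
    return False, rho
```

With the magnetic terms switched off (`B2bar = BbardotS2 = 0`), `Sf2` stays fixed and the scheme reduces to the inner $\rho$ fixed point, which converges in a handful of iterations.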
<a id='font_fix_hybrid_eos__outputs'></a>
## Step 4.e: Output $\rho_{b}$ and $u_{i}$ \[Back to [top](#toc)\]
$$\label{font_fix_hybrid_eos__outputs}$$
Finally, we return $\rho_{b}$ and $u_{i}$. In the relations below, the subscript $N$ denotes the final iterate obtained by the iterative procedure above. The quantities evaluated here are
$$
\boxed{
\begin{align}
\rho_{b} &= \rho_{N}\\
\gamma_{v} &= \frac{\psi^{-6}\rho_{\star}}{\rho_{b}}\\
f_{1} &= \frac{\psi^{6}\left(\bar{B}\cdot\tilde{S}\right)}{\gamma_{v}\rho_{\star} h}\\
f_{2} &= \left(\rho_{\star}h + \psi^{6}\frac{\bar{B}^{2}}{\gamma_{v}}\right)^{-1}\\
u_{i} &= f_{2}\left(\tilde{S}_{i} + f_{1}\bar{B}_{i}\right)
\end{align}
}\ .
$$
```
%%writefile -a $outfile_path__font_fix_hybrid_EOS__C
//**************************************************************************************************************
/* Font fix works! */
/* First compute P_cold, eps_cold, then h = h_cold */
CCTK_REAL P_cold, eps_cold;
compute_P_cold__eps_cold(eos,rhob, P_cold,eps_cold);
CCTK_REAL h = 1.0 + eps_cold + P_cold/rhob;
/* Then compute gamma_v using equation (A19) in
* Etienne et al. (2011) [https://arxiv.org/pdf/1112.0568.pdf]
* .-----------------------------------------.
* | gamma_v = psi^{-6} * (rho_star / rho_b) |
* .-----------------------------------------.
*/
CCTK_REAL gammav = CONSERVS[RHOSTAR]*Psim6/rhob;
/* Finally, compute u_{i} */
CCTK_REAL rhosh = CONSERVS[RHOSTAR]*h;
CCTK_REAL fac1 = METRIC_LAP_PSI4[PSI6]*BbardotS/(gammav*rhosh);
CCTK_REAL fac2 = 1.0/(rhosh + METRIC_LAP_PSI4[PSI6]*B2bar/gammav);
u_x = fac2*(CONSERVS[STILDEX] + fac1*Bbar_x);
u_y = fac2*(CONSERVS[STILDEY] + fac1*Bbar_y);
u_z = fac2*(CONSERVS[STILDEZ] + fac1*Bbar_z);
return 0;
}
```
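As a toy numeric check, the boxed relations above can be transcribed directly into Python. This is a sketch under simplifying assumptions: flat dot products stand in for the metric contractions performed by the actual code, and every input value is invented for illustration.

```python
import numpy as np

def font_fix_outputs(rho_b, rho_star, h, psi6, Bbar, Stilde):
    """u_i from the boxed relations above. Toy sketch: flat dot
    products replace the metric contractions of the real code, and
    all inputs are assumed/illustrative values."""
    Bbar = np.asarray(Bbar, dtype=float)
    Stilde = np.asarray(Stilde, dtype=float)
    BbardotS = Bbar @ Stilde
    B2bar = Bbar @ Bbar
    gammav = rho_star / (psi6 * rho_b)             # psi^{-6} rho_star / rho_b
    f1 = psi6 * BbardotS / (gammav * rho_star * h)
    f2 = 1.0 / (rho_star * h + psi6 * B2bar / gammav)
    return f2 * (Stilde + f1 * Bbar)
```

For instance, with the magnetic field set to zero the relations collapse to $u_{i} = \tilde{S}_{i}/(\rho_{\star}h)$, which the sketch reproduces.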
<a id='harm_primitives_headers'></a>
# Step 5: `harm_primitives_headers.h` \[Back to [top](#toc)\]
$$\label{harm_primitives_headers}$$
```
%%writefile $outfile_path__harm_primitives_headers__h
/***********************************************************************************
Copyright 2006 Charles F. Gammie, Jonathan C. McKinney, Scott C. Noble,
Gabor Toth, and Luca Del Zanna
HARM version 1.0 (released May 1, 2006)
This file is part of HARM. HARM is a program that solves hyperbolic
partial differential equations in conservative form using high-resolution
shock-capturing techniques. This version of HARM has been configured to
solve the relativistic magnetohydrodynamic equations of motion on a
stationary black hole spacetime in Kerr-Schild coordinates to evolve
an accretion disk model.
You are morally obligated to cite the following two papers in his/her
scientific literature that results from use of any part of HARM:
[1] Gammie, C. F., McKinney, J. C., \& Toth, G.\ 2003,
Astrophysical Journal, 589, 444.
[2] Noble, S. C., Gammie, C. F., McKinney, J. C., \& Del Zanna, L. \ 2006,
Astrophysical Journal, 641, 626.
Further, we strongly encourage you to obtain the latest version of
HARM directly from our distribution website:
http://rainman.astro.uiuc.edu/codelib/
HARM is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
HARM is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with HARM; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
***********************************************************************************/
#ifndef HARM_PRIMITIVES_HEADERS_H_
#define HARM_PRIMITIVES_HEADERS_H_
static const int NPR =8;
static const int NDIM=4;
/* Adiabatic index used for the state equation */
//#define GAMMA (2.0)
static const CCTK_REAL G_ISOTHERMAL = 1.0;
/* use K(s)=K(r)=const. (G_ATM = GAMMA) of time or T = T(r) = const. of time (G_ATM = 1.) */
/*
#define USE_ISENTROPIC 1
#if( USE_ISENTROPIC )
#define G_ATM GAMMA
#else
#define G_ATM G_ISOTHERMAL
#endif
*/
static const int MAX_NEWT_ITER=30; /* Max. # of Newton-Raphson iterations for find_root_2D(); */
//#define MAX_NEWT_ITER 300 /* Max. # of Newton-Raphson iterations for find_root_2D(); */
static const CCTK_REAL NEWT_TOL =1.0e-10; /* Min. of tolerance allowed for Newton-Raphson iterations */
static const CCTK_REAL MIN_NEWT_TOL=1.0e-10; /* Max. of tolerance allowed for Newton-Raphson iterations */
static const int EXTRA_NEWT_ITER=0; /* ZACH SAYS: Original value = 2. But I don't think this parameter > 0 is warranted. Just slows the code for no reason, since our tolerances are fine. */
static const CCTK_REAL NEWT_TOL2 =1.0e-15; /* TOL of new 1D^*_{v^2} gnr2 method */
static const CCTK_REAL MIN_NEWT_TOL2=1.0e-10; /* TOL of new 1D^*_{v^2} gnr2 method */
static const CCTK_REAL W_TOO_BIG =1.e20; /* \gamma^2 (\rho_0 + u + p) is assumed
to always be smaller than this. This
is used to detect solver failures */
static const CCTK_REAL UTSQ_TOO_BIG =1.e20; /* \tilde{u}^2 is assumed to be smaller
than this. Used to detect solver
failures */
static const CCTK_REAL FAIL_VAL =1.e30; /* Generic value to which we set variables when a problem arises */
static const CCTK_REAL NUMEPSILON=(2.2204460492503131e-16);
/* some mnemonics */
/* for primitive variables */
static const int RHO =0;
static const int UU =1;
static const int UTCON1 =2;
static const int UTCON2 =3;
static const int UTCON3 =4;
static const int BCON1 =5;
static const int BCON2 =6;
static const int BCON3 =7;
/* for conserved variables */
static const int QCOV0 =1;
static const int QCOV1 =2;
static const int QCOV2 =3;
static const int QCOV3 =4;
/********************************************************************************************/
// Function prototype declarations:
int Utoprim_2d(eos_struct eos, CCTK_REAL U[NPR], CCTK_REAL gcov[NDIM][NDIM], CCTK_REAL gcon[NDIM][NDIM],
CCTK_REAL gdet, CCTK_REAL prim[NPR], long &n_iter);
inline int harm_primitives_gammalaw_lowlevel(const int index,const int i,const int j,const int k,CCTK_REAL *X,CCTK_REAL *Y,CCTK_REAL *Z,
CCTK_REAL *METRIC,CCTK_REAL *METRIC_PHYS,CCTK_REAL *METRIC_LAP_PSI4,
CCTK_REAL *CONSERVS,CCTK_REAL *PRIMS,
CCTK_REAL g4dn[NDIM][NDIM],CCTK_REAL g4up[NDIM][NDIM],
output_stats &stats,eos_struct &eos);
inline int font_fix__hybrid_EOS(CCTK_REAL &u_x, CCTK_REAL &u_y, CCTK_REAL &u_z,CCTK_REAL *CONSERVS,CCTK_REAL *PRIMS,CCTK_REAL *METRIC_PHYS,CCTK_REAL *METRIC_LAP_PSI4, eos_struct eos);
void eigenvalues_3by3_real_sym_matrix(CCTK_REAL & lam1, CCTK_REAL & lam2, CCTK_REAL & lam3,
CCTK_REAL M11, CCTK_REAL M12, CCTK_REAL M13, CCTK_REAL M22, CCTK_REAL M23, CCTK_REAL M33);
/********************************************************************************************/
#endif /* HARM_PRIMITIVES_HEADERS_H_ */
```
<a id='code_validation'></a>
# Step 6: Code validation \[Back to [top](#toc)\]
$$\label{code_validation}$$
<a id='driver_conserv_to_prims_validation'></a>
## Step 6.a: `driver_conserv_to_prims.C` \[Back to [top](#toc)\]
$$\label{driver_conserv_to_prims_validation}$$
First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
```
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/driver_conserv_to_prims.C"
original_IGM_file_name = "driver_conserv_to_prims-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write the original IllinoisGRMHD source code to a file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write the original IllinoisGRMHD source code to a file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__driver_conserv_to_prims__C = !diff $original_IGM_file_path $outfile_path__driver_conserv_to_prims__C
if Validation__driver_conserv_to_prims__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for driver_conserv_to_prims.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for driver_conserv_to_prims.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__driver_conserv_to_prims__C:
print(diff_line)
```
<a id='harm_primitives_lowlevel_validation'></a>
## Step 6.b: `harm_primitives_lowlevel.C` \[Back to [top](#toc)\]
$$\label{harm_primitives_lowlevel_validation}$$
First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
```
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/harm_primitives_lowlevel.C"
original_IGM_file_name = "harm_primitives_lowlevel-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write the original IllinoisGRMHD source code to a file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write the original IllinoisGRMHD source code to a file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__harm_primitives_lowlevel__C = !diff $original_IGM_file_path $outfile_path__harm_primitives_lowlevel__C
if Validation__harm_primitives_lowlevel__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for harm_primitives_lowlevel.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for harm_primitives_lowlevel.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__harm_primitives_lowlevel__C:
print(diff_line)
```
<a id='font_fix_gamma_law_validation'></a>
## Step 6.c: `font_fix_gamma_law.C` \[Back to [top](#toc)\]
$$\label{font_fix_gamma_law_validation}$$
First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
```
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/font_fix_gamma_law.C"
original_IGM_file_name = "font_fix_gamma_law-original.C"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write the original IllinoisGRMHD source code to a file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write the original IllinoisGRMHD source code to a file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__font_fix_gamma_law__C = !diff $original_IGM_file_path $outfile_path__font_fix_hybrid_EOS__C
if Validation__font_fix_gamma_law__C == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for font_fix_gamma_law.C: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for font_fix_gamma_law.C: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__font_fix_gamma_law__C:
print(diff_line)
```
<a id='harm_primitives_headers_validation'></a>
## Step 6.d: `harm_primitives_headers.h` \[Back to [top](#toc)\]
$$\label{harm_primitives_headers_validation}$$
First we download the original `IllinoisGRMHD` source code and then compare it to the source code generated by this tutorial notebook.
```
# Verify if the code generated by this tutorial module
# matches the original IllinoisGRMHD source code
# First download the original IllinoisGRMHD source code
import urllib
from os import path
original_IGM_file_url = "https://bitbucket.org/zach_etienne/wvuthorns/raw/5611b2f0b17135538c9d9d17c7da062abe0401b6/IllinoisGRMHD/src/harm_primitives_headers.h"
original_IGM_file_name = "harm_primitives_headers-original.h"
original_IGM_file_path = os.path.join(IGM_src_dir_path,original_IGM_file_name)
# Then download the original IllinoisGRMHD source code
# We try it here in a couple of ways in an attempt to keep
# the code more portable
try:
original_IGM_file_code = urllib.request.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write the original IllinoisGRMHD source code to a file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
try:
original_IGM_file_code = urllib.urlopen(original_IGM_file_url).read().decode("utf-8")
# Write the original IllinoisGRMHD source code to a file
with open(original_IGM_file_path,"w") as file:
file.write(original_IGM_file_code)
except:
# If all else fails, hope wget does the job
!wget -O $original_IGM_file_path $original_IGM_file_url
# Perform validation
Validation__harm_primitives_headers__h = !diff $original_IGM_file_path $outfile_path__harm_primitives_headers__h
if Validation__harm_primitives_headers__h == []:
# If the validation passes, we do not need to store the original IGM source code file
!rm $original_IGM_file_path
print("Validation test for harm_primitives_headers.h: PASSED!")
else:
# If the validation fails, we keep the original IGM source code file
print("Validation test for harm_primitives_headers.h: FAILED!")
# We also print out the difference between the code generated
# in this tutorial module and the original IGM source code
print("Diff:")
for diff_line in Validation__harm_primitives_headers__h:
print(diff_line)
```
<a id='latex_pdf_output'></a>
# Step 7: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-IllinoisGRMHD__driver_conserv_to_prims.pdf](Tutorial-IllinoisGRMHD__driver_conserv_to_prims.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means).
```
latex_nrpy_style_path = os.path.join(nrpy_dir_path,"latex_nrpy_style.tplx")
#!jupyter nbconvert --to latex --template $latex_nrpy_style_path --log-level='WARN' Tutorial-IllinoisGRMHD__the_conservative_to_primitive_algorithm.ipynb
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__the_conservative_to_primitive_algorithm.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__the_conservative_to_primitive_algorithm.tex
#!pdflatex -interaction=batchmode Tutorial-IllinoisGRMHD__the_conservative_to_primitive_algorithm.tex
!rm -f Tut*.out Tut*.aux Tut*.log
```
```
import tensorflow as tf
from tensorflow import keras
import glob
import json
import pandas as pd
import os
import gzip
import re
from nltk.stem import WordNetLemmatizer
from nltk import pos_tag
from nltk.corpus import stopwords
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
# Read the gzipped real/fake article files into a labeled DataFrame
def read_data(directory):
dfs = []
for label in ['real', 'fake']:
for file in glob.glob(directory + os.path.sep + label + os.path.sep + '*gz'):
print('reading %s' % file)
df = pd.DataFrame((json.loads(line) for line in gzip.open(file)))
df['label'] = label
dfs.append(df)
df=pd.concat(dfs)[['publish_date', 'source', 'text', 'title', 'tweets', 'label']]
list_text = [i for i in list(df.text) if i != '']
return df[df.text.isin(list_text)]
directory = r'D:\python\training_data_2'
df = read_data(directory)
import glob
import os
import json
import gzip
from collections import Counter
import numpy as np
import pandas as pd
from tqdm import tqdm
import re
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics import classification_report
from scipy.sparse import csr_matrix, hstack
import pickle
from osna.get_wordlist import get_desc
from sklearn.metrics.pairwise import cosine_similarity
def make_features(df):
# Engineered features: average retweet/favorite counts, plus the
# variance of tweet timestamps and of pairwise user-description similarity.
avg_ret = []
avg_fav = []
var_desc = []
var_time = []
vec = TfidfVectorizer(min_df=1, ngram_range=(1, 1))
print('Extracting features...')
for j in tqdm(range(len(df)), ncols=100):
tweets = df.tweets.values[j]
retweet = []
favorite = []
time = []
list_desc = []
if len(tweets) > 1:
for i in range(len(tweets)):
retweet.append(tweets[i]['retweet_count'])
favorite.append(tweets[i]['favorite_count'])
time.append(tweets[i]['created_at'][4:19] + tweets[i]['created_at'][-5:])
if 'description' in list(tweets[i]['user'].keys()):
description = get_desc(tweets[i]['user']['description'])
list_desc.append(description)
avg_ret.append(sum(retweet) / len(tweets))
avg_fav.append(sum(favorite) / len(tweets))
time_sums = [v for k, v in Counter(time).items()]
var_time.append(np.var(time_sums))
if len(list_desc) > 1:
X = vec.fit_transform(list_desc)
sim = cosine_similarity(X)
var_desc.append(np.var(sim))
else:
var_desc.append(0.0)
elif len(tweets) == 1:
# A single tweet never enters the loop above, so read its counts directly
# (sum(retweet) would be 0 on the still-empty lists).
avg_ret.append(tweets[0]['retweet_count'])
avg_fav.append(tweets[0]['favorite_count'])
var_time.append(0.0)
var_desc.append(0.0)
else:
avg_ret.append(0.0)
avg_fav.append(0.0)
var_time.append(0.0)
var_desc.append(0.0)
df['avg_retweet'] = avg_ret
df['avg_favorite'] = avg_fav
df['var_time'] = var_time
df['var_desc'] = var_desc
return df
df = make_features(df)
import re
import numpy as np
from nltk.stem import WordNetLemmatizer
from nltk import pos_tag
from nltk.corpus import stopwords
from nltk.tokenize import TweetTokenizer
def tokennizer(s):
s = re.sub(r'http\S+', '', s)
s = re.sub(r'[0-9_\s]+', '', s)
s = re.sub(r"[^'\w]+", '', s)
s = re.compile(r"(?<=[a-zA-Z])'re").sub(' are', s)
s = re.compile(r"(?<=[a-zA-Z])'m").sub(' am', s)
s = re.compile(r"(?<=[a-zA-Z])'ve").sub(' have', s)
s = re.compile(r"(it|he|she|that|this|there|here|what|where|when|who|why|which)('s)").sub(r"\1 is", s)
s = re.sub(r"'s", "", s)
s = re.sub(r"can't", 'can not', s)
s = re.compile(r"(?<=[a-zA-Z])n't").sub(' not', s)
s = re.compile(r"(?<=[a-zA-Z])'ll").sub(' will', s)
s = re.compile(r"(?<=[a-zA-Z])'d").sub(' would', s)
return s
def lemmatize(l):
wnl = WordNetLemmatizer()
for word, tag in pos_tag(l):
if tag.startswith('NN'):
yield wnl.lemmatize(word, pos='n')
elif tag.startswith('VB'):
yield wnl.lemmatize(word, pos='v')
elif tag.startswith('JJ'):
yield wnl.lemmatize(word, pos='a')
elif tag.startswith('R'):
yield wnl.lemmatize(word, pos='r')
else:
yield word
def tokennizer_desc(tknzr,sen):
list1=[]
for s in tknzr.tokenize(sen.lower()):
s=re.compile(r"(?<=[a-zA-Z])'re").sub(' are',s)
s=re.compile(r"(?<=[a-zA-Z])'m").sub(' am',s)
s=re.compile(r"(?<=[a-zA-Z])'ve").sub(' have',s)
s=re.compile(r"(it|he|she|that|this|there|here|what|where|when|who|why|which)('s)").sub(r"\1 is",s)
s=re.sub(r"'s","",s)
s=re.sub(r"can't",'can not',s)
s=re.compile(r"(?<=[a-zA-Z])n't").sub(' not',s)
s=re.compile(r"(?<=[a-zA-Z])'ll").sub(' will',s)
s=re.compile(r"(?<=[a-zA-Z])'d").sub(' would',s)
list1.append(s)
return(' '.join(list1))
def get_text(list):
stopword=set(stopwords.words('english'))
list_new=[]
for l in list:
l=re.sub(r"[^\w']",' ',l).lower()
l1=[tokennizer(w) for w in l.split() if len(tokennizer(w))>2]
l=' '.join(l1)
l1=[tokennizer(w) for w in l.split() if len(tokennizer(w))>2 and tokennizer(w) not in stopword]
l=' '.join(lemmatize(l1))
list_new.append(l)
return list_new
def get_source(list):
list_new = []
for l in list:
l = re.sub(r"http[s]?://", '', l).lower()
l1 = [w for w in l.split('.') if w != 'www']
l = ' '.join(l1)
list_new.append(l)
return list_new
def get_desc(sentence):
tknzr = TweetTokenizer(strip_handles = True,reduce_len=True)
sen=tokennizer_desc(tknzr,sentence)
l1=[w for w in lemmatize((tknzr.tokenize(sen))) if w!='']
return(' '.join(l1))
title = get_text(list(df.title))
source = get_source(list(df.source))
features = df.loc[:, ['avg_retweet', 'avg_favorite', 'var_time', 'var_desc']]
features = features.to_dict('records')
vec1 = TfidfVectorizer(min_df=2, max_df=.9, ngram_range=(1, 3), stop_words='english')
vec2 = TfidfVectorizer(min_df=2, max_df=.9, ngram_range=(1, 3), stop_words='english')
vec3 = CountVectorizer(min_df=1, max_df=.9, ngram_range=(1, 1))
vecf = DictVectorizer()
x2 = vec2.fit_transform(title)
print(x2.shape)
x3 = vec3.fit_transform(source)
print(x3.shape)
xf = vecf.fit_transform(features)
print(xf.shape)
y = np.array(df.label)
X = hstack([x2, x3, xf])
# Encode the labels numerically (real=0, fake=1). Assigning ints into the
# string array y element-by-element would silently store them as strings.
y = np.where(y == 'real', 0, 1)
np.set_printoptions(threshold = 10000)
X = X.todense()
X = np.array(X)
X.shape
from tensorflow.keras.layers import Dropout, Flatten
dropout_rate = .5
model = keras.Sequential()
model.add(keras.layers.Dense(16, input_shape=(X.shape[1],)))  # input size follows the stacked feature matrix
model.add(keras.layers.Dense(16, activation='relu'))
model.add(Dropout(rate=dropout_rate))
model.add(keras.layers.Dense(1, activation='sigmoid'))
model.summary()
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
np.random.seed(116)
np.random.shuffle(X)
np.random.seed(116)
np.random.shuffle(y)
x_val = X[:300]
partial_x_train = X[300:500]
y_val = y[:300]
partial_y_train = y[300:500]
testX = X[500:]
testy = y[500:]
y_val
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
results = model.evaluate(testX, testy)
print(results)
history_dict = history.history
history_dict.keys()
import matplotlib.pyplot as plt
acc = history_dict.get('accuracy', history_dict.get('acc'))
val_acc = history_dict.get('val_accuracy', history_dict.get('val_acc'))
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
<a href="https://colab.research.google.com/github/Pouyaaskari/object-detection/blob/master/Bounding_box_regression_with_Keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import tensorflow as tf
print(tf.__version__)
tf.config.list_physical_devices()
!git clone https://github.com/Alireza-Akhavan/object-detection-notebooks.git
from tensorflow import keras
from keras.layers import Flatten,Dense,Input
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.optimizers import Adam
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import numpy as np
import mimetypes
import argparse
import cv2
import os
path="/content/object-detection-notebooks/dataset"
path1=os.path.sep.join([path,"images"])
path2=os.path.sep.join([path,"airplanes.csv"])
rows = open(path2).read().strip().split("\n")  # path2 points to airplanes.csv
print(rows)
data = []
targets = []
filenames = []
for row in rows:
(filename, startX, startY, endX, endY) = row.split(",")
imagePath = os.path.sep.join([path1, filename])
image = cv2.imread(imagePath)
(h, w) = image.shape[:2]
startX = float(startX) / w
startY = float(startY) / h
endX = float(endX) / w
endY = float(endY) / h
image = load_img(imagePath, target_size=(224, 224))
image = img_to_array(image)
data.append(image)
targets.append((startX, startY, endX, endY))
filenames.append(filename)
data = np.array(data, dtype="float32") / 255.0
targets = np.array(targets, dtype="float32")
split=train_test_split(data, targets, filenames, test_size=0.10, random_state=42)
(trainImages, testImages) = split[:2]
(trainTargets, testTargets) = split[2:4]
(trainFilenames, testFilenames) = split[4:]
vgg =VGG16(weights="imagenet", include_top=False, input_tensor=Input(shape=(224, 224, 3)))
model=Sequential()
model.add(vgg)
model.add(Flatten())
model.add( Dense(128, activation="relu"))
model.add( Dense(64, activation="relu"))
model.add( Dense(32, activation="relu"))
model.add( Dense(4, activation="sigmoid"))
vgg.trainable=False
model.summary()
opt = Adam(learning_rate=1e-4)
model.compile(loss="mse", optimizer=opt)
history=model.fit(trainImages, trainTargets,
validation_data=(testImages, testTargets),
batch_size=32,
epochs=25,
verbose=1)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
model.save("localization.h5", save_format="h5")
def inference(imagepath):
image=load_img(imagepath,target_size=(224,224))
image=img_to_array(image)/255.0
image=np.expand_dims(image,axis=0)
prediction=model.predict(image)[0]
return prediction
def draw_box(img,prediction):
image=np.copy(img)
(startX, startY, endX, endY) = prediction
(h, w) = image.shape[:2]
startX = int(startX * w)
startY = int(startY * h)
endX = int(endX * w)
endY = int(endY * h)
cv2.rectangle(image, (startX, startY), (endX, endY),(255, 0, 0), 2)
return image
image_path="/content/object-detection-notebooks/dataset/images/image_0021.jpg"
image=plt.imread(image_path)
prediction=inference(image_path)
bbox=draw_box(image,prediction)
plt.imshow(bbox)
```
| github_jupyter |
# Final project description
The final project should be sent before the **30th of June**.
The format should be an **R script** (one per group) that can be run without error from any laptop (only by changing the path of the folder in which you put the data files of the different subjects).
Please comment your script where necessary. Please add your conclusions on the statistical tests that you run also as a comment.
The final project consists of 3 essential steps, plus an extra step as a bonus:
1. Data parsing and loading
2. Data visualization
3. Multi-level regression
4. EZ-Diffusion analysis (as a bonus)
You are divided into 5 groups. Each group has its own folder (GroupA, GroupB, GroupC, GroupD, and GroupE) that you can find on this Dropbox link: https://www.dropbox.com/sh/086fvtv0ivnscso/AADlvjuRxVK1C3HH8nmPqTBSa?dl=0. Each folder consists of a `readme.txt` file and a `Data` folder. The description of each dataset is presented in the `readme.txt` file in each folder and the recorded data for each individual is located in the `Data` folder as separate files.
NOTE: if you are not able to download these files, you can also find them on Adam now.

# Step 1: Data loading and wrangling
This step has two parts:
- Data wrangling
- Excluding some participants and trials
To do so, you should load the data files located in the `Data` folder. Each group should load and process its own dataset. The output data frame should contain the following columns:
- participant
- block_number
- trial_number
- condition
- dots_direction
- response
- accuracy
- rt
Calculate a summary, which includes the average performance (accuracy) and RTs per participant, as well as the percentage of trials below 150 ms (too fast trials) and above 5000 ms (too slow trials). Are there any participants with more than 10% fast or slow trials?
Then, exclude the participants that have less than 65% performance from the dataset. Trials with a reaction time below 150 ms or above 5000 ms should also be excluded.
**Note**: instead of writing yourself all the data paths, you can use the following function, https://www.math.ucla.edu/~anderson/rw1001/library/base/html/list.files.html
Here is a little tutorial on how to import the files. Instead of using the `fread` function, you should use `read_delim`. Note that the tutorial does not show how to add a column with the participant number; you should find a way to do that yourself.
```
# example of how to define lists of files
data_folder = '~/Dropbox/teaching/r-course21/GroupData/GroupA/Data/'
list_files = paste(data_folder, list.files(path=data_folder), sep="")
list_files[1:5] # show the first 5
```
# Step 2: Data visualization
Now it's time to visualize the data set. In particular, we want to have a look at how the performance and reaction time evolve across the blocks.
For this purpose, you should make a 2-by-1 grid plot that depicts the reaction time (top panel) and accuracy (bottom panel) across the blocks for each condition. An example of this plot is illustrated here (each line corresponds to one condition):

# Step 3: Multi-level regression
Now, run a multi-level regression model. Consider the reaction time as the predicted variable and the block number as the predictor, and include the participants and block number as random effects. Do response times decrease across the blocks?
Finally, run a repeated measures ANOVA to compare the trials in the first and last block of trials. Did participants significantly increase their performance (accuracy level)?
# Step 4: EZ-Diffusion analysis
The Drift Diffusion Model (DDM) assumes that, when making a decision between two options, noisy evidence in favor of one over the other option is integrated over time until a pre-set threshold is reached. This threshold indicates how much of this relative evidence is enough to initiate a response. Since the incoming evidence is noisy, the integrated evidence becomes more reliable as time passes. Therefore, higher thresholds lead to more accurate decisions. However, the cost of increasing the threshold is an increase of decision time. In addition, difficulty affects decisions: When confronted with an easier choice (e.g., between a very good and a very bad option), the integration process reaches the threshold faster, meaning that less time is needed to make a decision and that decisions are more accurate. The DDM also assumes that a portion of the RTs reflects processes that are unrelated to the decision time itself, such as motor processes, and that can differ across participants. Because of this dependency between noise in the information, accuracy, and speed of decisions, the DDM is able to simultaneously predict the probability of choosing one option over the other (i.e., accuracy) and the shape of the two RT distributions corresponding to the two choice options. Importantly, by fitting the standard DDM, we assume that repeated choices are independent of each other, and discard information about the order of the choices and the feedback after each choice. 
To formalize the described cognitive processes, the simple DDM has four core parameters: The drift rate $v$, which describes how fast the integration of evidence is, the threshold a (with $a > 0$), that is the amount of integrated evidence necessary to initiate a response, the starting-point bias, that is the evidence in favor of one option prior to evidence accumulation, and the non-decision time Ter (with $0 \leq T_{er} < RT_{min}$), the part of the response time that is not strictly related to the decision process ($RT = decision time + T_{er}$ ). Because, in our case, the position of the options was randomized to either the left or the right screen position, we assumed no starting-point bias and only considered drift rate, threshold, and non-decision time. The following figure illustrates this model:

The most simple version of DDM is the EZ-Diffusion model. In this model there is no bias to each option and also there is no across trial variability parameter. So, the parameters of this model can be easily obtained by the following formula. For more information you can see: "Wagenmakers, E. J., Van Der Maas, H. L., & Grasman, R. P. (2007). An EZ-diffusion model for response time and accuracy. Psychonomic bulletin & review, 14(1), 3-22".

$$logit (P_c) = log(\frac{P_c}{1-P_c})$$
$$v = sign(P_c - \frac{1}{2}) \Big[\frac{logit (P_c)\big(P_c^2 logit (P_c) - P_c logit (P_c)+ P_c - \frac{1}{2}\big)}{VRT}\Big]^{\frac{1}{4}}$$
$$a = \frac{logit(P_c)}{v}$$
$$MDT = \Big(\frac{a}{2v}\Big)\frac{1 - exp(-va)}{1 + exp(-va)}$$
$$NDT = MRT - MDT$$
- $P_c$ : probability of correct answer
- $v$ : drift rate (rate of accumulating the information)
- $a$ : boundary separation (amount of information which is needed for making a decision)
- $NDT$ : non-decision time (the time needed for encoding the stimuli plus the motor time to press the key)
- $MRT$ : average of reaction times
- $VRT$ : variance of reaction times
- $MDT$ : average of decision times ($MRT = NDT + MDT$)
Define the EZDiffusion function based on the mentioned formulas. Analyze your dataset and obtain the drift rate ($v$), boundary separation ($a$), and non-decision time ($NDT$) for each participant.
You should define a function called EZ_diffusion that takes P_c, MRT, and VRT as arguments, and returns v, a, and NDT.
Write a loop, so that you can calculate this per participant and print the results (the estimated parameters).
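The formulas above translate almost line-for-line into code. As a point of reference only, here is a minimal Python sketch of the same arithmetic (your actual submission must be an R script, and the input values below are purely illustrative):

```python
import numpy as np

def ez_diffusion(p_c, mrt, vrt):
    """EZ-diffusion parameters from accuracy (p_c), mean RT (mrt), and RT variance (vrt)."""
    logit_pc = np.log(p_c / (1 - p_c))
    # drift rate
    v = np.sign(p_c - 0.5) * (
        logit_pc * (p_c**2 * logit_pc - p_c * logit_pc + p_c - 0.5) / vrt
    ) ** 0.25
    # boundary separation
    a = logit_pc / v
    # mean decision time, then non-decision time (MRT = NDT + MDT)
    mdt = (a / (2 * v)) * (1 - np.exp(-v * a)) / (1 + np.exp(-v * a))
    ndt = mrt - mdt
    return v, a, ndt

# illustrative inputs: 80% accuracy, mean RT 0.723 s, RT variance 0.112 s^2
v, a, ndt = ez_diffusion(0.8, 0.723, 0.112)
print(v, a, ndt)
```

Implementing the same function in R, and looping over participants to print the estimated parameters, is the actual deliverable.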
| github_jupyter |
```
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
import glob
import cv2
import os
import warnings
# filter warnings
warnings.filterwarnings('ignore')
import os
print(os.listdir("../content/drive/My Drive/Colab Notebooks/mask"))
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
import seaborn as sns
import cv2
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten,GlobalAveragePooling2D
from keras.layers import Conv2D
from keras.layers import MaxPooling2D,MaxPool2D
from keras.layers.normalization import BatchNormalization
from keras.optimizers import Adam,RMSprop
from keras.preprocessing.image import ImageDataGenerator
import os
test='/content/drive/My Drive/Colab Notebooks/mask/test'
train='/content/drive/My Drive/Colab Notebooks/mask/train'
img_width, img_height = 128,128
nb_train_samples = 3594
nb_validation_samples = 858
batch_size = 30
from keras import backend as K
if K.image_data_format() == 'channels_first':
input_shape = (3, img_width, img_height)
else:
input_shape = (img_width, img_height, 3)
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
train,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
test,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode='categorical')
from keras.models import Sequential
from keras.layers import Conv2D,MaxPooling2D
from keras.layers import Activation, Dense, Flatten, Dropout,BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import ModelCheckpoint
from keras import backend as K
model = Sequential()
model.add(Conv2D(filters=8,kernel_size=(5,5),padding="Same",activation="relu",input_shape=(128,128,3)))
model.add(MaxPool2D(pool_size=(2,2)))
#model.add(Dropout(0.25))
model.add(Conv2D(filters=16,kernel_size=(4,4),padding="Same",activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
#model.add(Dropout(0.25))
model.add(Conv2D(filters=32,kernel_size=(4,4),padding="Same",activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Conv2D(filters=48,kernel_size=(4,4),padding="Same",activation="relu"))
model.add(MaxPool2D(pool_size=(2,2),strides=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512,activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(2,activation="softmax"))
#defining optimizer
optimizer=Adam(lr=0.0001,beta_1=0.9,beta_2=0.999)
#compile the model
model.compile(optimizer=optimizer,loss="categorical_crossentropy",metrics=["acc"])  # categorical loss matches the 2-unit softmax and class_mode='categorical'
history = model.fit_generator(
train_generator,
steps_per_epoch=nb_train_samples // batch_size,
epochs=30,
validation_data=validation_generator,
validation_steps=nb_validation_samples // batch_size)
plt.plot(history.history['acc'],'r')
plt.plot(history.history['val_acc'],'g')
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Training accuracy', 'Validation accuracy'], loc='lower right')
plt.grid()
plt.show()
plt.plot(history.history['val_loss'],'r')
plt.plot(history.history['loss'],'b')
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.grid()
plt.legend(['Training loss', 'Validation loss'], loc='upper right')
plt.show()
model.save("/content/drive/My Drive/Lab1/assignment/Threshold/mask.h5")
from keras.models import load_model
from keras.preprocessing import image
from keras.applications.mobilenet import preprocess_input
import numpy as np
model= load_model('/content/drive/My Drive/Lab1/assignment/Threshold/mask.h5')
img = image.load_img('/content/drive/My Drive/yolov3/mask/talh.jpg', target_size=(128,128))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
img_data = preprocess_input(x)
classes = model.predict(img_data)
print(classes)
import cv2
import os
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
from tensorflow.keras.applications.mobilenet import preprocess_input
import numpy as np
from google.colab.patches import cv2_imshow
faceCascade = cv2.CascadeClassifier("/content/drive/My Drive/Colab Notebooks/mask/haarcascade_frontalface_alt2.xml")
model = load_model("/content/drive/My Drive/Lab1/assignment/Threshold/mask.h5")
def face_mask_detector(frame):
# frame = cv2.imread(fileName)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray,
scaleFactor=1.1,
minNeighbors=5,
minSize=(60, 60),
flags=cv2.CASCADE_SCALE_IMAGE)
faces_list=[]
preds=[]
for (x, y, w, h) in faces:
face_frame = frame[y:y+h,x:x+w]
face_frame = cv2.cvtColor(face_frame, cv2.COLOR_BGR2RGB)
face_frame = cv2.resize(face_frame, (128, 128))
face_frame = img_to_array(face_frame)
face_frame = np.expand_dims(face_frame, axis=0)
face_frame = preprocess_input(face_frame)
faces_list.append(face_frame)
if len(faces_list)>0:
preds = model.predict(np.vstack(faces_list))  # stack per-face batches into one array before predicting
for pred in preds:
(mask, withoutMask) = pred
label = "Mask" if mask > withoutMask else "No Mask"
color = (0, 255, 0) if label == "Mask" else (0, 0, 255)
label = "{}: {:.2f}%".format(label, max(mask, withoutMask) * 100)
cv2.putText(frame, label, (x, y- 10),
cv2.FONT_HERSHEY_SIMPLEX, 1, color, 2)
cv2.rectangle(frame, (x, y), (x + w, y + h),color, 3)
# cv2_imshow(frame)
return frame
input_image = cv2.imread("/content/Mask227.jpeg")
output = face_mask_detector(input_image)
cv2_imshow(output)
```
| github_jupyter |
# Working with NetCDF files
One of the most common file formats within environmental science is NetCDF ([Network Common Data Form](https://www.unidata.ucar.edu/software/netcdf/)).
This format allows for storage of multiple variables, over multiple dimensions (i.e. N-dimensional arrays).
Files also contain the associated history and variable attributes.
Example dataset format: (from http://xarray.pydata.org/en/stable/data-structures.html)
<br>
<img src="../figures/dataset-diagram.png">
<br>
If you're not familiar with NetCDF, and would like to know more, there is a bit more general information at the bottom of this notebook.
For now, we'll simply focus on how to access and work with these files in python ...
# NetCDF in python
There are a few different packages that can be used to access data from NetCDF files.
These include:
* [netCDF4](https://unidata.github.io/netcdf4-python/netCDF4/index.html)
* Core NetCDF package within python.
* [iris](https://scitools.org.uk/iris/docs/latest/index.html)
* Developed for earth system data.
* Data and metadata read into and stored within "cubes".
* [xarray](http://xarray.pydata.org/en/stable/)
* A higher-level package, with a pandas-like interface for netCDF.
* What we'll focus on here today...
## netCDF4
Contains everything you need to read/modify/create netCDF files. e.g.
```python
from netCDF4 import Dataset
import numpy as np
openfile = Dataset('../data/cefas_GETM_nwes.nc4')
bathymetry = openfile.variables['bathymetry'][:]
```
Variables are read into NumPy arrays (masked arrays if missing values specified).
## xarray
* Alternative to plain netCDF4 access from python.
* Brings the power of pandas to environmental sciences, by providing N-dimensional variants of the core pandas data structures.
* Worth using for N-dimensional data, even when not reading netCDF files?
| Pandas | xarray |
|---|---|
| 1-D Series | DataArray |
| DataFrame | Dataset |
DataArray uses names for each dimension, making it easier to track than by just using axis numbers.
For example, if you want to average your DataArray (da) over time, it is possible to write `da.mean(dim='time')`
You don't have to remember the index of the time axis.
Compare:
```python
# xarray style
>>> da.sel(time='2018-01-12').max(dim='ensemble')
# standard numpy style
>>> array[3, :, :].max(axis=2)
```
Without xarray, you need to first check which row refers to `time='2018-01-12'`, and which dimension is relevant for the ensemble.
In the NumPy example, these choices are also not obvious to anyone reading the code at a later date.
#### The main advantages of using xarray versus plain netCDF4 are:
* intelligent selection along labelled dimensions (and also indices)
* [groupby operations](http://xarray.pydata.org/en/stable/generated/xarray.DataArray.groupby.html)
* data alignment across named axes
* IO (netcdf)
* Attributes/metadata held with the dataset.
* conversion from and to Pandas.DataFrames
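The groupby machinery, for instance, works much like pandas. A small sketch with made-up synthetic data (all names here are invented for illustration):

```python
import numpy as np
import pandas as pd
import xarray as xr

# a hypothetical daily series spanning January and February 2000
times = pd.date_range("2000-01-01", periods=60, freq="D")
da = xr.DataArray(np.arange(60.0), coords={"time": times}, dims="time")

# group by calendar month and average within each group
monthly_mean = da.groupby("time.month").mean()
print(monthly_mean.values)  # one mean value per month
```

The `"time.month"` shorthand exposes a datetime component of the coordinate, so no manual bookkeeping of month boundaries is needed.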
## Xarray as Pandas for N dimensions
```
# Import everything that we are going to need... but not more
import pandas as pd
import xarray as xr
import numpy as np
pd_s = pd.Series(range(3), index=list('abc'))
print(pd_s)
#convert 1D series to ND aware DataArray
da = xr.DataArray(pd_s)
print(da)
#convert 2D DataFrame to ND aware Dataset
df = pd.DataFrame.from_dict({'A': [1, 2, 3], 'B': [4, 5, 6]},
orient='index', columns=['one', 'two', 'three'])
df
ds = xr.Dataset.from_dataframe(df)
ds
```
---
# Let's open a netCDF file
Xarray allows you to open both local and remote datasets.
Remote datasets can be accessed through [OpenDAP](http://xarray.pydata.org/en/stable/io.html#opendap), allowing you to download (and subset) data available online.
e.g. you can access ocean colour data directly into python (from a dataset accessible online [here](earthdata.nasa.gov/collaborate/open-data-services-and-software/api/opendap/opendap-servers)):
```python
remote_data = xr.open_dataset(
'https://oceandata.sci.gsfc.nasa.gov:443/opendap/MODISA/L3SMI/2020/176/A2020176.L3m_DAY_CHL_chlor_a_9km.nc')
```
Here we'll use a file available locally on your machine (find in your data folder): `cefas_GETM_nwes.nc4`
### Open our dataset
```
GETM = xr.open_dataset('../data/cefas_GETM_nwes.nc4')
GETM
```
We can extract information on the dimensions, coordinates and attributes of the dataset
```
# List dimensions
GETM.dims
# Extract coordinates
print(type(GETM.coords['latc']))
GETM.coords['latc'].shape
# List name of dataset attributes
GETM.attrs.keys()
# List variables
#GETM.data_vars
for var in GETM.data_vars:
print(var)
```
### Extract variable from dataset
```
temp = GETM['temp']
print(temp.shape)
temp
# Can also use:
# GETM.temp
```
Check variable attributes, in the same way we access DataSet attributes
```
print(temp.attrs)
print('Variable {} has units {}'.format(temp.attrs['long_name'], temp.attrs['units']))
```
### Accessing data values
Data can be subset using standard indexing methods.
```
temp[0, 0, 90, 100]
```
Note that the DataArray subset keeps track of the associated coordinates, as well as other attributes.
Behind the scenes, data values are still stored as NumPy arrays.
```
print(type(temp.values))
temp.values[0, 0, 90, 100]
```
---
## Xarray Indexing and selecting data
Xarray offers a variety of ways to subset your data.
From http://xarray.pydata.org/
<br>
<img src="../figures/xarray_indexing_table.png">
Subsets of our temperature variable, `temp`:
```
#positional by integer
print( temp[0, 2, :, :].shape )
temp.dims
# positional by label (coordinate value)
print( temp.loc['1996-02-02T01:00:00', 6, :, :].shape )
# by name and integer - note that we use round brackets here
print( temp.isel(level=1, latc=90, lonc=100).shape )
# by name and label
print( temp.sel(time='1996-02-02T01:00:00').shape )
```
Using axes names, it's also possible to make a subset of an entire Dataset (across all variables)
```
GETM.sel(time='1996-02-02T01:00:00', level=6)
```
### Define selection using nearest value
In the examples above, you used coordinate values to make selections by label.
If the exact value you want doesn't exist, you can instead match e.g. the nearest index:
```
temp.sel(level=2, lonc=-5.0, latc=50.0, method='nearest')
```
Tolerance limits can be set for "nearest" coordinate values.
```
# e.g. latc=-50 should not yield data
lat = -50
limit = 0.5
try:
print(temp.sel(level=1, lonc=-5.0, latc=lat, method='nearest', tolerance=limit))
except KeyError:
print(f'ERROR: {lat} outside tolerance of {limit}')
```
Note: Other `method` options available are:
* `backfill` / `bfill` - propagate values backward
* `pad` / `ffill` - propagate values forward
* `None` - default, exact matches only
More information can be found in the xarray docs [here](http://xarray.pydata.org/en/stable/indexing.html).
You can also interpolate between values, as discussed [here](xarray.pydata.org/en/stable/interpolation.html).
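The fill-direction options above can be illustrated on a made-up one-dimensional grid (the depth values below are invented for the example):

```python
import xarray as xr

# a hypothetical profile on a coarse depth grid
da = xr.DataArray([10.0, 12.0, 15.0],
                  coords={"depth": [0.0, 50.0, 100.0]}, dims="depth")

print(da.sel(depth=30.0, method="nearest").item())  # 12.0 (depth=50 is closest)
print(da.sel(depth=30.0, method="ffill").item())    # 10.0 (pad forward from depth=0)
print(da.sel(depth=30.0, method="bfill").item())    # 12.0 (backfill from depth=50)
```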
---
# Exercise 1
From our GETM dataset (loaded above), can you extract the follow data for the ocean conditions off Great Yarmouth?
The coordinates here are 52.6 deg N, 1.75 deg E.
a) the bathymetry (ocean depth)
b) the temperature profile (i.e. all levels) at the same location, on 1st February 1996?
```
# Your code here:
# Hint: can you match the latitude and longitude exactly, or do you need to find the nearest value?
#a)
# b)
```
---
---
# Plotting is easy
Xarray enables simple plotting, to easily view your data.
```
GETM['temp'].isel(time=0, level=0).plot()
```
It will automatically plot 2D shading or 1D lines, depending on the shape of the DataArray.
```
GETM.temp.sel(lonc=1.75, latc=52.6, level=1, method='nearest').plot()
```
## Other plotting packages are still available
You may still want to tailor plots to your own design e.g. creating figures for publication or presentation.
For example, let's look at an example using cartopy.
```
%matplotlib inline
import matplotlib.pyplot as plt
import cartopy
import cartopy.crs as ccrs
```
Define a general mapping function
```
#def make_map(ds, var='', title=None, units=None):
def make_map():
# create figure and axes instances
fig = plt.figure(figsize=(8,4))
ax = fig.add_subplot(111, projection=ccrs.Stereographic(central_latitude=60))
ax.set_extent([-10, 15, 49, 60], crs=ccrs.PlateCarree())
gl = ax.gridlines(draw_labels=False)
feature = cartopy.feature.NaturalEarthFeature(name='coastline',
category='physical',
scale='50m',
edgecolor='0.5',
facecolor='0.8')
ax.add_feature(feature)
return fig, ax
make_map();
```
We can plot our chosen data on the map, and use attributes to annotate the figure.
```
# Extract our chosen data and coordinates
latc = GETM.coords['latc']
lonc = GETM.coords['lonc']
var = GETM['temp'].sel(time='1996-02-02T01:00:00', level=21)
# Create the figure (using function above)
fig, ax = make_map()
# draw filled contours onto the map axes (ax).
h = ax.contourf(lonc, latc, var, 50, cmap=plt.cm.coolwarm, transform=ccrs.PlateCarree())
# add colorbar.
cbar = fig.colorbar(h)
# with unit label
cbar.set_label(var.units)
# add a title
ax.set_title(f'A slice of {var.long_name}');
```
---
#### Aside: Choice of colormaps
The "default" colormap in python is viridis. However, colormaps can (and should) be varied to suit the data being shown.
For example, you would likely prefer a *sequential* scale for bathymetry, as opposed to a *diverging* scale for rainfall anomalies.
There is a large variety of maps to choose from in matplotlib, as shown [here](https://matplotlib.org/2.0.1/users/colormaps.html).
**You should always choose *perceptually uniform* shading to ensure that data is not misrepresented.**
There are a large number of articles explaining why you should avoid using rainbow/jet e.g.
* [The end of the rainbow](http://www.climate-lab-book.ac.uk/2014/end-of-the-rainbow/)
* [A dangerous rainbow: Why colormaps matter](https://blogs.mathworks.com/headlines/2018/10/10/a-dangerous-rainbow-why-colormaps-matter/)
---
## Arithmetic operations
You can work with DataArrays in the same way as a NumPy array.
The benefit here is that calculations using DataArrays give a result that is also a DataArray.
```
top = GETM['temp'].isel(time=0, level=4)
bottom = GETM['temp'].isel(time=0, level=0)
diff = top - bottom
print(type(diff))
diff.plot()
```
### Available methods and statistics
Methods available in pandas are also available in xarray.
When performing calculations, you can refer to dimensions by name or by axis number.
```
# average over time (using axis number)
time_ave = GETM['temp'].mean(axis=0)
print(time_ave.shape)
# average over time and level (vertical)
timelev_ave = GETM['temp'].mean(['time','level'])
timelev_ave.plot()
# average over time and longitude
# i.e. zonal average (meridional section)
timelon_ave = GETM['temp'].mean(['time','lonc']).isel(level=4)
timelon_ave.plot()
```
---
## Dataset can easily be saved to a new netCDF file
Let's create a new dataset, containing just the average temperature, over time and level.
```
# Create a subset of the dataset, average over axes:
ds_temp = GETM[['temp']].mean(['time', 'level'])  # dimension names go in a list
# output to netcdf
ds_temp.to_netcdf('../data/temp_avg_level_time.nc')
```
Note the extra brackets used to extract the temperature variable:
```
# When variable names are passed in a list, this produces a new Dataset:
print(type( GETM[['temp']]) )
# Passing just a string extracts the variable into a DataArray
print(type( GETM['temp']) )
```
---
# Exercise 2
From our GETM dataset again, we want to investigate the variability of temperature with depth across the seabed.
a) Extract bathymetry from the dataset.
b) Extract temperature at the seabed (level index = 0), and average over time.
c) Produce a scatter plot of depth (bathymetry) vs. seabed temperature.
```
# Your code here:
# a)
# b)
# c)
```
---
# Bonus Exercise
For those who have finished the exercises above, and want more...
Earlier we mentioned that you can also access remote data sets online. e.g.
```
remote_data = xr.open_dataset(
'https://oceandata.sci.gsfc.nasa.gov:443/opendap/MODISA/L3SMI/2020/176/A2020176.L3m_DAY_CHL_chlor_a_9km.nc')
```
From this remote dataset:
a) Extract the chlorophyll concentration, covering just the North Sea (or another region of your choice).
b) Plot a map to show your result - check you've made a subset of the right region!
```
# Your code here:
# a)
# Hint: You will need to extract the relevant variable over a range of latitude and longitude values.
# * Find the relevant variable name to extract from the data set.
# * Extract coordinate values if needed?
# * Subset over your chosen range of latitude and longitude.
# b)
# Note: data is only downloaded when you make the plot
```
---
---
---
# More on the netCDF file format
## History
* netCDF is a collection of formats for storing arrays
* popular scientific file format for gridded datasets
* netCDF classic
* more widespread
* 2 GB file limit (if you don't use the unlimited dimension)
    * often preferred for distributing products
* netCDF 64 bit offset
* supports larger files
* NetCDF4
* based on HDF5
* compression
* multiple unlimited variables
* new types inc. user defined
    * hierarchical groups
* Developed by Unidata-UCAR with the aim of storing climate model data (3D+time)
* Auxiliary information about each variable can be added
* Readable text equivalent called CDL (use ncdump/ncgen)
* Can be used with Climate and Forecast (CF) data convention
http://cfconventions.org/
## Data model
* Dimensions: describe the axes of the data arrays.
* Variables: N-dimensional arrays of data.
* Attributes: annotate variables or files with small notes or supplementary metadata.
Example for an ocean model dataset:
* Dimensions
* lat
* lon
* depth
* time
* Variable
* Temperature
* Salinity
* Global Attributes
* Geographic grid type
* History
* Variable attributes (Temperature)
* Long_name: "sea water temperature"
* Missing_value: 1.09009E36
* Units: deg. C
* range: -2:50
## Tools for working with netCDF files
### Readable by many software tools
NetCDF can be read by many different software tools e.g. ArcGIS, QGIS, Surfer, Ferret, Paraview etc.
It can also be read by many different languages (one of the key motivations behind its use).
### C and Fortran libraries
These underpin the interfaces to other languages such as python (e.g. the python package netCDF4).
Included with these are the ncdump/ncgen tools, used to convert to and from a human-readable format.
### nco tools
An *extremely useful* set of tools, to process netCDF files directly from the command line.
For example, files can be subset, concatenated, averaged, or variables processed with simple arithmetic.
Full documentation, showing the wide range of functionality, can be found here: http://nco.sourceforge.net/nco.html.
### cdo tools
Another powerful command line tool: https://code.mpimet.mpg.de/projects/cdo/
## Quick viewers?
To view the file contents quickly and easily (without reading into python or elsewhere), there are a few different options.
e.g. ncdump, ncview, panoply, pyncview, etc.
### ncdump
This program should be available through your python installation, and is a useful way to quickly check the contents or attributes of a netCDF file.
You can peek inside your netcdf file from the prompt window (or terminal) using `ncdump -h <filename>`
Be sure to use the `-h` option, otherwise it will literally dump the entire contents of your file into the screen in front of you (not what you normally want!).
e.g.:
```
$ ncdump -h data/cefas_GETM_nwes.nc4
netcdf cefas_GETM_nwes {
dimensions:
latc = 360 ;
lonc = 396 ;
time = UNLIMITED ; // (6 currently)
level = 5 ;
variables:
double bathymetry(latc, lonc) ;
bathymetry:units = "m" ;
bathymetry:long_name = "bathymetry" ;
bathymetry:valid_range = -5., 4000. ;
bathymetry:_FillValue = -10. ;
bathymetry:missing_value = -10. ;
float h(time, level, latc, lonc) ;
h:units = "m" ;
h:long_name = "layer thickness" ;
h:_FillValue = -9999.f ;
h:missing_value = -9999.f ;
double latc(latc) ;
latc:units = "degrees_north" ;
double level(level) ;
level:units = "level" ;
double lonc(lonc) ;
lonc:units = "degrees_east" ;
float temp(time, level, latc, lonc) ;
temp:units = "degC" ;
temp:long_name = "temperature" ;
temp:valid_range = -2.f, 40.f ;
temp:_FillValue = -9999.f ;
temp:missing_value = -9999.f ;
double time(time) ;
time:long_name = "time" ;
time:units = "seconds since 1996-01-01 00:00:00" ;
...
```
# References
* xarray [docs](http://xarray.pydata.org/en/stable/)
* netCDF4 [docs](https://unidata.github.io/netcdf4-python/netCDF4/index.html)
* Stephan Hoyer's [ECMWF talk](https://docs.google.com/presentation/d/16CMY3g_OYr6fQplUZIDqVtG-SKZqsG8Ckwoj2oOqepU/edit#slide=id.g2b68f9254d_1_27)
| github_jupyter |
<a href="https://colab.research.google.com/github/taniokah/DL-Basic-Seminar/blob/master/MNIST.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# MNIST Example
## Handwritten digit recognition program
This sample trains a model on the MNIST data using Keras (TensorFlow).
```
# coding: utf-8
# MNIST sample
# Import Keras
import keras
# Import the MNIST data and the other required modules
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import RMSprop
from keras.callbacks import EarlyStopping, CSVLogger
%matplotlib inline
import matplotlib.pyplot as plt
# Define the batch size, number of classes, and number of epochs
batch_size = 128
num_classes = 10
epochs = 20
# Load the MNIST data
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Display only 10 of the MNIST images
for i in range(10):
plt.subplot(2, 5, i+1)
plt.title("M_%d" % i)
plt.axis("off")
plt.imshow(x_train[i].reshape(28, 28), cmap=None)
plt.show()
# Flatten the images and normalize the pixel values
x_train = x_train.reshape(60000, 784).astype('float32')
x_test = x_test.reshape(10000, 784).astype('float32')
x_train /= 255
x_test /= 255
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# Print the shapes for confirmation
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# Build the model
model = Sequential()
model.add(Dense(512, input_shape=(784, )))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))
# Print a summary of the model
model.summary()
# Compile the model
model.compile(loss='categorical_crossentropy',
optimizer=RMSprop(),
metrics=['accuracy'])
es = EarlyStopping(monitor='val_loss', patience=2)
csv_logger = CSVLogger('training.log')
# Train the model with early stopping and CSV logging
hist = model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_split=0.1,
callbacks=[es, csv_logger])
# Evaluate on the test set
score = model.evaluate(x_test, y_test, verbose=0)
print('test loss:', score[0])
print('test acc:', score[1])
# Plot the training and validation loss
loss = hist.history['loss']
val_loss = hist.history['val_loss']
epochs = len(loss)
plt.plot(range(epochs), loss, marker='.', label='loss (training data)')
plt.plot(range(epochs), val_loss, marker='.', label='val_loss (validation data)')
plt.legend(loc='best')
plt.grid()
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()
# Predict the first 10 MNIST test images
for i in range(10):
plt.subplot(2, 5, i+1)
pred = model.predict_classes(x_test[i:i+1])
plt.title("P_%d" % pred[0])
plt.axis("off")
plt.imshow(x_test[i].reshape(28, 28), cmap=None)
plt.show()
```
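The `keras.utils.to_categorical` call in the block above turns integer class labels into one-hot vectors. As a rough illustration of that transformation (a plain-NumPy sketch, not the Keras implementation):

```python
import numpy as np

def to_one_hot(labels, num_classes):
    """Convert integer labels to one-hot rows, like keras.utils.to_categorical."""
    one_hot = np.zeros((len(labels), num_classes), dtype='float32')
    one_hot[np.arange(len(labels)), labels] = 1.0
    return one_hot

labels = np.array([3, 0, 9])
encoded = to_one_hot(labels, num_classes=10)
print(encoded.shape)             # (3, 10)
print(int(encoded[0].argmax()))  # 3
```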
<a href="https://colab.research.google.com/github/Blackman9t/Advanced-Data-Science/blob/master/SparkML_pipelines.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
A Pipeline is a very convenient way of organising our data preprocessing in a Machine Learning flow.<br>There are certain steps which we must carry out before the actual ML begins. These steps are called data preprocessing and/or feature engineering.<br>The cool thing about pipelines is that we get a sort of recipe, a list of predefined steps.<br> These steps could include:<br>1. Assigning categorical values, e.g. 0 or 1<br>2. Normalising the range of values per dimension<br>3. One-hot encoding, and then the final step<br>4. Modeling, where we train our ML algorithm.<br>
So the idea is that when using pipelines, we can keep the same preprocessing and just switch out different modeling algorithms or different parameter sets of the modeling algorithm without changing anything upstream. This is very handy.<br>The overall idea of pipelines is that we can fuse our complete data processing flow into one single pipeline, and that single pipeline we can further use downstream.<br>
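The fuse-and-swap idea above can be sketched in plain Python. This is only a toy analogy (not Spark's implementation, and the stage classes here are made up for illustration): each stage exposes a `transform` method, the pipeline chains them in order, and the final modeling stage can be swapped without touching the preprocessing stages.

```python
class Pipeline:
    """Minimal pipeline analogy: apply each stage's transform in order."""
    def __init__(self, stages):
        self.stages = stages

    def transform(self, data):
        for stage in self.stages:
            data = stage.transform(data)
        return data

class Scaler:
    """Toy preprocessing stage: rescale values into [0, 1]."""
    def transform(self, data):
        lo, hi = min(data), max(data)
        return [(x - lo) / (hi - lo) for x in data]

class Rounder:
    """Toy 'model' stage: threshold at 0.5."""
    def transform(self, data):
        return [1 if x >= 0.5 else 0 for x in data]

pipe = Pipeline(stages=[Scaler(), Rounder()])
print(pipe.transform([10, 20, 30, 40]))  # [0, 0, 1, 1]
```

Swapping `Rounder()` for a different final stage leaves the rest of the pipeline untouched, which is exactly the convenience described above.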
So the pipeline behaves like a Machine Learning algorithm itself, with methods such as fit and transform: fit starts the training, and transform gives you back the predicted values (which evaluators can then score).<br>
One advantage is that we can cross-validate, that is, we can try out many parameter combinations using that very same pipeline. And this really accelerates optimisation of the algorithm.<br>
So in summary, pipelines really facilitate our day-to-day work in machine learning, as we can draw from predefined data processing steps, we make sure everything is aligned, and we can switch and swap our algorithms as needed. We can create a pipeline and use it in downstream data processing, in a process called hyperparameter tuning for example.
Finally, remember that DataFrames in Apache Spark are always lazy, in the sense that nothing gets executed until you actually read the data.
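Lazy evaluation can be illustrated with a plain-Python generator (again just an analogy for Spark's behaviour, not its implementation): the transformation is only recorded when defined, and nothing runs until a result is actually requested.

```python
log = []

def transform(values):
    # Generator: the work below is deferred until someone iterates.
    for v in values:
        log.append(v)
        yield v * 2

lazy = transform([1, 2, 3])
print(log)           # [] -- nothing has executed yet
result = list(lazy)  # the "action": forces execution
print(log)           # [1, 2, 3]
print(result)        # [2, 4, 6]
```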
First, let's load our Spark dependencies
```
!apt-get install openjdk-8-jdk-headless -qq > /dev/null
!wget -q http://apache.osuosl.org/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz
!tar xf spark-2.4.4-bin-hadoop2.7.tgz
!pip install -q findspark
!pip install pyspark
# Set up required environment variables
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64"
os.environ["SPARK_HOME"] = "/content/spark-2.4.4-bin-hadoop2.7"
```
Now let's initialise a Spark context if none exists
```
from pyspark import SparkConf, SparkContext
try:
conf = SparkConf().setMaster("local").setAppName("My_App")
sc = SparkContext(conf = conf)
print('SparkContext Initialised Successfully!')
except Exception as e:
print(e)
sc
```
Next, let's initialise a Spark session
```
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('My_App').getOrCreate()
spark
```
## Intro to SparkML
Note that the Parquet file format uses compression and a column store, and actually maps the data layout to the Apache Spark Tungsten memory layout.
### 1. Data Extraction
```
# This is the dataset that contains the different folders for reading the accelerometer data
# We will clone this data set
accelerometer_readings = 'https://github.com/wchill/HMP_Dataset.git'
!git clone https://github.com/wchill/HMP_Dataset.git
# Let's list out the folders in the HMP_Dataset
!ls HMP_Dataset
# Let's have a look at one of the folders
!ls HMP_Dataset/Brush_teeth
```
Let's recursively traverse the folders in HMP_Dataset, create an Apache Spark DataFrame from the files in each folder, and then union all the DataFrames into one overall DataFrame containing all the data.<br>
Let's define the schema of the DataFrame below
```
from pyspark.sql.types import StructType, StructField, IntegerType
schema = StructType([
StructField('x',IntegerType(),True),
StructField('y',IntegerType(),True),
StructField('z',IntegerType(),True)
])
```
Now let's import `os` to traverse the data folders
```
import os
file_list = os.listdir('HMP_Dataset')
file_list
```
Now let's get rid of the folders that do not contain underscores as we don't need those
```
file_list_filtered = [x for x in file_list if '_' in x]
file_list_filtered
```
Okay so we have all the folders containing data in one array. Now we can iterate over this array.
```
# Start with no DataFrame; we will union each file's data onto it
df = None
# Next we import tqdm to display progress bars as the loop runs
from tqdm import tqdm
from pyspark.sql.functions import lit
# The lit function lets us add a column of string literals to a Spark DataFrame
# Now let's iterate through the folders
for category in tqdm(file_list_filtered):
# Now we traverse all through the files in each folder
data_files = os.listdir('HMP_Dataset/' + category)
for data_file in data_files:
# first let's print it to be sure where we are
#print(data_file)
# Now we create a temporary dataframe
temp_df = spark.read.option('header','false').option('delimiter',' ').csv('HMP_Dataset/'+ category + '/' + data_file, schema=schema) # we use our defined schema above
temp_df = temp_df.withColumn('class',lit(category)) # Adding a class column to the dataframe
temp_df = temp_df.withColumn('source',lit(data_file)) # Adding a source column to the dataframe
# now we put a condition if df is empty
if df is None:
df = temp_df
else:
df = df.union(temp_df) # else union appends the data frames vertically
```
Let's see the dataframe created from all the files in those folders
```
df.show()
```
Romeo Kienzler usually creates a notebook that does this exercise, and he calls it ETL.<br>
ETL means Extract, Transform and Load data into a Spark DataFrame.
### 2. Data Transformation
Now we need to transform the data and create an integer representation of the class column, as ML algorithms cannot cope with strings. So we will map each class to an integer using the StringIndexer module.
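The mapping that StringIndexer performs can be pictured in plain Python. This is a toy sketch, not Spark's implementation (by default StringIndexer also assigns indices in descending order of label frequency, which the sketch imitates):

```python
from collections import Counter

def build_index(labels):
    # Assign an integer index to each distinct label, most frequent first
    ordered = [lab for lab, _ in Counter(labels).most_common()]
    return {lab: float(i) for i, lab in enumerate(ordered)}

labels = ['Brush_teeth', 'Walk', 'Walk', 'Climb_stairs', 'Walk']
index = build_index(labels)
print(index['Walk'])  # 0.0 -- most frequent label gets index 0
indexed = [index[lab] for lab in labels]
```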
```
from pyspark.ml.feature import StringIndexer
indexer = StringIndexer(inputCol = 'class', outputCol = 'classIndex')
indexed = indexer.fit(df).transform(df) # This is a new data frame
# Let's see it
indexed.show()
```
We can see the class index for each class. Good.<br>
So now we apply one-hot encoding.
```
from pyspark.ml.feature import OneHotEncoder
# The OneHotEncoder is a pure transformer object: it does not use fit()
encoder = OneHotEncoder(inputCol = 'classIndex', outputCol = 'categoryVec')
encoded = encoder.transform(indexed) # This is a new data frame
encoded.show()
```
The next thing we need to do is transform our x, y, z values into vectors, because SparkML can only work on vector objects.<br>
So let's import the Vectors and VectorAssembler libraries
```
from pyspark.ml.linalg import Vectors
from pyspark.ml.feature import VectorAssembler
# VectorAssembler creates vectors from ordinary data types for us
vectorAssembler = VectorAssembler(inputCols = ['x','y','z'], outputCol = 'features')
# Now we use the vectorAssembler object to transform our last updated dataframe
features_vectorized = vectorAssembler.transform(encoded) # note this is a new df
# Let's see the data
features_vectorized.show()
```
So we now have the features corresponding to columns x, y, z, but as an Apache Spark vector object, which is the correct object type for ML.
So the next thing we do now is normalise the data set.<br>
This brings the values in the data set into a range such as 0 to 1 (or sometimes -1 to 1). The idea is to have all feature values within a comparable range so that no single feature overshadows the others.
```
from pyspark.ml.feature import Normalizer
normalizer = Normalizer(inputCol = 'features', outputCol = 'features_norm',p=1.0)
normalized_data = normalizer.transform(features_vectorized) # New data frame too.
# Let's see the normalized data
normalized_data.show()
```
As seen in the features_norm column, each row has been rescaled by its L1 norm, so all values now lie between 0 and 1.
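What `Normalizer` with `p=1.0` does to each row can be reproduced with NumPy. This is a sketch of the per-row arithmetic only (the sample numbers are made up, and this is not Spark's code): each feature vector is divided by its L1 norm, so non-negative components end up between 0 and 1 and sum to 1.

```python
import numpy as np

rows = np.array([[22.0, 49.0, 35.0],
                 [22.0, 52.0, 35.0]])

# Divide each row by its L1 norm (sum of absolute values)
l1 = np.abs(rows).sum(axis=1, keepdims=True)
normed = rows / l1
print(normed.sum(axis=1))  # each row now sums to 1
```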
### Creating The Pipeline
```
from pyspark.ml import Pipeline
# The Pipeline constructor below takes an array of Pipeline stages we pass to it.
# here we pass the 4 stages above in the right sequence one after another.
pipeline = Pipeline(stages = [indexer,encoder,vectorAssembler,normalizer])
```
Now let's fit the Pipeline object to our original data frame
```
model = pipeline.fit(df)
```
Finally let's transform our data frame using the Pipeline Object
```
prediction = model.transform(df)
# Let's see the first 20 rows
prediction.show()
```
So we see that exactly the same data frame as created before from the individual stages has been produced using the Pipeline. <br>Now we can fit and transform our data in one go. This is a really handy function.
Let's get rid of all the columns we don't need
```
# first let's list out the columns we want to drop
cols_to_drop = ['x','y','z','class','source','classIndex','features']
# Next let's use a list comprehension with conditionals to select cols we need
selected_cols = [col for col in prediction.columns if col not in cols_to_drop]
# Let's define a new train_df with only the categoryVec and features_norm cols
df_train = prediction.select(selected_cols)
# Let's see our training dataframe.
df_train.show()
```
So finally, we have our categoryVec column which is the target variable and our features_norm column, which is the feature set for the ML algorithm training.<br>
We have seen how to create Apache spark ML Pipelines from our data set.
```
```
I enjoyed reading "A tutorial on the free-energy framework for modelling perception and learning" by *Rafal Bogacz*, which is freely available [here](http://www.sciencedirect.com/science/article/pii/S0022249615000759). In particular, the author encourages readers to replicate the results in the paper. He gives solutions in MATLAB himself, so I had to do the same in Python, all within a notebook...
<!-- TEASER_END -->
Let's first initialize the notebook:
```
from __future__ import division, print_function
import numpy as np
np.set_printoptions(precision=6, suppress=True)
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
phi = (np.sqrt(5)+1)/2
fig_width = 10
figsize = (fig_width, fig_width/phi)
do_save = False
```
# exercise 1 : defining probabilities
First, let's see an application of Bayes' theorem.
We start by considering in this section a simple perceptual problem in which the value of a single variable has to be inferred from a single observation. To make it more concrete, consider a simple organism that tries to infer the size or diameter of a food item, which we denote by $v$, on the basis of the light intensity it observes. Let us assume that our simple animal has only one light-sensitive receptor, which provides it with a noisy estimate of light intensity, which we denote by $u$. Let $g$ denote a non-linear function relating the average light intensity to the size. Since the amount of light reflected is related to the area of an object, in this example we will consider the simple function $g(v)=v^2$. Let us further assume that the sensory input is noisy; in particular, when the size of the food item is $v$, the perceived light intensity is normally distributed with mean $g(v)$ and variance $Σ_u$ (although a normal distribution is not the best choice for a distribution of light intensity, as it includes negative numbers, we will still use it for simplicity):
$$
p(u|v)=f(u; g(v), Σ_u),
$$
where $f(x;μ,Σ)$ denotes the density of a normal distribution with mean $μ$ and variance $Σ$.
Due to the noise present in the observed light intensity, the animal can refine its guess for the size $v$ by combining the sensory stimulus with the prior knowledge of how large the food items usually are, which it has learnt from experience. For simplicity, let us assume that our animal expects this size to be normally distributed with mean $v_p$ and variance $Σ_p$ (subscript p stands for "prior"), which we can write as:
$$
p(v)=f(v; v_p, Σ_p).
$$
## Exact solution
To compute how likely different sizes $v$ are given the observed sensory input $u$, we could use Bayes’ theorem:
$$
p(v|u)=\frac{p(v)\,p(u|v)}{p(u)}.
$$
Term $p(u)$ in the denominator of equation is a normalization term, which ensures that the posterior probabilities of all sizes $p(v|u)$ integrate to 1:
$$
p(u)= \int p(v) p(u|v) dv.
$$
The integral in the above equation sums over the whole range of possible values of $v$, so it is a definite integral, but for brevity of notation we do not state the limits of integration in this and all other integrals in the paper.
Now, combining the equations above, we can compute numerically how likely different sizes are given the sensory observation. For readers who are not familiar with such Bayesian inference we recommend doing the following exercise now.
### solution to Exercise 1.
Assume that our animal observed the light intensity $u=2$, the level of noise in its receptor is $Σ_u=1$, and the mean and variance of its prior expectation of size are $v_p=3$ and $Σ_p=1$. Write a computer program that computes the posterior probabilities of sizes from 0.01 to 5, and plots them.
```
u_obs = 2 # observation
var_u = 1 # noise in the observation
v_p = 3 # prior expectation
var_p = 1 # variance of prior
def gauss(x, mean, variance):
return 1 / np.sqrt(2* np.pi * variance) * np.exp(- .5 * (x - mean)**2 / variance )
g = lambda v: v**2
sizes = np.linspace(0.01, 5, 100)
fig, axs = plt.subplots(1, 3, figsize=figsize)
prior = gauss(sizes, v_p, var_p)
axs[0].plot(sizes, prior, 'k')
axs[0].set_title('Prior')
for var_u_ in np.logspace(-1, 1, 7, base=10)*var_u:
likelihood = gauss(u_obs, g(sizes), var_u_)
axs[1].plot(sizes, likelihood/likelihood.sum())
axs[1].set_title('Likelihood')
posterior = prior * likelihood
posterior /= posterior.sum()
axs[2].plot(sizes, posterior, label=r'$\Sigma_u^2$ ={0:.2f}'.format(var_u_))
axs[2].set_title('Posterior')
axs[2].legend()
for ax in axs:
ax.set_xlabel('Size')
ax.set_ylabel('Probability')
plt.tight_layout()
if do_save: fig.savefig('../figures/bogacz_1.png', dpi=600)
```
# exercise 2 : an online solution
Let's define $F = \log( p(v)\, p(u|v) )$, the log of the joint density of the prior and the likelihood. Its gradient with respect to $v$ is
$$
\frac{\partial F}{\partial v} = \frac{v_p - v}{\Sigma_p} + \dot{g}(v) \cdot \frac{u - g(v)}{\Sigma_u}
$$
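As a quick sanity check (not part of the original tutorial), the analytic gradient of the log joint density can be compared against a finite-difference approximation, using the same parameter values as in exercise 1:

```python
# Parameters as in exercise 1
v_p, var_p, var_u, u_obs = 3.0, 1.0, 1.0, 2.0
g = lambda v: v**2
dg = lambda v: 2*v

def F(v):
    # log prior + log likelihood, dropping constant terms
    return -(v - v_p)**2 / (2*var_p) - (u_obs - g(v))**2 / (2*var_u)

def dF(v):
    # analytic gradient of F
    return (v_p - v)/var_p + dg(v)*(u_obs - g(v))/var_u

v, eps = 1.3, 1e-6
numeric = (F(v + eps) - F(v - eps)) / (2*eps)
print(abs(numeric - dF(v)) < 1e-5)  # True
```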
```
dg = lambda v: 2*v
T, dt = 5, 0.01
times = np.linspace(0., T, int(T/dt))
v = np.zeros_like(times)
for i_time, time in enumerate(times):
if time == 0 :
v[i_time] = v_p
else:
v[i_time] = v[i_time-1] + dt * ( (v_p - v[i_time-1]) / var_p + dg(v[i_time-1]) * (u_obs - g(v[i_time-1])) / var_u )
# now going online
v = np.zeros_like(times)
eps_u = np.zeros_like(times)
eps_p = np.zeros_like(times)
for i_time, time in enumerate(times):
if time == 0 :
v[i_time], eps_u[i_time], eps_p[i_time] = v_p, 0., 0.
else:
v[i_time] = v[i_time-1] + dt * ( - eps_p[i_time-1] + dg(v[i_time-1]) * eps_u[i_time-1] )
eps_p[i_time] = eps_p[i_time-1] + dt * ( (v[i_time] -v_p) - var_p * eps_p[i_time-1] )
eps_u[i_time] = eps_u[i_time-1] + dt * ( (u_obs-g(v[i_time])) - var_u * eps_u[i_time-1] )
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=figsize)
ax1.plot(times, v)
ax1.set_xlabel('Time (s)')
ax1.set_ylabel('Size')
ax1.set_ylim(0, 5)
ax2.plot(times, v, label=r'$\phi$')
ax2.set_xlabel('Time (s)')
ax2.set_ylabel('Size')
ax2.plot(times, eps_p, 'g--', label=r'$\epsilon_p$')
ax2.plot(times, eps_u, 'r--', label=r'$\epsilon_u$')
ax2.legend();
if do_save: fig.savefig('../figures/bogacz_2.png', dpi=600)
```
# exercise 5 : estimating variance
One may also learn the variance.
The model parameters can hence be optimized by modifying them proportionally to the gradient of $F$. It is straightforward to find the derivatives of $F$ with respect to $v_p$, $Σ_p$ and $Σ_u$:
$$
\frac{\partial F}{\partial v_p}=\frac{\phi-v_p}{\Sigma_p}
$$
$$
\frac{\partial F}{\partial \Sigma_p}=\frac{1}{2}\left(\frac{(\phi-v_p)^2}{\Sigma_p^2}-\frac{1}{\Sigma_p}\right)
$$
$$
\frac{\partial F}{\partial \Sigma_u}=\frac{1}{2}\left(\frac{(u-g(\phi))^2}{\Sigma_u^2}-\frac{1}{\Sigma_u}\right)
$$
From the paper:
> Simulate learning of variance $Σ_i$ over trials. For simplicity, only simulate the network described by Eqs. (59)– (60), and assume that variables ϕ are constant. On each trial generate input $ϕ_i$ from a normal distribution with mean 5 and variance 2, while set $g_i(ϕ_i+1)=5$ (so that the upper level correctly predicts the mean of $ϕ_i$). Simulate the network for 20 time units, and then update weight $Σ_i$ with learning rate $α=0.01$. Simulate 1000 trials and plot how $Σ_i$ changes across trials.
```
mean_u_obs = 5 # observation
var_u = 2 # noise in the observation
var_u_init = 1. # initial guess
v_p = 5 # prior expectation (from node above)
var_p = 1 # variance of prior (from node above)
eta = .01
N_trials = 1000
T, dt = 50, 0.01
times = np.linspace(0., T, int(T/dt))
v = np.zeros_like(times)
e = np.zeros_like(times)
error = np.zeros_like(times)
var_u_ = var_u_init * np.ones(N_trials)
for i_trial in range(1, N_trials):
# making an observation
u_obs = mean_u_obs + np.sqrt(var_u) * np.random.randn()
for i_time, time in enumerate(times):
if time == 0 :
e[i_time], error[i_time] = 0., 0.
else:
error[i_time] = error[i_time-1] + dt * ( (u_obs - v_p) - var_p * e[i_time-1] )
e[i_time] = e[i_time-1] + dt * (var_u_[i_trial-1] * error[i_time-1] - e[i_time-1])
var_u_[i_trial] = var_u_[i_trial-1] + eta * (error[-1]*e[-1] - 1)
fig, ax = plt.subplots(1, 1, figsize=figsize)
ax.plot(var_u_, label='estimate')
ax.plot(var_u_init*np.ones_like(var_u_), 'g--', label='initial guess')
ax.plot(var_u*np.ones_like(var_u_), 'r--', label='(hidden) true value')
ax.set_ylim(0, 2.5)
ax.set_xlabel('trials')
ax.set_ylabel(r'$\Sigma_u$');
ax.legend()
if do_save: fig.savefig('../figures/bogacz_3.png', dpi=600)
```
# Dual PET tracer MAP-EM with Bowsher no motion
This notebook fleshes out the skeleton for the challenge set in the [../Dual_PET notebook](../Dual_PET.ipynb), not including motion.
Authors: Richard Brown, Sam Ellis, Casper da Costa-Luis, Kris Thielemans
First version: 2nd of November 2019
Second version June 2021
CCP PETMR Synergistic Image Reconstruction Framework (SIRF)
Copyright 2019, 2021 University College London
Copyright 2019 King's College London
This is software developed for the Collaborative Computational
Project in Synergistic Reconstruction for Biomedical Imaging.
(http://www.synerbi.ac.uk/).
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# 0a. Some includes and imshow-esque functions
```
# All the normal stuff you've already seen
import notebook_setup
#%% Initial imports etc
import numpy
import matplotlib.pyplot as plt
import os
import sys
import shutil
from tqdm.auto import tqdm, trange
import time
from numba import jit
import sirf.STIR as pet
from sirf_exercises import exercises_data_path
import sirf.Reg as Reg
import sirf.contrib.kcl.Prior as pr
# plotting settings
plt.ion() # interactive 'on' such that plots appear during loops
%matplotlib notebook
#%% some handy function definitions
def imshow(image, limits=None, title=''):
"""Usage: imshow(image, [min,max], title)"""
plt.title(title)
bitmap = plt.imshow(image)
if limits is None:
limits = [image.min(), image.max()]
plt.clim(limits[0], limits[1])
plt.colorbar(shrink=.6)
plt.axis('off')
return bitmap
def make_cylindrical_FOV(image):
"""truncate to cylindrical FOV"""
filter = pet.TruncateToCylinderProcessor()
filter.apply(image)
#%% define a function for plotting images and the updates
# This is the same function as in `ML_reconstruction`
def plot_progress(all_images1,all_images2, title1, title2, subiterations, cmax):
if len(subiterations)==0:
num_subiters = all_images1[0].shape[0]-1
subiterations = range(1, num_subiters+1);
num_rows = len(all_images1);
slice = 60
for iter in subiterations:
plt.figure()
for r in range(num_rows):
plt.subplot(num_rows,2,2*r+1)
imshow(all_images1[r][iter,slice,:,:], [0,cmax], '%s at %d' % (title1[r], iter))
plt.subplot(num_rows,2,2*r+2)
imshow(all_images2[r][iter,slice,:,:], [0,cmax], '%s at %d' % (title2[r], iter))
plt.show();
def subplot_(idx,vol,title,clims=None,cmap="viridis"):
plt.subplot(*idx)
plt.imshow(vol,cmap=cmap)
if not clims is None:
plt.clim(clims)
plt.colorbar()
plt.title(title)
plt.axis("off")
```
# 0b. Input data
Note, we `rebin` the data here to combine 5 segments into 1. This might still be too slow, so feel free to rebin it even more (or, of course, less).
```
# Setup the working directory for the notebook
import notebook_setup
# Get to correct directory
os.chdir(exercises_data_path('Synergistic'))
# copy files to working folder and change directory to where the output files are
shutil.rmtree('working_folder/dual_PET_noMotion',True)
shutil.copytree('Brainweb','working_folder/dual_PET_noMotion')
os.chdir('working_folder/dual_PET_noMotion')
fname_FDG_sino = 'FDG_sino_noisy.hs'
fname_FDG_uMap = 'uMap_small.hv'
# No motion filenames
fname_amyl_sino = 'amyl_sino_noisy.hs'
fname_amyl_uMap = 'uMap_small.hv'
# Motion filenames
# fname_amyl_sino = 'amyl_sino_noisy_misaligned.hs'
# fname_amyl_uMap = 'uMap_misaligned.hv'
full_fdg_sino = pet.AcquisitionData(fname_FDG_sino)
fdg_sino = full_fdg_sino.rebin(5)
fdg_uMap = pet.ImageData(fname_FDG_uMap)
full_amyl_sino = pet.AcquisitionData(fname_amyl_sino)
amyl_sino = full_amyl_sino.rebin(5)
amyl_uMap = pet.ImageData(fname_amyl_uMap)
fdg_init_image=fdg_uMap.get_uniform_copy(1)
make_cylindrical_FOV(fdg_init_image)
amyl_init_image=amyl_uMap.get_uniform_copy(1)
make_cylindrical_FOV(amyl_init_image)
```
Check some sizes
```
print(fdg_sino.get_info())
```
# 0c. Set up normal reconstruction stuff
```
# Code to set up objective function and OSEM reconstructors
def get_obj_fun(acquired_data, atten):
print('\n------------- Setting up objective function')
# #%% create objective function
#%% create acquisition model
am = pet.AcquisitionModelUsingRayTracingMatrix()
am.set_num_tangential_LORs(5)
# Set up sensitivity due to attenuation
asm_attn = pet.AcquisitionSensitivityModel(atten, am)
asm_attn.set_up(acquired_data)
bin_eff = pet.AcquisitionData(acquired_data)
bin_eff.fill(1.0)
asm_attn.unnormalise(bin_eff)
asm_attn = pet.AcquisitionSensitivityModel(bin_eff)
# Set sensitivity of the model and set up
am.set_acquisition_sensitivity(asm_attn)
#am.set_up(acquired_data,atten);
#%% create objective function
obj_fun = pet.make_Poisson_loglikelihood(acquired_data)
obj_fun.set_acquisition_model(am)
print('\n------------- Finished setting up objective function')
return obj_fun
def get_reconstructor(num_subsets, num_subiters, obj_fun, init_image):
print('\n------------- Setting up reconstructor')
#%% create OSEM reconstructor
OSEM_reconstructor = pet.OSMAPOSLReconstructor()
OSEM_reconstructor.set_objective_function(obj_fun)
OSEM_reconstructor.set_num_subsets(num_subsets)
OSEM_reconstructor.set_num_subiterations(num_subiters)
#%% initialise
OSEM_reconstructor.set_up(init_image)
print('\n------------- Finished setting up reconstructor')
return OSEM_reconstructor
num_subsets = 21
num_subiters = 42
```
# 1. Two individual reconstructions *
```
# Some code goes here
```
# 2. Register images *
```
# Some more code goes here
```
# 3. A resample function? *
```
# How about a bit of code here?
```
# 4. Maybe some de Pierro MAP-EM functions
Copy the MAP-EM functions here: the cells below rely on `update_bowsher_weights`, `compute_nhoodIndVec` and `MAPEM_iteration`.
```
fdg_prior = pr.Prior(fdg_init_image.shape)
amyl_prior = pr.Prior(amyl_init_image.shape)
num_bowsher_neighbours = 7
weights_fdg = update_bowsher_weights(fdg_prior,amyl_init_image,num_bowsher_neighbours)
weights_amyl = update_bowsher_weights(amyl_prior,fdg_init_image,num_bowsher_neighbours)
# compute indices of the neighbourhood
nhoodIndVec_fdg=compute_nhoodIndVec(fdg_init_image.shape,weights_fdg.shape)
nhoodIndVec_amyl=compute_nhoodIndVec(amyl_init_image.shape,weights_amyl.shape)
```
You will probably see a run-time warning about `divide by zero` when executing the cell above. This is fine, as the case is actually handled inside the `Prior` class.
# 5. Are we ready?
```
beta = 0.1
fdg_obj_fn = get_obj_fun(fdg_sino,fdg_uMap)
fdg_reconstructor = get_reconstructor(num_subsets,num_subiters,fdg_obj_fn,fdg_init_image)
amyl_obj_fn = get_obj_fun(amyl_sino,amyl_uMap)
amyl_reconstructor = get_reconstructor(num_subsets,num_subiters,amyl_obj_fn,amyl_init_image)
num_subiters=42
current_fdg_image = fdg_init_image.clone()
current_amyl_image = amyl_init_image.clone()
all_images_fdg = numpy.ndarray(shape=(num_subiters+1,) + current_fdg_image.as_array().shape)
all_images_amyl = numpy.ndarray(shape=(num_subiters+1,) + current_amyl_image.as_array().shape)
all_images_fdg[0,:,:,:] = current_fdg_image.as_array()
all_images_amyl[0,:,:,:] = current_amyl_image.as_array()
for it in trange(1, num_subiters+1):
print('outer iteration {}'.format(it))
print('Update FDG weights as fn. of amyloid image')
weights_fdg = update_bowsher_weights(fdg_prior,current_amyl_image,num_bowsher_neighbours)
print('Do FDG de Pierro update')
current_fdg_image = MAPEM_iteration(fdg_reconstructor,current_fdg_image,weights_fdg,nhoodIndVec_fdg,beta)
all_images_fdg[it,:,:,:] = current_fdg_image.as_array()
print('Update the amyloid weights as fn. of FDG image')
weights_amyl = update_bowsher_weights(amyl_prior,current_fdg_image,num_bowsher_neighbours)
print('Do amyloid de Pierro update')
current_amyl_image = MAPEM_iteration(amyl_reconstructor,current_amyl_image,weights_amyl,nhoodIndVec_amyl,beta)
all_images_amyl[it,:,:,:] = current_amyl_image.as_array();
#%% now call this function to see how we went along
plt.figure()
subiterations = (1,2,4,8,16,32,42)
plot_progress([all_images_fdg],[all_images_amyl], ['FDG MAPEM'], ['Amyloid MAPEM'],subiterations, all_images_fdg.max()/2);
```
# What now?
Of course, you can go on to the include the motion estimation in this problem.
The real problem is of course how you will decide if this is a good approach or not. You would have to compare to non-synergistic methods, and indeed other synergistic ones. We are looking forward to your next paper! :-)
Let's take a look at what happens when layers are connected using the functional API.
```
from tensorflow.keras import layers, models
x = layers.Input((1,), name='x')
y = layers.Input((1,), name='y')
h = layers.Dense(1, name='h')
a = layers.Dense(1, name='a')
b = layers.Dense(1, name='b')
print('h._inbound_nodes\n', h._inbound_nodes)
print('h._outbound_nodes\n', h._outbound_nodes)
hx = h(x)
```
A node represents the connection between input tensors and output tensors. Connecting x and h as above creates a node. A node has input tensors and output tensors. When picturing this, you can imagine the inbound node drawn inside the layer and the output tensor outside the layer, or alternatively the inbound node outside the layer and the output tensor inside the layer.
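The bookkeeping described above can be sketched in plain Python. This is a toy model of the idea only (the class and attribute names here are made up, not Keras's internals): calling a layer on a tensor creates a node, which is appended to the called layer's inbound node list and to the producing layer's outbound node list.

```python
class Node:
    def __init__(self, layer, input_tensors, output_tensors):
        self.layer = layer
        self.input_tensors = input_tensors
        self.output_tensors = output_tensors

class Tensor:
    def __init__(self, name, source_layer=None):
        self.name = name
        self.source_layer = source_layer

class Layer:
    def __init__(self, name):
        self.name = name
        self.inbound_nodes = []
        self.outbound_nodes = []

    def __call__(self, tensor):
        out = Tensor(self.name + '(' + tensor.name + ')', source_layer=self)
        node = Node(self, [tensor], [out])
        self.inbound_nodes.append(node)       # the node enters this layer
        if tensor.source_layer is not None:   # ... and leaves the producing layer
            tensor.source_layer.outbound_nodes.append(node)
        return out

x = Tensor('x')
h = Layer('h')
a = Layer('a')
hx = h(x)
ahx = a(hx)
print(len(h.inbound_nodes), len(h.outbound_nodes))  # 1 1
print(h.outbound_nodes[0] is a.inbound_nodes[0])    # True
```

The last line mirrors what the real Keras printouts below show: the node leaving one layer is the very same object as the node entering the next.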
```
print('h._inbound_nodes\n', h._inbound_nodes, '\n')
print('h._inbound_nodes[0].input_tensors\n', h._inbound_nodes[0].input_tensors, '\n')
print('h._inbound_nodes[0].output_tensors\n', h._inbound_nodes[0].output_tensors)
ahx = a(hx)
```
You can see that the node leaving h is the same as the node entering a.
```
print('h._outbound_nodes\n', h._outbound_nodes, '\n')
print('a._inbound_nodes\n', a._inbound_nodes, '\n')
print('h._outbound_nodes[0].input_tensors,\n', h._outbound_nodes[0].input_tensors, '\n')
print('h._outbound_nodes[0].output_tensors,\n', h._outbound_nodes[0].output_tensors, '\n')
print('a._inbound_nodes[0].input_tensors,\n', a._inbound_nodes[0].input_tensors, '\n')
print('a._inbound_nodes[0].output_tensors,\n', a._inbound_nodes[0].output_tensors)
bhx= b(hx)
print('h._outbound_nodes\n', h._outbound_nodes, '\n')
print('b._inbound_nodes\n', b._inbound_nodes, '\n')
print('b._inbound_nodes[0].input_tensors,\n', b._inbound_nodes[0].input_tensors, '\n')
print('b._inbound_nodes[0].output_tensors,\n', b._inbound_nodes[0].output_tensors)
hy = h(y)
print('h._inbound_nodes\n', h._inbound_nodes)
ahy = a(hy)
print('h._outbound_nodes\n', h._outbound_nodes, '\n')
print('a._inbound_nodes\n', a._inbound_nodes)
bhy = b(hy)
print('h._outbound_nodes\n', h._outbound_nodes, '\n')
print('b._inbound_nodes\n', b._inbound_nodes)
```
In the model summary, the 'Connected to' column is formatted as layer name[inbound node index][output tensor index].
```
model = models.Model([x, y], [ahx, bhx, ahy, bhy])
model.summary()
```
Let's look at the structure where several output tensors come out of a single inbound node.
```
from tensorflow.keras import layers, models
x = layers.Input((1,), name='x')
y = layers.Input((1,), name='y')
h = layers.Lambda(lambda x: [x+1, x-1], name='h')
a = layers.Dense(1, name='a')
b = layers.Dense(1, name='b')
print('h._inbound_nodes\n', h._inbound_nodes)
print('h._outbound_nodes\n', h._outbound_nodes)
h0x, h1x = h(x)
h0y, h1y = h(y)
```
You can see that a single inbound node has two output tensors.
```
print('h._inbound_nodes\n', h._inbound_nodes, '\n')
print('h._inbound_nodes[0].input_tensors\n', h._inbound_nodes[0].input_tensors, '\n')
print('h._inbound_nodes[0].output_tensors\n', h._inbound_nodes[0].output_tensors, '\n')
print('h._inbound_nodes[1].input_tensors\n', h._inbound_nodes[1].input_tensors, '\n')
print('h._inbound_nodes[1].output_tensors\n', h._inbound_nodes[1].output_tensors)
ah0x = a(h0x)
bh1x = b(h1x)
ah0y = a(h0y)
bh1y = b(h1y)
print('h._outbound_nodes\n', h._outbound_nodes, '\n')
print('h._outbound_nodes[0].input_tensors\n', h._outbound_nodes[0].input_tensors, '\n')
print('h._outbound_nodes[0].output_tensors\n', h._outbound_nodes[0].output_tensors, '\n')
print('h._outbound_nodes[1].input_tensors\n', h._outbound_nodes[1].input_tensors, '\n')
print('h._outbound_nodes[1].output_tensors\n', h._outbound_nodes[1].output_tensors, '\n')
print('h._outbound_nodes[2].input_tensors\n', h._outbound_nodes[2].input_tensors, '\n')
print('h._outbound_nodes[2].output_tensors\n', h._outbound_nodes[2].output_tensors, '\n')
print('h._outbound_nodes[3].input_tensors\n', h._outbound_nodes[3].input_tensors, '\n')
print('h._outbound_nodes[3].output_tensors\n', h._outbound_nodes[3].output_tensors)
model = models.Model([x, y], [ah0x, bh1x, ah0y, bh1y])
model.summary()
```
Let's look at the node structure when several tensors are merged into one.
```
from tensorflow.keras import layers, models
x = layers.Input((1,), name='x')
y = layers.Input((1,), name='y')
h = layers.Add(name='h')
a = layers.Dense(1, name='a')
b = layers.Dense(1, name='b')
print('h._inbound_nodes\n', h._inbound_nodes)
print('h._outbound_nodes\n', h._outbound_nodes)
hxy = h([x, y])
```
Two tensors enter h, but only a single inbound node is created for h, and that node has two input tensors and one output tensor.
```
print('h._inbound_nodes\n', h._inbound_nodes, '\n')
print('h._inbound_nodes[0].input_tensors\n', h._inbound_nodes[0].input_tensors, '\n')
print('h._inbound_nodes[0].output_tensors\n', h._inbound_nodes[0].output_tensors)
ahxy = a(hxy)
bhxy = b(hxy)
print('h._outbound_nodes\n', h._outbound_nodes, '\n')
print('a._inbound_nodes\n', a._inbound_nodes, '\n')
print('b._inbound_nodes\n', b._inbound_nodes)
model = models.Model([x, y], [ahxy, bhxy])
model.summary()
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Load CSV with tf.data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/tutorials/load_data/text"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/site/en/r2/tutorials/load_data/csv.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial provides an example of how to load CSV data from a file into a `tf.data.Dataset`.
The data used in this tutorial are taken from the Titanic passenger list. We'll try to predict the likelihood a passenger survived based on characteristics like age, gender, ticket class, and whether the person was traveling alone.
## Setup
```
!pip install tensorflow==2.0.0-beta0
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
```
## Load data
So that we know what we're working with, let's look at the top of the CSV file.
```
!head {train_file_path}
```
As you can see, the columns in the CSV are labeled. We need the list later on, so let's read it out of the file.
```
# CSV columns in the input file.
with open(train_file_path, 'r') as f:
names_row = f.readline()
CSV_COLUMNS = names_row.rstrip('\n').split(',')
print(CSV_COLUMNS)
```
The dataset constructor will pick these labels up automatically.
If the file you are working with does not contain the column names in the first line, pass them in a list of strings to the `column_names` argument in the `make_csv_dataset` function.
```python
CSV_COLUMNS = ['survived', 'sex', 'age', 'n_siblings_spouses', 'parch', 'fare', 'class', 'deck', 'embark_town', 'alone']
dataset = tf.data.experimental.make_csv_dataset(
...,
column_names=CSV_COLUMNS,
...)
```
This example is going to use all the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use, and pass it into the (optional) `select_columns` argument of the constructor.
```python
drop_columns = ['fare', 'embark_town']
columns_to_use = [col for col in CSV_COLUMNS if col not in drop_columns]
dataset = tf.data.experimental.make_csv_dataset(
...,
select_columns = columns_to_use,
...)
```
We also have to identify which column will serve as the labels for each example, and what those labels are.
```
LABELS = [0, 1]
LABEL_COLUMN = 'survived'
FEATURE_COLUMNS = [column for column in CSV_COLUMNS if column != LABEL_COLUMN]
```
Now that these constructor argument values are in place, read the CSV data from the file and create a dataset.
(For the full documentation, see `tf.data.experimental.make_csv_dataset`)
```
def get_dataset(file_path):
dataset = tf.data.experimental.make_csv_dataset(
file_path,
batch_size=12, # Artificially small to make examples easier to show.
label_name=LABEL_COLUMN,
na_value="?",
num_epochs=1,
ignore_errors=True)
return dataset
raw_train_data = get_dataset(train_file_path)
raw_test_data = get_dataset(test_file_path)
```
Each item in the dataset is a batch, represented as a tuple of (*many examples*, *many labels*). The data from the examples is organized in column-based tensors (rather than row-based tensors), each with as many elements as the batch size (12 in this case).
It might help to see this yourself.
```
examples, labels = next(iter(raw_train_data)) # Just the first batch.
print("EXAMPLES: \n", examples, "\n")
print("LABELS: \n", labels)
```
## Data preprocessing
### Categorical data
Some of the columns in the CSV data are categorical columns. That is, the content should be one of a limited set of options.
In the CSV, these options are represented as text. This text needs to be converted to numbers before the model can be trained. To facilitate that, we need to create a list of categorical columns, along with a list of the options available in each column.
```
CATEGORIES = {
'sex': ['male', 'female'],
'class' : ['First', 'Second', 'Third'],
'deck' : ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'],
'embark_town' : ['Cherbourg', 'Southhampton', 'Queenstown'],
'alone' : ['y', 'n']
}
```
Write a function that takes a tensor of categorical values, matches it to a list of value names, and then performs a one-hot encoding.
```
def process_categorical_data(data, categories):
"""Returns a one-hot encoded tensor representing categorical values."""
# Remove leading ' '.
data = tf.strings.regex_replace(data, '^ ', '')
# Remove trailing '.'.
data = tf.strings.regex_replace(data, r'\.$', '')
# ONE HOT ENCODE
# Reshape data from 1d (a list) to a 2d (a list of one-element lists)
data = tf.reshape(data, [-1, 1])
# For each element, create a new list of boolean values the length of categories,
# where the truth value is element == category label
data = tf.equal(categories, data)
# Cast booleans to floats.
data = tf.cast(data, tf.float32)
# The entire encoding can fit on one line:
# data = tf.cast(tf.equal(categories, tf.reshape(data, [-1, 1])), tf.float32)
return data
```
To help you visualize this, we'll take a single category-column tensor from the first batch, preprocess it, and show the before and after state.
```
class_tensor = examples['class']
class_tensor
class_categories = CATEGORIES['class']
class_categories
processed_class = process_categorical_data(class_tensor, class_categories)
processed_class
```
Notice the relationship between the lengths of the two inputs and the shape of the output.
```
print("Size of batch: ", len(class_tensor.numpy()))
print("Number of category labels: ", len(class_categories))
print("Shape of one-hot encoded tensor: ", processed_class.shape)
```
### Continuous data
Continuous data needs to be normalized, so that the values fall on a comparable scale (roughly between 0 and 1). To do that, write a function that multiplies each value by 1 over twice the mean of the column values.
The function should also reshape the data into a two dimensional tensor.
```
def process_continuous_data(data, mean):
# Normalize data
data = tf.cast(data, tf.float32) * 1/(2*mean)
return tf.reshape(data, [-1, 1])
```
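As a quick numeric check of this rule (using the `age` column mean that appears later in this tutorial): a value equal to the mean maps to 0.5, and a value at twice the mean maps to 1.0.

```python
def normalize(x, mean):
    # Same scaling rule as process_continuous_data, minus the TensorFlow reshape.
    return x * 1 / (2 * mean)

mean = 29.631308  # the 'age' column mean used in this tutorial
print(normalize(mean, mean))      # 0.5 -- a passenger of exactly mean age
print(normalize(2 * mean, mean))  # 1.0 -- a passenger at twice the mean age
```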
To do this calculation, you need the column means. You would obviously need to compute these in real life, but for this example we'll just provide them.
```
MEANS = {
'age' : 29.631308,
'n_siblings_spouses' : 0.545455,
'parch' : 0.379585,
'fare' : 34.385399
}
```
Again, to see what this function is actually doing, we'll take a single tensor of continuous data and show it before and after processing.
```
age_tensor = examples['age']
age_tensor
process_continuous_data(age_tensor, MEANS['age'])
```
### Preprocess the data
Now assemble these preprocessing tasks into a single function that can be mapped to each batch in the dataset.
```
def preprocess(features, labels):
  # Process categorical features.
for feature in CATEGORIES.keys():
features[feature] = process_categorical_data(features[feature],
CATEGORIES[feature])
# Process continuous features.
for feature in MEANS.keys():
features[feature] = process_continuous_data(features[feature],
MEANS[feature])
# Assemble features into a single tensor.
features = tf.concat([features[column] for column in FEATURE_COLUMNS], 1)
return features, labels
```
Now apply that function with `tf.Dataset.map`, and shuffle the dataset to avoid overfitting.
```
train_data = raw_train_data.map(preprocess).shuffle(500)
test_data = raw_test_data.map(preprocess)
```
And let's see what a single example looks like.
```
examples, labels = next(iter(train_data))
examples, labels
```
The examples are two-dimensional arrays of 12 items each (the batch size). Each item represents a single row in the original CSV file. The labels are a 1d tensor of 12 values.
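To make those shapes concrete without running the TensorFlow pipeline, here is a plain NumPy sketch of the same layout (the feature count of 24 is illustrative, not taken from the dataset):

```python
import numpy as np

batch_size, num_features = 12, 24  # 24 is a made-up post-encoding feature count
examples = np.zeros((batch_size, num_features), dtype=np.float32)  # one row per CSV row
labels = np.zeros((batch_size,), dtype=np.int32)                   # 1d, one label per row

print(examples.shape)  # (12, 24)
print(labels.shape)    # (12,)
```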
## Build the model
This example uses the [Keras Functional API](https://www.tensorflow.org/beta/guide/keras/functional) wrapped in a `get_model` constructor to build up a simple model.
```
def get_model(input_dim, hidden_units=[100]):
"""Create a Keras model with layers.
Args:
    input_dim: (int) The shape of an item in a batch.
    hidden_units: [int] the layer sizes of the DNN (input layer first).
Returns:
A Keras model.
"""
inputs = tf.keras.Input(shape=(input_dim,))
x = inputs
for units in hidden_units:
x = tf.keras.layers.Dense(units, activation='relu')(x)
outputs = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs, outputs)
return model
```
The `get_model` constructor needs to know the input shape of your data (not including the batch size).
```
input_shape, output_shape = train_data.output_shapes
input_dimension = input_shape.dims[1] # [0] is the batch size
```
## Train, evaluate, and predict
Now the model can be instantiated and trained.
```
model = get_model(input_dimension)
model.compile(
loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(train_data, epochs=20)
```
Once the model is trained, we can check its accuracy on the `test_data` set.
```
test_loss, test_accuracy = model.evaluate(test_data)
print('\n\nTest Loss {}, Test Accuracy {}'.format(test_loss, test_accuracy))
```
Use `tf.keras.Model.predict` to infer labels on a batch or a dataset of batches.
```
predictions = model.predict(test_data)
# Show some results
for prediction, survived in zip(predictions[:10], list(test_data)[0][1][:10]):
print("Predicted survival: {:.2%}".format(prediction[0]),
" | Actual outcome: ",
("SURVIVED" if bool(survived) else "DIED"))
```
# Ray Serve - Model Serving Challenges
© 2019-2021, Anyscale. All Rights Reserved

## The Challenges of Model Serving
Model development happens in a data science research environment, which has many challenges of its own but also many tools at the data scientist's disposal.
Model deployment to production faces an entirely different set of challenges and requires different tools, although it is desirable to bridge the divide as much as possible.
Here is a partial lists of the challenges of model serving:
### It Should Be Framework Agnostic
Model serving frameworks must be able to serve models from popular systems like TensorFlow, PyTorch, scikit-learn, or even arbitrary Python functions. Even within the same organization, it is common to use several machine learning frameworks.
Also, machine learning models are typically surrounded by lots of application or business logic. For example, some model serving is implemented as a RESTful service to which scoring requests are made. Often this is too restrictive, as some additional processing, such as fetching additional data from an online feature store, may be desired as part of the scoring process, and the performance overhead of remote calls may be suboptimal.
### Pure Python
It has been common recently for model serving to be done using JVM-based systems, since many production enterprises are JVM-based. This is a disadvantage when model training and other data processing are done using Python tools only.
In general, model serving should be intuitive for developers and simple to configure and run. Hence, it is desirable to use pure Python and to avoid verbose configurations using YAML files or other means.
Data scientists and engineers use Python to develop their machine learning models, so they should also be able to use Python to deploy their machine learning applications. This need is growing more critical as online learning applications combine training and serving in the same applications.
### Simple and Scalable
Model serving must be simple to scale on demand across many machines. It must also be easy to upgrade models dynamically, over time. Achieving production uptime and performance requirements are essential for success.
### DevOps Integrations
Model serving deployments need to integrate with existing "DevOps" CI/CD practices for controlled, audited, and predictable releases. Patterns like [Canary Deployment](https://martinfowler.com/bliki/CanaryRelease.html) are particularly useful for testing the efficacy of a new model before replacing existing models, just as this pattern is useful for other software deployments.
### Flexible Deployment Patterns
There are unique deployment patterns, too. For example, it should be easy to deploy a forest of models, to split traffic to different instances, and to score data in batches for greater efficiency.
See also this [Ray blog post](https://medium.com/distributed-computing-with-ray/the-simplest-way-to-serve-your-nlp-model-in-production-with-pure-python-d42b6a97ad55) on the challenges of model serving and the ways Ray Serve addresses them; it also shows starting with a simple model and then deploying a more sophisticated one into the running application. Along the same lines, the blog post [Serving ML Models in Production Common Patterns](https://www.anyscale.com/blog/serving-ml-models-in-production-common-patterns) discusses common deployment patterns for model serving and how to implement them with Ray Serve. Finally, the introductory webinar [Building a scalable ML model serving API with Ray Serve](https://www.anyscale.com/events/2021/09/09/building-a-scalable-ml-model-serving-api-with-ray-serve) highlights how Ray Serve makes it easy to deploy, operate, and scale a machine learning API.
## Why Ray Serve?
[Ray Serve](https://docs.ray.io/en/latest/serve/index.html) is a scalable, framework-agnostic and Python-first model serving library built on [Ray](https://ray.io).
For users, Ray Serve offers these benefits:
* **Framework Agnostic**: You can use the same toolkit to serve everything from deep learning models built with [PyTorch](https://docs.ray.io/en/latest/serve/tutorials/pytorch.html#serve-pytorch-tutorial), [Tensorflow](https://docs.ray.io/en/latest/serve/tutorials/tensorflow.html#serve-tensorflow-tutorial), or [Keras](https://docs.ray.io/en/latest/serve/tutorials/tensorflow.html#serve-tensorflow-tutorial), to [scikit-Learn](https://docs.ray.io/en/latest/serve/tutorials/sklearn.html#serve-sklearn-tutorial) models, to arbitrary business logic.
* **Python First:** Configure your model serving with pure Python code. No YAML or JSON configurations required.
As a library, Ray Serve enables the following:
* [Splitting traffic between backends dynamically](https://docs.ray.io/en/latest/serve/advanced.html#serve-split-traffic) with zero downtime. This is accomplished by decoupling routing logic from response handling logic.
* [Support for batching](https://docs.ray.io/en/latest/serve/advanced.html#serve-batching) to improve performance helps you meet your performance objectives. You can also use a model for batch and online processing.
* Because Serve is a library, it's easy to integrate it with other tools in your environment, such as CI/CD.
Since Serve is built on Ray, it also allows you to scale to many machines, in your datacenter or in cloud environments, and it allows you to leverage all of the other Ray frameworks.
## Two Simple Ray Serve Examples
We'll explore a more detailed example in the next lesson, where we actually serve ML models. Here we explore how simple deployments are with Ray Serve! We will first use a function that does "scoring," sufficient for _stateless_ scenarios, then use a class, which enables _stateful_ scenarios.
But first, initialize Ray as before:
```
import ray
from ray import serve
import requests # for making web requests
```
Now we initialize Serve itself:
```
serve.start()
```
Next, let's define our stateless function for processing requests. As with Ray tasks, we can decorate this function with `@serve.deployment`, meaning it will be deployed on Ray Serve as a function to which we can send HTTP requests.
It takes in a `request`, extracts the request parameter with key "name", and returns an echoed string.
This simple example illustrates that Ray Serve can also serve plain Python functions.
```
@serve.deployment
def hello(request):
name = request.query_params["name"]
return f"Hello {name}!"
```
Use the `<func_name>.deploy()` method to deploy it on Ray Serve.
```
hello.deploy()
```
Now send some requests to our Python function
```
for i in range(10):
response = requests.get(f"http://127.0.0.1:8000/hello?name=request_{i}").text
print(f'{i:2d}: {response}')
```
You should see `Hello request_N!` in the output.
Now let's serve another "model" in the same service:
```
@serve.deployment
class Counter:
def __init__(self):
self.count = 0
def __call__(self, *args):
self.count += 1
return {"count": self.count}
Counter.deploy()
for i in range(10):
response = requests.get(f"http://127.0.0.1:8000/Counter?i={i}").json()
print(f'{i:2d}: {response}')
```
## Ray Serve Concepts
For more details, see this [key concepts](https://docs.ray.io/en/latest/serve/index.html) documentation.
```
serve.list_deployments()
serve.shutdown()
```
# Visualization
- Matplotlib
- Seaborn
- ggplot
- plotly
- Bokeh + ipywidgets
- Inline network displays
- GUI programming example with wxPython
- ipywidgets + networkx
TODO:
- Vispy
```
$ conda install ipywidgets wxpython networkx seaborn matplotlib
$ jupyter nbextension enable --py widgetsnbextension --sys-prefix
$ conda install -c bokeh bokeh
```
### Standard plots with Matplotlib: line, scatter, chart, contour, heatmap
Control every detail of your graphics programmatically.
Have a look at more examples:
http://matplotlib.org/gallery.html
```
#ipython magic command
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
matplotlib.rcParams.update({'font.size': 18, 'font.family': 'arial'})
x = np.linspace(0.1, 10, 100)
y1 = np.sin(x)
y2 = np.sin(1/x)
#y2 = np.exp(x)
fig, ax = plt.subplots(figsize=(13,4))
#axes[0].plot(x, y1, x, y2)
ax.plot(x, y1, label="y = sin(x)")
ax.plot(x, y2, label=r"$sin(\frac{1}{x})$", color="#05cd90", lw=2, ls='dotted', marker='o', markersize=4)
ax.set_xlabel('time')
ax.set_ylabel('harmonics')
ax.set_ylim([- 1.5, 1.5])
#ax.set_yscale("log")
ax.set_title("Line plots")
ax.legend(loc=2)
yticks = [0, 1]
plt.annotate('wow!', xy=(8, 1), xytext=(7, -1), arrowprops=dict(facecolor='black', shrink=0.05))
ax.set_yticks(yticks)
ax.set_yticklabels([r"rest $\alpha$", r"disturbed $\delta$"], fontsize=18)
#fig.savefig("filename.png")
#fig.savefig("filename.png", dpi=100)
plt.show()
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
fig, axes = plt.subplots(2, 2, figsize=(10,8))
matplotlib.rcParams.update({'font.size': 10, 'font.family': 'arial'})
x = 10*np.random.random(100)
y = 10*np.random.random(100)
def f(x,y): return np.sin(x)**2+np.sin(y)**2
axes[0][0].scatter(x, y, c=f(x,y), cmap=plt.cm.Blues)
axes[0][0].set_title("scatter plot")
x1 = np.linspace(-10,10,100)
y1 = np.zeros(100)+5.0
axes[0][1].bar(x1, f(x1,y1), align="center", width=0.5, alpha=0.5)
axes[0][1].set_title("bar plot")
x = np.linspace(-10,10,100)
y = np.linspace(-10,10,100)
X,Y = np.meshgrid(x, y)
Z = f(X,Y)
axes[1][0].pcolor(x, y, Z, cmap=plt.cm.Blues, vmin=np.abs(Z).min(), vmax=np.abs(Z).max())
#axes[1][1].pcolor(x, y, Z, cmap=plt.cm.RdBu, vmin=np.abs(Z).min(), vmax=np.abs(Z).max())
axes[1][0].set_title("colour map (heatmap)")
axes[1][1].contour(Z.T, cmap=plt.cm.Blues, vmin=np.abs(Z).min(), vmax=np.abs(Z).max(), extent=[-10, 10, -10, 10])
axes[1][1].set_title("contour plot")
```
## Seaborn
Matplotlib was the first big library for visualization in Python. Seaborn, which builds on top of it, offers a much easier entry point at the cost of less fine-grained customization, and it looks great! Here is a violin plot example. Check out more examples in the [gallery](https://seaborn.pydata.org/examples/).
```
import seaborn as sns
sns.set(style="whitegrid", palette="pastel", color_codes=True)
# Load the example tips dataset
tips = sns.load_dataset("tips")
# Draw a nested violinplot and split the violins for easier comparison
sns.violinplot(x="day", y="total_bill", hue="sex", data=tips, split=True,
inner="quart", palette={"Male": "b", "Female": "y"})
sns.despine(left=True)
```
### Web publishing with plotly and Bokeh
[Bokeh](http://bokeh.pydata.org/en/latest/) renders interactive graphics in the browser through its own BokehJS client library. This makes it easy to add web interaction, and it also produces very nice publication-quality graphics.
```
conda install bokeh
conda install -n base -c conda-forge jupyterlab_widgets
conda install -n biopy37 -c conda-forge ipywidgets
```
```
from ipywidgets import interact
import numpy as np
from bokeh.io import push_notebook, show, output_notebook
from bokeh.plotting import figure
output_notebook()
x = np.linspace(0, 2*np.pi, 2000)
y = np.sin(x)
p = figure(title="simple line example", plot_height=300, plot_width=600, y_range=(-5,5))
r = p.line(x, y, color="#2222aa", line_width=3)
def update(f, w=1, A=1, phi=0):
if f == "sin": func = np.sin
elif f == "cos": func = np.cos
elif f == "tan": func = np.tan
r.data_source.data['y'] = A * func(w * x + phi)
push_notebook()
show(p, notebook_handle=True)
```
### Using Jupyter's ipywidgets
`conda install -c conda-forge ipywidgets`
```
from IPython.display import display
from ipywidgets import *
interact(update, f=["sin", "cos", "tan"], w=(0,100), A=(1,5), phi=(0, 20, 0.1))
from IPython.display import display
from ipywidgets import *
w = IntSlider()
display(w)
def f(x):
return x
interact(f, x=10);
```
### Network layout and display
- If you hit a weird decorator error, run:
```
conda install -c conda-forge decorator
```
TODO: not working with the latest matplotlib
```
%matplotlib inline
import networkx as nx
net = nx.barabasi_albert_graph(100, 1)
nx.write_gml(net,"mynetwork.gml")
import matplotlib.pyplot as plt
nx.draw(net)
#nx.draw(net,pos=nx.spring_layout(net, scale = 5000, iterations = 10))
%matplotlib inline
from ipywidgets import interact
import matplotlib.pyplot as plt
import networkx as nx
# wrap a few graph generation functions so they have the same signature
def random_lobster(n, m, k, p):
return nx.random_lobster(n, p, p / m)
def powerlaw_cluster(n, m, k, p):
return nx.powerlaw_cluster_graph(n, m, p)
def erdos_renyi(n, m, k, p):
return nx.erdos_renyi_graph(n, p)
def newman_watts_strogatz(n, m, k, p):
return nx.newman_watts_strogatz_graph(n, k, p)
def plot_random_graph(n, m, k, p, generator):
g = generator(n, m, k, p)
nx.draw(g)
plt.show()
interact(plot_random_graph, n=(2,30), m=(1,10), k=(1,10), p=(0.0, 1.0, 0.001),
generator={'lobster': random_lobster,
'power law': powerlaw_cluster,
'Newman-Watts-Strogatz': newman_watts_strogatz,
u'Erdős-Rényi': erdos_renyi,
});
```
Bokeh + networkx example:
```
import networkx as nx
from bokeh.io import output_file, show
from bokeh.models import (BoxZoomTool, Circle, HoverTool,
MultiLine, Plot, Range1d, ResetTool,)
from bokeh.palettes import Spectral4
from bokeh.plotting import from_networkx
# Prepare Data
G = nx.karate_club_graph()
SAME_CLUB_COLOR, DIFFERENT_CLUB_COLOR = "black", "red"
edge_attrs = {}
for start_node, end_node, _ in G.edges(data=True):
edge_color = SAME_CLUB_COLOR if G.nodes[start_node]["club"] == G.nodes[end_node]["club"] else DIFFERENT_CLUB_COLOR
edge_attrs[(start_node, end_node)] = edge_color
nx.set_edge_attributes(G, edge_attrs, "edge_color")
# Show with Bokeh
plot = Plot(plot_width=400, plot_height=400,
x_range=Range1d(-1.1, 1.1), y_range=Range1d(-1.1, 1.1))
plot.title.text = "Graph Interaction Demonstration"
node_hover_tool = HoverTool(tooltips=[("index", "@index"), ("club", "@club")])
plot.add_tools(node_hover_tool, BoxZoomTool(), ResetTool())
graph_renderer = from_networkx(G, nx.spring_layout, scale=1, center=(0, 0))
graph_renderer.node_renderer.glyph = Circle(size=15, fill_color=Spectral4[0])
graph_renderer.edge_renderer.glyph = MultiLine(line_color="edge_color", line_alpha=0.8, line_width=1)
plot.renderers.append(graph_renderer)
#output_file("interactive_graphs.html")
output_notebook()
show(plot)
```
Example of network interactivity in Jupyter:
- https://github.com/cytoscape/cytoscape-jupyter-widget
- https://ipycytoscape.readthedocs.io/en/latest/
## ipywidgets
```
from ipywidgets import widgets
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def plot(amplitude, color):
fig, ax = plt.subplots(figsize=(4, 3),
subplot_kw={'axisbelow':True})
ax.grid(color='w', linewidth=2, linestyle='solid')
x = np.linspace(0, 10, 1000)
ax.plot(x, amplitude * np.sin(x), color=color,
lw=5, alpha=0.4)
ax.set_xlim(0, 10)
ax.set_ylim(-1.1, 1.1)
return fig
from ipywidgets import interact, FloatSlider, RadioButtons
interact(plot,
amplitude=FloatSlider(min=0.1, max=1.0, step=0.1),
color=RadioButtons(options=['blue', 'green', 'red']))
from ipywidgets import *
IntSlider()
widgets.Select(
description='OS:',
options=['Linux', 'Windows', 'OSX'],
)
import ipywidgets as widgets
from IPython.display import display
name = widgets.Text(description='Name:', padding=4)
#name.layout.padding = 4
color = widgets.Dropdown(description='Color:', options=['red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet'])
#color.layout.padding = 4
page1 = widgets.Box(children=[name, color])
#page1.layout.padding = 4
age = widgets.IntSlider(description='Age:', min=0, max=120, value=50)
#age.layout.padding = 4
gender = widgets.RadioButtons(description='Gender:', options=['male', 'female'])
#gender.layout.padding = 4
page2 = widgets.Box(children=[age, gender])
#page2.layout.padding = 4
tabs = widgets.Tab(children=[page1, page2])
display(tabs)
tabs.set_title(0, 'Name')
tabs.set_title(1, 'Details')
```
cf. p. 60, McDowell, *Cracking the Coding Interview*, 6th Ed.: What You Need to Know (Core Data Structures, Algorithms, and Concepts)
| Data Structures | Algorithms | Concepts |
| :-- | -- | :-- |
| Linked Lists | Breadth-First Search | Bit Manipulation |
| Trees, Tries, & Graphs | Depth-First Search | Memory (Stack vs. Heap) |
| Stacks & Queues | Binary Search | Recursion |
| Heaps | Merge Sort | Dynamic Programming |
| Vectors/ArrayLists | QuickSort | Big O Time & Space |
| Hash Tables | | |
http://interactivepython.org/runestone/static/pythonds/BasicDS/ImplementingaStackinPython.html
```
class Stack:
def __init__(self):
self.items = []
def isEmpty(self):
return self.items == []
def push(self, item):
self.items.append(item)
def pop(self):
return self.items.pop()
def peek(self):
return self.items[len(self.items)-1]
def size(self):
return len(self.items)
s=Stack()
print(s.isEmpty())
s.push(4)
s.push('dog')
print(s.peek())
s.push(True)
print(s.size())
print(s.isEmpty())
s.push(8.4)
print(s.items)
print(s.pop())
print(s.pop())
print(s.size())
```
cf. Ch. 3, Stacks and Queues, Cracking the Coding Interview, 6th Ed., McDowell. A stack uses LIFO (last-in, first-out) ordering: as in a stack of dinner plates, the most recent item added to the stack is the first item to be removed.
```
class Queue:
def __init__(self):
self.items = []
def add(self,item):
self.items.append( item )
    def remove(self):
        # pop(0) removes and returns the oldest item (FIFO order)
        return self.items.pop(0)
def peek(self):
return self.items[0]
def isEmpty(self):
return self.items == []
```
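A quick FIFO demonstration, using a compact restatement of the `Queue` above (here `remove` also returns the removed item, which makes it easier to inspect):

```python
class Queue:
    def __init__(self): self.items = []
    def add(self, item): self.items.append(item)
    def remove(self): return self.items.pop(0)  # oldest item out first
    def peek(self): return self.items[0]
    def isEmpty(self): return self.items == []

q = Queue()
for x in ['a', 'b', 'c']:
    q.add(x)
print(q.remove())   # a  -- first in, first out
print(q.peek())     # b
print(q.isEmpty())  # False
```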
cf. [3.6. Simple Balanced Parentheses](http://interactivepython.org/runestone/static/pythonds/BasicDS/SimpleBalancedParentheses.html)
```
def parChecker(symbolString):
s = Stack()
balanced = True
index = 0
while index < len(symbolString) and balanced:
symbol = symbolString[index]
if symbol == "(":
s.push(symbol)
else:
if s.isEmpty():
balanced = False
else:
s.pop()
index = index + 1
if balanced and s.isEmpty():
return True
else:
return False
print(parChecker('((()))'))
print(parChecker('(()'))
def parChecker(symbolString):
s = Stack()
balanced = True
index = 0
while index < len(symbolString) and balanced:
symbol = symbolString[index]
if symbol in "([{":
s.push(symbol)
else:
if s.isEmpty():
balanced = False
else:
top = s.pop()
if not matches(top,symbol):
balanced = False
index = index + 1
if balanced and s.isEmpty():
return True
else:
return False
def matches(open,close):
opens = "([{"
closers = ")]}"
return opens.index(open) == closers.index(close)
print(parChecker('{{([][])}()}') )
print(parChecker('[{()]'))
def divideBy2(decNumber):
remstack = Stack()
while decNumber >0:
rem = decNumber % 2
remstack.push(rem)
decNumber = decNumber // 2
binString = ""
while not remstack.isEmpty():
binString = binString + str(remstack.pop())
return binString
print(divideBy2(42))
divideBy2(233)
def baseConverter(decNumber,base):
digits = "0123456789ABCDEF"
remstack=Stack()
while decNumber >0:
rem = decNumber % base
remstack.push(rem)
decNumber = decNumber // base
newString = ""
while not remstack.isEmpty():
newString = newString + digits[remstack.pop()]
return newString
print(baseConverter(25,2))
print(baseConverter(25,16))
print(baseConverter(25,8))
print(baseConverter(256,16))
print(baseConverter(26,26))
```
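One way to sanity-check these conversions: Python's built-in `int` accepts a base argument, so each printed string can be round-tripped back to the original number.

```python
# Round-trip the converter outputs shown above.
assert int('11001', 2) == 25    # baseConverter(25, 2)
assert int('19', 16) == 25      # baseConverter(25, 16)
assert int('31', 8) == 25       # baseConverter(25, 8)
assert int('100', 16) == 256    # baseConverter(256, 16)
print('all conversions round-trip')
```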
# Linked List
Advantages over arrays: 1) dynamic size; 2) ease of insertion/deletion.
Drawbacks: 1) random access is not allowed, elements must be accessed sequentially; 2) extra memory is needed for the pointer in each node.
```
# Node class
class Node:
# Function to initialize the node object
def __init__(self,data):
self.data = data # Assign data
self.next= None # Initialize
# next as null
# Linked List class
class LinkedList:
# Function to initialize the Linked
# List object
def __init__(self):
self.head = None
# This function prints contents of linked list
# starting from head
# traversal of a linked list
def printList(self):
temp = self.head
while (temp):
            print(temp.data)
temp = temp.next
# Start with the empty list
llist = LinkedList()
llist.head = Node(1)
second = Node(2)
third = Node(3)
llist.head.next = second; # Link 1st node with second
second.next = third
llist.printList()
class Node:
def __init__(self,val):
self.val = val
self.next = None
class LinkedList:
def __init__(self,val=None):
if val is None:
self.head = None
else:
self.head = Node(val)
def insertEnd(self,val):
temp = self.head
while(temp.next): # check if temp has a next
temp = temp.next # keep traversing
temp.next = Node(val)
def printList(self):
temp = self.head
while (temp): # stop when temp is a None, which could happen with next in Node
            print(temp.val)
temp = temp.next
llist = LinkedList(1)
llist.printList()
llist.insertEnd(2)
llist.printList()
llist.insertEnd(4)
llist.printList()
llist.insertEnd(2)
llist.printList()
llist.insertEnd(6)
llist.printList()
llist.insertEnd(7)
llist.printList()
```
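To illustrate the "ease of insertion" advantage mentioned above, here is a minimal self-contained sketch (Python 3 syntax; the `insertBegin` and `toList` names are ours) of O(1) insertion at the head, which needs no traversal, unlike `insertEnd`:

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def insertBegin(self, val):
        # O(1): the new node simply points at the old head.
        node = Node(val)
        node.next = self.head
        self.head = node

    def toList(self):
        # Collect values by traversing from the head.
        out, temp = [], self.head
        while temp:
            out.append(temp.val)
            temp = temp.next
        return out

ll = LinkedList()
for v in [3, 2, 1]:
    ll.insertBegin(v)
print(ll.toList())  # [1, 2, 3]
```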
<a href="https://colab.research.google.com/github/Kristina140699/DataScienceProjects/blob/main/Intermediate/Project_on_Indian_food_analysisEDA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive #mounting my G-drive to G-colab
drive.mount('/content/drive')
```
#Dataset Details:
The dataset consists of about **255** Indian dishes and **9** columns associated with each of them.
The **9** columns are as follows:

- **name**: name of the dish
- **ingredients**: main ingredients used
- **diet**: type of diet - either vegetarian or non-vegetarian
- **prep_time**: preparation time
- **cook_time**: cooking time
- **flavor_profile**: whether the dish is spicy, sweet, bitter, etc.
- **course**: course of the meal - starter, main course, dessert, etc.
- **state**: state where the dish is famous or originated
- **region**: region the state belongs to
```
import pandas as pd
import numpy as np
import plotly.express as px
from plotly.offline import init_notebook_mode
import matplotlib.pyplot as plt
%matplotlib inline
from wordcloud import WordCloud , ImageColorGenerator
data = pd.read_csv("/content/drive/MyDrive/Learning..../Data/Project /indian_food.csv")
data
data.head()
data.shape
data.columns
data.info()
data.isnull().any()
data.isnull().sum()
data=data.replace(-1,np.nan)
data=data.replace('-1',np.nan)
data.head()
data.isnull().sum()
data.shape
pie_data = data.diet.value_counts().reset_index()
pie_data.columns = ['diet','count']
fig = px.pie(pie_data, values='count', names='diet', title='Proportion of Vegetarian and Non-Vegetarian dishes',
color_discrete_sequence=['green', 'red'])
fig.show()
pie_data = data.flavor_profile.value_counts().reset_index()
pie_data.columns = ['flavor_profile','count']
fig = px.pie(pie_data, values='count', names='flavor_profile', title='Proportion of flavour of dishes',
color_discrete_sequence=[ 'red','pink','olive', 'orange'])
fig.show()
# display the same distribution as a bar graph
flav_data = data.flavor_profile.value_counts().reset_index()
flav_data.columns = ['flavor_profile', 'count']  # value_counts() yields counts, not prep_time
fig = px.bar(flav_data, x='flavor_profile', y='count', title='Number of dishes per flavour profile',
             color_discrete_sequence=['green'])
fig.show()
sweet_data = data[data['flavor_profile']=='sweet']
final_sweet_data = sweet_data[sweet_data['course']!='dessert']
final_sweet_data
cooking_time= data[['cook_time','name']]
cooking_time.head()
cooking_time=cooking_time.sort_values(['cook_time'],ascending=True)
ten_cook_quickly=cooking_time.head(10)
ten_cook_quickly
#cook_data = ten_cook_quickly.cook_time.value_counts().reset_index()
#cook_data.columns = ['cook_time', 'name']
fig = px.bar(ten_cook_quickly, x='cook_time', y='name', title='ten quickest dishes by cooking time',
             color_discrete_sequence=['green'])
fig.show()
data.columns
cooking_time_longest=cooking_time.sort_values(['cook_time'],ascending=False)
tencooking_time_longest=cooking_time_longest.head(10)
tencooking_time_longest
fig = px.bar(tencooking_time_longest, x='cook_time', y='name', title='ten longest-cooking dishes',
             color_discrete_sequence=['red'])
fig.show()
import matplotlib.pyplot as plt
plt.figure(figsize=(17,5))
y=tencooking_time_longest['cook_time']
x=tencooking_time_longest['name']
plt.plot(x,y)
plt.title('dishes based on cooking time')
plt.show()
import seaborn as sns
sns.pairplot(data,diag_kind='kde', hue='course', height=5)
r=sns.pairplot(data,diag_kind='kde', hue='region', height=5)
r.fig.set_size_inches(20,15)
g=sns.pairplot(data,diag_kind='auto')
g.fig.set_size_inches(20,10)
```
| github_jupyter |
```
# load things
%matplotlib inline
from pyCDFTOOLS.cdfmocsig import *
import numpy as np
import glob
import netCDF4, copy
# load data and do relevant processing on the native grid
data_dir = "/home/julian/data/NEMO_data/eORCA1-LIM3/default/"
# grab the MOC_V list
file_list = []
for file in glob.glob(data_dir + "*MOC_V*.nc"):
file_list.append(file)
# sort it according to the timestamps
file_list.sort()
# generate bins from presets (available for 0, 1000, 2000m depth), or define it in a dictionary as
# bins = {"pref" : 2000,
# "nbins" : 158,
# "sigmin" : 30.0,
# "sigstp" : 0.05}
bins = sigma_bins(2000)
# for putting extra options in
# -- kt = number for using a specified time entry (python indexing)
# -- kz = number for using a specified vertical level/layer (python indexing)
# -- lprint = True for printing out variable names in netcdf file
# -- lverb = True for printing out more information
# -- lg_vvl = True for using s-coord (time-varying metric)
# -- ldec = True decompose the MOC into some components
# -- leiv = True for adding the eddy induced velocity component
# eivv_var = string for EIV-v variable name
# -- lisodep = True (not yet implemented) output zonal averaged isopycnal depth
# -- lntr = True (not yet implemented) do binning with neutral density
kwargs = {"kt" : 0,
"lprint" : False,
"lg_vvl" : False,
"ldec" : False,
"leiv" : True, "eivv_var" : "voce_eiv",
"lisodep": True,
"lntr" : False}
# cycle through the file lists and compile
print("%g files found, cycling through them..." % len(file_list))
for i in range(len(file_list)):
fileV = file_list[i].replace(data_dir, "") # strip out the data_dir
fileT = fileV.replace("_V_", "_T_") # replace V with T
print(" ")
print("working in file = %g / %g" % (i + 1, len(file_list)))
if i == 0:
sigma, depi_temp, latV, dmoc_temp, opt_dic = cdfmocsig_tave(data_dir, fileV, "voce", fileT, "toce", "soce", bins, **kwargs)
dmoc = dmoc_temp / len(file_list)
depi = depi_temp / len(file_list)
else:
_, depi_temp, _, dmoc_temp, _ = cdfmocsig_tave(data_dir, fileV, "voce", fileT, "toce", "soce", bins, **kwargs)
dmoc += dmoc_temp / len(file_list)
depi += depi_temp / len(file_list)
print(" ")
print("returning final time-averaged field")
# save it if need be
latV_mesh = np.zeros(depi[0, :, :].shape)
for j in range(latV_mesh.shape[0]):
latV_mesh[j, :] = latV
# open a new netCDF file for writing.
ncfile = netCDF4.Dataset("moc_tave_testing.nc", "w", format = "NETCDF4")
ncfile.title = "diagnosed MOC in density co-ordinates"
# create the dimensions.
ncfile.createDimension("sigma", sigma.shape[0])
ncfile.createDimension("y", latV.shape[0])
ncfile.createDimension("time", len(np.asarray([0.0])))
# first argument is name of variable,
# second is datatype,
# third is a tuple with the names of dimensions.
lat_netcdf = ncfile.createVariable("latV", np.dtype("float32").char, "y")
lat_netcdf[:] = latV
lat_netcdf.units = "deg"
lat_netcdf.long_name = "y"
sigma_netcdf = ncfile.createVariable("sigma", np.dtype("float32").char, "sigma")
sigma_netcdf[:] = sigma
sigma_netcdf.units = "kg m-3"
sigma_netcdf.long_name = "sigma"
# global moc
moc_glob_tave_netcdf = ncfile.createVariable("rmoc_glob_tave", np.dtype("float32").char,
("time", "sigma", "y"), fill_value = 1e20)
moc_glob_tave_netcdf[:] = dmoc[0, :, :]
moc_glob_tave_netcdf.units = "Sv"
moc_glob_tave_netcdf.long_name = "global RMOC in density co-ordinates"
# add the other MOCs and variables here after figuring out how to split them
if opt_dic["lbas"]:
print("basin decomposition option not implemented")
if opt_dic["lisodep"]:
latV_mesh_netcdf = ncfile.createVariable("latV_mesh", np.dtype("float32").char, ("sigma", "y"))
latV_mesh_netcdf[:] = latV_mesh
latV_mesh_netcdf.units = "deg"
latV_mesh_netcdf.long_name = "latV in a mesh"
isodep_glob_tave_netcdf = ncfile.createVariable("isodep_glob_tave", np.dtype("float32").char,
("time", "sigma", "y"), fill_value = 1e20)
isodep_glob_tave_netcdf[:] = depi[0, :, :]
isodep_glob_tave_netcdf.units = "m"
isodep_glob_tave_netcdf.long_name = "global mean isopycnal depth"
if opt_dic["lbas"]:
print("basin decomposition option not implemented")
# close the file.
ncfile.close()
print("*** SUCCESS writing example file! ***")
```
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append('../../pyutils')
import metrics
import utils
```
# Adaboost.M1
The idea of boosting is to combine many weak classifiers into a strong one.
A weak classifier is one slightly better than random guessing.
Let's define the error rate:
$$\bar{\text{err}} = \frac{1}{N} \sum_{i=1}^N I(y_i \neq G(x_i))$$
Adaboost combines $M$ weak classifiers:
$$G(x) = \text{sign} \left( \sum_{m=1}^M \alpha_m G_m(x) \right) $$
$\alpha$ is the contribution vector of the classifiers; these weights are learned, such that a better model has a higher $\alpha_m$.
The classifiers are trained one by one, on weighted examples $w_i$. At first all examples have the same weight; then at each iteration the weight of misclassified examples increases, and that of the others decreases.
Algorithm $10.1$ page $339$
```
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from copy import deepcopy
X, y = load_digits().data, load_digits().target
y = (y < 5).astype(np.int32)
print(X.shape)
print(y.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
random_state=15)
logreg = LogisticRegression(solver='liblinear')
logreg.fit(X_train, y_train)
print('train acc:', np.mean(y_train == logreg.predict(X_train)))
print('test acc:', np.mean(y_test == logreg.predict(X_test)))
class AdaboostM1:
def __init__(self, model, M):
self.model = model
self.M = M
self.mods = []
self.alpha = np.empty(M)
def fit(self, X, y):
N = len(X)
w = np.ones(N) / N
for m in range(self.M):
clf = deepcopy(self.model)
clf.fit(X, y, w)
self.mods.append(clf)
preds = clf.predict(X)
err = np.sum(w * (preds != y)) / np.sum(w)
self.alpha[m] = np.log((1 - err) / err)
w = w * np.exp(self.alpha[m] * (preds != y))
def predict(self, X):
preds = np.zeros(len(X))
for m in range(self.M):
preds += self.alpha[m] * self.mods[m].predict(X)
preds = np.round(preds / np.sum(self.alpha)).astype(np.int32)
return preds
mod = DecisionTreeClassifier(max_depth=1)
clf = AdaboostM1(mod, 500)
clf.fit(X_train, y_train)
print('train acc:', np.mean(y_train == clf.predict(X_train)))
print('test acc:', np.mean(y_test == clf.predict(X_test)))
```
# Boosting Fits an Additive Model
Boosting is just a special case of additive models:
$$f(x) = \sum_{m=1}^M \beta_m b(x;\gamma_m)$$
$b(x;\gamma_m)$ are simple functions of argument $x$ and parameters $\gamma_m$. For boosting, each basis function is a weak classifier.
These models are trained by fitting $\beta$ and $\gamma$ minimizing a loss function over the dataset:
$$\min_{\{\beta_m, \gamma_m\}_1^M} \sum_{i=1}^N L\left(y_i, \sum_{m=1}^M \beta_m b(x_i;\gamma_m)\right)$$
for any loss function $L(y, f(x))$ such as squared-error or negative log-likelihood.
# Forward Stagewise Additive Modeling
This algorithm finds an approximate solution by solving a simpler problem. It starts with an empty model and adds one basis function at a time, fitting it without modifying the parameters of the previous ones.
The problem is a lot simpler to optimize:
$$\min_{\beta_m, \gamma_m} \sum_{i=1}^N L(y_i, f_{m-1}(x_i) + \beta_m b(x_i;\gamma_m))$$
# Exponential Loss and Adaboost
AdaBoost.M1 is equivalent to forward stagewise additive modeling using the exponential loss:
$$L(y, f(x)) = \exp (-yf(x))$$
The problem is:
$$(\beta_m, G_m) = \min_{\beta, G} \sum_{i=1}^N \exp [-y_i(f_{m-1}(x_i) + \beta G(x_i))]$$
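Carrying out this minimization (a sketch of the standard solution, with $\text{err}_m$ denoting the weighted error rate of $G_m$) gives:

$$G_m = \arg \min_{G} \sum_{i=1}^N w_i^{(m)} I(y_i \neq G(x_i))$$

$$\beta_m = \frac{1}{2} \log \frac{1 - \text{err}_m}{\text{err}_m}$$

so the AdaBoost.M1 weight $\alpha_m = \log \frac{1 - \text{err}_m}{\text{err}_m}$ is exactly $2 \beta_m$.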
# Why Exponential Loss ?
The principal attraction is computational: additive modeling with the exponential loss leads to the simple modular reweighting algorithm of AdaBoost.
$$f^*(x) = \arg \min_{f(x)} E_{y|x} \exp(-yf(x)) = \frac{1}{2} \log \frac{P(y=1|x)}{P(y=-1|x)}$$
Thus AdaBoost estimates one-half of the log-odds, which justifies using the sign operator.
Another loss is the deviance loss:
$$l(y, f(x)) = \log (1 + e^{-2yf(x)})$$
At the population level, using either criterion leads to the same solution, but this is not true for finite datasets.
# Loss Functions and Robustness
## Robust Loss functions for classification
Deviance and exponential loss are both monotone decreasing functions of the margin $yf(x)$.
With $G(x) = \text{sign}(f(x))$, observations with positive margin are classified correctly, and those with negative margin are misclassified.
Any loss criterion should penalize negative margins more heavily than positive ones.
The difference between the deviance and the exponential loss is how much they penalize negative margins. The penalty for the deviance increases linearly, whereas the one for the exponential loss increases exponentially. In noisy settings, with misclassifications in the training data, the deviance gives better results.
Mean squared error increases quadratically when $yf(x) > 1$, therefore increasing the error for correctly classified examples of increasing certainty. Thus MSE is a terrible choice of loss function for classification.
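As a quick numerical illustration of the comparison above (a minimal sketch; the margin values are arbitrary):

```python
import numpy as np

# margin m = y * f(x), with y in {-1, +1}
def exp_loss(m):       # exponential loss: exp(-m)
    return np.exp(-m)

def deviance_loss(m):  # binomial deviance: log(1 + exp(-2m))
    return np.log(1 + np.exp(-2 * m))

def sq_loss(m):        # squared error written in terms of the margin: (1 - m)^2
    return (1 - m) ** 2

for m in (-2.0, 0.0, 2.0):
    print(m, exp_loss(m), deviance_loss(m), sq_loss(m))
```

The exponential loss explodes for negative margins, the deviance grows roughly linearly, and the squared error also grows for confidently correct examples ($m > 1$).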
The problem generalizes to K-class classification:
$$G(x) = \arg \max_{k} p_k(x)$$
with $p_k(x)$ the probability that $x$ belongs to class $k$:
$$p_k(x) = \frac{e^{f_k(x)}}{\sum_{l=1}^K e^{f_l(x)}}$$
We can use the K-class multinomial deviance loss function:
$$L(y, p(x)) = - \sum_{k=1}^K I(y = k) \log p_k(x)$$
## Robust Loss functions for regression
For regression, both the squared error $L(y, f(x)) = (y - f(x))^2$ and the absolute loss $L(y, f(x)) = |y - f(x)|$ lead to the same population results, but differ on finite datasets.
Squared error loss places much more emphasis on observations with large residuals, and is far less robust to outliers. Absolute loss performs much better in these situations.
Another solution to resist outliers is the Huber loss:
$$
L(y, f(x)) =
\begin{cases}
(y - f(x))^2 & \text{if } |y - f(x)| \leq \delta \\
2 \delta |y - f(x)| - \delta^2 & \text{otherwise}
\end{cases}
$$
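A direct transcription of the piecewise definition above (a minimal sketch):

```python
import numpy as np

def huber_loss(y, f, delta=1.0):
    """Huber loss: quadratic for small residuals, linear beyond delta."""
    r = np.abs(y - f)
    return np.where(r <= delta, r ** 2, 2 * delta * r - delta ** 2)

print(huber_loss(0.0, 0.5))   # small residual: squared penalty
print(huber_loss(0.0, 10.0))  # outlier: penalty grows only linearly
```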
# Boosting Trees
A tree can be expressed as:
$$T(x;\theta) = \sum_{j=1}^J \gamma_j I(x \in R_j)$$
The parameters are found by minimizing the empirical risk:
$$\theta = \arg \min_\theta \sum_{j=1}^J \sum_{x_i \in R_j} L(y_i, \gamma_j)$$
The boosted tree model is a sum of such trees:
$$f_M(x) = \sum_{m=1}^M T(x;\theta_m)$$
With a forward stagewise procedure, one must solve at each step:
$$\hat{\theta}_m = \arg \min_{\theta_m} \sum_{i=1}^N L(y_i, f_{m-1}(x_i) + T(x_i;\theta_m))$$
For MSE, we simply need to fit a new regression tree to the residual errors.
For binary classification with exponential loss, we get the following criterion for each tree:
$$\hat{\theta}_m = \arg \min_{\theta} \sum_{i=1}^N w_i^{(m)} \exp (-y_i T(x_i;\theta))$$
This criterion can be implemented by updating the criterion of splitting for the classical tree growing algorithms.
Using other losses such as the absolute error, the Huber loss, or the deviance gives more robust models, but there are no simple algorithms to optimize them.
# Numerical Optimization via Gradient Boosting
Let's define the loss as:
$$L(f) = \sum_{i=1}^N L(y_i, f(x_i))$$
The goal is to minimize $L(f)$ with respect to $f$, with $f$ a sum of trees.
Let's say the parameters of $f$ are the values of $f$ at each point in the training set:
$$f = \{ f(x_1), f(x_2), \text{...}, f(x_N) \}^T$$
Numerical optimization solves for $f$ using a sum of component vectors, or sum of steps:
$$f_M = \sum_{m=0}^M h_m, \space h_m \in \mathbb{R}^N$$
Using steepest descent, we define $h_m = -\rho_m g_m$ with $\rho_m \in \mathbb{R}$ the step size, and $g_m \in \mathbb{R}^N$ the gradient of $L$.
$$g_{im} = \frac{\partial L(y_i, f_{m-1}(x_i))}{\partial f_{m-1}(x_i)}$$
$$\rho_m = \arg \min_{\rho} L(f_{m-1} - \rho g_m)$$
The current solution is updated:
$$f_m = f_{m-1} - \rho_m g_m$$
The process is repeated $M$ times; this is a greedy strategy.
## Gradient boosting
This process is great for minimizing the loss on the training data, but our goal is generalization.
A solution is to build a tree $T(x;\theta_m)$ at each iteration, as close as possible to the negative gradient. Using MSE, we get the criterion:
$$\hat{\theta} = \arg \min_\theta \sum_{i=1}^N (-g_{im} - T(x_i;\theta))^2$$
Gradient Boosting Regression:
1. Initialize:
$$f_0(x) = \arg \min_\gamma \sum_{i=1}^N L(y_i, \gamma)$$
2. For $m=1$ to $M$:
$$r_{im} = - \frac{\partial L(y_i, f_{m-1}(x_i))}{\partial f_{m-1}(x_i)}$$
Fit a regression tree to targets $r_{im}$, and update $f_m(x)$:
$$f_m(x) = f_{m-1}(x) + \sum_{j=1}^{J_m} \gamma_{jm} I(x \in R_{jm})$$
3. Output $\hat{f}(x) = f_M(x)$
For other losses, we plug in different loss functions $L$.
For K-class classification, we need to build $K$ trees at each iteration.
Two hyperparameters are the number of iterations $M$ and the size of each tree $J_m$.
# Right-Sized Trees for Boosting
Each new tree could be built using the usual procedure: grow a very large tree, then prune it. But this procedure assumes the tree being built is the last one, which is a poor assumption for non-final trees. It results in trees that are way too large at each iteration.
One solution is to restrict all trees to the same size $J$, a hyperparameter to be fixed.
The interaction level is limited by $J$. With $J=2$, only main effects are possible, $J=3$ allows only two-variable interactions, and so on.
The interaction level is unknown, but low in general. From experience, $4 \leq J \leq 8$ works well in practice.
# Regularization
Another hyperparameter to be fixed is $M$. Each iteration reduces the risk on the training set, but may lead to overfitting. We can find the optimal $M^*$ by monitoring the risk on a validation set.
Another regularization technique is shrinkage, which scales the contribution of each tree by a factor $v \in [0,1]$:
$$f_m(x) = f_{m-1}(x) + v \sum_{j=1}^{J_m} \gamma_{jm} I(x \in R_{jm})$$
Smaller values of $v$ cause more shrinkage. In practice, setting $v$ very small $(< 0.1)$ and choosing $M$ as above works well.
We can also use subsampling, similar to bagging: at each iteration, we sample a fraction of the training dataset and perform the iteration on this sample only. It usually produces a more accurate model.
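Both knobs are exposed by scikit-learn's gradient boosting implementation (a sketch on synthetic data; `learning_rate` plays the role of the shrinkage factor $v$, and `subsample` is the sampled fraction):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# small shrinkage v with many trees, plus subsampling of half the data per iteration
gbr = GradientBoostingRegressor(learning_rate=0.05, n_estimators=500,
                                subsample=0.5, max_depth=3, random_state=0)
gbr.fit(X_tr, y_tr)
print('test R^2:', gbr.score(X_te, y_te))
```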
# Interpretation
## Relative importance of Predictor Variables
For a single tree $T$, a measure of relevance of feature $X_l$ is:
$$\mathcal{I}_l^2(T) = \sum_{t=1}^{J-1} \hat{i}_t^2 I(v(t) = l)$$
with $\hat{i}_t^2$ the improvement in squared-error fit over that of a constant fit to the whole region, summed over the internal nodes $t$ whose split variable is $X_l$.
For tree boosting, we simply average over all the trees:
$$\mathcal{I}_l^2 = \frac{1}{M} \sum_{m=1}^M \mathcal{I}_l^2(T_m)$$
All values are relative; we can set the highest to $100$ and scale all others accordingly.
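scikit-learn exposes this averaged importance as `feature_importances_`; scaling so the most relevant feature is at $100$ (a minimal sketch on synthetic data):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=300, n_features=5, n_informative=2, random_state=0)
gbr = GradientBoostingRegressor(random_state=0).fit(X, y)

# scale relative importances so the largest is 100
rel = 100 * gbr.feature_importances_ / gbr.feature_importances_.max()
print(np.round(rel, 1))
```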
## Partial Dependence Plots
We can plot the dependence of a subset of variables $X_S$ via a marginal average over the other variables $X_C$:
$$f_S(X_S) = E_{X_C} f(X_S, X_C)$$
We can thus realize several partial dependence plots, using several sets $S$.
They can be estimated by:
$$\bar{f}_S(X_S) = \frac{1}{N} \sum_{i=1}^N f(X_S, x_{iC})$$
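The averaging estimator above can be coded directly (a minimal sketch; `model` stands for any fitted estimator with a `predict` method):

```python
import numpy as np

def partial_dependence(model, X, feature, grid):
    """Estimate f_S(x_S): fix feature S at each grid value and
    average predictions over the observed values of the other features."""
    out = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v          # clamp X_S = v, keep observed X_C
        out.append(model.predict(Xv).mean())
    return np.array(out)
```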
## Gradient boosting (Regression)
Least Absolute Error Tree Boosting Regression algorithm:
1. set $F_0(x) = \text{median} \{ y_i \}_1^N$
2. For $m=1$ to $M$:
$$\hat{y}_i = \text{sign}(y_i - F_{m-1}(x_i))$$
$$\{ R_{jm} \}_1^J \text{ tree with J terminal nodes trained on } \{\hat{y}_i, x_i \}_1^N$$
$$\gamma_{jm} = \text{median} \{ y_i - F_{m-1}(x_i) : x_i \in R_{jm} \}$$
$$F_m(x) = F_{m-1}(x) + \sum_{j=1}^J \gamma_{jm} 1(x \in R_{jm})$$
3. Output $\hat{F}(x) = F_M(x)$
An alternate approach is to build a tree $T(x)$ directly minimizing the LAE loss:
$$T_m(x) = \arg \min_{T} \sum_{i=1}^N |y_i - F_{m-1}(x_i) - T(x_i)|$$
$$F_m(x) = F_{m-1}(x) + T_m(x)$$
The first solution is much faster because it uses the squared error loss to build the trees.
# Gradient Boosting Paper
[Greedy Function Approximation: A Gradient boosting Machine](https://statweb.stanford.edu/~jhf/ftp/trebst.pdf)
```
from copy import deepcopy
from sklearn.datasets import load_boston
class DTNode:
def __init__(self, X, y, val):
self.X = X
self.y = y
self.val = val
self.cut = None
self.subs = None
def pred(self, x):
if self.cut is None:
return self.val
elif x[self.cut[0]] <= self.cut[1]:
return self.subs[0].pred(x)
else:
return self.subs[1].pred(x)
def split(self, j, s, eval_fn):
if self.cut is not None:
raise Exception('already cut')
leftp = self.X[:,j] <= s
rightp = self.X[:,j] > s
X_left, y_left = self.X[leftp], self.y[leftp]
X_right, y_right = self.X[rightp], self.y[rightp]
left = DTNode(X_left, y_left, eval_fn(X_left, y_left))
right = DTNode(X_right, y_right, eval_fn(X_right, y_right))
self.cut = (j, s)
self.subs = (left, right)
def update_vals(self, X, y, eval_fn):
if self.cut is None:
self.val = eval_fn(X, y)
return
p1 = X[:,self.cut[0]] <= self.cut[1]
p2 = X[:,self.cut[0]] > self.cut[1]
self.subs[0].update_vals(X[p1], y[p1], eval_fn)
self.subs[1].update_vals(X[p2], y[p2], eval_fn)
def get_best_cut(node, j, val_fn, err_fn, min_leaf_size):
X = node.X
y = node.y
best_s = None
best_err = float('inf')
for i in range(len(X) - 1):
s = (X[i,j] + X[i+1,j])/2
X_left = X[X[:,j] <= s]
X_right = X[X[:,j] > s]
y_left = y[X[:,j] <= s]
y_right = y[X[:,j] > s]
if len(y_left) < min_leaf_size or len(y_right) < min_leaf_size:
continue
preds_left = np.ones(len(y_left)) * val_fn(X_left, y_left)
preds_right = np.ones(len(y_right)) * val_fn(X_right, y_right)
err = err_fn(y_left, preds_left) + err_fn(y_right, preds_right)
if err < best_err:
best_err = err
best_s = s
return best_s, best_err
def split_tree(node, val_fn, err_fn, max_size, size = None):
if size is None:
size = [1]
if size[0] >= max_size:
return
best_j = None
best_s = None
best_err = float('inf')
for j in range(node.X.shape[1]):
s, err = get_best_cut(node, j, val_fn, err_fn, min_leaf_size=3)
if err < best_err:
best_s = s
best_j = j
best_err = err
if best_j is None:
return
node.split(best_j, best_s, val_fn)
size[0] += 1
split_tree(node.subs[0], val_fn, err_fn, max_size, size)
split_tree(node.subs[1], val_fn, err_fn, max_size, size)
def build_tree(X, y, val_fn, err_fn, max_size):
root = DTNode(X, y, val_fn(X, y))
split_tree(root, val_fn, err_fn, max_size)
return root
def val_avg(X, y):
return np.mean(y)
def err_mse(y, preds):
return np.sum((y - preds)**2)
class TreeRegressor:
def __init__(self, max_size=4):
self.max_size = max_size
def fit(self, X, y):
self.root = build_tree(X, y, val_avg, err_mse,
self.max_size)
def predict(self, X):
y = np.empty(len(X))
for i in range(len(X)):
y[i] = self.root.pred(X[i])
return y
X, y = load_boston().data, load_boston().target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
random_state=15)
clf = TreeRegressor(max_size=4)
clf.fit(X_train, y_train)
print('train_error:', np.mean((y_train - clf.predict(X_train))**2))
print('test_error:', np.mean((y_test - clf.predict(X_test))**2))
class ConstModel:
def __init__(self, val):
self.root = DTNode(None, None, val)
def update_median(_, resi):
return np.median(resi)
class LADTreeBost:
def __init__(self, J, M):
self.J = J
self.M = M
def fit(self, X, y):
self.mods = []
self.mods.append(ConstModel(np.median(y)))
resid = y - np.median(y)
for m in range(self.M):
yhat = np.sign(resid)
tree = TreeRegressor(max_size=self.J)
tree.fit(X,yhat)
tree.root.update_vals(X, resid, update_median)
resid -= tree.predict(X)
self.mods.append(tree)
def predict(self, X):
y = np.empty(len(X))
for i in range(len(X)):
y[i] = self.get_pred(X[i])
return y
def get_pred(self, x):
y = 0
for m in self.mods:
y += m.root.pred(x)
return y
X, y = load_boston().data, load_boston().target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
random_state=15)
clf = LADTreeBost(J=4, M=10)
clf.fit(X_train, y_train)
print('train_error:', np.mean((y_train - clf.predict(X_train))**2))
print('test_error:', np.mean((y_test - clf.predict(X_test))**2))
```
| github_jupyter |
Compare the socio-economic profile (monthly per-capita income) of people in the 25-to-60 age range (inclusive), split into:
G1) Their highest degree is EEB (1st and 2nd Cycle) (6th grade);
G2) They hold a university degree (higher university education) or a postgraduate degree (specialization, master's, or doctorate).
```
import pandas as pd
df=pd.read_csv("/content/drive/MyDrive/probabilidad /Archivo de Datos.csv")  # open the database
df.head()
df.info()  # print the column names and types
df=df[["Edad","Título o diploma que obtuvo","Ingreso percapita mensual"]]  # keep only the columns of interest
df.head()
df=df.drop(index=0)  # drop the first row
df.head()
df.dtypes  # print the data type of each column
df=df.dropna()  # drop rows containing empty values
df["Edad"]=df["Edad"].astype(int)  # convert the "Edad" (age) column from object to integer
df.dtypes
df=df[(df["Edad"]>=25) & (df["Edad"]<=60)]  # apply the first filter (age range)
df.head()
df["Título o diploma que obtuvo"]=df["Título o diploma que obtuvo"].astype(int)  # convert the degree column from object to integer
df.dtypes
G1=df[df["Título o diploma que obtuvo"]==3]  # select the first group with a filter
G1=G1["Ingreso percapita mensual"]  # keep only the column of interest
G1
#G2=df[(df["Título o diploma que obtuvo"]>=8) & (df["Título o diploma que obtuvo"]<=10)]
G2=df[df["Título o diploma que obtuvo"].between(8,10)]  # select the second group with two filters
G22=df[df["Título o diploma que obtuvo"]==1]
G2=pd.concat([G2,G22],axis=0)  # combine the two filtered selections
G2=G2["Ingreso percapita mensual"]
G2
G1=G1.astype(float)  # make the remaining column a float
print("Group 1:")  # print descriptive statistics
print(f"Count: {G1.count()}")
print(f"Mean: {G1.mean()}")
print(f"Max: {G1.max()}")
print(f"Min: {G1.min()}")
print(f"Range: {G1.max()-G1.min()}")
print(f"Std Dev: {G1.std()}")
G2=G2.astype(float)
print("Group 2:")
print(f"Count: {G2.count()}")
print(f"Mean: {G2.mean()}")
print(f"Max: {G2.max()}")
print(f"Min: {G2.min()}")
print(f"Range: {G2.max()-G2.min()}")
print(f"Std Dev: {G2.std()}")
GT=pd.concat([G1.rename("G1"),G2.rename("G2")],axis=1)  # put both groups side by side in one variable
GT.head()
intervalos=pd.interval_range(start=0,end=GT.max().max()+2000000, freq=2000000,closed="left")  # generate the class intervals
for inter in intervalos:
    print(inter)
F1=pd.cut(GT["G1"],bins=intervalos).rename("G1")  # identify the interval each value belongs to
F2=pd.cut(GT["G2"],bins=intervalos).rename("G2")
F1=F1.groupby(F1).count()  # compute the frequency of each class
F2=F2.groupby(F2).count()
FT=pd.concat([F1,F2],axis=1)  # concatenate the frequencies side by side
FT
import matplotlib.pyplot as plt  # import the plotting library
FT.plot(kind="bar",subplots=True)  # plot each group separately
FT.plot(kind="bar",alpha=0.5)  # plot both groups together
plt.show()
```
| github_jupyter |
# Maxpooling Layer
In this notebook,
- we add and visualize the output of a maxpooling layer in a CNN.
A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN.

### Import the image
```
from google.colab import drive
drive.mount('/content/drive')
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
img_path = '/content/drive/My Drive/Wallpaper-1080P-15.jpg'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
```
### Define and visualize the filters
```
import numpy as np
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
```
### Define convolutional and pooling layers
You've seen how to define a convolutional layer, next is a:
* Pooling layer
In the next cell,
1. we initialize a convolutional layer so that it contains all the created filters.
2. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2)
- so you can see that the image resolution has been reduced after this step!
**maxpooling layer**
- reduces the x-y size of an input and only keeps the most *active* pixel values.
Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.

```
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:] # weight.shape[2:] returns 'torch.Size([4, 4])'
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
print("Weight:\n", weight)
# feed the weight in the model
model = Net(weight)
# print out the layer in the network
print(model)
```
### Visualize the output of each filter
First, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
```
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
```
Let's look at the output of a convolutional layer after a ReLu activation function is applied.
#### ReLu activation
A ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.

```
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
activated_layer.shape
```
### Visualize the output of the pooling layer
Then, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.
Take a look at the values on the x, y axes to see how the image has changed size.
In addition, we can see that the shape of the images is reduced!
```
# visualize the output of the pooling layer
viz_layer(pooled_layer)
pooled_layer.shape
```
| github_jupyter |
```
# !nvidia-smi
!pip --quiet install transformers
!pip --quiet install tokenizers
from google.colab import drive
drive.mount('/content/drive')
!cp -r '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Scripts/.' .
COLAB_BASE_PATH = '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/'
MODEL_BASE_PATH = COLAB_BASE_PATH + 'Models/Files/144-roBERTa_base/'
import os
os.mkdir(MODEL_BASE_PATH)
```
## Dependencies
```
import json, warnings, shutil
# numpy, pandas, tensorflow, matplotlib and seaborn are used by later cells,
# so import them explicitly here
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import tensorflow as tf
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras.models import Model
from tensorflow.keras import optimizers, metrics, losses, layers
SEED = 0
seed_everything(SEED)
warnings.filterwarnings("ignore")
```
# Load data
```
database_base_path = COLAB_BASE_PATH + 'Data/aux/'
k_fold = pd.read_csv(database_base_path + '5-fold.csv')
display(k_fold.head())
# Unzip files
!tar -xvf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/aux/fold_1.tar.gz'
!tar -xvf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/aux/fold_2.tar.gz'
!tar -xvf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/aux/fold_3.tar.gz'
# !tar -xvf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/aux/fold_4.tar.gz'
# !tar -xvf '/content/drive/My Drive/Colab Notebooks/Tweet Sentiment Extraction/Data/aux/fold_5.tar.gz'
```
# Model parameters
```
vocab_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-vocab.json'
merges_path = COLAB_BASE_PATH + 'qa-transformers/roberta/roberta-base-merges.txt'
base_path = COLAB_BASE_PATH + 'qa-transformers/roberta/'
config = {
"MAX_LEN": 96,
"BATCH_SIZE": 32,
"EPOCHS": 5,
"LEARNING_RATE": 3e-5,
"ES_PATIENCE": 1,
"question_size": 4,
"N_FOLDS": 3,
"base_model_path": base_path + 'roberta-base-tf_model.h5',
"config_path": base_path + 'roberta-base-config.json'
}
with open(MODEL_BASE_PATH + 'config.json', 'w') as json_file:
    json.dump(config, json_file)
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
## Learning rate schedule
```
LR_MIN = 1e-6
LR_MAX = config['LEARNING_RATE']
LR_EXP_DECAY = .5
@tf.function
def lrfn(epoch):
lr = LR_MAX * LR_EXP_DECAY**epoch
if lr < LR_MIN:
lr = LR_MIN
return lr
rng = [i for i in range(config['EPOCHS'])]
y = [lrfn(x) for x in rng]
fig, ax = plt.subplots(figsize=(20, 6))
plt.plot(rng, y)
print("Learning rate schedule: {:.3g} to {:.3g} to {:.3g}".format(y[0], max(y), y[-1]))
```
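The schedule above is exponential decay with a floor; it can be checked in plain Python, with the constants copied from the cell above:

```python
LR_MAX, LR_MIN, LR_EXP_DECAY = 3e-5, 1e-6, 0.5

def lr_at(epoch):
    # halve the learning rate every epoch, but never go below LR_MIN
    return max(LR_MAX * LR_EXP_DECAY ** epoch, LR_MIN)

print(lr_at(0), lr_at(10))  # 3e-05 1e-06
```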
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=False)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
input_sentiment = layers.Input(shape=(3,), dtype=tf.float32, name='input_sentiment')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
last_hidden_state, _ = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
x_start_negative = layers.Dropout(.1)(last_hidden_state)
x_start_negative = layers.Dense(1)(x_start_negative)
x_start_negative = layers.Flatten()(x_start_negative)
y_start_negative = layers.Activation('softmax', name='y_start_negative')(x_start_negative)
y_start_negative = layers.Multiply()([y_start_negative, input_sentiment[:,0]])
x_start_neutral = layers.Dropout(.1)(last_hidden_state)
x_start_neutral = layers.Dense(1)(x_start_neutral)
x_start_neutral = layers.Flatten()(x_start_neutral)
y_start_neutral = layers.Activation('softmax', name='y_start_neutral')(x_start_neutral)
y_start_neutral = layers.Multiply()([y_start_neutral, input_sentiment[:,1]])
x_start_positive = layers.Dropout(.1)(last_hidden_state)
x_start_positive = layers.Dense(1)(x_start_positive)
x_start_positive = layers.Flatten()(x_start_positive)
y_start_positive = layers.Activation('softmax', name='y_start_positive')(x_start_positive)
y_start_positive = layers.Multiply()([y_start_positive, input_sentiment[:,2]])
y_start = layers.Add(name='y_start')([y_start_negative, y_start_neutral, y_start_positive])
x_end_negative = layers.Dropout(.1)(last_hidden_state)
x_end_negative = layers.Dense(1)(x_end_negative)
x_end_negative = layers.Flatten()(x_end_negative)
y_end_negative = layers.Activation('softmax', name='y_end_negative')(x_end_negative)
y_end_negative = layers.Multiply()([y_end_negative, input_sentiment[:,0]])
x_end_neutral = layers.Dropout(.1)(last_hidden_state)
x_end_neutral = layers.Dense(1)(x_end_neutral)
x_end_neutral = layers.Flatten()(x_end_neutral)
y_end_neutral = layers.Activation('softmax', name='y_end_neutral')(x_end_neutral)
y_end_neutral = layers.Multiply()([y_end_neutral, input_sentiment[:,1]])
x_end_positive = layers.Dropout(.1)(last_hidden_state)
x_end_positive = layers.Dense(1)(x_end_positive)
x_end_positive = layers.Flatten()(x_end_positive)
y_end_positive = layers.Activation('softmax', name='y_end_positive')(x_end_positive)
y_end_positive = layers.Multiply()([y_end_positive, input_sentiment[:,2]])
y_end = layers.Add(name='y_end')([y_end_negative, y_end_neutral, y_end_positive])
model = Model(inputs=[input_ids, attention_mask, input_sentiment], outputs=[y_start, y_end])
return model
```
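Each per-sentiment start/end head above is multiplied by one component of a one-hot sentiment input and the three results are summed, which amounts to selecting the head matching the example's sentiment. A NumPy sketch of the gating idea (illustrative values only):

```python
import numpy as np

heads = [np.array([0.7, 0.2, 0.1]),    # negative head
         np.array([0.1, 0.8, 0.1]),    # neutral head
         np.array([0.2, 0.2, 0.6])]    # positive head
sentiment = np.array([0.0, 1.0, 0.0])  # one-hot: neutral

# multiply each head by its gate and sum -> only the matching head survives
y = sum(s * h for s, h in zip(sentiment, heads))
print(y)  # the neutral head, unchanged
```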
# Train
```
def get_training_dataset(x_train, y_train, batch_size, buffer_size, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_train[0], 'attention_mask': x_train[1], 'input_sentiment': x_train[2]},
{'y_start': y_train[0],'y_end': y_train[1]}))
dataset = dataset.repeat()
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.prefetch(buffer_size)
return dataset
def get_validation_dataset(x_valid, y_valid, batch_size, buffer_size, repeated=False, seed=0):
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': x_valid[0], 'attention_mask': x_valid[1], 'input_sentiment': x_valid[2]},
{'y_start': y_valid[0],'y_end': y_valid[1]}))
if repeated:
dataset = dataset.repeat()
dataset = dataset.shuffle(2048, seed=seed)
dataset = dataset.batch(batch_size, drop_remainder=True)
dataset = dataset.cache()
dataset = dataset.prefetch(buffer_size)
return dataset
def get_test_dataset(x_test, batch_size):
dataset = tf.data.Dataset.from_tensor_slices({'input_ids': x_test[0], 'attention_mask': x_test[1], 'input_sentiment': x_test[2]})
dataset = dataset.batch(batch_size)
return dataset
AUTO = tf.data.experimental.AUTOTUNE
strategy = tf.distribute.get_strategy()
history_list = []
for n_fold in range(config['N_FOLDS']):
n_fold +=1
print('\nFOLD: %d' % (n_fold))
# Load data
base_data_path = 'fold_%d/' % (n_fold)
x_train = np.load(base_data_path + 'x_train.npy')
x_train_aux2 = np.load(base_data_path + 'x_train_aux2.npy')
y_train = np.load(base_data_path + 'y_train.npy')
x_valid = np.load(base_data_path + 'x_valid.npy')
x_valid_aux2 = np.load(base_data_path + 'x_valid_aux2.npy')
y_valid = np.load(base_data_path + 'y_valid.npy')
step_size = x_train.shape[1] // config['BATCH_SIZE']
valid_step_size = x_valid.shape[1] // config['BATCH_SIZE']
# Build TF datasets
train_dist_ds = strategy.experimental_distribute_dataset(get_training_dataset((*x_train, x_train_aux2), y_train, config['BATCH_SIZE'], AUTO, seed=SEED))
valid_dist_ds = strategy.experimental_distribute_dataset(get_validation_dataset((*x_valid, x_valid_aux2), y_valid, config['BATCH_SIZE'], AUTO, repeated=True, seed=SEED))
train_data_iter = iter(train_dist_ds)
valid_data_iter = iter(valid_dist_ds)
# Step functions
@tf.function
def train_step(data_iter):
def train_step_fn(x, y):
with tf.GradientTape() as tape:
probabilities = model(x, training=True)
loss_start = loss_fn_start(y['y_start'], probabilities[0], label_smoothing=0.2)
loss_end = loss_fn_end(y['y_end'], probabilities[1], label_smoothing=0.2)
loss = tf.math.add(loss_start, loss_end)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# update metrics
train_loss.update_state(loss)
train_loss_start.update_state(loss_start)
train_loss_end.update_state(loss_end)
for _ in tf.range(step_size):
strategy.experimental_run_v2(train_step_fn, next(data_iter))
@tf.function
def valid_step(data_iter):
def valid_step_fn(x, y):
probabilities = model(x, training=False)
loss_start = loss_fn_start(y['y_start'], probabilities[0])
loss_end = loss_fn_end(y['y_end'], probabilities[1])
loss = tf.math.add(loss_start, loss_end)
# update metrics
valid_loss.update_state(loss)
valid_loss_start.update_state(loss_start)
valid_loss_end.update_state(loss_end)
for _ in tf.range(valid_step_size):
strategy.experimental_run_v2(valid_step_fn, next(data_iter))
# Train model
model_path = 'model_fold_%d.h5' % (n_fold)
model = model_fn(config['MAX_LEN'])
optimizer = optimizers.Adam(learning_rate=lambda: lrfn(tf.cast(optimizer.iterations, tf.float32)//step_size))
loss_fn_start = losses.categorical_crossentropy
loss_fn_end = losses.categorical_crossentropy
train_loss = metrics.Sum()
valid_loss = metrics.Sum()
train_loss_start = metrics.Sum()
valid_loss_start = metrics.Sum()
train_loss_end = metrics.Sum()
valid_loss_end = metrics.Sum()
metrics_dict = {'loss': train_loss, 'loss_start': train_loss_start, 'loss_end': train_loss_end,
'val_loss': valid_loss, 'val_loss_start': valid_loss_start, 'val_loss_end': valid_loss_end}
history = custom_fit(model, metrics_dict, train_step, valid_step, train_data_iter, valid_data_iter,
step_size, valid_step_size, config['BATCH_SIZE'], config['EPOCHS'], config['ES_PATIENCE'],
(MODEL_BASE_PATH + model_path), save_last=False)
history_list.append(history)
model.save_weights(MODEL_BASE_PATH +'last_' + model_path)
model.load_weights(MODEL_BASE_PATH + model_path)
# Make predictions
train_preds = model.predict(get_test_dataset((*x_train, x_train_aux2), config['BATCH_SIZE']))
valid_preds = model.predict(get_test_dataset((*x_valid, x_valid_aux2), config['BATCH_SIZE']))
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'train', 'start_fold_%d' % (n_fold)] = train_preds[0].argmax(axis=-1)
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'train', 'end_fold_%d' % (n_fold)] = train_preds[1].argmax(axis=-1)
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'validation', 'start_fold_%d' % (n_fold)] = valid_preds[0].argmax(axis=-1)
k_fold.loc[k_fold['fold_%d' % (n_fold)] == 'validation', 'end_fold_%d' % (n_fold)] = valid_preds[1].argmax(axis=-1)
k_fold['end_fold_%d' % (n_fold)] = k_fold['end_fold_%d' % (n_fold)].astype(int)
k_fold['start_fold_%d' % (n_fold)] = k_fold['start_fold_%d' % (n_fold)].astype(int)
k_fold['end_fold_%d' % (n_fold)].clip(0, k_fold['text_len'], inplace=True)
k_fold['start_fold_%d' % (n_fold)].clip(0, k_fold['end_fold_%d' % (n_fold)], inplace=True)
k_fold['prediction_fold_%d' % (n_fold)] = k_fold.apply(lambda x: decode(x['start_fold_%d' % (n_fold)], x['end_fold_%d' % (n_fold)], x['text'], config['question_size'], tokenizer), axis=1)
k_fold['prediction_fold_%d' % (n_fold)].fillna(k_fold["text"], inplace=True)
k_fold['jaccard_fold_%d' % (n_fold)] = k_fold.apply(lambda x: jaccard(x['selected_text'], x['prediction_fold_%d' % (n_fold)]), axis=1)
```
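`jaccard` above comes from the utility scripts; the competition metric is word-level Jaccard similarity, which can be sketched in plain Python (a hypothetical standalone version, not the exact utility-script code):

```python
def jaccard(str1, str2):
    # word-level Jaccard: |intersection| / |union| of the two word sets
    a = set(str(str1).lower().split())
    b = set(str(str2).lower().split())
    c = a & b
    return float(len(c)) / (len(a) + len(b) - len(c))

print(jaccard("hello world", "hello"))  # 0.5
```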
# Model loss graph
```
sns.set(style="whitegrid")
for n_fold in range(config['N_FOLDS']):
print('Fold: %d' % (n_fold+1))
plot_metrics(history_list[n_fold])
```
# Model evaluation
```
display(evaluate_model_kfold(k_fold, config['N_FOLDS']).style.applymap(color_map))
```
# Visualize predictions
```
display(k_fold[[c for c in k_fold.columns if not (c.startswith('textID') or
c.startswith('text_len') or
c.startswith('selected_text_len') or
c.startswith('text_wordCnt') or
c.startswith('selected_text_wordCnt') or
c.startswith('fold_') or
c.startswith('start_fold_') or
c.startswith('end_fold_'))]].head(15))
```
```
import os
import folium
print(folium.__version__)
```
## ColorLine
```
import numpy as np
x = np.linspace(0, 2*np.pi, 300)
lats = 20 * np.cos(x)
lons = 20 * np.sin(x)
colors = np.sin(5 * x)
from folium import features
m = folium.Map([0, 0], zoom_start=3)
color_line = features.ColorLine(
positions=list(zip(lats, lons)),
colors=colors,
colormap=['y', 'orange', 'r'],
weight=10)
color_line.add_to(m)
m.save(os.path.join('results', 'Features_0.html'))
m
```
### Marker, Icon, Popup
```
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
pp = folium.Popup('hello')
ic = features.Icon(color='red')
mk.add_child(ic)
mk.add_child(pp)
m.add_child(mk)
m.save(os.path.join('results', 'Features_1.html'))
m
```
### Vega popup
```
import json
import vincent
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(multi_iter2, iter_idx='x', height=100, width=200)
data = json.loads(scatter.to_json())
m = folium.Map([0, 0], zoom_start=1)
mk = features.Marker([0, 0])
p = folium.Popup('Hello')
v = features.Vega(data, width='100%', height='100%')
mk.add_child(p)
p.add_child(v)
m.add_child(mk)
m.save(os.path.join('results', 'Features_2.html'))
m
```
### Vega-Lite popup
```
try:
from altair import Chart, load_dataset
except TypeError:
print('Try updating your python version to 3.5.3 or above')
# load built-in dataset as a pandas DataFrame
cars = load_dataset('cars')
scatter = Chart(cars).mark_circle().encode(
x='Horsepower',
y='Miles_per_Gallon',
color='Origin',
)
vega = folium.features.VegaLite(
scatter,
width='100%',
height='100%',
)
m = folium.Map(location=[-27.5717, -48.6256])
marker = folium.features.Marker([-27.57, -48.62])
popup = folium.Popup()
vega.add_to(popup)
popup.add_to(marker)
marker.add_to(m)
m.save(os.path.join('results', 'Features_3.html'))
m
```
### Vega div and a Map
```
import branca
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%')
m2 = folium.Map(location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.Vega(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.Vega(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_4.html'))
f
```
### Vega-Lite div and a Map
```
import pandas as pd
N = 100
multi_iter2 = pd.DataFrame({
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
})
scatter = Chart(multi_iter2).mark_circle().encode(x='x', y='y')
scatter.width = 420
scatter.height = 250
data = json.loads(scatter.to_json())
f = branca.element.Figure()
# Create two maps.
m = folium.Map(
location=[0, 0],
tiles='stamenwatercolor',
zoom_start=1,
position='absolute',
left='0%',
width='50%',
height='50%'
)
m2 = folium.Map(
location=[46, 3],
tiles='OpenStreetMap',
zoom_start=4,
position='absolute',
left='50%',
width='50%',
height='50%',
top='50%')
# Create two Vega.
v = features.VegaLite(
data,
position='absolute',
left='50%',
width='50%',
height='50%'
)
v2 = features.VegaLite(
data,
position='absolute',
left='0%',
width='50%',
height='50%',
top='50%'
)
f.add_child(m)
f.add_child(m2)
f.add_child(v)
f.add_child(v2)
f.save(os.path.join('results', 'Features_5.html'))
f
```
### GeoJson
```
N = 1000
lons = +5 - np.random.normal(size=N)
lats = 48 - np.random.normal(size=N)
data = {
'type': 'FeatureCollection',
'features': [
{
'type': 'Feature',
'geometry': {
'type': 'MultiPoint',
'coordinates': [[lon, lat] for (lat, lon) in zip(lats, lons)],
},
'properties': {'prop0': 'value0'}
},
],
}
m = folium.Map([48, 5], zoom_start=6)
m.add_child(features.GeoJson(data))
m.save(os.path.join('results', 'Features_6.html'))
m
```
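The `data` dictionary above follows the GeoJSON spec: a `FeatureCollection` of `Feature` objects whose `MultiPoint` coordinates are `[lon, lat]` pairs. A minimal round-trip check in plain Python:

```python
import json

feature = {
    "type": "Feature",
    "geometry": {"type": "MultiPoint", "coordinates": [[5.0, 48.0], [5.1, 48.1]]},
    "properties": {"prop0": "value0"},
}
collection = {"type": "FeatureCollection", "features": [feature]}

# serialize and parse back to confirm the structure survives a round trip
text = json.dumps(collection)
print(json.loads(text)["features"][0]["geometry"]["type"])  # MultiPoint
```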
### Div
```
N = 100
multi_iter2 = {
'x': np.random.uniform(size=(N,)),
'y': np.random.uniform(size=(N,)),
}
scatter = vincent.Scatter(
multi_iter2,
iter_idx='x',
height=250,
width=420
)
data = json.loads(scatter.to_json())
f = branca.element.Figure()
d1 = f.add_subplot(1, 2, 1)
d2 = f.add_subplot(1, 2, 2)
d1.add_child(folium.Map([0, 0], tiles='stamenwatercolor', zoom_start=1))
d2.add_child(folium.Map([46, 3], tiles='OpenStreetMap', zoom_start=5))
f.save(os.path.join('results', 'Features_7.html'))
f
```
### LayerControl
```
m = folium.Map(tiles=None)
folium.raster_layers.TileLayer('OpenStreetMap').add_to(m)
folium.raster_layers.TileLayer('stamentoner').add_to(m)
folium.LayerControl().add_to(m)
m.save(os.path.join('results', 'Features_8.html'))
m
```
```
import os
import glob
import random
import operator
import numpy as np
import pandas as pd
import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
import tensorflow as tf
import keras.backend as K
from keras import initializers, optimizers, regularizers
from keras.models import Model, Sequential, load_model
from keras.optimizers import RMSprop
from keras.layers import Conv2D, MaxPooling2D, Activation, Dropout, Flatten, Dense
from keras.layers.normalization import BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import CSVLogger
#from livelossplot import PlotLossesKeras
#from imgaug import augmenters as iaa
#import seaborn as sns
TRAIN_PATH = r'../input/thesis1/Train'
VAL_PATH = r'../input/thesis1/Test'
BATCH_SIZE=16
r=4
c=4
#CATEGORIES = ['Air_trapping', 'Aortic_elongation','COPD_Signs','Calcified_granuloma','Callus_rib_fracture','Hiatal_hernia','Kyphosis','Laminar_atelectasis','Normal','Pleural_effusion','Scoliosis','Vascular_hilar_enlargement']
train_datagen = ImageDataGenerator(rescale=1.0/255.0)
train_batches = train_datagen.flow_from_directory(TRAIN_PATH,
class_mode='categorical',
batch_size=BATCH_SIZE,
target_size=(256, 256),
shuffle=True,
seed=42
)
val_datagen = ImageDataGenerator(rescale=1.0/255.0)
val_batches = val_datagen.flow_from_directory(VAL_PATH,
class_mode='categorical',
batch_size=BATCH_SIZE,
target_size=(256, 256),
shuffle=True,
seed=42
)
from keras.applications.vgg16 import VGG16
from keras.optimizers import Adam
def vgg():
base_model = VGG16(weights='imagenet',include_top=False,pooling='avg',input_shape=(256,256,3))
predictions=Dense(12,activation='softmax',trainable=True)(base_model.output)
for layer in base_model.layers:
layer.trainable=True
model=Model(inputs=[base_model.input], outputs=[predictions])
optim = tf.keras.optimizers.Adam(lr=1e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=1e-4)
    loss_func = 'categorical_crossentropy'  # appropriate for the 12-class softmax output ('mae' was a bug)
model.compile(optimizer=optim,loss=loss_func,metrics=['accuracy'])
return model
model=None
model = vgg()
model.summary()
#import tensorflow as tf
#from t.keras.optimizers import Adam, RMSprop, SGD
#adam_opt = tf.keras.optimizers.Adam(lr=1e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=1e-4)
#sgd_opt = tf.keras.optimizers.SGD(lr=1e-06, momentum=0.9, decay=0.0, nesterov=False)
#rmsp_opt = RMSprop(lr=1e-4, decay=0.9)
# eve_opt = Eve(lr=1e-4, decay=1E-4, beta_1=0.9, beta_2=0.999, beta_3=0.999, small_k=0.1, big_K=10, epsilon=1e-08)
#model.compile(optimizer= adam_opt,
# loss = 'categorical_crossentropy',
#metrics=['accuracy'])
#import keras
import tensorflow.keras as keras
from tensorflow.keras.callbacks import ModelCheckpoint
import tensorflow as tf
callbacks = [
tf.keras.callbacks.ModelCheckpoint('adam_baseline_vgg.h5', monitor='val_accuracy', save_best_only=True, mode='max'),
tf.keras.callbacks.ReduceLROnPlateau(monitor='val_accuracy', factor=0.1, verbose=1, patience=5, mode='max')]
history = model.fit(train_batches,
steps_per_epoch=train_batches.n//train_batches.batch_size,
validation_data=val_batches,
validation_steps=val_batches.n//val_batches.batch_size,
epochs=50,
verbose=1,
callbacks = callbacks)
model.save('baseline.h5')
test_generator = ImageDataGenerator()
test_data_generator = test_generator.flow_from_directory(
'../input/thesis1/Test', # Put your path here
target_size=(256, 256),
batch_size=32,
shuffle=False)
test_steps_per_epoch = np.math.ceil(test_data_generator.samples / test_data_generator.batch_size)
predictions = model.predict_generator(test_data_generator, steps=test_steps_per_epoch)
# Get most likely class
predicted_classes = np.argmax(predictions, axis=1)
true_classes = test_data_generator.classes
class_labels = list(test_data_generator.class_indices.keys())
from sklearn import metrics  # classification_report lives in sklearn.metrics
report = metrics.classification_report(true_classes, predicted_classes, target_names=class_labels)
print(report)
```
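`classification_report` summarizes per-class precision and recall; the predicted class itself is just an argmax over the softmax probabilities, as in this NumPy sketch:

```python
import numpy as np

# each row is one sample's softmax output over 3 classes
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])
pred = probs.argmax(axis=1)       # most likely class per sample
true = np.array([1, 0])
acc = (pred == true).mean()       # fraction of correct predictions
print(pred, acc)  # [1 0] 1.0
```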
```
# Import the necessary libraries
import tensorflow as tf
import tensorflow.keras as keras
# This loads the EfficientNetB3 model from the Keras library
# Input shape is the shape of the image input to the first layer, i.e. (width, height, number of channels)
# 'include_top' is set to 'False' to load the model without the classification (dense) layers. Top layers are not required as this is a segmentation problem.
# 'weights' is set to 'imagenet', i.e. it uses the weights learnt while training on the ImageNet dataset. You can set it to None or to your custom weights.
# IMAGE_WIDTH, IMAGE_HEIGHT and CHANNELS values are provided for visualization. Please change them to suit your dataset.
IMAGE_WIDTH = 512
IMAGE_HEIGHT = 512
CHANNELS = 3
model = tf.keras.applications.EfficientNetB3(input_shape=(IMAGE_WIDTH, IMAGE_HEIGHT, CHANNELS),
include_top=False, weights="imagenet")
#To see the list of layers and parameters
# For EfficientNetB3, you should see
'''Total params: 10,783,535
Trainable params: 10,696,232
Non-trainable params: 87,303'''
model.summary()
# Importing the layers to create the decoder and complete the network
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Dropout
from tensorflow.keras.layers import Lambda
from tensorflow.keras.layers import Conv2D, Conv2DTranspose
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Concatenate
from tensorflow.keras import optimizers
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.metrics import MeanIoU
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.metrics import MeanIoU, Recall, Precision
import tensorflow_addons as tfa
# Defining the Convolution Block
def conv_block(input, num_filters):
x = Conv2D(num_filters, 3, padding="same", kernel_initializer="he_normal")(input)
x = BatchNormalization()(x)
    # Use the Mish activation function; it performs better than ReLU but is computationally expensive
    x = tfa.activations.mish(x)
    # Comment out the previous line and uncomment the next one if you have limited compute resources
    #x = Activation("relu")(x)
x = Conv2D(num_filters, 3, padding="same", kernel_initializer="he_normal")(x)
x = BatchNormalization()(x)
x = tfa.activations.mish(x)
#x = x*tf.math.tanh(tf.softplus(x)) #Mish activation in a mathematical form
#x = Activation("relu")(x)
return x
#Defining the Transpose Convolution Block
def decoder_block(input, skip_features, num_filters):
x = Conv2DTranspose(num_filters, (2, 2), strides=2, padding="same")(input)
x = Concatenate()([x, skip_features])
#Use dropout only if the model is overfitting
#x = Dropout(0.05)(x)
x = conv_block(x, num_filters)
return x
#Building the EfficientNetB3_UNet
def build_efficientNetB3_unet(input_shape):
""" Input """
inputs = Input(shape=input_shape, name='input_image')
""" Pre-trained EfficientNetB3 Model """
effNetB3 = tf.keras.applications.EfficientNetB3(input_tensor=inputs, include_top=False,
weights="imagenet")
    # This section lets you freeze and unfreeze layers. Here all layers are frozen except
    # the last convolution block (the final 31 layers)
for layer in effNetB3.layers[:-31]:
layer.trainable = False
for l in effNetB3.layers:
print(l.name, l.trainable)
""" Encoder """
s1 = effNetB3.get_layer("input_image").output ## (512 x 512)
s2 = effNetB3.get_layer("block1a_activation").output ## (256 x 256)
s3 = effNetB3.get_layer("block2a_activation").output ## (128 x 128)
s4 = effNetB3.get_layer("block3a_activation").output ## (64 x 64)
s5 = effNetB3.get_layer("block4a_activation").output ## (32 x 32)
""" Bridge """
b1 = effNetB3.get_layer("block7a_activation").output ## (16 x 16)
""" Decoder """
d1 = decoder_block(b1, s5, 512) ## (32 x 32)
d2 = decoder_block(d1, s4, 256) ## (64 x 64)
d3 = decoder_block(d2, s3, 128) ## (128 x 128)
d4 = decoder_block(d3, s2, 64) ## (256 x 256)
d5 = decoder_block(d4, s1, 32) ## (512 x 512)
""" Output """
outputs = Conv2D(1, 1, padding="same", activation="sigmoid")(d5)
model = Model(inputs, outputs, name="EfficientNetB3_U-Net")
return model
if __name__ == "__main__":
input_shape = (IMAGE_WIDTH, IMAGE_HEIGHT, CHANNELS)
model = build_efficientNetB3_unet(input_shape)
#Shows the entire EfficientNetB3_UNet Model
model.summary()
#Adding Model Checkpoints, Early Stopping based on Validation Loss and LR Reducer
model_path = "path/Model_Name.h5"
checkpointer = ModelCheckpoint(model_path,
monitor="val_loss",
mode="min",
save_best_only = True,
verbose=1)
earlystopper = EarlyStopping(monitor = 'val_loss',
min_delta = 0,
patience = 30,
verbose = 1,
restore_best_weights = True)
lr_reducer = ReduceLROnPlateau(monitor='val_loss',
factor=0.6,
patience=6,
verbose=1,
min_lr=0.0001
#min_delta=5e-5
)
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=1e-3,
decay_steps=6000,
decay_rate=0.9)
optimizer = keras.optimizers.Adam(learning_rate=lr_schedule)
from tensorflow.keras import backend as K
# To calculate Intersection over Union between Predicted Mask and Ground Truth
def iou_coef(y_true, y_pred, smooth=1):
intersection = K.sum(K.abs(y_true * y_pred), axis=[1,2,3])
union = K.sum(y_true,[1,2,3])+K.sum(y_pred,[1,2,3])-intersection
iou = K.mean((intersection + smooth) / (union + smooth), axis=0)
return iou
smooth = 1e-5
# F1 score or Dice Coefficient
def f1_score(y_true, y_pred, smooth = 1):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
# Soft Dice Loss
def soft_dice_loss(y_true, y_pred):
    return 1 - f1_score(y_true, y_pred)  # f1_score above is the Dice coefficient
#Compiling the model with Adam Optimizer and Metrics related to segmentation
model.compile(optimizer=optimizer,
loss=soft_dice_loss,
metrics=[iou_coef, Recall(), Precision(), MeanIoU(num_classes=2), f1_score])
# Initiate Model Training
'''history = model.fit(train_images,
train_masks/255,
validation_split=0.10,
epochs=EPOCHS,
batch_size = BATCH_SIZE,
callbacks = [checkpointer, earlystopper, lr_reducer])'''
```
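The Keras-backend `iou_coef` and `f1_score` (Dice) metrics above reduce to simple array arithmetic; a NumPy sketch on flat binary masks, using the same `smooth` term (illustrative only):

```python
import numpy as np

def iou(y_true, y_pred, smooth=1.0):
    # intersection over union with a smoothing term to avoid division by zero
    inter = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - inter
    return (inter + smooth) / (union + smooth)

def dice(y_true, y_pred, smooth=1.0):
    # Dice / F1: 2 * intersection over the sum of both masks
    inter = np.sum(y_true * y_pred)
    return (2.0 * inter + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

a = np.array([1.0, 1.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0, 0.0])
print(round(iou(a, b), 4), round(dice(a, b), 4))  # 0.6667 0.75
```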
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Transformer model for language understanding
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/tutorials/text/transformer">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/text/transformer.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/text/transformer.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/r2/tutorials/text/transformer.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial trains a <a href="https://arxiv.org/abs/1706.03762" class="external">Transformer model</a> to translate Portuguese to English. This is an advanced example that assumes knowledge of [text generation](text_generation.ipynb) and [attention](nmt_with_attention.ipynb).
The core idea behind the Transformer model is *self-attention*—the ability to attend to different positions of the input sequence to compute a representation of that sequence. Transformer creates stacks of self-attention layers and is explained below in the sections *Scaled dot product attention* and *Multi-head attention*.
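The scaled dot-product attention mentioned above is explained in detail later in the tutorial; as a standalone preview, here is a NumPy sketch (illustrative only, not the tutorial's TensorFlow implementation):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

q = k = np.eye(3)
v = np.arange(9.0).reshape(3, 3)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (3, 3)
```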
A transformer model handles variable-sized input using stacks of self-attention layers instead of [RNNs](text_classification_rnn.ipynb) or [CNNs](../images/intro_to_cnns.ipynb). This general architecture has a number of advantages:
* It makes no assumptions about the temporal/spatial relationships across the data. This is ideal for processing a set of objects (for example, [StarCraft units](https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/#block-8)).
* Layer outputs can be calculated in parallel, instead of in series as in an RNN.
* Distant items can affect each other's output without passing through many RNN-steps, or convolution layers (see [Scene Memory Transformer](https://arxiv.org/pdf/1903.03878.pdf) for example).
* It can learn long-range dependencies. This is a challenge in many sequence tasks.
The downsides of this architecture are:
* For a time-series, the output for a time-step is calculated from the *entire history* instead of only the inputs and current hidden-state. This _may_ be less efficient.
* If the input *does* have a temporal/spatial relationship, like text, some positional encoding must be added or the model will effectively see a bag of words.
After training the model in this notebook, you will be able to input a Portuguese sentence and return the English translation.
<img src="https://www.tensorflow.org/images/tutorials/transformer/attention_map_portuguese.png" width="800" alt="Attention heatmap">
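The positional encoding mentioned above is typically the sinusoidal scheme from the original Transformer paper; a NumPy sketch for illustration:

```python
import numpy as np

def positional_encoding(position, d_model):
    # PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
    # PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    pos = np.arange(position)[:, np.newaxis].astype(np.float64)
    i = np.arange(d_model)[np.newaxis, :]
    angle_rates = 1.0 / np.power(10000.0, (2 * (i // 2)) / np.float64(d_model))
    angles = pos * angle_rates
    angles[:, 0::2] = np.sin(angles[:, 0::2])  # even indices: sine
    angles[:, 1::2] = np.cos(angles[:, 1::2])  # odd indices: cosine
    return angles

pe = positional_encoding(50, 512)
print(pe.shape)  # (50, 512)
```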
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
%tensorflow_version 2.x # Colab only.
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
import time
import numpy as np
import matplotlib.pyplot as plt
```
## Setup input pipeline
Use [TFDS](https://www.tensorflow.org/datasets) to load the [Portugese-English translation dataset](https://github.com/neulab/word-embeddings-for-nmt) from the [TED Talks Open Translation Project](https://www.ted.com/participate/translate).
This dataset contains approximately 50000 training examples, 1100 validation examples, and 2000 test examples.
```
examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True,
as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
```
Create a custom subwords tokenizer from the training dataset.
```
tokenizer_en = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(en.numpy() for pt, en in train_examples), target_vocab_size=2**13)
tokenizer_pt = tfds.features.text.SubwordTextEncoder.build_from_corpus(
(pt.numpy() for pt, en in train_examples), target_vocab_size=2**13)
sample_string = 'Transformer is awesome.'
tokenized_string = tokenizer_en.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))
original_string = tokenizer_en.decode(tokenized_string)
print ('The original string: {}'.format(original_string))
assert original_string == sample_string
```
The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.
```
for ts in tokenized_string:
print ('{} ----> {}'.format(ts, tokenizer_en.decode([ts])))
BUFFER_SIZE = 20000
BATCH_SIZE = 64
```
Add a start and end token to the input and target.
```
def encode(lang1, lang2):
lang1 = [tokenizer_pt.vocab_size] + tokenizer_pt.encode(
lang1.numpy()) + [tokenizer_pt.vocab_size+1]
lang2 = [tokenizer_en.vocab_size] + tokenizer_en.encode(
lang2.numpy()) + [tokenizer_en.vocab_size+1]
return lang1, lang2
```
Note: To keep this example small and relatively fast, drop examples with a length of over 40 tokens.
```
MAX_LENGTH = 40
def filter_max_length(x, y, max_length=MAX_LENGTH):
return tf.logical_and(tf.size(x) <= max_length,
tf.size(y) <= max_length)
```
Operations inside `.map()` run in graph mode and receive a graph tensor that does not have a `numpy` attribute. The `tokenizer` expects a string or Unicode symbol to encode into integers. Hence, you need to run the encoding inside a `tf.py_function`, which receives an eager tensor that has a `numpy` attribute containing the string value.
```
def tf_encode(pt, en):
return tf.py_function(encode, [pt, en], [tf.int64, tf.int64])
train_dataset = train_examples.map(tf_encode)
train_dataset = train_dataset.filter(filter_max_length)
# cache the dataset to memory to get a speedup while reading from it.
train_dataset = train_dataset.cache()
train_dataset = train_dataset.shuffle(BUFFER_SIZE).padded_batch(
BATCH_SIZE, padded_shapes=([-1], [-1]))
train_dataset = train_dataset.prefetch(tf.data.experimental.AUTOTUNE)
val_dataset = val_examples.map(tf_encode)
val_dataset = val_dataset.filter(filter_max_length).padded_batch(
BATCH_SIZE, padded_shapes=([-1], [-1]))
pt_batch, en_batch = next(iter(val_dataset))
pt_batch, en_batch
```
## Positional encoding
Since this model doesn't contain any recurrence or convolution, positional encoding is added to give the model some information about the relative position of the words in the sentence.
The positional encoding vector is added to the embedding vector. Embeddings represent a token in a d-dimensional space where tokens with similar meaning will be closer to each other. But the embeddings do not encode the relative position of words in a sentence. So after adding the positional encoding, words will be closer to each other based on the *similarity of their meaning and their position in the sentence*, in the d-dimensional space.
See the notebook on [positional encoding](https://github.com/tensorflow/examples/blob/master/community/en/position_encoding.ipynb) to learn more about it. The formula for calculating the positional encoding is as follows:
$$\Large{PE_{(pos, 2i)} = \sin(pos / 10000^{2i / d_{model}})} $$
$$\Large{PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i / d_{model}})} $$
```
def get_angles(pos, i, d_model):
angle_rates = 1 / np.power(10000, (2 * (i//2)) / np.float32(d_model))
return pos * angle_rates
def positional_encoding(position, d_model):
angle_rads = get_angles(np.arange(position)[:, np.newaxis],
np.arange(d_model)[np.newaxis, :],
d_model)
# apply sin to even indices in the array; 2i
angle_rads[:, 0::2] = np.sin(angle_rads[:, 0::2])
# apply cos to odd indices in the array; 2i+1
angle_rads[:, 1::2] = np.cos(angle_rads[:, 1::2])
pos_encoding = angle_rads[np.newaxis, ...]
return tf.cast(pos_encoding, dtype=tf.float32)
pos_encoding = positional_encoding(50, 512)
print (pos_encoding.shape)
plt.pcolormesh(pos_encoding[0], cmap='RdBu')
plt.xlabel('Depth')
plt.xlim((0, 512))
plt.ylabel('Position')
plt.colorbar()
plt.show()
```
## Masking
Mask all the pad tokens in the batch of sequences. This ensures that the model does not treat padding as input. The mask indicates where the pad value `0` is present: it outputs a `1` at those locations, and a `0` otherwise.
```
def create_padding_mask(seq):
seq = tf.cast(tf.math.equal(seq, 0), tf.float32)
# add extra dimensions to add the padding
# to the attention logits.
return seq[:, tf.newaxis, tf.newaxis, :] # (batch_size, 1, 1, seq_len)
x = tf.constant([[7, 6, 0, 0, 1], [1, 2, 3, 0, 0], [0, 0, 0, 4, 5]])
create_padding_mask(x)
```
The look-ahead mask is used to mask the future tokens in a sequence. In other words, the mask indicates which entries should not be used.
This means that to predict the third word, only the first and second word will be used. Similarly to predict the fourth word, only the first, second and the third word will be used and so on.
```
def create_look_ahead_mask(size):
mask = 1 - tf.linalg.band_part(tf.ones((size, size)), -1, 0)
return mask # (seq_len, seq_len)
x = tf.random.uniform((1, 3))
temp = create_look_ahead_mask(x.shape[1])
temp
```
## Scaled dot product attention
<img src="https://www.tensorflow.org/images/tutorials/transformer/scaled_attention.png" width="500" alt="scaled_dot_product_attention">
The attention function used by the transformer takes three inputs: Q (query), K (key), V (value). The equation used to calculate the attention weights is:
$$\Large{Attention(Q, K, V) = softmax_k(\frac{QK^T}{\sqrt{d_k}}) V} $$
The dot-product attention is scaled by a factor of the square root of the depth. This is done because for large values of depth, the dot product grows large in magnitude, pushing the softmax function into regions where it has extremely small gradients, resulting in a very hard (near one-hot) softmax.
For example, consider that `Q` and `K` have a mean of 0 and variance of 1. Their matrix multiplication will have a mean of 0 and variance of `dk`. Hence, the *square root of `dk`* is used for scaling (and not any other number) because the matmul of `Q` and `K` should have a mean of 0 and variance of 1, so you get a gentler softmax.
The mask is multiplied with -1e9 (close to negative infinity). This is done because the mask is summed with the scaled matrix multiplication of Q and K and is applied immediately before a softmax. The goal is to zero out these cells, and large negative inputs to softmax are near zero in the output.
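The variance claim above can be checked empirically. This is an illustrative sketch (not part of the tutorial pipeline); the sample count of 10,000 and seed are arbitrary choices:

```python
import numpy as np

# Draw standard-normal query/key vectors and compare the variance of their
# dot products before and after the 1/sqrt(dk) scaling.
rng = np.random.default_rng(0)
dk = 512
q = rng.standard_normal((10_000, dk))
k = rng.standard_normal((10_000, dk))

dots = np.sum(q * k, axis=-1)       # one unscaled dot product per row
print(np.var(dots))                 # close to dk (~512)
print(np.var(dots / np.sqrt(dk)))   # close to 1 after scaling
```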
```
def scaled_dot_product_attention(q, k, v, mask):
"""Calculate the attention weights.
q, k, v must have matching leading dimensions.
k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
The mask has different shapes depending on its type(padding or look ahead)
but it must be broadcastable for addition.
Args:
q: query shape == (..., seq_len_q, depth)
k: key shape == (..., seq_len_k, depth)
v: value shape == (..., seq_len_v, depth_v)
mask: Float tensor with shape broadcastable
to (..., seq_len_q, seq_len_k). Defaults to None.
Returns:
output, attention_weights
"""
matmul_qk = tf.matmul(q, k, transpose_b=True) # (..., seq_len_q, seq_len_k)
# scale matmul_qk
dk = tf.cast(tf.shape(k)[-1], tf.float32)
scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)
# add the mask to the scaled tensor.
if mask is not None:
scaled_attention_logits += (mask * -1e9)
# softmax is normalized on the last axis (seq_len_k) so that the scores
# add up to 1.
attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1) # (..., seq_len_q, seq_len_k)
output = tf.matmul(attention_weights, v) # (..., seq_len_q, depth_v)
return output, attention_weights
```
As the softmax normalization is done along the K (key) axis, the resulting weights decide how much importance each position in `K` receives for a given query in `Q`.
The output represents the multiplication of the attention weights and the V (value) vector. This ensures that the words you want to focus on are kept as-is and the irrelevant words are flushed out.
```
def print_out(q, k, v):
temp_out, temp_attn = scaled_dot_product_attention(
q, k, v, None)
print ('Attention weights are:')
print (temp_attn)
print ('Output is:')
print (temp_out)
np.set_printoptions(suppress=True)
temp_k = tf.constant([[10,0,0],
[0,10,0],
[0,0,10],
[0,0,10]], dtype=tf.float32) # (4, 3)
temp_v = tf.constant([[ 1,0],
[ 10,0],
[ 100,5],
[1000,6]], dtype=tf.float32) # (4, 2)
# This `query` aligns with the second `key`,
# so the second `value` is returned.
temp_q = tf.constant([[0, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# This query aligns with a repeated key (third and fourth),
# so all associated values get averaged.
temp_q = tf.constant([[0, 0, 10]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
# This query aligns equally with the first and second key,
# so their values get averaged.
temp_q = tf.constant([[10, 10, 0]], dtype=tf.float32) # (1, 3)
print_out(temp_q, temp_k, temp_v)
```
Pass all the queries together.
```
temp_q = tf.constant([[0, 0, 10], [0, 10, 0], [10, 10, 0]], dtype=tf.float32) # (3, 3)
print_out(temp_q, temp_k, temp_v)
```
## Multi-head attention
<img src="https://www.tensorflow.org/images/tutorials/transformer/multi_head_attention.png" width="500" alt="multi-head attention">
Multi-head attention consists of four parts:
* Linear layers and split into heads.
* Scaled dot-product attention.
* Concatenation of heads.
* Final linear layer.
Each multi-head attention block gets three inputs: Q (query), K (key), V (value). These are put through linear (`Dense`) layers and split up into multiple heads.
The `scaled_dot_product_attention` defined above is applied to each head (broadcasted for efficiency). An appropriate mask must be used in the attention step. The attention output for each head is then concatenated (using `tf.transpose`, and `tf.reshape`) and put through a final `Dense` layer.
Instead of one single attention head, Q, K, and V are split into multiple heads because it allows the model to jointly attend to information at different positions from different representational spaces. After the split each head has a reduced dimensionality, so the total computation cost is the same as a single head attention with full dimensionality.
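The cost claim above can be sketched with a quick arithmetic check (illustrative numbers only, e.g. `d_model=512` with 8 heads): the dominant attention matmul costs on the order of `seq_len² · depth` per head, so summing over heads matches one full-dimensional head.

```python
d_model, num_heads, seq_len = 512, 8, 60
depth = d_model // num_heads  # 64 dimensions per head

# Cost (multiply-accumulates) of the QK^T matmul, up to constant factors:
single_head_cost = seq_len * seq_len * d_model           # one head at full width
multi_head_cost = num_heads * seq_len * seq_len * depth  # 8 narrower heads
print(single_head_cost == multi_head_cost)  # True
```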
```
class MultiHeadAttention(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads):
super(MultiHeadAttention, self).__init__()
self.num_heads = num_heads
self.d_model = d_model
assert d_model % self.num_heads == 0
self.depth = d_model // self.num_heads
self.wq = tf.keras.layers.Dense(d_model)
self.wk = tf.keras.layers.Dense(d_model)
self.wv = tf.keras.layers.Dense(d_model)
self.dense = tf.keras.layers.Dense(d_model)
def split_heads(self, x, batch_size):
"""Split the last dimension into (num_heads, depth).
Transpose the result such that the shape is (batch_size, num_heads, seq_len, depth)
"""
x = tf.reshape(x, (batch_size, -1, self.num_heads, self.depth))
return tf.transpose(x, perm=[0, 2, 1, 3])
def call(self, v, k, q, mask):
batch_size = tf.shape(q)[0]
q = self.wq(q) # (batch_size, seq_len, d_model)
k = self.wk(k) # (batch_size, seq_len, d_model)
v = self.wv(v) # (batch_size, seq_len, d_model)
q = self.split_heads(q, batch_size) # (batch_size, num_heads, seq_len_q, depth)
k = self.split_heads(k, batch_size) # (batch_size, num_heads, seq_len_k, depth)
v = self.split_heads(v, batch_size) # (batch_size, num_heads, seq_len_v, depth)
# scaled_attention.shape == (batch_size, num_heads, seq_len_q, depth)
# attention_weights.shape == (batch_size, num_heads, seq_len_q, seq_len_k)
scaled_attention, attention_weights = scaled_dot_product_attention(
q, k, v, mask)
scaled_attention = tf.transpose(scaled_attention, perm=[0, 2, 1, 3]) # (batch_size, seq_len_q, num_heads, depth)
concat_attention = tf.reshape(scaled_attention,
(batch_size, -1, self.d_model)) # (batch_size, seq_len_q, d_model)
output = self.dense(concat_attention) # (batch_size, seq_len_q, d_model)
return output, attention_weights
```
Create a `MultiHeadAttention` layer to try out. At each location in the sequence, `y`, the `MultiHeadAttention` runs all 8 attention heads across all other locations in the sequence, returning a new vector of the same length at each location.
```
temp_mha = MultiHeadAttention(d_model=512, num_heads=8)
y = tf.random.uniform((1, 60, 512)) # (batch_size, encoder_sequence, d_model)
out, attn = temp_mha(y, k=y, q=y, mask=None)
out.shape, attn.shape
```
## Point wise feed forward network
Point wise feed forward network consists of two fully-connected layers with a ReLU activation in between.
```
def point_wise_feed_forward_network(d_model, dff):
return tf.keras.Sequential([
tf.keras.layers.Dense(dff, activation='relu'), # (batch_size, seq_len, dff)
tf.keras.layers.Dense(d_model) # (batch_size, seq_len, d_model)
])
sample_ffn = point_wise_feed_forward_network(512, 2048)
sample_ffn(tf.random.uniform((64, 50, 512))).shape
```
## Encoder and decoder
<img src="https://www.tensorflow.org/images/tutorials/transformer/transformer.png" width="600" alt="transformer">
The transformer model follows the same general pattern as a standard [sequence to sequence with attention model](nmt_with_attention.ipynb).
* The input sentence is passed through `N` encoder layers that generate an output for each word/token in the sequence.
* The decoder attends on the encoder's output and its own input (self-attention) to predict the next word.
### Encoder layer
Each encoder layer consists of sublayers:
1. Multi-head attention (with padding mask)
2. Point wise feed forward networks.
Each of these sublayers has a residual connection around it followed by a layer normalization. Residual connections help in avoiding the vanishing gradient problem in deep networks.
The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis. There are N encoder layers in the transformer.
```
class EncoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(EncoderLayer, self).__init__()
self.mha = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
attn_output, _ = self.mha(x, x, x, mask) # (batch_size, input_seq_len, d_model)
attn_output = self.dropout1(attn_output, training=training)
out1 = self.layernorm1(x + attn_output) # (batch_size, input_seq_len, d_model)
ffn_output = self.ffn(out1) # (batch_size, input_seq_len, d_model)
ffn_output = self.dropout2(ffn_output, training=training)
out2 = self.layernorm2(out1 + ffn_output) # (batch_size, input_seq_len, d_model)
return out2
sample_encoder_layer = EncoderLayer(512, 8, 2048)
sample_encoder_layer_output = sample_encoder_layer(
tf.random.uniform((64, 43, 512)), False, None)
sample_encoder_layer_output.shape # (batch_size, input_seq_len, d_model)
```
### Decoder layer
Each decoder layer consists of sublayers:
1. Masked multi-head attention (with look ahead mask and padding mask)
2. Multi-head attention (with padding mask). V (value) and K (key) receive the *encoder output* as inputs. Q (query) receives the *output from the masked multi-head attention sublayer.*
3. Point wise feed forward networks
Each of these sublayers has a residual connection around it followed by a layer normalization. The output of each sublayer is `LayerNorm(x + Sublayer(x))`. The normalization is done on the `d_model` (last) axis.
There are N decoder layers in the transformer.
As Q receives the output from decoder's first attention block, and K receives the encoder output, the attention weights represent the importance given to the decoder's input based on the encoder's output. In other words, the decoder predicts the next word by looking at the encoder output and self-attending to its own output. See the demonstration above in the scaled dot product attention section.
```
class DecoderLayer(tf.keras.layers.Layer):
def __init__(self, d_model, num_heads, dff, rate=0.1):
super(DecoderLayer, self).__init__()
self.mha1 = MultiHeadAttention(d_model, num_heads)
self.mha2 = MultiHeadAttention(d_model, num_heads)
self.ffn = point_wise_feed_forward_network(d_model, dff)
self.layernorm1 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm2 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.layernorm3 = tf.keras.layers.LayerNormalization(epsilon=1e-6)
self.dropout1 = tf.keras.layers.Dropout(rate)
self.dropout2 = tf.keras.layers.Dropout(rate)
self.dropout3 = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
# enc_output.shape == (batch_size, input_seq_len, d_model)
attn1, attn_weights_block1 = self.mha1(x, x, x, look_ahead_mask) # (batch_size, target_seq_len, d_model)
attn1 = self.dropout1(attn1, training=training)
out1 = self.layernorm1(attn1 + x)
attn2, attn_weights_block2 = self.mha2(
enc_output, enc_output, out1, padding_mask) # (batch_size, target_seq_len, d_model)
attn2 = self.dropout2(attn2, training=training)
out2 = self.layernorm2(attn2 + out1) # (batch_size, target_seq_len, d_model)
ffn_output = self.ffn(out2) # (batch_size, target_seq_len, d_model)
ffn_output = self.dropout3(ffn_output, training=training)
out3 = self.layernorm3(ffn_output + out2) # (batch_size, target_seq_len, d_model)
return out3, attn_weights_block1, attn_weights_block2
sample_decoder_layer = DecoderLayer(512, 8, 2048)
sample_decoder_layer_output, _, _ = sample_decoder_layer(
tf.random.uniform((64, 50, 512)), sample_encoder_layer_output,
False, None, None)
sample_decoder_layer_output.shape # (batch_size, target_seq_len, d_model)
```
### Encoder
The `Encoder` consists of:
1. Input Embedding
2. Positional Encoding
3. N encoder layers
The input is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the encoder layers. The output of the encoder is the input to the decoder.
```
class Encoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
rate=0.1):
super(Encoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(input_vocab_size, d_model)
self.pos_encoding = positional_encoding(input_vocab_size, self.d_model)
self.enc_layers = [EncoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, training, mask):
seq_len = tf.shape(x)[1]
# adding embedding and position encoding.
x = self.embedding(x) # (batch_size, input_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x = self.enc_layers[i](x, training, mask)
return x # (batch_size, input_seq_len, d_model)
sample_encoder = Encoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, input_vocab_size=8500)
sample_encoder_output = sample_encoder(tf.random.uniform((64, 62)),
training=False, mask=None)
print (sample_encoder_output.shape) # (batch_size, input_seq_len, d_model)
```
### Decoder
The `Decoder` consists of:
1. Output Embedding
2. Positional Encoding
3. N decoder layers
The target is put through an embedding which is summed with the positional encoding. The output of this summation is the input to the decoder layers. The output of the decoder is the input to the final linear layer.
```
class Decoder(tf.keras.layers.Layer):
def __init__(self, num_layers, d_model, num_heads, dff, target_vocab_size,
rate=0.1):
super(Decoder, self).__init__()
self.d_model = d_model
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(target_vocab_size, d_model)
self.pos_encoding = positional_encoding(target_vocab_size, d_model)
self.dec_layers = [DecoderLayer(d_model, num_heads, dff, rate)
for _ in range(num_layers)]
self.dropout = tf.keras.layers.Dropout(rate)
def call(self, x, enc_output, training,
look_ahead_mask, padding_mask):
seq_len = tf.shape(x)[1]
attention_weights = {}
x = self.embedding(x) # (batch_size, target_seq_len, d_model)
x *= tf.math.sqrt(tf.cast(self.d_model, tf.float32))
x += self.pos_encoding[:, :seq_len, :]
x = self.dropout(x, training=training)
for i in range(self.num_layers):
x, block1, block2 = self.dec_layers[i](x, enc_output, training,
look_ahead_mask, padding_mask)
attention_weights['decoder_layer{}_block1'.format(i+1)] = block1
attention_weights['decoder_layer{}_block2'.format(i+1)] = block2
# x.shape == (batch_size, target_seq_len, d_model)
return x, attention_weights
sample_decoder = Decoder(num_layers=2, d_model=512, num_heads=8,
dff=2048, target_vocab_size=8000)
output, attn = sample_decoder(tf.random.uniform((64, 26)),
enc_output=sample_encoder_output,
training=False, look_ahead_mask=None,
padding_mask=None)
output.shape, attn['decoder_layer2_block2'].shape
```
## Create the Transformer
The transformer consists of the encoder, the decoder, and a final linear layer. The output of the decoder is the input to the linear layer, and its output is returned.
```
class Transformer(tf.keras.Model):
def __init__(self, num_layers, d_model, num_heads, dff, input_vocab_size,
target_vocab_size, rate=0.1):
super(Transformer, self).__init__()
self.encoder = Encoder(num_layers, d_model, num_heads, dff,
input_vocab_size, rate)
self.decoder = Decoder(num_layers, d_model, num_heads, dff,
target_vocab_size, rate)
self.final_layer = tf.keras.layers.Dense(target_vocab_size)
def call(self, inp, tar, training, enc_padding_mask,
look_ahead_mask, dec_padding_mask):
enc_output = self.encoder(inp, training, enc_padding_mask) # (batch_size, inp_seq_len, d_model)
# dec_output.shape == (batch_size, tar_seq_len, d_model)
dec_output, attention_weights = self.decoder(
tar, enc_output, training, look_ahead_mask, dec_padding_mask)
final_output = self.final_layer(dec_output) # (batch_size, tar_seq_len, target_vocab_size)
return final_output, attention_weights
sample_transformer = Transformer(
num_layers=2, d_model=512, num_heads=8, dff=2048,
input_vocab_size=8500, target_vocab_size=8000)
temp_input = tf.random.uniform((64, 62))
temp_target = tf.random.uniform((64, 26))
fn_out, _ = sample_transformer(temp_input, temp_target, training=False,
enc_padding_mask=None,
look_ahead_mask=None,
dec_padding_mask=None)
fn_out.shape # (batch_size, tar_seq_len, target_vocab_size)
```
## Set hyperparameters
To keep this example small and relatively fast, the values for *num_layers, d_model, and dff* have been reduced.
The values used in the base model of the transformer were *num_layers=6*, *d_model=512*, and *dff=2048*. See the [paper](https://arxiv.org/abs/1706.03762) for all the other versions of the transformer.
Note: By changing the values below, you can get the model configurations that achieved state-of-the-art results on many tasks.
```
num_layers = 4
d_model = 128
dff = 512
num_heads = 8
input_vocab_size = tokenizer_pt.vocab_size + 2
target_vocab_size = tokenizer_en.vocab_size + 2
dropout_rate = 0.1
```
## Optimizer
Use the Adam optimizer with a custom learning rate scheduler according to the formula in the [paper](https://arxiv.org/abs/1706.03762).
$$\Large{lrate = d_{model}^{-0.5} * min(step{\_}num^{-0.5}, step{\_}num * warmup{\_}steps^{-1.5})}$$
```
class CustomSchedule(tf.keras.optimizers.schedules.LearningRateSchedule):
def __init__(self, d_model, warmup_steps=4000):
super(CustomSchedule, self).__init__()
self.d_model = d_model
self.d_model = tf.cast(self.d_model, tf.float32)
self.warmup_steps = warmup_steps
def __call__(self, step):
arg1 = tf.math.rsqrt(step)
arg2 = step * (self.warmup_steps ** -1.5)
return tf.math.rsqrt(self.d_model) * tf.math.minimum(arg1, arg2)
learning_rate = CustomSchedule(d_model)
optimizer = tf.keras.optimizers.Adam(learning_rate, beta_1=0.9, beta_2=0.98,
epsilon=1e-9)
temp_learning_rate_schedule = CustomSchedule(d_model)
plt.plot(temp_learning_rate_schedule(tf.range(40000, dtype=tf.float32)))
plt.ylabel("Learning Rate")
plt.xlabel("Train Step")
```
## Loss and metrics
Since the target sequences are padded, it is important to apply a padding mask when calculating the loss.
```
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
name='train_accuracy')
```
## Training and checkpointing
```
transformer = Transformer(num_layers, d_model, num_heads, dff,
input_vocab_size, target_vocab_size, dropout_rate)
def create_masks(inp, tar):
# Encoder padding mask
enc_padding_mask = create_padding_mask(inp)
# Used in the 2nd attention block in the decoder.
# This padding mask is used to mask the encoder outputs.
dec_padding_mask = create_padding_mask(inp)
# Used in the 1st attention block in the decoder.
# It is used to pad and mask future tokens in the input received by
# the decoder.
look_ahead_mask = create_look_ahead_mask(tf.shape(tar)[1])
dec_target_padding_mask = create_padding_mask(tar)
combined_mask = tf.maximum(dec_target_padding_mask, look_ahead_mask)
return enc_padding_mask, combined_mask, dec_padding_mask
```
Create the checkpoint path and the checkpoint manager. This will be used to save checkpoints every `n` epochs.
```
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(transformer=transformer,
optimizer=optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
ckpt.restore(ckpt_manager.latest_checkpoint)
print ('Latest checkpoint restored!!')
```
The target is divided into `tar_inp` and `tar_real`. `tar_inp` is passed as an input to the decoder. `tar_real` is that same input shifted by 1: at each location in `tar_inp`, `tar_real` contains the next token that should be predicted.
For example, `sentence` = "SOS A lion in the jungle is sleeping EOS"
`tar_inp` = "SOS A lion in the jungle is sleeping"
`tar_real` = "A lion in the jungle is sleeping EOS"
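The shift can be sketched with a toy batch of made-up token IDs (`8000`/`8001` stand in for the start/end tokens; NumPy slicing behaves the same as the TensorFlow slicing used in `train_step`):

```python
import numpy as np

# Toy target sequence: [SOS, "A", "lion", "sleeps", EOS] as made-up token IDs.
tar = np.array([[8000, 5, 42, 7, 8001]])
tar_inp = tar[:, :-1]   # everything except the last token
tar_real = tar[:, 1:]   # the same sequence shifted left by one

print(tar_inp.tolist())   # [[8000, 5, 42, 7]]
print(tar_real.tolist())  # [[5, 42, 7, 8001]]
```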
The transformer is an auto-regressive model: it makes predictions one part at a time, and uses its output so far to decide what to do next.
During training this example uses teacher-forcing (like in the [text generation tutorial](./text_generation.ipynb)). Teacher forcing is passing the true output to the next time step regardless of what the model predicts at the current time step.
As the transformer predicts each word, *self-attention* allows it to look at the previous words in the input sequence to better predict the next word.
To prevent the model from peeking at the expected output, the model uses a look-ahead mask.
```
EPOCHS = 20
# The @tf.function trace-compiles train_step into a TF graph for faster
# execution. The function specializes to the precise shape of the argument
# tensors. To avoid re-tracing due to the variable sequence lengths or variable
# batch sizes (the last batch is smaller), use input_signature to specify
# more generic shapes.
train_step_signature = [
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
tf.TensorSpec(shape=(None, None), dtype=tf.int64),
]
@tf.function(input_signature=train_step_signature)
def train_step(inp, tar):
tar_inp = tar[:, :-1]
tar_real = tar[:, 1:]
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(inp, tar_inp)
with tf.GradientTape() as tape:
predictions, _ = transformer(inp, tar_inp,
True,
enc_padding_mask,
combined_mask,
dec_padding_mask)
loss = loss_function(tar_real, predictions)
gradients = tape.gradient(loss, transformer.trainable_variables)
optimizer.apply_gradients(zip(gradients, transformer.trainable_variables))
train_loss(loss)
train_accuracy(tar_real, predictions)
```
Portuguese is used as the input language and English is the target language.
```
for epoch in range(EPOCHS):
start = time.time()
train_loss.reset_states()
train_accuracy.reset_states()
# inp -> portuguese, tar -> english
for (batch, (inp, tar)) in enumerate(train_dataset):
train_step(inp, tar)
if batch % 50 == 0:
print ('Epoch {} Batch {} Loss {:.4f} Accuracy {:.4f}'.format(
epoch + 1, batch, train_loss.result(), train_accuracy.result()))
if (epoch + 1) % 5 == 0:
ckpt_save_path = ckpt_manager.save()
print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
ckpt_save_path))
print ('Epoch {} Loss {:.4f} Accuracy {:.4f}'.format(epoch + 1,
train_loss.result(),
train_accuracy.result()))
print ('Time taken for 1 epoch: {} secs\n'.format(time.time() - start))
```
## Evaluate
The following steps are used for evaluation:
* Encode the input sentence using the Portuguese tokenizer (`tokenizer_pt`), and add the start and end token so the input is equivalent to what the model was trained with. This is the encoder input.
* The decoder input is the start token (`tokenizer_en.vocab_size`).
* Calculate the padding masks and the look ahead masks.
* The `decoder` then outputs the predictions by looking at the `encoder output` and its own output (self-attention).
* Select the prediction at the last position and take its argmax.
* Concatenate the predicted word to the decoder input and pass it to the decoder.
* In this approach, the decoder predicts the next word based on the previous words it predicted.
Note: The model used here has reduced capacity to keep the example relatively fast, so the predictions may be less accurate. To reproduce the results in the paper, use the entire dataset and the base transformer model or Transformer-XL by changing the hyperparameters above.
```
def evaluate(inp_sentence):
start_token = [tokenizer_pt.vocab_size]
end_token = [tokenizer_pt.vocab_size + 1]
# inp sentence is portuguese, hence adding the start and end token
inp_sentence = start_token + tokenizer_pt.encode(inp_sentence) + end_token
encoder_input = tf.expand_dims(inp_sentence, 0)
# as the target is english, the first word to the transformer should be the
# english start token.
decoder_input = [tokenizer_en.vocab_size]
output = tf.expand_dims(decoder_input, 0)
for i in range(MAX_LENGTH):
enc_padding_mask, combined_mask, dec_padding_mask = create_masks(
encoder_input, output)
# predictions.shape == (batch_size, seq_len, vocab_size)
predictions, attention_weights = transformer(encoder_input,
output,
False,
enc_padding_mask,
combined_mask,
dec_padding_mask)
# select the last word from the seq_len dimension
predictions = predictions[: ,-1:, :] # (batch_size, 1, vocab_size)
predicted_id = tf.cast(tf.argmax(predictions, axis=-1), tf.int32)
# return the result if the predicted_id is equal to the end token
if tf.equal(predicted_id, tokenizer_en.vocab_size+1):
return tf.squeeze(output, axis=0), attention_weights
# concatentate the predicted_id to the output which is given to the decoder
# as its input.
output = tf.concat([output, predicted_id], axis=-1)
return tf.squeeze(output, axis=0), attention_weights
def plot_attention_weights(attention, sentence, result, layer):
fig = plt.figure(figsize=(16, 8))
sentence = tokenizer_pt.encode(sentence)
attention = tf.squeeze(attention[layer], axis=0)
for head in range(attention.shape[0]):
ax = fig.add_subplot(2, 4, head+1)
# plot the attention weights
ax.matshow(attention[head][:-1, :], cmap='viridis')
fontdict = {'fontsize': 10}
ax.set_xticks(range(len(sentence)+2))
ax.set_yticks(range(len(result)))
ax.set_ylim(len(result)-1.5, -0.5)
ax.set_xticklabels(
['<start>']+[tokenizer_pt.decode([i]) for i in sentence]+['<end>'],
fontdict=fontdict, rotation=90)
ax.set_yticklabels([tokenizer_en.decode([i]) for i in result
if i < tokenizer_en.vocab_size],
fontdict=fontdict)
ax.set_xlabel('Head {}'.format(head+1))
plt.tight_layout()
plt.show()
def translate(sentence, plot=''):
result, attention_weights = evaluate(sentence)
predicted_sentence = tokenizer_en.decode([i for i in result
if i < tokenizer_en.vocab_size])
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(predicted_sentence))
if plot:
plot_attention_weights(attention_weights, sentence, result, plot)
translate("este é um problema que temos que resolver.")
print ("Real translation: this is a problem we have to solve .")
translate("os meus vizinhos ouviram sobre esta ideia.")
print ("Real translation: and my neighboring homes heard about this idea .")
translate("vou então muito rapidamente partilhar convosco algumas histórias de algumas coisas mágicas que aconteceram.")
print ("Real translation: so i 'll just share with you some stories very quickly of some magical things that have happened .")
```
You can pass different layers and attention blocks of the decoder to the `plot` parameter.
```
translate("este é o primeiro livro que eu fiz.", plot='decoder_layer4_block2')
print ("Real translation: this is the first book i've ever done.")
```
## Summary
In this tutorial, you learned about positional encoding, multi-head attention, the importance of masking and how to create a transformer.
Try using a different dataset to train the transformer. You can also create the base transformer or Transformer-XL by changing the hyperparameters above. You can also use the layers defined here to create [BERT](https://arxiv.org/abs/1810.04805) and train state-of-the-art models. Furthermore, you can implement beam search to get better predictions.
<a href="https://colab.research.google.com/github/sigvehaug/ISDAwPython_day3.1/blob/main/ISDAwPython_3_1_NB_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Statistics with Python, S. Haug, University of Bern.
# Parameter estimation / regression with Python
**Average expected study time :** 45 min (depending on your background)
**Learning outcomes :**
- Refresh what is meant with parameter estimation and regression
- Perform linear regression with Python by example
- Fitting to built-in functions
- Fitting to own defined functions
- Know what non-parametric regression is
**Main python module used**
- the scipy.stats module: https://docs.scipy.org/doc/scipy/reference/stats.html
# 3.0 Regression - Situation
We have data and want to extract model parameters from it. An example would be estimating the mean and the standard deviation, assuming a normal distribution. Another would be fitting a straight line. For historical reasons this kind of analysis is often called regression. Some scientists just say fitting (model parameters to the data).
We distinguish between parametric and non-parametric models. A line and the normal distribution are both parametric.
## 3.1 About linear Regression
Linear regression means fitting linear parameters to a set of data points (x,y). x and y may be vectors. You may consider this as the simplest case of Machine Learning. Example, a line is described by
$$y = ax + b$$
Thus two parameters, a (slope) and b (intercept with the y axis), can be fitted to (x, y). This is called linear regression because the parameters enter linearly (no higher powers).
There are different fitting methods, mostly least squares or maximum likelihood are used.
## 3.2 Linear regression with scipy.stats
Import the Python libraries we need.
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as stats
```
Read the data from file and do a linear regression for a line in the plength-pwidth space of the setosa sample. We use https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.linregress.html, using least squares.
```
url = 'https://www.openml.org/data/get_csv/61/dataset_61_iris.arff'
```
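As a sketch of what such a `linregress` call returns, here is the same workflow on synthetic data (the true slope 0.2 and intercept 0.1 are assumptions of this toy example, standing in for the petal length/width pairs, so the cell needs no download):

```python
import numpy as np
import scipy.stats as stats

# Synthetic (x, y) points scattered around the line y = 0.2*x + 0.1.
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 2.0, size=50)
y = 0.2 * x + 0.1 + rng.normal(0.0, 0.01, size=50)

res = stats.linregress(x, y)
print(f"slope = {res.slope:.3f}, intercept = {res.intercept:.3f}, r = {res.rvalue:.3f}")
```

The result object also carries `res.stderr`, the standard error of the slope estimate.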
The number of digits is ridiculous. Let's print it better.
```
```
Let's look at the scatter plot to see if this makes sense.
```
```
By eye it is hard to say how good this fit is. Try the same regression with versicolor; the result may be a bit clearer.
We now have a model, a straight line, whose shape we have chosen but whose parameters (slope and intercept) have been estimated/fitted from the data with the least squares method. It tells us that the petal width of a leaf is approximately petal length × slope + intercept. So we can interpolate and extrapolate, i.e. get the petal width at any petal length.
## 3.3 Fitting data to other built-in p.d.f.
The scipy.stats module comes with many built-in distributions. For example the exponential distribution with scale $\beta$ and location $\mu$:
$$f(x)=\frac{1}{\beta} e^{-(x-\mu)/\beta} , x \ge \mu;\beta>0$$
```
# Let us fit data to an exponential distribution
```
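A minimal sketch of such a fit with `scipy.stats.expon.fit`, on synthetic data drawn with assumed parameters $\mu = 0$ and $\beta = 2$:

```python
import numpy as np
import scipy.stats as stats

# Synthetic exponential data with mu = 0 and beta = 2 (an assumed toy example).
rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=5000)

# expon.fit returns (loc, scale), i.e. estimates of (mu, beta).
mu_hat, beta_hat = stats.expon.fit(data)
print(f"mu_hat = {mu_hat:.4f}, beta_hat = {beta_hat:.3f}")
```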
This fit method is limited in the sense that it doesn't return uncertainties on the fitted values, which we normally want to know. The `curve_fit` method below also returns the uncertainties.
## 3.4 Fitting your own defined function
If a line is not straight, it is curved. There are many mathematical functions whose parameters we can try to fit to experimental data points. Some examples: polynomials (first order is linear regression, second order a parabola, etc.), exponential functions, the normal function, a sinusoidal wave function, etc. You need to choose an appropriate shape/function to obtain a good result.
With the scipy.stats module we can look for preprogrammed functions (in principle you can also program your own function whose parameters you want to fit): https://docs.scipy.org/doc/scipy/reference/stats.html.
The scipy.optimize module provides a more general non-linear least squares fit. Look at and play with this example. It is complex and you will probably need some time testing, googling etc.
```
# First let us generate some synthetic data to play with
from scipy.optimize import curve_fit
# We define our own model
# Now use curve_fit to fit the model parameters to the data
```
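A hedged sketch of the `curve_fit` workflow on synthetic data (the model form and the true parameters 2.5, 1.3, 0.5 are assumptions of this toy example):

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    """Exponential decay with offset: a * exp(-b*x) + c."""
    return a * np.exp(-b * x) + c

# Generate noisy synthetic data from known parameters.
rng = np.random.default_rng(2)
xdata = np.linspace(0, 4, 80)
ydata = model(xdata, 2.5, 1.3, 0.5) + rng.normal(0.0, 0.02, size=xdata.size)

# Fit the parameters; pcov is the covariance matrix of the estimates.
popt, pcov = curve_fit(model, xdata, ydata, p0=[1.0, 1.0, 0.0])
perr = np.sqrt(np.diag(pcov))  # one-sigma uncertainties on a, b, c
print("fitted parameters:", popt)
print("uncertainties:", perr)
```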
## 3.5 Regression with statsmodels
The regression methods in scipy.stats don't give very rich output. In particular, one often would like to know more about the uncertainties on the fitted parameters. The **statsmodels** library is more powerful in this sense. Let us look at one example.
statsmodels documentation: https://www.statsmodels.org/
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.sandbox.regression.predstd import wls_prediction_std
np.random.seed(9876789)
# pwidths and plenghts we extracted from the Iris set above
```
## 3.6 Comment on non-parametric regression
So far we have used functions (models) with a predefined shape/form, whose parameters we fitted to data. If we have no clue about the form, we may try non-parametric methods. However, these require more data, as the shape also needs to be guessed or fitted from the data. So a non-parametric method normally gives poorer results.
There are several ways to do this in Python. You make look at this if you are interested:
https://pythonhosted.org/PyQt-Fit/NonParam_tut.html
```
import io
import numpy as np
import sys
from gym.envs.toy_text import discrete
import gym.spaces
UP = 0
RIGHT = 1
DOWN = 2
LEFT = 3
class GridworldEnv(discrete.DiscreteEnv):
"""
Grid World environment from Sutton's Reinforcement Learning book chapter 4.
You are an agent on an MxN grid and your goal is to reach the terminal
state at the top left or the bottom right corner.
For example, a 4x4 grid looks as follows:
T o o o
o x o o
o o o o
o o o T
x is your position and T are the two terminal states.
You can take actions in each direction (UP=0, RIGHT=1, DOWN=2, LEFT=3).
Actions going off the edge leave you in your current state.
You receive a reward of -1 at each step until you reach a terminal state.
"""
metadata = {'render.modes': ['human', 'ansi']}
def __init__(self, shape=[4,4]):
if not isinstance(shape, (list, tuple)) or not len(shape) == 2:
raise ValueError('shape argument must be a list/tuple of length 2')
self.shape = shape
nS = np.prod(shape)
nA = 4
MAX_Y = shape[0]
MAX_X = shape[1]
P = {}
grid = np.arange(nS).reshape(shape)
it = np.nditer(grid, flags=['multi_index'])
while not it.finished:
s = it.iterindex
y, x = it.multi_index
# P[s][a] = (prob, next_state, reward, is_done)
P[s] = {a : [] for a in range(nA)}
is_done = lambda s: s == 0 or s == (nS - 1)
reward = 0.0 if is_done(s) else -1.0
# We're stuck in a terminal state
if is_done(s):
P[s][UP] = [(1.0, s, reward, True)]
P[s][RIGHT] = [(1.0, s, reward, True)]
P[s][DOWN] = [(1.0, s, reward, True)]
P[s][LEFT] = [(1.0, s, reward, True)]
# Not a terminal state
else:
ns_up = s if y == 0 else s - MAX_X
ns_right = s if x == (MAX_X - 1) else s + 1
ns_down = s if y == (MAX_Y - 1) else s + MAX_X
ns_left = s if x == 0 else s - 1
P[s][UP] = [(1.0, ns_up, reward, is_done(ns_up))]
P[s][RIGHT] = [(1.0, ns_right, reward, is_done(ns_right))]
P[s][DOWN] = [(1.0, ns_down, reward, is_done(ns_down))]
P[s][LEFT] = [(1.0, ns_left, reward, is_done(ns_left))]
it.iternext()
# Initial state distribution is uniform
isd = np.ones(nS) / nS
# We expose the model of the environment for educational purposes
# This should not be used in any model-free learning algorithm
self.P = P
super(GridworldEnv, self).__init__(nS, nA, P, isd)
def _render(self, mode='human', close=False):
""" Renders the current gridworld layout
For example, a 4x4 grid with the mode="human" looks like:
T o o o
o x o o
o o o o
o o o T
where x is your position and T are the two terminal states.
"""
if close:
return
outfile = io.StringIO() if mode == 'ansi' else sys.stdout
grid = np.arange(self.nS).reshape(self.shape)
it = np.nditer(grid, flags=['multi_index'])
while not it.finished:
s = it.iterindex
y, x = it.multi_index
if self.s == s:
output = " x "
elif s == 0 or s == self.nS - 1:
output = " T "
else:
output = " o "
if x == 0:
output = output.lstrip()
if x == self.shape[1] - 1:
output = output.rstrip()
outfile.write(output)
if x == self.shape[1] - 1:
outfile.write("\n")
it.iternext()
env = GridworldEnv()
def policy_eval(policy, env, discount_factor=1.0, epsilon=0.00001):
"""
Evaluate a policy given an environment and a full description of the environment's dynamics.
Args:
policy: [S, A] shaped matrix representing the policy.
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
env.nS is a number of states in the environment.
env.nA is a number of actions in the environment.
epsilon: We stop evaluation once our value function change is less than epsilon for all states.
discount_factor: Gamma discount factor.
Returns:
Vector of length env.nS representing the value function.
"""
# Start with a random (all 0) value function
V_old = np.zeros(env.nS)
while True:
#new value function
V_new = np.zeros(env.nS)
#stopping condition
delta = 0
#loop over state space
for s in range(env.nS):
#to accumulate the Bellman expectation eqn
v_fn = 0
#get probability distribution over actions
action_probs = policy[s]
#loop over possible actions
for a in range(env.nA):
#get transitions
[(prob, next_state, reward, done)] = env.P[s][a]
#apply Bellman expectation eqn
v_fn += action_probs[a] * (reward + discount_factor * V_old[next_state])
#get the biggest difference over state space
delta = max(delta, abs(v_fn - V_old[s]))
#update state-value
V_new[s] = v_fn
#the new value function
V_old = V_new
#stop once the value function has converged
if(delta < epsilon):
break
return np.array(V_old)
random_policy = np.ones([env.nS, env.nA]) / env.nA
v = policy_eval(random_policy, env)
expected_v = np.array([0, -14, -20, -22, -14, -18, -20, -20, -20, -20, -18, -14, -22, -20, -14, 0])
np.testing.assert_array_almost_equal(v, expected_v, decimal=2)
print(v)
print(expected_v)
def policy_improvement(env, policy_eval_fn=policy_eval, discount_factor=1.0):
"""
Policy Improvement Algorithm. Iteratively evaluates and improves a policy
until an optimal policy is found.
Args:
env: The OpenAI environment.
policy_eval_fn: Policy Evaluation function that takes 3 arguments:
policy, env, discount_factor.
discount_factor: gamma discount factor.
Returns:
A tuple (policy, V).
policy is the optimal policy, a matrix of shape [S, A] where each state s
contains a valid probability distribution over actions.
V is the value function for the optimal policy.
"""
def one_step_lookahead(s, value_fn):
actions = np.zeros(env.nA)
for a in range(env.nA):
[(prob, next_state, reward, done)] = env.P[s][a]
actions[a] = prob * (reward + discount_factor * value_fn[next_state])
return actions
# Start with a random policy
policy = np.ones([env.nS, env.nA]) / env.nA
actions_values = np.zeros(env.nA)
while True:
#evaluate the current policy
value_fn = policy_eval_fn(policy, env)
policy_stable = True
#loop over state space
for s in range(env.nS):
#perform one step lookahead
actions_values = one_step_lookahead(s, value_fn)
#maximize over possible actions
best_action = np.argmax(actions_values)
#best action on current policy
chosen_action = np.argmax(policy[s])
#if the Bellman optimality equation is not satisfied
if(best_action != chosen_action):
policy_stable = False
#the new policy after acting greedily w.r.t value function
policy[s] = np.eye(env.nA)[best_action]
#if Bellman optimality eqn is satisfied
if(policy_stable):
return policy, value_fn
policy, v = policy_improvement(env)
print("Policy Probability Distribution:")
print(policy)
print("")
print("Reshaped Grid Policy (0=up, 1=right, 2=down, 3=left):")
print(np.reshape(np.argmax(policy, axis=1), env.shape))
print("")
print("Value Function:")
print(v)
print("")
print("Reshaped Grid Value Function:")
print(v.reshape(env.shape))
print("")
def value_iteration(env, epsilon=0.0001, discount_factor=1.0):
"""
Value Iteration Algorithm.
Args:
env: OpenAI env. env.P represents the transition probabilities of the environment.
env.P[s][a] is a list of transition tuples (prob, next_state, reward, done).
env.nS is a number of states in the environment.
env.nA is a number of actions in the environment.
epsilon: We stop evaluation once our value function change is less than epsilon for all states.
discount_factor: Gamma discount factor.
Returns:
A tuple (policy, V) of the optimal policy and the optimal value function.
"""
def one_step_lookahead(V, a, s):
[(prob, next_state, reward, done)] = env.P[s][a]
v = prob * (reward + discount_factor * V[next_state])
return v
#start with an initial value function and an initial policy
V = np.zeros(env.nS)
policy = np.zeros([env.nS, env.nA])
#while not the optimal policy
while True:
#for stopping condition
delta = 0
#loop over state space
for s in range(env.nS):
actions_values = np.zeros(env.nA)
#loop over possible actions
for a in range(env.nA):
#apply bellman eqn to get actions values
actions_values[a] = one_step_lookahead(V, a, s)
#pick the best action
best_action_value = max(actions_values)
#get the biggest difference between best action value and our old value function
delta = max(delta, abs(best_action_value - V[s]))
#apply bellman optimality eqn
V[s] = best_action_value
#to update the policy
best_action = np.argmax(actions_values)
#update the policy
policy[s] = np.eye(env.nA)[best_action]
#if optimal value function
if(delta < epsilon):
break
return policy, V
policy, v = value_iteration(env)
print("Policy Probability Distribution:")
print(policy)
print("")
print("Reshaped Grid Policy (0=up, 1=right, 2=down, 3=left):")
print(np.reshape(np.argmax(policy, axis=1), env.shape))
print("")
print("Value Function:")
print(v)
print("")
print("Reshaped Grid Value Function:")
print(v.reshape(env.shape))
print("")
```
# matplotlib exercises
```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
```
## Q1: planetary positions
The distances of the planets from the Sun (technically, their semi-major axes) are:
```
a = np.array([0.39, 0.72, 1.00, 1.52, 5.20, 9.54, 19.22, 30.06, 39.48])
```
These are in units where the Earth-Sun distance is 1 (astronomical units).
The corresponding periods of their orbits (how long they take to go once around the Sun) are, in years
```
P = np.array([0.24, 0.62, 1.00, 1.88, 11.86, 29.46, 84.01, 164.8, 248.09])
```
Finally, the names of the planets corresponding to these are:
```
names = ["Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn",
"Uranus", "Neptune", "Pluto"]
```
(Technically, Pluto isn't a planet anymore, but we still love it.)
* Plot as points, the periods vs. distances for each planet on a log-log plot.
* Write the name of the planet next to the point for that planet on the plot
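One possible sketch of this exercise (the annotation offsets and figure size are cosmetic assumptions; the headless backend just lets the cell run without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this in a live notebook
import matplotlib.pyplot as plt
import numpy as np

a = np.array([0.39, 0.72, 1.00, 1.52, 5.20, 9.54, 19.22, 30.06, 39.48])
P = np.array([0.24, 0.62, 1.00, 1.88, 11.86, 29.46, 84.01, 164.8, 248.09])
names = ["Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn",
         "Uranus", "Neptune", "Pluto"]

fig, ax = plt.subplots()
ax.scatter(a, P)
for name, xi, yi in zip(names, a, P):
    ax.annotate(name, (xi, yi), textcoords="offset points", xytext=(5, 5))
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_xlabel("distance [AU]")
ax.set_ylabel("period [yr]")
fig.savefig("planets.png")
```

On this log-log plot the points fall on a straight line of slope 3/2, reflecting Kepler's third law $P^2 = a^3$.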
## Q2: drawing a circle
For an angle $\theta$ in the range $\theta \in [0, 2\pi]$, the polar equations of a circle of radius $R$ are:
$$
x = R\cos(\theta)
$$
$$
y = R\sin(\theta)
$$
We want to draw a circle.
* Create an array to hold the theta values—the more we use, the smoother the circle will be
* Create `x` and `y` arrays from `theta` for your choice of $R$
* Plot `y` vs. `x`
Now, look up the matplotlib `fill()` function, and draw a circle filled in with a solid color.
## Q3: Circles, circles, circles...
Generalize your circle drawing commands to produce a function,
```
draw_circle(x0, y0, R, color)
```
that draws the circle. Here, `(x0, y0)` is the center of the circle, `R` is the radius, and `color` is the color of the circle.
Now randomly draw 10 circles at different locations, with random radii, and random colors on the same plot.
## Q4: Climate
Download the data file of global surface air temperature averages from here:
https://raw.githubusercontent.com/sbu-python-summer/python-tutorial/master/day-4/nasa-giss.txt
(this data comes from: https://data.giss.nasa.gov/gistemp/graphs/)
There are 3 columns here: the year, the temperature change, and a smoothed representation of the temperature change.
* Read in this data using `np.loadtxt()`.
* Plot as a line the smoothed representation of the temperature changes.
* Plot as points the temperature change (no smoothing). Color the points blue if they are < 0 and color them red if they are >= 0
You might find the NumPy `where()` function useful.
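A minimal sketch of the `where()` hint (the temperature values are made up for illustration):

```python
import numpy as np

temps = np.array([-0.3, -0.1, 0.0, 0.2, 0.5])
# Blue for points below zero, red at or above zero.
colors = np.where(temps < 0, "b", "r")
print(colors)
```

These per-point colors can then be passed to `plt.scatter(years, temps, c=colors)`.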
## Q5: subplots
matplotlib has a number of ways to create multiple axes in a figure -- look at `plt.subplot()` (http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.subplot)
Create an `x` array using NumPy with a number of points, spanning from $[0, 2\pi]$.
Create 3 axes vertically, and do the following:
* Define a new numpy array `f` initialized to a function of your choice.
* Plot f in the top axes
* Compute a numerical derivative of `f`,
$$ f' = \frac{f_{i+1} - f_i}{\Delta x}$$
and plot this in the middle axes
* Do this again, this time on $f'$ to compute the second derivative and plot that in the bottom axes
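The forward-difference formula above translates directly into array slicing (a minimal sketch; laying the results out in three vertical axes is left as in the exercise):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1000)
f = np.sin(x)
dx = x[1] - x[0]

# Forward difference f' ~ (f[i+1] - f[i]) / dx; one element shorter than f.
fprime = (f[1:] - f[:-1]) / dx
# Applying the same stencil again gives the second derivative.
fsecond = (fprime[1:] - fprime[:-1]) / dx

print(np.max(np.abs(fprime - np.cos(x[:-1]))))  # small discretization error
```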
## Q6: frequent words plotting
In this exercise, we will read the file with the transcription of _Star Trek TOS, Shore Leave_ and count how many times each word occurs. We will then plot the 25 most frequent words and label the plot.
### 6.1 Read the file and create the dictionary {'word': count}
* Open the `shore_leave.txt`
* Create a dictionary of the form {'word': count}, where `count` is the number of times the word was found in the text. Remember to get rid of the punctuation ("." and ",") and to ensure that all words are lowercase
```
f = open("shore_leave.txt", "r")
for line in f:
pass
```
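A possible sketch of the counting step (the sample string is a stand-in; with the real `shore_leave.txt` you would pass `f.read()` to the function):

```python
import string

def word_counts(text):
    """Lowercase the text, strip punctuation, and count word occurrences."""
    counts = {}
    for line in text.splitlines():
        for word in line.lower().split():
            word = word.strip(string.punctuation)
            if word:
                counts[word] = counts.get(word, 0) + 1
    return counts

sample = "Space, the final frontier. These are the voyages."
print(word_counts(sample))
```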
### 6.2 Plot the 25 most frequent words
Plot a labelled bar chart of the most frequent 25 words with their frequencies.
```
# your code here
```
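A possible sketch of the plotting step (the tiny `counts` dictionary here is a made-up stand-in for the one built in 6.1, so the cell is self-contained):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this in a live notebook
import matplotlib.pyplot as plt

counts = {"the": 42, "enterprise": 17, "captain": 12, "kirk": 9, "spock": 8}

# Sort by frequency, descending, and keep at most the top 25 entries.
top = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:25]
words = [w for w, _ in top]
freqs = [c for _, c in top]

fig, ax = plt.subplots(figsize=(10, 4))
ax.bar(range(len(words)), freqs)
ax.set_xticks(range(len(words)))
ax.set_xticklabels(words, rotation=45, ha="right")
ax.set_ylabel("occurrences")
fig.tight_layout()
fig.savefig("word_freq.png")
```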
# Tensorboard Basics
Graph and Loss visualization using Tensorboard. This example is using the MNIST database of handwritten digits (http://yann.lecun.com/exdb/mnist/).
- Author: Aymeric Damien
- Project: https://github.com/aymericdamien/TensorFlow-Examples/
```
from __future__ import print_function
import tensorflow as tf
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_epoch = 1
logs_path = '/tmp/tensorflow_logs/example/'
# tf Graph Input
# mnist data image of shape 28*28=784
x = tf.placeholder(tf.float32, [None, 784], name='InputData')
# 0-9 digits recognition => 10 classes
y = tf.placeholder(tf.float32, [None, 10], name='LabelData')
# Set model weights
W = tf.Variable(tf.zeros([784, 10]), name='Weights')
b = tf.Variable(tf.zeros([10]), name='Bias')
# Construct model and encapsulating all ops into scopes, making
# Tensorboard's Graph visualization more convenient
with tf.name_scope('Model'):
# Model
pred = tf.nn.softmax(tf.matmul(x, W) + b) # Softmax
with tf.name_scope('Loss'):
# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
with tf.name_scope('SGD'):
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
with tf.name_scope('Accuracy'):
# Accuracy
acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
acc = tf.reduce_mean(tf.cast(acc, tf.float32))
# Initializing the variables
init = tf.global_variables_initializer()
# Create a summary to monitor cost tensor
tf.summary.scalar("loss", cost)
# Create a summary to monitor accuracy tensor
tf.summary.scalar("accuracy", acc)
# Merge all summaries into a single op
merged_summary_op = tf.summary.merge_all()
# Start Training
with tf.Session() as sess:
sess.run(init)
# op to write logs to Tensorboard
summary_writer = tf.summary.FileWriter(logs_path, graph=tf.get_default_graph())
# Training cycle
for epoch in range(training_epochs):
avg_cost = 0.
total_batch = int(mnist.train.num_examples / batch_size)
# Loop over all batches
for i in range(total_batch):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# Run optimization op (backprop), cost op (to get loss value)
# and summary nodes
_, c, summary = sess.run([optimizer, cost, merged_summary_op],
feed_dict={x: batch_xs, y: batch_ys})
# Write logs at every iteration
summary_writer.add_summary(summary, epoch * total_batch + i)
# Compute average loss
avg_cost += c / total_batch
# Display logs per epoch step
if (epoch+1) % display_epoch == 0:
print("Epoch:", '%04d' % (epoch+1), "cost=", "{:.9f}".format(avg_cost))
print("Optimization Finished!")
# Test model
# Calculate accuracy
print("Accuracy:", acc.eval({x: mnist.test.images, y: mnist.test.labels}))
print("Run the command line:\n" \
"--> tensorboard --logdir=/tmp/tensorflow_logs " \
"\nThen open http://0.0.0.0:6006/ into your web browser")
```
### Loss and Accuracy Visualization
<img src="../../resources/img/tensorboard_basic_1.png"/>
### Graph Visualization
<img src="../../resources/img/tensorboard_basic_2.png"/>
# Testing of queue imbalance for stock 9092
Order of this notebook is as follows:
1. [Data](#Data)
2. [Data visualization](#Data-visualization)
3. [Tests](#Tests)
4. [Conclusions](#Conclusions)
Goal is to implement queue imbalance predictor from [1](#Resources).
```
%matplotlib inline
import warnings
import matplotlib.dates as md
import matplotlib.pyplot as plt
import seaborn as sns
from lob_data_utils import lob
from sklearn.metrics import roc_curve, roc_auc_score
warnings.filterwarnings('ignore')
```
## Data
The market is open 8:00-16:00 on every weekday. We decided to use only data from 9:00-15:00 of each day.
### Test and train data
For training data we used the period 2013-09-01 to 2013-11-16:
* 0901
* 0916
* 1001
* 1016
* 1101
We took 75% of this data (randomly), the rest is the test data.
```
df, df_test = lob.load_prepared_data('9062', data_dir='../data/prepared/', length=None)
df.head()
```
## Data visualization
```
df['sum_buy_bid'].plot(label='total size of buy orders', style='--')
df['sum_sell_ask'].plot(label='total size of sell orders', style='-')
plt.title('Summed volumes for ask and bid lists')
plt.xlabel('Time')
plt.ylabel('Whole volume')
plt.legend()
df[['bid_price', 'ask_price', 'mid_price']].plot(style='.')
plt.legend()
plt.title('Prices')
plt.xlabel('Time')
plt.ylabel('Price')
sns.jointplot(x="mid_price", y="queue_imbalance", data=df.loc[:, ['mid_price', 'queue_imbalance']], kind="kde")
plt.title('Density')
plt.plot()
df['mid_price_indicator'].plot('kde')
plt.legend()
plt.xlabel('Mid price indicator')
plt.title('Mid price indicator density')
df['queue_imbalance'].plot('kde')
plt.legend()
plt.xlabel('Queue imbalance')
plt.title('Queue imbalance density')
```
## Tests
We use logistic regression to predict `mid_price_indicator`.
### Mean square error
We calculate residual $r_i$:
$$ r_i = \hat{y_i} - y_i $$
where
$$ \hat{y}(I) = \frac{1}{1 + e^{-(x_0 + I x_1)}} $$
Calculating the mean square residual over all observations in the test set is also useful to assess the predictive power.
The predictive power of the null model is 0.25.
```
reg = lob.logistic_regression(df, 0, len(df))
probabilities = reg.predict_proba(df_test['queue_imbalance'].values.reshape(-1,1))
probabilities = [p1 for p0, p1 in probabilities]
err = ((df_test['mid_price_indicator'] - probabilities) ** 2).mean()
predictions = reg.predict(df_test['queue_imbalance'].values.reshape(-1, 1))
print('Mean square error is', err)
```
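`lob.logistic_regression` is a project-specific helper; the same residual computation can be sketched with scikit-learn on synthetic imbalance data (the data-generating coefficients below are assumptions, not the stock's true parameters):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic queue-imbalance data: up-moves are more likely when imbalance > 0.
rng = np.random.default_rng(4)
imbalance = rng.uniform(-1, 1, size=1000)
prob_up = 1.0 / (1.0 + np.exp(-2.0 * imbalance))
labels = (rng.uniform(size=1000) < prob_up).astype(int)

reg = LogisticRegression().fit(imbalance.reshape(-1, 1), labels)
p_hat = reg.predict_proba(imbalance.reshape(-1, 1))[:, 1]
mse = np.mean((labels - p_hat) ** 2)
print(f"mean square residual: {mse:.3f}")  # below the 0.25 null-model level
```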
#### Logistic regression fit curve
```
plt.plot(df_test['queue_imbalance'].values,
lob.sigmoid(reg.coef_[0] * df_test['queue_imbalance'].values + reg.intercept_))
plt.title('Logistic regression fit curve')
plt.xlabel('Imbalance')
plt.ylabel('Prediction')
```
#### ROC curve
To assess the predictive power we can also calculate the ROC score.
```
a, b, c = roc_curve(df_test['mid_price_indicator'], predictions)
logit_roc_auc = roc_auc_score(df_test['mid_price_indicator'], predictions)
plt.plot(a, b, label='predictions (area {})'.format(logit_roc_auc))
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
st = 0
end = len(df)
plt.plot(df_test.index[st:end], predictions[st:end], 'ro', label='prediction')
plt.plot(df_test.index[st:end], probabilities[st:end], 'g.', label='probability')
plt.plot(df_test.index[st:end], df_test['mid_price_indicator'].values[st:end], 'b.', label='mid price')
plt.xticks(rotation=25)
plt.legend(loc=1)
plt.xlabel('Time')
plt.ylabel('Mid price prediction')
```
## Conclusions
Looking at the mid-price-indicator density plot, it seems that the bid and ask queues are not well balanced: the density has more than one local maximum.
* mean square residual between predicted probability and known indicator: 0.247, slightly better than the null model (0.25).
* area under ROC curve is 0.545, for null-model it's 0.50.
We didn't remove outliers.
This stock is a *small-tick* one, which is the reason why the results are not so good.
### Resources
1. [Queue Imbalance as a One-Tick-Ahead Price Predictor in a Limit Order Book](https://arxiv.org/abs/1512.03492) <a class="anchor-link" href="#1">¶</a>
<img src="images/criteoAiLab.png" height=45>
# Practical Session : Deep RL
## Agenda
1. Deep-RL Theory Reminder (15')
1. DQN (30')
1. $\epsilon$-DQN (20')
1. DQN with Replay (20')
1. Potential problems & solutions (10')
1. Wrap-up (5')
(!) Regular polls/quizzes
<img src="images/logo.png" width="250px" align='center'>
# Deep-RL Theory
<img src="images/rl.png" width="480" alt="test"/>
**Goal**: maximise, w.r.t. $\pi$, the cumulative discounted reward
- $R = \sum \limits_{t=0}^{\infty} \gamma^t r_t$
- $0<\gamma<1$
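For a finite episode the sum truncates at the last reward; a quick numeric sketch of the discounted return:

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Compute R = sum_t gamma^t * r_t for a finite reward sequence."""
    return sum(g * r for g, r in zip(gamma ** np.arange(len(rewards)), rewards))

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```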
## Reinforcement Learning for a Markov Decision Process
Under a fixed policy, $V$ is the value of the state:
<img src="images/vpi.png" width="480"/>
Under a fixed policy, $Q$ is the value of the tuple (state, action):
<img src="images/qpi.png" width="640"/>
Remark: after time $t$, actions are chosen by $\pi$ so in expectation we could replace $Q^\pi(s_{t+1}, a_{t+1})$ by $V^\pi(s_{t+1})$
### Q-learning
- Idea is to learn $Q$, hence to know which action is best in a given state
## Bellman Optimality Principle
The optimal policy $\pi^*$ must respect
<img src="images/bellman.png" width=320>
**Strategy**: While this equality doesn't hold we'll try to improve $\pi$
For a fixed policy $\pi$, if the two sides are not equal:
- either $Q$ is not correctly estimated (policy evaluation issue)
- or $\pi$ is not selecting optimal actions, this can be fixed being more greedy wrt $Q$
<img src="images/evalimprov.png" width=480>
<center><sup>(R. S. Sutton, 1998, Reinforcement Learning An introduction)</sup></center>
# Deep Q-Learning (DQN)
<img src="images/dqn-th.png" width=640>
**Q-learning**:
While $Q^\pi(s_t, a_t) \ne r_t + \gamma \max_a Q^\pi(s_{t+1}, a)$:
1. take action
1. receive reward / new state
1. improve target evaluation
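DQN replaces the Q-table with a network, but the update it approximates is the tabular one above; a toy sketch (the 2-state, 2-action setting and the constants are assumptions for illustration):

```python
import numpy as np

# Minimal tabular Q-learning update on a toy 2-state, 2-action problem.
Q = np.zeros((2, 2))
alpha, gamma = 0.5, 0.9

def q_update(Q, s, a, r, s_next):
    """One step of Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q)  # Q[0, 1] is now 0.5
```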
# Deep Q-Learning (DQN)
- $Q$ modeled as a NN
- inputs = states
- outputs = actions
<center><img src="images/q-learning.png" width=480><center>
<center><sup><sub>Image from https://leonardoaraujosantos.gitbooks.io</sub></sup></center>
# Deep-RL Benchmarks
## OpenAI
- [Gym](https://gym.openai.com/envs/CartPole-v0/) : set of standard problems / environments
### Simple control problems
<img src="images/control.png" width=480>
### Famous learn to play Atari
<img src="images/atari.png" width=480>
<center><sup><sub>Images from https://github.com/dgriff777/rl_a3c_pytorch</sub></sup></center>
```
class RLEnvironment:
def run(self, agent, episodes=100, ...):
"""
Run the agent.
Pseudo-code:
```
for i in 1..episodes {
start new episode
while episode not finished {
ask agent to take action based on current state
action resolved by environment, returning reward and new state
<opportunity to feedback agent with (state, reward, new state)>
<opportunity to update agent parameters>
if new state means agent failed {
terminate episode
}
}
<opportunity to update agent parameters again>
if last episodes show enough reward {
declare task solved
}
}
```
"""
class RandomAgent:
"""The world's simplest agent!"""
def __init__(self, action_space):
self.action_space = action_space
def get_action(self, state):
return self.action_space.sample()
env.run(RandomAgent(env.action_space), episodes=20, display_policy=True)
```
### CartPole
- $(x, \dot x)$: position/speed of cart
- $(\theta, \dot \theta)$: angle/angular velocity of the pole
- actions: move left/right
- environment simulates acceleration (cart/pole mass)
<img src="images/cartpole.gif" width=640>
## 3. DQN in practice
- Skeleton provided, implemented with [Keras](https://keras.io/)
```
class DQNAgent(RLDebugger):
def __init__(self, observation_space, action_space):
self.learning_rate = ???
def build_model(self):
model = Sequential()
model.add(Dense(???, input_dim=self.state_size, activation=???))
model.add(Dense(self.action_size, activation=???))
model.compile(loss=???, optimizer=Adam(lr=self.learning_rate))
model.summary()
return model
# get action from model using greedy policy.
def get_action(self, state):
q_value = self.model.predict(state)
best_action = np.argmax(q_value[0])
return best_action
# train the target network on the selected action and transition
def train_model(self, action, state, next_state, reward, done):
target = self.model.predict(state)
target_val = self.target_model.predict(next_state)
if done:
target[0][action] = reward
else:
target[0][action] = reward + self.gamma * (np.amax(target_val))
loss = self.model.fit(state, target, verbose=0).history['loss'][0]
self.record(action, state, target, target_val, loss)
agent = DQNAgent(env.observation_space, env.action_space)
env.run(agent, episodes=100)
```
```
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 30) 150
=================================================================
Total params: 150
Trainable params: 150
Non-trainable params: 0
_________________________________________________________________
Episode 10, Total reward 9.0
Episode 20, Total reward 9.0
Episode 30, Total reward 10.0
Episode 40, Total reward 10.0
Episode 50, Total reward 10.0
Episode 60, Total reward 10.0
Episode 70, Total reward 10.0
Episode 80, Total reward 10.0
Episode 90, Total reward 10.0
Episode 100, Total reward 10.0
Average Total Reward of last 100 episodes: 9.94
```
### Evaluation
- **Objective:** average reward of 200 over 100 episodes
- *First Target:* reach a reward > 50 (at least once)
- We only focus on tuning model parameters for now
___
**Your turn now !**
```
$ jupyter notebook exercises.ipynb
```
Be warned: your results will vary from one run to the next (random initialization of the model)
# DQN First Results
### POLL: who reached a reward > 50?
### POLL: how many neurons? more than 20, 50, 100?
### POLL: how many layers? 1, 2, more?
### POLL: what loss? mse, mean_absolute_error, another?
### POLL: what learning rate? more than $10^{-4}, 10^{-3}, 10^{-2}?$
## Case Study
`env.run(agent, episodes=100, seed=0)`
### Symptom
```
agent.plot_loss()
```
| | |
|--|--|
|  | ```Total Reward: 10.0``` |
### POLL: Is it a problem of ...
- Model capacity (#neurons) ?
- Loss function ?
- Activation ?
- Exploration ?
```
agent.plot_action()
```

**Take away:** Q-learning can converge only if it explores enough actions (hence states)
# DQN with Exploration
Exploration of the states is crucial for performance
- add a uniform exploration mechanism
- decrease exploration over time
This is our first agent that is going to solve the task. It will typically require running a few hundred episodes to collect the data.
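For reference, the standard uniform exploration mechanism is an ε-greedy policy: with probability ε pick a uniformly random action, otherwise act greedily on the Q-values. A framework-free sketch (the function name and arguments are illustrative, not part of the provided skeleton):

```python
import numpy as np

def epsilon_greedy_action(q_values, epsilon, rng=np.random.default_rng()):
    """Pick a uniformly random action with probability epsilon, else the greedy one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore
    return int(np.argmax(q_values))              # exploit
```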
```
class DQNAgentWithExploration(DQNAgent):
    def __init__(self, observation_space, action_space):
        super(DQNAgentWithExploration, self).__init__(observation_space, action_space)
        # exploration schedule parameters
        self.t = 0
        self.epsilon = ???
        # TODO store your additional parameters here
    # decay epsilon
    def update_epsilon(self):
        self.t += 1
        # TODO write the code for your decay
        self.epsilon = ???

agent = DQNAgentWithExploration(env.observation_space, env.action_space)
env.run(agent, episodes=500, print_delay=50)
```
```
Layer (type) Output Shape Param #
=================================================================
dense_7 (Dense) (None, 30) 150
_________________________________________________________________
dense_8 (Dense) (None, 2) 62
=================================================================
Total params: 212
Trainable params: 212
Non-trainable params: 0
_________________________________________________________________
Episode 50, Total reward 10.0
Episode 100, Total reward 37.0
Episode 150, Total reward 139.0
Episode 200, Total reward 200.0
```
(your mileage may vary...)
___
**Your turn now !**
PS: if your current DQN is really poor (not reaching reward > 10) you can cheat by using:
```
from decent import DQNAgent
```
# DQN/Explore Results
### POLL: who reached a reward > 50 ?
### POLL: who reached a reward > 200 ?
## Case Study
### POLL: which one is DQN ? DQN/Ex ?
`agent.plot_state()`
| | |
|---|--- |
|  |  |
### POLL: who got the message "task solved" ?
### POLL: who solved in < 200 episodes ?
*hint*: can we optimize gains from past experience ?
## DQN/Ex with Replay
### Idea: correct past model updates with newest/better $Q$ function
<img src="images/replay.png" width=640>
<center><sub><sup>Image retrieved from http://www.modulabs.co.kr</sup></sub></center>
- Prioritized Replay - https://arxiv.org/abs/1511.05952 - goes one step further by weighting the sampling
```
class DQNAgentWithExplorationAndReplay(DQNAgentWithExploration):
    def __init__(self, observation_space, action_space):
        ...
        # create replay memory using deque
        self.memory = deque(maxlen=???)
        self.batch_size = ???
    def train_model(self, action, state, next_state, reward, done):
        # save sample <s,a,r,s'> to the replay memory
        self.memory.append((state, action, reward, next_state, done))
        if len(self.memory) >= self.train_start:
            ...
```
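To make the replay step concrete, here is a framework-free sketch of drawing a minibatch from the deque and building the Q-learning targets. The callables `q` and `q_next` are stand-ins for `model.predict` / `target_model.predict`, and all names and shapes are illustrative:

```python
import random
from collections import deque

import numpy as np

def sample_targets(memory, batch_size, gamma, q, q_next):
    """Draw a minibatch from the replay memory and build TD targets."""
    batch = random.sample(list(memory), batch_size)
    targets = []
    for state, action, reward, next_state, done in batch:
        target = q(state).copy()
        if done:
            target[action] = reward
        else:
            target[action] = reward + gamma * np.max(q_next(next_state))
        targets.append(target)
    return batch, np.array(targets)
```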
___
**Your turn now !**
*Hint: if stuck with $\epsilon$ in previous step you can try $\epsilon(t) = 1 / \sqrt{t}$*
# DQN/Ex/Replay Results
### POLL: who got the message "task solved" ?
### POLL: who solved in < 200 episodes ?
## Case Study
### POLL: Have we converged ?
| `agent.plot_state()` | `agent.plot_bellman()` |
|--|--|
|  | |
```
agent.epsilon = 0
agent.memory = deque(maxlen=1)
agent.batch_size = 1
agent.train_start = 1
env.run(agent, episodes=200, print_delay=33)
```
### POLL: Have we converged ?
`agent.plot_diagnostics()`
<img src="images/dqnee_all_1.png" width=640>
# Potential Problems/Solution (advanced)
In some settings (e.g. in some Atari games) the techniques we used are not enough.
Usually inspecting Bellman residuals and exploration traces provides hints to improve.
Let's see some examples...
## Double DQN
- *assumption*: our Q estimates are too optimistic
- *solution*: defer model update to avoid big jumps in target
<img src="images/ddqn-th.png" width=480>
- practically: "freeze" $Q_2$ for several time steps / episodes
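A minimal sketch of the freezing idea (class and attribute names are illustrative): keep a copy of the online weights and refresh it only every `update_period` steps, so the TD target moves slowly.

```python
import numpy as np

class FrozenTarget:
    """Hold a frozen copy of Q-weights, refreshed every `update_period` steps."""
    def __init__(self, weights, update_period):
        self.online = weights                        # live, trained weights
        self.frozen = [w.copy() for w in weights]    # lagging target copy
        self.update_period = update_period
        self.steps = 0

    def step(self):
        self.steps += 1
        if self.steps % self.update_period == 0:
            self.frozen = [w.copy() for w in self.online]
```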
## Dueling DQN
- *assumption*: action is not so important for many states
- *solution*: separate $Q$ into state and advantage functions $Q(s,a) = V(s) + A(s,a)$
<img src="images/duel.png" width=240>
<center><sup><sub>Image retrieved from http://torch.ch/blog/2016/04/30/dueling_dqn.html</sub></sup></center>
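In practice the advantage stream is centered before the sum so the decomposition is identifiable: $Q(s,a) = V(s) + \big(A(s,a) - \frac{1}{|\mathcal{A}|}\sum_{a'}A(s,a')\big)$. A NumPy sketch of that aggregation (the function is illustrative, not from the provided skeleton):

```python
import numpy as np

def dueling_aggregate(v, a):
    """Combine state values v (batch,) and advantages a (batch, n_actions)
    into Q-values, centering A so V and A are identifiable."""
    return v[:, None] + (a - a.mean(axis=1, keepdims=True))
```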
___
You can try to implement one of these as an exercise
# Takeaways
## Basic ideas
- tune the DL model parameters
- exploration is necessary for Q-learning
- debug not only with the model loss
## Advanced ideas
- make exploration more efficient (replay)
- adapt to task specifics (DDQN, Dueling DQN)
## Next steps
- [John Schulman: nuts and bolts of RL research](http://rll.berkeley.edu/deeprlcourse/docs/nuts-and-bolts.pdf)
- try the Atari gyms... with CNNs and GPUs :)
- learn Policy Gradient methods
| github_jupyter |
```
import pandas as pd
from pathlib import Path
import json
# Read in slack data
slack_data = pd.read_csv(Path("./slack_cleaned.csv"))
slack_data.drop(["real_name"], axis=1, inplace=True)
slack_data.head()
# Filter data by months to start cleaning for NFT creation
june_data_raw = slack_data.loc[slack_data["month"] == "June"]
june_data_raw = june_data_raw.sort_values("day_number").set_index("day_number")
july_data_raw = slack_data.loc[slack_data["month"] == "July"]
july_data_raw = july_data_raw.sort_values("day_number").set_index("day_number")
august_data_raw = slack_data.loc[slack_data["month"] == "August"]
august_data_raw = august_data_raw.sort_values("day_number").set_index("day_number")
september_data_raw = slack_data.loc[slack_data["month"] == "September"]
september_data_raw = september_data_raw.sort_values("day_number").set_index("day_number")
october_data_raw = slack_data.loc[slack_data["month"] == "October"]
october_data_raw = october_data_raw.sort_values("day_number").set_index("day_number")
november_data_raw = slack_data.loc[slack_data["month"] == "November"]
november_data_raw = november_data_raw.sort_values("day_number").set_index("day_number")
july_data_raw
# Parameters for NFT creation (Total text length, top channel, top day, number of attachments, top user, reaction_true)
month_data_template = pd.DataFrame(columns=["total_text_length", "top_channel", "top_day", "total_attachments", "top_user"])
# Init monthly dictionaries
june_data = {}
july_data = {}
august_data = {}
september_data = {}
october_data = {}
november_data = {}
# Set dictionary values to month
june_data["month"] = "June"
july_data["month"] = "July"
august_data["month"] = "August"
september_data["month"] = "September"
october_data["month"] = "October"
november_data["month"] = "November"
# Set dictionary values to total monthly text length
june_data["total_text_length"] = [june_data_raw["text_length"].sum()]
july_data["total_text_length"] = [july_data_raw["text_length"].sum()]
august_data["total_text_length"] = [august_data_raw["text_length"].sum()]
september_data["total_text_length"] = [september_data_raw["text_length"].sum()]
october_data["total_text_length"] = [october_data_raw["text_length"].sum()]
november_data["total_text_length"] = [november_data_raw["text_length"].sum()]
# Set dictionary values to top channel for month
june_data["top_channel"] = [june_data_raw.channel_name.value_counts().index[0]]
july_data["top_channel"] = [july_data_raw.channel_name.value_counts().index[0]]
august_data["top_channel"] = [august_data_raw.channel_name.value_counts().index[0]]
september_data["top_channel"] = [september_data_raw.channel_name.value_counts().index[0]]
october_data["top_channel"] = [october_data_raw.channel_name.value_counts().index[0]]
november_data["top_channel"] = [november_data_raw.channel_name.value_counts().index[0]]
# Set dictionary values to top active day
june_data["top_day"] = [june_data_raw.day_name.value_counts().index[0]]
july_data["top_day"] = [july_data_raw.day_name.value_counts().index[0]]
august_data["top_day"] = [august_data_raw.day_name.value_counts().index[0]]
september_data["top_day"] = [september_data_raw.day_name.value_counts().index[0]]
october_data["top_day"] = [october_data_raw.day_name.value_counts().index[0]]
november_data["top_day"] = [november_data_raw.day_name.value_counts().index[0]]
# Set dictionary values to total number of attachments
june_data["total_attachments"] = [june_data_raw.attachments_true.value_counts()[0]]
july_data["total_attachments"] = [july_data_raw.attachments_true.value_counts()[0]]
august_data["total_attachments"] = [august_data_raw.attachments_true.value_counts()[0]]
september_data["total_attachments"] = [september_data_raw.attachments_true.value_counts()[0]]
october_data["total_attachments"] = [october_data_raw.attachments_true.value_counts()[0]]
november_data["total_attachments"] = [november_data_raw.attachments_true.value_counts()[0]]
# Set dictionary values to total reactions
june_data["total_reactions"] = [june_data_raw.reaction_true.value_counts()[0]]
july_data["total_reactions"] = [july_data_raw.reaction_true.value_counts()[0]]
august_data["total_reactions"] = [august_data_raw.reaction_true.value_counts()[0]]
september_data["total_reactions"] = [september_data_raw.reaction_true.value_counts()[0]]
october_data["total_reactions"] = [october_data_raw.reaction_true.value_counts()[0]]
november_data["total_reactions"] = [november_data_raw.reaction_true.value_counts()[0]]
# Set dictionary values to top user
june_data["top_user"] = [june_data_raw.user.value_counts().index[0][2:]]
july_data["top_user"] = [july_data_raw.user.value_counts().index[0][2:]]
august_data["top_user"] = [august_data_raw.user.value_counts().index[0][2:]]
september_data["top_user"] = [september_data_raw.user.value_counts().index[0][2:]]
october_data["top_user"] = [october_data_raw.user.value_counts().index[0][2:]]
november_data["top_user"] = [november_data_raw.user.value_counts().index[0][2:]]
june_datadf = pd.DataFrame(june_data)
july_datadf = pd.DataFrame(july_data)
august_datadf = pd.DataFrame(august_data)
september_datadf = pd.DataFrame(september_data)
october_datadf = pd.DataFrame(october_data)
november_datadf = pd.DataFrame(november_data)
total_monthly_data = [june_datadf, july_datadf, august_datadf, september_datadf, october_datadf, november_datadf]
monthly_datadf = pd.concat(total_monthly_data).set_index("month")
monthly_datadf
monthly_datadf.to_csv('monthly_data.csv')
```
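As a side note, the month-by-month repetition above can be collapsed into a single `groupby` pass. A sketch assuming the same column names as the notebook (the `summarize_month` helper is hypothetical):

```python
import pandas as pd

def summarize_month(df: pd.DataFrame) -> pd.Series:
    """Compute the per-month NFT parameters from one month's raw rows."""
    return pd.Series({
        "total_text_length": df["text_length"].sum(),
        "top_channel": df["channel_name"].value_counts().index[0],
        "top_day": df["day_name"].value_counts().index[0],
        "total_attachments": df["attachments_true"].value_counts().iloc[0],
        "total_reactions": df["reaction_true"].value_counts().iloc[0],
        "top_user": df["user"].value_counts().index[0][2:],
    })

# hypothetical usage with the notebook's slack_data frame:
# monthly_datadf = slack_data.drop(columns="month").groupby(slack_data["month"]).apply(summarize_month)
```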
| github_jupyter |
# Day 8, Part 1 - intro to ipyvolume
We'll start our journey into the 3RD DIMENSION with the package ```ipyvolume```
```
# if you don't get it:
#!pip install ipyvolume
# note: you may need:
#!jupyter nbextension enable --py --sys-prefix ipyvolume
#!jupyter nbextension enable --py --sys-prefix widgetsnbextension
# or you can do:
#!conda install -c conda-forge ipyvolume
import ipyvolume
```
Let's do a quick look at something with some random 3D data:
```
import numpy as np
x, y, z = np.random.random((3, 10000))
ipyvolume.quickscatter(x, y, z, size=1, marker="sphere")
```
Easy peasy! Let's read in our simulation data and plot this!
```
from sys import path
path.append('../lesson02/')
from hermite_library import do_hermite
star_mass = 1.0 # stellar mass in Msun
planet_masses = np.array( [1.0, 0.05] ) # planet masses in Mjupiter
# [x,y,z] coords for each planet in AU
# NOTE: the z-components matter here since we call do_hermite with threeDee=True
planet_initial_position = np.array([ [5.0, 0.0, 6.0],
                                     [10.0, 0.0, 3.0] ])
# planet's velocity at each position in km/s
planet_initial_velocity = np.array([ [0.0, 10.0, 1.0],
                                     [0.0, -5.0, 0.0] ])
# h is for hermite!
r_h, v_h, t_h, e_h = do_hermite(star_mass,
                                planet_masses,
                                planet_initial_position,
                                planet_initial_velocity,
                                tfinal=200, Nsteps=8800,
                                threeDee=True)
# we'll have to reformat a bit for plotting
# right now, just all as one color
x = r_h[:,0,:].ravel()
y = r_h[:,1,:].ravel()
z = r_h[:,2,:].ravel()
ipyvolume.quickscatter(x, y, z,
                       size=1, marker="sphere")
```
Let's make things a little more complicated and allow us to take a look at each orbit:
```
ipyvolume.figure()
colors = ['red', 'blue', 'green']
for i in range(r_h.shape[0]):
    ipyvolume.scatter(r_h[i,0,:],
                      r_h[i,1,:],
                      r_h[i,2,:],
                      color=colors[i],
                      marker='sphere')
ipyvolume.show()
```
So, this is pretty cool - we can now see how the orbits "precess" during their evolution and we can check out these shapes in 3D.
Note we can also plot more abstract spaces in 3D - like velocity space:
```
ipyvolume.figure()
colors = ['red', 'blue', 'green']
for i in range(v_h.shape[0]):
    ipyvolume.scatter(v_h[i,0,:],
                      v_h[i,1,:],
                      v_h[i,2,:],
                      color=colors[i],
                      marker='sphere')
ipyvolume.show()
```
With this we can see how "jumpy" the velocity changes can get - this may be a numerical effect that is causing the precession of the orbits, or just how things are!
Ok, we can also show velocity by little vectors:
```
ipyvolume.figure()
colors = ['red', 'blue', 'green']
for i in range(v_h.shape[0]):
    ipyvolume.quiver(r_h[i,0,:],
                     r_h[i,1,:],
                     r_h[i,2,:],
                     v_h[i,0,:],
                     v_h[i,1,:],
                     v_h[i,2,:],
                     color=colors[i])
ipyvolume.show()
```
So clearly the above is pointless - while it looks cool, the arrows are too big and there are too many of them! We can fix this by plotting only every Nth point:
```
step = 600
# also, length of arrays
N = v_h.shape[2]
ipyvolume.figure()
colors = ['red', 'blue', 'green']
for i in range(v_h.shape[0]):
    ipyvolume.quiver(r_h[i,0,0:N:step],
                     r_h[i,1,0:N:step],
                     r_h[i,2,0:N:step],
                     v_h[i,0,0:N:step],
                     v_h[i,1,0:N:step],
                     v_h[i,2,0:N:step],
                     color=colors[i])
ipyvolume.show()
```
Now we can see a bit more about the motion - for example, that their directions are opposite of each other, and that the central mass only moves slightly, around its own center.
## Animation
Let's now figure out how to make an animation, and then save it for ourselves! To do this, we'll need to format our data specifically as (time, position):
```
# for example, for particle 0:
r_h[:,0,:].T.shape
step = 10
# also, length of arrays
N = v_h.shape[2]
r = r_h[:,:,0:N:step]
v = v_h[:,:,0:N:step]
r_h.shape, r.shape, r[:,2,:].T.shape
# have to format color as well
#colors = np.empty((0,3))
color = [(1,0,0), (0,0,1), (0,1,0)]
#colors = np.array([])
colors = []
for i in range(r.shape[2]):
    colors.append(color)
colors = np.array(colors)
# order should be (times, points, colors)
colors = np.transpose(colors, (0, 2, 1)) # flip the last axes
colors.shape
ipyvolume.figure()
s = ipyvolume.scatter(r[:,0,:].T, r[:,1,:].T, r[:,2,:].T,
                      marker='sphere',
                      color=colors)
ani = ipyvolume.animation_control(s, interval=200)
ipyvolume.show()
```
Note that we can only use the ```animation_control``` function on scatter plots or quiver plots, so we can't add lines or anything here. Perhaps in a future release of ```ipyvolume```!
### Exercise
Try this with your own datasets!
Bonus: also try with animations of quiver plots
Bonus: is there anything else you want to animate? Should the size of the points change for example? (See ipyvolume docs for examples)
Bonus: do this with the galaxy simulations
# Part 2: ipyvolume + ipywidgets
Now let's combine the powers of widgets and ipyvolume to explore our datasets in 3D.
```
import ipywidgets
step = 100
# also, length of arrays
N = v_h.shape[2]
r = r_h[:,:,0:N:step]
v = v_h[:,:,0:N:step]
r[:,0,:].ravel().shape
ipyvolume.figure()
# I'll have to think about this more -> didn't really work too well
# the trick is to get them to "flatten" in the right way
# color trickery
#color = ['red', 'blue', 'green']
#colors = np.repeat(colors, r.shape[0])
#color = [(1,0,0), (0,0,1), (0,1,0)]
#colors = []
#for i in range(r.shape[2]):
# colors.append(color)
#colors = np.array(colors).T
#colors = np.transpose(colors, (0, 2, 1)) # flip the last axes
x = r[:,0,:].ravel()
y = r[:,1,:].ravel()
z = r[:,2,:].ravel()
s = ipyvolume.scatter(x, y, z,
                      marker='sphere')
#ipyvolume.show()
#colors.shape, r[:,0,:].shape
```
Now let's use widgets to change the size and color of our points:
```
size = ipywidgets.FloatSlider(min=0, max=30, step=0.1)
color = ipywidgets.ColorPicker()
```
Now we'll use a function we haven't used before from ipywidgets - something that links our scatter plot features to our widgets:
```
ipywidgets.jslink((s, 'size'), (size, 'value'))
ipywidgets.jslink((s, 'color'), (color, 'value'))
```
Finally, we'll stack all these things together: our plot, then our two linked widgets:
```
ipywidgets.VBox([ipyvolume.gcc(), size, color])
```
### Exercise
Repeat this ipywidgets+ipyvolume for your own system.
Bonus: make different sliders for different planets to control size & color for each independently.
Bonus: make a quiver plot
Bonus: what other things can you think to add sliders/pickers for? Hint: check out the docs for ```ipyvolume.quiver``` and ```ipyvolume.scatter``` to see what you can change.
# Part 3 - embedding
Finally, we might want to embed our creations on the web somewhere. The first step is to make an ```html``` file from our in-python widgets. Luckily, there is a function for that!
```
myVBox = ipywidgets.VBox([ipyvolume.gcc(), size, color])
# if we don't do this, the bqplot will be really tiny in the standalone html
ipyvolume.embed.layout = myVBox.children[1].layout
ipyvolume.embed.layout.min_width = "400px"
ipyvolume.embed.embed_html("myPage.html", myVBox, offline=True, devmode=False)
!open myPage.html
```
### Exercise
Generate a page for your own simulation with all the controls you want!
**Bonus**: though we won't be covering it explicitly, you can actually deploy this to the web to be hosted on github pages. The first thing you need to do is call ```embed``` a little differently:
```
ipyvolume.embed.embed_html("myPage.html", myVBox, offline=False, devmode=False)
```
Now, instead of opening it here, you need to add this file to your github page. Again, we won't cover this in class, but feel free to ask for help after you've looked over the resources provided on today's course webpage under the "deploying to the web" header.
**Bonus**: add more linkage to your plot by linking to bqplot. See the "Mixing ipyvolume with bqplot" example on the ```ipyvolume``` docs: https://ipyvolume.readthedocs.io/en/latest/bqplot.html#
| github_jupyter |
# Buffer Stock Model
This notebook shows you how to use the tools of the **consav** package to solve the canonical **buffer-stock consumption model** with either
1. **vfi**: standard value function iteration
2. **nvfi**: nested value function iteration
3. **egm**: endogenous grid point method
In all cases, each time step is solved using fully just-in-time compiled [Numba](http://numba.pydata.org/) code. Numba automatically converts Python and NumPy code into fast machine code. Specifically, this is done for all functions preceded (decorated) with ``@njit`` or ``@njit(parallel=True)`` when parallelization is used.
## Model equations
The model's **bellman equation** is given by
$$
\begin{aligned}
v_{t}(p_{t},m_{t}) &= \max_{c_t}\frac{c_t^{1-\rho}}{1-\rho} + \beta \mathbb{E}_t\left[v_{t+1}(p_{t+1},m_{t+1})\right] \\
& \text{s.t.} \\
a_{t} &=m_{t}-c_{t} \\
p_{t+1} &=\psi_{t+1} p_{t} \\
\tilde{\xi}_{t+1} &= \begin{cases}
\mu & \text{with prob. }\pi\\
\frac{\xi_{t+1}-\pi\mu}{1-\pi} & \text{else}
\end{cases} \\
m_{t+1} &= R a_{t} + \tilde{\xi}_{t+1}p_{t+1}\\
a_t&\geq 0\\
\end{aligned}
$$
where
$$ \begin{aligned}
\log\psi_{t+1} &\sim \mathcal{N}(-0.5\sigma_{\psi}^{2},\sigma_{\psi}^{2}) \\
\log\xi_{t+1} &\sim \mathcal{N}(-0.5\sigma_{\xi}^{2},\sigma_{\xi}^{2})
\end{aligned}
$$
In the **last period** there is no continuation value
$$
\begin{aligned}
v_{T+1}(p_{T+1},m_{T+1}) &= 0
\end{aligned}
$$
The **post-decision** value function is
$$
\begin{aligned}
w_t(p_t,a_t) &= \beta \mathbb{E}_t\left[v_{t+1}(p_{t+1},m_{t+1})\right]
\end{aligned}
$$
The **Euler-equation** (required when solving with EGM) is
$$
\begin{aligned}
C_{t}^{-\rho} &= q_t(p_t,a_t) \\
&= \beta R \mathbb{E}_t[C_{t+1}(p_{t+1},m_{t+1})^{-\rho}]
\end{aligned}
$$
where $q_t(p_t,a_t)$ is the post-decision marginal value of cash.
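The practical payoff of the Euler equation is the EGM inversion: given $q_t$ on a grid over $a_t$, consumption follows analytically as $c_t = q_t^{-1/\rho}$ and the endogenous cash-on-hand grid as $m_t = a_t + c_t$, with no root-finding. A minimal NumPy sketch for one slice of the state space (the grid and $q$ values are illustrative, not from the model code):

```python
import numpy as np

def egm_step(a_grid, q, rho):
    """One EGM inversion: from the post-decision marginal value q(a) on a_grid,
    recover consumption and the endogenous cash-on-hand grid."""
    c = q ** (-1.0 / rho)  # invert the Euler equation c^(-rho) = q
    m = a_grid + c         # endogenous grid from the budget constraint a = m - c
    return m, c
```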
## Overview
The model solved in this notebook is written in **BufferStockModel.py**.
It provides a class called **BufferStockModelClass** inheriting its basic interface from the **ModelClass**.
A short **overview** of the interface is:
1. Each instance of the BufferStockModel class must have a **name** and a **solmethod**, and must contain **two central methods**:
1. **setup()**: set baseline parameters in `.par` + set list of non-float scalars in `.non_float_list` (for safe type-inference)
2. **allocate()**: allocate memory for `.par`, `.sol` and `.sim`
2. **Type-inference:** When initializing the model with `BufferStockModelClass()` the `setup()` and `allocate()` methods are called, and the types of all variables in `.par`, `.sol` and `.sim` are inferred. Results can be seen in `.parlist`, `.sollist` and `.simlist` or by `print(model)`.
3. The **solve()** method solves the model
4. The **simulate()** method simulates the model
5. The **save()** method saves the model naming it **data/name_solmethod**
6. The **copy()** makes a deep copy of the model
In addition to **BufferStockModel.py**, this folder contains the following files:
1. **last_period.py**: calculate consumption and value function in last period
2. **utility.py**: utility function and marginal utility function
3. **post_decision.py**: calculate $w$ and $q$
4. **vfi.py**: solve with value function iteration
5. **nvfi.py**: solve with nested value function iteration
6. **egm.py**: solve with the endogenous grid method
7. **simulate.py**: simulate for all solution methods
8. **figs.py**: plot figures
The functions in these modules are loaded in **BufferStockModel.py**.
The folder **cppfuncs** contains C++ functions not used in this notebook.
## Numba
Before (important!) you load **Numba** you can disable it or choose the number of threads as follows:
```
from consav import runtools
runtools.write_numba_config(disable=0,threads=8)
```
This writes a file called *.numba_config.yaml* which Numba loads when being imported.
Disabling Numba makes the code run much slower, but can be beneficial when debugging.
# Setup
```
%matplotlib inline
# reload module each time cell is run
%load_ext autoreload
%autoreload 2
# load the BufferStockModel module
from BufferStockModel import BufferStockModelClass
```
# First Example
The code is easiest to understand for **nvfi** and **do_simple_w = True**. The cell below solves the model using these settings. Go through the code the cell is calling to understand the interface.
```
# a. setup (calling the __init__ method)
model = BufferStockModelClass(name='baseline',solmethod='nvfi',do_simple_w=True)
# name: required
# solmethod: optional (default: None)
# compiler: optional (default: 'vs')
# **kwargs: update parameters in .par AFTER calling .setup
# b. print
print(model)
# c. solve
model.solve()
# d. simulate
print('')
model.simulate()
# e. save
model.save()
```
**Check type inference:**
```
model.parlist
```
## Load/save
**Delete** the model:
```
del model
```
**Load** the model again:
```
model_loaded = BufferStockModelClass(name='baseline',solmethod='nvfi',load=True)
```
**Plot** the consumption function in period $t=0$:
```
model_loaded.consumption_function(t=0)
```
**Copy** the model:
```
model_copy = model_loaded.copy() # name can be specified
print(model_copy.name)
```
**Plot** the life-cycle profiles:
```
model_copy.lifecycle()
```
**Plot** an interactive version of the consumption function:
```
model_loaded.consumption_function_interact()
```
# Timings
**Time** the various solution methods and show the importance of the optimized computation of $q_t(p_t,a_t)$ in EGM (i.e. setting `do_simple_w = False`).
```
for specs in [('nvfi',False),('nvfi',True),('egm',False),('egm',True),'vfi']:

    if len(specs) == 2:
        solmethod, do_simple_w = specs
    else:
        solmethod = specs
        do_simple_w = False # baseline

    # i. setup
    print(f'{solmethod}:')
    if do_simple_w:
        print('do_simple_w = True')
    model = BufferStockModelClass(name='',solmethod=solmethod,do_print=False,do_simple_w=do_simple_w)

    # ii. test run
    model.solve()
    model.par.do_print = True

    # iii. final run
    model.solve()
    model.checksum()
    print('')
```
# More
See the notebook **Examples with run file and C++** for additional possibilities.
| github_jupyter |