<img src="logos/Icos_cp_Logo_RGB.svg" width="400" align="left"/>
<img src="logos/NOAA_logo.png" width="90" align="right"/>
<a id='introduction'></a>
<br>
# Curve fitting methods for CO$_2$ time series
This notebook includes examples of curve fitting methods for time series. For more detailed information regarding the implemented methods visit [NOAA](https://www.esrl.noaa.gov/gmd/ccgg/mbl/crvfit/crvfit.html). The curve fitting methods are applied over CO$_2$ measurements from ICOS stations as well as a selection of non-ICOS stations. Data from all stations are included in the Drought 2018 Atmospheric Product (part of the Drought 2018 Project) which is stored and can be downloaded from the [ICOS Carbon Portal](https://www.icos-cp.eu/data-services/about-data-portal).
The notebook is divided into the following parts:
- [Import tools](#tools)
- [Map with stations](#map)
- [Plots](#plots)
- [Access to ICOS Jupyter Developing Environment](#access_to_jup_hub)
Every part includes a short description of its content and a quick link to return to the table of contents in this introductory part. Use the links to quickly navigate from one part of the notebook to another.
The first part is dedicated to executing the code necessary for producing the visualizations in this notebook. The second part includes an interactive map of all the stations for which data is available, so that the user can get an overview of all available datasets. This part also provides the opportunity to get more detailed information about each station. The map can be used to study the proximity of stations, as an inspiration for checking whether the trends of their corresponding measurements follow a similar pattern. The third part includes a form of controls/widgets (i.e. dropdown lists, checkboxes, etc.) that allows the user to apply curve fitting methods over the measurements of a station of choice. The controls allow the user to adjust some of the input parameters for the curve fitting methods. Finally, the last part includes contact information for the ICOS Carbon Portal, in case you are interested in joining our Jupyter Hub. You may work on existing notebooks with ICOS data, but you can also contribute to expanding the existing Python services by producing new notebooks.
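The curve fitting itself is implemented in the tools notebook following NOAA's ccgcrv approach; as background, the core idea is to fit a polynomial trend plus seasonal harmonics by least squares. The sketch below illustrates that idea on synthetic data (the model form, coefficients, and data here are illustrative assumptions, not the Carbon Portal implementation):

```python
import numpy as np

# Synthetic decimal-year time axis and a CO2-like signal (all values illustrative)
t = np.linspace(2016.0, 2019.0, 500)
rng = np.random.default_rng(0)
co2 = 400 + 2.3 * (t - t[0]) + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

# Design matrix: quadratic polynomial trend + one annual harmonic
X = np.column_stack([
    np.ones_like(t),        # constant
    t - t[0],               # linear trend
    (t - t[0]) ** 2,        # quadratic trend
    np.sin(2 * np.pi * t),  # annual cycle (sine)
    np.cos(2 * np.pi * t),  # annual cycle (cosine)
])
coeffs, *_ = np.linalg.lstsq(X, co2, rcond=None)
fitted = X @ coeffs            # smoothed curve through the data
trend = X[:, :3] @ coeffs[:3]  # polynomial part only, i.e. the deseasonalized trend
```

The NOAA method adds more harmonics and a low-pass filtered residual on top of this, but the least-squares backbone is the same.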
<br>
### Run the notebook
To run the notebook, click on **Kernel** and then **Restart & Run All** in the menubar at the top of the page.
<br>
<br>
<a id='tools'></a>
## 1. Import tools
The code-cell below executes the notebook containing all the Python functions that produce the visualizations in this notebook.
```
#Run notebook with tools:
%run ccg/ccg_icos_tools.ipynb
```
<br>
<div style="text-align: right">
<a href="#introduction">Back to top</a>
</div>
<br>
<br>
<br>
<a id='map'></a>
## 2. Map with stations
This part includes an interactive map with all the stations for which data is available. You may scroll to zoom in on the map and get a better idea of each station's surroundings. To get more information about a specific station, click on its corresponding marker. If you know the name of a station but are unsure about its exact location, you may select it from the drop-down list and press the update-map button. This will create a new instance of the map, where the selected station's marker is highlighted in red.
Click on the **Update map** button to display the map.
```
#Call function to display widget:
create_widget_map()
```
<br>
<div style="text-align: right">
<a href="#introduction">Back to top</a>
</div>
<br>
<br>
<br>
<a id='plots'></a>
## 3. Plots
This part includes a form with controls that allow the user to apply the curve fitting methods over CO$_2$ measurements from a station of choice. The user may also select the time period the measurements should come from.
- The start-year and end-year should cover a time period of 2 years minimum. The **starting date** and **end date** can be selected from the corresponding datetime-pickers.
- **Timezero** refers to the time for which the trend is set to zero. Here you select the year that then corresponds to the 1st of January. _Timezero_ should refer to a year between the start-date year and end-date year.
- The user may select a **color** to display the _CO$_2$ measurements_ in all plots.
- By checking the **Daytime** checkbox, the data is filtered to only include daytime measurements (i.e. measurements taken between 10:00 am and 6:00 pm).
- If the **Citation** checkbox is checked, the citation strings for all data used in the plots will appear after the last plot.
- Check the **Save plots** checkbox to save all plots as png-files under the following folder in your home directory: ```output/ccg/```
- Check **Export data** to download a csv-file with the monthly mean and standard deviation of the measurements under the following folder in your home directory: ```output/ccg/```.
Note that the plot containing the detrended values is an interactive plot. Click on the labels in the legend to add or remove data from the plot. Use the toolbox on the right of the plot to pan, zoom in, save, or reset the plot to its initial state. You may also use the hover tool to get information for every separate data point. You need to activate a tool before using it, by clicking on its icon. Active tools are highlighted with a blue line on the left side of their corresponding icon.
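As an aside, the daytime filtering described above is easy to reproduce on any pandas DataFrame with a DatetimeIndex; a minimal sketch (the column name and the exact time window are illustrative assumptions):

```python
import pandas as pd

# Two days of hourly values with a DatetimeIndex (the "co2" column name is an assumption)
idx = pd.date_range("2018-06-01", periods=48, freq="h")
df = pd.DataFrame({"co2": range(48)}, index=idx)

# Keep only daytime rows, here taken as 10:00 through 18:00 inclusive
daytime = df.between_time("10:00", "18:00")
```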
```
#Call function to display widget form:
create_widget_ccgcrv()
```
<br>
<div style="text-align: right">
<a href="#introduction">Back to top</a>
</div>
<br>
<br>
<br>
<a id='access_to_jup_hub'></a>
## 4. Get Access to ICOS Jupyter Notebook Developing Environment
If you wish to extend the functionality of the existing ICOS notebooks or develop your own Jupyter notebook with direct access to ICOS data, send an email with your request to <jupyter-info@icos-cp.eu>.
<br>
<div style="text-align: right">
<a href="#introduction">Back to top</a>
</div>
<br>
<br>
<br>
```
# 3rd Party
from baybars.timber import get_logger
import numpy as np
import tensorflow as tf
LABEL_MAP = {
0: 'T-shirt/top',
1: 'Trouser',
2: 'Pullover',
3: 'Dress',
4: 'Coat',
5: 'Sandal',
6: 'Shirt',
7: 'Sneaker',
8: 'Bag',
9: 'Ankle boot',
}
class UnsupportedModeException(Exception):
pass
MODEL_DIR = "models/fashion_model"
tf.logging.set_verbosity(tf.logging.INFO)
class FashionMNISTCNN(object):
def __init__(self, features, labels, mode, batch_size:int =500, num_epochs:int =100, learning_rate:float =0.01, dropout_rate:float=0.4):
self.features = features
self.labels = labels
self.mode = mode
self.logger = get_logger(str(self.__class__))
self.batch_size = batch_size
self.num_epochs = num_epochs
self.learning_rate = learning_rate
self.dropout_rate = dropout_rate
def build_network(self):
first_convolution_layer = self.cnn_2d_layer_relu(self.input_layer)
second_convolution_layer = self.cnn_2d_layer_relu(first_convolution_layer)
first_max_pooling_layer = self.max_pool_2d_layer(second_convolution_layer)
third_convolution_layer = self.cnn_2d_layer_relu(first_max_pooling_layer)
fourth_convolution_layer = self.cnn_2d_layer_relu(third_convolution_layer)
second_max_pooling_layer = self.max_pool_2d_layer(fourth_convolution_layer)
reshaped_layer = self.reshape_layer(second_max_pooling_layer)
first_dense_layer = self.dense_layer(reshaped_layer)
first_dropout_layer = self.dropout_layer(first_dense_layer)
second_dense_layer = self.dense_layer(first_dropout_layer)
second_dropout_layer = self.dropout_layer(second_dense_layer)
out_layer = self.logit_layer(second_dropout_layer)
return out_layer
@property
def batch_size(self) -> int:
return self._batch_size
@batch_size.setter
def batch_size(self, value) -> None:
self._batch_size = value
@property
def num_epochs(self) -> int:
return self._num_epochs
@num_epochs.setter
def num_epochs(self, value) -> None:
self._num_epochs = value
@property
def dropout_rate(self) -> float:
return self._dropout_rate
@dropout_rate.setter
def dropout_rate(self, value) -> None:
self._dropout_rate = value
@property
def is_training(self):
return self.mode == tf.estimator.ModeKeys.TRAIN
@property
def is_evaluate(self):
return self.mode == tf.estimator.ModeKeys.EVAL
@property
def is_predict(self):
return self.mode == tf.estimator.ModeKeys.PREDICT
@property
def one_hot_labels(self):
return tf.one_hot(indices=tf.cast(self.labels, tf.int32), depth=10)
def loss(self, layer):
return tf.losses.softmax_cross_entropy(onehot_labels=self.one_hot_labels,
logits=layer)
def prediction_structure(self, inputs):
return {
"classes": tf.argmax(input=inputs, axis=1),
"probabilities": tf.nn.softmax(inputs, name="softmax_tensor"),
}
def predict(self):
out = None
if self.is_predict:
out_layer = self.build_network()
out = tf.estimator.EstimatorSpec(mode=self.mode, predictions=self.prediction_structure(out_layer))
return out
def train(self, features=None, labels=None, mode=None):
out = None
if self.is_training:
out_layer = self.build_network()
loss = self.loss(out_layer)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=self.learning_rate)
train_op = optimizer.minimize(loss=loss,
global_step=tf.train.get_global_step())
tf.summary.scalar('loss', loss)
measure = tf.equal(tf.argmax(out_layer, 1),
tf.argmax(self.one_hot_labels, 1))
accuracy = tf.reduce_mean(tf.cast(measure, tf.float32))
tf.summary.scalar('accuracy', accuracy)
out = tf.estimator.EstimatorSpec(mode=self.mode, loss=loss, train_op=train_op)
return out
def train_model(self, train_data, train_labels, eval_data, eval_labels):
# Might be better to make these into functions to get the training data and labels
train_fn = tf.estimator.inputs.numpy_input_fn(x={"x": train_data},
y=train_labels,
batch_size=self.batch_size,
num_epochs=None,
shuffle=True)
evaluation_fn = tf.estimator.inputs.numpy_input_fn(x={"x": eval_data},
y=eval_labels,
num_epochs=1,
shuffle=False)
for ii in range(self.num_epochs):
self.estimator.train(input_fn=train_fn, steps=100)
eval_results = self.estimator.evaluate(input_fn=evaluation_fn)
predictions = list(self.estimator.predict(input_fn=evaluation_fn))
self.logger.info('epoch={} eval_results={} and predictions={}'.format(ii, eval_results, predictions))
return self.estimator
@property
def estimator(self):
out = None
if self.is_training:
out = tf.estimator.Estimator(model_fn=self.train, model_dir=MODEL_DIR)
elif self.is_evaluate:
out = tf.estimator.Estimator(model_fn=self.evaluate, model_dir=MODEL_DIR)
elif self.is_predict:
out = tf.estimator.Estimator(model_fn=self.predict, model_dir=MODEL_DIR)
else:
raise UnsupportedModeException("Mode: {} is not supported for building estimation".format(self.mode))
return out
def evaluate(self, features=None, labels=None, mode=None):
out = None
if self.is_evaluate:
out_layer = self.build_network()
loss = self.loss(out_layer)
evaluation_metric = {
"accuracy": tf.metrics.accuracy(labels=self.labels,
predictions=self.prediction_structure(out_layer)["classes"])
}
out = tf.estimator.EstimatorSpec(mode=self.mode,
loss=loss,
eval_metric_ops=evaluation_metric)
return out
@classmethod
def activation_layer(cls):
return tf.nn.relu
@classmethod
def dense_layer(cls, inputs):
return tf.layers.dense(inputs=inputs,
units=128,
activation=cls.activation_layer())
@property
def input_layer(self):
return tf.reshape(self.features, [-1, 28, 28, 1])
@classmethod
def cnn_2d_layer_relu(cls, inputs):
return tf.layers.conv2d(inputs=inputs,
filters=64,
kernel_size=[5, 5],
padding="same",
activation=cls.activation_layer())
@classmethod
def max_pool_2d_layer(cls, inputs):
return tf.layers.max_pooling2d(inputs=inputs,
pool_size=[2, 2],
strides=2)
@classmethod
def reshape_layer(cls, inputs):
return tf.reshape(inputs, [-1, 7 * 7 * 64])
def dropout_layer(self, inputs):
return tf.layers.dropout(inputs=inputs, rate=self.dropout_rate, training=self.is_training)
@classmethod
def logit_layer(cls, inputs):
return tf.layers.dense(inputs=inputs, units=10)
@property
def features(self):
return self._features
@features.setter
def features(self, value):
self._features = value
@property
def labels(self):
return self._labels
@labels.setter
def labels(self, value):
self._labels = value
@property
def mode(self):
return self._mode
@mode.setter
def mode(self, value):
self._mode = value
def main():
DATA_DIR = 'data'
from tensorflow.examples.tutorials.mnist import input_data
from sklearn.utils import shuffle
mnist = input_data.read_data_sets(DATA_DIR, one_hot=False, validation_size=0)
train_data = mnist.train.images
print('train data is loaded')
train_labels = np.asarray(mnist.train.labels, dtype=np.int32)
print('train labels is loaded')
train_data, train_labels = shuffle(train_data, train_labels)
print('eval data is loaded')
eval_data = mnist.test.images
eval_labels = np.asarray(mnist.test.labels, dtype=np.int32)
eval_data, eval_labels = shuffle(eval_data, eval_labels)
fashion_mnist_cnn = FashionMNISTCNN(train_data, train_labels, tf.estimator.ModeKeys.TRAIN)
training = fashion_mnist_cnn.train_model(train_data, train_labels, eval_data, eval_labels)
return fashion_mnist_cnn, training
fashion_mnist_cnn, training = main();
```
## _*BeH2 plots of various orbital reduction results*_
This notebook demonstrates using Qiskit Aqua Chemistry to plot graphs of the ground state energy of the Beryllium Dihydride (BeH2) molecule over a range of inter-atomic distances using ExactEigensolver. Freeze core reduction is true, and different virtual orbital removals are tried as a comparison.
This notebook populates a dictionary, which is a programmatic representation of an input file, in order to drive the Qiskit Aqua Chemistry stack. Such a dictionary can be manipulated programmatically, and that is indeed the case here, where we alter the molecule supplied to the driver in each loop as well as the orbital reductions.
This notebook has been written to use the PYSCF chemistry driver. See the PYSCF chemistry driver readme if you need to install the external PySCF library that this driver requires.
```
import numpy as np
import pylab
from qiskit_aqua_chemistry import AquaChemistry
# Input dictionary to configure Qiskit Aqua Chemistry for the chemistry problem.
aqua_chemistry_dict = {
'driver': {'name': 'PYSCF'},
'PYSCF': {'atom': '', 'basis': 'sto3g'},
'operator': {'name': 'hamiltonian', 'qubit_mapping': 'parity',
'two_qubit_reduction': True, 'freeze_core': True, 'orbital_reduction': []},
'algorithm': {'name': 'ExactEigensolver'}
}
molecule = 'H .0 .0 -{0}; Be .0 .0 .0; H .0 .0 {0}'
reductions = [[], [-2, -1], [-3, -2], [-4, -3], [-1], [-2], [-3], [-4]]
pts = [x * 0.1 for x in range(6, 20)]
pts += [x * 0.25 for x in range(8, 16)]
pts += [4.0]
energies = np.empty([len(reductions), len(pts)])
distances = np.empty(len(pts))
print('Processing step __', end='')
for i, d in enumerate(pts):
print('\b\b{:2d}'.format(i), end='', flush=True)
aqua_chemistry_dict['PYSCF']['atom'] = molecule.format(d)
for j in range(len(reductions)):
aqua_chemistry_dict['operator']['orbital_reduction'] = reductions[j]
solver = AquaChemistry()
result = solver.run(aqua_chemistry_dict)
energies[j][i] = result['energy']
distances[i] = d
print(' --- complete')
print('Distances: ', distances)
print('Energies:', energies)
pylab.rcParams['figure.figsize'] = (12, 8)
for j in range(len(reductions)):
pylab.plot(distances, energies[j], label=reductions[j])
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('BeH2 Ground State Energy')
pylab.legend(loc='upper right')
pylab.rcParams['figure.figsize'] = (12, 8)
for j in range(len(reductions)):
pylab.plot(distances, np.subtract(energies[j], energies[0]), label=reductions[j])
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('Energy difference compared to no reduction []')
pylab.legend(loc='upper left')
pylab.rcParams['figure.figsize'] = (6, 4)
for j in range(1, len(reductions)):
pylab.plot(distances, np.subtract(energies[j], energies[0]), color=[1.0, 0.6, 0.2], label=reductions[j])
pylab.ylim(0, 0.4)
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('Energy difference compared to no reduction []')
pylab.legend(loc='upper left')
pylab.show()
e_nofreeze = np.empty(len(pts))
aqua_chemistry_dict['operator']['orbital_reduction'] = []
aqua_chemistry_dict['operator']['freeze_core'] = False
print('Processing step __', end='')
for i, d in enumerate(pts):
print('\b\b{:2d}'.format(i), end='', flush=True)
aqua_chemistry_dict['PYSCF']['atom'] = molecule.format(d)
solver = AquaChemistry()
result = solver.run(aqua_chemistry_dict)
e_nofreeze[i] = result['energy']
print(' --- complete')
print(e_nofreeze)
pylab.rcParams['figure.figsize'] = (8, 6)
pylab.plot(distances, energies[0], label='Freeze Core: True')
pylab.plot(distances, e_nofreeze, label='Freeze Core: False')
pylab.xlabel('Interatomic distance')
pylab.ylabel('Energy')
pylab.title('Energy difference, no reduction [], freeze core true/false')
pylab.legend(loc='upper right')
pylab.show()
pylab.title('Energy difference of freeze core True from False')
pylab.plot(distances, np.subtract(energies[0], e_nofreeze), label='Freeze Core: False')
pylab.show()
```
# Lecture 2 - Matrix Elimination and Its Relation to Matrix Multiplication
- Solving a system of equations by elimination
- Matrix reduction
- Back substitution
- Matrix multiplication
## Solving a System of Equations by Elimination
$$x+2y+z=2\quad(1)\\3x+8y+z=12\quad(2)\\4y+z=2\quad(3)$$
Extract the coefficient matrix:
$$A=\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}$$
First, some initialization:
```
import numpy as np
from sympy import *
init_printing()
x, y, z = symbols('x y z')
lhs = (x + 2*y + z, 3*x + 8*y + z, 4*y + z)
rhs = (2, 12, 2)
A = np.array([[1, 2, 1], [3, 8, 1], [0, 4, 1]])
```
### Elimination on the equations
First we eliminate the unknown x: subtracting 3 times equation (1) from equation (2) gives equation (4):
```
eq4 = Eq(lhs[1]-3*lhs[0], rhs[1]-3*rhs[0])
eq4
```
Since the coefficient of x in equation (3) is 0, no elimination is needed there; equivalently, subtracting 0 times equation (1) from equation (3) gives equation (5):
```
eq5 = Eq(lhs[2]-0*lhs[0], rhs[2]-0*rhs[0])
eq5
```
After eliminating the unknown x, the system has become:
$$x+2y+z=2\quad(1)\\2y-2z=6\quad(4)\\4y+z=2\quad(5)$$
Next we use equation (4) to eliminate the unknown y from equation (5), giving equation (6):
```
eq6 = Eq(eq5.args[0]-2*eq4.args[0], eq5.args[1]-2*eq4.args[1])
eq6
```
We now have the eliminated system:
$$x+2y+z=2\quad(1)\\2y-2z=6\quad(4)\\5z=-10\quad(6)$$
Equation (6) immediately yields the unknown z:
```
solz = solve(eq6, dict=True)
print(solz[0])
```
Then substitute z into equation (4) to find y, and substitute y and z into equation (1) to find x, solving the whole system:
```
soly = solve(eq4.subs(z, -2), y, dict=True)
print(soly[0])
solx = solve(Eq(x+2*y+z, 2).subs({y:1, z:-2}), x, dict=True)
print(solx[0])
print({**solx[0], **soly[0], **solz[0]})
```
### Elimination on the matrix
Elimination in matrix form is more natural and simpler: moving along the diagonal from the top-left to the bottom-right (the main diagonal of the matrix), we successively eliminate the numbers below the diagonal (turning them into 0). The number kept on (or above) the diagonal and used to carry out the elimination is called the pivot.
First, subtract 3 times the first row from the second row of the matrix:
```
A[1] -= 3 * A[0]
A
```
Of course, we can also consider that we subtracted 0 times the first row from the third row:
```
A[2] -= 0 * A[0]
A
```
So, after the first elimination step, every entry in the first column except the pivot has become 0. Next we continue with the second pivot on the diagonal.
Subtract 2 times the second row from the third row:
```
A[2] -= 2 * A[1]
A
```
The matrix is now upper triangular: every entry below the main diagonal is 0.
Applying the same operations to the right-hand side (RHS), we obtain the matrix equation:
$$\begin{bmatrix}1&2&1\\0&2&-2\\0&0&5\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}2\\6\\-10\end{bmatrix}$$
```
b = np.array([2, 12, 2])
b[1] -= 3 * b[0]
b[2] -= 0 * b[0]
b[2] -= 2 * b[1]
b
```
Now we carry out back substitution; in the following steps the matrix on the left-hand side and the vector on the right-hand side are operated on together:
```
# Solve for z
A[2] = A[2]/5
b[2] /= 5
print(A, b)
# Solve for y
A[1] += 2 * A[2]
b[1] += 2 * b[2]
A[0] -= A[2]
b[0] -= b[2]
A[1] = A[1]/2
b[1] /= 2
print(A, b)
# Solve for x
A[0] -= 2 * A[1]
b[0] -= 2 * b[1]
print(A, b)
```
The result above, written as a matrix equation (and, going further, in the column view), is:
$$\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}2\\1\\-2\end{bmatrix}\quad\to\quad x\begin{bmatrix}1\\0\\0\end{bmatrix}+y\begin{bmatrix}0\\1\\0\end{bmatrix}+z\begin{bmatrix}0\\0\\1\end{bmatrix}=\begin{bmatrix}2\\1\\-2\end{bmatrix}$$
Clearly, the right-hand side is the solution of the system.
### Row exchanges
Just as swapping the positions of equations (1), (2), and (3) does not affect the solution, in matrix form exchanging rows of the matrix (together with the corresponding entries of the RHS vector) does not affect the result.
We could therefore also write the matrix form of the system above as:
$$\begin{bmatrix}1&2&1\\0&4&1\\3&8&1\end{bmatrix}\begin{bmatrix}x\\y\\z\end{bmatrix}=\begin{bmatrix}2\\2\\12\end{bmatrix}$$
## Matrix multiplication
Suppose A is an n-by-n matrix, $A=\begin{bmatrix}a_{11}&\cdots&a_{1n}\\ \vdots & \ddots & \vdots \\ a_{n1}&\cdots&a_{nn}\end{bmatrix}$. In the row view we can write $A=\begin{bmatrix}r_1 \\ \vdots \\ r_n \end{bmatrix}$, and in the column view $A=\begin{bmatrix}c_1 & \cdots & c_n \end{bmatrix}$.
If A is multiplied on the right by a length-n vector $x=\begin{bmatrix}x_1\\ \vdots \\ x_n \end{bmatrix}$, the column view gives a very natural expression; the result is a length-n vector:
$$ \begin{bmatrix}c_1 & \cdots & c_n \end{bmatrix}\begin{bmatrix}x_1\\ \vdots \\ x_n \end{bmatrix}=x_1c_1+\cdots+x_nc_n$$
If A is multiplied on the left by a $1\times n$ matrix (a transposed vector) $x=\begin{bmatrix}x_1& \cdots & x_n \end{bmatrix}$, the row view gives a very natural expression; the result is a $1\times n$ matrix (a transposed vector):
$$ \begin{bmatrix}x_1& \cdots & x_n \end{bmatrix}\begin{bmatrix}r_1 \\ \vdots \\ r_n \end{bmatrix}=x_1r_1+\cdots+x_nr_n$$
**Therefore, in a matrix product, each column of the result is a linear combination of the columns of the left factor, and each row of the result is a linear combination of the rows of the right factor.**
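This row/column view is easy to check directly in numpy (the matrix and vector below are arbitrary examples):

```python
import numpy as np

A = np.array([[1, 2, 1], [3, 8, 1], [0, 4, 1]])
x = np.array([2, -1, 3])

# Column view: A @ x is a linear combination of the columns of A
col_view = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]
assert np.array_equal(A @ x, col_view)

# Row view: x @ A is a linear combination of the rows of A
row_view = x[0] * A[0] + x[1] * A[1] + x[2] * A[2]
assert np.array_equal(x @ A, row_view)
```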
When we carry out elimination, we can do it entirely via matrix multiplication. For example, for the matrix $A=\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}$, the first step is to eliminate every entry in the first column except the pivot, turning it into 0. We already know this means subtracting 3 times the first row from the second row of A, and 0 times the first row from the third row, while the first row stays unchanged. We can start by writing the first row of the left factor so that the corresponding linear combination of the rows on the right equals the first row itself, hence:
$$\begin{bmatrix}1&0&0\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}1&2&1\end{bmatrix}$$
Verify with numpy:
```
A = np.array([[1, 2, 1], [3, 8, 1], [0, 4, 1]])
np.array([1, 0, 0])@A
```
Similarly, the second row of the result should be the second row of the matrix minus 3 times the first row (equivalently, $-3\times r_1 + r_2 + 0\times r_3$):
$$\begin{bmatrix}-3&1&0\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}0&2&-2\end{bmatrix}$$
The third row of the result should be the third row of the matrix minus 0 times the first row (equivalently, $0\times r_1 + 0\times r_2 + r_3$):
$$\begin{bmatrix}0&0&1\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}0&4&1\end{bmatrix}$$
Verify with numpy:
```
np.array([-3, 1, 0])@A
np.array([0, 0, 1])@A
```
Combining these three row vectors into a single matrix completes the elimination for the first pivot in one multiplication:
$$\begin{bmatrix}1&0&0\\-3&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}1&2&1\\0&2&-2\\0&4&1\end{bmatrix}$$
Verify in numpy:
```
fac1 = np.eye(3)
fac1[1, 0] = -3
fac1@A
```
Continuing in the same way, we can multiply by another matrix on the left to complete the elimination for the second pivot:
$$\begin{bmatrix}1&0&0\\0&1&0\\0&-2&1\end{bmatrix}\begin{bmatrix}1&0&0\\-3&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}1&2&1\\0&2&-2\\0&0&5\end{bmatrix}$$
Verify:
```
fac2 = np.eye(3)
fac2[2, 1] = -2
fac2@fac1@A
```
If we keep multiplying matrices on the left, we can also complete the back-substitution process:
$$\begin{bmatrix}1&-2&0\\0&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}1&0&0\\0&\frac{1}{2}&0\\0&0&1\end{bmatrix}\begin{bmatrix}1&0&-1\\0&1&2\\0&0&1\end{bmatrix}\begin{bmatrix}1&0&0\\0&1&0\\0&0&\frac{1}{5}\end{bmatrix}\begin{bmatrix}1&0&0\\0&1&0\\0&-2&1\end{bmatrix}\begin{bmatrix}1&0&0\\-3&1&0\\0&0&1\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}$$
Verify (because of floating-point precision, the final result is not exactly 0s and 1s; the round function is used for clarity):
```
fac3 = np.eye(3)
fac3[2, 2] = 1.0/5
fac4 = np.array([[1, 0 , -1],[0, 1, 2], [0, 0, 1]])
fac5 = np.eye(3)
fac5[1, 1] = 1.0/2
fac6 = np.eye(3)
fac6[0, 1] = -2
(fac6@fac5@fac4@fac3@fac2@fac1@A).round()
```
From the above we can see that if we want a single matrix that turns A into the upper triangular matrix U, what we need is fac2 times fac1:
$$\begin{bmatrix}1&0&0\\-3&1&0\\6&-2&1\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}1&2&1\\0&2&-2\\0&0&5\end{bmatrix}$$
```
fac2@fac1
```
### Matrix inverse
If we multiply fac6 down through fac1, the resulting product times A is the identity matrix I; by the definition $A^{-1}A=I$ we get:
$$fac6 \times fac5 \times fac4 \times fac3 \times fac2 \times fac1 = A^{-1}$$
```
fac6@fac5@fac4@fac3@fac2@fac1
```
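We can also check this product against numpy's built-in inverse (a sanity check; exact equality is not expected because of floating point, so np.allclose is used):

```python
import numpy as np

A = np.array([[1, 2, 1], [3, 8, 1], [0, 4, 1]], dtype=float)

# The six elimination/back-substitution factors used above
fac1 = np.eye(3); fac1[1, 0] = -3
fac2 = np.eye(3); fac2[2, 1] = -2
fac3 = np.eye(3); fac3[2, 2] = 1.0 / 5
fac4 = np.array([[1.0, 0.0, -1.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])
fac5 = np.eye(3); fac5[1, 1] = 1.0 / 2
fac6 = np.eye(3); fac6[0, 1] = -2

product = fac6 @ fac5 @ fac4 @ fac3 @ fac2 @ fac1
assert np.allclose(product, np.linalg.inv(A))
```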
### Row exchanges via matrix multiplication
When the left factor is the identity matrix I, the matrix is left unchanged; therefore, if we rearrange the rows of the identity matrix and then multiply it by A, we exchange the rows of A. For example, to swap the second and third rows of A:
$$\begin{bmatrix}1&0&0\\0&0&1\\0&1&0\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}1&2&1\\0&4&1\\3&8&1\end{bmatrix}$$
Verify:
```
np.array([[1, 0, 0],[0, 0, 1],[0, 1, 0]])@A
```
If we want to cyclically shift the rows down by one position:
$$\begin{bmatrix}0&0&1\\1&0&0\\0&1&0\end{bmatrix}\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}=\begin{bmatrix}0&4&1\\1&2&1\\3&8&1\end{bmatrix}$$
Verify:
```
np.array([[0, 0, 1],[1, 0, 0],[0, 1, 0]])@A
```
From the column view above, to exchange columns we only need to multiply A on the right by a rearrangement of the identity matrix, e.g.:
$$\begin{bmatrix}1&2&1\\3&8&1\\0&4&1\end{bmatrix}\begin{bmatrix}0&0&1\\1&0&0\\0&1&0\end{bmatrix}=\begin{bmatrix}2&1&1\\8&1&3\\4&1&0\end{bmatrix}$$
Verify:
```
A@np.array([[0, 0, 1],[1, 0, 0],[0, 1, 0]])
```
## PCA and other tests on the computed DataFrame
```
import pandas as pd
import operator
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import seaborn as sns; sns.set()
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv("Good_Book.csv")
new_df = df.drop(["Year", "New ROI", "Movie_Title", "IMDB_Rating", "Female_Male_Dialogues_Ratio"],
axis = 1)
new_df.info()
```
### PCA for dimensionality reduction
```
pca = PCA(n_components = 3, random_state = 42)
principalComponents = pca.fit_transform(new_df)
principalDf = pd.DataFrame(data = principalComponents,
columns = ['PC1', 'PC2', 'PC3'])
pca.explained_variance_ratio_ ## The amount of variance explained by each of the selected components.
pca.components_
val = pca.components_.mean(axis = 0)
cols = new_df.columns
weights = {}
for vals, col in zip(val, cols):
weights[col] = vals
sorted_d = sorted(weights.items(), key=operator.itemgetter(1))
sorted_d
female_df = new_df.drop(["#_of_Male_Crew", "#_of_Male_Cast", "#_of_Male_Dialogue", "Genre3_Tag_Male",
"Genre2_Tag_Male", "Genre1_Tag_Male", "Sentiment_Male", "adjective_count_male",
"noun_count_male", "verb_count_male"], axis = 1)
female_df.info()
pcaf = PCA(n_components = 10, random_state = 42)
principalComponentsf = pcaf.fit_transform(female_df.corr())
principalDff = pd.DataFrame(data = principalComponentsf) ## Loading vectors
principalDff
pcaf.explained_variance_ ## The amount of variance explained by each of the selected components.
var_ratios = pcaf.explained_variance_ratio_
su = var_ratios[:2].sum() * 100
print(su, "%")
pcaf.components_
val = pcaf.components_.mean(axis = 0)
cols = female_df.columns
weights = {}
for vals, col in zip(val, cols):
weights[col] = vals
sorted_d = sorted(weights.items(), key=operator.itemgetter(1))
sorted_d
print(val.sum())
```
### Onto Clustering
```
kmeans = KMeans(n_clusters = 4, random_state = 42).fit(principalDf)
kmeans.labels_
y_kmeans = kmeans.predict(principalDf)
plt.figure(figsize = [15.0, 10.0])
plt.scatter(principalDf.iloc[:, 0], principalDf.iloc[:, 1], c = y_kmeans, s = 50, cmap = 'viridis')
centers = kmeans.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c = 'black', s = 200, alpha = 0.5)
plt.show()
## For only female chars
kmeansf = KMeans(n_clusters = 4, random_state = 42).fit(principalDff)
kmeansf.labels_
y_kmeansf = kmeansf.predict(principalDff)
plt.figure(figsize = [15.0, 10.0])
plt.scatter(principalDff.iloc[:, 0], principalDff.iloc[:, 2], c = y_kmeansf, s = 50, cmap = 'viridis')
centers = kmeansf.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 2], c = 'black', s = 200, alpha = 0.5)
plt.show()
cl = pd.read_csv("New_Bechdel_Class_Labels.csv")
cl["All_param_cluster_label"] = kmeans.labels_
cl["Female_param_cluster_label"] = kmeansf.labels_
cl.to_csv("New_Bechdel_Class_Labels.csv", index = False)
```
## Normalising
```
# n_df = pd.read_csv("Normalised_New_Bechdel_Class_Labels.csv")
# n = n_df.drop(["Year", "New ROI", "Movie_Title", "Female_Male_Dialogues_Ratio", "IMDB_Rating", "Bechdel_Scores",
# "labels", "All_param_cluster_label", "Female_param_cluster_label"],
# axis = 1)
# n.head()
# cols = n.columns
# for it in cols:
# n["N_"+it] = (n[it] - n[it].min())/(n[it].max() - n[it].min())
# n.drop([it], axis = 1, inplace = True)
# d = pd.DataFrame(data = n_df, columns = ["Year", "New ROI", "Movie_Title", "Female_Male_Dialogues_Ratio",
# "IMDB_Rating", "Bechdel_Scores", "labels", "All_param_cluster_label",
# "Female_param_cluster_label"])
# n_df = pd.concat([n, d], axis = 1)
# n_df.to_csv("Normalised_New_Bechdel_Class_Labels.csv", index = False)
```
## PCA and Clustering on Normalised Features
```
n_df = pd.read_csv("Normalised_New_Bechdel_Class_Labels.csv")
df = n_df.drop(["Year", "New ROI", "Movie_Title", "IMDB_Rating", "Bechdel_Scores",
"labels", "All_param_cluster_label", "Female_param_cluster_label"],
axis = 1)
df.info()
female_df = df.drop(["N_#_of_Male_Crew", "N_#_of_Male_Cast", "N_#_of_Male_Dialogue", "N_Genre3_Tag_Male",
"N_Genre2_Tag_Male", "N_Genre1_Tag_Male", "N_Sentiment_Male", "N_adjective_count_male",
"N_noun_count_male", "N_verb_count_male"], axis = 1)
female_df.info()
pcaf = PCA(n_components = 3, random_state = 42)
principalComponentsf = pcaf.fit_transform(female_df)
principalDff = pd.DataFrame(data = principalComponentsf,
columns = ['PC1', 'PC2', 'PC3'])
principalDff.head()
pcaf.explained_variance_ratio_ ## The amount of variance explained by each of the selected components.
pcaf.components_
val = pcaf.components_.mean(axis = 0)
cols = female_df.columns
weights = {}
for vals, col in zip(val, cols):
weights[col] = vals
sorted_d = sorted(weights.items(), key=operator.itemgetter(1))
sorted_d
## For only female chars
kmeansf = KMeans(n_clusters = 4, random_state = 42).fit(principalDff)
kmeansf.labels_
y_kmeansf = kmeansf.predict(principalDff)
plt.figure(figsize = [15.0, 10.0])
plt.scatter(principalDff.iloc[:, 0], principalDff.iloc[:, 1], c = y_kmeansf, s = 50, cmap = 'viridis')
centers = kmeansf.cluster_centers_
plt.scatter(centers[:, 0], centers[:, 1], c = 'black', s = 200, alpha = 0.5)
plt.show()
n_df["Female_param_cluster_label"] = kmeansf.labels_
n_df.to_csv("Normalised_New_Bechdel_Class_Labels.csv", index = False)
```
## Clustering without PCA
```
new_df = pd.read_csv("Good_Book.csv")
new_df = new_df.drop(["Year", "New ROI", "Movie_Title", "IMDB_Rating", "Female_Male_Dialogues_Ratio"],
axis = 1)
female_df = new_df.drop(["#_of_Male_Crew", "#_of_Male_Cast", "#_of_Male_Dialogue", "Genre3_Tag_Male",
"Genre2_Tag_Male", "Genre1_Tag_Male", "Sentiment_Male", "adjective_count_male",
"noun_count_male", "verb_count_male"], axis = 1)
## For only female chars
kmeansf = KMeans(n_clusters = 4, random_state = 42)
kmeansf.fit_transform(female_df)
y_kmeansf = kmeansf.predict(female_df)
kmeansf.score(female_df)
kmeansf.labels_
n_df = pd.read_csv("New_Bechdel_Class_Labels.csv")
n_df["Direct_Cluster_FV"] = kmeansf.labels_
n_df.to_csv("New_Bechdel_Class_Labels.csv", index = False)
```
## Kaiser's (KMO) and Bartlett's Tests
```
## The Kaiser-Meyer-Olkin Measure of Sampling Adequacy is a statistic that indicates the proportion of variance in
## your variables that might be caused by underlying factors.
## High values (close to 1.0) generally indicate that a factor analysis may be useful with your data.
## If the value is less than 0.50, the results of the factor analysis probably won't be very useful.
new_df = pd.read_csv("Good_Book.csv")
new_df = new_df.drop(["Year", "New ROI", "Movie_Title", "IMDB_Rating", "Female_Male_Dialogues_Ratio"],
axis = 1)
female_df = new_df.drop(["#_of_Male_Crew", "#_of_Male_Cast", "#_of_Male_Dialogue", "Genre3_Tag_Male",
"Genre2_Tag_Male", "Genre1_Tag_Male", "Sentiment_Male", "adjective_count_male",
"noun_count_male", "verb_count_male"], axis = 1)
female_df.info()
def KMO(df):
corr = df.corr().values ## Correlation matrix for the dataframe
invcor = np.linalg.inv(corr) ## Inverse matrix
d = np.linalg.inv(np.diag(np.sqrt(np.diag(invcor)))) ## Get the D 1/2 matrix
antiimage = np.matmul(np.matmul(d, invcor), d) ## Get the anti-image matrix
num = 0; den = 0
## Calculate the KMO using formula
for rows in range(antiimage.shape[0]):
for cols in range(antiimage.shape[1]):
if rows != cols:
num += (corr[rows][cols] ** 2)
den += ((corr[rows][cols] ** 2) + (antiimage[rows][cols] ** 2))
kmo = num / den
return kmo
kmo = KMO(female_df)
print("Value of the KMO Test: ", kmo, ":: Rounded of value: ", round(kmo, 2))
## Bartlett's test of sphericity tests the hypothesis that your correlation matrix is an identity matrix,
## which would indicate that your variables are unrelated and therefore unsuitable for structure detection.
## Small values (less than 0.05) of the significance level indicate that a factor analysis may be useful with your data.
from scipy.stats import bartlett
k, pval = bartlett(*female_df.values[:, 0:10])
print("Test Statistic: ", k, ":: P-value: ", pval)
def BToS(X, y = None):
"""
Bartlett's test of sphericity tests the hypothesis that your correlation matrix is an identity matrix,
which would indicate that your variables are unrelated and therefore unsuitable for structure detection.
Small values (less than 0.05) of the significance level indicate that a factor analysis may be useful with your data.
"""
n = X.shape[0]
p = X.shape[1]
corr = X.corr()
lhs = (n - 1) - (2 * p + 5) / 6 ## multiplier (n - 1) - (2p + 5)/6
rhs = np.log(np.linalg.det(corr)) ## log-determinant of the correlation matrix
chi_square = -(lhs * rhs) ## Chi-Square statistic
degree_of_freedom = (p ** 2 - p) / 2 ## Degrees of freedom
return chi_square, degree_of_freedom
BToS(female_df)
```
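`BToS` returns the chi-square statistic and the degrees of freedom but stops short of a p-value. A sketch of that last step, assuming `scipy` is available, uses the chi-square survival function; the dataframe below is a synthetic stand-in so the snippet runs on its own:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

rng = np.random.RandomState(0)
demo_df = pd.DataFrame(rng.rand(50, 4))            ## synthetic stand-in data
n, p = demo_df.shape
corr = demo_df.corr()
chi_square = -((n - 1) - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))
degrees_of_freedom = (p ** 2 - p) / 2
p_value = chi2.sf(chi_square, degrees_of_freedom)  ## survival function -> p-value
print("Chi-Square:", chi_square, ":: P-value:", p_value)
```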
## Varimax Rotation on dataframe
```
loadings = pcaf.components_.T * np.sqrt(pcaf.explained_variance_ratio_)
loadings = pd.DataFrame(loadings)
def varimax(loadings, normalize=True, max_iter=500, tolerance=1e-5):
df = loadings.copy()
column_names = df.index.values
index_names = df.columns.values
n_rows, n_cols = df.shape
if n_cols < 2:
return df
X = df.values
if normalize:
normalized_mtx = df.apply(lambda x: np.sqrt(sum(x**2)),
axis=1).values
X = (X.T / normalized_mtx).T
rotation_mtx = np.eye(n_cols)
d = 0
for _ in range(max_iter):
old_d = d
basis = np.dot(X, rotation_mtx)
transformed = np.dot(X.T, basis**3 - (1.0 / n_rows) *
np.dot(basis, np.diag(np.diag(np.dot(basis.T, basis)))))
U, S, V = np.linalg.svd(transformed)
rotation_mtx = np.dot(U, V)
d = np.sum(S)
if old_d != 0 and d / old_d < 1 + tolerance:
break
X = np.dot(X, rotation_mtx)
if normalize:
X = X.T * normalized_mtx
else:
X = X.T
loadings = pd.DataFrame(X, columns=column_names, index=index_names).T
return loadings, rotation_mtx
rotated_loading, rotationMtx = varimax(loadings)
rotated_loading.sum(axis = 0)
loadings
```
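Varimax is an orthogonal rotation, so it redistributes variance between factors without changing each variable's communality (the row sum of squared loadings). A minimal numpy sketch of that invariance on a synthetic loading matrix (values are illustrative, not from this analysis):

```python
import numpy as np

rng = np.random.RandomState(0)
loadings = rng.rand(6, 2)             # synthetic 6-variable, 2-factor loadings
theta = 0.7                           # arbitrary planar rotation angle
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
rotated = loadings @ rotation         # any orthogonal rotation, varimax included
communalities = (loadings ** 2).sum(axis=1)
print(np.allclose(communalities, (rotated ** 2).sum(axis=1)))  # True
```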
```
# ms-python.python added
import os
try:
os.chdir(os.path.join(os.getcwd(), 'day 11'))
print(os.getcwd())
except:
pass
from computerrefractored import Computer
import matplotlib.pyplot as plt
from collections import defaultdict
import numpy as np
from collections import namedtuple
def dimensions(obj):
minim = min(obj,key = lambda x:x[0])[0], min(obj,key = lambda x:x[1])[1] # mins for x and y
maxim = max(obj,key = lambda x:x[0])[0], max(obj,key = lambda x:x[1])[1] # max for dimensions
ranges = (maxim[0] - minim[0]+1, maxim[1] - minim[1]+1)
Dim = namedtuple('Dim',['min','max','ranges'])
res = Dim(minim,maxim,ranges)
return res
def normalize(obj):
dim = dimensions(obj)
return [(o[0]-dim.min[0],o[1]-dim.min[1]) for o in obj]
class Grid():
def __init__(self):
self.x,self.y = 0,0
self.visited = defaultdict(int)
self.visited[(0,0)]=1
self.color = defaultdict(int)
self.orientation = {
'up':(0,1),
'left':(-1,0),
'down':(0,-1),
'right':(1,0)
}
self.cur_orient = 'up'
def paint(self,color_to_paint):
self.color[(self.x,self.y)]=color_to_paint
def get_color(self):
return self.color[(self.x,self.y)]
def get_location(self):
return (self.x,self.y)
def move(self):
dx,dy = self.orientation[self.cur_orient]
self.x += dx
self.y += dy
self.visited[(self.x,self.y)]=1
def turn(self,turn):
if turn == 0:
if self.cur_orient=='up':self.cur_orient='left'
elif self.cur_orient=='down':self.cur_orient='right'
elif self.cur_orient=='left':self.cur_orient='down'
elif self.cur_orient=='right':self.cur_orient='up'
if turn == 1:
if self.cur_orient=='up':self.cur_orient='right'
elif self.cur_orient=='down':self.cur_orient='left'
elif self.cur_orient=='left':self.cur_orient='up'
elif self.cur_orient=='right':self.cur_orient='down'
noun, verb = 0,0
f=open('input.txt').read()
memory = tuple(int(i) for i in f.split(',')) # let's make it immutable as a tuple
memsize = 100000
memory = tuple(list(memory)+[0]*memsize)
c = Computer(list(memory),noun,verb,[1])
grid = Grid()
def walk():
current_color = grid.get_color()
c.receiveinput(current_color)
color_to_paint = c.run()
turn = c.run()  # the second run() output is the turn direction
grid.paint(color_to_paint)
grid.turn(turn)
grid.move()
# print(f'cur location{grid.get_location()},current_color{current_color},color_to_paint{color_to_paint},turn{turn},{grid.cur_orient}')
def traverse():
cur = 1
while cur !=(0,0) and c.running:
walk()
cur = grid.get_location()
traverse()
sum([v for v in grid.visited.values()])
whites = [k for k,v in grid.color.items() if v == 1 ]
dim = dimensions(whites)
whites = normalize(whites)
pic = np.zeros(dim.ranges[0]*dim.ranges[1]).reshape(dim.ranges[0],dim.ranges[1])
for w in whites: pic[w]=1
from matplotlib import pyplot as plt
plt.imshow(pic)
# part 2
c = Computer(list(memory),noun,verb,[1])
grid = Grid()
grid.color[(0,0)]=1 # line added
traverse()
whites = [k for k,v in grid.color.items() if v == 1 ]
dim = dimensions(whites)
whites = normalize(whites)
pic = np.zeros(dim.ranges[0]*dim.ranges[1]).reshape(dim.ranges[0],dim.ranges[1])
for w in whites: pic[w]=1
from matplotlib import pyplot as plt
plt.imshow(pic)
```
```
# default_exp desc.stats
```
# Exploration Statistics
> This module comprises all the functions for calculating descriptive statistics.
```
!pip install dit
!pip install sentencepiece
# export
# Imports
from scipy.stats import sem, t, median_abs_deviation as mad
from statistics import mean, median, stdev
import math
#hide
from nbdev.showdoc import *
from ds4se.desc.stats import *
#Testing List
l = [1, 2, 4, 8, 7, 10]
```
# Get_desc_stats & test
```
#export
def get_desc_stats(l):
    '''
    Returns max, min, mean, median, standard deviation, and median absolute deviation of a list
    :param l: input list
    :returns: see above
    '''
    return max(l), min(l), mean(l), median(l), stdev(l), mad(l)
#Here we test get desc stats
testGDS = get_desc_stats(l)
#Expected results
#max = 10
#min = 1
#mean = 5.33
#median = 5.5
#std = 3.56
#MAD = 3.0 (scipy's raw MAD; ~4.447 with scale='normal')
assert(testGDS[0] == 10)
assert(testGDS[1] == 1)
assert(round(testGDS[2],2) == 5.33)
assert(round(testGDS[3],2) == 5.5)
assert(round(testGDS[4],2) == 3.56)
assert(round(testGDS[5],2) == 3)
```
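Note that scipy's `median_abs_deviation` returns the raw MAD by default (`scale=1.0`); passing `scale='normal'` divides by ≈0.6745 so the result is comparable to a standard deviation. A quick check on the test list:

```python
from scipy.stats import median_abs_deviation

l = [1, 2, 4, 8, 7, 10]
print(median_abs_deviation(l))                  # raw MAD: 3.0
print(median_abs_deviation(l, scale='normal'))  # ~4.447, comparable to the std
```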
# Confidence_interval & test
```
#export
def confidence_interval(l, c = 0.95):
    '''
    Calculates the confidence interval of a list
    :param l: input list
    :param c: confidence level (e.g. 0.95 for a 95% interval)
    :returns: start of interval and end of interval
    '''
    n = len(l)
    m = mean(l)
    std_err = sem(l)
    h = std_err * t.ppf((1 + c) / 2, n - 1)
    start = m - h
    end = m + h
    return start, end
#TestStats
#Test confidence Interval
testCI = confidence_interval(l)
assert(testCI[0] == 1.598364516031722)
```
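As a sanity check, `scipy.stats.t.interval` computes the same interval directly from the mean and standard error (a sketch using the same test list):

```python
from statistics import mean
from scipy.stats import sem, t

l = [1, 2, 4, 8, 7, 10]
start, end = t.interval(0.95, df=len(l) - 1, loc=mean(l), scale=sem(l))
print(start, end)  # ~1.5984 and ~9.0683
```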
# Report_stats & test
```
#export
def report_stats(l, c = 0.95):
    '''
    Prints a formatted report of the descriptive stats
    :param l: list of numbers
    :param c: confidence level
    :returns: prints all stats and the confidence interval with nice formatting
    '''
    maxi, mini, μ, med, σ, med_σ = get_desc_stats(l)
    print("Max:", maxi)
    print("Min:", mini)
    print("Average:", μ)
    print("Median:", med)
    print("Standard Deviation:", σ)
    print("Median Absolute Deviation:", med_σ)
    start, end = confidence_interval(l, c = c)
    print(f"The {int(c * 100)}% confidence interval for the mean is [{start}, {end}]")
```
```
import pickle
import boto3
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()  # create or retrieve the session if the environment does not provide one
sc = spark.sparkContext
from pyspark.sql import SQLContext
from pyspark.sql import functions as F
from pyspark.sql.window import Window
from pyspark.sql.types import IntegerType, StringType, FloatType, ArrayType, DoubleType, StructType, StructField
sqlContext = SQLContext(sc)
base_save_path = "s3://mag-model-data/raw_mag_data/"
iteration_save_path = "s3://mag-model-data/V2/iteration_1/"
```
## Getting Level 2 Parents
```
journal_join_query = \
"""
SELECT e.paper_id, e.normalized_name as level_one, f.normalized_name as level_two
FROM (SELECT distinct a.paper_id, b.normalized_name
FROM (SELECT paper_id, field_of_study as field_of_study_id
FROM mag_advanced_paper_fields_of_study) a
JOIN (SELECT field_of_study_id, normalized_name
FROM mag_advanced_fields_of_study
WHERE level = 1) b
ON a.field_of_study_id=b.field_of_study_id ) e
JOIN (SELECT distinct c.paper_id, d.normalized_name
FROM (SELECT paper_id, field_of_study as field_of_study_id
FROM mag_advanced_paper_fields_of_study) c
JOIN (SELECT field_of_study_id, normalized_name
FROM mag_advanced_fields_of_study
WHERE level = 2) d
ON c.field_of_study_id=d.field_of_study_id) f
ON e.paper_id=f.paper_id
"""
all_data = spark.read \
.format("com.databricks.spark.redshift") \
.option("url", redshift_url) \
.option("user", "app_user") \
.option("password", redshift_password) \
.option("query", journal_join_query) \
.option("tempdir", base_save_path) \
.option("forward_spark_s3_credentials", True) \
.load()
all_data.printSchema()
all_data.orderBy(F.rand()).show(20)
all_data.cache().count()
w1 = Window.partitionBy('level_two').orderBy(F.col('weighted').desc())
# Getting all pairs of level ones and level twos across all papers in MAG
one_two_pair_counts = all_data.groupby(['level_two','level_one']).count() \
.join(all_data.groupby('level_one').count().select('level_one', F.col('count').alias('level_one_count')), on='level_one')
# Weighting counts of level ones for each level two by the total number of level ones. This is done
# to make sure that high-frequency level ones are not dominating
one_two_pair_counts \
.select('level_two','level_one','count','level_one_count',
(F.col('count')/F.col('level_one_count')).alias('weighted')) \
.withColumn('rank', F.row_number().over(w1)).filter(F.col('rank') <=15) \
.withColumn('topic_list', F.collect_list(F.col('level_one')).over(w1)) \
.groupby('level_two').agg(F.max(F.col('topic_list')).alias('topic_list')) \
.coalesce(1).write.mode('overwrite').parquet(f"{base_save_path}level_2_parents")
```
## Getting Level 3 Parents
```
journal_join_query = \
"""
SELECT e.paper_id, e.normalized_name as level_two, f.normalized_name as level_three
FROM (SELECT distinct a.paper_id, b.normalized_name
FROM (SELECT paper_id, field_of_study as field_of_study_id
FROM mag_advanced_paper_fields_of_study) a
JOIN (SELECT field_of_study_id, normalized_name
FROM mag_advanced_fields_of_study
WHERE level = 2) b
ON a.field_of_study_id=b.field_of_study_id ) e
JOIN (SELECT distinct c.paper_id, d.normalized_name
FROM (SELECT paper_id, field_of_study as field_of_study_id
FROM mag_advanced_paper_fields_of_study) c
JOIN (SELECT field_of_study_id, normalized_name
FROM mag_advanced_fields_of_study
WHERE level = 3) d
ON c.field_of_study_id=d.field_of_study_id) f
ON e.paper_id=f.paper_id
"""
all_data = spark.read \
.format("com.databricks.spark.redshift") \
.option("url", redshift_url) \
.option("user", "app_user") \
.option("password", redshift_password) \
.option("query", journal_join_query) \
.option("tempdir", base_save_path) \
.option("forward_spark_s3_credentials", True) \
.load()
all_data.printSchema()
all_data.show(20)
w1 = Window.partitionBy('level_three').orderBy(F.col('weighted').desc())
# Getting all pairs of level twos and level threes across all papers in MAG
two_three_pair_counts = all_data.groupby(['level_three','level_two']).count() \
.join(all_data.groupby('level_two').count().select('level_two', F.col('count').alias('level_two_count')), on='level_two')
# Weighting counts of level twos for each level three by the total number of level twos. This is done
# to make sure that high-frequency level twos are not dominating
two_three_pair_counts \
.select('level_three','level_two','count','level_two_count',
(F.col('count')/F.col('level_two_count')).alias('weighted')) \
.withColumn('rank', F.row_number().over(w1)).filter(F.col('rank') <=15) \
.withColumn('topic_list', F.collect_list(F.col('level_two')).over(w1)) \
.groupby('level_three').agg(F.max(F.col('topic_list')).alias('topic_list')) \
.coalesce(1).write.mode('overwrite').parquet(f"{base_save_path}level_3_parents")
```
```
# Copyright 2019 Google Inc. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
# Reusable components
This tutorial describes the manual way of writing a full component program (in any language) and a component definition for it. Below is a summary of the steps involved in creating and using a component:
- Write the program that contains your component’s logic. The program must use files and command-line arguments to pass data to and from the component.
- Containerize the program.
- Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system.
- Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline and run that pipeline.
Note: If you want to build the image locally, ensure that Docker is installed by running the following command:
`which docker`
The result should be something like:
`/usr/bin/docker`
```
import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import datetime
import kubernetes as k8s
# Required Parameters
PROJECT_ID='<ADD GCP PROJECT HERE>'
GCS_BUCKET='gs://<ADD STORAGE LOCATION HERE>'
```
## Create client
If you run this notebook **outside** of a Kubeflow cluster, run the following command:
- `host`: The URL of your Kubeflow Pipelines instance, for example "https://`<your-deployment>`.endpoints.`<your-project>`.cloud.goog/pipeline"
- `client_id`: The client ID used by Identity-Aware Proxy
- `other_client_id`: The client ID used to obtain the auth codes and refresh tokens.
- `other_client_secret`: The client secret used to obtain the auth codes and refresh tokens.
```python
client = kfp.Client(host, client_id, other_client_id, other_client_secret)
```
If you run this notebook **within** a Kubeflow cluster, run the following command:
```python
client = kfp.Client()
```
You'll need to create OAuth client ID credentials of type `Other` to get `other_client_id` and `other_client_secret`. Learn more about [creating OAuth credentials](
https://cloud.google.com/iap/docs/authentication-howto#authenticating_from_a_desktop_app)
```
# Optional Parameters, but required for running outside Kubeflow cluster
# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'
# The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline'
# Examples are:
# https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com
# https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline
HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>'
# For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following
# will be needed to access the endpoint.
CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>'
OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>'
OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>'
# This is to ensure the proper access token is present to reach the end point for 'AI Platform Pipelines'
# If you are not working with 'AI Platform Pipelines', this step is not necessary
! gcloud auth print-access-token
# Create kfp client
in_cluster = True
try:
k8s.config.load_incluster_config()
except:
in_cluster = False
pass
if in_cluster:
client = kfp.Client()
else:
if HOST.endswith('googleusercontent.com'):
CLIENT_ID = None
OTHER_CLIENT_ID = None
OTHER_CLIENT_SECRET = None
client = kfp.Client(host=HOST,
client_id=CLIENT_ID,
other_client_id=OTHER_CLIENT_ID,
other_client_secret=OTHER_CLIENT_SECRET)
```
## Writing the program code
The following cell creates a file `app.py` containing a Python script. The script downloads the MNIST dataset, trains a neural-network classification model, writes the training log, and exports the trained model to Google Cloud Storage.
Your component can create outputs that the downstream components can use as inputs. Each output must be a string and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as `/output.txt`.
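As a minimal sketch of that convention (the bucket path and value below are hypothetical placeholders, not the tutorial's actual component):

```python
# A component that outputs the trained model's path writes the value to a local
# text file; the pipeline system reads that file to pass the value downstream.
output_value = "gs://example-bucket/model.h5"  # hypothetical output value
with open("output.txt", "w") as f:             # the tutorial's component uses /output.txt
    f.write(output_value)
print(open("output.txt").read())
```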
```
%%bash
# Create folders if they don't exist.
mkdir -p tmp/reuse_components/mnist_training
# Create the Python file that trains an MNIST model and uploads it to GCS.
cat > ./tmp/reuse_components/mnist_training/app.py <<HERE
import argparse
from datetime import datetime
import tensorflow as tf
parser = argparse.ArgumentParser()
parser.add_argument(
'--model_file', type=str, required=True, help='Name of the model file.')
parser.add_argument(
'--bucket', type=str, required=True, help='GCS bucket name.')
args = parser.parse_args()
bucket=args.bucket
model_file=args.model_file
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
print(model.summary())
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()),
# Interrupt training if val_loss stops improving for over 2 epochs
tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
]
model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks,
validation_data=(x_test, y_test))
model.save(model_file)
from tensorflow import gfile
gcs_path = bucket + "/" + model_file
if gfile.Exists(gcs_path):
gfile.Remove(gcs_path)
gfile.Copy(model_file, gcs_path)
with open('/output.txt', 'w') as f:
f.write(gcs_path)
HERE
```
## Create a Docker container
Create your own container image that includes your program.
### Creating a Dockerfile
Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The `FROM` statement specifies the Base Image from which you are building. `WORKDIR` sets the working directory. When you assemble the Docker image, `COPY` copies the required files and directories (for example, `app.py`) to the file system of the container. `RUN` executes a command (for example, install the dependencies) and commits the results.
```
%%bash
# Create Dockerfile.
cat > ./tmp/reuse_components/mnist_training/Dockerfile <<EOF
FROM tensorflow/tensorflow:1.15.0-py3
WORKDIR /app
COPY . /app
EOF
```
### Build docker image
Now that we have created the Dockerfile, we need to build the image and push it to a registry that can host it. There are three possible options:
- Use the `kfp.containers.build_image_from_working_dir` to build the image and push to the Container Registry (GCR). This requires [kaniko](https://cloud.google.com/blog/products/gcp/introducing-kaniko-build-container-images-in-kubernetes-and-google-container-builder-even-without-root-access), which will be auto-installed with 'full Kubeflow deployment' but not 'AI Platform Pipelines'.
- Use [Cloud Build](https://cloud.google.com/cloud-build), which would require the setup of GCP project and enablement of corresponding API. If you are working with GCP 'AI Platform Pipelines' with GCP project running, it is recommended to use Cloud Build.
- Use [Docker](https://www.docker.com/get-started) installed locally and push to e.g. GCR.
**Note**:
If you run this notebook **within Kubeflow cluster**, **with Kubeflow version >= 0.7** and exploring **kaniko option**, you need to ensure that valid credentials are created within your notebook's namespace.
- With Kubeflow version >= 0.7, the credential is supposed to be copied automatically while creating notebook through `Configurations`, which doesn't work properly at the time of creating this notebook.
- You can also add credentials to the new namespace by either [copying credentials from an existing Kubeflow namespace, or by creating a new service account](https://www.kubeflow.org/docs/gke/authentication/#kubeflow-v0-6-and-before-gcp-service-account-key-as-secret).
- The following cell demonstrates how to copy the default secret to your own namespace.
```bash
%%bash
NAMESPACE=<your notebook name space>
SOURCE=kubeflow
NAME=user-gcp-sa
SECRET=$(kubectl get secrets \${NAME} -n \${SOURCE} -o jsonpath="{.data.\${NAME}\.json}" | base64 -D)
kubectl create -n \${NAMESPACE} secret generic \${NAME} --from-literal="\${NAME}.json=\${SECRET}"
```
```
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format(
PROJECT_ID=PROJECT_ID,
IMAGE_NAME=IMAGE_NAME,
TAG=TAG
)
APP_FOLDER='./tmp/reuse_components/mnist_training/'
# In the following, for the purpose of demonstration:
# Cloud Build is chosen for 'AI Platform Pipelines'
# kaniko is chosen for 'full Kubeflow deployment'
if HOST.endswith('googleusercontent.com'):
# kaniko is not pre-installed with 'AI Platform Pipelines'
import subprocess
# ! gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER}
cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER]
build_log = (subprocess.run(cmd, stdout=subprocess.PIPE).stdout[:-1].decode('utf-8'))
print(build_log)
else:
if kfp.__version__ <= '0.1.36':
# kfp versions after 0.1.36 introduced a breaking change that makes the following code fail
import subprocess
builder = kfp.containers._container_builder.ContainerBuilder(
gcs_staging=GCS_BUCKET + "/kfp_container_build_staging"
)
kfp.containers.build_image_from_working_dir(
image_name=GCR_IMAGE,
working_dir=APP_FOLDER,
builder=builder
)
else:
raise RuntimeError("Please build the docker image using either [Docker] or [Cloud Build]")
```
#### If you want to use docker to build the image
Run the following in a cell
```bash
%%bash -s "{PROJECT_ID}"
IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"
# Create script to build docker image and push it.
cat > ./tmp/reuse_components/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}"
docker build -t \${IMAGE_NAME} .
docker tag \${IMAGE_NAME} \${GCR_IMAGE}
docker push \${GCR_IMAGE}
docker image rm \${IMAGE_NAME}
docker image rm \${GCR_IMAGE}
HERE
cd tmp/reuse_components/mnist_training
bash build_image.sh
```
```
image_name = GCR_IMAGE
```
## Writing your component definition file
To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.
For the complete definition of a Kubeflow Pipelines component, see the [component specification](https://www.kubeflow.org/docs/pipelines/reference/component-spec/). However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.
Start writing the component definition (component.yaml) by specifying your container image in the component’s implementation section:
```
%%bash -s "{image_name}"
GCR_IMAGE="${1}"
echo ${GCR_IMAGE}
# Create Yaml
# the image uri should be changed according to the above docker image push output
cat > mnist_component.yaml <<HERE
name: Mnist training
description: Train a mnist model and save to GCS
inputs:
- name: model_file
description: 'Name of the model file.'
type: String
- name: bucket
description: 'GCS bucket name.'
type: String
outputs:
- name: model_path
description: 'Trained model path.'
type: GCSPath
implementation:
container:
image: ${GCR_IMAGE}
command: [
python, /app/app.py,
--model_file, {inputValue: model_file},
--bucket, {inputValue: bucket},
]
fileOutputs:
model_path: /output.txt
HERE
```
### Create your workflow as a Python function
Define your pipeline as a Python function. `@kfp.dsl.pipeline` is a required decorator and must include `name` and `description` properties. Then compile the pipeline function; once compilation completes, a pipeline file is created.
```
import os
mnist_train_op = kfp.components.load_component_from_file(os.path.join('./', 'mnist_component.yaml'))
mnist_train_op.component_spec
# Define the pipeline
@dsl.pipeline(
name='Mnist pipeline',
description='A toy pipeline that performs mnist model training.'
)
def mnist_reuse_component_pipeline(
model_file: str = 'mnist_model.h5',
bucket: str = GCS_BUCKET
):
mnist_train_op(model_file=model_file, bucket=bucket).apply(gcp.use_gcp_secret('user-gcp-sa'))
return True
```
### Submit a pipeline run
```
pipeline_func = mnist_reuse_component_pipeline
experiment_name = 'minist_kubeflow'
arguments = {"model_file":"mnist_model.h5",
"bucket":GCS_BUCKET}
run_name = pipeline_func.__name__ + ' run'
# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
experiment_name=experiment_name,
run_name=run_name,
arguments=arguments)
```
**As an alternative, you can compile the pipeline into a package.** The compiled pipeline can be easily shared and reused by others to run the pipeline.
```python
pipeline_filename = pipeline_func.__name__ + '.pipeline.zip'
compiler.Compiler().compile(pipeline_func, pipeline_filename)
experiment = client.create_experiment('python-functions-mnist')
run_result = client.run_pipeline(
experiment_id=experiment.id,
job_name=run_name,
pipeline_package_path=pipeline_filename,
params=arguments)
```
```
import pandas as pd
import numpy as np
import seaborn as sns
import cPickle as pickle
import codecs
import skfuzzy as fuzz
import time
from matplotlib import pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn import metrics
from sklearn.preprocessing import Normalizer
from sklearn.cluster import KMeans
from sklearn.cluster.bicluster import SpectralCoclustering
from biclustering.biclustering import DeltaBiclustering
from sklearn.metrics.cluster import normalized_mutual_info_score
from sklearn.metrics.cluster import adjusted_rand_score
%matplotlib inline
sns.set_palette("deep", desat=.6)
sns.set_context(rc={"figure.figsize": (8, 4)})
arena_news_df = pd.read_pickle('arena_news_df.pkl')
sport_news_df = pd.read_pickle('sport_news_df.pkl')
jovem_news_df = pd.read_pickle('jovem_news_df.pkl')
labels_true = np.array(len(arena_news_df)*[0] + len(sport_news_df)*[1] + len(jovem_news_df.ix[0:99])*[2])
count_vect = CountVectorizer(encoding='UTF-8',lowercase=False, min_df=2)
X = count_vect.fit_transform(arena_news_df['all'].tolist() + sport_news_df['all'].tolist() + jovem_news_df['all'].ix[0:99].tolist())
X_train_norm_tfidf = TfidfTransformer(norm=u'l2', use_idf=True).fit_transform(X).toarray()
X_train_tfidf = TfidfTransformer(norm=False, use_idf=True).fit_transform(X).toarray()
X_train_norm = TfidfTransformer(norm=u'l2', use_idf=False).fit_transform(X).toarray()
X_train = TfidfTransformer(norm=False, use_idf=False).fit_transform(X).toarray()
print X_train.shape
def _big_s(x, center):
len_x = len(x)
total = 0
for i in range(len_x):
total += np.linalg.norm(x[i]-center)
return total / len_x
def davies_bouldin_score(X, labels_pred, k_centers):
try:
num_clusters, _ = k_centers.shape
big_ss = np.zeros([num_clusters], dtype=np.float64)
d_eucs = np.zeros([num_clusters, num_clusters], dtype=np.float64)
db = 0
for k in range(num_clusters):
samples_in_k_inds = np.where(labels_pred == k)[0]
samples_in_k = X[samples_in_k_inds, :]
big_ss[k] = _big_s(samples_in_k, k_centers[k])
for k in range(num_clusters):
for l in range(0, num_clusters):
d_eucs[k, l] = np.linalg.norm(k_centers[k]-k_centers[l])
for k in range(num_clusters):
values = np.zeros([num_clusters-1], dtype=np.float64)
for l in range(0, k):
values[l] = (big_ss[k] + big_ss[l])/d_eucs[k, l]
for l in range(k+1, num_clusters):
values[l-1] = (big_ss[k] + big_ss[l])/d_eucs[k, l]
db += np.max(values)
res = db / num_clusters
except Exception:
return 0.0
return res
def calculate_centroids_doc_mean(X, labels_pred, k):
_, m = X.shape
centroids = np.zeros((k, m))
for k in range(k):
samples_in_k_inds = np.where(labels_pred == k)[0]
centroids[k, :] = X[samples_in_k_inds, :].mean(axis=0)
return centroids
```
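Newer scikit-learn releases (0.20 and later) ship a built-in `davies_bouldin_score` that can serve as a cross-check for the hand-rolled implementation above; a sketch on synthetic, well-separated blobs:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

# Two well-separated synthetic clusters: the index should be well below 1.
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(30, 2), rng.randn(30, 2) + 5])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(davies_bouldin_score(X, labels))  # lower values indicate better separation
```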
## K-means
```
params = {
'k' : [3],
'X' : ['X_train', 'X_train_norm', 'X_train_tfidf', 'X_train_norm_tfidf']
}
with codecs.open('kmeans_news_results.csv', 'w', 'utf-8') as out_f:
out_f.write('X,K,NMI,RAND,DAVIES\n')
for k in params['k']:
for data_str in params['X']:
data = eval(data_str)
error_best = np.inf
for _ in xrange(10):
estimator = KMeans(n_clusters=k, max_iter=1000, init='random')
estimator.fit(data)
labels_pred = estimator.labels_
centroids = estimator.cluster_centers_
error = estimator.inertia_
nmi_score = normalized_mutual_info_score(labels_true, labels_pred)
rand_score = adjusted_rand_score(labels_true, labels_pred)
davies_score = davies_bouldin_score(data, labels_pred, centroids)
out_f.write(u'%s,%s,%s,%s,%s\n' % (data_str, k, nmi_score, rand_score, davies_score))
print 'Execution: X: %s, k: %s' % (data_str, k)
print 'NMI score: %s' % nmi_score
print 'Rand score: %s' % rand_score
print 'Davies score: %s' % davies_score
print '-----------------------------------------------\n'
```
## Fuzzy K-means
```
params = {
'k' : [3],
'X' : ['X_train', 'X_train_norm', 'X_train_tfidf', 'X_train_norm_tfidf']
}
with codecs.open('fuzzy_cmeans_news_results.csv', 'w', 'utf-8') as out_f:
out_f.write('X,K,NMI,RAND,DAVIES\n')
for k in params['k']:
for data_str in params['X']:
data = eval(data_str)
error_best = np.inf
for _ in xrange(10):
centroids, U, _, _, errors, _, _ = fuzz.cluster.cmeans(
data.T,
k,
2,
error=0.00001,
maxiter=10000)
centroids
labels_pred = np.argmax(U, axis=0)
error = errors[-1]
nmi_score = normalized_mutual_info_score(labels_true, labels_pred)
rand_score = adjusted_rand_score(labels_true, labels_pred)
davies_score = davies_bouldin_score(data, labels_pred, centroids)
out_f.write(u'%s,%s,%s,%s,%s\n' % (data_str, k, nmi_score, rand_score, davies_score))
print 'Execution: X: %s, k: %s' % (data_str, k)
print 'NMI score: %s' % nmi_score
print 'Rand score: %s' % rand_score
print 'Davies score: %s' % davies_score
print '-----------------------------------------------\n'
```
## Orthogonal Non-negative Matrix Tri-Factorization
```
def onmtf(X, U, S, V):
U = U * ((X.dot(V).dot(S.T)) / (U.dot(S).dot(V.T).dot(X.T).dot(U)))
V = V * ((X.T.dot(U).dot(S)) / (V.dot(S.T).dot(U.T).dot(X).dot(V)))
S = S * ((U.T.dot(X).dot(V)) / (U.T.dot(U).dot(S).dot(V.T).dot(V)))
return U, S, V
def onm3f(X, U, S, V):
U = U * (X.dot(V).dot(S.T)) / np.sqrt(U.dot(U.T).dot(X).dot(V).dot(S.T))
V = V * (X.T.dot(U).dot(S)) / np.sqrt(V.dot(V.T).dot(X.T).dot(U).dot(S))
S = S * (U.T.dot(X).dot(V)) / np.sqrt(U.T.dot(U).dot(S).dot(V.T).dot(V))
return U, S, V
def nbvd(X, U, S, V):
U = U * (X.dot(V).dot(S.T)) / U.dot(U.T).dot(X).dot(V).dot(S.T)
V = V * (X.T.dot(U).dot(S)) / V.dot(V.T).dot(X.T).dot(U).dot(S)
S = S * (U.T.dot(X).dot(V)) / U.T.dot(U).dot(S).dot(V.T).dot(V)
return U, S, V
def matrix_factorization_clustering(X, k, l, factorization_func=onmtf, norm=False, num_iters=100):
m, n = X.shape
U = np.random.rand(m,k)
S = np.random.rand(k,l)
V = np.random.rand(n,l)
if norm:
X = Normalizer().fit_transform(X)
error_best = np.inf
for i in xrange(num_iters):
U, S, V = factorization_func(X, U, S, V)
error = np.sum((X - U.dot(S).dot(V.T)) ** 2)
if error < error_best:
U_best = U
S_best = S
V_best = V
error_best = error
Du = np.diag(np.ones(m).dot(U_best))
Dv = np.diag(np.ones(n).dot(V_best))
U_norm = U_best.dot( np.diag(S_best.dot(Dv).dot(np.ones(l))) )
V_norm = V_best.dot( np.diag(np.ones(k).dot(Du).dot(S_best)) )
rows_ind = np.argmax(U_best, axis=1)
cols_ind = np.argmax(V_best, axis=1)
return U_norm, S_best, V_norm, rows_ind, cols_ind, error_best
params = {
'k' : [3],
'l' : [2, 3, 4, 5, 6],
# 'X' : ['X_train', 'X_train_tfidf']
'X' : ['X_train', 'X_train_norm', 'X_train_tfidf', 'X_train_norm_tfidf']
# 'X' : ['X_train_norm', 'X_train_norm_tfidf']
}
with codecs.open('onmtf_news_results.csv', 'w', 'utf-8') as out_f:
out_f.write('X,K,L,NMI,RAND,DAVIES\n')
for k in params['k']:
for l in params['l']:
for data_str in params['X']:
data = eval(data_str)
error_best = np.inf
for _ in xrange(10):
init_time = time.time()
U, S, V, labels_pred, _, error = matrix_factorization_clustering(data, k, l, num_iters=1000)
nmi_score = normalized_mutual_info_score(labels_true, labels_pred)
rand_score = adjusted_rand_score(labels_true, labels_pred)
davies_score = davies_bouldin_score(data, labels_pred, calculate_centroids_doc_mean(data, labels_pred, k))
end_time = time.time()
print end_time - init_time
out_f.write(u'%s,%s,%s,%s,%s,%s\n' % (data_str, k, l, nmi_score, rand_score, davies_score))
print 'Execution: X: %s, k: %s' % (data_str, k)
print 'Algo error: %s' % error
print 'NMI score: %s' % nmi_score
print 'Rand score: %s' % rand_score
print 'Davies score: %s' % davies_score
print '-----------------------------------------------\n'
```
## Fast Non-negative Matrix Tri-Factorization
```
def fnmtf(X, k, l, num_iter=1000, norm=False):
m, n = X.shape
def weights_initialization(X, n, m, k):
shuffle_inds = np.random.permutation(n)
cluster_end_ind = 0
for i in xrange(k):
cluster_init_ind = cluster_end_ind
            cluster_end_ind = int(round((i + 1) * float(n) / k))
X[shuffle_inds[cluster_init_ind : cluster_end_ind], i] = 1
return X
U = weights_initialization(np.zeros((m, k)), m, n, k)
S = np.random.rand(k,l)
V = weights_initialization(np.zeros((n, l)), n, m, l)
error_best = np.inf
error = error_best
if norm:
X = Normalizer().fit_transform(X)
for _ in xrange(num_iter):
S = np.linalg.pinv(U.T.dot(U)).dot(U.T).dot(X).dot(V).dot(np.linalg.pinv(V.T.dot(V)))
# solve subproblem to update V
U_tilde = U.dot(S)
V_new = np.zeros(n*l).reshape(n, l)
for j in xrange(n):
errors = np.zeros(l)
for col_clust_ind in xrange(l):
                errors[col_clust_ind] = ((X[:, j] - U_tilde[:, col_clust_ind]) ** 2).sum()
ind = np.argmin(errors)
V_new[j][ind] = 1
V = V_new
# while np.linalg.det(V.T.dot(V)) <= 0:
# if np.isnan( np.sum(V) ):
# break
# erros = (X - U.dot(S).dot(V.T)) ** 2
# erros = np.sum(erros.dot(V), axis=0) / np.sum(V, axis=0)
# erros[np.where(np.sum(V, axis=0) <= 1)] = -np.inf
# quantidade = np.sum(V, axis=0)
# indexMin = np.argmin(quantidade)
# indexMax = np.argmax(erros)
# indexes = np.nonzero(V[:, indexMax])[0]
# end = len(indexes)
# indexes_p = np.random.permutation(end)
# V[indexes[indexes_p[0:np.floor(end/2.0)]], indexMax] = 0.0
# V[indexes[indexes_p[0:np.floor(end/2.0)]], indexMin] = 1.0
# solve subproblem to update U
V_tilde = S.dot(V.T)
U_new = np.zeros(m*k).reshape(m, k)
for i in xrange(m):
errors = np.zeros(k)
for row_clust_ind in xrange(k):
                errors[row_clust_ind] = ((X[i, :] - V_tilde[row_clust_ind, :]) ** 2).sum()
ind = np.argmin(errors)
U_new[i][ind] = 1
U = U_new
# while np.linalg.det(U.T.dot(U)) <= 0:
# if np.isnan( np.sum(U) ):
# break
# erros = (X - U.dot(V_tilde)) ** 2
# erros = np.sum(U.T.dot(erros), axis=1) / np.sum(U, axis=0)
# erros[np.where(np.sum(U, axis=0) <= 1)] = -np.inf
# quantidade = np.sum(U, axis=0)
# indexMin = np.argmin(quantidade)
# indexMax = np.argmax(erros)
# indexes = np.nonzero(U[:, indexMax])[0]
# end = len(indexes)
# indexes_p = np.random.permutation(end)
# U[indexes[indexes_p[0:np.floor(end/2.0)]], indexMax] = 0.0
# U[indexes[indexes_p[0:np.floor(end/2.0)]], indexMin] = 1.0
error_ant = error
# print error_ant
error = np.sum((X - U.dot(S).dot(V.T)) ** 2)
if error < error_best:
U_best = U
S_best = S
V_best = V
error_best = error
# if np.abs(error - error_ant) <= 0.000001:
# break
    rows_ind = np.argmax(U_best, axis=1)
    cols_ind = np.argmax(V_best, axis=1)
    return U_best, S_best, V_best, rows_ind, cols_ind, error_best
params = {
'k' : [3],
'l' : [2, 3, 4, 5, 6],
'X' : ['X_train', 'X_train_norm', 'X_train_tfidf', 'X_train_norm_tfidf']
# 'X' : ['X_train', 'X_train_tfidf']
}
with codecs.open('nmtf_bin_news_results.csv', 'w', 'utf-8') as out_f:
out_f.write('X,K,L,NMI,RAND,DAVIES\n')
for k in params['k']:
for l in params['l']:
for data_str in params['X']:
data = eval(data_str)
error_best = np.inf
for _ in xrange(10):
init_time = time.time()
U, S, V, labels_pred, _, error = fnmtf(data, k, l)
nmi_score = normalized_mutual_info_score(labels_true, labels_pred)
rand_score = adjusted_rand_score(labels_true, labels_pred)
davies_score = davies_bouldin_score(data, labels_pred, calculate_centroids_doc_mean(data, labels_pred, k))
# if error < error_best:
# error_best = error
# nmi_score_best = nmi_score
# rand_score_best = rand_score
# davies_score_best = davies_score
# labels_pred_best = labels_pred
end_time = time.time()
print end_time - init_time
out_f.write(u'%s,%s,%s,%s,%s,%s\n' % (data_str, k, l, nmi_score, rand_score, davies_score))
print 'Execution: X: %s, k: %s, l: %s' % (data_str, k, l)
print 'Algo error: %s' % error
print 'NMI score: %s' % nmi_score
print 'Rand score: %s' % rand_score
print 'Davies score: %s' % davies_score
print '-----------------------------------------------\n'
```
## Fast Overlapping Non-negative Matrix Tri-Factorization
```
def matrix_factorization_overlapping_bin(X, k, l, num_iters=1000):
def weights_initialization(X, n, m, k):
shuffle_inds = np.random.permutation(n)
cluster_end_ind = 0
for i in xrange(k):
cluster_init_ind = cluster_end_ind
            cluster_end_ind = int(round((i + 1) * float(n) / k))
X[shuffle_inds[cluster_init_ind : cluster_end_ind], i] = 1
return X
def calculate_block_matrix(X, F, G, S, k, l):
for i in xrange(k):
for j in xrange(l):
S[i, j] = np.mean(X[F[:, i] == 1][:, G[i][:, j] == 1])
where_are_NaNs = np.isnan(S)
S[where_are_NaNs] = 0
return S
n, m = X.shape
error_best = np.inf
error = np.inf
F = weights_initialization(np.zeros((n, k)), n, m, k)
G = []
for i in xrange(k):
G.append( weights_initialization(np.zeros((m, l)), m, n, l) )
S = np.random.rand(k, l)
for iter_ind in xrange(num_iters):
S = calculate_block_matrix(X, F, G, S, k, l)
# Update G
for i in xrange(k):
F_t = F[F[:, i] == 1, :].dot(S)
X_t = X[F[:, i] == 1, :]
G[i] = np.zeros((m, l))
for j in xrange(m):
clust_len, _ = X_t.shape
diff = F_t - X_t[:, j].reshape(clust_len, 1).dot(np.ones(l).reshape(1, l))
errors = np.diag(diff.T.dot(diff))
minV = np.min(errors)
index = np.where(errors <= minV)[0]
G[i][j, index[np.random.randint(len(index))]] = 1
# while np.linalg.det(G[i].T.dot(G[i])) <= 0:
# erros = (X_t - F_t.dot(G[i].T)) ** 2
# erros = np.sum(erros.dot(G[i]), axis=0) / np.sum(G[i], axis=0)
# erros[np.where(np.sum(G[i], axis=0) <= 1)] = -np.inf
# quantidade = np.sum(G[i], axis=0)
# indexMin = np.argmin(quantidade)
# indexMax = np.argmax(erros)
# indexes = np.nonzero(G[i][:, indexMax])[0]
# end = len(indexes)
# indexes_p = np.random.permutation(end)
# G[i][indexes[indexes_p[0:np.floor(end/2.0)]], indexMax] = 0.0
# G[i][indexes[indexes_p[0:np.floor(end/2.0)]], indexMin] = 1.0
# S = calculate_block_matrix(X, F, G, S, k, l)
G_t = np.zeros((k, m))
for i in xrange(k):
G_t[i, :] = S[i, :].dot(G[i].T)
F = np.zeros((n, k))
for j in xrange(n):
diff = G_t - np.ones(k).reshape(k, 1).dot(X[j, :].reshape(1, m))
errors = np.diag(diff.dot(diff.T))
minV = np.min(errors)
index = np.where(errors <= minV)[0]
F[j, index[np.random.randint(len(index))]] = 1
# while np.linalg.det(F.T.dot(F)) <= 0:
# erros = (X - F.dot(G_t)) ** 2
# erros = np.sum(F.T.dot(erros), axis=1) / np.sum(F, axis=0)
# erros[np.where(np.sum(F, axis=0) <= 1)] = -np.inf
# quantidade = np.sum(F, axis=0)
# indexMin = np.argmin(quantidade)
# indexMax = np.argmax(erros)
# indexes = np.nonzero(F[:, indexMax])[0]
# end = len(indexes)
# indexes_p = np.random.permutation(end)
# F[indexes[indexes_p[0:np.floor(end/2.0)]], indexMax] = 0.0
# F[indexes[indexes_p[0:np.floor(end/2.0)]], indexMin] = 1.0
error_ant = error
error = np.sum((X - F.dot(G_t))**2)
# print error
if error < error_best:
error_best = error
F_best = F
S_best = S
G_best = G
G_t_best = G_t
# if np.abs(error - error_ant) <= 0.000001:
# break
rows_ind = np.argmax(F_best, axis=1)
reconstruction = F_best.dot(G_t_best)
    return F_best, S_best, G_best, G_t_best, rows_ind, error_best, reconstruction
params = {
'k' : [3],
'l' : [2, 3, 4, 5, 6],
'X' : ['X_train', 'X_train_norm', 'X_train_tfidf', 'X_train_norm_tfidf']
}
with codecs.open('ovnmtf_bin_news_results.csv', 'w', 'utf-8') as out_f:
out_f.write('X,K,L,NMI,RAND,DAVIES\n')
for k in params['k']:
for l in params['l']:
for data_str in params['X']:
data = eval(data_str)
error_best = np.inf
for _ in xrange(10):
init_time = time.time()
U, S, V, V_t, labels_pred, error, _ = matrix_factorization_overlapping_bin(data, k, l)
nmi_score = normalized_mutual_info_score(labels_true, labels_pred)
rand_score = adjusted_rand_score(labels_true, labels_pred)
davies_score = davies_bouldin_score(data, labels_pred, calculate_centroids_doc_mean(data, labels_pred, k))
# if error < error_best:
# error_best = error
# nmi_score_best = nmi_score
# rand_score_best = rand_score
# davies_score_best = davies_score
# labels_pred_best = labels_pred
end_time = time.time()
print end_time - init_time
out_f.write(u'%s,%s,%s,%s,%s,%s\n' % (data_str, k, l, nmi_score, rand_score, davies_score))
print 'Execution: X: %s, k: %s, l: %s' % (data_str, k, l)
print 'Algo error: %s' % error
print 'NMI score: %s' % nmi_score
print 'Rand score: %s' % rand_score
print 'Davies score: %s' % davies_score
print '-----------------------------------------------\n'
```
## Overlapping Non-negative Matrix Tri-Factorization
```
def is_any_clust_empty(U_bin):
n, k = U_bin.shape
return np.count_nonzero(np.sum(U_bin, axis=0)) != k
def overlapping_matrix_factorization_coclustering(X, k, l, norm=False, num_iters=100):
n, m = X.shape
U = np.random.rand(n, k)
S = np.random.rand(k, l)
V = []
for i in xrange(k):
V.append(np.random.rand(m, l))
Ii = np.zeros((k, k))
Ij = np.zeros((k, k))
error_best = np.inf
if norm:
X = Normalizer().fit_transform(X)
V_tilde = np.zeros((k, m))
for i in xrange(k):
Ii[i, i] = 1
V_tilde += Ii.dot(S).dot(V[i].T)
Ii[i, i] = 0
error = np.sum((X - U.dot(V_tilde)) ** 2)
for _ in xrange(num_iters):
# Update U
new_U_pos = np.zeros((n, k))
new_U_neg = np.zeros((n, k))
for i in xrange(k):
Ii[i, i] = 1
for j in xrange(k):
Ij[j, j] = 1
new_U_pos += U.dot(Ii).dot(S).dot(V[i].T).dot(V[j]).dot(S.T).dot(Ij)
Ij[j, j] = 0
new_U_neg += X.dot(V[i]).dot(S.T).dot(Ii)
Ii[i, i] = 0
U = U * (new_U_neg / new_U_pos)
# Compute V'
V_tilde = np.zeros((k, m))
for i in xrange(k):
Ii[i, i] = 1
V_tilde += Ii.dot(S).dot(V[i].T)
Ii[i, i] = 0
# Update Vi
for i in xrange(k):
new_V_pos = np.zeros((m, l))
new_V_neg = np.zeros((m, l))
Ii[i, i] = 1
for j in xrange(k):
Ij[j, j] = 1
new_V_pos += V[j].dot(S.T).dot(Ij).dot(U.T).dot(U).dot(Ii).dot(S)
Ij[j, j] = 0
new_V_neg += X.T.dot(U).dot(Ii).dot(S)
Ii[i, i] = 0
V[i] = V[i] * (new_V_neg / new_V_pos)
# Recompute V'
V_tilde = np.zeros((k, m))
for i in xrange(k):
Ii[i, i] = 1
V_tilde += Ii.dot(S).dot(V[i].T)
Ii[i, i] = 0
new_S_pos = np.zeros((k, l))
new_S_neg = np.zeros((k, l))
for i in xrange(k):
Ii[i, i] = 1
for j in xrange(k):
Ij[j, j] = 1
new_S_pos += Ii.dot(U.T).dot(U).dot(Ij).dot(S).dot(V[j].T).dot(V[i])
Ij[j, j] = 0
new_S_neg += Ii.dot(U.T).dot(X).dot(V[i])
Ii[i, i] = 0
S = S * (new_S_neg / new_S_pos)
# import pdb; pdb.set_trace()
V_tilde = np.zeros((k, m))
for i in xrange(k):
Ii[i, i] = 1
V_tilde += Ii.dot(S).dot(V[i].T)
Ii[i, i] = 0
error_ant = error
error = np.sum((X - U.dot(V_tilde))**2)
# print errorV_t
if error < error_best:
error_best = error
U_best = U
S_best = S
V_best = V
if np.abs(error - error_ant) <= 0.000001:
break
rows_ind = np.argmax(U_best, axis=1)
reconstruction = U_best.dot(V_tilde)
return U_best, S_best, V_best, V_tilde, rows_ind, error_best, reconstruction
params = {
'k' : [3],
'l' : [2, 3, 4, 5, 6],
'X' : ['X_train', 'X_train_norm', 'X_train_tfidf', 'X_train_norm_tfidf']
}
with codecs.open('ovnmtf_news_results.csv', 'w', 'utf-8') as out_f:
out_f.write('X,K,L,NMI,RAND,DAVIES\n')
for k in params['k']:
for l in params['l']:
for data_str in params['X']:
data = eval(data_str)
error_best = np.inf
for _ in xrange(10):
init_time = time.time()
U, S, V, V_t, labels_pred, error, _ = overlapping_matrix_factorization_coclustering(data, k, l)
nmi_score = normalized_mutual_info_score(labels_true, labels_pred)
rand_score = adjusted_rand_score(labels_true, labels_pred)
davies_score = davies_bouldin_score(data, labels_pred, calculate_centroids_doc_mean(data, labels_pred, k))
# if error < error_best:
# error_best = error
# nmi_score_best = nmi_score
# rand_score_best = rand_score
# davies_score_best = davies_score
# labels_pred_best = labels_pred
end_time = time.time()
print end_time - init_time
out_f.write(u'%s,%s,%s,%s,%s,%s\n' % (data_str, k, l, nmi_score, rand_score, davies_score))
print 'Execution: X: %s, k: %s, l: %s' % (data_str, k, l)
print 'Algo error: %s' % error
print 'NMI score: %s' % nmi_score
print 'Rand score: %s' % rand_score
print 'Davies score: %s' % davies_score
print '-----------------------------------------------\n'
```
```
%load_ext autoreload
%autoreload 2
import os
import pickle
from glob import glob
import re
from concurrent.futures import ProcessPoolExecutor, as_completed
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.metrics import pairwise_distances
import settings as conf
output_dir = os.path.join(conf.DELIVERABLES_DIR, 'roc_validation', 'classifier_tables')
os.makedirs(output_dir, exist_ok=True)
```
# Load gene mappings
```
with open(os.path.join(conf.GENES_METADATA_DIR, 'genes_mapping_simplified-0.pkl'), 'rb') as f:
genes_mapping_0 = pickle.load(f)
with open(os.path.join(conf.GENES_METADATA_DIR, 'genes_mapping_simplified-1.pkl'), 'rb') as f:
genes_mapping_1 = pickle.load(f)
```
# Load MultiXcan associations
```
genes_associations_filename = os.path.join(conf.GENE_ASSOC_DIR, 'smultixcan-mashr-zscores.pkl.xz')
display(genes_associations_filename)
genes_associations = pd.read_pickle(genes_associations_filename)
display(genes_associations.shape)
display(genes_associations.head())
```
# Load PheWAS catalog
```
phewas_catalog = pd.read_csv(os.path.join(conf.DATA_DIR, 'phewas-catalog.csv.gz'), dtype={'phewas code': str})
phewas_catalog.shape
phewas_catalog[phewas_catalog['phewas code'].isna()].head()
phewas_catalog[phewas_catalog['gene_name'].isna()].head()
phewas_catalog[phewas_catalog['gene_name'].isna()].shape
phewas_catalog = phewas_catalog.dropna(subset=['gene_name', 'phewas code'])
phewas_catalog.shape
phewas_catalog['gene_name'].unique().shape
phewas_catalog['phewas code'].unique().shape
phewas_catalog = phewas_catalog.assign(gene_id=phewas_catalog['gene_name'].apply(lambda x: genes_mapping_1[x] if x in genes_mapping_1 else None))
phewas_catalog = phewas_catalog.dropna(subset=['gene_name', 'gene_id', 'phewas code'])
phewas_catalog.shape
phewas_catalog.head()
phewas_catalog.sort_values('phewas phenotype').head()
```
# Genes in common
```
shared_gene_ids = \
set(phewas_catalog['gene_id'].values)\
.intersection(genes_associations.index)
len(shared_gene_ids)
```
# HPO to MIM
```
hpo_to_mim = pd.read_csv(os.path.join(conf.DATA_DIR, 'hpo-to-omim-and-phecode.csv.gz'), dtype={'phecode': str})
hpo_to_mim.shape
hpo_to_mim.head()
```
# Load silver standard to map from UKB to MIM
```
omim_silver_standard = pd.read_csv(os.path.join(conf.DATA_DIR, 'omim_silver_standard.tsv'), sep='\t')
ukb_to_mim_map = omim_silver_standard[['trait', 'pheno_mim']].dropna()
ukb_to_mim_map.shape
ukb_to_mim_map.head()
```
# Read gwas2gene results
```
from glob import glob
import rpy2.robjects as robjects
from rpy2.robjects import pandas2ri
pandas2ri.activate()
readRDS = robjects.r['readRDS']
f_files = glob(os.path.join(conf.OMIM_SILVER_STANDARD_GWAS_TO_GENE_DIR, '*.rds'))
display(len(f_files))
if len(f_files) != len(omim_silver_standard['trait'].unique()):
print(f'WARNING: some files are not there. {len(omim_silver_standard.trait.unique())} expected, {len(f_files)} found.')
gwas2genes_results = {}
for f in f_files:
f_base = os.path.basename(f)
f_code = f_base.split('.')[0]
#print(f_base)
rds_contents = readRDS(f)
if len(rds_contents[1]) > 0:
f_gene_list = list(rds_contents[1][0].iter_labels())
else:
print(f'{f_code}: empty')
f_gene_list = []
gwas2genes_results[f_code] = f_gene_list
gwas2gene_all_genes = []
for k in gwas2genes_results.keys():
gwas2gene_all_genes.extend(gwas2genes_results[k])
display(len(gwas2gene_all_genes))
gwas2gene_all_genes = set(gwas2gene_all_genes)
display(len(gwas2gene_all_genes))
gwas2gene_all_genes = shared_gene_ids.intersection(gwas2gene_all_genes)
display(len(gwas2gene_all_genes))
pd.Series(list(gwas2gene_all_genes)).head()
```
# Universe
```
from entity import Trait
_ukb_traits = []
_ukb_traits_phecodes = []
_ukb_gene_available = []
for t in ukb_to_mim_map['trait'].unique():
t_code = Trait(full_code=t).code
if t_code not in gwas2genes_results:
print(t_code)
continue
for g in gwas2genes_results[t_code]:
_ukb_traits.append(t)
_ukb_gene_available.append(g)
df = pd.DataFrame({'trait': _ukb_traits, 'gene': _ukb_gene_available})
df.shape
df.drop_duplicates().shape
df.head()
```
# Add MIM/Phecode
```
# add mim
_tmp = pd.merge(df, ukb_to_mim_map, on='trait', how='inner')
display(_tmp.shape)
display(_tmp.head())
_tmp[_tmp['pheno_mim'].isna()].shape
# mim to phecode
_tmp = pd.merge(_tmp, hpo_to_mim[['phecode', 'dID']].dropna(), left_on='pheno_mim', right_on='dID', how='inner').drop(columns=['dID'])
display(_tmp.shape)
display(_tmp.head())
_tmp[_tmp['phecode'].isna()].shape
_tmp.head()
# phecode to phewas catalog
_tmp = pd.merge(_tmp, phewas_catalog[['phewas code', 'gene_id']],
left_on=['phecode', 'gene'], right_on=['phewas code', 'gene_id'],
how='left').drop(columns=['phewas code'])
display(_tmp.shape)
_tmp.head()
_tmp[_tmp['gene_id'].isna()].shape
_tmp = _tmp.drop_duplicates(subset=['trait', 'gene', 'gene_id'])
_tmp.shape
_tmp.head(30)
_tmp[_tmp['gene_id'].isna()].shape
def _assign_true_class(x):
tc = ~pd.isnull(x['gene_id'])
idx = [0]
if tc.shape[0] > 1 and tc.any():
idx = np.where(tc)[0]
return pd.Series({
'pheno_mim': ', '.join(x.iloc[idx]['pheno_mim'].astype(str)),
'phecode': ', '.join(x.iloc[idx]['phecode'].astype(str)),
'true_class': int(tc.any()),
})
_tmp2 = _tmp.groupby(['trait', 'gene']).apply(_assign_true_class)
_tmp2.shape
_tmp2.head()
assert not _tmp2.loc['M41-Diagnoses_main_ICD10_M41_Scoliosis', 'ENSG00000012504']['true_class']
assert not _tmp2.loc['M41-Diagnoses_main_ICD10_M41_Scoliosis', 'ENSG00000141665']['true_class']
assert _tmp2.loc['M41-Diagnoses_main_ICD10_M41_Scoliosis', 'ENSG00000112137']['true_class']
_tmp2['true_class'].value_counts()
18632 / 20837
2205 / 20837
```
### Add score
```
_genes_unstacked = genes_associations.unstack()
_genes_unstacked.shape
_genes_unstacked.head()
classifier_table = _tmp2.assign(score=_genes_unstacked)
classifier_table.shape
classifier_table.head()
classifier_table[classifier_table['score'].isna()].shape
classifier_table = classifier_table.dropna(subset=['score'])
classifier_table.shape
N_TESTS = classifier_table.reset_index().drop_duplicates(subset=['trait', 'gene']).shape[0]
display(N_TESTS)
PVALUE_THRESHOLD = (0.05 / (N_TESTS))
display(PVALUE_THRESHOLD)
ZSCORE_THRESHOLD = np.abs(stats.norm.ppf(PVALUE_THRESHOLD / 2))
display(ZSCORE_THRESHOLD)
classifier_table = classifier_table.assign(predicted_class=(classifier_table['score'] > ZSCORE_THRESHOLD).astype(int))
classifier_table.head()
classifier_table['true_class'].value_counts()
classifier_table['true_class'].value_counts().sum()
```
# Save classifier table
```
classifier_table = classifier_table.sort_index()
assert classifier_table.index.is_unique
classifier_table.head()
classifier_table.shape
classifier_table.to_csv(
os.path.join(output_dir, 'smultixcan-mashr-classifier_data-phewas_catalog.tsv.gz'),
sep='\t', index=False
)
```
# Movie Ratings Network
This notebook is used to create the movie networks based on the ratings. It uses the same approach as the one suggested in [[1](https://arxiv.org/pdf/1408.1717.pdf)].
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
DATA_PATH = '../data/ml-100k-convert/'
GENERATED_PATH = '../generated/'
RESULT_PATH = '../results/'
# Load data
ratings = pd.read_csv(DATA_PATH+'data.tsv', sep='\t', names=['UserId', 'MovieId', 'Ratings', 'Timestamp'])
print(ratings.shape)
ratings.head()
# Load the movie id
movies = pd.read_csv(GENERATED_PATH+'final_movies.csv')
movies_id = movies['ML-100k-convertId'].to_list()
ratings = ratings[ratings.MovieId.isin(movies_id)]
ratings
nb_users = len(ratings.UserId.unique())
nb_movies = len(ratings.MovieId.unique())
print('There are {} users'.format(nb_users))
print('There are {} movies'.format(nb_movies))
# Get rid of timestamp
ratings = ratings.drop(columns=['Timestamp'])
# Pivot the table, put 0 when there are no ratings
ratings = pd.pivot_table(ratings, index='UserId', columns='MovieId', values='Ratings', fill_value=0)
ratings
counts = ratings[ratings > 0].count(axis=1)
print("User with the smallest number of ratings :", counts.min())
print("User with the most ratings :", counts.max())
print("Average number of ratings per user:", counts.mean())
```
## Distance between movie
To compute the distance between two movies, we look at all ratings they received from the same people. We then compute the L2 norm of the rating differences given by each user who rated both movies, and normalize it by the square root of the number of such users. We use this as the distance between the movies.
$$
d_{m_{i j}}=\frac{\left\|\left[F_{m_{i}}-F_{m_{j}}\right]_{\Omega_{m_{i j}}}\right\|_{\ell_{2}}}{ \sqrt{\left|\Omega_{m_{i j}}\right|}}
$$
where $\Omega_{m_{i j}}=\Omega_{m_{i}} \cap \Omega_{m_{j}}$ is the set of users who rated both movies.
Note: we do not have to account for per-user rating bias, as we only compute differences between ratings coming from the same user.
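As a quick sanity check of the distance formula, here is a hypothetical toy computation in NumPy (the ratings are made up); it also shows how $\Omega_{m_{ij}}$ can be built with a boolean mask instead of an explicit loop:

```
import numpy as np

# Hypothetical ratings for two movies from four users; 0 means "not rated".
ratings_a = np.array([5, 3, 0, 4])
ratings_b = np.array([4, 0, 2, 2])

# Omega_{m_ij}: users who rated both movies (here, users 0 and 3)
mask = (ratings_a != 0) & (ratings_b != 0)
diff = ratings_a[mask] - ratings_b[mask]  # [1, 2]
distance = np.linalg.norm(diff, 2) / np.sqrt(mask.sum())
print(distance)  # sqrt(5) / sqrt(2) ~ 1.58
```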
```
def movie_distance(ratingsA, ratingsB):
diff_ratings = []
for x, y in zip(ratingsA, ratingsB):
if x != 0 and y != 0:
diff_ratings.append(x - y)
if len(diff_ratings) > 1:
return np.linalg.norm(diff_ratings, 2)/np.sqrt(len(diff_ratings))
else:
return np.inf
distance_matrix = np.ndarray(shape=(nb_movies, nb_movies))
for i, idx in enumerate(ratings.columns):
for j, idy in enumerate(ratings.columns):
distance_matrix[i, j] = movie_distance(ratings[idx].to_list(), ratings[idy].to_list())
distance_matrix[j, i] = distance_matrix[i, j]
len(distance_matrix[distance_matrix == 0])
```
## Adjacency matrix
To compute the adjacency matrix from the distance matrix we use a Gaussian kernel which is :
$$
w_{ij} = \text{exp}\left[ - \frac{d_{ij}^2}{\alpha} \right]
$$
if $d_{ij} < \epsilon$ and 0 otherwise.
Here we choose $\epsilon = 1.1$ and pick $\alpha$ so that the weight is already close to zero as $d_{ij} \rightarrow \epsilon$, as shown in the graph below.
```
dist = np.linspace(0, 4, 200)
alpha = 0.25
epsilon = 1.1
weight = np.exp(- dist ** 2 / alpha)
weight[dist > epsilon] = 0
plt.figure()
plt.plot(dist, weight)
plt.scatter(epsilon, 0.01, s=80, facecolors='none', edgecolors='r')
plt.title("Gaussian kernel used to compute adjacency matrix")
plt.xlabel("Distance")
plt.ylabel("Weight")
plt.savefig(RESULT_PATH+'gaussian_kernel.eps')
plt.show()
def epsilon_similarity_graph(distances: np.ndarray, alpha=1, epsilon=0):
""" X (n x n): distance matrix
alpha (float): width of the kernel
epsilon (float): threshold
Return:
adjacency (n x n ndarray): adjacency matrix of the graph.
"""
X = distances.copy()
X[X > epsilon] = np.inf
adjacency = np.exp( - X ** 2 / alpha)
np.fill_diagonal(adjacency, 0)
return adjacency
adjacency = epsilon_similarity_graph(distance_matrix, alpha=alpha, epsilon=epsilon)
plt.spy(adjacency)
plt.show()
np.savetxt(GENERATED_PATH+'movie_ratings_adj.csv', adjacency, delimiter=',')
ratings.to_csv(GENERATED_PATH+'ratings_matrix.csv')
import networkx as nx
graph = nx.from_numpy_array(adjacency)
nx.write_gexf(graph, RESULT_PATH+'gexf/movie_ratings.gexf')
```
# Setup
## Imports
```
import os.path
from glob import glob
from tqdm import tqdm_notebook
from sklearn.metrics import confusion_matrix
from vaiutils import path_consts, smooth_plot, plot_images
from vaidata import pickle_load, pickle_dump
from keras.preprocessing.text import Tokenizer
from keras.utils.np_utils import to_categorical
import numpy as np
from numpy import argmin, mean
from numpy.random import randint
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
```
## Define Useful Variables and Functions
```
for k, v in path_consts('NameLang', 'NameLang'):
exec(k + '=v')
def get_name(name_idx):
if type(name_idx[0]) is list:
return [get_name(n) for n in name_idx]
return ''.join([idx_char[i] for i in name_idx])
def save_model():
torch.save(model.state_dict(), os.path.join(DIR_CHECKPOINTS, 'model.dat'))
pickle_dump(os.path.join(DIR_CHECKPOINTS, 'history.p'), history)
def load_model():
global history
if not os.path.exists(os.path.join(DIR_CHECKPOINTS, 'model.dat')):
return
model.load_state_dict(torch.load(os.path.join(DIR_CHECKPOINTS, 'model.dat')))
history = pickle_load(os.path.join(DIR_CHECKPOINTS, 'history.p'))
```
## Load Data
```
languages = []
names = []
for filename in glob(os.path.join(DIR_DATA, '*.txt')):
languages.append(os.path.split(filename)[1][:-4])
with open(filename) as f:
names.append(f.readlines())
for i in range(len(names[-1])):
names[-1][i] = names[-1][i].split()[0]
name_lengths = np.array([len(name) for name in names])
plt.bar(np.arange(len(languages)), name_lengths)
plt.xticks(np.arange(len(languages)), languages, rotation='vertical')
plt.show()
for i in [3, 7, 14, 16]:
names[i] = list(np.array(names[i])[randint(0, len(names[i]), 1000)])
name_freq = 1 / name_lengths
name_freq = name_freq / name_freq.sum()
all_names = ''
for name, language in zip(names, languages):
all_names += ' '.join(name)
print(language, np.array(name)[randint(0, len(name), 5)])
tokenizer = Tokenizer(char_level=True)
tokenizer.fit_on_texts(all_names)
char_idx = tokenizer.word_index
idx_char = {v: k for k, v in char_idx.items()}
vocab_size = len(char_idx)
names = [tokenizer.texts_to_sequences(name) for name in names]
names = [[np.array(n) for n in name] for name in names]
data = []
for i, name in enumerate(names):
data += [(n, i) for n in name]
FRAC_TEST = 0.2
test_idx = sorted(randint(0, len(data), int(FRAC_TEST * len(data))))
train_idx = sorted(np.array(list(set(list(range(len(data)))) - set(test_idx))))
data_test = np.array(data)[test_idx]
data = np.array(data)[train_idx]
if os.path.exists(os.path.join(DIR_CHECKPOINTS, 'tokenizer.p')):
tokenizer = pickle_load(os.path.join(DIR_CHECKPOINTS, 'tokenizer.p'))
data = pickle_load(os.path.join(DIR_CHECKPOINTS, 'data.p'))
data_test = pickle_load(os.path.join(DIR_CHECKPOINTS, 'data_test.p'))
else:
pickle_dump(os.path.join(DIR_CHECKPOINTS, 'tokenizer.p'), tokenizer)
pickle_dump(os.path.join(DIR_CHECKPOINTS, 'data.p'), data)
pickle_dump(os.path.join(DIR_CHECKPOINTS, 'data_test.p'), data_test)
```
# Create Network
```
class SimpleRNN(nn.Module):
def __init__(self, input_size, hidden_size, initial_hidden=None, return_sequences=False):
super().__init__()
if initial_hidden is None:
self.initial_hidden = Variable(torch.zeros(hidden_size)).cuda()
else:
self.initial_hidden = Variable(initial_hidden).cuda()
self.ih = nn.Linear(input_size, hidden_size)
self.hh = nn.Linear(hidden_size, hidden_size)
self.return_sequences = return_sequences
def forward(self, x):
if not self.return_sequences:
h = self.initial_hidden
for x_t in x:
h = F.tanh(self.ih(x_t) + self.hh(h))
return torch.unsqueeze(h, 0)
h_t = [self.initial_hidden]
for x_t in x:
h_t.append(F.tanh(self.ih(x_t) + self.hh(h_t[-1])))
return torch.stack(h_t[1:])
class LangFinder(nn.Module):
def __init__(self, embedding_size, hidden_size, output_size):
super().__init__()
self.embedding = nn.Embedding(vocab_size + 1, embedding_size)
self.rnn = nn.RNN(embedding_size, hidden_size, batch_first=True)
self.fc = nn.Linear(hidden_size, output_size)
def forward(self, x):
x = self.embedding(torch.unsqueeze(x, 0))
x = self.rnn(x)[1].squeeze(0)
return self.fc(x)
```
# Train Model
```
model = LangFinder(32, 128, len(languages)).cuda()
optimizer = optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss(weight=torch.FloatTensor(name_freq).cuda())
def get_loss(data):
total_loss = 0
total_correct = 0
for name, language in data:
y_true = Variable(torch.LongTensor(np.array([language]))).cuda()
y_pred = model(Variable(torch.LongTensor(name)).cuda())
loss = criterion(y_pred, y_true)
total_loss += loss.data.cpu().numpy()[0]
        total_correct += int(y_pred.max(1)[1].data.cpu().numpy()[0] == language)
return total_loss / len(data), total_correct / len(data)
history = {'loss': [], 'acc': [], 'test_loss':[], 'test_acc': []}
batches_per_epoch = len(data)
def optimize(epochs=1, writes_per_epoch=10):
running_history = {'loss': [], 'acc': []}
load_model()
for epoch in tqdm_notebook(range(epochs)):
prog_bar = tqdm_notebook(data[np.random.permutation(len(data))])
for batch, (name, language) in enumerate(prog_bar):
y_true = Variable(torch.LongTensor(np.array([language]))).cuda()
y_pred = model(Variable(torch.LongTensor(name)).cuda())
loss = criterion(y_pred, y_true)
optimizer.zero_grad()
loss.backward()
optimizer.step()
running_history['loss'].append(loss.data.cpu().numpy()[0])
            running_history['acc'].append(int(y_pred.max(1)[1].data.cpu().numpy()[0] == language))
if batch % int(batches_per_epoch / (writes_per_epoch - 1)) == 0:
for k in history.keys():
if k not in running_history.keys():
continue
history[k].append(mean(running_history[k]))
running_history[k] = []
prog_bar.set_description('{:.2f}'.format(history['loss'][-1]))
test_loss, test_acc = get_loss(data_test)
history['test_loss'].append(test_loss)
history['test_acc'].append(test_acc)
if argmin(history['test_loss']) == len(history['test_loss']) - 1:
save_model()
optimize(50)
smooth_plot(history, keys=['loss', 'test_loss'], remove_outlier=False)
smooth_plot(history, keys=['acc', 'test_acc'], remove_outlier=True)
```
## Test Model
```
load_model()
for _ in range(10):
sample_name, real_class = data_test[randint(len(data_test))]
sample_class = model(Variable(torch.LongTensor(sample_name).cuda()))
print("{} ({}) {}-{}%".format(get_name(sample_name), languages[real_class],
languages[sample_class.max(
1)[1].data.cpu().numpy()[0]],
int(F.softmax(sample_class).max(1)[0].data.cpu().numpy()[0] * 100)))
def plot_confusion_matrix(data):
y_true = np.array([d[1] for d in data])
y_pred = np.array([model(Variable(torch.LongTensor(d[0]).cuda(), volatile=True)).data.cpu().numpy()[0].argmax() for d in tqdm_notebook(data)])
sample_weight = np.array([name_freq[y] for y in y_true])
cm = confusion_matrix(y_true, y_pred, sample_weight=sample_weight)
plot_images([cm], cmap='gray', flags='retain', pixel_range='auto')
plt.xticks(np.arange(len(languages)), languages, rotation='vertical')
plt.yticks(np.arange(len(languages)), languages, rotation='horizontal')
plt.grid(False)
plt.show()
plot_confusion_matrix(data_test)
def predict(name):
name_array = tokenizer.texts_to_sequences(name)
name_array = np.array([n[0] for n in name_array])
return languages[model(Variable(torch.LongTensor(name_array).cuda(), volatile=True)).data.cpu().numpy()[0].argmax()]
for name in ['Schmidhuber', 'Hinton', 'Bengio', 'Srivastava', 'Vaisakh']:
print(name, '-', predict(name))
```
# Models and Maps
## Models
Let's again consider the car dataset from second notebook.
In that notebook we plotted *qsec* as a function of *hp*. However, we might be interested in a better model. Let's load the data.
```
library(tidyverse)
data(mtcars)
mtcars_tbl <- as_tibble(rownames_to_column(mtcars,var='model'))
str(mtcars_tbl)
```
Now let's fit three different linear models with `lm` from the `stats` package [[lm]](https://www.rdocumentation.org/packages/stats/versions/3.4.3/topics/lm).
The first model will be `qsec ~ wt` and the second `qsec ~ hp`. Let's combine both of these effects into a third model, `qsec ~ hp + wt`.
`summary` will show a summary of each model.
```
lm1_model <- function(data) lm(qsec ~ wt, data=data)
lm2_model <- function(data) lm(qsec ~ hp, data=data)
lm3_model <- function(data) lm(qsec ~ hp + wt, data=data)
summary(lm1_model(mtcars_tbl))
summary(lm2_model(mtcars_tbl))
summary(lm3_model(mtcars_tbl))
library(modelr)
iris_tbl <- as_tibble(iris)
iris_lm <- lm(Petal.Width ~ Petal.Length, data=iris_tbl)
iris_lm_data <- iris_tbl %>%
    data_grid(Petal.Length) %>%
    add_predictions(iris_lm)
iris_tbl %>%
    add_predictions(iris_lm) %>%
    ggplot(aes(x=Petal.Length)) +
    geom_point(aes(y=Petal.Width), shape=1) +
    geom_line(aes(y=pred))
```
One can add an arbitrary number of terms to these models. There are plenty of other model types in R's libraries that one might want to use.
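For comparison outside R (a sketch with made-up numbers, not the real mtcars values): a multi-term model such as `qsec ~ hp + wt` corresponds to an ordinary least-squares fit against a design matrix with an intercept column plus one column per term, e.g. in NumPy:

```python
import numpy as np

# Hypothetical stand-ins for hp, wt and qsec -- not the actual mtcars values.
hp = np.array([110.0, 93.0, 175.0, 62.0, 150.0, 66.0])
wt = np.array([2.62, 2.32, 3.44, 3.19, 3.57, 2.20])
qsec = np.array([16.5, 18.6, 17.0, 20.0, 15.8, 19.5])

# Design matrix for qsec ~ hp + wt: intercept, hp and wt columns.
X = np.column_stack([np.ones_like(hp), hp, wt])
coef, residuals, rank, _ = np.linalg.lstsq(X, qsec, rcond=None)
print(coef)  # fitted intercept, hp coefficient, wt coefficient
```

Adding another term to the model is just adding another column to `X`.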
## Nesting
Let's say we want to calculate the same models for each group defined by the number of cylinders (`cyl`).
This means we need to do iteration over the groups and for this to work, we should split the data into chunks that will be iterated over.
To do this we can use the `nest`-function ([[nest]](http://tidyr.tidyverse.org/reference/nest.html)).
```
mtcars_nested <- mtcars_tbl %>%
# Convert cyl into a factor
mutate_at(vars(cyl),as.factor) %>%
# Group by cyl
group_by(cyl) %>%
# Nest the data
nest()
print(mtcars_nested)
```
This produces a `tibble` where all data is stored in a column of type `list` named *data*.
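For readers more at home in Python, a rough pandas analogue of `nest()` (a sketch with made-up values, not the real mtcars data) is a mapping from group key to per-group data frame:

```python
import pandas as pd

# Hypothetical miniature version of mtcars.
mtcars = pd.DataFrame({
    "cyl":  [4, 4, 6, 6, 8],
    "qsec": [18.6, 20.0, 16.5, 17.0, 15.8],
    "hp":   [93, 62, 110, 105, 150],
})

# One sub-frame per cylinder count, keyed by group -- the analogue of nest().
nested = {cyl: grp.drop(columns="cyl") for cyl, grp in mtcars.groupby("cyl")}
for cyl, grp in nested.items():
    print(cyl, "->", len(grp), "rows")
```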
## Maps
### Example 1: running linear models on groups
Now that we have our list to iterate over, we can use `map` to do the iteration.
`map` is provided by the purrr-package. There are variants of it based on the return value of the used function.
In this case we receive the results for a model as strange S3-objects, so we want to use the `map`-function that creates a list from the outputs [[map]](http://purrr.tidyverse.org/reference/map.html).
```
# Map each data to model, pipe resulting fits to summary-function
map(mtcars_nested$data,lm3_model) %>%
map(summary)
```
A more *tidyverse*-style approach to using `map` is to combine it with `mutate` to store the fits in new columns. This makes it easy to run multiple models and store their results.
```
mtcars_nested <- mtcars_nested %>%
mutate(
model1=map(data, lm1_model),
model2=map(data, lm2_model),
model3=map(data, lm3_model)
)
# Check structure
print(mtcars_nested)
```
Package `broom` comes with nice functions `tidy` and `glance` that can be used to obtain coefficients or tests of the models in nice tibbles [[broom vignette]](https://cran.r-project.org/web/packages/broom/vignettes/broom.html).
```
library(broom)
tidy(mtcars_nested$model3[[1]])
glance(mtcars_nested$model3[[1]])
```
Let's use `tidy` to get the model parameters.
```
mtcars_nested <- mtcars_nested %>%
    mutate(
        model1_coefs=map(model1,tidy),
        model2_coefs=map(model2,tidy),
        model3_coefs=map(model3,tidy)
    )
print(mtcars_nested)
```
Let's limit ourselves to model no. 3, as that is the most interesting one, and use `unnest` to unnest the coefficients.
```
mtcars_model3 <- mtcars_nested %>%
select(cyl,model3_coefs) %>%
unnest(model3_coefs)
print(mtcars_model3)
```
### Example 2: Getting summaries of subgroups
Let's say we want to store statistics calculated from the `iris`-dataset with our data. Let's nest the data.
```
iris_nested <- as_tibble(iris) %>%
group_by(Species) %>%
nest()
print(iris_nested)
```
Now the data belonging to each species is stored in the *data* variable. We cannot, however, simply summarize the data, as the summary would not be computed against the `tibble`s stored in *data*. Instead we need to define a function that acts on the data itself, and use a map over the list in which the data-`tibble`s are stored.
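The same pattern sketched in Python/pandas terms (toy numbers, not the real iris measurements): write a function that acts on one group's frame, then map it over the groups:

```python
import pandas as pd

# Hypothetical miniature iris.
iris_like = pd.DataFrame({
    "Species":      ["setosa"] * 3 + ["virginica"] * 3,
    "Petal.Length": [1.4, 1.3, 1.5, 6.0, 5.1, 5.9],
    "Petal.Width":  [0.2, 0.2, 0.3, 2.5, 1.9, 2.1],
})

def petal_statistics(grp):
    # Acts on a single species' frame, like iris_statistics above.
    return pd.Series({
        "Petal.Length_mean": grp["Petal.Length"].mean(),
        "Petal.Width_mean":  grp["Petal.Width"].mean(),
        "Petal_cor":         grp["Petal.Length"].corr(grp["Petal.Width"]),
    })

stats = iris_like.groupby("Species")[["Petal.Length", "Petal.Width"]].apply(petal_statistics)
print(stats)
```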
```
iris_statistics <- function(tbl) {
return(as_tibble(tbl %>%
summarize(
Petal.Length_mean=mean(Petal.Length),
Petal.Width_mean=mean(Petal.Width),
Petal.Length_var=var(Petal.Length),
Petal.Width_var=var(Petal.Width),
Petal_cor=cor(Petal.Length,Petal.Width)))
)
}
as_tibble(iris) %>%
group_by(Species) %>%
iris_statistics()
```
On nested data the function is used with:
```
iris_nested <- iris_nested %>%
mutate(statistics=map(data,iris_statistics))
print(iris_nested)
```
Now our statistics are stored in the variable `statistics`. They are not that easy to access, though. Let's use `unnest` to reverse the nesting in the `statistics`-variable.
```
iris_nested <- iris_nested %>%
unnest(statistics)
print(iris_nested)
```
# Exercise time:
Do this exercise to `storms`-dataset initialized below that is a subset of NOAA Atlantic hurricane database [[storms]](http://dplyr.tidyverse.org/reference/storms.html).
1. Group the dataset based on `name`. Nest the data.
2. Use map to calculate the minimum pressure, maximum wind speed and maximum category for each storm. Store these in the object. Unnest them into variables.
3. Plot a scatterplot with x-axis showing minimum pressure, y-axis showing maximum wind speed and colour showing maximum category.
```
data(storms)
str(storms)
```
# Solutions:
## 1.
```
storms_nested <- storms %>%
mutate(name=as.factor(name)) %>%
group_by(name) %>%
nest()
print(storms_nested)
```
## 2.
```
storm_stats <- function(storm_data) {
    storm_data %>%
        summarize(min_pressure=min(pressure),max_wind=max(wind),max_category=max(category))
}
storms_nested <- storms_nested %>%
mutate(stats=map(data,storm_stats)) %>%
unnest(stats)
print(storms_nested)
```
## 3.
```
storms_nested %>%
ggplot(aes(x=min_pressure,y=max_wind,color=max_category)) +
geom_point() +
scale_x_reverse() +
labs(x='Minimum pressure [mbar]',y='Maximum windspeed [km/h]',color='Storm category')
```
```
import modin.pandas as pd
import nums
import nums.numpy as nps
nums.init()
```
# Preparation
### Load and preprocess dataset with Modin.
```
%%time
higgs_train = pd.read_csv("training.zip")
higgs_train.loc[higgs_train['Label'] == 'b', 'Label'] = 0
higgs_train.loc[higgs_train['Label'] == 's', 'Label'] = 1
higgs_train = higgs_train.drop(columns=['EventId'])
columns = higgs_train.columns.values
X_columns, y_column = columns[:-1], columns[-1:]
```
### Convert Modin DataFrame to NumS BlockArray.
```
%%time
X_train = nums.from_modin(higgs_train[X_columns].astype(float))
weights = X_train[:, -1]
X_train = X_train[:, :-1]
# Drop weight column from names.
X_columns = X_columns[:-1]
y_train = nums.from_modin(higgs_train[y_column].astype(int)).reshape(-1)
```
# Exploration
### Compute principal components of dataset.
```
%%time
# Compute PCA via SVD.
C = nps.cov(X_train, rowvar=False)
V, S, VT = nps.linalg.svd(C)
assert nps.allclose(V, VT.T)
pc = X_train @ V
```
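The same PCA-via-SVD recipe can be checked in plain NumPy, independent of NumS (a sketch on random data):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))          # toy data standing in for X_train

C = np.cov(X, rowvar=False)            # 5x5 feature covariance
V, S, VT = np.linalg.svd(C)            # C is symmetric PSD, so left/right vectors agree
assert np.allclose(V, VT.T)

pc = (X - X.mean(axis=0)) @ V          # project (centered) data onto principal axes

# Sanity check: principal components are uncorrelated, with variances S.
pc_cov = np.cov(pc, rowvar=False)
assert np.allclose(pc_cov, np.diag(S), atol=1e-8)
```

The columns of `V` are the principal axes, and projecting onto them decorrelates the features.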
### Compute eigenvalues from singular values, and explained variance from eigenvalues.
```
eigen_vals = S  # the SVD was applied to the covariance matrix, so its singular values are already its eigenvalues
explained_variance = eigen_vals / nps.sum(eigen_vals)
for i, val in enumerate(nps.cumsum(explained_variance).get()[:10]):
    print(i, val)
```
### Order features by the magnitude of their loadings on the first two principal components.
```
components = VT
sorted_variance = nps.argsort(-nps.sum(nps.abs(components[:2]), axis=0)).get()
for col in X_columns[sorted_variance]:
    print(col)
```
# Modelling
### Import scikit-learn models from nums.
```
from nums.sklearn import (train_test_split,
StandardScaler,
GaussianNB,
LogisticRegression,
SVC,
MLPClassifier,
GradientBoostingClassifier,
RandomForestClassifier)
print("Models imported.")
```
### Define the performance metric.
```
def metric(ytrue, ypred, weights):
    """Approximate Median Significance, defined as:
        AMS = sqrt( 2 { (s + b + b_r) log[1 + (s/(b+b_r))] - s } )
    where b_r = 10, b = background, s = signal, and log is the natural logarithm."""
    # Weighted sum of true positives (signal classified as signal).
    s = nps.sum(weights[(ytrue == 1) & (ytrue == ypred)])
    # Weighted sum of false positives (background misclassified as signal).
    b = nps.sum(weights[(ytrue == 0) & (ytrue != ypred)])
    br = 10.0
    radicand = 2 * ((s + b + br) * nps.log(1.0 + s / (b + br)) - s)
    return nps.sqrt(radicand)
print("Metric defined.")
```
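The AMS formula itself can be sanity-checked in plain Python with made-up signal/background weights (a sketch independent of NumS):

```python
import math

def ams(s, b, br=10.0):
    """Approximate Median Significance: sqrt(2[(s + b + br) ln(1 + s/(b + br)) - s])."""
    return math.sqrt(2 * ((s + b + br) * math.log(1.0 + s / (b + br)) - s))

# Toy weighted counts (not competition numbers): AMS grows with signal, shrinks with background.
print(round(ams(10.0, 100.0), 3))
assert ams(20.0, 100.0) > ams(10.0, 100.0)   # more signal -> higher significance
assert ams(10.0, 200.0) < ams(10.0, 100.0)   # more background -> lower significance
```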
### Conduct a search over a small set of feature sets, preprocessors, and models.
```
%%time
scores = []
for drop_features in [0, 3]:
    if drop_features > 0:
        feature_mask = nps.zeros(shape=sorted_variance.shape, dtype=bool)
        feature_mask[sorted_variance[:-drop_features]] = True
        Xt, Xv, yt, yv, wt, wv = train_test_split(X_train[:, feature_mask], y_train, weights)
    else:
        Xt, Xv, yt, yv, wt, wv = train_test_split(X_train, y_train, weights)
    numfeatstr = "num_feats=%s" % Xt.shape[1]
    for p_cls in [StandardScaler, None]:
        if p_cls is None:
            ppstr = "preproc=None"
            pXt = Xt
            pXv = Xv
        else:
            ppstr = "preproc=" + p_cls.__name__
            p_inst = p_cls()
            pXt = p_inst.fit_transform(Xt)
            pXv = p_inst.fit_transform(Xv)
        # Tree-based Ensemble Methods
        for n_estimators in [10]:
            for max_depth in [2]:
                for max_features in [None]:
                    m = RandomForestClassifier(n_estimators=n_estimators,
                                               max_depth=max_depth, max_features=max_features)
                    m.fit(pXt, yt)
                    scores.append([(numfeatstr + ", " + ppstr + ", "
                                    + m.__class__.__name__
                                    + ("(%s, %s, %s)" % (n_estimators, max_depth, max_features))),
                                   (m.predict(pXv), yv, wv)])
                    for learning_rate in [.4]:
                        for subsample in [.9]:
                            m = GradientBoostingClassifier(n_estimators=n_estimators,
                                                           max_depth=max_depth,
                                                           max_features=max_features,
                                                           learning_rate=learning_rate,
                                                           subsample=subsample)
                            m.fit(pXt, yt)
                            scores.append([(numfeatstr + ", " + ppstr + ", "
                                            + m.__class__.__name__
                                            + ("(%s, %s, %s, %s, %s)" % (n_estimators, max_depth, max_features,
                                                                         learning_rate, subsample))),
                                           (m.predict(pXv), yv, wv)])
print("Training %s pipelines." % len(scores))
```
### Run performance metric and sort the pipelines by their performance.
```
%%time
results = []
for res in scores:
    results.append((res[0], metric(*res[1]).get()))
for res in sorted(results, key=lambda x: -x[-1]):
    print(*res)
```
# Sources
- https://www.kaggle.com/c/higgs-boson/code
- https://nycdatascience.com/blog/student-works/top2p-higgs-boson-machine-learning/
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.

# Use Azure Machine Learning Pipelines for batch prediction
In this tutorial, you use Azure Machine Learning service pipelines to run a batch scoring image classification job. The example job uses the pre-trained [Inception-V3](https://arxiv.org/abs/1512.00567) CNN (convolutional neural network) Tensorflow model to classify unlabeled images. Machine learning pipelines optimize your workflow with speed, portability, and reuse so you can focus on your expertise, machine learning, rather than on infrastructure and automation. After building and publishing a pipeline, you can configure a REST endpoint to enable triggering the pipeline from any HTTP library on any platform.
In this tutorial, you learn the following tasks:
> * Configure workspace and download sample data
> * Create data objects to fetch and output data
> * Download, prepare, and register the model to your workspace
> * Provision compute targets and create a scoring script
> * Use ParallelRunStep to do batch scoring
> * Build, run, and publish a pipeline
> * Enable a REST endpoint for the pipeline
If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning service](https://aka.ms/AMLFree) today.
## Prerequisites
* Complete the [setup tutorial](https://docs.microsoft.com/azure/machine-learning/service/tutorial-1st-experiment-sdk-setup) if you don't already have an Azure Machine Learning service workspace or notebook virtual machine.
* After you complete the setup tutorial, open the **tutorials/tutorial-pipeline-batch-scoring-classification.ipynb** notebook using the same notebook server.
This tutorial is also available on [GitHub](https://github.com/Azure/MachineLearningNotebooks/tree/master/tutorials) if you wish to run it in your own [local environment](how-to-configure-environment.md#local). Run `pip install azureml-sdk[notebooks] azureml-pipeline-core azureml-pipeline-steps pandas requests` to get the required packages.
## Configure workspace and create datastore
Create a workspace object from the existing workspace. A [Workspace](https://docs.microsoft.com/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py) is a class that accepts your Azure subscription and resource information. It also creates a cloud resource to monitor and track your model runs. `Workspace.from_config()` reads the file **config.json** and loads the authentication details into an object named `ws`. `ws` is used throughout the rest of the code in this tutorial.
```
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
from azureml.core import Workspace
ws = Workspace.from_config()
```
### Create a datastore for sample images
Get the ImageNet evaluation public data sample from the public blob container `sampledata` on the account `pipelinedata`. Calling `register_azure_blob_container()` makes the data available to the workspace under the name `images_datastore`. Then specify the workspace default datastore as the output datastore, which you use for scoring output in the pipeline.
```
from azureml.core.datastore import Datastore
batchscore_blob = Datastore.register_azure_blob_container(ws,
datastore_name="images_datastore",
container_name="sampledata",
account_name="pipelinedata",
overwrite=True)
def_data_store = ws.get_default_datastore()
```
## Create data objects
When building pipelines, `Dataset` objects are used for reading data from workspace datastores, and `PipelineData` objects are used for transferring intermediate data between pipeline steps.
This batch scoring example only uses one pipeline step, but in use-cases with multiple steps, the typical flow will include:
1. Using `Dataset` objects as **inputs** to fetch raw data, performing some transformations, then **outputting** a `PipelineData` object.
1. Use the previous step's `PipelineData` **output object** as an *input object*, repeated for subsequent steps.
For this scenario you create `Dataset` objects corresponding to the datastore directories for both the input images and the classification labels (y-test values). You also create a `PipelineData` object for the batch scoring output data.
```
from azureml.core.dataset import Dataset
from azureml.pipeline.core import PipelineData
input_images = Dataset.File.from_files((batchscore_blob, "batchscoring/images/"))
label_ds = Dataset.File.from_files((batchscore_blob, "batchscoring/labels/"))
output_dir = PipelineData(name="scores", datastore=def_data_store)
```
Next, we need to register the datasets with the workspace.
```
input_images = input_images.register(workspace=ws, name="input_images")
label_ds = label_ds.register(workspace=ws, name="label_ds", create_new_version=True)
```
## Download and register the model
Download the pre-trained Tensorflow model to use it for batch scoring in the pipeline. First create a local directory where you store the model, then download and extract it.
```
import os
import tarfile
import urllib.request
if not os.path.isdir("models"):
    os.mkdir("models")
response = urllib.request.urlretrieve("http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz", "model.tar.gz")
tar = tarfile.open("model.tar.gz", "r:gz")
tar.extractall("models")
```
Now you register the model to your workspace, which allows you to easily retrieve it in the pipeline process. In the `register()` static function, the `model_name` parameter is the key you use to locate your model throughout the SDK.
```
import shutil
from azureml.core.model import Model
# register downloaded model
model = Model.register(model_path="models/inception_v3.ckpt",
model_name="inception",
tags={"pretrained": "inception"},
description="Imagenet trained tensorflow inception",
workspace=ws)
# remove the downloaded dir after registration if you wish
shutil.rmtree("models")
```
## Create and attach remote compute target
Azure Machine Learning service pipelines cannot be run locally, and only run on cloud resources. Remote compute targets are reusable virtual compute environments where you run experiments and workflows. Run the following code to create a GPU-enabled [`AmlCompute`](https://docs.microsoft.com/python/api/azureml-core/azureml.core.compute.amlcompute.amlcompute?view=azure-ml-py) target, and attach it to your workspace. See the [conceptual article](https://docs.microsoft.com/azure/machine-learning/service/concept-compute-target) for more information on compute targets.
```
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.exceptions import ComputeTargetException
compute_name = "gpu-cluster"
# checks to see if compute target already exists in workspace, else create it
try:
    compute_target = ComputeTarget(workspace=ws, name=compute_name)
except ComputeTargetException:
    config = AmlCompute.provisioning_configuration(vm_size="STANDARD_NC6",
                                                   vm_priority="lowpriority",
                                                   min_nodes=0,
                                                   max_nodes=1)
    compute_target = ComputeTarget.create(workspace=ws, name=compute_name, provisioning_configuration=config)
    compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
```
## Write a scoring script
To do the scoring, you create a batch scoring script `batch_scoring.py`, and write it to the current directory. The script takes a minibatch of input images, applies the classification model, and outputs the predictions to a results file.
The script `batch_scoring.py` takes the following parameters, which get passed from the `ParallelRunStep` that you create later:
- `--model_name`: the name of the model being used
- `--labels_dir` : the directory path having the `labels.txt` file
The pipelines infrastructure uses the `ArgumentParser` class to pass parameters into pipeline steps. For example, in the code below the first argument `--model_name` is given the property identifier `model_name`. In the `main()` function, this property is accessed using `Model.get_model_path(args.model_name)`.
The pipeline in this tutorial only has one step and writes the output to a file, but for multi-step pipelines, you also use `ArgumentParser` to define a directory to write output data for input to subsequent steps. See the [notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb) for an example of passing data between multiple pipeline steps using the `ArgumentParser` design pattern.
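The tutorial does not reproduce `batch_scoring.py` here. As a sketch (not the actual tutorial script), its argument handling follows the standard `ArgumentParser` pattern for the two parameters listed above; the surrounding `init()`/`run()` structure is the usual `ParallelRunStep` entry-script convention:

```python
# batch_scoring.py -- illustrative sketch, not the full tutorial script.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Batch scoring arguments")
    parser.add_argument("--model_name", dest="model_name", required=True,
                        help="name of the registered model to load")
    parser.add_argument("--labels_dir", dest="labels_dir", required=True,
                        help="directory containing the labels.txt file")
    # ParallelRunStep may pass extra framework arguments; ignore unknown ones.
    args, _ = parser.parse_known_args(argv)
    return args

# Simulate the arguments the step would pass (hypothetical labels path).
args = parse_args(["--model_name", "inception", "--labels_dir", "/tmp/labels"])
print(args.model_name, args.labels_dir)
```

In the real script, `init()` would resolve the registered model via `Model.get_model_path(args.model_name)`, and `run(mini_batch)` would return one prediction row per input image.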
## Build and run the pipeline
Before running the pipeline, you create an object that defines the python environment and dependencies needed by your script `batch_scoring.py`. The main dependency required is Tensorflow, but you also install `azureml-core` and `azureml-dataset-runtime[fuse]` for background processes from the SDK. Create a `RunConfiguration` object using the dependencies, and also specify Docker and Docker-GPU support.
```
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.runconfig import DEFAULT_GPU_IMAGE
cd = CondaDependencies.create(pip_packages=["tensorflow-gpu==1.15.2",
"azureml-core", "azureml-dataset-runtime[fuse]"])
env = Environment(name="parallelenv")
env.python.conda_dependencies=cd
env.docker.base_image = DEFAULT_GPU_IMAGE
```
### Create the configuration to wrap the inference script
Create the configuration using the script, environment configuration, and parameters. Specify the compute target you already attached to your workspace as the target of execution of the script. We will use `ParallelRunConfig` to wrap the inference script; the `ParallelRunStep` created in the next section consumes this configuration.
```
from azureml.pipeline.steps import ParallelRunConfig
parallel_run_config = ParallelRunConfig(
environment=env,
entry_script="batch_scoring.py",
source_directory="scripts",
output_action="append_row",
append_row_file_name="parallel_run_step.txt",
mini_batch_size="20",
error_threshold=1,
compute_target=compute_target,
process_count_per_node=2,
node_count=1
)
```
### Create the pipeline step
A pipeline step is an object that encapsulates everything you need for running a pipeline including:
* environment and dependency settings
* the compute resource to run the pipeline on
* input and output data, and any custom parameters
* reference to a script or SDK-logic to run during the step
There are multiple classes that inherit from the parent class [`PipelineStep`](https://docs.microsoft.com/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunstep?view=azure-ml-py) to assist with building a step using certain frameworks and stacks. In this example, you use the [`ParallelRunStep`](https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallelrunstep?view=azure-ml-py) class to define your step logic using a scoring script.
An object reference in the `outputs` array becomes available as an **input** for a subsequent pipeline step, for scenarios where there is more than one step.
```
from azureml.pipeline.steps import ParallelRunStep
from datetime import datetime
parallel_step_name = "batchscoring-" + datetime.now().strftime("%Y%m%d%H%M")
label_config = label_ds.as_named_input("labels_input")
batch_score_step = ParallelRunStep(
name=parallel_step_name,
inputs=[input_images.as_named_input("input_images")],
output=output_dir,
arguments=["--model_name", "inception",
"--labels_dir", label_config],
side_inputs=[label_config],
parallel_run_config=parallel_run_config,
allow_reuse=False
)
```
For a list of all classes for different step types, see the [steps package](https://docs.microsoft.com/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py).
### Run the pipeline
Now you run the pipeline. First create a `Pipeline` object with your workspace reference and the pipeline step you created. The `steps` parameter is an array of steps, and in this case there is only one step for batch scoring. To build pipelines with multiple steps, you place the steps in order in this array.
Next use the `Experiment.submit()` function to submit the pipeline for execution. The `wait_for_completion` function will output logs during the pipeline build process, which allows you to see current progress.
Note: The first pipeline run takes roughly **15 minutes**, as all dependencies must be downloaded, a Docker image is created, and the Python environment is provisioned/created. Running it again takes significantly less time as those resources are reused. However, total run time depends on the workload of your scripts and processes running in each pipeline step.
```
from azureml.core import Experiment
from azureml.pipeline.core import Pipeline
pipeline = Pipeline(workspace=ws, steps=[batch_score_step])
pipeline_run = Experiment(ws, "batch_scoring").submit(pipeline)
# This will output information of the pipeline run, including the link to the details page of portal.
pipeline_run
# Wait the run for completion and show output log to console
pipeline_run.wait_for_completion(show_output=True)
```
### Download and review output
Run the following code to download the output file created from the `batch_scoring.py` script, then explore the scoring results.
```
import pandas as pd
import tempfile
batch_run = pipeline_run.find_step_run(batch_score_step.name)[0]
batch_output = batch_run.get_output_data(output_dir.name)
target_dir = tempfile.mkdtemp()
batch_output.download(local_path=target_dir)
result_file = os.path.join(target_dir, batch_output.path_on_datastore, parallel_run_config.append_row_file_name)
df = pd.read_csv(result_file, delimiter=":", header=None)
df.columns = ["Filename", "Prediction"]
print("Prediction has ", df.shape[0], " rows")
df.head(10)
```
## Publish and run from REST endpoint
Run the following code to publish the pipeline to your workspace. In your workspace in the portal, you can see metadata for the pipeline including run history and durations. You can also run the pipeline manually from the portal.
Additionally, publishing the pipeline enables a REST endpoint to rerun the pipeline from any HTTP library on any platform.
```
published_pipeline = pipeline_run.publish_pipeline(
name="Inception_v3_scoring", description="Batch scoring using Inception v3 model", version="1.0")
published_pipeline
```
To run the pipeline from the REST endpoint, you first need an OAuth2 Bearer-type authentication header. This example uses interactive authentication for illustration purposes, but for most production scenarios requiring automated or headless authentication, use service principal authentication as [described in this notebook](https://aka.ms/pl-restep-auth).
Service principal authentication involves creating an **App Registration** in **Azure Active Directory**, generating a client secret, and then granting your service principal **role access** to your machine learning workspace. You then use the [`ServicePrincipalAuthentication`](https://docs.microsoft.com/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py) class to manage your auth flow.
Both `InteractiveLoginAuthentication` and `ServicePrincipalAuthentication` inherit from `AbstractAuthentication`, and in both cases you use the `get_authentication_header()` function in the same way to fetch the header.
```
from azureml.core.authentication import InteractiveLoginAuthentication
interactive_auth = InteractiveLoginAuthentication()
auth_header = interactive_auth.get_authentication_header()
```
Get the REST url from the `endpoint` property of the published pipeline object. You can also find the REST url in your workspace in the portal. Build an HTTP POST request to the endpoint, specifying your authentication header. Additionally, add a JSON payload object with the experiment name and the `process_count_per_node` parameter. As a reminder, `process_count_per_node` is passed through to `ParallelRunStep` because it is defined as a `PipelineParameter` object in the step configuration.
Make the request to trigger the run. Access the `Id` key from the response dict to get the value of the run id.
```
import requests
rest_endpoint = published_pipeline.endpoint
response = requests.post(rest_endpoint,
headers=auth_header,
json={"ExperimentName": "batch_scoring",
"ParameterAssignments": {"process_count_per_node": 6}})
try:
    response.raise_for_status()
except Exception:
    raise Exception("Received bad response from the endpoint: {}\n"
                    "Response Code: {}\n"
                    "Headers: {}\n"
                    "Content: {}".format(rest_endpoint, response.status_code, response.headers, response.content))
run_id = response.json().get('Id')
print('Submitted pipeline run: ', run_id)
```
Use the run id to monitor the status of the new run. This will take another 10-15 min to run and will look similar to the previous pipeline run, so if you don't need to see another pipeline run, you can skip watching the full output.
```
from azureml.pipeline.core.run import PipelineRun
published_pipeline_run = PipelineRun(ws.experiments["batch_scoring"], run_id)
# Show detail information of the run
published_pipeline_run
```
## Clean up resources
Do not complete this section if you plan on running other Azure Machine Learning service tutorials.
### Stop the notebook VM
If you used a cloud notebook server, stop the VM when you are not using it to reduce cost.
1. In your workspace, select **Compute**.
1. Select the **Notebook VMs** tab in the compute page.
1. From the list, select the VM.
1. Select **Stop**.
1. When you're ready to use the server again, select **Start**.
### Delete everything
If you don't plan to use the resources you created, delete them, so you don't incur any charges.
1. In the Azure portal, select **Resource groups** on the far left.
1. From the list, select the resource group you created.
1. Select **Delete resource group**.
1. Enter the resource group name. Then select **Delete**.
You can also keep the resource group but delete a single workspace. Display the workspace properties and select **Delete**.
## Next steps
In this machine learning pipelines tutorial, you did the following tasks:
> * Built a pipeline with environment dependencies to run on a remote GPU compute resource
> * Created a scoring script to run batch predictions with a pre-trained Tensorflow model
> * Published a pipeline and enabled it to be run from a REST endpoint
See the [how-to](https://docs.microsoft.com/azure/machine-learning/service/how-to-create-your-first-pipeline?view=azure-devops) for additional detail on building pipelines with the machine learning SDK.
```
import pandas as pd
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import isolearn.io as isoio
import isolearn.keras as iso
import scipy.optimize as spopt
from scipy.stats import pearsonr
from analyze_random_mpra_logistic_regression_helpers import *
#Load plasmid data
plasmid_dict = isoio.load('../data/random_mpra_legacy/combined_library/processed_data_lifted/apa_plasmid_data_legacy')
df = plasmid_dict['plasmid_df']
#Filter data on sublibrary Alien2
keep_index = np.nonzero(df['library_index'] == 22)[0]
df = df.iloc[keep_index].copy().reset_index(drop=True)
#Filter on min read count
keep_index = np.nonzero(df['total_count'] >= 1)[0]
df = df.iloc[keep_index].copy().reset_index(drop=True)
print('n = ' + str(len(df)))
#Generate training and test set indexes
test_set_size=8000
plasmid_index = np.arange(len(df), dtype=int)  # np.int was removed in NumPy 1.24
train_index = plasmid_index[:-test_set_size]
test_index = plasmid_index[train_index.shape[0]:]
print('Training set size = ' + str(train_index.shape[0]))
print('Test set size = ' + str(test_index.shape[0]))
df = mask_constant_sequence_regions(df)
df = align_on_cse(df)
#Initialize hexamer count data generator (separated by USE, CSE and DSE regions)
hexamer_gens = {
gen_id : iso.DataGenerator(
idx,
{
'df' : df
},
batch_size=len(idx),
inputs = [
{
'id' : 'use',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: row['seq_var_aligned'][:46],
'encoder' : iso.NMerEncoder(n_mer_len=6, count_n_mers=True),
'sparse' : True,
'sparse_mode' : 'col'
},
{
'id' : 'cse',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: row['seq_var_aligned'][50:56],
'encoder' : iso.NMerEncoder(n_mer_len=6, count_n_mers=True),
'sparse' : True,
'sparse_mode' : 'col'
},
{
'id' : 'dse',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: row['seq_var_aligned'][59:99],
'encoder' : iso.NMerEncoder(n_mer_len=6, count_n_mers=True),
'sparse' : True,
'sparse_mode' : 'col'
},
{
'id' : 'fdse',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: row['seq_var_aligned'][99:],
'encoder' : iso.NMerEncoder(n_mer_len=6, count_n_mers=True),
'sparse' : True,
'sparse_mode' : 'col'
},
{
'id' : 'lib',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: row['library_index'],
'encoder' : iso.CategoricalEncoder(n_categories=36, categories=np.arange(36, dtype=int).tolist()),
'sparsify' : True
},
],
outputs = [
{
'id' : 'proximal_usage',
'source_type' : 'dataframe',
'source' : 'df',
'extractor' : lambda row, index: row['proximal_count'] / row['total_count'],
'transformer' : lambda t: t
}
],
randomizers = [],
shuffle = False,
) for gen_id, idx in [('train', train_index), ('test', test_index)]
}
#Generate hexamer occurrence count matrices and corresponding isoform proportions
[X_train_use, X_train_cse, X_train_dse, X_train_fdse, X_train_lib], y_train = hexamer_gens['train'][0]
y_train = y_train[0]
[X_test_use, X_test_cse, X_test_dse, X_test_fdse, X_test_lib], y_test = hexamer_gens['test'][0]
y_test = y_test[0]
#Concatenate hexamer count matrices
X_train = sp.csc_matrix(sp.hstack([X_train_lib, X_train_use, X_train_cse, X_train_dse, X_train_fdse]))
X_test = sp.csc_matrix(sp.hstack([X_test_lib, X_test_use, X_test_cse, X_test_dse, X_test_fdse]))
print("Starting logistic n-mer regression...")
w_init = np.zeros(X_train.shape[1] + 1)
lambda_penalty = 0
(w_bundle, _, _) = spopt.fmin_l_bfgs_b(log_loss, w_init, fprime=log_loss_gradient, args=(X_train, y_train, lambda_penalty), maxiter = 200)
print("Regression finished.")
#Collect weights
w_0 = w_bundle[0]
w_L = w_bundle[1:1 + 36]
w = w_bundle[1 + 36:]
#Store weights
data_version = 'simple'
model_version = '6mer_v_pasaligned_margin'
w_bundle_no_lib = np.concatenate([np.array([w_0]), w], axis=0)
np.save('apa_regression_' + model_version + '_' + data_version + '_weights', w_bundle)
stored_nmer_weights = {
'nmer' : [t[1] for t in sorted(hexamer_gens['train'].encoders['use'].encoder.decode_map.items(), key=lambda t: t[0])],
'use' : w[: 4096].tolist(),
'cse' : w[4096: 2 * 4096].tolist(),
'dse' : w[2 * 4096: 3 * 4096].tolist(),
'fdse' : w[3 * 4096: 4 * 4096].tolist(),
}
nmer_df = pd.DataFrame(stored_nmer_weights)
nmer_df = nmer_df[['nmer', 'use', 'cse', 'dse', 'fdse']]
nmer_df.to_csv('apa_regression_' + model_version + '_' + data_version + '_weights.csv', index=False, sep='\t')
#Load weights
#data_version = 'simple'
#model_version = '6mer_v_pasaligned_margin'
#w_bundle = np.load('apa_regression_' + model_version + '_' + data_version + '_weights.npy')
#Collect weights
#w_0 = w_bundle[0]
#w_L = w_bundle[1:1 + 36]
#w = w_bundle[1 + 36:]
#Evaluate isoform proportion predictions on test set
y_test_pred = get_y_pred(X_test, np.concatenate([w_L, w]), w_0)
r_val, p_val = pearsonr(y_test_pred, y_test)
print("Test set R^2 = " + str(round(r_val * r_val, 2)) + ", p = " + str(p_val) + ", n = " + str(X_test.shape[0]))
#Plot test set scatter
f = plt.figure(figsize=(5, 5))
plt.scatter(y_test_pred, y_test, color='black', s=5, alpha=0.05)
plt.xticks([0.0, 0.25, 0.5, 0.75, 1.0], fontsize=14)
plt.yticks([0.0, 0.25, 0.5, 0.75, 1.0], fontsize=14)
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.xlabel('Pred Proximal Usage', fontsize=14)
plt.ylabel('True Proximal Usage', fontsize=14)
plt.title(data_version + ' (R^2 = ' + str(round(r_val * r_val, 2)) + ', n = ' + str(X_test.shape[0]) + ')', fontsize=14)
plt.tight_layout()
plt.show()
#Evaluate isoform logodds predictions on test set
def safe_log(x, minval=0.01):
return np.log(x.clip(min=minval))
y_test_pred = get_y_pred(X_test, np.concatenate([w_L, w]), w_0)
#Compute Log Odds values
keep_index = (y_test < 0.99999)
y_test_valid = y_test[keep_index]
y_test_pred_valid = y_test_pred[keep_index]
logodds_test = np.ravel(safe_log(y_test_valid / (1. - y_test_valid)))
logodds_test_pred = np.ravel(safe_log(y_test_pred_valid / (1. - y_test_pred_valid)))
r_val, p_val = pearsonr(logodds_test_pred, logodds_test)
print("Test set R^2 = " + str(round(r_val * r_val, 2)) + ", p = " + str(p_val) + ", n = " + str(X_test.shape[0]))
#Plot test set scatter
f = plt.figure(figsize=(5, 5))
plt.scatter(logodds_test_pred, logodds_test, s = np.pi * (2 * np.ones(1))**2, alpha=0.05, color='black')
min_x = max(np.min(logodds_test_pred), np.min(logodds_test))
max_x = min(np.max(logodds_test_pred), np.max(logodds_test))
min_y = max(np.min(logodds_test_pred), np.min(logodds_test))
max_y = min(np.max(logodds_test_pred), np.max(logodds_test))
plt.plot([min_x, max_x], [min_y, max_y], alpha=0.5, color='darkblue', linewidth=3)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlabel('Pred Proximal Logodds', fontsize=14)
plt.ylabel('True Proximal Logodds', fontsize=14)
plt.axis([np.min(logodds_test_pred), np.max(logodds_test_pred), np.min(logodds_test), np.max(logodds_test)])
plt.title(data_version + ' (R^2 = ' + str(round(r_val * r_val, 2)) + ', n = ' + str(X_test.shape[0]) + ')', fontsize=14)
plt.tight_layout()
plt.show()
```
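The optimization above hands `log_loss`, `log_loss_gradient`, and `get_y_pred` to `fmin_l_bfgs_b`, but those helpers are defined earlier in the notebook and not shown in this excerpt. A minimal sketch consistent with how they are called here — an L2-penalized logistic loss whose first bundled weight is the intercept — might look like:

```python
import numpy as np
import scipy.sparse as sp

def get_y_pred(X, w, w_0):
    # Sigmoid of the affine score; X may be a scipy sparse matrix.
    z = np.ravel(X.dot(w)) + w_0
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(w_bundle, X, y, lambda_penalty):
    # Bernoulli negative log-likelihood with an L2 penalty on the weights.
    w_0, w = w_bundle[0], w_bundle[1:]
    y_pred = get_y_pred(X, w, w_0)
    eps = 1e-8  # guard against log(0)
    nll = -np.sum(y * np.log(y_pred + eps) + (1.0 - y) * np.log(1.0 - y_pred + eps))
    return nll + 0.5 * lambda_penalty * np.dot(w, w)

def log_loss_gradient(w_bundle, X, y, lambda_penalty):
    # Gradient w.r.t. [w_0, w]: the intercept accumulates the summed
    # residual; the weights accumulate X^T residual plus the penalty term.
    w_0, w = w_bundle[0], w_bundle[1:]
    residual = get_y_pred(X, w, w_0) - y
    grad_w = np.ravel(X.T.dot(residual)) + lambda_penalty * w
    return np.concatenate([[np.sum(residual)], grad_w])
```

A quick finite-difference check of `log_loss_gradient` against `log_loss` is a good sanity test before handing both functions to the L-BFGS optimizer.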
## Batched example 2
This notebook is the second in a series showing how [GSSHA_Workflow.ipynb](../GSSHA_Workflow.ipynb) can be parameterized at the command line, and it builds on [GSSHA_Workflow_Batched_Example1](GSSHA_Workflow_Batched_Example1.ipynb). It uses the same principles as the first example but makes the interface more user-friendly and Windows-compatible.
As in the first example, the two parameters we choose to expose are ``rain_intensity`` and ``rain_duration``. As before, the notebook is configured to use these parameters in three steps.
1. <a href="#Declare_nbparams">Declare the command line parameters</a>
2. <a href="#Display_nbparams">Display the notebook parameter widgets</a>
3. <a href="#Apply_nbparams">Apply notebook parameters and display</a>
Only the first step, <a href="#Declare_nbparams">Declare the command line parameters</a>, differs from the first example; the second step is identical, and the third differs only by a trivial variable name change.
The main improvement presented here is in the interface used to set parameters at the command line:
```bash
param -cmd 'jupyter nbconvert --execute GSSHA_Workflow_Batched_Example2.ipynb' -p rain_intensity=25 -p rain_duration=3600
```
As in the first example, an arbitrary command can be executed, but in this instance a nicer syntax is used to specify the desired parameters. The ``param`` command generates the appropriate environment variable and makes it available to the execution context in a cross-platform way, allowing this utility to be used on Windows.
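The mechanism can be illustrated with a small, hypothetical sketch (the actual `param`/`parambokeh` plumbing differs in detail): parameter values are serialized to JSON in an environment variable before the notebook is executed, and the notebook deserializes them when initializing its parameters. The variable name `PARAM_JSON_INIT` and the `NbParams` class below are illustrative only.

```python
import json
import os

# Producer side: what a `param -cmd ... -p name=value` wrapper might do.
os.environ['PARAM_JSON_INIT'] = json.dumps(
    {'rain_intensity': 25, 'rain_duration': 3600})

# Consumer side: what a JSONInit-style initializer might do in the notebook.
class NbParams:
    rain_intensity = 24  # defaults, possibly overridden from the environment
    rain_duration = 60

def apply_json_init(obj, env_var='PARAM_JSON_INIT'):
    # Overwrite attributes of obj with any values found in the environment.
    for name, value in json.loads(os.environ.get(env_var, '{}')).items():
        setattr(obj, name, value)
    return obj

nbparams = apply_json_init(NbParams())
print(nbparams.rain_intensity, nbparams.rain_duration)  # 25 3600
```

Because the values round-trip through an environment variable rather than shell-specific quoting, the same invocation works on both Windows and Unix — which is the point made above.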
```
from datetime import datetime, timedelta
import os
import glob
import param
import parambokeh
import numpy as np
import xarray as xr
import geoviews as gv
import holoviews as hv
import quest
import earthsim.gssha as esgssha
import earthsim.gssha.model as models
import cartopy.crs as ccrs
from earthsim.gssha import download_data, get_file_from_quest
from earthsim.gssha.model import UniformRoughness, CreateGSSHAModel
from holoviews.streams import PolyEdit, BoxEdit, PointDraw, CDSStream
from holoviews.operation.datashader import regrid, shade
from earthsim.io import save_shapefile, open_gssha, get_ccrs
regrid.aggregator = 'max'
hv.extension('bokeh')
%output holomap='scrubber' fps=2
rm -r ./vicksburg_south/
```
## Declare the command line parameters <a id="Declare_nbparams"></a>
As in the previous example, the ``rain_intensity`` and ``rain_duration`` of ``Simulation`` are exposed. The change here is that instead of explicitly defining the ``NotebookParams`` class, a helper function called ``global_params`` is used instead.
This utility makes the definition of notebook parameters more concise and readable. In addition to parameter objects, you can simply use literals for quick parameter definitions. For instance, the literal ``5`` is promoted to a ``param.Integer``, the literal ``6.2`` is promoted to a ``param.Number``, ``'example'`` is promoted to a ``param.String`` etc.
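The promotion rule can be sketched as follows (a hypothetical reimplementation for illustration, not earthsim's actual code); note that `bool` must be tested before `int`, since Python's `bool` is a subclass of `int`:

```python
def promote_literal(value):
    # Map plain literals to the Parameter class names described above.
    if isinstance(value, bool):
        return 'param.Boolean'
    if isinstance(value, int):
        return 'param.Integer'
    if isinstance(value, float):
        return 'param.Number'
    if isinstance(value, str):
        return 'param.String'
    return None  # non-literal values (e.g. Parameter objects) pass through

print(promote_literal(5))          # param.Integer
print(promote_literal(6.2))        # param.Number
print(promote_literal('example'))  # param.String
```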
```
from earthsim import parameters
nbparams = parameters(
rain_intensity = param.Number(default=24, bounds=(0,None), softbounds=(0,75)),
rain_duration = 60
)
```
Note that the literal specification is shorter and easier to read but lacks documentation and numeric bounds declarations. This may also result in less user-friendly widgets: ``rain_duration`` is displayed with a text box in the next code cell instead of a slider. Using literals to define notebook parameters is most appropriate for generating static HTML reports from the command line, where the widgets won't be used.
## Display the notebook parameter widgets <a id="Display_nbparams"></a>
This step makes the notebook parameters available to change at the start of the notebook, parameterizing the interactive workflow. In addition, using ``initializer=parambokeh.JSONInit()`` allows these parameters to be set from the command line.
```
parambokeh.Widgets(nbparams, initializer=parambokeh.JSONInit())
```
## Configure model parameters
```
model_creator = esgssha.CreateGSSHAModel(name='Vicksburg South Model Creator',
mask_shapefile='../../data/vicksburg_watershed/watershed_boundary.shp',
grid_cell_size=90)
parambokeh.Widgets(model_creator)
```
### Setting the parameters of the ``roughness``
```
model_creator.roughness = UniformRoughness()
parambokeh.Widgets(model_creator.roughness)
```
## Draw bounds to compute watershed
Allows drawing a bounding box and adding points to serve as input to compute a watershed:
```
%%opts Polygons [width=900 height=500] (fill_alpha=0 line_color='black')
%%opts Points (size=10 color='red')
tiles = gv.WMTS('http://c.tile.openstreetmap.org/{Z}/{X}/{Y}.png',
crs=ccrs.PlateCarree(), extents=(-91, 32.2, -90.8, 32.4))
box_poly = hv.Polygons([])
points = hv.Points([])
box_stream = BoxEdit(source=box_poly)
point_stream = PointDraw(source=points)
tiles * box_poly * points
if box_stream.element:
element = gv.operation.project(box_stream.element, projection=ccrs.PlateCarree())
xs, ys = element.array().T
bounds = (xs[0], ys[1], xs[2], ys[0])
print("BOUNDS", bounds)
if point_stream.element:
projected = gv.operation.project(point_stream.element, projection=ccrs.PlateCarree())
print("COORDINATE:", projected.iloc[0]['x'][0], projected.iloc[0]['y'][0])
```
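The `bounds` tuple above indexes specific corners of the projected box and therefore depends on the vertex ordering that ``BoxEdit`` produces. A vertex-order-independent way to form a `(left, bottom, right, top)` tuple from the corner coordinates (hypothetical values below) is:

```python
import numpy as np

# Hypothetical corner coordinates of a drawn box, in any vertex order.
xs = np.array([-91.0, -91.0, -90.8, -90.8])
ys = np.array([32.4, 32.2, 32.2, 32.4])

bounds = (xs.min(), ys.min(), xs.max(), ys.max())  # (left, bottom, right, top)
print(bounds)  # (-91.0, 32.2, -90.8, 32.4)
```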
## Inspect and edit shapefile
The plot below allows editing the shapefile using a set of tools. The controls for editing are as follows:
* Double-clicking the polygon displays its vertices
* After double-clicking, the point tool is selected and vertices can be dragged around
* Tapping a vertex selects it; tapping in a new location while a single vertex is selected inserts a new vertex there
* Multiple vertices can be selected by holding shift while tapping, or by using the box_select tool
* Once multiple vertices are selected, they can be deleted by selecting the point editing tool and pressing ``backspace``
```
%%opts Shape [width=900 height=500 tools=['box_select']] (alpha=0.5)
mask_shape = gv.Shape.from_shapefile(model_creator.mask_shapefile).last
tiles = gv.WMTS('http://c.tile.openstreetmap.org/{Z}/{X}/{Y}.png')
vertex_stream = PolyEdit(source=mask_shape)
tiles * mask_shape
```
If any edits were made to the polygon in the plot above, we save the ``watershed_boundary.shp`` back out and redisplay it to confirm our edits were applied correctly:
```
%%opts Shape [width=600 height=400] (alpha=0.5)
if vertex_stream.data:
edited_shape_fname = '../vicksburg_watershed_edited/watershed_boundary.shp'
dir_name = os.path.dirname(edited_shape_fname)
if not os.path.isdir(dir_name): os.makedirs(dir_name)
save_shapefile(vertex_stream.data, edited_shape_fname, model_creator.mask_shapefile)
model_creator.mask_shapefile = edited_shape_fname
mask_shape = gv.Shape.from_shapefile(edited_shape_fname).last
mask_shape = mask_shape.opts() # Clear options
mask_shape
```
## Configure simulation parameters
```
sim = esgssha.Simulation(name='Vicksburg South Simulation', simulation_duration=60*60,
rain_duration=30*60, model_creator=model_creator)
```
## Apply notebook parameters and display<a id="Apply_nbparams"></a>
This is the point at which the notebook parameters hook into the workflow. In this example, the two chosen parameters ``rain_duration`` and ``rain_intensity`` are simply set on ``sim``. In more complex examples, you may decide to compute the parameters set in the workflow as a function of the available notebook parameters.
```
sim.rain_duration = nbparams.rain_duration
sim.rain_intensity = nbparams.rain_intensity
parambokeh.Widgets(sim)
```
## Create the model
Note that the above code demonstrates how to collect user input, but it has not yet been connected to the remaining workflow, which uses code-based specification for the parameters.
```
if sim.model_creator.project_name not in quest.api.get_collections():
quest.api.new_collection(sim.model_creator.project_name)
parambokeh.Widgets(sim.model_creator)
# temporary workaround until workflow cleanup/parameterization is done
if sim.model_creator.project_name == 'test_philippines_small':
sim.model_creator.roughness = models.GriddedRoughnessTable(
land_use_grid=get_file_from_quest(sim.model_creator.project_name, sim.land_use_service, 'landuse', sim.model_creator.mask_shapefile),
land_use_to_roughness_table='../philippines_small/land_cover_glcf_modis.txt')
else:
sim.model_creator.roughness = models.GriddedRoughnessID(
land_use_grid=get_file_from_quest(sim.model_creator.project_name, sim.land_use_service, 'landuse', sim.model_creator.mask_shapefile),
land_use_grid_id=sim.land_use_grid_id)
sim.model_creator.elevation_grid_path = get_file_from_quest(sim.model_creator.project_name, sim.elevation_service, 'elevation', sim.model_creator.mask_shapefile)
model = sim.model_creator()
# add card for max depth
model.project_manager.setCard('FLOOD_GRID',
'{0}.fgd'.format(sim.model_creator.project_name),
add_quotes=True)
# Add time-based depth grids to simulation
"""
See: http://www.gsshawiki.com/Project_File:Output_Files_%E2%80%93_Required
Filename or folder to output MAP_TYPE maps of overland flow depth (m)
every MAP_FREQ minutes. If MAP_TYPE=0, then [value] is a folder name
and output files are called "value\depth.####.asc" **
"""
model.project_manager.setCard('DEPTH', '.', add_quotes=True)
model.project_manager.setCard('MAP_FREQ', '1')
# add event for simulation (optional)
"""
model.set_event(simulation_start=sim.simulation_start,
simulation_duration=timedelta(seconds=sim.simulation_duration),
rain_intensity=sim.rain_intensity,
rain_duration=timedelta(seconds=sim.rain_duration))
"""
# write to disk
model.write()
```
## Review model inputs
### Load inputs to the simulation
```
name = sim.model_creator.project_name
CRS = get_ccrs(os.path.join(name, name+'_prj.pro'))
roughness_arr = open_gssha(os.path.join(name,'roughness.idx'))
msk_arr = open_gssha(os.path.join(name, name+'.msk'))
ele_arr = open_gssha(os.path.join(name, name+'.ele'))
roughness = gv.Image(roughness_arr, crs=CRS, label='roughness.idx')
mask = gv.Image(msk_arr, crs=CRS, label='vicksburg_south.msk')
ele = gv.Image(ele_arr, crs=CRS, label='vicksburg_south.ele')
```
#### Shapefile vs. Mask
```
tiles * regrid(mask) * mask_shape
```
#### Elevation
```
tiles * regrid(ele) * mask_shape
```
#### Roughness
```
tiles * regrid(roughness) * mask_shape
```
# Run Simulation
```
from gsshapy.modeling import GSSHAFramework
# TODO: how does the info here relate to that set earlier?
# TODO: understand comment below
# assuming notebook is run from examples folder
project_path = os.path.join(sim.model_creator.project_base_directory, sim.model_creator.project_name)
gr = GSSHAFramework("gssha",
project_path,
"{0}.prj".format(sim.model_creator.project_name),
gssha_simulation_start=sim.simulation_start,
gssha_simulation_duration=timedelta(seconds=sim.simulation_duration),
# load_simulation_datetime=True, # use this if already set datetime params in project file
)
# http://www.gsshawiki.com/Model_Construction:Defining_a_uniform_precipitation_event
gr.event_manager.add_uniform_precip_event(sim.rain_intensity,
timedelta(seconds=sim.rain_duration))
gssha_event_directory = gr.run()
```
# Visualizing the outputs
### Load and visualize depths over time
```
depth_nc = os.path.join(gssha_event_directory, 'depths.nc')
if not os.path.isfile(depth_nc):
# Load depth data files
depth_map = hv.HoloMap(kdims=['Minute'])
for fname in glob.glob(os.path.join(gssha_event_directory, 'depth.*.asc')):
depth_arr = open_gssha(fname)
minute = int(fname.split('.')[-2])
# NOTE: Due to precision issues not all empty cells match the NaN value properly, fix later
depth_arr.data[depth_arr.data == depth_arr.data[0, 0]] = np.nan
depth_map[minute] = hv.Image(depth_arr)
# Convert data to an xarray and save as NetCDF
arrays = []
for minute, img in depth_map.items():
ds = hv.Dataset(img)
arr = ds.data.z.assign_coords(minute=minute)
arrays.append(arr)
depths = xr.concat(arrays, 'minute')
depths.to_netcdf(depth_nc)
else:
depths = xr.open_dataset(depth_nc)
depth_ds = hv.Dataset(depths)
depth_ds.data
```
Now that we have a Dataset of depths we can convert it to a series of Images.
```
%%opts Image [width=600 height=400 logz=True xaxis=None yaxis=None] (cmap='viridis') Histogram {+framewise}
regrid(depth_ds.to(hv.Image, ['x', 'y'])).redim.range(z=(0, 0.04)).hist(bin_range=(0, 0.04))
```
We can also lay out the plots over time to allow for easier comparison.
```
%%opts Image [width=300 height=300 logz=True xaxis=None yaxis=None] (cmap='viridis')
regrid(depth_ds.select(minute=range(10, 70, 10)).to(hv.Image, ['x', 'y']).redim.range(z=(0, 0.04))).layout().cols(3)
```
### Flood Grid Depth
(Maximum flood depth over the course of the simulation)
```
%%opts Image [width=600 height=400] (cmap='viridis')
fgd_arr = open_gssha(os.path.join(gssha_event_directory,'{0}.fgd'.format(sim.model_creator.project_name)))
fgd = gv.Image(fgd_arr, crs=CRS, label='vicksburg_south.fgd').redim.range(z=(0, 0.04))
regrid(fgd, streams=[hv.streams.RangeXY]).redim.range(z=(0, 0.04))
```
### Analyzing the simulation speed
```
%%opts Spikes [width=600]
times = np.array([os.path.getmtime(f) for f in glob.glob(os.path.join(gssha_event_directory, 'depth*.asc'))] )
minutes = (times-times[0])/60
hv.Spikes(minutes, kdims=['Real Time (minutes)'], label='Time elapsed for each minute of simulation time') +\
hv.Curve(np.diff(minutes), kdims=['Simulation Time (min)'], vdims=[('runtime', 'Runtime per minute simulation time')]).redim.range(runtime=(0, None))
```
Here, if the "spikes" are regularly spaced, simulation time scales regularly with real time, and so you should be able to read off the approximate real time to expect per unit of simulation time.
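The underlying arithmetic is just differences of file modification times; with hypothetical timestamps:

```python
import numpy as np

# Hypothetical modification times (seconds) of six consecutive depth.*.asc files.
times = np.array([0.0, 70.0, 145.0, 230.0, 320.0, 430.0])

minutes = (times - times[0]) / 60.0  # elapsed real time (minutes) at each output
per_step = np.diff(minutes)          # real minutes spent per simulated minute
print(per_step)                      # growing values mean the simulation is slowing down
```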
# BlackHoles@Home Tutorial: Compiling the `BOINC` server on Linux
## Author: Leo Werneck
## This tutorial notebook demonstrates how to compile the `BOINC` server on Linux. It focuses specifically on [Ubuntu](https://ubuntu.com), but adapting the scripts to other Linux flavors should be straightforward
## Introduction:
The [BlackHoles@Home](http://blackholesathome.net/) project allows users to volunteer CPU time so that a large number of binary black hole simulations can be performed. The objective is to create a large catalog of [gravitational waveforms](https://en.wikipedia.org/wiki/Gravitational_wave), which can be used by observatories such as [LIGO](https://www.ligo.org), [VIRGO](https://www.virgo-gw.eu), and, in the future, [LISA](https://lisa.nasa.gov) to infer the source of a detected gravitational wave.
BlackHoles@Home is destined to run on the [BOINC](https://boinc.berkeley.edu) infrastructure (alongside [Einstein@Home](https://einsteinathome.org/) and [many other great projects](https://boinc.berkeley.edu/projects.php)), enabling anyone with a computer to contribute to the construction of the largest numerical relativity gravitational wave catalogs ever produced.
### Additional Reading Material:
* [BOINC's Wiki page](https://boinc.berkeley.edu/trac/wiki)
* [Debian's Wiki BOINC server guide](https://wiki.debian.org/BOINC/ServerGuide/Initialisation)
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This tutorial compiles the `BOINC` server. It will also install all needed dependencies. We also provide a script to set up the server correctly.
1. [Step 1](#loading_python_nrpy_modules): Loading necessary Python/NRPy+ modules
1. [Step 2](#compilation_script): A simple script to compile the `BOINC` server
1. [Step 3](#server_setup): Setting up the server
1. [Step 4](#latex_pdf_output): Output this notebook to $\LaTeX$-formatted PDF file
<a id='loading_python_nrpy_modules'></a>
# Step 1: Loading needed Python/NRPy+ modules \[Back to [top](#toc)\]
$$\label{loading_python_nrpy_modules}$$
We start by loading the necessary Python/NRPy+ modules used by this tutorial notebook. We also set up the `BOINC` directory path (the default is the current working directory).
```
# Step 1: Import necessary modules - set directories.
# Step 1.a: Import required Python modules
import sys
# Step 1.b: Add NRPy's root directory to the sys.path()
sys.path.append("..")
# Step 1.c: Load NRPy+'s command line helper module
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
```
<a id='compilation_script'></a>
# Step 2: A simple script to compile the `BOINC` server \[Back to [top](#toc)\]
$$\label{compilation_script}$$
We will now write a simple `bash` script that will take care of downloading the `BOINC` source code and compiling the `BOINC` server.
The script performs the following tasks:
1. Install the necessary dependencies.
1. Download the [`BOINC` source code](https://github.com/BOINC/boinc) (if necessary).
1. Compile the `BOINC` server and libraries.
The following packages are required for a successful compilation:
1. git
1. make
1. m4
1. autoconf
1. libtool
1. A C++ compiler
1. Python
1. pip
1. Python-MySQL (installed using pip)
1. mysql-server
1. apache2
1. php (with cli, gd, and mysql support)
1. pkg-config
1. Development version of libmysql++
1. Development version of libssl
1. Development version of libcurl with openssl support
```
%%writefile compile_BOINC_server.sh
#!/bin/bash
# Install all required software
sudo apt-get install -y \
git make m4 autoconf libtool g++ python3 python-is-python3 python3-pip \
mysql-server apache2 php php-cli php-gd php-mysql pkg-config \
libmysql++-dev libssl-dev libcurl4-openssl-dev
# Now use pip to install MySQL for python
pip3 install mysqlclient
# Download BOINC repository (if necessary)
boincdir="$(pwd)/boinc"
if [ -d "$boincdir" ]; then
echo "BOINC directory found at $boincdir"
else
echo "BOINC directory not found at $boincdir. Cloning from BOINC github repository..."
git clone https://github.com/BOINC/boinc boinc
fi
# Now change directories to the BOINC directory
cd "$boincdir"
# Compile the core BOINC server and libraries
./_autosetup -f
./configure --disable-client --disable-manager
make
# Make sure the boinc_zip library is compiled
cd "${boincdir}/zip"
make
```
<a id='server_setup'></a>
# Step 3: Setting up the server \[Back to [top](#toc)\]
$$\label{server_setup}$$
We now generate a bash script which will set up the server for us. After running the `BOINC` executables that set up a new server, some file/directory permissions are still missing/misconfigured, and therefore some fixes are necessary. The script below takes care of all of the problems we have encountered when setting up a `BOINC` server so far.
The script will also add your apps to the server, if you have any. Simply change the `addonsdir` variable in the code below to point to your apps' base directory. The script assumes that the `addonsdir` directory contains at least the folders `apps` and `templates`. The `apps` directory should contain your `BOINC` applications, which should follow the `BOINC` standards (see e.g. the directory tree in [this `BOINC` tutorial](https://boinc.berkeley.edu/trac/wiki/WrapperApp)), as well as a file called `app_list.txt`.
The `app_list.txt` file should have two columns: the first containing the application's (short) name and the second containing the application's user-friendly name. Lines starting with "#" are treated as comments. Here is a brief example:
```bash
# Lines starting with a # are treated as comments
# Empty lines are also OK
# The file should contain 2 columns: the first one
# with the application's (short) name and the second
# with the application's user-friendly name. Here's
# an example:
# My BOINC applications
my_app1 MyApplication1
my_app2 MyApplication2
my_app3 MyFancyApp3
```
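The setup script below turns each non-comment line of this file into an `<app>` entry in the project's `project.xml`. That transformation, performed by the script's `while read` loop, can be sketched in Python (hypothetical input):

```python
# A toy app_list.txt, matching the format described above.
app_list = """\
# My BOINC applications
my_app1 MyApplication1

my_app2 MyApplication2
"""

entries = []
for line in app_list.splitlines():
    line = line.strip()
    if not line or line.startswith('#'):  # skip blank lines and comments
        continue
    name, friendly = line.split()[:2]
    entries.append("<app>\n"
                   "  <name>{0}</name>\n"
                   "  <user_friendly_name>{1}</user_friendly_name>\n"
                   "</app>".format(name, friendly))
print("\n".join(entries))
```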
<font color=red>NOTE:</font> the script below will request your MySQL root and your root passwords. Because of this, we will not be running it in this tutorial notebook, but you can do it by first adjusting the first few variables in the script (which set up the project and database name, as well as installation directories) and then running:
```bash
$: source setup_BOINC_server.sh
```
Once the script below finishes running without errors, you can go into the project's root directory and execute
```bash
$: sudo ./bin/start
```
to start the server. If everything went according to plan, you should then be able to access the project's website using the URL
```
http://<myip>/<projectname>
```
where `<projectname>` should be replaced with the project's name and `<myip>` by the server's IP address. If you haven't modified this on the script below, then the value of `<myip>` can be found by running
```bash
$: hostname -I | awk '{print $1}'
```
To stop the server, go to the project's root directory and execute
```bash
$: sudo ./bin/stop
```
```
%%writefile setup_BOINC_server.sh
#!/bin/bash
# MySQL variable: project's database name.
dbname=blackholesathome
# MySQL variable: project's database user (I'm user my linux user here).
dbuser=bhahadm
# MySQL variable: project's database password.
dbpasswd=P@ssword1234
# BOINC variable: project name (for e.g. directories and files).
projectname=blackholesathome
# BOINC variable: project nice name (for displaying on e.g. the website).
projectnicename="BlackHoles@Home"
# BOINC variable: directory in which the project will be installed.
installroot="$(pwd)/projects"
# BOINC variable: BOINC source code directory.
boincroot="$(pwd)/boinc"
# BOINC variable: project directory.
projectroot="${installroot}/${projectname}"
# BOINC variable: this variable will be used to set your project's
# webpage ID. If you are running a production
# server, then you might want to add your static.
# IP address here
myip=$(hostname -I | awk '{print $1}')
# BOINC variable: project's webpage address.
hosturl="http://${myip}"
# Set this to a directory containing your BOINC applications.
# The script expects "apps" and "templates" to be subdirectories.
# If empty, then it will be unused.
addonsdir=""
appsdir=$addonsdir/apps
# Here we will pipe commands to the MySQL shell. The commands
# below set up a new database user, whose handle is specified
# by the dbuser variable set above. We then set up a database
# and grant permission to the user so that we are able to
# modify the database as we please.
printf "Please enter your MySQL password.\n"
cat <<EOMYSQL | mysql -u root -p;
DROP USER IF EXISTS '$dbuser'@'localhost';
DROP DATABASE IF EXISTS $dbname;
CREATE USER '$dbuser'@'localhost' IDENTIFIED BY '$dbpasswd';
GRANT ALL PRIVILEGES ON $dbname.* TO '$dbuser'@'localhost';
EOMYSQL
# Now we create our project by running BOINC's make_project
# script.
# NOTE: if you are getting errors with "MySQL" module not
# found in Python, then you might need to install
# the MySQL Python client using sudo, i.e.:
#
# sudo pip3 install mysqlclient
printf "Please enter your sudo password.\n"
sudo mkdir -p "$installroot"
sudo "${boincroot}/tools/make_project" \
--srcdir="$boincroot" \
--url_base "$hosturl" \
--db_name "$dbname" \
--db_user "$dbuser" \
--db_passwd "$dbpasswd" \
--delete_prev_inst \
--drop_db_first \
--project_root "$projectroot" \
"$projectname" "$projectnicename"
# Fix permissions of project directories. If this
# is not done, then the website is usually not
# displayed correctly. On top of that, users often
# have difficulty communicating with the server.
cd "$installroot/$projectname"
sudo chown bhahadm:bhahadm -R .
sudo chmod g+w -R .
sudo chmod 755 -R upload html/cache html/inc html/languages html/languages/compiled html/user_profile
hostname=`hostname`
sudo chgrp -R www-data log_"$hostname" upload
# Further permission fixes
echo -n "html/inc: "; sudo chmod o+x html/inc && sudo chmod -R o+r html/inc && echo "[ok]" || echo "[failed]"
echo -n "html/languages: "; sudo chmod o+x html/languages/ html/languages/compiled && echo "[ok]" || echo "[failed]"
# Copy webpage configuration to apache directory
sudo cp "${projectroot}/${projectname}.httpd.conf" /etc/apache2/sites-available
cd "${projectroot}" && sudo a2ensite "${projectname}.httpd.conf"
sudo /etc/init.d/apache2 reload
# Make sure cgi and php modules are loaded
sudo a2enmod cgi
sudo a2enmod php7.4
sudo /etc/init.d/apache2 restart
# Add the project name to the webpage file, replacing the default values.
sed -i "s/REPLACE WITH PROJECT NAME/${projectnicename}/g" "${projectroot}/html/project/project.inc"
sed -i "s/REPLACE WITH COPYRIGHT HOLDER/${projectnicename} Team/g" "${projectroot}/html/project/project.inc"
# Copy user applications apps to the project's directory
if [ "$appsdir" != "" ]; then
# Start by copying all the apps in the application directory
# to the project's apps directory. We also remove the "app_list.txt"
# file, which is not used by the BOINC project.
cp -r $appsdir/* "${projectroot}/apps/" && rm "${projectroot}/apps/app_list.txt"
# Now we add our apps to the project by creating entries in
# the project's project.xml file.
# First delete the last line of the file (we'll add it back later)
sed -i '$ d' "${projectroot}/project.xml"
# Then add the apps to the file
while read p; do
# Skip if line is a comment
[[ $p =~ ^#.* ]] && continue
stringarray=($p)
echo "
<app>
<name>${stringarray[0]}</name>
<user_friendly_name>${stringarray[1]}</user_friendly_name>
</app>"
done < "${appsdir}/app_list.txt" >> "${projectroot}/project.xml"
# Add the last line back
echo "</boinc>" >> "${projectroot}/project.xml"
# Copy templates to server
cp ${addonsdir}/templates/* ${projectroot}/templates/
# Add all the apps to the project's database
$projectroot/bin/xadd
# Create new app versions
$projectroot/bin/update_versions
fi # if [ "$appsdir" != "" ]
# All done!
printf "All done!\n"
```
<a id='latex_pdf_output'></a>
# Step 4: Output this notebook to $\LaTeX$-formatted PDF file \[Back to [top](#toc)\]
$$\label{latex_pdf_output}$$
The following code cell converts this Jupyter notebook into a proper, clickable $\LaTeX$-formatted PDF file. After the cell is successfully run, the generated PDF may be found in the root NRPy+ tutorial directory, with filename
[Tutorial-BlackHolesAtHome-Compiling_the_BOINC_server_on_Linux.pdf](Tutorial-BlackHolesAtHome-Compiling_the_BOINC_server_on_Linux.pdf) (Note that clicking on this link may not work; you may need to open the PDF file through another means.)
```
!cp ../latex_nrpy_style.tplx .
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-BlackHolesAtHome-Compiling_the_BOINC_server_on_Linux")
!rm -f latex_nrpy_style.tplx
```
```
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
### Overview
In this notebook, you'll learn how to submit a job to AI Platform Training. In the job you'll train your TensorFlow 2 model and export the saved model to Cloud Storage.
### Dataset
Public domain datasets used in this notebook:
* U.S. Bureau of Economic Analysis, Total Vehicle Sales [TOTALNSA], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/TOTALNSA, November 15, 2020.
* U.S. Bureau of Labor Statistics, Gasoline, All Types, Per Gallon/3.785 Liters in U.S. City Average [APU00007471A], retrieved from FRED, Federal Reserve Bank of St. Louis; https://fred.stlouisfed.org/series/APU00007471A, November 15, 2020.
### Objective
The goal is to forecast total vehicle sales in the USA, based on previous sales and the price of gas.
## Install packages and dependencies
### Import libraries and define constants
```
import datetime
import json
import os
import time
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
from google.cloud import storage
from pandas.plotting import register_matplotlib_converters
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.preprocessing import StandardScaler
# Check the TensorFlow version installed
tf.__version__
# Enter your project, region, and a bucket name. Then run the cell to make sure the
# Cloud SDK uses the right project for all the commands in this notebook.
PROJECT = "your-project-name" # REPLACE WITH YOUR PROJECT ID
BUCKET = "your-project-name" # REPLACE WITH A UNIQUE BUCKET NAME e.g. your PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
BUCKET_URI = "gs://" + BUCKET
#Don't change the following command - this is to check if you have changed the project name above.
assert PROJECT != 'your-project-name', "Don't forget to change the project variables!"
target_col = 'TOTALNSA' # What you are predicting
ts_col = 'DATE' # Time series column
if os.path.exists('vehicle_sales.csv'):
    input_file = 'vehicle_sales.csv'  # File created in previous lab
else:
    input_file = 'data/vehicle_sales.csv'
n_features = 2 # How many features? (Including the target variable itself)
n_input_steps = 12 # Lookback window
n_output_steps = 6 # How many steps to predict forward
train_split = 0.75 # % Split between train/test data
epochs = 1000 # How many passes through the data (early-stopping will cause training to stop before this)
patience = 5 # Terminate training after the validation loss does not decrease after this many epochs
lstm_units = 64
input_layer_name = 'lstm_input'
MODEL_NAME = 'vehicle_sales'
FRAMEWORK='TENSORFLOW'
RUNTIME_VERSION = '2.1'
PYTHON_VERSION = '3.7'
PREDICTIONS_FILE = 'predictions.json'
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
```
storage_client = storage.Client()
try:
    bucket = storage_client.get_bucket(BUCKET)
    print("Bucket exists, let's not recreate it.")
except Exception:
    bucket = storage_client.create_bucket(BUCKET)
    print('Created bucket: ' + BUCKET)
```
## Download and preview the data
Pre-processing on the original dataset has been done for you and made available on Cloud Storage.
```
df = pd.read_csv(input_file, index_col=ts_col, parse_dates=True)
# Plot 3 years of sales
_ = df[target_col][:3*12].plot()
```
### Process data and remove outliers
```
# Split data
size = int(len(df) * train_split)
df_train, df_test = df[0:size].copy(deep=True), df[size:len(df)].copy(deep=True)
df_train.head()
_ = df_train.plot()
```
### Scale sales values
```
# Review original values
df_train.head()
# For neural networks to converge quicker, it is helpful to scale the values.
# For example, each feature might be transformed to have a mean of 0 and std. dev. of 1.
#
# You are working with a mix of features, input timesteps, output horizon, etc.
# which don't work out-of-the-box with common scaling utilities.
# So, here are a couple wrappers to handle scaling and inverting the scaling.
feature_scaler = StandardScaler()
target_scaler = StandardScaler()
def scale(df,
          fit=True,
          target_col=target_col,
          feature_scaler=feature_scaler,
          target_scaler=target_scaler):
    """
    Scale the input features, using a separate scaler for the target.

    Parameters:
        df (pd.DataFrame): Input dataframe
        fit (bool): Whether to fit the scaler to the data (only apply to training data)
        target_col (pd.Series): The column that is being predicted
        feature_scaler (StandardScaler): Scaler used for features
        target_scaler (StandardScaler): Scaler used for target

    Returns:
        df_scaled (pd.DataFrame): Scaled dataframe
    """
    target = df[target_col].values.reshape(-1, 1)
    target_col_num = df.columns.get_loc(target_col)
    if fit:
        target_scaler.fit(target)
    target_scaled = target_scaler.transform(target)
    features = df.loc[:, df.columns != target_col].values
    if features.shape[1]:
        if fit:
            feature_scaler.fit(features)
        features_scaled = feature_scaler.transform(features)
        df_scaled = pd.DataFrame(features_scaled)
        df_scaled.insert(target_col_num, target_col, target_scaled)
        df_scaled.columns = df.columns
    else:
        # With no extra features, the scaled target is the whole output
        df_scaled = pd.DataFrame(target_scaled, columns=[target_col])
    return df_scaled
def inverse_scale(data, target_scaler=target_scaler):
    """
    Transform the scaled values of the target back into their original form.
    The features are left alone, as we're assuming that the output of the model only includes the target.

    Parameters:
        data (np.array): Input array
        target_scaler (StandardScaler): Scaler used for target

    Returns:
        data_scaled (np.array): Array with target values back on the original scale
    """
    data_scaled = np.empty([data.shape[1], data.shape[0]])
    for i in range(data.shape[1]):
        data_scaled[i] = target_scaler.inverse_transform(data[:, i])
    return data_scaled.transpose()
df_train_scaled=scale(df_train)
df_test_scaled=scale(df_test, False)
# Review scaled values
df_train_scaled.head()
```
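To make the scaling round trip concrete, here is a minimal pure-Python standardizer — an illustrative sketch of what `StandardScaler` computes (mean 0, std 1), not the sklearn implementation:

```python
class Standardizer:
    """Minimal stand-in for StandardScaler: z = (x - mean) / std."""

    def fit(self, xs):
        self.mean = sum(xs) / len(xs)
        var = sum((x - self.mean) ** 2 for x in xs) / len(xs)
        self.std = var ** 0.5
        return self

    def transform(self, xs):
        return [(x - self.mean) / self.std for x in xs]

    def inverse_transform(self, zs):
        return [z * self.std + self.mean for z in zs]

sc = Standardizer().fit([10.0, 20.0, 30.0])
z = sc.transform([10.0, 20.0, 30.0])
print([round(v, 3) for v in z])                        # [-1.225, 0.0, 1.225]
print([round(v, 3) for v in sc.inverse_transform(z)])  # [10.0, 20.0, 30.0]
```

Fitting only on the training data (as `scale(..., fit=True)` does above) keeps test-set statistics from leaking into the model.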
### Create sequences of time series data
```
def reframe(data, n_input_steps=n_input_steps, n_output_steps=n_output_steps, target_col=target_col):
    target_col_num = data.columns.get_loc(target_col)
    # Iterate through data and create sequences of features and outputs
    df = pd.DataFrame(data)
    cols = list()
    for i in range(n_input_steps, 0, -1):
        cols.append(df.shift(i))
    for i in range(0, n_output_steps):
        cols.append(df.shift(-i))
    # Concatenate values and remove any missing values
    df = pd.concat(cols, axis=1)
    df.dropna(inplace=True)
    # Split the data into feature and target variables
    n_feature_cols = n_input_steps * n_features
    features = df.iloc[:, 0:n_feature_cols]
    target_cols = [i for i in range(n_feature_cols + target_col_num, n_feature_cols + n_output_steps * n_features, n_features)]
    targets = df.iloc[:, target_cols]
    return (features, targets)
X_train_reframed, y_train_reframed = reframe(df_train_scaled)
X_test_reframed, y_test_reframed = reframe(df_test_scaled)
```
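The `reframe` step relies on pandas `shift`; the underlying sliding-window idea can be sketched in plain Python on a single series (illustrative only — the real cell also interleaves the second feature):

```python
def make_windows(series, n_input_steps=3, n_output_steps=2):
    """Slide a window over a 1-D series, yielding (lookback, horizon) pairs."""
    pairs = []
    for start in range(len(series) - n_input_steps - n_output_steps + 1):
        x = series[start : start + n_input_steps]
        y = series[start + n_input_steps : start + n_input_steps + n_output_steps]
        pairs.append((x, y))
    return pairs

windows = make_windows(list(range(8)))
print(windows[0])    # ([0, 1, 2], [3, 4]) -- inputs 0..2 predict targets 3..4
print(len(windows))  # 4
```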
## Build a model and submit your training job to AI Platform
The model you're building here trains pretty fast so you could train it in this notebook, but for more computationally expensive models, it's useful to train them in the Cloud. To use AI Platform Training, you'll package up your training code and submit a training job to the AI Platform Training service.
In your training script, you'll also export your trained `SavedModel` to a Cloud Storage bucket.
```
# Reshape test data to match model inputs and outputs
X_train = X_train_reframed.values.reshape(-1, n_input_steps, n_features)
X_test = X_test_reframed.values.reshape(-1, n_input_steps, n_features)
y_train = y_train_reframed.values.reshape(-1, n_output_steps)
y_test = y_test_reframed.values.reshape(-1, n_output_steps)
TRAINER_DIR = 'trainer'
EXPORT_DIR = 'tf_export'
# Create trainer directory if it doesn't already exist
!mkdir -p $TRAINER_DIR
!touch $TRAINER_DIR/__init__.py
# Copy numpy arrays to npy files
np.save(TRAINER_DIR + '/x_train.npy', X_train)
np.save(TRAINER_DIR + '/x_test.npy', X_test)
np.save(TRAINER_DIR + '/y_train.npy', y_train)
np.save(TRAINER_DIR + '/y_test.npy', y_test)
# Write training code out to a file that will be submitted to the training job
# Note: f-strings are supported in Python 3.6 and above
model_template = f"""import argparse
import numpy as np
import os
import tempfile
from google.cloud import storage
from tensorflow import keras
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, LSTM
from tensorflow.keras.callbacks import EarlyStopping
n_features = {n_features} # Two features: sales (previous values) and the gas price
n_input_steps = {n_input_steps} # Lookback window
n_output_steps = {n_output_steps} # How many steps to predict forward
epochs = {epochs} # How many passes through the data (early-stopping will cause training to stop before this)
patience = {patience} # Terminate training after the validation loss does not decrease after this many epochs
def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--job-dir',
        default=None,
        help='URL to store the job output')
    args = parser.parse_args()
    print(args)
    return args

def main():
    args = get_args()
    print('args: ', args)
    model_dir = args.job_dir
    storage_client = storage.Client()
    bucket_name = model_dir.split('/')[2]
    bucket = storage_client.get_bucket(bucket_name)

    # Get the training data and convert back to np arrays
    local_data_dir = os.path.join(os.getcwd(), tempfile.gettempdir())
    data_files = ['x_train.npy', 'y_train.npy', 'x_test.npy', 'y_test.npy']
    for i in data_files:
        blob = storage.Blob('{TRAINER_DIR}/' + i, bucket)
        destination_file = local_data_dir + '/' + i
        open(destination_file, 'a').close()
        blob.download_to_filename(destination_file)
    X_train = np.load(local_data_dir + '/x_train.npy')
    y_train = np.load(local_data_dir + '/y_train.npy')
    X_test = np.load(local_data_dir + '/x_test.npy')
    y_test = np.load(local_data_dir + '/y_test.npy')

    # Build and train the model
    model = Sequential([
        LSTM({lstm_units}, input_shape=[n_input_steps, n_features], recurrent_activation=None),
        Dense(n_output_steps)])
    model.compile(optimizer='adam', loss='mae')
    early_stopping = EarlyStopping(monitor='val_loss', patience=patience)
    _ = model.fit(x=X_train, y=y_train, validation_data=(X_test, y_test), epochs=epochs, callbacks=[early_stopping])

    # Export the model
    export_path = os.path.join(model_dir, '{EXPORT_DIR}')
    model.save(export_path)

if __name__ == '__main__':
    main()
"""
with open(os.path.join(TRAINER_DIR, 'model.py'), 'w') as f:
    f.write(model_template)  # the f-string above has already interpolated all values
# Copy the train data files to a GCS bucket
!gsutil -m cp -r trainer $BUCKET_URI
!gsutil ls $BUCKET_URI/$TRAINER_DIR
# Re-run this if you need to create a new training job
timestamp = str(datetime.datetime.now().time())
JOB_NAME = 'caip_training_' + str(int(time.time()))
MODULE_NAME = TRAINER_DIR + '.model'
TRAIN_DIR = os.getcwd() + '/' + TRAINER_DIR
JOB_DIR = BUCKET_URI
# Submit the training job
!gcloud ai-platform jobs submit training $JOB_NAME \
--scale-tier basic \
--package-path $TRAIN_DIR \
--module-name $MODULE_NAME \
--job-dir $BUCKET_URI \
--region $REGION \
--runtime-version $RUNTIME_VERSION \
--python-version $PYTHON_VERSION
# Check the job status
!gcloud ai-platform jobs describe $JOB_NAME
```
## Monitor output of your training job
Follow the instructions in the output of the gcloud command above to view the logs from your training job. You can also navigate to the [Jobs Section](https://console.cloud.google.com/ai-platform/jobs) of your Cloud Console to view logs.
Once your training job completes successfully, it'll export your trained model as a TensorFlow `SavedModel` and write the output to a directory in your Cloud Storage bucket.
```
# Verify model was exported correctly
storage_client = storage.Client()
bucket = storage_client.get_bucket(BUCKET)
bucket_files = list(bucket.list_blobs(prefix=EXPORT_DIR + '/'))
# If you see a saved_model.pb and a variables/ and assets/ directory here, it means your model was exported correctly in your training job. Yay!
for file in bucket_files:
print(file)
```
## Deploy a model version
```
# Create model if it doesn't already exist
!gcloud ai-platform models create $MODEL_NAME --regions $REGION
# Create the model version
export_path = BUCKET_URI +'/' + EXPORT_DIR
version = 'version_' + str(int(time.time()))
!gcloud ai-platform versions create $version \
--model $MODEL_NAME \
--origin $export_path \
--runtime-version=$RUNTIME_VERSION \
--framework $FRAMEWORK \
--python-version=$PYTHON_VERSION
```
## Get predictions on deployed model
```
# Write test inputs to JSON file
prediction_json = {input_layer_name: X_test[3].tolist()}
with open(PREDICTIONS_FILE, 'w') as outfile:
json.dump(prediction_json, outfile)
# Make predictions
preds = !gcloud ai-platform predict --model $MODEL_NAME --json-instances=$PREDICTIONS_FILE --format="json"
# Parse output
preds.pop(0) # Remove warning
preds = "\n".join(preds) # Concatenate list of strings into one string
preds = json.loads(preds) # Convert JSON string into Python dict
pred_val = preds['predictions'][0]['dense'][0] # Access prediction
pred_val
# Print prediction and compare to actual value
print('Predicted sales:', int(round(inverse_scale(np.array([[pred_val]]))[0][0])))
print('Actual sales: ', int(round(inverse_scale(np.array([y_test[0]]))[0][0])))
```
## Conclusion
In this section, you've learned how to:
* Prepare data and models for training in the cloud
* Train your model and monitor the progress of the job with AI Platform Training
* Predict using the model with AI Platform Predictions
# Calculate performance of signature
Gregory Way, 2021
I previously identified a series of morphology features that were significantly different between sensitive and resistant clones.
I also applied this signature to all profiles from training, testing, validation, and holdout sets.
Here, I evaluate the performance of this signature.
## Evaluation
* Accuracy
- The resistant and sensitive clones were balanced, so accuracy is an appropriate measure
* Average precision
- How well we identify resistant samples across score thresholds, summarizing the precision-recall curve
* Receiver operating characteristic (ROC) curve
- Computing the area under the ROC curve
- Calculating the ROC curve coordinates as a tradeoff between true and false positives given various thresholds
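These metrics can be computed from first principles; the following toy sketch shows accuracy and the rank-based definition of ROC AUC (the notebook itself computes them via `utils.metrics`):

```python
def accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label matches the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def roc_auc(y_true, scores):
    """AUC as the probability that a random positive outscores a random
    negative (ties count half) -- the Mann-Whitney U formulation."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(accuracy(y_true, [1 if s > 0.3 else 0 for s in scores]))  # 0.75
print(roc_auc(y_true, scores))                                  # 0.75
```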
## Shuffled results
I also randomly permute the signature score 100 times and perform the full evaluation.
I record performance in this shuffled set as a negative control.
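Schematically, the shuffled control rebuilds a metric's null distribution from permuted values; here is a small sketch with a mean-difference statistic (an illustrative stand-in, not the actual pipeline):

```python
import random

def mean_diff(labels, scores):
    """Mean score of resistant (1) samples minus mean score of sensitive (0) samples."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    return sum(pos) / len(pos) - sum(neg) / len(neg)

def permutation_null(labels, scores, n_perm=100, seed=0):
    """Null distribution of the statistic under random score permutations."""
    rng = random.Random(seed)
    shuffled = list(scores)
    null = []
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        null.append(mean_diff(labels, shuffled))
    return null

labels = [1, 1, 1, 0, 0, 0]
scores = [2.0, 1.5, 1.8, -1.0, -0.5, -1.2]
observed = mean_diff(labels, scores)
null = permutation_null(labels, scores)
p_value = sum(abs(v) >= observed - 1e-9 for v in null) / len(null)
print(round(observed, 3))  # 2.667
print(p_value)             # empirical two-sided p-value
```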
## Metadata stratification
Lastly, I calculate performance in a variety of different metadata subsets. I calculate performance separately for:
1. Across model splits (training, test, validation, holdout)
2. Across model splits and plates (to identify plate-specific performance)
3. Across model splits and clone ID (to identify if certain clones are consistently predicted differentially)
Note that I only calculate ROC information across model splits (training, validation, test, and holdout).
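The stratified evaluation boils down to grouping predictions by one or more metadata columns and scoring each group separately; a schematic stand-in for what `get_metric_pipeline` does, with a hypothetical record layout:

```python
from collections import defaultdict

def accuracy_by_group(records, group_key):
    """Accuracy computed separately within each metadata subset."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rec in records:
        g = rec[group_key]
        totals[g] += 1
        hits[g] += rec["y_true"] == rec["y_pred"]
    return {g: hits[g] / totals[g] for g in totals}

records = [
    {"split": "training", "y_true": 1, "y_pred": 1},
    {"split": "training", "y_true": 0, "y_pred": 1},
    {"split": "holdout", "y_true": 1, "y_pred": 1},
]
print(accuracy_by_group(records, "split"))  # {'training': 0.5, 'holdout': 1.0}
```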
```
import sys
import pathlib
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score, average_precision_score
import plotnine as gg
from utils.metrics import get_metrics, get_metric_pipeline
np.random.seed(5678)
# Set constants
dataset = "bortezomib"
sig_dir = pathlib.Path("results", "singscore")
results_file = pathlib.Path(sig_dir, f"singscore_results{dataset}.tsv.gz")
output_dir = pathlib.Path("results", "performance")
num_permutations = 100
threshold = 0
metric_comparisons = {
"total": ["Metadata_model_split"],
"plate": ["Metadata_model_split", "Metadata_Plate"],
"sample": ["Metadata_model_split", "Metadata_clone_number"]
}
roc_model_split_focus = ["training", "validation", "test", "holdout"]
# Load data
results_df = pd.read_csv(results_file, sep="\t")
print(results_df.shape)
results_df.head()
# Get performance metrics using real predictions
real_metric_results = get_metric_pipeline(
results_df,
metric_comparisons,
[dataset],
shuffle=False,
signature=False,
threshold=threshold
)
# Get performance metrics using shuffled predictions
all_shuffle_results = {compare: [] for compare in metric_comparisons}
for i in range(0, num_permutations):
    np.random.seed(i)
    shuffle_metric_results = get_metric_pipeline(
        results_df,
        metric_comparisons,
        datasets=[dataset],
        shuffle=True,
        signature=False,
        threshold=threshold
    )
    for compare in metric_comparisons:
        metric_df = shuffle_metric_results[compare].assign(permutation=i)
        all_shuffle_results[compare].append(metric_df)
# Get ROC curve information for model sets
roc_scores = []
roc_curve_data = []
for split in roc_model_split_focus:
    results_subset_df = results_df.query("Metadata_model_split == @split")
    for shuffle in [True, False]:
        roc_auc_val, roc_df = get_metrics(df=results_subset_df, return_roc_curve=True, shuffle=shuffle)
        roc_scores.append(pd.Series([roc_auc_val, split, shuffle]))
        roc_curve_data.append(roc_df.assign(model_split=split, shuffled=shuffle))
roc_scores_df = pd.DataFrame(roc_scores)
roc_scores_df.columns = ["roc_auc", "model_split", "shuffled"]
roc_curve_data_df = pd.concat(roc_curve_data).reset_index(drop=True)
# Output performance results
for compare in metric_comparisons:
    full_results_df = real_metric_results[compare]
    shuffle_results_df = pd.concat(all_shuffle_results[compare]).reset_index(drop=True)

    output_file = pathlib.Path(f"{output_dir}/{compare}_{dataset}_metric_performance.tsv")
    full_results_df.to_csv(output_file, sep="\t", index=False)

    output_file = pathlib.Path(f"{output_dir}/{compare}_{dataset}_shuffle_metric_performance.tsv")
    shuffle_results_df.to_csv(output_file, sep="\t", index=False)
# Output ROC results
output_file = pathlib.Path(f"{output_dir}/{dataset}_roc_auc.tsv")
roc_scores_df.to_csv(output_file, sep="\t", index=False)
output_file = pathlib.Path(f"{output_dir}/{dataset}_roc_curve.tsv")
roc_curve_data_df.to_csv(output_file, sep="\t", index=False)
```
# Parametrized Sequences
```
import numpy as np
import pulser
from pulser import Pulse, Sequence, Register
from pulser.waveforms import RampWaveform, BlackmanWaveform, CompositeWaveform
from pulser.devices import Chadoq2
```
From simple sweeps to variational quantum algorithms, it is often the case that one wants to try out multiple pulse sequences that vary only in a few parameters. To this end, the ability to make a `Sequence` **parametrized** was developed.
A parametrized `Sequence` can be used just like a "regular" `Sequence`, with a few key differences. Initialization and channel declaration, for example, don't change at all:
```
reg = Register.square(2, prefix='q')
seq = Sequence(reg, Chadoq2)
seq.declare_channel('rydberg', 'rydberg_global')
seq.declare_channel('raman', 'raman_local')
```
## Variables and Parametrized Objects
The defining characteristic of a parametrized `Sequence` is its use of **variables**. These variables are declared within a `Sequence`, by calling:
```
Omega_max = seq.declare_variable('Omega_max')
ts = seq.declare_variable('ts', size=2, dtype=int)
last_target = seq.declare_variable('last_target', dtype=str)
```
The returned `Omega_max`, `ts` and `last_target` objects are of type `Variable`, and are defined by their name, size and data type. In this case, `Omega_max` is a variable of `size=1` and `dtype=float` (the default), `ts` is an array of two `int` values and `last_target` is a string.
These returned `Variable` objects support simple arithmetic operations (when applicable) and, when of `size > 1`, even item indexing. Take the following examples:
```
t_rise, t_fall = ts # Unpacking is possible too
U = Omega_max / 2.3
delta_0 = -6 * U
delta_f = 2 * U
t_sweep = (delta_f - delta_0)/(2 * np.pi * 10) * 1000
```
Both the original `Variables` and the results of these operations serve as valid inputs for `Waveforms`, `Pulses` or `Sequence`-building instructions. We can take `Omega_max` as an argument for a waveform:
```
pi_wf = BlackmanWaveform.from_max_val(Omega_max, np.pi)
```
or use derived quantities, like `t_rise`, `t_fall`, `delta_0` and `delta_f`:
```
rise_wf = RampWaveform(t_rise, delta_0, delta_f)
fall_wf = RampWaveform(t_fall, delta_f, delta_0)
rise_fall_wf = CompositeWaveform(rise_wf, fall_wf)
```
These waveforms are *parametrized* objects, so usual attributes like `duration` or `samples` are not available, as they depend on the values of the underlying variables. Nonetheless, they can be used as regular waveforms when creating `Pulses`, which will consequently be *parametrized* too.
```
pi_pulse = Pulse.ConstantDetuning(pi_wf, 0, 0)
rise_fall = Pulse.ConstantAmplitude(Omega_max, rise_fall_wf, 0)
```
## Constructing the Sequence
Upon initialization, a `Sequence` is, by default, not parametrized. We can check this by calling:
```
seq.is_parametrized()
```
While it is not parametrized, it is just a normal sequence. We can do the usual stuff, like targeting a local channel, adding regular pulses, or plotting the sequence:
```
generic_pulse = Pulse.ConstantPulse(100, 2*np.pi, 2, 0.)
seq.add(generic_pulse, "rydberg")
seq.target("q0", "raman")
seq.add(generic_pulse, "raman")
seq.draw()
```
The `Sequence` becomes parametrized at the moment a parametrized object or variable is given to a sequence-building instruction. For example:
```
seq.target(last_target, "raman")
seq.is_parametrized()
```
From this point onward, functionalities like drawing are no longer available, because the instructions start being stored instead of executed on the fly. We can still check the current state of a parametrized sequence by printing it:
```
print(seq)
```
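The deferred-execution idea behind a parametrized `Sequence` can be sketched in a few lines of plain Python — a toy model of storing instructions and substituting variables at build time, not Pulser's actual implementation:

```python
class ToySequence:
    """Record instructions with named variables; execute only at build time."""

    def __init__(self):
        self._calls = []           # deferred (instruction, argument) pairs
        self._variables = set()

    def declare_variable(self, name):
        self._variables.add(name)
        return name                # stand-in for a real Variable object

    def add(self, instruction, arg):
        self._calls.append((instruction, arg))

    def build(self, **values):
        missing = self._variables - values.keys()
        if missing:
            raise ValueError(f"unassigned variables: {missing}")
        # substitute variable names with their concrete values
        return [(ins, values.get(arg, arg)) for ins, arg in self._calls]

seq = ToySequence()
omega = seq.declare_variable("omega")
seq.add("pulse", omega)
print(seq.build(omega=6.28))  # [('pulse', 6.28)]
```

Just as with a real parametrized `Sequence`, calling `build` again with different values produces a new concrete sequence from the same recorded instructions.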
Naturally, we can also add the parametrized pulses we previously created:
```
seq.add(rise_fall, "rydberg")
seq.add(pi_pulse, "raman")
```
## Building
Once we're happy with our parametrized sequence, the last step is to build it into a regular sequence. For that, we call the `Sequence.build()` method, in which we **must attribute values for all the declared variables**:
```
built_seq = seq.build(Omega_max = 2.3 * 2*np.pi, ts = [200, 500], last_target="q3")
built_seq.draw()
```
And here we have a regular sequence, built from our parametrized sequence. To create a new one with different parameters, we can simply build it again with new values:
```
alt_seq = seq.build(Omega_max = 2*np.pi, ts = [400, 100], last_target="q2")
alt_seq.draw()
```
# Saddle plot
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import bioframe
import cooler
import cooltools
import cooltools.eigdecomp
import cooltools.expected
import cooltools.saddle
# download a Hi-C dataset from Schwarzer et.al. "Two independent modes of chromosome organization are revealed by cohesin removal", 2017
!wget ftp://ftp.ncbi.nlm.nih.gov/geo/series/GSE93nnn/GSE93431/suppl/GSE93431_NIPBL.200kb.cool.HDF5.gz -O /tmp/GSE93431_NIPBL.200kb.cool.gz
!gunzip -f /tmp/GSE93431_NIPBL.200kb.cool.gz
coolpath = '/tmp/GSE93431_NIPBL.200kb.cool'
c = cooler.Cooler(coolpath)
# Define continuous genomic regions for calculations of contact frequency decay
# with distance (aka "expected").
# Typically, we calculate expected separately for each chromosomal arm because
# centromeres additionally suppress contacts across themselves.
# In mice, chromosomes are acrocentric and expected can be calculated
# for whole chromosomes.
regions = [(chrom, 0, c.chromsizes[chrom]) for chrom in c.chromnames]
# Download and compute gene count per genomic bin
bins = c.bins()[:]
genecov = bioframe.tools.frac_gene_coverage(bins, 'mm9')
# Perform eigenvector decomposition in cis, sorting and flipping eigenvectors
# according to their correlation with the number of genes in each bin.
cis_eigs = cooltools.eigdecomp.cooler_cis_eig(
c,
genecov,
regions=None,
n_eigs=5,
phasing_track_col='gene_count')
# Plot eigenvectors to confirm successful eigenvector decomposition.
plt.figure(
figsize=(15,2)
)
loc_eig = bioframe.slice_bedframe(cis_eigs[1], 'chr1:10M-60M')
plt.plot(
loc_eig['start'],
loc_eig['E1']
)
plt.axhline(0,ls='--',lw=0.5,color='gray')
plt.ylabel('E1')
plt.xlabel('chr1 position, bp')
# Digitize eigenvectors, i.e. group genomic bins into
# equisized groups according to their eigenvector rank.
Q_LO = 0.025 # ignore 2.5% of genomic bins with the lowest E1 values
Q_HI = 0.975 # ignore 2.5% of genomic bins with the highest E1 values
N_GROUPS = 38 # divide remaining 95% of the genome into 38 equisized groups, 2.5% each
q_edges = np.linspace(Q_LO, Q_HI, N_GROUPS+1)
# Filter track used for grouping genomic bins based on bins filtered out in Hi-C balancing weights
# Doesn't do anything with eigenvectors from the same Hi-C data (hence commented out here),
# but important for external data, such as ChIP-seq tracks
#eig = cooltools.saddle.mask_bad_bins((cis_eigs[1], 'E1'), (c.bins()[:], 'weight'))
eig = cis_eigs[1]
# Calculate the lower and the upper values of E1 in each of 38 groups.
group_E1_bounds = cooltools.saddle.quantile(eig['E1'], q_edges)
# Assign the group to each genomic bin according to its E1, i.e. "digitize" E1.
digitized, hist = cooltools.saddle.digitize_track(
group_E1_bounds,
track=(eig, 'E1'),
)
# Plot the digitized E1 to confirm that digitization was successful.
plt.figure(
figsize=(15,3)
)
loc_eig = bioframe.slice_bedframe(digitized, 'chr1:10M-60M')
plt.plot(
loc_eig['start'],
loc_eig['E1.d']
)
plt.axhline(0,ls='--',lw=0.5,color='gray')
plt.ylabel('E1, digitized')
plt.xlabel('chr1 position, bp')
# Calculate the decay of contact frequency with distance (i.e. "expected")
# for each chromosome.
expected = cooltools.expected.cis_expected(c, regions, use_dask=True)
# Make a function that returns observed/expected dense matrix of an arbitrary
# region of the Hi-C map.
getmatrix = cooltools.saddle.make_cis_obsexp_fetcher(c, (expected, 'balanced.avg'))
# Compute the saddle plot, i.e. the average observed/expected between genomic
# bins as a function of their digitized E1.
S, C = cooltools.saddle.make_saddle(
getmatrix,
group_E1_bounds,
(digitized, 'E1' + '.d'),
contact_type='cis')
plt.imshow(
np.log2(S / C)[1:-1, 1:-1],
cmap='coolwarm',
vmin=-1,
vmax=1,
)
plt.colorbar(label='log2 obs/exp')
```
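Schematically, the quantile/digitize steps above reduce to computing E1 cutoffs at evenly spaced quantiles and assigning each genomic bin a group index; a toy sketch of this logic (not the cooltools implementation):

```python
def quantile_edges(values, q_lo=0.025, q_hi=0.975, n_groups=4):
    """Value cutoffs at evenly spaced quantiles between q_lo and q_hi."""
    s = sorted(values)
    edges = []
    for i in range(n_groups + 1):
        q = q_lo + (q_hi - q_lo) * i / n_groups
        edges.append(s[min(int(q * len(s)), len(s) - 1)])
    return edges

def digitize(values, edges):
    """Group index per value: 0 below the lowest edge, len(edges) above the highest."""
    return [sum(v > e for e in edges) for v in values]

e1 = list(range(100))                # stand-in for an eigenvector track
edges = quantile_edges(e1)
print(edges)                         # [2, 26, 50, 73, 97]
print(digitize([0, 50, 99], edges))  # [0, 2, 5]
```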
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
import argparse
import time
import os
#setup training parameters
parser = argparse.ArgumentParser(description='PyTorch MNIST Training')
parser.add_argument('--batch-size', type=int, default=128, metavar='N',
help='input batch size for training (default: 128)')
parser.add_argument('--test-batch-size', type=int, default=128, metavar='N',
help='input batch size for testing (default: 128)')
parser.add_argument('--epochs', type=int, default=5, metavar='N',
help='number of epochs to train')
parser.add_argument('--lr', type=float, default=0.01, metavar='LR',
help='learning rate')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=1, metavar='S',
help='random seed (default: 1)')
parser.add_argument('--model-dir', default='./model-mnist-cnn',
help='directory of model for saving checkpoint')
parser.add_argument('--load-model', action='store_true', default=False,
help='load model or not')
args = parser.parse_args(args=[])
if not os.path.exists(args.model_dir):
    os.makedirs(args.model_dir)
# Judge cuda is available or not
use_cuda = not args.no_cuda and torch.cuda.is_available()
#device = torch.device("cuda" if use_cuda else "cpu")
device = torch.device("cpu")
torch.manual_seed(args.seed)
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
# Setup data loader
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
trainset = datasets.MNIST('../data', train=True, download=True,
transform=transform)
testset = datasets.MNIST('../data', train=False,
transform=transform)
train_loader = torch.utils.data.DataLoader(trainset,batch_size=args.batch_size, shuffle=True,**kwargs)
test_loader = torch.utils.data.DataLoader(testset,batch_size=args.test_batch_size, shuffle=False, **kwargs)
# Define CNN
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # in_channels:1 out_channels:32 kernel_size:3 stride:1
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        # in_channels:32 out_channels:64 kernel_size:3 stride:1
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output
# Train function
def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        # clear gradients
        optimizer.zero_grad()
        # compute loss
        loss = F.cross_entropy(model(data), target)
        # get gradients and update
        loss.backward()
        optimizer.step()
# Predict function
def eval_test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.cross_entropy(output, target, reduction='sum').item()
            pred = output.max(1, keepdim=True)[1]
            correct += pred.eq(target.view_as(pred)).sum().item()
    test_loss /= len(test_loader.dataset)
    test_accuracy = correct / len(test_loader.dataset)
    return test_loss, test_accuracy
# Main function, train the initial model or load the model
def main():
    model = Net().to(device)
    optimizer = optim.SGD(model.parameters(), lr=args.lr)
    if args.load_model:
        # Load model
        model.load_state_dict(torch.load(os.path.join(args.model_dir, 'final_model.pt')))
        trnloss, trnacc = eval_test(model, device, train_loader)
        tstloss, tstacc = eval_test(model, device, test_loader)
        print('trn_loss: {:.4f}, trn_acc: {:.2f}%'.format(trnloss, 100. * trnacc), end=', ')
        print('test_loss: {:.4f}, test_acc: {:.2f}%'.format(tstloss, 100. * tstacc))
    else:
        # Train initial model
        for epoch in range(1, args.epochs + 1):
            start_time = time.time()
            # training
            train(args, model, device, train_loader, optimizer, epoch)
            # get trnloss and testloss
            trnloss, trnacc = eval_test(model, device, train_loader)
            tstloss, tstacc = eval_test(model, device, test_loader)
            # print trnloss and testloss
            print('Epoch ' + str(epoch) + ': ' + str(int(time.time() - start_time)) + 's', end=', ')
            print('trn_loss: {:.4f}, trn_acc: {:.2f}%'.format(trnloss, 100. * trnacc), end=', ')
            print('test_loss: {:.4f}, test_acc: {:.2f}%'.format(tstloss, 100. * tstacc))
        # save model
        torch.save(model.state_dict(), os.path.join(args.model_dir, 'final_model.pt'))

if __name__ == '__main__':
    main()
print(torch.__version__)
```
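As a side note, the `9216` input size of `fc1` follows from convolution arithmetic on the 28×28 MNIST images; a quick sanity check:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Output spatial size of a convolution (floor formula)."""
    return (size + 2 * padding - kernel) // stride + 1

s = 28              # MNIST images are 28x28
s = conv_out(s, 3)  # conv1: 3x3, stride 1 -> 26
s = conv_out(s, 3)  # conv2: 3x3, stride 1 -> 24
s = s // 2          # 2x2 max-pool -> 12
print(64 * s * s)   # 64 channels * 12 * 12 = 9216
```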
# Evaluation
The evaluation strategy is as follows. There are 30 classes of images in the RSICD dataset. We construct a synthetic set of captions that use the pattern "An aerial photograph of a `class_type`" for each of the 30 classes. We feed each image and the synthetic captions into the model under evaluation, and get back predictions of the best caption associated with the image. We compute the match rate at k (the fraction of images whose true class appears among the top k predicted captions) for k = 1, 3, 5, 10, and report the results.
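The "match rate at k" is simply top-k accuracy over the synthetic captions; a small sketch with made-up class names:

```python
def topk_match_rate(labels, ranked_predictions, k):
    """Fraction of images whose true class appears among the top-k predicted classes."""
    hits = sum(label in preds[:k] for label, preds in zip(labels, ranked_predictions))
    return hits / len(labels)

labels = ["airport", "beach", "forest"]
ranked = [
    ["airport", "port", "beach"],     # hit at k=1
    ["desert", "beach", "forest"],    # hit at k=2
    ["beach", "desert", "mountain"],  # miss even at k=3
]
print(round(topk_match_rate(labels, ranked, 1), 3))  # 0.333
print(round(topk_match_rate(labels, ranked, 3), 3))  # 0.667
```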
```
import jax
import jax.numpy as jnp
import json
import matplotlib.pyplot as plt
import numpy as np
import requests
import os
from PIL import Image
from transformers import CLIPProcessor, FlaxCLIPModel
DATA_DIR = "/home/shared/data"
IMAGES_DIR = os.path.join(DATA_DIR, "RSICD_images")
CAPTIONS_FILE = os.path.join(DATA_DIR, "dataset_rsicd.json")
# EVAL_IMAGES_LIST = os.path.join(DATA_DIR, "eval_images.txt")
EVAL_IMAGES_LIST = "eval_images.txt"
EVAL_RESULTS = "eval_results.tsv"
```
### Data
The RSICD dataset is split into train, val, and test sets of 8734, 1094 and 1093 images with associated captions respectively.
We will use only the images with the class name in the image file name.
```
image_filenames = os.listdir(IMAGES_DIR)
len(image_filenames)
image2captions = {}
with open(CAPTIONS_FILE, "r") as fcap:
    data = json.loads(fcap.read())
for image in data["images"]:
    if image["split"] == "test":
        filename = image["filename"]
        if filename.find("_") > 0:
            sentences = []
            for sentence in image["sentences"]:
                sentences.append(sentence["raw"])
            image2captions[filename] = sentences
len(image2captions)
for image_filename in image2captions.keys():
    print("filename:", image_filename)
    image = Image.fromarray(plt.imread(os.path.join(IMAGES_DIR, image_filename)))
    plt.imshow(image)
    print("sentences:", image2captions[image_filename])
    break
```
### Image Classes from file names
```
class_types = sorted(list(set([fn.split("_")[0]
for fn in image_filenames
if fn.find("_") > -1])))
class_types
```
### Model
```
model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image_filename = list(image2captions.keys())[0]
sentences = image2captions[image_filename]
print("image file name:", image_filename)
print("sentences:", sentences)
test_image = Image.fromarray(plt.imread(os.path.join(IMAGES_DIR, image_filename)))
plt.imshow(test_image)
test_sentences = ["An aerial photograph of a {:s}".format(ct) for ct in class_types]
inputs = processor(text=test_sentences,
images=test_image, return_tensors="jax", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = jax.nn.softmax(logits_per_image, axis=-1)
probs
class_types[np.argmax(probs)]
probs_np = np.asarray(probs)[0]
probs_npi = np.argsort(-probs_np)
[(class_types[i], probs_np[i]) for i in probs_npi[0:5]]
```
### Putting everything together
```
def predict_one_image(image_file, model, processor, class_types, k):
label = image_file.split('_')[0]
    test_sentences = ["An aerial photograph of a {:s}".format(ct) for ct in class_types]
image = Image.fromarray(plt.imread(os.path.join(IMAGES_DIR, image_file)))
inputs = processor(text=test_sentences,
images=image,
return_tensors="jax",
padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = jax.nn.softmax(logits_per_image, axis=-1)
probs_np = np.asarray(probs)[0]
probs_npi = np.argsort(-probs_np)
predictions = [(class_types[i], probs_np[i]) for i in probs_npi[0:k]]
return label, predictions
test_image = list(image2captions.keys())[100]
label, preds = predict_one_image(test_image, model, processor, class_types, 10)
image = Image.fromarray(plt.imread(os.path.join(IMAGES_DIR, test_image)))
plt.imshow(image)
print("label:", label)
print("predictions")
for class_type, class_prob in preds:
print("{:20s} {:.3f}".format(class_type, class_prob))
num_predicted = 0
fres = open(EVAL_RESULTS, "w")
for image_file, _ in image2captions.items():
if num_predicted % 100 == 0:
print("{:d} images processed".format(num_predicted))
# print("predicting class of image:", image_file)
label, preds = predict_one_image(image_file, model, processor, class_types, 10)
fres.write("{:s}\t{:s}\t{:s}\n".format(
image_file, label, "\t".join(["{:s}\t{:.5f}".format(c, p) for c, p in preds])))
num_predicted += 1
print("{:d} images processed, COMPLETE".format(num_predicted))
fres.close()
```
### Generate accuracy@k Scores from results
```
RESULTS_DIR = "results"
RESULTS_FILE = os.path.join(RESULTS_DIR, "baseline.tsv")
K_VALUES = [1, 3, 5, 10]
num_examples = 0
correct_k = [0] * len(K_VALUES)
fres = open(RESULTS_FILE, "r")
for line in fres:
cols = line.strip().split('\t')
label = cols[1]
preds = []
for i in range(2, 22, 2):
preds.append(cols[i])
for kid, k in enumerate(K_VALUES):
preds_k = set(preds[0:k])
if label in preds_k:
correct_k[kid] += 1
num_examples += 1
fres.close()
scores_k = [ck / num_examples for ck in correct_k]
print("\t".join(["score@{:d}".format(k) for k in K_VALUES]))
print("\t".join(["{:.3f}".format(s) for s in scores_k]))
```
| github_jupyter |
```
import pandas as pd
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MinMaxScaler, LabelEncoder
import scikitplot.metrics as skplt
import joblib
y = pd.read_csv("../Data/y_eng_oversampled.csv", header=None, names=["Y"])
bow = pd.read_csv('../Cache/Embeddings/bow.csv')
X_train, X_test, Y_train, Y_test = train_test_split(bow, y, random_state = 0, test_size = 0.3)
X_train.shape, Y_train.shape, X_test.shape, Y_test.shape
def run_the_mn_models(model, X_train, X_test, Y_tr, Y_te):
X_train_text_df, X_test_text_df, y_train, y_test = X_train, X_test, Y_tr, Y_te
if model == 'mnb1':
mn_params = {
'fit_prior': [True],
'alpha': [0, 0.5, 1]}
M = GridSearchCV(MultinomialNB(),
mn_params,
cv = 5,
verbose = 1,
n_jobs = -1)
elif model == 'mnb2':
mn_params = {
'fit_prior': [False],
'alpha': [0, 0.5, 1]}
M = GridSearchCV(MultinomialNB(),
mn_params,
cv = 5,
verbose = 1,
n_jobs = -1)
else:
        raise ValueError(f'Unknown model: {model}')  # fail fast; M would be undefined below
M.fit(X_train_text_df.values, y_train)
print(f'Train score = {M.score(X_train_text_df.values, y_train)}')
print(f'Test score = {M.score(X_test_text_df.values, y_test)}')
predictions = M.predict(X_test_text_df.values)
predictions_train = M.predict(X_train_text_df.values)
print('--------')
print(skplt.plot_confusion_matrix(y_test, predictions))
print(f'Best params = {M.best_params_}')
print('----F1 Score, Recall, Precision----')
# print precision, recall, F1-score per each class/tag
print(classification_report(y_test, predictions))
# #skplt.plot_roc_curve(predictions,y_test)
# print('----ROC AUC CURVE SCORE----')
# print("ROC AUC CURVE SCORE FOR TEST: ",roc_auc_score(y_test, predictions))
# print("ROC AUC CURVE SCORE FOR TRAIN: ",roc_auc_score(y_train, predictions_train))
run_the_mn_models("mnb1", X_train, X_test, Y_train, Y_test)
def run_the_lr_models(model, X_train, X_test, Y_tr, Y_te):
X_train_text_df, X_test_text_df, y_train, y_test = X_train, X_test, Y_tr, Y_te
if model == 'lr1':
lr_1_params = {
'penalty': ['l1'],
'C': [1, 1.5, 2, 2.5],
'class_weight': ['balanced'],
'warm_start': [True, False],
'random_state': [42],
'solver': ['liblinear']}
M = GridSearchCV(LogisticRegression(),
lr_1_params,
cv = 5,
verbose = 1,
n_jobs = -1)
elif model == 'lr2':
lr_2_params = {
'penalty': ['l2'],
'C': [1, 1.5, 2, 2.5],
'class_weight': ['balanced'],
'warm_start': [True, False],
'random_state': [42],
'solver': ['lbfgs', 'liblinear']}
M = GridSearchCV(LogisticRegression(),
lr_2_params,
cv = 5,
verbose = 1,
n_jobs = -1)
else:
        raise ValueError(f'Unknown model: {model}')  # fail fast; M would be undefined below
M.fit(X_train_text_df.values, y_train)
print(f'Train score = {M.score(X_train_text_df.values, y_train)}')
print(f'Test score = {M.score(X_test_text_df.values, y_test)}')
predictions = M.predict(X_test_text_df.values)
predictions_train = M.predict(X_train_text_df.values)
print('--------')
print(skplt.plot_confusion_matrix(y_test, predictions))
print(f'Best params = {M.best_params_}')
print('----F1 Score, Recall, Precision----')
# print precision, recall, F1-score per each class/tag
print(classification_report(y_test, predictions))
# print('----ROC AUC CURVE SCORE----')
# print("ROC AUC CURVE SCORE FOR TEST: ",roc_auc_score(y_test, predictions))
# print("ROC AUC CURVE SCORE FOR TRAIN: ",roc_auc_score(y_train, predictions_train))
run_the_lr_models("lr2", X_train, X_test, Y_train, Y_test)
def run_the_sv_models(model, X_train, X_test, Y_tr, Y_te):
X_train_text_df, X_test_text_df, y_train, y_test = X_train, X_test, Y_tr, Y_te
if model == 'sv1':
sv_params = {
'kernel': ['rbf'],
'gamma': [1e-3, 1e-4],
'C': [1, 10, 100, 1000] }
M = GridSearchCV(SVC(probability=True),
sv_params,
cv = 5,
verbose = 1,
n_jobs = -1)
elif model == 'sv2':
sv_params = {
'kernel': ['rbf'],
'gamma': [0.01, 1, 10, 100],
'C': [1, 10, 100, 1000] }
M = GridSearchCV(SVC(probability=True),
sv_params,
cv = 5,
verbose = 1,
n_jobs = -1)
else:
        raise ValueError(f'Unknown model: {model}')  # fail fast; M would be undefined below
M.fit(X_train_text_df.values, y_train)
#save in pickle file
joblib.dump(M, "SVM_TFIDF.pkl")
print(f'Train score = {M.score(X_train_text_df.values, y_train)}')
print(f'Test score = {M.score(X_test_text_df.values, y_test)}')
predictions = M.predict(X_test_text_df.values)
predictions_train = M.predict(X_train_text_df.values)
print('--------')
print(skplt.plot_confusion_matrix(y_test, predictions))
print(f'Best params = {M.best_params_}')
print('----F1 Score, Recall, Precision----')
# print precision, recall, F1-score per each class/tag
print(classification_report(y_test, predictions))
# print('----ROC AUC CURVE SCORE----')
# print("ROC AUC CURVE SCORE FOR TEST: ",roc_auc_score(y_test, predictions))
# print("ROC AUC CURVE SCORE FOR TRAIN: ",roc_auc_score(y_train, predictions_train))
run_the_sv_models("sv2", X_train, X_test, Y_train, Y_test)
```
| github_jupyter |
```
# from utils import *
import os
os.chdir("../../scVI/")
os.getcwd()
import pickle
import numpy as np
import pandas as pd
from copy import deepcopy
save_path = '../CSF/Notebooks/'
celllabels = np.load(save_path + 'meta/celllabels.npy')
celltypes, labels = np.unique(celllabels,return_inverse=True)
# from numpy import savetxt
# savetxt('../CSF/DE/raw/celllables.csv', celllabels.astype(str), delimiter=',',fmt='%s')
# with open(save_path + 'posterior/all_datasets.vae.full.pkl', 'rb') as f:
# full = pickle.load(f)
# DEres, DEclust = full.one_vs_all_degenes(cell_labels=labels, output_file=False,save_dir=save_path, filename='LouvainClusters')
# with open(save_path + 'DE/allclust.DEres.pkl', 'wb') as f:
# pickle.dump((DEres,DEclust),f)
with open(save_path + 'DE/allclust.DEres.pkl', 'rb') as f:
DEres,DEclust = pickle.load(f)
genenames = pd.read_csv('../CSF/RFiles/genenames.csv',header=None)
clean = [1,2,3,4,5,6,7,8,10,11,13,14,17,18,19,20,21]
temp=[x in clean for x in DEclust]
DEres = [DEres[i] for i, x in enumerate(temp) if x]
DEclust = [DEclust[i] for i, x in enumerate(temp) if x]
from statsmodels.stats.multitest import multipletests
celltype = []
combinedDEres = []
for i,x in enumerate(celltypes[DEclust]):
temp = pd.read_csv('../CSF/DE/wilcoxon/MannWhitneyU.norm.allclusters.%s.csv'%x)
temp.index = list(genenames[0])
fdr_wil = multipletests(temp['pvalue'],method='fdr_bh')
temp = pd.concat([temp,DEres[i]],axis=1,sort=True)
temp = temp.sort_values(by='bayes1',ascending=False)
combined = deepcopy(temp[['stat','pvalue','bayes1','bayes2','mean1','mean2','nonz1','nonz2','clusters',]])
combined['scVI_logFC'] = np.log(temp['scale1']/temp['scale2'])
combined['norm_logFC'] = np.log(temp['norm_mean1']/temp['norm_mean2'])
temp = pd.read_csv('../CSF/EdgeR/allcluster.batchcorrected.%s.edgeR.csv'%x)
fdr_edgeR = multipletests(temp['PValue'],method='fdr_bh')
temp.index = list(genenames[0])
temp = temp[['logFC','logCPM','F','PValue']]
combined = pd.concat([combined,temp],axis=1,sort=True)
combined['fdr_wil'] = fdr_wil[1]
combined['fdr_edgeR'] = fdr_edgeR[1]
celltype.append(x)
combinedDEres.append(combined)
```
# Genes specific to ncMono in CSF
```
np.where(np.asarray(celltype)=='ncMono')
combinedDEres[14].loc[['CD9', 'CD163', 'EGR1', 'BTG2', 'C1QA', 'C1QB', 'MAF', 'CSF1R', 'LYVE1',
                       'TREM2', 'TMEM119', 'GPR34', 'STAB1', 'MRC1', 'CH25H']]
from pandas import ExcelWriter
import xlsxwriter
writer = pd.ExcelWriter(save_path + 'DE/allclusters.xlsx', engine='xlsxwriter')
for i, x in enumerate(celltype):
combinedDEres[i].to_excel(writer, sheet_name=str(x))
writer.close()
temp = pd.concat(combinedDEres)
x = temp.loc['CD3E']
x.loc[(x['fdr_wil']<0.2)&(x['fdr_edgeR']<0.2)&
(x['bayes1']>0.3) &
(x['logFC']>0) & (x['norm_logFC']>0)]
filtered = [x.loc[(x['fdr_wil']<0.05)&(x['fdr_edgeR']<0.05)&
(x['bayes1']>0.5) &
(x['logFC']>0) & (x['norm_logFC']>0)] for x in combinedDEres]
DEgenes = pd.concat(filtered)
DEgenes['clusters'] = np.asarray(celltypes)[np.asarray(DEgenes['clusters']).astype(int)]
DEgenes.to_csv(save_path+'DE/clustermarkers.csv')
filtered = [x.loc[(x['fdr_wil']<0.2)&(x['fdr_edgeR']<0.2)&
(x['bayes1']>0.3) &
(x['logFC']>0) & (x['norm_logFC']>0)] for x in combinedDEres]
DEgenes = pd.concat(filtered)
DEgenes['clusters'] = np.asarray(celltypes)[np.asarray(DEgenes['clusters']).astype(int)]
DEgenes.to_csv(save_path+'DE/clustermarkers.relaxed.csv')
DEgenes.loc['CD3E']
```
# Finding shared DE genes
```
CD4 = np.asarray([x in ['CD4','Tdg','Tregs'] for x in DEgenes['clusters']])
names, occ = np.unique(DEgenes.loc[CD4].index,return_counts=True)
names[occ==3]
CDab = np.asarray([x in ['CD4','Tdg','Tregs','CD8a','CD8n'] for x in DEgenes['clusters']])
names, occ = np.unique(DEgenes.loc[CDab].index,return_counts=True)
names[occ>=4]
DEgenes.loc[['CD8B','CCL5']]
DEgenes.loc[['FOXP3']]
DEgenes.loc[['CTLA4']]
DEgenes.loc[['TRDC']]
DEgenes.loc[['NKG7']]
DEgenes.loc[['FCGR3A']]
DEgenes.loc[['PRF1']]
DEgenes.loc[['XCL1']]
temp = [x for x in combinedDEres[0].index if x.startswith('IGH')]
DEgenes.loc[temp[:9]]
DEgenes.loc[['CD37']]
DEgenes.loc[['IGHD']]
DEgenes.loc[['IGHG1','CD38','TNFRSF17']]
DEgenes.loc[['LYZ']]
DEgenes.loc[['WDFY4', 'XCR1', 'BATF3']]
DEgenes.loc[['FCER1A', 'CD1C', 'CLEC10A']]
DEgenes.loc[['S100A8', 'S100A9', 'TSPO']]
DEgenes.loc[['CD14','FCGR3A']]
DEgenes.loc[['TCF4', 'JCHAIN']]
DEgenes.loc[['GNG11', 'CLU']]
```
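The shared-gene queries above boil down to counting how many clusters call each gene DE. A dependency-free sketch of the same idea (gene and cluster names here are illustrative):

```python
from collections import Counter

# Each (gene, cluster) pair marks the gene as DE in that cluster;
# a gene shared by k clusters simply appears k times.
de_hits = [("CD3E", "CD4"), ("CD3E", "Tregs"), ("CD3E", "Tdg"),
           ("FOXP3", "Tregs"), ("CCL5", "CD8a")]
counts = Counter(gene for gene, _ in de_hits)
shared = [gene for gene, occ in counts.items() if occ >= 3]
print(shared)  # ['CD3E']
```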
# older code
```
filtered = [x.loc[(x['pvalue']<0.05/10026) &
(x['bayes1']>3) &
(x['scVI_logFC']>0) & (x['norm_logFC']>0)] for x in combinedDEres]
DEgenes = pd.concat(filtered)
geneid, nocc = np.unique(DEgenes.index,return_counts=True)
shared = pd.DataFrame([geneid,nocc],index=['genename','occ']).T
shared = shared.sort_values(by='occ',ascending=False)
DEgenes['clusters'] = np.asarray(celltypes)[np.asarray(DEgenes['clusters']).astype(int)]
np.unique(DEgenes['clusters'],return_counts=True)
temp = [x for x in list(genenames[0]) if x.startswith('ITGA')]
DEgenes.loc[temp]
DEgenes.to_csv(save_path + 'DE/allcluster.csv')
import matplotlib.pyplot as plt  # plotting was not imported earlier in this notebook
plt.scatter(y=np.abs(combined['bayes1']), x=combined['scVI_logFC'], s=1)
plt.scatter(x=combined['logFC'], y=combined['scVI_logFC'], s=1)
```
| github_jupyter |
## BiRNN Overview
<img src="https://ai2-s2-public.s3.amazonaws.com/figures/2016-11-08/191dd7df9cb91ac22f56ed0dfa4a5651e8767a51/1-Figure2-1.png" alt="nn" style="width: 600px;"/>
References:
- [Long Short Term Memory](http://deeplearning.cs.cmu.edu/pdfs/Hochreiter97_lstm.pdf), Sepp Hochreiter & Jurgen Schmidhuber, Neural Computation 9(8): 1735-1780, 1997.
## MNIST Dataset Overview
This example is using MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28).

To classify images using a recurrent neural network, we consider every image row as a sequence of pixels. Because MNIST image shape is 28*28px, we will then handle 28 sequences of 28 timesteps for every sample.
More info: http://yann.lecun.com/exdb/mnist/
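The row-as-timestep view described above is just a reshape; a small stand-alone illustration with toy values (not MNIST data):

```python
import numpy as np

# A flattened 784-pixel digit becomes 28 timesteps of 28 features each,
# which is the shape an (Bi)LSTM consumes.
flat = np.arange(784, dtype=np.float32)  # stand-in for one flattened image
seq = flat.reshape(28, 28)               # (timesteps, features)
print(seq.shape)   # (28, 28)
print(seq[0, :3])  # first 3 pixels of the top image row
```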
```
from __future__ import absolute_import, division, print_function
# Import TensorFlow v2.
import tensorflow as tf
from tensorflow.keras import Model, layers
import numpy as np
# MNIST dataset parameters.
num_classes = 10 # total classes (0-9 digits).
num_features = 784 # data features (img shape: 28*28).
# Training Parameters
learning_rate = 0.001
training_steps = 1000
batch_size = 32
display_step = 100
# Network Parameters
# MNIST image shape is 28*28px, we will then handle 28 sequences of 28 timesteps for every sample.
num_input = 28 # number of sequences.
timesteps = 28 # timesteps.
num_units = 32 # number of neurons for the LSTM layer.
# Prepare MNIST data.
from tensorflow.keras.datasets import mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# Convert to float32.
x_train, x_test = np.array(x_train, np.float32), np.array(x_test, np.float32)
# Reshape images into sequences of 28 rows of 28 pixels each for the RNN.
x_train, x_test = x_train.reshape([-1, 28, 28]), x_test.reshape([-1, 28, 28])
# Normalize images value from [0, 255] to [0, 1].
x_train, x_test = x_train / 255., x_test / 255.
# Use tf.data API to shuffle and batch data.
train_data = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_data = train_data.repeat().shuffle(5000).batch(batch_size).prefetch(1)
# Create LSTM Model.
class BiRNN(Model):
# Set layers.
def __init__(self):
super(BiRNN, self).__init__()
# Define 2 LSTM layers for forward and backward sequences.
lstm_fw = layers.LSTM(units=num_units)
lstm_bw = layers.LSTM(units=num_units, go_backwards=True)
# BiRNN layer.
self.bi_lstm = layers.Bidirectional(lstm_fw, backward_layer=lstm_bw)
# Output layer (num_classes).
self.out = layers.Dense(num_classes)
# Set forward pass.
def call(self, x, is_training=False):
x = self.bi_lstm(x)
x = self.out(x)
if not is_training:
# tf cross entropy expect logits without softmax, so only
# apply softmax when not training.
x = tf.nn.softmax(x)
return x
# Build LSTM model.
birnn_net = BiRNN()
# Cross-Entropy Loss.
# Note that this will apply 'softmax' to the logits.
def cross_entropy_loss(x, y):
# Convert labels to int 64 for tf cross-entropy function.
y = tf.cast(y, tf.int64)
# Apply softmax to logits and compute cross-entropy.
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=x)
# Average loss across the batch.
return tf.reduce_mean(loss)
# Accuracy metric.
def accuracy(y_pred, y_true):
# Predicted class is the index of highest score in prediction vector (i.e. argmax).
correct_prediction = tf.equal(tf.argmax(y_pred, 1), tf.cast(y_true, tf.int64))
return tf.reduce_mean(tf.cast(correct_prediction, tf.float32), axis=-1)
# Adam optimizer.
optimizer = tf.optimizers.Adam(learning_rate)
# Optimization process.
def run_optimization(x, y):
# Wrap computation inside a GradientTape for automatic differentiation.
with tf.GradientTape() as g:
# Forward pass.
pred = birnn_net(x, is_training=True)
# Compute loss.
loss = cross_entropy_loss(pred, y)
# Variables to update, i.e. trainable variables.
trainable_variables = birnn_net.trainable_variables
# Compute gradients.
gradients = g.gradient(loss, trainable_variables)
# Update W and b following gradients.
optimizer.apply_gradients(zip(gradients, trainable_variables))
# Run training for the given number of steps.
for step, (batch_x, batch_y) in enumerate(train_data.take(training_steps), 1):
# Run the optimization to update W and b values.
run_optimization(batch_x, batch_y)
if step % display_step == 0:
pred = birnn_net(batch_x, is_training=True)
loss = cross_entropy_loss(pred, batch_y)
acc = accuracy(pred, batch_y)
print("step: %i, loss: %f, accuracy: %f" % (step, loss, acc))
```
| github_jupyter |
```
# connect to google colab
from google.colab import drive
drive.mount("/content/drive")
# base path
DATA_PATH = './drive/MyDrive/fyp-code/codes/data/emotion_classification/'
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix, cohen_kappa_score
import seaborn as sns
```
## Import the 3 sampled datasets with 2 sets of annotations
```
sampled_data_toy = pd.read_csv(DATA_PATH+'emotion_classification_toy_data_sampled_iaa.csv')[['Text','Label','annotator_1','annotator_2']]
sampled_data_short = pd.read_csv(DATA_PATH+'emotion_classification_short_text_sampled_iaa.csv')[['Text','Label','annotator_1','annotator_2']]
sampled_data_long = pd.read_csv(DATA_PATH+'emotion_classification_long_text_sampled_iaa.csv')[['Text','Label','annotator_1','annotator_2']]
```
## Get the labels for each of the data
```
toy_annotate_true = sampled_data_toy['Label']
toy_annotate_1 = sampled_data_toy['annotator_1']
toy_annotate_2 = sampled_data_toy['annotator_2']
short_annotate_true = sampled_data_short['Label']
short_annotate_1 = sampled_data_short['annotator_1']
short_annotate_2 = sampled_data_short['annotator_2']
long_annotate_true = sampled_data_long['Label']
long_annotate_1 = sampled_data_long['annotator_1']
long_annotate_2 = sampled_data_long['annotator_2']
```
## Helper functions to get the confusion matrix and the cohen-kappa score
```
def get_score(annotator_a, annotator_b, row_name, col_name):
# get the confusion matrix array
#conf_matrix_arr = confusion_matrix(annotator_a, annotator_b)
#print(conf_matrix_arr)
# cohen-kappa score
print(cohen_kappa_score(annotator_a, annotator_b))
# seaborn confusion matrix
cm_sns = pd.crosstab(annotator_a, annotator_b, rownames=[row_name], colnames=[col_name])
sns.heatmap(cm_sns, annot=True, fmt="d")
```
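For reference, the Cohen's kappa used in `get_score` is observed agreement corrected for chance agreement. A minimal hand-rolled version, a sketch for intuition rather than a replacement for `sklearn.metrics.cohen_kappa_score`:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Observed agreement corrected for chance agreement between two raters."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[c] * cb[c] for c in set(a) | set(b)) / (n * n)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

a = ["pos", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "pos"]
print(round(cohen_kappa(a, b), 3))  # 0.333
```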
## Get the following comparisons
```
# toy_annotate_1 vs toy_annotate_2
get_score(annotator_a=toy_annotate_1,
annotator_b=toy_annotate_2,
row_name='annotator 1 - toy dataset',
col_name='annotator 2 - toy dataset')
# toy_annotate_true vs toy_annotate_1
get_score(annotator_a=toy_annotate_true,
annotator_b=toy_annotate_1,
row_name='annotator true - toy dataset',
col_name='annotator 1 - toy dataset')
# toy_annotate_true vs toy_annotate_2
get_score(annotator_a=toy_annotate_true,
annotator_b=toy_annotate_2,
row_name='annotator true - toy dataset',
col_name='annotator 2 - toy dataset')
# short_annotate_1 vs short_annotate_2
get_score(annotator_a=short_annotate_1,
annotator_b=short_annotate_2,
row_name='annotator 1 - short dataset',
col_name='annotator 2 - short dataset')
# short_annotate_true vs short_annotate_1
get_score(annotator_a=short_annotate_true,
annotator_b=short_annotate_1,
row_name='annotator true - short dataset',
col_name='annotator 1 - short dataset')
# short_annotate_true vs short_annotate_2
get_score(annotator_a=short_annotate_true,
annotator_b=short_annotate_2,
row_name='annotator true - short dataset',
col_name='annotator 2 - short dataset')
# long_annotate_1 vs long_annotate_2
get_score(annotator_a=long_annotate_1,
annotator_b=long_annotate_2,
row_name='annotator 1 - long dataset',
col_name='annotator 2 - long dataset')
# long_annotate_true vs long_annotate_1
get_score(annotator_a=long_annotate_true,
annotator_b=long_annotate_1,
row_name='annotator true - long dataset',
col_name='annotator 1 - long dataset')
# long_annotate_true vs long_annotate_2
get_score(annotator_a=long_annotate_true,
annotator_b=long_annotate_2,
row_name='annotator true - long dataset',
col_name='annotator 2 - long dataset')
```
| github_jupyter |
# test note
* Jupyter must be running as a container
* The full testbed suite must already be started
```
!pip install --upgrade pip
!pip install --force-reinstall ../lib/ait_sdk-0.1.3-py3-none-any.whl
from pathlib import Path
import pprint
from ait_sdk.test.hepler import Helper
import json
# settings cell
# mounted dir
root_dir = Path('/workdir/root/ait')
ait_name='alyz_regression_dist_1var_treemap_tf2.3'
ait_version='0.1'
ait_full_name=f'{ait_name}_{ait_version}'
ait_dir = root_dir / ait_full_name
td_name=f'{ait_name}_test'
# root folder (on the Docker host) for storing assets used in inventory registration
current_dir = %pwd
with open(f'{current_dir}/config.json', encoding='utf-8') as f:
json_ = json.load(f)
root_dir = json_['host_ait_root_dir']
is_container = json_['is_container']
invenotory_root_dir = f'{root_dir}\\ait\\{ait_full_name}\\local_qai\\inventory'
# entry point address
# the port number differs depending on whether this runs in a container, so switch entry points accordingly
if is_container:
backend_entry_point = 'http://host.docker.internal:8888/qai-testbed/api/0.0.1'
ip_entry_point = 'http://host.docker.internal:8888/qai-ip/api/0.0.1'
else:
backend_entry_point = 'http://host.docker.internal:5000/qai-testbed/api/0.0.1'
ip_entry_point = 'http://host.docker.internal:6000/qai-ip/api/0.0.1'
# AIT deployment flag
# once this has been run, it does not need to be run again
is_init_ait = True
# inventory registration flag
# once this has been run, it does not need to be run again
is_init_inventory = True
helper = Helper(backend_entry_point=backend_entry_point,
ip_entry_point=ip_entry_point,
ait_dir=ait_dir,
ait_full_name=ait_full_name)
from glob import glob
path = Path(str(ait_dir) + '/deploy/container')
print(path)
print(Path(path / 'ait.manifest.json').exists())
files = glob(str(path.joinpath(f'**/ait.manifest.json')), recursive=True)
print(files)
if len(files) == 0:
    print('ait.manifest.json not found in zip.')
elif len(files) > 1:
    print('exactly one ait.manifest.json must exist in zip.')
print(files[0])
helper._find_file(path, 'ait.manifest.json')
# health check
helper.get_bk('/health-check')
helper.get_ip('/health-check')
# create ml-component
res = helper.post_ml_component(name=f'MLComponent_{ait_full_name}', description=f'Description of {ait_full_name}', problem_domain=f'ProbremDomain of {ait_full_name}')
helper.set_ml_component_id(res['MLComponentId'])
# deploy AIT
if is_init_ait:
helper.deploy_ait_non_build()
else:
print('skip deploy AIT')
res = helper.get_data_types()
model_data_type_id = [d for d in res['DataTypes'] if d['Name'] == 'model'][0]['Id']
dataset_data_type_id = [d for d in res['DataTypes'] if d['Name'] == 'dataset'][0]['Id']
res = helper.get_file_systems()
unix_file_system_id = [f for f in res['FileSystems'] if f['Name'] == 'UNIX_FILE_SYSTEM'][0]['Id']
windows_file_system_id = [f for f in res['FileSystems'] if f['Name'] == 'WINDOWS_FILE'][0]['Id']
# add inventories
if is_init_inventory:
inv1_name = helper.post_inventory('dataset_for_verification', dataset_data_type_id, windows_file_system_id,
f'{invenotory_root_dir}\\dataset_for_verification\\dataset_for_verification.csv',
'dataset for verification', ['csv'])
inv2_name = helper.post_inventory('categories', dataset_data_type_id, windows_file_system_id,
f'{invenotory_root_dir}\\categories\\categories.csv',
'category variables of dataset', ['csv'])
else:
print('skip add inventories')
# get ait_json and inventory_jsons
res_json = helper.get_bk('/QualityMeasurements/RelationalOperators', is_print_json=False).json()
eq_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '=='][0])
nq_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '!='][0])
gt_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '>'][0])
ge_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '>='][0])
lt_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '<'][0])
le_id = int([r['Id'] for r in res_json['RelationalOperator'] if r['Expression'] == '<='][0])
res_json = helper.get_bk('/testRunners', is_print_json=False).json()
ait_json = [j for j in res_json['TestRunners'] if j['Name'] == ait_name][-1]
inv_1_json = helper.get_inventory(inv1_name)
inv_2_json = helper.get_inventory(inv2_name)
# add test_descriptions
helper.post_td(td_name, 2,
quality_measurements=[],
target_inventories=[
{"Id":1, "InventoryId": inv_1_json['Id'], "TemplateInventoryId": ait_json['TargetInventories'][0]['Id']},
{"Id":2, "InventoryId": inv_2_json['Id'], "TemplateInventoryId": ait_json['TargetInventories'][1]['Id']}
],
test_runner={
"Id":ait_json['Id'],
"Params":[]
})
# get test_description_jsons
td_1_json = helper.get_td(td_name)
# run test_descriptions
helper.post_run_and_wait(td_1_json['Id'])
res_json = helper.get_td_detail(td_1_json['Id'])
pprint.pprint(res_json)
# generate report
res = helper.post_report(td_1_json['Id'])
pprint.pprint(res)
```
| github_jupyter |
# Relation extraction with BERT
---
The goal of this notebook is to show how to use [BERT](https://arxiv.org/abs/1810.04805)
to [extract relation](https://en.wikipedia.org/wiki/Relationship_extraction) from text.
Used libraries:
- [PyTorch](https://pytorch.org/)
- [PyTorch-Lightning](https://pytorch-lightning.readthedocs.io/en/latest/)
- [Transformers](https://huggingface.co/transformers/index.html)
Used datasets:
- SemEval 2010 Task 8 - [paper](https://arxiv.org/pdf/1911.10422.pdf) - [download](https://github.com/sahitya0000/Relation-Classification/blob/master/corpus/SemEval2010_task8_all_data.zip?raw=true)
- Google IISc Distant Supervision (GIDS) - [paper](https://arxiv.org/pdf/1804.06987.pdf) - [download](https://drive.google.com/open?id=1gTNAbv8My2QDmP-OHLFtJFlzPDoCG4aI)
## High level overview
We will experiment with two architectures: single-classifier & duo-classifier


The classifiers are implemented as follows:

## Install dependencies
This project uses [Python 3.7+](https://www.python.org/downloads/release/python-378/)
```
!pip install requests==2.23.0 numpy==1.18.5 pandas==1.0.3 \
scikit-learn==0.23.1 pytorch-lightning==0.8.4 torch==1.5.1 \
transformers==3.0.2 sklearn==0.0 tqdm==4.45.0 neptune-client==0.4.119 \
matplotlib==3.1.0 scikit-plot==0.3.7
```
## Import needed modules
```
import gc
import json
import math
import os
from abc import ABC, abstractmethod
from collections import OrderedDict
from random import randint
from typing import Iterable, Tuple
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import torch
from matplotlib.figure import Figure
from pandas import DataFrame
from pytorch_lightning import LightningModule, seed_everything
from pytorch_lightning import Trainer as LightningTrainer
from pytorch_lightning.logging.neptune import NeptuneLogger
from sklearn.metrics import *
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.utils import column_or_1d
from torch import Tensor, nn
from torch.nn import functional as F
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR
from torch.utils.data import DataLoader, IterableDataset
from tqdm.auto import tqdm
from transformers import *
```
## Define constants
```
# --- Random seed ---
SEED = 2020
seed_everything(SEED)
# --- Directory ---
ROOT_DIR = os.path.abspath(".")
PROCESSED_DATA_DIR = os.path.join(ROOT_DIR, "data/processed")
METADATA_FILE_NAME = os.path.join(PROCESSED_DATA_DIR, "metadata.json")
CHECKPOINT_DIR = os.path.join(ROOT_DIR, "checkpoint")
KAGGLE_ENV = bool(os.getenv("KAGGLE_URL_BASE"))
if KAGGLE_ENV:
# in Kaggle environment
# 2 datasets should already been added to the notebook
RAW_DATA_DIR = os.path.join(ROOT_DIR, "../input")
else:
# in local environment
RAW_DATA_DIR = os.path.join(ROOT_DIR, "data/raw")
# --- Datasets ---
DATASET_MAPPING = {
"SemEval2010Task8": {
"dir": os.path.join(RAW_DATA_DIR,"semeval2010-task-8"),
"keep_test_order": True,
"precision_recall_curve_baseline_img": None,
},
"GIDS": {
"dir": os.path.join(RAW_DATA_DIR,"gids-dataset"),
"keep_test_order": False,
"precision_recall_curve_baseline_img": os.path.join(RAW_DATA_DIR,"gids-dataset/GIDS_precision_recall_curve.png"),
}
}
# change this variable to switch dataset in later tasks
DATASET_NAME = list(DATASET_MAPPING.keys())[1]
# --- Subject & object markup ---
SUB_START_CHAR = "["
SUB_END_CHAR = "]"
OBJ_START_CHAR = "{"
OBJ_END_CHAR = "}"
# --- BERT variants ---
# See https://huggingface.co/transformers/pretrained_models.html for the full list
AVAILABLE_PRETRAINED_MODELS = [
"distilbert-base-uncased", # 0
"distilbert-base-cased", # 1
"bert-base-uncased", # 2
"distilgpt2", # 3
"gpt2", # 4
"distilroberta-base", # 5
"roberta-base", # 6
"albert-base-v1", # 7
"albert-base-v2", # 8
"bert-large-uncased", # 9
]
# change this variable to switch pretrained language model
PRETRAINED_MODEL = AVAILABLE_PRETRAINED_MODELS[2]
# if e1 is not related to e2, should "e2 not related to e1" be added to the training set
ADD_REVERSE_RELATIONSHIP = True
# --- Neptune logger ---
# Create a free account at https://neptune.ai/,
# then get the API token and create a project
NEPTUNE_API_TOKEN = " INSERT YOUR API TOKEN HERE "
NEPTUNE_PROJECT_NAME = " INSERT YOUR PROJECT NAME HERE "
```
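To make the role of the four marker characters concrete, a hypothetical markup helper (not part of this notebook) could wrap subject and object spans like so:

```python
SUB_START_CHAR, SUB_END_CHAR = "[", "]"
OBJ_START_CHAR, OBJ_END_CHAR = "{", "}"

def mark_entities(tokens, sub_span, obj_span):
    """Wrap subject/object token spans (start, end exclusive) in marker chars."""
    out = list(tokens)
    # insert at the rightmost span first so earlier indices stay valid
    for (start, end), (open_c, close_c) in sorted(
            [(sub_span, (SUB_START_CHAR, SUB_END_CHAR)),
             (obj_span, (OBJ_START_CHAR, OBJ_END_CHAR))],
            key=lambda p: -p[0][0]):
        out.insert(end, close_c)
        out.insert(start, open_c)
    return " ".join(out)

tokens = "Alan Turing was born in London".split()
print(mark_entities(tokens, (0, 2), (5, 6)))
# [ Alan Turing ] was born in { London }
```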
## Preprocess

First, we define a custom label encoder. What this label encoder offers that `sklearn.preprocessing.LabelEncoder` fails
to provide:
- Order preservation: labels will be encoded in order they appear in the dataset. Labels appears earlier will have
smaller id. We need this to ensure the `no relation` class is always encoded as `0`
- Multiple fit: `sklearn.preprocessing.LabelEncoder` forgets what was fit the previous time `fit` was called, while our
encoder keeps adding new labels to the existing ones. This is useful when we process a large dataset in batches.
```
class OrdinalLabelEncoder:
def __init__(self, init_labels=None):
if init_labels is None:
init_labels = []
self.mapping = OrderedDict({l: i for i, l in enumerate(init_labels)})
@property
def classes_(self):
return list(self.mapping.keys())
def fit_transform(self, y):
return self.fit(y).transform(y)
def fit(self, y):
y = column_or_1d(y, warn=True)
new_classes = pd.Series(y).unique()
for cls in new_classes:
if cls not in self.mapping:
self.mapping[cls] = len(self.mapping)
return self
def transform(self, y):
y = column_or_1d(y, warn=True)
return [self.mapping[value] for value in y]
```
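A quick sanity check of the two guarantees above. This is a standalone sketch with a minimal stand-in for the encoder (same `fit`/`transform` logic, without the pandas/sklearn input validation), so it can be run on its own:

```python
from collections import OrderedDict

# Minimal stand-in for OrdinalLabelEncoder, illustrating order preservation
# and incremental fitting.
class MiniEncoder:
    def __init__(self, init_labels=None):
        self.mapping = OrderedDict((l, i) for i, l in enumerate(init_labels or []))

    def fit(self, y):
        for cls in y:
            if cls not in self.mapping:
                self.mapping[cls] = len(self.mapping)
        return self

    def transform(self, y):
        return [self.mapping[v] for v in y]

enc = MiniEncoder(init_labels=["Other"])  # the "no relation" class is pinned to id 0
enc.fit(["Cause-Effect", "Other", "Component-Whole"])
print(enc.transform(["Other", "Cause-Effect", "Component-Whole"]))  # [0, 1, 2]

# A second fit keeps the existing ids and only appends new labels.
enc.fit(["Member-Collection"])
print(enc.transform(["Member-Collection"]))  # [3]
```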
Abstract preprocessor class:
```
class AbstractPreprocessor(ABC):
DATASET_NAME = ""
VAL_DATA_PROPORTION = 0.2
NO_RELATION_LABEL = ""
def __init__(self, tokenizer: PreTrainedTokenizer):
self.tokenizer = tokenizer
self.SUB_START_ID, self.SUB_END_ID, self.OBJ_START_ID, self.OBJ_END_ID \
= tokenizer.convert_tokens_to_ids([SUB_START_CHAR, SUB_END_CHAR, OBJ_START_CHAR, OBJ_END_CHAR])
self.label_encoder = OrdinalLabelEncoder([self.NO_RELATION_LABEL])
def preprocess_data(self, reprocess: bool):
print(f"\n---> Preprocessing {self.DATASET_NAME} dataset <---")
# create processed data dir
if not os.path.exists(PROCESSED_DATA_DIR):
print("Creating processed data directory " + PROCESSED_DATA_DIR)
os.makedirs(PROCESSED_DATA_DIR)
# stop preprocessing if file existed
json_file_names = [self.get_dataset_file_name(k) for k in ("train", "val", "test")]
existed_files = [fn for fn in json_file_names if os.path.exists(fn)]
if existed_files:
file_text = "- " + "\n- ".join(existed_files)
if not reprocess:
print("The following files already exist:")
print(file_text)
print("Preprocessing is skipped. See option --reprocess.")
return
else:
print("The following files will be overwritten:")
print(file_text)
train_data, val_data, test_data = self._preprocess_data()
print("Saving to json files")
self._write_data_to_file(train_data, "train")
self._write_data_to_file(val_data, "val")
self._write_data_to_file(test_data, "test")
self._save_metadata({
"train_size": len(train_data),
"val_size": len(val_data),
"test_size": len(test_data),
"no_relation_label": self.NO_RELATION_LABEL,
**self._get_label_mapping()
})
self._create_secondary_data_files()
print("---> Done ! <---")
@abstractmethod
def _preprocess_data(self) -> Tuple[DataFrame, DataFrame, DataFrame]:
pass
def _create_secondary_data_files(self):
"""
From the primary data file, create a data file with binary labels
and a data file with only sentences classified as "related"
"""
with open(METADATA_FILE_NAME) as f:
root_metadata = json.load(f)
metadata = root_metadata[self.DATASET_NAME]
related_only_count = {
"train": 0,
"val": 0,
"test": 0,
}
for key in ["train", "test", "val"]:
print(f"Creating secondary files for {key} data")
origin_file = open(self.get_dataset_file_name(key))
bin_file = open(self.get_dataset_file_name(f"{key}_binary"), "w")
related_file = open(self.get_dataset_file_name(f"{key}_related_only"), "w")
total = metadata[f"{key}_size"]
for line in tqdm(origin_file, total=total):
data = json.loads(line)
if data["label"] != 0:
related_only_count[key] += 1
data["label"] -= 1 # label in "related_only" files is 1 less than the original label
related_file.write(json.dumps(data) + "\n")
data["label"] = 1 # in binary dataset, all "related" classes have label 1
bin_file.write(json.dumps(data) + "\n")
else:
bin_file.write(json.dumps(data) + "\n")
origin_file.close()
bin_file.close()
related_file.close()
print("Updating metadata.json")
for key in ["train", "test", "val"]:
metadata[f"{key}_related_only_size"] = related_only_count[key]
root_metadata[self.DATASET_NAME] = metadata
with open(METADATA_FILE_NAME, "w") as f:
json.dump(root_metadata, f, indent=4)
def _find_sub_obj_pos(self, input_ids_list: Iterable) -> DataFrame:
"""
Find subject and object position in a sentence
"""
sub_start_pos = [self._index(s, self.SUB_START_ID) + 1 for s in input_ids_list]
sub_end_pos = [self._index(s, self.SUB_END_ID, sub_start_pos[i]) for i, s in enumerate(input_ids_list)]
obj_start_pos = [self._index(s, self.OBJ_START_ID) + 1 for s in input_ids_list]
obj_end_pos = [self._index(s, self.OBJ_END_ID, obj_start_pos[i]) for i, s in enumerate(input_ids_list)]
return DataFrame({
"sub_start_pos": sub_start_pos,
"sub_end_pos": sub_end_pos,
"obj_start_pos": obj_start_pos,
"obj_end_pos": obj_end_pos,
})
@staticmethod
def _index(lst: list, ele: int, start: int = 0) -> int:
"""
Find an element in a list. Returns -1 if not found instead of raising an exception.
"""
try:
return lst.index(ele, start)
except ValueError:
return -1
def _clean_data(self, raw_sentences: list, labels: list) -> DataFrame:
if not raw_sentences:
return DataFrame()
tokens = self.tokenizer(raw_sentences, truncation=True, padding="max_length")
data = DataFrame(tokens.data)
data["label"] = self.label_encoder.fit_transform(labels)
sub_obj_position = self._find_sub_obj_pos(data["input_ids"])
data = pd.concat([data, sub_obj_position], axis=1)
data = self._remove_invalid_sentences(data)
return data
def _remove_invalid_sentences(self, data: DataFrame) -> DataFrame:
"""
Remove sentences without subject/object or whose subject/object
is beyond the maximum length the model supports
"""
seq_max_len = self.tokenizer.model_max_length
return data.loc[
(data["sub_end_pos"] < seq_max_len)
& (data["obj_end_pos"] < seq_max_len)
& (data["sub_end_pos"] > -1)
& (data["obj_end_pos"] > -1)
]
def _get_label_mapping(self):
"""
        Returns a mapping from id to label and vice versa from the label encoder
"""
# all labels
id_to_label = dict(enumerate(self.label_encoder.classes_))
label_to_id = {v: k for k, v in id_to_label.items()}
        # for the related_only dataset,
        # ignore id 0, which represents "no relation"
id_to_label_related_only = {k - 1: v for k, v in id_to_label.items() if k != 0}
label_to_id_related_only = {v: k for k, v in id_to_label_related_only.items()}
return {
"id_to_label": id_to_label,
"label_to_id": label_to_id,
"id_to_label_related_only": id_to_label_related_only,
"label_to_id_related_only": label_to_id_related_only,
}
def _write_data_to_file(self, dataframe: DataFrame, subset: str):
"""Write data in a dataframe to train/val/test file"""
lines = ""
for _, row in dataframe.iterrows():
lines += row.to_json() + "\n"
with open(self.get_dataset_file_name(subset), "w") as file:
file.write(lines)
def _save_metadata(self, metadata: dict):
"""Save metadata to metadata.json"""
# create metadata file
if not os.path.exists(METADATA_FILE_NAME):
print(f"Create metadata file at {METADATA_FILE_NAME}")
with open(METADATA_FILE_NAME, "w") as f:
f.write("{}\n")
# add metadata
print("Saving metadata")
with open(METADATA_FILE_NAME) as f:
root_metadata = json.load(f)
with open(METADATA_FILE_NAME, "w") as f:
root_metadata[self.DATASET_NAME] = metadata
json.dump(root_metadata, f, indent=4)
@classmethod
def get_dataset_file_name(cls, key: str) -> str:
return os.path.join(PROCESSED_DATA_DIR, f"{cls.DATASET_NAME.lower()}_{key}.json")
```
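To make the marker scheme concrete: the subject is wrapped in `[ ... ]` and the object in `{ ... }`, and `_find_sub_obj_pos` recovers the spans by scanning the token-id sequence for the marker ids. A toy illustration (the token ids below are made up, not real tokenizer output):

```python
# Fake marker token ids for [, ], {, }
SUB_START_ID, SUB_END_ID, OBJ_START_ID, OBJ_END_ID = 1001, 1002, 1003, 1004

def index_or_minus1(lst, ele, start=0):
    # Same contract as AbstractPreprocessor._index: -1 instead of ValueError
    try:
        return lst.index(ele, start)
    except ValueError:
        return -1

# "the [ cat ] sat on the { mat }" as fake token ids
ids = [5, 1001, 6, 1002, 7, 8, 5, 1003, 9, 1004]
sub_start = index_or_minus1(ids, SUB_START_ID) + 1   # first token inside [ ]
sub_end = index_or_minus1(ids, SUB_END_ID, sub_start)
obj_start = index_or_minus1(ids, OBJ_START_ID) + 1   # first token inside { }
obj_end = index_or_minus1(ids, OBJ_END_ID, obj_start)
print(sub_start, sub_end, obj_start, obj_end)  # 2 3 8 9
```

A sentence where any of these comes back `-1` (marker missing or truncated away) is later dropped by `_remove_invalid_sentences`.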
Concrete preprocessor for each dataset:
```
class SemEval2010Task8Preprocessor(AbstractPreprocessor):
DATASET_NAME = "SemEval2010Task8"
NO_RELATION_LABEL = "Other"
RAW_TRAIN_FILE_NAME = os.path.join(DATASET_MAPPING["SemEval2010Task8"]["dir"],
"SemEval2010_task8_training/TRAIN_FILE.TXT")
RAW_TEST_FILE_NAME = os.path.join(DATASET_MAPPING["SemEval2010Task8"]["dir"],
"SemEval2010_task8_testing_keys/TEST_FILE_FULL.TXT")
RAW_TRAIN_DATA_SIZE = 8000
RAW_TEST_DATA_SIZE = 2717
def _preprocess_data(self):
print("Processing training data")
train_data = self._process_file(
self.RAW_TRAIN_FILE_NAME,
self.RAW_TRAIN_DATA_SIZE,
ADD_REVERSE_RELATIONSHIP,
)
print("Processing test data")
test_data = self._process_file(
self.RAW_TEST_FILE_NAME,
self.RAW_TEST_DATA_SIZE,
False,
)
print("Splitting train & validate data")
train_data, val_data = train_test_split(train_data, shuffle=True, random_state=SEED)
return train_data, val_data, test_data
def _process_file(self, file_name: str, dataset_size: int, add_reverse: bool) -> DataFrame:
raw_sentences = []
labels = []
with open(file_name) as f:
for _ in tqdm(range(dataset_size)):
sent = f.readline()
label, sub, obj = self._process_label(f.readline())
labels.append(label)
raw_sentences.append(self._process_sentence(sent, sub, obj))
if label == "Other" and add_reverse:
labels.append(label)
raw_sentences.append(self._process_sentence(sent, obj, sub))
f.readline()
f.readline()
return self._clean_data(raw_sentences, labels)
@staticmethod
def _process_sentence(sentence: str, sub: int, obj: int) -> str:
return sentence.split("\t")[1][1:-2] \
.replace(f"<e{sub}>", SUB_START_CHAR) \
.replace(f"</e{sub}>", SUB_END_CHAR) \
.replace(f"<e{obj}>", OBJ_START_CHAR) \
.replace(f"</e{obj}>", OBJ_END_CHAR)
@staticmethod
def _process_label(label: str) -> Tuple[str, int, int]:
label = label.strip()
if label == "Other":
return label, 1, 2
nums = list(filter(str.isdigit, label))
return label, int(nums[0]), int(nums[1])
class GIDSPreprocessor(AbstractPreprocessor):
DATASET_NAME = "GIDS"
RAW_TRAIN_FILE_NAME = os.path.join(DATASET_MAPPING["GIDS"]["dir"], "train.tsv")
RAW_VAL_FILE_NAME = os.path.join(DATASET_MAPPING["GIDS"]["dir"], "val.tsv")
RAW_TEST_FILE_NAME = os.path.join(DATASET_MAPPING["GIDS"]["dir"], "test.tsv")
TRAIN_SIZE = 11297
VAL_SIZE = 1864
TEST_SIZE = 5663
NO_RELATION_LABEL = "NA"
def _process_file(self, file_name: str, add_reverse: bool) -> DataFrame:
"""
Process a file in batches
Return the total data size
"""
with open(file_name) as in_file:
lines = in_file.readlines()
raw_sentences = []
labels = []
for line in tqdm(lines):
_, _, sub, obj, label, sent = line.split("\t")
sent = sent.replace("###END###", "")
# add subject markup
new_sub = SUB_START_CHAR + " " + sub.replace("_", " ") + " " + SUB_END_CHAR
new_obj = OBJ_START_CHAR + " " + obj.replace("_", " ") + " " + OBJ_END_CHAR
sent = sent.replace(sub, new_sub).replace(obj, new_obj)
raw_sentences.append(sent)
labels.append(label)
if add_reverse and label == self.NO_RELATION_LABEL:
new_sub = OBJ_START_CHAR + " " + sub.replace("_", " ") + " " + OBJ_END_CHAR
new_obj = SUB_START_CHAR + " " + obj.replace("_", " ") + " " + SUB_END_CHAR
sent = sent.replace(sub, new_sub).replace(obj, new_obj)
raw_sentences.append(sent)
labels.append(label)
return self._clean_data(raw_sentences, labels)
def _preprocess_data(self):
print("Process train dataset")
train_data = self._process_file(
self.RAW_TRAIN_FILE_NAME,
ADD_REVERSE_RELATIONSHIP,
)
print("Process val dataset")
val_data = self._process_file(
self.RAW_VAL_FILE_NAME,
False,
)
print("Process test dataset")
test_data = self._process_file(
self.RAW_TEST_FILE_NAME,
False,
)
return train_data, val_data, test_data
```
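The SemEval annotation line carries both the relation name and the argument order: the digits in `(e1,e2)` / `(e2,e1)` tell us which entity is the subject. `_process_label` in miniature (same logic as the method above, as a free function for easy testing):

```python
from typing import Tuple

def process_label(label: str) -> Tuple[str, int, int]:
    # The relation keeps its full name; the two digits give the subject/object order.
    label = label.strip()
    if label == "Other":
        return label, 1, 2
    nums = list(filter(str.isdigit, label))
    return label, int(nums[0]), int(nums[1])

print(process_label("Cause-Effect(e2,e1)\n"))  # ('Cause-Effect(e2,e1)', 2, 1)
print(process_label("Other\n"))                # ('Other', 1, 2)
```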
Factory method to create preprocessors:
```
def get_preprocessor_class(dataset_name: str = DATASET_NAME):
return globals()[f"{dataset_name}Preprocessor"]
def get_preprocessor(dataset_name: str = DATASET_NAME) -> AbstractPreprocessor:
    tokenizer = AutoTokenizer.from_pretrained(PRETRAINED_MODEL, use_fast=True)
    # Some tokenizers, e.g. the GPT-2 tokenizer, do not define a pad_token;
    # in that case we fall back to the EOS token.
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
preprocessors_class = get_preprocessor_class(dataset_name)
return preprocessors_class(tokenizer)
```
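The factory resolves the class by name through `globals()`, so adding a new dataset only requires defining a class named `<DatasetName>Preprocessor`. The lookup pattern in isolation (the `Dummy*` classes here are purely illustrative):

```python
# Name-based class lookup via globals(), as used by get_preprocessor_class.
class DummyAPreprocessor:
    pass

class DummyBPreprocessor:
    pass

def get_class(dataset_name: str):
    # Convention: the class must be named "<dataset_name>Preprocessor"
    return globals()[f"{dataset_name}Preprocessor"]

print(get_class("DummyA").__name__)  # DummyAPreprocessor
```

A missing class raises `KeyError`, which makes a typo in the dataset name fail fast.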
Preprocess data:
```
preprocessor = get_preprocessor()
preprocessor.preprocess_data(reprocess=True)
```
## Dataset
We adopt the "smart batching" technique from [this article](https://towardsdatascience.com/divide-hugging-face-transformers-training-time-by-2-or-more-21bf7129db9e): samples are sorted by sequence length so that each batch contains sequences of similar length, which minimizes padding.
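The idea in miniature (a standalone toy sketch, not the dataset class itself): sort sequences by length, take contiguous batches, and pad each batch only to its own longest sequence.

```python
import random

# Toy sequences of random length, batch size 3.
random.seed(0)
seqs = [[1] * random.randint(1, 10) for _ in range(9)]
seqs.sort(key=len)  # the core of smart batching: group similar lengths together

batch_size = 3
batches = [seqs[i:i + batch_size] for i in range(0, len(seqs), batch_size)]
for batch in batches:
    max_len = max(len(s) for s in batch)
    # Pad only to the batch-local maximum, not a global maximum.
    padded = [s + [0] * (max_len - len(s)) for s in batch]
    assert all(len(p) == max_len for p in padded)
print([max(len(s) for s in b) for b in batches])  # batch-local max lengths, non-decreasing
```

`GenericDataset.__iter__` below does the same thing on the preprocessed JSON lines, using the sum of `attention_mask` as the length, and additionally picks batch start positions at random during training.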
```
class GenericDataset(IterableDataset):
"""A generic dataset for train/val/test data for both SemEval and GIDS dataset"""
def __init__(self, dataset_name: str, subset: str, batch_size: int, label_transform: str):
assert subset in ["train", "val", "test"]
assert label_transform in ["none", "binary", "related_only"]
file_name = subset if label_transform == "none" \
else f"{subset}_{label_transform}"
preprocessor_class = get_preprocessor_class()
with open(METADATA_FILE_NAME) as f:
metadata = json.load(f)[dataset_name]
size = metadata[f"{subset}_related_only_size"] \
if label_transform is "related_only" \
else metadata[f"{subset}_size"]
self.subset = subset
self.batch_size = batch_size
self.length = math.ceil(size / batch_size)
self.file = open(preprocessor_class.get_dataset_file_name(file_name))
self.keep_test_order = self.subset == "test" and DATASET_MAPPING[dataset_name]["keep_test_order"]
def __del__(self):
if self.file:
self.file.close()
def __iter__(self):
"""
Implement "smart batching"
"""
data = [json.loads(line) for line in self.file]
if not self.keep_test_order:
data = sorted(data, key=lambda x: sum(x["attention_mask"]))
new_data = []
while len(data) > 0:
if self.keep_test_order or len(data) < self.batch_size:
idx = 0
else:
idx = randint(0, len(data) - self.batch_size)
batch = data[idx:idx + self.batch_size]
max_len = max([sum(b["attention_mask"]) for b in batch])
for b in batch:
input_data = {}
for k, v in b.items():
if k != "label":
if isinstance(v, list):
input_data[k] = torch.tensor(v[:max_len])
else:
input_data[k] = torch.tensor(v)
label = torch.tensor(b["label"])
new_data.append((input_data, label))
del data[idx:idx + self.batch_size]
yield from new_data
def __len__(self):
return self.length
def as_batches(self):
input_data = []
label = []
def create_batch():
return (
{k: torch.stack([x[k] for x in input_data]).cuda() for k in input_data[0].keys()},
torch.tensor(label).cuda()
)
for ip, l in self:
input_data.append(ip)
label.append(l)
if len(input_data) == self.batch_size:
yield create_batch()
input_data.clear()
label.clear()
        if input_data:  # the last batch may be empty when size divides evenly
            yield create_batch()
```
## Classifiers
```
class BaseClassifier(LightningModule, ABC):
"""
Base class of all classifiers
"""
dataset_label_transform = None
num_classes = None
@abstractmethod
def loss_function(self, logits: Tensor, label: Tensor) -> Tensor:
"""
Calculate the loss of the model
It MUST take care of the last activation layer
"""
pass
@abstractmethod
def log_metrics(self, epoch_type: str, logits: Tensor, label: Tensor) -> dict:
pass
def __init__(self, pretrained_language_model, dataset_name, batch_size, learning_rate, decay_lr_speed,
dropout_p, activation_function, weight_decay, cls_linear_size, sub_obj_linear_size):
super().__init__()
self.save_hyperparameters()
self.obj_stream = torch.cuda.Stream()
self.sub_stream = torch.cuda.Stream()
self.language_model = AutoModel.from_pretrained(pretrained_language_model)
config = self.language_model.config
self.max_seq_len = config.max_position_embeddings
self.hidden_size = config.hidden_size
self.cls_linear = nn.Linear(config.hidden_size, cls_linear_size)
self.sub_linear = nn.Linear(config.hidden_size, sub_obj_linear_size)
self.obj_linear = nn.Linear(config.hidden_size, sub_obj_linear_size)
self.linear = nn.Linear(cls_linear_size + 2 * sub_obj_linear_size, self.num_classes)
self.dropout = nn.Dropout(p=dropout_p)
self.activation_function = getattr(nn, activation_function)()
def forward(self, sub_start_pos, sub_end_pos,
obj_start_pos, obj_end_pos, *args, **kwargs) -> Tensor:
language_model_output = self.language_model(*args, **kwargs)
if isinstance(language_model_output, tuple):
language_model_output = language_model_output[0]
bz = language_model_output.shape[0]
with torch.cuda.stream(self.sub_stream):
sub = [torch.mean(language_model_output[i, sub_start_pos[i]:sub_end_pos[i]], dim=0) for i in range(bz)]
sub = self.dropout(torch.stack(sub))
sub = self.activation_function(self.sub_linear(sub))
with torch.cuda.stream(self.obj_stream):
obj = [torch.mean(language_model_output[i, obj_start_pos[i]:obj_end_pos[i]], dim=0) for i in range(bz)]
obj = self.dropout(torch.stack(obj))
obj = self.activation_function(self.obj_linear(obj))
cls = self.dropout(language_model_output[:, 0])
cls = self.activation_function(self.cls_linear(cls))
torch.cuda.synchronize()
x = torch.cat([cls, sub, obj], dim=1)
x = self.dropout(x)
logits = self.linear(x)
return logits
def train_dataloader(self) -> DataLoader:
return self.__get_dataloader("train")
def val_dataloader(self) -> DataLoader:
return self.__get_dataloader("val")
def test_dataloader(self) -> DataLoader:
return self.__get_dataloader("test")
def __get_dataloader(self, subset: str) -> DataLoader:
batch_size = self.hparams.batch_size
dataset = GenericDataset(
self.hparams.dataset_name,
subset,
batch_size,
self.dataset_label_transform
)
return DataLoader(
dataset,
batch_size=batch_size,
num_workers=1
)
def configure_optimizers(self):
optimizer = AdamW(
[p for p in self.parameters() if p.requires_grad],
lr=self.hparams.learning_rate,
weight_decay=self.hparams.weight_decay
)
scheduler = LambdaLR(optimizer, lambda epoch: self.hparams.decay_lr_speed[epoch])
return [optimizer], [scheduler]
def training_step(self, batch: Tuple[dict, Tensor], batch_nb: int) -> dict:
input_data, label = batch
logits = self(**input_data)
loss = self.loss_function(logits, label)
log = {"train_loss": loss}
return {"loss": loss, "log": log}
def __eval_step(self, batch: Tuple[dict, Tensor]) -> dict:
input_data, label = batch
logits = self(**input_data)
return {
"logits": logits,
"label": label,
}
def validation_step(self, batch: Tuple[dict, Tensor], batch_nb: int) -> dict:
return self.__eval_step(batch)
def test_step(self, batch: Tuple[dict, Tensor], batch_nb: int) -> dict:
return self.__eval_step(batch)
def __eval_epoch_end(self, epoch_type: str, outputs: Iterable[dict]) -> dict:
assert epoch_type in ["test", "val"]
logits = torch.cat([x["logits"] for x in outputs]).cpu()
label = torch.cat([x["label"] for x in outputs]).cpu()
logs = self.log_metrics(epoch_type, logits, label)
return {"progress_bar": logs}
def validation_epoch_end(self, outputs: Iterable[dict]) -> dict:
return self.__eval_epoch_end("val", outputs)
def test_epoch_end(self, outputs: Iterable[dict]) -> dict:
return self.__eval_epoch_end("test", outputs)
def numeric_labels_to_text(self, label):
"""Revert labels from number to text"""
if self.dataset_label_transform == "binary":
label = ["Positive" if x else "Negative" for x in label]
else:
with open(METADATA_FILE_NAME) as f:
meta = json.load(f)[self.hparams.dataset_name]
if self.dataset_label_transform == "none":
mapping = meta["id_to_label"]
else:
mapping = meta["id_to_label_related_only"]
label = [mapping[str(int(x))] for x in label]
return label
@staticmethod
def plot_confusion_matrix(predicted_label, label) -> Figure:
result = confusion_matrix(label, predicted_label)
display = ConfusionMatrixDisplay(result)
fig, ax = plt.subplots(figsize=(16, 12))
display.plot(cmap=plt.cm.get_cmap("Blues"), ax=ax, xticks_rotation='vertical')
return fig
def log_confusion_matrix(self, prefix: str, predicted_label: Tensor, label: Tensor):
predicted_label = self.numeric_labels_to_text(predicted_label)
label = self.numeric_labels_to_text(label)
fig = self.plot_confusion_matrix(predicted_label, label)
self.logger.experiment.log_image(f"{prefix}_confusion_matrix", fig)
class MulticlassClassifier(BaseClassifier, ABC):
"""
Base class for multiclass classifiers
"""
    def loss_function(self, logits: Tensor, label: Tensor) -> Tensor:
return F.cross_entropy(logits, label)
@staticmethod
def logits_to_label(logits: Tensor) -> Tensor:
return torch.argmax(logits, dim=-1)
def log_metrics(self, epoch_type: str, logits: Tensor, label: Tensor) -> dict:
predicted_label = self.logits_to_label(logits)
self.log_confusion_matrix(epoch_type, predicted_label, label)
logs = {
f"{epoch_type}_avg_loss": float(self.loss_function(logits, label)),
f"{epoch_type}_acc": accuracy_score(label, predicted_label),
f"{epoch_type}_pre_weighted": precision_score(label, predicted_label, average="weighted"),
f"{epoch_type}_rec_weighted": recall_score(label, predicted_label, average="weighted"),
f"{epoch_type}_f1_weighted": f1_score(label, predicted_label, average="weighted"),
f"{epoch_type}_pre_macro": precision_score(label, predicted_label, average="macro"),
f"{epoch_type}_rec_macro": recall_score(label, predicted_label, average="macro"),
f"{epoch_type}_f1_macro": f1_score(label, predicted_label, average="macro"),
}
for k, v in logs.items():
self.logger.experiment.log_metric(k, v)
return logs
class StandardClassifier(MulticlassClassifier):
"""
A classifier that can recognize the "not related" as well as other relations
"""
dataset_label_transform = "none"
def __init__(self, dataset_name, **kwargs):
with open(METADATA_FILE_NAME) as f:
self.num_classes = len(json.load(f)[dataset_name]["label_to_id"])
self.test_proposed_answer = None
super().__init__(dataset_name=dataset_name, **kwargs)
    def log_metrics(self, epoch_type: str, logits: Tensor, label: Tensor) -> dict:
if epoch_type == "test":
self.test_proposed_answer = self.logits_to_label(logits).tolist()
self.__log_precision_recall_curve(epoch_type, logits, label)
return super().log_metrics(epoch_type, logits, label)
def __log_precision_recall_curve(self, epoch_type: str, logits: Tensor, label: Tensor):
"""
Log the micro-averaged precision recall curve
Ref: https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
"""
label = torch.tensor(label_binarize(label, classes=list(range(self.num_classes)))).flatten()
logits = logits.flatten()
pre, rec, thresholds = precision_recall_curve(label, logits)
        f1s = 2 * pre * rec / np.maximum(pre + rec, 1e-12)  # guard against 0/0 -> nan
ix = np.argmax(f1s)
fig, ax = plt.subplots(figsize=(10, 10))
# render the baseline curves as background for comparison
background = DATASET_MAPPING[self.hparams.dataset_name]["precision_recall_curve_baseline_img"]
if background:
img = plt.imread(background)
ax.imshow(img, extent=[0, 1, 0, 1])
no_skill = len(label[label == 1]) / len(label)
ax.plot(rec, pre, label="Our proposed model", color="blue")
ax.set_xlabel("Recall")
ax.set_ylabel("Precision")
ax.legend()
self.logger.experiment.log_image(f"{epoch_type}_pre_rec_curve", fig)
self.logger.experiment.log_metric(
f"{epoch_type}_average_precision_score_micro",
average_precision_score(label, logits, average="micro")
)
class BinaryClassifier(BaseClassifier):
"""
A binary classifier that picks out "not-related" sentences
"""
dataset_label_transform = "binary"
num_classes = 1
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.thresholds = {}
def forward(self, *args, **kwargs):
return super().forward(*args, **kwargs).flatten()
@staticmethod
def yhat_to_label(y_hat: Tensor, threshold: float) -> Tensor:
return (y_hat > threshold).long()
def loss_function(self, logits: Tensor, label: Tensor) -> Tensor:
return F.binary_cross_entropy_with_logits(logits, label.float())
def log_metrics(self, epoch_type: str, logits: Tensor, label: Tensor) -> dict:
y_hat = torch.sigmoid(logits)
if epoch_type == "val":
self.__find_thresholds(y_hat, label)
self.__log_output_distribution(epoch_type, y_hat, label)
logs = {
f"{epoch_type}_avg_loss": float(self.loss_function(logits, label)),
f"{epoch_type}_roc_auc": self.__roc_auc_score(label, y_hat),
}
for criteria, threshold in self.thresholds.items():
prefix = f"{epoch_type}_{criteria}"
predicted_label = self.yhat_to_label(y_hat, threshold)
self.log_confusion_matrix(prefix, predicted_label, label)
logs[f"{prefix}_acc"] = accuracy_score(label, predicted_label)
logs[f"{prefix}_pre"] = precision_score(label, predicted_label, average="binary")
logs[f"{prefix}_rec"] = recall_score(label, predicted_label, average="binary")
logs[f"{prefix}_f1"] = f1_score(label, predicted_label, average="binary")
for k, v in logs.items():
self.logger.experiment.log_metric(k, v)
return logs
@staticmethod
def __roc_auc_score(label: Tensor, y_hat: Tensor) -> float:
try:
return roc_auc_score(label, y_hat)
except ValueError:
return 0
def __find_thresholds(self, y_hat: Tensor, label: Tensor):
"""
Find 3 classification thresholds based on 3 criteria:
- The one that yields highest accuracy
- The "best point" in the ROC curve
- The one that yields highest f1
The results are logged and stored in self.threshold
"""
# best accuracy
best_acc = 0
best_acc_threshold = None
for y in y_hat:
y_predicted = self.yhat_to_label(y_hat, threshold=y)
acc = accuracy_score(label, y_predicted)
if best_acc < acc:
best_acc = acc
best_acc_threshold = y
self.thresholds["best_acc"] = best_acc_threshold
# ROC curve
# https://machinelearningmastery.com/threshold-moving-for-imbalanced-classification/
fpr, tpr, thresholds = roc_curve(label, y_hat)
        gmeans = tpr * (1 - fpr)  # same argmax as the geometric mean sqrt(tpr * (1 - fpr))
ix = np.argmax(gmeans)
self.thresholds["best_roc"] = thresholds[ix]
fig, ax = plt.subplots(figsize=(16, 12))
ax.plot([0,1], [0,1], linestyle="--", label="No Skill")
ax.plot(fpr, tpr, marker=".", label="Logistic")
ax.scatter(fpr[ix], tpr[ix], marker="o", color="black", label="Best")
ax.set_xlabel("False Positive Rate")
ax.set_ylabel("True Positive Rate")
ax.legend()
self.logger.experiment.log_image("roc_curve", fig)
# precision recall curve
# https://machinelearningmastery.com/roc-curves-and-precision-recall-curves-for-classification-in-python/
pre, rec, thresholds = precision_recall_curve(label, y_hat)
        f1s = 2 * pre * rec / np.maximum(pre + rec, 1e-12)  # guard against 0/0 -> nan
ix = np.argmax(f1s)
self.thresholds["best_f1"] = thresholds[ix]
fig, ax = plt.subplots(figsize=(16, 12))
no_skill = len(label[label == 1]) / len(label)
ax.plot([0, 1], [no_skill, no_skill], linestyle="--", label="No Skill")
ax.plot(rec, pre, marker=".", label="Logistic")
ax.scatter(rec[ix], pre[ix], marker="o", color="black", label="Best F1")
ax.set_xlabel("Recall")
ax.set_ylabel("Precision")
ax.legend()
self.logger.experiment.log_image("pre_rec_curve", fig)
# log thresholds
for k, v in self.thresholds.items():
self.logger.experiment.log_metric(f"threshold_{k}", v)
def __log_output_distribution(self, epoch_type: str, y_hat: Tensor, label: Tensor):
"""
Log the distribution of the model output and 3 thresholds with log scale and linear scale
"""
y_neg = y_hat[label == 0].numpy()
y_pos = y_hat[label == 1].numpy()
for scale in ["linear", "log"]:
fig, ax = plt.subplots(figsize=(16, 12))
ax.set_yscale(scale)
ax.hist([y_neg, y_pos], stacked=True, bins=50, label=["No relation", "Related"])
ylim = ax.get_ylim()
for k, v in self.thresholds.items():
ax.plot([v, v], ylim, linestyle="--", label=f"{k} threshold")
ax.legend()
self.logger.experiment.log_image(f"{epoch_type}_distribution_{scale}_scale", fig)
class RelationClassifier(MulticlassClassifier):
"""
A classifier that recognizes relations except for "not-related"
"""
dataset_label_transform = "related_only"
def __init__(self, dataset_name, **kwargs):
with open(METADATA_FILE_NAME) as f:
self.num_classes = len(json.load(f)[dataset_name]["label_to_id_related_only"])
super().__init__(dataset_name=dataset_name, **kwargs)
```
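The threshold search in `BinaryClassifier.__find_thresholds` can be shown in miniature: for each candidate threshold, compute the true- and false-positive rates and keep the threshold that maximizes `tpr * (1 - fpr)`. A pure-Python toy version (made-up scores, no sklearn):

```python
# Toy labels and sigmoid outputs.
y_true = [0, 0, 0, 1, 0, 1, 1, 1]
y_hat = [0.1, 0.2, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9]

def rates(threshold):
    # Strict comparison, matching BinaryClassifier.yhat_to_label (y_hat > threshold)
    tp = sum(1 for t, p in zip(y_true, y_hat) if t == 1 and p > threshold)
    fp = sum(1 for t, p in zip(y_true, y_hat) if t == 0 and p > threshold)
    pos = sum(y_true)
    neg = len(y_true) - pos
    return tp / pos, fp / neg  # tpr, fpr

def score(threshold):
    tpr, fpr = rates(threshold)
    return tpr * (1 - fpr)

best = max(y_hat, key=score)  # candidate thresholds = the observed scores
tpr, fpr = rates(best)
print(best, tpr, fpr)  # 0.35 1.0 0.25
```

The real method uses `sklearn.metrics.roc_curve` to get the candidate thresholds efficiently, but the selection criterion is the same.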
## The official scorer
Some datasets come with official scorers. We define them in this section.
```
class AbstractScorer(ABC):
def __init__(self, experiment_no: int, logger):
self.experiment_no = experiment_no
self.logger = logger
@abstractmethod
def score(self, proposed_answer: dict):
pass
class SemEval2010Task8Scorer(AbstractScorer):
RESULT_FILE = "semeval2010_task8_official_score_{}_{}.txt"
PROPOSED_ANSWER_FILE = "semeval2010_task8_proposed_answer.txt"
SCORER = os.path.join(DATASET_MAPPING["SemEval2010Task8"]["dir"], "SemEval2010_task8_scorer-v1.2/semeval2010_task8_scorer-v1.2.pl")
FORMAT_CHECKER = os.path.join(DATASET_MAPPING["SemEval2010Task8"]["dir"], "SemEval2010_task8_scorer-v1.2/semeval2010_task8_format_checker.pl")
ANSWER_KEY = os.path.join(DATASET_MAPPING["SemEval2010Task8"]["dir"], "SemEval2010_task8_testing_keys/TEST_FILE_KEY.TXT")
def score(self, proposed_answer: dict):
# write test_result to file
with open(METADATA_FILE_NAME) as f:
metadata = json.load(f)
id_to_label = {int(k): v for k, v in metadata[DATASET_NAME]["id_to_label"].items()}
for criteria, answer in proposed_answer.items():
result_file = self.RESULT_FILE.format(self.experiment_no, criteria)
i = 8001
with open(self.PROPOSED_ANSWER_FILE, "w") as f:
for r in answer:
f.write(f"{i}\t{id_to_label[r]}\n")
i += 1
# call the official scorer
os.system(f"perl {self.FORMAT_CHECKER} {self.PROPOSED_ANSWER_FILE}")
os.system(f"perl {self.SCORER} {self.PROPOSED_ANSWER_FILE} {self.ANSWER_KEY} > {result_file}")
# log the official score
with open(result_file) as f:
result = f.read()
print(f">>> Classifier with criteria: {criteria} <<<")
print(result)
print("\n\n")
self.logger.experiment.log_artifact(result_file)
def get_official_scorer(experiment_no: int, logger, dataset_name: str = DATASET_NAME) -> AbstractScorer:
cls = globals().get(dataset_name + "Scorer")
if cls:
return cls(experiment_no, logger)
```
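The proposed-answer file that `score()` writes is simply one `<sentence_id>\t<label>` line per test sentence, with ids starting at 8001 (the SemEval test set continues the training set's numbering). A sketch of the format, using made-up labels:

```python
# Build proposed-answer lines the way SemEval2010Task8Scorer.score does.
answers = ["Cause-Effect(e1,e2)", "Other", "Component-Whole(e2,e1)"]
lines = [f"{8001 + i}\t{label}" for i, label in enumerate(answers)]
print("\n".join(lines))
# 8001	Cause-Effect(e1,e2)
# 8002	Other
# 8003	Component-Whole(e2,e1)
```

This is the format that the official Perl format checker and scorer expect on their input.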
## Reclaiming memory & disk space
See [this](https://stackoverflow.com/a/61707643/7342188) and [this](https://stackoverflow.com/a/57860310/7342188)
```
1 / 0  # guard: remove this line to actually run the cleanup below
trainer = classifier = rel_trainer = rel_classifier = bin_trainer = bin_classifier = None
gc.collect()
torch.cuda.empty_cache()
```
## Training standard classifier
```
GPUS = 1
MIN_EPOCHS = MAX_EPOCHS = 3
BATCH_SIZE = 8 + 56
LEARNING_RATE = 2e-05
LEARNING_RATE_DECAY_SPEED = [1, 1, 0.75, 0.5, 0.25, 0.1, 0.075, 0.05, 0.025, 0.01]
CLS_LINEAR_SIZE = 64
SUB_OBJ_LINEAR_SIZE = 32
DROPOUT_P = 0.2
ACTIVATION_FUNCTION = "PReLU"
WEIGHT_DECAY = 0.01 # default = 0.01
logger = NeptuneLogger(
api_key=NEPTUNE_API_TOKEN,
project_name=NEPTUNE_PROJECT_NAME,
close_after_fit=False,
)
try:
for i in range(4):
print(f"--------- EXPERIMENT {i} ---------")
classifier = trainer = None
gc.collect()
torch.cuda.empty_cache()
trainer = LightningTrainer(
gpus=GPUS,
min_epochs=MIN_EPOCHS,
max_epochs=MAX_EPOCHS,
default_root_dir=CHECKPOINT_DIR,
reload_dataloaders_every_epoch=True, # needed as we loop over a file,
deterministic=False,
checkpoint_callback=False,
logger=logger
)
classifier = StandardClassifier(
pretrained_language_model=PRETRAINED_MODEL,
dataset_name=DATASET_NAME,
batch_size=BATCH_SIZE,
learning_rate=LEARNING_RATE,
decay_lr_speed=LEARNING_RATE_DECAY_SPEED,
dropout_p=DROPOUT_P,
activation_function=ACTIVATION_FUNCTION,
weight_decay=WEIGHT_DECAY,
cls_linear_size=CLS_LINEAR_SIZE,
sub_obj_linear_size=SUB_OBJ_LINEAR_SIZE,
)
trainer.fit(classifier)
trainer.test(classifier)
scorer = get_official_scorer(i, logger)
if scorer:
scorer.score({
"standard": classifier.test_proposed_answer,
})
else:
print("No official scorer found")
except Exception as e:
logger.experiment.stop(str(e))
raise e
else:
logger.experiment.stop()
```
## Training binary classifier
```
GPUS = 1
BIN_MIN_EPOCHS = BIN_MAX_EPOCHS = 4
BIN_BATCH_SIZE = 8
BIN_LEARNING_RATE = 2e-05
BIN_LEARNING_RATE_DECAY_SPEED = [1, 1, 0.5, 0.25, 0.1, 0.1]
BIN_CLS_LINEAR_SIZE = 64
BIN_SUB_OBJ_LINEAR_SIZE = 32
BIN_DROPOUT_P = 0.2
BIN_ACTIVATION_FUNCTION = "PReLU"
BIN_WEIGHT_DECAY = 0.01 # default = 0.01
bin_logger = NeptuneLogger(
api_key=NEPTUNE_API_TOKEN,
project_name=NEPTUNE_PROJECT_NAME,
close_after_fit=False,
)
try:
for i in range(4):
print(f"--------- EXPERIMENT {i} ---------")
bin_classifier = bin_trainer = None
gc.collect()
torch.cuda.empty_cache()
bin_trainer = LightningTrainer(
gpus=GPUS,
min_epochs=BIN_MIN_EPOCHS,
max_epochs=BIN_MAX_EPOCHS,
default_root_dir=CHECKPOINT_DIR,
reload_dataloaders_every_epoch=True, # needed as we loop over a file,
deterministic=False,
checkpoint_callback=False,
logger=bin_logger,
)
bin_classifier = BinaryClassifier(
pretrained_language_model=PRETRAINED_MODEL,
dataset_name=DATASET_NAME,
batch_size=BIN_BATCH_SIZE,
learning_rate=BIN_LEARNING_RATE,
decay_lr_speed=BIN_LEARNING_RATE_DECAY_SPEED,
dropout_p=BIN_DROPOUT_P,
activation_function=BIN_ACTIVATION_FUNCTION,
weight_decay=BIN_WEIGHT_DECAY,
cls_linear_size=BIN_CLS_LINEAR_SIZE,
sub_obj_linear_size=BIN_SUB_OBJ_LINEAR_SIZE,
)
bin_trainer.fit(bin_classifier)
bin_trainer.test(bin_classifier)
except Exception as e:
bin_logger.experiment.stop(str(e))
raise e
else:
bin_logger.experiment.stop()
```
## Train relation classifier
```
GPUS = 1
REL_MIN_EPOCHS = REL_MAX_EPOCHS = 4
REL_BATCH_SIZE = 8 + 56
REL_LEARNING_RATE = 2e-05
REL_LEARNING_RATE_DECAY_SPEED = [1, 1, 0.75, 0.5, 0.25, 0.1, 0.075, 0.05, 0.025, 0.01]
REL_CLS_LINEAR_SIZE = 64
REL_SUB_OBJ_LINEAR_SIZE = 32
REL_DROPOUT_P = 0.2
REL_ACTIVATION_FUNCTION = "PReLU"
REL_WEIGHT_DECAY = 0.01 # default = 0.01
rel_logger = NeptuneLogger(
api_key=NEPTUNE_API_TOKEN,
project_name=NEPTUNE_PROJECT_NAME,
close_after_fit=False,
)
try:
for i in range(4):
print(f"--------- EXPERIMENT {i} ---------")
rel_classifier = rel_trainer = None
gc.collect()
torch.cuda.empty_cache()
rel_trainer = LightningTrainer(
gpus=GPUS,
min_epochs=REL_MIN_EPOCHS,
max_epochs=REL_MAX_EPOCHS,
default_root_dir=CHECKPOINT_DIR,
reload_dataloaders_every_epoch=True, # needed as we loop over a file,
deterministic=False,
checkpoint_callback=False,
logger=rel_logger
)
rel_classifier = RelationClassifier(
pretrained_language_model=PRETRAINED_MODEL,
dataset_name=DATASET_NAME,
batch_size=REL_BATCH_SIZE,
learning_rate=REL_LEARNING_RATE,
decay_lr_speed=REL_LEARNING_RATE_DECAY_SPEED,
dropout_p=REL_DROPOUT_P,
activation_function=REL_ACTIVATION_FUNCTION,
weight_decay=REL_WEIGHT_DECAY,
cls_linear_size=REL_CLS_LINEAR_SIZE,
sub_obj_linear_size=REL_SUB_OBJ_LINEAR_SIZE,
)
rel_trainer.fit(rel_classifier)
rel_trainer.test(rel_classifier)
except Exception as e:
rel_logger.experiment.stop(str(e))
raise e
else:
rel_logger.experiment.stop()
```
## Train 2 classifiers independently then test together
```
def test_together(experiment_no: int, logger, b_classifier: BinaryClassifier, r_classifier: RelationClassifier, dataset_name: str = DATASET_NAME,
bin_batch_size = BIN_BATCH_SIZE, batch_size: int = REL_BATCH_SIZE):
b_classifier.freeze()
r_classifier.freeze()
true_answer = []
# run binary classifier
print("Running binary classifier")
dataset = GenericDataset(dataset_name, subset="test", batch_size=bin_batch_size, label_transform="none")
binary_classify_results = { criteria: [] for criteria in b_classifier.thresholds.keys() }
for input_data, true_label in tqdm(dataset.as_batches(), total=len(dataset)):
# append true answers
true_answer += true_label.tolist()
# run bin classifier
logits = b_classifier(**input_data)
y_hat = torch.sigmoid(logits)
for criteria, threshold in b_classifier.thresholds.items():
label = b_classifier.yhat_to_label(y_hat, threshold)
binary_classify_results[criteria] += label.tolist()
# run relation classifier
print("Running relation classifier")
dataset = GenericDataset(dataset_name, subset="test", batch_size=batch_size, label_transform="none")
relation_classify_result = []
for input_data, true_label in tqdm(dataset.as_batches(), total=len(dataset)):
logits = r_classifier(**input_data)
label = r_classifier.logits_to_label(logits) + 1
relation_classify_result += label.tolist()
# combine results
print("Combining results")
proposed_answer = {}
for criteria in b_classifier.thresholds.keys():
results = zip(relation_classify_result, binary_classify_results[criteria])
final_label = [relation_result if bin_result else 0 for relation_result, bin_result in results]
proposed_answer[criteria] = final_label
# log metric
final_metrics = {}
for criteria in b_classifier.thresholds.keys():
pa = proposed_answer[criteria]
final_metrics.update({
f"test_combined_{criteria}_acc": accuracy_score(true_answer, pa),
f"test_combined_{criteria}_pre_micro": precision_score(true_answer, pa, average="micro"),
f"test_combined_{criteria}_rec_micro": recall_score(true_answer, pa, average="micro"),
f"test_combined_{criteria}_f1_micro": f1_score(true_answer, pa, average="micro"),
f"test_combined_{criteria}_pre_macro": precision_score(true_answer, pa, average="macro"),
f"test_combined_{criteria}_rec_macro": recall_score(true_answer, pa, average="macro"),
f"test_combined_{criteria}_f1_macro": f1_score(true_answer, pa, average="macro"),
})
fig = BaseClassifier.plot_confusion_matrix(pa, true_answer)
logger.experiment.log_image(f"test_combined_{criteria}_confusion_matrix", fig)
for k, v in final_metrics.items():
print(f"{k}: {v * 100}")
for k, v in final_metrics.items():
logger.experiment.log_metric(k, v)
# run the official scorer
scorer = get_official_scorer(experiment_no, logger)
if scorer:
scorer.score(proposed_answer)
else:
print("No official scorer found")
combine_logger = NeptuneLogger(
api_key=NEPTUNE_API_TOKEN,
project_name=NEPTUNE_PROJECT_NAME,
close_after_fit=False,
)
try:
for i in range(4):
print(f"--------- EXPERIMENT {i} ---------")
# clean up
bin_classifier = bin_trainer = rel_classifier = rel_trainer = None
gc.collect()
torch.cuda.empty_cache()
# relation classifier
rel_trainer = LightningTrainer(
gpus=GPUS,
min_epochs=REL_MIN_EPOCHS,
max_epochs=REL_MAX_EPOCHS,
default_root_dir=CHECKPOINT_DIR,
reload_dataloaders_every_epoch=True, # needed as we loop over a file,
deterministic=False,
checkpoint_callback=False,
logger=combine_logger
)
rel_classifier = RelationClassifier(
pretrained_language_model=PRETRAINED_MODEL,
dataset_name=DATASET_NAME,
batch_size=REL_BATCH_SIZE,
learning_rate=REL_LEARNING_RATE,
decay_lr_speed=REL_LEARNING_RATE_DECAY_SPEED,
dropout_p=REL_DROPOUT_P,
activation_function=REL_ACTIVATION_FUNCTION,
weight_decay=REL_WEIGHT_DECAY,
cls_linear_size=REL_CLS_LINEAR_SIZE,
sub_obj_linear_size=REL_SUB_OBJ_LINEAR_SIZE,
)
rel_trainer.fit(rel_classifier)
# binary classifier
bin_trainer = LightningTrainer(
gpus=GPUS,
min_epochs=BIN_MIN_EPOCHS,
max_epochs=BIN_MAX_EPOCHS,
default_root_dir=CHECKPOINT_DIR,
reload_dataloaders_every_epoch=True, # needed as we loop over a file,
deterministic=False,
checkpoint_callback=False,
logger=combine_logger,
)
bin_classifier = BinaryClassifier(
pretrained_language_model=PRETRAINED_MODEL,
dataset_name=DATASET_NAME,
batch_size=BIN_BATCH_SIZE,
learning_rate=BIN_LEARNING_RATE,
decay_lr_speed=BIN_LEARNING_RATE_DECAY_SPEED,
dropout_p=BIN_DROPOUT_P,
activation_function=BIN_ACTIVATION_FUNCTION,
weight_decay=BIN_WEIGHT_DECAY,
cls_linear_size=BIN_CLS_LINEAR_SIZE,
sub_obj_linear_size=BIN_SUB_OBJ_LINEAR_SIZE,
)
bin_trainer.fit(bin_classifier)
# test together
test_together(i, combine_logger, bin_classifier, rel_classifier)
except Exception as e:
combine_logger.experiment.stop(str(e))
raise e
else:
combine_logger.experiment.stop()
```
```
import pandas as pd
import numpy as np
import nltk
import json
import re
from sentence_transformers import SentenceTransformer
from itertools import islice, cycle
from pynndescent import NNDescent
from collections import Counter
from functools import reduce
nltk.download('stopwords')
nltk.download('punkt')
item_data_filename = '../data/interim/item_data.parquet'
df_item = pd.read_parquet(item_data_filename)
#df_item = df_item.loc[:1_000]
raw_filename = '../data/interim/train_dataset.parquet'
df_raw = pd.read_parquet(raw_filename)
df_raw = df_raw.loc[:1_000]
def take(n, iterable):
"Return first n items of the iterable as a list"
return list(islice(iterable, n))
def preproc_user_history(s:str)->list:
return json.loads(s.replace("'", '"').lower())
df_raw['user_history'] = df_raw['user_history'].apply(preproc_user_history)
```
# Feature
## Most searched terms
### Most searched word
```
df_raw.head()
df_raw.loc[3, 'user_history']
def get_most_searched_words(hist:list, n:int=3)->list:
searched_items = reduce(lambda x, y:
x + y['event_info'].split(' ') if y['event_type']=='search'
else x,
hist, [])
common_words_counts = [item.lower()
for tup in take(n, cycle(Counter(searched_items).most_common(n)))
for item in tup]
return common_words_counts
pd.DataFrame(list(df_raw['user_history']
.apply(get_most_searched_words)))
(pd.DataFrame(list(df_raw['user_history']
.apply(get_most_searched_words)))
.rename(columns={0:
'most_searched_word_1',
1:
'most_searched_word_count_1',
2:
'most_searched_word_2',
3:
'most_searched_word_count_2',
4:
'most_searched_word_3',
5:
'most_searched_word_count_3'}))
def preproc_search(s:str)->str:
# TODO: improve search preprocessing
return s.lower()
```
### Most searched bi-gram
```
def token_sliding_window(s:str, size:int):
tokens = s.split(' ')
for i in range(len(tokens) - size + 1):
yield ' '.join(tokens[i:i+size])
list(token_sliding_window("oi tudo bem com vc", 2))
def get_most_searched_ngram(hist:list, n:int=2, m:int=3)->list:
searched_items = reduce(lambda x, y:
x + [y['event_info']] if y['event_type']=='search'
else x,
hist, [])
searched_ngram = reduce(lambda x, y:
x + list(token_sliding_window(y, n)),
searched_items, [])
sorted_cycle = (sorted(take(m, cycle(Counter(searched_ngram)
.most_common(m))),
key=lambda x: x[1],
reverse=True))
common_ngrams_counts = [item
for tup in sorted_cycle
for item in tup]
return common_ngrams_counts
cols_feat = ['most_searched_ngram_1',
'most_searched_ngram_count_1',
'most_searched_ngram_2',
'most_searched_ngram_count_2']
df_raw[cols_feat] = (pd.DataFrame(list(df_raw['user_history']
.apply(get_most_searched_ngram))))
df_raw
```
## Domain embedding
Summarize domains by the top 10 words from all the titles for each domain.
```
df_item[['domain_id', 'title']].groupby(by='domain_id').agg(' '.join)['title']
custom_stopwords = ['kit', '', '+', '-', 'und', 'unidade', 'unidad']
stopwords = (nltk.corpus.stopwords.words('portuguese')
+ nltk.corpus.stopwords.words('spanish')
+ custom_stopwords)
def generate_top_title(s:str, stopwords:list=stopwords, n:int=10)->str:
counter = Counter([w for w in nltk.word_tokenize(s.lower())
if w not in stopwords
and not re.search(r'\d', w)
and len(w) > 2]).most_common(n)
title = ' '.join([w[0] for w in counter])
return title
%%time
df_domain_title = pd.DataFrame(df_item[['domain_id', 'title']]
.groupby(by='domain_id')
.agg(' '.join)
['title']
.apply(generate_top_title))
df_domain_title['title'] = (df_domain_title
.reset_index()
[['domain_id','title']]
.apply(lambda x:
(' '.join(' '.join(x['domain_id']
.lower()
.split('-')[1:])
.split('_'))
+ ' '
+ x['title']),
axis=1)
.values)
df_domain_title
df_domain_title.reset_index()[['domain_id','title']].values[:20]
embedder = SentenceTransformer('distilbert-multilingual-nli-stsb-quora-ranking')
%%time
df_domain_title['title_embedding'] = list(embedder.encode(list(df_domain_title['title'])))
df_domain_title['domain_code'] = list(range(len(df_domain_title)))
df_domain_title
domain_mapper = {x[1]: x[0]
for x in
enumerate(sorted(df_item['domain_id'].dropna().unique()))}
domain_mapper['MLM-YARNS']
```
Now use the domain embeddings to find out which domain the searched items come from.
```
# Temporary solution
df_raw['most_searched_ngram_1'] = df_raw['most_searched_ngram_1'].fillna('None')
%%time
df_raw['most_searched_ngram_1_embedding'] = list(embedder.encode(list(df_raw['most_searched_ngram_1'])))
%%time
data = np.array([np.array(x) for x in df_domain_title['title_embedding'].values])
index = NNDescent(data, metric='cosine')
%%time
query_data = np.array([np.array(x) for x in df_raw['most_searched_ngram_1_embedding'].values])
closest_domain = index.query(query_data, k=5)
closest_domain[0]
closest_domain[0][0]
df_domain_title.iloc[closest_domain[0][0]]
idx = 89
df_raw['most_searched_ngram_1'][idx]
df_domain_title.iloc[closest_domain[0][idx]]
```
# Part 12.2: Introduction to Q-Learning
Q-Learning is a foundational technique upon which deep reinforcement learning is based. Before we explore deep reinforcement learning, it is essential to understand Q-Learning. Several components make up any Q-Learning system.
* **Agent** - The agent is an entity that exists in an environment that takes actions to affect the state of the environment, to receive rewards.
* **Environment** - The environment is the universe that the agent exists in. The environment is always in a specific state that is changed by the actions of the agent.
* **Actions** - Steps that can be performed by the agent to alter the environment
* **Step** - A step occurs each time that the agent performs an action and potentially changes the environment state.
* **Episode** - A chain of steps that ultimately culminates in the environment entering a terminal state.
* **Epoch** - A training iteration of the agent that contains some number of episodes.
* **Terminal State** - A state in which further actions do not make sense. In many environments, a terminal state occurs when the agent has won, has lost, or the environment has exceeded the maximum number of steps.
Q-Learning works by building a table that suggests an action for every possible state. This approach runs into several problems. First, the environment is usually composed of several continuous numbers, resulting in an infinite number of states. Q-Learning handles continuous states by binning these numeric values into ranges.
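As a concrete sketch of this binning idea, each continuous state variable can be scaled into a fixed number of buckets. The bounds below are the MountainCar-v0 observation limits (position in $[-1.2, 0.6]$, velocity in $[-0.07, 0.07]$); the bucket counts are an illustrative choice:

```
import numpy as np

# Observation bounds (MountainCar-v0: position, velocity).
low = np.array([-1.2, -0.07])
high = np.array([0.6, 0.07])
n_buckets = np.array([10, 10])

def to_discrete(state):
    # Scale each variable into [0, n_buckets) and truncate to an integer bucket.
    ratios = (np.asarray(state) - low) / (high - low)
    buckets = (ratios * n_buckets).astype(int)
    # Clip so the upper bound lands in the last bucket instead of overflowing.
    return tuple(np.clip(buckets, 0, n_buckets - 1))

print(to_discrete([-0.5, 0.0]))  # → (3, 5)
```

The Q-Learning code later in this section performs the same kind of scaling in its `calc_discrete_state` function.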
Additionally, Q-Learning primarily deals with discrete actions, such as pressing a joystick up or down. Out of the box, Q-Learning does not deal with continuous inputs, such as a car's accelerator that can be in a range of positions from released to fully engaged. Researchers have come up with clever tricks to allow Q-Learning to accommodate continuous actions.
In the next chapter, we will learn more about deep reinforcement learning. Deep neural networks can help to solve the problems of continuous environments and action spaces. For now, we will apply regular Q-Learning to the Mountain Car problem from OpenAI Gym.
### Introducing the Mountain Car
This section will demonstrate how Q-Learning can create a solution to the mountain car gym environment. The Mountain car is an environment where a car must climb a mountain. Because gravity is stronger than the car's engine, even with full throttle, it cannot merely accelerate up the steep slope. The vehicle is situated in a valley and must learn to utilize potential energy by driving up the opposite hill before the car can make it to the goal at the top of the rightmost hill.
First, it might be helpful to visualize the mountain car environment. The following code shows this environment. This code makes use of TF-Agents to perform this render. Usually, we use TF-Agents for the type of deep reinforcement learning that we will see in the next module. However, for now, TF-Agents is just used to render the mountain car environment.
```
pip install gym
pip install -q tf-agents
pip install -q pyvirtualdisplay
pip install -q PILLOW
import tf_agents
from tf_agents.environments import suite_gym
import PIL.Image
env_name = 'MountainCar-v0'
env = suite_gym.load(env_name)
env.reset()
PIL.Image.fromarray(env.render())
```
The mountain car environment provides the following discrete actions:
* 0 - Apply left force
* 1 - Apply no force
* 2 - Apply right force
The mountain car environment is made up of the following continuous values:
* state[0] - Position
* state[1] - Velocity
The following code shows an agent that applies full throttle to climb the hill. The cart is not strong enough. It will need to use potential energy from the mountain behind it.
```
import gym
from gym.wrappers import Monitor
import glob
import io
import base64
from IPython.display import HTML
from IPython import display as ipythondisplay
import gym
env = gym.make("MountainCar-v0")
env.reset()
done = False
i = 0
while not done:
i += 1
state, reward, done, _ = env.step(2)
env.render()
print(f"Step {i}: State={state}, Reward={reward}")
env.close()
```
### Programmed Car
Now we will look at a car that I hand-programmed. This car is straightforward; however, it solves the problem. The programmed car always applies force in one direction or another. It does not brake. Whatever direction the vehicle is currently rolling, the agent applies power in that direction. Therefore, the car begins to climb a hill, is overpowered, and rolls backward. However, once it starts to roll backward, force is immediately applied in this new direction.
The following code implements this preprogrammed car.
```
import gym
env = gym.make("MountainCar-v0")
state = env.reset()
done = False
i = 0
while not done:
i += 1
if state[1]>0:
action = 2
else:
action = 0
state, reward, done, _ = env.step(action)
env.render()
print(f"Step {i}: State={state}, Reward={reward}")
env.close()
```
We now visualize the preprogrammed car solving the problem.
```
show_video()
```
### Reinforcement Learning
Q-Learning is a system of rewards that the algorithm gives an agent for successfully moving the environment into a state considered successful. These rewards are the Q-values from which this algorithm takes its name. The final output from the Q-Learning algorithm is a table of Q-values that indicate the reward value of every action that the agent can take, given every possible environment state. The agent must bin continuous state values into a fixed finite number of columns.
Learning occurs when the algorithm runs the agent and environment through a series of episodes and updates the Q-values based on the rewards received from actions taken; Figure 12.REINF provides a high-level overview of this reinforcement or Q-Learning loop.
**Figure 12.REINF:Reinforcement/Q Learning**

The Q-values can dictate action by selecting the action column with the highest Q-value for the current environment state. The choice between choosing a random action and a Q-value driven action is governed by the epsilon ($\epsilon$) parameter, which is the probability of random action.
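A minimal sketch of this epsilon-greedy choice follows; the Q-values here are made up for illustration:

```
import numpy as np

rng = np.random.default_rng(0)
q_row = np.array([-1.2, -0.8, -1.5])  # hypothetical Q-values for one state

def choose_action(q_row, epsilon):
    # With probability epsilon, explore with a random action;
    # otherwise exploit the action with the highest Q-value.
    if rng.random() < epsilon:
        return int(rng.integers(len(q_row)))
    return int(np.argmax(q_row))

print(choose_action(q_row, epsilon=0.0))  # → 1 (pure exploitation)
```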
Each time through the training loop, the training algorithm updates the Q-values according to the following equation.
$Q^{new}(s_{t},a_{t}) \leftarrow \underbrace{Q(s_{t},a_{t})}_{\text{old value}} + \underbrace{\alpha}_{\text{learning rate}} \cdot \overbrace{\bigg( \underbrace{\underbrace{r_{t}}_{\text{reward}} + \underbrace{\gamma}_{\text{discount factor}} \cdot \underbrace{\max_{a}Q(s_{t+1}, a)}_{\text{estimate of optimal future value}}}_{\text{new value (temporal difference target)}} - \underbrace{Q(s_{t},a_{t})}_{\text{old value}} \bigg) }^{\text{temporal difference}}$
There are several parameters in this equation:
* alpha ($\alpha$) - The learning rate, how much should the current step cause the Q-values to be updated.
* gamma ($\gamma$) - The discount factor, the percentage of the estimated future reward that the algorithm should consider in this update.
This equation modifies several values:
* $Q(s_t,a_t)$ - The Q-table. For each combination of states, what reward would the agent likely receive for performing each action?
* $s_t$ - The current state.
* $r_t$ - The last reward received.
* $a_t$ - The action that the agent will perform.
The equation works by calculating a delta (the temporal difference) that it applies to the old Q-value, scaled by the learning rate ($\alpha$). A learning rate of 1.0 would fully apply the temporal difference to the Q-values each iteration and would likely be very chaotic.
There are two parts to the temporal difference: the new and old values. The new value is subtracted from the old value to provide a delta; the full amount that we would change the Q-value by if the learning rate did not scale this value. The new value is a summation of the reward received from the last action and the maximum of the Q-values from the resulting state when the client takes this action. It is essential to add the maximum of action Q-values for the new state because it estimates the optimal future values from proceeding with this action.
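To make the update concrete, here is the equation above evaluated once with made-up numbers (not values from an actual run):

```
alpha, gamma = 0.1, 0.95   # learning rate and discount factor
q_old = -2.0               # Q(s_t, a_t)
reward = -1.0              # r_t
max_future_q = -1.5        # max_a Q(s_{t+1}, a)

# temporal difference = (new value) - (old value)
temporal_difference = reward + gamma * max_future_q - q_old
q_new = q_old + alpha * temporal_difference
print(round(q_new, 4))  # → -2.0425
```

Algebraically this is the same as the `(1 - LEARNING_RATE) * current_q + LEARNING_RATE * (reward + DISCOUNT * max_future_q)` form used in the training code for `run_game`.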
### Q-Learning Car
We will now use Q-Learning to produce a car that learns to drive itself. Look out, Tesla! We begin by defining two essential functions.
```
import gym
import numpy as np
# This function converts the floating point state values into
# discrete values. This is often called binning. We divide
# the range that the state values might occupy and assign
# each region to a bucket.
def calc_discrete_state(state):
discrete_state = (state - env.observation_space.low)/buckets
    return tuple(discrete_state.astype(int))  # np.int is deprecated; use the builtin int
# Run one game. The q_table to use is provided. We also
# provide a flag to indicate if the game should be
# rendered/animated. Finally, we also provide
# a flag to indicate if the q_table should be updated.
def run_game(q_table, render, should_update):
done = False
discrete_state = calc_discrete_state(env.reset())
success = False
while not done:
# Exploit or explore
if np.random.random() > epsilon:
# Exploit - use q-table to take current best action
# (and probably refine)
action = np.argmax(q_table[discrete_state])
else:
            # Explore - take a random action
action = np.random.randint(0, env.action_space.n)
# Run simulation step
new_state, reward, done, _ = env.step(action)
# Convert continuous state to discrete
new_state_disc = calc_discrete_state(new_state)
# Have we reached the goal position (have we won?)?
if new_state[0] >= env.unwrapped.goal_position:
success = True
# Update q-table
if should_update:
max_future_q = np.max(q_table[new_state_disc])
current_q = q_table[discrete_state + (action,)]
new_q = (1 - LEARNING_RATE) * current_q + LEARNING_RATE * \
(reward + DISCOUNT * max_future_q)
q_table[discrete_state + (action,)] = new_q
discrete_state = new_state_disc
if render:
env.render()
return success
```
Several hyperparameters are very important for Q-Learning. These parameters will likely need adjustment as you apply Q-Learning to other problems. Because of this, it is crucial to understand the role of each parameter.
* **LEARNING_RATE** The rate at which previous Q-values are updated based on new episodes run during training.
* **DISCOUNT** The amount of significance to give estimates of future rewards when added to the reward for the current action taken. A value of 0.95 would indicate a discount of 5% to the future reward estimates.
* **EPISODES** The number of episodes to train over. Increase this for more complex problems; however, training time also increases.
* **SHOW_EVERY** How many episodes to allow to elapse before showing an update.
* **DISCRETE_GRID_SIZE** How many buckets to use when converting each of the continuous state variables. For example, [10, 10] indicates that the algorithm should use ten buckets for the first and second state variables.
* **START_EPSILON_DECAYING** Epsilon is the probability that the agent will select a random action over what the Q-Table suggests. This value determines the starting probability of randomness.
* **END_EPSILON_DECAYING** How many episodes should elapse before epsilon goes to zero and no random actions are permitted. For example, EPISODES//10 means only the first 1/10th of the episodes might have random actions.
```
LEARNING_RATE = 0.1
DISCOUNT = 0.95
EPISODES = 50000
SHOW_EVERY = 1000
DISCRETE_GRID_SIZE = [10, 10]
START_EPSILON_DECAYING = 0.5
END_EPSILON_DECAYING = EPISODES//10
```
We can now make the environment. If we are running in Google COLAB, we wrap the environment so that it displays inside the web browser. Next, we create the discrete buckets for the state variables and build the Q-table.
```
env = gym.make("MountainCar-v0")
epsilon = 1
epsilon_change = epsilon/(END_EPSILON_DECAYING - START_EPSILON_DECAYING)
buckets = (env.observation_space.high - env.observation_space.low) \
/DISCRETE_GRID_SIZE
q_table = np.random.uniform(low=-3, high=0, size=(DISCRETE_GRID_SIZE \
+ [env.action_space.n]))
success = False
```
With the environment and Q-table in place, we can now train the agent. The loop below runs the required number of episodes; every `SHOW_EVERY` episodes it prints the recent success rate and renders one game without updating the Q-table, while all other episodes update the Q-table without rendering.
```
episode = 0
success_count = 0
# Loop through the required number of episodes
while episode<EPISODES:
episode+=1
done = False
# Run the game. If we are local, display render animation at SHOW_EVERY
# intervals.
if episode % SHOW_EVERY == 0:
#print(f"Current episode: {episode}, success: {success_count}" +\
# " ({float(success_count)/SHOW_EVERY})")
print(float(success_count)/SHOW_EVERY)
success = run_game(q_table, True, False)
success_count = 0
else:
success = run_game(q_table, False, True)
# Count successes
if success:
success_count += 1
# Move epsilon towards its ending value, if it still needs to move
if END_EPSILON_DECAYING >= episode >= START_EPSILON_DECAYING:
epsilon = max(0, epsilon - epsilon_change)
print(success)
```
As you can see, the number of successful episodes generally increases as training progresses. It is not advisable to stop the first time that we observe 100% success over 1,000 episodes. There is a randomness to most games, so it is not likely that an agent would retain its 100% success rate with a new run. Once you observe that the agent has gotten 100% for several update intervals, it might be safe to stop training.
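One way to encode that stopping rule is a sketch like the following; the required streak of three perfect intervals is an arbitrary choice, not a value from the notebook:

```
REQUIRED_STREAK = 3  # consecutive perfect intervals required (an assumption)

def should_stop(interval_success_rates, required_streak=REQUIRED_STREAK):
    # Stop only after the success rate has been 100% for several
    # consecutive update intervals, not just once.
    streak = 0
    for rate in interval_success_rates:
        streak = streak + 1 if rate >= 1.0 else 0
        if streak >= required_streak:
            return True
    return False

print(should_stop([0.8, 1.0, 0.9, 1.0, 1.0, 1.0]))  # → True
print(should_stop([0.8, 1.0, 1.0, 0.9]))            # → False
```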
# Running and Observing the Agent
Now that the algorithm has trained the agent, we can use the following code to observe it in action.
```
run_game(q_table, True, False)
show_video()
```
# Inspecting the Q-Table
We can also display the Q-table. The following code shows the action that the agent would perform for each environment state. Like the weights of a neural network, this table is not straightforward to interpret. Some patterns do emerge, as seen by calculating the means of the rows and columns. The actions seem consistent across the upper and lower halves of both velocity and position.
```
import pandas as pd
df = pd.DataFrame(q_table.argmax(axis=2))
df.columns = [f'v-{x}' for x in range(DISCRETE_GRID_SIZE[0])]
df.index = [f'p-{x}' for x in range(DISCRETE_GRID_SIZE[1])]
df
df.mean(axis=0)
df.mean(axis=1)
```
# Groupby and Resample
- References
    - [Group by: split-apply-combine — pandas 1.4.1 documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#group-by-split-apply-combine)
    - [Resampling — pandas 1.4.1 documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#resampling)
## What is Groupby?
1. Split one dataset into multiple groups (Splitting)
1. Apply a function to each group to compute a value (Applying)
1. Combine the values from step 2 into a single result (Combining)
```
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"date": pd.date_range(start="2000-1-1 0:0:0", periods=9, freq="H"),
"class": np.array(["A", "B", "C"]).repeat(3),
"value A": np.arange(1,10),
"value B": np.arange(1,10) * 100,
}
)
# Inspect the data
df
# Split the data by "class"
df_grouped = df.groupby(by="class")
df_grouped
# Apply max to the "value A" and "value B" columns of each group and combine into one result
mx = df_grouped[["value A", "value B"]].max()
# Inspect the combined result
mx
```
```{tip}
- The column name passed to `by=` can be more than one; in that case, pass a list.
- List of methods that can be applied:
    - [GroupBy Function application — pandas 1.4.1 documentation](https://pandas.pydata.org/docs/reference/groupby.html#function-application)
- To apply several functions at once, use the `agg` or `aggregate` method:
    - [pandas.core.groupby.SeriesGroupBy.aggregate — pandas 1.4.1 documentation](https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.SeriesGroupBy.aggregate.html)
- To use your own function, use the `apply` method:
    - [pandas.core.groupby.GroupBy.apply — pandas 1.4.1 documentation](https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.apply.html)
```
### Inspecting the split groups
1. Put the `groupby object` returned by groupby into a list or a loop; note this can be slow for huge data
1. Use the `groupby object`'s `.get_group()` method
```
list(df_grouped)
for k, df in df_grouped:
print(df)
df_grouped.get_group("A")
```
## What is Resample?
- Groupby over "time"
- Use the `.resample()` method instead of the `.groupby()` method
- Start with these three differences from `groupby()`:
   1. It can **only be used on data whose index represents time**, such as a DatetimeIndex or PeriodIndex
   2. You specify the time unit used to split the data; the string passed for this is called a "Frequency String"
       - Reference: [Frequency String](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects)
   3. The `label` option
       - Specifies which bin edge gets the label
```
# 1. Usable only on data whose index represents time
df = pd.DataFrame(
{
"date": pd.date_range(start="2000-1-1 0:0:0", periods=9, freq="H"),
"class": np.array(["A", "B", "C"]).repeat(3),
"value A": np.arange(1,10),
"value B": np.arange(1,10) * 100,
}
)
# Set the "date" column as this DataFrame's index, in place
df.set_index("date", inplace=True)
df
df.index
# 2. Frequency String
# The resample() method returns a Resampler object
df_resampled = df.resample("2H")
df_resampled
# From here on it works like groupby
# Get the max of "value A" for every 2 hours
df_resampled["value A"].max()
# 3. The label option
# With label="right", the right (last) edge of each bin becomes the label
df.resample("2H", label = "right")["value A"].max()
```
```{tip}
- To group by day of week, use groupby rather than resample
    - Get the weekday number with `index.strftime("%w")` or the weekday name with `index.strftime("%a")`, add it as a column, then groupby that column
    - `.resample("w")` resamples by week
- To look up business days by country, or market open hours, see:
    - [pandas-market-calendars · PyPI](https://pypi.org/project/pandas-market-calendars/)
    - [Calendar Status — pandas_market_calendars 3.4 documentation](https://pandas-market-calendars.readthedocs.io/en/latest/calendars.html)
```
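A weekday groupby along those lines might look like this; the dates and values are invented for illustration:

```
import pandas as pd
import numpy as np

idx = pd.date_range("2022-01-03", periods=14, freq="D")  # starts on a Monday
df_wd = pd.DataFrame({"value": np.arange(14)}, index=idx)

# Add a weekday-name column derived from the index, then groupby it
df_wd["weekday"] = df_wd.index.strftime("%a")
print(df_wd.groupby("weekday")["value"].sum())
```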
## Resampling financial data
### Building OHLCV from trade data
1. Fetch trade data
1. Convert it to a DataFrame with a DatetimeIndex
1. Create a Resampler object with the chosen time unit
1. Apply the `.ohlc()` method
1. Apply the `.sum()` method (volume)
1. Apply the `.count()` method (trade count)
1. Display the result
```
import asyncio
import nest_asyncio
import pandas as pd
import plotly.graph_objects as go
import pybotters
from IPython.display import HTML
nest_asyncio.apply()
```
#### 1. Fetch FTX trade data via pybotters
```
async def get_trades(market_name, start_time, end_time):
async with pybotters.Client(
apis={"ftx": ["", ""]}, base_url="https://ftx.com/api"
) as client:
res = await client.get(
f"/markets/{market_name}/trades",
params={
"start_time": start_time,
"end_time": end_time,
},
)
return await res.json()
# Inspect the fetched data
data = asyncio.run(get_trades("BTC-PERP", 1643641200, 1643727600))
data["result"][:3]
```
#### 2. Build the DataFrame
- Create a DataFrame whose DatetimeIndex comes from the `time` column
```
df = pd.DataFrame(data["result"])
df
df.dtypes
# Convert "time" to datetime and set it as this DataFrame's index
df = pd.DataFrame(data["result"])
df["time"] = pd.to_datetime(df["time"])
df.set_index("time", inplace=True)
df.sort_index(inplace=True)
df
```
#### 3. Create the Resampler object
Example: 1-minute bars
```
rule = "1min"
df_resampled = df.resample(rule, label="right")
df_resampled
```
#### 4. Apply the `.ohlc()` method
- Resampler objects provide an `.ohlc()` method
    - [pandas.core.resample.Resampler.ohlc — pandas 1.4.1 documentation](https://pandas.pydata.org/docs/reference/api/pandas.core.resample.Resampler.ohlc.html)
- Use the `price` column for the OHLC calculation
```
df_ohlc = df_resampled["price"].ohlc()
df_ohlc
# Building OHLC without the ohlc method
df_ohlc_2 = pd.DataFrame(
{
"open": df_resampled["price"].first(),
"high": df_resampled["price"].max(),
"low": df_resampled["price"].min(),
"close": df_resampled["price"].last(),
}
)
df_ohlc_2
```
#### 5. Apply the `.sum()` method (volume)
- Sum `size` to get the traded volume
```
df_ohlc["volume"] = df_resampled["size"].sum()
df_ohlc
```
#### 6. Apply the `.count()` method (trade count)
- Count `id` to get the number of trades
- Any column works, since this is just a row count
```
df_ohlc["count"] = df_resampled["id"].count()
df_ohlc
```
#### As a function
```
def generate_ohlcv(df_resampled):
df_ohlc = df_resampled["price"].ohlc()
df_ohlc["volume"] = df_resampled["size"].sum()
df_ohlc["count"] = df_resampled["id"].count()
return df_ohlc
```
### Building OHLCV separately for buys and sells
- The FTX data includes a `side` column, so groupby `side`, then resample
```
rule = "1min"
df_buy_resampled = df.groupby("side").get_group("buy").resample(rule, label="right")
df_sell_resampled = df.groupby("side").get_group("sell").resample(rule, label="right")
df_buy = generate_ohlcv(df_buy_resampled)
df_sell = generate_ohlcv(df_sell_resampled)
# Rename the columns
df_buy.rename(columns={c:f"{c}_buy" for c in df_buy.columns}, inplace=True)
df_sell.rename(columns={c:f"{c}_sell" for c in df_sell.columns}, inplace=True)
# Concatenate the DataFrames
pd.concat([df_buy,df_sell], axis=1)
```
## Upsampling and downsampling
- Downsampling: from higher frequency to lower (e.g. daily → monthly)
- Upsampling: from lower frequency to higher (e.g. weekly → daily)
- Everything covered above is downsampling.
- Upsampling works the same way; where no data exists, NaN is returned.
```
# Example: get the 30-second max of the close column of the 1-minute df_ohlc
df_ohlc.resample("30s")["close"].max()
```
# Python Tutorial for Data Science
## Introduction to Machine Learning: Classification with k-Nearest Neighbors
#### (Adapted from Data 8 Fall 2017 Project 3)
#### Patrick Chao 1/21/18
# Introduction
The purpose of this notebook is to serve as an elementary python tutorial introducing fundamental data science concepts including data exploration, classification, hyper-parameter tuning, plotting, and loss functions.
The tutorial is centered around the third project from Data 8 Fall 2017. This project involves classifying a movie's genre as either action or romance. In this notebook, we will explore how to process and understand the data, create a model, tune, and test.
## How to Avoid Overfitting: Train/Validation/Test
A huge part of machine learning is ensuring that our model performs well. With current technology, we have access to massive datasets, but the difficulty is turning all of those numbers into understanding. Models can require huge amounts of data and can take hours or days of training to perform well.
The first phase of training a model is, well, *training*.
### Training
In this part, the model continually learns on the data and improves. We use a subset of the data known as the **training set**. Given some model $f$, assume we have input $\vec{x}$ and a true label/output $\vec y$. We would like $f(\vec x)\approx \vec y$, or alternatively we would like to minimize $\|f(\vec x)-\vec y\|$. This value $\|f(\vec x)-\vec y\|$ is known as the **error** or **loss**, how close our model is to the correct value. When training on the training set, the error is more specifically known as the **training error**.
Our model looks through each training data instance and will have some prediction $f(\vec x)$. Based on the value of $\|f(\vec x)-\vec y\|$, the model will change slightly and improve. The more incorrect the prediction was, the more it will change. One method of optimizing our model $f$ is **gradient descent**.
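As an illustrative sketch of gradient descent (not the solver used in the demos below), here is how the squared loss can be minimized for a one-parameter linear model:

```python
import numpy as np

# Fit f(x) = w*x to data generated with true weight 3, by gradient descent
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x

w = 0.0   # initial guess
lr = 0.1  # learning rate
for _ in range(200):
    grad = 2 * np.mean((w * x - y) * x)  # gradient of mean squared error
    w -= lr * grad                       # step against the gradient
print(w)  # converges toward the true weight, 3
```

Each step moves $w$ against the gradient of the loss, so the more wrong the prediction, the larger the update.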
During training, the training error typically follows a curve of this form.
<img src='train.png' width="400" height="400">
You may consider model order as the "complexity" of the model. This may mean more parameters, higher dimensionality, or more training. As your model trains, the training error will continually decrease. An analogy would be predicting a line using a $10$ degree polynomial. Since every $10$ degree polynomial includes linear-degree terms, a $10$ degree polynomial should fit the training data at least as well as a linear model. However, we shall see that this may not always be the case for all data.
### Pitfalls of Training: Overfitting
One trap that models may run into is **overfitting**. This is where the model over-trained on the data and does not extrapolate to other real world examples. The model becomes overly complex and attempts to fit every nuance of the data, and fails to generalize. An analogy would be using a $10$ degree polynomial to fit a line. It may be able to fit the training data extremely well, better than a line would, but it may fail for other points.
The best way to understand this is an example.
Consider the line $y=2x+1$. Assume that for a given value of $x$, the ground-truth value of $y$ is $2x+1$. We would like to find a model $f(x)$ that predicts $y$ as well as possible. To do this, we will have some slightly perturbed input data from the range $100$ to $110$, denoted by the dotted black lines. Mess around with the demo by using various parameters.
```
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(3)
trainDataRange = [100,110]
trainDataRange = np.array(trainDataRange)
#Generate random training data within the trainDataRange
#Parameter for showing plots and number of samples
def generateTrainData(numSamples=500,showPlot=True):
#Generate X data
inputX = np.random.random_sample((numSamples,))*(trainDataRange[1]-trainDataRange[0])+trainDataRange[0]
trueY = a*inputX+b
#Add noise with variance 10
noisyY = trueY+np.random.randn(numSamples)*10
#Plot the data
if showPlot:
plt.plot(inputX,noisyY,'bo')
plt.plot(trainDataRange, trainDataRange*a+b, 'r-', lw=2)
plt.ylabel("Noisy Y")
plt.xlabel("Input X")
plt.title("Clean X and Noisy Y from Linear Relationship")
plt.show()
plt.clf()
return inputX,noisyY
#Validate the data on a larger range
#Default is 70-140
#Training region is denoted by dotted lines
def validate(model,numSamples=500,dataRange=[70,140],showPlot=True):
dataRange=np.array(dataRange)
#Generate x values from the data range
inputX = np.random.random_sample((numSamples,))*(dataRange[1]-dataRange[0])+dataRange[0]
trueY = a*inputX+b
predY=predict(model,inputX)
#Plot graphs
if showPlot:
#A bit of math to determine where to draw the dotted lines
coordX1 = [trainDataRange[0]]*2
coordX2 = [trainDataRange[1]]*2
minY = min(min(dataRange*a+b),min(predY))
maxY = max(max(dataRange*a+b),max(predY))
plt.plot(coordX1, [minY,maxY], 'k-', lw=1,linestyle="--")
plt.plot(coordX2, [minY,maxY], 'k-', lw=1,linestyle="--")
plt.plot(inputX,predY,'bo')
plt.plot(dataRange, dataRange*a+b, 'r-', lw=2)
plt.ylabel("Predicted Y")
plt.xlabel("Input X")
plt.title("Degree " + str (len(model)-1)+ " Model")
plt.show()
return error(trueY,predY)
#Train the data
def model(trainX,trainY,degree=1):
#Creates the vandermonde matrix https://en.wikipedia.org/wiki/Vandermonde_matrix
powers=np.vander(trainX,degree+1)
A=powers
#Solves the normal equation
model = np.linalg.solve(A.T@A,A.T@trainY)
return model
#Predicts given x values based on a model
def predict(model,x):
degree=len(model)-1
powers=np.vander(x,degree+1)
return powers@model
#Determines the error between true Y values and predicted
def error(trueY,predY):
return np.linalg.norm((trueY-predY))/len(trueY)
#Generates graphs of different degree models
#Plots training error and test error
def overfittingDemo(maxDegree=6):
trainX,trainY = generateTrainData(showPlot=False)
trainError = []
testError = []
#Iterate over all model orders
for deg in range(maxDegree+1):
currModel = model(trainX,trainY,degree=deg)
predTrainY = predict(currModel,trainX)
currTrainErr = error(trainY,predTrainY)
currTestErr = validate(currModel,showPlot=True)
trainError.append(currTrainErr)
testError.append(currTestErr)
#Plot the errors
plt.figure(figsize=(15,4))
plt.subplot(1,3,1)
plt.plot(range(maxDegree+1),trainError)
plt.ylabel("Error")
plt.title("Training Error")
plt.subplot(1,3,2)
plt.plot(range(1,maxDegree+1),trainError[1:])
plt.xlabel("Degree of Model")
plt.title("Training Error w/o Deg 0")
plt.subplot(1,3,3)
plt.plot(range(maxDegree+1),testError)
plt.title("Test Error")
plt.show()
# Uncomment if you are curious about the actual error values
# print("Training Errors:",trainError)
# print("Test Errors:",testError)
#True model: y=ax+b
a = 2
b = 1
#To try your own examples
#Uncomment to test around yourself!
overfittingDemo()
#Comment the overfitting demo to try your own parameters
# #Create and Visualize Training Data
# trainX,trainY = generateTrainData()
#Degree 1 Model
# model1 = model(trainX,trainY,degree=1)
# err1 = validate(model1)
# #Degree 2 Model
# model2 = model(trainX,trainY,degree=2)
# err2 = validate(model2)
# #Model Parameters (how close is it to a and b?)
# print("Degree 1 parameters",model1)
# print("Degree 2 parameters",model2)
# #Error Values
# print(err1,err2)
```
### Overfitting Continued
Hopefully from the demo, it is clear that the best model is the linear model. The other higher order polynomials obtain slightly lower training errors from about $0.4405$ to $0.4385$, a $.45\%$ decrease in training error. One might think that just obtaining the lowest training error is best, but from the test error, we find that this results in drastically worse test errors, from $0.077$ for a linear polynomial to $282.6$ for a degree $6$ polynomial, a huge decrease in performance.
Overfitting is shown in the graph below. The y-axis is the true error, a quantity we cannot directly measure, and the x-axis is how complex our model is.
<img src='trueError.png' width="400" height="400">
Another important note is **underfitting**! At the left-hand side of the graph, our model is not complex enough to properly grasp the data and does not perform well on the data. This may be seen through our degree $0$ model.
Overfitting and underfitting are major pitfalls in machine learning. It originally seems that we are doing great because our training error gets lower and lower, but we may have already crossed the threshold where we are overfitting to our data. How do we avoid this?
### Validation
Our savior is validation! The essence of validation is to set aside some data, called **validation data**, that we do not train on, and to measure the error of our model on this validation data. Using this as a form of "safety check", we can determine when our model begins to overfit and stop training there. There are many ways to implement validation, such as setting aside $20\%$ of your data from the start to serve as validation. Another method is known as **k-fold cross-validation**. I will not go into it here, but it is relatively straightforward, so I encourage anyone interested to read [here](https://en.wikipedia.org/wiki/Cross-validation_%28statistics%29).
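A minimal sketch of k-fold cross-validation, using a simple line fit as a stand-in for any model (`cross_validate` is a hypothetical helper, not part of the notebook's code):

```python
import numpy as np

def cross_validate(x, y, k=5, seed=0):
    """Average validation error of a degree-1 fit over k folds."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(x)), k)
    errors = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w, b = np.polyfit(x[train], y[train], 1)   # fit on the other k-1 folds
        errors.append(np.mean((w * x[val] + b - y[val]) ** 2))
    return np.mean(errors)  # mean error on the held-out folds

x = np.linspace(100, 110, 50)
y = 2 * x + 1  # the noiseless y = ax + b line from the demo above
print(cross_validate(x, y))  # near zero: a line generalizes perfectly here
```

Every point gets held out exactly once, so the averaged error estimates how the model behaves on data it never trained on.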
This is one form of ensuring that we do not overfit. Another widely used method is known as **regularization**. This means applying some kind of prior belief to our model. If we believe that our model should rely on only a small number of features of small magnitude, then coefficients of the form $(0,1.99,1.01)$ make more sense than $(-0.0165,5.47, -181)$. One method of regularization is **ridge regression**, where we add a penalty term, weighted by $\lambda$, that essentially adds the magnitude of the weight vector to the loss function. This is a heavily used method of preventing overfitting, as it does not necessarily require you to watch over the model. Just for kicks, try the ridge demo below! If you are curious about ridge regression and how it works, read [here!](https://en.wikipedia.org/wiki/Tikhonov_regularization)
```
np.random.seed(4)
def ridgeDemo(lambdaCoeff,maxDegree=6):
trainX,trainY = generateTrainData(showPlot=False)
currModel = ridgeModel(trainX,trainY,lambdaCoeff=lambdaCoeff,degree=maxDegree)
predTrainY = predict(currModel,trainX)
currTrainErr = error(trainY,predTrainY)
currTestErr = validate(currModel,showPlot=True)
print("Model params with degree",maxDegree,":",currModel)
print("Training Error:",currTrainErr)
print("Test Error:",currTestErr)
def ridgeModel(trainX,trainY,lambdaCoeff,degree=10):
powers=np.vander(trainX,degree+1)
A=powers
regularizationMatrix = lambdaCoeff*np.eye((A.shape[1]))
model = np.linalg.solve(A.T@A+regularizationMatrix,A.T@trainY)
return model
ridgeDemo(lambdaCoeff = 1)
```
The curve is still not a great prediction, but the parameters are incredibly small. For the $x^6$ term, the coefficient is on the order of magnitude $10^{-8}$, and the test error is only $7.97$, significantly better than the $282$ from before. If there were some way to set the extremely small values to $0$, that would be fantastic! (This is exactly the idea behind [lasso](https://en.wikipedia.org/wiki/Lasso_%28statistics%29).)
## Classification vs Regression
In general, there are two major types of machine learning problems, classification and regression.
*Classification* is a problem where we would like to *classify* some sample input into a class or category. For example, we could classify the genre of a movie or classify a handwritten digit as $0-9$. The input may be a list of **features**, or *qualities* of a sample (for digits this would be the individual pixels of the image), and the output is a class or label. Note that the bins are discrete and often categorical, and there are a finite number of classes.
For modeling classification problems, this may involve generating a probability for each class, and selecting the class with the highest probability. In this notebook, we will investigate a simpler model, using a method called *k-nearest neighbors*.
*Regression* does not depend on distinct classes for labels. The input is still a set of features, but the output is instead a continuous value. This may be predicting the population in $5$ years or the temperature tomorrow. In this situation, the "right answer" is more vague. If we predict the temperature to be $70$ degrees tomorrow but it is actually $71$, are we right? What if we predicted $70.5$ degrees? This ambiguity adds a layer of complexity to regression that classification does not have.
For modeling regression problems this may be done by creating some function approximation in terms of the input. For example, linear regression is the simplest model, and outputs a continuous value. There are more complex models such as *neural networks* that act as universal function approximators.
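A toy regression sketch along these lines, using hypothetical temperature data and the simplest possible model, a fitted line:

```python
import numpy as np

# Predict a continuous value: temperature as a linear function of the day
days = np.arange(10)
temps = 60 + 1.5 * days              # hypothetical warming trend
slope, intercept = np.polyfit(days, temps, 1)
prediction = slope * 10 + intercept  # extrapolate to day 10
print(prediction)
```

The output is a real number, not a class label, which is the defining difference from classification.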
# k-Nearest Neighbors
The **k-Nearest Neighbors** (kNN) algorithm is one of the simplest models. The core idea is that samples with similar features should have the same label. For example, if we receive an image $A$ as input and would like to classify the digit, we could look at which other images in our training set look like $A$. If we were doing $5$-nearest neighbors, we would find the $5$ images closest to $A$ in our data set and return the most common digit among them. In general, you may choose any value for $k$; $5$ may not be the best choice. Note that this has $O(1)$ training time! This is the fastest possible training, as there is no training!
However, some questions immediately arise. How do you determine how close two images are? Why choose $5$, not $10$ or $100$? There are other consequences as well; you need to look through your entire dataset each time to determine the $k$ closest images, which could take a long time if your training set is huge. The prediction time for kNN is $O(n)$, which is much slower than something like linear regression, where the prediction is $O(1)$.
We will address these questions and the shortcomings of kNN.
A few conceptual questions for understanding:
1. In binary classification (two classes), why is choosing an odd value for $k$ better than an even value?
2. Given two separate ordered pairs of two values, $(a,b)$ and $(x,y)$, what possibilities are there for calculating the "distance" between them? What are the differences between approaches?
3. Assume we are doing image classification. List any possible issues with image classification.
4. What does 1-NN mean? If we have $n$ training data, what is $n$-NN? What are some of the *tradeoffs* for varying $k$ between $1$ and $n$?
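Before working with the movie data, here is a minimal kNN sketch on toy 2-D points (the data and function names are illustrative, not part of the project):

```python
import numpy as np
from collections import Counter

# Minimal k-nearest-neighbors classifier on toy 2-D points
def knn_predict(train_X, train_y, query, k=3):
    # Euclidean distance from the query to every training point
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    # Majority vote among the k nearest labels
    return Counter(train_y[nearest]).most_common(1)[0][0]

X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array(["A", "A", "A", "B", "B", "B"])
print(knn_predict(X, y, np.array([0.5, 0.5]), k=3))  # → A
```

Note that all of the work happens at prediction time: the "training data" is just stored and scanned, which is exactly the $O(n)$ prediction cost discussed above.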
```
# Run this cell to set up the notebook, but please don't change it.
import numpy as np
import math
# These lines set up the plotting functionality and formatting.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
import pandas as pd
#You may need to pip install pandas/matplotlib
#Given a movie title, this returns the frequency of given words
def wordFreq(movieTitle,words):
#Change movieTitle to lower case
movieTitle=movieTitle.lower()
#Change words to lower case
words = [word.lower() for word in words]
#Check if movie title is found
try:
movie = movies[movies["Title"]==movieTitle]
except:
print("Movie title not found!")
return ""
#Check if given words are not found
try:
wordFrequencies = movie[words].to_numpy()[0]  # as_matrix() was removed from pandas
except:
print("Words not found")
return ""
return wordFrequencies
#Let's see what our dataset looks like!
movies = pd.read_csv('movies.csv')
movies.head()
#What type is movies?
print(type(movies))
#What is the frequency of the words "hey" and "i" in the matrix? Try some yourself!
print(wordFreq("batman",["Hey","i"]))
```
## kNN Classification and Feature Selection
Our goal is to be able to classify movies based on the frequency of various words in the script. However, it is not feasible to use all the words, as that would be computationally intensive. An alternative is to select certain features, but which features do we select? One method is to look at which words are often in romance movies but not action movies, and vice versa. This is called **feature selection**.
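One way to sketch this feature-selection idea, assuming a `movies`-style DataFrame with a `Genre` column and per-word frequency columns as above (`genre_gap` is a hypothetical helper, not from the project):

```python
import pandas as pd

def genre_gap(movies, words):
    """Rank words by how differently the two genres use them on average."""
    action = movies[movies["Genre"] == "action"][words].mean()
    romance = movies[movies["Genre"] == "romance"][words].mean()
    # A large absolute gap suggests the word discriminates between genres
    return (action - romance).abs().sort_values(ascending=False)

# Tiny illustrative frame standing in for the real movies.csv
toy = pd.DataFrame({
    "Genre": ["action", "action", "romance", "romance"],
    "run":   [0.9, 0.7, 0.1, 0.2],
    "love":  [0.1, 0.2, 0.8, 0.9],
    "the":   [0.5, 0.5, 0.5, 0.5],
})
print(genre_gap(toy, ["run", "love", "the"]))
```

Words like "the" that both genres use equally score near zero and make poor features, while genre-specific words rise to the top.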
First, we will separate the data into training and validation data. Next, we may create some elementary functions such as the distance between movies, getting movies as pandas series, and finding the $k$ movies closest to some given movie.
```
#Split the data into 80% training and 20% validation
trainingPercentage = 80/100
numMovies = movies.shape[0]
numTraining = int(numMovies * trainingPercentage)
numValidation = numMovies - numTraining
#Training Set
trainingSet = movies[0:numTraining]
#Validation Set
validationSet = movies[numTraining:]
#Separate into action and romance
action = trainingSet[trainingSet["Genre"]=="action"]
romance = trainingSet[trainingSet["Genre"]=="romance"]
#Given two movie titles mov1,mov2, and a list of words
#distance returns the euclidean distance between the two movies using the words as features
def distance(mov1,mov2,words):
mov1Freq=wordFreq(mov1,words)
mov2Freq=wordFreq(mov2,words)
return np.sqrt(sum((mov1Freq-mov2Freq)**2))
#Given a movie title, this returns the row as a pandas series
def getMovie(title):
title = title.lower()
return movies[movies["Title"]==title].squeeze()
#Given a movie as a panda series, determines the k closest movies using words as features
#Returns the dataframe of movies
def kShortestDistance(k,movie,movieSet,words):
distances=[]
#Iterate over all movies
for i in range(movieSet.shape[0]):
currMovieTitle = movieSet.iloc[i]["Title"]
#Get the distance of two movies from two movies
dist = distance(currMovieTitle,movie["Title"],words)
distances.append((dist,i))
#Sort the array
distances = sorted(distances,key=lambda x:x[0])
#Get the indices of the movies
indices = [x[1] for x in distances]
return movieSet.iloc[indices[1:k+1]]
#Faster kShortestDistance using subsetting
def kShortestDistanceFast(k,movie,movieSet,words):
#Subset out the words
movieSubset = movieSet[words]
currMovie = movie[words].squeeze()
#Calculate Distances and sort
distances = ((movieSubset-currMovie)**2).sum(axis=1)
distances = distances.sort_values()
#Shift by the minimum index if the movies do not start at 0
indices = distances.index.tolist()
minIndex = min(indices)
shiftedIndices=(np.array(indices)-minIndex).tolist()
return movieSet.iloc[shiftedIndices[1:k+1]]
#Given a list of movies, returns the majority genre
def getMajority(nearestMovies):
numMovies = nearestMovies.shape[0]
#Count frequency of genres
counts = nearestMovies['Genre'].value_counts()
if len(counts)==1:
return [x[0] for x in counts.items()][0]
if counts["action"]>numMovies/2:
return "action"
return "romance"
#Given a dataset, a set of word features, and the value of k
#Returns the percentage correct (0-100)
def accuracy(dataset,features,k):
numCorrect = 0
#Iterate over all movies
for i in range(dataset.shape[0]):
currMovie = dataset.iloc[i].squeeze()
currMovieGenre = currMovie["Genre"]
#Calculate k closest movies
kClosest = kShortestDistanceFast(k,currMovie,dataset,features)
predGenre = getMajority(kClosest)
#Keep track of number of correct predictions
if predGenre == currMovieGenre:
numCorrect +=1
#Return accuracy as percentage
return numCorrect*1.0/dataset.shape[0]*100
```
The code below uses "power" and "love" as features to find the $5$ closest movies to "batman returns". Then we get the majority of the genres of those five movies, and we find that batman returns is predicted to be action based on those $5$ movies.
```
#Use "power" and "love" as features
features = ["power","love"]
movie = getMovie("batman returns")
#Get the five closest movies to the "batman returns" using the training set
closest=kShortestDistance(5,movie,trainingSet,features)
#Given the closest movies, returns the majority
#Represents the kNN Prediction
getMajority(closest)
```
Use this word plot (courtesy of Data 8) to construct some of your own features!
<img src='wordplot.png' width="700" height="700">
```
#Try with some of your own features!
features = ["power","feel"]
k=5
accuracy(trainingSet,features,k)
#Our chosen features
features = ["men","power","marri","nice","home","captain","move","run","world","huh","happi","move","write","hello"]
```
With our own chosen features, we then use the training set to determine the optimal value for $k$. Afterwards, we use this value of $k$ to find the accuracy on the validation data.
```
#Determine the best value for k
trainAccuracies = []
numKValues = 30
for i in range(numKValues):
acc =accuracy(trainingSet,features,2*i+1)
trainAccuracies.append(acc)
xAxis = ([2*i+1 for i in range(numKValues)])
plt.plot(xAxis,trainAccuracies)
plt.show()
```
Using the previous information about overfitting and underfitting, explain the shape of the graph! Why is the accuracy low for $k=1$ and as $k$ increases past $15$?
```
#Determine best value for k
optimalK=xAxis[np.argmax(trainAccuracies)]
#Best kNN was found with k=11
print("Best k:",optimalK)
#Determine validation error with this value for k
optimalKNNVal = accuracy(validationSet,features,optimalK)
print("Test Accuracy:",optimalKNNVal)
```
Why is the accuracy for the validation set lower than the training accuracy (about $75\%$)?
# Python Basics
Prepared by: Nickolas K. Freeman, Ph.D.
This notebook provides a very basic introduction to the Python programming language. The following description of the Python language was taken from https://en.wikipedia.org/wiki/Python_(programming_language) on 1/6/2018, and serves as a good introduction to the language.
>Python is an interpreted high-level programming language for general-purpose programming. Created by Guido van Rossum and first released in 1991, Python has a design philosophy that emphasizes code readability, and a syntax that allows programmers to express concepts in fewer lines of code, notably using significant whitespace. It provides constructs that enable clear programming on both small and large scales.
>
>Python is a multi-paradigm programming language. Object-oriented programming and structured programming are fully supported, and many of its features support functional programming and aspect-oriented programming (including by metaprogramming and metaobjects (magic methods)). Many other paradigms are supported via extensions, including design by contract and logic programming.
>
>The language's core philosophy is summarized in the document The Zen of Python (PEP 20), which includes aphorisms such as:
>
> - Beautiful is better than ugly
> - Explicit is better than implicit
> - Simple is better than complex
> - Complex is better than complicated
> - Readability counts
>
> Rather than having all of its functionality built into its core, Python was designed to be highly extensible. This compact modularity has made it particularly popular as a means of adding programmable interfaces to existing applications. Van Rossum's vision of a small core language with a large standard library and easily extensible interpreter stemmed from his frustrations with ABC, another programming language that espoused the opposite approach.
>
> While offering choice in coding methodology, the Python philosophy rejects exuberant syntax (such as that of Perl) in favor of a simpler, less-cluttered grammar. As Alex Martelli put it: "To describe something as 'clever' is not considered a compliment in the Python culture." Python's philosophy rejects the Perl "there is more than one way to do it" approach to language design in favor of "there should be one—and preferably only one—obvious way to do it".
>
>Python's developers strive to avoid premature optimization, and reject patches to non-critical parts of CPython that would offer marginal increases in speed at the cost of clarity. When speed is important, a Python programmer can move time-critical functions to extension modules written in languages such as C, or use PyPy, a just-in-time compiler. Cython is also available, which translates a Python script into C and makes direct C-level API calls into the Python interpreter.
>
>An important goal of Python's developers is keeping it fun to use. This is reflected in the language's name—a tribute to the British comedy group Monty Python—and in occasionally playful approaches to tutorials and reference materials, such as examples that refer to spam and eggs (from a famous Monty Python sketch) instead of the standard foo and bar.
>
>A common neologism in the Python community is *pythonic*, which can have a wide range of meanings related to program style. To say that code is pythonic is to say that it uses Python idioms well, that it is natural or shows fluency in the language, that it conforms with Python's minimalist philosophy and emphasis on readability. In contrast, code that is difficult to understand or reads like a rough transcription from another programming language is called unpythonic.
>
>Users and admirers of Python, especially those considered knowledgeable or experienced, are often referred to as Pythonists, Pythonistas, and Pythoneers
Executing (`<SHIFT> + <ENTER>` in a Jupyter notebook) the statement `import this` will print *The Zen of Python*, a set of guiding principles for python developers.
```
import this
```
The following table of contents lists the topics discussed in this notebook. Clicking on any topic will advance the notebook to the associated area.
# Table of Contents
<a id="Table_of_Contents"> </a>
1. [Getting Help](#Getting_Help)<br>
2. [Mathematical Operations and Precedence Relationships](#Math_Ops_Prec)<br>
3. [Variables](#Variables)<br>
4. [Flow Control](#Flow_Control)<br>
4.1 [The Importance of Spacing](#Importance_of_spacing)<br>
4.2 [if Statements](#if_Statements)<br>
4.3 [while Loops](#while_Loops)<br>
4.4 [break Statements](#break_Statements)<br>
4.5 [continue Statements](#continue_Statements)<br>
4.6 [for Loops](#for_Loops)<br>
5. [Data Structures](#Data_Structures)<br>
5.1 [Lists](#Lists)<br>
5.2 [List Comprehensions](#List_Comprehensions)<br>
5.3 [Accessing List Elements](#Access_List_Elements)<br>
5.4 [Iterating Over Lists](#Iterating_Over_Lists)<br>
5.5 [Dictionaries](#Dictionaries)<br>
5.6 [Tuples](#Tuples)<br>
5.7 [Sets](#Sets)<br>
6. [String Formatting](#String_formatting)<br>
7. [Error Handling](#Error_Handling)
#### Disclaimer
As stated previously, this notebook doesn't represent a *comprehensive* overview of the Python programming language. Instead, it provides basic details on data structures that are useful for addressing problems in operations and supply chain management. In this notebook, we will primarily be using objects and operations that are defined in the base language. Other notebooks will look at the extended functionality available through additional libraries that may be imported into a Python project.
Before continuing, it is important to realize that the Python language and the available libraries will continue to evolve. That being said, the objects, functions, and methods described in this notebook may one day change. If changes occur, areas of this notebook that use deprecated features may cease to work and will need to be revised or omitted.
## Getting Help
<a id="Getting_Help"> </a>
Compared to existing languages, Python is very user-friendly in the sense that documentation on the various modules and methods is generally available and easy to access while coding. You can find information regarding Python functions using the built in `help()` function.
[Back to Table of Contents](#Table_of_Contents)<br>
```
help(print)
```
Although we will largely be avoiding the use of libraries that fall outside of the Python base code in this notebook, it is worth noting that you can also use the `help()` function to find information regarding functions and attributes of imported libraries. The following cell block provides an example that shows how to find help for the `std()` function that is part of the NumPy library. We will explore the `NumPy` library in more detail in another notebook.
[Back to Table of Contents](#Table_of_Contents)<br>
```
import numpy as np
help(np.std)
```
Finally, in a Jupyter notebook, hitting the key combination `<SHIFT> + <TAB>` within the arguments area of a Python function will bring up help on the associated function.
[Back to Table of Contents](#Table_of_Contents)<br>
## Mathematical Operations and Precedence Relationships
<a id="Math_Ops_Prec"> </a>
It will often be the case that we wish to perform computations in our programs. The basic mathematical operators implemented in Python follow.
- `**` denotes exponentiation (example: `2 ** 3` evaluates to `8`)
- `%` denotes the modulus/remainder operation (example: `22 % 8` evaluates to `6`)
- `//` denotes integer division (example: `22 // 8` evaluates to `2`)
- `/` denotes floating-point division (example: `22 / 8` evaluates to `2.75`)
- `*` denotes multiplication (example: `3 * 5` evaluates to `15`)
- `-` denotes subtraction (example: `5 - 2` evaluates to `3`)
- `+` denotes addition (example: `5 + 2` evaluates to `7`)
As is true for mathematics in general, Python enforces a precedence relationship among the basic operators. The list of basic operators above is sorted from highest to lowest precedence. For example, the exponentiation operator takes precedence over the subtraction operator. Thus, for the expression `3 + 5 ** 2`, Python will first evaluate `5 ** 2`, which is 25, then `3 + 25`.
Parentheses can be used to enforce a custom precedence relationship. For example, if we write `(3 + 5) ** 2` instead of `3 + 5 ** 2`, Python will first evaluate `3 + 5`, which is 8, then `8 ** 2`. The following two code blocks confirm this behavior.
[Back to Table of Contents](#Table_of_Contents)<br>
```
# Without parentheses
3 + 5 ** 2
# With parentheses
(3 + 5) ** 2
```
## Variables
<a id="Variables"> </a>
Oftentimes, we will need some form of intermediate storage for complex computations or objects. In most programming languages, such storage means are referred to as *variables*. Essentially, a variable is like a partition of your computer's memory where you store an object or value(s). This allows you to use the object or value later in your program.
You create variables using an *assignment statement*. For example, the statement `my_var = 2` creates a new variable named `my_var` and stores the value 2 in it. When naming variables, it is helpful to name them in a manner that reminds you of the value or object stored. You can name a variable anything you would like as long as:
1. It can be only one word, i.e., no spaces.
2. It can use only letters, numbers, and the underscore (_) character.
3. It can’t begin with a number.
The value or object that is assigned to a variable can be updated throughout the execution of a program. The following code block demonstrates this.
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_var = 1
print('The value of my_var is',my_var)
my_var = 2
print('The value of my_var is',my_var)
```
## Flow Control
<a id="Flow_Control"> </a>
A major benefit of using a programming language to perform a computational task is the ability to control flow: evaluating expressions repeatedly, skipping computations in certain cases, or choosing one of several blocks of code to run depending on a condition. This section demonstrates several techniques that can be used to control the flow of a program.
Before discussing different mechanisms for flow control, it is important to understand the various comparison operators that are available in Python.
- the comparison operator `==` means *Equal to*
- the comparison operator `!=` means *Not equal to*
- the comparison operator `<` means *Less than*
- the comparison operator `>` means *Greater than*
- the comparison operator `<=` means *Less than or equal to*
- the comparison operator `>=` means *Greater than or equal to*
These operators evaluate to `True` or `False`, the *boolean* values in Python, depending on the values/expressions they compare. The following code blocks provide some examples.
[Back to Table of Contents](#Table_of_Contents)<br>
```
3 == 3
3 != 3
3 > 3
3 < 3
3 >= 3
3 <= 3
```
You can combine the comparison operators with the additional *boolean* operators `and`, `or`, and `not` to develop more complex expressions. The following code blocks provide examples.
[Back to Table of Contents](#Table_of_Contents)<br>
```
(3 > 4) and (((5 - 7)**2) > 3)
(3 > 4) or (((5 - 7)**2) > 3)
not (3 > 4)
```
### The importance of spacing
<a id="Importance_of_spacing"> </a>
If you are familiar with other coding languages, you are likely used to using some form of braces to indicate that statements are nested. For example, defining a loop that prints the numbers in the interval [1, 10] in C++ may be accomplished with the code:
`#include <iostream>
using namespace std;`
`for(int i = 1; i < 11; i++){
int value = i;
cout << value << endl;
}`
or
`#include <iostream>
using namespace std;`
`for(int i = 1; i < 11; i++)
{int value = i; cout << value << endl;}`
or
`#include <iostream>
using namespace std;`
`for(int i = 1; i < 11; i++)
{int value = i;
cout << value << endl;}`
or many other ways.
In the previous code segment, the braces define statements that are nested in the `for` loop, which we cover later, and the semicolons indicate the end of a statement. In Python, nesting is indicated by spacing. Most Python editors will attempt to *anticipate* the spacing that is needed. However, if you get errors stating that *unexpected indents* exist, you should double-check your spacing. The following code performs the same function as the C++-style loop.
<div class="alert alert-block alert-info">
<b>Indexing starts at zero:</b> In Python, as is true for most programming languages, any counting or indexing typically starts at 0, as opposed to starting at 1.
</div>
<div class="alert alert-block alert-info">
<b>The <i>range</i> function:</b> The range function takes arguments (<i>start</i>, <i>stop</i>, <i>step</i>) and generates a sequence of integers from <i>start</i> to <i>stop-1</i> in increments of <i>step</i>. If <i>step</i> is omitted, the default is a step size of 1. If <i>start</i> is omitted, the default is to start at 0.
</div>
[Back to Table of Contents](#Table_of_Contents)<br>
```
for i in range(1,11):
    value = i
    print(value)
```
In the following code block, we do not include the appropriate indentation, and will get an error if we attempt to execute the cell.
[Back to Table of Contents](#Table_of_Contents)<br>
```
for i in range(1,11):
value = i
print(value)
```
### *if* Statements
<a id="if_Statements"> </a>
One of the most common flow control statements is an *if* statement. Given a Python expression that defines a condition and a clause to be executed if the condition is true, an *if* statement allows us to implement the following logic in the code:
>"If this condition is true, execute the code in the clause."
The following code block provides an example. Feel free to change the value of `my_var` to verify the two statements work correctly.
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_var = 4
if (my_var <= 4):
    print('The value of my_var is less than or equal to 4.')
if (my_var > 4):
    print('The value of my_var is greater than 4.')
```
In addition to checking whether or not a single condition is satisfied, the *if* statement may be extended to perform more comparisons where one of many conditions may be true using the `elif` (else if) and `else` statements. The following code blocks demonstrate the use of the *if-elif-else* structure.
<div class="alert alert-block alert-info">
<b>The <i>else</i> statement:</b> When using an <i>if-elif-else structure</i>, any clauses associated with the else statement are executed whenever none of the conditions in the <i>if</i> and <i>elif</i> checks are satisfied.
</div>
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_var = 10
if ((my_var % 2) == 0) and ((my_var % 3) == 0):
    print('my_var is divisible by 2 and 3')
elif ((my_var % 2) == 0):
    print('my_var is divisible by 2')
elif ((my_var % 3) == 0):
    print('my_var is divisible by 3')
else:
    print('my_var is not divisible by 2 or 3')

my_var = 9
if ((my_var % 2) == 0) and ((my_var % 3) == 0):
    print('my_var is divisible by 2 and 3')
elif ((my_var % 2) == 0):
    print('my_var is divisible by 2')
elif ((my_var % 3) == 0):
    print('my_var is divisible by 3')
else:
    print('my_var is not divisible by 2 or 3')

my_var = 7
if ((my_var % 2) == 0) and ((my_var % 3) == 0):
    print('my_var is divisible by 2 and 3')
elif ((my_var % 2) == 0):
    print('my_var is divisible by 2')
elif ((my_var % 3) == 0):
    print('my_var is divisible by 3')
else:
    print('my_var is not divisible by 2 or 3')
```
### *while* Loops
<a id="while_Loops"> </a>
A *while* loop is used whenever we want to repeat a calculation until a condition is met. The following code block shows a simple *while* loop that prints all numbers with a squared value that is less than 200.
<div class="alert alert-block alert-info">
<b>The <i>+=</i> and <i>-=</i> operators:</b> The <i>+=</i> and <i>-=</i> operators are shorthand operators that increase and decrease the value of a variable by a given amount. For example, the code <i>my_var += 5</i> is equivalent to <i>my_var = my_var + 5</i>.
</div>
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_var = 0
while (my_var ** 2 < 200):
    print(my_var,'squared is less than 200.')
    my_var += 1
```
<div class="alert alert-block alert-danger">
<b>Infinite loops:</b> When using <i>while</i> loops, it is important to make sure that some case will be encountered that does not satisfy the condition specified in the <i>while</i> statement. If not, the loop will not terminate, resulting in an infinite loop.
</div>
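To make the danger concrete, here is a small sketch (not from the original notebook) of a loop that is *almost* infinite: if the increment on the last line were forgotten, `my_var` would never change and the condition would remain `True` forever.

```python
my_var = 0
while my_var < 5:
    print(my_var, 'is less than 5')
    # Forgetting this increment would leave my_var at 0 forever,
    # producing an infinite loop.
    my_var += 1
```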
[Back to Table of Contents](#Table_of_Contents)<br>
### *break* Statements
<a id="break_Statements"> </a>
In the previous section, we used a condition to exit from a *while* loop. Another way to force an exit from a *while* loop, or a *for* loop, is to use the `break` statement. Whenever the `break` statement is encountered, the program exits the current loop. The following code block rewrites the previous *while* loop with a `break` statement. Note that without the `break` statement, the loop will run infinitely.
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_var = 0
while True:
    if (my_var ** 2 >= 200):
        break
    print(my_var,'squared is less than 200.')
    my_var += 1
```
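The `break` statement behaves the same way inside a *for* loop. As a small illustrative sketch (not from the original notebook), the following loop would otherwise run 100 times, but exits as soon as a squared value reaches 200:

```python
for my_var in range(100):
    if my_var ** 2 >= 200:
        break  # exit the for loop early
    print(my_var, 'squared is less than 200.')
```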
### *continue* Statements
<a id="continue_Statements"> </a>
Similar to `break` statements, `continue` statements are used within a loop to control its behavior. Whenever a `continue` statement is encountered, the program immediately jumps back to the start of the loop and reevaluates the loop's condition. The following code block prompts a user to enter a magic word. The program will continue to execute until the user enters the word *Python*.
<div class="alert alert-block alert-info">
<b>The <i>input()</i> function:</b> The <i>input()</i> function instructs the program to get input from a user. For example, the code <i>my_var = input()</i> instructs the program to get keyboard input from a user and to store the input in a variable named <i>my_var</i>.
</div>
<div class="alert alert-block alert-info">
<b>The <i>.upper()</i> string method:</b> <i>.upper()</i> is a string method that instructs Python to convert the string to uppercase. There is also a <i>.lower()</i> method that instructs Python to convert the string to lowercase.
</div>
[Back to Table of Contents](#Table_of_Contents)<br>
```
while True:
    print('What is the magic word?')
    magic_word = input()
    if magic_word.upper() != 'PYTHON':
        print('That is not the magic word!\n')
        continue
    break
print('Access granted.')
```
### *for* Loops
<a id="for_Loops"> </a>
The final loop type that we discuss is the *for* loop. A `for` loop is used whenever we want to repeat a block of code a fixed number of times. We saw an example of a `for` loop earlier when discussing [the importance of spacing](#Importance_of_spacing). The following code block shows a `for` loop that calculates the sum of the squared values for the integers ranging from 0 to 10.
[Back to Table of Contents](#Table_of_Contents)<br>
```
sum_of_squares = 0
for i in range(11):
    sum_of_squares += i**2
print('The sum of squares is',sum_of_squares)
```
## Data Structures
<a id="Data_Structures"> </a>
The following sections introduce several data structures that are built into base Python. Specifically, we will look at lists, dictionaries, tuples, and sets.
[Back to Table of Contents](#Table_of_Contents)<br>
### Lists
<a id="Lists"> </a>
Lists are a versatile Python data structure that can be initialized as empty, with sequences, or with comma-separated initialization values. Lists can be appended to or deleted from in loops and do not require that all values be of the same type. The following code blocks provide several examples of list initialization.
[Back to Table of Contents](#Table_of_Contents)<br>
```
list1 = list(range(1,11))
print(list1)
list1 = [0, 1, 2, 3, 4, 5, 6, 7, 8]
print(list1)
list1 = [] # Creates an empty list
for i in range(20,31):
    list1.append(i)
print(list1)
```
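The text above notes that lists can also be deleted from. As a brief sketch (not from the original notebook), the `remove` method deletes the first matching value, `pop` removes and returns the item at an index (the last item by default), and `del` deletes the item at a given index:

```python
list1 = [10, 20, 30, 20, 40]
list1.remove(20)    # removes the first occurrence of 20
print(list1)        # [10, 30, 20, 40]
last = list1.pop()  # removes and returns the last item
print(list1, last)  # [10, 30, 20] 40
del list1[0]        # deletes the item at index 0
print(list1)        # [30, 20]
```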
Lists can be multi-dimensional, but it is generally better to use other storage objects such as dictionaries or Pandas dataframes (both covered later in this notebook). For completeness, the following code blocks show two methods to create a simple two-dimensional list.
[Back to Table of Contents](#Table_of_Contents)<br>
```
list1 = [[1,2],[3,4],[5,6],[7,8]]
print(list1)
list1 = []
list1.append([1,2])
list1.append([3,4])
list1.append([5,6])
list1.append([7,8])
print(list1)
```
#### List comprehensions
<a id="List_Comprehensions"> </a>
In mathematics, it is common to see sets described as follows:
$$ S = \{x^{2}:x\in 1, \ldots, 10\}.$$
This notation defines a set $S$ that contains the squares of the integers $1 - 10$. In Python, we can use similar syntax to define the set as a list. The following code block demonstrates such syntax, which is referred to as a *list comprehension*. The `del()` function deletes the list.
[Back to Table of Contents](#Table_of_Contents)<br>
```
S = [x**2 for x in range(1,11)]
print(S)
del(S)
```
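List comprehensions can also include a condition that filters the generated values, mirroring set-builder notation with a restriction such as "$x$ even". A short sketch (not from the original notebook):

```python
# Squares of only the even integers in 1..10
S = [x**2 for x in range(1, 11) if x % 2 == 0]
print(S)  # [4, 16, 36, 64, 100]
```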
#### Accessing List Elements
<a id="Access_List_Elements"> </a>
A key thing to note when considering how to access elements of objects in Python is that, like most other programming languages, numbering in Python starts at 0. For example, in the list `[1,2,3,4]`, the value 1 is in index location 0 and the value 3 is in index location 2.
When indexing a list named `mylist`, the syntax `mylist[x]` retrieves the element of the list that is in index location `x`. Also, you can use negative numbers to index from the end of the list. For example, `mylist[-1]` retrieves the last element of the list and `mylist[-2]` retrieves the second to last element of the list.
The following code block demonstrates how we use this information to select items from a single-dimension list.
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_list = [1000, 'boy', 3, 6, 'cat']
print('The first element in the list is', my_list[0])
print('The second element in the list is', my_list[1])
print('The second to last element in the list is', my_list[-2])
print('The last element in the list is', my_list[-1])
```
Multi-dimensional lists are indexed in a similar fashion. However, keeping things straight can become quite challenging when working in more than two dimensions. The following code block illustrates how to index elements in a list with two dimensions.
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_list = [[1,2,3],[4,5,6],[7,8,9]]
print('The first row of mylist is', my_list[0], '\n')
print('The last row of mylist is', my_list[-1], '\n')
print('The first element of the last row of mylist is', my_list[-1][0], '\n')
```
Besides accessing individual items stored in lists, you can also access *slices* of items. Similar to the `range` function that we saw earlier, Python list slicing allows you to define a *start index*, a *stop index*, and a *step size*. Also similar to the `range` function, a list slice will not include the item specified by the *stop* index. Negative values for the *step size* will result in the slice being iterated over in reverse order. The following code block illustrates list slicing.
<div class="alert alert-block alert-info">
<b>Multi-line statements with "\":</b> You can break a long statement into multiple lines using "\". Essentially, when we use a "\", we are telling Python that the current expression is continued on the next line.
</div>
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_list = [i for i in range(21)]
print('mylist is', my_list, '\n')
print('Slicing mylist in steps of 2 yields', my_list[::2], '\n')
print('Slicing mylist starting at index position 5\
in steps of 2 yields', my_list[5::2], '\n')
print('Slicing mylist ending before index position 4 in\
 steps of 2 yields', my_list[:4:2], '\n')
print('Slicing mylist starting at index position 5 and\
 ending before index position 9 in steps of 2 yields', my_list[5:9:2], '\n')
print('Slicing mylist in reverse starting at index position 9 and\
 ending before index position 5 yields', my_list[9:5:-1], '\n')
print('Slicing mylist in reverse starting at index position 9 and\
 ending before index position 5 in steps of 2 yields', my_list[9:5:-2], '\n')
```
#### Iterating Over Lists
<a id="Iterating_Over_Lists"> </a>
Oftentimes, an application will need to iterate over the items of a list (or other iterable object), performing some operation for each entry. The following code block iterates over a list of names to demonstrate how such iteration can be done.
[Back to Table of Contents](#Table_of_Contents)<br>
```
name_list = ['Alice', 'Bob', 'Casey', 'Doug', 'Eva', 'Frank']
for name in name_list:
    print(name)
```
Oftentimes, users may wish to return the index of each item in the list along with the item itself. In other languages, performing such a task typically means keeping track of an index value and using the current index value to look up items in the object (in this case a list). This is not necessary in Python due to the `enumerate` function. The following code block shows two approaches motivated by practices commonly observed in other programming languages, followed by the *pythonic* approach for this task.
<div class="alert alert-block alert-info">
<b>The len() function:</b> The len() function returns the number of items in an iterable provided as an argument.
</div>
[Back to Table of Contents](#Table_of_Contents)<br>
```
name_list = ['Alice', 'Bob', 'Casey', 'Doug', 'Eva', 'Frank']
print("Using range(len(name_list)) :-(")
for i in range(len(name_list)):
    print('The name at index', i, 'is', name_list[i])

print("\n\nTracking index :-(")
index = 0
for i in name_list:
    print('The name at index', index, 'is', name_list[index])
    index += 1

print("\n\nUsing enumerate :-)")
for index, name in enumerate(name_list):
    print('The name at index', index, 'is', name)
```
You can provide an additional argument to `enumerate` if you want to start the indexing at a value other than zero.
[Back to Table of Contents](#Table_of_Contents)<br>
```
name_list = ['Alice', 'Bob', 'Casey', 'Doug', 'Eva', 'Frank']
for index, name in enumerate(name_list, 1):
    print('The name at index', index, 'is', name)
```
### Dictionaries
<a id="Dictionaries"> </a>
Like lists, dictionaries are versatile Python data structures that can easily be changed. With respect to the differences between lists and dictionaries: 1) lists are ordered collections whose equality depends on item order, whereas two dictionaries compare equal regardless of the order in which their entries were added, 2) items in dictionaries are accessed via keys and not via their position, and 3) the values of a dictionary can be any Python data type. In short, dictionaries are collections of key-value pairs.
The following code block provides an example that is adapted from https://automatetheboringstuff.com/chapter5/ (accessed 1/9/2018) that clearly demonstrates the key differences between lists and dictionaries.
[Back to Table of Contents](#Table_of_Contents)<br>
```
list1 = ['cats', 'dogs', 'moose']
list2 = ['dogs', 'moose', 'cats']
if (list1 == list2):
    print("The two lists are the same.\n")
else:
    print("The two lists are different.\n")

dict1 = {'name': 'Zophie', 'species': 'cat', 'age': '8'}
dict2 = {'species': 'cat', 'age': '8', 'name': 'Zophie'}
if (dict1 == dict2):
    print("The two dictionaries are the same.\n")
else:
    print("The two dictionaries are different.\n")
```
The important thing to note in the previous example is that the two lists are comprised of the same items, just in a different order, and the two dictionaries are also comprised of the same key-value pairs, just in different orders. However, Python interprets the lists as being different and the dictionaries as being equal. This clearly demonstrates the ordering differences between the two structures.
The following code block shows how to access elements of a dictionary by key.
[Back to Table of Contents](#Table_of_Contents)<br>
```
dict1['name']
```
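Since dictionaries can easily be changed, assigning to a key either updates an existing entry or creates a new one. A brief sketch (reusing the keys from `dict1` above, redefined here so the block is self-contained):

```python
dict1 = {'name': 'Zophie', 'species': 'cat', 'age': '8'}
dict1['age'] = '9'       # update the value stored under an existing key
dict1['color'] = 'gray'  # add a new key-value pair
print(dict1)
```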
Dictionaries have three methods that allow for easy iteration over a dictionary, i.e., the `keys`, `values`, and `items` methods. The following code block demonstrates these methods.
[Back to Table of Contents](#Table_of_Contents)<br>
```
print("The keys in dict1 are:")
for key in dict1.keys():
    print(key)

print("\nThe values in dict1 are:")
for value in dict1.values():
    print(value)

print("\nThe items (key-value pairs) in dict1 are:")
for item in dict1.items():
    print(item)
```
You can also use these methods with the `in` operator to easily search for keys and values in a dictionary as shown below.
**Note:** The `\"` escape sequences are used to include the quotation marks in the printed string. Without the backslashes, Python could interpret the quotes as the end of the string literal and raise an error.
[Back to Table of Contents](#Table_of_Contents)<br>
```
print('Is the key \"name\" in dict1?','name' in dict1.keys())
print('Is the key \"cat\" in dict1?','cat' in dict1.keys())
print('Is the value \"cat\" in dict1?','cat' in dict1.values())
```
### Tuples
<a id="Tuples"> </a>
The tuple data structure is similar to a list, with two main exceptions:
1. We use parentheses instead of brackets to create a tuple, and
2. tuples are immutable, meaning that we cannot **directly** overwrite the values composing a tuple after creation.
The following code block creates a tuple with five values.
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_tuple = ('a', 'b', 3, 'd', '5')
print('my_tuple is', my_tuple)
```
We can access elements of a tuple using the same indexing approach as we used for lists.
[Back to Table of Contents](#Table_of_Contents)<br>
```
print('The second element of my_tuple is', my_tuple[1])
```
Note what happens if we try to change one of the values in the tuple.
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_tuple[2] = 'c'
```
Although we cannot change values of a tuple directly, we can change them by converting the tuple to a list, changing the list values, and then converting the list back to a tuple. This is demonstrated in the following code block.
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_tuple = ('a', 'b', 3, 'd', '5')
print('my_tuple is', my_tuple,'\n')
my_tuple = list(my_tuple)
my_tuple[2] = 'c'
my_tuple = tuple(my_tuple)
print('my_tuple is', my_tuple)
```
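One convenient feature of tuples (used implicitly by `enumerate` in an earlier section) is *unpacking*, where the items of a tuple are assigned to separate variables in one statement. A brief sketch, not from the original notebook:

```python
my_tuple = ('a', 'b', 3)
first, second, third = my_tuple  # each variable receives one item, in order
print(first, second, third)
```

Note that the number of variables on the left must match the number of items in the tuple.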
### Sets
<a id="Sets"> </a>
A set is a Python data structure that contains a collection of unique and immutable objects. Sets are useful when we are trying to determine the unique values in a larger data structure. The following code block demonstrates this use of a set. Note that the initial list has multiple duplicate values. However, the set contains only one copy of each unique value.
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_list = ['a', 'b', 'a', 'b', 1, 2, 3, 2, 3, 3, 1000]
print('my_list is', my_list,'\n')
print('Using my_list to construct a set yields', set(my_list),'\n')
```
We cannot access individual items in a set as we did with Python lists and tuples. However, we can iterate over them using loops as is shown in the following code block.
[Back to Table of Contents](#Table_of_Contents)<br>
```
my_list = ['a', 'b', 'a', 'b', 1, 2, 3, 2, 3, 3, 1000]
for item in set(my_list):
    print(item,'is in the list')
```
### String formatting
<a id="String_formatting"> </a>
Although several methods for string formatting are possible in Python, the use of `f-strings` is very simple and flexible. `f-strings` allow users to easily mix variables and static text via placeholders. The following code block demonstrates the use of `f-strings`. Specifically, the code block defines two Python variables, one a string and the other an integer. The values of these variables are inserted into two strings using placeholders indicated by braces `{}`. The escape sequence `\n` starts a new line. **Note: The `f` placed before the string is necessary. An error will be raised if the character is omitted.**
[Back to Table of Contents](#Table_of_Contents)<br>
```
first_variable = 'arg1'
second_variable = 1
mystring = f'My first variable is {first_variable} and my second is {second_variable}.\n'
print(mystring)
mystring = f'My second variable is {second_variable} and my first is {first_variable}.'
print(mystring)
del(mystring)
```
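`f-strings` also accept a format specification after a colon inside the placeholder, which is useful for controlling numeric output. A small sketch (not from the original notebook):

```python
pi_approx = 3.14159265
count = 7
# .2f rounds to two decimal places; 04d pads an integer to four digits
print(f'pi to two decimal places is {pi_approx:.2f}')
print(f'count padded to four digits is {count:04d}')
```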
### Error Handling
<a id="Error_Handling"> </a>
Examples of errors that may occur are showcased earlier in this notebook. In practice, developers use error handling mechanisms to ensure that their programs do not fail when errors are encountered, which is very common when we use data or input defined by others. Although flow control can be used to try to avert errors, the `try/except` block is designed specifically for error handling. The following code block shows how a `try/except` block can be used to handle errors encountered when dividing the contents of two lists.
<div class="alert alert-block alert-info">
<b>The zip() function:</b> The zip() function returns a zip object, which is an iterator of tuples in which the first items of the passed iterables (e.g., lists) are paired together, then the second items are paired together, and so on. If the passed iterables have different lengths, the iterable with the fewest items determines the length of the result.
</div>
[Back to Table of Contents](#Table_of_Contents)<br>
```
numerator_list = [1, 2.0, 2.3, 'cat', 20]
denominator_list = [2, 2, 2.6, 90, 0]
for numerator, denominator in zip(numerator_list, denominator_list):
    try:
        print(f'{numerator}/{denominator} = {numerator/denominator}')
    except:
        print(f'Cannot compute {numerator}/{denominator}')
```
The following code block shows how you can get the name of the exception that triggers the `except` block to run.
[Back to Table of Contents](#Table_of_Contents)<br>
```
numerator_list = [1, 2.0, 2.3, 'cat', 20]
denominator_list = [2, 2, 2.6, 90, 0]
for numerator, denominator in zip(numerator_list, denominator_list):
    try:
        print(f'{numerator}/{denominator} = {numerator/denominator}')
    except Exception as e:
        print(f'{type(e).__name__} -> Cannot compute {numerator}/{denominator}')
```
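Rather than catching every exception with a bare `except`, you can also catch specific exception types and handle each differently. A sketch (not from the original notebook) using the same division idea:

```python
numerator_list = [1, 2.0, 'cat', 20]
denominator_list = [2, 2, 90, 0]
for numerator, denominator in zip(numerator_list, denominator_list):
    try:
        print(f'{numerator}/{denominator} = {numerator/denominator}')
    except ZeroDivisionError:
        # Raised when the denominator is 0
        print(f'Cannot divide {numerator} by zero')
    except TypeError:
        # Raised when an operand is not a number
        print(f'{numerator!r} is not a number')
```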
This concludes this notebook on Python basics.
[Back to Table of Contents](#Table_of_Contents)<br>
# T2 - Calibration
Models are simplifications of the real world, and quantities in the model (like the force of infection) represent the aggregation of many different factors. As a result, there can be uncertainty as to what value of the parameters most accurately reflects the real world - for instance, the population force of infection varies with the average number of contacts per person per day, but this quantity may not be well constrained. The first step in running a model is to improve estimates of the parameter values for a particular setting, using data from that setting.

Typically, the model is started off at some point in the past (e.g. 2000), such that the initial compartment sizes correspond to the data in the simulation start year. The model is then run up to the current year, with the compartment sizes changing due to the model parameters. The model predictions can then be compared to the actual data for those same years. This allows model parameters to be adjusted to best match the existing data. These same parameters are then used for future projections.
To see calibration in effect, consider the following simple example:
```
import atomica as at
P = at.Project(framework='assets/T2/t2_framework_1.xlsx',databook='assets/T2/t2_databook_1.xlsx', do_run=False)
```
First, we inspect the default calibration by running the model and plotting it along with the data. To plot the data, pass the project's data to the plotting function (in this case, `plot_series`) - this will automatically add scatter points to the plot based on the data in the databook.
```
result = P.run_sim()
d = at.PlotData(result,project=P)
at.plot_series(d, data=P.data);
```
Notice how the number of susceptible people and infected people exactly match the data points in the simulation start year - as noted above, this is because the model is initialized from the data values in that year. There are some conditions under which the model won't exactly match the data in the initial year, such as if the initialization characteristics are overdetermined, but these situations are rare.
We can see, however, that the model does not predict enough susceptible people in 2020. There could be many reasons for this, and determining what parts of the model should be changed can often be something of an art. It typically reflects your understanding of the assumptions that were made in designing the framework, and also uncertainties and bias present in the input data. For example, the methodology used to gather data used for the calibration might provide hints as to which parameters to change first.
In this case, as there are insufficient people, it might be the case that the birth rate was too low. There are two ways to address this:
- You could go back to the databook and enter a larger value for the birth rate
- You can add a 'scale factor' to the parameter set, which scales the parameter value up or down
Either approach can be used and would provide equivalent results. Why would we prefer one over the other?
<table>
<thead>
<tr><th>Decision factor</th><th> Databook calibration </th><th> Scale factor calibration</th></tr>
</thead>
<tbody>
<tr><td>How do you want to adjust the parameter? </td><td> Manual adjustment </td><td> Automatic adjustment</td></tr>
<tr><td>What kinds of parameters is this appropriate for?</td><td> Appropriate for model assumptions </td><td> Appropriate for original data</td></tr>
<tr><td>Granularity of calibration? </td><td> Adjustments can vary by year or even timestep </td><td> Single scaling factor for all timesteps</td></tr>
<tr><td>Pros?</td><td>
<ul>
<li>Easy to review reference point for value used in project</li>
<li>What you see is what you get in the databook</li>
</ul>
</td>
<td>
<ul>
<li>Maintains scatter points on plots of the parameter</li>
<li>Can calibrate a parameter with a function without defining additional multiplicative parameters</li>
</ul>
</td>
<tr><td>Cons?</td><td> Can cause confusion in the databook around what is data and what is not data</td><td> Can lack transparency about how parameters are being adjusted without careful review</td></tr>
</tbody>
</table>
An example of a suitable parameter for databook calibration is `relative seasonality of mosquito population size` - no hard data can exist, but it might be possible to calculate proxy values from other data such as rainfall patterns in different years, adjust annual values manually to match historical epidemic patterns, and then assume a median value for future years. Having this assumption in the databook allows for those calculations to be used as a starting point in the databook before manually editing, and allows for comparability with other projects that use the same parameter.
<div class="alert alert-block alert-success">
<b>Suggestion:</b> When designing a databook, it is recommended that <b>all</b> parameters intended for explicit databook calibration are placed on a single 'Calibration' sheet to provide clarity about what is data and what are calibrated assumptions.
</div>
An example of a suitable parameter for scale factor calibration is `treatment initiation` used to determine model transitions from diagnosed to treated - a country has reported data for the number of people initiating treatment, and it is important to accurately maintain the official numbers in the databook. Nevertheless, there may be systematic under-reporting by an unknown degree and we want to account for those additional treatments in the model to ensure outcomes are representative, so it is appropriate to adjust the scale factor.
An example of a parameter that could be adjusted in either way depending on the circumstances or just personal preference would be `force of infection` - this is a clear calibration parameter that is not based on data, and could be adjusted in a databook if it's necessary to reflect changing circumstances external to the model over time, calibrated automatically with a scale factor via the `calibrate` function below in order to achieve the best fit to data, or even a mixture of the two methods.
<div class="alert alert-block alert-info">
The web interfaces (such as the Cascade Analysis Tool) perform calibration using scale factors. The scale factors shown on the website correspond to the values being set here.
</div>
To set a scale factor, create a `ParameterSet` either by copying an existing one, or creating a new one. Then, access the `pars` attribute to look up the parameter you wish to change, and set the `y_factor` for the population you want to change:
```
p2 = P.parsets[0].copy()
p2.pars['b_rate'].y_factor['adults'] = 2
```
The example above doubled the birth rate. Now we can run the model again, and see how the results have changed. Notice how the `PlotData` command is being called with both the original results object, and the new results object, allowing both model runs to be shown on the same plot.
```
r2 = P.run_sim(parset=p2,result_name = 'More births')
d = at.PlotData([result,r2], outputs='sus',project=P)
at.plot_series(d,axis='results',data=P.data);
```
We can see that we have considerably overshot the data, indicating that doubling the birth rate was much too big a change. This would typically be the first step in an iterative process, where you adjust the scale factor, inspect the data, and then make further adjustments.
Automated calibration is also available via the project's `calibrate` method. This will automatically adjust parameter values to match the data. To use this function, you need to specify which parameters to set scale factors for, and which variables in the databook to compute calibration quality from. The framework can provide defaults for which parameters to automatically calibrate, or you can pass a list of those parameters in directly. In this example, we will pass in `b_rate` because we want to adjust the birth rate, and we will use `sus` as a measurable since we want to match the number of susceptible people. The configuration therefore corresponds to the example shown above.
```
with at.Quiet():
    p3 = P.calibrate(max_time=10, parset='default', adjustables=['b_rate'], measurables=['sus']);
```
The result of automated calibration is another `ParameterSet`. We can inspect the scale factor that the algorithm found:
```
p3.pars['b_rate'].y_factor
```
and we can run the model to compare the automated calibration to the original default calibration:
```
r3 = P.run_sim(parset=p3,result_name = 'Auto calibration')
d = at.PlotData([result,r3], outputs='sus',project=P)
at.plot_series(d,axis='results',data=P.data);
```
## Calibration tips
While calibrations can vary significantly from model to model, it's generally a good idea to try to match coarse-grained quantities first, followed by fine-grained quantities. For example, for TB you might calibrate in the following order:
1. Match population size (adjusting birth rate and death rate)
2. Match disease prevalence (adjusting force of infection)
3. Match drug-resistant/drug-susceptible split (adjusting proportion of infections that are drug-resistant)
For complex models when considering how to proceed with a calibration, it can help to start with mapping the expected relationships between key input parameters that will be used for calibration and key output parameters for which data exists and that should be matched by the calibration, in terms of how changes might flow throughout the model.

From the diagram above, it can be seen that, as is typically the case, population size (`alive`) has a large impact on everything else, but the number of disease-related deaths has only a minor impact on population size in return, so population size needs to be matched first. Incidence (`inci`) and prevalence (`inf`) have a strong cyclical relationship and should be considered together, but force of infection and recovery rate have direct links to modifying each of them individually, by changing the rate at which people are infected and the rate at which people recover. Disease-related deaths (`m_num`) can be considered last if not already closely matched from calibrating to prevalence. Because this may adjust the population size, an iterative cycle of calibration may be appropriate to get the best overall fit.
A calibration might then proceed in the following order with three repeats:
```
cal_par = P.parsets['default'].copy()
with at.Quiet():
    for _ in range(3):
        cal_par = P.calibrate(max_time=10, parset=cal_par, adjustables=['b_rate', 'doth_rate'], measurables=['alive'])
        cal_par = P.calibrate(max_time=10, parset=cal_par, adjustables=['foi', 'rec_rate'], measurables=['inci', 'inf'])
        cal_par = P.calibrate(max_time=10, parset=cal_par, adjustables=['m_rate'], measurables=['m_num'])

r4 = P.run_sim(parset=cal_par,result_name = 'Auto calibration repeated')
for output in ['alive', 'inci', 'inf', 'm_num']:
    d = at.PlotData([result,r4], outputs=output,project=P)
    at.plot_series(d,axis='results',data=P.data);
```
There are a few major dangers in calibration, including but not limited to:
**1. Solution-rich spaces**
Often multiple input parameters can be adjusted and might superficially produce the same historical trend lines. For example, if modelled prevalence is far lower than data points from epidemiological surveys, a better calibration might be achieved in a number of different ways: increased force of infection (calibration parameter), increased behavioural risk factors (e.g. number of shared injections for HIV), increased duration of infection, reduced treatment success rate, or any number of other subtle data inputs. Where possible, calibrate against data from multiple outputs, and even when this is not possible, _review_ other outputs as a sanity check. In addition, consult country experts to determine which solution better explains the trend where it is not clear from the data.
**2. Overfitting**
Don't try to exactly match every historical data point, such as by adjusting the force of infection precisely for every historical year. There is often year-to-year variation in diagnostic capacity, reporting, or natural fluctuation driven by behaviour or external circumstances. It is more important to accurately capture the trend and the _reasons_ for that trend in the model than to match data from every year exactly; this will lead to more reasonable future projections as well as a more useful model for prioritizing programs.
**3. Inaccurate data**
Related to overfitting: not all data is of equal quality or reliability, and some may have been gathered with a methodology that does not exactly match the way it is used in the model (*this is often a good argument for adjusting the model, if it applies to more than a single project*). Be aware of data quality and prioritize calibration to data known to be more accurate in a given project. Sometimes it is better to ignore data for a specific output parameter entirely if the methodology used to gather it was unreliable or cannot be adequately represented by the model.
**4. Forced outcomes**
Especially with automatic calibration, it is possible to match outputs with some extreme values for `y_factor`. Some examples of how this can occur:
- Calibrating `prev` in a Males 50+ population using the `foi` parameter results in a `y_factor` of 0.00001. If optimizations are run with this force-of-infection value, the Males 50+ population will be almost immune to infection for any reason, and any programs that target Males 50+ will be defunded. In reality, something else is wrong in the model: perhaps the high prevalence is due to Males 40-49 aging into the 50+ population and that population should be reviewed, or, even more likely, a risk-based transition from a key population such as people who inject drugs is too high.
- The calibration to match `incidence` in children aged 0-14 using the `foi` parameter results in a `y_factor` of 1000. The real reason for this may be that the model itself has failed to include the critical pathway for child incidence through mother-to-child transmission.
In order to avoid these kinds of modelling outcomes it is critical to (a) review calibration values/`y_factors` to ensure they are within expected bounds, and (b) if there are extreme values conduct further investigation, as there will be something else that should be changed to improve how the model represents reality, and this will result in better recommendations using the model.
```
for par in ['b_rate', 'doth_rate', 'foi', 'rec_rate', 'm_rate']:
    print(f'{par}: y_factor = {cal_par.pars[par].y_factor["adults"]}')
```
Some of these values are low at around 0.15, and this can be attributed to the high value for prevalence (`inf`) in the databook requiring a low value for each of `doth_rate`, `rec_rate`, and `m_rate`. Individually these are not outside of a 'reasonable' range for calibration `y_factors` but as they are all needed to fit a single data point this might suggest a second look at the reliability of original data values for prevalence in this project.
**5. Missing model features**
Calibration at its best accounts for factors outside of a model in order to focus on what is in the model. Sometimes those factors have too big an impact on the model to be ignored, and the right solution is to add depth to the model as the only other solution is to force outcomes with extreme `y_factors` or unrealistic values for other parameters.
In the example above, it is impossible to match both the 2010 and 2020 data points for number of deaths (`m_num`) as they are not consistent with increasing prevalence. Perhaps it is necessary to calibrate a changing `m_rate` over time in the databook, or even add diagnosis and treatment to the model?
**6. 'Burn-in' period**
Often with more complex models, there may be too many parameters to initialize each of them accurately in the first time step via databook entry, such as tracking compartments for many different stages of a disease or of treatment. In these cases, it may be best to initialize using just key databook values such as 'number on treatment' and 'number infected', making assumptions for the compartments within those categories. This will typically produce strange model behaviour for a number of time steps during a 'burn-in' period, before the proportions at different stages settle to an equilibrium. Ideally, the model should be initialized earlier (e.g. several years earlier in models with long time steps) and/or calibrated only to the years for which data exists, and the burn-in years should be excluded from exported results.
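As an illustrative sketch (not part of the original tutorial), assuming exported results live in a pandas DataFrame with a hypothetical `year` column, burn-in years can be dropped before reporting:

```python
import pandas as pd

# Hypothetical exported results: the run starts in 2000, but the first
# few years are burn-in while compartment proportions equilibrate.
results = pd.DataFrame({
    "year": list(range(2000, 2011)),
    "inf": [100 + 5 * i for i in range(11)],  # placeholder values
})

burn_in = 5  # assumption: discard the first 5 simulated years
reported = results[results["year"] >= 2000 + burn_in]
print(reported["year"].min())  # 2005
```

The same filter can be applied to any exported output before sharing results, so that burn-in artefacts never reach stakeholders.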
# Module 1: Introduction to Exploratory Analysis
<a href="https://drive.google.com/file/d/1r4SBY6Dm6xjFqLH12tFb-Bf7wbvoIN_C/view" target="_blank">
<img src="http://www.deltanalytics.org/uploads/2/6/1/4/26140521/screen-shot-2019-01-05-at-4-48-15-pm_orig.png" width="500" height="400">
</a>
[(Page 17)](https://drive.google.com/file/d/1r4SBY6Dm6xjFqLH12tFb-Bf7wbvoIN_C/view)
What we'll be doing in this notebook:
-----
1. Checking variable type
2. Checking for missing variables
3. Number of observations in the dataset
4. Descriptive statistics
### Import packages
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from datetime import datetime
import dateutil.parser
# The command below means that the output of multiple commands in a cell will be output at once
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# The command below tells jupyter to display up to 80 columns, this keeps everything visible
pd.set_option('display.max_columns', 80)
pd.set_option('expand_frame_repr', True)
# Show figures in notebook
%matplotlib inline
```
### Import dataset
We read in our dataset
```
path = '../data/'
filename = 'loans.csv'
try:
    df = pd.read_csv(path + filename)
except FileNotFoundError:
    # If the data is not found locally, download it from GitHub
    import os
    os.system(f'git clone --single-branch --depth=1 https://github.com/DeltaAnalytics/machine_learning_for_good_data {path}')
    df = pd.read_csv(path + filename)
```
In the cell below, we take a random sample of 2 rows to get a feel for the data.
```
df.sample(n=2)
```
### 1) Type Checking
<a id='type_check'></a>
Type is very important in Python programming, because it affects which functions you can apply to a series. There are a few types of data you will see regularly (see [this](https://en.wikibooks.org/wiki/Python_Programming/Data_Types) link for more detail):
* **int** - a number with no decimal places. example: loan_amount field
* **float** - a number with decimal places. example: partner_id field
* **str** - short for string. This type is formally defined as a sequence of Unicode characters. More simply, a string means the data is treated as text, not as a number. example: sector
* **boolean** - can only be True or False. There is not currently an example in the data, but we will be creating a gender field shortly.
* **datetime** - values meant to hold time data. Example: posted_date
Let's check the type of our variables using the examples we saw in the cell above.
```
# Here are all of the columns
df.columns.tolist()
# Find the dtype, aka datatype, for a column
df['id_number'].dtype
# Try this - Pick a couple of columns and check their type on your own
```
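You can also check every column at once. Here is a small sketch with an illustrative frame (the real `df` has many more columns; on the loans data the same check is simply `df.dtypes`):

```python
import pandas as pd

demo = pd.DataFrame({
    "loan_amount": [500, 1200],           # int
    "partner_id": [9.0, 12.0],            # float
    "sector": ["Agriculture", "Retail"],  # str columns show up as 'object'
})
print(demo.dtypes)
```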
### 2) Do I have missing values?
<a id='missing_check'></a>
If we have missing data, is it missing at random or not? If data is missing at random, the data distribution is still representative of the population, and you can probably treat the missing values as an inconvenience. However, if the data is systematically missing, any analysis you do may be biased. You should carefully consider the best way to clean the data; it may involve dropping some of it.
We want to see how many values are missing in certain variable columns. One way to do this is to count the number of null observations.
For this, we wrote a short function to apply to the dataframe.
We print only the first 20 entries (one per column); remove the `.head(20)` to see all columns.
```
# Create a new function:
def num_missing(x):
    return sum(x.isnull())

# Applying per column:
print("Missing values per column:")
# Check how many are missing by column, then keep only columns with any missing values
print(df.apply(num_missing, axis=0).where(lambda x: x != 0).dropna().head(20))
# axis=0 means the function is applied to each column
```
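pandas can also do this count without a custom function; an equivalent sketch on a toy frame:

```python
import numpy as np
import pandas as pd

demo = pd.DataFrame({"a": [1, np.nan, 3], "b": [1, 2, 3]})
missing = demo.isnull().sum()   # nulls per column
print(missing[missing > 0])     # keep only columns with missing values
# On the loans data: df.isnull().sum().loc[lambda s: s > 0]
```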
### 3) Sanity Checks
<a id='obs_check'></a>
**Does the dataset match what you expected to find?**
- Is the range of values what you would expect? For example, are all loan amounts above 0?
- Do you have the number of rows you would expect?
- Is your data for the date range you would expect? For example, is there a strange year in the data like 1880?
- Are there unexpected spikes when you plot the data over time?
In the command below we find the number of loans and the number of columns using the `shape` attribute. You can also use `len(df.index)` to find the number of rows.
```
print(f'There are {df.shape[0]} observations and {df.shape[1]} features')
```
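The checklist above can be turned into quick assertions. Here is a sketch with illustrative data; the column names `loan_amount` and `posted_year` are assumptions, so adjust them to the actual dataset:

```python
import pandas as pd

demo = pd.DataFrame({
    "loan_amount": [500, 1200, 250],
    "posted_year": [2012, 2013, 2014],
})

# Range check: every loan amount should be positive
assert (demo["loan_amount"] > 0).all()
# Date check: no implausible years such as 1880
assert demo["posted_year"].between(2005, 2025).all()
print("sanity checks passed")
```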
Remember, each row is an observation and each column is a potential feature.
Remember also that machine learning generally requires large amounts of data.
### 4) Descriptive statistics of the dataset
<a id='desc_stats'></a>
The "describe" command below provides key summary statistics for each numeric column.
```
df.describe()
```
To get the same summary statistics for categorical (string) columns we need to do a little data wrangling. The first line of code selects all columns whose data type is object; as we know from before, these are treated as strings. The second line provides summary statistics for these categorical fields.
```
categorical = df.dtypes[df.dtypes == "object"].index
df[categorical].describe()
```
In the table above, there are 4 really useful fields:
1) **count** - total number of fields populated (Not empty).
2) **unique** - tells us how many different unique ways this field is populated. For example 4 in description.languages tells us there are 4 different language descriptions.
3) **top** - tells us the most popular data point. For example, the top activity in this dataset is Farming which tells us most loans are in Farming.
4) **freq** - tells us how frequent the most popular category is in our dataset. For example, 'en' (English) is the language almost all descriptions (description.languages) are written in (118,306 out of 118,316).
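`top` and `freq` are simply the first entry of a value count, which you can verify per column (illustrative data):

```python
import pandas as pd

activity = pd.Series(["Farming", "Farming", "Retail"], name="activity")
counts = activity.value_counts()
print(counts.index[0], counts.iloc[0])  # Farming 2 -> 'top' and 'freq'
```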
What is next
-----
In the next section, we move on to exploratory data analysis (EDA).
<br>
<br>
<br>
----
# Pseudomonas experiment level analysis
Main notebook to run experiment-level simulation experiment using *P. aeruginosa* gene expression data.
```
%load_ext autoreload
%autoreload 2
import os
import sys
import ast
import pandas as pd
import numpy as np
import random
from plotnine import (ggplot,
labs,
geom_line,
geom_point,
geom_errorbar,
aes,
ggsave,
theme_bw,
theme,
facet_wrap,
scale_color_manual,
guides,
guide_legend,
element_blank,
element_text,
element_rect,
element_line,
coords)
from sklearn.decomposition import PCA
import warnings
warnings.filterwarnings(action='ignore')
from simulate_expression_compendia_modules import pipeline
from ponyo import utils, train_vae_modules
from numpy.random import seed
randomState = 123
seed(randomState)
# Read in config variables
base_dir = os.path.abspath(os.path.join(os.getcwd(),"../"))
config_file = os.path.abspath(os.path.join(base_dir,
"configs",
"config_test_Pa_experiment_limma.tsv"))
params = utils.read_config(config_file)
# Load parameters
local_dir = params["local_dir"]
dataset_name = params['dataset_name']
analysis_name = params["simulation_type"]
correction_method = params["correction_method"]
lst_num_partitions = params["lst_num_partitions"]
train_architecture = params['NN_architecture']
# Input files
normalized_data_file = os.path.join(
base_dir,
dataset_name,
"data",
"input",
"train_set_normalized_test.tsv")
metadata_file = os.path.join(
base_dir,
dataset_name,
"data",
"metadata",
"sample_annotations.tsv")
# Output files
normalized_processed_data_file = os.path.join(
base_dir,
dataset_name,
"data",
"input",
"train_set_normalized_processed_test.txt.xz")
```
## Setup directories
```
utils.setup_dir(config_file)
```
## Process data
This pipeline is expecting data to be of the form sample x gene. The downloaded data is gene x sample.
```
pipeline.transpose_data(normalized_data_file,
normalized_processed_data_file)
```
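Conceptually this step is just a matrix transpose; a minimal pandas sketch of the same idea with illustrative data (the real helper also reads and writes the files):

```python
import pandas as pd

# gene x sample, as downloaded
gene_by_sample = pd.DataFrame(
    [[1.0, 2.0], [3.0, 4.0]],
    index=["geneA", "geneB"],
    columns=["sample1", "sample2"],
)
sample_by_gene = gene_by_sample.T  # sample x gene, as the pipeline expects
print(sample_by_gene.shape)  # (2, 2), but rows are now samples
```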
## Pre-process data
```
# Output file
experiment_id_file = os.path.join(
base_dir,
dataset_name,
"data",
"metadata",
"experiment_ids.txt")
utils.create_experiment_id_file(metadata_file,
normalized_processed_data_file,
experiment_id_file,
config_file)
```
## Train VAE
```
# Directory containing log information from VAE training
vae_log_dir = os.path.join(
base_dir,
dataset_name,
"logs",
train_architecture)
# Train VAE
train_vae_modules.train_vae(config_file,
normalized_processed_data_file)
```
## Run simulation experiment without noise correction
```
# Run simulation without correction
corrected=False
pipeline.run_simulation(config_file,
normalized_processed_data_file,
corrected,
experiment_id_file)
```
## Run simulation with correction applied
```
# Run simulation with correction
corrected=True
pipeline.run_simulation(config_file,
normalized_processed_data_file,
corrected,
experiment_id_file)
```
## Make figures
```
pca_ind = [0,1,2]
# File directories
similarity_uncorrected_file = os.path.join(
base_dir,
dataset_name,
"results",
"saved_variables",
dataset_name + "_" + analysis_name + "_svcca_uncorrected_" + correction_method + ".pickle")
ci_uncorrected_file = os.path.join(
base_dir,
dataset_name,
"results",
"saved_variables",
dataset_name + "_" + analysis_name + "_ci_uncorrected_" + correction_method + ".pickle")
similarity_corrected_file = os.path.join(
base_dir,
dataset_name,
"results",
"saved_variables",
dataset_name + "_" + analysis_name + "_svcca_corrected_" + correction_method + ".pickle")
ci_corrected_file = os.path.join(
base_dir,
dataset_name,
"results",
"saved_variables",
dataset_name + "_" + analysis_name + "_ci_corrected_" + correction_method + ".pickle")
permuted_score_file = os.path.join(
base_dir,
dataset_name,
"results",
"saved_variables",
dataset_name + "_" + analysis_name + "_permuted.npy")
compendia_dir = os.path.join(
local_dir,
"partition_simulated",
dataset_name + "_" + analysis_name)
# Output files
svcca_file = os.path.join(
base_dir,
dataset_name,
"results",
dataset_name +"_"+analysis_name+"_svcca_"+correction_method+".svg")
svcca_png_file = os.path.join(
base_dir,
dataset_name,
"results",
dataset_name +"_"+analysis_name+"_svcca_"+correction_method+".png")
pca_uncorrected_file = os.path.join(
base_dir,
dataset_name,
"results",
dataset_name +"_"+analysis_name+"_pca_uncorrected_"+correction_method+".svg")
pca_corrected_file = os.path.join(
base_dir,
dataset_name,
"results",
dataset_name +"_"+analysis_name+"_pca_corrected_"+correction_method+".svg")
# Load pickled files
uncorrected_svcca = pd.read_pickle(similarity_uncorrected_file)
err_uncorrected_svcca = pd.read_pickle(ci_uncorrected_file)
corrected_svcca = pd.read_pickle(similarity_corrected_file)
err_corrected_svcca = pd.read_pickle(ci_corrected_file)
permuted_score = np.load(permuted_score_file)
# Concatenate error bars
uncorrected_svcca_err = pd.concat([uncorrected_svcca, err_uncorrected_svcca], axis=1)
corrected_svcca_err = pd.concat([corrected_svcca, err_corrected_svcca], axis=1)
# Add group label
uncorrected_svcca_err['Group'] = 'uncorrected'
corrected_svcca_err['Group'] = 'corrected'
# Concatenate dataframes
all_svcca = pd.concat([uncorrected_svcca_err, corrected_svcca_err])
all_svcca
```
### SVCCA
```
# Plot
lst_num_partitions = list(all_svcca.index)
threshold = pd.DataFrame(
np.tile(
permuted_score,
(len(lst_num_partitions), 1)),
index=lst_num_partitions,
columns=['score'])
panel_A = ggplot(all_svcca) \
+ geom_line(all_svcca,
aes(x=lst_num_partitions, y='score', color='Group'),
size=1.5) \
+ geom_point(aes(x=lst_num_partitions, y='score'),
color ='darkgrey',
size=0.5) \
+ geom_errorbar(all_svcca,
aes(x=lst_num_partitions, ymin='ymin', ymax='ymax'),
color='darkgrey') \
+ geom_line(threshold,
aes(x=lst_num_partitions, y='score'),
linetype='dashed',
size=1,
color="darkgrey",
show_legend=False) \
+ labs(x = "Number of Partitions",
y = "Similarity score (SVCCA)",
title = "Similarity across varying numbers of partitions") \
+ theme(
plot_background=element_rect(fill="white"),
panel_background=element_rect(fill="white"),
panel_grid_major_x=element_line(color="lightgrey"),
panel_grid_major_y=element_line(color="lightgrey"),
axis_line=element_line(color="grey"),
legend_key=element_rect(fill='white', colour='white'),
legend_title=element_text(family='sans-serif', size=15),
legend_text=element_text(family='sans-serif', size=12),
plot_title=element_text(family='sans-serif', size=15),
axis_text=element_text(family='sans-serif', size=12),
axis_title=element_text(family='sans-serif', size=15)
) \
+ scale_color_manual(['#1976d2', '#b3e5fc'])

print(panel_A)

ggsave(plot=panel_A, filename=svcca_file, device="svg", dpi=300)
ggsave(plot=panel_A, filename=svcca_png_file, device="png", dpi=300)
```
### Uncorrected PCA
```
lst_num_partitions = [lst_num_partitions[i] for i in pca_ind]
all_data_df = pd.DataFrame()
# Get batch 1 data
partition_1_file = os.path.join(
compendia_dir,
"Partition_1_0.txt.xz")
partition_1 = pd.read_table(
partition_1_file,
header=0,
index_col=0,
sep='\t')
for i in lst_num_partitions:
print('Plotting PCA of 1 partition vs {} partitions...'.format(i))
# Simulated data with all samples in a single batch
original_data_df = partition_1.copy()
# Add grouping column for plotting
original_data_df['num_partitions'] = '1'
# Get data with additional batch effects added
partition_other_file = os.path.join(
compendia_dir,
"Partition_"+str(i)+"_0.txt.xz")
partition_other = pd.read_table(
partition_other_file,
header=0,
index_col=0,
sep='\t')
# Simulated data with i batch effects
partition_data_df = partition_other
# Add grouping column for plotting
partition_data_df['num_partitions'] = 'multiple'
# Concatenate datasets together
combined_data_df = pd.concat([original_data_df, partition_data_df])
# PCA projection
pca = PCA(n_components=2)
# Encode expression data into 2D PCA space
combined_data_numeric_df = combined_data_df.drop(['num_partitions'], axis=1)
combined_data_PCAencoded = pca.fit_transform(combined_data_numeric_df)
combined_data_PCAencoded_df = pd.DataFrame(combined_data_PCAencoded,
index=combined_data_df.index,
columns=['PC1', 'PC2']
)
# Variance explained
print(pca.explained_variance_ratio_)
# Add back in batch labels (i.e. labels = "batch_"<how many batch effects were added>)
combined_data_PCAencoded_df['num_partitions'] = combined_data_df['num_partitions']
# Add column that designates which batch effect comparision (i.e. comparison of 1 batch vs 5 batches
# is represented by label = 5)
combined_data_PCAencoded_df['comparison'] = str(i)
# Concatenate ALL comparisons
all_data_df = pd.concat([all_data_df, combined_data_PCAencoded_df])
# Convert 'num_experiments' into categories to preserve the ordering
lst_num_partitions_str = [str(i) for i in lst_num_partitions]
num_partitions_cat = pd.Categorical(all_data_df['num_partitions'], categories=['1', 'multiple'])
# Convert 'comparison' into categories to preserve the ordering
comparison_cat = pd.Categorical(all_data_df['comparison'], categories=lst_num_partitions_str)
# Assign to a new column in the df
all_data_df = all_data_df.assign(num_partitions_cat = num_partitions_cat)
all_data_df = all_data_df.assign(comparison_cat = comparison_cat)
all_data_df.columns = ['PC1', 'PC2', 'num_partitions', 'comparison', 'No. of partitions', 'Comparison']
# Plot all comparisons in one figure
panel_B = ggplot(all_data_df[all_data_df['Comparison'] != '1'],
aes(x='PC1', y='PC2')) \
+ geom_point(aes(color='No. of partitions'),
alpha=0.2) \
+ facet_wrap('~Comparison') \
+ labs(x = "PC 1",
y = "PC 2",
title = "PCA of partition 1 vs multiple partitions") \
+ theme_bw() \
+ theme(
legend_title_align = "center",
plot_background=element_rect(fill='white'),
legend_key=element_rect(fill='white', colour='white'),
legend_text=element_text(family='sans-serif', size=12),
plot_title=element_text(family='sans-serif', size=15),
axis_text=element_text(family='sans-serif', size=12),
axis_title=element_text(family='sans-serif', size=15)
) \
+ guides(colour=guide_legend(override_aes={'alpha': 1})) \
+ scale_color_manual(['#bdbdbd', '#b3e5fc']) \
+ geom_point(data=all_data_df[all_data_df['Comparison'] == '1'],
alpha=0.1,
color='#bdbdbd')
print(panel_B)
ggsave(plot=panel_B, filename=pca_uncorrected_file)
```
### Corrected PCA
```
lst_num_partitions = [lst_num_partitions[i] for i in pca_ind]
all_corrected_data_df = pd.DataFrame()
# Get batch 1 data
partition_1_file = os.path.join(
compendia_dir,
"Partition_corrected_1_0.txt.xz")
partition_1 = pd.read_table(
partition_1_file,
header=0,
index_col=0,
sep='\t')
# Transpose data to df: sample x gene
partition_1 = partition_1.T
for i in lst_num_partitions:
print('Plotting PCA of 1 partition vs {} partitions...'.format(i))
# Simulated data with all samples in a single batch
original_data_df = partition_1.copy()
# Match format of column names in before and after df
original_data_df.columns = original_data_df.columns.astype(str)
# Add grouping column for plotting
original_data_df['num_partitions'] = '1'
# Get data with additional batch effects added and corrected
partition_other_file = os.path.join(
compendia_dir,
"Partition_corrected_"+str(i)+"_0.txt.xz")
partition_other = pd.read_table(
partition_other_file,
header=0,
index_col=0,
sep='\t')
# Transpose data to df: sample x gene
partition_other = partition_other.T
# Simulated data with i batch effects that are corrected
partition_data_df = partition_other
# Add grouping column for plotting
partition_data_df['num_partitions'] = 'multiple'
# Match format of column names in before and after df
partition_data_df.columns = original_data_df.columns.astype(str)
# Concatenate datasets together
combined_data_df = pd.concat([original_data_df, partition_data_df])
# PCA projection
pca = PCA(n_components=2)
# Encode expression data into 2D PCA space
combined_data_numeric_df = combined_data_df.drop(['num_partitions'], axis=1)
combined_data_PCAencoded = pca.fit_transform(combined_data_numeric_df)
combined_data_PCAencoded_df = pd.DataFrame(combined_data_PCAencoded,
index=combined_data_df.index,
columns=['PC1', 'PC2']
)
# Add back in batch labels (i.e. labels = "batch_"<how many batch effects were added>)
combined_data_PCAencoded_df['num_partitions'] = combined_data_df['num_partitions']
# Add column that designates which batch effect comparision (i.e. comparison of 1 batch vs 5 batches
# is represented by label = 5)
combined_data_PCAencoded_df['comparison'] = str(i)
# Concatenate ALL comparisons
all_corrected_data_df = pd.concat([all_corrected_data_df, combined_data_PCAencoded_df])
# Convert 'num_experiments' into categories to preserve the ordering
lst_num_partitions_str = [str(i) for i in lst_num_partitions]
num_partitions_cat = pd.Categorical(all_corrected_data_df['num_partitions'], categories=['1', 'multiple'])
# Convert 'comparison' into categories to preserve the ordering
comparison_cat = pd.Categorical(all_corrected_data_df['comparison'], categories=lst_num_partitions_str)
# Assign to a new column in the df
all_corrected_data_df = all_corrected_data_df.assign(num_partitions_cat = num_partitions_cat)
all_corrected_data_df = all_corrected_data_df.assign(comparison_cat = comparison_cat)
all_corrected_data_df.columns = ['PC1', 'PC2', 'num_partitions', 'comparison', 'No. of partitions', 'Comparison']
# Plot all comparisons in one figure
panel_C = ggplot(all_corrected_data_df[all_corrected_data_df['Comparison'] != '1'],
aes(x='PC1',
y='PC2')) \
+ geom_point(aes(color='No. of partitions'),
alpha=0.1) \
+ facet_wrap('~Comparison') \
+ labs(x = "PC 1",
y = "PC 2",
title = "PCA of partition 1 vs multiple partitions") \
+ theme_bw() \
+ theme(
legend_title_align = "center",
plot_background=element_rect(fill='white'),
legend_key=element_rect(fill='white', colour='white'),
legend_text=element_text(family='sans-serif', size=12),
plot_title=element_text(family='sans-serif', size=15),
axis_text=element_text(family='sans-serif', size=12),
axis_title=element_text(family='sans-serif', size=15)
) \
+ guides(colour=guide_legend(override_aes={'alpha': 1})) \
+ scale_color_manual(['#bdbdbd', '#1976d2']) \
+ geom_point(data=all_corrected_data_df[all_corrected_data_df['Comparison'] == '1'],
alpha=0.1,
color='#bdbdbd')
print(panel_C)
ggsave(plot=panel_C, filename=pca_corrected_file)
```
```
!pip install -Uq catalyst gym
```
# Seminar. RL, DQN.
Hi! In the first part of the seminar, we are going to introduce one of the main algorithms in the Reinforcement Learning domain. Deep Q-Network is the pioneering algorithm that amalgamates Q-learning and deep neural networks. There is also a short review of the Gym environments in which our bots will play games.
```
from collections import deque, namedtuple
import random
import numpy as np
import gym
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from catalyst import dl, utils
```
In the beginning, look at the algorithm:

There are several differences between the usual DL and RL routines. Our bot is trained on the actions it has taken in the past. We don't have infinite memory, but we can save recent transitions in a buffer. Let's code it!
```
device = utils.get_device()

Transition = namedtuple(
    'Transition',
    field_names=[
        'state',
        'action',
        'reward',
        'done',
        'next_state'
    ]
)


class ReplayBuffer:
    def __init__(self, capacity: int):
        self.buffer = deque(maxlen=capacity)

    def append(self, transition: Transition):
        self.buffer.append(transition)

    def sample(self, size: int):
        indices = np.random.choice(
            len(self.buffer),
            size,
            replace=size > len(self.buffer)
        )
        states, actions, rewards, dones, next_states = \
            zip(*[self.buffer[idx] for idx in indices])
        states, actions, rewards, dones, next_states = (
            np.array(states, dtype=np.float32),
            np.array(actions, dtype=np.int64),
            np.array(rewards, dtype=np.float32),
            np.array(dones, dtype=bool),  # np.bool is deprecated in recent numpy
            np.array(next_states, dtype=np.float32)
        )
        return states, actions, rewards, dones, next_states

    def __len__(self):
        return len(self.buffer)
```
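Note the `replace=size > len(self.buffer)` trick in `sample`: indices are drawn without replacement until the requested batch is larger than the buffer. A numpy sketch of that behaviour:

```python
import numpy as np

buffer_len = 5  # pretend the buffer holds 5 transitions
small = np.random.choice(buffer_len, 3, replace=3 > buffer_len)  # no repeats
large = np.random.choice(buffer_len, 8, replace=8 > buffer_len)  # repeats allowed
print(len(set(small)), len(large))  # 3 8
```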
To work well with Catalyst train loops, we implement an intermediate abstraction.
```
from torch.utils.data.dataset import IterableDataset


# RL does not come with a predefined dataset,
# so we need to specify the epoch length ourselves
class ReplayDataset(IterableDataset):
    def __init__(self, buffer: ReplayBuffer, epoch_size: int = int(1e3)):
        self.buffer = buffer
        self.epoch_size = epoch_size

    def __iter__(self):
        states, actions, rewards, dones, next_states = \
            self.buffer.sample(self.epoch_size)
        for i in range(len(dones)):
            yield states[i], actions[i], rewards[i], dones[i], next_states[i]

    def __len__(self):
        return self.epoch_size
```
After creating a buffer, we need to gather state-action-reward transitions and save them in it. We create one function that asks the model for an action, and another that communicates with the environment.
```
def get_action(env, network, state, epsilon=-1):
    if np.random.random() < epsilon:
        action = env.action_space.sample()
    else:
        state = torch.tensor(state[None], dtype=torch.float32).to(device)
        q_values = network(state).detach().cpu().numpy()[0]
        action = np.argmax(q_values)
    return int(action)


def generate_session(
    env,
    network,
    t_max=1000,
    epsilon=-1,
    replay_buffer=None,
):
    total_reward = 0
    state = env.reset()

    for t in range(t_max):
        action = get_action(env, network, state=state, epsilon=epsilon)
        next_state, reward, done, _ = env.step(action)

        if replay_buffer is not None:
            transition = Transition(
                state, action, reward, done, next_state)
            replay_buffer.append(transition)

        total_reward += reward
        state = next_state
        if done:
            break

    return total_reward, t


def generate_sessions(
    env,
    network,
    t_max=1000,
    epsilon=-1,
    replay_buffer=None,
    num_sessions=100,
):
    sessions_reward, sessions_steps = 0, 0
    for i_episode in range(num_sessions):
        r, t = generate_session(
            env=env,
            network=network,
            t_max=t_max,
            epsilon=epsilon,
            replay_buffer=replay_buffer,
        )
        sessions_reward += r
        sessions_steps += t
    return sessions_reward, sessions_steps
```
If we look closely at the algorithm, we'll see that we need two networks. They look the same, but one updates its weights by gradient descent while the second tracks the first by a moving average. This soft update helps keep training stable.
```
def soft_update(target, source, tau):
    """Updates the target network parameters with smoothing by ``tau``"""
    for target_param, param in zip(target.parameters(), source.parameters()):
        target_param.data.copy_(
            target_param.data * (1.0 - tau) + param.data * tau
        )
```
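The soft update above is Polyak averaging; here is a numpy sketch of one step and of convergence over many steps (illustrative weight vectors, not the actual networks):

```python
import numpy as np

tau = 0.01
target_w, source_w = np.zeros(3), np.ones(3)

# one soft-update step: target <- (1 - tau) * target + tau * source
target_w = (1.0 - tau) * target_w + tau * source_w
print(target_w[0])  # 0.01

# repeated updates pull the target toward the source
for _ in range(1000):
    target_w = (1.0 - tau) * target_w + tau * source_w
print(abs(target_w[0] - 1.0) < 1e-3)  # True
```

A small `tau` means the target network changes slowly, which is exactly what makes the Q-learning targets stable.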
To communicate with the buffer, Catalyst's Runner requires an additional Callback.
```
class GameCallback(dl.Callback):
    def __init__(
        self,
        *,
        env,
        replay_buffer,
        session_period,
        epsilon,
        epsilon_k,
        actor_key,
    ):
        super().__init__(order=0)
        self.env = env
        self.replay_buffer = replay_buffer
        self.session_period = session_period
        self.epsilon = epsilon
        self.epsilon_k = epsilon_k
        self.actor_key = actor_key

    def on_stage_start(self, runner: dl.IRunner):
        self.actor = runner.model[self.actor_key]

        self.actor.eval()
        generate_sessions(
            env=self.env,
            network=self.actor,
            epsilon=self.epsilon,
            replay_buffer=self.replay_buffer,
            num_sessions=1000,
        )
        self.actor.train()

    def on_epoch_start(self, runner: dl.IRunner):
        self.epsilon *= self.epsilon_k
        self.session_counter = 0
        self.session_steps = 0

    def on_batch_end(self, runner: dl.IRunner):
        if runner.global_batch_step % self.session_period == 0:
            self.actor.eval()

            session_reward, session_steps = generate_session(
                env=self.env,
                network=self.actor,
                epsilon=self.epsilon,
                replay_buffer=self.replay_buffer
            )

            self.session_counter += 1
            self.session_steps += session_steps

            runner.batch_metrics.update({"s_reward": session_reward})
            runner.batch_metrics.update({"s_steps": session_steps})

            self.actor.train()

    def on_epoch_end(self, runner: dl.IRunner):
        num_sessions = 100

        self.actor.eval()
        valid_rewards, valid_steps = generate_sessions(
            env=self.env,
            network=self.actor,
            num_sessions=num_sessions
        )
        self.actor.train()

        valid_rewards /= float(num_sessions)
        valid_steps /= float(num_sessions)
        runner.epoch_metrics["_epoch_"]["num_samples"] = self.session_steps
        runner.epoch_metrics["_epoch_"]["updates_per_sample"] = (
            runner.loader_sample_step / self.session_steps
        )
        runner.epoch_metrics["_epoch_"]["v_reward"] = valid_rewards
class CustomRunner(dl.Runner):
def __init__(
self,
*,
gamma,
tau,
tau_period=1,
**kwargs,
):
super().__init__(**kwargs)
self.gamma = gamma
self.tau = tau
self.tau_period = tau_period
def on_stage_start(self, runner: dl.IRunner):
super().on_stage_start(runner)
soft_update(self.model["target"], self.model["origin"], 1.0)
def handle_batch(self, batch):
# model train/valid step
states, actions, rewards, dones, next_states = batch
network, target_network = self.model["origin"], self.model["target"]
# get q-values for all actions in current states
state_qvalues = network(states)
# select q-values for chosen actions
state_action_qvalues = \
state_qvalues.gather(1, actions.unsqueeze(-1)).squeeze(-1)
# compute q-values for all actions in next states
# compute V*(next_states) using predicted next q-values
# at the last state we shall use simplified formula:
# Q(s,a) = r(s,a) since s' doesn't exist
with torch.no_grad():
next_state_qvalues = target_network(next_states)
next_state_values = next_state_qvalues.max(1)[0]
next_state_values[dones] = 0.0
next_state_values = next_state_values.detach()
# compute "target q-values" for loss,
# it's what's inside square parentheses in the above formula.
target_state_action_qvalues = \
next_state_values * self.gamma + rewards
# mean squared error loss to minimize
loss = self.criterion(
state_action_qvalues,
target_state_action_qvalues.detach()
)
self.batch_metrics.update({"loss": loss})
if self.is_train_loader:
loss.backward()
self.optimizer.step()
self.optimizer.zero_grad()
if self.global_batch_step % self.tau_period == 0:
soft_update(target_network, network, self.tau)
def get_network(env, num_hidden=128):
inner_fn = utils.get_optimal_inner_init(nn.ReLU)
outer_fn = utils.outer_init
network = torch.nn.Sequential(
nn.Linear(env.observation_space.shape[0], num_hidden),
nn.ReLU(),
nn.Linear(num_hidden, num_hidden),
nn.ReLU(),
)
head = nn.Linear(num_hidden, env.action_space.n)
network.apply(inner_fn)
head.apply(outer_fn)
return torch.nn.Sequential(network, head)
# data
batch_size = 64
epoch_size = int(1e3) * batch_size
buffer_size = int(1e5)
# runner settings, ~training
gamma = 0.99
tau = 0.01
tau_period = 1 # in batches
# callback, ~exploration
session_period = 100 # in batches
epsilon = 0.98
epsilon_k = 0.9
# optimization
lr = 3e-4
# env_name = "LunarLander-v2"
env_name = "CartPole-v1"
env = gym.make(env_name)
replay_buffer = ReplayBuffer(buffer_size)
network, target_network = get_network(env), get_network(env)
utils.set_requires_grad(target_network, requires_grad=False)
models = {"origin": network, "target": target_network}
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(network.parameters(), lr=lr)
loaders = {
"train": DataLoader(
ReplayDataset(replay_buffer, epoch_size=epoch_size),
batch_size=batch_size,
),
}
runner = CustomRunner(
gamma=gamma,
tau=tau,
tau_period=tau_period,
)
runner.train(
model=models,
criterion=criterion,
optimizer=optimizer,
loaders=loaders,
logdir="./logs_dqn",
num_epochs=10,
verbose=True,
valid_loader="_epoch_",
valid_metric="v_reward",
minimize_valid_metric=False,
load_best_on_end=True,
callbacks=[
GameCallback(
env=env,
replay_buffer=replay_buffer,
session_period=session_period,
epsilon=epsilon,
epsilon_k=epsilon_k,
actor_key="origin",
)
]
)
```
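As a sanity check of the TD target computed in `handle_batch`, the rule `r + γ·max_a' Q(s', a')` with terminal-state masking can be verified on toy tensors (all values below are made up):

```python
import torch

gamma = 0.99
rewards = torch.tensor([1.0, 2.0])
dones = torch.tensor([False, True])
# Made-up target-network outputs: two next states, two actions each
next_state_qvalues = torch.tensor([[0.5, 1.5],
                                   [2.0, 3.0]])

next_state_values = next_state_qvalues.max(1)[0]  # best next-state value
next_state_values[dones] = 0.0                    # terminal states contribute nothing
targets = next_state_values * gamma + rewards     # r + gamma * max_a' Q(s', a')
```

The second target collapses to the bare reward because the episode ended there.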
And we can watch how our model plays the game!
\* To run the cells below, you may need to update your Python environment; the exact instructions depend on your system specification.
```
# record sessions
import gym.wrappers
env = gym.wrappers.Monitor(
gym.make(env_name),
directory="videos_dqn",
force=True)
generate_sessions(
env=env,
network=runner.model["origin"],
num_sessions=100
)
env.close()
# show video
from IPython.display import HTML
import os
video_names = list(
filter(lambda s: s.endswith(".mp4"), os.listdir("./videos_dqn/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./videos_dqn/"+video_names[-1])) # this may or may not be the _last_ video. Try other indices
```
<a href="https://colab.research.google.com/github/Espanta/handson-ml/blob/master/Learning3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Your name:
<pre> Your Name </pre>
### Collaborators:
<pre> Collaborators </pre>
```
import numpy as np
import pandas as pd
import seaborn as sns
# to make this notebook's output stable across runs
np.random.seed(123)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
```
### Classification
Q1. Build a classification model for the default of credit card clients dataset. More info here:
https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients
- Explore the data
- Make sure you build a full data pipeline
- Do you require any data pre-processing? Are all the features useful? (Use only raw features)
- set the random seed to 123 (For splitting or any other random algorithm)
- Split data into training (80%) and testing (20%)
- Follow a similar procedure to the one from week 2 (End-to-end Machine Learning Project). Remember Appendix B
- Study the ROC Curve, decide threshold
- Use 2 classifiers.
- Random Forest
- tune only: n_estimators: {4, 5, 10, 20, 50}
    - KNN Classifier
- tune only: n_neighbors: {3, 5, 10, 20}
- Which one performs better in the cross validation?
http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html
- Cross-validation with 5-folds
 - Other parameters -> Use default
# Frame the Problem and Look at the Big Picture
The objective is to predict whether or not a credit card client will default on their payment in the next month. We will use the better of two classifiers, Random Forest and KNN, and determine the best hyperparameters from a given set using GridSearchCV.
In a business context, this project would be potentially useful at 2 phases of credit lending:
1. Active credit card client: It can be used to identify clients that are likely to default in the next payment and therefore flag these clients. It may also be possible to extend the model to detect it 2-3 months prior to a potential default so certain actions may be taken.
2. During credit card application: The results/model of this project can further be used as part of the application process to improve the company's assessment of the creditworthiness of clients applying for credit cards or other credit products.
One limitation of this study is that the data is from April 2005 to September 2005 and is considered stale for use in the current period or in the near future.
Another major limitation of this study is that the data are from credit card clients in Taiwan and cannot be used in Canada (or any other country for that matter) due to fundamental differences in income levels, culture, consumer behaviour, credit industry landscape, and other conditions.
# Get the Data
Data is obtained through the University of California, Irvine Machine Learning Repository (https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients). The data is open for public use and no authorizations are required.
Data is imported from the UCI website:
```
df = pd.read_excel("https://archive.ics.uci.edu/ml/machine-learning-databases/00350/default%20of%20credit%20card%20clients.xls",
                   sheet_name=0, skiprows=1, header=0)
```
Show all columns. This is not ideal when there are many columns, but this dataset has a manageable number.
```
pd.set_option('display.max_columns', None)
```
Let's take a look at the headers and a few rows to better understand the data structure
Data description on the website provides information about the features and the values therein.:
* **LIMIT_BAL**: Amount of given credit (in NT dollars) for the individual consumer's credit and his/her family (supplementary) credit
* **SEX** (gender):
* 1 = Male
* 2 = Female
* **EDUCATION**:
* 1 = Graduate school
* 2 = University
* 3 = High School
* 4 = Others
* **MARITAL STATUS**:
* 1 = Married
* 2 = Single
* 3 = Others
* **AGE**: In years
* **PAY_#**: History of past monthly payment records (April 2005 to September 2005). The scale is
* -1 = Duly paid
* 1 = One month delay
* 2 = 2 month delay
...
* N = N month delay
* **BILL_AMT#**: Amount on bill statement (in NT dollars) from April 2005 to September 2005
* **PAY_AMT#**: Amount of previous payment (in NT dollars) from April 2005 to September 2005
* **default payment next month**: The data label
* 0 = Did not default
* 1 = Default
Note that PAY_0 seems to be an odd column header since the BILL_AMT and PAY_AMT columns go from 1 (April 2005) to 6 (September 2005). While this isn't absolutely necessary, we will change it to PAY_1 for consistency and clarity
```
df = df.rename(columns = {'PAY_0':'PAY_1'})
df.head()
```
Find out if your dataset is balanced or not.
```
df.groupby("default payment next month")["ID"].count()
```
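The counts above can be turned into a quick imbalance summary. A minimal sketch, with toy labels standing in for the real column (roughly 22% of clients default in this dataset):

```python
import pandas as pd

# Toy labels standing in for df["default payment next month"]
labels = pd.Series([0] * 78 + [1] * 22)

counts = labels.value_counts()
default_rate = labels.mean()                   # fraction of positives
imbalance_ratio = counts.max() / counts.min()  # majority-to-minority ratio
```

An imbalance ratio around 3.5:1 is moderate; it argues for ROC AUC (as used below) rather than plain accuracy as the evaluation metric.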
A quick summary of the data with info() shows that there are no missing values and that all of the data is represented as integers.
```
df.info()
```
Next, split the data into a training set and a test set. We will set aside 20% of the data as the test set.
```
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(df, test_size = 0.2, random_state = 123)
```
# Explore the Data
Make a copy of the train_set for exploration then generate some descriptive statistics
```
credit = train_set.copy()
credit.describe()
```
Although these features are numeric, descriptive statistics for ID, EDUCATION, and MARRIAGE are not meaningful: ID is an identifier, and EDUCATION and MARRIAGE are numeric codes for categories with no ordinal pattern (the statistics would only make sense if, say, education were total years of schooling or marriage were household size).
Some observations:
* Average credit provided to clients is around NT$167,900
* Max credit issued is 1,000,000, the median is 140,000, and the IQR is 190,000, suggesting that fewer credit lines are issued at higher amounts
* More females than males
* Average age of clients is 35, youngest client is 21, and oldest is 79
* Average bill amounts range from 38,709 to 51,069 per month
* Max payment delay is 8 months
* Some bill amounts are negative
* Average payment amounts range from 4,764 to 5,910 per month
* Around 22% of the clients defaulted on their payment for the next month
Next, let's do a pairwise plot of the features excluding the PAY_#, BILL_AMT#, and PAY_AMT# features
```
credit_nobillpay = credit[["LIMIT_BAL", "SEX","EDUCATION","MARRIAGE","AGE", "default payment next month"]].copy()
sns.pairplot(credit_nobillpay)
```
There seems to be some correlation between age and the credit limit granted to the consumer. The plots involving sex, education, and marriage are not very informative here and would be better shown with other plot types.
The target attribute is the "default payment next month". To be sure, let's check that the label is truly 0 or 1
```
credit["default payment next month"].value_counts()
```
Let's check the values under EDUCATION. The values range from 0 to 6; however, the data description does not provide any context for values other than 1 through 4.
```
credit["EDUCATION"].value_counts()
```
Let's check the values under "MARRIAGE". The value of 0 is not defined in the data description.
```
credit["MARRIAGE"].value_counts()
```
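One option, which is our own assumption since the data description is silent on these codes, is to fold the undocumented values into the documented "Others" buckets before encoding:

```python
import pandas as pd

# Toy frame standing in for the training data
credit = pd.DataFrame({"EDUCATION": [1, 2, 0, 5, 6, 3],
                       "MARRIAGE":  [1, 2, 0, 3, 1, 2]})

# EDUCATION codes 0, 5, 6 are undocumented -> fold into 4 (= Others)
credit["EDUCATION"] = credit["EDUCATION"].replace({0: 4, 5: 4, 6: 4})
# MARRIAGE code 0 is undocumented -> fold into 3 (= Others)
credit["MARRIAGE"] = credit["MARRIAGE"].replace({0: 3})
```

Since we one-hot encode these columns below, this mainly reduces the number of sparsely populated dummy columns.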
# Prepare the Data
```
credit = train_set.drop("default payment next month", axis = 1)
credit_labels = train_set["default payment next month"].copy()
```
The current encoding of the categorical variables EDUCATION and MARRIAGE uses ordinal-looking integers, but the numbers themselves carry no meaning (e.g. EDUCATION has values not defined in the data description). We will therefore use one-hot encoding for the SEX, EDUCATION, and MARRIAGE features.
Let's build the preprocessing pipeline for the numerical and categorical features.
```
from CategoricalEncoder import CategoricalEncoder
from DataFrameSelector import DataFrameSelector
from sklearn.pipeline import Pipeline
num_pipeline = Pipeline([
("select_numeric", DataFrameSelector(["LIMIT_BAL", "AGE", "PAY_1", "PAY_2", "PAY_3", "PAY_4", "PAY_5", "PAY_6",
"BILL_AMT1", "BILL_AMT2", "BILL_AMT3", "BILL_AMT4", "BILL_AMT5", "BILL_AMT6",
"PAY_AMT1", "PAY_AMT2", "PAY_AMT3", "PAY_AMT4", "PAY_AMT5", "PAY_AMT6"])),
])
cat_pipeline = Pipeline([
("select_cat", DataFrameSelector(["SEX", "EDUCATION", "MARRIAGE"])),
("cat_encoder", CategoricalEncoder(encoding='onehot-dense')),
])
from sklearn.pipeline import FeatureUnion
preprocess_pipeline = FeatureUnion(transformer_list=[
("num_pipeline", num_pipeline),
("cat_pipeline", cat_pipeline),
])
```
The preprocessing pipeline is complete; let's prepare the data
```
credit_prepared = preprocess_pipeline.fit_transform(credit)
```
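The `CategoricalEncoder`/`DataFrameSelector` helpers above come from the book's support code. If they are unavailable, an equivalent pipeline can be sketched with scikit-learn's built-in `ColumnTransformer` (column names as in this dataset; the tiny frame at the end only demonstrates that the transformer runs):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

num_attribs = (["LIMIT_BAL", "AGE"]
               + [f"PAY_{i}" for i in range(1, 7)]
               + [f"BILL_AMT{i}" for i in range(1, 7)]
               + [f"PAY_AMT{i}" for i in range(1, 7)])
cat_attribs = ["SEX", "EDUCATION", "MARRIAGE"]

preprocess = ColumnTransformer([
    ("num", "passthrough", num_attribs),
    ("cat", OneHotEncoder(handle_unknown="ignore"), cat_attribs),
])

# Toy stand-in frame: two rows, two categories per categorical column
toy = pd.DataFrame({c: [1.0, 2.0] for c in num_attribs})
toy["SEX"], toy["EDUCATION"], toy["MARRIAGE"] = [1, 2], [1, 2], [1, 2]
X = preprocess.fit_transform(toy)  # 20 passthrough + 6 one-hot columns
```

`handle_unknown="ignore"` keeps the transformer from failing if the test set contains a category code never seen in training.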
# SHORT-LIST PROMISING MODELS and FINE-TUNE THE SYSTEM
Short-listing promising models has been done as per the requirements of the homework. Moreover, a list of hyperparameters is provided to determine the best model. These two steps are merged here since the homework intertwines them.
- Use 2 classifiers.
- Random Forest
- tune only: n_estimators: {4, 5, 10, 20, 50}
    - KNN Classifier
- tune only: n_neighbors: {3, 5, 10, 20}
### Random Forest classifier
Let's build the Random Forest Classifier
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
RF = RandomForestClassifier(random_state=123)
```
Create a function to plot the ROC curve
```
def plot_roc_curve(fpr, tpr, label=None):
plt.plot(fpr, tpr, linewidth=2, label=label)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
```
Using GridSearchCV, determine the best hyperparameter for the given list of n_estimators {4, 5, 10, 20, 50}.
We will use CV = 5 and roc_auc (area under the curve) as the scoring metric
```
from sklearn.model_selection import GridSearchCV
param_grid = [{'n_estimators': [4, 5, 10, 20, 50]}]
grid_search_RF = GridSearchCV(RF, param_grid, cv=5 ,scoring='roc_auc')
grid_search_RF.fit(credit_prepared, credit_labels)
```
The best hyperparameter in the given list for Random Forest is when n_estimators = 50
```
grid_search_RF.best_params_
```
For documentation, let's show the resulting Area Under the Curve for each of the n_estimators used
```
cvres_RF = grid_search_RF.cv_results_
for mean_score, params in zip(cvres_RF["mean_test_score"], cvres_RF["params"]):
print(mean_score, params)
```
Let's plot the ROC curve for the best RF estimator
```
best_RF_model = grid_search_RF.best_estimator_
y_probas_RF = cross_val_predict(best_RF_model, credit_prepared, credit_labels, cv=5, method="predict_proba")
y_scores_RF = y_probas_RF[:, 1]
fpr_RF, tpr_RF, thresholds_RF = roc_curve(credit_labels,y_scores_RF)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr_RF, tpr_RF, "Random Forest")
plt.legend(loc="lower right", fontsize=16)
plt.show()
```
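The homework also asks us to decide on a threshold. One common (but not the only) choice is the point maximizing Youden's J = TPR − FPR along the ROC curve. A minimal sketch on made-up labels and scores standing in for `credit_labels` / `y_scores_RF`:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Toy labels/scores; here the classes are perfectly separable at 0.4
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.35, 0.4, 0.7, 0.8, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
best_idx = int(np.argmax(tpr - fpr))   # maximize J = TPR - FPR
best_threshold = thresholds[best_idx]
```

In practice the choice should also weigh the business cost of false positives (flagging a good client) against false negatives (missing a defaulter), which Youden's J treats as equal.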
### KNN Classifier
First, let's build the KNN Classifier
```
from sklearn.neighbors import KNeighborsClassifier
KNN = KNeighborsClassifier()
```
Using GridSearchCV, determine the best hyperparameter for the given list of n_neighbors {3, 5, 10, 20}.
We will use CV = 5 and roc_auc (area under the curve) as the scoring metric
```
param_grid = [{'n_neighbors': [3, 5, 10, 20]}]
grid_search_KNN = GridSearchCV(KNN, param_grid, cv=5 ,scoring='roc_auc')
grid_search_KNN.fit(credit_prepared, credit_labels)
```
The best hyperparameter in the list for KNN is when n_neighbors = 20
```
grid_search_KNN.best_params_
```
For documentation, let's show the resulting Area Under the Curve for each of the n_neighbors used
```
cvres_KNN = grid_search_KNN.cv_results_
for mean_score, params in zip(cvres_KNN["mean_test_score"], cvres_KNN["params"]):
print(mean_score, params)
```
Let's plot the ROC curve for the best KNN estimator
```
best_KNN_model = grid_search_KNN.best_estimator_
y_probas_KNN = cross_val_predict(best_KNN_model, credit_prepared, credit_labels, cv=5, method="predict_proba")
y_scores_KNN = y_probas_KNN[:, 1]
fpr_KNN, tpr_KNN, thresholds_KNN = roc_curve(credit_labels,y_scores_KNN)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr_KNN, tpr_KNN, "KNN")
plt.legend(loc="lower right", fontsize=16)
plt.show()
```
The best RF estimator (with n_estimators = 50) has an ROC AUC of 0.754135222935.
Compare this to the best KNN estimator (with n_neighbors = 20), which has an ROC AUC of 0.644265674088.
Effectively, the best model out of the models trained is Random Forest with n_estimators = 50. This is also demonstrated by plotting the curves for the best RF and KNN estimators.
```
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr_KNN, tpr_KNN, "KNN with n_neighbours = 20")
plot_roc_curve(fpr_RF, tpr_RF, "Random Forest with n_estimators = 50 ")
plt.legend(loc="lower right", fontsize=16)
plt.show()
```
### Final Model Performance
Given that we found our best estimator, let's measure its performance on the test set.
Preprocess the data:
```
credit_test = test_set.drop("default payment next month", axis = 1)
credit_test_labels = test_set["default payment next month"].copy()
credit_test_prepared = preprocess_pipeline.transform(credit_test)  # transform only: the pipeline was fit on the training set
```
Plot the ROC curve
```
# The best model is already fit on the training set, so score the test set directly
y_probas_RF_test = best_RF_model.predict_proba(credit_test_prepared)
y_scores_RF_test = y_probas_RF_test[:, 1]
fpr_RF_test, tpr_RF_test, thresholds_RF_test = roc_curve(credit_test_labels,y_scores_RF_test)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr_RF_test, tpr_RF_test, "Random Forest - Test")
plt.legend(loc="lower right", fontsize=16)
plt.show()
```
Compute the AUC score
```
roc_auc_score(credit_test_labels, y_scores_RF_test)
```
#### Conclusions?
Explain your results and choices
**Response:**
The best classifier found is Random Forest with n_estimators = 50. Both the training set and the test set had ROC AUC scores of about 0.75, with the test set performing slightly worse.
While the problem is limited to raw features only, I would want to include one additional feature that I think would be relevant to the task: the % of the bill amount that was paid for the month.
With respect to features, not all raw features are necessary; potentially only the last 2-3 months of payment-status information is needed. However, I can think of 2 additional measures that could perhaps improve the classifiers:
1. % of the bill amount that was paid for the month (so payments > or = to 100% means consumer is either on time on payments or paying down debt)
2. Taking measure #1 a bit further: the change in the % of the outstanding bill paid (e.g. 75% of the bill paid in month 1 followed by 45% in month 2 shows a worsening payment rate and also reflects a ballooning bill). A decreasing rate of change in the % of the outstanding bill paid means clients can't keep up with payments.
I think these 2 measures would be more useful than the current raw features BILL_AMT# and PAY_AMT#.
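The first proposed measure can be sketched as follows (column names as in the dataset; treating a zero bill as fully paid is our own assumption about that edge case):

```python
import pandas as pd

# Toy rows standing in for the real BILL_AMT1 / PAY_AMT1 columns
df = pd.DataFrame({"BILL_AMT1": [1000.0, 0.0, 2000.0],
                   "PAY_AMT1":  [1000.0, 0.0,  500.0]})

# NaN out zero bills so the division is well defined,
# then treat a zero bill as fully paid (assumption)
ratio = df["PAY_AMT1"] / df["BILL_AMT1"].where(df["BILL_AMT1"] != 0)
df["PAY_RATIO1"] = ratio.fillna(1.0)
df["COVERED1"] = df["PAY_RATIO1"] >= 1.0  # paid the full bill (or more)
```

The same computation repeated over months 1-6 would yield the month-over-month change proposed in measure #2.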
**Q2.** (Optional) Write a function that can shift an MNIST image in any direction (left, right, up, or down) by one pixel. Then, for each image in the training set, create four shifted copies (one per direction) and add them to the training set. Finally, train your best model on this expanded training set and measure its accuracy on the test set. You should observe that your model performs even better now! This technique of artificially growing the training set is called data augmentation or training set expansion.
Gather MNIST data
```
from sklearn.datasets import fetch_openml
# fetch_mldata was removed from scikit-learn; fetch_openml is its replacement
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
```
Split data to train set and test set
```
X, y = mnist["data"], mnist["target"]
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
shuffle_index = np.random.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
```
Define function to shift image by *dx* number of pixels horizontally and *dy* number of pixels vertically
```
from scipy.ndimage import shift  # scipy.ndimage.interpolation is deprecated
def shift_image(image, dx, dy):
image = image.reshape((28, 28))
shifted_image = shift(image, [dy, dx], cval=0, mode="constant")
return shifted_image.reshape([-1])
```
Demonstrate the function *shift_image* by choosing the 1000th data point and shifting it down 5 pixels and then left 5 pixels
```
image = X_train[1000]
shifted_image_down = shift_image(image, 0, 5)
shifted_image_left = shift_image(image, -5, 0)
plt.figure(figsize=(12,3))
plt.subplot(131)
plt.title("Original", fontsize=14)
plt.imshow(image.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.subplot(132)
plt.title("Shifted down", fontsize=14)
plt.imshow(shifted_image_down.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.subplot(133)
plt.title("Shifted left", fontsize=14)
plt.imshow(shifted_image_left.reshape(28, 28), interpolation="nearest", cmap="Greys")
plt.show()
```
For each image in the training set, 4 copies are created (shifted right by 1 pixel, shifted left by 1 pixel, shifted up by 1 pixel, shifted down by 1 pixel, respectively). These copies are then appended to the original training set alongside their respective labels
```
X_train_augmented = [image for image in X_train]
y_train_augmented = [label for label in y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
for image, label in zip(X_train, y_train):
X_train_augmented.append(shift_image(image, dx, dy))
y_train_augmented.append(label)
X_train_augmented = np.array(X_train_augmented)
y_train_augmented = np.array(y_train_augmented)
shuffle_idx = np.random.permutation(len(X_train_augmented))
X_train_augmented = X_train_augmented[shuffle_idx]
y_train_augmented = y_train_augmented[shuffle_idx]
```
The best estimator found in the Chapter 3 notebook uses n_neighbors = 4. Let's set this hyperparameter
```
knn_clf = KNeighborsClassifier(n_neighbors = 4)
```
Fitting the KNN classifier with the augmented data
```
knn_clf.fit(X_train_augmented, y_train_augmented)
```
Compute the accuracy score. The resulting accuracy is high, at about 97.5%, similar to the figure found in the book
```
from sklearn.metrics import accuracy_score
y_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_pred)
```
### Conclusions
The increase in accuracy is small, however, and it came at an immense cost: training on the augmented data took a few hours versus about 45 minutes (at least on my local machine). Knowing this, the technique must be used with care. If accuracy is paramount and/or resources aren't a constraint, data augmentation should improve model performance.
I find this technique very interesting. It's something I hadn't encountered before, and the idea of artificially growing the training set, without effectively adding new data, while improving performance is impressive. I'm sure there are other such techniques I would want to explore.
### Submit your notebook
Submit your solution here
https://goo.gl/forms/VKD7Zwu54oHjutDc2
Make sure you rename your notebook to
W3_UTORid.ipynb
Example W3_adfasd01.ipynb
# Parameter plotting with LiionDB
In this notebook we will show how to plot and compare parameters in a loop.
A simplified interactive GUI is available online at [**www.liiondb.com**](www.liiondb.com)
---
* LiionDB is a database of DFN-type battery model parameters that accompanies the review manuscript: [**Parameterising Continuum-Level Li-ion Battery Models**.](https://www.overleaf.com/project/5ed63d9378cbf700018a2018).
* If you use LiionDB in your work, please cite our paper at: [doi.org](https://www.doi.org/).
---
Start by cloning the liiondb library into this notebook & loading modules
```
%rm -rf liiondb # remove any previous clone before refreshing
!git clone https://github.com/ndrewwang/liiondb.git
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import liiondb.functions.fn_db as fn_db
dfndb, db_connection = fn_db.liiondb()
%cd liiondb
```
---
## First, define a function `plot_loop()` that plots all `data_id`s from a given list:
```
def plot_loop(id_list,temperature):
#plot data id list in a loop fashion
%matplotlib inline
from matplotlib.font_manager import FontProperties
from matplotlib.ticker import (MultipleLocator, AutoMinorLocator)
import importlib
# For parameters with temperature dependence
T = temperature #K
# Plot display settings
w = 5
h = 4
d = 100
plt.figure(figsize=(w, h), dpi=d)
# Label font sizes
SMALL_SIZE = 8
MEDIUM_SIZE = 10
BIGGER_SIZE = 15
# Looping through and plotting
for i in range(len(id_list)):
data_id = id_list[i]
QUERY = 'SELECT * FROM data WHERE data_id = %s;' %str(data_id)
df = pd.read_sql(QUERY,dfndb)
# Query and assign the legend and axes label strings
paper_id = df.paper_id[0]
QUERY = 'SELECT paper.paper_tag FROM paper WHERE paper_id = %s;' %str(paper_id)
paper_string = pd.read_sql(QUERY,dfndb).paper_tag[0]
material_id = df.material_id[0]
QUERY = 'SELECT material.name FROM material WHERE material_id = %s;' %str(material_id)
material_string = pd.read_sql(QUERY,dfndb).name[0]
leg_string = material_string + ', '+ paper_string
parameter_id = df.parameter_id[0]
QUERY = 'SELECT * FROM parameter WHERE parameter_id = %s;' %str(parameter_id)
paramdf = pd.read_sql(QUERY,dfndb)
xlabel = '['+paramdf.units_input[0]+']'
y_unit = paramdf.units_output[0]
y_param = paramdf.name[0]
ylabel = y_param + ' ' + '['+y_unit+']'
# Reading the raw data either in value, array, or function format
csv_data = fn_db.read_data(df)
import streamlit_gui.elements.parameter_from_db
importlib.reload(streamlit_gui.elements.parameter_from_db)
#Plots based on the valid input range defined in the database (with some small padding)
if df.raw_data_class[0] == 'function':
c_low = float(df.input_range.to_numpy()[0].lower)+0.001
c_max = float(df.input_range.to_numpy()[0].upper)-0.001
c = np.linspace(c_low,c_max) #SI Units mol/m3
try:
y = streamlit_gui.elements.parameter_from_db.function(c,T) #run the function just written from the database
except:
y = streamlit_gui.elements.parameter_from_db.function(c)
            x = c
plt.plot(x,y,'-',label=leg_string)
elif df.raw_data_class[0] == 'array':
c = csv_data[:,0]
y = csv_data[:,1]
            x = c
plt.plot(x,y,'-',label=leg_string)
elif df.raw_data_class[0] == 'value':
n = 10
y = float(csv_data)
y = y*np.linspace(1,1,n)
c_low = float(df.input_range.to_numpy()[0].lower)
c_max = float(df.input_range.to_numpy()[0].upper)
c = np.linspace(c_low,c_max,n) #SI Units mol/m3
            x = c
plt.plot(x,y,'-',label=leg_string)
plt.ylabel(ylabel)
plt.xlabel(xlabel)
fontP = FontProperties()
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=SMALL_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=SMALL_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rcParams['axes.linewidth'] = 1
plt.legend(bbox_to_anchor=(1.02, 1), loc='upper left')
fig1 = plt.gcf()
plt.show()
```
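A side note on the queries inside `plot_loop()`: building SQL via `'%s' % str(data_id)` works, but passing parameters through `pandas.read_sql` is safer against malformed input. A hedged sketch using an in-memory SQLite stand-in for the LiionDB connection (the placeholder style is driver-specific: `?` for sqlite3, `%s` for psycopg2):

```python
import sqlite3
import pandas as pd

# In-memory stand-in for the liiondb connection
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (data_id INTEGER, raw_data TEXT)")
con.executemany("INSERT INTO data VALUES (?, ?)", [(1, "a"), (2, "b")])

data_id = 2
# The driver fills in the placeholder, so no manual string formatting
df = pd.read_sql("SELECT * FROM data WHERE data_id = ?", con, params=(data_id,))
```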
---
# Plotting examples:
## 1. Plot all LFP OCV curves:
First, query and show the resulting matches
```
QUERY = '''
SELECT DISTINCT data.data_id,parameter.symbol,parameter.name as parameter, material.name as material,data.raw_data, parameter.units_input, parameter.units_output, paper.paper_tag, paper.doi
FROM data
JOIN paper ON paper.paper_id = data.paper_id
JOIN material ON material.material_id = data.material_id
JOIN parameter ON parameter.parameter_id = data.parameter_id
WHERE parameter.name = 'half cell ocv'
AND material.lfp = 1
LIMIT 5
'''
df = pd.read_sql(QUERY,dfndb)
id_list = df['data_id'].to_list()
df
```
Now we use `plot_loop()` to plot all of these together:
```
plot_loop(id_list,298)
```
## 2. Plot all conductivities for LiPF6-containing electrolytes at 10 °C:
First, query and show the resulting matches, limited to the top 10
```
QUERY = '''
SELECT DISTINCT data.data_id,parameter.symbol,parameter.name as parameter, material.name as material,data.raw_data, parameter.units_input, parameter.units_output, paper.paper_tag, paper.doi
FROM data
JOIN paper ON paper.paper_id = data.paper_id
JOIN material ON material.material_id = data.material_id
JOIN parameter ON parameter.parameter_id = data.parameter_id
WHERE parameter.name = 'ionic conductivity'
AND material.lipf6 = 1
AND 283 BETWEEN lower(data.temp_range) AND upper(data.temp_range)
LIMIT 10
'''
df = pd.read_sql(QUERY,dfndb)
id_list = df['data_id'].to_list()
df
```
Now we use `plot_loop()` to plot all of these together:
```
plot_loop(id_list,283) # 283 K is 10 degrees C
```
```
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding, LSTM, Dense, Bidirectional
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
import numpy as np
!wget --no-check-certificate \
https://storage.googleapis.com/laurencemoroney-blog.appspot.com/irish-lyrics-eof.txt \
-O /tmp/irish-lyrics-eof.txt
tokenizer=Tokenizer()
data = open('/tmp/irish-lyrics-eof.txt').read()
corpus=data.lower().split('\n')
tokenizer.fit_on_texts(corpus)
total_words=len(tokenizer.word_index)+1
print(tokenizer.word_index)
print(total_words)
input_sequences=[]
for line in corpus:
token_list=tokenizer.texts_to_sequences([line])[0]
for i in range(1,len(token_list)):
n_gram_sequence=token_list[:i+1]
input_sequences.append(n_gram_sequence)
# Pad sequences
max_sequence_len=max([len(x) for x in input_sequences])
input_sequences=np.array(pad_sequences(input_sequences,maxlen=max_sequence_len,padding='pre'))
# Create predictors and label
xs,labels=input_sequences[:,:-1],input_sequences[:,-1]
ys=tf.keras.utils.to_categorical(labels,num_classes=total_words)
print(tokenizer.word_index['in'])
print(tokenizer.word_index['the'])
print(tokenizer.word_index['town'])
print(tokenizer.word_index['of'])
print(tokenizer.word_index['athy'])
print(tokenizer.word_index['one'])
print(tokenizer.word_index['jeremy'])
print(tokenizer.word_index['lanigan'])
print(xs[6])
print(ys[6])
print(tokenizer.word_index)
model=tf.keras.models.Sequential()
model.add(Embedding(total_words,100,input_length=max_sequence_len-1))
model.add(Bidirectional(LSTM(150)))
model.add(Dense(total_words,activation='softmax'))
adam=Adam(learning_rate=0.01)
model.compile(optimizer=adam,
loss='categorical_crossentropy',
metrics=['acc'])
history=model.fit(xs,ys,
epochs=20,
verbose=1)
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.show()
plot_graphs(history,'acc')
plot_graphs(history,'loss')
seed_text = "I've got a bad feeling about this"
next_words = 100
for _ in range(next_words):
token_list = tokenizer.texts_to_sequences([seed_text])[0]
token_list = pad_sequences([token_list], maxlen=max_sequence_len-1, padding='pre')
    # predict_classes was removed from Keras; take the argmax of the probabilities instead
    predicted = np.argmax(model.predict(token_list, verbose=0), axis=-1)[0]
output_word = ""
for word, index in tokenizer.word_index.items():
if index == predicted:
output_word = word
break
seed_text += " " + output_word
print(seed_text)
```
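The word-lookup loop in the generation code above scans `tokenizer.word_index` once per predicted word. An equivalent but faster approach is to invert the mapping once up front. The sketch below uses a tiny hypothetical vocabulary so it runs without TensorFlow; with the real tokenizer you would invert `tokenizer.word_index` the same way:

```python
# Toy stand-in for tokenizer.word_index (the real one is built by Keras' Tokenizer).
word_index = {"in": 1, "the": 2, "town": 3, "of": 4, "athy": 5}

# Invert the word -> index mapping once, so each lookup is O(1)
# instead of a linear scan over the whole vocabulary.
index_word = {index: word for word, index in word_index.items()}

predicted = 3
output_word = index_word.get(predicted, "")  # "" mirrors the loop's fallback
print(output_word)  # town
```

With a vocabulary of a few thousand words and 100 generated words, this turns thousands of dictionary scans into one dictionary build plus 100 constant-time lookups.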
| github_jupyter |
```
import os
import time
import tensorflow as tf
import numpy as np
from glob import glob
import datetime
import random
from PIL import Image
import matplotlib.pyplot as plt
from numpy import savetxt
import pandas as pd
import sys
%matplotlib inline
array_sum = []
from google.colab import drive
drive.mount('/content/drive')
def generator(z, output_channel_dim, training):
with tf.variable_scope("generator", reuse= not training):
# 8x8x1024
fully_connected = tf.layers.dense(z, 8*8*1024)
fully_connected = tf.reshape(fully_connected, (-1, 8, 8, 1024))
fully_connected = tf.nn.leaky_relu(fully_connected)
# 8x8x1024 -> 16x16x512
trans_conv1 = tf.layers.conv2d_transpose(inputs=fully_connected,
filters=512,
kernel_size=[5,5],
strides=[2,2],
padding="SAME",
kernel_initializer=tf.truncated_normal_initializer(stddev=WEIGHT_INIT_STDDEV),
name="trans_conv1")
batch_trans_conv1 = tf.layers.batch_normalization(inputs = trans_conv1,
training=training,
epsilon=EPSILON,
name="batch_trans_conv1")
trans_conv1_out = tf.nn.leaky_relu(batch_trans_conv1,
name="trans_conv1_out")
# 16x16x512 -> 32x32x256
trans_conv2 = tf.layers.conv2d_transpose(inputs=trans_conv1_out,
filters=256,
kernel_size=[5,5],
strides=[2,2],
padding="SAME",
kernel_initializer=tf.truncated_normal_initializer(stddev=WEIGHT_INIT_STDDEV),
name="trans_conv2")
batch_trans_conv2 = tf.layers.batch_normalization(inputs = trans_conv2,
training=training,
epsilon=EPSILON,
name="batch_trans_conv2")
trans_conv2_out = tf.nn.leaky_relu(batch_trans_conv2,
name="trans_conv2_out")
# 32x32x256 -> 64x64x128
trans_conv3 = tf.layers.conv2d_transpose(inputs=trans_conv2_out,
filters=128,
kernel_size=[5,5],
strides=[2,2],
padding="SAME",
kernel_initializer=tf.truncated_normal_initializer(stddev=WEIGHT_INIT_STDDEV),
name="trans_conv3")
batch_trans_conv3 = tf.layers.batch_normalization(inputs = trans_conv3,
training=training,
epsilon=EPSILON,
name="batch_trans_conv3")
trans_conv3_out = tf.nn.leaky_relu(batch_trans_conv3,
name="trans_conv3_out")
# 64x64x128 -> 128x128x64
trans_conv4 = tf.layers.conv2d_transpose(inputs=trans_conv3_out,
filters=64,
kernel_size=[5,5],
strides=[2,2],
padding="SAME",
kernel_initializer=tf.truncated_normal_initializer(stddev=WEIGHT_INIT_STDDEV),
name="trans_conv4")
batch_trans_conv4 = tf.layers.batch_normalization(inputs = trans_conv4,
training=training,
epsilon=EPSILON,
name="batch_trans_conv4")
trans_conv4_out = tf.nn.leaky_relu(batch_trans_conv4,
name="trans_conv4_out")
# 128x128x64 -> 128x128x3
logits = tf.layers.conv2d_transpose(inputs=trans_conv4_out,
filters=3,
kernel_size=[5,5],
strides=[1,1],
padding="SAME",
kernel_initializer=tf.truncated_normal_initializer(stddev=WEIGHT_INIT_STDDEV),
name="logits")
out = tf.tanh(logits, name="out")
return out
def discriminator(x, reuse):
with tf.variable_scope("discriminator", reuse=reuse):
# 128*128*3 -> 64x64x64
conv1 = tf.layers.conv2d(inputs=x,
filters=64,
kernel_size=[5,5],
strides=[2,2],
padding="SAME",
kernel_initializer=tf.truncated_normal_initializer(stddev=WEIGHT_INIT_STDDEV),
name='conv1')
batch_norm1 = tf.layers.batch_normalization(conv1,
training=True,
epsilon=EPSILON,
name='batch_norm1')
conv1_out = tf.nn.leaky_relu(batch_norm1,
name="conv1_out")
# 64x64x64-> 32x32x128
conv2 = tf.layers.conv2d(inputs=conv1_out,
filters=128,
kernel_size=[5, 5],
strides=[2, 2],
padding="SAME",
kernel_initializer=tf.truncated_normal_initializer(stddev=WEIGHT_INIT_STDDEV),
name='conv2')
batch_norm2 = tf.layers.batch_normalization(conv2,
training=True,
epsilon=EPSILON,
name='batch_norm2')
conv2_out = tf.nn.leaky_relu(batch_norm2,
name="conv2_out")
# 32x32x128 -> 16x16x256
conv3 = tf.layers.conv2d(inputs=conv2_out,
filters=256,
kernel_size=[5, 5],
strides=[2, 2],
padding="SAME",
kernel_initializer=tf.truncated_normal_initializer(stddev=WEIGHT_INIT_STDDEV),
name='conv3')
batch_norm3 = tf.layers.batch_normalization(conv3,
training=True,
epsilon=EPSILON,
name='batch_norm3')
conv3_out = tf.nn.leaky_relu(batch_norm3,
name="conv3_out")
# 16x16x256 -> 16x16x512
conv4 = tf.layers.conv2d(inputs=conv3_out,
filters=512,
kernel_size=[5, 5],
strides=[1, 1],
padding="SAME",
kernel_initializer=tf.truncated_normal_initializer(stddev=WEIGHT_INIT_STDDEV),
name='conv4')
batch_norm4 = tf.layers.batch_normalization(conv4,
training=True,
epsilon=EPSILON,
name='batch_norm4')
conv4_out = tf.nn.leaky_relu(batch_norm4,
name="conv4_out")
# 16x16x512 -> 8x8x1024
conv5 = tf.layers.conv2d(inputs=conv4_out,
filters=1024,
kernel_size=[5, 5],
strides=[2, 2],
padding="SAME",
kernel_initializer=tf.truncated_normal_initializer(stddev=WEIGHT_INIT_STDDEV),
name='conv5')
batch_norm5 = tf.layers.batch_normalization(conv5,
training=True,
epsilon=EPSILON,
name='batch_norm5')
conv5_out = tf.nn.leaky_relu(batch_norm5,
name="conv5_out")
flatten = tf.reshape(conv5_out, (-1, 8*8*1024))
logits = tf.layers.dense(inputs=flatten,
units=1,
activation=None)
out = tf.sigmoid(logits)
return out, logits
def model_loss(input_real, input_z, output_channel_dim):
g_model = generator(input_z, output_channel_dim, True)
noisy_input_real = input_real + tf.random_normal(shape=tf.shape(input_real),
mean=0.0,
stddev=random.uniform(0.0, 0.1),
dtype=tf.float32)
d_model_real, d_logits_real = discriminator(noisy_input_real, reuse=False)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_model_real)*random.uniform(0.9, 1.0)))
d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.zeros_like(d_model_fake)))
d_loss = tf.reduce_mean(0.5 * (d_loss_real + d_loss_fake))
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake,
labels=tf.ones_like(d_model_fake)))
return d_loss, g_loss
def model_optimizers(d_loss, g_loss):
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith("generator")]
d_vars = [var for var in t_vars if var.name.startswith("discriminator")]
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
gen_updates = [op for op in update_ops if op.name.startswith('generator')]
with tf.control_dependencies(gen_updates):
d_train_opt = tf.train.AdamOptimizer(learning_rate=LR_D, beta1=BETA1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate=LR_G, beta1=BETA1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='inputs_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name="input_z")
learning_rate_G = tf.placeholder(tf.float32, name="lr_g")
learning_rate_D = tf.placeholder(tf.float32, name="lr_d")
return inputs_real, inputs_z, learning_rate_G, learning_rate_D
def show_samples(sample_images, name, epoch):
figure, axes = plt.subplots(1, len(sample_images), figsize = (IMAGE_SIZE, IMAGE_SIZE))
for index, axis in enumerate(axes):
axis.axis('off')
# flatten() changes the 3D array into a 1D array
image_array = sample_images[index].flatten()
if epoch == 50:
!kill -9 -1
array_sum = np.concatenate((image_array, array_sum), axis=0)
else:
print(epoch)
print(image_array.size)
print(image_array.shape)
print(image_array)
if epoch == 300:
#save to csv file
#pd.DataFrame(image_array).to_csv("/content/drive/My Drive/Colab Notebooks/image_generator/output.csv")
savetxt('/content/drive/My Drive/Colab Notebooks/image_generator/output.csv', array_sum, delimiter=' ')
#print(array_sum.size)
#print(array_sum.shape)
#print(array_sum)
#axis.imshow(image_array)
# Convert the array back into an image
#image = Image.fromarray(image_array)
#image.save(name+"_"+str(epoch)+"_"+str(index)+".png")
#plt.savefig(name+"_"+str(epoch)+".png", bbox_inches='tight', pad_inches=0)
#plt.show()
#plt.close()
def test(sess, input_z, out_channel_dim, epoch):
    example_z = np.random.uniform(-1, 1, size=[SAMPLES_TO_SHOW, input_z.get_shape().as_list()[-1]])
    samples = sess.run(generator(input_z, out_channel_dim, False), feed_dict={input_z: example_z})
    samples = samples.flatten()
    if epoch == 300:
        print("samples:", samples)
        print("samples_size:", samples.size)
        # savetxt needs the array to write as its second argument
        savetxt('/content/drive/output.csv', samples, delimiter=' ')
    else:
        print("epoch:", epoch)
#sample_images = [((sample + 1.0) * 127.5).astype(np.uint8) for sample in samples] # int8 byte range (-128 to 127)
#show_samples(sample_images, OUTPUT_DIR + "samples", epoch) ======================already 127
#print("sample_images:", sample_images)
def summarize_epoch(epoch, duration, sess, d_losses, g_losses, input_z, data_shape):
minibatch_size = int(data_shape[0]//BATCH_SIZE)
# print("Epoch {}/{}".format(epoch, EPOCHS),
# "\nDuration: {:.5f}".format(duration),
# "\nD Loss: {:.5f}".format(np.mean(d_losses[-minibatch_size:])),
# "\nG Loss: {:.5f}".format(np.mean(g_losses[-minibatch_size:])))
# fig, ax = plt.subplots()
# plt.plot(d_losses, label='Discriminator', alpha=0.6)
# plt.plot(g_losses, label='Generator', alpha=0.6)
# plt.title("Losses")
# plt.legend()
# plt.savefig(OUTPUT_DIR + "losses_" + str(epoch) + ".png")
# plt.show()
# plt.close()
#print(input_z)
#print("sess", sess, "d_losses:", d_losses, "input_z:", input_z)
test(sess, input_z, data_shape[3], epoch)
def get_batches(data):
batches = []
for i in range(int(data.shape[0]//BATCH_SIZE)):
batch = data[i * BATCH_SIZE:(i + 1) * BATCH_SIZE]
augmented_images = []
for img in batch:
image = Image.fromarray(img)
if random.choice([True, False]):
image = image.transpose(Image.FLIP_LEFT_RIGHT)
augmented_images.append(np.asarray(image))
batch = np.asarray(augmented_images)
normalized_batch = (batch / 127.5) - 1.0
batches.append(normalized_batch)
return batches
def train(get_batches, data_shape, checkpoint_to_load=None):
#print("data_shape:", data_shape)
input_images, input_z, lr_G, lr_D = model_inputs(data_shape[1:], NOISE_SIZE)
#print("input_images", input_images, "input_z", input_z)
d_loss, g_loss = model_loss(input_images, input_z, data_shape[3])
d_opt, g_opt = model_optimizers(d_loss, g_loss)
#print("d_opt:", d_opt, "g_opt:", g_opt)
#print("get_batches:", get_batches)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
epoch = 0
iteration = 0
d_losses = []
g_losses = []
for epoch in range(EPOCHS):
epoch += 1
start_time = time.time()
for batch_images in get_batches:
iteration += 1
batch_z = np.random.uniform(-1, 1, size=(BATCH_SIZE, NOISE_SIZE))
_ = sess.run(d_opt, feed_dict={input_images: batch_images, input_z: batch_z, lr_D: LR_D})
_ = sess.run(g_opt, feed_dict={input_images: batch_images, input_z: batch_z, lr_G: LR_G})
d_losses.append(d_loss.eval({input_z: batch_z, input_images: batch_images}))
g_losses.append(g_loss.eval({input_z: batch_z}))
summarize_epoch(epoch, time.time()-start_time, sess, d_losses, g_losses, input_z, data_shape)
# Paths
INPUT_DATA_DIR = "/content/drive/My Drive/Colab Notebooks/image_generator/" # Path to the folder with input images. For more info check simspons_dataset.txt
OUTPUT_DIR = './{date:%Y-%m-%d_%H:%M:%S}/'.format(date=datetime.datetime.now())
if not os.path.exists(OUTPUT_DIR):
os.makedirs(OUTPUT_DIR)
# Hyperparameters
IMAGE_SIZE = 128
NOISE_SIZE = 100
LR_D = 0.00004
LR_G = 0.0004
BATCH_SIZE = 64
EPOCHS = 300
BETA1 = 0.5
WEIGHT_INIT_STDDEV = 0.02
EPSILON = 0.00005
SAMPLES_TO_SHOW = 5
def readcsv(filename):
    data = pd.read_csv(filename)
    return np.array(data)
# Training
#input_images = np.asarray([np.asarray(Image.open("/content/drive/My Drive/Colab Notebooks/AI_Lab/image_generator/1.png").resize((IMAGE_SIZE, IMAGE_SIZE))) for file in glob(INPUT_DATA_DIR + '*')])
#print (input_images.shape)
#print (input_images.size) #294912
#!kill -9 -1
#np.random.shuffle(input_images)
#print ("==========================")
#sample_images = random.sample(list(input_images), SAMPLES_TO_SHOW)
#sample_images = list(input_images)
#print(sample_images)
Input_data = readcsv("/content/drive/My Drive/Colab Notebooks/image_generator/data02.csv") # data02 originally had 295098 values; 186 were already removed, leaving 294912 so the matrix reshapes cleanly
#Input_data = np.reshape(Input_data, (128, 3)) # Still need the original dimensions here: (6, 128, 128, 3)
#Input_data = np.reshape(Input_data, (128, 128, 3))
Input_data = np.reshape(Input_data, (294912,))
#print(Input_data.size)
#print(Input_data.shape)
#print(Input_data)
Input_data = np.reshape(Input_data, (6, 128, 128, 3))
#print(Input_data.size)
#Input_data = np.transpose(Input_data)
#print(Input_data)
#print (Input_data.shape)
#print (Input_data.size)
#!kill -9 -1
#show_samples(Input_data, OUTPUT_DIR + "inputs", 0)
with tf.Graph().as_default():
train(get_batches(Input_data), Input_data.shape)
```
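A note on the scaling used in `get_batches` and in the commented-out sampling code above: images are mapped from [0, 255] into the generator's tanh output range [-1, 1] with `batch / 127.5 - 1.0`, and mapped back for display with `(sample + 1.0) * 127.5`. A minimal NumPy sketch of the round trip:

```python
import numpy as np

# Pixels in [0, 255], as loaded from an image file
batch = np.array([0.0, 127.5, 255.0])

# Forward: map into tanh's output range [-1, 1]
normalized = batch / 127.5 - 1.0
print(normalized)   # [-1.  0.  1.]

# Inverse: map generator samples back to displayable uint8 pixels
# (127.5 truncates to 127 when cast to uint8)
restored = ((normalized + 1.0) * 127.5).astype(np.uint8)
print(restored)     # [  0 127 255]
```

Matching the data range to the generator's tanh output is what lets the discriminator see real and fake images on the same scale.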
| github_jupyter |
# Tutorial 7: Estimator
## Overview
In this tutorial, we will talk about:
* [Estimator API](#t07estimator)
* [Reducing the number of training steps per epoch](#t07train)
* [Reducing the number of evaluation steps per epoch](#t07eval)
* [Changing logging behavior](#t07logging)
* [Monitoring intermediate results during training](#t07intermediate)
* [Trace](#t07trace)
* [Concept](#t07concept)
* [Structure](#t07structure)
* [Usage](#t07usage)
* [Model Testing](#t07testing)
* [Related Apphub Examples](#t07apphub)
`Estimator` is the API that manages everything related to the training loop. It combines `Pipeline` and `Network` together and provides users with fine-grained control over the training loop. Before we demonstrate different ways to control the training loop, let's define a template similar to [tutorial 1](./t01_getting_started.ipynb), but this time we will use a PyTorch model.
```
import fastestimator as fe
from fastestimator.architecture.pytorch import LeNet
from fastestimator.dataset.data import mnist
from fastestimator.op.numpyop.univariate import ExpandDims, Minmax
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp
import tempfile
def get_estimator(log_steps=100, monitor_names=None, use_trace=False, train_steps_per_epoch=None, epochs=2):
    # step 1
    train_data, eval_data = mnist.load_data()
    test_data = eval_data.split(0.5)
    pipeline = fe.Pipeline(train_data=train_data,
                           eval_data=eval_data,
                           test_data=test_data,
                           batch_size=32,
                           ops=[ExpandDims(inputs="x", outputs="x", axis=0), Minmax(inputs="x", outputs="x")])
    # step 2
    model = fe.build(model_fn=LeNet, optimizer_fn="adam", model_name="LeNet")
    network = fe.Network(ops=[
        ModelOp(model=model, inputs="x", outputs="y_pred"),
        CrossEntropy(inputs=("y_pred", "y"), outputs="ce"),
        CrossEntropy(inputs=("y_pred", "y"), outputs="ce1"),
        UpdateOp(model=model, loss_name="ce")
    ])
    # step 3
    traces = None
    if use_trace:
        traces = [Accuracy(true_key="y", pred_key="y_pred"),
                  BestModelSaver(model=model, save_dir=tempfile.mkdtemp(), metric="accuracy", save_best_mode="max")]
    estimator = fe.Estimator(pipeline=pipeline,
                             network=network,
                             epochs=epochs,
                             traces=traces,
                             train_steps_per_epoch=train_steps_per_epoch,
                             log_steps=log_steps,
                             monitor_names=monitor_names)
    return estimator
```
Let's train our model using the default `Estimator` arguments:
```
est = get_estimator()
est.fit()
```
<a id='t07estimator'></a>
## Estimator API
<a id='t07train'></a>
### Reduce the number of training steps per epoch
In general, one epoch of training means that every element in the training dataset will be visited exactly one time. If evaluation data is available, evaluation happens after every epoch by default. Consider the following two scenarios:
* The training dataset is very large such that evaluation needs to happen multiple times during one epoch.
* Different training datasets are being used for different epochs, but the number of training steps should be consistent between each epoch.
One easy solution to the above scenarios is to limit the number of training steps per epoch. For example, if we want to train for only 300 steps per epoch, with training lasting for 4 epochs (1200 steps total), we would do the following:
```
est = get_estimator(train_steps_per_epoch=300, epochs=4)
est.fit()
```
<a id='t07eval'></a>
### Reduce the number of evaluation steps per epoch
One may need to reduce the number of evaluation steps for debugging purposes. This can easily be done by setting the `eval_steps_per_epoch` argument in `Estimator`.
<a id='t07logging'></a>
### Change logging behavior
When the number of training epochs is large, the log can become verbose. You can change the logging behavior by choosing one of the following options:
* set `log_steps` to `None` if you do not want to see any training logs printed.
* set `log_steps` to 0 if you only wish to see the evaluation logs.
* set `log_steps` to some integer 'x' if you want training logs to be printed every 'x' steps.
Let's set the `log_steps` to 0:
```
est = get_estimator(train_steps_per_epoch=300, epochs=4, log_steps=0)
est.fit()
```
<a id='t07intermediate'></a>
### Monitor intermediate results
You might have noticed that in our example `Network` there is an op: `CrossEntropy(inputs=("y_pred", "y"), outputs="ce1")`. However, `ce1` never shows up in the training log above. This is because FastEstimator identifies and filters out unused variables to reduce unnecessary communication between the GPU and CPU. By contrast, `ce` shows up in the log because, by default, we log all loss values that are used to update models.
But what if we want to see the value of `ce1` throughout training?
Easy: just add `ce1` to `monitor_names` in `Estimator`.
```
est = get_estimator(train_steps_per_epoch=300, epochs=4, log_steps=150, monitor_names="ce1")
est.fit()
```
As we can see, both `ce` and `ce1` showed up in the log above. Unsurprisingly, their values are identical because they have the same inputs and forward function.
<a id='t07trace'></a>
## Trace
<a id='t07concept'></a>
### Concept
Now you might be thinking: 'changing logging behavior and monitoring extra keys is cool, but where is the fine-grained access to the training loop?'
The answer is `Trace`. `Trace` is a module that gives you access to different training stages and allows you to "do stuff" with them. Here are some examples of what a `Trace` can do:
* print any training data at any training step
* write results to a file during training
* change learning rate based on some loss conditions
* calculate any metrics
* order you a pizza after training ends
* ...
So what are the different training stages? They are:
* Beginning of training
* Beginning of epoch
* Beginning of batch
* End of batch
* End of epoch
* End of training
<img src="../resources/t07_trace_concept.png" alt="drawing" width="500"/>
As we can see from the illustration above, the training process is essentially a nested combination of batch loops and epoch loops. Over the course of training, `Trace` places 6 different "road blocks" for you to leverage.
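The six "road blocks" above can be sketched as a plain-Python loop. This is a minimal illustration of the call ordering, not FastEstimator's actual implementation (the real loop also passes a `data` object to each hook):

```python
# A trace that simply records which hook fired, mirroring the six Trace stages.
class RecordingTrace:
    def __init__(self):
        self.events = []

    def on_begin(self):       self.events.append("begin")
    def on_epoch_begin(self): self.events.append("epoch_begin")
    def on_batch_begin(self): self.events.append("batch_begin")
    def on_batch_end(self):   self.events.append("batch_end")
    def on_epoch_end(self):   self.events.append("epoch_end")
    def on_end(self):         self.events.append("end")

def run_training(trace, epochs, batches_per_epoch):
    """Nested epoch/batch loops with the trace hooks wrapped around them."""
    trace.on_begin()
    for _ in range(epochs):
        trace.on_epoch_begin()
        for _ in range(batches_per_epoch):
            trace.on_batch_begin()
            # ... pipeline batch + network forward/backward would run here ...
            trace.on_batch_end()
        trace.on_epoch_end()
    trace.on_end()

trace = RecordingTrace()
run_training(trace, epochs=1, batches_per_epoch=2)
print(trace.events)
# ['begin', 'epoch_begin', 'batch_begin', 'batch_end',
#  'batch_begin', 'batch_end', 'epoch_end', 'end']
```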
<a id='t07structure'></a>
### Structure
If you are familiar with Keras, you will notice that the structure of `Trace` is very similar to the `Callback` in Keras. Despite the structural similarity, `Trace` gives you a lot more flexibility, which we will discuss in depth in [advanced tutorial 4](../advanced/t04_trace.ipynb). Implementation-wise, `Trace` is a python class with the following structure:
```
class Trace:
    def __init__(self, inputs=None, outputs=None, mode=None):
        self.inputs = inputs
        self.outputs = outputs
        self.mode = mode

    def on_begin(self, data):
        """Runs once at the beginning of training"""

    def on_epoch_begin(self, data):
        """Runs at the beginning of each epoch"""

    def on_batch_begin(self, data):
        """Runs at the beginning of each batch"""

    def on_batch_end(self, data):
        """Runs at the end of each batch"""

    def on_epoch_end(self, data):
        """Runs at the end of each epoch"""

    def on_end(self, data):
        """Runs once at the end of training"""
```
Given the structure, users can customize their own functions at different stages and insert them into the training loop. We will leave the customization of `Traces` to the advanced tutorial. For now, let's use some pre-built `Traces` from FastEstimator.
During the training loop in our earlier example, we want 2 things to happen:
1. Save the model weights if the evaluation loss is the best we have seen so far
2. Calculate the model accuracy during evaluation
<a id='t07usage'></a>
```
from fastestimator.trace.io import BestModelSaver
from fastestimator.trace.metric import Accuracy
est = get_estimator(use_trace=True)
est.fit()
```
As we can see from the log, the model is saved in a predefined location and the accuracy is displayed during evaluation.
<a id='t07testing'></a>
## Model Testing
Sometimes you have a separate testing dataset in addition to your training and evaluation data. If you want to evaluate the model metrics on test data, you can simply call:
```
est.test()
```
This will feed all of your test dataset through the `Pipeline` and `Network`, and finally execute the traces (in our case, computing accuracy) just as during training.
<a id='t07apphub'></a>
## Apphub Examples
You can find some practical examples of the concepts described here in the following FastEstimator Apphubs:
* [UNet](../../apphub/semantic_segmentation/unet/unet.ipynb)
| github_jupyter |
# PyAutoGUI: Automate Every GUI
This tutorial is translated from [Al Sweigart](http://inventwithpython.com/)'s [PyAutoGUI](https://pyautogui.readthedocs.org/) project, a Python automation tool best suited to GUI tasks. For web tasks, consider instead:
- [Selenium](https://selenium-python.readthedocs.org/) with Firefox recording (Chromedriver and PhantomJS also work well, though PhantomJS, being a headless browser, sometimes locates elements unreliably), then write unit tests in Python
- [requests](http://www.python-requests.org/en/latest/) for scripting GET/POST requests; everything runs in the background without a browser, which is ideal for handling forms
PyAutoGUI has fewer features than [sikuli](http://www.sikuli.org/), but Python makes life simpler ([life is short, use Python](http://cn.pycon.org/2015/)).
Also recommended as a companion: [Ryan Mitchell's *Web Scraping with Python*](http://shop.oreilly.com/product/0636920034391.do), an introductory book on web data collection that combines well with PyAutoGUI.
tl;dr
<!-- TEASER_END -->
2015-08-17: The bug with typing Chinese text is still unresolved. The current workaround is to install [pyperclip](https://github.com/asweigart/pyperclip) and pyautogui under Python 2.x and use copy-and-paste instead.
```
import pyperclip
import pyautogui
# Chinese input in PyAutoGUI has to be done by pasting
# The Python 2 version of pyperclip can copy Chinese text
def paste(foo):
    pyperclip.copy(foo)
    pyautogui.hotkey('ctrl', 'v')
foo = u'学而时习之'
# Move to the text box
pyautogui.click(130,30)
paste(foo)
```
# 1. Introduction
## 1.1 Purpose
PyAutoGUI is a pure-Python GUI automation tool whose goal is to let programs control mouse and keyboard actions. It supports multiple platforms (Windows, OS X, Linux), can be installed with `pip`, and its [source code](https://github.com/asweigart/pyautogui) is on GitHub.
The following code moves the mouse to the center of the screen.
```
import pyautogui
screenWidth, screenHeight = pyautogui.size()
pyautogui.moveTo(screenWidth / 2, screenHeight / 2)
```
PyAutoGUI can simulate mouse movement, clicking, and dragging; keyboard typing and key holding; and mouse-plus-keyboard hotkey combinations. In short, anything a hand can do.
## 1.2 Example
```
import pyautogui
screenWidth, screenHeight = pyautogui.size()
currentMouseX, currentMouseY = pyautogui.position()
pyautogui.moveTo(100, 150)
pyautogui.click()
# Move the mouse down 10 pixels
pyautogui.moveRel(None, 10)
pyautogui.doubleClick()
# Use a tweening/easing function to move the mouse to (1800, 500) over 2 seconds
pyautogui.moveTo(1800, 500, duration=2, tween=pyautogui.easeInOutQuad)
# Pause 0.25 seconds between each keystroke
pyautogui.typewrite('Hello world!', interval=0.25)
pyautogui.press('esc')
pyautogui.keyDown('shift')
pyautogui.press(['left', 'left', 'left', 'left', 'left', 'left'])
pyautogui.keyUp('shift')
pyautogui.hotkey('ctrl', 'c')
distance = 200
while distance > 0:
    pyautogui.dragRel(distance, 0, duration=0.5)   # move right
    distance -= 5
    pyautogui.dragRel(0, distance, duration=0.5)   # move down
    pyautogui.dragRel(-distance, 0, duration=0.5)  # move left
    distance -= 5
    pyautogui.dragRel(0, -distance, duration=0.5)  # move up
```
## 1.4 Fail-Safes
Like the enchanted broom in *The Sorcerer's Apprentice* that could carry water but could not be stopped from flooding the bathroom, your program can get out of control (even while doing exactly what you told it to do), and then you need a way to break out. If the mouse is still moving on its own, it can be hard to reach the program window to close it.
To make timely interruption possible, PyAutoGUI provides a fail-safe. When `pyautogui.FAILSAFE = True`, PyAutoGUI functions raise `pyautogui.FailSafeException` if the mouse cursor is in the upper-left corner of the screen. If things get out of control and you need to stop PyAutoGUI, slam the cursor into the upper-left corner. To disable this feature, set `FAILSAFE` to `False`:
```
import pyautogui
pyautogui.FAILSAFE = False
```
You can add a delay to every PyAutoGUI function by setting `pyautogui.PAUSE` to a `float` or `int` number of seconds. The default delay is 0.1 seconds. This is handy for slowing PyAutoGUI down when its functions run in a loop. For example:
```
import pyautogui
pyautogui.PAUSE = 2.5
pyautogui.moveTo(100,100); pyautogui.click()
```
All PyAutoGUI functions block until the delay has elapsed. (An optional non-blocking mode is planned for the future.)
**It is recommended to use `PAUSE` and `FAILSAFE` together.**
# 2 Installation and Dependencies
PyAutoGUI supports Python 2.x and Python 3.x.
- Windows: PyAutoGUI has no dependencies. It uses Python's `ctypes` module, so `pywin32` is not needed
```
pip3 install pyautogui
```
- OS X: PyAutoGUI needs [PyObjC](http://pythonhosted.org/pyobjc/install.html) for the AppKit and Quartz modules. On PyPI, install `pyobjc-core` first and then `pyobjc`
```
sudo pip3 install pyobjc-core
sudo pip3 install pyobjc
sudo pip3 install pyautogui
```
- Linux: PyAutoGUI needs `python-xlib` (Python 2) or `python3-Xlib` (Python 3)
```
sudo pip3 install python3-xlib
sudo apt-get install scrot
sudo apt-get install python-tk
sudo apt-get install python3-dev
sudo pip3 install pyautogui
```
# 3 Cheat Sheet
### 3.1 Common Functions
```
import pyautogui
# Current mouse coordinates
pyautogui.position()
# Current screen resolution (width and height)
pyautogui.size()
# Whether (x, y) is on the screen
x, y = 122, 244
pyautogui.onScreen(x, y)
```
### 3.2 Fail-Safes
Add a 2.5-second delay to all PyAutoGUI functions:
```
import pyautogui
pyautogui.PAUSE = 2.5
```
When `pyautogui.FAILSAFE = True`, PyAutoGUI functions raise `pyautogui.FailSafeException` if the mouse cursor is in the upper-left corner of the screen.
```
import pyautogui
pyautogui.FAILSAFE = True
```
### 3.3 Mouse Functions
The origin of the coordinate system is the upper-left corner. The X (horizontal) coordinate grows to the right, and the Y (vertical) coordinate grows downward.
```
num_seconds = 1.2
x, y = 100, 200
# Move the cursor to (x, y) over num_seconds seconds
pyautogui.moveTo(x, y, duration=num_seconds)
# Move the cursor xOffset pixels along the X (horizontal) axis
# and yOffset pixels down the Y (vertical) axis, over num_seconds seconds
xOffset, yOffset = 50, 100
pyautogui.moveRel(xOffset, yOffset, duration=num_seconds)
```
The `click()` function clicks the mouse. By default it is a single left-click, and its parameters can be set:
```
pyautogui.click(x=moveToX, y=moveToY, clicks=num_of_clicks, interval=secs_between_clicks, button='left')
```
The `button` parameter can be set to `left`, `middle`, or `right`.
All clicks can be performed with this one function, but the following are more readable:
```
pyautogui.rightClick(x=moveToX, y=moveToY)
pyautogui.middleClick(x=moveToX, y=moveToY)
pyautogui.doubleClick(x=moveToX, y=moveToY)
pyautogui.tripleClick(x=moveToX, y=moveToY)
```
The `scroll()` function controls the mouse wheel; the `amount_to_scroll` parameter is the number of units to scroll. A positive number scrolls the page up, a negative number scrolls down:
```
pyautogui.scroll(clicks=amount_to_scroll, x=moveToX, y=moveToY)
```
The press and release of each button can be handled as two separate events:
```
pyautogui.mouseDown(x=moveToX, y=moveToY, button='left')
pyautogui.mouseUp(x=moveToX, y=moveToY, button='left')
```
### 3.4 Keyboard Functions
Any key on the keyboard can be pressed:
```
# Interval between keystrokes
secs_between_keys = 0.1
pyautogui.typewrite('Hello world!\n', interval=secs_between_keys)
```
Multiple keys also work:
```
pyautogui.typewrite(['a', 'b', 'c', 'left', 'backspace', 'enter', 'f1'], interval=secs_between_keys)
```
The list of key names:
```
pyautogui.KEYBOARD_KEYS[:10]
```
Keyboard hotkeys like `Ctrl-S` or `Ctrl-Shift-1` can be produced with the `hotkey()` function:
```
pyautogui.hotkey('ctrl', 'a') # Select all
pyautogui.hotkey('ctrl', 'c') # Copy
pyautogui.hotkey('ctrl', 'v') # Paste
```
The press and release of each key can also be called separately:
```
pyautogui.keyDown(key_name)
pyautogui.keyUp(key_name)
```
### 3.5 Message Box Functions
If you need a message box to pause the program until OK is clicked, or to show the user some information, the message box functions behave much like JavaScript's:
```
pyautogui.alert('This message box has text and an OK button')
pyautogui.confirm('This message box has text plus OK and Cancel buttons')
pyautogui.prompt('This message box lets the user type a string and click OK')
```
In the `prompt()` function, `None` is returned if the user clicks Cancel without entering anything.
### 3.6 Screenshot Functions
PyAutoGUI uses the Pillow/PIL library for image recognition and manipulation.
On Linux, you must run `sudo apt-get install scrot` to use the screenshot features.
```
# Returns a Pillow/PIL Image object
pyautogui.screenshot()
pyautogui.screenshot('foo.png')
```
If you have an image file that you want to click on screen, you can locate it with the `locateOnScreen()` function.
```
# Returns (leftmost x, topmost y, width, height)
pyautogui.locateOnScreen('pyautogui/looks.png')
```
The `locateAllOnScreen()` function finds every matching image and returns a generator:
```
for i in pyautogui.locateAllOnScreen('pyautogui/looks.png'):
print(i)
list(pyautogui.locateAllOnScreen('pyautogui/looks.png'))
```
The `locateCenterOnScreen()` function returns the X and Y coordinates of the center of the image on screen:
```
pyautogui.locateCenterOnScreen('pyautogui/looks.png')
```
If the image is not found, `None` is returned.
> Locating is slow; it usually takes 1 to 2 seconds.
# 4 General Functions
- `position()`: returns an integer tuple (x, y) of the mouse cursor's current X and Y coordinates
- `size()`: returns an integer tuple (width, height) of the display's resolution. Multi-monitor support is planned
# 5 Mouse Control Functions
## 5.1 Screen and Mouse Position
Screen positions use Cartesian X and Y coordinates. The origin `(0, 0)` is in the upper-left corner, and coordinates grow to the right and downward.
If the screen is $1920 \times 1080$ pixels, the bottom-right corner has coordinates `(1919, 1079)`.
The screen resolution is returned by `size()` as an integer tuple, and the cursor position by `position()`. For example:
```
pyautogui.size()
pyautogui.position()
```
Here is a Python 3 program that displays the cursor position:
```
#! python3
import pyautogui
print('Press Ctrl-C to quit')
try:
    while True:
        x, y = pyautogui.position()
        positionStr = 'X: {} Y: {}'.format(*[str(v).rjust(4) for v in (x, y)])
        print(positionStr, end='')
        print('\b' * len(positionStr), end='', flush=True)
except KeyboardInterrupt:
    print('\n')
```
The Python 2 version:
```
#! python
import pyautogui, sys
print('Press Ctrl-C to quit.')
try:
    while True:
        x, y = pyautogui.position()
        positionStr = 'X: ' + str(x).rjust(4) + ' Y: ' + str(y).rjust(4)
        print positionStr,
        print '\b' * (len(positionStr) + 2),
        sys.stdout.flush()
except KeyboardInterrupt:
    print '\n'
```
To check whether XY coordinates are on the screen, use the `onScreen()` function, which returns `True` if they are:
```
import pyautogui
pyautogui.onScreen(0, 0)
pyautogui.onScreen(0, -1)
pyautogui.onScreen(0, 2080)
pyautogui.onScreen(1920, 1080)
pyautogui.onScreen(1919, 1079)
```
## 5.2 Mouse Movement
The `moveTo()` function moves the mouse cursor to the given XY coordinates. Passing `None` for a coordinate keeps the cursor's current value for that axis.
```
pyautogui.moveTo(100, 200) # Move the cursor to (100, 200)
pyautogui.moveTo(None, 500) # Move the cursor to (100, 500)
pyautogui.moveTo(600, None) # Move the cursor to (600, 500)
```
Normally the cursor jumps to the target instantly. To make the mouse move more slowly, set a duration:
```
pyautogui.moveTo(100, 200, duration=2) # Move the cursor to (100, 200) over 2 seconds
```
The default `pyautogui.MINIMUM_DURATION` is 0.1 seconds; any duration shorter than that is executed instantly.
To move the cursor relative to its current position, use `pyautogui.moveRel()`. For example:
```
pyautogui.moveTo(100, 200) # Move the cursor to (100, 200)
pyautogui.moveRel(0, 50) # Move down 50
pyautogui.moveRel(30, 0, 2) # Move right 30, over 2 seconds
pyautogui.moveRel(30, None) # Move right 30
```
## 5.3 Mouse Dragging
PyAutoGUI's `dragTo()` and `dragRel()` functions are similar to `moveTo()` and `moveRel()`, with an additional `button` parameter that can be set to `left`, `middle`, or `right`. For example:
```
# Hold the left mouse button and drag the mouse to (100, 200)
pyautogui.dragTo(100, 200, button='left')
# Hold the left mouse button and drag the mouse to (300, 400) over 2 seconds
pyautogui.dragTo(300, 400, 2, button='left')
# Hold the right mouse button and drag the mouse to (30, 0) over 2 seconds
pyautogui.dragTo(30, 0, 2, button='right')
```
## 5.4 Tween / Easing Functions
Easing functions make cursor movement fancier. If you don't need them, feel free to skip this section.
Easing functions change the speed and direction of the cursor along its path. By default the mouse moves at a constant speed in a straight line, which is the linear easing function. PyAutoGUI ships 30 easing functions, which you can list with `pyautogui.ease*?`. `pyautogui.easeInQuad()` can be passed to `moveTo()`, `moveRel()`, `dragTo()`, and `dragRel()`: the cursor starts slowly and accelerates, while the total travel time stays the same. `pyautogui.easeOutQuad` has the opposite effect: the cursor starts fast and then decelerates. `pyautogui.easeOutElastic` is a spring effect: it overshoots the destination first and then bounces back. For example:
```
# Start slow, then accelerate
pyautogui.moveTo(100, 100, 2, pyautogui.easeInQuad)
# Start fast, then decelerate
pyautogui.moveTo(100, 100, 2, pyautogui.easeOutQuad)
# Fast at the start and end, slower in the middle
pyautogui.moveTo(100, 100, 2, pyautogui.easeInOutQuad)
# Advance with a bouncing motion
pyautogui.moveTo(100, 100, 2, pyautogui.easeInBounce)
# Larger swings, even overshooting the start and end points
pyautogui.moveTo(100, 100, 2, pyautogui.easeInElastic)
```
These easing functions come from Al Sweigart's [PyTweening](https://github.com/asweigart/pytweening) module and can be used directly, with no extra installation.
To create your own easing effect, define a function that takes an argument between 0.0 (start) and 1.0 (end) and returns a number in the range [0.0, 1.0].
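As a sketch of that contract, here is a hypothetical cubic ease-in (the name and shape are chosen for illustration; it is not part of PyAutoGUI): it maps 0.0 to 0.0 and 1.0 to 1.0, and moves slowly at first.

```python
def ease_in_cubic(n):
    """Custom tween: takes the fraction of the duration elapsed (0.0 to 1.0)
    and returns the fraction of the path traveled (0.0 to 1.0)."""
    if not 0.0 <= n <= 1.0:
        raise ValueError("Argument must be between 0.0 and 1.0.")
    return n ** 3

print(ease_in_cubic(0.0))  # 0.0
print(ease_in_cubic(0.5))  # 0.125 -> only an eighth of the way at half time
print(ease_in_cubic(1.0))  # 1.0

# It would then be passed just like a built-in tween:
# pyautogui.moveTo(100, 100, 2, ease_in_cubic)
```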
## 5.5 Mouse Clicks
The `click()` function simulates a single click of the left mouse button. For example:
```
pyautogui.click()
```
To move the cursor before clicking, pass the target XY coordinates into the function:
```
# Move to (100, 200), then click
pyautogui.click(x=100, y=200, duration=2)
```
The `button` parameter can be set to `left`, `middle`, or `right`. For example:
```
pyautogui.click(button='right')
```
To click multiple times, set the `clicks` parameter; the `interval` parameter sets the pause between clicks. For example:
```
# Double-click the left button
pyautogui.click(clicks=2)
# Pause 0.25 seconds between the two clicks
pyautogui.click(clicks=2, interval=0.25)
# Triple-click the right button
pyautogui.click(button='right', clicks=3, interval=0.25)
```
For convenience, PyAutoGUI provides `doubleClick()`, `tripleClick()`, and `rightClick()` for double-clicks, triple-clicks, and right-clicks.
## 5.6 Mouse Down and Up Functions
The `mouseDown()` and `mouseUp()` functions press and release the mouse button as separate steps. Both take the same parameters: `x`, `y`, and `button`. For example:
```
# Press and then release the left mouse button
pyautogui.mouseDown(); pyautogui.mouseUp()
# Press the right mouse button
pyautogui.mouseDown(button='right')
# Move to (100, 200), then release the right mouse button
pyautogui.mouseUp(button='right', x=100, y=200)
```
## 5.7 Scroll Functions
Mouse-wheel scrolling is simulated with the `scroll()` function and a `clicks` parameter for the number of units to scroll; how far one unit scrolls differs by platform. The `x` and `y` parameters position the cursor at (x, y) before scrolling. For example:
```
# Scroll up 10 units
pyautogui.scroll(10)
# Scroll down 10 units
pyautogui.scroll(-10)
# Move to (100, 100), then scroll up 10 units
pyautogui.scroll(10, x=100, y=100)
```
On OS X and Linux, PyAutoGUI can also scroll horizontally with `hscroll()`. For example:
```
# Scroll right 10 units
pyautogui.hscroll(10)
# Scroll left 10 units
pyautogui.hscroll(-10)
```
The `scroll()` function is a wrapper around `vscroll()`, which performs vertical scrolling.
# 6 Keyboard Control Functions
## 6.1 The `typewrite()` Function
The main keyboard function is `typewrite()`, which types character input. To add a pause between keystrokes, use the `interval` parameter. For example:
```
# Type Hello world!
pyautogui.typewrite('Hello world!')
# Type Hello world! with 0.25 seconds between keystrokes
pyautogui.typewrite('Hello world!', interval=0.25)
```
`typewrite()`函数只能用于单个字符键,不能按SHITF和F1这些功能键。
## 6.2 The press(), keyDown() and keyUp() Functions
To press those keys, call `press()` with the key's name string as listed in `pyautogui.KEYBOARD_KEYS`. For example:
```
# the ENTER key
pyautogui.press('enter')
# the F1 key
pyautogui.press('f1')
# the left arrow key
pyautogui.press('left')
```
The `press()` function is really a wrapper around `keyDown()` and `keyUp()`, simulating a key press followed by a release. These two functions can also be called individually — for example, to press the left arrow key three times while holding down `shift`:
```
# hold down the `shift` key
pyautogui.keyDown('shift')
pyautogui.press('left')
pyautogui.press('left')
pyautogui.press('left')
# release the `shift` key
pyautogui.keyUp('shift')
```
As with `typewrite()`, a list of keys can be passed to `press()`. For example:
```
pyautogui.press(['left', 'left', 'left'])
```
## 6.3 The hotkey() Function
To press hotkey combinations more conveniently, PyAutoGUI provides the `hotkey()` function, which holds down several keys in order:
```
pyautogui.hotkey('ctrl', 'shift', 'esc')
```
which is equivalent to:
```
pyautogui.keyDown('ctrl')
pyautogui.keyDown('shift')
pyautogui.keyDown('esc')
pyautogui.keyUp('esc')
pyautogui.keyUp('shift')
pyautogui.keyUp('ctrl')
```
## 6.4 KEYBOARD_KEYS
The key names accepted by `press()`, `keyDown()`, `keyUp()` and `hotkey()` can be listed with:
```
print(pyautogui.KEYBOARD_KEYS)
```
## 7 Message Box Functions
PyAutoGUI implements four pure-Python message box functions on top of Tkinter, similar to JavaScript's dialog boxes.
## 7.1 The alert() Function
```
pyautogui.alert(text='', title='', button='OK')
```
Displays a simple message box with text and a single OK button. Returns the text of the button the user clicked.
## 7.2 The confirm() Function
```
# a message box with OK and Cancel buttons
pyautogui.confirm(text='', title='', buttons=['OK', 'Cancel'])
# a message box with ten buttons, 0 through 9
pyautogui.confirm(text='', title='', buttons=range(10))
```
Displays a message box with text and, by default, OK and Cancel buttons. Returns the text of the button the user clicked; the `buttons` parameter accepts a custom list of numbers or strings.
## 7.3 The prompt() Function
```
pyautogui.prompt(text='', title='', default='')
```
Displays a message box with a text-input field and OK and Cancel buttons. Returns the entered text when the user clicks OK, or `None` when the user clicks Cancel.
## 7.4 The password() Function
```
pyautogui.password(text='', title='', default='', mask='*')
```
Same as `prompt()`, but intended for passwords: typed characters are masked with `*`. It has OK and Cancel buttons; clicking OK returns the entered text, clicking Cancel returns `None`.
## 8 Screenshot Functions
PyAutoGUI can take screenshots, save them to image files, and locate those images on the screen. As with [sikuli](http://www.sikuli.org/), you can capture an image of a button, locate it, and then click it.
The screenshot features require the Pillow module. On OS X they use the built-in `screencapture` command. On Linux they use the `scrot` command, which can be installed with `sudo apt-get install scrot`.
## 8.1 Notes for Ubuntu
Installing Pillow on Ubuntu can be complicated because of missing PNG and JPEG dependencies; see the [Ubuntu forums](http://conda.pydata.org/miniconda.html) for details. [miniconda](http://conda.pydata.org/miniconda.html) sidesteps these problems: with miniconda installed on Ubuntu or Mint, Pillow can be installed directly with `conda install pillow`.
## 8.2 The screenshot() Function
The `screenshot()` function returns an `Image` object (see the [Pillow/PIL documentation](http://python-pillow.github.io/)); you can also pass a filename to save the screenshot:
```
import pyautogui
im1 = pyautogui.screenshot()
im2 = pyautogui.screenshot('my_screenshot.png')
```
On a $1920 \times 1080$ screen, the `screenshot()` function takes roughly 100 milliseconds — not fast, but not slow either.
If you don't need the full screen, there is an optional `region` parameter: pass the X and Y coordinates of the top-left corner plus the width and height of the region to capture.
```
im = pyautogui.screenshot(region=(0, 0, 300, 400))
```
## 8.3 The Locate Functions
You can locate where a previously captured image appears on the screen. Suppose, for instance, that you need to click a button in a calculator app:

*(screenshot of the calculator app)*

If you don't know where the button is, you cannot use `moveTo()` and `click()`; and because the calculator window may appear at a different position each time, hard-coded coordinates quickly stop working. But if you have a screenshot of the button to click — say the `7` key:

*(screenshot of the calculator's 7 key)*

you can call `pyautogui.locateOnScreen('calc7key.png')` to get its screen coordinates. The return value is a tuple `(left, top, width, height)`, which can be passed to `pyautogui.center()` to get the center coordinates of the matched region. If the image is not found, `pyautogui.locateOnScreen()` returns `None`:
```
import pyautogui
button7location = pyautogui.locateOnScreen('pyautogui/calc7key.png')
button7location
button7x, button7y = pyautogui.center(button7location)
button7x, button7y
pyautogui.click(button7x, button7y)
```
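The arithmetic behind that center point is simple; here is a pure-Python sketch (the helper name `box_center` and the example coordinates are illustrative, not PyAutoGUI's source):

```python
def box_center(box):
    """Return the (x, y) center of a (left, top, width, height) box,
    mirroring the arithmetic of pyautogui.center()."""
    left, top, width, height = box
    return (left + width // 2, top + height // 2)

# e.g. a 7-key found at left=1006, top=554 with a 60x41 size:
print(box_center((1006, 554, 60, 41)))  # (1036, 574)
```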
`locateCenterOnScreen()` combines the first two steps above and returns the center coordinates of the match directly:
```
import pyautogui
x, y = pyautogui.locateCenterOnScreen('pyautogui/calc7key.png')
pyautogui.click(x, y)
```
On a $1920 \times 1080$ screen, the locate functions take 1-2 seconds. That is too slow for video games (LoL, DOTA), but more than enough for office automation.
There are several locate functions. All of them search from the top-left origin, moving right and down:
- locateOnScreen(image, grayscale=False): returns the `(left, top, width, height)` coordinates of the first match of `image` on the screen, or `None` if it is not found
- locateCenterOnScreen(image, grayscale=False): returns the `(x, y)` center coordinates of the first match of `image` on the screen, or `None` if it is not found
- locateAllOnScreen(image, grayscale=False): returns a generator of `(left, top, width, height)` coordinates for every match of `image` on the screen
- locate(needleImage, haystackImage, grayscale=False): returns the `(left, top, width, height)` coordinates of the first match of `needleImage` inside `haystackImage`, or `None` if it is not found
- locateAll(needleImage, haystackImage, grayscale=False): returns a generator of `(left, top, width, height)` coordinates for every match of `needleImage` inside `haystackImage`
The two `locateAll*` generators can be consumed with a `for` loop or `list()`:
```
for pos in pyautogui.locateAllOnScreen('pyautogui/calc7key.png'):
    print(pos)
list(pyautogui.locateAllOnScreen('pyautogui/calc7key.png'))
```
### 8.3.1 Grayscale Matching
Setting the `grayscale` parameter to `True` (it defaults to `False`) speeds up locating by roughly 30%. This desaturation makes matching faster, but can also cause false-positive matches:
```
import pyautogui
button7location = pyautogui.locateOnScreen('pyautogui/calc7key.png', grayscale=True)
button7location
```
### 8.3.2 Pixel Matching
To get the RGB value of a pixel in a screenshot, use the `Image` object's `getpixel()` method:
```
import pyautogui
im = pyautogui.screenshot()
im.getpixel((100, 200))
```
You can also use PyAutoGUI's `pixel()` function, a wrapper around the calls above:
```
pyautogui.pixel(100, 200)
```
If you only need to verify the color of a single pixel, use the `pixelMatchesColor()` function, passing the X and Y coordinates and an RGB tuple:
```
pyautogui.pixelMatchesColor(100, 200, (255, 255, 255))
pyautogui.pixelMatchesColor(100, 200, (255, 255, 245))
```
The `tolerance` parameter specifies how much each of the red, green, and blue values may differ while still matching:
```
pyautogui.pixelMatchesColor(100, 200, (255, 255, 245), tolerance=10)
pyautogui.pixelMatchesColor(100, 200, (248, 250, 245), tolerance=10)
pyautogui.pixelMatchesColor(100, 200, (205, 255, 245), tolerance=10)
```
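The tolerance check itself is just a per-channel comparison; a pure-Python sketch of the idea (a toy re-implementation, not PyAutoGUI's actual source):

```python
def matches_color(pixel, expected, tolerance=0):
    """Return True if each RGB channel of `pixel` is within
    `tolerance` of the corresponding channel of `expected`."""
    return all(abs(p - e) <= tolerance for p, e in zip(pixel, expected))

print(matches_color((255, 255, 245), (255, 255, 255)))                # False: blue differs by 10
print(matches_color((255, 255, 245), (255, 255, 255), tolerance=10))  # True: 10 is within tolerance
```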
```
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop
from keras.utils import np_utils
batch_size = 128
num_classes = 10
epochs = 10
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
model = Sequential()
model.add(Dense(20, activation='relu', name='layer1', input_shape=(784,)))
model.add(Dense(20, activation='relu' ,name = 'layer2'))
model.add(Dense(10, activation='softmax', name='layer3'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
tbCallBack = keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=0, write_graph=True, write_images=True)
history = model.fit(x_train, y_train,
                    batch_size=batch_size, epochs=epochs,
verbose=0,
validation_data=(x_test, y_test),
callbacks=[tbCallBack])
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
```
# What's inside?
```
tbCallBack = keras.callbacks.TensorBoard(log_dir='./Graph', histogram_freq=0, write_graph=True, write_images=True)
history = model.fit(x_train, y_train,
                    batch_size=batch_size, epochs=epochs,
verbose=0,
validation_data=(x_test, y_test),
callbacks=[tbCallBack])
```
While training is running, launch TensorBoard from a terminal:
```
(C:\Users\oleg\Anaconda3\envs\superman) C:\Users\oleg\PycharmProjects\deep>tensorboard --logdir Graph/
```
and enjoy the results. TensorBoard prints the port it is serving on:
```
Starting TensorBoard b'41' on port 6006
```
You can then navigate to http://192.168.178.185:6006 (substituting your own host).
# we need to go deeper
```
import matplotlib.pyplot as plt
hist = history
train_loss=hist.history['loss']
val_loss=hist.history['val_loss']
train_acc=hist.history['acc']
val_acc=hist.history['val_acc']
xc=range(epochs)
plt.figure(1,figsize=(7,5))
plt.plot(xc,train_loss)
plt.plot(xc,val_loss)
plt.xlabel('num of Epochs')
plt.ylabel('loss')
plt.title('train_loss vs val_loss')
plt.grid(True)
plt.legend(['train','val'])
plt.figure(2,figsize=(7,5))
plt.plot(xc,train_acc)
plt.plot(xc,val_acc)
plt.xlabel('num of Epochs')
plt.ylabel('accuracy')
plt.title('train_acc vs val_acc')
plt.grid(True)
plt.legend(['train','val'],loc=4)
#print(plt.style.available) # use bmh, classic,ggplot for big pictures
plt.style.use('classic')
plt.show()
from keras import backend as K
# with a Sequential model: build a function that maps the network input
# to the output of the first layer
get_1st_layer_output = K.function([model.layers[0].input],
                                  [model.layers[0].output])
layer_output = get_1st_layer_output([x_test[0:1]])[0]
layer_output
```
https://transcranial.github.io/keras-js/#/imdb-bidirectional-lstm
https://blog.keras.io/category/demo.html
```
from keras import backend as K
# get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers])
layer_name='layer2'
# build a loss function that maximizes the activation
# of the nth filter of the layer considered
layer_output = layer_dict[layer_name].output
loss = K.mean(layer_output)
input_img = model.input
# compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, input_img)
# normalization trick: we normalize the gradient
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
# this function returns the loss and grads given the input picture
iterate = K.function([input_img], [loss, grads])
# step size for gradient ascent
step = 1.
import numpy as np
# we start from a gray image with some noise
d = np.copy(x_test[2])
input_img_data = np.expand_dims(np.copy(d),axis=0)
print(input_img_data.shape)
# run gradient ascent for 20 steps
for i in range(30):
loss_value, grads_value = iterate([input_img_data])
grads_value = np.reshape(grads_value,(1, 784))
input_img_data += grads_value * step
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
return x
img = input_img_data
img = deprocess_image(img)
img = np.reshape(img, (28,28))
import matplotlib.pyplot as plt
import numpy as np
plt.subplot(221)
plt.imshow(np.reshape(x_test[0],(28,28)), cmap=plt.get_cmap('gray'))
plt.subplot(222)
#plt.imshow(np.reshape(255- x_train[0], (28,28)), cmap=plt.get_cmap('gray'))
plt.imshow(img, cmap=plt.get_cmap('gray'))
plt.subplot(223)
#plt.imshow(np.reshape(x_train[0], (28,28)), cmap=plt.get_cmap('gray'))
plt.imshow(np.reshape(x_test[2],(28,28)), cmap=plt.get_cmap('gray'))
plt.subplot(224)
#plt.imshow(np.reshape(x_train[3], (28,28)), cmap=plt.get_cmap('gray'))
plt.imshow(img, cmap=plt.get_cmap('gray'))
# show the plot
plt.show()
```
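The loop above follows the generic gradient-ascent recipe `x += grad * step`; a toy one-dimensional Python version (illustrative only, not the Keras code):

```python
def ascend(grad_fn, x, step=0.1, n_steps=100):
    """Generic gradient ascent: repeatedly move x in the direction of the gradient."""
    for _ in range(n_steps):
        x = x + step * grad_fn(x)
    return x

# maximize f(x) = -(x - 3)^2, whose gradient is -2 * (x - 3)
x_max = ascend(lambda x: -2.0 * (x - 3.0), x=0.0)
print(round(x_max, 3))  # converges to 3.0
```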
# Creating a Sampled Dataset
**Learning Objectives**
- Sample the natality dataset to create train/eval/test sets
- Preprocess the data in Pandas dataframe
## Introduction
In this notebook we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe.
```
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.14" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
```
## Create ML datasets by sampling using BigQuery
We'll begin by sampling the BigQuery data to create smaller datasets.
```
# Create SQL query using natality data after the year 2000
query_string = """
WITH
CTE_hash_cols_fixed AS (
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL AND wday IS NULL THEN 0
ELSE
CASE
WHEN day IS NULL THEN wday
ELSE
    day
END
END
AS date,
IFNULL(state,
"Unknown") AS state,
IFNULL(mother_birth_state,
"Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000)
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING), CAST(date AS STRING), CAST(state AS STRING), CAST(mother_birth_state AS STRING))) AS hashvalues
FROM
CTE_hash_cols_fixed
"""
```
There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are.
We'll call BigQuery, group by the hash column, and count the number of records in each group. This will enable us to check that the train/eval/test percentages come out correctly.
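The same hash-and-bucket idea can be sketched in plain Python (here `hashlib.md5` stands in for BigQuery's `FARM_FINGERPRINT`, and the 80/10/10 defaults are illustrative):

```python
import hashlib

def assign_split(key, modulo=100, train_buckets=80, eval_buckets=10):
    """Deterministically map a key to train/eval/test by hashing and bucketing."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    bucket = abs(h % modulo)
    if bucket < train_buckets:
        return "train"
    elif bucket < train_buckets + eval_buckets:
        return "eval"
    return "test"

# The same key always lands in the same split, so rows sharing
# year/month/date/state never straddle train and test:
print(assign_split("2001-05-CA"))
```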
```
from google.cloud import bigquery
bq = bigquery.Client(project = PROJECT)
df = bq.query("SELECT hashvalues, COUNT(weight_pounds) AS num_babies FROM ("
+ query_string +
") GROUP BY hashvalues").to_dataframe()
print("There are {} unique hashvalues.".format(len(df)))
df.head()
```
We can make a query to check if our bucketing values result in the correct sizes for each of our dataset splits, and then adjust accordingly.
```
sampling_percentages_query = """
WITH
-- Get label, features, and column that we are going to use to split into buckets on
CTE_hash_cols_fixed AS (
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
year,
month,
CASE
WHEN day IS NULL AND wday IS NULL THEN 0
ELSE
CASE
WHEN day IS NULL THEN wday
ELSE
    day
END
END
AS date,
IFNULL(state,
"Unknown") AS state,
IFNULL(mother_birth_state,
"Unknown") AS mother_birth_state
FROM
publicdata.samples.natality
WHERE
year > 2000),
CTE_data AS (
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(year AS STRING), CAST(month AS STRING), CAST(date AS STRING), CAST(state AS STRING), CAST(mother_birth_state AS STRING))) AS hashvalues
FROM
CTE_hash_cols_fixed),
-- Get the counts of each of the unique hashs of our splitting column
CTE_first_bucketing AS (
SELECT
hashvalues,
COUNT(*) AS num_records
FROM
CTE_data
GROUP BY
hashvalues ),
-- Get the number of records in each of the hash buckets
CTE_second_bucketing AS (
SELECT
ABS(MOD(hashvalues, {0})) AS bucket_index,
SUM(num_records) AS num_records
FROM
CTE_first_bucketing
GROUP BY
ABS(MOD(hashvalues, {0}))),
-- Calculate the overall percentages
CTE_percentages AS (
SELECT
bucket_index,
num_records,
CAST(num_records AS FLOAT64) / (
SELECT
SUM(num_records)
FROM
CTE_second_bucketing) AS percent_records
FROM
CTE_second_bucketing ),
-- Choose which of the hash buckets will be used for training and pull in their statistics
CTE_train AS (
SELECT
*,
"train" AS dataset_name
FROM
CTE_percentages
WHERE
bucket_index >= 0
AND bucket_index < {1}),
-- Choose which of the hash buckets will be used for validation and pull in their statistics
CTE_eval AS (
SELECT
*,
"eval" AS dataset_name
FROM
CTE_percentages
WHERE
bucket_index >= {1}
AND bucket_index < {2}),
-- Choose which of the hash buckets will be used for testing and pull in their statistics
CTE_test AS (
SELECT
*,
"test" AS dataset_name
FROM
CTE_percentages
WHERE
bucket_index >= {2}
AND bucket_index < {0}),
-- Union the training, validation, and testing dataset statistics
CTE_union AS (
SELECT
0 AS dataset_id,
*
FROM
CTE_train
UNION ALL
SELECT
1 AS dataset_id,
*
FROM
CTE_eval
UNION ALL
SELECT
2 AS dataset_id,
*
FROM
CTE_test ),
-- Show final splitting and associated statistics
CTE_split AS (
SELECT
dataset_id,
dataset_name,
SUM(num_records) AS num_records,
SUM(percent_records) AS percent_records
FROM
CTE_union
GROUP BY
dataset_id,
dataset_name )
SELECT
*
FROM
CTE_split
ORDER BY
dataset_id
"""
modulo_divisor = 100
train_percent = 80.0
eval_percent = 10.0
train_buckets = int(modulo_divisor * train_percent / 100.0)
eval_buckets = int(modulo_divisor * eval_percent / 100.0)
df = bq.query(sampling_percentages_query.format(modulo_divisor, train_buckets, train_buckets + eval_buckets)).to_dataframe()
df.head()
```
#### **Exercise 1**
Modify the `query_string` above to produce an 80/10/10 split for the train/valid/test sets, using the `hashvalues` with an appropriate `ABS(MOD())` value.
**Hint**: You can use every_n in the SQL query to create a smaller subset of the data
```
# Added every_n so that we can now subsample from each of the hash values to get approximately the record counts we want
every_n = 500
train_query = # TODO: Your code goes here
eval_query = # TODO: Your code goes here
test_query = # TODO: Your code goes here
train_df = # TODO: Your code goes here
eval_df = # TODO: Your code goes here
test_df = # TODO: Your code goes here
print("There are {} examples in the train dataset.".format(len(train_df)))
print("There are {} examples in the validation dataset.".format(len(eval_df)))
print("There are {} examples in the test dataset.".format(len(test_df)))
```
## Preprocess data using Pandas
We'll perform a few preprocessing steps on the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is, we'll duplicate some rows and set the `is_male` field to `Unknown`. Also, if there is more than one child, we'll change the `plurality` to `Multiple(2+)`. While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below.
Let's start by examining the training dataset as is.
```
train_df.head()
```
Also, notice that some very important numeric fields are missing in some rows (`count` in Pandas excludes missing data).
```
train_df.describe()
```
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a `preprocess` function below. Note that the mother's age is an input to our model, so users will have to provide it; otherwise, our service won't work. The features we use for our model were chosen because they are good predictors and because they are easy to collect.
#### **Exercise 2**
The code cell below has some TODOs for you to complete.
In the first block of TODOs, we'll clean the data so that
- `weight_pounds` is always positive
- `mother_age` is always positive
- `gestation_weeks` is always positive
- `plurality` is always positive
The next block of TODOs will create extra rows to simulate lack of ultrasound information. That is, we'll make a copy of the dataframe and call it `no_ultrasound`. Then, use Pandas functionality to make two changes in place to `no_ultrasound`:
- set the `plurality` value of `no_ultrasound` to be 'Multiple(2+)' whenever the plurality is not 'Single(1)'
- set the `is_male` value of `no_ultrasound` to be 'Unknown'
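On a toy frame, those two in-place changes might look like this (illustrative only — the toy data and column subset are made up for the example; the exercise asks you to apply the same idea to `no_ultrasound`):

```python
import pandas as pd

toy = pd.DataFrame({
    "is_male": [True, False, True],
    "plurality": ["Single(1)", "Twins(2)", "Triplets(3)"]
})
no_ultrasound = toy.copy(deep=True)

# Without an ultrasound we cannot know the sex or the exact multiple count
no_ultrasound.loc[no_ultrasound["plurality"] != "Single(1)", "plurality"] = "Multiple(2+)"
no_ultrasound["is_male"] = "Unknown"
print(no_ultrasound)
```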
```
import pandas as pd
def preprocess(df):
# Clean up data
# Remove what we don"t want to use for training
df = # TODO: Your code goes here
df = # TODO: Your code goes here
df = # TODO: Your code goes here
df = # TODO: Your code goes here
# Modify plurality field to be a string
twins_etc = dict(zip([1,2,3,4,5],
["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]))
df["plurality"].replace(twins_etc, inplace=True)
# Now create extra rows to simulate lack of ultrasound
no_ultrasound = df.copy(deep=True)
# TODO: Your code goes here
# TODO: Your code goes here
# Concatenate both datasets together and shuffle
return pd.concat([df, no_ultrasound]).sample(frac=1).reset_index(drop=True)
```
Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:
```
train_df = preprocess(train_df)
eval_df = preprocess(eval_df)
test_df = preprocess(test_df)
train_df.head()
train_df.tail()
```
Let's look again at a summary of the dataset. Note that we only see numeric columns, so `plurality` does not show up.
```
train_df.describe()
```
## Write to .csv files
In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.
#### **Exercise 3**
Complete the code in the cell below to write the three Pandas dataframes you made above to csv files. Have a look at [the documentation for `.to_csv`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_csv.html) to remind yourself of its usage. Remove `hashvalues` from the data, since we will not be using it in training and there is no need to move the extra data around.
```
# TODO: Your code goes here
# TODO: Your code goes here
# TODO: Your code goes here
```
Check your work above by inspecting the files you made.
```
%%bash
wc -l *.csv
%%bash
head *.csv
%%bash
tail *.csv
```
Copyright 2017-2018 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Part-of-Speech Tagging
## Definition
The operation by which a program automatically attaches grammatical tags to a word, such as:
- gender
- number
- the part of speech (category)
- …
It comes after word segmentation and is a prerequisite for shallow syntactic parsing.
The result is a (word, tag) pair:
- *Le petit chat boit du lait.*
- *Le/DET petit/ADJ chat/N boit/V du/DET lait/N ./PONCT*
In Python, the expected result of part-of-speech tagging takes the form of a list of tuples:
```
tagged = [
('Le', 'DET'),
('petit', 'ADJ'),
('chat', 'N'),
('boit', 'V'),
('du', 'DET'),
('lait', 'N'),
('.', 'PONCT')
]
```
## The NLTK Tagger
NLTK ships a tagger pre-trained on the *Penn Treebank* that can be used to tag English text. Its only required input is a list of words.
```
from nltk import pos_tag, word_tokenize
on_the_sea_keats = """It keeps eternal whisperings around
Desolate shores, and with its mighty swell
Gluts twice ten thousand Caverns, till the spell
Of Hecate leaves them their old shadowy sound."""
words = word_tokenize(on_the_sea_keats)
tagged_words = pos_tag(words)
```
A `tagset` parameter lets you adopt the universal part-of-speech tagset instead of the *Penn Treebank* nomenclature.
```
tagged_words = pos_tag(words, tagset="universal")
```
## Language Models
NLTK provides the `DefaultTagger` class for automatically tagging a list of words. However, the software must first be trained on a tagged reference corpus.
Conveniently, NLTK bundles several *treebanks* built for English. Otherwise, you must bring your own tagged corpus!
Some already-trained taggers:
- *TreeTagger* (multilingual)
- *Stanford POS tagger* (multilingual)
- *MElt* (French, Python 2.7)
## Training a Language Model: The Basics
In this basic example, we train a unigram tagger, an instance of the `UnigramTagger` class.
**Step 1:** import the unigram tagger.
```
from nltk.tag import UnigramTagger
```
**Step 2:** provide a language model in the form of a list of sentences, each broken into a list of (word, tag) pairs.
```
corpus = [[
('Le', 'DET'),
('petit', 'ADJ'),
('chat', 'N'),
('boit', 'V'),
('du', 'DET'),
('lait', 'N'),
('.', 'PONCT')
]]
```
**Step 3:** train the tagger with this model.
```
tagger = UnigramTagger(corpus)
```
**Step 4:** tag a list of words.
```
words = ['Le', 'petit', 'chien', 'boit', 'de', 'l', 'eau', '.']
tagger.tag(words)
```
**Note:** the model is very incomplete (`None` tags).
One way to improve the model in this situation: fall back on a default tagger (a *backoff tagger*).
```
from nltk.tag import DefaultTagger
backoff_tagger = DefaultTagger('N')
tagger = UnigramTagger(train=corpus, backoff=backoff_tagger)
tagger.tag(words)
```
The `None` tags are replaced by the tag deemed most common (`N`). Better still is to have a more complete corpus: the larger the corpus a tagger is trained on, the better the result.
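The unigram-plus-backoff mechanism can be sketched without NLTK (a toy re-implementation for illustration, not NLTK's code):

```python
def train_unigram(corpus):
    """Map each word to the tag most often seen with it in the corpus."""
    counts = {}
    for sentence in corpus:
        for word, tag in sentence:
            counts.setdefault(word, {}).setdefault(tag, 0)
            counts[word][tag] += 1
    return {w: max(tags, key=tags.get) for w, tags in counts.items()}

def tag(words, model, backoff_tag="N"):
    """Tag known words with the model; unknown words get the backoff tag."""
    return [(w, model.get(w, backoff_tag)) for w in words]

corpus = [[("Le", "DET"), ("petit", "ADJ"), ("chat", "N"),
           ("boit", "V"), ("du", "DET"), ("lait", "N"), (".", "PONCT")]]
model = train_unigram(corpus)
print(tag(["Le", "petit", "chien", "boit", "du", "lait", "."], model))
```

The unknown word `chien` falls back to `N`, just as the backoff tagger does above.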
# Initial Setup
```
import pyspark
import pandas as pd
import numpy as np
from pyspark.ml.recommendation import ALSModel, ALS
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from sklearn.preprocessing import OneHotEncoder, StandardScaler
spark = pyspark.sql.SparkSession.builder.getOrCreate()
sc = spark.sparkContext
```
## All Dataframes
### Ratings
```
ratings = spark.read.json('data/ratings.json')
ratings.persist()
ratings_df = ratings.toPandas()
ratings_df.head()
```
### Movies
```
movies = pd.read_csv('data/movies.dat', sep='::', engine='python', header=None)
movies.head()
```
### Users
```
users = pd.read_csv('data/users.dat', sep='::', engine='python', header=None)
users = users.rename({0:'user_id',
1:'gender',
2:'min_age',
3:'occupation',
4:'zipcode'},
axis=1)
users.head()
# To see user age ranges
users.min_age.value_counts()
```
### Requests (to predict)
```
requests = spark.read.json('data/requests.json')
requests.persist()
requests.show(5)
requests_df=requests.toPandas()
len(requests_df)
```
# ALS Model
```
als = ALS(
rank=11,
userCol='user_id',
itemCol='movie_id',
ratingCol='rating'
)
als_model = als.fit(ratings)
preds = als_model.transform(ratings) # Known ratings
request_preds = als_model.transform(requests) # Unknown ratings
nan_df = request_preds.toPandas() # Fill prediction column with predicted ratings for users we have ratings from.
nan_df.head() # Cold start users have a predicted rating of NaN.
```
Since our ALS model can predict a rating for users who have rated movies in the past, we ignore those users and focus only on 'cold start users' — users who have no prior movie ratings in our database.
```
nan_df = nan_df[nan_df['prediction'].isnull()]
nan_df.head()
```
# Data Cleaning
### Movie Meta Data
```
meta_df = pd.read_csv('data/movies_metadata.csv')
```
There are some rows that have incorrectly formatted ids. Below we locate them and remove them from the data.
```
meta_df[meta_df.id.str.contains('-')==True]
#drop things that got shifted
bad_ids = ['1997-08-20', '2012-09-29', '2014-01-01']
meta_df = meta_df[~meta_df['id'].isin(bad_ids)]
meta_df['id'] = meta_df['id'].astype(int) # Set all values in the id column to an integer type.
```
### Combining DataFrames
#### Merging ratings_df / meta_df / users:
```
all_training_data_df = ratings_df.merge(meta_df, how='left', left_on='movie_id', right_on='id')
all_training_data_df = all_training_data_df.merge(users, how='left', left_on='user_id', right_on='user_id')
all_training_data_df.head().T
```
#### Merging the nan_df with meta_df and users:
```
all_data_df = nan_df.merge(meta_df, how='left', left_on='movie_id', right_on='id')
all_data_df = all_data_df.merge(users, how='left', left_on='user_id', right_on='user_id')
all_data_df.head(2)
```
# Testing Data
```
X = all_data_df.filter(['occupation','min_age','gender','vote_count', 'vote_average',
'runtime', 'revenue', 'release_date', 'popularity',
'budget', 'adult', 'user_id', 'movie_id'], axis=1)
y = all_data_df.filter(['prediction'], axis=1)
```
#### Data Cleaning
```
# Converting the gender feature to a 1 (F) or 0 (M).
gender_dict = {'M':0, 'F':1}
X['gender'] = X['gender'].replace(gender_dict)
# Converting adult videos to a boolean.
adult_dict = {'True':True, 'False':False}
X['adult'] = X['adult'].replace(adult_dict)
X['adult'] = X['adult'].astype(bool)
# Converting budget column to a float.
X['budget'] = X['budget'].astype(float)
# Converting release date to an integer.
X['release_date'] = pd.DatetimeIndex(X['release_date']).astype(np.int64)
# Converting popularity to a float.
X['popularity'] = X['popularity'].astype(float)
```
#### One Hot Encoding Occupation Column
```
#one-hot encode occupation
enc_cols = X['occupation'].values.reshape(-1, 1)
encoder = OneHotEncoder().fit(enc_cols)
encoder.get_feature_names(['occupation'])
ohe = pd.DataFrame(encoder.transform(enc_cols).toarray(),
columns=encoder.get_feature_names(['occupation']))
X = pd.concat([X.drop(['occupation'], axis=1), ohe], axis=1)
X.head().T
# Removing 'other/not specified' occupation
X.drop(['occupation_0.0'], axis=1, inplace=True)
# Populating occupation column with actual names
X.rename({'occupation_1.0':'academic_educator',
'occupation_2.0':'artist',
'occupation_3.0':'clerical_admin',
'occupation_4.0':'coll_grad_student',
'occupation_5.0':'cust_service',
'occupation_6.0':'doctor',
'occupation_7.0':'exec',
'occupation_8.0':'farmer',
'occupation_9.0':'homemaker',
'occupation_10.0':'young_student',
'occupation_11.0':'lawyer',
'occupation_12.0':'programmer',
'occupation_13.0':'retired',
'occupation_14.0':'sales_mkting',
'occupation_15.0':'scientist',
'occupation_16.0':'self_employed',
'occupation_17.0':'tech_eng',
'occupation_18.0':'tradesman',
'occupation_19.0':'unemployed',
'occupation_20.0':'writer',}, axis=1, inplace=True)
X.info()
```
#### Making 2 Versions of X:
- `X_3` removes columns with significant nulls
- `X_2` removes all rows with null values
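The trade-off between the two strategies, on a toy frame (the column names and values here are illustrative):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    "min_age": [25, 35, 18],
    "budget": [1e6, np.nan, np.nan]   # sparsely populated feature
})

rows_kept = toy.dropna()                  # X_2 strategy: fewer rows, all features
cols_kept = toy.drop(["budget"], axis=1)  # X_3 strategy: all rows, fewer features
print(len(rows_kept), len(cols_kept))  # 1 3
```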
```
#X_3 will have more rows to train the model but we'll only take the predictions of the ones we still need
X_3 = X.drop(['vote_count',
'vote_average',
'runtime',
'revenue',
'popularity',
'budget',
'adult'],
axis=1)
X_2 = X.dropna()
X_2.info()
X_3.info()
print(f'There are {95628-48000} predictions for our third model to make.')
```
# Training Data:
```
all_training_data_df.info()
X_train = all_training_data_df.filter(['occupation',
'min_age',
'gender',
'vote_count',
'vote_average',
'runtime',
'revenue',
'release_date',
'popularity',
'budget',
'adult',
'user_id',
'movie_id',
'rating'],
axis=1)
```
#### Data Cleaning
```
# Converting Gender to a 1 (F) or 0 (M)
X_train['gender'] = X_train['gender'].replace(gender_dict)
# Converting adult to a boolean.
X_train['adult'] = X_train['adult'].replace(adult_dict)
X_train['adult'] = X_train['adult'].astype(bool)
# Converting budget to a float.
X_train['budget'] = X_train['budget'].astype(float)
# Converting release date to an integer.
X_train['release_date'] = pd.DatetimeIndex(X_train['release_date']).astype(np.int64)
# Converting popularity to a float.
X_train['popularity'] = X_train['popularity'].astype(float)
```
#### One Hot Encoding Occupation Column
```
#one-hot encode occupation
enc_cols = X_train['occupation'].values.reshape(-1, 1)
encoder = OneHotEncoder().fit(enc_cols)
encoder.get_feature_names(['occupation'])
ohe = pd.DataFrame(encoder.transform(enc_cols).toarray(),
columns=encoder.get_feature_names(['occupation']))
X_train = pd.concat([X_train.drop(['occupation'], axis=1), ohe], axis=1)
X_train.info()
# Removing 'other/not specified' occupation
X_train.drop(['occupation_0.0'], axis=1, inplace=True)
# Populating occupation columns with actual names
X_train.rename({'occupation_1.0':'academic_educator',
'occupation_2.0':'artist',
'occupation_3.0':'clerical_admin',
'occupation_4.0':'coll_grad_student',
'occupation_5.0':'cust_service',
'occupation_6.0':'doctor',
'occupation_7.0':'exec',
'occupation_8.0':'farmer',
'occupation_9.0':'homemaker',
'occupation_10.0':'young_student',
'occupation_11.0':'lawyer',
'occupation_12.0':'programmer',
'occupation_13.0':'retired',
'occupation_14.0':'sales_mkting',
'occupation_15.0':'scientist',
'occupation_16.0':'self_employed',
'occupation_17.0':'tech_eng',
'occupation_18.0':'tradesman',
'occupation_19.0':'unemployed',
'occupation_20.0':'writer',},
axis=1,
inplace=True)
X_train.rating.unique()
```
#### Making 2 Versions of X:
- `X_3` removes columns with significant nulls
- `X_2` removes all rows with null values
#### X3:
```
#X_3 will have more rows to train the model but we'll only take the predictions of the ones we still need
X_3_train = X_train.drop(['vote_count',
'vote_average',
'runtime',
'revenue',
'popularity',
'budget',
'adult',
'rating'],
axis=1)
y_3_train = X_train.filter(['rating'], axis=1)
```
#### X2:
```
X_2_train = X_train.dropna()
y_2_train = X_2_train.filter(['rating'], axis=1)
X_2_train = X_2_train.drop(['rating'], axis=1)
X_2_train.info()
X_3_train.info()
```
# Neural Network Model:
```
# ss = StandardScaler()
# X_2_scaled = ss.fit_transform(X_2)
# X_3_scaled = ss.fit_transform(X_3)
# X_2_train_scaled = ss.fit_transform(X_2_train)
# X_3_train_scaled = ss.fit_transform(X_3_train)
# # from keras.utils import to_categorical
# model = Sequential()
# inputs = X_2_scaled.shape[1]
# hiddens = inputs
# model.add(Dense(hiddens, input_dim=inputs, activation='relu'))
# model.add(Dense())
# adam=Adam()
# y_2_train = to_categorical(y_2_train)
# y_2_train
# y_3_train = to_categorical(y_3_train)
# y_3_train
# model.compile(loss='mean_squared_error', optimizer = 'adam', metrics=['acc'])
# history_y_2 = model.fit(X_2_train_scaled, y_2_train, epochs=5)
# X_2_predictions = model.predict_proba(X_2_train)
#y_2_train
#y_3_train
# NOTE: the Keras model above is commented out, so these calls would raise a
# NameError as written; they are commented out here as well.
# history_y_2 = model.fit(X_2_train_scaled, y_2_train, epochs=5)
# model.predict(X_2_train[:1])
```
## Trying XGBoost
```
# import xgboost as xgb
# np.random.seed(0)
# import matplotlib.pyplot as plt
# from sklearn.metrics import accuracy_score, f1_score
# from sklearn.model_selection import GridSearchCV
# %matplotlib inline
# # clf = xgb.XGBClassifier(objective = "multi:softmax" ,
# # num_class = 5, n_jobs=-1, n_estimators=50)
# # clf.fit(X_2_train, y_2_train)
# # X_2_train_preds = clf.predict(X_2_train)
# # X_2_preds = clf.predict(X_2)
# # X_2_training_accuracy = accuracy_score(y_2_train, X_2_train_preds)
# # X_2_training_f1 = f1_score(y_2_train, X_2_train_preds, average="weighted")
# # print("Training F1: {:.4}%".format(X_2_training_f1*100))
# # print("Training Accuracy: {:.4}%".format(X_2_training_accuracy * 100))
# clf_x_3 = xgb.XGBClassifier(objective = "multi:softmax" , max_depth=5, n_estimators=50, n_jobs=-1)
# clf_x_3.fit(X_3_train, y_3_train)
# X_3_train_preds = clf_x_3.predict(X_3_train)
# X_3_preds = clf_x_3.predict(X_3)
# X_3_training_accuracy = accuracy_score(y_3_train, X_3_train_preds)
# X_3_training_f1 = f1_score(y_3_train, X_3_train_preds, average="weighted")
# print("Training F1: {:.4}%".format(X_3_training_f1*100))
# print("Training Accuracy: {:.4}%".format(X_3_training_accuracy * 100))#
# clf = xgb.XGBClassifier(objective = "multi:softmax" ,
# num_class = 5, n_jobs=-1, n_estimators=50)
# clf.fit(X_2_train, y_2_train)
# X_2_train_preds = clf.predict(X_2_train)
# X_2_preds = clf.predict(X_2)
# X_2_training_accuracy = accuracy_score(y_2_train, X_2_train_preds)
# X_2_training_f1 = f1_score(y_2_train, X_2_train_preds, average="weighted")
# print("Training F1: {:.4}%".format(X_2_training_f1*100))
# print("Training Accuracy: {:.4}%".format(X_2_training_accuracy * 100))
# clf_x_3 = xgb.XGBClassifier()
# clf_x_3.fit(X_3_train, y_3_train)
# X_3_train_preds = clf_x_3.predict(X_3_train)
# X_3_preds = clf_x_3.predict(X_3)
# X_3_training_accuracy = accuracy_score(y_3_train, X_3_train_preds)
# X_3_training_f1 = f1_score(y_3_train, X_3_train_preds, average="weighted")
# print("Training F1: {:.4}%".format(X_3_training_f1*100))
# print("Training Accuracy: {:.4}%".format(X_3_training_accuracy * 100))
```
## GridSearch XGBoost
```
# from sklearn.model_selection import GridSearchCV
# param_grid = {
# "learning_rate": [.1, .01] ,
# 'max_depth': [4, 5],
# 'min_child_weight': [1],
# 'n_estimators': [100]
# }
# grid_clf_x_2 = GridSearchCV(clf, param_grid, scoring='accuracy',
# cv=None, n_jobs=-1 )
# grid_clf_x_2.fit(X_2_train, y_2_train)
# best_parameters = grid_clf_x_2.best_params_
# print("Grid Search found the following optimal parameters: ")
# for param_name in sorted(best_parameters.keys()):
# print("%s: %r" % (param_name, best_parameters[param_name]))
# X_2_train_gs_preds = grid_clf_x_2.predict(X_2_train)
# X_2_gs_preds = grid_clf_x_2.predict(X_2)
# X_2_gstraining_accuracy = accuracy_score(y_2_train, X_2_train_gs_preds)
# X_2_gstraining_f1 = f1_score(y_2_train, X_2_train_gs_preds, average="weighted")
# print("Training F1: {:.4}%".format(X_2_gstraining_f1*100))
# print("Training Accuracy: {:.4}%".format(X_2_gstraining_accuracy * 100))
import xgboost as xgb  # the import above is commented out, so import it here

xgb_clf = xgb.XGBClassifier(learning_rate=.1, n_estimators=50, max_depth=5)
xgb_clf.fit(X_2_train, y_2_train)
# X_2_training_accuracy = accuracy_score(y_2_train, X_2_train_preds)
# X_2_training_f1 = f1_score(y_2_train, X_2_train_preds, average="weighted")
# print("Training F1: {:.4}%".format(X_2_training_f1*100))
# print("Training Accuracy: {:.4}%".format(X_2_training_accuracy * 100))
request_df = request_preds.toPandas()
# for i ,row in request_df.iterrows():
# if row['prediction'] == np.nan:
# if row['user_id'] in X_2['user_id']:
# request_df.loc[i, 'prediction'] = xgb_clf.predict(X_2.loc[row['user_id'],'user_id'])
clf_x_3 = xgb.XGBClassifier(learning_rate=.1, n_estimators=50, max_depth=5)
clf_x_3.fit(X_3_train, y_3_train)
# X_2_gstraining_accuracy = accuracy_score(y_2_train, X_2_train_gs_preds)
# X_2_gstraining_f1 = f1_score(y_2_train, X_2_train_gs_preds, average="weighted")
# print("Training F1: {:.4}%".format(X_2_gstraining_f1*100))
# print("Training Accuracy: {:.4}%".format(X_2_gstraining_accuracy * 100))
# X_3_train_preds = clf_x_3.predict(X_3_train)
# X_3_preds = clf_x_3.predict(X_3)
# X_3_training_accuracy = accuracy_score(y_3_train, X_3_train_preds)
# X_3_training_f1 = f1_score(y_3_train, X_3_train_preds, average="weighted")
# print("Training F1: {:.4}%".format(X_3_training_f1*100))
# print("Training Accuracy: {:.4}%".format(X_3_training_accuracy * 100))
```
```
# from sklearn.model_selection import GridSearchCV
# param_grid_3 = {
# "learning_rate": [.1, .01] ,
# 'max_depth': [4, 5],
# 'min_child_weight': [1],
# 'n_estimators': [100]
# }
# grid_clf_x_3 = GridSearchCV(clf_x_3, param_grid_3, scoring='accuracy',
# cv=None, n_jobs=-1 )
# grid_clf_x_3.fit(X_3_train, y_3_train)
# best_parameters = grid_clf_x_3.best_params_
# print("Grid Search found the following optimal parameters: ")
# for param_name in sorted(best_parameters.keys()):
# print("%s: %r" % (param_name, best_parameters[param_name]))
# X_3_train_gs_preds = grid_clf_x_3.predict(X_3_train)
# X_3_gs_preds = grid_clf_x_3.predict(X_3)
# X_3_gstraining_accuracy = accuracy_score(y_3_train, X_3_train_gs_preds)
# X_3_gstraining_f1 = f1_score(y_3_train, X_3_train_gs_preds, average="weighted")
# print("Training F1: {:.4}%".format(X_3_gstraining_f1*100))
# print("Training Accuracy: {:.4}%".format(X_3_gstraining_accuracy * 100))
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=10, n_jobs=-1)
rfc.fit(X_2_train, y_2_train)
rfc3 = RandomForestClassifier(n_estimators=10, n_jobs=-1)
rfc3.fit(X_3_train, y_3_train)
preds_X_2 = rfc.predict_proba(X_2_train)
preds_X_3 = rfc3.predict_proba(X_3_train)
requests_df = request_preds.toPandas()
```
```
X_3.head().T
```
```
# for i, row in requests_df.iterrows():
# if str(row['prediction']) == 'nan':
# if row['user_id'] in X_2['user_id']:
# requests_df.loc[i, 'prediction'] = rfc.predict(pd.DataFrame(X_2.loc[row['user_id'],:]).T)
# for i, row in request_df.iterrows():
# if str(row['prediction']) == 'nan':
# print('i am here')
# if row['user_id'] in X_3['user_id']:
# request_df.loc[i, 'prediction'] = xgb_clf.predict(X_3.loc[row['user_id'],'user_id'])
for i, row in requests_df.iterrows():
if str(row['prediction']) == 'nan':
if row['user_id'] in X_2['user_id']:
requests_df.loc[i, 'prediction'] = rfc.predict(pd.DataFrame(X_2.loc[row['user_id'],:]).T)
for i, row in requests_df.iterrows():
if str(row['prediction']) == 'nan':
if row['user_id'] in X_3['user_id']:
requests_df.loc[i, 'prediction'] = rfc3.predict(pd.DataFrame(X_3.loc[row['user_id'],:]).T)
```
```
# for i, row in requests_df[requests_df['prediction'].isna()].iterrows():
# if row['user_id'] in X_2['user_id']:
# requests_df.loc[i, 'prediction'] = rfc.predict(pd.DataFrame(X_2.loc[row['user_id'],:]).T)
# for i, row in requests_df.iterrows():
# if str(row['prediction']) == 'nan':
# if row['user_id'] in X_3['user_id']:
# requests_df.loc[i, 'prediction'] = rfc3.predict(pd.DataFrame(X_3.loc[row['user_id'],:]).T)
request_df.to_json('final_final_rfc_json.json')
```
```
request_df
```
---
```
import numpy as np
import pandas as pd
class PastSampler:
'''
Forms training samples for predicting future values from past value
'''
def __init__(self, N, K, sliding_window = True):
'''
Predict K future sample using N previous samples
'''
self.K = K
self.N = N
self.sliding_window = sliding_window
def transform(self, A):
M = self.N + self.K #Number of samples per row (sample + target)
#indexes
if self.sliding_window:
I = np.arange(M) + np.arange(A.shape[0] - M + 1).reshape(-1, 1)
else:
if A.shape[0]%M == 0:
I = np.arange(M)+np.arange(0,A.shape[0],M).reshape(-1,1)
else:
I = np.arange(M)+np.arange(0,A.shape[0] -M,M).reshape(-1,1)
B = A[I].reshape(-1, M * A.shape[1], A.shape[2])
ci = self.N * A.shape[1] #Number of features per sample
return B[:, :ci], B[:, ci:] #Sample matrix, Target matrix
#data file path
dfp = 'data/bitcoin2015to2017.csv'
#Columns of price data to use
columns = ['Close']
# df = pd.read_csv(dfp).dropna().tail(1000000)
df = pd.read_csv(dfp)
time_stamps = df['Timestamp']
df = df.loc[:,columns]
# original_df = pd.read_csv(dfp).dropna().tail(1000000).loc[:,columns]
original_df = pd.read_csv(dfp).loc[:,columns]
file_name='bitcoin2015to2017_close.h5'
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
# normalization
for c in columns:
df[c] = scaler.fit_transform(df[c].values.reshape(-1,1))
#%%Features are channels
A = np.array(df)[:,None,:]
original_A = np.array(original_df)[:,None,:]
time_stamps = np.array(time_stamps)[:,None,None]
#%%Make samples of temporal sequences of pricing data (channel)
NPS, NFS = 256, 16 #Number of past and future samples
ps = PastSampler(NPS, NFS, sliding_window=False)
B, Y = ps.transform(A)
input_times, output_times = ps.transform(time_stamps)
original_B, original_Y = ps.transform(original_A)
import h5py
with h5py.File(file_name, 'w') as f:
f.create_dataset("inputs", data = B)
f.create_dataset('outputs', data = Y)
f.create_dataset("input_times", data = input_times)
f.create_dataset('output_times', data = output_times)
f.create_dataset("original_datas", data=np.array(original_df))
f.create_dataset('original_inputs',data=original_B)
f.create_dataset('original_outputs',data=original_Y)
# f.create_dataset('original_times', data=time_stamps)
B.shape
```
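A minimal sketch of what `PastSampler.transform` produces with `sliding_window=True` (toy series, N=3 past samples, K=1 future sample):

```python
import numpy as np

# Toy series: 6 time steps, 1 channel, 1 feature
A = np.arange(6, dtype=float).reshape(6, 1, 1)
N, K = 3, 1
M = N + K  # samples per window (past + target)

# Each row of I indexes one window of M consecutive time steps
I = np.arange(M) + np.arange(A.shape[0] - M + 1).reshape(-1, 1)
B = A[I].reshape(-1, M * A.shape[1], A.shape[2])
ci = N * A.shape[1]  # split point between inputs and targets
inputs, targets = B[:, :ci], B[:, ci:]

print(inputs.squeeze())   # windows [0,1,2], [1,2,3], [2,3,4]
print(targets.squeeze())  # their next values: 3., 4., 5.
```

Each window of three consecutive prices becomes one training input, and the value immediately following it becomes the target.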
---
## Using Random EMA to check End-of-Day: Exploratory Data Analysis
- This notebook is dedicated to understanding End-of-Day EMA using Random EMA
- For every Random EMA where the response is 'Yes', check to see
+ What is the fraction where the user clicked correct hour
+ What is the fraction where the user clicked plus/minus one hour
- We see that for _aggregated data_, the fraction where Random EMA and EOD agree is:
+ 0.413 for current hour only
+ 0.712 for plus/minus hour
- If we assume normality and use the midpoint of the hour, we can convert these fractions to an estimated standard deviation of a normal:
    + $\Phi(30/\sigma) - \Phi(-30/\sigma) = 2\Phi(30/\sigma) - 1 = 0.413 \Rightarrow \sigma = 30 / \Phi^{-1}\big((0.413+1)/2\big) \approx 55$ minutes
    + $\sigma = 60 / \Phi^{-1}\big((0.712+1)/2\big) \approx 56.47$ minutes
        * Here we use 60 since the window is plus/minus one hour, and 90 seemed excessive
    + So both suggest a prior with a standard deviation of about 55 minutes (or an Inverse Wishart with mean 55)
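The back-of-envelope conversion above can be reproduced numerically with `scipy.stats.norm` (assuming scipy is available in the environment):

```python
from scipy.stats import norm

# P(|error| < 30 min) = 0.413 for the current-hour window
sigma_1hr = 30 / norm.ppf((0.413 + 1) / 2)
# P(|error| < 60 min) = 0.712 for the plus/minus one hour window
sigma_2hr = 60 / norm.ppf((0.712 + 1) / 2)

print(round(sigma_1hr, 2))  # ~55.2 minutes
print(round(sigma_2hr, 2))  # ~56.5 minutes
```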
```
import pandas as pd
import numpy as np
import datetime as datetime
import os
os.getcwd()
dir = "../final-data"
# different hour windows in eod_ema
keys = ['8to9', '9to10', '10to11', '11to12','12to13','13to14','14to15','15to16','16to17','17to18','18to19','19to20']
random_accptresponse = ['1 - 19 Minutes', '20 - 39 Minutes', '40 - 59 Minutes',
'60 - 79 Minutes', '80 - 100 Minutes']
random_dictionary = {'1 - 19 Minutes': 10,
'20 - 39 Minutes': 30,
'40 - 59 Minutes':50,
'60 - 79 Minutes':70,
'80 - 100 Minutes':90 }
# read data
random_ema = pd.read_csv(os.path.join(os.path.realpath(dir), 'random-ema-final.csv'))
eod_ema = pd.read_csv(os.path.join(os.path.realpath(dir), 'eod-ema-final.csv'))
# Make a list of all random EMA event-times between 8AM and 8PM
# Throw away observations where 'when_smoke' is NaN or
# 'More than 30 minutes' to ensure we can calculate a meaningful
# quantity.
days_smoked = {}
for index, row in random_ema.iterrows():
try:
time = datetime.datetime.strptime(row['date'], '%m/%d/%y %H:%M')
except:
time = datetime.datetime.strptime(row['date'], '%Y-%m-%d %H:%M:%S')
if row['when_smoke'] in random_accptresponse:
time = time - datetime.timedelta(minutes=random_dictionary[row['when_smoke']])
date = (time.year, time.month, time.day, time.hour)
if row['participant_id'] not in days_smoked:
days_smoked[row['participant_id']] = set()
if 8 <= date[3] < 20 and row['when_smoke'] in random_accptresponse:
days_smoked[row['participant_id']].add(date)
# Construct a list of id + dates for the EOD EMA;
# used below to look up matching responses.
eod_dates = []
for irow in range(0,eod_ema.shape[0]):
row = eod_ema.iloc[irow]
if row['status'] == "MISSED":
continue
try:
time = datetime.datetime.strptime(row['date'], '%m/%d/%Y %H:%M')
except:
time = datetime.datetime.strptime(row['date'], '%Y-%m-%d %H:%M:%S')
if time.hour == 0 or time.hour == 1:
date = np.array([row['participant_id'], time.year, time.month, time.day-1])
date = np.append(date, np.array(row[keys]))
else:
date = np.array([row['participant_id'], time.year, time.month, time.day])
date = np.append(date, np.array(row[keys]))
eod_dates.append(date)
eod_dates = np.asarray(eod_dates)
# For participants with both Random EMA and EOD measurements,
# on days where both were given, we check whether they agree
# in the current hour, or within +-1 hour in either direction.
# The +-1 window is clipped at 8AM and 8PM respectively.
matching_counts = []
max_iloc = 15; min_iloc = 4
for id in set(days_smoked.keys()) & set(eod_dates[:,0]):
eod_dates_id = np.where(eod_dates[:,0] == id)
eod_dates_subset = eod_dates[eod_dates_id[0],:]
total_count_id = 0
hour_count_id_true = 0
twohour_count_id_true = 0
if days_smoked[id] == set():
continue
for ec_time in days_smoked[id]:
row_iloc = np.where((eod_dates_subset[:,1:4] == ec_time[0:3]).all(axis=1))[0]
if not row_iloc.size > 0:
continue
total_count_id+=1
row = eod_dates_subset[row_iloc][0]
ec_iloc = range(8,20).index(ec_time[3])+4
if row[ec_iloc]==1:
hour_count_id_true+=1
if any(row[range(max(min_iloc, ec_iloc-1), min(max_iloc, ec_iloc+1)+1)] == 1):
twohour_count_id_true+=1
if total_count_id > 0:
matching_counts.append(np.array([total_count_id, hour_count_id_true, twohour_count_id_true], dtype='f'))
matching_counts = np.asarray(matching_counts)
fraction_per_id_onehour = np.divide(matching_counts[:,1],matching_counts[:,0])
fraction_per_id_twohour = np.divide(matching_counts[:,2],matching_counts[:,0])
aggregate_matching_counts = np.sum(matching_counts, axis=0)
aggregate_frac_onehour = aggregate_matching_counts[1]/aggregate_matching_counts[0]
aggregate_frac_twohour = aggregate_matching_counts[2]/aggregate_matching_counts[0]
print('Current hour only:')
print('Aggregated data, Fraction agreement between EC and EOD: %s' % (np.round(aggregate_frac_onehour,3)))
print('Mean of Fraction agreement across individuals: %s' % (np.round(np.mean(fraction_per_id_onehour),3)))
print('Standard deviation of Fraction agreement across individuals: %s' % (np.round(np.std(fraction_per_id_onehour),3)))
print()
print('Plus-minus one hour:')
print('Aggregated data, Fraction agreement between EC and EOD: %s' % (np.round(aggregate_frac_twohour,3)))
print('Mean of Fraction agreement across individuals: %s' % (np.round(np.mean(fraction_per_id_twohour),3)))
print('Standard deviation of Fraction agreement across individuals: %s' % (np.round(np.std(fraction_per_id_twohour),3)))
# Compute an anova decomposition using the bernoulli likelihood
# This will test if there are significant differences across
# individuals.
llik_onehour = 0; llik_twohour = 0
for i in range(0, fraction_per_id_onehour.size):
num_ones_onehour = matching_counts[i,1]
num_zeros_onehour = matching_counts[i,0] - matching_counts[i,1]
if num_ones_onehour > 0.0:
llik_onehour += np.multiply(num_ones_onehour, np.log(fraction_per_id_onehour[i]))
if num_zeros_onehour > 0.0:
llik_onehour += np.multiply(num_zeros_onehour, np.log(1-fraction_per_id_onehour[i]))
num_ones_twohour = matching_counts[i,2]
num_zeros_twohour = matching_counts[i,0] - matching_counts[i,2]
if num_ones_twohour > 0.0:
llik_twohour += np.multiply(num_ones_twohour, np.log(fraction_per_id_twohour[i]))
if num_zeros_twohour > 0.0:
llik_twohour += np.multiply(num_zeros_twohour, np.log(1-fraction_per_id_twohour[i]))
agg_num_ones = aggregate_matching_counts[1]
agg_num_zeros = aggregate_matching_counts[0] - aggregate_matching_counts[1]
agg_llik_onehour = agg_num_ones*np.log(aggregate_frac_onehour)+agg_num_zeros*np.log(1-aggregate_frac_onehour)
D_onehour = -2*agg_llik_onehour + 2*llik_onehour
agg_num_ones_twohour = aggregate_matching_counts[2]
agg_num_zeros_twohour = aggregate_matching_counts[0] - aggregate_matching_counts[2]
agg_llik_twohour = agg_num_ones_twohour*np.log(aggregate_frac_twohour)+agg_num_zeros_twohour*np.log(1-aggregate_frac_twohour)
D_twohour = -2*agg_llik_twohour + 2*llik_twohour
from scipy.stats import chi2
n = aggregate_matching_counts[0]
k = matching_counts.shape[0]
df = k-1
print('ANOVA p-value for current hour: %s' % (1-chi2.cdf(D_onehour, df)))
print('ANOVA p-value for plus-minus one hour: %s' % (1-chi2.cdf(D_twohour, df)))
```
---
<img src="http://akhavanpour.ir/notebook/images/srttu.gif" alt="SRTTU" style="width: 150px;"/>
[](https://notebooks.azure.com/import/gh/Alireza-Akhavan/class.vision)
# <div style="direction:ltr;text-align:left;font-family:Tahoma">Bitwise Operations and Image Masking</div>
<div style="direction:ltr;text-align:left;font-family:Tahoma">
As a first step, we create some shapes to help understand these operations.
</div>
```
import cv2
import numpy as np
# Only two dimensions here because this is a grayscale image;
# for a color image we would use:
# rectangle = np.zeros((300, 300, 3), np.uint8)
# Make a square
square = np.zeros((300, 300), np.uint8)
cv2.rectangle(square, (50, 50), (250, 250), 255, -1)
cv2.imshow("Square", square)
cv2.waitKey(0)
# Make an ellipse
ellipse = np.zeros((300, 300), np.uint8)
cv2.ellipse(ellipse, (150, 150), (150, 150), 30, 0, 180, 255, -1)
cv2.imshow("Ellipse", ellipse)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
## <div style="direction:ltr;text-align:left;font-family:Tahoma">Bitwise Image Operations in OpenCV</div>
<ul>
<li>bitwise_and</li>
<li>bitwise_or</li>
<li>bitwise_xor</li>
<li>bitwise_not</li>
</ul>
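These are elementwise operations on pixel values; a minimal NumPy sketch of the same logic (the cv2 functions behave like this on uint8 arrays):

```python
import numpy as np

a = np.array([0, 0, 255, 255], dtype=np.uint8)
b = np.array([0, 255, 0, 255], dtype=np.uint8)

print(np.bitwise_and(a, b))  # [  0   0   0 255] -> only where both are set
print(np.bitwise_or(a, b))   # [  0 255 255 255] -> where either is set
print(np.bitwise_xor(a, b))  # [  0 255 255   0] -> where exactly one is set
print(np.bitwise_not(a))     # [255 255   0   0] -> inversion
```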
```
# Shows only where they intersect
And = cv2.bitwise_and(square, ellipse)
cv2.imshow("AND", And)
cv2.waitKey(0)
# Shows where either square or ellipse is
bitwiseOr = cv2.bitwise_or(square, ellipse)
cv2.imshow("OR", bitwiseOr)
cv2.waitKey(0)
# Shows where either exist by itself
bitwiseXor = cv2.bitwise_xor(square, ellipse)
cv2.imshow("XOR", bitwiseXor)
cv2.waitKey(0)
# Shows everything that isn't part of the square
bitwiseNot_sq = cv2.bitwise_not(square)
cv2.imshow("NOT - square", bitwiseNot_sq)
cv2.waitKey(0)
### Notice the last operation inverts the image totally
cv2.destroyAllWindows()
```
## <div style="direction:ltr;text-align:left;font-family:Tahoma">A Practical Example with a Color Image</div>
```
import cv2
import numpy as np
image = cv2.imread('./images/input.jpg')
cropped = image[100:600 , 150:650]
cv2.imshow("Beautiful Cow!", cropped)
cv2.waitKey(0)
circle = np.zeros((500,500,3), np.uint8)
cv2.circle(circle, (250, 250), 250, (255,255,255), -1)
cv2.imshow("Circle", circle)
cv2.waitKey(0)
output_image = cv2.bitwise_and(cropped, circle)
cv2.imshow("Output Image", output_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
## <div style="direction:ltr;text-align:left;font-family:Tahoma">To understand bitwise operations on two images, consider the following example</div>
```
square = np.zeros((10, 10), np.uint8)
cv2.rectangle(square, (2, 2), (8, 8), 2, -1) # 2 : 010
circle1 = np.zeros((10, 10), np.uint8)
cv2.circle(circle1, (5, 5), 2, 3, -1) # 3 : 011
circle2 = np.zeros((10, 10), np.uint8)
cv2.circle(circle2, (5, 5), 2, 4, -1) # 4 : 100
output1 = cv2.bitwise_and(square, circle1)
output2 = cv2.bitwise_and(square, circle2)
print("--square--")
print(square)
print("--circle1--")
print(circle1)
print("--circle2--")
print(circle2)
print("--output1--")
print(output1)
print("--output2--")
print(output2)
```
<div class="alert alert-block alert-info">
<div style="direction:ltr;text-align:left;font-family:Tahoma">Shahid Rajaee Teacher Training University<br>Special Topics - Introduction to Computer Vision<br>Alireza Akhavan Pour<br>96-97<br>
</div>
<a href="https://www.srttu.edu/">SRTTU.edu</a> - <a href="http://class.vision">Class.Vision</a> - <a href="http://AkhavanPour.ir">AkhavanPour.ir</a>
</div>
---
```
import numpy as np
import matplotlib.pyplot as plt
import scipy
from scipy import ndimage
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', as_frame=False)  # as_frame=False keeps the data as NumPy arrays
x = mnist.data
y = mnist.target
e_k = np.zeros_like(x)
s_k = np.zeros_like(x)
n_k = np.zeros_like(x)
nw_k = np.zeros_like(x)
ne_k = np.zeros_like(x)
sw_k = np.zeros_like(x)
se_k = np.zeros_like(x)
w_k = np.zeros_like(x)
ka= np.array([[-3,-3,-3],[-3,0,-3],[5,5,5]])
na= np.array([[-3,-3,5],[-3,0,5],[-3,-3,5]])
wa= np.array([[5,5,5],[-3,0,-3],[-3,-3,-3]])
sa= np.array([[5,-3,-3],[5,0,-3],[5,-3,-3]])
nea= np.array([[-3,-3,-3],[-3,0,5],[-3,5,5]])
nwa= np.array([[-3,5,5],[-3,0,5],[-3,-3,-3]])
sea= np.array([[-3,-3,-3],[5,0,-3],[5,5,-3]])
swa= np.array([[5,5,-3],[5,0,-3],[-3,-3,-3]])
for i in range(70000):
e_k[i]=ndimage.convolve(x[i].reshape((28, 28)),ka,mode='nearest',cval=0.0).reshape(784)
s_k[i]=ndimage.convolve(x[i].reshape((28, 28)),sa,mode='nearest',cval=0.0).reshape(784)
n_k[i]=ndimage.convolve(x[i].reshape((28, 28)),na,mode='nearest',cval=0.0).reshape(784)
w_k[i]=ndimage.convolve(x[i].reshape((28, 28)),wa,mode='nearest',cval=0.0).reshape(784)
nw_k[i]=ndimage.convolve(x[i].reshape((28, 28)),nwa,mode='nearest',cval=0.0).reshape(784)
ne_k[i]=ndimage.convolve(x[i].reshape((28, 28)),nea,mode='nearest',cval=0.0).reshape(784)
sw_k[i]=ndimage.convolve(x[i].reshape((28, 28)),swa,mode='nearest',cval=0.0).reshape(784)
se_k[i]=ndimage.convolve(x[i].reshape((28, 28)),sea,mode='nearest',cval=0.0).reshape(784)
ldp_mat=np.zeros_like(x)
ldp_hist=np.zeros((70000,56))
for i in range(70000):
e=e_k[i].reshape((28,28))
s=s_k[i].reshape((28,28))
n=n_k[i].reshape((28,28))
w=w_k[i].reshape((28,28))
nw=nw_k[i].reshape((28,28))
ne=ne_k[i].reshape((28,28))
sw=sw_k[i].reshape((28,28))
se=se_k[i].reshape((28,28))
ldp=ldp_mat[i].reshape((28,28))
for k in range(28):
for j in range(28):
lst=[se[k][j],s[k][j],sw[k][j],w[k][j],nw[k][j],n[k][j],ne[k][j],e[k][j]]
l=[abs(h) for h in lst]
marr=np.argsort(l)
marr1=marr[::-1]
binary=np.zeros(8,dtype="uint8")
binary[marr1[0]]=1
binary[marr1[1]]=1
binary[marr1[2]]=1
d_no=binary[0]*2**7+binary[1]*2**6+binary[2]*2**5+binary[3]*2**4+binary[4]*2**3+binary[5]*2**2+binary[6]*2**1+binary[7]*2**0
ldp[k][j]=d_no
ldp_mat[i]=ldp.reshape(784)
for i in range (70000):
hist=ldp_mat[i].reshape((28,28))
arr=np.zeros(56)
for c in range(1,57):
cnt=0
for k in range(28):
for j in range(28):
if hist[k][j]==c:
cnt+=1
arr[c-1]=cnt
ldp_hist[i]=arr
from sklearn.model_selection import train_test_split
train_img, test_img, train_lbl, test_lbl = train_test_split( ldp_mat, mnist.target, test_size=1/7.0, random_state=0)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(train_img)
train_img = scaler.transform(train_img)
test_img = scaler.transform(test_img)
from sklearn.decomposition import PCA
pca = PCA(.95)
pca.fit(train_img)
train_img = pca.transform(train_img)
test_img = pca.transform(test_img)
from sklearn.svm import SVC
svc_model=SVC()
import time
f=time.time()
svc_model.fit(train_img, train_lbl)
q=time.time()
print(q-f)
y_predict=svc_model.predict(test_img)
from sklearn import metrics
d=svc_model.score(test_img,test_lbl)
print(d*100)
```
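The pixel loop above encodes a Local Directional Pattern: of the eight Kirsch responses, the three with the largest absolute value get their bit set, and the 8-bit string is read as a decimal. A standalone sketch of that encoding step:

```python
import numpy as np

def ldp_code(responses):
    """Eight directional responses -> LDP code with the top-3 |response| bits set."""
    order = np.argsort(np.abs(responses))[::-1]  # indices, largest magnitude first
    bits = np.zeros(8, dtype=np.uint8)
    bits[order[:3]] = 1
    # bits[0] is the most significant bit, matching d_no in the notebook
    return int(sum(int(b) << (7 - i) for i, b in enumerate(bits)))

# Largest magnitudes at indices 4, 2, 6 -> bits 00101010 -> 42
print(ldp_code([5, -3, 7, 2, -8, 1, 6, 0]))  # 42
```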
---
```
#notebook to fetch reanalysis used in example
import cdsapi
import pyart
import os
import sys
import netCDF4
import xarray as xr
from matplotlib import pyplot as plt
%matplotlib inline
#NOTE.. you need a key from ECMWF
#populate ~/.cdsapirc with
#url: https://cds.climate.copernicus.eu/api/v2
#key: YOURKEYHASH
def make_a_request(year, month, day, area):
request = {'product_type': 'reanalysis',
'format': 'netcdf',
'variable': [
'divergence', 'fraction_of_cloud_cover', 'geopotential',
'potential_vorticity', 'relative_humidity',
'specific_cloud_ice_water_content', 'specific_cloud_liquid_water_content', 'specific_humidity',
'specific_rain_water_content', 'specific_snow_water_content', 'temperature',
'u_component_of_wind', 'v_component_of_wind', 'vertical_velocity',
'vorticity'
],
'pressure_level': [
'1', '2', '3',
'5', '7', '10',
'20', '30', '50',
'70', '100', '125',
'150', '175', '200',
'225', '250', '300',
'350', '400', '450',
'500', '550', '600',
'650', '700', '750',
'775', '800', '825',
'850', '875', '900',
'925', '950', '975',
'1000'
],
'year': year,
'month': month,
'day': day,
'area': area,#"37.00/-94.00/24.00/-75.00"[60, -10, 50, 2], # North, West, South, East. Default: global
'time': [
'00:00', '01:00', '02:00',
'03:00', '04:00', '05:00',
'06:00', '07:00', '08:00',
'09:00', '10:00', '11:00',
'12:00', '13:00', '14:00',
'15:00', '16:00', '17:00',
'18:00', '19:00', '20:00',
'21:00', '22:00', '23:00'
]
}
return request
radar = pyart.aux_io.read_odim_h5(os.path.expanduser('~/data/20171230/20171230_172408_FAIRS.h5'))
radartime = netCDF4.num2date(0, radar.time['units'])
lats = radar.gate_latitude
lons = radar.gate_longitude
min_lon = lons['data'].min()
min_lat = lats['data'].min()
max_lat = lats['data'].max()
max_lon = lons['data'].max()
c = cdsapi.Client()
myreq = make_a_request(radartime.year, radartime.month, radartime.day, [max_lat, min_lon, min_lat, max_lon])
c.retrieve("reanalysis-era5-pressure-levels",
myreq, os.path.expanduser('~/data/era5_data.nc'))
dset = xr.load_dataset(os.path.expanduser('~/data/era5_data.nc'))
dset
dset.t[0].sel(level=1000, method='nearest').plot.pcolormesh()
cprof = dset.sel(longitude=28.5, latitude=-26.0, method='nearest')
plt.plot( cprof.t[0], cprof.z[0]/(9.8*1000.0))
plt.plot( cprof.u[0], cprof.z[0]/(9.8*1000.0))
plt.plot( cprof.v[0], cprof.z[0]/(9.8*1000.0))
```
---
### 1. Setting up the meta-BO environment
```
from matplotlib import pyplot as plt
from meta_bo.meta_environment import RandomMixtureMetaEnv
import numpy as np
# setup meta-learning / meta-bo environment
rds = np.random.RandomState(456)
meta_env = RandomMixtureMetaEnv(random_state=rds)
# sample functions / BO tasks from the meta-env
envs = meta_env.sample_envs(num_envs=10)
meta_train_data = meta_env.generate_uniform_meta_train_data(num_tasks=20, num_points_per_task=20)
meta_test_data = meta_env.generate_uniform_meta_valid_data(num_tasks=50, num_points_context=10, num_points_test=50)
# setup test task
test_env = meta_env.sample_env()
x_context, y_context = test_env.generate_uniform_data(num_points=5)
# visualize samples from the meta-bo env
x_plot = np.linspace(meta_env.domain.l, meta_env.domain.u, 200)
for env in envs:
plt.plot(x_plot, env.f(x_plot))
plt.title('Sample from the Meta-BO environment')
plt.show()
```
### 2. Set up a plotting helper function
```
def plot_regret(evals_stacked: dict, fig_title: str = 'Regret'):
regret = evals_stacked['y_exact'] - evals_stacked['y_min']
regret_bp = evals_stacked['y_exact_bp'] - evals_stacked['y_min']
simple_regret = np.minimum.accumulate(regret, axis=-1)
cum_regret = np.cumsum(regret, axis=-1)
cum_regret_bp = np.cumsum(regret_bp, axis=-1)
fig, axes = plt.subplots(ncols=2, figsize=(10, 4))
axes[0].plot(simple_regret)
axes[0].set_ylabel('simple regret')
axes[0].set_yscale('log')
axes[0].set_xlabel('t')
axes[1].plot(cum_regret_bp)
axes[1].set_ylabel('cumulative inference regret')
axes[1].set_xlabel('t')
plt.suptitle(fig_title)
plt.tight_layout()
plt.show()
```
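For reference, the UCB acquisition used below queries the maximizer of mu(x) + sqrt(beta) * sigma(x). A minimal sketch on a discretized domain (toy posterior values, not the actual `meta_bo` model):

```python
import numpy as np

x_grid = np.linspace(-10, 10, 200)

# Toy posterior: a mean function plus heteroscedastic uncertainty
mu = np.sin(x_grid)
sigma = 0.1 + 0.5 * np.abs(x_grid) / 10  # less certain away from the center
beta = 2.0

ucb = mu + np.sqrt(beta) * sigma  # upper confidence bound
x_next = x_grid[np.argmax(ucb)]   # the point UCB would evaluate next
print(round(x_next, 2))
```

Larger beta weights the uncertainty term more heavily, trading exploitation for exploration.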
### 3. Run GP-UCB with Vanilla GP
```
from meta_bo.algorithms.acquisition import UCB
from meta_bo.models.vanilla_gp import GPRegressionVanilla
model = GPRegressionVanilla(input_dim=test_env.domain.d, normalization_stats=test_env.normalization_stats,
normalize_data=True, random_state=rds)
# perform BO with the UCB algorithm
algo = UCB(model, test_env.domain, beta=2.0)
evals = []
fig, axes = plt.subplots(ncols=4, figsize=(12, 3))
plt_id = 0
for t in range(20):
x = algo.next()
x_bp = algo.best_predicted()
evaluation = test_env.evaluate(x, x_bp=x_bp)
evals.append(evaluation)
evals_stacked = {k: np.array([dic[k] for dic in evals]) for k in evals[0]}
algo.add_data(evaluation['x'], evaluation['y'])
if t in [2, 4, 8, 16]:
x_plot = np.expand_dims(np.linspace(-10, 10, 200), axis=-1)
pred_mean, pred_std = model.predict(x_plot)
axes[plt_id].plot(x_plot, test_env.f(x_plot), linestyle='--')
axes[plt_id].plot(x_plot, pred_mean)
axes[plt_id].fill_between(np.squeeze(x_plot), pred_mean - 2 * pred_std,
pred_mean + 2 * pred_std, alpha=0.25)
axes[plt_id].scatter(evals_stacked['x'], evals_stacked['y'], label='BO evaluations')
axes[plt_id].set_title(f'GP-UCB at iter {t}')
plt_id += 1
plt.show()
# plt regret
evals_stacked = {k: np.array([dic[k] for dic in evals]) for k in evals[0]}
plot_regret(evals_stacked, 'Regret Vanilla GP + UCB')
```
### Meta-train F-PACOH and use it for UCB
```
from meta_bo.algorithms.acquisition import UCB
from meta_bo.models.f_pacoh_map import FPACOH_MAP_GP
import warnings
warnings.filterwarnings("ignore") # filter some numerical warnings to make the logs cleaner
NN_LAYERS = (32, 32)
rds_model = np.random.RandomState(345)
fpacoh_model = FPACOH_MAP_GP(domain=meta_env.domain, num_iter_fit=6000, weight_decay=1e-4, prior_factor=0.5,
task_batch_size=5, covar_module='NN', mean_module='NN',
mean_nn_layers=NN_LAYERS, kernel_nn_layers=NN_LAYERS, random_state=rds_model)
# meta-training for 6000 iterations
fpacoh_model.meta_fit(meta_train_data, meta_valid_tuples=meta_test_data, log_period=500)
# perform BO with the UCB algorithm
algo = UCB(fpacoh_model, test_env.domain, beta=2.0)
evals = []
fig, axes = plt.subplots(ncols=4, figsize=(12, 3))
plt_id = 0
for t in range(20):
x = algo.next()
x_bp = algo.best_predicted()
evaluation = test_env.evaluate(x, x_bp=x_bp)
evals.append(evaluation)
evals_stacked = {k: np.array([dic[k] for dic in evals]) for k in evals[0]}
algo.add_data(evaluation['x'], evaluation['y'])
if t in [2, 4, 8, 16]:
x_plot = np.expand_dims(np.linspace(-10, 10, 200), axis=-1)
pred_mean, pred_std = fpacoh_model.predict(x_plot)  # use the meta-trained model, not the vanilla GP
axes[plt_id].plot(x_plot, test_env.f(x_plot), linestyle='--')
axes[plt_id].plot(x_plot, pred_mean)
axes[plt_id].fill_between(np.squeeze(x_plot), pred_mean - 2 * pred_std,
pred_mean + 2 * pred_std, alpha=0.25)
axes[plt_id].scatter(evals_stacked['x'], evals_stacked['y'], label='BO evaluations')
axes[plt_id].set_title(f'F-PACOH UCB at iter {t}')
plt_id += 1
plt.show()
# plt regret
evals_stacked = {k: np.array([dic[k] for dic in evals]) for k in evals[0]}
plot_regret(evals_stacked, fig_title='Regret F-PACOH + UCB')
```
---
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm as tqdm
%matplotlib inline
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import random
# from google.colab import drive
# drive.mount('/content/drive')
transform = transforms.Compose(
[transforms.CenterCrop((28,28)),transforms.ToTensor(),transforms.Normalize([0.5], [0.5])])
mnist_trainset = torchvision.datasets.MNIST(root='./data', train=True, download=True, transform=transform)
mnist_testset = torchvision.datasets.MNIST(root='./data', train=False, download=True, transform=transform)
index1 = [np.where(mnist_trainset.targets==0)[0] , np.where(mnist_trainset.targets==1)[0] ]
index1 = np.concatenate(index1,axis=0)
len(index1) #12665
true = 10000
total = 47000
sin = total-true
sin
epochs = 300
indices = np.random.choice(index1,true)
indices.shape
index = np.where(np.logical_and(mnist_trainset.targets!=0,mnist_trainset.targets!=1))[0] #47335
index.shape
req_index = np.random.choice(index.shape[0], sin, replace=False)
# req_index
index = index[req_index]
index.shape
values = np.random.choice([0,1],size= sin)
print(sum(values ==0),sum(values==1), sum(values ==0) + sum(values==1) )
mnist_trainset.data = torch.cat((mnist_trainset.data[indices],mnist_trainset.data[index]))
mnist_trainset.targets = torch.cat((mnist_trainset.targets[indices],torch.Tensor(values).type(torch.LongTensor)))
mnist_trainset.targets.shape, mnist_trainset.data.shape
# mnist_trainset.targets[index] = torch.Tensor(values).type(torch.LongTensor)
j = 20078  # without shuffling, indices up to `true` keep correct labels; after that, labels are corrupted
print(plt.imshow(mnist_trainset.data[j]),mnist_trainset.targets[j])
trainloader = torch.utils.data.DataLoader(mnist_trainset, batch_size=250,shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(mnist_testset, batch_size=250,shuffle=False, num_workers=2)
mnist_trainset.data.shape
classes = ('zero', 'one')
dataiter = iter(trainloader)
images, labels = next(dataiter)  # the .next() method was removed in recent PyTorch; use the built-in next()
images[:4].shape
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
imshow(torchvision.utils.make_grid(images[:10]))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(10)))
class Conv_module(nn.Module):
def __init__(self,inp_ch,f,s,k,pad):
super(Conv_module,self).__init__()
self.inp_ch = inp_ch
self.f = f
self.s = s
self.k = k
self.pad = pad
self.conv = nn.Conv2d(self.inp_ch,self.f,k,stride=s,padding=self.pad)
self.bn = nn.BatchNorm2d(self.f)
self.act = nn.ReLU()
def forward(self,x):
x = self.conv(x)
x = self.bn(x)
x = self.act(x)
return x
class inception_module(nn.Module):
def __init__(self,inp_ch,f0,f1):
super(inception_module, self).__init__()
self.inp_ch = inp_ch
self.f0 = f0
self.f1 = f1
self.conv1 = Conv_module(self.inp_ch,self.f0,1,1,pad=0)
self.conv3 = Conv_module(self.inp_ch,self.f1,1,3,pad=1)
#self.conv1 = nn.Conv2d(3,self.f0,1)
#self.conv3 = nn.Conv2d(3,self.f1,3,padding=1)
def forward(self,x):
x1 = self.conv1.forward(x)
x3 = self.conv3.forward(x)
#print(x1.shape,x3.shape)
x = torch.cat((x1,x3),dim=1)
return x
class downsample_module(nn.Module):
def __init__(self,inp_ch,f):
super(downsample_module,self).__init__()
self.inp_ch = inp_ch
self.f = f
self.conv = Conv_module(self.inp_ch,self.f,2,3,pad=0)
self.pool = nn.MaxPool2d(3,stride=2,padding=0)
def forward(self,x):
x1 = self.conv(x)
#print(x1.shape)
x2 = self.pool(x)
#print(x2.shape)
x = torch.cat((x1,x2),dim=1)
return x,x1
class inception_net(nn.Module):
def __init__(self):
super(inception_net,self).__init__()
self.conv1 = Conv_module(1,96,1,3,0)
self.incept1 = inception_module(96,32,32)
self.incept2 = inception_module(64,32,48)
self.downsample1 = downsample_module(80,80)
self.incept3 = inception_module(160,112,48)
self.incept4 = inception_module(160,96,64)
self.incept5 = inception_module(160,80,80)
self.incept6 = inception_module(160,48,96)
self.downsample2 = downsample_module(144,96)
self.incept7 = inception_module(240,176,60)
self.incept8 = inception_module(236,176,60)
self.pool = nn.AvgPool2d(5)
self.linear = nn.Linear(236,2)
def forward(self,x):
x = self.conv1.forward(x)
#act1 = x
x = self.incept1.forward(x)
#act2 = x
x = self.incept2.forward(x)
#act3 = x
x,act4 = self.downsample1.forward(x)
x = self.incept3.forward(x)
#act5 = x
x = self.incept4.forward(x)
#act6 = x
x = self.incept5.forward(x)
#act7 = x
x = self.incept6.forward(x)
#act8 = x
x,act9 = self.downsample2.forward(x)
x = self.incept7.forward(x)
#act10 = x
x = self.incept8.forward(x)
#act11 = x
#print(x.shape)
x = self.pool(x)
#print(x.shape)
x = x.view(-1,1*1*236)
x = self.linear(x)
return x
inc = inception_net()
inc = inc.to("cuda")
criterion_inception = nn.CrossEntropyLoss()
optimizer_inception = optim.SGD(inc.parameters(), lr=0.01, momentum=0.9)
acti = []
loss_curi = []
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# print(inputs.shape)
# zero the parameter gradients
optimizer_inception.zero_grad()
# forward + backward + optimize
outputs = inc(inputs)
loss = criterion_inception(outputs, labels)
loss.backward()
optimizer_inception.step()
# print statistics
running_loss += loss.item()
if i % 50 == 49: # print every 50 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 50))
ep_lossi.append(running_loss/50) # loss per minibatch
running_loss = 0.0
loss_curi.append(np.mean(ep_lossi)) #loss per epoch
if (np.mean(ep_lossi)<=0.03):
break
# acti.append(actis)
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = inc(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %d %%' % (
    total, 100 * correct / total))
total,correct
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= inc(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
out = np.concatenate(out,axis=0)
pred = np.concatenate(pred,axis=0)
index = np.logical_or(out ==1,out==0)
print(index.shape)
acc = sum(out[index] == pred[index])/sum(index)
print('Accuracy of the network on the %d test images with true labels 0 or 1: %d %%' % (
    sum(index), 100*acc))
sum(index)
import random
random.sample([1,2,3,4,5,6,7,8],5)
# torch.save(inc.state_dict(),"/content/drive/My Drive/model_simple_8000.pkl")
fig = plt.figure()
plt.plot(loss_curi,label="loss_Curve")
plt.xlabel("epochs")
plt.ylabel("training_loss")
plt.legend()
fig.savefig("loss_curve.pdf")
```
# 3M1 Introduction to optimization
Luca Magri (lm547@cam.ac.uk), office ISO-44, Hopkinson Lab.
(With many thanks to Professor Gábor Csányi.)
[Booklist](https://www.vle.cam.ac.uk/mod/book/view.php?id=364091&chapterid=49051):
- Antoniou, A. & Lu, W.-S. Practical Optimization: Algorithms and Engineering Applications, Springer, 2007. Engineering Library: ER.227 and Part IIA Tripos shelves (3M)
- Gill, P.E., Murray, W. & Wright, M.H. Practical Optimization, Academic Press, 1981. Engineering Library: ER.115
- Luenberger, D.G. & Ye, Y. Linear and Non-Linear Programming, Springer, 4th edition 2016. Engineering Library: ER.239.4
How to get these jupyter books:
- Clicking on this link https://notebooks.azure.com/lm547/projects/3M1OptLecNotes-LM will take you to the Microsoft Azure cloud system.
- The jupyter books will be maintained on this link, where you will find the most updated version of the book.
- After you have clicked on "clone", you will be asked to log in and use your Cambridge CrsID.
- You will get your own copy of the jupyter books on your account
## Topics for the seven optimization lectures:
- Introduction to optimization
- Unconstrained optimization
- Line search
- Gradient methods
- Constrained Optimization
- Linear programming: Simplex Algorithm
- Lagrange and Karush-Kuhn-Tucker (KKT) multipliers
- Note that the Karush-Kuhn-Tucker (KKT) multipliers are also known as Kuhn-Tucker (KT) multipliers
- Barrier and penalty methods
- Global optimisation: Simulated annealing
- Principal component analysis
## Lecture 1: List of contents
1. Introduction to optimisation
1. Definitions
1. A simple example of a can
## Nomenclature
- $f(x): \mathbb{R}^N\rightarrow\mathbb{R}$ is a nonlinear function, which we want to minimize
- $x\in\mathbb{R}^N$ is the vector containing the variables $x_1, x_2, \ldots, x_N$
- $\nabla f = \begin{pmatrix}
\frac{\partial f}{\partial x_1},
\frac{\partial f}{\partial x_2},
\ldots,
\frac{\partial f}{\partial x_N}
\end{pmatrix}^T = \frac{\partial f}{\partial x_i}$, $i=1,2,\ldots, N$ is the gradient
- $H=\nabla(\nabla f(x))$ is the Hessian $\left(H_{i,j}=\frac{\partial^2 f}{\partial x_i\partial x_j}\right)$
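These quantities can also be approximated numerically with central finite differences; a minimal sketch (NumPy assumed available, and the test function is an arbitrary illustration):

```python
import numpy as np

def gradient(f, x, h=1e-5):
    """Central-difference approximation of the gradient of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-4):
    """Finite-difference Hessian; symmetric by construction for smooth f."""
    x = np.asarray(x, dtype=float)
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h**2)
    return H

# illustrative function: f(x) = x1^2 + 2*x2^2
f = lambda x: x[0]**2 + 2 * x[1]**2
```

For this quadratic, $\nabla f = (2x_1, 4x_2)^T$ and $H = \mathrm{diag}(2, 4)$, which the finite differences recover to rounding error.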
## Aims
- Optimisation is the mathematical theory and computational practice of making a choice to achieve the best outcome.
- In order to optimise, we must
<!---1. Formalize the problem mathematically--->
1. Model the problem
1. Identify parameters that can be changed
1. Formulate a mathematical criterion for what is "best"
1. Identify potential constraints
1. Select an appropriate algorithm
1. Correctly interpret the result
## Goal of optimisation
- Find the parameters (or independent variables) that minimize/maximize a given quantity...
- ... possibly subject to some restrictions on the allowed range of parameters
## Definitions
- The quantity to be minimized/maximized is called the __objective function__, or __cost function__, or __utility function__, or __loss function__
- This will be usually denoted $f(x)$ in these lectures, unless otherwise specified
- The parameters that can be changed are called __control__ or __decision variables__
- The restrictions on the allowed parameter values are called __constraints__
- Mathematically, the optimization problem is
minimize
$$
\quad f(x), \quad x = (x_1,x_2,x_3,\ldots x_N)^T
$$
subject to
$$
\quad c_i(x) = 0, \quad i=1,\ldots ,m'\quad(\textrm{equality})
$$
and
$$
\quad\qquad c_i(x) \ge 0, \quad i=m'+1,\ldots, m\quad(\textrm{inequality})
$$
- A minimum $x$ of the function $-f(x)$ is a maximum $x$ of the function $f(x)$
- Therefore, maximization problems can be cast as minimization problems
- $f(x)$ is the __objective function__
- $x$ is the column vector of $N$ __control variables__
- $\{c_i(x)\}$ is the set of __constraint functions__
- Inequality constraints that are restrictions on the allowed values of a single control variable are called __bounds__, e.g. $x_{i\textrm{min}} \le x_i \le x_{i\textrm{max}}$
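As a concrete sketch of these definitions, here is a feasibility check for the can problem that appears later in this lecture (the bound values are taken from that example; the tolerance is an arbitrary choice):

```python
import math

V = 345.0  # target volume in cm^3 (illustrative value from the can example)

def constraint_values(r, h):
    """Equality residual c(x)=0 and inequality values c(x)>=0 for the can problem."""
    c_eq = math.pi * r**2 * h - V        # volume equality constraint
    c_ineq = [r - 2.5, 5.0 - r, h]       # bounds 2.5 <= r <= 5 (cm) and h > 0
    return c_eq, c_ineq

def is_feasible(r, h, tol=1e-9):
    c_eq, c_ineq = constraint_values(r, h)
    return abs(c_eq) <= tol and all(c >= -tol for c in c_ineq)
```

A point satisfying all constraints is a feasible solution; violating any single constraint (e.g. a radius above the bound) makes it infeasible.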
## Types of functions
- __Linear__
$$
f(x) = b^T x + c
$$
- __Quadratic__
$$
f(x) = x^T A x + b^T x + c
$$
- __Nonlinear__ typically means neither linear nor quadratic, for example
$$
f(x) = \exp(x_1) + x^T A x + b^T x + c
$$
- Nonlinear optimisation problems are typically more difficult to solve
## Types of constraints
- _Equality_ constraints can sometimes be eliminated by substitution
- _Inequality_ constraints can sometimes be left out and candidate results checked
- We will learn to treat them formally with the KKT multipliers
- In general, constrained optimization is more difficult to solve than unconstrained optimization
## Optimisation methods
First, we define the __optimality criteria__. Then,
- Solve analytically. Equations derived from criteria and solved for variables
- Solve numerically. Search methods:
1. Initial trial point selected
1. A move is proposed. If the objective function is reduced, the new point is retained
1. Repeat until criteria satisfied (minimum is reached) or we run out of resources
Search methods are needed when
- The number of variables is large
- The equations cannot be solved analytically
These are typical situations in engineering problems.
Different algorithms correspond to different ways of updating the variables.
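The three-step search loop above can be sketched in a few lines. This is a toy coordinate-perturbation search, not any particular named algorithm; the step size and iteration budget are arbitrary choices:

```python
import random

def local_search(f, x0, step=0.1, max_iter=5000, seed=0):
    """Toy search: propose a random move, keep it only if it lowers f."""
    rng = random.Random(seed)
    x = list(x0)                # 1. initial trial point
    fx = f(x)
    for _ in range(max_iter):
        trial = [xi + rng.uniform(-step, step) for xi in x]  # 2. propose a move
        f_trial = f(trial)
        if f_trial < fx:        # retain the new point only if the objective decreases
            x, fx = trial, f_trial
    return x, fx                # 3. stop when resources run out

# illustrative objective with minimum at (1, -2)
f = lambda x: (x[0] - 1)**2 + (x[1] + 2)**2
```

Swapping in a different proposal rule (gradient step, line search, annealed random move) yields the different algorithms discussed in later lectures.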
## Example: What are the optimal dimensions of a 330 cc can that minimise the amount of material?
- Assume:
- Cylindrical shape
- 4.5% of "air space"
- Observe: amount of material is proportional to surface area.
- We neglect the thickness of the can

- Two independent variables
- base radius, $r$
- height, $h$
- The radius should be greater than or equal to $25\ \mathrm{mm}$ and smaller than or equal to $50\ \mathrm{mm}$
- The objective function to minimize is the surface area $$A = 2\pi r^2 + 2\pi r h = 2\pi (r^2 + rh)$$
- The equality constraint to impose is the can volume $$V = \pi r^2 h = 330 \times 1.045 \approx 345\ \mathrm{cm}^3$$
- The inequality constraints to impose are
\begin{align}
25 \textrm{ mm} & \le r \le 50 \textrm{ mm}\\
h &> 0
\end{align}
- Ignore inequality constraints for now
- Eliminate $h$ using the equality constraint
$$
A = f(r) = 2\pi\left(r^2 + \frac{V}{\pi r}\right)
$$
## Definitions
- When minimizing $f(x)$ subject to constraints
\begin{align}
S & \;\;\;\;\textbf{is the feasible region}\\
\text{any}\;x\;\;\; \in S & \;\;\;\text{is a }\textbf{feasible solution}\\
\end{align}
for an unconstrained problem, $S$ is infinitely large.
- The __gradient__ is
$$
g(x) = \nabla f(x) = \left[\frac{\partial f}{\partial x_1},\frac{\partial f}{\partial x_2},\ldots,
\frac{\partial f}{\partial x_N}\right]^T
$$
- The __Hessian__ is
$$
H(x) = \nabla(\nabla f(x)) =
\begin{bmatrix}
\frac{\partial^2 f}{\partial x_1^2} & \ldots & \frac{\partial^2 f}{\partial x_1 \partial x_N}\\
\vdots & \ddots & \vdots\\
\frac{\partial^2 f}{\partial x_N \partial x_1} & \ldots & \frac{\partial^2 f}{\partial x_N^2}
\end{bmatrix}
$$
The Hessian is a symmetric matrix by definition.
## Feasible directions
At a feasible point $x$, a direction $d$ is a __feasible direction__ if an arbitrary small move from $x$ in direction $d$ remains feasible
```
from pylab import *
import numpy as np
fig=figure(figsize=(12,8))
x = np.linspace(0,1, 50)
plot(x, np.sqrt(1-x**2), 'b')
text(0.2,0.6, 'infeasible space')
a = 1/np.sqrt(2);
arrow(a, a, 0.2, 0, width=0.01)
arrow(a, a, 0.16, 0.16, width=0.01)
arrow(a, a, 0, 0.2, width=0.01)
arrow(a, a, 0.16, -0.16, width=0.01)
arrow(a, a, -0.16, 0.16, width=0.01)
text(0.8, 1.0, 'feasible directions')
text(0.0, 1.05, 'constraint boundary', color='b')
axis('equal')
axis((0.0, 1.3, 0.0, 1.2))
show()
```
## Stationary point
If $f(x)$ is smooth so that $\nabla f(x)$ exists, then $x^*$ is a __stationary point__ of $f$ if
$$
\nabla f(x^*) = 0
$$
- Minima, maxima and saddle points are stationary points
## Types of minima
\begin{align}
\textbf{Global minimum }\quad & f(x^*) \le f(y) \qquad \forall\, y \in S \\
\\
\textbf{Strong global minimum }\quad & f(x^*) \lt f(y) \qquad \forall\, y \in S, y \neq x^* \\
\\
\textbf{Weak local minimum }\quad & f(x^*) \le f(y) \qquad \forall\, y = x^*+\varepsilon d \in S, y \neq x^* \\
\\
\textbf{Strong local minimum }\quad & f(x^*) < f(y) \qquad \forall\, y = x^*+\varepsilon d \in S, y \neq x^*
\end{align}
- Local maxima and minima are local extrema
- If we say "local minimum / maximum" we will refer to an _interior_ "local minimum / maximum", unless otherwise specified
```
x = np.linspace(-1.5,1.5,100)
figure(figsize=(12,8))
plot(x, 1.45*x**4 + sin(6*x), 'b') # Note that the point labelled "weak local minimum" is not mathematically a weak local minimum for this function (show it!). This function is used only for visualization purposes.
axis((-1.6, 1.6,-2.5, 8))
# pass the annotation text positionally: the old s= keyword was removed in matplotlib 3.3+
annotate("global minimum", xy=(-0.25,-1.1), xytext=(-1.2, -2), arrowprops=dict(arrowstyle='->'))
annotate("weak local minimum", xy=(-1.1,1.8), xytext=(-1.2, 4), arrowprops=dict(arrowstyle='->'))
annotate("strong local minima", xy=(0.7,-0.4), xytext=(0.1, 3), arrowprops=dict(arrowstyle='->'))
annotate("", xy=(-0.25,-0.9), xytext=(0.25, 2.9), arrowprops=dict(arrowstyle='->'))
show()
```
## With constraints, a global minimum might not be a stationary point
```
import matplotlib.patches as patches
x = np.linspace(-1.5,1.5,100)
figure(figsize=(12,8))
plot(x, 1.45*x**4 + sin(6*x), 'b')
axis((-1.6, 1.6,-2.5, 8))
annotate("global minimum (not stationary)", xy=(-0.09,-0.7), xytext=(0.2, -2), arrowprops=dict(arrowstyle='->'))
annotate("strong local minimum", xy=(0.7,-0.4), xytext=(0.1, 3), arrowprops=dict(arrowstyle='->'))
gca().add_patch(patches.Rectangle((-1.6, -2.5), 1.5, 10.5, hatch='/',fill=False))
text(0.0, 7, 'feasible region')
text(-0.8, 7, 'infeasible region')
annotate("constraint", xy=(-0.09, 4.5), xytext=(0.2, 6), arrowprops=dict(arrowstyle='->'))
show()
```
## Unimodality
- A function is __unimodal__ if it has a single extremum
- It is __strongly unimodal__ if along a straight line from every point to the extremum the gradient is negative (for a minimum) or positive (for a maximum)
- Example of a unimodal function: Rosenbrock's function
$$ f(x_1, x_2) = 100(x_2-x_1^2)^2 + (1-x_1)^2$$
```
x,y = np.meshgrid(np.linspace(-1.7,1.7,100), np.linspace(-0.8,3,100))
R = (1-x)**2 + 100*(y-x**2)**2  # coefficient 100 to match the formula above
from mpl_toolkits.mplot3d import Axes3D
fig = figure(figsize=(16,8))
fig.suptitle('Rosenbrock function')
ax = fig.add_subplot(1,2,1, projection='3d')
ax.plot_surface(x, y, R, rstride=5, cstride=5, cmap=cm.jet, linewidth=0)
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.view_init(elev=90, azim=-90)
ax = fig.add_subplot(1,2,2)
ax.contour(x, y, R, np.logspace(-1,2, 8))
show()
```
- Example of a strongly unimodal function
$$ f(x_1, x_2)=x_1^2 + x_2^2 - 0.2x_1x_2$$
```
x, y = np.meshgrid(np.linspace(-1,1,50), np.linspace(-1,1,50))
f = x**2+y**2 - 0.2*x*y
fig = figure(figsize=(16,8))
fig.suptitle('Quadratic function')
ax = fig.add_subplot(1,2,1, projection='3d')
ax.plot_surface(x, y, f, rstride=1, cstride=1, cmap=cm.jet, linewidth=0)
ax.set_xlabel('x1')
ax.set_ylabel('x2')
ax.view_init(elev=0, azim=-120)
ax = fig.add_subplot(1,2,2)
contour(x, y, f, linspace(0, 0.8, 10))
plot([0], [0], 'bx', markersize=20, markeredgewidth=3)
annotate("global minimum", xy=(0,0), xytext=(0,0.5), arrowprops=dict(arrowstyle='->'))
show()
```
## Convex functions
- A function is convex if its graph at any point $y$ is never below the tangent at any other point $x$
- Mathematically
$$ f(y) \ge f(x) + \nabla f(x)^T (y-x)$$
- It is strictly convex if instead of $\ge$ we use $>$
- Convex functions have a _global_ minimum
- Convex functions are unimodal, but not all unimodal functions are convex
- Example: The negative Gaussian distribution
- Non-convex functions have often multiple local minima
- Finding the global minimum is hard
- Many, but not all, engineering problems are non-convex
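The tangent-line inequality above can be spot-checked numerically. A minimal sketch for $f(x)=x^2$ (convex) and $g(x)=x^3$ (not convex on the sampled interval); the sample grid is an arbitrary choice:

```python
def convexity_holds(f, fprime, xs, ys):
    """Check f(y) >= f(x) + f'(x)*(y - x) over all sampled point pairs."""
    return all(f(y) >= f(x) + fprime(x) * (y - x) - 1e-12
               for x in xs for y in ys)

f = lambda x: x**2
fprime = lambda x: 2 * x
points = [i / 10 for i in range(-30, 31)]   # grid on [-3, 3]
```

For $g(x)=x^3$ the inequality fails (e.g. at $x=0$, $y=-1$: $-1 \not\ge 0$), confirming it is not convex.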
```
from mpl_toolkits.mplot3d import Axes3D
fig = figure(figsize=(16,16))
fig.suptitle('The negative Gaussian is unimodal but not convex')
x, y = np.meshgrid(np.linspace(-7,7,50), np.linspace(-7,7,50))
ax = fig.add_subplot(1,2,1, projection='3d')
ax.plot_surface(x, y, -exp(0.1*(-x**2-y**2)), rstride=1, cstride=1, cmap=cm.jet, linewidth=0)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.view_init(elev=0, azim=-120)
show()
```
## Necessary condition for a local minimum
A __necessary__ condition for $x^*$ to be a _local minimum_ of $f(x)$ in $S$ is
$$
\nabla f(x^*) \cdot d \ge 0
$$
for all feasible directions $d$.
If $x^*$ is an __interior point__, i.e., it is not at the boundary of the feasible region:
- because all directions are feasible, $x^*$ must be a stationary point, i.e. $$\nabla f(x^*) = 0$$
- The condition $\nabla f(x^*) = 0$ is __necessary but not sufficient__ for $x^*$ to be a minimum
- This is because the same condition holds for maxima and saddle points
## Sufficient condition for a local minimum (univariate)
For the _univariate_ case (i.e., $x\in\mathbb{R})$, the __Taylor expansion__ around $x^*$ is
\begin{align}
f(x) &= f(x^*) + (x-x^*) f'(x^*) + \frac12 (x-x^*)^2 f''(x^*) + h.o.t.\\
f(x) - f(x^*) &= (x-x^*) f'(x^*) + \frac12 (x-x^*)^2 f''(x^*) + h.o.t.\\
\end{align}
If $x^*$ is an interior stationary point, $f'(x^*) = 0$, so we have a strong local minimum if
$$
f(x) - f(x^*) \approx \frac12 (x-x^*)^2 f''(x^*)\gt 0
$$
$$\boxed{\large f''(x^*) \gt 0}\strut$$
- At a stationary point $x^*$, $f''(x^*)>0$ is a sufficient condition of strong local minimum
- At a stationary point, $f''(x^*)\ge0$ is a necessary condition for a local minimum (it is necessary that $f''(x^*)$ be non-negative: if it is negative, $x^*$ cannot be a minimum.)
- If $f''(x^*)=0$, we need to analyse the higher order terms
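The degenerate case $f''(x^*)=0$ can be illustrated with $f(x)=x^4$, where the second-derivative test is inconclusive at $x^*=0$ even though $x^*$ is a strong minimum. A finite-difference sketch:

```python
def second_derivative(f, x, h=1e-3):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f = lambda x: x**4

# f''(0) ~ 0, so the second-derivative test is inconclusive...
curvature_at_0 = second_derivative(f, 0.0)

# ...but sampling shows f increases in both directions, so x* = 0 is a minimum
is_minimum = all(f(eps) > f(0.0) for eps in (-0.1, -0.01, 0.01, 0.1))
```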
## Sufficient condition for a local minimum (multivariate)
For the _multivariate case_ (i.e., $x\in\mathbb{R}^N$ with $N>1$), the Taylor series is
$$
f(x) = f(x^*) + (x-x^*)^T \nabla f(x^*) + \frac12 (x-x^*)^T H(x^*) (x-x^*) + h.o.t.
$$
If $x^*$ is an interior stationary point, $\nabla f(x^*) = 0$. Let $d = x-x^*$. Then $x^*$ is a strong local minimum if
$$
\boxed{\large d^T H(x^*) d > 0 \qquad \forall d \neq 0\strut}
$$
- At a stationary point $x^*$, $\large d^T H(x^*) d > 0$ (the Hessian is __positive definite__) is a sufficient condition of strong local minimum.
- At a stationary point $x^*$, $\large d^T H(x^*) d \ge 0$ (the Hessian is __positive semidefinite__) is a necessary condition for a local minimum.
- Test for a $\bf{2\times2}$ matrix: it is positive definite if $$H_{11}>0 \;\;\;\textrm{and}\;\;\; \det(H)>0$$
- If $H(x)$ is positive definite everywhere (e.g. for a quadratic function with positive definite $A$), $f$ is a __convex function__, and therefore the minimum is unique and a _global minimum_.
- A matrix is positive definite if and only if all its __eigenvalues are positive__.
- If $H(x^*) = 0$, higher order terms determine whether $x^*$ is a minimum or not.
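As a sketch of the eigenvalue test, the quadratic $f(x_1,x_2)=x_1^2+x_2^2-0.2x_1x_2$ used earlier has the constant Hessian below, and both checks agree:

```python
import numpy as np

# Hessian of f(x1, x2) = x1^2 + x2^2 - 0.2*x1*x2
H = np.array([[ 2.0, -0.2],
              [-0.2,  2.0]])

eigenvalues = np.linalg.eigvalsh(H)        # symmetric matrix -> real eigenvalues, ascending
positive_definite = bool(np.all(eigenvalues > 0))

# 2x2 shortcut test from the notes: H11 > 0 and det(H) > 0
shortcut = H[0, 0] > 0 and np.linalg.det(H) > 0
```

Both eigenvalues ($1.8$ and $2.2$) are positive, so the Hessian is positive definite and the stationary point at the origin is a strong (and here global) minimum.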
## Back to the can optimization

Objective function: $$f(r) = r^2 + \frac{V}{\pi r}$$
Necessary condition for local minimum: $$0 = \frac{df}{dr} = 2r-\frac{V}{\pi r^2}$$
Candidate solution: $$ r^* = \left(\frac{V}{2\pi}\right)^{1/3}$$
- For $V=345\ \mathrm{cm}^3$, $r^* \approx 38\text{ mm}$
- This satisfies the constraints $25\text{ mm} \le r^* \le 50\text{ mm}$, hence, it is feasible.
- We need to check the sufficient condition $$ \frac{d^2 f}{d r^2} = 2+\frac{2V}{\pi r^3} > 0,$$ which holds for every $r > 0$
- Hence, $r^*$ is a minimum, and $h^* = V/(\pi r^2) = 76\text{ mm}$.
<!--- - Real cans have $r=33\text{ mm}$.--->
- The upcoming optimization problems will be less straightforward!
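The closed-form answer above can be verified numerically. A pure-Python sketch (lengths in cm, as in the notes):

```python
import math

V = 345.0                                   # can volume in cm^3
r_star = (V / (2 * math.pi)) ** (1 / 3)     # candidate solution r* = (V / 2pi)^(1/3)
h_star = V / (math.pi * r_star**2)          # recover the height from the volume constraint

# check the stationarity condition df/dr = 2r - V/(pi r^2) = 0
residual = 2 * r_star - V / (math.pi * r_star**2)
```

A nice by-product of the algebra: at the optimum $h^* = 2r^*$, i.e. the material-minimising can is exactly as tall as it is wide.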
# Taxonomy of time series learning tasks
* What is machine learning with time series?
* How is it different from standard machine learning?
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
```
## Learning objectives
You'll learn about
* different time series learning tasks;
* how to tell them apart.
---
## Learning tasks
#### 1. Generative setting:
* We observe realisations of an assumed generative process, represented by random variables.
* What is the assumed statistical dependence between the observations?
* What is the assumed relation we want to estimate?
* We usually have to assume some consistency of the generative process over time (in forecasting, formally known as stationarity). Is the process likely to change? Can the deployment of our model change it?
#### 2. Learning:
* We usually use an algorithm to fit a prediction function.
* What is the input of the algorithm?
* What is the input and output of the fitted prediction function?
* What data is available during training and during prediction/deployment?
* Are we in an online learning setting?
#### 3. Evaluation:
* We estimate performance on unseen data.
* What do we mean by unseen data? For example, in forecasting, it's future values of the same instance. In classification, it's data from unseen instances.
* The test set should be representative of the values we are trying to predict in deployment.
---
## Single series
Time series comes in many shapes and forms.
As an example, consider that we observe a chemical process in a [bioreactor](https://en.wikipedia.org/wiki/Bioreactor).
<img src="../images/bioreactor.png" width=200 />
We may observe the repeated sensor readings for the pressure over time from a single bioreactor run.
```
from utils import load_pressure
pressure = load_pressure() # single pandas.Series
fig, ax = plt.subplots(1, figsize=(16, 4))
pressure.plot(ax=ax)
ax.set(ylabel="Pressure", xlabel="Time");
```
Suppose you only have a single time series, what are some real-world problems that you encounter and may want to solve with machine learning?
> * Time series annotation (e.g. outlier/anomaly detection, segmentation)
> * Forecasting
---
## Multiple time series
You may observe multiple time series. There are two ways in which this can happen:
### Multivariate time series
Here we observe two or more variables over time, with variables representing *different kinds of measurements* within a single *experimental unit* (e.g. readings from different sensors of a single chemical process).
```
from utils import load_temperature
temperature = load_temperature() # another pandas.Series
fig, (ax0, ax1) = plt.subplots(nrows=2, figsize=(16, 8), sharex=True)
pressure.plot(ax=ax0)
temperature.plot(ax=ax1)
ax0.set(ylabel="Pressure")
ax1.set(ylabel="Temperature", xlabel="Time");
```
Suppose you have multivariate time series, what are some real-world problems that you encounter and may want to solve with machine learning?
> * Time series annotation with additional variables
> * Forecasting with exogenous variables
> * Vector forecasting (forecasting multiple series at the same time)
---
### Panel data
Sometimes also called longitudinal data, here we observe multiple independent instances of the *same kind(s) of measurements* over time, e.g. sensor readings from multiple separate chemical processes).
```
from utils import load_experiments
experiments = load_experiments(variables="pressure") # pandas.DataFrame
experiments.head()
fig, ax = plt.subplots(1, figsize=(16, 4))
experiments.sample(10).T.plot(ax=ax)
ax.set(ylabel="Pressure", xlabel="Time");
```
Panel data may be multivariate (i.e. i.i.d. instances of multivariate time series). In this case, the different instances are i.i.d., but the univariate component series within an instance are not.
Panel data may also be mixed with time-constant variables.
Suppose you have panel data, what are some real-world problems that you encounter and may want to solve with machine learning?
> * Supervised time series annotation
> * Panel/supervised forecasting
> * "Series-as-features" learning tasks, i.e. time series classification/regression/clustering
---
## Time series data and statistical dependence
* An intrinsic characteristic of time series is that observations statistically depend on past observations, so they don't naturally fit into the standard machine learning setting where we assume to have i.i.d. instances.
* In multivariate data, it is implausible to assume that the different univariate component time series are independent and identically distributed (i.i.d.).
* In panel data, it is plausible to assume that the different instances are i.i.d., while time series observations within a given instance may still depend on past observations.
---
## Why does it matter?
* Different learning tasks require different types of algorithms to solve them (e.g. time series classifiers or forecasters).
* If you misdiagnose the task associated with the real-world problem you're trying to solve, your algorithm may not work in deployment.
* If you misdiagnose the task, your performance estimates may be unreliable. Performance estimates tell us how well our model will perform when we deploy it. They allow us to make informed choices about which model to deploy. But our estimates are only reliable, if we properly take into account the statistical dependence of the data we use to obtain our estimates.
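One practical consequence of this temporal dependence: forecasting performance must be estimated on a *temporal* split, never a shuffled one, so that the test set lies strictly in the future of the training set. A minimal sketch (pure Python; the holdout size is an arbitrary choice):

```python
def temporal_train_test_split(y, test_size):
    """Hold out the last test_size observations as the test set."""
    return y[:-test_size], y[-test_size:]

series = list(range(10))                    # stand-in for an observed time series
train, test = temporal_train_test_split(series, test_size=3)
```

A random shuffle would leak future values into training and make the performance estimate unreliable, which is exactly the misdiagnosis described above.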
---
## More complications
In many real-world application we find:
* **time-heterogeneous** data where instances/variables may not share a common time index (e.g. unequal length time series data, timestamped data)
* **type-heterogeneous** data where different variables have different types (e.g. categoricals, floats)
#### How to represent this kind of data for the purpose of machine learning?
* Usually we have instances in rows, variables in columns, but how to fit in time points?
* Which format (wide format, long format, etc)?
* Which data container to use (numpy, pandas, xarray, custom data container, etc)?
For more details, see our [wiki entry](https://github.com/alan-turing-institute/sktime/wiki/Time-series-data-container).
---
## Summary
We've discussed different time series learning tasks and how to tell them apart.
* **Time series annotation** (anomaly detection, change point detection, segmentation)
* **Time series classification/regression/clustering**
* **Forecasting** (classical, supervised/panel, vector)
Variations:
* univariate or multivariate
* online learning (new time points and/or instances)
For more mathematical descriptions of these tasks, see our [paper](http://learningsys.org/neurips19/assets/papers/sktime_ml_systems_neurips2019.pdf).
---
## Reduction
While these tasks are distinct, they are also related.
Reduction is essentially the idea that an algorithm for one task can be adapted to help solve another task.
A classical example of reduction in supervised learning is one-vs-all classification, reducing $k$-way multi-class classification to $k$ binary classification tasks.
Reduction approaches are very popular in machine learning with time series.
### Example: from forecasting to standard regression
For time series, a common example is to reduce classical forecasting to regression, which is usually done as follows: we first split the training series into fixed-length windows and stack them on top of each other. This gives us a tabular matrix of lagged values, so that we can use any standard regression algorithm from scikit-learn. Once we have a fitted regression algorithm, we can generate forecasts recursively.

More on this in the tutorial notebook on forecasting.
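The windowing step described above can be sketched with NumPy. Note this is only an illustration: sktime provides this reduction via its own composers, and the window length here is an arbitrary choice:

```python
import numpy as np

def make_reduction_table(y, window_length):
    """Stack lagged windows of y into a (samples, window_length) matrix X with targets y_out."""
    y = np.asarray(y)
    X = np.stack([y[i:i + window_length]
                  for i in range(len(y) - window_length)])
    y_out = y[window_length:]               # each target is the value following its window
    return X, y_out

X, y_out = make_reduction_table([1, 2, 3, 4, 5], window_length=3)
```

Any tabular regressor can now be fitted on `(X, y_out)`; recursive forecasting then feeds each prediction back in as the newest lag.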
### More reductions
There are many more reduction relations, here's an overview:

For more details, see our [paper](http://learningsys.org/neurips19/assets/papers/sktime_ml_systems_neurips2019.pdf).
## References
* Löning, Markus, Anthony Bagnall, Sajaysurya Ganesh, Viktor Kazakov, Jason Lines, and Franz J. Király (2019), "sktime: A Unified Interface for Machine Learning with Time Series." Workshop on Systems for ML at NeurIPS 2019.
* The data that we're using in this notebook is a small extract from the Tennessee Eastman Process Simulation Data for Anomaly Detection. You can download the full data set [here](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/6C3JR1).
# Lesson 5: the trouble with slope area
*This lesson has been written by Simon M. Mudd at the University of Edinburgh*
*Last update 30/09/2021*
In the past few lessons, we have learned:
* Channels tend to have a higher gradient near their headwaters (i.e., parts of the network with low drainage area).
* If the landscape is on uniform bedrock, and has an uplift rate that has remained steady for a long time, the relationship between slope and area looks like $S = k_s A^{-\theta}$.
* If the landscape is perturbed in some way, channel steepness will diverge from this idealised shape.
* We might look for parts of the channel experiencing some sort of perturbation (i.e., changing lithology, changing uplift in space and time) by looking for parts of the landscape that are steeper than others.
* Channel gradient doesn't work since it is changing along the channel even if there are no perturbations. So instead we look for changes in the "channel steepness index", or $k_s$ from the equation above. Areas of elevated $k_s$ are also known as knickpoints.
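Before turning to real data, the power law $S = k_s A^{-\theta}$ can be fitted by least squares in log-log space, since $\log S = \log k_s - \theta \log A$ is a straight line. A minimal sketch on synthetic, noise-free data (the values of $k_s$ and $\theta$ are chosen purely for illustration):

```python
import numpy as np

theta_true, ks_true = 0.45, 20.0
A = np.logspace(4, 8, 200)                 # drainage area (m^2)
S = ks_true * A**(-theta_true)             # idealised slope-area relation

# straight line in log-log space: log10(S) = log10(ks) - theta * log10(A)
slope, intercept = np.polyfit(np.log10(A), np.log10(S), 1)
theta_fit = -slope
ks_fit = 10**intercept
```

On real, noisy slope-area data this regression is far less well behaved, which is exactly "the trouble with slope area" this lesson is about.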
## Using real data
Okay, so let's try to fit some real data and look for changes to the channel steepness. I prepared some data previously, which we will look at using `geopandas`.
```
import pandas as pd
import geopandas as gpd
```
First read a csv that I made from Xi'an (see lesson_01) that contains slope and area data:
```
df = pd.read_csv("Xian_SAvertical.csv")
```
Just to show you what is in this file, I will print out the data elements:
```
df.head()
```
Each point has a latitude and longitude, meaning it is spatial data. So we can load it as a `geopandas` dataframe: that is a kind of python object for holding geographic data. I wrote this data, so I happen to know its coordinate system: the global WGS84 geographic coordinate system. All coordinate systems have an EPSG code; this system's code is `epsg=4326`. I set that system after loading the data.
**A slight note on coordinate systems**: coordinate systems might seem an arcane topic, but if you are doing something in a GIS or manipulating spatial data and something goes wrong, a messed-up coordinate system is frequently to blame. If I had to guess, I would say between 1/3 and 1/2 of the problems students bring to me are solved by fixing the coordinate system. A very brief overview can be found here: https://lsdtopotools.github.io/LSDTT_documentation/LSDTT_introduction_to_geospatial_data.html#_projections_and_transformations
Okay, now that I have said that, let's go on to importing some data:
```
# This changes a pandas dataframe (reminder, pandas is a little bit like the excel of python)
# to a geopandas dataframe (which means the data has spatial information)
gdf = gpd.GeoDataFrame(
df, geometry=gpd.points_from_xy(df.longitude, df.latitude))
gdf = gdf.set_crs(epsg=4326)
gdf.head()
```
Just to see where these are, we can use some map tiles. Here comes another EPSG code! This time we convert to something called Web Mercator, which all of the map tiling services (e.g., Google Maps) use. It is `epsg:3857`. But first we need to know the bounds of the data.
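Incidentally, the Web Mercator conversion that `to_crs(epsg=3857)` performs later in this lesson can be sketched with the standard spherical formula. This is a simplified stand-alone version for intuition, not what `geopandas` actually calls internally:

```python
import math

def lonlat_to_web_mercator(lon, lat):
    """Convert WGS84 degrees (EPSG:4326) to Web Mercator metres (EPSG:3857)
    using the standard spherical formula."""
    R = 6378137.0  # Earth radius used by the Web Mercator projection
    x = R * math.radians(lon)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
    return x, y

print(lonlat_to_web_mercator(108.9, 34.3))  # roughly Xi'an
```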
```
bounds = gdf.total_bounds
print(bounds)
```
Now we are going to plot these data, to show you where they are.
They are in a few basins near Xian, China.
```
import matplotlib.pyplot as plt
from matplotlib.transforms import offset_copy
import cartopy.crs as ccrs
import cartopy.io.img_tiles as cimgt
stamen_terrain = cimgt.Stamen('terrain-background')
fig = plt.figure()
# Create a GeoAxes in the tile's projection.
ax = fig.add_subplot(1, 1, 1, projection=stamen_terrain.crs)
# Limit the extent of the map to a small longitude/latitude range.
ax.set_extent([bounds[0]-0.5, bounds[2]+0.5, bounds[1]-0.25, bounds[3]+0.25], crs=ccrs.Geodetic())
# Add the Stamen data at zoom level 8.
ax.add_image(stamen_terrain, 8)
# Add the channel data
gdf = gdf.to_crs(epsg=3857) # We have to convert the data to the same
# coordinate system as the map tiles. This EPSG code is used for
# all map tiles (like Google Maps).
gdf.plot(ax=ax, markersize=0.5, column='chi', zorder=10,cmap="jet")
```
Now let's plot the slope-area data:
```
# First lets isolate just one of these basins. They go from 0 to 12
gdf_b1 = gdf[(gdf['basin_key'] == 12)]
# Now make the slope area plot
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
plt.scatter(gdf_b1.drainage_area,gdf_b1.slope,marker="+")
plt.xlabel(r"Drainage area ($m^2$)")
plt.ylabel("Gradient (m/m)")
ax.set_xscale('log')
ax.set_yscale('log')
fig.show()
```
__Task:__ Change the basin number. How noisy are the slope area plots?
**What is happening?**
Well, first of all, the gaps in drainage area occur because there are tributary junctions where the drainage area takes a step change. Then, between junctions, drainage area doesn't change much, but channels are rough: you get some boulders making steps, or a little bit of slack water behind a log, etc. This means that the local gradient can change a lot between tributary junctions, which shows up as a bunch of different gradients at apparently the same drainage area.
These factors combine to make this a very noisy plot. When you see papers with slope-area data, it has been through some smoothing and binning routine, so you tend not to see figures this messy in scientific papers. But I assure you **all** slope-area data is this messy before you start to bin and smooth it.
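As a sketch of what such a log-binning routine might look like (a hypothetical helper, not the routine used in any particular paper):

```python
import numpy as np
import pandas as pd

def log_bin_slope_area(area, slope, bins_per_decade=4):
    """Average noisy slope-area points inside logarithmically spaced
    drainage-area bins -- one common way to smooth data like this."""
    df = pd.DataFrame({"area": area, "slope": slope})
    log_a = np.log10(df["area"])
    edges = np.arange(np.floor(log_a.min()), np.ceil(log_a.max()) + 1e-9,
                      1.0 / bins_per_decade)
    df["bin"] = pd.cut(log_a, edges)
    binned = df.groupby("bin", observed=True).agg(
        area=("area", "median"), slope=("slope", "mean"))
    return binned.dropna().reset_index(drop=True)
```

The binned medians and means can then be plotted on the same log-log axes as the raw scatter to see the underlying trend.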
In the next lesson we will explore a way to make these data less noisy.
```
# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
```
# 1. Overview
In this notebook, we provide a tutorial on how to run inference with a HugeCTR-trained WDL model, and how to collect inference benchmarks with the Triton performance analyzer tool.
1. Overview
2. Generate the WDL deployment Configuration
3. Load Models on the Triton Server
4. Prepare Inference Input Data
5. Inference Benchmark with the Triton Performance Tool
# 2. Generate the WDL Deployment Configuration
## 2.1 Generate related model folders
```
# define some data folder to store the model related files
# Standard Libraries
import os
from time import time
import re
import shutil
import glob
import warnings
BASE_DIR = "/wdl_infer"
model_folder = os.path.join(BASE_DIR, "model")
wdl_model_repo= os.path.join(model_folder, "wdl")
wdl_version =os.path.join(wdl_model_repo, "1")
if os.path.isdir(model_folder):
shutil.rmtree(model_folder)
os.makedirs(model_folder)
if os.path.isdir(wdl_model_repo):
shutil.rmtree(wdl_model_repo)
os.makedirs(wdl_model_repo)
if os.path.isdir(wdl_version):
shutil.rmtree(wdl_version)
os.makedirs(wdl_version)
```
## 2.2 Copy WDL model files and configuration to model repository
```
!cp -r /wdl_train/wdl0_sparse_20000.model $wdl_version/
!cp -r /wdl_train/wdl1_sparse_20000.model $wdl_version/
!cp /wdl_train/wdl_dense_20000.model $wdl_version/
!cp /wdl_train/wdl.json $wdl_version/
!ls -l $wdl_version
```
## 2.3 Generate the Triton configuration for deploying WDL
```
%%writefile $wdl_model_repo/config.pbtxt
name: "wdl"
backend: "hugectr"
max_batch_size:64,
input [
{
name: "DES"
data_type: TYPE_FP32
dims: [ -1 ]
},
{
name: "CATCOLUMN"
data_type: TYPE_INT64
dims: [ -1 ]
},
{
name: "ROWINDEX"
data_type: TYPE_INT32
dims: [ -1 ]
}
]
output [
{
name: "OUTPUT0"
data_type: TYPE_FP32
dims: [ -1 ]
}
]
instance_group [
{
count: 1
kind : KIND_GPU
gpus:[2]
}
]
parameters [
{
key: "config"
value: { string_value: "/wdl_infer/model/wdl/1/wdl.json" }
},
{
key: "gpucache"
value: { string_value: "true" }
},
{
key: "hit_rate_threshold"
value: { string_value: "0.8" }
},
{
key: "gpucacheper"
value: { string_value: "0.5" }
},
{
key: "label_dim"
value: { string_value: "1" }
},
{
key: "slots"
value: { string_value: "28" }
},
{
key: "cat_feature_num"
value: { string_value: "28" }
},
{
key: "des_feature_num"
value: { string_value: "13" }
},
{
key: "max_nnz"
value: { string_value: "2" }
},
{
key: "embedding_vector_size"
value: { string_value: "128" }
},
{
key: "embeddingkey_long_type"
value: { string_value: "true" }
}
]
```
## 2.4 Generate the HugeCTR backend parameter server configuration for deploying WDL
```
%%writefile /wdl_infer/model/ps.json
{
"supportlonglong":true,
"models":[
{
"model":"wdl",
"sparse_files":["/wdl_infer/model/wdl/1/wdl0_sparse_20000.model", "/wdl_infer/model/wdl/1/wdl1_sparse_20000.model"],
"dense_file":"/wdl_infer/model/wdl/1/wdl_dense_20000.model",
"network_file":"/wdl_infer/model/wdl/1/wdl.json"
}
]
}
!ls -l $wdl_model_repo
!ls -l $wdl_version
```
# 3. Deploy WDL on Triton Server
At this stage, you should have already launched the Triton Inference Server. In this tutorial, we deploy the Wide&Deep model on a single A100 (32GB). Launch the container with:

```
docker run --gpus=all -it -v /wdl_infer/:/wdl_infer -v /wdl_train/:/wdl_train --net=host nvcr.io/nvidia/merlin/merlin-inference:0.7 /bin/bash
```

After you enter the container, you can launch the Triton server with the command below:

```
tritonserver --model-repository=/wdl_infer/model/ --load-model=wdl \
  --model-control-mode=explicit \
  --backend-directory=/usr/local/hugectr/backends \
  --backend-config=hugectr,ps=/wdl_infer/model/ps.json
```

Note: the model repository path is /wdl_infer/model/, and the path for the parameter server configuration file is /wdl_infer/model/ps.json.
## 3.1 Check the Triton server status to verify the Wide&Deep model deployed successfully
```
!curl -v localhost:8000/v2/health/ready
```
# 4. Prepare Inference Request
## 4.1 Read validation data
```
!ls -l /wdl_train/val
import pandas as pd
df = pd.read_parquet("/wdl_train/val/0.110d099942694a5cbf1b71eb73e10f27.parquet")
df.head()
df.head(10).to_csv('/wdl_infer/infer_test.txt', sep='\t', index=False,header=True)
```
## 4.2 Follow the Triton requirements to generate inference requests
```
%%writefile /wdl_infer/wdl2predict.py
from tritonclient.utils import *
import tritonclient.http as httpclient
import numpy as np
import pandas as pd
import sys
model_name = 'wdl'
CATEGORICAL_COLUMNS=["C" + str(x) for x in range(1, 27)]+["C1_C2","C3_C4"]
CONTINUOUS_COLUMNS=["I" + str(x) for x in range(1, 14)]
LABEL_COLUMNS = ['label']
emb_size_array = [249058, 19561, 14212, 6890, 18592, 4, 6356, 1254, 52, 226170, 80508, 72308, 11, 2169, 7597, 61, 4, 923, 15, 249619, 168974, 243480, 68212, 9169, 75, 34, 278018, 415262]
shift = np.insert(np.cumsum(emb_size_array), 0, 0)[:-1]
test_df=pd.read_csv("/wdl_infer/infer_test.txt",sep='\t')
with httpclient.InferenceServerClient("localhost:8000") as client:
dense_features = np.array([list(test_df[CONTINUOUS_COLUMNS].values.flatten())],dtype='float32')
embedding_columns = np.array([list((test_df[CATEGORICAL_COLUMNS]+shift).values.flatten())],dtype='int64')
row_ptrs = np.array([list(range(0,21))+list(range(0,261))],dtype='int32')
inputs = [
httpclient.InferInput("DES", dense_features.shape,
np_to_triton_dtype(dense_features.dtype)),
httpclient.InferInput("CATCOLUMN", embedding_columns.shape,
np_to_triton_dtype(embedding_columns.dtype)),
httpclient.InferInput("ROWINDEX", row_ptrs.shape,
np_to_triton_dtype(row_ptrs.dtype)),
]
inputs[0].set_data_from_numpy(dense_features)
inputs[1].set_data_from_numpy(embedding_columns)
inputs[2].set_data_from_numpy(row_ptrs)
outputs = [
httpclient.InferRequestedOutput("OUTPUT0")
]
response = client.infer(model_name,
inputs,
request_id=str(1),
outputs=outputs)
result = response.get_response()
print(result)
print("Prediction Result:")
print(response.as_numpy("OUTPUT0"))
```
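To see why the script above shifts the categorical keys by the cumulative embedding-table sizes, here is a tiny stand-alone sketch with made-up slot cardinalities: each slot's local IDs are offset so that all slots share one global key space.

```python
import numpy as np

# Hypothetical three-slot example with made-up cardinalities
emb_size_array = np.array([5, 3, 4])     # rows in each slot's embedding table
shift = np.insert(np.cumsum(emb_size_array), 0, 0)[:-1]
print(shift)                              # [0 5 8]

# One sample with one categorical key per slot, indexed locally per table
local_keys = np.array([2, 1, 3])
global_keys = local_keys + shift          # now unique across all slots
print(global_keys)
```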
## 4.3 Send requests to Triton Server
```
!python3 ./wdl2predict.py
```
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import sys
sys.path.append('..')
import pdb, sys, inspect
from enum import Enum
import pandas as pd
import torch
from transformers import *
from fastai2.text.all import *
torch.cuda.set_device(1)
print(f'Using GPU #{torch.cuda.current_device()}: {torch.cuda.get_device_name()}')
MODEL_FOR_QUESTION_ANSWERING_MAPPING
MODEL_FOR_TOKEN_CLASSIFICATION_MAPPING
MODEL_MAPPING
MODEL_CONFIG_CLASSES = list(MODEL_FOR_QUESTION_ANSWERING_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
ALL_MODELS = sum((tuple(conf.pretrained_config_archive_map.keys()) for conf in MODEL_CONFIG_CLASSES), (),)
MODEL_CONFIG_CLASSES, MODEL_TYPES, #ALL_MODELS
```
## Utility Methods
```
# converts string representation to class
def str_to_class(classname):
return getattr(sys.modules[__name__], classname)
```
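A quick self-contained check of how this lookup behaves, using a stand-in class defined in the same module:

```python
import sys

class Greeter:
    """A stand-in class to demonstrate the lookup."""
    def hello(self):
        return "hi"

def str_to_class(classname):
    # Resolve a class by name in this module's namespace,
    # mirroring the notebook's utility
    return getattr(sys.modules[__name__], classname)

cls = str_to_class("Greeter")
print(cls().hello())  # hi
```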
## Class Inspection & Querying
**Notes**:
[1] There are "three standard classes required to use each model: **configuration, models and tokenizer**."
[2] All three standard classes can be initialized via `from_pretrained()`. This method will download (as needed), cache, and load the pre-trained instance from the library or from the filesystem.
**Models**: All derive from `nn.Module` (e.g., `BertModel`)
**Configuration**: Stores configuration required to **build a model** (e.g., `BertConfig`). "*If you are using a pretrained model* without any modification, *creating the model will automatically take care of instantiating the configuration* (which is part of the model)."
**Tokenizer**: Stores the vocab for each model and provides methods to encode/decode strings and provide the various embeddings required to be fed into a model.
**`from_pretrained()`**: To instantiate any of the above classes using a friendly name included in the library (`bert-base-uncased`) or from a path.
**`save_pretrained()`**: To save any of the classes locally so it can be re-loaded using `from_pretrained()`
```
transformer_classes = inspect.getmembers(sys.modules[__name__],
lambda member: inspect.isclass(member)
and member.__module__.startswith('transformers.'))
transformer_classes[:5]
df = pd.DataFrame(transformer_classes, columns=['class_name', 'class_location'])
df.head()
df['module'] = df.class_location.apply(lambda v: v.__module__); df.head()
df.drop(labels=['class_location'], axis=1, inplace=True)
df.head()
module_parts_df = df.module.str.split(".", n = -1, expand = True)
for i in range(len(module_parts_df.columns)):
df[f'module_part_{i}'] = module_parts_df[i]
df.head()
module_part_1_df = df.module_part_1.str.split("_", n = 1, expand = True)
module_part_1_df.head()
df[['functional_area', 'arch']] = module_part_1_df
df.head()
```
Look for custom, task-based implementations of models (indicated by `<model>For<task>`):
```
model_type_df = df[(df.functional_area == 'modeling')].class_name.str.split('For', n=1, expand=True)
model_type_df.head()
model_type_df[1] = np.where(model_type_df[1].notnull(), 'For' + model_type_df[1].astype(str), model_type_df[1])
df['model_task'] = model_type_df[1]
```
Look for custom, task-based implementations of models (indicated by `<model>With<task>`):
```
model_type_df = df[(df.functional_area == 'modeling')].class_name.str.split('With', n=1, expand=True)
model_type_df.head()
model_type_df[1] = np.where(model_type_df[1].notnull(),
'With' + model_type_df[1].astype(str),
df[(df.functional_area == 'modeling')].model_task)
df['model_task'] = model_type_df[1]
df.head()
print(list(df.model_task.unique()))
print(list(df.functional_area.unique()))
print(list(df.module_part_2.unique()))
print(list(df.module_part_3.unique()))
# look at what we're going to remove (use to verify we're just getting rid of stuff we want too)
# df[~df['hf_class_type'].isin(['modeling', 'configuration', 'tokenization'])]
df = df[df['functional_area'].isin(['modeling', 'configuration', 'tokenization'])]
```
### Get included architectures
```
def get_architectures():
return df[(df.arch.notna()) & (df.arch != None)].arch.unique().tolist()
print(get_architectures())
TRANSFORMER_ARCHITECTURES = Enum('TRANSFORMER_ARCHITECTURES', get_architectures())
print(L(TRANSFORMER_ARCHITECTURES))
```
### Get an architecture's config
```
def get_config(arch):
return df[(df.functional_area == 'configuration') & (df.arch == arch)].class_name.values[0]
print(get_config('bert'))
```
### Get an architecture's tokenizers
There may be multiple, so this returns a list:
```
def get_tokenizers(arch):
return df[(df.functional_area == 'tokenization') & (df.arch == arch)].class_name.values
print(get_tokenizers('electra'))
```
### Get included custom model tasks
Get the types of task for which there is a custom model (*optionally: by architecture*). There are a number of customized models built for specific tasks like token classification, question answering, LM, etc.
```
def get_tasks(arch=None):
query = ['model_task.notna()']
if (arch): query.append(f'arch == "{arch}"')
return df.query(' & '.join(query)).model_task.unique().tolist()
print(get_tasks())
print(get_tasks('bart'))
TRANSFORMER_TASKS_ALL = Enum('TRANSFORMER_TASKS_ALL', get_tasks())
TRANSFORMER_TASKS_AUTO = Enum('TRANSFORMER_TASKS_AUTO', get_tasks('auto'))
print('--- all tasks ---')
print(L(TRANSFORMER_TASKS_ALL))
print('\n--- auto only ---')
print(L(TRANSFORMER_TASKS_AUTO))
```
### Get included models
The transformer models available for use (*optional: by architecture | task*)
```
def get_models(arch=None, task=None):
query = ['functional_area == "modeling"']
if (arch): query.append(f'arch == "{arch}"')
if (task): query.append(f'model_task == "{task}"')
return df.query(' & '.join(query)).class_name.tolist()
print(L(get_models()))
print(get_models(arch='bert'))
print(get_models(task='ForTokenClassification'))
print(get_models(arch='bert', task='ForTokenClassification'))
TRANSFORMER_MODELS = Enum('TRANSFORMER_MODELS', get_models())
print(L(TRANSFORMER_MODELS))
```
### Get tokenizers, config, and model for a given model name / enum
```
def get_classes_for_model(model_name_or_enum):
model_name = model_name_or_enum if isinstance(model_name_or_enum, str) else model_name_or_enum.name
meta = df[df.class_name == model_name]
tokenizers = get_tokenizers(meta.arch.values[0])
config = get_config(meta.arch.values[0])
return ([str_to_class(tok) for tok in tokenizers], str_to_class(config), str_to_class(model_name))
tokenizers, config, model = get_classes_for_model('RobertaForSequenceClassification')
print(tokenizers[0])
print(config)
print(model)
tokenizers, config, model = get_classes_for_model(TRANSFORMER_MODELS.DistilBertModel)
print(tokenizers[0])
print(config)
print(model)
def get_model_architecture(model_name_or_enum):
model_name = model_name_or_enum if isinstance(model_name_or_enum, str) else model_name_or_enum.name
return df[df.class_name == model_name].arch.values[0]
get_model_architecture('RobertaForSequenceClassification')
```
## Loading Pre-Trained (configs, tokenizer, model)
```
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")
def get_auto_hf_objects(pretrained_model_name_or_path,
task=TRANSFORMER_TASKS_AUTO.ForSequenceClassification,
config=None):
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path)
config = AutoConfig.from_pretrained(pretrained_model_name_or_path) if (config is None) else config
model = str_to_class(f'AutoModel{task.name}').from_pretrained(pretrained_model_name_or_path,
config=config)
arch = get_model_architecture(type(model).__name__)
return (arch, tokenizer, config, model)
arch, tokenizer, config, model = get_auto_hf_objects("bert-base-cased-finetuned-mrpc",
task=TRANSFORMER_TASKS_AUTO.WithLMHead)
print(arch)
print(type(tokenizer))
print(type(config))
print(type(model))
arch, tokenizer, config, model = get_auto_hf_objects("fmikaelian/flaubert-base-uncased-squad",
task=TRANSFORMER_TASKS_AUTO.ForQuestionAnswering)
print(arch)
print(type(tokenizer))
print(type(config))
print(type(model))
def get_transformer_objects(pretrained_model_name_or_path,
tokenizer_cls=BertTokenizer,
model_cls=TRANSFORMER_MODELS.BertModel,
config_cls=BertConfig):
tokenizer = tokenizer_cls.from_pretrained(pretrained_model_name_or_path)
if (config_cls is None):
model = str_to_class(model_cls.name).from_pretrained(pretrained_model_name_or_path)
config = None
else:
config = config_cls.from_pretrained(pretrained_model_name_or_path)
model = str_to_class(model_cls.name).from_pretrained(pretrained_model_name_or_path, config=config)
arch = get_model_architecture(type(model).__name__)
return (arch, tokenizer, config, model)
arch, tokenizer, config, model = get_transformer_objects("bert-base-cased-finetuned-mrpc",
tokenizer_cls=BertTokenizer,
config_cls=None,
model_cls=TRANSFORMER_MODELS.BertForNextSentencePrediction)
print(arch)
print(type(tokenizer))
print(type(config))
print(type(model))
```
## Tokenizers
Terms:
**Input IDs**: \
"The input ids are often the only required parameters to be passed to the model as input. They are *token indices, numerical representations of tokens* building the sequences that will be used as input by the model."
`tokenizer.tokenize(sequence)` => Splits the sequence into tokens based on the vocab
`tokenizer.encode(sequence)` => Converts tokens to their numerical IDs (add `add_special_tokens=False` to exclude special tokens)
`tokenizer.encode_plus(sequence)` => Returns a dictionary of "input_ids", "token_type_ids", and "attention_mask"
**Attention Mask**: \
"This argument indicates to the model which tokens should be attended to, and which should not ... a binary tensor indicating the position of the padded indices so that the model does not attend to them. For the BertTokenizer, 1 indicate a value that should be attended to while 0 indicate a padded value." (optional)
`tokenizer.encode(sequence, max_length=20, pad_to_max_length=True)`
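The mask can also be built by hand from padded input IDs — a minimal sketch assuming a pad token ID of 0 and illustrative token IDs (the actual values vary by tokenizer):

```python
# Padded ids for a short sequence, assuming pad_token_id = 0
input_ids = [101, 7632, 999, 102, 0, 0, 0, 0]
attention_mask = [0 if tok == 0 else 1 for tok in input_ids]
print(attention_mask)  # [1, 1, 1, 1, 0, 0, 0, 0]
```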
**Token Type IDs**: \
"Some models’ purpose is to do sequence classification or question answering. These require two different sequences to be encoded in the same input IDs. They are usually separated by special tokens, such as the classifier and separator tokens.... The Token Type IDs are a binary mask identifying the different sequences (segments) in the model."
`tokenizer.encode(sequence_a, sequence_b)`
"The first sequence, the “context” used for the question, has all its tokens represented by 0, whereas the question has all its tokens represented by 1. Some models, like `XLNetModel` use an additional token represented by a 2."
**Position IDs**: \
"The position IDs are used by the model to identify which token is at which position. Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of each token.... If no position IDs are passed to the model, they are automatically created as absolute positional embeddings." (optional)
"Absolute positional embeddings are selected in the range `[0, config.max_position_embeddings - 1]`. Some models use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings."
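As an illustration of the sinusoidal variant mentioned above, here is a stand-alone NumPy sketch of the fixed embeddings from the original Transformer paper (not the implementation any particular HF model uses):

```python
import numpy as np

def sinusoidal_position_embeddings(n_positions, d_model):
    """Fixed sinusoidal position embeddings from 'Attention Is All You Need':
    even dimensions get sine, odd dimensions get cosine."""
    pos = np.arange(n_positions)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    emb = np.zeros((n_positions, d_model))
    emb[:, 0::2] = np.sin(angle[:, 0::2])
    emb[:, 1::2] = np.cos(angle[:, 1::2])
    return emb

print(sinusoidal_position_embeddings(4, 6).shape)  # (4, 6)
```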
```
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
print(tokenizer.tokenize("Hi! You like the Bert Tokenizer?"))
print(tokenizer.encode("Hi! You like the Bert Tokenizer?"))
print(tokenizer.encode("Hi! You like the Bert Tokenizer?", add_special_tokens=False))
print(tokenizer.encode_plus("Hi! You like the Bert Tokenizer?"))
print(tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?"))
print(tokenizer.encode("Hi! You like the Bert Tokenizer?", add_special_tokens=False))
# ALBERT
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?")
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# BART
tokenizer = BartTokenizer.from_pretrained("bart-large-cnn")
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?")
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# BERT
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?")
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
print(tokenizer.prepare_for_model([101, 8790, 106, 102, 1192, 1176, 1103, 15035, 1706, 6378, 17260, 136, 102],None))
# CTRL
tokenizer = CTRLTokenizer.from_pretrained("ctrl")
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?")
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# CamemBERT
tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?")
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# ELECTRA
tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?")
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# GPT-2
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?")
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# GPT
tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?")
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?")
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# T5
tokenizer = T5Tokenizer.from_pretrained('t5-small')
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?")
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# TransfoXLTokenizer
tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103')
tok_results = tokenizer.encode_plus("Hi!", "You like the Bert Tokenizer?", add_space_before_punct_symbol=True)
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# XLMRobertaTokenizer
# XLM
tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048")
tok_results = tokenizer.encode_plus("Hi!", None)
print(tok_results)
print(tokenizer.decode(tok_results['input_ids']))
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
# XLNet
tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
tok_results = tokenizer.encode_plus("Hi! what do you thing of this thing we are doing")
print(tok_results)
print(tokenizer.pad_token_id, tokenizer.pad_token_type_id)
tokenizer.batch_encode_plus(['Hi! what do you thing of this thing we are doing'],
max_length=10, stride=5,
pad_to_max_length=True,
return_overflowing_tokens=True,
return_special_tokens_masks=True,
return_input_lengths=True)
encoded_ids = tokenizer.encode("Hi!", "You like the Bert Tokenizer?")
print(encoded_ids)
toks = tokenizer.convert_ids_to_tokens(encoded_ids)
print(toks)
sep_idxs = [idx for idx, tok in enumerate(toks) if tok == tokenizer.sep_token]
print(len(sep_idxs), sep_idxs)
toks_modified = toks if len(sep_idxs) == 1 else [toks[:sep_idxs[0]+1], toks[sep_idxs[0]+1:]]
print(toks_modified)
tokenizer.get_special_tokens_mask(*toks_modified)
tokenizer.encode('.', add_special_tokens=False)
tokenizer.get_vocab()['.']
tok_a =tokenizer.tokenize("Hi!")
tok_b =tokenizer.tokenize("You like the Bert Tokenizer?")
tok_a, tok_b
a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("Hi!"))
b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("You like the Bert Tokenizer?"))
print(tokenizer.build_inputs_with_special_tokens(a,b))
print(tokenizer.create_token_type_ids_from_sequences(a,b))
# ddd = tokenizer.build_inputs_with_special_tokens(a,b)
# [0 if idx == tokenizer.pad_token_id else 1 for idx in ddd]
tokenizer.pad_token_id, tokenizer.pad_token_type_id
d = tokenizer.prepare_for_model(a,b, max_length=25, pad_to_max_length=True, return_tensors='pt')
e = tokenizer.prepare_for_model(a,b, max_length=25, pad_to_max_length=True, return_tensors='pt')
f = tokenizer.prepare_for_model(a,b, max_length=25, pad_to_max_length=True, return_tensors='pt')
x = [d['input_ids'], e['input_ids'], f['input_ids']]
d['input_ids'].shape, torch.cat(x).shape
```
## Models
"See the models docstrings for the detail of the inputs" ... `outputs = model(tokens_tensor, token_type_ids=segments_tensors)`
"Transformers models always output tuples. See the models docstrings for the detail of all the outputs. In our case, the first element is the hidden state of the last layer of the Bert model" ... `encoded_layers = outputs[0]`
`GPT-2`, `GPT`, `XLNet`, `Transfo-XL`, `CTRL` (and some others) "make use of a `past` or `mems` attribute which can be used to prevent re-computing the key/value pairs when using sequential decoding. It is useful when generating sequences as a big part of the attention mechanism benefits from previous computations."
"If you want to fine-tune a model on a specific task, you can leverage one of the `run_$TASK.py` script in the examples directory.
**AutoModel**:
"These examples leverage auto-models, which are classes that will instantiate a model according to a given checkpoint, automatically selecting the correct model architecture. Please check the `AutoModel` documentation for more information"
- AutoConfig
- AutoTokenizer
- AutoModel
- AutoModelForPreTraining
- AutoModelWithLMHead
- AutoModelForQuestionAnswering
- AutoModelForSequenceClassification
- AutoModelForTokenClassification
**Inference**:
Option 1: Use `Pipelines`
Option 2: Use the model directly with the tokenizer
## Question-Answer
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForQuestionAnswering.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_ids = tokenizer.encode(question, text)
token_type_ids = [0 if i <= input_ids.index(102) else 1 for i in range(len(input_ids))]
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
input_ids, start_scores.shape
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
answer
tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-en-2048')
model = XLMForQuestionAnsweringSimple.from_pretrained('xlm-mlm-en-2048')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_ids = tokenizer.encode(question, text, add_special_tokens=True)
outputs = model(torch.tensor([input_ids]))
start_scores, end_scores = outputs[:2]  # unpack XLM's scores so the cell below doesn't reuse BERT's
len(outputs)
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1])
answer, all_tokens
input_ids = torch.tensor(tokenizer.encode("Who was Jim Henson?", "Jim Henson was a nice puppet", add_special_tokens=True)).unsqueeze(0) # Batch size 1
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(input_ids)
# loss = outputs[0]
torch.argmax(outputs[1]), len(input_ids[0]), input_ids
```
## Navigating nn hierarchy
```
# layer_groups = hft_splitter(temp_arch, tmp_model)
# print(len(layer_groups))
# for g in layer_groups:
# print(len(g))
# layer_groups[3][3].shape
# tmp_model
# for g in layer_groups:
# print(len(g))
# x = list(hft_model.named_children())[0]
# len(list(x[1].named_children()))
# for m in x[1].named_children():
# print(m[0])
# for m in tmp_model.named_children():
# print(m[0])
```
<a name="top"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/src/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Plotting on a Map with CartoPy</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
<div style="float:right; width:250 px"><img src="http://scitools.org.uk/images/cartopy.png" alt="CartoPy" style="height: 200px;"></div>
### Questions
1. How do we plot on a map in Python?
1. How do I specify a map projection?
1. How do I tell CartoPy how to reference my data?
1. How do I add map features to a CartoPy plot?
### Objectives
1. <a href="#basicfigure">Create a basic figure using CartoPy</a>
1. <a href="#mapfeatures">Add maps to the figure</a>
1. <a href="#plottingdata">Plot georeferenced data on the figure</a>
<a name="basicfigure"></a>
## 1. Basic CartoPy Plotting
- High level API for dealing with maps
- CartoPy allows you to plot data on a 2D map.
- Support many different map projections
- Support for shapefiles from the GIS world
```
# Set things up
%matplotlib inline
# Importing CartoPy
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
```
The simplest plot we can make sets a projection with no parameters. The one below uses the Robinson projection:
```
# Works with matplotlib's built-in transform support.
fig = plt.figure(figsize=(10, 4))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.Robinson())
# Sets the extent to cover the whole globe
ax.set_global()
# Adds standard background map
ax.stock_img()
```
We also have fine-tuned control over the globe used in the projection as well as lots of standard parameters, which depend on individual projections:
```
# Set up a globe with a specific radius
globe = ccrs.Globe(semimajor_axis=6371000.)
# Set up a Lambert Conformal projection
proj = ccrs.LambertConformal(standard_parallels=[25.0], globe=globe)
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Sets the extent using a lon/lat box
ax.set_extent([-130, -60, 20, 55])
ax.stock_img()
```
<a href="#top">Top</a>
<hr style="height:2px;">
<a name="mapfeatures"></a>
## 2. Adding maps to CartoPy
CartoPy provides a couple of helper methods for adding maps to the plot:
```
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal())
ax.stock_img()
ax.add_feature(cfeature.COASTLINE)
ax.set_extent([-130, -60, 20, 55])
```
Cartopy also has a lot of built-in support for a variety of map features:
```
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal())
# Add variety of features
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.COASTLINE)
# Can also supply matplotlib kwargs
ax.add_feature(cfeature.BORDERS, linestyle=':')
ax.add_feature(cfeature.STATES, linestyle=':')
ax.add_feature(cfeature.LAKES, alpha=0.5)
ax.add_feature(cfeature.RIVERS, edgecolor='tab:green')
ax.set_extent([-130, -60, 20, 55])
```
The map features are available at several different scales depending on how large the area you are covering is. The scales can be accessed using the `with_scale` method. Natural Earth features are available at 110m, 50m and 10m.
```
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal())
# Add variety of features
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.COASTLINE)
# Can also supply matplotlib kwargs
ax.add_feature(cfeature.BORDERS.with_scale('50m'), linestyle=':')
ax.add_feature(cfeature.STATES.with_scale('50m'), linestyle=':')
ax.add_feature(cfeature.LAKES.with_scale('50m'), alpha=0.5)
ax.add_feature(cfeature.RIVERS.with_scale('50m'), edgecolor='tab:green')
ax.set_extent([-130, -60, 20, 55])
```
You can also grab other features from the Natural Earth project: http://www.naturalearthdata.com/
## US Counties
MetPy has US Counties built in at the 20m, 5m, and 500k resolutions.
```
from metpy.plots import USCOUNTIES
proj = ccrs.LambertConformal(central_longitude=-85.0, central_latitude=45.0)
fig = plt.figure(figsize=(12, 9))
ax1 = fig.add_subplot(1, 3, 1, projection=proj)
ax2 = fig.add_subplot(1, 3, 2, projection=proj)
ax3 = fig.add_subplot(1, 3, 3, projection=proj)
for scale, axis in zip(['20m', '5m', '500k'], [ax1, ax2, ax3]):
axis.set_extent([270.25, 270.9, 38.15, 38.75], ccrs.Geodetic())
axis.add_feature(USCOUNTIES.with_scale(scale), edgecolor='black')
```
<a href="#top">Top</a>
<hr style="height:2px;">
<a name="plottingdata"></a>
## 3. Plotting Data
CartoPy supports all of the matplotlib plotting options you would expect on a map. It transparently handles transforming your data between different coordinate systems, provided you supply the correct information (more on this later). To start, let's put a marker at -105, 40:
```
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal())
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS, linewidth=2)
ax.add_feature(cfeature.STATES, linestyle='--', edgecolor='black')
ax.plot(-105, 40, marker='o', color='tab:red')
ax.set_extent([-130, -60, 20, 55])
```
So that did not succeed at putting a marker at -105 longitude, 40 latitude (Boulder, CO). Instead, what actually happened is that it put the marker at (-105, 40) in the map projection coordinate system; in this case that's a Lambert Conformal projection, and x,y are assumed in meters relative to the origin of that coordinate system. To get CartoPy to treat it as longitude/latitude, we need to tell it that's what we're doing. We do this through the use of the `transform` argument to all of the plotting functions.
```
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal())
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS, linewidth=2)
ax.add_feature(cfeature.STATES, linestyle='--', edgecolor='black')
data_projection = ccrs.PlateCarree()
ax.plot(-105, 40, marker='o', color='tab:red', transform=data_projection)
ax.set_extent([-130, -60, 20, 55])
```
This approach by CartoPy separates the data coordinate system from the coordinate system of the plot. It allows you to take data in any coordinate system (lon/lat, Lambert Conformal) and display it on any map you want. It also allows you to combine data from various coordinate systems seamlessly. This extends to all plot types, not just `plot`:
```
# Create some synthetic gridded wind data
import numpy as np
from metpy.calc import wind_speed
from metpy.units import units
# Note that all of these winds have u = 0 -> south wind
v = (np.full((5, 5), 10, dtype=np.float64) + 10 * np.arange(5)) * units.knots
u = np.zeros_like(v) * units.knots
speed = wind_speed(u, v)
# Create arrays of longitude and latitude
x = np.linspace(-120, -60, 5)
y = np.linspace(30, 55, 5)
# Plot as normal
fig = plt.figure(figsize=(10, 8))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.LambertConformal())
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.BORDERS)
# Plot wind barbs--CartoPy handles reprojecting the vectors properly for the
# coordinate system
ax.barbs(x, y, u.m, v.m, transform=ccrs.PlateCarree(), color='tab:blue')
ax.set_extent([-130, -60, 20, 55])
```
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Create a map, on a Mercator Projection, which at least has coastlines and country and state borders. Bonus points for putting on colored land and oceans, or other map features.</li>
<li>Plot our location correctly on the map.</li>
<li>Set the bounds of the map to zoom in mostly over our state/region.</li>
</ul>
</div>
```
# YOUR CODE GOES HERE
```
<div class="alert alert-info">
<b>SOLUTION</b>
</div>
```
# %load solutions/map.py
```
<a href="#top">Top</a>
<hr style="height:2px;">
```
%use dataframe, khttp
// to see autogenerated code, uncomment the line below:
//%trackExecution -generated
```
## Get Data
```
val response = khttp.get("http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/titanic.txt")
val cleanedText = response.text.replace("\"Molly\"", "Molly").replace("row.names", "row").replace("home.dest", "home")
// convert data to dataframe, generate marker interface for typed data frame wrapper and extension properties for it
val df = DataFrame.readDelim(cleanedText.byteInputStream())
df
```
## Select
```
// get typed column as extension property
df.name
// select single column -> returns DataFrame
df.select{name}
// select several columns
df.select{columns(name, age, embarked)}
// another way to select columns without compile-time check
df.select(df.name, df.age, df.sex)
// select columns filtered by predicate
df.selectIf{valueClass == String::class}
```
## Index
```
// Row indexing
df[1]
// Column indexing
df.name[0]
// Same result
df[0].name
```
## Filter
```
// filter rows by predicate. Predicate receiver is of type TypedDataFrameRow<*> with generated extension properties
df.filter {sex == "female"}
df.filter { age > 50 } // compilation error, because 'age' is a nullable property
// filter rows where 'age' is not null.
val withAges = df.filterNotNull {age}
withAges
// now filtration works
withAges.filter {age > 50}
// find the oldest survived woman
withAges.filter {survived == 1 && sex == "female"}.maxBy{age}
```
## Sort
```
// sort by single column
withAges.sortedByDesc {age}
// sort by several columns
withAges.sortedBy {columns(age, name)}
// another way
withAges.sortedBy(withAges.age, withAges.name)
```
## Add Columns
```
// add new column and store result in a new field
val withYear = withAges.add("year") {1912 - age}
withYear
// check new column
withYear.year
// add several columns
withAges.add {
"year" {1912-age}
"died" {survived == 0}
}
// plus is overloaded for adding columns
withAges + {
"year" {1912-age}
"died" {survived == 0}
}
// another way to build new column via column arithmetics
val birthYear = withAges.age * (-1) + 1912
// new column can be added to dataframe with '+' operator
withAges + birthYear.rename("year")
// Iterable of columns can also be added with '+'
withAges + withAges.columns.map {it.rename(it.name + " duplicate")}
```
## Remove columns
```
// remove single column
df.remove{ticket}
// remove several columns
df.remove {columns(row, pclass, ticket, room, survived)}
// remove several columns by column instances
df.remove(df.row, df.pclass)
// '-' operator can also be used for removing columns
df - {row} - {pclass} - {room}
```
## Group
```
// group by single column
df.groupBy{ embarked }.count()
// group by several columns
df.groupBy{ columns(sex, survived) }.count()
// another way
df.groupBy(df.sex, df.survived).count()
// Various summarization operations on grouped data frame
withAges.groupBy{ embarked }.summarize {
"total count" { size } // lambda expressions are computed for every group. Type of receiver: TypedDataFrame<*>
"survival rate" { count { survived == 1 }.toDouble() / size * 100 }
"average age" { age.mean() } // column operations are also supported
"median age" { age.median() }
val youngest = find { minBy {age}!! } // 'find' builds data frame, collecting one row for every group
"youngest" (youngest.name) // columns of the collected data frame are passed in round parentheses '()'
"youngest age" (youngest.age)
val oldest = find { maxBy {age}!! }
"oldest" (oldest.name)
"oldest age" (oldest.age)
}
```
## Misc
```
df.size
withAges.count {age > 50 }
withAges.sortedBy{age}.take(5)
withAges.sortedBy{age}.takeLast(5)
```
## List <-> DataFrame conversion
```
// 'rows' field is Iterable<TypedDataFrameRow<*>> so it can be used in any stdlib extensions for Iterable
df.rows.map {it.name}.take(5)
// Sample List
data class Item(val first: Int, val second: Double)
val itemsList = listOf(Item(1,2.0), Item(2, 3.0), Item(3, 4.0))
// List -> DataFrame by reflection
itemsList.toDataFrame()
// List -> DataFrame by mappings
itemsList.toDataFrame {
"a" {first}
"b" {second}
"c" {first*second}
}
// Convert data frame to a list of data class items
val passengers = df.toList("Passenger")
// Check type of the element
passengers[0].javaClass
// Do any list operations
passengers.maxBy {it.age ?: .0}
```
## Column-specific extensions for TypedDataFrame
```
// create marker interface to write column-specific extensions for data frame
@DataFrameType
interface SimplePerson {
val name: String
val age: Double
}
// create extension for any data frame with fields 'name' and 'age'
fun TypedDataFrame<SimplePerson>.getOlderThan(minAge: Double) = filter {age > minAge}
// extension works even for objects that were created before marker interface declaration
withAges.getOlderThan(50.0)
// code for marker interface can be auto-generated
// 'generateInterface' method returns the generated code without executing it
withAges.select{columns(name,age,home,sex)}.generateInterface("Person")
// 'extractInterface' method generates and executes the code
withAges.select{columns(name,age,home,sex)}.extractInterface("Person")
// Now interface 'Person' is available, so we can write an extension method,
// that will work for any data frame with these four columns
fun TypedDataFrame<Person>.addSummary() = add("summary"){"$sex $name $age y.o. from $home"}
// for example, it works for 'withAges' data frame
withAges.addSummary()
// data frame can also be converted to a list of objects implementing 'Person' interface that was generated above
val persons = withAges.toList<Person>()
// check element type
persons[0].javaClass
persons
```
## Column-based polymorphism
```
// When data frame variable is mutable, a strongly typed wrapper for it
// is generated only once after the first execution of a cell where it is declared
var nameAndSex = df.select(df.name, df.sex)
nameAndSex
// let's declare immutable variable, that contains all string columns
val strings = df.selectIf{valueClass == String::class}
strings
// 'nameAndSex' is assignable from 'strings',
// because 'strings' has all the columns that are required by type of 'nameAndSex'
nameAndSex = strings
// note, that the actual value of 'nameAndSex' is still a data frame of all string columns
nameAndSex
// but typed access to the fields works only for 'name' and 'sex'
nameAndSex.sex // this is OK
nameAndSex.home // this fails with compilation error
nameAndSex["home"] // the requested column is still available by column name string
// now let's create a variable with two other columns
val nameAndHome = df.select(df.name, df.home)
nameAndHome
nameAndSex = nameAndHome // this assignment doesn't work because of columns mismatch
// unfortunately, there is a way to get a runtime error here,
// because typed wrappers are generated only after execution of a cell
// so the following assignment will pass fine, because the return type of 'select' is the same as in the 'df' variable,
// although the set of columns was reduced
nameAndSex = df.select(df.name, df.home)
// if we try to access the column, we get runtime error
nameAndSex.sex
```
## TODO
Support operations:
* Add row
* Join
* Reshape
Improve typed wrappers for:
* Grouped data frame
* Columns
# Automating GIS-processes - Final work
**Aim of the work:**
The aim of the final assignment is to apply the programming techniques and skills that we have learned during the course and create a GIS tool called *AccessHandler* (see instructions below). You can choose yourself what tools / techniques / modules you want to use. You can either do the task by applying pure Python coding, arcpy, or even ArcGIS ModelBuilder (not recommended, though).
Write your code into a single Python file and return it to Moodle (due 2.12.2015 at 12:00). In the evaluation of the final work, the different functionalities of the code are evaluated individually. Thus, if you do not get all the different parts / functionalities of the tool working, it is not *"the end of the world"*. It's good if you get at least some parts of the code working. Good documentation of the code will be highly regarded and will positively affect the grading of the final work.
**What the tool should do?**
***AccessHandler*** is a tool that is used for managing and helping to analyze MetropAccess-Travel Time Matrix (MTTM) data that can be downloaded from <a href="http://blogs.helsinki.fi/accessibility/data/metropaccess-travel-time-matrix/download/" target="_blank">here</a>. Read also the description of the dataset from the web-pages so that you get familiar with the data.
AccessHandler has two main functionalities:
1) It finds from the data folder all the matrices that the user has specified by assigning a list of integer values that should correspond to YKR-IDs found in the attribute table of [a Shapefile called MetropAccess_YKR_grid.shp](http://www.helsinki.fi/science/accessibility/data/MetropAccess-matka-aikamatriisi/MetropAccess_YKR_grid.zip). AccessHandler will create Shapefiles from the chosen Matrix text tables (e.g. *travel_times\_to\_5797076.txt*) by joining the Matrix file with the MetropAccess_YKR_grid Shapefile (*from_id* in the Matrix file corresponds to *YKR_ID* in the Shapefile) and saves the result in the output folder that the user has defined. You should name the files in a way that makes it possible to identify the ID from the name (e.g. 5797076).
2) AccessHandler can also compare *travel times* or *travel distances* between two different travel modes (more than two travel modes are not allowed). Thus, IF the user has specified two travel modes (passed in as a list) for the AccessHandler, the tool will calculate the time/distance difference of those travel modes into a new data column that should be created in the Shapefile. The calculation follows the order of the items passed in the list, where the first travel mode is always subtracted by the last one: ***travelmode1 - travelmode2***. Notice that there are NoData values present in the data (with integer value -1). In such cases the result should always be the integer value -1. The tool should ensure that distances are not compared to travel times and vice versa. If the user chooses to compare travel modes to each other, you should add the travel modes to the filename (e.g. "Accessibility\_5797076\_pt\_vs\_car.shp"). If the user has not specified any travel modes, the tool should only create the Shapefile but not execute any calculations.
AccessHandler asks from the user three parameters:
1. MatrixID (type: a list of integers)
2. TravelModes (type: a list of strings with max length of 2)
3. OutputFolder (type: a string containing a folder path)
With the ***MatrixID*** parameter the user can pass a list of Travel Time Matrix ID-numbers (which should be integers) to the program. If an ID-number that the user has specified does not exist in the data folders, the tool should warn the user about this but still continue running. The tool should also inform the user about the execution process: tell the user what file is currently under process and how many files are left (e.g. "Processing file travel_times\_to\_5797076.txt.. Progress: 3/25").
With the ***TravelModes*** parameter the user can pass a list of travel modes that will be compared to each other (as described earlier). If this parameter is used, the length of the list should be exactly 2; otherwise, stop the program and advise the user on how the parameter is used ("Parameter 'TravelModes' takes exactly two items"). Travel modes should be the same as those found in the actual TravelTimeMatrix file. Thus only the five following values are accepted: 'Walk_time', 'Walk_dist', 'PT_total_time', 'PT_time', 'PT_dist'. If the user specifies something else, stop the program and advise on what the acceptable values are.
With the ***OutputFolder*** parameter the user defines the directory where the Shapefiles will be created.
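The parameter handling described above can be sketched in plain Python as follows. This is only a minimal sketch; the function and constant names are my own and not part of the assignment specification:

```python
# Hypothetical sketch of AccessHandler's parameter validation and
# travel-mode comparison logic; names are illustrative only.
ACCEPTED_MODES = ['Walk_time', 'Walk_dist', 'PT_total_time', 'PT_time', 'PT_dist']
NODATA = -1

def validate_travel_modes(travel_modes):
    """Validate the TravelModes parameter as described in the assignment."""
    if len(travel_modes) != 2:
        raise ValueError("Parameter 'TravelModes' takes exactly two items")
    for mode in travel_modes:
        if mode not in ACCEPTED_MODES:
            raise ValueError(
                "Acceptable values are: %s" % ", ".join(ACCEPTED_MODES))
    # Distances must not be compared to travel times and vice versa
    if travel_modes[0].endswith('_dist') != travel_modes[1].endswith('_dist'):
        raise ValueError("Cannot compare a travel time to a travel distance")

def compare_modes(value1, value2):
    """Subtract travelmode2 from travelmode1, propagating NoData (-1)."""
    if value1 == NODATA or value2 == NODATA:
        return NODATA
    return value1 - value2
```

In the real tool, `compare_modes` would be applied row by row when creating the new difference column in the Shapefile.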
```
%matplotlib widget
import glob
import os
from mpl_toolkits.axes_grid1 import make_axes_locatable
from astropy.io import fits
from astropy.stats import sigma_clipped_stats
from astropy.table import Table
from astropy.visualization import ImageNormalize, SqrtStretch, LogStretch, LinearStretch, ZScaleInterval, ManualInterval
import matplotlib.colors as colors
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.ticker import (MultipleLocator, FormatStrFormatter,
AutoMinorLocator)
from matplotlib import ticker
# plt.style.use('dark_background')
plt.style.use('ggplot')
import numpy as np
data_path = '/Users/nmiles/hst_cosmic_rays/analyzing_cr_rejection/1100.0_clean/'
def read_in_CRREJTAB():
    """Read the CRREJTAB (cosmic-ray rejection parameter table) reference
    file and return its contents as a pandas DataFrame.
    """
tb = Table.read('/Users/nmiles/hst_cosmic_rays/j3m1403io_crr.fits')
df = tb.to_pandas()
return df
df = read_in_CRREJTAB()
df
def plot_image(data, norm=None, units=None, title=None, xlim=None, ylim=None):
fig, ax = plt.subplots(nrows=1, ncols=1)
im = ax.imshow(data, norm=norm, origin='lower', cmap='viridis')
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = fig.colorbar(im, cax=cax)
cbar.set_label(f"{units}")
ax.grid(False)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
ax.set_title(title)
plt.show()
N=20
flist = glob.glob(data_path+'*flt.fits')[:N]
flist
hdu = fits.open(flist[1])
hdr = hdu[0].header
data = hdu[1].data
units = hdu[1].header['BUNIT']
exptime = hdu[0].header['TEXPTIME']
hdu.close()
norm = ImageNormalize(data, stretch=SqrtStretch(), vmin=0, vmax=50*np.median(data))
cbar_bounds = [i for i in range(0,500,70)]
sci_cmap = plt.cm.viridis
norm1 = colors.BoundaryNorm(boundaries=cbar_bounds,
ncolors=sci_cmap.N)
global_mean, global_median, global_std = sigma_clipped_stats(data, sigma=5, maxiters=3)
print(f"mean: {global_mean:.3f}\nmedian: {global_median:.3f}\nstd: {global_std:.3f}")
plot_image(data, norm=norm1, units=units, title=f"Exposure Time: {exptime:0.0f}s")
```
### Visualizing the CR rejection algorithm
- The following cells contain a series of functions and widgets that are combined to create an interactive visualization tool for analyzing the principles behind the CR algorithm
```
def compute_noise_model(hdr, val, scalense=10):
readnse = hdr['READNSE']/ hdr['ATODGAIN']
poisson = val/hdr['ATODGAIN']
total_noise = np.sqrt(readnse**2 + poisson**2 + (scalense*0.01 * val))
return total_noise
import ipywidgets as widgets
from ipywidgets import interact, fixed, interactive, VBox, HBox
```
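As a rough sketch of the rejection test this tool visualizes (my own simplification, not the exact implementation used by the pipeline): a pixel in one exposure is flagged as a cosmic-ray hit when its value exceeds a baseline estimate from the image stack (e.g. the median, as plotted below) by more than `crsigmas` times the expected noise:

```python
import numpy as np

def flag_cosmic_ray(pixel_value, baseline, noise, crsigmas=4):
    """Return True where the pixel exceeds the stack baseline by more
    than crsigmas times the expected noise (simplified criterion)."""
    return pixel_value > baseline + crsigmas * noise

# Toy stack of the same pixel across 5 exposures; one value is a CR hit
stack = np.array([100., 103., 98., 400., 101.])
baseline = np.median(stack)  # 101.0 for this toy stack
noise = 10.0  # in the real analysis this comes from compute_noise_model()
flags = flag_cosmic_ray(stack, baseline, noise)
```

The horizontal threshold line in the scatter plot below corresponds to `baseline + crsigmas * noise` for the selected pixel.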
Define some preset values and loop over the list of images to extract the pixel value at the chosen coordinate from each one
```
x0=212
y0=704
pixval = []
for f in flist:
data = fits.getdata(f)
pixval.append(data[y0][x0])
med = np.nanmedian(pixval)
minim = np.nanmin(pixval)
noisemodel = compute_noise_model(hdr=hdr, val=med, scalense=10)
# @interact(fname=file_slider, x=fixed(512), y= fixed(512), norm=fixed(norm))
def interactive_plot_image(
fname,
norm,
x=fixed(434),
y=fixed(434),
ax=None,
fig=None,
units=None,
w=10,
h=10
):
ax.clear()
texptime = fits.getval(fname, keyword='TEXPTIME')
data = fits.getdata(fname)
patch = patches.Rectangle(xy=(x-0.5, y-0.5), width=1, height=1, fill=False, edgecolor='r', lw=2.25)
im = ax.imshow(data, norm=norm, origin='lower')
ax.set_xlim((x-w, x+w))
ax.set_ylim((y-h, y+h))
ticks = [y-i for i in range(1,11)] + [y] + [y+j for j in range(1,11)]
ticks.sort()
ax.add_patch(patch)
ax.grid(False)
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="10%", pad=0.15)
cbar = fig.colorbar(im, cax=cax, orientation='vertical')
# n = len(cbar.ax.get_yticklabels())
tick_locator = ticker.MaxNLocator(6)
cbar.ax.set_yticklabels(
cbar.ax.get_yticklabels(),
rotation=-10,
horizontalalignment='left',
fontsize=8
)
cbar.locator = tick_locator
cbar.update_ticks()
ax.xaxis.set_minor_locator(AutoMinorLocator(5))
ax.yaxis.set_minor_locator(AutoMinorLocator(5))
ax.tick_params(axis='both', which='both', width=1.5)
cbar.set_label(f"{units}", fontsize=10)
# cbar.ax.set_yticklabels(cbar.ax.get_yticklabels(), horizontalalignment='left', rotation=-25, fontsize=10)
# ax.set_title(f"{os.path.basename(fname)}, {texptime:.0f}s")
fig.suptitle(f"Current Image: {os.path.basename(fname)}, Exposure Time: {texptime:.0f}s")
ax.set_xlabel('X [pixel]')
ax.set_ylabel('Y [pixel]')
def plot_pix_vals(
current_file,
flist=None,
current_img_color='red',
c='k',
x=512,
y=512,
ax=None,
med=None,
minim=None,
units=None,
ymin=None,
ymax=None,
crsigmas=None,
noisemodel= None
):
ax.clear()
pixvals = []
scatter_color = []
scatter_marker = []
labels = []
for i, f in enumerate(flist):
data = fits.getdata(f)
pixval = data[y][x]
pixvals.append(pixval)
if f == current_file:
current_im = ax.scatter(i+1, pixval, label=f"Current: {pixval:.2f}", c=current_img_color)
else:
nom = ax.scatter(i+1, pixval, c=c)
# scat = ax.scatter([i for i in range(1,len(pixval)+1)], pixval, label=labels, c=scatter_color )
# ax.set_title(os.path.basename(current_file))
ax.set_xlim((0,22))
med = np.nanmedian(pixvals)
minim = np.nanmin(pixvals)
ax_med = ax.axhline(med,ls='--',c='k', label=f"med: {med:.2f}")
ax_noise = ax.axhline(med + crsigmas * noisemodel, label=f"med + {crsigmas:.0f}$\sigma$")
ax_min = ax.axhline(minim, ls=':', c='k', label=f"min: {minim:.2f}")
if ymin is None or ymax is None:
ax.set_ylim((0, global_mean + 2 * global_std))  # fall back to global image stats
else:
ax.set_ylim((ymin, ymax))
ax.set_ylabel(units)
ax.xaxis.set_minor_locator(AutoMinorLocator(5))
ax.yaxis.set_minor_locator(AutoMinorLocator(5))
ax.tick_params(axis='both', which='both', width=1.5)
ax_legend = ax.legend(handles=[current_im, ax_med, ax_noise, ax_min],
loc='upper right', edgecolor='k')
return ax
```
Setup a slider to control the file we are examining
```
file_slider1 = widgets.Select(
options=flist,
value=flist[np.argmax(pixval)],
description='Image Plot',
continuous_update=True,
orientation='horizontal',
readout=True,
)
file_slider2 = widgets.Select(
options=flist,
value=flist[np.argmax(pixval)],
description='Scatter Plot',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True
)
xslider = widgets.IntText(
options=[i for i in range(1,1025)],
value=x0,
description='X Coordinate',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True
)
yslider = widgets.IntText(
options=[i for i in range(1,1025)],
value=y0,
description='Y Coordinate',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True
)
wslider = widgets.IntSlider(
min=5,
max=200,
step=5,
value=20,
description='Width',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True
)
hslider = widgets.IntSlider(
min=5,
max=200,
step=5,
value=20,
description='Height',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True
)
ymin_slider = widgets.IntText(
options=[i for i in range(1,1025)],
value=0,
description='ymin',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True
)
ymax_slider = widgets.IntText(
options=[i for i in range(1,1025)],
value= global_mean + 5*global_std,
description='ymax',
disabled=False,
continuous_update=True,
orientation='horizontal',
readout=True
)
# crsigmas_slider = widgets.IntText(
# options=[i for i in range(10)],
# value=3,
# description='crsigmas',
# disabled=False,
# continuous_update=True,
# orientation='horizontal',
# readout=True
# )
l = widgets.link((file_slider1, 'value'), (file_slider2, 'value'))
out = widgets.Output(layout={'border': '1px solid black'})
with out:
fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, gridspec_kw={'wspace':0.75, 'hspace':0.5})
w1 = interactive(
interactive_plot_image,
fname=file_slider1,fig=fixed(fig),
x=xslider, y=yslider,w=wslider,h=hslider,
norm=fixed(norm), ax=fixed(ax1),
units=fixed('COUNTS')
)
w2 = interactive(
plot_pix_vals,
current_file=file_slider2,
x=xslider, y=yslider, w=wslider, h=hslider, ymin=ymin_slider, ymax=ymax_slider,
flist=fixed(flist), ax=fixed(ax2),med=fixed(med), minim=fixed(minim),crsigmas=fixed(4),
units=fixed('COUNTS'), current_img_color=fixed('r'), c=fixed('k'), noisemodel=fixed(noisemodel)
)
# hbox = HBox([w1, w2])
w = widgets.GridBox([w1, w2], layout=widgets.Layout(grid_template_columns="repeat(2,50%)"))
display(w)
out
plt.close('all')
out.clear_output()
```
```
import collections as cl
import faiss
import numpy as np
import torch as th
from misc import load_sift, save_sift
```
### Load vectors extracted from fasttext
```
xq = load_sift('../data/siftLSHTC/predictions.hid.fvecs', dtype=np.float32)
xb = load_sift('../data/siftLSHTC/predictions.wo.fvecs', dtype=np.float32)
xb = np.ascontiguousarray(xb.T)
n, d, c = xq.shape[0], xq.shape[1], xb.shape[1]
print(f"Loaded dataset of {n:_}, {d:_}-dimensional queries (examples)")
print(f"The dataset contains {c:_} classes, and more than one class can be positive")
```
### Load groundtruth
```
gt = []
for line in open('../data/siftLSHTC/predictions.labels.txt'):
gt.append({int(y) for y in line.split()})
```
# Evaluate matmul approach
### Compute scores
```
%%time
BATCH_SIZE = 1024
K = 1
tq = th.from_numpy(xq).cuda()
tb = th.from_numpy(xb).cuda()
ti = th.cuda.LongTensor(tq.shape[0], K)
start_idx = 0
while start_idx < tq.shape[0]:
stop_idx = min(start_idx + BATCH_SIZE, tq.shape[0])
scores = tq[start_idx:stop_idx, :] @ tb
D, I = th.topk(scores, K)
ti[start_idx:stop_idx, :] = I
start_idx = stop_idx
ti = ti.cpu()
th.cuda.synchronize()
```
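For reference, the same batched exact top-k search can be sketched on the CPU with NumPy (toy shapes here; in the cell above the batching presumably exists to bound GPU memory use):

```python
import numpy as np

def batched_top1(queries, base, batch_size=2):
    """Exact inner-product top-1 search, processed in query batches."""
    top1 = np.empty(queries.shape[0], dtype=np.int64)
    for start in range(0, queries.shape[0], batch_size):
        stop = min(start + batch_size, queries.shape[0])
        scores = queries[start:stop] @ base.T  # (batch, n_classes)
        top1[start:stop] = scores.argmax(axis=1)
    return top1

# Toy data: 3 queries and 4 "class" vectors of dimension 2
queries = np.array([[1., 0.], [0., 1.], [1., 1.]])
base = np.array([[1., 0.], [0., 1.], [-1., 0.], [0.5, 0.5]])
ids = batched_top1(queries, base)
```

The GPU version above does exactly this, substituting `torch.topk` for `argmax` so that k > 1 also works.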
### Evaluate p@1
```
p1 = 0.
for i, item in enumerate(ti.cpu().numpy()):
p1 += float(int(item[0]) in gt[i])
p1 /= len(gt)
print(f'Precision @ 1: {p1}')
```
# Evaluate GPU-Flat
### Setup
```
%%time
if xb.shape[1] > xb.shape[0]:
xb = np.ascontiguousarray(xb.T)
res = faiss.StandardGpuResources()
flat_config = faiss.GpuIndexFlatConfig()
flat_config.device = 0
index = faiss.GpuIndexFlatIP(res, xb.shape[1], flat_config)
index.add(xb)
res.syncDefaultStream(0)
```
### Warmup
```
_ = index.search(xq, 1)
```
### Search
```
%%time
D, I = index.search(xq, 1)
```
### Evaluate
```
p1 = 0.
for i, item in enumerate(I):
p1 += float(int(item) in gt[i])
p1 /= len(gt)
print(f'Precision @ 1: {p1}')
```
# Evaluate GPU-Fast
### Setup
```
%%time
if xb.shape[1] > xb.shape[0]:
xb = np.ascontiguousarray(xb.T)
d = xb.shape[1]
res = faiss.StandardGpuResources()
flat_config = faiss.GpuIndexFlatConfig()
flat_config.device = 0
co = faiss.GpuClonerOptions()
index = faiss.index_factory(d, "IVF16384,Flat", faiss.METRIC_INNER_PRODUCT)
index = faiss.index_cpu_to_gpu(res, 0, index, co)
index.train(xb)
index.add(xb)
res.syncDefaultStream(0)
```
### Warmup
```
_ = index.search(xq, 1)
```
### Search
```
%%time
index.setNumProbes(32)
D, I = index.search(xq, 1)
```
### Evaluate
```
p1 = 0.
for i, item in enumerate(I):
p1 += float(int(item) in gt[i])
p1 /= len(gt)
print(f'Precision @ 1: {p1}')
```
# Evaluate CPU Fast
### Setup
```
%%time
if xb.shape[1] > xb.shape[0]:
xb = np.ascontiguousarray(xb.T)
d = xb.shape[1]
index = faiss.index_factory(d, "IVF16384,Flat", faiss.METRIC_INNER_PRODUCT)
index.train(xb)
index.add(xb)
```
### Search
```
%%time
index.nprobe = 32
D, I = index.search(xq, 1)
```
### Evaluate
```
p1 = 0.
for i, item in enumerate(I):
p1 += float(int(item) in gt[i])
p1 /= len(gt)
print(f'Precision @ 1: {p1}')
pwd
```
```
import pandas as pd
import matplotlib.pyplot as plt
import glob
import numpy as np
from collections import defaultdict
import pickle
import os
dataset_name = 'fma_small'
folder = "../exp/" + dataset_name
selected = {'ytc': 5.5, 'fma_small': 7, 'gtzan': 5.8}[dataset_name]
df = pd.read_csv(os.path.join(folder, "tf.csv"))
with open(os.path.join(folder, 'DF.pickle'), 'rb') as handle:
DF = defaultdict(int, pickle.load(handle))
with open(os.path.join(folder, 'docs.pickle'), 'rb') as handle:
docs = pickle.load(handle)
N = len(docs.keys())
doc_frequency_df = pd.DataFrame.from_dict(DF, orient='index')
doc_frequency_df.columns = ['DF']
idfs = np.log(N / (doc_frequency_df['DF'] + 1))
df['idf'] = idfs[df.term].values
df['tf_idf'] = df.tf * df.idf
#selected = 7
print('selected=', selected, '; max idf=', df.idf.max())
df['tf_idf_round'] = df['tf_idf'].round(1)
df['idf_round'] = df['idf'].round(1)
dfn2 = df.groupby('idf_round').count()
dfn = df.groupby('tf_idf_round').count()
fig, ax1 = plt.subplots(figsize=(14, 7))
ax2 = ax1.twinx()
#plt.hist(df.tf_idf_round, bins=300)
lns1 = ax1.plot(dfn.index, dfn.term, '.', label='# of fingerprints')
lns2 = ax2.plot(100*dfn.tf.cumsum()/dfn.tf.sum(), color='orange', label='cumulative sum of fingerprints')
ax2.vlines(df.idf.max(), 0, 100, linestyles='--', color='gray', linewidth=1)
ax2.vlines(selected, 0, 100, color='red', linewidth=1, linestyles='--')
#ax1.plot(dfn2.index, dfn2.term)
plt.title("Number of occurrences of fingerprints by tf-idf values")
ax1.set_yscale('log')
ax1.set_xlim((1, 20))
ax1.set_ylabel('# of occurrences')
ax2.set_ylabel('% of indexed fingerprints')
lns = lns1+lns2
labs = [l.get_label() for l in lns]
ax1.legend(lns, labs, loc='upper right', bbox_to_anchor=(1, 0.9))
plt.savefig('fgpt_by_tfidf.pdf')
plt.show()
df['idf_round'] = df['idf'].round(2)
dfn2 = df.groupby('idf_round').count()
df_rank = df.groupby('term').sum().reset_index(drop=False)
df_rank['rank'] = df_rank.tf.rank(ascending=False)
df_rank = df_rank[['term', 'tf', 'rank']].sort_values('rank', ascending=True).reset_index(drop=True)
df_rank2 = df.groupby('term').count().reset_index(drop=False)
df_rank2['rank'] = df_rank2.tf.rank(ascending=False)
df_rank2 = df_rank2[['term', 'tf', 'rank']].sort_values('rank', ascending=True).reset_index(drop=True)
plt.figure(figsize=[14, 7])
plt.plot(df_rank.tf, label='hashes in unique documents')
#plt.plot(df_rank2.tf, label='hashes in database', alpha=0.7)
plt.title('Rank-frequency distribution of fingerprints')
plt.grid()
plt.yscale('log')
plt.xscale('log')
plt.xlabel("Ranking position")
plt.ylabel("Frequency (# of terms)")
#plt.savefig('rank_freq_distr.pdf')
#plt.xlim((1, 30))
plt.show()
```
| github_jupyter |
# Particle Swarm Optimization Algorithm (explained with Python!)
[SPOILER] We will use the [Particle Swarm Optimization algorithm](https://en.wikipedia.org/wiki/Particle_swarm_optimization) to find the minimum of some test functions

First of all, let's import the libraries we'll need (remember we are using Python 3)
```
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from plotPSO import plotPSO_2D
import optitestfuns
# import scipy as sp
# import time
%matplotlib inline
plt.style.use('bmh')
```
We can define and plot the function we want to optimize:
```
# Testing 2D plot
lo_b = -5 # lower bound
up_b = 5 # upper bound
limits=([lo_b, up_b], # x bounds
[lo_b, up_b]) # y bounds
x_lo = limits[0][0]
x_up = limits[0][1]
y_lo = limits[1][0]
y_up = limits[1][1]
f = optitestfuns.ackley # Objective function (aka fitness function)
#fig, ax = plotPSO_2D(f, limits)
```
## PSO Algorithm
```
n_iterations = 50
def run_PSO(n_particles=10, omega=0.3, phi_p=0.7, phi_g=0.7):
""" PSO algorithm to a funcion already defined.
Params:
omega = 0.3 # Particle weight (intertial)
phi_p = 0.7 # particle best weight
phi_g = 0.7 # global global weight
"""
global x_best_p_global, y_best_p_global, z_p_best_global, \
x_particles, y_particles, z_particles, \
u_particles, v_particles
# Note: we are using global variables to ease the use of interactive widgets
# This code will work fine without the global (and actually it will be safer)
# Initializing x position of particles
x_particles = np.zeros((n_iterations, n_particles))
x_particles[0,:] = np.random.uniform(lo_b, up_b, size=n_particles)
# Initializing y position of particles
y_particles = np.zeros((n_iterations, n_particles))
y_particles[0,:] = np.random.uniform(lo_b, up_b, size=n_particles)
# Initializing best particles
x_best_particles = np.copy(x_particles[0,:])
y_best_particles = np.copy(y_particles[0,:])
# Calculate Objective function (aka fitness function)
z_particles = np.zeros((n_iterations, n_particles))
for i in range(n_particles):
z_particles[0,i] = f((x_particles[0,i],y_particles[0,i]))
z_best_global = np.min(z_particles[0,:])
index_best_global = np.argmin(z_particles[0,:])
x_best_p_global = x_particles[0,index_best_global]
y_best_p_global = y_particles[0,index_best_global]
# Initializing velocity
velocity_lo = lo_b-up_b # [L/iteration]
velocity_up = up_b-lo_b # [L/iteration]
v_max = 0.07 # [L/iteration]
u_particles = np.zeros((n_iterations, n_particles))
u_particles[0,:] = 0.1*np.random.uniform(velocity_lo, velocity_up, size=n_particles)
v_particles = np.zeros((n_iterations, n_particles))
v_particles[0,:] = 0.1*np.random.uniform(velocity_lo, velocity_up, size=n_particles)
# PSO STARTS
iteration = 1
while iteration <= n_iterations-1:
for i in range(n_particles):
x_p = x_particles[iteration-1, i]
y_p = y_particles[iteration-1, i]
u_p = u_particles[iteration-1, i]
v_p = v_particles[iteration-1, i]
x_best_p = x_best_particles[i]
y_best_p = y_best_particles[i]
r_p = np.random.uniform(0, 1)
r_g = np.random.uniform(0, 1)
u_p_new = omega*u_p + \
phi_p*r_p*(x_best_p-x_p) + \
phi_g*r_g*(x_best_p_global-x_p)
v_p_new = omega*v_p + \
phi_p*r_p*(y_best_p-y_p) + \
phi_g*r_g*(y_best_p_global-y_p)
# # Velocity control (note: the second loop must clamp v_p_new, not u_p_new)
# while not (-v_max <= u_p_new <= v_max):
#     u_p_new = 0.9*u_p_new
# while not (-v_max <= v_p_new <= v_max):
#     v_p_new = 0.9*v_p_new
x_p_new = x_p + u_p_new
y_p_new = y_p + v_p_new
# Ignore new position if it's out of the domain
if not ((lo_b <= x_p_new <= up_b) and (lo_b <= y_p_new <= up_b)):
x_p_new = x_p
y_p_new = y_p
x_particles[iteration, i] = x_p_new
y_particles[iteration, i] = y_p_new
u_particles[iteration, i] = u_p_new
v_particles[iteration, i] = v_p_new
# Evaluation
z_p_new = f((x_p_new, y_p_new))
z_p_best = f((x_best_p, y_best_p))
z_particles[iteration, i] = z_p_new
if z_p_new < z_p_best:
x_best_particles[i] = x_p_new
y_best_particles[i] = y_p_new
z_p_best_global = f([x_best_p_global, y_best_p_global])
if z_p_new < z_p_best_global:
x_best_p_global = x_p_new
y_best_p_global = y_p_new
# end while loop particles
iteration = iteration + 1
# Plotting convergence
z_particles_best_hist = np.min(z_particles, axis=1)
z_particles_worst_hist = np.max(z_particles, axis=1)
z_best_global = np.min(z_particles)
index_best_global = np.argmin(z_particles)
fig, ax1 = plt.subplots(nrows=1, ncols=1, figsize=(10, 2))
# Grid points
x_lo = limits[0][0]
x_up = limits[0][1]
y_lo = limits[1][0]
y_up = limits[1][1]
assert x_lo < x_up, "Invalid x limits: the first value of the list must be the lower bound"
assert y_lo < y_up, "Invalid y limits: the first value of the list must be the lower bound"
n_points = 100
x = np.linspace(x_lo, x_up, n_points) # x coordinates of the grid
y = np.linspace(y_lo, y_up, n_points) # y coordinates of the grid
XX, YY = np.meshgrid(x,y)
ZZ = np.zeros_like(XX)
for i in range(n_points):
for j in range(n_points):
ZZ[i,j] = f((XX[i,j], YY[i,j]))
# Limits of the function being plotted
ax1.plot((0,n_iterations),(np.min(ZZ),np.min(ZZ)), '--g', label="min$f(x)$")
ax1.plot((0,n_iterations),(np.max(ZZ),np.max(ZZ)),'--r', label="max$f(x)$")
# Convergence of the best particle and worst particle value
ax1.plot(np.arange(n_iterations),z_particles_best_hist,'b', label="$p_{best}$")
ax1.plot(np.arange(n_iterations),z_particles_worst_hist,'k', label="$p_{worst}$")
ax1.set_xlim((0,n_iterations))
ax1.set_ylabel('$f(x)$')
ax1.set_xlabel('$i$ (iteration)')
ax1.set_title('Convergence')
ax1.legend()
run_PSO()
```
# Animation
```
from __future__ import print_function
import ipywidgets as widgets
from IPython.display import display, HTML
def plotPSO_iter(i=0): #iteration
"""Visualization of particles and obj. function"""
fig, (ax1, ax2) = plotPSO_2D(f, limits,
particles_xy=(x_particles[i, :],y_particles[i, :]),
particles_uv=(u_particles[i, :],v_particles[i, :]))
w_arg_PSO = widgets.interact_manual(run_PSO,
n_particles=(2,50),
omega=(0,1,0.001),
phi_p=(0,1,0.001),
phi_g=(0,1,0.001),
continuous_update=False)
w_viz_PSO = widgets.interact_manual(plotPSO_iter, i=(0,n_iterations-1), continuous_update=False)
```
---
Let's have a look at some examples in case you can't play with the sliders:




| github_jupyter |
### Evaluating Used Cars with Classification
#### Introduction
In recent years, the used-car market has grown larger and larger. Many people now purchase used cars instead of new ones, since used cars are cheaper than new cars and many of them have good reliability. However, there are still plenty of defective used cars on the market. For example, a friend of mine bought a 2000 Toyota; one day while she was driving, its engine suddenly broke down. I am also a used-car victim: I purchased a 2001 Nissan six years ago, and after just one week I could not start the car anymore. Defective used cars not only hurt customers but also ruin sellers' reputations, so evaluating used cars is very important.
#### Data Description
Our data includes 1728 used cars, with the following variables: 1) buying price, 2) price of maintenance, 3) number of doors, 4) capacity in terms of persons to carry, 5) size of trunk, and 6) estimated safety of the car. Both buying price and price of maintenance are categorized into four levels: very high, high, medium, and low. Number of doors is 2, 3, 4, or 5-more. Capacity in terms of persons to carry has three levels: 2, 4, and more. The size of trunk is categorized as small, medium, or big. Estimated safety of the car is low, medium, or high. Each used car is classified as unacceptable, acceptable, good, or very good. Our dataset can be downloaded from https://archive.ics.uci.edu/ml/datasets/Car+Evaluation
```
import pandas as pd
car = pd.read_csv(r'C:\Atop Materials\car evaluation.csv', header=0)  # raw string so backslashes are not treated as escapes
car.head()
# Ordinal encoding for every categorical level in the dataset
levels = {"vhigh": 4, "high": 3, "med": 2, "low": 1,
          "5more": 5, "4": 4, "3": 3, "2": 2, "more": 5,
          "small": 1, "big": 3,
          "unacc": 1, "acc": 2, "good": 3, "vgood": 4}
def tonum(x):
    return levels.get(x)
car["buying"] = car["Buying"].apply(tonum)
car["maint"] = car["Maint"].apply(tonum)
car["doors"] = car["Doors"].apply(tonum)
car["persons"] = car["Persons"].apply(tonum)
car["trunk"] = car["Trunk"].apply(tonum)
car["safty"] = car["Safty"].apply(tonum)
car["evaluation"] = car["Evaluation"].apply(tonum)
car = car[["buying", "maint", "doors", "persons", "trunk", "safty", "evaluation"]]
```
#### Methodology
In this section, we will divide our data into a training set and a test set, and then use a support vector machine, k-nearest neighbors, and a decision tree for classification.
```
import numpy as np
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn versions
from sklearn import svm
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
x = car[["buying", "maint", "doors", "persons", "trunk", "safty"]]
y = car["evaluation"]
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3, random_state = 1)
# svm
clf1 = svm.SVC(kernel = 'linear')
clf1.fit(x_train, y_train)
yhat1 = clf1.predict(x_test)
print("Accuracy of SVM is:", np.round(np.mean(y_test == yhat1), 4))
# KNN
k = 5
clf2 = KNeighborsClassifier(n_neighbors = k)
clf2.fit(x_train, y_train)
yhat2 = clf2.predict(x_test)
print("Accuracy of KNN is:", np.round(np.mean(y_test == yhat2), 4))
# Decision Tree
clf3 = DecisionTreeClassifier(criterion = 'entropy', max_depth = 4)
clf3.fit(x_train, y_train)
yhat3 = clf3.predict(x_test)
print("Accuracy of Decision Tree is:", np.round(np.mean(y_test == yhat3), 4))
```
#### Result
From the methodology section, we see that KNN with k = 5 has the highest accuracy.
#### Discussion
In this report, we used SVM, KNN, and a decision tree to classify used cars, and we found that KNN achieved the highest accuracy. However, we only used very basic versions of these three classifiers; more advanced variants, such as the Twin Bounded SVM, may improve the accuracy of the SVM and the decision tree.
#### Conclusion
We can predict the quality of used cars with high accuracy using classifiers.
| github_jupyter |
```
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
os.environ["CUDA_VISIBLE_DEVICES"]="0";
```
# QA-Based Information Extraction
As of v0.28.x, **ktrain** now includes a “universal” information extractor, which uses a Question-Answering model to extract any information of interest from documents.
Suppose you have a table (e.g., an Excel spreadsheet) that looks like the DataFrame below. (In this example, each document is a single sentence, but each row could potentially be an entire report with many paragraphs.)
```
data = [
'Indeed, risk factors are sex, obesity, genetic factors and mechanical factors (3) .',
'There is a risk of Donald Trump running again in 2024.',
"""This risk was consistent across patients stratified by history of CVD, risk factors
but no CVD, and neither CVD nor risk factors.""",
"""Risk factors associated with subsequent death include older age, hypertension, diabetes,
ischemic heart disease, obesity and chronic lung disease; however, sometimes
there are no obvious risk factors .""",
'Three major risk factors for COVID-19 were sex (male), age (≥60), and severe pneumonia.',
'His speciality is medical risk assessments, and he is 30 years old.',
"""Results: A total of nine studies including 356 patients were included in this study,
the mean age was 52.4 years and 221 (62.1%) were male."""]
import pandas as pd
pd.set_option("display.max_colwidth", None)
df = pd.DataFrame(data, columns=['Text'])
df.head(10)
```
Let's pretend your boss wants you to extract both the reported risk factors from each document and the sample sizes for the reported studies. This can easily be accomplished with the `AnswerExtractor` in **ktrain**, a kind of universal information extractor based on a Question-Answering model.
```
from ktrain.text.qa import AnswerExtractor
ae = AnswerExtractor()
df = ae.extract(df.Text.values, df, [('What are the risk factors?', 'Risk Factors'),
('How many individuals in sample?', 'Sample Size')])
df.head(10)
```
As you can see, all that's required is that you phrase the type of information you want to extract as a question (e.g., *What are the risk factors?*) and provide a label (e.g., *Risk Factors*). The above command will return a new DataFrame with additional columns containing the information of interest.
### Additional Examples
QA-based information extraction is surprisingly versatile. Here, we use it to extract **URLs**, **dates**, and **amounts**.
```
data = ["Closing price for Square on October 8th was $238.57, for details - https://finance.yahoo.com",
"""The film "The Many Saints of Newark" was released on 10/01/2021.""",
"Release delayed until the 1st of October due to COVID-19",
"Price of Bitcoin fell to forty thousand dollars",
"Documentation can be found at: amaiya.github.io/causalnlp",
]
df = pd.DataFrame(data, columns=['Text'])
df = ae.extract(df.Text.values, df, [('What is the amount?', 'Amount'),
('What is the URL?', 'URL'),
('What is the date?', 'Date')])
df.head(10)
```
For our last example, let's extract universities from a sample of the 20 Newsgroup dataset:
```
# load text data
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
from sklearn.datasets import fetch_20newsgroups
train_b = fetch_20newsgroups(subset='train', categories=categories, shuffle=True)
df = pd.DataFrame(train_b.data[:10], columns=['Text']) # let's examine the first 10 posts
df = ae.extract(df.Text.values, df, [('What is the university?', 'University')])
df.head(10)
```
### Customizing the `AnswerExtractor` to Your Use Case
If there are false positives (or false negatives), you can adjust the `min_conf` parameter (i.e., minimum confidence threshold) until you’re happy (default is `min_conf=6`). If `return_conf=True`, then columns showing the confidence scores of each extraction will also be included in the resultant DataFrame.
If adjusting the confidence threshold is not sufficient to address the false positives and false negatives you're seeing, you can also try fine-tuning the QA model to your custom dataset by providing only a small handful examples:
**Example:**
```python
data = [
{"question": "What is the URL?",
"context": "Closing price for Square on October 8th was $238.57, for details - https://finance.yahoo.com",
"answers": "https://finance.yahoo.com"},
{"question": "What is the URL?",
"context": "HTTP is a protocol for fetching resources.",
"answers": None},
]
from ktrain.text.qa import AnswerExtractor
ae = AnswerExtractor(bert_squad_model='distilbert-base-cased-distilled-squad')
ae.finetune(data)
```
Note that, by default, the `AnswerExtractor` uses a `bert-large-*` model that requires a lot of memory to train. If fine-tuning, you may want to switch to a smaller model like DistilBERT, as shown in the example above.
Finally, the `finetune` method accepts other parameters such as `batch_size` and `max_seq_length` that you can adjust depending on your speed requirements, dataset characteristics, and system resources.
| github_jupyter |
```
import sys, os
if 'google.colab' in sys.modules:
# https://github.com/yandexdataschool/Practical_RL/issues/256
!pip uninstall tensorflow --yes
!pip uninstall keras --yes
!pip install tensorflow-gpu==1.13.1
!pip install keras==2.2.4
if not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week07_seq2seq/basic_model_tf.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week07_seq2seq/he-pron-wiktionary.txt
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week07_seq2seq/main_dataset.txt
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week07_seq2seq/voc.py
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
print(sorted(set(''.join(['12','3']))))
list(filter(len, ' s t r i n g'.split(' ')))
```
## Reinforcement Learning for seq2seq
This time we'll solve the problem of transcribing Hebrew words in English, also known as g2p (grapheme2phoneme)
* word (sequence of letters in source language) -> translation (sequence of letters in target language)
Unlike what most deep learning practitioners do, we won't only train it to maximize likelihood of correct translation, but also employ reinforcement learning to actually teach it to translate with as few errors as possible.
### About the task
One notable property of Hebrew is that it is a consonant language. That is, there are no vowels in the written language. One could represent vowels with diacritics above consonants, but you don't expect people to do that in everyday life.
Therefore, some Hebrew characters will correspond to several English letters and others to none, so we should use an encoder-decoder architecture to figure that out.

_(img: esciencegroup.files.wordpress.com)_
Encoder-decoder architectures are about converting anything to anything, including
* Machine translation and spoken dialogue systems
* [Image captioning](http://mscoco.org/dataset/#captions-challenge2015) and [image2latex](https://openai.com/requests-for-research/#im2latex) (convolutional encoder, recurrent decoder)
* Generating [images by captions](https://arxiv.org/abs/1511.02793) (recurrent encoder, convolutional decoder)
* Grapheme2phoneme - convert words to transcripts
We chose simplified __Hebrew->English__ machine translation for words and short phrases (character-level), as it is relatively quick to train even without a GPU cluster.
```
# If True, only translates phrases shorter than 20 characters (way easier).
EASY_MODE = True
# Useful for initial coding.
# If false, works with all phrases (please switch to this mode for homework assignment)
MODE = "he-to-en" # way we translate. Either "he-to-en" or "en-to-he"
# maximal length of _generated_ output, does not affect training
MAX_OUTPUT_LENGTH = 50 if not EASY_MODE else 20
REPORT_FREQ = 100 # how often to evaluate validation score
```
### Step 1: preprocessing
We shall store dataset as a dictionary
`{ word1:[translation1,translation2,...], word2:[...],...}`.
This is mostly due to the fact that many words have several correct translations.
We have implemented this thing for you so that you can focus on more interesting parts.
__Attention python2 users!__ You may want to cast everything to unicode later during the homework phase; just make sure you do it _everywhere_.
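A toy instance of that structure (romanized entries invented purely for illustration; the real dictionary is built from the dataset in the next cell):

```python
from collections import defaultdict

word_to_translation = defaultdict(list)

# One source word accumulates every correct translation seen in the data:
word_to_translation["shalom"].append("shalom")
word_to_translation["shalom"].append("sholem")

print(word_to_translation["shalom"])   # ['shalom', 'sholem']
print(word_to_translation["unseen"])   # [] -- defaultdict yields an empty list
```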
```
import numpy as np
from collections import defaultdict
word_to_translation = defaultdict(list) # our dictionary
bos = '_' # beginning of sentence
eos = ';' # end of sentence
with open("main_dataset.txt") as fin:
for line in fin:
en, he = line[:-1].lower().replace(bos, ' ').replace(eos,
' ').split('\t')
word, trans = (he, en) if MODE == 'he-to-en' else (en, he)
if len(word) < 3:
continue
if EASY_MODE:
if max(len(word), len(trans)) > 20:
continue
word_to_translation[word].append(trans)
print("size = ", len(word_to_translation))
# get all unique lines in source language
all_words = np.array(list(word_to_translation.keys()))
# get all unique lines in translation language
all_translations = np.array(
[ts for all_ts in word_to_translation.values() for ts in all_ts])
print(all_words)
print(all_translations)
```
### split the dataset
We hold out 10% of all words to be used for validation.
```
from sklearn.model_selection import train_test_split
train_words, test_words = train_test_split(
all_words, test_size=0.1, random_state=42)
```
### Building vocabularies
We now need to build vocabularies that map strings to token ids and vice versa. We're gonna need these fellas when we feed training data into model or convert output matrices into english words.
```
from voc import Vocab
inp_voc = Vocab.from_lines(''.join(all_words), bos=bos, eos=eos, sep='')
out_voc = Vocab.from_lines(''.join(all_translations), bos=bos, eos=eos, sep='')
# Here's how you cast lines into ids and backwards.
batch_lines = all_words[:5]
batch_ids = inp_voc.to_matrix(batch_lines)
batch_lines_restored = inp_voc.to_lines(batch_ids)
print("lines")
print(batch_lines)
print("\nwords to ids (0 = bos, 1 = eos):")
print(batch_ids)
print("\nback to words")
print(batch_lines_restored)
```
Draw word/translation length distributions to estimate the scope of the task.
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=[8, 4])
plt.subplot(1, 2, 1)
plt.title("words")
plt.hist(list(map(len, all_words)), bins=20)
plt.subplot(1, 2, 2)
plt.title('translations')
plt.hist(list(map(len, all_translations)), bins=20)
```
### Step 3: deploy encoder-decoder (1 point)
__assignment starts here__
Our architecture consists of two main blocks:
* Encoder reads words character by character and outputs code vector (usually a function of last RNN state)
* Decoder takes that code vector and produces translations character by character
Then it gets fed into a model that follows this simple interface:
* __`model.symbolic_translate(inp, **flags) -> out, logp`__ - takes symbolic int32 matrix of hebrew words, produces output tokens sampled from the model and output log-probabilities for all possible tokens at each tick.
* if given flag __`greedy=True`__, takes most likely next token at each iteration. Otherwise samples with next token probabilities predicted by model.
* __`model.symbolic_score(inp, out, **flags) -> logp`__ - takes symbolic int32 matrices of hebrew words and their english translations. Computes the log-probabilities of all possible english characters given english prefixes and hebrew word.
* __`model.weights`__ - weights from all model layers [a list of variables]
That's all! It's as hard as it gets. With those two methods alone you can implement all kinds of prediction and training.
```
#import tensorflow.compat.v1 as tf
#tf.disable_v2_behavior()
import tensorflow as tf
tf.reset_default_graph()
s = tf.InteractiveSession()
# ^^^ if you get "variable *** already exists": re-run this cell again
from basic_model_tf import BasicTranslationModel
model = BasicTranslationModel('model', inp_voc, out_voc,
emb_size=64, hid_size=128)
s.run(tf.global_variables_initializer())
# Play around with symbolic_translate and symbolic_score
inp = tf.placeholder_with_default(np.random.randint(
0, 10, [3, 5], dtype='int32'), [None, None])
out = tf.placeholder_with_default(np.random.randint(
0, 10, [3, 5], dtype='int32'), [None, None])
# translate inp (with untrained model)
sampled_out, logp = model.symbolic_translate(inp, greedy=False)
print("\nSymbolic_translate output:\n", sampled_out, logp)
print("\nSample translations:\n", s.run(sampled_out))
# score logp(out | inp) with untrained input
logp = model.symbolic_score(inp, out)
print("\nSymbolic_score output:\n", logp)
print("\nLog-probabilities (clipped):\n", s.run(logp)[:, :2, :5])
# Prepare any operations you want here
input_sequence = tf.placeholder('int32', [None, None])
greedy_translations, logp = model.symbolic_translate(input_sequence, greedy=True)
def translate(lines):
"""
You are given a list of input lines.
Make your neural network translate them.
:return: a list of output lines
"""
# Convert lines to a matrix of indices
lines_ix = inp_voc.to_matrix(lines)
# Compute translations in form of indices
trans_ix = s.run(greedy_translations, { input_sequence: lines_ix })
# Convert translations back into strings
return out_voc.to_lines(trans_ix)
print("Sample inputs:", all_words[:3])
print("Dummy translations:", translate(all_words[:3]))
assert isinstance(greedy_translations,
tf.Tensor) and greedy_translations.dtype.is_integer, "trans must be a tensor of integers (token ids)"
assert translate(all_words[:3]) == translate(
all_words[:3]), "make sure translation is deterministic (use greedy=True and disable any noise layers)"
assert type(translate(all_words[:3])) is list and (type(translate(all_words[:1])[0]) is str or type(
translate(all_words[:1])[0]) is unicode), "translate(lines) must return a sequence of strings!"
print("Tests passed!")
```
### Scoring function
LogLikelihood is a poor estimator of model performance.
* If we predict zero probability once, it shouldn't ruin entire model.
* It is enough to learn just one translation if there are several correct ones.
* What matters is how many mistakes the model is going to make when it translates!
Therefore, we will use the minimal Levenshtein distance. It measures how many characters we need to add/remove/replace in the model's translation to make it perfect. Alternatively, one could use character-level BLEU/RougeL or other similar metrics.
The catch here is that Levenshtein distance is not differentiable: it isn't even continuous. We can't train our neural network to maximize it by gradient descent.
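To make the metric concrete, here is a tiny pure-Python edit distance (a reference sketch only; the notebook uses the compiled `editdistance` package in the next cell):

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions and substitutions
    turning string a into string b (classic dynamic programming)."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]                   # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
print(levenshtein("gold", "gold"))       # 0 -- a perfect translation scores 0
```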
```
import editdistance # !pip install editdistance
def get_distance(word, trans):
"""
A function that takes word and predicted translation
and evaluates (Levenshtein's) edit distance to closest correct translation
"""
references = word_to_translation[word]
assert len(references) != 0, "wrong/unknown word"
return min(editdistance.eval(trans, ref) for ref in references)
def score(words, bsize=100):
"""a function that computes levenshtein distance for bsize random samples"""
assert isinstance(words, np.ndarray)
batch_words = np.random.choice(words, size=bsize, replace=False)
batch_trans = translate(batch_words)
distances = list(map(get_distance, batch_words, batch_trans))
return np.array(distances, dtype='float32')
# should be around 5-50 and decrease rapidly after training :)
[score(test_words, 10).mean() for _ in range(5)]
```
## Step 2: Supervised pre-training
Here we define a function that trains our model through maximizing log-likelihood a.k.a. minimizing crossentropy.
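Written out, the masked loss computed in the next cell is the mean token-level negative log-likelihood:

$$ \mathcal{L} = \frac{\sum_{n,t} m_{n,t}\,\bigl(-\log p_\theta(y_{n,t} \mid y_{n,<t},\, x_n)\bigr)}{\sum_{n,t} m_{n,t}} $$

where $x_n$ is the source word, $y_{n,t}$ are the reference translation tokens, and $m_{n,t}$ is the padding mask (1 for real tokens, 0 for padding), so padded positions contribute nothing to the loss.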
```
# import utility functions
from basic_model_tf import initialize_uninitialized, infer_length, infer_mask, select_values_over_last_axis
class supervised_training:
# variable for inputs and correct answers
input_sequence = tf.placeholder('int32', [None, None])
reference_answers = tf.placeholder('int32', [None, None])
# Compute log-probabilities of all possible tokens at each step. Use model interface.
_, logprobs_seq = model.symbolic_translate(input_sequence, greedy=False)
# compute mean crossentropy
crossentropy = - select_values_over_last_axis(logprobs_seq, reference_answers)
mask = infer_mask(reference_answers, out_voc.eos_ix)
loss = tf.reduce_sum(crossentropy * mask)/tf.reduce_sum(mask)
# Build weights optimizer. Use model.weights to get all trainable params.
train_step = tf.train.AdamOptimizer().minimize(loss, var_list=model.weights)
# intialize optimizer params while keeping model intact
initialize_uninitialized(s)
```
Actually run training on minibatches
```
import random
def sample_batch(words, word_to_translation, batch_size):
"""
sample random batch of words and random correct translation for each word
example usage:
batch_x,batch_y = sample_batch(train_words, word_to_translations,10)
"""
# choose words
batch_words = np.random.choice(words, size=batch_size)
# choose translations
batch_trans_candidates = list(map(word_to_translation.get, batch_words))
batch_trans = list(map(random.choice, batch_trans_candidates))
return inp_voc.to_matrix(batch_words), out_voc.to_matrix(batch_trans)
bx, by = sample_batch(train_words, word_to_translation, batch_size=3)
print("Source:")
print(bx)
print("Target:")
print(by)
from IPython.display import clear_output
from tqdm import tqdm, trange # or use tqdm_notebook,tnrange
loss_history = []
editdist_history = []
iters = 25000
for i in trange(iters):
bx, by = sample_batch(train_words, word_to_translation, 32)
feed_dict = {
supervised_training.input_sequence: bx,
supervised_training.reference_answers: by
}
loss, _ = s.run([supervised_training.loss,
supervised_training.train_step], feed_dict)
loss_history.append(loss)
if (i+1) % REPORT_FREQ == 0:
clear_output(True)
current_scores = score(test_words)
editdist_history.append(current_scores.mean())
plt.figure(figsize=(12, 4))
plt.subplot(131)
plt.title('train loss / training time')
plt.plot(loss_history)
plt.grid()
plt.subplot(132)
plt.title('val score distribution')
plt.hist(current_scores, bins=20)
plt.subplot(133)
plt.title('val score / training time')
plt.plot(editdist_history)
plt.grid()
plt.show()
print("llh=%.3f, mean score=%.3f" %
(np.mean(loss_history[-10:]), np.mean(editdist_history[-10:])))
# Note: it's okay if loss oscillates up and down as long as it gets better on average over long term (e.g. 5k batches)
for word in train_words[:10]:
print("%s -> %s" % (word, translate([word])[0]))
test_scores = []
for start_i in trange(0, len(test_words), 32):
batch_words = test_words[start_i:start_i+32]
batch_trans = translate(batch_words)
distances = list(map(get_distance, batch_words, batch_trans))
test_scores.extend(distances)
print("Supervised test score:", np.mean(test_scores))
```
## Preparing for reinforcement learning (2 points)
First we need to define loss function as a custom tf operation.
The simple way to do so is through `tensorflow.py_func` wrapper.
```
def my_func(x):
# x will be a numpy array with the contents of the placeholder below
return np.sinh(x)
inp = tf.placeholder(tf.float32)
y = tf.py_func(my_func, [inp], tf.float32)
```
__Your task__ is to implement `_compute_levenshtein` function that takes matrices of words and translations, along with input masks, then converts those to actual words and phonemes and computes min-levenshtein via __get_distance__ function above.
```
def _compute_levenshtein(words_ix, trans_ix):
"""
A custom tensorflow operation that computes levenshtein loss for predicted trans.
Params:
- words_ix - a matrix of input letter indices, shape=[batch_size, word_length]
- trans_ix - a matrix of output letter indices, shape=[batch_size, translation_length]
Please implement the function and make sure it passes tests from the next cell.
"""
# convert words to strings
words = inp_voc.to_lines(words_ix)
#assert type(words) is list and type(
# words[0]) is str and len(words) == len(words_ix)
# convert translations to lists
translations = out_voc.to_lines(trans_ix)
#assert type(translations) is list and type(
# translations[0]) is str and len(translations) == len(trans_ix)
# computes levenshtein distances. can be arbitrary python code.
distances = [get_distance(w,t) for w,t in zip(words, translations)]
#assert type(distances) in (list, tuple, np.ndarray) and len(
# distances) == len(words_ix)
distances = np.array(list(distances), dtype='float32')
return distances
def compute_levenshtein(words_ix, trans_ix):
out = tf.py_func(_compute_levenshtein, [words_ix, trans_ix, ], tf.float32)
out.set_shape([None])
return tf.stop_gradient(out)
```
Simple test suite to make sure your implementation is correct. Hint: if you run into any bugs, feel free to use print from inside _compute_levenshtein.
```
# test suite
# sample random batch of (words, correct trans, wrong trans)
batch_words = np.random.choice(train_words, size=100)
batch_trans = list(map(random.choice, map(word_to_translation.get, batch_words)))
batch_trans_wrong = np.random.choice(all_translations, size=100)
batch_words_ix = tf.constant(inp_voc.to_matrix(batch_words))
batch_trans_ix = tf.constant(out_voc.to_matrix(batch_trans))
batch_trans_wrong_ix = tf.constant(out_voc.to_matrix(batch_trans_wrong))
# assert compute_levenshtein is zero for ideal translations
tf_lev_dists = compute_levenshtein(batch_words_ix, batch_trans_ix)
correct_answers_score = tf_lev_dists.eval()
assert np.all(correct_answers_score ==
0), "a perfect translation got nonzero levenshtein score!"
print("Everything seems alright!")
# assert compute_levenshtein matches actual scoring function
wrong_answers_score = compute_levenshtein(
batch_words_ix, batch_trans_wrong_ix).eval()
true_wrong_answers_score = np.array(
list(map(get_distance, batch_words, batch_trans_wrong)))
assert np.all(wrong_answers_score ==
true_wrong_answers_score), "for some word symbolic levenshtein is different from actual levenshtein distance"
print("Everything seems alright!")
```
Once you got it working...
* You may now want to __remove/comment asserts__ from function code for a slight speed-up.
* There's a more detailed tutorial on custom tensorflow ops: [`py_func`](https://www.tensorflow.org/api_docs/python/tf/py_func), [`low-level`](https://www.tensorflow.org/api_docs/python/tf/py_func).
## 3. Self-critical policy gradient (2 points)
In this section you'll implement an algorithm called self-critical sequence training (here's the [article](https://arxiv.org/abs/1612.00563)).
The algorithm is a vanilla policy gradient with a special baseline.
$$ \nabla J = E_{x \sim p(x)}\, E_{y \sim \pi(y|x)} \nabla \log \pi(y|x) \cdot (R(x,y) - b(x)) $$
Here reward R(x,y) is a __negative levenshtein distance__ (since we minimize it). The baseline __b(x)__ represents how well model fares on word __x__.
In practice, this means that we compute baseline as a score of greedy translation, $b(x) = R(x,y_{greedy}(x)) $.
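To make the baseline concrete with made-up numbers: suppose the sampled translations for a batch of three words have Levenshtein distances 2, 0 and 3, while the greedy translations score 1 on every word. Since the reward is the negative distance, the advantage is positive only where sampling beat the greedy decode:

```python
import numpy as np

sample_dist = np.array([2., 0., 3.])  # hypothetical distances of sampled translations
greedy_dist = np.array([1., 1., 1.])  # hypothetical distances of greedy translations

rewards = -sample_dist                # R(x, y) is the negative Levenshtein distance
baseline = -greedy_dist               # b(x) = R(x, y_greedy(x))
advantage = rewards - baseline

print(advantage)  # [-1.  1. -2.]
```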

Luckily, we already obtained the required outputs: `model.greedy_translations, model.greedy_mask` and we only need to compute levenshtein using `compute_levenshtein` function.
```
class trainer:
    input_sequence = tf.placeholder('int32', [None, None])

    # use model to __sample__ symbolic translations given input_sequence
    sample_translations, sample_logp = model.symbolic_translate(input_sequence, greedy=False)
    # use model to __greedy__ symbolic translations given input_sequence
    greedy_translations, greedy_logp = model.symbolic_translate(input_sequence, greedy=True)

    rewards = -compute_levenshtein(input_sequence, sample_translations)
    # compute __negative__ levenshtein for greedy mode
    baseline = -compute_levenshtein(input_sequence, greedy_translations)

    # compute advantage using rewards and baseline
    advantage = rewards - baseline
    assert advantage.shape.ndims == 1, "advantage must be of shape [batch_size]"

    # compute log_pi(a_t|s_t), shape = [batch, seq_length]
    logprobs_phoneme = select_values_over_last_axis(sample_logp, sample_translations)
    # ^-- hint: look at how crossentropy is implemented in supervised learning loss above
    # mind the sign - this one should not be multiplied by -1 :)

    # Compute policy gradient,
    # or rather a surrogate function whose gradient is the policy gradient
    J = logprobs_phoneme * advantage[:, None]
    mask = infer_mask(sample_translations, out_voc.eos_ix)
    loss = -tf.reduce_sum(J * mask) / tf.reduce_sum(mask)

    # regularize with negative entropy, H = -sum(p * log_p). Don't forget the sign!
    # note: for entropy you need probabilities for all tokens (sample_logp), not just logprobs_phoneme
    # hint: you can get sample probabilities from sample_logp using math :)
    entropy = -tf.reduce_sum(sample_logp * tf.math.exp(sample_logp), axis=-1)
    assert entropy.shape.ndims == 2, "please make sure elementwise entropy is of shape [batch,time]"
    loss -= 0.01 * tf.reduce_sum(entropy * mask) / tf.reduce_sum(mask)

    # compute weight updates, clip by norm
    grads = tf.gradients(loss, model.weights)
    grads = tf.clip_by_global_norm(grads, 50)[0]
    train_step = tf.train.AdamOptimizer(
        learning_rate=1e-5).apply_gradients(zip(grads, model.weights))

initialize_uninitialized()
```
# Policy gradient training
```
iters = 100000

for i in trange(iters):
    bx = sample_batch(train_words, word_to_translation, 32)[0]
    pseudo_loss, _ = s.run([trainer.loss, trainer.train_step],
                           {trainer.input_sequence: bx})
    loss_history.append(pseudo_loss)

    if (i + 1) % REPORT_FREQ == 0:
        clear_output(True)
        current_scores = score(test_words)
        editdist_history.append(current_scores.mean())

        plt.figure(figsize=(12, 4))
        plt.subplot(131)
        plt.title('train loss / training time')
        plt.plot(loss_history)
        plt.grid()
        plt.subplot(132)
        plt.title('val score distribution')
        plt.hist(current_scores, bins=20)
        plt.subplot(133)
        plt.title('val score / training time')
        plt.plot(editdist_history)
        plt.grid()
        plt.show()
        print("llh=%.3f, mean score=%.3f" %
              (np.mean(loss_history[-10:]), np.mean(editdist_history[-10:])))
```
### Results
```
for word in train_words[:10]:
    print("%s -> %s" % (word, translate([word])[0]))

test_scores = []
for start_i in trange(0, len(test_words), 32):
    batch_words = test_words[start_i:start_i + 32]
    batch_trans = translate(batch_words)
    distances = list(map(get_distance, batch_words, batch_trans))
    test_scores.extend(distances)
print("Supervised test score:", np.mean(test_scores))
# ^^ If you get Out Of Memory, please replace this with batched computation
```
***
Checking Backend Code:
```
m = s.run(sampled_out)
print(m,'\n')
q = s.run(logp)
print(q.shape)
print(q[0])
_b = tf.fill([5], 3)
print(s.run(_b))
_f = tf.one_hot(_b, 7) + 0.1
print(s.run(_f))
print(s.run(_f[:, None]))
_e = np.array([1, 2, 3, 4, 5, 6])
_i = np.array([1, 1, 1, 1, 1, 1])
_so = tf.scan(lambda s, e: e * s, _e, initializer=_i)
print(s.run(_so))
_values = [ [1,2,3],
[4,5,6]]
_indices = [ [] ]
batch_size, seq_len = tf.shape(_indices)[0], tf.shape(_indices)[1]
batch_i = tf.tile(tf.range(0, batch_size)[:, None], [1, seq_len])
time_i = tf.tile(tf.range(0, seq_len)[None, :], [batch_size, 1])
indices_nd = tf.stack([batch_i, time_i, _indices], axis=-1)
```
## Step 6: Make it actually work (5++ pts)
<img src=https://github.com/yandexdataschool/Practical_RL/raw/master/yet_another_week/_resource/do_something_scst.png width=400>
In this section we want you to finally __restart with EASY_MODE=False__ and experiment to find a good model/curriculum for that task.
We recommend you to start with the following architecture
```
encoder---decoder
P(y|h)
^
LSTM -> LSTM
^ ^
biLSTM -> LSTM
^ ^
input y_prev
```
__Note:__ you can fit all 4 state tensors of both LSTMs in a single state - just assume that it contains, for example, [h0, c0, h1, c1] - pack it in encode and update it in decode.
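A plain-numpy sketch of that packing trick (shapes and names here are illustrative; in the notebook itself you would apply the same idea to the actual LSTM state tensors with `tf.concat` and `tf.split`):

```python
import numpy as np

batch, units = 4, 8
h0, c0, h1, c1 = (np.random.randn(batch, units) for _ in range(4))

# "encode": pack all four state tensors into one [batch, 4 * units] matrix
state = np.concatenate([h0, c0, h1, c1], axis=-1)

# "decode": unpack the single state back into its four components
h0_, c0_, h1_, c1_ = np.split(state, 4, axis=-1)

assert np.allclose(h0, h0_) and np.allclose(c1, c1_)
```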
Here are some cool ideas on what you can do then.
__General tips & tricks:__
* In some tensorflow versions and for some layers, it is required that each rnn/gru/lstm cell gets its own `tf.variable_scope(unique_name, reuse=False)`.
* Otherwise it will complain about wrong tensor sizes because it tries to reuse weights from one rnn to the other.
* You will likely need to adjust pre-training time for such a network.
* Supervised pre-training may benefit from clipping gradients somehow.
* SCST may tolerate a higher learning rate in some cases, and it can help to change the entropy regularizer over time.
* It's often useful to save pre-trained model parameters to not re-train it every time you want new policy gradient parameters.
* When leaving training for nighttime, try setting REPORT_FREQ to a larger value (e.g. 500) not to waste time on it.
__Formal criteria:__
To get 5 points we want you to build an architecture that:
* _doesn't consist of single GRU_
* _works better_ than single GRU baseline.
* We also want you to provide either learning curve or trained model, preferably both
* ... and write a brief report or experiment log describing what you did and how it fared.
### Attention
There's more than one way to connect decoder to encoder
* __Vanilla:__ layer_i of encoder last state goes to layer_i of decoder initial state
* __Every tick:__ feed encoder last state _on every iteration_ of decoder.
* __Attention:__ allow decoder to "peek" at one (or several) positions of encoded sequence on every tick.
The most effective (and cool) of those is, of course, attention.
You can read more about attention [in this nice blog post](https://distill.pub/2016/augmented-rnns/). The easiest way to begin is to use "soft" attention with "additive" or "dot-product" intermediate layers.
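For instance, a minimal numpy sketch of soft dot-product attention over a single decoder step (all names and shapes here are illustrative, not part of the notebook's API):

```python
import numpy as np

def soft_dot_attention(query, enc_states):
    """query: [units], enc_states: [time, units] -> (context [units], weights [time])."""
    scores = enc_states @ query                      # similarity score per time step
    scores -= scores.max()                           # subtract max for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over time steps
    return weights @ enc_states, weights             # weighted sum of encoder states

enc = np.random.randn(5, 8)        # 5 encoder time steps, 8 hidden units
ctx, w = soft_dot_attention(np.random.randn(8), enc)
assert np.isclose(w.sum(), 1.0) and ctx.shape == (8,)
```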
__Tips__
* Model usually generalizes better if you no longer allow decoder to see final encoder state
* Once your model made it through several epochs, it is a good idea to visualize attention maps to understand what your model has actually learned
* There's more stuff [here](https://github.com/yandexdataschool/Practical_RL/blob/master/week8_scst/bonus.ipynb)
* If you opted for hard attention, we recommend [gumbel-softmax](https://blog.evjang.com/2016/11/tutorial-categorical-variational.html) instead of sampling. Also please make sure soft attention works fine before you switch to hard.
### UREX
* This is a way to improve exploration in policy-based settings. The main idea is that you find and upweight under-appreciated actions.
* Here's [video](https://www.youtube.com/watch?v=fZNyHoXgV7M&feature=youtu.be&t=3444)
and an [article](https://arxiv.org/abs/1611.09321).
* You may want to reduce batch size 'cuz UREX requires you to sample multiple times per source sentence.
* Once you got it working, try using experience replay with importance sampling instead of (in addition to) basic UREX.
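A rough numpy sketch of the idea, assuming the commonly cited form of the UREX sample weights - a softmax over samples of $R/\tau - \log \pi$ (the temperature and sample values below are made up):

```python
import numpy as np

def urex_weights(rewards, logp, tau=0.1):
    """Weight samples by how under-appreciated they are:
    high reward but low log-probability under the current policy."""
    scores = rewards / tau - logp
    scores -= scores.max()          # subtract max for numerical stability
    w = np.exp(scores)
    return w / w.sum()              # normalize over the samples

# three hypothetical samples for one source word
w = urex_weights(rewards=np.array([-1., -3., -2.]),
                 logp=np.array([-0.5, -4.0, -2.0]))
assert np.isclose(w.sum(), 1.0)
```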
### Some additional ideas:
* (advanced deep learning) It may be a good idea to first train on small phrases and then adapt to larger ones (a.k.a. training curriculum).
* (advanced nlp) You may want to switch from raw utf8 to something like unicode or even syllables to make task easier.
* (advanced nlp) Since Hebrew words are written __with vowels omitted__, you may want to use a small Hebrew vowel markup dataset at `he-pron-wiktionary.txt`.
### Bonus hints: [here](https://github.com/yandexdataschool/Practical_RL/blob/master/week8_scst/bonus.ipynb)
```
assert not EASY_MODE, "make sure you set EASY_MODE = False at the top of the notebook."
```
`[your report/log here or anywhere you please]`
__Contributions:__ This notebook is brought to you by
* Yandex [MT team](https://tech.yandex.com/translate/)
* Denis Mazur ([DeniskaMazur](https://github.com/DeniskaMazur)), Oleg Vasilev ([Omrigan](https://github.com/Omrigan/)), Dmitry Emelyanenko ([TixFeniks](https://github.com/tixfeniks)) and Fedor Ratnikov ([justheuristic](https://github.com/justheuristic/))
* Dataset is parsed from [Wiktionary](https://en.wiktionary.org), which is under CC-BY-SA and GFDL licenses.
```
from tensorflow.keras.layers import Dense
Dense(10, activation="relu", kernel_initializer="he_normal")
from tensorflow.keras.initializers import VarianceScaling
from tensorflow.keras.layers import Dense
he_avg_init = VarianceScaling(scale=2., mode='fan_avg',
distribution='uniform')
Dense(10, activation='sigmoid', kernel_initializer=he_avg_init)
from tensorflow.keras.layers import Dense, LeakyReLU
from tensorflow.keras.models import Sequential
model = Sequential([
[...]
Dense(10, kernel_initializer="he_normal"),
LeakyReLU(alpha=0.2)
[...]
])
from tensorflow.keras.layers import Flatten, Dense, PReLU
from tensorflow.keras.models import Sequential
model = Sequential([
Flatten(input_shape=[28, 28]),
Dense(300, kernel_initializer='he_normal'),
PReLU(),
Dense(100, kernel_initializer='he_normal'),
PReLU(),
Dense(10, activation='softmax')
])
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Sequential
model = Sequential()
model.add(Flatten(input_shape=[28, 28]))
model.add(Dense(300, activation='selu', kernel_initializer='lecun_normal'))
for layer in range(99):
    model.add(Dense(100, activation='selu', kernel_initializer='lecun_normal'))
model.add(Dense(10, activation='softmax'))
from tensorflow.keras.layers import Dense, Flatten, BatchNormalization
from tensorflow.keras.models import Sequential
model = Sequential([
Flatten(input_shape=[28, 28]),
BatchNormalization(),
Dense(300, activation="elu", kernel_initializer="he_normal"),
BatchNormalization(),
Dense(100, activation="elu", kernel_initializer="he_normal"),
BatchNormalization(),
Dense(10, activation="softmax")
])
model.summary()
from tensorflow.keras.layers import Dense, Flatten, BatchNormalization, Activation
from tensorflow.keras.models import Sequential
model = Sequential([
Flatten(input_shape=[28, 28]),
BatchNormalization(),
Dense(300, kernel_initializer='he_normal', use_bias=False),
BatchNormalization(),
Activation('elu'),
Dense(100, kernel_initializer='he_normal', use_bias=False),
BatchNormalization(),
Activation('elu'),
Dense(10, activation='softmax')
])
from tensorflow.keras.optimizers import Adagrad
optimizer = Adagrad(lr=0.001)
from tensorflow.keras.optimizers import SGD
optimizer = SGD(clipvalue=1.0)
model.compile(loss='mse', optimizer=optimizer)
from tensorflow.keras.optimizers import Adamax
optimizer = Adamax(lr=0.001, beta_1=0.9, beta_2=0.999)
from tensorflow.keras.models import load_model, Sequential
from tensorflow.keras.layers import Dense
model_A = load_model("my_model_A.h5")
model_B_on_A = Sequential(model_A.layers[:-1])
model_B_on_A.add(Dense(1, activation="sigmoid"))
from tensorflow.keras.models import clone_model
model_A_clone = clone_model(model_A)
model_A_clone.set_weights(model_A.get_weights())
for layer in model_B_on_A.layers[:-1]:
    layer.trainable = False
model_B_on_A.compile(loss="binary_crossentropy", optimizer="sgd",
metrics=["accuracy"])
from tensorflow.keras.optimizers import SGD
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=4,
validation_data=(X_valid_B, y_valid_B))
for layer in model_B_on_A.layers[:-1]:
    layer.trainable = True
optimizer = SGD(lr=1e-4)
model_B_on_A.compile(loss="binary_crossentropy", optimizer=optimizer,
metrics=["accuracy"])
history = model_B_on_A.fit(X_train_B, y_train_B, epochs=16,
validation_data=(X_valid_B, y_valid_B))
from tensorflow.keras.optimizers import SGD
optimizer = SGD(lr=0.001, momentum=0.9)
from tensorflow.keras.optimizers import SGD
optimizer = SGD(lr=0.001, momentum=0.9, nesterov=True)
from tensorflow.keras.optimizers import RMSprop
optimizer = RMSprop(lr=0.001, rho=0.9)
from tensorflow.keras.optimizers import Adam
optimizer = Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
from tensorflow.keras.optimizers import SGD
optimizer = SGD(lr=0.01, decay=1e-4)
def exponential_decay_fn(epoch):
    return 0.01 * 0.1**(epoch / 20)

def piecewise_constant_fn(epoch):
    if epoch < 5:
        return 0.01
    elif epoch < 15:
        return 0.005
    else:
        return 0.001
from tensorflow.keras.callbacks import LearningRateScheduler
lr_scheduler = LearningRateScheduler(piecewise_constant_fn)
history = model.fit(X_train_scaled, y_train, epochs=n_epochs,
validation_data=(X_valid_scaled, y_valid),
callbacks=[lr_scheduler])
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.optimizers.schedules import ExponentialDecay
from tensorflow.keras.optimizers import SGD
# Passing this callback to fit() multiplies the learning rate by 0.5 whenever the
# best validation loss fails to improve for five consecutive epochs.
lr_scheduler = ReduceLROnPlateau(factor=0.5, patience=5)
s = 20 * len(X_train) // 32  # total number of steps over 20 epochs (batch size = 32)
learning_rate = ExponentialDecay(0.01, s, 0.1)
optimizer = SGD(learning_rate=learning_rate)
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l1, l2, l1_l2
layer = Dense(100, activation='relu',
kernel_initializer='he_normal',
kernel_regularizer=l2(0.01))
from functools import partial
from tensorflow.keras.regularizers import l1, l2, l1_l2
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten
RegularizedDense = partial(Dense,
activation='elu',
kernel_initializer='he_normal',
kernel_regularizer=l2(0.01))
model = Sequential([
Flatten(input_shape=[28,28]),
RegularizedDense(300),
RegularizedDense(100),
RegularizedDense(10, activation='softmax',
kernel_initializer='glorot_uniform',)
])
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dropout, Dense
model = Sequential([
Flatten(input_shape=[28, 28]),
Dropout(rate=0.2),
Dense(300, activation='elu', kernel_initializer='he_normal'),
Dropout(rate=0.2),
Dense(100, activation='elu', kernel_initializer='he_normal'),
Dropout(rate=0.2),
Dense(10, activation='softmax')
])
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, AlphaDropout, Dense
model = Sequential([
Flatten(input_shape=[28, 28]),
AlphaDropout(rate=0.2),
Dense(300, activation='selu', kernel_initializer='lecun_normal'),
AlphaDropout(rate=0.2),
Dense(100, activation='selu', kernel_initializer='lecun_normal'),
AlphaDropout(rate=0.2),
Dense(10, activation='softmax')
])
import numpy as np
y_probas = np.stack([model(X_test_scaled, training=True)
for sample in range(100)])
y_proba = y_probas.mean(axis=0)
y_std = y_probas.std(axis=0)
from tensorflow.keras.layers import Dropout
class MCDropout(Dropout):
    def call(self, inputs):
        return super().call(inputs, training=True)
from tensorflow.keras.layers import Dense
from tensorflow.keras.constraints import max_norm
Dense(100, activation='elu', kernel_initializer='he_normal',
kernel_constraint=max_norm(1.))
```
# Transfer Learning
A Convolutional Neural Network (CNN) for image classification is made up of multiple layers that extract features, such as edges and corners, and then uses a final fully-connected layer to classify objects based on these features. You can visualize it like this:
<table>
<tr><td rowspan=2 style='border: 1px solid black;'>⇒</td><td style='border: 1px solid black;'>Convolutional Layer</td><td style='border: 1px solid black;'>Pooling Layer</td><td style='border: 1px solid black;'>Convolutional Layer</td><td style='border: 1px solid black;'>Pooling Layer</td><td style='border: 1px solid black;'>Fully Connected Layer</td><td rowspan=2 style='border: 1px solid black;'>⇒</td></tr>
<tr><td colspan=4 style='border: 1px solid black; text-align:center;'>Feature Extraction</td><td style='border: 1px solid black; text-align:center;'>Classification</td></tr>
</table>
*Transfer Learning* is a technique where you can take an existing trained model and re-use its feature extraction layers, replacing its final classification layer with a fully-connected layer trained on your own custom images. With this technique, your model benefits from the feature extraction training that was performed on the base model (which may have been based on a larger training dataset than you have access to) to build a classification model for your own specific set of object classes.
How does this help? Well, think of it this way. Suppose you take a professional tennis player and a complete beginner, and try to teach them both how to play racquetball. It's reasonable to assume that the professional tennis player will be easier to train, because many of the underlying skills involved in racquetball are already learned. Similarly, a pre-trained CNN model may be easier to train to classify a specific set of objects because it's already learned how to identify the features of common objects, such as edges and corners. Fundamentally, a pre-trained model can be a great way to produce an effective classifier even when you have limited data with which to train it.
In this notebook, we'll see how to implement transfer learning for a classification model using TensorFlow.
## Install and import TensorFlow libraries
Let's start by ensuring that we have the latest version of the **TensorFlow** package installed and by importing the TensorFlow libraries we're going to use.
```
!pip install --upgrade tensorflow
import tensorflow
from tensorflow import keras
print('TensorFlow version:',tensorflow.__version__)
print('Keras version:',keras.__version__)
```
## Prepare the base model
To use transfer learning, we need a base model from which we can use the trained feature extraction layers. The ***resnet*** model is a CNN-based image classifier that has been pre-trained using a huge dataset of 3-color channel images of 224x224 pixels. Let's create an instance of it with some pretrained weights, excluding its final (top) prediction layer.
```
base_model = keras.applications.resnet.ResNet50(weights='imagenet', include_top=False, input_shape=(224,224,3))
print(base_model.summary())
```
## Prepare the image data
The pretrained model has many layers, starting with a convolutional layer that starts the feature extraction process from image data.
For feature extraction to work with our own images, we need to ensure that the image data we use to train our prediction layer has the same number of features (pixel values) as the images originally used to train the feature extraction layers, so we need data loaders for color images that are 224x224 pixels in size.
Tensorflow includes functions for loading and transforming data. We'll use these to create a generator for training data, and a second generator for test data (which we'll use to validate the trained model). The loaders will transform the image data to match the format used to train the original resnet CNN model and normalize them.
Run the following cell to define the data generators and list the classes for our images.
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator
data_folder = 'data/shapes'
pretrained_size = (224,224)
batch_size = 30
print("Getting Data...")
datagen = ImageDataGenerator(rescale=1./255, # normalize pixel values
validation_split=0.3) # hold back 30% of the images for validation
print("Preparing training dataset...")
train_generator = datagen.flow_from_directory(
data_folder,
target_size=pretrained_size, # resize to match model expected input
batch_size=batch_size,
class_mode='categorical',
subset='training') # set as training data
print("Preparing validation dataset...")
validation_generator = datagen.flow_from_directory(
data_folder,
target_size=pretrained_size, # resize to match model expected input
batch_size=batch_size,
class_mode='categorical',
subset='validation') # set as validation data
classnames = list(train_generator.class_indices.keys())
print("class names: ", classnames)
```
## Create a prediction layer
We downloaded the complete *resnet* model excluding its final prediction layer, so we need to combine these layers with a fully-connected (*dense*) layer that takes the flattened outputs from the feature extraction layers and generates a prediction for each of our image classes.
We also need to freeze the feature extraction layers to retain the trained weights. Then when we train the model using our images, only the final prediction layer will learn new weight and bias values - the pre-trained weights already learned for feature extraction will remain the same.
```
from tensorflow.keras import applications
from tensorflow.keras import Model
from tensorflow.keras.layers import Flatten, Dense
# Freeze the already-trained layers in the base model
for layer in base_model.layers:
    layer.trainable = False
# Create prediction layer for classification of our images
x = base_model.output
x = Flatten()(x)
prediction_layer = Dense(len(classnames), activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=prediction_layer)
# Compile the model
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# Now print the full model, which will include the layers of the base model plus the dense layer we added
print(model.summary())
```
## Train the Model
With the layers of the CNN defined, we're ready to train it using our image data. The weights used in the feature extraction layers from the base resnet model will not be changed by training, only the final dense layer that maps the features to our shape classes will be trained.
```
# Train the model over 3 epochs
num_epochs = 3
history = model.fit(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
validation_steps = validation_generator.samples // batch_size,
epochs = num_epochs)
```
## View the loss history
We tracked average training and validation loss for each epoch. We can plot these to verify that the loss reduced over the training process and to detect *over-fitting* (which is indicated by a continued drop in training loss after validation loss has levelled out or started to increase).
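As a rough illustration of that over-fitting signature (the loss values below are made up), one crude heuristic flags the first epoch where validation loss rises while training loss keeps falling:

```python
def first_overfit_epoch(train_loss, val_loss):
    """Return the 1-based epoch where validation loss rises while training loss
    still falls, or None if that never happens. A crude heuristic, not a formal test."""
    for i in range(1, len(val_loss)):
        if val_loss[i] > val_loss[i - 1] and train_loss[i] < train_loss[i - 1]:
            return i + 1
    return None

train = [1.0, 0.6, 0.4, 0.3, 0.25]   # made-up training losses per epoch
val = [1.1, 0.7, 0.5, 0.55, 0.6]     # made-up validation losses per epoch
print(first_overfit_epoch(train, val))  # 4
```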
```
%matplotlib inline
from matplotlib import pyplot as plt
epoch_nums = range(1,num_epochs+1)
training_loss = history.history["loss"]
validation_loss = history.history["val_loss"]
plt.plot(epoch_nums, training_loss)
plt.plot(epoch_nums, validation_loss)
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['training', 'validation'], loc='upper right')
plt.show()
```
## Evaluate model performance
We can see the final accuracy based on the test data, but typically we'll want to explore performance metrics in a little more depth. Let's plot a confusion matrix to see how well the model is predicting each class.
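As a quick reminder of the layout scikit-learn uses (the labels below are made up): `confusion_matrix(y_true, y_pred)` puts actual classes on the rows and predicted classes on the columns.

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
cm = confusion_matrix(y_true, y_pred)
print(cm)
# Row i, column j counts samples whose actual class is i and predicted class is j:
# [[1 1 0]
#  [0 2 0]
#  [0 0 1]]
```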
```
# Tensorflow doesn't have a built-in confusion matrix metric, so we'll use SciKit-Learn
import numpy as np
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
print("Generating predictions from validation data...")
# Get the image and label arrays for the first batch of validation data
x_test = validation_generator[0][0]
y_test = validation_generator[0][1]
# Use the model to predict the class
class_probabilities = model.predict(x_test)
# The model returns a probability value for each class
# The one with the highest probability is the predicted class
predictions = np.argmax(class_probabilities, axis=1)
# The actual labels are one-hot encoded (e.g. [0 1 0]), so get the index of the value 1
true_labels = np.argmax(y_test, axis=1)
# Plot the confusion matrix
cm = confusion_matrix(true_labels, predictions)
plt.imshow(cm, interpolation="nearest", cmap=plt.cm.Blues)
plt.colorbar()
tick_marks = np.arange(len(classnames))
plt.xticks(tick_marks, classnames, rotation=85)
plt.yticks(tick_marks, classnames)
plt.xlabel("Predicted Shape")
plt.ylabel("Actual Shape")
plt.show()
```
## Use the trained model
Now that we've trained the model, we can use it to predict the class of an image.
```
from tensorflow.keras import models
import numpy as np
from random import randint
import os
%matplotlib inline
# Function to predict the class of an image
def predict_image(classifier, img):
    # The model expects a batch of images as input, so we'll create an array of 1 image
    imgfeatures = img.reshape(1, img.shape[0], img.shape[1], img.shape[2])
    # We need to format the input to match the training data
    # The generator loaded the values as floating point numbers
    # and normalized the pixel values, so...
    imgfeatures = imgfeatures.astype('float32')
    imgfeatures /= 255
    # Use the model to predict the image class
    class_probabilities = classifier.predict(imgfeatures)
    # Find the class prediction with the highest predicted probability
    index = int(np.argmax(class_probabilities, axis=1)[0])
    return index
# Function to create a random image (of a square, circle, or triangle)
def create_image(size, shape):
    from random import randint
    import numpy as np
    from PIL import Image, ImageDraw

    xy1 = randint(10, 40)
    xy2 = randint(60, 100)
    col = (randint(0, 200), randint(0, 200), randint(0, 200))
    img = Image.new("RGB", size, (255, 255, 255))
    draw = ImageDraw.Draw(img)
    if shape == 'circle':
        draw.ellipse([(xy1, xy1), (xy2, xy2)], fill=col)
    elif shape == 'triangle':
        draw.polygon([(xy1, xy1), (xy2, xy2), (xy2, xy1)], fill=col)
    else:  # square
        draw.rectangle([(xy1, xy1), (xy2, xy2)], fill=col)
    del draw
    return np.array(img)
# Create a random test image
classnames = os.listdir(os.path.join('data', 'shapes'))
classnames.sort()
img = create_image ((224,224), classnames[randint(0, len(classnames)-1)])
plt.axis('off')
plt.imshow(img)
# Use the classifier to predict the class
class_idx = predict_image(model, img)
print (classnames[class_idx])
```
## Learn More
* [Tensorflow Documentation](https://www.tensorflow.org/tutorials/images/transfer_learning)
# Kernel PCA
## Importing the libraries
```
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
```
## Importing the dataset
```
dataset = pd.read_csv('Wine.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
```
## Splitting the dataset into the Training set and Test set
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
```
## Feature Scaling
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
```
## Applying Kernel PCA
```
from sklearn.decomposition import KernelPCA
kpca = KernelPCA(n_components = 2, kernel = 'rbf')
X_train = kpca.fit_transform(X_train)
X_test = kpca.transform(X_test)
```
## Training the Logistic Regression model on the Training set
```
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
```
## Making the Confusion Matrix
```
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
```
## Visualising the Training set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()
```
## Visualising the Test set results
```
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.legend()
plt.show()
```
# Metacells Vignette
This vignette demonstrates step-by-step use of the metacells package to analyze scRNA-seq data. The latest version of this vignette is available in [Github](https://github.com/tanaylab/metacells/blob/master/sphinx/Manual_Analysis.rst).
## Preparation
First, let's import the Python packages we'll be using. If you don't have these installed, run `pip install metacells`, and also `pip install seaborn` for the embedded diagrams - this is just for the purpose of this vignette; the metacells package itself has no dependency on any visualization packages.
```
import anndata as ad
import matplotlib.pyplot as plt
import metacells as mc
import numpy as np
import os
import pandas as pd
import scipy.sparse as sp
import seaborn as sb
from math import hypot
from matplotlib.collections import LineCollection
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg')
sb.set_style("white")
```
## Getting the raw data
The metacells package is built around the [scanpy](https://pypi.org/project/scanpy/) framework. In particular it uses [anndata](https://pypi.org/project/anndata/) to hold the analyzed data, and uses `.h5ad` files to persist this data on disk. You can also access these files directly from R using several packages, most notably [anndata](https://cran.r-project.org/web/packages/anndata/index.html).
You can convert data from various "standard" scRNA data formats into a `.h5ad` file using any of the functions available in the scanpy and/or anndata packages. Note that converting textual data to this format takes a "non-trivial" amount of time for large data sets. Mercifully, this is a one-time operation. Less excusable is the fact that none of the above packages memory-map the `.h5ad` files so reading large files will still take a noticeable amount of time for no good reason.
For the purposes of this vignette, we'll use a 160K cells data set which is a unification of
several batches of PBMC scRNA data from [10x](https://support.10xgenomics.com/single-cell-gene-expression/datasets), specifically from the "Single Cell 3' Paper: Zheng et al. 2017" datasets. Since 10x do not provide stable links to their data sets, and to avoid the long time it would take to convert their textual format files to `.h5ad` files, simply download the compressed data file [pbmc163k.h5ad.gz](http://www.wisdom.weizmann.ac.il/~atanay/metac_data/pbmc163k.h5ad.gz) to your work directory (using `wget`, `curl`, your browser's download function, etc.), and then run `gunzip pbmc163k.h5ad.gz` (or your zip program of choice) to extract the `pbmc163k.h5ad` file which we read below.
The metacells package uses a convention where the `__name__` unstructured property of the data contains its name for logging purposes; we initialize this name to `PBMC` below.
```
raw = ad.read_h5ad('pbmc163k.h5ad')
mc.ut.set_name(raw, 'PBMC')
print(raw.shape)
```
## Cleaning the data
The first step in processing the data is to extract a "clean" subset of it for further analysis.
If the data set contains metadata that can be used to rule out some of the genes or cells, this is a good time to do it, using the `mc.ut.slice` function.
Regardless of such metadata, we still need to perform initial filtering of the data. The exact details might vary depending on your specific data set's origins. Still, the metacells package supports a basic 2-phase procedure which should be useful in many cases.
### Cleaning the genes
The first phase excludes genes from the "clean" data.
#### Excluding genes by name
Some genes are known to be detrimental for the analysis and should be excluded from the clean data based on their name. The poster child for such genes are mitochondrial genes which we exclude using a pattern (all genes whose name starts with `MT-`).
```
excluded_gene_names = ['IGHMBP2', 'IGLL1', 'IGLL5', 'IGLON5', 'NEAT1', 'TMSB10', 'TMSB4X']
excluded_gene_patterns = ['MT-.*']
```
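For intuition, resolving an explicit name list plus regex patterns into a per-gene boolean mask (roughly what `mc.tl.find_named_genes` does later in this vignette) can be sketched as follows; the helper and the toy gene list are illustrative, not part of the package API:

```python
import re

def named_genes_mask(gene_names, names=(), patterns=()):
    # A gene is selected if its name is listed explicitly or fully
    # matches any of the regex patterns (note the full match: 'MT-.*'
    # selects 'MT-CO1' but not 'MTX1').
    compiled = [re.compile(pattern) for pattern in patterns]
    return [
        name in names or any(regex.fullmatch(name) for regex in compiled)
        for name in gene_names
    ]

genes = ['MT-CO1', 'NEAT1', 'ACTB', 'TMSB4X', 'MTX1']
print(named_genes_mask(genes, names=['NEAT1', 'TMSB4X'], patterns=['MT-.*']))
# [True, True, False, True, False]
```

The full match is why the `MT-.*` pattern is safe: it cannot accidentally catch genes that merely contain `MT` somewhere in their name.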
#### Excluding genes by their expression
We also want to exclude genes based on their expression. For example, it makes no sense to keep genes which have zero expression in our data set - in general we allow specifying a threshold on the minimal total UMIs of the gene in the data set. In addition, we have discovered it is useful to exclude "noisy lonely genes", that is, genes which have a significant expression level but no significant correlation with any other gene.
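The minimal-total-UMIs part of this filter is easy to picture on a toy matrix (made-up numbers, not the package implementation; detecting "noisy lonely" genes additionally needs the gene-gene correlation analysis described above):

```python
import numpy as np

def properly_sampled_genes(umis, min_gene_total=1):
    # Keep genes whose total UMI count, summed over all cells,
    # meets the threshold; genes with zero expression always fail.
    return umis.sum(axis=0) >= min_gene_total

# Toy cells x genes matrix: the third gene is never observed.
umis = np.array([[5, 0, 0],
                 [3, 2, 0],
                 [4, 1, 0]])
print(properly_sampled_genes(umis))  # [ True  True False]
```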
#### Automating clean genes selection
We provide a function that automates the above (given an explicit list of excluded gene names). All it does is create per-gene (variable) annotations in the data: `excluded_gene`, `properly_sampled_gene`, and `noisy_lonely_gene`. You can achieve the same effect by manually invoking the lower-level functions (e.g., `mc.tl.find_noisy_lonely_genes`). Or, you could create additional masks of your own based on your own criteria.
```
mc.pl.analyze_clean_genes(raw,
excluded_gene_names=excluded_gene_names,
excluded_gene_patterns=excluded_gene_patterns,
random_seed=123456)
```
We then combine all these masks into a final `clean_gene` mask. By default this is based on the three masks listed above, but you can customize it to use any list of per-gene masks instead.
```
mc.pl.pick_clean_genes(raw)
```
This is a good time to save the data so we can load it later without recomputing it. We'll do this under a different name to avoid modifying the raw file, and we'll rename our variable referring to it for clarity.
```
raw.write('full.h5ad')
full = raw
```
### Cleaning the cells
The second phase is excluding cells. We do so based on two criteria: the total number of UMIs we have for each cell, and the fraction of these UMIs that come from excluded (non-clean) genes.
Setting these thresholds is done manually. To guide this decision, we can visualize the relevant distributions.
#### Thresholds on the total number of UMIs
We'll start with looking at the total UMIs per cell. We set a threshold for the minimal and maximal number of UMIs of cells we wish to analyze.
```
properly_sampled_min_cell_total = 800
properly_sampled_max_cell_total = 8000
total_umis_of_cells = mc.ut.get_o_numpy(full, name='__x__', sum=True)
plot = sb.distplot(total_umis_of_cells)
plot.set(xlabel='UMIs', ylabel='Density', yticks=[])
plot.axvline(x=properly_sampled_min_cell_total, color='darkgreen')
plot.axvline(x=properly_sampled_max_cell_total, color='crimson')
too_small_cells_count = sum(total_umis_of_cells < properly_sampled_min_cell_total)
too_large_cells_count = sum(total_umis_of_cells > properly_sampled_max_cell_total)
too_small_cells_percent = 100.0 * too_small_cells_count / len(total_umis_of_cells)
too_large_cells_percent = 100.0 * too_large_cells_count / len(total_umis_of_cells)
print("Will exclude %s (%.2f%%) cells with less than %s UMIs"
% (too_small_cells_count,
too_small_cells_percent,
properly_sampled_min_cell_total))
print("Will exclude %s (%.2f%%) cells with more than %s UMIs"
% (too_large_cells_count,
too_large_cells_percent,
properly_sampled_max_cell_total))
```
#### Thresholds on the fraction of excluded gene UMIs
We also set a threshold on the fraction of excluded gene UMIs in each cell we wish to analyze. This ensures that there will be a sufficient number of clean gene UMIs left to analyze.
```
properly_sampled_max_excluded_genes_fraction = 0.1
excluded_genes_data = mc.tl.filter_data(full, var_masks=['~clean_gene'])[0]
excluded_umis_of_cells = mc.ut.get_o_numpy(excluded_genes_data, name='__x__', sum=True)
excluded_fraction_of_umis_of_cells = excluded_umis_of_cells / total_umis_of_cells
plot = sb.distplot(excluded_fraction_of_umis_of_cells)
plot.set(xlabel='Fraction of excluded gene UMIs', ylabel='Density', yticks=[])
plot.axvline(x=properly_sampled_max_excluded_genes_fraction, color='crimson')
too_excluded_cells_count = sum(excluded_fraction_of_umis_of_cells > properly_sampled_max_excluded_genes_fraction)
too_excluded_cells_percent = 100.0 * too_excluded_cells_count / len(total_umis_of_cells)
print("Will exclude %s (%.2f%%) cells with more than %.2f%% excluded gene UMIs"
% (too_excluded_cells_count,
too_excluded_cells_percent,
100.0 * properly_sampled_max_excluded_genes_fraction))
```
#### Automating clean cells selection
We provide a function that automates the above (given the thresholds). All it does is create a per-cell (observation) annotation in the data: `properly_sampled_cell`. You can achieve the same effect by manually invoking the lower-level functions (e.g., `mc.tl.find_properly_sampled_cells`). Or, you could create additional masks of your own based on your own criteria.
```
mc.pl.analyze_clean_cells(
full,
properly_sampled_min_cell_total=properly_sampled_min_cell_total,
properly_sampled_max_cell_total=properly_sampled_max_cell_total,
properly_sampled_max_excluded_genes_fraction=properly_sampled_max_excluded_genes_fraction)
```
We again combine all the relevant masks into a final `clean_cell` mask. By default this is based just on the `properly_sampled_cell` mask, but you can customize it to use any list of per-cell masks instead.
```
mc.pl.pick_clean_cells(full)
```
### Extracting the clean data
We now extract just the clean genes and cells data out of the data set, using the `clean_gene` and `clean_cell` masks, to obtain the clean data we'll be analyzing.
```
clean = mc.pl.extract_clean_data(full)
```
### Initial forbidden genes
Some of the genes that are included in the clean data are "lateral", that is, they indicate some real biological behavior such as cell cycle, but are irrelevant to the biological questions we are interested in. Such genes shouldn't be completely excluded - for example, they are used to detect outliers. That is, we will still make sure the level of the expression of these genes is consistent for all the cells (e.g., the cells will be of the same cell cycle stage), but we do not want the algorithm to create metacells based on these genes (e.g., creating a metacell with a strong consistent S-state signature, but mixing up weakly different cell behaviors which we are trying to isolate).
To ensure this, we can specify (again by name or by pattern) "forbidden genes", that is, genes which must not be used as "feature genes". Coming up with the list of forbidden genes for a new data set is not trivial, and in general may require an iterative approach, where we generate metacells, understand their behavior, identify additional lateral gene modules we'd like to add to the list, and then recompute the metacells.
To kickstart this process, we can start with a few "known suspect" genes, and (manually) consider genes which are related (correlated) to them. We correlate all the (interesting) genes with each other (using a random subset of the cells for efficiency), cluster the genes using these correlations, split the genes into modules with some maximal number of genes in each, and finally look at each cluster containing any of the suspect genes to decide which genes to add to the list.
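The correlate-then-inspect idea can be sketched on synthetic data (illustrative only; in the vignette the heavy lifting is done by `mc.pl.relate_genes`):

```python
import numpy as np

# Synthetic cells x genes counts; gene 1 is made to track gene 0,
# mimicking a lateral gene module, while the rest stay independent.
rng = np.random.default_rng(123456)
n_cells, n_genes = 200, 5
expression = rng.poisson(5.0, size=(n_cells, n_genes)).astype(float)
expression[:, 1] = expression[:, 0] + rng.normal(0.0, 0.5, n_cells)

# Gene-gene Pearson correlations (genes as columns).
correlations = np.corrcoef(expression, rowvar=False)

# Genes strongly correlated with a "suspect" gene become candidates
# for the forbidden list.
suspect = 0
related = [gene for gene in range(n_genes)
           if gene != suspect and correlations[suspect, gene] > 0.5]
print(related)
```

With this seed, only gene 1 exceeds the threshold; in real data each such candidate still deserves a manual look before being forbidden.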
```
suspect_gene_names = ['PCNA', 'MKI67', 'TOP2A', 'HIST1H1D',
'FOS', 'JUN', 'HSP90AB1', 'HSPA1A',
'ISG15', 'WARS' ]
suspect_gene_patterns = [ 'MCM[0-9]', 'SMC[0-9]', 'IFI.*' ]
suspect_genes_mask = mc.tl.find_named_genes(clean, names=suspect_gene_names,
patterns=suspect_gene_patterns)
suspect_gene_names = sorted(clean.var_names[suspect_genes_mask])
```
This gave us a list of 49 suspect genes. To look for additional candidates, let us first look at the (coarse) relationship between "interesting" genes. This isn't meant to be detailed; we are looking for lateral genes which are strongly correlated with our suspects, so the code samples a subset of the cells and ignores genes which are too weak to matter.
```
mc.pl.relate_genes(clean, random_seed=123456)
```
This discovered 73 gene modules with ~15 genes in each one. In general, it may prove beneficial to look at each and every one of them. This would give us some idea about (most of) the gene modules that characterize the cell types in the data, and for our purpose now, may suggest additional lateral gene modules unrelated to our original suspect genes. However, to keep this vignette simple, let us just look at the modules containing already suspect genes:
```
module_of_genes = clean.var['related_genes_module']
suspect_gene_modules = np.unique(module_of_genes[suspect_genes_mask])
suspect_gene_modules = suspect_gene_modules[suspect_gene_modules >= 0]
print(suspect_gene_modules)
```
For each such module, let us look at the genes it contains and the similarity between them:
```
similarity_of_genes = mc.ut.get_vv_frame(clean, 'related_genes_similarity')
for gene_module in suspect_gene_modules:
module_genes_mask = module_of_genes == gene_module
similarity_of_module = similarity_of_genes.loc[module_genes_mask, module_genes_mask]
similarity_of_module.index = \
similarity_of_module.columns = [
'(*) ' + name if name in suspect_gene_names else name
for name in similarity_of_module.index
]
ax = plt.axes()
sb.heatmap(similarity_of_module, vmin=0, vmax=1, ax=ax, cmap="YlGnBu")
ax.set_title(f'Gene Module {gene_module}')
plt.show()
```
We can now extend the list of forbidden genes to include additional genes using these modules.
Note we'd rather err on the side of caution and not forbid genes needlessly, since we expect the metacell analysis to help us expose any remaining genes we have missed. That said, this will require us to regenerate the metacells with the expanded forbidden genes list.
For simplicity, we'll simply forbid all the original suspect genes as well as all the genes in the strong modules 4, 5, 47 and 52. This gives us a total of 106 initially forbidden genes:
```
forbidden_genes_mask = suspect_genes_mask
for gene_module in [4, 5, 47, 52]:
module_genes_mask = module_of_genes == gene_module
forbidden_genes_mask |= module_genes_mask
forbidden_gene_names = sorted(clean.var_names[forbidden_genes_mask])
print(len(forbidden_gene_names))
print(' '.join(forbidden_gene_names))
```
## Computing the metacells
Once we have a clean data set for analysis, we can go ahead and compute the metacells.
### Main parameters
There are many parameters other than the forbidden genes list that we can tweak (see `mc.pl.divide_and_conquer_pipeline`). Here we'll just discuss controlling the main ones.
#### Reproducibility
The `random_seed` must be non-zero to ensure reproducibility. Note that even though the implementation is parallel for efficiency, the results are still reproducible given the same random seed (in contrast to the `umap` package, where you need to specify an additional flag for reproducible results).
#### Target Metacell size
The `target_metacell_size` specifies the target number of UMIs per metacell. We want each metacell to have a sufficient number of UMIs so that we get a robust estimation of the expression of each (relevant) gene in it. By default the target is 160,000 UMIs. The algorithm will generate metacells no larger than double this size (that is, a maximum of 320,000 UMIs per metacell) and no smaller than a quarter of this size (that is, a minimum of 40,000 UMIs per metacell), where metacells smaller than half the size (that is, between 40,000 UMIs and 80,000 UMIs) are "especially distinct". These ratios and relevant thresholds can all be controlled using additional parameters.
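The arithmetic of these defaults can be restated as a tiny helper (it just re-derives the numbers quoted above; the function and key names are illustrative, not package parameters):

```python
def metacell_size_bounds(target_umis=160_000):
    # Default ratios quoted above: at most double the target, at least
    # a quarter of it, and "especially distinct" below half of it.
    return {
        'max_umis': 2 * target_umis,
        'min_umis': target_umis // 4,
        'distinct_below_umis': target_umis // 2,
    }

print(metacell_size_bounds())
# {'max_umis': 320000, 'min_umis': 40000, 'distinct_below_umis': 80000}
```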
#### Parallelism
By default, the implementation uses all the physical cores of the system (ignoring hyper-threading as using them actually reduces performance). It is possible to reduce the number of cores used by invoking `mc.ut.set_processors_count` (or set the `METACELLS_PROCESSORS_COUNT` environment variable), if one wants to avoid taking all the physical cores for some reason. More importantly, one may want to use `mc.pl.set_max_parallel_piles` (or set the `METACELLS_MAX_PARALLEL_PILES` environment variable) to reduce the number of piles processed in parallel (values higher than the maximal number of processes have no effect).
Processing each pile takes a significant amount of memory (several GBs, depending on how dense the cells UMIs matrix is). On a server with a high core count and a limited amount of memory, this can cause the computation to crash with an error message complaining about failed allocations or some other indication of running out of memory, especially if other memory-intensive programs are running at the same time. Note that the implementation also needs to load the full data set into memory, which takes a large amount of memory regardless of computing the piles, and again varies depending on how dense the cells UMIs matrix is.
The `mc.pl.guess_max_parallel_piles` function can be invoked after loading the input cells data and before computing the metacells, and will return a hopefully reasonable guess for the maximal number of parallel piles to use, based on the density of the input, the amount of RAM available, and the target pile size. That said, this is just a (conservative) guess. When running very large data sets (millions of cells), it is best to avoid any other heavy computations on the same server, keep an eye on the memory usage, and tweak the parameters if needed.
The expected run-time of the computation will depend on the size of the data, the density of the UMIs map, and the amount of parallelism used. It can take well over an hour to fully analyze a dataset of millions of cells on a large server (with dozens of physical processors), and this will consume hundreds of gigabytes of memory. Luckily, smaller data sets (like the ~160K PBMC dataset we use here) only take a few minutes to compute on such a strong server, using only a few tens of gigabytes. This makes it possible to analyze such data sets on a strong modern laptop with 16GB (or better yet, 32GB) of RAM.
```
max_parallel_piles = mc.pl.guess_max_parallel_piles(clean)
print(max_parallel_piles)
mc.pl.set_max_parallel_piles(max_parallel_piles)
```
### Grouping into Metacells
We can finally compute the metacells. We are only running this on ~160K cells; still, this may take a few minutes, depending on the number of cores on your server. For ~2 million cells this takes ~10 minutes on a 28-core server.
```
mc.pl.divide_and_conquer_pipeline(clean,
forbidden_gene_names=forbidden_gene_names,
#target_metacell_size=...,
random_seed=123456)
```
This has written many annotations for each cell (observation), the most important of which is `metacell` specifying the 0-based index of the metacell each cell belongs to (or -1 if the cell is an "outlier").
However, for further analysis, what we want is data where each observation is a metacell:
```
metacells = mc.pl.collect_metacells(clean, name='PBMC.metacells')
```
### Visualizing the Metacells
A common technique is to use UMAP to project the metacells to a 2D scatter plot. The code provides built-in support for generating such projections. UMAP offers many parameters that can be tweaked, but the main one we offer control over is `min_dist` which controls how tightly the points are packed together. A non-zero `random_seed` will make this computation reproducible, at the cost of switching to a single-threaded implementation.
```
mc.pl.compute_umap_by_features(metacells, max_top_feature_genes=1000,
min_dist=2.0, random_seed=123456)
```
This filled in `umap_x` and `umap_y` per-metacell (observation) annotations, which can be used to generate 2D projection diagrams (it also filled in a boolean `top_feature_gene` mask designating the genes used). Typically such diagrams use additional metadata (such as type annotations) to color the points, but here we just show the raw projection:
```
umap_x = mc.ut.get_o_numpy(metacells, 'umap_x')
umap_y = mc.ut.get_o_numpy(metacells, 'umap_y')
plot = sb.scatterplot(x=umap_x, y=umap_y)
```
We can also visualize the (skeleton) KNN graph on top of the UMAP. Long edges indicate that UMAP did not capture this skeleton KNN graph well. This may be inevitable due to the need to project a complex N-dimensional structure to 2D, or it might indicate that we are using as features some "lateral" genes which are not relevant to the structure we are investigating. To make this clearer we can just filter out the short edges:
```
umap_edges = sp.coo_matrix(mc.ut.get_oo_proper(metacells, 'obs_outgoing_weights'))
min_long_edge_size = 4
sb.set()
plot = sb.scatterplot(x=umap_x, y=umap_y)
for (source_index, target_index, weight) \
in zip(umap_edges.row, umap_edges.col, umap_edges.data):
source_x = umap_x[source_index]
target_x = umap_x[target_index]
source_y = umap_y[source_index]
target_y = umap_y[target_index]
if hypot(target_x - source_x, target_y - source_y) >= min_long_edge_size:
plt.plot([source_x, target_x], [source_y, target_y],
linewidth=weight * 2, color='indigo')
plt.show()
```
## Further analysis
Metacells is **not** an scRNA analysis method. Rather, it is meant to be an (early) step in the analysis process. The promise of metacells is that it makes further analysis easier: instead of grappling with many individual cells, each with a very weak and noisy signal of a few hundred UMIs, one can analyze fewer complete metacells with a strong signal of tens of thousands of UMIs, which allows for robust estimation of their gene expression levels. Therefore, working on metacells instead of single cells makes life easier for any further analysis method one wishes to use.
Further analysis methods are expected to create variable-sized groups of metacells with a similar "cell type" or gradients of metacells between such "cell types", based on the gene programs they express. Such methods are beyond the scope of the metacells package; it merely prepares the input for such methods and is agnostic to the exact method of further analysis.
In particular, "metacells of metacells" is *not* a good method: an "ideal" metacell is defined as "a group of cells, with a maximal size, with the same biological state". Crucially, this maximal size is picked to be the smallest that allows for robust estimation of gene expression in the metacell; this allows rare behaviors to be captured in their own metacells, instead of becoming outliers.
Computing "metacells of metacells" would suffer from the same problem as having a too-large target metacell size: it would artificially quantize gradients into fewer intermediate states, and it would identify rare behavior metacells as outliers. At the same time, computing metacells-of-metacells cannot be trusted to group all the metacells of the "same" (or very similar) cell state together, since the grouping will obey some (artificial) maximal size limit.
Thus, the best thing we can do now is to save the data, and feed it to a separate further data analysis pipeline. To import the data into Seurat, we first need to delete the special `__name__` property, since for some reason it breaks the Seurat importer.
The [manual analysis vignette](Manual_Analysis.html) demonstrates manual analysis of the data (based on the [MCView](https://tanaylab.github.io/MCView) tool), and the [seurat analysis vignette](Seurat_Analysis.html) demonstrates importing the metacells into [Seurat](https://satijalab.org/seurat/index.html) for further analysis there.
```
clean.write('cells.h5ad')
metacells.write('metacells.h5ad')
del metacells.uns['__name__']
metacells.write('for_seurat.h5ad')
```
## cloudFPGA Studio
### Case study: Harris Corner Detector (Computer Vision) - NumPy version with camera loop
### You don't need FPGA knowledge, just basic Python syntax !!!
Note: Assuming that the FPGA is already flashed
Configure the Python path to look for the FPGA acceleration library
```
import time
import sys
import os
from IPython.display import Image
from IPython.display import display
from IPython.display import clear_output
# for software execution
import cv2 as cv
import numpy as np
from matplotlib import pyplot as plt
trieres_lib=os.environ['cFpRootDir'] + "HOST/vision/harris/languages/python/build"
sys.path.append(trieres_lib)
```
Import the FPGA accelerator library
```
import _trieres_harris_numpi
```
Assign the IP of the FPGA that is already loaded with the Harris image
```
# Give image dimensions (the ones that the FPGA bitstream is configured with!)
height = width = 256
#fpga_ip=os.environ['FPGA_IP']
fpga_ip="10.12.200.203"
fpga_port="2718"
print(fpga_ip+"@"+fpga_port)
def cornerHarris_hw(image):
# Flattening the image from 2D to 1D
image = image.flatten()
# Detecting corners
start_fpga = time.time()
dst1d = _trieres_harris_numpi.harris(image, height*width, fpga_ip, fpga_port)
elapsed_fpga = time.time() - start_fpga
# Convert 1D array to a 2D numpy array of 2 rows and 3 columns
dst = np.reshape(dst1d, (height, width))
return dst
# Grab the input device, in this case the webcam
# You can also give path to the video file
vid = cv.VideoCapture(0)
# Put the code in try-except statements
# Catch the keyboard exception and
# release the camera device and
# continue with the rest of code.
try:
while(True):
# Capture frame-by-frame
ret, frame = vid.read()
if not ret:
# Release the Video Device if ret is false
vid.release()
# Message to be displayed after releasing the device
print("Released Video Resource")
break
# Convert the image from OpenCV BGR format to matplotlib RGB format
# to display the image
frame = cv.cvtColor(frame, cv.COLOR_BGR2RGB)
# Converting to grayscale
frame = cv.cvtColor(frame, cv.COLOR_RGB2GRAY)
# Adjusting the image file if needed
if ((frame.shape[0] != height) or (frame.shape[1] != width)):
#print("Warning: The image was resized from [", frame.shape[0] , " x ", frame.shape[1] , "] to [", height , " x ", width, "]")
dim = (width, height)
frame = cv.resize(frame, dim, interpolation = cv.INTER_LINEAR)
# Call the FPGA harris accelerator as a Python function
framerx = cornerHarris_hw(frame)
# Turn off the axis
plt.axis('off')
# Title of the window
plt.title("Processed Stream by FPGA")
# Display the frame
plt.imshow(framerx)
plt.imshow(frame, alpha=0.5)
plt.show()
# Display the frame until new frame is available
clear_output(wait=True)
except KeyboardInterrupt:
# Release the Video Device
vid.release()
# Message to be displayed after releasing the device
print("Released Video Resource")
```
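For comparison when no FPGA is reachable, a Harris corner response can be approximated in pure NumPy. This is a software sketch with simplifying assumptions (central-difference gradients and a 3x3 box window instead of Sobel/Gaussian filtering); it is not the FPGA kernel:

```python
import numpy as np

def harris_response_sw(image, k=0.04):
    # Structure tensor from central-difference gradients, smoothed
    # with a 3x3 box window; response R = det(M) - k * trace(M)^2.
    image = image.astype(float)
    iy, ix = np.gradient(image)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):  # 3x3 box filter with edge padding
        p = np.pad(a, 1, mode='edge')
        return sum(p[r:r + a.shape[0], c:c + a.shape[1]]
                   for r in range(3) for c in range(3)) / 9.0

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    return sxx * syy - sxy * sxy - k * (sxx + syy) ** 2

# A bright square on a dark background: the response peaks near its
# corners and vanishes in the flat interior.
img = np.zeros((32, 32))
img[8:24, 8:24] = 255.0
resp = harris_response_sw(img)
print(resp[8, 8] > resp[16, 16])  # True
```

Plugging such a function in place of `cornerHarris_hw` lets the camera loop above be exercised end-to-end on a machine without the accelerator.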
<a href="https://colab.research.google.com/github/maxigaarp/Gestion-De-Datos-en-R/blob/main/Clase_7_y_8_Depuracion_en_SQL.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
system("gdown https://drive.google.com/uc?id=1q089qSqKr7Ak29lUkzKSWjm2pcb_jzX0")
unzip("/content/matriculas_y_notas_anual.zip")
install.packages("RSQLite")
install.packages("data.table")
library(data.table)
library(RSQLite)
library(tidyverse)
system("gdown https://drive.google.com/uc?id=1bxebySwiYquw1B72xi1E_YrELXzDS4cC")
system("mv /content/Rendimiento2013.csv /content/content/Datos/20210223_Rendimiento_2013.csv")
conn <- dbConnect(RSQLite::SQLite(), "mineduc.db")
```
Suppose we want to see how many years each student has on record across the 18 years of data available from MINEDUC.
Filtering the information, we keep one table of students with their relevant attributes and another one with the schools.
# Loading the data into SQL
```
colenames <- c("RBD","AGNO","NOM_RBD","COD_REG_RBD","NOM_COM_RBD", "COD_DEPE","RURAL_RBD")
alnames <- c("MRUN","AGNO","RBD","COD_ENSE","COD_GRADO","LET_CUR","GEN_ALU", "FEC_NAC_ALU","COD_COM_ALU", "SIT_FIN_R", "PROM_GRAL", "ASISTENCIA")
years<-2010:2020
for (i in 1:length(years)) {
name=gsub("%",years[i],"/content/content/Datos/20210223_Rendimiento_%.csv")
data=fread(name)
names(data) <- toupper(names(data))
coles <- data %>%
select(colenames)%>%
distinct()
alus <- data %>%
select(c("MRUN","AGNO","RBD","COD_ENSE","COD_GRADO","LET_CUR",
"GEN_ALU", "FEC_NAC_ALU","COD_COM_ALU", if (years[i]!=2014) "SIT_FIN_R" else "SIT_FINAL_R",
"PROM_GRAL", "ASISTENCIA")) %>%
distinct()
names(alus)=alnames
apnd=if (i==1) FALSE else TRUE
dbWriteTable(conn , name = "colegios",
value = coles,
row.names = FALSE, header = !apnd, sep=',',append=apnd,
colClasses='character')
dbWriteTable(conn , name = "alumnos",
value = alus,
row.names = FALSE, header = !apnd, sep=',',append=apnd,
colClasses='character')
}
```
In particular, we can run complicated queries quickly, without worrying about whether the computer will manage to give us an answer before running out of RAM.
```
conn
```
We want to look at school name changes, for example in order to consolidate a database of schools. As a reminder, this is the general structure of a SQL query:
```
select
tabla1.atributo1 as A1,
tabla2.atributo2 as A2,
...,
tabla1.atributon as AN,
AVG(tablan.atributon2) as avgatributon
From tabla1, tabla2,...,tablan
where tabla1.atributo1=tabla3.atributo1
Group by tabla1.atributo1
Having AVG(tablan.atributon2)>3
ORDER BY A1 DESC
Limit 1000
```
```
dbExecute(conn,"CREATE TABLE COLESCAMBIO AS
select
RBD,
AGNO,
NOM_RBD,
LAST_VALUE(NOM_RBD) OVER(PARTITION BY RBD) AS LNAME
from colegios;")
dbListTables(conn)
dbGetQuery(conn, "select * from COLESCAMBIO limit 20")
system("wget https://sqlite.org/2016/sqlite-src-3110100.zip")
unzip("sqlite-src-3110100.zip")
system("gcc -shared -fPIC -Wall -Isqlite-src-3110100 sqlite-src-3110100/ext/misc/spellfix.c -o spellfix.so")
dbExecute(conn,"select load_extension('./spellfix')")
dbGetQuery(conn,"
select
RBD,
AGNO,
NOM_RBD,
LNAME,
EDITDIST3(NOM_RBD, LNAME) AS EDIT
FROM COLESCAMBIO")
dbGetQuery(conn,"
select
RBD,
NOM_RBD,
LNAME,
EDITDIST3(NOM_RBD, LNAME) AS EDIT
FROM COLESCAMBIO
GROUP BY RBD
HAVING EDIT=MAX(EDIT)
ORDER BY EDIT")
```
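`EDITDIST3` computes a cost-weighted edit distance between two strings; the unweighted version it generalizes is the classic Levenshtein distance, sketched here in Python for illustration (the example school names are made up):

```python
def levenshtein(a, b):
    # Unit-cost edit distance: minimum number of insertions, deletions
    # and substitutions turning string a into string b.
    previous = list(range(len(b) + 1))
    for i, char_a in enumerate(a, 1):
        current = [i]
        for j, char_b in enumerate(b, 1):
            current.append(min(previous[j] + 1,                        # deletion
                               current[j - 1] + 1,                     # insertion
                               previous[j - 1] + (char_a != char_b)))  # substitution
        previous = current
    return previous[-1]

print(levenshtein('LICEO A-1', 'LICEO A1'))  # 1
```

A small distance between a school's recorded name and its latest name therefore suggests a typo or renaming rather than a different institution.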
# Consolidating a database
Suppose we are again interested in using the student data, in particular for now:
* Attendance
* Grade average
To make sure the data are of good quality, we must test them:
* Number of nulls (Completeness)
* Review how the data are stored and check that the expected relationships hold (Validity)
* Remove duplicates and records that are inconsistent across tables (Consistency)
* Keep the data in a single format, on a scale appropriate for the problem (Uniformity)
```
dbGetQuery(conn,"select *
from alumnos
limit 10")
```
# Nulls and completeness
```
dbGetQuery(conn,"select
sum(case when MRUN is null then 1 else 0 end) MRUN,
sum(case when AGNO is null then 1 else 0 end) AGNO,
sum(case when RBD is null then 1 else 0 end) RBD,
sum(case when COD_ENSE is null then 1 else 0 end) COD_ENSE,
sum(case when LET_CUR is null then 1 else 0 end) LET_CUR,
sum(case when GEN_ALU is null then 1 else 0 end) GEN_ALU,
sum(case when FEC_NAC_ALU is null then 1 else 0 end) FEC_NAC_ALU,
sum(case when COD_COM_ALU is null then 1 else 0 end) COD_COM_ALU,
sum(case when SIT_FIN_R is null then 1 else 0 end) SIT_FIN_R,
sum(case when PROM_GRAL is null then 1 else 0 end) PROM_GRAL,
sum(case when ASISTENCIA is null then 1 else 0 end) ASISTENCIA
from alumnos")
```
## Defining the nulls properly
### Final status
```
UPDATE <NOMBRETABLA>
SET ATRIBUTO=#VALUE#
WHERE <<CONDICION>>
```
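This notebook drives SQLite from R, but the same `UPDATE ... SET ... WHERE` normalization works from any client. Here is a self-contained illustration using Python's built-in `sqlite3` module (the table and its values are made up):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("CREATE TABLE alumnos (MRUN INTEGER, SIT_FIN_R TEXT)")
conn.executemany("INSERT INTO alumnos VALUES (?, ?)",
                 [(1, 'P'), (2, ''), (3, 'R'), (4, '')])

# Empty strings are not NULL, so they hide from completeness counts;
# normalize them before counting missing values.
conn.execute("UPDATE alumnos SET SIT_FIN_R = NULL WHERE SIT_FIN_R = ''")

nulls = conn.execute(
    "SELECT SUM(CASE WHEN SIT_FIN_R IS NULL THEN 1 ELSE 0 END) FROM alumnos"
).fetchone()[0]
print(nulls)  # 2
```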
```
dbGetQuery(conn, "
select *
from alumnos
where SIT_FIN_R='' ")
dbExecute(conn,"UPDATE alumnos
SET SIT_FIN_R = NULL
WHERE SIT_FIN_R=''")
dbGetQuery(conn,"select
sum(case when SIT_FIN_R is null then 1 else 0 end) SIT_FIN_R
from alumnos")
```
### Gender
```
dbExecute(conn,"UPDATE alumnos
SET GEN_ALU = NULL
WHERE GEN_ALU=0")
```
### Grade average and attendance
```
dbGetQuery(conn,"
select SIT_FIN_R,
count(),
AVG(PROM_GRAL),
MIN(PROM_GRAL),
MAX(PROM_GRAL),
AVG(ASISTENCIA),
MIN(ASISTENCIA),
MAX(ASISTENCIA)
FROM alumnos
group by SIT_FIN_R")
```
First, a small aside on fixing validity problems: the grade average is stored as character, which we fix with the following command.
```
dbExecute(conn, " UPDATE alumnos
SET PROM_GRAL = CAST(replace(PROM_GRAL, ',', '.') AS NUMERIC);")
dbExecute(conn,"UPDATE alumnos
SET PROM_GRAL = NULL, ASISTENCIA = NULL
WHERE SIT_FIN_R='Y' or SIT_FIN_R='T'")
```
Then:
```
dbGetQuery(conn,"
select SIT_FIN_R,
count(),
AVG(PROM_GRAL),
MIN(PROM_GRAL),
MAX(PROM_GRAL),
AVG(ASISTENCIA),
MIN(ASISTENCIA),
MAX(ASISTENCIA)
FROM alumnos
group by SIT_FIN_R")
```
Note that there are students who passed with a grade of 0; we will treat that as missing information and set it to null.
```
dbExecute(conn,"UPDATE alumnos
SET PROM_GRAL = NULL
WHERE PROM_GRAL=0")
dbGetQuery(conn, "
select *
from alumnos
where PROM_GRAL!=0 and ASISTENCIA=0
limit 10")
dbGetQuery(conn,"
select SIT_FIN_R,
count(),
AVG(PROM_GRAL),
MIN(PROM_GRAL),
MAX(PROM_GRAL),
AVG(ASISTENCIA),
MIN(ASISTENCIA),
MAX(ASISTENCIA)
FROM alumnos
group by SIT_FIN_R")
dbExecute(conn,"UPDATE alumnos
SET COD_COM_ALU = NULL
WHERE COD_COM_ALU =0")
```
## What do we delete?
```
dbExecute(conn,"
DELETE FROM alumnos
WHERE MRUN IS NULL")
dbExecute(conn,"
DELETE FROM alumnos
WHERE SIT_FIN_R IS NULL")
```
### What about the date of birth?
```
dbGetQuery(conn, "
select *
from alumnos
where FEC_NAC_ALU is null limit 5")
dbGetQuery(conn, "
select *
from alumnos
where MRUN=2849761
limit 10")
```
The value is only missing in this record; it exists in others and can be extracted.
# Consistency
```
dbGetQuery(conn,"
select MRUN,
FEC_NAC_ALU,
count() as N
from alumnos
group by MRUN,FEC_NAC_ALU
order by N DESC
")
dbGetQuery(conn,"
select *
FROM alumnos
where MRUN=9014208
")
```
Everything looks fine, but it is not:
```
dbGetQuery(conn,"
select
MRUN,
FEC_NAC_ALU
from (
select MRUN,
FEC_NAC_ALU,
count(*) as N
from alumnos
group by MRUN,FEC_NAC_ALU
)
GROUP BY MRUN
having N=MAX(N) and
sum(case when FEC_NAC_ALU is null then 1 else 0 end) >0
")
dbGetQuery(conn,"
update alumnos
set FEC_NAC_ALU= fchs
from (
select
MRUN as mrns,
FEC_NAC_ALU as fchs
from (
select MRUN,
FEC_NAC_ALU,
count(*) as N
from alumnos
group by MRUN,FEC_NAC_ALU
)
GROUP BY MRUN
having N=MAX(N) and
sum(case when FEC_NAC_ALU is null then 1 else 0 end) >0)
where MRUN=mrns
")
dbGetQuery(conn, "
select *
from alumnos
where MRUN=2849761")
dbExecute(conn, "
UPDATE alumnos
SET FEC_NAC_ALU=SUBSTRING(FEC_NAC_ALU,1,6)")
```
# Validity
The most important check is to verify the uniqueness of what we believe to be the key.
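The idea behind the uniqueness check can also be sketched in plain Python over toy rows (an illustrative helper; the field names simply mirror the table's columns):

```python
from collections import Counter

def duplicated_keys(rows, key_fields):
    # Count each candidate-key tuple and keep only the duplicated ones.
    counts = Counter(tuple(r[f] for f in key_fields) for r in rows)
    return {key: n for key, n in counts.items() if n > 1}

rows = [{"MRUN": 1, "AGNO": 2010},
        {"MRUN": 1, "AGNO": 2010},
        {"MRUN": 2, "AGNO": 2010}]
```

If `duplicated_keys(rows, ["MRUN", "AGNO"])` is non-empty, `(MRUN, AGNO)` is not a valid key, which is exactly what the SQL below detects.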
```
a<-dbGetQuery(conn,"
select
MRUN,
AGNO,
count(*) as n
from alumnos
where SIT_FIN_R='P'
group by MRUN, AGNO
having n>1
order by n
")
a
dbGetQuery(conn,"
select *
from alumnos
where MRUN=951
limit 100")
a %>% filter(n==3)
dbGetQuery(conn,"select * from alumnos where MRUN=3138123")
dbGetQuery(conn,"
select MRUN,
AGNO,
COD_ENSE,
COD_GRADO,
COUNT(*) AS N
from alumnos
WHERE SIT_FIN_R='P'
group by MRUN,COD_ENSE, COD_GRADO
ORDER BY N")
dbGetQuery(conn,"
select *
from alumnos
WHERE MRUN=9098462")
dbExecute(conn,"DELETE FROM alumnos
WHERE MRUN IN (
select
MRUN
from alumnos
where SIT_FIN_R='P'
group by MRUN, AGNO
having count(*)>1 )
")
```
OPTION 1: inspect the rows whose birth date differs from the student's most frequent one
```
dbGetQuery(conn,"
select alumnos.*
from alumnos, (select
MRUN,
FEC_NAC_ALU
from (
select MRUN,
FEC_NAC_ALU,
count(*) as N
from alumnos
group by MRUN,FEC_NAC_ALU
)
GROUP BY MRUN
having N=MAX(N)) as cumples
where alumnos.MRUN=cumples.MRUN and
alumnos.FEC_NAC_ALU!=cumples.FEC_NAC_ALU
")
```
OPTION 2: delete every student with more than one distinct birth date
```
dbExecute(conn,"DELETE FROM alumnos
WHERE MRUN IN (
select
MRUN
from alumnos
group by MRUN
having count(DISTINCT FEC_NAC_ALU)>1 )")
dbGetQuery(conn, "
select * from alumnos ")
```
## Testing some relationships
We assume that students need a passing average ("promedio azul", i.e. at least 4.0) to be promoted. We can test this hypothesis against the data.
```
dbGetQuery(conn,"select * from alumnos where PROM_GRAL<4 and SIT_FIN_R='P'")
```
Conversely, students who failed should have low grades or low attendance.
```
dbGetQuery(conn, "select * from alumnos where PROM_GRAL>5.5 and ASISTENCIA>60 and SIT_FIN_R='R'")
```
# Questions
The data are now ready to be used, but how we use them depends on the questions we want to answer. For example:
* Grades by level
```
resp <- dbGetQuery(conn, "
SELECT
COD_ENSE,
COD_GRADO,
AVG(PROM_GRAL) AS MPROM
FROM alumnos
Where (COD_ENSE==110 OR COD_ENSE==310) and (SIT_FIN_R=='P')
GROUP BY COD_ENSE, COD_GRADO")
resp %>% arrange(COD_ENSE,COD_GRADO)%>%
ggplot(aes(x=1:12,y=MPROM)) +
geom_bar(stat="identity", position="stack")
```
* A student's grade average over their (complete) schooling
```
resp <- dbGetQuery(conn, "
SELECT
MRUN,
COUNT(*) AS N,
AVG(ASISTENCIA) AS MASIS,
AVG(PROM_GRAL) AS MPROM
FROM alumnos
Where (COD_ENSE==110 OR COD_ENSE==310) and (SIT_FIN_R=='P')
GROUP BY MRUN
HAVING N=12"
)
ggplot(resp, aes(x=`MASIS`, y=`MPROM`)) + geom_point(size=2, shape=23)
```
* Communes with the best grade averages
```
resp <- dbGetQuery(conn, "
SELECT
COD_COM_ALU,
COUNT(*) AS N,
AVG(ASISTENCIA) AS MASIS,
AVG(PROM_GRAL) AS MPROM
FROM alumnos
Where (COD_ENSE==110 OR COD_ENSE==310) and (SIT_FIN_R=='P')
GROUP BY COD_COM_ALU
")
arrange(resp, MPROM)
```
# Sampling
```
dbGetQuery(conn, "
SELECT *
FROM alumnos
LIMIT 1000
")
```
Simple random sampling
```
dbGetQuery(conn, "
SELECT *
FROM alumnos
ORDER BY RANDOM()
LIMIT 1000
")
```
Stratified random sampling
```
dbGetQuery(conn, "
SELECT DISTINCT MRUN
FROM alumnos
ORDER BY RANDOM()
LIMIT 1000
")
dbGetQuery(conn, "
SELECT *
FROM alumnos
WHERE MRUN IN (
SELECT DISTINCT MRUN
FROM alumnos
ORDER BY RANDOM()
LIMIT 1000)
")
sample.int(3,1)-1
a=3002
b=sample.int(a,1)-1
dbGetQuery(conn, "
select *
FROM alumnos
WHERE MRUN % ? = ?
limit 10
", params = c(a,b))
```
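The parameterized `MRUN % interval = offset` query above is systematic sampling: choose an interval `a`, draw a random offset `b`, and keep every row whose key is congruent to `b` modulo `a`. A plain-Python sketch of the same idea over a toy population (illustrative only; the real data lives in the database):

```python
import random

a = 3002                 # sampling interval, as in the notebook
b = random.randrange(a)  # random offset in [0, a)

population = range(100_000)  # stand-in for the MRUN column
sample = [x for x in population if x % a == b]
```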
## Stratified
```
dbGetQuery(conn, "
select COD_COM_ALU, count(*) as n
FROM alumnos
group by COD_COM_ALU
")
sample <- 10000
a <- dbGetQuery(conn, "
select COD_COM_ALU, ?*count(*)/ CAST( SUM(count(*)) over () as float) as PERC
FROM alumnos
group by COD_COM_ALU
", params=c(sample))
sample <- 10000
for (row in 1:2) {
sub <- dbGetQuery(conn, "
select *
from alumnos
where COD_COM_ALU=?
order by RANDOM()
limit ?
", params=c(a[row, "COD_COM_ALU"],ceiling(a[row, "PERC"])))
out=if(row==1) sub else rbind(out,sub)
}
out
rbind(sub,sub)
```
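The loop above draws from each commune a number of rows proportional to its share of the table. The same proportional-allocation logic, sketched in Python over an in-memory list (illustrative only; not part of the original R workflow):

```python
import math
import random

def stratified_sample(rows, key, n, seed=101):
    """Proportional allocation: each stratum contributes ceil(n * share) rows."""
    rng = random.Random(seed)
    strata = {}
    for r in rows:
        strata.setdefault(r[key], []).append(r)
    total = len(rows)
    out = []
    for members in strata.values():
        quota = min(math.ceil(n * len(members) / total), len(members))
        out.extend(rng.sample(members, quota))
    return out

rows = [{"COD_COM_ALU": c} for c in [1] * 80 + [2] * 20]
```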
# Classification
Go to https://www.cr2.cl/datos-de-precipitacion/?cp_Precipitacion=2
Download the 2019 data
```
library(tidyverse)
library(data.table)
unzip("/content/cr2_prDaily_2018.zip")
unzip("/content/cr2_tasDaily_2020_ghcn.zip")
pp <- read.csv("/content/cr2_prDaily_2018/cr2_prDaily_2018.txt", na = "-9999", header =F)
tm <- read.csv("/content/cr2_tasDaily_2020_ghcn/cr2_tasDaily_2020_ghcn.txt", na = "-9999", header =F)
head(pp)
tm <- setNames(as.data.frame(t(tm[,-1])),as.character(tm[,1]))
pp <- setNames(as.data.frame(t(pp[,-1])),as.character(pp[,1]))
ppp <- pp %>% select( c("codigo_estacion","nombre", "latitud","longitud") | "2000-01-01":"2017-12-31")%>%
pivot_longer(cols = "2000-01-01":"2017-12-31",
values_to = "Precipitacion",
names_to = c("Año", "Mes", "Dia"),
names_pattern = "(....)-(..)-(..)")
ppp
tmp <- tm %>% select( c("codigo_estacion") | "2000-01-01":"2017-12-31")%>%
pivot_longer(cols = "2000-01-01":"2017-12-31",
values_to = "Temperatura",
names_to = c("Año", "Mes", "Dia"),
names_pattern = "(....)-(..)-(..)")
tmp
X_full <- ppp %>%
inner_join(tmp, by=c("codigo_estacion","Año", "Mes","Dia"))%>%
mutate(Precipitacion=as.double(Precipitacion),
Temperatura=as.double(Temperatura),
BPrecipitacion=ifelse(Precipitacion>0,1, 0)) %>%
group_by(codigo_estacion)%>%
mutate(YPrecipitacion=shift(Precipitacion,1),
YTemperatura=shift(Temperatura,1)) %>%
ungroup()%>%
group_by(codigo_estacion,Mes,Dia )%>%
mutate(MPrecipitacion=mean(Precipitacion,na.rm=TRUE),
MTemperatura=mean(Temperatura,na.rm=TRUE)) %>%
ungroup()
df <-apply(X = is.na(X_full), MARGIN = 2, FUN = mean)
print(df*100)
```
## KNN
```
install.packages("caTools")
library(class)
library(caTools)
X<- X_full%>%
filter(Año==2017& Mes==12 & Dia==14)%>%
select("latitud", "longitud","BPrecipitacion")%>%
drop_na()%>%
select("latitud", "longitud")
Y<- X_full%>%
filter(Año==2017& Mes==12 & Dia==14)%>%
select("latitud", "longitud","BPrecipitacion")%>%
drop_na()%>%
select("BPrecipitacion")
set.seed(101)
sample = sample.split(X$longitud, SplitRatio = .75)
X_train = subset(X, sample == TRUE)
X_test = subset(X, sample == FALSE)
Y_train = subset(Y, sample == TRUE)
Y_test = subset(Y, sample == FALSE)
knn.prd=knn(X_train,X_test,Y_train[["BPrecipitacion"]],k=5,prob=TRUE)
table(knn.prd,Y_test[["BPrecipitacion"]])
```
Here we compute the performance indicators.
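Those indicators follow directly from the 2×2 confusion table printed above; a minimal pure-Python sketch (assuming the usual tp/fp/fn/tn layout):

```python
def binary_metrics(tp, fp, fn, tn):
    # Standard classification indicators from a 2x2 confusion matrix.
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```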
# Logistic Regression
```
head(X_full)
datax <- X_full%>%
select(c("BPrecipitacion", "Temperatura", "YPrecipitacion", "YTemperatura", "MPrecipitacion", "MTemperatura"))%>%
drop_na()
set.seed(101)
sample = sample.split(datax$BPrecipitacion, SplitRatio = .75)
train = subset(datax, sample == TRUE)
test = subset(datax, sample == FALSE)
m1 <- glm(BPrecipitacion ~ YPrecipitacion+YTemperatura+MPrecipitacion , family = binomial,data=train)
summary(m1)
pred<- predict.glm(m1,newdata = test, type="response")
result1<- table(test$BPrecipitacion, floor(pred+0.5))
result1
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```
Training and Testing Data
=====================================
To evaluate how well our supervised models generalize, we can split our data into a training and a test set:
<img src="figures/train_test_split_matrix.svg" width="100%">
```
from sklearn.datasets import load_iris
iris = load_iris()
X, y = iris.data, iris.target
```
Thinking about how machine learning is normally performed, the idea of a train/test split makes sense. Real world systems train on the data they have, and as other data comes in (from customers, sensors, or other sources) the classifier that was trained must predict on fundamentally *new* data. We can simulate this during training using a train/test split - the test data is a simulation of "future data" which will come into the system during production.
Specifically for iris, the 150 labels are sorted, which means that a straight proportional split (taking the first portion as training data) will result in fundamentally altered class distributions. For instance, if we performed a common 2/3 training data and 1/3 test data split, our training dataset would only consist of flower classes 0 and 1 (Setosa and Versicolor), and our test set would only contain samples with class label 2 (Virginica flowers).
Under the assumption that all samples are independent of each other (in contrast to time series data), we want to **randomly shuffle the dataset before we split the dataset** as illustrated above.
```
y
```
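Before reaching for scikit-learn, it helps to see the shuffle-then-split idea written out by hand; this is a simplified sketch of what `train_test_split` does (not the library's actual implementation):

```python
import random

def shuffled_split(X, y, test_frac=0.5, seed=123):
    # Shuffle the row indices, then cut them into train and test parts.
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train_idx, test_idx = idx[:cut], idx[cut:]
    return ([X[i] for i in train_idx], [X[i] for i in test_idx],
            [y[i] for i in train_idx], [y[i] for i in test_idx])

X = list(range(10))
y = [i % 2 for i in X]
train_X, test_X, train_y, test_y = shuffled_split(X, y)
```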
Now we need to split the data into training and testing. Luckily, this is a common pattern in machine learning and scikit-learn has a pre-built function to split data into training and testing sets for you. Here, we use 50% of the data as training, and 50% testing. 80% and 20% is another common split, but there are no hard and fast rules. The most important thing is to fairly evaluate your system on data it *has not* seen during training!
```
from sklearn.model_selection import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, y,
train_size=0.5,
test_size=0.5,
random_state=123)
print("Labels for training data:")
print(train_y)
print("Labels for test data:")
print(test_y)
```
---
**Tip: Stratified Split**
Especially for relatively small datasets, it's better to stratify the split. Stratification means that we maintain the original class proportion of the dataset in the test and training sets. For example, after we randomly split the dataset as shown in the previous code example, we have the following class proportions in percent:
```
print('All:', np.bincount(y) / float(len(y)) * 100.0)
print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0)
print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0)
```
So, in order to stratify the split, we can pass the label array as an additional option to the `train_test_split` function:
```
train_X, test_X, train_y, test_y = train_test_split(X, y,
train_size=0.5,
test_size=0.5,
random_state=123,
stratify=y)
print('All:', np.bincount(y) / float(len(y)) * 100.0)
print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0)
print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0)
```
---
By evaluating our classifier performance on data that has been seen during training, we could get false confidence in the predictive power of our model. In the worst case, it may simply memorize the training samples but completely fail to classify new, similar samples -- we really don't want to put such a system into production!
Instead of using the same dataset for training and testing (this is called "resubstitution evaluation"), it is much better to use a train/test split in order to estimate how well your trained model performs on new data.
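To see why resubstitution is misleading, consider a 1-nearest-neighbour classifier: it memorizes the training set, so on distinct training points its resubstitution accuracy is always 100%, no matter how poorly it generalizes. A toy sketch:

```python
def nn1_predict(train_X, train_y, x):
    # 1-NN: return the label of the closest memorized training point.
    i = min(range(len(train_X)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(train_X[j], x)))
    return train_y[i]

train_X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
train_y = [0, 0, 1, 1]
resub_acc = sum(nn1_predict(train_X, train_y, x) == y
                for x, y in zip(train_X, train_y)) / len(train_y)
```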
```
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier().fit(train_X, train_y)
pred_y = classifier.predict(test_X)
print("Fraction Correct [Accuracy]:")
print(np.sum(pred_y == test_y) / float(len(test_y)))
```
We can also visualize the correct predictions ...
```
print('Samples correctly classified:')
correct_idx = np.where(pred_y == test_y)[0]
print(correct_idx)
```
... as well as the failed predictions
```
print('Samples incorrectly classified:')
incorrect_idx = np.where(pred_y != test_y)[0]
print(incorrect_idx)
# Plot two dimensions
for n in np.unique(test_y):
    idx = np.where(test_y == n)[0]
    plt.scatter(test_X[idx, 1], test_X[idx, 2], label="Class %s" % str(iris.target_names[n]))
plt.scatter(test_X[incorrect_idx, 1], test_X[incorrect_idx, 2], color="darkred")
plt.xlabel('sepal width [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc=3)
plt.title("Iris Classification results")
plt.show()
```
We can see that the errors occur in the area where green (class 1) and gray (class 2) overlap. This gives us insight about what features to add - any feature which helps separate class 1 and class 2 should improve classifier performance.
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>
Print the true labels of 3 wrong predictions and modify the scatterplot code, which we used above, to visualize and distinguish these three samples with different markers in the 2D scatterplot. Can you explain why our classifier made these wrong predictions?
</li>
</ul>
</div>
```
# %load solutions/04_wrong-predictions.py
```
# Introduction
In this notebook, we'll assign each document to the RDoC domain whose archetype has the highest Dice similarity with the document's brain structures and mental function terms.
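As a reference point, the Dice distance used for the assignment (computed later via `scipy.spatial.distance.cdist` with `metric="dice"`) reduces, for binary vectors, to the following:

```python
def dice_distance(u, v):
    # Dice dissimilarity between binary vectors: 1 - 2|A ∩ B| / (|A| + |B|)
    both = sum(1 for a, b in zip(u, v) if a and b)
    total = sum(u) + sum(v)
    return 1.0 - 2.0 * both / total if total else 0.0
```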
# Load the data
```
import pandas as pd
import numpy as np
import sys
sys.path.append("..")
import utilities, partition
framework = "rdoc"
```
## Brain activation coordinates
```
act_bin = utilities.load_coordinates()
print("Document N={}, Structure N={}".format(
act_bin.shape[0], act_bin.shape[1]))
```
## Document-term matrix
```
dtm_bin = utilities.load_doc_term_matrix(version=190325, binarize=True)
print("Document N={}, Term N={}".format(
dtm_bin.shape[0], dtm_bin.shape[1]))
```
## Domain archetypes
```
from collections import OrderedDict
lists, circuits = utilities.load_framework("rdoc", suffix="_opsim")
words = sorted(list(set(lists["TOKEN"])))
structures = sorted(list(act_bin.columns))
domains = list(OrderedDict.fromkeys(lists["DOMAIN"]))
archetypes = pd.DataFrame(0.0, index=words+structures, columns=domains)
for dom in domains:
    for word in lists.loc[lists["DOMAIN"] == dom, "TOKEN"]:
        archetypes.loc[word, dom] = 1.0
    for struct in structures:
        archetypes.loc[struct, dom] = circuits.loc[struct, dom]
archetypes[archetypes > 0.0] = 1.0
print("Term & Structure N={}, Domain N={}".format(
archetypes.shape[0], archetypes.shape[1]))
```
## Document splits
```
splits = {}
splits["discovery"] = [int(pmid.strip()) for pmid in open("../data/splits/train.txt")]
splits["replication"] = [int(pmid.strip()) for pmid in open("../data/splits/validation.txt")]
splits["replication"] += [int(pmid.strip()) for pmid in open("../data/splits/test.txt")]
for split, pmids in splits.items():
    print("{:12s} N={}".format(split.title(), len(pmids)))
```
# Assign documents to systems
```
from scipy.spatial.distance import cdist
pmids = dtm_bin.index.intersection(act_bin.index)
len(pmids)
dtm_words = dtm_bin.loc[pmids, words]
act_structs = act_bin.loc[pmids, structures]
docs = dtm_words.copy()
docs[structures] = act_structs.copy()
docs.shape
dom_dists = cdist(docs.values, archetypes.values.T, metric="dice")
dom_dists = pd.DataFrame(dom_dists, index=docs.index, columns=domains)
dom_dists.shape
doc2dom = {pmid: 0 for pmid in pmids}
for i, pmid in enumerate(pmids):
    doc2dom[pmid] = dom_dists.columns[np.argmin(dom_dists.values[i, :])]
doc2dom_df = pd.Series(doc2dom)
doc2dom_df.to_csv("data/doc2dom_rdoc.csv", header=False)
dom2doc = {dom: [] for dom in domains}
for pmid, dom in doc2dom.items():
    dom2doc[dom].append(pmid)

for dom, dom_pmids in dom2doc.items():
    n_pmids_dis = len(set(dom_pmids).intersection(set(splits["discovery"])))
    n_pmids_rep = len(set(dom_pmids).intersection(set(splits["replication"])))
    print("{:20s} {:5d} discovery {:5d} replication".format(dom, n_pmids_dis, n_pmids_rep))
```
# Plot document distances
```
from style import style
%matplotlib inline
for split, split_pmids in splits.items():
    print("Processing {} split (N={} documents)".format(split, len(split_pmids)))

    print("----- Computing Dice distance between documents")
    docs_split = docs.loc[split_pmids]
    doc_dists = cdist(docs_split, docs_split, metric="dice")
    doc_dists = pd.DataFrame(doc_dists, index=split_pmids, columns=split_pmids)

    print("----- Sorting documents by domain assignment")
    dom_pmids = []
    for dom in domains:
        dom_pmids += [pmid for pmid, sys in doc2dom.items() if sys == dom and pmid in split_pmids]
    doc_dists = doc_dists[dom_pmids].loc[dom_pmids]

    print("----- Locating transition points between domains")
    transitions = []
    for i, pmid in enumerate(dom_pmids):
        if doc2dom[dom_pmids[i-1]] != doc2dom[pmid]:
            transitions.append(i)
    transitions += [len(split_pmids)]

    print("----- Plotting distances between documents sorted by domain")
    partition.plot_partition(framework, doc_dists, transitions,
                             style.palettes[framework], suffix="_{}".format(split))
```
# IBM Db2 Event Store - Data Analytics using Python API
IBM Db2 Event Store is a hybrid transactional/analytical processing (HTAP) system. It extends the Spark SQL interface to accelerate analytics queries.
This notebook illustrates how the IBM Db2 Event Store can be integrated with multiple popular scientific tools to perform data analytics.
***Pre-Req: Event_Store_Table_Creation***
## Connect to IBM Db2 Event Store
Edit the values in the next cell
```
CONNECTION_ENDPOINT=""
EVENT_USER_ID=""
EVENT_PASSWORD=""
# Port will be 1100 for version 1.1.2 or later (5555 for version 1.1.1)
PORT = "30370"
DEPLOYMENT_ID=""
# Database name
DB_NAME = "EVENTDB"
# Table name
TABLE_NAME = "IOT_TEMPERATURE"
HOSTNAME=""
DEPLOYMENT_SPACE=""
bearerToken=!echo `curl --silent -k -X GET https://{HOSTNAME}:443/v1/preauth/validateAuth -u {EVENT_USER_ID}:{EVENT_PASSWORD} |python -c "import sys, json; print(json.load(sys.stdin)['accessToken'])"`
bearerToken=bearerToken[0]
keystorePassword=!echo `curl -k --silent GET -H "authorization: Bearer {bearerToken}" "https://{HOSTNAME}:443/icp4data-databases/{DEPLOYMENT_ID}/zen/com/ibm/event/api/v1/oltp/keystore_password"`
keystorePassword
```
## Import Python modules
```
## Note: Only run this cell if your IBM Db2 Event Store is installed with IBM Cloud Pak for Data (CP4D)
# In IBM Cloud Pak for Data, we need to create link to ensure Event Store Python library is
# properly exposed to the Spark runtime.
from pathlib import Path
import os

src = '/home/spark/user_home/eventstore/eventstore'
dst = '/home/spark/shared/user-libs/python3.6/eventstore'

if not Path(dst).is_symlink():
    os.symlink(src, dst)
    print("Creating symlink to include Event Store Python library...")
else:
    print("Symlink already exists, not creating..")
%matplotlib inline
from eventstore.common import ConfigurationReader
from eventstore.oltp import EventContext
from eventstore.sql import EventSession
from pyspark.sql import SparkSession
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn
from scipy import stats
import warnings
import datetime
warnings.filterwarnings('ignore')
plt.style.use("fivethirtyeight")
```
## Connect to Event Store
```
ConfigurationReader.setConnectionEndpoints(CONNECTION_ENDPOINT)
ConfigurationReader.setEventUser(EVENT_USER_ID)
ConfigurationReader.setEventPassword(EVENT_PASSWORD)
ConfigurationReader.setSslKeyAndTrustStorePasswords(keystorePassword[0])
ConfigurationReader.setDeploymentID(DEPLOYMENT_ID)
ConfigurationReader.getSslTrustStorePassword()
```
## Open the database
The cells in this section are used to open the database and create a temporary view for the table that we created previously.
To run Spark SQL queries, you must set up a Db2 Event Store Spark session. The EventSession class extends the optimizer of the SparkSession class.
```
sparkSession = SparkSession.builder.appName("EventStore SQL in Python").getOrCreate()
eventSession = EventSession(sparkSession.sparkContext, DB_NAME)
```
Now you can execute the command to open the database in the event session you created:
```
eventSession.open_database()
```
## Access an existing table in the database
The following code section retrieves the names of all tables that exist in the database.
```
with EventContext.get_event_context(DB_NAME) as ctx:
    print("Event context successfully retrieved.")
    print("Table names:")
    table_names = ctx.get_names_of_tables()
    for name in table_names:
        print(name)
```
Now we have the name of the existing table. We then load the corresponding table and get the DataFrame references to access the table with query.
```
tab = eventSession.load_event_table(TABLE_NAME)
print("Table " + TABLE_NAME + " successfully loaded.")
```
The next code retrieves the schema of the table we want to investigate:
```
try:
    resolved_table_schema = ctx.get_table(TABLE_NAME)
    print(resolved_table_schema)
except Exception as err:
    print("Table not found")
```
In the following cell, we create a temporary view with that DataFrame called `readings` that we will use in the queries below.
```
tab.createOrReplaceTempView("readings")
```
## Data Analytics with IBM Db2 Event Store
Data analytics tasks can be performed on table stored in the IBM Db2 Event Store database with various data analytics tools.
Let's first take a look at the timestamp range of the records.
```
query = "SELECT MIN(ts) MIN_TS, MAX(ts) MAX_TS FROM readings"
print("{}\nRunning query in Event Store...".format(query))
df_data = eventSession.sql(query)
df_data.toPandas()
```
The following cell converts the timestamps from milliseconds to datetimes to make them human-readable.
```
MIN_TS=1541019342393
MAX_TS=1541773999825
print("The time range of the dataset is from {} to {}".format(
datetime.datetime.fromtimestamp(MIN_TS/1000).strftime('%Y-%m-%d %H:%M:%S'),
datetime.datetime.fromtimestamp(MAX_TS/1000).strftime('%Y-%m-%d %H:%M:%S')))
```
## Sample Problem
Assume we are only interested in the data recorded by the 12th sensor on the 1st device on the day of 2018-11-01, and we want to investigate the effects of power consumption and ambient temperature on the temperature recorded by the sensor on this date.
Because the timestamp is recorded in milliseconds, we need to convert the datetime of interest to a time range in milliseconds, and then use the range as a filter in the query.
```
start_ts = (datetime.datetime(2018,11,1,0,0) - datetime.datetime(1970,1,1)).total_seconds() * 1000
end_ts = (datetime.datetime(2018,11,2,0,0) - datetime.datetime(1970,1,1)).total_seconds() * 1000
print("The time range of datetime 2018-11-01 in milisec is from {:.0f} to {:.0f}".format(start_ts, end_ts))
```
IBM Db2 Event Store extends the Spark SQL functionality, which allows users to apply filters with ease.
In the following cell, the relevant data is extracted according to the problem scope. Note that because we are specifying a specific device and sensor, this query is fully exploiting the index.
```
query = "SELECT * FROM readings WHERE deviceID=1 AND sensorID=12 AND ts >1541030400000 AND ts < 1541116800000 ORDER BY ts"
print("{}\nRunning query in Event Store...".format(query))
refined_data = eventSession.sql(query)
refined_data.createOrReplaceTempView("refined_reading")
refined_data.toPandas()
```
### Basic Statistics
For numerical data, knowing the descriptive summary statistics can help a lot in understanding the distribution of the data.
IBM Event Store extends the Spark DataFrame functionality. We can use the `describe` function to retrieve statistics about data stored in an IBM Event Store table.
```
refined_data.describe().toPandas()
```
It's worth noticing that some power reading records are negative, which may be caused by sensor error. The records with negative power reading will be dropped.
```
query = "SELECT * FROM readings WHERE deviceID=1 AND sensorID=12 AND ts >1541030400000 AND ts < 1541116800000 AND power > 0 ORDER BY ts"
print("{}\nRunning query in Event Store...".format(query))
refined_data = eventSession.sql(query)
refined_data.createOrReplaceTempView("refined_reading")
```
Total number of records in the refined table view
```
query = "SELECT count(*) count FROM refined_reading"
print("{}\nRunning query in Event Store...".format(query))
df_data = eventSession.sql(query)
df_data.toPandas()
```
### Covariance and correlation
- Covariance is a measure of how two variables change with respect to each other. It can be examined by calling `.stat.cov()` function on the table.
```
refined_data.stat.cov("AMBIENT_TEMP","TEMPERATURE")
refined_data.stat.cov("POWER","TEMPERATURE")
```
- Correlation is a normalized measure of covariance that is easier to understand, as it provides quantitative measurements of the statistical dependence between two random variables. It can be examined by calling `.stat.corr()` function on the table.
```
refined_data.stat.corr("AMBIENT_TEMP","TEMPERATURE")
refined_data.stat.corr("POWER","TEMPERATURE")
```
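For reference, the sample correlation that `.stat.corr()` returns can be computed from first principles; a pure-Python sketch (on toy lists, not the Event Store table):

```python
import math

def pearson_corr(xs, ys):
    # Sample Pearson correlation: covariance over the product of std devs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / (n - 1))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / (n - 1))
    return cov / (sx * sy)
```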
### Visualization
Visualization of each feature provides insights into the underlying distributions.
- Distribution of Ambient Temperature
```
query = "SELECT ambient_temp FROM refined_reading"
print("{}\nRunning query in Event Store...".format(query))
ambient_temp = eventSession.sql(query)
ambient_temp= ambient_temp.toPandas()
ambient_temp.head()
fig, axs = plt.subplots(1,3, figsize=(16,6))
stats.probplot(ambient_temp.iloc[:,0], plot=plt.subplot(1,3,1))
axs[1].boxplot(ambient_temp.iloc[:,0])
axs[1].set_title("Boxplot on Ambient_temp")
axs[2].hist(ambient_temp.iloc[:,0], bins = 20)
axs[2].set_title("Histogram on Ambient_temp")
```
- Distribution of Power Consumption
```
query = "SELECT power FROM refined_reading"
print("{}\nRunning query in Event Store...".format(query))
power = eventSession.sql(query)
power= power.toPandas()
power.head()
fig, axs = plt.subplots(1,3, figsize=(16,6))
stats.probplot(power.iloc[:,0], plot=plt.subplot(1,3,1))
axs[1].boxplot(power.iloc[:,0])
axs[1].set_title("Boxplot on Power")
axs[2].hist(power.iloc[:,0], bins = 20)
axs[2].set_title("Histogram on Power")
```
- Distribution of Sensor Temperature
```
query = "SELECT temperature FROM refined_reading"
print("{}\nRunning query in Event Store...".format(query))
temperature = eventSession.sql(query)
temperature= temperature.toPandas()
temperature.head()
fig, axs = plt.subplots(1,3, figsize=(16,6))
stats.probplot(temperature.iloc[:,0], plot=plt.subplot(1,3,1))
axs[1].boxplot(temperature.iloc[:,0])
axs[1].set_title("Boxplot on Temperature")
axs[2].hist(temperature.iloc[:,0], bins = 20)
axs[2].set_title("Histogram on Temperature")
```
- Input-variable vs. Target-variable
```
fig, axs = plt.subplots(1,2, figsize=(16,6))
axs[0].scatter(power.iloc[:,0], temperature.iloc[:,0])
axs[0].set_xlabel("power in kW")
axs[0].set_ylabel("temperature in celsius")
axs[0].set_title("Power vs. Temperature")
axs[1].scatter(ambient_temp.iloc[:,0], temperature.iloc[:,0])
axs[1].set_xlabel("ambient_temp in celsius")
axs[1].set_ylabel("temperature in celsius")
axs[1].set_title("Ambient_temp vs. Temperature")
```
**By observing the plots above, we noticed:**
- The distributions of power consumption, ambient temperature, and sensor temperature each follow a roughly normal distribution.
- The scatter plot shows the sensor temperature has linear relationships with power consumption and ambient temperature.
## Summary
This notebook introduced you to data analytics using IBM Db2 Event Store.
## Next Step
`"Event_Store_ML_Model_Deployment.ipynb"` will show you how to build and deploy a machine learning model.
<p><font size=-1 color=gray>
© Copyright 2019 IBM Corp. All Rights Reserved.
<p>
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file
except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the
License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
express or implied. See the License for the specific language governing permissions and
limitations under the License.
</font></p>
<img style="float: center;" src="images/CI_horizontal.png" width="600">
<center>
<span style="font-size: 1.5em;">
<a href='https://www.coleridgeinitiative.org'>Website</a>
</span>
</center>
Ghani, Rayid, Frauke Kreuter, Julia Lane, Adrianne Bradford, Alex Engler, Nicolas Guetta Jeanrenaud, Graham Henke, Daniela Hochfellner, Clayton Hunter, Brian Kim, Avishek Kumar, and Jonathan Morgan.
_citation to be updated on export_
# Databases
---
## Table of Contents
- [Introduction](#Introduction)
- [Learning objectives](#Learning-objectives)
- [Methods](#Methods)
- [Connection information](#Connection-information)
- [GUI clients](#GUI-clients)
- [GUI - DBeaver](#GUI---DBeaver)
- [GUI - pgAdmin](#GUI---pgAdmin)
- [Python database clients](#Python-database-clients)
- [Python - `psycopg2`](#Python---psycopg2)
- [Python - `SQLAlchemy`](#Python---SQLAlchemy)
- [Python - `pandas`](#Python---pandas)
## Introduction
- Back to [Table of Contents](#Table-of-Contents)
Regardless of how you connect, most interactions with relational database management systems (RDBMS) are carried out via Structured Query Language (SQL). Most programming languages are more similar to each other than they are different; SQL, however, is genuinely different, both conceptually and syntactically.
To make learning SQL easier, in this notebook we list a number of database clients you can use to connect to a PostgreSQL database and run SQL queries, so you can try them out and find one you prefer to use (we recommend pgAdmin if you are new to databases).
We will follow the following sequence:
1. Connection Information: We'll outline the information needed to connect to our class database server.
2. Then, we'll briefly look at how to use a number of different SQL clients, and the pros and cons of each.
3. Finally, we'll each pick one to connect and test before we move on to focusing on SQL.
### Learning objectives
- Back to [Table of Contents](#Table-of-Contents)
This notebook documents different database clients you can use to run SQL queries against the PostgreSQL database used for this class. PostgreSQL is an open source relational database management system (DBMS) developed by a worldwide team of volunteers.
**Learning objectives:**
- Understand options for connecting to a PostgreSQL database and running SQL, including pros and cons of each.
- Pick an SQL interface to use while learning SQL.
### Methods
- Back to [Table of Contents](#Table-of-Contents)
We cover the following database clients in this notebook:
1. Graphical User Interface (GUI) application 'pgAdmin'
2. Using SQL in Python with:
- Direct database connection - `psycopg2`
- `SQLAlchemy`
- `pandas`
You can use any of these clients to run SQL in the database. Some are easier to use or better suited to certain situations than others. Each client's section includes information on its pros and cons.
If you are here to learn SQL, once you've looked over your options, pick one and proceed to the notebook "Intro to SQL" to learn more about the SQL language.
## Connection information
- Back to [Table of Contents](#Table-of-Contents)
All of the programs that connect to and query a database listed below need to be initially told how to connect to the database one wants to query. There are a set of common connection properties that are used to specify how to connect to a broad range of database servers:
- **_host name_**: the network name of the database server one is connecting to, if the database is not on your local computer.
- **_host port_**: the network port on which the database server is listening, if the database is not on your local computer. Most database server types have a default port that is assumed if you don't specify a port (5432 for PostgreSQL, for example, or 3306 for MySQL).
- **_username_**: for databases that authenticate a connection based on user credentials, the username you want to use to connect.
- **_password_**: for databases that authenticate a connection based on user credentials, the password you want to use to authenticate your username.
- **_database name_**: The name of the database to which you want to connect.
Not all setups will need all of these parameters to be specified to successfully connect to the database. For our class database in the ADRF, for example, we only need to specify:
- **_host name_**: 10.10.2.10
- **_database name_**: appliedda
The database server listens on the default PostgreSQL port (5432), so no port is needed, and it authenticates the user based on whether that user has a linux user on the database server itself, rather than requiring a username and password (though access to schemas and tables inside are controlled by a more stringent set of per-user access privileges stored within the database).
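As a sketch of how these properties fit together, the following helper (our own illustration, not part of any client library) collects the connection parameters and fills in the server type's default port when none is given:

```python
# Hypothetical helper (ours, for illustration): gather connection
# properties, filling in the server type's default port when none is given.
DEFAULT_PORTS = {"postgresql": 5432, "mysql": 3306}

def connection_properties(host, database, port=None,
                          username=None, password=None,
                          server_type="postgresql"):
    """Return a dict of the properties needed to open a connection."""
    props = {
        "host": host,
        "database": database,
        "port": port if port is not None else DEFAULT_PORTS[server_type],
    }
    # username/password are only included when the server authenticates
    # by credentials; our class database does not need them.
    if username is not None:
        props["username"] = username
    if password is not None:
        props["password"] = password
    return props

# Our class database only needs host name and database name:
print(connection_properties("10.10.2.10", "appliedda"))
# {'host': '10.10.2.10', 'database': 'appliedda', 'port': 5432}
```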
## GUI clients
- Back to [Table of Contents](#Table-of-Contents)
The first database clients we will cover are Graphical User Interface (GUI) clients. These clients are designed to be used with mouse and keyboard, and to simplify submitting queries to a database and interacting with the results.
The ADRF has two GUI clients you can use to access the database:
- DBeaver
- pgAdmin
### GUI - DBeaver
- Back to [Table of Contents](#Table-of-Contents)
DBeaver is open-source software built to connect to most popular databases. The connection settings have already been configured, so you can simply open the program and begin exploring the data to which you have access.
1. Open DBeaver
<br>
<img src="images/DBeaver_thumbnail.png" />
<br><br>
2. Expand the navigation window to explore the `appliedda` database
<br>
<img src="images/DBeaver_Navigator.png" />
<br><br>
3. Open a script to perform queries against the data
<br>
<img src="images/Dbeaver_full.png" />
### GUI - pgAdmin
- Back to [Table of Contents](#Table-of-Contents)
pgAdmin is a PostgreSQL client written and supported by the PostgreSQL community. It isn't the most beautiful program, but it is full-featured and available on many platforms. Note that it connects only to PostgreSQL databases.
**1. Running pgAdmin** Double-click the "`pgAdmin III`" icon on the Desktop in the ADRF workspace.
<img src="images/pgAdmin-open.png" />
**2. Creating a connection to the class database** In pgAdmin:
- Go to the File menu, then click on the "Add Connection to Server" option at the top left.
- In the "New Server Registration" window that opens, set:
- the "Name" to whatever you want to call the connection (we've used "ADRF-appliedda" in this example)
- the "Host" to "10.10.2.10"
- the "Username" field to your username (it won't let you leave it empty)
- and uncheck the "Store password" checkbox
<img src="images/pgAdmin-new_connection.png" />
**3. Connecting to the class database 'appliedda'**
- Double-click on the "ADRF-appliedda" link in the pane on the left, under "Server Groups" --> "Servers (1)".
- If prompted for a password, just click "OK". You do not have to type any password.
- On successful connection, you should see items under "ADRF-appliedda", including "Databases". Click on the "+" sign to the left of "Databases".
- Double-click on "appliedda" (it will probably have a red X on its icon, denoting that it is not currently connected).
<img src="images/pgAdmin-connected.png" />
**4. Running a Query**
Once you are connected to the "appliedda" database, you can start running queries using this GUI. Click on the button that looks like a magnifying glass with "SQL" inside it, at the top center of the window. Enter your SQL query in the "SQL Editor" in the top left.
Let us count the number of rows in the table `il_des_kcmo.il_qcew_employers`:
```sql
SELECT COUNT(*)
FROM il_des_kcmo.il_qcew_employers;
```
Now, press the green triangle "play" button to run the query. In the Data Output tab (bottom left), you will see the results of this query.
<img src="images/pgAdmin-run_query.png" />
Other queries you can run:
- Counting the number of unique employers in the data:
```sql
SELECT COUNT(DISTINCT ein)
FROM il_des_kcmo.il_qcew_employers;
```
- Counting the number of records for each NAICS industry code:
```sql
SELECT naics, COUNT(*) AS cnt
FROM il_des_kcmo.il_qcew_employers
GROUP BY naics;
```
## Python database clients
- Back to [Table of Contents](#Table-of-Contents)
Apart from client GUIs, we can also access PostgreSQL from programming languages like Python. We do this using libraries of code, called 'packages', that extend core Python.
The commands work similarly: you can execute almost any SQL from a programming language that you can from an interactive client, and the results come back in a format you can keep working with after the SQL statements finish.
_(Python lets you interact with databases using SQL just as you would in any SQL GUI or terminal. Python code can run SELECTs, CREATEs, INSERTs, UPDATEs, DELETEs, and any other SQL.)_
Below are three ways one can interact with PostgreSQL using Python:
1. **_`psycopg2`_** - The Python `psycopg2` package implements Python's DBAPI, a mostly-standardized API for database interaction, to allow for querying PostgreSQL. It is the closest you can get in Python to a direct database connection.
2. **_`SQLAlchemy`_** - `SQLAlchemy` can be used to map Python objects to database tables, but it also contains a wrapper around DBAPI that allows query code to be more consistently re-used across databases.
3. **_`pandas`_** - `pandas` is an analysis package that can use a database connection (with either `SQLAlchemy` or `psycopg2`) to read the results of SQL queries directly into a `pandas` DataFrame, allowing you to further analyze the data in Python.
### Python - `psycopg2`
- Back to [Table of Contents](#Table-of-Contents)
The `psycopg2` package is the most popular PostgreSQL adapter for the Python programming language. This Python package implements the standard DBAPI Python interface for interacting with a relational database. This is the closest you can get to connecting directly to the database in Python - there aren't any objects creating in-memory tables or layers of abstraction between you and the data. Your Python sends SQL directly to the database and then deals row-by-row with the results.
__Pros:__
- This is often the best way to use Python to manage a database (ALTER, CREATE, INSERT, UPDATE, etc.). Fancier packages sometimes don't deal well with more complicated management SQL statements.
- It also is often what you have to resort to for genuinely big data, since the different ways you can fetch rows from the results of a query give you fine-grained control over exactly how much data is in memory at a given time.
- If you have a particularly vexing problem with a more feature-rich package, this is going to be your bare-bones troubleshooting sanity check to see if the problem is with that package rather than your SQL or your database.
__Cons:__
- All this control and bare-bones minimalism means that some things that are pretty easy in pandas can take a lot more code, time, and learning at this lower level. Pandas manages a lot of the details of connecting to and interacting with a database for you.
__Mixed:__
- In theory, when you write DBAPI-compliant code, that code can be used to interact with any database that has a DBAPI-compliant driver package. In practice, DBAPI drivers are only about 95% compatible between databases, and the SQL for some tasks differs from database to database, so you end up with code that can be ported between databases with a few tweaks, and then needs testing to make sure your SQL still works.
```
# importing datetime and psycopg2 package
import datetime
import psycopg2
import psycopg2.extras
print( "psycopg2 imports completed at " + str( datetime.datetime.now() ) )
# set up connection properties
db_host = "10.10.2.10"
db_database = "appliedda"
# and connect.
pgsql_connection = psycopg2.connect( host = db_host, database = db_database )
print( "psycopg2 connection to host: " + db_host + ", database: " + db_database
+ " completed at " + str( datetime.datetime.now() ) )
# default cursor: rows come back as tuples of column values
pgsql_cursor = pgsql_connection.cursor()
# DictCursor: rows come back as dict-like objects mapping column names to values (preferred)
pgsql_cursor = pgsql_connection.cursor( cursor_factory = psycopg2.extras.DictCursor )
print( "psycopg2 cursor created at " + str( datetime.datetime.now() ) )
# SQL
sql_string = "SELECT COUNT( * ) AS row_count FROM public.tl_2016_us_county;"
# execute it.
pgsql_cursor.execute( sql_string )
# fetch first (and only) row, then output the count
first_row = pgsql_cursor.fetchone()
print( "row_count = " + str( first_row[ "row_count" ] ) )
# SQL
sql_string = "SELECT * FROM public.tl_2016_us_county LIMIT 1000;"
# execute it.
pgsql_cursor.execute( sql_string )
# ==> fetch rows to loop over:
# all rows.
#result_list = pgsql_cursor.fetchall()
# first 10 rows.
result_list = pgsql_cursor.fetchmany( size = 10 )
# loop
result_counter = 0
for result_row in result_list:
result_counter += 1
print( "- row " + str( result_counter ) + ": " + str( result_row ) )
#-- END loop over 10 rows --#
# ==> loop over the rest one at a time.
result_counter = 0
result_row = pgsql_cursor.fetchone()
while result_row is not None:
# increment counter
result_counter += 1
# get next row
result_row = pgsql_cursor.fetchone()
#-- END loop over rows, one at a time. --#
print( "fetchone() row_count = " + str( result_counter ) )
# Close Connection and cursor
pgsql_cursor.close()
pgsql_connection.close()
print( "psycopg2 cursor and connection closed at " + str( datetime.datetime.now() ) )
```
### Python - `SQLAlchemy`
- Back to [Table of Contents](#Table-of-Contents)
`SQLAlchemy` is a higher-level Python database library that, among many other things, contains a wrapper around DBAPI that makes a subset of the DBAPI API work the same for any database `SQLAlchemy` supports (though it doesn't work exactly like DBAPI... nothing's perfect). You can use this wrapper to write Python code that can be re-used with different databases (though you'll have to make sure the SQL also is portable). `SQLAlchemy` also includes advanced features like connection pooling in its implementation of DBAPI that help to make it perform better than a direct database connection.
Just be aware that the farther you move from a direct connection, the more potential there is for things to go wrong. Under the hood, `SQLAlchemy` is using `psycopg2` for its PostgreSQL database access, so now you have two relatively complex packages working in tandem. If you get a particularly vexing bug running SQL with `SQLAlchemy`, in particular complex SQL or statements that update or alter the database, make sure to try that SQL with a pure DBAPI client or in the command line client to see if it is a problem with `SQLAlchemy`, not with your SQL or database.
`SQLAlchemy`'s database connection is called an engine. To connect a `SQLAlchemy` engine to a database, you will:
- create a `SQLAlchemy` connection string for your database.
- use that string to initialize an engine and connect it to your database.
A full connection URL for `SQLAlchemy` looks like this:
```
dialect+driver://username:password@host:port/database
```
If you recall our connection properties, we only need to specify host name and database. In `SQLAlchemy`, any elements of the URL that are not needed can be omitted. So for our database, the connection URL is:
```
postgresql://10.10.2.10/appliedda
```
```
# imports
import sqlalchemy
import datetime
# Connect
connection_string = 'postgresql://10.10.2.10/appliedda'
pgsql_engine = sqlalchemy.create_engine( connection_string )
print( "SQLAlchemy engine connected to " + connection_string + " at " + str( datetime.datetime.now() ) )
# Single row query - with the streaming option so it does not return results until we "fetch" them:
sql_string = "SELECT COUNT( * ) AS row_count FROM public.tl_2016_us_county;"
query_result = pgsql_engine.execution_options( stream_results = True ).execute( sql_string )
# output results - you can also check which columns "query_result" contains
# by calling its keys() method. Like so:
print( query_result.keys() )
# print an empty string to separate out our two more useful print statements
print('')
# fetch first (and only) row, then output the count
first_row = query_result.fetchone()
print("row_count = " + str( first_row[ "row_count" ] ) )
# run query with the streaming option so it does not return results until we "fetch" them:
# SQL
sql_string = "SELECT * FROM public.tl_2016_us_county LIMIT 1000;"
# execute it.
query_result = pgsql_engine.execution_options( stream_results = True ).execute( sql_string )
# ==> fetch rows to loop over:
# all rows.
#result_list = query_result.fetchall()
# first 10 rows.
result_list = query_result.fetchmany( size = 10 )
# loop
result_counter = 0
for result_row in result_list:
result_counter += 1
print( "- row " + str( result_counter ) + ": " + str( result_row ) )
#-- END loop over 10 rows --#
# ==> loop over the rest one at a time.
result_counter = 0
result_row = query_result.fetchone()
while result_row is not None:
# increment counter
result_counter += 1
# get next row
result_row = query_result.fetchone()
#-- END loop over rows, one at a time. --#
print( "fetchone() row_count = " + str( result_counter ) )
# Clean up:
pgsql_engine.dispose()
print( "SQLAlchemy engine dispose() called at " + str( datetime.datetime.now() ) )
```
### Python - `pandas`
- Back to [Table of Contents](#Table-of-Contents)
Next we'll use the [pandas package](http://pandas.pydata.org/) to populate `pandas` DataFrames from the results of SQL queries. `pandas` uses a `SQLAlchemy` database engine to connect to databases and run queries. It then reads data returned from a given SQL query and further processes it to store it in a tabular data format called a "DataFrame" (a term that will be familiar to those with R or Stata experience).
DataFrames allow for easy statistical analysis, and can be directly used for machine learning. They also load your entire result set into memory by default, and so are not suitable for really large data sets.
And, as discussed in the `SQLAlchemy` section, this is yet another layer added on top of other relatively complex database packages, such that you multiply the potential for a peculiarity in one to cause obscure, difficult-to-troubleshoot problems in one of the other layers. It won't occur frequently, but if you run into weird or inexplicable problems when turning SQL into DataFrames, try running the SQL using lower layers to isolate the problem.
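When a result set is too big to hold in memory, `pandas.read_sql()` accepts a `chunksize` parameter and returns an iterator of DataFrames instead of one big frame. The sketch below demonstrates this against an in-memory SQLite database - a stand-in we use here so the example runs anywhere pandas is installed; against our class database you would pass a `SQLAlchemy` PostgreSQL engine instead:

```python
import sqlite3
import pandas as pd

# Stand-in database so the sketch is self-contained; with PostgreSQL you
# would pass a SQLAlchemy engine as the connection instead.
connection = sqlite3.connect(":memory:")
pd.DataFrame({"id": range(10)}).to_sql("example_rows", connection, index=False)

# chunksize=4 makes read_sql return an iterator of DataFrames,
# so at most 4 rows are held in memory at a time.
total_rows = 0
for chunk in pd.read_sql("SELECT id FROM example_rows", connection, chunksize=4):
    total_rows += len(chunk)

print("total rows =", total_rows)  # total rows = 10
```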
In the code cell below, we'll use `SQLAlchemy` to connect to the database, then we'll give this engine to pandas and let it retrieve and process data.
_Note: in addition to processing SQL queries, `pandas` has a range of [Input/Output tools](http://pandas.pydata.org/pandas-docs/stable/io.html) that let it read from and write to a large variety of tabular data formats, including CSV and Excel files, databases via SQL, JSON files, and even SAS and Stata data files. In the example below, we'll use the `pandas.read_sql()` function to read the results of an SQL query into a data frame._
```
# imports
import datetime
import sqlalchemy
import pandas
# Connect - create SQLAlchemy engine for pandas to use.
connection_string = 'postgresql://10.10.2.10/appliedda'
pgsql_engine = sqlalchemy.create_engine( connection_string )
print( "SQLAlchemy engine connected to " + connection_string + " at " + str( datetime.datetime.now() ) )
# Single row query
sql_string = "SELECT COUNT( * ) AS row_count FROM public.tl_2016_us_county;"
df = pandas.read_sql( sql_string, con = pgsql_engine )
# get row_count - first get first row
first_row = df.iloc[ 0 ]
# then grab value.
row_count = first_row[ "row_count" ]
print("row_count = " + str( row_count ) )
# and call head().
df.head()
# SQL
sql_string = "SELECT * FROM public.tl_2016_us_county LIMIT 2000;"
# execute it.
df = pandas.read_sql( sql_string, con = pgsql_engine )
# unlike previous Python examples, rows are already fetched and in a dataframe:
# you can loop over them...
row_count = 0
for result_row in df.iterrows():
row_count += 1
#-- END loop over rows. --#
print( "loop row_count = " + str( row_count ) )
# Print out the first X using head()
output_count = 10
df.head( output_count )
# etc.
# Close Connection - Except you don't have to because pandas does it for you!
```
```
from collections import OrderedDict
## Pandas
import pandas as pd
from IPython.display import display
from IPython.display import HTML
from pandas.io.json import json_normalize
pd.set_option('max_colwidth',255)
pd.set_option('max_columns',10)
#### Prep for the presentation
### Authenticate to Ambari
#### Python requirements
import difflib
import getpass
import json
import requests
import sys
import time
#### Change these to fit your Ambari configuration
ambari_protocol = 'http'
ambari_server = 'sroberts-bp02.cloud.hortonworks.com'
#ambari_server = 'pregion-shared01.cloud.hortonworks.com'
ambari_port = 8080
ambari_user = 'admin'
#cluster = 'Sandbox'
#### Above input gives us http://user:pass@hostname:port/api/v1/
api_url = ambari_protocol + '://' + ambari_server + ':' + str(ambari_port)
#### Prompt for password & build the HTTP session
ambari_pass = getpass.getpass()
s = requests.Session()
s.auth = (ambari_user, ambari_pass)
s.headers.update({'X-Requested-By':'seanorama'})
#### Authenticate & verify authentication
r = s.get(api_url + '/api/v1/clusters')
assert r.status_code == 200
print("You are authenticated to Ambari!")
```
# Field Notes: Ambari Blueprints

## Nerd Alert: Presenting from ipython
* https://github.com/damianavila/RISE
#### Not Zeppelin
## whoami
Sean Roberts
Partner Engineering, EMEA

## Today
* Requirements for Blueprints
* Refresher on Ambari Stacks
* Blueprint & Cluster Template
* Deploying the Blueprint & Cluster
* Field Notes *(sort of)*
* Questions
## Not Today
### Deploying with Ambari:
- Infrastructure & Node Prep
- Deploying Ambari Server & Agents
- Ambari considerations (java, database, …)
- Lessons learned from large scale deployments
### Ongoing operations:
- General overview of the API
- Configuration Groups
- Adding nodes to a config group
## What You'll Need
* Ambari Server & Agents Installed
* Ambari Agents Registered to Ambari Server (/api/v1/hosts)
* HDP prereqs (networking, OS repos, …)
* If using separate or non-default SQL databases, configure them first.
* Access & credentials to Ambari Server
* Blueprint (JSON)
* Cluster Description (JSON)
* https://wiki.hortonworks.com/display/SE/SE+Cloud
* http://github.com/HortonworksUniversity/Ops_Labs/1.1.0/build/security/ambari-bootstrap
* CloudBreak
* Note: HA Blueprints not supported *(simple validation bug should be fixed soon)*
```
r = s.get(api_url + '/api/v1/hosts')
print(json.dumps(r.json(), indent=2))
```
## Reminder: Ambari Stacks
* Stack: HDP, PHD, ...
* Versions: 2.2
* Services: HDFS, SPARK
* Components: NODEMANAGER
## Service & Component List
```
r = s.get(api_url + '/api/v1/stacks/HDP/versions/2.2/services')
stackservicecomponents = {}
for a in r.json()['items']:
r = s.get(a['href'] + '/components')
components = []
for b in [a['StackServiceComponents'] for a in r.json()['items']]:
service = b['service_name']
components.append(b['component_name'])
stackservicecomponents[service] = ' '.join(components)
pd.DataFrame.from_dict(stackservicecomponents, orient='index').sort()
```
## A Blueprint is
JSON document with 3 sections:
* `Blueprints`: Ambari Stack to use
* `host_groups`: Grouping of hosts and the Ambari Stack's components to deploy
* `configurations`: (optional) Specific configurations to pass through for the Ambari Stack
```json
{
"Blueprints": {
"stack_name": "HDP", "stack_version": "2.2"
},
"host_groups": [
{ "name": "master_1", "components": [ { "name": "NAMENODE" }, ... ] },
{ "name": "slave_1", "components": [ { "name": "DATANODE" }, ... ] }
],
"configurations": [
{ "hive-site": { "hive.execution.engine": "tez" }
]
}
```
## Blueprint example
* With special configurations:
- HA HDFS
- HA YARN Resource Manager
- Set different HDFS dirs depend on host_group
- and Oozie using Postgresql instead of Derby
```
blueprint = json.loads(open('blueprints/blueprint-config-example.json').read())
#blueprint = json.loads(open('blueprints/blueprint-hdfs-ha.json').read())
#blueprint = json.loads(open('blueprints/blueprint-yarn-ha.json').read())
#blueprint = json.loads(open('blueprints/hdp-all.json').read())
print(json.dumps(blueprint, indent=2))
host_groups = {}
for group in blueprint['host_groups']:
host_group = group['name']
components = []
for component in group['components']:
components.append(component['name'])
host_groups[host_group] = components
```
## Host Groups & Components
```
display(pd.DataFrame.from_dict(OrderedDict(sorted(host_groups.items())), orient='index').T.sort())
```
## Upload Blueprint
```
POST /api/v1/blueprints/blueprintname
```
```
## Upload the Blueprint
body = blueprint
r = s.post(api_url + '/api/v1/blueprints/testblueprint', data=json.dumps(body))
print(r.status_code) # should return 201
#print(json.dumps(r.json(), indent=2))
r = s.get(api_url + '/api/v1/blueprints/testblueprint')
print(json.dumps(r.json(), indent=2))
```
## Cluster Description
### Setup Oozie database
On Ambari Server:
```bash
echo "host all all 172.24.0.0/16 trust" >> /var/lib/pgsql/data/pg_hba.conf
/etc/init.d/postgresql restart
sudo -u postgres psql
CREATE DATABASE oozie;
CREATE USER oozie WITH PASSWORD 'changethis';
GRANT ALL PRIVILEGES ON DATABASE oozie TO oozie;
```
```
cluster = {
"blueprint": "testblueprint",
"configurations": [
{ "oozie-site": {
"oozie.db.schema.name" : "oozie",
"oozie.service.JPAService.create.db.schema" : "true",
"oozie.service.JPAService.jdbc.driver" : "org.postgresql.Driver",
"oozie.service.JPAService.jdbc.username" : "oozie",
"oozie.service.JPAService.jdbc.password" : "changethis",
"oozie.service.JPAService.jdbc.url" : "jdbc:postgresql://sroberts-bp02.cloud.hortonworks.com:5432/oozie?createDatabaseIfNotExist=true"
}},
{ "yarn-site" : {
"yarn.resourcemanager.hostname.rm1": "sroberts-bp04.cloud.hortonworks.com",
"yarn.resourcemanager.hostname.rm2": "sroberts-bp05.cloud.hortonworks.com"
}}
],
"default_password": "changethis",
"host_groups": [
{ "hosts": [
{ "fqdn": "sroberts-bp02.cloud.hortonworks.com" }
], "name": "gateway"
},
{ "hosts": [
{ "fqdn": "sroberts-bp03.cloud.hortonworks.com" }
], "name": "master_1"
},
{ "hosts": [
{ "fqdn": "sroberts-bp04.cloud.hortonworks.com" }
], "name": "master_2"
},
{ "hosts": [
{ "fqdn": "sroberts-bp05.cloud.hortonworks.com" }
], "name": "master_3"
},
{ "configurations": [
{ "yarn-site": {
"yarn.nodemanager.local-dirs": "/mnt/hdfs0/yarn/local,/mnt/hdfs1/yarn/local,/mnt/hdfs2/yarn/local",
"yarn.nodemanager.log-dirs": "/mnt/hdfs0/yarn/log,/mnt/hdfs1/yarn/log,/mnt/hdfs2/yarn/log"
}
},{ "hdfs-site": { "dfs.datanode.data.dir": "/mnt/hdfs0/data,/mnt/hdfs1/data,/mnt/hdfs2/data"}}
],
"hosts": [
{ "fqdn": "sroberts-bp06.cloud.hortonworks.com" }
],
"name": "slave_standard"
},
{ "hosts": [
{ "fqdn": "sroberts-bp07.cloud.hortonworks.com" },
{ "fqdn": "sroberts-bp08.cloud.hortonworks.com" }
], "name": "slave_archive"
}
]
}
body = cluster
r = s.post(api_url + '/api/v1/clusters/mycluster', data=json.dumps(body))
print(r.status_code) ## Should return 202
print(json.dumps(r.json(), indent=2))
r = s.get(api_url + '/api/v1/clusters/mycluster/requests/1')
print(json.dumps(r.json()['Requests'], indent=2))
```
## Export Blueprint
```
/api/v1/blueprints
/api/v1/clusters/clustername?format=blueprint
```
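A minimal sketch of calling the export endpoint; the URL-building helper is ours, added for illustration, and the host name below is a placeholder:

```python
# Hypothetical helper (ours, for illustration): build the URL that exports
# a running cluster's configuration as a blueprint.
def blueprint_export_url(api_url, cluster_name):
    return api_url + '/api/v1/clusters/' + cluster_name + '?format=blueprint'

url = blueprint_export_url('http://ambari.example.com:8080', 'mycluster')
print(url)
# http://ambari.example.com:8080/api/v1/clusters/mycluster?format=blueprint

# With the authenticated session created at the top of this notebook,
# you would then fetch and pretty-print the blueprint:
# r = s.get(url)
# print(json.dumps(r.json(), indent=2))
```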
## Field Notes: How to ...
* Separate log locations
* HA
## Field Notes: Separate Databases
Example: PostgreSQL for Oozie
1. Prepare the database (see below)
2. Add appropriate configuration to Blueprint or Cluster template
## Field Notes: Blueprint Schema Changes from 1.7 to 2.0
Ambari Metrics replaces Ganglia & Nagios
| 1.7 | 2.0 |
|------------------------------|-------------------|
| NAGIOS_SERVER GANGLIA SERVER | METRICS_COLLECTOR |
| GANGLIA_MONITOR | METRICS_MONITOR |
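A small sketch of applying those renames to a 1.7 blueprint's component list. The mapping is transcribed from the table above; the helper function itself is ours, for illustration:

```python
# Mapping transcribed from the 1.7 -> 2.0 table above.
COMPONENT_RENAMES = {
    "NAGIOS_SERVER": "METRICS_COLLECTOR",
    "GANGLIA_SERVER": "METRICS_COLLECTOR",
    "GANGLIA_MONITOR": "METRICS_MONITOR",
}

def upgrade_component_list(components):
    """Replace 1.7 monitoring components with their 2.0 equivalents,
    dropping duplicates (both 1.7 servers map to METRICS_COLLECTOR)."""
    seen = []
    for name in components:
        renamed = COMPONENT_RENAMES.get(name, name)
        if renamed not in seen:
            seen.append(renamed)
    return seen

print(upgrade_component_list(["NAMENODE", "NAGIOS_SERVER", "GANGLIA_SERVER"]))
# ['NAMENODE', 'METRICS_COLLECTOR']
```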
## Field Notes: HDFS dirs
- Blueprints will not detect your mount points.
- It will use the default path (/hadoop/...) unless set.
- Add to your Blueprint or Cluster template.
- They can be added globally or different for each host-group.
```json
{ "hdfs-site": { "dfs.datanode.data.dir": "/mnt/hdfs0/data,/mnt/hdfs1/data,/mnt/hdfs2/data"}}
```
## Field Notes: Timeouts
Raise the limits if running with limited networking or on slow hardware
```
# grep agent.*timeout /etc/ambari-server/conf/ambari.properties
agent.package.install.task.timeout=1800
agent.task.timeout=900
```
## Consideration: Local Repositories
/api/v1/stacks/HDP/versions/2.2/operating_systems/
```
for repo in HDP-2.2 HDP-UTILS-1.1.0.20; do
curl -sSu admin http://${ambari_server}:8080/api/v1/stacks/HDP/versions/2.2/operating_systems/redhat6/repositories/${repo} -o /tmp/update-repo.txt
sed -ir -e 's/\(public\|private\)-repo-1.hortonworks.com/repo.cloud.hortonworks.com/g' -e '/^ "href"/d' /tmp/update-repo.txt
curl -sSu admin -H x-requested-by:sean http://${ambari_server}:8080/api/v1/stacks/HDP/versions/2.2/operating_systems/redhat6/repositories/${repo} -T /tmp/update-repo.txt
done
```
## Field Notes: Stack Advisor does not run on Blueprints!
But this undocumented API helps: `/api/v1/stacks/HDP/versions/2.2/recommendations`
```
body = {
"recommend" : "host_groups",
"services" : [ "AMBARI_METRICS","FALCON","FLUME","HBASE","HDFS","HIVE","KAFKA","KNOX","MAPREDUCE2","OOZIE","PIG","SLIDER","SPARK","SQOOP","STORM","TEZ","YARN","ZOOKEEPER" ],
"hosts" : [ "sroberts-bp02.cloud.hortonworks.com","sroberts-bp03.cloud.hortonworks.com","sroberts-bp04.cloud.hortonworks.com","sroberts-bp05.cloud.hortonworks.com","sroberts-bp06.cloud.hortonworks.com","sroberts-bp07.cloud.hortonworks.com","sroberts-bp08.cloud.hortonworks.com" ]
}
r = s.post(api_url + '/api/v1/stacks/HDP/versions/2.2/recommendations', data=json.dumps(body))
print(json.dumps(r.json(), indent=2))
body = {
"recommend" : "configurations",
"services" : [ "AMBARI_METRICS","FALCON","FLUME","HBASE","HDFS","HIVE","KAFKA","KNOX","MAPREDUCE2","OOZIE","PIG","SLIDER","SPARK","SQOOP","STORM","TEZ","YARN","ZOOKEEPER" ],
"hosts" : [ "sroberts-bp02.cloud.hortonworks.com","sroberts-bp03.cloud.hortonworks.com","sroberts-bp04.cloud.hortonworks.com","sroberts-bp05.cloud.hortonworks.com","sroberts-bp06.cloud.hortonworks.com","sroberts-bp07.cloud.hortonworks.com","sroberts-bp08.cloud.hortonworks.com" ]
}
r = s.post(api_url + '/api/v1/stacks/HDP/versions/2.2/recommendations', data=json.dumps(body))
print(json.dumps(r.json(), indent=2))
```
## Blueprint Generator
https://github.com/HortonworksUniversity/Ops_Labs/1.1.0/build/security/ambari-bootstrap/tree/master/deploy
# The End

### Field Notes from Aaron Wiebe
* We pre-deployed and configured ambari agents on each node. This allowed us to perform the installation without automated ssh/root permissions being given to Ambari.
* Registration was done in batches of maximum 400 nodes. Doing more than this crushed the internal yum repository during installation and caused timeouts and installation failures.
* Local repos were used
* Client threads on the ambari server needed to be turned up to handle the high number of agents
* AMS is slick, but requires its own server, with HBase running in distributed mode against HDFS.
* Ambari server database needed its own server and the key cache of the mysql database was turned up to 24G and threads increased.
* Ambari server heap was increased to 12G.
* We’re looking at SSDs for Ambari servers and databases in the future.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
plt.clf()
plt.figure(figsize=(15,10))
meanViola1 = np.array([76.712204564012, 271.962595069704, 104.464056106481])
medianViola1 = np.array([87.2101204224871, 267.298392954475, 73.4574594321263])
firstQtViola1 = np.array([66.392424612713, 188.281033331681, 82.4853708840244])
thirdQtViola1 = np.array([115.610511999689, 428.7120359926, 189.5979476391])
df=pd.DataFrame({'x_values': np.array(['Baroque','Classical', 'Romantic']), 'std of mean frequency': meanViola1, 'std of median frequency': medianViola1, 'std of 25th percentile frequency': firstQtViola1, 'std of 75th percentile frequency': thirdQtViola1 })
plt.xticks(size = 25)
plt.yticks(size = 25)
plt.plot( 'x_values', 'std of mean frequency', data=df, marker='X', color='blue', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of median frequency', data=df, marker='o', color='green', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of 25th percentile frequency', data=df, marker='D', color='orangered', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of 75th percentile frequency', data=df, marker='s', color='purple', linewidth=0, markersize=14)
# show legend
#plt.title("Standard deviation of each histogram of frequency attribute(Viola solo pieces)", fontsize=22)
plt.xlabel("Musical era", fontsize=25)
plt.ylabel("Standard deviation (Hz)", fontsize=25)
plt.legend(prop={'size': 13},facecolor='white', framealpha=1)
plt.savefig(f'solo_viola.eps', format='eps')
plt.clf()
plt.figure(figsize=(15,10))
meanViolin1 = np.array([100.058034165768,279.766584662516,130.068817541032 ])
medianViolin1 = np.array([59.413320527951, 293.524563232148, 125.441280574219])
firstQtViolin1 = np.array([45.1013326655587, 164.924778130907, 87.147286964214])
thirdQtViolin1 = np.array([174.069046870845, 538.359292016019, 233.701828917493])
df=pd.DataFrame({'x_values': np.array(['Baroque','Classical', 'Romantic']), 'std of mean frequency': meanViolin1, 'std of median frequency': medianViolin1, 'std of 25th percentile frequency': firstQtViolin1, 'std of 75th percentile frequency': thirdQtViolin1 })
plt.xticks(size = 25)
plt.yticks(size = 25)
plt.plot( 'x_values', 'std of mean frequency', data=df, marker='X', color='blue', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of median frequency', data=df, marker='o', color='green', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of 25th percentile frequency', data=df, marker='D', color='orangered', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of 75th percentile frequency', data=df, marker='s', color='purple', linewidth=0, markersize=14)
# show legend
#plt.title("Standard deviation of each histogram of frequency attribute(Viola solo pieces)", fontsize=22)
plt.xlabel("Musical era", fontsize=25)
plt.ylabel("Standard deviation (Hz)", fontsize=25)
plt.legend(prop={'size': 13},facecolor='white', framealpha=1)
plt.savefig(f'solo_violin.eps', format='eps')
plt.clf()
plt.figure(figsize=(15,10))
meanViola1 = np.array([278.68991841689, 141.539379563951, 72.6455550001523])
medianViola1 = np.array([249.473008349951, 174.169087467177, 97.6176088753514])
firstQtViola1 = np.array([225.273388371636, 121.72137122659, 70.1320309214679])
thirdQtViola1 = np.array([323.284330449662, 228.362015923272, 121.8113833453])
df=pd.DataFrame({'x_values': np.array(['Baroque','Classical', 'Romantic']), 'std of mean frequency': meanViola1, 'std of median frequency': medianViola1, 'std of 25th percentile frequency': firstQtViola1, 'std of 75th percentile frequency': thirdQtViola1 })
plt.xticks(size = 25)
plt.yticks(size = 25)
plt.plot( 'x_values', 'std of mean frequency', data=df, marker='X', color='blue', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of median frequency', data=df, marker='o', color='green', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of 25th percentile frequency', data=df, marker='D', color='orangered', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of 75th percentile frequency', data=df, marker='s', color='purple', linewidth=0, markersize=14)
# show legend
#plt.title("Standard deviation of each histogram of frequency attribute(Viola solo pieces)", fontsize=22)
plt.xlabel("Musical era", fontsize=25)
plt.ylabel("Standard deviation (Hz)", fontsize=25)
plt.legend(prop={'size': 13},facecolor='white', framealpha=1)
plt.savefig(f'non_solo_viola.eps', format='eps')
plt.clf()
plt.figure(figsize=(15,10))
meanViolin1 = np.array([282.725665018563, 286.585497386932, 116.405448388546])
medianViolin1 = np.array([243.054111459576, 354.047723965355, 139.584101749529])
firstQtViolin1 = np.array([186.121223660793, 114.758830234338, 98.2173193721459])
thirdQtViolin1 = np.array([400.962870927815, 573.658483143952, 174.769383472671])
df=pd.DataFrame({'x_values': np.array(['Baroque','Classical', 'Romantic']), 'std of mean frequency': meanViolin1, 'std of median frequency': medianViolin1, 'std of 25th percentile frequency': firstQtViolin1, 'std of 75th percentile frequency': thirdQtViolin1 })
plt.xticks(size = 25)
plt.yticks(size = 25)
plt.plot( 'x_values', 'std of mean frequency', data=df, marker='X', color='blue', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of median frequency', data=df, marker='o', color='green', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of 25th percentile frequency', data=df, marker='D', color='orangered', linewidth=0, markersize=14)
plt.plot( 'x_values', 'std of 75th percentile frequency', data=df, marker='s', color='purple', linewidth=0, markersize=14)
# show legend
#plt.title("Standard deviation of each histogram of frequency attribute (Violin, non-solo pieces)", fontsize=22)
plt.xlabel("Musical era", fontsize=25)
plt.ylabel("Standard deviation (Hz)", fontsize=25)
plt.legend(prop={'size': 13},facecolor='white', framealpha=1)
plt.savefig(f'non_solo_violin.eps', format='eps')
arr=[[0.97, 0.67,0.7],[0.9,0.92,0.93],[0.93,0.83,0.83]]
arr1=[[0.83, 0.75,0.5],[0.33,0.75,0.67],[0.58,0.75,0.75]]
import seaborn as sns
import matplotlib.pyplot as plt
plt.clf()
plt.figure(figsize=(15,12))
sns.set(font_scale=2.5)
categories = ['Baroque', 'Classical','Romantic']
x_axis_labels = ['Baroque', 'Classical','Romantic'] # labels for x-axis
y_axis_labels = ['Baroque', 'Classical','Romantic'] # labels for y-axis
sns.heatmap(arr,annot=True,cmap='Purples', fmt='.2f', xticklabels=x_axis_labels, yticklabels=y_axis_labels)
plt.xlabel("Testing Repertoire", fontsize=25, fontweight='bold')
plt.ylabel("Training Repertoire", fontsize=25, fontweight='bold')
plt.savefig(f'heatmap1.eps', format='eps')
plt.clf()
plt.figure(figsize=(15,12))
sns.set(font_scale=2.5)
categories = ['Baroque', 'Classical','Romantic']
x_axis_labels = ['Baroque', 'Classical','Romantic'] # labels for x-axis
y_axis_labels = ['Baroque', 'Classical','Romantic'] # labels for y-axis
sns.heatmap(arr1,annot=True,cmap='Oranges', fmt='.2f', xticklabels=x_axis_labels, yticklabels=y_axis_labels)
plt.xlabel("Testing Repertoire", fontsize=25, fontweight='bold')
plt.ylabel("Training Repertoire", fontsize=25, fontweight='bold')
plt.savefig(f'heatmap2.eps', format='eps')
import numpy as np
import matplotlib.pyplot as plt
import os
os.chdir('/home/student/Downloads')
# data to plot
n_groups = 4
means_rf = (0.97, 0.92, 0.87, 0.92)
means_cnn = (0.80, 0.75, 0.83, 0.90)
means_mfcc = (0.84, 0.75, 0.90, 0.89)
# create plot
#fig, ax = plt.subplots()
plt.figure(figsize=(18,9))
index = np.arange(n_groups)
bar_width = 0.25
opacity = 0.8
rects1 = plt.bar(index, means_rf, bar_width,
alpha=opacity,
color='b',
label='Histogram of Frequencies (RF)')
rects2 = plt.bar(index + bar_width, means_mfcc, bar_width,
alpha=opacity,
color='purple',
label='MFCC')
rects3 = plt.bar(index + 2*bar_width, means_cnn, bar_width,
alpha=opacity,
color='skyblue',
label='CNN')
plt.xlabel('Dataset Era')
plt.ylabel('Accuracy')
plt.title('')
plt.xticks(index + bar_width, ('Baroque', 'Classical', 'Romantic', 'All'))
plt.legend( bbox_to_anchor=(1.05, 1), loc='upper left')
plt.tight_layout()
#plt.show()
plt.savefig(f'compare1.eps', format='eps')
import numpy as np
import matplotlib.pyplot as plt
# data to plot
n_groups = 4
means_rf = (0.83,0.75,0.75,0.72)
means_cnn = (0.75,0.58,0.75,0.58)
#means_mfcc = (0.84, 0.75, 0.90, 0.89)
# create plot
#fig, ax = plt.subplots()
plt.figure(figsize=(18,9))
index = np.arange(n_groups)
bar_width = 0.25
opacity = 0.8
rects1 = plt.bar(index, means_rf, bar_width,
alpha=opacity,
color='b',
label='Histogram of Frequencies (RF)')
#rects2 = plt.bar(index + bar_width, means_mfcc, bar_width,
#alpha=opacity,
#color='purple',
#label='MFCC')
rects3 = plt.bar(index + bar_width, means_cnn, bar_width,
alpha=opacity,
color='skyblue',
label='CNN')
plt.xlabel('Dataset Era')
plt.ylabel('Accuracy')
plt.title('')
plt.xticks(index + bar_width, ('Baroque', 'Classical', 'Romantic', 'All'))
plt.legend( bbox_to_anchor=(1.05, 1), loc='upper left')
plt.tight_layout()
#plt.show()
plt.savefig(f'compare2.eps', format='eps')
```
# Assignment 3
This assignment requires more individual learning than the last one did - you are encouraged to check out the [pandas documentation](http://pandas.pydata.org/pandas-docs/stable/) to find functions or methods you might not have used yet, or ask questions on [Stack Overflow](http://stackoverflow.com/) and tag them as pandas and python related. All questions are worth the same number of points except question 1, which is worth 23% of the assignment grade.
**Note**: Questions 2-12 rely on your question 1c answer.
```
import re
import pandas as pd
import numpy as np
# Filter all warnings. If you would like to see the warnings, please comment the two lines below.
import warnings
warnings.filterwarnings('ignore')
```
### Question 1(a)
Complete the function `load_data` below to load three datasets that we will use in subsequent questions. Be sure to follow the instructions below for each dataset *respectively*.
**Energy**
Load the energy data from the file `assets/Energy Indicators.xls`, which is a list of indicators of [energy supply and renewable electricity production](assets/Energy%20Indicators.xls) from the [United Nations](http://unstats.un.org/unsd/environment/excel_file_tables/2013/Energy%20Indicators.xls) for the year 2013, and should be put into a DataFrame with the variable name of `energy`.
Keep in mind that this is an Excel file, and not a comma separated values file. Also, make sure to exclude the footer and header information from the datafile. The first two columns are unnecessary, so you should get rid of them, and you should change the column labels so that the columns are:
`['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']`
Convert `Energy Supply` to gigajoules (**Note: there are 1,000,000 gigajoules in a petajoule**). For all countries which have missing data (e.g. data with "...") make sure this is reflected as `np.NaN` values.
Rename the following list of countries (for use in later questions):
```"Republic of Korea": "South Korea",
"United States of America": "United States",
"United Kingdom of Great Britain and Northern Ireland": "United Kingdom",
"China, Hong Kong Special Administrative Region": "Hong Kong"```
There are also several countries with parenthesis in their name. Be sure to remove these, e.g. `'Bolivia (Plurinational State of)'` should be `'Bolivia'`.
**GDP**
Next, load the GDP data from the file `assets/world_bank.csv`, which is a csv containing countries' GDP from 1960 to 2015 from [World Bank](http://data.worldbank.org/indicator/NY.GDP.MKTP.CD). Call this DataFrame `gdp`.
Make sure to skip the header, and rename the following list of countries:
```"Korea, Rep.": "South Korea",
"Iran, Islamic Rep.": "Iran",
"Hong Kong SAR, China": "Hong Kong"```
**ScimEn**
Finally, load the [Scimago Journal and Country Rank data for Energy Engineering and Power Technology](http://www.scimagojr.com/countryrank.php?category=2102) from the file `assets/scimagojr-3.xlsx`, which ranks countries based on their journal contributions in the aforementioned area. Call this DataFrame `scim_en`.
**For all three datasets, use country names as the index.**
```
import pandas as pd
import numpy as np
def load_data():
# Competency: reading files in Pandas, df manipulation, regex
# The three variables are initialized to None. You will fill them with the correct values.
    energy = pd.read_excel('assets/Energy Indicators.xls', skiprows=17, skipfooter=38, usecols="C:F")  # 'skip_footer' was renamed to 'skipfooter' in newer pandas
col_names = ['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']
# Rename the columns
energy.columns = col_names
# Convert Energy Supply to gigajoules (Note: there are 1,000,000 gigajoules in a petajoule).
# For all countries which have missing data (e.g. data with "...") make sure this is
# reflected as np.NaN values.
energy['Energy Supply'] *= 1000000
    energy.replace(r'\.+', np.nan, regex=True, inplace=True)
# Rename the following list of countries
replace_dict = {"Republic of Korea": "South Korea",
"United States of America": "United States",
"United Kingdom of Great Britain and Northern Ireland": "United Kingdom",
"China, Hong Kong Special Administrative Region": "Hong Kong"}
    # Remove digits and text enclosed in parentheses (regex=True is required in pandas >= 2.0)
    energy['Country'] = energy['Country'].str.replace(r'\d+', '', regex=True).str.replace(r'\s*\(.*?\)\s*', '', regex=True).replace(replace_dict)
energy.set_index('Country', inplace = True)
gdp = pd.read_csv('assets/world_bank.csv', sep = ',', skiprows = 4)
country_dict = {"Korea, Rep.": "South Korea",
"Iran, Islamic Rep.": "Iran",
"Hong Kong SAR, China": "Hong Kong"}
gdp['Country Name'] = gdp['Country Name'].replace(country_dict)
gdp = gdp.set_index('Country Name')
scim_en = pd.read_excel('assets/scimagojr-3.xlsx')
scim_en.set_index('Country', inplace = True)
return energy, gdp, scim_en
load_data()
# energy, gdp, scim_en = load_data()
# all([isinstance(energy, pd.DataFrame), isinstance(gdp, pd.DataFrame), isinstance(scim_en, pd.DataFrame)])
# Cell for autograder.
```
### Question 1(b)
Now suppose we take the intersection of the three datasets based on the country names, how many *unique* entries will we lose? Complete the function below that returns the answer as a single number. The Venn diagram in the next cell is worth a thousand words.
*This function should return a single (whole) number.*
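The same set arithmetic can be sketched on toy frames (with hypothetical country indices): the number of entries lost is the size of the outer join (the union) minus the size of the inner join (the intersection).

```python
import pandas as pd

a = pd.DataFrame({'x': [1, 2]}, index=['US', 'China'])
b = pd.DataFrame({'y': [3, 4]}, index=['China', 'India'])

inner = a.join(b, how='inner')   # rows present in both: just 'China'
outer = a.join(b, how='outer')   # union of rows: 'US', 'China', 'India'

print(len(outer) - len(inner))   # 2 entries lost by intersecting
```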
```
%%HTML
<svg width="800" height="300">
<circle cx="150" cy="180" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="blue" />
<circle cx="200" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="red" />
<circle cx="100" cy="100" r="80" fill-opacity="0.2" stroke="black" stroke-width="2" fill="green" />
<line x1="150" y1="125" x2="300" y2="150" stroke="black" stroke-width="2" fill="black" stroke-dasharray="5,3"/>
<text x="300" y="165" font-family="Verdana" font-size="35">Everything but this!</text>
</svg>
def answer_1b():
# Competency: joining datasets, sets
# YOUR CODE HERE
energy, gdp, scim_en = load_data()
intersect = pd.merge(pd.merge(scim_en, energy, how = 'inner', left_index = True, right_index = True), gdp, how = 'inner', left_index = True, right_index = True)
# Python Pandas Dataframe merge and pick only few columns
#merged = pd.merge(merged, gdp[['2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015']], how = 'inner', left_index = True, right_index = True)
merged_outer = pd.merge(pd.merge(scim_en, energy, how = 'outer', left_index = True, right_index = True), gdp, how = 'outer', left_index = True, right_index = True)
# Python Pandas Dataframe merge and pick only few columns
#merged_outer = pd.merge(merged, gdp[['2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015']], how = 'left', left_index = True, right_index = True)
return len(merged_outer) - len(intersect)
#raise NotImplementedError()
answer_1b()
# Cell for autograder.
```
### Question 1(c)
Join the three datasets to form a new dataset, using the intersection of country names. Keep only the last 10 years (2006-2015) of GDP data and only the top 15 countries by Scimagojr 'Rank' (Rank 1 through 15).
The index of the resultant DataFrame should still be the name of the country, and the columns should be
```['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations',
'Citations per document', 'H index', 'Energy Supply',
'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008',
'2009', '2010', '2011', '2012', '2013', '2014', '2015']```.
*This function should return a DataFrame with 20 columns and 15 entries.*
```
def answer_1c():
# Competency: df manipulation, joining datasets
energy, gdp, scim_en = load_data()
# YOUR CODE HERE
scim_en = scim_en[scim_en['Rank'] <=15]
merged = pd.merge(scim_en, energy, how = 'left', left_index = True, right_index = True)
# Python Pandas Dataframe merge and pick only few columns
merged = pd.merge(merged, gdp[['2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015']], how = 'left', left_index = True, right_index = True)
# The index of this DataFrame should be the name of the country,
# and the columns should be
#['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations', 'Citations per document', 'H index', 'Energy Supply', 'Energy Supply per Capita', '% Renewable', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015'].
#merged = merged.set_index('Country')
return merged
#raise NotImplementedError()
answer_1c()
your_ans = answer_1c()
assert isinstance(your_ans, pd.DataFrame), "Q1c: Your function should return a DataFrame."
assert your_ans.shape == (15, 20), "Q1c: Your resultant DataFrame should have 20 columns and 15 entries."
assert list(your_ans.columns) == ['Rank', 'Documents', 'Citable documents', 'Citations', 'Self-citations',
'Citations per document', 'H index', 'Energy Supply','Energy Supply per Capita', '% Renewable',
'2006', '2007', '2008','2009', '2010', '2011', '2012', '2013', '2014', '2015'] , "Q1c: The column names should be as specified in the question. "
del your_ans
# Cell for autograder.
```
**Note: all subsequent questions rely on the DataFrame returned by your function in Question 1(c) above.**
### Question 2
What is the average GDP over the last 10 years for each country?
*This function should return a Series named `avgGDP` with 15 countries and their average GDP sorted in descending order.*
```
def answer_two():
# Competency: indexing, math fn, sorting
# YOUR CODE HERE
merged = answer_1c()
year_list = ['2006', '2007', '2008', '2009', '2010',
'2011', '2012', '2013', '2014', '2015']
    avgGDP = merged[year_list].mean(axis=1).rename('avgGDP').sort_values(ascending=False)
    return avgGDP
#raise NotImplementedError()
answer_two()
your_ans = answer_two()
assert isinstance(your_ans, pd.Series), "Q2: You should return a Series. "
assert your_ans.name == "avgGDP", "Q2: Your Series should have the correct name. "
del your_ans
# Cell for autograder.
```
### Question 3
By how much had the GDP changed over the 10 year span for the country with the 6th largest average GDP?
*This function should return a single number.*
```
def answer_three():
# Competency: indexing, broadcasting
# YOUR CODE HERE
merged = answer_1c()
    # Country with the 6th largest average GDP, taken from answer_two() rather than hardcoded
    sixth = answer_two().index[5]
    change = merged.loc[sixth, '2015'] - merged.loc[sixth, '2006']
return change
#raise NotImplementedError()
answer_three()
# Cell for autograder.
```
### Question 4
What is the mean energy supply per capita?
*This function should return a single number.*
```
def answer_four():
# Competency: math fn
# YOUR CODE HERE
merged = answer_1c()
#merged.iloc[:,8].mean()
return merged['Energy Supply per Capita'].mean()
# raise NotImplementedError()
answer_four()
# 157.6
# Cell for autograder.
```
### Question 5
What country has the maximum % Renewable and what is the percentage?
*This function should return a tuple with the name of the country and the percentage.*
```
def answer_five():
# Competency: math fn
# YOUR CODE HERE
merged = answer_1c()
#merged['% Renewable'].idxmax()
return (merged['% Renewable'].idxmax(), merged['% Renewable'].max())
#raise NotImplementedError()
answer_five()
your_ans = answer_five()
assert isinstance(your_ans, tuple), "Q5: Your function should return a tuple. "
assert isinstance(your_ans[0], str), "Q5: The first element in your result should be the name of the country. "
del your_ans
# Cell for autograder.
```
### Question 6
Create a new column that is the ratio of `Self-Citations` to total `Citations`.
What is the maximum value for this new column, and what country has the highest ratio?
*This function should return a tuple with the name of the country and the ratio.*
```
def answer_six():
# Competency: math fn, broadcasting
# YOUR CODE HERE
merged = answer_1c()
merged['Ratio'] = merged['Self-citations'] / merged['Citations']
return (merged['Ratio'].idxmax(), merged['Ratio'].max())
#raise NotImplementedError()
answer_six()
your_ans = answer_six()
assert isinstance(your_ans, tuple), "Q6: Your function should return a tuple. "
assert isinstance(your_ans[0], str), "Q6: The first element in your result should be the name of the country. "
del your_ans
# Cell for autograder.
```
### Question 7
Create a column that estimates the population using `Energy Supply` and `Energy Supply per capita`.
What is the third most populous country according to this estimate?
*This function should return the name of the country*
```
def answer_seven():
# Competency: Broadcasting, sorting
# YOUR CODE HERE
merged = answer_1c()
merged['Population'] = merged['Energy Supply'] / merged['Energy Supply per Capita']
    # Third most populous country according to the estimate
    return merged['Population'].nlargest(3).index[2]
#raise NotImplementedError()
answer_seven()
assert isinstance(answer_seven(), str), "Q7: Your function should return the name of the country. "
# Cell for autograder.
```
### Question 8
Create a column that estimates the number of citable documents per person.
What is the correlation between the number of citable documents per capita and the energy supply per capita? Use the `.corr()` method (Pearson's correlation).
*This function should return a single number.*
```
def answer_eight():
# Competency: Broadcasting, math fn, correlation, visualization
# YOUR CODE HERE
merged = answer_1c()
merged['Population'] = merged['Energy Supply'] / merged['Energy Supply per Capita']
merged['Citable Documents per Capita'] = merged['Citable documents'] / merged['Population']
#['Citable documents', 'Energy Supply per Capita']
#merged.corr(method = 'pearson')
#return merged
#return merged[['Citable documents', 'Energy Supply per Capita']].corr(method = 'pearson').iloc[0,1]
return merged.corr(method = 'pearson').loc['Citable Documents per Capita']['Energy Supply per Capita']
#raise NotImplementedError()
answer_eight()
assert -1 <= answer_eight() <= 1, "Q8: A valid correlation should be between -1 to 1. "
# Cell for autograder.
```
### Question 9
Create a new column with a 1 if a country's `% Renewable` value is **at or above** the median, and a 0 otherwise for all countries in the top 15.
*This function should return a series named `HighRenew` whose index is the country name sorted in ascending order of rank.*
```
def answer_nine():
# Competency: df querying, math fn, variable encoding
# YOUR CODE HERE
merged = answer_1c()
    median_renew = merged['% Renewable'].median()
    # Vectorized comparison avoids chained assignment on a slice
    merged['HighRenew'] = (merged['% Renewable'] >= median_renew).astype(int)
    return merged['HighRenew']
#raise NotImplementedError()
answer_nine()
assert isinstance(answer_nine(), pd.Series), "Q9: Your function should return a Series. "
# Cell for autograder.
```
### Question 10
Use the following dictionary to group the `Countries` by `Continent`, then create a DataFrame that displays the sample size (the number of countries in each continent bin), and the sum, mean, and *population* standard deviation of the estimated population for each country.
```python
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'}
```
*This function should return a DataFrame with index named Continent `['Asia', 'Australia', 'Europe', 'North America', 'South America']` and columns `['size', 'sum', 'mean', 'std']`*
**Special Note: make sure you are indeed calculating the *population* std rather than the sample std. When in doubt, carefully check the documentation of the function you plan to use. Don't take things for granted.**
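This is easy to get wrong because the defaults differ: `np.std` computes the population standard deviation (`ddof=0`), while the pandas `Series.std` method defaults to the sample version (`ddof=1`); depending on the pandas version, passing `np.std` to `.agg` may even be mapped to the pandas default. A quick check on toy numbers:

```python
import numpy as np
import pandas as pd

values = pd.Series([2.0, 4.0, 6.0, 8.0])

print(np.std(values.to_numpy()))   # population std (ddof=0): sqrt(5)  ~ 2.236
print(values.std())                # sample std (ddof=1): sqrt(20/3)   ~ 2.582
print(values.std(ddof=0))          # population std again, via pandas
```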
```
def answer_ten():
# Competency: mapping, groupby, agg
# YOUR CODE HERE
merged = answer_1c()
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'}
    merged['Population'] = merged['Energy Supply'] / merged['Energy Supply per Capita']
    # Map each country (the index) to its continent
    merged['Continent'] = merged.index.map(ContinentDict)
ans_df = merged.groupby('Continent').agg([np.size, np.sum, np.mean, np.std])['Population']
return ans_df
#raise NotImplementedError()
answer_ten()
your_ans = answer_ten()
assert isinstance(your_ans, pd.DataFrame), "Q10: Your function should return a DataFrame. "
assert your_ans.shape[0] == 5, "Q10: You have an incorrect number of rows. "
assert your_ans.shape[1] == 4, "Q10: You have an incorrect number of columns. "
assert list(your_ans.index) == ['Asia', 'Australia', 'Europe', 'North America', 'South America'], "Q10: You have a wrong index. "
assert list(your_ans.columns) == ['size', 'sum', 'mean', 'std'], "Q10: You have wrong column names. "
assert np.isclose(your_ans.loc["Asia", "sum"], 2898666386.6106005, rtol=0.0, atol=1e-5), "Q10: The sum value for Asia should be around 2898666386.6106005. "
assert np.isclose(your_ans.loc["Europe", "mean"], 76321611.20272864, rtol=0.0, atol=1e-5), "Q10: The mean value for Europe should be around 76321611.20272864. "
assert np.isnan(your_ans.loc["South America", "std"]), "Q10: South America should have a NaN std. "
del your_ans
# Cell for autograder.
```
### Question 11
Cut `% Renewable` into 5 bins. Group the top 15 countries by `Continent` as well as these new `% Renewable` bins. How many countries are there in each of these groups?
*This function should return a Series with a MultiIndex of `Continent`, then the bins for `% Renewable`. Do not include groups with no countries.*
```
def answer_eleven():
# Competency: cut, groupby, math fn
# YOUR CODE HERE
merged = answer_1c()
merged['Continent'] = None
ContinentDict = {'China':'Asia',
'United States':'North America',
'Japan':'Asia',
'United Kingdom':'Europe',
'Russian Federation':'Europe',
'Canada':'North America',
'Germany':'Europe',
'India':'Asia',
'France':'Europe',
'South Korea':'Asia',
'Italy':'Europe',
'Spain':'Europe',
'Iran':'Asia',
'Australia':'Australia',
'Brazil':'South America'}
    merged['Continent'] = merged.index.map(ContinentDict)
merged['% Renewable'] = pd.cut(x=merged['% Renewable'], bins=5)
list_index = ['Continent', '% Renewable']
merged = merged[list_index]
# merged = merged.set_index([list_index])
# merged.groupby(list_index).agg([np.size])
    # observed=True keeps only Continent/bin groups that actually contain countries
    return merged.groupby(list_index, observed=True).size()
#raise NotImplementedError()
answer_eleven()
your_ans = answer_eleven()
assert isinstance(your_ans, pd.Series), "Q11: Your function should return a Series. "
assert len(your_ans) == 9, "Q11: Your answer should have 9 rows. "
del your_ans
# Cell for autograder.
```
### Question 12
Write a function to display the estimated population as a string with thousands separator (using commas). Use all significant digits, namely, do not round the results.
e.g. 12345678.90 -> 12,345,678.90
*This function should return a series `PopEst` whose index is the country name and whose values are the population estimate string*
**Special Note: make sure your `Energy Supply` column is of a numerical type rather than "object".**
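If the column has come through as `object` dtype (for instance because of leftover `'...'` placeholders), one hedged way to coerce it, shown here on a hypothetical frame, is `pd.to_numeric` with `errors='coerce'`:

```python
import pandas as pd

df = pd.DataFrame({'Energy Supply': ['100', '...', '250']})  # hypothetical object-dtype column
df['Energy Supply'] = pd.to_numeric(df['Energy Supply'], errors='coerce')  # '...' becomes NaN

print(df['Energy Supply'].dtype)   # float64
```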
```
def answer_twelve():
# Competency: lambdas, format
# YOUR CODE HERE
merged = answer_1c()
merged['PopEst'] = merged['Energy Supply'] / merged['Energy Supply per Capita']
PopEst = merged['PopEst'].apply(lambda x: format(x, ','))
return PopEst
#raise NotImplementedError()
answer_twelve()
your_ans = answer_twelve()
assert isinstance(your_ans, pd.Series), "Q12: Your function should return a Series. "
assert len(your_ans) == 15, "Q12: There should be 15 countries. "
del your_ans
# Cell for autograder.
```
### Optional
Use the built in function `plot_optional()` to see an example visualization.
```
def plot_optional():
    import matplotlib.pyplot as plt
%matplotlib inline
Top15 = answer_1c()
ax = Top15.plot(x='Rank', y='% Renewable', kind='scatter',
c=['#e41a1c','#377eb8','#e41a1c','#4daf4a','#4daf4a','#377eb8','#4daf4a','#e41a1c',
'#4daf4a','#e41a1c','#4daf4a','#4daf4a','#e41a1c','#dede00','#ff7f00'],
xticks=range(1,16), s=6*Top15['2014']/10**10, alpha=.75, figsize=[16,6]);
for i, txt in enumerate(Top15.index):
ax.annotate(txt, [Top15['Rank'][i], Top15['% Renewable'][i]], ha='center')
print("This is an example of a visualization that can be created to help understand the data. \
This is a bubble chart showing % Renewable vs. Rank. The size of the bubble corresponds to the countries' \
2014 GDP, and the color corresponds to the continent.")
plot_optional()
```
```
import tabint
from tabint.utils import *
from tabint.dataset import *
from tabint.feature import *
from tabint.pre_processing import *
from tabint.visual import *
from tabint.learner import *
from tabint.interpretation import *
from tabint.inference import *
from tabint.model_performance import *
data = pd.read_csv('DLCO.csv', sep=";")
data.head()
df, y, pp_outp = tabular_proc(data, 'DLCO', [fill_na(), app_cat(), dummies()])
df.head()
pp_outp['cons']
```
# dataset
```
ds = TBDataset.from_SKSplit(df, y=y, cons=pp_outp['cons'])
ds.n_trn
```
# learner
```
from sklearn.linear_model import LinearRegression
learner = SKLearner(LinearRegression())
learner.fit(*ds.trn, *ds.val)
```
# plot
```
avp = actual_vs_predict.from_learner(learner, ds)
avp.plot()
import seaborn as sns
def plot_line(x_series, y_series, labels = None, fmts = None, xlabel = None, xlim = None, ylim = None, **kwargs):
length = len(to_iter(x_series))
if fmts is None: fmts = [None]*length
    labels_ = labels if labels is not None else [None]*length
for x_serie, y_serie, label, fmt in zip(to_iter(x_series),to_iter(y_series),to_iter(labels_), to_iter(fmts)):
params = [x_serie, y_serie]
if fmt is not None: params.append(fmt)
if label is not None: params.append(label)
plt.plot(*params, **kwargs)
if xlabel is not None: plt.xlabel(xlabel)
if ylim is not None: plt.ylim(ylim)
if xlim is not None: plt.xlim(xlim)
def plot_bisectrix(start = 0, stop = 10, num = 20, **kargs):
obs = np.linspace(start = start, stop = stop, num = num)
plot_line(obs, obs, **kargs)
def plot_scatter(x, y, xlabel=None, ylabel=None, title = None, hue=None, **kargs):
    sns.scatterplot(x=x, y=y, hue=hue, **kargs)
if xlabel is not None: plt.xlabel(xlabel)
if ylabel is not None: plt.ylabel(ylabel)
if title is not None: plt.title(title)
class actual_vs_predicted:
def __init__(self, actual, predict, df, data):
self.actual, self.predict = actual, predict
self.df, self.data = df, data
@classmethod
def from_learner(cls, learner, ds):
actual = ds.y_val
predict = learner.predict(ds.x_val)
data = cls.calculate(actual, predict)
return cls(actual, predict, ds.x_val, data)
    @classmethod
    def from_df(cls, **kwargs): pass  # placeholder
@staticmethod
def calculate(actual, predict):
data = pd.DataFrame({'actual':actual, 'predict':predict, 'mse': (actual-predict)**2})
return ResultDF(data, 'mse')
def plot(self, hue = None, num = 100, **kagrs):
if hue is not None: hue = self.df[hue]
concat = np.concatenate([self.actual, self.predict])
plot_scatter(self.actual, self.predict, xlabel='actual', ylabel='predict', hue=hue)
plot_bisectrix(np.min(concat), np.max(concat), num)
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
avp = actual_vs_predict.from_learner(learner, ds)
avp.plot(hue = 'Height')
class model_performace:
def __init__(self, data, **kagrs):
self.data = data
@classmethod
def from_learner(cls, learner, ds, **kargs):
y_true = ds.y_val
y_pred = learner.predict(ds.x_val)
return y_true, y_pred
@classmethod
def from_df(cls, **kagrs): pass
@classmethod
def from_series(cls, y_true, y_pred, **kargs):
return cls.calculate(y_true, y_pred, **kargs)
@staticmethod
def calculate(y_true, y_pred, **kargs): pass
def plot(self, **kagrs): pass
class actual_vs_predict(model_performace):
def __init__(self, data, y_true, y_pred, df):
super().__init__(data)
self.df, self.y_true, self.y_pred = df, y_true, y_pred
@classmethod
def from_learner(cls, learner, ds):
y_true, y_pred = model_performace.from_learner(learner, ds)
return cls.from_series(y_true, y_pred, ds.x_val)
@classmethod
def from_series(cls, y_true, y_pred, df):
data = model_performace.from_series(y_true, y_pred)
return cls(data, y_true, y_pred, df)
@staticmethod
def calculate(actual, predict):
data = pd.DataFrame({'actual':actual, 'predict':predict, 'mse': (actual-predict)**2})
return ResultDF(data, 'mse')
def plot(self, hue = None, num = 100, **kagrs):
if hue is not None: hue = self.df[hue]
concat = np.concatenate([self.y_true, self.y_pred])
plot_scatter(self.y_true, self.y_pred, xlabel='actual', ylabel='predict', hue=hue)
plot_bisectrix(np.min(concat), np.max(concat), num)
if hue is not None: plot_legend()
avp = actual_vs_predict.from_learner(learner, ds)
avp.plot()
```
# Lab 2: Video game recommendation
In this lab we will work with a subset of data on [Steam video games](http://cseweb.ucsd.edu/~jmcauley/datasets.html#steam_data). To make the lab a bit easier, you will be given the dataset already preprocessed. In this notebook we show the cleaning process, so that it is documented (in any case, given the size of the data we do not recommend spending time on that process unless you find it personally useful).
The dataset consists of two parts: a list of games (items), and a list of user reviews of various games. The latter is very large in its original version (it weighs 1.3 GB), so you will work on just a sample of it.
Unlike the LastFM dataset used in [Lab 1](./practico1.ipynb), in this case the data is not particularly designed for a recommender system, so it will require a bit more general work on the dataset.
The idea is that, much as in the previous lab, you build a recommender system. Unlike the previous lab, this one will be a bit more complete and you must build two systems: one that, given a username, recommends a list of games; and another that, given the title of a game, recommends a list of similar games. In addition, in this case the second system (the one that recommends games based on the name of a particular game) is required to make use of content information (i.e., you will do either content-based filtering or something hybrid).
## Obtaining and cleaning the dataset
The dataset originally comes in files that should be in "JSON" format. However, each file is actually a sequence of lines where each line is a JSON object. There is one problem, though: the lines are badly formatted, since they do not follow the JSON standard of using double quotes (**"**) and use single quotes (**'**) instead. Fortunately, they can be evaluated as Python dictionaries, which makes it possible to work with them directly.
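Since each line is a Python dict literal rather than valid JSON, it can be parsed with `ast.literal_eval`, which, unlike a bare `eval`, only accepts literals and is therefore safer on untrusted input. A minimal sketch on a made-up line:

```python
import ast

line = "{'title': 'Portal', 'price': 9.99, 'tags': ['Puzzle']}"  # single quotes: not valid JSON
record = ast.literal_eval(line)  # safely evaluates the dict literal

print(record['title'])  # Portal
```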
### Download
The following cell downloads the raw datasets. Again, it is not necessary to run it, and you can go [further down](#Conjunto-de-datos-limpio) to run the cell that downloads the already processed dataset.
```
%%bash
mkdir -p data/steam/
curl -L -o data/steam/steam_games.json.gz http://cseweb.ucsd.edu/\~wckang/steam_games.json.gz
curl -L -o data/steam/steam_reviews.json.gz http://cseweb.ucsd.edu/\~wckang/steam_reviews.json.gz
```
### Loading the data
As mentioned, because of the nature of the data, we need to use Python to process it (we cannot read it as JSON).
```
import gzip
from tqdm import tqdm_notebook # To print a progress bar (comes with Anaconda or can be installed)
with gzip.open("./data/steam/steam_games.json.gz") as fh:
games = []
for game in tqdm_notebook(fh, total=32135):
try:
games.append(eval(game))
except SyntaxError:
continue
print("Loaded {} games".format(len(games)))
with gzip.open("./data/steam/steam_reviews.json.gz") as fh:
reviews = []
for review in tqdm_notebook(fh, total=7793069):
try:
reviews.append(eval(review))
except SyntaxError:
continue
print("Loaded {} user reviews".format(len(reviews)))
```
### Exploring the data
Here we need to inspect the general structure of the records, so we can convert them to a friendlier format (e.g. CSV).
```
games[0]
reviews[0]
```
### Transforming the data
Given the records of each type, we can use pandas to load them and work with something simpler.
```
import pandas as pd
games = pd.DataFrame.from_records(games)
games.head(3)
reviews = pd.DataFrame.from_records(reviews)
reviews.head(3)
```
### Feature selection
With the data loaded, we can make a very superficial selection (not based on EDA) of features we consider irrelevant. In particular, for the games dataset, the columns `url` and `reviews_url` are not useful for the purposes of this practical, so we will remove them.
On the reviews side, every column looks useful. However, a quick glance at `recommended` shows that all its values are `True`, so we can drop it as well.
```
games.drop(columns=["url", "reviews_url"], inplace=True)
games.head(3)
reviews.drop(columns=["recommended"], inplace=True)
reviews.head(3)
```
### Sampling and saving the data
As noted, we have a lot of reviews. It would be great to work with all of them, but the dataset is fairly heavy (it takes up more than 8 GB of RAM). We will therefore sample the reviews, which means that some users/games will probably be left out. We could do some form of stratified sampling, but we will go with something simpler and keep roughly 10% of the dataset (700 thousand reviews).
The games dataset we will leave as is. We will save it in JSON format to preserve the information in those columns whose values are lists.
```
games.to_json("./data/steam/games.json.gz", orient="records")
reviews.sample(n=int(7e5), random_state=42).to_json("./data/steam/reviews.json.gz", orient="records")
```
## Clean dataset
To download the dataset that will be used in this practical, simply run the following cell.
```
%%bash
mkdir -p data/steam/
curl -L -o data/steam/games.json.gz https://cs.famaf.unc.edu.ar/\~ccardellino/diplomatura/games.json.gz
curl -L -o data/steam/reviews.json.gz https://cs.famaf.unc.edu.ar/\~ccardellino/diplomatura/reviews.json.gz
```
## Exercise 1: Exploratory Data Analysis
With the data in hand, we can load it and start the practical. First of all, we will explore the data. The key point here is identifying the variables we are going to work with. Unlike the previous practical, this dataset is not documented, so exploration is necessary to understand what will define our recommender system.
```
import pandas as pd
```
### Features of the games dataset
The features of the games dataset carry the information needed to build the "content vector" used by the second recommender system. Your task is to analyse this dataset and discard redundant information.
```
games = pd.read_json("./data/steam/games.json.gz")
games.head()
# Completar
```
### Features of the reviews dataset
This is the dataset you will use to obtain information about users and their interaction with games. As you can see, there is no explicit rating; an implicit one must be computed, and doing so is part of your job (you will have to discover which feature can give you information equivalent to a rating).
```
reviews = pd.read_json("./data/steam/reviews.json.gz")
reviews.head()
# Completar
```
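As a hedged sketch of one way to derive an implicit rating: playtime is a natural candidate. The column name `hours` below is hypothetical (check the real column names during your EDA); playtime is heavy-tailed, so a log transform before scaling is a common choice:

```python
import numpy as np
import pandas as pd

# Toy data; assumes the reviews frame exposes a playtime column
# (called "hours" here purely for illustration).
toy_reviews = pd.DataFrame({
    "username": ["a", "a", "b", "c"],
    "product_id": [10, 20, 10, 30],
    "hours": [0.5, 300.0, 12.0, 45.0],
})

# Compress the heavy tail with log1p, then scale to [0, 1].
log_hours = np.log1p(toy_reviews["hours"])
toy_reviews["implicit_rating"] = log_hours / log_hours.max()

print(toy_reviews[["username", "product_id", "implicit_rating"]])
```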
## Exercise 2 - User-Based Recommender System
This recommender system must train an algorithm and provide an interface that, given a user, returns a list of the most recommended games for them.
```
# Completar
```
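This is not a full solution, but a toy sketch of the memory-based approach: score unseen games by similarity-weighted ratings of similar users (cosine similarity over a tiny invented user-game matrix):

```python
import numpy as np

# Toy user x game implicit-rating matrix (rows: users, cols: games).
# Zeros mean "not played". Purely illustrative data.
R = np.array([
    [1.0, 0.8, 0.0, 0.0],   # user 0
    [0.9, 0.7, 0.1, 0.0],   # user 1 (similar to user 0)
    [0.0, 0.0, 0.9, 1.0],   # user 2
])

def cosine_sim(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def recommend(user, R, k=2):
    sims = np.array([cosine_sim(R[user], R[other]) for other in range(len(R))])
    sims[user] = 0.0               # ignore self-similarity
    scores = sims @ R              # similarity-weighted ratings
    scores[R[user] > 0] = -np.inf  # don't re-recommend played games
    return np.argsort(scores)[::-1][:k]

print(recommend(0, R))  # game 2 ranks above game 3 for user 0
```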
## Exercise 3 - Game-Based Recommender System
Similar to the previous case, except that this system takes the name of a game as input and returns a list of similar games. The system must be built on the games' content information (i.e. content-based filtering or a hybrid system).
```
# Completar
```
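Again as a hedged sketch only: content-based filtering can start from each game's tag or genre set (verify which columns actually carry this in your EDA) and rank games by set similarity. Here, Jaccard similarity over invented tag sets:

```python
# Toy catalog: game title -> set of content tags (illustrative data only).
catalog = {
    "Space Shooter":  {"action", "sci-fi", "shooter"},
    "Galaxy Raiders": {"action", "sci-fi", "arcade"},
    "Farm Story":     {"casual", "simulation"},
}

def jaccard(a, b):
    # Similarity of two tag sets: |intersection| / |union|
    return len(a & b) / len(a | b)

def similar_games(title, catalog, k=2):
    tags = catalog[title]
    scores = {other: jaccard(tags, other_tags)
              for other, other_tags in catalog.items() if other != title}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(similar_games("Space Shooter", catalog))
```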
<!--NAVIGATION-->
<| [Main Contents](Index.ipynb) |>
# Appendix: The computing Miniproject <span class="tocSkip"><a name="Apx:Miniproj"></a>
<h1>Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Objectives" data-toc-modified-id="Objectives-1">Objectives</a></span></li><li><span><a href="#The-Project" data-toc-modified-id="The-Project-2">The Project</a></span><ul class="toc-item"><li><span><a href="#Patching-together-your-computing-workflow" data-toc-modified-id="Patching-together-your-computing-workflow-2.1">Patching together your computing workflow</a></span></li><li><span><a href="#The-Report" data-toc-modified-id="The-Report-2.2">The Report</a></span></li><li><span><a href="#Submission" data-toc-modified-id="Submission-2.3">Submission</a></span></li></ul></li><li><span><a href="#Marking-criteria" data-toc-modified-id="Marking-criteria-3">Marking criteria</a></span></li><li><span><a href="#A-Candidate-Problem:-Fitting-TPCs" data-toc-modified-id="A-Candidate-Problem:-Fitting-TPCs-4">A Candidate Problem: Fitting TPCs</a></span><ul class="toc-item"><li><span><a href="#The-Data" data-toc-modified-id="The-Data-4.1">The Data</a></span></li><li><span><a href="#The-Models" data-toc-modified-id="The-Models-4.2">The Models</a></span></li><li><span><a href="#Fitting-models-to-the-TPC-data" data-toc-modified-id="Fitting-models-to-the-TPC-data-4.3">Fitting models to the TPC data</a></span></li><li><span><a href="#The-Workflow" data-toc-modified-id="The-Workflow-4.4">The Workflow</a></span></li></ul></li><li><span><a href="#Readings-&-Resources" data-toc-modified-id="Readings-&-Resources-5">Readings & Resources</a></span></li></ul></div>
We have talked a lot about workflows and confronting models with data. It's time to do something concrete with all the techniques you have been learning.
The CMEE Miniproject gives you an opportunity to try the "whole nine yards" of developing and implementing a workflow and delivering a "finished product" — where you ask and answer a scientific question in biology (potentially involving multiple sub-questions/hypotheses). It will give you an opportunity to perform a "dry run" of executing your actual dissertation project, and you may use it to trial some of the techniques and/or explore some of the data/theory you might use in your Dissertation project.
## Objectives
**The general question you will address is:** *What mathematical model best fits an empirical dataset?*
You may think of this as testing a set of alternative hypotheses — every alternative hypothesis is nothing but an alternative model to describe an observed phenomenon, as you will have learned in the lectures on model fitting.
## The Project
You may choose any dataset and set of alternative models, provided that the work can feasibly be done in the time you have for your miniproject (see the CMEE Guidebook for the submission deadline). You may choose a problem and dataset that is related to, or even preliminary work for, your main Masters project (to give you a reality/feasibility check).
*Please read the papers in the Readings & Resources section of this chapter* — these will help you make a decision about what data and what models to use.
You will be given lectures on model fitting in Ecology and Evolution at the start of the Miniproject week.
The Miniproject must satisfy the following criteria:
1. It should employ all the biological computing tools you have learned so far: shell (bash) scripting, git, LaTeX, R, and Python. Using these tools, you will build a workflow that starts with the data and ends with a written report (in LaTeX).
2. *At least* two different models (hypotheses) must be fitted to the data. The models should be fitted and selected using an appropriate method (e.g., non-linear least squares for model fitting and the Akaike Information Criterion for model selection, respectively). *You will be given lectures on model fitting before you start on your Miniproject.*
3. The project should be fully reproducible — a script should "glue" the workflow together and run it. The assessor should be able to run just this script to get everything to work, from data processing to model fitting to plotting (e.g., in R) to compilation of the LaTeX written report (*More detailed instructions on this below*).
If you are unable to find a dataset and/or problem that you would like to tackle, you may use the "TPC problem" given below.
### Patching together your computing workflow
Use Python and/or bash scripting for this. If using bash, call it `run_MiniProject.sh`; if using Python, call it `run_MiniProject.py`. It should run all the components of the project's workflow, including compilation of the LaTeX document. Look back at the notes to see how you would run these different components. For example, we have covered how to run R and compile $\LaTeX$ using the `subprocess` module in Python.
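A minimal sketch of what such a Python driver might look like (the file names below, such as `prepare_data.R` and `report.tex`, are placeholders rather than files from this project):

```python
import subprocess
import sys

def run_step(cmd):
    """Run one workflow step; check=True aborts the pipeline on failure."""
    print("Running:", " ".join(cmd))
    return subprocess.run(cmd, check=True).returncode

def main():
    # Placeholder file names: substitute your own scripts.
    run_step(["Rscript", "prepare_data.R"])      # data preparation
    run_step([sys.executable, "fit_models.py"])  # NLLS fitting
    run_step(["Rscript", "plot_results.R"])      # plotting
    run_step(["pdflatex", "report.tex"])         # compile the report

# Call main() to execute the whole pipeline.
```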
### The Report
The report should,
* be written in LaTeX using the article document class, in 11pt (any font will do, within reason!).
* be double-spaced, with *continuous* line numbers.
* have a title, author name with affiliation and wordcount (next point) on a separate title page.
* have an introduction with objectives of the study, and appropriate additional sections such as methods, data, results, discussion, etc.
* should contain in the Methods a sub-section called "Computing languages" which states briefly how each of the three scripting languages (bash, R, Python) was used, and a justification of why.
* must contain $\leq$3500 words *excluding the contents of the title page, references, and Figure or Table captions+legends*; there should be a word count at the beginning of the document (typically using the `texcount` package).
* have references properly cited in text and formatted in a list using bibtex.
For the writeup, you probably should read the *general* (*not* word count, formatting etc.) dissertation writing guidelines given in the Silwood Masters Student Guidebook.
### Submission
Commit and push all your work to your bitbucket repository using a directory called `MiniProject` at the same level as the Week1, Week2 etc. directories, by the Miniproject deadline given in your course guidebook.
At this stage you are not going to be told you how to organize your project — that's part of the marking criteria (see next section).
## Marking criteria
*Equal weightage will be given to the code+workflow and writeup components — each component will be marked to a max of 100 pts and then rescaled to a single mark / 100 using equal weightage*
The assessor will be looking for the following while assessing your submission:
* A well-organized project where code, results, data, etc., are easy to locate, inspect, and use. In the project's README also include:
* Any dependencies or special packages the user/marker should be aware of
* What each package you used is for
* Version of each language used
* A project that runs smoothly, without any errors once the appropriate script is called (i.e., `run_MiniProject.py` or `run_MiniProject.sh`)
* A report that contains all the components indicated above in "The Report" subsection with some original thought and synthesis in the Introduction and Discussion sections.
* Quality of the presentation of the graphics and tables in your report, as well as any plots showing model fits to the data.
* The marking criteria you may refer to is the [summative marking criteria](./MARKING_CRITERIA.pdf).
## A Candidate Problem: Fitting TPCs
One choice you have is to use a large dataset that we can provide to address the following question:
*How well do different mathematical models, e.g., based upon biochemical (mechanistic) principles vs. phenomenological ones, fit to the thermal responses of metabolic traits?*
This is currently a "hot" (no pun intended!) topic in biology, with both ecological and evolutionary consequences, as we discussed in the modelling lecture. On the *ecological side*, because the temperature-dependence of metabolic rate sets the intrinsic rate of population growth, $r_\text{max}$ (papers by Savage et al., Brown et al.), as well as interactions between species, it has a strong effect on population dynamics. In this context, note that 99.9% of life on earth is ectothermic! On the *evolutionary side*, the temperature-dependence of fitness and species interactions also means that warmer environments may have stronger rates of evolution. This may be compounded by the fact that mutation rates may also increase with temperature (papers by Gillooly et al.).
### The Data
The dataset is called BioTraits.csv. It contains a subset of the full "BioTraits" database. This subset contains hundreds of "thermal responses" for growth, respiration and photosynthesis rates in plants and bacteria (both aquatic and terrestrial). These data were collected through lab experiments across the world, and compiled by various people over the years. The field names are defined [here](https://drive.google.com/open?id=1nsm9nvcz70TGUk07wyOnVL6xGJFM_CTs). The two main fields of interest are `OriginalTraitValue` (the trait values responding to temperature) and `ConTemp` (the temperature). Individual thermal response curves can be identified by `FinalID` values: each `FinalID` corresponds to one thermal performance curve.
### The Models
*All the following parameters and variables are in SI units*.
There are multiple models that might best describe these data. The simplest is the general cubic polynomial model:
$$\label{eq:cubic}
B = B_0 + B_1 T + B_2 T^2 + B_3 T^3
$$
This is a phenomenological model, with the parameters $B_0$, $B_1$, $B_2$ and $B_3$ lacking any mechanistic interpretation.
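Since the cubic is linear in its parameters, it can be fitted by ordinary least squares; here is a hedged numpy sketch on synthetic data (the coefficients and noise level are invented for illustration):

```python
import numpy as np

# Synthetic "trait vs temperature" data from a known cubic plus noise,
# purely to illustrate that the cubic model is a linear-in-parameters fit.
rng = np.random.default_rng(42)
T = np.linspace(0, 40, 50)  # temperature in degrees C
true = 0.2 + 0.05 * T + 0.004 * T**2 - 0.0001 * T**3
B = true + rng.normal(0, 0.05, T.size)

# np.polyfit returns coefficients highest power first: [B3, B2, B1, B0]
B3, B2, B1, B0 = np.polyfit(T, B, deg=3)
fitted = np.polyval([B3, B2, B1, B0], T)

rss = np.sum((B - fitted) ** 2)  # residual sum of squares of the fit
print(round(B0, 2), round(rss, 3))
```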
Another phenomenological model option is the [Briere model](Appendix-ModelFitting.ipynb#The-TPC-models):
$$B = B_0 T (T-T_0) \sqrt{T_m-T}$$
Where $T_0$ and $T_m$ are the minimum and maximum feasible temperatures for the trait (below or above which the traits goes to zero), and $B_0$ is a normalization constant.
In contrast, the Schoolfield model (paper is in Readings directory) is a mechanistic option that is based upon thermodynamics and enzyme kinetics:
$$\label{eq:schoolf}
B = \frac{B_0 e^{\frac{-E}{k} (\frac{1}{T} - \frac{1}{283.15})}}
{ 1 + e^{\frac{E_l}{k} (\frac{1}{T_l} - \frac{1}{T})} +
e^{\frac{E_h}{k} (\frac{1}{T_h} - \frac{1}{T})}}
$$
*Please also have a look at the Delong et al 2017 paper, which lists this and other mechanistic TPC models* (see the Readings and Resources section). You may choose additional models listed in that paper for comparison, if you want.
Here, $k$ is the Boltzmann constant ($8.617 \times 10^{-5}$ eV $\cdot$ K$^{-1}$), $B$ the value of the trait at a given temperature $T$ (K) (K = $^\circ$C + 273.15), while $B_0$ is the trait value at 283.15 K (10$^\circ$C), which stands for the value of the growth rate at low temperature and controls the vertical offset of the curve. $E_l$ is the enzyme's low-temperature de-activation energy (eV), which controls the behavior of the enzyme (and the curve) at very low temperatures, and $T_l$ is the temperature at which the enzyme is 50% low-temperature deactivated. $E_h$ is the enzyme's high-temperature de-activation energy (eV), which controls the behavior of the enzyme (and the curve) at very high temperatures, and $T_h$ is the temperature at which the enzyme is 50% high-temperature deactivated. $E$ is the activation energy (eV), which controls the rise of the curve up to the peak in the "normal operating range" for the enzyme (below the peak of the curve and above $T_l$).
---
<figure>
<img src="./graphics/SchoolfEx.png" alt="NLLS fit to Sharpe-Schoolfield model" style="width:30%">
<small>
<center>
<figcaption>
Example of the Sharpe-Schoolfield model (eq. \ref{eq:schoolf}) fitted to the thermal response curve of a biological trait.
</figcaption>
</center>
</small>
</figure>
---
In many cases, a simplified Schoolfield model would be more appropriate for thermal response data, because low temperature inactivation is weak, or is undetectable in the data because low-temperature measurements were not made.
$$\label{eq:schoolfH}
B = \frac{B_0 e^{\frac{-E}{k} (\frac{1}{T} - \frac{1}{283.15})}}
{ 1 + e^{\frac{E_h}{k} (\frac{1}{T_h} - \frac{1}{T})}}
$$
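As a hedged numpy sketch (with illustrative parameter values, not values from the dataset), this simplified model can be coded and its peak located as follows:

```python
import numpy as np

k = 8.617e-5  # Boltzmann constant in eV / K

def schoolfield_high(T, B0, E, Eh, Th):
    """Simplified Schoolfield model (high-temperature deactivation only); T in Kelvin."""
    numerator = B0 * np.exp(-E / k * (1.0 / T - 1.0 / 283.15))
    denominator = 1.0 + np.exp(Eh / k * (1.0 / Th - 1.0 / T))
    return numerator / denominator

# Illustrative parameters: E = 0.65 eV, Eh = 3.0 eV, Th = 310 K
T = np.linspace(273.15, 323.15, 500)
B = schoolfield_high(T, B0=1.0, E=0.65, Eh=3.0, Th=310.0)
T_peak = T[np.argmax(B)]
print(round(T_peak - 273.15, 1))  # peak sits a few degrees below Th
```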
In other cases, a different simplified Schoolfield model would be more appropriate, because high temperature inactivation was not detectable in the data because measurements were not made at sufficiently high temperatures:
$$\label{eq:schoolfL}
B = \frac{B_0 e^{\frac{-E}{k} (\frac{1}{T} - \frac{1}{283.15})}}
{ 1 + e^{\frac{E_l}{k} (\frac{1}{T_l} - \frac{1}{T})}}
$$
Note that the cubic model (Equation \ref{eq:cubic}) has the same number of parameters as the reduced Schoolfield models (eq. \ref{eq:schoolfH} & \ref{eq:schoolfL}). Also, the temperature parameter ($T$) of the cubic model (Equation \ref{eq:cubic}) is in $^\circ$C, whereas the temperature parameter in the Schoolfield model is in K.
### Fitting models to the TPC data
You will use Nonlinear Least Squares (NLLS) to fit the alternative models above (eqns \ref{eq:schoolf} – \ref{eq:cubic}) to data, followed by model selection with AIC and BIC (also known as the Schwarz Criterion — *read the Johnson and Omland 2004 paper*).
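For least-squares fits with Gaussian errors, AIC and BIC can be computed directly from the residual sum of squares. A sketch (dropping additive constants that cancel when comparing models fitted to the same data):

```python
import numpy as np

def aic_bic(rss, n, n_params):
    """AIC and BIC for a least-squares fit with Gaussian errors.

    Uses the standard RSS-based forms: AIC = n ln(RSS/n) + 2p and
    BIC = n ln(RSS/n) + p ln(n), with p the number of fitted parameters.
    """
    aic = n * np.log(rss / n) + 2 * n_params
    bic = n * np.log(rss / n) + n_params * np.log(n)
    return aic, bic

# Toy comparison: same data, two models of different complexity
n = 30
aic_simple, bic_simple = aic_bic(rss=2.0, n=n, n_params=4)  # e.g. cubic
aic_full, bic_full = aic_bic(rss=1.9, n=n, n_params=6)      # e.g. full Schoolfield
print(aic_simple < aic_full)  # extra parameters not worth the small RSS gain
```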
### The Workflow
You will build a workflow that starts with the data and ends with a report written in LaTeX. I suggest the following components and sequence in your workflow (you can choose to do it differently!):
* A Python or R script that imports the data and prepares it for NLLS fitting, with the following features:
* It should create unique ids so that you can identify unique thermal responses (what does this mean?)
* It should filter out datasets with less than 5 data points (why?)
* It should deal with negative and zero trait values (why?)
* The script should add columns containing starting values of the model parameters for the NLLS fitting (how will you get these?)
* Save the modified data to a new csv file.
* A Python script that opens the new modified dataset (from step 1) and does the NLLS fitting, with the following features:
* Uses lmfit — look up submodules `minimize`, `Parameters`, `Parameter`, and `report_fit`. *Have a look through* <http://lmfit.github.io/lmfit-py>, especially <http://lmfit.github.io/lmfit-py/fitting.html#minimize>\
You will have to install lmfit using pip or `easy_install` - use sudo mode. In addition to the lmfit example in class, you may want to look for others online.
* Will use the `try` construct because not all runs will converge. Recall the `try` example from R
* The more thermal response curves you are able to fit, the better — that is part of the challenge!
* Will calculate AIC, BIC, R$^{2}$, and other statistical measures of fit (you decide what you want to include)
* Will export the results to a csv that the plotting R script (next item) can read.
* An R script that imports the results from the previous step and plots every thermal response with both models (or none, if nothing converges) overlaid — all plots should be saved in a single separate sub-directory. *Use ggplot for pretty results!*
* LaTeX source code that generates your report.
* A Python script (saved in Code) called run_MiniProject.py that runs the whole project, right down to compilation of the LaTeX document.
Doing all this may seem a bit scary at the start. However, you need to approach the problem systematically and methodically, and you will be OK. I suggest the following to get you started:
* Explore the data in R and work out a preliminary version of the plotting script, without the fitted models overlaid. That will also give you a feel for the data.
* Explore the two models – be able to plot them. Write them as functions in your Python script, because that's where you will use them (step 2 above) (you can use matplotlib for quick and dirty plotting and then suppress those code lines later).
* Work out a minimal example (say, with one "nice-looking" thermal response dataset) to see how the Python `lmfit` module works. We can help you work out the minimal example, including the usage of `try` to catch errors in case the fitting doesn't converge.
*One thing to note is that you will need to do the NLLS fitting on the logarithm of the function to facilitate convergence.*
## Readings & Resources
All these papers are in pdf format in the Readings directory on TheMulQuaBio repository.
* Levins, R. (1966) The strategy of model building in population biology. Am. Sci. 54, 421–431.
* Johnson, J. B. & Omland, K. S. (2004) Model selection in ecology and evolution. Trends Ecol. Evol. 19, 101–108.
* Bolker, B. M. et al. (2013) Strategies for fitting nonlinear ecological models in R, AD Model Builder, and BUGS. Methods Ecol. Evol. 4, 501–512
* [Some illustrative examples of (nonlinear) model-fitting to ecological/evolutionary data](https://groups.nceas.ucsb.edu/non-linear-modeling/projects)
* For the suggested fitting-TPCs project: papers in the Temperature_response_papers directory within the Readings directory, but especially:
* Schoolfield, R. M., Sharpe, P. J. & Magnuson, C. E. (1981) Non-linear regression of biological temperature-dependent rate models based on absolute reaction-rate theory. J. Theor. Biol. 88, 719–31.
* DeLong, J. P. et al. (2017) The combined effects of reactant kinetics and enzyme stability explain the temperature dependence of metabolic rates. Ecol. Evol. 7, 3940–3950 .
<div class="contentcontainer med left" style="margin-left: -50px;">
<dl class="dl-horizontal">
<dt>Title</dt> <dd> Path Element</dd>
<dt>Dependencies</dt> <dd>Matplotlib</dd>
<dt>Backends</dt> <dd><a href='./Path.ipynb'>Matplotlib</a></dd> <dd><a href='../bokeh/Path.ipynb'>Bokeh</a></dd>
</dl>
</div>
```
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
```
A ``Path`` object is actually a collection of lines. Unlike ``Curve``, where the y-axis is the dependent variable, a ``Path`` consists of lines connecting arbitrary points in two-dimensional space. The individual subpaths should be supplied as a list and will be stored as NumPy arrays, DataFrames or dictionaries for each column, i.e. any of the accepted columnar data formats. For a full description of the path geometry data model see the [Geometry Data User Guide](../user_guide/Geometry_Data.ipynb).
In this example we will create a Lissajous curve, which describes complex harmonic motion:
```
%%opts Path (color='black' linewidth=4)
lin = np.linspace(0, np.pi*2, 200)
def lissajous(t, a, b, delta):
return (np.sin(a * t + delta), np.sin(b * t), t)
hv.Path([lissajous(lin, 3, 5, np.pi/2)])
```
If you looked carefully, the ``lissajous`` function actually returns three columns: the x and y columns, and a third column describing the point in time. By declaring a value dimension for that third column we can also color the ``Path`` by time. Since the value is cyclic we will also use a cyclic colormap (``'hsv'``) to represent this variable:
```
%%opts Path [color_index='time'] (linewidth=4 cmap='hsv')
hv.Path([lissajous(lin, 3, 5, np.pi/2)], vdims='time')
```
If we do not provide a ``color_index``, overlaid ``Path`` elements will cycle through colors just like other elements do. Unlike ``Curve``, a single ``Path`` element can contain multiple lines that are disconnected from each other. A ``Path`` can therefore often be useful for drawing arbitrary annotations on top of an existing plot.
A ``Path`` element accepts multiple formats for specifying the paths, the simplest of which is passing a list of ``Nx2`` arrays of the x- and y-coordinates; alternatively, we can pass lists of coordinate tuples. In this example we will create some coordinates representing rectangles and ellipses annotating an ``RGB`` image:
```
%%opts Path (linewidth=4)
angle = np.linspace(0, 2*np.pi, 100)
baby = list(zip(0.15*np.sin(angle), 0.2*np.cos(angle)-0.2))
adultR = [(0.25, 0.45), (0.35,0.35), (0.25, 0.25), (0.15, 0.35), (0.25, 0.45)]
adultL = [(-0.3, 0.4), (-0.3, 0.3), (-0.2, 0.3), (-0.2, 0.4),(-0.3, 0.4)]
scene = hv.RGB.load_image('../assets/penguins.png')
scene * hv.Path([adultL, adultR, baby]) * hv.Path([baby])
```
A ``Path`` can also be used as a means to display a number of lines with the same sampling along the x-axis at once. If we initialize the ``Path`` with a tuple of x-coordinates and stacked y-coordinates, we can quickly view a number of lines at once. Here we will generate a number of random traces each slightly offset along the y-axis:
```
%%opts Path [aspect=3 fig_size=300]
N, NLINES = 100, 10
hv.Path((np.arange(N), np.random.rand(N, NLINES) + np.arange(NLINES)[np.newaxis, :])) *\
hv.Path((np.arange(N), np.random.rand(N, NLINES) + np.arange(NLINES)[np.newaxis, :]))
```
For full documentation and the available style and plot options, use ``hv.help(hv.Path)``.
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from *Introduction to TensorFlow* to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "`All modules imported`".
```
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
```
The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
```
def download(url, file):
"""
Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
            # Get the letter from the filename.  This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
```
<img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
## Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the `normalize_grayscale()` function to a range of `a=0.1` and `b=0.9`. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in [grayscale](https://en.wikipedia.org/wiki/Grayscale), the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
*If you're having trouble solving problem 1, you can view the solution [here](https://github.com/udacity/deep-learning/blob/master/intro-to-tensorflow/intro_to_tensorflow_solution.ipynb).*
```
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
Xmin = 0
Xmax = 255
return (a + (((image_data-Xmin)*(b-a))/(Xmax-Xmin)))
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
```
# Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
```
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
```
## Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input, the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, since we're trying to predict the image's digit, there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single-layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- `features`
- Placeholder tensor for feature data (`train_features`/`valid_features`/`test_features`)
- `labels`
- Placeholder tensor for label data (`train_labels`/`valid_labels`/`test_labels`)
- `weights`
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">`tf.truncated_normal()` documentation</a> for help.
- `biases`
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> `tf.zeros()` documentation</a> for help.
*If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available [here](intro_to_tensorflow_solution.ipynb).*
```
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count,labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
```
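The linear model and loss inside the block above reduce to a few lines of plain NumPy. This sketch (with made-up toy data, independent of TensorFlow) mirrors the same logits → softmax → cross-entropy pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 784)).astype(np.float32)    # 4 toy "images"
W = rng.normal(scale=0.1, size=(784, 10)).astype(np.float32)
b = np.zeros(10, dtype=np.float32)

logits = X @ W + b                                   # linear function WX + b
exp = np.exp(logits - logits.max(axis=1, keepdims=True))
prediction = exp / exp.sum(axis=1, keepdims=True)    # softmax (numerically stable)

labels = np.eye(10, dtype=np.float32)[[3, 1, 4, 1]]  # toy one-hot labels
cross_entropy = -np.sum(labels * np.log(prediction), axis=1)
loss = cross_entropy.mean()                          # training loss
print(loss)
```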
<img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
## Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* **Epochs:** 1
* **Learning Rate:**
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* **Epochs:**
* 1
* 2
* 3
* 4
* 5
* **Learning Rate:** 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
*If you're having trouble solving problem 3, you can view the solution [here](intro_to_tensorflow_solution.ipynb).*
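The effect the two configurations probe can be seen on a toy 1-D problem. This sketch (unrelated to the notMNIST model itself) runs plain gradient descent on $f(w) = w^2$ with a small and a too-large learning rate:

```python
def descend(lr, steps=20, w=1.0):
    """Gradient descent on f(w) = w**2, whose gradient is 2*w."""
    for _ in range(steps):
        w = w - lr * 2 * w
    return w

print(descend(0.1))  # shrinks toward the minimum at w = 0
print(descend(1.1))  # overshoots each step and diverges
```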
```
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.5
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
```
## Test
You're going to test your model against your held-out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
```
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
```
# Multiple layers
Good job! You built a one layer TensorFlow network! However, you might want to build more than one layer. This is deep learning after all! In the next section, you will start to satisfy your need for more layers.
```
from jupyter_innotater import *
import numpy as np, os
```
## Save button calls your supplied Python function
```
foodfns = sorted(os.listdir('./foods/'))
targets = np.zeros((len(foodfns), 4), dtype='int') # (x,y,w,h) for each data row
def my_save_hook(uindexes):
np.savetxt("foodboxes.csv", targets, delimiter=",", fmt="%d")
return True # Tell Innotater the save was successful (we just assume so here...)
Innotater( ImageInnotation(foodfns, path='./foods'), BoundingBoxInnotation(targets), save_hook=my_save_hook )
```
Click the Save button above after making changes, and a csv file will be saved containing your latest data.
Your function should return True if the save was successful, or False if it failed and the data still needs to be saved.
The uindexes parameter is a list of integers telling you which indexes have been changed through the Innotater.
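Independent of the widget itself, the persistence in `my_save_hook` is plain NumPy. This sketch round-trips a toy targets array through the same `savetxt` call, using an in-memory buffer instead of a real file:

```python
import io
import numpy as np

targets = np.array([[10, 20, 30, 40], [0, 0, 0, 0]], dtype=int)  # toy (x,y,w,h) rows

buf = io.StringIO()
np.savetxt(buf, targets, delimiter=",", fmt="%d")  # same call the save hook uses
buf.seek(0)
restored = np.loadtxt(buf, delimiter=",", dtype=int)
print(restored)
```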
## Custom Buttons calling your own Python function
The ButtonInnotation allows you to provide custom button functionality.
In this example, there is a button to reset everything in the current sample, and buttons to reset each bounding box.
```
animalfns = sorted(os.listdir('./animals/'))
repeats = 8
# Per-photo data
classes = ['cat', 'dog']
targets_type = np.zeros((len(animalfns), len(classes)), dtype='int') # One-hot encoding
# Repeats within each photo
targets_bboxes = np.zeros((len(animalfns), repeats, 4), dtype='int') # (x,y,w,h) for each animal
def reset_click(uindex, repeat_index, **kwargs):
# uindex is the (underlying) index of the data sample where the button was clicked
# repeat_index will be the sub-index of the row in a RepeatInnotation, or -1 if at the top level
# kwargs will contain name and desc fields
if repeat_index == -1: # This was a top-level button (no sub-index within the RepeatInnotation)
# So reset everything
targets_type[uindex] = [1,0]
for i in range(repeats):
targets_bboxes[uindex, i, :] = 0
else:
# Only reset the row with repeat_index
targets_bboxes[uindex, repeat_index, :] = 0
return True # Tell Innotater the data at uindex was changed
Innotater(
ImageInnotation(animalfns, path='./animals', width=400, height=300),
[
MultiClassInnotation(targets_type, name='Animal Type', classes=classes, dropdown=False),
RepeatInnotation(
(ButtonInnotation, None, {'desc': 'X', 'on_click': reset_click, 'layout': {'width': '40px'}}),
(BoundingBoxInnotation, targets_bboxes),
max_repeats=repeats, min_repeats=1
),
ButtonInnotation(None, name='Reset All', on_click=reset_click)
]
)
```
## Plots comparison of interpretability performance for CNNs with log-based activations
Figures generated in this notebook:
- Supplementary Fig. 11
```
import os
import numpy as np
from six.moves import cPickle
import matplotlib.pyplot as plt
import helper
from tfomics import utils
results_path = os.path.join('../results', 'task3')
params_path = os.path.join(results_path, 'model_params')
save_path = os.path.join(results_path, 'scores')
# load data
data_path = '../data/synthetic_code_dataset.h5'
data = helper.load_data(data_path)
x_train, y_train, x_valid, y_valid, x_test, y_test = data
# load ground truth values
test_model = helper.load_synthetic_models(data_path, dataset='test')
true_index = np.where(y_test[:,0] == 1)[0]
X = x_test[true_index][:500]
X_model = test_model[true_index][:500]
activations = ['relu', 'relu_l2', 'log_relu', 'log_relu_l2']
score_names = ['saliency_scores']
num_trials = 10
model_name = 'cnn-dist'
results = {}
for activation in activations:
name = model_name+'_'+activation
results[name] = {}
file_path = os.path.join(save_path, name+'.pickle')
with open(file_path, 'rb') as f:
saliency_scores = cPickle.load(f)
#mut_scores = cPickle.load(f)
#integrated_scores = cPickle.load(f)
#shap_scores = cPickle.load(f)
all_scores = [saliency_scores]#, mut_scores, integrated_scores, shap_scores]
for score_name, scores in zip(score_names, all_scores):
shap_roc = []
shap_pr = []
for trial in range(num_trials):
if 'mut' in score_name:
trial_scores = np.sqrt(np.sum(scores[trial]**2, axis=-1, keepdims=True)) * X
else:
trial_scores = scores[trial] * X
roc_score, pr_score = helper.interpretability_performance(X, trial_scores, X_model)
shap_roc.append(np.mean(roc_score))
shap_pr.append(np.mean(pr_score))
results[name][score_name] = [np.array(shap_roc), np.array(shap_pr)]
print('%s: %.4f+/-%.4f\t'%(name+'_'+score_name,
np.mean(results[name][score_name][0]),
np.std(results[name][score_name][0])))
names = ['Log-Relu-l2', 'Log-relu', 'Relu-l2', 'Relu']
score_name = 'saliency_scores'
fig = plt.figure(figsize=(4,3))
ax = plt.subplot(1,1,1)
vals = [results['cnn-dist_log_relu_l2'][score_name][0],
results['cnn-dist_log_relu'][score_name][0],
results['cnn-dist_relu_l2'][score_name][0],
results['cnn-dist_relu'][score_name][0],
]
ax.boxplot(vals, widths = 0.6);
plt.ylabel('AUROC', fontsize=12)
plt.yticks([0.7, 0.75, 0.8], fontsize=12)
plt.xticks(np.linspace(1,4,4), names, fontsize=12, rotation=60)
ax.set_ybound([.69,0.84])
ax.set_xbound([.5,4.5])
outfile = os.path.join(results_path, 'task3_compare_attr_score_auroc_log.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
names = ['Log-Relu-l2', 'Log-relu', 'Relu-l2', 'Relu']
fig = plt.figure(figsize=(4,3))
ax = plt.subplot(1,1,1)
vals = [results['cnn-dist_log_relu_l2'][score_name][1],
results['cnn-dist_log_relu'][score_name][1],
results['cnn-dist_relu_l2'][score_name][1],
results['cnn-dist_relu'][score_name][1],
]
ax.boxplot(vals, widths = 0.6);
plt.ylabel('AUPR', fontsize=12)
plt.yticks([ 0.6, 0.65, 0.7], fontsize=12)
plt.xticks(np.linspace(1,4,4), names, fontsize=12, rotation=60)
ax.set_ybound([.58,0.74])
ax.set_xbound([.5,4.5])
outfile = os.path.join(results_path, 'task3_compare_attr_score_pr_log.pdf')
fig.savefig(outfile, format='pdf', dpi=200, bbox_inches='tight')
```
```
%load_ext autoreload
%autoreload 2
from synthpop.census_helpers import Census
from synthpop import categorizer as cat
import pandas as pd
import numpy as np
import os
pd.set_option('display.max_columns', 500)
```
## The census API needs a key - you can sign up for one at
### http://api.census.gov/data/key_signup.html
```
c = Census(os.environ["CENSUS"])
```
## Here we get aggregate information on households from ACS - note some variables are associated with block groups and others with tracts
```
income_columns = ['B19001_0%02dE'%i for i in range(1, 18)]
vehicle_columns = ['B08201_0%02dE'%i for i in range(1, 7)]
workers_columns = ['B08202_0%02dE'%i for i in range(1, 6)]
families_columns = ['B11001_001E', 'B11001_002E']
block_group_columns = income_columns + families_columns
tract_columns = vehicle_columns + workers_columns
h_acs = c.block_group_and_tract_query(block_group_columns,
tract_columns, "06", "075",
merge_columns=['tract', 'county', 'state'],
block_group_size_attr="B11001_001E",
tract_size_attr="B08201_001E",
tract="030600")
h_acs
```
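The column lists above are built with printf-style formatting; the variable codes are the real ACS table IDs used in the cell above. A quick sketch of how they expand:

```python
# Each ACS variable code is the table ID plus a zero-padded column index plus 'E'.
income_columns = ['B19001_0%02dE' % i for i in range(1, 18)]
print(income_columns[:3])  # first few income variables
print(len(income_columns))
```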
## And here is aggregate information on people from ACS
```
population = ['B01001_001E']
sex = ['B01001_002E', 'B01001_026E']
race = ['B02001_0%02dE'%i for i in range(1,11)]
male_age_columns = ['B01001_0%02dE'%i for i in range(3,26)]
female_age_columns = ['B01001_0%02dE'%i for i in range(27,50)]
all_columns = population + sex + race + male_age_columns + female_age_columns
p_acs = c.block_group_query(all_columns, "06", "075", tract="030600")
p_acs
```
## Get the PUMA for our test tracts - this actually downloads the mapping file from the census website so it might take a few seconds
```
puma = c.tract_to_puma("06", "075", "030600")
puma
puma10 = puma[0]
puma00 = puma[1]
```
## Download PUMS for people records for a PUMA from our server (we processed the large files into smaller ones for you)
```
p_pums = c.download_population_pums("06", puma10=puma10, puma00=puma00)
p_pums.head(5)
```
## Download PUMS for household records for a PUMA
```
h_pums = c.download_household_pums("06", puma10=puma10, puma00=puma00)
h_pums.head(5)
```
## Now the job is to categorize acs and pums into the same categories - we start with the household acs data
```
h_acs_cat = cat.categorize(h_acs, {
("households", "total"): "B11001_001E",
("children", "yes"): "B11001_002E",
("children", "no"): "B11001_001E - B11001_002E",
("income", "lt35"): "B19001_002E + B19001_003E + B19001_004E + "
"B19001_005E + B19001_006E + B19001_007E",
("income", "gt35-lt100"): "B19001_008E + B19001_009E + "
"B19001_010E + B19001_011E + B19001_012E"
"+ B19001_013E",
("income", "gt100"): "B19001_014E + B19001_015E + B19001_016E"
"+ B19001_017E",
("cars", "none"): "B08201_002E",
("cars", "one"): "B08201_003E",
("cars", "two or more"): "B08201_004E + B08201_005E + B08201_006E",
("workers", "none"): "B08202_002E",
("workers", "one"): "B08202_003E",
("workers", "two or more"): "B08202_004E + B08202_005E"
}, index_cols=['NAME'])
h_acs_cat
assert np.all(cat.sum_accross_category(h_acs_cat) < 2)
```
## And the same for ACS population - the output of the categorization is the MARGINALS for each variable category
```
p_acs_cat = cat.categorize(p_acs, {
("population", "total"): "B01001_001E",
("age", "19 and under"): "B01001_003E + B01001_004E + B01001_005E + "
"B01001_006E + B01001_007E + B01001_027E + "
"B01001_028E + B01001_029E + B01001_030E + "
"B01001_031E",
("age", "20 to 35"): "B01001_008E + B01001_009E + B01001_010E + "
"B01001_011E + B01001_012E + B01001_032E + "
"B01001_033E + B01001_034E + B01001_035E + "
"B01001_036E",
("age", "35 to 60"): "B01001_013E + B01001_014E + B01001_015E + "
"B01001_016E + B01001_017E + B01001_037E + "
"B01001_038E + B01001_039E + B01001_040E + "
"B01001_041E",
("age", "above 60"): "B01001_018E + B01001_019E + B01001_020E + "
"B01001_021E + B01001_022E + B01001_023E + "
"B01001_024E + B01001_025E + B01001_042E + "
"B01001_043E + B01001_044E + B01001_045E + "
"B01001_046E + B01001_047E + B01001_048E + "
"B01001_049E",
("race", "white"): "B02001_002E",
("race", "black"): "B02001_003E",
("race", "asian"): "B02001_005E",
("race", "other"): "B02001_004E + B02001_006E + B02001_007E + "
"B02001_008E",
("sex", "male"): "B01001_002E",
("sex", "female"): "B01001_026E"
}, index_cols=['NAME'])
p_acs_cat
assert np.all(cat.sum_accross_category(p_acs_cat) < 2)
```
## To get the marginals a series for one geography do this
```
p_acs_cat.iloc[0].transpose()
```
## Now categorize the PUMS population data into the same categories
```
def age_cat(r):
if r.AGEP <= 19: return "19 and under"
elif r.AGEP <= 35: return "20 to 35"
elif r.AGEP <= 60: return "35 to 60"
return "above 60"
def race_cat(r):
if r.RAC1P == 1: return "white"
elif r.RAC1P == 2: return "black"
elif r.RAC1P == 6: return "asian"
return "other"
def sex_cat(r):
if r.SEX == 1: return "male"
return "female"
_, jd_persons = cat.joint_distribution(
p_pums,
cat.category_combinations(p_acs_cat.columns),
{"age": age_cat, "race": race_cat, "sex": sex_cat}
)
jd_persons
```
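The category functions above are plain row-to-label maps. This self-contained sketch (toy rows, not real PUMS data) shows the kind of joint counts that `cat.joint_distribution` builds from them:

```python
import pandas as pd

def age_cat(r):
    if r.AGEP <= 19: return "19 and under"
    elif r.AGEP <= 35: return "20 to 35"
    elif r.AGEP <= 60: return "35 to 60"
    return "above 60"

# Toy stand-in for a PUMS person table
toy_pums = pd.DataFrame({'AGEP': [12, 30, 30, 70], 'SEX': [1, 2, 1, 2]})
toy_pums['age'] = toy_pums.apply(age_cat, axis=1)
toy_pums['sex'] = toy_pums.SEX.map({1: 'male', 2: 'female'})

# Joint counts over the cross product of categories
counts = toy_pums.groupby(['age', 'sex']).size()
print(counts)
```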
## Do the same for households - the output of this step is the JOINT DISTRIBUTIONS for the cross product of all possible categories
```
def cars_cat(r):
if r.VEH == 0: return "none"
elif r.VEH == 1: return "one"
return "two or more"
def children_cat(r):
if r.NOC > 0: return "yes"
return "no"
def income_cat(r):
if r.FINCP > 100000: return "gt100"
elif r.FINCP > 35000: return "gt35-lt100"
return "lt35"
def workers_cat(r):
if r.WIF == 3: return "two or more"
elif r.WIF == 2: return "two or more"
elif r.WIF == 1: return "one"
return "none"
_, jd_households = cat.joint_distribution(
h_pums,
cat.category_combinations(h_acs_cat.columns),
{"cars": cars_cat, "children": children_cat,
"income": income_cat, "workers": workers_cat}
)
jd_households
```
## With marginals (aggregate, from ACS) and joint distribution (disaggregate, from PUMS) we're ready for some synthesis
```
"TBD"
```
#### Copyright 2017 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Synthetic Features and Outliers
**Learning Objectives:**
* Create a synthetic feature that is the ratio of two other features
* Use this new feature as an input to a linear regression model
* Improve the effectiveness of the model by identifying and clipping (removing) outliers in the input data
Let's revisit our model from the previous "First Steps with TensorFlow" exercise.
First, we'll import the California housing data into a *Pandas* `DataFrame`:
## Setup
```
from __future__ import print_function
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.metrics as metrics
import tensorflow as tf
from tensorflow.python.data import Dataset
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://download.mlcc.google.cn/mledu-datasets/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
california_housing_dataframe["median_house_value"] /= 1000.0
california_housing_dataframe
```
Next, we'll set up our input function, and define the function for model training:
```
def my_input_fn(features, targets, batch_size=1, shuffle=True, num_epochs=None):
"""Trains a linear regression model of one feature.
Args:
features: pandas DataFrame of features
targets: pandas DataFrame of targets
batch_size: Size of batches to be passed to the model
shuffle: True or False. Whether to shuffle the data.
num_epochs: Number of epochs for which data should be repeated. None = repeat indefinitely
Returns:
Tuple of (features, labels) for next data batch
"""
# Convert pandas data into a dict of np arrays.
features = {key:np.array(value) for key,value in dict(features).items()}
# Construct a dataset, and configure batching/repeating.
ds = Dataset.from_tensor_slices((features,targets)) # warning: 2GB limit
ds = ds.batch(batch_size).repeat(num_epochs)
# Shuffle the data, if specified.
if shuffle:
ds = ds.shuffle(buffer_size=10000)
# Return the next batch of data.
features, labels = ds.make_one_shot_iterator().get_next()
return features, labels
def train_model(learning_rate, steps, batch_size, input_feature):
"""Trains a linear regression model.
Args:
learning_rate: A `float`, the learning rate.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
batch_size: A non-zero `int`, the batch size.
input_feature: A `string` specifying a column from `california_housing_dataframe`
to use as input feature.
Returns:
A Pandas `DataFrame` containing targets and the corresponding predictions done
after training the model.
"""
periods = 10
steps_per_period = steps / periods
my_feature = input_feature
my_feature_data = california_housing_dataframe[[my_feature]].astype('float32')
my_label = "median_house_value"
targets = california_housing_dataframe[my_label].astype('float32')
# Create input functions.
training_input_fn = lambda: my_input_fn(my_feature_data, targets, batch_size=batch_size)
predict_training_input_fn = lambda: my_input_fn(my_feature_data, targets, num_epochs=1, shuffle=False)
# Create feature columns.
feature_columns = [tf.feature_column.numeric_column(my_feature)]
# Create a linear regressor object.
my_optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
my_optimizer = tf.contrib.estimator.clip_gradients_by_norm(my_optimizer, 5.0)
linear_regressor = tf.estimator.LinearRegressor(
feature_columns=feature_columns,
optimizer=my_optimizer
)
# Set up to plot the state of our model's line each period.
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.title("Learned Line by Period")
plt.ylabel(my_label)
plt.xlabel(my_feature)
sample = california_housing_dataframe.sample(n=300)
plt.scatter(sample[my_feature], sample[my_label])
colors = [cm.coolwarm(x) for x in np.linspace(-1, 1, periods)]
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print("Training model...")
print("RMSE (on training data):")
root_mean_squared_errors = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_regressor.train(
input_fn=training_input_fn,
steps=steps_per_period,
)
# Take a break and compute predictions.
predictions = linear_regressor.predict(input_fn=predict_training_input_fn)
predictions = np.array([item['predictions'][0] for item in predictions])
# Compute loss.
root_mean_squared_error = math.sqrt(
metrics.mean_squared_error(predictions, targets))
# Occasionally print the current loss.
print(" period %02d : %0.2f" % (period, root_mean_squared_error))
# Add the loss metrics from this period to our list.
root_mean_squared_errors.append(root_mean_squared_error)
# Finally, track the weights and biases over time.
# Apply some math to ensure that the data and line are plotted neatly.
y_extents = np.array([0, sample[my_label].max()])
weight = linear_regressor.get_variable_value('linear/linear_model/%s/weights' % input_feature)[0]
bias = linear_regressor.get_variable_value('linear/linear_model/bias_weights')
x_extents = (y_extents - bias) / weight
x_extents = np.maximum(np.minimum(x_extents,
sample[my_feature].max()),
sample[my_feature].min())
y_extents = weight * x_extents + bias
plt.plot(x_extents, y_extents, color=colors[period])
print("Model training finished.")
# Output a graph of loss metrics over periods.
plt.subplot(1, 2, 2)
plt.ylabel('RMSE')
plt.xlabel('Periods')
plt.title("Root Mean Squared Error vs. Periods")
plt.tight_layout()
plt.plot(root_mean_squared_errors)
# Create a table with calibration data.
calibration_data = pd.DataFrame()
calibration_data["predictions"] = pd.Series(predictions)
calibration_data["targets"] = pd.Series(targets)
display.display(calibration_data.describe())
print("Final RMSE (on training data): %0.2f" % root_mean_squared_error)
return calibration_data
```
## Task 1: Try a Synthetic Feature
Both the `total_rooms` and `population` features count totals for a given city block.
But what if one city block were more densely populated than another? We can explore how block density relates to median house value by creating a synthetic feature that's a ratio of `total_rooms` and `population`.
In the cell below, create a feature called `rooms_per_person`, and use that as the `input_feature` to `train_model()`.
What's the best performance you can get with this single feature by tweaking the learning rate? (The better the performance, the better your regression line should fit the data, and the lower the final RMSE should be.)
**NOTE**: You may find it helpful to add a few code cells below so you can try out several different learning rates and compare the results. To add a new code cell, hover your cursor directly below the center of this cell, and click **CODE**.
```
#
# YOUR CODE HERE
#
california_housing_dataframe["rooms_per_person"] =
calibration_data = train_model(
learning_rate=0.00005,
steps=500,
batch_size=5,
input_feature="rooms_per_person"
)
```
### Solution
Click below for a solution.
```
california_housing_dataframe["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] / california_housing_dataframe["population"])
calibration_data = train_model(
learning_rate=0.05,
steps=500,
batch_size=5,
input_feature="rooms_per_person")
```
## Task 2: Identify Outliers
We can visualize the performance of our model by creating a scatter plot of predictions vs. target values. Ideally, these would lie on a perfectly correlated diagonal line.
Use Pyplot's `scatter()` to create a scatter plot of predictions vs. targets, using the rooms-per-person model you trained in Task 1.
Do you see any oddities? Trace these back to the source data by looking at the distribution of values in `rooms_per_person`.
```
# YOUR CODE HERE
```
### Solution
Click below for a solution.
```
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.scatter(calibration_data["predictions"], calibration_data["targets"])
```
The calibration data shows most scatter points aligned to a line. The line is almost vertical, but we'll come back to that later. Right now let's focus on the ones that deviate from the line. We note that they are relatively few in number.
If we plot a histogram of `rooms_per_person`, we find that we have a few outliers in our input data:
```
plt.subplot(1, 2, 2)
_ = california_housing_dataframe["rooms_per_person"].hist()
```
## Task 3: Clip Outliers
See if you can further improve the model fit by setting the outlier values of `rooms_per_person` to some reasonable minimum or maximum.
For reference, here's a quick example of how to apply a function to a Pandas `Series`:
    clipped_feature = my_dataframe["my_feature_name"].apply(lambda x: max(x, 0))
The above `clipped_feature` will have no values less than `0`.
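Equivalently (assuming pandas is available), the built-in `Series.clip` achieves the same bounding as the `apply` pattern shown above, without a lambda:

```python
import pandas as pd

s = pd.Series([-2.0, 1.0, 3.0, 9.0])
via_apply = s.apply(lambda x: max(x, 0))  # the pattern shown above
via_clip = s.clip(lower=0)                # equivalent built-in
print(via_clip.tolist())
```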
```
# YOUR CODE HERE
```
### 解决方案
点击下方即可查看解决方案。
我们在任务 2 中创建的直方图显示,大多数值都小于 `5`。我们将 `rooms_per_person` 的值截取为 5,然后绘制直方图以再次检查结果。
```
california_housing_dataframe["rooms_per_person"] = (
california_housing_dataframe["rooms_per_person"]).apply(lambda x: min(x, 5))
_ = california_housing_dataframe["rooms_per_person"].hist()
```
To verify that clipping worked, let's train again and print the calibration data once more:
```
calibration_data = train_model(
learning_rate=0.05,
steps=500,
batch_size=5,
input_feature="rooms_per_person")
_ = plt.scatter(calibration_data["predictions"], calibration_data["targets"])
```
```
import pandas
import numpy as np
import matplotlib.pyplot as plt; plt.rcdefaults()
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix
from collections import defaultdict, Counter, OrderedDict
from operator import itemgetter
import codecs
import csv
import itertools
import spacy
%matplotlib inline
f = '../data/resolution/raw/turkers_annotations.csv'
reg_ans = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8}
comb_ans = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 5, 7: 7, 8: 8}
def parse_results(f_name, ans_dic):
results = {}
table = pandas.read_csv(f_name)
answers = defaultdict(dict)
for ind, row in table.iterrows():
worker = row['WorkerId']
time_complete = row['WorkTimeInSeconds']
task_id = row['Input.task_id']
show_id = row['Input.show_id']
scene_ind = row['Input.scene_index']
text_ind = row['Input.text_index']
text = row['Input.text_reduced']
target = row['Input.target']
target_ind_s = row['Input.target_start_index']
target_ind_e = row['Input.target_end_index']
ans = row['Answer.ans']
comment = row['Answer.comment']
ref_offset = row['Answer.offset']
ref = row['Answer.reference']
other = row['Answer.other']
if task_id in answers:
answers[task_id]['ans'] += [(ans_dic[ans], ref, other, ref_offset)]
else:
answers[task_id] = {'text': text, 'target': target,
'target_sent': row['Input.target_sentence'],
'ind_s': target_ind_s, 'ind_e': target_ind_e,
'ans': [(ans_dic[ans], ref, other, ref_offset)]}
return answers
answers = parse_results(f, reg_ans)
answers_comb = parse_results(f, comb_ans)
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('Majority label')
plt.xlabel('Minority label')
ind_dic = {1: 0, 2: 1, 3: 2, 4: 3, 7: 4, 8: 5, 5: 6, 6: 6}
def majority_agreement(answers):
    no_ag = Counter()
    distr = Counter(tuple(sorted(tuple([v[0] for v in c['ans']]), reverse=True)) for c in answers.values())
    conf_mat = np.zeros((7, 7))
    for trip, c in distr.most_common():
        x = Counter(trip)
        y = OrderedDict(x.most_common())
        if len(y) == len(trip):
            # every annotator gave a different answer: no agreement at all
            no_ag[tuple(x.keys())] += c
            continue
        elif len(y) == 1:
            i = j = next(iter(y.items()))[0]
        else:
            items = list(y.items())  # dict views are not subscriptable in Python 3
            i = items[0][0]
            j = items[1][0]
        conf_mat[ind_dic[i]][ind_dic[j]] += c  # accumulate: labels 5 and 6 share the 'Other' cell
# print conf_mat
plt.figure()
class_names = ['Reference', 'Year', 'Age', 'Currency', 'People', 'Time', 'Other']
    # sns.heatmap(conf_mat, xticklabels=class_names, yticklabels=class_names)
plot_confusion_matrix(conf_mat.astype(int), classes=class_names,
title='Majority Answer Confusion matrix')
# plt.savefig('../resources/conf-mat.svg', bbox_inches='tight', format='svg', dpi=1200)
# plt.savefig('../reports/figures/ans-agg.png', bbox_inches='tight', dpi=200)
plt.show()
return no_ag
no_ag = majority_agreement(answers_comb)
sum(no_ag.values())
115.0/10223  # items without a majority agreement / total items (numbers hard-coded from a previous run)
len(answers_comb)
```
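The `normalize=True` branch of `plot_confusion_matrix` divides each row of the matrix by its row sum. That step can be sketched on its own with a toy matrix:

```python
import numpy as np

# A small confusion matrix: rows are majority labels, columns minority labels.
cm = np.array([[8, 2],
               [1, 9]])

# Same row normalization as the normalize=True branch above.
cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]

print(cm_norm)  # each row now sums to 1
```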
| github_jupyter |
<small><small><i>
All the IPython Notebooks in **Python Introduction** lecture series by Dr. Milaan Parmar are available @ **[GitHub](https://github.com/milaan9/01_Python_Introduction)**
</i></small></small>
# Python Variables and Constants
In this class, you will learn about Python variables, constants, literals and their use cases.
# 1. Python Variables
A variable is a named location used to **store data in the memory**. A variable is also known as an **identifier** and is used to hold a value. It is helpful to think of variables as containers that hold data which can be changed later in the program. **Mnemonic** variable names are recommended in many programming languages; a mnemonic variable name is one that can be easily remembered and associated with the data it holds. A variable refers to a memory address in which data is stored. For example,
```python
>>>number = 90
```
Here, we have created a variable named **`number`** and assigned the value **`90`** to it.
You can think of variables as a bag to store books in it and that book can be replaced at any time.
```python
>>>number = 90
>>>number = 9.1
```
Initially, the value of number was **`90`**. Later, it was changed to **`9.1`**.
> **Note**: In Python, we don't actually assign values to the variables. Instead, Python gives the reference of the object(value) to the variable.
In Python, we don't need to specify the type of a variable because Python is a **type-inferred language** and smart enough to infer the variable's type.
#### Python Variable Name Rules
- A variable name must start with a **letter** **`A`**-**`z`** or the **underscore** **`_`** character
- A variable name cannot start with a **number** **`0`**-**`9`**
- A variable name can only contain alpha-numeric characters and underscores (**`A`**-**`z`**, **`0`**-**`9`**, and **`_`** )
- Variable names are case-sensitive (**`firstname`**, **`Firstname`**, **`FirstName`** and **`FIRSTNAME`** are different variables). It is recommended to use lowercase letters for variable names.
#### Let us see valid variable names
```python
firstname
lastname
age
country
city
first_name
last_name
capital_city
_if # if we want to use reserved word as a variable
year_2021
year2021
current_year_2021
birth_year
num1
num2
```
Invalid variable names:
```python
first-name
first@name
first$name
num-1
1num
```
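These rules can also be checked programmatically. A small sketch using the standard-library `keyword` module and `str.isidentifier()` (the helper name is ours, not part of any library):

```python
import keyword

def is_valid_variable_name(name):
    """Return True if name follows Python's identifier rules and is not a reserved word."""
    return name.isidentifier() and not keyword.iskeyword(name)

print(is_valid_variable_name("first_name"))  # True
print(is_valid_variable_name("1num"))        # False: starts with a digit
print(is_valid_variable_name("first-name"))  # False: hyphen is not allowed
print(is_valid_variable_name("if"))          # False: reserved keyword
```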
We will use standard Python variable naming style which has been adopted by many Python developers. Python developers use snake case(snake_case) variable naming convention. We use underscore character after each word for a variable containing more than one word (eg. **`first_name`**, **`last_name`**, **`engine_rotation_speed`**). The example below is an example of standard naming of variables, underscore is required when the variable name is more than one word.
When we assign a certain data type to a variable, it is called variable declaration. For instance in the example below my first name is assigned to a variable **`first_name`**. The equal sign is an assignment operator. Assigning means storing data in the variable. The equal sign in Python is not equality as in Mathematics.
### Assigning values to Variables in Python
Think of a variable as a name attached to a particular object. In Python, variables need not be declared or defined in advance, as is the case in many other programming languages.
As you can see from the above example, you can use the assignment operator **`=`** to assign a value to a variable.
#### Example 1: Declaring and assigning value to a variable
```
number = 90
number = 9.1
number
website = "github.com" # `website` is the variable and "github.com" is its value
print(website)
```
In the above program, we assigned a value **`github.com`** to the variable **`website`**. Then, we printed out the value assigned to **`website`** i.e. **`github.com`**.
> **Note**: Python is a **[type-inferred](https://en.wikipedia.org/wiki/Type_inference)** language, so you don't have to explicitly define the variable type. It automatically knows that **`github.com`** is a string and declares the **`website`** variable as a string.
```
print('Hello',',', 'World','!') # it can take multiple arguments, 4 arguments have been passed
first_name = 'Milaan'
last_name = 'Parmar'
country = 'Finland'
city = 'Tampere'
age = 96
is_married = True
skills = ['Python', 'Matlab', 'JS', 'C', 'C++']
person_info = {
'firstname':'Milaan',
'lastname':'Parmar',
'country':'Finland',
'city':'Tampere'
}
```
Let us print and also find the length of the variables declared at the top:
```
# Printing the values stored in the variables
print('First name:', first_name)
print('First name length:', len(first_name))
print('Last name: ', last_name)
print('Last name length: ', len(last_name))
print('Country: ', country)
print('City: ', city)
print('Age: ', age)
print('Married: ', is_married)
print('Skills: ', skills)
print('Person information: ', person_info)
```
#### Example 2: Declaring multiple variables in one line using comma **`,`** and semicolon **`;`**
```
a, b, c = 6, 9.3, "Hello"
print (a)
print (b)
print (c)
a = 1; b = 2; c = 3
print(a,b,c) # output: 1 2 3
a,b,c # output: (1, 2, 3)
first_name, last_name, country, age, is_married = 'Milaan', 'Parmar', 'Finland', 96, True
print(first_name, last_name, country, age, is_married)
print('First name:', first_name)
print('Last name: ', last_name)
print('Country: ', country)
print('Age: ', age) # Don't worry it is not my real age ^_^
print('Married: ', is_married)
```
If we want to assign the same value to **multiple**/**chained** variables at once, we can do this as:
```
x = y = z = "same"
print (x)
print (y)
print (z)
```
This program assigns the string **`same`** to all three variables **`x`**, **`y`** and **`z`**.
```
p = q = r = 300 # Assigning value together
print(p, q, r) # Printing value together
```
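Chained assignment gives every name a reference to the same object. With immutable values such as numbers this is harmless, but with a mutable object like a list it can surprise you; a short sketch:

```python
# All three names refer to the same list object.
x = y = z = []
x.append(1)
print(y)  # [1] -- y sees the change because x and y are the same object

# Rebinding one name does not affect the others.
p = q = r = 300
p = 500
print(p, q, r)  # 500 300 300
```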
#### Example 3: Changing the value of a variable
```
website = "github.com"
print(website)
# assigning a new value to website
website = "baidu.com"
print(website)
```
In the above program, we have assigned **`github.com`** to the **`website`** variable initially. Then, the value is changed to **`baidu.com`**.
```
n=300
print(n)
m = n
print(m)
m = 1000 # assigning a new value to m
print(m)
# Declare & Redeclare variables
m = "Python is Fun"
m = 10
print (m)
```
# 2. Constants
A constant is a type of variable whose value cannot be changed. It is helpful to think of constants as containers that hold information which cannot be changed later.
You can think of constants as a bag to store some books which cannot be replaced once placed inside the bag.
### Assigning value to constant in Python
In Python, constants are usually declared and assigned in a module. Here, the module is a new file containing variables, functions, etc., which is imported into the main file. Inside the module, constants are written in all capital letters with underscores separating the words.
#### Example 1: Declaring and assigning value to a constant
Create a **constant.py**:
```python
>>>PI = 3.14
>>>GRAVITY = 9.8
```
Create a **main.py**:
```python
>>>import constant
>>>print(constant.PI)
>>>print(constant.GRAVITY)
3.14
9.8
```
In the above program, we create a **constant.py** module file. Then, we assign the constant value to **`PI`** and **`GRAVITY`**. After that, we create a **main.py** file and import the **`constant`** module. Finally, we print the constant value.
> **Note**: In reality, we don't use constants in Python. Naming them in all capital letters is a convention to separate them from variables, however, it does not actually prevent reassignment.
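The note above is easy to demonstrate: the all-caps convention signals intent but does not prevent reassignment. A minimal sketch (the names are illustrative):

```python
PI = 3.14  # written in capitals by convention, but still an ordinary variable

PI = 3.14159  # Python happily allows this reassignment
print(PI)
```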
## Rules and Naming Convention for Variables and constants
The examples you have seen so far have used **short**, terse variable names like **`m`** and **`n`**. But variable names can be more **verbose**. In fact, it is usually beneficial if they are, because it makes the purpose of the variable more evident at first glance.
1. Constant and variable names should have a combination of letters in lowercase (a to z) or uppercase (**A to Z**) or digits (**0 to 9**) or an underscore **`_`**. For example:
```python
snake_case
MACRO_CASE
camelCase
CapWords
```
2. Create a name that makes sense. For example, **`vowel`** makes more sense than **`v`**.
3. If you want to create a variable name having two words, use underscore to separate them. For example:
```python
my_name
current_salary
```
4. Use capital letters where possible to declare a constant. For example:
```python
PI
G
MASS
SPEED_OF_LIGHT
TEMP
```
5. Never use special symbols like **!**, **@**, **#**, **$**, **%**, etc.
6. Don't start a variable name with a digit.
>**Note**: One of the additions to Python 3 was full Unicode support, which allows for **Unicode** characters in a variable name as well. You will learn about Unicode in greater depth in a future tutorial.
For example, all of the following are valid variable names:
```
name = "Bob"
Age = 54
has_W2 = True
print(name, Age, has_W2)
```
But this one is not, **because a variable name can’t begin with a digit**:
```
1099_filed = False # cannot start name of a variable with a number.
```
Note that case is **significant**. Lowercase and uppercase letters are not the same. Use of the underscore character is significant as well. Each of the following defines a different variable:
```python
>>>age = 1
>>>Age = 2
>>>aGe = 3
>>>AGE = 4
>>>a_g_e = 5
>>>_age = 6
>>>age_ = 7
>>>AGe = 8
>>>print(age, Age, aGe, AGE, a_g_e, _age, age_, AGe)
1 2 3 4 5 6 7 8
```
There is nothing stopping you from creating two different variables in the same program called age and Age, or for that matter agE. But it is probably **ill-advised**. It would certainly be likely to confuse anyone trying to read your code, and even you yourself, after you'd been away from it a while.
```
age = 1
Age = 2
aGe = 3
AGE = 4
a_g_e = 5
_age = 6
age_ = 7
AGe = 8
print(age, Age, aGe, AGE, a_g_e, _age, age_, AGe)
```
## 💻 Exercises ➞ <span class='label label-default'>Variables</span>
### Exercises ➞ <span class='label label-default'>Level 1</span>
1. Write a python comment saying **`Python variables and Constants`**
2. Declare a **`first_name`** variable and assign a value to it
3. Declare a **`last_name`** variable and assign a value to it
4. Declare a **`full_name`** variable and assign a value to it
5. Declare a variable **`is_light_on`** and assign a value to it
6. Declare multiple variable on one line
### Exercises ➞ <span class='label label-default'>Level 2</span>
1. Check the data type of all your variables using **`type()`** built-in function
2. Using the **`len()`** built-in function, find the length of your first name
3. Compare the length of your **`first_name`** and your **`last_name`**
4. Declare **6** as **`num_1`** and **4** as **`num_2`**
1. Add **`num_1`** and **`num_2`** and assign the value to a variable **`total`**
2. Subtract **`num_2`** from **`num_1`** and assign the value to a variable **`difference`**
3. Multiply **`num_2`** and **`num_1`** and assign the value to a variable **`product`**
4. Divide **`num_1`** by **`num_2`** and assign the value to a variable **`division`**
5. Use modulus division to find **`num_2`** divided by **`num_1`** and assign the value to a variable **`remainder`**
6. Calculate **`num_1`** to the power of **`num_2`** and assign the value to a variable **`exp`**
7. Find floor division of **`num_1`** by **`num_2`** and assign the value to a variable **`floor_division`**
5. The radius of a circle is **30 meters**.
1. Calculate the area of a circle and assign the value to a variable name of **`area_of_circle`** by taking user **`input()`**
2. Calculate the circumference of a circle and assign the value to a variable name of **`circum_of_circle`** by taking user **`input()`**
3. Take radius as user **`input()`** and calculate the area.
6. Use the built-in **`input()`** function to get first name, last name, country and age from a user and store the value to their corresponding variable names
7. Run help (**`keywords`**) in Python shell or in your file to check for the Python reserved words or keywords
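For reference, one possible sketch of Level 2, item 4 (check your own answers against it):

```python
num_1 = 6
num_2 = 4

total = num_1 + num_2            # addition
difference = num_1 - num_2       # subtraction
product = num_1 * num_2          # multiplication
division = num_1 / num_2         # true division
remainder = num_2 % num_1        # modulus: num_2 divided by num_1
exp = num_1 ** num_2             # exponentiation
floor_division = num_1 // num_2  # floor division

print(total, difference, product, division, remainder, exp, floor_division)  # 10 2 24 1.5 4 1296 1
```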
| github_jupyter |
# Mini-batching
In its purest form, online machine learning encompasses models which learn with one sample at a time. This is the design which is used in `river`.
The main downside of single-instance processing is that it doesn't scale to big data, at least not in the sense of traditional batch learning. Indeed, processing one sample at a time means that we are unable to fully take advantage of [vectorisation](https://www.wikiwand.com/en/Vectorization) and other computational tools that are taken for granted in batch learning. On top of this, processing a large dataset in `river` essentially involves a Python `for` loop, which might be too slow for some use cases. However, this doesn't mean that `river` is slow. In fact, for processing a single instance, `river` is actually a couple of orders of magnitude faster than libraries such as scikit-learn, PyTorch, and Tensorflow. This is because `river` is designed from the ground up to process a single instance, whereas the majority of other libraries choose to care about batches of data. Both approaches offer different compromises, and the best choice depends on your use case.
In order to propose the best of both worlds, `river` offers some limited support for mini-batch learning. Some of `river`'s estimators implement `*_many` methods on top of their `*_one` counterparts. For instance, `preprocessing.StandardScaler` has a `learn_many` method as well as a `transform_many` method, in addition to `learn_one` and `transform_one`. Each mini-batch method takes as input a `pandas.DataFrame`. Supervised estimators also take as input a `pandas.Series` of target values. We choose to use `pandas.DataFrames` over `numpy.ndarrays` because of the simple fact that the former allows us to name each feature. This in turn allows us to offer a uniform interface for both single instance and mini-batch learning.
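To make the mini-batch interface concrete, here is a library-free sketch of how a method like `learn_many` can keep running statistics across `pandas.DataFrame` chunks. This is only an illustration of the pattern, not river's actual implementation:

```python
import pandas as pd

class RunningMeanScaler:
    """Tracks a running mean per feature, updated one mini-batch at a time."""

    def __init__(self):
        self.counts = {}  # feature name -> number of samples seen
        self.means = {}   # feature name -> running mean

    def learn_many(self, X: pd.DataFrame):
        for col in X.columns:  # named columns give a uniform interface
            n_new = len(X)
            batch_mean = X[col].mean()
            n_old = self.counts.get(col, 0)
            old_mean = self.means.get(col, 0.0)
            # Weighted combination of the old mean and the batch mean.
            self.counts[col] = n_old + n_new
            self.means[col] = (n_old * old_mean + n_new * batch_mean) / (n_old + n_new)
        return self

    def transform_many(self, X: pd.DataFrame) -> pd.DataFrame:
        # Center each named column by its running mean.
        return X - pd.Series(self.means)

scaler = RunningMeanScaler()
scaler.learn_many(pd.DataFrame({"a": [1.0, 2.0, 3.0]}))
scaler.learn_many(pd.DataFrame({"a": [7.0, 7.0]}))
print(scaler.means["a"])  # (1 + 2 + 3 + 7 + 7) / 5 = 4.0
```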
As an example, we will build a simple pipeline that scales the data and trains a logistic regression. Indeed, the `compose.Pipeline` class can be applied to mini-batches, as long as each step is able to do so.
```
from river import compose
from river import linear_model
from river import preprocessing
model = compose.Pipeline(
preprocessing.StandardScaler(),
linear_model.LogisticRegression()
)
```
For this example, we will use `datasets.Higgs`.
```
from river import datasets
dataset = datasets.Higgs()
if not dataset.is_downloaded:
dataset.download()
dataset
```
The easiest way to read the data in a mini-batch fashion is to use the `read_csv` from `pandas`.
```
import pandas as pd
names = [
'target', 'lepton pT', 'lepton eta', 'lepton phi',
'missing energy magnitude', 'missing energy phi',
'jet 1 pt', 'jet 1 eta', 'jet 1 phi', 'jet 1 b-tag',
'jet 2 pt', 'jet 2 eta', 'jet 2 phi', 'jet 2 b-tag',
'jet 3 pt', 'jet 3 eta', 'jet 3 phi', 'jet 3 b-tag',
'jet 4 pt', 'jet 4 eta', 'jet 4 phi', 'jet 4 b-tag',
'm_jj', 'm_jjj', 'm_lv', 'm_jlv', 'm_bb', 'm_wbb', 'm_wwbb'
]
for x in pd.read_csv(dataset.path, names=names, chunksize=8096, nrows=3e5):
y = x.pop('target')
y_pred = model.predict_proba_many(x)
model.learn_many(x, y)
```
If you are familiar with scikit-learn, you might be aware that [some](https://scikit-learn.org/dev/computing/scaling_strategies.html#incremental-learning) of their estimators have a `partial_fit` method, which is similar to river's `learn_many` method. Here are some advantages that river has over scikit-learn:
- We guarantee that river is just as fast, if not faster, than scikit-learn. The differences are negligible, but slightly in favor of river.
- We take as input dataframes, which allows us to name each feature. The benefit is that you can add/remove/permute features between batches and everything will keep working.
- Estimators that support mini-batches also support single instance learning. This means that you can enjoy the best of both worlds. For instance, you can train with mini-batches and use `predict_one` to make predictions.
Note that you can check which estimators can process mini-batches programmatically:
```
import importlib
import inspect
def can_mini_batch(obj):
return hasattr(obj, 'learn_many')
for module in importlib.import_module('river').__all__:
if module in ['datasets', 'synth']:
continue
for name, obj in inspect.getmembers(importlib.import_module(f'river.{module}'), can_mini_batch):
print(name)
```
Because mini-batch learning isn't treated as a first-class citizen, some of river's functionalities require some work in order to play nicely with mini-batches. For instance, the objects from the `metrics` module have an `update` method that takes as input a single pair `(y_true, y_pred)`. This might change in the future, depending on the demand.
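Bridging the single-pair `update` interface to mini-batches is just a loop. A sketch with a toy accuracy metric (illustrative only, not river's `metrics` module):

```python
class Accuracy:
    """Toy metric with a river-style single-pair update method."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, y_true, y_pred):
        self.correct += int(y_true == y_pred)
        self.total += 1
        return self

    def get(self):
        return self.correct / self.total if self.total else 0.0

metric = Accuracy()
y_true_batch = [1, 0, 1, 1]
y_pred_batch = [1, 0, 0, 1]

# Feed the mini-batch through the single-pair interface one element at a time.
for yt, yp in zip(y_true_batch, y_pred_batch):
    metric.update(yt, yp)

print(metric.get())  # 0.75
```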
We plan to promote more models to the mini-batch regime. However, we will only be doing so for the methods that benefit the most from it, as well as those that are most popular. Indeed, `river`'s core philosophy will remain to cater to single instance learning.
| github_jupyter |
```
from models.DistMult import DistMult_Lite
from models.Complex import Complex
from models.ConvE import ConvE, ConvE_args
from utils.loaders import load_data, get_onehots
from utils.evaluation_metrics import SRR, auprc_auroc_ap
import torch
import numpy as np
from sklearn.utils import shuffle
from tqdm import tqdm
from utils.path_manage import get_files
data, lookup, ASD_dictionary, BCE_dictionary, Edge_list, Edge_features, Drug_graph_list, Protein_graph_list = get_files()
entities = int(len(lookup)/2)
# Drug_list = list(set(data[:,0]))
# Protein_list = list(set(data[:,2]))
# Drug_graph_dict = {x : y for x, y in zip(Drug_list, Drug_graph_list)}
# Protein_graph_dict = {x : y for x, y in zip(Protein_list, Protein_graph_list)}
# filtered_data = [x for x in data if not isinstance(Drug_graph_dict[x[0]], str)]
# filtered_data = [x for x in filtered_data if not isinstance(Protein_graph_dict[x[2]], str)]
# data = filtered_data
protein_ids = list(set(data[:,2]))
protein_ids = torch.LongTensor(protein_ids)

number_of_batches = 5
number_of_epochs = 20

shuffled = shuffle(data)
test_data = shuffled[:50]

model = DistMult_Lite(num_entities=entities, embedding_dim=100, num_relations=4)
optimiser = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(number_of_epochs):
    # training stage
    model.train()
    objects, subjects, relationships = load_data(test_data, number_of_batches)
    for index in range(number_of_batches):
        obj = torch.LongTensor(objects[index])
        rel = torch.LongTensor(relationships[index])
        preprocessed_target = torch.squeeze(torch.LongTensor(subjects[index]))
        # map each protein entity id to its position within protein_ids
        target = torch.squeeze(torch.stack([(protein_ids == t).nonzero() for t in preprocessed_target]))
        optimiser.zero_grad()
        pred = model.forward(obj=obj, rel=rel, subj=protein_ids)
        loss = model.loss(pred, target)
        loss.backward()
        optimiser.step()

    # evaluation stage
    model.eval()
    objects, subjects, relationships = load_data(test_data, number_of_batches)
    total_sum_reciprocal_rank = torch.zeros(1)
    for index in range(number_of_batches):
        obj = torch.LongTensor(objects[index])
        rel = torch.LongTensor(relationships[index])
        preprocessed_target = torch.squeeze(torch.LongTensor(subjects[index]))
        target = torch.squeeze(torch.stack([(protein_ids == t).nonzero() for t in preprocessed_target]), 1)
        predictions = model.forward(obj=obj, rel=rel, subj=protein_ids)
        srr = SRR(predictions, target)
        total_sum_reciprocal_rank = total_sum_reciprocal_rank + srr
    print('mean reciprocal rank is...')
    print(total_sum_reciprocal_rank / len(test_data))
from torch_geometric.data import Data  # assumed source of the Data container used below

def get_adj_mask(max_nodes, graph):
    num_nodes = graph.num_nodes
    mask = np.zeros([max_nodes, max_nodes], dtype=bool)
    mask[0:num_nodes, 0:num_nodes] = True  # index the block directly; mask[0:n][0:n] would set whole rows
    adjacency = np.zeros([max_nodes, max_nodes])
    edges = graph.edge_index.T
    for edge in edges:
        adjacency[edge[0]][edge[1]] = 1
        adjacency[edge[1]][edge[0]] = 1  # undirected: mirror each edge
    return Data(x=graph.x, adj=adjacency, mask=mask)  # 'store' was undefined here; use the graph argument
max_graph_size = 2000
graphs_with_masks = []
for store, PDB_key in tqdm(zip(stored_graphs, PDB_keys)):
try:
if store.num_nodes > max_graph_size:
# print(store, ' too big')
graphs_with_masks.append('{} too big'.format(PDB_key))
else:
graphs_with_masks.append(get_adj_mask(max_graph_size, store))
except:
# print(PDB_key, ' missing')
graphs_with_masks.append('{} missing'.format(PDB_key))
```
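The padding logic in `get_adj_mask` can be exercised in isolation: given an edge list, build a dense symmetric adjacency matrix padded to a fixed size together with a boolean mask marking the real block. A numpy-only sketch (the function name is ours):

```python
import numpy as np

def pad_adjacency(edges, num_nodes, max_nodes):
    """Dense symmetric adjacency padded to max_nodes x max_nodes, with a validity mask."""
    adjacency = np.zeros((max_nodes, max_nodes))
    for a, b in edges:
        adjacency[a, b] = 1
        adjacency[b, a] = 1  # undirected graph: mirror each edge
    mask = np.zeros((max_nodes, max_nodes), dtype=bool)
    mask[:num_nodes, :num_nodes] = True  # only the top-left block holds real nodes
    return adjacency, mask

adjacency, mask = pad_adjacency(edges=[(0, 1), (1, 2)], num_nodes=3, max_nodes=5)
print(int(adjacency.sum()))  # 4 entries: each of the 2 edges stored in both directions
```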
| github_jupyter |
# Quick Start Tutorial
The GluonTS toolkit contains components and tools for building time series models using MXNet. The models that are currently included are forecasting models but the components also support other time series use cases, such as classification or anomaly detection.
* Built on MXNet
* Includes forecasting models, and also supports other time series problems such as classification and anomaly detection
The toolkit is not intended as a forecasting solution for businesses or end users; rather, it targets scientists and engineers who want to tweak algorithms or build and experiment with their own models.
GluonTS contains:
* Components for building new models (likelihoods, feature processing pipelines, calendar features etc.)
* Data loading and processing
* A number of pre-built models
* Plotting and evaluation facilities
* Artificial and real datasets (only external datasets with blessed license)
```
# Third-party imports
%matplotlib inline
import mxnet as mx
from mxnet import gluon
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import json
```
## 1. Understanding the data (Datasets)
### GluonTS datasets
GluonTS comes with a number of publicly available datasets.
```
from gluonts.dataset.repository.datasets import get_dataset, dataset_recipes
from gluonts.dataset.util import to_pandas
```
* These are some of the datasets that ship with GluonTS
```
print(f"Available datasets: {list(dataset_recipes.keys())}")
```
To download one of the built-in datasets, simply call get_dataset with one of the above names. GluonTS can re-use the saved dataset so that it does not need to be downloaded again: simply set `regenerate=False`.
* Use the `get_dataset` function to access the datasets listed above
* The `regenerate=False` argument avoids re-downloading the dataset
* Downloaded data is saved under the user's home directory
* For example, in JSON format:
saving time-series into /home/chenkai/.mxnet/gluon-ts/datasets/m4_hourly/train/data.json
saving time-series into /home/chenkai/.mxnet/gluon-ts/datasets/m4_hourly/test/data.json
```
dataset = get_dataset("m4_hourly", regenerate=True)
```
In general, the datasets provided by GluonTS are objects that consists of three main members:
- `dataset.train` is an iterable collection of data entries used for training. Each entry corresponds to one time series
- `dataset.test` is an iterable collection of data entries used for inference. The test dataset is an extended version of the train dataset that contains a window in the end of each time series that was not seen during training. This window has length equal to the recommended prediction length.
- `dataset.metadata` contains metadata of the dataset such as the frequency of the time series, a recommended prediction horizon, associated features, etc.
The download above contains the train and test parts; no separate metadata part is listed.
```
print(dataset.metadata)
dataset.train
entry = next(iter(dataset.train))
train_series = to_pandas(entry)
print(train_series.shape)
print(train_series.head())
train_series.plot()
plt.grid(which="both")
plt.legend(["train series"], loc="upper left")
plt.show()
```
The test data contains 748 - 700 = 48 additional observations:
```
entry = next(iter(dataset.test))
test_series = to_pandas(entry)
test_series.plot()
print(test_series.shape)
print(test_series.head())
print('train_series.index[-1] - the last index entry of the train dataset is: {}'.format(train_series.index[-1]))
plt.axvline(train_series.index[-1], color='r') # end of train dataset
plt.grid(which="both")
plt.legend(["test series", "end of train series"], loc="upper left")
plt.show()
```
* The test dataset contains 48 extra observations; the series frequency is hourly (H), and the recommended prediction length is 48, i.e. `dataset.metadata.prediction_length`
```
print(f"Length of forecasting window in test dataset: {len(test_series) - len(train_series)}")
print(f"Recommended prediction horizon: {dataset.metadata.prediction_length}")
print(f"Frequency of the time series: {dataset.metadata.freq}")
```
### 2. Custom datasets
At this point, it is important to emphasize that GluonTS does not require this specific format for a custom dataset that a user may have. The only requirements for a custom dataset are to be iterable and have a "target" and a "start" field. To make this more clear, assume the common case where a dataset is in the form of a `numpy.array` and the index of the time series in a `pandas.Timestamp` (possibly different for each time series):
* GluonTS does not require a specific format
* A user-defined dataset should be iterable and have "target" and "start" fields
```
N = 10  # number of time series
T = 100  # number of timesteps per series
prediction_length = 24  # prediction length
freq = "1H"  # frequency: one observation per hour
custom_dataset = np.random.normal(size=(N, T))  # build the dataset
start = pd.Timestamp("01-01-2019", freq=freq)  # can be different for each time series; starts 2019-01-01
custom_dataset.shape
```
Now, you can split your dataset and bring it in a GluonTS appropriate format with just two lines of code:
Bring the dataset into the format GluonTS expects:
* Note that train has 10 series, each containing 100 - 24 = 76 points, while each test series keeps the full length of 100 points.
```
custom_dataset[:, :-prediction_length].shape
len([{'target': x, 'start': start} for x in custom_dataset[:, :-prediction_length]])  # 10 train series, each with 100 - 24 = 76 points; test keeps the full 100 points
from gluonts.dataset.common import ListDataset
# train dataset: cut the last window of length "prediction_length", add "target" and "start" fields
train_ds = ListDataset([{'target': x, 'start': start} for x in custom_dataset[:, :-prediction_length]],
freq=freq)
# test dataset: use the whole dataset, add "target" and "start" fields
test_ds = ListDataset([{'target': x, 'start': start} for x in custom_dataset],
freq=freq)
ListDataset?
```
## Training an existing model (`Estimator`)
GluonTS comes with a number of pre-built models. All the user needs to do is configure some hyperparameters. The existing models focus on (but are not limited to) probabilistic forecasting. Probabilistic forecasts are predictions in the form of a probability distribution, rather than simply a single point estimate.
We will begin with GulonTS's pre-built feedforward neural network estimator, a simple but powerful forecasting model. We will use this model to demonstrate the process of training a model, producing forecasts, and evaluating the results.
GluonTS's built-in feedforward neural network (`SimpleFeedForwardEstimator`) accepts an input window of length `context_length` and predicts the distribution of the values of the subsequent `prediction_length` values. In GluonTS parlance, the feedforward neural network model is an example of `Estimator`. In GluonTS, `Estimator` objects represent a forecasting model as well as details such as its coefficients, weights, etc.
In general, each estimator (pre-built or custom) is configured by a number of hyperparameters that can be either common (but not binding) among all estimators (e.g., the `prediction_length`) or specific for the particular estimator (e.g., number of layers for a neural network or the stride in a CNN).
Finally, each estimator is configured by a `Trainer`, which defines how the model will be trained i.e., the number of epochs, the learning rate, etc.
* GluonTS ships with a number of pre-built models; the user only needs to configure some hyperparameters. The pre-built models focus on probabilistic forecasts rather than single point estimates
* Shown here is a simple feedforward neural network from GluonTS
* `SimpleFeedForwardEstimator` parameters: the inputs are `context_length` and `prediction_length`; the model is an `Estimator`
```
from gluonts.model.simple_feedforward import SimpleFeedForwardEstimator
from gluonts.trainer import Trainer
dataset.metadata.prediction_length
estimator = SimpleFeedForwardEstimator(
    num_hidden_dimensions=[10],  # number of nodes in the hidden layer
    prediction_length=dataset.metadata.prediction_length,  # prediction horizon: 48
    context_length=100,  # how many past steps to use as input
    freq=dataset.metadata.freq,  # frequency of the series
trainer=Trainer(ctx="cpu",
epochs=5,
learning_rate=1e-3,
num_batches_per_epoch=100
)
)
```
After specifying our estimator with all the necessary hyperparameters we can train it using our training dataset `dataset.train` by invoking the `train` method of the estimator. The training algorithm returns a fitted model (or a `Predictor` in GluonTS parlance) that can be used to construct forecasts.
* Define all the hyperparameters in the estimator
* Then feed in `dataset.train` to start training and obtain the fitted model
```
predictor = estimator.train(dataset.train)
```
With a predictor in hand, we can now predict the last window of the `dataset.test` and evaluate our model's performance.
GluonTS comes with the `make_evaluation_predictions` function that automates the process of prediction and model evaluation. Roughly, this function performs the following steps:
- Removes the final window of length `prediction_length` of the `dataset.test` that we want to predict
- The estimator uses the remaining data to predict (in the form of sample paths) the "future" window that was just removed
- The module outputs the forecast sample paths and the `dataset.test` (as python generator objects)
```
from gluonts.evaluation.backtest import make_evaluation_predictions
forecast_it, ts_it = make_evaluation_predictions(
dataset=dataset.test, # test dataset
predictor=predictor, # predictor
    num_samples=100,  # number of sample paths we want for evaluation; is it 100 because 100 sample points are used for the prediction?
)
make_evaluation_predictions?
```
First, we can convert these generators to lists to ease the subsequent computations.
```
forecast_it, ts_it
forecasts = list(forecast_it)
tss = list(ts_it)
```
#### Understanding the output
We can examine the first element of these lists (that corresponds to the first time series of the dataset). Let's start with the list containing the time series, i.e., `tss`. We expect the first entry of `tss` to contain the (target of the) first time series of `dataset.test`.
`tss[0]` is simply the original series
```
test_series.head(5)
test_series.shape
tss[0].head()
tss[0].shape
test_series.shape
len(tss)
len(tss[0]),len(tss[1]) # tss[0] is the original data; the list holds one entry per series in dataset.test
# first entry of the time series list
ts_entry = tss[0]
ts_entry.shape
# first 5 values of the time series (convert from pandas to numpy)
np.array(ts_entry[:5]).reshape(-1,)
# first entry of dataset.test
dataset_test_entry = next(iter(dataset.test))
dataset_test_entry['target'].shape
# first 5 values
dataset_test_entry['target'][:5]
```
#### Understanding `forecast`
The entries in the `forecast` list are a bit more complex. They are objects that contain all the sample paths in the form of `numpy.ndarray` with dimension `(num_samples, prediction_length)`, the start date of the forecast, the frequency of the time series, etc. We can access all this information by simply invoking the corresponding attribute of the forecast object.
```
# first entry of the forecast list
forecast_entry = forecasts[0]
print(f"Number of sample paths: {forecast_entry.num_samples}")
print(f"Dimension of samples: {forecast_entry.samples.shape}")
print(f"Start date of the forecast window: {forecast_entry.start_date}")
print(f"Frequency of the time series: {forecast_entry.freq}")
len(forecasts)
forecast_entry.median
forecast_entry.mean_ts
```
* `start_date` coincides with the start of the forecast window (the last `prediction_length` entries of `test_series`)
* Why is the shape 100 × 48? Because `num_samples=100` sample paths are drawn over a `prediction_length` of 48 steps; the 100 is configurable (it could be set to 73, for instance)
```
test_series.tail(48)
```
We can also do calculations to summarize the sample paths, such as computing the mean or a quantile for each of the 48 time steps in the forecast window.
```
print(f"Mean of the future window:\n {forecast_entry.mean}")
print(f"0.5-quantile (median) of the future window:\n {forecast_entry.quantile(0.5)}")
len(forecast_entry.mean),len(forecast_entry.quantile(0.5))
```
`Forecast` objects have a `plot` method that can summarize the forecast paths as the mean, prediction intervals, etc. The prediction intervals are shaded in different colors as a "fan chart".
```
def plot_prob_forecasts(ts_entry, forecast_entry):
plot_length = 250 # number of trailing values of the original series to plot
prediction_intervals = (20.0,90.0) # prediction intervals, in percent
legend = ["observations", "median prediction"] + [f"{k}% prediction interval" for k in prediction_intervals][::-1]
fig, ax = plt.subplots(1, 1, figsize=(10, 7))
ts_entry[-plot_length:].plot(ax=ax) # plot the last plot_length values of the time series
forecast_entry.plot(prediction_intervals=prediction_intervals, color=('r'))
plt.grid(which="both")
plt.legend(legend, loc="upper left")
plt.show()
plot_prob_forecasts(ts_entry, forecast_entry)
```
# Evaluating the model
We can also evaluate the quality of our forecasts numerically. In GluonTS, the `Evaluator` class can compute aggregate performance metrics, as well as metrics per time series (which can be useful for analyzing performance across heterogeneous time series).
```
from gluonts.evaluation import Evaluator
evaluator = Evaluator(quantiles=[0.1, 0.5, 0.9])
agg_metrics, item_metrics = evaluator(iter(tss), iter(forecasts), num_series=len(dataset.test))
```
Aggregate metrics aggregate both across time-steps and across time series.
```
print(json.dumps(agg_metrics, indent=5))
```
Individual metrics are aggregated only across time-steps.
```
item_metrics.head()
item_metrics # 414 rows in total, one row per time series in dataset.test
item_metrics.plot(x='MSIS', y='MASE', kind='scatter')
plt.grid(which="both")
plt.show()
```
## Create your own forecast model
### Defining the training and prediction networks
For creating your own forecast model you need to:
- Define the training and prediction network
- Define a new estimator that specifies any data processing and uses the networks
The training and prediction networks can be arbitrarily complex but they should follow some basic rules:
- Both should have a `hybrid_forward` method that defines what should happen when the network is called
- The training network's `hybrid_forward` should return a **loss** based on the prediction and the true values
- The prediction network's `hybrid_forward` should return the predictions
For example, we can create a simple training network that defines a neural network which takes as input the past values of the time series and outputs a future predicted window of length `prediction_length`. It uses the L1 loss in the `hybrid_forward` method to evaluate the error between the predictions and the true values of the time series. The corresponding prediction network should be identical to the training network in terms of architecture (we achieve this by inheriting the training network class), and its `hybrid_forward` method directly outputs the predictions.
Note that this simple model does only point forecasts by construction, i.e., we train it to output the future values of the time series directly, not any probabilistic view of the future (to achieve that we would train a network to learn a probability distribution and then sample from it to create sample paths).
```
class MyTrainNetwork(gluon.HybridBlock): # inherits from gluon.HybridBlock
def __init__(self, prediction_length, **kwargs):
"""Training network"""
super().__init__(**kwargs)
self.prediction_length = prediction_length
with self.name_scope():
# Set up a 3 layer neural network that directly predicts the target values
self.nn = mx.gluon.nn.HybridSequential()
self.nn.add(mx.gluon.nn.Dense(units=40, activation='relu'))
self.nn.add(mx.gluon.nn.Dense(units=40, activation='relu'))
self.nn.add(mx.gluon.nn.Dense(units=self.prediction_length, activation='softrelu'))
def hybrid_forward(self, F, past_target, future_target):
prediction = self.nn(past_target)
# calculate the L1 loss against future_target (the true future values) to learn the median
return (prediction - future_target).abs().mean(axis=-1)
class MyPredNetwork(MyTrainNetwork):
"""Prediction network"""
# The prediction network only receives past_target and returns predictions
def hybrid_forward(self, F, past_target):
prediction = self.nn(past_target)
return prediction.expand_dims(axis=1)
```
Now, we need to construct the estimator which should also follow some rules:
- It should include a `create_transformation` method that defines all the possible feature transformations and how the data is split during training
- It should include a `create_training_network` method that returns the training network configured with any necessary hyperparameters
- It should include a `create_predictor` method that creates the prediction network, and returns a `Predictor` object
A `Predictor` defines the `predict` method of a given predictor. Roughly, this method takes the test dataset, passes it through the prediction network, and yields the predictions. You can think of the `Predictor` object as a wrapper of the prediction network that defines its `predict` method.
Earlier, we used the `make_evaluation_predictions` to evaluate our predictor. Internally, the `make_evaluation_predictions` function invokes the `predict` method of the predictor to get the forecasts.
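To make the wrapper idea concrete, here is a conceptual sketch in plain Python. The class and the callables are ours for illustration only; they are not the GluonTS API:

```python
# Conceptual sketch of a Predictor: it wraps a prediction network together with
# an input transformation, and its predict method yields one forecast per series.
class ToyPredictor:
    def __init__(self, network, transform):
        self.network = network
        self.transform = transform

    def predict(self, dataset):
        for entry in dataset:
            features = self.transform(entry)  # apply the input transformation
            yield self.network(features)      # run the prediction network

# hypothetical network (a mean "forecast") and transform, for illustration only
predictor = ToyPredictor(
    network=lambda x: sum(x) / len(x),
    transform=lambda entry: entry["target"],
)
forecasts = list(predictor.predict([{"target": [1.0, 2.0, 3.0]}]))
print(forecasts)  # [2.0]
```

The real `RepresentableBlockPredictor` below plays the same role, pairing the trained prediction network with the transformation returned by `create_transformation`.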
```
from gluonts.model.estimator import GluonEstimator
from gluonts.model.predictor import Predictor, RepresentableBlockPredictor
from gluonts.core.component import validated
from gluonts.support.util import copy_parameters
from gluonts.transform import ExpectedNumInstanceSampler, Transformation, InstanceSplitter
from gluonts.dataset.field_names import FieldName
from mxnet.gluon import HybridBlock
class MyEstimator(GluonEstimator):
@validated()
def __init__(
self,
freq: str,
context_length: int,
prediction_length: int,
trainer: Trainer = Trainer()
) -> None:
super().__init__(trainer=trainer)
self.context_length = context_length
self.prediction_length = prediction_length
self.freq = freq
def create_transformation(self):
# Feature transformation that the model uses for input.
# Here we use a transformation that randomly selects training samples from all time series.
return InstanceSplitter(
target_field=FieldName.TARGET,
is_pad_field=FieldName.IS_PAD,
start_field=FieldName.START,
forecast_start_field=FieldName.FORECAST_START,
train_sampler=ExpectedNumInstanceSampler(num_instances=1),
past_length=self.context_length,
future_length=self.prediction_length,
)
def create_training_network(self) -> MyTrainNetwork:
return MyTrainNetwork(
prediction_length=self.prediction_length
)
def create_predictor(
self, transformation: Transformation, trained_network: HybridBlock
) -> Predictor:
prediction_network = MyPredNetwork(
prediction_length=self.prediction_length
)
copy_parameters(trained_network, prediction_network)
return RepresentableBlockPredictor(
input_transform=transformation,
prediction_net=prediction_network,
batch_size=self.trainer.batch_size,
freq=self.freq,
prediction_length=self.prediction_length,
ctx=self.trainer.ctx,
)
```
Now, we can repeat the same pipeline as with the pre-built model: train the predictor, create the forecasts, and evaluate the results.
```
estimator = MyEstimator(
prediction_length=dataset.metadata.prediction_length,
context_length=100,
freq=dataset.metadata.freq,
trainer=Trainer(ctx="cpu",
epochs=5,
learning_rate=1e-3,
num_batches_per_epoch=100
)
)
predictor = estimator.train(dataset.train)
forecast_it, ts_it = make_evaluation_predictions(
dataset=dataset.test,
predictor=predictor,
num_samples=100
)
forecasts = list(forecast_it)
tss = list(ts_it)
plot_prob_forecasts(tss[0], forecasts[0])
```
Observe that we cannot actually see any prediction intervals in the predictions. This is expected, since the model we defined does not do probabilistic forecasting: it gives only point estimates, not interval predictions. By requesting 100 sample paths (defined in `make_evaluation_predictions`) from such a network, we get the same output 100 times.
```
evaluator = Evaluator(quantiles=[0.1, 0.5, 0.9])
agg_metrics, item_metrics = evaluator(iter(tss), iter(forecasts), num_series=len(dataset.test))
print(json.dumps(agg_metrics, indent=4))
item_metrics.head(10)
item_metrics.plot(x='MSIS', y='MASE', kind='scatter')
plt.grid(which="both")
plt.show()
```
# Mixup data augmentation
```
from fastai.gen_doc.nbdoc import *
from fastai.callbacks.mixup import *
from fastai.vision import *
from fastai import *
```
## What is Mixup?
This module contains the implementation of a data augmentation technique called [Mixup](https://arxiv.org/abs/1710.09412). It is extremely efficient at regularizing models in computer vision (we used it to cut the time to train CIFAR10 to 94% accuracy on one GPU down to 6 minutes).
As the name kind of suggests, the authors of the mixup article propose to train the model on a mix of the pictures of the training set. Let's say we're on CIFAR10, for instance; then instead of feeding the model the raw images, we take two (which could be in the same class or not) and do a linear combination of them. In terms of tensors, it's
`new_image = t * image1 + (1-t) * image2`
where t is a float between 0 and 1. Then the target we assign to that image is the same combination of the original targets:
`new_target = t * target1 + (1-t) * target2`
assuming your targets are one-hot encoded (which usually isn't the case in pytorch). And it's as simple as that.
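As a minimal numpy sketch of this combination (the array contents and the value of `t` are ours, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
image1, image2 = rng.random((4, 4)), rng.random((4, 4))  # two toy "images"
target1 = np.array([1.0, 0.0])  # one-hot: dog
target2 = np.array([0.0, 1.0])  # one-hot: cat

t = 0.7
new_image = t * image1 + (1 - t) * image2    # linear combination of the pixels
new_target = t * target1 + (1 - t) * target2  # same combination of the targets
print(new_target)  # [0.7 0.3], i.e. 70% dog and 30% cat
```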

Dog or cat? The right answer here is 70% dog and 30% cat!
As the picture above shows, it’s a bit hard for a human eye to comprehend the pictures obtained (although we do see the shapes of a dog and a cat) but somehow, it makes a lot of sense to the model which trains more efficiently. The final loss (training or validation) will be higher than when training without mixup even if the accuracy is far better, which means that a model trained like this will make predictions that are a bit less confident.
## Basic Training
To test this method, we will first build a [`simple_cnn`](/layers.html#simple_cnn) and train it like we did with [`basic_train`](/basic_train.html#basic_train) so we can compare its results with a network trained with Mixup.
```
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
model = simple_cnn((3,16,16,2))
learn = Learner(data, model, metrics=[accuracy])
learn.fit(8)
```
## Mixup implementation in the library
In the original article, the authors suggested four things:
1. Create two separate dataloaders and draw a batch from each at every iteration to mix them up
2. Draw a t value following a beta distribution with a parameter alpha (0.4 is suggested in their article)
3. Mix up the two batches with the same value t.
4. Use one-hot encoded targets
The implementation of this module is based on these suggestions, but it was modified where experiments suggested changes with a positive impact on performance.
The authors suggest using a beta distribution with both parameters equal to the same alpha. Why do they suggest this? Well, it looks like this:

so there is a very high probability of picking values close to 0 or 1 (in which case the mixed image is almost entirely from one category) and a somewhat constant probability of picking something in the middle (0.33 about as likely as 0.5, for instance).
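A quick numpy sketch illustrating this shape. The alpha value is the article's suggested 0.4; the seed, sample count, and threshold are our choices:

```python
import numpy as np

# Draw many mixup coefficients from Beta(alpha, alpha) with alpha = 0.4
rng = np.random.default_rng(42)
t = rng.beta(0.4, 0.4, size=10_000)

# A sizeable share of the draws lands within 0.1 of 0 or 1, matching the
# U-shaped density pictured above.
near_edges = np.mean((t < 0.1) | (t > 0.9))
print(f"fraction within 0.1 of an edge: {near_edges:.2f}")
```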
While this works very well, it's not the fastest option, and this is the first suggestion we will adjust. The main thing that slows the process down is wanting two different batches at every iteration (which means loading twice the number of images and applying the other data augmentation functions to them). To avoid this slowdown, we can be a little smarter and mix up a batch with a shuffled version of itself (this way the mixed-up images are still different).
Using the same parameter t for the whole batch is another suggestion we will modify. In our experiments, we noticed that the model can train faster if we draw a different `t` for every image in the batch (both options get to the same result in terms of accuracy, it’s just that one arrives there more slowly).
The last trick we have to apply is that this strategy can create duplicates: let's say we decide to mix `image0` with `image1` and then `image1` with `image0`, and that we draw `t=0.1` for the first and `t=0.9` for the second. Then
`image0 * 0.1 + shuffle0 * (1-0.1) = image0 * 0.1 + image1 * 0.9`
and
`image1 * 0.9 + shuffle1 * (1-0.9) = image1 * 0.9 + image0 * 0.1`
will be the same. Of course we would have to be a bit unlucky, but in practice we saw a drop in accuracy when using this strategy without removing those duplicates. To avoid them, the trick is to replace the vector of parameters `t` we drew with:
`t = max(t, 1-t)`
A beta distribution with both parameters equal is symmetric in any case, and this way we ensure that the biggest coefficient is always applied to the first image (the non-shuffled batch).
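Putting the shuffled-batch idea and the `t = max(t, 1-t)` trick together, here is a hedged numpy sketch (batch contents and sizes are arbitrary, and this is our illustration, not fastai's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
batch = rng.random((8, 3))                # a toy batch of 8 flattened "images"
t = rng.beta(0.4, 0.4, size=len(batch))   # one coefficient per image
t = np.maximum(t, 1 - t)                  # symmetrize: biggest weight stays on the original
perm = rng.permutation(len(batch))        # shuffled version of the same batch
mixed = t[:, None] * batch + (1 - t)[:, None] * batch[perm]
# after symmetrizing, every original image dominates its mixed version
assert (t >= 0.5).all()
```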
## Adding Mixup to the Mix
Now we will add [`MixUpCallback`](/callbacks.mixup.html#MixUpCallback) to our Learner so that it modifies our input and target accordingly. The [`mixup`](/train.html#mixup) function does that for us behind the scene, with a few other tweaks detailed below.
```
model = simple_cnn((3,16,16,2))
learner = Learner(data, model, metrics=[accuracy]).mixup()
learner.fit(8)
```
Training the net with Mixup improves the best accuracy. Note that the validation loss is higher than without MixUp, because the model makes less confident predictions: without mixup, most predictions are very close to 0 or 1 (in terms of probability), whereas the model with MixUp gives more nuanced predictions. Be sure to know which objective you want to optimize (lower loss or better accuracy) before using it.
```
show_doc(MixUpCallback, doc_string=False)
```
Create a [`Callback`](/callback.html#Callback) for mixup on `learn` with a parameter `alpha` for the beta distribution. `stack_x` and `stack_y` determine whether we stack our inputs/targets with the drawn vector lambda or take the linear combination (in general, we stack the inputs or outputs when they correspond to categories or classes and take the linear combination otherwise).
```
show_doc(MixUpCallback.on_batch_begin, doc_string=False)
```
Draws a vector of lambda following a beta distribution with `self.alpha` and operates the mixup on `last_input` and `last_target` according to `self.stack_x` and `self.stack_y`.
## Dealing with the loss
We often have to modify the loss so that it is compatible with Mixup: pytorch was very careful to avoid one-hot encoding targets when it could, so it seems a bit of a drag to undo this. Fortunately for us, if the loss is a classic [cross-entropy](https://pytorch.org/docs/stable/nn.html#torch.nn.functional.cross_entropy), we have
`loss(output, new_target) = t * loss(output, target1) + (1-t) * loss(output, target2)`
so we won’t one-hot encode anything and just compute those two losses then do the linear combination.
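We can check this identity numerically with a hand-rolled cross-entropy. This is a verification sketch under our own toy values, not fastai's `MixUpLoss`:

```python
import numpy as np

def cross_entropy(logits, target_idx):
    # negative log-softmax probability of the target class
    logp = logits - np.log(np.exp(logits).sum())
    return -logp[target_idx]

logits = np.array([2.0, -1.0, 0.5])
t = 0.3

# linear combination of the two hard-target losses
mixed_loss = t * cross_entropy(logits, 0) + (1 - t) * cross_entropy(logits, 2)

# cross-entropy against the soft target t*onehot(0) + (1-t)*onehot(2)
soft_target = np.array([t, 0.0, 1 - t])
logp = logits - np.log(np.exp(logits).sum())
assert np.isclose(mixed_loss, -(soft_target * logp).sum())
```

Because cross-entropy is linear in the target, the two quantities agree exactly, which is why the library can skip one-hot encoding entirely.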
The following class is used to adapt the loss to mixup. Note that the [`mixup`](/train.html#mixup) function will use it to change the `Learner.loss_func` if necessary.
```
show_doc(MixUpLoss, doc_string=False, title_level=3)
```
Create a loss function from `crit` that is compatible with MixUp.
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(MixUpLoss.forward)
```