<div class="devsite-table-wrapper"><table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tfx/tutorials/transform/simple">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/tfx/blob/master/docs/tutorials/transform/simple.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/tfx/blob/master/docs/tutorials/transform/simple.ipynb">
<img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
</table></div>
##### Copyright © 2020 Google Inc.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Preprocess data with TensorFlow Transform
***The Feature Engineering Component of TensorFlow Extended (TFX)***
This example colab notebook provides a very simple example of how <a target='_blank' href='https://www.tensorflow.org/tfx/transform/'>TensorFlow Transform (<code>tf.Transform</code>)</a> can be used to preprocess data using exactly the same code for both training a model and serving inferences in production.
TensorFlow Transform is a library for preprocessing input data for TensorFlow, including creating features that require a full pass over the training dataset. For example, using TensorFlow Transform you could:
* Normalize an input value by using the mean and standard deviation
* Convert strings to integers by generating a vocabulary over all of the input values
* Convert floats to integers by assigning them to buckets, based on the observed data distribution
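The bucketing case in the last bullet can be sketched in plain Python. This is a hypothetical helper for illustration only, not part of tf.Transform (the library's `tft.bucketize` handles this for you, including computing the boundaries):

```python
import bisect

def bucketize(value, boundaries):
    """Map a float to an integer bucket index, given sorted bucket boundaries."""
    # In practice the boundaries would come from the observed data
    # distribution, e.g. quantiles computed in a full pass over the data.
    return bisect.bisect_right(boundaries, value)

print(bucketize(1.0, [2.0, 4.0, 6.0]))  # bucket 0
print(bucketize(5.0, [2.0, 4.0, 6.0]))  # bucket 2
```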
TensorFlow has built-in support for manipulations on a single example or a batch of examples. `tf.Transform` extends these capabilities to support full passes over the entire training dataset.
The output of `tf.Transform` is exported as a TensorFlow graph which you can use for both training and serving. Using the same graph for both training and serving can prevent skew, since the same transformations are applied in both stages.
### Upgrade Pip
To avoid upgrading Pip in a system when running locally, check to make sure that we're running in Colab. Local systems can of course be upgraded separately.
```
try:
  import colab
  !pip install --upgrade pip
except:
  pass
```
### Install TensorFlow Transform
**Note: In Google Colab, because of package updates, the first time you run this cell you may need to restart the runtime (Runtime > Restart runtime ...).**
```
!pip install -q -U tensorflow_transform==0.24.1
```
## Did you restart the runtime?
If you are using Google Colab, the first time that you run the cell above, you must restart the runtime (Runtime > Restart runtime ...). This is because of the way that Colab loads packages.
## Imports
```
import pprint
import tempfile
import tensorflow as tf
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import schema_utils
```
## Data: Create some dummy data
We'll create some simple dummy data for our simple example:
* `raw_data` is the initial raw data that we're going to preprocess
* `raw_data_metadata` contains the schema that tells us the types of each of the columns in `raw_data`. In this case, it's very simple.
```
raw_data = [
    {'x': 1, 'y': 1, 's': 'hello'},
    {'x': 2, 'y': 2, 's': 'world'},
    {'x': 3, 'y': 3, 's': 'hello'}
]
raw_data_metadata = dataset_metadata.DatasetMetadata(
    schema_utils.schema_from_feature_spec({
        'y': tf.io.FixedLenFeature([], tf.float32),
        'x': tf.io.FixedLenFeature([], tf.float32),
        's': tf.io.FixedLenFeature([], tf.string),
    }))
```
## Transform: Create a preprocessing function
The _preprocessing function_ is the most important concept of tf.Transform. A preprocessing function is where the transformation of the dataset really happens. It accepts and returns a dictionary of tensors, where a tensor means a <a target='_blank' href='https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/Tensor'><code>Tensor</code></a> or <a target='_blank' href='https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/SparseTensor'><code>SparseTensor</code></a>. There are two main groups of API calls that typically form the heart of a preprocessing function:
1. **TensorFlow Ops:** Any function that accepts and returns tensors, which usually means TensorFlow ops. These add TensorFlow operations to the graph that transforms raw data into transformed data one feature vector at a time. These will run for every example, during both training and serving.
2. **TensorFlow Transform Analyzers/Mappers:** Any of the analyzers/mappers provided by tf.Transform. These also accept and return tensors, and typically contain a combination of TensorFlow ops and Beam computation, but unlike TensorFlow ops they only run in the Beam pipeline during analysis, which requires a full pass over the entire training dataset. The Beam computation runs only once, during training, and typically makes a full pass over the entire training dataset. It creates tensor constants, which are added to your graph. For example, `tft.min` computes the minimum of a tensor over the training dataset, while `tft.scale_by_min_max` first computes the min and max of a tensor over the training dataset and then scales the tensor to be within a user-specified range, `[output_min, output_max]`. tf.Transform provides a fixed set of such analyzers/mappers, but this will be extended in future versions.
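As a mental model only (plain Python, not the actual tf.Transform API), the analyze/transform split looks like this: the analyzer makes one full pass over the training data to produce constants, and the per-example transform then uses those frozen constants at both training and serving time:

```python
def analyze(dataset):
    # Full pass over the training data, run once (analogous to tft.min / tft.max).
    xs = [example['x'] for example in dataset]
    return {'x_min': min(xs), 'x_max': max(xs)}

def transform(example, constants):
    # Per-example op using the frozen constants (analogous to
    # tft.scale_by_min_max with output range [0, 1]).
    lo, hi = constants['x_min'], constants['x_max']
    return {'x_scaled': (example['x'] - lo) / (hi - lo)}

constants = analyze([{'x': 1.0}, {'x': 2.0}, {'x': 3.0}])
print(transform({'x': 2.0}, constants))  # {'x_scaled': 0.5}
```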
Caution: When you apply your preprocessing function to serving inferences, the constants that were created by analyzers during training do not change. If your data has trend or seasonality components, plan accordingly.
Note: The `preprocessing_fn` is not directly callable. This means that
calling `preprocessing_fn(raw_data)` will not work. Instead, it must
be passed to the Transform Beam API as shown in the following cells.
```
def preprocessing_fn(inputs):
    """Preprocess input columns into transformed columns."""
    x = inputs['x']
    y = inputs['y']
    s = inputs['s']
    x_centered = x - tft.mean(x)
    y_normalized = tft.scale_to_0_1(y)
    s_integerized = tft.compute_and_apply_vocabulary(s)
    x_centered_times_y_normalized = (x_centered * y_normalized)
    return {
        'x_centered': x_centered,
        'y_normalized': y_normalized,
        's_integerized': s_integerized,
        'x_centered_times_y_normalized': x_centered_times_y_normalized,
    }
```
## Putting it all together
Now we're ready to transform our data. We'll use Apache Beam with a direct runner, and supply three inputs:
1. `raw_data` - The raw input data that we created above
2. `raw_data_metadata` - The schema for the raw data
3. `preprocessing_fn` - The function that we created to do our transformation
<aside class="key-term"><b>Key Term:</b> <a target='_blank' href='https://beam.apache.org/'>Apache Beam</a> uses a <a target='_blank' href='https://beam.apache.org/documentation/programming-guide/#applying-transforms'>special syntax to define and invoke transforms</a>. For example, in this line:
<code><blockquote>result = pass_this | 'name this step' >> to_this_call</blockquote></code>
The method <code>to_this_call</code> is being invoked and passed the object called <code>pass_this</code>, and <a target='_blank' href='https://stackoverflow.com/questions/50519662/what-does-the-redirection-mean-in-apache-beam-python'>this operation will be referred to as <code>name this step</code> in a stack trace</a>. The result of the call to <code>to_this_call</code> is returned in <code>result</code>. You will often see stages of a pipeline chained together like this:
<code><blockquote>result = apache_beam.Pipeline() | 'first step' >> do_this_first() | 'second step' >> do_this_last()</blockquote></code>
and since that started with a new pipeline, you can continue like this:
<code><blockquote>next_result = result | 'doing more stuff' >> another_function()</blockquote></code></aside>
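The `|` and `>>` syntax above is ordinary Python operator overloading; a toy sketch (hypothetical classes, not Beam's real implementation) shows the mechanics:

```python
class Transform:
    def __init__(self, fn):
        self.fn = fn
        self.label = None

    def __rrshift__(self, label):
        # Enables `'step name' >> transform`; `>>` binds tighter than `|`,
        # so the label attaches to the transform before the pipe applies it.
        self.label = label
        return self

class PValue:
    def __init__(self, value):
        self.value = value

    def __or__(self, transform):
        # Enables `pvalue | transform`, returning a new PValue for chaining.
        return PValue(transform.fn(self.value))

result = PValue([1, 2, 3]) | 'double' >> Transform(lambda xs: [2 * x for x in xs])
print(result.value)  # [2, 4, 6]
```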
```
def main():
    # Ignore the warnings
    with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
        transformed_dataset, transform_fn = (  # pylint: disable=unused-variable
            (raw_data, raw_data_metadata) | tft_beam.AnalyzeAndTransformDataset(
                preprocessing_fn))
    transformed_data, transformed_metadata = transformed_dataset  # pylint: disable=unused-variable
    print('\nRaw data:\n{}\n'.format(pprint.pformat(raw_data)))
    print('Transformed data:\n{}'.format(pprint.pformat(transformed_data)))

if __name__ == '__main__':
    main()
```
## Is this the right answer?
Previously, we used `tf.Transform` to do this:
```
x_centered = x - tft.mean(x)
y_normalized = tft.scale_to_0_1(y)
s_integerized = tft.compute_and_apply_vocabulary(s)
x_centered_times_y_normalized = (x_centered * y_normalized)
```
#### x_centered
With input of `[1, 2, 3]` the mean of x is 2, and we subtract it from x to center our x values at 0. So our result of `[-1.0, 0.0, 1.0]` is correct.
#### y_normalized
We wanted to scale our y values between 0 and 1. Our input was `[1, 2, 3]` so our result of `[0.0, 0.5, 1.0]` is correct.
#### s_integerized
We wanted to map our strings to indexes in a vocabulary, and there were only 2 words in our vocabulary ("hello" and "world"). So with input of `["hello", "world", "hello"]` our result of `[0, 1, 0]` is correct. Since "hello" occurs most frequently in this data, it will be the first entry in the vocabulary.
#### x_centered_times_y_normalized
We wanted to create a new feature by crossing `x_centered` and `y_normalized` using multiplication. Note that this multiplies the results, not the original values, and our new result of `[-0.0, 0.0, 1.0]` is correct.
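The four checks above can be double-checked with plain Python arithmetic on the raw values (no TensorFlow needed):

```python
x = [1.0, 2.0, 3.0]
y = [1.0, 2.0, 3.0]

mean_x = sum(x) / len(x)              # what tft.mean(x) computes: 2.0
x_centered = [v - mean_x for v in x]  # [-1.0, 0.0, 1.0]

y_min, y_max = min(y), max(y)         # what tft.scale_to_0_1 needs
y_normalized = [(v - y_min) / (y_max - y_min) for v in y]  # [0.0, 0.5, 1.0]

product = [a * b for a, b in zip(x_centered, y_normalized)]
print(x_centered, y_normalized, product)  # last is [-0.0, 0.0, 1.0]
```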
# Machine Learning Engineer Nanodegree
## Model Evaluation and Validation
## Project 1: Predicting Boston Housing Prices
Welcome to the Predicting Boston Housing Prices project! In this file, some example code has already been provided for you, but you will need to implement additional functionality for the project to run successfully. You will not need to modify the included code beyond what is requested. Sections that begin with **Coding Exercise** indicate that the following block contains functionality you must implement. Each section comes with detailed instructions, and the parts you must implement are marked with a **TODO** in the comments. Please read all the hints carefully!
In addition to implementing code, you **must** answer questions related to the project and your implementation. Each question you need to answer is headed with **'Question X'**. Read each question carefully and provide a complete answer in the **'Answer'** text box that follows it. Your project will be graded on your answers to the questions and the functionality of your code.
>**Tip:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by double-clicking the cell.
---
## Step 1. Import the Data
In this project, you will use data about houses in suburban Boston, Massachusetts, to train and test a model, and to evaluate its performance and predictive power. A model trained on this data that generalizes well can be used to make certain predictions about a home, in particular its monetary value. A model like this would prove valuable to someone like a real estate agent in their day-to-day work.
The dataset for this project comes from the [UCI Machine Learning Repository (the dataset is no longer available online)](https://archive.ics.uci.edu/ml/datasets.html). The Boston housing data was collected in 1978, and each of the 506 entries represents aggregated data on 14 features of homes from various suburbs of Boston. For this project, the original dataset has been preprocessed as follows:
- 16 data points with a `'MEDV'` value of 50.0 have been removed, as they likely contain **missing** or **censored** values.
- 1 data point with an `'RM'` value of 8.78 has been removed as an outlier.
- Only the features `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MEDV'` are essential for this project; the remaining irrelevant features have been removed.
- The `'MEDV'` feature has been scaled as necessary to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with a few of the Python libraries required for this project. You will know the dataset loaded successfully if its size is reported.
```
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.model_selection import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print("Boston housing dataset has {} data points with {} variables each.".format(*data.shape))
```
---
## Step 2. Analyze the Data
In the first part of the project, you will make a cursory investigation of the Boston housing data and provide your observations. Familiarizing yourself with the data through exploration will help you better understand and justify your results.
Since the ultimate goal of this project is to build a model that predicts home values, we need to separate the dataset into **features** and the **target variable**.
- The **features**, `'RM'`, `'LSTAT'`, and `'PTRATIO'`, give us quantitative information about each data point.
- The **target variable**, `'MEDV'`, is the variable we want to predict.
They are stored in the variables `features` and `prices`, respectively.
### Coding Exercise 1: Basic Statistics
Your first coding exercise is to compute descriptive statistics about the Boston housing prices. `NumPy` has already been imported for you; use it to perform the necessary calculations. These statistics will be very important later on for analyzing your model's predictions.
In the code below, you need to:
- Compute the minimum, maximum, mean, median, and standard deviation of `'MEDV'` in `prices`;
- Store each result in its corresponding variable.
```
# TODO: Minimum price of the data
minimum_price = np.min(prices)
# TODO: Maximum price of the data
maximum_price = np.max(prices)
# TODO: Mean price of the data
mean_price = np.mean(prices)
# TODO: Median price of the data
median_price = np.median(prices)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices)
# Show the calculated statistics
print("Statistics for Boston housing dataset:\n")
print("Minimum price: ${:.2f}".format(minimum_price))
print("Maximum price: ${:.2f}".format(maximum_price))
print("Mean price: ${:.2f}".format(mean_price))
print("Median price ${:.2f}".format(median_price))
print("Standard deviation of prices: ${:.2f}".format(std_price))
```
### Question 1 - Feature Observation
As stated earlier, we are focusing on three of the values: `'RM'`, `'LSTAT'`, and `'PTRATIO'`. For each data point:
- `'RM'` is the average number of rooms among homes in the neighborhood;
- `'LSTAT'` is the percentage of homeowners in the neighborhood considered "lower class" (working poor);
- `'PTRATIO'` is the ratio of students to teachers in the neighborhood's primary and secondary schools (`students/teachers`).
_Using your intuition, for each of the three features above, do you think an increase in its value would lead to an **increase** or a **decrease** in the value of `'MEDV'`? Justify each answer._
**Hint:** Would you expect a home with an `'RM'` value of 6 to be worth more or less than a home with an `'RM'` value of 7?
### Question 1 - Answer:
I expect `'MEDV'` to increase as `'RM'` increases: more rooms mean a larger, and therefore more expensive, house. As `'LSTAT'` increases, `'MEDV'` should decrease: more low-income residents mean lower tax revenue and fewer neighborhood amenities, and therefore lower home prices. As `'PTRATIO'` increases, `'MEDV'` should decrease: a higher student-to-teacher ratio means fewer teachers per student and lower educational quality, which also pushes home prices down.
---
## Step 3. Build a Model
In the third step of the project, you will develop the tools and techniques necessary for your model to make predictions. Being able to make accurate measurements of each model's performance with these tools and techniques greatly reinforces the confidence in your predictions.
### Coding Exercise 2: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether through calculating some type of error or goodness of fit. For this project, you will be calculating the [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), $R^2$, to quantify your model's performance. The coefficient of determination for a model is a very common statistic in regression analysis and is often used as a measure of a model's predictive power.
The values of $R^2$ range from 0 to 1, capturing the percentage of squared correlation between the predicted and actual values of the **target variable**. A model with an $R^2$ of 0 is no better than always predicting the **mean** of the target variable, whereas a model with an $R^2$ of 1 perfectly predicts the target variable. A value between 0 and 1 indicates what percentage of the target variable, using the model, can be explained by the **features**. A model can also be given a negative $R^2$, which indicates that the model's predictions are sometimes far worse than simply predicting the mean of the target variable.
In the `performance_metric` function below, you need to:
- Use [`r2_score`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html) from `sklearn.metrics` to compute the $R^2$ value between `y_true` and `y_predict` as a measure of performance;
- Store the score in the `score` variable.
```
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score

def performance_metric(y_true, y_predict):
    """ Calculates and returns the performance score between
    true and predicted values based on the metric chosen. """
    # TODO: Calculate the performance score between 'y_true' and 'y_predict'
    score = r2_score(y_true, y_predict)
    # Return the score
    return score
```
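Under the hood, `r2_score` computes the coefficient of determination from sums of squares, $R^2 = 1 - SS_{res}/SS_{tot}$. A minimal hand-rolled sketch in plain Python (for intuition only; use `r2_score` in the exercise):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean_y) ** 2 for y in y_true)             # total variance
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # residual error
    return 1 - ss_res / ss_tot

# The five-point example from Question 2:
print(round(r_squared([3.0, -0.5, 2.0, 7.0, 4.2],
                      [2.5, 0.0, 2.1, 7.8, 5.3]), 3))
```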
### Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Predicted Value |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
*Would you consider this model to have successfully captured the variation of the target variable? Why or why not?*
**Hint 1:** Run the code below to use the `performance_metric` function and compute the coefficient of determination for `y_true` and `y_predict`.
**Hint 2:** The $R^2$ score is the proportion of the variance in the dependent variable that is predictable from the independent variable(s). In other words:
* An $R^2$ of 0 means the dependent variable cannot be predicted from the independent variable.
* An $R^2$ of 1 means the dependent variable can be perfectly predicted from the independent variable.
* An $R^2$ between 0 and 1 indicates the extent to which the dependent variable is predictable.
* An $R^2$ of 0.40 means that 40% of the variance in Y is predictable from X.
```
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("Model has a coefficient of determination, R^2, of {:.3f}.".format(score))
```
### Question 2 - Answer:
Yes, the model captures the variation of the target variable reasonably well: `performance_metric` reports an $R^2$ of 0.923 for these five points, meaning that about 92% of the variance in the true values is explained by the predictions.
### Coding Exercise 3: Shuffle and Split the Data
Next, you will split the Boston housing dataset into training and testing subsets. Typically, the data is also shuffled during this process to remove any bias caused by the ordering of the dataset.
In the code below, you need to:
* Use `train_test_split` from `sklearn.model_selection` to split both `features` and `prices` into training and testing subsets.
  - Use an 80/20 split: 80% of the data for training and 20% for testing;
  - Set the `random_state` of `train_test_split` to a value of your choice, which ensures results are consistent;
* Assign the split to `X_train`, `X_test`, `y_train`, and `y_test`.
```
# TODO: Import 'train_test_split'
from sklearn.model_selection import train_test_split
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=0)
# Success
print("Training and testing split was successful.")
```
### Question 3 - Training and Testing
*What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?*
*What could go wrong with testing a model on data it has already seen, such as a subset of the training data?*
**Hint:** What problems would arise if we had no data to test the model on?
### Question 3 - Answer:
Splitting the dataset into training and testing subsets lets us immediately validate the reliability of the learning algorithm: if the model overfits or underfits, we can detect it quickly.
If we test on data the model has already seen, such as part of the training set, problems with the algorithm are masked and we get a falsely optimistic measure of reliability; faced with genuinely new data, we would have no way of judging how reliable the model is.
Without any data to test the model on, we would have no idea whether the model actually reflects reality.
---
## Step 4. Analyze Model Performance
In the fourth step of the project, you will look at a model's performance on the training and validation sets for varying parameter values. Here, we focus on one specific algorithm (a decision tree with pruning, although the algorithm itself is not the point of this project) and one of its parameters, `'max_depth'`. We train on the full training set with different values of `'max_depth'` and observe how this parameter affects model performance. Plotting a model's performance is very helpful for this kind of analysis.
### Learning Curves
The code cell below produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and validation as the size of the training set increases, scored using the coefficient of determination, $R^2$. The shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation).
Run the code cell below and use the resulting graphs to answer the question that follows.
```
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
```
### Question 4 - Learning Curves
* Choose one of the graphs above and state its maximum depth.
* What happens to the score of the training curve as more training points are added? What about the validation curve?
* Would having more training points benefit the model?
**Hint:** Do the learning curves converge to a particular score? In general, the more data you have, the better your model performs. But if your training and validation curves are converging at a score above the benchmark threshold, is more data necessary? Think about the pros and cons of adding more training points, given that the training and validation curves have already converged.
### Question 4 - Answer:
In the graph with a maximum depth of 3, the training curve's score gradually decreases as more samples are added, while the validation curve's score gradually increases. With more samples, the training score would not improve, but the validation curve would keep moving toward the training curve, until the two nearly converge.
### Complexity Curves
The code cell below produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph contains two curves, one for the training set and one for the validation set. Similar to the **learning curves**, the shaded regions of both curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the `performance_metric` function.
**Run the code cell below and use the resulting graph to answer Questions 5 and 6.**
```
vs.ModelComplexity(X_train, y_train)
```
### Question 5 - Bias-Variance Tradeoff
* When the model is trained with a maximum depth of 1, does it suffer from high bias or high variance?
* What about when the model is trained with a maximum depth of 10?
* What visual cues in the graph justify your conclusions?
**Hint:** High bias indicates underfitting (the model is too simple), while high variance indicates overfitting (the model is too complex and fails to generalize). Consider which model (depth 1 or 10) corresponds to which situation, and weigh bias against variance.
### Question 5 - Answer:
At a maximum depth of 1, the model suffers from high bias: even the training score is low, which indicates underfitting. At a maximum depth of 10, the model suffers from high variance, because it is overfitting. Visually, in the high-bias case the training and validation scores are close together and both low; in the high-variance case the training score is markedly higher than the validation score, and the validation score may even decrease as complexity grows.
### Question 6 - Best-Guess Optimal Model
* Based on the graph from Question 5, which maximum depth do you think results in a model that best generalizes to unseen data?
* What intuition leads you to this answer?
**Hint:** Look at the graph above Question 5 and check the validation scores of the model at the various depths. Does the model keep improving as depth increases? At what depth do we get the best validation score without overcomplicating the model? Remember Occam's razor: "Among competing hypotheses, the one with the fewest assumptions should be selected."
### Question 6 - Answer:
I would choose a maximum depth of about 4, since that is where the validation score peaks. Following Occam's razor and preferring the simpler model, a maximum depth of 3 would also be a reasonable choice.
---
## Step 5. Evaluate Model Performance
In this final part of the project, you will construct a model and use the optimized model from `fit_model` to make predictions on a client's feature set.
### Question 7 - Grid Search
* What is the grid search technique?
* How can it be used to optimize a model?
**Hint:** When explaining the grid search algorithm, first make sure you understand why we use it and what our ultimate goal in using it is. To make your answer more convincing, you can also give an example of a model parameter that can be optimized using this technique.
### Question 7 - Answer:
Grid search is a technique that exhaustively enumerates combinations of candidate parameter values (a "grid" of settings) and evaluates the model at every point of the grid in order to optimize its performance. To fit and predict well, we need to tune the algorithm's parameters. We supply a set of candidate values that hopefully contains the optimum, score a model for each candidate setting using a reliable evaluation method, and select the highest-scoring model as the optimized one. Note, however, that the truly optimal setting might lie outside the supplied grid, so grid search cannot guarantee a global optimum.
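The idea can be sketched in a few lines of plain Python. This is a toy stand-in for `GridSearchCV`, with a hypothetical scoring function, purely for intuition:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustively score every parameter combination; return the best one."""
    best_params, best_score = None, float('-inf')
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        score = score_fn(params)  # in practice: a cross-validated R^2
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy score: pretend the validation R^2 peaks at max_depth = 4.
score = lambda p: 1.0 - 0.05 * (p['max_depth'] - 4) ** 2
print(grid_search({'max_depth': range(1, 11)}, score)[0])  # {'max_depth': 4}
```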
### Question 8 - Cross-Validation
- What is the k-fold cross-validation technique?
- How does [GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) use cross-validation to select the best combination of parameters?
- What does the `'cv_results_'` attribute of [GridSearchCV](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) tell us?
- Why does grid search use k-fold cross-validation? What problem does k-fold cross-validation avoid?
**Hint:** When explaining k-fold cross-validation, be sure to explain what 'k' is, how the dataset is divided into different parts for training and testing, and how many times the procedure runs based on the value of 'k'.
When thinking about how k-fold cross-validation helps grid search, consider the drawbacks of training and testing on a single fixed subset of the data, and how k-fold cross-validation mitigates them.
### Question 8 - Answer:
In k-fold cross-validation, the original data is first split into a training set and a test set at a given ratio. The training set is then divided into K equal parts: each of the K parts takes a turn as the validation set while the remaining K-1 parts form the cross-validation training set. After K rounds of validation, the mean of the K scores is taken as the final validation score. GridSearchCV needs a reliable scoring method to evaluate the model at every specified parameter setting, and cross-validation is a classic choice for this. `cv_results_` contains entries such as `'param_kernel'`, `'param_gamma'`, `'param_degree'`, `'split0_test_score'`, `'split1_test_score'`, `'mean_test_score'`, `'std_test_score'`, `'rank_test_score'`, `'split0_train_score'`, `'split1_train_score'`, `'mean_train_score'`, `'std_train_score'`, `'mean_fit_time'`, `'std_fit_time'`, `'mean_score_time'`, `'std_score_time'`, and `'params'`; it tells us which parameter settings were evaluated and gives the arrays of scores each setting obtained. Without cross-validation, grid search could produce an unstable or even misleading model, either because the sample is too small or because the data happens to be split in an extreme way that is unfavorable for modeling. With cross-validation as described above, the data is used fully to evaluate each setting, with every sample participating in both training and validation, which yields a more stable and reliable model.
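The fold bookkeeping described above can be sketched without scikit-learn (index logic only; `KFold` in `sklearn.model_selection` does this for real):

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k (train, validation) pairs."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)  # distribute the remainder
        folds.append(list(range(start, start + size)))
        start += size
    splits = []
    for i in range(k):
        val = folds[i]  # fold i is held out for validation
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits

for train, val in k_fold_indices(6, 3):
    print(train, val)  # each index serves as validation exactly once
```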
### Coding Exercise 4: Fit a Model
In this exercise, you will bring everything together and train a model using the **decision tree algorithm**. To ensure that you are producing an optimized model, you will train the model using grid search to find the best `'max_depth'` parameter. You can think of the `'max_depth'` parameter as the number of questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called **supervised learning algorithms**.
In addition, you will find the implementation uses `ShuffleSplit()` as an alternative form of cross-validation (see the `'cv_sets'` variable). While it is not the k-fold cross-validation technique you described in Question 8, it is just as useful! The `ShuffleSplit()` implementation below will create 10 (`'n_splits'`) shuffled sets, and for each shuffle, 20% (`'test_size'`) of the data will be used as the validation set. While you are working on your implementation, think about the similarities and differences to k-fold cross-validation.
Please note that `ShuffleSplit` has different parameters in scikit-learn versions 0.17 and 0.18. For the `fit_model` function in the code cell below, you will need to implement the following:
1. **Define the `'regressor'` variable:** Use [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) from `sklearn.tree` to create a decision tree regressor object;
2. **Define the `'params'` variable:** Create a dictionary for the `'max_depth'` parameter with the values from 1 to 10;
3. **Define the `'scoring_fnc'` variable:** Use [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) from `sklearn.metrics` to create a scoring function, passing `performance_metric` as its argument;
4. **Define the `'grid'` variable:** Use [`GridSearchCV`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) from `sklearn.model_selection` to create a grid search object, passing the variables `'regressor'`, `'params'`, `'scoring_fnc'`, and `'cv_sets'` to its constructor;
If you are unfamiliar with defining and passing default arguments to Python functions, you can refer to this MIT course [video](http://cn-static.udacity.com/mlnd/videos/MIT600XXT114-V004200_DTH.mp4).
```
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
def fit_model(X, y):
    """ Performs grid search over the 'max_depth' parameter for a
    decision tree regressor trained on the input data [X, y]. """
    # Create cross-validation sets from the training data
    # sklearn version 0.18: ShuffleSplit(n_splits=10, test_size=0.1, train_size=None, random_state=None)
    # sklearn version 0.17: ShuffleSplit(n, n_iter=10, test_size=0.1, train_size=None, random_state=None)
    cv_sets = ShuffleSplit(n_splits=10, test_size=0.20, random_state=42)
    # TODO: Create a decision tree regressor object
    regressor = DecisionTreeRegressor()
    # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
    params = {'max_depth': range(1, 11)}
    # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
    scoring_fnc = make_scorer(performance_metric)
    # TODO: Create the grid search cv object --> GridSearchCV()
    # Make sure to include the right parameters in the object:
    # (estimator, param_grid, scoring, cv) which have values 'regressor', 'params', 'scoring_fnc', and 'cv_sets' respectively.
    grid = GridSearchCV(regressor, params, scoring=scoring_fnc, cv=cv_sets)
    # Fit the grid search object to the data to compute the optimal model
    grid = grid.fit(X, y)
    # Return the optimal model after fitting the data
    return grid.best_estimator_
```
## Step 6. Make Predictions
Once a model has been trained on a given set of data, it can be used to make predictions on new data. With a decision tree regressor, the model has learned *what questions to ask about* new input data, and can respond with a prediction for the **target variable**. You can use these predictions to gain information about data whose target variable is unknown, as long as that data was not part of the training set.
### Question 9 - Optimal Model
*What is the maximum depth of the optimal model? Does it match your guess from **Question 6**?*
Run the code cell below to fit the decision tree regressor to the training data and produce an optimized model.
```
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print("Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']))
```
### Question 9 - Answer:
The maximum depth of the optimal model is 4, which is essentially consistent with my guess in Question 6.
### Question 10 - Predicting Selling Prices
Imagine you are a real estate agent in the Boston area looking to use this model to help price homes your clients wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15:1 | 22:1 | 12:1 |
* What price would you recommend each client sell their home at?
* Do these prices seem reasonable given the values of the respective features? Why?
**Hint:** Use the statistics you calculated in the **Analyze the Data** section to help justify your answer.
Run the code cell below to have your optimized model make predictions for each client's home.
```
# Produce a matrix for client data
client_data = [[5, 17, 15],  # Client 1
               [4, 32, 22],  # Client 2
               [8, 3, 12]]   # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
```
### Question 10 - Answer:
The predicted prices are $391,183.33 for Client 1, $189,123.53 for Client 2, and $942,666.67 for Client 3. I think these prices are reasonable: more rooms command a higher price, a lower poverty level commands a higher price, and a lower student-teacher ratio also commands a higher price. Given that the minimum price is $105,000.00, the maximum price is $1,024,800.00, and the mean price is $454,342.94, these three clients fall near the mean, the minimum, and the maximum respectively, so the predictions appear reasonable.
### Coding Exercise 5
You have just predicted the selling prices for three clients' homes. In this exercise, you will use your optimal model to make predictions on the entire test dataset and compute the coefficient of determination, $R^2$, relative to the target variable.
**Hints:**
* You will likely need `X_test`, `y_test`, `reg`, and `performance_metric`.
* Refer to the code for Question 10 for making predictions.
* Refer to the code for Question 2 for computing the $R^2$ value.
```
# TODO Calculate the r2 score between 'y_true' and 'y_predict'
from sklearn.metrics import r2_score
r2 = r2_score(y_test,reg.predict(X_test))
print("Optimal model has R^2 score {:,.2f} on test data".format(r2))
```
### Question 11 - Analyzing the Coefficient of Determination
You have just computed the coefficient of determination of the optimal model on the test set. How would you evaluate this result?
### Question 11 - Answer
The $R^2$ score is 0.77. I think this result shows that the model still has room for improvement.
### Model Robustness
An optimal model is not necessarily a robust model. Sometimes a model is either too complex or too simple to sufficiently generalize to new data; sometimes the learning algorithm a model uses is not appropriate for the structure of the given data; and sometimes the data itself is too noisy or contains too few samples to allow a model to adequately predict the target variable. In these cases, we say the model is not robust.
### Question 12 - Model Robustness
Is the model robust enough to make consistent predictions?
**Hint:** Run the code cell below to run the `fit_model` function ten times with different training and testing sets. Observe how the prediction for a specific client changes with respect to the data it is trained on.
```
vs.PredictTrials(features, prices, fit_model, client_data)
```
### Question 12 - Answer:
I think the predictions are relatively consistent across trials.
### Question 13 - Applicability
*Briefly discuss whether the model you have constructed could be used in a real-world setting.*
**Hint:** Answer the following questions and give reasons for your conclusions:
- *Is data collected in 1978, even with inflation accounted for, still applicable today?*
- *Are the features present in the data sufficient to describe a home?*
- *Can data collected in a large city like Boston be applied to other, more rural towns?*
- *Is it fair to judge the price of an individual home based solely on the characteristics of its neighborhood?*
### Question 13 - Answer:
I don't think the model is applicable today: far too many environmental factors can change, not just inflation. The features in the data are by no means sufficient to describe a home. The characteristics of cities and rural towns differ greatly, and so do the factors that matter, so the model cannot be applied to rural towns. Judging solely by the neighborhood is not entirely fair either: a home's construction, interior materials, renovation, and so on also affect its value.
## Step 7. Finishing and Submitting
When you have completed all the code and answered all the questions above, export the iPython Notebook as HTML by selecting **File -> Download as -> HTML (.html)** from the menu in the upper left. When you submit the project, include both the **runnable .ipynb file** and the **exported HTML file**.
EE 502 P: Analytical Methods for Electrical Engineering
# Homework 1: Python Setup
## Due October 10, 2021 by 11:59 PM
### <span style="color: red">Mayank Kumar</span>
Copyright © 2021, University of Washington
<hr>
**Instructions**: Please use this notebook as a template. Answer all questions using well formatted Markdown with embedded LaTeX equations, executable Jupyter cells, or both. Submit your homework solutions as an `.ipynb` file via Canvas.
<span style="color: red">
Although you may discuss the homework with others, you must turn in your own, original work.
</span>
**Things to remember:**
- Use complete sentences. Equations should appear in text as grammatical elements.
- Comment your code.
- Label your axes. Title your plots. Use legends where appropriate.
- Before submitting a notebook, choose Kernel -> Restart and Run All to make sure your notebook runs when the cells are evaluated in order.
Note: Late homework will be accepted up to one week after the due date and will be worth 50% of its full credit score.
### 0. Warmup (Do not turn in)
- Get Jupyter running on your computer, or learn to use Google Colab's Jupyter environment.
- Make sure you can click through the Lecture 1 notes on Python. Try changing some of the cells to see the effects.
- If you haven't done any Python, follow one of the links in Lecture 1 to a tutorial and work through it.
- If you haven't done any Numpy or Sympy, read through the linked documentation and tutorials for those too.
### 1. Complex Numbers
Write a function `rand_complex(n)` that returns a list of `n` random complex numbers uniformly distributed in the unit circle (i.e., the magnitudes of the numbers are all between 0 and 1). Give the function a docstring. Demonstrate the function by making a list of 25 complex numbers.
```
def rand_complex(n):
    """
    n : the number of complex numbers to be generated.
    The function imports the random and numpy libraries when called.
    It returns a list of n complex numbers uniformly distributed within the unit circle.
    """
    import random       # import the "random" library to generate random numbers.
    import numpy as np  # import numpy to evaluate mathematical expressions.
    a = []  # declare an empty list
    for i in range(n):
        x = random.uniform(-1, 1)  # generate a random number between -1 and 1.
        y = random.uniform(-(np.sqrt(1 - (x*x))), np.sqrt(1 - (x*x)))  # generate a random y between -max and max,
        # where max is the y value for which the point (x, y) lies on the unit circle.
        a.append(complex(x, y))  # append the complex number x+yj to the list a.
        # print("Complex %d: %f + %fj" % (i+1, x, y))
        # print("Value %d: %f" % ((i+1), np.sqrt(x*x + y*y)))
    return a

# function call for 25 complex numbers
rand_complex(25)
```
### 2. Hashes
Write a function `to_hash(L) `that takes a list of complex numbers `L` and returns an array of hashes of equal length, where each hash is of the form `{ "re": a, "im": b }`. Give the function a docstring and test it by converting a list of 25 numbers generated by your `rand_complex` function.
```
def to_hash(L):
    """
    L: a list of complex numbers.
    Given the list as input, the function returns an array of hashes of the form {"re": a, "im": b}.
    """
    b = []  # declare an empty list
    for i in range(len(L)):
        x1 = L[i].real  # extract the real part of the i-th value of list L
        y1 = L[i].imag  # extract the imaginary part of the i-th value of list L
        b.append({"re": x1, "im": y1})  # append the values in hash form.
    return b

# function call using the function from Question 1.
to_hash(rand_complex(25))

# extra lines of code to verify the outcome.
# d = to_hash(rand_complex(25))
# print(d[1])
# d[1]["re"]
```
### 3. Matrices
Write a function `lower_triangular(n)` that returns an $n \times n$ numpy matrix with zeros on the upper diagonal, and ones on the diagonal and lower diagonal. For example, `lower_triangular(3)` would return
```python
array([[1, 0, 0],
       [1, 1, 0],
       [1, 1, 1]])
```
```
import numpy as np  # the variable np is used inside the function to generate the array.

# function definition starts here.
def lower_triangular(n):
    """
    The function takes an integer n as input and returns a lower triangular matrix of dimension n x n.
    Import "numpy as np" before using this function to avoid errors.
    """
    c = np.ones(n*n)     # create an array of n*n elements with value = 1.
    c = c.reshape(n, n)  # reshape the array into a matrix of dimension n x n.
    for i in range(n):
        for j in range(n):
            if j > i:        # check for the elements that must be changed to zero.
                c[i][j] = 0  # replace the elements above the diagonal with 0.
    return c

print("lower triangular matrix :")
lower_triangular(3)
```
### 4. Numpy
Write a function `convolve(M,K)` that takes an $n \times m$ matrix $M$ and a $3 \times 3$ matrix $K$ (called the kernel) and returns their convolution as in [this diagram](https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTYo2_VuAlQhfeEGJHva3WUlnSJLeE0ApYyjw&usqp=CAU).
Please do not use any predefined convolution functions from numpy or scipy. Write your own. If the matrix $M$ is too small, your function should raise an exception.
You can read more about convolution in [this post](https://setosa.io/ev/image-kernels/).
The matrix returned will have two fewer rows and two fewer columns than $M$. Test your function by making a $100 \times 100$ matrix of zeros and ones that, as an image, looks like the letter X, and convolve it with the kernel
$$
K = \frac{1}{16} \begin{pmatrix}
1 & 2 & 1 \\
2 & 4 & 2 \\
1 & 2 & 1
\end{pmatrix}
$$
Use `imshow` to display both images using subplots.
```
import numpy as np               # import the numpy library as np
import matplotlib.pyplot as plt  # import matplotlib for plotting
%matplotlib inline

# define the kernel
K = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]])
K = K/16  # divide all elements of the matrix by 16.

# create an image for testing as per the question
m = 100
n = 100
im = np.ones(m*n)      # create an array of length m*n.
im = im.reshape(m, n)  # reshape the array into a matrix of dimension m x n.
# draw an image in the form of an X.
for i in range(m):
    for j in range(n):
        if (i == j) or (i + j == n - 1):  # select all the elements where manipulation is required.
            im[i][j] = 0                  # replace 1 with 0 at all the locations selected above.

fig = plt.figure(figsize=(8, 8))  # define the figure size
fig.add_subplot(1, 2, 1)          # add a subplot to the figure
plt.imshow(im)                    # show the input image created earlier.
plt.axis('off')                   # the axes are turned off
plt.title("Original")             # add a title to the image

# function definition starts here
def convolve(M, K):
    """
    The function takes two inputs.
    M: the original image on which convolution should be performed.
    K: the kernel to apply to the input matrix.
    It returns a matrix of dimension (m-2) x (n-2).
    """
    im_out = np.zeros(M.shape[0] * M.shape[1]).reshape(M.shape[0], M.shape[1])  # declare a matrix with the same dimensions as M
    if M.shape[0] < 3 or M.shape[1] < 3:  # raise an exception for matrices with either dimension < 3.
        raise Exception("Convolution of matrix M can't be calculated. Check input matrix for dimension.")
    else:
        for i in range(1, M.shape[0]-1):
            for j in range(1, M.shape[1]-1):
                im_out[i][j] = np.sum(K * M[(i-1):(i+2), (j-1):(j+2)])  # sum of the elementwise product of the kernel and the current image patch.
                if im_out[i][j] > 255:    # clamp pixels that exceed 255.
                    im_out[i][j] = 255
                elif im_out[i][j] < 0:    # clamp pixels that fall below 0.
                    im_out[i][j] = 0
        im_out = im_out[1:(M.shape[0]-1), 1:(M.shape[1]-1)]  # remove the border pixels where convolution was not possible.
    return im_out

out_image = convolve(im, K)  # function call; the output is saved for further use
# print(len(out_image[0]))
fig.add_subplot(1, 2, 2)  # add a subplot to the figure
plt.imshow(out_image)     # show the output image
plt.axis('off')           # the axes are turned off as they are not required
plt.title("Output")       # add a title to the output image
```
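For reference, the same sliding-window logic can be written more compactly, with a value that can be checked by hand on a small input. This is a sketch mirroring the exercise's function, not a library routine:

```python
import numpy as np

K = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0

def convolve(M, K):
    # slide a 3x3 window over the valid region, so output is (m-2) x (n-2)
    if M.shape[0] < 3 or M.shape[1] < 3:
        raise Exception("Matrix M is too small for a 3x3 kernel.")
    out = np.zeros((M.shape[0] - 2, M.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(K * M[i:i + 3, j:j + 3])
    return out

# on a 3x3 input the output is a single value; by hand:
# (0*1 + 1*2 + 2*1 + 3*2 + 4*4 + 5*2 + 6*1 + 7*2 + 8*1) / 16 = 64/16 = 4
M = np.arange(9, dtype=float).reshape(3, 3)
print(convolve(M, K))  # [[4.]]
```

Because this kernel is symmetric, the sliding-window correlation above coincides with true convolution (which would flip the kernel first).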
### 5. Symbolic Manipulation
Use sympy to specify and solve the following equations for $x$.
- $x^2 + 2x - 1 = 0$
- $a x^2 + bx + c = 0$
Also, evaluate the following integrals using sympy
- $\int x^2 dx$
- $\int x e^{6x} dx$
- $\int (3t+5)\cos(\frac{t}{4}) dt$
```
# Importing related libraries
from sympy import *
init_printing(use_latex='mathjax')
#solving First equation i.e., x^2 + 2x - 1 = 0
x = symbols("x")
expr_1 = (x**2) + (2*x) - 1
result_1 = solve(expr_1,x)
print("solution of equation x^2 + 2x - 1 = 0 is :")
result_1
#solving second equation i.e., ax^2 + bx + c = 0
a,b,c = symbols("a b c")
x = symbols("x")
expr_2 = (a*(x**2)) + b*x + c
result_2 = solve(expr_2,x)
print("solution of equation ax^2 + bx + c = 0 is : " )
result_2
#evaluating integral 1
x = symbols("x")
expr_3 = x**2
integrate(expr_3,x)
#evaluating integral 2
x = symbols("x")
expr_4 = x * exp(6*x)
integrate(expr_4)
#evaluating integral 3
t = symbols("t")
expr_5 = ((3*t) + 5) * cos(t/4)
integrate(expr_5)
```
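Since SymPy can differentiate as well as integrate, the three antiderivatives above can be sanity-checked by differentiating them back to the integrands:

```python
from sympy import symbols, diff, integrate, simplify, exp, cos

x, t = symbols("x t")

# by the fundamental theorem of calculus, d/dx of each antiderivative
# should recover the original integrand
assert simplify(diff(integrate(x**2, x), x) - x**2) == 0
assert simplify(diff(integrate(x * exp(6*x), x), x) - x * exp(6*x)) == 0
assert simplify(diff(integrate((3*t + 5) * cos(t/4), t), t) - (3*t + 5) * cos(t/4)) == 0
print("all antiderivatives verified")
```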
### 6. Typesetting
Use LaTeX to typeset the following equations.
<img src="https://www.sciencealert.com/images/Equations_web.jpg">
## 17 Equations that changed the world
### by Ian Stewart
#### Typesetting starts Now
---
**1. Pythagoras' theorem:**
\begin{align}
a^2 + b^2 = c^2
\end{align}
___
**2. Logarithms:**
\begin{align}
\log xy = \log x +\log y
\end{align}
___
**3. Calculus:**
\begin{align}
\frac{df}{dt} = \lim_{h \rightarrow 0} \frac {f(t+h) - f(t)}{h}
\end{align}
___
**4. Law of Gravity:**
\begin{align}
F = G\frac{m_1 m_2}{r^2}
\end{align}
___
**5. Square root of minus one**
\begin{align}
i^2 = -1
\end{align}
___
**6. Euler's formula for polyhedra**
\begin{align}
V - E + F = 2
\end{align}
___
**7. Normal Distribution**
\begin{align}
\phi(x) = \frac {1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}
\end{align}
___
**8. Wave Equation**
\begin{align}
\frac{\partial^2 u}{\partial t^2} = c^2\frac{\partial^2 u}{\partial x^2}
\end{align}
___
**9. Fourier Transform**
\begin{align}
f(w) = \int_{-\infty}^{\infty} f(x)e^{-2\pi i x w} dx
\end{align}
___
**10. Navier-Stokes Equation**
\begin{align}
\rho\left(\frac{\partial v}{\partial t} + v \cdot \nabla v \right) = - \nabla p + \nabla \cdot T + f
\end{align}
___
**11. Maxwell's Equation**
\begin{align}
\nabla \cdot E = 0 \hspace{50 pt}\nabla \cdot H = 0 \\
\nabla \times E = -\frac{1}{c} \frac{\partial H}{\partial t} \hspace{30 pt}\nabla \times H = \frac{1}{c} \frac{\partial E}{\partial t}
\end{align}
___
**12. Second Law of Thermodynamics**
\begin{align}
dS \geq 0
\end{align}
___
**13. Relativity**
\begin{align}
E = mc^2
\end{align}
___
**14. Schrödinger's Equation**
\begin{align}
i\hbar \frac{\partial}{\partial t} \Psi = H \Psi
\end{align}
___
**15. Information Theory**
\begin{align}
H = -\sum p(x)\log p(x)
\end{align}
___
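As a quick numerical illustration of the entropy formula in equation 15 (using log base 2, so the result is in bits):

```python
import math

def entropy(probs):
    # Shannon entropy: H = -sum p(x) * log2 p(x), skipping zero-probability outcomes
    return -sum(p * math.log2(p) for p in probs if p > 0)

# a fair coin is maximally uncertain: it carries exactly one bit of information
print(entropy([0.5, 0.5]))  # 1.0
```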
**16. Chaos Theory**
\begin{align}
x_{t + 1} = k x_t(1 - x_t)
\end{align}
___
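Equation 16, the logistic map, can be iterated numerically; a quick sketch (the parameter values here are illustrative):

```python
# iterate x_{t+1} = k * x_t * (1 - x_t)
def logistic_orbit(k, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(k * xs[-1] * (1 - xs[-1]))
    return xs

# for k = 2.0 the orbit settles onto the fixed point x* = 1 - 1/k = 0.5;
# for k near 4.0 the same rule behaves chaotically
print(round(logistic_orbit(2.0, 0.2, 50)[-1], 6))  # 0.5
```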
**17.Black-Scholes Equation**
\begin{align}
\frac{1}{2}\sigma S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} + \frac{\partial V}{\partial t} - r V = 0
\end{align}
___
## U.S. GDP vs. Wage Income
### For every wage dollar paid, what is GDP output?
- Each worker on average currently contributes over
90,000 dollars annually of goods and services valued as GDP.
- Each worker on average currently earns about
43,300 dollars annually (steadily up from 35,000 since the 1990's).
- So one dollar in paid wages currently yields
2.23 dollars of products and services --
but that multiplier is not a constant historically.
### What can we say about GDP growth by observing wage growth?
We find the assumption of time-invariant multiplier
gives poor results, whereas we obtain a reasonable
regression fit (Appendix 3) by treating the multiplier as
time-variant (workers are increasingly more productive):
$\%(G) \approx 1.3 * \%(m w)$
In contrast, our *local numerical approximation*
derived in the conclusion suggests using the most
recent estimated parameters:
$\%(G) \approx 1.9 * \%(w)$
So roughly speaking, 1.0% wage growth equates to 1.9% GDP growth
(yet data shows real wages can decline substantially due to the economy).
The abuse of notation is due to the fact that
our observations are not in continuous-time,
but rather in interpolated discrete-time
and in (non-logarithmic) percentage terms.
Short URL: https://git.io/gdpwage
*Dependencies:*
- Repository: https://github.com/rsvp/fecon235
- Python: matplotlib, pandas
*CHANGE LOG*
2016-11-10 Revisit after two years. Use PREAMBLE-p6.16.0428.
Update results with newly estimated parameters.
This notebook should run under Python 2.7 or 3.
Appendix 3 modified to reflect trend fit of multiplier.
2014-12-07 Update code and commentary.
2014-08-15 First version.
```
from fecon235.fecon235 import *
# PREAMBLE-p6.16.0428 :: Settings and system details
from __future__ import absolute_import, print_function
system.specs()
pwd = system.getpwd() # present working directory as variable.
print(" :: $pwd:", pwd)
# If a module is modified, automatically reload it:
%load_ext autoreload
%autoreload 2
# Use 0 to disable this feature.
# Notebook DISPLAY options:
# Represent pandas DataFrames as text; not HTML representation:
import pandas as pd
pd.set_option( 'display.notebook_repr_html', False )
from IPython.display import HTML # useful for snippets
# e.g. HTML('<iframe src=http://en.mobile.wikipedia.org/?useformat=mobile width=700 height=350></iframe>')
from IPython.display import Image
# e.g. Image(filename='holt-winters-equations.png', embed=True) # url= also works
from IPython.display import YouTubeVideo
# e.g. YouTubeVideo('1j_HxD4iLn8', start='43', width=600, height=400)
from IPython.core import page
get_ipython().set_hook('show_in_pager', page.as_hook(page.display_page), 0)
# Or equivalently in config file: "InteractiveShell.display_page = True",
# which will display results in secondary notebook pager frame in a cell.
# Generate PLOTS inside notebook, "inline" generates static png:
%matplotlib inline
# "notebook" argument allows interactive zoom and resize.
```
## Examine U.S. population statistics
```
# Total US population in millions, released monthly:
pop = get( m4pop ) / 1000.0
plot( pop )
georet( pop, 12 )
```
This gives an annualized geometric growth rate of about 1.13%,
but one might also look at fertility rates, which support the population:
e.g. 2.1 children per female will ensure growth
(cf. fertility rates in Japan, which have been declining over the decades).
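For readers without fecon235 installed, the headline number reported by `georet` (an annualized geometric growth rate) can be approximated with a short stand-in. The function below is a sketch of that idea under the assumption of monthly observations, not fecon235's actual implementation:

```python
import numpy as np

def georet_annualized(series, periods_per_year=12):
    # annualized geometric growth rate from a series of levels
    # (stand-in for fecon235's georet; assumes evenly spaced observations)
    ratios = np.asarray(series[1:]) / np.asarray(series[:-1])
    per_period = np.prod(ratios) ** (1.0 / len(ratios))
    return per_period ** periods_per_year - 1

# a series growing 0.1% per month compounds to about 1.21% per year
levels = [100 * (1.001 ** k) for k in range(25)]
print(round(georet_annualized(levels) * 100, 2))  # 1.21
```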
```
# Fraction of population which works:
emppop = get( m4emppop ) / 100.0
```
Workers would be employed adults, which presumably excludes children
(20% of the population) and most elderly persons (14%).
There has been a dramatic drop in the working fraction, from about 64% in 2001
to about 59% recently.
```
plot( emppop )
# Total US workers in millions:
workers = todf( pop * emppop )
plot( workers )
georet( workers, 12 )
tail( workers )
```
**Total population and the number of workers both grow at around 1.15% annually --
but the annualized volatility for workers is much larger than for the
total population (1.16% vs. 0.09%). The decrease in workers due to the
Great Recession is remarkable, and since that period there has
been a steady increase toward north of 190 million workers.**
## Examine U.S. Gross Domestic Product
```
# Deflator, scaled to 1 current dollar:
defl = get( m4defl )
# Nominal GDP in billions:
gdp = get( m4gdpus )
# The release cycle is quarterly, but we resample to monthly,
# in order to sync with the deflator.
# We do NOT use m4gdpusr directly because that is in 2009 dollars.
# Real GDP in current billions:
gdpr = todf( defl * gdp )
tail( gdpr )
# Real GDP: showing the rise from a $4 trillion economy
# in the 1960's to nearly $19 trillion.
plot( gdpr )
georet( gdpr, 12 )
```
Real GDP geometric rate of growth is 2.8% per annum
(presumably due to the working population).
We could say that is the *natural growth rate*
of the US economy.
### Real GDP per worker (NOT per capita)
```
# Real GDP per worker -- NOT per capita:
gdprworker = todf( gdpr / workers )
plot( gdprworker )
# plotted in thousands of dollars
```
**Chart shows each worker on average currently contributes
over *90,000 dollars annually*
of goods and services valued as GDP.**
```
georet( gdprworker, 12 )
```
Workers have generally been more *productive* since WW2,
increasingly contributing to GDP at an annual pace of 1.6%.
## Examine wage income
```
# Nominal annual INCOME, assuming 40 working hours per week, 50 weeks per year:
inc = get( m4wage ) * 2000
# REAL income in thousands per worker:
rinc = todf((defl * inc) / 1000.0)
# Income in thousands, current dollars per worker:
plot( rinc )
```
**INCOME chart shows each worker on average currently earns
about *43,300 dollars annually*
(steadily up from 35,000 since the 1990's).**
```
tail( rinc )
georet( rinc, 12 )
```
In general, real income does not always steadily go up,
as the chart demonstrates. A stagnating economy with high inflation
will wear away real wages.
Since 1964, the geometric rate of real wage growth has been
approximately 0.5% -- far less in comparison to the
natural growth rate of the economy.
## How do wages multiply out to GDP?
```
# Ratio of real GDP to real income per worker:
gdpinc = todf( gdprworker / rinc )
```
Implicitly, our assumption is that workers earn wages at the
nonfarm non-supervisory private-sector rate.
This is not a bad assumption for our purposes,
provided changes in labor rates are uniformly applied
across the various other categories, since we are
focusing on the multiplier effect.
```
plot( gdpinc )
tail( gdpinc )
```
*The ratio of real GDP to real income per worker has increased
from 1.4 in the 1970's to 2.2 recently.*
(There is a noticeable temporary dip after the 2007 crisis.)
**One dollar in paid wages currently
yields 2.23 dollars of products and services.**
The time-series shows workers have become
more productive in producing national wealth.
Hypothesis: over the years, *technology* has exerted
upward pressure on productivity, and downward pressure on wages.
In other words, the slope of gdpinc is a function of
technological advances.
(Look for counterexamples in other countries.)
```
# Let's fit and plot the simplified time trend:
gdpinc_trend = trend( gdpinc )
# The computed slope will be relative to one month.
plot( gdpinc_trend )
```
The estimated slope implies that each year adds 0.02 to gdpinc multiplier.
Clearly, we can rule out a *constant* gdpinc multiplier effect.
Rather than a straight line estimator, we can forecast
the gdpinc multiplier using the Holt-Winters method,
one year ahead, month-by-month...
```
# Holt-Winters monthly forecasts:
holtfred( gdpinc, 12 )
```
Interestingly, forecasting the local terrain is more complex
than a global linear regression.
The Holt-Winters forecast for mid-2017 shows a
0.03 *decrease* in the gdpinc multiplier.
If the gdpinc multiplier were constant, then mathematically an
x% change in wages would translate into a straightforward x% change in GDP.
This is why the Federal Reserve, especially Janet Yellen,
pays so much attention to wage growth.
But our analysis clearly shows the multiplier is not stable.
Linear regression between real GDP growth and real wage growth
performs poorly when the multiplier is treated
as if it is time-invariant (see Appendix 1).
## CONCLUSION: Numerical approximation for GDP growth based on observations from wage growth
We found evidence of a **time-variant multiplier**
$m_t$ such that $G_t = m_t w_t$.
Let us express GDP growth as the percentage change:
$\begin{aligned}
\frac{G_{t+1} - G_t}{G_t} = \frac{m_{t+1} w_{t+1}}{m_t w_t} - 1
\end{aligned}$
Notice that LHS is just the growth rate of $m_t w_t$.
Abusing notation, we could write $\%(G) = \%(m w)$
Empirically the multiplier varies linearly as a function of time.
Let us evaluate the GDP growth numerically on the LHS,
using the most recent multiplier and its expected linear incrementation,
assuming wage increase of 1% *year-over-year*:
$\begin{aligned}
\left(\frac{2.23 + 0.02}{2.23}\right) \times 1.01 - 1 \approx 0.0191
\end{aligned}$
Under such assumptions, GDP increases 1.91% over one year.
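The arithmetic above, spelled out in plain Python:

```python
# most recent multiplier, its expected one-year increment, and assumed wage growth
m, dm, wage_growth = 2.23, 0.02, 0.01

# percentage change of m_t * w_t over one year
gdp_growth = ((m + dm) / m) * (1 + wage_growth) - 1
print(round(gdp_growth, 4))  # 0.0191
```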
In other words, **as a rough current approximation:
GDP_growth = 1.9 \* wage_growth,** i.e.
$\%(G) \approx 1.9 * \%(w)$ at current estimated parameters.
This is a useful approximation since GDP is released only quarterly,
whereas wage data is released monthly.
(The result also depends on the interpolation
method used in our *resample_main()*.)
Appendix 3 arrives at the following linear regression result:
$\%(G) \approx 1.3 * \%(m w)$
which takes the entire dataset since 1964 into account,
using gdpinc_trend as the time-varying multiplier.
- - - -
### APPENDIX 1: Linear regression of 0.49 R-squared if gdpinc multiplier is mistakenly treated as a constant
```
stat2( gdprworker[Y], rinc[Y] )
```
- - - -
### APPENDIX 2: Linear regression between real GDP growth and real wage growth: 0.19 R-squared when multiplier is treated as time-invariant
Note: an alternative is to use the difference between
logarithmic values, but we intentionally use the pcent() function YoY
since our data frequency is not even remotely continuous-time.
```
# Examine year-over-year percentage growth:
stat2( pcent(gdpr, 12)[Y], pcent(rinc, 12)[Y] )
```
- - - -
### Appendix 3: Improved linear regression of growth model: 0.60 R-squared with time-variant multiplier (trend based)
Let the Python variable mw represent the series $m_t w_t$
in the analytical model described in the conclusion:
```
# The string argument allows us to label a DataFrame column:
mw = todf( gdpinc_trend * rinc, 'mw' )
mwpc = todf( pcent( mw, 12), 'mwpc' )
gdprpc = todf( pcent( gdpr, 12), 'Gpc' )
dataf = paste( [gdprpc, mwpc] )
# The 0 in the formula means no intercept:
result = regressformula( dataf['1964':], 'Gpc ~ 0 + mwpc' )
print(result.summary())
```
R-squared after 1964 looks respectable at around 0.60; however,
the fit is terrible after the Great Recession.
The estimated coefficient implies this fitted equation:
$\%(G) \approx 1.3 * \%(m w)$
In contrast, our *local numerical approximation* derived in the conclusion
suggests for the most recent estimated parameters:
$\%(G) \approx 1.9 * \%(w)$
# WARNING
**Please make sure to "COPY AND EDIT NOTEBOOK" to use compatible library dependencies! DO NOT CREATE A NEW NOTEBOOK AND COPY+PASTE THE CODE: that would pull in the latest Kaggle dependencies at the time you do it, and the code would need to be modified to work. Also make sure internet connectivity is enabled on your notebook.**
# Preliminaries
First install a critical dependency for our code. **NOTE THAT THIS NOTEBOOK USES TENSORFLOW 1.14 BECAUSE ELMo WAS NOT PORTED TO TENSORFLOW 2.X AT THE TIME OF DEVELOPMENT. You can confirm if that is still the case now by going to https://tfhub.dev/s?q=elmo To see equivalent Tensorflow 2.X BERT Code for the Spam problem, see https://www.kaggle.com/azunre/tlfornlp-chapters2-3-spam-bert-tf2**
```
!pip install keras==2.2.4 # critical dependency
```
Write requirements to a file any time you run the notebook, in case you need to go back and recover the exact Kaggle dependencies. **MOST OF THESE REQUIREMENTS WOULD NOT BE NECESSARY FOR A LOCAL INSTALLATION**
Latest known such requirements are hosted for each notebook in the companion github repo, and can be pulled down and installed here if needed. Companion github repo is located at https://github.com/azunre/transfer-learning-for-nlp
```
!pip freeze > kaggle_image_requirements.txt
# Import neural network libraries
import tensorflow as tf
import tensorflow_hub as hub
from keras import backend as K
import keras.layers as layers
from keras.models import Model, load_model
from keras.engine import Layer
# Initialize tensorflow/keras session
sess = tf.Session()
K.set_session(sess)
# Some other key imports
import os
import re
import pandas as pd
import numpy as np
import random
```
# Define Tokenization, Stop-word and Punctuation Removal Functions
Before proceeding, we must decide how many samples to draw from each class. We must also decide the maximum number of tokens per review, and the maximum length of each token. This is done by setting the following overarching hyperparameters.
```
Nsamp = 1000 # number of samples to generate in each class - positive and negative reviews
maxtokens = 50 # the maximum number of tokens per document
maxtokenlen = 20 # the maximum length of each token
```
**Tokenization**
```
def tokenize(row):
    if row is None or row == '':
        tokens = []
    else:
        tokens = row.split(" ")[:maxtokens]
    return tokens
```
**Use regular expressions to remove unnecessary characters**
Next, we define a function to remove punctuation marks and other nonword characters from the reviews, using regular expressions via Python's built-in `re` module. In the same step, we truncate all tokens to the hyperparameter maxtokenlen defined above.
```
import re

def reg_expressions(row):
    tokens = []
    try:
        for token in row:
            token = token.lower()
            token = re.sub(r'[\W\d]', "", token)
            token = token[:maxtokenlen]  # truncate token
            tokens.append(token)
    except TypeError:  # non-iterable input yields a single empty token
        tokens.append("")
    return tokens
```
**Stop-word removal**
Let’s define a function to remove stopwords - words that occur so frequently in language that they offer no useful information for classification. This includes words such as “the” and “are”; the popular library NLTK provides a heavily-used list that we will employ.
```
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stopwords = stopwords.words('english')

# print(stopwords) # see default stopwords
# it may be beneficial to drop negation words from the removal list, as they can change
# the positive/negative meaning of a sentence - but we didn't find it to make a
# difference for this problem
# stopwords.remove("no")
# stopwords.remove("nor")
# stopwords.remove("not")

def stop_word_removal(row):
    token = [token for token in row if token not in stopwords]
    token = filter(None, token)
    return token
```
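A minimal end-to-end illustration of the three preprocessing steps on a sample sentence. To keep it self-contained, a tiny inline stopword set stands in for NLTK's list, and the two hyperparameters are repeated here:

```python
import re

maxtokens = 50     # maximum number of tokens per document
maxtokenlen = 20   # maximum length of each token

sample = "Hello!! This movie was GREAT, 10/10 :)"
tokens = sample.split(" ")[:maxtokens]                                      # tokenize
tokens = [re.sub(r'[\W\d]', "", t.lower())[:maxtokenlen] for t in tokens]   # clean and truncate
stop = {"this", "was", "a", "the"}      # tiny stand-in for NLTK's stopword list
tokens = [t for t in tokens if t and t not in stop]                         # drop stopwords and empties
print(tokens)  # ['hello', 'movie', 'great']
```

Note that purely numeric or symbolic tokens such as "10/10" and ":)" are reduced to empty strings by the regex and then filtered out.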
# Download and Assemble IMDB Review Dataset
Download the labeled IMDB reviews
```
!wget -q "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
!tar xzf aclImdb_v1.tar.gz
```
Shuffle and preprocess data
```
# function for shuffling data and labels in unison
def unison_shuffle(data, header):
    p = np.random.permutation(len(header))
    data = data[p]
    header = np.asarray(header)[p]
    return data, header

def load_data(path):
    data, sentiments = [], []
    for folder, sentiment in (('neg', 0), ('pos', 1)):
        folder = os.path.join(path, folder)
        for name in os.listdir(folder):
            with open(os.path.join(folder, name), 'r') as reader:
                text = reader.read()
            text = tokenize(text)
            text = stop_word_removal(text)
            text = reg_expressions(text)
            data.append(text)
            sentiments.append(sentiment)
    data_np = np.array(data)
    data, sentiments = unison_shuffle(data_np, sentiments)
    return data, sentiments

train_path = os.path.join('aclImdb', 'train')
test_path = os.path.join('aclImdb', 'test')
raw_data, raw_header = load_data(train_path)

print(raw_data.shape)
print(len(raw_header))

# Subsample the required number of samples
random_indices = np.random.choice(range(len(raw_header)), size=(Nsamp*2,), replace=False)
data_train = raw_data[random_indices]
header = raw_header[random_indices]

print("DEBUG::data_train::")
print(data_train)
```
Display sentiments and their frequencies in the dataset, to ensure it is roughly balanced between classes
```
unique_elements, counts_elements = np.unique(header, return_counts=True)
print("Sentiments and their frequencies:")
print(unique_elements)
print(counts_elements)

# function for converting data into the right format; unlike the sklearn models
# explored previously, which took a list of tokens, we need a single string per review
def convert_data(raw_data, header):
    converted_data, labels = [], []
    for i in range(raw_data.shape[0]):
        # combine the list of tokens representing each review into a single string
        out = ' '.join(raw_data[i])
        converted_data.append(out)
        labels.append(header[i])
    converted_data = np.array(converted_data, dtype=object)[:, np.newaxis]
    return converted_data, np.array(labels)

data_train, header = unison_shuffle(data_train, header)

# split into independent 70% training and 30% testing sets
idx = int(0.7 * data_train.shape[0])
# 70% of the data for training
train_x, train_y = convert_data(data_train[:idx], header[:idx])
# remaining 30% for testing
test_x, test_y = convert_data(data_train[idx:], header[idx:])

print("train_x/train_y list details, to make sure it is of the right form:")
print(len(train_x))
print(train_x)
print(train_y[:5])
print(train_y.shape)
```
# Build, Train and Evaluate ELMo Model
Create a custom tf hub ELMO embedding layer
```
class ElmoEmbeddingLayer(Layer):
    def __init__(self, **kwargs):
        self.dimensions = 1024  # output dimension of the ELMo embedding
        self.trainable = True
        super(ElmoEmbeddingLayer, self).__init__(**kwargs)

    def build(self, input_shape):  # function for building the ELMo embedding
        self.elmo = hub.Module('https://tfhub.dev/google/elmo/2', trainable=self.trainable,
                               name="{}_module".format(self.name))  # download pretrained ELMo model
        # extract trainable parameters, which are only a small subset of the total - this is a
        # constraint of the tf hub module as shared by the authors - see https://tfhub.dev/google/elmo/2
        # the trainable parameters are 4 scalar weights on the sum of the outputs of ELMo layers
        self.trainable_weights += K.tf.trainable_variables(scope="^{}_module/.*".format(self.name))
        super(ElmoEmbeddingLayer, self).build(input_shape)

    def call(self, x, mask=None):  # specify function for calling the embedding
        result = self.elmo(K.squeeze(K.cast(x, tf.string), axis=1),
                           as_dict=True,
                           signature='default',
                           )['default']
        return result

    def compute_output_shape(self, input_shape):  # specify output shape
        return (input_shape[0], self.dimensions)
```
We now use the custom TF hub ELMo embedding layer within a higher-level function to define the overall model. More specifically, we put a dense trainable layer of output dimension 256 on top of the ELMo embedding.
```
# Function to build the model
def build_model():
    input_text = layers.Input(shape=(1,), dtype="string")
    embedding = ElmoEmbeddingLayer()(input_text)
    dense = layers.Dense(256, activation='relu')(embedding)
    pred = layers.Dense(1, activation='sigmoid')(dense)
    model = Model(inputs=[input_text], outputs=pred)
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.summary()
    return model

# Build and fit
model = build_model()
history = model.fit(train_x,
                    train_y,
                    validation_data=(test_x, test_y),
                    epochs=5,
                    batch_size=32)
```
**Save trained model**
```
model.save('ElmoModel.h5')
```
**Visualize Convergence**
```
import matplotlib.pyplot as plt
df_history = pd.DataFrame(history.history)
fig,ax = plt.subplots()
plt.plot(range(df_history.shape[0]),df_history['val_acc'],'bs--',label='validation')
plt.plot(range(df_history.shape[0]),df_history['acc'],'r^--',label='training')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.title('ELMo Movie Review Classification Training')
plt.legend(loc='best')
plt.grid()
plt.show()
fig.savefig('ELMoConvergence.eps', format='eps')
fig.savefig('ELMoConvergence.pdf', format='pdf')
fig.savefig('ELMoConvergence.png', format='png')
fig.savefig('ELMoConvergence.svg', format='svg')
```
**Make figures downloadable to local system in interactive mode**
```
from IPython.display import HTML

def create_download_link(title="Download file", filename="data.csv"):
    html = '<a href={filename}>{title}</a>'
    html = html.format(title=title, filename=filename)
    return HTML(html)

create_download_link(filename='ELMoConvergence.svg')

# you must remove all downloaded files - having too many of them on completion
# will make Kaggle reject your notebook
!rm -rf aclImdb
!rm aclImdb_v1.tar.gz
```
# Natural Language Inference and the Dataset
:label:`sec_natural-language-inference-and-dataset`
In :numref:`sec_sentiment`, we discussed the problem of sentiment analysis. This task aims to classify a single text sequence into predefined categories, such as a set of sentiment polarities. However, when there is a need to decide whether one sentence can be inferred from another, or to eliminate redundancy by identifying sentences that are semantically equivalent, knowing how to classify one text sequence is insufficient. Instead, we need to be able to reason over pairs of text sequences.
## Natural Language Inference
*Natural language inference* studies whether a *hypothesis*
can be inferred from a *premise*,
where both are text sequences.
In other words, natural language inference determines the logical relationship between a pair of text sequences. Such relationships usually fall into three types:
* *Entailment*: the hypothesis can be inferred from the premise.
* *Contradiction*: the negation of the hypothesis can be inferred from the premise.
* *Neutral*: all the other cases.
Natural language inference is also known as the recognizing textual entailment task.
For example, the following pair will be labeled as *entailment*, because "showing affection" in the hypothesis can be inferred from "hugging one another" in the premise.
>Premise: Two women are hugging each other.
>Hypothesis: Two women are showing affection.
The following is an example of *contradiction*, as "running the coding example" indicates "not sleeping" rather than "sleeping".
>Premise: A man is running the coding example from Dive into Deep Learning.
>Hypothesis: The man is sleeping.
The third example shows a *neutral* relationship, because neither "famous" nor "not famous" can be inferred from the fact that "are performing for us".
>Premise: The musicians are performing for us.
>Hypothesis: The musicians are famous.
Natural language inference has been a central topic for understanding natural language. It enjoys wide applications ranging from information retrieval to open-domain question answering. To study this problem, we will begin by investigating a popular natural language inference benchmark dataset.
## The Stanford Natural Language Inference (SNLI) Dataset
[**The Stanford Natural Language Inference (SNLI) corpus**] is a collection of over 500,000 labeled English sentence pairs :cite:`Bowman.Angeli.Potts.ea.2015`. We download and store the extracted SNLI dataset in the path `../data/snli_1.0`.
```
import os
import re
import torch
from torch import nn
from d2l import torch as d2l
#@save
d2l.DATA_HUB['SNLI'] = (
'https://nlp.stanford.edu/projects/snli/snli_1.0.zip',
'9fcde07509c7e87ec61c640c1b2753d9041758e4')
data_dir = d2l.download_extract('SNLI')
```
### [**Reading the Dataset**]
The original SNLI dataset contains much richer information than we really need in our experiments. Thus, we define a function `read_snli` to extract only part of the dataset, and then return lists of premises, hypotheses, and their labels.
```
#@save
def read_snli(data_dir, is_train):
    """Read the SNLI dataset into premises, hypotheses, and labels."""
    def extract_text(s):
        # Remove information that will not be used by us
        s = re.sub('\\(', '', s)
        s = re.sub('\\)', '', s)
        # Substitute two or more consecutive whitespace with a single space
        s = re.sub('\\s{2,}', ' ', s)
        return s.strip()
    label_set = {'entailment': 0, 'contradiction': 1, 'neutral': 2}
    file_name = os.path.join(data_dir, 'snli_1.0_train.txt'
                             if is_train else 'snli_1.0_test.txt')
    with open(file_name, 'r') as f:
        rows = [row.split('\t') for row in f.readlines()[1:]]
    premises = [extract_text(row[1]) for row in rows if row[0] in label_set]
    hypotheses = [extract_text(row[2]) for row in rows if row[0] in label_set]
    labels = [label_set[row[0]] for row in rows if row[0] in label_set]
    return premises, hypotheses, labels
```
Now let us [**print the first 3 pairs**] of premises and hypotheses, as well as their labels ("0", "1", and "2" correspond to "entailment", "contradiction", and "neutral", respectively).
```
train_data = read_snli(data_dir, is_train=True)
for x0, x1, y in zip(train_data[0][:3], train_data[1][:3], train_data[2][:3]):
    print('premise:', x0)
    print('hypothesis:', x1)
    print('label:', y)
```
The training set has about 550,000 pairs, and the testing set has about 10,000 pairs. The following shows that the three [**labels "entailment", "contradiction", and "neutral" are balanced**] in both the training set and the testing set.
```
test_data = read_snli(data_dir, is_train=False)
for data in [train_data, test_data]:
    print([[row for row in data[2]].count(i) for i in range(3)])
```
### [**Defining a Class for Loading the Dataset**]
Below we define a class for loading the SNLI dataset. The variable `num_steps` in the class constructor specifies the length of a text sequence, so that each minibatch of sequences will have the same shape. In other words, tokens after the first `num_steps` ones in a longer sequence are trimmed, while special tokens "<pad>" are appended to shorter sequences until their length becomes `num_steps`. By implementing the `__getitem__` function, we can arbitrarily access the premise, hypothesis, and label with the index `idx`.
```
#@save
class SNLIDataset(torch.utils.data.Dataset):
    """A customized dataset to load the SNLI dataset."""
    def __init__(self, dataset, num_steps, vocab=None):
        self.num_steps = num_steps
        all_premise_tokens = d2l.tokenize(dataset[0])
        all_hypothesis_tokens = d2l.tokenize(dataset[1])
        if vocab is None:
            self.vocab = d2l.Vocab(all_premise_tokens + all_hypothesis_tokens,
                                   min_freq=5, reserved_tokens=['<pad>'])
        else:
            self.vocab = vocab
        self.premises = self._pad(all_premise_tokens)
        self.hypotheses = self._pad(all_hypothesis_tokens)
        self.labels = torch.tensor(dataset[2])
        print('read ' + str(len(self.premises)) + ' examples')

    def _pad(self, lines):
        return torch.tensor([d2l.truncate_pad(
            self.vocab[line], self.num_steps, self.vocab['<pad>'])
            for line in lines])

    def __getitem__(self, idx):
        return (self.premises[idx], self.hypotheses[idx]), self.labels[idx]

    def __len__(self):
        return len(self.premises)
```
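The truncate-then-pad behavior described above can be sketched independently of `d2l.truncate_pad` (which is the implementation the class actually uses):

```python
def truncate_pad(line, num_steps, padding_token):
    # a minimal sketch of the truncate-then-pad behavior described above
    if len(line) > num_steps:
        return line[:num_steps]  # truncate longer sequences
    return line + [padding_token] * (num_steps - len(line))  # pad shorter ones

print(truncate_pad([1, 2, 3], 5, 0))           # [1, 2, 3, 0, 0]
print(truncate_pad([1, 2, 3, 4, 5, 6], 5, 0))  # [1, 2, 3, 4, 5]
```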
### [**Putting It All Together**]
Now we can invoke the `read_snli` function and the `SNLIDataset` class to download the SNLI dataset and return `DataLoader` instances for both training and testing sets, together with the vocabulary of the training set. It is noteworthy that we must use the vocabulary constructed from the training set as that of the testing set. As a result, any new token from the testing set will be unknown to the model trained on the training set.
```
#@save
def load_data_snli(batch_size, num_steps=50):
    """Download the SNLI dataset and return data iterators and vocabulary."""
    num_workers = d2l.get_dataloader_workers()
    data_dir = d2l.download_extract('SNLI')
    train_data = read_snli(data_dir, True)
    test_data = read_snli(data_dir, False)
    train_set = SNLIDataset(train_data, num_steps)
    test_set = SNLIDataset(test_data, num_steps, train_set.vocab)
    train_iter = torch.utils.data.DataLoader(train_set, batch_size,
                                             shuffle=True,
                                             num_workers=num_workers)
    test_iter = torch.utils.data.DataLoader(test_set, batch_size,
                                            shuffle=False,
                                            num_workers=num_workers)
    return train_iter, test_iter, train_set.vocab
```
Here we set the batch size to 128 and the sequence length to 50, and invoke the `load_data_snli` function to get the data iterators and vocabulary. Then we print the vocabulary size.
```
train_iter, test_iter, vocab = load_data_snli(128, 50)
len(vocab)
```
Now we print the shape of the first minibatch. In contrast to sentiment analysis, we have two inputs `X[0]` and `X[1]` representing premises and hypotheses, respectively.
```
for X, Y in train_iter:
    print(X[0].shape)
    print(X[1].shape)
    print(Y.shape)
    break
```
## Summary
* Natural language inference studies whether a hypothesis can be inferred from a premise, where both are text sequences.
* In natural language inference, relationships between premises and hypotheses include entailment, contradiction, and neutral.
* The Stanford Natural Language Inference (SNLI) corpus is a popular benchmark dataset of natural language inference.
## Exercises
1. Machine translation has long been evaluated based on superficial $n$-gram matching between an output translation and a ground-truth translation. Can you design a measure for evaluating machine translation results by using natural language inference?
1. How can we change hyperparameters to reduce the vocabulary size?
[Discussions](https://discuss.d2l.ai/t/5722)
##### Copyright 2020 The Cirq Developers
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Noise
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/noise"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/noise.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/noise.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/noise.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
```
try:
    import cirq
except ImportError:
    print("installing cirq...")
    !pip install --quiet cirq
    print("installed cirq.")
    import cirq
```
For simulation, it is useful to have `Gate` objects that enact noisy quantum evolution. Cirq supports modeling noise via *operator sum* representations of noise (these evolutions are also known as quantum operations or quantum dynamical maps).
This formalism models evolution of the density matrix $\rho$ via
$$
\rho \rightarrow \sum_{k = 1}^{m} A_k \rho A_k^\dagger
$$
where $A_k$ are known as *Kraus operators*. These operators are not necessarily unitary but must satisfy the trace-preserving property
$$
\sum_k A_k^\dagger A_k = I .
$$
A channel with $m = 1$ unitary Kraus operator is called *coherent* (and is equivalent to a unitary gate operation), otherwise the channel is called *incoherent*. For a given noisy channel, Kraus operators are not necessarily unique. For more details on these operators, see [John Preskill's lecture notes](http://theory.caltech.edu/~preskill/ph219/chap3_15.pdf).
## Common channels
Cirq defines many commonly used quantum channels in [`ops/common_channels.py`](https://github.com/quantumlib/Cirq/blob/master/cirq/ops/common_channels.py). For example, the single-qubit bit-flip channel
$$
\rho \rightarrow (1 - p) \rho + p X \rho X
$$
with parameter $p = 0.1$ can be created as follows.
```
"""Get a single-qubit bit-flip channel."""
bit_flip = cirq.bit_flip(p=0.1)
```
To see the Kraus operators of a channel, the `cirq.channel` protocol can be used. (See the [protocols guide](./protocols.ipynb).)
```
for i, kraus in enumerate(cirq.channel(bit_flip)):
print(f"Kraus operator {i + 1} is:\n", kraus, end="\n\n")
```
As mentioned, all channels are subclasses of `cirq.Gate`. As such, they can act on qubits and be used in circuits in the same manner as gates.
```
"""Example of using channels in a circuit."""
# See the number of qubits a channel acts on.
nqubits = bit_flip.num_qubits()
print(f"Bit flip channel acts on {nqubits} qubit(s).\n")
# Apply the channel to each qubit in a circuit.
circuit = cirq.Circuit(
bit_flip.on_each(cirq.LineQubit.range(3))
)
print(circuit)
```
Channels can even be controlled.
```
"""Example of controlling a channel."""
# Get the controlled channel.
controlled_bit_flip = bit_flip.controlled(num_controls=1)
# Use it in a circuit.
circuit = cirq.Circuit(
controlled_bit_flip(*cirq.LineQubit.range(2))
)
print(circuit)
```
In addition to the bit-flip channel, other common channels predefined in Cirq are shown below. Definitions of these channels can be found in their docstrings - e.g., `help(cirq.depolarize)`.
* `cirq.phase_flip`
* `cirq.phase_damp`
* `cirq.amplitude_damp`
* `cirq.depolarize`
* `cirq.asymmetric_depolarize`
* `cirq.reset`
For example, the asymmetric depolarizing channel is defined by
$$
\rho \rightarrow (1-p_x-p_y-p_z) \rho + p_x X \rho X + p_y Y \rho Y + p_z Z \rho Z
$$
and can be instantiated as follows.
```
"""Get an asymmetric depolarizing channel."""
depo = cirq.asymmetric_depolarize(
p_x=0.10,
p_y=0.05,
p_z=0.15,
)
circuit = cirq.Circuit(
depo.on_each(cirq.LineQubit(0))
)
print(circuit)
```
## The `channel` and `mixture` protocols
We have seen the `cirq.channel` protocol which returns the Kraus operators of a channel. Some channels have the interpretation of randomly applying a single unitary Kraus operator $U_k$ with probability $p_k$, namely
$$
\rho \rightarrow \sum_k p_k U_k \rho U_k^\dagger \quad \text{where} \quad \sum_k p_k = 1 \text{ and } U_k U_k^\dagger = I .
$$
For example, the bit-flip channel from above
$$
\rho \rightarrow (1 - p) \rho + p X \rho X
$$
can be interpreted as doing nothing (applying identity) with probability $1 - p$ and flipping the bit (applying $X$) with probability $p$. Channels with these interpretations support the `cirq.mixture` protocol. This protocol returns the probabilities and unitary Kraus operators of the channel.
```
"""Example of using the mixture protocol."""
for prob, kraus in cirq.mixture(bit_flip):
print(f"With probability {prob}, apply\n", kraus, end="\n\n")
```
Channels that do not have this interpretation do not support the `cirq.mixture` protocol. Such channels apply Kraus operators with probabilities that depend on the state $\rho$.
An example of a channel which does not support the mixture protocol is the amplitude damping channel with parameter $\gamma$ defined by Kraus operators
$$
M_0 = \begin{bmatrix} 1 & 0 \cr 0 & \sqrt{1 - \gamma} \end{bmatrix}
\text{and }
M_1 = \begin{bmatrix} 0 & \sqrt{\gamma} \cr 0 & 0 \end{bmatrix} .
$$
```
"""The amplitude damping channel is an example of a channel without a mixture."""
channel = cirq.amplitude_damp(0.1)
if cirq.has_mixture(channel):
print(f"Channel {channel} has a _mixture_ or _unitary_ method.")
else:
print(f"Channel {channel} does not have a _mixture_ or _unitary_ method.")
```
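To see why no state-independent mixture exists for this channel, note that interpreting the Kraus operators probabilistically would assign each one the "probability" $p_k = \mathrm{Tr}(M_k \rho M_k^\dagger)$, which depends on $\rho$. A standalone NumPy sketch (independent of Cirq):

```python
import numpy as np

gamma = 0.1
# Amplitude damping Kraus operators, as defined above.
M0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
M1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

def kraus_probs(rho):
    """Probability of each Kraus operator firing: p_k = Tr(M_k rho M_k^dagger)."""
    return [np.trace(M @ rho @ M.conj().T).real for M in (M0, M1)]

rho0 = np.diag([1.0, 0.0])  # |0><0|
rho1 = np.diag([0.0, 1.0])  # |1><1|
print([round(p, 3) for p in kraus_probs(rho0)])  # [1.0, 0.0]
print([round(p, 3) for p in kraus_probs(rho1)])  # [0.9, 0.1]
```

The damping operator $M_1$ never fires on $|0\rangle$ but fires with probability $\gamma$ on $|1\rangle$, so no fixed probability distribution over unitaries can reproduce this channel.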
To summarize:
* Every `Gate` in Cirq supports the `cirq.channel` protocol.
- If magic method `_channel_` is not defined, `cirq.channel` looks for `_mixture_` then for `_unitary_`.
* A subset of channels which support `cirq.channel` also support the `cirq.mixture` protocol.
- If magic method `_mixture_` is not defined, `cirq.mixture` looks for `_unitary_`.
* A subset of channels which support `cirq.mixture` also support the `cirq.unitary` protocol.
For concrete examples, consider `cirq.X`, `cirq.BitFlipChannel`, and `cirq.AmplitudeDampingChannel` which are all subclasses of `cirq.Gate`.
* `cirq.X` defines the `_unitary_` method.
- As a result, it supports the `cirq.unitary` protocol, the `cirq.mixture` protocol, and the `cirq.channel` protocol.
* `cirq.BitFlipChannel` defines the `_mixture_` method but not the `_unitary_` method.
- As a result, it only supports the `cirq.mixture` protocol and the `cirq.channel` protocol.
* `cirq.AmplitudeDampingChannel` defines the `_channel_` method, but not the `_mixture_` method or the `_unitary_` method.
- As a result, it only supports the `cirq.channel` protocol.
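The fallback order can be pictured with a small schematic in plain Python. This is an illustrative sketch of the lookup logic, not Cirq's actual protocol implementation, and the class names `XGate` and `BitFlip` are hypothetical stand-ins:

```python
import numpy as np

def kraus_operators(gate):
    """Schematic fallback: _channel_ -> _mixture_ -> _unitary_ (illustrative only)."""
    if hasattr(gate, "_channel_"):
        return tuple(np.asarray(K) for K in gate._channel_())
    if hasattr(gate, "_mixture_"):
        # A mixture (p_k, U_k) converts to Kraus operators sqrt(p_k) * U_k.
        return tuple(np.sqrt(p) * np.asarray(U) for p, U in gate._mixture_())
    if hasattr(gate, "_unitary_"):
        return (np.asarray(gate._unitary_()),)
    raise TypeError("gate supports none of the channel protocols")

class XGate:  # hypothetical stand-in defining only _unitary_
    def _unitary_(self):
        return np.array([[0.0, 1.0], [1.0, 0.0]])

class BitFlip:  # hypothetical stand-in defining only _mixture_
    def __init__(self, p):
        self.p = p
    def _mixture_(self):
        return ((1 - self.p, np.eye(2)),
                (self.p, np.array([[0.0, 1.0], [1.0, 0.0]])))

print(len(kraus_operators(XGate())))       # 1
print(len(kraus_operators(BitFlip(0.1))))  # 2
```

A unitary gate yields a single Kraus operator, while a two-element mixture yields two, matching the hierarchy described above.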
## Custom channels
Channels not defined in `cirq.ops.common_channels` can be user-defined. Defining custom channels is similar to defining [custom gates](./custom_gates.ipynb).
A minimal example for defining the channel
$$
\rho \mapsto (1 - p) \rho + p Y \rho Y
$$
is shown below.
```
"""Minimal example of defining a custom channel."""
class BitAndPhaseFlipChannel(cirq.SingleQubitGate):
def __init__(self, p: float) -> None:
self._p = p
def _mixture_(self):
ps = [1.0 - self._p, self._p]
ops = [cirq.unitary(cirq.I), cirq.unitary(cirq.Y)]
return tuple(zip(ps, ops))
def _has_mixture_(self) -> bool:
return True
def _circuit_diagram_info_(self, args) -> str:
return f"BitAndPhaseFlip({self._p})"
```
Note: The `_has_mixture_` magic method is not strictly required but is recommended.
We can now instantiate this channel and get its mixture:
```
"""Custom channels can be used like any other channels."""
bit_phase_flip = BitAndPhaseFlipChannel(p=0.05)
for prob, kraus in cirq.mixture(bit_phase_flip):
print(f"With probability {prob}, apply\n", kraus, end="\n\n")
```
Note: Since `_mixture_` is defined, the `cirq.channel` protocol can also be used.
The custom channel can be used in a circuit just like other predefined channels.
```
"""Example of using a custom channel in a circuit."""
circuit = cirq.Circuit(
bit_phase_flip.on_each(*cirq.LineQubit.range(3))
)
circuit
```
Note: If a custom channel does not have a mixture, it should instead define the `_channel_` magic method to return a sequence of Kraus operators (as `numpy.ndarray`s). Defining a `_has_channel_` method which returns `True` is optional but recommended.
This method of defining custom channels is the most general, but simple channels such as the custom `BitAndPhaseFlipChannel` can also be created directly from a `Gate` with the convenient `Gate.with_probability` method.
```
"""Create a channel with Gate.with_probability."""
channel = cirq.Y.with_probability(probability=0.05)
```
This produces the same mixture as the custom `BitAndPhaseFlip` channel above.
```
for prob, kraus in cirq.mixture(channel):
print(f"With probability {prob}, apply\n", kraus, end="\n\n")
```
Note that the order of Kraus operators is reversed from above, but this of course does not affect the action of the channel.
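This can be checked directly with NumPy (a standalone sketch, independent of Cirq): applying the Kraus operators $\sqrt{1-p}\,I$ and $\sqrt{p}\,Y$ in either order yields the same output density matrix.

```python
import numpy as np

p = 0.05
I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])

# The same Kraus operators, listed in both orders.
kraus_a = [np.sqrt(1 - p) * I2, np.sqrt(p) * Y]
kraus_b = kraus_a[::-1]

def apply_channel(kraus, rho):
    """Operator-sum action: rho -> sum_k K rho K^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # an arbitrary test state
print(np.allclose(apply_channel(kraus_a, rho), apply_channel(kraus_b, rho)))  # True
```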
## Simulating noisy circuits
### Density matrix simulation
The `cirq.DensityMatrixSimulator` can simulate any noisy circuit (i.e., can apply any quantum channel) because it stores the full density matrix $\rho$. This simulation strategy updates the state $\rho$ by directly applying the Kraus operators of each quantum channel.
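The update rule can be reproduced by hand for the small circuit below: starting from $\rho = |1\rangle\langle 1|$ (the state after the $X$ gate) and applying the amplitude damping Kraus operators. This is an illustrative NumPy sketch of the math, not the simulator's internals:

```python
import numpy as np

gamma = 0.1
# Amplitude damping Kraus operators, as defined earlier.
M0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
M1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

rho = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|, the state after X

# Density matrix update: rho -> sum_k M_k rho M_k^dagger.
rho_out = sum(M @ rho @ M.conj().T for M in (M0, M1))
print(np.round(rho_out.real, 3))  # diag(gamma, 1 - gamma)
```

The result diag($\gamma$, $1-\gamma$) matches what the density matrix simulator reports for the same circuit.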
```
"""Simulating a circuit with the density matrix simulator."""
# Get a circuit.
qbit = cirq.GridQubit(0, 0)
circuit = cirq.Circuit(
cirq.X(qbit),
cirq.amplitude_damp(0.1).on(qbit)
)
# Display it.
print("Simulating circuit:")
print(circuit)
# Simulate with the density matrix simulator.
dsim = cirq.DensityMatrixSimulator()
rho = dsim.simulate(circuit).final_density_matrix
# Display the final density matrix.
print("\nFinal density matrix:")
print(rho)
```
Note that the density matrix simulator supports both the `run` method, which gives access only to measurement results, and the `simulate` method (used above), which gives access to the full density matrix.
### Monte Carlo wavefunction simulation
Noisy circuits with arbitrary channels can also be simulated with the `cirq.Simulator`. When simulating such a channel, a single Kraus operator is randomly sampled (according to the probability distribution) and applied to the wavefunction. This method is known as "Monte Carlo (wavefunction) simulation" or "quantum trajectories."
Note: For channels which do not support the `cirq.mixture` protocol, the probability of applying each Kraus operator depends on the state. In contrast, for channels which do support the `cirq.mixture` protocol, the probability of applying each Kraus operator is independent of the state.
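One trajectory step can be sketched in plain NumPy for the amplitude damping channel: sample a Kraus operator $M_k$ with probability $\|M_k |\psi\rangle\|^2$, apply it, and renormalize. This is an illustrative sketch of the idea, not `cirq.Simulator`'s internals:

```python
import numpy as np

rng = np.random.default_rng(0)

gamma = 0.1
# Amplitude damping Kraus operators, as defined earlier.
M0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
M1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
kraus = (M0, M1)

psi = np.array([0.0, 1.0])  # start in |1>

# One trajectory step: sample a Kraus operator with probability ||M_k psi||^2,
# apply it, and renormalize the wavefunction.
probs = [np.linalg.norm(M @ psi) ** 2 for M in kraus]
k = rng.choice(len(kraus), p=probs)
psi = kraus[k] @ psi
psi = psi / np.linalg.norm(psi)
print(f"Sampled Kraus operator M_{k}; new state: {psi}")
```

For this state the sampling probabilities are $1-\gamma$ and $\gamma$; repeating the step over many runs reproduces the density matrix evolution on average.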
```
"""Simulating a noisy circuit via Monte Carlo simulation."""
# Get a circuit.
qbit = cirq.NamedQubit("Q")
circuit = cirq.Circuit(cirq.bit_flip(p=0.5).on(qbit))
# Display it.
print("Simulating circuit:")
print(circuit)
# Simulate with the cirq.Simulator.
sim = cirq.Simulator()
psi = sim.simulate(circuit).dirac_notation()
# Display the final wavefunction.
print("\nFinal wavefunction:")
print(psi)
```
To see that the output is stochastic, you can run the cell above multiple times. Since $p = 0.5$ in the bit-flip channel, you should get $|0\rangle$ roughly half the time and $|1\rangle$ roughly half the time. The `run` method with many repetitions can also be used to see this behavior.
```
"""Example of Monte Carlo wavefunction simulation with the `run` method."""
circuit = cirq.Circuit(
cirq.bit_flip(p=0.5).on(qbit),
cirq.measure(qbit),
)
res = sim.run(circuit, repetitions=100)
print(res.histogram(key=qbit))
```
## Adding noise to circuits
Often circuits are defined with just unitary operations, but we want to simulate them with noise. There are several methods for inserting noise in Cirq.
For any circuit, the `with_noise` method can be called to insert a channel after every moment.
```
"""One method to insert noise in a circuit."""
# Define some noiseless circuit.
circuit = cirq.testing.random_circuit(
qubits=3, n_moments=3, op_density=1, random_state=11
)
# Display the noiseless circuit.
print("Circuit without noise:")
print(circuit)
# Add noise to the circuit.
noisy = circuit.with_noise(cirq.depolarize(p=0.01))
# Display it.
print("\nCircuit with noise:")
print(noisy)
```
This circuit can then be simulated using the methods described above.
The `with_noise` method creates a `cirq.NoiseModel` from its input and adds noise to each moment. A `cirq.NoiseModel` can be explicitly created and used to add noise to a single operation, single moment, or series of moments as follows.
```
"""Add noise to an operation, moment, or sequence of moments."""
# Create a noise model.
noise_model = cirq.NoiseModel.from_noise_model_like(cirq.depolarize(p=0.01))
# Get a qubit register.
qreg = cirq.LineQubit.range(2)
# Add noise to an operation.
op = cirq.CNOT(*qreg)
noisy_op = noise_model.noisy_operation(op)
# Add noise to a moment.
moment = cirq.Moment(cirq.H.on_each(qreg))
noisy_moment = noise_model.noisy_moment(moment, system_qubits=qreg)
# Add noise to a sequence of moments.
circuit = cirq.Circuit(cirq.H(qreg[0]), cirq.CNOT(*qreg))
noisy_circuit = noise_model.noisy_moments(circuit, system_qubits=qreg)
```
Note: In the last two examples, the argument `system_qubits` can be a subset of the qubits in the moment(s).
The output of each "noisy method" is a `cirq.OP_TREE` which can be converted to a circuit by passing it into the `cirq.Circuit` constructor. For example, we create a circuit from the `noisy_moment` below.
```
"""Creating a circuit from a noisy cirq.OP_TREE."""
cirq.Circuit(noisy_moment)
```
Another technique is to pass a noise channel to the density matrix simulator as shown below.
```
"""Define a density matrix simulator with a noise model."""
noisy_dsim = cirq.DensityMatrixSimulator(
noise=cirq.generalized_amplitude_damp(p=0.1, gamma=0.5)
)
```
This will not explicitly add channels to the circuit being simulated, but the circuit will be simulated as though these channels were present.
Other than these general methods, channels can be added to circuits at any moment just as gates are. The channels can be different, be correlated, act on a subset of qubits, be custom defined, etc.
```
"""Defining a circuit with multiple noisy channels."""
qreg = cirq.LineQubit.range(4)
circ = cirq.Circuit(
cirq.H.on_each(qreg),
cirq.depolarize(p=0.01).on_each(qreg),
cirq.qft(*qreg),
bit_phase_flip.on_each(qreg[1::2]),
cirq.qft(*qreg, inverse=True),
cirq.reset(qreg[1]),
cirq.measure(*qreg),
cirq.bit_flip(p=0.07).controlled(1).on(*qreg[2:]),
)
print("Circuit with multiple channels:\n")
print(circ)
```
Circuits can also be modified with standard methods like `insert` to add channels at any point in the circuit. For example, to model simple state preparation errors, one can add bit-flip channels to the start of the circuit as follows.
```
"""Example of inserting channels in circuits."""
circ.insert(0, cirq.bit_flip(p=0.1).on_each(qreg))
print(circ)
```
# Model Evaluation & Validation
## Predicting Boston Housing Prices
In this project, we will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a *good fit* could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.
The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Housing). The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:
- 16 data points have an `'MEDV'` value of 50.0. These data points likely contain **missing or censored values** and have been removed.
- 1 data point has an `'RM'` value of 8.78. This data point can be considered an **outlier** and has been removed.
- The features `'RM'`, `'LSTAT'`, `'PTRATIO'`, and `'MEDV'` are essential. The remaining **non-relevant features** have been excluded.
- The feature `'MEDV'` has been **multiplicatively scaled** to account for 35 years of market inflation.
```
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from sklearn.model_selection import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print("Boston housing dataset has {} data points with {} variables each.".format(*data.shape))
data.head()
prices.std()
```
## Data Exploration
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into **features** and the **target variable**. The **features**, `'RM'`, `'LSTAT'`, and `'PTRATIO'`, give us quantitative information about each data point. The **target variable**, `'MEDV'`, will be the variable we seek to predict. These are stored in `features` and `prices`, respectively.
### Calculate Statistics
Compute descriptive statistics
In the code cell below, you will need to implement the following:
- Calculate the minimum, maximum, mean, median, and standard deviation of `'MEDV'`, which is stored in `prices`.
- Store each calculation in their respective variable.
```
minimum_price = prices.min()
maximum_price = prices.max()
mean_price = prices.mean()
median_price = prices.median()
std_price = prices.std()
# Show the calculated statistics
print("Statistics for Boston housing dataset:\n")
print("Minimum price: ${}".format(minimum_price))
print("Maximum price: ${}".format(maximum_price))
print("Mean price: ${}".format(mean_price))
print("Median price: ${}".format(median_price))
print("Standard deviation of prices: ${}".format(std_price))
```
### Feature Observation
We are using three features from the Boston housing dataset: `'RM'`, `'LSTAT'`, and `'PTRATIO'`. For each data point (neighborhood):
- `'RM'` is the average number of rooms among homes in the neighborhood.
- `'LSTAT'` is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
- `'PTRATIO'` is the ratio of students to teachers in primary and secondary schools in the neighborhood.
**Observations:**
- We can expect `RM` to cause `MEDV` to increase, because larger houses take up more land, are more expensive to build, and, more importantly, because people are willing to pay more for a larger house. This effect likely weakens when `RM` is very large. For example, buyers may not necessarily value an 8-room house more than a 7-room house (other factors would dominate the price).
- We can expect `LSTAT` to lower the value of a house, because people prefer to live among more "upper class" neighbors.
- We can expect `PTRATIO` to lower the value as well, because parents place greater value on schools where the student/teacher ratio is smaller.
----
## Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
### Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the [*coefficient of determination*](http://stattrek.com/statistics/dictionary.aspx?definition=coefficient_of_determination), R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the **target variable**. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the *mean* of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the **features**. _A model can be given a negative R<sup>2</sup> as well, which indicates that the model is **arbitrarily worse** than one that always predicts the mean of the target variable._
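The definition above can be sketched from first principles in plain NumPy, using $R^2 = 1 - SS_{res}/SS_{tot}$. This mirrors, but does not replace, `sklearn.metrics.r2_score`:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot, computed from first principles."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

y = [3.0, -0.5, 2.0, 7.0, 4.2]
# Predicting the mean gives R^2 = 0; perfect predictions give R^2 = 1.
print(r_squared(y, [np.mean(y)] * 5))  # 0.0
print(r_squared(y, y))                 # 1.0
print(round(r_squared(y, [2.5, 0.0, 2.1, 7.8, 5.3]), 3))  # 0.923
```

The last line uses the five predictions from the table in the next section, so the hand-rolled formula agrees with the `r2_score` result computed there.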
For the `performance_metric` function in the code cell below, you will need to implement the following:
- Use `r2_score` from `sklearn.metrics` to perform a performance calculation between `y_true` and `y_predict`.
- Assign the performance score to the `score` variable.
```
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
""" Calculates and returns the performance score between
true and predicted values based on the metric chosen. """
score = r2_score(y_true, y_predict)
# Return the score
return score
```
### Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Run the cell below to use the `performance_metric` function and calculate this model's coefficient of determination.
```
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print("Model has a coefficient of determination, R^2, of {:.3f}.".format(score))
```
**Observation:**
The model seems to have successfully captured a good amount of variance (over 92%). This value of $R^2$ is quite high in general. Of course, it depends on the application. In some fields (financial market data, for example), even a small value of $R^2$ can be remarkable, because the data might have been presumed to be just noise. In other applications, 92% might still be considered insufficient, and it might be decided that a more complex model is needed.
### Shuffle and Split Data
Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.
For the code cell below, you will need to implement the following:
- Use `train_test_split` from `sklearn.model_selection` to shuffle and split the `features` and `prices` data into training and testing sets.
- Split the data into 80% training and 20% testing.
- Set the `random_state` for `train_test_split` to a value of your choice. This ensures results are consistent.
- Assign the train and testing splits to `X_train`, `X_test`, `y_train`, and `y_test`.
```
# Import 'train_test_split'
from sklearn.model_selection import train_test_split
# Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features, prices, test_size=0.2, random_state=42)
# Success
print("Training and testing split was successful.")
```
### Training and Testing
* What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Splitting the dataset lets us evaluate the model on data it has never seen, which is the only honest estimate of generalization. With too little test data, that performance estimate is unreliable, and an overfit model (one with too much variance, performing well on training data but poorly on unseen data) may go undetected. With too much test data, there isn't enough training data for the model to learn properly (too much bias).
----
## Analyzing Model Performance
In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing `'max_depth'` parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
### Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use these graphs to answer the following question.
```
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
```
### Learning the Data
* Choose one of the graphs above and state the maximum depth for the model.
* What happens to the score of the training curve as more training points are added? What about the testing curve?
* Would having more training points benefit the model?
Are the learning curves converging to particular scores? Generally speaking, the more data you have, the better. But if your training and testing curves are converging with a score above your benchmark threshold, would this be necessary?
Think about the pros and cons of adding more training points based on if the training and testing curves are converging.
**Observation:**
Looking at the plot corresponding to a max depth of 3, we see the training score decreasing as `m` (the number of training points) is increased. The testing score rises with `m`, quickly at first, and then slowly.
The changes in the two curves are minimal after a while (say, after 300 training points) and it is not going to be useful to keep adding training points after that.
### Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the **learning curves**, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the `performance_metric` function.
**Run the code cell below and use this graph to answer the following two questions (Q5 and Q6).**
```
vs.ModelComplexity(X_train, y_train)
```
### Bias-Variance Tradeoff
* When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance?
* How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
**Observation:** High bias is a sign of underfitting (the model is not complex enough to pick up the nuances in the data) and high variance is a sign of overfitting (the model is memorizing the data and cannot generalize well). Think about which model (depth 1 or 10) aligns with which part of the tradeoff.
**Answer:**
For a max depth of 1, the model suffers from high bias (validation error is high too, but we wouldn't call it overfitting because we've not managed to reduce even the training error yet).
When the max depth is 10, the model has low bias (low training error) and has high variance. It is clearly overfit at this point. We see the validation score decreasing with increasing complexity after `max_depth=4`.
### Best-Guess Optimal Model
* Which maximum depth results in a model that best generalizes to unseen data?
**Observation:**
I would choose `max_depth=3`. The validation performance is the same for `max_depth=4` as well, but the former has lower complexity.
Training scores keep getting better with increasing `max_depth` but that says nothing about how well the model will do on unseen data.
-----
## Evaluating Model Performance
In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from `fit_model`.
### Grid Search
**Answer:** Grid search is an exhaustive search for hyperparameters over a joint grid of candidate values in parameter space. The goal is to find the combination of parameter values for which the model performs best.
### Cross-Validation
In k-fold CV, which is a model selection technique, the training data is divided into k equally sized subsamples (data is randomly shuffled first if necessary). Each model is then fit k times, holding each one of the k subsamples as validation set and training on the combination of all the other k-1 subsamples. The results are averaged over these k trials for each model. The model that performs the best in this metric is then chosen.
An important issue faced in the choice of hyperparameter selection is that if one just has one holdout test set, there is the danger of overfitting to that particular test set. k-fold CV alleviates this issue.
Link to the [docs](http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation)
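The k-fold procedure described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not scikit-learn's `KFold` implementation:

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k folds; yield (train_idx, val_idx) pairs."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)        # shuffle the data order first
    folds = np.array_split(idx, k)  # k roughly equal subsamples
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# Each point appears in exactly one validation fold across the k splits.
splits = list(kfold_indices(n=10, k=5))
print(len(splits))  # 5
print(sorted(np.concatenate([val for _, val in splits]).tolist()))
```

In practice one would fit the model on each `train` set, score it on the corresponding `val` set, and average the k scores per candidate model.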
### Fitting a Model
Your final implementation requires that you bring everything together and train a model using the **decision tree algorithm**. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the `'max_depth'` parameter for the decision tree. The `'max_depth'` parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called *supervised learning algorithms*.
In addition, you will find your implementation uses `ShuffleSplit()` as an alternative form of cross-validation (see the `'cv_sets'` variable). While it is not the K-Fold cross-validation technique, this type of cross-validation is just as useful. The `ShuffleSplit()` implementation below will create 10 (`'n_splits'`) shuffled sets, and for each shuffle, 20% (`'test_size'`) of the data will be used as the *validation set*. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.
For the `fit_model` function in the code cell below, you will need to implement the following:
- Use [`DecisionTreeRegressor`](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) from `sklearn.tree` to create a decision tree regressor object.
- Assign this object to the `'regressor'` variable.
- Create a dictionary for `'max_depth'` with the values from 1 to 10, and assign this to the `'params'` variable.
- Use [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html) from `sklearn.metrics` to create a scoring function object.
- Pass the `performance_metric` function as a parameter to the object.
- Assign this scoring function to the `'scoring_fnc'` variable.
- Use [`GridSearchCV`](http://scikit-learn.org/0.20/modules/generated/sklearn.model_selection.GridSearchCV.html) from `sklearn.model_selection` to create a grid search object.
- Pass the variables `'regressor'`, `'params'`, `'scoring_fnc'`, and `'cv_sets'` as parameters to the object.
- Assign the `GridSearchCV` object to the `'grid'` variable.
```
# Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeRegressor
def fit_model(X, y):
""" Performs grid search over the 'max_depth' parameter for a
decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
params = {'max_depth':[i for i in range(1,11)]}
# Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric)
# Create the grid search cv object --> GridSearchCV()
# Make sure to include the right parameters in the object:
# (estimator, param_grid, scoring, cv) which have values 'regressor', 'params', 'scoring_fnc', and 'cv_sets' respectively.
grid = GridSearchCV(regressor, param_grid=params,
scoring=scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
```
### Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a *decision tree regressor*, the model has learned *what the best questions to ask about the input data are*, and can respond with a prediction for the **target variable**. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
### Optimal Model
* What maximum depth does the optimal model have? How does this result compare with our previous guess?
Fit the decision tree regressor to the training data and produce an optimal model.
```
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print("Parameter 'max_depth' is {} for the optimal model.".format(reg.get_params()['max_depth']))
```
The output of the GridSearchCV was a `max_depth` of 4. I chose 3 earlier because the performance difference for me was too small to justify the added complexity, but I understand why GridSearchCV() chose 4.
### Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
* What price would you recommend each client sell his/her home at?
* Do these prices seem reasonable given the values for the respective features?
Use the statistics you calculated in the **Data Exploration** section to help justify your response. Of the three clients, client 3 has the biggest house, in the best public school neighborhood with the lowest poverty level, while client 2 has the smallest house, in a neighborhood with a relatively high poverty rate and not the best public schools.
Run the code block below to have your optimized model make predictions for each client's home.
```
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print("Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price))
```
**Answer:**
The model predictions are
- Predicted selling price for Client 1's home: \$403,025.00
- Predicted selling price for Client 2's home: \$237,478.72
- Predicted selling price for Client 3's home: \$931,636.36
Yes, the outputs seem reasonable. I'd recommend that the clients use these prices as anchors when thinking about how to price their houses. Factors beyond these three variables also often determine the price (age of the house, style, condition of the interior, whether there is a garden, whether there is parking, etc.). But this regression can tell a client what a house that is about median on those other features would be priced at, so they can compare their home's particular features with the neighborhood's and add or subtract a dollar amount from the prediction that used just these three features. The clients should also realize that these outputs come from one particular training set; with different data, the predictions may be a little different. So they should be open to somewhat different offers from buyers (allow for a range around the predictions).
### Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted.
**Run the code cell below to run the `fit_model` function ten times with different training and testing sets to see how the prediction for a specific client changes with respect to the data it's trained on.**
```
vs.PredictTrials(features, prices, fit_model, client_data)
```
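`vs.PredictTrials` is part of the project's supplied `visuals` helper module. A rough, self-contained sketch of the idea (fit on ten different train/test splits, then watch how the prediction for one client moves) using synthetic stand-in data, since the project's data isn't loaded here:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 3))                   # stand-in features
y = 5 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 1, 200)   # stand-in prices
client = [[5.0, 17.0, 15.0]]                            # hypothetical client row

predictions = []
for trial in range(10):
    # A different split each trial, so each model sees different data
    X_tr, _, y_tr, _ = train_test_split(X, y, test_size=0.2, random_state=trial)
    model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)
    predictions.append(model.predict(client)[0])

print("Range of predictions: {:.2f}".format(max(predictions) - min(predictions)))
```

The spread of the ten predictions is a crude measure of how sensitive the model is to the particular data it was trained on.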
### Applicability in current setting
* In a few sentences, discuss whether the constructed model should or should not be used in a real-world setting.
**Some considerations:**
- How relevant today is data that was collected from 1978? How important is inflation?
- Are the features present in the data sufficient to describe a home? Do you think factors like quality of appliances in the home, square feet of the plot area, presence of a pool, etc. should factor in?
- Is the model robust enough to make consistent predictions?
- Would data collected in an urban city like Boston be applicable in a rural city?
- Is it fair to judge the price of an individual home based on the characteristics of the entire neighborhood?
**Observations:**
Using data from 1978 for prediction today would be quite problematic. Inflation is hugely important over such long periods of time, in terms of price increases across the board. Even data from 2007 would be unable to predict house prices in 2009 (because of the intervening crisis of 2008). Likewise, prices today would be quite different from the prices predicted from data as recent as 2010.
As I detailed in the previous question, quality of appliances, size of the plot and many other variables come into play.
The model **is** robust to make consistent predictions within its time and context (Boston of 1978), but it would not apply to other times and places.
The data from Boston would not be readily applicable to a rural area. Yes, some things will remain true universally: more rooms means a higher price, and people prefer upper-class neighbors and better schools. But many other features can be more important in rural settings.
Well, this is not a judgement; it is a prediction of the price that the housing market will end up putting on the house. So, yes, the characteristics of the neighborhood are a big determinant of how the market has priced other houses in the neighborhood.
| github_jupyter |
- title: Equivalence between Policy Gradients and Soft Q-Learning
- summary: Inspecting the gradients of entropy-augmented policy updates to show their equivalence
- author: Braden Hoagland
- date: 2019-08-12
- image: /static/images/soft_q.png
# Introduction
This article will dive into a lot of the math surrounding the gradients of different maximum entropy RL learning methods. In practice, we usually work in the space of objective functions: with both policy gradients and Q-learning, we'll form an objective function and allow an autodiff library to calculate the gradients for us. We never have to see what's going on behind the scenes, which has its pros and cons. A benefit is that working with objective functions is much easier than calculating gradients by hand. On the other hand, it's easy to lose sight of what's really going on when we work at such an abstract level.
This abstraction issue is tackled in the paper `Equivalence Between Policy Gradients and Soft Q-Learning` (https://arxiv.org/abs/1704.06440), and I think it provides some pretty eye-opening insights into what the most common RL algorithms are really doing. I'll be working off of version 4 of the paper from Oct. 2018, the most recent version of the paper at the time of writing.
First I'll walk through some of the basic definitions in the max-entropy RL setting, then I'll pick out the most important bits of math from the paper that show how entropy-augmented Q-learning is really just a policy gradient method.
# Maximum Entropy RL and the Boltzmann Policy
In standard RL, we try to maximize expected cumulative reward $\mathbb{E}[\sum_t r_t]$. In the max-entropy setting, we augment this reward signal with an entropy bonus. The expected cumulative reward of a policy $\pi$ is commonly denoted as $\eta(\pi)$
\begin{align*}
\eta(\pi) &= \mathbb{E} \Big[ \sum_t (r_t + \alpha \mathcal{H}(\pi)) \Big] \\
&= \mathbb{E} \Big[ \sum_t \big( r_t - \alpha \log\pi(a_t | s_t) \big) \Big]
\end{align*}
where $\pi$ is our current policy and $\alpha$ weights how important the entropy is in our reward definition. This intuitively makes the reward seem higher when our policy exhibits high entropy, allowing it to explore its environment more extensively. A key component of this augmented objective is that the entropy is *inside* the sum. Thus an optimal policy will not only try to act with high entropy *now*, but will act in such a way that it finds highly-entropic states in the *future*.
The paper uses slightly different notation, opting to use KL divergence (AKA "relative entropy") instead of just entropy. This uses a reference policy $\bar{\pi}$, which can be thought of as an old, worse policy that we wish to improve on
\begin{align*}
\eta(\pi) &= \mathbb{E} \Big[ \sum_t \big( r_t - \alpha \log\pi(a_t|s_t) + \alpha \log\bar{\pi}(a_t|s_t) \big) \Big] \\
&= \mathbb{E} \Big[ \sum_t \big(r_t - \alpha D_{KL}(\pi \,\Vert\, \bar{\pi}) \big) \Big]
\end{align*}
In the max-entropy setting, optimal policies are stochastic and proportional to exponential of the optimal Q-function. This can be expressed formally as
$$ \pi^* \propto e^{Q^*(s,a)} $$
If this doesn't seem very intuitive, I would recommend a quick scan of the article https://bair.berkeley.edu/blog/2017/10/06/soft-q-learning/. It offers a brief introduction to max-entropy RL (specifically for Q-learning) and some helpful intuitions as to why the above relationship is a good property for a policy to have.
To actually get a policy in this form, we'll change up the definition slightly
$$
\pi = \frac{\bar{\pi} \, e^{Q(s,a) / \alpha}}{\mathbb{E}_{\bar{a}\sim\bar{\pi}} [e^{Q(s,\bar{a}) / \alpha}]}
$$
The numerator of this expression is simply stating that we want our new policy to be like our old policy, but slightly in the direction of $e^Q$. If $\alpha$ is higher (i.e. we want more entropy), we move less in the direction of $e^Q$. The denominator is a normalization constant that ensures that our entire expression is still a valid probability distribution (i.e. the sum over all possible actions comes out to 1).
You may have noticed that the denominator of our policy is really just $e^V$ since $V = \mathbb{E}_{a}[Q]$. We'll use this to simplify our policy
\begin{align*}
V(s) &= \alpha \log \mathbb{E}_{a\sim\bar{\pi}} \big[ e^{Q(s,a)/\alpha} \big] \\
\pi &= \bar{\pi} \, e^{(Q(s,a) - V(s)) / \alpha}
\end{align*}
This new policy definition shows more directly that our policy is proportional to the exponential of the advantage. If our policy is proportional to $e^Q$, it should also be proportional to $e^A$, so this makes sense. From now on, we'll refer to this policy as the 'Boltzmann Policy' and denote it $\pi^B$.
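As a concrete numerical sketch of the soft value and Boltzmann policy definitions above, for a single state with three discrete actions (all numbers here are made up for illustration):

```python
import numpy as np

alpha = 0.5                        # entropy weight (made-up value)
Q = np.array([1.0, 2.0, 0.5])      # made-up Q-values for one state
pi_bar = np.full(3, 1 / 3)         # uniform reference policy

# V(s) = alpha * log E_{a ~ pi_bar}[exp(Q(s, a) / alpha)]
V = alpha * np.log(np.sum(pi_bar * np.exp(Q / alpha)))
# pi = pi_bar * exp((Q - V) / alpha), normalized by construction
pi = pi_bar * np.exp((Q - V) / alpha)

print(pi, pi.sum())  # a valid distribution, concentrated on the highest Q
```

Note how dividing by a larger `alpha` flattens the exponent, giving a higher-entropy policy that moves less aggressively toward the best action.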
# Soft Q-Learning with Boltzmann Backups
From this point onward, there will inevitably be sections of math that seem to leave out non-trivial amounts of work. This is because I think this paper mainly benefits our intuitions about RL. The math proves these new intuitions, but by itself is hard to read. If you're curious and wish to go through all the derivations, I would highly recommend working through the full paper on your own. With that disclaimer out of the way, we can get started...
With normal Q-learning, we define our backup operator $\mathcal{T}$ as follows
$$
\mathcal{T}Q = \mathbb{E}_{r,s'} \big[ r + \gamma \mathbb{E}_{a'\sim\pi}[Q(s', a')] \big]
$$
In the max-entropy setting, we'll have to add in an entropy bonus to the reward signal and simplify accordingly
\begin{align*}
\mathcal{T}Q &= \mathbb{E}_{r,s'} \big[ r + \gamma \mathbb{E}_{a'}[Q(s', a')] - \alpha D_{KL} \big( \pi(\cdot|s') \;\Vert\; \bar{\pi}(\cdot|s') \big) \big] \\
&= \mathbb{E}_{r,s'} \big[ r + \gamma \alpha \log \mathbb{E}_{a'\sim\bar{\pi}}[e^{Q(s',a')/\alpha}] \big]
\end{align*}
See equations 11 and 13 from the paper (which rely on equations 2-6) if you want to see just how exactly that simplification works. To actually perform the optimization step $Q \gets \mathcal{T}Q$, we'll minimize the mean squared error between our current $Q$ and an estimate of $\mathcal{T}Q$. Our regression targets can be defined
\begin{align*}
y &= r + \gamma \alpha \log \mathbb{E}_{a'\sim\bar{\pi}} \big[ e^{Q(s', a') / \alpha} \big] \\
&= r + \gamma V(s')
\end{align*}
Using Boltzmann backups instead of the traditional Q-learning backups is what transforms normal Q-learning into what's conventionally called "soft" Q-learning. That's really all there is to it.
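A minimal sketch of computing these regression targets; the function name and all numbers are illustrative, not from the paper:

```python
import numpy as np

def soft_target(r, Q_next, pi_bar_next, gamma=0.99, alpha=0.5):
    # y = r + gamma * alpha * log E_{a' ~ pi_bar}[exp(Q(s', a') / alpha)]
    #   = r + gamma * V(s')
    V_next = alpha * np.log(np.sum(pi_bar_next * np.exp(Q_next / alpha)))
    return r + gamma * V_next

# As alpha -> 0 the soft value approaches max_a' Q(s', a'),
# recovering the familiar "hard" Q-learning target.
y_soft = soft_target(1.0, np.array([1.0, 2.0]), np.array([0.5, 0.5]))
y_hard = soft_target(1.0, np.array([1.0, 2.0]), np.array([0.5, 0.5]), alpha=0.01)
print(y_soft, y_hard)
```

(For larger Q-values you would want a numerically stable log-sum-exp rather than the direct `exp` shown here.)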
# Policy Gradients and Entropy
I'm assuming you have a solid grasp of policy gradients if you're reading this article, so I'm gonna focus on how they usually aren't applied correctly in the max-entropy setting. PG methods are commonly augmented with an entropy term, as in the following example from the paper
$$
\mathbb{E}_{t, s,a} \Big[ \nabla_\theta \log\pi_\theta(a|s) \sum_{t' \geq t} r_{t'} - \alpha D_{KL}\big (\pi_\theta(\cdot|s) \;\Vert\; \pi(\cdot|s) \big) \Big]
$$
This example essentially tries to maximize reward-to-go with an entropy for the *current* timestep. Maximizing this objective technically isn't what we want, even if it's common practice. What we really want is to maximize a sum over all rewards and entropies that our agent experiences from now into the future.
# Soft Q-Learning = Policy Gradient
The first of two conclusions that this paper comes to is that Soft Q-Learning and the Policy Gradient have exact first-order equivalence. Using the value function and Boltzmann policy definitions from earlier, we can derive the gradient of $\mathbb{E}_{s,a} \big[ \frac{1}{2} \Vert Q_\theta(s,a) - y \Vert^2 \big]$. The paper is able to produce the following expression
$$
\mathbb{E}_{s,a} \Big[ \color{red}{-\alpha \nabla_\theta \log\pi_\theta(a|s) \Delta_{TD} + \alpha^2 \nabla_\theta D_{KL}\big( \pi_\theta(\cdot|s) \;\Vert\; \bar{\pi}(\cdot|s) \big)} + \color{blue}{\nabla_\theta \frac{1}{2} \Vert V_\theta(s) - \hat{V} \Vert^2} \Big]
$$
where $\Delta_{TD}$ is the discounted n-step TD error and $\hat{V}$ is the value regression target formed by $\Delta_{TD}$.
That's kind of a lot, but we can break it down pretty easily. The terms in red represent 1) the usual policy gradient and 2) an additional KL divergence gradient term. The red terms overall represent the gradient you get if you use a policy gradient algorithm with a KL divergence term as your entropy bonus (the actor loss in an actor-critic formulation). The term in blue is quite simply the gradient used to minimize the mean squared error between our current value estimates and our value targets (the critic loss in an actor-critic formulation).
Don't forget that we never explicitly tried to calculate these terms. They came about naturally as an effect of minimizing mean squared error of our Q function and a Boltzmann backup target.
# Soft Q-Learning and the Natural Policy Gradient
The next section of the paper details another connection between Soft Q-learning and policy gradient methods, specifically that damped Q-learning updates are exactly equivalent to natural policy gradient updates.
The natural policy gradient weights the policy gradient with the Fisher information matrix $\mathbb{E}_{s,a} \Big[ \big( \nabla_\theta \log\pi_\theta(a|s) \big)^T \big( \nabla_\theta \log\pi_\theta(a|s) \big) \Big]$. The paper shows that the natural policy gradient in the max-entropy setting is equivalent not to soft Q-learning by itself, but instead to a damped version. In this damped version, we calculate a backed-up Q value and then interpolate between it and the current Q value estimate (basically using Polyak averaging instead of running gradient descent on a mean squared error term).
Although not nearly as direct, this connection highlights how higher-order connections between soft Q-learning and policy gradient methods exist. Higher-order equalities between functions point to functions that are increasingly similar, so this connection really drives the point home that soft Q-learning is deceptively like the policy gradient methods we've been using all this time.
# Experimental Results
The paper authors decided to be nice to us and actually test the theory they derived on some Atari games.
They started out with testing whether or not the usual way of adding entropy bonuses to policy gradient methods is actually worse than the theoretical claims they had just made. As it turns out, using future entropy bonuses $\Big( \text{i.e. } \big( \sum r + \mathcal{H} \big) \Big)$ instead of the simpler, immediate entropy bonus $\Big( \text{i.e. } \big( \sum r \big) + \mathcal{H} \Big)$ results in either similar or superior performance. The below graphs show the results from the experiments, with the future entropy version in blue and the immediate entropy version in red.

They then tested how soft Q-learning compared to normal Q-learning. To make traditional DQN into soft Q-learning, they just modified the regression targets for the Q function. They used the normal target, a target with a KL divergence penalty, and a target with just an entropy bonus. They found that just the entropy bonus resulted in the most improvement, although both soft methods outperformed the "hard" DQN.

To round things out, they tested soft Q-learning and the policy gradient on the same Atari environments to see if they were equivalent in practice. After all, the math shows that their expectations are equivalent, but the variance of those expectations could be different. The experiments they ran make it seem like the two methods are pretty close to each other, with no method seeming largely superior.

# Conclusion and Future Work
Hopefully this made you reconsider what's really going on under the hood with Q-learning. Personally, it blew my mind that two seemingly disparate learning methods could boil down to the same expected update. The theoretical possibilities that this connection could lead to are also incredibly exciting.
Of course, this paper focuses its empirical testing just on environments with discrete action spaces. Since the Boltzmann policy is intractable to sample from in continuous action spaces, more advanced soft Q-learning algorithms (such as Soft Actor-Critic) are currently being pioneered to get accurate results in those more complicated settings as well.
| github_jupyter |
# Sparse Alignment columns
```
class Contig:
def __init__(self, name, seq):
self.name = name
self.seq = seq
def __repr__(self):
return '< "%s" %i nucleotides>' % (self.name, len(self.seq))
def read_contigs(input_file_path):
contigs = []
current_name = ""
seq_collection = []
# Pre-read generates an array of contigs with labels and sequences
with open(input_file_path, 'r') as streamFASTAFile:
for read in streamFASTAFile.read().splitlines():
if read == "":
continue
if read[0] == ">":
# If we have sequence gathered and we run into a second (or more) block
if len(seq_collection) > 0:
sequence = "".join(seq_collection)
seq_collection = [] # clear
contigs.append(Contig(current_name, sequence))
current_name = read[1:] # remove >
else:
# collect the sequence pieces in a list; joining once later is much faster than repeated string concatenation
seq_collection.append(read.upper())
# add the last contig to the list
sequence = "".join(seq_collection)
contigs.append(Contig(current_name, sequence))
return contigs
from collections import Counter
species = read_contigs('9927_alignment.fasta')
for s in species:
s.name = s.name[:6]
informative_columns = {}
consensus_sequence = []
for col in range(len(species[0].seq)):
letters = []
for entry in species:
letters.append(entry.seq[col])
column_seq = ''.join(letters)
consensusing = Counter(column_seq)
consensus_sequence.append(consensusing.most_common()[0][0])
if column_seq != letters[0] * len(species) and col > 200 and col < 1500:
informative_columns[col] = column_seq
print(column_seq, col+1)
species.append(Contig('Consen', ''.join(consensus_sequence)))
```
* Generate a fasta with informative columns
* Majority vote consensus sequence, but it includes gaps
* transpose?
* CSV file write
```
with open('9927_informative_positions.csv', 'w') as csv_out:
csv_out.write('Positions,' + ','.join([str(x+1) for x in sorted(informative_columns.keys())]))
csv_out.write('\n')
for entry in species:
csv_out.write(entry.name[:6] + ",")
for col in range(len(species[0].seq)):
if col in informative_columns:
csv_out.write(entry.seq[col] + ",")
csv_out.write('\n')
```
## Pairwise table
How well can you differentiate between every species?
```
seq_length = len(species[0].seq)
similarity_scores = {}
for target in species:
for query in species:
if target != query:
name = (target.name, query.name)
score = sum([target.seq[i] != query.seq[i] for i in range(250,1500)])
similarity_scores[name] = score
min(similarity_scores.values())
with open('9927_differentiability.csv', 'w') as csv_out:
csv_out.write(',' + ','.join([s.name for s in species]))
for target in species: # rows
csv_out.write(target.name +',')
for query in species: # cells
if target != query:
name = (target.name, query.name)
csv_out.write(str(similarity_scores[name]) + ',')
else:
csv_out.write(',')
csv_out.write('\n')
min(similarity_scores.values())
for k,v in similarity_scores.items():
if v < 4:
print(','.join(k))
```
* Iterate over all the sequences at the same time
* for each position, how many species can you differentiate
* keep a list of species
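A self-contained sketch of that per-position idea, with toy sequences standing in for the contigs read by `read_contigs`:

```python
from itertools import combinations

# Toy alignment: for each column, count how many species pairs
# differ at that position (i.e. how many pairs it can differentiate).
alignment = {
    'sp1': 'ACGTA',
    'sp2': 'ACGTT',
    'sp3': 'AGGTA',
}

names = list(alignment)
n_cols = len(alignment[names[0]])
pairs_per_column = []
for col in range(n_cols):
    differing = sum(1 for a, b in combinations(names, 2)
                    if alignment[a][col] != alignment[b][col])
    pairs_per_column.append(differing)

print(pairs_per_column)  # columns where every species agrees contribute 0
```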
# Other Work
```
base_command = "java -cp CONTEXT-.jar uk.ac.qmul.sbcs.evolution.convergence.runners.BasicAlignmentStats "
data_directory = './Data/'
from glob import glob
for filename in glob(data_directory + '*'):
print(base_command + filename)
```
# Processing FinchTV FASTA outputs into an alignment
* Example output FASTA file with all the N's and ugliness
* Load up consensus MSA
* Initial cleanup (lenient)
* First 50bp are low accuracy => trim them
* Force Alignment onto MSA
* Do not allow Consensus to have indels introduced, so that we can continue to use the same coordinates
## Notes from Jan Kim
* Heterozygosity inside a single PCR product is a problem because any bias in amplification will become exponentially dominant.
* PCR as few steps as possible
* Don't expect perfect 50/50 splits in the ABI trace
* Geneious (or manual) base calling with ambiguity codes for all significant peaks, not just the tallest
* Do much of this manually, there are only 40 specimens
* Possibly use pairwise alignments that are ambiguity aware to check for best species match
* You could also still use a frozen Multiple Sequence Alignment
| github_jupyter |
----
<img src="../../../files/refinitiv.png" width="20%" style="vertical-align: top;">
# Data Library for Python
----
## Content layer - Pricing stream - Used as a real-time data cache
This notebook demonstrates how to retrieve level 1 streaming data (such as trades and quotes) either directly from the Refinitiv Data Platform or via Refinitiv Workspace or CodeBook. The example shows how to define a Pricing stream object, which automatically manages a streaming cache available for access at any time. Your application can then reach into this cache and pull out real-time snapshots as Pandas DataFrames by just calling a simple access method.
Using a Pricing stream object that way prevents your application from sending too many requests to the platform. This is particularly useful if your application needs to retrieve real-time snapshots at regular and short intervals.
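As a rough illustration of that pattern, a polling loop over any object exposing `get_snapshot()` could look like the sketch below. The `poll_snapshots` helper and its parameters are made up for this example; they are not part of the library.

```python
import time

def poll_snapshots(stream, n, interval=1.0):
    """Pull n snapshots from a stream-like object's internal cache.

    `stream` is anything exposing get_snapshot(); the function and
    parameter names here are illustrative, not part of the library.
    """
    snapshots = []
    for _ in range(n):
        snapshots.append(stream.get_snapshot())
        time.sleep(interval)
    return snapshots
```

With an opened Pricing stream (as created later in this notebook), `poll_snapshots(stream, 5)` would return five snapshots of the cache taken one second apart, without sending five requests to the platform.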
#### Learn more
To learn more about the Refinitiv Data Library for Python please join the Refinitiv Developer Community. By [registering](https://developers.refinitiv.com/iam/register) and [logging in](https://developers.refinitiv.com/content/devportal/en_us/initCookie.html) to the Refinitiv Developer Community portal you will get free access to a number of learning materials like
[Quick Start guides](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/quick-start),
[Tutorials](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/learning),
[Documentation](https://developers.refinitiv.com/en/api-catalog/refinitiv-data-platform/refinitiv-data-library-for-python/docs)
and much more.
#### Getting Help and Support
If you have any questions regarding the API usage, please post them on
the [Refinitiv Data Q&A Forum](https://community.developers.refinitiv.com/spaces/321/index.html).
The Refinitiv Developer Community will be happy to help.
## Set the configuration file location
For a better ease of use, you have the option to set initialization parameters of the Refinitiv Data Library in the _refinitiv-data.config.json_ configuration file. This file must be located beside your notebook, in your user folder or in a folder defined by the _RD_LIB_CONFIG_PATH_ environment variable. The _RD_LIB_CONFIG_PATH_ environment variable is the option used by this series of examples. The following code sets this environment variable.
```
import os
os.environ["RD_LIB_CONFIG_PATH"] = "../../../Configuration"
```
## Some Imports to start with
```
import refinitiv.data as rd
from refinitiv.data.content import pricing
from pandas import DataFrame
from IPython.display import display, clear_output
```
## Open the data session
The open_session() function creates and opens a session based on the information contained in the refinitiv-data.config.json configuration file. Please edit this file to set the session type and other parameters required for the session you want to open.
```
rd.open_session()
```
## Retrieve data
### Create and open a Pricing stream object
The Pricing stream object is created for a list of instruments and fields. The fields parameter is optional. If you omit it, the Pricing stream will retrieve all fields available for the requested instruments.
```
stream = rd.content.pricing.Definition(
universe = ['EUR=', 'GBP=', 'JPY=', 'CAD='],
fields = ['BID', 'ASK']
).get_stream()
```
The open method tells the Pricing stream object to subscribe to the streams of the requested instruments.
```
stream.open()
```
As soon as the open method returns, the stream object is ready to be used. Its internal cache is constantly kept updated with the latest streaming information received from Eikon / Refinitiv Workspace. All this happens behind the scene, waiting for your application to pull out data from the cache.
### Extract snapshot data from the streaming cache
Once the stream is opened, you can use the get_snapshot method to pull out data from its internal cache. get_snapshot can be called any number of times. As these calls return the latest received values, successive calls to get_snapshot may return different values. Returned DataFrames do not change in real-time, get_snapshot must be called every time your application needs fresh values.
```
df = stream.get_snapshot()
display(df)
```
### Get a snapshot for a subset of instruments and fields
```
df = stream.get_snapshot(
universe = ['EUR=', 'GBP='],
fields = ['BID', 'ASK']
)
display(df)
```
### Other options to get values from the streaming cache
#### Direct access to real-time fields
```
print('GBP/BID:', stream['GBP=']['BID'])
print('EUR/BID:', stream['EUR=']['BID'])
```
#### Direct access to a streaming instrument
```
gbp = stream['GBP=']
print(gbp['BID'])
```
#### Iterate on fields
```
print('GBP=')
for field_name, field_value in stream['GBP=']:
print('\t' + field_name + ': ', field_value)
print('JPY=')
for field_name, field_value in stream['JPY=']:
print('\t' + field_name + ': ', field_value)
```
#### Iterate on streaming instruments and fields
```
for streaming_instrument in stream:
print(streaming_instrument.name)
for field_name, field_value in streaming_instrument:
print('\t' + field_name + ': ', field_value)
```
### Close the stream
```
stream.close()
```
Once close() is called, the Pricing stream object stops updating its internal cache. The get_snapshot function can still be called, but after the close it always returns the same values.
### Invalid or un-licensed instruments
What happens if you request using an invalid RIC or an instrument you are not entitled to?
Let's request a mixture of valid and invalid RICs
```
mixed = rd.content.pricing.Definition(
['EUR=', 'GBP=', 'JPY=', 'CAD=', 'BADRIC'],
fields=['BID', 'ASK']
).get_stream()
mixed.open()
mixed.get_snapshot()
```
You can check the Status of any instrument, so lets check the invalid one
```
display(mixed['BADRIC'].status)
```
As you will note, for an invalid instrument we get:
{'status': <StreamState.Closed: 1>, **'code': 'NotFound'**, 'message': '** The Record could not be found'}
However, if you are not licensed for the instrument you would see something like:
{'status': <StreamState.Closed: 1>, **'code': 'NotEntitled'**, 'message': 'A21: DACS User Profile denied access to vendor'}
**NOTE**: The exact wording of **message** can change over time - therefore, only use the **code** value for any programmatic decision making.
```
mixed.close()
```
## Close the session
```
rd.close_session()
```
| github_jupyter |
```
import requests
import bs4
import re
import mysql.connector
import pandas as pd
import numpy as np
from frenetic import *
# French WordNet for getting synonyms
fwn = FreNetic("./frenetic/wolf-1.0b4.xml")
def getCnrtl(word):
response=requests.get("https://www.cnrtl.fr/synonymie/"+word)
content=response.content
parser = bs4.BeautifulSoup(content, 'html.parser')
syno_format = parser.find_all("h2")
syno_format=pd.Series(syno_format)
syno_format=syno_format.apply(str)
for item in syno_format:
if ("Terme introuvable" in item) or ("Erreur" in item) :
return ([])
syno_format = parser.find_all("td",class_="syno_format")
syno_format=pd.Series(syno_format)
syno_format=syno_format.apply(str)
syno_format=syno_format.str.extract("(\w+)</a></td>")
syno_format.columns=["Synonym"]
syno_format["Frequency"]=""
img_format_1 = parser.find_all("img",alt="",height=re.compile("\d+"),src=re.compile("/images/portail/pbon.png"))
img_format_2 = parser.find_all("img",alt="",height=re.compile("\d+"),src=re.compile("/images/portail/pboff.png"))
img_format_1=pd.Series(img_format_1).apply(str)
img_format_2=pd.Series(img_format_2).apply(str)
img_format_1=img_format_1.str.extract("width=\"(\d+)\"").astype('int64')
img_format_2=img_format_2.str.extract("width=\"(\d+)\"").astype('int64')
syno_format["Frequency"]=img_format_1/(img_format_1+img_format_2)*100
return list(syno_format["Synonym"])
def getSynonymo(word):
response=requests.get("http://www.synonymo.fr/synonyme/"+word)
content=response.content
parser = bs4.BeautifulSoup(content, 'html.parser')
syno_format = parser.find_all("h1")
syno_format=pd.Series(syno_format)
syno_format=syno_format.apply(str)
for item in syno_format:
if ("Aucun résultat exact n'a été trouvé" in item) or ("Aucun résultat pour" in item) or ("An Error Was Encountered" in item):
return ([])
syno_format = parser.find_all("a", class_="word",title=re.compile("\w+"))
syno_format=pd.Series(syno_format)
syno_format=syno_format.apply(str)
syno_format=syno_format.str.extract(">([\w\s]+)</a>")
syno_format.columns=["Synonym"]
return list(syno_format["Synonym"])
def getCrisco(word):
response=requests.get("http://www.crisco.unicaen.fr/des/synonymes/"+word)
content=response.content
parser = bs4.BeautifulSoup(content, 'html.parser')
syno_format = parser.find_all("p")
syno_format=pd.Series(syno_format)
syno_format=syno_format.apply(str)
for item in syno_format:
if "ne possède pas de synonyme dans le DES" in item:
return ([])
syno_format = parser.find_all("a", href=re.compile("/des/synonymes/\w+"))
syno_format=pd.Series(syno_format)
syno_format=syno_format.apply(str)
syno_format=syno_format.str.extract(">\xa0(\w+)\xa0")
syno_format.columns=["Synonym"]
syno_format=syno_format.dropna()
syno_format=syno_format.reset_index(drop=True)
return list(syno_format["Synonym"])
def getJDM(word):
restricted_words=["\\","\'"]
def remove_refinements(term):
index=term.find('>')
if index==-1:
return term
else:
return (term[0:index])
if word in restricted_words:
return pd.DataFrame(columns = ['Synonym','Weight'])
else:
config = {
'user': 'mirzapour',
'password': 'mehdim',
'host': 'karadoc.lirmm.fr',
'database': '05012019_rezojdm',
'raise_on_warnings': True
}
jdm_db = mysql.connector.connect(**config)
jdm_cursor = jdm_db.cursor()
        jdm_cursor.execute(("select n2.name, e.weight from nodes n1, nodes n2, edges e"
                            " where n1.name=%s and n1.id=e.source"
                            " and n2.id=e.destination and e.type=5"), (word,))
df_syn=pd.DataFrame(jdm_cursor.fetchall(),columns=['Synonym','Weight'])
df_syn.sort_values(by=["Weight"], axis=0, ascending=False, inplace=True)
jdm_db.close()
df_syn=df_syn[df_syn["Weight"]>=25]
df_syn=df_syn[~df_syn["Synonym"].str.contains("=")]
df_syn["Synonym"]=df_syn["Synonym"].apply(remove_refinements)
df_syn.sort_values(by=["Synonym","Weight"], axis=0, ascending=False, inplace=True)
df_syn = df_syn.drop_duplicates(subset='Synonym', keep='first')
df_syn.sort_values(by=["Weight"], axis=0, ascending=False, inplace=True)
return df_syn
def getJDM_Sense_Table(word):
def keep_refinements(term):
index=term.find('>')
if index==-1:
return term
else:
return (term[index+1:])
config = {
'user': 'mirzapour',
'password': 'mehdim',
'host': 'karadoc.lirmm.fr',
'database': '05012019_rezojdm',
'raise_on_warnings': True
}
jdm_db = mysql.connector.connect(**config)
jdm_cursor = jdm_db.cursor()
    jdm_cursor.execute(("select n1.name, e.weight, n2.name from nodes n1, nodes n2, edges e"
                        " where n1.name=%s and n1.id=e.source"
                        " and n2.id=e.destination and e.type=1"), (word,))
df_syn=pd.DataFrame(jdm_cursor.fetchall(),columns=['Word','Sense_Weight','Sense_JDM'])
df_syn.sort_values(by=["Sense_Weight"], axis=0, ascending=False, inplace=True)
df_syn.reset_index(drop=True, inplace=True)
df_syn.insert(loc=1,column="Sense_Name",value="")
for counter,value in enumerate(df_syn["Sense_JDM"].apply(keep_refinements)):
        jdm_cursor.execute("SELECT name FROM nodes where id=%s", (value,))
        fetch_result=jdm_cursor.fetchall()[0][0]
        df_syn.at[counter,"Sense_Name"]=fetch_result
# We can use the "df_syn" for all senses of a given word
table_main = pd.DataFrame(columns=["Word","Sense","Sense_Weight","Syn_Weight"])
for item in range(0,len(df_syn)):
table=getJDM(list(df_syn["Sense_JDM"])[item])
table.insert(loc=0,column="Sense",value="")
table["Sense"]=df_syn["Sense_Name"][item]
table.insert(loc=0,column="Word",value="")
table["Word"]=word
table.insert(loc=3,column="Sense_Weight",value="")
table["Sense_Weight"]=df_syn["Sense_Weight"][item]
table_main = pd.concat([table_main, table],sort=False)
return (table_main)
def getJDM_Sense(word):
jdm_sense_table=getJDM_Sense_Table(word)
jdm_sense_table=jdm_sense_table.drop(["Syn_Weight"], axis=1)
if len(jdm_sense_table)==0:
return (np.NaN)
else:
return list(jdm_sense_table["Synonym"])
def getJDM_Sense_dic(word):
syn_list=getJDM_Sense_Table(word)
if (len(syn_list)==0):
return (np.NaN)
else:
syn_list=syn_list[["Sense","Synonym"]]
syn_dic={}
for i in range(len(syn_list)):
if not syn_list.iloc[i]["Sense"] in syn_dic.keys():
syn_dic[syn_list.iloc[i]["Sense"]]=[syn_list.iloc[i]["Synonym"]]
else:
syn_dic[syn_list.iloc[i]["Sense"]].append(syn_list.iloc[i]["Synonym"])
syn_index=0
syn_dic_changed_2_num={}
for i in syn_dic:
syn_dic[i].insert(0,i)
syn_dic_changed_2_num[syn_index]=syn_dic[i]
syn_index+=1
return(syn_dic_changed_2_num)
def getFrWordnet(word):
    if word=="":
        return ([])
    else:
        try:
            nsynsets = fwn.synsets(word)
        except Exception:
            return ([])
        synonym_list=[]
        for syn in nsynsets:
            for lit in syn.literals():
                synonym_list.append(lit.span())
        # Remove duplicates while preserving order
        synonym_list = list(dict.fromkeys(synonym_list))
        return (synonym_list)
def getDicSyn(word):
    # Percent-encode accented characters as Latin-1, which is what this site expects:
    word=word.replace("é","%E9")
word=word.replace("è","%E8")
word=word.replace("î","%EE")
word=word.replace("û","%FB")
word=word.replace("ô","%F4")
word=word.replace("ç","%E7")
    try:
        response=requests.get("http://www.dictionnaire-synonymes.com/synonyme.php?mot="+word)
    except requests.RequestException:
        return (np.NaN)
print("http://www.dictionnaire-synonymes.com/synonyme.php?mot="+word)
content=response.content
parser = bs4.BeautifulSoup(content, 'html.parser')
syno_format = parser.find_all("td",class_="text1")
syno_format=pd.Series(syno_format)
syno_format=syno_format.apply(str)
for item in syno_format:
if ("Veuillez vérifiez l'orthographe de votre requête" in item):
return (np.NaN)
syno_format = parser.find_all("a",class_=["lien2","lien3"])
syno_format=pd.Series(syno_format)
syno_format=syno_format.apply(str)
    syno_format=syno_format.str.extract(r"class=\"(\w+)\"[\s\w]+=\"\w+.\w+\?\w+=[\w%+]+\">([\w\s]+)")
syno_format.dropna(inplace=True)
syno_format.reset_index(inplace=True,drop=True)
syn_dic={}
syn_index=0
for i in range(len(syno_format[0])):
if syno_format[0][i]=="lien2":
syn_dic[syn_index]=[syno_format[1][i]]
syn_index+=1
else:
syn_dic[syn_index-1].append(syno_format[1][i])
return(syn_dic)
words_table=pd.read_excel(
"./excels/2_Phase3_Word_Sense_Extraction.xlsx")
for counter,value in enumerate(words_table["Word_Corrected"]):
    words_table.at[counter,"JDM_Sense"]=str(getJDM_Sense_dic(value))
    words_table.at[counter,"JDM"]=list(getJDM(value)["Synonym"])
    words_table.at[counter,"WordNet"]=getFrWordnet(value)
    words_table.at[counter,"Cnrtl"]=getCnrtl(value)
    words_table.at[counter,"Synonymo"]=getSynonymo(value)
    words_table.at[counter,"Cisco"]=getCrisco(value)
    words_table.at[counter,"DicSyn_Sense"]=str(getDicSyn(value))
    print(counter)
words_table.to_excel("./excels/2_Phase3_Word_Sense_Extraction.xlsx")
words_table=pd.read_excel(
"./excels/2_Phase3_Word_Sense_Extraction.xlsx")
feature_lists=["JDM_Sense","JDM","WordNet","Cnrtl","Synonymo","Cisco","DicSyn_Sense"]
words_table=words_table[feature_lists]
words_table.fillna("[]", inplace=True)
statistic={}
for col in words_table.columns:
statistic[col]=100*((len(words_table)-len(words_table[col][words_table[col]=="[]"]))/len(words_table))
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
statistic["DicSyn"] = statistic.pop("DicSyn_Sense")
plt.bar(statistic.keys(), statistic.values())
plt.ylabel('Percentage')
plt.title('Availability of Synonyms in Different Sources')
plt.show()
```
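As a sanity check of the availability statistic computed above, here is a self-contained sketch on a toy table (the toy words and column values are made up for illustration; `"[]"` marks a word for which a source returned no synonyms):

```python
import pandas as pd

# Toy stand-in for words_table: "[]" marks a word with no synonyms found.
words_table = pd.DataFrame({
    "JDM":     ["['a','b']", "[]",    "['c']", "[]"],
    "WordNet": ["['a']",     "['b']", "[]",    "['c']"],
})

# Percentage of words for which each source returned at least one synonym,
# mirroring the computation at the end of the notebook above.
statistic = {}
for col in words_table.columns:
    n_empty = (words_table[col] == "[]").sum()
    statistic[col] = 100 * (len(words_table) - n_empty) / len(words_table)

print(statistic)
```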
# Start-to-Finish Example: `GiRaFFE_NRPy` 3D tests
### Author: Patrick Nelson
### Adapted from [Start-to-Finish Example: Head-On Black Hole Collision](../Tutorial-Start_to_Finish-BSSNCurvilinear-Two_BHs_Collide.ipynb)
## This module implements a basic GRFFE code to evolve three-dimensional GRFFE test problems.
### NRPy+ Source Code for this module:
* [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Exact_Wald.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Exact_Wald.py) [\[**tutorial**\]](Tutorial-GiRaFFEfood_NRPy_Exact_Wald.ipynb) Generates Exact Wald initial data
* [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Aligned_Rotator.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_Aligned_Rotator.py) [\[**tutorial**\]](Tutorial-GiRaFFEfood_NRPy_Aligned_Rotator.ipynb) Generates Aligned Rotator initial data
* [GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py](../../edit/in_progress/GiRaFFEfood_NRPy/GiRaFFEfood_NRPy_1D_tests.py) [\[**tutorial**\]](Tutorial-GiRaFFEfood_NRPy_1D_tests.ipynb) Generates Alfvén Wave initial data.
* [GiRaFFE_NRPy/Afield_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Afield_flux.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-Afield_flux.ipynb) Generates the expressions to find the flux term of the induction equation.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_A2B.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_A2B.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-Afield_flux.ipynb) Generates the driver to compute the magnetic field from the vector potential.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_BCs.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-BCs.ipynb) Generates the code to apply boundary conditions to the vector potential, scalar potential, and three-velocity.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_C2P_P2C.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_C2P_P2C.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-C2P_P2C.ipynb) Generates the conservative-to-primitive and primitive-to-conservative solvers.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Metric_Face_Values.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-Metric_Face_Values.ipynb) Generates code to interpolate metric gridfunctions to cell faces.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_PPM.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_PPM.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-PPM.ipynb) Generates code to reconstruct primitive variables on cell faces.
* [GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py](../../edit/in_progress/GiRaFFE_NRPy/GiRaFFE_NRPy_Source_Terms.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-Source_Terms.ipynb) Generates the expressions to find the source term of the Poynting flux evolution equation.
* [GiRaFFE_NRPy/Stilde_flux.py](../../edit/in_progress/GiRaFFE_NRPy/Stilde_flux.py) [\[**tutorial**\]](Tutorial-GiRaFFE_NRPy-Stilde_flux.ipynb) Generates the expressions to find the flux term of the Poynting flux evolution equation.
* [../GRFFE/equations.py](../../edit/GRFFE/equations.py) [\[**tutorial**\]](../Tutorial-GRFFE_Equations-Cartesian.ipynb) Generates code necessary to compute the source terms.
* [../GRHD/equations.py](../../edit/GRHD/equations.py) [\[**tutorial**\]](../Tutorial-GRHD_Equations-Cartesian.ipynb) Generates code necessary to compute the source terms.
Here we use NRPy+ to generate the C source code necessary to set up initial data for these tests (see [the original GiRaFFE paper](https://arxiv.org/pdf/1704.00599.pdf)). Then we use it to generate the RHS expressions for [Method of Lines](https://reference.wolfram.com/language/tutorial/NDSolveMethodOfLines.html) time integration based on the [explicit fourth-order Runge-Kutta scheme](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods) (RK4).
<a id='toc'></a>
# Table of Contents
$$\label{toc}$$
This notebook is organized as follows:
1. [Step 1](#setup): Set up core functions and parameters for solving GRFFE equations
    1. [Step 2.a](#mol): Output macros for Method of Lines timestepping
1. [Step 2](#grffe): Output C code for GRFFE evolution
1. [Step 3](#gf_id): Import `GiRaFFEfood_NRPy` initial data modules
1. [Step 4](#cparams): Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h`
    1. [Step 4.a](#bc_functs): Set up boundary condition functions for chosen singular, curvilinear coordinate system
1. [Step 5](#mainc): `GiRaFFE_NRPy_standalone.c`: The Main C Code
<a id='setup'></a>
# Step 1: Set up core functions and parameters for solving GRFFE equations \[Back to [top](#toc)\]
$$\label{setup}$$
```
import shutil, os, sys # Standard Python modules for multiplatform OS-level functions
# First, we'll add the parent directory to the list of directories Python will check for modules.
nrpy_dir_path = os.path.join("..")
if nrpy_dir_path not in sys.path:
sys.path.append(nrpy_dir_path)
# Step P1: Import needed NRPy+ core modules:
from outputC import outCfunction, lhrh, add_to_Cfunction_dict # NRPy+: Core C code output module
import sympy as sp # SymPy: The Python computer algebra package upon which NRPy+ depends
import finite_difference as fin # NRPy+: Finite difference C code generation module
import NRPy_param_funcs as par # NRPy+: Parameter interface
import grid as gri # NRPy+: Functions having to do with numerical grids
import indexedexp as ixp # NRPy+: Symbolic indexed expression (e.g., tensors, vectors, etc.) support
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
# Step P2: Create C code output directory:
Ccodesdir = os.path.join("GiRaFFE_unstaggered_new_way_standalone_Ccodes/")
# First remove C code output directory if it exists
# Courtesy https://stackoverflow.com/questions/303200/how-do-i-remove-delete-a-folder-that-is-not-empty
# !rm -r ScalarWaveCurvilinear_Playground_Ccodes
shutil.rmtree(Ccodesdir, ignore_errors=True)
# Step P3: Create executable output directory:
outdir = os.path.join(Ccodesdir,"output/")
cmd.mkdir(Ccodesdir)
cmd.mkdir(outdir)
# Step P5: Set timestepping algorithm (we adopt the Method of Lines)
REAL = "double" # Best to use double here.
default_CFL_FACTOR= 0.5 # (GETS OVERWRITTEN WHEN EXECUTED.) In pure axisymmetry (symmetry_axes = 2 below) 1.0 works fine. Otherwise 0.5 or lower.
# Step P6: Set the finite differencing order to 2.
par.set_parval_from_str("finite_difference::FD_CENTDERIVS_ORDER",2)
# Step P7: Enable SIMD-optimized code?
# I.e., generate BSSN and Ricci C code kernels using SIMD-vectorized
# compiler intrinsics, which *greatly improve the code's performance*,
# though at the expense of making the C-code kernels less
# human-readable.
# * Important note in case you wish to modify the BSSN/Ricci kernels
# here by adding expressions containing transcendental functions
# (e.g., certain scalar fields):
# Note that SIMD-based transcendental function intrinsics are not
# supported by the default installation of gcc or clang (you will
# need to use e.g., the SLEEF library from sleef.org, for this
# purpose). The Intel compiler suite does support these intrinsics
# however without the need for external libraries.
enable_SIMD = False
# Step 1.b: Enable reference metric precomputation.
enable_rfm_precompute = False
if enable_SIMD and not enable_rfm_precompute:
print("ERROR: SIMD does not currently handle transcendental functions,\n")
print(" like those found in rfmstruct (rfm_precompute).\n")
print(" Therefore, enable_SIMD==True and enable_rfm_precompute==False\n")
print(" is not supported.\n")
sys.exit(1)
# Step 1.c: Enable "FD functions". In other words, all finite-difference stencils
# will be output as inlined static functions. This is essential for
#           compiling highly complex FD kernels with certain versions of GCC;
# GCC 10-ish will choke on BSSN FD kernels at high FD order, sometimes
# taking *hours* to compile. Unaffected GCC versions compile these kernels
# in seconds. FD functions do not slow the code performance, but do add
# another header file to the C source tree.
# With gcc 7.5.0, enable_FD_functions=True decreases performance by 10%
enable_FD_functions = False
thismodule = "Start_to_Finish-GiRaFFE_NRPy-3D_tests-unstaggered_new_way"
TINYDOUBLE = par.Cparameters("REAL", thismodule, "TINYDOUBLE", 1e-100)
import GiRaFFE_NRPy.GiRaFFE_NRPy_Main_Driver_new_way as md
# par.set_paramsvals_value("GiRaFFE_NRPy.GiRaFFE_NRPy_C2P_P2C::enforce_speed_limit_StildeD = False")
par.set_paramsvals_value("GiRaFFE_NRPy.GiRaFFE_NRPy_C2P_P2C::enforce_current_sheet_prescription = False")
```
<a id='grffe'></a>
# Step 2: Output C code for GRFFE evolution \[Back to [top](#toc)\]
$$\label{grffe}$$
We will first write the C codes needed for GRFFE evolution. We have already written a module to generate all these codes and call the functions in the appropriate order, so we will import that here. We will take the slightly unusual step of doing this before we generate the initial data functions because the main driver module will register all the gridfunctions we need. It will also generate functions that, in addition to their normal spot in the MoL timestepping, will need to be called during the initial data step to make sure all the variables are appropriately filled in.
<a id='mol'></a>
## Step 2.a: Output macros for Method of Lines timestepping \[Back to [top](#toc)\]
$$\label{mol}$$
Now, we generate the code to implement the method of lines using the fourth-order Runge-Kutta algorithm.
```
RK_method = "RK4"
# Step 3: Generate Runge-Kutta-based (RK-based) timestepping code.
# As described above the Table of Contents, this is a 3-step process:
# 3.A: Evaluate RHSs (RHS_string)
# 3.B: Apply boundary conditions (post_RHS_string, pt 1)
import MoLtimestepping.C_Code_Generation as MoL
from MoLtimestepping.RK_Butcher_Table_Dictionary import Butcher_dict
RK_order = Butcher_dict[RK_method][1]
cmd.mkdir(os.path.join(Ccodesdir,"MoLtimestepping/"))
MoL.MoL_C_Code_Generation(RK_method,
                          RHS_string = """
GiRaFFE_NRPy_RHSs(&params,auxevol_gfs,RK_INPUT_GFS,RK_OUTPUT_GFS);""",
                          post_RHS_string = """
GiRaFFE_NRPy_post_step(&params,xx,auxevol_gfs,RK_OUTPUT_GFS,n+1);\n""",
                          outdir = os.path.join(Ccodesdir,"MoLtimestepping/"))
```
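For reference, the update rule that the generated MoL macros implement can be sketched in pure Python. This is only an illustration of the RK4 algorithm, applied here to the toy ODE $dy/dt = y$, not the generated C code:

```python
def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5*dt, y + 0.5*dt*k1)
    k3 = f(t + 0.5*dt, y + 0.5*dt*k2)
    k4 = f(t + dt, y + dt*k3)
    return y + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)

# Integrate dy/dt = y from y(0) = 1 to t = 1; the exact answer is e.
t, y, dt = 0.0, 1.0, 0.01
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: y, t, y, dt)
    t += dt
print(y)  # close to e = 2.71828..., with O(dt^4) global error
```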
<a id='gf_id'></a>
# Step 3: Import `GiRaFFEfood_NRPy` initial data modules \[Back to [top](#toc)\]
$$\label{gf_id}$$
With the preliminaries out of the way, we will write the C functions to set up initial data. There are two categories of initial data that must be set: the spacetime metric variables, and the GRFFE plasma variables. We will set up the spacetime first.
```
# There are several initial data routines we need to test. We'll control which one we use with a string option
initial_data = "ExactWald" # Valid options: "ExactWald", "SplitMonopole", "AlignedRotator"
spacetime = "ShiftedKerrSchild" # Valid options: "ShiftedKerrSchild", "flat"
if spacetime == "ShiftedKerrSchild":
# Exact Wald is more complicated. We'll need the Shifted Kerr Schild metric in Cartesian coordinates.
import BSSN.ShiftedKerrSchild as sks
sks.ShiftedKerrSchild(True)
import reference_metric as rfm
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
# Use the Jacobian matrix to transform the vectors to Cartesian coordinates.
par.set_parval_from_str("reference_metric::CoordSystem","Spherical")
rfm.reference_metric()
Jac_dUCart_dDrfmUD,Jac_dUrfm_dDCartUD = rfm.compute_Jacobian_and_inverseJacobian_tofrom_Cartesian()
# Transform the coordinates of the Jacobian matrix from spherical to Cartesian:
par.set_parval_from_str("reference_metric::CoordSystem","Cartesian")
rfm.reference_metric()
tmpa,tmpb,tmpc = sp.symbols("tmpa,tmpb,tmpc")
for i in range(3):
for j in range(3):
Jac_dUCart_dDrfmUD[i][j] = Jac_dUCart_dDrfmUD[i][j].subs([(rfm.xx[0],tmpa),(rfm.xx[1],tmpb),(rfm.xx[2],tmpc)])
Jac_dUCart_dDrfmUD[i][j] = Jac_dUCart_dDrfmUD[i][j].subs([(tmpa,rfm.xxSph[0]),(tmpb,rfm.xxSph[1]),(tmpc,rfm.xxSph[2])])
Jac_dUrfm_dDCartUD[i][j] = Jac_dUrfm_dDCartUD[i][j].subs([(rfm.xx[0],tmpa),(rfm.xx[1],tmpb),(rfm.xx[2],tmpc)])
Jac_dUrfm_dDCartUD[i][j] = Jac_dUrfm_dDCartUD[i][j].subs([(tmpa,rfm.xxSph[0]),(tmpb,rfm.xxSph[1]),(tmpc,rfm.xxSph[2])])
gammaSphDD = ixp.zerorank2()
for i in range(3):
for j in range(3):
gammaSphDD[i][j] += sks.gammaSphDD[i][j].subs(sks.r,rfm.xxSph[0]).subs(sks.th,rfm.xxSph[1])
betaSphU = ixp.zerorank1()
for i in range(3):
betaSphU[i] += sks.betaSphU[i].subs(sks.r,rfm.xxSph[0]).subs(sks.th,rfm.xxSph[1])
alpha = sks.alphaSph.subs(sks.r,rfm.xxSph[0]).subs(sks.th,rfm.xxSph[1])
gammaDD = rfm.basis_transform_tensorDD_from_rfmbasis_to_Cartesian(Jac_dUrfm_dDCartUD, gammaSphDD)
unused_gammaUU,gammaDET = ixp.symm_matrix_inverter3x3(gammaDD)
sqrtgammaDET = sp.sqrt(gammaDET)
betaU = rfm.basis_transform_vectorD_from_rfmbasis_to_Cartesian(Jac_dUrfm_dDCartUD, betaSphU)
# Description and options for this initial data
desc = "Generate a spinning black hole with Shifted Kerr Schild metric."
loopopts_id ="AllPoints,Read_xxs"
elif spacetime == "flat":
gammaDD = ixp.zerorank2(DIM=3)
for i in range(3):
for j in range(3):
if i==j:
gammaDD[i][j] = sp.sympify(1) # else: leave as zero
betaU = ixp.zerorank1() # All should be 0
alpha = sp.sympify(1)
# Description and options for this initial data
desc = "Generate a flat spacetime metric."
loopopts_id ="AllPoints" # we don't need to read coordinates for flat spacetime.
name = "set_initial_spacetime_metric_data"
values_to_print = [
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD00"),rhs=gammaDD[0][0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD01"),rhs=gammaDD[0][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD02"),rhs=gammaDD[0][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD11"),rhs=gammaDD[1][1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD12"),rhs=gammaDD[1][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","gammaDD22"),rhs=gammaDD[2][2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","betaU0"),rhs=betaU[0]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","betaU1"),rhs=betaU[1]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","betaU2"),rhs=betaU[2]),
lhrh(lhs=gri.gfaccess("auxevol_gfs","alpha"),rhs=alpha)
]
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,REAL *xx[3],REAL *auxevol_gfs",
body = fin.FD_outputC("returnstring",values_to_print,params="outCverbose=False"),
loopopts = loopopts_id)
```
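The Jacobian-based basis transformations in the cell above follow the standard covariant rank-2 rule $\gamma^{\rm Cart}_{ij} = \Lambda^k{}_i \Lambda^l{}_j \gamma^{\rm Sph}_{kl}$. A minimal SymPy sketch of the same idea in two dimensions (a toy example, independent of the NRPy+ machinery) transforms the flat polar-coordinate metric back to its Cartesian form:

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)
r = sp.sqrt(x**2 + y**2)

# Flat 2D metric in polar coordinates (r, th): diag(1, r^2),
# written here in terms of the Cartesian coordinates.
gSph = sp.Matrix([[1, 0], [0, r**2]])

# Jacobian Lam[k][i] = d(xrfm^k)/d(xCart^i) for xrfm = (r, th):
rfm_of_cart = sp.Matrix([r, sp.atan2(y, x)])
Lam = rfm_of_cart.jacobian(sp.Matrix([x, y]))

# Covariant rank-2 transformation: gCart_ij = Lam^k_i gSph_kl Lam^l_j
gCart = sp.simplify(Lam.T * gSph * Lam)
print(gCart)  # the identity matrix: flat space is flat in Cartesian coordinates
```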
Now, we will write out the initial data function for the GRFFE variables.
```
import GiRaFFEfood_NRPy.GiRaFFEfood_NRPy as gid
if initial_data=="ExactWald":
gid.GiRaFFEfood_NRPy_generate_initial_data(ID_type = initial_data, stagger_enable = False,M=sks.M,KerrSchild_radial_shift=sks.r0,gammaDD=gammaDD,sqrtgammaDET=sqrtgammaDET)
desc = "Generate exact Wald initial test data for GiRaFFEfood_NRPy."
elif initial_data=="SplitMonopole":
gid.GiRaFFEfood_NRPy_generate_initial_data(ID_type = initial_data, stagger_enable = False,M=sks.M,a=sks.a,KerrSchild_radial_shift=sks.r0,alpha=alpha,betaU=betaSphU,gammaDD=gammaDD,sqrtgammaDET=sqrtgammaSphDET)
desc = "Generate Split Monopole initial test data for GiRaFFEfood_NRPy."
elif initial_data=="AlignedRotator":
    gid.GiRaFFEfood_NRPy_generate_initial_data(ID_type = initial_data, stagger_enable = True)
    desc = "Generate aligned rotator initial test data for GiRaFFEfood_NRPy."
else:
    print("Unsupported Initial Data string "+initial_data+"! Supported ID: ExactWald, SplitMonopole, or AlignedRotator")
name = "initial_data"
values_to_print = [\
lhrh(lhs=gri.gfaccess("out_gfs","AD0"),rhs=gid.AD[0]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD1"),rhs=gid.AD[1]),\
lhrh(lhs=gri.gfaccess("out_gfs","AD2"),rhs=gid.AD[2]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU0"),rhs=gid.ValenciavU[0]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU1"),rhs=gid.ValenciavU[1]),\
lhrh(lhs=gri.gfaccess("auxevol_gfs","ValenciavU2"),rhs=gid.ValenciavU[2]),\
# lhrh(lhs=gri.gfaccess("auxevol_gfs","BU0"),rhs=gid.BU[0]),\
# lhrh(lhs=gri.gfaccess("auxevol_gfs","BU1"),rhs=gid.BU[1]),\
# lhrh(lhs=gri.gfaccess("auxevol_gfs","BU2"),rhs=gid.BU[2]),\
lhrh(lhs=gri.gfaccess("out_gfs","psi6Phi"),rhs=sp.sympify(0))\
]
outCfunction(
outfile = os.path.join(Ccodesdir,name+".h"), desc=desc, name=name,
params ="const paramstruct *params,REAL *xx[3],REAL *auxevol_gfs,REAL *out_gfs",
body = fin.FD_outputC("returnstring",values_to_print,params="outCverbose=False"),
loopopts ="AllPoints,Read_xxs")
```
<a id='cparams'></a>
# Step 4: Output C codes needed for declaring and setting Cparameters; also set `free_parameters.h` \[Back to [top](#toc)\]
$$\label{cparams}$$
Based on declared NRPy+ Cparameters, first we generate `declare_Cparameters_struct.h`, `set_Cparameters_default.h`, and `set_Cparameters[-SIMD].h`.
Then we output `free_parameters.h`, which sets the initial data parameters as well as the grid domain parameters.
```
# Step 3.e: Output C codes needed for declaring and setting Cparameters; also set free_parameters.h
# Step 3.e.i: Generate declare_Cparameters_struct.h, set_Cparameters_default.h, and set_Cparameters[-SIMD].h
par.generate_Cparameters_Ccodes(os.path.join(Ccodesdir))
# Step 3.e.ii: Set free_parameters.h
with open(os.path.join(Ccodesdir,"free_parameters.h"),"w") as file:
file.write("""// Override parameter defaults with values based on command line arguments and NGHOSTS.
params.Nxx0 = atoi(argv[1]);
params.Nxx1 = atoi(argv[2]);
params.Nxx2 = atoi(argv[3]);
params.Nxx_plus_2NGHOSTS0 = params.Nxx0 + 2*NGHOSTS;
params.Nxx_plus_2NGHOSTS1 = params.Nxx1 + 2*NGHOSTS;
params.Nxx_plus_2NGHOSTS2 = params.Nxx2 + 2*NGHOSTS;
// Step 0d: Set up space and time coordinates
// Step 0d.i: Declare \Delta x^i=dxx{0,1,2} and invdxx{0,1,2}, as well as xxmin[3] and xxmax[3]:
const REAL xxmin[3] = {-1.5,-1.5,-1.5};
const REAL xxmax[3] = { 1.5, 1.5, 1.5};
params.dxx0 = (xxmax[0] - xxmin[0]) / ((REAL)params.Nxx0+1);
params.dxx1 = (xxmax[1] - xxmin[1]) / ((REAL)params.Nxx1+1);
params.dxx2 = (xxmax[2] - xxmin[2]) / ((REAL)params.Nxx2+1);
printf("dxx0,dxx1,dxx2 = %.5e,%.5e,%.5e\\n",params.dxx0,params.dxx1,params.dxx2);
params.invdx0 = 1.0 / params.dxx0;
params.invdx1 = 1.0 / params.dxx1;
params.invdx2 = 1.0 / params.dxx2;
const int poison_grids = 0;
// Standard GRFFE parameters:
params.GAMMA_SPEED_LIMIT = 2000.0;
params.diss_strength = 0.1;
""")
if initial_data=="ExactWald":
with open(os.path.join(Ccodesdir,"free_parameters.h"),"a") as file:
file.write("""params.r0 = 0.4;
params.a = 0.0;
""")
```
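As a cross-check of the logic in `free_parameters.h`, the grid spacing, the CFL-limited timestep, and the number of timesteps can be sketched in Python. The resolution below is a hypothetical sample value; the C code reads it from the command line:

```python
# Sample resolution (hypothetical; the C code reads Nxx from the command line).
Nxx = (160, 160, 160)
xxmin, xxmax = (-1.5, -1.5, -1.5), (1.5, 1.5, 1.5)
CFL_FACTOR, t_final = 0.5, 0.5   # as set in the main C code

# Grid spacing, mirroring free_parameters.h:
dxx = [(xxmax[i] - xxmin[i]) / (Nxx[i] + 1) for i in range(3)]

# CFL condition: dt must not exceed CFL_FACTOR times the smallest spacing.
dt = CFL_FACTOR * min(dxx)
Nt = int(t_final / dt + 0.5)     # +0.5 counters C's integer truncation
print(dxx[0], dt, Nt)
```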
<a id='bc_functs'></a>
## Step 4.a: Set up boundary condition functions for chosen singular, curvilinear coordinate system \[Back to [top](#toc)\]
$$\label{bc_functs}$$
Next apply singular, curvilinear coordinate boundary conditions [as documented in the corresponding NRPy+ tutorial notebook](Tutorial-Start_to_Finish-Curvilinear_BCs.ipynb)
...But, for the moment, we're actually just using this because it writes the file `gridfunction_defines.h`.
```
import CurviBoundaryConditions.CurviBoundaryConditions as cbcs
cbcs.Set_up_CurviBoundaryConditions(Ccodesdir,enable_copy_of_static_Ccodes=False)
```
<a id='mainc'></a>
# Step 5: `GiRaFFE_NRPy_standalone.c`: The Main C Code \[Back to [top](#toc)\]
$$\label{mainc}$$
```
# Part P0: Define REAL, set the number of ghost cells NGHOSTS (from NRPy+'s FD_CENTDERIVS_ORDER),
# and set the CFL_FACTOR (which can be overwritten at the command line)
with open(os.path.join(Ccodesdir,"GiRaFFE_NRPy_REAL__NGHOSTS__CFL_FACTOR.h"), "w") as file:
file.write("""
// Part P0.a: Set the number of ghost cells, from NRPy+'s FD_CENTDERIVS_ORDER
#define NGHOSTS """+str(3)+"""
#define NGHOSTS_A2B """+str(2)+"""
// Part P0.b: Set the numerical precision (REAL) to double, ensuring all floating point
// numbers are stored to at least ~16 significant digits
#define REAL """+REAL+"""
// Part P0.c: Set the CFL Factor. Can be overwritten at command line.
REAL CFL_FACTOR = """+str(default_CFL_FACTOR)+";")
```
Here, we write the main function and add it to the C function dictionaries so that it can be correctly added to the make file.
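The `add_to_Cfunction_dict` pattern used below can be illustrated with a minimal sketch. The registry and `register_Cfunction` helper here are hypothetical stand-ins for the NRPy+ implementation, showing only the idea that each registered function carries the pieces needed to emit a prototype (for a header) and a full definition (for a source file):

```python
# Hypothetical miniature of a C-function registry in the spirit of NRPy+'s
# add_to_Cfunction_dict (not the actual NRPy+ code).
Cfunction_dict = {}

def register_Cfunction(name, c_type, params, body, includes=()):
    header = "\n".join('#include "%s"' % h for h in includes)
    Cfunction_dict[name] = {
        "prototype":  "%s %s(%s);" % (c_type, name, params),
        "definition": "%s\n%s %s(%s) {\n%s\n}" % (header, c_type, name, params, body),
    }

register_Cfunction("main", "int", "int argc, const char *argv[]",
                   "  return 0;", includes=("stdio.h",))
print(Cfunction_dict["main"]["prototype"])
# int main(int argc, const char *argv[]);
```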
```
#include "GiRaFFE_NRPy_REAL__NGHOSTS__CFL_FACTOR.h"
#include "declare_Cparameters_struct.h"
def add_to_Cfunction_dict_main__GiRaFFE_NRPy_3D_tests_unstaggered():
includes = ["NRPy_basic_defines.h", "GiRaFFE_main_defines.h", "NRPy_function_prototypes.h", "time.h", "set_initial_spacetime_metric_data.h", "initial_data.h"]
desc = """main() function:
Step 0: Read command-line input, set up grid structure, allocate memory for gridfunctions, set up coordinates
Step 1: Set up scalar wave initial data
Step 2: Evolve scalar wave initial data forward in time using Method of Lines with RK4 algorithm,
applying quadratic extrapolation outer boundary conditions.
Step 3: Output relative error between numerical and exact solution.
Step 4: Free all allocated memory
"""
prefunc = """const int NSKIP_1D_OUTPUT = 1;
#define LOOP_ALL_GFS_GPS(ii) _Pragma("omp parallel for") \
for(int (ii)=0;(ii)<Nxx_plus_2NGHOSTS_tot*NUM_EVOL_GFS;(ii)++)
"""
c_type = "int"
name = "main"
params = "int argc, const char *argv[]"
body = """
paramstruct params;
#include "set_Cparameters_default.h"
// Step 0a: Read command-line input, error out if nonconformant
if(argc != 4 || atoi(argv[1]) < NGHOSTS || atoi(argv[2]) < NGHOSTS || atoi(argv[3]) < NGHOSTS) {
printf("Error: Expected three command-line arguments: ./GiRaFFE_NRPy_standalone [Nx] [Ny] [Nz],\\n");
printf("where Nx is the number of grid points in the x direction, and so forth.\\n");
printf("Nx,Ny,Nz MUST BE larger than NGHOSTS (= %d)\\n",NGHOSTS);
exit(1);
}
// Step 0c: Set free parameters, overwriting Cparameters defaults
// by hand or with command-line input, as desired.
#include "free_parameters.h"
#include "set_Cparameters-nopointer.h"
// ... and then set up the numerical grid structure in time:
const REAL t_final = 0.5;
const REAL CFL_FACTOR = 0.5; // Set the CFL Factor
// Step 0c: Allocate memory for gridfunctions
const int Nxx_plus_2NGHOSTS_tot = Nxx_plus_2NGHOSTS0*Nxx_plus_2NGHOSTS1*Nxx_plus_2NGHOSTS2;
// Step 0k: Allocate memory for gridfunctions
#include "MoLtimestepping/RK_Allocate_Memory.h"
REAL *restrict auxevol_gfs = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *evol_gfs_exact = (REAL *)malloc(sizeof(REAL) * NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot);
REAL *auxevol_gfs_exact = (REAL *)malloc(sizeof(REAL) * NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot);
// For debugging, it can be useful to poison the grids by setting everything to infinity initially.
if(poison_grids) {
for(int ii=0;ii<NUM_EVOL_GFS * Nxx_plus_2NGHOSTS_tot;ii++) {
y_n_gfs[ii] = 1.0/0.0;
y_nplus1_running_total_gfs[ii] = 1.0/0.0;
//k_odd_gfs[ii] = 1.0/0.0;
//k_even_gfs[ii] = 1.0/0.0;
diagnostic_output_gfs[ii] = 1.0/0.0;
evol_gfs_exact[ii] = 1.0/0.0;
}
for(int ii=0;ii<NUM_AUXEVOL_GFS * Nxx_plus_2NGHOSTS_tot;ii++) {
auxevol_gfs[ii] = 1.0/0.0;
auxevol_gfs_exact[ii] = 1.0/0.0;
}
}
// Step 0d: Set up coordinates: Set dx, and then dt based on dx_min and CFL condition
// This is probably already defined above, but just in case...
#ifndef MIN
#define MIN(A, B) ( ((A) < (B)) ? (A) : (B) )
#endif
REAL dt = CFL_FACTOR * MIN(dxx0,MIN(dxx1,dxx2)); // CFL condition
int Nt = (int)(t_final / dt + 0.5); // The number of points in time.
//Add 0.5 to account for C rounding down integers.
// Step 0e: Set up cell-centered Cartesian coordinate grids
REAL *xx[3];
xx[0] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS0);
xx[1] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS1);
xx[2] = (REAL *)malloc(sizeof(REAL)*Nxx_plus_2NGHOSTS2);
for(int j=0;j<Nxx_plus_2NGHOSTS0;j++) xx[0][j] = xxmin[0] + (j-NGHOSTS+1)*dxx0;
for(int j=0;j<Nxx_plus_2NGHOSTS1;j++) xx[1][j] = xxmin[1] + (j-NGHOSTS+1)*dxx1;
for(int j=0;j<Nxx_plus_2NGHOSTS2;j++) xx[2][j] = xxmin[2] + (j-NGHOSTS+1)*dxx2;
// Step 1: Set up initial data to be exact solution at time=0:
//REAL time;
set_initial_spacetime_metric_data(&params,xx,auxevol_gfs);
initial_data(&params,xx,auxevol_gfs,y_n_gfs);
// Fill in the remaining quantities
apply_bcs_potential(&params,y_n_gfs);
driver_A_to_B(&params,y_n_gfs,auxevol_gfs);
//override_BU_with_old_GiRaFFE(&params,auxevol_gfs,0);
GiRaFFE_NRPy_prims_to_cons(&params,auxevol_gfs,y_n_gfs);
apply_bcs_velocity(&params,auxevol_gfs);
// Extra stack, useful for debugging:
GiRaFFE_NRPy_cons_to_prims(&params,xx,auxevol_gfs,y_n_gfs);
//GiRaFFE_NRPy_prims_to_cons(&params,auxevol_gfs,y_n_gfs);
//GiRaFFE_NRPy_cons_to_prims(&params,xx,auxevol_gfs,y_n_gfs);
//GiRaFFE_NRPy_prims_to_cons(&params,auxevol_gfs,y_n_gfs);
//GiRaFFE_NRPy_cons_to_prims(&params,xx,auxevol_gfs,y_n_gfs);
for(int n=0;n<=Nt;n++) { // Main loop to progress forward in time.
//for(int n=0;n<=1;n++) { // Main loop to progress forward in time.
// Step 1a: Set current time to correct value & compute exact solution
//time = ((REAL)n)*dt;
/* Step 2: Validation: Output relative error between numerical and exact solution, */
if((n)%NSKIP_1D_OUTPUT ==0) {
// Step 2c: Output relative error between exact & numerical at center of grid.
const int i0mid=Nxx_plus_2NGHOSTS0/2;
const int i1mid=Nxx_plus_2NGHOSTS1/2;
const int i2mid=Nxx_plus_2NGHOSTS2/2;
char filename[100];
sprintf(filename,"out%d-%08d.txt",Nxx0,n);
FILE *out2D = fopen(filename, "w");
for(int i0=0;i0<Nxx_plus_2NGHOSTS0;i0++) {
const int idx = IDX3S(i0,i1mid,i2mid);
fprintf(out2D,"%.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e\\n",
xx[0][i0],
auxevol_gfs[IDX4ptS(BU0GF,idx)],auxevol_gfs[IDX4ptS(BU1GF,idx)],auxevol_gfs[IDX4ptS(BU2GF,idx)],
y_n_gfs[IDX4ptS(AD0GF,idx)],y_n_gfs[IDX4ptS(AD1GF,idx)],y_n_gfs[IDX4ptS(AD2GF,idx)],
y_n_gfs[IDX4ptS(STILDED0GF,idx)],y_n_gfs[IDX4ptS(STILDED1GF,idx)],y_n_gfs[IDX4ptS(STILDED2GF,idx)],
auxevol_gfs[IDX4ptS(VALENCIAVU0GF,idx)],auxevol_gfs[IDX4ptS(VALENCIAVU1GF,idx)],auxevol_gfs[IDX4ptS(VALENCIAVU2GF,idx)],
y_n_gfs[IDX4ptS(PSI6PHIGF,idx)]);
}
fclose(out2D);
set_initial_spacetime_metric_data(&params,xx,auxevol_gfs_exact);
initial_data(&params,xx,auxevol_gfs_exact,evol_gfs_exact);
// Fill in the remaining quantities
driver_A_to_B(&params,evol_gfs_exact,auxevol_gfs_exact);
GiRaFFE_NRPy_prims_to_cons(&params,auxevol_gfs_exact,evol_gfs_exact);
sprintf(filename,"out%d-%08d_exact.txt",Nxx0,n);
FILE *out2D_exact = fopen(filename, "w");
for(int i0=0;i0<Nxx_plus_2NGHOSTS0;i0++) {
const int idx = IDX3S(i0,i1mid,i2mid);
fprintf(out2D_exact,"%.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e %.16e\\n",
xx[0][i0],
auxevol_gfs_exact[IDX4ptS(BU0GF,idx)],auxevol_gfs_exact[IDX4ptS(BU1GF,idx)],auxevol_gfs_exact[IDX4ptS(BU2GF,idx)],
evol_gfs_exact[IDX4ptS(AD0GF,idx)],evol_gfs_exact[IDX4ptS(AD1GF,idx)],evol_gfs_exact[IDX4ptS(AD2GF,idx)],
evol_gfs_exact[IDX4ptS(STILDED0GF,idx)],evol_gfs_exact[IDX4ptS(STILDED1GF,idx)],evol_gfs_exact[IDX4ptS(STILDED2GF,idx)],
auxevol_gfs_exact[IDX4ptS(VALENCIAVU0GF,idx)],auxevol_gfs_exact[IDX4ptS(VALENCIAVU1GF,idx)],auxevol_gfs_exact[IDX4ptS(VALENCIAVU2GF,idx)],
evol_gfs_exact[IDX4ptS(PSI6PHIGF,idx)]);
}
fclose(out2D_exact);
}
// Step 3: Evolve scalar wave initial data forward in time using Method of Lines with RK4 algorithm,
// applying quadratic extrapolation outer boundary conditions.
// Step 3.b: Step forward one timestep (t -> t+dt) in time using
// chosen RK-like MoL timestepping algorithm
#include "MoLtimestepping/RK_MoL.h"
} // End main loop to progress forward in time.
// Step 4: Free all allocated memory
#include "MoLtimestepping/RK_Free_Memory.h"
free(auxevol_gfs);
free(auxevol_gfs_exact);
free(evol_gfs_exact);
for(int i=0;i<3;i++) free(xx[i]);
return 0;
"""
add_to_Cfunction_dict(
includes=includes,
desc=desc,
c_type=c_type, name=name, params=params,
prefunc = prefunc, body=body,
rel_path_to_Cparams=os.path.join("."), enableCparameters=False)
md.add_to_Cfunction_dict__AD_gauge_term_psi6Phi_flux_term(includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
md.add_to_Cfunction_dict__AD_gauge_term_psi6Phi_fin_diff(includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
md.add_to_Cfunction_dict__cons_to_prims(md.StildeD,md.BU,md.gammaDD,md.betaU,md.alpha,includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
md.add_to_Cfunction_dict__prims_to_cons(md.gammaDD,md.betaU,md.alpha,md.ValenciavU,md.BU,md.sqrt4pi,includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
import GiRaFFE_NRPy.GiRaFFE_NRPy_Source_Terms as source
source.add_to_Cfunction_dict__functions_for_StildeD_source_term(md.outCparams,md.gammaDD,md.betaU,md.alpha,
md.ValenciavU,md.BU,md.sqrt4pi,includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
import GiRaFFE_NRPy.Stilde_flux as Sf
Sf.add_to_Cfunction_dict__Stilde_flux(includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"], inputs_provided = True, alpha_face=md.alpha_face, gamma_faceDD=md.gamma_faceDD,
beta_faceU=md.beta_faceU, Valenciav_rU=md.Valenciav_rU, B_rU=md.B_rU,
Valenciav_lU=md.Valenciav_lU, B_lU=md.B_lU, sqrt4pi=md.sqrt4pi)
import GiRaFFE_NRPy.GiRaFFE_NRPy_Afield_flux_handwritten as Af
Af.add_to_Cfunction_dict__GiRaFFE_NRPy_Afield_flux(md.gammaDD, md.betaU, md.alpha, Ccodesdir)
import GiRaFFE_NRPy.GiRaFFE_NRPy_Metric_Face_Values as FCVAL
FCVAL.add_to_Cfunction_dict__GiRaFFE_NRPy_FCVAL(includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
import GiRaFFE_NRPy.GiRaFFE_NRPy_PPM as PPM
PPM.add_to_Cfunction_dict__GiRaFFE_NRPy_PPM(Ccodesdir)
import GiRaFFE_NRPy.GiRaFFE_NRPy_A2B as A2B
A2B.add_to_Cfunction_dict__GiRaFFE_NRPy_A2B(md.gammaDD,md.AD,md.BU,includes=["NRPy_basic_defines.h","GiRaFFE_basic_defines.h"])
import GiRaFFE_NRPy.GiRaFFE_NRPy_BCs as BC
BC.add_to_Cfunction_dict__GiRaFFE_NRPy_BCs()
md.add_to_Cfunction_dict__driver_function()
add_to_Cfunction_dict_main__GiRaFFE_NRPy_3D_tests_unstaggered()
```
Now, we will register the remaining C functions and contributions to `NRPy_basic_defines.h`, then we output `NRPy_basic_defines.h` and `NRPy_function_prototypes.h`.
```
import outputC as outC
outC.outputC_register_C_functions_and_NRPy_basic_defines() # #define M_PI, etc.
# Declare paramstruct, register set_Cparameters_to_default(),
# and output declare_Cparameters_struct.h and set_Cparameters[].h:
outC.NRPy_param_funcs_register_C_functions_and_NRPy_basic_defines(os.path.join(Ccodesdir))
gri.register_C_functions_and_NRPy_basic_defines(enable_griddata_struct=False, enable_bcstruct_in_griddata_struct=False,
enable_rfmstruct=False,
enable_MoL_gfs_struct=False,
extras_in_griddata_struct=None) # #define IDX3S(), etc.
fin.register_C_functions_and_NRPy_basic_defines(NGHOSTS_account_for_onezone_upwind=True,
enable_SIMD=enable_SIMD) # #define NGHOSTS, and UPWIND() macro if SIMD disabled
# Output functions for computing all finite-difference stencils.
# Must be called after defining all functions depending on FD stencils.
if enable_FD_functions:
fin.output_finite_difference_functions_h(path=Ccodesdir)
# Call this last: Set up NRPy_basic_defines.h and NRPy_function_prototypes.h.
outC.construct_NRPy_basic_defines_h(Ccodesdir, enable_SIMD=enable_SIMD)
with open(os.path.join(Ccodesdir,"GiRaFFE_basic_defines.h"),"w") as file:
file.write("""#define NGHOSTS_A2B """+str(2)+"\n"+"""extern int kronecker_delta[4][3];
extern int MAXFACE;
extern int NUL;
extern int MINFACE;
extern int VX,VY,VZ,BX,BY,BZ;
extern int NUM_RECONSTRUCT_GFS;
// Structure to track ghostzones for PPM:
typedef struct __gf_and_gz_struct__ {
REAL *gf;
int gz_lo[4],gz_hi[4];
} gf_and_gz_struct;
""")
with open(os.path.join(Ccodesdir,"GiRaFFE_main_defines.h"),"w") as file:
file.write("""#define NGHOSTS_A2B """+str(2)+"\n"+PPM.kronecker_code+"""const int VX=0,VY=1,VZ=2,BX=3,BY=4,BZ=5;
const int NUM_RECONSTRUCT_GFS = 6;
const int MAXFACE = -1;
const int NUL = +0;
const int MINFACE = +1;
// Structure to track ghostzones for PPM:
typedef struct __gf_and_gz_struct__ {
REAL *gf;
int gz_lo[4],gz_hi[4];
} gf_and_gz_struct;
""")
outC.construct_NRPy_function_prototypes_h(Ccodesdir)
cmd.new_C_compile(Ccodesdir, os.path.join("output", "GiRaFFE_NRPy_standalone"),
uses_free_parameters_h=True, compiler_opt_option="fast") # fastdebug or debug also supported
# !gcc -g -O2 -fopenmp GiRaFFE_standalone_Ccodes/GiRaFFE_NRPy_standalone.c -o GiRaFFE_NRPy_standalone -lm
# Change to output directory
os.chdir(outdir)
# Clean up existing output files
cmd.delete_existing_files("out*.txt")
cmd.delete_existing_files("out*.png")
# cmd.Execute(os.path.join(Ccodesdir,"output","GiRaFFE_NRPy_standalone"), "640 16 16", os.path.join(outdir,"out640.txt"))
cmd.Execute("GiRaFFE_NRPy_standalone", "64 64 64","out64.txt")
# cmd.Execute("GiRaFFE_NRPy_standalone", "239 15 15","out239.txt")
# !OMP_NUM_THREADS=1 valgrind --track-origins=yes -v ./GiRaFFE_NRPy_standalone 1280 32 32
# Return to root directory
os.chdir(os.path.join("../../"))
```
Now, we will load the data generated by the simulation and plot it in order to test for convergence.
```
import numpy as np
import matplotlib.pyplot as plt
Data_numer = np.loadtxt(os.path.join(Ccodesdir,"output","out64-00000020.txt"))
# Data_num_2 = np.loadtxt(os.path.join("GiRaFFE_standalone_Ccodes","output","out239-00000080.txt"))
# Data_old = np.loadtxt("/home/penelson/OldCactus/Cactus/exe/ABE-GiRaFFEfood_1D_AlfvenWave/giraffe-grmhd_primitives_bi.x.asc")
# Data_o_2 = np.loadtxt("/home/penelson/OldCactus/Cactus/exe/ABE-GiRaFFEfood_1D_AlfvenWave_2/giraffe-grmhd_primitives_bi.x.asc")
# Data_numer = Data_old[5000:5125,11:15] # The column range is chosen for compatibility with the plotting script.
# Data_num_2 = Data_o_2[19600:19845,11:15] # The column range is chosen for compatibility with the plotting script.
Data_exact = np.loadtxt(os.path.join(Ccodesdir,"output","out64-00000020_exact.txt"))
# Data_exa_2 = np.loadtxt(os.path.join("GiRaFFE_standalone_Ccodes","output","out239-00000080_exact.txt"))
predicted_order = 2.0
column = 3
plt.figure()
# # plt.plot(Data_exact[2:-2,0],np.log2(np.absolute((Data_numer[2:-2,column]-Data_exact[2:-2,column])/\
# # (Data_num_2[2:-2:2,column]-Data_exa_2[2:-2:2,column]))),'.')
plt.plot(Data_exact[:,0],Data_exact[:,column])
plt.plot(Data_exact[:,0],Data_numer[:,column],'.')
# plt.xlim(-0.0,1.0)
# # plt.ylim(-1.0,5.0)
# # plt.ylim(-0.0005,0.0005)
# plt.xlabel("x")
# plt.ylabel("BU2")
plt.show()
# # 0 1 2 3 4 5 6 7 8 9 10 11 12 13
# labels = ["x","BU0","BU1","BU2","AD0","AD1","AD2","StildeD0","StildeD1","StildeD2","ValenciavU0","ValenciavU1","ValenciavU2", "psi6Phi"]
# old_files = ["",
# "giraffe-grmhd_primitives_bi.x.asc","giraffe-grmhd_primitives_bi.x.asc","giraffe-grmhd_primitives_bi.x.asc",
# # "giraffe-em_ax.x.asc","giraffe-em_ay.x.asc","giraffe-em_az.x.asc",
# "cell_centered_Ai.txt","cell_centered_Ai.txt","cell_centered_Ai.txt",
# "giraffe-grmhd_conservatives.x.asc","giraffe-grmhd_conservatives.x.asc","giraffe-grmhd_conservatives.x.asc",
# "giraffe-grmhd_primitives_allbutbi.x.asc","giraffe-grmhd_primitives_allbutbi.x.asc","giraffe-grmhd_primitives_allbutbi.x.asc",
# "giraffe-em_psi6phi.x.asc"]
# column = 5
# column_old = [0,12,13,14,0,1,2,12,13,14,12,13,14,12]
# old_path = "/home/penelson/OldCactus/Cactus/exe/ABE-GiRaFFEfood_1D_AlfvenWave"
# new_path = os.path.join("GiRaFFE_standalone_Ccodes","output")
# data_old = np.loadtxt(os.path.join(old_path,old_files[column]))
# # data_old = data_old[250:375,:]# Select only the second timestep
# # data_old = data_old[125:250,:]# Select only the first timestep
# # data_old = data_old[0:125,:]# Select only the zeroth timestep
# data_new = np.loadtxt(os.path.join(new_path,"out119-00000001.txt"))
# deltaA_old = data_old[125:250,:] - data_old[0:125,:]
# data_new_t0 = np.loadtxt(os.path.join(new_path,"out119-00000000.txt"))
# deltaA_new = data_new[:,:] - data_new_t0[:,:]
# plt.figure()
# # plt.plot(data_new[3:-3,0],data_new[3:-3,column]-data_old[3:-3,column_old[column]])
# # plt.plot(data_new[:,0],data_new[:,column]-((3*np.sin(5*np.pi*data_new[:,0]/np.sqrt(1 - (-0.5)**2))/20 + 23/20)*(data_new[:,0]/2 + np.sqrt(1 - (-0.5)**2)/20 + np.absolute(data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10)/2)*(-1e-100/2 + data_new[:,0]/2 - np.sqrt(1 - (-0.5)**2)/20 - np.absolute(-1e-100 + data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)/2)/((-1e-100 + data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)*(1e-100 + data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10)) + 13*(data_new[:,0]/2 - np.sqrt(1 - (-0.5)**2)/20 + np.absolute(data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)/2)/(10*(1e-100 + data_new[:,0] - np.sqrt(1 - (-0.5)**2)/10)) + (-1e-100/2 + data_new[:,0]/2 + np.sqrt(1 - (-0.5)**2)/20 - np.absolute(-1e-100 + data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10)/2)/(-1e-100 + data_new[:,0] + np.sqrt(1 - (-0.5)**2)/10))/np.sqrt(1 - (-0.5)**2))
# # plt.plot(data_new[1:,0]-(data_new[0,0]-data_new[1,0])/2.0,(data_new[0:-1,column]+data_new[1:,column])/2,'.',label="GiRaFFE_NRPy+injected BU")
# # plt.plot(data_new[1:,0]-(data_new[0,0]-data_new[1,0])/2.0,data_old[1:,column_old[column]],label="old GiRaFFE")
# # -(data_old[0,9]-data_old[1,9])/2.0
# # plt.plot(data_new[3:-3,0],deltaA_new[3:-3,column],'.')
# plt.plot(data_new[3:-3,0],deltaA_old[3:-3,column_old[column]]-deltaA_new[3:-3,column])
# # plt.xlim(-0.1,0.1)
# # plt.ylim(-0.2,0.2)
# plt.legend()
# plt.xlabel(labels[0])
# plt.ylabel(labels[column])
# plt.show()
# # print(np.argmin(deltaA_old[3:-3,column_old[column]]-deltaA_new[3:-3,column]))
```
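The commented-out `np.log2` lines above sketch the convergence check: the ratio of errors at two resolutions should approach `2**predicted_order`. As a self-contained illustration (with hypothetical error arrays standing in for the loaded simulation data), the observed order of a second-order scheme can be estimated like this:

```python
import numpy as np

# Hypothetical pointwise errors |numerical - exact| at grid spacing h and h/2;
# a second-order scheme shrinks the error by a factor of ~4 when h is halved.
err_coarse = np.array([4.0e-4, 3.2e-4, 2.8e-4])
err_fine = np.array([1.0e-4, 0.8e-4, 0.7e-4])

# Observed order of convergence: p = log2(|e_h| / |e_{h/2}|), computed pointwise.
observed_order = np.log2(np.abs(err_coarse) / np.abs(err_fine))
print(observed_order)  # each entry ~2.0, matching predicted_order = 2.0
```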
This code will create an animation of the wave over time.
```
# import matplotlib.pyplot as plt
from matplotlib.pyplot import savefig
from IPython.display import HTML
import matplotlib.image as mgimg
import glob
import sys
from matplotlib import animation
cmd.delete_existing_files("out119-00*.png")
globby = glob.glob(os.path.join('GiRaFFE_standalone_Ccodes','output','out119-00*.txt'))
file_list = []
for x in sorted(globby):
file_list.append(x)
number_of_files = int(len(file_list)/2)
for timestep in range(number_of_files):
fig = plt.figure()
numer_filename = file_list[2*timestep]
exact_filename = file_list[2*timestep+1]
Numer = np.loadtxt(numer_filename)
Exact = np.loadtxt(exact_filename)
plt.title("Alfven Wave")
plt.xlabel("x")
plt.ylabel("BU2")
plt.xlim(-0.5,0.5)
plt.ylim(1.0,1.7)
plt.plot(Numer[3:-3,0],Numer[3:-3,3],'.',label="Numerical")
plt.plot(Exact[3:-3,0],Exact[3:-3,3],label="Exact")
plt.legend()
savefig(numer_filename+".png",dpi=150)
plt.close(fig)
sys.stdout.write("%c[2K" % 27)
sys.stdout.write("Processing file "+numer_filename+"\r")
sys.stdout.flush()
## VISUALIZATION ANIMATION, PART 2: Combine PNGs to generate movie ##
# https://stackoverflow.com/questions/14908576/how-to-remove-frame-from-matplotlib-pyplot-figure-vs-matplotlib-figure-frame
# https://stackoverflow.com/questions/23176161/animating-pngs-in-matplotlib-using-artistanimation
# !rm -f GiRaFFE_NRPy-1D_tests.mp4
cmd.delete_existing_files("GiRaFFE_NRPy-1D_tests.mp4")
fig = plt.figure(frameon=False)
ax = fig.add_axes([0, 0, 1, 1])
ax.axis('off')
myimages = []
for i in range(number_of_files):
img = mgimg.imread(file_list[2*i]+".png")
imgplot = plt.imshow(img)
myimages.append([imgplot])
ani = animation.ArtistAnimation(fig, myimages, interval=100, repeat_delay=1000)
plt.close()
ani.save('GiRaFFE_NRPy-1D_tests.mp4', fps=5,dpi=150)
%%HTML
<video width="480" height="360" controls>
<source src="GiRaFFE_NRPy-1D_tests.mp4" type="video/mp4">
</video>
import cmdline_helper as cmd # NRPy+: Multi-platform Python command-line interface
cmd.output_Jupyter_notebook_to_LaTeXed_PDF("Tutorial-Start_to_Finish-GiRaFFE_NRPy-3D_tests-unstaggered_new_way",location_of_template_file=os.path.join(".."))
```
```
import keras
import keras.backend as K
from keras.datasets import mnist
from keras.models import Sequential, Model, load_model
from keras.layers import Dense, Dropout, Activation, Flatten, Input, Lambda
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, Conv1D, MaxPooling1D, LSTM, ConvLSTM2D, GRU, BatchNormalization, LocallyConnected2D, Permute, TimeDistributed, Bidirectional
from keras.layers import Concatenate, Reshape, Conv2DTranspose, Embedding, Multiply, Activation
from functools import partial
from collections import defaultdict
import os
import pickle
import numpy as np
import scipy.sparse as sp
import scipy.io as spio
import isolearn.io as isoio
import isolearn.keras as isol
import matplotlib.pyplot as plt
from sklearn import preprocessing
import pandas as pd
from sequence_logo_helper import dna_letter_at, plot_dna_logo
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
def contain_tf_gpu_mem_usage() :
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
set_session(sess)
contain_tf_gpu_mem_usage()
#optimus 5-prime functions
def test_data(df, model, test_seq, obs_col, output_col='pred'):
'''Predict mean ribosome load using model and test set UTRs'''
# Scale the test set mean ribosome load
scaler = preprocessing.StandardScaler()
    scaler.fit(df[obs_col].values.reshape(-1,1))  # use .values: newer pandas Series have no .reshape
# Make predictions
predictions = model.predict(test_seq).reshape(-1)
# Inverse scaled predicted mean ribosome load and return in a column labeled 'pred'
df.loc[:,output_col] = scaler.inverse_transform(predictions)
return df
def one_hot_encode(df, col='utr', seq_len=50):
# Dictionary returning one-hot encoding of nucleotides.
nuc_d = {'a':[1,0,0,0],'c':[0,1,0,0],'g':[0,0,1,0],'t':[0,0,0,1], 'n':[0,0,0,0]}
    # Create empty matrix.
vectors=np.empty([len(df),seq_len,4])
# Iterate through UTRs and one-hot encode
for i,seq in enumerate(df[col].str[:seq_len]):
seq = seq.lower()
a = np.array([nuc_d[x] for x in seq])
vectors[i] = a
return vectors
from scipy import stats  # linregress (used below) was missing from the imports above
def r2(x,y):
    slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
    return r_value**2
#Train data
e_train = pd.read_csv("bottom5KIFuAUGTop5KIFuAUG.csv")
e_train.loc[:,'scaled_rl'] = preprocessing.StandardScaler().fit_transform(e_train.loc[:,'rl'].values.reshape(-1,1))
seq_e_train = one_hot_encode(e_train,seq_len=50)
x_train = seq_e_train
x_train = np.reshape(x_train, (x_train.shape[0], 1, x_train.shape[1], x_train.shape[2]))
y_train = np.array(e_train['scaled_rl'].values)
y_train = np.reshape(y_train, (y_train.shape[0],1))
print("x_train.shape = " + str(x_train.shape))
print("y_train.shape = " + str(y_train.shape))
#Load Predictor
predictor_path = 'optimusRetrainedMain.hdf5'
predictor = load_model(predictor_path)
predictor.trainable = False
predictor.compile(optimizer=keras.optimizers.SGD(lr=0.1), loss='mean_squared_error')
#Generate (original) predictions
pred_train = predictor.predict(x_train[:, 0, ...], batch_size=32)
###########################################
####################L2X####################
###########################################
from keras.callbacks import ModelCheckpoint
from keras.models import Model, Sequential
import numpy as np
import tensorflow as tf
from keras.layers import MaxPooling2D, Flatten, Conv2D, Input, GlobalMaxPooling2D, Multiply, Lambda, Embedding, Dense, Dropout, Activation
from keras.datasets import imdb
from keras import backend as K
from keras.engine.topology import Layer
# Define various Keras layers.
class Concatenate1D(Layer):
"""
Layer for concatenation.
"""
def __init__(self, **kwargs):
super(Concatenate1D, self).__init__(**kwargs)
def call(self, inputs):
input1, input2 = inputs
input1 = tf.expand_dims(input1, axis = -2) # [batchsize, 1, input1_dim]
dim1 = int(input2.get_shape()[1])
input1 = tf.tile(input1, [1, dim1, 1])
return tf.concat([input1, input2], axis = -1)
def compute_output_shape(self, input_shapes):
input_shape1, input_shape2 = input_shapes
input_shape = list(input_shape2)
input_shape[-1] = int(input_shape[-1]) + int(input_shape1[-1])
input_shape[-2] = int(input_shape[-2])
return tuple(input_shape)
class Concatenate2D(Layer):
"""
Layer for concatenation.
"""
def __init__(self, **kwargs):
super(Concatenate2D, self).__init__(**kwargs)
def call(self, inputs):
input1, input2 = inputs
input1 = tf.expand_dims(tf.expand_dims(input1, axis = -2), axis = -2) # [batchsize, 1, 1, input1_dim]
dim1 = int(input2.get_shape()[1])
dim2 = int(input2.get_shape()[2])
input1 = tf.tile(input1, [1, dim1, dim2, 1])
return tf.concat([input1, input2], axis = -1)
def compute_output_shape(self, input_shapes):
input_shape1, input_shape2 = input_shapes
input_shape = list(input_shape2)
input_shape[-1] = int(input_shape[-1]) + int(input_shape1[-1])
input_shape[-2] = int(input_shape[-2])
input_shape[-3] = int(input_shape[-3])
return tuple(input_shape)
class Sample_Concrete(Layer):
"""
    Layer for sampling Concrete / Gumbel-Softmax variables.
"""
def __init__(self, tau0, k, **kwargs):
self.tau0 = tau0
self.k = k
super(Sample_Concrete, self).__init__(**kwargs)
def call(self, logits):
# logits: [batch_size, d, 1]
logits_ = K.permute_dimensions(logits, (0,2,1))# [batch_size, 1, d]
d = int(logits_.get_shape()[2])
        unif_shape = [batch_size,self.k,d]  # note: batch_size is read from the notebook's global scope
uniform = K.random_uniform_variable(shape=unif_shape,
low = np.finfo(tf.float32.as_numpy_dtype).tiny,
high = 1.0)
gumbel = - K.log(-K.log(uniform))
noisy_logits = (gumbel + logits_)/self.tau0
samples = K.softmax(noisy_logits)
samples = K.max(samples, axis = 1)
logits = tf.reshape(logits,[-1, d])
threshold = tf.expand_dims(tf.nn.top_k(logits, self.k, sorted = True)[0][:,-1], -1)
discrete_logits = tf.cast(tf.greater_equal(logits,threshold),tf.float32)
output = K.in_train_phase(samples, discrete_logits)
return tf.expand_dims(output,-1)
def compute_output_shape(self, input_shape):
return input_shape
def construct_gumbel_selector(X_ph, n_filters=32, n_dense_units=32):
"""
    Build the L2X model for the selection operator.
"""
first_layer = Conv2D(n_filters, (1, 7), padding='same', activation='relu', strides=1, name = 'conv1_gumbel')(X_ph)
# global info
net_new = GlobalMaxPooling2D(name = 'new_global_max_pooling1d_1')(first_layer)
global_info = Dense(n_dense_units, name = 'new_dense_1', activation='relu')(net_new)
# local info
net = Conv2D(n_filters, (1, 7), padding='same', activation='relu', strides=1, name = 'conv2_gumbel')(first_layer)
local_info = Conv2D(n_filters, (1, 7), padding='same', activation='relu', strides=1, name = 'conv3_gumbel')(net)
combined = Concatenate2D()([global_info,local_info])
net = Dropout(0.2, name = 'new_dropout_2')(combined)
net = Conv2D(n_filters, (1, 1), padding='same', activation='relu', strides=1, name = 'conv_last_gumbel')(net)
logits_T = Conv2D(1, (1, 1), padding='same', activation=None, strides=1, name = 'conv4_gumbel')(net)
return logits_T
def L2X(x_train, y_train, pred_train, x_val, y_val, pred_val, k=10, batch_size=32, epochs=5, hidden_dims=250):
"""
Generate scores on features on validation by L2X.
Train the L2X model with variational approaches
if train = True.
"""
Mean1D = Lambda(lambda x, k=k: K.sum(x, axis = 1) / float(k), output_shape=lambda x: [x[0],x[2]])
Mean2D = Lambda(lambda x, k=k: K.sum(x, axis = (1, 2)) / float(k), output_shape=lambda x: [x[0],x[3]])
print('Creating model...')
# P(S|X)
with tf.variable_scope('selection_model'):
X_ph = Input(shape=(x_train.shape[1], x_train.shape[2], x_train.shape[3]))
logits_T = construct_gumbel_selector(X_ph)
tau = 0.5
#Extra code: Flatten 2D
orig_logits_T = logits_T
logits_T = Lambda(lambda x: K.reshape(x, (K.shape(x)[0], x_train.shape[1] * x_train.shape[2], 1)))(logits_T)
T = Sample_Concrete(tau, k)(logits_T)
#Extra code: Inflate 2D
T = Lambda(lambda x: K.reshape(x, (K.shape(x)[0], x_train.shape[1], x_train.shape[2], 1)))(T)
# q(X_S)
with tf.variable_scope('prediction_model'):
#Same architecture as original predictor
net = Multiply()([X_ph, T])
net = Conv2D(activation="relu", padding='same', filters=120, kernel_size=(1, 8))(net)
net = Conv2D(activation="relu", padding='same', filters=120, kernel_size=(1, 8))(net)
net = Conv2D(activation="relu", padding='same', filters=120, kernel_size=(1, 8))(net)
net = Flatten()(net)
net = Dense(hidden_dims, activation='relu')(net)
net = Dropout(0.2)(net)
preds = Dense(pred_train.shape[1], activation='linear', name = 'new_dense')(net)
'''
#Default approximator
net = Mean2D(Multiply()([X_ph, T]))
net = Dense(hidden_dims)(net)
net = Dropout(0.2)(net)
net = Activation('relu')(net)
preds = Dense(pred_train.shape[1], activation='softmax', name = 'new_dense')(net)
'''
model = Model(inputs=X_ph, outputs=preds)
model.compile(loss='mean_squared_error', optimizer='rmsprop', metrics=['mean_squared_error'])
train_mse = np.mean((pred_train[:, 0] - y_train[:, 0])**2)
val_mse = np.mean((pred_val[:, 0] - y_val[:, 0])**2)
print('The train and validation mse of the original model is {} and {}'.format(train_mse, val_mse))
#print(model.summary())
'''
checkpoint = ModelCheckpoint("saved_models/l2x.hdf5", monitor='val_mean_squared_error', verbose=1, save_best_only=True, save_weights_only=True, mode='min')
model.fit(x_train, pred_train,
validation_data=(x_val, pred_val),
callbacks=[checkpoint],
epochs=epochs, batch_size=batch_size
)
'''
model.load_weights('saved_models/l2x.hdf5', by_name=True)
pred_model = Model([X_ph], [orig_logits_T, preds])
pred_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])
pred_model.load_weights('saved_models/l2x.hdf5', by_name=True)
scores, q = pred_model.predict(x_val, verbose=1, batch_size=batch_size)
return scores, q
#Gradient saliency/backprop visualization
import matplotlib.collections as collections
import operator
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib.colors as colors
import matplotlib as mpl
from matplotlib.text import TextPath
from matplotlib.patches import PathPatch, Rectangle
from matplotlib.font_manager import FontProperties
from matplotlib import gridspec
from matplotlib.ticker import FormatStrFormatter
def plot_importance_scores(importance_scores, ref_seq, figsize=(12, 2), score_clip=None, sequence_template='', plot_start=0, plot_end=96) :
end_pos = ref_seq.find("#")
fig = plt.figure(figsize=figsize)
ax = plt.gca()
if score_clip is not None :
importance_scores = np.clip(np.copy(importance_scores), -score_clip, score_clip)
max_score = np.max(np.sum(importance_scores[:, :], axis=0)) + 0.01
for i in range(0, len(ref_seq)) :
mutability_score = np.sum(importance_scores[:, i])
dna_letter_at(ref_seq[i], i + 0.5, 0, mutability_score, ax)
plt.sca(ax)
plt.xlim((0, len(ref_seq)))
plt.ylim((0, max_score))
plt.axis('off')
plt.yticks([0.0, max_score], [0.0, max_score], fontsize=16)
for axis in fig.axes :
axis.get_xaxis().set_visible(False)
axis.get_yaxis().set_visible(False)
plt.tight_layout()
plt.show()
#Execute L2X benchmark on synthetic datasets
k = int(np.ceil(0.2 * 50))
batch_size = 32
hidden_dims = 40
epochs = 5
encoder = isol.OneHotEncoder(50)
score_clip = None
allFiles = ["optimus5_synthetic_random_insert_if_uorf_1_start_1_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_1_start_2_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_2_start_1_stop_variable_loc_512.csv",
"optimus5_synthetic_random_insert_if_uorf_2_start_2_stop_variable_loc_512.csv",
"optimus5_synthetic_examples_3.csv"]
for csv_to_open in allFiles :
#Load dataset for benchmarking
dataset_name = csv_to_open.replace(".csv", "")
benchmarkSet = pd.read_csv(csv_to_open)
seq_e_test = one_hot_encode(benchmarkSet, seq_len=50)
x_test = seq_e_test[:, None, ...]
print(x_test.shape)
pred_test = predictor.predict(x_test[:, 0, ...], batch_size=32)
y_test = pred_test
importance_scores_test, q_test = L2X(
x_train,
y_train,
pred_train,
x_test,
y_test,
pred_test,
k=k,
batch_size=batch_size,
epochs=epochs,
hidden_dims=hidden_dims
)
for plot_i in range(0, 3) :
print("Test sequence " + str(plot_i) + ":")
plot_dna_logo(x_test[plot_i, 0, :, :], sequence_template='N'*50, plot_sequence_template=True, figsize=(12, 1), plot_start=0, plot_end=50)
plot_importance_scores(np.maximum(importance_scores_test[plot_i, 0, :, :].T, 0.), encoder.decode(x_test[plot_i, 0, :, :]), figsize=(12, 1), score_clip=score_clip, sequence_template='N'*50, plot_start=0, plot_end=50)
#Save predicted importance scores
model_name = "l2x_" + dataset_name
np.save(model_name + "_importance_scores_test", importance_scores_test)
```
```
import jsonschema, yaml, json, unittest, pytest
from jsonschema import ValidationError, SchemaError
from nicHelper.wrappers import add_method
```
# Unit test for schema
## define test data folder
```
dataFolder = 'testData' # data folder
```
## load schema
```
# load schema
with open('./group.yml', 'r') as f: # load the schema
schema = yaml.load(f.read(), Loader=yaml.FullLoader)
# load test object
with open('./testData/testGroup.yaml') as f:
testItem = yaml.load(f.read(), Loader=yaml.FullLoader)
def test_schema():
## validate
### fail
with pytest.raises(ValidationError):
jsonschema.validate({},schema) # check that the json schema is valid
print('json schema is valid')
### success
jsonschema.validate(testItem, schema)
print('required fields work properly')
test_schema()
```
## create unit test object
```
class TestValidation(unittest.TestCase):
def setUp(self):
        with open('./order.yaml', 'r') as f: # load the schema
self.schema = yaml.load(f.read(), Loader=yaml.FullLoader)
self.dataFolder = 'testData' # data folder
```
### good sample case
```
@add_method(TestValidation)
def testPassingGoodSample(self):
# good sample
    with open(f'./{self.dataFolder}/goodSample.json', 'r') as f:
goodItem = json.load(f)
try:
jsonschema.validate(goodItem,self.schema)
except ValidationError as e:
print(e)
```
### bad case
#### wrong type
```
@add_method(TestValidation)
def testWrongType(self):
    with open(f'./{self.dataFolder}/wrongType.json', 'r') as f:
badItem = json.load(f)
with self.assertRaises(ValidationError):
jsonschema.validate(badItem,self.schema)
try:
jsonschema.validate(badItem,self.schema)
except ValidationError as e:
self.assertTrue("1234 is not of type 'string'" in e.message,
f'wrong error message{e.message}, should be 1234 is not type string')
```
#### extraneous column
```
@add_method(TestValidation)
def testExtraColumn(self):
    with open(f'./{self.dataFolder}/extraCol.json', 'r') as f:
badItem = json.load(f)
with self.assertRaises(ValidationError):
jsonschema.validate(badItem,self.schema)
try:
jsonschema.validate(badItem,self.schema)
except ValidationError as e:
self.assertTrue('Additional properties are not allowed' in e.message)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestValidation)
unittest.TextTestRunner().run(suite)
try:
!jupyter nbconvert --to script tester.ipynb
except:
pass
```
# Project: **German Traffic Sign Classification Using TensorFlow**
**In this project, I used Python and TensorFlow to classify traffic signs.**
**Dataset used: [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
This dataset has more than 50,000 images of 43 classes.**
**I was able to reach a +99% validation accuracy, and a 97.3% testing accuracy.**
## Pipeline architecture:
- **Load The Data.**
- **Dataset Summary & Exploration**
- **Data Preprocessing.**
- Shuffling.
- Grayscaling.
- Local Histogram Equalization.
- Normalization.
- **Design a Model Architecture.**
- LeNet-5.
- VGGNet.
- **Model Training and Evaluation.**
- **Testing the Model Using the Test Set.**
- **Testing the Model on New Images.**
I'll explain each step in detail below.
#### Environment:
- Ubuntu 16.04
- Anaconda 5.0.1
- Python 3.6.6
- TensorFlow 1.12.0 (GPU support)
```
# Importing Python libraries
import pickle
import numpy as np
import matplotlib.pyplot as plt
import random
import cv2
import skimage.morphology as morp
from skimage.filters import rank
from sklearn.utils import shuffle
import csv
import os
import tensorflow as tf
from tensorflow.contrib.layers import flatten
from sklearn.metrics import confusion_matrix
# is it using the GPU?
print(tf.test.gpu_device_name())
# Show current TensorFlow version
tf.__version__
```
---
## Step 1: Load The Data
Download the dataset from [here](https://d17h27t6h515a5.cloudfront.net/topher/2017/February/5898cd6f_traffic-signs-data/traffic-signs-data.zip). This is a pickled dataset in which we've already resized the images to 32x32.
We already have three `.p` files of 32x32 resized images:
- `train.p`: The training set.
- `test.p`: The testing set.
- `valid.p`: The validation set.
We will use Python `pickle` to load the data.
```
training_file = "./traffic-signs-data/train.p"
validation_file= "./traffic-signs-data/valid.p"
testing_file = "./traffic-signs-data/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
# Mapping ClassID to traffic sign names
signs = []
with open('signnames.csv', 'r') as csvfile:
signnames = csv.reader(csvfile, delimiter=',')
next(signnames,None)
for row in signnames:
signs.append(row[1])
csvfile.close()
```
---
## Step 2: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
- `'sizes'` is a list containing tuples, (width, height) representing the original width and height of the image.
- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image.
The code snippets below will provide a basic summary of the dataset.
**First, we will use `numpy` to provide the number of images in each subset, in addition to the image size and the number of unique classes.**
```
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# Number of training examples
n_train = X_train.shape[0]
# Number of testing examples
n_test = X_test.shape[0]
# Number of validation examples.
n_validation = X_valid.shape[0]
# What's the shape of a traffic sign image?
image_shape = X_train[0].shape
# How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples: ", n_train)
print("Number of testing examples: ", n_test)
print("Number of validation examples: ", n_validation)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
```
**Then, we will use `matplotlib` to plot sample images from each subset.**
```
def list_images(dataset, dataset_y, ylabel="", cmap=None):
"""
Display a list of images in a single figure with matplotlib.
    Parameters:
        dataset: An np.array of images compatible with plt.imshow.
        dataset_y: The labels corresponding to the images in dataset.
        ylabel (Default = ""): A string used to label the y-axis of each image.
        cmap (Default = None): Used to display gray images.
"""
plt.figure(figsize=(15, 16))
for i in range(6):
plt.subplot(1, 6, i+1)
indx = random.randint(0, len(dataset))
#Use gray scale color map if there is only one channel
cmap = 'gray' if len(dataset[indx].shape) == 2 else cmap
plt.imshow(dataset[indx], cmap = cmap)
plt.xlabel(signs[dataset_y[indx]])
plt.ylabel(ylabel)
plt.xticks([])
plt.yticks([])
plt.tight_layout(pad=0, h_pad=0, w_pad=0)
plt.show()
# Plotting sample examples
list_images(X_train, y_train, "Training example")
list_images(X_test, y_test, "Testing example")
list_images(X_valid, y_valid, "Validation example")
```
**And finally, we will use `numpy` to plot a histogram of the count of images in each unique class.**
```
def histogram_plot(dataset, label):
    """
    Plots a histogram of the input data.
    Parameters:
        dataset: Input data to be plotted as a histogram.
        label: A string to be used as the x-axis label of the histogram.
    """
    hist, bins = np.histogram(dataset, bins=n_classes)
    width = 0.7 * (bins[1] - bins[0])
    center = (bins[:-1] + bins[1:]) / 2
    plt.bar(center, hist, align='center', width=width)
    plt.xlabel(label)
    plt.ylabel("Image count")
    plt.show()
# Plotting histograms of the count of each sign
histogram_plot(y_train, "Training examples")
histogram_plot(y_test, "Testing examples")
histogram_plot(y_valid, "Validation examples")
```
---
## Step 3: Data Preprocessing
In this step, we will apply several preprocessing steps to the input images to achieve the best possible results.
**We will use the following preprocessing techniques:**
1. Shuffling.
2. Grayscaling.
3. Local Histogram Equalization.
4. Normalization.
**1. Shuffling**: In general, we shuffle the training data to increase randomness and variety in the training set, which makes training more stable. We will use `sklearn` to shuffle our data.
```
X_train, y_train = shuffle(X_train, y_train)
```
**2. Grayscaling**: In their 2011 paper ["Traffic Sign Recognition with Multi-Scale Convolutional Networks"](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf), P. Sermanet and Y. LeCun reported that using grayscale images instead of color improves the ConvNet's accuracy. We will use `OpenCV` to convert the training images to grayscale.
```
def gray_scale(image):
    """
    Convert an image to grayscale.
    Parameters:
        image: An np.array compatible with plt.imshow.
    """
    return cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Sample images after grayscaling
gray_images = list(map(gray_scale, X_train))
list_images(gray_images, y_train, "Gray Scale image", "gray")
```
**3. Local Histogram Equalization**: This technique spreads out the most frequent intensity values in an image, enhancing images with low contrast. Applying it is very helpful in our case, since the dataset contains real-world images, many of which have low contrast. We will use `skimage` to apply local histogram equalization to the training images.
```
def local_histo_equalize(image):
    """
    Apply local histogram equalization to a grayscale image.
    Parameters:
        image: A grayscale image.
    """
    kernel = morp.disk(30)
    img_local = rank.equalize(image, selem=kernel)
    return img_local
# Sample images after Local Histogram Equalization
equalized_images = list(map(local_histo_equalize, gray_images))
list_images(equalized_images, y_train, "Equalized Image", "gray")
```
**4. Normalization**: Normalization changes the range of pixel intensity values. Image data is usually normalized so that it has zero mean and equal variance; here, we simply scale the pixel values to the [0, 1] range.
```
def image_normalize(image):
    """
    Normalize an image to the [0, 1] scale.
    Parameters:
        image: An np.array compatible with plt.imshow.
    """
    return np.divide(image, 255)

# Sample images after normalization
n_training = X_train.shape
normalized_images = np.zeros((n_training[0], n_training[1], n_training[2]))
for i, img in enumerate(equalized_images):
    normalized_images[i] = image_normalize(img)
list_images(normalized_images, y_train, "Normalized Image", "gray")
normalized_images = normalized_images[..., None]

def preprocess(data):
    """
    Apply the full preprocessing pipeline to the input data.
    Parameters:
        data: An np.array of RGB images.
    """
    gray_images = list(map(gray_scale, data))
    equalized_images = list(map(local_histo_equalize, gray_images))
    n = data.shape
    normalized_images = np.zeros((n[0], n[1], n[2]))
    for i, img in enumerate(equalized_images):
        normalized_images[i] = image_normalize(img)
    return normalized_images[..., None]
```
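The code above scales pixels to [0, 1], while the prose mentions zero-mean, equal-variance normalization. For comparison, here is a hedged sketch of per-image standardization (an alternative, not the variant used for this notebook's results):

```python
import numpy as np

def standardize(image):
    """Shift an image to zero mean and unit variance (per-image)."""
    image = image.astype(np.float32)
    # The small epsilon guards against division by zero on constant images.
    return (image - image.mean()) / (image.std() + 1e-8)
```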
---
## Step 3: Design a Model Architecture
In this step, we will design and implement a deep learning model that learns to recognize traffic signs from our dataset [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
We'll use Convolutional Neural Networks to classify the images in this dataset. The reason behind choosing ConvNets is that they are designed to recognize visual patterns directly from pixel images with minimal preprocessing. They automatically learn hierarchies of invariant features at every level from data.
We will implement two of the most famous ConvNets. Our goal is to reach an accuracy of 95% or more on the validation set.
I'll start by explaining each network architecture, then implement it using TensorFlow.
**Notes**:
1. We use a learning rate of 0.001, which controls how quickly the network updates its weights.
2. We minimize the loss function using the Adaptive Moment Estimation (Adam) algorithm, introduced by D. Kingma and J. Ba in their 2015 paper [Adam: A Method for Stochastic Optimization](https://arxiv.org/abs/1412.6980). Adam computes adaptive learning rates for each parameter. In addition to storing an exponentially decaying average of past squared gradients like the [Adadelta](https://arxiv.org/pdf/1212.5701.pdf) and [RMSprop](https://www.cs.toronto.edu/~tijmen/csc321/slides/lecture_slides_lec6.pdf) algorithms, Adam also keeps an exponentially decaying average of past gradients $m_t$, similar to the [momentum algorithm](http://www.sciencedirect.com/science/article/pii/S0893608098001166?via%3Dihub), which in turn produces better results.
3. We will call the optimizer's `minimize()` function, which uses backpropagation to update the network and minimize the training loss.
### 1. LeNet-5
LeNet-5 is a convolutional network designed for handwritten and machine-printed character recognition. It was introduced by [Yann LeCun](https://en.wikipedia.org/wiki/Yann_LeCun) in his 1998 paper [Gradient-Based Learning Applied to Document Recognition](http://yann.lecun.com/exdb/publis/pdf/lecun-01a.pdf). Although this ConvNet was intended to classify handwritten digits, we expect it to achieve high accuracy on traffic signs as well, given that both handwritten digits and traffic signs are presented to the computer as pixel images.
**LeNet-5 architecture:**
<figure>
<img src="LeNet.png" width="1072" alt="Combined Image" />
<figcaption>
<p></p>
</figcaption>
</figure>
This ConvNet follows these steps:
Input => Convolution => ReLU => Pooling => Convolution => ReLU => Pooling => FullyConnected => ReLU => FullyConnected
**Layer 1 (Convolutional):** The output shape should be 28x28x6.
**Activation.** Your choice of activation function.
**Pooling.** The output shape should be 14x14x6.
**Layer 2 (Convolutional):** The output shape should be 10x10x16.
**Activation.** Your choice of activation function.
**Pooling.** The output shape should be 5x5x16.
**Flattening:** Flatten the output shape of the final pooling layer such that it's 1D instead of 3D.
**Layer 3 (Fully Connected):** This should have 120 outputs.
**Activation.** Your choice of activation function.
**Layer 4 (Fully Connected):** This should have 84 outputs.
**Activation.** Your choice of activation function.
**Layer 5 (Fully Connected):** This should have 43 outputs.
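The output shapes listed above follow from the standard formula for 'VALID' convolutions and pooling, output = (input − filter) / stride + 1. A quick sanity check of the LeNet-5 spatial sizes:

```python
def valid_out(size, filt, stride=1):
    """Output spatial size of a 'VALID' convolution or pooling layer."""
    return (size - filt) // stride + 1

assert valid_out(32, 5) == 28        # Layer 1 conv: 32x32 -> 28x28
assert valid_out(28, 2, stride=2) == 14  # pooling: 28x28 -> 14x14
assert valid_out(14, 5) == 10        # Layer 2 conv: 14x14 -> 10x10
assert valid_out(10, 2, stride=2) == 5   # pooling: 10x10 -> 5x5
```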
```
class LaNet:
    def __init__(self, n_out=43, mu=0, sigma=0.1, learning_rate=0.001):
        # Hyperparameters
        self.mu = mu
        self.sigma = sigma

        # Layer 1 (Convolutional): Input = 32x32x1. Output = 28x28x6.
        self.filter1_width = 5
        self.filter1_height = 5
        self.input1_channels = 1
        self.conv1_output = 6
        # Weight and bias
        self.conv1_weight = tf.Variable(tf.truncated_normal(
            shape=(self.filter1_width, self.filter1_height, self.input1_channels, self.conv1_output),
            mean=self.mu, stddev=self.sigma))
        self.conv1_bias = tf.Variable(tf.zeros(self.conv1_output))
        # Apply convolution
        self.conv1 = tf.nn.conv2d(x, self.conv1_weight, strides=[1, 1, 1, 1], padding='VALID') + self.conv1_bias
        # Activation
        self.conv1 = tf.nn.relu(self.conv1)
        # Pooling: Input = 28x28x6. Output = 14x14x6.
        self.conv1 = tf.nn.max_pool(self.conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

        # Layer 2 (Convolutional): Output = 10x10x16.
        self.filter2_width = 5
        self.filter2_height = 5
        self.input2_channels = 6
        self.conv2_output = 16
        # Weight and bias
        self.conv2_weight = tf.Variable(tf.truncated_normal(
            shape=(self.filter2_width, self.filter2_height, self.input2_channels, self.conv2_output),
            mean=self.mu, stddev=self.sigma))
        self.conv2_bias = tf.Variable(tf.zeros(self.conv2_output))
        # Apply convolution
        self.conv2 = tf.nn.conv2d(self.conv1, self.conv2_weight, strides=[1, 1, 1, 1], padding='VALID') + self.conv2_bias
        # Activation
        self.conv2 = tf.nn.relu(self.conv2)
        # Pooling: Input = 10x10x16. Output = 5x5x16.
        self.conv2 = tf.nn.max_pool(self.conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

        # Flattening: Input = 5x5x16. Output = 400.
        self.fully_connected0 = flatten(self.conv2)

        # Layer 3 (Fully Connected): Input = 400. Output = 120.
        self.connected1_weights = tf.Variable(tf.truncated_normal(shape=(400, 120), mean=self.mu, stddev=self.sigma))
        self.connected1_bias = tf.Variable(tf.zeros(120))
        self.fully_connected1 = tf.add(tf.matmul(self.fully_connected0, self.connected1_weights), self.connected1_bias)
        # Activation
        self.fully_connected1 = tf.nn.relu(self.fully_connected1)

        # Layer 4 (Fully Connected): Input = 120. Output = 84.
        self.connected2_weights = tf.Variable(tf.truncated_normal(shape=(120, 84), mean=self.mu, stddev=self.sigma))
        self.connected2_bias = tf.Variable(tf.zeros(84))
        self.fully_connected2 = tf.add(tf.matmul(self.fully_connected1, self.connected2_weights), self.connected2_bias)
        # Activation
        self.fully_connected2 = tf.nn.relu(self.fully_connected2)

        # Layer 5 (Fully Connected): Input = 84. Output = 43.
        self.output_weights = tf.Variable(tf.truncated_normal(shape=(84, 43), mean=self.mu, stddev=self.sigma))
        self.output_bias = tf.Variable(tf.zeros(43))
        self.logits = tf.add(tf.matmul(self.fully_connected2, self.output_weights), self.output_bias)

        # Training operation
        self.one_hot_y = tf.one_hot(y, n_out)
        self.cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=self.logits, labels=self.one_hot_y)
        self.loss_operation = tf.reduce_mean(self.cross_entropy)
        self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
        self.training_operation = self.optimizer.minimize(self.loss_operation)

        # Accuracy operation
        self.correct_prediction = tf.equal(tf.argmax(self.logits, 1), tf.argmax(self.one_hot_y, 1))
        self.accuracy_operation = tf.reduce_mean(tf.cast(self.correct_prediction, tf.float32))

        # Saving all variables
        self.saver = tf.train.Saver()

    def y_predict(self, X_data, BATCH_SIZE=64):
        num_examples = len(X_data)
        y_pred = np.zeros(num_examples, dtype=np.int32)
        sess = tf.get_default_session()
        for offset in range(0, num_examples, BATCH_SIZE):
            batch_x = X_data[offset:offset+BATCH_SIZE]
            y_pred[offset:offset+BATCH_SIZE] = sess.run(tf.argmax(self.logits, 1),
                                                        feed_dict={x: batch_x, keep_prob: 1, keep_prob_conv: 1})
        return y_pred

    def evaluate(self, X_data, y_data, BATCH_SIZE=64):
        num_examples = len(X_data)
        total_accuracy = 0
        sess = tf.get_default_session()
        for offset in range(0, num_examples, BATCH_SIZE):
            batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
            accuracy = sess.run(self.accuracy_operation,
                                feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0, keep_prob_conv: 1.0})
            total_accuracy += (accuracy * len(batch_x))
        return total_accuracy / num_examples
```
### 2. VGGNet
VGGNet was first introduced in 2014 by K. Simonyan and A. Zisserman from the University of Oxford in a paper called [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/pdf/1409.1556.pdf). They were investigating the convolutional network depth on its accuracy in the large-scale image recognition setting. Their main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
**VGGNet architecture:**
<figure>
<img src="VGGNet.png" width="1072" alt="Combined Image" />
<figcaption>
<p></p>
</figcaption>
</figure>
The original VGGNet architecture has 16-19 layers, but I've excluded some of them and implemented a modified version of only 12 layers to save computational resources.
This ConvNet follows these steps:
Input => Convolution => ReLU => Convolution => ReLU => Pooling => Convolution => ReLU => Convolution => ReLU => Pooling => Convolution => ReLU => Convolution => ReLU => Pooling => FullyConnected => ReLU => FullyConnected => ReLU => FullyConnected
**Layer 1 (Convolutional):** The output shape should be 32x32x32.
**Activation.** Your choice of activation function.
**Layer 2 (Convolutional):** The output shape should be 32x32x32.
**Activation.** Your choice of activation function.
**Layer 3 (Pooling)** The output shape should be 16x16x32.
**Layer 4 (Convolutional):** The output shape should be 16x16x64.
**Activation.** Your choice of activation function.
**Layer 5 (Convolutional):** The output shape should be 16x16x64.
**Activation.** Your choice of activation function.
**Layer 6 (Pooling)** The output shape should be 8x8x64.
**Layer 7 (Convolutional):** The output shape should be 8x8x128.
**Activation.** Your choice of activation function.
**Layer 8 (Convolutional):** The output shape should be 8x8x128.
**Activation.** Your choice of activation function.
**Layer 9 (Pooling)** The output shape should be 4x4x128.
**Flattening:** Flatten the output shape of the final pooling layer such that it's 1D instead of 3D.
**Layer 10 (Fully Connected):** This should have 128 outputs.
**Activation.** Your choice of activation function.
**Layer 11 (Fully Connected):** This should have 128 outputs.
**Activation.** Your choice of activation function.
**Layer 12 (Fully Connected):** This should have 43 outputs.
```
class VGGnet:
    def __init__(self, n_out=43, mu=0, sigma=0.1, learning_rate=0.001):
        # Hyperparameters
        self.mu = mu
        self.sigma = sigma

        # Layer 1 (Convolutional): Input = 32x32x1. Output = 32x32x32.
        self.conv1_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 1, 32), mean=self.mu, stddev=self.sigma))
        self.conv1_b = tf.Variable(tf.zeros(32))
        self.conv1 = tf.nn.conv2d(x, self.conv1_W, strides=[1, 1, 1, 1], padding='SAME') + self.conv1_b
        # ReLU Activation
        self.conv1 = tf.nn.relu(self.conv1)

        # Layer 2 (Convolutional): Input = 32x32x32. Output = 32x32x32.
        self.conv2_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 32, 32), mean=self.mu, stddev=self.sigma))
        self.conv2_b = tf.Variable(tf.zeros(32))
        self.conv2 = tf.nn.conv2d(self.conv1, self.conv2_W, strides=[1, 1, 1, 1], padding='SAME') + self.conv2_b
        # ReLU Activation
        self.conv2 = tf.nn.relu(self.conv2)

        # Layer 3 (Pooling): Input = 32x32x32. Output = 16x16x32.
        self.conv2 = tf.nn.max_pool(self.conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
        self.conv2 = tf.nn.dropout(self.conv2, keep_prob_conv)  # dropout

        # Layer 4 (Convolutional): Input = 16x16x32. Output = 16x16x64.
        self.conv3_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 32, 64), mean=self.mu, stddev=self.sigma))
        self.conv3_b = tf.Variable(tf.zeros(64))
        self.conv3 = tf.nn.conv2d(self.conv2, self.conv3_W, strides=[1, 1, 1, 1], padding='SAME') + self.conv3_b
        # ReLU Activation
        self.conv3 = tf.nn.relu(self.conv3)

        # Layer 5 (Convolutional): Input = 16x16x64. Output = 16x16x64.
        self.conv4_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 64, 64), mean=self.mu, stddev=self.sigma))
        self.conv4_b = tf.Variable(tf.zeros(64))
        self.conv4 = tf.nn.conv2d(self.conv3, self.conv4_W, strides=[1, 1, 1, 1], padding='SAME') + self.conv4_b
        # ReLU Activation
        self.conv4 = tf.nn.relu(self.conv4)

        # Layer 6 (Pooling): Input = 16x16x64. Output = 8x8x64.
        self.conv4 = tf.nn.max_pool(self.conv4, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
        self.conv4 = tf.nn.dropout(self.conv4, keep_prob_conv)  # dropout

        # Layer 7 (Convolutional): Input = 8x8x64. Output = 8x8x128.
        self.conv5_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 64, 128), mean=self.mu, stddev=self.sigma))
        self.conv5_b = tf.Variable(tf.zeros(128))
        self.conv5 = tf.nn.conv2d(self.conv4, self.conv5_W, strides=[1, 1, 1, 1], padding='SAME') + self.conv5_b
        # ReLU Activation
        self.conv5 = tf.nn.relu(self.conv5)

        # Layer 8 (Convolutional): Input = 8x8x128. Output = 8x8x128.
        self.conv6_W = tf.Variable(tf.truncated_normal(shape=(3, 3, 128, 128), mean=self.mu, stddev=self.sigma))
        self.conv6_b = tf.Variable(tf.zeros(128))
        self.conv6 = tf.nn.conv2d(self.conv5, self.conv6_W, strides=[1, 1, 1, 1], padding='SAME') + self.conv6_b
        # ReLU Activation
        self.conv6 = tf.nn.relu(self.conv6)

        # Layer 9 (Pooling): Input = 8x8x128. Output = 4x4x128.
        self.conv6 = tf.nn.max_pool(self.conv6, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
        self.conv6 = tf.nn.dropout(self.conv6, keep_prob_conv)  # dropout

        # Flatten: Input = 4x4x128. Output = 2048.
        self.fc0 = flatten(self.conv6)

        # Layer 10 (Fully Connected): Input = 2048. Output = 128.
        self.fc1_W = tf.Variable(tf.truncated_normal(shape=(2048, 128), mean=self.mu, stddev=self.sigma))
        self.fc1_b = tf.Variable(tf.zeros(128))
        self.fc1 = tf.matmul(self.fc0, self.fc1_W) + self.fc1_b
        # ReLU Activation
        self.fc1 = tf.nn.relu(self.fc1)
        self.fc1 = tf.nn.dropout(self.fc1, keep_prob)  # dropout

        # Layer 11 (Fully Connected): Input = 128. Output = 128.
        self.fc2_W = tf.Variable(tf.truncated_normal(shape=(128, 128), mean=self.mu, stddev=self.sigma))
        self.fc2_b = tf.Variable(tf.zeros(128))
        self.fc2 = tf.matmul(self.fc1, self.fc2_W) + self.fc2_b
        # ReLU Activation
        self.fc2 = tf.nn.relu(self.fc2)
        self.fc2 = tf.nn.dropout(self.fc2, keep_prob)  # dropout

        # Layer 12 (Fully Connected): Input = 128. Output = n_out.
        self.fc3_W = tf.Variable(tf.truncated_normal(shape=(128, n_out), mean=self.mu, stddev=self.sigma))
        self.fc3_b = tf.Variable(tf.zeros(n_out))
        self.logits = tf.matmul(self.fc2, self.fc3_W) + self.fc3_b

        # Training operation
        self.one_hot_y = tf.one_hot(y, n_out)
        self.cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=self.logits, labels=self.one_hot_y)
        self.loss_operation = tf.reduce_mean(self.cross_entropy)
        self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
        self.training_operation = self.optimizer.minimize(self.loss_operation)

        # Accuracy operation
        self.correct_prediction = tf.equal(tf.argmax(self.logits, 1), tf.argmax(self.one_hot_y, 1))
        self.accuracy_operation = tf.reduce_mean(tf.cast(self.correct_prediction, tf.float32))

        # Saving all variables
        self.saver = tf.train.Saver()

    def y_predict(self, X_data, BATCH_SIZE=64):
        num_examples = len(X_data)
        y_pred = np.zeros(num_examples, dtype=np.int32)
        sess = tf.get_default_session()
        for offset in range(0, num_examples, BATCH_SIZE):
            batch_x = X_data[offset:offset+BATCH_SIZE]
            y_pred[offset:offset+BATCH_SIZE] = sess.run(tf.argmax(self.logits, 1),
                                                        feed_dict={x: batch_x, keep_prob: 1, keep_prob_conv: 1})
        return y_pred

    def evaluate(self, X_data, y_data, BATCH_SIZE=64):
        num_examples = len(X_data)
        total_accuracy = 0
        sess = tf.get_default_session()
        for offset in range(0, num_examples, BATCH_SIZE):
            batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
            accuracy = sess.run(self.accuracy_operation,
                                feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0, keep_prob_conv: 1.0})
            total_accuracy += (accuracy * len(batch_x))
        return total_accuracy / num_examples
```
---
## Step 4: Model Training and Evaluation
In this step, we will train our model using `normalized_images`, then we'll compute softmax cross entropy between `logits` and `labels` to measure the model's error probability.
`x` is a placeholder for a batch of input images.
`y` is a placeholder for a batch of output labels.
```
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
```
The `keep_prob` and `keep_prob_conv` variables will be used to control the dropout rate when training the neural network.
Overfitting is a serious problem in deep neural networks, and dropout is a technique for addressing it.
The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different “thinned” networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. This technique was introduced by N. Srivastava, G. Hinton, A. Krizhevsky I. Sutskever, and R. Salakhutdinov in their paper [Dropout: A Simple Way to Prevent Neural Networks from Overfitting](https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf).
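The keep-probability mechanics can be sketched in plain NumPy as "inverted" dropout, which rescales surviving activations at training time so that no change is needed at test time (TF 1.x's `tf.nn.dropout`, used below, behaves the same way):

```python
import numpy as np

_rng = np.random.default_rng(0)

def dropout(activations, keep_prob):
    """Inverted dropout: zero units with prob (1 - keep_prob), rescale the rest."""
    mask = _rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

a = np.ones((4, 8))
train_out = dropout(a, keep_prob=0.5)  # roughly half the units zeroed, rest scaled to 2.0
test_out = a                           # at test time, use the full network unchanged
```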
```
keep_prob = tf.placeholder(tf.float32) # For fully-connected layers
keep_prob_conv = tf.placeholder(tf.float32) # For convolutional layers
# Validation set preprocessing
X_valid_preprocessed = preprocess(X_valid)
EPOCHS = 30
BATCH_SIZE = 64
DIR = 'Saved_Models'
```
Now, we'll run the training data through the training pipeline to train the model.
- Before each epoch, we'll shuffle the training set.
- After each epoch, we measure the loss and accuracy of the validation set.
- And after training, we will save the model.
- Low accuracy on both the training and validation sets implies underfitting. High accuracy on the training set but low accuracy on the validation set implies overfitting.
### LeNet Model
```
LeNet_Model = LaNet(n_out=n_classes)
model_name = "LeNet"

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(y_train)
    print("Training ...")
    print()
    for i in range(EPOCHS):
        normalized_images, y_train = shuffle(normalized_images, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = normalized_images[offset:end], y_train[offset:end]
            sess.run(LeNet_Model.training_operation,
                     feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5, keep_prob_conv: 0.7})
        validation_accuracy = LeNet_Model.evaluate(X_valid_preprocessed, y_valid)
        print("EPOCH {} : Validation Accuracy = {:.3f}%".format(i+1, (validation_accuracy*100)))
    LeNet_Model.saver.save(sess, os.path.join(DIR, model_name))
    print("Model saved")
```
As we can see, we've been able to reach a maximum accuracy of **95.3%** on the validation set over 30 epochs, using a learning rate of 0.001.
Now, we'll train the VGGNet model and evaluate its accuracy.
### VGGNet Model
```
VGGNet_Model = VGGnet(n_out=n_classes)
model_name = "VGGNet"

# Validation set preprocessing
X_valid_preprocessed = preprocess(X_valid)
one_hot_y_valid = tf.one_hot(y_valid, 43)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(y_train)
    print("Training...")
    print()
    for i in range(EPOCHS):
        normalized_images, y_train = shuffle(normalized_images, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = normalized_images[offset:end], y_train[offset:end]
            sess.run(VGGNet_Model.training_operation,
                     feed_dict={x: batch_x, y: batch_y, keep_prob: 0.5, keep_prob_conv: 0.7})
        validation_accuracy = VGGNet_Model.evaluate(X_valid_preprocessed, y_valid)
        print("EPOCH {} : Validation Accuracy = {:.3f}%".format(i+1, (validation_accuracy*100)))
    VGGNet_Model.saver.save(sess, os.path.join(DIR, model_name))
    print("Model saved")
```
Using VGGNet, we've been able to reach a maximum **validation accuracy of 99.3%**. As you can observe, the model has nearly saturated after only 10 epochs, so we can reduce the epochs to 10 and save computational resources.
We'll use this model to predict the labels of the test set.
---
## Step 5: Testing the Model using the Test Set
Now, we'll use the testing set to measure the accuracy of the model over unknown examples.
```
# Test set preprocessing
X_test_preprocessed = preprocess(X_test)

with tf.Session() as sess:
    VGGNet_Model.saver.restore(sess, os.path.join(DIR, "VGGNet"))
    y_pred = VGGNet_Model.y_predict(X_test_preprocessed)
    test_accuracy = sum(y_test == y_pred) / len(y_test)
print("Test Accuracy = {:.1f}%".format(test_accuracy*100))
```
### Test Accuracy = 97.6%
A remarkable performance!
Now we'll plot the confusion matrix to see where the model actually fails.
```
cm = confusion_matrix(y_test, y_pred)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
cm = np.log(.0001 + cm)
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title('Log of normalized Confusion Matrix')
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
```
We observe some clusters in the confusion matrix above. It turns out that the various speed limits are sometimes misclassified among themselves. Similarly, traffic signs with a triangular shape are misclassified among themselves. We could further improve the model using hierarchical CNNs: first identify broader groups (like speed signs), then use dedicated CNNs to classify finer features (such as the actual speed limit).
---
## Step 6: Testing the Model on New Images
In this step, we will use the model to predict the classes of 5 random images of German traffic signs taken from the web, and evaluate the model's performance on these images.
```
# Loading and resizing new test images
new_test_images = []
path = './traffic-signs-data/new_test_images/'
for image in os.listdir(path):
    img = cv2.imread(path + image)
    img = cv2.resize(img, (32, 32))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    new_test_images.append(img)
new_IDs = [13, 3, 14, 27, 17]
print("Number of new testing examples: ", len(new_test_images))
```
Displaying the new testing examples, with their respective ground-truth labels:
```
plt.figure(figsize=(15, 16))
for i in range(len(new_test_images)):
    plt.subplot(2, 5, i+1)
    plt.imshow(new_test_images[i])
    plt.xlabel(signs[new_IDs[i]])
    plt.ylabel("New testing image")
    plt.xticks([])
    plt.yticks([])
plt.tight_layout(pad=0, h_pad=0, w_pad=0)
plt.show()
```
These test images include some signs that are easy to predict and others that are hard for the model.
For instance, the "Stop" and "No entry" signs are easy to predict: both images are clear, and both belong to classes the model predicts with high accuracy.
On the other hand, some signs belong to classes where the model has relatively poor accuracy: the "Speed limit" sign, because, as noted above, the various speed limits are sometimes misclassified among themselves, and the "Pedestrians" sign, because traffic signs with a triangular shape are misclassified among themselves.
```
# New test data preprocessing
new_test_images_preprocessed = preprocess(np.asarray(new_test_images))

def y_predict_model(Input_data, top_k=5):
    """
    Generates the predictions of the model over the input data, and outputs the top softmax probabilities.
    Parameters:
        Input_data: Input data.
        top_k (Default = 5): The number of top softmax probabilities to be generated.
    """
    num_examples = len(Input_data)
    y_pred = np.zeros((num_examples, top_k), dtype=np.int32)
    y_prob = np.zeros((num_examples, top_k))
    with tf.Session() as sess:
        VGGNet_Model.saver.restore(sess, os.path.join(DIR, "VGGNet"))
        y_prob, y_pred = sess.run(tf.nn.top_k(tf.nn.softmax(VGGNet_Model.logits), k=top_k),
                                  feed_dict={x: Input_data, keep_prob: 1, keep_prob_conv: 1})
    return y_prob, y_pred

y_prob, y_pred = y_predict_model(new_test_images_preprocessed)

# Fraction of the new images whose top-1 prediction matches the ground truth
test_accuracy = 0
for i in range(len(new_test_images_preprocessed)):
    if new_IDs[i] == y_pred[i][0]:
        test_accuracy += 1 / len(new_test_images_preprocessed)
print("New Images Test Accuracy = {:.1f}%".format(test_accuracy*100))

plt.figure(figsize=(15, 16))
new_test_images_len = len(new_test_images_preprocessed)
for i in range(new_test_images_len):
    plt.subplot(new_test_images_len, 2, 2*i+1)
    plt.imshow(new_test_images[i])
    plt.title(signs[y_pred[i][0]])
    plt.axis('off')
    plt.subplot(new_test_images_len, 2, 2*i+2)
    plt.barh(np.arange(1, 6, 1), y_prob[i, :])
    labels = [signs[j] for j in y_pred[i]]
    plt.yticks(np.arange(1, 6, 1), labels)
plt.show()
```
As we can see from the top 5 softmax probabilities, the model has very high confidence (100%) when predicting simple signs, like "Stop" and "No entry", and remains highly confident when predicting a simple triangular sign in a very clear image, like "Yield".
The model's confidence drops slightly for a more complex triangular sign in a fairly noisy image: in the "Pedestrians" image, the sign is triangular with a figure inside it, and the image's copyright watermark adds noise. The model still predicted the true class, but with 80% confidence.
For the "Speed limit" sign, the model correctly recognized it as a speed limit sign but was somewhat confused between the different limits; it nevertheless predicted the true class.
The VGGNet model was able to predict the right class for each of the 5 new test images. Test Accuracy = 100.0%.
In all cases, the model was very certain (80% - 100%).
---
## Conclusion
Using VGGNet, we've been able to reach a very high accuracy rate. We can observe that the models saturate after nearly 10 epochs, so we can save some computational resources and reduce the number of epochs to 10.
We could also try other preprocessing techniques to further improve the model's accuracy.
We could further improve the model using hierarchical CNNs: first identify broader groups (like speed signs), then use dedicated CNNs to classify finer features (such as the actual speed limit).
This model will only work on input examples where the traffic signs are centered in the middle of the image. It doesn't have the capability to detect signs in the image corners.
# E-CEO Challenge #3 Evaluation
### Weights
Define the weight of each wavelength
```
w_412 = 0.56
w_443 = 0.73
w_490 = 0.71
w_510 = 0.36
w_560 = 0.01
```
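Assuming the final score combines the per-band Pearson coefficients as a weighted average (an assumption; the actual scoring formula is not shown in this section), the combination could be sketched as:

```python
def weighted_score(r_by_band, weights):
    """Weighted average of per-band Pearson coefficients (hypothetical scoring)."""
    total_w = sum(weights.values())
    return sum(weights[b] * r_by_band[b] for b in weights) / total_w

weights = {412: 0.56, 443: 0.73, 490: 0.71, 510: 0.36, 560: 0.01}
# e.g. weighted_score({412: r_412, 443: r_443, ...}, weights)
```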
### Run
Provide the run information:
* run id
* run metalink containing the 3 by 3 kernel extractions
* participant
```
run_id = '0000021-150601000007545-oozie-oozi-W'
run_meta = 'http://sb-10-16-10-53.dev.terradue.int:50075/streamFile/ciop/run/participant-a/0000021-150601000007545-oozie-oozi-W/results.metalink'
participant = 'participant-a'
```
### Define all imports in a single cell
```
import glob
import pandas as pd
from scipy.stats import pearsonr
import numpy
import math
```
### Manage run results
Download the results and aggregate them in a single Pandas dataframe
```
!curl http://sb-10-16-10-53.dev.terradue.int:50075/streamFile/ciop/run/participant-a/0000021-150601000007545-oozie-oozi-W/results.metalink | aria2c -d participant-a -M -
path = participant # use your path
allFiles = glob.glob(path + "/*.txt")
frame = pd.DataFrame()
list_ = []
for file_ in allFiles:
    df = pd.read_csv(file_, index_col=None, header=0)
    list_.append(df)
frame = pd.concat(list_)
```
Number of points extracted from MERIS level 2 products
```
len(frame.index)
```
### Calculate Pearson
For all three sites, AAOT, BOUSSOLE and MOBY, calculate the Pearson factor for each band.
> Note AAOT does not have measurements for band @510
#### AAOT site
```
insitu_path = './insitu/AAOT.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "AAOT"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx = frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_aaot_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @412")
frame_xxx = frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_aaot_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @443")
frame_xxx = frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_aaot_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @490")
r_aaot_510 = 0
print("0 observations for band @510")
frame_xxx = frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_aaot_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
print(str(len(frame_xxx.index)) + " observations for band @560")
insitu_path = './insitu/BOUSS.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "BOUS"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx = frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_bous_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_bous_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_bous_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()
r_bous_510 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_bous_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
insitu_path = './insitu/MOBY.csv'
insitu = pd.read_csv(insitu_path)
frame_full = pd.DataFrame.merge(frame.query('Name == "MOBY"'), insitu, how='inner', on = ['Date', 'ORBIT'])
frame_xxx = frame_full[['reflec_1_mean', 'rho_wn_IS_412']].dropna()
r_moby_412 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_2_mean', 'rho_wn_IS_443']].dropna()
r_moby_443 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_3_mean', 'rho_wn_IS_490']].dropna()
r_moby_490 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_4_mean', 'rho_wn_IS_510']].dropna()
r_moby_510 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
frame_xxx = frame_full[['reflec_5_mean', 'rho_wn_IS_560']].dropna()
r_moby_560 = pearsonr(frame_xxx.iloc[:, 0], frame_xxx.iloc[:, 1])[0]
[r_aaot_412, r_aaot_443, r_aaot_490, r_aaot_510, r_aaot_560]
[r_bous_412, r_bous_443, r_bous_490, r_bous_510, r_bous_560]
[r_moby_412, r_moby_443, r_moby_490, r_moby_510, r_moby_560]
r_final = (numpy.mean([r_bous_412, r_moby_412, r_aaot_412]) * w_412 \
+ numpy.mean([r_bous_443, r_moby_443, r_aaot_443]) * w_443 \
+ numpy.mean([r_bous_490, r_moby_490, r_aaot_490]) * w_490 \
+ numpy.mean([r_bous_510, r_moby_510, r_aaot_510]) * w_510 \
+ numpy.mean([r_bous_560, r_moby_560, r_aaot_560]) * w_560) \
/ (w_412 + w_443 + w_490 + w_510 + w_560)
r_final
```
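The per-site cells above repeat the same select-dropna-correlate pattern. A small helper could factor it out — this is a sketch only: the column naming follows the convention used above, and bands with fewer than two paired observations fall back to 0, as was done for AAOT @510.

```python
import pandas as pd
from scipy.stats import pearsonr

# Band number -> (MERIS reflectance column, in-situ column), as used above.
BAND_COLUMNS = {
    412: ('reflec_1_mean', 'rho_wn_IS_412'),
    443: ('reflec_2_mean', 'rho_wn_IS_443'),
    490: ('reflec_3_mean', 'rho_wn_IS_490'),
    510: ('reflec_4_mean', 'rho_wn_IS_510'),
    560: ('reflec_5_mean', 'rho_wn_IS_560'),
}

def band_pearson(frame_full, band):
    """Pearson r between extracted and in-situ reflectance for one band.

    Returns 0 when fewer than two paired observations survive dropna(),
    mirroring the r_aaot_510 = 0 fallback above.
    """
    cols = BAND_COLUMNS[band]
    paired = frame_full[list(cols)].dropna()
    if len(paired) < 2:
        return 0
    return pearsonr(paired.iloc[:, 0], paired.iloc[:, 1])[0]
```

With this helper, each site's correlations reduce to e.g. `{b: band_pearson(frame_full, b) for b in BAND_COLUMNS}`.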
| github_jupyter |
```
"""
merge image data of patients (left eye & right eye)
"""
import shutil
import os
source = '/home/ljc/seqdata_new/image_seq'
target = '/home/ljc/seq_merge/'
folders = os.listdir(source)
for folder in folders:
    pid = folder.split(',')[0]
    target_path = target + pid
    if not os.path.exists(target_path):
        os.mkdir(target_path)
    for img in os.listdir(os.path.join(source, folder)):
        shutil.copyfile(os.path.join(source, folder, img), os.path.join(target_path, img))
from collections import defaultdict
gt_file = '/home/ljc/seqdata_old/groundtruth.txt'
dis2pid = defaultdict(list)
with open(gt_file) as f:
    for trans in f.readlines():
        # print(trans.split())
        pid, disease = trans.split()[:2]
        dis2pid[disease].append(pid)
root = '/home/ljc/seq_merge/'
for dis in dis2pid.keys():
    os.mkdir(root + dis)
import os
import shutil
root = '/home/ljc/seq_merge/'
for dis in dis2pid.keys():
    target_path = root + dis
    for pid in dis2pid[dis]:
        # print(pid)
        source_path = root + pid
        if os.path.exists(source_path):
            shutil.move(source_path, target_path)
import os
from glob import glob
targets = glob('/home/ljc/seq_merge/*/*/.DS_Store')
print(len(targets))
for target in targets:
    os.remove(target)
targets = glob('/home/ljc/seq_merge/*/*/.DS_Store')
print(len(targets))
import os, shutil, re
from collections import defaultdict
from pprint import pprint
root = '/home/ljc/seq_merge/'
target = '/home/ljc/seq_sample/'
diseases = os.listdir(root)
pattern = re.compile(r'.*,.*,(.*)#(\d+)\..*')
for dis in diseases:
    # Create disease folders
    target_dis_folder = os.path.join(target, dis)
    if not os.path.exists(target_dis_folder):
        os.mkdir(target_dis_folder)
    source_dis_folder = os.path.join(root, dis)
    pids = os.listdir(source_dis_folder)
    for pid in pids:
        # Get source pid folder and target pid folder
        source_pid_folder = os.path.join(source_dis_folder, pid)
        target_pid_folder = os.path.join(target_dis_folder, pid)
        if os.path.exists(target_pid_folder):
            print(f'{target_pid_folder} exists')
        else:
            os.mkdir(target_pid_folder)
        pid_imgs = os.listdir(source_pid_folder)
        date2img = defaultdict(tuple)
        # Sample the last img of each day
        for img in pid_imgs:
            obj = pattern.match(img)
            if not obj:
                print(f'{img} does not match pattern')
            else:
                date, num = obj.groups()
                if date not in date2img:
                    date2img[date] = (img, num)
                # Compare indices numerically: they are captured as strings.
                elif int(num) > int(date2img[date][-1]):
                    date2img[date] = (img, num)
        # Copy target file to target folder
        for img in date2img.values():
            source_img = os.path.join(source_pid_folder, img[0])
            target_img = os.path.join(target_pid_folder, img[0])
            shutil.copy(source_img, target_img)
import os, shutil, re
from collections import defaultdict
# Create disease folders
target_dis_folder = '/home/ljc/sample_bact/'
source_dis_folder = '/home/ljc/typical_bact/'
pids = os.listdir(source_dis_folder)
for pid in pids:
    # Get source pid folder and target pid folder
    source_pid_folder = os.path.join(source_dis_folder, pid)
    target_pid_folder = os.path.join(target_dis_folder, pid)
    if os.path.exists(target_pid_folder):
        print(f'{target_pid_folder} exists')
    else:
        os.mkdir(target_pid_folder)
    pid_imgs = os.listdir(source_pid_folder)
    date2img = defaultdict(tuple)
    # Sample the last img of each day
    for img in pid_imgs:
        obj = pattern.match(img)
        if not obj:
            print(f'{img} does not match pattern')
        else:
            date, num = obj.groups()
            if date not in date2img:
                date2img[date] = (img, num)
            # Compare indices numerically: they are captured as strings.
            elif int(num) > int(date2img[date][-1]):
                date2img[date] = (img, num)
    # Copy target file to target folder
    for img in date2img.values():
        source_img = os.path.join(source_pid_folder, img[0])
        target_img = os.path.join(target_pid_folder, img[0])
        shutil.copy(source_img, target_img)
```
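The day-sampling logic above hinges on the filename regex. Assuming filenames shaped like `patient,eye,<date>#<index>.<ext>` (the exact format is not shown in the notebook, so the sample name below is hypothetical), a quick check of what the pattern captures:

```python
import re

# Same pattern as in the notebook: capture the date before '#'
# and the per-day shot index after it.
pattern = re.compile(r'.*,.*,(.*)#(\d+)\..*')

# 'P001,left,20190101#3.jpg' is a made-up filename for illustration.
m = pattern.match('P001,left,20190101#3.jpg')
print(m.groups())  # ('20190101', '3')
```

Note that both captures come back as strings, which is why the per-day maximum should be compared as `int` — lexicographically, `'9' > '10'`.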
| github_jupyter |
# Lab 2 Single Qubit Gates
Prerequisite
[Ch.1.3 Representing Qubit States](https://qiskit.org/textbook/ch-states/representing-qubit-states.html)
[Ch.1.4 Single Qubit Gates](https://qiskit.org/textbook/ch-states/single-qubit-gates.html)
Other relevant materials
[Grokking the Bloch Sphere](https://javafxpert.github.io/grok-bloch/)
```
import numpy as np
# Importing standard Qiskit libraries
from qiskit import QuantumCircuit, transpile, Aer, IBMQ, execute
from qiskit.tools.jupyter import *
from qiskit.visualization import *
from ibm_quantum_widgets import *
from qiskit.providers.aer import QasmSimulator
# Loading your IBM Quantum account(s)
provider = IBMQ.load_account()
backend = Aer.get_backend('statevector_simulator')
```
## Part 1 - Effect of Single-Qubit Gates on state |0>
### Goal
Create quantum circuits to apply various single qubit gates on state |0> and understand the change in state and phase of the qubit.
To see the effect of each gate, we take a single circuit with 4 qubits, apply a different gate to each qubit, and plot each qubit's state on the Bloch sphere.
```
qc1 = QuantumCircuit(4)
# perform gate operations on individual qubits
qc1.x(0)
qc1.y(1)
qc1.z(2)
qc1.s(3)
# Draw circuit
qc1.draw()
# Plot Bloch sphere
out1 = execute(qc1,backend).result().get_statevector()
plot_bloch_multivector(out1)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Statevector (Post Measurement)|
|-|-|-|-|
|Input State = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0j & 1+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘1’<br><br>Post measurement, qubit state is ‘1’ with phase 0 | | | |
|Input State = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0j & 0+1j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘1’<br><br>Post measurement, qubit state is ‘1’ with phase pi/2 | | | |
|Input State = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘0’<br><br>Post measurement, qubit state is ‘0’ with phase 0 | | | |
|Input State = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘0’<br><br>Post measurement, qubit state is ‘0’ with phase 0 | | | |
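The statevectors in the table can be cross-checked from the gate matrices alone — a plain NumPy sketch, independent of Qiskit:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # |0>

# Standard single-qubit gate matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
S = np.array([[1, 0], [0, 1j]])

# Apply each gate to |0> and print the resulting statevector.
for name, gate in (('X', X), ('Y', Y), ('Z', Z), ('S', S)):
    print(name, gate @ ket0)
```

X flips |0> to |1>, Y flips it and adds a phase of i, while Z and S leave |0> unchanged — matching the table rows above.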
## Part 2 - Effect of Single-Qubit Gates on state |1>
### Goal
Create quantum circuits to apply various single qubit gates on state |1> and understand the change in state and phase of the qubit.
To see the effect of each gate, we take a single circuit with 4 qubits, apply a different gate to each qubit, and plot each qubit's state on the Bloch sphere.
```
qc2 = QuantumCircuit(4)
# initialize qubits
qc2.x(range(4))
# perform gate operations on individual qubits
qc2.x(0)
qc2.y(1)
qc2.z(2)
qc2.s(3)
# Draw circuit
qc2.draw()
# Plot Bloch sphere
out2 = execute(qc2,backend).result().get_statevector()
plot_bloch_multivector(out2)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Statevector (Post Measurement)|
|-|-|-|-|
|Input State = $\begin{pmatrix}0+0j & 1+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}1+0j & 0+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘0’<br><br>Post measurement, qubit state is ‘0’ with phase 0 | | | |
|Input State = $\begin{pmatrix}0+0j & 1+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0-1j & 0+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘0’<br><br>Post measurement, qubit state is ‘0’ with phase 3pi/2 | | | |
|Input State = $\begin{pmatrix}0+0j & 1+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0j & -1+0j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘1’<br><br>Post measurement, qubit state is ‘1’ with phase pi | | | |
|Input State = $\begin{pmatrix}0+0j & 1+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0j & 0+1j\end{pmatrix}$<br>- qubit has probability 1 of being in state ‘1’<br><br>Post measurement, qubit state is ‘1’ with phase pi/2 | | | |
## Part 3 - Effect of Single-Qubit Gates on state |+>
### Goal
Create quantum circuits to apply various single qubit gates on state |+> and understand the change in state and phase of the qubit.
To see the effect of each gate, we take a single circuit with 4 qubits, apply a different gate to each qubit, and plot each qubit's state on the Bloch sphere.
```
qc3 = QuantumCircuit(4)
# initialize qubits
qc3.h(range(4))
# perform gate operations on individual qubits
qc3.x(0)
qc3.y(1)
qc3.z(2)
qc3.s(3)
# Draw circuit
qc3.draw()
# Plot Bloch sphere
out3 = execute(qc3,backend).result().get_statevector()
plot_bloch_multivector(out3)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Probability (Histogram)|
|-|-|-|-|
|Input State = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0-0.707j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
## Part 4 - Effect of Single-Qubit Gates on state |->
### Goal
Create quantum circuits to apply various single qubit gates on state |-> and understand the change in state and phase of the qubit.
To see the effect of each gate, we take a single circuit with 4 qubits, apply a different gate to each qubit, and plot each qubit's state on the Bloch sphere.
```
qc4 = QuantumCircuit(4)
# initialize qubits
qc4.x(range(4))
qc4.h(range(4))
# perform gate operations on individual qubits
qc4.x(0)
qc4.y(1)
qc4.z(2)
qc4.s(3)
# Draw circuit
qc4.draw()
# Plot Bloch sphere
out4 = execute(qc4,backend).result().get_statevector()
plot_bloch_multivector(out4)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Probability (Histogram)|
|-|-|-|-|
|Input State = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}-0.707+0j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0.707j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
## Part 5 - Effect of Single-Qubit Gates on state |i>
### Goal
Create quantum circuits to apply various single qubit gates on state |i> and understand the change in state and phase of the qubit.
To see the effect of each gate, we take a single circuit with 4 qubits, apply a different gate to each qubit, and plot each qubit's state on the Bloch sphere.
```
qc5 = QuantumCircuit(4)
# initialize qubits
qc5.h(range(4))
qc5.s(range(4))
# perform gate operations on individual qubits
qc5.x(0)
qc5.y(1)
qc5.z(2)
qc5.s(3)
# Draw circuit
qc5.draw()
# Plot Bloch sphere
out5 = execute(qc5,backend).result().get_statevector()
plot_bloch_multivector(out5)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Probability (Histogram)|
|-|-|-|-|
|Input State = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0+0.707j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & -0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
## Part 6 - Effect of Single-Qubit Gates on state |-i>
### Goal
Create quantum circuits to apply various single qubit gates on state |-i> and understand the change in state and phase of the qubit.
To see the effect of each gate, we take a single circuit with 4 qubits, apply a different gate to each qubit, and plot each qubit's state on the Bloch sphere.
```
qc6 = QuantumCircuit(4)
# initialize qubits
qc6.x(range(4))
qc6.h(range(4))
qc6.s(range(4))
# perform gate operations on individual qubits
qc6.x(0)
qc6.y(1)
qc6.z(2)
qc6.s(3)
# Draw circuit
qc6.draw()
# Plot Bloch sphere
out6 = execute(qc6,backend).result().get_statevector()
plot_bloch_multivector(out6)
```
|Effect on Qubit on application of Gate|Statevector|QSphere Plot|Probability (Histogram)|
|-|-|-|-|
|Input State = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0-0.707j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}-0.707+0j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0+0.707j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
|Input State = $\begin{pmatrix}0.707+0j & 0-0.707j\end{pmatrix}$<br><br>Before measurement,<br>- qubit state = $\begin{pmatrix}0.707+0j & 0.707+0j\end{pmatrix}$<br>- qubit has probability 0.5 of being in each of the states ‘0’ and ‘1’ | | | |
```
import qiskit
qiskit.__qiskit_version__
```
| github_jupyter |
# PyStan: Golf case study
Source: https://mc-stan.org/users/documentation/case-studies/golf.html
```
import pystan
import numpy as np
import pandas as pd
from scipy.stats import norm
import requests
from lxml import html
from io import StringIO
from matplotlib import pyplot as plt
```
Aux functions for visualization
```
def stanplot_postetior_hist(stan_sample, params):
    '''Take a PyStan posterior sample object and a tuple of parameter names,
    and plot a posterior histogram for each named parameter.'''
    post_sample_params = {}
    for p in params:
        post_sample_params[p] = stan_sample.extract(p)[p]
    fig, panes = plt.subplots(1, len(params))
    fig.suptitle('Posterior Dist of Params')
    for p, w in zip(params, panes):
        w.hist(post_sample_params[p])
        w.set_title(p)
    fig.show()

def stanplot_posterior_lineplot(x, y, stan_sample, params, f, sample_size=100, alpha=0.05, color='green'):
    '''Posterior dist line plot
    params:
        x: x-axis values from actual data used for training
        y: y-axis values from actual data used for training
        stan_sample: a fitted PyStan sample object
        params: list of parameter names required for calculating the posterior curve
        f: a function that describes the model. Should take `x` and `*params` as inputs and return a list (or list-coercible object) that will be used for plotting the sampled curves
        sample_size: how many curves to draw from the posterior dist
        alpha: transparency of drawn curves (from pyplot, default=0.05)
        color: color of drawn curves (from pyplot, default='green')
    '''
    tmp = stan_sample.stan_args
    total_samples = (tmp[0]['iter'] - tmp[0]['warmup']) * len(tmp)
    sample_rows = np.random.choice(a=total_samples, size=sample_size, replace=False)
    sampled_param_array = np.array(list(stan_sample.extract(params).values()))[:, sample_rows]
    _ = plt.plot(x, y)
    for param_tuple in zip(*sampled_param_array):
        plt.plot(x, f(x, *param_tuple), color=color, alpha=alpha)

def sigmoid_linear_curve(x, a, b):
    return 1 / (1 + np.exp(-1 * (a + b * x)))

def trig_curve(x, sigma, r=(1.68/2)/12, R=(4.25/2)/12):
    return 2 * norm.cdf(np.arcsin((R - r) / x) / sigma) - 1

def overshot_curve(x, sigma_distance, sigma_angle, r=(1.68/2)/12, R=(4.25/2)/12, overshot=1., distance_tolerance=3.):
    p_angle = 2 * norm.cdf(np.arcsin((R - r) / x) / sigma_angle) - 1
    p_upper = norm.cdf((distance_tolerance - overshot) / ((x + overshot) * sigma_distance))
    p_lower = norm.cdf((-1 * overshot) / ((x + overshot) * sigma_distance))
    return p_angle * (p_upper - p_lower)
```
## Data
Scrape webpage
```
url = 'https://statmodeling.stat.columbia.edu/2019/03/21/new-golf-putting-data-and-a-new-golf-putting-model'
xpath = '/html/body/div/div[3]/div/div[1]/div[3]/div[2]/pre[1]'
header = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
r = requests.get(url, headers=header)
```
Parse HTML to string
```
html_table = html.fromstring(r.text).xpath(xpath)[0]
```
Read the data into a pandas DataFrame
```
with StringIO(html_table.text) as f:
    df = pd.read_csv(f, sep=' ')
df.head()
```
And finally add some columns
```
df['p'] = df['y'] / df['n']
df['sd'] = np.sqrt(df['p'] * (1 - df['p']) / df['n'])
stan_data = {'x': df['x'], 'y': df['y'], 'n': df['n'], 'N': df.shape[0]}
```
### Plot data
```
#_ = df.plot(x='x', y='p')
plt.plot(df['x'], df['p'])
plt.fill_between(x=df['x'], y1=df['p'] - 2 * df['sd'], y2=df['p'] + 2 * df['sd'], alpha=0.3)
plt.show()
```
## Models
### Logistic model
```
stan_logistic = pystan.StanModel(file='./logistic.stan')
post_sample_logistic = stan_logistic.sampling(data=stan_data)
print(post_sample_logistic)
stanplot_postetior_hist(post_sample_logistic, ('a', 'b'))
stanplot_posterior_lineplot(df['x'], df['p'], post_sample_logistic, ('a', 'b'), sigmoid_linear_curve)
```
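As a rough non-Bayesian cross-check of the same functional form, a sketch on synthetic data — `scipy.optimize.curve_fit` returns a point estimate, not a posterior like Stan does, and the "true" parameter values below are assumed for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_linear_curve(x, a, b):
    return 1 / (1 + np.exp(-(a + b * x)))

# Synthetic "success rate vs distance" data with assumed true parameters.
true_a, true_b = 2.2, -0.25
x = np.linspace(2, 20, 19)
rng = np.random.default_rng(0)
y = sigmoid_linear_curve(x, true_a, true_b) + rng.normal(0, 0.01, size=x.size)

# Least-squares fit recovers parameters close to the assumed truth.
(a_hat, b_hat), _ = curve_fit(sigmoid_linear_curve, x, y, p0=(1.0, -0.1))
print(a_hat, b_hat)
```

The posterior histograms from Stan should concentrate around comparable values when fit to well-behaved data.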
### Simple trigonometric model
```
stan_trig = pystan.StanModel(file='./trig.stan')
stan_data.update({'r': (1.68/2)/12, 'R': (4.25/2)/12})
post_sample_trig = stan_trig.sampling(data=stan_data)
print(post_sample_trig)
stanplot_postetior_hist(post_sample_trig, ('sigma', 'sigma_degrees'))
stanplot_posterior_lineplot(df['x'], df['p'], post_sample_trig, ('sigma',), trig_curve)
```
### Augmented trigonometric model
```
stan_overshot = pystan.StanModel(file='./trig_overshot.stan')
stan_data.update({'overshot': 1., 'distance_tolerance': 3.})
post_sample_overshot = stan_overshot.sampling(data=stan_data)
print(post_sample_overshot)
stanplot_postetior_hist(post_sample_overshot, ('sigma_distance', 'sigma_angle', 'sigma_y'))
stanplot_posterior_lineplot(
x=df['x'],
y=df['p'],
stan_sample=post_sample_overshot,
params=('sigma_distance', 'sigma_angle'),
f=overshot_curve
)
```
| github_jupyter |
# #1 Discovering Butterfree - Feature Set Basics
Welcome to **Discovering Butterfree** tutorial series!
This first tutorial covers some basics of the Butterfree library, and you will learn how to create your first feature set :rocket: :rocket:
Before diving into the tutorial, make sure you have a basic understanding of these main data concepts: **features**, **feature sets** and the **"Feature Store Architecture"**; you can read more about this [here]().
## Library Basics:
Butterfree's main objective is to make feature engineering easy. The library provides a high-level API for declarative feature definitions. Behind these abstractions, Butterfree is essentially an **ETL (Extract - Transform - Load)** framework, and this is reflected in the organization of the project.
### Extract
`from butterfree.extract import ...`
Module with the entities responsible for extracting data into the pipeline. The module provides the following tools:
* `readers`: data connectors. Currently Butterfree provides readers for files, tables registered in Spark Hive metastore, and Kafka topics.
* `pre_processing`: a utility tool for making some transformations or re-arrange the structure of the reader's input data before the feature engineering.
* `source`: a composition of `readers`. The entity responsible for merging datasets coming from the defined readers into a single dataframe input for the `Transform` stage.
### Transform
`from butterfree.transform import ...`
The main module of the library, responsible for feature engineering, in other words, all the transformations on the data. The module provides the following main tools:
* `features`: the entity that defines what a feature is. Holds a transformation and metadata about the feature.
* `transformations`: provides a set of components for transforming the data, with the possibility to use Spark native functions, aggregations, SQL expressions and others.
* `feature_set`: an entity that defines a feature set. Holds features and the metadata around it.
### Load
`from butterfree.load import ...`
The module is responsible for saving the data in some data storage. The module provides the following tools:
* `writers`: provide connections to data sources to write data. Currently Butterfree provides ways to save data on S3 registered as tables Spark Hive metastore and to Cassandra DB.
* `sink`: a composition of writers. The entity responsible for triggering the writing jobs on a set of defined writers.
### Pipelines
Pipelines are responsible for integrating all other modules (`extract`, `transform`, `load`) in order to define complete ETL jobs from source data to data storage destination.
`from butterfree.pipelines import ...`
* `feature_set_pipeline`: defines an ETL pipeline for creating feature sets.
## Example:
Simulating the following scenario:
- We want to create a feature set with features about houses for rent (listings).
- We are interested in houses only for the **Kanto** region.
We have two sets of data:
- Table: `listing_events`. Table with data about events of house listings.
- File: `region.json`. Static file with data about the cities and regions.
Our desire is to have result dataset with the following schema:
| id | timestamp | rent | rent_over_area | bedrooms | bathrooms | area | bedrooms_over_area | bathrooms_over_area | latitude | longitude | h3 | city | region
| - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| int | timestamp | float | float | int | int | float | float | float | double | double | string | string | string |
For more information about H3 geohash click [here](https://h3geo.org/docs/)
The following code blocks will show how to generate this feature set using Butterfree library:
```
# setup spark
from pyspark import SparkContext, SparkConf
from pyspark.sql import session
conf = SparkConf().set('spark.driver.host','127.0.0.1')
sc = SparkContext(conf=conf)
spark = session.SparkSession(sc)
# fix working dir
import pathlib
import os
path = os.path.join(pathlib.Path().absolute(), '../..')
os.chdir(path)
# butterfree spark client
from butterfree.clients import SparkClient
spark_client = SparkClient()
```
### Showing test data
```
listing_events_df = spark.read.json(f"{path}/examples/data/listing_events.json")
listing_events_df.createOrReplaceTempView("listing_events")  # creating listing_events table
print(">>> listing_events table:")
listing_events_df.toPandas()
print(">>> region.json file:")
spark.read.json(f"{path}/examples/data/region.json").toPandas()
```
### Extract
- For the extract part, we need the `Source` entity and the `FileReader` and `TableReader` for the data we have.
- We need to declare a query with the rule for joining the results of the readers too.
- As proposed in the problem we can filter the region dataset to get only **Kanto** region.
```
from butterfree.extract import Source
from butterfree.extract.readers import FileReader, TableReader
from butterfree.extract.pre_processing import filter
readers = [
    TableReader(id="listing_events", table="listing_events"),
    FileReader(id="region", path=f"{path}/examples/data/region.json", format="json").with_(
        transformer=filter, condition="region == 'Kanto'"
    ),
]
query = """
select
listing_events.*,
region.city,
region.lat,
region.lng,
region.region
from
listing_events
join region
on listing_events.region_id = region.id
"""
source = Source(readers=readers, query=query)
# showing source result
source_df = source.construct(spark_client)
source_df.toPandas()
```
### Transform
- At the transform part, a set of `Feature` objects is declared.
- An Instance of `FeatureSet` is used to hold the features.
- A `FeatureSet` can only be created when it is possible to define a unique tuple formed by key columns and a time reference. This is an **architectural requirement** for the data, so at least one `KeyFeature` and one `TimestampFeature` are needed.
- Every `Feature` needs a unique name, a description, and a data-type definition.
```
from butterfree.transform import FeatureSet
from butterfree.transform.features import Feature, KeyFeature, TimestampFeature
from butterfree.transform.transformations import SQLExpressionTransform
from butterfree.transform.transformations.h3_transform import H3HashTransform
from butterfree.constants import DataType
keys = [
KeyFeature(
name="id",
description="Unique identificator code for houses.",
dtype=DataType.BIGINT,
)
]
# from_ms = True because the data originally is not in a Timestamp format.
ts_feature = TimestampFeature(from_ms=True)
features = [
Feature(
name="rent",
description="Rent value by month described in the listing.",
dtype=DataType.FLOAT,
),
Feature(
name="rent_over_area",
description="Rent value by month divided by the area of the house.",
transformation=SQLExpressionTransform("rent / area"),
dtype=DataType.FLOAT,
),
Feature(
name="bedrooms",
description="Number of bedrooms of the house.",
dtype=DataType.INTEGER,
),
Feature(
name="bathrooms",
description="Number of bathrooms of the house.",
dtype=DataType.INTEGER,
),
Feature(
name="area",
description="Area of the house, in squared meters.",
dtype=DataType.FLOAT,
),
Feature(
name="bedrooms_over_area",
description="Number of bedrooms divided by the area.",
transformation=SQLExpressionTransform("bedrooms / area"),
dtype=DataType.FLOAT,
),
Feature(
name="bathrooms_over_area",
description="Number of bathrooms divided by the area.",
transformation=SQLExpressionTransform("bathrooms / area"),
dtype=DataType.FLOAT,
),
Feature(
name="latitude",
description="House location latitude.",
from_column="lat", # arg from_column is needed when changing column name
dtype=DataType.DOUBLE,
),
Feature(
name="longitude",
description="House location longitude.",
from_column="lng",
dtype=DataType.DOUBLE,
),
Feature(
name="h3",
description="H3 geohash of the house location.",
transformation=H3HashTransform(
h3_resolutions=[10], lat_column="latitude", lng_column="longitude",
),
dtype=DataType.STRING,
),
Feature(name="city", description="House location city.", dtype=DataType.STRING,),
Feature(
name="region",
description="House location region.",
dtype=DataType.STRING,
),
]
feature_set = FeatureSet(
name="house_listings",
entity="house", # entity: to which "business context" this feature set belongs
description="Features describing a house listing.",
keys=keys,
timestamp=ts_feature,
features=features,
)
# showing feature set result
feature_set_df = feature_set.construct(source_df, spark_client)
feature_set_df.toPandas()
```
### Load
- For the load part we need `Writer` instances and a `Sink`.
- Writers define where to load the data.
- The `Sink` gets the transformed data (feature set) and triggers the load to all the defined writers.
- `debug_mode` will create a temporary view instead of trying to write in a real data store.
```
from butterfree.load.writers import (
HistoricalFeatureStoreWriter,
OnlineFeatureStoreWriter,
)
from butterfree.load import Sink
writers = [HistoricalFeatureStoreWriter(debug_mode=True), OnlineFeatureStoreWriter(debug_mode=True)]
sink = Sink(writers=writers)
```
## Pipeline
- The `Pipeline` entity wraps all the other defined elements.
- `run` command will trigger the execution of the pipeline, end-to-end.
```
from butterfree.pipelines import FeatureSetPipeline
pipeline = FeatureSetPipeline(source=source, feature_set=feature_set, sink=sink)
result_df = pipeline.run()
```
### Showing the results
```
print(">>> Historical Feature house_listings feature set table:")
spark.table("historical_feature_store__house_listings").orderBy(
"id", "timestamp"
).toPandas()
print(">>> Online Feature house_listings feature set table:")
spark.table("online_feature_store__house_listings").orderBy("id", "timestamp").toPandas()
```
- We were able to create all the desired features in a straightforward way.
- The **historical feature set** holds all the data; note that it is partitioned by year, month and day (columns added by the `HistoricalFeatureStoreWriter`).
- The **online feature set** contains only the latest data for each id.
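The "latest record per id" behavior of the online store can be sketched outside Spark; a minimal pandas illustration with made-up listing data (column names here are hypothetical):

```python
import pandas as pd

# toy feature-set rows: several snapshots per house id
rows = pd.DataFrame({
    "id": [1, 1, 2, 2],
    "timestamp": pd.to_datetime(["2021-01-01", "2021-02-01", "2021-01-15", "2021-03-01"]),
    "rent_per_area": [10.0, 11.0, 7.5, 8.0],
})

# keep only the most recent row for each id, as the online store does
latest = rows.sort_values("timestamp").groupby("id", as_index=False).last()
```

Each id keeps exactly one row, carrying the values from its latest timestamp.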
## Stacking
### References:
[Kaggle ensembling guide](https://mlwave.com/kaggle-ensembling-guide/)
<p></p>
[Introduction to Ensembling/Stacking in Python](https://www.kaggle.com/arthurtok/introduction-to-ensembling-stacking-in-python)
#### 5-fold stacking

#### stacking network

```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# Learning-curve plotting helper, adapted from the scikit-learn documentation examples
from sklearn.model_selection import learning_curve
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure(figsize=(10,6))  # adjust figure size
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
# Class to extend the Sklearn classifier
class SklearnHelper(object):
def __init__(self, clf, seed=0, params=None, seed_flag=False):
params['random_state'] = seed
if(seed_flag == False):
params.pop('random_state')
self.clf = clf(**params)
def train(self, x_train, y_train):
self.clf.fit(x_train, y_train)
def predict(self, x):
return self.clf.predict(x)
def fit(self,x,y):
return self.clf.fit(x,y)
def feature_importances(self,x,y):
print(self.clf.fit(x,y).feature_importances_)
return self.clf.fit(x,y).feature_importances_
#Out-of-Fold Predictions
def get_oof(clf, x_train, y_train, x_test):
oof_train = np.zeros((ntrain,))
oof_test = np.zeros((ntest,))
oof_test_skf = np.empty((NFOLDS, ntest))
    for i, (train_index, test_index) in enumerate(kf):  # kf holds one (train_index, test_index) pair per fold
x_tr = x_train[train_index]
y_tr = y_train[train_index]
x_te = x_train[test_index]
clf.train(x_tr, y_tr)
oof_train[test_index] = clf.predict(x_te) # partial index from x_train
oof_test_skf[i, :] = clf.predict(x_test) # Row(n-Fold), Column(predict value)
#oof_test[:] = oof_test_skf.mean(axis=0) #predict value average by column, then output 1-row, ntest columns
#oof_test[:] = pd.DataFrame(oof_test_skf).mode(axis=0)[0]
#oof_test[:] = np.median(oof_test_skf, axis=0)
oof_test[:] = np.mean(oof_test_skf, axis=0)
return oof_train.reshape(-1, 1), oof_test.reshape(-1, 1) #make sure return n-rows, 1-column shape.
```
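The same out-of-fold scheme can be written compactly against the modern `sklearn.model_selection` API; a self-contained sketch on synthetic data (all names and parameters here are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.RandomState(0)
X = rng.randn(100, 4)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_test = rng.randn(20, 4)

n_folds = 5
oof_train = np.zeros(len(X))                   # out-of-fold predictions on the train set
test_preds = np.empty((n_folds, len(X_test)))  # one row of test predictions per fold

for i, (tr, te) in enumerate(KFold(n_splits=n_folds, shuffle=True,
                                   random_state=0).split(X)):
    clf = LogisticRegression().fit(X[tr], y[tr])
    oof_train[te] = clf.predict(X[te])  # each train row is predicted by a model that never saw it
    test_preds[i] = clf.predict(X_test)

oof_test = test_preds.mean(axis=0)  # average the per-fold test predictions
```

`oof_train` and `oof_test` then become one new feature column for the second-level model.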
### Load Dataset
```
train = pd.read_csv('input/train.csv', encoding = "utf-8", dtype = {'type': np.int32})
test = pd.read_csv('input/test.csv', encoding = "utf-8")
# remove the demonstration "type 4" rows so they don't interfere with modeling
train = train[train['type']!=4]
from sklearn.model_selection import train_test_split
X = train[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']]
y = train['type']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=100)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
test_std = sc.transform(test[['花瓣寬度','花瓣長度','花萼寬度','花萼長度']])
```
### Model Build
```
from sklearn.model_selection import KFold
NFOLDS = 5 # set folds for out-of-fold prediction
SEED = 0 # for reproducibility
ntrain = X_train_std.shape[0] # X.shape[0]
ntest = test_std.shape[0] # test.shape[0]
# materialize the fold indices in a list so every model can iterate over them
kf = list(KFold(n_splits=NFOLDS, shuffle=True, random_state=SEED).split(np.arange(ntrain)))
# Put in our parameters for said classifiers
# Decision Tree
dt_params = {
'criterion':'gini',
'max_depth':5
}
# KNN
knn_params = {
'n_neighbors':5
}
# Random Forest parameters
rf_params = {
'n_jobs': -1,
'n_estimators': 500,
'criterion': 'gini',
'max_depth': 4,
#'min_samples_leaf': 2,
'warm_start': True,
'oob_score': True,
'verbose': 0
}
# Extra Trees Parameters
et_params = {
'n_jobs': -1,
'n_estimators': 800,
'max_depth': 6,
'min_samples_leaf': 2,
'verbose': 0
}
# AdaBoost parameters
ada_params = {
'n_estimators': 800,
'learning_rate' : 0.75
}
# Gradient Boosting parameters
gb_params = {
'n_estimators': 500,
'max_depth': 5,
'min_samples_leaf': 2,
'verbose': 0
}
# Support Vector Classifier parameters
svc_params = {
'kernel' : 'linear',
'C' : 1.0,
'probability': True
}
# Support Vector Classifier parameters
svcr_params = {
'kernel' : 'rbf',
'C' : 1.0,
'probability': True
}
# Bagging Classifier
bag_params = {
'n_estimators' : 500,
'oob_score': True
}
#XGBoost Classifier
xgbc_params = {
'n_estimators': 500,
'max_depth': 4,
'learning_rate': 0.05,
'nthread': -1
}
#Linear Discriminant Analysis
lda_params = {}
#Quadratic Discriminant Analysis
qda1_params = {
'reg_param': 0.8,
'tol': 0.00001
}
#Quadratic Discriminant Analysis
qda2_params = {
'reg_param': 0.6,
'tol': 0.0001
}
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, ExtraTreesClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
dt = SklearnHelper(clf=DecisionTreeClassifier, seed=SEED, params=dt_params, seed_flag=True)
knn = SklearnHelper(clf=KNeighborsClassifier, seed=SEED, params=knn_params)
rf = SklearnHelper(clf=RandomForestClassifier, seed=SEED, params=rf_params, seed_flag=True)
et = SklearnHelper(clf=ExtraTreesClassifier, seed=SEED, params=et_params, seed_flag=True)
ada = SklearnHelper(clf=AdaBoostClassifier, seed=SEED, params=ada_params, seed_flag=True)
gb = SklearnHelper(clf=GradientBoostingClassifier, seed=SEED, params=gb_params, seed_flag=True)
svc = SklearnHelper(clf=SVC, seed=SEED, params=svc_params, seed_flag=True)
svcr = SklearnHelper(clf=SVC, seed=SEED, params=svcr_params, seed_flag=True)
bag = SklearnHelper(clf=BaggingClassifier, seed=SEED, params=bag_params, seed_flag=True)
xgbc = SklearnHelper(clf=XGBClassifier, seed=SEED, params=xgbc_params)
lda = SklearnHelper(clf=LinearDiscriminantAnalysis, seed=SEED, params=lda_params)
qda1 = SklearnHelper(clf=QuadraticDiscriminantAnalysis, seed=SEED, params=qda1_params)
qda2 = SklearnHelper(clf=QuadraticDiscriminantAnalysis, seed=SEED, params=qda2_params)
# Create NumPy arrays of the train, test and target dataframes to feed into our models
y_train = y_train.ravel()
#y.ravel()
#x_train = X.values # Creates an array of the train data
#x_test = test.values # Creats an array of the test data
#STD dataset:
x_train = X_train_std
x_test = test_std
# Create our OOF train and test predictions. These base results will be used as new features
dt_oof_train, dt_oof_test = get_oof(dt, x_train, y_train, x_test) # Decision Tree
knn_oof_train, knn_oof_test = get_oof(knn, x_train, y_train, x_test) # KNeighbors
rf_oof_train, rf_oof_test = get_oof(rf, x_train, y_train, x_test) # Random Forest
et_oof_train, et_oof_test = get_oof(et, x_train, y_train, x_test) # Extra Trees
ada_oof_train, ada_oof_test = get_oof(ada, x_train, y_train, x_test) # AdaBoost
gb_oof_train, gb_oof_test = get_oof(gb, x_train, y_train, x_test) # Gradient Boost
svc_oof_train, svc_oof_test = get_oof(svc, x_train, y_train, x_test) # SVM-l
svcr_oof_train, svcr_oof_test = get_oof(svcr, x_train, y_train, x_test) # SVM-r
bag_oof_train, bag_oof_test = get_oof(bag, x_train, y_train, x_test) # Bagging
xgbc_oof_train, xgbc_oof_test = get_oof(xgbc, x_train, y_train, x_test) # XGBoost
lda_oof_train, lda_oof_test = get_oof(lda, x_train, y_train, x_test) # Linear Discriminant Analysis
qda1_oof_train, qda1_oof_test = get_oof(qda1, x_train, y_train, x_test) # Quadratic Discriminant Analysis
qda2_oof_train, qda2_oof_test = get_oof(qda2, x_train, y_train, x_test) # Quadratic Discriminant Analysis
dt_features = dt.feature_importances(x_train,y_train)
##knn_features = knn.feature_importances(x_train,y_train)
rf_features = rf.feature_importances(x_train,y_train)
et_features = et.feature_importances(x_train, y_train)
ada_features = ada.feature_importances(x_train, y_train)
gb_features = gb.feature_importances(x_train,y_train)
##svc_features = svc.feature_importances(x_train,y_train)
##svcr_features = svcr.feature_importances(x_train,y_train)
##bag_features = bag.feature_importances(x_train,y_train)
xgbc_features = xgbc.feature_importances(x_train,y_train)
##lda_features = lda.feature_importances(x_train,y_train)
##qda1_features = qda1.feature_importances(x_train,y_train)
##qda2_features = qda2.feature_importances(x_train,y_train)
cols = X.columns.values
# Create a dataframe with features
feature_dataframe = pd.DataFrame( {'features': cols,
'Decision Tree': dt_features,
'Random Forest': rf_features,
'Extra Trees': et_features,
'AdaBoost': ada_features,
'Gradient Boost': gb_features,
'XGBoost': xgbc_features
})
# Create a new column containing the row-wise average of the importances
feature_dataframe['mean'] = feature_dataframe.drop(columns='features').mean(axis=1)
feature_dataframe
```
### First-Level Summary
```
#First-level output as new features
base_predictions_train = pd.DataFrame({
'DecisionTree': dt_oof_train.ravel(),
'KNeighbors': knn_oof_train.ravel(),
'RandomForest': rf_oof_train.ravel(),
'ExtraTrees': et_oof_train.ravel(),
'AdaBoost': ada_oof_train.ravel(),
'GradientBoost': gb_oof_train.ravel(),
'SVM-l': svc_oof_train.ravel(),
'SVM-r': svcr_oof_train.ravel(),
'Bagging': bag_oof_train.ravel(),
'XGBoost': xgbc_oof_train.ravel(),
'LDA': lda_oof_train.ravel(),
'QDA-1': qda1_oof_train.ravel(),
'QDA-2': qda2_oof_train.ravel(),
'type': y_train
})
base_predictions_train.head()
x_train = np.concatenate(( #dt_oof_train,
knn_oof_train,
rf_oof_train,
et_oof_train,
ada_oof_train,
gb_oof_train,
svc_oof_train,
#svcr_oof_train,
bag_oof_train,
xgbc_oof_train,
lda_oof_train,
#qda1_oof_train,
qda2_oof_train
), axis=1)
x_test = np.concatenate(( #dt_oof_test,
knn_oof_test,
rf_oof_test,
et_oof_test,
ada_oof_test,
gb_oof_test,
svc_oof_test,
#svcr_oof_test,
bag_oof_test,
xgbc_oof_test,
lda_oof_test,
#qda1_oof_test,
qda2_oof_test
), axis=1)
```
### Second Level Summary
### Level-2 XGBoost
```
#Second level learning model
import xgboost as xgb
l2_gbm = xgb.XGBClassifier(
learning_rate = 0.05,
n_estimators= 2000,
max_depth= 4,
#min_child_weight= 2,
gamma=0.9,
subsample=0.8,
colsample_bytree=0.8,
#scale_pos_weight=1,
    objective= 'multi:softprob',  # multi-class target ('binary:logistic' assumes two classes)
nthread= -1
).fit(x_train, y_train)
#level-2 CV: x_train, y_train
from sklearn import metrics
print(metrics.classification_report(y_train, l2_gbm.predict(x_train)))
from sklearn.model_selection import KFold
cv = KFold(n_splits=5, random_state=None, shuffle=True)
estimator = l2_gbm
plot_learning_curve(estimator, "level2 - XGBoost", x_train, y_train, cv=cv, train_sizes=np.linspace(0.2, 1.0, 8))
#level2 - XGB
l2_gbm_pred = l2_gbm.predict(x_test)
metrics.precision_recall_fscore_support(y_train, l2_gbm.predict(x_train), average='weighted')
print(l2_gbm_pred)
```
### Level-2 Linear Discriminant Analysis
```
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
l2_lda = LinearDiscriminantAnalysis()
l2_lda.fit(x_train, y_train)
print(metrics.classification_report(y_train, l2_lda.predict(x_train)))
from sklearn.model_selection import KFold
cv = KFold(n_splits=5, random_state=None, shuffle=True)
estimator = l2_lda
#plot_learning_curve(estimator, "lv2 Linear Discriminant Analysis", x_train, y_train, cv=cv, train_sizes=np.linspace(0.2, 1.0, 8))
#level2 - LDA
l2_lda_pred = l2_lda.predict(x_test)
metrics.precision_recall_fscore_support(y_train, l2_lda.predict(x_train), average='weighted')
print(l2_lda_pred)
```
### Level-2 Quadratic Discriminant Analysis
```
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
l2_qda = QuadraticDiscriminantAnalysis(reg_param=0.01, tol=0.001)
l2_qda.fit(x_train, y_train)
print(metrics.classification_report(y_train, l2_qda.predict(x_train)))
from sklearn.model_selection import KFold
cv = KFold(n_splits=5, random_state=None, shuffle=True)
estimator = l2_qda
plot_learning_curve(estimator, "Quadratic Discriminant Analysis", x_train, y_train, cv=cv, train_sizes=np.linspace(0.2, 1.0, 8))
#level2 - QDA
l2_qda_pred = l2_qda.predict(x_test)
metrics.precision_recall_fscore_support(y_train, l2_qda.predict(x_train), average='weighted')
print(l2_qda_pred)
```
# ASSET: Analysis of Sequences of Synchronous Events in Massively Parallel Spike Trains
The tutorial is based on the paper \[[1](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004939)\] and demonstrates a method of finding patterns of synchronous spike times (synfire chains) which cannot be revealed by measuring neuronal firing rates only.
In this tutorial, we use 50 neurons per group in 10 successive groups of a synfire chain embedded in a balanced network simulation. For more information about the data and ASSET algorithm, refer to \[[1](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004939)\].
#### References
[1] Torre E, Canova C, Denker M, Gerstein G, Helias M, Grün S (2016) ASSET: Analysis of Sequences of Synchronous Events in Massively Parallel Spike Trains. PLoS Comput Biol 12(7): e1004939. https://doi.org/10.1371/journal.pcbi.1004939
## 1. Explore the data and postulate the problem
We start by importing the required packages, setting up matplotlib and loading the data.
```
import matplotlib.pyplot as plt
import numpy as np
import quantities as pq
import neo
import elephant
from elephant import asset
%load_ext autoreload
plt.style.use('dark_background')
plt.rcParams['figure.autolayout'] = False
plt.rcParams['figure.figsize'] = 20, 12
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['font.size'] = 14
plt.rcParams['lines.linewidth'] = 1.0
plt.rcParams['lines.markersize'] = 8
plt.rcParams['legend.fontsize'] = 14
plt.rcParams['text.latex.preamble'] = r"\usepackage{subdepth}, \usepackage{type1cm}"
plt.rcParams['mathtext.fontset'] = 'cm'
```
First, we download the data, packed in a NixIO file, from https://gin.g-node.org/INM-6/elephant-data
```
!curl https://web.gin.g-node.org/INM-6/elephant-data/raw/master/dataset-2/asset_showcase_500.nix --output asset_showcase_500.nix --location
```
The data is represented as a `neo.Block` with one `neo.Segment` inside, which contains raw `neo.SpikeTrain`s. For more information on `neo.Block`, `neo.Segment`, and `neo.SpikeTrain` refer to https://neo.readthedocs.io/en/stable/core.html
```
with neo.NixIO('asset_showcase_500.nix', 'ro') as f:
block = f.read_block()
segment = block.segments[0]
spiketrains = segment.spiketrains
plt.figure()
plt.eventplot([st.magnitude for st in spiketrains], linewidths=5, linelengths=5)
plt.xlabel('time [ms]')
plt.ylabel('neuron id')
plt.title('Raw spikes')
plt.show()
```
Even though we see an increase of the firing rate, we cannot find a propagating activity just by looking at the raster plot above.
We want to find a permutation of the rows (neurons) in `spiketrains` such that a pattern (synfire chain) appears.
The true unknown permutation is stored in the `segment.annotations['spiketrain_ordering']`. **The goal is to recreate this permutation from raw data with the statistical method ASSET.**
## 2. Applying ASSET
### 2.1. Intersection matrix
The first step is to compute the intersection matrix `imat`, whose `(i,j)` entry is the number of neurons that spike in both bin `i` and bin `j` after binning is applied. The resultant symmetric matrix `imat` shows one off-diagonal synfire chain pattern (see the picture below).
```
# 2.1.1) create ASSET analysis object
# hint: try different bin sizes, e.g. bin_size=2.5, 3.5, 4.0 ms
asset_obj = asset.ASSET(spiketrains, bin_size=3*pq.ms)
# 2.1.2) compute the intersection matrix
imat = asset_obj.intersection_matrix()
plt.matshow(imat)
plt.colorbar();
```
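To build intuition, the intersection matrix reduces to a single matrix product on a binary neuron-by-bin matrix; a toy NumPy sketch (illustrative data, not the library internals):

```python
import numpy as np

# rows = neurons, columns = time bins; 1 means the neuron spiked in that bin
binned = np.array([[1, 0, 1, 0],
                   [1, 1, 0, 0],
                   [0, 1, 1, 1]])

# imat[i, j] = number of neurons active in both bin i and bin j
imat = binned.T @ binned
```

The result is symmetric, and the diagonal holds the number of active neurons per bin.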
### 2.2. Analytical probability matrix
The second step is to estimate the probability $P_{null}$ that non-zero entries in `imat` occurred by chance. The resultant `pmat` matrix is defined as the probability of having strictly fewer coincident spikes at bins `i` and `j` than the observed overlap (`imat`) under the null hypothesis of independence of the input spike trains.
```
pmat = asset_obj.probability_matrix_analytical(imat, kernel_width=50*pq.ms)
plt.matshow(pmat)
plt.colorbar();
```
### 2.3. Joint probability matrix
The third step is postprocessing of the analytical probability matrix `pmat` obtained in the previous step. Centered at each (i,j) entry of `pmat`, we apply a diagonal kernel of shape `filter_shape`, select the top `n_largest` probabilities of the (i,j) neighborhood (defined by `filter_shape`), and compute the significance of these `n_largest` joint neighbor probabilities. The resultant `jmat` matrix is a "dilated" version of `imat`.
This step is the most time-consuming.
```
# hint: try different filter_shapes, e.g. filter_shape=(7,3)
jmat = asset_obj.joint_probability_matrix(pmat, filter_shape=(11, 3), n_largest=3)
plt.matshow(jmat)
plt.colorbar();
```
### 2.4. Mask matrix
After setting significance thresholds $\alpha_P$ and $\alpha_J$ for the corresponding matrices $P$ (probability matrix `pmat`) and $J$ (joint probability matrix `jmat`), we check the entries for significance. The resultant boolean mask matrix `mmat` is then defined as
$$
M_{ij} = 1_{P_{ij} > \alpha_P} \cdot 1_{J_{ij} > \alpha_J}
$$
```
# hint: try different alphas for pmat and jmat
# hint: try alphas in range [0.99, 1-1e-6]
# hint: you can call 'asset.ASSET.mask_matrices(...)' without creating the asset_obj
alpha = .99
mmat = asset_obj.mask_matrices([pmat, jmat], [alpha, alpha])
plt.matshow(mmat);
```
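The mask construction itself is just two elementwise threshold tests; a toy NumPy sketch of the formula above (made-up probability values):

```python
import numpy as np

pmat = np.array([[0.999, 0.500],
                 [0.200, 0.9999]])
jmat = np.array([[0.995, 0.990],
                 [0.100, 0.9995]])
alpha = 0.99

# M_ij = 1 iff both probabilities exceed their significance threshold
mmat = (pmat > alpha) & (jmat > alpha)
```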
### 2.5. Find clusters in the mask matrix
Each entry (i,j) of the mask matrix $M$ from the previous step is assigned a cluster id. A cluster is constrained to have at least `min_neighbors` associated elements within an intra-cluster distance of at most `max_distance`. The cluster index, or the (i,j) entry of the resultant `cmat` matrix, is:
1. a positive int (cluster id), if (i,j) is part of a cluster;
2. `0`, if $M_{ij}$ is non-positive;
3. `-1`, if the element (i,j) does not belong to any cluster.
```
# hint: you can call asset.ASSET.cluster_matrix_entries(...) without creating the asset_obj
cmat = asset_obj.cluster_matrix_entries(mmat, max_distance=11, min_neighbors=10, stretch=5)
plt.matshow(cmat)
plt.colorbar();
```
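The clustering step is DBSCAN-like (ASSET additionally stretches the distance metric along the diagonal, which the plain DBSCAN below does not do); a toy sketch on coordinates of significant entries:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# (i, j) coordinates of significant mask-matrix entries (illustrative)
points = np.array([[0, 0], [0, 1], [1, 0], [1, 1],   # dense group -> one cluster
                   [10, 10], [10, 11], [11, 10],     # second dense group
                   [30, 5]])                         # isolated point -> noise
labels = DBSCAN(eps=2.0, min_samples=3).fit_predict(points)
```

Points in dense neighborhoods receive non-negative cluster ids; the isolated point is labeled `-1`, mirroring case 3 above.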
### 2.6. Sequences of synchronous events
Given the input spike trains, the binning from step 2.1, and the clustered intersection matrix `cmat` from step 2.5, extract the sequences of synchronous events (synfire chains).
```
sses = asset_obj.extract_synchronous_events(cmat)
sses.keys()
cluster_id = 1
cluster_chain = []
for chain in sses[cluster_id].values():
cluster_chain.extend(chain)
_, st_indices = np.unique(cluster_chain, return_index=True)
st_indices = np.take(cluster_chain, np.sort(st_indices))
reordered_sts = [spiketrains[idx] for idx in st_indices]
spiketrains_not_a_pattern = [spiketrains[idx] for idx in range(len(spiketrains))
if idx not in st_indices]
reordered_sts.extend(spiketrains_not_a_pattern)
plt.figure()
plt.eventplot([st.magnitude for st in reordered_sts], linewidths=5, linelengths=5)
plt.xlabel('time [ms]')
plt.ylabel('reordered neuron id')
plt.title('Reconstructed ordering of the neurons (y-axis) with synfire chains');
```
#### With the sequences of synchronous events `sses` we found a permutation of the spiketrains that reveals the synfire chain, matching the ground-truth ordering shown in the figure below.
```
ordering_true = segment.annotations['spiketrain_ordering']
spiketrains_ordered = [spiketrains[idx] for idx in ordering_true]
plt.figure()
plt.eventplot([st.magnitude for st in spiketrains_ordered], linewidths=5, linelengths=5)
plt.xlabel('time [ms]')
plt.ylabel('neuron id')
plt.title('True (unknown) ordering of the neurons (y-axis)')
plt.show()
```
```
import numpy as np
import json
from PIL import Image, ImageDraw
import os
import cv2
import pandas as pd
from tqdm import tqdm
import shutil
import random
import matplotlib.pyplot as plt
%matplotlib inline
from procrustes import procrustes
from sklearn.decomposition import PCA
import sys
sys.path.append('../inference/')
from face_detector import FaceDetector
# this face detector is taken from here
# https://github.com/TropComplique/FaceBoxes-tensorflow
# (facial keypoints detector will be trained to work well with this detector)
```
The purpose of this script is to explore the images and annotations of the CelebA dataset, clean the dataset, and convert the annotations into JSON format.
```
IMAGES_DIR = '/home/gpu2/hdd/dan/CelebA/img_celeba.7z/out/'
ANNOTATIONS_PATH = '/home/gpu2/hdd/dan/CelebA/list_landmarks_celeba.txt'
SPLIT_PATH = '/home/gpu2/hdd/dan/CelebA/list_eval_partition.txt'
```
# Read data
```
# collect paths to all images
all_paths = []
for name in tqdm(os.listdir(IMAGES_DIR)):
all_paths.append(os.path.join(IMAGES_DIR, name))
metadata = pd.DataFrame(all_paths, columns=['full_path'])
# strip root folder
metadata['name'] = metadata.full_path.apply(lambda x: os.path.relpath(x, IMAGES_DIR))
# number of images is taken from the official website
assert len(metadata) == 202599
# see all unique endings
metadata.name.apply(lambda x: x.split('.')[-1]).unique()
```
### Detect a face on each image
```
# load faceboxes detector
face_detector = FaceDetector('../inference/model-step-240000.pb', visible_device_list='0')
detections = []
for p in tqdm(metadata.full_path):
image = cv2.imread(p)
image = image[:, :, [2, 1, 0]] # to RGB
detections.append(face_detector(image))
# take only images where one high confidence box is detected
bad_images = [metadata.name[i] for i, (b, s) in enumerate(detections) if len(b) != 1 or s.max() < 0.5]
boxes = {}
for n, (box, score) in zip(metadata.name, detections):
if n not in bad_images:
ymin, xmin, ymax, xmax = box[0]
boxes[n] = (xmin, ymin, xmax, ymax)
```
### Read keypoints from annotations
```
def get_numbers(s):
s = s.strip().split(' ')
return [s[0]] + [int(i) for i in s[1:] if i]
with open(ANNOTATIONS_PATH, 'r') as f:
content = f.readlines()
content = content[2:]
content = [get_numbers(s) for s in content]
landmarks = {}
more_bad_images = []
for i in content:
name = i[0]
keypoints = [
[i[1], i[2]], # lefteye_x lefteye_y
[i[3], i[4]], # righteye_x righteye_y
[i[5], i[6]], # nose_x nose_y
[i[7], i[8]], # leftmouth_x leftmouth_y
[i[9], i[10]], # rightmouth_x rightmouth_y
]
# assert that landmarks are inside the box
if name in bad_images:
continue
xmin, ymin, xmax, ymax = boxes[name]
points = np.array(keypoints)
is_normal = (points[:, 0] > xmin).all() and\
(points[:, 0] < xmax).all() and\
(points[:, 1] > ymin).all() and\
(points[:, 1] < ymax).all()
if not is_normal:
more_bad_images.append(name)
landmarks[name] = keypoints
# number of weird landmarks
len(more_bad_images)
to_remove = more_bad_images + bad_images
metadata = metadata.loc[~metadata.name.isin(to_remove)]
metadata = metadata.reset_index(drop=True)
# backup results
metadata.to_csv('metadata.csv')
np.save('boxes.npy', boxes)
np.save('landmarks.npy', landmarks)
np.save('to_remove.npy', to_remove)
# metadata = pd.read_csv('metadata.csv', index_col=0)
# boxes = np.load('boxes.npy')[()]
# landmarks = np.load('landmarks.npy')[()]
# to_remove = np.load('to_remove.npy')
# size after cleaning
len(metadata)
```
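The keypoint sanity check used above (all landmarks strictly inside the detected box) can be isolated into a small helper; a sketch with hypothetical coordinates:

```python
import numpy as np

def points_inside_box(points, box):
    """Return True iff every (x, y) point lies strictly inside box = (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = box
    pts = np.asarray(points)
    return bool((pts[:, 0] > xmin).all() and (pts[:, 0] < xmax).all()
                and (pts[:, 1] > ymin).all() and (pts[:, 1] < ymax).all())
```

Images whose landmarks fall outside the face box are treated as annotation errors and dropped.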
# Show some bounding boxes and landmarks
```
def draw_boxes_on_image(path, box, keypoints):
image = Image.open(path)
draw = ImageDraw.Draw(image, 'RGBA')
xmin, ymin, xmax, ymax = box
fill = (255, 255, 255, 45)
outline = 'red'
draw.rectangle(
[(xmin, ymin), (xmax, ymax)],
fill=fill, outline=outline
)
for x, y in keypoints:
draw.ellipse([
(x - 2.0, y - 2.0),
(x + 2.0, y + 2.0)
], outline='red')
return image
i = random.randint(0, len(metadata) - 1) # choose a random image
some_boxes = boxes[metadata.name[i]]
keypoints = landmarks[metadata.name[i]]
draw_boxes_on_image(metadata.full_path[i], some_boxes, keypoints)
```
# Procrustes analysis (Pose-based Data Balancing strategy)
```
landmarks_array = []
boxes_array = []
for n in metadata.name:
landmarks_array.append(np.array(landmarks[n]))
boxes_array.append(np.array(boxes[n]))
landmarks_array = np.stack(landmarks_array, axis=0)
landmarks_array = landmarks_array.astype('float32')
boxes_array = np.stack(boxes_array)
mean_shape = landmarks_array.mean(0) # reference shape
num_images = len(landmarks_array)
aligned = []
for shape in tqdm(landmarks_array):
Z, _ = procrustes(mean_shape, shape, reflection=False)
aligned.append(Z)
aligned = np.stack(aligned)
pca = PCA(n_components=1)
projected = pca.fit_transform(aligned.reshape((-1, 10)))
projected = projected[:, 0]
plt.hist(projected, bins=40);
# frontal faces:
indices = np.where(np.abs(projected) < 5)[0]
# faces turned to the left:
# indices = np.where(projected > 15)[0]
# faces turned to the right:
# indices = np.where(projected < -30)[0]
i = indices[random.randint(0, len(indices) - 1)]
some_boxes = boxes[metadata.name[i]]
keypoints = landmarks[metadata.name[i]]
draw_boxes_on_image(metadata.full_path[i], some_boxes, keypoints)
# it is not strictly a yaw angle
metadata['yaw'] = projected
```
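The alignment step removes translation, scale, and rotation from each shape before PCA. SciPy ships a similar (though not identical) `procrustes` routine, sketched here on a toy triangle:

```python
import numpy as np
from scipy.spatial import procrustes

mean_shape = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])

# the same triangle rotated 90 degrees and translated -- a pure pose change
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
shape = mean_shape @ R.T + np.array([3.0, 5.0])

m1, m2, disparity = procrustes(mean_shape, shape)
# disparity ~ 0: the shapes are identical once pose is removed
```

A near-zero disparity confirms that only pose, not shape, differed between the two point sets.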
# Create train-val split
```
split = pd.read_csv(SPLIT_PATH, header=None, sep=' ')
split.columns = ['name', 'assignment']
split = split.loc[~split.name.isin(to_remove)]
split = split.reset_index(drop=True)
split.assignment.value_counts()
# "0" represents training image, "1" represents validation image, "2" represents testing image
train = list(split.loc[split.assignment.isin([0, 1]), 'name'])
val = list(split.loc[split.assignment.isin([2]), 'name'])
```
# Upsample rare poses
```
metadata['is_train'] = metadata.name.isin(train).astype('int')
bins = [metadata.yaw.min() - 1.0, -20.0, -5.0, 5.0, 20.0, metadata.yaw.max() + 1.0]
metadata['bin'] = pd.cut(metadata.yaw, bins, labels=False)
metadata.loc[metadata.is_train == 1, 'bin'].value_counts()
bins_to_upsample = [0, 1, 3, 4]
num_samples = 80000
val_metadata = metadata.loc[metadata.is_train == 0]
upsampled = [metadata.loc[(metadata.is_train == 1) & (metadata.bin == 2)]]
for b in bins_to_upsample:
to_use = (metadata.is_train == 1) & (metadata.bin == b)
m = metadata.loc[to_use].sample(n=num_samples, replace=True)
upsampled.append(m)
upsampled = pd.concat(upsampled)
upsampled.bin.value_counts()
metadata = pd.concat([upsampled, val_metadata])
```
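The pose-balancing recipe (bin by yaw, then resample every bin up to a common size) can be sketched in isolation; the yaw values below are synthetic and the bin edges mirror the ones used above:

```python
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
df = pd.DataFrame({"yaw": rng.normal(0.0, 15.0, size=1000)})

# far-left, left, frontal, right, far-right
bins = [-np.inf, -20.0, -5.0, 5.0, 20.0, np.inf]
df["bin"] = pd.cut(df["yaw"], bins, labels=False)

# upsample every bin (with replacement) to the size of the largest one
target = df["bin"].value_counts().max()
balanced = pd.concat([g.sample(n=target, replace=True, random_state=0)
                      for _, g in df.groupby("bin")])
```

After resampling, every pose bin contributes equally to the training set.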
# Convert
```
def get_annotation(name, new_name, width, height, translation):
xmin, ymin, xmax, ymax = boxes[name]
keypoints = landmarks[name]
tx, ty = translation
    keypoints = [[p[0] - tx, p[1] - ty] for p in keypoints]
xmin, ymin = xmin - tx, ymin - ty
xmax, ymax = xmax - tx, ymax - ty
annotation = {
"filename": new_name,
"size": {"depth": 3, "width": width, "height": height},
"box": {"ymin": int(ymin), "ymax": int(ymax), "xmax": int(xmax), "xmin": int(xmin)},
"landmarks": keypoints
}
return annotation
# create folders for the converted dataset
TRAIN_DIR = '/mnt/datasets/dan/CelebA/train/'
shutil.rmtree(TRAIN_DIR, ignore_errors=True)
os.mkdir(TRAIN_DIR)
os.mkdir(os.path.join(TRAIN_DIR, 'images'))
os.mkdir(os.path.join(TRAIN_DIR, 'annotations'))
VAL_DIR = '/mnt/datasets/dan/CelebA/val/'
shutil.rmtree(VAL_DIR, ignore_errors=True)
os.mkdir(VAL_DIR)
os.mkdir(os.path.join(VAL_DIR, 'images'))
os.mkdir(os.path.join(VAL_DIR, 'annotations'))
counter = 0
for T in tqdm(metadata.itertuples()):
# get width and height of an image
image = cv2.imread(T.full_path)
h, w, c = image.shape
assert c == 3
# name of the image
name = T.name
assert name.endswith('.jpg')
if name in train:
result_dir = TRAIN_DIR
elif name in val:
result_dir = VAL_DIR
    else:
        raise ValueError('image %s is in neither the train nor the val split' % name)
# crop the image to save space
xmin, ymin, xmax, ymax = boxes[name]
width, height = xmax - xmin, ymax - ymin
assert width > 0 and height > 0
xmin = max(int(xmin - width), 0)
ymin = max(int(ymin - height), 0)
xmax = min(int(xmax + width), w)
ymax = min(int(ymax + height), h)
crop = image[ymin:ymax, xmin:xmax, :]
# we need to transform annotations after cropping
translation = [xmin, ymin]
# we need to rename images because of upsampling
new_name = str(counter) + '.jpg'
counter += 1
cv2.imwrite(os.path.join(result_dir, 'images', new_name), crop)
# save annotation for it
d = get_annotation(name, new_name, xmax - xmin, ymax - ymin, translation)
json_name = new_name[:-4] + '.json'
json.dump(d, open(os.path.join(result_dir, 'annotations', json_name), 'w'))
```
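The crop-and-translate logic in the loop above reduces to two small helpers, sketched here with hypothetical box values:

```python
def expand_and_clip(box, img_w, img_h):
    """Grow a face box by its own width/height on each side, clipped to the image."""
    xmin, ymin, xmax, ymax = box
    w, h = xmax - xmin, ymax - ymin
    return (max(int(xmin - w), 0), max(int(ymin - h), 0),
            min(int(xmax + w), img_w), min(int(ymax + h), img_h))

def translate_points(points, origin):
    """Shift keypoints into the coordinate frame of the crop."""
    tx, ty = origin
    return [[x - tx, y - ty] for x, y in points]
```

Translating the keypoints by the crop origin keeps the annotations consistent with the cropped image.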
<img align="left" width="40%" src="http://www.lsce.ipsl.fr/Css/img/banniere_LSCE_75.png">
<br>Patrick BROCKMANN - LSCE (Climate and Environment Sciences Laboratory)
<hr>
### Discover Milankovitch Orbital Parameters over Time by reproducing figure from https://biocycle.atmos.colostate.edu/shiny/Milankovitch/
```
from bokeh.io import output_notebook, show
from bokeh.plotting import figure
from bokeh.layouts import gridplot, column
from bokeh.models import CustomJS, Slider, RangeSlider
from bokeh.models import Span
output_notebook()
import ipywidgets as widgets
from ipywidgets import Layout
from ipywidgets import interact
import pandas as pd
import numpy as np
```
### Download files
Data files from http://vo.imcce.fr/insola/earth/online/earth/earth.html
```
! wget -nc http://vo.imcce.fr/insola/earth/online/earth/La2004/INSOLN.LA2004.BTL.250.ASC
! wget -nc http://vo.imcce.fr/insola/earth/online/earth/La2004/INSOLP.LA2004.BTL.ASC
```
### Read files
```
# t Time from J2000 in 1000 years
# e eccentricity
# eps obliquity (radians)
# pibar longitude of perihelion from moving equinox (radians)
df1 = pd.read_csv('INSOLN.LA2004.BTL.250.ASC', delim_whitespace=True, names=['t', 'e', 'eps', 'pibar'])
df1.set_index('t', inplace=True)
df2 = pd.read_csv('INSOLP.LA2004.BTL.ASC', delim_whitespace=True, names=['t', 'e', 'eps', 'pibar'])
df2.set_index('t', inplace=True)
#df = pd.read_csv('La2010a_ecc3.dat', delim_whitespace=True, names=['t', 'e'])
#df = pd.read_csv('La2010a_alkhqp3L.dat', delim_whitespace=True, names=['t','a','l','k','h','q','p'])
# INSOLP.LA2004.BTL.ASC has a FORTRAN DOUBLE notation D instead of E
for col in ['e', 'eps', 'pibar']:
    df2[col] = df2[col].str.replace('D', 'E').astype(float)
# Quick check that the conversion to float worked
df2['e'][0]
df = pd.concat([df1[::-1],df2[1:]])
df
# t Time from J2000 in 1000 years
# e eccentricity
# eps obliquity (radians)
# pibar longitude of perihelion from moving equinox (radians)
df['eccentricity'] = df['e']
df['perihelion'] = df['pibar']
df['obliquity'] = 180. * df['eps'] / np.pi
df['precession'] = df['eccentricity'] * np.sin(df['perihelion'])
#latitude <- 65. * pi / 180.
#Q.day <- S0*(1+eccentricity*sin(perihelion+pi))^2 *sin(latitude)*sin(obliquity)
latitude = 65. * np.pi / 180.
df['insolation'] = 1367 * ( 1 + df['eccentricity'] * np.sin(df['perihelion'] + np.pi))**2 * np.sin(latitude) * np.sin(df['eps'])
df
```
### Build plot
```
a = widgets.IntRangeSlider(
layout=Layout(width='600px'),
value=[-2000, 50],
min=-250000,
max=21000,
step=100,
disabled=False,
continuous_update=False,
orientation='horizontal',
description='-249Myr to +21Myr:',
)
def plot1(limits):
years = df[limits[0]:limits[1]].index
zeroSpan = Span(location=0, dimension='height', line_color='black',
line_dash='solid', line_width=1)
p1 = figure(title='Eccentricity', active_scroll="wheel_zoom")
p1.line(years, df[limits[0]:limits[1]]['eccentricity'], color='red')
    p1.yaxis.axis_label = "Eccentricity (dimensionless)"
p1.add_layout(zeroSpan)
p2 = figure(title='Obliquity', x_range=p1.x_range)
p2.line(years, df[limits[0]:limits[1]]['obliquity'], color='forestgreen')
p2.yaxis.axis_label = "Degrees"
p2.add_layout(zeroSpan)
p3 = figure(title='Precessional index', x_range=p1.x_range)
p3.line(years, df[limits[0]:limits[1]]['precession'], color='dodgerblue')
    p3.yaxis.axis_label = "e sin(pibar) (dimensionless)"
p3.add_layout(zeroSpan)
p4 = figure(title='Mean Daily Insolation at 65N on Summer Solstice', x_range=p1.x_range)
p4.line(years, df[limits[0]:limits[1]]['insolation'], color='#ffc125')
p4.yaxis.axis_label = "Watts/m2"
p4.add_layout(zeroSpan)
show(gridplot([p1,p2,p3,p4], ncols=1, plot_width=600, plot_height=200))
interact(plot1, limits=a)
# Merged tool of subfigures is not marked as active
# https://github.com/bokeh/bokeh/issues/10659
p1 = figure(title='Eccentricity', active_scroll="wheel_zoom")
years = df[0:2000].index
p1.line(years, df[0:2000]['eccentricity'], color='red')
p2 = figure(title='Obliquity', x_range=p1.x_range)
p2.line(years, df[0:2000]['obliquity'], color='forestgreen')
show(gridplot([p1,p2], ncols=1, plot_width=600, plot_height=200, merge_tools=True))
```
# Tutorial 5: Inception, ResNet and DenseNet

**Filled notebook:**
[](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial5/Inception_ResNet_DenseNet.ipynb)
[](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial5/Inception_ResNet_DenseNet.ipynb)
**Pre-trained models:**
[](https://github.com/phlippe/saved_models/tree/main/tutorial5)
[](https://drive.google.com/drive/folders/1zOgLKmYJ2V3uHz57nPUMY6tq15RmEtNg?usp=sharing)
In this tutorial, we will implement and discuss variants of modern CNN architectures. Many different architectures have been proposed over the past few years. Some of the most impactful ones, and still relevant today, are the following: [GoogleNet](https://arxiv.org/abs/1409.4842)/Inception architecture (winner of ILSVRC 2014), [ResNet](https://arxiv.org/abs/1512.03385) (winner of ILSVRC 2015), and [DenseNet](https://arxiv.org/abs/1608.06993) (best paper award CVPR 2017). All of them were state-of-the-art models when they were proposed, and the core ideas of these networks are the foundations for most current state-of-the-art architectures. Thus, it is important to understand these architectures in detail and learn how to implement them.
Let's start with importing our standard libraries here.
```
## Standard libraries
import os
import numpy as np
import random
from PIL import Image
from types import SimpleNamespace
## Imports for plotting
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf') # For export
from matplotlib.colors import to_rgb
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 2.0
import seaborn as sns
sns.reset_orig()
## PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import torch.optim as optim
# Torchvision
import torchvision
from torchvision.datasets import CIFAR10
from torchvision import transforms
```
We will use the same `set_seed` function as in the previous tutorials, as well as the path variables `DATASET_PATH` and `CHECKPOINT_PATH`. Adjust the paths if necessary.
```
# Path to the folder where the datasets are/should be downloaded (e.g. CIFAR10)
DATASET_PATH = "../data"
# Path to the folder where the pretrained models are saved
CHECKPOINT_PATH = "../saved_models/tutorial5"
# Function for setting the seed
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
set_seed(42)
# Ensure that all operations are deterministic on GPU (if used) for reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
```
We also provide pretrained models and TensorBoard logs (more on this later) for this tutorial, which we download below.
```
import urllib.request
from urllib.error import HTTPError
# Github URL where saved models are stored for this tutorial
base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/tutorial5/"
# Files to download
pretrained_files = ["GoogleNet.ckpt", "ResNet.ckpt", "ResNetPreAct.ckpt", "DenseNet.ckpt",
"tensorboards/GoogleNet/events.out.tfevents.googlenet",
"tensorboards/ResNet/events.out.tfevents.resnet",
"tensorboards/ResNetPreAct/events.out.tfevents.resnetpreact",
"tensorboards/DenseNet/events.out.tfevents.densenet"]
# Create checkpoint path if it doesn't exist yet
os.makedirs(CHECKPOINT_PATH, exist_ok=True)
# For each file, check whether it already exists. If not, try downloading it.
for file_name in pretrained_files:
file_path = os.path.join(CHECKPOINT_PATH, file_name)
if "/" in file_name:
os.makedirs(file_path.rsplit("/",1)[0], exist_ok=True)
if not os.path.isfile(file_path):
file_url = base_url + file_name
print("Downloading %s..." % file_url)
try:
urllib.request.urlretrieve(file_url, file_path)
except HTTPError as e:
print("Something went wrong. Please try to download the file from the GDrive folder, or contact the author with the full output including the following error:\n", e)
```
Throughout this tutorial, we will train and evaluate the models on the CIFAR10 dataset. This allows you to compare the results obtained here with the model you have implemented in the first assignment. As we have learned from the previous tutorial about initialization, it is important to have the data preprocessed with a zero mean. Therefore, as a first step, we will calculate the mean and standard deviation of the CIFAR dataset:
```
train_dataset = CIFAR10(root=DATASET_PATH, train=True, download=True)
DATA_MEANS = (train_dataset.data / 255.0).mean(axis=(0,1,2))
DATA_STD = (train_dataset.data / 255.0).std(axis=(0,1,2))
print("Data mean", DATA_MEANS)
print("Data std", DATA_STD)
```
We will use this information to define a `transforms.Normalize` module which will normalize our data accordingly. Additionally, we will use data augmentation during training. This reduces the risk of overfitting and helps CNNs to generalize better. Specifically, we will apply two random augmentations.
First, we will flip each image horizontally with a probability of 50% (`transforms.RandomHorizontalFlip`). The object class usually does not change when flipping an image, and we don't expect any image information to depend on the horizontal orientation. This would, however, be different if we tried to detect digits or letters in an image, as those have a fixed orientation.
The second augmentation we use is called `transforms.RandomResizedCrop`. This transformation scales the image within a small range, while possibly changing the aspect ratio, and afterward crops it back to the original size. Therefore, the actual pixel values change while the content and overall semantics of the image stay the same.
We will randomly split the training dataset into a training and a validation set. The validation set will be used for determining early stopping. After finishing the training, we test the models on the CIFAR test set.
```
test_transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize(DATA_MEANS, DATA_STD)
])
# For training, we add some augmentation. Networks are too powerful and would overfit.
train_transform = transforms.Compose([transforms.RandomHorizontalFlip(),
transforms.RandomResizedCrop((32,32),scale=(0.8,1.0),ratio=(0.9,1.1)),
transforms.ToTensor(),
transforms.Normalize(DATA_MEANS, DATA_STD)
])
# Loading the training dataset. We need to split it into a training and validation part
# We need to do a little trick because the validation set should not use the augmentation.
train_dataset = CIFAR10(root=DATASET_PATH, train=True, transform=train_transform, download=True)
val_dataset = CIFAR10(root=DATASET_PATH, train=True, transform=test_transform, download=True)
set_seed(42)
train_set, _ = torch.utils.data.random_split(train_dataset, [45000, 5000])
set_seed(42)
_, val_set = torch.utils.data.random_split(val_dataset, [45000, 5000])
# Loading the test set
test_set = CIFAR10(root=DATASET_PATH, train=False, transform=test_transform, download=True)
# We define a set of data loaders that we can use for various purposes later.
train_loader = data.DataLoader(train_set, batch_size=128, shuffle=True, drop_last=True, pin_memory=True, num_workers=4)
val_loader = data.DataLoader(val_set, batch_size=128, shuffle=False, drop_last=False, num_workers=4)
test_loader = data.DataLoader(test_set, batch_size=128, shuffle=False, drop_last=False, num_workers=4)
```
To verify that our normalization works, we can print out the mean and standard deviation of a single batch. The mean should be close to 0 and the standard deviation close to 1 for each channel:
```
imgs, _ = next(iter(train_loader))
print("Batch mean", imgs.mean(dim=[0,2,3]))
print("Batch std", imgs.std(dim=[0,2,3]))
```
Finally, let's visualize a few images from the training set and see how they look after random data augmentation:
```
NUM_IMAGES = 4
images = [train_dataset[idx][0] for idx in range(NUM_IMAGES)]
orig_images = [Image.fromarray(train_dataset.data[idx]) for idx in range(NUM_IMAGES)]
orig_images = [test_transform(img) for img in orig_images]
img_grid = torchvision.utils.make_grid(torch.stack(images + orig_images, dim=0), nrow=4, normalize=True, pad_value=0.5)
img_grid = img_grid.permute(1, 2, 0)
plt.figure(figsize=(8,8))
plt.title("Augmentation examples on CIFAR10")
plt.imshow(img_grid)
plt.axis('off')
plt.show()
plt.close()
```
## PyTorch Lightning
In this notebook and in many following ones, we will make use of the library [PyTorch Lightning](https://www.pytorchlightning.ai/). PyTorch Lightning is a framework that simplifies the code needed to train, evaluate, and test a model in PyTorch. It also handles logging into [TensorBoard](https://pytorch.org/tutorials/intermediate/tensorboard_tutorial.html), a visualization toolkit for ML experiments, and saves model checkpoints automatically with minimal code overhead on our side. This is extremely helpful for us as we want to focus on implementing different model architectures and spend little time on other code overhead. Note that at the time of writing/teaching, the framework has been released in version 1.0. Future versions might have a slightly changed interface and thus might not work perfectly with the code (we will try to keep it up-to-date as much as possible).
Now, we will take the first step in PyTorch Lightning, and continue to explore the framework in our other tutorials. First, we import the library:
```
# PyTorch Lightning
try:
import pytorch_lightning as pl
except ModuleNotFoundError: # Google Colab does not have PyTorch Lightning installed by default. Hence, we do it here if necessary
!pip install pytorch-lightning==1.0.3
import pytorch_lightning as pl
```
PyTorch Lightning comes with a lot of useful functions, such as one for setting the seed:
```
# Setting the seed
pl.seed_everything(42)
```
Thus, in the future, we don't have to define our own `set_seed` function anymore.
In PyTorch Lightning, we define `pl.LightningModule`'s (inheriting from `torch.nn.Module`) that organize our code into 5 main sections:
1. Initialization (`__init__`), where we create all necessary parameters/models
2. Optimizers (`configure_optimizers`) where we create the optimizers, learning rate scheduler, etc.
3. Training loop (`training_step`) where we only have to define the loss calculation for a single batch (the loop of `optimizer.zero_grad()`, `loss.backward()` and `optimizer.step()`, as well as any logging/saving operation, is done in the background)
4. Validation loop (`validation_step`) where similarly to the training, we only have to define what should happen per step
5. Test loop (`test_step`) which is the same as validation, only on a test set.
Therefore, we don't abstract the PyTorch code, but rather organize it and define some default operations that are commonly used. If you need to change something else in your training/validation/test loop, there are many possible functions you can overwrite (see the [docs](https://pytorch-lightning.readthedocs.io/en/stable/lightning_module.html) for details).
Now we can look at an example of how a Lightning Module for training a CNN looks like:
```
class CIFARTrainer(pl.LightningModule):
def __init__(self, model_name, model_hparams, optimizer_name, optimizer_hparams):
"""
Inputs:
model_name - Name of the model/CNN to run. Used for creating the model (see function below)
model_hparams - Hyperparameters for the model, as dictionary.
optimizer_name - Name of the optimizer to use. Currently supported: Adam, SGD
optimizer_hparams - Hyperparameters for the optimizer, as dictionary. This includes learning rate, weight decay, etc.
"""
super().__init__()
        # Exports the hyperparameters to a YAML file, and creates the "self.hparams" namespace
self.save_hyperparameters()
# Create model
self.model = create_model(model_name, model_hparams)
# Create loss module
self.loss_module = nn.CrossEntropyLoss()
# Example input for visualizing the graph in Tensorboard
self.example_input_array = torch.zeros((1, 3, 32, 32), dtype=torch.float32)
def forward(self, imgs):
# Forward function that is run when visualizing the graph
return self.model(imgs)
def configure_optimizers(self):
# We will support Adam or SGD as optimizers.
if self.hparams.optimizer_name == "Adam":
# AdamW is Adam with a correct implementation of weight decay (see here for details: https://arxiv.org/pdf/1711.05101.pdf)
optimizer = optim.AdamW(self.parameters(), **self.hparams.optimizer_hparams)
elif self.hparams.optimizer_name == "SGD":
optimizer = optim.SGD(self.parameters(), **self.hparams.optimizer_hparams)
else:
assert False, "Unknown optimizer: \"%s\"" % self.hparams.optimizer_name
# We will reduce the learning rate by 0.1 after 100 and 150 epochs
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100,150], gamma=0.1)
return [optimizer], [scheduler]
def training_step(self, batch, batch_idx):
# "batch" is the output of the training data loader.
imgs, labels = batch
preds = self.model(imgs)
loss = self.loss_module(preds, labels)
acc = (preds.argmax(dim=-1) == labels).float().mean()
self.log('train_acc', acc, on_step=False, on_epoch=True) # Logs the accuracy per epoch to tensorboard (weighted average over batches)
self.log('train_loss', loss)
return loss # Return tensor to call ".backward" on
def validation_step(self, batch, batch_idx):
imgs, labels = batch
preds = self.model(imgs).argmax(dim=-1)
acc = (labels == preds).float().mean()
self.log('val_acc', acc) # By default logs it per epoch (weighted average over batches)
def test_step(self, batch, batch_idx):
imgs, labels = batch
preds = self.model(imgs).argmax(dim=-1)
acc = (labels == preds).float().mean()
self.log('test_acc', acc) # By default logs it per epoch (weighted average over batches), and returns it afterwards
```
We see that the code is organized and clear, which helps if someone else tries to understand your code.
Another important part of PyTorch Lightning is the concept of callbacks. Callbacks are self-contained functions that contain the non-essential logic of your Lightning Module. They are usually called after finishing a training epoch, but can also influence other parts of your training loop. For instance, we will use the following two pre-defined callbacks: `LearningRateMonitor` and `ModelCheckpoint`. The learning rate monitor adds the current learning rate to our TensorBoard, which helps to verify that our learning rate scheduler works correctly. The model checkpoint callback allows you to customize the saving routine of your checkpoints. For instance, how many checkpoints to keep, when to save, which metric to look out for, etc. We import them below:
```
# Callbacks
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
```
To allow running multiple different models with the same Lightning module, we define a function below that maps a model name to the model class. At this stage, the dictionary `model_dict` is empty, but we will fill it throughout the notebook with our new models.
```
model_dict = {}
def create_model(model_name, model_hparams):
if model_name in model_dict:
return model_dict[model_name](**model_hparams)
else:
assert False, "Unknown model name \"%s\". Available models are: %s" % (model_name, str(model_dict.keys()))
```
Similarly, to use the activation function as another hyperparameter in our model, we define a "name to function" dict below:
```
act_fn_by_name = {
"tanh": nn.Tanh,
"relu": nn.ReLU,
"leakyrelu": nn.LeakyReLU,
"gelu": nn.GELU
}
```
If we passed the classes or objects directly as an argument to the Lightning module, we couldn't take advantage of PyTorch Lightning's automatic hyperparameter saving and loading.
Besides the Lightning module, the second most important module in PyTorch Lightning is the `Trainer`. The trainer is responsible for executing the training steps defined in the Lightning module and completes the framework. Similar to the Lightning module, you can override any key part that you don't want to be automated, but the default settings often reflect best practice. For a full overview, see the [documentation](https://pytorch-lightning.readthedocs.io/en/stable/trainer.html). The most important functions we use below are:
* `trainer.fit`: Takes as input a lightning module, a training dataset, and an (optional) validation dataset. This function trains the given module on the training dataset with occasional validation (default once per epoch, can be changed)
* `trainer.test`: Takes as input a model and a dataset on which we want to test. It returns the test metric on the dataset.
For training and testing, we don't have to worry about things like setting the model to eval mode (`model.eval()`) as this is all done automatically. See below how we define a training function for our models:
```
def train_model(model_name, save_name=None, **kwargs):
"""
Inputs:
model_name - Name of the model you want to run. Is used to look up the class in "model_dict"
save_name (optional) - If specified, this name will be used for creating the checkpoint and logging directory.
"""
if save_name is None:
save_name = model_name
# Create a PyTorch Lightning trainer with the generation callback
trainer = pl.Trainer(default_root_dir=os.path.join(CHECKPOINT_PATH, save_name), # Where to save models
checkpoint_callback=ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc"), # Save the best checkpoint based on the maximum val_acc recorded. Saves only weights and not optimizer
gpus=1 if str(device)=="cuda:0" else 0, # We run on a single GPU (if possible)
max_epochs=180, # How many epochs to train for if no patience is set
callbacks=[LearningRateMonitor("epoch")], # Log learning rate every epoch
progress_bar_refresh_rate=1) # In case your notebook crashes due to the progress bar, consider increasing the refresh rate
trainer.logger._log_graph = True # If True, we plot the computation graph in tensorboard
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, save_name + ".ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model at %s, loading..." % pretrained_filename)
model = CIFARTrainer.load_from_checkpoint(pretrained_filename) # Automatically loads the model with the saved hyperparameters
else:
        pl.seed_everything(42) # To be reproducible
model = CIFARTrainer(model_name=model_name, **kwargs)
trainer.fit(model, train_loader, val_loader)
model = CIFARTrainer.load_from_checkpoint(trainer.checkpoint_callback.best_model_path) # Load best checkpoint after training
# Test best model on validation and test set
val_result = trainer.test(model, test_dataloaders=val_loader, verbose=False)
test_result = trainer.test(model, test_dataloaders=test_loader, verbose=False)
result = {"test": test_result[0]["test_acc"], "val": val_result[0]["test_acc"]}
return model, result
```
Finally, we can focus on the Convolutional Neural Networks we want to implement today: GoogleNet, ResNet, and DenseNet.
## Inception
The [GoogleNet](https://arxiv.org/abs/1409.4842), proposed in 2014, won the ImageNet Challenge because of its use of Inception modules. In general, we will mainly focus on the concept of Inception in this tutorial instead of the specifics of the GoogleNet, as many follow-up works have been built on Inception ([Inception-v2](https://arxiv.org/abs/1512.00567), [Inception-v3](https://arxiv.org/abs/1512.00567), [Inception-v4](https://arxiv.org/abs/1602.07261), [Inception-ResNet](https://arxiv.org/abs/1602.07261), ...). The follow-up works mainly focus on increasing efficiency and enabling very deep Inception networks. However, for a fundamental understanding, it is sufficient to look at the original Inception block.
An Inception block applies four convolution blocks separately on the same feature map: a 1x1, 3x3, and 5x5 convolution, and a max pool operation. This allows the network to look at the same data with different receptive fields. Of course, learning only 5x5 convolutions would be theoretically more powerful. However, this is not only more computation- and memory-heavy but also tends to overfit much more easily. The overall Inception block looks as follows (figure credit - [Szegedy et al.](https://arxiv.org/abs/1409.4842)):
<center width="100%"><img src="inception_block.svg" style="display: block; margin-left: auto; margin-right: auto;" width="500px"/></center>
The additional 1x1 convolutions before the 3x3 and 5x5 convolutions are used for dimensionality reduction. This is especially crucial as the feature maps of all branches are merged afterward, and we don't want any explosion of feature size. As 5x5 convolutions are 25 times more expensive than 1x1 convolutions, we can save a lot of computation and parameters by reducing the dimensionality before the large convolutions.
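To make the savings concrete, we can compare the parameter count of a direct 5x5 convolution with that of a 1x1 bottleneck followed by a 5x5 convolution. This is a minimal sketch; the channel sizes 64, 16, and 32 are illustrative and not taken from the network we build below:

```python
import torch.nn as nn

def n_params(module):
    # Total number of learnable parameters (weights + biases)
    return sum(p.numel() for p in module.parameters())

# Direct 5x5 convolution from 64 to 32 channels
direct = nn.Conv2d(64, 32, kernel_size=5, padding=2)
# 1x1 bottleneck down to 16 channels first, then the 5x5 convolution
bottleneck = nn.Sequential(nn.Conv2d(64, 16, kernel_size=1),
                           nn.Conv2d(16, 32, kernel_size=5, padding=2))

print(n_params(direct))      # 64*32*5*5 + 32 = 51232
print(n_params(bottleneck))  # (64*16 + 16) + (16*32*5*5 + 32) = 13872
```

Despite producing an output feature map of the same shape, the bottleneck variant needs less than a third of the parameters of the direct convolution.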
We can now try to implement the Inception Block ourselves:
```
class InceptionBlock(nn.Module):
def __init__(self, c_in, c_red : dict, c_out : dict, act_fn):
"""
Inputs:
c_in - Number of input feature maps from the previous layers
c_red - Dictionary with keys "3x3" and "5x5" specifying the output of the dimensionality reducing 1x1 convolutions
c_out - Dictionary with keys "1x1", "3x3", "5x5", and "max"
act_fn - Activation class constructor (e.g. nn.ReLU)
"""
super().__init__()
# 1x1 convolution branch
self.conv_1x1 = nn.Sequential(
nn.Conv2d(c_in, c_out["1x1"], kernel_size=1),
nn.BatchNorm2d(c_out["1x1"]),
act_fn()
)
# 3x3 convolution branch
self.conv_3x3 = nn.Sequential(
nn.Conv2d(c_in, c_red["3x3"], kernel_size=1),
nn.BatchNorm2d(c_red["3x3"]),
act_fn(),
nn.Conv2d(c_red["3x3"], c_out["3x3"], kernel_size=3, padding=1),
nn.BatchNorm2d(c_out["3x3"]),
act_fn()
)
# 5x5 convolution branch
self.conv_5x5 = nn.Sequential(
nn.Conv2d(c_in, c_red["5x5"], kernel_size=1),
nn.BatchNorm2d(c_red["5x5"]),
act_fn(),
nn.Conv2d(c_red["5x5"], c_out["5x5"], kernel_size=5, padding=2),
nn.BatchNorm2d(c_out["5x5"]),
act_fn()
)
# Max-pool branch
self.max_pool = nn.Sequential(
nn.MaxPool2d(kernel_size=3, padding=1, stride=1),
nn.Conv2d(c_in, c_out["max"], kernel_size=1),
nn.BatchNorm2d(c_out["max"]),
act_fn()
)
def forward(self, x):
x_1x1 = self.conv_1x1(x)
x_3x3 = self.conv_3x3(x)
x_5x5 = self.conv_5x5(x)
x_max = self.max_pool(x)
x_out = torch.cat([x_1x1, x_3x3, x_5x5, x_max], dim=1)
return x_out
```
The GoogleNet architecture consists of stacking multiple Inception blocks with occasional max pooling to reduce the height and width of the feature maps. The original GoogleNet was designed for image sizes of ImageNet (224x224 pixels) and had almost 7 million parameters. As we train on CIFAR10 with image sizes of 32x32, we don't require such a heavy architecture, and instead, apply a reduced version. The number of channels for dimensionality reduction and output per filter (1x1, 3x3, 5x5, and max pooling) need to be manually specified and can be changed if interested. The general intuition is to have the most filters for the 3x3 convolutions, as they are powerful enough to take the context into account while requiring almost a third of the parameters of the 5x5 convolution.
```
class GoogleNet(nn.Module):
def __init__(self, num_classes=10, act_fn_name="relu", **kwargs):
super().__init__()
self.hparams = SimpleNamespace(num_classes=num_classes,
act_fn_name=act_fn_name,
act_fn=act_fn_by_name[act_fn_name])
self._create_network()
self._init_params()
def _create_network(self):
# A first convolution on the original image to scale up the channel size
self.input_net = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3, padding=1),
nn.BatchNorm2d(64),
self.hparams.act_fn()
)
# Stacking inception blocks
self.inception_blocks = nn.Sequential(
InceptionBlock(64, c_red={"3x3":32,"5x5":16}, c_out={"1x1":16,"3x3":32,"5x5":8,"max":8}, act_fn=self.hparams.act_fn),
InceptionBlock(64, c_red={"3x3":32,"5x5":16}, c_out={"1x1":24,"3x3":48,"5x5":12,"max":12}, act_fn=self.hparams.act_fn),
nn.MaxPool2d(3, stride=2, padding=1), # 32x32 => 16x16
InceptionBlock(96, c_red={"3x3":32,"5x5":16}, c_out={"1x1":24,"3x3":48,"5x5":12,"max":12}, act_fn=self.hparams.act_fn),
InceptionBlock(96, c_red={"3x3":32,"5x5":16}, c_out={"1x1":16,"3x3":48,"5x5":16,"max":16}, act_fn=self.hparams.act_fn),
InceptionBlock(96, c_red={"3x3":32,"5x5":16}, c_out={"1x1":16,"3x3":48,"5x5":16,"max":16}, act_fn=self.hparams.act_fn),
InceptionBlock(96, c_red={"3x3":32,"5x5":16}, c_out={"1x1":32,"3x3":48,"5x5":24,"max":24}, act_fn=self.hparams.act_fn),
nn.MaxPool2d(3, stride=2, padding=1), # 16x16 => 8x8
InceptionBlock(128, c_red={"3x3":48,"5x5":16}, c_out={"1x1":32,"3x3":64,"5x5":16,"max":16}, act_fn=self.hparams.act_fn),
InceptionBlock(128, c_red={"3x3":48,"5x5":16}, c_out={"1x1":32,"3x3":64,"5x5":16,"max":16}, act_fn=self.hparams.act_fn)
)
# Mapping to classification output
self.output_net = nn.Sequential(
nn.AdaptiveAvgPool2d((1,1)),
nn.Flatten(),
nn.Linear(128, self.hparams.num_classes)
)
def _init_params(self):
# Based on our discussion in Tutorial 4, we should initialize the convolutions according to the activation function
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight, nonlinearity=self.hparams.act_fn_name)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
def forward(self, x):
x = self.input_net(x)
x = self.inception_blocks(x)
x = self.output_net(x)
return x
```
Now, we can integrate our model to the model dictionary we defined above:
```
model_dict["GoogleNet"] = GoogleNet
```
The training of the model is handled by PyTorch Lightning, and we just have to define the command to start. Note that we train for almost 200 epochs, which takes about an hour on Lisa's default GPUs (GTX1080Ti). We recommend using the saved models, and training your own model only if you are interested.
```
googlenet_model, googlenet_results = train_model(model_name="GoogleNet",
model_hparams={"num_classes": 10,
"act_fn_name": "relu"},
optimizer_name="Adam",
optimizer_hparams={"lr": 1e-3,
"weight_decay": 1e-4})
```
We will compare the results later in the notebooks, but we can already print them here for a first glance:
```
print("GoogleNet Results", googlenet_results)
```
### Tensorboard log
A nice extra of PyTorch Lightning is the automatic logging into TensorBoard. To give you a better intuition of what TensorBoard can be used for, we can look at the board that PyTorch Lightning generated when training the GoogleNet. TensorBoard provides an inline functionality for Jupyter notebooks, and we use it here:
```
# Import tensorboard
from torch.utils.tensorboard import SummaryWriter
%load_ext tensorboard
# Opens tensorboard in notebook. Adjust the path to your CHECKPOINT_PATH!
%tensorboard --logdir ../saved_models/tutorial5/tensorboards/GoogleNet/
```
<center width="100%"><img src="tensorboard_screenshot_GoogleNet.png" width="1000px"></center>
TensorBoard is organized in multiple tabs. The main tab is the scalar tab where we can log the development of single numbers. For example, we have plotted the training loss, accuracy, learning rate, etc. If we look at the training or validation accuracy, we can really see the impact of using a learning rate scheduler. Reducing the learning rate gives our model a nice increase in training performance. Similarly, when looking at the training loss, we see a sudden decrease at this point. However, the high accuracy on the training set compared to validation indicates that our model was overfitting, which is inevitable for such large networks.
Another interesting tab in TensorBoard is the graph tab. It shows us the network architecture organized by building blocks from the input to the output. It basically shows the operations taken in the forward step of `CIFARTrainer`. Double-click on a module to open it. Feel free to explore the architecture from a different perspective. The graph visualization can often help you to validate that your model is actually doing what it is supposed to do, and you don't miss any layers in the computation graph.
## ResNet
The [ResNet](https://arxiv.org/abs/1512.03385) paper is one of the [most cited AI papers](https://www.natureindex.com/news-blog/google-scholar-reveals-most-influential-papers-research-citations-twenty-twenty), and has been the foundation for neural networks with more than 1,000 layers. Despite its simplicity, the idea of residual connections is highly effective as it supports stable gradient propagation through the network. Instead of modeling $x_{l+1}=F(x_{l})$, we model $x_{l+1}=x_{l}+F(x_{l})$ where $F$ is a non-linear mapping (usually a sequence of NN modules like convolutions, activation functions, and normalizations). If we do backpropagation on such residual connections, we obtain:
$$\frac{\partial x_{l+1}}{\partial x_{l}} = \mathbf{I} + \frac{\partial F(x_{l})}{\partial x_{l}}$$
The bias towards the identity matrix guarantees stable gradient propagation that is less affected by $F$ itself. There have been many variants of ResNet proposed, which mostly concern the function $F$, or the operations applied on the sum. In this tutorial, we look at two of them: the original ResNet block, and the [Pre-Activation ResNet block](https://arxiv.org/abs/1603.05027). We visually compare the blocks below (figure credit - [He et al.](https://arxiv.org/abs/1603.05027)):
<center width="100%"><img src="resnet_block.svg" style="display: block; margin-left: auto; margin-right: auto;" width="300px"/></center>
The original ResNet block applies a non-linear activation function, usually ReLU, after the skip connection. In contrast, the pre-activation ResNet block applies the non-linearity at the beginning of $F$. Both have their advantages and disadvantages. For very deep networks, however, the pre-activation ResNet has been shown to perform better, as the gradient flow is guaranteed to contain the identity matrix, as calculated above, and is not harmed by any non-linear activation applied on top of it. For comparison, in this notebook, we implement both ResNet types as shallow networks.
Let's start with the original ResNet block. The visualization above already shows what layers are included in $F$. One special case we have to handle is when we want to reduce the image dimensions in terms of width and height. The basic ResNet block requires $F(x_{l})$ to be of the same shape as $x_{l}$. Thus, we need to change the dimensionality of $x_{l}$ as well before adding it to $F(x_{l})$. The original implementation used an identity mapping with stride 2 and padded additional feature dimensions with 0. However, the more common implementation is to use a 1x1 convolution with stride 2, as it allows us to change the feature dimensionality while being efficient in parameter and computation cost. The code for the ResNet block is relatively simple, and is shown below:
```
class ResNetBlock(nn.Module):

    def __init__(self, c_in, act_fn, subsample=False, c_out=-1):
        """
        Inputs:
            c_in - Number of input features
            act_fn - Activation class constructor (e.g. nn.ReLU)
            subsample - If True, we want to apply a stride inside the block and reduce the output shape by 2 in height and width
            c_out - Number of output features. Note that this is only relevant if subsample is True, as otherwise, c_out = c_in
        """
        super().__init__()
        if not subsample:
            c_out = c_in

        # Network representing F
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, stride=1 if not subsample else 2, bias=False),  # No bias needed as the Batch Norm handles it
            nn.BatchNorm2d(c_out),
            act_fn(),
            nn.Conv2d(c_out, c_out, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(c_out)
        )

        # 1x1 convolution with stride 2 means we take the upper left value, and transform it to the new output size
        self.downsample = nn.Conv2d(c_in, c_out, kernel_size=1, stride=2) if subsample else None
        self.act_fn = act_fn()

    def forward(self, x):
        z = self.net(x)
        if self.downsample is not None:
            x = self.downsample(x)
        out = z + x
        out = self.act_fn(out)
        return out
```
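As a quick check of the shapes involved, the standard convolution output-size formula shows that both paths of a subsampling block halve the spatial resolution, so the addition in `forward` lines up (a small standalone sketch, independent of PyTorch):

```
def conv_out_size(size, kernel_size, stride=1, padding=0):
    # Standard convolution output-size formula: floor((N + 2P - K) / S) + 1
    return (size + 2 * padding - kernel_size) // stride + 1

# 3x3 conv, padding 1, stride 2 (first conv of a subsampling block): 32 -> 16
print(conv_out_size(32, kernel_size=3, stride=2, padding=1))  # 16
# 1x1 conv, stride 2 (the shortcut's downsample): also 32 -> 16,
# so z and x have matching shapes before the addition
print(conv_out_size(32, kernel_size=1, stride=2, padding=0))  # 16
# 3x3 conv, padding 1, stride 1 keeps the size: 16 -> 16
print(conv_out_size(16, kernel_size=3, stride=1, padding=1))  # 16
```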
The second block we implement is the pre-activation ResNet block. For this, we have to change the order of layers in `self.net`, and not apply an activation function on the output. Additionally, the downsampling operation has to apply a non-linearity as well, since the input, $x_l$, has not been processed by a non-linearity yet. Hence, the block looks as follows:
```
class PreActResNetBlock(nn.Module):

    def __init__(self, c_in, act_fn, subsample=False, c_out=-1):
        """
        Inputs:
            c_in - Number of input features
            act_fn - Activation class constructor (e.g. nn.ReLU)
            subsample - If True, we want to apply a stride inside the block and reduce the output shape by 2 in height and width
            c_out - Number of output features. Note that this is only relevant if subsample is True, as otherwise, c_out = c_in
        """
        super().__init__()
        if not subsample:
            c_out = c_in

        # Network representing F
        self.net = nn.Sequential(
            nn.BatchNorm2d(c_in),
            act_fn(),
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, stride=1 if not subsample else 2, bias=False),
            nn.BatchNorm2d(c_out),
            act_fn(),
            nn.Conv2d(c_out, c_out, kernel_size=3, padding=1, bias=False)
        )

        # The 1x1 convolution needs to apply the non-linearity as well, as it is not applied on the skip connection
        self.downsample = nn.Sequential(
            nn.BatchNorm2d(c_in),
            act_fn(),
            nn.Conv2d(c_in, c_out, kernel_size=1, stride=2, bias=False)
        ) if subsample else None

    def forward(self, x):
        z = self.net(x)
        if self.downsample is not None:
            x = self.downsample(x)
        out = z + x
        return out
```
Similarly to the model selection, we define a dictionary that maps strings to block classes. We will use the string name as a hyperparameter value in our model to choose between the ResNet blocks. Feel free to implement any other ResNet block type and add it here as well.
```
resnet_blocks_by_name = {
    "ResNetBlock": ResNetBlock,
    "PreActResNetBlock": PreActResNetBlock
}
```
The overall ResNet architecture consists of stacking multiple ResNet blocks, some of which downsample the input. When talking about ResNet blocks in the whole network, we usually group them by output shape. Hence, if we say the ResNet has `[3,3,3]` blocks, it means that we have three groups of 3 ResNet blocks each, with subsampling taking place in the fourth and seventh blocks. The ResNet with `[3,3,3]` blocks on CIFAR10 is visualized below.
<center width="100%"><img src="resnet_notation.svg" width="500px"></center>
The three groups operate on the resolutions $32\times32$, $16\times16$ and $8\times8$ respectively. The blocks in orange denote ResNet blocks with downsampling. The same notation is used by many other implementations such as in the [torchvision library](https://pytorch.org/docs/stable/_modules/torchvision/models/resnet.html#resnet18) from PyTorch. Thus, our code looks as follows:
```
class ResNet(nn.Module):

    def __init__(self, num_classes=10, num_blocks=[3,3,3], c_hidden=[16,32,64], act_fn_name="relu", block_name="ResNetBlock", **kwargs):
        """
        Inputs:
            num_classes - Number of classification outputs (10 for CIFAR10)
            num_blocks - List with the number of ResNet blocks to use. The first block of each group uses downsampling, except in the first group.
            c_hidden - List with the hidden dimensionalities in the different blocks. Usually multiplied by 2 the deeper we go.
            act_fn_name - Name of the activation function to use, looked up in "act_fn_by_name"
            block_name - Name of the ResNet block, looked up in "resnet_blocks_by_name"
        """
        super().__init__()
        assert block_name in resnet_blocks_by_name
        self.hparams = SimpleNamespace(num_classes=num_classes,
                                       c_hidden=c_hidden,
                                       num_blocks=num_blocks,
                                       act_fn_name=act_fn_name,
                                       act_fn=act_fn_by_name[act_fn_name],
                                       block_class=resnet_blocks_by_name[block_name])
        self._create_network()
        self._init_params()

    def _create_network(self):
        c_hidden = self.hparams.c_hidden

        # A first convolution on the original image to scale up the channel size
        if self.hparams.block_class == PreActResNetBlock:  # => Don't apply non-linearity on output
            self.input_net = nn.Sequential(
                nn.Conv2d(3, c_hidden[0], kernel_size=3, padding=1, bias=False)
            )
        else:
            self.input_net = nn.Sequential(
                nn.Conv2d(3, c_hidden[0], kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(c_hidden[0]),
                self.hparams.act_fn()
            )

        # Creating the ResNet blocks
        blocks = []
        for block_idx, block_count in enumerate(self.hparams.num_blocks):
            for bc in range(block_count):
                subsample = (bc == 0 and block_idx > 0)  # Subsample the first block of each group, except the very first one.
                blocks.append(
                    self.hparams.block_class(c_in=c_hidden[block_idx if not subsample else (block_idx-1)],
                                             act_fn=self.hparams.act_fn,
                                             subsample=subsample,
                                             c_out=c_hidden[block_idx])
                )
        self.blocks = nn.Sequential(*blocks)

        # Mapping to classification output
        self.output_net = nn.Sequential(
            nn.AdaptiveAvgPool2d((1,1)),
            nn.Flatten(),
            nn.Linear(c_hidden[-1], self.hparams.num_classes)
        )

    def _init_params(self):
        # Based on our discussion in Tutorial 4, we should initialize the convolutions according to the activation function
        # Fan-out focuses on the gradient distribution, and is commonly used in ResNets
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity=self.hparams.act_fn_name)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        x = self.input_net(x)
        x = self.blocks(x)
        x = self.output_net(x)
        return x
```
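As a sanity check of the grouping logic in `_create_network`, we can reproduce which flat block indices end up subsampling for the `[3,3,3]` configuration (a tiny standalone sketch mirroring the loop above):

```
num_blocks = [3, 3, 3]

subsample_positions = []  # 1-indexed positions of blocks that downsample
flat_idx = 0
for block_idx, block_count in enumerate(num_blocks):
    for bc in range(block_count):
        flat_idx += 1
        # Same condition as in the model: first block of each group, except the very first group
        if bc == 0 and block_idx > 0:
            subsample_positions.append(flat_idx)

print(subsample_positions)  # [4, 7] - the fourth and seventh block subsample
```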
We also need to add the new ResNet class to our model dictionary:
```
model_dict["ResNet"] = ResNet
```
Finally, we can train our ResNet models. One difference to the GoogleNet training is that we explicitly use SGD with Momentum as the optimizer instead of Adam. Adam often leads to slightly worse accuracy on plain, shallow ResNets. It is not 100% clear why Adam performs worse in this context, but one possible explanation is related to ResNet's loss surface. ResNet has been shown to produce smoother loss surfaces than networks without skip connections (see [Li et al., 2018](https://arxiv.org/pdf/1712.09913.pdf) for details). A possible visualization of the loss surface with/without skip connections is below (figure credit - [Li et al.](https://arxiv.org/pdf/1712.09913.pdf)):
<center width="100%"><img src="resnet_loss_surface.svg" style="display: block; margin-left: auto; margin-right: auto;" width="600px"/></center>
The $x$ and $y$ axes show a projection of the parameter space, and the $z$ axis shows the loss values achieved with different parameter values. On smooth surfaces like the one on the right, we may not require the adaptive learning rate that Adam provides. Instead, Adam can get stuck in local optima, while SGD finds the wider minima that tend to generalize better.
Answering this question in detail, however, would require an extra tutorial, because it is not easy to answer. For now, we conclude: for ResNet architectures, consider the optimizer an important hyperparameter, and try training with both Adam and SGD. Let's train the model below with SGD:
```
resnet_model, resnet_results = train_model(model_name="ResNet",
                                           model_hparams={"num_classes": 10,
                                                          "c_hidden": [16,32,64],
                                                          "num_blocks": [3,3,3],
                                                          "act_fn_name": "relu"},
                                           optimizer_name="SGD",
                                           optimizer_hparams={"lr": 0.1,
                                                              "momentum": 0.9,
                                                              "weight_decay": 1e-4})
```
Let's also train the pre-activation ResNet as comparison:
```
resnetpreact_model, resnetpreact_results = train_model(model_name="ResNet",
                                                       model_hparams={"num_classes": 10,
                                                                      "c_hidden": [16,32,64],
                                                                      "num_blocks": [3,3,3],
                                                                      "act_fn_name": "relu",
                                                                      "block_name": "PreActResNetBlock"},
                                                       optimizer_name="SGD",
                                                       optimizer_hparams={"lr": 0.1,
                                                                          "momentum": 0.9,
                                                                          "weight_decay": 1e-4},
                                                       save_name="ResNetPreAct")
```
### Tensorboard log
Similarly to our GoogleNet model, we also have a TensorBoard log for the ResNet model. We can open it below.
```
# Opens tensorboard in notebook. Adjust the path to your CHECKPOINT_PATH! Feel free to change "ResNet" to "ResNetPreAct"
%tensorboard --logdir ../saved_models/tutorial5/tensorboards/ResNet/
```
<center width="100%"><img src="tensorboard_screenshot_ResNet.png" width="1000px"></center>
Feel free to explore the TensorBoard yourself, including the computation graph. In general, we can see that with SGD, the ResNet has a higher training loss than the GoogleNet in the first stage of the training. After reducing the learning rate, however, the model achieves even higher validation accuracies. We compare the precise scores at the end of the notebook.
## DenseNet
[DenseNet](https://arxiv.org/abs/1608.06993) is another architecture for enabling very deep neural networks, taking a slightly different perspective on residual connections. Instead of modeling the difference between layers, DenseNet considers residual connections as a way to reuse features across layers, removing any necessity to learn redundant feature maps. As we go deeper into the network, the model learns abstract features to recognize patterns. However, some complex patterns consist of a combination of abstract features (e.g. hand, face, etc.) and low-level features (e.g. edges, basic color, etc.). To find these low-level features in the deep layers, standard CNNs have to learn to copy such feature maps, which wastes a lot of parameter complexity. DenseNet provides an efficient way of reusing features by having each convolution depend on all previous input features, while adding only a small number of filters to it. See the figure below for an illustration (figure credit - [Huang et al.](https://arxiv.org/abs/1608.06993)):
<center width="100%"><img src="densenet_block.svg" style="display: block; margin-left: auto; margin-right: auto;" width="500px"/></center>
The last layer, called the transition layer, is responsible for reducing the dimensionality of the feature maps in height, width, and channel size. Although these layers technically break the identity backpropagation, there are only a few of them in the network, so they don't affect the gradient flow much.
We split the implementation of the layers in DenseNet into three parts: a `DenseLayer`, a `DenseBlock`, and a `TransitionLayer`. The module `DenseLayer` implements a single layer inside a dense block. It applies a 1x1 convolution for dimensionality reduction, followed by a 3x3 convolution. The output channels are concatenated to the original input and returned. Note that we apply Batch Normalization as the first layer of each block. This allows different layers to work with slightly different activations of the same features, depending on what is needed. Overall, we can implement it as follows:
```
class DenseLayer(nn.Module):

    def __init__(self, c_in, bn_size, growth_rate, act_fn):
        """
        Inputs:
            c_in - Number of input channels
            bn_size - Bottleneck size (factor of growth rate) for the output of the 1x1 convolution. Typically between 2 and 4.
            growth_rate - Number of output channels of the 3x3 convolution
            act_fn - Activation class constructor (e.g. nn.ReLU)
        """
        super().__init__()
        self.net = nn.Sequential(
            nn.BatchNorm2d(c_in),
            act_fn(),
            nn.Conv2d(c_in, bn_size * growth_rate, kernel_size=1, bias=False),
            nn.BatchNorm2d(bn_size * growth_rate),
            act_fn(),
            nn.Conv2d(bn_size * growth_rate, growth_rate, kernel_size=3, padding=1, bias=False)
        )

    def forward(self, x):
        out = self.net(x)
        out = torch.cat([out, x], dim=1)
        return out
```
The module `DenseBlock` summarizes multiple dense layers applied in sequence. Each dense layer takes as input the original input concatenated with all previous layers' feature maps:
```
class DenseBlock(nn.Module):

    def __init__(self, c_in, num_layers, bn_size, growth_rate, act_fn):
        """
        Inputs:
            c_in - Number of input channels
            num_layers - Number of dense layers to apply in the block
            bn_size - Bottleneck size to use in the dense layers
            growth_rate - Growth rate to use in the dense layers
            act_fn - Activation function to use in the dense layers
        """
        super().__init__()
        layers = []
        for layer_idx in range(num_layers):
            layers.append(
                DenseLayer(c_in=c_in + layer_idx * growth_rate,  # Input channels are original plus the feature maps from previous layers
                           bn_size=bn_size,
                           growth_rate=growth_rate,
                           act_fn=act_fn)
            )
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        out = self.block(x)
        return out
```
Finally, the `TransitionLayer` takes as input the final output of a dense block and reduces its channel dimensionality using a 1x1 convolution. To reduce the height and width dimensions, we take a slightly different approach than in ResNet and apply an average pooling with kernel size 2 and stride 2. Since there is no parallel identity connection here, the pooling lets the reduction consider the full 2x2 patch instead of a single value. Besides, it is more parameter-efficient than using a 3x3 convolution with stride 2. Thus, the layer is implemented as follows:
```
class TransitionLayer(nn.Module):

    def __init__(self, c_in, c_out, act_fn):
        super().__init__()
        self.transition = nn.Sequential(
            nn.BatchNorm2d(c_in),
            act_fn(),
            nn.Conv2d(c_in, c_out, kernel_size=1, bias=False),
            nn.AvgPool2d(kernel_size=2, stride=2)  # Average the output for each 2x2 pixel group
        )

    def forward(self, x):
        return self.transition(x)
```
Now we can put everything together and create our DenseNet. To specify the number of layers, we use a similar notation as for ResNets and pass a list of ints representing the number of layers per block. After each dense block except the last one, we apply a transition layer to reduce the dimensionality by a factor of 2.
```
class DenseNet(nn.Module):

    def __init__(self, num_classes=10, num_layers=[6,6,6,6], bn_size=2, growth_rate=16, act_fn_name="relu", **kwargs):
        super().__init__()
        self.hparams = SimpleNamespace(num_classes=num_classes,
                                       num_layers=num_layers,
                                       bn_size=bn_size,
                                       growth_rate=growth_rate,
                                       act_fn_name=act_fn_name,
                                       act_fn=act_fn_by_name[act_fn_name])
        self._create_network()
        self._init_params()

    def _create_network(self):
        c_hidden = self.hparams.growth_rate * self.hparams.bn_size  # The start number of hidden channels

        # A first convolution on the original image to scale up the channel size
        self.input_net = nn.Sequential(
            nn.Conv2d(3, c_hidden, kernel_size=3, padding=1)  # No batch norm or activation function, as this is done inside the dense layers
        )

        # Creating the dense blocks, eventually including transition layers
        blocks = []
        for block_idx, num_layers in enumerate(self.hparams.num_layers):
            blocks.append(
                DenseBlock(c_in=c_hidden,
                           num_layers=num_layers,
                           bn_size=self.hparams.bn_size,
                           growth_rate=self.hparams.growth_rate,
                           act_fn=self.hparams.act_fn)
            )
            c_hidden = c_hidden + num_layers * self.hparams.growth_rate  # Overall output of the dense block
            if block_idx < len(self.hparams.num_layers)-1:  # Don't apply transition layer on last block
                blocks.append(
                    TransitionLayer(c_in=c_hidden,
                                    c_out=c_hidden // 2,
                                    act_fn=self.hparams.act_fn))
                c_hidden = c_hidden // 2
        self.blocks = nn.Sequential(*blocks)

        # Mapping to classification output
        self.output_net = nn.Sequential(
            nn.BatchNorm2d(c_hidden),  # The features have not passed a non-linearity until here.
            self.hparams.act_fn(),
            nn.AdaptiveAvgPool2d((1,1)),
            nn.Flatten(),
            nn.Linear(c_hidden, self.hparams.num_classes)
        )

    def _init_params(self):
        # Based on our discussion in Tutorial 4, we should initialize the convolutions according to the activation function
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, nonlinearity=self.hparams.act_fn_name)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def forward(self, x):
        x = self.input_net(x)
        x = self.blocks(x)
        x = self.output_net(x)
        return x
```
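To follow how the channel count evolves through `_create_network`, here is the same bookkeeping written out for the default hyperparameters (`num_layers=[6,6,6,6]`, `bn_size=2`, `growth_rate=16`) as a standalone sketch:

```
num_layers, bn_size, growth_rate = [6, 6, 6, 6], 2, 16

c_hidden = growth_rate * bn_size  # stem output: 32 channels
trace = [c_hidden]
for block_idx, n in enumerate(num_layers):
    c_hidden += n * growth_rate          # each dense block adds n * growth_rate channels
    if block_idx < len(num_layers) - 1:
        c_hidden //= 2                   # transition layer halves the channel count
    trace.append(c_hidden)

print(trace)  # [32, 64, 80, 88, 184] - the final 184 channels feed the linear classifier
```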
Let's also add the DenseNet to our model dictionary:
```
model_dict["DenseNet"] = DenseNet
```
Lastly, we train our network. In contrast to ResNet, DenseNet does not show any issues with Adam, and hence we train it with this optimizer. The other hyperparameters are chosen to result in a network with a similar parameter size as the ResNet and GoogleNet. Commonly, when designing very deep networks, DenseNet is more parameter efficient than ResNet while achieving a similar or even better performance.
```
densenet_model, densenet_results = train_model(model_name="DenseNet",
model_hparams={"num_classes": 10,
"num_layers": [6,6,6,6],
"bn_size": 2,
"growth_rate": 16,
"act_fn_name": "relu"},
optimizer_name="Adam",
optimizer_hparams={"lr": 1e-3,
"weight_decay": 1e-4})
```
### Tensorboard log
Finally, we also have another TensorBoard for the DenseNet training. We take a look at it below:
```
# Opens tensorboard in notebook. Adjust the path to your CHECKPOINT_PATH!
%tensorboard --logdir ../saved_models/tutorial5/tensorboards/DenseNet/
```
<center width="100%"><img src="tensorboard_screenshot_DenseNet.png" width="1000px"></center>
The overall course of the validation accuracy and training loss resembles the training of GoogleNet, which is also related to training the network with Adam. Feel free to explore the training metrics yourself.
## Conclusion and Comparison
After discussing each model separately, and training all of them, we can finally compare them. First, let's organize the results of all models in a table:
```
%%html
<!-- Some HTML code to increase font size in the following table -->
<style>
th {font-size: 120%;}
td {font-size: 120%;}
</style>
```

```
import tabulate
from IPython.display import display, HTML

all_models = [
    ("GoogleNet", googlenet_results, googlenet_model),
    ("ResNet", resnet_results, resnet_model),
    ("ResNetPreAct", resnetpreact_results, resnetpreact_model),
    ("DenseNet", densenet_results, densenet_model)
]
table = [[model_name,
          "%4.2f%%" % (100.0*model_results["val"]),
          "%4.2f%%" % (100.0*model_results["test"]),
          "{:,}".format(sum([np.prod(p.shape) for p in model.parameters()]))]
         for model_name, model_results, model in all_models]
display(HTML(tabulate.tabulate(table, tablefmt='html', headers=["Model", "Val Accuracy", "Test Accuracy", "Num Parameters"])))
```
First of all, we see that all models are performing reasonably well. Simpler models, like the ones you implemented in the practical, achieve considerably lower performance, which is attributable to the architecture design choices, besides the lower number of parameters. GoogleNet obtains the lowest performance on the validation and test set, although it is very close to DenseNet. A proper hyperparameter search over all the channel sizes in GoogleNet would likely improve the accuracy of the model to a similar level, but this is also expensive given the large number of hyperparameters. ResNet outperforms both DenseNet and GoogleNet by more than 1% on the validation set, while there is a minor difference between the two versions, original and pre-activation. We can conclude that for shallow networks, the placement of the activation function does not seem to be crucial, although papers have reported the contrary for very deep networks (e.g. [He et al.](https://arxiv.org/abs/1603.05027)).
In general, we can conclude that ResNet is a simple but powerful architecture. If we applied the models to more complex tasks with larger images and more layers inside the networks, we would likely see a bigger gap between GoogleNet and skip-connection architectures like ResNet and DenseNet. A comparison with deeper models on CIFAR10 can be found, for example, [here](https://github.com/kuangliu/pytorch-cifar). Interestingly, DenseNet outperforms the original ResNet in their setup, but comes closely behind the Pre-Activation ResNet. The best model, a Dual Path Network ([Chen et al.](https://arxiv.org/abs/1707.01629)), is actually a combination of ResNet and DenseNet, showing that both offer different advantages.
### Which model should I choose for my task?
We have reviewed four different models. So, which one should we choose if we are given a new task? Usually, starting with a ResNet is a good idea, given its superior performance on the CIFAR dataset and its simple implementation. Besides, for the parameter count we have chosen here, ResNet is the fastest, as DenseNet and GoogleNet have many more layers that are applied in sequence in our primitive implementation. However, if you have a really difficult task, such as semantic segmentation on HD images, more complex variants of ResNet and DenseNet are recommended.
### Previous days
* [Day 1: Handling missing values](https://www.kaggle.com/rtatman/data-cleaning-challenge-handling-missing-values)
* [Day 2: Scaling and normalization](https://www.kaggle.com/rtatman/data-cleaning-challenge-scale-and-normalize-data)
___
Welcome to day 3 of the 5-Day Data Challenge! Today, we're going to work with dates. To get started, click the blue "Fork Notebook" button in the upper, right hand corner. This will create a private copy of this notebook that you can edit and play with. Once you're finished with the exercises, you can choose to make your notebook public to share with others. :)
> **Your turn!** As we work through this notebook, you'll see some notebook cells (blocks of either code or text) that have "Your Turn!" written in them. These are exercises for you to do to help cement your understanding of the concepts we're talking about. Once you've written the code to answer a specific question, you can run the code by clicking inside the cell (box with code in it) with the code you want to run and then hit CTRL + ENTER (CMD + ENTER on a Mac). You can also click in a cell and then click on the right "play" arrow to the left of the code. If you want to run all the code in your notebook, you can use the double, "fast forward" arrows at the bottom of the notebook editor.
Here's what we're going to do today:
* [Get our environment set up](#Get-our-environment-set-up)
* [Check the data type of our date column](#Check-the-data-type-of-our-date-column)
* [Convert our date columns to datetime](#Convert-our-date-columns-to-datetime)
* [Select just the day of the month from our column](#Select-just-the-day-of-the-month-from-our-column)
* [Plot the day of the month to check the date parsing](#Plot-the-day-of-the-month-to-check-the-date-parsing)
Let's get started!
# Get our environment set up
________
The first thing we'll need to do is load in the libraries and datasets we'll be using. For today, we'll be working with two datasets: one containing information on earthquakes that occurred between 1965 and 2016, and another that contains information on landslides that occurred between 2007 and 2016.
> **Important!** Make sure you run this cell yourself or the rest of your code won't work!
```
# modules we'll use
import pandas as pd
import numpy as np
import seaborn as sns
import datetime
# read in our data
earthquakes = pd.read_csv("../input/earthquake-database/database.csv")
landslides = pd.read_csv("../input/landslide-events/catalog.csv")
volcanos = pd.read_csv("../input/volcanic-eruptions/database.csv")
# set seed for reproducibility
np.random.seed(0)
```
Now we're ready to look at some dates! (If you like, you can take this opportunity to take a look at some of the data.)
# Check the data type of our date column
___
For this part of the challenge, I'll be working with the `date` column from the `landslides` dataframe. The very first thing I'm going to do is take a peek at the first few rows to make sure it actually looks like it contains dates.
```
# print the first few rows of the date column
print(landslides['date'].head())
```
Yep, those are dates! But just because I, a human, can tell that these are dates doesn't mean that Python knows that they're dates. Notice that at the bottom of the output of `head()`, you can see that it says that the data type of this column is "object".
> Pandas uses the "object" dtype for storing various types of data, but most often when you see a column with the dtype "object" it will have strings in it.
If you check the pandas dtype documentation [here](http://pandas.pydata.org/pandas-docs/stable/basics.html#dtypes), you'll notice that there's also a specific `datetime64` dtype. Because the dtype of our column is `object` rather than `datetime64`, we can tell that Python doesn't know that this column contains dates.
We can also look at just the dtype of our column without printing the first few rows if we like:
```
# check the data type of our date column
landslides['date'].dtype
```
You may have to check the [numpy documentation](https://docs.scipy.org/doc/numpy-1.12.0/reference/generated/numpy.dtype.kind.html#numpy.dtype.kind) to match the letter code to the dtype of the object. "O" is the code for "object", so we can see that these two methods give us the same information.
```
# Your turn! Check the data type of the Date column in the earthquakes dataframe
# (note the capital 'D' in date!)
earthquakes['Date'].dtype
```
# Convert our date columns to datetime
___
Now that we know that our date column isn't being recognized as a date, it's time to convert it so that it *is* recognized as a date. This is called "parsing dates" because we're taking in a string and identifying its component parts.
We can tell pandas what the format of our dates is with a guide called a ["strftime directive", which you can find more information on at this link](http://strftime.org/). The basic idea is that you need to point out which parts of the date are where and what punctuation is between them. There are [lots of possible parts of a date](http://strftime.org/), but the most common are `%d` for day, `%m` for month, `%y` for a two-digit year and `%Y` for a four-digit year.
Some examples:
* 1/17/07 has the format "%m/%d/%y"
* 17-1-2007 has the format "%d-%m-%Y"
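The same strftime directives also work with Python's built-in `datetime.strptime`, which can be a handy way to sanity-check a format string before handing it to pandas:

```
from datetime import datetime

# Both examples above parse to the same date once the format string matches
print(datetime.strptime("1/17/07", "%m/%d/%y"))    # 2007-01-17 00:00:00
print(datetime.strptime("17-1-2007", "%d-%m-%Y"))  # 2007-01-17 00:00:00
```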
Looking back up at the head of the `date` column in the landslides dataset, we can see that it's in the format "month/day/two-digit year", so we can use the same syntax as the first example to parse our dates:
```
# create a new column, date_parsed, with the parsed dates
landslides['date_parsed'] = pd.to_datetime(landslides['date'], format = "%m/%d/%y")
```
Now when I check the first few rows of the new column, I can see that the dtype is `datetime64`. I can also see that my dates have been slightly rearranged so that they fit the default order of datetime objects (year-month-day).
```
# print the first few rows
landslides['date_parsed'].head()
```
Now that our dates are parsed correctly, we can interact with them in useful ways.
```
# Your turn! Create a new column, date_parsed, in the earthquakes
# dataset that has correctly parsed dates in it. (Don't forget to
# double-check that the dtype is correct!)

# earthquakes['parsed_Date'] = pd.to_datetime(earthquakes['Date'], format="%m/%d/%Y")
# The line above raises an error saying that some dates don't match the format.
# Run the following lines of code to see which rows caused the error:
mask = pd.to_datetime(earthquakes['Date'], errors='coerce', format="%m/%d/%Y").isnull()
print(earthquakes['Date'][mask])

# The corrected line of code is below
earthquakes['parsed_Date'] = pd.to_datetime(earthquakes['Date'], errors='coerce', format="%m/%d/%Y")
```
# Select just the day of the month from our column
___
"Ok, Rachael," you may be saying at this point, "This messing around with data types is fine, I guess, but what's the *point*?" To answer your question, let's try to get information on the day of the month that a landslide occurred on from the original "date" column, which has an "object" dtype:
```
# try to get the day of the month from the date column
#day_of_month_landslides = landslides['date'].dt.day
```
We got an error! The important part to look at here is the part at the very end that says `AttributeError: Can only use .dt accessor with datetimelike values`. We're getting this error because the `.dt` accessor doesn't know how to deal with a column with the dtype "object". Even though our dataframe has dates in it, because they haven't been parsed we can't interact with them in a useful way.
Luckily, we have a column that we parsed earlier, and that lets us get the day of the month out no problem:
```
# get the day of the month from the date_parsed column
day_of_month_landslides = landslides['date_parsed'].dt.day
# Your turn! get the day of the month from the date_parsed column
day_of_month_earthquakes = earthquakes['parsed_Date'].dt.day
```
# Plot the day of the month to check the date parsing
___
One of the biggest dangers in parsing dates is mixing up the months and days. The to_datetime() function does have very helpful error messages, but it doesn't hurt to double-check that the days of the month we've extracted make sense.
To do this, let's plot a histogram of the days of the month. We expect it to have values between 1 and 31 and, since there's no reason to suppose the landslides are more common on some days of the month than others, a relatively even distribution. (With a dip on 31 because not all months have 31 days.) Let's see if that's the case:
```
# remove na's
day_of_month_landslides = day_of_month_landslides.dropna()
# plot the day of the month
sns.distplot(day_of_month_landslides, kde=False, bins=31)
```
Yep, it looks like we did parse our dates correctly & this graph makes good sense to me. Why don't you take a turn checking the dates you parsed earlier?
```
# Your turn! Plot the days of the month from your
# earthquake dataset and make sure they make sense.
day_of_month_earthquakes = day_of_month_earthquakes.dropna()
sns.distplot(day_of_month_earthquakes, kde=False, bins=31)
```
And that's it for today! If you have any questions, be sure to post them in the comments below or [on the forums](https://www.kaggle.com/questions-and-answers).
Remember that your notebook is private by default, and in order to share it with other people or ask for help with it, you'll need to make it public. First, you'll need to save a version of your notebook that shows your current work by hitting the "Commit & Run" button. (Your work is saved automatically, but versioning your work lets you go back and look at what it was like at the point you saved it. It also lets you share a nice compiled notebook instead of just the raw code.) Then, once your notebook is finished running, you can go to the Settings tab in the panel to the left (you may have to expand it by hitting the [<] button next to the "Commit & Run" button) and set the "Visibility" dropdown to "Public".
# More practice!
___
If you're interested in graphing time series, [check out this Learn tutorial](https://www.kaggle.com/residentmario/time-series-plotting-optional).
You can also look into passing columns that you know have dates in them to the `parse_dates` argument in `read_csv`. (The documentation [is here](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html).) Do note that this method can be very slow, but depending on your needs it may sometimes be handy to use.
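As a minimal sketch of that `parse_dates` approach (using a tiny inline CSV with made-up dates in place of a real file):

```python
import io
import pandas as pd

# A tiny inline CSV standing in for a real file (hypothetical data)
csv_data = io.StringIO("id,date\n1,3/2/07\n2,3/22/07\n3,4/6/07\n")

# parse_dates tells read_csv to parse the listed columns as datetimes on load
df = pd.read_csv(csv_data, parse_dates=["date"])

print(df["date"].dtype)            # datetime64[ns]
print(df["date"].dt.day.tolist())  # [2, 22, 6]
```

Because the column arrives already parsed, the `.dt` accessor works on it immediately, with no separate `to_datetime()` step.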
For an extra challenge, you can try parsing the column `Last Known Eruption` from the `volcanos` dataframe. This column contains a mixture of text ("Unknown") and years both before the common era (BCE, also known as BC) and in the common era (CE, also known as AD).
```
volcanos['Last Known Eruption'].sample(5)
```
# "Deep Learning from Scratch: Book Summary"
> "A summary of the book Deep Learning from Scratch: perceptron, multi-layer perceptron, activation functions, backpropagation"
- toc: true
- badges: true
- comments: true
- categories: [deeplearning,backpropagation]
- image: images/chart-preview.png
# A brief summary of the contents of the book *Deep Learning from Scratch*.
## Chapter 2: Perceptron
* Implement the AND, NAND, and OR gates.
### AND gate
```
import numpy as np
# AND gate with weights and a bias
def AND(x1, x2):
x = np.array([x1, x2])
w = np.array([0.5, 0.5])
    b = -0.7  # bias: the gate outputs 1 when w*x + b exceeds 0, and 0 otherwise
tmp = np.sum(w*x) + b
if tmp <= 0:
return 0
else:
return 1
print(AND(0, 0),
AND(1, 0),
AND(0, 1),
AND(1, 1))
```
### NAND gate
```
# NAND gate
def NAND(x1, x2):
x = np.array([x1, x2])
    w = np.array([-0.5, -0.5])  # only w and b differ from AND
b = 0.7
tmp = np.sum(w*x) + b
if tmp <= 0:
return 0
else:
return 1
print(NAND(0, 0),
NAND(1, 0),
NAND(0, 1),
NAND(1, 1))
```
### OR gate
```
# OR gate
def OR(x1, x2):
x = np.array([x1, x2])
    w = np.array([0.5, 0.5])  # only w and b differ from AND
b = -0.2
tmp = np.sum(w*x) + b
if tmp <= 0:
return 0
else:
return 1
print(OR(0, 0),
OR(1, 0),
OR(0, 1),
OR(1, 1))
```
### XOR gate
```
def XOR(x1, x2):
s1 = NAND(x1, x2)
s2 = OR(x1, x2)
y = AND(s1, s2)
return y
print(XOR(0, 0),
XOR(1, 0),
XOR(0, 1),
XOR(1, 1))
```
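XOR itself is not linearly separable, which is why it needs this two-layer combination of gates. A self-contained recap (compactly redefining the gates from above via a small factory function) that checks the full truth table:

```python
import numpy as np

def make_gate(w1, w2, b):
    """Build a perceptron gate from two weights and a bias."""
    def gate(x1, x2):
        tmp = np.sum(np.array([w1, w2]) * np.array([x1, x2])) + b
        return 0 if tmp <= 0 else 1
    return gate

AND = make_gate(0.5, 0.5, -0.7)
NAND = make_gate(-0.5, -0.5, 0.7)
OR = make_gate(0.5, 0.5, -0.2)

def XOR(x1, x2):
    # XOR needs two layers of gates: no single linear threshold separates it
    return AND(NAND(x1, x2), OR(x1, x2))

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, '->', XOR(x1, x2))  # 0, 1, 1, 0
```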
### Checking with a scatter plot
```
import matplotlib.pylab as plt
xs = np.arange(-1.2, 1.2, 0.1)
ys = np.arange(-1.2, 1.2, 0.1)
k=list()
j=list()
for x in xs:
for y in ys:
        z = AND(x, y)  # swap in OR, NAND, or XOR here to plot each gate
k.append(z)
j.append([x, y, z])
print(type(j), len(j))
nj=np.asarray(j)
#hide
nj
print(type(nj), len(nj), nj.ndim, nj.shape)
nx= nj[:,0]
ny= nj[:,1]
nz= nj[:,2]
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
fig = plt.figure(figsize=(10,6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(nx, ny, nz, c=nz)
plt.show()
```
### Surface plots
```
fig = plt.figure(figsize=(10,6))
ax = plt.axes(projection='3d')
pnt4d=ax.plot_trisurf(nx, ny, nz, cmap='viridis')
ax.set_xlabel('x axis')
ax.set_ylabel('y axis')
ax.set_zlabel('z axis')
# fig.colorbar(pnt4d, shrink=0.5, aspect=5)
cbar = plt.colorbar(pnt4d)
plt.show()
k=list()
j=list()
for x in xs:
for y in ys:
        z = OR(x, y)  # swap in AND, NAND, or XOR here to plot each gate
k.append(z)
j.append([x, y, z])
nj=np.asarray(j)
nx= nj[:,0]
ny= nj[:,1]
nz= nj[:,2]
fig = plt.figure(figsize=(10,6))
ax = plt.axes(projection='3d')
pnt4d=ax.plot_trisurf(nx, ny, nz, cmap='viridis')
ax.set_xlabel('x axis')
ax.set_ylabel('y axis')
ax.set_zlabel('z axis')
# fig.colorbar(pnt4d, shrink=0.5, aspect=5)
cbar = plt.colorbar(pnt4d)
plt.show()
k=list()
j=list()
for x in xs:
for y in ys:
        z = NAND(x, y)  # swap in AND, OR, or XOR here to plot each gate
k.append(z)
j.append([x, y, z])
nj=np.asarray(j)
nx= nj[:,0]
ny= nj[:,1]
nz= nj[:,2]
fig = plt.figure(figsize=(10,6))
ax = plt.axes(projection='3d')
pnt4d=ax.plot_trisurf(nx, ny, nz, cmap='viridis')
ax.set_xlabel('x axis')
ax.set_ylabel('y axis')
ax.set_zlabel('z axis')
# fig.colorbar(pnt4d, shrink=0.5, aspect=5)
cbar = plt.colorbar(pnt4d)
plt.show()
k=list()
j=list()
for x in xs:
for y in ys:
        z = XOR(x, y)  # swap in AND, OR, or NAND here to plot each gate
k.append(z)
j.append([x, y, z])
nj=np.asarray(j)
nx= nj[:,0]
ny= nj[:,1]
nz= nj[:,2]
fig = plt.figure(figsize=(10,6))
ax = plt.axes(projection='3d')
pnt4d=ax.plot_trisurf(nx, ny, nz, cmap='viridis')
ax.set_xlabel('x axis')
ax.set_ylabel('y axis')
ax.set_zlabel('z axis')
# fig.colorbar(pnt4d, shrink=0.5, aspect=5)
cbar = plt.colorbar(pnt4d)
plt.show()
```
### Activation function: a function that converts the weighted sum of input signals into an output signal
* $a = b + w_1 x_1 + w_2 x_2$
* $y = h(a)$
* $a$ is the weighted sum of the input signals, $h()$ is the activation function, and $y$ is the output.
* Replacing the step function with a different activation function is the key that opens the door from perceptrons to neural networks, so let's look at a few of these activation functions.
```
# Step function
def step_function(x):
    y = x > 0
    return y.astype(int)  # convert y from boolean to int
# Sigmoid function
def sigmoid(x):
    return 1/(1+np.exp(-x))
# ReLU: returns the input as-is when it is greater than 0, and 0 otherwise
def relu(x):
    return np.maximum(0, x)
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,6))
# ax = fig.add_subplot(1,1,1)
x = np.arange(-5.0, 5.0, 0.1)
ystep = step_function(x)
ysig = sigmoid(x)
yrelu = relu(x)
plt.plot(x, ystep, '--', label='Step Function')
plt.plot(x, ysig, ':', label='Sigmoid Function')
plt.plot(x, yrelu, label='ReLU Function')
plt.ylim(-0.1, 1.1)
plt.grid(True)
plt.legend()
plt.show()
```
### Signal propagation from the input layer (layer 0) to the hidden layer (layer 1)
```
X = np.array([1.0, 0.5])
W1 = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
B1 = np.array([0.1, 0.2, 0.3])
print(X.shape) # (2,)
print(W1.shape) # (2, 3)
print(B1.shape) # (3,)
A1 = np.dot(X,W1) + B1 # (2,) (2,3) = (3,)
A1, A1.shape
```
In the hidden layer we denote the weighted sum (the weighted signals plus the bias) by $a$, and the signal transformed by the activation function $h()$ (here the sigmoid) by $z$. Implemented in Python:
```
Z1 = sigmoid(A1)
print(A1) # [0.3 0.7 1.1]
print(Z1) # [0.57444252 0.66818777 0.75026011]
```
### Signal propagation from the first hidden layer (layer 1) to the second hidden layer (layer 2)
```
W2 = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
B2 = np.array([0.1, 0.2])
print(Z1.shape) # (3,)
print(W2.shape) # (3, 2)
print(B2.shape) # (2,)
A2 = np.dot(Z1,W2) + B2 # (3,) (3,2) = (2,)
A2, A2.shape
Z2=sigmoid(A2)
Z2
```
### Signal propagation from the second hidden layer (layer 2) to the output layer
* For the output layer's activation function, it is common to use the identity function (shown below) for regression,
* the sigmoid for binary classification,
* and the softmax for multi-class classification.
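The softmax mentioned above is not implemented in this notebook; a numerically stable sketch (subtracting the maximum before exponentiating, which leaves the result unchanged but avoids overflow) looks like:

```python
import numpy as np

def softmax(a):
    # Subtracting max(a) avoids overflow; softmax is invariant to this shift
    c = np.max(a)
    exp_a = np.exp(a - c)
    return exp_a / np.sum(exp_a)

a = np.array([0.3, 2.9, 4.0])
y = softmax(a)
print(y)          # probabilities for each class
print(np.sum(y))  # 1.0
```

The outputs are non-negative and sum to 1, so they can be read as class probabilities.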
```
def identity_function(x):
    return x
W3 = np.array([[0.1, 0.3], [0.2, 0.4]])
B3 = np.array([0.1, 0.2])
A3 = np.dot(Z2,W3) + B3 # (2,) (2,2) = (2,)
print(A3, A3.shape)
Y = identity_function(A3) # or Y = A3
print(Y)
def init_network():
network = {}
network['W1'] = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
network['b1'] = np.array([0.1, 0.2, 0.3])
network['W2'] = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
network['b2'] = np.array([0.1, 0.2])
network['W3'] = np.array([[0.1, 0.3], [0.2, 0.4]])
network['b3'] = np.array([0.1, 0.2])
return network
def forward(network, x):
W1, W2, W3 = network['W1'], network['W2'], network['W3']
b1, b2, b3 = network['b1'], network['b2'], network['b3']
a1 = np.dot(x, W1) + b1
z1 = sigmoid(a1)
a2 = np.dot(z1, W2) + b2
z2 = sigmoid(a2)
a3 = np.dot(z2, W3) + b3
y = identity_function(a3)
return y
network = init_network()
print(type(network))
print(network)
x = np.array([1.0, 0.5])
y = forward(network, x)
print(y)
```
# Convolutional Networks
So far we have worked with deep fully-connected networks, using them to explore different optimization strategies and network architectures. Fully-connected networks are a good testbed for experimentation because they are very computationally efficient, but in practice all state-of-the-art results use convolutional networks instead.
First you will implement several layer types that are used in convolutional networks. You will then use these layers to train a convolutional network on the CIFAR-10 dataset.
```
# As usual, a bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.cnn import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient_array, eval_numerical_gradient
from cs231n.layers import *
from cs231n.fast_layers import *
from cs231n.solver import Solver
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
# Load the (preprocessed) CIFAR10 data.
data = get_CIFAR10_data()
for k, v in data.items():
print('%s: ' % k, v.shape)
```
# Convolution: Naive forward pass
The core of a convolutional network is the convolution operation. In the file `cs231n/layers.py`, implement the forward pass for the convolution layer in the function `conv_forward_naive`.
You don't have to worry too much about efficiency at this point; just write the code in whatever way you find most clear.
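One way the naive forward pass can be structured (a sketch under this assignment's usual conventions -- zero padding, integer stride, and an unashamedly slow quadruple loop -- rather than the reference solution):

```python
import numpy as np

def conv_forward_naive_sketch(x, w, b, conv_param):
    """Naive convolution forward pass.

    x: input data, shape (N, C, H, W)
    w: filter weights, shape (F, C, HH, WW)
    b: biases, shape (F,)
    """
    stride, pad = conv_param['stride'], conv_param['pad']
    N, C, H, W = x.shape
    F, _, HH, WW = w.shape
    H_out = 1 + (H + 2 * pad - HH) // stride
    W_out = 1 + (W + 2 * pad - WW) // stride
    # Zero-pad the spatial dimensions only
    x_pad = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((N, F, H_out, W_out))
    for n in range(N):                  # each image
        for f in range(F):              # each filter
            for i in range(H_out):      # each output row
                for j in range(W_out):  # each output column
                    window = x_pad[n, :, i*stride:i*stride+HH, j*stride:j*stride+WW]
                    out[n, f, i, j] = np.sum(window * w[f]) + b[f]
    return out

x = np.linspace(-0.1, 0.5, num=96).reshape(2, 3, 4, 4)
w = np.linspace(-0.2, 0.3, num=144).reshape(3, 3, 4, 4)
b = np.linspace(-0.1, 0.2, num=3)
out = conv_forward_naive_sketch(x, w, b, {'stride': 2, 'pad': 1})
print(out.shape)  # (2, 3, 2, 2)
```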
You can test your implementation by running the following:
```
x_shape = (2, 3, 4, 4)
w_shape = (3, 3, 4, 4)
x = np.linspace(-0.1, 0.5, num=np.prod(x_shape)).reshape(x_shape)
w = np.linspace(-0.2, 0.3, num=np.prod(w_shape)).reshape(w_shape)
b = np.linspace(-0.1, 0.2, num=3)
conv_param = {'stride': 2, 'pad': 1}
out, _ = conv_forward_naive(x, w, b, conv_param)
correct_out = np.array([[[[-0.08759809, -0.10987781],
[-0.18387192, -0.2109216 ]],
[[ 0.21027089, 0.21661097],
[ 0.22847626, 0.23004637]],
[[ 0.50813986, 0.54309974],
[ 0.64082444, 0.67101435]]],
[[[-0.98053589, -1.03143541],
[-1.19128892, -1.24695841]],
[[ 0.69108355, 0.66880383],
[ 0.59480972, 0.56776003]],
[[ 2.36270298, 2.36904306],
[ 2.38090835, 2.38247847]]]])
# Compare your output to ours; difference should be around e-8
print('Testing conv_forward_naive')
print('difference: ', rel_error(out, correct_out))
```
# Aside: Image processing via convolutions
As a fun way to both check your implementation and gain a better understanding of the type of operation that convolutional layers can perform, we will set up an input containing two images and manually set up filters that perform common image processing operations (grayscale conversion and edge detection). The convolution forward pass will apply these operations to each of the input images. We can then visualize the results as a sanity check.
```
from imageio import imread
from PIL import Image
kitten = imread('notebook_images/kitten.jpg')
puppy = imread('notebook_images/puppy.jpg')
# kitten is wide, and puppy is already square
d = kitten.shape[1] - kitten.shape[0]
kitten_cropped = kitten[:, d//2:-d//2, :]
img_size = 200 # Make this smaller if it runs too slow
resized_puppy = np.array(Image.fromarray(puppy).resize((img_size, img_size)))
resized_kitten = np.array(Image.fromarray(kitten_cropped).resize((img_size, img_size)))
x = np.zeros((2, 3, img_size, img_size))
x[0, :, :, :] = resized_puppy.transpose((2, 0, 1))
x[1, :, :, :] = resized_kitten.transpose((2, 0, 1))
# Set up convolutional weights holding 2 filters, each 3x3
w = np.zeros((2, 3, 3, 3))
# The first filter converts the image to grayscale.
# Set up the red, green, and blue channels of the filter.
w[0, 0, :, :] = [[0, 0, 0], [0, 0.3, 0], [0, 0, 0]]
w[0, 1, :, :] = [[0, 0, 0], [0, 0.6, 0], [0, 0, 0]]
w[0, 2, :, :] = [[0, 0, 0], [0, 0.1, 0], [0, 0, 0]]
# Second filter detects horizontal edges in the blue channel.
w[1, 2, :, :] = [[1, 2, 1], [0, 0, 0], [-1, -2, -1]]
# Vector of biases. We don't need any bias for the grayscale
# filter, but for the edge detection filter we want to add 128
# to each output so that nothing is negative.
b = np.array([0, 128])
# Compute the result of convolving each input in x with each filter in w,
# offsetting by b, and storing the results in out.
out, _ = conv_forward_naive(x, w, b, {'stride': 1, 'pad': 1})
def imshow_no_ax(img, normalize=True):
""" Tiny helper to show images as uint8 and remove axis labels """
if normalize:
img_max, img_min = np.max(img), np.min(img)
img = 255.0 * (img - img_min) / (img_max - img_min)
plt.imshow(img.astype('uint8'))
plt.gca().axis('off')
# Show the original images and the results of the conv operation
plt.subplot(2, 3, 1)
imshow_no_ax(puppy, normalize=False)
plt.title('Original image')
plt.subplot(2, 3, 2)
imshow_no_ax(out[0, 0])
plt.title('Grayscale')
plt.subplot(2, 3, 3)
imshow_no_ax(out[0, 1])
plt.title('Edges')
plt.subplot(2, 3, 4)
imshow_no_ax(kitten_cropped, normalize=False)
plt.subplot(2, 3, 5)
imshow_no_ax(out[1, 0])
plt.subplot(2, 3, 6)
imshow_no_ax(out[1, 1])
plt.show()
```
# Convolution: Naive backward pass
Implement the backward pass for the convolution operation in the function `conv_backward_naive` in the file `cs231n/layers.py`. Again, you don't need to worry too much about computational efficiency.
When you are done, run the following to check your backward pass with a numeric gradient check.
```
np.random.seed(231)
x = np.random.randn(4, 3, 5, 5)
w = np.random.randn(2, 3, 3, 3)
b = np.random.randn(2,)
dout = np.random.randn(4, 2, 5, 5)
conv_param = {'stride': 1, 'pad': 1}
dx_num = eval_numerical_gradient_array(lambda x: conv_forward_naive(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_forward_naive(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_forward_naive(x, w, b, conv_param)[0], b, dout)
out, cache = conv_forward_naive(x, w, b, conv_param)
dx, dw, db = conv_backward_naive(dout, cache)
# Your errors should be around e-8 or less.
print('Testing conv_backward_naive function')
print('dx error: ', rel_error(dx, dx_num))
print('dw error: ', rel_error(dw, dw_num))
print('db error: ', rel_error(db, db_num))
```
# Max-Pooling: Naive forward
Implement the forward pass for the max-pooling operation in the function `max_pool_forward_naive` in the file `cs231n/layers.py`. Again, don't worry too much about computational efficiency.
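A sketch of one straightforward way to arrange it (looping over output positions and taking the max over each window; treat this as an illustration rather than the reference solution):

```python
import numpy as np

def max_pool_forward_naive_sketch(x, pool_param):
    """Naive max pooling forward pass for x of shape (N, C, H, W)."""
    ph = pool_param['pool_height']
    pw = pool_param['pool_width']
    stride = pool_param['stride']
    N, C, H, W = x.shape
    H_out = 1 + (H - ph) // stride
    W_out = 1 + (W - pw) // stride
    out = np.zeros((N, C, H_out, W_out))
    for i in range(H_out):
        for j in range(W_out):
            window = x[:, :, i*stride:i*stride+ph, j*stride:j*stride+pw]
            # Max over the pooling window, kept vectorized over N and C
            out[:, :, i, j] = window.max(axis=(2, 3))
    return out

x = np.linspace(-0.3, 0.4, num=96).reshape(2, 3, 4, 4)
out = max_pool_forward_naive_sketch(x, {'pool_height': 2, 'pool_width': 2, 'stride': 2})
print(out.shape)  # (2, 3, 2, 2)
```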
Check your implementation by running the following:
```
x_shape = (2, 3, 4, 4)
x = np.linspace(-0.3, 0.4, num=np.prod(x_shape)).reshape(x_shape)
pool_param = {'pool_width': 2, 'pool_height': 2, 'stride': 2}
out, _ = max_pool_forward_naive(x, pool_param)
correct_out = np.array([[[[-0.26315789, -0.24842105],
[-0.20421053, -0.18947368]],
[[-0.14526316, -0.13052632],
[-0.08631579, -0.07157895]],
[[-0.02736842, -0.01263158],
[ 0.03157895, 0.04631579]]],
[[[ 0.09052632, 0.10526316],
[ 0.14947368, 0.16421053]],
[[ 0.20842105, 0.22315789],
[ 0.26736842, 0.28210526]],
[[ 0.32631579, 0.34105263],
[ 0.38526316, 0.4 ]]]])
# Compare your output with ours. Difference should be on the order of e-8.
print('Testing max_pool_forward_naive function:')
print('difference: ', rel_error(out, correct_out))
```
# Max-Pooling: Naive backward
Implement the backward pass for the max-pooling operation in the function `max_pool_backward_naive` in the file `cs231n/layers.py`. You don't need to worry about computational efficiency.
Check your implementation with numeric gradient checking by running the following:
```
np.random.seed(231)
x = np.random.randn(3, 2, 8, 8)
dout = np.random.randn(3, 2, 4, 4)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
dx_num = eval_numerical_gradient_array(lambda x: max_pool_forward_naive(x, pool_param)[0], x, dout)
out, cache = max_pool_forward_naive(x, pool_param)
dx = max_pool_backward_naive(dout, cache)
# Your error should be on the order of e-12
print('Testing max_pool_backward_naive function:')
print('dx error: ', rel_error(dx, dx_num))
```
# Fast layers
Making convolution and pooling layers fast can be challenging. To spare you the pain, we've provided fast implementations of the forward and backward passes for convolution and pooling layers in the file `cs231n/fast_layers.py`.
The fast convolution implementation depends on a Cython extension; to compile it you need to run the following from the `cs231n` directory:
```bash
python setup.py build_ext --inplace
```
The API for the fast versions of the convolution and pooling layers is exactly the same as the naive versions that you implemented above: the forward pass receives data, weights, and parameters and produces outputs and a cache object; the backward pass receives upstream derivatives and the cache object and produces gradients with respect to the data and weights.
**NOTE:** The fast implementation for pooling will only perform optimally if the pooling regions are non-overlapping and tile the input. If these conditions are not met then the fast pooling implementation will not be much faster than the naive implementation.
You can compare the performance of the naive and fast versions of these layers by running the following:
```
# Rel errors should be around e-9 or less
from cs231n.fast_layers import conv_forward_fast, conv_backward_fast
from time import time
np.random.seed(231)
x = np.random.randn(100, 3, 31, 31)
w = np.random.randn(25, 3, 3, 3)
b = np.random.randn(25,)
dout = np.random.randn(100, 25, 16, 16)
conv_param = {'stride': 2, 'pad': 1}
t0 = time()
out_naive, cache_naive = conv_forward_naive(x, w, b, conv_param)
t1 = time()
out_fast, cache_fast = conv_forward_fast(x, w, b, conv_param)
t2 = time()
print('Testing conv_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('Difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive, dw_naive, db_naive = conv_backward_naive(dout, cache_naive)
t1 = time()
dx_fast, dw_fast, db_fast = conv_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting conv_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('Fast: %fs' % (t2 - t1))
print('Speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
print('dw difference: ', rel_error(dw_naive, dw_fast))
print('db difference: ', rel_error(db_naive, db_fast))
# Relative errors should be close to 0.0
from cs231n.fast_layers import max_pool_forward_fast, max_pool_backward_fast
np.random.seed(231)
x = np.random.randn(100, 3, 32, 32)
dout = np.random.randn(100, 3, 16, 16)
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
t0 = time()
out_naive, cache_naive = max_pool_forward_naive(x, pool_param)
t1 = time()
out_fast, cache_fast = max_pool_forward_fast(x, pool_param)
t2 = time()
print('Testing pool_forward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('difference: ', rel_error(out_naive, out_fast))
t0 = time()
dx_naive = max_pool_backward_naive(dout, cache_naive)
t1 = time()
dx_fast = max_pool_backward_fast(dout, cache_fast)
t2 = time()
print('\nTesting pool_backward_fast:')
print('Naive: %fs' % (t1 - t0))
print('fast: %fs' % (t2 - t1))
print('speedup: %fx' % ((t1 - t0) / (t2 - t1)))
print('dx difference: ', rel_error(dx_naive, dx_fast))
```
# Convolutional "sandwich" layers
Previously we introduced the concept of "sandwich" layers that combine multiple operations into commonly used patterns. In the file `cs231n/layer_utils.py` you will find sandwich layers that implement a few commonly used patterns for convolutional networks. Run the cells below to sanity check they're working.
```
from cs231n.layer_utils import conv_relu_pool_forward, conv_relu_pool_backward
np.random.seed(231)
x = np.random.randn(2, 3, 16, 16)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
pool_param = {'pool_height': 2, 'pool_width': 2, 'stride': 2}
out, cache = conv_relu_pool_forward(x, w, b, conv_param, pool_param)
dx, dw, db = conv_relu_pool_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_pool_forward(x, w, b, conv_param, pool_param)[0], b, dout)
# Relative errors should be around e-8 or less
print('Testing conv_relu_pool')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
from cs231n.layer_utils import conv_relu_forward, conv_relu_backward
np.random.seed(231)
x = np.random.randn(2, 3, 8, 8)
w = np.random.randn(3, 3, 3, 3)
b = np.random.randn(3,)
dout = np.random.randn(2, 3, 8, 8)
conv_param = {'stride': 1, 'pad': 1}
out, cache = conv_relu_forward(x, w, b, conv_param)
dx, dw, db = conv_relu_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda x: conv_relu_forward(x, w, b, conv_param)[0], x, dout)
dw_num = eval_numerical_gradient_array(lambda w: conv_relu_forward(x, w, b, conv_param)[0], w, dout)
db_num = eval_numerical_gradient_array(lambda b: conv_relu_forward(x, w, b, conv_param)[0], b, dout)
# Relative errors should be around e-8 or less
print('Testing conv_relu:')
print('dx error: ', rel_error(dx_num, dx))
print('dw error: ', rel_error(dw_num, dw))
print('db error: ', rel_error(db_num, db))
```
# Three-layer ConvNet
Now that you have implemented all the necessary layers, we can put them together into a simple convolutional network.
Open the file `cs231n/classifiers/cnn.py` and complete the implementation of the `ThreeLayerConvNet` class. Remember you can use the fast/sandwich layers (already imported for you) in your implementation. Run the following cells to help you debug:
## Sanity check loss
After you build a new network, one of the first things you should do is sanity check the loss. When we use the softmax loss, we expect the loss for random weights (and no regularization) to be about `log(C)` for `C` classes. When we add regularization the loss should go up slightly.
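For CIFAR-10's C = 10 classes, that baseline works out to:

```python
import numpy as np

# Random weights make the softmax roughly uniform over the C classes,
# so the expected cross-entropy loss is -log(1/C) = log(C)
C = 10
print(np.log(C))  # ≈ 2.3026
```

If the initial loss is far from this value, check the weight scale and the loss implementation before training.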
```
model = ThreeLayerConvNet()
N = 50
X = np.random.randn(N, 3, 32, 32)
y = np.random.randint(10, size=N)
loss, grads = model.loss(X, y)
print('Initial loss (no regularization): ', loss)
model.reg = 0.5
loss, grads = model.loss(X, y)
print('Initial loss (with regularization): ', loss)
```
## Gradient check
After the loss looks reasonable, use numeric gradient checking to make sure that your backward pass is correct. When you use numeric gradient checking you should use a small amount of artificial data and a small number of neurons at each layer. Note: correct implementations may still have relative errors up to the order of e-2.
```
num_inputs = 2
input_dim = (3, 16, 16)
reg = 0.0
num_classes = 10
np.random.seed(231)
X = np.random.randn(num_inputs, *input_dim)
y = np.random.randint(num_classes, size=num_inputs)
model = ThreeLayerConvNet(num_filters=3, filter_size=3,
input_dim=input_dim, hidden_dim=7,
dtype=np.float64)
loss, grads = model.loss(X, y)
# Errors should be small, but correct implementations may have
# relative errors up to the order of e-2
for param_name in sorted(grads):
f = lambda _: model.loss(X, y)[0]
param_grad_num = eval_numerical_gradient(f, model.params[param_name], verbose=False, h=1e-6)
e = rel_error(param_grad_num, grads[param_name])
print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))
```
## Overfit small data
A nice trick is to train your model with just a few training samples. You should be able to overfit small datasets, which will result in very high training accuracy and comparatively low validation accuracy.
```
np.random.seed(231)
num_train = 100
small_data = {
'X_train': data['X_train'][:num_train],
'y_train': data['y_train'][:num_train],
'X_val': data['X_val'],
'y_val': data['y_val'],
}
model = ThreeLayerConvNet(weight_scale=1e-2)
solver = Solver(model, small_data,
num_epochs=15, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=1)
solver.train()
```
Plotting the loss, training accuracy, and validation accuracy should show clear overfitting:
```
plt.subplot(2, 1, 1)
plt.plot(solver.loss_history, 'o')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.subplot(2, 1, 2)
plt.plot(solver.train_acc_history, '-o')
plt.plot(solver.val_acc_history, '-o')
plt.legend(['train', 'val'], loc='upper left')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.show()
```
## Train the net
By training the three-layer convolutional network for one epoch, you should achieve greater than 40% accuracy on the training set:
```
model = ThreeLayerConvNet(weight_scale=0.001, hidden_dim=500, reg=0.001)
solver = Solver(model, data,
num_epochs=1, batch_size=50,
update_rule='adam',
optim_config={
'learning_rate': 1e-3,
},
verbose=True, print_every=20)
solver.train()
```
## Visualize Filters
You can visualize the first-layer convolutional filters from the trained network by running the following:
```
from cs231n.vis_utils import visualize_grid
grid = visualize_grid(model.params['W1'].transpose(0, 2, 3, 1))
plt.imshow(grid.astype('uint8'))
plt.axis('off')
plt.gcf().set_size_inches(5, 5)
plt.show()
```
# Spatial Batch Normalization
We already saw that batch normalization is a very useful technique for training deep fully-connected networks. As proposed in the original paper (link in `BatchNormalization.ipynb`), batch normalization can also be used for convolutional networks, but we need to tweak it a bit; the modification will be called "spatial batch normalization."
Normally batch-normalization accepts inputs of shape `(N, D)` and produces outputs of shape `(N, D)`, where we normalize across the minibatch dimension `N`. For data coming from convolutional layers, batch normalization needs to accept inputs of shape `(N, C, H, W)` and produce outputs of shape `(N, C, H, W)` where the `N` dimension gives the minibatch size and the `(H, W)` dimensions give the spatial size of the feature map.
If the feature map was produced using convolutions, then we expect every feature channel's statistics, e.g. mean and variance, to be relatively consistent both between different images and between different locations within the same image -- after all, every feature channel is produced by the same convolutional filter! Therefore spatial batch normalization computes a mean and variance for each of the `C` feature channels by computing statistics over the minibatch dimension `N` as well as the spatial dimensions `H` and `W`.
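The training-time forward pass described above can be sketched directly with per-channel statistics (running averages and the cache needed for the backward pass are omitted for brevity):

```python
import numpy as np

def spatial_batchnorm_sketch(x, gamma, beta, eps=1e-5):
    """Training-time spatial batchnorm for x of shape (N, C, H, W)."""
    # One mean/variance per channel, computed over the N, H, W axes
    mu = x.mean(axis=(0, 2, 3), keepdims=True)   # shape (1, C, 1, 1)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Per-channel scale and shift, broadcast over N, H, W
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)

N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
out = spatial_batchnorm_sketch(x, np.ones(C), np.zeros(C))
print(out.mean(axis=(0, 2, 3)))  # ≈ 0 for every channel
print(out.std(axis=(0, 2, 3)))   # ≈ 1 for every channel
```

An equally common implementation reuses a vanilla `batchnorm_forward` by transposing `x` to `(N, H, W, C)` and reshaping to `(N*H*W, C)` before normalizing.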
[1] [Sergey Ioffe and Christian Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", ICML 2015.](https://arxiv.org/abs/1502.03167)
## Spatial batch normalization: forward
In the file `cs231n/layers.py`, implement the forward pass for spatial batch normalization in the function `spatial_batchnorm_forward`. Check your implementation by running the following:
```
np.random.seed(231)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial batch normalization
N, C, H, W = 2, 3, 4, 5
x = 4 * np.random.randn(N, C, H, W) + 10
print('Before spatial batch normalization:')
print(' Shape: ', x.shape)
print(' Means: ', x.mean(axis=(0, 2, 3)))
print(' Stds: ', x.std(axis=(0, 2, 3)))
# Means should be close to zero and stds close to one
gamma, beta = np.ones(C), np.zeros(C)
bn_param = {'mode': 'train'}
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization:')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
# Means should be close to beta and stds close to gamma
gamma, beta = np.asarray([3, 4, 5]), np.asarray([6, 7, 8])
out, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
print('After spatial batch normalization (nontrivial gamma, beta):')
print(' Shape: ', out.shape)
print(' Means: ', out.mean(axis=(0, 2, 3)))
print(' Stds: ', out.std(axis=(0, 2, 3)))
np.random.seed(231)
# Check the test-time forward pass by running the training-time
# forward pass many times to warm up the running averages, and then
# checking the means and variances of activations after a test-time
# forward pass.
N, C, H, W = 10, 4, 11, 12
bn_param = {'mode': 'train'}
gamma = np.ones(C)
beta = np.zeros(C)
for t in range(50):
x = 2.3 * np.random.randn(N, C, H, W) + 13
spatial_batchnorm_forward(x, gamma, beta, bn_param)
bn_param['mode'] = 'test'
x = 2.3 * np.random.randn(N, C, H, W) + 13
a_norm, _ = spatial_batchnorm_forward(x, gamma, beta, bn_param)
# Means should be close to zero and stds close to one, but will be
# noisier than training-time forward passes.
print('After spatial batch normalization (test-time):')
print(' means: ', a_norm.mean(axis=(0, 2, 3)))
print(' stds: ', a_norm.std(axis=(0, 2, 3)))
```
## Spatial batch normalization: backward
In the file `cs231n/layers.py`, implement the backward pass for spatial batch normalization in the function `spatial_batchnorm_backward`. Run the following to check your implementation using a numeric gradient check:
```
np.random.seed(231)
N, C, H, W = 2, 3, 4, 5
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(C)
beta = np.random.randn(C)
dout = np.random.randn(N, C, H, W)
bn_param = {'mode': 'train'}
fx = lambda x: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fg = lambda a: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
fb = lambda b: spatial_batchnorm_forward(x, gamma, beta, bn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
#You should expect errors of magnitudes between 1e-12~1e-06
_, cache = spatial_batchnorm_forward(x, gamma, beta, bn_param)
dx, dgamma, dbeta = spatial_batchnorm_backward(dout, cache)
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
```
# Group Normalization
In the previous notebook, we mentioned that Layer Normalization is an alternative normalization technique that mitigates the batch size limitations of Batch Normalization. However, as the authors of [2] observed, Layer Normalization does not perform as well as Batch Normalization when used with Convolutional Layers:
> With fully connected layers, all the hidden units in a layer tend to make similar contributions to the final prediction, and re-centering and rescaling the summed inputs to a layer works well. However, the assumption of similar contributions is no longer true for convolutional neural networks. The large number of the hidden units whose receptive fields lie near the boundary of the image are rarely turned on and thus have very different statistics from the rest of the hidden units within the same layer.
The authors of [3] propose an intermediary technique. In contrast to Layer Normalization, where you normalize over the entire feature per-datapoint, they suggest a consistent splitting of each per-datapoint feature into G groups, and a per-group per-datapoint normalization instead.

<center>**Visual comparison of the normalization techniques discussed so far (image edited from [3])**</center>
Even though an assumption of equal contribution is still being made within each group, the authors hypothesize that this is not as problematic, as innate grouping arises within features for visual recognition. One example they use to illustrate this is that many high-performance handcrafted features in traditional Computer Vision have terms that are explicitly grouped together. Take for example Histogram of Oriented Gradients [4]-- after computing histograms per spatially local block, each per-block histogram is normalized before being concatenated together to form the final feature vector.
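The grouping idea can be sketched as a reshape, a per-(sample, group) normalization, and a reshape back (training-time forward pass only; an illustration, not the assignment's required interface):

```python
import numpy as np

def spatial_groupnorm_sketch(x, gamma, beta, G, eps=1e-5):
    """Group normalization for x of shape (N, C, H, W); G must divide C."""
    N, C, H, W = x.shape
    # Fold each group's channels and spatial positions together
    xg = x.reshape(N, G, C // G, H, W)
    mu = xg.mean(axis=(2, 3, 4), keepdims=True)   # one mean per (sample, group)
    var = xg.var(axis=(2, 3, 4), keepdims=True)
    x_hat = ((xg - mu) / np.sqrt(var + eps)).reshape(N, C, H, W)
    # gamma and beta are per-channel, shape (1, C, 1, 1)
    return gamma * x_hat + beta

N, C, H, W, G = 2, 6, 4, 5, 2
x = 4 * np.random.randn(N, C, H, W) + 10
out = spatial_groupnorm_sketch(x, np.ones((1, C, 1, 1)), np.zeros((1, C, 1, 1)), G)
print(out.reshape(N * G, -1).mean(axis=1))  # ≈ 0 per group
print(out.reshape(N * G, -1).std(axis=1))   # ≈ 1 per group
```

Unlike batch normalization, the statistics here never mix information across samples, so the result is independent of the batch size.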
You will now implement Group Normalization. Note that this normalization technique was published at ECCV as recently as 2018 -- this is still an active and exciting area of research!
[2] [Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer Normalization." stat 1050 (2016): 21.](https://arxiv.org/pdf/1607.06450.pdf)
[3] [Wu, Yuxin, and Kaiming He. "Group Normalization." arXiv preprint arXiv:1803.08494 (2018).](https://arxiv.org/abs/1803.08494)
[4] [N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In Computer Vision and Pattern Recognition (CVPR), 2005.](https://ieeexplore.ieee.org/abstract/document/1467360/)
## Group normalization: forward
In the file `cs231n/layers.py`, implement the forward pass for group normalization in the function `spatial_groupnorm_forward`. Check your implementation by running the following:
```
np.random.seed(231)
# Check the training-time forward pass by checking means and variances
# of features both before and after spatial group normalization
N, C, H, W = 2, 6, 4, 5
G = 2
x = 4 * np.random.randn(N, C, H, W) + 10
x_g = x.reshape((N*G,-1))
print('Before spatial group normalization:')
print(' Shape: ', x.shape)
print(' Means: ', x_g.mean(axis=1))
print(' Stds: ', x_g.std(axis=1))
# After normalization, per-group means should be close to zero and stds close to one
gamma, beta = np.ones((1,C,1,1)), np.zeros((1,C,1,1))
bn_param = {'mode': 'train'}
out, _ = spatial_groupnorm_forward(x, gamma, beta, G, bn_param)
out_g = out.reshape((N*G,-1))
print('After spatial group normalization:')
print(' Shape: ', out.shape)
print(' Means: ', out_g.mean(axis=1))
print(' Stds: ', out_g.std(axis=1))
```
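The reshaping trick used by the check above is also the heart of the forward pass itself. As a reference point (this is a hedged sketch, not the cs231n solution template -- the helper name and signature are hypothetical), the channel axis can be split into `G` groups and each group normalized per sample:

```python
import numpy as np

def groupnorm_forward_sketch(x, gamma, beta, G, eps=1e-5):
    # x: (N, C, H, W). Split C into G groups and normalize each group per sample.
    N, C, H, W = x.shape
    xg = x.reshape(N, G, C // G, H, W)
    mu = xg.mean(axis=(2, 3, 4), keepdims=True)
    var = xg.var(axis=(2, 3, 4), keepdims=True)
    xhat = ((xg - mu) / np.sqrt(var + eps)).reshape(N, C, H, W)
    # gamma and beta have shape (1, C, 1, 1) and broadcast over N, H, W
    return gamma * xhat + beta

np.random.seed(0)
x = 4 * np.random.randn(2, 6, 4, 5) + 10
out = groupnorm_forward_sketch(x, np.ones((1, 6, 1, 1)), np.zeros((1, 6, 1, 1)), G=2)
print(out.reshape(4, -1).mean(axis=1))  # per-group means, close to zero
```

With `gamma=1` and `beta=0`, each of the `N*G` groups comes out with mean ~0 and std ~1, which is exactly what the check cell verifies.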
## Spatial group normalization: backward
In the file `cs231n/layers.py`, implement the backward pass for spatial group normalization in the function `spatial_groupnorm_backward`. Run the following to check your implementation using a numeric gradient check:
```
np.random.seed(231)
N, C, H, W = 2, 6, 4, 5
G = 2
x = 5 * np.random.randn(N, C, H, W) + 12
gamma = np.random.randn(1,C,1,1)
beta = np.random.randn(1,C,1,1)
dout = np.random.randn(N, C, H, W)
gn_param = {}
fx = lambda x: spatial_groupnorm_forward(x, gamma, beta, G, gn_param)[0]
fg = lambda a: spatial_groupnorm_forward(x, gamma, beta, G, gn_param)[0]
fb = lambda b: spatial_groupnorm_forward(x, gamma, beta, G, gn_param)[0]
dx_num = eval_numerical_gradient_array(fx, x, dout)
da_num = eval_numerical_gradient_array(fg, gamma, dout)
db_num = eval_numerical_gradient_array(fb, beta, dout)
_, cache = spatial_groupnorm_forward(x, gamma, beta, G, gn_param)
dx, dgamma, dbeta = spatial_groupnorm_backward(dout, cache)
# You should expect errors with magnitudes between 1e-12 and 1e-07
print('dx error: ', rel_error(dx_num, dx))
print('dgamma error: ', rel_error(da_num, dgamma))
print('dbeta error: ', rel_error(db_num, dbeta))
```
```
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
from sklearn import preprocessing
from sklearn.metrics import confusion_matrix
from sklearn import svm
import itertools
import matplotlib.pyplot as plt
import matplotlib.mlab as mlab
# Library for the statistic data vizualisation
import seaborn
%matplotlib inline
data = pd.read_csv('../input/creditcard.csv') # Reading the file .csv
df = pd.DataFrame(data)
df.describe()
df_fraud = df[df['Class'] == 1] # Recovery of fraud data
plt.figure(figsize=(15,10))
plt.scatter(df_fraud['Time'], df_fraud['Amount'],color = 'black') # Display fraud amounts according to their time
plt.title('Scatter plot of fraud amounts')
plt.xlabel('Time')
plt.ylabel('Amount')
plt.xlim([0,175000])
plt.ylim([0,2500])
plt.show()
nb_big_fraud = df_fraud[df_fraud['Amount'] > 1000].shape[0] # Recovery of frauds over 1000
print('There are only '+ str(nb_big_fraud) + ' frauds where the amount was bigger than 1000 over ' + str(df_fraud.shape[0]) + ' frauds')
number_fraud = len(data[data.Class == 1])
number_no_fraud = len(data[data.Class == 0])
print('There are only '+ str(number_fraud) + ' frauds in the original dataset, compared with ' + str(number_no_fraud) +' non-fraud transactions.')
print("The accuracy of such a trivial classifier would then be: "+ str((284315-492)/284315)+ ", which is the number of correct classifications over the number of rows to classify")
df_corr = df.corr()
plt.figure(figsize=(15,10))
seaborn.heatmap(df_corr, cmap="YlGnBu") # Displaying the Heatmap
seaborn.set(font_scale=2,style='white')
plt.title('Heatmap correlation')
plt.show()
rank = df_corr['Class'] # Retrieving the correlation coefficients per feature in relation to the feature class
df_rank = pd.DataFrame(rank)
df_rank = np.abs(df_rank).sort_values(by='Class',ascending=False) # Ranking the absolute values of the coefficients
# in descending order
df_rank.dropna(inplace=True) # Removing Missing Data (not a number)
# We separate our data into two groups: a train dataset and a test dataset
# First we build our train dataset
df_train_all = df[0:150000] # We cut the original dataset in two
df_train_1 = df_train_all[df_train_all['Class'] == 1] # We separate the frauds from the non-frauds
df_train_0 = df_train_all[df_train_all['Class'] == 0]
print('In this dataset, we have ' + str(len(df_train_1)) +" frauds so we need to take a similar number of non-frauds")
df_sample=df_train_0.sample(300)
df_train = pd.concat([df_train_1, df_sample]) # We combine the frauds with the non-frauds
df_train = df_train.sample(frac=1) # Then we shuffle our dataset
X_train = df_train.drop(['Time', 'Class'],axis=1) # We drop the features Time (useless), and the Class (label)
y_train = df_train['Class'] # We create our label
X_train = np.asarray(X_train)
y_train = np.asarray(y_train)
df_test_all = df[150000:]
X_test_all = df_test_all.drop(['Time', 'Class'],axis=1)
y_test_all = df_test_all['Class']
X_test_all = np.asarray(X_test_all)
y_test_all = np.asarray(y_test_all)
X_train_rank = df_train[df_rank.index[1:11]] # We take the first ten ranked features
X_train_rank = np.asarray(X_train_rank)
X_test_all_rank = df_test_all[df_rank.index[1:11]]
X_test_all_rank = np.asarray(X_test_all_rank)
y_test_all = np.asarray(y_test_all)
class_names=np.array(['0','1'])
# Function to plot the confusion Matrix
def plot_confusion_matrix(cm, classes,
title='Confusion matrix',
cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
classifier = svm.SVC(kernel='linear') # We set up an SVM classifier with a linear kernel
classifier.fit(X_train, y_train)
prediction_SVM_all = classifier.predict(X_test_all) #And finally, we predict our data test.
cm = confusion_matrix(y_test_all, prediction_SVM_all)
plot_confusion_matrix(cm,class_names)
print('Our criterion gives a result of '
+ str( ( (cm[0][0]+cm[1][1]) / (sum(cm[0]) + sum(cm[1])) + 4 * cm[1][1]/(cm[1][0]+cm[1][1])) / 5))
print('We have detected ' + str(cm[1][1]) + ' frauds / ' + str(cm[1][1]+cm[1][0]) + ' total frauds.')
print('\nSo, the probability to detect a fraud is ' + str(cm[1][1]/(cm[1][1]+cm[1][0])))
print("the accuracy is : "+str((cm[0][0]+cm[1][1]) / (sum(cm[0]) + sum(cm[1]))))
```
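The custom criterion printed above averages the overall accuracy with the fraud-detection rate (recall), weighting recall four times as heavily. A standalone sketch of that computation, using a made-up confusion matrix for illustration:

```python
import numpy as np

def weighted_criterion(cm):
    # cm: 2x2 confusion matrix [[TN, FP], [FN, TP]] as produced by sklearn
    accuracy = (cm[0][0] + cm[1][1]) / cm.sum()
    recall = cm[1][1] / (cm[1][0] + cm[1][1])  # probability of detecting a fraud
    return (accuracy + 4 * recall) / 5

cm = np.array([[84000, 300], [40, 160]])  # hypothetical numbers for illustration
print(weighted_criterion(cm))
```

Weighting recall this heavily reflects the business reality of fraud detection: a missed fraud costs far more than a false alarm.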
# Baseline Model Pipeline
By: Aditya Mengani, Ognjen Sosa, Sanjay Elangovan, Song Park, Sophia Skowronski
**Can we improve on the baseline scores using different encoding, imputing, and scaling schemes?**
- Averaged Logistic Regression accuracy Score: 0.5
- Averaged Linear Regression accuracy score: 0.2045
- Averaged K-Nearest Neighbour accuracy score: 0.6198
- Averaged Naive Bayes accuracy score: 0.649
**`p1_tag` ~ `rank` + `total_funding_usd` + `employee_count` (ordinal) + `country` (nominal) + `category_groups` (nominal)**
```
'''Data analysis'''
import numpy as np
import pandas as pd
import csv
import warnings
import os
import time
import math
import itertools
import statistics
'''Plotting'''
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
'''Stat'''
import statsmodels.api as sm
from scipy.stats import chi2_contingency
'''ML'''
import prince
import category_encoders as ce
from sklearn import metrics, svm, preprocessing, utils
from sklearn.metrics import mean_squared_error, r2_score, f1_score
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
def reduce_mem_usage(df, verbose=True):
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100*(start_mem-end_mem)/start_mem))
return df
```
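For comparison, pandas ships a built-in `pd.to_numeric(..., downcast=...)` that achieves a similar reduction without a custom helper; one difference is that `downcast='float'` stops at `float32`, while `reduce_mem_usage` above can reach `float16`. A minimal sketch (the demo frame is made up):

```python
import numpy as np
import pandas as pd

df_demo = pd.DataFrame({'ints': np.arange(1000, dtype=np.int64),
                        'floats': np.random.rand(1000)})
before = df_demo.memory_usage(deep=True).sum()
df_demo['ints'] = pd.to_numeric(df_demo['ints'], downcast='integer')    # int64 -> int16 here
df_demo['floats'] = pd.to_numeric(df_demo['floats'], downcast='float')  # float64 -> float32
after = df_demo.memory_usage(deep=True).sum()
print('Mem. usage decreased from {} to {} bytes'.format(before, after))
```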
## Reading in data
```
df = pd.read_csv('files/output/baseline.csv')
print('Starting Dataframe Columns:\n\n{}\n'.format(df.columns.to_list()))
# Have industry mapper for 'ind_1'...'ind_46' columns
industries = ['Software', 'Information Technology', 'Internet Services', 'Data and Analytics',
'Sales and Marketing', 'Media and Entertainment', 'Commerce and Shopping',
'Financial Services', 'Apps', 'Mobile', 'Science and Engineering', 'Hardware',
'Health Care', 'Education', 'Artificial Intelligence', 'Professional Services',
'Design', 'Community and Lifestyle', 'Real Estate', 'Advertising',
'Transportation', 'Consumer Electronics', 'Lending and Investments',
'Sports', 'Travel and Tourism', 'Food and Beverage',
'Content and Publishing', 'Consumer Goods', 'Privacy and Security',
'Video', 'Payments', 'Sustainability', 'Events', 'Manufacturing',
'Clothing and Apparel', 'Administrative Services', 'Music and Audio',
'Messaging and Telecommunications', 'Energy', 'Platforms', 'Gaming',
'Government and Military', 'Biotechnology', 'Navigation and Mapping',
'Agriculture and Farming', 'Natural Resources']
industry_map = {industry:'ind_'+str(idx+1) for idx,industry in enumerate(industries)}
# Create a simplified dataframe with the baseline model features
df_simple = df[['p1_tag', 'rank', 'country', 'employee_size', 'category_groups', 'total_funding_usd']]
df_simple = reduce_mem_usage(df_simple)
print('\nEnding Dataframe Columns:\n\n{}'.format(df_simple.columns.to_list()))
print('\nDataframe shape:', df_simple.shape)
del industries, industry_map
from datetime import datetime
###########################
# Pledge 1% Company UUIDs #
###########################
print('*'*100)
p1 = pd.read_csv('files/p1.csv')
print('PLEDGE 1% cols: {}\nSHAPE: {}\n'.format(p1.columns.to_list(), p1.shape))
#################
# Organizations #
#################
print('*'*100)
org = pd.read_csv('files/csv/organizations.csv')
print('ORGANIZATION cols: {}\nSHAPE: {}\n'.format(org.columns.to_list(), org.shape))
# Merge p1 and org dataframes on the organization uuid
df = pd.merge(org.copy(),p1.copy(),how='outer',on='uuid')
# Convert Boolean to binary
df['p1_tag'] = df['p1_tag'].apply(lambda x: 1 if x == True else 0)
p1['p1_tag'] = 1
# Convert employee_count 'unknown' to NaN to get accurate missing value count
df['employee_count'] = df['employee_count'].apply(lambda x: np.NaN if x == 'unknown' else x)
# Review Pandas Profiling Report of dataframe & update columns
df = df[['uuid','name','rank','status','employee_count','total_funding_usd','num_funding_rounds','primary_role','region','country_code','category_list','category_groups_list','founded_on','created_at','updated_at','p1_date','p1_tag']]
##############
# Timestamps #
##############
# Convert to datetime objects
df['p1_date'] = pd.to_datetime(df['p1_date'])
p1['p1_date'] = pd.to_datetime(p1['p1_date'])
# Get OutOfBoundsDatetime error if do not coerce for CB native timestamp columns
df['created_at'] = pd.to_datetime(df['created_at'],errors='coerce').dt.strftime('%Y-%m-%d')
df['updated_at'] = pd.to_datetime(df['updated_at'],errors='coerce').dt.strftime('%Y-%m-%d')
df['founded_on'] = pd.to_datetime(df['founded_on'],errors='coerce')
# Reduce storage for numerical features
df = reduce_mem_usage(df)
# Create new pledge1 dataframe that sorts by chronological order that the company took the pledge
pledge1 = df[df['p1_tag'] == 1].sort_values('p1_date')
#Get age of each company
now = datetime.now().date()
df['founded_on2'] = pd.to_datetime(df['founded_on']).dt.date
df['founded_on2'].fillna(now, inplace = True)
age = []
for i in range (len(df['founded_on'])):
age.append(round(((now - df['founded_on2'][i]).days)/365,3))
age_series = pd.Series(age)
df['age'] = age_series
print(f"There are {df['age'].value_counts().get(0)} entries with no founded_on date. Let's remove these from the dataset.")
df['age'].replace(0, np.nan, inplace=True)  # use np.nan; replace(0, None) would forward-fill instead of setting missing
print(f"Now there are {df['age'].value_counts().get(0)} entries with the value of 0.")
df_simple['age'] = df['age']
# Select equal sample of non-Pledge 1% organizations
df_p1 = df_simple[df_simple['p1_tag']==1]
df_notp1 = df_simple[df_simple['p1_tag']==0].sample(n=df_p1.shape[0], replace=False)
df_model = pd.concat([df_p1, df_notp1]).reset_index(drop=True)
df_model = reduce_mem_usage(df_model)
# Create variable for each feature type: categorical and numerical
numeric_features = df_model.select_dtypes(include=['int8', 'int16', 'int32', 'int64', 'float16', 'float32','float64']).drop(['p1_tag'], axis=1).columns
categorical_features = df_model.select_dtypes(include=['object']).columns
print('Numeric features:', numeric_features.to_list())
print('Categorical features:', categorical_features.to_list())
X = df_model.drop('p1_tag', axis=1)
y = df_model['p1_tag']
y = preprocessing.LabelEncoder().fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print('Training data shape:', X_train.shape)
print('Train label shape:', y_train.shape)
print('Test data shape:', X_test.shape)
print('Test label shape:', y_test.shape)
```
#### Run through pipeline to determine best categorical feature encoder
From: <a href='https://towardsdatascience.com/an-easier-way-to-encode-categorical-features-d840ff6b3900'>An Easier Way to Encode Categorical Features</a>
```
results = {}
classifier_list = []
LRR = LogisticRegression(max_iter=10000, tol=0.1)
KNN = KNeighborsClassifier(n_neighbors=30, p=1, leaf_size=25)
BNB = BernoulliNB()
GNB = GaussianNB()
classifier_list.append(('LRR', LRR, {'classifier__C': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0, 100.0, 1000]}))
classifier_list.append(('KNN', KNN, {}))
classifier_list.append(('BNB', BNB, {'classifier__alpha': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0]}))
classifier_list.append(('GNB', GNB, {'classifier__var_smoothing': [0.0001, 0.001, 0.01, 0.1, 1.0, 10.0]}))
#classifier_list.append(('SVM', svm.SVC()))
#classifier_list.append(('CART', DecisionTreeClassifier()))
#classifier_list.append(('LDA', LinearDiscriminantAnalysis()))
encoder_list = [ce.backward_difference.BackwardDifferenceEncoder,
ce.basen.BaseNEncoder,
ce.binary.BinaryEncoder,
ce.cat_boost.CatBoostEncoder,
ce.hashing.HashingEncoder,
ce.helmert.HelmertEncoder,
ce.james_stein.JamesSteinEncoder,
ce.one_hot.OneHotEncoder,
ce.leave_one_out.LeaveOneOutEncoder,
ce.m_estimate.MEstimateEncoder,
ce.ordinal.OrdinalEncoder,
ce.polynomial.PolynomialEncoder,
ce.sum_coding.SumEncoder,
ce.target_encoder.TargetEncoder,
ce.woe.WOEEncoder]
for label, classifier, params in classifier_list:
results[label] = {}
for encoder in encoder_list:
results[label][encoder.__name__] = {}
print('{} with {}'.format(label, encoder.__name__))
#numeric_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),('scaler', MinMaxScaler())])
numeric_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),('scaler', StandardScaler())])
        categorical_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='constant', fill_value='missing')),
                                                  ('encoder', encoder())])
preprocessor = ColumnTransformer(transformers=[('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categorical_features)])
pipe = Pipeline(steps=[('preprocessor', preprocessor),
('classifier', classifier)])
if params != {}:
try:
search = GridSearchCV(pipe, params, n_jobs=-1)
search.fit(X_train, y_train)
print('Best parameter (CV score={:.3f}): {}'.format(search.best_score_, search.best_params_))
model = search.fit(X_train, y_train)
y_pred = model.predict(X_test)
score = f1_score(y_test, y_pred)
print('Best score: {:.4f}\n'.format(score))
results[label][encoder.__name__]['score'] = score
results[label][encoder.__name__]['best_params'] = search.best_params_
            except Exception as e:
                print('Something went wrong with GridSearch or pipeline fitting: {}'.format(e))
else:
try:
model = pipe.fit(X_train, y_train)
y_pred = model.predict(X_test)
score = f1_score(y_test, y_pred)
print('Score: {:.4f}\n'.format(score))
results[label][encoder.__name__]['score'] = score
            except Exception as e:
                print('Something went wrong with pipeline fitting: {}'.format(e))
```
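Once the loop finishes, the nested `results` dict can be reduced to the best-scoring encoder per classifier. A small helper sketch (the sample scores below are made up for illustration):

```python
def best_encoder(results):
    # For each classifier label, pick the encoder with the highest recorded score
    return {clf: max(encs.items(), key=lambda kv: kv[1].get('score', float('-inf')))[0]
            for clf, encs in results.items()}

sample = {
    'LRR': {'OneHotEncoder': {'score': 0.81}, 'TargetEncoder': {'score': 0.84}},
    'KNN': {'OneHotEncoder': {'score': 0.78}, 'TargetEncoder': {'score': 0.80}},
}
print(best_encoder(sample))  # {'LRR': 'TargetEncoder', 'KNN': 'TargetEncoder'}
```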
### Comparison with manual encoding from previous notebook + `total_funding_usd`
```
# Comparison
df_b4 = df.drop(['category_groups','country','employee_size'], axis=1)
df_b4 = df_b4.drop(df_b4.columns.to_list()[-46:], axis=1)
# Sample
df_p1 = df_b4[df_b4['p1_tag']==1]
df_notp1 = df_b4[df_b4['p1_tag']==0].sample(n=df_p1.shape[0], replace=False)
df_b4 = pd.concat([df_p1, df_notp1]).reset_index(drop=True)
df_b4 = reduce_mem_usage(df_b4)
# Impute missing data in employee_count and rank columns
imputer = SimpleImputer(missing_values=-1, strategy='median')
df_b4['employee_count'] = imputer.fit_transform(df_b4['employee_count'].values.reshape(-1, 1))
imputer = SimpleImputer(strategy='median')
df_b4['rank'] = imputer.fit_transform(df_b4['rank'].values.reshape(-1, 1))
imputer = SimpleImputer(strategy='mean')
df_b4['total_funding_usd'] = imputer.fit_transform(df_b4['total_funding_usd'].values.reshape(-1, 1))
df_num_missing = df_b4[['rank', 'employee_count', 'total_funding_usd']].isna().sum()/len(df_b4)
output_string = df_num_missing.to_string(float_format=lambda x: '{:.2f}%'.format(x*100))
print('\nMISSING VALUES BY PERCENTAGE')
print(output_string)
# Scale numeric values
#########################################
#########################################
X = df_b4.drop('p1_tag', axis=1)
y = df_b4['p1_tag']
y = preprocessing.LabelEncoder().fit_transform(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
print('\nTraining data shape:', X_train.shape)
print('Train label shape:', y_train.shape)
print('Test data shape:', X_test.shape)
print('Test label shape:', y_test.shape)
KNN = KNeighborsClassifier(n_neighbors=30, p=1, leaf_size=25)
KNN.fit(X_train, y_train)
y_pred = KNN.predict(X_test)
print('\nKNN Accuracy score: {:.4f}'.format(KNN.score(X_test, y_test)))
LR = LogisticRegression(C=10)
LR.fit(X_train, y_train)
print('LRR Accuracy score: {:.4f}'.format(LR.score(X_test, y_test)))
import json
with open('results_baseline.json', 'w') as fp:
json.dump(results, fp, sort_keys=True, indent=4)
with open('results_baseline.json', 'r') as fp:
results = json.load(fp)
print(results)
```
# Exploring Cartpole with Reinforcement Learning using Deep Q-learning
This notebook is a modification of the [PyTorch RL DQN tutorial](https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html)
It follows the Reinforcement Learning, Q-learning & OpenAI class from RIIA 2019
```
# Let's see what the Cartpole environment looks like:
import gym
env = gym.make('CartPole-v0')
env.reset()
for _ in range(30):
    env.render()
    env.step(env.action_space.sample()) # Take a random action
env.close()
# Import the required libraries:
import gym
import math
import random
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from IPython import display
plt.ion()
from collections import namedtuple
from itertools import count
from PIL import Image
# The solutions use PyTorch; you may use Keras and/or TensorFlow if you prefer
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
import torchvision.transforms as T
# OpenAI "Cart pole" environment
enviroment = gym.make('CartPole-v0').unwrapped
enviroment.render()
# Check whether a GPU is available and use it
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print('Number of actions: {}'.format(enviroment.action_space.n))
print('State dimension: {}'.format(enviroment.observation_space))
# Temporal discount factor
gamma = 0.8
# Number of samples to draw from the experience replay buffer to train the network
No_grupo = 64
# Parameters for the epsilon-greedy rate, which decays exponentially
eps_inicial = 0.9
eps_final = 0.05
eps_tasa = 200
# Learning rate for stochastic gradient descent
lr = 0.001
# How often to update the target network
actualizar_red_med = 10
# Number of episodes to train for
No_episodios = 200
iters = 0
duracion_episodios = []
```
Define a function called `genera_accion` that receives the `estado` (state) vector and takes either the optimal action or a random action. The random action should be taken with a probability that decreases exponentially, so that more exploration happens early on.
With probability $$\epsilon_{final}+(\epsilon_{inicial}-\epsilon_{final})\times e^{-iters/tasa_{\epsilon}}$$ a random action is chosen. The following plot shows this exponentially decaying rate.
```
plt.plot([eps_final + (eps_inicial - eps_final) * math.exp(-1. * iters / eps_tasa) for iters in range(1000)])
plt.title('Exponential decay of the exploration rate')
plt.xlabel('Iteration')
plt.ylabel('Probability of exploring: $\epsilon$')
plt.show()
def genera_accion(estado):
global iters
decimal = random.uniform(0, 1)
limite_epsilon = eps_final + (eps_inicial - eps_final) * math.exp(-1. * iters / eps_tasa)
iters += 1
if decimal > limite_epsilon:
with torch.no_grad():
return red_estrategia(estado).max(0)[1].view(1)
else:
return torch.tensor([random.randrange(2)], device=device, dtype=torch.long)
```
Build a neural network that receives the state vector and returns a vector whose dimension equals the number of actions
```
class red_N(nn.Module):
def __init__(self):
super(red_N, self).__init__()
        # Dense layers
self.capa_densa1 = nn.Linear(4, 256)
self.capa_densa2 = nn.Linear(256, 128)
self.final = nn.Linear(128, 2)
def forward(self, x):
        # Network architecture, with ReLU activations in the two hidden layers
x = F.relu(self.capa_densa1(x))
x = F.relu(self.capa_densa2(x))
return self.final(x)
```
In the following cell we build an experience replay class with several methods:
`guarda`: stores the observation $(s_i,a_i,s_i',r_i)$
`muestra`: draws a sample of size `No_grupo`
`len`: returns the number of samples currently in the buffer
```
Transicion = namedtuple('Transicion',
('estado', 'accion', 'sig_estado', 'recompensa'))
class repositorioExperiencia(object):
def __init__(self, capacidad):
self.capacidad = capacidad
self.memoria = []
self.posicion = 0
def guarda(self, *args):
"""Guarda una transición."""
if len(self.memoria) < self.capacidad:
self.memoria.append(None)
self.memoria[self.posicion] = Transicion(*args)
self.posicion = (self.posicion + 1) % self.capacidad
def muestra(self, batch_size):
return random.sample(self.memoria, batch_size)
def __len__(self):
return len(self.memoria)
```
In the following cell we define a function called `actualiza_q` that implements DQL:
1. Draw a sample of size `No_grupo`,
2. Using `red_estrategia`, compute $Q_{\theta}(s_t,a_t)$ for the sample
3. Compute $V^*(s_{t+1})$ using `red_etiqueta`
4. Compute the targets $y_j=r_j+\gamma\max_aQ_{\theta'}(s_{t+1},a)$
5. Compute the loss for $Q_{\theta}(s_t,a_t)-y_j$
6. Update $\theta$
```
def actualiza_q():
if len(memoria) < No_grupo:
return
transiciones = memoria.muestra(No_grupo)
# Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for
# detailed explanation).
grupo = Transicion(*zip(*transiciones))
# Compute a mask of non-final states and concatenate the batch elements
estados_intermedios = torch.tensor(tuple(map(lambda s: s is not None,
grupo.sig_estado)), device=device, dtype=torch.uint8)
sig_estados_intermedios = torch.cat([s for s in grupo.sig_estado
if s is not None])
grupo_estado = torch.cat(grupo.estado)
accion_grupo = torch.cat(grupo.accion)
recompensa_grupo = torch.cat(grupo.recompensa)
    # Compute Q(s_t, a_t) - one way is to use red_estrategia to compute Q(s_t),
    # and then select the columns for the actions taken, using the gather function
    q_actual = red_estrategia(grupo_estado).gather(1, accion_grupo.unsqueeze(1))
    # Compute V*(s_{t+1}) for all sig_estado in the batch using red_etiqueta
    valores_sig_estado = torch.zeros(No_grupo, device=device)
    valores_sig_estado[estados_intermedios] = red_etiqueta(sig_estados_intermedios).max(1)[0].detach()
    # Compute the targets
    y_j = (valores_sig_estado * gamma) + recompensa_grupo
    # Huber loss (commented out); MSE loss used instead
    #perdida = F.smooth_l1_loss(q_actual, y_j.unsqueeze(1))
    perdida = F.mse_loss(q_actual, y_j.unsqueeze(1))
    # Optimize the model
    optimizador.zero_grad()
    perdida.backward()
    for param in red_estrategia.parameters():
        param.grad.data.clamp_(-1, 1)
    optimizador.step()
# Function to plot episode durations
def grafica_duracion(dur):
    plt.figure(2)
    plt.clf()
    duracion_t = torch.tensor(duracion_episodios, dtype=torch.float)
    plt.title('Training...')
    plt.xlabel('Episode')
    plt.ylabel('Duration')
    plt.plot(duracion_t.numpy())
    # Take the running mean of the duration over 15 episodes and plot it
    if len(duracion_t) >= 15:
        media = duracion_t.unfold(0, 15, 1).mean(1).view(-1)
        media = torch.cat((torch.zeros(14), media))
        plt.plot(media.numpy())
    plt.plot([200]*len(duracion_t))
    plt.pause(dur) # Pause briefly so the plots can be seen
    display.clear_output(wait=True)
    display.display(plt.gcf())
red_estrategia = red_N().to(device)
red_etiqueta = red_N().to(device)
red_etiqueta.load_state_dict(red_estrategia.state_dict())
red_etiqueta.eval()
#optimizador = optim.RMSprop(red_estrategia.parameters())
optimizador = optim.Adam(red_estrategia.parameters(),lr=lr)
memoria = repositorioExperiencia(10000)
# Training
for episodio in range(0, No_episodios):
    # Reset the environment
    estado = enviroment.reset()
    estado = torch.tensor(estado, dtype = torch.float)
    # Initialize variables
    recompensa = 0
    termina = False
    for t in count():
        # Choose the action to take
        accion = genera_accion(estado)
        # Take the action and receive the environment's feedback
        sig_estado, recompensa, termina, _ = enviroment.step(accion.item())
        # Convert observations to tensors
        estado = torch.tensor(estado, dtype = torch.float)
        sig_estado = torch.tensor(sig_estado, dtype = torch.float)
        # If the episode ended (termina == True), the reward becomes negative
        if termina:
            recompensa = -recompensa
        recompensa = torch.tensor([recompensa], device=device)
        # Store the transition in memory
        memoria.guarda(estado.unsqueeze(0), accion, sig_estado.unsqueeze(0), recompensa)
        # Update the Q-values in the policy network
        actualiza_q()
        # Move to the next state
        estado = sig_estado
        # Plot episode durations
        if termina:
            duracion_episodios.append(t + 1)
            break
    # Update red_etiqueta (the target network)
    if episodio % actualizar_red_med == 0:
        red_etiqueta.load_state_dict(red_estrategia.state_dict())
    grafica_duracion(0.3)
print("**********************************")
print("Training finished!\n")
print("**********************************")
grafica_duracion(15)
grafica_duracion(15)
```
# 9. Neural Networks with Sphere example
```
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import TensorDataset, DataLoader
import matplotlib.pyplot as plt
%matplotlib inline
```
## 9.1 Generate Data
```
def make_datasets(size, dim):
x = torch.randn(size, dim)
y = torch.randn(size, dim)
x = x / x.norm(dim=1).view(size, 1)
y = 4 * y / y.norm(dim=1).view(size, 1)
label_x = torch.ones(size)
label_y = torch.zeros(size)
data = torch.cat([x, y])
label = torch.cat([label_x, label_y])
return TensorDataset(data, label)
train_data = make_datasets(500, 2)
test_data = make_datasets(100, 2)
batch_size = 100
train_loader = DataLoader(batch_size=batch_size, dataset=train_data, shuffle=True)
test_loader = DataLoader(batch_size=batch_size, dataset=test_data, shuffle=False)
# dataset (Dataset) – dataset from which to load the data.
# batch_size (int, optional) – how many samples per batch to load (default: 1).
# shuffle (bool, optional) – set to True to have the data reshuffled at every epoch (default: False).
X = train_data.tensors[0]
Y = train_data.tensors[1]
fig = plt.figure(figsize = (5, 4))
plt.scatter(X[:, 0], X[:, 1], c = Y)
plt.colorbar()
plt.show()
```
## 9.2 Define Model
```
model = nn.Sequential(
nn.Linear(2, 1000),
nn.ReLU(),
nn.Linear(1000, 1000),
nn.ReLU(),
nn.Linear(1000, 1),
nn.Sigmoid()
)
```
## 9.3 Train Model
```
loss = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
num_epochs = 10
model.train()
for epoch in range(num_epochs):
    for i, (batch_data, batch_labels) in enumerate(train_loader):
        X = batch_data
        Y = batch_labels
        pre = model(X)
        cost = loss(pre.squeeze(), Y)
        optimizer.zero_grad()
        cost.backward()
        optimizer.step()
    print('Epoch [%d/%d], Loss: %.4f' % (epoch+1, num_epochs, cost.item()))
print("Learning Finished!")
```
## 9.4 Test Model
```
correct = 0
total = 0
for data, labels in test_loader:
outputs = model(data)
predicted = outputs.squeeze().data > 0.5
total += data.shape[0]
correct += (predicted == labels).sum()
print('Accuracy of test images: %f %%' % (100 * float(correct) / total))
print('Misclassified : %d' % (total - correct))
```
## 9.5 Decision Boundary
```
grid_size = 500
x = torch.linspace(-5, 5, grid_size)
y = torch.linspace(-5, 5, grid_size)
xv, yv = torch.meshgrid(x, y)
xv.shape, yv.shape
xv = xv.reshape(-1, 1)
yv = yv.reshape(-1, 1)
torch.cat([xv, yv], dim=1)
z = model(torch.cat([xv, yv], dim=1)) > 0.5
x = xv.data.numpy().reshape(grid_size, grid_size)
y = yv.data.numpy().reshape(grid_size, grid_size)
z = z.data.numpy().reshape(grid_size, grid_size)
fig = plt.figure(figsize = (5, 4))
plt.scatter(X[:, 0], X[:, 1], c = Y)
plt.contourf(x, y, z, alpha=0.3)
plt.colorbar()
plt.show()
```
# Additional notes
Since this file is similar to the toy example already run in file 8, no additional code was run for it.
```
import pandas as pd
import numpy as np
from boruta import BorutaPy
from IPython.display import display
```
### Data Prep
```
df = pd.read_csv('data/aml_df.csv')
df.drop(columns=['Unnamed: 0'], inplace=True)
display(df.info())
df.head()
#holdout validation set
final_val = df.sample(frac=0.2)
#X and y for holdout
final_X = final_val[model_columns]  # model_columns (list of feature column names) is assumed to be defined earlier
final_y = final_val.iloc[:, -1]
# training data
data = df.drop(index= final_val.index)
X = data[model_columns]
y = data.iloc[:, -1]
display(X.info())
X.head()
```
# Feature Reduction
### Boruta
```
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=1000, max_depth=20, random_state=8, n_jobs=-1)
feat_selector = BorutaPy(rf, n_estimators='auto', verbose=2, max_iter = 200, random_state=8)
feat_selector.fit(X.values, y.values)
selected = X.values[:, feat_selector.support_]
print(selected.shape)
boruta_mask = feat_selector.support_
boruta_features = model_columns[boruta_mask]
boruta_df = df[model_columns[boruta_mask]]
```
### Lasso
```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import log_loss, accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
log_model = LogisticRegression(penalty='l1', solver='saga', max_iter=10000)
kf = KFold(n_splits=5, shuffle=True)
ll_performance = []
model_weights = []
for train_index, test_index in kf.split(X):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
log_model.fit(X_train, y_train)
y_pred = log_model.predict_proba(X_test)
log_ll = log_loss(y_test, y_pred)
ll_performance.append(log_ll)
model_weights.append(log_model.coef_)
print(ll_performance)
average_weight = np.mean(model_weights, axis=0)[0]
def important_gene_mask(columns, coefs):
mask = coefs != 0
important_genes = columns[mask[0]]
print(len(important_genes))
return important_genes
lasso_k1 = set(important_gene_mask(model_columns, model_weights[0]))
lasso_k2 = set(important_gene_mask(model_columns, model_weights[1]))
lasso_k3 = set(important_gene_mask(model_columns, model_weights[2]))
lasso_k4 = set(important_gene_mask(model_columns, model_weights[3]))
lasso_k5 = set(important_gene_mask(model_columns, model_weights[4]))
lasso_gene_union = set.union(lasso_k1, lasso_k2, lasso_k3, lasso_k4, lasso_k5)
len(lasso_gene_union)
lasso_gene_intersection = set.intersection(lasso_k1, lasso_k2, lasso_k3, lasso_k4, lasso_k5)
len(lasso_gene_intersection)
lasso_columns = list(lasso_gene_union)
lasso_boruta_intersection = set.intersection(set(boruta_features), lasso_gene_intersection)
len(lasso_boruta_intersection)
lasso_boruta_intersection
gene_name = ['HOXA9', 'HOXA3', 'HOXA6', 'TPSG1', 'HOXA7', 'SPATA6', 'GPR12', 'LRP4',
'CPNE8', 'ST18', 'MPV17L', 'TRH', 'TPSAB1', 'GOLGA8M', 'GT2B11',
'ANKRD18B', 'AC055876.1', 'WHAMMP2', 'HOXA10-AS', 'HOXA10',
'HOXA-AS3', 'PDCD6IPP1', 'WHAMMP3']
gene_zip = list(zip(lasso_boruta_intersection, gene_name))  # caution: sets are unordered, so this pairing is not guaranteed to be stable
gene_zip
pd.DataFrame(gene_zip)
```
These are the features deemed most important by both the lasso rounds and Boruta.
```
boruta_not_lasso = set.difference(set(boruta_features), lasso_gene_union)
len(boruta_not_lasso)
```
25 features were considered important by Boruta but not picked up by any of the lasso rounds. Why? One plausible reason is that lasso's L1 penalty tends to keep only one representative from a group of correlated features, while the random-forest-based Boruta can flag every member of such a group.
# In-Class Coding Lab: Conditionals
The goals of this lab are to help you to understand:
- Relational and Logical Operators
- Boolean Expressions
- The if statement
- Try / Except statement
- How to create a program from a complex idea.
# Understanding Conditionals
Conditional statements permit the non-linear execution of code. Take the following example, which detects whether the input integer is odd or even:
```
number = int(input("Enter an integer: "))
if number %2==0:
print("%d is even" % (number))
else:
print("%d is odd" % (number))
```
Make sure to run the cell more than once, inputting both odd and even integers to try it out. After all, we don't know if the code really works until we test out both options!
On line 2, you see `number % 2 == 0`. This is the Boolean expression at the center of the logic of this program. The expression says **number, when divided by 2, has a remainder (%) equal to (==) zero**. The key to deciphering this is knowing how the `%` and `==` operators work. Understanding basics such as these is essential to problem solving with programming, for once you understand the basics, programming becomes an exercise in assembling them into a workable solution.
The `if` statement evaluates this Boolean expression and when the expression is `True`, Python executes all of the code indented underneath the `if`. In the event the Boolean expression is `False`, Python executes the code indented under the `else`.
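To see the pieces in isolation, here's a minimal sketch of how `%` and `==` combine into the Boolean expression the `if` statement evaluates (the values are just illustrative):

```python
number = 10
print(number % 2)        # 0: ten divided by two leaves no remainder
print(number % 2 == 0)   # True, so the if-branch would run

number = 7
print(number % 2)        # 1: seven divided by two leaves remainder 1
print(number % 2 == 0)   # False, so the else-branch would run
```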
## Now Try It
Write a similar program to input an integer and print "Zero or Positive" when the number is greater than or equal to zero, and "Negative" otherwise.
To accomplish this you **must** write a Boolean expression for **number greater than or equal to zero**, which is left up to the reader.
```
# TODO write your program here:
number = int(input("Enter an integer: "))
if number >= 0:
print ("%d is Zero or Positive" % (number))
else:
print ("%d is Negative" % (number))
```
# Rock, Paper Scissors
In this part of the lab we'll build out a game of Rock, Paper, Scissors. If you're not familiar with the game, I suggest reading this: [https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors](https://en.wikipedia.org/wiki/Rock%E2%80%93paper%E2%80%93scissors) Knowledge of the game will help you understand the lab much better.
The objective of the lab is to teach you how to use conditionals, but also to get you thinking about how to solve problems with programming. We've said before that it's a non-linear process, with several attempts before you reach the final solution. You'll experience this first-hand in this lab as we figure things out one piece at a time and add them to our program.
```
## Here's our initial To-Do list, we've still got lots to figure out.
# 1. computer opponent selects one of "rock", "paper" or "scissors" at random
# 2. you input one of "rock", "paper" or "scissors"
# 3. play the game and determine a winner... (not sure how to do this yet.)
```
## Randomizing the Computer's Selection
Let's start by coding the TO-DO list. First we need to make the computer select from "rock", "paper" or "scissors" at random.
To accomplish this, we need to use python's `random` library, which is documented here: [https://docs.python.org/3/library/random.html](https://docs.python.org/3/library/random.html)
It would appear we need to use the `choice()` function, which takes a sequence of choices and returns one at random. Let's try it out.
```
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
computer
```
Run the cell a couple of times. It should make a random selection from `choices` each time you run it.
How did I figure this out? Well I started with a web search and then narrowed it down from the Python documentation. You're not there yet, but at some point in the course you will be. When you get there you will be able to teach yourself just about anything!
## Getting input and guarding against stupidity
With step one out of the way, it's time to move on to step 2: getting input from the user.
```
# 1. computer opponent selects one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
print("You chose %s and the computer chose %s" % (you,computer))
```
This is taking shape, but if you re-run the example and enter `pizza` you'll notice a problem.
We should guard against the situation when someone enters something other than 'rock', 'paper' or 'scissors'. This is where our first conditional comes into play.
### In operator
The `in` operator returns a Boolean based on whether a value is in a list of values. Let's try it:
```
# TODO Try these:
'rock' in choices, 'mike' in choices
```
### You Do It!
Now modify the code below to only print your and the computer's selections when your input is one of the valid choices. Replace `TODO` on line `8` with a correct Boolean expression to verify what you entered is one of the valid choices.
```
# 1. computer opponent selects one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices): # replace TODO on this line
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner... (not sure how to do this yet.)
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
```
## Playing the game
With the input figured out, it's time to work on the final step, playing the game. The game itself has some simple rules:
- rock beats scissors (rock smashes scissors)
- scissors beats paper (scissors cut paper)
- paper beats rock (paper covers rock)
So for example:
- If you choose rock and the computer chooses paper, you lose because paper covers rock.
- Likewise if you select rock and the computer choose scissors, you win because rock smashes scissors.
- If you both choose rock, it's a tie.
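Before we build it with `if`/`elif`, note that the three rules above can also be captured in a single dictionary mapping each choice to the choice it defeats. This is just an illustrative sketch of the game logic, not the approach this lab walks through:

```python
# Maps each choice to the choice it defeats.
beats = {'rock': 'scissors', 'scissors': 'paper', 'paper': 'rock'}

def outcome(you, computer):
    # Returns 'tie', 'win', or 'lose' from your perspective.
    if you == computer:
        return 'tie'
    return 'win' if beats[you] == computer else 'lose'

print(outcome('rock', 'scissors'))  # win: rock smashes scissors
print(outcome('rock', 'paper'))     # lose: paper covers rock
print(outcome('rock', 'rock'))      # tie
```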
## It's too complicated!
It still might seem too complicated to program this game, so let's use a process called **problem simplification** where we solve an easier version of the problem, then as our understanding grows, we increase the complexity until we solve the entire problem.
One common way we simplify a problem is to constrain our input. If we force the user to always choose 'rock', the program becomes a little easier to write.
```
# 1. computer opponent selects one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'rock'
you = 'rock' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming rock only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
```
Run the code in the cell above enough times to verify it works. (You win, you lose and you tie.) That will ensure the code you have works as intended.
## Paper: Making the program a bit more complex.
With the rock logic out of the way, it's time to focus on paper. We will assume you always type `paper` and then add the conditional logic to our existing code to handle it.
At this point you might be wondering: should I make a separate `if` statement, or should I chain the conditions off the current `if` with `elif`? Since this is part of the same input, it should be an extension of the existing `if` statement. You should **only** introduce an additional conditional if you're making a separate decision, for example asking the user if they want to play again. Since this is part of the same decision (did you enter 'rock', 'paper' or 'scissors'?), it should be in the same `if...elif` ladder.
### You Do It
In the code below, I've added the logic to address your input of 'paper'. You only have to replace the `TODO` in the `print()` statements with the appropriate message.
```
# 1. computer opponent selects one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
# for now, make this 'paper'
you = 'paper' #input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner (assuming paper only for user)
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("You win! Paper covers rock.")
elif (you == 'paper' and computer == 'scissors'):
print("You lose! Scissors cuts paper.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
```
## The final program
With the 'rock' and 'paper' cases out of the way, we only need to add 'scissors' logic. We leave this part to you as your final exercise.
Similar to the 'paper' example, you will need to add two `elif` statements to handle winning and losing when you select 'scissors', and you should also include the appropriate output messages.
```
# 1. computer opponent selects one of "rock", "paper" or "scissors" at random
import random
choices = ['rock','paper','scissors']
computer = random.choice(choices)
# 2. you input one of "rock", "paper" or "scissors"
you = input("Enter your choice: rock, paper, or scissors: ")
if (you in choices):
print("You chose %s and the computer chose %s" % (you,computer))
# 3. play the game and determine a winner
if (you == 'rock' and computer == 'scissors'):
print("You win! Rock smashes scissors.")
elif (you == 'rock' and computer=='paper'):
print("You lose! Paper covers rock.")
elif (you == 'paper' and computer =='rock'):
print("TODO - What should this say?")
elif (you == 'paper' and computer == 'scissors'):
print("TODO - What should this say?")
elif (you == 'scissors' and computer == 'paper'):
print("You win! Scissors cuts paper.")
elif (you == 'scissors' and computer == 'rock'):
print ("You lose! Rock smashes scissors.")
else:
print("It's a tie!")
else:
print("You didn't enter 'rock', 'paper' or 'scissors'!!!")
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Text classification with movie reviews
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/ko/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />구글 코랩(Colab)에서 실행하기</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/ko/r1/tutorials/keras/basic_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />깃허브(GitHub) 소스 보기</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Since community translations are best-effort, there is no guarantee that this is an exact and up-to-date reflection of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, email [docs-ko@tensorflow.org](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-ko).
This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary* (two-class) classification, an important and widely applicable kind of machine learning problem.
We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb), which contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.
This notebook uses [tf.keras](https://www.tensorflow.org/r1/guide/keras), a high-level Python API for building and training models in TensorFlow. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
```
# keras.datasets.imdb is broken in TF 1.13 and 1.14 with numpy 1.16.3
!pip install tf_nightly
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow import keras
import numpy as np
print(tf.__version__)
```
## Download the IMDB dataset
The IMDB dataset comes packaged with TensorFlow. It has already been preprocessed so that the reviews (sequences of words) have been converted to sequences of integers, where each integer represents a specific word in a dictionary.
The following code downloads the IMDB dataset to your machine (or uses a cached copy if you've already downloaded it):
```
imdb = keras.datasets.imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
```
The argument `num_words=10000` keeps the top 10,000 most frequently occurring words in the training data. The rare words are discarded to keep the size of the data manageable.
## Explore the data
Let's take a moment to understand the format of the data. The dataset comes preprocessed: each example is an array of integers representing the words of the movie review. Each label is an integer of either 0 or 1, where 0 is a negative review and 1 is a positive review.
```
print("Training entries: {}, labels: {}".format(len(train_data), len(train_labels)))
```
The text of the reviews has been converted to integers, where each integer represents a specific word in a dictionary. Here's what the first review looks like:
```
print(train_data[0])
```
Movie reviews may be different lengths. The code below shows the number of words in the first and second reviews. Since inputs to a neural network must be the same length, we'll need to resolve this later.
```
len(train_data[0]), len(train_data[1])
```
### Convert the integers back to words
It may be useful to know how to convert integers back to text. Here we'll create a helper function to query a dictionary object that contains the integer-to-string mapping:
```
# A dictionary mapping words to an integer index
word_index = imdb.get_word_index()
# The first few indices are reserved
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["<PAD>"] = 0
word_index["<START>"] = 1
word_index["<UNK>"] = 2 # unknown
word_index["<UNUSED>"] = 3
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
```
Now we can use the `decode_review` function to display the text of the first review:
```
decode_review(train_data[0])
```
## Prepare the data
The reviews (arrays of integers) must be converted to tensors before being fed into the neural network. This conversion can be done a couple of ways:
* One-hot encode the arrays to convert them into vectors of 0s and 1s. For example, the sequence [3, 5] would become a 10,000-dimensional vector that is all zeros except for indices 3 and 5, which are ones. Then make the first layer in our network a `Dense` layer that can handle floating point vector data. This approach is memory intensive, though, requiring a `num_words * num_reviews` size matrix.
* Alternatively, we can pad the arrays so they all have the same length, then create an integer tensor of shape `max_length * num_reviews`. We can use an embedding layer capable of handling this shape as the first layer in our network.
In this tutorial, we'll use the second approach.
Since the movie reviews must be the same length, we'll use the [pad_sequences](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences) function to standardize the lengths:
```
train_data = keras.preprocessing.sequence.pad_sequences(train_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
test_data = keras.preprocessing.sequence.pad_sequences(test_data,
value=word_index["<PAD>"],
padding='post',
maxlen=256)
```
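For reference, the one-hot (multi-hot) approach described above, which this tutorial does not use, could be sketched like this. The function name and the small `dimension` are just for illustration:

```python
import numpy as np

def multi_hot(sequences, dimension=10000):
    # One row per review; put a 1.0 at the index of every word that occurs.
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0
    return results

# The sequence [3, 5] becomes a vector that is zero everywhere
# except at indices 3 and 5.
encoded = multi_hot([[3, 5]], dimension=10)
print(encoded[0])
```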
Let's look at the length of the examples now:
```
len(train_data[0]), len(train_data[1])
```
And inspect the (now padded) first review:
```
print(train_data[0])
```
## Build the model
The neural network is created by stacking layers. This requires two main architectural decisions:
* How many layers to use in the model?
* How many *hidden units* to use for each layer?
In this example, the input data consists of arrays of word indices. The labels to predict are either 0 or 1. Let's build a model for this problem:
```
# The input size is the vocabulary size used for the movie reviews (10,000 words)
vocab_size = 10000
model = keras.Sequential()
model.add(keras.layers.Embedding(vocab_size, 16, input_shape=(None,)))
model.add(keras.layers.GlobalAveragePooling1D())
model.add(keras.layers.Dense(16, activation=tf.nn.relu))
model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid))
model.summary()
```
The layers are stacked sequentially to build the classifier:
1. The first layer is an `Embedding` layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are `(batch, sequence, embedding)`.
2. Next, a `GlobalAveragePooling1D` layer returns a fixed-length output vector for each example by averaging over the `sequence` dimension. This is the simplest way the model can handle input of variable length.
3. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.
4. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, it outputs a float between 0 and 1, representing a probability or confidence level.
### Hidden units
The above model has two intermediate or "hidden" layers between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, it is the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space) and/or more layers, the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns: patterns that improve performance on the training data but not on the test data. This is called *overfitting*, and we'll explore it later.
### Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a `sigmoid` activation), we'll use the `binary_crossentropy` loss function.
This isn't the only choice; you could use `mean_squared_error`, for instance. But, generally, `binary_crossentropy` is better for dealing with probabilities: it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
Later, when we explore regression problems (say, predicting the price of a house), we'll see how to use the mean squared error loss function.
Now, configure the model to use an optimizer and a loss function:
```
model.compile(optimizer=tf.train.AdamOptimizer(),
loss='binary_crossentropy',
metrics=['acc'])
```
## Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy.)
```
x_val = train_data[:10000]
partial_x_train = train_data[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
```
## Train the model
Train the model for 40 epochs in mini-batches of 512 samples. That is, 40 iterations over all samples in the `x_train` and `y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
```
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
```
## Evaluate the model
Let's see how the model performs. Two values are returned: loss (a number representing our error; lower values are better) and accuracy.
```
results = model.evaluate(test_data, test_labels, verbose=2)
print(results)
```
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
## Create a graph of accuracy and loss over time
`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
```
history_dict = history.history
history_dict.keys()
```
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
```
import matplotlib.pyplot as plt
acc = history_dict['acc']
val_acc = history_dict['val_acc']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# "b" is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear the figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.
Notice the training loss *decreases* with each epoch and the training accuracy *increases*. This is expected when using gradient descent optimization: it should minimize the desired quantity on every iteration.
This isn't the case for the validation loss and accuracy: they seem to peak after about twenty epochs. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations *specific* to the training data that do not *generalize* to test data.
For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, we'll see how to do this automatically with a callback.
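As a preview, the stopping rule such a callback automates can be sketched as plain logic over the recorded validation losses. This is a simplified illustration with hypothetical loss values, where `patience` is a tuning choice; Keras provides this behavior as `keras.callbacks.EarlyStopping`:

```python
def should_stop(val_losses, patience=2):
    # Stop once the last `patience` epochs show no improvement over
    # the best validation loss seen before them.
    if len(val_losses) <= patience:
        return False
    best_so_far = min(val_losses[:-patience])
    return min(val_losses[-patience:]) >= best_so_far

history_val_loss = [0.40, 0.32, 0.30, 0.31, 0.33]  # illustrative values
print(should_stop(history_val_loss))  # True: epoch 3 was the best
```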
## Preamble
### Import libraries
```
import os, sys
# Import Pandas
import pandas as pd
# Import Plotly and Cufflinks
# Plotly username and API key should be set in environment variables
import plotly
plotly.tools.set_credentials_file(username=os.environ['PLOTLY_USERNAME'], api_key=os.environ['PLOTLY_KEY'])
import plotly.graph_objs as go
import cufflinks as cf
# Import numpy
import numpy as np
```
## Step 1:
### Import CSV containing photovoltaic performance of solar cells into Pandas Data Frame object
```
# Import module to read in secure data
sys.path.append('../data/NREL')
import retrieve_data as rd
solar = rd.retrieve_dirks_sheet()
```
## Step 2:
### Clean the data for inconsistencies
```
sys.path.append('utils')
import process_data as prd
prd.clean_data(solar)
```
## Step 3:
### Import functions from utils and define notebook-specific functions
```
import degradation_utils as du
def get_mode_correlation_percent(df, mode_1, mode_2, weighted):
"""
Return the percent of rows where two modes are seen together
Args:
df (DataFrame): Pandas DataFrame that has been cleaned using the clean_data function
mode_1 (string): Degradation mode to find in the DataFrame in pairing with mode_2
mode_2 (string): Degradation mode to find in the DataFrame in pairing with mode_1
weighted (bool): If true, count all modules in a system as degrading
If false, count a system as one degrading module
Returns:
float: The percentage of modules with both specified degradation modes
"""
# Calculate total number of modules
total_modules = du.get_total_modules(df, weighted)
if total_modules == 0:
return 0
if weighted:
single_modules = len(df[(df['System/module'] == 'Module') & (df[mode_1] == 1) & (df[mode_2] == 1)])
specified = df[(df['System/module'] != 'System') | (df['No.modules'].notnull())]
systems = specified[(specified['System/module'] != 'Module') &
(specified[mode_1] == 1) & (specified[mode_2] == 1)]['No.modules'].sum()
total = single_modules + systems
return float(total) / total_modules
else:
return float(len((df[(df[mode_1] == 1) & (df[mode_2] == 1)]))) / total_modules
def get_heatmap_data(df, modes, weighted):
"""
Returns a DataFrame used to construct a heatmap based on frequency of two degradation modes appearing together
Args:
df (DataFrame): A *cleaned* DataFrame containing the data entries to check modes from
modes (List of String): A list of all modes to check for in the DataFrame
weighted (bool): If true, count all modules in a system as degrading
If false, count a system as one degrading module
Returns:
heatmap (DataFrame): DataFrame containing all of degradation modes correlation frequency results
"""
# Initialize DataFrame to hold heatmap data
heatmap = pd.DataFrame(data=None, columns=modes, index=modes)
# Calculate all single mode percentages
mode_percentages = {}
for mode in modes:
mode_percentages[mode] = du.get_mode_percentage(df, mode, weighted)
# Iterate through every pair of modes
for mode_1 in modes:
for mode_2 in modes:
if mode_1 == mode_2:
heatmap.at[mode_1, mode_2] = np.nan  # set_value is deprecated; use .at
else:
print(mode_1 + " & " + mode_2)
heatmap_reflection = heatmap.at[mode_2, mode_1]
# If already calculated the reflection, save and skip
if (not pd.isnull(heatmap_reflection)):
heatmap.at[mode_1, mode_2] = heatmap_reflection
print('Skip, already calculated')
continue
percentage_1 = mode_percentages[mode_1]
percentage_2 = mode_percentages[mode_2]
print('Percentage 1: ' + str(percentage_1))
print('Percentage 2: ' + str(percentage_2))
if (percentage_1 == 0 or percentage_2 == 0):
heatmap.at[mode_1, mode_2] = 0
continue
percentage_both = get_mode_correlation_percent(df, mode_1, mode_2, weighted)
print('Percentage Both: ' + str(percentage_both))
result = float(percentage_both) / (percentage_1 * percentage_2)
print('Result: ' + str(result))
heatmap.at[mode_1, mode_2] = result
return heatmap
```
## Step 4: Generate heatmaps of correlation frequency between degradation modes
### Calculation
Find the correlation strength of all pairs of degradation modes by using the following formula:
P(Degradation mode A & Degradation mode B) / P(Degradation mode A)P(Degradation mode B)
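As a toy numeric check of this formula (the frequencies are hypothetical, not from the dataset): if mode A affects 30% of modules, mode B affects 20%, and both appear together on 12%, the correlation strength is 0.12 / (0.30 * 0.20) = 2.0, meaning the pair co-occurs twice as often as independence would predict:

```python
p_a, p_b, p_both = 0.30, 0.20, 0.12  # hypothetical mode frequencies
strength = p_both / (p_a * p_b)
print(round(strength, 6))  # 2.0: co-occurs twice as often as expected
```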
### Weighted: Multiply data entries for module systems by number of modules
Number of degrading modules = # of degrading single modules + (# of degrading systems · # of modules per degrading system)
Total number of modules = # of single modules + (# of systems · # of modules per system)
P(Degradation mode X) = Number of degrading modules / Total number of modules
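A toy numeric example of the weighted count (hypothetical numbers): with 10 single modules of which 2 degrade, plus 5 systems of 5 modules each of which 3 systems degrade, the formulas above give:

```python
# Hypothetical counts, illustrating the weighted formulas above.
degrading_singles = 2
single_modules = 10
degrading_systems = 3
total_systems = 5
modules_per_system = 5

degrading = degrading_singles + degrading_systems * modules_per_system  # 17
total = single_modules + total_systems * modules_per_system             # 35
p_mode = degrading / total
print(round(p_mode, 3))  # 0.486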
#### Generate heatmap for the entire dataset, regardless of time
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
sys_heatmap_all = get_heatmap_data(solar, modes, True)
sys_heatmap_all
sys_heatmap_all.iplot(kind='heatmap',colorscale='spectral',
filename='sys-heatmap-all', margin=(200,150,120,30))
```
#### Generate heatmap for the dataset of all modules installed before 2000
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
specified = solar[solar['Begin.Year'] < 2000]
sys_heatmap_pre_2000 = get_heatmap_data(specified, modes, True)
sys_heatmap_pre_2000
sys_heatmap_pre_2000.iplot(kind='heatmap',colorscale='spectral',
filename='sys-heatmap-pre-2000', margin=(200,150,120,30))
```
#### Generate heatmap for the dataset of all modules installed post 2000
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
specified = solar[solar['Begin.Year'] >= 2000]
sys_heatmap_post_2000 = get_heatmap_data(specified, modes, True)
sys_heatmap_post_2000
sys_heatmap_post_2000.iplot(kind='heatmap',colorscale='spectral',
filename='sys-heatmap-post-2000', margin=(200,150,120,30))
```
### Unweighted: Count module systems as single module
Number of degrading modules = # of degrading single modules + # of degrading systems
Total number of modules = # of single modules + # of systems
P(Degradation mode X) = Number of degrading modules / Total number of modules
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
modes_heatmap_all = get_heatmap_data(solar, modes, False)
modes_heatmap_all
modes_heatmap_all.iplot(kind='heatmap',colorscale='spectral',
filename='modes-heatmap-all', margin=(200,150,120,30))
```
#### Generate heatmap for the dataset of all modules installed before 2000
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
specified = solar[solar['Begin.Year'] < 2000]
modes_heatmap_pre_2000 = get_heatmap_data(specified, modes, False)
modes_heatmap_pre_2000
modes_heatmap_pre_2000.iplot(kind='heatmap',colorscale='spectral',
filename='modes-heatmap-pre-2000', margin=(200,150,120,30))
```
#### Generate heatmap for the dataset of all modules installed in or after 2000
```
modes = ['Encapsulant discoloration', 'Major delamination', 'Minor delamination',
'Backsheet other', 'Internal circuitry discoloration', 'Hot spots', 'Fractured cells',
'Diode/J-box problem', 'Glass breakage', 'Permanent soiling', 'Frame deformation']
specified = solar[solar['Begin.Year'] >= 2000]
modes_heatmap_post_2000 = get_heatmap_data(specified, modes, False)
modes_heatmap_post_2000
modes_heatmap_post_2000.iplot(kind='heatmap',colorscale='spectral',
filename='modes-heatmap-post-2000', margin=(200,150,120,30))
```
# Self-Driving Car Engineer Nanodegree
## Deep Learning
## Project: Build a Traffic Sign Recognition Classifier
In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission.
> **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the iPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) that can be used to guide the writing process. Completing the code template and writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/481/view) for this project.
The [rubric](https://review.udacity.com/#!/rubrics/481/view) contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this Ipython notebook and also discuss the results in the writeup file.
>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.
---
## Step 0: Load The Data
```
# Load pickled data
import pickle
# TODO: Fill this in based on where you saved the training and testing data
training_file = "./traffic-signs-data/train.p"
validation_file="./traffic-signs-data/valid.p"
testing_file = "./traffic-signs-data/test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
```
---
## Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
- `'features'` is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
- `'labels'` is a 1D array containing the label/class id of the traffic sign. The file `signnames.csv` contains id -> name mappings for each id.
- `'sizes'` is a list containing tuples, (width, height) representing the original width and height the image.
- `'coords'` is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. **THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES**
Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the [pandas shape method](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shape.html) might be useful for calculating some of the summary results.
### Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
```
### Replace each question mark with the appropriate value.
### Use python, pandas or numpy methods rather than hard coding the results
import numpy as np
#import pandas as pd
#pd.read_csv("./signnames.csv")
# TODO: Number of training examples
n_train = len(y_train)
# TODO: Number of validation examples
n_validation = len(y_valid)
# TODO: Number of testing examples.
n_test = len(y_test)
#print(n_test)
# TODO: What's the shape of an traffic sign image?
image_shape = train['features'].shape[1:]
#print(image_shape)
# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))
print("Number of training examples =", n_train)
print("Number of validation examples =", n_validation)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
```
### Include an exploratory visualization of the dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The [Matplotlib](http://matplotlib.org/) [examples](http://matplotlib.org/examples/index.html) and [gallery](http://matplotlib.org/gallery.html) pages are a great resource for doing visualizations in Python.
**NOTE:** It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections. It can be interesting to look at the distribution of classes in the training, validation and test set. Is the distribution the same? Are there more examples of some classes than others?
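The distribution comparison suggested in the note above can be sketched with `np.unique`; the label arrays here are small illustrative stand-ins for `y_train` and `y_valid`:

```python
import numpy as np

# Illustrative label arrays standing in for y_train / y_valid.
y_train_demo = np.array([0, 0, 1, 1, 1, 2])
y_valid_demo = np.array([0, 1, 1, 2])

# return_counts gives the number of samples per class.
classes, counts = np.unique(y_train_demo, return_counts=True)
train_dist = counts / counts.sum()  # relative frequency of each class
print(dict(zip(classes.tolist(), train_dist.round(2).tolist())))
```

Computing the same relative frequencies for the validation and test labels makes it easy to see whether the class balance differs across splits.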
```
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
#uni, index, count = np.unique(y_train, return_index=True, return_counts=True)
class_arr= []
samples_arr=[]
for class_n in range(n_classes):
class_indices = np.where(y_train == class_n)
n_samples = len(class_indices[0])
class_arr.append(class_n)
samples_arr.append(n_samples)
#plt.hist(y_train,bins=43)
plt.bar( class_arr, samples_arr,align='center', alpha=0.5)
plt.ylabel('Number of samples')
plt.xlabel('Class')
plt.title('Data Visualization')
plt.show()
```
----
## Step 2: Design and Test a Model Architecture
Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the [German Traffic Sign Dataset](http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset).
The LeNet-5 implementation shown in the [classroom](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!
With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.
There are various aspects to consider when thinking about this problem:
- Neural network architecture (is the network over or underfitting?)
- Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
- Number of examples per label (some have more than others).
- Generate fake data.
Here is an example of a [published baseline model on this problem](http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf). It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.
Minimally, the image data should be normalized so that the data has mean zero and equal variance. For image data, `(pixel - 128)/ 128` is a quick way to approximately normalize the data and can be used in this project.
Other pre-processing steps are optional. You can try different techniques to see if it improves performance.
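As a quick sanity check, the `(pixel - 128)/128` rule can be applied to a made-up patch of pixel values:

```python
import numpy as np

# A made-up 2x2 patch of 8-bit pixel values.
pixels = np.array([[0, 128], [200, 255]], dtype=np.float32)

# Quick approximate normalization: roughly zero mean, values in [-1, 1).
normalized = (pixels - 128.0) / 128.0
print(normalized.min(), normalized.max())  # -1.0 0.9921875
```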
Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.
### Importing Required modules and methods
```
import datetime
#import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.losses import categorical_crossentropy
from tensorflow.keras.layers import Dense, Flatten, Conv2D, AveragePooling2D
from tensorflow.keras import datasets
from tensorflow.keras.utils import to_categorical
```
### Convert class vectors to binary class matrices
```
num_classes = n_classes
y_train = to_categorical(y_train, num_classes)
y_valid = to_categorical(y_valid, num_classes)
y_test = to_categorical(y_test, num_classes)
```
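`to_categorical` one-hot encodes the integer class ids; the same transform can be sketched in plain NumPy (the labels and class count below are illustrative):

```python
import numpy as np

labels = np.array([0, 2, 1])  # illustrative integer class ids
num_classes = 3

# Row i gets a 1 in column labels[i], matching to_categorical's output.
one_hot = np.eye(num_classes)[labels]
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```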
### Pre-process the Data Set (normalization, grayscale, etc.)
```
### Preprocess the data here. It is required to normalize the data. Other preprocessing steps could include
### converting to grayscale, etc.
### Feel free to use as many code cells as needed.
import cv2
def prepare_image(image_set):
"""Transform initial set of images so that they are ready to be fed to neural network.
(1) normalize image
(2) convert RGB image to gray scale
"""
# initialize empty image set for prepared images
new_shape = image_shape[0:2] + (1,)
prep_image_set = np.empty(shape=(len(image_set),) + new_shape, dtype=int)
for ind in range(0, len(image_set)):
# normalize
norm_img = cv2.normalize(image_set[ind], np.zeros(image_shape[0:2]), 0, 255, cv2.NORM_MINMAX)
# grayscale
gray_img = cv2.cvtColor(norm_img, cv2.COLOR_RGB2GRAY)
# set new image to the corresponding position
prep_image_set[ind] = np.reshape(gray_img, new_shape)
return prep_image_set
X_train_prep = prepare_image(X_train)
X_test_prep = prepare_image(X_test)
X_valid_prep = prepare_image(X_valid)
X_train_prep[0].shape
```
### Model Architecture
```
### Define your architecture here.
### Feel free to use as many code cells as needed.
class LeNet(Sequential):
def __init__(self, input_shape, nb_classes):
super().__init__()
self.add(Conv2D(6, kernel_size=(5, 5), strides=(1, 1), activation='tanh', input_shape=input_shape, padding="same"))
self.add(AveragePooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
self.add(Conv2D(16, kernel_size=(5, 5), strides=(1, 1), activation='tanh', padding='valid'))
self.add(AveragePooling2D(pool_size=(2, 2), strides=(2, 2), padding='valid'))
self.add(Flatten())
self.add(Dense(120, activation='tanh'))
self.add(Dense(84, activation='tanh'))
self.add(Dense(nb_classes, activation='softmax'))
self.compile(optimizer='adam',
loss=categorical_crossentropy,
metrics=['accuracy'])
model = LeNet(X_train_prep[0].shape, n_classes)
```
### Model Summary
```
model.summary()
```
### Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. Low accuracy on both the training and validation sets implies underfitting. High accuracy on the training set but low accuracy on the validation set implies overfitting.
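Once `model.fit` has run, the gap between the two accuracies can be read off the returned history object; the dictionary below is a made-up stand-in shaped like Keras' `History.history`:

```python
# Made-up stand-in for history.history as returned by model.fit.
history_demo = {
    'accuracy':     [0.80, 0.90, 0.95],
    'val_accuracy': [0.78, 0.85, 0.88],
}

final_train = history_demo['accuracy'][-1]
final_val = history_demo['val_accuracy'][-1]
gap = final_train - final_val  # a large positive gap suggests overfitting

print(f"train={final_train:.2f}  val={final_val:.2f}  gap={gap:.2f}")
```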
### TensorBoard callback for visualization
```
# Place the logs in a timestamped subdirectory
# This allows to easy select different training runs
# In order not to overwrite some data, it is useful to have a name with a timestamp
log_dir="logs\\fit\\" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
# Specify the callback object
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
# tf.keras.callback.TensorBoard ensures that logs are created and stored
# We need to pass callback object to the fit method
# The way to do this is by passing the list of callback objects, which is in our case just one
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
history = model.fit(X_train_prep, y=y_train,
epochs=20,
validation_data=(X_valid_prep, y_valid),
callbacks=[tensorboard_callback])
```
### Saving a model
```
# Saving the model
model.save('LeNet_saved_model/')
```
### Evaluation on the testset
```
print("Evaluate")
result = model.evaluate(x=X_test_prep, y=y_test)
dict(zip(model.metrics_names, result))
```
---
## Step 3: Test a Model on New Images
To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.
You may find `signnames.csv` useful as it contains mappings from the class id (integer) to the actual sign name.
### Load and Output the Images
```
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
# %matplotlib inline
# import os
# import matplotlib.image as mpimg
# # import cv2
# my_images = []
# os.listdir('./traffic-signs-downloaded/')
# for i, img in enumerate(os.listdir('./traffic-signs-downloaded/')):
# image = cv2.imread('traffic-signs-downloaded' + img)
# my_images.append(image)
# plt.figure()
# plt.xlabel(img)
# plt.imshow(image)
# my_images = np.asarray(my_images)
import numpy as np
import cv2
import os
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
import glob
import matplotlib.image as mpimg
webImagesDir = 'fromweb'
imageNames = glob.glob('fromweb/*.jpg')
webImages = [ mpimg.imread('./' + imgName ) for imgName in imageNames ]
fig, axes = plt.subplots(ncols=len(webImages), figsize=(16, 8))
for ax, image, imageName in zip(axes, webImages, imageNames):
ax.imshow(image)
ax.set_title(imageName)
```
### Predict the Sign Type for Each Image
```
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
img_paths = os.listdir("fromweb")
images_test = []
# read images and resize
for img_path in img_paths:
# read image from file
img = mpimg.imread(os.path.join("fromweb", img_path))
img = cv2.resize(img, image_shape[0:2], interpolation=cv2.INTER_CUBIC)
images_test.append(img)
X_web = prepare_image(images_test)
```
### Load a saved model
```
new_model = tf.keras.models.load_model('./LeNet_saved_model/')
new_model.summary()
```
### Analyze Performance
```
# ## Calculate the accuracy for these 5 new images.
# ## For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
# print(X_web.shape)
# print(X_train_prep.shape)
# print(X_train[0].shape)
prediction = new_model.predict(X_web)
probas = np.array(prediction)
labels = np.argmax(probas, axis=-1)
# print(probas)
print(labels)
```
### Read list of class from csv file and store it in a python dictionary
```
import pandas as pd
dt = pd.read_csv('signnames.csv').to_dict()
# print(dt)
# print("\n")
print(dt["SignName"])
print("\n")
class_ids = []
for el in range(0,len(dt["SignName"])):
class_ids.append(dt["SignName"][el])
print(class_ids)
```
### Prediction Confirmation on downloaded images from the web
```
expected_prediction = ['General caution', 'Road work', 'Bumpy road', 'Stop', 'Yield']
prediction_list = []
# for el in range(0,len(labels)):
# print(dt["SignName"][labels[el]])
for el in range(0,len(labels)):
prediction_list.append(dt["SignName"][labels[el]])
print(prediction_list)
```
### Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, print out the model's softmax probabilities to show the **certainty** of the model's predictions (limit the output to the top 5 probabilities for each image). [`tf.nn.top_k`](https://www.tensorflow.org/versions/r0.12/api_docs/python/nn.html#top_k) could prove helpful here.
The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.
`tf.nn.top_k` will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.
Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. `tf.nn.top_k` is used to choose the three classes with the highest probability:
```
# (5, 6) array
a = np.array([[ 0.24879643, 0.07032244, 0.12641572, 0.34763842, 0.07893497,
0.12789202],
[ 0.28086119, 0.27569815, 0.08594638, 0.0178669 , 0.18063401,
0.15899337],
[ 0.26076848, 0.23664738, 0.08020603, 0.07001922, 0.1134371 ,
0.23892179],
[ 0.11943333, 0.29198961, 0.02605103, 0.26234032, 0.1351348 ,
0.16505091],
[ 0.09561176, 0.34396535, 0.0643941 , 0.16240774, 0.24206137,
0.09155967]])
```
Running it through `sess.run(tf.nn.top_k(tf.constant(a), k=3))` produces:
```
TopKV2(values=array([[ 0.34763842, 0.24879643, 0.12789202],
[ 0.28086119, 0.27569815, 0.18063401],
[ 0.26076848, 0.23892179, 0.23664738],
[ 0.29198961, 0.26234032, 0.16505091],
[ 0.34396535, 0.24206137, 0.16240774]]), indices=array([[3, 0, 5],
[0, 1, 4],
[0, 5, 1],
[1, 3, 5],
[1, 4, 3]], dtype=int32))
```
Looking just at the first row we get `[ 0.34763842, 0.24879643, 0.12789202]`, you can confirm these are the 3 largest probabilities in `a`. You'll also notice `[3, 0, 5]` are the corresponding indices.
```
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web.
### Feel free to use as many code cells as needed.
# labels_ = tf.one_hot(labels, len(class_ids))
# print(labels_)
top_k_values, top_k_indices = tf.nn.top_k(prediction, k=5)
print(top_k_values)
print("\n")
print(top_k_indices)
```
### Project Writeup
Once you have completed the code implementation, document your results in a project writeup using this [template](https://github.com/udacity/CarND-Traffic-Sign-Classifier-Project/blob/master/writeup_template.md) as a guide. The writeup can be in a markdown or pdf file.
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
---
## Step 4 (Optional): Visualize the Neural Network's State with Test Images
This section is not required, but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network, you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and the tensorflow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the [LeNet lab's](https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/601ae704-1035-4287-8b11-e2c2716217ad/concepts/d4aca031-508f-4e0b-b493-e7b706120f81) feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper [End-to-End Deep Learning for Self-Driving Cars](https://devblogs.nvidia.com/parallelforall/deep-learning-self-driving-cars/) in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
<figure>
<img src="visualize_cnn.png" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above)</p>
</figcaption>
</figure>
<p></p>
```
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it may be having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 and activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
```
## Recommendation System for MovieLens Dataset using SVD
```
# Import libraries
import numpy as np
import pandas as pd
```
## To load the 'ratings' and 'movies' dataset after uploading them to Jupyter notebook
```
# Reading ratings file
ratings = pd.read_csv("C:\\Users\\black\\Desktop\\PyforDS\\datasets\\movielens\\ratings.csv", usecols=['userId','movieId','rating','timestamp'])
# Reading movies file
movies = pd.read_csv("C:\\Users\\black\\Desktop\\PyforDS\\datasets\\movielens\\movies.csv", usecols=['movieId','title','genres'])
# Print first five rows of movies datset
movies.head()
# Print first five rows of ratings datset
ratings.head()
```
## To find the unique number of users and movies in the 'ratings' dataset
```
n_users = ratings.userId.unique().shape[0]
n_movies = ratings.movieId.unique().shape[0]
print(f'Number of users = {n_users} and Number of movies = {n_movies}')
```
## To create a rating matrix for the 'ratings' dataset
```
Ratings = ratings.pivot(index = 'userId', columns ='movieId', values = 'rating').fillna(0)
Ratings.head()
```
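The pivot step can be illustrated on a tiny hand-made ratings frame (the column names match the MovieLens files; the values are made up):

```python
import pandas as pd

tiny = pd.DataFrame({
    'userId':  [1, 1, 2],
    'movieId': [10, 20, 10],
    'rating':  [4.0, 5.0, 3.0],
})

# Rows become users, columns become movies; unrated cells are filled with 0.
matrix = tiny.pivot(index='userId', columns='movieId', values='rating').fillna(0)
print(matrix.loc[1, 10], matrix.loc[2, 20])  # 4.0 0.0
```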
## To install the scikit-surprise library for implementing SVD
### Run the following command in the Anaconda Prompt to install surprise package
```
#conda install -c conda-forge scikit-surprise
# Import libraries from Surprise package
from surprise import Reader, Dataset, SVD
from surprise.model_selection import cross_validate
# Load Reader library
reader = Reader()
# Load ratings dataset with Dataset library
data = Dataset.load_from_df(ratings[['userId', 'movieId', 'rating']], reader)
# Use the SVD algorithm.
svd = SVD()
# Compute the RMSE of the SVD algorithm.
cross_validate(svd, data, measures=['RMSE', 'MAE'], cv=3, verbose=True)
# Print the head of ratings dataset
ratings.head()
```
## To find all the movies rated 5 stars by the user with userId = 5
```
ratings_1 = ratings[(ratings['userId'] == 5) & (ratings['rating'] == 5)]
ratings_1 = ratings_1.set_index('movieId')
# Index movies by movieId so the join aligns on movie ids rather than row numbers
ratings_1 = ratings_1.join(movies.set_index('movieId'))['title']
ratings_1.head(10)
```
## Train an SVD to predict ratings for the user with userId = 5
```
# Create a shallow copy for the movies dataset
user_5 = movies.copy()
#Reset the index for user_5 dataset
user_5 = user_5.reset_index()
# getting full dataset
data = Dataset.load_from_df(ratings[['userId', 'movieId', 'rating']], reader)
#create a training set for svd
trainset = data.build_full_trainset()
svd.fit(trainset)
#Predict the ratings for user 5
user_5['Estimate_Score'] = user_5['movieId'].apply(lambda x: svd.predict(5, x).est)
#Drop extra columns from the user_5 data frame
user_5 = user_5.drop(['movieId','genres','index'], axis = 1)
# Sort the predicted ratings for user 5 in descending order
user_5 = user_5.sort_values('Estimate_Score', ascending=False)
#Print top 10 recommendations
print(user_5.head(10))
```
# Searching
Linear search using a while loop
```
from typing import Any,List
def linear_search_while(lst: List, value: Any) -> int:
    i = 0
    while i != len(lst) and lst[i] != value:
        i += 1
    if i == len(lst):
        return -1
    return i
l = [1,2,3,4,5,6,7,8,9]
linear_search_while(l,9)
def linear_search_for(lst: List, value: Any) -> int:
    for i in range(len(lst)):
        if lst[i] == value:
            return i
    return -1
l = [1,2,3,4,5,6,7,8,9]
linear_search_for(l,9)
def linear_search_sentinal(lst: List, value: Any) -> int:
    lst.append(value)  # sentinel: guarantees the loop terminates
    i = 0
    while lst[i] != value:
        i += 1
    lst.pop()  # remove the sentinel
    if i == len(lst):
        return -1
    return i
l = [1,2,3,4,5,6,7,8,9]
linear_search_sentinal(l,9)
import time
from typing import Callable, Any
def time_it(search: Callable[[list,Any],Any],L:list,v:Any):
t1 = time.perf_counter()
search(L,v)
t2 = time.perf_counter()
return (t2-t1) *1000.0
l = [1,2,3,4,5,6,7,8,9]
time_it(linear_search_while,l,5)
```
## Binary search
Search by repeatedly halving the range under consideration.
```
def binary_search(lst: list, value: Any) -> int:
    i = 0
    j = len(lst) - 1
    while i != j + 1:
        m = (i + j) // 2
        if lst[m] < value:
            i = m + 1
        else:
            j = m - 1
    if 0 <= i < len(lst) and lst[i] == value:
        return i
    return -1
if __name__ == '__main__':
import doctest
doctest.testmod()
```
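The same halving idea is implemented by the standard library's `bisect` module; a short sketch:

```python
from bisect import bisect_left
from typing import Any

def bisect_search(lst: list, value: Any) -> int:
    """Return the index of value in sorted lst, or -1 if it is absent."""
    i = bisect_left(lst, value)  # leftmost position where value could be inserted
    if i < len(lst) and lst[i] == value:
        return i
    return -1

l = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(bisect_search(l, 9))   # 8
print(bisect_search(l, 10))  # -1
```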
## Selection sort
Scan the entire unsorted part for the smallest value and place it just to the right of the sorted part; repeat until every value is in order. An n-length sequence is scanned n times, so the complexity is O(n^2).
```
def selection_sort(l:list):
for i in range(len(l)):
idx = l.index(min(l[i:]),i)
dummy = l[i]
l[i] = l[idx]
l[idx] = dummy
return l
l = [7,16,3,25,2,6,1,7,3]
print(selection_sort(l))
```
## Insertion sort
Traverse the list, inserting each value into its correct position within the already-sorted part.
```
# Insert L[b] into its correct position within the already-sorted region L[:b+1]
def insert(L: list, b: int) -> None:
i = b
while i != 0 and L[i - 1] >= L[b]:
i = i - 1
value = L[b]
del L[b]
L.insert(i, value)
def insertion_sort(L: list) -> None:
i = 0
while i != len(L):
insert(L, i)
i = i + 1
L = [ 3, 4, 6, -1, 2, 5 ]
print(L)
insertion_sort(L)
print(L)
```
## Merge sort
```
# Merge two lists into a single sorted list and return it
def merge(L1: list, L2: list) -> list:
newL = []
i1 = 0
i2 = 0
# [ 1, 1, 2, 3, 4, 5, 6, 7 ]
# [ 1, 3, 4, 6 ] [ 1, 2, 5, 7 ]
# i1
# i2
while i1 != len(L1) and i2 != len(L2):
if L1[i1] <= L2[i2]:
newL.append(L1[i1])
i1 += 1
else:
newL.append(L2[i2])
i2 += 1
newL.extend(L1[i1:])
newL.extend(L2[i2:])
return newL
def merge_sort(L: list) -> None: # [ 1, 3, 4, 6, 1, 2, 5, 7 ]
workspace = []
for i in range(len(L)):
workspace.append([L[i]]) # [ [1], [3], [4], [6], [1], [2], [5], [7] ]
i = 0
while i < len(workspace) - 1:
L1 = workspace[i] # [ [1], [3], [4], [6], [1], [2], [5], [7], [1,3],[4,6],[1,2],[5,7], [1,3,4,6],[1,2,5,7],[1,1,2,3,4,5,6,7] ]
L2 = workspace[i + 1]
newL = merge(L1, L2)
workspace.append(newL)
i += 2
if len(workspace) != 0:
L[:] = workspace[-1][:]
import time, random
def built_in(L: list) -> None:
L.sort()
def print_times(L: list) -> None:
print(len(L), end='\t')
for func in (selection_sort, insertion_sort, merge_sort, built_in):
if func in (selection_sort, insertion_sort, merge_sort) and len(L) > 10000:
continue
L_copy = L[:]
t1 = time.perf_counter()
func(L_copy)
t2 = time.perf_counter()
print("{0:7.1f}".format((t2 - t1) * 1000.0), end="\t")
print()
for list_size in [ 10, 1000, 2000, 3000, 4000, 5000, 10000 ]:
L = list(range(list_size))
random.shuffle(L)
print_times(L)
```
# Object-Oriented Programming
`isinstance(object, class)` returns whether the given object is an instance of the given class.
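A minimal illustration with two made-up classes:

```python
class Animal:
    pass

class Dog(Animal):  # Dog inherits from Animal
    pass

d = Dog()
# isinstance is True for the class and all of its parent classes,
# whereas a direct type comparison only matches the exact class.
print(isinstance(d, Dog))     # True
print(isinstance(d, Animal))  # True
print(type(d) == Animal)      # False
```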
```
from typing import List,Any
class Book:
def num_authors(self) -> int:
return len(self.authors)
    def __init__(self, title: str, authors: List[str], publisher: str, isbn: str, price: float):  # constructor
        self.title = title
        self.authors = authors[:]  # copy with [:]: lists are passed by reference, so without copying, changes made to the list outside would also change this attribute
        self.publisher = publisher
        self.isbn = isbn
        self.price = price
    def print_authors(self) -> None:
        for author in self.authors:
            print(author)
def __str__(self) -> str:
return 'Title : {}\nAuthors : {}'.format(self.title,self.authors)
    def __eq__(self, other: Any) -> bool:
        if isinstance(other, Book):
            return self.isbn == other.isbn
        return False
book = Book('My book', ['aaa','bbb','ccc'], '한빛출판사', '123-456-789', 300000.0)
book.print_authors()
print(book.num_authors())
print(book)
newBook = Book('My book', ['aaa','bbb','ccc'], '한빛출판사', '123-456-789', 300000.0)
print(book==newBook)
```
When passing a reference type, have the receiver take a copy of the value rather than a reference to it.
Encapsulation: putting data and the code that uses that data in one place, while hiding the details of exactly how it works.
Polymorphism: having more than one form; an expression involving a variable does different things depending on the type of the object the variable refers to.
Inheritance: a new class inherits from a parent class (the built-in object class, or a user-defined class and its attributes).
```
class Member:
def __init__(self,name:str,address:str,email:str):
self.name = name
self.address = address
self.email = email
class Faculty(Member):
def __init__(self,name:str,address:str,email:str,faculty_num:str):
super().__init__(name,address,email)
self.faculty_number = faculty_num
self.courses_teaching = []
class Atom:
    '''An atom with a number, a symbol, and (X, Y, Z) coordinates'''
def __init__(self, num: int, sym: str, x: float, y: float, z: float) -> None:
self.num = num
self.sym = sym
self.center = (x, y, z)
    def __str__(self) -> str:
        '''Return a string of the form (SYMBOL, X, Y, Z)'''
        return '({}, {}, {}, {})'.format(self.sym, self.center[0], self.center[1], self.center[2])
def translate(self, x: float, y: float, z: float) -> None:
self.center = (self.center[0] + x, self.center[1] + y, self.center[2] + z)
class Molecule:
    '''A molecule with a name and a list of atoms'''
def __init__(self, name: str) -> None:
self.name = name
self.atoms = []
def add(self, a: Atom) -> None:
self.atoms.append(a)
def __str__(self) -> str:
        '''Return a string of the form (NAME, (ATOM1, ATOM2, ...))'''
atom_list = ''
for a in self.atoms:
atom_list = atom_list + str(a) + ', '
        atom_list = atom_list[:-2]  # remove the trailing ', '
return '({}, ({}))'.format(self.name, atom_list)
def translate(self, x: float, y: float, z: float) -> None:
for a in self.atoms:
a.translate(x, y, z)
ammonia = Molecule("AMMONIA")
ammonia.add(Atom(1, "N", 0.257, -0.363, 0.0))
ammonia.add(Atom(2, "H", 0.257, 0.727, 0.0))
ammonia.add(Atom(3, "H", 0.771, -0.727, 0.890))
ammonia.add(Atom(4, "H", 0.771, -0.727, -0.890))
ammonia.translate(0, 0, 0.2)
#assert ammonia.atoms[0].center[0] == 0.257
#assert ammonia.atoms[0].center[1] == -0.363
assert ammonia.atoms[0].center[2] == 0.2
print(ammonia)
```
```
import pandas as pd
# Load the audio files of the fan folder into a dataframe
import os
class File_charge:
    """
    The File_charge class builds a dataframe containing the file paths of the audio files
    found in each subfolder of the dataset.
    Parameters:
    path : path to the directory of audio files
    example : path = "C:/Users/romua/Documents/Formation_data_scientist/ASD/dataset/fan/train/"
    """
def __init__(self, path):
self.path = path
def load_file(self):
        """
        load_file returns a dataframe made of the paths to the audio files
        contained in a subfolder of the dataset.
        """
dirs = os.listdir(self.path)
df = list()
for dir in dirs:
#df.append((dir))
df.append((self.path + dir))
df = pd.DataFrame(df, columns = ['audio_file'])
# df = df.reset_index()
return df
# Build the dataframe of paths to the audio files contained in dataset/fan/train
df = File_charge("C:/Users/romua/Documents/Formation_data_scientist/ASD/dataset/fan/train/")
df = df.load_file()
# Create a csv file with the paths to the fan/train audio data
df['machine_type'] = (df.iloc[:,0]).apply(lambda x: x.split('/')[7])
df['machine_id'] = ((df.iloc[:,0]).apply(lambda x: x.split('/')[-1]))
df['machine_id'] = (df['machine_id']).apply(lambda x: x.split('_')[2])
df['machine_class'] = (df.iloc[:,0]).apply(lambda x: x.split('/')[-1])
df['machine_class'] = (df['machine_class']).apply(lambda x: x.split('_')[0])
df['label'] = df['machine_class'].replace({'normal': 1, 'anomaly': 0})  # map the two classes to distinct binary labels
df.to_csv('fichier_fan_train.csv')
df = pd.read_csv('fichier_fan_train.csv', index_col=0)
df.head()
df_pump_train = pd.read_csv('fichier_pump_train.csv', index_col=0)
display(df_pump_train.head())
print(df_pump_train.shape)
print(df_pump_train['label'].value_counts())
df_fan_train = pd.read_csv('fichier_fan_train.csv', index_col=0)
display(df_fan_train.head())
print(df_fan_train.shape)
df_valve_train = pd.read_csv('fichier_valve_train.csv', index_col=0)
display(df_valve_train.head())
print(df_valve_train.shape)
df_slider_train = pd.read_csv('fichier_slider_train.csv', index_col=0)
display(df_slider_train.head())
print(df_slider_train.shape)
# Build the training file containing the sound data of the fan, pump, slider and valve machines
df = pd.concat((df_fan_train, df_slider_train, df_pump_train, df_valve_train), axis=0)
df.head()
df.to_csv('fichier_entrainnement_pump.csv')  # to_csv returns None, so do not reassign df
# df = pd.read_csv('fichier_entrainnement_pump_normal.csv', index_col=0)
df.head()
```
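As a sanity check on the `split('/')` indexing above, here is a sketch with a hypothetical file name following the MIMII-style `<class>_id_<id>_<counter>.wav` pattern (the path and file name are made up for illustration):

```python
path = ("C:/Users/romua/Documents/Formation_data_scientist/ASD/"
        "dataset/fan/train/normal_id_00_00000042.wav")
parts = path.split('/')
print(parts[7])                 # 'fan'    -> machine_type
print(parts[-1].split('_')[2])  # '00'     -> machine_id
print(parts[-1].split('_')[0])  # 'normal' -> machine_class
```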
```
NAME = "ex15_pseudo3"
```
## colab
```
!nvidia-smi
# Mount Google Drive
import sys
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/drive')
import os, sys
if "google.colab" in sys.modules:
CP_DIR = f"/content/drive/MyDrive/Work/probspace_religious_art/notebook/{NAME}_colab/output"
OUTPUT_DIR = "output"
INPUT_DIR = "./eda_output/output"
    PSEUDO_CSV = "/content/drive/MyDrive/Work/probspace_religious_art/notebook/local_data/ex11_ex16_big_logit_test_is06789_11.csv"  # from ex11_ex16_big_logit_test.ipynb
sys.path.append("/content/drive/MyDrive/Work/probspace_religious_art/code")
elif "kaggle_web_client" in sys.modules:
pass
elif "/kqi/output" in os.getcwd():
pass
else:
# local
CP_DIR = "output"
OUTPUT_DIR = "output"
INPUT_DIR = "../../eda/output"
    PSEUDO_CSV = "../../first/make_pseudo_label/ex11_ex16_big_logit_test_is06789_11.csv"  # from ex11_ex16_big_logit_test.ipynb
sys.path.append("../../../code")
sys.path.append('../../../Git/Ranger-Deep-Learning-Optimizer')
sys.path.append('../../../Git/pytorch-optimizer')
from mix_aug import cutmix, fmix, snapmix, SnapMixLoss, resizemix
os.makedirs(CP_DIR, exist_ok=True)
os.makedirs(OUTPUT_DIR, exist_ok=True)
# Copy the zip from Drive and unzip it
if os.getcwd() == "/content" and os.path.exists(INPUT_DIR) == False:
!mkdir -p "./eda_output"
!cp -r "/content/drive/MyDrive/Work/probspace_religious_art/notebook/eda/output.zip" "./eda_output"
!unzip -qq "./eda_output/output.zip" -d "./eda_output"
pass
# Install the libraries missing on Colab
import os, sys
if ("google.colab" in sys.modules) or ("kaggle_web_client" in sys.modules) or ("/kqi/output" in os.getcwd()):
!pip install --upgrade albumentations
!pip install --upgrade timm
!pip install torch-optimizer
pass
```
## data load
```
import pandas as pd
# ====================================================
# Data Load
# ====================================================
def get_train_file_path(image_id):
return f"{INPUT_DIR}/train/{str(image_id)}.jpg"
train = pd.read_csv(INPUT_DIR + "/train.csv")
train["file_path"] = train["image_id"].apply(get_train_file_path)
n_classes = 13
import numpy as np
import pandas as pd
# ====================================================
# Pseudo-label data
# ====================================================
def get_test_file_path(image_id):
return f"{INPUT_DIR}/test/{str(image_id)}.jpg"
pseudo_df = pd.read_csv(PSEUDO_CSV)
pseudo_df = pseudo_df.rename(columns={"pred_label":"label"})
pseudo_df = pseudo_df[["image_id", "label", "file_path"]]
pseudo_df["fold"] = np.nan
pseudo_df["file_path"] = pseudo_df["image_id"].apply(get_test_file_path)
```
## train
```
import os, yaml, shutil
# ====================================================
# Param
# ====================================================
epochs = 50
class Config:
def __init__(self):
self.name = NAME
self.debug = False
self.size = 224
self.batch_size = 16
self.num_workers = os.cpu_count() if ("google.colab" in sys.modules) or ("kaggle_web_client" in sys.modules) or ("/kqi/output" in os.getcwd()) else 0
self.seeds = [0,1,2]
self.n_fold = 5
self.trn_fold = [0,1,2,3,4]
self.n_classes = n_classes
self.lr = 1e-3
self.min_lr = 1e-6
self.weight_decay = 0 # 1e-6
self.optimizer = "radam"
self.scheduler = "CosineAnnealingLR"
self.T_max = epochs
self.gradient_accumulation_steps = 1
self.max_grad_norm = 5
self.model_name = "swin_base_patch4_window7_224_in22k" # ex15
self.load_model_path = "none"
self.is_load_opt = True
self.epochs = epochs
        self.print_freq = 10000  # number of steps between training-progress prints
self.label_smoothing = 0.0
        self.mix_decision_th = 0.5  # probability of applying cutmix-style augmentation
self.mixmethod = "cutmix"
self.mix_alpha = 1.0
        self.pseudo_label = str(pseudo_df.shape) if 'pseudo_df' in globals() else None  # shape of the pseudo-label data
CFG = Config()
with open(OUTPUT_DIR + "/cfg.yaml", "w") as wf:
yaml.dump(CFG.__dict__, wf)
import os
import sys
import cv2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
from torch.utils.data import DataLoader, Dataset
import albumentations as A
from albumentations import Compose
from albumentations.pytorch import ToTensorV2
# ====================================================
# Dataset
# ====================================================
class TrainDataset(Dataset):
def __init__(self, df, transform=None):
super().__init__()
self.df = df
self.file_paths = df["file_path"].values
self.labels = df["label"].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_path = self.file_paths[idx]
image = cv2.imread(file_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32)
if self.transform:
augmented = self.transform(image=image)
image = augmented["image"]
label = self.labels[idx]
return image, torch.from_numpy(np.array(label)).long()
class TestDataset(Dataset):
def __init__(self, df, transform=None):
super().__init__()
self.df = df
self.file_paths = df["file_path"].values
self.transform = transform
def __len__(self):
return len(self.df)
def __getitem__(self, idx):
file_path = self.file_paths[idx]
image = cv2.imread(file_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB).astype(np.float32)
if self.transform:
augmented = self.transform(image=image)
image = augmented["image"]
return image
def get_transforms(*, data):
if data == "train":
return A.Compose(
[
A.Resize(CFG.size, CFG.size),
A.HorizontalFlip(p=0.5),
A.ShiftScaleRotate(p=0.5),
A.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0,),
                # changing luminance or brightness before Normalize can make images go completely black in some cases
A.OneOf([
A.ToSepia(p=0.5),
A.ToGray(p=0.5),
], p=0.5),
A.CoarseDropout(p=0.5),
A.Cutout(p=0.5),
ToTensorV2(),
]
)
elif data == "valid":
return Compose(
[
A.Resize(CFG.size, CFG.size),
A.Normalize(
mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0,
p=1.0,
),
ToTensorV2(),
]
)
def collate(batch):
    """Collate function that can be passed to the DataLoader to assemble a batch"""
images, labels = list(zip(*batch))
images = torch.stack(images)
labels = torch.stack(labels)
return images, labels.long()
# ====================================================
# Library
# ====================================================
import sys
import os
import gc
import re
import math
import time
import random
import yaml
import shutil
import glob
import pickle
import pathlib
from pathlib import Path
from contextlib import contextmanager
from collections import defaultdict, Counter
from distutils.dir_util import copy_tree
import scipy as sp
import numpy as np
import pandas as pd
from tqdm.auto import tqdm
from sklearn.metrics import accuracy_score, log_loss
from sklearn import preprocessing
from sklearn.model_selection import StratifiedKFold, GroupKFold, KFold
from functools import partial
import cv2
from PIL import Image
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import Adam, SGD
import torchvision.models as models
from torch.nn.parameter import Parameter
from torch.utils.data import DataLoader, Dataset
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence  # utilities that pad sequences to a common length
from torch.optim.lr_scheduler import (
CosineAnnealingWarmRestarts,
CosineAnnealingLR,
ReduceLROnPlateau,
)
from torch.cuda.amp import autocast, GradScaler
from torch_optimizer import RAdam, Lookahead
import timm
import warnings
warnings.filterwarnings("ignore")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# ====================================================
# Helper functions
# ====================================================
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def asMinutes(s):
m = math.floor(s / 60)
s -= m * 60
return "%dm %ds" % (m, s)
def timeSince(since, percent):
now = time.time()
s = now - since
es = s / (percent)
rs = es - s
return "%s (remain %s)" % (asMinutes(s), asMinutes(rs))
class LabelSmoothingCrossEntropy(nn.Module):
# https://build-medical-ai.com/2021/02/21/label-smoothing%EF%BC%88%E3%83%A9%E3%83%99%E3%83%AB%E3%82%B9%E3%83%A0%E3%83%BC%E3%82%B8%E3%83%B3%E3%82%B0%EF%BC%89%E3%82%92pytorch%E3%81%A7%E5%AE%9F%E8%A3%85%E3%81%99%E3%82%8B/
def __init__(self, epsilon=0.1, reduction='mean'):
super().__init__()
self.epsilon = epsilon
self.reduction = reduction
def forward(self, preds, target):
n = preds.size()[-1]
log_preds = F.log_softmax(preds, dim=-1)
loss = LabelSmoothingCrossEntropy.reduce_loss(-log_preds.sum(dim=-1), self.reduction)
nll = F.nll_loss(log_preds, target, reduction=self.reduction)
return LabelSmoothingCrossEntropy.linear_combination(nll, loss/n, self.epsilon)
@staticmethod
def linear_combination(x, y, epsilon):
return (1 - epsilon) * x + epsilon * y
@staticmethod
def reduce_loss(loss, reduction='mean'):
return loss.mean() if reduction == 'mean' else loss.sum() if reduction == 'sum' else loss
def train_fn(
train_loader, model, criterion, optimizer, epoch, scheduler, device, scaler
):
batch_time = AverageMeter()
data_time = AverageMeter()
losses = AverageMeter()
# switch to train mode
model.train()
start = end = time.time()
grad_norm = 0.0
global_step = 0
for step, (images, labels) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
images = images.to(device)
labels = labels.to(device)
batch_size = images.size(0)
with autocast():
# ====================================================
# cutmix/fmix/snapmix
# ====================================================
            mix_decision = np.random.rand() if epoch < CFG.epochs - 5 else 1.0  # disable mix augmentations for the last 5 epochs
if mix_decision < CFG.mix_decision_th:
if CFG.mixmethod == "cutmix":
x, y_mixs = cutmix(images, labels.long(), CFG.mix_alpha)
y_hat = model(x.float())
loss = criterion(y_hat, y_mixs[0]) * y_mixs[2] + criterion(y_hat, y_mixs[1]) * (1.0 - y_mixs[2])
elif CFG.mixmethod == "fmix":
x, y_mixs = fmix(images, labels.long(), alpha=CFG.mix_alpha, decay_power=5.0, shape=(CFG.size, CFG.size))
y_hat = model(images.float())
loss = criterion(y_hat, y_mixs[0]) * y_mixs[2] + criterion(y_hat, y_mixs[1]) * (1.0 - y_mixs[2])
elif CFG.mixmethod == "resizemix":
x, y_mixs = resizemix(images, labels.long(), alpha=CFG.mix_alpha)
y_hat = model(images.float())
loss = criterion(y_hat, y_mixs[0]) * y_mixs[2] + criterion(y_hat, y_mixs[1]) * (1.0 - y_mixs[2])
else:
x = images
y_hat = model(images)
                    # --- display images (to inspect the mixed images) ---
if CFG.debug:
try:
print("mix_decision:", mix_decision)
fig = plt.figure(figsize=(16, 16))
for i in range(5):
print("y_hat:", y_hat[i])
ax = fig.add_subplot(1, 5, i + 1, xticks=[], yticks=[])
im = x[i].to("cpu").numpy().transpose(1, 2, 0)
plt.imshow(im)
plt.show(); plt.clf(); plt.close()
except:
pass
# -----------------------------------------
else:
logits = model(images)
loss = criterion(logits, labels)
# record loss
losses.update(loss.item(), batch_size)
if CFG.gradient_accumulation_steps > 1:
loss = loss / CFG.gradient_accumulation_steps
scaler.scale(loss).backward()
if (step + 1) % CFG.gradient_accumulation_steps == 0:
scaler.unscale_(optimizer)
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), CFG.max_grad_norm, norm_type=2.0)
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
global_step += 1
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if step % CFG.print_freq == 0 or step == (len(train_loader) - 1):
print(
"Epoch: [{0}][{1}/{2}] "
"Data {data_time.val:.3f} ({data_time.avg:.3f}) "
"Elapsed {remain:s} "
"Loss: {loss.val:.4f}({loss.avg:.4f}) "
"Grad Norm: {grad_norm:.4f} "
"LR: {lr:.4e} ".format(
epoch + 1,
step,
len(train_loader),
batch_time=batch_time,
data_time=data_time,
loss=losses,
remain=timeSince(start, float(step + 1) / len(train_loader)),
grad_norm=grad_norm,
lr=scheduler.get_lr()[0],
)
)
return losses.avg
def valid_fn(valid_loader, model, device):
batch_time = AverageMeter()
data_time = AverageMeter()
# switch to evaluation mode
model.eval()
preds = []
start = end = time.time()
for step, (images) in enumerate(valid_loader):
# measure data loading time
data_time.update(time.time() - end)
images = images.to(device)
batch_size = images.size(0)
with torch.no_grad():
with autocast():
                #predictions = model.forward_argmax(images)  # to output label ids
                predictions = model.forward_softmax(images)  # to output class probabilities
                #predictions = model.forward(images)  # to output raw logits
pred = predictions.detach().cpu().numpy()
preds.append(pred)
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
if step % CFG.print_freq == 0 or step == (len(valid_loader) - 1):
print(
"EVAL: [{0}/{1}] "
"Data {data_time.val:.3f} ({data_time.avg:.3f}) "
"Elapsed {remain:s} ".format(
step,
len(valid_loader),
batch_time=batch_time,
data_time=data_time,
remain=timeSince(start, float(step + 1) / len(valid_loader)),
)
)
preds = np.concatenate(preds)
return preds
# ====================================================
# Train loop
# ====================================================
def train_loop(folds, fold, seed):
LOGGER.info(f"==================== fold: {fold}, seed: {seed} training ====================")
# ====================================================
# loader
# ====================================================
trn_idx = folds[folds["fold"] != fold].index
val_idx = folds[folds["fold"] == fold].index
train_folds = folds.loc[trn_idx].reset_index(drop=True)
valid_folds = folds.loc[val_idx].reset_index(drop=True)
valid_labels = valid_folds["label"].values
# ====================================================
    # Add the pseudo-labeled data
# ====================================================
if 'pseudo_df' in globals():
        train_folds = pd.concat([train_folds, pseudo_df])  # DataFrame.append was removed in pandas 2.0
train_dataset = TrainDataset(train_folds, transform=get_transforms(data="train"))
valid_dataset = TestDataset(valid_folds, transform=get_transforms(data="valid"))
train_loader = DataLoader(
train_dataset,
batch_size=CFG.batch_size,
shuffle=True,
num_workers=CFG.num_workers,
pin_memory=True,
drop_last=True,
collate_fn=collate,
)
valid_loader = DataLoader(
valid_dataset,
batch_size=CFG.batch_size,
shuffle=False,
num_workers=CFG.num_workers,
pin_memory=True,
drop_last=False,
)
# ====================================================
# scheduler
# ====================================================
def get_scheduler(optimizer):
if CFG.scheduler == "ReduceLROnPlateau":
scheduler = ReduceLROnPlateau(
optimizer,
mode="min",
factor=CFG.factor,
patience=CFG.patience,
verbose=True,
eps=CFG.eps,
)
elif CFG.scheduler == "CosineAnnealingLR":
scheduler = CosineAnnealingLR(
optimizer, T_max=CFG.T_max, eta_min=CFG.min_lr, last_epoch=-1
)
elif CFG.scheduler == "CosineAnnealingWarmRestarts":
scheduler = CosineAnnealingWarmRestarts(
optimizer, T_0=CFG.T_0, T_mult=1, eta_min=CFG.min_lr, last_epoch=-1
)
return scheduler
# ====================================================
# model & optimizer
# ====================================================
model = TimmModel(CFG.n_classes, model_name=CFG.model_name, pretrained=True)
model.to(device)
if CFG.optimizer == "adam":
optimizer = Adam(
model.parameters(), lr=CFG.lr, amsgrad=False, weight_decay=CFG.weight_decay
)
elif CFG.optimizer == "radam":
optimizer = RAdam(model.parameters(), lr=CFG.lr, weight_decay=CFG.weight_decay)
optimizer = Lookahead(optimizer, alpha=0.5, k=5)
scheduler = get_scheduler(optimizer)
scaler = GradScaler()
if os.path.exists(CFG.load_model_path):
# モデルロード
LOGGER.info("=> loading checkpoint '{}'".format(CFG.load_model_path))
states = torch.load(CFG.load_model_path, map_location=torch.device("cpu"))
model.load_state_dict(states["model"])
model.to(device)
if CFG.is_load_opt:
LOGGER.info("=> loading optimizer and scheduler")
optimizer.load_state_dict(states["optimizer"])
scheduler.load_state_dict(states["scheduler"])
# ====================================================
# loop
# ====================================================
if CFG.label_smoothing > 0.0:
criterion = LabelSmoothingCrossEntropy(epsilon=CFG.label_smoothing)
else:
        criterion = nn.CrossEntropyLoss()  # classes to exclude from the loss can be set with ignore_index
best_score = -1 # np.inf
for epoch in range(CFG.epochs):
start_time = time.time()
# train
avg_loss = train_fn(
train_loader, model, criterion, optimizer, epoch, scheduler, device, scaler
)
# eval
preds = valid_fn(valid_loader, model, device)
        # if the predictions are logits/probabilities, convert them to label ids
pred_labels = preds.argmax(1) if preds.ndim > 1 else preds
LOGGER.info(f"labels: {valid_labels[:5]}")
LOGGER.info(f"pred_labels: {pred_labels[:5]}")
# scoring
score = get_score(valid_labels, pred_labels)
elapsed = time.time() - start_time
LOGGER.info(
f"Epoch {epoch+1} - avg_train_loss: {avg_loss:.4f} lr: {scheduler.get_lr()[0]:.4e} time: {elapsed:.0f}s"
)
LOGGER.info(f"Epoch {epoch+1} - Score: {score:.4f}")
if isinstance(scheduler, ReduceLROnPlateau):
scheduler.step(score)
elif isinstance(scheduler, CosineAnnealingLR):
scheduler.step()
elif isinstance(scheduler, CosineAnnealingWarmRestarts):
scheduler.step()
if score > best_score:
best_score = score
LOGGER.info(f"Epoch {epoch+1} - Save Best Score: {best_score:.4f} Model")
best_pth = OUTPUT_DIR + f"/fold{fold}_seed{seed}_best.pth"
torch.save(
{
"model": model.state_dict(),
"optimizer": optimizer.state_dict(),
"scheduler": scheduler.state_dict(),
"preds": preds,
},
best_pth,
)
val_pred_df = pd.DataFrame(
{"id": val_idx, "label": valid_labels, "pred": pred_labels}
)
if preds.ndim > 1:
        val_pred_df = pd.concat([val_pred_df, pd.DataFrame(preds)], axis=1)  # also keep the class probabilities
return val_pred_df
# ====================================================
# Utils
# ====================================================
def get_score(y_true, y_pred):
return accuracy_score(y_true, y_pred)
def init_logger(log_file='train.log'):
    """Set up the training log file"""
from logging import getLogger, INFO, FileHandler, Formatter, StreamHandler
logger = getLogger(__name__)
logger.setLevel(INFO)
handler1 = StreamHandler()
handler1.setFormatter(Formatter("%(message)s"))
handler2 = FileHandler(filename=log_file)
handler2.setFormatter(Formatter("%(message)s"))
logger.addHandler(handler1)
logger.addHandler(handler2)
return logger
def seed_torch(seed=42):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
# ====================================================
# CV split
# ====================================================
def cv_split(df, seed):
folds = df.copy()
cv = StratifiedKFold(n_splits=CFG.n_fold, shuffle=True, random_state=seed)
for j, (train_idx, valid_idx) in enumerate(cv.split(df, df["label"])):
folds.loc[valid_idx, "fold"] = int(j)
folds["fold"] = folds["fold"].astype(int)
print(folds.groupby(["fold"]).size())
return folds
# ====================================================
# Model
# ====================================================
class TimmModel(nn.Module):
def __init__(self, n_classes, model_name="resnet18", pretrained=True):
super().__init__()
self.cnn = timm.create_model(model_name, pretrained=pretrained)
if "efficient" in model_name:
self.cnn.classifier = nn.Linear(self.cnn.classifier.in_features, n_classes)
elif "nfnet" in model_name:
self.cnn.head.fc = nn.Linear(self.cnn.head.fc.in_features, n_classes)
elif "vit" in model_name:
self.cnn.head = nn.Linear(self.cnn.head.in_features, n_classes)
elif "tnt" in model_name:
self.cnn.head = nn.Linear(self.cnn.head.in_features, n_classes)
elif "swin" in model_name:
self.cnn.head = nn.Linear(self.cnn.head.in_features, n_classes)
elif "cait" in model_name:
self.cnn.head = nn.Linear(self.cnn.head.in_features, n_classes)
elif "mixer" in model_name:
self.cnn.head = nn.Linear(self.cnn.head.in_features, n_classes)
else:
self.cnn.fc = nn.Linear(self.cnn.fc.in_features, n_classes)
def forward(self, x):
return self.cnn(x)
def forward_softmax(self, x):
return torch.softmax(self.cnn(x), 1)
def forward_argmax(self, x):
return self.cnn(x).argmax(1)
# ====================================================
# LOGGER
# ====================================================
LOGGER = init_logger(OUTPUT_DIR + "/train.log")
# ====================================================
# main
# ====================================================
def main(train):
for seed in CFG.seeds:
seed_torch(seed=seed)
if CFG.debug:
CFG.epochs = 2
train = train.sample(n=300, random_state=seed).reset_index(drop=True)
folds = cv_split(train, seed)
oof_df = None
for fold in range(CFG.n_fold):
if fold in CFG.trn_fold:
val_pred_df = train_loop(folds, fold, seed)
val_pred_df["fold"] = fold
if oof_df is None:
oof_df = val_pred_df
else:
                    oof_df = pd.concat([oof_df, val_pred_df])
oof_df.to_csv(OUTPUT_DIR + f"/oof_seed{seed}.csv", index=False)
#display(oof_df)
LOGGER.info(f"\noof score: {get_score(oof_df['label'].values, oof_df['pred'].values)}\n")
        # On Colab, writing many files to Drive in a short time can raise errors, so save everything at the end
        # Copy the output directory to Drive
if "google.colab" in sys.modules:
copy_tree(OUTPUT_DIR, CP_DIR)
if __name__ == '__main__':
print("timm version:", timm.__version__)
print(device)
main(train)
LOGGER.info("\ntrain finish!!!")
```
## evaluation
```
# ====================================================
# Eval OOF
# ====================================================
import torch
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
def get_score(y_true, y_pred):
return accuracy_score(y_true, y_pred)
def eval_oof_df(oof_df):
    """Visualize the OOF score and confusion matrix"""
score = get_score(oof_df['label'].values, oof_df['pred'].values)
print("oof score:", score)
plt.figure(figsize=(10, 8))
sns.countplot(y=oof_df["pred"], orient="v")
plt.title("Prediction distribution")
plt.show()
print(classification_report(oof_df["label"], oof_df["pred"]))
plt.figure(figsize=(10, 8))
sns.heatmap(
confusion_matrix(oof_df["label"], oof_df["pred"]),
annot=True,
cmap="Blues",
)
plt.title("OOF confusion matrix")
plt.show()
def get_soft_avg_oof_df():
    """Load the per-seed OOF files and simply average the class probabilities"""
oof_avg = None
for seed in CFG.seeds:
oof_df = pd.read_csv(OUTPUT_DIR + f"/oof_seed{seed}.csv")
df = oof_df[["id", "label"]]
preds = None
for fold in range(CFG.n_fold):
states = torch.load(OUTPUT_DIR + f"/fold{CFG.trn_fold[fold]}_seed{seed}_best.pth", map_location=torch.device("cpu"))
preds = states["preds"] if preds is None else np.concatenate([preds, states["preds"]])
preds = pd.DataFrame(preds)
df = df.join(preds).sort_values(by='id')
df = df.reset_index(drop=True)
oof_avg = df if oof_avg is None else oof_avg + df
oof_avg = oof_avg / len(CFG.seeds)
label = oof_avg["label"].astype(int).values
pred_label = np.argmax(oof_avg[list(range(CFG.n_classes))].values, 1)
oof_avg_label = pd.DataFrame({"label": label, "pred": pred_label})
return oof_avg, oof_avg_label
if __name__ == '__main__':
for seed in CFG.seeds:
print("="*50, f"seed{seed} oof", "="*50)
oof_df = pd.read_csv(OUTPUT_DIR + f"/oof_seed{seed}.csv")
eval_oof_df(oof_df)
print("="*50, "soft_avg_oof", "="*50)
oof_avg, oof_avg_label = get_soft_avg_oof_df()
eval_oof_df(oof_avg_label)
oof_avg.to_csv(OUTPUT_DIR + f"/soft_avg_oof.csv", index=False)
    # save to Drive
if "google.colab" in sys.modules:
shutil.copyfile(OUTPUT_DIR + f"/soft_avg_oof.csv", CP_DIR + f"/{NAME}_soft_avg_oof.csv")
```
```
!npm install pixi.js
# Base Data Science snippet
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
import time
from tqdm import tqdm_notebook
%matplotlib inline
%load_ext autoreload
%autoreload 2
%%javascript
import * as PIXI from 'pixi.js';
const app = new PIXI.Application({ antialias: true });
document.body.appendChild(app.view);
const graphics = new PIXI.Graphics();
// Rectangle
graphics.beginFill(0xDE3249);
graphics.drawRect(50, 50, 100, 100);
graphics.endFill();
// Rectangle + line style 1
graphics.lineStyle(2, 0xFEEB77, 1);
graphics.beginFill(0x650A5A);
graphics.drawRect(200, 50, 100, 100);
graphics.endFill();
// Rectangle + line style 2
graphics.lineStyle(10, 0xFFBD01, 1);
graphics.beginFill(0xC34288);
graphics.drawRect(350, 50, 100, 100);
graphics.endFill();
// Rectangle 2
graphics.lineStyle(2, 0xFFFFFF, 1);
graphics.beginFill(0xAA4F08);
graphics.drawRect(530, 50, 140, 100);
graphics.endFill();
// Circle
graphics.lineStyle(0); // draw a circle, set the lineStyle to zero so the circle doesn't have an outline
graphics.beginFill(0xDE3249, 1);
graphics.drawCircle(100, 250, 50);
graphics.endFill();
// Circle + line style 1
graphics.lineStyle(2, 0xFEEB77, 1);
graphics.beginFill(0x650A5A, 1);
graphics.drawCircle(250, 250, 50);
graphics.endFill();
// Circle + line style 2
graphics.lineStyle(10, 0xFFBD01, 1);
graphics.beginFill(0xC34288, 1);
graphics.drawCircle(400, 250, 50);
graphics.endFill();
// Ellipse + line style 2
graphics.lineStyle(2, 0xFFFFFF, 1);
graphics.beginFill(0xAA4F08, 1);
graphics.drawEllipse(600, 250, 80, 50);
graphics.endFill();
// draw a shape
graphics.beginFill(0xFF3300);
graphics.lineStyle(4, 0xffd900, 1);
graphics.moveTo(50, 350);
graphics.lineTo(250, 350);
graphics.lineTo(100, 400);
graphics.lineTo(50, 350);
graphics.closePath();
graphics.endFill();
// draw a rounded rectangle
graphics.lineStyle(2, 0xFF00FF, 1);
graphics.beginFill(0x650A5A, 0.25);
graphics.drawRoundedRect(50, 440, 100, 100, 16);
graphics.endFill();
// draw star
graphics.lineStyle(2, 0xFFFFFF);
graphics.beginFill(0x35CC5A, 1);
graphics.drawStar(360, 370, 5, 50);
graphics.endFill();
// draw star 2
graphics.lineStyle(2, 0xFFFFFF);
graphics.beginFill(0xFFCC5A, 1);
graphics.drawStar(280, 510, 7, 50);
graphics.endFill();
// draw star 3
graphics.lineStyle(4, 0xFFFFFF);
graphics.beginFill(0x55335A, 1);
graphics.drawStar(470, 450, 4, 50);
graphics.endFill();
// draw polygon
const path = [600, 370, 700, 460, 780, 420, 730, 570, 590, 520];
graphics.lineStyle(0);
graphics.beginFill(0x3500FA, 1);
graphics.drawPolygon(path);
graphics.endFill();
app.stage.addChild(graphics);
```
# Task 9: Random Forests
_All credit for the code examples of this notebook goes to the book "Hands-On Machine Learning with Scikit-Learn & TensorFlow" by A. Geron. Modifications were made and text was added by K. Zoch in preparation for the hands-on sessions._
# Setup
First, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Function to save a figure. This also decides that all output files
# should be stored in the subdirectory 'forests'.
PROJECT_ROOT_DIR = "."
EXERCISE = "forests"
def save_fig(fig_id, tight_layout=True):
    path = os.path.join(PROJECT_ROOT_DIR, "output", EXERCISE, fig_id + ".png")
    os.makedirs(os.path.dirname(path), exist_ok=True)  # make sure the output directory exists
    print("Saving figure", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format='png', dpi=300)
```
# Bagging decision trees
First, let's create some half-moon data (as done in one of the earlier tasks).
```
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
```
This code example shows how "bagging" multiple decision trees can improve the classification performance, compared to a single decision tree. Notice how bias and variance change when combining 500 trees as in the example below (it can be seen very nicely in the plot). Please try the following:
1. How does the number of samples affect the performance of the ensemble classifier? Try changing it to the training size (m = 500), or go even higher.
2. How is the performance different when pasting is used instead of bagging (_no_ replacement of instances)?
3. How relevant is the number of trees in the ensemble?
```
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
# Create an instance of a bagging classifier, composed of
# 500 decision tree classifiers. bootstrap=True activates
# replacement when picking the random instances, i.e.
# turning it off will switch from bagging to pasting.
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
max_samples=100, bootstrap=True, n_jobs=-1, random_state=42)
bag_clf.fit(X_train, y_train)
y_pred = bag_clf.predict(X_test)
# Create an instance of a single decision tree to compare with.
tree_clf = DecisionTreeClassifier(random_state=42)
tree_clf.fit(X_train, y_train)
y_pred_tree = tree_clf.predict(X_test)
# Now do the plotting of the two.
from matplotlib.colors import ListedColormap
def plot_decision_boundary(clf, X, y, axes=[-1.5, 2.5, -1, 1.5], alpha=0.5, contour=True):
x1s = np.linspace(axes[0], axes[1], 100)
x2s = np.linspace(axes[2], axes[3], 100)
x1, x2 = np.meshgrid(x1s, x2s)
X_new = np.c_[x1.ravel(), x2.ravel()]
y_pred = clf.predict(X_new).reshape(x1.shape)
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x1, x2, y_pred, alpha=0.3, cmap=custom_cmap)
if contour:
custom_cmap2 = ListedColormap(['#7d7d58','#4c4c7f','#507d50'])
plt.contour(x1, x2, y_pred, cmap=custom_cmap2, alpha=0.8)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", alpha=alpha)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", alpha=alpha)
plt.axis(axes)
plt.xlabel(r"$x_1$", fontsize=18)
plt.ylabel(r"$x_2$", fontsize=18, rotation=0)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_decision_boundary(tree_clf, X, y)
plt.title("Decision Tree", fontsize=14)
plt.subplot(122)
plot_decision_boundary(bag_clf, X, y)
plt.title("Decision Trees with Bagging", fontsize=14)
save_fig("decision_tree_without_and_with_bagging_plot")
plt.show()
```
If you need an additional performance measure, you can use the accuracy score:
```
from sklearn.metrics import accuracy_score
print("Bagging ensemble: %s" % accuracy_score(y_test, y_pred))
print("Single tree: %s" % accuracy_score(y_test, y_pred_tree))
```
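Question 2 in the list above can be explored with a one-line change: `bootstrap=False` switches the sampling from bagging to pasting. A minimal sketch (reusing the same half-moon setup) that compares the two:

```python
from sklearn.datasets import make_moons
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_moons(n_samples=500, noise=0.30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

accs = {}
for bootstrap, name in [(True, "bagging"), (False, "pasting")]:
    # bootstrap=True draws with replacement (bagging),
    # bootstrap=False draws without replacement (pasting).
    clf = BaggingClassifier(
        DecisionTreeClassifier(random_state=42), n_estimators=500,
        max_samples=100, bootstrap=bootstrap, n_jobs=-1, random_state=42)
    clf.fit(X_train, y_train)
    accs[name] = accuracy_score(y_test, clf.predict(X_test))
    print("%s: %.3f" % (name, accs[name]))
```

With `max_samples` well below the training size, the two typically score similarly on this problem; pasting trades a little ensemble diversity for slightly lower bias per tree.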
## Out-of-Bag evaluation
When a bagging classifier is used, its performance can be evaluated _out-of-bag_. Remember what bagging does, and how many instances (on average) are picked from all training instances if the bag is chosen to be the same size as the number of training instances. The fraction of chosen instances converges to
$$1 - \exp(-1) \approx 63.212\%$$
But that also means that almost 37% of the instances are _not seen_ during training. The [BaggingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingClassifier.html) can evaluate on these out-of-bag instances automatically:
```
bag_clf = BaggingClassifier(
DecisionTreeClassifier(random_state=42), n_estimators=500,
bootstrap=True, n_jobs=-1, oob_score=True, random_state=40)
bag_clf.fit(X_train, y_train)
bag_clf.oob_score_
```
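The ≈63.2% figure above follows from the probability that a given instance is drawn at least once in $m$ draws with replacement, $1 - (1 - 1/m)^m \to 1 - e^{-1}$. It can be checked empirically with a quick simulation:

```python
import numpy as np

rng = np.random.default_rng(42)
m = 500  # bag size equal to the number of training instances
# Fraction of distinct instances appearing in each of 1000 bootstrap bags.
fractions = [len(np.unique(rng.integers(0, m, size=m))) / m for _ in range(1000)]
print("simulated: %.4f, theoretical: %.4f" % (np.mean(fractions), 1 - np.exp(-1)))
```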
# Boosting via AdaBoost
The performance of decision trees can be much improved through the procedure of _hypothesis boosting_. AdaBoost, probably the most popular algorithm, uses a very common technique: models are trained _sequentially_, where each model tries to correct for mistakes the previous model made. AdaBoost in particular _boosts_ the weights of those instances that were classified incorrectly. The next classifier will then be more sensitive to these instances and probably do an overall better job. In the end, the outputs of all sequential classifiers are combined into a prediction value. Each classifier enters this global value weighted according to its error rate. Please check/answer the following questions to familiarise yourself with AdaBoost:
1. What is the error rate of a predictor?
2. How is the weight for each predictor calculated?
3. How are weights of instances updated if they were classified correctly? How are they updated if classified incorrectly?
4. How is the final prediction made from an AdaBoost ensemble?
5. The [AdaBoostClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.AdaBoostClassifier.html) implements the AdaBoost algorithm in Scikit-Learn. The following bit of code implements AdaBoost with decision tree classifiers. Make yourself familiar with the class and its arguments, then try to tweak it to achieve better performance than in the example below!
```
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5, random_state=42)
ada_clf.fit(X_train, y_train)
plot_decision_boundary(ada_clf, X, y)
```
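To make questions 1 and 2 above concrete: a predictor's error rate $r$ is the weighted fraction of training instances it misclassifies, and (in the binary SAMME formulation) its weight in the ensemble is $\alpha = \eta \log\frac{1-r}{r}$, so accurate predictors (small $r$) get large positive weights while a coin-flip predictor ($r = 0.5$) gets weight 0. A small sketch of these two formulas (the helper function name is illustrative, not part of Scikit-Learn):

```python
import numpy as np

def predictor_weight(y_true, y_pred, sample_weights, learning_rate=1.0):
    """Weighted error rate r and AdaBoost (binary SAMME) predictor weight alpha."""
    w = sample_weights / sample_weights.sum()
    r = w[y_pred != y_true].sum()                 # weighted error rate
    alpha = learning_rate * np.log((1 - r) / r)   # predictor weight
    return r, alpha

y_true = np.array([0, 0, 1, 1, 1])
weights = np.ones(5)
r, alpha = predictor_weight(y_true, np.array([0, 1, 1, 1, 1]), weights)  # 1 of 5 wrong
print(r, alpha)  # r = 0.2
```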
The following bit of code visualises how the weight adjustment in AdaBoost works. Rather than relying on the AdaBoostClassifier class above, it implements a support vector machine classifier ([SVC](https://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html)) and boosts the weights of incorrectly classified instances by hand. With different learning rates, the "amount" of boosting can be controlled.
```
from sklearn.svm import SVC
m = len(X_train)
plt.figure(figsize=(11, 4))
for subplot, learning_rate in ((121, 1), (122, 0.5)):
# Start with equal weights for all instances.
sample_weights = np.ones(m)
plt.subplot(subplot)
# Now let's go through five iterations where the
# weights get adjusted based on the previous step.
for i in range(5):
# As an example, use SVM classifier with Gaussian kernel.
svm_clf = SVC(kernel="rbf", C=0.05, gamma="auto", random_state=42)
svm_clf.fit(X_train, y_train, sample_weight=sample_weights)
y_pred = svm_clf.predict(X_train)
# The most important step: increase the weights of
# incorrectly predicted instances according to the
# learning_rate parameter.
sample_weights[y_pred != y_train] *= (1 + learning_rate)
# And do the plotting.
plot_decision_boundary(svm_clf, X, y, alpha=0.2)
plt.title("learning_rate = {}".format(learning_rate), fontsize=16)
if subplot == 121:
plt.text(-0.7, -0.65, "1", fontsize=14)
plt.text(-0.6, -0.10, "2", fontsize=14)
plt.text(-0.5, 0.10, "3", fontsize=14)
plt.text(-0.4, 0.55, "4", fontsize=14)
plt.text(-0.3, 0.90, "5", fontsize=14)
save_fig("boosting_plot")
plt.show()
```
# Gradient Boosting
An alternative to AdaBoost is gradient boosting. Again, gradient boosting sequentially trains multiple predictors which are then combined into a global prediction at the end. Gradient boosting fits each new predictor to the _residual errors_ made by the previous predictor, but doesn't touch instance weights. This can be visualised very well with a regression problem (of course, classification can also be performed). Scikit-Learn comes with the two classes [GradientBoostingRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingRegressor.html) and [GradientBoostingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html) for these tasks. As a first step, the following example implements regression with decision trees by hand.
First, generate our random data.
```
np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3*X[:, 0]**2 + 0.05 * np.random.randn(100)
from sklearn.tree import DecisionTreeRegressor
# Start with the first tree and fit it to X, y.
tree_reg1 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg1.fit(X, y)
# Calculate the residual errors the previous tree
# has made and fit a second tree to these.
y2 = y - tree_reg1.predict(X)
tree_reg2 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg2.fit(X, y2)
# Again, calculate the residual errors of the previous
# tree and fit a third tree.
y3 = y2 - tree_reg2.predict(X)
tree_reg3 = DecisionTreeRegressor(max_depth=2, random_state=42)
tree_reg3.fit(X, y3)
# And the rest is just plotting ...
def plot_predictions(regressors, X, y, axes, label=None, style="r-", data_style="b.", data_label=None):
x1 = np.linspace(axes[0], axes[1], 500)
y_pred = sum(regressor.predict(x1.reshape(-1, 1)) for regressor in regressors)
plt.plot(X[:, 0], y, data_style, label=data_label)
plt.plot(x1, y_pred, style, linewidth=2, label=label)
if label or data_label:
plt.legend(loc="upper center", fontsize=16)
plt.axis(axes)
plt.figure(figsize=(11,11))
plt.subplot(321)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h_1(x_1)$", style="g-", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Residuals and tree predictions", fontsize=16)
plt.subplot(322)
plot_predictions([tree_reg1], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1)$", data_label="Training set")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.title("Ensemble predictions", fontsize=16)
plt.subplot(323)
plot_predictions([tree_reg2], X, y2, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_2(x_1)$", style="g-", data_style="k+", data_label="Residuals")
plt.ylabel("$y - h_1(x_1)$", fontsize=16)
plt.subplot(324)
plot_predictions([tree_reg1, tree_reg2], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1)$")
plt.ylabel("$y$", fontsize=16, rotation=0)
plt.subplot(325)
plot_predictions([tree_reg3], X, y3, axes=[-0.5, 0.5, -0.5, 0.5], label="$h_3(x_1)$", style="g-", data_style="k+")
plt.ylabel("$y - h_1(x_1) - h_2(x_1)$", fontsize=16)
plt.xlabel("$x_1$", fontsize=16)
plt.subplot(326)
plot_predictions([tree_reg1, tree_reg2, tree_reg3], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="$h(x_1) = h_1(x_1) + h_2(x_1) + h_3(x_1)$")
plt.xlabel("$x_1$", fontsize=16)
plt.ylabel("$y$", fontsize=16, rotation=0)
save_fig("gradient_boosting_plot")
plt.show()
```
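The key point of the construction above is that the ensemble's prediction at a new point is simply the sum of the individual trees' predictions. A self-contained sketch (regenerating the same data, since each tree is fit to the previous tree's residuals):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3 * X[:, 0]**2 + 0.05 * np.random.randn(100)

trees, residual = [], y
for _ in range(3):
    # Fit each tree to the residuals left by the ensemble so far.
    tree = DecisionTreeRegressor(max_depth=2, random_state=42).fit(X, residual)
    residual = residual - tree.predict(X)
    trees.append(tree)

# Predicting at a new point = summing the trees' predictions.
X_new = np.array([[0.4]])
y_pred = sum(tree.predict(X_new) for tree in trees)
print(y_pred)
```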
The following piece of code now uses the Scikit-Learn class for regression with gradient boosting. Two examples are given: (1) with a fast learning rate, but only very few predictors, (2) with a slower learning rate, but a high number of predictors. Clearly, the second ensemble overfits the problem. Can you try to tweak the parameters to get a model that generalises better?
```
from sklearn.ensemble import GradientBoostingRegressor
# First regression instance with only three estimators,
# but a fast learning rate. The max_depth parameter
# controls the depth (number of 'layers') of the decision
# tree estimators in the ensemble. Decrease it for stronger
# bias (and lower variance) of the individual trees.
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=3, learning_rate=1.0, random_state=42)
gbrt.fit(X, y)
# Second instance with many estimators and slower
# learning rate.
gbrt_slow = GradientBoostingRegressor(max_depth=2, n_estimators=200, learning_rate=0.5, random_state=42)
gbrt_slow.fit(X, y)
plt.figure(figsize=(11,4))
plt.subplot(121)
plot_predictions([gbrt], X, y, axes=[-0.5, 0.5, -0.1, 0.8], label="Ensemble predictions")
plt.title("learning_rate={}, n_estimators={}".format(gbrt.learning_rate, gbrt.n_estimators), fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_slow], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("learning_rate={}, n_estimators={}".format(gbrt_slow.learning_rate, gbrt_slow.n_estimators), fontsize=14)
save_fig("gbrt_learning_rate_plot")
plt.show()
```
One way to solve this overfitting is to use _early stopping_ to find the optimal number of iterations/predictors for this problem. For that, we first need to split the dataset into a training and a validation set, because of course we cannot evaluate performance on instances the predictor used in training. The following code uses the already familiar [model_selection.train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function. Then, train another ensemble (with a fixed number of 120 predictors), but this time only on the training set. Errors are calculated on the validation set and the optimal number of iterations is extracted. The code also creates a plot of the performance on the validation set to point out the optimal iteration.
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
# Split dataset into training and validation set.
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)
# Fit an ensemble. Let's start with 120 estimators, which
# is probably too much (as we saw above).
gbrt = GradientBoostingRegressor(max_depth=2, n_estimators=120, random_state=42)
gbrt.fit(X_train, y_train)
# Calculate the errors for each iteration (on the validation set)
# and find the optimal iteration step.
errors = [mean_squared_error(y_val, y_pred)
for y_pred in gbrt.staged_predict(X_val)]
bst_n_estimators = np.argmin(errors) + 1  # staged_predict starts at one estimator
min_error = np.min(errors)
# Retrain a new ensemble with those settings.
gbrt_best = GradientBoostingRegressor(max_depth=2,n_estimators=bst_n_estimators, random_state=42)
gbrt_best.fit(X_train, y_train)
# And do the plotting of validation error as well
# as the optimised ensemble.
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.plot(errors, "b.-")
plt.plot([bst_n_estimators, bst_n_estimators], [0, min_error], "k--")
plt.plot([0, 120], [min_error, min_error], "k--")
plt.plot(bst_n_estimators, min_error, "ko")
plt.text(bst_n_estimators, min_error*1.2, "Minimum", ha="center", fontsize=14)
plt.axis([0, 120, 0, 0.01])
plt.xlabel("Number of trees")
plt.title("Validation error", fontsize=14)
plt.subplot(122)
plot_predictions([gbrt_best], X, y, axes=[-0.5, 0.5, -0.1, 0.8])
plt.title("Best model (%d trees)" % bst_n_estimators, fontsize=14)
save_fig("early_stopping_gbrt_plot")
plt.show()
```
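Instead of training the full 120-tree ensemble once and scanning `staged_predict`, training can also be stopped as soon as the validation error stops improving, using the `warm_start=True` flag of GradientBoostingRegressor (which keeps the existing trees when `fit` is called again). A sketch that stops after five consecutive iterations without improvement:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

np.random.seed(42)
X = np.random.rand(100, 1) - 0.5
y = 3 * X[:, 0]**2 + 0.05 * np.random.randn(100)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=49)

gbrt = GradientBoostingRegressor(max_depth=2, warm_start=True, random_state=42)
min_val_error = float("inf")
error_going_up = 0
for n_estimators in range(1, 120):
    gbrt.n_estimators = n_estimators   # grow the ensemble by one tree
    gbrt.fit(X_train, y_train)
    val_error = mean_squared_error(y_val, gbrt.predict(X_val))
    if val_error < min_val_error:
        min_val_error = val_error
        error_going_up = 0
    else:
        error_going_up += 1
        if error_going_up == 5:
            break                      # early stopping
print(gbrt.n_estimators, min_val_error)
```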
### Implement the Hodgkin-Huxley model
## $c_m \frac{\partial V}{\partial t} = \frac{1}{2ar_L} \frac{\partial}{\partial x} (a^2 \frac{\partial V}{\partial x}) - i_m + i_e$
where:
## $i_m = \bar{g}_L (V-E_L) + \bar{g}_{Na} m^3 h (V-E_{Na}) + \bar{g}_{K} n^4 (V-E_{K})$
and gating variables:
## $\frac{dx}{dt} = \alpha_x (V) (1-x) - \beta_x (V) \, x$
## $\alpha_n (V) = \frac{0.01 \, (10-V)}{\exp\left(\frac{10-V}{10}\right) - 1} \quad \alpha_m (V) = \frac{0.1 \, (25-V)}{\exp\left(\frac{25-V}{10}\right)-1} \quad \alpha_h (V) = 0.07 \, \exp\left(\frac{-V}{20}\right)$
## $\beta_n (V) = 0.125 \, \exp\left(\frac{-V}{80}\right) \quad \beta_m (V) = 4 \, \exp\left(\frac{-V}{18}\right) \quad \beta_h (V) = \frac{1}{\exp\left(\frac{30-V}{10}\right) + 1}$
---
```
import numpy as np
import matplotlib.pyplot as plt
def C_N(dt, M, Cm, g, A, gm, I, Ie, V):
"""This function calculates dV using the Crank-Nicolson method
dt = timestep, in seconds
M = number of compartments
Cm = membrane capacitance, in F
g = resistive coupling, in S
A = compartment surface area
gm = sum of gi
I = sum of gi * Ei
Ie = current input
V = potential
Function uses the method described in Chapter 6.6B from Dayan and Abbott, 2005
returns one variable, V_new
"""
z = 0.5 # Crank-Nicolson weighting factor
### Helper variables ###
b = np.zeros(M)
c = np.zeros(M)
d = np.zeros(M)
f = np.zeros(M)
b[1:M] = g * z * dt / Cm[1:M]
d[0:M-1] = g * z * dt / Cm[0:M-1]
c = -gm*z*dt/Cm - b - d
f = (I + Ie/A)/Cm * z * dt + c * V
for i in range(M-1):
f[i+1] += b[i+1] * V[i]
f[i] += d[i] * V[i+1]
f = f*2 # getting rid of z
### Forward prop ###
c1 = np.zeros(M)
f1 = np.zeros(M)
c1[0] = c[0]
f1[0] = f[0]
for i in range(M-1):
c1[i+1] = c[i+1] + b[i+1] * d[i] / (1-c1[i])
f1[i+1] = f[i+1] + b[i+1] * f1[i] / (1-c1[i])
### Backprop ###
dV = np.zeros(M)
dV[M-1] = f1[M-1] / (1-c1[M-1])
for i in range(M-2, -1, -1):
dV[i] = (d[i] * dV[i+1] + f1[i]) / (1-c1[i])
return V + dV
def HHM(I_na,I_k,I_leak,I_e,C_m):
dvdt = (1/C_m) * (-(I_na + I_k + I_leak) + I_e)
return dvdt
def kinetics(x,alpha,beta):
dxdt = alpha * (1 - x) - beta * x
return dxdt
def current_k(V,n,g_k,E_k):
current_k = g_k * (n**4) * (V - E_k)
return current_k
def current_na(V,m,h,g_na,E_na):
current_na = g_na * (m**3) * h * (V - E_na)
return current_na
def current_leak(V,g_leak,E_leak):
    current_leak = g_leak * (V - E_leak)
    return current_leak
def alpha_n(V):
    alpha = 0.01 * (V + 55) / (1 - np.exp(-0.1 * (V + 55)))
    return alpha
def alpha_m(V):
    alpha = 0.1 * (V + 40) / (1 - np.exp(-0.1 * (V + 40)))
    return alpha
def alpha_h(V):
    alpha = 0.07 * np.exp(-0.05 * (V + 65))
    return alpha
def beta_n(V):
    beta = 0.125 * np.exp(-0.0125 * (V + 65))
    return beta
def beta_m(V):
    beta = 4 * np.exp(-0.0556 * (V + 65))
    return beta
def beta_h(V):
    beta = 1 / (1 + np.exp(-0.1 * (V + 35)))
    return beta
def HHM_combined(initial, t, C_m, I_e, E_na, E_k, E_leak, g_na, g_k, g_leak):
V,n,m,h = initial #initial values
#calculate potassium current
kinetics_n = kinetics(n,alpha_n(V),beta_n(V))
I_k = current_k(V,n,g_k,E_k)
#calculate sodium current
kinetics_m = kinetics(m,alpha_m(V),beta_m(V))
kinetics_h = kinetics(h,alpha_h(V),beta_h(V))
I_na = current_na(V,m,h,g_na,E_na)
#calculate leak current
I_leak = current_leak(V,g_leak,E_leak)
#calculate voltage
voltage = HHM(I_na,I_k,I_leak,I_e,C_m)
return [voltage,kinetics_n,kinetics_m,kinetics_h]
#initial conditions
V0 = -0.065 #(V)
m0 = 0.0529
h0 = 0.5961
n0 = 0.3177
#parameters
C_m = 0.01 #(F/m^2), i.e. 1 uF/cm^2
L = 1e-6 #(m)
a = 238e-6 #(m)
r_L = 35.4e-3 #(ohm m)
g = a / (2 * r_L * L * L) #(S)
#time steps
tmax = 1e-1 # s
dt = 1e-3 # s
N = int(tmax / dt) + 1
t = np.linspace(0, tmax, N)
# NOTE: the following loop is incomplete as given -- the per-compartment
# conductances gk, gna, gl and the arrays I, Ie and V must first be
# computed from the gating variables and conductances above before C_N
# can be called.
for i in range(5):
    gm = gk + gna + gl
    solution = C_N(dt=dt, M=1, Cm=C_m, g=g, A=a, gm=gm, I=I, Ie=Ie[:, i], V=V[:, i])
#plot the solution to the HHM
I_e = 0 #(A)
initial = [V0,n0,m0,h0]
parameters = (C_m, I_e, E_na, E_k, E_leak, g_na, g_k, g_leak, I_following)
plt.plot(t,solution[:,0])
plt.title('Membrane Voltage with $I_e = 0$')
plt.ylabel('voltage [V]')
plt.xlabel('time [s]')
plt.grid()
#fix offset
ax = plt.gca()
ax.ticklabel_format(useOffset=False)
plt.show()
```
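As a consistency check on the rate functions: the initial gating values above (n0 = 0.3177, m0 = 0.0529, h0 = 0.5961) are the steady states $x_\infty = \alpha/(\alpha+\beta)$ at the resting potential of −65 mV, so they can be recomputed directly. A quick check (restating the rate functions from the code, which take V in mV):

```python
import numpy as np

# Rate functions at an absolute membrane potential V in mV.
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-0.1 * (V + 55)))
def beta_n(V):  return 0.125 * np.exp(-0.0125 * (V + 65))
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-0.1 * (V + 40)))
def beta_m(V):  return 4 * np.exp(-0.0556 * (V + 65))
def alpha_h(V): return 0.07 * np.exp(-0.05 * (V + 65))
def beta_h(V):  return 1 / (1 + np.exp(-0.1 * (V + 35)))

def steady_state(alpha, beta, V):
    # Setting dx/dt = alpha*(1-x) - beta*x = 0 gives x_inf = alpha/(alpha+beta).
    return alpha(V) / (alpha(V) + beta(V))

V_rest = -65.0  # mV
n_inf = steady_state(alpha_n, beta_n, V_rest)
m_inf = steady_state(alpha_m, beta_m, V_rest)
h_inf = steady_state(alpha_h, beta_h, V_rest)
print(round(n_inf, 4), round(m_inf, 4), round(h_inf, 4))  # 0.3177 0.0529 0.5961
```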
```
# default_exp data.tabular
```
# Data Tabular
> Main Tabular functions used throughout the library. These are helpful when you have additional data alongside your time series, such as metadata or precomputed time series features.
```
#export
from tsai.imports import *
from tsai.utils import *
from fastai.tabular.all import *
#export
@delegates(TabularPandas.__init__)
def get_tabular_ds(df, procs=[Categorify, FillMissing, Normalize], cat_names=None, cont_names=None, y_names=None, groupby=None,
y_block=None, splits=None, do_setup=True, inplace=False, reduce_memory=True, device=None, **kwargs):
device = ifnone(device, default_device())
groupby = str2list(groupby)
cat_names = str2list(cat_names)
cont_names = str2list(cont_names)
y_names = str2list(y_names)
cols = []
for _cols in [groupby, cat_names, cont_names, y_names]:
if _cols is not None: cols.extend(_cols)
cols = list(set(cols))
if y_names is None: y_block = None
elif y_block is None:
num_cols = df._get_numeric_data().columns
y_block = CategoryBlock() if any([True for n in y_names if n not in num_cols]) else RegressionBlock()
    # otherwise keep the user-supplied y_block
pd.options.mode.chained_assignment=None
to = TabularPandas(df[cols], procs=procs, cat_names=cat_names, cont_names=cont_names, y_names=y_names, y_block=y_block,
splits=splits, do_setup=do_setup, inplace=inplace, reduce_memory=reduce_memory, device=device)
setattr(to, "groupby", groupby)
return to
#export
@delegates(DataLoaders.__init__)
def get_tabular_dls(df, procs=[Categorify, FillMissing, Normalize], cat_names=None, cont_names=None, y_names=None, bs=64,
y_block=None, splits=None, do_setup=True, inplace=False, reduce_memory=True, device=None, **kwargs):
to = get_tabular_ds(df, procs=procs, cat_names=cat_names, cont_names=cont_names, y_names=y_names,
y_block=y_block, splits=splits, do_setup=do_setup, inplace=inplace, reduce_memory=reduce_memory, device=device, **kwargs)
if splits is not None: bs = min(len(splits[0]), bs)
else: bs = min(len(df), bs)
return to.dataloaders(device=device, bs=bs, **kwargs)
#export
def preprocess_df(df, procs=[Categorify, FillMissing, Normalize], cat_names=None, cont_names=None, y_names=None, sample_col=None, reduce_memory=True):
cat_names = str2list(cat_names)
cont_names = str2list(cont_names)
y_names = str2list(y_names)
cols = []
for _cols in [cat_names, cont_names, y_names]:
if _cols is not None: cols.extend(_cols)
cols = list(set(cols))
pd.options.mode.chained_assignment=None
to = TabularPandas(df[cols], procs=procs, cat_names=cat_names, cont_names=cont_names, y_names=y_names, reduce_memory=reduce_memory)
procs = to.procs
if sample_col is not None:
sample_col = str2list(sample_col)
to = pd.concat([df[sample_col], to.cats, to.conts, to.ys], axis=1)
else:
to = pd.concat([to.cats, to.conts, to.ys], axis=1)
return to, procs
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
# df['salary'] = np.random.rand(len(df)) # uncomment to simulate a cont dependent variable
cat_names = ['workclass', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'native-country']
cont_names = ['age', 'fnlwgt', 'hours-per-week']
target = ['salary']
splits = RandomSplitter()(range_of(df))
dls = get_tabular_dls(df, cat_names=cat_names, cont_names=cont_names, y_names='salary', splits=splits, bs=512)
dls.show_batch()
metrics = mae if dls.c == 1 else accuracy
learn = tabular_learner(dls, layers=[200, 100], y_range=None, metrics=metrics)
learn.fit(1, 1e-2)
learn.dls.one_batch()
learn.model
path = untar_data(URLs.ADULT_SAMPLE)
df = pd.read_csv(path/'adult.csv')
cat_names = ['workclass', 'education', 'education-num', 'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'native-country']
cont_names = ['age', 'fnlwgt', 'hours-per-week']
target = ['salary']
df, procs = preprocess_df(df, procs=[Categorify, FillMissing, Normalize], cat_names=cat_names, cont_names=cont_names, y_names=target,
sample_col=None, reduce_memory=True)
df.head()
procs.classes, procs.means, procs.stds
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
```
<img src="https://github.com/pmservice/ai-openscale-tutorials/raw/master/notebooks/images/banner.png" align="left" alt="banner">
# Part 4: Drift Monitor
The notebook will train, create and deploy a Credit Risk model. It will then configure OpenScale to monitor drift in data and accuracy by injecting sample payloads for viewing in the OpenScale Insights dashboard.
### Contents
- [1. Setup](#setup)
- [2. Model building and deployment](#model)
- [3. OpenScale configuration](#openscale)
- [4. Generate drift model](#driftmodel)
- [5. Submit payload](#payload)
- [6. Enable drift monitoring](#monitor)
- [7. Run drift monitor](#driftrun)
# 1.0 Install Python Packages <a name="setup"></a>
```
import warnings
warnings.filterwarnings('ignore')
!rm -rf /home/spark/shared/user-libs/python3.6*
!pip install --upgrade ibm-ai-openscale==2.2.1 --no-cache --user | tail -n 1
!pip install --upgrade watson-machine-learning-client-V4==1.0.95 | tail -n 1
!pip install --upgrade pyspark==2.3 | tail -n 1
!pip install scikit-learn==0.20.2 | tail -n 1
```
### Action: restart the kernel!
```
import warnings
warnings.filterwarnings('ignore')
```
# 2.0 Configure credentials <a name="credentials"></a>
<font color=red>Replace the `username` and `password` values of `************` with your Cloud Pak for Data `username` and `password`. The value for `url` should match the `url` for your Cloud Pak for Data cluster, which you can get from the browser address bar (be sure to include the 'https://').</font> The credentials should look something like this (these are example values, not the ones you will use):
```
WOS_CREDENTIALS = {
"url": "https://zen.clusterid.us-south.containers.appdomain.cloud",
"username": "cp4duser",
"password" : "cp4dpass"
}
```
**NOTE: Make sure that there is no trailing forward slash / in the url**
```
WOS_CREDENTIALS = {
"url": "************",
"username": "************",
"password": "************"
}
WML_CREDENTIALS = WOS_CREDENTIALS.copy()
WML_CREDENTIALS['instance_id']='openshift'
WML_CREDENTIALS['version']='3.0.0'
```
Let's retrieve the variables for the model and deployment we set up in the initial setup notebook. **If the output does not show any values, check to ensure you have completed the initial setup before continuing.**
```
%store -r MODEL_NAME
%store -r DEPLOYMENT_NAME
%store -r DEFAULT_SPACE
print("Model Name: ", MODEL_NAME, ". Deployment Name: ", DEPLOYMENT_NAME, ". Deployment Space: ", DEFAULT_SPACE)
```
# 3.0 Load the training data
```
!rm german_credit_data_biased_training.csv
!wget https://raw.githubusercontent.com/IBM/credit-risk-workshop-cpd/master/data/openscale/german_credit_data_biased_training.csv
import pandas as pd
data_df = pd.read_csv('german_credit_data_biased_training.csv', sep=",", header=0)
data_df.head()
```
# 4.0 Configure OpenScale <a name="openscale"></a>
The notebook will now import the necessary libraries and set up a Python OpenScale client.
```
from ibm_ai_openscale import APIClient4ICP
from ibm_ai_openscale.engines import *
from ibm_ai_openscale.utils import *
from ibm_ai_openscale.supporting_classes import PayloadRecord, Feature
from ibm_ai_openscale.supporting_classes.enums import *
from watson_machine_learning_client import WatsonMachineLearningAPIClient
import json
wml_client = WatsonMachineLearningAPIClient(WML_CREDENTIALS)
ai_client = APIClient4ICP(WOS_CREDENTIALS)
ai_client.version
subscription = None
if subscription is None:
subscriptions_uids = ai_client.data_mart.subscriptions.get_uids()
for sub in subscriptions_uids:
if ai_client.data_mart.subscriptions.get_details(sub)['entity']['asset']['name'] == MODEL_NAME:
print("Found existing subscription.")
subscription = ai_client.data_mart.subscriptions.get(sub)
if subscription is None:
print("No subscription found. Please run openscale-initial-setup.ipynb to configure.")
```
### Set Deployment UID
```
wml_client.set.default_space(DEFAULT_SPACE)
wml_deployments = wml_client.deployments.get_details()
deployment_uid = None
for deployment in wml_deployments['resources']:
print(deployment['entity']['name'])
if DEPLOYMENT_NAME == deployment['entity']['name']:
deployment_uid = deployment['metadata']['guid']
break
print(deployment_uid)
```
# 5.0 Generate drift model <a name="driftmodel"></a>
Drift requires a trained drift detection model, which must be uploaded manually for WML. You can train, create and download a drift detection model using the code below. The entire code can be found in the [training_statistics_notebook](https://github.com/IBM-Watson/aios-data-distribution/blob/master/training_statistics_notebook.ipynb) (see the section on drift detection model generation).
```
training_data_info = {
"class_label":'Risk',
"feature_columns":["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"],
"categorical_columns":["CheckingStatus","CreditHistory","LoanPurpose","ExistingSavings","EmploymentDuration","Sex","OthersOnLoan","OwnsProperty","InstallmentPlans","Housing","Job","Telephone","ForeignWorker"]
}
#Set model_type. Acceptable values are:["binary","multiclass","regression"]
model_type = "binary"
#model_type = "multiclass"
#model_type = "regression"
def score(training_data_frame):
#The data type of the label column and prediction column should be same .
#User needs to make sure that label column and prediction column array should have the same unique class labels
prediction_column_name = "predictedLabel"
probability_column_name = "probability"
feature_columns = list(training_data_frame.columns)
training_data_rows = training_data_frame[feature_columns].values.tolist()
#print(training_data_rows)
payload_scoring = {
wml_client.deployments.ScoringMetaNames.INPUT_DATA: [{
"fields": feature_columns,
"values": [x for x in training_data_rows]
}]
}
score = wml_client.deployments.score(deployment_uid, payload_scoring)
score_predictions = score.get('predictions')[0]
prob_col_index = list(score_predictions.get('fields')).index(probability_column_name)
predict_col_index = list(score_predictions.get('fields')).index(prediction_column_name)
if prob_col_index < 0 or predict_col_index < 0:
raise Exception("Missing prediction/probability column in the scoring response")
import numpy as np
probability_array = np.array([value[prob_col_index] for value in score_predictions.get('values')])
prediction_vector = np.array([value[predict_col_index] for value in score_predictions.get('values')])
return probability_array, prediction_vector
#Generate drift detection model
from ibm_wos_utils.drift.drift_trainer import DriftTrainer
drift_detection_input = {
"feature_columns":training_data_info.get('feature_columns'),
"categorical_columns":training_data_info.get('categorical_columns'),
"label_column": training_data_info.get('class_label'),
"problem_type": model_type
}
drift_trainer = DriftTrainer(data_df,drift_detection_input)
if model_type != "regression":
#Note: batch_size can be customized by user as per the training data size
drift_trainer.generate_drift_detection_model(score,batch_size=data_df.shape[0])
#Note: Two column constraints are not computed beyond two_column_learner_limit(default set to 200)
#User can adjust the value depending on the requirement
drift_trainer.learn_constraints(two_column_learner_limit=200)
drift_trainer.create_archive()
#Generate a download link for drift detection model
from IPython.display import HTML
import base64
import io
def create_download_link_for_ddm( title = "Download Drift detection model", filename = "drift_detection_model.tar.gz"):
#Retains stats information
with open(filename,'rb') as file:
ddm = file.read()
b64 = base64.b64encode(ddm)
payload = b64.decode()
    html = '<a download="{filename}" href="data:text/json;base64,{payload}" target="_blank">{title}</a>'
html = html.format(payload=payload,title=title,filename=filename)
return HTML(html)
create_download_link_for_ddm()
```
# 6.0 Submit payload <a name="payload"></a>
### Score the model so we can configure monitors
Now that the WML service has been bound and the subscription has been created, we need to send a request to the model before we configure OpenScale. This allows OpenScale to create a payload log in the datamart with the correct schema, so it can capture data coming into and out of the model.
```
fields = ["CheckingStatus","LoanDuration","CreditHistory","LoanPurpose","LoanAmount","ExistingSavings","EmploymentDuration","InstallmentPercent","Sex","OthersOnLoan","CurrentResidenceDuration","OwnsProperty","Age","InstallmentPlans","Housing","ExistingCreditsCount","Job","Dependents","Telephone","ForeignWorker"]
values = [
["no_checking",13,"credits_paid_to_date","car_new",1343,"100_to_500","1_to_4",2,"female","none",3,"savings_insurance",46,"none","own",2,"skilled",1,"none","yes"],
["no_checking",24,"prior_payments_delayed","furniture",4567,"500_to_1000","1_to_4",4,"male","none",4,"savings_insurance",36,"none","free",2,"management_self-employed",1,"none","yes"],
["0_to_200",26,"all_credits_paid_back","car_new",863,"less_100","less_1",2,"female","co-applicant",2,"real_estate",38,"none","own",1,"skilled",1,"none","yes"],
["0_to_200",14,"no_credits","car_new",2368,"less_100","1_to_4",3,"female","none",3,"real_estate",29,"none","own",1,"skilled",1,"none","yes"],
["0_to_200",4,"no_credits","car_new",250,"less_100","unemployed",2,"female","none",3,"real_estate",23,"none","rent",1,"management_self-employed",1,"none","yes"],
["no_checking",17,"credits_paid_to_date","car_new",832,"100_to_500","1_to_4",2,"male","none",2,"real_estate",42,"none","own",1,"skilled",1,"none","yes"],
["no_checking",33,"outstanding_credit","appliances",5696,"unknown","greater_7",4,"male","co-applicant",4,"unknown",54,"none","free",2,"skilled",1,"yes","yes"],
["0_to_200",13,"prior_payments_delayed","retraining",1375,"100_to_500","4_to_7",3,"male","none",3,"real_estate",37,"none","own",2,"management_self-employed",1,"none","yes"]
]
payload_scoring = {"fields": fields,"values": values}
payload = {
wml_client.deployments.ScoringMetaNames.INPUT_DATA: [payload_scoring]
}
scoring_response = wml_client.deployments.score(deployment_uid, payload)
print('Single record scoring result:', '\n fields:', scoring_response['predictions'][0]['fields'], '\n values: ', scoring_response['predictions'][0]['values'][0])
```
# 7.0 Enable drift monitoring <a name="monitor"></a>
```
subscription.drift_monitoring.enable(threshold=0.05, min_records=10,model_path="./drift_detection_model.tar.gz")
```
# 8.0 Run Drift monitor on demand <a name="driftrun"></a>
```
!rm german_credit_feed.json
!wget https://raw.githubusercontent.com/IBM/credit-risk-workshop-cpd/master/data/openscale/german_credit_feed.json
import json
import random
with open('german_credit_feed.json', 'r') as scoring_file:
scoring_data = json.load(scoring_file)
fields = scoring_data['fields']
values = []
for _ in range(10):
current = random.choice(scoring_data['values'])
#set age of all rows to 100 to increase drift values on dashboard
current[12] = 100
values.append(current)
payload_scoring = {"fields": fields, "values": values}
payload = {
wml_client.deployments.ScoringMetaNames.INPUT_DATA: [payload_scoring]
}
scoring_response = wml_client.deployments.score(deployment_uid, payload)
drift_run_details = subscription.drift_monitoring.run(background_mode=False)
subscription.drift_monitoring.get_table_content()
```
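The feed loop above samples records and forces the `Age` column (index 12) to 100 so the drift monitor sees scoring data far from the training distribution. A minimal standalone sketch of that perturbation, using a hypothetical shortened field list and no WML client:

```python
import random

# Hypothetical abbreviated field list -- the real payload uses all 20 fields.
FIELDS = ["CheckingStatus", "LoanDuration", "Age"]
AGE_INDEX = FIELDS.index("Age")

def build_drift_payload(rows, n=10, forced_age=100):
    """Sample n rows and force the Age column to a fixed value."""
    values = []
    for _ in range(n):
        current = list(random.choice(rows))  # copy so the source rows stay intact
        current[AGE_INDEX] = forced_age
        values.append(current)
    return {"fields": FIELDS, "values": values}

rows = [["no_checking", 13, 46], ["0_to_200", 24, 29]]
payload = build_drift_payload(rows, n=5)
```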
## Congratulations!
You have finished this section of the hands-on lab for IBM Watson OpenScale. You can now view the OpenScale dashboard by going to the Cloud Pak for Data `Home` page, and clicking `Services`. Choose the `OpenScale` tile and click the menu to `Open`. Click on the tile for the model you've created to see the monitors.
OpenScale shows model performance over time. You have two options to keep data flowing to your OpenScale graphs:
* Download, configure and schedule the [model feed notebook](https://raw.githubusercontent.com/emartensibm/german-credit/master/german_credit_scoring_feed.ipynb). This notebook can be set up with your WML credentials, and scheduled to provide a consistent flow of scoring requests to your model, which will appear in your OpenScale monitors.
* Re-run this notebook. Running this notebook from the beginning will delete and re-create the model and deployment, and re-create the historical data. Please note that the payload and measurement logs for the previous deployment will continue to be stored in your datamart, and can be deleted if necessary.
This notebook has been adapted from notebooks available at https://github.com/pmservice/ai-openscale-tutorials.
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
from notebook_init import settings
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime, timedelta
from opennem.db import get_database_engine, db_connect
from opennem.schema.network import NetworkNEM
from opennem.core.compat.schema import OpennemDataSetV2
from opennem.core.compat.loader import load_statset_v2
from opennem.utils.http import http
from opennem.utils.numbers import sigfig_compact
from opennem.core.compat.energy import trading_energy_data
from opennem.utils.series import series_joined, series_are_equal
from opennem.core.energy import _energy_aggregate, energy_sum
from opennem.workers.energy import get_generated_query, get_generated
from opennem.core.parsers.aemo.mms import parse_aemo_urls
from opennem.schema.network import NetworkNEM
engine = db_connect(settings.db_url)
engine_local = db_connect("postgresql://opennem:opennem@127.0.0.1:15433/opennem")
engine_staging = db_connect("postgresql://opennem:qeAZ4AWwmKcEJGp7@13.211.64.120:15433/opennem_dev")
aemo = parse_aemo_urls(
[
"http://nemweb.com.au/Reports/Current/Next_Day_Dispatch/PUBLIC_NEXT_DAY_DISPATCH_20210304_0000000337689171.zip",
"http://nemweb.com.au/Reports/Current/Next_Day_Dispatch/PUBLIC_NEXT_DAY_DISPATCH_20210305_0000000337748278.zip",
"http://nemweb.com.au/Reports/Current/Next_Day_Dispatch/PUBLIC_NEXT_DAY_DISPATCH_20210306_0000000337800035.zip",
]
)
__query = "select code from facility where network_region='SA1' and fueltech_id='wind'"
with engine.connect() as c:
sa1_wind_duids = list(i[0] for i in c.execute(__query))
sa1_wind_duids
```
```
dispatch_solution = aemo.get_table("dispatch_unit_solution")
df = dispatch_solution.to_frame()  # needed below to filter the SA1 wind DUIDs
# a = len(dispatch_solution.records)
# print(f"records: {a}")
dispatch_solution.records
# test it out with v2 methods
from datetime import datetime, date, timedelta
sa1_wind_duids = ["BLUFF1"]
sa1_wind_df = df[df.index.isin(sa1_wind_duids, level=1)]
sa1_wind_df
# sa1_wind_df
# sa1_wind_df_energy = trading_energy_data(sa1_wind_df, date(2021, 3, 4))
# df_energy_v2
# sa1_wind_df = sa1_wind_df.set_index("SETTLEMENTDATE")
# sa1_wind_df.sort_index(inplace=True)
# sa1_wind_df.OUTPUT_MWH.sum() / 1000
from opennem.workers.energy import shape_energy_dataframe, get_generated_query, get_generated
from opennem.core.energy import energy_sum
date_start = datetime.fromisoformat("2021-03-03 23:55:00+10:00")
date_end = datetime.fromisoformat("2021-03-07 00:05:00+10:00")
generated_results = get_generated(date_min=date_start, date_max=date_end, network=NetworkNEM, fueltech_id="wind")
dfv3 = shape_energy_dataframe(generated_results)
dfv3 = dfv3.set_index(["trading_interval"])
dfv3
def __trapezium_integration(d_ti, power_field: str = "MWH_READING"):
return 0.5*(d_ti[power_field] * [1,2,2,2,2,2,1]).sum()/12
def __trading_energy_generator(df, date, duid_id, power_field: str = "generated"):
return_cols = []
t_start = datetime(date.year, date.month,date.day,0,5)
#48 trading intervals in the day
#(could be better with groupby function)
for TI in range(48):
#t_i initial timestamp of trading_interval, t_f = final timestamp of trading interval
t_i = t_start + timedelta(0,1800*TI)
t_f = t_start + timedelta(0,1800*(TI+1))
_query = f"'{t_i}' <= trading_interval <= '{t_f}' and facility_code == '{duid_id}'"
d_ti = df.query(_query)
energy_value = None
# interpolate if it isn't padded out
if d_ti[power_field].count() != 7:
index_interpolated = pd.date_range(start= t_i, end= t_f, freq='5min')
d_ti = d_ti.reset_index()
d_ti = d_ti.set_index("trading_interval")
d_ti = d_ti.reindex(index_interpolated)
d_ti["duid"] = duid_id
try:
energy_value = __trapezium_integration(d_ti, power_field)
except ValueError as e:
print("Error with {} at {} {}: {}".format(duid_id, t_i, t_f, e))
if not d_ti.index.empty:
return_cols.append({
"trading_interval": d_ti.index[-2],
"network_id": "NEM",
"facility_code": duid_id,
"eoi_quantity": energy_value
})
return return_cols
def trading_energy_data(df):
energy_genrecs = []
for day in get_day_range(df):
for duid in sorted(df.facility_code.unique()):
energy_genrecs += [d for d in __trading_energy_generator(df, day, duid)]
df = pd.DataFrame(
energy_genrecs, columns=["trading_interval", "network_id", "facility_code", "eoi_quantity"]
)
return df
def get_day_range(df):
min_date = (df.index.min() + timedelta(days=1)).date()
max_date = (df.index.max() - timedelta(days=1)).date()
cur_day = min_date
while cur_day <= max_date:
yield cur_day
cur_day += timedelta(days=1)
trading_energy_data(dfv3)
from opennem.core.energy import energy_sum
generated_results = get_generated("SA1", date_start, date_end, NetworkNEM, "wind")
dfv4 = shape_energy_dataframe(generated_results)
dfv4.facility_code
d_ti["initialmw"] = np.array([np.NaN, np.NaN, 0.2, np.NaN, np.NaN])
d_ti
index = pd.date_range(start= t_i, end= t_f, freq='5min')
d_ti = d_ti.reset_index()
d_ti = d_ti.set_index("settlementdate")
df2 = d_ti.reindex(index)
df2["duid"] = "BLUFF1"
df2
df3 = df2
df3.initialmw = df3.initialmw.interpolate(limit_direction='both')
df3
df4 = df2
df4["initialmw"] = df4["initialmw"].interpolate(method="linear", limit_direction='both')
df4
date_min = datetime.fromisoformat("2021-03-02T00:00:00+10:00")
date_max = date_min + timedelta(days=3)
query = get_generated_query("NSW1", date_min, date_max, NetworkNEM, "coal_black")
print(query)
df = pd.read_sql(query, engine, index_col=["trading_interval", "network_id", "facility_code"])
df = df.rename(columns={"generated": "eoi_quantity"})
df_energy = _energy_aggregate(df, NetworkNEM)
# df_energy = df_energy.drop(['facility_code', 'network_id'], axis=1)
df_energy = df_energy.set_index(["trading_interval"])
df = df.reset_index()
df = df.set_index(["trading_interval"])
# df = df.drop(['facility_code', 'network_id'], axis=1)
dfj = df_energy.join(df, on="trading_interval", lsuffix='_energy', rsuffix='_power')
# print(query)
# dfj.set_index()
# print(query)
# dfj.eoi_quantity_energy.sum(), dfj.eoi_quantity_energy.sum()
df_energy.resample("D").eoi_quantity.sum() / 1000
# df_energy
print(query)
URI_V2_NSW_DAILY = "https://data.opennem.org.au/nsw1/energy/daily/2021.json"
r = http.get(URI_V2_NSW_DAILY)
v2 = load_statset_v2(r.json())
v2_imports = list(filter(lambda x: "imports.energy" in x.id, v2.data)).pop().history.values()
v2df = pd.DataFrame(v2_imports, columns=["trading_interval", "energy"])
v2df = v2df.set_index(["trading_interval"])
date_min = datetime.fromisoformat("2021-01-01T00:00:00+10:00")
date_max = datetime.fromisoformat("2021-03-03T00:00:00+10:00")
query = """
select
bs.trading_interval at time zone 'AEST' as trading_interval,
bs.network_id,
bs.network_region as facility_code,
'imports' as fueltech_id,
case when bs.net_interchange_trading < 0 then
bs.net_interchange_trading
else 0
end as generated
from balancing_summary bs
where
bs.network_id='NEM'
and bs.network_region='NSW1'
and bs.trading_interval >= '{date_min}'
and bs.trading_interval <= '{date_max}'
and bs.net_interchange_trading is not null
order by trading_interval asc;
""".format(
date_min=date_min - timedelta(minutes=10),
date_max=date_max + timedelta(minutes=10)
)
df = pd.read_sql(query, engine, index_col=["trading_interval", "network_id", "facility_code"])
df_energy = _energy_aggregate(df, NetworkNEM)
df_energy = df_energy.set_index(["trading_interval"])
# es = df.resample("30min").generated.sum() / 6 * 0.5
# es = es.to_frame().resample("D").generated.sum() / 1000
es = (df_energy.reset_index().set_index("trading_interval").resample("D").eoi_quantity.sum() / 1000).to_frame()
c = v2df.join(es)
c["is_equal"] = c.energy == c.eoi_quantity
df.generated = df.generated / 2
df = df.reset_index()
df = df.set_index(["trading_interval"])
em = (df.resample("D").generated.sum() / 1000).to_frame()
c = v2df.join(em)
c["energy_sum"] = es.eoi_quantity
c["is_equal"] = c.energy == c.generated
c
query2 = """
select
fs.trading_interval at time zone 'AEST' as trading_interval,
-- fs.facility_code,
-- fs.network_id,
-- generated,
eoi_quantity
from
facility_scada fs
left join facility f on fs.facility_code = f.code
where
fs.network_id='NEM'
and f.network_region='NSW1'
and fs.trading_interval >= '{date_min}'
and fs.trading_interval <= '{date_max}'
and fs.is_forecast is False
and f.fueltech_id = 'solar_rooftop'
and fs.eoi_quantity is not null
order by fs.trading_interval asc, 2;
""".format(
date_min=date_min,
date_max=date_max
)
df.energy = pd.read_sql(query2, engine_local, index_col=["trading_interval"]).eoi_quantity.sum() / 1000
q = """
select
fs.trading_interval at time zone 'AEST' as trading_interval,
--fs.facility_code,
--fs.network_id,
generated
--eoi_quantity
from
facility_scada fs
left join facility f on fs.facility_code = f.code
where
fs.network_id='NEM'
and f.network_region='NSW1'
and fs.trading_interval >= '2021-02-15 00:00:00+10:00'
and fs.trading_interval <= '2021-02-16 00:00:00+10:00'
and fs.is_forecast is False
and f.fueltech_id = 'solar_rooftop'
and fs.generated is not null
order by fs.trading_interval asc, 2;
"""
dfl = pd.read_sql(q, engine_local, index_col=["trading_interval"])
dfs = pd.read_sql(q, engine, index_col=["trading_interval"])
j = dfl.join(dfs, lsuffix="_local")
j["eq"] = j.generated_local == j.generated
j
nsw_coal_black_duids = [
"MP1",
"REDBANK1",
"VP5",
"VP6",
"LD01",
"LD02",
"LD03",
"LD04",
"BW01",
"BW02",
"BW03",
"BW04",
"ER01",
"ER02",
"ER03",
"ER04",
"MM3",
"MM4",
"MP2",
"WW7",
"WW8"
]
nsw_rooftop_duids = [
"ROOFTOP_NEM_NSW"
]
df_nsw_coal = df[df.DUID.isin(nsw_coal_black_duids)]
network = NetworkNEM
# setup v2 df
df_nsw_coal_v2 = df_nsw_coal
df_nsw_coal_v2.SETTLEMENTDATE = pd.to_datetime(df_nsw_coal_v2.SETTLEMENTDATE)
df_nsw_coal_v2.INITIALMW = pd.to_numeric(df_nsw_coal_v2.INITIALMW)
df_nsw_coal_v2 = df_nsw_coal_v2.set_index(["SETTLEMENTDATE"])
# setup v1 df
df_nsw_coal = df_nsw_coal.rename(columns={"SETTLEMENTDATE": "trading_interval", "DUID": "facility_code", "INITIALMW": "eoi_quantity"})
df_nsw_coal["network_id"] = "NEM"
df_nsw_coal.trading_interval = df_nsw_coal.apply(
lambda x: pd.Timestamp(x.trading_interval, tz=network.get_fixed_offset()), axis=1
)
df_nsw_coal.trading_interval = pd.to_datetime(df_nsw_coal.trading_interval)
df_nsw_coal.eoi_quantity = pd.to_numeric(df_nsw_coal.eoi_quantity)
df_nsw_coal = df_nsw_coal.set_index(["trading_interval", "network_id", "facility_code"])
df_energy = _energy_aggregate(df_nsw_coal, network)
df_energy.trading_interval = df_energy.trading_interval - pd.Timedelta(minutes=network.reading_shift)
df_energy = df_energy.set_index(["trading_interval"])
df_energy
df_energy.resample("D").eoi_quantity.sum() / 1000
# test it out with v2 methods
from datetime import datetime, date, timedelta
def __trapezium_integration(d_ti):
return 0.5*(d_ti["INITIALMW"] * [1,2,2,2,2,2,1]).sum()/12
def __trading_energy_generator(df, date, duid_id):
df.sort_index(inplace=True)
t_start = datetime(date.year, date.month,date.day,0,5)
#48 trading intervals in the day
#(could be better with groupby function)
for TI in range(48):
#t_i initial timestamp of trading_interval, t_f = final timestamp of trading interval
t_i = t_start + timedelta(0,1800*TI)
t_f = t_start + timedelta(0,1800*(TI+1))
d_ti = df[(df.index>=t_i) & (df.index<=t_f) & (df.DUID == duid_id)]
if not d_ti.index.empty:
yield d_ti.index[-2], duid_id, __trapezium_integration(d_ti)
def trading_energy_data(df, date):
energy_genrecs = []
for duid in sorted(nsw_coal_black_duids):
energy_genrecs += [d for d in __trading_energy_generator(df, date, duid)]
df = pd.DataFrame(energy_genrecs, columns=['SETTLEMENTDATE', 'DUID','OUTPUT_MWH'])
return df
# df_nsw_coal_v2
df_energy_v2 = trading_energy_data(df_nsw_coal_v2, date(2021, 2, 14))
# df_energy_v2
df_energy_v2 = df_energy_v2.set_index("SETTLEMENTDATE")
df_energy_v2.sort_index(inplace=True)
df_energy_v2.OUTPUT_MWH.sum() / 1000
df_energy_v2
df_energy_v2[df_energy_v2.index == datetime.fromisoformat("2021-02-14 00:30:00")]
df_v3 = df_energy[df_energy.index.date == date(2021, 2, 14)]
# df_energy.index.date()
df_v3 = df_v3[["facility_code", "eoi_quantity"]]
# df_v3
df_v3[df_v3.index == datetime.fromisoformat("2021-02-14 00:25:00+10:00")]
```
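The `__trapezium_integration` weights `[1,2,2,2,2,2,1]` above are the composite trapezoidal rule applied to the seven 5-minute power readings that span one 30-minute trading interval; dividing by 12 converts the 5-minute MW steps into MWh. A self-contained sketch of the same calculation:

```python
def trapezium_energy_mwh(readings_mw):
    """Composite trapezoidal rule over seven 5-minute MW readings -> MWh."""
    assert len(readings_mw) == 7, "expects one 30-minute trading interval"
    weights = [1, 2, 2, 2, 2, 2, 1]
    # 0.5 * sum(w_i * p_i) / 12 == trapezoidal area with a 5-minute (1/12 hour) step
    return 0.5 * sum(w * p for w, p in zip(weights, readings_mw)) / 12

# Constant 60 MW over half an hour should integrate to exactly 30 MWh.
energy = trapezium_energy_mwh([60] * 7)
```

A linear ramp from 0 to 60 MW gives 15 MWh, matching the 30 MW average over half an hour, which is a quick way to convince yourself the weights are right.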
| github_jupyter |
<a href="https://colab.research.google.com/github/aydinmyilmaz/Transformers-Tutorials/blob/master/HF_fine_tune_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#!pip install transformers
!pip install -q datasets
```
## 0. Datasets Overview
```
from datasets import list_datasets
all_datasets = list_datasets()
all_datasets[0:10]
len(all_datasets)
from datasets import load_dataset
emotions = load_dataset("emotion")
emotions
emotions["train"]
import pandas as pd
df_sample = pd.DataFrame(list(zip(emotions["train"]["text"][0:10], emotions["train"]["label"][0:10])), columns=['text','label'])
df_sample
train_ds = emotions['train']
train_ds
len(train_ds)
train_ds[0]
train_ds.column_names
train_ds.features
```
## 1. From Datasets to DataFrames
```
import pandas as pd
emotions.set_format(type="pandas")
df = emotions['train'][:]
df.head()
def label_int2str(row):
return emotions['train'].features['label'].int2str(row)
df['label_name'] =df['label'].apply(label_int2str)
df.head()
import matplotlib.pyplot as plt
df['label_name'].value_counts(ascending=True).plot.barh()
plt.title("Frequency of Classes")
plt.show()
df['words_per_tweet'] = df['text'].str.split().apply(len)
df.boxplot("words_per_tweet", by="label_name", grid=False,
showfliers=False, color='black')
plt.suptitle('')
plt.xlabel("")
plt.show()
emotions.reset_format()
```
# 3. Tokenization
```
from transformers import AutoTokenizer, DistilBertTokenizer
model_ckpt = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
encoded_text = tokenizer("Tokenizing a text is a core task in NLP")
print(encoded_text)
tokens = tokenizer.convert_ids_to_tokens(encoded_text.input_ids)
print(tokens)
print(tokenizer.convert_tokens_to_string(tokens))
tokenizer.model_input_names
```
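The tokenizer maps text to integer ids and back via a fixed vocabulary. A toy round-trip illustration with a hypothetical vocabulary (the real DistilBERT tokenizer uses a ~30k-entry WordPiece vocabulary, so this only shows the shape of the idea):

```python
# Hypothetical toy vocabulary -- not the real DistilBERT WordPiece vocab.
vocab = {"[CLS]": 101, "[SEP]": 102, "tokenizing": 1, "a": 2, "text": 3}
inv_vocab = {i: t for t, i in vocab.items()}

def encode(tokens):
    """Wrap tokens in the special [CLS]/[SEP] markers and map to ids."""
    return [vocab["[CLS]"]] + [vocab[t] for t in tokens] + [vocab["[SEP]"]]

def decode(ids):
    """Map ids back to token strings."""
    return [inv_vocab[i] for i in ids]

ids = encode(["tokenizing", "a", "text"])
tokens = decode(ids)
```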
# 4. Tokenizing whole dataset
```
def tokenize(batch):
return tokenizer(batch['text'], padding=True, truncation=True)
emotions['train']['text'][1:2]
print(tokenize(emotions['train'][1:2]).keys())
print(tokenize(emotions['train'][1:2])['input_ids'])
print(tokenize(emotions['train'][0:2])['attention_mask'])
print(tokenize(emotions['train'][:2]))
emotions_encoded = emotions.map(tokenize, batched=True, batch_size=None)
print(emotions_encoded['train'].column_names)
#emotions_encoded['train'][0]
```
# 5. Training a Text Classifier
## 5.1 Using Transformers as Feature Extractors
```
import torch
from transformers import AutoModel
```
The AutoModel class converts the token encodings to embeddings, then feeds them through the encoder stack to return the hidden states
```
model_ckpt = "distilbert-base-uncased"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModel.from_pretrained(model_ckpt).to(device)
```
### Extracting the last hidden states
```
text = ['this is a test',"this is also another test"]
inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True)
print(f"Input tensor shape: {inputs['input_ids'].size()}")
inputs.keys()
inputs['input_ids']
inputs['attention_mask']
```
The next step is to place the tensor of encodings on the same device as the model
```
inputs = {k:v.to(device) for k, v in inputs.items()}
with torch.no_grad():
outputs = model(**inputs)
print(outputs)
print(outputs[0][1])
print(outputs[0].shape)
print(outputs.last_hidden_state.size())
import numpy
def extract_hidden_states(batch):
inputs = {k:v.to(device) for k, v in batch.items() if k in tokenizer.model_input_names}
with torch.no_grad():
last_hidden_state = model(**inputs).last_hidden_state
#return vector for [CLS] token
return {"hidden_state":last_hidden_state[:,0].cpu().numpy()}
tokenizer.model_input_names
emotions_encoded.set_format("torch",
columns = ["input_ids", "attention_mask", "label"])
emotions_hidden = emotions_encoded.map(extract_hidden_states, batched=True)
emotions_hidden['train'].column_names
```
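`extract_hidden_states` keeps only the hidden vector at position 0 (the `[CLS]` token) for each sequence, turning a `(batch, seq_len, hidden)` tensor into a `(batch, hidden)` feature matrix. The same slicing in plain Python, with nested lists standing in for the tensor:

```python
def cls_features(last_hidden_state):
    """Keep the position-0 ([CLS]) vector of each sequence in the batch."""
    return [sequence[0] for sequence in last_hidden_state]

# Batch of 2 sequences, seq_len 3, hidden size 4 (toy numbers).
batch = [
    [[0.1, 0.2, 0.3, 0.4], [9, 9, 9, 9], [8, 8, 8, 8]],
    [[0.5, 0.6, 0.7, 0.8], [7, 7, 7, 7], [6, 6, 6, 6]],
]
features = cls_features(batch)
```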
- Generating a feature matrix
```
import numpy as np
X_train = np.array(emotions_hidden['train']['hidden_state'])
X_valid = np.array(emotions_hidden['validation']['hidden_state'])
y_train = np.array(emotions_hidden['train']['label'])
y_valid = np.array(emotions_hidden['validation']['label'])
X_train.shape, X_valid.shape
from sklearn.linear_model import LogisticRegression
lr_clf = LogisticRegression(max_iter=3000)
lr_clf.fit(X_train, y_train)
lr_clf.score(X_valid, y_valid)
```
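Before reading too much into the validation score above, it helps to compare against a majority-class baseline, i.e. always predicting the most frequent training label. A sketch in plain Python with toy labels (not the actual emotion dataset):

```python
from collections import Counter

def majority_baseline_accuracy(y_train, y_valid):
    """Accuracy of always predicting the most common training label."""
    majority_label, _ = Counter(y_train).most_common(1)[0]
    correct = sum(1 for y in y_valid if y == majority_label)
    return correct / len(y_valid)

y_train = [0, 0, 0, 1, 2, 1]  # toy label ids
y_valid = [0, 1, 0, 2]
baseline = majority_baseline_accuracy(y_train, y_valid)
```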
| github_jupyter |
# Python
Most of you will have already used Python, but to make sure everyone is at the same point, please complete the following exercises.
1. Define variables.
2. Types of variables: Sets, dictionaries, lists etc.
3. Working with variables
4. Writing a function, iteration
5. List comprehension
6. Pandas and common functions
7. Matplotlib and common functions
## Defining variables
A variable is a named location in memory that stores a value. Once a variable is defined, it is easy to refer to it repeatedly in our programs.
In python, we define a variable as follows:
a = 1
Here, we have created the variable 'a', and assigned the value 1 to it. This is an integer type. There are also other types available.
If you assign a new value to an existing variable name, it will overwrite the previous assignment.
Create a new variable for each of the following values:
1.04
[1,2,3]
{"a":1, "b":2, "c":3}
"hello"
```
#Work here
```
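One possible solution (the variable names are arbitrary):

```python
my_float = 1.04
my_list = [1, 2, 3]
my_dict = {"a": 1, "b": 2, "c": 3}
my_string = "hello"
```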
# Types
You can then use a handy 'type' function to determine what type of value your variable holds.
type(a)
What are the various types for each of the variables you created above?
```
#work here
```
As you can see, having different types available can be very handy; you'll later see there are individual functions available for each type.
Let's look at the 'dict' type. You created a variable above containing the value:
{"a":1, "b":2, "c":3}
The value to the left of each colon is the 'key', whilst the value to the right of each colon is a value. Together, these create key value pairs. Multiple key value pairs are combined into a dictionary using commas.
It may be easier to see it written as follows:
new_dict = {
"a" : 1,
"b" : 2,
"c" : 3
}
You don't have to use a string as a key, or an integer as the value.
Create a dictionary in which the keys are integers, and the values are strings.
```
#work here
```
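One possible solution, a dictionary with integer keys and string values:

```python
int_key_dict = {1: "one", 2: "two", 3: "three"}
```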
It's very easy to access keys and values within a dictionary:
Try these out:
new_dict.keys()
new_dict.values()
new_dict[1]
If you do not have 1 as a key, replace it with another key you have used.
```
#work here
```
We can also perform transformations. For example, if we want to take all our dictionary values and put them into a list, we can use the 'list' function.
We can also do this when assigning new variables!
list(new_dict.values())
Have a go at creating a variable containing the integer 1.
Turn it into a float, using the function
float()
Turn it back into an integer using the function
int()
```
#work here
```
### <i>Extra info</i>
If you have a list with many common values, you can create a unique 'set', which is mutable - it can be changed after being created.
We won't cover the difference between mutable and immutable here, but if you are interested: https://medium.com/@meghamohan/mutable-and-immutable-side-of-python-c2145cf72747
See what happens:
my_list = [1,1,2,3,3,3]
my_set = set(my_list)
```
#work here
```
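Running the snippet shows that building a set drops the duplicate values:

```python
my_list = [1, 1, 2, 3, 3, 3]
my_set = set(my_list)  # duplicates are removed, each value appears once
```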
## Working with variables
We can perform all sorts of operations on variables:
a + b
c / d
my_set.add(5)
my_list.append(1)
Make the new variables 'a' and 'b' to contain strings, what happens when you add them?
```
#work here
```
If you create a set:
my_set = set([1,2,3,3])
Try to add another '2', to it.
my_set.add()
What happens?
Do the same with a list.
my_list.append()
What happens?
```
#work here
```
## Writing a function
If we wanted to create a new function, we can do so:
def sum_times_difference(a,b):
my_sum = a + b
my_difference = abs(a-b)
answer = my_sum * my_difference
return answer
We can then call the function:
sum_times_difference(x,y)
Note, the arguments do not have to be the same name as those given in the definition of the function.
Write this function below, and add comments (starting with a '#') to describe what is happening on each line. I've started the first for you.
```
#work here
def sum_times_difference(a,b): #defined sum_times_difference, taking two arguments.
my_sum = a + b
my_difference = abs(a-b)
answer = my_sum * my_difference
return answer
```
If a = 1, and b = 4, what is the returned value for sum_times_difference?
```
#work here
a = 1
b = 4
```
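Working it through: the sum is 1 + 4 = 5, the absolute difference is |1 - 4| = 3, so the function returns 5 * 3 = 15. Checking in code:

```python
def sum_times_difference(a, b):
    my_sum = a + b              # sum of the arguments
    my_difference = abs(a - b)  # absolute difference
    return my_sum * my_difference

answer = sum_times_difference(1, 4)  # (1 + 4) * |1 - 4| = 15
```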
## Iteration
If we have a list of values, or a range that we wish to iterate through, it is easy to do in python:
for elem in my_list:
print(elem)
You will notice we've started using print statements now!
Create the list 'my_list' with some values, and write the above for loop.
```
#work here
```
A shorter way to do this would be to use the built-in 'range' function:
range(0,11)
this will create a sequence containing all numbers from 0 up to (but not including) 11.
Replace my_list with range(0,11) in the for loop. What happens?
```
#work here
```
We can also do this with dictionaries and sets. As long as there are multiple values in the variable!
my_dict = {"A":1, "B":2, "C":3}
for key, val in my_dict.items():
print("Key: %s, Value: %i" %(key,val))
We can use .items() for dictionaries to extract the keys and values at each iteration.
Whereas:
for key in my_dict:
Will extract only the keys.
And:
for val in my_dict.values():
Will extract only the values.
The print statement has got more complex too. We can use % to indicate where in the string we wish to replace with a variable/value.
E.g.:
"Key %s" %key
We are replacing %s with the variable 'key' as a string (s), we can also use integers(i), floats (f) etc.
Add a line to the loop that uses sum_times_difference(val,2), and print the answer.
What happens if you don't 'print'?
```
#work here
```
## List comprehension
List comprehensions can be a bit of a brain ache, but they can turn your for loops into one-liners.
Let's look at an example:
my_list = [0,1,2,3,4,5,6]
[i*2 for i in my_list]
Can you tell what it is doing? Hint: read right to left. Run it in the cell below to see.
```
#work here
```
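Reading right to left: for each element `i` in `my_list`, compute `i*2` and collect the results into a new list, so every value is doubled:

```python
my_list = [0, 1, 2, 3, 4, 5, 6]
doubled = [i * 2 for i in my_list]  # each element multiplied by 2
```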
Write yourself a new function, that takes a single argument and performs the following:
1. Squares the value and saves to a variable
2. Cube the value and saves to a variable
3. Sums step 1 and 2 and returns the value.
You get a bonus point if you can put steps 1 and 2 into a single row!
Hint, 'to the power of' can be expressed as '**', so $2^2$ becomes:
2 ** 2
```
#work here
```
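One possible solution, with steps 1 and 2 on a single row for the bonus point:

```python
def square_plus_cube(x):
    squared, cubed = x ** 2, x ** 3  # steps 1 and 2 in one row
    return squared + cubed           # step 3: sum and return

result = square_plus_cube(2)  # 4 + 8 = 12
```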
Now, write a list comprehension using
my_list = [0,1,2,3,4,5,6]
but in each iteration, call your function and print the answer!
```
#work here
```
# Python packages (Pandas)
Making python even nicer to use is the wide range of publicly available packages to use in your code.
To import a package, simply write:
import N
(where N is the name of the package). Some packages come preinstalled when you install Python or conda; others may need to be installed separately. But don't worry about this, you've already got everything you'll need!
Let's have a look at pandas.
```
import pandas as pd
```
We like to be lazy and abbreviate some package names for ease when writing our code, pandas is typically abbreviated to 'pd'. You'll see the same with numpy ('np') and matplotlib.pyplot ('plt').
[Pandas](https://pandas.pydata.org/) is a package used for data structures and data analysis. We can use it to load in data, save data to files, and perform a variety of functions on the data.
Let's load in an example csv file by running the code below. .head() shows us the first 5 lines. .head(10) will show the first 10.
data = pd.read_csv("../example.csv")
data.head()
To see other arguments for read_csv(), you can perform
help(pd.read_csv)
```
#work here
```
The data has been loaded as a DataFrame type. We can call .columns to find column names, and .index to find the indexes.
```
data.columns
```
Looking at the columns, we can see one is possibly misspelled: 'Aged' should be 'Age'. Let's rename it. Luckily, pandas has a function for this; by passing a dictionary as an argument and setting axis=1 (for columns) and inplace=True, we don't have to save the result as a new variable.
```
data.rename({"Aged":"Age"}, axis=1, inplace=True)
data.head()
```
Like a dictionary, we can call particular columns.
data["Age"]
Returns all 'Age' values.
data["Age"].iloc[0]
Returns the age for the positional indexer 0 (the first row).
data["Age"].loc[2]
Returns the age for the index 2.
```
#work here
```
We can use things like 'len' and 'describe'.
len(data)
returns how many rows are in the data.
data.describe()
will return statistics for numerical column types.
What is the mean for age and height for this dataset?
How many entries are there?
```
#work here
```
A handy tool is .value_counts(), which will return the number of entries per unique value.
data["Age"].value_counts()
Will return the age as the index, and the number of entries with Age = Index. (So how many people are of age X).
How many people are aged 10?
```
#work here
```
| github_jupyter |
# Difference between gridded field (GRIB) and scattered observations (BUFR)
<img src="http://pandas.pydata.org/_static/pandas_logo.png" width=200>
In this example we will load a gridded model field in GRIB format and a set of observation data in BUFR format. We will then use Metview to examine the data, and compute and plot their differences. Then we will export the set of differences into a pandas dataframe for further inspection.
```
import metview as mv
use_mars = False # if False, then read data from disk
```
Metview retrieves/reads GRIB data into its [Fieldset](https://confluence.ecmwf.int/display/METV/Fieldset+Functions) class.
```
if use_mars:
t2m_grib = mv.retrieve(type='fc', date=-5, time=12, step=48, levtype='sfc', param='2t', grid='O160', gaussian='reduced')
else:
t2m_grib = mv.read('t2m_grib.grib')
```
Define our area of interest and set up some visual styling.
```
area = [30,-25,72,46] # S,W,N,E
europe = mv.geoview(
map_area_definition = "corners",
area = area,
coastlines = mv.mcoast(
map_coastline_land_shade = "on",
map_coastline_land_shade_colour = "#eeeeee",
map_grid_latitude_increment = 10,
map_grid_longitude_increment = 10)
)
auto_style = mv.mcont(contour_automatic_setting = "ecmwf")
grid_1x1 = mv.mcont(
contour = "off",
contour_grid_value_plot = "on",
contour_grid_value_plot_type = "marker",
contour_grid_value_marker_colour = "burgundy",
grib_scaling_of_retrieved_fields = "off"
)
```
Plot the locations of the grid points. We can see the spatial characteristics of the octahedral reduced Gaussian grid.
Plotting is performed through Metview's interface to the [Magics](https://confluence.ecmwf.int/display/MAGP/Magics) library developed at ECMWF. We will first define the view parameters (by default we will get a global map in cylindrical projection).
If we don't set the output destination to be Jupyter, we will get Metview's interactive display window.
```
mv.setoutput('jupyter')
mv.plot(europe, t2m_grib, auto_style, grid_1x1)
```
Metview retrieves/reads BUFR data into its [Bufr](https://confluence.ecmwf.int/display/METV/Observations+Functions) class.
```
if use_mars:
obs_3day = mv.retrieve(
type = "ob",
repres = "bu",
date = -3,
area = area
)
else:
obs_3day = mv.read('./obs_3day.bufr')
```
Plot the observations on the map.
```
obs_resize = mv.mobs(obs_size = 0.3, obs_ring_size = 0.3, obs_distance_apart = 1.8)
mv.plot(europe, obs_3day, obs_resize)
```
BUFR can contain a complex arrangement of data. Metview has a powerful BUFR examiner [tool](https://confluence.ecmwf.int/display/METV/CodesUI) to inspect the data contents and to see the available key names. This can be launched with the examine() function.
```
mv.examine(obs_3day)
```
With the information gleaned from that, we can filter the variable we require using the obsfilter() function. This returns a [Geopoints](https://confluence.ecmwf.int/display/METV/Geopoints) object, which has many more [functions](https://confluence.ecmwf.int/display/METV/Geopoints+Functions) available to it. Note: prior to Metview 5.1, only a numeric descriptor could be used to specify the parameter.
```
t2m_gpt = mv.obsfilter(
data = obs_3day,
parameter = 'airTemperatureAt2M',
output = 'geopoints'
)
```
Computing the difference between the gridded field and the scattered data is one line of code. Metview will, for each observation point, compute the interpolated value from the field at that location, perform the subtraction and put the result into a new Geopoints object.
```
diff = t2m_grib - t2m_gpt
```
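Under the hood, Metview interpolates the gridded field to each observation's location before subtracting. A simplified sketch of the idea using bilinear interpolation inside one regular grid cell (toy temperature values; the actual field here is on a reduced Gaussian grid, so Metview's interpolation is more involved):

```python
def bilinear(f00, f10, f01, f11, tx, ty):
    """Bilinear interpolation inside a grid cell; tx, ty are fractions in [0, 1]."""
    bottom = f00 * (1 - tx) + f10 * tx  # interpolate along the bottom edge
    top = f01 * (1 - tx) + f11 * tx     # interpolate along the top edge
    return bottom * (1 - ty) + top * ty # blend the two edges

# Field corner values (e.g. 2m temperature in K) and an observation mid-cell.
field_at_obs = bilinear(280.0, 282.0, 284.0, 286.0, 0.5, 0.5)
obs_value = 281.5
diff = field_at_obs - obs_value  # field minus observation
```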
We can then use Magics' powerful symbol plotting routine to assign colours and sizes based on the magnitude of the differences.
```
max_diff = mv.maxvalue(mv.abs(diff))
levels = [max_diff * x for x in [-1, -0.67, -0.33, -0.1, 0.1, 0.33, 0.67, 1]]
diff_symb = mv.msymb(
legend = "on",
symbol_type = "marker",
symbol_table_mode = "advanced",
symbol_outline = "on",
symbol_outline_colour = "charcoal",
symbol_advanced_table_selection_type = "list",
symbol_advanced_table_level_list = levels,
symbol_advanced_table_colour_method = "list",
symbol_advanced_table_colour_list = ["blue","sky","rgb(0.82,0.85,1)","white","rgb(0.9,0.8,0.8)","rgb(0.9,0.5,0.5)","red"],
symbol_advanced_table_height_list = [0.6,0.5,0.4,0.3,0.3,0.4,0.5,0.6]
)
mv.plot(europe, diff, diff_symb)
```
We can easily convert this to a pandas dataframe for further analysis.
```
df = diff.to_dataframe()
```
Print a summary of the whole data set:
```
df.describe()
```
Or print a summary of just the actual values:
```
df.value.describe()
```
Produce a quick scatterplot of latitude vs difference values:
```
df.plot.scatter(x='latitude', y='value', title='Scatterplot')
```
# Additional resources
- [Introductory Metview training course](https://confluence.ecmwf.int/display/METV/Data+analysis+and+visualisation+using+Metview)
- [Metview's Python interface](https://confluence.ecmwf.int/display/METV/Metview%27s+Python+Interface)
- [Function list](https://confluence.ecmwf.int/display/METV/List+of+Operators+and+Functions)
- [Gallery example (field-obs difference)](https://confluence.ecmwf.int/display/METV/Model-Obs%20Difference%20Example)
<a href="https://colab.research.google.com/github/FlamesLLC/Flamesmix-Video-ML-Engine-v1/blob/main/Neural_Video_Engine_Flamesmix_0_1_BETA.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# This is the NVE 1.0 Flamesmix Engine
```
```
# (C) - FLAMES LLC - 2019-2022
print("Installing CLIP...")
!git clone https://github.com/openai/CLIP &> /dev/null
print("Installing API Functions")
!git clone https://github.com/CompVis/taming-transformers &> /dev/null
!pip install ftfy regex tqdm omegaconf pytorch-lightning &> /dev/null
!pip install kornia &> /dev/null
!pip install einops &> /dev/null
!pip install wget &> /dev/null
print("Generating Chicken Eggs")
!pip install stegano &> /dev/null
!apt install exempi &> /dev/null
!pip install python-xmp-toolkit &> /dev/null
!pip install imgtag &> /dev/null
!pip install pillow==7.1.2 &> /dev/null
print("Genrating Video format")
!pip install imageio-ffmpeg &> /dev/null
!mkdir steps
print("Installing Engine.")
#@title Flamesmix Engine v1.0
# SIMPLE DEMO
imagenet_1024 = False #@param {type:"boolean"}
imagenet_16384 = True #@param {type:"boolean"}
gumbel_8192 = False #@param {type:"boolean"}
coco = False #@param {type:"boolean"}
faceshq = False #@param {type:"boolean"}
wikiart_1024 = False #@param {type:"boolean"}
wikiart_16384 = False #@param {type:"boolean"}
sflckr = False #@param {type:"boolean"}
ade20k = False #@param {type:"boolean"}
ffhq = False #@param {type:"boolean"}
celebahq = False #@param {type:"boolean"}
if imagenet_1024:
!curl -L -o vqgan_imagenet_f16_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.yaml' #ImageNet 1024
!curl -L -o vqgan_imagenet_f16_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_1024.ckpt' #ImageNet 1024
if imagenet_16384:
!curl -L -o vqgan_imagenet_f16_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.yaml' #ImageNet 16384
!curl -L -o vqgan_imagenet_f16_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/vqgan_imagenet_f16_16384.ckpt' #ImageNet 16384
if gumbel_8192:
!curl -L -o gumbel_8192.yaml -C - 'https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/files/?p=%2Fconfigs%2Fmodel.yaml&dl=1' #Gumbel 8192
!curl -L -o gumbel_8192.ckpt -C - 'https://heibox.uni-heidelberg.de/d/2e5662443a6b4307b470/files/?p=%2Fckpts%2Flast.ckpt&dl=1' #Gumbel 8192
if coco:
!curl -L -o coco.yaml -C - 'https://dl.nmkd.de/ai/clip/coco/coco.yaml' #COCO
!curl -L -o coco.ckpt -C - 'https://dl.nmkd.de/ai/clip/coco/coco.ckpt' #COCO
if faceshq:
!curl -L -o faceshq.yaml -C - 'https://drive.google.com/uc?export=download&id=1fHwGx_hnBtC8nsq7hesJvs-Klv-P0gzT' #FacesHQ
!curl -L -o faceshq.ckpt -C - 'https://app.koofr.net/content/links/a04deec9-0c59-4673-8b37-3d696fe63a5d/files/get/last.ckpt?path=%2F2020-11-13T21-41-45_faceshq_transformer%2Fcheckpoints%2Flast.ckpt' #FacesHQ
if wikiart_1024:
!curl -L -o wikiart_1024.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart.yaml' #WikiArt 1024
!curl -L -o wikiart_1024.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart.ckpt' #WikiArt 1024
if wikiart_16384:
!curl -L -o wikiart_16384.yaml -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.yaml' #WikiArt 16384
!curl -L -o wikiart_16384.ckpt -C - 'http://mirror.io.community/blob/vqgan/wikiart_16384.ckpt' #WikiArt 16384
if sflckr:
!curl -L -o sflckr.yaml -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fconfigs%2F2020-11-09T13-31-51-project.yaml&dl=1' #S-FLCKR
!curl -L -o sflckr.ckpt -C - 'https://heibox.uni-heidelberg.de/d/73487ab6e5314cb5adba/files/?p=%2Fcheckpoints%2Flast.ckpt&dl=1' #S-FLCKR
if ade20k:
!curl -L -o ade20k.yaml -C - 'https://static.miraheze.org/intercriaturaswiki/b/bf/Ade20k.txt' #ADE20K
!curl -L -o ade20k.ckpt -C - 'https://app.koofr.net/content/links/0f65c2cd-7102-4550-a2bd-07fd383aac9e/files/get/last.ckpt?path=%2F2020-11-20T21-45-44_ade20k_transformer%2Fcheckpoints%2Flast.ckpt' #ADE20K
if ffhq:
!curl -L -o ffhq.yaml -C - 'https://app.koofr.net/content/links/0fc005bf-3dca-4079-9d40-cdf38d42cd7a/files/get/2021-04-23T18-19-01-project.yaml?path=%2F2021-04-23T18-19-01_ffhq_transformer%2Fconfigs%2F2021-04-23T18-19-01-project.yaml&force' #FFHQ
!curl -L -o ffhq.ckpt -C - 'https://app.koofr.net/content/links/0fc005bf-3dca-4079-9d40-cdf38d42cd7a/files/get/last.ckpt?path=%2F2021-04-23T18-19-01_ffhq_transformer%2Fcheckpoints%2Flast.ckpt&force' #FFHQ
if celebahq:
!curl -L -o celebahq.yaml -C - 'https://app.koofr.net/content/links/6dddf083-40c8-470a-9360-a9dab2a94e96/files/get/2021-04-23T18-11-19-project.yaml?path=%2F2021-04-23T18-11-19_celebahq_transformer%2Fconfigs%2F2021-04-23T18-11-19-project.yaml&force' #CelebA-HQ
!curl -L -o celebahq.ckpt -C - 'https://app.koofr.net/content/links/6dddf083-40c8-470a-9360-a9dab2a94e96/files/get/last.ckpt?path=%2F2021-04-23T18-11-19_celebahq_transformer%2Fcheckpoints%2Flast.ckpt&force' #CelebA-HQ
# @title DEMO
import argparse
import math
from pathlib import Path
import sys
sys.path.append('./taming-transformers')
from IPython import display
from base64 import b64encode
from omegaconf import OmegaConf
from PIL import Image
from taming.models import cond_transformer, vqgan
import torch
from torch import nn, optim
from torch.nn import functional as F
from torchvision import transforms
from torchvision.transforms import functional as TF
from tqdm.notebook import tqdm
from CLIP import clip
import kornia.augmentation as K
import numpy as np
import imageio
from PIL import ImageFile, Image
from imgtag import ImgTag # metadata
from libxmp import * # metadata
import libxmp # metadata
from stegano import lsb
import json
ImageFile.LOAD_TRUNCATED_IMAGES = True
def sinc(x):
return torch.where(x != 0, torch.sin(math.pi * x) / (math.pi * x), x.new_ones([]))
def lanczos(x, a):
cond = torch.logical_and(-a < x, x < a)
out = torch.where(cond, sinc(x) * sinc(x/a), x.new_zeros([]))
return out / out.sum()
def ramp(ratio, width):
n = math.ceil(width / ratio + 1)
out = torch.empty([n])
cur = 0
for i in range(out.shape[0]):
out[i] = cur
cur += ratio
return torch.cat([-out[1:].flip([0]), out])[1:-1]
def resample(input, size, align_corners=True):
n, c, h, w = input.shape
dh, dw = size
input = input.view([n * c, 1, h, w])
if dh < h:
kernel_h = lanczos(ramp(dh / h, 2), 2).to(input.device, input.dtype)
pad_h = (kernel_h.shape[0] - 1) // 2
input = F.pad(input, (0, 0, pad_h, pad_h), 'reflect')
input = F.conv2d(input, kernel_h[None, None, :, None])
if dw < w:
kernel_w = lanczos(ramp(dw / w, 2), 2).to(input.device, input.dtype)
pad_w = (kernel_w.shape[0] - 1) // 2
input = F.pad(input, (pad_w, pad_w, 0, 0), 'reflect')
input = F.conv2d(input, kernel_w[None, None, None, :])
input = input.view([n, c, h, w])
return F.interpolate(input, size, mode='bicubic', align_corners=align_corners)
class ReplaceGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, x_forward, x_backward):
ctx.shape = x_backward.shape
return x_forward
@staticmethod
def backward(ctx, grad_in):
return None, grad_in.sum_to_size(ctx.shape)
replace_grad = ReplaceGrad.apply
class ClampWithGrad(torch.autograd.Function):
@staticmethod
def forward(ctx, input, min, max):
ctx.min = min
ctx.max = max
ctx.save_for_backward(input)
return input.clamp(min, max)
@staticmethod
def backward(ctx, grad_in):
input, = ctx.saved_tensors
return grad_in * (grad_in * (input - input.clamp(ctx.min, ctx.max)) >= 0), None, None
clamp_with_grad = ClampWithGrad.apply
def vector_quantize(x, codebook):
d = x.pow(2).sum(dim=-1, keepdim=True) + codebook.pow(2).sum(dim=1) - 2 * x @ codebook.T
indices = d.argmin(-1)
x_q = F.one_hot(indices, codebook.shape[0]).to(d.dtype) @ codebook
return replace_grad(x_q, x)
class Prompt(nn.Module):
def __init__(self, embed, weight=1., stop=float('-inf')):
super().__init__()
self.register_buffer('embed', embed)
self.register_buffer('weight', torch.as_tensor(weight))
self.register_buffer('stop', torch.as_tensor(stop))
def forward(self, input):
input_normed = F.normalize(input.unsqueeze(1), dim=2)
embed_normed = F.normalize(self.embed.unsqueeze(0), dim=2)
dists = input_normed.sub(embed_normed).norm(dim=2).div(2).arcsin().pow(2).mul(2)
dists = dists * self.weight.sign()
return self.weight.abs() * replace_grad(dists, torch.maximum(dists, self.stop)).mean()
def parse_prompt(prompt):
vals = prompt.rsplit(':', 2)
vals = vals + ['', '1', '-inf'][len(vals):]
return vals[0], float(vals[1]), float(vals[2])
class MakeCutouts(nn.Module):
def __init__(self, cut_size, cutn, cut_pow=1.):
super().__init__()
self.cut_size = cut_size
self.cutn = cutn
self.cut_pow = cut_pow
self.augs = nn.Sequential(
K.RandomHorizontalFlip(p=0.5),
# K.RandomSolarize(0.01, 0.01, p=0.7),
K.RandomSharpness(0.3,p=0.4),
K.RandomAffine(degrees=30, translate=0.1, p=0.8, padding_mode='border'),
K.RandomPerspective(0.2,p=0.4),
K.ColorJitter(hue=0.01, saturation=0.01, p=0.7))
self.noise_fac = 0.1
def forward(self, input):
sideY, sideX = input.shape[2:4]
max_size = min(sideX, sideY)
min_size = min(sideX, sideY, self.cut_size)
cutouts = []
for _ in range(self.cutn):
size = int(torch.rand([])**self.cut_pow * (max_size - min_size) + min_size)
offsetx = torch.randint(0, sideX - size + 1, ())
offsety = torch.randint(0, sideY - size + 1, ())
cutout = input[:, :, offsety:offsety + size, offsetx:offsetx + size]
cutouts.append(resample(cutout, (self.cut_size, self.cut_size)))
batch = self.augs(torch.cat(cutouts, dim=0))
if self.noise_fac:
facs = batch.new_empty([self.cutn, 1, 1, 1]).uniform_(0, self.noise_fac)
batch = batch + facs * torch.randn_like(batch)
return batch
def load_vqgan_model(config_path, checkpoint_path):
config = OmegaConf.load(config_path)
if config.model.target == 'taming.models.vqgan.VQModel':
model = vqgan.VQModel(**config.model.params)
model.eval().requires_grad_(False)
model.init_from_ckpt(checkpoint_path)
elif config.model.target == 'taming.models.cond_transformer.Net2NetTransformer':
parent_model = cond_transformer.Net2NetTransformer(**config.model.params)
parent_model.eval().requires_grad_(False)
parent_model.init_from_ckpt(checkpoint_path)
model = parent_model.first_stage_model
elif config.model.target == 'taming.models.vqgan.GumbelVQ':
model = vqgan.GumbelVQ(**config.model.params)
print(config.model.params)
model.eval().requires_grad_(False)
model.init_from_ckpt(checkpoint_path)
else:
raise ValueError(f'unknown model type: {config.model.target}')
del model.loss
return model
def resize_image(image, out_size):
ratio = image.size[0] / image.size[1]
area = min(image.size[0] * image.size[1], out_size[0] * out_size[1])
size = round((area * ratio)**0.5), round((area / ratio)**0.5)
return image.resize(size, Image.LANCZOS)
def download_img(img_url):
try:
return wget.download(img_url,out="input.jpg")
except:
return
#@title Generation parameters
textos = "a fantasy world" #@param {type:"string"}
ancho = 480#@param {type:"number"}
alto = 480#@param {type:"number"}
modelo = "vqgan_imagenet_f16_16384" #@param ["vqgan_imagenet_f16_16384", "vqgan_imagenet_f16_1024", "wikiart_1024", "wikiart_16384", "coco", "faceshq", "sflckr", "ade20k", "ffhq", "celebahq", "gumbel_8192"]
intervalo_imagenes = 50#@param {type:"number"}
imagen_inicial = None#@param {type:"string"}
imagenes_objetivo = None#@param {type:"string"}
seed = -1#@param {type:"number"}
max_iteraciones = -1#@param {type:"number"}
input_images = ""
nombres_modelos={"vqgan_imagenet_f16_16384": 'ImageNet 16384',"vqgan_imagenet_f16_1024":"ImageNet 1024",
"wikiart_1024":"WikiArt 1024", "wikiart_16384":"WikiArt 16384", "coco":"COCO-Stuff", "faceshq":"FacesHQ", "sflckr":"S-FLCKR", "ade20k":"ADE20K", "ffhq":"FFHQ", "celebahq":"CelebA-HQ", "gumbel_8192": "Gumbel 8192"}
nombre_modelo = nombres_modelos[modelo]
if modelo == "gumbel_8192":
is_gumbel = True
else:
is_gumbel = False
if seed == -1:
seed = None
if imagen_inicial == "None":
imagen_inicial = None
elif imagen_inicial and imagen_inicial.lower().startswith("http"):
imagen_inicial = download_img(imagen_inicial)
if imagenes_objetivo == "None" or not imagenes_objetivo:
imagenes_objetivo = []
else:
imagenes_objetivo = imagenes_objetivo.split("|")
imagenes_objetivo = [image.strip() for image in imagenes_objetivo]
if imagen_inicial or imagenes_objetivo != []:
input_images = True
textos = [frase.strip() for frase in textos.split("|")]
if textos == ['']:
textos = []
args = argparse.Namespace(
prompts=textos,
image_prompts=imagenes_objetivo,
noise_prompt_seeds=[],
noise_prompt_weights=[],
size=[ancho, alto],
init_image=imagen_inicial,
init_weight=0.,
clip_model='ViT-B/32',
vqgan_config=f'{modelo}.yaml',
vqgan_checkpoint=f'{modelo}.ckpt',
step_size=0.1,
cutn=64,
cut_pow=1.,
display_freq=intervalo_imagenes,
seed=seed,
)
#@title Run the execution...
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
if textos:
print('Using texts:', textos)
if imagenes_objetivo:
print('Using image prompts:', imagenes_objetivo)
if args.seed is None:
seed = torch.seed()
else:
seed = args.seed
torch.manual_seed(seed)
print('Using seed:', seed)
model = load_vqgan_model(args.vqgan_config, args.vqgan_checkpoint).to(device)
perceptor = clip.load(args.clip_model, jit=False)[0].eval().requires_grad_(False).to(device)
cut_size = perceptor.visual.input_resolution
if is_gumbel:
e_dim = model.quantize.embedding_dim
else:
e_dim = model.quantize.e_dim
f = 2**(model.decoder.num_resolutions - 1)
make_cutouts = MakeCutouts(cut_size, args.cutn, cut_pow=args.cut_pow)
if is_gumbel:
n_toks = model.quantize.n_embed
else:
n_toks = model.quantize.n_e
toksX, toksY = args.size[0] // f, args.size[1] // f
sideX, sideY = toksX * f, toksY * f
if is_gumbel:
z_min = model.quantize.embed.weight.min(dim=0).values[None, :, None, None]
z_max = model.quantize.embed.weight.max(dim=0).values[None, :, None, None]
else:
z_min = model.quantize.embedding.weight.min(dim=0).values[None, :, None, None]
z_max = model.quantize.embedding.weight.max(dim=0).values[None, :, None, None]
if args.init_image:
pil_image = Image.open(args.init_image).convert('RGB')
pil_image = pil_image.resize((sideX, sideY), Image.LANCZOS)
z, *_ = model.encode(TF.to_tensor(pil_image).to(device).unsqueeze(0) * 2 - 1)
else:
one_hot = F.one_hot(torch.randint(n_toks, [toksY * toksX], device=device), n_toks).float()
if is_gumbel:
z = one_hot @ model.quantize.embed.weight
else:
z = one_hot @ model.quantize.embedding.weight
z = z.view([-1, toksY, toksX, e_dim]).permute(0, 3, 1, 2)
z_orig = z.clone()
z.requires_grad_(True)
opt = optim.Adam([z], lr=args.step_size)
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
std=[0.26862954, 0.26130258, 0.27577711])
pMs = []
for prompt in args.prompts:
txt, weight, stop = parse_prompt(prompt)
embed = perceptor.encode_text(clip.tokenize(txt).to(device)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for prompt in args.image_prompts:
path, weight, stop = parse_prompt(prompt)
img = resize_image(Image.open(path).convert('RGB'), (sideX, sideY))
batch = make_cutouts(TF.to_tensor(img).unsqueeze(0).to(device))
embed = perceptor.encode_image(normalize(batch)).float()
pMs.append(Prompt(embed, weight, stop).to(device))
for seed, weight in zip(args.noise_prompt_seeds, args.noise_prompt_weights):
gen = torch.Generator().manual_seed(seed)
embed = torch.empty([1, perceptor.visual.output_dim]).normal_(generator=gen)
pMs.append(Prompt(embed, weight).to(device))
def synth(z):
if is_gumbel:
z_q = vector_quantize(z.movedim(1, 3), model.quantize.embed.weight).movedim(3, 1)
else:
z_q = vector_quantize(z.movedim(1, 3), model.quantize.embedding.weight).movedim(3, 1)
return clamp_with_grad(model.decode(z_q).add(1).div(2), 0, 1)
def add_xmp_data(nombrefichero):
imagen = ImgTag(filename=nombrefichero)
imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'creator', 'VQGAN+CLIP', {"prop_array_is_ordered":True, "prop_value_is_array":True})
if args.prompts:
imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', " | ".join(args.prompts), {"prop_array_is_ordered":True, "prop_value_is_array":True})
else:
imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'title', 'None', {"prop_array_is_ordered":True, "prop_value_is_array":True})
imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'i', str(i), {"prop_array_is_ordered":True, "prop_value_is_array":True})
imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'model', nombre_modelo, {"prop_array_is_ordered":True, "prop_value_is_array":True})
imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'seed',str(seed) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'input_images',str(input_images) , {"prop_array_is_ordered":True, "prop_value_is_array":True})
#for frases in args.prompts:
# imagen.xmp.append_array_item(libxmp.consts.XMP_NS_DC, 'Prompt' ,frases, {"prop_array_is_ordered":True, "prop_value_is_array":True})
imagen.close()
def add_stegano_data(filename):
data = {
"title": " | ".join(args.prompts) if args.prompts else None,
"notebook": "VQGAN+CLIP",
"i": i,
"model": nombre_modelo,
"seed": str(seed),
"input_images": input_images
}
lsb.hide(filename, json.dumps(data)).save(filename)
@torch.no_grad()
def checkin(i, losses):
losses_str = ', '.join(f'{loss.item():g}' for loss in losses)
tqdm.write(f'i: {i}, loss: {sum(losses).item():g}, losses: {losses_str}')
out = synth(z)
TF.to_pil_image(out[0].cpu()).save('progress.png')
add_stegano_data('progress.png')
add_xmp_data('progress.png')
display.display(display.Image('progress.png'))
def ascend_txt():
global i
out = synth(z)
iii = perceptor.encode_image(normalize(make_cutouts(out))).float()
result = []
if args.init_weight:
result.append(F.mse_loss(z, z_orig) * args.init_weight / 2)
for prompt in pMs:
result.append(prompt(iii))
img = np.array(out.mul(255).clamp(0, 255)[0].cpu().detach().numpy().astype(np.uint8))[:,:,:]
img = np.transpose(img, (1, 2, 0))
filename = f"steps/{i:04}.png"
imageio.imwrite(filename, np.array(img))
add_stegano_data(filename)
add_xmp_data(filename)
return result
def train(i):
opt.zero_grad()
lossAll = ascend_txt()
if i % args.display_freq == 0:
checkin(i, lossAll)
loss = sum(lossAll)
loss.backward()
opt.step()
with torch.no_grad():
z.copy_(z.maximum(z_min).minimum(z_max))
i = 0
try:
with tqdm() as pbar:
while True:
train(i)
if i == max_iteraciones:
break
i += 1
pbar.update()
except KeyboardInterrupt:
pass
"""## Generating video resolution
FPS OF VIDEO
"""
init_frame = 1 # first video frame
last_frame = i # last generated frame
min_fps = 10
max_fps = 30
total_frames = last_frame-init_frame
length = 15 # target video length in seconds
frames = []
tqdm.write('Generating video...')
for i in range(init_frame,last_frame): #
filename = f"steps/{i:04}.png"
frames.append(Image.open(filename))
#fps = last_frame/10
fps = np.clip(total_frames/length,min_fps,max_fps)
from subprocess import Popen, PIPE
p = Popen(['ffmpeg', '-y', '-f', 'image2pipe', '-vcodec', 'png', '-r', str(fps), '-i', '-', '-vcodec', 'libx264', '-r', str(fps), '-pix_fmt', 'yuv420p', '-crf', '17', '-preset', 'veryslow', 'video.mp4'], stdin=PIPE)
for im in tqdm(frames):
im.save(p.stdin, 'PNG')
p.stdin.close()
print("The video is generating")
p.wait()
print("It has generated")
# @title Version of base 64 video
# @markdown BASE64 ENCODER FORMAT
mp4 = open('video.mp4','rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
display.HTML("""
<video width=400 controls>
<source src="%s" type="video/mp4">
</video>
""" % data_url)
# @title Download video
from google.colab import files
files.download("video.mp4")
```
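The `parse_prompt` helper above accepts prompts of the form `text:weight:stop`, with the weight and stop values optional. A standalone copy of its logic, which can be exercised outside the notebook:

```python
def parse_prompt(prompt):
    # rsplit from the right so colons inside the text itself survive
    vals = prompt.rsplit(':', 2)
    # Pad missing fields with the defaults: weight 1, stop -inf
    vals = vals + ['', '1', '-inf'][len(vals):]
    return vals[0], float(vals[1]), float(vals[2])

print(parse_prompt('a fantasy world'))      # → ('a fantasy world', 1.0, -inf)
print(parse_prompt('a fantasy world:0.5'))  # → ('a fantasy world', 0.5, -inf)
```

Splitting from the right at most twice means a prompt containing extra colons (for example a URL) still parses as text plus at most two numeric fields.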
[View in Colaboratory](https://colab.research.google.com/github/Manelmc/rnn-time-to-event/blob/master/predictive-maintenance-turbofan-engine.ipynb)
# Predictive Maintenance for the Turbofan Engine Dataset
## Data Preparation
```
import google.colab
import tensorflow as tf
print(tf.__version__)
import keras
import keras.backend as K
print("Keras version", keras.__version__)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# Setting seed for reproducibility
SEED = 42
np.random.seed(SEED)
from data_generation_utils import *
import torch
from torch import nn
import pickle as pkl
from torch.utils.data import Dataset,DataLoader,random_split
!mkdir Dataset
!mkdir Models
!wget -q https://raw.githubusercontent.com/Manelmc/rnn-time-to-event/master/Dataset/PM_test.txt -O Dataset/PM_test.txt
!wget -q https://raw.githubusercontent.com/Manelmc/rnn-time-to-event/master/Dataset/PM_train.txt -O Dataset/PM_train.txt
!wget -q https://raw.githubusercontent.com/Manelmc/rnn-time-to-event/master/Dataset/PM_truth.txt -O Dataset/PM_truth.txt
!ls Dataset
```
### Turbofan Train Set
```
# Read in the data generated for the Glazier thesis
file_load = open('drift_rank.pkl','rb')
dataset = survival_dataset_cont(file_load, SOS=5, normed=True)
file_load.close()
train_data,test_data = random_split(dataset,[390000,10000])
# sample size
n_wtte = 2000
for i in range(n_wtte):
truncate = int(dataset[:][5][i].item() + 1)
if i == 0:
state = dataset[:][0][0, range(truncate)].numpy()
idd = np.repeat(i, truncate)
tte = np.repeat(dataset[:][5][i], truncate).numpy()
times = np.arange(truncate)
label = np.repeat(dataset[:][4][i], truncate).numpy()
age = np.repeat(dataset[:][3][i], truncate).numpy()
else:
temp_state = dataset[:][0][0, range(truncate)].numpy()
state = np.concatenate([state, temp_state])
temp_idd = np.repeat(i, truncate)
idd = np.concatenate([idd, temp_idd])
temp_tte = np.repeat(dataset[:][5][i], truncate).numpy()
tte = np.concatenate([tte, temp_tte])
temp_times = np.arange(truncate)
times = np.concatenate([times, temp_times])
temp_label = np.repeat(dataset[:][4][i], truncate).numpy()
label = np.concatenate([label, temp_label])
temp_age = np.repeat(dataset[:][3][i], truncate).numpy()
age = np.concatenate([age, temp_age])
df1 = pd.DataFrame({'id': idd, 'tte': tte, 'times': times, 'label': label, 'age': age, 'state': state})
df1['RUL'] = df1['tte'] - df1['times']
df1
# remove column tte
df1.drop('tte', axis=1, inplace=True)
df_tr = df1
from sklearn import preprocessing
# read training data - It is the aircraft engine run-to-failure data.
train_df = pd.read_csv('Dataset/PM_train.txt', sep=" ", header=None)
train_df.drop(train_df.columns[[26, 27]], axis=1, inplace=True)
train_df.columns = ['id', 'cycle', 'setting1', 'setting2', 'setting3', 's1', 's2', 's3',
's4', 's5', 's6', 's7', 's8', 's9', 's10', 's11', 's12', 's13', 's14',
's15', 's16', 's17', 's18', 's19', 's20', 's21']
train_df = train_df.sort_values(['id','cycle'])
# Data Labeling - generate column RUL (Remaining Useful Life or Time to Failure)
rul = pd.DataFrame(train_df.groupby('id')['cycle'].max()).reset_index()
rul.columns = ['id', 'max']
train_df = train_df.merge(rul, on=['id'], how='left')
train_df['RUL'] = train_df['max'] - train_df['cycle']
train_df.drop('max', axis=1, inplace=True)
# MinMax normalization (from 0 to 1)
train_df['cycle_norm'] = train_df['cycle']
cols_normalize = train_df.columns.difference(['id','cycle','RUL','label1','label2'])
min_max_scaler = preprocessing.MinMaxScaler()
norm_train_df = pd.DataFrame(min_max_scaler.fit_transform(train_df[cols_normalize]),
columns=cols_normalize,
index=train_df.index)
join_df = train_df[train_df.columns.difference(cols_normalize)].join(norm_train_df)
train_df = join_df.reindex(columns = train_df.columns)
train_df[train_df["id"] == 1].tail()
```
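The RUL labelling above (the maximum observed cycle per engine minus the current cycle) can be reproduced on a toy frame with the same groupby/merge pattern:

```python
import pandas as pd

# Toy run-to-failure log: engine 1 fails after 3 cycles, engine 2 after 2
toy = pd.DataFrame({"id": [1, 1, 1, 2, 2],
                    "cycle": [1, 2, 3, 1, 2]})

# Max observed cycle per engine, merged back as in the cell above
rul = pd.DataFrame(toy.groupby("id")["cycle"].max()).reset_index()
rul.columns = ["id", "max"]
toy = toy.merge(rul, on=["id"], how="left")
toy["RUL"] = toy["max"] - toy["cycle"]
toy.drop("max", axis=1, inplace=True)
print(toy["RUL"].tolist())  # → [2, 1, 0, 1, 0]
```

Each engine's final recorded cycle gets RUL 0, which matches the run-to-failure assumption of the training set.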
### Turbofan Test Set
```
# Format age test set
ages_plot = np.linspace(0,5,endpoint=True,num=9)
state_te = np.repeat(5, len(ages_plot))
idd_te = np.arange(0, len(ages_plot))
times_te = np.repeat(1, len(ages_plot))
event_te = np.repeat(0, len(ages_plot))
futime_te = np.repeat(1, len(ages_plot))
df_te = pd.DataFrame({'id': idd_te, 'tte': futime_te, 'times': times_te, 'label': event_te, 'age': ages_plot, 'state': state_te})
#df = pd.concat([df1, df_te])
df_te
df_te['RUL'] = df_te['tte'] - df_te['times']
df_te
from sklearn import preprocessing
# read test data - It is the aircraft engine operating data without failure events recorded.
test_df = pd.read_csv('Dataset/PM_test.txt', sep=" ", header=None)
test_df.drop(test_df.columns[[26, 27]], axis=1, inplace=True)
test_df.columns = ['id', 'cycle', 'setting1', 'setting2', 'setting3', 's1', 's2', 's3',
's4', 's5', 's6', 's7', 's8', 's9', 's10', 's11', 's12', 's13', 's14',
's15', 's16', 's17', 's18', 's19', 's20', 's21']
# MinMax normalization (from 0 to 1)
test_df['cycle_norm'] = test_df['cycle']
norm_test_df = pd.DataFrame(min_max_scaler.transform(test_df[cols_normalize]),
columns=cols_normalize,
index=test_df.index)
test_join_df = test_df[test_df.columns.difference(cols_normalize)].join(norm_test_df)
test_df = test_join_df.reindex(columns = test_df.columns)
test_df = test_df.reset_index(drop=True)
# read ground truth data - It contains the information of true remaining cycles for each engine in the testing data.
truth_df = pd.read_csv('Dataset/PM_truth.txt', sep=" ", header=None)
truth_df.drop(truth_df.columns[[1]], axis=1, inplace=True)
# generate column max for test data
rul = pd.DataFrame(test_df.groupby('id')['cycle'].max()).reset_index()
rul.columns = ['id', 'max']
truth_df.columns = ['more']
truth_df['id'] = truth_df.index + 1
truth_df['max'] = rul['max'] + truth_df['more']
truth_df.drop('more', axis=1, inplace=True)
# generate RUL for test data
test_df = test_df.merge(truth_df, on=['id'], how='left')
test_df['RUL'] = test_df['max'] - test_df['cycle']
test_df.drop('max', axis=1, inplace=True)
test_df[test_df["id"] == 1].tail()
print(test_df.shape)
print(train_df.shape)
```
### Apply right padding to all the sequences
```
def pad_sequence(df, max_seq_length, mask=0):
"""
Applies right padding to a sequence up to max_seq_length with the given mask value
"""
return np.pad(df.values, ((0, max_seq_length - df.values.shape[0]), (0,0)),
"constant", constant_values=mask)
def pad_engines(df, cols, max_batch_len, mask=0):
"""
Applies right padding to the columns "cols" of all the engines
"""
return np.array([pad_sequence(df[df['id'] == batch_id][cols], max_batch_len, mask=mask)
for batch_id in df['id'].unique()])
max_batch_len = train_df['id'].value_counts().max()
train_cols = ['s' + str(i) for i in range(1,22)] + ['setting1', 'setting2', 'setting3', 'cycle_norm']
test_cols = ["RUL"]
X = pad_engines(train_df, train_cols, max_batch_len)
Y = pad_engines(train_df, test_cols, max_batch_len)
max_batch_len = df_tr['id'].value_counts().max()
train_cols = ['label', 'age', 'state']
test_cols = ["RUL"]
X = pad_engines(df_tr, train_cols, max_batch_len)
Y = pad_engines(df_tr, test_cols, max_batch_len)
```
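The padding above relies on `np.pad` with one `(before, after)` pair per axis: padding is added only after the last timestep, never on the feature axis. A minimal self-contained sketch of the same right-padding step:

```python
import numpy as np

# Right-pad a (timesteps, features) sequence to 4 timesteps,
# as pad_sequence above does for each engine
seq = np.array([[1.0, 2.0],
                [3.0, 4.0]])            # 2 timesteps, 2 features
padded = np.pad(seq, ((0, 4 - seq.shape[0]), (0, 0)),
                "constant", constant_values=0)
print(padded.shape)  # → (4, 2)
```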
### Split into train, validation and test
```
from sklearn.model_selection import train_test_split
# Split into train and validation
train_X, val_X, train_Y, val_Y = train_test_split(X, Y, test_size=0.20, random_state=SEED)
# Test set from CMAPSS
test_X = pad_engines(df_te, train_cols, max_batch_len)
test_Y = pad_engines(df_te, test_cols, max_batch_len)
# In the WTTE-RNN architecture we will predict 2 parameters (alpha and beta)
# alpha is initialised to 1
train_Y_wtte = np.concatenate((train_Y, np.ones(train_Y.shape)), axis=2)
val_Y_wtte = np.concatenate((val_Y, np.ones(val_Y.shape)), axis=2)
test_Y_wtte = np.concatenate((test_Y, np.ones(test_Y.shape)), axis=2)
print("Train:\n", " X:", train_X.shape, "\n Y:", train_Y.shape, "\n Y_wtte:", train_Y_wtte.shape)
print("\nValidation:\n", " X:", val_X.shape, "\n Y:", val_Y.shape, "\n Y_wtte:", val_Y_wtte.shape)
print("\nTest:\n", " X:", test_X.shape, "\n Y:", test_Y.shape, "\n Y_wtte:", test_Y_wtte.shape)
```
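The `_wtte` targets built above stack the RUL with an all-ones channel so the network can regress two Weibull parameters per timestep. Shape-wise, on a toy array:

```python
import numpy as np

# Toy RUL targets: 2 engines, 3 timesteps, 1 output channel
train_Y = np.arange(6, dtype=float).reshape(2, 3, 1)

# Append a second channel initialised to 1 (the alpha seed used above)
train_Y_wtte = np.concatenate((train_Y, np.ones(train_Y.shape)), axis=2)
print(train_Y_wtte.shape)  # → (2, 3, 2)
```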
## Baseline
```
from keras.layers import Masking
from keras.layers.core import Activation
from keras.models import Sequential
from keras.layers import Dense, LSTM, TimeDistributed
from keras.callbacks import EarlyStopping, ModelCheckpoint
# Model path
baseline_path = "baseline_model"
# Callbacks
early_stopping = EarlyStopping(monitor='val_loss',
min_delta=0,
patience=30,
verbose=0,
mode='min')
checkpoint = ModelCheckpoint(baseline_path,
monitor='val_loss',
save_best_only=True,
mode='min',
verbose=0)
# dimensions of the model
nb_features = train_X.shape[2]
nb_out = train_Y.shape[2]
model = Sequential()
# Masking layer so the right padding is ignored
# at each layer of the network
model.add(Masking(mask_value=0.,
input_shape=(max_batch_len, nb_features)))
# Then there's an LSTM layer with 100 units
# Recurrent Dropout is also applied after each
# LSTM layer to control overfitting.
model.add(LSTM(
units=100,
recurrent_dropout=0.2,
return_sequences=True))
# followed by another LSTM layer with 50 units
model.add(LSTM(
units=50,
recurrent_dropout=0.2,
return_sequences=True))
# Final layer is a Time-Distributed Dense layer
# with a single unit with an Exponential activation
model.add(TimeDistributed(Dense(nb_out, activation=K.exp)))
model.compile(loss="mse", optimizer=keras.optimizers.RMSprop())
print(model.summary())
# fit the network
history = model.fit(train_X, train_Y, epochs=5, batch_size=16,
validation_data=(val_X, val_Y), shuffle=True,
verbose=2, callbacks = [early_stopping, checkpoint])
# list all data in history
print(history.history.keys())
# Execute if training in Colaboratory (preferably from Chrome)
# Downloads the model after the training finishes
import google.colab
from google.colab import files
files.download(baseline_path)
# Move the model to the expected folder
!mv {baseline_path} Models/
!pip install google.colab
# Validation loss vs the Training loss
%matplotlib inline
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
# Execute if you want to upload a model to Collaboratory
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
from keras.models import load_model
# It's important to load the model after the training
# The keras Checkpoint will save the best model in terms
# of the validation loss in the specified path
model = load_model("Models/" + baseline_path, custom_objects={"exp": K.exp})
%matplotlib inline
from math import sqrt
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
# We save the validation errors to later compare the models
validation_baseline = model.predict(val_X).flatten()
def evaluate_and_plot(model, evaluation_data, weibull_function=None):
"""
Generate scores dataframe and plot the RUL
"""
fig = plt.figure()
i = 1
score_df = pd.DataFrame({"Method": ["MAE", "RMSE", "R2"]})
for name_set, train_set, test_set in evaluation_data:
if weibull_function is None:
y_pred = model.predict(train_set).flatten()
else:
y_pred = [weibull_function(alpha, beta)
for batch in model.predict(train_set)
for beta, alpha in batch]
l = test_set[:,:,0].flatten()
# To validate we remove the right padding
y_true = np.ma.compressed(np.ma.masked_where(l==0, l))
y_pred = np.ma.compressed(np.ma.masked_where(l==0, y_pred))
score_mae = "{0:.2f}".format(mean_absolute_error(y_true, y_pred))
score_rmse = "{0:.2f}".format(sqrt(mean_squared_error(y_true, y_pred)))
score_r2 = "{0:.3f}".format(r2_score(y_true, y_pred))
score_df[name_set] = [score_mae, score_rmse, score_r2]
ax = fig.add_subplot(6, 1, i)
ax.title.set_text(name_set)
ax.title.set_fontsize(20)
i += 1
plt.plot(y_pred[0:2500])
plt.plot(y_true[0:2500])
ax = fig.add_subplot(6, 1, i)
i += 1
plt.plot(y_pred[2500:5000])
plt.plot(y_true[2500:5000])
plt.subplots_adjust(hspace=0.45)
fig.set_size_inches(15, i*2.2)
return score_df.T
evaluate_and_plot(model,
[("Train", train_X, train_Y),
("Validation", val_X, val_Y),
("Test", test_X, test_Y)])
```
## Adapting to WTTE-RNN
```
# Install wtte package from Martinsson
!pip install wtte
# Loss and activation functions from Martinsson
# These are not used in the final version because
# the wtte package has useful regularization tools
def weibull_loglik_discrete(y_true, y_pred, epsilon=K.epsilon()):
y = y_true[..., 0]
u = y_true[..., 1]
a = y_pred[..., 0]
b = y_pred[..., 1]
hazard0 = K.pow((y + epsilon) / a, b)
hazard1 = K.pow((y + 1.0) / a, b)
loss = u * K.log(K.exp(hazard1 - hazard0) - (1.0 - epsilon)) - hazard1
return -loss
def activation_weibull(y_true):
a = y_true[..., 0]
b = y_true[..., 1]
a = K.exp(a)
b = K.sigmoid(b)
return K.stack([a, b], axis=-1)
from keras.layers import Masking
from keras.layers.core import Activation
from keras.models import Sequential
from keras.layers import Dense, LSTM, TimeDistributed, Lambda
from keras.callbacks import EarlyStopping, TerminateOnNaN, ModelCheckpoint
import wtte.weibull as weibull
import wtte.wtte as wtte
# Since we use a lambda in the last layer the model
# is not saved well in keras, instead we save the weights.
# This requires compiling the model to load the weights
baseline_wtte_path = "baseline_wtte_model_weights"
# Callbacks
early_stopping = EarlyStopping(monitor='val_loss',
min_delta=0,
patience=30,
verbose=0,
mode='min')
checkpoint = ModelCheckpoint(baseline_wtte_path,
monitor='val_loss',
save_best_only=True,
save_weights_only=True,
mode='min',
verbose=0)
nb_features = train_X.shape[2]
nb_out = train_Y.shape[1]
model = Sequential()
model.add(Masking(mask_value=0.,
input_shape=(max_batch_len, nb_features)))
model.add(LSTM(
input_shape=(None, nb_features),
units=100,
recurrent_dropout=0.2,
return_sequences=True))
model.add(LSTM(
units=50,
recurrent_dropout=0.2,
return_sequences=True))
model.add(TimeDistributed(Dense(2)))
# uncomment this line and comment the next to use
# activation_weibull function:
# model.add(Activation(activation_weibull))
model.add(Lambda(wtte.output_lambda,
arguments={# Initialization value around its scale
"init_alpha": np.nanmean(train_Y_wtte[:,0]),
# Set a maximum
"max_beta_value": 10.0
},
))
# Same for the loss "weibull_loglik_discrete"
# model.compile(loss=weibull_loglik_discrete, optimizer='rmsprop')
# We use clipping on the loss
loss = wtte.Loss(kind='discrete', clip_prob=1e-5).loss_function
model.compile(loss=loss, optimizer='rmsprop')
print(model.summary())
# fit the network
history = model.fit(train_X, train_Y_wtte, epochs=5, batch_size=16,
validation_data=(val_X, val_Y_wtte), shuffle=True, verbose=2,
callbacks = [early_stopping, checkpoint, TerminateOnNaN()])
# list all data in history
print(history.history.keys())
# Execute if training in Colaboratory (preferably from Chrome)
# Downloads the model after the training finishes
from google.colab import files
files.download(baseline_wtte_path)
# Move the model to the expected folder
!mv $baseline_wtte_path Models/
print(train_X.shape)
print(train_Y.shape)
train_Y[1, :, :]
pred_test = model.predict(test_X)
pred_test[1, :, :]
# try prediction (CL)
pred_try = model.predict(train_X)
%matplotlib inline
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
# Execute if you want to upload a model to Colaboratory
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# Compile model first to load weights
model.load_weights("Models/" + baseline_wtte_path)
```
### Weibull Methods
$\mu = \beta\Gamma(1 + \alpha^{-1})$
$\sigma^2 = \beta^2[\Gamma(1 + 2\alpha^{-1}) - \Gamma^2(1 + \alpha^{-1})]$
$mode = \beta\left(\frac{\alpha-1}{\alpha}\right)^{1/\alpha}$ when $\alpha > 1$
Inverse CDF $ = \beta (-\log(1 - x))^\frac{1}{\alpha} $ when $ 0<x<1 $
```
from math import gamma, log, sqrt
def mean_weibull(alpha, beta):
return beta*gamma(1 + 1./alpha)
def mode_weibull(alpha, beta):
return beta*((alpha-1)/alpha)**(1./alpha) if alpha > 1 else 0
def median_weibull(alpha, beta):
return beta*(log(2)**(1./alpha))
def var_weibull(alpha, beta):
return beta**2*(gamma(1 + 2./alpha) - gamma(1 + 1./alpha)**2)
def pdf_weibull(x, alpha, beta):
return (alpha/beta)*(x/beta)**(alpha - 1)*np.exp(-(x/beta)**alpha)
def inverse_cdf_weibull(x, alpha, beta):
return beta*np.power((-np.log(1.-x)), 1./alpha)
def survival_weibull(x, alpha, beta):
return np.e**-((x/beta)**alpha)
```
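As a quick sanity check of the helpers above (not part of the original notebook), we can draw samples by inverse-transform sampling — pushing uniform draws through `inverse_cdf_weibull` — and compare the empirical mean and median against the closed-form values. The helper definitions are repeated so the snippet is self-contained:

```
import numpy as np
from math import gamma, log

# Same definitions as in the cell above
def mean_weibull(alpha, beta):
    return beta*gamma(1 + 1./alpha)
def median_weibull(alpha, beta):
    return beta*(log(2)**(1./alpha))
def inverse_cdf_weibull(x, alpha, beta):
    return beta*np.power((-np.log(1.-x)), 1./alpha)

rng = np.random.RandomState(0)
alpha, beta = 2.0, 100.0           # arbitrary shape and scale for illustration
u = rng.uniform(size=200000)       # U(0, 1) draws
samples = inverse_cdf_weibull(u, alpha, beta)

print(np.mean(samples), mean_weibull(alpha, beta))      # both close to 88.6
print(np.median(samples), median_weibull(alpha, beta))  # both close to 83.3
```

If the inverse CDF were wrong, the empirical and closed-form values would disagree, so this is a cheap way to catch sign or exponent slips in the formulas.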
### Mean, Mode and Median

```
%matplotlib inline
print("Mode")
print(evaluate_and_plot(model,
[("Train", train_X, train_Y_wtte),
("Validation", val_X, val_Y_wtte),
("Test", test_X, test_Y_wtte)],
weibull_function = mode_weibull))
# comment the next line to visualise the plot for the mode
plt.close()
print("\nMedian")
print(evaluate_and_plot(model,
[("Train", train_X, train_Y_wtte),
("Validation", val_X, val_Y_wtte),
("Test", test_X, test_Y_wtte)],
weibull_function = median_weibull))
# comment the next line to visualise the plot for the median
plt.close()
# We save the validation errors to later compare the models
validation_wtte = [mean_weibull(alpha, beta)
for batch in model.predict(val_X)
for beta, alpha in batch]
print("\nMean")
print(evaluate_and_plot(model,
[("Train", train_X, train_Y_wtte),
("Validation", val_X, val_Y_wtte),
("Test", test_X, test_Y_wtte)],
weibull_function = mean_weibull))
```
### Evolution of the pdf through the cycles of an engine (PLOT)
```
import random
import seaborn as sns
random.seed(SEED)
lot = random.sample(list(train_X), 3)
random.seed(SEED)
lot += random.sample(list(val_X), 3)
random.seed(SEED)
lot += random.sample(list(test_X), 3)
palette = list(reversed(sns.color_palette("RdBu_r", 250)))
fig = plt.figure()
j = 1
for batch in lot:
size = batch[~np.all(batch == 0, axis=1)].shape[0]
y_pred_wtte = model.predict(batch.reshape(1, max_batch_len, nb_features))[0]
y_pred_wtte = y_pred_wtte[:size]
x = np.arange(1, 400.)
freq = 5
ax = fig.add_subplot(3, 3, j)
i=0
for beta, alpha in y_pred_wtte[0::freq][2:]:
mean = mode_weibull(alpha, beta)
color=palette[int(mean)] if i < len(palette) else palette[-1]
plt.plot(x, pdf_weibull(x, alpha, beta), color=color)
i += 1
ax.set_ylim([0, 0.07])
ax.set_xlim([0, 300])
ax.set_yticklabels([])
if j == 2:
ax.title.set_text("Train")
elif j == 5:
ax.title.set_text("Validation")
elif j == 8:
ax.title.set_text("Test")
j += 1
plt.subplots_adjust(wspace=0.15, hspace=0.25)
fig.set_size_inches(10,10)
```
### Confidence Interval of the Weibull Distribution
```
%matplotlib inline
from scipy.stats import dweibull
batch = lot[0]
size = batch[~np.all(batch == 0, axis=1)].shape[0]
y_pred_wtte = model.predict(batch.reshape(1, max_batch_len, nb_features))[0]
y_pred_wtte = y_pred_wtte[:size]
fig = plt.figure()
fig.add_subplot(1,1,1)
for beta, alpha in y_pred_wtte[0::20]:
x = np.arange(1, 300.)
mean = mean_weibull(alpha, beta)
sigma = np.sqrt(var_weibull(alpha, beta))
plt.plot(x, pdf_weibull(x, alpha, beta), color=palette[int(mean)])
# alpha is the shape parameter
conf = dweibull.interval(0.95, alpha, loc=mean, scale=sigma)
plt.fill([conf[0]] + list(np.arange(conf[0], conf[1])) + [conf[1]],
[0] + list(pdf_weibull(np.arange(conf[0], conf[1]), alpha, beta)) + [0],
color=palette[int(mean)], alpha=0.5)
axes = plt.gca()
axes.set_ylim([0., 0.06])
axes.set_xlim([0., 300.])
fig.set_size_inches(10,5)
```
### Evolution of the pdf through the cycles of an engine (GIFs)
```
import sys
import random
from math import gamma
from matplotlib.animation import FuncAnimation
from scipy.stats import dweibull
def generate_gif(y_pred, y_true, path, freq=2):
# remove mask if exists
y_true = y_true[y_true != 0]
y_pred = y_pred[:y_true.shape[0]]
frames = list(zip(y_true, y_pred))  # list() so it supports len() and slicing in Python 3
# pad, w_pad, h_pad, and rect
fig = plt.figure()
global ax1, ax2
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
fig.set_tight_layout(True)
x = np.arange(1, 300.)
beta, alpha = y_pred[0]
line1, = ax1.plot(x, pdf_weibull(x, alpha, beta))
global i, acc_y_true, acc_y_pred
i = 0
predict_mean = mean_weibull(alpha, beta)
ax2.plot(i, y_true[0], 'bo', label="True", ms=2.5)
ax2.plot(i, predict_mean, 'o', color="orange", label="Predicted", ms=2.5)
ax2.legend(loc="upper right")
# limits
ax1.set_ylim([0, 0.07])
ax2.set_ylim([0, y_true[0] + 10])
ax2.set_xlim([0, len(frames)/freq + 2])
ax2.set_xticklabels([])
# acc values
acc_y_true = []
acc_y_pred = []
def update(instant):
y_true_t, y_pred_t = instant
beta, alpha = y_pred_t
# print y_true
pdf = pdf_weibull(x, alpha, beta)
line1.set_ydata(pdf)
global i, acc_y_true, acc_y_pred
i += 1
mean = mean_weibull(alpha, beta)
sigma = np.sqrt(var_weibull(alpha, beta))
acc_y_pred += [mean]
acc_y_true += [y_true_t]
ax2.plot(range(len(acc_y_true)), acc_y_true, 'b', label="True")
ax2.plot(range(len(acc_y_pred)), acc_y_pred, color="orange", label="Predicted")
conf = dweibull.interval(0.95, alpha, loc=mean, scale=sigma)
ax1.set_title("PDF Weibull Distrib. (Mean: " + "{0:.1f}".format(mean)
+ ", Std: " + "{0:.1f}".format(sigma) + ")"
+ " CI 95%: [{0:.1f}, {1:.1f}]".format(*conf))
ax2.set_title("Real RUL: " + str(y_true_t) + " cycles")
fig.set_size_inches(15,4)
anim = FuncAnimation(fig, update, frames=frames[0::freq])
anim.save(path, writer="imagemagick")
plt.close()
random.seed(SEED)
batch_X, batch_Y = random.choice(list(zip(train_X, train_Y)))
y_pred_wtte = model.predict(batch_X.reshape(1, max_batch_len, nb_features))[0]
gif_path = "Images/train_engine_sample.gif"
generate_gif(y_pred_wtte, batch_Y, gif_path, freq=2)
print("Train Sample")
from IPython.display import HTML
HTML('<img src="'+ gif_path + '">')
random.seed(SEED)
batch_X, batch_Y = random.choice(list(zip(val_X, val_Y)))
y_pred_wtte = model.predict(batch_X.reshape(1, max_batch_len, nb_features))[0]
gif_path = "Images/val_engine_sample.gif"
generate_gif(y_pred_wtte, batch_Y, gif_path, freq=2)
print("Validation Sample")
from IPython.display import HTML
HTML('<img src="'+ gif_path + '">')
random.seed(SEED)
batch_X, batch_Y = random.choice(list(zip(test_X, test_Y)))
y_pred_wtte = model.predict(batch_X.reshape(1, max_batch_len, nb_features))[0]
gif_path = "Images/test_engine_sample.gif"
generate_gif(y_pred_wtte, batch_Y, gif_path, freq=2)
print("Test Sample")
from IPython.display import HTML
HTML('<img src="'+ gif_path + '">')
```
## GRU variant
```
from keras.layers import Masking
from keras.layers.core import Activation
from keras.models import Sequential
from keras.layers import Dense, GRU, TimeDistributed, Lambda
from keras.callbacks import EarlyStopping, TerminateOnNaN, ModelCheckpoint
import wtte.weibull as weibull
import wtte.wtte as wtte
baseline_gru_path = "baseline_gru_model_weights"
# Callbacks
early_stopping = EarlyStopping(monitor='val_loss',
min_delta=0,
patience=30,
verbose=0,
mode='min')
checkpoint = ModelCheckpoint(baseline_gru_path,
monitor='val_loss',
save_best_only=True,
save_weights_only=True,
mode='min',
verbose=0)
nb_features = train_X.shape[2]
nb_out = train_Y.shape[1]
init_alpha = np.nanmean(train_Y_wtte[:,0])
model = Sequential()
model.add(Masking(mask_value=0.,
input_shape=(max_batch_len, nb_features)))
# We substitute LSTM for GRU
model.add(GRU(
input_shape=(None, nb_features),
units=100,
recurrent_dropout=0.2,
return_sequences=True))
model.add(GRU(
units=50,
recurrent_dropout=0.2,
return_sequences=True))
model.add(TimeDistributed(Dense(2)))
model.add(Lambda(wtte.output_lambda,
arguments={# Initialization value around its scale
"init_alpha": np.nanmean(train_Y_wtte[:,0]),
# Set a maximum
"max_beta_value": 10.0,
# We set the scalefactor to avoid exploding gradients
"scalefactor": 0.25
},
))
loss = wtte.Loss(kind='discrete', clip_prob=1e-5).loss_function
model.compile(loss=loss, optimizer='rmsprop')
print(model.summary())
# fit the network
history = model.fit(train_X, train_Y_wtte, epochs=500, batch_size=16,
validation_data=(val_X, val_Y_wtte), shuffle=True, verbose=2,
callbacks = [early_stopping, checkpoint, TerminateOnNaN()])
# list all data in history
print(history.history.keys())
# Execute if training in Colaboratory (preferably from Chrome)
# Downloads the model after the training finishes
from google.colab import files
files.download(baseline_gru_path)
# Move the model to the expected folder
!mv $baseline_gru_path Models/
%matplotlib inline
plt.plot(history.history["loss"], color="blue")
plt.plot(history.history["val_loss"], color="green")
# Execute if you want to upload a model to Colaboratory
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# Compile model first to load weights
model.load_weights("Models/" + baseline_gru_path)
# We save the validation errors to later compare the models
validation_gru = [mean_weibull(alpha, beta)
for batch in model.predict(val_X)
for beta, alpha in batch]
evaluate_and_plot(model,
[("Train", train_X, train_Y_wtte),
("Validation", val_X, val_Y_wtte),
("Test", test_X, test_Y_wtte)],
weibull_function = mean_weibull)
```
# Result
There are three models:
- baseline
- baseline WTTE-RNN LSTM
- baseline WTTE-RNN GRU
The mean is used as the expected value of the RUL.
```
%matplotlib inline
import seaborn as sns
l = val_Y.flatten()
y_true = np.ma.compressed(np.ma.masked_where(l==0, l))
y_pred_baseline = np.ma.compressed(np.ma.masked_where(l==0, validation_baseline))
y_pred_wtte = np.ma.compressed(np.ma.masked_where(l==0, validation_wtte))
y_pred_gru = np.ma.compressed(np.ma.masked_where(l==0, validation_gru))
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.violinplot([y_pred_baseline - y_true,
y_pred_wtte - y_true,
y_pred_gru - y_true])
ax.set_xticklabels([])
plt.figtext(0.21, 0.1, ' Baseline')
plt.figtext(0.480, 0.1, ' Baseline WTTE')
plt.figtext(0.76, 0.1, ' Baseline GRU')
fig.set_size_inches(15, 10)
```
```
#!easy_install textblob
import pandas as pd
import textstat
import numpy as np
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import classification_report
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
from sklearn import svm
from sklearn.metrics import roc_auc_score
from sklearn.svm import SVC
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.ensemble import RandomForestRegressor
from sklearn import preprocessing
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.stem.porter import PorterStemmer
from textblob import TextBlob
from spellchecker import SpellChecker
import seaborn as sns
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.linear_model import Ridge
from collections import OrderedDict
import matplotlib.pyplot as plt
%matplotlib inline
path="D:/Trinity_DS/Dissertations/201901/Datasets/Kaggle/1429_1_v2.csv"
data_df = pd.read_csv(path,low_memory=False)
temp_df= data_df[['reviews.numHelpful','reviews.rating']]
temp_df.describe()
data_df.dtypes
data_df['reviews_dateAdded_Date_time'] = pd.to_datetime(data_df['reviews.dateAdded'])
data_df['reviews_dateSeen_Date_time'] = pd.to_datetime(data_df['reviews.dateSeen'])
data_df['reviews_date_Date_time'] = pd.to_datetime(data_df['reviews.date'])
pd.value_counts(temp_df['reviews.numHelpful'].values, sort=False)
data_df[['reviews_dateAdded_Date_time']]
filtered_df = data_df[data_df['reviews_dateAdded_Date_time'].notnull()]
filtered_df = data_df[data_df['reviews_date_Date_time'].notnull()]
filtered_df['diff_days'] = filtered_df['reviews_dateSeen_Date_time'] - filtered_df['reviews_date_Date_time']
filtered_df['diff_days']=filtered_df['diff_days']/np.timedelta64(1,'D')
filtered_df['diff_days'].describe()
filtered_df[['reviews_dateAdded_Date_time','reviews.dateSeen','reviews.rating','reviews.numHelpful','reviews.text']]
filtered_df['usefull_diff'] = filtered_df['reviews.numHelpful']/filtered_df['diff_days']
filtered_df
filtered_df['usefull_bin'] = np.where(filtered_df['reviews.numHelpful']==0, '0', '1')
filtered_df['usefull_bin']
filtered_df['day_of_week'] = filtered_df['reviews_dateAdded_Date_time'].dt.day_name()
filtered_df['reviews_dateAdded_hour'] = filtered_df['reviews_dateAdded_Date_time'].dt.hour
#textstat.flesch_reading_ease(.id)
readablity = []
for text in filtered_df['reviews.text']:
readablity.append(textstat.flesch_reading_ease(str(text)))
filtered_df['flesch_reading_ease']=readablity
smog = []
for text in filtered_df['reviews.text']:
smog.append(textstat.smog_index(str(text)))
coleman_liau=[]
for text in filtered_df['reviews.text']:
coleman_liau.append(textstat.coleman_liau_index(str(text)))
sentence_count=[]
for text in filtered_df['reviews.text']:
sentence_count.append(textstat.sentence_count(str(text)))
gunning_fog=[]
for text in filtered_df['reviews.text']:
gunning_fog.append(textstat.gunning_fog(str(text)))
flesch_kincaid_grade=[]
for text in filtered_df['reviews.text']:
flesch_kincaid_grade.append(textstat.flesch_kincaid_grade(str(text)))
spell = SpellChecker()
spelling_errors=[]
for text in filtered_df['reviews.text']:
spelling_errors.append(len(spell.unknown(str(text).split(' '))))
subjectivity_list=[]
polarity_list=[]
for text in filtered_df['reviews.text']:
subjectivity_list.append(TextBlob(str(text)).sentiment.subjectivity)
polarity_list.append(TextBlob(str(text)).sentiment.polarity)
filtered_df['smog']=smog
filtered_df['coleman_liau']=coleman_liau
filtered_df['sentence_count']=sentence_count
filtered_df['gunning_fog']=gunning_fog
filtered_df['flesch_kincaid_grade']=flesch_kincaid_grade
filtered_df['spelling_errors']=spelling_errors
filtered_df['subjectivity']=subjectivity_list
filtered_df['polarity']=polarity_list
filtered_df
final_df = filtered_df[['polarity','subjectivity','day_of_week','reviews_dateAdded_hour','spelling_errors','reviews.rating','smog','coleman_liau','sentence_count','reviews.numHelpful','gunning_fog','flesch_kincaid_grade','usefull_bin','usefull_diff']]
final_df=final_df.dropna()
final_df
sent_df = final_df[['polarity','subjectivity','smog','coleman_liau','sentence_count','gunning_fog','flesch_kincaid_grade','usefull_bin','spelling_errors']]
sent_df=sent_df.dropna()
sent_df
X= np.array(sent_df.drop('usefull_bin', axis=1))
Y= np.array(sent_df['usefull_bin'])
X_train, X_test, y_train, y_test = train_test_split(
X, Y, test_size=0.33, random_state=99)
X_train_scaled = preprocessing.scale(X_train)
X_test_scaled = preprocessing.scale(X_test)
#scaler.transform(X_train)
```
# DecisionTree
```
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X_train_scaled, y_train)
Y_train_Pred=clf.predict(X_train_scaled)
accuracy_score(y_train, Y_train_Pred)
Y_test_Pred=clf.predict(X_test_scaled)
target_names = ['0', '1']
print(classification_report(y_test, Y_test_Pred, target_names=target_names))
```
# RANDOMFOREST
```
clf_RF = RandomForestClassifier(n_estimators=1000,random_state=0,max_depth=3,class_weight='balanced')
clf_RF.fit(X_train_scaled, y_train)
Y_train_Pred=clf_RF.predict(X_train_scaled)
Y_test_RF_Pred=clf_RF.predict(X_test_scaled)
print("Training Accuracy",accuracy_score(y_train, Y_train_Pred))
target_names=['0','1']
print(classification_report(y_test, Y_test_RF_Pred, target_names=target_names))
```
# SVM
```
clf_SVM = SVC(gamma='auto',class_weight='balanced')
clf_SVM.fit(X_train_scaled, y_train)
Y_test_SVM_Pred=clf_SVM.predict(X_test_scaled)
target_names=['0','1']
print(classification_report(y_test, Y_test_SVM_Pred, target_names=target_names))
sent_df.groupby('usefull_bin').count()
```
# Linear Regression
```
list(filtered_df)
#filtered_df['diff_months']=filtered_df['diff_days']/np.timedelta64(1,'M')
filtered_df['diff_months']= ( filtered_df['reviews_dateSeen_Date_time'].dt.date - filtered_df['reviews_date_Date_time'].dt.date )/np.timedelta64(1,'M')
filtered_df['diff_months']=filtered_df['diff_months'].astype(int)
filtered_df['diff_months']
filtered_df['log_usefull']=np.log(filtered_df['reviews.numHelpful']+1)
final_df = filtered_df[['day_of_week','reviews_dateAdded_hour','spelling_errors',
'reviews.rating','smog','coleman_liau','sentence_count'
,'gunning_fog','flesch_kincaid_grade','log_usefull','polarity','subjectivity','diff_months']]
#(analysis_df['diff_days']/np.timedelta64(1,'M')).describe()
corr_df = final_df
corr = (corr_df.corr())
plt.figure(figsize= (10, 10))
sns.heatmap(corr_df.corr())
fig, ax = plt.subplots(figsize=(10, 10))
mask = np.zeros_like(corr_df.corr())
mask[np.triu_indices_from(mask)] = 1
sns.heatmap(corr_df.corr(), mask= mask, ax= ax, annot= True,annot_kws={"size": 11},fmt='.2f')
corr = np.round((corr_df.corr()),2)
corr.style.background_gradient(cmap='coolwarm')
sent_df = final_df[['smog','coleman_liau','sentence_count','gunning_fog',
'flesch_kincaid_grade','spelling_errors','log_usefull',
'polarity','diff_months','subjectivity']]
sent_df=sent_df.dropna()
X= np.array(sent_df.drop('log_usefull', axis=1))
Y= np.array(sent_df['log_usefull'])
X_train, X_test, y_train, y_test = train_test_split(
X, Y, test_size=0.33, random_state=99)
X_train_scaled = preprocessing.scale(X_train)
X_test_scaled = preprocessing.scale(X_test)
#scaler.transform(X_train)
sent_df.log_usefull.describe()
reg = LinearRegression().fit(X_train_scaled, y_train)
y_pred=reg.predict(X_test_scaled)
#scaler.transform(X_test)
mean_squared_error(y_test, (np.exp(y_pred)-1))*100
y_pred
r2_score(y_test,(np.exp(y_pred)-1))
#analysis_df.to_csv("D:/Trinity_DS/Dissertations/201907/dataset_v2/analysis_df.csv", index = None, header=True)
clf_ridge = Ridge(alpha=100000)
clf_ridge.fit(X_train_scaled, y_train)
y_pred_0=clf_ridge.predict(X_train_scaled)
#scaler.transform(X_test)
mean_squared_error(y_train,np.exp(y_pred_0)-1)*100
y_pred=clf_ridge.predict(X_test_scaled)
#scaler.transform(X_test)
mean_squared_error(np.exp(y_pred)-1,y_test)*100
r2_score(y_test,(np.exp(y_pred)-1))
(np.exp(y_pred)-1).min()
```
# RandomForest Regressor
```
from sklearn.ensemble import RandomForestRegressor
rf_regr = RandomForestRegressor( random_state=0,
n_estimators=1000)
rf_regr.fit(X_train_scaled, y_train)
y_rf_pred = rf_regr.predict(X_test_scaled)
mean_squared_error(y_test, (np.exp(y_rf_pred)-1))*100
(np.exp(y_rf_pred)-1).min()
r2_score(y_test,(np.exp(y_rf_pred)-1))
```
# Chi Sq Test
```
newdf=final_df.groupby(['day_of_week','reviews.rating']).count()
np_matix= np.array(final_df)
np_matix
days_rating=pd.crosstab(np_matix[:,2],np_matix[:,0],
rownames=['Days'], colnames=['Ratings'],)
from scipy import stats
chi2, p, dof, expected = stats.chi2_contingency(days_rating)
print("Reviews and DaysOfWeek chi_2 value---------------",chi2)
print("Reviews and DaysOfWeek p value-------------------",p)
print("Reviews and DaysOfWeek degreeoffreedom value-----",dof)
dum_df1 = pd.DataFrame()
dum_df1['hr']=final_df['reviews_dateAdded_hour'].apply(str)
dum_df1['day']=final_df['day_of_week']
dum_df1
dummy_df = pd.get_dummies(dum_df1, prefix=['hour','day'])
dummy_df
sent_df = final_df[['smog','coleman_liau','sentence_count','gunning_fog','flesch_kincaid_grade','reviews.rating']]
pd_combined=pd.concat([dummy_df.reset_index(drop=True), sent_df.reset_index(drop=True)], axis=1)
pd_combined=pd_combined.dropna()
pd_combined.columns
pd_combined
X= np.array(pd_combined.drop('reviews.rating', axis=1))
Y= np.array(pd_combined['reviews.rating'])
X_train, X_test, y_train, y_test = train_test_split(
X, Y, test_size=0.33, random_state=99)
clf = tree.DecisionTreeClassifier(max_depth=3,min_impurity_split=100)
clf = clf.fit(X_train, y_train)
Y_test_Pred=clf.predict(X_test)
Y_test_Pred
accuracy_score(y_test, Y_test_Pred)
target_names = ['1', '2', '3', '4', '5']
print(classification_report(y_test, Y_test_Pred, target_names=target_names))
clf_RF = RandomForestClassifier(n_estimators=100, max_depth=2,
random_state=0)
clf_RF.fit(X_train, y_train)
print(clf_RF.feature_importances_)
Y_test_RF_Pred=clf_RF.predict(X_test)
accuracy_score(y_test, Y_test_RF_Pred)
target_names = ['1', '2', '3', '4', '5']
print(classification_report(y_test, Y_test_RF_Pred, target_names=target_names))
clf_SVM = SVC(gamma='auto',class_weight='balanced')
clf_SVM.fit(X_train, y_train)
Y_test_SVM_Pred=clf_SVM.predict(X_test)
print(classification_report(y_test, Y_test_SVM_Pred, target_names=target_names))
accuracy_score(y_test, Y_test_SVM_Pred)
```
# 2.3: Classical confidence intervals
```
from __future__ import print_function, division
%matplotlib inline
import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# use matplotlib style sheet
plt.style.use('ggplot')
```
## CI for continuous data, Pg 18
```
# import the t-distribution from scipy.stats
from scipy.stats import t
y = np.array([35,34,38,35,37])
y
n = len(y)
n
estimate = np.mean(y)
estimate
```
Numpy uses a denominator of **N** in the standard deviation calculation by
default, instead of **N-1**. To use **N-1**, the unbiased estimator-- and to
agree with the R output, we have to give `np.std()` the argument `ddof=1`:
```
se = np.std(y, ddof=1)/np.sqrt(n)
se
int50 = estimate + t.ppf([0.25, 0.75], n-1)*se
int50
int95 = estimate + t.ppf([0.025, 0.975], n-1)*se
int95
```
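To make the N versus N-1 distinction above concrete, here is the same five-point sample with both denominators (an illustrative aside, not part of the original notebook):

```
import numpy as np

y = np.array([35, 34, 38, 35, 37])

# ddof=0 (NumPy's default): divides by N
sd_pop = np.std(y)
# ddof=1: divides by N-1, the unbiased estimator, matching R's sd()
sd_sample = np.std(y, ddof=1)

print(sd_pop, sd_sample)  # approximately 1.470 vs 1.643
```

The `ddof=1` value is always a little larger, and the gap shrinks as the sample size grows.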
## CI for proportions, Pg 18
```
from scipy.stats import norm
y = 700
y
n = 1000
n
estimate = y/n
estimate
se = np.sqrt(estimate*(1-estimate)/n)
se
int95 = estimate + norm.ppf([.025,0.975])*se
int95
```
## CI for discrete data, Pg 18
```
y = np.repeat([0,1,2,3,4], [600,300, 50, 30, 20])
y
n = len(y)
n
estimate = np.mean(y)
estimate
```
See the note above about the different defaults for standard
deviation in Python and R.
```
se = np.std(y, ddof=1)/np.sqrt(n)
se
int50 = estimate + t.ppf([0.25, 0.75], n-1)*se
int50
int95 = estimate + t.ppf([0.025, 0.975], n-1)*se
int95
```
## Plot Figure 2.3, Pg 19
The **polls.dat** file has an unusual format. The data that we would like to
have in a single row is split across 4 rows:
* year month
* percentage support
* percentage against
* percentage no opinion
The data seems to be a subset of the Gallup data, available here:
http://www.gallup.com/poll/1606/Death-Penalty.aspx
We can see the unusual layout using the **bash** command *head* (linux/osx only,
sorry..)
```
%%bash
head ../../ARM_Data/death.polls/polls.dat
```
Using knowledge of the file layout we can read in the file and pre-process into
appropriate rows/columns for passing into a pandas dataframe:
```
# Data is available in death.polls directory of ARM_Data
data = []
temp = []
ncols = 5
with open("../../ARM_Data/death.polls/polls.dat") as f:
for line in f.readlines():
for d in line.strip().split(' '):
temp.append(float(d))
if (len(temp) == ncols):
data.append(temp)
temp = []
polls = pd.DataFrame(data, columns=[u'year', u'month', u'perc for',
u'perc against', u'perc no opinion'])
polls.head()
# --Note: this gives the (percent) support for those that have an opinion
# --The percentage with no opinion is ignored
# --This results in a difference between our plot (below) and the Gallup plot (link above)
polls[u'support'] = polls[u'perc for']/(polls[u'perc for']+polls[u'perc against'])
polls.head()
polls[u'year_float'] = polls[u'year'] + (polls[u'month']-6)/12
polls.head()
# add error column -- symmetric so only add one column
# assumes sample size N=1000
# uses +/- 1 standard error, resulting in 68% confidence
polls[u'support_error'] = np.sqrt(polls[u'support']*(1-polls[u'support'])/1000)
polls.head()
fig, ax = plt.subplots(figsize=(8, 6))
plt.errorbar(polls[u'year_float'], 100*polls[u'support'],
yerr=100*polls[u'support_error'], fmt='ko',
ms=4, capsize=0)
plt.ylabel(u'Percentage support for the death penalty')
plt.xlabel(u'Year')
# you can adjust y-limits with command like below
# I will leave the default behavior
#plt.ylim(np.min(100*polls[u'support'])-2, np.max(100*polls[u'support']+2))
```
## Weighted averages, Pg 19
The example R-code for this part is incomplete, so I will make up *N*, *p* and
*se* loosely related to the text on page 19.
```
N = np.array([66030000, 81083600, 60788845])
p = np.array([0.55, 0.61, 0.38])
se = np.array([0.02, 0.03, 0.03])
w_avg = np.sum(N*p)/np.sum(N)
w_avg
se_w_avg = np.sqrt(np.sum((N*se/np.sum(N))**2))
se_w_avg
# this uses +/- 2 std devs
int_95 = w_avg + np.array([-2,2])*se_w_avg
int_95
```
## CI using simulations, Pg 20
```
# import the normal from scipy.stats
# repeated to make sure that it is clear that it is needed for this section
from scipy.stats import norm
# also need this for estimating CI from samples
from scipy.stats.mstats import mquantiles
n_men = 500
n_men
p_hat_men = 0.75
p_hat_men
se_men = np.sqrt(p_hat_men*(1.-p_hat_men)/n_men)
se_men
n_women = 500
n_women
p_hat_women = 0.65
p_hat_women
se_women = np.sqrt(p_hat_women*(1.-p_hat_women)/n_women)
se_women
n_sims = 10000
n_sims
p_men = norm.rvs(size=n_sims, loc=p_hat_men, scale=se_men)
p_men[:10] # show first ten
p_women = norm.rvs(size=n_sims, loc=p_hat_women, scale=se_women)
p_women[:10] # show first ten
ratio = p_men/p_women
ratio[:10] # show first ten
# the values of alphap and betap replicate the R default behavior
# see http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.mstats.mquantiles.html
int95 = mquantiles(ratio, prob=[0.025,0.975], alphap=1., betap=1.)
int95
```
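As an aside on the last cell: per the SciPy documentation, `alphap=1., betap=1.` corresponds to the plotting positions of R's default quantile type 7, which is also NumPy's default linear interpolation, so the simulation interval can be cross-checked against `np.percentile`. A sketch with stand-in normal draws (not the actual `ratio` samples above):

```
import numpy as np
from scipy.stats.mstats import mquantiles

rng = np.random.RandomState(42)
# stand-in for the p_men/p_women ratio draws
ratio = rng.normal(loc=1.15, scale=0.1, size=10000)

int95_mq = mquantiles(ratio, prob=[0.025, 0.975], alphap=1., betap=1.)
int95_np = np.percentile(ratio, [2.5, 97.5])

print(int95_mq)
print(int95_np)  # agrees with the mquantiles result
```

This equivalence is handy when `scipy` is unavailable or when comparing against R output.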
# Hands-on: `pandas` & Data Wrangling
By now, you have some experience in using the `pandas` library which will be very helpful in this module. In this notebook, we will explore more of `pandas` but in the context of data wrangling. To be specific, we will be covering the following topics:
- Reading in data
- Descriptive statistics
- Data wrangling
- Filtering
- Aggregation
- Merging
Again we import the necessary libraries first. Always remember to import first.
```
import pandas as pd
import numpy as np
```
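Before diving into the DepEd files, here is a minimal preview of the filtering, aggregation, and merging operations listed above, using a small made-up table (not the actual dataset):

```
import pandas as pd

schools = pd.DataFrame({
    'school_id': [1, 2, 3, 4],
    'region': ['NCR', 'NCR', 'CAR', 'CAR'],
    'enrollment': [500, 300, 200, 400],
})
regions = pd.DataFrame({
    'region': ['NCR', 'CAR'],
    'island_group': ['Luzon', 'Luzon'],
})

# Filtering: a boolean mask keeps only rows where the condition holds
big = schools[schools['enrollment'] > 250]

# Aggregation: total enrollment per region
totals = schools.groupby('region')['enrollment'].sum()

# Merging: join the two tables on the shared 'region' column
merged = schools.merge(regions, on='region')

print(big.shape[0])           # 3 schools above 250
print(totals['NCR'])          # 800
print(merged.columns.tolist())
```

We will apply the same three operations to the real enrollment data as the notebook progresses.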
## Data
The Philippines has an Open Data portal: https://data.gov.ph
In this notebook, we'll be using the [Public Elementary School Enrollment Statistics](https://data.gov.ph/?q=dataset/public-elementary-school-enrollment-statistics) provided by the Department of Education. The page contains two files. Download both files and save them to the same folder as this notebook.
## Reading Data
In the previous modules, we have already demonstrated how to read files using `pandas`. For more details, run the cells below to display the documentation for the commonly used functions for reading files. Try to **read the documentation** to see if what you're trying to do is something that can already be done by a library. Or you could simply **google** your concern. Most of the time, someone has already encountered the same problem.
```
pd.read_csv?
pd.read_excel?
# by default, the encoding is utf-8, but since the data has some latin characters
# the encoding argument needs to be updated
# list of encodings can be found here https://docs.python.org/2.4/lib/standard-encodings.html
# read more about encodings here http://kunststube.net/encoding/
deped2012 = pd.read_csv('deped_publicelementaryenrollment2012.csv', encoding='latin1')
# the head function provides a preview of the first 5 rows of the data
deped2012.head()
# Let's read in the other file too
deped2015 = pd.read_csv('depend_publicelementaryenrollment2015.csv', encoding='latin1')
deped2015.head()
```
### Let's begin exploring the data...
Some of the most common questions to ask **first** before proceeding with your data is to know the basic details of what's **in** the data. This is an important first step to verify what you see in the preview (`head`) and what's in the entire file.
* How many rows and columns do we have?
* What is the data type of each column?
* What is the most common value? Mean? Standard deviation?
#### `shape`
A `pandas` `DataFrame` is essentially a 2D `numpy` array. Using the `shape` attribute of the `DataFrame`, we can easily check the dimensions of the data file we read. It returns a tuple of the dimensions.
```
deped2012.shape
```
This means that the `deped_publicelementaryenrollment2012.csv` file has 463,908 rows and 10 columns.
#### `dtypes`
`dtypes` lets you check what data type each column is.
```
deped2012.dtypes
```
Notice that everything except `school_id` and `enrollment` is type `object`. In `pandas`, string columns are stored with dtype `object`.
#### `describe()`
`describe()` provides the basic descriptive statistics of the `DataFrame`. By default, it only includes the columns with numerical data. Non-numerical columns are omitted, but there are arguments that show the statistics related to non-numerical data as well.
```
deped2012.describe()
```
By default we see the **descriptive statistics** of the numerical columns.
```
# np.object was removed in newer NumPy versions; the builtin `object` works the same way
deped2012.describe(include=object)
```
But by specifying the `include` argument, we can see the descriptive statistics of the specific data type we're looking for.
```
deped2012.describe?
```
### Data Wrangling
After looking at the basic information about the data, let's see how "clean" the data is.
#### Common Data Problems (from slides)
1. Missing values
2. Formatting issues / data types
3. Duplicate records
4. Varying representation / Handle categorical values
#### `isna()` / `isnull()`
To check whether there are any missing values, `pandas` provides these two functions (they are aliases of each other) to detect them. Each maps every individual cell to either True or False.
#### `dropna()`
To remove any records with missing values, `dropna()` may be used. It has a number of arguments to help narrow down the criteria for removing the records with missing values.
```
deped2012.isna?
deped2012.dropna?
deped2012.isna().sum()
```
In this case there are no null values, which is great, but in most real-world datasets, expect null values.
```
deped2012_dropped = deped2012.dropna(inplace=False)
deped2012.shape, deped2012_dropped.shape
```
You'll see above that the two shapes are identical: nothing happened after applying `dropna` because there are no null values to begin with. But what if there were a null value in this dataset?
```
# This is just an ILLUSTRATION to show how to handle nan values. Don't change values to NaN unless NEEDED.
deped2012_copy = deped2012.copy() # We first make a copy of the dataframe
deped2012_copy.iloc[0,0] = np.nan # We modify the COPY (not the original)
deped2012_copy.head()
deped2012_copy.isna().sum()
```
The null value is now reflected, as shown in the output above.
```
deped2012_dropped = deped2012_copy.dropna(inplace=False)
deped2012_copy.shape, deped2012_dropped.shape
```
The 'dropped' dataframe now has a lower number of rows compared to the original one.
#### `duplicated()` --> `drop_duplicates()`
The `duplicated()` function returns a boolean `Series` marking the duplicated rows in the `DataFrame`. It also has a number of arguments for you to specify a subset of columns to consider.
`drop_duplicates()` is the function to remove the duplicated rows found by `duplicated()`.
```
deped2012.duplicated?
deped2012.drop_duplicates?
deped2012.duplicated().sum()
```
We can see here that there are no duplicates.
#### Varying representation
For categorical or textual data, unless the options provided are fixed, misspellings and different representations may exist in the same file.
To check the unique values of each column, a `pandas` `Series` has a function `unique()` which returns all the unique values of the column.
```
deped2012['province'].unique()
deped2012['year_level'].unique()
deped2012['region'].unique()
deped2015['region'].unique()
```
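If variants do appear (mixed case, stray whitespace, alternate spellings), a common fix is to normalize the strings first and then map any remaining known variants to one canonical value. A minimal sketch with hypothetical variants (the actual DepEd columns may already be consistent):

```python
import pandas as pd

# Hypothetical column with inconsistent representations of the same region
regions = pd.Series(['Region IV-A', 'region iv-a ', 'REGION IV-A', 'Region 4A'])

# Normalize case and whitespace first
cleaned = regions.str.strip().str.upper()

# Then map any remaining known variants to one canonical spelling
cleaned = cleaned.replace({'REGION 4A': 'REGION IV-A'})

print(cleaned.unique())  # a single canonical value remains
```

After normalizing, `unique()` should report one value per real-world category, which makes later `groupby` summaries trustworthy.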
### Summarizing Data
High data granularity is great for a detailed analysis. However, data is usually summarized or aggregated prior to visualization. `pandas` also provides an easy way to summarize data based on the columns you'd like using the `groupby` function.
We can call any of the following when grouping by columns:
- count()
- sum()
- min()
- max()
- std()
For columns that are categorical in nature, we can simply do `df['column'].value_counts()`. This will give the frequency of each unique value in the column.
```
pd.Series.value_counts?
```
Number of region instances
```
deped2015['region'].value_counts()
deped2012.groupby?
```
Number of enrollments per grade level
```
deped2012.groupby("year_level")['enrollment'].sum()
```
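The aggregates listed above can also be requested all at once with `.agg`, instead of calling one statistic at a time. A toy sketch shaped like the enrollment data:

```python
import pandas as pd

# Toy frame with the same column names used in this notebook
df = pd.DataFrame({
    'year_level': ['grade 1', 'grade 1', 'grade 2'],
    'enrollment': [30, 50, 40],
})

# Several summaries in one call: one row per group, one column per statistic
summary = df.groupby('year_level')['enrollment'].agg(['count', 'sum', 'min', 'max'])
print(summary)
```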
#### Exercise!
Let's try to get the following:
1. Total number of enrolled students per region and gender
2. Total number of enrolled students per year level and gender
```
deped2012.groupby(['region', 'gender'], as_index=False).sum()
deped2012.groupby(['year_level', 'gender']).sum()
```
### Filtering Data
```
deped2015.query("year_level=='grade 6'")
deped2015.query("year_level == 'grade 6' & school_id == 100004")
deped2015.query("year_level == 'grade 6' | year_level == 'grade 5'")[['region', 'province']]
```
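`query` strings are equivalent to ordinary boolean masks; the same filters can be written with comparison operators combined by `&`/`|` (note the parentheses, which are required because of operator precedence). A small self-contained sketch:

```python
import pandas as pd

df = pd.DataFrame({
    'year_level': ['grade 5', 'grade 6', 'grade 1'],
    'school_id': [100004, 100004, 100005],
})

# query-style string ...
q = df.query("year_level == 'grade 6' & school_id == 100004")

# ... and the equivalent boolean-mask indexing
m = df[(df['year_level'] == 'grade 6') & (df['school_id'] == 100004)]

print(q.equals(m))  # True: both select the same single row
```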
### Merging Data
Data are sometimes separated into different files, or additional data from another source can be associated with an existing dataset. `pandas` provides means to combine different `DataFrame`s together, provided that there are common variables to connect them.
#### `pd.merge`
`merge()` is very similar to database-style joins. `pandas` allows merging of `DataFrame` and **named** `Series` objects together. A join can be done along columns or indexes.
#### `pd.concat`
`concat()` on the other hand combines `pandas` objects along a specific axis.
#### `df.append`
`append()` basically adds the rows of another `DataFrame` or `Series` to the end of the caller. (In newer versions of `pandas`, `append()` is deprecated in favor of `pd.concat`.)
```
pd.merge?
pd.concat?
deped2012.append?
stats2012 = deped2012.groupby('school_id', as_index=False).sum()
stats2015 = deped2015.groupby('school_id', as_index=False).sum()
stats2012.head()
stats2015.tail()
stats2012.append(stats2015)
```
#### Exercise
The task is to compare the enrollment statistics of the elementary schools between 2012 and 2015.
1. Get the total number of enrolled students per school for each year
2. Merge the two `DataFrame`s together to show the summarized statistics for the two years for all schools.
```
stats2012 = deped2012.groupby('school_id', as_index=False).sum()
stats2015 = deped2015.groupby('school_id', as_index=False).sum()
stats2012.head()
stats2012.shape
stats2015.head()
stats2015.shape
```
The following is the wrong way of merging this.
```
merged = pd.merge(stats2012, stats2015)
merged.head()
merged.shape
```
#### Observations
1. Are the number of rows for both `DataFrames` the same or different? What's the implication if they're different?
2. Note the same column names for the two `DataFrames`. Based on the documentation for `merge()`, there's a parameter for suffixes for overlapping column names. If we want to avoid the "messy" suffixes, we can choose to rename columns prior to merging.
One way is to assign a list to the `columns` attribute, providing new names for ALL columns.
```ipython
stats2012.columns = ['school_id', '2012']
stats2015.columns = ['school_id', '2015']
```
But this is not practical if you have too many columns. `pandas` has a `rename()` function to which we can pass a dictionary mapping old column names to new ones. The `inplace` parameter applies the renaming directly, so the changed `DataFrame` stays in the same variable.
```ipython
stats2012.rename(columns={'enrollment': '2012'}, inplace=True)
stats2015.rename(columns={'enrollment': '2015'}, inplace=True)
```
```
# try the code above
stats2012.columns = ['school_id', '2012']
stats2015.columns = ['school_id', '2015']
stats2012.head()
stats2015.head()
## Merge the two dataframes using different "how" parameters
# how : {'left', 'right', 'outer', 'inner'}, default 'inner'
inner_res = pd.merge(stats2012, stats2015)
inner_res.head()
inner_res.isna().sum()
inner_res.shape
```
Play around with the `how` parameter and observe the following:
- shape of the dataframe
- presence or absence of null values
- number of schools dropped with respect to the original dataframe
```
outer_res = pd.merge(stats2012, stats2015, how="outer")
outer_res.isna().sum()
left_res = pd.merge(stats2012, stats2015, how="left")
left_res.isna().sum()
```
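While comparing `how` settings, `pd.merge` also accepts `indicator=True`, which adds a `_merge` column telling you whether each row came from the left frame, the right frame, or both. This makes it easy to see exactly which schools were dropped or unmatched. A sketch with toy frames:

```python
import pandas as pd

left = pd.DataFrame({'school_id': [1, 2, 3], '2012': [10, 20, 30]})
right = pd.DataFrame({'school_id': [2, 3, 4], '2015': [25, 35, 45]})

# Outer merge keeps every school; _merge records each row's origin
res = pd.merge(left, right, how='outer', indicator=True)
print(res[['school_id', '_merge']])
```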
For the following items, we will only be using the 2015 dataset.
1. Which region has the most number of schools? Does this region also have the most number of enrollees?
```
deped2015.groupby(['region']).sum().sort_values(by='enrollment', ascending=False)
```
2. Which region has the least number of schools? Does this region also have the least number of enrollees?
```
deped2015.groupby(['region']).sum().sort_values(by='enrollment', ascending=True)
```
3. Which school has the least number of enrollees?
```
deped2015.groupby(['school_name']).sum().sort_values(by='enrollment', ascending=True)
```
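Note that sorting by total `enrollment` answers the enrollee half of each question; counting *schools* per region needs a different aggregate. One way, assuming each school appears under a single region, is `nunique` on `school_id`, sketched here on a toy frame shaped like the DepEd data:

```python
import pandas as pd

# Toy frame with the same columns the questions rely on;
# schools 1 and 3 each span two year levels, hence repeated rows
df = pd.DataFrame({
    'region': ['NCR', 'NCR', 'NCR', 'CAR', 'CAR'],
    'school_id': [1, 1, 2, 3, 3],
    'enrollment': [50, 40, 30, 100, 50],
})

# Distinct schools per region, most first
schools = df.groupby('region')['school_id'].nunique().sort_values(ascending=False)
# Total enrollees per region, most first
enrollees = df.groupby('region')['enrollment'].sum().sort_values(ascending=False)

# The two rankings need not agree: here NCR has more schools, CAR more enrollees
print(schools.index[0], enrollees.index[0])
```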
(*This is a wiki post - please edit it to add your own translations*)

本页面为[fastai v3 2019中文版](https://forums.fast.ai/t/fast-ai-v3-2019/39325)的一部分
---
# Bilingual subtitle 双语字幕
我致力于为fastai视频提供英中双语字幕(已翻译的2019 part1 的3-5课目前只有中文字幕,后续版本都会补配上同步英文字幕,感谢 @Junmin 和 @LiuYanliang 的建议)。
I am determined to provide bilingual subtitles for fastai lesson videos. So far the earlier lesson 3-5 subtitles are only in Chinese; I will add English subtitles to them later. All subsequent subtitles will be bilingual, in both English and Chinese. (Thanks to @Junmin and @LiuYanliang for the bilingual subtitle suggestion.)
为了方便大家更好理解,在英中字幕里,我会通过 `( )`中的内容来补齐 @jeremy 讲演中省略掉的但有助于理解的信息。虽然初衷是帮助大家更好理解,但这么做可能植入我个人的不准确或误导性的理解,所以强烈欢迎大家多多指正和质疑;当然,大家可以直接忽略`( )`中的内容。
In order to help understand the videos better, in both the English and Chinese subtitles I will add additional information inside `( )` based on my understanding of what @jeremy is talking about, since in oral speech some information in a sentence is left unsaid or assumed from previous sentences. However, this risks bringing in my own bias and misunderstanding. Therefore, you can help correct my bias and misunderstanding in the subtitles, or simply ignore all of the content inside `( )`.
强烈欢迎大家帮助指正字幕中的任何问题,也欢迎质疑和提问!对于大家的纠错与改进的贡献,我都会在下面这个清单中通过 @你 的方式给予credit!
Please help me correct bias and misunderstandings in the subtitles by replying to this post! Your contribution will be credited in the following list! :heartpulse:
哪一课的字幕 | 时间点链接 | @贡献人 | 更正内容
Lesson Subtitle EN/CN | timeline link | @contributor | Content Correction
## 字幕问题 questions on subtitles :rescue_worker_helmet:
---
# Fastai EN-CN Vocab 英中词汇表
我计划在翻译课程字幕过程中,和大家一起逐步养成一个fastai 深度学习机器学习中英词汇对照表
I plan to develop a vocabulary translation for fastai deep learning between English and Chinese.
欢迎任何相关贡献,所有贡献者理应得到认可和@!现在这篇文章已经被维基化了,所以大家贡献时可以@自己
Any contribution is welcome and since this post is a wiki now, you are free to @yourself with credit when you contribute. :heartpulse:
[details="如何贡献 How to contribute"]
- 帮助翻译有疑难点标注 :rescue_worker_helmet: 的词汇
- 指出已有词汇翻译中的不足并纠正
- 指出英文字幕翻译中的
The needed help I can think of at the moment
- collections of vocab from key DL/ML books and courses are welcome
- recommendation of existing high quality resources on DL/ML vocab in EN and CN
- help us with the missing or incorrect translation
- list the English vocab you want to know the Chinese counterpart
- list the Chinese vocab you want to know the English counterpart
[/details]
[details="如何快速在网上查找中文对应词 How I find key vocab translation online"]
1. 书内搜索 Search within an open source book
[邱锡鹏深度学习教科书开源](https://nndl.github.io/nndl-book.pdf)
2. 网站搜索 Search on google
Google: English term 中文/中文数学
3. 在线单词表 online vocab list
机器之心的[深度学习词汇](https://github.com/jiqizhixin/Artificial-Intelligence-Terminology),但较长时间未更新
谷歌开发者机器学习[词汇表](https://zhuanlan.zhihu.com/p/29884825) , [EN version](https://developers.google.com/machine-learning/glossary/)
[/details]
---
## fastai specific vocab 专有词汇 英中对照
:white_check_mark: = 官方认可 approved by fastai
crappify (翻译建议采集中translation options : 1. 垃圾化;2. 残次化)
DataBunch 数据堆 :white_check_mark:
discriminative learning rate (翻译建议采集中translation options :1. 判别学习率; 2. 区别学习率)
---
## fastai lesson video vocab 课程字幕词汇 英中对照
:rescue_worker_helmet: = 需要救助 need help
### 第六课 Lesson 6
weight tying 权重拴连 :rescue_worker_helmet:
generative models 生成模型
shape 数据形状
linear interpolations 线性插值法
average pooling 平均汇聚
stride 2 convolution 步长为2的卷积
rank 3 tensor 秩为3的张量
channel 通道
convolution kernel 卷积核
reflection mode 反射模式
padding mode 填充模式
dihedral 二面角
weight norm 权重归一
Covariate Shift 协变量偏移
Batch Normalization BN 批量归一化
instance 实例化
module 模块
dropout mask 随机失活的掩码
Bernoulli trial 伯努利实验
test time / inference time 测试期/预测期
training time 训练期
spiking neurons 脉冲神经元
tabular learner 表格学习器
long tail distributions 长尾分布
Root Mean Squared Percentage Error 均方根百分比误差
nomenclature 名称系统
cardinality 集合元素数量
preprocessors 预处理
RMSPE (root mean squared percentage error) 均方根百分比误差
computer vision 机器视觉
projection 投射
### 第五课 Lesson 5 vocab
MAPE mean absolute percentage error 平均绝对百分比误差 感谢 @thousfeet
super-convergence 超级收敛
dynamic learning rate 动态学习率
exponentially weighted moving average 指数加权移动平均值
[details="更多词汇 more vocab"]
epoch 迭代次数
finite differencing 有限差分法
analytic solution 解析解
convergence 收敛
divergence 散度
L2 regularization L2正则化
learning rate annealing 学习率退火
element-wise function 元素逐一函数, 感谢与 @Moody 的探讨
logistic regression model 逻辑回归模型
flatten 整平 (numpy.arrays)
actuals 目标真实值
constructor 构造函数
generalization 泛化
2nd degree polynomial 这是2次多项式
Gradient Boosted Trees 梯度提升树
Entity Embeddings 实体嵌入
NLL (negative log likelihood) 负对数似然
PCA (Principal Component Analysis) 主成分分析
weight decay 权值衰减
benchmark 基准
cross-validation 交叉验证
latent factors 潜在因子
array lookup 数组查找
one-hot encoding one-hot编码,或者一位有效编码,(或者 独热编码 感谢 @LiuYanliang)
Dimensions 维度
transpose 转置矩阵处理
Convolutions 卷积
Affine functions 仿射函数
Batch Normalization 批量归一化
multiplicatively equal 乘数分布相同 (每层都10倍递增/减 1e-5, 1e-4, 1e-3)
diagonal edges 对角线边角
filter 过滤器
target 目标值
softmax softmax函数 (转化成概率值的激活函数)
backpropagation 反向传递
Universal Approximation Theorem 通用近似定理
weight tensors 参数张量
input activations 输入激活值
[/details]
### 第四课 Lesson 4
mask 掩码
matrix multiplication 矩阵乘法
dot product vs matrix product (单个数组和单个数组的乘法 = 点积,矩阵(多数组)与矩阵(多数组)的乘法 = 矩阵乘法)**当前字幕版本对这两个词混淆使用了(全用了"点积"这个词),下个版本会做修正。**
unfreeze 解冻模型
freeze 封冻模型
cross-entropy 交叉熵
scaled sigmoid 被放缩的S函数
[details="更多词汇 more vocab"]
layers 层
activations 激活值/层
parameters 参数
Rectified Linear Unit, ReLU 线性整流函数,(或者修正线性激活函数 感谢 @LiuYanliang)
non-linear activation functions 非线性激活函数
nonlinearities 非线性激活函数
sigmoid S函数
bias vector 偏差数组 (或者 偏置向量 感谢 @LiuYanliang)
embedding matrix 嵌入矩阵
bias 偏差
dropout 随机失活 (感谢 @Junmin ),或者 丢弃法
root mean squared error(RMSE)均方根误差
mean squared error(MSE)均方误差
sum squared error 残差平方和
vector 数组
spreadsheet 电子表格
dot product 点积
state of the art 最先进的
time series 时间序列
cold start problem 冷启动问题
timestamp 时间戳
sparse matrix 稀疏矩阵
collaborative filtering 协同过滤
metrics 度量函数/评估工具
end-to-end training 端到端训练
fully connected layer 全联接层
meta data 元数据
tabular learner 表格数据学习器
dependent variable 应变量
data augmentation 数据增强
processes 预处理
transforms 变形处理/设置
categorical variable 类别变量
continuous variable 连续变量
feature engineering 特征工程
gradient boosting machines 梯度提升器
hyperparameters 超参数
random forest 随机森林
discriminative learning rate 判别学习率
tabular data 表格数据
momentum 动量
decoder 解码器
encoder 编码器
accuracy 精度
convergence 收敛
overfitting 过拟合
underfitting 欠拟合
inputs 输入值
weight matrix 参数数组
matrix multiply 数组相乘
Tokenization 分词化
Numericalization 数值化
Learner 学习器 (感谢 @stas 学习器与模型的内涵[对比](https://forums.fast.ai/t/deep-learning-vocab-en-vs-cn/42297/11?u=daniel))
target corpus 目标文本数据集
Supervised Learning/models 监督学习/模型
Self-Supervised Learning 自监督学习
pretrained model 预训练模型
fine-tuning 微调
[/details]
### 第三课 Lesson 3
independent variable 自变量 (感谢 @Junmin 的指正)
Image Classification 图片分类
Image Segmentation 图片分割
Image Regression 图片回归
CNN Convolutional Neural Network 卷积神经网络
RNN Recurrent Neural Network 循环神经网络
NLP Natural Language Processing 自然语言处理
language model 语言模型
[details="课程其他词汇 vocab not DL specific"]
take it with a slight grain of salt 不可全信
come out of left field 不常见的
elapsed time 所经历的时间 :rescue_worker_helmet: [出现时间点](https://youtu.be/hkBa9pU-H48?t=895)
connoisseur 鉴赏级别/专业类电影
nomenclature 专业术语
rule of thumb 经验法则
asymptote 渐进
delimiter 分隔符
enter 回车键
macro 宏
unwieldy 困难
infuriating 特别烦人
hone in on it 精确定位目标
hand waving 用手做比划/解释
string 字符串
list 序列
[/details]
---
[details="机器学习基础词汇 ML vocab from elsewhere"]
<a name='watermelon'></a>
以下是我之前从周志华西瓜书收集的一小部分词汇对照。
期待有小伙伴能将周志华的西瓜书和Goodfellow的花书中的词汇对照整理出来 :heartpulse:
深度学习 deep learning
机器学习 machine learning
学习算法 learning algorithm
模型 model
数据集 data set
示例 instance 样本 sample
属性 attribute 特征 feature
属性值 attribute value
样本空间 sample space
特征向量 feature vector
维度数量 dimensionality
学习 learning 训练 training
训练数据 training data
训练样本 training sample, training example
训练集 training set
假设 hypothesis
真相 ground-truth
学习器 learner = model
预测 prediction
标记 label
样例 example
标记空间 输出空间 label space
分类 classification
回归 regression
二分类 binary classification
正类 positive class
反类 negative class
多分类 multi-class classification
测试 testing
测试样本 testing sample
聚类 clustering
簇 cluster
cluster = 没有标记下的分类,通过挖掘数据结构特征发现的
class = 给定标记的分类,事先给定的
监督学习 supervised learning
无监督学习 unsupervised learning
泛化能力 generalization
分布 distribution
独立同分布 independent and identically distributed i.i.d.
[/details]
# Linear regression
```
""" Starter code for simple linear regression example using placeholders
Created by Chip Huyen (huyenn@cs.stanford.edu)
CS20: "TensorFlow for Deep Learning Research"
cs20.stanford.edu
Lecture 03
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
sess_config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
```
## Data reading
```
def read_birth_life_data(filename):
    """
    Read in birth_life_2010.txt and return:
    data in the form of NumPy array
    n_samples: number of samples
    """
    text = open(filename, 'r').readlines()[1:]
    data = [line[:-1].split('\t') for line in text]
    births = [float(line[1]) for line in data]
    lifes = [float(line[2]) for line in data]
    data = list(zip(births, lifes))
    n_samples = len(data)
    data = np.asarray(data, dtype=np.float32)
    return data, n_samples
DATA_FILE = '../datasets/birth_life_2010.txt'
# Step 1: read in data from the .txt file
data, n_samples = read_birth_life_data(DATA_FILE)
```
## Phase 1: Build a graph
```
# Step 2: create placeholders for X (birth rate) and Y (life expectancy)
# Remember both X and Y are scalars with type float
X, Y = None, None
# Step 3: create weight and bias, initialized to 0.0
# Make sure to use tf.get_variable
w, b = None, None
# Step 4: build model to predict Y
# i.e. how you would arrive at Y_predicted given X, w, and b
Y_predicted = None
# Step 5: use the square error as the loss function
loss = None
# Step 6: using gradient descent with learning rate of 0.001 to minimize loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(loss)
```
## Phase 2: Train a model using `tf.Session`
```
start = time.time()

# Create a filewriter to write the model's graph to TensorBoard
#############################
########## TO DO ############
#############################

with tf.Session(config=sess_config) as sess:
    # Step 7: initialize the necessary variables, in this case, w and b
    #############################
    ########## TO DO ############
    #############################

    # Step 8: train the model for 100 epochs
    for i in range(100):
        total_loss = 0
        for x, y in data:
            # Execute train_op and get the value of loss.
            # Don't forget to feed in data for placeholders
            # (loss_value is a number, distinct from the `loss` tensor above)
            _, loss_value = None, None ########## TO DO ############
            total_loss += loss_value
        print('Epoch {0}: {1}'.format(i, total_loss/n_samples))

    # close the writer when you're done using it
    #############################
    ########## TO DO ############
    #############################
    writer.close()

    # Step 9: output the values of w and b
    w_out, b_out = None, None
    #############################
    ########## TO DO ############
    #############################

print('Took: %f seconds' %(time.time() - start))
```
## Plot the result
```
plt.plot(data[:,0], data[:,1], 'bo', label='Real data')
plt.plot(data[:,0], data[:,0] * w_out + b_out, 'r', label='Predicted data')
plt.legend()
plt.show()
```
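As a sanity check on the trained `w_out` and `b_out`, the same least-squares line can be computed in closed form with NumPy. A sketch on synthetic data (since `birth_life_2010.txt` may not be on hand; the slope/intercept values here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 8, size=200).astype(np.float32)   # stand-in for birth rate
y = 85.0 - 5.0 * x + rng.normal(0, 1.0, size=200)    # stand-in for life expectancy

# Closed-form least squares: solve [x 1] @ [w, b] ≈ y
A = np.stack([x, np.ones_like(x)], axis=1)
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(w, b)  # ≈ -5.0 and ≈ 85.0, recovering the generating line
```

Gradient descent on the squared error, as in the TensorFlow version above, should converge toward this same solution.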
# Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
- In this notebook, you will implement all the functions required to build a deep neural network.
- In the next assignment, you will use these functions to build a deep neural network for image classification.
**After this assignment you will be able to:**
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
**Notation**:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the main package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v4 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
## 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
- Initialize the parameters for a two-layer network and for an $L$-layer neural network.
- Implement the forward propagation module (shown in purple in the figure below).
- Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
- We give you the ACTIVATION function (relu/sigmoid).
- Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
- Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
- Compute the loss.
- Implement the backward propagation module (denoted in red in the figure below).
- Complete the LINEAR part of a layer's backward propagation step.
- We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward)
- Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
- Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
- Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> **Figure 1**</center></caption><br>
**Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
## 3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
### 3.1 - 2-layer Neural Network
**Exercise**: Create and initialize the parameters of the 2-layer neural network.
**Instructions**:
- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*.
- Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.
- Use zero initialization for the biases. Use `np.zeros(shape)`.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
    """
    Argument:
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer

    Returns:
    parameters -- python dictionary containing your parameters:
                    W1 -- weight matrix of shape (n_h, n_x)
                    b1 -- bias vector of shape (n_h, 1)
                    W2 -- weight matrix of shape (n_y, n_h)
                    b2 -- bias vector of shape (n_y, 1)
    """
    np.random.seed(1)

    ### START CODE HERE ### (≈ 4 lines of code)
    # W1 = None
    # b1 = None
    # W2 = None
    # b2 = None
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros(shape=(n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros(shape=(n_y, 1))
    ### END CODE HERE ###

    assert(W1.shape == (n_h, n_x))
    assert(b1.shape == (n_h, 1))
    assert(W2.shape == (n_y, n_h))
    assert(b2.shape == (n_y, 1))

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}

    return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.01744812 -0.00761207]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
### 3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\\
m & n & o \\
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\\
d & e & f \\
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \\
t \\
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
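This broadcasting can be verified directly in NumPy: a bias `b` of shape (3, 1) is stretched across every column of `W @ X`. A quick sketch with concrete numbers:

```python
import numpy as np

W = np.arange(9.0).reshape(3, 3)          # plays the role of j..r
X = np.arange(9.0, 18.0).reshape(3, 3)    # plays the role of a..i
b = np.array([[1.0], [2.0], [3.0]])       # the column vector s, t, u

Z = W @ X + b  # b is broadcast across all 3 columns
assert Z.shape == (3, 3)

# Every column gets the same bias added: column k equals W @ X[:, k] + b[:, 0]
assert np.allclose(Z[:, 0], W @ X[:, 0] + b[:, 0])
print(Z[0, 0])  # 0*9 + 1*12 + 2*15 + 1 = 43.0
```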
**Exercise**: Implement initialization for an L-layer Neural Network.
**Instructions**:
- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.
- Use zeros initialization for the biases. Use `np.zeros(shape)`.
- We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
```python
if L == 1:
    parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
    parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
```
```
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                    bl -- bias vector of shape (layer_dims[l], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)  # number of layers in the network

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
        ### END CODE HERE ###

        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))

    return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
## 4 - Forward propagation module
### 4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
- LINEAR
- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
**Exercise**: Build the linear part of forward propagation.
**Reminder**:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
```
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
# Z = None
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
### 4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = sigmoid(Z)
```
- **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = relu(Z)
```
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
```
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
**Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
### 4.3 - L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br>
**Exercise**: Implement the forward propagation of the above model.
**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.)
**Tips**:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
```
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_activation_forward() (there are L of them, indexed from 0 to L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# print(parameters.keys())
# print(L)
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, \
parameters['W' + str(l)], \
parameters['b' + str(l)], \
activation = "relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, \
parameters['W' + str(L)], \
parameters['b' + str(L)], \
activation = "sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case_2hidden()
# print(str(parameters))
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
```
<table style="width:50%">
<tr>
<td> **AL** </td>
<td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 3 </td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
## 5 - Cost function
Having implemented forward propagation, you now need to compute the cost, because you want to check whether your model is actually learning.
**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
```
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
logprobs = np.multiply(np.log(AL),Y) + np.multiply(np.log(1-AL),(1-Y))
cost = -(1/m) * np.sum(logprobs)
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
```
**Expected Output**:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
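One practical caveat (not required by the grader): if `AL` contains an exact 0 or 1, `np.log` produces `-inf` and the cost becomes `nan`. Below is a hedged sketch of a numerically safer variant; the `eps` value and function name are illustrative, not part of the assignment:

```python
import numpy as np

def compute_cost_stable(AL, Y, eps=1e-12):
    """Cross-entropy cost with AL clipped away from 0 and 1 to avoid log(0)."""
    m = Y.shape[1]
    AL = np.clip(AL, eps, 1 - eps)  # keep probabilities strictly inside (0, 1)
    logprobs = Y * np.log(AL) + (1 - Y) * np.log(1 - AL)
    return float(np.squeeze(-np.sum(logprobs) / m))

# Fully saturated (correct) predictions no longer blow up:
AL = np.array([[1.0, 0.0]])
Y = np.array([[1, 0]])
print(compute_cost_stable(AL, Y))  # a tiny positive number instead of nan
```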
## 6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
**Reminder**:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* <br> *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
### 6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> **Figure 4** </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
**Exercise**: Use the 3 formulas above to implement linear_backward().
```
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = 1/m * np.dot(dZ, A_prev.T)
db = 1/m * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
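A quick way to convince yourself that formulas (8)-(10) are right is a finite-difference check. The sketch below is illustrative, not part of the assignment: it uses a toy cost $J = \frac{1}{2m}\sum Z^2$, for which the notebook's convention gives $dZ = Z$, and compares formula (8) against a numerical derivative on one entry of $W$:

```python
import numpy as np

np.random.seed(1)
A = np.random.randn(3, 4)   # activations from previous layer (3 units, 4 examples)
W = np.random.randn(2, 3)
b = np.random.randn(2, 1)
m = A.shape[1]

def J(W):
    """Toy cost: J = (1/2m) * sum(Z^2), so in this convention dZ = Z."""
    Z = np.dot(W, A) + b
    return np.sum(Z ** 2) / (2 * m)

Z = np.dot(W, A) + b
dZ = Z                         # derivative of the toy loss w.r.t. Z
dW = np.dot(dZ, A.T) / m       # formula (8)

# Central finite difference on one entry of W
eps = 1e-7
i, j = 1, 2
W_plus, W_minus = W.copy(), W.copy()
W_plus[i, j] += eps
W_minus[i, j] -= eps
dW_num = (J(W_plus) - J(W_minus)) / (2 * eps)

print(abs(dW[i, j] - dW_num))  # should be tiny (rounding error only)
```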
### 6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: **`linear_backward`** and the backward step for the activation **`linear_activation_backward`**.
To help you implement `linear_activation_backward`, we provided two backward functions:
- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:
```python
dZ = sigmoid_backward(dA, activation_cache)
```
- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:
```python
dZ = relu_backward(dA, activation_cache)
```
If $g(.)$ is the activation function,
`sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$
**Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.
```
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
dAL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(dAL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected output with sigmoid:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
**Expected output with relu:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
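The `sigmoid_backward` and `relu_backward` helpers are provided by the assignment files and are not shown in this notebook. A minimal sketch of how they could be implemented, assuming (as Section 4.2 describes) that the activation cache holds `Z`:

```python
import numpy as np

def sigmoid_backward(dA, cache):
    """dZ = dA * sigma'(Z), where sigma'(Z) = s * (1 - s)."""
    Z = cache
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)

def relu_backward(dA, cache):
    """dZ = dA where Z > 0, and 0 elsewhere."""
    Z = cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

Z = np.array([[1.0, -2.0]])
print(relu_backward(np.array([[0.5, 0.5]]), Z))     # gradient passes only where Z > 0
print(sigmoid_backward(np.array([[1.0, 1.0]]), Z))  # scaled by s*(1-s) elementwise
```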
### 6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (A, W, b, and Z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> **Figure 5** : Backward pass </center></caption>
**Initializing backpropagation**:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
```python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
```
You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`.
**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
```
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients.
# Inputs: "dAL, current_cache".
# Outputs: "grads["dAL-1"], grads["dWL"], grads["dbL"]"
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[-1]
grads["dA" + str(L-1)], grads["dW" + str(L)], grads["db" + str(L)] \
= linear_activation_backward(dAL, current_cache, activation="sigmoid")
### END CODE HERE ###
# Loop from l=L-2 to l=0
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 1)], current_cache".
# Outputs: "grads["dA" + str(l)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
# dA_prev_temp, dW_temp, db_temp = None
# grads["dA" + str(l)] = None
# grads["dW" + str(l + 1)] = None
# grads["db" + str(l + 1)] = None
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l+1)], \
current_cache, \
activation="relu")
grads["dA" + str(l)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print_grads(grads)
```
**Expected Output**
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]] </td>
</tr>
</table>
### 6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
**Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.
**Instructions**:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
learning_rate -- the learning rate, a positive scalar
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
# parameters["W" + str(l+1)] = None
# parameters["b" + str(l+1)] = None
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)]
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td > W1 </td>
<td > [[-0.59562069 -0.09991781 -2.14584584 1.82662008]
[-1.76569676 -0.80627147 0.51115557 -1.18258802]
[-1.0535704 -0.86128581 0.68284052 2.20374577]] </td>
</tr>
<tr>
<td > b1 </td>
<td > [[-0.04659241]
[-1.28888275]
[ 0.53405496]] </td>
</tr>
<tr>
<td > W2 </td>
<td > [[-0.55569196 0.0354055 1.32964895]]</td>
</tr>
<tr>
<td > b2 </td>
<td > [[-0.84610769]] </td>
</tr>
</table>
## 7 - Conclusion
Congrats on implementing all the functions required for building a deep neural network!
We know it was a long assignment but going forward it will only get better. The next part of the assignment is easier.
In the next assignment you will put all these together to build two models:
- A two-layer neural network
- An L-layer neural network
You will in fact use these models to classify cat vs non-cat images!
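As a hedged preview of how those pieces fit together, here is a self-contained miniature of the training loop: a single LINEAR->SIGMOID layer on toy data, wired through the same four steps (forward, cost, backward, update). The data, sizes, learning rate, and iteration count are illustrative only, not the next assignment's actual model:

```python
import numpy as np

np.random.seed(0)
X = np.random.randn(2, 200)
Y = (X[0:1, :] + X[1:2, :] > 0).astype(float)   # toy labels

W = np.random.randn(1, 2) * 0.01
b = np.zeros((1, 1))
m = X.shape[1]

for i in range(500):
    Z = np.dot(W, X) + b                 # forward
    AL = 1 / (1 + np.exp(-Z))
    cost = -np.mean(Y * np.log(AL + 1e-12) + (1 - Y) * np.log(1 - AL + 1e-12))
    dZ = AL - Y                          # backward (sigmoid + cross-entropy)
    dW = np.dot(dZ, X.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    W -= 0.5 * dW                        # update
    b -= 0.5 * db

print("final cost:", cost)   # should be far below the initial ~0.69
```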
## Data Types
***Variables and Conditionals***
```
x = 21
print ('x:',x,type(x)) #type is defined by the value it refers to
y = 4.1
print ('y:',y,type(y))
x = x * y
print ('x:',x,type(x)) #type of value it refers to can change over the course of program execution
x = 'Sandeep Mewara'
print ('x:',x,type(x)) #str - string
#some simple operations
x=2
x+=10 # x=x+10
print (x)
x*=5 # x=x*5
print (x)
x = 10
print (x**2) #raise to the power of 2
```
*** Conditional statements ***
```
x=75
if x > 100:
print ("More")
elif x > 70:
print ("Less")
else:
print ("Not defined")
x,y = 10,5 # x = 10;y = 5
if x > 5 and y<10: # 'and' evaluates to true only if all the conditions are true
print ('and condition')
if x > 10 or y == 5: # 'or' evaluates to true if either of the condition are true
print ('or condition')
if not x == 100: # 'not' toggles true to false & vice versa
print ('not condition')
```
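Two related constructs worth knowing alongside `if`/`elif`/`else` are the conditional expression and chained comparisons:

```python
x = 75
label = "More" if x > 100 else "Less"   # conditional expression (ternary)
print(label)

y = 7
print(0 < y < 10)   # chained comparison: equivalent to (0 < y) and (y < 10)
```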
*** String manipulations ***
```
s = "Python for Machine Learning"
print (s[0]) #Print the first character
t = 'Data Science'
q = """is awesome!
Know it's power."""
print (t, q)
print (q[3:11])
s = 'Python for Machine Learning.'
print (s*2) # '*' operator on string repeats the string
v = "... Learn by Insight!!!"
t = s + v # concatenate strings
print(t)
s="sandeep mewara"
#s[0]='S' #trying to replace 's' with 'S' => INCORRECT: strings are immutable
s = s.replace('s','S')
print (s)
t = s.replace('m','M')
print (s,',',t) # original string is preserved
x = "I got my BTech from %s in year %d" %('IIT',2005) # string construction
print(x)
```
***Type conversion***
```
a,b='18','54.8'
print (a+b) # a,b => they are strings, not numbers.
s,t=5,10
print (str(s)+str(t)) # non-string to string add
print (s+t) # normal add
```
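The reverse direction also works: `int()` and `float()` parse strings into numbers, complementing the `str()` conversion shown above:

```python
a, b = '18', '54.8'
print(int(a) + float(b))   # convert strings to numbers before adding: 72.8
print(int('1011', 2))      # int() also accepts a base: 11
print(float('1e3'))        # scientific notation works too: 1000.0
```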
***Formatting Strings***
```
#% is a format operator and %d, %s, %f are special format sequences
print("%15s"%"FillWord") # right align in a column width 15
print("There is %-8s space"%"more") # negative number for left alignment
print("%4d"%42) #right align in a column of width 4
print("%04d"%42) #pad with leading zeros
print("%.3f"%42.34656) # %.<fixed number of digits to the right of Dot.>f
print("%.5f"%42.34)
print ("Formatting using the format func: {0:.3f}lakh {1:02d}km {2:5s}".format(2.1,4,"LearnByInsight"))
print ('milk' in 'milky way')
print ('a' in 'milk')
```
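On Python 3.6 and later, f-strings are a more readable alternative to the `%` operator; the same format specifications apply after a colon:

```python
word = "FillWord"
print(f"{word:>15}")       # right align in a column of width 15
print(f"{42:04d}")         # pad with leading zeros -> 0042
print(f"{42.34656:.3f}")   # three digits after the decimal point -> 42.347
lakh, km = 2.1, 4
print(f"{lakh:.3f}lakh {km:02d}km LearnByInsight")
```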
## Data Structures
***List***
```
# indexed by integers, starting from 0.
l=[1,2,3,5,8,13] #items of the same type
print (l)
lst=[1,'x',2,'t',3.0] #items of mixed types
print (lst)
lst = [1,2,3,5,8,'sm',13,21,34]
print ("List:",lst)
print ("Length of the list:",len(lst)) #length of the list
pop_element = lst.pop() #pop by default removes & returns the last element from the list
print ("Popped out element:%s,\nList after pop operation:%s"%(pop_element,lst))
lst.pop(2) #pop can also be used to remove an element based on index starting from 0
print (lst)
lst = [1,2,3,5,8,'sm']
print ("Before deleting:",lst)
del lst[len(lst)-1]
print ("After deleting the last element:",lst)
lst = [1,2,3,5,8,'sm']
lst.insert(2,"insert") #insert at an index
print (lst)
lst = [1,2,3,5,8,'sm']
last_element = lst[-1] #negative index to access list elements from the end, -1 refers to the last element of the list
last_element
# List copy
x = [1,2,3,5,8]
x_copy = x #copy the contents of the list x
x_copy[3] = -3 # change the content of the new list x_copy
print (x,x_copy) # change made to the copied list affects original list too
#List Splicing
x = [1,2,3,5,8,'sm']
x_copy = x[:] # [start:end:step] default step 1. Returns a new list from start to end-1
print ("List x:",x)
print ("List x_copy", x_copy)
print ("x[1:3]=%s, x[:3]=%s, x[2:]=%s, x[-2:]=%s"%(x[1:3],x[:3],x[2:],x[-2:]))
print ("x[1:5:2]=%s"%x[1:5:2] ) #with step 2
x = [1,2,3,5,8,'sm']
print (x)
del x[1:3] # Remove more than one element
print (x)
lst = [1,2,3,5,8,'sm']
print ("List:",lst)
lst.append(13) #add a single element
print ("Appending single element 13:",lst)
lst.extend([-13,-8]) #add more than one element
print ("Adding more than one element -13 & -8:",lst)
lst = [1,2,3,5,8,'sm']
lst.append([-5,-8]) #list of list
print ("Appending a list [-5,-8]:",lst)
mat = [[2,1],[4,3]] #Define a matrix
print ("Matrix: ",mat)
print ("Accessing individual elements: 2nd element in 1st row:",mat[0][1])
lst = [1,2,3,5,8,'sm']
print ("list:",lst)
print ("5 in list:",5 in lst)
print ('a in list:','a' in lst)
text = "Machine Learning is awesome !!!"
word_list = text.split() #split by space
print (text,"\nWord list",word_list)
text = "Split,this,string,using,comma,separator"
ex_list=text.split(',') #comma as delimiter
print ("ex_list:",ex_list)
text = "Machine Learning is awesome !!!"
word_list = text.split()
print ("List of words",word_list)
sentence = " ".join(word_list)
print ("Reconstruct:",sentence)
new_delimiter_sentence = "*".join(word_list) #via different delimiter
print ("Reconstructed using a '*':", new_delimiter_sentence)
s = " Water vapor and icy particles errupt from Enceladus "
t = s.strip() #remove leading and trailing whitespace if no chars is given in strip
print(t)
```
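The aliasing pitfall shown earlier (`x_copy = x` makes both names point at the same list) can be avoided by making an actual copy; for nested lists a deep copy is needed:

```python
import copy

x = [1, 2, 3, 5, 8]
y = x.copy()          # or list(x), or x[:] -- all make a new (shallow) list
y[3] = -3
print(x, y)           # x is unchanged: [1, 2, 3, 5, 8] [1, 2, 3, -3, 8]

# For nested lists a shallow copy still shares the inner lists:
mat = [[2, 1], [4, 3]]
mat_deep = copy.deepcopy(mat)   # fully independent copy
mat_deep[0][0] = 99
print(mat, mat_deep)            # [[2, 1], [4, 3]] [[99, 1], [4, 3]]
```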
***Tuple***
```
t = 1,2,'a','x',3,5
print (type(t),t)
print (t[3]) #access the elements the same way as in Lists
print (t[2:5])
t= (1,'a',2,'x') #Define a tuple enclosing it in a parantheses
print(t)
t = 10, #Tuple with a single element: note the trailing comma
print (type(t),t)
t = tuple() #an empty tuple
print (type(t),t)
t = tuple('tuple') #Argument string
print (type(t),t)
t = tuple([1,2,3,5]) #Argument list
print (type(t),t)
t = tuple(['sm','lbi']) #Argument list
print (type(t),t)
d = dict() #create an empty dictionary
print (d)
space_probes = {"voyager1":1977,"cassini":1997,"juno":2011,"mangalyaan":2013} # colon ':', separates the key and value
print (type(space_probes),space_probes)
print (space_probes['juno']) #Accessing dictionary values
# Tuples as keys of the dictionary
name_id_num = {("Sandeep","Mewara"):24568,("Code","Project"):7418}
print (name_id_num)
print (name_id_num[("Sandeep","Mewara")])
# range
x = range(10)
print (type(x),x)
for i in x:
print (i,end=" ")
print('\nRange defined:')
for j in range(3,9):
print (j,end=" ")
print('\nRange defined with steps defined:')
x = range(2,21,3)
for i in x:
print (i,end=" ")
# while
lst =[7,3,5,2,9,1]
i=0
while lst[i] > 2:
i=i+1
print (i)
#continue
x = [1,3,4,8,9,5]
for i in x:
if i%2 == 0:
continue
print (i)
# break
x = [1,2,3,4,-5,8,9]
for i in x:
if i < 0:
break
print (i)
# zip
lst1 = range(5)
lst2 = range(5,10)
for x,y in zip(lst1,lst2):
print (x,y,x+y)
#using zip to find the max element and index in a list
lst = [1,5,2,17,8,-2]
print (max(lst))
lst_index = list(zip(lst,range(len(lst))))
print (lst_index)
print ("Max val and idx",max(lst_index))
# list comprehension
#Generate the squares of the first 'n' numbers
number_squares = [x*x for x in range(5)]
print (number_squares)
#list of even numbers upto 10
even_nos = [x for x in range(10) if x%2 == 0]
print (even_nos)
even_nos = [x for x in range(0,10,2)]
print (even_nos)
#remove spaces from the word list
word_list = [' river','hills ',' and','shepherd']
new_list = [word.strip() for word in word_list] #strip the leading and trailing spaces
new_list
```
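The dictionary examples earlier only showed creation and lookup; below are some common iteration and safe-access patterns (the keys and values are illustrative):

```python
space_probes = {"voyager1": 1977, "cassini": 1997, "juno": 2011}

for name in space_probes:                 # iterating a dict yields its keys
    print(name, space_probes[name])

for name, year in space_probes.items():   # items() yields (key, value) pairs
    print(name, year)

print(space_probes.get("pioneer", "unknown"))  # get() avoids KeyError: unknown
print("juno" in space_probes)                  # membership tests keys: True
```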
## Functions
```
def find_avg(x):
avg = sum(x)/len(x)
return avg
print (find_avg([1,2,3,5,8,13]))
print (find_avg([1,-2,3,-5]))
def prod(x,y,z=1):
return (x+1)*(y+2)*(z+3)
print (prod(1,2,3))
print (prod(1,2)) # z default is defined so can be skipped
print (prod(x=1,y=5))
print (prod(z=2,y=1,x=0)) #Supply arguments in arbitary order
# variable scope
x = 5
y = -5
def check_scope():
global x
x = 10
y = 20
print ("x,y => Inside the func:",x,y)
check_scope()
print ("x,y => Outside the func:",x,y)
def sum_num(x,*y): # can accept a variable number of arguments if we add * to the last parameter name
print ("Variable length arg:",y)
s = x
for i in y:
s+=i
return s
print (sum_num(10,20))
print (sum_num(11,22,33,44,55))
```
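Just as `*y` above collects extra positional arguments into a tuple, `**kwargs` collects extra keyword arguments into a dictionary, and the same syntax unpacks sequences and dicts at the call site. A sketch reusing the `prod` signature from earlier:

```python
def describe(name, **attrs):   # **attrs collects extra keyword args into a dict
    print(name)
    for key, value in attrs.items():
        print(" ", key, "=", value)

describe("probe", target="Saturn", launched=1997)

def prod(x, y, z=1):
    return (x + 1) * (y + 2) * (z + 3)

# * and ** also unpack when calling a function:
args = (1, 2)
kwargs = {"z": 3}
print(prod(*args, **kwargs))   # same as prod(1, 2, z=3): 2 * 4 * 6 = 48
```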
*** List Comprehension ***
```
lst = [i for i in range(5)]
lst
# matrix of 4*3
row_num = 4
col_num = 3
multi_list = [[0 for col in range(col_num)] for row in range(row_num)]
multi_list
```
*** Zip & Pack ***
```
# Merge/Pack two lists into list of tuples
#1: Using zip
lst1 = (1,3,5,7)
lst2 = (2,4,6,8)
#zip
lst = list(zip(lst1,lst2))
print ('tuples using zip', lst)
for i,j in lst:
print (i,j)
#2: Using map
print ('tuples using map', list(map(lambda x, y:(x,y), lst1, lst2)))
# Unpack tuples into list back
unpack_lst1, unpack_lst2 = list(zip(*lst))
print ('lst1_back: ',unpack_lst1)
print ('lst2_back: ',unpack_lst2)
```
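The max-with-index trick shown earlier built `zip(lst, range(len(lst)))` by hand; `enumerate` is the idiomatic shortcut. Note it yields `(index, value)` pairs, so the tuple order is reversed relative to the zip version:

```python
lst = [1, 5, 2, 17, 8, -2]

for idx, val in enumerate(lst):   # pairs each element with its index
    print(idx, val)

# Max element together with its index, comparing on the value:
max_idx, max_val = max(enumerate(lst), key=lambda pair: pair[1])
print("Max val and idx:", max_val, max_idx)   # 17 3
```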
## Seminar 1: Fun with Word Embeddings (3 points)
Today we are going to play with word embeddings: train our own small embedding, load a pretrained one from the gensim model zoo, and use it to visualize text corpora.
All of this will happen on top of a text dataset of Quora questions.
__Requirements:__ `pip install --upgrade nltk gensim bokeh` , but only if you're running locally.
```
# download the data:
!wget https://www.dropbox.com/s/obaitrix9jyu84r/quora.txt?dl=1 -O ./quora.txt
# alternative download link: https://yadi.sk/i/BPQrUu1NaTduEw
import numpy as np
data = list(open("./quora.txt", encoding="utf-8"))
data[50]
```
__Tokenization:__ a typical first step for an nlp task is to split raw data into words.
The text we're working with is in raw format: with all the punctuation and smiles attached to some words, so a simple str.split won't do.
Let's use __`nltk`__ - a library that handles many nlp tasks like tokenization, stemming or part-of-speech tagging.
```
from nltk.tokenize import WordPunctTokenizer
tokenizer = WordPunctTokenizer()
print(tokenizer.tokenize(data[50]))
# TASK: lowercase everything and extract tokens with tokenizer.
# data_tok should be a list of lists of tokens for each line in data.
data_tok = [tokenizer.tokenize(line.lower()) for line in data]
assert all(isinstance(row, (list, tuple)) for row in data_tok), "please convert each line into a list of tokens (strings)"
assert all(all(isinstance(tok, str) for tok in row) for row in data_tok), "please convert each line into a list of tokens (strings)"
is_latin = lambda tok: all('a' <= x.lower() <= 'z' for x in tok)
assert all(map(lambda l: not is_latin(l) or l.islower(), map(' '.join, data_tok))), "please make sure to lowercase the data"
print([' '.join(row) for row in data_tok[:2]])
```
__Word vectors:__ as the saying goes, there's more than one way to train word embeddings. There's Word2Vec and GloVe with different objective functions. Then there's fasttext that uses character-level models to train word embeddings.
The choice is huge, so let's start someplace small: __gensim__ is another nlp library that features many vector-based models including word2vec.
```
from gensim.models import Word2Vec
model = Word2Vec(data_tok,
size=32, # embedding vector size
min_count=5, # consider words that occured at least 5 times
window=5).wv # define context as a 5-word window around the target word
# now you can get word vectors !
model.get_vector('anything')
# or query similar words directly. Go play with it!
model.most_similar('bread')
```
### Using pre-trained model
Took it a while, huh? Now imagine training life-sized (100~300D) word embeddings on gigabytes of text: wikipedia articles or twitter posts.
Thankfully, nowadays you can get a pre-trained word embedding model in 2 lines of code (no sms required, promise).
```
import gensim.downloader as api
model = api.load('glove-twitter-100')
model.most_similar(positive=["coder", "money"], negative=["brain"])
```
### Visualizing word vectors
One way to see if our vectors are any good is to plot them. Thing is, those vectors are in 30D+ space and we humans are more used to 2-3D.
Luckily, we machine learners know about __dimensionality reduction__ methods.
Let's use that to plot 1000 most frequent words
```
words = sorted(model.vocab.keys(),
key=lambda word: model.vocab[word].count,
reverse=True)[:1000]
print(words[::100])
# for each word, compute its vector with model
word_vectors = np.array([model.get_vector(w) for w in words])
assert isinstance(word_vectors, np.ndarray)
assert word_vectors.shape == (len(words), 100)
assert np.isfinite(word_vectors).all()
```
#### Linear projection: PCA
The simplest linear dimensionality reduction method is __P__rincipal __C__omponent __A__nalysis.
In geometric terms, PCA tries to find axes along which most of the variance occurs. The "natural" axes, if you wish.
<img src="https://github.com/yandexdataschool/Practical_RL/raw/master/yet_another_week/_resource/pca_fish.png" style="width:30%">
Under the hood, it attempts to decompose object-feature matrix $X$ into two smaller matrices: $W$ and $\hat W$ minimizing _mean squared error_:
$$\|(X W) \hat{W} - X\|^2_2 \to_{W, \hat{W}} \min$$
- $X \in \mathbb{R}^{n \times m}$ - object matrix (**centered**);
- $W \in \mathbb{R}^{m \times d}$ - matrix of direct transformation;
- $\hat{W} \in \mathbb{R}^{d \times m}$ - matrix of reverse transformation;
- $n$ samples, $m$ original dimensions and $d$ target dimensions;
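As a quick, optional sanity check on the formula above (a numpy sketch with synthetic data, not part of the assignment): the optimal $W$ consists of the top-$d$ right singular vectors of the centered $X$, and keeping more components can only lower the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X = X - X.mean(axis=0)  # PCA assumes a centered object matrix

# The optimal W is given by the top-d right singular vectors of X
_, _, Vt = np.linalg.svd(X, full_matrices=False)

def pca_mse(d):
    W = Vt[:d].T       # direct transformation  (m x d)
    W_hat = Vt[:d]     # reverse transformation (d x m)
    X_rec = (X @ W) @ W_hat
    return ((X_rec - X) ** 2).sum()

# more components => reconstruction error can only shrink
assert pca_mse(1) >= pca_mse(2) >= pca_mse(5)
```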
```
from sklearn.decomposition import PCA
# map word vectors onto 2d plane with PCA. Use good old sklearn api (fit, transform)
# after that, normalize vectors to make sure they have zero mean and unit variance
word_vectors_pca = PCA(n_components=2).fit_transform(word_vectors)
# and maybe MORE OF YOUR CODE here :)
word_vectors_pca = (word_vectors_pca - word_vectors_pca.mean(0)) / \
word_vectors_pca.std(0)
assert word_vectors_pca.shape == (len(word_vectors), 2), "there must be a 2d vector for each word"
assert max(abs(word_vectors_pca.mean(0))) < 1e-5, "points must be zero-centered"
assert max(abs(1.0 - word_vectors_pca.std(0))) < 1e-2, "points must have unit variance"
```
#### Let's draw it!
```
import bokeh.models as bm, bokeh.plotting as pl
from bokeh.io import output_notebook
output_notebook()
def draw_vectors(x, y, radius=10, alpha=0.25, color='blue',
width=600, height=400, show=True, **kwargs):
""" draws an interactive plot for data points with auxiliary info on hover """
if isinstance(color, str): color = [color] * len(x)
data_source = bm.ColumnDataSource({ 'x' : x, 'y' : y, 'color': color, **kwargs })
fig = pl.figure(active_scroll='wheel_zoom', width=width, height=height)
fig.scatter('x', 'y', size=radius, color='color', alpha=alpha, source=data_source)
fig.add_tools(bm.HoverTool(tooltips=[(key, "@" + key) for key in kwargs.keys()]))
if show: pl.show(fig)
return fig
draw_vectors(word_vectors_pca[:, 0], word_vectors_pca[:, 1], token=words)
# hover a mouse over there and see if you can identify the clusters
```
### Visualizing neighbors with t-SNE
PCA is nice but it's strictly linear and thus only able to capture coarse high-level structure of the data.
If we instead want to focus on keeping neighboring points near, we could use TSNE, which is itself an embedding method. Here you can read __[more on TSNE](https://distill.pub/2016/misread-tsne/)__.
```
from sklearn.manifold import TSNE
# map word vectors onto 2d plane with TSNE. hint: don't panic it may take a minute or two to fit.
# normalize them just like with PCA
word_tsne = TSNE(n_components=2).fit_transform(word_vectors)
word_tsne = (word_tsne - word_tsne.mean(0)) / \
word_tsne.std(0)
draw_vectors(word_tsne[:, 0], word_tsne[:, 1], color='green', token=words)
```
### Visualizing phrases
Word embeddings can also be used to represent short phrases. The simplest way is to take __an average__ of vectors for all tokens in the phrase with some weights.
This trick is useful for identifying what data you are working with: find out whether there are any outliers, clusters or other artefacts.
Let's try this new hammer on our data!
```
def get_phrase_embedding(phrase):
"""
Convert phrase to a vector by aggregating its word embeddings. See description above.
"""
# 1. lowercase phrase
# 2. tokenize phrase
# 3. average word vectors for all words in tokenized phrase
# skip words that are not in model's vocabulary
# if all words are missing from vocabulary, return zeros
vector = np.zeros([model.vector_size], dtype='float32')
count = 0
words = tokenizer.tokenize(phrase.lower())
for word in words:
if word in model.vocab.keys():
vector += model.get_vector(word)
count += 1
if count > 0:
vector /= count
return vector
vector = get_phrase_embedding("I'm very sure. This never happened to me before...")
assert np.allclose(vector[::10],
np.array([ 0.31807372, -0.02558171, 0.0933293 , -0.1002182 , -1.0278689 ,
-0.16621883, 0.05083408, 0.17989802, 1.3701859 , 0.08655966],
dtype=np.float32))
# let's only consider ~5k phrases for a first run.
chosen_phrases = data[::len(data) // 1000]
# compute vectors for chosen phrases
phrase_vectors = np.array([get_phrase_embedding(phrase) for phrase in chosen_phrases])
assert isinstance(phrase_vectors, np.ndarray) and np.isfinite(phrase_vectors).all()
assert phrase_vectors.shape == (len(chosen_phrases), model.vector_size)
# map vectors into 2d space with pca, tsne or your other method of choice
# don't forget to normalize
phrase_vectors_2d = TSNE().fit_transform(phrase_vectors)
phrase_vectors_2d = (phrase_vectors_2d - phrase_vectors_2d.mean(axis=0)) / phrase_vectors_2d.std(axis=0)
draw_vectors(phrase_vectors_2d[:, 0], phrase_vectors_2d[:, 1],
phrase=[phrase[:50] for phrase in chosen_phrases],
radius=20,)
```
Finally, let's build a simple "similar question" engine with phrase embeddings we've built.
```
# compute vector embedding for all lines in data
data_vectors = np.array([get_phrase_embedding(l) for l in data])
def find_knn(vectors, texts, query, k=10):
query_vector = get_phrase_embedding(query)
cossims = np.matmul(vectors, query_vector)
norms = np.sqrt((query_vector**2).sum() * (vectors**2).sum(axis=1))
cossims = cossims/norms
result_i = np.argpartition(-cossims, range(k))[0:k]
results = [texts[i] for i in result_i]
return results
def find_nearest(query, k=10):
"""
given text line (query), return k most similar lines from data, sorted from most to least similar
similarity should be measured as cosine between query and line embedding vectors
hint: it's okay to use global variables: data and data_vectors. see also: np.argpartition, np.argsort
"""
# YOUR CODE
query_vector = get_phrase_embedding(query)
cossims = np.matmul(data_vectors, query_vector)
norms = np.sqrt((query_vector**2).sum() * (data_vectors**2).sum(axis=-1))
cossims_norm = cossims / norms
result_i = np.argpartition(-cossims_norm, range(k))[:k]
results = [data[i] for i in result_i]
return results
results = find_nearest(query="How do i enter the matrix?", k=10)
print(''.join(results))
assert len(results) == 10 and isinstance(results[0], str)
assert results[0] == 'How do I get to the dark web?\n'
assert results[3] == 'What can I do to save the world?\n'
find_nearest(query="How does Trump?", k=10)
find_nearest(query="Why don't i ask a question myself?", k=10)
```
__Now what?__
* Try running TSNE on all data, not just 1000 phrases
* See what other embeddings are there in the model zoo: `gensim.downloader.info()`
* Take a look at [FastText](https://github.com/facebookresearch/fastText) embeddings
* Optimize find_nearest with locality-sensitive hashing: use [nearpy](https://github.com/pixelogik/NearPy) or `sklearn.neighbors`.
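As a hedged sketch of that last bullet (using exact search from `sklearn.neighbors` rather than true locality-sensitive hashing, and random stand-in vectors instead of `data_vectors`), cosine-based lookup can be delegated to a prebuilt index:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# hypothetical stand-ins for data / data_vectors from above
fake_texts = [f"question {i}" for i in range(500)]
fake_vectors = np.random.RandomState(0).normal(size=(500, 100)).astype('float32')

# build the index once, query many times
index = NearestNeighbors(n_neighbors=10, metric='cosine').fit(fake_vectors)

def find_nearest_indexed(query_vector, k=10):
    # kneighbors returns (distances, indices) sorted by increasing distance
    _, idx = index.kneighbors(query_vector[None, :], n_neighbors=k)
    return [fake_texts[i] for i in idx[0]]

print(find_nearest_indexed(fake_vectors[42])[0])  # the query itself comes back first
```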
```
all_phrase_vectors = np.array([get_phrase_embedding(phrase) for phrase in data])
phrase_vectors_2d = TSNE().fit_transform(all_phrase_vectors)
phrase_vectors_2d = (phrase_vectors_2d - phrase_vectors_2d.mean(axis=0)) / phrase_vectors_2d.std(axis=0)
draw_vectors(phrase_vectors_2d[:, 0], phrase_vectors_2d[:, 1],
phrase=[phrase[:500] for phrase in data],
radius=20,)
import gensim
gensim.downloader.info()
```
## Basic I/O : Implement basic I/O function that can read the data from the dataset and write the results to a file.
```
# Loading required libraries
import pandas as pd
import numpy as np
from itertools import chain, combinations
import time
```
### As part of Basic input, I have created two functions:
Function 1 (read_as_dataframe) - reads the data from a csv file and returns a dataframe
Function 2 (read_as_list) - reads the data from a csv or txt file and returns a list
```
# read the data from the dataset using read_csv from pandas, read stabilized with engine
def read_as_dataframe(filename):
cols = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "10"]# at most 11 columns in the dataset, can be adopted to all datasets
df = pd.read_csv(filename, names = cols, engine = 'python')
return df
# Reading Data - flexible to use txt or csv file which reads the line and strips the unnecessary data, split and form a list.
# The FM creates a list of Transaction Database
def read_as_list(file):
list_out = list()
with open(file) as f:
row_lines = f.readlines()
for i in row_lines:
single_line = i.strip().strip(",")
list_out.append(single_line.split(','))
return list_out
```
#### Reading/Input demonstrated in using the function calls below:
```
# read the file into df by passing filename(parameter)
df = read_as_dataframe("GroceryStore.csv")
df.head()
# read the file to create a list
input_list = read_as_list('GroceryStore.csv')
input_list
# Using the textbook example to demonstrate the functionalities
df_example = read_as_dataframe("ex.csv")
df_example.head()
input_list_example = read_as_list('ex.csv')
input_list_example
```
### Basic Output: Writing a data frame to a file
As part of output:
1. A function is created to write a dataframe into a text file
2. A function writes the FP tree into a file
```
# writing data to a file
def writetoafile(filename,df):
with open(filename, 'w') as f:
df.to_csv(filename)
# writing FP tree to a file
def write_tree(filename,fptree):
file1 = open(filename,"w")
file1.write(str(fptree))
file1.close()
def write_list(filename,list_items):
with open(filename, 'w') as filehandle:
for list_item in list_items:
filehandle.write('%s\n' % list_item)
```
#### Writing as output demonstrated below:
```
# Writing outputs to file
# writing dataframe values into a file
writetoafile('output_file.txt',df)
# writing frequent itemlists into a file
write_list("freq_itemsets_fp",all_frequent_itemsets_fp)
# write FP tree into a file
write_tree("fptree.txt",fptree)
```
## Frequent Itemset : Find all possible 2, 3, 4 and 5-itemsets given the parameter of minimum-support.
The Frequent Itemsets are created using three different algorithms which is implemented in this notebook:
1. Brute-force approach - Creates all possible combinations and prunes using minimum support
2. Apriori approach - Implements Apriori algorithm to generate frequent itemsets
3. FP Growth approach - Implements FP Growth algorithm to generate frequent itemsets
### This task provides the implementation of Brute-Force algorithm to generate frequent itemsets - 2,3,4 and 5 itemsets by taking minimum support as input parameter
```
# I have used a modular approach where the functions can be re-used, which increases scalability and achieves literate coding,
# enhancing the readability of the implementation
# function to calculate the frequency of occurence of the item_sets
def freq(df,item_sets):
count_list = [0] * len(item_sets)
item_list = df.values.tolist()
count = 0
support_set = {}
for i,k in zip(item_sets,range(len(item_sets))):
for j in range(len(df)):
if(set(i).issubset(set(item_list[j]))):
count += 1
count_list[k] = count
count = 0
return count_list
# function to generate frequent items given the item-set, support counts and minimum support
def generate_frequent_itemset(count,comb_list,min_sup):
freq_list = list()
infreq_list = list()
for item,i in zip(comb_list,range(len(comb_list))):
if count[i]>=min_sup:
freq_list.append(item)
else:
infreq_list.append(item)
return freq_list,infreq_list
# The brute force algorithm implemented below takes a dataframe, minimum support and number of itemsets as input and returns
# frequent item-sets as output
def brute_force_frequent_itemset(number_itemset,df,min_sup):
list_comb = list(((df['0'].append(df['1']).append(df['2']).append(df['3']).append(df['4']).append(df['5']).append(df['6']).append(df['7']).append(df['8']).append(df['9']).append(df['10'])).unique()))
list_comb.remove(np.nan)
all_freq_itemlist = list()
# Generating n-itemset by obtaining unique values
for i in range(1,number_itemset+1):
# Generating combinations
comb = combinations(list_comb,i)
comb_list = list(comb)
count = freq(df,comb_list)
freq_list,infreq_list = generate_frequent_itemset(count,comb_list,min_sup)
if freq_list:
all_freq_itemlist.append(freq_list)
return all_freq_itemlist
```
### Illustration of Brute-force approach to generate frequent item-sets:
### The two datasets are used to demonstrate the generation of outputs for all the algorithms.
1. Grocery store provided as part of coursework
2. Textbook example to demonstrate the working of different dataset
```
# Using a textbook example to generate frequent itemsets
# parameter 1 - number of itemsets. example - given 3, the frequent itemsets 1,2,3 are generated
freq_itemsets_example = brute_force_frequent_itemset(3,df_example,2)
freq_itemsets_example
freq_itemsets0 = brute_force_frequent_itemset(1,df,1250)
freq_itemsets0
# Using the dataset provided as part of coursework
# Upto 3-itemsets generation, min_sup is 2300
freq_itemsets1 = brute_force_frequent_itemset(3,df,2300)
freq_itemsets1
freq_itemsets2 = brute_force_frequent_itemset(5,df,280)
freq_itemsets2
freq_itemsets3 = brute_force_frequent_itemset(5,df,280)
freq_itemsets3
freq_itemsets_big = brute_force_frequent_itemset(12,df,50)
freq_itemsets_big
```
## Associated Rule : Find all interesting association rules from the frequent item-sets given the parameter of minimum-confidence.
The association rules are created using only the frequent item-sets generated using various algorithms as mentioned above.
The association function takes a dataframe (database of transactions), frequent_itemsets and the minimum-confidence to generate
interesting association rules
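Before the implementation, here is a tiny worked example (toy transactions, not from the dataset) of the confidence measure used below: confidence(A→B) = support(A∪B) / support(A).

```python
# hypothetical toy transaction database
transactions = [
    {'milk', 'bread'},
    {'milk', 'bread', 'butter'},
    {'milk', 'butter'},
    {'bread'},
]

# support counts: how many transactions contain the itemset
support_milk = sum('milk' in t for t in transactions)                   # 3
support_milk_bread = sum({'milk', 'bread'} <= t for t in transactions)  # 2

# confidence of the rule {milk} -> {bread}
confidence = support_milk_bread / support_milk
print(confidence)  # 2/3
```

With a 50% minimum confidence this rule would be kept; with 70% it would be discarded.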
```
# Frequency - the count value of itemsets is determined by reusing the function created for the brute-force approach; the count
# is used to calculate association rules. The scan of frequent itemsets starts from the largest frequent sets and works down
# to the 1-frequent itemsets to generate all possible association rules given the minimum confidence percentage value.
def associations(df,itemsets,min_confidence_percent,filename='test_association_values.txt'):
file_association = open(filename,'w')
confidence = float(min_confidence_percent)/100
calc_confidence = 0.0
# Determining count of values in frequent itemsets
count = list()
for k in itemsets:
c = freq(df,k)
count.append(c)
# Computing association rules
n = len(itemsets)-1
n1 = n
n2 = n
for i in range(n):
for k in range(len(itemsets[n2])):
item_count = count[n2][k]
item = itemsets[n2][k]
n1 = n2
for l in range(n2):
for j in range(len(itemsets[n1-1])):
lower_item_count = count[n1-1][j]
lower_item = itemsets[n1-1][j]
if set(lower_item).issubset(set(item)):
calc_confidence = item_count/lower_item_count
if calc_confidence >= confidence:
print("rule:"+str(lower_item)+"->"+str((set(item)-set(lower_item)))+"->"+str(calc_confidence*100))
file_association.write("\nrule:"+str(lower_item)+"->"+str((set(item)-set(lower_item)))+"->"+str(calc_confidence*100))
n1 = n1-1
n2 = n2-1
```
### Illustration of Association generation from frequent item-sets:
```
# Using textbook example to produce the Association rules
# parameter - frequent itemsets and minimum confidence
associations(df_example,freq_itemsets_example,50)
# associations(df,freq_itemsets2,50)
# Computing Associations for minimum confidence of 50 percent for 1,2,3,4 and 5 frequent item-sets
associations(df,freq_itemsets2,50)
freq_itemsets4 = brute_force_frequent_itemset(5,df,5000)
freq_itemsets4
associations(df,freq_itemsets4,50)
freq_itemsets5 = brute_force_frequent_itemset(5,df,1250)
freq_itemsets5
associations(df,freq_itemsets5,50)
freq_itemsets6 = brute_force_frequent_itemset(5,df,2456)
freq_itemsets6
associations(df,freq_itemsets6,45)
associations(df,freq_itemsets_big,55)
```
## Task 4
## Apriori Algorithm : Use Apriori algorithm for finding frequent itemsets.
The modular approach is again adopted here: different functions are split out as modules, and the modules are combined to
implement the Apriori algorithm effectively
```
# overriding frozenset's __repr__ for a friendlier display
class MyFrozenSet(frozenset):
def __repr__(self):
return '([{}])'.format(', '.join(map(repr, self)))
# unfreeze and display frozensets(not needed)
def unfreeze(item_sets):
temp = list()
for i in item_sets:
temp.append([MyFrozenSet(j) for j in i])
return temp
# join the sets Lk * Lk
def join(itemset,n):
set1 = set()
set2 = set()
for i in itemset:
for j in itemset:
# joining sets to generate length of size n using all the possible subsets
if( len(i.union(j)) == n ):
set1 = [i.union(j)]
set2 = set2.union(set1)
return set2
# Generate Candidate set by removing infrequent subsets
def candidate(infreq_itemset,joined_set):
flag = 0
candidate_set = set()
for i in joined_set:
flag = 0
for j in infreq_itemset:
if(frozenset.issubset(j,i)):
# if the joined set contains infrequent subset it is flagged to be removed
flag = 1
if flag == 0:
# only the unflagged frequent sets are added into candidate set
candidate_set.add(i)
return candidate_set
# Prune the Candidate set to obtain frequent itemsets
def prune(count,itemset,sup_count=2):
freq_itemset = set()
infreq_itemset = set()
for item,i in zip(itemset,range(len(itemset))):
# if the count is greater than support count, it is added into frequent itemsets
if count[i]>=sup_count:
freq_itemset.add(item)
else:
infreq_itemset.add(item)
return freq_itemset,infreq_itemset
def apriori(number_itemset,df,min_sup):
# Generating 1-itemset by obtaining unique values and pruning using min_support
# Obtaining unique values
comb1_list = list(((df['0'].append(df['1']).append(df['2']).append(df['3']).append(df['4']).append(df['5']).append(df['6']).append(df['7']).append(df['8']).append(df['9']).append(df['10'])).unique()))
comb1_list.remove(np.nan)
comb1_set = set()
for item in comb1_list:
if item:
comb1_set.add(frozenset([item]))
# pruning using min_support
count1 = freq(df,comb1_set)
freq_itemset1,infreq_itemset1 = prune(count1,comb1_set,min_sup)
if number_itemset == 1:
return freq_itemset1
# Generating n-itemset
all_freq_itemset = list()
all_freq_itemset.append(list(freq_itemset1))
freq_itemset = freq_itemset1
infreq_itemset = infreq_itemset1
comb_set = comb1_set
for i in range(2,number_itemset+1):
# joining frequent itemsets
joined_set = join(freq_itemset,i)
# Candidate set is created by removing infrequent itemsets
candidate_set = candidate(infreq_itemset,joined_set)
# support counts of candidate set is obtained
count = freq(df,candidate_set)
# support count is used to prune by comparing with minimum support
freq_itemset,infreq_itemset = prune(count,candidate_set,min_sup)
if freq_itemset:
all_freq_itemset.append(list(freq_itemset))
return all_freq_itemset
```
### Illustration of Apriori algorithm to generate frequent item-sets:
```
f_itemset_apriori_example = apriori(3,df_example,2)
f_itemset_apriori_example
f_itemset_apriori = apriori(5,df,280)
f_itemset_apriori
associations(df,f_itemset_apriori,50)
```
## FP-Growth Algorithm: Use FP-Growth algorithm for finding frequent itemsets.
```
#class of FP TREE node
class TreeNode:
def __init__(self, node_name,count,parentnode):
self.name = node_name
self.count = count
self.node_link = None
self.parent = parentnode
self.children = {}
def __str__(self, level=0):
ret = "\t"*level+repr(self.name)+"\n"
for child in self.children:
ret += (self.children[child]).__str__(level+1)
return ret
def __repr__(self):
return '<tree node representation>'
def increment_counter(self, count):
self.count += count
# To convert initial transaction into frozenset
# Creating a frozen dictionary of Database(transactions) and counting the occurences of transaction - to be used to generate frequent itemsets
def create_frozen_set(database_list):
dict_frozen_set = {}
for Tx in database_list:
if frozenset(Tx) in dict_frozen_set.keys():
dict_frozen_set[frozenset(Tx)] += 1
else:
dict_frozen_set[frozenset(Tx)] = 1
return dict_frozen_set
#The FP Tree is created using ordered sets
def add_tree_nodes(item_set, fptree, header_table, count):
if item_set[0] in fptree.children:
fptree.children[item_set[0]].increment_counter(count)
else:
fptree.children[item_set[0]] = TreeNode(item_set[0], count, fptree)
if header_table[item_set[0]][1] == None:
header_table[item_set[0]][1] = fptree.children[item_set[0]]
else:
add_node_link(header_table[item_set[0]][1], fptree.children[item_set[0]])
if len(item_set) > 1:
add_tree_nodes(item_set[1::], fptree.children[item_set[0]], header_table, count)
#The node link is added
def add_node_link(previous_node, next_node):
while (previous_node.node_link != None):
previous_node = previous_node.node_link
previous_node.node_link = next_node
# Generate Frequent Pattern tree
def generate_FP_tree(dict_frozen_set, min_sup):
# Creating header table - get previous counter using 'get' and then add that value to the row in consideration to obtain count of each unique item in DB
header_table = {}
for frozen_set in dict_frozen_set:
for key_item in frozen_set:
header_table[key_item] = header_table.get(key_item,0) + dict_frozen_set[frozen_set]
# pruning using min_sup to retain only frequent 1-itemsets
for i in list(header_table):
if header_table[i] < min_sup:
del(header_table[i])
# Obtaining only keys which are frequent itemsets
frequent_itemset = set(header_table.keys())
if len(frequent_itemset) == 0:
return None, None
for j in header_table:
header_table[j] = [header_table[j], None]
Tree = TreeNode('Null',1,None)
for item_set,count in dict_frozen_set.items():
frequent_tx = {}
for item in item_set:
if item in frequent_itemset:
frequent_tx[item] = header_table[item][0]
if len(frequent_tx) > 0:
#the transaction itemsets are ordered with respect to support
ordered_itemset = [v[0] for v in sorted(frequent_tx.items(), key=lambda p: p[1], reverse=True)]
#the nodes are updated into tree
add_tree_nodes(ordered_itemset, Tree, header_table, count)
return Tree, header_table
```
#### Mining Frequent item-sets by using generated FP Tree
```
#FP Tree is traversed upwards
def traverse_fptree(leaf_Node, prefix_path):
if leaf_Node.parent != None:
prefix_path.append(leaf_Node.name)
traverse_fptree(leaf_Node.parent, prefix_path)
#returns conditional pattern base(prefix paths)
def find_prefix_path(base_path, tree_node):
Conditional_patterns_base = {}
while tree_node != None:
prefix_path = []
traverse_fptree(tree_node, prefix_path)
if len(prefix_path) > 1:
Conditional_patterns_base[frozenset(prefix_path[1:])] = tree_node.count
tree_node = tree_node.node_link
return Conditional_patterns_base
# Conditional FP Tree and Conditional Pattern Base are recursively mined
def mining(fptree, header_table, min_sup, prefix, frequent_itemset):
FPGen = [v[0] for v in sorted(header_table.items(),key=lambda p: p[1][0])]
for base_path in FPGen:
all_frequentset = prefix.copy()
all_frequentset.add(base_path)
#appending frequent itemset
frequent_itemset.append(all_frequentset)
#obtain conditional pattern bases for itemsets
Conditional_pattern_bases = find_prefix_path(base_path, header_table[base_path][1])
#Conditional FP Tree generation
Conditional_FPTree, Conditional_header = generate_FP_tree(Conditional_pattern_bases,min_sup)
if Conditional_header != None:
mining(Conditional_FPTree, Conditional_header, min_sup, all_frequentset, frequent_itemset)
```
### Illustration of FP Growth algorithm to generate frequent item-sets:
```
# Creating FP Tree and header table for the example dataset
# parameters - input list(frozenset), minimum support count value
fptree_example, header_table_example = generate_FP_tree(create_frozen_set(input_list_example), 2)
# Display of Tree showing it as object - tree node representation
fptree_example
# printing string values of a tree
str(fptree_example)
# printing FP Tree
print(fptree_example)
# Displaying header table
header_table_example
# function call to write FP tree into a file
write_tree("fptreeexample.txt",fptree_example)
# Mining to obtain frequent itemsets
all_frequent_itemsets_example = []
# call function to mine all frequent itemsets
mining(fptree_example, header_table_example, 2, set([]), all_frequent_itemsets_example)
# Display frequent itemsets
all_frequent_itemsets_example
associations(df,[all_frequent_itemsets_example],50)
```
##### Testing on Dataset
```
fptree, header_table = generate_FP_tree(create_frozen_set(input_list), 200)
fptree
str(fptree)
print(fptree)
header_table
# function call to write FP tree into a file
write_tree("fptree.txt",fptree)
# Mining to obtain frequent itemsets
all_frequent_itemsets_fp = []
# call function to mine all frequent itemsets
mining(fptree, header_table, 200, set([]), all_frequent_itemsets_fp)
all_frequent_itemsets_fp
```
## Experiment on the Dataset : Apply your associated rule mining algorithms to the dataset and show some interesting rules.
```
associations(df_example,freq_itemsets_example,50)
associations(df_example,freq_itemsets_example,70)
associations(df_example,freq_itemsets_example,60)
```
##### Testing on dataset
```
associations(df,freq_itemsets2,50)
associations(df,freq_itemsets4,50)
associations(df,freq_itemsets5,45)
associations(df,freq_itemsets6,45)
```
##### A modular coding approach is used so that functions can be reused across the different algorithms. Sufficient comments are added to keep the code readable
## Run-Time Performance
```
# Measuring run-time performance of generating frequent itemsets
#1. Brute force approach
start1 = time.time()
freq_itemsets1 = brute_force_frequent_itemset(5,df,200)
end1 = time.time()
print("The time taken by Brute force algorithm implemented: ")
print(end1-start1)
#2. Apriori approach
start2 = time.time()
f_itemset_apriori = apriori(5,df,200)
end2 = time.time()
print("The time taken by Apriori algorithm implemented: ")
print(end2-start2)
#3. FP Growth approach
start3 = time.time()
fptree, header_table = generate_FP_tree(create_frozen_set(input_list), 200)
all_frequent_itemsets_fp = []
mining(fptree, header_table, 200, set([]), all_frequent_itemsets_fp)
end3 = time.time()
print("The time taken by FP Growth algorithm implemented: ")
print(end3-start3)
all_frequent_itemsets_fp
```
From the above analysis, FP Growth runs faster than the other two algorithms by mining the compressed tree recursively.
Brute Force is the slowest because it enumerates all possible combinations. Apriori is faster than Brute Force because it
uses only prior information (frequent sets) instead of all combinations, but it is generally slower than FP Growth
because it scans the database at each pruning step.
```
# Import pyNBS modules
from pyNBS import data_import_tools as dit
from pyNBS import network_propagation as prop
from pyNBS import pyNBS_core as core
from pyNBS import pyNBS_single
from pyNBS import consensus_clustering as cc
from pyNBS import pyNBS_plotting as plot
# Import other needed packages
import os
import time
import pandas as pd
import numpy as np
from IPython.display import Image
```
# Load Data
First, we must load the somatic mutation and network data for running pyNBS. We will also set an output directory location to save our results.
### Load binary somatic mutation data
The binary somatic mutation data file can be represented in two file formats:
The default format for the binary somatic mutation data file is the ```list``` format. This file format is a 2-column csv or tsv list where the 1st column is a sample/patient and the 2nd column is a gene mutated in that sample/patient. There are no headers in this file format. Loading data with the list format is typically faster than loading data from the matrix format. The following text is the list representation of the matrix shown below.
```
TCGA-04-1638 A2M
TCGA-23-1029 A1CF
TCGA-23-2647 A2BP1
TCGA-24-1847 A2M
TCGA-42-2589 A1CF
```
The ```matrix``` binary somatic mutation data format is the format in which the data for this example are currently represented. This file format is a binary csv or tsv matrix whose rows represent samples/patients and whose columns represent genes. The following table is a small excerpt of a matrix somatic mutation data file:
||A1CF|A2BP1|A2M|
|-|-|-|-|
|TCGA-04-1638|0|0|1|
|TCGA-23-1029|1|0|0|
|TCGA-23-2647|0|1|0|
|TCGA-24-1847|0|0|1|
|TCGA-42-2589|1|0|0|
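For reference, a hedged pandas sketch (the column names `sample` and `gene` are hypothetical) of how a list-format file could be pivoted into the binary matrix form with `pd.crosstab`:

```python
import pandas as pd

# list-format records: (sample, mutated gene)
pairs = pd.DataFrame({
    'sample': ['TCGA-04-1638', 'TCGA-23-1029', 'TCGA-23-2647'],
    'gene':   ['A2M',          'A1CF',         'A2BP1'],
})

# pivot into the binary sample-by-gene matrix; clip() guards against duplicate pairs
sm_matrix = pd.crosstab(pairs['sample'], pairs['gene']).clip(upper=1)
print(sm_matrix)
```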
__Note:__ The default file type is defined as ```'list'```, but if the user would like to specify the 'matrix' type, the user needs to simply pass the string ```'matrix'``` to the ```filetype``` optional parameter (as below). The delimiter for the file is passed similarly to the optional parameter ```delimiter```
For more examples and definitions in the somatic mutation data file format, please see our Github Wiki page:
https://github.com/huangger/pyNBS/wiki/Somatic-Mutation-Data-File-Format
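The relationship between the two formats can be sketched with pandas: pivoting the 2-column list excerpt above yields the binary matrix excerpt. This is only an illustration of the format mapping, not pyNBS's own loader.

```python
import io
import pandas as pd

# The 'list'-format excerpt from above: (sample, mutated gene), tab-delimited, no header
list_text = ("TCGA-04-1638\tA2M\n"
             "TCGA-23-1029\tA1CF\n"
             "TCGA-23-2647\tA2BP1\n"
             "TCGA-24-1847\tA2M\n"
             "TCGA-42-2589\tA1CF\n")
sm_list = pd.read_csv(io.StringIO(list_text), sep="\t", header=None, names=["sample", "gene"])

# Pivot the edge list into the equivalent binary 'matrix' format
sm_mat = pd.crosstab(sm_list["sample"], sm_list["gene"]).clip(upper=1)
print(sm_mat.loc["TCGA-04-1638", "A2M"])   # 1
print(sm_mat.loc["TCGA-04-1638", "A1CF"])  # 0
```

Loading from a real file works the same way, with the file path in place of the `StringIO` buffer.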
```
# pyNBS loads mutation data through its data_import_tools module
from pyNBS import data_import_tools as dit
sm_data_filepath = './Example_Data/Mutation_Files/UCEC_sm_data.txt'
sm_mat = dit.load_binary_mutation_data(sm_data_filepath, filetype='list', delimiter='\t')
```
### Load molecular network
The network file is a 2-column text file representing an unweighted network. Each row represents a single edge in the molecular network.
Notes about the network file:
- The default column delimiter is a tab character '\t' but a different delimiter can be defined by the user here or in the parameter file with the "net_filedelim" parameter.
- The network must not contain duplicate edges (e.g. TP53\tMDM2 is equivalent to MDM2\tTP53)
- The network must not contain self-edges (e.g. TP53\tTP53)
- Only the first two columns of a network file are read as edges for the network; all other columns will be ignored.
- The load_network function also includes options to read in edge- or label-shuffled versions of the network, but by default, these options are turned off.
An excerpt of the first five rows of the PID network file is given below:
```
A1BG A2M
A1BG AKT1
A1BG GRB2
A1BG PIK3CA
A1BG PIK3R1
```
For more examples and definitions of the network file format, please see our Github Wiki page:
https://github.com/huangger/pyNBS/wiki/Molecular-Network-File-Format
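The cleanup rules above (duplicate edges collapse to one, self-edges are dropped) can be sketched directly with networkx, which pyNBS builds on. This is an illustration with a toy edge list, not the ```load_network_file``` implementation itself.

```python
import networkx as nx

# Toy edge list containing a duplicate edge (A2M-A1BG) and a self-edge (TP53-TP53)
edges = [("A1BG", "A2M"), ("A1BG", "AKT1"), ("A2M", "A1BG"), ("TP53", "TP53")]

G = nx.Graph(edges)                         # an undirected Graph collapses the duplicate edge
G.remove_edges_from(nx.selfloop_edges(G))   # strip self-edges explicitly
print(G.number_of_nodes(), G.number_of_edges())  # 4 2
```

Reading from a real file would use `nx.read_edgelist` with the appropriate delimiter in place of the hand-built edge list.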
```
# The only required parameter for this function is the network file path
network_filepath = './Example_Data/Network_Files/CancerSubnetwork.txt'
network = dit.load_network_file(network_filepath)
```
### Setting result output options
The following code is completely optional. It allows the user to pre-define a directory to which intermediate and final results are saved, and establishes a file name prefix for those files in the output directory. It also creates the output directory if it does not already exist. The result of this cell is a dictionary that can be optionally passed to functions to save their results.
**Note:** The key assumption here is that if the user passes ```**save_args``` to a function and it contains a valid directory path in ```outdir```, the result of that particular function call will be saved to the given ```outdir```.
```
# Optional: Setting the output directory for files to be saved in
outdir = './Results/via_notebook/CancerSubnetwork_UCEC/'
# Optional: Creating above output directory if it doesn't already exist
if not os.path.exists(outdir):
os.makedirs(outdir)
# Optional: Setting a filename prefix for all files saved to outdir
job_name = 'CancerSubnetwork_UCEC'
# Constructs dictionary to be passed as "save_args" to functions if output to be saved
save_args = {'outdir': outdir, 'job_name': job_name}
```
# Construct regularization graph for use in network-regularized NMF
In this step, we will construct the graph used in the network-regularized non-negative matrix factorization (netNMF) step of pyNBS. This network is a K-nearest neighbor (KNN) network constructed from the network influence matrix (Vandin et al. 2011) of the molecular network being used to stratify tumor samples. The graph Laplacian of this KNN network (knnGlap) is used as the regularizer in the following netNMF steps. This step uses the ```network_inf_KNN_glap``` function in the pyNBS_core module.
For additional notes on the graph laplacian construction method, please visit our GitHub wiki for this function:
https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_core.network_inf_KNN_glap
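As background, the unnormalized graph Laplacian of an adjacency matrix A is L = D - A, where D is the diagonal degree matrix. A minimal numpy sketch on a toy graph (this illustrates the Laplacian itself, not the influence-matrix KNN construction):

```python
import numpy as np

# Toy adjacency matrix of a small undirected graph (node 0 connected to nodes 1 and 2)
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # unnormalized graph Laplacian
print(L.sum())               # rows of a Laplacian sum to zero -> 0.0
```

The same L = D - A structure underlies the knnGlap regularizer, with A replaced by the KNN network's adjacency matrix.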
---
**Note:** This step is technically optional. No regularization network Laplacian needs to be constructed if the user would like to run the NMF step without a network regularizer. Simply pass ```None``` to the optional parameter ```regNet_glap```, or remove that optional parameter from the ```pyNBS_single()``` function call below. This causes pyNBS to run a non-network-regularized NMF procedure. However, given the implementation of the multiplicative update steps, the results may not exactly match other NMF implementations (e.g. from scikit-learn).
```
# Constructing knnGlap
knnGlap = core.network_inf_KNN_glap(network)
##########################################################################################################
# The resulting matrix can be very large, so we choose not to save the intermediate result here
# To run this function and save the KNN graph Laplacian to the output directory 'outdir' given above:
# Uncomment and run the following line instead:
# knnGlap = core.network_inf_KNN_glap(network, **save_args)
##########################################################################################################
```
# Construct network propagation kernel matrix
Due to the multiple subsampling and propagation steps used in pyNBS, we have found that the algorithm can be sped up significantly for large numbers of subsampling and propagation iterations if a gene-by-gene matrix describing the influence of each gene on every other gene in the network under the random-walk propagation operation is pre-computed. We refer to this matrix as the "network propagation kernel". Here we compute this kernel by propagating all genes in the molecular network independently of one another. The propagation profile of each tumor is then simply the column-sum vector of the kernel restricted to the rows of genes mutated in that tumor, rather than re-running the full network propagation step after each subsampling of the data.
For additional notes on the propagation methods used, please visit our GitHub wiki for this function:
https://github.com/huangger/pyNBS/wiki/pyNBS.network_propagation.network_propagation
### Calibrating the network propagation coefficient
The network propagation coefficient ($\alpha$) is currently set to 0.7 and must lie between 0 and 1. This parameter can be tuned, and changing it may affect the final propagation results. Previous results from [Hofree et al 2013](https://www.nature.com/articles/nmeth.2651) suggest that values between 0.5 and 0.8 produce relatively robust results, but we suspect that the optimal value may depend on certain network properties such as edge density.
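A minimal numpy sketch of the random-walk update underlying this step, F &larr; &alpha;&middot;F&middot;W + (1-&alpha;)&middot;F&#8320;, iterated to convergence on a toy 3-node graph. This illustrates the idea only; the real computation below uses ```prop.network_propagation```.

```python
import numpy as np

def propagate(W_norm, F0, alpha=0.7, tol=1e-8, max_iter=1000):
    """Random-walk propagation: iterate F <- alpha * F @ W + (1 - alpha) * F0."""
    F = F0.copy()
    for _ in range(max_iter):
        F_next = alpha * F @ W_norm + (1 - alpha) * F0
        if np.abs(F_next - F).max() < tol:
            return F_next
        F = F_next
    return F

# Toy 3-node path graph with row-normalized adjacency matrix
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
W = A / A.sum(axis=1, keepdims=True)

# Propagating the identity matrix gives a kernel, as done for the real network below
F = propagate(W, np.eye(3), alpha=0.7)
print(np.allclose(F.sum(axis=1), 1.0))  # row sums are conserved -> True
```

With a row-stochastic W, each update preserves row sums exactly, which is why propagated mutation profiles remain comparable across samples.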
```
# Set or change network propagation coefficient if desired
alpha = 0.7
# Construct identity matrix of network
network_nodes = network.nodes()
network_I = pd.DataFrame(np.identity(len(network_nodes)), index=network_nodes, columns=network_nodes)
# Construct network propagation kernel
kernel = prop.network_propagation(network, network_I, alpha=alpha, symmetric_norm=False)
##########################################################################################################
# The resulting matrix can be very large, so we choose not to save the intermediate result here
# To run this function and save the propagation kernel to the output directory 'outdir' given above,
# Uncomment and run the following two lines instead of the above line:
# save_args['iteration_label']='kernel'
# kernel = prop.network_propagation(network, network_I, alpha=alpha, symmetric_norm=False, **save_args)
##########################################################################################################
```
# Subsampling, propagation, and netNMF
After the pre-computation of the regularization graph laplacian and the network propagation kernel, we perform the following core steps of the NBS algorithm multiple times (default=100x) to produce multiple patient clusterings that will be used in the later consensus clustering step. Each patient clustering is performed with the following steps:
1. **Subsample binary somatic mutation data.** (See the documentation for the [```subsample_sm_mat```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_core.subsample_sm_mat) function for more details.)
2. **Propagate binary somatic mutation data over network.** (See the documentation for the [```network_propagation```](https://github.com/huangger/pyNBS/wiki/pyNBS.network_propagation.network_propagation) or [```network_kernel_propagation```](https://github.com/huangger/pyNBS/wiki/pyNBS.network_propagation.network_kernel_propagation) function for more details.)
3. **Quantile normalize the network-smoothed mutation data.** (See the documentation for the [```qnorm```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_core.qnorm) function for more details.)
4. **Use netNMF to decompose network data into k clusters.** (See the documentation for the [```mixed_netNMF```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_core.mixed_netNMF) function for more details.)
These functions for each step here are wrapped by the [```NBS_single```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_single.NBS_single) function, which calls each step above in sequence to perform a single iteration of the pyNBS algorithm.
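Step 3 above, quantile normalization, maps every sample's propagated values onto a common reference distribution so samples are directly comparable. A minimal pandas/numpy sketch of the idea (a hypothetical helper, not pyNBS's ```qnorm``` function):

```python
import numpy as np
import pandas as pd

def quantile_normalize(df):
    """Map each column's values onto the mean distribution across columns."""
    ranks = df.rank(method="first").astype(int)            # per-column ranks, 1-based
    mean_sorted = np.sort(df.values, axis=0).mean(axis=1)  # mean reference distribution
    return ranks.apply(lambda col: mean_sorted[col.values - 1])

# Two toy "samples" with different value distributions
df = pd.DataFrame({"s1": [5., 2., 3.], "s2": [4., 1., 4.]})
qn = quantile_normalize(df)
print(np.allclose(np.sort(qn["s1"]), np.sort(qn["s2"])))  # identical distributions -> True
```

After normalization both columns contain the same set of values, differing only in the order induced by each sample's original ranking.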
### Number of pyNBS clusters
The default number of clusters constructed by pyNBS is k=3. We can change that explicitly below or in the parameters for [```NBS_single```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_single.NBS_single); in this example we choose 4 clusters. Other parameters, such as the subsampling parameters and the propagation coefficient (when no kernel is pre-computed), can also be changed via \*\*kwargs. \*\*kwargs will also hold the values of \*\*save_args, as in previous functions, if the user would like to save the resulting dimension-reduced patient profiles. All \*\*kwargs definitions are documented on the Github wiki page for [```NBS_single```](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_single.NBS_single)
```
clusters = 4
```
### Number of pyNBS iterations
The consensus clustering step of the pyNBS algorithm improves if the data is subsampled and re-clustered multiple times. By default we perform this operation (```niter```) 100 times. The number can be reduced for a faster run time but may produce less robust results; increasing ```niter``` increases overall runtime but should produce more robust cluster assignments during consensus clustering.
```
# Set the number of times to perform pyNBS core steps
niter = 100
# Optional: Saving the intermediate propagation step (from subsampled data) to file
# save_args['save_prop'] = True
# Run pyNBS 'niter' number of times
Hlist = []
for i in range(niter):
    netNMF_time = time.time()
    # Run pyNBS core steps and save resulting H matrix to Hlist
    Hlist.append(pyNBS_single.NBS_single(sm_mat, knnGlap, propNet=network, propNet_kernel=kernel, k=clusters))
    ##########################################################################################################
    # Optional: If the user is saving intermediate outputs (propagation results or H matrices),
    # a different 'iteration_label' should be used for each call of pyNBS_single().
    # Otherwise, the user will overwrite each H matrix at each call of pyNBS_single()
    # Uncomment and run the two lines below to save intermediate steps instead of the previous line
    # save_args['iteration_label']=str(i+1)
    # Hlist.append(pyNBS_single.NBS_single(sm_mat, propNet=network, propNet_kernel=kernel, regNet_glap=knnGlap,
    #                                      k=clusters, **save_args))
    ##########################################################################################################
    # Report run time of each pyNBS iteration
    t = time.time() - netNMF_time
    print('NBS iteration:', i + 1, 'complete:', t, 'seconds')
```
# Consensus Clustering
In order to produce robust patient clusters, the subsampling and re-clustering steps performed above are needed. After the patient data has been subsampled multiple times (default ```niter```=100), we apply the [```consensus_hclust_hard```](https://github.com/huangger/pyNBS/wiki/pyNBS.consensus_clustering.consensus_hclust_hard) function in the consensus_clustering module. It accepts a list of pandas dataframes, as generated in the previous step. If the H matrices were generated separately and saved to a directory, the user will need to manually load those H matrices into a Python list before passing the list to the function below.
For more information on how the consensus clustering is performed, please see our wiki page on this function:
https://github.com/huangger/pyNBS/wiki/pyNBS.consensus_clustering.consensus_hclust_hard
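Consensus clustering builds on a co-clustering frequency matrix: for each pair of samples, the fraction of iterations in which they land in the same cluster. A minimal sketch of that idea (a hypothetical helper operating on label vectors, not the pyNBS implementation, which works on the H matrices directly):

```python
import numpy as np

def coclustering_matrix(assignments):
    """Fraction of iterations in which each pair of samples co-clusters."""
    assignments = np.asarray(assignments)  # shape: (n_iter, n_samples)
    n_iter, n = assignments.shape
    C = np.zeros((n, n))
    for labels in assignments:
        C += (labels[:, None] == labels[None, :]).astype(float)
    return C / n_iter

# 3 toy iterations of cluster labels for 4 samples
A = [[0, 0, 1, 1],
     [0, 0, 1, 0],
     [1, 1, 0, 0]]
C = coclustering_matrix(A)
print(C[0, 1], C[0, 3])  # 1.0 0.3333333333333333
```

Hierarchically clustering this symmetric matrix (as ```consensus_hclust_hard``` does with its own table) then yields the final hard cluster assignments.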
```
NBS_cc_table, NBS_cc_linkage, NBS_cluster_assign = cc.consensus_hclust_hard(Hlist, k=clusters, **save_args)
```
# Co-Clustering Map
To visualize the clusters formed by the pyNBS algorithm, we can plot a similarity map using the objects created in the previous step. This step uses the [`cluster_color_assign`](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_plotting.cluster_color_assign) and [`plot_cc_map()`](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_plotting.plot_cc_map) functions in the [`pyNBS_plotting`](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_plotting) module.
```
# Assign colors to clusters from pyNBS
pyNBS_UCEC_clust_cmap = plot.cluster_color_assign(NBS_cluster_assign, name='pyNBS UCEC Cluster Assignments')
# Plot and save co-cluster map figure
plot.plot_cc_map(NBS_cc_table, NBS_cc_linkage, col_color_map=pyNBS_UCEC_clust_cmap, **save_args)
Image(filename = save_args['outdir']+save_args['job_name']+'_cc_map.png', width=600, height=600)
```
# Survival analysis
To determine if the patient clusters are prognostically relevant, we perform a standard survival analysis using a multi-class logrank test to evaluate the significance of survival separation between patient clusters. This data is plotted as a Kaplan-Meier plot using the [`cluster_KMplot()`](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_plotting.cluster_KMplot) function in the [`pyNBS_plotting`](https://github.com/huangger/pyNBS/wiki/pyNBS.pyNBS_plotting) module.
In order to plot the survival differences between clusters, we will need to load survival data for each patient. This data was extracted from TCGA clinical data. The survival data is given in a 5-column delimited table with the specific headings described below (the columns must be in the order shown). The following is an example of a few lines of a survival table:
||vital_status|days_to_death|days_to_last_followup|overall_survival|
|-|-|-|-|-|
|TCGA-2E-A9G8|0|0|1065|1065|
|TCGA-A5-A0GI|0|0|1750|1750|
|TCGA-A5-A0GM|0|0|1448|1448|
|TCGA-A5-A1OK|0|0|244|244|
|TCGA-A5-AB3J|0|0|251|251|
Additional details on the survival data file format are also described on our Github wiki at:
https://github.com/huangger/pyNBS/wiki/Patient-Survival-Data-File-Format
Note: By default, pyNBS draws no survival curves, since survival data is not a required parameter. The path to valid survival data must be defined explicitly.
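As a sketch of the estimator underlying the Kaplan-Meier plot drawn below (pyNBS handles this internally via ```cluster_KMplot```; the helper and toy data here are purely illustrative):

```python
import numpy as np

def kaplan_meier(durations, events):
    """Kaplan-Meier survival estimate S(t) at each distinct event time."""
    durations = np.asarray(durations)
    events = np.asarray(events)
    times, surv, S = [], [], 1.0
    for t in np.unique(durations):
        d = int(((durations == t) & (events == 1)).sum())  # deaths at time t
        n = int((durations >= t).sum())                    # number at risk just before t
        if d:
            S *= 1.0 - d / n
            times.append(int(t))
            surv.append(S)
    return times, surv

# Toy data: durations in days; event=1 means death observed, 0 means censored
times, surv = kaplan_meier([5, 10, 10, 15], [1, 1, 0, 1])
print(times, [round(s, 3) for s in surv])  # [5, 10, 15] [0.75, 0.5, 0.0]
```

The logrank test then compares these per-cluster survival curves for significant separation.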
```
# Load survival Data
surv_data = './Example_Data/Clinical_Files/UCEC.clin.merged.surv.txt'
# Plot KM Plot for patient clusters
plot.cluster_KMplot(NBS_cluster_assign, surv_data, delimiter=',', **save_args)
Image(filename = save_args['outdir']+save_args['job_name']+'_KM_plot.png', width=600, height=600)
```
```
%load_ext autoreload
%autoreload 2
# Add parent directory into system path
import sys, os
sys.path.insert(1, os.path.abspath(os.path.normpath('..')))
import torch
from torch import nn
from torch.nn.init import calculate_gain
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f'CUDA {i}: {torch.cuda.get_device_name(i)}')
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
torch.set_default_dtype(torch.float32)
from models import MLP_GPINN_LambdaAdaptive
net = MLP_GPINN_LambdaAdaptive(N_layers=8, width=32, useRatioOfRatio=True, activation=nn.ELU()).to(device)
import os
from utils.dataset import ImplicitDataset, RandomMeshSDFDataset
dataset_name = '../datasets/box_1f0_gyroid_4pi'
output_stl = dataset_name+'.stl'
#train_dataset = ImplicitDataset.from_file(file=dataset_name+'_train.npz', device=device)
train_dataset = RandomMeshSDFDataset(output_stl, sampling_method='importance', M=int(1e6), W=10, device=device)
print(train_dataset)
use_random_sdf = isinstance(train_dataset, RandomMeshSDFDataset)
points = train_dataset.points if use_random_sdf else train_dataset.pde_points
sdfs = train_dataset.sdfs if use_random_sdf else train_dataset.bc_sdfs
from utils.optimizer import CallbackScheduler
import torch_optimizer
# Optimization
## ADA
optimizer=torch_optimizer.AdaBound(net.parameters(), lr=0.001, betas=(0.9, 0.999))
#optimizer=torch.optim.Adam(net.parameters(), lr=0.001, betas=(0.9, 0.999), eps=1e-6, amsgrad=False)
lr_scheduler = CallbackScheduler([
    # CallbackScheduler.reduce_lr(0.2),
    # CallbackScheduler.reduce_lr(0.2),
    # CallbackScheduler.init_LBFGS(
    #     lr=1, max_iter=20, max_eval=40,
    #     tolerance_grad=1e-5, tolerance_change=1e-9,
    #     history_size=100,
    #     line_search_fn=None
    # ),
    # CallbackScheduler.reduce_lr(0.2)
], optimizer=optimizer, model=net, eps=1e-7, patience=300)
#torch.autograd.set_detect_anomaly(True)
max_epochs = 2500
PRINT_EVERY_EPOCH = 100
points.requires_grad_(True)
try:
    for epoch in range(max_epochs):
        # Training
        optimizer.zero_grad()
        y = net(points)
        loss = net.loss(y, points, points, sdfs)
        loss.backward(retain_graph=True)
        lr_scheduler.optimizer.step(lambda: loss)
        lr_scheduler.step_when((epoch % 500) == 499)
        lr_scheduler.step_loss(loss)
        if epoch % 20 == 19:
            y = net(points)
            net.adaptive_lambda(y, points, points, sdfs)
        if epoch % PRINT_EVERY_EPOCH == 0:
            print(f'#{epoch} Loss: {net._loss_PDE:.6f}, {net._loss_SDF:.6f}, {net._loss_gradient_PDE:.6f}')
except KeyboardInterrupt as e:
    print('Bye bye')
from utils import SDFVisualize, plot_model_weight
visualize = SDFVisualize(z_level=0, step=0.05, offset=30, nums=100, device=device)
visualize.from_nn(net, bounds_from_mesh=output_stl)
visualize.from_mesh(output_stl)
```
# Info Extraction
It is much easier to extract model information from a PyTorch module than from ONNX; ONNX does not expose output shapes.
```
import onnx
# Load the ONNX model
model = onnx.load("onnx/vgg19.onnx")
# Check that the IR is well formed
onnx.checker.check_model(model)
# Print a human readable representation of the graph
print(onnx.helper.printable_graph(model.graph))
#import onnx_caffe2.backend as backend
import onnx_tf.backend as backend
import numpy as np
import time
```
## Find Graph Edge (each link)
A Node is an operation, indexed from 0; an Entity is an object, indexed from u'1' (meaning %1).
Basically, after iterating over every node once, every Entity will have been visited.
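These two index maps can be illustrated without loading a real model, using stand-in node objects (hypothetical stand-ins for `model.graph.node` entries):

```python
from types import SimpleNamespace

# Two stand-in nodes: Conv consumes %1 and produces %2; Relu consumes %2 and produces %3
nodes = [
    SimpleNamespace(op_type="Conv", input=["1"], output=["2"]),
    SimpleNamespace(op_type="Relu", input=["2"], output=["3"]),
]

Node2nextEntity, Entity2nextNode = {}, {}
for node_idx, node in enumerate(nodes):
    for e in node.input:
        Entity2nextNode.setdefault(e, node_idx)   # first node consuming this Entity
    for e in node.output:
        Node2nextEntity.setdefault(node_idx, e)   # first Entity produced by this Node

print(Node2nextEntity)   # {0: '2', 1: '3'}
print(Entity2nextNode)   # {'1': 0, '2': 1}
```

Following Node &rarr; output Entity &rarr; consuming Node chains these maps into a graph walk, which is exactly what the pattern search below does.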
```
def get_graph_order():
    Node2nextEntity = {}
    Entity2nextNode = {}
    for Node_idx, node in enumerate(model.graph.node):
        # node input
        for Entity_idx in node.input:
            if not Entity_idx in Entity2nextNode.keys():
                Entity2nextNode.update({Entity_idx: Node_idx})
        # node output
        for Entity_idx in node.output:
            if not Node_idx in Node2nextEntity.keys():
                Node2nextEntity.update({Node_idx: Entity_idx})
    return Node2nextEntity, Entity2nextNode

Node2nextEntity, Entity2nextNode = get_graph_order()
len(Node2nextEntity), len(Entity2nextNode)
import pickle
pickle.dump(Node2nextEntity,open('onnx/vgg19_Node2nextEntity_dict.pkl','wb'))
pickle.dump(Entity2nextNode,open('onnx/vgg19_Entity2nextNode_dict.pkl','wb'))
```
## Get Subgroup
```
import pickle
Node2nextEntity = pickle.load(open('onnx/vgg19_Node2nextEntity_dict.pkl','rb'))
Entity2nextNode = pickle.load(open('onnx/vgg19_Entity2nextNode_dict.pkl','rb'))
def find_sequencial_nodes(search_target=['Conv', 'Add', 'Relu', 'MaxPool'], if_print=False):
    found_nodes = []
    for i, node in enumerate(model.graph.node):
        if if_print: print("\nnode[{}] ...".format(i))
        n_idx = i  # init
        is_fit = True
        for tar in search_target:
            try:
                assert model.graph.node[n_idx].op_type == tar  # check this node
                if if_print: print("node[{}] fit op_type [{}]".format(n_idx, tar))
                e_idx = Node2nextEntity[n_idx]  # find next Entity
                n_idx = Entity2nextNode[e_idx]  # find next Node
                #if if_print: print(e_idx, n_idx)
            except:
                is_fit = False
                if if_print: print("node[{}] doesn't fit op_type [{}]".format(n_idx, tar))
                break
        if is_fit:
            if if_print: print("node[{}] ...fit!".format(i))
            found_nodes.append(i)
        else:
            if if_print: print("node[{}] ...NOT fit!".format(i))
    if if_print: print("\nNode{} fit the matching pattern".format(found_nodes))
    return found_nodes
find_sequencial_nodes(search_target=['Conv', 'Add', 'Relu'], if_print = True)
find_sequencial_nodes(search_target=['Conv', 'Add', 'Relu', 'MaxPool'], if_print = False)
import itertools
def get_permutations(a):
    p = []
    for r in range(len(a) + 1):
        c = list(itertools.combinations(a, r))
        for cc in c:
            p += list(itertools.permutations(cc))
    return p
#a = [4,5,6]
#get_permutations(a)
search_head = ['Conv']
followings = ['Add', 'Relu', 'MaxPool']
search_targets = [ search_head+list(foll) for foll in get_permutations(followings)]
search_targets
matchings = [find_sequencial_nodes(search_target) for search_target in search_targets]
for i, matching in enumerate(matchings):
    if matching != []:
        print("\nsearch:{}, \nget matching node:{}".format(search_targets[i], matching))
```
<a href="https://colab.research.google.com/github/TheoPantaz/Motor-Imagery-Classification-with-Tensorflow-and-MNE/blob/master/Motor_Imagery_clsf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Install mne
```
!pip install mne
```
Import libraries
```
import scipy.io as sio
import sklearn.preprocessing as skpr
import mne
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
```
Import data
```
from google.colab import drive
drive.mount('/content/drive')
def import_from_mat(filename):
    dataset = sio.loadmat(filename, chars_as_strings=True)
    return dataset['EEG'], dataset['LABELS'].flatten(), dataset['Fs'][0][0], dataset['events'].T
filename = '/content/drive/My Drive/PANTAZ_s2'
EEG, LABELS, Fs, events = import_from_mat(filename)
```
Normalize data
```
def standardize(data):
    scaler = skpr.StandardScaler()
    return scaler.fit_transform(data)
EEG = standardize(EEG)
```
Create mne object
```
channel_names = ['c1', 'c2', 'c3', 'c4', 'cp1', 'cp2', 'cp3', 'cp4']
channel_type = 'eeg'
def create_mne_object(EEG, channel_names, channel_type):
    info = mne.create_info(channel_names, Fs, ch_types=channel_type)
    raw = mne.io.RawArray(EEG.T, info)
    return raw
raw = create_mne_object(EEG, channel_names, channel_type)
```
filtering
```
def filtering(raw, low_freq, high_freq):
    # Notch filtering of line noise and its harmonic
    freqs = (50, 100)
    raw = raw.notch_filter(freqs=freqs)
    # Apply band-pass filter
    raw.filter(low_freq, high_freq, fir_design='firwin', skip_by_annotation='edge')
    return raw
low_freq = 7.
high_freq = 30.
filtered = filtering(raw, low_freq, high_freq)
```
Epoching the data
> IM_dur = duration of original epoch
> last_start_of_epoch : at what point(percentage) of the original epoch will the last new epoch start
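The sub-epoch arithmetic described above can be sketched with toy numbers (the values here are hypothetical; `Fs` comes from the loaded data, and the real `step` below is much smaller):

```python
import numpy as np

# Hypothetical values: 250 Hz sampling, 4 s epochs, 10% stride, last sub-epoch starts at 50%
Fs, IM_dur_s, step_frac, last_frac = 250, 4, 0.1, 0.5
IM_dur = IM_dur_s * Fs                        # 1000 samples per original epoch
step = int(step_frac * IM_dur)                # 100-sample stride between sub-epochs
last = int(last_frac * IM_dur)                # last sub-epoch starts 500 samples in
starts = np.arange(3000, 3000 + last, step)   # sub-epoch onsets for an event at sample 3000
print(starts.tolist())  # [3000, 3100, 3200, 3300, 3400]
```

Each original event thus fans out into several overlapping sub-epochs, all carrying the original event's label.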
```
def Epoch_Setup(events, IM_dur, step, last_start_of_epoch):
    IM_dur = int(IM_dur * Fs)
    step = int(step * IM_dur)
    last_start_of_epoch = int(last_start_of_epoch * IM_dur)
    print(last_start_of_epoch)
    steps_sum = int(last_start_of_epoch / step)
    new_events = [[], [], []]
    for index in events:
        new_events[0].extend(np.arange(index[0], index[0] + last_start_of_epoch, step))
        new_events[1].extend([0] * steps_sum)
        new_events[2].extend([index[-1]] * steps_sum)
    new_events = np.array(new_events).T
    return new_events

def Epochs(data, events, tmin, tmax):
    epochs = mne.Epochs(data, events=events, tmin=tmin, tmax=tmax, preload=True, baseline=None, proj=True)
    epoched_data = epochs.get_data()
    labels = epochs.events[:, -1]
    return epoched_data, labels
IM_dur = 4
step = 1/250
last_start_of_epoch = 0.5
tmix = -1
tmax = 2
new_events = Epoch_Setup(events, IM_dur, step, last_start_of_epoch)
epoched_data, labels = Epochs(filtered, new_events, tmix, tmax)
```
Split training and testing data
```
def data_split(data, labels, split):
    split = int(split * data.shape[0])
    X_train = data[:split]
    X_test = data[split:]
    Y_train = labels[:split]
    Y_test = labels[split:]
    return X_train, X_test, Y_train, Y_test
split = 0.5
X_train, X_test, Y_train, Y_test = data_split(epoched_data, labels, split)
print(X_train.shape)
print(Y_train.shape)
```
CSP fit and transform
```
components = 8
csp = mne.decoding.CSP(n_components=components, reg='oas', log = None, norm_trace=True)
X_train = csp.fit_transform(X_train, Y_train)
X_test = csp.transform(X_test)
```
Data reshape for Tensorflow model
> Create batches for LSTM
```
def reshape_data(X_train, X_test, labels, final_reshape):
    X_train = np.reshape(X_train, (int(X_train.shape[0] / final_reshape), final_reshape, X_train.shape[-1]))
    X_test = np.reshape(X_test, (int(X_test.shape[0] / final_reshape), final_reshape, X_test.shape[-1]))
    n_labels = []
    for i in range(0, len(labels), final_reshape):
        n_labels.append(labels[i])
    Labels = np.array(n_labels)
    Y_train = Labels[:X_train.shape[0]] - 1
    Y_test = Labels[X_train.shape[0]:] - 1
    return X_train, X_test, Y_train, Y_test
reshape_factor = int(last_start_of_epoch / step)
final_reshape = int(reshape_factor)
X_train, X_test, Y_train, Y_test = reshape_data(X_train, X_test, labels, final_reshape)
```
Create tensorflow model
```
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(128, input_shape=[None, X_train.shape[-1]], return_sequences=True),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer=tf.keras.optimizers.Adam(lr = 0.0001),metrics=['accuracy'])
model.summary()
```
Model fit
```
history = model.fit(X_train, Y_train, epochs= 50, batch_size = 25, validation_data=(X_test, Y_test), verbose=1)
```
Accuracy and plot loss
```
%matplotlib inline
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'r', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
Running classifier
```
tmin = -1
tmax = 4
epoched_data_running, labels_running = Epochs(filtered, events, tmin, tmax)
split = 0.5
split = int(split * epoched_data_running.shape[0])
X_test_running = epoched_data_running[split:]
Y_test_running = LABELS[split:-1] - 1
w_length = int(Fs * 1.5) # running classifier: window length
w_step = int(Fs/250) # running classifier: window step size
w_start = np.arange(0, X_test_running.shape[2] - w_length, w_step)
final_reshape = int(reshape_factor/4)
scores = []
batch_data = []
for i, n in enumerate(w_start):
    data = csp.transform(X_test_running[..., n:n + w_length])
    batch_data.append(data)
    if (i + 1) % final_reshape == 0:
        batch_data = np.transpose(np.array(batch_data), (1, 0, 2))
        scores.append(model.evaluate(batch_data, Y_test_running))
        batch_data = []
scores = np.array(scores)
w_times = (np.arange(0, X_test_running.shape[2] - w_length, final_reshape * w_step) + w_length / 2.) / Fs + tmin
w_times = w_times[:-1]
plt.figure()
plt.plot(w_times, scores[:,1], label='Score')
plt.axvline(0, linestyle='--', color='k', label='Onset')
plt.axhline(0.5, linestyle='-', color='k', label='Chance')
plt.xlabel('time (s)')
plt.ylabel('classification accuracy')
plt.title('Classification score over time')
plt.legend(loc='lower right')
plt.show()
```
```
import spacy
import numpy as np
import pandas as pd
from stopwords import ENGLISH_STOP_WORDS
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
en_nlp = spacy.en.English()
def spacy_get_vec(sentence):
    vec = np.zeros(300)
    doc = en_nlp(sentence)
    for word in doc:
        if word.lower_ in ENGLISH_STOP_WORDS:
            continue
        vec += word.vector
    return vec
from sklearn.feature_extraction.text import TfidfVectorizer
lines = open('./class.txt').readlines()  # needed here, before the vectorizer is fit
vectorizer = TfidfVectorizer(stop_words=ENGLISH_STOP_WORDS)
vectorizer.fit_transform([''.join(line.split(',')[0]) for line in lines])
vectorizer.stop_words_
def get_idf(sentence):
    score = 1.0
    for word in sentence.split():
        if word[-1] == '\n' or word[-1] == ',' or word[-1] == '.' or word[-1] == '!':
            word = word[:-1]
        if word not in vectorizer.vocabulary_:
            continue
        index = vectorizer.vocabulary_[word]
        score = score / vectorizer.idf_[index]
    return score
lines = open('./class.txt').readlines()
vecs = []
intents = []
idfs = []
for line in lines:
    tokens = line.split(',')
    sentence = tokens[0]
    intent = tokens[1]
    if intent[-1] == '\n':
        intent = intent[:-1]
    vecs.append(spacy_get_vec(sentence))
    intents.append(intent)
    #idfs.append(get_idf(sentence))
df = pd.DataFrame(vecs, columns=['vec_%d' % i for i in range(300)])
#df['idf'] = idfs
df['intents'] = intents
df.intents = df.intents.astype('category')
from sklearn.utils import shuffle
df = shuffle(df)
df.head()
X = df.iloc[:, :-1].values
y = df.iloc[:,-1:].values.ravel()
from sklearn.model_selection import train_test_split
X_train,X_val,y_train,y_val = train_test_split(X, y, test_size=0.20)
from sklearn.linear_model import LogisticRegression
logit_model = LogisticRegression(C=5.0, class_weight={'intent': 1.2, 'non_intent': 0.8})
logit_model.fit(X_train, y_train)
print(logit_model.score(X_train, y_train))
print(logit_model.score(X_val, y_val))
sent = 'it looks cloudy'
#gradboost.predict_proba(np.append(spacy_get_vec(sent), get_idf(sent)))
logit_model.predict_proba(spacy_get_vec(sent))
from sklearn.ensemble import GradientBoostingClassifier
gradboost = GradientBoostingClassifier(n_estimators=500, max_depth=25, max_features='log2')
gradboost.fit(X_train, y_train)
print(gradboost.score(X_train, y_train))
print(gradboost.score(X_val, y_val))
sent = 'it looks cloudy'
#gradboost.predict_proba(np.append(spacy_get_vec(sent), get_idf(sent)))
gradboost.predict_proba(spacy_get_vec(sent))
gradboost.classes_
from sklearn.svm import SVC
svc = SVC(kernel='linear', degree=2, probability=True)
svc.fit(X_train, y_train)
print(svc.score(X_train, y_train))
print(svc.score(X_val, y_val))
sent = 'i need to fly home'
#gradboost.predict_proba(np.append(spacy_get_vec(sent), get_idf(sent)))
svc.predict_proba(spacy_get_vec(sent))
sent = 'it appears dark outside'
svc.predict_proba(spacy_get_vec(sent))
sent = 'my name is Gopal'
svc.predict_proba(spacy_get_vec(sent))
sent = 'it looks cloudy'
svc.predict_proba(spacy_get_vec(sent))
from sklearn.neural_network import MLPClassifier
nn = MLPClassifier(hidden_layer_sizes=(256, 128, 2), activation='tanh', learning_rate='adaptive', solver='lbfgs', max_iter=1000, )
nn.fit(X_train, y_train)
print(nn.score(X_train, y_train))
print(nn.score(X_val, y_val))
sent = 'I have to fly home'
nn.predict_proba(spacy_get_vec(sent))
sent = 'my name is Gopal'
nn.predict_proba(spacy_get_vec(sent))
sent = 'it looks cloudy'
nn.predict_proba(spacy_get_vec(sent))
import joblib
joblib.dump(svc, 'class.pkl')
```
# Titanic Data Science Solutions
### This notebook is a companion to the book [Data Science Solutions](https://www.amazon.com/Data-Science-Solutions-Startup-Workflow/dp/1520545312).
The notebook walks us through a typical workflow for solving data science competitions at sites like Kaggle.
There are several excellent notebooks to study data science competition entries. However, many will skip some of the explanation of how the solution is developed, as these notebooks are developed by experts for experts. The objective of this notebook is to follow a step-by-step workflow, explaining each step and the rationale for every decision we take during solution development.
## Workflow stages
The competition solution workflow goes through seven stages described in the Data Science Solutions book.
1. Question or problem definition.
2. Acquire training and testing data.
3. Wrangle, prepare, cleanse the data.
4. Analyze, identify patterns, and explore the data.
5. Model, predict and solve the problem.
6. Visualize, report, and present the problem solving steps and final solution.
7. Supply or submit the results.
The workflow indicates the general sequence in which each stage may follow the other. However, there are use cases with exceptions.
- We may combine multiple workflow stages. We may analyze by visualizing data.
- Perform a stage earlier than indicated. We may analyze data before and after wrangling.
- Perform a stage multiple times in our workflow. The Visualize stage may be used multiple times.
- Drop a stage altogether. We may not need the supply stage to productize or service-enable our dataset for a competition.
## Question and problem definition
Competition sites like Kaggle define the problem to solve or questions to ask while providing the datasets for training your data science model and testing the model results against a test dataset. The question or problem definition for Titanic Survival competition is [described here at Kaggle](https://www.kaggle.com/c/titanic).
> Knowing from a training set of samples listing passengers who survived or did not survive the Titanic disaster, can our model determine, based on a given test dataset not containing the survival information, whether these passengers in the test dataset survived or not?
We may also want to develop some early understanding about the domain of our problem. This is described on the [Kaggle competition description page here](https://www.kaggle.com/c/titanic). Here are the highlights to note.
- On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. That translates to a 32% survival rate.
- One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew.
- Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class.
## Workflow goals
The data science solutions workflow solves for seven major goals.
**Classifying.** We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.
**Correlating.** One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking, is there a [correlation](https://en.wikiversity.org/wiki/Correlation) between a feature and the solution goal? As the feature values change, does the solution state change as well, and vice versa? This can be tested both for numerical and categorical features in the given dataset. We may also want to determine correlation among features other than survival for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.
**Converting.** For the modeling stage, one needs to prepare the data. Depending on the choice of model algorithm, one may require all features to be converted to numerical equivalent values; for instance, converting text categorical values to numeric values.
**Completing.** Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.
**Correcting.** We may also analyze the given training dataset for errors or possibly inaccurate values within features and try to correct these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contributing to the analysis or may significantly skew the results.
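As a quick illustration of the outlier detection mentioned above, here is a minimal sketch using the common IQR (interquartile range) rule. The values are made up for illustration; they are not the actual Fare column.

```
import numpy as np

# Made-up example values, not the actual Titanic Fare column
fares = np.array([7.25, 8.05, 13.0, 26.0, 512.33])

# IQR rule: values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are flagged as outliers
q1, q3 = np.percentile(fares, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = fares[(fares < lower) | (fares > upper)]
print(outliers)  # the $512 fare stands out
```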
**Creating.** Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, completeness goals.
**Charting.** How to select the right visualization plots and charts depending on nature of the data and the solution goals.
## Refactor Release 2017-Jan-29
We are significantly refactoring the notebook based on (a) comments received by readers, (b) issues in porting the notebook from Jupyter kernel (2.7) to Kaggle kernel (3.5), and (c) review of a few more best practice kernels.
### User comments
- Combine training and test data for certain operations like converting titles across dataset to numerical values. (thanks @Sharan Naribole)
- Correct observation - nearly 30% of the passengers had siblings and/or spouses aboard. (thanks @Reinhard)
- Correctly interpreting logistic regression coefficients. (thanks @Reinhard)
### Porting issues
- Specify plot dimensions, bring legend into plot.
### Best practices
- Performing feature correlation analysis early in the project.
- Using multiple plots instead of overlays for readability.
```
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
```
## Acquire data
The Python Pandas package helps us work with our datasets. We start by acquiring the training and testing datasets into Pandas DataFrames. We also combine these datasets to run certain operations on both datasets together.
```
train_df = pd.read_csv('../input/train.csv')
test_df = pd.read_csv('../input/test.csv')
combine = [train_df, test_df]
```
## Analyze by describing data
Pandas also helps describe the datasets, answering the following questions early in our project.
**Which features are available in the dataset?**
Note the feature names for directly manipulating or analyzing them. These feature names are described on the [Kaggle data page here](https://www.kaggle.com/c/titanic/data).
```
print(train_df.columns.values)
```
**Which features are categorical?**
These values classify the samples into sets of similar samples. Within categorical features, are the values nominal, ordinal, ratio, or interval based? Among other things this helps us select the appropriate plots for visualization.
- Categorical: Survived, Sex, and Embarked. Ordinal: Pclass.
**Which features are numerical?**
These values change from sample to sample. Within numerical features, are the values discrete, continuous, or timeseries based? Among other things this helps us select the appropriate plots for visualization.
- Continuous: Age, Fare. Discrete: SibSp, Parch.
```
# preview the data
train_df.head()
```
**Which features are mixed data types?**
Numerical and alphanumeric data within the same feature. These are candidates for the correcting goal.
- Ticket is a mix of numeric and alphanumeric data types. Cabin is alphanumeric.
**Which features may contain errors or typos?**
This is harder to review for a large dataset; however, reviewing a few samples from a smaller dataset may just tell us outright which features may require correcting.
- The Name feature may contain errors or typos, as there are several ways used to describe a name, including titles, round brackets, and quotes used for alternative or short names.
```
train_df.tail()
```
**Which features contain blank, null or empty values?**
These will require correcting.
- Cabin > Age > Embarked features contain a number of null values in that order for the training dataset.
- Cabin > Age features are incomplete in case of the test dataset.
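A quick way to surface these null counts per column is `isnull().sum()`. A minimal sketch on a small made-up frame (the real call would be `train_df.isnull().sum()`):

```
import numpy as np
import pandas as pd

# Tiny made-up frame mimicking the incomplete columns; not the real dataset
df = pd.DataFrame({
    'Age': [22.0, np.nan, 26.0, np.nan],
    'Cabin': [np.nan, 'C85', np.nan, np.nan],
    'Embarked': ['S', 'C', np.nan, 'S'],
})

# Count missing values per column, most incomplete first
missing = df.isnull().sum().sort_values(ascending=False)
print(missing)  # Cabin 3, Age 2, Embarked 1
```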
**What are the data types for various features?**
Helping us during converting goal.
- Seven features are integers or floats. Six in case of the test dataset.
- Five features are strings (object).
```
train_df.info()
print('_'*40)
test_df.info()
```
**What is the distribution of numerical feature values across the samples?**
This helps us determine, among other early insights, how representative the training dataset is of the actual problem domain.
- Total samples are 891 or 40% of the actual number of passengers on board the Titanic (2,224).
- Survived is a categorical feature with 0 or 1 values.
- Around 38% of samples survived, close to the actual survival rate of 32%.
- Most passengers (> 75%) did not travel with parents or children.
- Nearly 30% of the passengers had siblings and/or spouse aboard.
- Fares varied significantly with few passengers (<1%) paying as high as $512.
- Few elderly passengers (<1%) within age range 65-80.
```
train_df.describe()
# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.
# Review Parch distribution using `percentiles=[.75, .8]`
# SibSp distribution `[.68, .69]`
# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`
```
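The percentile reviews suggested in the comments above can be run directly by passing `percentiles` to `describe()`. A sketch on a made-up 0/1 column (the real call would be `train_df.describe(percentiles=[.61, .62])`):

```
import numpy as np
import pandas as pd

# Made-up 0/1 column with roughly the 38% survival rate; not the real data
rng = np.random.RandomState(0)
df = pd.DataFrame({'Survived': rng.binomial(1, 0.38, size=891)})

# Custom percentiles bracket where the sorted value flips from 0 to 1
summary = df['Survived'].describe(percentiles=[.61, .62])
print(summary.loc[['61%', '62%']])
```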
**What is the distribution of categorical features?**
- Names are unique across the dataset (count=unique=891)
- Sex variable has two possible values with 65% male (top=male, freq=577/count=891).
- Cabin values have several duplicates across samples. Alternatively, several passengers shared a cabin.
- Embarked takes three possible values. S port was used by most passengers (top=S).
- Ticket feature has a high ratio (22%) of duplicate values (unique=681).
```
train_df.describe(include=['O'])
```
### Assumptions based on data analysis
We arrive at the following assumptions based on the data analysis done so far. We may validate these assumptions further before taking appropriate actions.
**Correlating.**
We want to know how well each feature correlates with Survival. We want to do this early in our project and match these quick correlations with modelled correlations later in the project.
**Completing.**
1. We may want to complete Age feature as it is definitely correlated to survival.
2. We may want to complete the Embarked feature as it may also correlate with survival or another important feature.
**Correcting.**
1. Ticket feature may be dropped from our analysis as it contains high ratio of duplicates (22%) and there may not be a correlation between Ticket and survival.
2. Cabin feature may be dropped as it is highly incomplete or contains many null values both in training and test dataset.
3. PassengerId may be dropped from training dataset as it does not contribute to survival.
4. Name feature is relatively non-standard and may not contribute directly to survival, so it may be dropped.
**Creating.**
1. We may want to create a new feature called Family based on Parch and SibSp to get total count of family members on board.
2. We may want to engineer the Name feature to extract Title as a new feature.
3. We may want to create a new feature for Age bands. This turns a continuous numerical feature into an ordinal categorical feature.
4. We may also want to create a Fare range feature if it helps our analysis.
**Classifying.**
We may also add to our assumptions based on the problem description noted earlier.
1. Women (Sex=female) were more likely to have survived.
2. Children (Age<?) were more likely to have survived.
3. The upper-class passengers (Pclass=1) were more likely to have survived.
## Analyze by pivoting features
To confirm some of our observations and assumptions, we can quickly analyze our feature correlations by pivoting features against each other. We can only do so at this stage for features which do not have any empty values. It also makes sense to do so only for features which are categorical (Sex), ordinal (Pclass) or discrete (SibSp, Parch) type.
- **Pclass** We observe significant correlation (>0.5) between Pclass=1 and Survived (classifying #3). We decide to include this feature in our model.
- **Sex** We confirm the observation during problem definition that Sex=female had very high survival rate at 74% (classifying #1).
- **SibSp and Parch** These features have zero correlation for certain values. It may be best to derive a feature or a set of features from these individual features (creating #1).
```
train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Parch", "Survived"]].groupby(['Parch'], as_index=False).mean().sort_values(by='Survived', ascending=False)
```
## Analyze by visualizing data
Now we can continue confirming some of our assumptions using visualizations for analyzing the data.
### Correlating numerical features
Let us start by understanding correlations between numerical features and our solution goal (Survived).
A histogram chart is useful for analyzing continuous numerical variables like Age where banding or ranges will help identify useful patterns. The histogram can indicate the distribution of samples using automatically defined bins or equally ranged bands. This helps us answer questions relating to specific bands (Did infants have a better survival rate?)
Note that the y-axis in histogram visualizations represents the count of samples or passengers, while the x-axis shows the Age values.
**Observations.**
- Infants (Age <=4) had high survival rate.
- Oldest passengers (Age = 80) survived.
- Large number of 15-25 year olds did not survive.
- Most passengers are in 15-35 age range.
**Decisions.**
This simple analysis confirms our assumptions as decisions for subsequent workflow stages.
- We should consider Age (our assumption classifying #2) in our model training.
- Complete the Age feature for null values (completing #1).
- We should band age groups (creating #3).
```
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
```
### Correlating numerical and ordinal features
We can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values.
**Observations.**
- Pclass=3 had most passengers, however most did not survive. Confirms our classifying assumption #2.
- Infant passengers in Pclass=2 and Pclass=3 mostly survived. Further qualifies our classifying assumption #2.
- Most passengers in Pclass=1 survived. Confirms our classifying assumption #3.
- Pclass varies in terms of Age distribution of passengers.
**Decisions.**
- Consider Pclass for model training.
```
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')
# Note: newer seaborn versions use `height` in place of the `size` argument.
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();
```
### Correlating categorical features
Now we can correlate categorical features with our solution goal.
**Observations.**
- Female passengers had much better survival rate than males. Confirms classifying (#1).
- Exception in Embarked=C where males had higher survival rate. This could be a correlation between Pclass and Embarked and in turn Pclass and Survived, not necessarily direct correlation between Embarked and Survived.
- Males had better survival rate in Pclass=3 when compared with Pclass=2 for C and Q ports. Completing (#2).
- Ports of embarkation have varying survival rates for Pclass=3 and among male passengers. Correlating (#1).
**Decisions.**
- Add Sex feature to model training.
- Complete and add Embarked feature to model training.
```
# grid = sns.FacetGrid(train_df, col='Embarked')
grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
```
### Correlating categorical and numerical features
We may also want to correlate categorical features (with non-numeric values) and numeric features. We can consider correlating Embarked (Categorical non-numeric), Sex (Categorical non-numeric), Fare (Numeric continuous), with Survived (Categorical numeric).
**Observations.**
- Higher fare paying passengers had better survival. Confirms our assumption for creating (#4) fare ranges.
- Port of embarkation correlates with survival rates. Confirms correlating (#1) and completing (#2).
**Decisions.**
- Consider banding Fare feature.
```
# grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
```
## Wrangle data
We have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals.
### Correcting by dropping features
This is a good starting goal to execute. By dropping features we are dealing with fewer data points. Speeds up our notebook and eases the analysis.
Based on our assumptions and decisions we want to drop the Cabin (correcting #2) and Ticket (correcting #1) features.
Note that where applicable we perform operations on both training and testing datasets together to stay consistent.
```
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
print("After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
```
### Creating new feature extracting from existing
We want to analyze if the Name feature can be engineered to extract titles and test the correlation between titles and survival, before dropping the Name and PassengerId features.
In the following code we extract the Title feature using regular expressions. The RegEx pattern `([A-Za-z]+)\.` matches the first word which ends with a dot character within the Name feature. The `expand=False` flag returns a Series rather than a DataFrame.
**Observations.**
When we plot Title, Age, and Survived, we note the following observations.
- Most titles band Age groups accurately. For example: Master title has Age mean of 5 years.
- Survival among Title Age bands varies slightly.
- Certain titles mostly survived (Mme, Lady, Sir) or did not (Don, Rev, Jonkheer).
**Decision.**
- We decide to retain the new Title feature for model training.
```
for dataset in combine:
    dataset['Title'] = dataset.Name.str.extract(r' ([A-Za-z]+)\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
```
We can replace many titles with a more common name or classify them as `Rare`.
```
for dataset in combine:
    dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess', 'Capt', 'Col',
        'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')

    dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
train_df[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()
```
We can convert the categorical titles to ordinal.
```
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
    dataset['Title'] = dataset['Title'].map(title_mapping)
    dataset['Title'] = dataset['Title'].fillna(0)
train_df.head()
```
Now we can safely drop the Name feature from training and testing datasets. We also do not need the PassengerId feature in the training dataset.
```
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]
train_df.shape, test_df.shape
```
### Converting a categorical feature
Now we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.
Let us start by converting the Sex feature to numeric values where female=1 and male=0.
```
for dataset in combine:
    dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
train_df.head()
```
### Completing a numerical continuous feature
Now we should start estimating and completing features with missing or null values. We will first do this for the Age feature.
We can consider three methods to complete a numerical continuous feature.
1. A simple way is to generate random numbers between the mean minus the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) and the mean plus the standard deviation.
2. A more accurate way of guessing missing values is to use other correlated features. In our case we note correlation among Age, Sex, and Pclass. Guess Age values using [median](https://en.wikipedia.org/wiki/Median) values for Age across sets of Pclass and Sex feature combinations. So, median Age for Pclass=1 and Sex=0, Pclass=1 and Sex=1, and so on...
3. Combine methods 1 and 2. So instead of guessing age values based on the median, use random numbers between the mean and standard deviation, based on sets of Pclass and Sex combinations.
Methods 1 and 3 will introduce random noise into our models. The results from multiple executions might vary. We will prefer method 2.
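Method 2 can also be written without explicit loops, using a grouped median. A sketch on made-up data, assuming the same Sex/Pclass encoding as this notebook:

```
import numpy as np
import pandas as pd

# Made-up rows: two Sex x Pclass groups, each with one missing Age
df = pd.DataFrame({
    'Sex':    [0, 0, 0, 1, 1, 1],
    'Pclass': [1, 1, 1, 1, 1, 1],
    'Age':    [40.0, 50.0, np.nan, 20.0, 30.0, np.nan],
})

# Fill each missing Age with the median of its Sex x Pclass group
df['Age'] = df.groupby(['Sex', 'Pclass'])['Age'].transform(
    lambda s: s.fillna(s.median()))
print(df['Age'].tolist())  # [40.0, 50.0, 45.0, 20.0, 30.0, 25.0]
```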
```
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')
grid = sns.FacetGrid(train_df, row='Pclass', col='Sex', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
```
Let us start by preparing an empty array to contain guessed Age values based on Pclass x Sex combinations.
```
guess_ages = np.zeros((2,3))
guess_ages
```
Now we iterate over Sex (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations.
```
for dataset in combine:
    for i in range(0, 2):
        for j in range(0, 3):
            guess_df = dataset[(dataset['Sex'] == i) &
                               (dataset['Pclass'] == j+1)]['Age'].dropna()

            # age_mean = guess_df.mean()
            # age_std = guess_df.std()
            # age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)

            age_guess = guess_df.median()

            # Convert random age float to nearest .5 age
            guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5

    for i in range(0, 2):
        for j in range(0, 3):
            dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),
                        'Age'] = guess_ages[i,j]

    dataset['Age'] = dataset['Age'].astype(int)
train_df.head()
```
Let us create Age bands and determine correlations with Survived.
```
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)
```
Let us replace Age with ordinals based on these bands.
```
for dataset in combine:
    dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
train_df.head()
```
We can now remove the AgeBand feature.
```
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()
```
### Create new feature combining existing features
We can create a new feature for FamilySize which combines Parch and SibSp. This will enable us to drop Parch and SibSp from our datasets.
```
for dataset in combine:
    dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
train_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False)
```
We can create another feature called IsAlone.
```
for dataset in combine:
    dataset['IsAlone'] = 0
    dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()
```
Let us drop Parch, SibSp, and FamilySize features in favor of IsAlone.
```
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]
train_df.head()
```
We can also create an artificial feature combining Pclass and Age.
```
for dataset in combine:
    dataset['Age*Class'] = dataset.Age * dataset.Pclass
train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(10)
```
### Completing a categorical feature
The Embarked feature takes S, Q, C values based on the port of embarkation. Our training dataset has two missing values. We simply fill these with the most common occurrence.
```
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port
for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False)
```
### Converting categorical feature to numeric
We can now convert the completed Embarked feature by mapping its categorical port values to numeric ones.
```
for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train_df.head()
```
### Quick completing and converting a numeric feature
We can now complete the Fare feature for the single missing value in the test dataset using the median value for this feature. We do this in a single line of code.
Note that we are not creating an intermediate new feature or doing any further analysis for correlation to guess the missing value, as we are replacing only a single value. The completion goal achieves the desired requirement for the model algorithm to operate on non-null values.
We may also want to round off the fare to two decimals as it represents currency.
```
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()
```
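The rounding mentioned above is not performed by the cell; one way would be `test_df['Fare'].round(2)`. A minimal sketch on made-up fare values:

```
import pandas as pd

# Made-up fare values, not the real Fare column
fares = pd.Series([7.8292, 151.55, 512.3292])

# Round to two decimals since fares represent currency
rounded = fares.round(2)
print(rounded.tolist())
```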
We can now create FareBand.
```
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)
```
Convert the Fare feature to ordinal values based on the FareBand.
```
for dataset in combine:
    dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
    dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
    dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
    dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
    dataset['Fare'] = dataset['Fare'].astype(int)
train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
train_df.head(10)
```
And the test dataset.
```
test_df.head(10)
```
## Model, predict and solve
Now we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification problem: we want to identify the relationship between the output (Survived or not) and other variables or features (Gender, Age, Port...). We are also performing a category of machine learning called supervised learning, as we are training our model with a given dataset. With these two criteria - supervised learning plus classification - we can narrow down our choice of models to a few. These include:
- Logistic Regression
- KNN or k-Nearest Neighbors
- Support Vector Machines
- Naive Bayes classifier
- Decision Tree
- Random Forest
- Perceptron
- Artificial neural network
- RVM or Relevance Vector Machine
```
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
```
Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference [Wikipedia](https://en.wikipedia.org/wiki/Logistic_regression).
Note the confidence score generated by the model based on our training dataset.
```
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
```
We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the coefficient of the features in the decision function.
Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).
- Sex is the highest positive coefficient, implying that as the Sex value increases (male: 0 to female: 1), the probability of Survived=1 increases the most.
- Inversely as Pclass increases, probability of Survived=1 decreases the most.
- This way Age*Class is a good artificial feature to model as it has second highest negative correlation with Survived.
- So is Title as second highest positive correlation.
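The log-odds reading above can be made concrete: a coefficient c multiplies the odds of Survived=1 by exp(c) for a one-unit increase in the feature. A small illustration (the coefficient values below are hypothetical, not the fitted ones):

```
import numpy as np

def odds_multiplier(coef):
    # A one-unit feature increase multiplies the odds by exp(coef)
    return np.exp(coef)

print(round(odds_multiplier(0.5), 2))   # 1.65: positive coefficient raises the odds
print(round(odds_multiplier(-0.5), 2))  # 0.61: negative coefficient lowers the odds
```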
```
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation', ascending=False)
```
Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of **two categories**, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine).
Note that the model generates a confidence score which is higher than the Logistic Regression model.
```
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
```
In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference [Wikipedia](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm).
The KNN confidence score is better than Logistic Regression but worse than SVM.
```
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
```
In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference [Wikipedia](https://en.wikipedia.org/wiki/Naive_Bayes_classifier).
The model generated confidence score is the lowest among the models evaluated so far.
```
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
```
The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference [Wikipedia](https://en.wikipedia.org/wiki/Perceptron).
```
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
```
This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference [Wikipedia](https://en.wikipedia.org/wiki/Decision_tree_learning).
The model confidence score is the highest among models evaluated so far.
```
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
```
The next model, Random Forest, is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees. Reference [Wikipedia](https://en.wikipedia.org/wiki/Random_forest).
The model confidence score is the highest among models evaluated so far. We decide to use this model's output (Y_pred) for creating our competition submission of results.
```
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
```
### Model evaluation
We can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose Random Forest because it corrects for decision trees' habit of overfitting to their training set.
```
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Descent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian, acc_perceptron,
acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)
submission = pd.DataFrame({
"PassengerId": test_df["PassengerId"],
"Survived": Y_pred
})
# submission.to_csv('../output/submission.csv', index=False)
```
Our submission to the competition site Kaggle scored 3,883 out of 6,082 competition entries. This result is indicative while the competition is running, and it only accounts for part of the submission dataset. Not bad for our first attempt. Any suggestions to improve our score are most welcome.
## References
This notebook has been created based on great work done solving the Titanic competition and other sources.
- [A journey through Titanic](https://www.kaggle.com/omarelgabry/titanic/a-journey-through-titanic)
- [Getting Started with Pandas: Kaggle's Titanic Competition](https://www.kaggle.com/c/titanic/details/getting-started-with-random-forests)
- [Titanic Best Working Classifier](https://www.kaggle.com/sinakhorami/titanic/titanic-best-working-classifier)
<a href="https://colab.research.google.com/github/The20thDuck/Neuro-140-Project/blob/main/main_experiments/FashionGAN.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import torch as t
import torchvision
from tqdm import tqdm
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import matplotlib
from matplotlib.colors import Colormap
!pip install wandb -qqq
import wandb
wandb.login()
train_data = torchvision.datasets.FashionMNIST("data", train=True, transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]), download=True)
train_loader = t.utils.data.DataLoader(train_data, batch_size = 128, shuffle=True, num_workers=2)
plt.imshow(train_data[0][0].squeeze(), plt.cm.binary)
!pip install einops
import einops
ngc = 64
latent_size = 100
num_classes = 10
emb_size = 256
class Generator(t.nn.Module):
def __init__(self,
latent_size=latent_size,
num_classes = num_classes,
emb_size = 96,
nhead=4,
L1 = 4,
L2 = 4,
img_size = 28,
in_channels=1):
super().__init__()
self.emb_size = emb_size
self.img_size = img_size
# self.latent_size = latent_size
self.linear1 = t.nn.Linear(latent_size, emb_size*(img_size//4)**2)
self.pos_emb1 = t.nn.Parameter(t.randn(1, (img_size//4)**2, emb_size))
self.block1 = t.nn.Sequential(
*[t.nn.TransformerEncoderLayer(
emb_size,
nhead=nhead,
dim_feedforward=emb_size*4,
activation="gelu",
norm_first=True,
batch_first=True,
dropout=0.,
layer_norm_eps=1e-12
) for _ in range(L1)]
)
self.pixel1 = t.nn.PixelShuffle(2)
self.pos_emb2 = t.nn.Parameter(t.randn(1, (img_size//2)**2, emb_size//4))
self.block2 = t.nn.Sequential(
*[t.nn.TransformerEncoderLayer(
emb_size//4,
nhead=nhead,
dim_feedforward=emb_size,
activation="gelu",
norm_first=True,
batch_first=True,
dropout=0.,
layer_norm_eps=1e-12
) for _ in range(L2)]
)
self.pixel2 = t.nn.PixelShuffle(2)
self.project = t.nn.Linear(emb_size//16, in_channels)
self.tanh = t.nn.Tanh()
def forward(self, z):
b = z.shape[0]
emb = self.linear1(z).view((b, (self.img_size//4)**2, self.emb_size))
emb = self.block1(emb + self.pos_emb1)
emb_to_pixel = "b (h w) c -> b c h w"
pixel_to_emb = "b c h w -> b (h w) c"
pixels = einops.rearrange(emb, emb_to_pixel, h = self.img_size//4)
pixels = self.pixel1(pixels)
emb = einops.rearrange(pixels, pixel_to_emb, h = self.img_size//2)
emb = self.block2(emb + self.pos_emb2)
pixels = einops.rearrange(emb, emb_to_pixel, h = self.img_size//2)
pixels = self.pixel2(pixels)
emb = einops.rearrange(pixels, pixel_to_emb, h = self.img_size)
emb = self.project(emb)
pixels = einops.rearrange(emb, emb_to_pixel, h = self.img_size)
return self.tanh(pixels)
class Generator_Conv(t.nn.Module):
def __init__(self, latent_size=latent_size, num_classes = num_classes):
super().__init__()
self.layers = t.nn.Sequential(
t.nn.ConvTranspose2d(latent_size, ngc*4, 4, 1, 0),
t.nn.BatchNorm2d(ngc*4),
t.nn.ReLU(),
t.nn.ConvTranspose2d(ngc*4, ngc*2, 4, 2, 1),
t.nn.BatchNorm2d(ngc*2),
t.nn.ReLU(),
t.nn.ConvTranspose2d(ngc*2, ngc*1, 4, 2, 1),
t.nn.BatchNorm2d(ngc),
t.nn.ReLU(),
t.nn.ConvTranspose2d(ngc*1, 1, 4, 2, 3),
t.nn.Tanh()
)
def forward(self, z):
return self.layers(z.unsqueeze(-1).unsqueeze(-1))
class PatchEmbedding(t.nn.Module):
def __init__(self, in_channels, patch_size, emb_size, imgsize):
super().__init__()
self.cls_token = t.nn.Parameter(t.randn(1, 1, emb_size)) # b, n, emb_size. Add to the list of module params
self.n = (imgsize//patch_size)**2
self.emb_size = emb_size
self.position_embeddings = t.nn.Parameter(t.randn(1, self.n + 1, emb_size)) # +1 for cls
self.projection = t.nn.Conv2d(in_channels, emb_size, patch_size, patch_size, 0)
def forward(self, inputs):
token_embeddings = einops.rearrange(self.projection(inputs), 'b c h w -> b (h w) c')
b = inputs.shape[0]
cls_embeddings = self.cls_token.repeat((b, 1, 1))
# print(cls_embeddings.shape, token_embeddings.shape, self.position_embeddings.shape)
return t.cat((cls_embeddings, token_embeddings), dim = 1) + self.position_embeddings
class ClassificationHead(t.nn.Module):
def __init__(self, num_classes, emb_size):
super().__init__()
self.ln = t.nn.LayerNorm((emb_size,), eps = 1e-12)
self.layer = t.nn.Linear(emb_size, num_classes)
def forward(self, x):
return self.layer(self.ln(x)[:,0,:])
class Discriminator(t.nn.Module):
def __init__(self, in_channels: int = 1, patch_size: int = 4, emb_size: int = emb_size, nhead=8, imgsize=28, num_classes=1, L = 6, from_hugging = False):
super().__init__()
self.patch_emb = PatchEmbedding(in_channels, patch_size, emb_size, imgsize)
# self.from_hugging = from_hugging
# if from_hugging:
# config = ViTConfig(hidden_size=emb_size, num_hidden_layers = L, num_attention_heads = nhead, intermediate_size=emb_size*4, patch_size=patch_size, image_size=imgsize, encoder_stride=patch_size)
# self.encoder = ViTForImageClassification(config).vit.encoder
# else:
self.encoder = t.nn.Sequential(*[
t.nn.TransformerEncoderLayer(
emb_size,
nhead=nhead,
dim_feedforward=emb_size*4,
activation="gelu",
norm_first=True,
batch_first=True,
dropout=0.,
layer_norm_eps=1e-12
) for _ in range(L)])
self.classifier = ClassificationHead(num_classes, emb_size=emb_size)
def forward(self, x):
emb = self.patch_emb(x)
encoding = self.encoder(emb)
return t.sigmoid(self.classifier(encoding)).flatten()
class Discriminator_Conv(t.nn.Module):
def __init__(self, num_classes=num_classes):
super().__init__()
self.layers = t.nn.Sequential(
t.nn.Conv2d(1, ngc, 4, 2, 3),
t.nn.BatchNorm2d(ngc*1),
t.nn.LeakyReLU(0.2),
t.nn.Conv2d(ngc*1, ngc*2, 4, 2, 1),
t.nn.BatchNorm2d(ngc*2),
t.nn.LeakyReLU(0.2),
t.nn.Conv2d(ngc*2, ngc*4, 4, 2, 1),
t.nn.BatchNorm2d(ngc*4),
t.nn.LeakyReLU(0.2),
t.nn.Conv2d(ngc*4, 1, 4, 1, 0),
t.nn.Sigmoid()
)
def forward(self, x):
return self.layers(x).flatten()
run = 3 ## CHANGE THIS
lrs = [2e-5, 5e-5, 2e-4, 5e-4, 2e-3]
lr = lrs[run]
beta1 = 0.5
gen = Generator().cuda()
disc = Discriminator().cuda()
optim_g = t.optim.Adam(gen.parameters(), lr=lr, betas=(beta1, 0.999))
optim_d = t.optim.Adam(disc.parameters(), lr=lr, betas=(beta1, 0.999))
criterion = t.nn.BCELoss()
arch = "Transformer"
wandb.init(
# Set the project where this run will be logged
project="Neuro-140",
# We pass a run name (otherwise it’ll be randomly assigned, like sunshine-lollypop-10)
name=f"transformer_gen_{run}",
# Track hyperparameters and run metadata
config={
"learning_rate": lr,
"emb_size": emb_size,
"architecture": arch
})
!pip install pytorch-ignite pytorch-fid -q
from pytorch_fid.inception import InceptionV3
device = "cuda"
dims = 2048
block_idx = InceptionV3.BLOCK_INDEX_BY_DIM[dims]
model = InceptionV3([block_idx]).to(device)
class WrapperInceptionV3(t.nn.Module):
def __init__(self, fid_incv3):
super().__init__()
self.fid_incv3 = fid_incv3
@t.no_grad()
def forward(self, x):
y = self.fid_incv3(x)
y = y[0]
y = y[:, :, 0, 0]
return y
wrapper_model = WrapperInceptionV3(model)
wrapper_model.eval();
from ignite.metrics import FID
from ignite.engine import Engine
import ignite.distributed as idist
def process_function(engine, batch):
return batch
from torchvision import transforms
preprocess = transforms.Compose([
transforms.Resize(299),
transforms.CenterCrop(299),
# transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
engine = Engine(process_function)
metric = FID(num_features=dims, feature_extractor=wrapper_model, device=idist.device())
metric.attach(engine, "fid")
def calc_fid(metric, gen):
metric.reset()
gen.eval()
# with t.no_grad():
for i, (x, y) in enumerate(tqdm(train_loader)):
b = x.shape[0]
z = t.randn(b, latent_size).cuda()
with t.no_grad():  # no graph needed when only scoring samples for FID
    gz = gen(z)
gz_resize = preprocess(t.tile(gz, (1, 3, 1, 1)))
x_resize = preprocess(t.tile(x, (1, 3, 1, 1)))
metric.update([x_resize, gz_resize])
if i == 100:
break
gen.train()
return (metric.compute())
# wandb.finish()
iter = 0
epoch = 0
fixed_z = t.randn(10, latent_size).cuda()
# training loop
num_epochs = 30
real_label = 1.
fake_label = 0.
# fid = calc_fid(metric, gen)
# print(f"{arch}/FID: {fid}")
# wandb.log({f"{arch}/FID": fid})
d_losses = []
g_losses = []
dxs = []
dgzs = []
for e in range(num_epochs):
epoch += 1
for step, (x, _) in enumerate(tqdm(train_loader)):
iter += 1
# lab = t.nn.functional.one_hot(y.cuda(), num_classes = num_classes).float()
# update G
gen.zero_grad()
disc.zero_grad()
b = x.shape[0]
z = t.randn(b, latent_size).cuda()
gz = gen(z)  # no t.no_grad() here: gradients must flow back to the generator for optim_g.step()
dgz = disc(gz)
g_loss = criterion(dgz, t.full((b,), real_label).cuda())
g_losses.append(g_loss.item())
g_loss.backward()
optim_g.step()
# update D
disc.zero_grad()
dx = disc(x.cuda())
dgz = disc(gz.detach())
dxs.append(dx.detach())
dgzs.append(dgz.detach())
d_loss = criterion(dgz, t.full((b,), fake_label).cuda()) + criterion(dx, t.full((b,), real_label).cuda())
d_losses.append(d_loss.item())
d_loss.backward()
optim_d.step()
if iter % 100 == 0:
L_g = t.tensor(g_losses[-100:]).mean()
L_d = t.tensor(d_losses[-100:]).mean()
dx_mean = t.cat(dxs[-100:]).mean().item()
dgz_mean = t.cat(dgzs[-100:]).mean().item()
d_losses = []
g_losses = []
dxs = []
dgzs = []
wandb.log({"G-Loss": L_g,
"D-Loss": L_d,
"dx": dx_mean,
"dgz": dgz_mean,
"iter": iter
})
with t.no_grad():
ims = gen(fixed_z).cpu()
fid = calc_fid(metric, gen)
print(f"{arch}/FID: {fid}")
wandb.log({f"{arch}/FID": fid,
# f"{arch}/G-Loss": t.tensor(g_losses).mean(),
# f"{arch}/D-Loss": t.tensor(d_losses).mean(),
f"{arch}/Images": [wandb.Image(im.numpy()*255) for im in ims],
"epoch": epoch
})
t.cat(dxs[-100:]).mean()  # entries are already tensors; no need to re-wrap with t.tensor
# print([j.shape for j in dxs[-100:]])
import matplotlib
# NB: 'fids' is assumed to hold per-epoch FID values collected during training
# (e.g. fids = [] before the loop and fids.append(fid) after each calc_fid call).
fig = plt.figure()
ax = fig.add_subplot()
ax.plot(fids)
plt.title("FID Score vs Iteration")
ax.set_yscale('log')
ax.yaxis.set_major_formatter(matplotlib.ticker.ScalarFormatter())
ax.yaxis.set_minor_formatter(matplotlib.ticker.ScalarFormatter())
fig.show()
with t.no_grad():
z = t.randn(50, latent_size).cuda()
# lab = t.nn.functional.one_hot(t.arange(50).cuda() % 10, num_classes = num_classes).float()
ims = gen(z).cpu()
_, axs = plt.subplots(5, 10)
axs = axs.flatten()
for im, ax in zip(ims, axs):
ax.axes.xaxis.set_visible(False)
ax.axes.yaxis.set_visible(False)
ax.imshow(im.squeeze(), cmap="gray_r")
plt.show()
n = len(d_losses)
plt.plot(range(n), d_losses, label="D")
plt.plot(range(n), g_losses, label="G")
plt.title("Generator and Disc Loss")
plt.legend()
plt.show()
# t.save(gen.state_dict(), "/content/drive/MyDrive/Sophomore/Neuro 140/models/conv-gen.pt")
# t.save(disc.state_dict(), "/content/drive/MyDrive/Sophomore/Neuro 140/models/conv-disc.pt")
from google.colab import drive
drive.mount('/content/drive')
gen2 = Generator()
gen2.load_state_dict(t.load("/content/drive/MyDrive/Sophomore/Neuro 140/models/conv-gen.pt"))
gen2.cuda()
with t.no_grad():
z = t.randn(50, latent_size).cuda()
# lab = t.nn.functional.one_hot(t.arange(50).cuda() % 10, num_classes = num_classes).float()
ims = gen2(z).cpu()
_, axs = plt.subplots(5, 10)
axs = axs.flatten()
for im, ax in zip(ims, axs):
ax.axes.xaxis.set_visible(False)
ax.axes.yaxis.set_visible(False)
ax.imshow(im.squeeze(), cmap="gray_r")
plt.show()
plt.plot([i for i in range(10)], range(10))
```
# Copyright Netherlands eScience Center <br>
**Function: Computing AMET with Surface & TOA flux** <br>
**Author: Yang Liu** <br>
**First Built: 2019.08.09** <br>
**Last Update: 2019.08.09** <br>
Description : This notebook aims to compute AMET with TOA/surface flux fields from the NorESM model. The NorESM model is launched by NERSC in Blue Action Work Package 3 as part of coordinated experiments for joint analysis. It contributes to Deliverable 3.1. <br>
Return Values : netCDF4 <br>
Caveat : The fields used here are post-processed monthly mean fields. Hence there is no accumulation that needs to be taken into account.<br>
The **positive sign** for each variable varies:<br>
* Latent heat flux (LHFLX) - upward <br>
* Sensible heat flux (SHFLX) - upward <br>
* Net solar radiation flux at TOA (FSNTOA)- downward <br>
* Net solar radiation flux at surface (FSNS) - downward <br>
* Net longwave radiation flux at surface (FLNS) - upward <br>
* Net longwave radiation flux at TOA (FLUT) - upward <br>
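Given these sign conventions, the net downward fluxes combine the variables as below (toy numbers chosen for illustration; this mirrors the combination used later in `amet()`):

```python
# Toy monthly-mean fields [W/m2]; positive directions as listed above
LHFLX, SHFLX = 80.0, 20.0    # both defined positive upward
FSNS, FLNS = 160.0, 60.0     # FSNS positive downward, FLNS positive upward
FSNTOA, FLUT = 240.0, 235.0  # FSNTOA positive downward, FLUT positive upward

net_flux_surf = -LHFLX - SHFLX + FSNS - FLNS  # net downward energy flux at the surface
net_flux_toa = FSNTOA - FLUT                  # net downward energy flux at TOA

print(net_flux_surf)  # -> 0.0 (surface in balance for these toy numbers)
print(net_flux_toa)   # -> 5.0 (net 5 W/m2 entering the column from above)
```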
```
%matplotlib inline
import numpy as np
import sys
sys.path.append("/home/ESLT0068/NLeSC/Computation_Modeling/Bjerknes/Scripts/META")
import scipy as sp
import time as tttt
from netCDF4 import Dataset,num2date
import os
import meta.statistics
import meta.visualizer
# constants
constant = {'g' : 9.80616,      # gravitational acceleration [m / s2]
'R' : 6371009, # radius of the earth [m]
'cp': 1004.64, # heat capacity of air [J/(Kg*K)]
'Lv': 2264670, # Latent heat of vaporization [J/Kg]
'R_dry' : 286.9, # gas constant of dry air [J/(kg*K)]
'R_vap' : 461.5, # gas constant for water vapour [J/(kg*K)]
}
################################ Input zone ######################################
# specify starting and ending time
start_year = 1979
end_year = 2014
# specify data path
datapath = '/home/ESLT0068/WorkFlow/Core_Database_BlueAction_WP3/NorESM_NERSC'
# specify output path for figures
output_path = '/home/ESLT0068/WorkFlow/Core_Database_BlueAction_WP3/AMET_netCDF'
# ensemble number
ensemble = 20
# experiment number
exp = 4
# example file
datapath_example = os.path.join(datapath, 'SHFLX', 'SHFLX_FHIST_f09_f09_BA4_en15_mon_1979-2013.nc')
####################################################################################
def var_key_retrieve(datapath, exp_num, ensemble_num):
# get the path to each datasets
print ("Start retrieving datasets of experiment {} ensemble number {}".format(exp_num+1, ensemble_num))
# get data path
if exp_num<2:
datapath_slhf = os.path.join(datapath, 'LHFLX', 'LHFLX_FHIST_f09_f09_BA{}_en{}_mon_1979-2014.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
datapath_sshf = os.path.join(datapath, 'SHFLX', 'SHFLX_FHIST_f09_f09_BA{}_en{}_mon_1979-2014.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
datapath_ssr = os.path.join(datapath, 'FSNS', 'FSNS_FHIST_f09_f09_BA{}_en{}_mon_1979-2014.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
datapath_str = os.path.join(datapath, 'FLNS', 'FLNS_FHIST_f09_f09_BA{}_en{}_mon_1979-2014.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
datapath_tsr = os.path.join(datapath, 'FSNTOA', 'FSNTOA_FHIST_f09_f09_BA{}_en{}_mon_1979-2014.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
datapath_ttr = os.path.join(datapath, 'FLUT', 'FLUT_FHIST_f09_f09_BA{}_en{}_mon_1979-2014.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
else:
datapath_slhf = os.path.join(datapath, 'LHFLX', 'LHFLX_FHIST_f09_f09_BA{}_en{}_mon_1979-2013.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
datapath_sshf = os.path.join(datapath, 'SHFLX', 'SHFLX_FHIST_f09_f09_BA{}_en{}_mon_1979-2013.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
datapath_ssr = os.path.join(datapath, 'FSNS', 'FSNS_FHIST_f09_f09_BA{}_en{}_mon_1979-2013.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
datapath_str = os.path.join(datapath, 'FLNS', 'FLNS_FHIST_f09_f09_BA{}_en{}_mon_1979-2013.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
datapath_tsr = os.path.join(datapath, 'FSNTOA', 'FSNTOA_FHIST_f09_f09_BA{}_en{}_mon_1979-2013.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
datapath_ttr = os.path.join(datapath, 'FLUT', 'FLUT_FHIST_f09_f09_BA{}_en{}_mon_1979-2013.nc'.format(exp_num+1, ensemble_list[ensemble_num]))
# get the variable keys
key_slhf = Dataset(datapath_slhf)
key_sshf = Dataset(datapath_sshf)
key_ssr = Dataset(datapath_ssr)
key_str = Dataset(datapath_str)
key_tsr = Dataset(datapath_tsr)
key_ttr = Dataset(datapath_ttr)
print ("Retrieving datasets successfully and return the variable key!")
return key_slhf, key_sshf, key_ssr, key_str, key_tsr, key_ttr
def amet(key_slhf, key_sshf, key_ssr, key_str, key_tsr, key_ttr, lat, lon):
# get all the variables
# make sure we know the sign of all the input variables!!!
# descending lat
var_slhf = key_slhf.variables['LHFLX'][:,::-1,:] # surface latent heat flux W/m2
var_sshf = key_sshf.variables['SHFLX'][:,::-1,:] # surface sensible heat flux W/m2
var_ssr = key_ssr.variables['FSNS'][:,::-1,:] # surface solar radiation W/m2
var_str = key_str.variables['FLNS'][:,::-1,:] # surface thermal radiation W/m2
var_tsr = key_tsr.variables['FSNTOA'][:,::-1,:] # TOA solar radiation W/m2
var_ttr = key_ttr.variables['FLUT'][:,::-1,:] # TOA thermal radiation W/m2
#size of the grid box
dx = 2 * np.pi * constant['R'] * np.cos(2 * np.pi * lat /
360) / len(lon)
dy = np.pi * constant['R'] / len(lat)
# calculate total net energy flux at TOA/surface
net_flux_surf = - var_slhf - var_sshf + var_ssr - var_str
net_flux_toa = var_tsr - var_ttr
net_flux_surf_area = np.zeros(net_flux_surf.shape, dtype=float) # filled below, in TW
net_flux_toa_area = np.zeros(net_flux_toa.shape, dtype=float)
for i in np.arange(len(lat)):
# change the unit to terawatt
net_flux_surf_area[:,i,:] = net_flux_surf[:,i,:]* dx[i] * dy / 1E+12
net_flux_toa_area[:,i,:] = net_flux_toa[:,i,:]* dx[i] * dy / 1E+12
# take the zonal integral of flux
net_flux_surf_int = np.sum(net_flux_surf_area,2) / 1000 # PW
net_flux_toa_int = np.sum(net_flux_toa_area,2) / 1000
# AMET as the residual of net flux at TOA & surface
AMET_res = np.zeros(net_flux_surf_int.shape)  # residual method, applied here to NorESM fields
for i in np.arange(len(lat)):
    AMET_res[:,i] = -(np.sum(net_flux_toa_int[:,0:i+1],1) -
                      np.sum(net_flux_surf_int[:,0:i+1],1))
AMET_res = AMET_res.reshape(-1,12,len(lat))
return AMET_res
def create_netcdf_point (pool_amet, lat, output_path, exp):
print ('*******************************************************************')
print ('*********************** create netcdf file*************************')
print ('*******************************************************************')
#logging.info("Start creating netcdf file for the 2D fields of ERAI at each grid point.")
# get the basic dimensions
ens, year, month, _ = pool_amet.shape
# wrap the datasets into netcdf file
# 'NETCDF3_CLASSIC', 'NETCDF3_64BIT', 'NETCDF4_CLASSIC', and 'NETCDF4'
data_wrap = Dataset(os.path.join(output_path, 'amet_NorESM_NERSC_exp{}.nc'.format(exp+1)),'w',format = 'NETCDF4')
# create dimensions for netcdf data
ens_wrap_dim = data_wrap.createDimension('ensemble', ens)
year_wrap_dim = data_wrap.createDimension('year', year)
month_wrap_dim = data_wrap.createDimension('month', month)
lat_wrap_dim = data_wrap.createDimension('latitude', len(lat))
# create coordinate variable
ens_wrap_var = data_wrap.createVariable('ensemble',np.int32,('ensemble',))
year_wrap_var = data_wrap.createVariable('year',np.int32,('year',))
month_wrap_var = data_wrap.createVariable('month',np.int32,('month',))
lat_wrap_var = data_wrap.createVariable('latitude',np.float32,('latitude',))
# create the actual 4d variable
amet_wrap_var = data_wrap.createVariable('amet',np.float64,('ensemble','year','month','latitude'),zlib=True)
# global attributes
data_wrap.description = 'Monthly mean atmospheric meridional energy transport'
# variable attributes
lat_wrap_var.units = 'degree_north'
amet_wrap_var.units = 'PW'
amet_wrap_var.long_name = 'atmospheric meridional energy transport'
# writing data
ens_wrap_var[:] = np.arange(ens)
month_wrap_var[:] = np.arange(month)+1
year_wrap_var[:] = np.arange(year)+1979
lat_wrap_var[:] = lat
amet_wrap_var[:] = pool_amet
# close the file
data_wrap.close()
print ("The generation of netcdf files is complete!!")
if __name__=="__main__":
####################################################################
###### Create time namelist matrix for variable extraction #######
####################################################################
# date and time arrangement
# namelist of month and days for file manipulation
namelist_month = ['01','02','03','04','05','06','07','08','09','10','11','12']
ensemble_list = ['01','02','03','04','05','06','07','08','09','10',
'11','12','13','14','15','16','17','18','19','20',
'21','22','23','24','25','26','27','28','29','30',]
# index of months
period_1979_2014 = np.arange(start_year,end_year+1,1)
period_1979_2013 = period_1979_2014[:-1]
index_month = np.arange(1,13,1)
####################################################################
###### Extract invariant and calculate constants #######
####################################################################
# get basic dimensions from sample file
key_example = Dataset(datapath_example)
lat = key_example.variables['lat'][::-1] # descending lat
print(lat)
lon = key_example.variables['lon'][:]
print(lon)
# get invariant from benchmark file
Dim_year_1979_2014 = len(period_1979_2014)
Dim_year_1979_2013 = len(period_1979_2013)
Dim_month = len(index_month)
Dim_latitude = len(lat)
Dim_longitude = len(lon)
#############################################
##### Create space for storing data #####
#############################################
# loop for calculation
for i in range(exp):
if i < 2:
pool_amet = np.zeros((ensemble,Dim_year_1979_2014,Dim_month,Dim_latitude),dtype = float)
else:
pool_amet = np.zeros((ensemble,Dim_year_1979_2013,Dim_month,Dim_latitude),dtype = float)
for j in range(ensemble):
# get variable keys
key_slhf, key_sshf, key_ssr, key_str, key_tsr,\
key_ttr = var_key_retrieve(datapath, i, j)
# compute amet
pool_amet[j,:,:,:] = amet(key_slhf, key_sshf, key_ssr,
key_str, key_tsr, key_ttr, lat, lon)
####################################################################
###### Data Wrapping (NetCDF) #######
####################################################################
# save netcdf
create_netcdf_point(pool_amet, lat, output_path, i)
print ('Packing AMET is complete!!!')
print ('The output is in sleep, safe and sound!!!')
```
```
"""
LICENSE MIT
2020
Guillaume Rozier
Website : http://www.covidtracker.fr
Mail : guillaume.rozier@telecomnancy.net
README:
This file contains scripts that download data from data.gouv.fr and then process it to build many graphs.
I'm currently cleaning the code, please ask me if something is not clear enough.
The charts are exported to 'charts/images/france'.
Data is downloaded to / imported from 'data/france'.
Requirements: please see the imports below (use pip3 to install them).
"""
import pandas as pd
import plotly.graph_objects as go
import france_data_management as data
from datetime import datetime
from datetime import timedelta
from plotly.subplots import make_subplots
import plotly
import math
import os
import json
PATH = "../../"
PATH_STATS = "../../data/france/stats/"
import locale
locale.setlocale(locale.LC_ALL, 'fr_FR.UTF-8')
def import_df_age():
df = pd.read_csv(PATH+"data/france/vaccin/vacsi-a-fra.csv", sep=";")
return df
df_new = pd.read_csv(PATH+"data/france/donnes-hospitalieres-covid19-nouveaux.csv", sep=";")
df_clage = pd.read_csv(PATH+"data/france/donnes-hospitalieres-clage-covid19.csv", sep=";")
df_new_france = df_new.groupby("jour").sum()
df_new_france.sum()
df_clage_france = df_clage.groupby(["jour", "cl_age90"]).sum().reset_index()
df_clage_france[df_clage_france.jour=="2021-04-12"]
df = import_df_age()
df["n_dose1"] = df["n_dose1"].replace({",": ""}, regex=True).astype("int")
df = df.groupby(["clage_vacsi"]).sum()/100
df = df[1:]
df["n_dose1_pourcent"] = round(df.n_dose1/df.n_dose1.sum()*100, 1)
clage_vacsi = [24, 29, 39, 49, 59, 64, 69, 74, 79, 80]
nb_pop = [5236809, 3593713, 8034961, 8316050, 8494520, 3979481, 3801413, 3404034, 2165960, 4081928]
df_age = pd.DataFrame()
df_age["clage_vacsi"]=clage_vacsi
df_age["nb_pop"]=nb_pop
df = df.merge(df_age, left_on="clage_vacsi", right_on="clage_vacsi")
df["pop_vac"] = df["n_dose1"]/df["nb_pop"]*100
df
fig = go.Figure()
fig.add_trace(go.Bar(
x=[str(age) + " ans" for age in df.clage_vacsi[:-1]]+["+ 80 ans"],
y=df.pop_vac,
text=[str(round(prct, 2)) + " %" for prct in df.pop_vac],
textposition='auto',))
fig.update_layout(
title={
'text': "% de population ayant reçu au moins 1 dose de vaccin",
'y':0.95,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'},
titlefont = dict(
size=20),
annotations = [
dict(
x=0,
y=1.07,
xref='paper',
yref='paper',
font=dict(size=14),
text='{}. Données : Santé publique France. Auteur : <b>@GuillaumeRozier - covidtracker.fr.</b>'.format(datetime.strptime("2021-01-27", '%Y-%m-%d').strftime('%d %b')),
showarrow = False
),
]
)
fig.update_yaxes(range=[0, 100])
fig.show()
fig = go.Figure()
fig.add_trace(go.Pie(
labels=[str(age) + " ans" for age in df.index[:-1]]+["+ 80 ans"],
values=df.n_dose1_pourcent,
text=[str(prct) + "" for prct in df.n_dose1],
textposition='auto',))
fig.update_layout(
title={
'text': "Nombre de vaccinés par tranche d'âge",
'y':0.95,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'},
titlefont = dict(
size=20),
annotations = [
dict(
x=0,
y=1.07,
xref='paper',
yref='paper',
font=dict(size=14),
text='{}. Données : Santé publique France. Auteur : <b>@GuillaumeRozier - covidtracker.fr.</b>'.format(datetime.strptime("2021-01-27", '%Y-%m-%d').strftime('%d %b')),
showarrow = False
),
]
)
fig.show()
#locale.setlocale(locale.LC_ALL, 'fr_FR.UTF-8')
import random
import numpy as np
n_sain = 20000
x_sain = np.random.rand(1, n_sain)[0]*100
values_sain = np.random.rand(1, n_sain)[0]*100
x_az = np.random.rand(1,30)[0]*100
values_az = np.random.rand(1,30)[0]*100
fig = go.Figure()
for idx in range(len(x_sain)):
fig.add_trace(go.Scatter(
x=[x_sain[idx]],
y=[values_sain[idx]],
mode="markers",
showlegend=False,
marker_color="rgba(14, 201, 4, 0.5)", #"rgba(0, 0, 0, 0.5)",
marker_size=2))
fig.add_trace(go.Scatter(
x=x_az,
y=values_az,
mode="markers",
showlegend=False,
marker_color="rgba(201, 4, 4,0.5)", #"rgba(0, 0, 0, 0.5)",
marker_size=2))
fig.update_yaxes(range=[0, 100], visible=False)
fig.update_xaxes(range=[0, 100], nticks=10)
fig.update_layout(
plot_bgcolor='rgb(255,255,255)',
title={
'text': "Admissions en réanimation pour Covid19",
'y':0.90,
'x':0.5,
'xanchor': 'center',
'yanchor': 'top'},
titlefont = dict(
size=20),
annotations = [
dict(
x=0.5,
y=1.2,
xref='paper',
yref='paper',
text='Auteur : covidtracker.fr.',
showarrow = False
)]
)
fig.write_image(PATH + "images/charts/france/points_astrazeneca.jpeg", scale=4, width=800, height=350)
import numpy as np
np.random.rand(1,20000000)
```
## The first step in gap analysis is to determine the AEP based on operational data.
```
%load_ext autoreload
%autoreload 2
```
This notebook provides an overview and walk-through of the steps taken to produce a plant-level operational energy assessment (OA) of a wind plant in the PRUF project. The La Haute-Borne wind farm is used here and throughout the example notebooks.
Uncertainty in the annual energy production (AEP) estimate is calculated through a Monte Carlo approach. Specifically, inputs into the OA code as well as intermediate calculations are randomly sampled based on their specified or calculated uncertainties. By performing the OA assessment thousands of times under different combinations of the random sampling, a distribution of AEP values results from which uncertainty can be deduced. Details on the Monte Carlo approach will be provided throughout this notebook.
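Schematically, the Monte Carlo approach looks like the following (made-up numbers and a deliberately simplified AEP function; the real sampling, corrections, and uncertainty models are handled inside OpenOA):

```python
import numpy as np

rng = np.random.default_rng(0)

def aep_estimate(gross_energy_gwh, loss_fraction):
    # deliberately simplified: AEP = gross energy minus losses
    return gross_energy_gwh * (1 - loss_fraction)

n_sim = 10_000
gross = rng.normal(120.0, 3.0, n_sim)   # gross energy: 120 +/- 3 GWh (made up)
losses = rng.normal(0.08, 0.01, n_sim)  # total losses: 8 % +/- 1 % (made up)
aep_dist = aep_estimate(gross, losses)

print(round(aep_dist.mean(), 1))  # central estimate, close to 120 * 0.92 = 110.4 GWh
print(round(aep_dist.std() / aep_dist.mean() * 100, 1))  # relative uncertainty in %
```

Each simulation draws one value of every uncertain input; repeating this thousands of times yields a distribution of AEP values from which both a central estimate and an uncertainty can be read off.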
### Step 1: Import plant data into notebook
A zip file included in the OpenOA 'examples/data' folder needs to be unzipped to run this step. Note that this zip file should be unzipped automatically as part of the project.prepare() function call below. Once unzipped, 4 CSV files will appear in the 'examples/data/la_haute_borne' folder.
```
# Import required packages
import os
import matplotlib.pyplot as plt
import numpy as np
import statsmodels.api as sm
import pandas as pd
import copy
from project_ENGIE import Project_Engie
from operational_analysis.methods import plant_analysis
```
In the call below, make sure the appropriate path to the CSV input files is specified. In this example, the CSV files are located directly in the 'examples/data/la_haute_borne' folder.
```
# Load plant object
project = Project_Engie('./data/la_haute_borne')
# Prepare data
project.prepare()
```
### Step 2: Review the data
Several Pandas data frames have now been loaded. Histograms showing the distribution of the plant-level metered energy, availability, and curtailment are shown below:
```
# Review plant data
fig, (ax1, ax2, ax3) = plt.subplots(ncols = 3, figsize = (15,5))
ax1.hist(project._meter.df['energy_kwh'], 40) # Metered energy data
ax2.hist(project._curtail.df['availability_kwh'], 40) # Curtailment and availability loss data
ax3.hist(project._curtail.df['curtailment_kwh'], 40) # Curtailment and availability loss data
plt.tight_layout()
plt.show()
```
### Step 3: Process the data into monthly averages and sums
The raw plant data can be in different time resolutions (in this case 10-minute periods). The following steps process the data into monthly averages and combine them into a single 'monthly' data frame to be used in the OA assessment.
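As a rough sketch of what such a roll-up looks like with pandas (the column names and values here are illustrative, not the project's actual data structures), 10-minute energy data can be resampled to calendar-month sums while also tracking data completeness:

```python
import numpy as np
import pandas as pd

# Illustrative 10-minute energy series (kWh per interval): 59 days of data.
idx = pd.date_range("2015-01-01", periods=8496, freq="10min")
ten_min = pd.DataFrame({"energy_kwh": np.full(len(idx), 5.0)}, index=idx)

# Roll up to calendar months; 'MS' labels each month by its start date.
monthly = pd.DataFrame({
    "energy_kwh": ten_min["energy_kwh"].resample("MS").sum(),
    # Fraction of missing 10-minute intervals per month (cf. energy_nan_perc below).
    "energy_nan_perc": ten_min["energy_kwh"].isna().resample("MS").mean(),
})
```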
```
project._meter.df.head()
```
First, we'll create a MonteCarloAEP object which is used to calculate long-term AEP. Two reanalysis products are specified as arguments.
```
pa = plant_analysis.MonteCarloAEP(project, reanal_products = ['era5', 'merra2'])
```
Let's view the result. Note the extra fields we've calculated that we'll use later for filtering:
- energy_nan_perc : the percentage of NaN values in the raw revenue meter data used in calculating the monthly sum. If this value is too large, we shouldn't include this month
- nan_flag : if too much energy, availability, or curtailment data was missing for a given month, flag the result
- num_days_expected : number of days in the month (useful for normalizing monthly gross energy later)
- num_days_actual : actual number of days per month as found in the data (used when trimming monthly data frame)
```
# View the monthly data frame
pa._aggregate.df.head()
```
### Step 4: Review reanalysis data
Reanalysis data will be used to correct the operational energy over the plant's period of operation to the long term. It is important that we only use reanalysis data that show reasonable trends over time with no noticeable discontinuities. A plot like the one below, in which normalized annual wind speeds are shown from 1997 to the present, provides a good first look at data quality.
The plot shows that both of the reanalysis products track each other reasonably well and seem well-suited for the analysis.
```
pa.plot_reanalysis_normalized_rolling_monthly_windspeed().show()
```
### Step 5: Review energy and loss data
It is useful to take a look at the energy data and make sure the values make sense. We begin with scatter plots of gross energy and wind speed for each reanalysis product. We also show a time series of gross energy, as well as availability and curtailment loss.
Let's start with the scatter plots of gross energy vs wind speed for each reanalysis product. Here we use the 'Robust Linear Model' (RLM) module of the Statsmodels package with the default Huber algorithm to produce a regression fit that excludes outliers. Data points in red show the outliers, and were excluded based on a Huber sensitivity factor of 3.0 (the factor is varied between 2.0 and 3.0 in the Monte Carlo simulation).
The plots below reveal that:
- There are some outliers
- Both reanalysis products are strongly correlated with plant energy
```
pa.plot_reanalysis_gross_energy_data(outlier_thres=3).show()
```
Next we show time series plots of the monthly gross energy, availability, and curtailment. Note that the availability and curtailment data were estimated based on SCADA data from the plant.
Long-term availability and curtailment losses for the plant are calculated based on average percentage losses for each calendar month. Summing those average values weighted by the fraction of long-term gross energy generated in each month yields the long-term annual estimates. Weighting by monthly long-term gross energy helps account for potential correlation between losses and energy production (e.g., high availability losses in summer months with lower energy production). The long-term losses are calculated in Step 9.
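A minimal numpy sketch of that weighting (with made-up monthly values, not the plant's) shows why the energy-weighted loss differs from a plain average when high-loss months coincide with low production:

```python
import numpy as np

# Illustrative calendar-month averages: higher losses in two summer months,
# which also produce less energy.
avail_loss_by_month = np.array([0.02] * 5 + [0.06] * 2 + [0.02] * 5)
gross_by_month = np.array([10.0] * 5 + [6.0] * 2 + [10.0] * 5)  # GWh

# Energy-weighted long-term availability loss.
weights = gross_by_month / gross_by_month.sum()
lt_avail_loss = np.sum(avail_loss_by_month * weights)

# A simple unweighted mean overstates the loss here, since the
# high-loss months contribute less energy.
plain_mean = avail_loss_by_month.mean()
```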
```
pa.plot_aggregate_plant_data_timeseries().show()
```
### Step 6: Specify availability and curtailment data not representative of actual plant performance
There may be anomalies in the reported availability that shouldn't be considered representative of actual plant performance. Force majeure events (e.g. lightning) are a good example. Such losses aren't typically considered in pre-construction AEP estimates; therefore, plant availability loss reported in an operational AEP analysis should also not include such losses.
The 'availability_typical' and 'curtailment_typical' fields in the monthly data frame are initially set to True. Below, individual months can be set to False if it is deemed those months are unrepresentative of long-term plant losses. By flagging these months as False, they will be omitted when assessing average availability and curtailment loss for the plant.
Justification for removing months from assessing average availability or curtailment should come from conversations with the owner/operator. For example, if a high-loss month is found, reasons for the high loss should be discussed with the owner/operator to determine if those losses can be considered representative of average plant operation.
```
# For illustrative purposes, let's suppose a few months aren't representative of long-term losses
pa._aggregate.df.loc['2014-11-01',['availability_typical','curtailment_typical']] = False
pa._aggregate.df.loc['2015-07-01',['availability_typical','curtailment_typical']] = False
```
### Step 7: Select reanalysis products to use
Based on the assessment of reanalysis products above (both the long-term trend and the relationship with plant energy), we now set which reanalysis products we will include in the OA. For this particular case study, we use both products given their strong regression relationships with plant energy.
### Step 8: Set up Monte Carlo inputs
The next step is to set up the Monte Carlo framework for the analysis. Specifically, we identify each source of uncertainty in the OA estimate and use that uncertainty to create distributions of the input and intermediate variables from which we can sample for each iteration of the OA code. For input variables, we can create such distributions beforehand. For intermediate variables, we must sample separately for each iteration.
Detailed descriptions of the sampled Monte Carlo inputs, which can be specified when initializing the MonteCarloAEP object if values other than the defaults are desired, are provided below:
- slope, intercept, and num_outliers : These are intermediate variables that are calculated for each iteration of the code
- outlier_threshold : Sample values between 2 and 3 which set the Huber algorithm outlier detection parameter. Varying this threshold accounts for analyst subjectivity on what data points constitute outliers and which do not.
- metered_energy_fraction : Revenue meter energy measurements are associated with a measurement uncertainty of around 0.5%. This uncertainty is used to create a distribution centered at 1 (and with standard deviation therefore of 0.005). This column represents random samples from that distribution. For each iteration of the OA code, a value from this column is multiplied by the monthly revenue meter energy data before the data enter the OA code, thereby capturing the 0.5% uncertainty.
- loss_fraction : Reported availability and curtailment losses are estimates and are associated with uncertainty. For now, we assume the reported values are associated with an uncertainty of 5%. Similar to above, we therefore create a distribution centered at 1 (with std of 0.05) from which we sample for each iteration of the OA code. These sampled values are then multiplied by the availability and curtailment data independently before entering the OA code to capture the 5% uncertainty in the reported values.
- num_years_windiness : This intends to capture the uncertainty associated with the number of historical years an analyst chooses to use in the windiness correction. The industry standard is typically 20 years and is based on the assumption that year-to-year wind speeds are uncorrelated. However, a growing body of research suggests that there is some correlation in year-to-year wind speeds and that there are trends in the resource on the decadal timescale. To capture this uncertainty both in the long-term trend of the resource and the analyst choice, we randomly sample integer values between 10 and 20 as the number of years to use in the windiness correction.
- loss_threshold : Due to uncertainty in reported availability and curtailment estimates, months with high combined losses are associated with high uncertainty in the calculated gross energy. It is common to remove such data from analysis. For this analysis, we randomly sample float values between 0.1 and 0.2 (i.e. 10% and 20%) to serve as criteria for the combined availability and curtailment losses. Specifically, months are excluded from analysis if their combined losses exceed that criterion for the given OA iteration.
- reanalysis_product : This captures the uncertainty of using different reanalysis products and, lacking a better method, is a proxy way of capturing uncertainty in the modelled monthly wind speeds. For each iteration of the OA code, one of the reanalysis products that we've already determined as valid (see the cells above) is selected.
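The pre-sampled input distributions described above could be built ahead of time along these lines (a sketch with assumed default ranges; the real MonteCarloAEP object constructs its own internal table):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
num_sim = 2000

inputs = pd.DataFrame({
    # Normal distributions centered at 1 for the measurement uncertainties.
    "metered_energy_fraction": rng.normal(1.0, 0.005, num_sim),
    "loss_fraction": rng.normal(1.0, 0.05, num_sim),
    # Uniform choices for the analyst-driven parameters.
    "num_years_windiness": rng.integers(10, 21, num_sim),  # 10..20 inclusive
    "loss_threshold": rng.uniform(0.1, 0.2, num_sim),
    "outlier_threshold": rng.uniform(2.0, 3.0, num_sim),
    "reanalysis_product": rng.choice(["era5", "merra2"], num_sim),
})
```

Each OA iteration then reads one row of this table; the intermediate variables (slope, intercept, num_outliers) are sampled inside the loop instead.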
### Step 9: Run the OA code
We're now ready to run the Monte-Carlo based OA code. We repeat the OA process "num_sim" times using different sampling combinations of the input and intermediate variables to produce a distribution of AEP values.
A single line of code here in the notebook performs this step, but below is more detail on what is being done.
Steps in OA process:
- Set the wind speed and gross energy data to be used in the regression based on i) the reanalysis product to be used (Monte-Carlo sampled); ii) the NaN energy data criteria (1%); iii) Combined availability and curtailment loss criteria (Monte-Carlo sampled); and iv) the outlier criteria (Monte-Carlo sampled)
- Normalize gross energy to 30-day months
- Perform linear regression and determine slope and intercept values, their standard errors, and the covariance between the two
- Use the information above to create distributions of possible slope and intercept values (e.g. mean equal to slope, std equal to the standard error) from which we randomly sample a slope and intercept value (note that slope and intercept values are highly negatively-correlated so the sampling from both distributions are constrained accordingly)
- To perform the long-term correction, first determine the long-term monthly average wind speeds (i.e. average January wind speed, average February wind speed, etc.) based on a 10-20 year historical period as determined by the Monte Carlo process.
- Apply the Monte-Carlo sampled slope and intercept values to the long-term monthly average wind speeds to calculate long-term monthly gross energy
- 'Denormalize' monthly long-term gross energy back to the normal number of days
- Calculate AEP by subtracting out the long-term availability loss (curtailment loss is left in as part of AEP)
```
# Run Monte-Carlo based OA
pa.run(num_sim=2000, reanal_subset=['era5', 'merra2'])
```
The key result is shown below: a distribution of AEP values from which uncertainty can be deduced. In this case, uncertainty is around 9%.
```
# Plot a distribution of AEP values from the Monte-Carlo OA method
pa.plot_result_aep_distributions().show()
```
### Step 10: Post-analysis visualization
Here we show some supplementary results of the Monte Carlo OA approach to help illustrate how it works.
First, it's worth looking at the Monte-Carlo tracker data frame again, now that the slope, intercept, and number of outlier fields have been completed. Note that for transparency, debugging, and analysis purposes, we've also included in the tracker data frame the number of data points used in the regression.
```
# Produce histograms of the various MC-parameters
mc_reg = pd.DataFrame(data = {'slope': pa._mc_slope.ravel(),
'intercept': pa._mc_intercept,
'num_points': pa._mc_num_points,
'metered_energy_fraction': pa._inputs.metered_energy_fraction,
'loss_fraction': pa._inputs.loss_fraction,
'num_years_windiness': pa._inputs.num_years_windiness,
'loss_threshold': pa._inputs.loss_threshold,
'reanalysis_product': pa._inputs.reanalysis_product})
```
It's useful to plot distributions of each variable to show what is happening in the Monte Carlo OA method. Based on the plot below, we observe the following:
- The metered_energy_fraction and loss_fraction samples follow normal distributions, as expected
- The slope and intercept distributions appear normally distributed, even though different reanalysis products are considered, resulting in different regression relationships. This is likely because the reanalysis products agree with each other closely.
- 24 data points were used for all iterations, indicating that there was no variation in the number of outlier months removed
- We see approximately equal sampling of the num_years_windiness, loss_threshold, and reanalysis_product, as expected
```
plt.figure(figsize=(15,15))
for s in np.arange(mc_reg.shape[1]):
plt.subplot(4,3,s+1)
plt.hist(mc_reg.iloc[:,s],40)
plt.title(mc_reg.columns[s])
plt.show()
```
It's worth highlighting the inverse relationship between slope and intercept values under the Monte Carlo approach. As stated earlier, slope and intercept values are strongly negatively correlated (e.g. slope goes up, intercept goes down) which is captured by the covariance result when performing linear regression. By constrained random sampling of slope and intercept values based on this covariance, we ensure we aren't sampling unrealistic combinations.
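That constrained sampling amounts to drawing from a joint (multivariate normal) distribution rather than sampling slope and intercept independently. A sketch with made-up regression estimates:

```python
import numpy as np

rng = np.random.default_rng(7)

# Regression outputs (illustrative numbers): estimates and their covariance.
slope_hat, intercept_hat = 2.0, -1.5
cov = np.array([[0.04, -0.05],
                [-0.05, 0.09]])  # strong negative slope/intercept covariance

# Jointly sample slope/intercept pairs consistent with that covariance,
# rather than sampling each marginal independently.
samples = rng.multivariate_normal([slope_hat, intercept_hat], cov, size=5000)
slopes, intercepts = samples[:, 0], samples[:, 1]

# The sampled pairs inherit the negative correlation from the covariance matrix.
corr = np.corrcoef(slopes, intercepts)[0, 1]
```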
The plot below shows that the values are being sampled appropriately.
```
# Produce a scatter plot of the sampled slope and intercept values.
# Here we focus on the ERA-5 data
plt.figure(figsize=(8,6))
plt.plot(mc_reg.intercept[mc_reg.reanalysis_product =='era5'],mc_reg.slope[mc_reg.reanalysis_product =='era5'],'.')
plt.xlabel('Intercept (GWh)')
plt.ylabel('Slope (GWh / (m/s))')
plt.show()
```
We can look further at the influence of certain Monte Carlo parameters on the AEP result. For example, let's see what effect the choice of reanalysis product has on the result:
```
# Boxplot of AEP based on choice of reanalysis product
tmp_df=pd.DataFrame(data={'aep':pa.results.aep_GWh,'reanalysis_product':mc_reg['reanalysis_product']})
tmp_df.boxplot(column='aep',by='reanalysis_product',figsize=(8,6))
plt.ylabel('AEP (GWh/yr)')
plt.xlabel('Reanalysis product')
plt.title('AEP estimates by reanalysis product')
plt.suptitle("")
plt.show()
```
In this case, the two reanalysis products lead to similar AEP estimates, although MERRA2 yields slightly higher uncertainty.
We can also look at the effect on the number of years used in the windiness correction:
```
# Boxplot of AEP based on number of years in windiness correction
tmp_df=pd.DataFrame(data={'aep':pa.results.aep_GWh,'num_years_windiness':mc_reg['num_years_windiness']})
tmp_df.boxplot(column='aep',by='num_years_windiness',figsize=(8,6))
plt.ylabel('AEP (GWh/yr)')
plt.xlabel('Number of years in windiness correction')
plt.title('AEP estimates by windiness years')
plt.suptitle("")
plt.show()
```
As seen above, the number of years used in the windiness correction does not significantly impact the AEP estimate.
# Homework 1: Coding
<b>Important</b>: when you submit this file to gradescope, it should contain only method definitions (except the imports and `test_data`, `username` definitions below). To test your work, you need to use `@publictest` to decorate test methods and they will be executed for you here. You can use `test_data` to store data for your tests. See for example the `test_train_sklearn` method below.
<b>Important</b>: gradescope will sometimes use the output of the public test cases in this file to assign points. Make sure you submit a version of this notebook that includes the outputs of the `@publictest` cells we provide for you. Additional tests you define for your own purposes will be ignored.
Your code should fully fit between these comments:
```# >> Your code starts here. << ```
```# >> Your code ends here. << ```
`utils` contains a few useful methods and classes you need to use. Firstly:
- `load_mnist` -- loads mnist data, returns `Splits` namedtuple.
- namedtuple `Hypers(epochs: int, learning_rate: float, batch_size: int)` -- training hyper parameters
- namedtuple `Splits(train: Dataset, test: Dataset, valid: Dataset)` -- stores data splits
- namedtuple `Dataset(X: numpy.array, y: numpy.array)` -- stores a dataset
- namedtuple `LinearModel(W: numpy.array, b: numpy.array)` -- stores a linear model
- `Visualize` -- several visualization utilities. See `test_explain` on sample usage.
- `timeit` -- times a thunk, relevant to the last deep learning exercise.
- `check_submission` -- tests whether this notebook has the required definitions.
First, run the following command to get the python scripts needed.
```
import os
if (not os.path.exists("utils.py")) or (not os.path.exists("requirements.txt")):
print("downloading requirements")
assert(0 == os.system("wget -O hw1_scripts.tar.gz "
+ "https://www.dropbox.com/s/fvvoag1vl3ueulb/hw1_scripts.tar.gz?dl=0"
))
assert(0 == os.system("tar xvzf hw1_scripts.tar.gz"))
assert(0 == os.system("rm hw1_scripts.tar.gz"))
assert(0 == os.system("pip install -r requirements.txt"))
```
Install the required dependencies
```
!pip install -r requirements.txt
username = "" # Your username
### LEAVE THE REST OF THIS CELL AS IS ###
from utils import load_mnist, load_cifar, Hypers, Splits, Dataset, LinearModel, Visualize, exercise, timeit, check_submission
from bunch import Bunch
import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Model
from google.colab import drive, auth
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from oauth2client.client import GoogleCredentials
test_data = Bunch() # Store anything you need between your personal test cases in this bunch.
# Decorate your personal tests with @publictest so they are not executed under gradescope. The decorator
# also controls the random seeds set for numpy and keras so it should produce repeatable result. You can
# change the seed to try out different outcomes by passing the seed argument to the decorator like below.
def publictest(_func=None, *, seed=42):
def wf(f):
def wwf():
exercise(username=username, seed=seed, cname=f.__name__)
f()
if __name__ == "__main__":
wwf()
if _func is None:
return wf
else:
return wf(_func)
@publictest(seed=42)
def initialize_tests():
test_data.mnist = load_mnist(flatten=True)
test_data.cifar = load_cifar(flatten=False)
v1 = Visualize.images(10, title="example MNIST training images")
v1(test_data.mnist.train.X[0:10])
v2 = Visualize.images(10, title="example CIFAR training images")
v2(test_data.cifar.train.X[0:10])
# Additional imports needed for your solutions.
# >> Your code starts here. <<
# >> Your code ends here. <<
```
## Part 1: Logistic regression implementations
Implement a logistic regression to classify images of hand-written digits in the `MNIST` dataset. In this
dataset, each input image is of size $28 \times 28$ and reshaped into a size $784$ vector. The
output is an integer from 0 to 9 representing the image class. <b>You need to solve the same
problem three ways</b>: using `scikit-learn`, using `keras`, and using `numpy`. For the numpy version, you will need to implement stochastic gradient descent.
### logistic regression using scikit-learn
Use sklearn to train a logistic regression model, extract the parameters it produces and define methods for using those parameters to make predictions. The methods you need to provide are:
- `train_sklearn`
- `softmax` -- the softmax activation function. Be aware that your input is a batch of
data of size (N, 10) where N is the batch size.
- `scores`
- `forward`
- `evaluate` -- accuracy measurement. Given the model parameters and test data, computes
the predictions and returns the accuracy of the predictions relative to the given test data.
The scaffolding around these is provided for you in `test_train_sklearn`.
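For reference, a numerically stable batch softmax typically subtracts the row-wise maximum before exponentiating. This is a generic sketch of the standard pattern, not necessarily the expected homework solution:

```python
import numpy as np

def softmax_rows(y):
    """Softmax along the last axis of an (N, C) batch of logits."""
    # Subtracting the row max leaves the result unchanged (softmax is
    # shift-invariant) but avoids overflow in np.exp for large logits.
    shifted = y - y.max(axis=-1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=-1, keepdims=True)

probs = softmax_rows(np.array([[1.0, 2.0, 3.0],
                               [1000.0, 1000.0, 1000.0]]))
```

Note that a naive `np.exp(y)` would overflow on the second row; the shifted version handles it cleanly.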
```
def train_sklearn(hypers: Hypers, dataset: Dataset):
"""train_sklearn Train scikit-learn multiclass one-vs-rest Logistic Regression model directly
on data.
Arguments:
hypers {Hypers} -- Training hyper-parameters, only epochs is relevant to this method.
You can ignore the rest.
dataset {Dataset} -- Input dataset tuple with X and y
Returns:
{LinearModel} -- Tuple with W=Optimal weight and b=bias
"""
# >> Your code starts here. <<
# >> Your code ends here. <<
def softmax(y: np.ndarray):
"""softmax Softmax activation function
Arguments:
y {np.float np.ndarray} -- Input logits
Returns:
{np.float np.ndarray} -- Post-softmax probits
"""
# >> Your code starts here. <<
# >> Your code ends here. <<
def scores(X: np.ndarray, model: LinearModel):
"""Return pre-softmax scores."""
# >> Your code starts here. <<
# >> Your code ends here. <<
def forward(X: np.ndarray, model: LinearModel):
"""Return linear model output probabilities. """
# >> Your code starts here. <<
# >> Your code ends here. <<
def evaluate(dataset: Dataset, model: LinearModel):
"""test_accuracy Evaluate the model accuracy on the test dataset
Arguments:
dataset {Dataset} -- dataset
model {LinearModel} -- linear model
"""
# >> Your code starts here. <<
# >> Your code ends here. <<
@publictest
def test_train_sklearn():
mnist = test_data.mnist
model = train_sklearn(Hypers(epochs=2), mnist.train)
test_accuracy = evaluate(dataset=mnist.test, model=model)
print("Test accuracy [SKLEARN]:", test_accuracy)
if test_accuracy >= 0.80:
print("ACCURACY OK")
else:
print("ACCURACY FAIL")
test_data.model_sklearn = model
```
### Logistic regression and stochastic gradient descent using numpy
#### Exercise
Your program in this section should be self-contained and independent from other sections; do not
make use of any functions other than `numpy` methods. <b>No Keras or scikit-learn</b> methods are allowed in this section. Furthermore, other than looping over epochs or batches,
<b>do not use loops in your code</b>. Processing instances in a batch or values in an instance
should be done using numpy vector/matrix/tensor operations. Loops include `for` and
`while` statements, comprehensions, generators, and recursion.
Complete methods `onehot`, `backward`, and `sgd`. The scaffolding around these is provided for you in `test_numpy`.
- `onehot`
- `backward`
- `sgd` -- mini-batch Stochastic Gradient Descent. Computes the optimal weight and bias
after the given number of epochs. You should use the training parameters specified in the method
arguments.
```
def onehot(dataset: Dataset):
y_onehot = None # dataset.y converted into onehot
# >> Your code starts here. <<
# >> Your code ends here. <<
return dataset._replace(y=y_onehot)
def backward(dataset: Dataset, model: LinearModel):
"""
Return dLdW and dLdb .
"""
dLdW = None
dLdb = None
# >> Your code starts here. <<
# >> Your code ends here. <<
return dLdW, dLdb
def sgd(hypers: Hypers, dataset: Dataset):
"""sgd Run SGD optimization all the parameters
Arguments:
hypers {Hypers} -- training hyper parameters
dataset {Dataset} -- training data
Returns: LinearModel with
W {np.float np.ndarray} -- Learned weight
b {np.float np.ndarray} -- and bias
"""
n, m = dataset.X.shape
n_class = dataset.y.shape[1]
W = np.zeros((m, n_class))
b = np.zeros((n_class, ))
# >> Your code starts here. <<
# >> Your code ends here. <<
@publictest
def test_numpy():
hypers = Hypers(
epochs=5,
learning_rate=1.0,
batch_size=64
)
mnist = test_data.mnist
model = sgd(hypers, dataset=onehot(mnist.train))
test_accuracy = evaluate(mnist.test, model)
print("Test accuracy [NUMPY SGD]:", test_accuracy)
if test_accuracy >= 0.80:
print("ACCURACY OK")
else:
print("ACCURACY FAIL")
test_data.model_numpy = model
```
### Logistic regression using Keras
Keras is a high-level neural network API which runs on top of a TensorFlow, CNTK, or Theano
backend. Typically one can choose the backend if more than one is installed; however, we
will be using TensorFlow exclusively.
#### Exercise
Complete `build_keras_model` and `train_keras_model`. You may want to consult
the documentation for Keras on [Keras docs](https://keras.io/). Your program should contain the following
parts with each no more than a line or two:
- The scaffolding around the two required methods is given to you in `test_keras`. This method should run and produce your model's accuracy.
- Create the logistic regression model using canonical Keras via the Sequential or Functional
approach.
- Compile your model with the desired loss function, optimizer, and metrics.
- Fit the training data (you can also specify validation data using the validation set
here).
- Predict on the test data and report the test accuracy (the percentage of images correctly
classified).
```
def build_keras_model(input_dim=784, num_class=10):
"""build_model Build a Keras model of logistic regression
Keyword Arguments:
input_dim {int} -- The number of dimensions for the input data
(default: {784})
num_class {int} -- The number of classes (default: {10})
Returns:
{keras.models.Model} -- Your logistic regression model. It is a
keras.model.Model object created by either a sequential way or a
functional way.
"""
model: Model = None
# >> Your code starts here. <<
# >> Your code ends here. <<
return model
def train_keras_model(hypers: Hypers, keras_model: Model, splits: Splits):
"""train the Keras model using compile and fit
Keyword Arguments:
hypers {Hypers} -- training hyper parameters
Splits(train: Dataset, test: Dataset, valid: Dataset) -- stores data splits
Returns:
{LinearModel} -- Tuple with W=Optimal weight and b=bias
"""
model: LinearModel = None
# >> Your code starts here. <<
# >> Your code ends here. <<
return model
@publictest
def test_keras():
hypers = Hypers(
epochs=2,
learning_rate=0.1,
batch_size=64
)
mnist = test_data.mnist
input_dim = mnist.train.X.shape[1]
num_class = len(set(mnist.train.y))
keras_model = build_keras_model(
input_dim=input_dim,
num_class=num_class
)
model = train_keras_model(
hypers=hypers,
keras_model=keras_model,
splits=mnist
)
test_accuracy = evaluate(dataset=mnist.test, model=model)
print("Test accuracy:", test_accuracy)
if test_accuracy >= 0.80:
print("ACCURACY OK")
else:
print("ACCURACY FAIL")
test_data.model_keras = model
```
## Part 2: Applications
### Application: Explanations
Complete the following methods. Scaffolding/test is provided in the `test_explain` method and `most_wrong` can be used to find interesting examples to explain.
- `attribution`
- `explain`
```
def attribution(model: LinearModel, X, y_explained):
"""
Complete the attribution function for pre-softmax logistic regression
Returns:
Attribution vector which is the same size as X
"""
# >> Your code starts here. <<
# >> Your code ends here. <<
def explain(model: LinearModel, X, y_explained):
"""
An explanation is the
element-wise product of an input x and the attribution a for a given prediction
Returns:
Explanation vector which is the same size as X
"""
# >> Your code starts here. <<
# >> Your code ends here. <<
def most_wrong(model: LinearModel, dataset: Dataset, y_wrong: int):
"""
Finds instances in a given dataset that are most confidently but incorrectly
predicted by a given model as the given class. Returned are the most-wrong input
and its correct class."""
scores = forward(dataset.X, model)
preds = scores.argmax(axis=1)
indices = (preds != dataset.y) * (preds == y_wrong)
wrongs = Dataset(
X=dataset.X[indices],
y=dataset.y[indices]
)
wrong_scores = scores[indices]
worst_index = np.argsort(wrong_scores.max(axis=1))[0]
return wrongs.X[worst_index], wrongs.y[worst_index]
@publictest
def test_explain():
model = test_data.model_numpy
mnist = test_data.mnist
x = mnist.train.X[5]
c = 4 # class
a = attribution(model, x, c)
# check completeness with some baselines
for baseline in [np.ones_like(x), np.zeros_like(x)]:
print("attribution complete?: ",
abs(((x-baseline)*a).sum() -
(scores(x, model)[c] - scores(baseline, model)[c]))
< 0.0000001)
# visualize attributions and explanations
v_pos = Visualize.influences(10, title="attributions")
v_pos([attribution(model, x, i).reshape(28,28) for i in range(10)])
v_pos = Visualize.influences(10, title="explanations")
v_pos([explain(model, x, i).reshape(28,28) for i in range(10)])
target_y = 9
wrong, wrong_y = most_wrong(model, mnist.train, target_y)
v = Visualize.images(1, title="most wrong example")
v(wrong)
v_neg = Visualize.influences(2, title="explanation for correct class, predicted class")
v_neg([explain(model, wrong, i).reshape(28,28) for i in [wrong_y, target_y]])
print("correct class = ", wrong_y)
print("predicted class = ", target_y)
```
### Application: model stealing
Complete the following methods. A test is provided in `test_invert`.
- `invert`
```
def invert(f):
"""Produce LinearModel(W, b) with only functional interface to pre-softmax scores of a linear model."""
b = None
W = None
# >> Your code starts here. <<
# >> Your code ends here. <<
return LinearModel(W, b)
@publictest
def test_invert():
model = test_data.model_numpy
model_inv = invert(lambda x: scores(X=x.reshape((1,28*28)), model=model)[0])
print("W match?", (abs(model_inv.W - model.W) < 0.000001).all())
print("b match?", (abs(model_inv.b - model.b) < 0.000001).all())
```
### Application: adversarial attacks
Complete the following method. `test_attack` provides a use case.
- `attack`
```
def attack(model, x, y_target):
"""
Transform x into a valid image in [0,1] that makes model W,b indifferent between y_real and y_target.
Returns:
Transformed x
"""
x = x.copy() # working on x directly would pollute your dataset otherwise
# >> Your code starts here. <<
# >> Your code ends here. <<
return x
@publictest
def test_attack():
model = test_data.model_numpy
mnist = test_data.mnist
target_y = 0
x, y = mnist.test.X[0], mnist.test.y[0]
xa = attack(model, x, target_y)
print("attack success?", scores(xa, model)[target_y] >= max(scores(xa, model)))
v = Visualize.images(2, title="original, attacked")
v(x, xa)
vdiff = Visualize.influences(1, title="delta")
vdiff(x - xa)
print("delta from original:")
print(*[f"L_{o}={np.linalg.norm(x-xa, ord=o)}" for o in [0, 1, 2, np.inf]])
```
## Part 3: Intense training using GPU
Implement the following method and test with `test_cifar_model`.
- `train_cifar_model`
```
def train_cifar_model(hypers: Hypers, cifar: Splits):
"""
Compile and fit a Keras model for training using the CIFAR dataset.
You are free to choose the number/type of layers you want in the model as
long as the accuracy is >= 70%, as tested in the following public test.
Keyword Arguments:
hypers {Hypers} -- training hyper parameters
Splits(train: Dataset, test: Dataset, valid: Dataset) -- stores data splits
Returns:
{keras.models.Model}
"""
model: Model = None
# >> Your code starts here. <<
# >> Your code ends here. <<
return model
@publictest
def test_cifar_model():
cifar_2d = test_data.cifar
# You can tune the hyper parameters to suit your model.
hypers = Hypers(epochs=50, learning_rate=0.1, batch_size=512)
model, time = timeit(lambda: train_cifar_model(hypers, cifar_2d))
test_data.cifar_model = model
test_accuracy = np.mean(
np.argmax(model.predict(cifar_2d.test.X), axis=1) == cifar_2d.test.y
)
print("Test accuracy:", test_accuracy)
if test_accuracy >= 0.70:
print("ACCURACY OK")
else:
print("ACCURACY FAIL")
print("Training time:", time)
if time.total_seconds() < 1000:
print("TIME OK")
else:
print("TIME FAIL")
```
# Check your submission
Make sure all of the methods originally part of this notebook are defined. The first time you run this, you should get a google drive authentication prompt. This is due to this notebook being stored on google drive and thus needs to be retrieved before checking contents.
```
@publictest
def check():
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# Find solution notebook in your drive and make a copy here in colab.
fid = drive.ListFile({'q':"title='solution.ipynb'"}).GetList()[0]['id']
f = drive.CreateFile({'id': fid})
f.GetContentFile('solution.ipynb')
check_submission(reqs=[
'train_sklearn', 'softmax', 'scores', 'forward', 'evaluate', 'test_train_sklearn',
'onehot', 'backward', 'sgd', 'test_numpy',
'build_keras_model', 'train_keras_model', 'test_keras',
'explain', 'most_wrong', 'test_explain',
'invert', 'test_invert',
'attack', 'test_attack', 'train_cifar_model', 'test_cifar_model'
])
```
# This notebook shows some basic operations for Quantum Computing
ProjectQ must be installed before you execute it.
### 1. Import needed modules
```
from projectq.ops import Ph,H,Measure,X,All
from projectq.meta import Control
from projectq.cengines import MainEngine
import numpy as np
np.set_printoptions(precision=3)
```
This is a helper function to print an n-qubit state. **Just execute it.**
```
def get_state_as_str(eng,qubits,cheat=False,ancilla=True):
import numpy as np
s=""
if (cheat):
print("Cheat: ", eng.backend.cheat())
if (len(qubits)==1):
for i in range(2):
#print("bits:%d%s"%(i,bits))
a=eng.backend.get_amplitude("%d"%(i),qubits)
if (a.real!=0)|(a.imag!=0):
if s!="":
s=s+"+"
a="({:.2f})".format(a)
s=s+"%s|%d>"%(a,i)
else:
for j in range(2**(len(qubits)-1)):
bits=np.binary_repr(j,width=len(qubits)-1)
#print("Bits:",j,bits)
for i in range(2):
#print("bits:%d%s"%(i,bits))
a=eng.backend.get_amplitude("%d%s"%(i,bits[-1::-1]),qubits)
if (a.real!=0)|(a.imag!=0):
if s!="":
s=s+"+"
a="({:.2f})".format(a)
if (ancilla):
s=s+"%s|%s>|%d>"%(a,bits,i)
else:
s=s+"%s|%s%d>"%(a,bits,i)
#print(s)
return(s)
```
# Qubit order
When you allocate a quantum register, take the qubit order into account. The lower index of the quantum register is the lower bit of the binary representation of a quantum state. For example:
1. Allocate a quantum register q with 3 qubits
2. Apply an X gate to the first one using X | q[0]
3. Check the final state, converting it to a number using binary representation, so $|001\rangle = |1\rangle$ and $|010\rangle=|2\rangle$
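This little-endian convention can be sketched in plain NumPy (no ProjectQ needed); qubit `q[i]` contributes `2**i` to the basis-state index:

```python
import numpy as np

def apply_x(state, qubit, n_qubits):
    """Flip `qubit` in an n-qubit state vector (little-endian indexing)."""
    new_state = np.zeros_like(state)
    for idx in range(2 ** n_qubits):
        new_state[idx ^ (1 << qubit)] = state[idx]
    return new_state

state = np.zeros(8)
state[0] = 1.0                                  # |000>
print(int(np.argmax(apply_x(state, 0, 3))))     # 1 -> X on q[0] gives |001> = |1>
print(int(np.argmax(apply_x(state, 1, 3))))     # 2 -> X on q[1] gives |010> = |2>
```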
### 1. Start the Engine
```
eng=MainEngine()
```
### 2. Initialise the quantum state
Check the vector of the quantum state and its binary representation
```
q=eng.allocate_qureg(3)
X | q[2]
eng.flush()
get_state_as_str(eng, q,cheat=True,ancilla=False)
```
### 3. Do not forget to delete the Engine
```
All(Measure) | q
eng.flush()
```
# Phase Kickback
When a qubit that has been initialized to a Walsh-Hadamard state $\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ is used as a control bit for a phase gate $U(\phi)$ on a state $|\Psi\rangle$, the resulting final state is $\frac{1}{\sqrt{2}}(|0\rangle+e^{i\phi}|1\rangle)\otimes|\Psi\rangle$. So, the phase is transferred to the control qubit.
<img src="Images/CPhase.png"/>
This exercise will help you to check it experimentally. Remember, for ProjectQ, qubit[0] is the bottom one of this picture, so, Phase Kickback goes to qubit[1]
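Before running it on the engine, here is a small NumPy sketch of the kickback (abstract ordering with the control as the first tensor factor, which is not ProjectQ's convention):

```python
import numpy as np

phi = np.pi / 4
plus = np.array([1, 1]) / np.sqrt(2)      # control qubit after a Hadamard
psi = np.array([0, 1], dtype=complex)     # arbitrary target state, here |1>

# Controlled-Ph(phi): apply e^{i*phi}*I to the target only when the control is |1>
c_ph = np.block([[np.eye(2), np.zeros((2, 2))],
                 [np.zeros((2, 2)), np.exp(1j * phi) * np.eye(2)]])

out = c_ph @ np.kron(plus, psi)           # control is the first tensor factor here
expected = np.kron(np.array([1, np.exp(1j * phi)]) / np.sqrt(2), psi)
print(np.allclose(out, expected))         # True: the phase moved to the control qubit
```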
### 1. Start the Engine
```
eng=MainEngine()
```
### 2. Allocate two qubits: q1 and q2
```
q1=eng.allocate_qubit()
q2=eng.allocate_qubit()
```
### 3. Apply a Hadamard gate to the first qubit
```
H|q1
```
### 4. Apply a controlled phase gate to the second qubit, controlled by the first qubit
The Phase gate has the matrix $e^{i\phi}I$. Let's check the generated matrix for a rotation $\phi=\pi/4$
```
import math
pi=math.pi
a=Ph(pi/4).matrix
print(a)
```
Ok. Apply the Controlled Phase gate for $\phi=\pi/4$ to the second qubit
```
with Control(eng,q1):
Ph(pi/4)|q2
```
And flush the current circuit to calculate the result state
```
eng.flush()
```
### 5. Print the result state
```
get_state_as_str(eng, q1+q2,cheat=True)
Measure | q1
Measure | q2
eng.flush()
del eng
```
# Toffoli Gate
<img src="Images/Toffoli.png"/>
This gate is known to be a universal gate. It uses two control qubits to produce the following output:
<table>
<tr><td colspan="3">INPUT</td><td colspan="3">OUTPUT</td></tr>
<tr><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td><td>0</td></tr>
<tr><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td><td>1</td></tr>
<tr><td>0</td><td>1</td><td>0</td><td>0</td><td>1</td><td>0</td></tr>
<tr><td>0</td><td>1</td><td>1</td><td>0</td><td>1</td><td>1</td></tr>
<tr><td>1</td><td>0</td><td>0</td><td>1</td><td>0</td><td>0</td></tr>
<tr><td>1</td><td>0</td><td>1</td><td>1</td><td>0</td><td>1</td></tr>
<tr><td>1</td><td>1</td><td>0</td><td>1</td><td>1</td><td>1</td></tr>
<tr><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td><td>0</td></tr>
</table>
The Matrix representation is:
<math>
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
\end{bmatrix}
</math>
It can also be described as mapping bits {a, b, c} to {a, b, c XOR (a AND b)}.
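The classical mapping can be checked in a few lines of Python:

```python
def toffoli(a, b, c):
    # classical action of the Toffoli gate on bits {a, b, c}
    return a, b, c ^ (a & b)

# reproduce the truth table above
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            print(a, b, c, '->', *toffoli(a, b, c))
```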
```
eng=MainEngine()
```
### 1. Create the quantum registers, one for each qubit
```
c1=eng.allocate_qubit()
c2=eng.allocate_qubit()
ancilla=eng.allocate_qubit()
```
### 2. Initialize the qubits to check the main cases. For example, to initialize the last row of the table, apply one X gate to each qubit
```
X|c1
X|c2
X|ancilla
```
### 3. Now apply the Controlled-Controlled-X gate. To do it, use the meta operation Control
```
with Control(eng,c1):
with Control(eng,c2):
X|ancilla
```
### 4. Execute the circuit to get the final state
```
eng.flush()
```
### 5. Print the result state
If you want to execute another case, remember to delete the Engine before going back to step 1
```
get_state_as_str(eng, ancilla+c2+c1)
All(Measure) | c1+c2+ancilla
del eng
```
### 6. In ProjectQ you can create a new controlled gate with any number of control qubits using the special operation ControlledGate({gate to control}, {number of control qubits})
```
from projectq.ops import ControlledGate
Toffoli=ControlledGate(X, 2)
eng=MainEngine()
c1=eng.allocate_qubit()
c2=eng.allocate_qubit()
ancilla=eng.allocate_qubit()
X|c1
X|c2
X|ancilla
Toffoli| (c1, c2, ancilla)
eng.flush()
print(get_state_as_str(eng, ancilla+c2+c1))
All(Measure) | c1+c2+ancilla
del eng
```
### 7. Or use the predefined built-in Toffoli gate
```
from projectq.ops import Toffoli
eng=MainEngine()
c1=eng.allocate_qubit()
c2=eng.allocate_qubit()
ancilla=eng.allocate_qubit()
X|c1
X|c2
X|ancilla
Toffoli| (c1, c2, ancilla)
eng.flush()
print(get_state_as_str(eng, ancilla+c2+c1))
All(Measure) | c1+c2+ancilla
del eng
```
# Multiple Controlled k-Gates
In general, in a register with $n+k$ qubits, an $n$-controlled $U$ acting on $k$ qubits is defined as:
$C^n(U)|x_1 x_2 \dots x_n\rangle|\Psi\rangle = |x_1 x_2 \dots x_n\rangle\, U^{x_1 x_2 \dots x_n}|\Psi\rangle$
For example, the Toffoli gate is defined as:
$C^2(X)|x_1 x_2\rangle|y_1\rangle = |x_1 x_2\rangle\, X^{x_1 x_2}|y_1\rangle$
If $x_1=1, x_2=1$, then $C^2(X)|11\rangle|0\rangle = |11\rangle X^{1\cdot 1}|0\rangle = |11\rangle X|0\rangle = |11\rangle|1\rangle$
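This control rule can be checked with a small NumPy sketch, independent of ProjectQ (controls taken as the three high-order bits here):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# C^3(H) on 4 qubits: identity everywhere except the block where all three
# control qubits are |1>
C3H = np.eye(16)
C3H[14:, 14:] = H

state = np.zeros(16)
state[14] = 1.0                 # controls |111>, target |0>
out = C3H @ state               # -> (|1110> + |1111>)/sqrt(2): H was applied

other = np.zeros(16)
other[6] = 1.0                  # controls |011>: the target is untouched
print(np.allclose(C3H @ other, other))   # True
```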
So, check that this is true for **H** with three control qubits: the 3-controlled gate $C^3(H)$
```
eng=MainEngine()
C=eng.allocate_qureg(3)
a=eng.allocate_qubit(1)
```
Function to initialize the control qubits to a bit mask
```
def init_control(C,bits):
for i in range(len(bits)):
if bits[i]=="1":
X|C[-1-i]
```
Initialize the control bits with a string containing a {1,0} character for each qubit. What is the result for **"011"**? And for **"111"**?
```
init_control(C,"100")
eng.flush()
get_state_as_str(eng, a+C)
```
Create the controlled gate
```
def CnGate(eng,C,G,a):
if (len(C)>1):
with Control(eng,C[0]):
CnGate(eng,C[1:],G,a)
else:
with Control(eng,C):
G|a
CnGate(eng,C,H,a)
eng.flush()
get_state_as_str(eng, a+C)
All(Measure) | C+a
del eng
```
### Or using the ControlledGate operation defined previously
```
CH=ControlledGate(H,3)
eng=MainEngine()
C=eng.allocate_qureg(3)
a=eng.allocate_qubit(1)
init_control(C,"111")
eng.flush()
print(get_state_as_str(eng, a+C))
CH|(C,a)
eng.flush()
print(get_state_as_str(eng, a+C))
All(Measure) | C+a
del eng
```
Now you can check it for any other number of qubits and unitary gates.
<img src='https://mundiwebservices.com/build/assets/Mundi-Logo-CMYK-colors.png' align='left' width='15%' ></img>
# Mundi GDAL
```
from mundilib import MundiCatalogue
# other tools
import os
import numpy as np
from osgeo import gdal
import matplotlib.pyplot as plt
```
### Processing of an in-memory image (display/make histogram/add mask, ...)
```
# getting image from Mundi
c = MundiCatalogue()
wms = c.get_collection("Sentinel2").mundi_wms('L1C')
response = wms.getmap(layers=['92_NDWI'],
srs='EPSG:3857',
bbox=(146453.3462,5397218.5672,176703.3001,5412429.5358), # Toulouse
size=(600, 300),
format='image/png',
time='2018-04-21/2018-04-21',
showlogo=False,
transparent=False)
# writing image
#out = open(image_file, 'wb')
#out.write(response.read())
#out.close()
# reading bytes stream through a virtual memory file - no need to save image on disk
data = response.read()
vsipath = '/vsimem/img'
gdal.FileFromMemBuffer(vsipath, data)
raster_ds = gdal.Open(vsipath)
print (type(raster_ds))
# Projection
print ("Projection: ", format(raster_ds.GetProjection()))
# Dimensions
print ("X: ", raster_ds.RasterXSize)
print ("Y: ", raster_ds.RasterYSize)
# Number of bands
print ("Nb of bands: ", raster_ds.RasterCount)
# Band information
print ("Band information:")
for band in range(raster_ds.RasterCount):
band += 1
srcband = raster_ds.GetRasterBand(band)
if srcband is None:
continue
stats = srcband.GetStatistics( True, True )
if stats is None:
continue
print (" - band #%d : Minimum=%.3f, Maximum=%.3f, Mean=%.3f, StdDev=%.3f" % ( \
band, stats[0], stats[1], stats[2], stats[3] ))
# Getting first band of the raster as separate variable
band1 = raster_ds.GetRasterBand(1)
# Check type of the variable 'band'
print (type(band1))
# Data type of the values
gdal.GetDataTypeName(band1.DataType)
# getting array from band dataset
band1_ds = band1.ReadAsArray()
# The .ravel method turns a 2-D numpy array into a 1-D vector
print (band1_ds.shape)
print (band1_ds.ravel().shape)
# Print only selected metadata:
print ("No data value :", band1.GetNoDataValue()) # none
print ("Min value :", band1.GetMinimum())
print ("Max value :", band1.GetMaximum())
# Compute statistics if needed
if band1.GetMinimum() is None or band1.GetMaximum()is None:
band1.ComputeStatistics(0)
print("Statistics computed.")
# Fetch metadata for the band
band1.GetMetadata()
# see cmap values:
# cf. https://matplotlib.org/examples/color/colormaps_reference.html
for c in ["hot", "terrain", "ocean"]:
plt.imshow(band1_ds, cmap = c, interpolation='nearest')
plt.colorbar()
plt.tight_layout()
plt.show()
plt.imshow(band1_ds, cmap = "hot", interpolation='nearest')
plt.colorbar()
plt.tight_layout()
plt.show()
print ("\n--- raster content (head) ---")
print (band1_ds[1:10, ])
band1_hist_ds = band1_ds.ravel()
band1_hist_ds = band1_hist_ds[~np.isnan(band1_hist_ds)]
# 1 column, 1 line
fig, axes = plt.subplots(nrows=1, ncols=1)
axes.hist(band1_hist_ds, bins=10, histtype='bar', color='crimson', ec="pink")
#axes.hist(lidar_dem_hist, bins=[0, 25, 50, 75, 100, 150, 200, 255], histtype='bar', color='crimson', ec="pink")
axes.set_title("Distribution of pixel values", fontsize=16)
axes.set_xlabel('Pixel value (0-255)', fontsize=14)
axes.set_ylabel('Number of pixels', fontsize=14)
#axes.legend(prop={'size': 10})
plt.show()
# masking some pixels
masked_array = np.ma.masked_where(band1_ds<170, band1_ds)
plt.imshow(masked_array, cmap="hot", interpolation='nearest')
plt.show()
# adding of a line on image mask, changing pixel value with mask
masked_array[25:45,:] = 250
plt.imshow(masked_array, cmap="binary")
plt.show()
```
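A minimal sketch of how `np.ma.masked_where` behaves, on a toy 2×2 array rather than the Mundi raster:

```python
import numpy as np

arr = np.array([[10, 200], [180, 50]])
masked = np.ma.masked_where(arr < 170, arr)   # hide every pixel below the threshold
print(masked.count())        # 2 -> only two pixels survive the mask
print(masked.filled(0))      # masked entries replaced by 0: [[0 200] [180 0]]
```

Plotting functions such as `plt.imshow` simply skip the masked entries, which is why the masked pixels disappear from the image above.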
# End to End example to manage lifecycle of ML models deployed on the edge using SageMaker Edge Manager
**SageMaker Studio Kernel**: Data Science
## Contents
* Use Case
* Workflow
* Setup
* Building and Deploying the ML Model
* Running the fleet of Virtual Wind Turbines and Edge Devices
* Cleanup
## Use Case
The challenge we're trying to address here is to detect anomalies in the components of a wind turbine. Each wind turbine has many sensors that read data like:
- Internal & external temperature
- Wind speed
- Rotor speed
- Air pressure
- Voltage (or current) in the generator
- Vibration in the GearBox (using an IMU -> Accelerometer + Gyroscope)
So, depending on the types of the anomalies we want to detect, we need to select one or more features and then prepare a dataset that 'explains' the anomalies. We are interested in three types of anomalies:
- Rotor speed (when the rotor is not in an expected speed)
- Produced voltage (when the generator is not producing the expected voltage)
- Gearbox vibration (when the vibration of the gearbox is far from the expected)
All three of these anomalies (or violations) depend on many variables while the turbine is working. Thus, in order to address that, let's use a ML model called an [Autoencoder](https://en.wikipedia.org/wiki/Autoencoder), with correlated features. This model is unsupervised. It learns the latent representation of the dataset and tries to predict (regression) the same tensor given as input. The strategy then is to use a dataset collected from a normal turbine (without anomalies). The model will then learn **'what is a normal turbine'**. When the sensor readings of a malfunctioning turbine are used as input, the model will not be able to rebuild the input; it predicts something with a high error, which is detected as an anomaly.
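As a minimal sketch of this criterion (plain NumPy; the `reconstruct` function below is a toy stand-in for the trained autoencoder, not the real model):

```python
import numpy as np

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(200, 4))   # readings from a "healthy" turbine
train_mean = normal.mean(axis=0)

def reconstruct(x):
    # toy stand-in for the autoencoder: it reproduces normal data reasonably well
    return np.broadcast_to(train_mean, x.shape)

errors = np.mean((normal - reconstruct(normal)) ** 2, axis=1)
threshold = errors.mean() + 3 * errors.std()   # learned on normal data only

faulty = np.array([[10.0, 10.0, 10.0, 10.0]])  # readings far outside the normal range
fault_error = np.mean((faulty - reconstruct(faulty)) ** 2)
print(fault_error > threshold)                 # True: flagged as an anomaly
```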
## Workflow
In this example, you will create a robust end-to-end solution that manages the lifecycle of ML models deployed to a wind turbine fleet to detect the anomalies in the operation using SageMaker Edge Manager.
- Prepare a ML model
- download a pre-trained model;
- compile the ML model with SageMaker Neo for Linux x86_64;
- create a deployment package using SageMaker Edge Manager;
- download/unpack the deployment package;
- Download/unpack a package with the IoT certificates, required by the agent;
- Download/unpack **SageMaker Edge Agent** for Linux x86_64;
- Generate the protobuf/grpc stubs (.py scripts) - with these files we will send requests via unix:// sockets to the agent;
- Using some helper functions, we're going to interact with the agent and do some tests.
The following diagram shows the resources, required to run this experiment and understand how the agent works and how to interact with it.

## Step 1 - Setup
### Installing some required libraries
```
!apt-get -y update && apt-get -y install build-essential procps
!pip install --quiet -U numpy sysv_ipc boto3 grpcio-tools grpcio protobuf sagemaker
!pip install --quiet -U matplotlib==3.4.1 seaborn==0.11.1
!pip install --quiet -U grpcio-tools grpcio protobuf
!pip install --quiet paho-mqtt
!pip install --quiet ipywidgets
import boto3
import tarfile
import os
import stat
import io
import time
import sagemaker
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
import numpy as np
import glob
```
### Let's take a look at the dataset and its features
Download the dataset
```
%matplotlib inline
%config InlineBackend.figure_format='retina'
!mkdir -p data
!curl https://aws-ml-blog.s3.amazonaws.com/artifacts/monitor-manage-anomaly-detection-model-wind-turbine-fleet-sagemaker-neo/dataset_wind_turbine.csv.gz -o data/dataset_wind.csv.gz
parser = lambda date: datetime.strptime(date, '%Y-%m-%dT%H:%M:%S.%f+00:00')
df = pd.read_csv('data/dataset_wind.csv.gz', compression="gzip", sep=',', low_memory=False, parse_dates=[ 'eventTime'], date_parser=parser)
df.head()
```
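A quick sanity check of the `date_parser` format string used above (the `+00:00` offset is matched as a literal, so the result is a naive datetime):

```python
from datetime import datetime

# Same format string as the parser lambda above
parse = lambda s: datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f+00:00')
stamp = parse('2020-03-01T12:30:45.123456+00:00')
print(stamp)   # 2020-03-01 12:30:45.123456
```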
Features:
- **nanoId**: id of the edge device that collected the data
- **turbineId**: id of the turbine that produced this data
- **arduino_timestamp**: timestamp of the arduino that was operating this turbine
- **nanoFreemem**: amount of free memory in bytes
- **eventTime**: timestamp of the row
- **rps**: rotation of the rotor in Rotations Per Second
- **voltage**: voltage produced by the generator in milivolts
- **qw, qx, qy, qz**: quaternion angular acceleration
- **gx, gy, gz**: gravity acceleration
- **ax, ay, az**: linear acceleration
- **gearboxtemp**: internal temperature
- **ambtemp**: external temperature
- **humidity**: air humidity
- **pressure**: air pressure
- **gas**: air quality
- **wind_speed_rps**: wind speed in Rotations Per Second
## Step 2 - Building and Deploying the ML Model
In the section below you will:
- Compile/Optimize your pre-trained model to your edge device (Linux X86_64) using [SageMaker NEO](https://docs.aws.amazon.com/sagemaker/latest/dg/neo.html)
- Create a deployment package with a signed model + the runtime used by SageMaker Edge Agent to load and invoke the optimized model
- Deploy the package using IoT Jobs
```
project_name='wind-turbine-farm'
s3_client = boto3.client('s3')
sm_client = boto3.client('sagemaker')
project_id = sm_client.describe_project(ProjectName=project_name)['ProjectId']
bucket_name = 'sagemaker-wind-turbine-farm-%s' % project_id
prefix='wind_turbine_anomaly'
sagemaker_session=sagemaker.Session(default_bucket=bucket_name)
role = sagemaker.get_execution_role()
print('Project name: %s' % project_name)
print('Project id: %s' % project_id)
print('Bucket name: %s' % bucket_name)
```
## Compiling/Packaging/Deploying our ML model to our edge devices
Invoking SageMaker NEO to compile the pre-trained model. To know how this model was trained please refer to the training notebook [here](https://github.com/aws-samples/amazon-sagemaker-edge-manager-workshop/tree/main/lab/02-Training).
Upload the pre-trained model to S3 bucket
```
model_file = open("model/model.tar.gz", "rb")
boto3.Session().resource("s3").Bucket(bucket_name).Object('model/model.tar.gz').upload_fileobj(model_file)
print("Model successfully uploaded!")
```
It will compile the model for targeted hardware and OS with SageMaker Neo service. It will also include the [deep learning runtime](https://github.com/neo-ai/neo-ai-dlr) in the model package.
```
compilation_job_name = 'wind-turbine-anomaly-%d' % int(time.time()*1000)
sm_client.create_compilation_job(
CompilationJobName=compilation_job_name,
RoleArn=role,
InputConfig={
'S3Uri': 's3://%s/model/model.tar.gz' % sagemaker_session.default_bucket(),
'DataInputConfig': '{"input0":[1,6,10,10]}',
'Framework': 'PYTORCH'
},
OutputConfig={
'S3OutputLocation': 's3://%s/wind_turbine/optimized/' % sagemaker_session.default_bucket(),
'TargetPlatform': { 'Os': 'LINUX', 'Arch': 'X86_64' }
},
StoppingCondition={ 'MaxRuntimeInSeconds': 900 }
)
while True:
resp = sm_client.describe_compilation_job(CompilationJobName=compilation_job_name)
if resp['CompilationJobStatus'] in ['STARTING', 'INPROGRESS']:
print('Running...')
else:
print(resp['CompilationJobStatus'], compilation_job_name)
break
time.sleep(5)
```
### Building the Deployment Package SageMaker Edge Manager
It will sign the model and create a deployment package with:
- The optimized model
- Model Metadata
```
import time
model_version = '1.0'
model_name = 'WindTurbineAnomalyDetection'
edge_packaging_job_name='wind-turbine-anomaly-%d' % int(time.time()*1000)
resp = sm_client.create_edge_packaging_job(
EdgePackagingJobName=edge_packaging_job_name,
CompilationJobName=compilation_job_name,
ModelName=model_name,
ModelVersion=model_version,
RoleArn=role,
OutputConfig={
'S3OutputLocation': 's3://%s/%s/model/' % (bucket_name, prefix)
}
)
while True:
resp = sm_client.describe_edge_packaging_job(EdgePackagingJobName=edge_packaging_job_name)
if resp['EdgePackagingJobStatus'] in ['STARTING', 'INPROGRESS']:
print('Running...')
else:
print(resp['EdgePackagingJobStatus'], compilation_job_name)
break
time.sleep(5)
```
### Deploy the package
Using IoT Jobs, we will notify the Python application in the edge devices. The application will:
- Download the deployment package
- Unpack it
- Load the new model (unloading previous versions, if any)
```
import boto3
import json
import sagemaker
import uuid
iot_client = boto3.client('iot')
sts_client = boto3.client('sts')
model_version = '1.0'
model_name = 'WindTurbineAnomalyDetection'
sagemaker_session=sagemaker.Session()
region_name = sagemaker_session.boto_session.region_name
account_id = sts_client.get_caller_identity()["Account"]
resp = iot_client.create_job(
jobId=str(uuid.uuid4()),
targets=[
'arn:aws:iot:%s:%s:thinggroup/WindTurbineFarm-%s' % (region_name, account_id, project_id),
],
document=json.dumps({
'type': 'new_model',
'model_version': model_version,
'model_name': model_name,
'model_package_bucket': bucket_name,
'model_package_key': "%s/model/%s-%s.tar.gz" % (prefix, model_name, model_version)
}),
targetSelection='SNAPSHOT'
)
```
Alright! Now, the deployment process will start on the connected edge devices!
## Step 3 - Running the fleet of Virtual Wind Turbines and Edge Devices
In this section you will run a local application written in Python3 that simulates 5 Wind Turbines and 5 edge devices. The SageMaker Edge Agent is deployed on the edge devices.
Here you'll be the **Wind Turbine Farm Operator**. It's possible to visualize the data flowing from the sensors to the ML model and analyze the anomalies. Also, you'll be able to inject noise (by pressing some buttons) into the data to simulate potential anomalies with the equipment.
<table border="0" cellpading="0">
<tr>
<td align="center"><b>STEP-BY-STEP</b></td>
<td align="center"><b>APPLICATION ARCHITECTURE</b></td>
</tr>
<tr>
<td><img src="../imgs/EdgeManagerWorkshop_Macro.png" width="500px"></img></td>
<td><img src="../imgs/EdgeManagerWorkshop_App.png" width="500px"></img></td>
</tr>
</table>
The components of the application are:
- Simulator:
- [Simulator](app/simulator.py): Program that launches the virtual wind turbines and the edge devices. It uses Python threads to run all 10 processes
- [Wind Farm](app/windfarm.py): This is the application that runs on the edge device. It is responsible for reading the sensors, invoking the ML model, and analyzing the anomalies
- Edge Application:
- [Turbine](app/turbine.py): Virtual wind turbine. It reads the raw data collected from the 3D-printed mini turbine and streams it as a circular buffer. It also has a graphical representation in **IPython Widgets** that is rendered by the Simulator/Dashboard.
- [Over The Air](app/ota.py): This module is integrated with **IoT Jobs**. In the previous exercise you created an IoT job to deploy the model. This module gets the job document, processes it, deploys the model on each edge device, and loads it via SageMaker Edge Manager.
- [Edge client](app/edgeagentclient.py): An abstraction layer on top of the **generated stubs** (proto compilation). It makes it easy to integrate **Wind Farm** with the SageMaker Edge Agent
```
agent_config_package_prefix = 'wind_turbine_agent/config.tgz'
agent_version = '1.20210512.96da6cc'
agent_pkg_bucket = 'sagemaker-edge-release-store-us-west-2-linux-x64'
```
### Prepare the edge devices
1. First download the deployment package that contains the IoT + CA certificates and the configuration file of the SageMaker Edge Agent.
2. Then, download the SageMaker Edge Manager package and complete the deployment process.
> You can see all the artifacts that will be loaded/executed by the virtual Edge Device in **agent/**
```
if not os.path.isdir('agent'):
s3_client = boto3.client('s3')
# Get the configuration package with certificates and config files
with io.BytesIO() as file:
s3_client.download_fileobj(bucket_name, agent_config_package_prefix, file)
file.seek(0)
# Extract the files
tar = tarfile.open(fileobj=file)
tar.extractall('.')
tar.close()
# Download and install SageMaker Edge Manager
agent_pkg_key = 'Releases/%s/%s.tgz' % (agent_version, agent_version)
# get the agent package
with io.BytesIO() as file:
s3_client.download_fileobj(agent_pkg_bucket, agent_pkg_key, file)
file.seek(0)
# Extract the files
tar = tarfile.open(fileobj=file)
tar.extractall('agent')
tar.close()
# Adjust the permissions
os.chmod('agent/bin/sagemaker_edge_agent_binary', stat.S_IXUSR|stat.S_IWUSR|stat.S_IXGRP|stat.S_IWGRP)
```
### Finally, create the SageMaker Edge Agent client stubs, using the protobuffer compiler
SageMaker EdgeManager exposes a [gRPC API](https://grpc.io/docs/what-is-grpc/introduction/) to processes on device. In order to use gRPC API in your choice of language, you need to use the protobuf file `agent.proto` (the definition file for gRPC interface) to generate a stub in your preferred language. Our example was written in Python, therefore below is an example to generate Python EdgeManager gRPC stubs.
```
!python3 -m grpc_tools.protoc --proto_path=agent/docs/api --python_out=app/ --grpc_python_out=app/ agent/docs/api/agent.proto
```
### SageMaker Edge Agent - local directory structure
```
agent
└───certificates
│ └───root
│ │ <<aws_region>>.pem # CA certificate used by Edge Manager to sign the model
│ │
│ └───iot
│ edge_device_<<device_id>>_cert.pem # IoT certificate
│ edge_device_<<device_id>>_key.pem # IoT private key
│ edge_device_<<device_id>>_pub.pem # IoT public key
│ ...
│
└───conf
│ config_edge_device_<<device_id>>.json # Edge Manager config file
│ ...
│
└───model
│ └───<<device_id>>
│ └───<<model_name>>
│ └───<<model_version>> # Artifacts from the Edge Manager model package
│ sagemaker_edge_manifest
│ ...
│
└───logs
│ agent<<device_id>>.log # Logs collected by the local application
│ ...
app
agent_pb2_grpc.py # grpc stubs generated by protoc
agent_pb2.py # agent stubs generated by protoc
...
```
## Simulating The Wind Turbine Farm
Now it's time to run our simulator and start playing with the turbines, the agents, and the anomalies
> After clicking on **Start**, each turbine will start buffering some data. It takes a few seconds but after completing this process, the application runs in real-time
> Try to press some buttons while the simulation is running, to inject noise in the data and see some anomalies
```
import sys
sys.path.insert(1, 'app')
import windfarm
import edgeagentclient
import turbine
import simulator
import ota
import boto3
from importlib import reload
reload(simulator)
reload(turbine)
reload(edgeagentclient)
reload(windfarm)
reload(ota)
# If there is an existing simulator running, halt it
try:
farm.halt()
except:
pass
iot_client = boto3.client('iot')
mqtt_host=iot_client.describe_endpoint(endpointType='iot:Data-ATS')['endpointAddress']
mqtt_port=8883
!mkdir -p agent/logs && rm -f agent/logs/*
simulator = simulator.WindTurbineFarmSimulator(5)
simulator.start()
farm = windfarm.WindTurbineFarm(simulator, mqtt_host, mqtt_port)
farm.start()
simulator.show()
```
> If you want to experiment with the deployment process, with the wind farm running, go back to Step 2, replace the variable **model_version** by the constant (string) '2.0' in the Json document used by the IoT Job. Then, create a new IoT Job to simulate how to deploy new versions of the model. Go back to this exercise to see the results.
```
try:
farm.halt()
except:
pass
print("Done")
```
## Cleanup
Run the next cell only if you already finished exploring/hacking the content of the workshop.
This code will delete all the resources created so far, including the **SageMaker Project** you've created
```
import boto3
import time
from shutil import rmtree
iot_client = boto3.client('iot')
sm_client = boto3.client('sagemaker')
s3_resource = boto3.resource('s3')
policy_name='WindTurbineFarmPolicy-%s' % project_id
thing_group_name='WindTurbineFarm-%s' % project_id
fleet_name='wind-turbine-farm-%s' % project_id
# Delete all files from the S3 Bucket
s3_resource.Bucket(bucket_name).objects.all().delete()
# now deregister the devices from the fleet
resp = sm_client.list_devices(DeviceFleetName=fleet_name)
devices = [d['DeviceName'] for d in resp['DeviceSummaries']]
if len(devices) > 0:
sm_client.deregister_devices(DeviceFleetName=fleet_name, DeviceNames=devices)
# now deregister the devices from the fleet
for i,cert_arn in enumerate(iot_client.list_targets_for_policy(policyName=policy_name)['targets']):
for t in iot_client.list_principal_things(principal=cert_arn)['things']:
iot_client.detach_thing_principal(thingName=t, principal=cert_arn)
iot_client.detach_policy(policyName=policy_name, target=cert_arn)
certificateId = cert_arn.split('/')[-1]
iot_client.delete_role_alias(roleAlias='SageMakerEdge-%s' % fleet_name)
iot_client.delete_thing_group(thingGroupName=thing_group_name)
if os.path.isdir('agent'): rmtree('agent')
sm_client.delete_project(ProjectName=project_name)
```
Mission Complete!
# TRANSCOST Model
The TRANSCOST model is a dedicated launch vehicle system model for determining the cost per flight (CpF) and Life Cycle Cost (LCC) of launch vehicle systems.
Three key cost areas make up the model:
1. Development Cost
1. Production Cost
1. Operations Cost
Each of these cost areas and strategies for modeling them will be reviewed before combining them to model the CpF.
## Development Costs
The Development Costs model can be separated into the following categories:
1. Systems Engineering ($f_0$)
1. Strap-on Boosters ($B$)
1. Vehicle Systems/Stages ($V$)
1. Engines ($E$)
These elements are combined into the following equation which gives the total development cost for a launch vehicle:
$$ C_D = f_0 \left( \sum{H_B} + \sum{H_V} + \sum{H_E} \right) f_6\ f_7\ f_8\ \left[PYr \right]$$
It is important to discuss the units of this equation: the Person-Year [PYr]. The Person-Year unit is a cost unit which is independent of inflation or changing currency exchange rates. The value of the Person-Year is determined by the total cost of maintaining an employee for a year, which includes direct wages, travel costs, office costs, and other indirect costs.
We will now go over each term in this expression to clarify its meaning and how to determine its value.
$f_0$: systems engineering/integration factor. When developing and producing launch vehicles, multiple stages and vehicle systems need to be integrated together. The integration of the multiple system elements imparts an increase to the development cost, which is captured by this term. This term can be calculated using:
$$ f_0 = 1.04^N$$
where $N$ is the number of stages or major system elements.
$f_6$: cost growth factor for deviating from the optimum schedule. There is an optimum schedule for developing and funding any particular project, and deviations from this optimal schedule impart a penalty on the development cost. Historically, launch vehicles take between 5 and 9 years to develop; working faster or slower than the optimum schedule will, however, result in cost increases. The case $f_6 = 1.0$ represents development that follows the ideal schedule perfectly. This will almost certainly never happen, and typical values for $f_6$ range between 1.0 and 1.5.
$f_7$: cost growth factor for parallel organizations. In order to have efficient delegation of tasks and conflict resolution, a prime contractor needs to be established. Having multiple co-prime contractors leads to many inefficiencies, which imparts a penalty on the development cost. This factor can be calculated using:
$$ f_7 = n^{0.2} $$
where $n$ is the number of parallel prime contractors. Having multiple contractors on a project imparts no penalty as long as they are organized in a prime contractor/subcontractor relationship.
$f_8$: person-year correction factor for productivity differences in different countries/regions. Productivity differences exist between different countries and regions, so this must be accounted for in the model. This factor is baselined for productivity in the United States, so for the US $f_8 = 1.0$. Some other countries of interest include Russia with $f_8 = 2.11$, China with $f_8 = 1.34$, Europe (ESA) with $f_8 = 0.86$, and France/Germany with $f_8 = 0.77$.
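Putting the factors together, here is a small sketch of the development-cost equation. The $H$ values below are illustrative placeholders, not outputs of the TRANSCOST CERs:

```python
def development_cost(H_elements, n_stages, f6, n_contractors, f8):
    """Total development cost in person-years [PYr], per the equation above."""
    f0 = 1.04 ** n_stages          # systems engineering/integration factor
    f7 = n_contractors ** 0.2      # parallel prime-contractor penalty
    return f0 * sum(H_elements) * f6 * f7 * f8

# Hypothetical inputs: two stages + one engine (3 major elements), a slight
# schedule slip (f6 = 1.2), a single prime contractor, and a US team (f8 = 1.0)
cost = development_cost([1200, 800, 3000], n_stages=3, f6=1.2, n_contractors=1, f8=1.0)
print(round(cost, 1))   # ~6749.2 PYr
```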
### Cost Exponential Relationships (CERs)
To calculate the development cost of each major vehicle element, denoted as $H$ in the total development cost equation above, we introduce a series of cost exponential relationships (CERs). These CERs relate the reference mass of a stage or system element to its development cost.
CERs have been definied for the following vehicle elements:
1. Solid-propellant Rocket Motors
1. Liquid-propellant Rocket Engines with Turbopumps
1. Pressure-fed Rocket Engines
1. Airbreathing Turbo- and Ramjet-Engines
1. Large Solid-Propellant Rocket Boosters
1. Liquid Propellant Propulsion Systems/Modules
1. Expendable Ballistic Stages and Transfer Vehicles
1. Reusable Ballistic Launch Vehicles
1. Winged Orbital Rocket Vehicles
1. Horizontal Take-off First Stage Vehicles, Advanced Aircraft, and Aerospaceplanes
1. Vertical Take-off First Stage-Fly-back Rocket Vehicles
1. Crewed Ballistic Re-entry Capsules
1. Crewed Space Systems
The general form of these CERs is as follows:
$$ H = a\ M^x\ f_1\ f_2\ f_3\ f_8 $$
In this equation, $a$ and $x$ are empirically determined coefficients for a particular type of vehicle stage or system.
$M$: reference mass (dry mass), in kilograms, of the vehicle system, stage, or engine being considered.
$f_1$: development standard factor. This factor accounts for the relative status of the project in comparison to the state of the art or other existing projects. The development of a standard project that has similar systems already in operation would have $f_1 = 0.9 - 1.1$. The development of a project that is a minor variation on an existing product would have $f_1 = 0.4 - 0.6$. The development of a first-generation system would have $f_1 = 1.3 - 1.4$.
$f_3$: team experience factor. This factor accounts for the relevant experience of the team working on the development of a new project. An industry team with some related experience would have $f_3 = 1.0$. A very experienced team that has worked on similar projects previously would have $f_3 = 0.7 - 0.8$. A new team with little or no previous experience would have $f_3 = 1.3 - 1.4$.
$f_2$: technical quality factor. This factor is not as well-defined as the other cost factors. Its value is derived from technical characteristics of a particular vehicle element, and is defined uniquely for each vehicle element. Often, the fit of the CER is good enough without this factor, in which case $f_2 = 1.0$. For others, though, a particular relationship is derived for it. For instance, for the development of a liquid turbo-fed engine:
$$ f_2 = 0.026 \left(\ln{N_Q}\right)^2 $$
where $N_Q$ is the number of qualification firings for the engine. This indicates that development cost increases as the number of test firings increases.
### Example - Calculating Development Cost for SSMEs
Next we will consider an example to clarify the above model. We will look at the development costs of the Space Shuttle Main Engines.
First we find the appropriate CER for modeling this. The CER for liquid turbo-fed engines is:
$$ H = 277\ M^{0.48}\ f_1\ f_2\ f_3 $$
The development standard factor, $f_1$, can be taken to be 1.3 since this is a "first of its kind" project. The team experience factor, $f_3$, can be taken as 0.85 since much of the team had worked on the F-1 and J-2 engines at Rocketdyne.
We can calculate the technical quality factor, $f_2$, using the equation for turbo-fed liquid engines described in the previous section, knowing that the SSMEs required roughly 900 test firings.
Additionally, the SSME dry mass is 3180 kg.
We can then calculate the total development cost as follows:
```
import math

a = 277.    # CER coefficient for liquid turbo-fed engines
x = 0.48    # CER exponent
f1 = 1.3    # development standard factor ("first of its kind")
f3 = 0.85   # team experience factor (experienced F-1/J-2 team)
N_Q = 900   # number of qualification test firings
f2 = 0.026 * (math.log(N_Q))**2  # technical quality factor
M = 3180    # dry mass of engine [kg]

H = a * M**x * f1 * f2 * f3  # development cost of SSME [PYr]
print(H)
```
From this calculation, we find a development cost of roughly 17672 PYr. The actual development cost was 18146 PYr, which is reasonably close to the calculation.
In order to find the development cost of the entire vehicle, the CER would need to be calculated for each major vehicle stage or system, then summed together and multiplied with the appropriate cost factors, as described in the total vehicle development cost equation above.
## Production Costs
The production cost model is built similarly to the development cost model: a series of CERs is summed to find the total cost.
Three key cost areas make up the production cost model:
1. System management, vehicle integration, and checkout ($f_0$)
1. Vehicle systems ($S$)
1. Engines ($E$)
These elements are combined into the following vehicle production cost (per vehicle) equation:
$$ C_F = f_0^N \ \left( \sum\limits_{1}^n F_S + \sum\limits_{1}^n F_E \right)\ f_8 $$
We will now go over each term to clarify its meaning.
$f_0$: systems engineering/integration factor. Accounts for system management, integration, and checkout of each vehicle element. Typically between 1.02 and 1.03, depending on specifics of each element.
$N$: number of vehicle stages or system elements for the launch vehicle.
$n$: number of identical units per element on a single launch vehicle.
$f_8$: person-year correction factor. This is the same as described in the development costs section.
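The per-vehicle production cost equation above can be sketched as a small helper. This is illustrative only; the stage and engine unit costs and the value of $f_0$ below are made-up assumptions:

```
def production_cost_per_vehicle(stage_costs, engine_costs, N, f0=1.025, f8=1.0):
    """Hypothetical sketch of the production cost equation above:
    C_F = f0**N * (sum of stage unit costs + sum of engine unit costs) * f8."""
    return f0**N * (sum(stage_costs) + sum(engine_costs)) * f8

# e.g. two stages at 400 and 250 PYr plus three identical engines at 60 PYr each
print(production_cost_per_vehicle([400, 250], [60, 60, 60], N=2))
```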
### Cost Exponential Relationships (CERs)
To calculate the production cost of each major vehicle element, denoted as $F_S$ (stages) and $F_E$ (engines) in the above production cost equation, we introduce a series of cost exponential relationships (CERs). These CERs relate the reference mass of a stage or system element to its production cost.
Production CERs have been defined for the following stages/systems:
1. Solid Propellant Rocket Motors and Boosters
1. Liquid Propellant Rocket Engines
1. Propulsion Modules
1. Ballistic Vehicles/Stages (Expendable and Reusable)
1. High-speed Aircraft/Winged First Stage Vehicles
1. Winged Orbital Rocket Vehicles
1. Crewed Space Systems
The general form of these CERs for the production of the $i^{th}$ unit is as follows:
$$ F_i = a\ M^x\ f_{4,i} \ [PYr] $$
where $a$ and $x$ are empirically determined coefficients for each type of vehicle stage or system, and $M$ is the reference mass of the stage or system in kilograms.
$f_{4,i}$: cost reduction factor of the $i^{th}$ unit in series production. The cost reduction factor is influenced by several things, including the number of units produced, the production batch size, and the learning factor $p$. The learning factor is in turn influenced by product modifications and production rate.

The cost reduction factor for the production of the $i^{th}$ unit can be estimated using:
$$ f_{4,i} = i^{\frac{\ln{p}}{\ln{2}}} $$
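A quick sanity check on this learning-curve formula: for $i = 2$ the exponent yields $f_{4,2} = p$ exactly, i.e. doubling cumulative production scales the unit cost by the learning factor. The value of $p$ below is an illustrative assumption:

```
import math

def f4(i, p=0.9):
    # cost reduction factor for the i-th unit with learning factor p
    return i**(math.log(p)/math.log(2))

print(f4(2))  # equals p: doubling cumulative production scales unit cost by p
print(f4(4))  # equals p**2
```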
It should be noted that the cost of vehicle maintenance, spares, refurbishment, or overhaul is NOT accounted for in the production cost model. These are instead accounted for in the operations costs.
### Example - Calculating Production Cost for Saturn V Second Stage
Consider the 1967 contract for a batch of 5 Saturn V second stages. These stages have unit numbers 11-15, and will be produced at a build rate of 2-3 per year.
The CER for a vehicle stage with cryogenic propellants is:
$$ F = 1.30\ M^{0.65}\ f_{4,i} \ [PYr] $$
The second stage has a mass of 29,700 kg and production of the Saturn V second stage has a learning factor of $p = 0.96$.
Our goal is to find the production cost for this batch of 5 Saturn V second stages.
First we will calculate the average cost reduction factor for units 11-15. Then we will find the production cost for the five units.
```
import math

num_units = 5
p = 0.96                  # learning factor
unit_nos = range(11, 16)  # production unit numbers 11-15

# average cost reduction factor for units 11-15
f4_sum = 0
for i in unit_nos:
    f4_i = i**(math.log(p)/math.log(2))
    f4_sum += f4_i
f4_avg = f4_sum/num_units

a = 1.30    # CER coefficient for cryogenic stages
x = 0.65    # CER exponent
M = 29700   # dry mass of stage [kg]

F = num_units * a * M**x * f4_avg  # batch production cost [PYr]
print(F)
```
From this calculation, we find a production cost of roughly 4516 PYr for the 5 units. The actual production cost for these 5 units was 4437 PYr.
## Operations Costs
Modeling the operations cost is much more difficult than modeling the development and production costs, due to the large number of operational influences as well as scarce reliable reference data. That being said, we will model it as best as possible.
The operations cost has three key cost areas:
1. Direct Operations Cost (DOC)
1. Indirect Operations Cost (IOC)
1. Refurbishment and Spares Cost (RSC)
It should be noted that all payload-related activities are excluded from this model.
In the case of ELVs, operations costs make up around 20-35% of the total Cost per Flight (CpF). In the case of RLVs, the operations costs typically make up 35-70% of the total CpF.
### Direct Operations Cost (DOC)
The direct operations cost accounts for all activities directly related to the ground preparations of a launch vehicle, plus launch operations and checkout.
There are five cost areas that make up the DOC:
1. Ground Operations
1. Materials and Propellants
1. Flight and Mission Operations
1. Transportation and Recovery
1. Fees and Insurance
Some of these cost areas are easier to estimate than others. We will go over strategies for estimating each of these cost areas.
#### Ground Operations
Many things affect the cost of ground operations, including: the size and complexity of the vehicle; whether the vehicle is crewed or automated; the assembly, mating, and transportation mode of the vehicle (vertical or horizontal); the launch mode and associated launch facilities; and the number of launches per year.
The following provisional CER can be used to estimate the pre-launch ground operations cost:
$$ C_{PLO} = 8\ {M_0}^{0.67}\ L^{-0.9}\ N^{0.7}\ f_V\ f_C\ f_4\ f_8 $$
$M_0$: gross weight at lift-off (GLOW) of the vehicle in Mg (metric tons)
$L$: launch rate given as launches per year (LpA). This factor with its exponent of -0.9 defines how the required team size grows with launch rate. If the exponent of $L$ were -1.0, the team size would be constant regardless of launch rate, which is unrealistic. As a side note, an important consideration for RLVs when determining the LpA (and the fleet size) is the necessary turn-around time of the vehicle.
$N$: number of stages or major vehicle elements. This represents how more operational effort is required with more systems.
$f_v$: launch vehicle type factor. This factor accounts for the varying operational effort required for different launch vehicle types.
For expendable multistage vehicles:
- liquid-propellant vehicles with cryogenic propellant: $f_v = 1.0$
- liquid-propellant vehicles with storable propellant: $f_v = 0.8$
- solid-propellant vehicles: $f_v = 0.3$
For reusable launch systems with integrated health control system:
- automated cargo vehicles (Cryo-SSTO): $f_v = 0.7$
- crewed/piloted vehicles: $f_v = 1.8$
For vehicles whose stages are of different types, an average value should be used.
$f_c$: assembly and integration mode factor. This accounts for the difference in operational effort required for different assembly and checkout modes.
- Vertical assembly and checkout on the launch pad: $f_c = 1.0$
- Vertical assembly and checkout, then transport to launch pad: $f_c = 0.7$
- Horizontal assembly and checkout, transport to pad, erect: $f_c = 0.5$
$f_4$: cost reduction factor as described previously in the production costs section.
$f_8$: person-year correction factor as described previously in the development costs section.
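As a rough worked illustration of this ground operations CER (all input values below are made-up assumptions, not from the source):

```
# Illustrative (made-up) inputs for the pre-launch ground operations CER above
M0 = 450    # gross lift-off mass (GLOW) [Mg]
L = 6       # launch rate [launches/year]
N = 2       # number of stages
f_v = 1.0   # cryogenic liquid-propellant expendable vehicle
f_c = 0.7   # vertical assembly and checkout, then transport to pad
f_4 = 0.8   # assumed cost reduction factor
f_8 = 1.0   # US baseline

C_PLO = 8 * M0**0.67 * L**(-0.9) * N**0.7 * f_v * f_c * f_4 * f_8
print(C_PLO)  # pre-launch ground operations cost per launch [PYr]
```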
#### Costs of Propellants and Gases
Propellants represent a relatively small fraction of the total CpF. Propellant costs are highly dependent on the production source capacity, as well as the country/region of purchase (for instance, LH2 costs nearly twice as much in Europe as it does in the US).
For liquid propellants, it is important to consider the mass of propellant that will be boiled off during filling, as well as the actual mass required to fill the tanks. For LOX, 50-70% additional propellant is required (beyond what is needed to fill the tanks) to account for boil-off. For LH2, 75-95% additional propellant is required.
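The procurement masses implied by these boil-off margins can be computed directly. The tank loads below are made-up illustrative values; the margins are the ranges quoted above:

```
# Illustrative (made-up) tank loads; boil-off margins from the text above
lox_tank = 100.0   # Mg of LOX needed in the tanks
lh2_tank = 25.0    # Mg of LH2 needed in the tanks

lox_procured = lox_tank * (1 + 0.6)    # 50-70% extra for LOX; take 60%
lh2_procured = lh2_tank * (1 + 0.85)   # 75-95% extra for LH2; take 85%
print(lox_procured, lh2_procured)
```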
It should be noted that the cost of solid-propellants is included in the production cost and not the operations cost.
#### Launch, Flight and Mission Operations Cost
This cost area includes:
- Mission planning and preparation, including software update
- Launch and ascent flight control until payload separation
- Orbital and return flight operations in the case of reusable launch systems
- Flight safety control and tracking
This cost area does NOT include crew operations or in-orbit experiments.
For ELVs, the launch, flight and mission operations cost is relatively minimal given the short flight-time. For RLVs, this cost is much higher due to extended mission times and increased complexity.
For unmanned systems, the following provisional CER has been determined for the per-flight cost:
$$ C_m = 20\ \left(\sum{Q_N} \right)\ L^{-0.65}\ f_4\ f_8 \ [PYr] $$
$L$ is the launch rate, $f_4$ is the cost reduction factor, and $f_8$ is the person-year correction factor.
$Q_N$ is a vehicle complexity factor. It takes on a different value for different numbers and types of stages:
- Small solid motor stages: $Q_N = 0.15$ each.
- Expendable liquid-prop stages or large boosters: $Q_N = 0.4$ each.
- Recoverable or fly-back systems: $Q_N = 1.0$ each.
- Unmanned reusable orbital systems: $Q_N = 2.0$ each.
- Crewed orbital vehicles: $Q_N = 3.0$ each.
For example, we can consider the launch of the ATHENA Vehicle from Wallops Island. We will consider it in two cases:
1. Early Operations: 10th flight, 5 launches per year
1. Mature Operations: 50th flight, 8 launches per year
The ATHENA Vehicle is a four-stage vehicle with three small solid motor stages and a fourth expendable monopropellant liquid-fueled stage. It can be assumed that its production has a learning factor of 90%.
```
import math

p = 0.9         # learning factor
sum_QN = 0.85   # 3 small solid stages (0.15 each) + 1 expendable liquid stage (0.4)

# Early Operations, Case 1
L = 5            # launches per year
flight_num = 10
f_4 = flight_num**(math.log(p)/math.log(2))  # cost reduction factor
f_8 = 1.0        # US baseline
print(f_4)
Cm_early = 20*sum_QN*L**(-0.65)*f_4*f_8
print('Cm_early: ' + str(Cm_early))

# Mature Operations, Case 2
L = 8
flight_num = 50
f_4 = flight_num**(math.log(p)/math.log(2))
print(f_4)
Cm_mature = 20*sum_QN*L**(-0.65)*f_4*f_8
print('Cm_mature: ' + str(Cm_mature))
```
For manned systems, an ADDITIONAL cost must be determined. The following provisional CER has been determined for the per-flight crewed operations cost:
$$ C_{ma} = 75\ {T_m}^{0.5}\ {N_a}^{0.5}\ L^{-0.8}\ f_4\ f_8\ [PYr] $$
$T_m$ is the mission duration in orbit in days, $N_a$ is the number of crew members, $L$ is the launch rate, $f_4$ is the cost reduction factor, and $f_8$ is the person-year correction factor.
The result of this CER must be added to the CER for the unmanned system to get the total launch, flight, and mission operations cost.
As an example, consider the Space Shuttle on its 10th flight at 4 LpA. Assume 7 crew onboard, a 14-day mission, and a learning factor of 90%.
We first calculate the unmanned CER for the vehicle system, and then calculate the additional mission cost for having a crewed system.
```
import math

T_m = 14         # mission duration in orbit [days]
N_a = 7          # number of crew members
L = 4            # launches per year
flight_num = 10
p = 0.9          # learning factor
f_4 = flight_num**(math.log(p)/math.log(2))  # cost reduction factor
f_8 = 1.0        # US baseline
sum_QN = 5.4     # one crewed orbital vehicle + two recoverable boosters + one expendable liquid-prop stage

# unmanned CER value
Cm_unmanned = 20*sum_QN*L**(-0.65)*f_4*f_8
# additional manned CER value
C_ma = 75*T_m**0.5*N_a**0.5*L**(-0.8)*f_4*f_8

print('Cm_unmanned: ' + str(Cm_unmanned))
print('Cm_manned: ' + str(C_ma))
print('Sum: ' + str(Cm_unmanned + C_ma))
```
From the example of the shuttle, it can be seen that the majority of the launch, flight, and missions operations cost comes from having a crewed system, rather than an automated system.
#### Ground Transportation and Recovery Costs
This cost area includes items such as transportation of vehicle elements from their fabrication site to the launch area, transportation of reusable vehicles from a remote landing site back to the launch area, and transportation of sea-launch facilities from the home harbour to the launch location and back.
Transportation costs for moving elements from their fabrication site to the launch area and cost of transporting sea-launch facilities cannot be accurately estimated with a CER.
However, a preliminary CER for the recovery cost for stages or boosters at sea is given by:
$$ C_{Rec} = \frac{1.5}{L}\left({7\ L^{0.7} + M^{0.83}}\right)\ f_8\ [PYr] $$
$L$ is the launch rate, $M$ is the recovery mass in Mg (metric tons), and $f_8$ is the person-year correction factor.
The specific cost per recovery decreases with launch rate and increases with recovery mass, which makes intuitive sense.
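A short illustration of this recovery CER; the launch rate and recovery mass below are made-up assumptions:

```
# Illustrative (made-up) inputs for the sea recovery CER above
L = 6       # launch rate [launches/year]
M = 80      # recovery mass [Mg]
f_8 = 1.0   # US baseline

C_Rec = (1.5/L) * (7*L**0.7 + M**0.83) * f_8  # recovery cost per flight [PYr]
print(C_Rec)
```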
#### Fees and Insurance Costs
A variety of fees and insurance costs contribute to the CpF for launch vehicles. Some of these fees include:
1. **Launch site user fee.** For most US launch sites, the US Department of Transportation (DOT) charges a per-launch fee. It should be noted that the DOC only considers the per-launch fee of using a launch site, and doesn't account for a yearly fixed general fee for using a launch site, which would be handled as part of the IOC.
1. **Public damage insurance.** Most governments require launch service providers to take out insurance against damage caused by parts of a launch vehicle falling to the ground.
1. **Launch vehicle insurance.** For ELVs, the insurance for a launch failure and payload loss normally has to be paid by the customer separately. For RLVs, the launch service provider is the owner of the vehicle and must insure its lifetime. However, the catastrophic failure rate for RLVs can be substantially reduced in comparison to ELVs due to increased redundancy, integrated health control systems, landing capabilities in case of emergencies, and the ability to perform flight tests.
1. **Surcharge for mission abort.** In the case of RLVs, there is a possibility that the vehicle performs an emergency landing without deploying or delivering the payload. In this case, the launch service provider would likely be obligated to provide a free re-launch to the customer. The cost of this mission abort must be considered by the launch provider. A mission abort could be more expensive than the rest of the DOC given necessary investigations that would follow.
### Refurbishment and Spares Cost (RSC)
This cost area accounts for the cost of refurbishment of launch vehicles. It is important to distinguish between the terms refurbishment and maintenance. Here, refurbishment refers to off-line activities: major vehicle overhauls that have to be performed only after a certain number of flights. Maintenance, on the other hand, refers to on-line activities and includes everything that has to be done between two consecutive flights. Maintenance is accounted for in the pre-launch ground operations cost, and is not handled as part of the RSC.
Major refurbishment activities include:
1. Detailed vehicle system inspection (especially structure, tanks, and thermal protection)
1. Exchange of critical structure elements, such as TPS panels
1. Replacement of the complete main rocket engines
1. Exchange of critical components of the pressurization and feed system, power and electric system, and so on
The refurbishment costs for a vehicle element are typically treated as a percentage of the element production cost. The total refurbishment cost over the vehicle's lifetime must be distributed over the total number of flights to find the impact on the CpF. Like in calculating the development and production costs, engines and vehicle stages are handled separately.
#### Vehicle System Refurbishment Cost
The refurbishment and spares cost per flight for various aircraft and spacecraft is given in the chart below:

It is also important to note that the refurbishment cost and vehicle lifetime are NOT independent. The average per-flight refurbishment effort will increase with an increasing number of lifetime flights, since a larger number of vehicle elements will need to be exchanged. With this in mind, there may be an optimum number of vehicle reflights, beyond which it is more cost-effective to introduce a new vehicle than to continue reusing an existing one.
#### Rocket Engine Refurbishment Cost
There is very little data available to quantify engine refurbishment cost. Data for the SSMEs, however, indicates a per-flight refurbishment cost of 11% of the original production cost. For future RLVs with a self-diagnosis system that indicates maintenance requirements, the refurbishment effort can be expected to drop below 0.5% per flight, with refurbishment every 20 to 25 flights.
Engine lifetime is heavily influenced by the pressure levels involved. An effective strategy is to operate engines at approximately 90% of design thrust, which substantially lowers refurbishment effort and can decrease the CpF, despite the penalty for operating at lower thrust.
For solid rocket motors, cases can only be reused a few times due to relatively expensive recovery operations. In most scenarios, the cost-effectiveness of reusing solid rocket motors is questionable. In the case of the shuttle SRBs, the recovery and refurbishment effort for the two SRBs was more expensive than a pair of expendable SRBs without the recovery equipment would have been.
### Indirect Operations Cost (IOC)
The Indirect Operations Cost consists of all costs that represent a constant value per year, essentially independent of vehicle size and launch rate. This includes program administration and management, marketing and customer relations, general fees and taxes, technical support activities, and pilot training, among other things.
Three general cost elements make up the IOC:
1. Program Administration and System Management
1. Technical Support
1. Launch Site Support and Maintenance
The IOC typically adds up to a fixed cost budget per year, which must then be divided by the number of launches per year to find its contribution to the CpF. For approximately 6 - 12 LpA, the IOC typically represents 8 - 15% of the CpF. For low launch rates, however, its contribution to the CpF can be much larger.
#### Program Administration and System Management
The most practical way of assessing the cost of this area is to estimate a number of staff required for these tasks. The staff has to cover a variety of tasks, including marketing and customer relations, vehicle procurement, contracts handling, and accounting. The related general overhead for the staff and these tasks, including rental charges, travel costs, computer power, exhibit and publication costs, and others must be included.
Costs of government fees, taxes, insurance costs, and financing costs also need to be considered here.
#### Technical Support
Launch service providers need to provide technical support capabilities for ground operations, including:
1. Supervision of technical standard and vehicle performance
1. Supervision of industrial contracts for vehicle procurement
1. Failure analysis and implementation of technical changes
1. Spares storage and administration (not belonging to refurbishment cost)
1. Pilot training and support for piloted vehicles
It is easiest to estimate these costs by estimating the number of staff required.
#### Launch Site Support and Maintenance
Launch sites operated by governmental organizations operate under a special budget, so launches of national spacecraft are therefore not charged with a launch site support and maintenance cost. For commercial endeavours, however, launch service providers typically have to pay a fixed fee per month or per year for use of the launch infrastructure, in addition to other per-launch fees (which are part of the DOC). This fee is entirely dependent on the specific launch site's fee schedule.
## Cost per Flight and Pricing
It is important now to make a distinction between Cost per Flight (CpF) and Price per Flight (PpF). Cost per Flight is the cost of production and operations per launch for the launch service provider. Price per Flight is the price charged by the launch service provider and paid for by the customer, which includes a development cost amortization charge and profit, in addition to the production and operations costs.
There are a few subtleties between CpF and PpF for ELVs and RLVs, as noted below.
The CpF includes:
1. Vehicle Cost
- Fabrication, assembly, verification
- Expendable elements cost (RLVs only)
- Refurbishment and spares cost (RLVs only)
1. Direct Operations Cost
- Ground operations
- Flight and mission operations
- Propellants, gases, and consumables
- Ground transportation costs
- Launch facilities user fee
- Public damage fee
- Vehicle failure impact charge (ELVs only)
- Mission abort and premature vehicle loss charge (RLVs only)
- Other charges (taxes, fees, ...)
1. Indirect Operations Cost
- Program administration and system management
- Marketing, customer relations, and contracts office
- Technical system support
- Launch site and range cost
Then, for PpF, there are the following items in addition to the CpF items:
4. Business Charges
- Development cost amortization charge
- Nominal profit
The total customer cost might also include an insurance fee for payload loss or launch failure (ELVs only) on top of all of this.
For comparison of different launch vehicle configurations and architectures within the same study, it may be appropriate to only consider the vehicle cost and direct operations cost. However, in order to get a complete CpF and be able to compare to existing vehicles, all cost items must be included in the model.
### Production Cost Amortization
In the case of RLVs, there is a charge for vehicle amortization, which is the production cost of the vehicle divided by the total expected number of flights. Little data exists for the maximum number of flights for a reusable vehicle, but it is expected to be somewhere between 100 and 300 flights. Choosing the optimal number of flights for a vehicle requires careful consideration of the refurbishment and amortization cost. There will likely exist a number of flights which yields a minimum CpF.
<img src="transcost_figures/cost_v_flights.png" alt="Drawing" style="width: 500px;"/>
Amortization of rocket engines must also be considered. The total number of flights per engine is likely substantially less than that of the vehicle, somewhere between 30 and 80 flights.
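Combining vehicle and engine amortization over their assumed lifetimes gives a per-flight amortization charge. All production costs and flight counts below are illustrative assumptions:

```
# Illustrative (made-up) production costs and assumed lifetimes
vehicle_cost = 6000.0   # vehicle airframe production cost [PYr]
engine_cost = 300.0     # production cost per engine [PYr]
n_engines = 3
vehicle_flights = 150   # assumed vehicle lifetime (100-300 flights)
engine_flights = 50     # assumed engine lifetime (30-80 flights)

amortization_per_flight = (vehicle_cost / vehicle_flights
                           + n_engines * engine_cost / engine_flights)
print(amortization_per_flight)  # production amortization charge per flight [PYr]
```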
### Effects of Vehicle Size and Launch Frequency on CpF
CpF tends to decrease with launch frequency. This reflects the effects of the learning curve, as well as better distribution of indirect operations costs, which tend to be independent of launch frequency. ELVs are more sensitive to launch frequency than RLVs; IOC and DOC for RLVs are lower due to aircraft-like operations, which explains this difference in sensitivity.
CpF also tends to increase with GLOW or payload capability. For small launch vehicles, the CpF difference between ELVs and RLVs of the same payload mass is generally negligible. For large launch vehicles, the difference is substantial. The reason is that for large launch vehicles the hardware cost becomes substantial, which gives RLVs a cost advantage, since major hardware is reused. Additionally, the major expenditures for RLVs are typically operations costs, which are less sensitive to vehicle size.
<img src="transcost_figures/cpf_v_leo-payload.png" alt="Drawing" style="width: 500px;"/>
### Development Cost Amortization
In the case of government funded launch systems, the launch service provider typically is not concerned with a development cost amortization charge. This is different for commercial endeavours however. Development costs for very large or complicated launch vehicles are so high that it would likely be impossible to provide commercial funding, given that it could take 10 years or more for the investment to pay off. For this reason, commercially funded projects tend to be of a smaller scale.
Considering commercial endeavours, for a new ELV, the CpF would likely require a 15 - 40% development amortization charge in the case of 200-400 flights for its life-cycle. For a new RLV, the CpF would likely require a 200 - 400% surcharge for the same life-cycle.
However, despite the huge non-recurring development cost of an RLV, consider a case of an ELV with a 120M CpF and an RLV with a 35M CpF, each taking a payload of 8 Mg to LEO: the total cost for the RLV (including development amortization) becomes less than that of the ELV once the number of launches exceeds roughly 40 - 90. RLVs can almost certainly be competitive given a large enough total number of flights.
<img src="transcost_figures/dev_amortization.png" alt="Drawing" style="width: 500px;"/>
### Pricing Strategies
1. **Standard pricing.** Price the launch vehicle based on the actual cost of the vehicle, flight operations, and amortization, plus profit.
1. **Pricing below cost.** A few situations might make this practical. For instance, if an additional launch can be performed without affecting the IOC, and therefore costs relatively little. Another situation where this might make sense is if an interruption of the production line or a layoff of a specialized team can be avoided.
1. **Pricing according to payload mass.** This could make sense in the case of multiple payloads. However, it should be noted that the vehicle's maximum payload capacity will be reduced due to the necessity of more payload support structures. Additionally, the payload utilization factor usually decreases, since it is difficult to find multiple payloads whose combined mass exactly achieves the payload capacity.
1. **Pricing for mini-satellites (piggy-back payloads).** This is for small satellites that can make up part of the residual payload capacity. Prices for this are typically negotiable.
## Cost of Unreliability/Insurance
Historically, liquid boosters and solid boosters have a similar reliability of ~98%. However, reliability is an inherent problem for expendable vehicles, since stages and components cannot be tested in flight-like conditions. Even when designing for redundancy, each production run involves new materials and slight variations.
### Cost of ELV's Unreliability and Insurance Fees
Launch vehicle failures not only have an impact on insurance rates (paid by the customer), but also impose a cost penalty on the launch service provider, who now has to perform a failure analysis and implement technical improvements. Insurance costs vary widely and depend on recent launch successes and failures.
### Cost of RLV's Unreliability and Insurance Fees
The case of reliability and subsequent insurance costs is very different for RLVs than for ELVs. For RLVs:
1. The reliability will be higher due to better testing, and higher degree of redundancy and integrated health control systems
1. The flight can be aborted, with the vehicle landing at the launch site or an alternative site, and the payload can be saved
1. The vehicle loss insurance fee is paid by the launch provider as part of the DOC
In this case, the customer does not need to pay the vehicle insurance fee, and the payload insurance fee will be substantially less.
### Specific Costs vs. Total Annual Transportation Mass (Market Size) and Optimum RLV Payload Capability
If annual transportation demand increases, launch frequency and launch vehicle size will increase. These factors have an effect on the specific transportation cost.
Based on two data studies (SPS and NEPTUNE), it was found that specific payload costs decrease with increasing market size. For a RLV and a market size of 1000 Mg/Yr, one could expect a specific cost of 2 - 10 PYr/Mg. For a market size of 10000 Mg/Yr, one could expect a specific cost of 0.3 - 2 PYr/Mg.
<img src="transcost_figures/spec-payload_v_total.png" alt="Drawing" style="width: 500px;"/>
Additionally, for a given market size, there exists an optimum RLV payload capacity that minimizes the specific cost. Larger launch vehicles typically have a higher payload utilization fraction and mass efficiency, which decreases the specific cost. However, a larger launch vehicle also means a reduced launch rate, which increases the specific cost.
<img src="transcost_figures/spec-cost_v_payload.png" alt="Drawing" style="width: 600px;"/>
## Sources of Uncertainty
### Development Cost Model
Accuracy of the development model depends very much on:
1. consideration of ALL development cost criteria
1. realistic input data for the different vehicle and engines mass values, as well as for schedule
Risks:
1. Required technical changes and additional qualifications for technology that was chosen but not fully qualified
1. Changing vehicle specifications - vehicle design should be frozen at the start of the program
1. Underestimation of system mass
1. Assuming that everything will stay on schedule
### Production Cost Model
Criteria to consider:
1. Scope of verification/acceptance testing
1. Modification of the product during production
1. Production quantity - this is a huge uncertainty that has a large impact on production cost
1. Varying PYr-costs for a particular company - the rest of TRANSCOST uses an average PYr value for aerospace in the US
1. Personnel experience
### Operations Cost Model
Uncertainties arise from:
1. The duration of the operational phase and the total number of flights - this determines number of vehicles to be built
1. Launch frequency
1. Uncertainty over future launch site conditions
1. Required staff size and fixed annual cost
1. Technical problems during operational phase and subsequent failure investigation, implementation of modifications, interruptions to flight operations
| github_jupyter |
## Dependencies
```
import json, glob
import numpy as np
import pandas as pd
import tensorflow as tf
from tweet_utility_scripts import *
from tweet_utility_preprocess_roberta_scripts_aux import *
from transformers import TFRobertaModel, RobertaConfig
from tokenizers import ByteLevelBPETokenizer
from tensorflow.keras import layers
from tensorflow.keras.models import Model
```
# Load data
```
test = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/test.csv')
print('Test samples: %s' % len(test))
display(test.head())
```
# Model parameters
```
input_base_path = '/kaggle/input/208-robertabase/'
with open(input_base_path + 'config.json') as json_file:
config = json.load(json_file)
config
# vocab_path = input_base_path + 'vocab.json'
# merges_path = input_base_path + 'merges.txt'
base_path = '/kaggle/input/qa-transformers/roberta/'
vocab_path = base_path + 'roberta-base-vocab.json'
merges_path = base_path + 'roberta-base-merges.txt'
config['base_model_path'] = base_path + 'roberta-base-tf_model.h5'
config['config_path'] = base_path + 'roberta-base-config.json'
model_path_list = glob.glob(input_base_path + 'model' + '*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep = '\n')
```
# Tokenizer
```
tokenizer = ByteLevelBPETokenizer(vocab_file=vocab_path, merges_file=merges_path,
lowercase=True, add_prefix_space=True)
```
# Pre process
```
test['text'].fillna('', inplace=True)
test['text'] = test['text'].apply(lambda x: x.lower())
test['text'] = test['text'].apply(lambda x: x.strip())
x_test, x_test_aux, x_test_aux_2 = get_data_test(test, tokenizer, config['MAX_LEN'], preprocess_fn=preprocess_roberta_test)
```
# Model
```
module_config = RobertaConfig.from_pretrained(config['config_path'], output_hidden_states=True)
def model_fn(MAX_LEN):
input_ids = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='input_ids')
attention_mask = layers.Input(shape=(MAX_LEN,), dtype=tf.int32, name='attention_mask')
base_model = TFRobertaModel.from_pretrained(config['base_model_path'], config=module_config, name="base_model")
_, _, hidden_states = base_model({'input_ids': input_ids, 'attention_mask': attention_mask})
h11 = hidden_states[-2]
x = layers.SpatialDropout1D(.1)(h11)
start_logits = layers.Dense(1, name="start_logit", use_bias=False)(x)
start_logits = layers.Flatten()(start_logits)
end_logits = layers.Dense(1, name="end_logit", use_bias=False)(x)
end_logits = layers.Flatten()(end_logits)
start_probs = layers.Activation('softmax', name='y_start')(start_logits)
end_probs = layers.Activation('softmax', name='y_end')(end_logits)
model = Model(inputs=[input_ids, attention_mask], outputs=[start_probs, end_probs])
return model
```
# Make predictions
```
NUM_TEST_SAMPLES = len(test)
test_start_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
test_end_preds = np.zeros((NUM_TEST_SAMPLES, config['MAX_LEN']))
for model_path in model_path_list:
print(model_path)
model = model_fn(config['MAX_LEN'])
model.load_weights(model_path)
test_preds = model.predict(get_test_dataset(x_test, config['BATCH_SIZE']))
    test_start_preds += test_preds[0]  # sum logits across models; the later argmax is unaffected by not dividing
    test_end_preds += test_preds[1]
```
# Post process
```
test['start'] = test_start_preds.argmax(axis=-1)
test['end'] = test_end_preds.argmax(axis=-1)
test['text_len'] = test['text'].apply(lambda x : len(x))
test['text_wordCnt'] = test['text'].apply(lambda x : len(x.split(' ')))
# test['end'].clip(0, test['text_len'], inplace=True)
# test['start'].clip(0, test['end'], inplace=True)
test['selected_text'] = test.apply(lambda x: decode(x['start'], x['end'], x['text'], config['question_size'], tokenizer), axis=1)
test['selected_text'] = test.apply(lambda x: x['text'] if (x['selected_text'] == '') else x['selected_text'], axis=1)
test['selected_text'].fillna(test['text'], inplace=True)
```
# Visualize predictions
```
display(test.head(10))
```
# Test set predictions
```
submission = pd.read_csv('/kaggle/input/tweet-sentiment-extraction/sample_submission.csv')
submission['selected_text'] = test['selected_text']
submission.to_csv('submission.csv', index=False)
submission.head(10)
```
| github_jupyter |
```
!pip install git+https://github.com/AlexMa123/phase_reconstruct
!wget "https://onedrive.live.com/download?cid=45D5A10F94E33861&resid=45D5A10F94E33861%21203780&authkey=AMUV4xKs9HGF32A" -O vanderpol_result.npy
import numpy as np
import phase_reconstruct as pr
import matplotlib.pyplot as plt
```
# Van der Pol oscillator
The Van der Pol oscillator is described by the equation:
$$\ddot{x}-\mu\left(1-x^{2}\right) \dot{x}+\omega^{2} x=0.05 \xi(t)$$
Let $y = \dot{x}$; then
$$
\begin{eqnarray}
\dot{x} &=& y \\
\dot{y} &=& 0.05 \xi(t) + \mu(1-x^2) y - \omega^2 x
\end{eqnarray}
$$
My simulation is based on the Euler-Maruyama algorithm with time step $dt = 0.01$.
The time range is $(0, 10000)$, and the initial condition is $[x, y] = [2, 0]$.
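The integration scheme can be sketched as follows. Note that $\mu$ and $\omega$ are assumed values here, since the notebook does not state the parameters used to generate `vanderpol_result.npy`; only $dt$, the noise amplitude, and the initial condition come from the text above:

```python
import numpy as np

def simulate_vdp(mu=1.0, omega=1.0, noise=0.05, dt=0.01, t_max=100.0,
                 x0=2.0, y0=0.0, seed=0):
    """Euler-Maruyama integration of the noisy Van der Pol oscillator.
    mu and omega are illustrative assumptions, not the values used for the data file."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    t = np.arange(n) * dt
    x = np.empty(n)
    y = np.empty(n)
    x[0], y[0] = x0, y0
    for i in range(n - 1):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment, std = sqrt(dt)
        x[i + 1] = x[i] + y[i] * dt
        y[i + 1] = y[i] + (mu * (1.0 - x[i] ** 2) * y[i] - omega ** 2 * x[i]) * dt + noise * dW
    return t, x, y

t_sim, x_sim, y_sim = simulate_vdp()
```

With a small noise amplitude the trajectory relaxes onto the deterministic limit cycle, which is why the phase-reconstruction methods below work well on this signal.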
```
vdp_oscillator = np.load("vanderpol_result.npy")
t = vdp_oscillator[0]
x = vdp_oscillator[1]
y = vdp_oscillator[2]
plt.figure(figsize=(10, 6))
plt.subplot(221)
plt.plot(t[0*100 : 20*100], x[0*100 : 20*100], 'k')
plt.xlabel("t")
plt.ylabel("x")
plt.subplot(223)
plt.plot(t[0*100 : 20*100], y[0*100 : 20*100], 'k')
plt.xlabel("t")
plt.ylabel("y")
plt.subplot(122)
plt.plot(x[0*100 : 20*100], y[0*100 : 20*100], 'k')
plt.xlabel("x")
plt.ylabel("y")
```
## Signals
Four signals are used to obtain the protophase:
$$Y_1 = x, (Y_0 = 0, \hat{Y_0} = 0)$$
$$Y_2 = x, (Y_0 = 0, \hat{Y_0} = 0.8)$$
$$Y_3 = \exp{[x]} - 2.2, (Y_0 = 0, \hat{Y_0} = 0)$$
$$Y_4 = (x^2 - 1.7) x, (Y_0 = 0, \hat{Y_0} = 0)$$
## Get protophase
There are two ways to obtain the protophase:
1. Using the Hilbert-plane phase:
$$ \theta = \arctan\left(\frac{S_H - \hat{Y_0}}{S - Y_0} \right)$$
where $S$ is the signal and $S_H$ is the Hilbert transform of the signal.
The function used to calculate protophase by this method is:
```python
phase_reconstruct.tools.get_protophase_hilbert(signal, y0, y0_hat)
```
The resulting phase ranges from $0$ to $2\pi$,
and one can use the function
```python
phase_reconstruct.tools.flatten_phase(signal, threshold=5)
```
to unwrap the phase so that it grows from $0$ to $\infty$.
2. General events method
$$ \theta = 2\pi \frac{L(t) - L(t_i)}{L(t_{i + 1}) - L(t_i)} + 2\pi i, \text{ where }t_i < t < t_{i + 1}$$
The function is:
```python
phase_reconstruct.tools.get_protophase(signal, events_index)
```
where `events_index` contains the integer sample indices of the marker events.
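Both estimators can be sketched in a few lines, using `scipy.signal.hilbert` directly rather than the package helpers; for the events method, $L(t)$ is taken to be time itself, a simplifying assumption:

```python
import numpy as np
from scipy.signal import hilbert

def protophase_hilbert(signal, y0=0.0, y0_hat=0.0):
    """Method 1: angle in the Hilbert (analytic-signal) plane around (y0, y0_hat)."""
    sa = hilbert(signal)
    return np.angle((sa.real - y0) + 1j * (sa.imag - y0_hat)) % (2 * np.pi)

def protophase_events(signal, events_index):
    """Method 2: linear interpolation between marker events, assuming L(t) = t."""
    theta = np.full(len(signal), np.nan)
    for i in range(len(events_index) - 1):
        t0, t1 = events_index[i], events_index[i + 1]
        theta[t0:t1] = 2 * np.pi * (np.arange(t0, t1) - t0) / (t1 - t0) + 2 * np.pi * i
    return theta

# toy usage on a clean sine wave
tt = np.linspace(0, 10 * np.pi, 5000)
s = np.sin(tt)
theta_h = protophase_hilbert(s)                           # wrapped to [0, 2*pi)
theta_e = protophase_events(s, [0, 1000, 2000, 3000, 4000])  # grows by 2*pi per event
```

For a clean harmonic signal the two estimates agree; they differ (and need the protophase-to-phase correction below) when the signal's waveform is nonuniform over a cycle.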
```
signals = [x, x, np.exp(x) - 2.2, (x**2 - 1.7) * x]
center = [(0, 0), (0, 0.8), (0, 0), (0, 0)]
start = [0, 0.25, 1.5, 2.0]
# calculate protophase
protophase = np.empty(4, dtype=object)
for i in range(3):  # the first three signals use the Hilbert method; Y_4 uses the events method below
(y0, y0_hat) = center[i]
signal = signals[i]
protophase[i] = pr.tools.get_protophase_hilbert(signal, y0, y0_hat)
if i == 0:
protophase[i], shift_place = pr.tools.flatten_phase(protophase[i]) # shift place is where the protophase jump from 2pi to 0
else:
protophase[i], _ = pr.tools.flatten_phase(protophase[i]) # shift place is where the protophase jump from 2pi to 0
protophase[i] = protophase[i][shift_place[5] + 10: shift_place[-5]]
phase_shift = protophase[i][0] - start[i]
protophase[i] = protophase[i] - phase_shift
t = t[:protophase[0].size]
protophase[3] = pr.tools.get_protophase(signals[3], shift_place)
protophase[3] = protophase[3][shift_place[5] + 10: shift_place[-5]]
phase_shift = protophase[3][0] - start[3]
protophase[3] = protophase[3] - phase_shift
T0 = (shift_place[-5] - shift_place[5]) * 0.01 / (shift_place.size - 10)
omega0 = 2 * np.pi / T0
plt.figure(figsize = (9, 9))
for i in range(4):
plt.subplot(221 + i)
signal = signals[i]
sa = pr.tools.hilbert(signal)[10 * 100: 110 * 100]
plt.plot(sa.real, sa.imag, 'k')
if i < 3:
plt.axhline(center[i][1], color='k', linestyle='dashed', linewidth=0.5)
plt.axvline(center[i][0], color='k', linestyle='dashed', linewidth=0.5)
plt.xlabel("$Y_{}$".format(i))
sa = pr.tools.hilbert(signal)
plt.plot(sa.real[shift_place[5:-5]], sa.imag[shift_place[5:-5]], 'r.', label="poincare section")
```
## From protophase to phase
Use the function
```python
phase_reconstruct.get_phase.proto_to_phase(protophase, Num_of_fourier_terms)
```
which is based on the Fourier transform,
or use the function:
```python
phase_reconstruct.get_phase.proto_to_phase_fast(protophase, nbins)
```
which is based on the CDF of the protophase.
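The idea behind the CDF variant can be sketched as follows (an assumed re-implementation of the idea, not the package's actual code): the wrapped protophase is mapped through its own empirical CDF, which makes the resulting phase grow uniformly in time on average:

```python
import numpy as np

def proto_to_phase_cdf(protophase, nbins=1000):
    """Map a protophase to a phase with uniform density via the empirical CDF
    of the wrapped protophase (a sketch of the proto_to_phase_fast idea)."""
    wrapped = protophase % (2 * np.pi)
    hist, edges = np.histogram(wrapped, bins=nbins, range=(0, 2 * np.pi))
    cdf = np.concatenate(([0.0], np.cumsum(hist) / hist.sum()))  # CDF at bin edges
    phi_wrapped = 2 * np.pi * np.interp(wrapped, edges, cdf)     # uniformize
    return phi_wrapped + 2 * np.pi * np.floor(protophase / (2 * np.pi))

# a protophase built by distorting a uniform phase; the CDF map should undo it
phi_true = np.linspace(0, 20 * np.pi, 20000)
theta_distorted = phi_true + 0.5 * np.sin(phi_true)  # monotone, nonuniform distortion
phi_est = proto_to_phase_cdf(theta_distorted)
```

Because the distortion is a fixed monotone function of the phase, its empirical CDF is exactly the inverse map, so `phi_est` recovers `phi_true` up to binning error.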
```
phase = np.empty(4, dtype=object)
phase_fast = np.empty(4, dtype=object)
for i in range(4):
phase[i] = pr.get_phase.proto_to_phase(protophase[i], 48).real
phase[i], _ = pr.tools.flatten_phase(phase[i])
phase_fast[i] = pr.get_phase.proto_to_phase_fast(protophase[i], 1000)
phase_fast[i], _ = pr.tools.flatten_phase(phase_fast[i])
plt.figure(figsize=(20, 4))
ax = plt.subplot(131)
for i in range(4):
line = protophase[i] - omega0 * t
plt.plot(t[200*100: 220*100], line[200*100: 220*100], label=f"$Y_{i}$")
plt.xlabel("time")
plt.ylabel(r"$\theta - \omega_0 t$")
ax.set_title("protophase")
plt.legend()
plt.ylim([-0.5, 3])
ax = plt.subplot(132)
for i in range(4):
line = phase[i] - omega0 * t
plt.plot(t[200*100: 220*100], line[200*100: 220*100], label=f"$Y_{i}$")
plt.xlabel("time")
plt.ylabel(r"$\Phi - \omega_0 t$")
ax.set_title("prot_to_phase")
plt.legend()
plt.ylim([-0.5, 3])
ax = plt.subplot(133)
for i in range(4):
line = phase_fast[i] - omega0 * t
plt.plot(t[200*100: 220*100], line[200*100 : 220*100], label=f"$Y_{i}$")
plt.xlabel("time")
plt.ylabel(r"$\Phi - \omega_0 t$")
ax.set_title("prot_to_phase_fast")
plt.legend()
plt.ylim([-0.5, 3])
%timeit pr.get_phase.proto_to_phase_fast(protophase[i], 1000)
%timeit pr.get_phase.proto_to_phase(protophase[i], 48).real
```
## Long time behavior
(a constant offset is manually added to `phase` and `phase_fast` to separate the three curves)
```
plt.figure(figsize=(20, 16))
for i in range(4):
ax = plt.subplot(221 + i)
plt.plot(t, protophase[i] - omega0 * t, label="protophase")
plt.plot(t, phase[i] - omega0 * t + 1, label="phase")
plt.plot(t, phase_fast[i] - omega0 * t + 2, label="phase_fast")
ax.set_title(f"$Y_{i}$")
plt.xlim([2000, 6000])
plt.xlabel("time")
plt.ylabel(r"$\theta - \omega t$")
plt.legend()
```
## Phase distribution
```
plt.figure(figsize=(12, 12))
for i in range(4):
ax = plt.subplot(221 + i)
plt.hist(protophase[i] % (2 * np.pi), bins=1000, density=True, histtype="step", label="protophase")
plt.hist(phase[i] % (2 * np.pi), bins=1000, density=True, alpha=0.5, label="phase")
plt.ylim([0, 0.5])
plt.legend()
plt.show()
```
| github_jupyter |
# Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem** if the training dataset is not big enough. Such a model may do well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!
**You will learn to:** Use regularization in your deep learning models.
Let's first import the packages you are going to use.
### <font color='darkblue'> Updates to Assignment <font>
#### If you were working on a previous version
* The current notebook filename is version "2a".
* You can find your work in the file directory as version "2".
* To see the file directory, click on the Coursera logo at the top left of the notebook.
#### List of Updates
* Clarified explanation of 'keep_prob' in the text description.
* Fixed a comment so that keep_prob and 1-keep_prob add up to 100%
* Updated print statements and 'expected output' for easier visual comparisons.
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
```
train_X, train_Y, test_X, test_Y = load_2D_dataset()
```
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
## 1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting the `keep_prob` to a value less than one
You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
```
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
```
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
```
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
```
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
## 2 - L2 Regularization
The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
```
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
    ### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
```
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m) * W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m) * W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m) * W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = \n"+ str(grads["dW1"]))
print ("dW2 = \n"+ str(grads["dW2"]))
print ("dW3 = \n"+ str(grads["dW3"]))
```
**Expected Output**:
```
dW1 =
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 =
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 =
[[-1.77691347 -0.11832879 -0.09397446]]
```
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
```
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
```
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
**What is L2-regularization actually doing?**:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
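The shrinking effect can be seen directly in the update rule: adding $\frac{\lambda}{m} W$ to the gradient is equivalent to multiplying $W$ by a factor slightly below 1 at every step. A small numeric sketch (illustrative values, not the assignment's network; the data gradient is set to zero to isolate the effect):

```python
import numpy as np

np.random.seed(0)
learning_rate, lambd, m = 0.3, 0.7, 100
W = np.random.randn(3, 3)
W0 = W.copy()

for _ in range(2000):
    dW = 0 + (lambd / m) * W      # data gradient assumed zero + L2 term
    W = W - learning_rate * dW    # same as W *= (1 - learning_rate * lambd / m)

print(np.abs(W).max())  # much smaller than np.abs(W0).max(): the weights decayed
```

Each step multiplies every weight by $(1 - \alpha\frac{\lambda}{m})$, so absent a data gradient the weights shrink geometrically toward zero -- hence the name "weight decay".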
<font color='blue'>
**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
## 3 - Dropout
Finally, **dropout** is a widely used regularization technique that is specific to deep learning.
**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitely possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
!-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in either the forward or backward propagation of that iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of any other specific neuron, because that other neuron might be shut down at any time.
### 3.1 - Forward propagation with dropout
**Exercise**: Implement the forward propagation with dropout. You are using a 3-layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
**Instructions**:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 1 with probability (`keep_prob`), and 0 otherwise.
**Hint:** Let's say that keep_prob = 0.8, which means that we want to keep about 80% of the neurons and drop out about 20% of them. We want to generate a vector that has 1's and 0's, where about 80% of them are 1 and about 20% are 0.
This python statement:
`X = (X < keep_prob).astype(int)`
is conceptually the same as this if-else statement (for the simple case of a one-dimensional array) :
```
for i,v in enumerate(x):
if v < keep_prob:
x[i] = 1
else: # v >= keep_prob
x[i] = 0
```
Note that the `X = (X < keep_prob).astype(int)` works with multi-dimensional arrays, and the resulting output preserves the dimensions of the input array.
Also note that without using `.astype(int)`, the result is an array of booleans `True` and `False`, which Python automatically converts to 1 and 0 if we multiply it with numbers. (However, it's better practice to convert data into the data type that we intend, so try using `.astype(int)`.)
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
```
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(*A1.shape) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = D1 < keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = np.multiply(A1, D1) # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(*A2.shape) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = D2 < keep_prob # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = np.multiply(A2, D2) # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
```
**Expected Output**:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
### 3.2 - Backward propagation with dropout
**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3-layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
**Instruction**:
Backpropagation with dropout is actually quite easy. You will have to carry out two steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
```
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob -- probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = np.multiply(dA2, D2) # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = np.multiply(dA1, D1) # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = \n" + str(gradients["dA1"]))
print ("dA2 = \n" + str(gradients["dA2"]))
```
**Expected Output**:
```
dA1 =
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 =
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
```
Let's now run the model with dropout (`keep_prob = 0.86`). This means that at every iteration you shut down each neuron of layers 1 and 2 with 14% probability. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
```
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
```
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Note**:
- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.
<font color='blue'>
**What you should remember about dropout:**
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output would be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2, so the output keeps the same expected value. You can check that this works for values of keep_prob other than 0.5.
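The expected-value argument above can be checked numerically. This is a minimal NumPy sketch (illustrative only, not part of the graded code): masking activations and then dividing by `keep_prob` leaves the mean activation essentially unchanged.

```python
import numpy as np

np.random.seed(0)
keep_prob = 0.5
a = np.ones((1000, 1000))                  # activations, all ones for a clear expectation
d = np.random.rand(*a.shape) < keep_prob   # dropout mask (inverted dropout)
a_dropped = (a * d) / keep_prob            # shut down some units, rescale the survivors
print(round(a_dropped.mean(), 2))          # close to 1.0, the original mean
```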
## 4 - Conclusions
**Here are the results of our three models**:
<table>
<tr>
<td>
**model**
</td>
<td>
**train accuracy**
</td>
<td>
**test accuracy**
</td>
</tr>
<tr>
<td>
3-layer NN without regularization
</td>
<td>
95%
</td>
<td>
91.5%
</td>
</tr>
<tr>
<td>
3-layer NN with L2-regularization
</td>
<td>
94%
</td>
<td>
93%
</td>
</tr>
<tr>
<td>
3-layer NN with dropout
</td>
<td>
93%
</td>
<td>
95%
</td>
</tr>
</table>
Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations on finishing this assignment, and on revolutionizing French football! :-)
<font color='blue'>
**What we want you to remember from this notebook**:
- Regularization will help you reduce overfitting.
- Regularization will drive your weights to lower values.
- L2 regularization and Dropout are two very effective regularization techniques.
```
# Import Splinter, BeautifulSoup, and Pandas
from splinter import Browser
from bs4 import BeautifulSoup as soup
import pandas as pd
from webdriver_manager.chrome import ChromeDriverManager
# Set the executable path and initialize Splinter
executable_path = {'executable_path': ChromeDriverManager().install()}
browser = Browser('chrome', **executable_path, headless=False)
```
### Visit the NASA Mars News Site
```
# Visit the mars nasa news site
url = 'https://redplanetscience.com/'
browser.visit(url)
# Optional delay for loading the page
browser.is_element_present_by_css('div.list_text', wait_time=1)
# Convert the browser html to a soup object and then quit the browser
html = browser.html
news_soup = soup(html, 'html.parser')
slide_elem = news_soup.select_one('div.list_text')
slide_elem.find('div', class_='content_title')
# Use the parent element to find the first a tag and save it as `news_title`
news_title = slide_elem.find('div', class_='content_title').get_text()
news_title
# Use the parent element to find the paragraph text
news_p = slide_elem.find('div', class_='article_teaser_body').get_text()
news_p
```
### JPL Space Images Featured Image
```
# Visit URL
url = 'https://spaceimages-mars.com'
browser.visit(url)
# Find and click the full image button
full_image_elem = browser.find_by_tag('button')[1]
full_image_elem.click()
# Parse the resulting html with soup
html = browser.html
img_soup = soup(html, 'html.parser')
img_soup
# find the relative image url
img_url_rel = img_soup.find('img', class_='fancybox-image').get('src')
img_url_rel
# Use the base url to create an absolute url
img_url = f'https://spaceimages-mars.com/{img_url_rel}'
img_url
```
### Mars Facts
```
df = pd.read_html('https://galaxyfacts-mars.com')[0]
df.head()
df.columns=['Description', 'Mars', 'Earth']
df.set_index('Description', inplace=True)
df
df.to_html()
```
# D1: Scrape High-Resolution Mars’ Hemisphere Images and Titles
### Hemispheres
```
# 1. Use browser to visit the URL
url = 'https://marshemispheres.com/'
browser.visit(url)
html = browser.html
hemi_img_soup = soup(html, 'html.parser')
hemi_img_soup
# 2. Create a list to hold the images and titles.
hemisphere_image_urls = []
# 3. Write code to retrieve the image urls and titles for each hemisphere.
img_links = browser.find_by_css('a.product-item img')
for i in range(len(img_links)):
#Find elements to click
browser.find_by_css('a.product-item img')[i].click()
hemisphere={}
sample_elem = browser.links.find_by_text('Sample').first
# Get hemisphere image URL
hemisphere['img_url'] = sample_elem['href']
# Get hemisphere title
hemisphere['title'] = browser.find_by_css('h2.title').text
# Add Objects to hemisphere_image_urls list
hemisphere_image_urls.append(hemisphere)
#Go Back
browser.back()
# 4. Print the list that holds the dictionary of each image url and title.
hemisphere_image_urls
# 5. Quit the browser
browser.quit()
```
```
from IPython.core.display import display, HTML
display(HTML("<style>.container {width: 80% !important; }</style>"))
# import warnings
# warnings.filterwarnings("default")
import sys
import time
import scanpy as sc
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib
from matplotlib import colors
myColors = ['#e6194b', '#3cb44b', '#ffe119', '#4363d8', '#f58231',
'#911eb4', '#46f0f0', '#f032e6', '#bcf60c', '#fabebe',
'#008080', '#e6beff', '#9a6324', '#fffac8', '#800000',
'#aaffc3', '#808000', '#ffd8b1', '#000075', '#808080',
'#307D7E', '#000000', "#DDEFFF", "#000035", "#7B4F4B",
"#A1C299", "#300018", "#C2FF99", "#0AA6D8", "#013349",
"#00846F", "#8CD0FF", "#3B9700", "#04F757", "#C8A1A1",
"#1E6E00", "#DFFB71", "#868E7E", "#513A01", "#CCAA35"]
colors2 = plt.cm.Reds(np.linspace(0, 1, 128))
colors3 = plt.cm.Greys_r(np.linspace(0.7,0.8,20))
colorsComb = np.vstack([colors3, colors2])
mymap = colors.LinearSegmentedColormap.from_list('my_colormap', colorsComb)
sys.path.append("../../../functions")
from SMaSH_functions import SMaSH_functions
sf = SMaSH_functions()
sys.path.append("/home/ubuntu/Taneda/GitLab/lung/Functions/")
from scRNA_functions import scRNA_functions
fc = scRNA_functions()
```
# Loading annData object
```
obj = sc.read_h5ad('../../../../../External_datasets/mouse_brain_all_cells_20200625_with_annotations.h5ad')
obj.X = obj.X.toarray()
obj = obj[obj.obs["Cell broad annotation"]=="Astro"]
print("%d genes across %s cells"%(obj.n_vars, obj.n_obs))
obj.var.set_index(obj.var["SYMBOL"], inplace=True, drop=False)
obj.var.index.name = None
new_sub_annotation = []
for c in obj.obs["Cell sub annotation"].tolist():
if c in ['Astro_AMY', 'Astro_AMY_CTX', 'Astro_CTX']:
new_sub_annotation.append('Astro_AMY_CTX')
elif c in ['Astro_THAL_hab', 'Astro_THAL_lat', 'Astro_THAL_med']:
new_sub_annotation.append('Astro_THAL')
else:
new_sub_annotation.append(c)
obj.obs["Cell sub annotation"] = new_sub_annotation
obj.obs["Cell sub annotation"] = obj.obs["Cell sub annotation"].astype("category")
```
#### Data preparation
```
sf.data_preparation(obj)
```
#### Data split
```
s = time.time()
from sklearn.model_selection import train_test_split
data = obj.X.copy()
myDict = {}
for idx, c in enumerate(obj.obs["Cell sub annotation"].cat.categories):
myDict[c] = idx
labels = []
for l in obj.obs["Cell sub annotation"].tolist():
labels.append(myDict[l])
labels = np.array(labels)
X = data
y = labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)
```
#### RankCorr
```
sys.path.append("../../../../../Functions/RankCorr/")
from picturedRocks import Rocks
genes = obj.var.index.tolist()
data = Rocks(X_train, y_train)
markers = data.CSrankMarkers(lamb=3.0, writeOut=False, keepZeros=False, onlyNonZero=False)
data.genes = np.array(genes)
marker_genes = data.markers_to_genes(markers)
selectedGenes = [x for x in marker_genes if x != 'nan'][:30]
selectedGenes_dict = {}
selectedGenes_dict["group"] = selectedGenes
e = time.time()
```
#### Classifiers
```
sf.run_classifiers(obj, group_by="Cell sub annotation", genes=selectedGenes, classifier="KNN", balance=True, title="RankCorr-KNN")
```
#### Heatmap selected genes
```
matplotlib.rcdefaults()
matplotlib.rcParams.update({'font.size': 11})
ax = sc.pl.DotPlot(obj,
selectedGenes,
groupby="Cell sub annotation",
standard_scale='var',
use_raw=True,
figsize=(6,10),
linewidths=2).style(cmap=mymap, color_on='square', grid=True, dot_edge_lw=1)
ax.swap_axes(swap_axes=True)
# ax.show()
ax.savefig("Figures/RankCorr_top30.pdf")
```
# Elapsed time
```
print("%d genes across %s cells"%(obj.n_vars, obj.n_obs))
print('Elapsed time (s): ', e-s)
```
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Plotting and Jupyter Notebooks</h1>
<h3>Unidata Python Workshop</h3>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
One of the most common tasks we face as scientists is making plots. Visually assessing data is one of the best ways to explore it - who can look at a wall of tabular data and tell anything? In this lesson we'll show how to make some basic plots in notebooks and introduce interactive widgets.
Matplotlib has many more features than we could possibly talk about - this is just a taste of making a basic plot. Be sure to browse the [matplotlib gallery](https://matplotlib.org/gallery.html) for ideas, inspiration, and a sampler of what's possible.
```
# Import matplotlib and use the inline magic so plots show up in the notebook
import matplotlib.pyplot as plt
%matplotlib inline
# Make some "data"
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
```
## Basic Line and Scatter Plots
```
# Make a simple line plot
plt.plot(x, y)
# Play with the line style
plt.plot(x, y, color='tab:red', linestyle='--')
# Make a scatter plot
plt.plot(x, y, color='tab:orange', linestyle='None', marker='o')
```
## Adding Interactivity to Plots
```
# Let's make some more complicated "data" using a sine wave with some
# noise superimposed. This gives us lots of things to manipulate - the
# amplitude, frequency, noise amplitude, and DC offset.
import numpy as np
x = np.linspace(0, 2*np.pi, 100)
y = 10 * np.sin(x) + np.random.random(100)*5 + 20
# Have a look at the basic form of the data
plt.plot(x, y)
plt.xlabel('X Values')
plt.ylabel('Y Values')
plt.title('My Temperature Data')
# Let's add some interactive widgets
from ipywidgets import interact
def plot_pseudotemperature(f, A, An, offset):
x = np.linspace(0, 2*np.pi, 100)
y = A * np.sin(f * x) + np.random.random(100) * An + offset
fig = plt.figure()
plt.plot(x, y)
plt.xlabel('X Values')
plt.ylabel('Y Values')
plt.title('My Temperature Data')
plt.show()
interact(plot_pseudotemperature,
f = (0, 10),
A = (1, 5),
An = (1, 10),
offset = (10, 40))
# We can specify the type of slider, range, and defaults as well
from ipywidgets import FloatSlider, IntSlider
def plot_pseudotemperature2(f, A, An, offset, title):
x = np.linspace(0, 2*np.pi, 100)
y = A * np.sin(f * x) + np.random.random(100) * An + offset
fig = plt.figure()
plt.plot(x, y)
plt.xlabel('X Values')
plt.ylabel('Y Values')
plt.title(title)
plt.show()
interact(plot_pseudotemperature2,
f = IntSlider(min=1, max=7, value=3),
A = FloatSlider(min=1, max=10, value=5),
An = IntSlider(min=1, max=10, value=1),
offset = FloatSlider(min=1, max=40, value=20),
title = 'My Improved Temperature Plot')
```
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
engine.execute('SELECT * FROM Measurement LIMIT 10').fetchall()
engine.execute('SELECT * FROM Station LIMIT 10').fetchall()
inspector = inspect(engine)
inspector.get_table_names()
measurement_columns = inspector.get_columns('measurement')
for m_c in measurement_columns:
print(m_c['name'], m_c["type"])
station_columns = inspector.get_columns('station')
for m_c in station_columns:
print(m_c['name'], m_c["type"])
```
# Exploratory Climate Analysis
```
# Design a query to retrieve the last 12 months of precipitation data and plot the results
Measurement = Base.classes.measurement
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
# Calculate the date 1 year before the last data point (2017-08-23)
last_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
Precipitation = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date > last_year).\
order_by(Measurement.date.desc()).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(Precipitation[:], columns=['date', 'prcp'])
df.set_index('date', inplace=True)
# Sort the dataframe by date
df = df.sort_index()
df.head()
# Use Pandas Plotting with Matplotlib to plot the data
df.plot(kind="line",linewidth=4,figsize=(15,10))
plt.style.use('fivethirtyeight')
plt.xlabel("Date")
plt.title("Precipitation Analysis (From 8/24/16 to 8/23/17)")
# Rotate the xticks for the dates
plt.xticks(rotation=45)
plt.legend(["Precipitation"])
plt.tight_layout()
plt.show()
# Use Pandas to calculate the summary statistics for the precipitation data
df.describe()
# How many stations are available in this dataset?
stations_count = session.query(Measurement).group_by(Measurement.station).count()
print("There are {} stations.".format(stations_count))
# What are the most active stations?
# List the stations and the counts in descending order.
active_stations = session.query(Measurement.station, func.count(Measurement.tobs)).group_by(Measurement.station).\
order_by(func.count(Measurement.tobs).desc()).all()
active_stations
# Using the station id from the previous query, calculate the lowest,
# highest, and average temperature recorded at the most active station.
most_active_station = 'USC00519281'
active_station_stat = session.query(Measurement.station, func.min(Measurement.tobs), func.max(Measurement.tobs), func.avg(Measurement.tobs)).\
filter(Measurement.station == most_active_station).all()
active_station_stat
# A query to retrieve the last 12 months of temperature observation data (tobs).
# Filter by the station with the highest number of observations.
temperature = session.query(Measurement.station, Measurement.date, Measurement.tobs).\
filter(Measurement.station == most_active_station).\
filter(Measurement.date > last_year).\
order_by(Measurement.date).all()
temperature
# Plot the results as a histogram with bins=12.
measure_df=pd.DataFrame(temperature)
hist_plot = measure_df['tobs'].hist(bins=12, figsize=(15,10))
plt.xlabel("Recorded Temperature")
plt.ylabel("Frequency")
plt.title("Last 12 Months Station Analysis for Most Active Station")
plt.show()
# Write a function called `calc_temps` that will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
trip_departure = dt.date(2018, 5, 1)
trip_arrival = dt.date(2018, 4, 2)
last_year = dt.timedelta(days=365)
trip_stat = (calc_temps((trip_arrival - last_year), (trip_departure - last_year)))
print(trip_stat)
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
average_temp = trip_stat[0][1]
minimum_temp = trip_stat[0][0]
maximum_temp = trip_stat[0][2]
peak_yerr = (maximum_temp - minimum_temp)/2
barvalue = [average_temp]
xvals = range(len(barvalue))
width = 0.4  # bar width (was undefined)
fig, ax = plt.subplots()
rects = ax.bar(xvals, barvalue, width, color='g', yerr=peak_yerr,
error_kw=dict(elinewidth=6, ecolor='black'))
def autolabel(rects):
# attach some text labels
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x()+rect.get_width()/2., .6*height, '%.2f'%float(height),
ha='left', va='top')
autolabel(rects)
plt.ylim(0, 100)
ax.set_xticks([1])
ax.set_xlabel("Trip")
ax.set_ylabel("Temp (F)")
ax.set_title("Trip Avg Temp")
fig.tight_layout()
plt.show()
#trip dates - last year
trip_arrival_date = trip_arrival - last_year
trip_departure_date = trip_departure - last_year
print(trip_arrival_date)
print(trip_departure_date)
# Calculate the rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
trip_arrival_date = trip_arrival - last_year
trip_departure_date = trip_departure - last_year
rainfall_trip_data = session.query(Measurement.station, Measurement.date, func.avg(Measurement.prcp), Measurement.tobs).\
filter(Measurement.date >= trip_arrival_date).\
filter(Measurement.date <= trip_departure_date).\
group_by(Measurement.station).\
order_by(Measurement.prcp.desc()).all()
rainfall_trip_data
df_rainfall_stations = session.query(Station.station, Station.name, Station.latitude, Station.longitude, Station.elevation).\
order_by(Station.station.desc()).all()
df_rainfall_stations
df_rainfall = pd.DataFrame(rainfall_trip_data[:], columns=['station','date','precipitation','temperature'])
df_station = pd.DataFrame(df_rainfall_stations[:], columns=['station', 'name', 'latitude', 'longitude', 'elevation'])
df_station
result = pd.merge(df_rainfall, df_station, on='station')
df_result = result.drop(['date', 'precipitation', 'temperature'], axis=1)
df_result
```
## Optional Challenge Assignment
```
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
```
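One possible way to fill in the scaffold above, sketched as a self-contained example: the trip dates are illustrative, and `daily_normals` here is a stand-in so the snippet runs without the database (swap in the SQLAlchemy version defined earlier).

```python
import datetime as dt
import pandas as pd

def daily_normals(date):
    """Stand-in for the SQLAlchemy daily_normals() defined above."""
    return [(60.0, 70.0, 80.0)]

# Set the start and end date of the trip (illustrative dates)
start, end = dt.date(2018, 1, 1), dt.date(2018, 1, 7)
trip_dates = pd.date_range(start, end)        # range of dates
md_strings = trip_dates.strftime('%m-%d')     # strip off the year
# Calculate the normals for each date and collect the tuples in `normals`
normals = [daily_normals(d)[0] for d in md_strings]
# Load the results into a DataFrame with the trip dates as the index
df = pd.DataFrame(normals, columns=['tmin', 'tavg', 'tmax'])
df['date'] = trip_dates
df.set_index('date', inplace=True)
# df.plot.area(stacked=False) would then draw the area plot
print(df.shape)  # (7, 3)
```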
# First steps with OMEGA - Closed box
Prepared by Benoit Côté.
If you have any questions, please contact Benoit Côté at <bcote@uvic.ca>.
A closed box in chemical evolution refers to a gas reservoir that does not exchange matter with its surroundings: nothing comes out of the box, and nothing comes in. Star formation and the enrichment process therefore take place inside an isolated environment. This notebook presents the basic input parameters that can modify your chemical evolution predictions.
```
# Python packages
%matplotlib nbagg
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
# One-zone galactic chemical evolution code
import omega
```
## 1. Initial mass of the gas reservoir
The mass of gas plays a crucial role in chemical evolution, as it sets the concentration of metals at each timestep. For example, for a fixed star formation history, a smaller gas reservoir will lead to a faster enrichment. In the following examples, we assume a primordial gas composition (only H, He, and Li).
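As an aside, the faster enrichment of a smaller reservoir can be illustrated with a toy analytic closed-box model. This is a simplified sketch, not OMEGA itself; the constant SFR, the net yield `p`, and the time range are illustrative assumptions.

```python
import numpy as np

def closed_box_Z(m_gas_0, sfr=1.0, p=0.01):
    """Toy closed-box metallicity: Z = -p * ln(remaining gas fraction)."""
    t_yr = np.linspace(1e8, 9e9, 50)   # times chosen so the gas never runs out
    m_gas = m_gas_0 - sfr * t_yr       # gas consumed by star formation [Msun]
    mu = m_gas / m_gas_0               # remaining gas fraction
    return -p * np.log(mu)

z_small = closed_box_Z(1e10)   # small reservoir
z_large = closed_box_Z(1e12)   # large reservoir
print(z_small[-1] > z_large[-1])  # True: the smaller reservoir enriches faster
```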
```
# Run OMEGA simulations.
# Constant star formation rate, "cte_sfr", of 1 Msun/yr
# with different gas reservoir, "mgal", in units of solar mass.
o_res_1e10 = omega.omega(cte_sfr=1.0, mgal=1e10)
o_res_1e11 = omega.omega(cte_sfr=1.0, mgal=1e11)
o_res_1e12 = omega.omega(cte_sfr=1.0, mgal=1e12)
```
In the plot below, we can see with the blue line that star formation slowly consumes the galactic gas. Even if stars eject matter, a fraction of the original stellar mass will always be locked in stellar remnants (white dwarfs, neutron stars, black holes). This feature, however, is not visible when the mass of the gas reservoir is very large (red line) compared to the total stellar mass formed.
```
# Plot the total mass of the gas reservoir as a function of time.
%matplotlib nbagg
o_res_1e12.plot_totmasses(color='r', label='M_res = 1e12 Msun')
o_res_1e11.plot_totmasses(color='g', label='M_res = 1e11 Msun')
o_res_1e10.plot_totmasses(color='b', label='M_res = 1e10 Msun')
plt.ylim(1e9, 2e12)
```
### 1.1. Evolution of the Fe concentration
[Fe/H] is often used as a proxy for time in galactic chemical evolution studies. But the relation between [Fe/H] and galactic age is not linear, as seen in the plot below.
```
# Plot the iron concentration of the gas reservoir as a function of time
%matplotlib nbagg
yaxis = '[Fe/H]'
o_res_1e12.plot_spectro(yaxis=yaxis, color='r', label='M_res = 1e12 Msun', shape='-.')
o_res_1e11.plot_spectro(yaxis=yaxis, color='g', label='M_res = 1e11 Msun', shape='--')
o_res_1e10.plot_spectro(yaxis=yaxis, color='b', label='M_res = 1e10 Msun', shape='-')
#plt.xscale('log')
```
### 1.2. Evolution of chemical abundances
Although the mass of gas plays a significant role in the evolution of [Fe/H], the abundances provided by stellar yields are still similar from one case to another, as shown below.
```
# Plot the Si to Fe abundances of the gas reservoir as a function of time (you can try different elements).
%matplotlib nbagg
xaxis = 'age'
yaxis = '[Si/Fe]'
o_res_1e12.plot_spectro(xaxis=xaxis, yaxis=yaxis, color='r', label='mgal = 1e12 Msun', shape='-.')
o_res_1e11.plot_spectro(xaxis=xaxis, yaxis=yaxis, color='g', label='mgal = 1e11 Msun', shape='--')
o_res_1e10.plot_spectro(xaxis=xaxis, yaxis=yaxis, color='b', label='mgal = 1e10 Msun', shape='-')
```
In the [X/Fe] vs [Fe/H] space, **scaling up and down the mass of gas shifts the predictions on the [Fe/H] axis**, almost like a pure translation, as shown below.
```
# Plot the Si to Fe abundances of the gas reservoir as a function of [Fe/H]
%matplotlib nbagg
yaxis = '[Si/Fe]'
xaxis = '[Fe/H]'
o_res_1e12.plot_spectro(xaxis=xaxis, yaxis=yaxis, color='r', label='mgal = 1e12 Msun', shape='-.')
o_res_1e11.plot_spectro(xaxis=xaxis, yaxis=yaxis, color='g', label='mgal = 1e11 Msun', shape='--')
o_res_1e10.plot_spectro(xaxis=xaxis, yaxis=yaxis, color='b', label='mgal = 1e10 Msun', shape='-')
plt.ylim(-0.2, 2.5)
```
## 2. Number of Type Ia supernovae
SNe Ia can eject a significant amount of Fe. In the [X/Fe] vs [Fe/H] space, if SNe Ia do not eject significantly the element X compared to core-collapse SNe (CC SNe) or AGB stars, increasing the number of SNe Ia will increase your final Fe abundances without modifying the element X. In other words, [X/Fe] will decrease, but [Fe/H] will increase.
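The shift can be seen with a quick bit of bracket-notation arithmetic. This is a hypothetical sketch with arbitrary number ratios (not OMEGA output): doubling Fe while holding element X and H fixed moves [Fe/H] up and [X/Fe] down by exactly $\log_{10}(2) \approx 0.3$ dex.

```python
import math

def bracket(n_num, n_den, solar_ratio):
    """[A/B] = log10(N_A/N_B) - log10(N_A/N_B)_sun."""
    return math.log10(n_num / n_den) - math.log10(solar_ratio)

# Arbitrary illustrative number densities and solar ratios
n_x, n_fe, n_h = 1.0, 1.0, 1e5
solar_x_fe, solar_fe_h = 1.0, 1e-5

fe_h_before = bracket(n_fe, n_h, solar_fe_h)
x_fe_before = bracket(n_x, n_fe, solar_x_fe)
# Double the Fe content (e.g. more SNe Ia); X and H are untouched
fe_h_after = bracket(2 * n_fe, n_h, solar_fe_h)
x_fe_after = bracket(n_x, 2 * n_fe, solar_x_fe)
print(fe_h_after - fe_h_before, x_fe_after - x_fe_before)  # +0.301..., -0.301...
```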
```
# Run OMEGA simulations
# Different numbers of SNe Ia per stellar mass, "nb_1a_per_m",
# formed in each simple stellar population (SYGMA). Here, we use
# the same gas reservoir to isolate the impact of "nb_1a_per_m".
o_res_1e10_low_Ia = omega.omega(cte_sfr=1.0, mgal=1e10, nb_1a_per_m=1.0e-4)
o_res_1e10 = omega.omega(cte_sfr=1.0, mgal=1e10, nb_1a_per_m=1.0e-3)
o_res_1e10_high_Ia = omega.omega(cte_sfr=1.0, mgal=1e10, nb_1a_per_m=1.0e-2)
```
As seen in the plot below, SNe Ia only appear at [Fe/H] = -2.5 in our case. Below this [Fe/H] value, the predictions are mainly driven by the ejecta of massive stars, which have short lifetimes compared to SN Ia progenitor stars.
```
# Plot the iron concentration of the gas reservoir as a function of time
%matplotlib nbagg
xaxis = '[Fe/H]'
yaxis = '[Si/Fe]'
o_res_1e10_low_Ia.plot_spectro( xaxis=xaxis, yaxis=yaxis, color='g', label='nb_1a_per_m = 1e-4', shape='-.')
o_res_1e10.plot_spectro( xaxis=xaxis, yaxis=yaxis, color='r', label='nb_1a_per_m = 1e-3', shape='--')
o_res_1e10_high_Ia.plot_spectro(xaxis=xaxis, yaxis=yaxis, color='b', label='nb_1a_per_m = 1e-2', shape='-')
```
The plot below shows the contribution of SNe Ia, AGB stars, and massive stars in the Fe content of the gas reservoir (one-zone version of the interstellar medium), using different values for *nb_1a_per_m*.
```
# Plot the mass of Fe present inside the gas reservoir as a function of time.
%matplotlib nbagg
specie = 'Fe'
# Contribution of SNe Ia.
o_res_1e10_low_Ia.plot_mass( specie=specie, color='g', label='SNe Ia, nb_1a_per_m = 1e-4', source='sn1a')
o_res_1e10.plot_mass( specie=specie, color='r', label='SNe Ia, nb_1a_per_m = 1e-3', source='sn1a')
o_res_1e10_high_Ia.plot_mass(specie=specie, color='b', label='SNe Ia, nb_1a_per_m = 1e-2', source='sn1a')
# Contribution of massive (winds+SNe) and AGB stars.
o_res_1e10.plot_mass(specie=specie, color='k', label='Massive stars', source='massive', shape='-')
o_res_1e10.plot_mass(specie=specie, color='k', label='AGB stars', source='agb', shape='--')
# You can drop the 'source' argument to plot the sum the contribution of all stars.
```
As shown above, SNe Ia appear after 50 Myr of evolution, regardless of the total number of SNe Ia. However, **their contribution to the chemical evolution will not be noticeable until they start to overcome the contribution of massive stars**, which happens around 200 Myr for the blue line. The plot below shows the evolution of [Fe/H] using different total numbers of SNe Ia.
```
# Plot the evolution of [Fe/H] as a function of time.
%matplotlib nbagg
yaxis = '[Fe/H]'
o_res_1e10_low_Ia.plot_spectro( yaxis=yaxis, color='g', label='nb_1a_per_m = 1e-4', shape='-.')
o_res_1e10.plot_spectro( yaxis=yaxis, color='r', label='nb_1a_per_m = 1e-3', shape='--')
o_res_1e10_high_Ia.plot_spectro(yaxis=yaxis, color='b', label='nb_1a_per_m = 1e-2', shape='-')
plt.xscale('log')
```
The green and red lines are so similar because CC SNe dominate the Fe ejection in those cases, as opposed to the blue-line case. Chemical evolution studies typically assume $\sim[1-2]\times10^{-3}$ SNe Ia per stellar mass formed.
## 3. Star formation history
The star formation history (SFH) sets how many stars are formed as a function of time. Modifying the SFH can change the chemical enrichment timescale.
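The total stellar mass formed over a simulation is the time integral of the SFH,

$$M_\star(T) = \int_0^T \mathrm{SFR}(t)\,\mathrm{d}t \;,$$

so two SFHs with very different shapes can still form the same total stellar mass, as long as their integrals agree.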
### 3.1. Setting the input SFH
Let's run three OMEGA simulations with decreasing, constant, and increasing SFHs.
```
# OMEGA can receive an input SFH array with the "sfh_array" parameter.
# sfh_array[ number of input times ][ 0 --> time in yr; 1 --> star formation rate in Msun/yr ]
# Time array [Gyr]
t = [0.0, 6.5e9, 13.0e9]
# Build the decreasing star formation history array [Msun/yr]
sfr_dec = [7.0, 4.0, 1.0]
sfh_array_dec = []
for i in range(len(t)):
sfh_array_dec.append([t[i], sfr_dec[i]])
# Build the increasing star formation history array [Msun/yr]
sfr_inc = [1.0, 4.0, 7.0]
sfh_array_inc = []
for i in range(len(t)):
sfh_array_inc.append([t[i], sfr_inc[i]])
# Run OMEGA simulations.
# Different star formation histories within the same initial gas reservoir.
o_cte = omega.omega(mgal=5e11, special_timesteps=30, cte_sfr=4.0)
o_dec = omega.omega(mgal=5e11, special_timesteps=30, sfh_array=sfh_array_dec)
o_inc = omega.omega(mgal=5e11, special_timesteps=30, sfh_array=sfh_array_inc)
```
Below, the SFHs are plotted. This plotting function can be used for any type of input SFH.
```
%matplotlib nbagg
o_cte.plot_star_formation_rate(color='k', shape='-.')
o_dec.plot_star_formation_rate(color='m', shape='--')
o_inc.plot_star_formation_rate(color='c', shape='-')
```
In this case, although the SFHs are very different, the same total stellar mass should be formed throughout the simulation. This can be verified by summing the mass locked into stars over all timesteps. However, as shown below, the total stellar masses are actually not the same.
```
# Calculate the cumulative stellar mass (integration of the SFH)
print ('Total stellar mass formed (not corrected for stellar mass loss)')
print (' Increasing SFH :', sum(o_inc.history.m_locked), 'Msun')
print (' Constant SFH :', sum(o_cte.history.m_locked), 'Msun')
print (' Decreasing SFH :', sum(o_dec.history.m_locked), 'Msun')
```
This discrepancy is a temporal resolution issue: the value of the SFH at each timestep is multiplied by the duration of that timestep, so a coarse time sampling distorts the integral. To solve this problem, you can use the *sfh_array_norm* parameter, which rescales the input SFH array, *sfh_array*, to generate the desired total stellar mass regardless of the chosen resolution.
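Conceptually, the normalization amounts to rescaling the input SFH so that its time integral equals the requested total stellar mass. A minimal numpy sketch of the idea (an illustration only, not OMEGA's internal implementation, which works on its own discretized timesteps):

```python
import numpy as np

def normalize_sfh(sfh_array, total_mass):
    """Rescale an SFH array so that its time integral equals total_mass.

    sfh_array: list of [time in yr, SFR in Msun/yr] pairs.
    Returns a new array with the same time sampling.
    """
    sfh = np.asarray(sfh_array, dtype=float)
    t, sfr = sfh[:, 0], sfh[:, 1]
    # Trapezoidal integral of the SFR: the total stellar mass it implies
    integral = np.sum(0.5 * (sfr[1:] + sfr[:-1]) * np.diff(t))
    return np.column_stack((t, sfr * total_mass / integral))

# The decreasing SFH defined earlier, renormalized to 5.2e10 Msun
sfh_norm = normalize_sfh([[0.0, 7.0], [6.5e9, 4.0], [13.0e9, 1.0]], 5.2e10)
mass = np.sum(0.5 * (sfh_norm[1:, 1] + sfh_norm[:-1, 1]) * np.diff(sfh_norm[:, 0]))
print(mass)  # ~5.2e10, independent of the time sampling
```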
```
# Re-run OMEGA simulations with the "sfh_array_norm" parameter.
o_cte = omega.omega(mgal=5e11, special_timesteps=30, cte_sfr=4.0, sfh_array_norm=5.2e10)
o_dec = omega.omega(mgal=5e11, special_timesteps=30, sfh_array=sfh_array_dec, sfh_array_norm=5.2e10)
o_inc = omega.omega(mgal=5e11, special_timesteps=30, sfh_array=sfh_array_inc, sfh_array_norm=5.2e10)
```
The resulting SFHs are slightly modified, but the total stellar masses are now the same, as shown below.
```
%matplotlib nbagg
o_cte.plot_star_formation_rate(color='k', shape='-.')
o_dec.plot_star_formation_rate(color='m', shape='--')
o_inc.plot_star_formation_rate(color='c', shape='-')
# Calculate the cumulative stellar mass (integration of the SFH)
print ('Total stellar mass formed (not corrected for stellar mass loss)')
print (' Increasing SFH :', sum(o_inc.history.m_locked), 'Msun')
print (' Constant SFH :', sum(o_cte.history.m_locked), 'Msun')
print (' Decreasing SFH :', sum(o_dec.history.m_locked), 'Msun')
```
### 3.2. Evolution of Fe in the gas reservoir
As shown in the plot below, more Fe is ejected by stars at early times when the SFH peaks at the beginning of the simulation (pink lines). At the end of the simulation (after 13 Gyr), the same final mass of Fe inside the galactic gas is predicted for all three SFHs. This is because, overall, the same total stellar mass is formed in all cases. There are, however, minor variations in the contribution of AGB stars. Can you explain why?
```
# Plot the mass of Fe present inside the gas reservoir as a function of time (you can try other elements).
%matplotlib nbagg
specie = 'Fe'
# Increasing SFH
o_inc.plot_mass(specie=specie, color='c', source='massive')
#o_inc.plot_mass(specie=specie, color='c', source='sn1a')
o_inc.plot_mass(specie=specie, color='c', source='agb')
# Constant SFH
o_cte.plot_mass(specie=specie, color='k', source='massive')
#o_cte.plot_mass(specie=specie, color='k', source='sn1a')
o_cte.plot_mass(specie=specie, color='k', source='agb')
# Decreasing SFH
o_dec.plot_mass(specie=specie, color='m', source='massive')
#o_dec.plot_mass(specie=specie, color='m', source='sn1a')
o_dec.plot_mass(specie=specie, color='m', source='agb')
# Add legend directly on the plot
plt.annotate('Decreasing SFH', color='m', xy=(0.6, 0.30), xycoords='axes fraction', fontsize=13)
plt.annotate('Constant SFH', color='k', xy=(0.6, 0.22), xycoords='axes fraction', fontsize=13)
plt.annotate('Increasing SFH', color='c', xy=(0.6, 0.14), xycoords='axes fraction', fontsize=13)
# Remove the default log scale of the x axis
plt.xscale('linear')
```
### 3.3. Evolution of chemical abundances
Because we used the same initial gas reservoir and the same total stellar mass, **different SFHs generate different enrichment paths** between the initial and final [Fe/H] values, which are the same in all cases. Within our setup, an increasing SFH makes the enrichment process slower compared to an SFH that peaks at early times (decreasing SFH).
```
# Plot the evolution of [Fe/H]
%matplotlib nbagg
o_inc.plot_spectro(color='c', shape='-', label='Increasing SFH')
o_cte.plot_spectro(color='k', shape='-.', label='Constant SFH')
o_dec.plot_spectro(color='m', shape='--', label='Decreasing SFH')
plt.ylim(-6,0)
```
However, as seen in the plot below, the relative metal abundances are still similar from one case to another, as those are linked to the composition of the ejecta of simple stellar populations (stellar yields).
```
# Plot the evolution of [Si/Fe].
%matplotlib nbagg
yaxis = '[Si/Fe]'
o_inc.plot_spectro(yaxis=yaxis, color='c', shape='-', label='Increasing SFH')
o_cte.plot_spectro(yaxis=yaxis, color='k', shape='-.', label='Constant SFH')
o_dec.plot_spectro(yaxis=yaxis, color='m', shape='--', label='Decreasing SFH')
```
As shown below, in the [X/Fe] vs. [Fe/H] space, **modifying the SFH shifts the chemical evolution predictions along the [Fe/H] axis, but only at low [Fe/H] (at early times)**, as opposed to modifying the mass of the gas reservoir, which generates shifts at all [Fe/H] values.
```
# Plot the predicted chemical evolution.
%matplotlib nbagg
xaxis = '[Fe/H]'
yaxis = '[Si/Fe]'
o_inc.plot_spectro(xaxis=xaxis, yaxis=yaxis, color='c', shape='-', label='Increasing SFH')
o_cte.plot_spectro(xaxis=xaxis, yaxis=yaxis, color='k', shape='-.', label='Constant SFH')
o_dec.plot_spectro(xaxis=xaxis, yaxis=yaxis, color='m', shape='--', label='Decreasing SFH')
```
---
_You are currently looking at **version 1.2** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._
---
# Assignment 3 - Evaluation
In this assignment you will train several models and evaluate how effectively they predict instances of fraud using data based on [this dataset from Kaggle](https://www.kaggle.com/dalpozz/creditcardfraud).
Each row in `fraud_data.csv` corresponds to a credit card transaction. Features include confidential variables `V1` through `V28` as well as `Amount` which is the amount of the transaction.
The target is stored in the `class` column, where a value of 1 corresponds to an instance of fraud and 0 corresponds to an instance of not fraud.
```
import numpy as np
import pandas as pd
```
### Question 1
Import the data from `fraud_data.csv`. What percentage of the observations in the dataset are instances of fraud?
*This function should return a float between 0 and 1.*
```
def answer_one():
df=pd.read_csv('fraud_data.csv')
df_fraud=df[df['Class']==1]
fraud=len(df_fraud)/len(df)
return fraud
answer_one()
# Use X_train, X_test, y_train, y_test for all of the following questions
from sklearn.model_selection import train_test_split
df = pd.read_csv('fraud_data.csv')
X = df.iloc[:,:-1]
y = df.iloc[:,-1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
```
### Question 2
Using `X_train`, `X_test`, `y_train`, and `y_test` (as defined above), train a dummy classifier that classifies everything as the majority class of the training data. What is the accuracy of this classifier? What is the recall?
*This function should return a tuple with two floats, i.e. `(accuracy score, recall score)`.*
```
def answer_two():
from sklearn.dummy import DummyClassifier
from sklearn.metrics import recall_score
dummy_majority = DummyClassifier(strategy = 'most_frequent').fit(X_train, y_train)
y_dummy_predictions = dummy_majority.predict(X_test)
score = dummy_majority.score(X_test, y_test)
rec =recall_score(y_test, y_dummy_predictions)
tup=tuple((score,rec))
return tup
answer_two()
```
### Question 3
Using X_train, X_test, y_train, y_test (as defined above), train an SVC classifier using the default parameters. What are the accuracy, recall, and precision of this classifier?
*This function should return a tuple with three floats, i.e. `(accuracy score, recall score, precision score)`.*
```
def answer_three():
from sklearn.metrics import recall_score, precision_score
from sklearn.svm import SVC
svm = SVC().fit(X_train, y_train)
y_predict = svm.predict(X_test)
accu = svm.score(X_test, y_test)
prec =precision_score(y_test, y_predict)
rec =recall_score(y_test, y_predict)
tup=tuple((accu,rec, prec))
return tup
answer_three()
```
### Question 4
Using the SVC classifier with parameters `{'C': 1e9, 'gamma': 1e-07}`, what is the confusion matrix when using a threshold of -220 on the decision function? Use `X_test` and `y_test`.
*This function should return a confusion matrix, a 2x2 numpy array with 4 integers.*
```
def answer_four():
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC
svm = SVC(C=1e9, gamma= 1e-07).fit(X_train, y_train)
y_scores = svm.decision_function(X_test) > -220
confusion = confusion_matrix(y_test, y_scores)
return confusion
answer_four()
```
### Question 5
Train a logistic regression classifier with default parameters using X_train and y_train.
For the logistic regression classifier, create a precision-recall curve and a ROC curve using y_test and the probability estimates for X_test (probability it is fraud).
Looking at the precision-recall curve, what is the recall when the precision is `0.75`?
Looking at the ROC curve, what is the true positive rate when the false positive rate is `0.16`?
*This function should return a tuple with two floats, i.e. `(recall, true positive rate)`.*
```
def answer_five():
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_curve
from sklearn.metrics import roc_curve, auc
#import matplotlib.pyplot as plt
lr = LogisticRegression().fit(X_train, y_train)
lr_predicted = lr.predict(X_test)
score=lr.score(X_test, y_test)
y_scores_lr = lr.decision_function(X_test)
y_proba_lr = lr.predict_proba(X_test)
"""
precision, recall, thresholds = precision_recall_curve(y_test, y_scores_lr)
fpr_lr, tpr_lr, _ = roc_curve(y_test, y_scores_lr)
roc_auc_lr = auc(fpr_lr, tpr_lr)
closest_zero = np.argmin(np.abs(thresholds))
closest_zero_p = precision[closest_zero]
closest_zero_r = recall[closest_zero]
plt.figure()
plt.xlim([0.0, 1.01])
plt.ylim([0.0, 1.01])
plt.plot(precision, recall, label='Precision-Recall Curve')
plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)
plt.xlabel('Precision', fontsize=16)
plt.ylabel('Recall', fontsize=16)
plt.axes().set_aspect('equal')
plt.show()
plt.figure()
plt.xlim([-0.01, 1.00])
plt.ylim([-0.01, 1.01])
plt.plot(fpr_lr, tpr_lr, lw=3, label='LogRegr ROC curve (area = {:0.2f})'.format(roc_auc_lr))
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.title('ROC curve (1-of-10 digits classifier)', fontsize=16)
plt.legend(loc='lower right', fontsize=13)
plt.plot([0, 1], [0, 1], color='navy', lw=3, linestyle='--')
plt.axes().set_aspect('equal')
plt.show()
"""
tup=tuple((0.85,0.82))
return tup
answer_five()
```
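The hard-coded values returned above were read off the plotted curves by eye. They can also be extracted programmatically from the curve arrays; a sketch using small made-up stand-in arrays (the real `precision`/`recall` and `fpr_lr`/`tpr_lr` arrays would come from `precision_recall_curve` and `roc_curve` applied to the classifier's scores):

```python
import numpy as np

def value_at(x_target, x, y):
    """Return y at the point where x is closest to x_target."""
    x, y = np.asarray(x), np.asarray(y)
    return y[np.argmin(np.abs(x - x_target))]

# Made-up monotone curves standing in for the real ones
precision = np.array([0.5, 0.65, 0.75, 0.9, 1.0])
recall    = np.array([1.0, 0.95, 0.85, 0.4, 0.0])
fpr       = np.array([0.0, 0.05, 0.16, 0.5, 1.0])
tpr       = np.array([0.0, 0.55, 0.82, 0.95, 1.0])

recall_at_p075 = value_at(0.75, precision, recall)  # -> 0.85
tpr_at_fpr016  = value_at(0.16, fpr, tpr)           # -> 0.82
```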
### Question 6
Perform a grid search over the parameters listed below for a Logistic Regression classifier, using recall for scoring and the default 3-fold cross validation.
`'penalty': ['l1', 'l2']`
`'C':[0.01, 0.1, 1, 10, 100]`
From `.cv_results_`, create an array of the mean test scores of each parameter combination. i.e.
| | `l1` | `l2` |
|:----: |---- |---- |
| **`0.01`** | ? | ? |
| **`0.1`** | ? | ? |
| **`1`** | ? | ? |
| **`10`** | ? | ? |
| **`100`** | ? | ? |
<br>
*This function should return a 5 by 2 numpy array with 10 floats.*
*Note: do not return a DataFrame, just the values denoted by '?' above in a numpy array. You might need to reshape your raw result to meet the format we are looking for.*
```
def answer_six():
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
lr = LogisticRegression()
grid_values = {'C': [0.01, 0.1, 1, 10, 100], 'penalty': ['l1', 'l2']}
grid_clf_acc = GridSearchCV(lr, param_grid = grid_values, cv=3,scoring='recall')
grid_clf_acc.fit(X_train, y_train)
y_decision_fn_scores_acc = grid_clf_acc.decision_function(X_test)
cv_result = grid_clf_acc.cv_results_
mean_test_score = cv_result['mean_test_score']
output = np.array(mean_test_score).reshape(5,2)
return output
answer_six()
# Use the following function to help visualize results from the grid search
def GridSearch_Heatmap(scores):
%matplotlib notebook
import seaborn as sns
import matplotlib.pyplot as plt
plt.figure()
sns.heatmap(scores.reshape(5,2), xticklabels=['l1','l2'], yticklabels=[0.01, 0.1, 1, 10, 100])
plt.yticks(rotation=0);
#GridSearch_Heatmap(answer_six())
```
```
from splinter import Browser
from bs4 import BeautifulSoup as bs
import time
import requests
import pandas as pd
def init_browser():
# @NOTE: Replace the path with your actual path to the chromedriver
executable_path = {"executable_path":'chromedriver.exe'}
return Browser("chrome", **executable_path, headless=False)
def scrape_info():
browser = init_browser()
    # Visit the NASA Mars news site
news_url = "https://mars.nasa.gov/news/"
browser.visit(news_url)
time.sleep(1)
# Scrape page into Soup
html = browser.html
soup = bs(html, "html.parser")
# Scrape news title
news_title = soup.find_all("div", class_="content_title")[1].text
news_p = soup.find("div", class_="article_teaser_body").text
print(news_title)
print(news_p)
scrape_info()
```
# Mars Image
```
def scrape_featured_image():
browser = init_browser()
jpl_url="https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars"
browser.visit(jpl_url)
html = browser.html
soup = bs(html, "html.parser")
featured_image_url = 'http://www.jpl.nasa.gov' + soup.find("article").find('a')['data-fancybox-href']
return featured_image_url
scrape_featured_image()
```
# Mars Weather
```
import requests
# browser = init_browser()
def scrape_mars_weather():
twitter_url="https://twitter.com/marswxreport?lang=en"
# browser.visit(twitter_url)
data = requests.get(twitter_url)
soup = bs(data.text, "html.parser")
tweets = soup.find_all('div', class_="js-tweet-text-container")
mars_weather = "No weather tweet found"  # fallback in case no matching tweet exists
for tweet in tweets:
tweet.find('a', class_="twitter-timeline-link u-hidden").decompose()
if "sol " in tweet.text:
mars_weather = tweet.text.strip()
break
print(mars_weather)
scrape_mars_weather()
```
# Mars Facts
```
import pandas as pd
def scrape_mars_facts():
fact_url="https://space-facts.com/mars/"
#Use Pandas to scrape the table containing facts about the planet
fact_table = pd.read_html(fact_url)
fact_df = fact_table[0]
fact_df.columns=['Mars','Facts']
fact_df.set_index('Mars', inplace=True)
html_table = fact_df.to_html()
html_table = html_table.replace('\n', '')
return html_table
scrape_mars_facts()
```
# Mars Hemispheres
```
def scrape_mars_hemispheres():
#visit hemisphere website
h_url = "https://astrogeology.usgs.gov/search/results?q=hemisphere+enhanced&k1=target&v1=Mars"
response = requests.get(h_url)
#Parse the url with BeautifulSoup
soup = bs(response.text, 'html.parser')
#Retrieve all items that contain Mars hemisphere information
items = soup.find_all('div', class_='item')
#Create empty list for hemisphere urls
h_image_urls =[]
hemisphere_main_url = 'https://astrogeology.usgs.gov'
#browser = init_browser()
for i in items:
title = i.find('h3').text
partial_img_url = i.find('a', class_='itemLink product-item')['href']
#browser.visit(hemisphere_main_url + partial_img_url)
res = requests.get(hemisphere_main_url + partial_img_url)
partial_img_html = res.text #browser.html
soup = bs(partial_img_html, 'html.parser')
img_url = hemisphere_main_url + soup.find('img', class_='wide-image')['src']
h_image_urls.append({'title':title,'img_url':img_url})
return h_image_urls
scrape_mars_hemispheres()
```
# Experiments for ER Graph
## Imports
```
%load_ext autoreload
%autoreload 2
import os
import sys
from collections import OrderedDict
import logging
import math
from matplotlib import pyplot as plt
import networkx as nx
import numpy as np
import torch
from torchdiffeq import odeint, odeint_adjoint
sys.path.append('../')
# Baseline imports
from gd_controller import AdjointGD
from dynamics_driver import ForwardKuramotoDynamics, BackwardKuramotoDynamics
# Nodec imports
from neural_net import EluTimeControl, TrainingAlgorithm
# Various Utilities
from utilities import evaluate, calculate_critical_coupling_constant, comparison_plot, state_plot
from nnc.helpers.torch_utils.oscillators import order_parameter_cos
logging.getLogger().setLevel(logging.CRITICAL) # set to info to look at loss values etc.
```
## Load graph parameters
Basic setup for calculations, graph, number of nodes, etc.
```
dtype = torch.float32
device = 'cpu'
graph_type = 'erdos_renyi'
adjacency_matrix = torch.load('../../data/'+graph_type+'_adjacency.pt')
parameters = torch.load('../../data/parameters.pt')
# driver vector is a column vector with 1 value for driver nodes
# and 0 for non drivers.
result_folder = '../../results/' + graph_type + os.path.sep
os.makedirs(result_folder, exist_ok=True)
```
## Load dynamics parameters
Load natural frequencies and initial states, which are common for all graphs, and calculate the coupling constant, which is different per graph. We use a coupling constant value that is $10\%$ of the critical coupling constant value.
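For reference, the controlled dynamics integrated here are of the network-Kuramoto form; the exact coupling normalization and placement of the control term used by `ForwardKuramotoDynamics` may differ in detail, so read the equation below as a sketch:

$$\dot{\theta}_i = \omega_i + K \sum_{j} A_{ij} \sin(\theta_j - \theta_i) + b_i\, u(t) \;,$$

where $\omega_i$ are the natural frequencies, $K$ is the coupling constant, $A$ is the adjacency matrix, and the control $u(t)$ acts only on driver nodes ($b_i = 1$ for drivers, $0$ otherwise). Synchronization is measured with the order parameter

$$r(t) = \frac{1}{N} \left| \sum_{j=1}^{N} e^{i \theta_j(t)} \right| \;,$$

which approaches 1 for full phase synchronization.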
```
total_time = parameters['total_time']
total_time = 5
natural_frequencies = parameters['natural_frequencies']
critical_coupling_constant = calculate_critical_coupling_constant(adjacency_matrix, natural_frequencies)
coupling_constant = 0.1*critical_coupling_constant
theta_0 = parameters['theta_0']
```
## NODEC
We now train NODEC with a shallow neural network. We initialize the parameters in a deterministic manner, and use stochastic gradient descent to train it. The learning rate, number of epochs and neural architecture may change per graph. We use different fractions of driver nodes.
```
fractions = np.linspace(0,1,10)
order_parameter_mean = []
order_parameter_std = []
samples = 1000
for p in fractions:
sample_arr = []
for i in range(samples):
print(p,i)
driver_nodes = int(p*adjacency_matrix.shape[0])
driver_vector = torch.zeros([adjacency_matrix.shape[0],1])
idx = torch.randperm(len(driver_vector))[:driver_nodes]
driver_vector[idx] = 1
forward_dynamics = ForwardKuramotoDynamics(adjacency_matrix,
driver_vector,
coupling_constant,
natural_frequencies
)
backward_dynamics = BackwardKuramotoDynamics(adjacency_matrix,
driver_vector,
coupling_constant,
natural_frequencies
)
neural_net = EluTimeControl([2])
for parameter in neural_net.parameters():
parameter.data = torch.ones_like(parameter.data)/1000 # deterministic init!
train_algo = TrainingAlgorithm(neural_net, forward_dynamics)
best_model = train_algo.train(theta_0, total_time, epochs=3, lr=0.3)
control_trajectory, state_trajectory =\
evaluate(forward_dynamics, theta_0, best_model, total_time, 100)
nn_control = torch.cat(control_trajectory).squeeze().cpu().detach().numpy()
nn_states = torch.cat(state_trajectory).cpu().detach().numpy()
nn_e = (nn_control**2).cumsum(-1)
nn_r = order_parameter_cos(torch.tensor(nn_states)).cpu().numpy()
sample_arr.append(nn_r[-1])
order_parameter_mean.append(np.mean(sample_arr))
order_parameter_std.append(np.std(sample_arr,ddof=1))
order_parameter_mean = np.array(order_parameter_mean)
order_parameter_std = np.array(order_parameter_std)
plt.figure()
plt.errorbar(fractions,order_parameter_mean,yerr=order_parameter_std/np.sqrt(samples),fmt="o")
plt.xlabel(r"fraction of controlled nodes")
plt.ylabel(r"$r(T)$")
plt.tight_layout()
plt.show()
np.savetxt("ER_drivers_K01.csv",np.c_[order_parameter_mean,order_parameter_std],header="order parameter mean\t order parameter std")
```
Parallel Single-channel CSC
===========================
This example compares the use of [parcbpdn.ParConvBPDN](http://sporco.rtfd.org/en/latest/modules/sporco.admm.parcbpdn.html#sporco.admm.parcbpdn.ParConvBPDN) with [admm.cbpdn.ConvBPDN](http://sporco.rtfd.org/en/latest/modules/sporco.admm.cbpdn.html#sporco.admm.cbpdn.ConvBPDN) solving a convolutional sparse coding problem with a greyscale signal
$$\mathrm{argmin}_\mathbf{x} \; \frac{1}{2} \left\| \sum_m \mathbf{d}_m * \mathbf{x}_{m} - \mathbf{s} \right\|_2^2 + \lambda \sum_m \| \mathbf{x}_{m} \|_1 \;,$$
where $\mathbf{d}_{m}$ is the $m^{\text{th}}$ dictionary filter, $\mathbf{x}_{m}$ is the coefficient map corresponding to the $m^{\text{th}}$ dictionary filter, and $\mathbf{s}$ is the input image.
```
from __future__ import print_function
from builtins import input
import pyfftw # See https://github.com/pyFFTW/pyFFTW/issues/40
import numpy as np
from sporco import util
from sporco import signal
from sporco import plot
plot.config_notebook_plotting()
import sporco.metric as sm
from sporco.admm import cbpdn
from sporco.admm import parcbpdn
```
Load example image.
```
img = util.ExampleImages().image('kodim23.png', zoom=1.0, scaled=True,
gray=True, idxexp=np.s_[160:416, 60:316])
```
Highpass filter example image.
```
npd = 16
fltlmbd = 10
sl, sh = signal.tikhonov_filter(img, fltlmbd, npd)
```
Load dictionary and display it.
```
D = util.convdicts()['G:12x12x216']
plot.imview(util.tiledict(D), fgsz=(7, 7))
lmbda = 5e-2
```
The `RelStopTol` option was chosen so that the two different methods stop with similar functional values.
Initialise and run standard serial CSC solver using ADMM with an equality constraint [[49]](http://sporco.rtfd.org/en/latest/zreferences.html#id51).
```
opt = cbpdn.ConvBPDN.Options({'Verbose': True, 'MaxMainIter': 200,
'RelStopTol': 5e-3, 'AuxVarObj': False,
'AutoRho': {'Enabled': False}})
b = cbpdn.ConvBPDN(D, sh, lmbda, opt=opt, dimK=0)
X = b.solve()
```
Initialise and run parallel CSC solver using ADMM dictionary partition method [[42]](http://sporco.rtfd.org/en/latest/zreferences.html#id43).
```
opt_par = parcbpdn.ParConvBPDN.Options({'Verbose': True, 'MaxMainIter': 200,
'RelStopTol': 1e-2, 'AuxVarObj': False, 'AutoRho':
{'Enabled': False}, 'alpha': 2.5})
b_par = parcbpdn.ParConvBPDN(D, sh, lmbda, opt=opt_par, dimK=0)
X_par = b_par.solve()
```
Report runtimes of different methods of solving the same problem.
```
print("ConvBPDN solve time: %.2fs" % b.timer.elapsed('solve_wo_rsdl'))
print("ParConvBPDN solve time: %.2fs" % b_par.timer.elapsed('solve_wo_rsdl'))
print("ParConvBPDN was %.2f times faster than ConvBPDN\n" %
(b.timer.elapsed('solve_wo_rsdl')/b_par.timer.elapsed('solve_wo_rsdl')))
```
Reconstruct images from sparse representations.
```
shr = b.reconstruct().squeeze()
imgr = sl + shr
shr_par = b_par.reconstruct().squeeze()
imgr_par = sl + shr_par
```
Report performances of different methods of solving the same problem.
```
print("Serial reconstruction PSNR: %.2fdB" % sm.psnr(img, imgr))
print("Parallel reconstruction PSNR: %.2fdB\n" % sm.psnr(img, imgr_par))
```
Display original and reconstructed images.
```
fig = plot.figure(figsize=(21, 7))
plot.subplot(1, 3, 1)
plot.imview(img, title='Original', fig=fig)
plot.subplot(1, 3, 2)
plot.imview(imgr, title=('Serial Reconstruction PSNR: %5.2f dB' %
sm.psnr(img, imgr)), fig=fig)
plot.subplot(1, 3, 3)
plot.imview(imgr_par, title=('Parallel Reconstruction PSNR: %5.2f dB' %
sm.psnr(img, imgr_par)), fig=fig)
fig.show()
```
Display low pass component and sum of absolute values of coefficient maps of highpass component.
```
fig = plot.figure(figsize=(21, 7))
plot.subplot(1, 3, 1)
plot.imview(sl, title='Lowpass component', fig=fig)
plot.subplot(1, 3, 2)
plot.imview(np.sum(abs(X), axis=b.cri.axisM).squeeze(),
cmap=plot.cm.Blues, title='Serial Sparse Representation',
fig=fig)
plot.subplot(1, 3, 3)
plot.imview(np.sum(abs(X_par), axis=b.cri.axisM).squeeze(),
cmap=plot.cm.Blues, title='Parallel Sparse Representation',
fig=fig)
fig.show()
```
# Let's compare 4 different strategies to solve sentiment analysis:
1. **Custom model using open source package**. Build a custom model using scikit-learn and TF-IDF features on n-grams. This method is known to work well for English text.
2. **Integrate** a pre-built API. The "sentiment HQ" API provided by indico has been shown to achieve state-of-the-art accuracy, using a recurrent neural network.
3. **Word-level features**. A custom model, built from word-level text features from indico's "text features" API.
4. **RNN features**. A custom model built with transfer learning, using the recurrent features from indico's sentiment HQ model to train a new custom model.
Note: this notebook and the enclosed code snippets accompany the KDnuggets post:
### Semi-supervised feature transfer: the big practical benefit of deep learning today?
<img src="header.jpg">
### Download the data
1. Download the "Large Movie Review Dataset" from http://ai.stanford.edu/~amaas/data/sentiment/.
2. Decompress it.
3. Put it into some directory path that you define below.
Citation: Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. (2011). Learning Word Vectors for Sentiment Analysis. The 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011).
### User parameters
```
seed = 3 # for reproducibility across experiments, just pick something
train_num = 100 # number of training examples to use
test_num = 100 # number of examples to use for testing
base_model_name = "sentiment_train%s_test%s" % (train_num, test_num)
lab2bin = {'pos': 1, 'neg': 0} # label -> binary class
pos_path = "~DATASETS/aclImdb/train/pos/" # filepath to the positive examples
neg_path = "~DATASETS/aclImdb/train/neg/" # file path to the negative examples
output_path = "OUTPUT" # path where output file should go
batchsize = 25 # send this many requests at once
max_num_examples = 25000.0 # for making subsets below
```
### Setup and imports
Install modules as needed (for example: `pip install indicoio`)
```
import os, io, glob, random, time
# from itertools import islice, chain, izip_longest
import numpy as np
import pandas as pd
from tqdm import tqdm
import pprint
pp = pprint.PrettyPrinter(indent=4)
import indicoio
from indicoio.custom import Collection
from indicoio.custom import collections as check_status
import sklearn
from sklearn import metrics
from sklearn import linear_model
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt # for plotting results
%matplotlib inline
import seaborn # just for the colors
```
### Define your indico API key
If you don't have a (free) API key, you can [get one here](https://indico.io/pay-per-call). Your first 10,000 calls per month are free.
```
indicoio.config.api_key = "" # Add your API key here
```
### Convenience function for making batches of examples
```
def batcher(seq, stride = 4):
"""
Generator strides across the input sequence,
combining the elements between each stride.
"""
for pos in range(0, len(seq), stride):
yield seq[pos : pos + stride]
# for making subsets below
train_subset = (train_num / 25000.0)
test_subset = (test_num / 25000.0)
random.seed(seed)
np.random.seed(seed)
```
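For example, `batcher` strides across a sequence in fixed-size chunks, with the final chunk shorter when the length is not a multiple of the stride. A self-contained restatement with a usage example:

```python
def batcher(seq, stride=4):
    """Yield consecutive chunks of `seq`, each `stride` elements long
    (the last chunk may be shorter)."""
    for pos in range(0, len(seq), stride):
        yield seq[pos:pos + stride]

chunks = list(batcher(list(range(10)), stride=4))
print(chunks)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```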
### Check that the requested paths exist
```
# check that paths exist
for p in [pos_path, neg_path]:
abs_path = os.path.abspath(p)
if not os.path.exists(abs_path):
os.makedirs(abs_path)
print(abs_path)
for p in [output_path]:
abs_path = os.path.abspath(p)
if not os.path.exists(abs_path): # and make output path if necessary
os.makedirs(abs_path)
print(abs_path)
```
### Query indico API to make sure everything is plumbed up correctly
```
# pre_status = check_status()
# pp.pprint(pre_status)
```
### Read data into a list of dictionary objects
where each dictionary object will be a single example. This makes it easy to manipulate later using dataframes, for cross-validation, visualization, etc.
### This dataset has pre-defined train/test splits
so rather than sampling our own, we'll use the existing splits to enable fair comparison with other published results.
```
train_data = [] # these lists will contain a bunch of little dictionaries, one for each example
test_data = []
# Positive examples (train)
examples = glob.glob(os.path.join(pos_path, "*")) # find all the positive examples, and read them
i = 0
for ex in examples:
d = {}
with open(ex, 'r') as f:
d['label'] = 'pos' # label as "pos"
t = f.read().lower() # these files are already ascii text, so just lowercase them
d['text'] = t
d['pred_label'] = None # placeholder for predicted label
d['prob_pos'] = None # placeholder for predicted probability of a positive label
train_data.append(d) # add example to the list of training data
i +=1
print("Read %d positive training examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Negative examples (train)
examples = glob.glob(os.path.join(neg_path, "*")) # find all the negative examples and read them
i = 0
for ex in examples:
d = {}
with open(ex, 'r') as f:
d['label'] = 'neg'
t = f.read().lower()
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
train_data.append(d)
i +=1
print("Read %d negative training examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Positive examples (test)
examples = glob.glob(os.path.join(pos_path, "*"))
i = 0
for ex in examples:
d = {}
with open(ex, 'r') as f:
d['label'] = 'pos'
t = f.read().lower() # these files are already ascii text
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
test_data.append(d)
i +=1
print("Read %d positive test examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Negative examples (test)
examples = glob.glob(os.path.join(neg_path, "*"))
i = 0
for ex in examples:
d = {}
with open(ex, 'r') as f:
d['label'] = 'neg'
t = f.read().lower() # these files are already ascii text
d['text'] = t
d['pred_label'] = None
d['prob_pos'] = None
test_data.append(d)
i +=1
print("Read %d negative test examples, of %d available (%.2f%%)" % (i, len(examples), (100.0*i)/len(examples)))
# Populate a dataframe, shuffle, and subset as required
df_train = pd.DataFrame(train_data)
df_train = df_train.sample(frac = train_subset) # shuffle and subsample
print("After resampling, down to %d training records" % len(df_train))
df_test = pd.DataFrame(test_data)
df_test = df_test.sample(frac = test_subset) # shuffle and subsample
print("After resampling, down to %d test records" % len(df_test))
```
### Quick sanity check on the data, is everything as expected?
```
df_train.head(10) # sanity check
df_train.tail(10)
df_test.tail(10)
```
# Strategy A: scikit-learn
Build a custom model from scratch using sklearn (ngrams -> TFIDF -> LR)
### Define the vectorizer, logistic regression model, and overall pipeline
```
vectorizer = sklearn.feature_extraction.text.TfidfVectorizer(
max_features = int(1e5), # max vocab size (pretty large)
max_df = 0.50,
sublinear_tf = True,
use_idf = True,
encoding = 'ascii',
decode_error = 'replace',
analyzer = 'word',
ngram_range = (1,3),
stop_words = 'english',
lowercase = True,
norm = 'l2',
smooth_idf = True,
)
lr = linear_model.SGDClassifier(
alpha = 1e-5,
average = 10,
class_weight = 'balanced',
epsilon = 0.15,
eta0 = 0.0,
fit_intercept = True,
l1_ratio = 0.15,
learning_rate = 'optimal',
    loss = 'log', # renamed to 'log_loss' in scikit-learn >= 1.1
    n_iter = 5, # renamed to 'max_iter' in scikit-learn >= 0.21
n_jobs = -1,
penalty = 'l2',
power_t = 0.5,
random_state = seed,
shuffle = True,
verbose = 0,
warm_start = False,
)
classifier = Pipeline([('vectorizer', vectorizer),
('logistic_regression', lr)
])
```
### Fit the classifier
```
_ = classifier.fit(df_train['text'], df_train['label'])
```
### Get predictions
```
pred_sk = classifier.predict(df_test['text'])
y_true_sk = [lab2bin[ex] for ex in df_test['label']]
proba_sk = classifier.predict_proba(df_test['text']) # also get probas
```
### Compute and plot ROC and AUC
```
cname = base_model_name + "_sklearn"
plt.figure(figsize=(8,8))
probas_sk = []
y_pred_labels_sk = []
y_pred_sk = []
# get predictions
for i, pred in enumerate(pred_sk):
proba_pos = proba_sk[i][1]
probas_sk.append(proba_pos)
if float(proba_pos) >= 0.50:
pred_label = "pos"
elif float(proba_pos) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i) # if this happens, need to fix something
y_pred_labels_sk.append(pred_label)
y_pred_sk.append(lab2bin[pred_label])
# compute ROC
fpr, tpr, thresholds = metrics.roc_curve(y_true_sk, probas_sk)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title("%d training examples" % len(df_train))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true_sk, y_pred_sk)
print("Accuracy: %.4f" % (acc))
```
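Beyond accuracy and AUC, per-class precision and recall are often worth a quick look, especially with balanced class weights. A small self-contained sketch using `sklearn.metrics` — the label lists here are illustrative stand-ins, not the notebook's actual `y_true_sk` / `y_pred_sk`:

```python
from sklearn import metrics

# Illustrative labels only - in the notebook these would be y_true_sk / y_pred_sk
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(metrics.classification_report(y_true, y_pred, target_names=['neg', 'pos']))
prec, rec, f1, _ = metrics.precision_recall_fscore_support(y_true, y_pred, average='binary')
print("precision=%.3f recall=%.3f f1=%.3f" % (prec, rec, f1))
```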
# Put examples data into batches, for APIs
### Prepare batches of training examples
```
examples = [list(ex) for ex in zip(df_train['text'], df_train['label'])]
batches = [b for b in batcher(examples, batchsize)] # stores in memory, but the texts are small so no problem
```
### Prepare batches of test examples
```
test_examples = [list(ex) for ex in zip(df_test['text'], df_test['label'])] # test data
test_batches = [b for b in batcher(test_examples, batchsize)]
```
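The `batcher` helper used above is defined earlier in the notebook and not shown in this excerpt; a minimal sketch of what such a chunking generator might look like:

```python
def batcher(items, batchsize):
    """Yield successive fixed-size chunks from a list (the last chunk may be smaller)."""
    for i in range(0, len(items), batchsize):
        yield items[i:i + batchsize]

# Example: 10 items in batches of 4 -> chunks of sizes 4, 4, 2
batches = list(batcher(list(range(10)), 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```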
# Strategy B. Pre-trained sentiment HQ
```
# get predictions from sentiment-HQ API
cname = base_model_name + "_hq"
predictions_hq = []
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = indicoio.sentiment_hq(texts)
for i, result in enumerate(results):
r = {}
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result
predictions_hq.append(r)
cname = base_model_name + "_hq"
plt.figure(figsize=(8,8))
# y_true = [df_test['label']]
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions_hq):
y_true.append(lab2bin[pred['label']])
proba = pred['proba']
probas.append(proba)
if float(proba) >= 0.50:
pl = 'pos'
elif float(proba) < 0.50:
pl= 'neg'
else:
print("Error. Check proba value and y_true logic")
pred_label = pl # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC plot model: '%s'" % cname)
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
# plt.savefig(os.path.abspath(cname + "_hq_ROC" + ".png"))
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
```
# Strategy C. Custom model using general text features.
### Create an indico custom collection using general (word-level) text features, and upload data
```
cname = base_model_name
print("This model will be cached as an indico custom collection using the name: '%s'" % cname)
collection = Collection(cname)
try:
collection.clear() # delete any previous data in this collection
collection.info()
collection = Collection(cname)
except:
print(" Error, probably because a collection with the given name didn't exist. Continuing...")
print(" Submitting %d training examples in %d batches..." % (len(examples), len(batches)))
for batch in tqdm(batches):
try:
collection.add_data(batch)
except Exception as e:
print("Exception: '%s' for batch:" % e)
pp.pprint(batch)
print(" training model: '%s'" % cname)
collection.train()
collection.wait() # blocks until the model is trained
# get predictions from the trained API model
predictions = []
cname = base_model_name
collection = Collection(cname)
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = collection.predict(texts)
for i, result in enumerate(results):
r = {}
r['indico_result'] = result
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result['pos']
predictions.append(r)
pp.pprint(predictions[0]) # sanity check
```
### Draw ROC plot and compute metrics for the custom collection
```
plt.figure(figsize=(8,8))
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_cc_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
```
# Strategy D. Custom model using sentiment features from the pretrained deep neural network.
```
cname = base_model_name + "_domain"
print("This model will be cached as an indico custom collection using the name: '%s'" % cname)
collection = Collection(cname, domain = "sentiment")
try:
collection.clear() # delete any previous data in this collection
collection.info()
collection = Collection(cname, domain = "sentiment")
except:
print(" Error, probably because a collection with the given name didn't exist. Continuing...")
print(" Submitting %d training examples in %d batches..." % (len(examples), len(batches)))
for batch in tqdm(batches):
try:
collection.add_data(batch, domain = "sentiment")
except Exception as e:
print("Exception: '%s' for batch:" % e)
pp.pprint(batch)
print(" training model: '%s'" % cname)
collection.train()
collection.wait()
```
### Get predictions for custom collection with sentiment domain text features
```
# get predictions from trained API
predictions_domain = []
cname = base_model_name + "_domain"
collection = Collection(cname, domain = "sentiment")
for batch in tqdm(test_batches):
labels = [x[1] for x in batch]
texts = [x[0] for x in batch]
results = collection.predict(texts, domain = "sentiment")
# batchsize = len(batch)
for i, result in enumerate(results):
r = {}
r['indico_result'] = result
r['label'] = labels[i]
r['text'] = texts[i]
r['proba'] = result['pos']
predictions_domain.append(r)
```
### Compute metrics and plot
```
cname = base_model_name + "_domain"
plt.figure(figsize=(8,8))
# y_true = [df_test['label']]
probas = []
y_true = []
y_pred_labels = []
y_pred = []
for i, pred in enumerate(predictions_domain):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 1, label = "ROC area = %0.3f" % (roc_auc))
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC plot model: '%s'" % cname)
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
# plt.savefig(os.path.abspath(cname + "_cc_domain_ROC" + ".png"))
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_ROC" + ".png")))
plt.show()
acc = metrics.accuracy_score(y_true, y_pred)
print("Accuracy: %.4f" % (acc))
```
# Sanity check on results for all 4 strategies
Compare the first prediction from each strategy to make sure everything is as expected.
```
print("Strategy A. Custom sklearn model using n-grams, TFIDF, LR:")
print(y_true_sk[0])
print(pred_sk[0])
print(proba_sk[0])
print("")
print("Strategy B. Sentiment HQ:")
pp.pprint(predictions_hq[0])
print("Strategy C. Custom collection using general text features:")
pp.pprint(predictions[0])
print("")
print("Strategy D. Custom collection using sentiment features:")
pp.pprint(predictions_domain[0])
print("")
```
# Compute overall metrics and plot
```
plt.figure(figsize=(10,10))
cname = base_model_name
# compute and draw curve for sklearn LR built from scratch
probas_sk = []
y_pred_labels_sk = []
y_pred_sk = []
for i, pred in enumerate(pred_sk):
proba_pos = proba_sk[i][1]
probas_sk.append(proba_pos)
if float(proba_pos) >= 0.50:
pred_label = "pos"
elif float(proba_pos) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i)
y_pred_labels_sk.append(pred_label)
y_pred_sk.append(lab2bin[pred_label])
fpr_sk, tpr_sk, thresholds_sk = metrics.roc_curve(y_true_sk, probas_sk)
roc_auc_sk = metrics.auc(fpr_sk, tpr_sk)
plt.plot(fpr_sk, tpr_sk, lw = 2, color = "#a5acaf", label = "A. Custom sklearn ngram LR model; area = %0.3f" % roc_auc_sk)
# compute and draw curve for sentimentHQ
probas_s = []
y_true_s = []
y_pred_labels_s = []
y_pred_s = []
for i, pred in enumerate(predictions_hq):
y_true_s.append(lab2bin[pred['label']])
probas_s.append(pred['proba'])
if float(pred['proba']) >= 0.50:
pred_label = "pos"
elif float(pred['proba']) < 0.50:
pred_label = "neg"
else:
print("ERROR on example %d" % i)
y_pred_labels_s.append(pred_label)
y_pred_s.append(lab2bin[pred_label])
fpr_s, tpr_s, thresholds_s = metrics.roc_curve(y_true_s, probas_s)
roc_auc_s = metrics.auc(fpr_s, tpr_s)
plt.plot(fpr_s, tpr_s, lw = 2, color = "#b05ecc", label = "B. Sentiment HQ model; area = %0.3f" % roc_auc_s)
# Compute and draw curve for the custom collection using general text features
probas = []
y_true = []
y_pred_labels = []
y_pred = []
lab2bin = {'pos': 1,
'neg': 0}
for i, pred in enumerate(predictions):
y_true.append(lab2bin[pred['label']])
probas.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x])) # pick the most likely class by predicted proba
y_pred_labels.append(pred_label)
y_pred.append(lab2bin[pred_label])
fpr, tpr, thresholds = metrics.roc_curve(y_true, probas)
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw = 2, color = "#ffbb3b", label = "C. Custom IMDB model using general text features; area = %0.3f" % (roc_auc))
# now compute and draw curve for the CC using sentiment text features
probas_d = []
y_true_d = []
y_pred_labels_d = []
y_pred_d = []
for i, pred in enumerate(predictions_domain):
y_true_d.append(lab2bin[pred['label']])
probas_d.append(pred['indico_result']['pos'])
pred_label = max(pred['indico_result'].keys(), key = (lambda x: pred['indico_result'][x]))
y_pred_labels_d.append(pred_label)
y_pred_d.append(lab2bin[pred_label])
fpr_d, tpr_d, thresholds_d = metrics.roc_curve(y_true_d, probas_d)
roc_auc_d = metrics.auc(fpr_d, tpr_d)
plt.plot(fpr_d, tpr_d, lw = 2, color = "#43b9af", label = "D. Custom IMDB model using sentiment text features; area = %0.3f" % roc_auc_d)
# Add other stuff to figure
plt.plot([0, 1], [0, 1], '--', color=(0.5, 0.5, 0.5), label='random')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title("ROC: %d training examples" % len(examples))
plt.title("%d training examples" % len(examples))
plt.legend(loc="lower right")
plt.savefig(os.path.abspath(os.path.join(output_path, cname + "_comparison_ROC" + ".png")), dpi = 300)
plt.show()
```
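The four curves above can also be summarized as a small ranked table. A sketch using hypothetical AUC values for illustration — in the notebook these would come from `roc_auc_sk`, `roc_auc_s`, `roc_auc`, and `roc_auc_d` computed in the cell above:

```python
import pandas as pd

# Hypothetical AUC values, for illustration only
aucs = {
    'A. sklearn ngram LR': 0.95,
    'B. Sentiment HQ': 0.97,
    'C. CC general features': 0.90,
    'D. CC sentiment features': 0.96,
}
# Rank strategies from best to worst AUC
summary = pd.DataFrame(sorted(aucs.items(), key=lambda kv: kv[1], reverse=True),
                       columns=['strategy', 'roc_auc'])
print(summary)
```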
## Accuracy metrics
```
acc_sk = metrics.accuracy_score(y_true_sk, y_pred_sk)
print("A. Sklearn model from scratch (sklearn) : %.4f" % (acc_sk))
acc_s = metrics.accuracy_score(y_true_s, y_pred_s)
print("B. Sentiment HQ : %.4f" % (acc_s))
acc = metrics.accuracy_score(y_true, y_pred)
print("C. Custom model using general text features : %.4f" % (acc))
acc_d = metrics.accuracy_score(y_true_d, y_pred_d)
print("D. Custom model using sentiment text features : %.4f" % (acc_d))
# print("Using (%d, %d, %d, %d) examples" % (len(y_pred), len(y_pred_d), len(y_pred_s), len(y_pred_sk)))
```
```
#Python Basics
#Dictionaries in Python
#Keys and Values
#A dictionary is defined as {"key1": value1, "key2": value2, "key3": value3}
#Examples
dic1={"First Name":"Behdad", "Surname": "Jam", "Age": 35, "Records": [11.32, 14.34, 13.003]}
print(dic1)
History={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
print(History)
#We can add new key-value pairs to a dictionary as follows
dic1={"First Name":"Behdad", "Surname": "Jam", "Age": 35, "Records": [11.32, 14.34, 13.003]}
dic1["key1"]="John"
dic1["John's Age"]=37
dic1["Birthday"]="23rd May"
print(dic1)
History={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
History["bands"]=1988
History["bandee"]=1976
History["bandbb"]=1944
print(History)
#We can delete entries from a dictionary with del as follows
dictionary1= {'First Name': 'Behdad', 'Surname': 'Jam', 'Age': 35, "John's Age": 37, 'Birthday': '23rd May'}
print(dictionary1)
del(dictionary1['Birthday'])
print(dictionary1)
del(dictionary1['Age'])
print(dictionary1)
del(dictionary1['Surname'])
print(dictionary1)
del(dictionary1['First Name'])
print(dictionary1)
His= {'band1': 1943, 'bandx': 1967, 'bandy': 1984, 'band4': 1933}
print(His)
del(His['band1'])
print(His)
del(His['band4'])
print(His)
del(His['bandx'])
print(His)
#We can check whether a key is in a dictionary using the "in" operator as follows
History={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
d1= {'First Name': 'Behdad', 'Surname': 'Jam', 'Age': 35, "John's Age": 37, 'Birthday': '23rd May'}
d2={'bandx': 1967, 'bandy': 1984, 'band4': 1933}
d3={'bandy': 1984}
g="band1" in History
print("band1 is in History: ", bool(g))
g="bandx" in History
print("bandx is in History: ", bool(g))
g="bandy" in History
print("bandy is in History: ", bool(g))
g="band1" in d2
print("band1 is in d2: ", bool(g))
g="band1" in d3
print("band1 is in d3: ", bool(g))
g="bandx" in d3
print("bandx is in d3: ", bool(g))
g="bandy" in d2
print("bandy is in d2: ", bool(g))
g="band1" in d2
print("band1 is in d2: ", g)
g="band1" in d3
print("band1 is in d3: ", g)
g="bandx" in d3
print("bandx is in d3: ", g)
g="bandy" in d2
print("bandy is in d2: ", g)
#You can list all keys in a dictionary using the keys() method as follows:
History={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
d1= {'First Name': 'Behdad', 'Surname': 'Jam', 'Age': 35, "John's Age": 37, 'Birthday': '23rd May'}
d2={'bandx': 1967, 'bandy': 1984, 'band4': 1933}
d3={'bandy': 1984}
print (History.keys())
print(d1.keys())
print(d2.keys())
print(d3.keys())
#You can list all values in a dictionary using the values() method as follows:
w1={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
w2= {'First Name': 'Behdad', 'Surname': 'Jam', 'Age': 35, "John's Age": 37, 'Birthday': '23rd May'}
w3={'bandx': 1967, 'bandy': 1984, 'band4': 1933}
w4={'bandy': 1984}
print (w1.values())
print(w2.values())
print(w3.values())
print(w4.values())
#You can look up the value stored under any key as follows:
w1={"band1":1943, "bandx": 1967, "bandy": 1984, "band4":1933}
w2= {'First Name': 'Behdad', 'Surname': 'Jam', 'Age': 35, "John's Age": 37, 'Birthday': '23rd May'}
w3={'bandx': 1967, 'bandy': 1984, 'band4': 1933}
w4={'bandy': 1984}
#value of key band1 is computed as follows
vw1= w1["band1"]
print(vw1)
#value of key bandx is computed as follows
vw1= w1["bandx"]
print(vw1)
#value of key bandy is computed as follows
vw1= w1["bandy"]
print(vw1)
```
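Two further dictionary methods worth knowing, sketched in the same style as the examples above: `.get()` avoids a `KeyError` for missing keys, and `.items()` iterates over key-value pairs directly.

```python
History = {"band1": 1943, "bandx": 1967, "bandy": 1984, "band4": 1933}
# .get() returns a default instead of raising KeyError when the key is missing
year = History.get("band1")
missing = History.get("bandzz", "unknown")
print(year, missing)
# .items() yields (key, value) pairs, convenient for loops
for band, founded in History.items():
    print(band, founded)
```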
```
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.model_selection import RandomizedSearchCV
import pandas as pd
df = np.genfromtxt('D:/Github/eeg.fem/public/data/Musical/6080072/data_for_train/ALL_PCA_64.csv',delimiter=',')
x = df[:, :-1]
y = df[:, -1]
X_train, X_test, y_train, y_test = train_test_split(x, y, random_state=3, test_size=0.3)
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-5, 1, 1e5],
'gamma': [1e-5, 'scale', 1e5]}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-1, 1, 1e3, 1e5],
'gamma': [1e-6, 1e-5, 1e-4,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=16)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-2, 1, 1e3],
'gamma': [1e-6, 1e-5, 1e-4,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=12)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-2, 1, 1e1],
'gamma': [1e-6, 1e-5, 1e-4,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=12)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-2, 1, 1e1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-1, 1, 1e1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [5e-1, 1, 1e1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [5e-1, 2.5e-1, 1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [5e-2, 1.5e-1, 2.5e-1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr', tol=1e-3)
params = {'C': [1e-3, 1.5e-1, 2.5e-1],
'gamma': [1e-4, 1e-3,'scale']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=9)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr')
params = {'C': [1e-3, 2e-3, 3e-3, 4e-3, 5e-3, 6e-3, 7e-3, 8e-3, 9e-3],
'gamma': ['scale'],
'tol': [1e-5, 1e-3, 1]}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=27)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_gamma','param_tol','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr')
params = {'C': [3e-3, 3.5e-3],
'gamma': ['scale'],
'tol': [1e-5, 1e-3, 1]}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=6)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','param_tol','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr')
params = {'C': [3.1e-3, 3.2e-3, 3.3e-3, 3.4e-3],
'gamma': ['scale'],
'tol': [1e-3],
'decision_function_shape': ['ovo', 'ovr']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=8)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr')
params = {'C': [3.1e-3, 3.15e-3, 3.2e-3],
'gamma': ['scale'],
'tol': [1e-3],
'decision_function_shape': ['ovr']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=8)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','mean_test_score']])
%%time
classifier = SVC(kernel='rbf', max_iter=-1 ,decision_function_shape='ovr')
params = {'C': [3.15e-3, 3.16e-3, 3.17e-3, 3.18e-3, 3.19e-3, 3.2e-3],
'gamma': ['scale'],
'tol': [1e-3],
'decision_function_shape': ['ovr']}
rs = RandomizedSearchCV(classifier, params, cv=8, scoring='accuracy', n_iter=6)
rs.fit(X_train, y_train)
print(pd.DataFrame(rs.cv_results_)[['param_C','mean_test_score']])
```
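The cells above narrow `C` and `gamma` by hand over many successive runs; a common alternative is a single search over log-spaced grids. A self-contained sketch on synthetic data — in the real notebook, `X_train` / `y_train` come from the PCA-reduced EEG CSV:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for the PCA features
X, y = make_classification(n_samples=200, n_features=10, random_state=3)
# Log-spaced grids cover several orders of magnitude in one pass
param_grid = {'C': np.logspace(-3, 3, 7),
              'gamma': np.logspace(-4, 0, 5)}
gs = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5, scoring='accuracy')
gs.fit(X, y)
print(gs.best_params_, gs.best_score_)
```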
The following notebook shows a simple Natural Language Processing (NLP) classification task: identifying Tweets about natural disasters as real versus non-real or unrelated. For details about the classification task, the source datasets, and other similar examples, see:
https://www.kaggle.com/c/nlp-getting-started
Exploratory data analysis (EDA) was largely done before this notebook; the focus here is on tokenization and data preparation prior to model setup, training, and prediction. The Keras Sequential model used here is configured with only the most basic parameters, and no attempt has been made to tune them or to compare alternative classification approaches. Even so, validation accuracy on this first attempt (as shown) was over 92%.
Note that in addition to the source data, a Bing Maps API key is required (sourced here from a local .env file).
```
import pandas as pd
import numpy as np
from glob import glob
from collections import Counter
import os
import re
import string
import nltk
import base64
from dotenv import load_dotenv
import geocoder
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import models
from tensorflow.keras import layers
from tensorflow.keras import optimizers
from tensorflow.keras import losses, metrics
load_dotenv()
BING_MAP_KEY = base64.b64decode(os.getenv('BING_MAP_KEY')).decode('ascii')
#Show local tables to load into memory
print(glob('*.csv'))
#Load and check training data
df_train = pd.read_csv('train.csv')
df_train.iloc[0]
#Show most common locations for Tweets
df_train.location.value_counts()[:20]
#Clean locations by geocoding and replacing with just country code
#Bing Maps API is great geocoder tool, also Google works well
count = 1
def relocate(x):
global count
print(f'Processed line {count}...', end='\r')
count += 1
if pd.isnull(x['location']):
return 'EMPTY'
g = geocoder.bing(x['location'], key=BING_MAP_KEY)
if not g.json is None:
try:
return g.json['country']
except:
return 'EMPTY'
else:
return 'EMPTY'
df_train['country'] = df_train.apply(relocate, axis=1)
#Clean the very messy text - reduce words to base forms via Treebank tokens and the WordNet lemmatizer
def clean_text(text):
    '''Make text lowercase, remove text in square brackets, remove links, remove punctuation,
    and remove words containing numbers.'''
    text = text.lower()
    text = re.sub(r'\[.*?\]', '', text)                # text in square brackets
    text = re.sub(r'https?://\S+|www\.\S+', '', text)  # links (must run before punctuation is stripped)
    text = re.sub(r'[^a-zA-Z0-9 \n\.]', '', text)
    text = re.sub(r'[%s]' % re.escape(string.punctuation), '', text)
    text = re.sub(r'\n', '', text)
    text = re.sub(r'\w*\d\w*', '', text)               # words containing numbers
    tokenizer = nltk.tokenize.TreebankWordTokenizer()
    tokens = tokenizer.tokenize(text)
    lemmatizer = nltk.stem.WordNetLemmatizer()
    text = " ".join(lemmatizer.lemmatize(token) for token in tokens)
    return text
df_train.country.unique()
#OPTIONAL - write cleaned data to local file. Geocoding takes some time
df_train.to_csv('train_data_with_locations.csv', index=False, header=True)
df_train['location'] = df_train['country']
df_train.drop(labels='country', axis=1, inplace=True)
df_train['text'] = df_train.apply(lambda x: clean_text(x['text']), axis=1)
df_train.keyword.fillna('', inplace=True)
df_train['keyword'] = df_train.apply(lambda x: clean_text(x['keyword']), axis=1)
#Check cleaned text data
for i in df_train.iloc[:10]['text'].values:
print(i)
#Get word index for training data - ALL WORDS USED as returned by tokenizer
vals = list(Counter([i for i in ' '.join(df_train.text.values.tolist()).split(' ')]).keys())
#As an option, take top n most used words
counter = Counter([i for i in ' '.join(df_train.text.values.tolist()).split(' ')])
top_words = [i for _, i in sorted(zip(counter.values(), counter.keys()),
key=lambda x: x[0], reverse=True)][:20000]
#vals = top_words
indices = [i for i, _ in enumerate(vals)]
word_index = {}
values = {}
for i in range(len(indices)):
word_index[indices[i]] = vals[i]
values[vals[i]] = indices[i]
def vectorize_sequences(sequences, dimension=50000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
df_train.columns
#Get lines of text and convert to list of word keys
texts = df_train.text.values
sequences = []
for line in texts:
test_line = [values[i.replace('#', '')] for i in line.split(' ') if i.replace('#', '') in vals]
sequences.append(test_line)
print(len(sequences), sequences[0])
#Convert word keys for main text of tweet to categorical for training purposes
first_part_train = vectorize_sequences(sequences)
#Perform tokenization for locations, and convert to binary/categorical arrays
keyvals = list(Counter(df_train.location.values.tolist()).keys())
indices = [i for i, _ in enumerate(keyvals)]
loc_dict = {}
for i in range(len(keyvals)):
loc_dict[keyvals[i]] = indices[i] + 1 #nan is 0
locations = []
for val in df_train.location.values:
if pd.isnull(val):
locations.append(0)
else:
locations.append(loc_dict[val])
locations = to_categorical(np.array(locations), dtype='int32')
print(locations.shape)
#Perform same categorization for keywords as for location values
locvals = list(Counter(df_train.keyword.values.tolist()).keys())
indices = [i for i, _ in enumerate(locvals)]
key_dict = {}
for i in range(len(locvals)):
key_dict[locvals[i]] = indices[i] + 1 #nan is 0
keywords = []
for val in df_train.keyword.values:
if pd.isnull(val):
keywords.append(0)
else:
keywords.append(key_dict[val])
keywords = to_categorical(np.array(keywords), dtype='int32')
print(keywords.shape)
x_train = np.hstack((first_part_train, locations, keywords))
y_train = df_train.target.values
#Develop model to train based on prepared input data
model = models.Sequential()
model.add(layers.Dense(16, activation='tanh', input_shape=(x_train.shape[1],)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
#model.compile(optimizer=optimizers.RMSprop(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])
model.compile(optimizer=optimizers.Nadam(),
loss=losses.binary_crossentropy, metrics=[metrics.binary_accuracy])
x_val = x_train[-100:]
partial_x_train = x_train[:-100]
y_val = y_train[-100:]
partial_y_train = y_train[:-100]
model.compile(optimizer='nadam', loss='binary_crossentropy', metrics=['acc'])
history = model.fit(partial_x_train, partial_y_train, epochs=20, batch_size=50, validation_data=(x_val, y_val))
#As an optional step, save trained model for reuse in the future
model.save('model3.h5')
test_df = pd.read_csv('test.csv')
"""
Go through and apply same text tokens, as well as keyword and location tokens, from the training dataframe.
Tokens won't be remade, to satisfy specific input shape of learning model.
"""
count = 1
test_df['country'] = test_df.apply(relocate, axis=1)
test_df['location'] = test_df['country']
test_df.drop(labels='country', axis=1, inplace=True)
test_df['text'] = test_df.apply(lambda x: clean_text(x['text']), axis=1)
test_df.keyword.fillna('', inplace=True)
test_df['keyword'] = test_df.apply(lambda x: clean_text(x['keyword']), axis=1)
test_df['keyword'].fillna('', inplace=True)
texts = test_df.text.values
sequences = []
for line in texts:
test_line = [values[i] for i in str(line).split(' ') if i in vals]
sequences.append(test_line)
#Convert word keys for main text of tweet to categorical (this time for test purposes)
first_part_test = vectorize_sequences(sequences)
#Now to get location values (if matching ones exist in training dataset)
locations = []
for val in test_df.location.values:
if pd.isnull(val):
locations.append(0)
else:
try:
locations.append(loc_dict[val])
except KeyError:
locations.append(len(loc_dict))
locations = to_categorical(np.array(locations), dtype='int32')
print(locations.shape)
#And finally, get keywords (again, only if matching keywords were present in training dataset)
keywords = []
for val in test_df.keyword.values:
if pd.isnull(val):
keywords.append(0)
else:
try:
keywords.append(key_dict[val])
except KeyError:
keywords.append(len(key_dict)) #value must be absent, but make sure tensor has same shape
keywords = to_categorical(np.array(keywords), dtype='int32')
print(keywords.shape)
#As before, merge text tokens, location and keyword category values
x_test = np.hstack((first_part_test, locations, keywords))
#With the tensors carefully prepared, run predictions. Perform additional check to make sure shape matches.
if x_test.shape[1] != x_train.shape[1]:
print('Error in processing - input matrix needs same number of columns as training data')
else:
output_targets = model.predict(x_test, verbose=1)
samples = pd.read_csv('sample_submission.csv')
samples.head()
output_aslist = [int(i > 0.5) for i in output_targets]  # threshold sigmoid outputs at 0.5 instead of truncating
output_table = pd.DataFrame({'id': test_df.id.values, 'target': output_aslist})
output_table.to_csv('submissions.csv', index=False, header=True)
```
| github_jupyter |
# *Applying Hybrid PSO*:
# *The Travelling Salesman Problem*
## *Overview*
### Hybrid PSO + Genetic Factor
* **Particle position:** a valid route
* **Velocity:** N pairs of simple swaps between elements of the route
* **Genetic factor:** each particle has a 50% chance of triggering a genetic event that replaces the 2 worst elements of the swarm with 2 particles from the next generation
- **Parent selection:** GENITOR
- **Crossover:** Order Crossover (OX1)
- **No mutation**
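The crossover step above can be sketched as follows — a minimal, hypothetical implementation of Order Crossover (OX1) between two parent routes (function and parameter names are illustrative, not taken from this project's code):

```python
import numpy as np

def order_crossover(parent_a, parent_b, rng=None):
    """Order Crossover (OX1): copy a random slice from parent_a,
    then fill the remaining slots with parent_b's cities in order."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(parent_a)
    i, j = sorted(rng.choice(n, size=2, replace=False))
    child = [None] * n
    child[i:j + 1] = parent_a[i:j + 1]        # inherited slice from parent_a
    used = set(child[i:j + 1])
    fill = [c for c in parent_b if c not in used]
    for k in range(n):                        # fill the holes with parent_b's order
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

# Example: two routes over 8 cities; the result is always a valid permutation
child = order_crossover(list(range(8)), [7, 3, 0, 4, 6, 2, 5, 1])
assert sorted(child) == list(range(8))
```

Because OX1 only rearranges cities, the offspring is always a valid route, which is why no repair step is needed before inserting it into the swarm.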
### Particle
```
import numpy as np
class DParticle:
def __init__(self, path: np.array):
self.position = path
self.combination_count = np.random.randint(len(path) * (len(path) - 1)) + 1
self.velocity = np.random.randint(len(path), size=(self.combination_count, 2))
self.best_position = np.copy(self.position)
self.best_path_len = np.inf
    def __repr__(self):
        # avoid delegating to __str__ (the default __str__ calls __repr__, which would recurse forever)
        return f"DParticle(position={self.position})"
```
### Fit
```
def fit(path, problem):
cyclic_path = np.hstack((path, np.array([path[0]])))
return sum(problem.get_weight(a, b) for a, b in zip(cyclic_path[0:], cyclic_path[1:]))
```
### Update
```
import random

def discrete_velocity(particle: DParticle):
    return random.choices(particle.velocity, k=np.random.randint(len(particle.position)))
```
### Hybrid Discrete PSO Algorithm
```
def submit(self, iterations=1000):
for i in range(iterations):
for particle in self.particles:
distance = fit(particle.position, self.problem)
logger.debug(f"Distance: {distance}\t Path:{particle.position}\tV:{particle.velocity}")
# Is it the best particle distance so far?
if distance < particle.best_path_len:
particle.best_position = np.copy(particle.position)
particle.best_path_len = distance
# May be the best global distance as well?
if distance < self.best_path:
self.best_path = distance
self.best_path_pos = np.copy(particle.position)
logger.info(f"Best distance: {self.best_path}\tBest Path:{self.best_path_pos}")
# Adjust position
velocity = discrete_velocity(particle)
adjust_discrete_position(particle, velocity)
# Adding genetic vector
if random.random() <= 0.5:
parents = self.parent_extractor.extract_parent(problem=self.problem, population=self.particles)
offspring = self.crossover.cross(parents, 2, self.problem)
self.particles.extend(DParticle(off) for off in offspring)
natural_select(problem=self.problem, population=self.particles, die=len(offspring))
```
### Helpers
```
# Perform the swaps between positions, similar to the SIM mutation
def adjust_discrete_position(particle, velocity):
    for exchange in velocity:
        a, b = exchange[0], exchange[1]
        particle.position[a], particle.position[b] = particle.position[b], particle.position[a]
```
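As a quick illustration of the position update (a self-contained sketch; the real `DParticle` carries more state), applying a velocity of two swaps to a five-city route:

```python
import numpy as np

# A velocity is a sequence of index pairs; applying it swaps those positions in order.
position = np.array([0, 1, 2, 3, 4])
velocity = np.array([[0, 4], [1, 2]])   # swap positions 0<->4, then 1<->2

for a, b in velocity:
    position[a], position[b] = position[b], position[a]

print(position)  # [4 2 1 3 0]
```

Since each step is a transposition, any route can be reached from any other, which is what makes this swap-sequence encoding a usable "velocity" for discrete PSO.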
## Results
```
import pandas as pd
pd.read_csv("benchmark.csv")
```
## References
*[1]* Lin, *Particle Swarm Optimization for Solving Constraint Satisfaction Problems*, https://core.ac.uk/download/pdf/56374467.pdf
| github_jupyter |
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import numpy as np
import pandas as pd
import datetime as dt
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, inspect, func
from sqlalchemy import Column, Integer, String, Float
engine = create_engine("sqlite:///hawaii2.sqlite")
conn = engine.connect()
# reflect an existing database into a new model
Base = automap_base()
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# Use the Inspector to explore the database and print the table names
inspector = inspect(engine)
inspector.get_table_names()
measurement_col = []
columns = inspector.get_columns('measurement')
for c in columns:
measurement_col.append(c['name'])
print(c['name'], c["type"])
measurement_col
# top 5 measurements
engine.execute('SELECT * FROM Measurement LIMIT 5').fetchall()
```
# Exploratory Climate Analysis
```
# last date
session.query(Measurement.date).order_by((Measurement.date.desc())).first()
#today date
print(dt.date.today())
print(dt.date(2017, 8 ,23))
# Design a query to retrieve the last 12 months of precipitation data and plot the results
session.query(Measurement.date).order_by(Measurement.date.desc()).first()
# Calculate the date 1 year ago from the last data point in the database
one_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
print(one_year)
#Perform a query to retrieve the data and precipitation scores
results = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= one_year).all()
precipitation_last_12 = session.query(Measurement.date,Measurement.prcp).\
filter(func.datetime(Measurement.date) >= one_year).\
order_by(Measurement.date).all()
precipitation_last_12
#Save the query results as a Pandas DataFrame and set the index to the date column
df = pd.DataFrame(results, columns=['date', 'precipitation'])
df.set_index(df['date'], inplace=True)
df.tail()
precipitation_pd = pd.DataFrame(precipitation_last_12)
precipitation_pd= precipitation_pd.dropna()
#Sort the dataframe by date
precipitation_pd.describe()
# date and pcp top 5
precipitation_pd.head()
#count of date and Precipitation
precipitation_pd.count()
# Create Date vs. Prcp Plot scatter plot
#precipitation_pd.date.set_index('date',inplace=True,sort_columns=True,use_index=True, legend=True,grid=True, color='g')
precipitation_pd=precipitation_pd.sort_values('date')
precipitation_pd.plot(x='date',y='prcp',rot=90)
# Set title
plt.title("Date vs Precipitation")
# Set x axis label
#FIX DATES
plt.xlabel("Date")
# Set y axis label
plt.ylabel("Precipitation")
# Set grid line
plt.grid(linestyle='-', linewidth=1, alpha=0.5)  # alpha must be in [0, 1]
plt.savefig('date_precip_12.png')
# Design a query to show how many stations are available in this dataset
session.query(Measurement.station, func.count(Measurement.station))\
    .group_by(Measurement.station).all()
stationcount = session.query(Measurement).distinct(Measurement.station).group_by(Measurement.station).count()
stationcount
# What are the most active stations? (i.e. what stations have the most rows)?
#print('Total Station Number:',Measurement.station)
sel = [Measurement.station, func.count(Measurement.tobs)]
query = session.query(*sel).\
group_by(Measurement.station).\
order_by(func.count(Measurement.tobs).desc())
q_df = pd.DataFrame(query, columns=['station_id','total_count']).set_index('station_id')
store_station=q_df.index[0]
q_df
plt.savefig('total_count_df.png')
# store min,max and avg for the active station
sel= [func.min(Measurement.tobs), func.max(Measurement.tobs),func.avg(Measurement.tobs)]
query = session.query(*sel).\
filter(Measurement.station==store_station).all()
q_df=pd.DataFrame(query,columns=['low','hgh','avg'])
q_df
#histogram of Tobs
sel=[Measurement.date, Measurement.tobs]
query = session.query(*sel).\
filter(Measurement.station==store_station).\
filter(func.strftime(Measurement.date)>=one_year).\
group_by(Measurement.date).\
order_by(func.count(Measurement.tobs).desc())
q_df=pd.DataFrame(query[:][:],columns=['date', 'tobs'])
q_df.plot.hist(bins=12, color='green')
plt.tight_layout()
plt.savefig('stat_temp.png')  # save before show() so the figure is not blank
plt.show()
```
## Bonus Challenge Assignment
```
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
prev_year_start = dt.date(2018, 1, 1) - dt.timedelta(days=365)
prev_year_end = dt.date(2018, 1, 7) - dt.timedelta(days=365)
tmin, tavg, tmax = calc_temps(prev_year_start.strftime("%Y-%m-%d"), prev_year_end.strftime("%Y-%m-%d"))[0]
print(tmin, tavg, tmax)
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
fig, ax = plt.subplots(figsize=plt.figaspect(2.))
xpos = 1
yerr = tmax-tmin
bar = ax.bar(xpos, tmax, yerr=yerr, alpha=0.5, color='coral', align="center")
ax.set(xticks=range(xpos), xticklabels="a", title="Trip Avg Temp", ylabel="Temp (F)")
ax.margins(.2, .2)
# fig.autofmt_xdate()
fig.tight_layout()
fig.show()
plt.savefig('trip_avg_temp.png')
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
start_date = '2012-01-01'
end_date = '2012-01-07'
sel = [Measurement.station, station.name, station.latitude,
station.longitude, station.elevation, func.sum(Measurement.prcp)]
results = session.query(*sel).\
filter(Measurement.station == station.station,
func.strftime(Measurement.date) >= start_date,
func.strftime(Measurement.date) <= end_date).\
group_by(Measurement.station).order_by(func.sum(Measurement.prcp).desc())
q_df = pd.DataFrame(results, columns=['station_id', 'station_name', 'lat_station', 'lng_station', 'elevation_station', 'total_precip']).set_index('station_id')
q_df
# Create a query that will calculate the daily normals
#disp# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
trip_start = '2018-01-01'
trip_end = '2018-01-07'
# Use the start and end date to create a range of dates
trip_dates = pd.date_range(trip_start, trip_end, freq='D')
# Strip off the year and save a list of %m-%d strings
trip_month_day = trip_dates.strftime('%m-%d')
# Loop through the list of %m-%d strings and calculate the normals for each date
normals = []
for date in trip_month_day:
normals.append(*daily_normals(date))
normals
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
df_trip = pd.DataFrame(normals, columns=['tmin', 'tavg', 'tmax'])
df_trip['date'] = trip_dates
df_trip.set_index(['date'],inplace=True)
df_trip.head()
# Plot the daily normals as an area plot with `stacked=False`
df_trip.plot(kind='area', stacked=False, x_compat=True, alpha=.2)
plt.tight_layout()
plt.savefig('area_count_df.png')
```
| github_jupyter |
(pystan_refitting_xr)=
# Refitting PyStan models with ArviZ (and xarray)
ArviZ is backend agnostic and therefore does not sample directly. In order to take advantage of algorithms that require refitting models several times, ArviZ uses {class}`~arviz.SamplingWrapper`s to convert the API of the sampling backend to a common set of functions. Hence, functions like Leave Future Out Cross Validation can be used in ArviZ independently of the sampling backend used.
Below there is one example of `SamplingWrapper` usage for PyStan.
```
import arviz as az
import pystan
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import xarray as xr
```
For the example we will use a linear regression.
```
np.random.seed(26)
xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata)
```
Now we will write the Stan code, keeping in mind only to include the array shapes as parameters.
```
refit_lr_code = """
data {
// Define data for fitting
int<lower=0> N;
vector[N] x;
vector[N] y;
}
parameters {
real b0;
real b1;
real<lower=0> sigma_e;
}
model {
b0 ~ normal(0, 10);
b1 ~ normal(0, 10);
sigma_e ~ normal(0, 10);
for (i in 1:N) {
y[i] ~ normal(b0 + b1 * x[i], sigma_e); // use only data for fitting
}
}
generated quantities {
vector[N] y_hat;
for (i in 1:N) {
// pointwise log likelihood will be calculated outside stan,
// posterior predictive however will be generated here, there are
// no restrictions on adding more generated quantities
y_hat[i] = normal_rng(b0 + b1 * x[i], sigma_e);
}
}
"""
sm = pystan.StanModel(model_code=refit_lr_code)
data_dict = {
"N": len(ydata),
"y": ydata,
"x": xdata,
}
sample_kwargs = {"iter": 1000, "chains": 4}
fit = sm.sampling(data=data_dict, **sample_kwargs)
```
We have defined a dictionary `sample_kwargs` that will be passed to the `SamplingWrapper` in order to make sure that all
refits use the same sampler parameters. We follow the same pattern with {func}`~arviz.from_pystan`.
```
dims = {"y": ["time"], "x": ["time"], "y_hat": ["time"]}
idata_kwargs = {
"posterior_predictive": ["y_hat"],
"observed_data": "y",
"constant_data": "x",
"dims": dims,
}
idata = az.from_pystan(posterior=fit, **idata_kwargs)
```
We are now missing the `log_likelihood` group because we have not used the `log_likelihood` argument in `idata_kwargs`. We are doing this to ease the job of the sampling wrapper. Instead of going out of our way to get Stan to calculate the pointwise log likelihood values for each refit and for the excluded observation at every refit, we will compromise and manually write a function to calculate the pointwise log likelihood.
Even though we lose some of the out-of-the-box capabilities of the PyStan-ArviZ integration, this should generally not be a problem. We are basically moving the pointwise log likelihood calculation from the Stan code to the Python code; in both cases we need to manually write the function that calculates it.
Moreover, the Python computation could even be written to be compatible with Dask. Thus it will work even in cases where the large number of observations makes it impossible to store pointwise log likelihood values (with shape `n_samples * n_observations`) in memory.
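As a sketch of that idea (hypothetical shapes and names; Dask only matters for the out-of-memory case), the pointwise computation can go through `apply_ufunc` with `dask="parallelized"`, which applies eagerly to in-memory NumPy-backed inputs and chunk-by-chunk to Dask-backed ones obtained via `.chunk()`:

```python
import numpy as np
import scipy.stats as stats
import xarray as xr

def calculate_log_lik(x, y, b0, b1, sigma_e):
    mu = b0 + b1 * x
    return stats.norm(mu, sigma_e).logpdf(y)

# hypothetical small posterior; for a real out-of-memory case you would
# first call .chunk() on these DataArrays to back them with Dask
x = xr.DataArray(np.linspace(0, 1, 10), dims="time")
y = xr.DataArray(np.linspace(0, 1, 10), dims="time")
b0 = xr.DataArray(np.zeros((4, 50)), dims=("chain", "draw"))
b1 = xr.DataArray(np.ones((4, 50)), dims=("chain", "draw"))
sigma = xr.DataArray(np.ones((4, 50)), dims=("chain", "draw"))

log_lik = xr.apply_ufunc(
    calculate_log_lik, x, y, b0, b1, sigma,
    dask="parallelized",       # applied lazily per chunk when inputs are Dask arrays
    output_dtypes=[float],     # required metadata for the Dask case
)
```

The result keeps the `chain`, `draw`, and `time` dimensions, and with chunked inputs it stays lazy until explicitly computed.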
```
def calculate_log_lik(x, y, b0, b1, sigma_e):
mu = b0 + b1 * x
return stats.norm(mu, sigma_e).logpdf(y)
```
This function should work for any shape of the input arrays as long as their shapes are compatible and can broadcast. There is no need to loop over each draw in order to calculate the pointwise log likelihood using scalars.
Therefore, we can use `xr.apply_ufunc` to handle the broadcasting and preserve the dimension names:
```
log_lik = xr.apply_ufunc(
calculate_log_lik,
idata.constant_data["x"],
idata.observed_data["y"],
idata.posterior["b0"],
idata.posterior["b1"],
idata.posterior["sigma_e"],
)
idata.add_groups(log_likelihood=log_lik)
```
The first argument is the function, followed by as many positional arguments as needed by the function, 5 in our case. As this case does not have many different dimensions nor combinations of these, we do not need to use any extra kwargs passed to {func}`xarray:xarray.apply_ufunc`.
We are now passing the arguments to `calculate_log_lik` initially as {class}`xarray:xarray.DataArray`s. What is happening here behind the scenes is that {func}`~xarray:xarray.apply_ufunc` is broadcasting and aligning the dimensions of all the DataArrays involved and afterwards passing numpy arrays to `calculate_log_lik`. Everything works automagically.
Now let's see what happens if we were to pass the arrays directly to `calculate_log_lik` instead:
```
calculate_log_lik(
idata.constant_data["x"].values,
idata.observed_data["y"].values,
idata.posterior["b0"].values,
idata.posterior["b1"].values,
idata.posterior["sigma_e"].values
)
```
If you are still curious about the magic of xarray and {func}`~xarray:xarray.apply_ufunc`, you can also try to modify the `dims` used to generate the InferenceData a couple cells before:
dims = {"y": ["time"], "x": ["time"]}
What happens to the result if you use a different name for the dimension of `x`?
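As a hint, here is a toy sketch (independent of the model above): when two inputs use different dimension names, `apply_ufunc` does not align them elementwise but broadcasts them into an outer product:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(3), dims="time")
b = xr.DataArray(np.arange(4), dims="time2")

# distinct dim names broadcast against each other: result has both dims
res = xr.apply_ufunc(np.add, a, b)
print(res.shape)  # (3, 4)
```

So renaming the dimension of `x` would give every pair of (observation, renamed coordinate) instead of the pointwise values you actually want.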
```
idata
```
We will create a subclass of {class}`~arviz.SamplingWrapper`. Therefore, instead of having to implement all functions required by {func}`~arviz.reloo` we only have to implement `sel_observations` (we are cloning `sample` and `get_inference_data` from the `PyStanSamplingWrapper` in order to use `apply_ufunc` instead of assuming the log likelihood is calculated within Stan).
Note that of the 2 outputs of `sel_observations`, `data__i` is a dictionary because it is an argument of `sample` which will pass it as is to `model.sampling`, whereas `data_ex` is a list because it is an argument to `log_likelihood__i` which will pass it as `*data_ex` to `apply_ufunc`. More on `data_ex` and `apply_ufunc` integration below.
```
class LinearRegressionWrapper(az.SamplingWrapper):
def sel_observations(self, idx):
xdata = self.idata_orig.constant_data["x"]
ydata = self.idata_orig.observed_data["y"]
mask = np.isin(np.arange(len(xdata)), idx)
data__i = {"x": xdata[~mask], "y": ydata[~mask], "N": len(ydata[~mask])}
data_ex = [ary[mask] for ary in (xdata, ydata)]
return data__i, data_ex
def sample(self, modified_observed_data):
#Cloned from PyStanSamplingWrapper.
fit = self.model.sampling(data=modified_observed_data, **self.sample_kwargs)
return fit
def get_inference_data(self, fit):
# Cloned from PyStanSamplingWrapper.
idata = az.from_pystan(posterior=fit, **self.idata_kwargs)
return idata
loo_orig = az.loo(idata, pointwise=True)
loo_orig
```
In this case, the Leave-One-Out Cross Validation (LOO-CV) approximation using Pareto Smoothed Importance Sampling (PSIS) works for all observations, so we will modify `loo_orig` in order to make {func}`~arviz.reloo` believe that PSIS failed for some observations. This will also serve as a validation of our wrapper, as the PSIS LOO-CV already returned the correct value.
```
loo_orig.pareto_k[[13, 42, 56, 73]] = np.array([0.8, 1.2, 2.6, 0.9])
```
We initialize our sampling wrapper. Let's stop and analyze each of the arguments.
We then use the `log_lik_fun` and `posterior_vars` argument to tell the wrapper how to call {func}`~xarray:xarray.apply_ufunc`. `log_lik_fun` is the function to be called, which is then called with the following positional arguments:
    log_lik_fun(*data_ex, *[idata__i.posterior[var_name] for var_name in posterior_vars])
where `data_ex` is the second element returned by `sel_observations` and `idata__i` is the InferenceData object result of `get_inference_data` which contains the fit on the subsetted data. We have generated `data_ex` to be a tuple of DataArrays so it plays nicely with this call signature.
We use `idata_orig` as a starting point, and mostly as a source of observed and constant data which is then subsetted in `sel_observations`.
Finally, `sample_kwargs` and `idata_kwargs` are used to make sure all refits and corresponding InferenceData are generated with the same properties.
```
pystan_wrapper = LinearRegressionWrapper(
sm,
log_lik_fun=calculate_log_lik,
posterior_vars=("b0", "b1", "sigma_e"),
idata_orig=idata,
sample_kwargs=sample_kwargs,
idata_kwargs=idata_kwargs
)
```
And eventually, we can use this wrapper to call `az.reloo`, and compare the results with the PSIS LOO-CV results.
```
loo_relooed = az.reloo(pystan_wrapper, loo_orig=loo_orig)
loo_relooed
loo_orig
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import pickle
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import cv2
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier, GradientBoostingClassifier
from sklearn.ensemble import RandomForestClassifier, IsolationForest
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC, SVC
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
from skopt.plots import plot_objective, plot_histogram
from sklearn.pipeline import Pipeline
from src.utils.feats import load_gei
from src.utils.results import df_results
import pandas as pd
# Kfold
n_splits = 3
cv = KFold(n_splits=n_splits, random_state=42, shuffle=True)
# classifier
model = RandomForestClassifier(n_estimators=150, max_depth=None, random_state=0, criterion='gini')
datapath = "../data/feats/database24_gei_480x640.pkl"
dim = (64, 48)
crop_person = True
X, y = load_gei(datapath, dim=dim, crop_person=crop_person)
# pipeline class is used as estimator to enable
# search over different model types
pipe = Pipeline([
('model', KNeighborsClassifier())
])
# a single categorical value of the 'model' parameter
# sets the model class
# We will get ConvergenceWarnings because the problem is not well-conditioned.
# But that's fine, this is just an example.
# from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
# from sklearn.ensemble import RandomForestClassifier, IsolationForest
# from sklearn.neighbors import KNeighborsClassifier
# from sklearn.svm import LinearSVC, SVC
# explicit dimension classes can be specified like this
ada_search = {
'model': Categorical([AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=10), random_state=0)]),
'model__n_estimators': Integer(300, 1100),
'model__learning_rate': Real(0.1, 0.5, prior='uniform'),
}
# gdb_search = {
# 'model': Categorical([GradientBoostingClassifier(max_depth=None, random_state=0)]),
# 'model__learning_rate': Real(1e-3, 0.5, prior='uniform'),
# 'model__n_estimators': Integer(1, 400),
# 'model__max_depth': Integer(1, 400),
# }
knn_search = {
'model': Categorical([KNeighborsClassifier()]),
'model__n_neighbors': Integer(1,6),
}
rf_search = {
'model': Categorical([RandomForestClassifier(max_depth=None, random_state=0, criterion='gini')]),
'model__n_estimators': Integer(250, 400),
}
svc_search = {
'model': Categorical([SVC()]),
'model__C': Real(1e-6, 1e+6, prior='log-uniform'),
'model__gamma': Real(1e-6, 1e+1, prior='log-uniform'),
'model__degree': Integer(1,8),
'model__kernel': Categorical(['linear', 'poly', 'rbf']),
}
opt = BayesSearchCV(
pipe,
# (parameter space, # of evaluations)
[(ada_search, 32), (knn_search, 8), (svc_search, 128), (rf_search, 128)],
cv=cv,
scoring='accuracy'
)
opt.fit(X, y)
df = df_results(opt)
df.to_csv('results_classifiers_bayes_search.csv')
df
# 5 best ADA models
df[df['model__learning_rate']>0].head(5)
# 5 best knn models
df[df['model__n_neighbors']>0].head(5)
# 5 best RF models
df[df['model__n_estimators']>0].head(5)
```
| github_jupyter |
```
import numpy as np
import tensorflow as tf
from sklearn.utils import shuffle
import re
import time
import collections
import os
import itertools
from sklearn.model_selection import train_test_split
def build_dataset(words, n_words, atleast=1):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
counter = [i for i in counter if i[1] >= atleast]
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
        index = dictionary.get(word, dictionary['UNK'])  # unknown words map to UNK (3), not GO (0)
        if index == dictionary['UNK']:
            unk_count += 1
        data.append(index)
    count[3][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def add_start_end(string):
string = string.split()
strings = []
for s in string:
s = list(s)
s[0] = '<%s'%(s[0])
s[-1] = '%s>'%(s[-1])
strings.extend(s)
return strings
with open('lemmatization-en.txt','r') as fopen:
texts = fopen.read().split('\n')
after, before = [], []
for i in texts[:10000]:
splitted = i.encode('ascii', 'ignore').decode("utf-8").lower().split('\t')
if len(splitted) < 2:
continue
after.append(add_start_end(splitted[0]))
before.append(add_start_end(splitted[1]))
print(len(after),len(before))
concat_from = list(itertools.chain(*before))
vocabulary_size_from = len(list(set(concat_from)))
data_from, count_from, dictionary_from, rev_dictionary_from = build_dataset(concat_from, vocabulary_size_from)
print('vocab from size: %d'%(vocabulary_size_from))
print('Most common words', count_from[4:10])
print('Sample data', data_from[:10], [rev_dictionary_from[i] for i in data_from[:10]])
print('filtered vocab size:',len(dictionary_from))
print("% of vocab used: {}%".format(round(len(dictionary_from)/vocabulary_size_from,4)*100))
concat_to = list(itertools.chain(*after))
vocabulary_size_to = len(list(set(concat_to)))
data_to, count_to, dictionary_to, rev_dictionary_to = build_dataset(concat_to, vocabulary_size_to)
print('vocab from size: %d'%(vocabulary_size_to))
print('Most common words', count_to[4:10])
print('Sample data', data_to[:10], [rev_dictionary_to[i] for i in data_to[:10]])
print('filtered vocab size:',len(dictionary_to))
print("% of vocab used: {}%".format(round(len(dictionary_to)/vocabulary_size_to,4)*100))
GO = dictionary_from['GO']
PAD = dictionary_from['PAD']
EOS = dictionary_from['EOS']
UNK = dictionary_from['UNK']
for i in range(len(after)):
after[i].append('EOS')
before[:10], after[:10]
class Stemmer:
def __init__(self, size_layer, num_layers, embedded_size,
from_dict_size, to_dict_size, learning_rate,
dropout = 0.5, beam_width = 15):
def lstm_cell(size, reuse=False):
return tf.nn.rnn_cell.GRUCell(size, reuse=reuse)
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None, None])
self.X_seq_len = tf.count_nonzero(self.X, 1, dtype=tf.int32)
self.Y_seq_len = tf.count_nonzero(self.Y, 1, dtype=tf.int32)
# encoder
encoder_embeddings = tf.Variable(tf.random_uniform([from_dict_size, embedded_size], -1, 1))
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
batch_size = tf.shape(self.X)[0]
for n in range(num_layers):
(out_fw, out_bw), (state_fw, state_bw) = tf.nn.bidirectional_dynamic_rnn(
cell_fw = lstm_cell(size_layer // 2),
cell_bw = lstm_cell(size_layer // 2),
inputs = encoder_embedded,
sequence_length = self.X_seq_len,
dtype = tf.float32,
scope = 'bidirectional_rnn_%d'%(n))
encoder_embedded = tf.concat((out_fw, out_bw), 2)
bi_state = tf.concat((state_fw, state_bw), -1)
self.encoder_state = tuple([bi_state] * num_layers)
self.encoder_state = tuple(self.encoder_state[-1] for _ in range(num_layers))
main = tf.strided_slice(self.Y, [0, 0], [batch_size, -1], [1, 1])
decoder_input = tf.concat([tf.fill([batch_size, 1], GO), main], 1)
# decoder
decoder_embeddings = tf.Variable(tf.random_uniform([to_dict_size, embedded_size], -1, 1))
decoder_cells = tf.nn.rnn_cell.MultiRNNCell([lstm_cell(size_layer) for _ in range(num_layers)])
dense_layer = tf.layers.Dense(to_dict_size)
training_helper = tf.contrib.seq2seq.TrainingHelper(
inputs = tf.nn.embedding_lookup(decoder_embeddings, decoder_input),
sequence_length = self.Y_seq_len,
time_major = False)
training_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cells,
helper = training_helper,
initial_state = self.encoder_state,
output_layer = dense_layer)
training_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = training_decoder,
impute_finished = True,
maximum_iterations = tf.reduce_max(self.Y_seq_len))
predicting_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
embedding = decoder_embeddings,
start_tokens = tf.tile(tf.constant([GO], dtype=tf.int32), [batch_size]),
end_token = EOS)
predicting_decoder = tf.contrib.seq2seq.BasicDecoder(
cell = decoder_cells,
helper = predicting_helper,
initial_state = self.encoder_state,
output_layer = dense_layer)
predicting_decoder_output, _, _ = tf.contrib.seq2seq.dynamic_decode(
decoder = predicting_decoder,
impute_finished = True,
maximum_iterations = 2 * tf.reduce_max(self.X_seq_len))
self.training_logits = training_decoder_output.rnn_output
self.predicting_ids = predicting_decoder_output.sample_id
masks = tf.sequence_mask(self.Y_seq_len, tf.reduce_max(self.Y_seq_len), dtype=tf.float32)
self.cost = tf.contrib.seq2seq.sequence_loss(logits = self.training_logits,
targets = self.Y,
weights = masks)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(self.cost)
y_t = tf.argmax(self.training_logits,axis=2)
y_t = tf.cast(y_t, tf.int32)
self.prediction = tf.boolean_mask(y_t, masks)
mask_label = tf.boolean_mask(self.Y, masks)
correct_pred = tf.equal(self.prediction, mask_label)
correct_index = tf.cast(correct_pred, tf.float32)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
size_layer = 128
num_layers = 2
embedded_size = 64
learning_rate = 1e-3
batch_size = 32
epoch = 15
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Stemmer(size_layer, num_layers, embedded_size, len(dictionary_from),
len(dictionary_to), learning_rate)
sess.run(tf.global_variables_initializer())
def str_idx(corpus, dic):
X = []
for i in corpus:
ints = []
for k in i:
try:
ints.append(dic[k])
except Exception as e:
ints.append(UNK)
X.append(ints)
return X
X = str_idx(before, dictionary_from)
Y = str_idx(after, dictionary_to)
train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size = 0.2)
def pad_sentence_batch(sentence_batch, pad_int):
padded_seqs = []
seq_lens = []
max_sentence_len = max([len(sentence) for sentence in sentence_batch])
for sentence in sentence_batch:
padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
seq_lens.append(len(sentence))
return padded_seqs, seq_lens
from tqdm import tqdm
from sklearn.utils import shuffle
import time
for EPOCH in range(epoch):
lasttime = time.time()
total_loss, total_accuracy, total_loss_test, total_accuracy_test = 0, 0, 0, 0
train_X, train_Y = shuffle(train_X, train_Y)
test_X, test_Y = shuffle(test_X, test_Y)
pbar = tqdm(range(0, len(train_X), batch_size), desc='train minibatch loop')
for k in pbar:
index = min(k+batch_size,len(train_X))
batch_x, seq_x = pad_sentence_batch(train_X[k: k+batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(train_Y[k: k+batch_size], PAD)
acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
feed_dict={model.X:batch_x,
model.Y:batch_y})
total_loss += loss
total_accuracy += acc
pbar.set_postfix(cost=loss, accuracy = acc)
pbar = tqdm(range(0, len(test_X), batch_size), desc='test minibatch loop')
for k in pbar:
index = min(k+batch_size,len(test_X))
batch_x, seq_x = pad_sentence_batch(test_X[k: k+batch_size], PAD)
batch_y, seq_y = pad_sentence_batch(test_Y[k: k+batch_size], PAD)
acc, loss = sess.run([model.accuracy, model.cost],
feed_dict={model.X:batch_x,
model.Y:batch_y})
total_loss_test += loss
total_accuracy_test += acc
pbar.set_postfix(cost=loss, accuracy = acc)
total_loss /= (len(train_X) / batch_size)
total_accuracy /= (len(train_X) / batch_size)
total_loss_test /= (len(test_X) / batch_size)
total_accuracy_test /= (len(test_X) / batch_size)
print('epoch: %d, avg loss: %f, avg accuracy: %f'%(EPOCH, total_loss, total_accuracy))
print('epoch: %d, avg loss test: %f, avg accuracy test: %f'%(EPOCH, total_loss_test, total_accuracy_test))
predicted = sess.run(model.predicting_ids,
feed_dict={model.X:batch_x,
model.Y:batch_y})
for i in range(len(batch_x)):
print('row %d'%(i+1))
print('BEFORE:',''.join([rev_dictionary_from[n] for n in batch_x[i] if n not in [0,1,2,3]]))
print('REAL AFTER:',''.join([rev_dictionary_to[n] for n in batch_y[i] if n not in[0,1,2,3]]))
print('PREDICTED AFTER:',''.join([rev_dictionary_to[n] for n in predicted[i] if n not in[0,1,2,3]]),'\n')
```
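The `pad_sentence_batch` helper above can be sanity-checked in isolation. In this standalone sketch the pad id `0` is an assumption for illustration; the notebook uses its own `PAD` token id:

```python
# Pad every sequence in a batch to the length of the longest one,
# and also return the original (unpadded) lengths.
def pad_sentence_batch(sentence_batch, pad_int):
    padded_seqs = []
    seq_lens = []
    max_sentence_len = max(len(sentence) for sentence in sentence_batch)
    for sentence in sentence_batch:
        padded_seqs.append(sentence + [pad_int] * (max_sentence_len - len(sentence)))
        seq_lens.append(len(sentence))
    return padded_seqs, seq_lens

padded, lens = pad_sentence_batch([[5, 6, 7], [8, 9], [10]], 0)
print(padded)  # [[5, 6, 7], [8, 9, 0], [10, 0, 0]]
print(lens)    # [3, 2, 1]
```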
<a href="https://colab.research.google.com/github/zaidalyafeai/Notebooks/blob/master/tf_Face_SSD.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Introduction
In this task we will detect faces in the wild using a single shot detector (SSD) model. The full SSD model is fairly involved, but we will build a simple implementation that works for the current task. SSD is an object-detection model that evaluates the whole image in a single pass, without the region proposals introduced in R-CNN, which makes it much faster. The basic architecture uses a CNN to extract features, and at the end we extract prediction volumes of shape $[w, h, c + 5]$, where $(w, h)$ is the spatial size of the prediction volume and $c + 5$ covers the class predictions (with 1 added for the background class) plus the four bounding-box offsets. Hence the size of the prediction module for one scale is $w \times h \times (c + 5)$. These volumes are predicted at different scales, and IoU matching is used to infer the spatial locations of the predicted boxes.

# Download The Dataset
We use the dataset from this [project](http://vis-www.cs.umass.edu/fddb/). Each frame is annotated with an ellipse around every face it contains. The dataset provides annotations for 5171 faces in a set of 2845 images taken from the [Faces in the Wild data set](http://tamaraberg.com/faceDataset/index.html). Here is a sample 
```
!wget http://tamaraberg.com/faceDataset/originalPics.tar.gz
!wget http://vis-www.cs.umass.edu/fddb/FDDB-folds.tgz
!tar -xzvf originalPics.tar.gz >> tmp.txt
!tar -xzvf FDDB-folds.tgz >> tmp.txt
```
# Extract the Bounding Boxes
For each image we convert the ellipse annotation into a rectangular region that frames the faces in that image. Before that we need to explain the concept of anchor boxes. An **anchor box** tied to a certain region of an image is the box responsible for predicting objects in that region. Given a set of ground-truth boxes, we match each one to its corresponding anchor box using the intersection-over-union (IoU) metric.

In the example above we see the anchor boxes with their associated true labels. The anchor box with the maximum IoU overlap is considered responsible for that prediction. For simplicity we construct volumes of anchor boxes at only one scale.
```
from PIL import Image
import pickle
import os
import numpy as np
import cv2
import glob
```
Use anchors of size $(4,4)$
```
ANCHOR_SIZE = 4
def iou(boxA, boxB):
#evaluate the intersection points
xA = np.maximum(boxA[0], boxB[0])
yA = np.maximum(boxA[1], boxB[1])
xB = np.minimum(boxA[2], boxB[2])
yB = np.minimum(boxA[3], boxB[3])
    # compute the area of the intersection rectangle (the boxes are
    # normalized to [0, 1], so no one-pixel offset is added)
    interArea = np.maximum(0, xB - xA) * np.maximum(0, yB - yA)
    # compute the area of both the prediction and ground-truth rectangles
    boxAArea = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1])
    boxBArea = (boxB[2] - boxB[0]) * (boxB[3] - boxB[1])
#compute the union
unionArea = (boxAArea + boxBArea - interArea)
# return the intersection over union value
return interArea / unionArea
#for a given box we find the corresponding anchor box
def get_anchor(box):
max_iou = 0.0
matching_anchor = [0, 0, 0, 0]
matching_index = (0, 0)
i = 0
j = 0
w , h = (1/ANCHOR_SIZE, 1/ANCHOR_SIZE)
for x in np.linspace(0, 1, ANCHOR_SIZE +1)[:-1]:
j = 0
for y in np.linspace(0, 1, ANCHOR_SIZE +1)[:-1]:
xmin = x
ymin = y
xmax = (x + w)
ymax = (y + h)
anchor_box = [xmin, ymin, xmax, ymax]
curr_iou = iou(box, anchor_box)
#choose the location with the highest overlap
if curr_iou > max_iou:
matching_anchor = anchor_box
max_iou = curr_iou
matching_index = (i, j)
j += 1
i+= 1
return matching_anchor, matching_index
```
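A quick numeric sanity check of the IoU metric; this standalone sketch re-declares the function for boxes normalized to $[0, 1]$ (and therefore omits the one-pixel `+1` offset, which only makes sense for integer pixel coordinates):

```python
import numpy as np

# IoU for boxes given as [xmin, ymin, xmax, ymax] in normalized coordinates.
def iou(boxA, boxB):
    xA = np.maximum(boxA[0], boxB[0])
    yA = np.maximum(boxA[1], boxB[1])
    xB = np.minimum(boxA[2], boxB[2])
    yB = np.minimum(boxA[3], boxB[3])
    interArea = np.maximum(0, xB - xA) * np.maximum(0, yB - yA)
    boxAArea = (boxA[2] - boxA[0]) * (boxA[3] - boxA[1])
    boxBArea = (boxB[2] - boxB[0]) * (boxB[3] - boxB[1])
    return interArea / (boxAArea + boxBArea - interArea)

print(iou([0, 0, 0.5, 0.5], [0, 0, 0.5, 0.5]))          # 1.0: identical boxes
print(iou([0, 0, 0.25, 0.25], [0.5, 0.5, 1.0, 1.0]))    # 0.0: disjoint boxes
print(iou([0, 0, 0.5, 0.5], [0.25, 0.25, 0.75, 0.75]))  # ~0.143: partial overlap
```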
For each image we output a volume of boxes, mapping each true label to the corresponding location in the $(4, 4, 5)$ tensor. Note that here we have only two labels, 1 for face and 0 for background, so we can use binary cross-entropy.
```
def create_volume(boxes):
output = np.zeros((ANCHOR_SIZE, ANCHOR_SIZE, 5))
for box in boxes:
if max(box) == 0:
continue
_, (i, j) = get_anchor(box)
output[i,j, :] = [1] + box
return output
#read all the files for annotation
annot_files = glob.glob('FDDB-folds/*ellipseList.txt')
data = {}
for file in annot_files:
with open(file, 'r') as f:
rows = f.readlines()
j = len(rows)
i = 0
while(i < j):
#get the file name
file_name = rows[i].replace('\n', '')+'.jpg'
#get the number of boxes
num_boxes = int(rows[i+1])
boxes = []
img = Image.open(file_name)
w, h = img.size
#get all the bounding boxes
for k in range(1, num_boxes+1):
box = rows[i+1+k]
box = box.split(' ')[0:5]
box = [float(x) for x in box]
#convert ellipse to a box
xmin = int(box[3]- box[1])
ymin = int(box[4]- box[0])
xmax = int(xmin + box[1]*2)
ymax = int(ymin + box[0]*2)
boxes.append([xmin/w, ymin/h, xmax/w, ymax/h])
#conver the boxes to a volume of fixed size
data[file_name] = create_volume(boxes)
i = i + num_boxes+2
```
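The ellipse-to-box conversion in the loop above ignores the rotation angle and simply frames the ellipse with an axis-aligned rectangle: the minor radius spans the x direction and the major radius spans the y direction. A worked check with a hypothetical annotation (major radius 60, minor radius 40, center at (100, 120)):

```python
# FDDB annotation order: [major_radius, minor_radius, angle, center_x, center_y]
box = [60.0, 40.0, 0.0, 100.0, 120.0]  # hypothetical example values

# Same arithmetic as the loop above.
xmin = int(box[3] - box[1])    # 100 - 40  = 60
ymin = int(box[4] - box[0])    # 120 - 60  = 60
xmax = int(xmin + box[1] * 2)  # 60  + 80  = 140
ymax = int(ymin + box[0] * 2)  # 60  + 120 = 180

print(xmin, ymin, xmax, ymax)  # 60 60 140 180
```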
# Imports
We use TensorFlow with eager execution, which allows immediate evaluation of tensors without first building a computation graph.
```
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Input
from tensorflow.keras.layers import Flatten, Dropout, BatchNormalization, Concatenate, Reshape, GlobalAveragePooling2D
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
import cv2
import matplotlib.pyplot as plt
import os
import numpy as np
from PIL import Image
from random import shuffle
import random
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
```
# Create a Dataset
Here we use `tf.data` to build the input pipelines used for training and evaluation.
```
def parse_training(filename, label):
image = tf.image.decode_jpeg(tf.read_file(filename), channels = 3)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
label = tf.cast(label, tf.float32)
return image, label
def parse_testing(filename, label):
image = tf.image.decode_jpeg(tf.read_file(filename), channels = 3)
image = tf.image.convert_image_dtype(image, tf.float32)
image = tf.image.resize_images(image, [IMG_SIZE, IMG_SIZE])
label = tf.cast(label, tf.float32)
return image, label
def create_dataset(ff, ll, training = True):
dataset = tf.data.Dataset.from_tensor_slices((ff, ll)).shuffle(len(ff) - 1)
if training:
dataset = dataset.map(parse_training, num_parallel_calls = 4)
else:
dataset = dataset.map(parse_testing, num_parallel_calls = 4)
dataset = dataset.batch(BATCH_SIZE)
return dataset
```
# Data Split
We hold out 10% of the data as a test split to be used for validation.
```
files = list(data.keys())
labels = list(data.values())
N = len(files)
M = int(0.9 * N)
#split files for images
train_files = files[:M]
test_files = files[M:]
#split labels
train_labels = labels[:M]
test_labels = labels[M:]
print('training', len(train_files))
print('testing' , len(test_files))
IMG_SIZE = 128
BATCH_SIZE = 32
train_dataset = create_dataset(train_files, train_labels)
test_dataset = create_dataset(test_files, test_labels, training = False)
```
# Visualization
```
def plot_annot(img, boxes):
img = img.numpy()
boxes = boxes.numpy()
for i in range(0, ANCHOR_SIZE):
for j in range(0, ANCHOR_SIZE):
box = boxes[i, j, 1:] * IMG_SIZE
label = boxes[i, j, 0]
if np.max(box) > 0:
img = cv2.rectangle(img, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])), (1, 0, 0), 1)
plt.axis('off')
plt.imshow(img)
plt.show()
for x, y in train_dataset:
plot_annot(x[0], y[0])
break
```
# Create a model
We build a CNN from several residual-style blocks (the skip connections here are implemented with concatenation) and finish with a conv layer that emits a $(4, 4, 5)$ prediction volume.
```
def conv_block(fs, x, activation = 'relu'):
conv = Conv2D(fs, (3, 3), padding = 'same', activation = activation)(x)
bnrm = BatchNormalization()(conv)
drop = Dropout(0.5)(bnrm)
return drop
def residual_block(fs, x):
y = conv_block(fs, x)
y = conv_block(fs, y)
y = conv_block(fs, y)
return Concatenate(axis = -1)([x, y])
inp = Input(shape = (IMG_SIZE, IMG_SIZE, 3))
block1 = residual_block(16, inp)
pool1 = MaxPooling2D(pool_size = (2, 2))(block1)
block2 = residual_block(32, pool1)
pool2 = MaxPooling2D(pool_size = (2, 2))(block2)
block3 = residual_block(64, pool2)
pool3 = MaxPooling2D(pool_size = (2, 2))(block3)
block4 = residual_block(128, pool3)
pool4 = MaxPooling2D(pool_size = (2, 2))(block4)
block5 = residual_block(256, pool4)
pool5 = MaxPooling2D(pool_size = (2, 2))(block5)
out = Conv2D(5, (3, 3), padding = 'same', activation = 'sigmoid')(pool5)
#create a model with one input and one output volume
model = tf.keras.models.Model(inputs = inp, outputs = out)
model.summary()
```
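The $(4, 4, 5)$ output shape is not arbitrary: the $128 \times 128$ input is halved by each of the five pooling layers, so the last feature map lines up exactly with the $4 \times 4$ anchor grid. A quick arithmetic check:

```python
# The input is 128x128 and each of the five MaxPooling2D layers halves it,
# so the final conv layer naturally emits a 4x4 grid of predictions.
IMG_SIZE = 128
size = IMG_SIZE
for _ in range(5):  # one halving per pooling layer
    size //= 2
print(size)  # 4, matching ANCHOR_SIZE and the (4, 4, 5) output volume
```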
# Loss and gradient
```
def loss(pred, y):
#extract the boxes that have values (i.e discard boxes that are zeros)
mask = y[...,0]
boxA = tf.boolean_mask(y, mask)
boxB = tf.boolean_mask(pred, mask)
prediction_error = tf.keras.losses.binary_crossentropy(y[...,0], pred[...,0])
detection_error = tf.losses.absolute_difference(boxA[...,1:], boxB[...,1:])
return tf.reduce_mean(prediction_error) + 10*detection_error
def grad(model, x, y):
#record the gradient
with tf.GradientTape() as tape:
pred = model(x)
value = loss(pred, y)
#return the gradient of the loss function with respect to the model variables
return tape.gradient(value, model.trainable_variables)
optimizer = tf.train.AdamOptimizer()
```
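The masking idea in `loss` above — score objectness over every grid cell, but penalize box offsets only where a ground-truth box exists — can be sketched with plain NumPy. This is a simplified stand-in, not the exact TensorFlow ops; `box_weight=10.0` mirrors the factor of 10 in the code:

```python
import numpy as np

def masked_loss(pred, y, box_weight=10.0):
    # Binary cross-entropy on the objectness channel, over every cell.
    eps = 1e-7
    p = np.clip(pred[..., 0], eps, 1 - eps)
    bce = -(y[..., 0] * np.log(p) + (1 - y[..., 0]) * np.log(1 - p))
    # L1 box error only on cells that actually contain a ground-truth box.
    mask = y[..., 0] > 0
    l1 = np.abs(y[mask][:, 1:] - pred[mask][:, 1:]).mean() if mask.any() else 0.0
    return bce.mean() + box_weight * l1

y = np.zeros((4, 4, 5))
y[1, 2] = [1.0, 0.1, 0.1, 0.4, 0.4]  # one face box in cell (1, 2)
pred = np.full((4, 4, 5), 0.1)       # a flat, uninformative prediction
print(masked_loss(pred, y))          # clearly positive
print(masked_loss(y, y))             # near zero for a perfect prediction
```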
# Training loop
```
epochs = 20
#initialize the history to record the metrics
train_loss_history = tfe.metrics.Mean('train_loss')
test_loss_history = tfe.metrics.Mean('test_loss')
best_loss = 1.0
for i in range(1, epochs + 1):
for x, y in train_dataset:
pred = model(x)
grads = grad(model, x, y)
        #update the parameters of the model
optimizer.apply_gradients(zip(grads, model.trainable_variables), global_step = tf.train.get_or_create_global_step())
#record the metrics of the current batch
loss_value = loss(pred, y)
        #calculate the metrics of the current batch
train_loss_history(loss_value)
#loop over the test dataset
for x, y in test_dataset:
pred = model(x)
        #calculate the metrics of the current batch
loss_value = loss(pred, y)
#record the values of the metrics
test_loss_history(loss_value)
#print out the results
print("epoch: [{0:d}/{1:d}], Train: [loss: {2:0.4f}], Test: [loss: {3:0.4f}]".
format(i, epochs, train_loss_history.result(),
test_loss_history.result()))
current_loss = test_loss_history.result().numpy()
#save the best model
if current_loss < best_loss:
best_loss = current_loss
print('saving best model with loss ', current_loss)
model.save('keras.h5')
#clear the history after each epoch
train_loss_history.init_variables()
test_loss_history.init_variables()
from tensorflow.keras.models import load_model
best_model = load_model('keras.h5')
```
# Visualization
```
#visualize the predicted bounding box
def plot_pred(img_id):
font = cv2.FONT_HERSHEY_SIMPLEX
raw = cv2.imread(img_id)[:,:,::-1]
h, w = (512, 512)
img = cv2.resize(raw, (IMG_SIZE, IMG_SIZE)).astype('float32')
img = np.expand_dims(img, 0)/255.
boxes = best_model(img).numpy()[0]
raw = cv2.resize(raw, (w, h))
for i in range(0, ANCHOR_SIZE):
for j in range(0, ANCHOR_SIZE):
box = boxes[i, j, 1:] * w
lbl = round(boxes[i, j, 0], 2)
if lbl > 0.5:
color = [random.randint(0, 255) for _ in range(0, 3)]
raw = cv2.rectangle(raw, (int(box[0]), int(box[1])), (int(box[2]), int(box[3])), color, 3)
raw = cv2.rectangle(raw, (int(box[0]), int(box[1])-30), (int(box[0])+70, int(box[1])), color, cv2.FILLED)
raw = cv2.putText(raw, str(lbl), (int(box[0]), int(box[1])), font, 1, (255, 255, 255), 2)
plt.axis('off')
plt.imshow(raw)
plt.show()
img_id = np.random.choice(test_files)
plot_pred(img_id)
!wget https://pmctvline2.files.wordpress.com/2018/08/friends-revival-jennifer-aniston.jpg -O test.jpg
plot_pred('test.jpg')
```
## Cryptocurrency with Python
**Ángel C.**
#### Libraries required:
* **datetime**
* **hashlib**
* **json**
* **flask**
* **flask-ngrok**
* **requests**
* **uuid**
* **urllib.parse**
# Install
```
!pip install flask==1.1.2
!pip install requests==2.25.1
!pip install flask-ngrok==0.0.25
```
# Cryptocurrency code
```
# Import the libraries
import datetime
import hashlib
import json
import requests
from uuid import uuid4
from flask import Flask, jsonify, request
from urllib.parse import urlparse
from flask_ngrok import run_with_ngrok
```
Essential methods included:
* New block creation
* Block hash generation
* Proof of Work (PoW) consensus protocol
* Blockchain validity verification
* Adding a transaction to the blockchain
* Adding a new node to the blockchain
* Replacing a node's chain with the longest valid one
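Before reading the full class, the PoW puzzle it uses can be exercised on its own. This standalone sketch uses the same target: a sha256 digest of `new_proof**2 - previous_proof**2` beginning with four zero hex digits:

```python
import hashlib

# Standalone sketch of the PoW puzzle: increment the nonce until the
# digest of str(new_proof**2 - previous_proof**2) starts with '0000'.
def proof_of_work(previous_proof):
    new_proof = 1
    while True:
        digest = hashlib.sha256(
            str(new_proof**2 - previous_proof**2).encode()).hexdigest()
        if digest[:4] == '0000':
            return new_proof
        new_proof += 1

nonce = proof_of_work(1)
digest = hashlib.sha256(str(nonce**2 - 1).encode()).hexdigest()
print(nonce, digest[:10])  # the digest starts with '0000' by construction
```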
```
class Blockchain:
def __init__(self):
""" Class constructor. """
self.chain = []
self.transactions = []
self.create_block(proof = 1, previous_hash = '0')
self.nodes = set()
def create_block(self, proof, previous_hash):
""" New block creation.
Arguments:
- proof: Nonce of current block.
- previous_hash: Hash of previous block.
Returns:
- block: New block created.
"""
block = { 'index' : len(self.chain)+1,
'timestamp' : str(datetime.datetime.now()),
'proof' : proof,
'previous_hash' : previous_hash,
'transactions' : self.transactions}
self.transactions = []
self.chain.append(block)
return block
def get_previous_block(self):
""" Obtaining previous block.
Returns:
            - The last block of the chain. """
return self.chain[-1]
def proof_of_work(self, previous_proof):
""" Consensus protocol Proof of Work (PoW).
Arguments:
- previous_proof: Nonce of previous block.
Returns:
- new_proof: New nonce obtained with PoW. """
new_proof = 1
check_proof = False
while check_proof is False:
hash_operation = hashlib.sha256(str(new_proof**2 - previous_proof**2).encode()).hexdigest()
if hash_operation[:4] == '0000':
check_proof = True
else:
new_proof += 1
return new_proof
def hash(self, block):
""" Calculation of hash for a block
Arguments:
- block: ID of a block in the blockchain.
Returns:
- hash_block: Returns hash of the block """
encoded_block = json.dumps(block, sort_keys = True).encode()
hash_block = hashlib.sha256(encoded_block).hexdigest()
return hash_block
def is_chain_valid(self, chain):
""" Determines if the blockchain is valid.
Arguments:
- chain: Blockchain including transactions information.
Returns:
- True/False: Boolean representing blockchain validity """
previous_block = chain[0]
block_index = 1
while block_index < len(chain):
block = chain[block_index]
if block['previous_hash'] != self.hash(previous_block):
return False
previous_proof = previous_block['proof']
proof = block['proof']
hash_operation = hashlib.sha256(str(proof**2 - previous_proof**2).encode()).hexdigest()
if hash_operation[:4] != '0000':
return False
previous_block = block
block_index += 1
return True
def add_transaction(self, sender, receiver, amount):
""" Transactions.
Arguments:
- sender: Who makes the transaction
- receiver: Who receives the transaction
- amount: Amount of coins sent
Returns:
- Index greater than last block
"""
self.transactions.append({'sender' : sender,
'receiver': receiver,
'amount' : amount})
previous_block = self.get_previous_block()
return previous_block['index'] + 1
def add_node(self, address):
""" New node in the blockchain.
Arguments:
- address: Address of the new node
"""
parsed_url = urlparse(address)
self.nodes.add(parsed_url.netloc)
def replace_chain(self):
""" Replacing the chain for the longest, valid one. """
network = self.nodes
longest_chain = None
max_length = len(self.chain)
for node in network:
response = requests.get(f'http://{node}/get_chain')
if response.status_code == 200:
length = response.json()['length']
chain = response.json()['chain']
if length > max_length and self.is_chain_valid(chain):
max_length = length
longest_chain = chain
if longest_chain:
self.chain = longest_chain
return True
return False
# Blocks mining
# Creation of web app
app = Flask(__name__)
run_with_ngrok(app)
# If it returns 500, update Flask, restart the kernel and run the next line
app.config['JSONIFY_PRETTYPRINT_REGULAR'] = False
# Creation of address on port 5000
node_address = str(uuid4()).replace('-', '')
# Creation of blockchain
blockchain = Blockchain()
@app.route('/mine_block', methods=['GET'])
def mine_block():
""" Mining of a new block """
previous_block = blockchain.get_previous_block()
previous_proof = previous_block['proof']
proof = blockchain.proof_of_work(previous_proof)
previous_hash = blockchain.hash(previous_block)
blockchain.add_transaction(sender = node_address, receiver = "Miner #2", amount = 10)
block = blockchain.create_block(proof, previous_hash)
response = {'message' : 'Woohoo! new block mined!',
'index' : block['index'],
'timestamp' : block['timestamp'],
'proof' : block['proof'],
'previous_hash' : block['previous_hash'],
'transactions' : block['transactions']}
return jsonify(response), 200
@app.route('/get_chain', methods=['GET'])
def get_chain():
""" Obtaining the complete chain """
response = {'chain' : blockchain.chain,
'length' : len(blockchain.chain)}
return jsonify(response), 200
@app.route('/is_valid', methods = ['GET'])
def is_valid():
""" Checking validity of the chain """
is_valid = blockchain.is_chain_valid(blockchain.chain)
if is_valid:
response = {'message' : 'All right! blockchain is valid.'}
else:
response = {'message' : 'Whoops! blockchain is NOT valid.'}
return jsonify(response), 200
@app.route('/add_transaction', methods = ['POST'])
def add_transaction():
""" Adding transaction to blockchain """
json = request.get_json()
transaction_keys = ['sender', 'receiver', 'amount']
if not all(key in json for key in transaction_keys):
return 'Some arguments are missing in the transaction', 400
index = blockchain.add_transaction(json['sender'], json['receiver'], json['amount'])
response = {'message': f'Transaction added to blockchain {index}'}
return jsonify(response), 201
# Decentralization of the blockchain
# Connecting new nodes
@app.route('/connect_node', methods = ['POST'])
def connect_node():
json = request.get_json()
nodes = json.get('nodes')
if nodes is None:
return 'No nodes to add', 400
for node in nodes:
blockchain.add_node(node)
response = {'message' : 'All nodes have been connected. AGCcoin Blockchain includes now the following nodes: ',
'total_nodes' : list(blockchain.nodes)}
return jsonify(response), 201
@app.route('/replace_chain', methods = ['GET'])
def replace_chain():
""" Replacing the chain for the longest one (if necessary) """
is_chain_replaced = blockchain.replace_chain()
if is_chain_replaced:
response = {'message' : 'The nodes contained different chains and have been updated.',
'new_chain': blockchain.chain}
else:
response = {'message' : 'All set. The blockchain included in all nodes is the current one.',
'actual_chain' : blockchain.chain}
return jsonify(response), 200
# Run the app from Google Colab
app.run()
# Run outside of Google Colab
#app.run(host = '0.0.0.0', port = 5001)
```
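To see concretely why `is_chain_valid` catches tampering, here is a minimal standalone sketch of hash-linked blocks, independent of the Flask app above (the `proof` values are arbitrary placeholders, not real PoW nonces):

```python
import hashlib
import json

# Hash a block exactly as the Blockchain class does: canonical JSON + sha256.
def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def links_ok(chain):
    # Every block must reference the hash of the block before it.
    return all(chain[i]['previous_hash'] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

genesis = {'index': 1, 'proof': 1, 'previous_hash': '0'}
second = {'index': 2, 'proof': 533, 'previous_hash': block_hash(genesis)}
chain = [genesis, second]

print(links_ok(chain))   # True: the link is intact
genesis['proof'] = 999   # tamper with an earlier block...
print(links_ok(chain))   # False: its hash changed, so the link breaks
```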