# Scikit-Learn singalong: EEG Eye State Classification
Author: Kevin Yang
Contact: kyang@h2o.ai
This tutorial replicates Erin LeDell's EEG eye-state demo using scikit-learn and pandas, and is intended to provide a comparison of the syntactic and performance differences between the scikit-learn and H2O implementations of Gradient Boosting Machines.
We'll be using pandas, NumPy and the `collections` module for most of the data exploration.
```
import pandas as pd
import numpy as np
from collections import Counter
```
## Download EEG Data
The following code downloads a copy of the [EEG Eye State](http://archive.ics.uci.edu/ml/datasets/EEG+Eye+State#) dataset. All data is from one continuous EEG measurement with the [Emotiv EEG Neuroheadset](https://emotiv.com/epoc.php). The duration of the measurement was 117 seconds. The eye state was detected via a camera during the EEG measurement and added later manually to the file after analysing the video frames. '1' indicates the eye-closed and '0' the eye-open state. All values are in chronological order with the first measured value at the top of the data.

Let's import the same dataset directly with pandas
```
csv_url = "http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv"
data = pd.read_csv(csv_url)
```
## Explore Data
Once we have loaded the data, let's take a quick look. First the dimension of the frame:
```
data.shape
```
Now let's take a look at the top of the frame:
```
data.head()
```
The frame contains 14 EEG sensor measurements, the response column `eyeDetection`, and a `split` column marking the train/validation/test partition. Let's take a look at the column names:
```
data.columns.tolist()
```
To select a subset of the columns to look at, typical Pandas indexing applies:
```
columns = ['AF3', 'eyeDetection', 'split']
data[columns].head(10)
```
Now let's select a single column, for example -- the response column, and look at the data more closely:
```
data['eyeDetection'].head()
```
It looks like a binary response, but let's validate that assumption:
```
data['eyeDetection'].unique()
```
We can count the number of distinct levels as well ('1' indicates the eye-closed and '0' the eye-open state):
```
data['eyeDetection'].nunique()
```
Since the `eyeDetection` column is the response we would like to predict, we may want to check if there are any missing values, so let's look for NAs using the `isnull` method. Each column of a pandas DataFrame is a Series, and most methods that apply to the DataFrame also apply to a single column.
```
data.isnull()
data['eyeDetection'].isnull()
```
The `isnull` method doesn't directly answer the question, "Does the eyeDetection column contain any NAs?"; rather, it returns False (0) if a cell is not missing and True (1) if it is. So if there are no missing values, summing over the whole column should produce 0. Let's take a look:
```
data['eyeDetection'].isnull().sum()
```
Great, no missing labels.
Out of curiosity, let's see if there is any missing data in this frame:
```
data.isnull().sum()
```
The next thing I may wonder about in a binary classification problem is the distribution of the response in the training data. Is one of the two outcomes under-represented in the training set? Many real datasets have what's called an "imbalance" problem, where one of the classes has far fewer training examples than the other class. Let's take a look at the distribution numerically.
```
Counter(data['eyeDetection'])
```
Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).
Let's calculate the percentage that each class represents:
```
n = data.shape[0] # total number of rows
np.array(list(Counter(data['eyeDetection']).values())) / float(n)
```
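With the real data, `data['eyeDetection'].value_counts(normalize=True)` gives the same breakdown in one call. As a minimal, self-contained sketch of the same arithmetic (using hypothetical toy labels rather than the EEG data):

```python
from collections import Counter

labels = [0, 0, 1, 0, 1]  # hypothetical toy labels standing in for eyeDetection
counts = Counter(labels)  # class -> count
n = len(labels)
fractions = {cls: cnt / n for cls, cnt in counts.items()}
print(fractions)  # {0: 0.6, 1: 0.4}
```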
### Split the DataFrame into train, validation and test sets
So far we have explored the original dataset (all rows). For the machine learning portion of this tutorial, we will break the dataset into three parts: a training set, a validation set and a test set.
If you wanted the splitting done for you, you could use scikit-learn's `train_test_split`. However, we have explicit splits that we want (for reproducibility reasons), so we can just subset the DataFrame on the `split` column to get the partitions we want.
```
train = data[data['split']=="train"]
train.shape
valid = data[data['split']=="valid"]
valid.shape
test = data[data['split']=="test"]
test.shape
```
## Machine Learning with scikit-learn
We will do a quick demo of scikit-learn -- trying to predict eye state (open/closed) from EEG data.
### Specify the predictor set and response
The response, `y`, is the `eyeDetection` column, and the predictors, `x`, are all the columns aside from `eyeDetection` and `split`.
```
y = 'eyeDetection'
x = data.columns.drop(['eyeDetection','split'])
```
### Import the GBM estimator
```
from sklearn.ensemble import GradientBoostingClassifier
```
### Train and Test a GBM model
```
model = GradientBoostingClassifier(n_estimators=100,
                                   max_depth=4,
                                   learning_rate=0.1)
X = train[x].reset_index(drop=True)
y = train[y].reset_index(drop=True)  # note: rebinds y from the column name to the label Series
model.fit(X, y)
print(model)
```
### Inspect Model
```
model.get_params()
```
### Model Performance on a Test Set
Note that scikit-learn metrics take arguments in the order `(y_true, y_pred)`, and performance should be measured on the held-out test set rather than the training data:
```
from sklearn.metrics import r2_score, roc_auc_score, mean_squared_error
X_test = test[x].reset_index(drop=True)
y_test = test['eyeDetection'].reset_index(drop=True)
y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]  # class-1 probabilities for AUC
r2_score(y_test, y_pred)
roc_auc_score(y_test, y_prob)
mean_squared_error(y_test, y_pred)
```
### Cross-validated Performance
```
# sklearn.cross_validation was removed in scikit-learn 0.20; use model_selection instead
from sklearn.model_selection import cross_val_score
cross_val_score(model, X, y, scoring='roc_auc', cv=5)
cross_val_score(model, valid[x].reset_index(drop=True), valid['eyeDetection'].reset_index(drop=True), scoring='roc_auc', cv=5)
```
# Introduction to MLOps
## Environment setup
```
import sys
import platform
print(f"Python version: {platform.python_version()}")
# compare version_info tuples of ints; string tuples sort lexicographically ("10" < "6")
assert sys.version_info >= (3, 6)
from IPython.display import YouTubeVideo
```
## The Machine Learning workflow
[The Machine Learning workflow (Red Hat Summit 2019 slides)](https://www.redhat.com/files/summit/session-assets/2019/T957A0.pdf)
### Codifying problems and metrics
- Main questions:
- What is the business objective?
- How to measure success?
- What are the technical, temporal and organisational constraints?
- Possible solutions: communicating with the product owner and stakeholders, knowing the product and client needs.
### Data collection and cleaning
- Main questions:
- Which data?
- Is it free/in adequate quantity/noisy/labelled/biased?
- Is it stable or evolving?
- Possible solutions: [public datasets](https://github.com/awesomedata/awesome-public-datasets), [DVC](https://dvc.org/), [Doccano](https://github.com/doccano/doccano), manual work.
### Feature engineering
- Main questions:
- What is the format of my input data?
- What features could potentially be useful for my models?
- How are they retrieved during training and production?
- Possible solutions: data pipelines, feature stores, domain experts.
[What is a feature store? (Tecton blog)](https://www.tecton.ai/blog/what-is-a-feature-store/)
### Model training and tuning
- Main questions:
- Which model(s)?
- How to optimize its performance?
- How to track model versions?
- Possible solutions: starting simple, hyperparameter tuning, [MLflow](https://mlflow.org).
### Model validation
- Main questions:
- Does the model address the business objective?
- How to measure its performance?
- Are there uptime constraints for my model?
- Possible solutions: testing set, [continuous integration](https://en.wikipedia.org/wiki/Continuous_integration), [memoization](https://en.wikipedia.org/wiki/Memoization).
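One of the solutions listed above, memoization, can be sketched in a few lines with Python's standard library; the scoring function below is a hypothetical stand-in for an expensive model call, not a real inference API:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def predict(features):
    """Hypothetical stand-in for an expensive model call; results are cached per input."""
    return sum(features) / len(features)

predict((1.0, 2.0, 3.0))          # computed on first call
predict((1.0, 2.0, 3.0))          # identical input: served from the cache
print(predict.cache_info().hits)  # 1
```

Note that cached inputs must be hashable (tuples rather than lists), and that caching is only safe while the underlying model version does not change.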
### Model deployment
- Main questions:
- How to serve my model?
- How to handle model versioning?
- How to handle scaling?
- Possible solutions: [FastAPI](https://fastapi.tiangolo.com/), [Docker](https://www.docker.com/), [Kubernetes](https://kubernetes.io/), [Cortex](https://www.cortex.dev/), [Databricks](https://databricks.com/), stress tests.
### Monitoring, validation
- Main questions:
- How to check model performance in production?
- How to prevent [model drifting](https://c3.ai/glossary/data-science/model-drift/)?
- How to explain model results?
- Possible solutions: [A/B testing](https://en.wikipedia.org/wiki/A/B_testing), [canary release](https://martinfowler.com/bliki/CanaryRelease.html), [explainability tools](https://github.com/EthicalML/awesome-production-machine-learning#explaining-black-box-models-and-datasets).
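As a sketch of how an A/B test outcome might be evaluated, the classic two-proportion z-test with a normal approximation can be written with the standard library only; the conversion counts below are made up for illustration:

```python
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # normal CDF via erf
    return z, p_value

# hypothetical counts: variant B converts 15% vs. 12% for A, over 1000 users each
z, p = two_proportion_z(120, 1000, 150, 1000)
```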
## From DevOps to MLOps
### Motivation
> "The real challenge isn't building an ML model, the challenge is building an integrated ML system and to continuously operate it in production."
### Elements of a ML system
[Elements of an ML system (Google Cloud Architecture Center)](https://cloud.google.com/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning)
### DevOps
DevOps is a set of practices that combines software development (*Dev*) and IT operations (*Ops*). Its goal is to shorten the product delivery loop while maintaining high quality.
It implies constant collaboration between the development and infrastructure teams, as well as the use of several tools to automate and streamline the push to production and monitoring of a project.

### MLOps
MLOps is the process of automating and productionalizing Machine Learning-based systems. MLOps integrates data- and model-specific tasks into the DevOps workflow cycle to obtain a unified release process. Like DevOps, it combines ML system development (*Dev*) and ML system operation (*Ops*).
[MLOps vs. DevOps (phData blog)](https://www.phdata.io/blog/mlops-vs-devops-whats-the-difference/)
[MLOps principles (ml-ops.org)](https://ml-ops.org/content/mlops-principles)
### MLOps core principles
Like DevOps, MLOps is built on the following [principles](https://ml-ops.org/content/mlops-principles):
- Automation.
- Continuous X (integration, delivery and training).
- Versioning.
- Testing.
### Manual process
[Manual ML process (Google Cloud MLOps guide)](https://cloud.google.com/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning#mlops_level_2_cicd_pipeline_automation)
### ML pipeline automation
[ML pipeline automation (Google Cloud MLOps guide)](https://cloud.google.com/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning#mlops_level_1_ml_pipeline_automation)
### MLOps resources
- [MLOps: Continuous delivery and automation pipelines in machine learning](https://cloud.google.com/architecture/mlops-continuous-delivery-and-automation-pipelines-in-machine-learning)
- [Machine Learning Operations](https://ml-ops.org/)
- [MLOps and DevOps: Why Data Makes It Different](https://www.oreilly.com/radar/mlops-and-devops-why-data-makes-it-different/)
- [Practitioners guide to MLOps](https://services.google.com/fh/files/misc/practitioners_guide_to_mlops_whitepaper.pdf)
- [Awesome MLOps](https://github.com/visenger/awesome-mlops)
- [The 2021 Machine Learning, AI and Data Landscape](https://mattturck.com/data2021/)
## Overview of some MLOps tools
### Flask & FastAPI
[Flask](https://flask.palletsprojects.com) is a fast and lightweight web framework written in Python. It can be used to deploy Machine Learning models as APIs.
[FastAPI](https://fastapi.tiangolo.com/) is another lightweight web framework for building APIs. It has become a popular choice for deploying Python ML models on the web.
### DVC
[DVC](https://dvc.org/) ("Data Version Control") is an open source project that extends [Git](https://git-scm.com/) version control to data and model files. DVC stores all of a project's data and models in a remote repository (all major cloud providers are supported). The code repository keeps only a small pointer file used to retrieve the actual artifacts from that remote location.
```
YouTubeVideo("UbL7VUpv1Bs")
```
### MLflow
[MLflow](https://mlflow.org) is an open source platform for managing the end-to-end machine learning lifecycle. It can:
- track ML experiments (training runs) to record and compare parameters and results;
- package ML code in a reusable, reproducible form;
- manage and deploy models from a variety of ML libraries to a variety of model serving and inference platforms.
MLflow can be used with any ML library, in any programming language. Python, R and Java are supported out-of-the-box. It is included in several ML cloud platforms, like [Databricks](https://databricks.com/product/managed-mlflow).
[Databricks launches MLflow (Datanami)](https://www.datanami.com/2018/06/05/databricks-launches-mlflow-to-simplify-machine-learning-lifecycle/)
# Reproducing Gaussian's PUHF/PMP2 Results
> Created: 2019-08-31; last modified: 2019-09-01
In this notebook, we will use PySCF together with NumPy to reproduce the PUHF and PMP2 energies computed by Gaussian, and give a brief account of the derivation of PUHF and PMP2.
```
from pyscf import gto, scf, mp
```
## Reference Results and System Definition
### Gaussian results
In Gaussian, the following input deck yields the PUHF/PMP2 energies:
```
#p UMP2(Full)/6-31G nosymm
H2O
3 4
O 0. 0. 0.
H 1. 0. 0.
H 0. 1. 0.
```
For this molecule, some of the important outputs are:
1. $E_\mathrm{UHF}$: -73.0451423839
2. $E_\mathrm{UMP2, corr}$: -0.02646719276
3. $E_\mathrm{UMP2}$: -73.071609576661
4. $\langle S_z \rangle$: 1.5
5. $\langle S^{2(0)} \rangle$: 3.7531
6. $\langle S^{2(0)} \rangle + \langle S^{2(1)} \rangle$: 3.7504
7. $E_\mathrm{PUHF}$: -73.046146318
8. $E_\mathrm{PMP2}$: -73.072180589
The full output file is provided as {download}`assets/PUHF_and_PMP2.out`; the relevant lines can be extracted with the following code:
```
with open("assets/PUHF_and_PMP2.out", "r") as output:
output_lines = output.read().split("\n")
for line_num, line_text in enumerate(output_lines):
if any([keyword in line_text for keyword in
["SCF Done", "EUMP2", "<S**2>", "(S**2,1)", "E(PMP2)"]]) \
and "Initial guess" not in line_text:
print("line {:03d}: {}".format(line_num, line_text))
```
Our goal is to reproduce all eight of these results essentially exactly.
### PySCF system definition
To obtain the same results as Gaussian, we define the same molecule, charge and multiplicity:
```
mol = gto.Mole()
mol.atom = """
O 0. 0. 0.
H 1. 0. 0.
H 0. 1. 0.
"""
mol.charge = 3
mol.spin = 3
mol.basis = "6-31G"
mol.build()
```
Compute the UHF energy with PySCF:
```
scf_eng = scf.UHF(mol)
scf_eng.conv_tol = 1e-10
scf_eng.run();
```
The result above should match $E_\mathrm{UHF}$ and $\langle S^{2(0)} \rangle$. $\langle S_z \rangle = 1.5$ is nearly trivial. However, we do not yet know how $\langle S^{2(0)} \rangle$ is produced.
Compute the UMP2 energy with PySCF:
```
mp2_eng = mp.UMP2(scf_eng)
mp2_eng.run();
```
The result above should match $E_\mathrm{UMP2, corr}$ and $E_\mathrm{UMP2}$.
The remaining task is therefore to reproduce:
1. $\langle S^{2(0)} \rangle$: 3.7531
2. $\langle S^{2(0)} \rangle + \langle S^{2(1)} \rangle$: 3.7504
3. $E_\mathrm{PUHF}$: -73.046146318
4. $E_\mathrm{PMP2}$: -73.072180589
### Variable definitions
First, we follow the notation used in most quantum-chemistry papers:
- $i, j$ denote occupied molecular orbitals
- $a, b$ denote virtual molecular orbitals
- $p, q$ denote arbitrary molecular orbitals
- $\mu, \nu$ denote arbitrary atomic orbitals
<center><b>Table 1. Molecule-related variables</b></center>

| Variable | Symbol | Meaning | Value or range |
|-|-|-|-|
| `nocc_a` | $n_\mathrm{occ}^\alpha$ | number of $\alpha$ electrons | $5$ |
| `nocc_b` | $n_\mathrm{occ}^\beta$ | number of $\beta$ electrons | $2$ |
| `N` | $N$ | total number of electrons | $7$ |
| `nmo` | $n_\mathrm{MO}$ | number of molecular orbitals | $13$ |
| `nao` | $n_\mathrm{AO}$ | number of atomic orbitals | $13$ |
| `S` | $S_{\mu \nu}$ | atomic-orbital overlap integrals | |
| `so_a` | | $\alpha$ occupied-orbital slice | $[0, 5)$ |
| `so_b` | | $\beta$ occupied-orbital slice | $[0, 2)$ |
| `sv_a` | | $\alpha$ virtual-orbital slice | $[5, 13)$ |
| `sv_b` | | $\beta$ virtual-orbital slice | $[2, 13)$ |
| `Sx` | $S_x$ | $x$ component of spin | $0$ |
| `Sy` | $S_y$ | $y$ component of spin | $0$ |
| `Sz` | $S_z$ | $z$ component of spin | $3/2$ |
<center><b>Table 2. UHF-related variables</b></center>

| Variable | Symbol | Meaning |
|-|-|-|
| `C_a` | $C_{\mu p}$ | $\alpha$ coefficient matrix |
| `C_b` | $C_{\mu \bar p}$ | $\beta$ coefficient matrix |
| `e_a` | $e_{p}$ | $\alpha$ orbital energies |
| `e_b` | $e_{\bar p}$ | $\beta$ orbital energies |
| `eo_a` | $e_{i}$ | $\alpha$ occupied-orbital energies |
| `eo_b` | $e_{\bar i}$ | $\beta$ occupied-orbital energies |
| `ev_a` | $e_{a}$ | $\alpha$ virtual-orbital energies |
| `ev_b` | $e_{\bar a}$ | $\beta$ virtual-orbital energies |
| `D2_aa` | $D_{ij}^{ab}$ | $\alpha, \alpha$ orbital-energy differences |
| `D2_ab` | $D_{i \bar j}^{a \bar b}$ | $\alpha, \beta$ orbital-energy differences |
| `D2_bb` | $D_{\bar i \bar j}^{\bar a \bar b}$ | $\beta, \beta$ orbital-energy differences |
<center><b>Table 3. UMP2-related variables</b></center>

| Variable | Symbol | Meaning |
|-|-|-|
| `t2_aa` | $t_{ij}^{ab}$ | $\alpha, \alpha$ MP2 amplitudes |
| `t2_ab` | $t_{i \bar j}^{a \bar b}$ | $\alpha, \beta$ MP2 amplitudes |
| `t2_bb` | $t_{\bar i \bar j}^{\bar a \bar b}$ | $\beta, \beta$ MP2 amplitudes |
| `D2_aa` | $D_{ij}^{ab}$ | $\alpha, \alpha$ MP2 amplitude denominators |
| `D2_ab` | $D_{i \bar j}^{a \bar b}$ | $\alpha, \beta$ MP2 amplitude denominators |
| `D2_bb` | $D_{\bar i \bar j}^{\bar a \bar b}$ | $\beta, \beta$ MP2 amplitude denominators |
The formulas above that need further specification are:
$$
S_z = \frac{1}{2} (n_\mathrm{occ}^\alpha - n_\mathrm{occ}^\beta)
$$
$$
D_{i \bar j}^{a \bar b} = e_i + e_{\bar j} - e_a - e_{\bar b}
$$
For the MP2 amplitude denominators, the other two spin cases $D_{ij}^{ab}$ and $D_{\bar i \bar j}^{\bar a \bar b}$ can be generated analogously.
```
# === Molecular
# --- Definition
nocc_a, nocc_b = mol.nelec
N = nocc_a + nocc_b
nmo = nao = mol.nao
S = mol.intor("int1e_ovlp")
# --- Derivative
so_a, so_b = slice(0, nocc_a), slice(0, nocc_b)
sv_a, sv_b = slice(nocc_a, nmo), slice(nocc_b, nmo)
Sx, Sy, Sz = 0, 0, 0.5 * (nocc_a - nocc_b)
# === UHF Calculation
# --- Definition
C_a, C_b = scf_eng.mo_coeff
e_a, e_b = scf_eng.mo_energy
# --- Derivative
eo_a, eo_b = e_a[so_a], e_b[so_b]
ev_a, ev_b = e_a[sv_a], e_b[sv_b]
D2_aa = eo_a[:, None, None, None] + eo_a[None, :, None, None] - ev_a[None, None, :, None] - ev_a[None, None, None, :]
D2_ab = eo_a[:, None, None, None] + eo_b[None, :, None, None] - ev_a[None, None, :, None] - ev_b[None, None, None, :]
D2_bb = eo_b[:, None, None, None] + eo_b[None, :, None, None] - ev_b[None, None, :, None] - ev_b[None, None, None, :]
# === MP2 Calculation
t2_aa, t2_ab, t2_bb = mp2_eng.t2
```
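The broadcasting pattern used above to assemble the four-index denominators can be illustrated with a tiny self-contained example; the orbital energies below are made-up toy values, not those of the actual calculation:

```python
import numpy as np

# toy orbital energies (hypothetical values for illustration only)
eo_a = np.array([-1.0, -0.5])   # alpha occupied
eo_b = np.array([-0.8])         # beta occupied
ev_a = np.array([0.3, 0.7])     # alpha virtual
ev_b = np.array([0.4])          # beta virtual

# D2_ab[i, j, a, b] = e_i + e_jbar - e_a - e_bbar, built by inserting axes and broadcasting
D2_ab = (eo_a[:, None, None, None] + eo_b[None, :, None, None]
         - ev_a[None, None, :, None] - ev_b[None, None, None, :])
print(D2_ab.shape)  # (2, 1, 2, 1)
```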
As a check on these four-index tensors, we compute the MP2 correlation energy $E_\mathrm{MP2, corr}$ as follows:
$$
E_\mathrm{MP2, corr} =
\frac{1}{4} \sum_{ijab} (t_{ij}^{ab})^2 D_{ij}^{ab} +
\frac{1}{4} \sum_{\bar i \bar j \bar a \bar b} (t_{\bar i \bar j}^{\bar a \bar b})^2 D_{\bar i \bar j}^{\bar a \bar b} +
\sum_{i \bar j a \bar b} (t_{i \bar j}^{a\bar b})^2 D_{i \bar j}^{a \bar b}
$$
```
(+ 0.25 * (t2_aa**2 * D2_aa).sum()
+ 0.25 * (t2_bb**2 * D2_bb).sum()
+ (t2_ab**2 * D2_ab).sum())
```
The $E_\mathrm{MP2, corr}$ reported by PySCF gives the same result:
```
mp2_eng.e_corr
```
## Computations Related to $\langle S^2 \rangle$
### MO-basis overlap matrix `S_pq` $S_{p \bar q}$
$$
S_{p \bar q} = \sum_{\mu \nu} C_{\mu p} S_{\mu \nu} C_{\nu \bar q}
$$
In quantum-mechanical notation, these matrix elements can be written as
$$
S_{p \bar q} = \int \phi_p (\boldsymbol{r}) \phi_{\bar q} (\boldsymbol{r}) \, \mathrm{d} \boldsymbol{r}
$$
Note that this integral is over the spatial coordinates only; it does not include the spin part.
```
S_pq = C_a.T @ S @ C_b
S_pq.shape
```
Later we will also use the occupied-occupied block `S_ij` $S_{i \bar j}$, the occupied-virtual block `S_ia` $S_{i \bar a}$, and the virtual-occupied block `S_ai` $S_{a \bar i} = S_{\bar i a}$ of this matrix:
```
S_ij, S_ia, S_ai = S_pq[so_a, so_b], S_pq[so_a, sv_b], S_pq[sv_a, so_b]
[S_ij.shape, S_ia.shape, S_ai.shape]
```
### `S2_0` $\langle S^{2(0)} \rangle$
$\langle S^{2(0)} \rangle$ is usually written as `<S^2>` or `<S**2>` in program output; in Gaussian's PUHF output it also appears as `(S**2,0)`. It is the $\langle S^2 \rangle_\mathrm{UHF}$ of the UHF wavefunction. Correspondingly, the UMP2 correction to $\langle S^2 \rangle$ will be denoted $\langle S^{2(1)} \rangle$.
Following Chen and Schlegel [^Chen-Schlegel.JCP.1994.101] Table 1, $0 \rightarrow 0$, or equivalently Szabo and Ostlund [^Szabo-Ostlund.Dover.1996] eq (2.271),
$$
\langle S^{2(0)} \rangle = \langle \Psi_0 | \hat S^2 | \Psi_0 \rangle = S_z (S_z + 1) + n_\mathrm{occ}^\beta - \sum_{i \bar j} (S_{i \bar j})^2
$$
```
S2_0 = Sz * (Sz + 1) + nocc_b - (S_ij**2).sum()
S2_0
```
The Gaussian reference value is 3.7531.
For later notational convenience, we define `L` here:
$$
L = \sum_{i \bar j} (S_{i \bar j})^2
$$
```
L = (S_ij**2).sum()
```
### `S2_1` $\langle S^{2(1)} \rangle$
$$
\begin{align}
\langle S^{2(1)} \rangle &= 2 \langle \Psi_0 | \hat S^2 | \Psi^{(1)} \rangle = 2 \sum_{i \bar j a \bar b} t_{i \bar j}^{a \bar b} \langle \Psi_0 | \hat S^2 | \Psi_{i \bar j}^{a \bar b} \rangle \\
&= - 2 \sum_{i \bar j a \bar b} t_{i \bar j}^{a \bar b} \langle i | \bar b \rangle \langle a | \bar j \rangle = - 2 \sum_{i \bar j a \bar b} t_{i \bar j}^{a \bar b} S_{i \bar b} S_{a \bar j}
\end{align}
$$
The first equality above is eq (5) of Chen and Schlegel [^Chen-Schlegel.JCP.1994.101]; the third follows from their Table 1, $0 \rightarrow \alpha \beta$ ($i, a$: $\alpha$; $j, b$: $\beta$).
One step in this derivation, the expansion of $| \Psi^{(1)} \rangle$, was skipped. We know that
$$
| \Psi^{(1)} \rangle = \hat T_2 | \Psi_0 \rangle
= \frac{1}{4} \sum_{ijab} t_{ij}^{ab} | \Psi_{ij}^{ab} \rangle + \frac{1}{4} \sum_{\bar i \bar j \bar a \bar b} t_{\bar i \bar j}^{\bar a \bar b} | \Psi_{\bar i \bar j}^{\bar a \bar b} \rangle + \sum_{i \bar j a \bar b} t_{i \bar j}^{a \bar b} | \Psi_{i \bar j}^{a \bar b} \rangle
$$
but since $\langle 0 | \hat S^2 | \Psi_{ij}^{ab} \rangle = \langle 0 | \hat S^2 | \Psi_{\bar i \bar j}^{\bar a \bar b} \rangle = 0$, only one of the three terms of $| \Psi^{(1)} \rangle$ survives in the second equality. For the action of $\hat S^2$ on UHF wavefunctions and orbitals, see the discussion around eq (5) of Schlegel [^Schlegel-Schlegel.JCP.1986.84].
```
S2_1 = - 2 * (t2_ab * S_ia[:, None, None, :] * S_ai.T[None, :, :, None]).sum()
S2_1
```
The UMP2-corrected value $\langle S^2 \rangle_\mathrm{UMP2} = \langle S^{2(0)} \rangle + \langle S^{2(1)} \rangle$ is therefore
```
S2_0 + S2_1
```
The Gaussian reference value is 3.7504.
### `S4SD` $\texttt{S4SD}$
The expression for `S4SD` is rather involved; we will refer to it simply as $\texttt{S4SD}$ rather than introduce any other notation:
$$
\begin{align}
\texttt{S4SD} = (n_\mathrm{occ}^\alpha - L) (n_\mathrm{occ}^\beta - L) + 2 L - 2 \sum_{i \bar j k \bar l} S_{i \bar j} S_{\bar j k} S_{k \bar l} S_{\bar l i} + \langle S^{2(0)} \rangle^2
\end{align}
$$
```
S4SD = (nocc_a - L) * (nocc_b - L) + 2 * L - 2 * (S_ij @ S_ij.T @ S_ij @ S_ij.T).trace() + S2_0**2
S4SD
```
This expression probably originates from Amos and Hall [^Amos-Hall.PRSLA.1961.263]. In the formula below their eq (7·02), a slightly higher-order projection yields the following estimate of $\langle S^2 \rangle$:
$$
\langle S^2 \rangle \simeq \langle S^{2(0)} \rangle + \frac{\texttt{S4SD} - \langle S^{2(0)} \rangle^2}{\langle S^{2(0)} \rangle - (S_z + 1) (S_z + 2)}
$$
The approximate $\langle S^2 \rangle$ obtained this way can be quite accurate; it is even closer to the exact value $3.75$ than $\langle S^{2(0)} \rangle + \langle S^{2(1)} \rangle$ is:
```
S2_0 + (S4SD - S2_0**2) / (S2_0 - (Sz + 1) * (Sz + 2))
```
Presumably, $\texttt{S4SD}$ exists in order to evaluate $\langle \tilde \Phi_1 | \tilde \Phi_1 \rangle = \langle \Phi_0 | A_{s + 1}^\dagger A_{s + 1} | \Phi_0 \rangle$ in eq (24) of Schlegel [^Schlegel-Schlegel.JCP.1986.84], though I am not yet certain this relation is correct. $\texttt{S4SD}$ will be used below when computing the PMP2 energy.
## Spin-Projection-Corrected Energies
### `EPUHF` $E_\mathrm{PUHF}$
According to Schlegel [^Schlegel-Schlegel.JCP.1986.84] eq (22), the PUHF energy can be expressed as
$$
E_\mathrm{PUHF} = E_\mathrm{UHF} + \frac{1}{\langle \Psi_0 | \hat P_s | \Psi_0 \rangle} \sum_{i \bar j a \bar b} \langle \Psi_0 | \hat H | \Psi_{i \bar j}^{a \bar b} \rangle \langle \Psi_{i \bar j}^{a \bar b} | \hat P_s | \Psi_0 \rangle
$$
where $\hat P_s$ is the Löwdin projection operator [^Lowdin-Lowdin.PR.1955.97] eq (7),
$$
\hat P_s = \prod_{k \neq s}^{N / 2} \frac{\hat S^2 - k (k + 1)}{s (s + 1) - k (k + 1)}
$$
which projects a spin-contaminated wavefunction onto the pure state with spin quantum number $s$. In practice, the approximation $\hat A_{s + 1} \simeq \hat P_s$ is usually employed instead; for a discussion, see section V.A of Rossky and Karplus [^Rossky-Karplus.JCP.1980.73]. The form below follows Schlegel [^Schlegel-Schlegel.JCP.1986.84] eq (14), with $s$ generally taken to be $S_z$:
$$
\hat A_{s + 1} = \frac{\hat S^2 - (s + 1)(s + 2)}{\langle S^{2(0)} \rangle - (s + 1)(s + 2)}
$$
An obvious property of $\hat A_{s + 1}$ is $\langle \Psi_0 | \hat A_{s + 1} | \Psi_0 \rangle = 1$.
For programming convenience, we define the temporary variable `Y`:
$$
Y = \langle S^{2(0)} \rangle - (S_z + 1) (S_z + 2)
$$
Then `D_EPUHF` is
$$
\begin{align}
\Delta E_\mathrm{PUHF} &= \sum_{i \bar j a \bar b} t_{i \bar j}^{a \bar b} D_{i \bar j}^{a \bar b} \cdot \langle \Psi_{i \bar j}^{a \bar b} | \frac{\hat S^2}{Y} | \Psi_0 \rangle \\
&= - \frac{1}{Y} \sum_{i \bar j a \bar b} t_{i \bar j}^{a \bar b} D_{i \bar j}^{a \bar b} S_{i \bar b} S_{\bar j a}
\end{align}
$$
```
Y = S2_0 - (Sz + 1) * (Sz + 2)
D_EPUHF = - 1 / Y * (t2_ab * D2_ab * S_ia[:, None, None, :] * S_ai.T[None, :, :, None]).sum()
D_EPUHF
```
Hence $E_\mathrm{PUHF} = E_\mathrm{UHF} + \Delta E_\mathrm{PUHF}$:
```
scf_eng.e_tot + D_EPUHF
```
The Gaussian reference value is -73.046146318.
### `EPMP2` $E_\mathrm{PMP2}$
According to Schlegel [^Schlegel-Schlegel.JCP.1986.84] eq (24), the PMP2 energy can be expressed as
$$
\begin{align}
\Delta E_\mathrm{PMP2} = \Delta E_\mathrm{PUHF} \left( 1 - \frac{\langle \Phi^{(1)} | \hat A_{s + 1} | \Psi_0 \rangle}{\langle \Phi_0 | \hat A_{s + 1}^\dagger \hat A_{s + 1} | \Psi_0 \rangle} \right)
\end{align}
$$
For the fraction in the expression above, the numerator can be written as
$$
\begin{align}
\langle \Phi^{(1)} | \hat A_{s + 1} | \Psi_0 \rangle
= \langle \Phi^{(1)} | \frac{\hat S^2}{Y} - \frac{(s + 1)(s + 2)}{Y} | \Psi_0 \rangle = \frac{1}{2} \frac{\langle S^{2(1)} \rangle}{Y}
\end{align}
$$
while for the denominator, referring back to the discussion of $\texttt{S4SD}$,
$$
\langle \Phi_0 | \hat A_{s + 1}^\dagger \hat A_{s + 1} | \Psi_0 \rangle \simeq \langle S^2 \rangle - \langle S^{2(0)} \rangle = \frac{\texttt{S4SD} - \langle S^{2(0)} \rangle^2}{Y^2}
$$
though the author cannot vouch for the correctness of this claim.
Substituting the numerator and denominator into the expression for $\Delta E_\mathrm{PMP2}$ gives `D_EPMP2`:
$$
\Delta E_\mathrm{PMP2} = \Delta E_\mathrm{PUHF} \left( 1 - \frac{1}{2} \frac{\langle S^{2(1)} \rangle \cdot Y}{\texttt{S4SD} - \langle S^{2(0)} \rangle^2} \right)
$$
```
D_EPMP2 = D_EPUHF * (1 - 0.5 * S2_1 * Y / (S4SD - S2_0**2))
D_EPMP2
```
Hence $E_\mathrm{PMP2} = E_\mathrm{UMP2} + \Delta E_\mathrm{PMP2}$:
```
mp2_eng.e_tot + D_EPMP2
```
The Gaussian reference value is -73.072180589.
This completes the reproduction of Gaussian's PUHF and PMP2 energies using PySCF and NumPy.
## Revision History
- 2019/08/30: document completed, based on a note from 2019/08/13.
- 2019/09/01: additional derivations added.
[^Chen-Schlegel.JCP.1994.101]: Chen, W.; Schlegel, H. B. Evaluation of S2 for Correlated Wave Functions and Spin Projection of Unrestricted Moller–Plesset Perturbation Theory. *J. Chem. Phys.* **1994**, *101* (7), 5957–5968. doi: [10.1063/1.467312](https://doi.org/10.1063/1.467312).
[^Szabo-Ostlund.Dover.1996]: Szabo, A.; Ostlund, N. S. *Modern Quantum Chemistry: Introduction to Advanced Electronic Structure Theory (Dover Books on Chemistry)*; Dover Publications, 1996.
[^Schlegel-Schlegel.JCP.1986.84]: Schlegel, H. B. Potential Energy Curves Using Unrestricted Møller–Plesset Perturbation Theory with Spin Annihilation. *J. Chem. Phys.* **1986**, *84* (8), 4530–4534. doi: [10.1063/1.450026](https://doi.org/10.1063/1.450026).
[^Amos-Hall.PRSLA.1961.263]: Amos, A. T.; Hall, G. G. Single Determinant Wave Functions. *Proc. R. Soc. Lond. A* **1961**, *263* (1315), 483–493. doi: [10.1098/rspa.1961.0175](https://doi.org/10.1098/rspa.1961.0175).
[^Lowdin-Lowdin.PR.1955.97]: Löwdin, P.-O. Quantum Theory of Many-Particle Systems. III. Extension of the Hartree-Fock Scheme to Include Degenerate Systems and Correlation Effects. *Phys. Rev.* **1955**, *97* (6), 1509–1520. doi: [10.1103/physrev.97.1509](https://doi.org/10.1103/physrev.97.1509).
[^Rossky-Karplus.JCP.1980.73]: Rossky, P. J.; Karplus, M. Spin Dependent Properties of Perturbed Wave Functions: An Analytic Comparison of the Exact, UHF, and Spin-Projected UHF States. *J. Chem. Phys.* **1980**, *73* (12), 6196–6214. doi: [10.1063/1.440115](https://doi.org/10.1063/1.440115).
```
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adjective2 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Adjective2 -> "significatif"
Trigger_Rule -> "|forward|trigger|negated|10|Group[377]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Determiner2 Noun3 Adposition4 Trigger_Rule
Adverb1 -> "nie"
Determiner2 -> "les"
Noun3 -> "symptômes" | "marque" | "indice" | "manifestation" | "présage" | "prodrome" | "syndrome" | "diagnostique" | "stigmate" | "signal" | "signe prognostique" | "signe avant-coureur" | "diagnostic" | "symptôme" | "affection"
Adposition4 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[379]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Subordinating_conjunction2 Pronoun3 Pronoun4 Verb5 Determiner6 Noun7 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Subordinating_conjunction2 -> "qu'"
Pronoun3 -> "il"
Pronoun4 -> "y"
Verb5 -> "avait"
Determiner6 -> "du"
Noun7 -> "courant" | "habituel" | "fréquent" | "connu" | "répandu" | "normal" | "classique" | "présent" | "commune" | "régulier" | "en cours"
Trigger_Rule -> "|forward|trigger|negated|10|Group[381]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Subordinating_conjunction2 Pronoun3 Pronoun4 Verb5 Determiner6 Noun7 Adposition8 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Subordinating_conjunction2 -> "qu'"
Pronoun3 -> "il"
Pronoun4 -> "y"
Verb5 -> "ait"
Determiner6 -> "des"
Noun7 -> "antécédents" | "antérieur" | "préalable" | "préexistant" | "passé" | "hérédité" | "précurseur" | "précédente" | "antériorité" | "préliminaire" | "condition" | "premier" | "antan" | "ancienneté" | "antécédence"
Adposition8 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[383]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Pronoun2 Pronoun3 Verb4 Determiner5 Adjective6 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Pronoun2 -> "il"
Pronoun3 -> "y"
Verb4 -> "avait"
Determiner5 -> "aucune"
Adjective6 -> "récente"
Trigger_Rule -> "|forward|trigger|negated|10|Group[385]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Subordinating_conjunction2 Pronoun3 Pronoun4 Pronoun5 Verb6 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Subordinating_conjunction2 -> "qu'"
Pronoun3 -> "il"
Pronoun4 -> "y"
Pronoun5 -> "en"
Verb6 -> "ait"
Trigger_Rule -> "|forward|trigger|negated|10|Group[387]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Subordinating_conjunction2 Pronoun3 Pronoun4 Auxiliary5 Adverb6 Verb7 Determiner8 Noun9 Adposition10 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Subordinating_conjunction2 -> "qu'"
Pronoun3 -> "il"
Pronoun4 -> "y"
Auxiliary5 -> "ait"
Adverb6 -> "jamais"
Verb7 -> "eu"
Determiner8 -> "l'"
Noun9 -> "histoire" | "passé" | "souvenir" | "historique"
Adposition10 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[389]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Pronoun2 Pronoun3 Verb4 Adverb5 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Pronoun2 -> "il"
Pronoun3 -> "y"
Verb4 -> "avait"
Adverb5 -> "jamais"
Trigger_Rule -> "|forward|trigger|negated|10|Group[391]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Subordinating_conjunction2 Pronoun3 Pronoun4 Verb5 Determiner6 Noun7 Adposition8 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Subordinating_conjunction2 -> "qu'"
Pronoun3 -> "il"
Pronoun4 -> "y"
Verb5 -> "avait"
Determiner6 -> "une"
Noun7 -> "histoire" | "passé" | "souvenir" | "historique"
Adposition8 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[393]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Subordinating_conjunction2 Pronoun3 Pronoun4 Verb5 Determiner6 Noun7 Adposition8 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Subordinating_conjunction2 -> "qu'"
Pronoun3 -> "il"
Pronoun4 -> "y"
Verb5 -> "avait"
Determiner6 -> "des"
Noun7 -> "problèmes" | "question" | "ennui" | "souci" | "complication" | "dysfonctionnement" | "disfonctionnement" | "soucis" | "tracas" | "problème" | "incident" | "couac"
Adposition8 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[395]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Subordinating_conjunction2 Pronoun3 Pronoun4 Verb5 Determiner6 Noun7 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Subordinating_conjunction2 -> "qu'"
Pronoun3 -> "il"
Pronoun4 -> "y"
Verb5 -> "avait"
Determiner6 -> "des"
Noun7 -> "problèmes" | "question" | "ennui" | "souci" | "complication" | "dysfonctionnement" | "disfonctionnement" | "soucis" | "tracas" | "problème" | "incident" | "couac"
Trigger_Rule -> "|forward|trigger|negated|10|Group[397]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Subordinating_conjunction2 Pronoun3 Pronoun4 Auxiliary5 Verb6 Determiner7 Noun8 Adposition9 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Subordinating_conjunction2 -> "qu'"
Pronoun3 -> "il"
Pronoun4 -> "y"
Auxiliary5 -> "ait"
Verb6 -> "eu"
Determiner7 -> "des"
Noun8 -> "antécédents" | "antérieur" | "préalable" | "préexistant" | "passé" | "hérédité" | "précurseur" | "précédente" | "antériorité" | "préliminaire" | "condition" | "premier" | "antan" | "ancienneté" | "antécédence"
Adposition9 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[399]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Pronoun2 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Pronoun2 -> "cela"
Trigger_Rule -> "|forward|trigger|negated|10|Group[413]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adjective2 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Adjective2 -> "vrai"
Trigger_Rule -> "|forward|trigger|negated|10|Group[415]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Verb1 Determiner2 Noun3 Adposition4 Adjective5 Trigger_Rule
Verb1 -> "refuse" | "répudier" | "dédaigner" | "dénier" | "résister" | "rebeller" | "récuser" | "opposer" | "exclure" | "repousser" | "dit non" | "retourner" | "écarter" | "boycotter" | "débouter"
Determiner2 -> "l'"
Noun3 -> "utilisation" | "emploi" | "maniement" | "usage" | "exploitation" | "consommation" | "appel"
Adposition4 -> "d'"
Adjective5 -> "autres"
Trigger_Rule -> "|forward|trigger|negated|10|Group[417]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Determiner2 Noun3 Adposition4 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Determiner2 -> "l'"
Noun3 -> "utilisation" | "emploi" | "maniement" | "usage" | "exploitation" | "consommation" | "appel"
Adposition4 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[419]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Trigger_Rule
Noun1 -> "nie" | "refuser" | "désavouer" | "contredire" | "renier" | "discuter" | "réfuter" | "contester" | "critiquer" | "dénier" | "rejeter" | "contesté" | "rejeté" | "dénié"
Trigger_Rule -> "|forward|trigger|negated|10|Group[421]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Trigger_Rule
Adverb1 -> "nier"
Trigger_Rule -> "|forward|trigger|negated|10|Group[421]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
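Every grammar ends with a `Trigger_Rule` terminal such as `|forward|trigger|negated|10|Group[421]|PRE-VALIDATION`. A small parser makes the pipe-delimited fields explicit; the field names below are guesses inferred from the visible values (direction, rule type, context modifier, window size, group id, pipeline stage), not documented semantics:

```python
def parse_trigger_rule(rule):
    """Split a pipe-delimited trigger-rule string into named fields.

    Field names are assumptions inferred from the values seen in the
    grammars, e.g. '|forward|trigger|negated|10|Group[421]|PRE-VALIDATION'.
    """
    _, direction, rule_type, modifier, window, group, stage = rule.split('|')
    return {
        'direction': direction,   # forward / backward / both
        'rule_type': rule_type,   # trigger / termination / pseudo
        'modifier': modifier,     # negated / uncertain / historical / nonpatient
        'window': int(window),    # presumably a token window size
        'group': group,           # e.g. 'Group[421]'
        'stage': stage,           # e.g. 'PRE-VALIDATION'
    }
```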
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Proper_noun1 Trigger_Rule
Proper_noun1 -> "Denise"
Trigger_Rule -> "|forward|trigger|negated|10|Group[421]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Trigger_Rule
Noun1 -> "Nier"
Trigger_Rule -> "|forward|trigger|negated|10|Group[421]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Auxiliary1 Verb2 Trigger_Rule
Auxiliary1 -> "a"
Verb2 -> "eu"
Trigger_Rule -> "|forward|termination|negated|10|Group[429]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Auxiliary2 Verb3 Determiner4 Noun5 Adposition6 Trigger_Rule
Adverb1 -> "n'"
Auxiliary2 -> "a"
Verb3 -> "eu"
Determiner4 -> "aucun"
Noun5 -> "épisode" | "moment" | "phase" | "événement" | "évènement" | "séquence" | "suite" | "période"
Adposition6 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[430]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Verb2 Adverb3 Determiner4 Noun5 Adposition6 Trigger_Rule
Adverb1 -> "n'"
Verb2 -> "a"
Adverb3 -> "plus"
Determiner4 -> "d'"
Noun5 -> "épisodes" | "fait" | "histoire" | "phase" | "accident" | "événement" | "époque" | "évènement" | "séquence" | "suite" | "mésaventure" | "période"
Adposition6 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[432]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Auxiliary2 Adverb3 Verb4 Adposition5 Noun6 Trigger_Rule
Adverb1 -> "n'"
Auxiliary2 -> "a"
Adverb3 -> "pas"
Verb4 -> "eu"
Adposition5 -> "de"
Noun6 -> "problèmes" | "question" | "ennui" | "souci" | "complication" | "dysfonctionnement" | "disfonctionnement" | "soucis" | "tracas" | "problème" | "incident" | "couac"
Trigger_Rule -> "|forward|trigger|negated|10|Group[434]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Auxiliary2 Adverb3 Verb4 Adposition5 Noun6 Adposition7 Trigger_Rule
Adverb1 -> "n'"
Auxiliary2 -> "a"
Adverb3 -> "pas"
Verb4 -> "eu"
Adposition5 -> "de"
Noun6 -> "problèmes" | "question" | "ennui" | "souci" | "complication" | "dysfonctionnement" | "disfonctionnement" | "soucis" | "tracas" | "problème" | "incident" | "couac"
Adposition7 -> "avec"
Trigger_Rule -> "|forward|trigger|negated|10|Group[434]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Auxiliary2 Verb3 Determiner4 Noun5 Adposition6 Trigger_Rule
Adverb1 -> "n'"
Auxiliary2 -> "a"
Verb3 -> "signalé" | "indiqué" | "notifié" | "découvert" | "montré" | "dévoilé" | "souligné" | "informé" | "révélé" | "signifié" | "déclaré" | "averti" | "rapporté" | "témoigné" | "alerté"
Determiner4 -> "aucun"
Noun5 -> "problème" | "ennui" | "souci" | "complication" | "histoire" | "anomalie" | "dysfonctionnement" | "disfonctionnement" | "soucis" | "mal" | "trouble"
Adposition6 -> "avec"
Trigger_Rule -> "|forward|trigger|negated|10|Group[434]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Auxiliary2 Adverb3 Verb4 Determiner5 Trigger_Rule
Adverb1 -> "n'"
Auxiliary2 -> "a"
Adverb3 -> "pas"
Verb4 -> "observé" | "remarquer" | "noter" | "découvrir" | "apercevoir" | "signaler" | "voir" | "relever" | "enregistrer" | "déceler" | "détecter" | "souligner" | "mentionner"
Determiner5 -> "un"
Trigger_Rule -> "|forward|trigger|negated|10|Group[436]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Auxiliary2 Adverb3 Verb4 Trigger_Rule
Adverb1 -> "n'"
Auxiliary2 -> "a"
Adverb3 -> "pas"
Verb4 -> "exclu" | "refusé" | "repoussé" | "rejeté" | "éliminé" | "proscrit"
Trigger_Rule -> "|backward|trigger|uncertain|30|Group[438]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Pronoun1 Auxiliary2 Verb3 Adposition4 Trigger_Rule
Pronoun1 -> "l'"
Auxiliary2 -> "a"
Verb3 -> "exclue" | "mise à l'écart" | "inenvisageable" | "irréalisable" | "évincé"
Adposition4 -> "contre"
Trigger_Rule -> "|forward|trigger|negated|10|Group[439]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Auxiliary1 Verb2 Adposition3 Trigger_Rule
Auxiliary1 -> "a"
Verb2 -> "exclu" | "refusé" | "repoussé" | "rejeté" | "éliminé" | "proscrit"
Adposition3 -> "pour"
Trigger_Rule -> "|forward|trigger|negated|10|Group[439]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Auxiliary1 Verb2 Adposition3 Trigger_Rule
Auxiliary1 -> "a"
Verb2 -> "exclu" | "refusé" | "repoussé" | "rejeté" | "éliminé" | "proscrit"
Adposition3 -> "contre"
Trigger_Rule -> "|forward|trigger|negated|10|Group[439]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Auxiliary1 Verb2 Trigger_Rule
Auxiliary1 -> "a"
Verb2 -> "exclu" | "refusé" | "repoussé" | "rejeté" | "éliminé" | "proscrit"
Trigger_Rule -> "|forward|trigger|negated|10|Group[439]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Pronoun1 Auxiliary2 Verb3 Adposition4 Trigger_Rule
Pronoun1 -> "l'"
Auxiliary2 -> "a"
Verb3 -> "exclue" | "mise à l'écart" | "inenvisageable" | "irréalisable" | "évincé"
Adposition4 -> "pour"
Trigger_Rule -> "|forward|trigger|negated|10|Group[439, 441]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Pronoun1 Auxiliary2 Verb3 Trigger_Rule
Pronoun1 -> "l'"
Auxiliary2 -> "a"
Verb3 -> "exclue" | "mise à l'écart" | "inenvisageable" | "irréalisable" | "évincé"
Trigger_Rule -> "|forward|trigger|negated|10|Group[439, 443]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Pronoun1 Auxiliary2 Verb3 Adposition4 Trigger_Rule
Pronoun1 -> "l'"
Auxiliary2 -> "a"
Verb3 -> "exclu" | "refusé" | "repoussé" | "rejeté" | "éliminé" | "proscrit"
Adposition4 -> "contre"
Trigger_Rule -> "|forward|trigger|negated|10|Group[439, 445]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Pronoun1 Auxiliary2 Verb3 Adposition4 Trigger_Rule
Pronoun1 -> "l'"
Auxiliary2 -> "a"
Verb3 -> "exclu" | "refusé" | "repoussé" | "rejeté" | "éliminé" | "proscrit"
Adposition4 -> "pour"
Trigger_Rule -> "|forward|trigger|negated|10|Group[439, 447]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Pronoun1 Auxiliary2 Verb3 Trigger_Rule
Pronoun1 -> "l'"
Auxiliary2 -> "a"
Verb3 -> "exclu" | "refusé" | "repoussé" | "rejeté" | "éliminé" | "proscrit"
Trigger_Rule -> "|forward|trigger|negated|10|Group[439, 449]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Auxiliary1 Verb2 Determiner3 Noun4 Trigger_Rule
Auxiliary1 -> "a"
Verb2 -> "exclu" | "refusé" | "repoussé" | "rejeté" | "éliminé" | "proscrit"
Determiner3 -> "le"
Noun4 -> "patient" | "client" | "souffrant" | "sujet"
Trigger_Rule -> "|forward|trigger|negated|10|Group[457]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Auxiliary1 Verb2 Determiner3 Noun4 Adposition5 Trigger_Rule
Auxiliary1 -> "a"
Verb2 -> "exclu" | "refusé" | "repoussé" | "rejeté" | "éliminé" | "proscrit"
Determiner3 -> "le"
Noun4 -> "patient" | "client" | "souffrant" | "sujet"
Adposition5 -> "pour"
Trigger_Rule -> "|forward|trigger|negated|10|Group[457]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Auxiliary1 Verb2 Determiner3 Noun4 Adposition5 Trigger_Rule
Auxiliary1 -> "a"
Verb2 -> "exclu" | "refusé" | "repoussé" | "rejeté" | "éliminé" | "proscrit"
Determiner3 -> "le"
Noun4 -> "patient" | "client" | "souffrant" | "sujet"
Adposition5 -> "contre"
Trigger_Rule -> "|forward|trigger|negated|10|Group[457]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Auxiliary1 Verb2 Trigger_Rule
Auxiliary1 -> "a"
Verb2 -> "montré" | "affiché" | "exhibition" | "état" | "démonstration" | "exposition" | "livré" | "dénudé" | "exhibé" | "dénoté" | "confirmé" | "expliqué" | "découvert" | "affecté" | "dévoilé"
Trigger_Rule -> "|forward|termination|negated|10|Group[463]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Auxiliary2 Adverb3 Verb4 Trigger_Rule
Adverb1 -> "n'"
Auxiliary2 -> "a"
Adverb3 -> "pas"
Verb4 -> "exprimer" | "manifester" | "dire" | "signifier" | "témoigner" | "montrer" | "expliquer" | "traduire" | "extérioriser" | "révéler" | "formuler" | "exposer" | "afficher" | "émettre" | "signaler"
Trigger_Rule -> "|forward|trigger|negated|10|Group[464]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Auxiliary2 Verb3 Determiner4 Trigger_Rule
Adverb1 -> "n'"
Auxiliary2 -> "a"
Verb3 -> "exprimé" | "manifester" | "dire" | "signifier" | "témoigner" | "montrer" | "expliquer" | "traduire" | "extérioriser" | "révéler" | "formuler" | "exposer" | "afficher" | "émettre" | "signaler"
Determiner4 -> "aucune"
Trigger_Rule -> "|forward|trigger|negated|10|Group[466]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Auxiliary2 Verb3 Determiner4 Trigger_Rule
Adverb1 -> "n'"
Auxiliary2 -> "a"
Verb3 -> "signalé" | "indiqué" | "notifié" | "découvert" | "montré" | "dévoilé" | "souligné" | "informé" | "révélé" | "signifié" | "déclaré" | "averti" | "rapporté" | "témoigné" | "alerté"
Determiner4 -> "aucun"
Trigger_Rule -> "|forward|trigger|negated|10|Group[472]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adjective1 Verb2 Adverb3 Trigger_Rule
Adjective1 -> "différentiel"
Verb2 -> "comprend" | "consister" | "posséder" | "contient"
Adverb3 -> "également"
Trigger_Rule -> "|forward|trigger|uncertain|30|Group[475]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Pronoun1 Adjective2 Verb3 Adverb4 Trigger_Rule
Pronoun1 -> "diagnostics"
Adjective2 -> "différentiels"
Verb3 -> "comprend" | "consister" | "posséder" | "contient"
Adverb4 -> "également"
Trigger_Rule -> "|forward|trigger|uncertain|30|Group[476]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Determiner1 Noun2 Adjective3 Verb4 Trigger_Rule
Determiner1 -> "les"
Noun2 -> "diagnostics" | "diagnostique" | "symptomatique" | "symptôme" | "diagnose" | "diagnostic"
Adjective3 -> "différentiels"
Verb4 -> "comprennent" | "comporter" | "traduire" | "expliquer"
Trigger_Rule -> "|forward|trigger|uncertain|30|Group[477]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adjective1 Verb2 Trigger_Rule
Adjective1 -> "différentiel"
Verb2 -> "comprend" | "consister" | "posséder" | "contient"
Trigger_Rule -> "|forward|trigger|uncertain|30|Group[478]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Verb2 Adverb3 Verb4 Adposition5 Noun6 Adposition7 Trigger_Rule
Adverb1 -> "ne"
Verb2 -> "semble"
Adverb3 -> "pas"
Verb4 -> "avoir"
Adposition5 -> "de"
Noun6 -> "problèmes" | "question" | "ennui" | "souci" | "complication" | "dysfonctionnement" | "disfonctionnement" | "soucis" | "tracas" | "problème" | "incident" | "couac"
Adposition7 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[479]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Verb2 Adverb3 Verb4 Adposition5 Noun6 Trigger_Rule
Adverb1 -> "ne"
Verb2 -> "semble"
Adverb3 -> "pas"
Verb4 -> "avoir"
Adposition5 -> "d'"
Noun6 -> "étrange" | "bizarre" | "extraordinaire" | "curieux" | "insolite" | "exceptionnel" | "incroyable" | "surprenant" | "étonnant" | "énigmatique" | "inaccoutumé" | "inexplicable" | "incompréhensible" | "inquiétant" | "invraisemblable"
Trigger_Rule -> "|forward|trigger|negated|10|Group[479]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Verb2 Adverb3 Verb4 Adposition5 Trigger_Rule
Adverb1 -> "ne"
Verb2 -> "semble"
Adverb3 -> "pas"
Verb4 -> "avoir"
Adposition5 -> "de"
Trigger_Rule -> "|forward|trigger|negated|10|Group[479]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Verb2 Adverb3 Verb4 Adposition5 Noun6 Adposition7 Trigger_Rule
Adverb1 -> "ne"
Verb2 -> "semble"
Adverb3 -> "pas"
Verb4 -> "avoir"
Adposition5 -> "de"
Noun6 -> "problèmes" | "question" | "ennui" | "souci" | "complication" | "dysfonctionnement" | "disfonctionnement" | "soucis" | "tracas" | "problème" | "incident" | "couac"
Adposition7 -> "avec"
Trigger_Rule -> "|forward|trigger|negated|10|Group[479]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Verb2 Adverb3 Trigger_Rule
Adverb1 -> "n'"
Verb2 -> "exprime" | "dire" | "exposer" | "énoncer" | "afficher" | "émettre" | "signaler" | "signifier" | "déclarer" | "témoigner" | "montrer" | "manifester" | "découvrir" | "figurer" | "traduire"
Adverb3 -> "pas"
Trigger_Rule -> "|forward|trigger|negated|10|Group[487]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Verb2 Adverb3 Trigger_Rule
Adverb1 -> "n'"
Verb2 -> "utilise" | "employé" | "servi" | "recourir" | "servir" | "user" | "prendre" | "appliquer" | "manipuler" | "exploiter" | "manier" | "employer"
Adverb3 -> "pas"
Trigger_Rule -> "|forward|trigger|negated|10|Group[489]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Adverb2 Trigger_Rule
Adverb1 -> "généralement"
Adverb2 -> "pas"
Trigger_Rule -> "|forward|trigger|negated|10|Group[491]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Verb2 Adverb3 Trigger_Rule
Adverb1 -> "ne"
Verb2 -> "fait"
Adverb3 -> "pas"
Trigger_Rule -> "|forward|trigger|negated|10|Group[493]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Verb2 Adverb3 Trigger_Rule
Adverb1 -> "ne"
Verb2 -> "ressemble" | "approcher" | "correspondre" | "imiter" | "rappeler" | "se recouper" | "avoir un rapport" | "être la réplique de" | "se rapprocher"
Adverb3 -> "pas"
Trigger_Rule -> "|forward|trigger|negated|10|Group[495]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Trigger_Rule
Noun1 -> "doute" | "perplexité" | "hésitation" | "indétermination" | "indécision" | "crainte" | "appréhension" | "supposition" | "inquiétude" | "croyance"
Trigger_Rule -> "|forward|trigger|uncertain|30|Group[497]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adposition1 Noun2 Adposition3 Trigger_Rule
Adposition1 -> "en"
Noun2 -> "raison" | "cause" | "pourquoi" | "mobile" | "explication" | "fondement"
Adposition3 -> "de"
Trigger_Rule -> "|forward|termination|negated|10|Group[498]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adposition2 Trigger_Rule
Noun1 -> "ecchymoses" | "bleu" | "tache" | "hématome" | "coup" | "blessure" | "meurtrissure" | "cicatrice" | "égratignure" | "escarre" | "contusionné" | "entaillé" | "éraflure"
Adposition2 -> "à"
Trigger_Rule -> "|forward|termination|negated|10|Group[499]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Verb1 Trigger_Rule
Verb1 -> "ed"
Trigger_Rule -> "|both|termination|historical|30|Group[500]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Verb1 Adposition2 Noun3 Trigger_Rule
Verb1 -> "département" | "secteur"
Adposition2 -> "d'"
Noun3 -> "urgence" | "impératif" | "gravité" | "crise" | "secours"
Trigger_Rule -> "|both|termination|historical|30|Group[501]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adposition2 Trigger_Rule
Noun1 -> "étiologie" | "étiopathie" | "causalité"
Adposition2 -> "de"
Trigger_Rule -> "|forward|termination|negated|10|Group[502]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adposition2 Trigger_Rule
Noun1 -> "étiologie" | "étiopathie" | "causalité"
Adposition2 -> "pour"
Trigger_Rule -> "|forward|termination|negated|10|Group[502]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adposition2 Trigger_Rule
Noun1 -> "exacerbation" | "redoublement" | "intensification" | "recrudescence" | "excitation" | "augmentation" | "irritation" | "aggravation" | "agravation"
Adposition2 -> "de"
Trigger_Rule -> "|forward|termination|negated|10|Group[504]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adposition2 Verb3 Trigger_Rule
Noun1 -> "examens" | "consultation" | "observation" | "étude" | "auscultation" | "examen médical"
Adposition2 -> "à"
Verb3 -> "évaluer" | "juger" | "apprécier" | "chiffrer" | "calculer" | "quantifier" | "mesurer" | "déterminer" | "expertiser" | "jauger" | "compter" | "peser" | "comparer" | "examiner" | "recenser"
Trigger_Rule -> "|forward|trigger|uncertain|30|Group[505]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Verb2 Trigger_Rule
Noun1 -> "examen" | "analyse" | "consultation" | "observation" | "vérification" | "recherche" | "étude" | "auscultation" | "examen médical" | "autopsie" | "dépistage" | "interrogatoire"
Verb2 -> "évaluer" | "juger" | "apprécier" | "chiffrer" | "calculer" | "quantifier" | "mesurer" | "déterminer" | "expertiser" | "jauger" | "compter" | "peser" | "comparer" | "examiner" | "recenser"
Trigger_Rule -> "|forward|trigger|uncertain|30|Group[505]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adposition2 Verb3 Trigger_Rule
Noun1 -> "examen" | "analyse" | "consultation" | "observation" | "vérification" | "recherche" | "étude" | "auscultation" | "examen médical" | "autopsie" | "dépistage" | "interrogatoire"
Adposition2 -> "pour"
Verb3 -> "évaluer" | "juger" | "apprécier" | "chiffrer" | "calculer" | "quantifier" | "mesurer" | "déterminer" | "expertiser" | "jauger" | "compter" | "peser" | "comparer" | "examiner" | "recenser"
Trigger_Rule -> "|forward|trigger|uncertain|30|Group[505]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adposition1 Trigger_Rule
Adposition1 -> "sauf"
Trigger_Rule -> "|forward|termination|negated|10|Group[510]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adposition2 Noun3 Trigger_Rule
Noun1 -> "f"
Adposition2 -> "/"
Noun3 -> "h"
Trigger_Rule -> "|forward|trigger|nonpatient|30|Group[511]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adverb1 Verb2 Adverb3 Adposition4 Verb5 Trigger_Rule
Adverb1 -> "ne"
Verb2 -> "parvient" | "aboutir" | "atteindre" | "accéder" | "réussir" | "se pousser" | "finir par" | "se rendre" | "déboucher"
Adverb3 -> "pas"
Adposition4 -> "à"
Verb5 -> "révéler" | "divulguer" | "déceler" | "trahir" | "dire" | "exposer" | "faire connaître" | "exprimer" | "signaler" | "signifier" | "annoncer" | "développer" | "démasquer" | "afficher" | "déclarer"
Trigger_Rule -> "|forward|trigger|negated|10|Group[512]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adjective1 Adjective2 Trigger_Rule
Adjective1 -> "faux"
Adjective2 -> "négatif"
Trigger_Rule -> "|both|pseudo|negated|10|Group[514]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Noun2 Trigger_Rule
Noun1 -> "fam"
Noun2 -> "hx"
Trigger_Rule -> "|forward|trigger|nonpatient|30|Group[515]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Determiner1 Noun2 Auxiliary3 Verb4 Trigger_Rule
Determiner1 -> "la"
Noun2 -> "famille" | "cercle familial" | "entourage" | "parents" | "belle-famille"
Auxiliary3 -> "a"
Verb4 -> "dit"
Trigger_Rule -> "|both|termination|historical|30|Group[516, 517]|PRE-VALIDATION"|"|both|termination|nonpatient|30|Group[516, 517]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Verb2 Trigger_Rule
Noun1 -> "famille" | "cercle familial" | "entourage" | "parents" | "belle-famille"
Verb2 -> "trouvée" | "déceler" | "deviner" | "détecter" | "apercevoir" | "sentir" | "repérer" | "croire" | "saisir" | "voir" | "mettre la main sur" | "constater" | "remarquer" | "conclure"
Trigger_Rule -> "|both|termination|historical|30|Group[516, 517]|PRE-VALIDATION"|"|both|termination|nonpatient|30|Group[516, 517]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Determiner1 Noun2 Auxiliary3 Verb4 Trigger_Rule
Determiner1 -> "la"
Noun2 -> "famille" | "cercle familial" | "entourage" | "parents" | "belle-famille"
Auxiliary3 -> "a"
Verb4 -> "déclaré" | "avoué" | "prétendu" | "exprimé" | "décidé" | "manifesté" | "affirmé" | "dénoncé" | "certifié" | "reconnu" | "témoigné" | "notifié" | "expliqué" | "dit" | "énoncé"
Trigger_Rule -> "|both|termination|historical|30|Group[516, 517]|PRE-VALIDATION"|"|both|termination|nonpatient|30|Group[516, 517]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adposition2 Noun3 Trigger_Rule
Noun1 -> "histoire" | "passé" | "souvenir" | "historique"
Adposition2 -> "de"
Noun3 -> "famille" | "cercle familial" | "entourage" | "parents" | "belle-famille"
Trigger_Rule -> "|forward|trigger|nonpatient|30|Group[518]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Trigger_Rule
Noun1 -> "famille" | "cercle familial" | "entourage" | "parents" | "belle-famille"
Trigger_Rule -> "|forward|trigger|nonpatient|30|Group[525]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Auxiliary2 Auxiliary3 Trigger_Rule
Noun1 -> "père" | "parent" | "abbé" | "beau-père"
Auxiliary2 -> "a"
Auxiliary3 -> "appelé"
Trigger_Rule -> "|both|termination|historical|30|Group[526, 527]|PRE-VALIDATION"|"|both|termination|nonpatient|30|Group[526, 527]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Determiner1 Noun2 Trigger_Rule
Determiner1 -> "du"
Noun2 -> "père" | "parent" | "abbé" | "beau-père"
Trigger_Rule -> "|forward|trigger|nonpatient|30|Group[528]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Trigger_Rule
Noun1 -> "père" | "parent" | "abbé" | "beau-père"
Trigger_Rule -> "|forward|trigger|nonpatient|30|Group[529]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adposition1 Trigger_Rule
Adposition1 -> "fh"
Trigger_Rule -> "|forward|trigger|nonpatient|30|Group[530]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Determiner2 Noun3 Adjective4 Trigger_Rule
Noun1 -> "indication" | "avertissement" | "prescription" | "directive" | "annotation" | "explication" | "renvoi" | "information" | "note" | "recommandation" | "critère" | "notation" | "suggestion" | "mention" | "symptôme"
Determiner2 -> "du"
Noun3 -> "rapport" | "correspondance" | "relation" | "accord" | "ressemblance" | "affinité" | "récit" | "analogie" | "corrélation" | "concordance" | "conformité" | "parenté" | "exposé" | "rapprochement" | "témoignage"
Adjective4 -> "final"
Trigger_Rule -> "|both|pseudo|uncertain|30|Group[531]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adposition1 Noun2 Trigger_Rule
Adposition1 -> "pour"
Noun2 -> "présumé" | "soupçonner" | "estimer" | "conjecturer" | "pressentir" | "présager" | "espérer" | "attendre" | "compter" | "préjuger" | "présupposer" | "prétendre" | "prédire"
Trigger_Rule -> "|forward|trigger|uncertain|30|Group[532]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adjective2 Trigger_Rule
Noun1 -> "mur" | "mûri" | "obstacle" | "bon pour" | "mûr" | "accompli" | "mature" | "apte" | "usagé" | "falaise" | "frontière" | "avancé" | "formé" | "cloisonnement" | "barrières"
Adjective2 -> "libre"
Trigger_Rule -> "|both|pseudo|negated|10|Group[533]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Verb2 Trigger_Rule
Noun1 -> "air" | "aspect" | "apparence" | "extérieur" | "genre" | "allure" | "expression" | "mine" | "démarche" | "masque" | "manières" | "impression" | "manière" | "ressemblance" | "oxygène"
Verb2 -> "gratuit" | "public"
Trigger_Rule -> "|both|pseudo|negated|10|Group[533]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adjective1 Noun2 Trigger_Rule
Adjective1 -> "libre"
Noun2 -> "t"
Trigger_Rule -> "|both|pseudo|negated|10|Group[533]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adjective2 Trigger_Rule
Noun1 -> "fluide" | "coulant" | "clair" | "limpide" | "liquoreux" | "aqueuse"
Adjective2 -> "libre"
Trigger_Rule -> "|both|pseudo|negated|10|Group[533]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adjective2 Trigger_Rule
Noun1 -> "eau" | "H2O" | "eau minérale" | "distribution de l'eau" | "source"
Adjective2 -> "gratuite"
Trigger_Rule -> "|both|pseudo|negated|10|Group[533]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adjective1 Trigger_Rule
Adjective1 -> "libre"
Trigger_Rule -> "|backward|trigger|negated|10|Group[540]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Adposition1 Trigger_Rule
Adposition1 -> "de"
Trigger_Rule -> "|both|pseudo|uncertain|30|Group[541]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Verb1 Adjective2 Adposition3 Adposition4 Noun5 Trigger_Rule
Verb1 -> "donné" | "prescrit" | "administré" | "prodigué" | "appliqué" | "fait" | "indiqué" | "proposé" | "distribué" | "redonné" | "assené" | "employé" | "inoculé"
Adjective2 -> "\"
Adposition3 -> "w"
Adposition4 -> "+"
Noun5 -> "histoire" | "passé" | "souvenir" | "historique"
Trigger_Rule -> "|both|pseudo|uncertain|30|Group[542]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Determiner2 Noun3 Trigger_Rule
Noun1 -> "histoire" | "passé" | "souvenir" | "historique"
Determiner2 -> "du"
Noun3 -> "patient" | "client" | "souffrant" | "sujet"
Trigger_Rule -> "|both|pseudo|uncertain|30|Group[542]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Verb2 Adposition3 Determiner4 Noun5 Trigger_Rule
Noun1 -> "compte"
Verb2 -> "tenu"
Adposition3 -> "de"
Determiner4 -> "l'"
Noun5 -> "histoire" | "passé" | "souvenir" | "historique"
Trigger_Rule -> "|forward|termination|negated|10|Group[543]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Verb2 Determiner3 Noun4 Trigger_Rule
Noun1 -> "compte"
Verb2 -> "tenu"
Determiner3 -> "du"
Noun4 -> "fait"
Trigger_Rule -> "|forward|termination|negated|10|Group[543]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Verb2 Adposition3 Determiner4 Noun5 Trigger_Rule
Noun1 -> "compte"
Verb2 -> "tenu"
Adposition3 -> "de"
Determiner4 -> "son"
Noun5 -> "histoire" | "passé" | "souvenir" | "historique"
Trigger_Rule -> "|forward|termination|negated|10|Group[543]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Verb2 Adposition3 Determiner4 Noun5 Adposition6 Trigger_Rule
Noun1 -> "compte"
Verb2 -> "tenu"
Adposition3 -> "de"
Determiner4 -> "la"
Noun5 -> "gravité" | "sérieux" | "sévérité" | "grandeur" | "poids" | "urgence" | "caractère" | "sériosité"
Adposition6 -> "de"
Trigger_Rule -> "|forward|termination|negated|10|Group[543]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Verb1 Verb2 Subordinating_conjunction3 Trigger_Rule
Verb1 -> "étant"
Verb2 -> "donné" | "prescrit" | "administré" | "prodigué" | "appliqué" | "fait" | "indiqué" | "proposé" | "distribué" | "redonné" | "assené" | "employé" | "inoculé"
Subordinating_conjunction3 -> "que"
Trigger_Rule -> "|forward|termination|negated|10|Group[543]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
from nltk.parse.generate import generate, demo_grammar
from nltk import CFG
cfg_grammar= """
S -> Noun1 Adjective2 Trigger_Rule
Noun1 -> "gram"
Adjective2 -> "négatif"
Trigger_Rule -> "|both|pseudo|negated|10|Group[550]|PRE-VALIDATION"
"""
for sentence in generate(CFG.fromstring(cfg_grammar), n=1000):
print(' '.join(sentence))
```
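Each cell above repeats the same three steps: build a grammar string, parse it with `CFG.fromstring`, and print every generated sentence. That pattern can be factored into a small helper; this is a sketch of ours (the function name `print_cfg_sentences` and the grammar below are illustrative stand-ins, not one of the original rules):

```python
from nltk import CFG
from nltk.parse.generate import generate

def print_cfg_sentences(productions, n=1000):
    """Generate up to n sentences from a grammar string and print them."""
    grammar = CFG.fromstring(productions)
    sentences = [' '.join(words) for words in generate(grammar, n=n)]
    for sentence in sentences:
        print(sentence)
    return sentences

print_cfg_sentences("""
S -> Noun1 Adjective2
Noun1 -> "gram"
Adjective2 -> "négatif" | "positif"
""")
```

Each of the cells above then reduces to a single call with its grammar string.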
| github_jupyter |
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pickle
import tensorflow as tf
from tensorflow import keras
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
#Set number of predictions to make
n = 25
#load in needed column info
with open('BinColNames.txt', 'r') as f:
raw_names = f.read()
col_names = raw_names.split("\n")
col_names.pop()
#col_names
#Create random continuous input data
import random
def CreateRandList(n=10):
data = []
for i in range(n):
data.append(random.random())
return(data)
#Create rand cont vars
QTY_TRACKED = CreateRandList(n=n)
MS1_Delta = CreateRandList(n=n)
MS2_Delta = CreateRandList(n=n)
MS3_Delta = CreateRandList(n=n)
PO_Delta = CreateRandList(n=n)
Schd_Rng = CreateRandList(n=n)
SOP_MS1_Delta = CreateRandList(n=n)
SOP_MS2_Delta = CreateRandList(n=n)
RAS_MS1_Delta = CreateRandList(n=n)
RAS_MS2_Delta = CreateRandList(n=n)
ETA_SCP_Delta = CreateRandList(n=n)
#Get rand cat vars
num_countries = 12
num_dests = 8
num_types = 6
def GetRandInts(ub, lb=0, n=10):
data = []
for i in range(n):
val = random.randint(lb, ub-1)
data.append(val)
return(data)
country_index = GetRandInts(ub=num_countries, n=n)
dest_index = GetRandInts(ub=num_dests, n=n)
type_index = GetRandInts(ub=num_types, n=n)
raw_data = [QTY_TRACKED, MS1_Delta, MS2_Delta,
MS3_Delta, PO_Delta, Schd_Rng,
SOP_MS1_Delta, SOP_MS2_Delta,
RAS_MS1_Delta, RAS_MS2_Delta,
ETA_SCP_Delta, country_index,
dest_index, type_index]
#Format data into proper nparray
def MakeArray(data):
array = []
CountryDic = {'AU':0, 'BE':1, 'CA':2, 'DE':3, 'GB':4, 'IT':5,
'KP':6, 'KR':7, 'NL':8, 'NO':9, 'RU':10, 'US':11}
DestDic = {'ABB':0, 'CHY':1, 'CTC':2, 'FLD':3,
'HOU':4, 'MY1':5, 'SH2':6, 'SHP':7}
MtlDic = {'B':0, 'I':1, 'K':2, 'M':3, 'S':4, 'T':5}
for line in range(len(data[0])):
####################
#deal with cont vars
newline = []
cont_line = []
for col_val in range(11):
cont_line.append(data[col_val][line])
###################
#deal with cat vars
#
#deal with country
country_line = [0]*12
country_line[(data[11][line])] = 1
#deal with dests
dest_line = [0]*8
dest_line[(data[12][line])] = 1
#deal with mtl types
mtl_line = [0]*6
mtl_line[(data[13][line])] = 1
##############
#combine lines
#
newline = cont_line + country_line + dest_line + mtl_line
#append line to array
array.append(newline)
#convert to np array
array = np.array(array)
return(array)
data = MakeArray(raw_data)
data.shape
#Load in all models
#DT
mypickle = open('DT_full.pickle','rb')
DT_full = pickle.load(mypickle)
mypickle = open('DT_mini.pickle','rb')
DT_mini = pickle.load(mypickle)
#RF
mypickle = open('RF_full.pickle','rb')
RF_full = pickle.load(mypickle)
mypickle = open('RF_mini.pickle','rb')
RF_mini = pickle.load(mypickle)
#SVM
mypickle = open('SVM_full.pickle','rb')
SVM_full = pickle.load(mypickle)
mypickle = open('SVM_mini.pickle','rb')
SVM_mini = pickle.load(mypickle)
#Load tensorflow models
NN_full = keras.models.load_model('NN_full.h5')
NN_mini = keras.models.load_model('NN_mini.h5')
#Function to convert TF probability array into 1D prediction array
def Probs2Preds(labels, prob_array):
preds = []
for list in prob_array:
position = 0
max_prob = 0
for prob in list:
if prob > max_prob:
max_prob = prob
max_label = labels[position]
position = position + 1
preds.append(max_label)
preds = np.array(preds)
return(preds)
labels = ['on-time', '1-7dayL', '7-30dayL', '30-90dayL', '>90dayL']
#Make Predictions
#DT predictions
DT_full_preds = DT_full.predict(data)
DT_mini_preds = DT_mini.predict(data)
#RF predictions
RF_full_preds = RF_full.predict(data)
RF_mini_preds = RF_mini.predict(data)
#SVM predictions
SVM_full_preds = SVM_full.predict(data)
SVM_mini_preds = SVM_mini.predict(data)
#NN predictions
NN_full_preds = NN_full.predict(data)
NN_mini_preds = NN_mini.predict(data)
#convert prob_arrays to list of labels
NN_full_preds = Probs2Preds(labels=labels, prob_array=NN_full_preds)
NN_mini_preds = Probs2Preds(labels=labels, prob_array=NN_mini_preds)
stacked_preds = np.stack((DT_full_preds, DT_mini_preds,
RF_full_preds, RF_mini_preds,
SVM_full_preds, SVM_mini_preds,
NN_full_preds, NN_mini_preds))
def MakeColList(n=10):
col_list = []
for i in range (n):
col = 'prediction {}'.format(i)
col_list.append(col)
return(col_list)
col_list = MakeColList(n=n)
preds_index = ['DT full','DT mini', 'RF full', 'RF mini',
'SVM full', 'SVM mini', 'NN full', 'NN mini']
pred_df = pd.DataFrame(data=stacked_preds,
index=preds_index,
columns=col_list)
display(pred_df)
display(pred_df.describe())
```
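The nested loops in `Probs2Preds` above can be collapsed into a single vectorized lookup. A sketch (the function name `probs_to_preds` is ours):

```python
import numpy as np

# Vectorized equivalent of the Probs2Preds helper above: argmax finds the
# most probable column per row, and the label array converts indices to names.
def probs_to_preds(labels, prob_array):
    return np.array(labels)[np.argmax(prob_array, axis=1)]

labels = ['on-time', '1-7dayL', '7-30dayL', '30-90dayL', '>90dayL']
prob_array = np.array([[0.1, 0.6, 0.1, 0.1, 0.1],
                       [0.7, 0.1, 0.1, 0.05, 0.05]])
print(probs_to_preds(labels, prob_array))
```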
| github_jupyter |
```
#default_exp asyncUtil
```
# async
> tools to help write async Python code
```
#hide
from nbdev.showdoc import *
```
# async wrap
```
#export
import asyncio
from functools import wraps, partial
def async_wrap(func):
@wraps(func)
async def run(*args, loop=None, executor=None, **kwargs):
if loop is None:
loop = asyncio.get_event_loop()
pfunc = partial(func, *args, **kwargs)
return await loop.run_in_executor(executor, pfunc)
return run
%%time
@async_wrap
def aSlowFunc(input_:str):
time.sleep(2)
return input_
## async func execute
import nest_asyncio, time
nest_asyncio.apply()
async def runASlowFunc(input_):
return await aSlowFunc(input_)
async def runLoop():
rtup = (runASlowFunc(i) for i in range (10))
r = await asyncio.gather(*rtup)
return r
asyncio.run(runLoop())
```
# thread mapping
```
#export
import multiprocessing.dummy
from typing import Callable, List, Any, Iterable
from beartype import beartype
@beartype
def asyncMap(f:Callable, data:Iterable[Any], threads:int = 5)->Any:
p = multiprocessing.dummy.Pool(threads)
return p.map(f,data)
%%time
import time
asyncMap(lambda x: (x+1, time.sleep(1))[0] , range(100), threads = 100)[:10]
def aSlowFunc(x):
time.sleep(1)
return x
%%time
asyncMap(aSlowFunc, range(100))[:10]
input_ = list(zip(range(10), range(1,11)))
print(input_)
asyncMap(lambda x: (lambda x,y: x+y )(x[0],x[1]), input_)
```
# asyncAwaitMap
```
#export
def asyncAwaitMap(f:Callable, data:Iterable[Any])->Any:
af = async_wrap(f) # convert to async func
async def runLoop():
rtup = (af(i) for i in data)
return await asyncio.gather(*rtup)
return asyncio.run(runLoop())
%%time
import nest_asyncio
nest_asyncio.apply()
asyncAwaitMap(aSlowFunc, range(100))[:10]
input_ = list(zip(range(10), range(1,11)))
print(input_)
asyncAwaitMap(lambda x: (lambda x,y: x+y )(x[0],x[1]), input_)
```
# AsyncThread
```
#export
from concurrent.futures import ThreadPoolExecutor
def asyncThreadMap(f,data, threads=10):
with ThreadPoolExecutor(threads) as tr:
return tr.map(f,data)
%%time
def aSlowFunc(x):
time.sleep(1)
return x
list(asyncThreadMap(aSlowFunc, range(100)))[:10]
```
# AsyncProcess map
```
#export
from concurrent.futures import ProcessPoolExecutor
def asyncProcessMap(f,data, threads=10):
with ProcessPoolExecutor(threads) as tr:
return tr.map(f,data)
%%time
def aSlowFunc(x):
time.sleep(1)
return x
list(asyncProcessMap(aSlowFunc, range(100)))[:10]
```
| github_jupyter |
In all our analyses, we used estimations for either simple or logarithmic rates of return. <br/>
The formula for simple returns is
$$
\frac{P_t - P_{t-1}}{P_{t-1}}
,$$
while the formula for log returns is
$$
ln( \frac{P_t}{P_{t-1}} )
.$$
<br/>
If our dataset is simply called "data", in Python, we could write the first formula as <br/>
*(data / data.shift(1)) - 1,*
and the second one as
*np.log(data / data.shift(1)).*
<br/>
Instead of coding it this way, some professionals prefer using **Pandas.DataFrame.pct_change()** method, as it computes simple returns directly. We will briefly introduce it to you in this notebook document.
First, let's import NumPy, Pandas, and pandas_datareader.
```
import numpy as np
import pandas as pd
from pandas_datareader import data as wb
```
We will calculate returns of the Procter and Gamble stock, based on adjusted closing price data since the 1st of January 2007.
```
ticker = 'PG'
data = pd.DataFrame()
data[ticker] = wb.DataReader(ticker, data_source='yahoo', start='2007-1-1')['Adj Close']
```
So far, we estimated simple returns in the following way.
```
s_rets_1 = (data / data.shift(1)) - 1
s_rets_1.head()
```
Observe that the .pct_change() method obtains an identical result.
```
s_rets_2 = data.pct_change()
s_rets_2.head()
```
Now, if you multiply the obtained values by 100, you will see the percentage change:
```
s_rets_2.head() * 100
```
This means the close price on 2007-01-04 was 0.76% lower than the price on 2007-01-03, the price on 2007-01-05 was 0.85% lower than the price on 2007-01-04, and so on.
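The equivalence between the two approaches can be checked on a small, made-up series (the numbers here are illustrative, not the PG data used above):

```python
import pandas as pd

# Toy price series (illustrative values, not the PG data above)
prices = pd.Series([100.0, 99.0, 101.0, 101.0])

manual = (prices / prices.shift(1)) - 1   # shift-based simple returns
builtin = prices.pct_change()             # same computation via the method

# Both produce NaN for the first row and identical values afterwards
assert manual.iloc[1:].equals(builtin.iloc[1:])
print(builtin.tolist())
```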
A few arguments can be used in the percentage change method. The most important one is 'periods', as it specifies the shift between the prices in the numerator. By default, it equals one, and that's why we obtained the same result for s_rets_1 and s_rets_2. Let's assume we would like to calculate simple returns with the following formula:
$$
\frac{P_t - P_{t-2}}{P_{t-2}}
,$$
Then, we should specify 'periods = 2' in parentheses:
```
s_rets_3 = data.pct_change(periods=2)
s_rets_3.head()
```
You can see that no value was obtained for either the first or the second observation. If we use the "old" formula instead of this method, *shift(2)* leads us to the same output:
```
s_rets_4 = (data / data.shift(2)) - 1
s_rets_4.head()
```
Great! <br/>
Now, let's consider logarithmic returns. To this moment, we applied the following formula:
```
log_rets_1 = np.log(data / data.shift(1))
log_rets_1.tail()
```
You can calculate the same formula for log returns with the help of the .pct_change() method. Just be careful with the way you apply the formula! Mathematically, it will look like this:
$$
ln(\frac{P_t}{P_{t-1}}) = ln(\frac{P_t - P_{t-1}}{P_{t-1}} + \frac{P_{t-1}}{P_{t-1}}) = ln(\text{simple returns} + 1)
.$$
```
log_rets_2 = np.log(data.pct_change() + 1)
log_rets_2.tail()
```
***
The .pct_change() method is very popular. Whether you include it in your code or you go the other way around and type the formulas as we did in our analyses, you should obtain the correct value for the returns you need.
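The identity above is easy to verify numerically on a small, made-up series (illustrative values, not the PG data):

```python
import numpy as np
import pandas as pd

# Toy price series (illustrative, not the PG data used above)
prices = pd.Series([100.0, 99.0, 101.0, 104.0])

log_direct = np.log(prices / prices.shift(1))   # ln(P_t / P_{t-1})
log_via_pct = np.log(prices.pct_change() + 1)   # ln(simple returns + 1)

# The two formulas agree to floating-point precision
assert np.allclose(log_direct.iloc[1:], log_via_pct.iloc[1:])
```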
| github_jupyter |
## Linear Regression using pytorch
Linear regression is one of the must-have tools in any data scientist's toolkit. It attempts to fit the input data with a model of the following form:
* y is our measured output
* X is our input data, there are m measurements each of n values
Using linear regression we find coefficients θ<sub>0</sub> ... θ<sub>n</sub>
ŷ = θ<sub>0</sub> + θ<sub>1</sub>X<sub>1</sub> + θ<sub>2</sub>X<sub>2</sub> + ... + θ<sub>n</sub>X<sub>n</sub>
* ŷ is our predicted output
We minimize the error (loss function) between ŷ and y. A very common way is to minimize the squared distance between each ŷ and y pair.
An example may help:
### We survey 10 people, based on 3 facts about a car we ask what would they pay
- m = 10 ( 10 samples of data )
- n = 3 ( 3 observations in each sample )
- X is m × n matrix
- y is a m × 1 matrix ( vector )
We want to find θ<sub>0</sub>, θ<sub>1</sub>, θ<sub>2</sub> & θ<sub>3</sub>. That will allow us to find the price of any car (OK so we may need more than 3 things to really price a car but ... )
The 3 questions may be:
* Top speed in mph.
* Fuel consumption in mpg.
* Cargo capacity in cuft.
Let's consider some cases for our θs
* θ<sub>0</sub>=10000.0 θ<sub>1</sub>=0.0 θ<sub>2</sub>=0.0 θ<sub>3</sub>=0.0
  - all cars would cost 10000; none of the three factors makes any difference
* θ<sub>0</sub>= 1000.0 θ<sub>1</sub>=150.0 θ<sub>2</sub>=120.0 θ<sub>3</sub>=0.0
- People like fast cars with low fuel consumption
- cars with a top speed of 100mph, consuming 5mpg cost 1000 + 100×150 + 120×5 = 16,600
  - cars with a top speed of 90mph, consuming 25mpg cost 1000 + 90×150 + 120×25 = 17,500
  - cars with a top speed of 10mph, consuming 1mpg cost 1000 + 10×150 + 120×1 = 2,620
That's linear regression!
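The car-pricing example can be checked numerically before moving to PyTorch. This is a sketch using NumPy's least-squares solver; the survey data below are made up for illustration, generated from the second scenario's coefficients:

```python
import numpy as np

# True coefficients from the second scenario above:
# price = 1000 + 150*speed + 120*mpg + 0*cargo
theta_true = np.array([1000.0, 150.0, 120.0, 0.0])

rng = np.random.default_rng(0)
X = rng.uniform([50, 1, 5], [150, 40, 100], size=(10, 3))  # speed, mpg, cargo
Xb = np.hstack([np.ones((10, 1)), X])                      # prepend bias column
y = Xb @ theta_true                                        # noiseless survey answers

# Least squares recovers the thetas from the 10 surveyed prices
theta_hat, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(theta_hat.round(2))
```

With noiseless data the recovered coefficients match θ exactly (up to floating-point error); real survey answers would add noise and only approximate them.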
```
import torch
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
```
```
m = -4
c = 12
noisiness = 7
num_points = 20
x = ( torch.rand( num_points, 1 ) - 0.5 ) * 10
y = ( ( torch.rand( num_points, 1 ) - 0.5 ) * noisiness ) + ( x * m + c )
plt.scatter( x.tolist(), y.tolist(), color='red' )
plt.show()
```
We want to solve for 2 variables ( m & c ) so we have to synthesize a bias ( the c ). Bias is independent of the input data, so we'll prepend a column of ones to our input values ( x ).
If x is
```
1
2
3
```
It will become
```
1 1
1 2
1 3
```
```
xplusone = torch.cat( ( torch.ones( x.size(0),1 ), x) , 1 )
```
Now we use the pytorch built-in solver [gels](https://pytorch.org/docs/stable/torch.html#torch.gels). This is the least-squares solver. The returned result includes two parts:
- the first n items are the coefficients we want
- the remainder are error terms
We have two dimensions ( the 1D x data and the prepended ones ), so read the first two items from the result.
```
R, _ = torch.gels( y, xplusone )
R = R[0:xplusone.size(1)]
```
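Note that `torch.gels` was deprecated and later removed in newer PyTorch releases. If the call above fails on your installation, the same step can be done with the replacement API, `torch.linalg.lstsq`; this is a sketch that rebuilds the toy data so it runs standalone:

```python
import torch

torch.manual_seed(0)  # reproducible toy data
x = (torch.rand(20, 1) - 0.5) * 10
y = x * -4 + 12                    # noiseless line: m = -4, c = 12
xplusone = torch.cat((torch.ones(x.size(0), 1), x), 1)

# torch.linalg.lstsq(A, B) minimizes ||A X - B||; note the argument order
# is reversed relative to the old torch.gels(B, A) call above.
R = torch.linalg.lstsq(xplusone, y).solution
print(R.squeeze())  # first entry is the intercept c, second is the slope m
```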
Let's plot the original points and the best-fit line. The best-fit line comes from the coefficients.
Multiply the original inputs by R ( the coefficients ) to get the line
```
yh = xplusone.mm( R )
plt.plot( x.tolist(), yh.tolist() )
plt.scatter( x.tolist(), y.tolist(), color='red' )
plt.show()
```
### An example using more than one dimensional input
This example shows the method above works for higher dimensional data.
First define the x & y arrays of inputs and results
```
m = torch.tensor( [ [-2.0], [-2.0] ] )
c = 12
noisiness = 10
num_points = 100
x = ( torch.rand( num_points, 2 ) - 0.5 ) * 10
y = ( ( torch.rand( num_points, 1 ) - 0.5 ) * noisiness ) + ( x.mm( m ) + c )
```
Find the best fitting plane of the input data points. Compare the code to the 2D case above: it's the same.
```
xplusone = torch.cat( ( torch.ones( x.size(0),1 ), x) , 1 )
R, _ = torch.gels( y, xplusone )
R = R[0:xplusone.size(1)]
yh = xplusone.mm( R )
```
Plot the results
It's harder to see a 3D plot. The red dots are the data points, the green plane is the best fit solution.
```
fig = plt.figure()
ax = fig.add_subplot( 111, projection='3d')
ax.scatter( x[:,0].tolist(), x[:,1].tolist(), y[:,0].tolist(), color='red' )
ax.plot_trisurf( x[:,0].tolist(), x[:,1].tolist(), yh[:,0].tolist(), color='green', shade=False )
plt.show()
```
| github_jupyter |
```
#importing required packages
import numpy as np
import re, tarfile, random
from functools import reduce
import keras
from keras.layers import Dense, Dropout, RepeatVector, Activation, recurrent
from keras.layers.recurrent import SimpleRNN, LSTM
from keras.layers.embeddings import Embedding
from keras.models import Sequential
from keras.preprocessing.sequence import pad_sequences
from keras.utils.np_utils import to_categorical
from keras.utils.data_utils import get_file
from keras.callbacks import History
from keras import backend as K
#defining the temperatures for the notes
def sample(preds, temperature):
# Helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
#Loading abc notation based music file
data = open('music_abc.txt', 'r').read()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print ('Unique characters:', chars)
print ('The data has', data_size, 'characters, with', vocab_size, 'unique characters')
#Vectorizing the data for one hot encoding
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
train_data=data[:int(0.8*data_size)]
val_data = data[int(0.8*data_size):]
seq_len = 25
train_data = [data[i] for i in range(len(data))]
train_data=train_data[:-30]
train_data_onehot = [list(to_categorical(char_to_ix[x],vocab_size)) for x in train_data]
#Loading it to an array
train_data_onehot = np.array(train_data_onehot)
print(len(train_data_onehot))
train_data_onehot
print(len(train_data))
#Reshaping the data for dividing into train, test and validation
training_batches = np.reshape(train_data_onehot[:-1], (int(train_data_onehot.shape[0]/seq_len), seq_len, vocab_size))
print(training_batches.shape)
#Splitting the data
X = training_batches[:,:-1,:]
y = training_batches[:,1:,:]
print(y.shape)
train_len=int(0.8*training_batches.shape[0])
X_train = X[:train_len,:,:]
y_train = y[:train_len,:,:]
X_valid = X[train_len:,:,:]
y_valid = y[train_len:,:,:]
#Converting to the string
convert2String = lambda y: ''.join([ix_to_char[x[0]] for x in list(np.reshape(np.argmax(y, axis=2), (-1,1)))])
#Defining the model
epochs=50
input_dim = vocab_size
hidden_dim = 100
output_dim = vocab_size
#A SimpleRNN baseline, kept for reference but replaced by the deeper LSTM model below
#rnn_model = Sequential()
#rnn_model.add(SimpleRNN(hidden_dim, activation='tanh', return_sequences=True, input_shape=(None, vocab_size)))
rnn_model = Sequential()
rnn_model.add(LSTM(hidden_dim, input_shape=(None, vocab_size), return_sequences=True))
rnn_model.add(Dropout(0.3))
rnn_model.add(LSTM(512, input_shape=(None,vocab_size),return_sequences=True))
rnn_model.add(Dropout(0.3))
rnn_model.add(LSTM(512, input_shape=(None,vocab_size),return_sequences=True))
rnn_model.add(Dense(256))
rnn_model.add(Dropout(0.3))
rnn_model.add(Dense(output_dim))
rnn_model.add(Activation('softmax'))
rnn_model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
rnn_model.summary()
print('Training')
#Compiling the model
history = rnn_model.fit(X_train, y_train, batch_size=50, epochs=epochs, validation_data=(X_valid, y_valid))
# Function to get lstm rnn layer output
get_rnn_layer_output = K.function([rnn_model.layers[0].input], [rnn_model.layers[0].output])
prime_len = 25
gen_len = 900
start_index = 0
d =0
rnn_activations = []
for T in [1.0]:
d +=1
generated = ''
sentence = data[start_index: start_index + prime_len]
generated += sentence
print ('Generating with seed: "' + sentence + '"')
for i in range(gen_len):
x = np.zeros((1, prime_len, len(chars)))
for t, char in enumerate(sentence):
x[0, t, char_to_ix[char]] = 1.
preds = rnn_model.predict(x, verbose=0)[0]
layer_output = get_rnn_layer_output([x])[0]
rnn_activations.append(layer_output[0][-1])
next_index = sample(preds[-1], T)
next_char = ix_to_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
f= open('pred_feature3' +'_'+ str(T)+ '_' + str(d) + '.txt','w')
f.write(generated)
f.close()
rnn_activations = np.array(rnn_activations)
print(rnn_activations.shape)
np.savetxt('rnn_activations_pred',rnn_activations,delimiter =',')
import matplotlib.pyplot as plt
#Validation Graph
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
#Accuracy Graph
plt.clf() # clear figure
acc = history.history['acc']
val_acc = history.history['val_acc']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
| github_jupyter |
# Quantum Neural Networks for Natural Language Processing
[](https://mindspore.cn/mindquantum/api/zh-CN/master/index.html) [](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/mindquantum/zh_cn/mindspore_qnn_for_nlp.ipynb) [](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/mindquantum/zh_cn/mindspore_qnn_for_nlp.py) [](https://gitee.com/mindspore/docs/blob/master/docs/mindquantum/docs/source_zh_cn/qnn_for_nlp.ipynb)
## Overview
Word embedding is an important step in natural language processing: it embeds word vectors from a high-dimensional space into a continuous vector space of lower dimension. As the amount of corpus information given to a neural network keeps growing, its training process becomes more and more difficult. By exploiting quantum-mechanical properties such as state superposition and entanglement, we can use a quantum neural network to process this classical corpus information, add it to the training process, and improve the convergence accuracy. Below, we will build a simple hybrid quantum-classical neural network to complete a word-embedding task.
## Environment Setup
Import the modules this tutorial depends on.
```
import numpy as np
import time
from mindquantum.core import QubitOperator
import mindspore.ops as ops
import mindspore.dataset as ds
from mindspore import nn
from mindspore.train.callback import LossMonitor
from mindspore import Model
from mindquantum.framework import MQLayer
from mindquantum import Hamiltonian, Circuit, RX, RY, X, H, UN
```
This tutorial implements a [CBOW model](https://blog.csdn.net/u010665216/article/details/78724856), which predicts a word from its surrounding context. For example, the sentence "I love natural language processing" can be split into 5 words: \["I", "love", "natural", "language", "processing"\]. With a window size of 2, the task is to use \["I", "love", "language", "processing"\] to predict the target word "natural". Taking window size 2 as an example, we build the following quantum neural network to perform the word-embedding task.

Here, the encoder circuit encodes the information of "I", "love", "language" and "processing" into the quantum circuit. The trainable quantum circuit consists of four ansatz circuits, and at the end of the circuit we measure qubits in the $\text{Z}$ basis; the number of qubits to measure is determined by the dimension of the desired embedding space.
## Data Preprocessing
We process the sentence of interest, generate a dictionary for it, and produce sample points according to the window size.
```
def GenerateWordDictAndSample(corpus, window=2):
    all_words = corpus.split()
    word_set = list(set(all_words))
    word_set.sort()
    word_dict = {w: i for i, w in enumerate(word_set)}
    sampling = []
    for index, _ in enumerate(all_words[window:-window]):
        around = []
        for i in range(index, index + 2*window + 1):
            if i != index + window:
                around.append(all_words[i])
        sampling.append([around, all_words[index + window]])
    return word_dict, sampling
word_dict, sample = GenerateWordDictAndSample("I love natural language processing")
print(word_dict)
print('word dict size: ', len(word_dict))
print('samples: ', sample)
print('number of samples: ', len(sample))
```
From the information above, the dictionary of this sentence has size 5 and produces one sample point.
## Encoder Circuit
For simplicity, our encoder circuit consists of $\text{RX}$ rotation gates, with the following structure.

We apply one $\text{RX}$ rotation gate to each qubit.
```
def GenerateEncoderCircuit(n_qubits, prefix=''):
    if prefix and prefix[-1] != '_':
        prefix += '_'
    circ = Circuit()
    for i in range(n_qubits):
        circ += RX(prefix + str(i)).on(i)
    return circ
GenerateEncoderCircuit(3, prefix='e')
```
We usually denote the two states of a two-level qubit by $\left|0\right>$ and $\left|1\right>$. By the principle of superposition, a qubit can also be in a superposition of these two states:
$$\left|\psi\right>=\alpha\left|0\right>+\beta\left|1\right>$$
An $n$-qubit state lives in a $2^n$-dimensional Hilbert space. For the dictionary of 5 words above, we only need $\lceil \log_2 5 \rceil=3$ qubits to encode it, which illustrates the advantage of quantum computing.
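A quick numeric check of this claim, using a small helper of our own (not part of the tutorial code):

```python
import math

# Qubits needed to give every word in a dictionary its own basis state
def qubits_needed(dict_size):
    return math.ceil(math.log2(dict_size))

print(qubits_needed(5))    # the 5-word dictionary above -> 3
print(qubits_needed(100))  # a 100-word dictionary -> 7
```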
For example, the word "love" in the dictionary above has label 2, whose binary representation is `010`; we only need to set `e_0`, `e_1` and `e_2` in the encoder circuit to $0$, $\pi$ and $0$ respectively. Let's verify this below.
```
from mindquantum.simulator import Simulator
from mindspore import context
from mindspore import Tensor
n_qubits = 3 # number of qubits of this quantum circuit
label = 2 # label need to encode
label_bin = bin(label)[-1: 1: -1].ljust(n_qubits, '0') # binary form of label
label_array = np.array([int(i)*np.pi for i in label_bin]).astype(np.float32) # parameter value of encoder
encoder = GenerateEncoderCircuit(n_qubits, prefix='e') # encoder circuit
encoder_params_names = encoder.params_name # parameter names of encoder
print("Label is: ", label)
print("Binary label is: ", label_bin)
print("Parameters of encoder is: \n", np.round(label_array, 5))
print("Encoder circuit is: \n", encoder)
print("Encoder parameter names are: \n", encoder_params_names)
# quantum state evolution operator
state = encoder.get_qs(pr=dict(zip(encoder_params_names, label_array)))
amp = np.round(np.abs(state)**2, 3)
print("Amplitude of quantum state is: \n", amp)
print("Label in quantum state is: ", np.argmax(amp))
```
The verification above shows that, for the data labeled 2, the position with the largest amplitude in the resulting quantum state is also 2, so the state obtained indeed encodes the input label. We wrap this process of generating parameter values from data into the following function.
```
def GenerateTrainData(sample, word_dict):
    # int() instead of the removed np.int alias
    n_qubits = int(np.ceil(np.log2(1 + max(word_dict.values()))))
    data_x = []
    data_y = []
    for around, center in sample:
        data_x.append([])
        for word in around:
            label = word_dict[word]
            label_bin = bin(label)[-1: 1: -1].ljust(n_qubits, '0')
            label_array = [int(i)*np.pi for i in label_bin]
            data_x[-1].extend(label_array)
        data_y.append(word_dict[center])
    return np.array(data_x).astype(np.float32), np.array(data_y).astype(np.int32)
GenerateTrainData(sample, word_dict)
```
Based on the result above, we merge the encoded information of the 4 input words into one longer vector, which is convenient for the subsequent neural network to consume.
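As a sanity check on that concatenation (illustration only): with window 2 there are 2·window = 4 context words, and each contributes `n_qubits` = 3 RX parameters, so each sample is a 12-dimensional vector.

```python
# Length of the flattened encoder-parameter vector per sample
window, n_qubits = 2, 3
sample_len = 2 * window * n_qubits
print(sample_len)  # 12
```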
## Ansatz Circuit
There are many possible choices of ansatz circuit. We choose the following one: a single unit consists of a layer of $\text{RY}$ gates and a layer of $\text{CNOT}$ gates, and repeating this unit $p$ times forms the full ansatz circuit.

The following function generates the ansatz circuit.
```
def GenerateAnsatzCircuit(n_qubits, layers, prefix=''):
    if prefix and prefix[-1] != '_':
        prefix += '_'
    circ = Circuit()
    for l in range(layers):
        for i in range(n_qubits):
            circ += RY(prefix + str(l) + '_' + str(i)).on(i)
        for i in range(l % 2, n_qubits, 2):
            if i < n_qubits and i + 1 < n_qubits:
                circ += X.on(i + 1, i)
    return circ
GenerateAnsatzCircuit(5, 2, 'a')
```
## Measurement
We treat measurement results on different combinations of qubits as the dimensionality-reduced data. The procedure is analogous to the bit encoding above. For example, when reducing the word vector to 5 dimensions, the data for the 3rd dimension can be produced as follows:
- The binary representation of 3 is `00011`.
- Measure the expectation value of the $Z_0Z_1$ Hamiltonian on the final state of the quantum circuit.
The function below produces the Hamiltonians (hams) needed to generate the data for each dimension, where `n_qubits` is the number of qubits in the circuit and `dims` is the word-embedding dimension:
```
def GenerateEmbeddingHamiltonian(dims, n_qubits):
    hams = []
    for i in range(dims):
        s = ''
        for j, k in enumerate(bin(i + 1)[-1:1:-1]):
            if k == '1':
                s = s + 'Z' + str(j) + ' '
        hams.append(Hamiltonian(QubitOperator(s)))
    return hams
GenerateEmbeddingHamiltonian(5, 5)
```
## Quantum Word-Embedding Layer
The quantum word-embedding layer combines the encoder circuit and trainable circuit above with the measurement Hamiltonians to embed `num_embedding` words into `embedding_dim`-dimensional word vectors. We also add Hadamard gates at the very start of the circuit to prepare the initial state as a uniform superposition, which improves the expressive power of the quantum neural network.
Below we define the quantum embedding layer, which returns a quantum-circuit simulation operator.
```
def QEmbedding(num_embedding, embedding_dim, window, layers, n_threads):
    n_qubits = int(np.ceil(np.log2(num_embedding)))
    hams = GenerateEmbeddingHamiltonian(embedding_dim, n_qubits)
    circ = Circuit()
    circ += UN(H, n_qubits)
    encoder_param_name = []
    ansatz_param_name = []
    for w in range(2 * window):
        encoder = GenerateEncoderCircuit(n_qubits, 'Encoder_' + str(w))
        ansatz = GenerateAnsatzCircuit(n_qubits, layers, 'Ansatz_' + str(w))
        encoder.no_grad()
        circ += encoder
        circ += ansatz
        encoder_param_name.extend(encoder.params_name)
        ansatz_param_name.extend(ansatz.params_name)
    grad_ops = Simulator('projectq', circ.n_qubits).get_expectation_with_grad(hams,
                                                                              circ,
                                                                              None,
                                                                              None,
                                                                              encoder_param_name,
                                                                              ansatz_param_name,
                                                                              n_threads)
    return MQLayer(grad_ops)
```
The overall training model is similar to a classical network, consisting of an embedding layer and two fully connected layers; here, however, the embedding layer is built from a quantum neural network. Below we define the quantum CBOW network.
```
class CBOW(nn.Cell):
    def __init__(self, num_embedding, embedding_dim, window, layers, n_threads,
                 hidden_dim):
        super(CBOW, self).__init__()
        self.embedding = QEmbedding(num_embedding, embedding_dim, window,
                                    layers, n_threads)
        self.dense1 = nn.Dense(embedding_dim, hidden_dim)
        self.dense2 = nn.Dense(hidden_dim, num_embedding)
        self.relu = ops.ReLU()

    def construct(self, x):
        embed = self.embedding(x)
        out = self.dense1(embed)
        out = self.relu(out)
        out = self.dense2(out)
        return out
```
Next we train on a somewhat longer sentence. First, we define `LossMonitorWithCollection` to monitor convergence and collect the losses during training.
```
class LossMonitorWithCollection(LossMonitor):
    def __init__(self, per_print_times=1):
        super(LossMonitorWithCollection, self).__init__(per_print_times)
        self.loss = []

    def begin(self, run_context):
        self.begin_time = time.time()

    def end(self, run_context):
        self.end_time = time.time()
        print('Total time used: {}'.format(self.end_time - self.begin_time))

    def epoch_begin(self, run_context):
        self.epoch_begin_time = time.time()

    def epoch_end(self, run_context):
        cb_params = run_context.original_args()
        self.epoch_end_time = time.time()
        if self._per_print_times != 0 and cb_params.cur_step_num % self._per_print_times == 0:
            print('')

    def step_end(self, run_context):
        cb_params = run_context.original_args()
        loss = cb_params.net_outputs
        if isinstance(loss, (tuple, list)):
            if isinstance(loss[0], Tensor) and isinstance(loss[0].asnumpy(), np.ndarray):
                loss = loss[0]
        if isinstance(loss, Tensor) and isinstance(loss.asnumpy(), np.ndarray):
            loss = np.mean(loss.asnumpy())
        cur_step_in_epoch = (cb_params.cur_step_num - 1) % cb_params.batch_num + 1
        if isinstance(loss, float) and (np.isnan(loss) or np.isinf(loss)):
            raise ValueError("epoch: {} step: {}. Invalid loss, terminating training.".format(
                cb_params.cur_epoch_num, cur_step_in_epoch))
        self.loss.append(loss)
        if self._per_print_times != 0 and cb_params.cur_step_num % self._per_print_times == 0:
            print("\repoch: %+3s step: %+3s time: %5.5s, loss is %5.5s" % (cb_params.cur_epoch_num, cur_step_in_epoch, time.time() - self.epoch_begin_time, loss), flush=True, end='')
```
Next, we use the quantum version of `CBOW` to embed a long sentence. Before running, execute `export OMP_NUM_THREADS=4` in a terminal to set the quantum simulator's thread count to 4; when the simulated quantum system has a larger number of qubits, more threads can be set to improve simulation efficiency.
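If setting the environment variable in the terminal is inconvenient, it can also be set from within Python before the simulator is created; this is an alternative sketch, not part of the original tutorial:

```python
import os

# Must run before the quantum simulator / back-end is initialized,
# otherwise the setting may be read too late to take effect.
os.environ["OMP_NUM_THREADS"] = "4"
print(os.environ["OMP_NUM_THREADS"])
```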
```
import mindspore as ms
from mindspore import context
from mindspore import Tensor
context.set_context(mode=context.PYNATIVE_MODE, device_target="CPU")
corpus = """We are about to study the idea of a computational process.
Computational processes are abstract beings that inhabit computers.
As they evolve, processes manipulate other abstract things called data.
The evolution of a process is directed by a pattern of rules
called a program. People create programs to direct processes. In effect,
we conjure the spirits of the computer with our spells."""
ms.set_seed(42)
window_size = 2
embedding_dim = 10
hidden_dim = 128
word_dict, sample = GenerateWordDictAndSample(corpus, window=window_size)
train_x, train_y = GenerateTrainData(sample, word_dict)
train_loader = ds.NumpySlicesDataset({
    "around": train_x,
    "center": train_y
}, shuffle=False).batch(3)
net = CBOW(len(word_dict), embedding_dim, window_size, 3, 4, hidden_dim)
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
net_opt = nn.Momentum(net.trainable_params(), 0.01, 0.9)
loss_monitor = LossMonitorWithCollection(500)
model = Model(net, net_loss, net_opt)
model.train(350, train_loader, callbacks=[loss_monitor], dataset_sink_mode=False)
```
Print the loss values collected during training:
```
import matplotlib.pyplot as plt
plt.plot(loss_monitor.loss, '.')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.show()
```
The parameters in the quantum circuit of the quantum embedding layer can be printed as follows:
```
net.embedding.weight.asnumpy()
```
## Classical Word-Embedding Layer
Here we build a classical CBOW network using a classical word-embedding layer, for comparison with the quantum version.
First, build the classical CBOW network; its parameters are similar to those of the quantum version.
```
class CBOWClassical(nn.Cell):
    def __init__(self, num_embedding, embedding_dim, window, hidden_dim):
        super(CBOWClassical, self).__init__()
        self.dim = 2 * window * embedding_dim
        self.embedding = nn.Embedding(num_embedding, embedding_dim, True)
        self.dense1 = nn.Dense(self.dim, hidden_dim)
        self.dense2 = nn.Dense(hidden_dim, num_embedding)
        self.relu = ops.ReLU()
        self.reshape = ops.Reshape()

    def construct(self, x):
        embed = self.embedding(x)
        embed = self.reshape(embed, (-1, self.dim))
        out = self.dense1(embed)
        out = self.relu(out)
        out = self.dense2(out)
        return out
```
Generate a dataset suitable for the classical CBOW network.
```
train_x = []
train_y = []
for i in sample:
    around, center = i
    train_y.append(word_dict[center])
    train_x.append([])
    for j in around:
        train_x[-1].append(word_dict[j])
train_x = np.array(train_x).astype(np.int32)
train_y = np.array(train_y).astype(np.int32)
print("train_x shape: ", train_x.shape)
print("train_y shape: ", train_y.shape)
```
Now we train the classical CBOW network.
```
context.set_context(mode=context.GRAPH_MODE, device_target="CPU")
train_loader = ds.NumpySlicesDataset({
    "around": train_x,
    "center": train_y
}, shuffle=False).batch(3)
net = CBOWClassical(len(word_dict), embedding_dim, window_size, hidden_dim)
net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
net_opt = nn.Momentum(net.trainable_params(), 0.01, 0.9)
loss_monitor = LossMonitorWithCollection(500)
model = Model(net, net_loss, net_opt)
model.train(350, train_loader, callbacks=[loss_monitor], dataset_sink_mode=False)
```
Print the loss values collected during training:
```
import matplotlib.pyplot as plt
plt.plot(loss_monitor.loss, '.')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.show()
```
As shown above, the quantum word-embedding model obtained via quantum simulation also completes the embedding task well. When a dataset becomes so large that classical computing power struggles to cope, a quantum computer will be able to handle such problems with ease.
## References
[1] Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean. [Efficient Estimation of Word Representations in
Vector Space](https://arxiv.org/pdf/1301.3781.pdf)
# Basic Python CTC Session 6
## Python Loops and Functions
#### A simple food loop
```
food = ["Ceviche","Lomo Saltado","Chaufa"]
for item_in_list in food:
    print(item_in_list)
```
#### Student Heights
```
student_heights = input("Input a list of student heights: ").split()
#print(student_heights)
total_height=0
no_students=0
for n in student_heights:
    n = int(n)
    no_students += 1
    #print(n)
    total_height += n
avg_height=round(total_height/no_students)
print(avg_height)
```
#### Best practice for adding to variable
```
no_students += 1   # shorthand for the line below
no_students = no_students + 1
```
#### Using input() to get user input
```
favourite_number = int(input("What is your favourite number? "))
print(favourite_number)
```
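One caveat with this pattern: `int()` raises a `ValueError` when the text is not numeric. A defensive wrapper might look like the sketch below (the helper name `parse_number` is ours, not part of the lesson):

```python
def parse_number(text):
    """Return the integer in text, or None if it is not a valid integer."""
    try:
        return int(text)
    except ValueError:
        return None

print(parse_number("42"))     # 42
print(parse_number("seven"))  # None
```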
#### Adding Even Numbers
```
evensum=0
for n in range(0, 101, 2):
    evensum += n
print(evensum)
```
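The same total can also be computed without an explicit loop, using the built-in `sum()` on the `range` object:

```python
# sum() consumes the range directly: 0 + 2 + 4 + ... + 100
evensum = sum(range(0, 101, 2))
print(evensum)  # 2550
```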
#### The FizzBuzz Job Interview Question
```
children = int(input("Please indicate number of children: "))
for child in range(1, children+1):
    if child % 3 == 0 and child % 5 == 0:
        print("FizzBuzz")
    elif child % 5 == 0:
        print("Buzz")
    elif child % 3 == 0:
        print("Fizz")
    else:
        print(child)
```
#### Function with Outputs
_Best Practice: add a doc string to describe what your function is doing_
```
def format_name(f_name, l_name):
    """converting input names"""
    if f_name == "" or l_name == "":
        return "You did not provide valid inputs."
    f_name_format = f_name.title()
    l_name_format = l_name.title()
    # return tells the computer to exit the function;
    # afterwards nothing will be executed anymore
    return f"{f_name_format} {l_name_format}"
output = format_name("jan", "THOmA")
print(output)
```
#### The return keyword "replaces" the function with the result for the variable tennis_point
```
def tennis(point):
    """function adds one point to input"""
    return point + 1
tennis_point = tennis(6)
print(tennis_point)
```
#### F-Strings: very useful to combine your results stored in a variable within a string :)
```
student = 16
print(f"We are {student} students today!")
```
#### Password Generator: combining concepts above
In the following exercise we want to create a password generator for which the user can specify the number of letters, symbols and numbers. We use the random module for selecting random items out of the lists letters, numbers, and symbols and to shuffle our password characters.
```
import random
letters = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z']
numbers = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
symbols = ['!', '#', '$', '%', '&', '(', ')', '*', '+']
def password_generator(letters, numbers, symbols, nr_letters, nr_symbols, nr_numbers):
    """This function will generate random strong passwords"""
    password = []
    for l in range(nr_letters):    # range(0, nr_letters+1) would add one letter too many
        password.append(random.choice(letters))
    for s in range(nr_symbols):
        password.append(random.choice(symbols))
    for n in range(nr_numbers):
        password.append(random.choice(numbers))
    random.shuffle(password)
    password = "".join(password)
    print(password)
    return password
print("Welcome to the PyPassword Generator!")
nr_letters = int(input("How many letters would you like in your password?\n"))
nr_symbols = int(input("How many symbols would you like?\n"))
nr_numbers = int(input("How many numbers would you like?\n"))
password_generator(letters, numbers, symbols, nr_letters, nr_symbols, nr_numbers)
```
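A note on the design choice: the `random` module is fine for exercises, but for real passwords Python's documentation recommends the `secrets` module, which draws from a cryptographically strong source. A sketch of the same generator built on `secrets` (the symbol pool is copied from the lists above):

```python
import secrets
import string

def secure_password(nr_letters, nr_symbols, nr_numbers):
    """Like password_generator, but using security-grade randomness."""
    pool = (
        [secrets.choice(string.ascii_letters) for _ in range(nr_letters)]
        + [secrets.choice("!#$%&()*+") for _ in range(nr_symbols)]
        + [secrets.choice(string.digits) for _ in range(nr_numbers)]
    )
    secrets.SystemRandom().shuffle(pool)
    return "".join(pool)

print(secure_password(4, 2, 2))
```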
# Handout 9
```
#Chi-square goodness of Fit test
#Kolmogorov-Smirnov (K-S) Measure
#evaluating Fit to the chicken Data
#Cramer-von Mises (CvM) Measure
#Anderson Darling (AD) Measure
#replicate gofnormex.R in python
from scipy.stats import norm
from math import sqrt, log
L = sorted([156,162,168,182,186,190,190,196,202,210,214,220,226,230,230,236,236,242,246,270])
n, m, a = 20, 200, 35
z = norm.cdf(L,m,a)
i = list(range(1, n + 1))
print(i)
print(z)
# K-S Computations
d1 = [i/n - z for i, z in zip(i,z)]
dp = max(d1)
d2 = [z - (i -1)/n for i, z in zip(i,z)]
dm = max(d2)
ks = max(dp,dm)
KS = ks*(sqrt(n) + .12+.11/sqrt(n))
#look into formatting values
print("KS Statistic: " + str(KS))
#reject normality at 0.05 level if KS > 1.358
# Cramer-von Mises
wi = [(z-(2*i-1)/(2*n))**2 for i, z in zip(i,z)]
s = sum(wi)
cvm = s + 1/(12*n)
CvM = (cvm - .4/n + .6/n**2)*(1+1/n)
print("CvM: " + str(CvM))
#Anderson-Darling Computations
ali = [(2*i-1)*log(z) for i, z in zip(i,z)]
print(ali)
a2i = [(2*n+1-2*i)*log(1-z) for i, z in zip(i,z)]
#print(a2i)
s1 = sum(ali)
#print(s1)
s2 = sum(a2i)
#print(s2)
AD = -n-(1/n)*(s1+s2)
#AD = -n-(1/n)*(-144-276)
print("AD: " + str(AD))
#functions to do the same thing as above?
#Shapiro Wilk Test
# Correlation Test
from scipy.stats import norm
L = sorted([156,162,168,182,186,190,190,196,202,210,214,220,226,230,230,236,236,242,246,270])
n = len(L)
i = list(range(1,n+1))
u = [(i - .375)/(n + .25) for i in range(1, n + 1)]  # Blom plotting positions
q = norm.ppf(u)
#correlation test - turn formula on pg 28 into a function?
#Modified for the Exponential Distribution
from math import log, exp
w = sorted([12,21,26,27,29,29,48,57,59,70,74,153,326,386,502])
n = len(w)
lam = sum(w)/n
z = [1-exp(-x/lam) for x in w] #computes F0(X(i))
i = list(range(1,n + 1))
# K-S Computations:
d1 = [j/n - a for j, a in zip(i,z)]
dp = max(d1)
d2 = [a - (j - 1)/n for j, a in zip(i,z)]
dm = max(d2)
KS = max(dp,dm)
KSM = (KS-.2/n)*(sqrt(n)+.26+.5/sqrt(n))
print(KSM)
# Cramer-von Mises Computations:
wi = [(a-(2*j-1)/(2*n))**2 for j, a in zip(i,z)]
s = sum(wi)
cvm = s + 1/(12*n)
cvmM = cvm*(1+.16/n)
print(cvmM)
# Anderson-Darling Computations:
a1i = [(2*j-1)*log(a) for j, a in zip(i,z)]
a2i = [(2*n+1-2*j)*log(1-a) for j, a in zip(i,z)]
s1 = sum(a1i)
s2 = sum(a2i)
AD = -n-(1/n)*(s1+s2)
ADM = AD*(1+.6/n)
print(ADM)
#R code to find the MLE (to be converted to Python):
library(MASS)
x <- c(
17.88 , 28.92 , 33.00 , 41.52 , 42.12 , 45.60 , 48.40, 51.84 ,
51.96 , 54.12 , 55.56 , 67.80 , 68.64 , 68.64 , 68.88 , 84.12 ,
93.12 , 98.64 , 105.12 , 105.84 , 127.92 , 128.04 , 173.40)
fitdistr(x,"weibull")
# convert gofweibmle.r to gofweibmle.py
# The following program computes the Anderson-Darling Statistics
# for testing goodness of the fit of a
# Weibull Distribution
# with unspecified parameters (need to supply MLE's).
# The statistics include the modification needed to use the Tables included
# in the GOF handout.
# This example is based on a random sample of n=23 observations:
x = c(17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.40, 51.84,
51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12,
93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40)
n = length(x)
i = seq(1,n,1)
y = -log(x)
y = sort(y)
# Anderson-Darling: For Weibull Model
library(MASS)
mle <- fitdistr(x,"weibull")
shape = mle$estimate[1]
scale = mle$estimate[2]
a = -log(scale)
b = 1/shape
z = exp(-exp(-(y-a)/b))
A1i = (2*i-1)*log(z)
A2i = (2*n+1-2*i)*log(1-z)
s1 = sum(A1i)
s2 = sum(A2i)
AD = -n-(1/n)*(s1+s2)
ADM = AD*(1+.2/sqrt(n))
AD
ADM
n
n = length(y)
weib= -y
weib= sort(weib)
i= 1:n
ui= (i-.5)/n
QW= log(-log(1-ui))
plot(QW,weib,abline(lm(weib~QW)),
main="Weibull Reference Plot",cex=.75,lab=c(7,11,7),
xlab="Q=ln(-ln(1-ui))",
ylab="y=ln(W(i))")
legend(-3.5,5.0,"y=4.388+.4207Q")
legend(-3.5,4.7,"AD=.3721, p-value>.25")
#boxcox,samozone.R converted to boxcox_samozone.py
y = scan("u:/meth1/sfiles/ozone1.DAT")
n = length(y)
yt0 = log(y)
s = sum(yt0)
varyt0 = var(yt0)
Lt0 = -1*s - .5*n*(log(2*pi*varyt0)+1)
th = 0
Lt = 0
t = -3.01
i = 0
while(t < 3)
{t = t+.001
i = i+1
th[i] = t
yt = (y^t -1)/t
varyt = var(yt)
Lt[i] = (t-1)*s - .5*n*(log(2*pi*varyt)+1)
if(abs(th[i])<1.0e-10)Lt[i]<-Lt0
if(abs(th[i])<1.0e-10)th[i]<-0
}
# The following outputs the values of the likelihood and theta and yields
# the value of theta where likelihood is a maximum
out = cbind(th,Lt)
Ltmax= max(Lt)
imax= which(Lt==max(Lt))
thmax= th[imax]
postscript("boxcox,plotsam.ps",height=8,horizontal=FALSE)
plot(th,Lt,lab=c(30,50,7),main="Box-Cox Transformations",
xlab=expression(theta),
ylab=expression(Lt(theta)))
#the following plots a 95\% c.i. for theta
cic = Ltmax-.5*qchisq(.95,1)
del= .01
iLtci = which(abs(Lt-cic)<=del)
iLtciL= min(iLtci)
iLtciU= max(iLtci)
thLci= th[iLtciL]
thUci= th[iLtciU]
abline(h=cic)
abline(v=thLci)
abline(v=thUci)
abline(v=thmax)
#Reference distributions
qqnorm(x,main="Normal Prob Plots of Samford Ozone Data",
xlab="normal quantiles",ylab="ozone concentration",cex=.65)
qqline(x)
text(-2,200,"SW=.9288")
text(-2,190,"p-value=0")
y1= log(x)
y2= x^.23
y3= x^.5
s = shapiro.test(x)
s1 = shapiro.test(y1)
s2 = shapiro.test(y2)
s3 = shapiro.test(y3)
qqnorm(y2,main="Normal Prob Plots of Samford Ozone Data with (Ozone)^.23",
xlab="normal quantiles",ylab=expression(Ozone^.23),cex=.65)
qqline(y2)
text(-2,3.5,"SW=.9872")
text(-2,3.4,"p-value=.2382")
qqnorm(y1,main="Normal Prob Plots of Samford Ozone Data with Log(Ozone)",
xlab="normal quantiles",ylab="Log(Ozone)",cex=.65)
qqline(y1)
text(-2,5.0,"SW=.9806")
text(-2,4.85,"p-value=.0501")
qqnorm(y3,main="Normal Prob Plots of Samford Ozone Data with SQRT(Ozone)",
xlab="normal quantiles",ylab=expression(Ozone^.5),cex=.65)
qqline(y3)
text(-2,14.5,"SW=.9789")
text(-2,13.5,"p-value=.0501")
```
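The conversion the handout asks for (`gofweibmle.r` → `gofweibmle.py`) might start from the sketch below. It uses `scipy.stats.weibull_min.fit` with `floc=0` in place of R's `fitdistr(x, "weibull")`, so the fitted shape and scale (and hence the AD statistic) may differ slightly from the values quoted in the R plot legend.

```python
import numpy as np
from scipy.stats import weibull_min

x = np.sort(np.array([17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.40, 51.84,
                      51.96, 54.12, 55.56, 67.80, 68.64, 68.64, 68.88, 84.12,
                      93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40]))
n = len(x)
i = np.arange(1, n + 1)

# Two-parameter Weibull MLE (floc=0 pins the location at zero)
shape, loc, scale = weibull_min.fit(x, floc=0)

# z = F0(x_(i)) under the fitted Weibull, for the order statistics
z = weibull_min.cdf(x, shape, loc=loc, scale=scale)

# Anderson-Darling statistic and the handout's small-sample modification
s1 = np.sum((2 * i - 1) * np.log(z))
s2 = np.sum((2 * n + 1 - 2 * i) * np.log(1 - z))
AD = -n - (1 / n) * (s1 + s2)
ADM = AD * (1 + .2 / np.sqrt(n))
print("AD:", AD, "ADM:", ADM)
```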
## 07 Functions
- **Definition**: A function is a block of code which performs a specific task and can be called again and again as required in the code.
- A generic way to write a simple function is shown below-
```python
def function_name():
    # function body starts
    pass
    # function body ends

# calling the function in the code -
function_name()
```
- **DRY**: It stands for **D**o not **R**epeat **Y**ourself. It means that good code generally should not contain repeated blocks. So if you observe repetition of certain lines in your code, you can turn them into a function and call it as per your requirement. Let's see an example below-
### 7.1 A simple function definition
**EXAMPLE:**
```
def function_name():
    # function body starts
    pass
    # function body ends

# calling the function in the code -
function_name()
```
**EXAMPLE:**
```
def greetings():
    print("Hi there from iNeuron")

# calling the greetings function in the code -
greetings()
```
- We can also pass some values to the function. These values are known as arguments. A function can have the following kinds of arguments -
  - 1. Keyword arguments: Here we pass arguments by name
**EXAMPLE**:
Observe the below code from the Expressions, Operators, and Precedence chapter-
```
# Block One
BIKE = True
CAR = True
TRAVEL_100_KM = BIKE or CAR
print(f"You have BIKE: {BIKE}")
print(f"You have CAR: {CAR}")
print(f"You can travel 100 KMs: {TRAVEL_100_KM}\n")
# Block Two
BIKE = True
CAR = False
TRAVEL_100_KM = BIKE or CAR
print(f"You have BIKE: {BIKE}")
print(f"You have CAR: {CAR}")
print(f"You can travel 100 KMs: {TRAVEL_100_KM}\n")
# Block Three
BIKE = False
CAR = True
TRAVEL_100_KM = BIKE or CAR
print(f"You have BIKE: {BIKE}")
print(f"You have CAR: {CAR}")
print(f"You can travel 100 KMs: {TRAVEL_100_KM}\n")
# Block Four
BIKE = False
CAR = False
TRAVEL_100_KM = BIKE or CAR
print(f"You have BIKE: {BIKE}")
print(f"You have CAR: {CAR}")
print(f"You can travel 100 KMs: {TRAVEL_100_KM}\n")
```
**EXAMPLE**:
So the above code blocks are repetitive. Now let's observe the code below, which gives the same results as shown above but in far fewer lines -
```
def travel_or_not(BIKE, CAR):
    TRAVEL_100_KM = BIKE or CAR
    print(f"You have BIKE: {BIKE}")
    print(f"You have CAR: {CAR}")
    print(f"You can travel 100 KMs: {TRAVEL_100_KM}\n")

travel_or_not(False, False)
travel_or_not(False, True)
travel_or_not(True, False)
travel_or_not(True, True)
```
**EXAMPLE**:
Let's make it more concise using a for loop and remove the repetition-
```
def travel_or_not(BIKE, CAR):
    TRAVEL_100_KM = BIKE or CAR
    print(f"You have BIKE: {BIKE}")
    print(f"You have CAR: {CAR}")
    print(f"You can travel 100 KMs: {TRAVEL_100_KM}\n")

BIKE = [False, False, True, True]
CAR = [False, True, False, True]
for bike, car in zip(BIKE, CAR):
    travel_or_not(bike, car)
```
### 7.2 Default Arguments
**EXAMPLE**:
Let's see a different variant of the above code. You can also define a default value for the arguments, which will be used when no argument is passed, as shown below -
```
def travel_or_not(BIKE=True, CAR=False):
    TRAVEL_100_KM = BIKE or CAR
    print(f"You have BIKE: {BIKE}")
    print(f"You have CAR: {CAR}")
    print(f"You can travel 100 KMs: {TRAVEL_100_KM}\n")

travel_or_not()

def travel_or_not(BIKE=True, CAR=False):
    TRAVEL_100_KM = BIKE or CAR
    print(f"You have BIKE: {BIKE}")
    print(f"You have CAR: {CAR}")
    print(f"You can travel 100 KMs: {TRAVEL_100_KM}\n")

travel_or_not(False, True)  # overriding the default argument values by passing our own
```
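A related pitfall worth knowing (a side note, not covered above): default values are evaluated once, at function definition time, so a mutable default such as a list is shared between calls:

```python
def add_item(item, basket=[]):   # buggy: one list shared by every call
    basket.append(item)
    return basket

print(add_item("apple"))   # ['apple']
print(add_item("mango"))   # ['apple', 'mango']  <- surprise!

def add_item_fixed(item, basket=None):
    if basket is None:      # a fresh list on every call
        basket = []
    basket.append(item)
    return basket

print(add_item_fixed("apple"))  # ['apple']
print(add_item_fixed("mango"))  # ['mango']
```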
**EXAMPLE**:
Here ```None``` gets printed if we try to print the output of the given function, because it does not return any value; it just prints the results.
```
result = travel_or_not(False, True)
print(f"\nresults:-\n{result}")
```
### 7.3 Return statement
**EXAMPLE**:
In some cases you'd like to return some value from a function. For that, we use the return keyword to return the results, as shown below-
```
def travel_or_not(BIKE=True, CAR=False):
    TRAVEL_100_KM = BIKE or CAR
    print(f"You have BIKE: {BIKE}")
    print(f"You have CAR: {CAR}")
    return f"You can travel 100 KMs: {TRAVEL_100_KM}\n"

result = travel_or_not(False, True)
print(f"\nresults:-\n{result}")
```
**EXAMPLE**:
You can also return multiple values, as shown below -
```
def travel_or_not(BIKE=True, CAR=False):
    TRAVEL_100_KM = BIKE or CAR
    return BIKE, CAR, TRAVEL_100_KM

result1, result2, result3 = travel_or_not(False, True)
print(f"You have BIKE: {result1}")
print(f"You have CAR: {result2}")
print(f"You can travel 100 KMs: {result3}\n")
```
### 7.4 Variable length args
**EXAMPLE**:
At some point you may not know how many arguments you should pass. Let's check the example below -
```
def branch_and_subjects_in_graduation(*args, branch):
    print(f"My branch was {branch}")
    return f"I liked these subjects in graduation {args}"

results = branch_and_subjects_in_graduation("Digital Image Processing", "Microprocessor", branch="Electronics engineering")
print(results)

def branch_and_subjects_in_graduation(*args, branch="Electronics engineering"):
    print(f"My branch was {branch}")
    return f"I liked these subjects in graduation {args}"

results = branch_and_subjects_in_graduation("Digital Image Processing", "Microprocessor")
print(results)
```
### 7.5 Variable length keyword args
**EXAMPLE**:
Sometimes we need a varying number of keyword arguments, as shown below-
```
def marks_in_subjects_of_semester(**kwargs):
    print(kwargs)

marks_in_subjects_of_semester(Digital_Image_Processing=78, Microprocessor=79, Signals_and_systems=83)

def marks_in_subjects_of_semester(**kwargs):
    for subject, marks in kwargs.items():
        print(f"Score in {subject} = {marks}")

marks_in_subjects_of_semester(Digital_Image_Processing=78, Microprocessor=79, Signals_and_systems=83)
```
### 7.6 Functions inside functions -
**EXAMPLE**:
```
def marks_in_subjects_of_semester(**kwargs):
    def total_marks(marks_list):
        return sum(marks_list)

    marks_list = list()
    for subject, marks in kwargs.items():
        marks_list.append(marks)
        print(f"Score in {subject} = {marks}")
    return total_marks(marks_list)

results = marks_in_subjects_of_semester(Digital_Image_Processing=78, Microprocessor=79, Signals_and_systems=83)
print(f"\ntotal marks obtained {results}")
```
### 7.7 Anonymous Function or Lambda Function
**EXAMPLE**: Let's convert the example below into a one-liner, or lambda function
```python
def travel_or_not(BIKE=True, CAR=False):
    TRAVEL_100_KM = BIKE or CAR
    print(f"You have BIKE: {BIKE}")
    print(f"You have CAR: {CAR}")
    return f"You can travel 100 KMs: {TRAVEL_100_KM}\n"

result = travel_or_not(False, True)
print(f"\nresults:-\n{result}")
```
```
BIKE=True
CAR=False
result = lambda BIKE, CAR: f"You can travel 100 KMs: {BIKE or CAR}\n"
print(f"\nresults:-\n{result(BIKE, CAR)}")
```
**EXAMPLE**
Let's make a lambda function that returns the power of an input number. It takes the number and the power as inputs
```
number = 5
power = 3
result = lambda number, power: number**power
print(f"{number} to the power of {power} is {result(number, power)}")
```
### 7.8 Scope of variables
* Variables defined inside a function are known as local variables
* Global variables can be accessed anywhere in the script, unlike local variables, whose scope is limited to the function in which they are defined.
**EXAMPLE**
```
result = 0 # outside the scope of function
def divide(numerator, denominator):
    result = numerator//denominator  # result inside the scope of the function
    print(f"result after division inside the function: {result}")
numerator = 625
denominator = 5
divide(numerator, denominator)
print(f"result after division outside the function: {result}")
```
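One more detail about scope: assigning to a name inside a function creates a local variable by default. To rebind a global variable from inside a function, Python requires the `global` keyword, as in this small sketch:

```python
counter = 0  # a global variable

def increment():
    global counter   # without this line, counter += 1 raises UnboundLocalError
    counter += 1

increment()
increment()
print(counter)  # 2
```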
### 7.9 Docstring
```python
def function():
    """
    This is the docstring of the function.
    """
    pass
```
**EXAMPLE**
```
def find_power(number, power):
    """This function returns the power of the number

    Args:
        number (int): insert any integer value
        power (int): insert any integer value

    Returns:
        integer: power of number
    """
    return number**power
number = 5
power = 3
result = find_power(number, power)
print(f"{number} to the power of {power} is {result}")
# This prints the docstring.
# It's very good practice to write a docstring for every important function.
# It improves code readability.
help(find_power)
```
## Introduction
This notebook showcases both the use of notebooks in data exploration, and the use of Iris for reading and analysing datasets.
Sections are somewhat lengthy, because they showcase the data exploration part, where a user will want to examine the various variables and results. A shorter version is available for a very quick overview of analysing datasets with Jupyter notebooks.
## Reading the data from URLs
Iris can read from filenames or URLs, when files are served from an OpenDAP server. The `iris.load()` utility function takes a path or URL and will load the dataset lazily: it only reads the actual data when it's needed. This saves the memory and time overhead when there are multiple datasets inside a file and not all need to be used.
But neither Iris nor netCDF4 can read directly from a URL if the file is not hosted by an OpenDAP server (which includes, for example, ftp sites).
Since Iris also can't read a binary stream (file object), we are forced to first download the file, then save it to a temporary file, then read from that file. Note that the latter temporary file is still lazily loaded, though we had to download the full file over the internet.
```
from tempfile import NamedTemporaryFile
from urllib.request import urlopen
import iris
```
Note that `iris.load()` returns a list of cubes, a `CubeList`, not a single cube. In case of a single datacube inside the file, that would result in a one-element list. In this case, there are two cubes, although the latter one is of less interest for the actual figure we'll make.
```
url = 'https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT.4.6.0.0.median.nc'
with urlopen(url) as stream, NamedTemporaryFile(delete=False, suffix=".nc") as fp:
    fp.write(stream.read())

cubes = iris.load(fp.name)
cubes
```
A slight disadvantage of the temporary file is that it may not get cleaned up immediately after we stop using the file. In fact, with Iris's lazy loading, we have to keep the file around until we're completely done with the data, hence the argument `delete=False`.
But each time we run the above cell, we create a new temporary file, and we may end up with a lot of temporary files containing the same data, taking up a lot of disk space. It is then more logical to just save the data to a specific named file, and use that all the time. For this demonstration, we use the temporary file option.
Since the above code can be annoying enough to remember, a small utility function `fetch` has been created; it needs to be imported from the `eucp` module. The `fetch` function takes a filepath or URI as first argument, and an optional `path` argument as the second argument: if `path` is set, it will be the location of the output file. If it's not set, a temporary file, like above, is created. If `path` is a directory, the filename is deduced from the URI and the file is created in that directory. Let's use the latter option with a newly created directory:
```
import os
os.makedirs('datafiles', exist_ok=True)
```
You should see the directory appear in the browser area of the notebook. Let's download and save the dataset:
```
url = 'https://crudata.uea.ac.uk/cru/data/temperature/HadCRUT.4.6.0.0.median.nc'
#url = 'datafiles/HadCRUT.4.6.0.0.median.nc' # Use this for faster loading after the first download
import eucp
cubes = eucp.fetch(url, 'datafiles')
cubes
```
## Examining the data
Iris doesn't order the list of cubes returned, so we'll first make sure they are in the order we want them (so that this notebook is consistent).
```
if cubes[0].name() == "field_status":
cubes = cubes[::-1]
cubes
```
Let's examine the first cube. Iris provides a nice representation in notebooks:
```
cubes[0]
```
And the second cube (which we'll discard later on):
```
cubes[1]
```
Out of curiosity, what is precisely in that second cube?
Let's look at the unique data values.
```
import numpy as np
np.unique(cubes[1].data), np.unique(cubes[1].data.data[:-20])
```
A set of 2030 'f' and 'p' single characters, relating to the time coordinate, with 'p' values only at the end of the array. Presumably related to the validation or verification of the data.
Let's have a look at the coordinates of the first dataset:
```
cube1 = cubes[0]
cube1.coords()
```
## Fixing the dimensions
The coordinates are missing bounds, which we'll want to use later on, when averaging over the area.
Iris has a utility function that allows it to guess the bounds from the grid, which in this case will work well, since it's a very regular grid.
Note that in the cell below, a few lines have been commented out:
- when bounds already exist (guessed or otherwise), Iris will not let you override them. You have to explicitly set the bounds to `None` before you can guess them again. So when running the cell multiple times, the first three lines need to be uncommented to avoid errors.
- the bounds consist of two-dimensional NumPy arrays, which produce lengthy output when printed with their default formatting. To avoid that, the last line is commented out, but feel free to uncomment it and see what the guessed bounds are.
```
#cube1.coord('time').bounds = None
#cube1.coord('latitude').bounds = None
#cube1.coord('longitude').bounds = None
cube1.coord('time').guess_bounds()
cube1.coord('latitude').guess_bounds()
cube1.coord('longitude').guess_bounds()
#cube1.coord('latitude'), cube1.coord('longitude'), cube1.coord('time')
```
## Reading the other datasets
Before proceeding with the analysis, let's read the other datasets of interest.
```
url = 'ftp://ftp.cdc.noaa.gov/Datasets/noaaglobaltemp/air.mon.anom.nc'
#url = 'datafiles/air.mon.anom.nc' # Use this for faster loading after the first download
cubes = eucp.fetch(url, 'datafiles')
cubes
```
There is only one cube of interest in this list; let's examine that cube
```
cube2 = cubes[0]
cube2
```
Note that the time coordinate is an auxiliary coordinate; let's have a closer look
```
cube2.coord('time')
```
The bounds for the time coordinate are all off. These look like fill values, and the original input had masked bounds.
Let's guess the proper bounds instead.
```
cube2.coord('time').bounds = None
cube2.coord('time').guess_bounds()
```
Once the bounds are fixed, we can make the time coordinate a proper dimension coordinate. Iris has a utility function to upgrade an auxiliary coordinate to a dimension coordinate.
```
iris.util.promote_aux_coord_to_dim_coord(cube2, 'time')
cube2.coord('time')
```
Much better. Let's update the bounds for the longitude and latitude as well.
```
#cube2.coord('latitude').bounds = None
#cube2.coord('longitude').bounds = None
cube2.coord('latitude').guess_bounds()
cube2.coord('longitude').guess_bounds()
#cube2.coord('latitude'), cube2.coord('longitude')
```
The same procedure applies to the last dataset: load it, select the right cube, and fix the coordinates as necessary
```
url = 'ftp://ftp.cdc.noaa.gov/Datasets/gistemp/landonly/250km/air.2x2.250.mon.anom.land.nc'
#url = 'datafiles/air.2x2.250.mon.anom.land.nc' # Use this for faster loading after the first download
cubes = eucp.fetch(url, 'datafiles')
cubes
cube3 = cubes[0]
cube3.coords()
#cube3.coord('latitude').bounds = None
#cube3.coord('longitude').bounds = None
cube3.coord('latitude').guess_bounds()
cube3.coord('longitude').guess_bounds()
#cube3.coord('latitude'), cube3.coord('longitude')
```
We'll put all three cubes into a list (it could even be a set, since order doesn't really matter here).
Iris has a `CubeList` class, but since we won't use any of its specific functionality, a normal list suffices
```
cubes = [cube1, cube2, cube3]
cubes
```
## Calculating the global mean
To calculate the global mean temperature, we "collapse" the cube over latitude and longitude, using the mean function, and with area weights applied. The area weights need to be calculated separately for each cube, and are obtained from the latitude and longitude bounds.
```
means = []
for cube in cubes:
weights = iris.analysis.cartography.area_weights(cube)
mean = cube.collapsed(['latitude', 'longitude'], iris.analysis.MEAN, weights=weights)
means.append(mean)
means
```
Each mean is still an Iris cube, with all relevant meta information. The dimensions are obviously reduced to just time; let's look at one cube
```
means[0]
```
## Plotting the global means
Let's plot the three, now one-dimensional, datasets. Matplotlib is the default plotting package, with an easy-to-use interface through its Pyplot module. Iris provides its own plotting module, essentially a wrapper around Matplotlib/Pyplot, that can handle Iris cubes directly.
In notebooks, we can use the so-called "magic" command `%matplotlib inline` to let Jupyter render the figure in place: no need to save it to a file or anything, it just appears directly below the cell once executed. That's great for interactive exploration.
We just loop through the three averaged datasets and plot them on top of each other.
```
import iris.plot as iplt
import matplotlib.pyplot as plt
%matplotlib inline
for mean in means:
iplt.plot(mean)
```
## Calculating a zeropoint for the datasets
The monthly resolution is a bit too much, certainly over the relevant timespan. We'll fix that later, but we'll first account for the fact that the three datasets may not have the same zeropoint. Here, we'll use the 1961 to 1990 date range to calculate an average zeropoint for each dataset individually.
We can use plain Python `datetime.datetime` objects to start with: Iris will handle this properly behind the scenes, since it knows the unit of the various time coordinates. We use an `iris.Constraint`, which can take a function that will be applied to any point of the relevant coordinate to filter it.
`lambda` is Python's way to create a quick, anonymous function. For those not familiar with it: read it as
    def anonymous_function(cell):
        return start <= cell.point <= stop
```
from datetime import datetime
start = datetime(1961, 1, 1)
stop = datetime(1990, 12, 31)
timespan = iris.Constraint(time=lambda cell: start <= cell.point <= stop)
```
Now loop over the means, extracting the relevant timespan, and then averaging that timespan (collapsing with a mean). Since `mean.extract` returns another cube, we can chain the two operations.
```
zeropoints = []
for mean in means:
zeropoint = mean.extract(timespan).collapsed('time', iris.analysis.MEAN)
zeropoints.append(zeropoint)
zeropoints
```
There'll be a warning that the averaging happened over a non-contiguous coordinate. Presumably there are occasional gaps in the time coordinate, and the time bounds don't fully cover the timespan. For ease, we assume that, overall, Iris did the right thing and our zeropoint is correct.
Even the zeropoints, while scalars, are Iris cubes that include units and the full metadata:
```
zeropoints[0]
```
Instead of retrieving the actual data values, we can simply subtract one cube from the other: Iris handles this under the hood. Thus, for each dataset separately, we subtract the zeropoint.
```
zpmeans = []
for mean, zeropoint in zip(means, zeropoints):
zpmeans.append(mean - zeropoint)
```
Subtracting the zeropoints makes a visible difference in the resulting plot:
```
for zpmean in zpmeans:
iplt.plot(zpmean)
```
## Calculating a rolling average
Let's smooth the data with a rolling average. Iris cubes have a `rolling_window` method for this. It takes the coordinate to smooth over, an aggregation method (again, the mean here) and a window size (I've picked 12, i.e. 12 months).
(Don't be deceived by the use of a list comprehension here: it is the same kind of loop as before, just written more compactly.)
```
smootheds = [cube.rolling_window('time', iris.analysis.MEAN, 12) for cube in zpmeans]
for smoothed in smootheds:
iplt.plot(smoothed)
```
## Calculating seasonal and yearly averages
We can also calculate the seasonal and yearly averages. For this, we use the `aggregated_by` method on cubes, but we need to add two auxiliary coordinates to the datasets first: the season-year and year coordinates. We can use the `add_season` and `add_season_year` utility functions from the `iris.coord_categorisation` module (note: this module has to be loaded explicitly, unlike other modules, which are loaded automatically with `import iris`).
```
import iris.coord_categorisation
for cube in zpmeans:
iris.coord_categorisation.add_season(cube, 'time', name='clim_season')
iris.coord_categorisation.add_season_year(cube, 'time', name='season_year')
zpmeans[0]
```
Calculate and plot the seasonal averages for each year.
```
for cube in zpmeans:
annual_seasonal_mean = cube.aggregated_by(['clim_season', 'season_year'], iris.analysis.MEAN)
iplt.plot(annual_seasonal_mean)
```
That is actually only marginally better; let's try just the average temperature per year
```
for cube in zpmeans:
annual_seasonal_mean = cube.aggregated_by('season_year', iris.analysis.MEAN) # ['season_year'] also works as argument
iplt.plot(annual_seasonal_mean)
```
<table>
<tr align=left><td><img align=left src="./images/CC-BY.png"></td>
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli</td></tr>
</table>
```
from __future__ import print_function
from __future__ import absolute_import
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
```
# Interpolation
There are times when you have estimates for the values of a function for specific inputs. The values of the function may be obtained in a variety of ways either through experiment or through the use of other approximation techniques. Our goal in this chapter is to explore techniques that allow us to determine a new function whose values match the known observations at a set of predetermined input values. We first formally define the term we will use to describe the process.
**Definition:** Given a discrete set of values $y_i$ at locations $x_i$, an __*interpolant*__ is a (piece-wise) continuous function $f(x)$ that passes exactly through the data (*i.e.* $f(x_i) = y_i$).
**Example 0** The linear polynomial
$$
P_1(x) = 2(x-1)+3
$$
interpolates the coordinates $(1,3)$ and $(3,7)$.
In general a polynomial of degree $N$ can be used to interpolate $N+1$ data points. There are many different kinds of functions to use to interpolate values, but here we focus on polynomials.
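A quick numerical check of Example 0 (a minimal sketch; the function name `P1` is just for illustration):

```python
def P1(x):
    """Linear interpolant from Example 0."""
    return 2 * (x - 1) + 3

# P1 passes exactly through both data points
assert P1(1) == 3
assert P1(3) == 7
```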
## Applications
- Data filling
- Function approximation: only have data on discrete set of x, approximate continuous function
- Fundamental component of other algorithms
- Root finding (secant method)
- Optimization, minima/maxima (successive parabolic interpolation)
- Numerical integration and differentiation
- The Finite Element Method
## Polynomial Interpolation
**Theorem:** There is a *unique* polynomial of degree $N$, $P_N(x)$, that passes exactly through $N + 1$ values $y_1, y_2, \ldots, y_N, y_{N+1}$ at *distinct* points $x_1, x_2, \ldots, x_N, x_{N+1}$.
Consequence of the number of unknowns in $P_N(x)$.
#### Example 1: 2 Points
Given points are $(x_0, y_0)$ and $(x_1, y_1)$ which will lead to a line:
Define $P_1(x) = p_1 x + p_0$ and use the two points to find $p_0$ and $p_1$:
We first note that we have two equations and two unknowns. The two equations can be found by assuming the function $P_1(x)$ interpolates the two data points
$$
\begin{align}
y_0 &= p_1 x_0 + p_0, \\
y_1 &= p_1 x_1 + p_0.
\end{align}
$$
In this example we will solve the first equation for $p_0$, substitute the result into the second equation, and then solve for $p_1$.
$$y_0 = p_1 x_0 + p_0 \quad \Rightarrow \quad p_0 = y_0 - p_1 x_0$$
Or in matrix form:
$$
\begin{bmatrix}
y_0 \\
y_1
\end{bmatrix} =
\begin{bmatrix}
1 & x_0 \\
1 & x_1
\end{bmatrix}
\begin{bmatrix}
p_0 \\
p_1
\end{bmatrix}
$$
$$\begin{aligned}
y_1 &= p_1 x_1 + p_0 & \Rightarrow \\
y_1 &= p_1 x_1 + y_0 - p_1 x_0 & \Rightarrow \\
p_1 &= \frac{y_1 - y_0}{x_1 - x_0} & \Rightarrow \\
p_0 &= y_0 - \frac{y_1 - y_0}{x_1 - x_0} x_0 &
\end{aligned}$$
$$P_1(x) = \frac{y_1 - y_0}{x_1 - x_0} x + y_0 - \frac{y_1 - y_0}{x_1 - x_0} x_0 = \frac{y_1 - y_0}{x_1 - x_0} (x - x_0) + y_0$$
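The closed-form coefficients can be verified directly with a pair of sample points (the data values below are arbitrary):

```python
# Two arbitrary data points
x0, y0 = 1.0, 3.0
x1, y1 = 3.0, 7.0

# Coefficients from the derivation above
p1 = (y1 - y0) / (x1 - x0)   # slope
p0 = y0 - p1 * x0            # intercept

def P1(x):
    return p1 * x + p0

# The line interpolates both points exactly
assert P1(x0) == y0 and P1(x1) == y1
```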
#### Example 2: 3 Points
Given points are $(x_0, y_0)$, $(x_1, y_1)$, and $(x_2, y_2)$, which will lead to a quadratic polynomial:
Define $P_2(x) = p_2 x^2 + p_1 x + p_0$ leading to the equations
$$y_0 = p_2 x_0^2 + p_1 x_0 + p_0$$
$$y_1 = p_2 x_1^2 + p_1 x_1 + p_0$$
$$y_2 = p_2 x_2^2 + p_1 x_2 + p_0$$
This gets complicated quickly! Note that we again have as many equations as unknowns: the above is a linear system of three equations in three unknowns.
and in general, the problem will reduce to a linear system
$$
A(\mathbf{x})\mathbf{p} = \mathbf{y}
$$
A more general approach to solving the system will be explored later, but first it is important to determine whether or not the system even has a solution.
For the system to have a unique solution, $A$ must be square and invertible.
### Proof - Uniqueness of Polynomial Interpolants
Let
$$\mathcal{P}_N(x) = \sum^N_{n=0} p_n x^n $$
or
$$\mathcal{P}_N(x) = p_0 + p_1 x + \cdots + p_{N - 1} x^{N - 1} + p_{N} x^N$$
and require $\mathcal{P}_N(x_i) = y_i$ for $i=0,1,\ldots,N$ and $x_i \neq x_j ~~~ \forall i,j$.
### Preliminaries: Monomial Basis
We can think of $\mathcal{P}_N(x) = \sum^N_{n=0} p_n x^n$ as a polynomial, or more fundamentally as a *linear combination* of a set of simpler functions, the monomials
$$1, x, x^2, x^3, \ldots, x^{N-1}, x^N$$
with weights
$$p_0, p_1, p_2, p_3, \ldots, p_{N-1}, \text{and } p_N$$
respectively.
### Linear independence of the Monomials
The monomials form a *linearly independent* set of functions: no monomial $x^n$ can be written as a linear combination of the other monomials. We can see this graphically for the first few monomials.
```
x = numpy.linspace(-1,1,100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1,1,1)
for n in range(4):
axes.plot(x,x**n,label='$x^{}$'.format(n))
axes.set_xlabel('x')
axes.grid()
axes.legend(loc='best')
axes.set_title('The First 4 Monomials')
plt.show()
```
More fundamentally, a set of functions is linearly independent if the only linear combination of them that forms the zero function, e.g.
$$
P_N(x) = p_0 1 + p_1 x + p_2 x^2 + \ldots + p_n x^n = 0
$$
is the one with all coefficients $p_i = 0$, $\forall\, i = 0, \ldots, N$.
**Theorem**: The monomials $x^0,\ldots, x^n$ are linearly independent.
**Proof**: consider $P_N(x) = 0$ for all $x$. Since the polynomials (and monomials) are differentiable at least $n$ times, differentiate $n$ times to yield
$$
P^{(n)}_N(x) = n!p_n = 0
$$
which implies $p_n=0$.
Using this result and differentiating $n-1$ times shows $p_{n-1}=0$, which by induction gives all $p_i = 0$.
Put another way, the only polynomial of degree $n$ that is zero everywhere is the one with all coefficients equal to zero.
#### The Fundamental theorem of algebra
Every polynomial of degree $n$ has exactly $n$ complex roots (counting multiplicity) and hence at most $n$ distinct real roots, i.e.
$$
P_N(x) = p_n (x - a_1)(x - a_2)\ldots(x - a_n)
$$
for $a_i\in \mathbb{C}$. Therefore, a non-trivial polynomial of degree $n$ can be zero at no more than $n$ points.
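As a small illustration, `numpy.roots` recovers the zeros of $(x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6$, confirming this cubic vanishes at exactly three points:

```python
import numpy

# Coefficients of x^3 - 6x^2 + 11x - 6, highest degree first
roots = numpy.sort(numpy.roots([1.0, -6.0, 11.0, -6.0]).real)
assert numpy.allclose(roots, [1.0, 2.0, 3.0])
```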
### Proof - Uniqueness of Polynomial Interpolants
Let
$$\mathcal{P}_N(x) = \sum^N_{n=0} p_n x^n $$
**interpolate** the $N+1$ points $y_i$ at $x_i$.
i.e.
$$\mathcal{P}_N(x_i) = y_i,\quad \mathrm{for}\quad i=0,1,\ldots,N
$$
and $x_i \neq x_j ~~~ \forall i,j$.
Assume there exists another polynomial
$$Q_N(x) = \sum^N_{n=0} q_n x^n$$
that passes through the same set of points, so that $Q_N(x_i) = y_i$. For uniqueness, we must show $p_n = q_n$ for all $n$. Consider $T_N(x) = \mathcal{P}_N(x) - Q_N(x)$:
Now, by construction, $T_N(x_i) = 0$, which implies that it is zero at $N+1$ points. However,
$$T_N(x) = \mathcal{P}_N(x) - Q_N(x) = \sum^N_{n=0} p_n x^n - q_n x^n = \sum^N_{n=0} (p_n - q_n) x^n$$
is a polynomial of degree $N$, which has at most $N$ real roots. The only way to reconcile this is if $T_N(x) = 0$ for all $x$; therefore $p_n - q_n = 0$ for each $n$, and hence $\mathcal{P}_N(x) = Q_N(x)$.
#### Example 3: Monomial Basis
Consider $\mathcal{P}_3(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3$ with the four data points $(x_i, y_i), ~~ i = 0,1,2,3$. We have four equations and four unknowns as expected:
$$\mathcal{P}_3(x_0) = p_0 + p_1 x_0 + p_2 x_0^2 + p_3 x_0^3 = y_0$$
$$\mathcal{P}_3(x_1) = p_0 + p_1 x_1 + p_2 x_1^2 + p_3 x_1^3 = y_1$$
$$\mathcal{P}_3(x_2) = p_0 + p_1 x_2 + p_2 x_2^2 + p_3 x_2^3 = y_2$$
$$\mathcal{P}_3(x_3) = p_0 + p_1 x_3 + p_2 x_3^2 + p_3 x_3^3 = y_3$$
Let's rewrite these as a matrix equation:
$$\mathbf{x} = \begin{bmatrix} x_0 \\ x_1 \\ x_2 \\ x_3 \end{bmatrix} \quad \mathbf{y} = \begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{bmatrix} \quad \mathbf{p} = \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{bmatrix}$$
When we write the system in matrix/vector form, the matrix that arises is called a __*Vandermonde*__ matrix:
$$
V = \begin{bmatrix}
1 & x_0 & x_0^2 & x_0^3 \\
1 & x_1 & x_1^2 & x_1^3 \\
1 & x_2 & x_2^2 & x_2^3 \\
1 & x_3 & x_3^2 & x_3^3
\end{bmatrix}.
$$
We can now write the system of linear equations as $V \mathbf{p} = \mathbf{y}$:
$$\begin{bmatrix}
1 & x_0 & x_0^2 & x_0^3 \\
1 & x_1 & x_1^2 & x_1^3 \\
1 & x_2 & x_2^2 & x_2^3 \\
1 & x_3 & x_3^2 & x_3^3
\end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{bmatrix}.$$
**Note**: the columns of $V$ are simply the monomial functions sampled at the discrete points $x_i$. Because the monomials are linearly independent and the $x_i$ are distinct, the columns of $V$ are linearly independent as well.
$$\begin{bmatrix}
1 & x_0 & x_0^2 & x_0^3 \\
1 & x_1 & x_1^2 & x_1^3 \\
1 & x_2 & x_2^2 & x_2^3 \\
1 & x_3 & x_3^2 & x_3^3
\end{bmatrix} \begin{bmatrix} p_0 \\ p_1 \\ p_2 \\ p_3 \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ y_3 \end{bmatrix}$$
- Uniqueness: $V$ is invertible
- What happens if we have redundant data? Either $(x_i, y_i)$ is repeated or for one $i$ we have two values of $y$.
- $V$ is singular
- What if we have more points than the degree of polynomial we want?
- Add rows to $V$: no longer square matrix
- If extra points are consistent, can interpolate
- If not, will need to approximate (least squares)
- How does this relate to solving the above linear system of equations?
Vandermonde matrices in general are defined as
$$V = \begin{bmatrix}
1 & x_0 & x_0^2 & \cdots & x_0^N \\
1 & x_1 & x_1^2 & \cdots & x_1^N \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_m & x_m^2 & \cdots & x_m^N \\
\end{bmatrix}
$$
where $V$ is an $(m+1) \times (N+1)$ matrix built from the points $x_i$ for $i = 0, 1, 2, \ldots, m$ and a degree-$N$ polynomial $\mathcal{P}_N(x)$.
### Finding $p_i$
Finding the coefficients of $\mathcal{P}_N(x)$ can be done by solving the system outlined above. There are functions in `numpy` that can do this for us such as:
- `numpy.polyfit(x, y, x.shape[0] - 1)`
- `numpy.vander(x, N=None)` to construct the matrix and use a linear solver routine.
We can also use a different **basis** that might be easier to use.
Note: large Vandermonde matrices are ill-conditioned, since columns corresponding to closely spaced points become nearly linearly dependent.
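A minimal sketch of both routes on a small, made-up dataset; each should recover the same cubic coefficients:

```python
import numpy

x = numpy.array([0.0, 1.0, 2.0, 3.0])
y = numpy.array([1.0, 3.0, 2.0, 5.0])

# Option 1: build the Vandermonde matrix and solve V p = y.
# numpy.vander orders columns highest power first by default;
# increasing=True gives the [1, x, x^2, x^3] ordering used above.
V = numpy.vander(x, increasing=True)
p = numpy.linalg.solve(V, y)

# Option 2: polyfit returns coefficients highest power first
p_fit = numpy.polyfit(x, y, x.shape[0] - 1)

assert numpy.allclose(p, p_fit[::-1])
# Both interpolate the data exactly
assert numpy.allclose(V @ p, y)
```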
### Basis
**Def:** A basis for a $N$ dimensional vector space is a set of linearly independent vectors that span the space.
The monomials, $1,x,\ldots, x^n$, form the usual basis for the vector space of $n$th degree polynomials $P_N(x)$.
**Example:** $P_2(x)$ is the space of all quadratic functions, i.e. $P_2(x) = \mathrm{span}\langle 1, x, x^2 \rangle$
$$
P_2(x) = p_0 + p_1 x + p_2 x^2
$$
i.e. for every vector $\mathbf{p}\in\mathbb{R}^3$ there is a unique quadratic function in $P_2(x)$ (we say $P_2$ is *isomorphic* to $\mathbb{R}^3$ and is a three-dimensional function space).
**However**, the monomials are not the only basis for $P_N$, and we want to choose a good basis to work with.
### Lagrange Basis
Given $N+1$ points $(x_0,y_0), (x_1,y_1), \ldots, (x_{N},y_{N})$ again assuming the $x_i$ are all unique, the interpolating polynomial $\mathcal{P}_N(x)$ can be written as
$$\mathcal{P}_N(x) = \sum^{N}_{i=0} y_i \ell_i(x)$$
where
$$\ell_i(x) = \prod^{N}_{j=0, j \neq i} \frac{x - x_j}{x_i - x_j} = \frac{x - x_0}{x_i - x_0} \frac{x - x_1}{x_i - x_1} \cdots \frac{x - x_{i-1}}{x_i - x_{i-1}}\frac{x - x_{i+1}}{x_i - x_{i+1}} \cdots \frac{x - x_{N}}{x_i - x_{N}}$$
are the **Lagrange Polynomials**
### Lagrange Polynomials
$$\ell_i(x) = \prod^{N}_{j=0, j \neq i} \frac{x - x_j}{x_i - x_j} $$
A Key property of the Lagrange polynomials is that
$$
\ell_i(x_j) = \delta_{ij} = \left\{\begin{matrix}
0 & i\neq j \\
1 & i=j\\
\end{matrix}\right.
$$
which is why the weights in $P_N(x)$ are simply the $y$ values of the interpolant
### Solving for the coefficients of $P_N(x)$
In general, if
$$
P_N(x) = \sum_{j=0}^N w_j\phi_j(x)
$$
where $\phi_j(x)$ is any basis function for $P_N$ (i.e. monomial, Lagrange, and there are many more). Then finding the unique set of weights for the interpolating polynomial through $N+1$ distinct data points $(x_i, y_i)$, just reduces to solving $N+1$ linear equations $y_i = P_N(x_i)$.
For the monomial basis this reduces to the linear system
$$
V(\mathbf{x})\mathbf{w} = \mathbf{y}
$$
What is the matrix for the Lagrange Basis?
$$
y_0 = w_0\ell_0(x_0) + w_1\ell_1(x_0) + w_2\ell_2(x_0) \\
y_1 = w_0\ell_0(x_1) + w_1\ell_1(x_1) + w_2\ell_2(x_1) \\
y_2 = w_0\ell_0(x_2) + w_1\ell_1(x_2) + w_2\ell_2(x_2) \\
$$
For nodes $x_i$ and basis functions $\ell_j$:
- When $i = j$, $\ell_j(x_i) = 1$
- When $i \neq j$, $\ell_j(x_i) = 0$
__So the matrix for the Lagrange basis is the identity, the weights are $w_i = y_i$, and hence $P_N(x) = \sum_{i=0}^{N}y_i\ell_i(x)$__
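A small self-contained numerical check that the matrix $L_{ij} = \ell_j(x_i)$ is indeed the identity (the node values below are arbitrary):

```python
import numpy

nodes = numpy.array([0.0, 0.5, 1.5, 2.0])
N = nodes.shape[0]

# L[i, j] = ell_j(x_i): j-th Lagrange basis evaluated at the i-th node
L = numpy.ones((N, N))
for i in range(N):
    for j in range(N):
        for k in range(N):
            if k != j:
                L[i, j] *= (nodes[i] - nodes[k]) / (nodes[j] - nodes[k])

assert numpy.allclose(L, numpy.eye(N))
```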
### Visualizing the Lagrange Polynomials
```
# ====================================================
# Compute the Lagrange basis (\ell_i(x))
def lagrange_basis(x, data):
"""Compute Lagrange basis at x given data"""
basis = numpy.ones((data.shape[0], x.shape[0]))
for i in range(data.shape[0]):
for j in range(data.shape[0]):
if i != j:
basis[i, :] *= (x - data[j, 0]) / (data[i, 0] - data[j, 0])
return basis
# ====================================================
# Calculate full polynomial
def poly_interpolant(x, data):
"""Compute polynomial interpolant of (x,y) using Lagrange basis"""
P = numpy.zeros(x.shape[0])
basis = lagrange_basis(x, data)
for n in range(data.shape[0]):
P += basis[n, :] * data[n, 1]
return P
# ====================================================
x_data = numpy.array([0., 1., 2., 3.])
y_data = numpy.ones(x_data.shape)
data = numpy.array([x_data, y_data]).T
x = numpy.linspace(x_data.min(),x_data.max(),100)
data
# ====================================================
# Plot individual basis functions
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
basis = lagrange_basis(x, data)
for i in range(len(x_data)):
axes.plot(x, basis[i, :], label="$\ell_{%s}(x)$" % i)
axes.set_title("Lagrange Basis $\ell_i(x)$")
axes.set_xlabel("x")
axes.set_ylabel("$\ell_i(x)$")
axes.grid()
axes.legend(loc='best')
plt.show()
```
### Linear Independence of the Lagrange Polynomials
Because the weight of each basis function in the Lagrange basis is just the $y$ value at the corresponding interpolation point, it is straightforward to show that the Lagrange polynomials are linearly independent. That is, the statement
$$
\sum_{j=0}^N w_j\ell_j(x) = 0
$$
is equivalent to interpolating the zero function, which forces all the $w_j = 0$.
**Example 0 Revisited** In example 0 above the linear polynomial that interpolates the coordinates $(1,3)$ and $(3,7)$ was simply stated as
$$
P_1(x) = 2(x-1)+3.
$$
Another way to look at this example is to first note that when we add two linear polynomials
the result is another linear polynomial. The first polynomial to define interpolates $(1,1)$
and $(3,0)$,
$$
\ell_0(x) = \frac{x-3}{1-3}.
$$
The second polynomial to define interpolates $(1,0)$ and $(3,1)$,
$$
\ell_1(x) = \frac{x-1}{3-1}.
$$
A linear combination of these two functions can be defined that will interpolate the points $(1,3)$ and $(3,7)$,
$$
P_1(x) = 3\cdot\ell_0(x) + 7\cdot\ell_1(x).
$$
The graphs of these functions are shown below.
```
# =============================================================
# Plot the two example basis functions in the current example
x = numpy.linspace(1.0, 3.0, 2)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.set_ylim([0,9])
axes.plot(x, (x-3)/(-2), color='r', label="$\ell_{%s}(x)$" % 0)
axes.plot(x, (x-1)/(2), color='b', label="$\ell_{%s}(x)$" % 1)
axes.plot(x, 3*(x-3)/(-2) + 7*(x-1)/(2),color='g',label='interpolant')
axes.set_title("Interpolant for (1,3) and (3,7)")
axes.set_xlabel("x")
axes.grid()
plt.show()
```
#### Example 4: $N = 1$ Lagrange Polynomial
Given 2 points $(x_0, y_0)$ and $(x_1, y_1)$ the Lagrange form of $\mathcal{P}_N(x)$ is given by
$$\ell_0(x) = \frac{x - x_1}{x_0 - x_1}$$
and
$$\ell_1(x) = \frac{x - x_0}{x_1 - x_0}$$
so that
$$\mathcal{P}_1(x) = \ell_0(x) \cdot y_0 + \ell_1(x) \cdot y_1 = \frac{x - x_1}{x_0 - x_1} \cdot y_0 + \frac{x - x_0}{x_1 - x_0} \cdot y_1$$
One important aspect of Lagrange polynomials to note is that the $\ell_i(x)$ functions are exactly 1 when $x = x_i$ and that every other $\ell_j(x)$ where $j \neq i$ is 0.
Partition of unity: the Lagrange basis functions always sum to 1, since they must reproduce the constant function exactly.
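The partition-of-unity property follows from uniqueness: interpolating the constant data $y_i = 1$ must return the constant polynomial 1, so the basis functions sum to 1 everywhere. A quick check with arbitrary nodes:

```python
import numpy

nodes = numpy.array([-1.5, 0.0, 0.5, 2.0])
x = numpy.linspace(-2.0, 3.0, 50)

# Sum the Lagrange basis functions evaluated on a grid of x values
total = numpy.zeros_like(x)
for j in range(nodes.shape[0]):
    ell = numpy.ones_like(x)
    for k in range(nodes.shape[0]):
        if k != j:
            ell *= (x - nodes[k]) / (nodes[j] - nodes[k])
    total += ell

assert numpy.allclose(total, 1.0)
```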
```
data = numpy.array([[-1.5, -0.5], [0.0, 0.5]])
# data = numpy.array([[-1.5, -0.5], [0.0, 0.5], [-0.5, 1.0]])
N = data.shape[0] - 1
M = data.shape[0]
x = numpy.linspace(-2.0, 2.0, 100)
# Plot individual basis functions
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
basis = lagrange_basis(x, data)
for i in range(N + 1):
axes.plot(x, basis[i, :], label="$\ell_{%s}(x)$" % i)
axes.grid()
axes.set_title("Lagrange Basis $\ell_i(x)$")
axes.set_xlabel("x")
axes.set_ylabel("$\ell_i(x)$")
axes.legend(loc=8)
# Plot full polynomial P_N(x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, poly_interpolant(x, data), label="$P_{%s}(x)$" % N)
for point in data:
axes.plot(point[0], point[1], 'ko')
axes.set_title("$P_N(x)$")
axes.set_xlabel("x")
axes.set_ylabel("$P_N(x)$")
axes.grid()
plt.show()
```
#### Example 5: Interpolating $\sin(2 \pi x)$
Use equally spaced points to approximate $f(x) = \sin(2 \pi x)$ on the interval $x \in [-1, 1]$ (try varying `num_points` in the cell below). What is the behavior as $N \rightarrow \infty$? Also plot the error between $f(x)$ and the interpolant $P_N(x)$.
```
num_points = 20
# num_points = 5
# num_points = 6
# num_points = 21
data = numpy.empty((num_points, 2))
data[:, 0] = numpy.linspace(-1, 1, num_points)
data[:, 1] = numpy.sin(2.0 * numpy.pi * data[:, 0])
N = data.shape[0] - 1 # Degree of polynomial
M = data.shape[0]
x = numpy.linspace(-1.0, 1.0, 100)
# ====================================================
# Plot individual basis functions
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
basis = lagrange_basis(x, data)
for i in range(N + 1):
axes.plot(x, basis[i, :], label="$\ell_{%s}(x)$" % i)
axes.set_title("Lagrange Basis $\ell_i(x)$")
axes.set_xlabel("x")
axes.set_ylabel("$\ell_i(x)$")
axes.legend(loc=1)
axes.grid()
# Plot full polynomial P_N(x)
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, poly_interpolant(x, data), label="$P_{%s}(x)$" % N)
axes.plot(x, numpy.sin(2.0 * numpy.pi * x), 'r--', label="True $f(x)$")
for point in data:
axes.plot(point[0], point[1], 'ko')
axes.set_title("$P_N(x)$")
axes.set_xlabel("x")
axes.set_ylabel("$P_N(x)$")
axes.legend(loc=1)
axes.grid()
plt.show()
```
#### Example 6: Runge's Function
Interpolate $f(x) = \frac{1}{1 + 25 x^2}$ using 6 points of your choosing on $x \in [-1, 1]$.
Try it with 11 points.
Keep increasing the number of points and see what happens.
This behavior is known as *Runge's phenomenon*:
```
def f(x):
return 1.0 / (1.0 + 25.0 * x**2)
x = numpy.linspace(-1, 1, 100)
# x = numpy.linspace(-2, 2, 100)
num_points = 11
# num_points = 15
# num_points = 20
data = numpy.empty((num_points, 2))
data[:, 0] = numpy.linspace(-1, 1, num_points)
data[:, 1] = f(data[:, 0])
N = data.shape[0] - 1
# Plot the results
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, poly_interpolant(x, data), 'b', label="$P_{%s}(x)$" % N)
axes.plot(x, f(x), 'k', label="True $f(x)$")
axes.plot(data[:, 0], data[:, 1], 'ro', label="data")
axes.set_title("Interpolation of Runge's function")
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.legend(loc=1)
axes.grid()
plt.show()
```
Adding more points: the approximation deteriorates near the ends of the interval ($x = \pm 1$).
#### Example 7: Weierstrass "Monster" Function
Defined as
$$
f(x) = \sum^\infty_{n=0} a^n \cos(b^n \pi x)
$$
such that
$$
0 < a < 1 \quad \text{and} \quad a b > 1 + \frac{3\pi}{2}.
$$
This function is continuous everywhere but not differentiable anywhere.
```
def f(x, a=0.9, N=100):
summation = 0.0
b = (1.0 + 3.0 / 2.0 * numpy.pi) / a + 0.01
print(b)
for n in range(N + 1):
summation += a**n * numpy.cos(b**n * numpy.pi * x)
return summation
x = numpy.linspace(-1, 1, 1000)
# x = numpy.linspace(-2, 2, 100)
num_points = 10
data = numpy.empty((num_points, 2))
data[:, 0] = numpy.linspace(-1, 1, num_points)
data[:, 1] = f(data[:, 0])
N = data.shape[0] - 1
# Plot the results
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, poly_interpolant(x, data), 'b', label="$P_{%s}(x)$" % N)
axes.plot(x, f(x), 'k', label="True $f(x)$")
axes.plot(data[:, 0], data[:, 1], 'ro', label="data")
axes.set_title("Interpolation of the Weierstrass function")
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.legend(loc=1)
plt.show()
```
### Rules of Thumb
- Avoid __high-order interpolants__ when possible! As the examples above show, adding more equally spaced points can make the approximation worse.
- Avoid __extrapolation__: increase the range of $x$ in the examples above and check how poor the approximation becomes beyond the sampling interval.
### Error Analysis: Minimization
**Theorem:** Lagrange Remainder Theorem - Let $f(x) \in C^{N+1}[-1, 1]$, then
$$
f(x) = \mathcal{P}_N(x) + R_N(x)
$$
where $\mathcal{P}_N(x)$ is the interpolating polynomial and
$$
R_N(x) = Q(x) \frac{f^{(N+1)}(c)}{(N+1)!} \quad \text{with} \quad c \in [-1,1]
$$
with
$$
Q(x) = \prod^N_{i=0} (x - x_i) = (x-x_0)(x-x_1)\cdots(x-x_N) .
$$
A few things to note:
- For Taylor's theorem note that $Q(x) = (x - x_0)^{N+1}$ and the error only vanishes at $x_0$.
- For Lagrange's theorem the error vanishes at all $x_i$.
- To minimize $R_N(x)$ requires minimizing $|Q(x)|$ for $x \in [-1, 1]$.
#### Minimizing $R_N(x)$
Minimizing the error $R_N(x)$ in Lagrange's theorem is equivalent to minimizing $|Q(x)|$ for $x \in [-1, 1]$.
Minimizing the error $\Leftrightarrow$ picking the roots of $Q(x)$, i.e. picking the points where the interpolant data is located. How do we do this?
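As a preview of the answer, we can compare the sampled maximum of $|Q(x)|$ on $[-1, 1]$ for equally spaced nodes against the cosine-spaced (Chebyshev) nodes introduced in the next section:

```python
import numpy

def max_Q(nodes, n_samples=2000):
    """Maximum of |prod (x - x_i)| sampled on a fine grid over [-1, 1]."""
    x = numpy.linspace(-1.0, 1.0, n_samples)
    Q = numpy.ones_like(x)
    for xi in nodes:
        Q *= (x - xi)
    return numpy.abs(Q).max()

N = 10  # 11 nodes
uniform = numpy.linspace(-1.0, 1.0, N + 1)
chebyshev = numpy.cos((2.0 * numpy.arange(1, N + 2) - 1.0) / (2.0 * (N + 1)) * numpy.pi)

# Chebyshev nodes give a markedly smaller worst-case |Q(x)|
assert max_Q(chebyshev) < max_Q(uniform)
```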
### Chebyshev Polynomials
*Chebyshev polynomials* $T_N(x)$ are another basis that can be used for interpolation.
First 5 polynomials
$$T_0(x) = 1$$
$$T_1(x) = x$$
$$T_2(x) = 2 x^2 - 1$$
$$T_3(x) = 4 x^3 - 3 x$$
$$T_4(x) = 8x^4 - 8x^2 + 1$$
$$ T_N(x) = 2 x \, T_{N-1}(x) - T_{N-2}(x), \quad N \ge 2 $$
Even $N$: $T_N$ is an even function, $T_N(-x) = T_N(x)$
Odd $N$: $T_N$ is an odd function, $T_N(-x) = -T_N(x)$
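These parity claims can be checked with NumPy's Chebyshev evaluator `numpy.polynomial.chebyshev.chebval`, which takes coefficients in the Chebyshev basis, lowest degree first:

```python
import numpy
from numpy.polynomial import chebyshev

x = 0.3
# Coefficient vectors selecting a single basis polynomial
T4 = [0.0, 0.0, 0.0, 0.0, 1.0]   # selects T_4 (even)
T3 = [0.0, 0.0, 0.0, 1.0]        # selects T_3 (odd)

assert numpy.isclose(chebyshev.chebval(-x, T4), chebyshev.chebval(x, T4))
assert numpy.isclose(chebyshev.chebval(-x, T3), -chebyshev.chebval(x, T3))
# Cross-check against the explicit formula T_3(x) = 4x^3 - 3x
assert numpy.isclose(chebyshev.chebval(x, T3), 4 * x**3 - 3 * x)
```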
```
def cheb_poly(x, N):
"""Compute the *N*th Chebyshev polynomial and evaluate it at *x*"""
T = numpy.empty((3, x.shape[0]))
T[0, :] = numpy.ones(x.shape)
T[1, :] = x
if N == 0:
return T[0, :]
elif N == 1:
return T[1, :]
else:
for k in range(2, N + 1):
T[2, :] = 2.0 * x * T[1, :] - T[0, :]
T[0, :] = T[1, :]
T[1, :] = T[2, :]
return T[2, :]
x = numpy.linspace(-1, 1, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
for n in range(5):
axes.plot(x, cheb_poly(x, n), label="$T_%s$" % n)
axes.set_ylim((-1.1, 1.1))
axes.set_title("Chebyshev Polynomials")
axes.set_xlabel("x")
axes.set_ylabel("$T_N(x)$")
axes.legend(loc='best')
axes.grid()
plt.show()
```
1. Chebyshev nodes of the 1st kind (roots)
$$
x_k = \cos \left (\frac{(2 k - 1) \pi}{2 N} \right ) \quad k = 1, \ldots, N
$$
1. Chebyshev nodes of the 2nd kind (extrema)
$$
x_k = \cos \left( \frac{k \pi}{N} \right) \quad k = 0, \ldots, N
$$
```
N = 4
x_extrema = numpy.cos(numpy.arange(N + 1) * numpy.pi / N)
x_nodes = numpy.cos((2.0 * numpy.arange(1, N + 1) - 1.0) / (2.0 * N) * numpy.pi)
fig = plt.figure()
# fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 1, 1)
# Plot points
axes.plot(x_extrema, numpy.zeros(N+1), 'ro')
axes.plot(x_nodes, numpy.zeros(N), 'bo')
# Plot some helpful lines
axes.plot((-1.0, -1.0), (-1.1, 1.1), 'k--')
axes.plot((1.0, 1.0), (-1.1, 1.1), 'k--')
axes.plot((-1.0, 1.0), (0.0, 0.0), 'k--')
for i in range(x_extrema.shape[0]):
axes.plot((x_extrema[i], x_extrema[i]), (-1.1, 1.1), 'r--')
axes.plot(x_extrema[i], cheb_poly(x_extrema, N)[i], 'ro')
print('Nodes = {}'.format(numpy.sort(x_nodes)))
print('Extrema = {}'.format(numpy.sort(x_extrema)))
#print(numpy.cos(x_extrema))
# Plot Chebyshev polynomial
x_hat = numpy.linspace(-1, 1, 1000)
axes.plot(x_hat, cheb_poly(x_hat, N), 'k')
axes.set_xlim((-1.1, 1.1))
axes.set_ylim((-1.1, 1.1))
# Labels
axes.set_title("Chebyshev Nodes and Extrema, N={}".format(N), fontsize="20")
axes.set_xlabel("x", fontsize="15")
axes.set_ylabel("$T_{N+1}(x)$", fontsize="15")
plt.show()
# First-kind Nesting (3 x)
fig = plt.figure()
# fig.set_figwidth(fig.get_figwidth() * 2)
axes = fig.add_subplot(1, 1, 1)
N = 5
factor = 3
x_1 = numpy.cos((2.0 * numpy.arange(1, N + 1) - 1.0) / (2.0 * N) * numpy.pi)
x_2 = numpy.cos((2.0 * numpy.arange(1, factor * N + 1) - 1.0) / (2.0 * factor * N) * numpy.pi)
axes.plot(x_1, numpy.zeros(N), "o", color="r", markerfacecolor="lightgray", markersize="15")
axes.plot(x_2, numpy.zeros(N * factor), 'kx', markersize="10")
x_hat = numpy.linspace(-1, 1, 1000)
axes.plot(x_hat, cheb_poly(x_hat, N), 'k')
axes.plot(x_hat, cheb_poly(x_hat, factor * N), 'k')
axes.set_xlim((-1.1, 1.1))
axes.set_ylim((-1.1, 1.1))
axes.set_title("Nesting of 1st and 2nd Kind Chebyshev Polynomials")
axes.set_xlabel("$x$")
axes.set_ylabel("$T_N(x)$")
plt.show()
```
#### Properties of Chebyshev Polynomials
1. Defined by a recurrence relation
$$T_k(x) = 2 x T_{k-1}(x) - T_{k-2}(x)$$
2. Leading coefficient of $x^N$ in $T_N(x)$ is $2^{N-1}$ for $N \geq 1$ (so $T_N$ is NOT a monic polynomial, i.e. its leading coefficient is not 1)
3. Extreme values:
$$|T_N(x)| \leq 1 \quad \text{for} \quad -1 \leq x \leq 1$$
4. __Minimax principle__: The polynomial
$$T(x) = \frac{T_{N+1}(x)}{2^N}$$
is a *monic polynomial* (a polynomial whose leading coefficient equals 1) with the property that
$$
\max |T(x)| \leq \max |Q(x)| \quad \text{for} \quad x \in [-1, 1], \quad \text{and}
$$
$$
\max |T(x)| = \frac{1}{2^N}
$$
Recall that the remainder term in the Lagrange Remainder Theorem was
$$
R_N(x) = Q(x) \frac{f^{(N+1)}(c)}{(N+1)!} \quad \text{with} \quad c \in [-1,1]
$$
with
$$
Q(x) = \prod^N_{i=0} (x - x_i) = (x-x_0)(x-x_1)\cdots(x-x_N) .
$$
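A quick numerical sanity check of the minimax claim (a sketch with the illustrative choice $N = 8$): the monic polynomial built from the roots of $T_{N+1}$ attains maximum magnitude $2^{-N}$ on $[-1, 1]$, while a $Q(x)$ built from equispaced nodes is much larger:

```python
import numpy as np

N = 8                                   # illustrative degree choice
x_hat = np.linspace(-1, 1, 5001)

# Q(x) from N+1 equispaced interpolation nodes
nodes_equi = np.linspace(-1, 1, N + 1)
Q_equi = np.prod([x_hat - xi for xi in nodes_equi], axis=0)

# Q(x) from the N+1 roots of T_{N+1}; this equals T_{N+1}(x) / 2^N
k = np.arange(1, N + 2)
nodes_cheb = np.cos((2 * k - 1) * np.pi / (2 * (N + 1)))
Q_cheb = np.prod([x_hat - xi for xi in nodes_cheb], axis=0)

print(np.max(np.abs(Q_cheb)))           # approximately 2**(-N)
print(np.max(np.abs(Q_equi)))           # substantially larger
```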
#### Error Analysis Redux
Given that the Chebyshev polynomials are a minimum on the interval $[-1, 1]$ we would like $T(x) = Q(x)$.
Since we only know the roots of $Q(x)$ (the points where the interpolant data is located) we require these points to be the roots of the Chebyshev polynomial $T_{N+1}(x)$ therefore enforcing $T(x) = Q(x)$.
The zeros of $T_N(x)$ in the interval $[-1, 1]$ can be shown to satisfy
$$
x_k = \cos\left( \frac{(2k - 1) \pi}{2 N} \right ) \quad \text{for} \quad k=1, \ldots, N
$$
These __nodal points__ (sampling the function at these points) can be shown to minimize interpolation error.
```
x = numpy.linspace(0, numpy.pi, 100)
N = 15
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1, aspect="equal")
axes.plot(numpy.cos(x), numpy.sin(x), 'r--')
axes.plot(numpy.linspace(-1.1, 1.1, 100), numpy.zeros(x.shape), 'r')
for k in range(1, N + 1):
location = [numpy.cos((2.0 * k - 1.0) * numpy.pi / (2.0 * N)),
numpy.sin((2.0 * k - 1.0) * numpy.pi / (2.0 * N))]
axes.plot(location[0], location[1], 'ko')
axes.plot(location[0], 0.0, 'ko')
axes.plot([location[0], location[0]], [0.0, location[1]], 'k--')
axes.set_xlim((-1.1, 1.1))
axes.set_ylim((-0.1, 1.1))
plt.show()
```
#### Summary
1. Minimizing the error in Lagrange's theorem is equivalent to minimizing
$$
|Q(x)| \quad \text{for} \quad x \in [-1, 1].
$$
1. We know Chebyshev polynomials are a minimum on the interval $[-1, 1]$ so we would like to have $T(x) = Q(x)$.
1. Since we only know the roots of $Q(x)$ (the points where the interpolant data is located) we require these points to be the roots of the Chebyshev polynomial $T_{N+1}(x)$ therefore enforcing $T(x) = Q(x)$.
1. The zeros of $T_N(x)$ in the interval $[-1, 1]$ can be shown to satisfy
$$
x_k = \cos\left( \frac{(2k - 1) \pi}{2 N} \right ) \quad \text{for} \quad k=1, \ldots, N
$$
These nodal points (sampling the function at these points) can be shown to minimize interpolation error.
#### Notes
- The Chebyshev nodes minimize interpolation error for any polynomial basis (by uniqueness of the interpolating polynomial, any polynomial that interpolates these points is identical regardless of the basis).
- Chebyshev nodes uniquely define the Chebyshev polynomials.
- The boundedness properties of Chebyshev polynomials are what lead us to their roots as a minimization, but there are other uses for these orthogonal polynomials.
- There are two kinds of Chebyshev nodes and therefore two definitions.
- Chebyshev polynomial used to determine *nodal points*, but can then use Lagrange to interpolate.
### Return to Runge's function
```
# Runge's function again
def f(x):
return 1.0 / (1.0 + 25.0 * x**2)
# Parameters
x = numpy.linspace(-1, 1, 100)
num_points = 21
# ============================================================
# Equidistant nodes
equidistant_data = numpy.empty((num_points, 2))
equidistant_data[:, 0] = numpy.linspace(-1, 1, num_points)
equidistant_data[:, 1] = f(equidistant_data[:, 0])
N = equidistant_data.shape[0] - 1
P_lagrange = poly_interpolant(x, equidistant_data)
# ============================================================
# Chebyshev nodes
chebyshev_data = numpy.empty((num_points, 2))
chebyshev_data[:, 0] = numpy.cos((2.0 * numpy.arange(1, num_points + 1) - 1.0) * numpy.pi / (2.0 * num_points))
chebyshev_data[:, 1] = f(chebyshev_data[:, 0])
P_cheby1 = poly_interpolant(x, chebyshev_data)
# Fit directly with Chebyshev polynomials
coeff = numpy.polynomial.chebyshev.chebfit(chebyshev_data[:, 0], chebyshev_data[:, 1], N)
P_cheby2 = numpy.polynomial.chebyshev.chebval(x, coeff)
# Check on unique polynomials
#print(numpy.allclose(P_cheby1, P_cheby2))
# calculate errornorms for different interpolants
equidistant_err = numpy.linalg.norm(P_lagrange - f(x))
cheb_err = numpy.linalg.norm(P_cheby1 - f(x))
# ============================================================
# Plot the results
fig = plt.figure(figsize=(16,6))
fig.subplots_adjust(hspace=.5)
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, P_lagrange, 'b', label="$P_%s(x)$" % N)
axes.plot(x, f(x), 'k', label="True $f(x)$")
axes.plot(equidistant_data[:, 0], equidistant_data[:, 1], 'ro', label="data")
axes.set_title("Interpolation at Equispaced Points: err = {}".format(equidistant_err))
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.legend(loc=8)
#print('Equispaced error = {}'.format(numpy.linalg.norm(P_lagrange - f(x))))
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, f(x), 'k', label="True $f(x)$")
axes.plot(x, P_cheby1, 'b', label="$P_%s(x)$" % N)
axes.plot(chebyshev_data[:, 0], chebyshev_data[:, 1], 'ro', label="data")
axes.set_title("Interpolation at Chebyshev Points: err = {}".format(cheb_err))
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.legend(loc=1)
#print('Chebyshev error = {}'.format(numpy.linalg.norm(P_cheby1 - f(x))))
plt.show()
```
## Piece-Wise Polynomial Interpolation
Given $N$ points, use lower order polynomial interpolation to fit the function in pieces. We can choose the order of the polynomials and the continuity.
- $C^0$: Interpolant is continuous
- Linear interpolation
- Quadratic interpolation
- $C^1$: Interpolation and 1st derivative are continuous
- Cubic Hermite polynomials (PCHIP)
- $C^2$: Interpolation, 1st and 2nd derivatives are continuous
- Cubic splines
### Piece-Wise Linear
Given a segment between points $(x_k, y_k)$ and $(x_{k+1}, y_{k+1})$, define the segment as
$$\mathcal{P}_k(x) = \frac{y_{k+1} - y_k}{x_{k+1} - x_k} (x - x_k) + y_k$$
The final interpolant $\mathcal{P}(x)$ is then defined on $[x_k, x_{k+1}]$ using this function.
```
data = numpy.array([[1.0, 3.0], [2.0, 1.0], [3.5, 4.0], [5.0, 0.0], [6.0, 0.5], [9.0, -2.0], [9.5, -3.0]])
x = numpy.linspace(0.0, 10, 100)
N = data.shape[0] - 1
# Lagrange Basis
P_lagrange = poly_interpolant(x, data)
# C^0 Piece-wise linear
# P_pw_linear = numpy.interp(x, data[:, 0], data[:, 1])
P_linear = numpy.zeros(x.shape)
for n in range(1, N + 1):
P_linear += ((data[n, 1] - data[n - 1, 1]) / (data[n, 0] - data[n - 1, 0]) * (x - data[n - 1, 0])
+ data[n - 1, 1]) * (x > data[n - 1, 0]) * (x <= data[n, 0])
# Add end points for continuity
P_linear += numpy.ones(x.shape) * data[0, 1] * (x < data[0, 0])
P_linear += numpy.ones(x.shape) * data[-1, 1] * (x >= data[-1, 0])
# Plot
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(data[:,0], data[:,1], 'ko')
axes.plot(x, P_lagrange, 'b--')
axes.plot(x, P_linear, 'r')
axes.set_title("Interpolated Data - $C^0$ Linear")
axes.set_xlabel("x")
axes.set_ylabel("$P_1(x)$")
axes.set_xlim([0.0, 10.0])
axes.set_ylim([-4.0, 15.0])
plt.show()
```
### Piece-Wise Non-Overlapping Polynomials
In sets of three points $(x_{k+1}, y_{k+1})$, $(x_{k}, y_{k})$, and $(x_{k-1}, y_{k-1})$, find quadratic interpolant and define final interpolant $P(x)$ using the quadratic interpolant $\mathcal{P}_k(x)$ on $[x_{k-1}, x_{k+1}]$.
```
data = numpy.array([[1.0, 3.0], [2.0, 1.0], [3.5, 4.0], [5.0, 0.0], [6.0, 0.5], [9.0, -2.0], [9.5, -3.0]])
x = numpy.linspace(0.0, 10, 100)
N = data.shape[0] - 1
# This isn't overlapping, it's more like C_0 P_2
# C^0 Piece-wise quadratic
P_quadratic = numpy.zeros(x.shape)
for k in range(1, N + 1, 2):
p = numpy.polyfit(data[k - 1:k + 2, 0], data[k - 1:k + 2, 1], 2)
P_quadratic += numpy.polyval(p, x) * (x > data[k - 1, 0]) * (x <= data[k + 1, 0])
# Add end points for continuity
P_quadratic += numpy.ones(x.shape) * data[0, 1] * (x < data[0, 0])
P_quadratic += numpy.ones(x.shape) * data[-1, 1] * (x >= data[-1, 0])
# Plot
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(data[:,0], data[:,1], 'ko')
axes.plot(x, P_lagrange, 'b--', label = "Lagrange")
axes.plot(x, P_quadratic, 'r', label = "Piecewise quadratic")
axes.set_title("Interpolated Data - $C^0$ Quadratic")
axes.set_xlabel("x")
axes.set_ylabel("$P_3(x)$")
axes.set_xlim([0.0, 10.0])
axes.set_ylim([-4.0, 15.0])
axes.legend()
plt.show()
```
### Piece-Wise $C^1$ Cubic Interpolation
For the previous two cases we had discontinuous 1st derivatives! We can do better by constraining the polynomials to be continuous at the boundaries of the piece-wise intervals.
Given a segment between points $(x_k, y_k)$ and $(x_{k+1}, y_{k+1})$ we want to fit a cubic function between the two points.
$$\mathcal{P}_k(x) = p_0 + p_1 x + p_2 x^2 + p_3 x^3$$
$$\mathcal{P}_k(x_k) = y_k, \quad \mathcal{P}_k(x_{k+1}) = y_{k+1}$$
Now we have 4 unknowns but only two data points! Constraining the derivative at each interval end will lead to two new equations and therefore we can solve for the interpolant.
$$\frac{\text{d}}{\text{dx}} \mathcal{P}_k(x_k) = d_k, \quad \frac{\text{d}}{\text{dx}} \mathcal{P}_k(x_{k+1}) = d_{k+1}$$
where we need to prescribe the $d_k$s. Since we know the polynomial we can write these 4 equations as
$$\begin{aligned}
p_0 + p_1 x_k + p_2 x_k^2 + p_3 x_k^3 &= y_k \\
p_0 + p_1 x_{k+1} + p_2 x_{k+1}^2 + p_3 x_{k+1}^3 &= y_{k+1} \\
p_1 + 2p_2 x_k + 3 p_3 x_k^2 &= d_k \\
p_1 + 2 p_2 x_{k+1} + 3 p_3 x_{k+1}^2 &= d_{k+1}
\end{aligned}$$
Rewriting this as a system we get
$$\begin{bmatrix}
1 & x_k & x_k^2 & x_k^3 \\
1 & x_{k+1} & x_{k+1}^2 & x_{k+1}^3 \\
0 & 1 & 2 x_k & 3 x_k^2 \\
0 & 1 & 2 x_{k+1} & 3 x_{k+1}^2
\end{bmatrix} \begin{bmatrix}
p_0 \\ p_1 \\ p_2 \\ p_3
\end{bmatrix} = \begin{bmatrix}
y_k \\ y_{k+1} \\ d_k \\ d_{k+1}
\end{bmatrix}$$
A common simplification re-parameterizes the locations of the points so that $s \in [0, 1]$ and recasts the problem with $(0, y_k)$ and $(1, y_{k+1})$. This simplifies the above system to
$$\begin{bmatrix}
1 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 \\
0 & 1 & 2 & 3
\end{bmatrix} \begin{bmatrix}
p_0 \\ p_1 \\ p_2 \\ p_3
\end{bmatrix} = \begin{bmatrix}
y_k \\ y_{k+1} \\ d_k \\ d_{k+1}
\end{bmatrix}$$
which can be solved to find
$$\begin{aligned}
\mathcal{P}(s) &= (1-s)^2 (1 + 2s) y_k + s^2 (3 - 2 s) y_{k+1} + s (1 - s)^2 d_k - s^2 (1 - s)d_{k+1}\\
\mathcal{P}'(s) &= 6s(s-1) y_k + 6s(1-s) y_{k+1} + (s-1)(3s-1) d_k + s(3s-2) d_{k+1}\\
\mathcal{P}''(s) &= 6 (1-2s)(y_{k+1} - y_k) + (6s - 4) d_k + (6s-2) d_{k+1}
\end{aligned}$$
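As a sanity check (a sketch with hypothetical segment data and prescribed end slopes), solving the re-parameterized $4\times4$ system and evaluating the resulting cubic reproduces the closed-form expression for $\mathcal{P}(s)$:

```python
import numpy as np

# Hypothetical segment values and prescribed end slopes
y_k, y_k1, d_k, d_k1 = 3.0, 1.0, -0.5, 2.0

# Solve the re-parameterized 4x4 system for monomial coefficients p
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0, 1.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 2.0, 3.0]])
p = np.linalg.solve(A, np.array([y_k, y_k1, d_k, d_k1]))

# Closed-form Hermite-basis expression for P(s)
s = np.linspace(0.0, 1.0, 11)
P_basis = ((1 - s)**2 * (1 + 2 * s) * y_k + s**2 * (3 - 2 * s) * y_k1
           + s * (1 - s)**2 * d_k - s**2 * (1 - s) * d_k1)

# Same cubic evaluated from the solved monomial coefficients
P_monomial = p[0] + p[1] * s + p[2] * s**2 + p[3] * s**3
assert np.allclose(P_basis, P_monomial)
```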
Now, how to choose $d_k$?
#### PCHIP
Piecewise Cubic Hermite Interpolating Polynomial
- Picks slopes that preserve monotonicity
- Also tries to preserve the shape of the data
- Note that in general this interpolant is $\mathcal{P}_k(x) \in C^1$
```
import scipy.interpolate as interpolate
data = numpy.array([[1.0, 3.0], [2.0, 1.0], [3.5, 4.0], [5.0, 0.0], [6.0, 0.5], [9.0, -2.0], [9.5, -3.0]])
x = numpy.linspace(0.0, 10, 100)
# C^1 Piece-wise PCHIP
P_pchip = interpolate.pchip_interpolate(data[:, 0], data[:, 1], x)
# Plot
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(data[:,0], data[:,1], 'ro')
axes.plot(x, P_pchip, 'r')
axes.set_title("Interpolated Data - $C^1$ Cubic PCHIP")
axes.set_xlabel("x")
axes.set_ylabel("$P_3(x)$")
axes.set_xlim([0.0, 10.0])
axes.set_ylim([-4.0, 15.0])
axes.grid()
plt.show()
```
#### Cubic Splines
Enforces continuity of the second derivatives as well:
$$\mathcal{P}''_{k}(x_{k}) = \mathcal{P}''_{k-1}(x_k)$$
From our generalization before we know
$$\mathcal{P}''(s) = 6 (1-2s)(y_{k+1} - y_k) + (6s - 4) d_k + (6s-2) d_{k+1}$$
and our constraint now becomes
$$\mathcal{P}''_{k}(0) = \mathcal{P}''_{k-1}(1)$$
$$\mathcal{P}''_{k-1}(1) = 6 (1-2 \cdot 1)(y_{k} - y_{k-1}) + (6\cdot 1 - 4) d_{k-1} + (6\cdot 1-2) d_{k}$$
$$\mathcal{P}''_{k}(0) = 6 (1-2 \cdot 0)(y_{k+1} - y_k) + (6\cdot 0 - 4) d_k + (6\cdot 0-2) d_{k+1}$$
$$-6(y_{k} - y_{k-1}) + 2 d_{k-1} + 4 d_{k} = 6 (y_{k+1} - y_k) - 4 d_k -2 d_{k+1}$$
We now have constraints on choosing the $d_k$ values. Note that we still need to prescribe them at the boundaries of the full interval.
This forms a linear set of equations for the $d_k$s based on the $y_k$ values and can be reformulated into a tri-diagonal linear system
$$\begin{bmatrix}
& \ddots & \ddots & \ddots & & &\\
& 0 & 2 & 8 & 2 & 0 & & \\
& & 0 & 2 & 8 & 2 & 0 & & & \\
& & & 0 & 2 & 8 & 2 & 0 & & \\
& & & & & \ddots & \ddots & \ddots &
\end{bmatrix}\begin{bmatrix}
\vdots \\ d_{k-1} \\ d_{k} \\ d_{k+1} \\ \vdots
\end{bmatrix} = \begin{bmatrix}
\vdots \\ 6 (y_{k} - y_{k-2}) \\ 6 (y_{k+1} - y_{k-1}) \\ 6 (y_{k+2} - y_{k}) \\\vdots
\end{bmatrix}$$
The boundaries are still left unconstrained and we must pick some rule to specify the derivatives there.
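A minimal sketch of assembling and solving this system on the unit-parameterized segments, assuming hypothetical data and clamped end slopes $d_0 = d_n = 0$ (one common boundary choice); by construction the solved slopes satisfy the $C^2$ matching condition derived above:

```python
import numpy as np

# Hypothetical data values (uniform parameterization s in [0,1] per segment)
y = np.array([3.0, 1.0, 4.0, 0.0, 0.5])
n = y.shape[0] - 1              # number of segments
d = np.zeros(n + 1)
d[0], d[-1] = 0.0, 0.0          # clamp the end slopes

# Interior rows: 2 d_{k-1} + 8 d_k + 2 d_{k+1} = 6 (y_{k+1} - y_{k-1})
A = np.zeros((n - 1, n - 1))
b = np.zeros(n - 1)
for i, k in enumerate(range(1, n)):
    A[i, i] = 8.0
    if i > 0:
        A[i, i - 1] = 2.0
    if i < n - 2:
        A[i, i + 1] = 2.0
    b[i] = (6.0 * (y[k + 1] - y[k - 1])
            - (2.0 * d[0] if k == 1 else 0.0)
            - (2.0 * d[-1] if k == n - 1 else 0.0))
d[1:-1] = np.linalg.solve(A, b)

# Check C^2 continuity at each interior knot using the P''(s) formula:
# P''_{k-1}(1) = -6 (y_k - y_{k-1}) + 2 d_{k-1} + 4 d_k
# P''_k(0)     =  6 (y_{k+1} - y_k) - 4 d_k - 2 d_{k+1}
for k in range(1, n):
    left = -6.0 * (y[k] - y[k - 1]) + 2.0 * d[k - 1] + 4.0 * d[k]
    right = 6.0 * (y[k + 1] - y[k]) - 4.0 * d[k] - 2.0 * d[k + 1]
    assert np.isclose(left, right)
```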
```
import scipy.interpolate as interpolate
data = numpy.array([[1.0, 3.0], [2.0, 1.0], [3.5, 4.0], [5.0, 0.0], [6.0, 0.5], [9.0, -2.0], [9.5, -3.0]])
x = numpy.linspace(0.0, 10, 100)
# C^2 Piece-wise Splines
# Note that to get an interpolant we need to set the smoothing
# parameters *s* to 0
P_spline = interpolate.UnivariateSpline(data[:, 0], data[:, 1], s=0)
# Plot
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(data[:,0], data[:,1], 'ro')
axes.plot(x, P_spline(x), 'r', label = '$C^2$')
axes.plot(x, P_pchip, 'b--', label = 'Pchip')
axes.set_title("Interpolated Data - $C^2$ Cubic Splines")
axes.set_xlabel("x")
axes.set_ylabel("$P_3(x)$")
axes.set_xlim([0.0, 10.0])
axes.set_ylim([-4.0, 15.0])
axes.grid()
axes.legend(loc='best')
plt.show()
```
### Let's compare all of these methods
```
import scipy.interpolate as interpolate
data = numpy.array([[1.0, 3.0], [2.0, 1.0], [3.5, 4.0], [5.0, 0.0], [6.0, 0.5], [9.0, -2.0], [9.5, -3.0]])
x = numpy.linspace(0.0, 10, 100)
# Lagrange Basis
N = data.shape[0] - 1
lagrange_basis = numpy.ones((N + 1, x.shape[0]))
for i in range(N + 1):
for j in range(N + 1):
if i != j:
lagrange_basis[i, :] *= (x - data[j, 0]) / (data[i, 0] - data[j, 0])
# Calculate full polynomial
P_lagrange = numpy.zeros(x.shape[0])
for n in range(N + 1):
P_lagrange += lagrange_basis[n, :] * data[n, 1]
# C^0 Piece-wise linear
# P_pw_linear = numpy.interp(x, data[:, 0], data[:, 1])
P_linear = numpy.zeros(x.shape)
for n in range(1, N + 1):
P_linear += ((data[n, 1] - data[n - 1, 1]) / (data[n, 0] - data[n - 1, 0]) * (x - data[n - 1, 0])
+ data[n - 1, 1]) * (x > data[n - 1, 0]) * (x <= data[n, 0])
# Add end points for continuity
P_linear += numpy.ones(x.shape) * data[0, 1] * (x < data[0, 0])
P_linear += numpy.ones(x.shape) * data[-1, 1] * (x >= data[-1, 0])
# C^0 Piece-wise quadratic
P_quadratic = numpy.zeros(x.shape)
for k in range(1, N + 1, 2):
p = numpy.polyfit(data[k - 1:k + 2, 0], data[k - 1:k + 2, 1], 2)
P_quadratic += numpy.polyval(p, x) * (x > data[k - 1, 0]) * (x <= data[k + 1, 0])
# Add end points for continuity
P_quadratic += numpy.ones(x.shape) * data[0, 1] * (x < data[0, 0])
P_quadratic += numpy.ones(x.shape) * data[-1, 1] * (x >= data[-1, 0])
# C^1 Piece-wise PCHIP
P_pchip = interpolate.pchip_interpolate(data[:, 0], data[:, 1], x)
# C^2 Piece-wise Splines
P_spline = interpolate.UnivariateSpline(data[:, 0], data[:, 1], s=0)
# Plot
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(data[:,0], data[:,1], 'ko', label="Data")
axes.plot(x, P_lagrange, 'y', label="Lagrange")
axes.plot(x, P_linear, 'g', label="PW Linear")
axes.plot(x, P_quadratic, 'r', label="PW Quadratic")
axes.plot(x, P_pchip, 'c', label="PW Cubic - PCHIP")
axes.plot(x, P_spline(x), 'b', label="PW Cubic - Spline")
axes.grid()
axes.set_title("Interpolated Data - Method Comparisons")
axes.set_xlabel("x")
axes.set_ylabel("$P(x)$")
axes.legend(loc='best')
axes.set_xlim([0.0, 10.0])
axes.set_ylim([-4.0, 15.0])
plt.show()
```
How do you choose which method to use?
## Relationship to Regression
What if we have more data and want a lower degree polynomial but do not want to use a piece-wise defined interpolant?
Regression techniques are often used to minimize a form of error between the data points $y_i$ at $x_i$ with an approximating function $f(x_i)$. __Note that this is NOT interpolation anymore!__
### Least-Squares
One way of doing this is to require that we minimize the least-squares error
$$
E = \left( \sum^m_{i=1} |y_i - f(x_i)|^2 \right )^{1/2}.
$$
where as before we have data $y_i$ at locations $x_i$ and an approximating function $f(x_i)$.
From the beginning of our discussion we know we can write the interpolant as a system of linear equations which we can then solve for the coefficients of a monomial basis. If we wanted to fit a line
$$
\mathcal{P}_1(x) = p_0 + p_1 x
$$
to $N$ data points we would have
$$
\begin{bmatrix}
1 & x_1 \\
1 & x_2 \\
\vdots & \vdots \\
1 & x_N
\end{bmatrix} \begin{bmatrix}
p_0 \\ p_1
\end{bmatrix} = \begin{bmatrix}
y_1 \\ y_2 \\ \vdots \\ y_N
\end{bmatrix}
$$
or
$$
A p = y
$$
What's wrong with this system?
The system is overdetermined — there are $N$ equations but only two unknowns — so in general there is no exact solution, as
$$
A \in \mathbb{R}^{N \times 2}, p \in \mathbb{R}^{2 \times 1}, \text{ and } y \in \mathbb{R}^{N \times 1}.
$$
Instead we can solve the related least-squares system, multiplying both sides by $A^T$, so that $A^T A$ is a square matrix.
$$
A^T A p = A^T y
$$
whose solution minimizes the least-square error defined before as $E$.
Note: this is __not__ the most stable way to solve least-squares problems; an orthogonalization technique such as $QR$ factorization is better numerically.
```
# Linear Least Squares Problem
N = 50
x = numpy.linspace(-1.0, 1.0, N)
y = x + numpy.random.random((N))
A = numpy.ones((x.shape[0], 2))
A[:, 1] = x
p = numpy.linalg.solve(numpy.dot(A.transpose(), A), numpy.dot(A.transpose(), y))
#p = numpy.linalg.lstsq(A, y, rcond=None)[0]
f = lambda x: p[0] + p[1] * x
E = numpy.linalg.norm(y - f(x), ord=2)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, y, 'ko')
axes.plot(x, f(x), 'r')
axes.set_title("Least Squares Fit to Data, err={}".format(E))
axes.set_xlabel("$x$")
axes.set_ylabel("$f(x)$ and $y_i$")
axes.grid()
plt.show()
```
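For comparison, here is the same fit computed via $QR$ factorization and via `numpy.linalg.lstsq` — a sketch reusing the setup above with seeded random data; all three routes agree here because $A$ is well conditioned:

```python
import numpy as np

np.random.seed(42)
N = 50
x = np.linspace(-1.0, 1.0, N)
y = x + np.random.random(N)

A = np.ones((N, 2))
A[:, 1] = x

# Normal equations (what the cell above does)
p_normal = np.linalg.solve(A.T @ A, A.T @ y)

# QR-based solve: A = QR  =>  R p = Q^T y
Q, R = np.linalg.qr(A)
p_qr = np.linalg.solve(R, Q.T @ y)

# SVD-backed library routine
p_lstsq = np.linalg.lstsq(A, y, rcond=None)[0]

assert np.allclose(p_normal, p_qr) and np.allclose(p_normal, p_lstsq)
```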
### Themes and variations
You can play all sorts of games, whether they are justified by the data or not. For example, we can fit the same random data with a function like
$$
f(x) = p_0 + p_1\tanh(x)
$$
which is still a linear problem for the coefficients $p_0$ and $p_1$; however, the Vandermonde matrix now has columns $\mathbf{1}$ and $\tanh\mathbf{x}$.
```
# Linear Least Squares Problem
A = numpy.ones((x.shape[0], 2))
A[:, 1] = numpy.tanh(x)
p = numpy.linalg.solve(numpy.dot(A.transpose(), A), numpy.dot(A.transpose(), y))
# p = numpy.linalg.lstsq(A, y)[0]
f = lambda x: p[0] + p[1] * numpy.tanh(x)
E = numpy.linalg.norm(y - f(x), ord=2)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, y, 'ko')
axes.plot(x, f(x), 'r')
axes.set_title("Least Squares Fit to Data, err = {}".format(E))
axes.set_xlabel("$x$")
axes.set_ylabel("$f(x)$ and $y_i$")
axes.grid()
plt.show()
```
### Let ye be warned...

(Original image can be found at [Curve Fitting](https://xkcd.com/2048/).)
# Unit 12 - Tales from the Crypto
---
## 1. Sentiment Analysis
Use the [newsapi](https://newsapi.org/) to pull the latest news articles for Bitcoin and Ethereum and create a DataFrame of sentiment scores for each coin.
Use descriptive statistics to answer the following questions:
1. Which coin had the highest mean positive score?
2. Which coin had the highest negative score?
3. Which coin had the highest positive score?
```
# Initial imports
import os
import pandas as pd
from dotenv import load_dotenv
import nltk as nltk
nltk.download('vader_lexicon')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
analyzer = SentimentIntensityAnalyzer()
%matplotlib inline
# Read your api key environment variable
# YOUR CODE HERE!
# Create a newsapi client
# YOUR CODE HERE!
# Fetch the Bitcoin news articles
# YOUR CODE HERE!
# Fetch the Ethereum news articles
# YOUR CODE HERE!
# Create the Bitcoin sentiment scores DataFrame
# YOUR CODE HERE!
# Create the Ethereum sentiment scores DataFrame
# YOUR CODE HERE!
# Describe the Bitcoin Sentiment
# YOUR CODE HERE!
# Describe the Ethereum Sentiment
# YOUR CODE HERE!
```
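Once articles are fetched, the sentiment DataFrames can be assembled along these lines — a sketch in which the article texts and score dicts are placeholders standing in for `newsapi` results and `analyzer.polarity_scores(...)` output:

```python
import pandas as pd

# Placeholder articles; in the notebook each entry would come from the
# newsapi client's results for the Bitcoin query.
articles = [
    {"content": "Bitcoin surges to a new all-time high."},
    {"content": "Regulators warn about cryptocurrency risk."},
]

# Placeholder VADER-style scores; in the notebook each dict would come from
# analyzer.polarity_scores(article["content"]).
scores = [
    {"compound": 0.6, "pos": 0.4, "neu": 0.6, "neg": 0.0},
    {"compound": -0.3, "pos": 0.0, "neu": 0.7, "neg": 0.3},
]

# One row per article: text plus its four sentiment scores
rows = [{"text": a["content"], **s} for a, s in zip(articles, scores)]
btc_df = pd.DataFrame(rows)
print(btc_df[["compound", "pos", "neg"]].mean())
```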
### Questions:
Q: Which coin had the highest mean positive score?
A:
Q: Which coin had the highest compound score?
A:
Q. Which coin had the highest positive score?
A:
---
## 2. Natural Language Processing
---
### Tokenizer
In this section, you will use NLTK and Python to tokenize the text for each coin. Be sure to:
1. Lowercase each word.
2. Remove Punctuation.
3. Remove Stopwords.
```
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer, PorterStemmer
from string import punctuation
import re
# Instantiate the lemmatizer
# YOUR CODE HERE!
# Create a list of stopwords
# YOUR CODE HERE!
# Expand the default stopwords list if necessary
# YOUR CODE HERE!
# Complete the tokenizer function
def tokenizer(text):
"""Tokenizes text."""
# Remove the punctuation from text
# Create a tokenized list of the words
# Lemmatize words into root words
# Convert the words to lowercase
# Remove the stop words
return tokens
# Create a new tokens column for Bitcoin
# YOUR CODE HERE!
# Create a new tokens column for Ethereum
# YOUR CODE HERE!
```
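One possible way to fill in `tokenizer` is the regex-based sketch below (lemmatization omitted, and `sw` is a stand-in stopword set rather than NLTK's full list):

```python
import re

sw = {"the", "a", "and", "is", "to", "of"}  # stand-in stopword set

def tokenizer(text):
    """Lowercase, strip punctuation, split, and drop stopwords."""
    clean = re.sub(r"[^a-zA-Z\s]", "", text)   # remove punctuation/digits
    words = clean.lower().split()              # lowercase + tokenize
    return [w for w in words if w not in sw]   # drop stopwords

tokenizer("The price of Bitcoin, and Ethereum!")
# → ['price', 'bitcoin', 'ethereum']
```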
---
### NGrams and Frequency Analysis
In this section you will look at the ngrams and word frequency for each coin.
1. Use NLTK to produce the n-grams for N = 2.
2. List the top 10 words for each coin.
```
from collections import Counter
from nltk import ngrams
# Generate the Bitcoin N-grams where N=2
# YOUR CODE HERE!
# Generate the Ethereum N-grams where N=2
# YOUR CODE HERE!
# Function token_count generates the top N words for a given coin
def token_count(tokens, N=10):
"""Returns the top N tokens from the frequency count"""
return Counter(tokens).most_common(N)
# Use token_count to get the top 10 words for Bitcoin
# YOUR CODE HERE!
# Use token_count to get the top 10 words for Ethereum
# YOUR CODE HERE!
```
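Under the hood, bigrams are just successive token pairs, and `token_count` is a thin wrapper around `Counter.most_common` — a small illustration with made-up tokens:

```python
from collections import Counter

tokens = ["bitcoin", "price", "bitcoin", "rally", "bitcoin", "price"]

# Bigrams (N=2): pair each token with its successor
bigrams = list(zip(tokens, tokens[1:]))
print(Counter(bigrams).most_common(1))
# → [(('bitcoin', 'price'), 2)]

# Unigram frequency, as in token_count
print(Counter(tokens).most_common(1))
# → [('bitcoin', 3)]
```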
---
### Word Clouds
In this section, you will generate word clouds for each coin to summarize the news for each coin
```
from wordcloud import WordCloud
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = [20.0, 10.0]
# Generate the Bitcoin word cloud
# YOUR CODE HERE!
# Generate the Ethereum word cloud
# YOUR CODE HERE!
```
---
## 3. Named Entity Recognition
In this section, you will build a named entity recognition model for both Bitcoin and Ethereum, then visualize the tags using SpaCy.
```
import spacy
from spacy import displacy
# Download the language model for SpaCy
# !python -m spacy download en_core_web_sm
# Load the spaCy model
nlp = spacy.load('en_core_web_sm')
```
---
### Bitcoin NER
```
# Concatenate all of the Bitcoin text together
# YOUR CODE HERE!
# Run the NER processor on all of the text
# YOUR CODE HERE!
# Add a title to the document
# YOUR CODE HERE!
# Render the visualization
# YOUR CODE HERE!
# List all Entities
# YOUR CODE HERE!
```
---
### Ethereum NER
```
# Concatenate all of the Ethereum text together
# YOUR CODE HERE!
# Run the NER processor on all of the text
# YOUR CODE HERE!
# Add a title to the document
# YOUR CODE HERE!
# Render the visualization
# YOUR CODE HERE!
# List all Entities
# YOUR CODE HERE!
```
---
<a href="https://colab.research.google.com/github/PatrickCortes/Linear--Algbebra_ChE_2nd-Sem-2021-2022/blob/main/Copy_of_Activity_1_Python_Fundamentals.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Welcome to Python Fundamentals
In this module, we are going to establish or review our skills in Python programming. In this notebook we are going to cover:
* Variables and Data Types
* Operations
* Input and Output Operations
* Logic Control
* Iterables
* Functions
## Variables and Data Types
```
x = 2
a,b = 0,-3
b
type(x)
y=1.0
type(y)
x= float(x)
type (x)
s,t,u ="0", '1', 'one'
type (t)
s_int = int(s)
s_int
```
## Operations
### Arithmetic
```
a,b,c,d = -3.0, 2, -1.5, -21
# Addition
S = a+b
S
# Subtraction
D = b-d
D
# Multiplication
V = a*d
V
# Division
Q = b/a
Q
# Floor Division
Fq = a//b
Fq
# Exponentiation
E = a**b
E
# Modulo
mod = d%a
mod
```
## Assignment Operations
```
G,H,J,K = 134, 200, 3000, 6222
G +=a
G
G =+ a  # careful: '=+' is plain assignment of +a, not '+='
G
H-=d
H
J *=2
J
K**= 3
K
```
## Comparators
```
res_1, res_2, res_3, = 4, 2, "1"
true_val = 1.0
## Equality
res_1 == true_val
## Nonequality
res_2 != true_val
## Inequality
t1 = res_1 > res_2
t2 = res_1 < res_2/2
t3 = res_1 >= res_2/2
```
## Logical
```
res_1 == true_val
res_1 is not true_val
p, q = True, False
conj = p and q
conj
p, q = True, False
disj = p or q
disj
p, q = True, True
conj = p and q
conj
p, q = False, False
disj = p or q
disj
p, q = True, False
nand = not(p and q)
nand
p, q = True, False
xor = (not p and q ) or (p and not q)
xor
```
## I/O
```
print("Hello World")
cnt = 1
string = "Hello World"
print(string, ", Current run count is: ", cnt)
cnt += 1
print(f"{string}, Current count is: {cnt}")
sem_grade = 95.24252156
name = "patpat"
print("Hello", name, ", Your Semestral Grade is: {}".format(sem_grade))
w_pg, w_mg, w_fg = 0.3, 0.3,0.4
print("The Weights of your semestral grade are:\
\n\t{:.2%} for Prelims\
\n\t{:.2%} for midterms, and\
\n\t{:.2%} for finals.".format(w_pg, w_mg, w_fg))
x=input("Enter a number :")
x
name = input("Enter your name:")
pg = input("Enter prelim grade:")
ng = input("Enter midterm grade:")
fg = input("Enter finals grade:")
sem_grade = 90
print("Hello{}, your semestral grade is : {}".format(name, sem_grade))
name = input("Enter your name:")
pg = float(input("Enter prelim grade:"))
print()
ng = float(input("Enter midterm grade:"))
print()
fg = float(input("Enter finals grade:"))
print()
sem_grade = ((float(pg)+ float(ng)+ float(fg)) / 3)
if float(sem_grade) >= 90:
print("\U0001F600")
elif 80<= float(sem_grade) <90:
print("\U0001F601")
elif 70<=float(sem_grade) < 80:
print("\U0001F62D")
print("Hello {}, your semestral grade is : {}".format(name, sem_grade))
numeral1, numeral2 = 10000000, 200000000
if(numeral1 == numeral2):
print("Goodjob")
elif(numeral1>numeral2):
print("JOJO")
else:
print("MIKASA")
```
# Looping Statements
## While
```
## while loops
i,j = 0, 10
while(i<=j):
print(f"{i}\t|\t{j}")
i+=1
```
## For
```
# for(int i=0; i<10; i++){
# printf(i)
#}
i=100
for i in range (100):
print(i)
playlist = ["Sacrifice","Rap god","MAMAA "]
print('Now Playing :\n')
for song in playlist:
print(song)
```
# Flow Control
## Condition Statements
```
numeral1, numeral2 = 10000000, 200000000
if(numeral1 == numeral2):
print("Goodjob")
elif(numeral1>numeral2):
print("JOJO")
else:
print("MIKASA")
```
## Functions
```
# void DeleteUser(int userid){
# delete(userid);
#}
def delete_user(userid):
print(" Successfully deleted user:{}".format(userid))
def delete_all_users ():
print("Successfully deleted all users")
userid = 100029991223125410
delete_user(100029991223125410)
delete_all_users()
def add(addend1, addend2):
print("I know how to add addend1 and addend2")
return addend1 + addend2
def power_of_base2(exponent):
return 2**exponent
addend1= 5
addend2= 10
exponent= 5
#add(addend1, addend2)
power_of_base2(exponent)
add(addend1, addend2)
```
```
import re
import requests
from bs4 import BeautifulSoup
import pandas as pd
from konlpy.tag import Okt
okt = Okt()
import tensorflow as tf
import numpy as np
from collections import Counter
from wordcloud import WordCloud
import matplotlib.pyplot as plt
import urllib.request
from tqdm import tqdm
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
import matplotlib.pyplot as plt
from string import punctuation
import warnings
warnings.filterwarnings('ignore')
test = pd.read_csv('sort_total_years.csv')
test.info()
sort_like2000 = test[test['year']=='2000y']
print('Number of songs from the 2000s: ', len(sort_like2000))
sort_like2000.head(2)
# Stopwords (high-frequency lyric words + words unrelated to sentiment classification, still being expanded)
stop_w = ['all','이렇게','네가','있는','니가','없는','너의','너무','그런',
'oh','whoo','tuesday','내가','너를','나를','we','this','the','그렇게',
'so','am','baby','and','can','you','much','me','for','go','in',
'은', '는', '이', '가', '하','부터','처럼','까지',
'know','no','of','let','my','수','너','내','나','그','난','봐',
'돼','건','모든','에서','에게','싶어','잖아',
'날','널','수','것','못','말','넌','젠','하나','정말','알','여기',
'우리','다시','하게','니까',
'때','아','더','게','또','채','일','걸','누구','나는','너는','라면',
'같아','있어','지금',
'의','가','보','들','좀','잘','걍','과','도','를','으로','우린','하지',
'해도','하고','없어','않아',
'자','에','와','한','하다','네','있다','나의','해','다','내게','왜',
'거야','이제','그냥','했던','하는']
stop_w = set(stop_w)
```
## Tokenization
```
sort_like2000['Lyric'] = sort_like2000['Lyric'].apply(lambda x: [word for word in okt.nouns(x) if word not in stop_w])
# sort_like2000['Lyric'] = sort_like2000['Lyric'].apply(lambda x: [word for word in okt.morphs(x, stem=True) if word not in stop_w])
# sort_like2000['Lyric'] # okt.morphs(x, stem=True)
sort_like2000['Lyric'] # okt.nouns(x)
word2000 = sort_like2000['Lyric']
```
## Building the Corpus
### (1) Map each word to an id and build a dictionary
```
from gensim import corpora
# Convert each word into (word id, count) pairs
dictionary = corpora.Dictionary(word2000)
corpus = [dictionary.doc2bow(text) for text in word2000]
```
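To make the (word id, count) representation concrete, here is a stdlib-only sketch of what `doc2bow` does (the ids below are invented for illustration; gensim assigns its own):

```python
from collections import Counter

def toy_doc2bow(tokens, word2id):
    # Count the tokens that exist in the dictionary and emit sorted
    # (word id, count) pairs, mimicking gensim's Dictionary.doc2bow
    counts = Counter(t for t in tokens if t in word2id)
    return sorted((word2id[t], n) for t, n in counts.items())

word2id = {"사랑": 0, "눈물": 1, "기억": 2}      # hypothetical ids
doc = ["사랑", "눈물", "사랑", "기억", "사랑"]
print(toy_doc2bow(doc, word2id))                # [(0, 3), (1, 1), (2, 1)]
```

Tokens missing from the dictionary are silently dropped, which matches gensim's behavior.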
### (2) Inspect the corpus and dictionary
```
# corpus[i]: a list of the (word id, count) pairs for the words appearing in the i-th song
print(corpus[1])
# Number of words in the corpus dictionary
print(len(dictionary))
# dictionary[j]: look up the word whose id is j
print(dictionary[1111])
```
## 3. Training the LDA model
### (1) Training
```
import gensim
# Number of topics: k = 3
NUM_TOPICS = 3
# passes: number of training passes over the corpus; num_words: number of words to print per topic
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics = NUM_TOPICS, id2word=dictionary, passes=15)
topics = ldamodel.print_topics(num_words=5)
for topic in topics:
    print(topic)
```
### (2) Show 10 words per topic
```
# Print the top 10 words for each topic (print_topics() defaults to num_words=10)
for i in range(3):
    print(ldamodel.print_topics()[i])
```
---
```
from gensim import corpora
dictionary = corpora.Dictionary(word2000)
corpus = [dictionary.doc2bow(text) for text in word2000]
pd.DataFrame(corpus)
from tqdm import tqdm
import re
from gensim.models.ldamodel import LdaModel
from gensim.models.callbacks import CoherenceMetric
from gensim import corpora
from gensim.models.callbacks import PerplexityMetric
from gensim.models.coherencemodel import CoherenceModel
# Compute Coherence Score using c_v
coherence_model_lda = CoherenceModel(model=ldamodel, texts=word2000, dictionary=dictionary, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score (c_v): ', coherence_lda)  # c_v ranges from 0 to 1; around 0.55 is decent
# Compute Coherence Score using UMass
coherence_model_lda = CoherenceModel(model=ldamodel, texts=word2000, dictionary=dictionary, coherence="u_mass")
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score (u_mass): ', coherence_lda)  # u_mass closer to 0 means more perfectly coherent topics
sort_like2000.head(1)
```
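To build intuition for the u_mass number, here is a toy stdlib-only sketch of the UMass idea (gensim's actual implementation differs in details such as smoothing and segmentation): for each ordered pair of topic words, add the log of (co-document frequency + 1) divided by the document frequency of the earlier word.

```python
import math

def toy_umass(topic_words, documents):
    # Toy UMass coherence; assumes every topic word occurs in at least one document
    doc_sets = [set(d) for d in documents]
    def df(*words):
        # Number of documents containing all the given words
        return sum(all(w in d for w in words) for d in doc_sets)
    score = 0.0
    for j, later in enumerate(topic_words[1:], start=1):
        for earlier in topic_words[:j]:
            score += math.log((df(earlier, later) + 1) / df(earlier))
    return score

docs = [["love", "tears"], ["love", "sky"], ["love", "tears", "night"]]
print(toy_umass(["love", "tears"], docs))  # 0.0 -- "tears" co-occurs with "love" often
print(toy_umass(["tears", "sky"], docs))   # negative -- the pair never co-occurs
```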
## Error
AttributeError: 'list' object has no attribute 'lower' gensim
> https://stackoverflow.com/questions/41829323/attributeerror-list-object-has-no-attribute-lower-gensim<br>
> Fixed by casting with df.Lyric.astype(str)
```
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
# Extract only the data for the chosen category
year_df = sort_like2000
# Vectorize the text by word frequency with CountVectorizer (running through fit_transform)
count_vect = CountVectorizer(max_df=0.95, max_features=1000,
                             min_df=2, stop_words=list(stop_w),  # sklearn expects a list, not a set
                             ngram_range=(1,2))
ftr_vect = count_vect.fit_transform(year_df.Lyric.astype(str))
# Topic-model the vectorized features with sklearn's LDA class
# n_components is the number of topics; set to 3 to match the gensim model above
lda = LatentDirichletAllocation(n_components=3, random_state=42)
lda.fit(ftr_vect)
# components_ has one row per topic (3 here) and one column per feature (word), showing each word's weight
print(lda.components_.shape)
print(lda.components_)
# Here lda_model is the LDA model that has been fit (but not yet transformed) on the vectorized text
def display_topic_words(lda_model, feature_names, num_top_words):
    for topic_idx, topic in enumerate(lda_model.components_):
        print('\nTopic #', topic_idx+1)
        # For each topic, sort the feature (word) weights and return the indices, highest first
        # argsort() sorts ascending by default (1,2,3,...), so reverse with [::-1] for descending order
        topic_word_idx = topic.argsort()[::-1]
        top_idx = topic_word_idx[:num_top_words]
        # get_feature_names() on the fitted CountVectorizer returns the vectorized features (words)
        # Features are sorted numerically/alphabetically, and the order stays the same after fit_transform
        # 'sep'.join(...) concatenates strings with the given separator in between
        feature_concat = '+'.join([str(feature_names[i])+'*'+str(round(topic[i], 1)) for i in top_idx])
        print(feature_concat)

feature_names = count_vect.get_feature_names()
display_topic_words(lda, feature_names, 15)
# Running transform as well gives the topic distribution (columns) for each document (row)
doc_topics = lda.transform(ftr_vect)
print(doc_topics.shape)
print(doc_topics[:2])
# The documents of the built-in text datasets have their category labeled in the document name.
# Since we know the categories, let's check which documents score high on which topics,
# and guess what each topic's content is.
# Get the category values using the dataset's filename attribute
def get_filename_list(year_df):
    filename_lst = []
    for file in year_df.song_name:
        filename_temp = file.split('/')[-2:]
        filename = '.'.join(filename_temp)
        filename_lst.append(filename)
    return filename_lst

filename_lst = get_filename_list(year_df)
# Dataframe형태로 만들어보기
topic_names = ['Topic #'+ str(i) for i in range(0,3)]
topic_df = pd.DataFrame(data=doc_topics, columns=topic_names,
index=filename_lst)
# print(topic_df.head(20))
topic_df[10:20]
```
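The `argsort()[::-1][:num_top_words]` idiom above simply selects the highest-weight features; for intuition, a plain-Python equivalent with made-up names and weights:

```python
def top_k_features(weights, feature_names, k):
    # Indices sorted by weight, highest first -- same result as argsort()[::-1][:k]
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    return [(feature_names[i], weights[i]) for i in order[:k]]

names = ["love", "tears", "sky", "night"]      # hypothetical features
weights = [12.0, 3.5, 30.1, 0.2]               # hypothetical topic weights
print(top_k_features(weights, names, 2))       # [('sky', 30.1), ('love', 12.0)]
```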
## Topic # 1
그대*1751.9+사랑*1129.0+사람*392.7+눈물*306.9+기억*235.9+그대 사랑*207.3+사랑 그대*205.8+그대 그대*170.3+생각*164.9+마음*148.2+모두*148.0+시간*147.2+가슴*137.4+바보*135.0+혼자*130.3
```
topic_df[topic_df['Topic #0']>=0.7].head(20)
```
## Topic # 2
하늘*263.0+세상*249.1+모두*241.3+오늘*194.9+바람*161.5+노래*158.4+시간*155.2+위로*150.7+생각*137.3+사람*124.4+하루*121.1+순간*119.1+모습*117.2+가슴*117.1+거리*106.7
```
topic_df[topic_df['Topic #1']>=0.7].head(20)
```
## Topic # 3
사랑*1741.9+눈물*290.2+마음*286.1+가슴*284.5+세상*283.1+그녀*277.2+나나*258.3+사랑 사랑*252.9+나나 나나*192.3+사람*191.9+여자*161.0+당신*150.7+남자*139.0+때문*137.5+생각*133.8
```
topic_df[topic_df['Topic #2']>=0.7].head(20)
```
| github_jupyter |
# Snakemake End-To-End Tutorial
Hello, and welcome to the end-to-end tutorial! In this notebook, we will explore the workflow as a pipeline stitched together using the Snakemake software. By the end of the tutorial, you will be able to run the same workflow explored in the previous notebooks using just one command on your terminal. Although this seems ambitious, it is quite possible using Snakemake. Here are the steps we will review:
- About Snakemake
- Set up the project directory
- Set up virtual environment
- Set up repository and installations
- Review the Snakefile
- Run pipeline using Snakemake
- Additional features
Note that this tutorial assumes that you have already completed the pipeline notebook series. If not, you must download the toy data set and change the config files to reflect the new location of your data. This notebook is estimated to take just under 30 minutes to complete.
## About Snakemake
[Snakemake](https://snakemake.readthedocs.io/en/stable/) is a workflow management software that allows users to build pipelines that can be represented using directed acyclic graphs.
The basic workflow unit of Snakemake is a `rule`, defined most simply by an input, an output, and a shell command or script used to translate the input into the output. Every Snakemake project requires one `Snakefile`, which contains all of the rules for the project. By design, Snakemake will only run the first rule specified in the Snakefile. If an input to this first rule does not already exist in the project space, Snakemake refers to the other rules in the Snakefile to determine which sequence of rules must be run to produce the missing input. Given this design, before executing the workflow Snakemake can build a directed acyclic graph that fixes the order of the rules and exposes any opportunities for concurrency in the pipeline (i.e. rules may be run simultaneously, rather than sequentially, if their inputs and outputs do not rely on one another). This greatly reduces execution time. Moreover, this design makes it quite easy to execute only part of a workflow instead of the entire process.
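As a minimal sketch of this structure (the file names here are hypothetical, not the tutorial's actual targets), a Snakefile with one final target and one producing rule might look like:

```
rule all:
    input:
        "outputs/report.txt"            # the target Snakemake tries to produce

rule make_report:
    input:
        "data/raw.csv"
    output:
        "outputs/report.txt"
    shell:
        "wc -l {input} > {output}"      # {input} and {output} are filled in by Snakemake
```

If `outputs/report.txt` is missing or older than `data/raw.csv`, Snakemake runs `make_report`; otherwise `rule all` is already satisfied and nothing executes.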
## Set up the project directory
Because you have already completed the step-by-step pipeline project in the `PRO_12-123` directory, we will demonstrate how to complete it in an entirely new directory called `snakemake`.
```
%%bash
mkdir snakemake
cd snakemake
mkdir outputs
mkdir logs
mkdir model
mkdir data
mkdir model/outputs
cp -r ../configs/snakemake-configs/scripts scripts
```
We will use slightly modified config files for this project. Note that these configs are nearly identical to the ones that you previously used for the pipeline, but certain output destinations are modified to redirect to the snakemake folder. These new configs can be found in the `configs/snakemake-configs` folder.
We will also copy the toy dataset into this new project space.
```
%%bash
cp -r PRO_12-123/data/toy_data_set snakemake/data/snakemake_toy_data_set
```
## Set up virtual environment
It is critical to note that one is not able to do a full installation of the Snakemake software on Windows (for reference: https://snakemake.readthedocs.io/en/stable/getting_started/installation.html). We will install Snakemake with a minimal installation using pip.
The following cell will create and activate your virtual environment, upgrade the pip package manager, and register your virtual environment with JupyterLab. These steps were taken directly from the `setup.ipynb` notebook.
Open a terminal in your Jupyter Lab environment by selecting File -> New -> Terminal and execute the following commands. It is assumed that your default python environment on the host system has python3-venv installed (sudo apt-get install python3-venv -y).
Now, apply the new kernel to your notebook by first selecting the default kernel (which is typically "Python 3") and then selecting your new kernel "venv-snakemake" from the drop-down list. **NOTE:** It may take a minute for the drop-down list to update. You may refresh the browser instead.
Any python packages you pip install through the jupyter environment will now persist only in this environment.
## Set up repository and installations
At this point, make sure that you are in the new virtual environment that you just set up. You should see the name of the environment in parentheses on the left-hand side of your command line prompt. Still in your terminal, clone the repository and copy the `classifier` folder as well as the `Snakefile` into it. To do so, run the following commands:
```
%%bash
cd snakemake
git clone https://github.com/msk-mind/data-processing.git
cp ../configs/snakemake-configs/Snakefile data-processing/Snakefile
cp -r ../classifier data-processing/classifier
```
Finally, you must install the dependencies required by the classifier code and the repository. Run the following in your terminal:
Lastly, we will set some environment variables to ensure that this project will run smoothly. Still in your terminal, run:
At this point, you should have all the necessary installations and setup within your project space. You are ready to review the Snakefile!
## Review the Snakefile
Before running the Snakemake software, it is useful to review the contents of the Snakefile. Note that the first rule of the Snakefile, which by design is the only rule Snakemake runs directly, is titled `rule all`. The Snakemake software will attempt to create every input necessary for this rule in order to execute the workflow. In this case, we have only two inputs: `../outputs/visualize_tiles.done` and `../outputs/infer_tiles.done`. These are the two tasks that we wish to accomplish at the end of our pipeline. Snakemake will search for ways to produce these inputs given the other rules in the Snakefile.
The subsequent rules should look very familiar to you if you completed the full end-to-end notebook series. Each of the steps in the pipeline appear as rules in this Snakefile, with a few modifications. Sometimes, shell commands are run to translate inputs into outputs; other times, python scripts are run instead. Each rule creates a file entitled `../outputs/rule.done` to signify the completion of the rule.
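As a hedged sketch of that pattern (the rule name, upstream file, and script path are illustrative, not necessarily those in the repository's Snakefile), a `.done`-marker rule might look like:

```
rule infer_tiles:
    input:
        "../outputs/train_model.done"            # wait for the upstream step to finish
    output:
        touch("../outputs/infer_tiles.done")     # touch() writes an empty marker file on success
    shell:
        "python scripts/infer_tiles.py"
```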
Note that the `update_patient_ids` rule associates spoof id's to each of the slides. If you intend to run a different analysis using the Snakemake pipeline, be sure to update this rule to include a script that associates your slides with the correct patient ids for your research.
In order to get an idea for what will happen when you run the software, we will execute a "dry run" of our project. In your terminal, navigate to the `data-processing` folder that you just cloned (the folder in which the Snakefile is located) and run the following command:
```
!cd snakemake/data-processing && snakemake -np
```
You should see a series of rules that rely on one another; the order of the rules should be reminiscent of the order of these steps in the full end-to-end notebook series that you completed. To see these rules represented as an acyclic graph, run the following command in the terminal:
To create the directed acyclic graph directly from the notebook, run the following command:
```
import graphviz
import snakemake
!cd snakemake/data-processing/ && snakemake --dag --cores 1 | dot -Tsvg > ../dag.svg
```
Let's take a look at what the directed acyclic graph for our pipeline looks like.
```
import IPython
from IPython.display import SVG, display
def show_svg():
    display(SVG('/gpfs/mskmindhdp_emc/user/shared_data_folder/pathology-tutorial/snakemake/dag.svg'))

show_svg()
```
You should be able to view this svg file in the root folder of the snakemake project space. Notice how the rules rely on one another!
## Run pipeline using Snakemake
At this point, you are ready to run the pipeline using Snakemake! In the `data-processing` folder in your terminal, run the following command. Note that a lot of output will appear as the software navigates through the rules. Moreover, this process should take just about twenty minutes to complete, as some steps are time-intensive, so be patient!
If you wish to run the pipeline in your terminal, navigate to your terminal and run the following command:
If you instead wish to run the pipeline in this notebook, first make sure that you are in the proper virtual environment (as visible in the top right of the notebook). Then, run the following shell:
```
%%bash
export PYTHONPATH='/opt/spark-3.0.0-bin-hadoop3.2/python/'
cp -r classifier snakemake/scripts/classifier
cd snakemake/data-processing/
snakemake --cores 1
```
Congratulations, you now have the capability to run the entire tutorial from start to finish using just a single command!
## Additional features
If you are interested in only running part of the pipeline, this section will explain how to do so.
To run the pipeline from start until a specific point, you may simply change the input to the `rule all` to `../outputs/your_desired_stopping_point.done`. If you wish to run from a certain point in the pipeline until the finish, you must edit the Snakefile further. Navigate to the earliest rule that you wish to include in your segmented pipeline. Change that rule's input file to a file that does not depend on other rules (for instance, "/gpfs/mskmindhdp_emc/user/shared_data_folder/pathology-tutorial/snakemake/data/snakemake_toy_data_set/" is a great input placeholder because this file path is sure to exist already). Now, the Snakemake software will not redo any computation before this point in your rules!
*A quick note: when you are running the same rule repeatedly, be sure to clear the project space occupied by that rule in between runs so that it does not become cluttered with outputs from older runs of that rule. You may do so manually or create a bash script to empty your folders for you!*
At this point, you have completed the Snakemake End-to-End Pipeline tutorial notebook! Congratulations!
# Keras tutorial - the Happy House
Welcome to the first assignment of week 2. In this assignment, you will:
1. Learn to use Keras, a high-level neural networks API (programming framework), written in Python and capable of running on top of several lower-level frameworks including TensorFlow and CNTK.
2. See how you can in a couple of hours build a deep learning algorithm.
Why are we using Keras? Keras was developed to enable deep learning engineers to build and experiment with different models very quickly. Just as TensorFlow is a higher-level framework than Python, Keras is an even higher-level framework and provides additional abstractions. Being able to go from idea to result with the least possible delay is key to finding good models. However, Keras is more restrictive than the lower-level frameworks, so there are some very complex models that you can implement in TensorFlow but not (without more difficulty) in Keras. That being said, Keras will work fine for many common models.
In this exercise, you'll work on the "Happy House" problem, which we'll explain below. Let's load the required packages and solve the problem of the Happy House!
```
import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from kt_utils import *
import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
%matplotlib inline
```
**Note**: As you can see, we've imported a lot of functions from Keras. You can use them easily just by calling them directly in the notebook. Ex: `X = Input(...)` or `X = ZeroPadding2D(...)`.
## 1 - The Happy House
For your next vacation, you decided to spend a week with five of your friends from school. It is a very convenient house with many things to do nearby. But the most important benefit is that everybody has committed to be happy when they are in the house. So anyone wanting to enter the house must prove their current state of happiness.
<img src="images/happy-house.jpg" style="width:350px;height:270px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **the Happy House**</center></caption>
As a deep learning expert, to make sure the "Happy" rule is strictly applied, you are going to build an algorithm that uses pictures from the front door camera to check if the person is happy or not. The door should open only if the person is happy.
You have gathered pictures of your friends and yourself, taken by the front-door camera. The dataset is labeled.
<img src="images/house-members.png" style="width:550px;height:250px;">
Run the following code to normalize the dataset and learn about its shapes.
```
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
**Details of the "Happy" dataset**:
- Images are of shape (64,64,3)
- Training: 600 pictures
- Test: 150 pictures
It is now time to solve the "Happy" Challenge.
## 2 - Building a model in Keras
Keras is very good for rapid prototyping. In just a short time you will be able to build a model that achieves outstanding results.
Here is an example of a model in Keras:
```python
def model(input_shape):
    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    X_input = Input(input_shape)

    # Zero-Padding: pads the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)

    # CONV -> BN -> RELU Block applied to X
    X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)

    # MAXPOOL
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')

    return model
```
Note that Keras uses a different convention with variable names than we've previously used with numpy and TensorFlow. In particular, rather than creating and assigning a new variable on each step of forward propagation such as `X`, `Z1`, `A1`, `Z2`, `A2`, etc. for the computations for the different layers, in Keras code each line above just reassigns `X` to a new value using `X = ...`. In other words, during each step of forward propagation, we are just writing the latest value in the computation into the same variable `X`. The only exception was `X_input`, which we kept separate and did not overwrite, since we needed it at the end to create the Keras model instance (`model = Model(inputs = X_input, ...)` above).
**Exercise**: Implement a `HappyModel()`. This assignment is more open-ended than most. We suggest that you start by implementing a model using the architecture we suggest, and run through the rest of this assignment using that as your initial model. But after that, come back and take initiative to try out other model architectures. For example, you might take inspiration from the model above, but then vary the network architecture and hyperparameters however you wish. You can also use other functions such as `AveragePooling2D()`, `GlobalMaxPooling2D()`, `Dropout()`.
**Note**: You have to be careful with your data's shapes. Use what you've learned in the videos to make sure your convolutional, pooling and fully-connected layers are adapted to the volumes you're applying it to.
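As a quick aid for that shape bookkeeping: each conv layer maps a spatial dimension n to floor((n + 2p − f)/s) + 1 for filter size f, padding p, and stride s. A small plain-Python check, using the numbers from the example architecture above:

```python
def conv_output_shape(h, w, f, stride=1, pad=0):
    # Spatial output dims of a convolution: floor((n + 2p - f) / s) + 1
    return (h + 2 * pad - f) // stride + 1, (w + 2 * pad - f) // stride + 1

# 64x64 input, ZeroPadding2D((3, 3)), then Conv2D(32, (7, 7), strides=(1, 1))
h, w = conv_output_shape(64, 64, f=7, stride=1, pad=3)
print(h, w)                # 64 64 -- the padding preserves the spatial size
h, w = h // 2, w // 2      # MaxPooling2D((2, 2)) halves each spatial dim
print(h * w * 32)          # 32768 units feed the final Dense layer after Flatten
```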
```
# GRADED FUNCTION: HappyModel
def HappyModel(input_shape):
    """
    Implementation of the HappyModel.

    Arguments:
    input_shape -- shape of the images of the dataset

    Returns:
    model -- a Model() instance in Keras
    """

    ### START CODE HERE ###
    # Feel free to use the suggested outline in the text above to get started, and run through the whole
    # exercise (including the later portions of this notebook) once. Then come back and try out other
    # network architectures as well.

    # Define the input placeholder as a tensor with shape input_shape. Think of this as your input image!
    X_input = Input(input_shape)

    # Zero-Padding: pads the border of X_input with zeroes
    X = ZeroPadding2D((3, 3))(X_input)

    # CONV -> BN -> RELU Block applied to X
    X = Conv2D(32, (7, 7), strides = (1, 1), name = 'conv0')(X)
    X = BatchNormalization(axis = 3, name = 'bn0')(X)
    X = Activation('relu')(X)

    # MAXPOOL
    X = MaxPooling2D((2, 2), name='max_pool')(X)

    # FLATTEN X (means convert it to a vector) + FULLYCONNECTED
    X = Flatten()(X)
    X = Dense(1, activation='sigmoid', name='fc')(X)

    # Create model. This creates your Keras model instance, you'll use this instance to train/test the model.
    model = Model(inputs = X_input, outputs = X, name='HappyModel')

    return model
    ### END CODE HERE ###
```
You have now built a function to describe your model. To train and test this model, there are four steps in Keras:
1. Create the model by calling the function above
2. Compile the model by calling `model.compile(optimizer = "...", loss = "...", metrics = ["accuracy"])`
3. Train the model on train data by calling `model.fit(x = ..., y = ..., epochs = ..., batch_size = ...)`
4. Test the model on test data by calling `model.evaluate(x = ..., y = ...)`
If you want to know more about `model.compile()`, `model.fit()`, `model.evaluate()` and their arguments, refer to the official [Keras documentation](https://keras.io/models/model/).
**Exercise**: Implement step 1, i.e. create the model.
```
### START CODE HERE ### (1 line)
happyModel = HappyModel(X_train.shape[1:])
### END CODE HERE ###
```
**Exercise**: Implement step 2, i.e. compile the model to configure the learning process. Choose the 3 arguments of `compile()` wisely. Hint: the Happy Challenge is a binary classification problem.
```
### START CODE HERE ### (1 line)
happyModel.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics= ['accuracy'])
### END CODE HERE ###
```
**Exercise**: Implement step 3, i.e. train the model. Choose the number of epochs and the batch size.
```
### START CODE HERE ### (1 line)
happyModel.fit(X_train, Y_train, epochs = 1, batch_size = 50)
### END CODE HERE ###
```
Note that if you run `fit()` again, the `model` will continue to train with the parameters it has already learnt instead of reinitializing them.
**Exercise**: Implement step 4, i.e. test/evaluate the model.
```
### START CODE HERE ### (1 line)
preds = happyModel.evaluate(X_test, Y_test, batch_size = 30)
### END CODE HERE ###
print()
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
If your `happyModel()` function worked, you should have observed much better than random-guessing (50%) accuracy on the train and test sets.
To give you a point of comparison, our model gets around **95% test accuracy in 40 epochs** (and 99% train accuracy) with a mini batch size of 16 and "adam" optimizer. But our model gets decent accuracy after just 2-5 epochs, so if you're comparing different models you can also train a variety of models on just a few epochs and see how they compare.
If you have not yet achieved a very good accuracy (let's say more than 80%), here are some things you can play around with to try to achieve it:
- Try using blocks of CONV->BATCHNORM->RELU such as:
```python
X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
```
until your height and width dimensions are quite low and your number of channels quite large (≈32 for example). You are encoding useful information in a volume with a lot of channels. You can then flatten the volume and use a fully-connected layer.
- You can use MAXPOOL after such blocks. It will help you lower the dimension in height and width.
- Change your optimizer. We find Adam works well.
- If the model is struggling to run and you get memory issues, lower your batch_size (12 is usually a good compromise)
- Run on more epochs, until you see the train accuracy plateauing.
Even if you have achieved a good accuracy, please feel free to keep playing with your model to try to get even better results.
**Note**: If you perform hyperparameter tuning on your model, the test set actually becomes a dev set, and your model might end up overfitting to the test (dev) set. But just for the purpose of this assignment, we won't worry about that here.
## 3 - Conclusion
Congratulations, you have solved the Happy House challenge!
Now, you just need to link this model to the front-door camera of your house. We unfortunately won't go into the details of how to do that here.
<font color='blue'>
**What we would like you to remember from this assignment:**
- Keras is a tool we recommend for rapid prototyping. It allows you to quickly try out different model architectures. Are there any applications of deep learning to your daily life that you'd like to implement using Keras?
- Remember how to code a model in Keras and the four steps leading to the evaluation of your model on the test set. Create->Compile->Fit/Train->Evaluate/Test.
## 4 - Test with your own image (Optional)
Congratulations on finishing this assignment. You can now take a picture of your face and see if you could enter the Happy House. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right (0 is unhappy, 1 is happy)!
The training/test sets were quite similar; for example, all the pictures were taken against the same background (since a front door camera is always mounted in the same position). This makes the problem easier, but a model trained on this data may or may not work on your own data. But feel free to give it a try!
```
### START CODE HERE ###
img_path = 'images/my_image.jpg'
### END CODE HERE ###
img = image.load_img(img_path, target_size=(64, 64))
imshow(img)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print(happyModel.predict(x))
```
## 5 - Other useful functions in Keras (Optional)
Two other basic features of Keras that you'll find useful are:
- `model.summary()`: prints the details of your layers in a table with the sizes of its inputs/outputs
- `plot_model()`: plots your graph in a nice layout. You can even save it as ".png" via its `to_file` argument if you'd like to share it on social media ;). It is saved in "File" then "Open..." in the upper bar of the notebook.
Run the following code.
```
happyModel.summary()
plot_model(happyModel, to_file='HappyModel.png')
SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
```
```
from brainlit.utils import session
import napari
url = "s3://mouse-light-viz/precomputed_volumes/brain1"
sess = session.NeuroglancerSession(url=url, url_segments=url+"_segments", mip=0)
import numpy as np
from skimage import draw
import random
from cloudvolume.lib import Bbox
import pandas as pd
from tqdm import tqdm as tqdm
scale = sess.cv_segments.scales[0]["resolution"]
arr = []
sess.cv.progress = False # don't print download
for seg_id in [2]:  # list of segments
    vertices = sess.cv_segments.skeleton.get(seg_id).vertices
    for i in tqdm(range(len(vertices)-1)):
        v0 = (vertices[i]/scale).astype(int)    # point 0
        v1 = (vertices[i+1]/scale).astype(int)  # point 1
        coords = np.array(draw.line_nd(v0, v1)) # coords of line between points
        inds = random.sample(range(coords.shape[1]), 2) # get X random points (here, 2)
        for ind in inds:
            coord = coords[:,ind]
            bbox = Bbox(np.subtract(coord,[3,3,3]), np.add(coord,[4,4,4])) # get box around point
            data = sess.pull_bounds_img(bbox) # get data
            data_off = sess.pull_bounds_img(bbox+[50,50,50]) # get (50,50,50)-shifted data
            arr.append([1, seg_id, i, ind, data.flatten()])
            arr.append([0, seg_id, i, ind, data_off.flatten()])
header = ["label", "seg id", "vert id", "interp id", "data"]
df = pd.DataFrame(arr, columns=header)
import _pickle as pkl
pkl.dump(df, open("df.pkl", "wb"))
X = np.squeeze(np.array([i for i in df["data"]]))
y = df["label"]
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)
clf = MLPClassifier(random_state=1, max_iter=300).fit(X_train, y_train)
y_score = clf.predict(X_test)
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
fpr, tpr, _ = roc_curve(y_test, y_score)
roc_auc = auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('MLP ROC. 2 random points on each edge/(50,50,50) offset. seg2')
plt.legend(loc="lower right")
plt.savefig('mlp.png')
plt.show()
from sklearn.linear_model import LogisticRegression
X_center = X[:, 172].reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(X_center, y, stratify=y, random_state=1)
clf = LogisticRegression(random_state=0).fit(X_train, y_train)
y_score = clf.predict(X_test)
fpr, tpr, _ = roc_curve(y_test, y_score)
roc_auc = auc(fpr, tpr)
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('LogRegression ROC. vertex/(50,50,50) offset. seg2.')
plt.legend(loc="lower right")
plt.savefig('logreg.png')
plt.show()
```
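The `auc(fpr, tpr)` call above is a trapezoidal integral over the ROC points; a stdlib-only sanity check of that idea:

```python
def trapezoid_auc(xs, ys):
    # Area under the curve by the trapezoid rule, the same quantity sklearn's auc() computes
    points = list(zip(xs, ys))
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

print(trapezoid_auc([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))  # 1.0 -- a perfect classifier
print(trapezoid_auc([0.0, 1.0], [0.0, 1.0]))            # 0.5 -- the chance diagonal
```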
```
import numpy as np
import pandas as pd
import seaborn as sns
from scipy.stats import pearsonr
from matplotlib import pyplot as plt
from utils import nan_gmean, METATHERIAN_ORDERS
anage_df = pd.read_csv('../data/112321_AnAge_cleaned.csv', index_col=0).set_index('Binomial Name')
pantheria_df = pd.read_csv('../data/PanTHERIA_1-0_WR05_Aug2008_cleaned.csv', index_col=0).set_index('Binomial Name')
# Pairs of columns to match up between the datasets.
column_pairs_df = pd.read_excel('anage_pantheria_column_pairs.xlsx').fillna('')
column_pairs_df
anage_df.columns.values
pantheria_df.columns.values
intersecting_names = set(pantheria_df.index.values).intersection(anage_df.index.values)
union_names = set(pantheria_df.index.values).union(anage_df.index.values)
n_intersecting = len(intersecting_names)
n_total_names = len(union_names)
n_pantheria_only = len(set(pantheria_df.index.values).difference(anage_df.index.values))
n_anage_only = len(set(anage_df.index.values).difference(pantheria_df.index.values))
print('AnAge DB {0} entries'.format(anage_df.index.size))
print('Pantheria DB {0} entries'.format(pantheria_df.index.size))
print('{0} entries with matching binomial names'.format(n_intersecting))
print('{0} binomial names in AnAge only'.format(n_anage_only))
print('{0} binomial names in Pantheria only'.format(n_pantheria_only))
print('{0} total binomial names'.format(n_total_names))
cols2test = 'Order,Family,Genus,Species,PlacentalMammal'.split(',')
# Make sure the phylogenetic information matches.
mismatch_count = 0
case_count = 0
for idx in union_names:
if idx not in pantheria_df.index or idx not in anage_df.index:
continue
case_count += 1
for col in cols2test:
panv = pantheria_df.loc[idx,col]
anv = anage_df.loc[idx,col]
if panv != anv:
print(idx)
print('\t{0} "{1}" != "{2}"'.format(col, panv, anv))
mismatch_count += 1
print(mismatch_count, 'mismatches of', case_count, 'cases')
# We see that the phylogenetic information matches in almost all cases (only 21 of 955 mismatch).
# The mismatches are all family name discrepancies.
# After merging we will replace all the problematic Pantheria families
# with the matching values from AnAge since that DB is more recently updated.
# Comparing the AnAge and Pantheria datasets by plotting the most interesting
# columns (for our purposes) against each other. Looks like they are consistent with occasional outliers.
# tuples of (AnAge, Pantheria) column names
column_pairs = [('Body mass (g)', 'AdultBodyMass (g)'),
('Birth weight (g)', 'NeonateBodyMass (g)'),
('Litter/Clutch size', 'LitterSize (number)'),
('Gestation/Incubation (days)', 'GestationLen (days)'),
('Inter-litter/Interbirth interval', 'InterbirthInterval (d)'),
('Metabolic rate (W)', 'BasalMetRate (mLO2hr)'),
('Litters/Clutches per year', 'LittersPerYear (number)'),
('YoungMassPerYear_Estimated (g)', 'YoungMassPerYear_Estimated (g)')]
fig, axs = plt.subplots(ncols=4, nrows=2, figsize=(8,4.5))
flat_axs = axs.flatten()
for (anage_col, pantheria_col), my_ax in zip(column_pairs, flat_axs):
plt.sca(my_ax)
plt.xscale('log')
plt.yscale('log')
# grab data for both columns
x_data = anage_df.loc[intersecting_names][anage_col]
y_data = pantheria_df.loc[intersecting_names][pantheria_col]
# remove NaNs - needed for calculating correlation
mask = np.logical_and(x_data.notnull(), y_data.notnull())
x_data = x_data[mask]
y_data = y_data[mask]
sns.scatterplot(x=x_data, y=y_data, hue=pantheria_df.Order, legend=False)
# Calculate pearson correlation of log-transformed data
r_val = pearsonr(np.log(x_data), np.log(y_data))
N_obs = x_data.size
plt.text(0.05, 0.8, 'N = {0:d}\nR = {1:.2f}'.format(N_obs, r_val[0]),
transform=my_ax.transAxes)
plt.tight_layout()
plt.show()
# Now we will merge the two datasets following the column pair definitions in column_pairs_df
data_dict = dict((c,[]) for c in column_pairs_df.merged_col)
data_dict['BinomialName'] = []
# Textual columns found in both datasets are mostly phylogenetic in nature.
both_datasets = np.logical_and(column_pairs_df.anage_col.str.len(), column_pairs_df.pantheria_col.str.len())
mask = np.logical_and(both_datasets, column_pairs_df.is_numeric == False)
text_cols_both = column_pairs_df[mask]
print('Textual columns in both datasets')
print(text_cols_both.merged_col.values)
# Textual columns found in only one dataset get dataset-specific merged names
one_dataset = np.logical_xor(column_pairs_df.anage_col.str.len(), column_pairs_df.pantheria_col.str.len())
mask = np.logical_and(one_dataset, column_pairs_df.is_numeric == False)
text_cols_one = column_pairs_df[mask]
print('Textual columns in one dataset')
print(text_cols_one.merged_col.values)
# Numeric columns found are merged by geometric mean.
numeric_cols = column_pairs_df[column_pairs_df.is_numeric == True]
print('Numeric columns')
print(numeric_cols.merged_col.values)
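# nan_gmean comes from utils (not shown here). A minimal sketch, assuming it
# computes the geometric mean of the non-NaN entries of a Series and returns
# NaN when none remain (the name _nan_gmean_sketch is ours, purely illustrative):
def _nan_gmean_sketch(values):
    vals = values.dropna().astype(float)
    if vals.empty:
        return np.nan
    return float(np.exp(np.log(vals).mean()))
# e.g. _nan_gmean_sketch(pd.Series([4.0, np.nan, 9.0])) -> 6.0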
# Make a merged row for each binomial name in the union of the two datasets,
for bin_name in union_names:
data_dict['BinomialName'].append(bin_name)
in_anage = bin_name in anage_df.index
in_pantheria = bin_name in pantheria_df.index
# Prefer AnAge for phylogenetic data since it is newer
for idx, row in text_cols_both.iterrows():
my_val = None
if in_anage:
my_val = anage_df.loc[bin_name, row.anage_col]
elif in_pantheria:
my_val = pantheria_df.loc[bin_name, row.pantheria_col]
else:
assert False
data_dict[row.merged_col].append(my_val)
# Merge numeric data by geometric mean
for idx, row in numeric_cols.iterrows():
anage_val, pantheria_val = np.NaN, np.NaN
if in_anage and row.anage_col != '':
anage_val = anage_df.loc[bin_name, row.anage_col]
if in_pantheria and row.pantheria_col != '':  # independent of the AnAge check so both values can enter the geometric mean
pantheria_val = pantheria_df.loc[bin_name, row.pantheria_col]
my_val = nan_gmean(pd.Series([anage_val, pantheria_val]))
data_dict[row.merged_col].append(my_val)
# Textual found in one dataset have dataset specific merged names
# to indicate where they came from.
for idx, row in text_cols_one.iterrows():
my_val = None
if row.anage_col != '':
if in_anage:
my_val = anage_df.loc[bin_name, row.anage_col]
elif row.pantheria_col != '':
if in_pantheria:
my_val = pantheria_df.loc[bin_name, row.pantheria_col]
else:
assert False
data_dict[row.merged_col].append(my_val)
merged_df = pd.DataFrame(data_dict)
# Replace the family names that don't match between the DBs to follow AnAge.
to_replace = {"Cebidae": "Callitrichidae",
"Ziphiidae": "Hyperoodontidae",
"Physeteridae": "Kogiidae",
"Vespertilionidae": "Miniopteridae"
}
merged_df = merged_df.replace(to_replace)  # replace() returns a new frame; assign the result back
# Calculate num. young per year:
# Have two values that can be used to get litters/year.
# 1/ Litters per year and 2/ Inter-litter interval
litter_size = merged_df['LitterSize (number)']
litters_per_year = merged_df['LittersPerYear (number)']
interbirth_interval_d = merged_df['InterbirthInterval (d)']
litters_per_year_inferred = 365.0/interbirth_interval_d
young_per_year_litters = litter_size * litters_per_year  # young/litter x litters/year
young_per_year_interval = litter_size * litters_per_year_inferred
# Save that data
merged_df['YoungPerYear_Litters (number)'] = young_per_year_litters
merged_df['YoungPerYear_Interval (number)'] = young_per_year_interval
# The geometric mean of the two estimates is taken for plotting and fitting.
gmeans = pd.concat([young_per_year_litters, young_per_year_interval], axis=1).apply(nan_gmean, axis=1)
merged_df['YoungPerYear_Estimated (number)'] = gmeans
# Convert to a mass/year estimate using the mass of a neonate.
neonate_mass_g = merged_df['NeonateBodyMass (g)']
merged_df['YoungMassPerYear_Estimated (g)'] = neonate_mass_g*gmeans
# Load the basal metabolic rate data from Savage et al. 2004.
savage_df = pd.read_excel('../data/savage2004_BMR.xlsx')
# Rename columns to match merged_df
savage_df.columns = 'Order,Family,BinomialName'.split(',') + savage_df.columns[3:].tolist()
# Index on "BinomialName"
savage_df = savage_df.set_index('BinomialName')
# Merge it into our dataframe so that we have everything in one file now.
all_merged_df = merged_df.join(savage_df, on='BinomialName', how='outer', rsuffix='_savage04')
# Set PlacentalMammal for the species in Savage '04 without a match in the merged_df
mask = all_merged_df.PlacentalMammal.isnull()
all_merged_df.loc[mask, 'PlacentalMammal'] = ~all_merged_df.loc[mask].Order.isin(METATHERIAN_ORDERS)  # placental iff the order is not metatherian (marsupial)
all_merged_df.head(3)
# Check that the Family and Order are the same.
fam_check = np.logical_and(all_merged_df.Family.notnull(), all_merged_df['Family_savage04'].notnull())
fam_check = np.logical_and(fam_check, all_merged_df['Family_savage04'] != all_merged_df['Family'])
ord_check = np.logical_and(all_merged_df.Order.notnull(), all_merged_df['Order_savage04'].notnull())
ord_check = np.logical_and(ord_check, all_merged_df['Order_savage04'] != all_merged_df['Order'])
fam_matches = (all_merged_df['Family_savage04'] == all_merged_df['Family']).sum()
ord_matches = (all_merged_df['Order_savage04'] == all_merged_df['Order']).sum()
print('Family level: {0} matches, {1} mismatches'.format(fam_matches, fam_check.sum()))
print('Order level: {0} matches, {1} mismatches'.format(ord_matches, ord_check.sum()))
#print('Families:')
#for idx, row in all_merged_df[fam_check].iterrows():
# print('Savage {0}, merged_df {1}'.format(row.Family_savage04, row.Family))
#print('\nOrders:')
#for idx, row in all_merged_df[ord_check].iterrows():
# print('Savage {0}, merged_df {1}'.format(row.Order_savage04, row.Order))
# Looks like they do not match entirely...
# From a brief inspection it seems like these are mostly different names for the same groups.
# TODO: deeper inspection of the mismatches.
# As a test, plot body masses from Anage+Pantheria against BMR from Savage '04.
# The obvious power-law scaling suggests that we mostly got the matching right.
plt.figure()
plt.xscale('log')
plt.yscale('log')
sns.scatterplot(data=all_merged_df, x='AdultBodyMass (g)', y='BMR_W')
all_merged_df.to_csv('../data/merged_animal_traits.csv')
placental_mammals = all_merged_df[all_merged_df.PlacentalMammal]
placental_mammals.to_csv('../data/merged_animal_traits_placental_only.csv')
```
| github_jupyter |
```
import numpy as np
import pandas as pd
# Code to read csv file into colaboratory:
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
'''
downloaded = drive.CreateFile({'id':'1q9Yh9GorYkl_xf3O_P4zBbPYBXtTcuWx'})
downloaded.GetContentFile('moviereviews.tsv')
df= pd.read_csv("moviereviews.tsv", sep='\t')
df.head()
'''
```
# Sentiment Analysis
Now that we've seen word vectors we can start to investigate sentiment analysis. The goal is to find commonalities between documents, with the understanding that similarly *combined* vectors should correspond to similar sentiments.
While the scope of sentiment analysis is very broad, we will focus our work in three ways.
### 1. Polarity classification
We won't try to determine if a sentence is objective or subjective, fact or opinion. Rather, we care only if the text expresses a *positive*, *negative* or *neutral* opinion.
### 2. Document level scope
We'll also try to aggregate all of the sentences in a document or paragraph, to arrive at an overall opinion.
### 3. Coarse analysis
We won't try to perform a fine-grained analysis that would determine the degree of positivity/negativity. That is, we're not trying to guess how many stars a reviewer awarded, just whether the review was positive or negative.
## Broad Steps:
* First, consider the text being analyzed. A model trained on paragraph-long movie reviews might not be effective on tweets. Make sure to use an appropriate model for the task at hand.
* Next, decide the type of analysis to perform. In the previous section on text classification we used a bag-of-words technique that considered only single tokens, or *unigrams*. Some rudimentary sentiment analysis models go one step further, and consider two-word combinations, or *bigrams*. In this section, we'd like to work with complete sentences, and for this we're going to import a trained NLTK lexicon called *VADER*.
## NLTK's VADER module | Valence Aware Dictionary and sEntiment Reasoner
VADER is an NLTK module that provides sentiment scores based on words used ("completely" boosts a score, while "slightly" reduces it), on capitalization & punctuation ("GREAT!!!" is stronger than "great."), and negations (words like "isn't" and "doesn't" affect the outcome).
<br>To view the source code visit https://www.nltk.org/_modules/nltk/sentiment/vader.html
**Download the VADER lexicon.** You only need to do this once.
```
import nltk
nltk.download('vader_lexicon')
```
<div class="alert alert-danger">NOTE: At the time of this writing there's a <a href='https://github.com/nltk/nltk/issues/2053'>known issue</a> with SentimentIntensityAnalyzer that raises a harmless warning on loading<br>
<tt><font color=black> UserWarning: The twython library has not been installed.<br> Some functionality from the twitter package will not be available.</tt>
This is due to be fixed in an upcoming NLTK release. For now, if you want to avoid it you can (optionally) install the NLTK twitter library with<br>
<tt><font color=black> conda install nltk[twitter]</tt><br>or<br>
<tt><font color=black> pip3 install -U nltk[twitter]</tt></div>
```
# !pip3 install -U nltk[twitter]
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
```
VADER's `SentimentIntensityAnalyzer()` takes in a string and returns a dictionary of scores in each of four categories:
* negative [0,1]
* neutral [0,1]
* positive [0,1]
* compound *(computed by normalizing the scores above)* [-1,1]
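The compound value is not a simple average of the other three; in the NLTK source linked above it is produced by normalizing the summed word valences into [-1, 1]. A sketch of that normalization step (the `alpha=15` constant mirrors the VADER source; treat the exact value as an implementation detail):

```python
import math

def normalize(score, alpha=15):
    # Map an unbounded sum of word valences into the open interval (-1, 1).
    return score / math.sqrt(score * score + alpha)

print(normalize(0.0))   # neutral text stays at 0.0
print(normalize(5.0))   # strongly positive sums approach +1
print(normalize(-5.0))  # symmetric for strongly negative sums
```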
```
a = 'This was a good movie.'
sid.polarity_scores(a)
a = 'This was the best, most awesome movie EVER MADE!!!'
sid.polarity_scores(a)
a = 'This was the worst film to ever disgrace the screen.'
sid.polarity_scores(a)
```
## Use VADER to analyze Amazon Reviews
For this exercise we're going to apply `SentimentIntensityAnalyzer` to a dataset of 10,000 Amazon reviews. Like our movie reviews dataset, these are labeled as either "pos" or "neg". At the end we'll determine the accuracy of our sentiment analysis with VADER.
```
import numpy as np
import pandas as pd
downloaded = drive.CreateFile({'id':'1kb-mL5Dl-5VoV-ZREdKqwG_FCWCXO1uj'})
downloaded.GetContentFile('amazonreviews.tsv')
df= pd.read_csv("amazonreviews.tsv", sep='\t')
df.head()
df.shape
df['label'].value_counts()
```
### Clean the data:
Recall that our moviereviews.tsv file contained empty records. Let's check to see if any exist in amazonreviews.tsv.
```
# REMOVE NaN VALUES AND EMPTY STRINGS:
df.dropna(inplace=True)
blanks = [] # start with an empty list
for index,label,review in df.itertuples(): # iterate over the DataFrame
if type(review)==str: # avoid NaN values
if review.isspace(): # test 'review' for whitespace
blanks.append(index) # add matching index numbers to the list
df.drop(blanks, inplace=True)
df['label'].value_counts()
blanks # empty
# if blanks[] was not empty --> df.drop(blanks, inplace= True)
```
In this case there were no empty records. Good!
## Let's run the first review through VADER
```
df.iloc[0]['review']
# Display the review as rendered Markdown, which is easier to read than the raw string above
from IPython.display import Markdown, display
display(Markdown('> '+df['review'][0]))
sid.polarity_scores(df.loc[0]['review'])
df.loc[0]['label']
```
Great! Our first review was labeled "positive", and earned a positive compound score.
## Adding Scores and Labels to the DataFrame
In this next section we'll add columns to the original DataFrame to store polarity_score dictionaries, extracted compound scores, and new "pos/neg" labels derived from the compound score. We'll use this last column to perform an accuracy test.
```
# Use a lambda to apply sid.polarity_scores to each review
df['scores'] = df['review'].apply(lambda review: sid.polarity_scores(review))
df.head()
# compound is usually useful, so adding that as a column as well
df['compound'] = df['scores'].apply(lambda score_dict: score_dict['compound'])
df.head()
# translating the compounding scores and creating a new column
# if compound score >= 0 -> positive, else negative
df['comp_score'] = df['compound'].apply(lambda score: 'pos' if score >=0 else 'neg')
df.head()
```
## Report on Accuracy
Finally, we'll use scikit-learn to determine how close VADER came to our original 10,000 labels.
```
from sklearn.metrics import accuracy_score,classification_report,confusion_matrix
# comparing "label" (the ground-truth label) with the comp_score
# we derived afterwards from the compound score
accuracy_score(df['label'],df['comp_score'])
print(classification_report(df['label'],df['comp_score']))
# VADER is not good at detecting sarcasm
print(confusion_matrix(df['label'],df['comp_score']))
# With labels sorted as ['neg', 'pos'], the matrix reads row-major:
# 2622 negatives correctly classified as negative
# 2475 negatives incorrectly classified as positive
# 434 positives incorrectly classified as negative
# 4469 positives correctly classified as positive
# You can make the confusion matrix less confusing by adding labels:
#from sklearn import metrics
#df = pd.DataFrame(metrics.confusion_matrix(y_test,predictions), index=['negative','positive'], columns=['negative','positive'])
#df
# but here we hadn't split the data
```
This tells us that VADER correctly identified an Amazon review as "positive" or "negative" roughly 71% of the time.
# Sentiment Analysis Project
## Task #1: Perform vector arithmetic on your own words
Write code that evaluates vector arithmetic on your own set of related words. The goal is to come as close to an expected word as possible.
```
!python -m spacy download en_core_web_lg
# !python -m spacy download en_vectors_web_lg
# Import spaCy and load the language library. Remember to use a larger model!
import spacy
nlp = spacy.load('en_core_web_lg')
# Choose the words you wish to compare, and obtain their vectors
word1 = nlp.vocab['wolf'].vector
word2 = nlp.vocab['dog'].vector
word3 = nlp.vocab['cat'].vector
# Import spatial and define a cosine_similarity function
from scipy import spatial
cosine_similarity = lambda x, y: 1 - spatial.distance.cosine(x, y)
# Write an expression for vector arithmetic
# For example: new_vector = word1 - word2 + word3
new_vector = word1 - word2 + word3
# List the top ten closest vectors in the vocabulary to the result of the expression above
computed_similarities = []
for word in nlp.vocab:
if word.has_vector: #not all words have vectors in spacy
if word.is_lower:
if word.is_alpha: # if they are alphabetic
similarity = cosine_similarity(new_vector, word.vector)
computed_similarities.append((word, similarity))
computed_similarities = sorted(computed_similarities, key=lambda item: -item[1]) #in descending indexing
print([w[0].text for w in computed_similarities[:10]])
```
### CHALLENGE: Write a function that takes in 3 strings, performs a-b+c arithmetic, and returns a top-ten result
```
def vector_math(a,b,c):
new_vector = nlp.vocab[a].vector - nlp.vocab[b].vector + nlp.vocab[c].vector
computed_similarities = []
for word in nlp.vocab:
if word.has_vector:
if word.is_lower:
if word.is_alpha:
similarity = cosine_similarity(new_vector, word.vector)
computed_similarities.append((word, similarity))
computed_similarities = sorted(computed_similarities, key=lambda item: -item[1])
return [w[0].text for w in computed_similarities[:10]]
# Test the function on known words:
vector_math('king','man','woman')
```
## Task #2: Perform VADER Sentiment Analysis on your own review
Write code that returns a set of SentimentIntensityAnalyzer polarity scores based on your own written review.
```
# Import SentimentIntensityAnalyzer and create an sid object
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
# Write a review as one continuous string (multiple sentences are ok)
my_review = 'This movie portrayed real people, and was based on actual events.'
# Obtain the sid scores for your review
sid.polarity_scores(my_review)
```
### CHALLENGE: Write a function that takes in a review and returns a score of "Positive", "Negative" or "Neutral"
```
def review_rating(string):
scores = sid.polarity_scores(string)
if scores['compound'] == 0:
return 'Neutral'
elif scores['compound'] > 0:
return 'Positive'
else:
return 'Negative'
# Test the function on your review above:
review_rating(my_review)
```
| github_jupyter |
# T1574.008 - Path Interception by Search Order Hijacking
Adversaries may execute their own malicious payloads by hijacking the search order used to load other programs. Because some programs do not call other programs using the full path, adversaries may place their own file in the directory where the calling program is located, causing the operating system to launch their malicious software at the request of the calling program.
Search order hijacking occurs when an adversary abuses the order in which Windows searches for programs that are not given a path. Unlike [DLL Search Order Hijacking](https://attack.mitre.org/techniques/T1574/001), the search order differs depending on the method that is used to execute the program. (Citation: Microsoft CreateProcess) (Citation: Windows NT Command Shell) (Citation: Microsoft WinExec) However, it is common for Windows to search in the directory of the initiating program before searching through the Windows system directory. An adversary who finds a program vulnerable to search order hijacking (i.e., a program that does not specify the path to an executable) may take advantage of this vulnerability by creating a program named after the improperly specified program and placing it within the initiating program's directory.
For example, "example.exe" runs "cmd.exe" with the command-line argument <code>net user</code>. An adversary may place a program called "net.exe" within the same directory as example.exe; "net.exe" will then be run instead of the Windows system utility net. In addition, if an adversary places a program called "net.com" in the same directory as "net.exe", then <code>cmd.exe /C net user</code> will execute "net.com" instead of "net.exe" due to the order of executable extensions defined under PATHEXT. (Citation: Microsoft Environment Property)
Search order hijacking is also a common practice for hijacking DLL loads and is covered in [DLL Search Order Hijacking](https://attack.mitre.org/techniques/T1574/001).
## Atomic Tests:
Currently, no tests are available for this technique.
## Detection
Monitor file creation for files named after partial directories and in locations that may be searched for common processes through the environment variable, or otherwise should not be user writable. Monitor the executing process for process executable paths that are named for partial directories. Monitor file creation for programs that are named after Windows system programs or programs commonly executed without a path (such as "findstr," "net," and "python"). If this activity occurs outside of known administration activity, upgrades, installations, or patches, then it may be suspicious.
Data and events should not be viewed in isolation, but as part of a chain of behavior that could lead to other activities, such as network connections made for Command and Control, learning details about the environment through Discovery, and Lateral Movement.
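The file-creation monitoring described above can be illustrated with a rough sketch (hypothetical and purely illustrative, not a production detection; the watchlist and extension set are our own assumptions) that flags files whose base name could shadow a commonly hijacked utility:

```python
import os

# Hypothetical watchlist: program names commonly executed without a path,
# plus a subset of a typical PATHEXT. Both sets are illustrative only.
SUSPICIOUS_BASENAMES = {"net", "findstr", "python", "cmd"}
EXECUTABLE_EXTS = {".com", ".exe", ".bat", ".cmd"}

def flag_shadowing_files(directory):
    """Return file names that could shadow a system utility via search order."""
    hits = []
    for name in os.listdir(directory):
        base, ext = os.path.splitext(name)
        if base.lower() in SUSPICIOUS_BASENAMES and ext.lower() in EXECUTABLE_EXTS:
            hits.append(name)
    return sorted(hits)
```

In practice such a check would run on file-creation telemetry and be filtered against known administration activity, as noted above.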
| github_jupyter |
# Renumbering Test
In this notebook, we will use the _renumber_ function to compute new vertex IDs.
Under the covers, cuGraph represents a graph as a matrix in Compressed Sparse Row format (see https://en.wikipedia.org/wiki/Sparse_matrix). The problem with a matrix representation is that there is a column and row for every possible vertex ID. Therefore, if the data contains vertex IDs that are non-contiguous, or which start at a large initial value, then there is a lot of empty space that uses up memory.
An alternative use case is renumbering to convert vertex IDs of another data type into a contiguous sequence of integer IDs. This is useful when the dataset contains vertex IDs that are not integers.
Notebook Credits
* Original Authors: Chuck Hastings and Bradley Rees
* Created: 08/13/2019
* Updated: 07/08/2020
RAPIDS Versions: 0.15
Test Hardware
* GV100 32G, CUDA 10.2
## Introduction
Demonstrate creating a graph with renumbering.
Most cugraph algorithms operate on a CSR representation of a graph. A CSR representation requires an indices array that is as long as the number of edges and an offsets array whose length is 1 more than the largest vertex id. This makes the memory utilization entirely dependent on the size of the largest vertex id. For data sets that have a sparse range of vertex ids, the size of the CSR can be unnecessarily large. It is easy to construct an example where the amount of memory required for the offsets array will exceed the amount of memory in the GPU (not to mention the performance cost of having a large number of offsets that are empty but still have to be read to be skipped).
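The memory effect is easy to see with scipy's CSR type (used here only as an illustration; cuGraph's internals differ): the offsets (`indptr`) array has one entry per possible row plus one, no matter how few vertices actually appear.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Four edges among four vertices, but one vertex id is 1,000,000.
src = np.array([0, 1, 2, 1_000_000])
dst = np.array([1, 2, 1_000_000, 0])
n = 1_000_001  # the shape must cover the largest id

m = csr_matrix((np.ones(4), (src, dst)), shape=(n, n))
print(len(m.indices))  # one entry per edge: 4
print(len(m.indptr))   # one entry per possible row, plus 1: 1,000,002
```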
The renumbering feature allows us to generate unique identifiers for every vertex identified in the input data frame.
Renumbering can happen automatically as part of graph generation. It can also be done explicitly by the caller; this notebook will provide examples using both techniques.
The fundamental requirement for the user of the renumbering software is to specify how to identify a vertex. We will refer to this as the *external* vertex identifier. This will typically be done by specifying a cuDF DataFrame, and then identifying which columns within the DataFrame constitute source vertices and which columns specify destination columns.
Let us consider that a vertex is uniquely defined as a tuple of elements from the rows of a cuDF DataFrame. The primary restriction is that the number of elements in the tuple must be the same for both source vertices and destination vertices, and that the types of each element in the source tuple must be the same as the corresponding element in the destination tuple. This is a natural restriction, and it should be clear why it is required.
Renumbering takes the collection of tuples that uniquely identify vertices in the graph, eliminates duplicates, and assigns integer identifiers to the unique tuples. These integer identifiers are used as *internal* vertex identifiers within the cuGraph software.
One of the features of the renumbering function is that it maps vertex ids of any size and structure down into a range that fits into 32-bit integers. The current cugraph algorithms are limited to 32-bit signed integers as vertex ids, and the renumbering feature will allow the caller to translate ids that are 64-bit (or strings, or complex data types) into a densely packed 32-bit array of ids that can be used in cugraph algorithms. Note that if there are more than 2^31 - 1 unique vertex ids then the renumber method will fail with an error indicating that there are too many vertices to renumber into a 32-bit signed integer.
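The core idea — mapping arbitrary external identifiers down to a dense range of small integers — can be sketched in plain pandas before we see the cuGraph version (an illustration only, not the NumberMap implementation; it uses a three-edge toy version of the addresses below):

```python
import pandas as pd

# A small circle of ipv4 addresses, kept as strings.
edges = pd.DataFrame({'src': ['192.168.1.1', '172.217.5.238', '216.228.121.209'],
                      'dst': ['172.217.5.238', '216.228.121.209', '192.168.1.1']})

# Assign a dense 0..n-1 integer id to every unique external identifier.
uniques = pd.unique(pd.concat([edges['src'], edges['dst']]))
id_map = {v: i for i, v in enumerate(uniques)}

renumbered = edges.replace(id_map)
print(renumbered)  # small, contiguous integer ids
print(id_map)      # invert this mapping to translate back
```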
First step is to import the needed libraries
```
import cugraph
import cudf
import socket
import struct
import pandas as pd
import numpy as np
import networkx as nx
from cugraph.structure import NumberMap
```
# Create some test data
This creates a small circle using some ipv4 addresses, storing the columns in a GPU data frame.
The current version of renumbering operates only on integer types, so we translate the ipv4 strings into 64 bit integers.
```
source_list = [ '192.168.1.1', '172.217.5.238', '216.228.121.209', '192.16.31.23' ]
dest_list = [ '172.217.5.238', '216.228.121.209', '192.16.31.23', '192.168.1.1' ]
source_as_int = [ struct.unpack('!L', socket.inet_aton(x))[0] for x in source_list ]
dest_as_int = [ struct.unpack('!L', socket.inet_aton(x))[0] for x in dest_list ]
print("sources came from: " + str([ socket.inet_ntoa(struct.pack('!L', x)) for x in source_as_int ]))
print(" sources as int = " + str(source_as_int))
print("destinations came from: " + str([ socket.inet_ntoa(struct.pack('!L', x)) for x in dest_as_int ]))
print(" destinations as int = " + str(dest_as_int))
```
# Create our GPU data frame
```
df = pd.DataFrame({
'source_list': source_list,
'dest_list': dest_list,
'source_as_int': source_as_int,
'dest_as_int': dest_as_int
})
gdf = cudf.DataFrame.from_pandas(df[['source_as_int', 'dest_as_int']])
gdf.to_pandas()
```
# Run renumbering
Output from renumbering is a data frame and a NumberMap object. The data frame contains the renumbered sources and destinations. The NumberMap will allow you to translate from external to internal vertex identifiers. The renumbering call will rename the specified source and destination columns to indicate they were renumbered and no longer contain the original data, and the new names are guaranteed to be unique and not collide with other column names.
Note that renumbering does not guarantee that the output data frame is in the same order as the input data frame (although in our simple example it will match). To address this we will add the index as a column of gdf before renumbering.
```
gdf['order'] = gdf.index
renumbered_df, numbering = NumberMap.renumber(gdf, ['source_as_int'], ['dest_as_int'])
new_src_col_name = numbering.renumbered_src_col_name
new_dst_col_name = numbering.renumbered_dst_col_name
renumbered_df
```
# Now combine renumbered df with original df
We can use the order column to merge the data frames together.
```
renumbered_df = renumbered_df.merge(gdf, on='order').sort_values('order').reset_index(drop=True)
renumbered_df
```
# Data types
Just to confirm, the data types of the renumbered columns should be int32, the original data should be int64, the numbering map needs to be int64 since the values it contains map to the original int64 types.
```
renumbered_df.dtypes
```
# Quick verification
The NumberMap object allows us to translate back and forth between *external* vertex identifiers and *internal* vertex identifiers.
To understand the renumbering, here's an ugly block of verification logic.
```
numbering.from_internal_vertex_id(cudf.Series([0]))['0'][0]
for i in range(len(renumbered_df)):
print(" ", i,
": (", source_as_int[i], ",", dest_as_int[i],
"), renumbered: (", renumbered_df[new_src_col_name][i], ",", renumbered_df[new_dst_col_name][i],
"), translate back: (",
numbering.from_internal_vertex_id(cudf.Series([renumbered_df[new_src_col_name][i]]))['0'][0], ",",
numbering.from_internal_vertex_id(cudf.Series([renumbered_df[new_dst_col_name][i]]))['0'][0], ")"
)
```
# Now let's do some graph things...
To start, let's run page rank. Not particularly interesting on our circle, since everything should have an equal rank.
Note, we passed in the renumbered columns as our input, so the output is based upon the internal vertex ids.
```
G = cugraph.Graph()
gdf_r = cudf.DataFrame()
gdf_r["src"] = renumbered_df[new_src_col_name]
gdf_r["dst"] = renumbered_df[new_dst_col_name]
G.from_cudf_edgelist(gdf_r, source='src', destination='dst', renumber=False)
pr = cugraph.pagerank(G)
pr.to_pandas()
```
# Convert vertex ids back
To be relevant, we probably want the vertex ids converted back into the original ids. This can be done by the NumberMap object.
Note again, the unrenumber call does not guarantee order. If order matters you would need to do something to regenerate the desired order.
```
numbering.unrenumber(pr, 'vertex')
```
# Try to run jaccard
Not at all an interesting result, but it demonstrates a more complicated case. Jaccard returns a coefficient for each edge. In order to show the original ids we need to add columns to the data frame for each column that contains one of renumbered vertices. In this case, the columns source and destination contain renumbered vertex ids.
```
jac = cugraph.jaccard(G)
jac = numbering.unrenumber(jac, 'source')
jac = numbering.unrenumber(jac, 'destination')
jac.insert(len(jac.columns),
"original_source",
[ socket.inet_ntoa(struct.pack('!L', x)) for x in jac['source'].values_host ])
jac.insert(len(jac.columns),
"original_destination",
[ socket.inet_ntoa(struct.pack('!L', x)) for x in jac['destination'].values_host ])
jac.to_pandas()
```
# Working from the strings
Starting with version 0.15, the base renumbering feature supports arbitrary column types, so strings can now be used directly.
Renumbering also happens automatically inside the graph, so let's combine all of this into a simpler example with the same data.
```
gdf = cudf.DataFrame.from_pandas(df[['source_list', 'dest_list']])
G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='source_list', destination='dest_list', renumber=True)
pr = cugraph.pagerank(G)
print('pagerank output:\n', pr)
jac = cugraph.jaccard(G)
print('jaccard output:\n', jac)
```
___
Copyright (c) 2019-2020, NVIDIA CORPORATION.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
___
| github_jupyter |
#IMPORTS
```
!pip install gower
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA
from mpl_toolkits import mplot3d
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
import gower
```
##Loading data
```
# Creating a dictionary with the data
dictionary = { 'age': [22, 25, 30, 38, 42, 47, 55, 62, 61, 90],
'gender': ['M', 'M', 'F', 'F', 'F', 'M', 'M', 'M', 'M', "M"],
'civil_status': ['SINGLE', 'SINGLE', 'SINGLE', 'MARRIED', 'MARRIED', 'SINGLE', 'MARRIED', 'DIVORCED', 'MARRIED', 'DIVORCED'],
'salary': [18000, 23000, 27000, 32000, 34000, 20000, 40000, 42000, 25000, 70000],
'has_children': [False, False, False, True, True, False, False, False, False, True],
'purchaser_type': ['LOW_PURCHASER', 'LOW_PURCHASER', 'LOW_PURCHASER', 'HEAVY_PURCHASER', 'HEAVY_PURCHASER', 'LOW_PURCHASER', 'MEDIUM_PURCHASER', 'MEDIUM_PURCHASER', 'MEDIUM_PURCHASER', 'LOW_PURCHASER']}
# Creating a Pandas DataFrame from the dictionary
df = pd.DataFrame.from_dict(dictionary)
df
```
#IMPLEMENTING GOWER
```
#Distance matrix
distance_matrix = gower.gower_matrix(df)
distance_matrix=pd.DataFrame(data=distance_matrix,columns=['c1','c2','c3','c4','c5','c6','c7','c8','c9','c10'])
distance_matrix
# Configuring the parameters of the clustering algorithm
dbscan_cluster = DBSCAN(eps=0.3,
min_samples=2,
metric="precomputed")
# Fitting the clustering algorithm
dbscan_cluster.fit(distance_matrix)
# Adding the results to a new column in the dataframe
df["cluster"] = dbscan_cluster.labels_
df
#Dimensionality reduction
pca = PCA(n_components=3)
pca_reduction=pca.fit_transform(distance_matrix)
pca_reduction=pd.DataFrame(data=pca_reduction,columns=['X','Y','Z'])
pca_reduction['cluster']=dbscan_cluster.labels_
pca_reduction
# Creating dataset
z1 = pca_reduction[pca_reduction['cluster']==0]['Z']
x1 = pca_reduction[pca_reduction['cluster']==0]['X']
y1 = pca_reduction[pca_reduction['cluster']==0]['Y']
z2 = pca_reduction[pca_reduction['cluster']==1]['Z']
x2 = pca_reduction[pca_reduction['cluster']==1]['X']
y2 = pca_reduction[pca_reduction['cluster']==1]['Y']
z3 = pca_reduction[pca_reduction['cluster']==2]['Z']
x3 = pca_reduction[pca_reduction['cluster']==2]['X']
y3 = pca_reduction[pca_reduction['cluster']==2]['Y']
# Creating figure
fig = plt.figure(figsize = (10, 7))
ax = plt.axes(projection ="3d")
# Creating plot
ax.scatter3D(x1, y1, z1, color = "green")
ax.scatter3D(x2, y2, z2, color = "red")
ax.scatter3D(x3, y3, z3, color = "blue")
plt.title("simple 3D scatter plot")
# show plot
plt.show()
```
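For intuition, the Gower distance computed above averages per-feature dissimilarities: range-scaled absolute differences for numeric features and a simple 0/1 mismatch for categorical ones. A hand-rolled sketch for a single pair of records (illustration only; the `gower` package additionally handles weighting and missing values):

```python
def gower_pair(a, b, numeric_ranges):
    """Average per-feature dissimilarity between two records (plain dicts)."""
    total = 0.0
    for key in a:
        if key in numeric_ranges:  # numeric: |difference| scaled by the feature's range
            total += abs(a[key] - b[key]) / numeric_ranges[key]
        else:                      # categorical: 0 if equal, 1 otherwise
            total += 0.0 if a[key] == b[key] else 1.0
    return total / len(a)

p1 = {'age': 22, 'salary': 18000, 'gender': 'M'}
p2 = {'age': 25, 'salary': 23000, 'gender': 'M'}
ranges = {'age': 90 - 22, 'salary': 70000 - 18000}  # observed min/max from the toy data
print(gower_pair(p1, p2, ranges))  # a small value: the two records are similar
```

Because every term lies in [0, 1], the average does too, which is why a fixed `eps` like 0.3 is meaningful for DBSCAN with `metric="precomputed"`.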
| github_jupyter |
<a href="https://colab.research.google.com/github/prasadbobby/pcos-training-model/blob/master/PCOS.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.__version__
from google.colab import files
uploaded = files.upload()
pcos = pd.read_csv('pcos-data.csv')
pcos.head(5)
pcos.drop('Patient File No.', inplace=True, axis=1)
pcos.drop('Sl. No', inplace=True, axis=1)
pcos.drop('Unnamed: 42', inplace=True, axis=1)
pcos.head(5)
pcos.info()
pcos.isnull().sum()
pcos =pcos.dropna()
pcos.isnull().sum()
pcos.info()
for column in pcos:
columnSeriesObj = pcos[column]
pcos[column] = pd.to_numeric(pcos[column], errors='coerce')
sns.pairplot(pcos.iloc[:,1:5])
def plot_hist(variable):
plt.figure(figsize = (9,3))
plt.hist(pcos[variable], bins = 50)
plt.xlabel(variable)
plt.ylabel("Frequency")
plt.title("{} distribution with hist".format(variable))
plt.show()
numericVar = [" Age (yrs)", "Weight (Kg)","Marraige Status (Yrs)"]
for n in numericVar:
plot_hist(n)
pcos=pcos.dropna()
pcos.corr()
corr_matrix= pcos.corr()
plt.subplots(figsize=(30,10))
sns.heatmap(corr_matrix, annot = True, fmt = ".2f");
plt.title("Correlation Between Features")
plt.show()
X = pcos.iloc[:,1:40].values
Y = pcos.iloc[:,0].values
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.3 , random_state = 0)
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)  # transform only: reuse the scaler fitted on the training set
def models(X_train, Y_train):
from sklearn.linear_model import LogisticRegression
log = LogisticRegression(random_state = 0)
log.fit(X_train, Y_train)
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators = 50, criterion = 'entropy', random_state = 0)
forest.fit(X_train, Y_train)
print('Logistic Regression Training Accuracy:', log.score(X_train, Y_train))
print('Random Forest Classifier:', forest.score(X_train, Y_train))
return log, forest
model = models(X_train, Y_train)
from sklearn.metrics import confusion_matrix
for i in range( len(model) ) :
cm = confusion_matrix(Y_test, model[i].predict(X_test))
TP = cm[1][1]
TN = cm[0][0]
FN = cm[1][0]
FP = cm[0][1]
print(cm)
print('Testing Accuracy = ', (TP + TN)/ (TP + TN + FP + FN))
```
| github_jupyter |
## Overview
#### The goal of this project is to find a model with a high accuracy rate for determining whether tissue is malignant or benign based on the other data it is given. This technology can be used in hospitals to help doctors more accurately determine whether someone could have a potentially life-threatening health concern. This is considered a supervised model because we know the two possible outcomes the model can predict: malignant or benign.
## Import Data
```
import pandas as pd
#Get data set and display it
cancer = pd.read_csv(r'C:\Users\Olivia\Desktop\DATA_601\data\cancer.csv')
cancer.head()
#Benign to Malignant tissue
cancer.loc[:, 'diagnosis'].value_counts()
#Dummy Accuracy Model
#Ratio of benign to malignant
357/(212 + 357)
#Set y equal to B or M
y = cancer.loc[:, 'diagnosis']
#Display y
y
#Set x to all columns except Diagnosis
x = cancer.loc[:, cancer.columns != 'diagnosis']
#Display x
x
from sklearn.preprocessing import LabelEncoder
#Instantiate
le = LabelEncoder()
#Fit encoder to y
le.fit(y)
#Transform
le.transform(y)
#B is 0, M is 1
le.classes_
```
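As the comment in the cell above notes, `LabelEncoder` assigns codes in sorted order, so 'B' maps to 0 and 'M' to 1 regardless of which label appears first. A self-contained check of that behavior (separate from the project data):

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
encoded = le.fit_transform(['M', 'B', 'B', 'M'])   # 'M' appears first on purpose
print(le.classes_.tolist(), encoded.tolist())      # ['B', 'M'] [1, 0, 0, 1]
# inverse_transform recovers the original string labels
print(le.inverse_transform(encoded).tolist())      # ['M', 'B', 'B', 'M']
```

This stability matters: the encoding stays the same no matter what order the diagnoses appear in the CSV.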
## Exploratory Data Analysis
```
import matplotlib.pyplot as plt
# Display the count, mean, standard deviation, min and max values, as well as the percentiles of the data
cancer.describe()
#Display distribution of B to M
N, bins, patches = plt.hist(y)
for i in range(0,5):
patches[i].set_facecolor('pink')
for i in range(5,10):
patches[i].set_facecolor('lavender')
plt.title('Distribution of Malignant to Benign Breast Tissue')
```
## Data Cleaning / Prep
```
# No missing values in data set
x.info()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 25, stratify = y )
import numpy as np
np.unique(y_train, return_counts = True)
285/(285 + 170)
np.unique(y_test, return_counts = True)
72/(72 + 42)
```
## Scale Data
```
from sklearn.preprocessing import StandardScaler
std_scaler = StandardScaler()
std_scaler.fit(x_train)
x_train_s = std_scaler.transform(x_train)
x_test_s = std_scaler.transform(x_test)
x_train_s
```
## Feature Engineering
```
from sklearn.preprocessing import PolynomialFeatures
poly_feats = PolynomialFeatures(degree= 2, interaction_only= True)
poly_feats.fit(x_train_s)
poly_trained_s = poly_feats.transform(x_train_s)
poly_trained_s.shape
poly_test_s = poly_feats.transform(x_test_s)
poly_test_s.shape
```
## Data Modeling
#### I will be using a logistic regression model to help determine the results.
```
#Import logistic regression
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(penalty = 'none')
#Fit model
lr.fit(x_train_s, y_train)
y_prediction = lr.predict(x_train_s)
#Setting Score
score = lr.score(x_train_s, y_train)
#Accuracy
score
```
## Validation
```
#Validation
from sklearn.model_selection import cross_validate
lr = LogisticRegression(penalty = 'none')
cross_five = cross_validate(estimator= lr,
X = x_train_s,
y = y_train,
cv = 5,
n_jobs= -1,
return_train_score= True,
return_estimator= True, verbose = 2)
cross_five['test_score']
validation_mean = cross_five['test_score'].mean()
validation_std = cross_five['test_score'].std()
#Print and store results
print('5 fold cross validation results (Accuracy) %.3f =/- %.3f'%(validation_mean, validation_std))
```
## Conclusion
#### A logistic regression model performs well and results in a high accuracy rate. This model could easily be implemented to help health care workers provide more detailed and accurate diagnoses. A limitation is the size of the data set; a larger sample would help train the model and better test its accuracy.
| github_jupyter |
```
import numpy as np
import pandas as pd
data = pd.read_csv('5_a.csv')
data
list(data.iloc[:,1])
y_predicted = [0 if i<0.5 else 1 for i in list(data['proba']) ]
0 in y_predicted
# confusion matrix
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
def custom_metrics(data):
y_actual = list(data.iloc[:,0])
y_predicted = list(data.iloc[:,1])
# lets binarize y_predicted with threshold 0.5.
# means >=0.5 is 1 otherwise 0
y_actual = list(map(int ,y_actual))
y_predicted = [0 if i<0.5 else 1 for i in y_predicted]
confusion_matrix = [
[0,0],
[0,0]
]
# print('actual y',y_actual)
# print('predicted y',y_predicted)
#let's calculate every part of confusion matrix
# TrueNegative(TN), FalseNegative(FN), TruePositive(TP),FalsePositive(FP)
tn,fn,tp,fp = 0,0,0,0
for i in range(len(y_actual)):
if y_actual[i]==0 and y_predicted[i] ==0:
tn +=1
elif y_actual[i]==0 and y_predicted[i] ==1:
fp +=1
elif y_actual[i]==1 and y_predicted[i] ==1:
tp +=1
elif y_actual[i]==1 and y_predicted[i]==0:
fn +=1
confusion_matrix[0][0] = tn
confusion_matrix[0][1] = fp
confusion_matrix[1][0] = fn
confusion_matrix[1][1] = tp
print(np.array(confusion_matrix))
##### we have computed confusion matrix ########
###############################################################################
#############################*************#####################################
######## let's compute f1-score ################
### first find precision and recall
###############################################
### precision = tp/(tp+fp) ####################
### precision = TruePositive/ total predicted positive)###
###############################################
### recall = tp/(tp+fn) ####################
### recall = TruePositive/ total actual positive)######
###############################################
precision = tp/(tp+fp)
recall = tp/(tp+fn)
f1_score = 2*((precision * recall)/(precision+recall))
print('f1-score',f1_score)
##############################**************###################################
###############################################################################
# let's find accuracy score before AUC Score
accuracy = (tp+tn)/(tp+tn+fp+fn)
print('accuracy',accuracy)
###############################################################################
##################### AUC SCORE ###############################################
## step 1: find n-unique probabilities from predicted y
# let's find n unique probabilities
y_predicted = np.array(data.iloc[:,1]) # changing to numpy.array
print('y_predicted',y_predicted)
threshold = np.unique(y_predicted)#finding unique values
threshold.sort()# sort the probabilities in ascending order
print('threshold',threshold)
#for every probabilities in threshold
fpr_values = []
tpr_values = []
for thresh in threshold:# make every probability a threshold
    # re-binarize from the ORIGINAL probabilities each time (don't overwrite them)
    y_thresh = [0 if p < thresh else 1 for p in y_predicted]
    # recompute the confusion-matrix cells at this threshold
    tp_t = sum(1 for a, p in zip(y_actual, y_thresh) if a == 1 and p == 1)
    fn_t = sum(1 for a, p in zip(y_actual, y_thresh) if a == 1 and p == 0)
    fp_t = sum(1 for a, p in zip(y_actual, y_thresh) if a == 0 and p == 1)
    tn_t = sum(1 for a, p in zip(y_actual, y_thresh) if a == 0 and p == 0)
    tpr = tp_t/(tp_t+fn_t)# true positive rate = recall
    fpr = fp_t/(fp_t+tn_t)
    tpr_values.append(tpr)
    fpr_values.append(fpr)
# change tpr_values and fpr_values to numpy array
tpr_array = np.array(tpr_values)
fpr_array = np.array(fpr_values)
print('fpr:',fpr_array)
print('tpr:',tpr_array)
auc_score = abs(np.trapz(tpr_array, fpr_array))# take |area|: fpr decreases as the threshold rises
plt.plot(fpr_array, tpr_array)# ROC convention: FPR on the x-axis, TPR on the y-axis
plt.show()
print('auc score:',auc_score)
custom_metrics(data)
x = np.array([1,2,3,4,4,5,4,4,4,5,5,5,5,4,4,43,3,4,3443,45,6])
np.unique(x)
1 == 1.0
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(np.array([1,2,3,4,5]),np.array([3,4,6,4,3]))
5/2
np.trapz(np.array([3.4,3.6,2.5]),np.array([5.3,6.6,7.6]))
plt.plot(np.array([0.400,0.401,0.402]),np.array([0.401,0.402,0.403]))
```
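The hand-rolled metrics above can be sanity-checked against scikit-learn on a tiny example (assuming the same 0.5 threshold for binarizing the probabilities):

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, roc_auc_score

y_actual = np.array([1, 1, 0, 0, 1, 0])
y_proba = np.array([0.9, 0.4, 0.2, 0.6, 0.8, 0.1])
y_pred = (y_proba >= 0.5).astype(int)

print(confusion_matrix(y_actual, y_pred))        # rows: actual 0/1, columns: predicted 0/1
print('f1:', f1_score(y_actual, y_pred))
print('accuracy:', accuracy_score(y_actual, y_pred))
print('auc:', roc_auc_score(y_actual, y_proba))  # AUC uses the raw probabilities, not y_pred
```

Agreement with these library values is a quick way to catch bugs like reusing one threshold's confusion matrix for every ROC point.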
| github_jupyter |
# Scraping Reddit Data

Using the PRAW library, a wrapper for the Reddit API, everyone can easily scrape data from Reddit or even create a Reddit bot.
```
import praw
```
Before PRAW can be used to scrape data, we need to authenticate ourselves. For this, we need to create a Reddit instance and provide it with a `client_id`, `client_secret` and a `user_agent`. To create a Reddit application and get your id and secret, navigate to [this page](https://www.reddit.com/prefs/apps).
```
reddit = praw.Reddit(client_id='my_client_id',
client_secret='my_client_secret',
user_agent='my_user_agent')
```
We can get information or posts from a specific subreddit by using the `reddit.subreddit` method and passing it a subreddit name.
```
# get 10 hot posts from the MachineLearning subreddit
hot_posts = reddit.subreddit('MachineLearning').hot(limit=10)
```
Now that we have scraped 10 posts, we can loop through them and print some information.
```
for post in hot_posts:
print(post.title)
# get hot posts from all subreddits
hot_posts = reddit.subreddit('all').hot(limit=10)
for post in hot_posts:
print(post.title)
# get MachineLearning subreddit data
ml_subreddit = reddit.subreddit('MachineLearning')
print(ml_subreddit.description)
```
Because we only have a limited number of requests per day, it is a good idea to save the scraped data in some kind of variable or file.
```
import pandas as pd
posts = []
ml_subreddit = reddit.subreddit('MachineLearning')
for post in ml_subreddit.hot(limit=10):
posts.append([post.title, post.score, post.id, post.subreddit, post.url, post.num_comments, post.selftext, post.created])
posts = pd.DataFrame(posts,columns=['title', 'score', 'id', 'subreddit', 'url', 'num_comments', 'body', 'created'])
posts
posts.to_csv('top_ml_subreddit_posts.csv')
```
PRAW also allows us to get information about a specific post/submission
```
submission = reddit.submission(url="https://www.reddit.com/r/MapPorn/comments/a3p0uq/an_image_of_gps_tracking_of_multiple_wolves_in/")
# or
submission = reddit.submission(id="a3p0uq") #id comes after comments/
for top_level_comment in submission.comments:
print(top_level_comment.body)
```
This will work for some submissions, but for others that have more comments this code will throw an `AttributeError` saying:
``AttributeError: 'MoreComments' object has no attribute 'body'``
These `MoreComments` objects represent the “load more comments” and “continue this thread” links encountered on the website, as described in more detail in the comment documentation.
To get rid of the `MoreComments` objects, we can check the datatype of each comment before printing the body.
```
from praw.models import MoreComments
for top_level_comment in submission.comments:
if isinstance(top_level_comment, MoreComments):
continue
print(top_level_comment.body)
```
The below cell is another way of getting rid of the MoreComments objects
```
submission.comments.replace_more(limit=0)
for top_level_comment in submission.comments:
print(top_level_comment.body)
```
The above code blocks only got the top-level comments. If we want to get the complete ``CommentForest``, we need to use the ``.list`` method.
```
submission.comments.replace_more(limit=None)
for comment in submission.comments.list():
print(comment.body)
```
| github_jupyter |
<a href="https://colab.research.google.com/github/Nutritiousfacts/DS-Unit-2-Regression-Classification/blob/master/module3/Gabe_flomo_assignment_regression_classification_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science, Unit 2: Predictive Modeling
# Regression & Classification, Module 3
## Assignment
We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices.
But not just for condos in Tribeca...
Instead, predict property sales prices for **One Family Dwellings** (`BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'`) using a subset of the data where the **sale price was more than \\$100 thousand and less than $2 million.**
The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.
- [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
- [ ] Do exploratory visualizations with Seaborn.
- [ ] Do one-hot encoding of categorical features.
- [ ] Do feature selection with `SelectKBest`.
- [ ] Fit a linear regression model with multiple features.
- [ ] Get mean absolute error for the test set.
- [ ] As always, commit your notebook to your fork of the GitHub repo.
## Stretch Goals
- [ ] Add your own stretch goal(s) !
- [ ] Do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html).
- [ ] Learn more about feature selection:
- ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
- [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
- [mlxtend](http://rasbt.github.io/mlxtend/) library
- scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
- [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you’re interested in more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way (without an excessive amount of formulas or academic pre-requisites).
(That book is good regardless of whether your cultural worldview is inferential statistics or predictive machine learning)
- [ ] Read Leo Breiman's paper, ["Statistical Modeling: The Two Cultures"](https://projecteuclid.org/download/pdf_1/euclid.ss/1009213726)
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html):
> Pipeline can be used to chain multiple estimators into one. This is useful as there is often a fixed sequence of steps in processing the data, for example feature selection, normalization and classification. Pipeline serves multiple purposes here:
> - **Convenience and encapsulation.** You only have to call fit and predict once on your data to fit a whole sequence of estimators.
> - **Joint parameter selection.** You can grid search over parameters of all estimators in the pipeline at once.
> - **Safety.** Pipelines help avoid leaking statistics from your test data into the trained model in cross-validation, by ensuring that the same samples are used to train the transformers and predictors.
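The quoted points can be made concrete with a small pipeline that chains scaling, feature selection, and regression. This is a sketch on synthetic data (the feature layout and coefficients are made up), not the assignment solution:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.RandomState(0)
X = rng.randn(100, 5)
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.randn(100) * 0.1  # only 2 informative features

pipe = Pipeline([
    ('scale', StandardScaler()),
    ('select', SelectKBest(score_func=f_regression, k=2)),
    ('model', LinearRegression()),
])
pipe.fit(X, y)           # one fit call runs the whole sequence
print(pipe.score(X, y))  # R^2; scaling and selection happen inside the pipeline
```

Fitting the pipeline inside cross-validation is what delivers the "safety" point: the scaler and selector only ever see the training fold.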
```
# If you're in Colab...
import os, sys
in_colab = 'google.colab' in sys.modules
if in_colab:
# Install required python packages:
# category_encoders, version >= 2.0
# pandas-profiling, version >= 2.0
# plotly, version >= 4.0
!pip install --upgrade category_encoders pandas-profiling plotly
# Pull files from Github repo
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Regression-Classification.git
!git pull origin master
# Change into directory for module
os.chdir('module3')
# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
import pandas as pd
import pandas_profiling
# Read New York City property sales data
df = pd.read_csv('../data/NYC_Citywide_Rolling_Calendar_Sales.csv')
# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]
# SALE_PRICE was read as strings.
# Remove symbols, convert to integer
df['SALE_PRICE'] = (
df['SALE_PRICE']
.str.replace('$', '', regex=False)
.str.replace('-', '', regex=False)
.str.replace(',', '', regex=False)
.astype(int)
)
df.tail()
df = df.query("SALE_PRICE > 100000 and SALE_PRICE < 2000000")
mask = df['BUILDING_CLASS_CATEGORY'].str.contains("01 ONE FAMILY DWELLINGS")
sub = df[mask]
sub.tail()
sub.columns
sub.columns.drop("EASE-MENT")
sub["SALE_DATE"] = pd.to_datetime(df['SALE_DATE'], infer_datetime_format=True)
sub["SALE_DATE"].describe()
import numpy as np
sub = sub[(sub['SALE_PRICE'] >= np.percentile(sub['SALE_PRICE'], 0.5)) &
(sub['SALE_PRICE'] <= np.percentile(sub['SALE_PRICE'], 99.5)) &
(sub['GROSS_SQUARE_FEET'] >= np.percentile(sub['GROSS_SQUARE_FEET'], 0.05)) &
(sub['GROSS_SQUARE_FEET'] < np.percentile(sub['GROSS_SQUARE_FEET'], 99.95))]
sub = sub.query("GROSS_SQUARE_FEET > 0")
sub["SALE_DATE"].dt.month.value_counts()
shape = 907 + 763 + 734
shape
train = sub[sub["SALE_DATE"].dt.month < 4]
test = sub[sub["SALE_DATE"].dt.month == 4]
print(train.shape,test.shape)
# checking to see if they were combined correctly
#assert train.shape[0] == shape
# visualize the data
import plotly.express as px
px.scatter(train, y = "SALE_PRICE", x = "GROSS_SQUARE_FEET",trendline = "ols")
train["Gross_feet_binned"] = (train["GROSS_SQUARE_FEET"] > 500) & (train["GROSS_SQUARE_FEET"] < 4000)
train.groupby("Gross_feet_binned").SALE_PRICE.describe()
# cluster the location
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters = 20,n_jobs = -1)
train["cluster"] = kmeans.fit_predict(train[["GROSS_SQUARE_FEET","SALE_PRICE"]])
test["cluster"] = kmeans.predict(test[["GROSS_SQUARE_FEET","SALE_PRICE"]])# assign test rows to the clusters learned on train
#train.columns.drop("Gross_feet_binned")
px.scatter(train, y = "SALE_PRICE", x = "GROSS_SQUARE_FEET",color = "cluster")
train.groupby("cluster").SALE_PRICE.describe()
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.catplot(x = "cluster",y = "SALE_PRICE",data = train, kind = "bar", color = "grey");
train.columns
# exploring other data compared to the price
for col in sorted(train.columns.drop(["EASE-MENT","SALE_PRICE"])):
if train[col].nunique() <= 20:
sns.catplot(x = col ,y = "SALE_PRICE",data = train, kind = "bar", color = "#843B62");
plt.show()
train["BOROUGH"] = train["BOROUGH"].astype(str)
test["BOROUGH"] = test["BOROUGH"].astype(str)
# cardinality for categorical data
train.describe(exclude = "number").T.sort_values(by = "unique")
target = "SALE_PRICE"
numerics = train.select_dtypes(include = "number").columns.drop(target).tolist()
categoricals = train.select_dtypes(exclude = "number").columns.tolist()
low_cardinalality = [col for col in categoricals if train[col].nunique() <= 50]
features = numerics + low_cardinalality
features.remove("Gross_feet_binned")
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
X_train.shape, y_train.shape, X_test.shape,y_test.shape
import category_encoders as ce
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
X_train_encoded.head()
X_train_encoded = X_train_encoded.drop(columns='EASE-MENT')
X_test_encoded = X_test_encoded.drop(columns='EASE-MENT')
from sklearn.feature_selection import f_regression, SelectKBest
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_test_scaled = scaler.transform(X_test_encoded)# transform only; the scaler is already fit on the training set
for k in range(1, len(X_train_encoded.columns)+1):
print(f"{k} features")
selector = SelectKBest(score_func = f_regression, k=k)
X_train_selected = selector.fit_transform(X_train_scaled, y_train)
X_test_selected = selector.transform(X_test_scaled)
model = LinearRegression()
model.fit(X_train_selected, y_train)
y_pred = model.predict(X_test_selected)
mae = mean_absolute_error(y_test,y_pred)
print(f"Test MAE: ${mae:,.0f}")
print()
```
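After a loop like the one above settles on a value of k, the chosen columns can be read back from the fitted selector via `get_support`. A sketch with toy data (the column names here are made up for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.RandomState(0)
X = pd.DataFrame(rng.randn(200, 4), columns=['sqft', 'year', 'units', 'noise'])
y = 5 * X['sqft'] + 2 * X['year'] + rng.randn(200) * 0.1

selector = SelectKBest(score_func=f_regression, k=2).fit(X, y)
selected = X.columns[selector.get_support()].tolist()  # boolean mask over columns
print(selected)  # ['sqft', 'year']
```

Reporting which features survived selection is usually more informative than the MAE curve alone.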
| github_jupyter |
# Image Classification
In this project, you'll classify images from the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
## Get the Data
Run the following cell to download the [CIFAR-10 dataset for python](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz).
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile('cifar-10-python.tar.gz'):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
'cifar-10-python.tar.gz',
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open('cifar-10-python.tar.gz') as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
```
## Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named `data_batch_1`, `data_batch_2`, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the `batch_id` and `sample_id`. The `batch_id` is the id for a batch (1-5). The `sample_id` is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
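For reference, each batch file is a Python pickle holding a dict with a `b'data'` array of shape (N, 3072) (each row a flattened 32×32 RGB image, channels first) and a `b'labels'` list. A minimal loader sketch, exercised here on a fake batch file with the same layout (the real files are what the download cell fetches):

```python
import pickle
import numpy as np

def load_batch(path):
    with open(path, 'rb') as f:
        batch = pickle.load(f, encoding='bytes')
    # (N, 3072) -> (N, 32, 32, 3): channels are first in the file, last for plotting
    features = batch[b'data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
    labels = batch[b'labels']
    return features, labels

# Demonstrate on a tiny fake batch with the same structure
fake = {b'data': np.zeros((2, 3072), dtype=np.uint8), b'labels': [3, 7]}
with open('fake_batch.p', 'wb') as f:
    pickle.dump(fake, f)
features, labels = load_batch('fake_batch.p')
print(features.shape, labels)  # (2, 32, 32, 3) [3, 7]
```

The `helper.display_stats` call below does this unpacking for you; the sketch just shows what is inside the files.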
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 3
sample_id = 15
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
```
## Implement Preprocess Functions
### Normalize
In the cell below, implement the `normalize` function to take in image data, `x`, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as `x`.
```
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# DONE: Implement Function
return np.array(x / 255)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
```
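The division by 255 above is a special case of min-max scaling. A slightly more general sketch, assuming the value range is known in advance (for 8-bit images, 0 to 255):

```python
import numpy as np

def min_max_normalize(x, data_min=0.0, data_max=255.0):
    """Scale values from [data_min, data_max] into [0, 1]."""
    x = np.asarray(x, dtype=np.float64)
    return (x - data_min) / (data_max - data_min)

normalized = min_max_normalize([[0, 128, 255]])
print(normalized)  # 0 -> 0.0, 128 -> ~0.502, 255 -> 1.0
```

When the minimum is 0, this reduces to the `x / 255` used in the project function.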
### One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the `one_hot_encode` function. The input, `x`, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to `one_hot_encode`. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
```
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# DONE: Implement Function
return np.eye(10)[x]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
```
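One reading of the "don't reinvent the wheel" hint is to use a library encoder: scikit-learn's `LabelBinarizer` gives the same stable mapping as the `np.eye` trick above (a sketch, not necessarily the graded solution):

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
lb.fit(list(range(10)))               # fix the mapping once, outside any function
one_hot = lb.transform([1, 0, 9])
print(one_hot.shape)                  # (3, 10)
print(np.array_equal(one_hot, np.eye(10, dtype=int)[[1, 0, 9]]))  # True
```

Fitting once up front is what satisfies the "same encoding between calls" requirement.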
### Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
## Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
```
## Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
>**Note:** If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction of layers, so it's easy to pick up.
>However, if you would like to get the most out of this course, try to solve all the problems _without_ using anything from the TF Layers packages. You **can** still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the `conv2d` class, [tf.layers.conv2d](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d), you would want to use the TF Neural Network version of `conv2d`, [tf.nn.conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d).
Let's begin!
### Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:
* Implement `neural_net_image_input`
* Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder)
* Set the shape using `image_shape` with batch size set to `None`.
* Name the TensorFlow placeholder "x" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).
* Implement `neural_net_label_input`
* Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder)
* Set the shape using `n_classes` with batch size set to `None`.
* Name the TensorFlow placeholder "y" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).
* Implement `neural_net_keep_prob_input`
* Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).
These names will be used at the end of the project to load your saved model.
Note: `None` for shapes in TensorFlow allows for a dynamic size.
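Before a batch of integer labels can be fed into the `y` placeholder, it must be one-hot encoded (the provided helper does this with NumPy during preprocessing). A minimal pure-Python sketch of what that encoding produces:

```python
def one_hot(labels, n_classes):
    """Convert integer class labels to one-hot encoded rows."""
    encoded = []
    for label in labels:
        row = [0.0] * n_classes
        row[label] = 1.0
        encoded.append(row)
    return encoded

# Three CIFAR-10 labels, 10 classes each
print(one_hot([0, 2, 9], 10))
```

Each row has exactly one `1.0`, at the index of the true class, which is what `tf.nn.softmax_cross_entropy_with_logits` expects for `labels`.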
```
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# DONE: Implement Function
return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name='x')
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# DONE: Implement Function
return tf.placeholder(tf.float32, shape=(None, n_classes), name='y')
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# DONE: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
```
### Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function `conv2d_maxpool` to apply convolution then max pooling:
* Create the weight and bias using `conv_ksize`, `conv_num_outputs` and the shape of `x_tensor`.
* Apply a convolution to `x_tensor` using weight and `conv_strides`.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using `pool_ksize` and `pool_strides`.
* We recommend you use same padding, but you're welcome to use any padding.
**Note:** You **can't** use [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) for **this** layer, but you can still use TensorFlow's [Neural Network](https://www.tensorflow.org/api_docs/python/tf/nn) package. You may still use the shortcut option for all the **other** layers.
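With `'SAME'` padding, the spatial output size of a convolution or pooling op depends only on the input size and stride: `out = ceil(in / stride)`. A quick pure-Python sanity check of the shapes this layer will produce (illustrative only):

```python
import math

def same_padding_out(size, stride):
    # 'SAME' padding: output size = ceil(input size / stride)
    return math.ceil(size / stride)

# A 32x32 CIFAR-10 input, conv stride 1, then a 2x2 max pool with stride 2:
after_conv = same_padding_out(32, 1)          # convolution preserves size
after_pool = same_padding_out(after_conv, 2)  # pooling halves it
print(after_conv, after_pool)
```

So each conv/pool block with these strides halves the spatial dimensions while the depth becomes `conv_num_outputs`.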
```
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# DONE: Implement Function
# Filter shape: [filter_height, filter_width, in_channels, out_channels]
cl_num_inputs = x_tensor.get_shape().as_list()[3]
cl_wt = [conv_ksize[0], conv_ksize[1], cl_num_inputs, conv_num_outputs]
cl_weight = tf.Variable(tf.truncated_normal(cl_wt, stddev=0.1))
cl_bias = tf.Variable(tf.zeros(conv_num_outputs))
strides = [1, conv_strides[0], conv_strides[1], 1]
conv = tf.nn.conv2d(x_tensor, cl_weight, strides=strides, padding='SAME')
conv = tf.nn.bias_add(conv, cl_bias)
# Nonlinear activation on the convolution, before pooling
conv = tf.nn.relu(conv)
pool_ksize = [1, pool_ksize[0], pool_ksize[1], 1]
pool_strides = [1, pool_strides[0], pool_strides[1], 1]
max_pooling_cl = tf.nn.max_pool(conv, ksize=pool_ksize, strides=pool_strides, padding='SAME')
return max_pooling_cl
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
```
### Flatten Layer
Implement the `flatten` function to change the dimension of `x_tensor` from a 4-D tensor to a 2-D tensor. The output should be the shape (*Batch Size*, *Flattened Image Size*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
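The flattened size is just the product of the non-batch dimensions, e.g. a `(batch, 4, 4, 128)` tensor flattens to `(batch, 2048)`. A pure-Python sketch of the same reshaping, using nested lists to stand in for a tensor (TensorFlow does this with `tf.reshape` or `tf.contrib.layers.flatten`):

```python
def flatten_batch(batch):
    """Flatten each example (a nested list) to a flat list, keeping the batch dim."""
    def flat(x):
        if isinstance(x, list):
            out = []
            for item in x:
                out.extend(flat(item))
            return out
        return [x]
    return [flat(example) for example in batch]

# A batch of one example with shape (2, 2, 2) -> flattened size 8
batch = [[[[1, 2], [3, 4]], [[5, 6], [7, 8]]]]
print(flatten_batch(batch))
```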
```
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# DONE: Implement Function
tn_flat = tf.contrib.layers.flatten(x_tensor)
return tn_flat
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
```
### Fully-Connected Layer
Implement the `fully_conn` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
```
def fc_way(x_tensor, num_outputs):
width = x_tensor.get_shape().as_list()[1]
weight = tf.Variable(tf.truncated_normal(([width, num_outputs]), stddev=0.1))
bias = tf.Variable(tf.zeros(num_outputs))
output = tf.add(tf.matmul(x_tensor, weight), bias)
ret = tf.nn.relu(output)
return ret
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# DONE: Implement Function
fc = fc_way(x_tensor, num_outputs)
return fc
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
```
### Output Layer
Implement the `output` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
**Note:** Activation, softmax, or cross entropy should **not** be applied to this.
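The output layer returns raw logits because `tf.nn.softmax_cross_entropy_with_logits`, used later in the loss, applies the softmax itself (in a numerically stable fused form). For intuition, this is the computation it performs, sketched in pure Python:

```python
import math

def softmax(logits):
    """Softmax with the usual max-shift for numerical stability."""
    shift = max(logits)
    exps = [math.exp(l - shift) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, one_hot_label):
    """Cross entropy between softmax(logits) and a one-hot label."""
    probs = softmax(logits)
    return -sum(y * math.log(p) for y, p in zip(one_hot_label, probs))

loss = cross_entropy([2.0, 1.0, 0.1], [1.0, 0.0, 0.0])
print(round(loss, 4))
```

Applying softmax inside `output` and then again in the loss would both double-apply the nonlinearity and lose the stability of the fused op.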
```
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of outputs the new tensor should have.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# DONE: Implement Function
wid = x_tensor.get_shape().as_list()[-1]
wghts = tf.Variable(tf.random_normal([wid, num_outputs], stddev=0.1))
bias = tf.Variable(tf.zeros([num_outputs]))
# Linear layer only: no activation, softmax, or cross entropy here
logits = tf.nn.bias_add(tf.matmul(x_tensor, wghts), bias)
return logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
```
### Create Convolutional Model
Implement the function `conv_net` to create a convolutional neural network model. The function takes in a batch of images, `x`, and outputs logits. Use the layers you created above to create this model:
* Apply 1, 2, or 3 Convolution and Max Pool layers
* Apply a Flatten Layer
* Apply 1, 2, or 3 Fully Connected Layers
* Apply an Output Layer
* Return the output
* Apply [TensorFlow's Dropout](https://www.tensorflow.org/api_docs/python/tf/nn/dropout) to one or more layers in the model using `keep_prob`.
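For the architecture used below (three `'SAME'`-padded conv layers with stride 1 and 2x2/stride-2 max pooling, on a 32x32x3 input), it is worth tracing the shapes by hand: each pooling step halves the spatial size while the depth follows `conv_num_outputs`. A sketch of that arithmetic, assuming the layer sizes chosen in this notebook:

```python
# Spatial size halves at each 2x2/stride-2 'SAME' max pool (conv stride is 1).
size, depth = 32, 3
for conv_num_outputs in (32, 64, 128):
    size //= 2
    depth = conv_num_outputs
    print('conv/pool ->', (size, size, depth))
flattened = size * size * depth
print('flattened ->', flattened)
```

So the flatten layer hands the fully connected layers a vector of length 4 * 4 * 128 = 2048 per image.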
```
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# DONE: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_ksize = [3, 3]
conv_strides = [1, 1]
pool_ksize = [2, 2]
pool_strides = [2, 2]
conv_num_outputs = 32
conv_l = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 64
conv_l = conv2d_maxpool(conv_l, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 128
conv_l = conv2d_maxpool(conv_l, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# DONE: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
flatten_l = flatten(conv_l)
# DONE: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
fully_connected_l = fully_conn(flatten_l, 100)
fully_dropout_l = tf.nn.dropout(fully_connected_l, keep_prob)
fully_connected_l2 = fully_conn(fully_dropout_l, 100)
# DONE: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_layer = output(fully_connected_l2, 10)
# DONE: return output
return output_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
```
## Train the Neural Network
### Single Optimization
Implement the function `train_neural_network` to do a single optimization. The optimization should use `optimizer` to optimize in `session` with a `feed_dict` of the following:
* `x` for image input
* `y` for labels
* `keep_prob` for keep probability for dropout
This function will be called for each batch, so `tf.global_variables_initializer()` has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
```
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# DONE: Implement Function
session.run(optimizer, feed_dict={x: feature_batch, y:label_batch, keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
```
### Show Stats
Implement the function `print_stats` to print loss and validation accuracy. Use the global variables `valid_features` and `valid_labels` to calculate validation accuracy. Use a keep probability of `1.0` to calculate the loss and validation accuracy.
```
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
# DONE: Implement Function
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print(' Loss : {} '.format(loss))
print(' Validation Accuracy : {} '.format(valid_acc))
```
### Hyperparameters
Tune the following parameters:
* Set `epochs` to the number of iterations until the network stops learning or starts overfitting
* Set `batch_size` to the highest number that your machine has memory for. Common choices are powers of two:
* 64
* 128
* 256
* ...
* Set `keep_probability` to the probability of keeping a node using dropout
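A detail worth knowing when choosing `keep_probability`: `tf.nn.dropout` uses "inverted dropout", scaling the surviving activations by `1 / keep_prob` during training so their expected value matches test time (when `keep_prob` is 1.0 and nothing is dropped). A pure-Python sketch of that behaviour, with hypothetical names:

```python
import random

def inverted_dropout(activations, keep_prob, rng):
    """Zero each unit with prob 1 - keep_prob; scale survivors by 1/keep_prob."""
    return [a / keep_prob if rng.random() < keep_prob else 0.0
            for a in activations]

rng = random.Random(0)
acts = [1.0] * 100000
dropped = inverted_dropout(acts, 0.9, rng)
# The mean activation stays close to 1.0 regardless of keep_prob
print(round(sum(dropped) / len(dropped), 2))
```

This is why no rescaling is needed when you later feed `keep_prob: 1.0` for evaluation.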
```
# TODO: Tune Parameters
epochs = 30
batch_size = 256
keep_probability = 0.9
```
### Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
```
### Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
```
# Checkpoint
The model has been saved to disk.
## Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
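One caveat about the test loop below: it averages per-batch accuracies, which is exact only when every batch has the same size (the last batch may be smaller). A pure-Python sketch of the sample-weighted alternative, for comparison:

```python
def weighted_accuracy(batch_accuracies, batch_sizes):
    """Average per-batch accuracies weighted by the number of samples per batch."""
    total = sum(acc * n for acc, n in zip(batch_accuracies, batch_sizes))
    return total / sum(batch_sizes)

# Two full batches and one small remainder batch:
print(weighted_accuracy([0.5, 0.7, 1.0], [64, 64, 8]))
```

With equal batch sizes both versions agree, so for a 10,000-image test set and `batch_size` 64 the difference is small.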
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
```
## Why 50-70% Accuracy?
You might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores [well above 70%](http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#43494641522d3130). That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.
## Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_image_classification.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
```
# Copyright 2021 Google LLC
# Use of this source code is governed by an MIT-style
# license that can be found in the LICENSE file or at
# https://opensource.org/licenses/MIT.
# Author(s): Kevin P. Murphy (murphyk@gmail.com) and Mahmoud Soliman (mjs@aucegypt.edu)
```
<a href="https://opensource.org/licenses/MIT" target="_parent"><img src="https://img.shields.io/github/license/probml/pyprobml"/></a>
<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/figures//chapter20_figures.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
## Figure 20.1:<a name='20.1'></a> <a name='pcaDemo2d'></a>
An illustration of PCA where we project from 2d to 1d. Circles are the original data points, crosses are the reconstructions. The red star is the data mean.
Figure(s) generated by [pcaDemo2d.py](https://github.com/probml/pyprobml/blob/master/scripts/pcaDemo2d.py)
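The projection and reconstruction shown in this figure can be sketched in a few lines: center the data, take the leading eigenvector of the 2x2 covariance matrix (available in closed form), project each point onto it, and map back. A pure-Python illustration (not the plotting script above; it assumes the two features are correlated, i.e. the off-diagonal covariance is nonzero, so the simple eigenvector formula applies):

```python
import math

def pca_1d(points):
    """Project 2-D points onto their first principal component and reconstruct."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 covariance matrix [[a, b], [b, c]]
    a = sum(x * x for x, _ in centered) / n
    b = sum(x * y for x, y in centered) / n
    c = sum(y * y for _, y in centered) / n
    # Leading eigenvalue and eigenvector in closed form (assumes b != 0)
    lam = (a + c) / 2 + math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    vx, vy = b, lam - a
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    # Project each centered point onto the eigenvector, then reconstruct
    recon = []
    for x, y in centered:
        t = x * vx + y * vy
        recon.append((mx + t * vx, my + t * vy))
    return recon

pts = [(0.0, 0.0), (1.0, 1.1), (2.0, 1.9), (3.0, 3.0)]
print(pca_1d(pts))
```

The reconstructions (the crosses in the figure) all lie on the line through the mean along the first principal direction.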
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/pcaDemo2d.py")
```
## Figure 20.2:<a name='20.2'></a> <a name='pcaDigits'></a>
An illustration of PCA applied to MNIST digits from class 9. Grid points are at the 5, 25, 50, 75, 95% quantiles of the data distribution along each dimension. The circled points are the closest projected images to the vertices of the grid. Adapted from Figure 14.23 of <a href='#HastieBook'>[HTF09]</a> .
Figure(s) generated by [pca_digits.py](https://github.com/probml/pyprobml/blob/master/scripts/pca_digits.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/pca_digits.py")
```
## Figure 20.3:<a name='20.3'></a> <a name='eigenFace'></a>
(a) Some randomly chosen $64 \times 64$ pixel images from the Olivetti face database. (b) The mean and the first three PCA components represented as images.
Figure(s) generated by [pcaImageDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaImageDemo.m)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaImages-faces-images.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaImages-faces-basis.png")
```
## Figure 20.4:<a name='20.4'></a> <a name='pcaProjVar'></a>
Illustration of the variance of the points projected onto different 1d vectors. $v_1$ is the first principal component, which maximizes the variance of the projection. $v_2$ is the second principal component, the direction orthogonal to $v_1$. Finally $v'$ is some other vector in between $v_1$ and $v_2$. Adapted from Figure 8.7 of <a href='#Geron2019'>[Aur19]</a> .
Figure(s) generated by [pca_projected_variance.py](https://github.com/probml/pyprobml/blob/master/scripts/pca_projected_variance.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/pca_projected_variance.py")
```
## Figure 20.5:<a name='20.5'></a> <a name='heightWeightPCA'></a>
Effect of standardization on PCA applied to the height/weight dataset. (Red=female, blue=male.) Left: PCA of raw data. Right: PCA of standardized data.
Figure(s) generated by [pcaStandardization.py](https://github.com/probml/pyprobml/blob/master/scripts/pcaStandardization.py)
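Standardization matters here because PCA chases variance: a feature measured in large units (e.g. weight in pounds vs. height in inches) dominates the covariance matrix and thus the principal directions unless each feature is first scaled to zero mean and unit variance. A pure-Python sketch of that per-feature scaling (illustrative values, not the dataset used above):

```python
import math

def standardize(values):
    """Scale one feature column to zero mean and unit variance."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = math.sqrt(var)
    return [(v - mean) / std for v in values]

weights_lb = [120.0, 150.0, 180.0, 210.0]
print(standardize(weights_lb))
```

After this transform every feature contributes comparably to the covariance matrix, which is why the right-hand PCA plot looks so different from the left.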
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/pcaStandardization.py")
```
## Figure 20.6:<a name='20.6'></a> <a name='pcaErr'></a>
Reconstruction error on MNIST vs number of latent dimensions used by PCA. (a) Training set. (b) Test set.
Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaOverfitReconTrain.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaOverfitReconTest.png")
```
## Figure 20.7:<a name='20.7'></a> <a name='pcaFrac'></a>
(a) Scree plot for training set, corresponding to Figure 20.6(a). (b) Fraction of variance explained.
Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
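Both panels come directly from the eigenvalue spectrum of the covariance matrix: the scree plot shows the sorted eigenvalues $\lambda_l$, and the fraction of variance explained by the first $L$ components is $\sum_{l=1}^L \lambda_l / \sum_j \lambda_j$. A small illustrative computation (made-up eigenvalues):

```python
def fraction_variance_explained(eigenvalues):
    """Cumulative fraction of variance explained by the first L components."""
    total = sum(eigenvalues)
    fractions, running = [], 0.0
    for lam in sorted(eigenvalues, reverse=True):
        running += lam
        fractions.append(running / total)
    return fractions

# Hypothetical eigenvalues of a covariance matrix
print(fraction_variance_explained([4.0, 2.0, 1.0, 1.0]))
```

A common heuristic is to keep enough components to explain, say, 90-95% of the variance, or to look for an "elbow" in the scree plot.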
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaOverfitScree.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaOverfitVar.png")
```
## Figure 20.8:<a name='20.8'></a> <a name='pcaProfile'></a>
Profile likelihood corresponding to the PCA model in Figure 20.6(a).
Figure(s) generated by [pcaOverfitDemo.m](https://github.com/probml/pmtk3/blob/master/demos/pcaOverfitDemo.m)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaOverfitProfile.png")
```
## Figure 20.9:<a name='20.9'></a> <a name='sprayCan'></a>
Illustration of the FA generative process, where we have $L=1$ latent dimension generating $D=2$ observed dimensions; we assume $\boldsymbol \Psi =\sigma ^2 \mathbf I $. The latent factor has value $z \in \mathbb R $, sampled from $p(z)$; this gets mapped to a 2d offset $\boldsymbol \delta = z \mathbf w $, where $\mathbf w \in \mathbb R ^2$, which gets added to $\boldsymbol \mu $ to define a Gaussian $p(\mathbf x |z) = \mathcal N (\mathbf x |\boldsymbol \mu + \boldsymbol \delta ,\sigma ^2 \mathbf I )$. By integrating over $z$, we "slide" this circular Gaussian "spray can" along the principal component axis $\mathbf w $, which induces elliptical Gaussian contours in $\mathbf x $ space centered on $\boldsymbol \mu $. Adapted from Figure 12.9 of <a href='#BishopBook'>[Bis06]</a> .
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/PPCAsprayCan.png")
```
## Figure 20.10:<a name='20.10'></a> <a name='pcaSpring'></a>
Illustration of EM for PCA when $D=2$ and $L=1$. Green stars are the original data points, black circles are their reconstructions. The weight vector $\mathbf w $ is represented by blue line. (a) We start with a random initial guess of $\mathbf w $. The E step is represented by the orthogonal projections. (b) We update the rod $\mathbf w $ in the M step, keeping the projections onto the rod (black circles) fixed. (c) Another E step. The black circles can 'slide' along the rod, but the rod stays fixed. (d) Another M step. Adapted from Figure 12.12 of <a href='#BishopBook'>[Bis06]</a> .
Figure(s) generated by [pcaEmStepByStep.m](https://github.com/probml/pmtk3/blob/master/demos/pcaEmStepByStep.m)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaEmStepByStepEstep1.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaEmStepByStepMstep1.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaEmStepByStepEstep2.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/pcaEmStepByStepMstep2.png")
```
## Figure 20.11:<a name='20.11'></a> <a name='mixFAdgm'></a>
Mixture of factor analyzers as a PGM.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/mixFAdgmC.png")
```
## Figure 20.12:<a name='20.12'></a> <a name='ppcaMixNetlab'></a>
Mixture of PPCA models fit to a 2d dataset, using $L=1$ latent dimensions and $K=1$ and $K=10$ mixture components.
Figure(s) generated by [mixPpcaDemoNetlab.m](https://github.com/probml/pmtk3/blob/master/demos/mixPpcaDemoNetlab.m)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/mixPpcaAnnulus1.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/mixPpcaAnnulus10.png")
```
## Figure 20.13:<a name='20.13'></a> <a name='MFAGANsamples'></a>
Random samples from the MixFA model fit to CelebA. From Figure 4 of <a href='#Richardson2018'>[EY18]</a> . Used with kind permission of Yair Weiss.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/MFAGAN-samples.png")
```
## Figure 20.14:<a name='20.14'></a> <a name='binaryPCA'></a>
(a) 150 synthetic 16 dimensional bit vectors. (b) The 2d embedding learned by binary PCA, fit using variational EM. We have color coded points by the identity of the true "prototype" that generated them. (c) Predicted probability of being on. (d) Thresholded predictions.
Figure(s) generated by [binaryFaDemoTipping.m](https://github.com/probml/pmtk3/blob/master/demos/binaryFaDemoTipping.m)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/binaryPCAinput.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/binaryPCAembedding.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/binaryPCApostpred.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/binaryPCArecon.png")
```
## Figure 20.15:<a name='20.15'></a> <a name='PLS'></a>
Gaussian latent factor models for paired data. (a) Supervised PCA. (b) Partial least squares.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/eSPCAxy.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ePLSxy.png")
```
## Figure 20.16:<a name='20.16'></a> <a name='CCA'></a>
Canonical correlation analysis as a PGM.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/eCCAxy.png")
```
## Figure 20.17:<a name='20.17'></a> <a name='autoencoder'></a>
An autoencoder with one hidden layer.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/autoencoder.png")
```
## Figure 20.18:<a name='20.18'></a> <a name='aeFashion'></a>
Results of applying an autoencoder to the Fashion MNIST data. Top row: the first 5 images from the validation set. Bottom row: reconstructions. (a) MLP model (trained for 20 epochs). The encoder is an MLP with architecture 784-100-30. The decoder is the mirror image of this. (b) CNN model (trained for 5 epochs). The encoder is a CNN model with architecture Conv2D(16, 3x3, same, selu), MaxPool2D(2x2), Conv2D(32, 3x3, same, selu), MaxPool2D(2x2), Conv2D(64, 3x3, same, selu), MaxPool2D(2x2). The decoder is the mirror image of this, using transposed convolution and without the max pooling layers. Adapted from Figure 17.4 of <a href='#Geron2019'>[Aur19]</a>.
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/dimred/ae_mnist_tf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/ae_fashion_mlp_recon.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae_fashion_cnn_recon.png")
```
## Figure 20.19:<a name='20.19'></a> <a name='aeFashionTSNE'></a>
tSNE plot of the first 2 latent dimensions of the Fashion MNIST validation set computed using an MLP-based autoencoder. Adapted from Figure 17.5 of <a href='#Geron2019'>[Aur19]</a> .
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/dimred/ae_mnist_tf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-mlp-fashion-tsne.png")
```
## Figure 20.20:<a name='20.20'></a> <a name='DAEfashion'></a>
Denoising autoencoder (MLP architecture) applied to some noisy Fashion MNIST images from the validation set. (a) Gaussian noise. (b) Bernoulli dropout noise. Top row: input. Bottom row: output. Adapted from Figure 17.9 of <a href='#Geron2019'>[Aur19]</a>.
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/dimred/ae_mnist_tf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-denoising-gaussian.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-denoising-dropout.png")
```
## Figure 20.21:<a name='20.21'></a> <a name='DAEfield'></a>
The residual error from a DAE, $\mathbf{e}(\mathbf{x}) = r(\tilde{\mathbf{x}}) - \mathbf{x}$, can learn a vector field corresponding to the score function. Arrows point towards higher probability regions. The length of the arrow is proportional to $||\mathbf{e}(\mathbf{x})||$, so points near the 1d data manifold (represented by the curved line) have smaller arrows. From Figure 5 of <a href='#Alain2014'>[GY14]</a>. Used with kind permission of Guillaume Alain.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/DAE.png")
```
## Figure 20.22:<a name='20.22'></a> <a name='sparseAE'></a>
Neuron activity (in the bottleneck layer) for an autoencoder applied to Fashion MNIST. We show results for three models, with different kinds of sparsity penalty: no penalty (left column), $\ell _1$ penalty (middle column), KL penalty (right column). Top row: Heatmap of 300 neuron activations (columns) across 100 examples (rows). Middle row: Histogram of activation levels derived from this heatmap. Bottom row: Histogram of the mean activation per neuron, averaged over all examples in the validation set. Adapted from Figure 17.11 of <a href='#Geron2019'>[Aur19]</a> .
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/dimred/ae_mnist_tf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
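The two sparsity penalties compared in this figure can be sketched numerically. This is a minimal illustration, not the code used for the figure: the random activations and the target sparsity level are made-up values, assuming sigmoid-like activations in [0, 1].
```
import numpy as np

rng = np.random.default_rng(0)
acts = rng.uniform(0, 1, size=(100, 300))  # hypothetical activations: 100 examples x 300 neurons

# L1 penalty: mean absolute activation over the heatmap
l1_penalty = np.abs(acts).mean()

# KL penalty: KL(rho || rho_hat) summed over neurons, where rho_hat is
# each neuron's mean activation and rho is the target sparsity level
rho, eps = 0.1, 1e-10
rho_hat = acts.mean(axis=0)
kl_penalty = (rho * np.log(rho / (rho_hat + eps))
              + (1 - rho) * np.log((1 - rho) / (1 - rho_hat + eps))).sum()
print(l1_penalty, kl_penalty)
```
Adding either penalty to the reconstruction loss pushes mean activations down (L1) or towards the target rate rho (KL), producing the sparser heatmaps in the middle and right columns.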
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-noreg-heatmap.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-L1reg-heatmap.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-KLreg-heatmap.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-noreg-act.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-L1reg-act.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-KLreg-act.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-noreg-neurons.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-L1reg-neurons.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-sparse-KLreg-neurons.png")
```
## Figure 20.23:<a name='20.23'></a> <a name='vaeSchematic'></a>
Schematic illustration of a VAE. From a figure from http://krasserm.github.io/2018/07/27/dfc-vae/ . Used with kind permission of Martin Krasser.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/vae-krasser.png")
```
## Figure 20.24:<a name='20.24'></a> <a name='VAEcelebaRecon'></a>
Comparison of reconstruction abilities of an autoencoder and VAE. Top row: Original images. Middle row: Reconstructions from a VAE. Bottom row: Reconstructions from an AE. We see that the VAE reconstructions (middle) are blurrier. Both models have the same shallow convolutional architecture (3 hidden layers, 200 latents), and are trained on identical data (20k images of size $64 \times 64$ extracted from CelebA) for the same number of epochs (20).
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/dimred/vae_celeba_tf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-celeba-orig.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/vae-celeba-recon.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-celeba-recon.png")
```
## Figure 20.25:<a name='20.25'></a> <a name='VAEcelebaSamples'></a>
Unconditional samples from a VAE (top row) or AE (bottom row) trained on CelebA. Both models have the same structure and both are trained for 20 epochs.
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/dimred/vae_celeba_tf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/vae-celeba-samples.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/ae-celeba-samples.png")
```
## Figure 20.26:<a name='20.26'></a> <a name='VAEcelebaInterpGender'></a>
Interpolation between two real images (first and last columns) in the latent space of a VAE. Adapted from Figure 3.22 of <a href='#Foster2019'>[Dav19]</a> .
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/dimred/vae_celeba_tf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/vae-celeba-interp-gender.png")
```
## Figure 20.27:<a name='20.27'></a> <a name='VAEcelebaAddGlasses'></a>
Adding or removing the "sunglasses" vector to an image using a VAE. The first column is an input image, with embedding $\mathbf{z}$. Subsequent columns show the decoding of $\mathbf{z} + s \boldsymbol{\Delta}$, where $s \in \{-4,-3,-2,-1,0,1,2,3,4\}$ and $\boldsymbol{\Delta} = \overline{\mathbf{z}}^+ - \overline{\mathbf{z}}^-$ is the difference in the average embeddings of images of people with or without sunglasses. Adapted from Figure 3.21 of <a href='#Foster2019'>[Dav19]</a>.
To reproduce this figure, click the open in colab button: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/dimred/vae_celeba_tf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/vae-celeba-glasses-scale.png")
```
## Figure 20.28:<a name='20.28'></a> <a name='tangentSpace'></a>
Illustration of the tangent space and tangent vectors at two different points on a 2d curved manifold. From Figure 1 of <a href='#Bronstein2017'>[MM+17]</a>. Used with kind permission of Michael Bronstein.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/tangentSpace.png")
```
## Figure 20.29:<a name='20.29'></a> <a name='manifold-6-rotated'></a>
Illustration of the image manifold. (a) An image of the digit 6 from the USPS dataset, of size $64 \times 57 = 3,648$. (b) A random sample from the space $\{0,1\}^{3648}$ reshaped as an image. (c) A dataset created by rotating the original image by one degree 360 times. We project this data onto its first two principal components, to reveal the underlying 2d circular manifold. From Figure 1 of <a href='#Lawrence2012'>[Nei12]</a>. Used with kind permission of Neil Lawrence.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/manifold-6-original.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/manifold-6-rnd.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/manifold-6-rotated.png")
```
## Figure 20.30:<a name='20.30'></a> <a name='manifoldData'></a>
Illustration of some data generated from low-dimensional manifolds. (a) The 2d Swiss-roll manifold embedded into 3d.
Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
```
## Figure 20.31:<a name='20.31'></a> <a name='metricMDS'></a>
Metric MDS applied to (a) Swiss roll.
Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
```
## Figure 20.32:<a name='20.32'></a> <a name='KNNgraph'></a>
(a) If we measure distances along the manifold, we find $d(1,6) > d(1,4)$, whereas if we measure in ambient space, we find $d(1,6) < d(1,4)$. The plot at the bottom shows the underlying 1d manifold. (b) The $K$-nearest neighbors graph for some datapoints; the red path is the shortest distance between A and B on this graph. From <a href='#HintonEmbedding'>[Hin13]</a> . Used with kind permission of Geoff Hinton.
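The idea behind panel (b), approximating manifold (geodesic) distance by shortest paths on a $K$-nearest-neighbors graph as Isomap does, can be sketched directly. The semicircle below is a made-up stand-in for the 1d manifold in the figure:
```
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

# points along a semicircle: a 1d curved manifold embedded in 2d
t = np.linspace(0, np.pi, 50)
X = np.column_stack([np.cos(t), np.sin(t)])

# geodesic distance approximated by shortest paths on the kNN graph
G = kneighbors_graph(X, n_neighbors=2, mode='distance')
D_geo = shortest_path(G, directed=False)

# ambient (straight-line) distance between the two endpoints
d_ambient = np.linalg.norm(X[0] - X[-1])
print(D_geo[0, -1], d_ambient)
```
The endpoints of the semicircle are a straight-line distance 2 apart, but the graph shortest path between them is close to the arc length $\pi$, so the graph distance reproduces the manifold ordering that the ambient distance gets wrong.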
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/hinton-isomap1.png")
pmlt.show_image("/pyprobml/notebooks/figures/images/hinton-isomap2.png")
```
## Figure 20.33:<a name='20.33'></a> <a name='isomap'></a>
Isomap applied to (a) Swiss roll.
Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
```
## Figure 20.34:<a name='20.34'></a> <a name='isomapNoisy'></a>
(a) Noisy version of Swiss roll data. We perturb each point by adding $\mathcal N (0, 0.5^2)$ noise. (b) Results of Isomap applied to this data.
Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
```
## Figure 20.35:<a name='20.35'></a> <a name='kpcaScholkopf'></a>
Visualization of the first 8 kernel principal component basis functions derived from some 2d data. We use an RBF kernel with $\sigma ^2=0.1$.
Figure(s) generated by [kpcaScholkopf.m](https://github.com/probml/pmtk3/blob/master/demos/kpcaScholkopf.m)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/kpcaScholkopfNoShade.png")
```
## Figure 20.36:<a name='20.36'></a> <a name='kPCA'></a>
Kernel PCA applied to (a) Swiss roll.
Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
```
## Figure 20.37:<a name='20.37'></a> <a name='LLE'></a>
LLE applied to (a) Swiss roll.
Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
```
## Figure 20.38:<a name='20.38'></a> <a name='eigenmaps'></a>
Laplacian eigenmaps applied to (a) Swiss roll.
Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
```
## Figure 20.39:<a name='20.39'></a> <a name='graphLaplacian'></a>
Illustration of the Laplacian matrix derived from an undirected graph. From https://en.wikipedia.org/wiki/Laplacian_matrix . Used with kind permission of Wikipedia author AzaToth.
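The construction in the figure, $L = D - A$ for an undirected graph, is easy to reproduce; the small adjacency matrix below is a made-up example, not the graph from the figure:
```
import numpy as np

# adjacency matrix of a small undirected graph (4 nodes)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]])

D = np.diag(A.sum(axis=1))  # degree matrix
L = D - A                   # combinatorial graph Laplacian
print(L)
```
Two defining properties are visible immediately: the Laplacian of an undirected graph is symmetric, and each of its rows sums to zero.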
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/graphLaplacian.png")
```
## Figure 20.40:<a name='20.40'></a> <a name='graphFun'></a>
Illustration of a (positive) function defined on a graph. From Figure 1 of <a href='#Shuman2013'>[DI+13]</a> . Used with kind permission of Pascal Frossard.
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/graphFun.png")
```
## Figure 20.41:<a name='20.41'></a> <a name='tSNE'></a>
tSNE applied to (a) Swiss roll.
Figure(s) generated by [manifold_swiss_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_swiss_sklearn.py) [manifold_digits_sklearn.py](https://github.com/probml/pyprobml/blob/master/scripts/manifold_digits_sklearn.py)
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_and_run("/pyprobml/scripts/manifold_swiss_sklearn.py")
pmlt.show_and_run("/pyprobml/scripts/manifold_digits_sklearn.py")
```
## Figure 20.42:<a name='20.42'></a> <a name='tsneWattenberg'></a>
Illustration of the effect of changing the perplexity parameter when t-SNE is applied to some 2d data. From <a href='#Wattenberg2016how'>[MFI16]</a> . See http://distill.pub/2016/misread-tsne for an animated version of these figures. Used with kind permission of Martin Wattenberg.
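The perplexity effect can be explored directly with scikit-learn's TSNE. The toy two-cluster data below is an assumption for illustration, not the data used in the figure:
```
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# two well-separated 2d clusters of 50 points each
X = np.vstack([rng.normal(0, 0.1, size=(50, 2)),
               rng.normal(5, 0.1, size=(50, 2))])

# embed with a small and a moderate perplexity; the layouts can differ markedly
embeddings = {p: TSNE(perplexity=p, init='pca', random_state=0).fit_transform(X)
              for p in (5, 30)}
```
Low perplexity emphasizes very local structure and can fragment clusters; higher perplexity considers more neighbors and tends to preserve the global two-cluster layout, which is the phenomenon the figure animates.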
```
#@title Setup { display-mode: "form" }
%%time
# If you run this for the first time it would take ~25/30 seconds
!git clone https://github.com/probml/pyprobml /pyprobml &> /dev/null && git clone https://github.com/Sekhen/colab_powertoys.git &> /dev/null
!pip3 install nbimporter -qqq
%cd -q /content/colab_powertoys
from colab_powertoys.probml_toys import probml_toys as pmlt
%cd -q /content/
pmlt.show_image("/pyprobml/notebooks/figures/images/tSNE-wattenberg0.png")
```
## References:
<a name='Geron2019'>[Aur19]</a> A. Géron. "Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques for Building Intelligent Systems (2nd edition)". (2019).
<a name='BishopBook'>[Bis06]</a> C. Bishop. "Pattern Recognition and Machine Learning". (2006).
<a name='Shuman2013'>[DI+13]</a> D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega and P. Vandergheynst. "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains". In: IEEE Signal Process. Mag. (2013).
<a name='Foster2019'>[Dav19]</a> D. Foster. "Generative Deep Learning: Teaching Machines to Paint, Write, Compose, and Play". (2019).
<a name='Richardson2018'>[EY18]</a> E. Richardson and Y. Weiss. "On GANs and GMMs". (2018).
<a name='Alain2014'>[GY14]</a> G. Alain and Y. Bengio. "What Regularized Auto-Encoders Learn from the Data-Generating Distribution". In: JMLR (2014).
<a name='HastieBook'>[HTF09]</a> T. Hastie, R. Tibshirani and J. Friedman. "The Elements of Statistical Learning". (2009).
<a name='HintonEmbedding'>[Hin13]</a> G. Hinton. "CSC 2535 Lecture 11: Non-linear dimensionality reduction". (2013).
<a name='Wattenberg2016how'>[MFI16]</a> M. Wattenberg, F. Viégas and I. Johnson. "How to Use t-SNE Effectively". In: Distill (2016).
<a name='Bronstein2017'>[MM+17]</a> M. M. Bronstein, J. Bruna, Y. LeCun, A. Szlam and P. Vandergheynst. "Geometric Deep Learning: Going beyond Euclidean data". In: IEEE Signal Process. Mag. (2017).
<a name='Lawrence2012'>[Nei12]</a> N. D. Lawrence. "A Unifying Probabilistic Perspective for Spectral Dimensionality Reduction: Insights and New Models". In: JMLR (2012).
```
%matplotlib inline
```
Optimizing Vision Transformer Models for Deployment
=================================================================
Authors: `Jeff Tang <https://github.com/jeffxtang>`_, `Geeta Chauhan <https://github.com/gchauhan/>`_
Translated by: `Taeyoung Kim <https://github.com/Taeyoung96/>`_
Vision Transformer models apply the latest attention-based transformer models, which achieved state-of-the-art results in natural language processing, to computer vision tasks.
Data-efficient Image Transformers, or `DeiT <https://ai.facebook.com/blog/data-efficient-image-transformers-a-promising-new-technique-for-image-classification>`_,
released by Facebook, is a vision transformer model
trained on the ImageNet dataset for image classification.
In this tutorial, we will first cover what DeiT is and how to use it,
then go through the complete steps of scripting, quantizing, and optimizing the model,
and of using it in iOS and Android apps.
We will also compare the performance of the quantized, optimized model with the non-quantized, non-optimized model,
and show the benefit of applying quantization and optimization along the way.
What is DeiT
--------------------
Convolutional neural networks (CNNs) have been the dominant models for
image classification since deep learning took off in 2012. However, CNNs typically
require hundreds of millions of images for training to achieve state-of-the-art results.
DeiT is a vision transformer model that requires much less data and computing resources for training,
and it competes with leading CNN models in performing image classification.
This is made possible by two key components of DeiT:
- Data augmentation that simulates training on a much larger dataset
- Distillation that allows the transformer network to learn from a CNN's outputs
DeiT shows that transformers can be successfully applied to computer vision
tasks, with limited access to data and resources.
For more details on DeiT, see the `repo <https://github.com/facebookresearch/deit>`_
and `paper <https://arxiv.org/abs/2012.12877>`_.
Classifying Images with DeiT
-------------------------------
For detailed information on how to classify images using DeiT, follow the README in the DeiT repo.
For a quick test, first install the required
packages:
pip install torch torchvision timm pandas requests
In Google Colab, run the following instead:
```
# !pip install timm pandas requests
```
Then run the script below:
```
from PIL import Image
import torch
import timm
import requests
import torchvision.transforms as transforms
from timm.data.constants import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
print(torch.__version__)
# The PyTorch version should be 1.8.0
model = torch.hub.load('facebookresearch/deit:main', 'deit_base_patch16_224', pretrained=True)
model.eval()
transform = transforms.Compose([
transforms.Resize(256, interpolation=3),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD),
])
img = Image.open(requests.get("https://raw.githubusercontent.com/pytorch/ios-demo-app/master/HelloWorld/HelloWorld/HelloWorld/image.png", stream=True).raw)
img = transform(img)[None,]
out = model(img)
clsidx = torch.argmax(out)
print(clsidx.item())
```
The output should be 269, which, according to the ImageNet `labels file <https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a>`_
of class indices, maps to 'timber wolf, grey wolf, gray wolf, Canis lupus'.
Now that we have verified that we can use the DeiT model to classify images,
let's see how to modify the model so it can run on iOS and Android apps.
Scripting DeiT
----------------------
To use the model on mobile, we first need to script it.
See the `Script and Optimize recipe <https://tutorials.pytorch.kr/recipes/script_optimized.html>`_
for an overview. Run the code below to convert the DeiT model used in the previous step
into the TorchScript format that can run on mobile.
```
model = torch.hub.load('facebookresearch/deit:main', 'deit_base_patch16_224', pretrained=True)
model.eval()
scripted_model = torch.jit.script(model)
scripted_model.save("fbdeit_scripted.pt")
```
This generates the scripted model file fbdeit_scripted.pt, of size about 346MB.
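To verify the size claim on your own run, you can check the saved file on disk. The sketch below uses a tiny stand-in module so it runs anywhere; on the actual model you would call ``os.path.getsize`` on fbdeit_scripted.pt instead.
```
import os
import torch

# tiny stand-in for the scripted DeiT model, so this sketch runs quickly
m = torch.jit.script(torch.nn.Linear(4, 2))
m.save("tiny_scripted.pt")

size_mb = os.path.getsize("tiny_scripted.pt") / 1e6
print(f"{size_mb:.3f} MB")
```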
Quantizing DeiT
---------------------
To reduce the trained model size significantly while keeping
the inference accuracy about the same, we can apply quantization to the model.
Thanks to the transformer architecture used in DeiT,
we can easily apply dynamic quantization to the model,
because dynamic quantization works best for LSTM and transformer models
(see `here <https://pytorch.org/docs/stable/quantization.html?highlight=quantization#dynamic-quantization>`_
for more details).
Now run the code below:
```
# Use 'fbgemm' for server inference and 'qnnpack' for mobile inference
backend = "fbgemm" # replaced with qnnpack, which causes much slower inference speed for the quantized model in this notebook
model.qconfig = torch.quantization.get_default_qconfig(backend)
torch.backends.quantized.engine = backend
quantized_model = torch.quantization.quantize_dynamic(model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)
scripted_quantized_model = torch.jit.script(quantized_model)
scripted_quantized_model.save("fbdeit_scripted_quantized.pt")
```
This generates the scripted and quantized version of the model, fbdeit_scripted_quantized.pt.
The model size is only 89MB,
a 74% reduction from the non-quantized model size of 346MB!
Let's use the ``scripted_quantized_model`` to confirm it produces
the same inference result.
```
out = scripted_quantized_model(img)
clsidx = torch.argmax(out)
print(clsidx.item())
# The same output, 269, should be printed
```
Optimizing DeiT
---------------------
The final step before using the scripted and quantized model
on mobile is to optimize it.
```
from torch.utils.mobile_optimizer import optimize_for_mobile
optimized_scripted_quantized_model = optimize_for_mobile(scripted_quantized_model)
optimized_scripted_quantized_model.save("fbdeit_optimized_scripted_quantized.pt")
```
The generated fbdeit_optimized_scripted_quantized.pt file is
about the same size as the quantized, scripted, but non-optimized model.
The inference result remains the same.
```
out = optimized_scripted_quantized_model(img)
clsidx = torch.argmax(out)
print(clsidx.item())
# Once again, the same output, 269, should be printed
```
Using the Lite Interpreter
-----------------------------------------
Let's see how much smaller the lite-interpreter model is and how much faster
its inference is. Now let's create the lite version of the model.
```
optimized_scripted_quantized_model._save_for_lite_interpreter("fbdeit_optimized_scripted_quantized_lite.ptl")
ptl = torch.jit.load("fbdeit_optimized_scripted_quantized_lite.ptl")
```
Although the lite model size is comparable to the non-lite version,
running the lite version on mobile is expected to speed up inference.
Comparing Inference Speed
---------------------------
Let's see how the inference speeds of the five models compare: the original model,
the scripted model, the scripted & quantized model, the scripted & quantized & optimized model, and the lite model.
Run the code below.
```
with torch.autograd.profiler.profile(use_cuda=False) as prof1:
out = model(img)
with torch.autograd.profiler.profile(use_cuda=False) as prof2:
out = scripted_model(img)
with torch.autograd.profiler.profile(use_cuda=False) as prof3:
out = scripted_quantized_model(img)
with torch.autograd.profiler.profile(use_cuda=False) as prof4:
out = optimized_scripted_quantized_model(img)
with torch.autograd.profiler.profile(use_cuda=False) as prof5:
out = ptl(img)
print("original model: {:.2f}ms".format(prof1.self_cpu_time_total/1000))
print("scripted model: {:.2f}ms".format(prof2.self_cpu_time_total/1000))
print("scripted & quantized model: {:.2f}ms".format(prof3.self_cpu_time_total/1000))
print("scripted & quantized & optimized model: {:.2f}ms".format(prof4.self_cpu_time_total/1000))
print("lite model: {:.2f}ms".format(prof5.self_cpu_time_total/1000))
```
The results on Google Colab are:
::
original model: 1236.69ms
scripted model: 1226.72ms
scripted & quantized model: 593.19ms
scripted & quantized & optimized model: 598.01ms
lite model: 600.72ms
The following results summarize the inference time taken by each model
and the percentage reduction of each model relative to the original model.
```
import pandas as pd
import numpy as np
df = pd.DataFrame({'Model': ['original model','scripted model', 'scripted & quantized model', 'scripted & quantized & optimized model', 'lite model']})
df = pd.concat([df, pd.DataFrame([
["{:.2f}ms".format(prof1.self_cpu_time_total/1000), "0%"],
["{:.2f}ms".format(prof2.self_cpu_time_total/1000),
"{:.2f}%".format((prof1.self_cpu_time_total-prof2.self_cpu_time_total)/prof1.self_cpu_time_total*100)],
["{:.2f}ms".format(prof3.self_cpu_time_total/1000),
"{:.2f}%".format((prof1.self_cpu_time_total-prof3.self_cpu_time_total)/prof1.self_cpu_time_total*100)],
["{:.2f}ms".format(prof4.self_cpu_time_total/1000),
"{:.2f}%".format((prof1.self_cpu_time_total-prof4.self_cpu_time_total)/prof1.self_cpu_time_total*100)],
["{:.2f}ms".format(prof5.self_cpu_time_total/1000),
"{:.2f}%".format((prof1.self_cpu_time_total-prof5.self_cpu_time_total)/prof1.self_cpu_time_total*100)]],
columns=['Inference Time', 'Reduction'])], axis=1)
print(df)
"""
Model Inference Time Reduction
0 original model 1236.69ms 0%
1 scripted model 1226.72ms 0.81%
2 scripted & quantized model 593.19ms 52.03%
3 scripted & quantized & optimized model 598.01ms 51.64%
4 lite model 600.72ms 51.43%
"""
```
Learn More
~~~~~~~~~~~~~~~~~
- `Facebook Data-efficient Image Transformers <https://ai.facebook.com/blog/data-efficient-image-transformers-a-promising-new-technique-for-image-classification>`__
- `Vision Transformer with ImageNet and MNIST on iOS <https://github.com/pytorch/ios-demo-app/tree/master/ViT4MNIST>`__
- `Vision Transformer with ImageNet and MNIST on Android <https://github.com/pytorch/android-demo-app/tree/master/ViT4MNIST>`__
# Example of Data Analysis with DCD Hub Data
First, we import the Python SDK
```
from dcd.entities.thing import Thing
```
We provide the thing ID and access token (replace with yours)
```
from dotenv import load_dotenv
import os
load_dotenv()
THING_ID = os.environ['THING_ID']
THING_TOKEN = os.environ['THING_TOKEN']
```
We instantiate a Thing with its credentials, then fetch its details
```
my_thing = Thing(thing_id=THING_ID, token=THING_TOKEN)
my_thing.read()
```
What does a Thing look like?
```
my_thing.to_json()
```
Which property do we want to explore and over which time frame?
```
from datetime import datetime
# What dates?
START_DATE = "2019-10-08 21:17:00"
END_DATE = "2019-11-08 21:25:00"
DATE_FORMAT = '%Y-%m-%d %H:%M:%S'
from_ts = datetime.timestamp(datetime.strptime(START_DATE, DATE_FORMAT)) * 1000
to_ts = datetime.timestamp(datetime.strptime(END_DATE, DATE_FORMAT)) * 1000
```
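The conversion above multiplies the POSIX timestamp by 1000 because the hub stores timestamps in milliseconds. A minimal, self-contained sketch of that conversion and its round trip (independent of the SDK):

```python
from datetime import datetime

DATE_FORMAT = '%Y-%m-%d %H:%M:%S'

def to_epoch_ms(date_string):
    """Convert a 'YYYY-mm-dd HH:MM:SS' string to epoch milliseconds."""
    return datetime.timestamp(datetime.strptime(date_string, DATE_FORMAT)) * 1000

def from_epoch_ms(ts_ms):
    """Convert epoch milliseconds back to a date string."""
    return datetime.fromtimestamp(ts_ms / 1000).strftime(DATE_FORMAT)

ts = to_epoch_ms("2019-10-08 21:17:00")
print(from_epoch_ms(ts))  # round-trips to the original string
```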
Let's find this property and read the data.
```
PROPERTY_NAME = "IMU"
my_property = my_thing.find_property_by_name(PROPERTY_NAME)
my_property.read(from_ts, to_ts)
```
How many data points did we get?
```
print(len(my_property.values))
```
Display values
```
my_property.values
```
# From CSV
```
from numpy import genfromtxt
import pandas as pd
data = genfromtxt('data.csv', delimiter=',')
data_frame = pd.DataFrame(data[:,1:], index = pd.DatetimeIndex(pd.to_datetime(data[:,0], unit='ms')), columns = ['x', 'y', 'z'])
data_frame
```
# Plot some charts with Matplotlib
In this example we plot the raw time series and a histogram showing the distribution of values for each dimension.
```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure

data = np.array(my_property.values)
figure(num=None, figsize=(15, 5))
t = data_frame.index
plt.plot(t, data_frame.x, t, data_frame.y, t, data_frame.z)
plt.hist(data[:,1:])
plt.show()
```
# Generate statistics with NumPy and Pandas
```
import numpy as np
from scipy.stats import kurtosis, skew
np.min(data[:,1:4], axis=0)
skew(data[:,1:4])
```
You can select a column (slice) of data, or a subset of data. In the example below we select the first
10 rows and columns 1 onward (i.e. skipping the first column, which represents the time).
```
data[:10,1:]
```
Out of the box, Pandas gives you some statistics; do not forget to convert your array into a DataFrame first.
```
data_frame = pd.DataFrame(data[:,1:], index = pd.DatetimeIndex(pd.to_datetime(data[:,0], unit='ms')))
pd.DataFrame.describe(data_frame)
data_frame.rolling(10).std()
```
# Rolling / Sliding Window
To apply statistics on a sliding (or rolling) window, we can use the rolling() function of a data frame. In the example below, we compute a standard deviation over a 2-second window and a skew over a 100-element window.
```
rolling2s = data_frame.rolling('2s').std()
plt.plot(rolling2s)
plt.show()
rolling100_data_points = data_frame.rolling(100).skew()
plt.plot(rolling100_data_points)
plt.show()
```
# Zero Crossing
```
plt.hist(np.where(np.diff(np.sign(data[:,1]))))
plt.show()
```
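The `np.sign`/`np.diff` trick above locates the indices where the signal changes sign. A small self-contained sketch with a toy signal:

```python
import numpy as np

signal = np.array([1.0, 0.5, -0.2, -1.0, 0.3, 0.8, -0.1])

# sign flips show up as nonzero differences of np.sign
crossings = np.where(np.diff(np.sign(signal)) != 0)[0]
print(crossings)       # indices just before each zero crossing
print(len(crossings))  # number of crossings
```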
https://docs.scipy.org/doc/scipy/reference/stats.html#discrete-distributions
```
!cp drive/My\ Drive/time-series-analysis/london_bike_sharing_dataset.csv .
```
### Importing libraries
```
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import pandas as pd
import tensorflow as tf
from tensorflow import keras
import seaborn as sns
from matplotlib import rc
from pylab import rcParams
rcParams['figure.figsize'] = 22, 6
```
### Load data
```
df = pd.read_csv('london_bike_sharing_dataset.csv',parse_dates=['timestamp'],index_col='timestamp')
df.head()
```
#### Get a copy of the data
```
df_copy = df.copy()
```
## Exploratory data analysis
### Extracting extra features from timestamps
```
df['hour'] = df.index.hour
df['day_of_week'] = df.index.dayofweek
df['day_of_month'] = df.index.day
df['month'] = df.index.month
df.head()
```
### Total number of bikes shared over the period
```
sns.lineplot(x=df.index, y=df.cnt);
```
### Total number of bikes shared each month
```
df_by_month = df.resample('M').sum()
sns.lineplot(x=df_by_month.index, y='cnt', data=df_by_month, color='b');
```
### Total number of bikes shared each hour, holidays vs. non-holidays
```
sns.pointplot(x='hour',y='cnt', data=df, hue='is_holiday');
```
### Total number of bikes shared on each day of the week
```
sns.pointplot(x='day_of_week',y='cnt', data=df, color='b');
```
## Splitting train & test
```
train_size = int(len(df) * 0.9)
test_size = len(df) - train_size
train , test = df.iloc[:train_size], df.iloc[train_size:]
print(train.shape, test.shape)
```
## Feature scaling
```
from sklearn.preprocessing import RobustScaler
pd.options.mode.chained_assignment = None
f_columns = ['t1', 't2', 'hum', 'wind_speed']
f_transformer = RobustScaler()
cnt_transformer = RobustScaler()
f_transformer = f_transformer.fit(train[f_columns].to_numpy())
cnt_transformer = cnt_transformer.fit(train[['cnt']])
train.loc[:, f_columns] = f_transformer.transform(train[f_columns].to_numpy())
train['cnt'] = cnt_transformer.transform(train[['cnt']])
test.loc[:, f_columns] = f_transformer.transform(test[f_columns].to_numpy())
test['cnt'] = cnt_transformer.transform(test[['cnt']])
```
### Converting the data to a time series format
```
def to_sequence(X, y, time_steps=1):
Xs, ys = [], []
for i in range(len(X) - time_steps):
v = X.iloc[i: (i + time_steps)].to_numpy()
Xs.append(v)
ys.append(y.iloc[i + time_steps])
return np.asarray(Xs), np.asarray(ys)
TIMESTEPS = 24
x_train, y_train = to_sequence(train, train['cnt'], TIMESTEPS)
x_test, y_test = to_sequence(test, test['cnt'], TIMESTEPS)
print(f"X_train shape is {x_train.shape}, and y_train shape is {y_train.shape}")
```
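To make the windowing concrete: with `time_steps=3`, each sample is a block of 3 consecutive rows and the target is the value immediately after the block. A toy sketch of the same logic as `to_sequence`, assuming a one-column frame:

```python
import numpy as np
import pandas as pd

def to_sequence_toy(X, y, time_steps=1):
    # stack windows of `time_steps` rows; pair each window
    # with the value immediately following it
    Xs, ys = [], []
    for i in range(len(X) - time_steps):
        Xs.append(X.iloc[i: i + time_steps].to_numpy())
        ys.append(y.iloc[i + time_steps])
    return np.asarray(Xs), np.asarray(ys)

toy = pd.DataFrame({'cnt': [10, 20, 30, 40, 50]})
x_toy, y_toy = to_sequence_toy(toy, toy['cnt'], time_steps=3)
print(x_toy.shape)  # (2, 3, 1): 2 windows of 3 rows, 1 feature
print(y_toy)        # targets following each window: 40 and 50
```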
## Defining a model
```
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Bidirectional, Dense
model = Sequential()
model.add(Bidirectional(LSTM(units=128),input_shape=(x_train.shape[1], x_train.shape[2])))
model.add(Dropout(rate=0.3))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse', metrics=['mae'])  # accuracy is not meaningful for regression
model.summary()
```
### Fitting the model on data
```
history = model.fit(x_train, y_train, batch_size=16, validation_split=0.1, epochs=100, shuffle=False)
```
### Model loss visualization
```
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend();
```
### Model prediction
```
y_pred = model.predict(x_test)
y_test_inv = cnt_transformer.inverse_transform(y_test.reshape(1,-1))
y_train_inv = cnt_transformer.inverse_transform(y_train.reshape(1,-1))
y_pred_inv = cnt_transformer.inverse_transform(y_pred)
```
### Model prediction visualization
```
plt.plot(y_test_inv.flatten(), marker='.', label='True')
plt.plot(y_pred_inv, marker='.', label='Prediction')
plt.legend();
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D3_ModelFitting/W1D3_Tutorial4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <a href="https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W1D3_ModelFitting/W1D3_Tutorial4.ipynb" target="_parent"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open in Kaggle"/></a>
# Tutorial 4: Multiple linear regression and polynomial regression
**Week 1, Day 3: Model Fitting**
**By Neuromatch Academy**
**Content creators**: Pierre-Étienne Fiquet, Anqi Wu, Alex Hyafil with help from Byron Galbraith, Ella Batty
**Content reviewers**: Lina Teichmann, Saeed Salehi, Patrick Mineault, Michael Waskom
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
<p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
---
# Tutorial Objectives
*Estimated timing of tutorial: 35 minutes*
This is Tutorial 4 of a series on fitting models to data. We start with simple linear regression, using least squares optimization (Tutorial 1) and Maximum Likelihood Estimation (Tutorial 2). We will use bootstrapping to build confidence intervals around the inferred linear model parameters (Tutorial 3). We'll finish our exploration of regression models by generalizing to multiple linear regression and polynomial regression (Tutorial 4). We end by learning how to choose between these various models. We discuss the bias-variance trade-off (Tutorial 5) and Cross Validation for model selection (Tutorial 6).
In this tutorial, we will generalize the regression model to incorporate multiple features.
- Learn how to structure inputs for regression using the 'Design Matrix'
- Generalize the MSE for multiple features using the ordinary least squares estimator
- Visualize data and model fit in multiple dimensions
- Fit polynomial regression models of different complexity
- Plot and evaluate the polynomial regression fits
```
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/2mkq4/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# @title Video 1: Multiple Linear Regression and Polynomial Regression
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV11Z4y1u7cf", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="d4nfTki6Ejc", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
```
---
# Setup
```
# Imports
import numpy as np
import matplotlib.pyplot as plt
#@title Figure Settings
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Plotting Functions
def evaluate_fits(order_list, mse_list):
""" Compare the quality of multiple polynomial fits
by plotting their MSE values.
Args:
order_list (list): list of the order of polynomials to be compared
mse_list (list): list of the MSE values for the corresponding polynomial fit
"""
fig, ax = plt.subplots()
ax.bar(order_list, mse_list)
ax.set(title='Comparing Polynomial Fits', xlabel='Polynomial order', ylabel='MSE')
def plot_fitted_polynomials(x, y, theta_hat):
""" Plot polynomials of different orders
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
theta_hat (dict): polynomial regression weights for different orders
"""
x_grid = np.linspace(x.min() - .5, x.max() + .5)
plt.figure()
for order in range(0, max_order + 1):
X_design = make_design_matrix(x_grid, order)
plt.plot(x_grid, X_design @ theta_hat[order]);
plt.ylabel('y')
plt.xlabel('x')
plt.plot(x, y, 'C0.');
plt.legend([f'order {o}' for o in range(max_order + 1)], loc=1)
plt.title('polynomial fits')
plt.show()
```
---
# Section 1: Multiple Linear Regression
*Estimated timing to here from start of tutorial: 8 min*
This video covers linear regression with multiple inputs (more than 1D) and polynomial regression.
<details>
<summary> <font color='blue'>Click here for text recap of video </font></summary>
Now that we have considered the univariate case and how to produce confidence intervals for our estimator, we turn to the general linear regression case, where we can have more than one regressor, or feature, in our input.
Recall that our original univariate linear model was given as
\begin{align}
y = \theta x + \epsilon
\end{align}
where $\theta$ is the slope and $\epsilon$ some noise. We can easily extend this to the multivariate scenario by adding another parameter for each additional feature
\begin{align}
y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + ... +\theta_d x_d + \epsilon
\end{align}
where $\theta_0$ is the intercept and $d$ is the number of features (it is also the dimensionality of our input).
We can condense this succinctly using vector notation for a single data point
\begin{align}
y_i = \boldsymbol{\theta}^{\top}\mathbf{x}_i + \epsilon
\end{align}
and fully in matrix form
\begin{align}
\mathbf{y} = \mathbf{X}\boldsymbol{\theta} + \mathbf{\epsilon}
\end{align}
where $\mathbf{y}$ is a vector of measurements, $\mathbf{X}$ is a matrix containing the feature values (columns) for each input sample (rows), and $\boldsymbol{\theta}$ is our parameter vector.
This matrix $\mathbf{X}$ is often referred to as the "[design matrix](https://en.wikipedia.org/wiki/Design_matrix)".
We want to find an optimal vector of parameters $\boldsymbol{\hat\theta}$. Recall our analytic solution to minimizing MSE for a single regressor:
\begin{align}
\hat\theta = \frac{\sum_{i=1}^N x_i y_i}{\sum_{i=1}^N x_i^2}.
\end{align}
The same holds true for the multiple regressor case, only now expressed in matrix form
\begin{align}
\boldsymbol{\hat\theta} = (\mathbf{X}^\top\mathbf{X})^{-1}\mathbf{X}^\top\mathbf{y}.
\end{align}
This is called the [ordinary least squares](https://en.wikipedia.org/wiki/Ordinary_least_squares) (OLS) estimator.
</details>
For this tutorial we will focus on the two-dimensional case ($d=2$), which allows us to easily visualize our results. As an example, think of a situation where a scientist records the spiking response of a retinal ganglion cell to patterns of light signals that vary in contrast and in orientation. Then contrast and orientation values can be used as features / regressors to predict the cell's response.
In this case our model can be written for a single data point as:
\begin{align}
y = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \epsilon
\end{align}
or for multiple data points in matrix form where
\begin{align}
\mathbf{X} =
\begin{bmatrix}
1 & x_{1,1} & x_{1,2} \\
1 & x_{2,1} & x_{2,2} \\
\vdots & \vdots & \vdots \\
1 & x_{n,1} & x_{n,2}
\end{bmatrix},
\boldsymbol{\theta} =
\begin{bmatrix}
\theta_0 \\
\theta_1 \\
\theta_2 \\
\end{bmatrix}
\end{align}
When we refer to $x_{i, j}$, we mean that it is the i-th data point and the j-th feature of that data point.
For our actual exploration dataset we shall set $\boldsymbol{\theta}=[0, -2, -3]$ and draw $N=40$ noisy samples from $x \in [-2,2)$. Note that setting the value of $\theta_0 = 0$ effectively ignores the offset term.
```
# @markdown Execute this cell to simulate some data
# Set random seed for reproducibility
np.random.seed(1234)
# Set parameters
theta = [0, -2, -3]
n_samples = 40
# Draw x and calculate y
n_regressors = len(theta)
x0 = np.ones((n_samples, 1))
x1 = np.random.uniform(-2, 2, (n_samples, 1))
x2 = np.random.uniform(-2, 2, (n_samples, 1))
X = np.hstack((x0, x1, x2))
noise = np.random.randn(n_samples)
y = X @ theta + noise
ax = plt.subplot(projection='3d')
ax.plot(X[:,1], X[:,2], y, '.')
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
```
## Coding Exercise 1: Ordinary Least Squares Estimator
In this exercise you will implement the OLS approach to estimating $\boldsymbol{\hat\theta}$ from the design matrix $\mathbf{X}$ and measurement vector $\mathbf{y}$. You can use the `@` symbol for matrix multiplication, `.T` for transpose, and `np.linalg.inv` for matrix inversion.
```
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
x (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
######################################################################
## TODO for students: solve for the optimal parameter vector using OLS
# Fill out function and remove
raise NotImplementedError("Student exercise: solve for theta_hat vector using OLS")
######################################################################
# Compute theta_hat using OLS
theta_hat = ...
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
# to_remove solution
def ordinary_least_squares(X, y):
"""Ordinary least squares estimator for linear regression.
Args:
X (ndarray): design matrix of shape (n_samples, n_regressors)
y (ndarray): vector of measurements of shape (n_samples)
Returns:
ndarray: estimated parameter values of shape (n_regressors)
"""
# Compute theta_hat using OLS
theta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
return theta_hat
theta_hat = ordinary_least_squares(X, y)
print(theta_hat)
```
After filling in this function, you should see that $\boldsymbol{\hat\theta}$ = [ 0.13861386, -2.09395731, -3.16370742]
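As a sanity check on the closed-form solution above (independent of the exercise data), OLS should agree with NumPy's least-squares solver on any well-conditioned design matrix; a hedged sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic design matrix with an intercept column and two features
X_check = np.column_stack([np.ones(50), rng.uniform(-2, 2, (50, 2))])
theta_true = np.array([0.5, -2.0, -3.0])
y_check = X_check @ theta_true + 0.1 * rng.standard_normal(50)

# closed-form OLS estimator
theta_ols = np.linalg.inv(X_check.T @ X_check) @ X_check.T @ y_check
# NumPy's built-in least-squares solver
theta_lstsq, *_ = np.linalg.lstsq(X_check, y_check, rcond=None)

print(np.allclose(theta_ols, theta_lstsq))  # True: both minimize the same MSE
```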
Now that we have our $\boldsymbol{\hat\theta}$, we can obtain $\hat{\mathbf{y}}$ and thus our mean squared error.
```
# Compute predicted data
theta_hat = ordinary_least_squares(X, y)
y_hat = X @ theta_hat
# Compute MSE
print(f"MSE = {np.mean((y - y_hat)**2):.2f}")
```
Finally, the following code will plot a geometric visualization of the data points (blue) and fitted plane.
```
# @markdown Execute this cell to visualize data and predicted plane
theta_hat = ordinary_least_squares(X, y)
xx, yy = np.mgrid[-2:2:50j, -2:2:50j]
y_hat_grid = np.array([xx.flatten(), yy.flatten()]).T @ theta_hat[1:]
y_hat_grid = y_hat_grid.reshape((50, 50))
ax = plt.subplot(projection='3d')
ax.plot(X[:, 1], X[:, 2], y, '.')
ax.plot_surface(xx, yy, y_hat_grid, linewidth=0, alpha=0.5, color='C1',
cmap=plt.get_cmap('coolwarm'))
for i in range(len(X)):
ax.plot((X[i, 1], X[i, 1]),
(X[i, 2], X[i, 2]),
(y[i], y_hat[i]),
'g-', alpha=.5)
ax.set(
xlabel='$x_1$',
ylabel='$x_2$',
zlabel='y'
)
plt.tight_layout()
```
---
# Section 2: Polynomial Regression
So far today, you learned how to predict outputs from inputs by fitting a linear regression model. We can now model all sorts of relationships, including in neuroscience!
One potential problem with this approach is the simplicity of the model. Linear regression, as the name implies, can only capture a linear relationship between the inputs and outputs. Put another way, the predicted outputs are only a weighted sum of the inputs. What if there are more complicated computations happening? Luckily, many more complex models exist (and you will encounter many more over the next 3 weeks). One model that is still very simple to fit and understand, but captures more complex relationships, is **polynomial regression**, an extension of linear regression.
<details>
<summary> <font color='blue'>Click here for text recap of relevant part of video </font></summary>
Since polynomial regression is an extension of linear regression, everything you learned so far will come in handy now! The goal is the same: we want to predict the dependent variable $y$ given the input values $x$. The key change is the type of relationship between inputs and outputs that the model can capture.
Linear regression models predict the outputs as a weighted sum of the inputs:
\begin{align}
y = \theta_0 + \theta x + \epsilon
\end{align}
With polynomial regression, we model the outputs as a polynomial equation based on the inputs. For example, we can model the outputs as:
\begin{align}
y & = \theta_0 + \theta_1 x + \theta_2 x^2 + \theta_3 x^3 + \epsilon
\end{align}
We can change how complex a polynomial is fit by changing the order of the polynomial. The order of a polynomial refers to the highest power in the polynomial. The equation above is a third order polynomial because the highest value x is raised to is 3. We could add another term ($+ \theta_4 x^4$) to model an order 4 polynomial and so on.
</details>
First, we will simulate some data to practice fitting polynomial regression models. We will generate random inputs $x$ and then compute y according to $y = x^2 - x - 2 $, with some extra noise both in the input and the output to make the model fitting exercise closer to a real life situation.
```
# @markdown Execute this cell to simulate some data
# setting a fixed seed to our random number generator ensures we will always
# get the same pseudorandom number sequence
np.random.seed(121)
n_samples = 30
x = np.random.uniform(-2, 2.5, n_samples) # inputs uniformly sampled from [-2, 2.5)
y = x**2 - x - 2 # computing the outputs
output_noise = 1/8 * np.random.randn(n_samples)
y += output_noise # adding some output noise
input_noise = 1/2 * np.random.randn(n_samples)
x += input_noise # adding some input noise
fig, ax = plt.subplots()
ax.scatter(x, y) # produces a scatter plot
ax.set(xlabel='x', ylabel='y');
```
## Section 2.1: Design matrix for polynomial regression
*Estimated timing to here from start of tutorial: 16 min*
Now we have the basic idea of polynomial regression and some noisy data, let's begin! The key difference between fitting a linear regression model and a polynomial regression model lies in how we structure the input variables.
Let's go back to one feature for each data point. For linear regression, we used $\mathbf{X} = \mathbf{x}$ as the input data, where $\mathbf{x}$ is a vector where each element is the input for a single data point. To add a constant bias (a y-intercept in a 2-D plot), we use $\mathbf{X} = \big[ \boldsymbol 1, \mathbf{x} \big]$, where $\boldsymbol 1$ is a column of ones. When fitting, we learn a weight for each column of this matrix. So we learn a weight that multiples with column 1 - in this case that column is all ones so we gain the bias parameter ($+ \theta_0$).
This matrix $\mathbf{X}$ that we use for our inputs is known as a **design matrix**. We want to create our design matrix so we learn weights for $\mathbf{x}^2, \mathbf{x}^3,$ etc. Thus, we want to build our design matrix $X$ for polynomial regression of order $k$ as:
\begin{align}
\mathbf{X} = \big[ \boldsymbol 1 , \mathbf{x}^1, \mathbf{x}^2 , \ldots , \mathbf{x}^k \big],
\end{align}
where $\boldsymbol{1}$ is the vector, the same length as $\mathbf{x}$, consisting of all ones, and $\mathbf{x}^p$ is the vector $\mathbf{x}$ with all elements raised to the power $p$. Note that $\boldsymbol{1} = \mathbf{x}^0$ and $\mathbf{x}^1 = \mathbf{x}$.
If we have inputs with more than one feature, we can use a similar design matrix but include all features raised to each power. Imagine that we have two features per data point: $\mathbf{x}_m$ is a vector of one feature per data point and $\mathbf{x}_n$ is another. Our design matrix for a polynomial regression would be:
\begin{align}
\mathbf{X} = \big[ \boldsymbol 1 , \mathbf{x}_m^1, \mathbf{x}_n^1, \mathbf{x}_m^2 , \mathbf{x}_n^2\ldots , \mathbf{x}_m^k , \mathbf{x}_n^k \big],
\end{align}
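As a quick cross-check (not the exercise solution, which builds the matrix by hand), NumPy's `np.vander` with `increasing=True` produces exactly the single-feature design matrix $\big[ \boldsymbol 1, \mathbf{x}, \mathbf{x}^2, \ldots, \mathbf{x}^k \big]$:

```python
import numpy as np

x_demo = np.array([1.0, 2.0, 3.0])
order = 2

# columns are x^0, x^1, x^2 — the polynomial design matrix
X_demo = np.vander(x_demo, N=order + 1, increasing=True)
print(X_demo)
# [[1. 1. 1.]
#  [1. 2. 4.]
#  [1. 3. 9.]]
```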
### Coding Exercise 2.1: Structure design matrix
Create a function (`make_design_matrix`) that structures the design matrix given the input data and the order of the polynomial you wish to fit. We will print part of this design matrix for our data and order 5.
```
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (n_samples)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
########################################################################
## TODO for students: create the design matrix ##
# Fill out function and remove
raise NotImplementedError("Student exercise: create the design matrix")
########################################################################
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = ...
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
# to_remove solution
def make_design_matrix(x, order):
"""Create the design matrix of inputs for use in polynomial regression
Args:
x (ndarray): input vector of shape (samples,)
order (scalar): polynomial regression order
Returns:
ndarray: design matrix for polynomial regression of shape (samples, order+1)
"""
# Broadcast to shape (n x 1) so dimensions work
if x.ndim == 1:
x = x[:, None]
#if x has more than one feature, we don't want multiple columns of ones so we assign
# x^0 here
design_matrix = np.ones((x.shape[0], 1))
# Loop through rest of degrees and stack columns (hint: np.hstack)
for degree in range(1, order + 1):
design_matrix = np.hstack((design_matrix, x**degree))
return design_matrix
order = 5
X_design = make_design_matrix(x, order)
print(X_design[0:2, 0:2])
```
You should see that the printed section of this design matrix is `[[ 1. -1.51194917]
[ 1. -0.35259945]]`
## Section 2.2: Fitting polynomial regression models
*Estimated timing to here from start of tutorial: 24 min*
Now that we have the inputs structured correctly in our design matrix, fitting a polynomial regression is the same as fitting a linear regression model! All of the polynomial structure we need to learn is contained in how the inputs are structured in the design matrix. We can use the same least squares solution we computed in previous exercises.
### Coding Exercise 2.2: Fitting polynomial regression models with different orders
Here, we will fit polynomial regression models to find the regression coefficients ($\theta_0, \theta_1, \theta_2,$ ...) by solving the least squares problem. Create a function `solve_poly_reg` that loops over different order polynomials (up to `max_order`), fits that model, and saves out the weights for each. You may invoke the `ordinary_least_squares` function.
We will then qualitatively inspect the quality of our fits for each order by plotting the fitted polynomials on top of the data. In order to see smooth curves, we evaluate the fitted polynomials on a grid of $x$ values (ranging between the largest and smallest of the inputs present in the dataset).
```
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
##################################################################################
## TODO for students: Create design matrix and fit polynomial model for this order
# Fill out function and remove
raise NotImplementedError("Student exercise: fit a polynomial model")
##################################################################################
# Create design matrix
X_design = ...
# Fit polynomial model
this_theta = ...
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
# Visualize
plot_fitted_polynomials(x, y, theta_hats)
# to_remove solution
def solve_poly_reg(x, y, max_order):
"""Fit a polynomial regression model for each order 0 through max_order.
Args:
x (ndarray): input vector of shape (n_samples)
y (ndarray): vector of measurements of shape (n_samples)
max_order (scalar): max order for polynomial fits
Returns:
dict: fitted weights for each polynomial model (dict key is order)
"""
# Create a dictionary with polynomial order as keys,
# and np array of theta_hat (weights) as the values
theta_hats = {}
# Loop over polynomial orders from 0 through max_order
for order in range(max_order + 1):
# Create design matrix
X_design = make_design_matrix(x, order)
# Fit polynomial model
this_theta = ordinary_least_squares(X_design, y)
theta_hats[order] = this_theta
return theta_hats
max_order = 5
theta_hats = solve_poly_reg(x, y, max_order)
# Visualize
with plt.xkcd():
plot_fitted_polynomials(x, y, theta_hats)
```
## Section 2.3: Evaluating fit quality
*Estimated timing to here from start of tutorial: 29 min*
As with linear regression, we can compute mean squared error (MSE) to get a sense of how well the model fits the data.
We compute MSE as:
\begin{align}
\mathrm{MSE} = \frac 1 N ||\mathbf{y} - \hat{\mathbf{y}}||^2 = \frac 1 N \sum_{i=1}^N (y_i - \hat y_i)^2
\end{align}
where the predicted values for each model are given by $ \hat{\mathbf{y}} = \mathbf{X}\boldsymbol{\hat\theta}$.
*Which model (i.e. which polynomial order) do you think will have the best MSE?*
### Coding Exercise 2.3: Compute MSE and compare models
We will compare the MSE for different polynomial orders with a bar plot.
```
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
########################################################################
## TODO for students
# Fill out function and remove
raise NotImplementedError("Student exercise: compute MSE")
########################################################################
# Get prediction for the polynomial regression model of this order
y_hat = ...
# Compute the residuals
residuals = ...
# Compute the MSE
mse = ...
mse_list.append(mse)
# Visualize MSE of fits
evaluate_fits(order_list, mse_list)
# to_remove solution
mse_list = []
order_list = list(range(max_order + 1))
for order in order_list:
X_design = make_design_matrix(x, order)
# Get prediction for the polynomial regression model of this order
y_hat = X_design @ theta_hats[order]
# Compute the residuals
residuals = y - y_hat
# Compute the MSE
mse = np.mean(residuals ** 2)
mse_list.append(mse)
# Visualize MSE of fits
with plt.xkcd():
evaluate_fits(order_list, mse_list)
```
---
# Summary
*Estimated timing of tutorial: 35 minutes*
* Linear regression generalizes naturally to multiple dimensions
* Linear algebra affords us the mathematical tools to reason and solve such problems beyond the two dimensional case
* To change from a linear regression model to a polynomial regression model, we only have to change how the input data is structured
* We can choose the complexity of the model by changing the order of the polynomial model fit
* Higher order polynomial models tend to have lower MSE on the data they're fit with
**Note**: In practice, multidimensional least squares problems can be solved very efficiently (thanks to numerical routines such as LAPACK).
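As a concrete illustration of that note, NumPy's `np.linalg.lstsq` (which dispatches to a LAPACK least-squares driver) solves a multidimensional problem directly; the data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))             # design matrix: N = 100 samples, d = 5
theta_true = np.arange(1.0, 6.0)
y = X @ theta_true + 0.01 * rng.normal(size=100)   # small measurement noise

# lstsq returns (solution, residuals, rank, singular values); we keep the solution
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```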
---
# Notation
\begin{align}
x &\quad \text{input, independent variable}\\
y &\quad \text{response measurement, dependent variable}\\
\epsilon &\quad \text{measurement error, noise contribution}\\
\theta &\quad \text{slope parameter}\\
\hat{\theta} &\quad \text{estimated slope parameter}\\
\mathbf{x} &\quad \text{vector of inputs where each element is a different data point}\\
\mathbf{X} &\quad \text{design matrix}\\
\mathbf{y} &\quad \text{vector of measurements}\\
\mathbf{\hat y} &\quad \text{vector of estimated measurements}\\
\boldsymbol{\theta} &\quad \text{vector of parameters}\\
\boldsymbol{\hat\theta} &\quad \text{vector of estimated parameters}\\
d &\quad \text{dimensionality of input}\\
N &\quad \text{number of samples}\\
\end{align}
**Suggested readings**
[Introduction to Applied Linear Algebra – Vectors, Matrices, and Least Squares](http://vmls-book.stanford.edu/)
Stephen Boyd and Lieven Vandenberghe
# A Simple Autoencoder
We'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.

In this notebook, we'll build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.
```
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
```
Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
```
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
```
We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a **single ReLU hidden layer**. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a **sigmoid activation on the output layer** to get values matching the input.

> **Exercise:** Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, `tf.layers`. For instance, you would use [`tf.layers.dense(inputs, units, activation=tf.nn.relu)`](https://www.tensorflow.org/api_docs/python/tf/layers/dense) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this `tf.nn.sigmoid_cross_entropy_with_logits` ([documentation](https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits)). You should note that `tf.nn.sigmoid_cross_entropy_with_logits` takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.
```
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, (None, 784))
targets_ = tf.placeholder(tf.float32, (None, 784))
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, 784)
# Sigmoid output from logits
decoded = tf.nn.sigmoid(logits)
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=targets_)
# Mean of the loss
cost = tf.reduce_mean(loss)
# Adam optimizer
opt = tf.train.AdamOptimizer().minimize(cost)
```
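For intuition about the loss used above: `tf.nn.sigmoid_cross_entropy_with_logits` computes a numerically stable form of the sigmoid cross-entropy directly from the logits. A plain-NumPy sketch of that formula (an illustration, not TensorFlow's actual implementation):

```python
import numpy as np

def sigmoid_xent_with_logits(logits, labels):
    """Numerically stable sigmoid cross-entropy: max(z, 0) - z*t + log(1 + exp(-|z|))."""
    z, t = logits, labels
    return np.maximum(z, 0) - z * t + np.log1p(np.exp(-np.abs(z)))

z = np.array([-2.0, 0.0, 3.0])
t = np.array([0.0, 1.0, 1.0])
stable = sigmoid_xent_with_logits(z, t)
# the naive form -t*log(sigmoid(z)) - (1-t)*log(1-sigmoid(z)), fine for moderate logits
naive = -t * np.log(1 / (1 + np.exp(-z))) - (1 - t) * np.log(1 - 1 / (1 + np.exp(-z)))
```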
## Training
```
# Create the session
sess = tf.Session()
```
Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Calling `mnist.train.next_batch(batch_size)` will return a tuple of `(images, labels)`. We're not concerned with the labels here; we just need the images. Otherwise this is pretty straightforward training with TensorFlow. We initialize the variables with `sess.run(tf.global_variables_initializer())`. Then, run the optimizer and get the loss with `batch_cost, _ = sess.run([cost, opt], feed_dict=feed)`.
```
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
```
## Checking out the results
Below I've plotted some of the test images along with their reconstructions. These look pretty good, apart from some blurriness in places.
```
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
```
## Up Next
We're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.
In practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build.
```
#############################################################
# Project: External Finance Data
# Step: Prepare RBA Credit Card Data
# Purpose: Provide insights into Australian Credit Card Data
# usage
# Author: Michael Letheby
# Version:
# 1.0 - 1/11/2021
# - Created initial version with ABS Mortgages and RBA Card data
# 1.1 - 21/11/2021
# - Cleaned up code and added full commentary
#############################################################
#############################################################
# Section: Libraries
#############################################################
import pandas as pd # Data Analysis Library
import matplotlib.pyplot as plt # Data Visualisation Library
import matplotlib.ticker as ticker
%matplotlib inline
import seaborn as sns # Data Visualisation Library
import requests # For downloading
import datetime
import re # Regex
import numbers
import pickle # for saving/loading files
pd.options.mode.chained_assignment = None # default='warn'
#############################################################
# Section: Functions
#############################################################
# pickle_save: save the files after importing and reading them
def pickle_save(name, to_save):
with open('./Data/' + name + '.pickle', 'wb') as handle:
pickle.dump(to_save, handle, protocol=pickle.HIGHEST_PROTOCOL)
# pickle_load: load previously saved files
def pickle_load(name):
with open('./Data/' + name + '.pickle', 'rb') as handle:
load_data = pickle.load(handle)
return load_data
# match: search each string element within a list ('list_search') in a string ('in_string') and
# return the match. Used to define the type of variable within the ABS Lending Indicator datasets.
def match(list_search, in_string):
# need to add restrictions on input types to list + string
result = [f for f in list_search if re.search(f, in_string)]
return(result)
# human_format: format numbers to be more readable
def human_format(num):
num = float('{:.3g}'.format(num))
magnitude = 0
while abs(num) >= 1000:
magnitude += 1
num /= 1000.0
return '{}{}'.format('{:f}'.format(num).rstrip('0').rstrip('.'), ['', 'K', 'M', 'B', 'T'][magnitude])
# Credit Cards - Personal
final_table_dict = pickle_load('imported_data_final')
rba_df = final_table_dict['RBA-Credit']['Final_Data']
# Visualise the trend
#rba_df = rba_df[rba_df['Date'] >= '2018-01-01'] # In order to calculate MoM + YoY
rba_df['Date_Axis'] = rba_df['Date'].dt.strftime("%b-%y")
rba_df['Year'] = rba_df['Date'].dt.year
rba_df['Month'] = rba_df['Date'].dt.strftime("%b")
# Make sure the table is ordered first
rba_df = rba_df.sort_values(by=['Date'])
rba_df = rba_df.reset_index(drop=True)
rba_df['Utilisation'] = (rba_df['balance'] / rba_df['limits'])
rba_df['Balance MoM'] = rba_df['balance'] - rba_df['balance'].shift(periods=1)
rba_df['Balance MoM Percent'] = rba_df['balance']/rba_df['balance'].shift(periods=1)-1
rba_df['Balance YoY'] = rba_df['balance'] - rba_df['balance'].shift(periods=12)
rba_df['Balance YoY Percent'] = rba_df['balance']/rba_df['balance'].shift(periods=12)-1
rba_df['Interest Balance Percent'] = rba_df['interest balance']/rba_df['balance']
rba_df['Transaction Value YoY'] = rba_df['transaction value']/rba_df['transaction value'].shift(periods=12)-1
rba_df = rba_df[rba_df['Date'] >= '2019-01-01'].reset_index(drop=True)
rba_df.head()
# Summary stats
# Latest month results for commentary
max_date = max(rba_df['Date'])
latest_date = rba_df[rba_df['Date'] == max_date]
latest_limits_fmt = '$' + human_format(latest_date['limits'].values[0])
latest_balance_fmt = '$' + human_format(latest_date['balance'].values[0])
latest_accounts_fmt = human_format(latest_date['accounts'].values[0])
latest_interest_balance_fmt = '$' + human_format(latest_date['interest balance'].values[0])
latest_transaction_value_fmt = '$' + human_format(latest_date['transaction value'].values[0])
latest_balance_MoM_fmt = '$' + human_format(latest_date['Balance MoM'].values[0])
latest_balance_MoMPer_fmt = '{:,.1f}'.format(latest_date['Balance MoM Percent'].values[0]*100) + '%'
latest_balance_YoY_fmt = '$' + human_format(latest_date['Balance YoY'].values[0])
latest_balance_YoYPer_fmt = '{:,.1f}'.format(latest_date['Balance YoY Percent'].values[0]*100) + '%'
# Commentary
print("In ", max_date.strftime("%b-%y"), ":", sep='')
print(" -There were", latest_accounts_fmt, "open Credit Cards.")
print(" -Total Limits were", latest_limits_fmt)
print(" -Total Balances were", latest_balance_fmt)
print(" -MoM Balances changed by ", latest_balance_MoM_fmt, " (", latest_balance_MoMPer_fmt, ")", sep='')
print(" -YoY Balances changed by ", latest_balance_YoY_fmt, " (", latest_balance_YoYPer_fmt, ")", sep='')
print(" -", latest_interest_balance_fmt, " of Balances were accruing Interest.", sep='')
print(" -", latest_transaction_value_fmt, " of Transactions were made.", sep='')
f, axes = plt.subplots(2,2, figsize=(15,12))
max_y_balance = int(round(max(rba_df['balance']),-10))
max_y_trans = int(round(max(rba_df['transaction value']),-10))
# Credit Card Balance
balance_graph = sns.lineplot(x='Date_Axis', y='balance', data=rba_df, color = 'red', ax=axes[0,0], label='Balance (LHS)')
balance_graph.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '${:,.0f}'.format(y/1E9) + 'B')) # Not sure how to set this programmatically
# Credit Card Utilisation Rate (Balance/Limit)
balance_graph2 = balance_graph.twinx()
balance_graph2.plot(balance_graph.get_xticks(), rba_df['Utilisation'], label='Utilisation (RHS)', linestyle='dashed')
balance_graph2.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '{:,.0f}'.format(y*100) + '%'))
# Credit Card Balance YoY Change
palette_custom = ['#B0B8B4FF', '#FC766AFF', '#184A45FF']
balance_YoY_graph = sns.barplot(x='Date_Axis', y='Balance YoY Percent', data=rba_df, hue='Year', palette = palette_custom,
dodge=False, ax=axes[0,1])
balance_YoY_graph.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '{:,.0f}'.format(y*100) + '%'))
# Credit Card Balance Accruing Interest
balance_interest_graph = sns.lineplot(x='Date_Axis', y='interest balance', data=rba_df, color ='red', ax=axes[1,0],
label='Interest Balance (LHS)')
balance_interest_graph.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '${:,.0f}'.format(y/1E9) + 'B'))
# % of Credit Card Balance Accruing Interest
balance_interest_graph2 = balance_interest_graph.twinx()
balance_interest_graph2.plot(balance_interest_graph2.get_xticks(), rba_df['Interest Balance Percent'],
label='% of Total Balance', linestyle='dashed')
balance_interest_graph2.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '{:,.0f}'.format(y*100) + '%'))
# Transaction Volumes
trans_val_graph = sns.lineplot(x='Month', y='transaction value', data=rba_df, hue = 'Year', palette = palette_custom,
ax=axes[1,1])
trans_val_graph.yaxis.set_major_formatter(ticker.FuncFormatter(lambda y, pos: '${:,.0f}'.format(y/1E9) + 'B'))
graphs = []
# Use value NONE where invalid
graphs.append({
'graph': balance_graph,
'title': 'Credit Card Balance',
'label-x': 'Date',
'label-y': 'Balance',
'legend': 'upper right',
'limit-x-lower': 'NONE',
'limit-x-upper': 'NONE',
'limit-y-lower': 0,
'limit-y-upper': max_y_balance,
'rotate-x-ticks': 45,
'date-axis-spread': 'Y',
'secondary-graph': balance_graph2,
'secondary-label-y': 'Utilisation',
'secondary-legend': 'lower right',
'secondary-limit-y-lower': 0,
'secondary-limit-y-upper': 1,
})
graphs.append({
'graph': balance_YoY_graph,
'title': 'Credit Card Balance - YoY Changes',
'label-x': 'Date',
'label-y': 'YoY Balance Change',
'legend': 'NONE',
'limit-x-lower': 'NONE',
'limit-x-upper': 'NONE',
'limit-y-lower': 'NONE',
'limit-y-upper': 'NONE',
'rotate-x-ticks': 45,
'date-axis-spread': 'Y',
'secondary-graph': 'NONE',
'secondary-label-y': 'NONE',
'secondary-legend': 'NONE',
'secondary-limit-y-lower': 'NONE',
'secondary-limit-y-upper': 'NONE',
})
graphs.append({
'graph': balance_interest_graph,
'title': 'Credit Card Balance - Balances Accruing Interest',
'label-x': 'Date',
'label-y': 'Interest Balance',
'legend': 'upper right',
'limit-x-lower': 'NONE',
'limit-x-upper': 'NONE',
'limit-y-lower': 0,
'limit-y-upper': max_y_balance,
'rotate-x-ticks': 45,
'date-axis-spread': 'Y',
'secondary-graph': balance_interest_graph2,
'secondary-label-y': '% of Total Balance',
'secondary-legend': 'lower right',
'secondary-limit-y-lower': 0,
'secondary-limit-y-upper': 1,
})
graphs.append({
'graph': trans_val_graph,
'title': 'Credit Card - Transaction Values',
'label-x': 'Month',
'label-y': 'Transaction Balance',
'legend': 'NONE',
'limit-x-lower': 'NONE',
'limit-x-upper': 'NONE',
'limit-y-lower': 0,
'limit-y-upper': max_y_trans,
'rotate-x-ticks': 45,
'date-axis-spread': 'N',
'secondary-graph': 'NONE',
'secondary-label-y': 'NONE',
'secondary-legend': 'NONE',
'secondary-limit-y-lower': 'NONE',
'secondary-limit-y-upper': 'NONE',
})
# need to add a counter here
for graph in graphs: #Loop to set all graph parameters
graph['graph'].set_title(graph['title'], loc='left')
graph['graph'].set(xlabel=graph['label-x'], ylabel=graph['label-y'])
if graph['legend'] != 'NONE': # Legend
graph['graph'].legend(loc=graph['legend'])
if graph['limit-x-lower'] != 'NONE': # X Limits
graph['graph'].set_xlim(graph['limit-x-lower'], graph['limit-x-upper'])
if graph['limit-y-lower'] != 'NONE': # Y Limits
graph['graph'].set_ylim(graph['limit-y-lower'], graph['limit-y-upper'])
if graph['rotate-x-ticks'] != 'NONE': # X Tick Rotation
graph['graph'].xaxis.set_tick_params(labelrotation=graph['rotate-x-ticks'])
if graph['date-axis-spread'] == 'Y':
for ind, label in enumerate(graph['graph'].get_xticklabels()):
if ind % 3 == 0: # every 3rd label is kept
label.set_visible(True)
else:
label.set_visible(False)
if graph['secondary-graph'] != 'NONE':
graph['secondary-graph'].set(ylabel=graph['secondary-label-y'])
if graph['secondary-legend'] != 'NONE':
graph['secondary-graph'].legend(loc=graph['secondary-legend'])
if graph['secondary-limit-y-lower'] != 'NONE':
graph['secondary-graph'].set_ylim(graph['secondary-limit-y-lower'], graph['secondary-limit-y-upper'])
plt.tight_layout()
plt.show()
```
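The MoM and YoY columns above are built with `Series.shift`; a minimal standalone illustration on synthetic monthly balances (not the RBA series):

```python
import pandas as pd

s = pd.Series([100.0, 110.0, 121.0], name="balance")  # synthetic monthly balances
mom = s - s.shift(periods=1)          # absolute month-over-month change
mom_pct = s / s.shift(periods=1) - 1  # month-over-month percent change
```

The first element of each derived series is `NaN`, since there is no prior period to compare against; the notebook above handles this by filtering to dates where a full comparison window exists.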
# Intro to Hidden Markov Models (optional)
---
### Introduction
In this notebook, you'll use the [Pomegranate](http://pomegranate.readthedocs.io/en/latest/index.html) library to build a simple Hidden Markov Model and explore the Pomegranate API.
<div class="alert alert-block alert-info">
**Note:** You are not required to complete this notebook and it will not be submitted with your project, but it is designed to quickly introduce the relevant parts of the Pomegranate library that you will need to complete the part of speech tagger.
</div>
The notebook already contains some code to get you started. You only need to add some new functionality in the areas indicated; you will not need to modify the included code beyond what is requested. Sections that begin with **'IMPLEMENTATION'** in the header indicate that you need to fill in code in the block that follows. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
<div class="alert alert-block alert-info">
**Note:** Code and Markdown cells can be executed using the `Shift + Enter` keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.
</div>
<hr>
<div class="alert alert-block alert-warning">
**Note:** Make sure you have selected a **Python 3** kernel in Workspaces or the hmm-tagger conda environment if you are running the Jupyter server on your own machine.
</div>
```
# Jupyter "magic methods" -- only need to be run once per kernel restart
%load_ext autoreload
%aimport helpers
%autoreload 1
# import python modules -- this cell needs to be run again if you make changes to any of the files
import matplotlib.pyplot as plt
import numpy as np
from helpers import show_model
from pomegranate import State, HiddenMarkovModel, DiscreteDistribution
```
## Build a Simple HMM
---
You will start by building a simple HMM network based on an example from the textbook [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/).
> You are the security guard stationed at a secret under-ground installation. Each day, you try to guess whether it’s raining today, but your only access to the outside world occurs each morning when you see the director coming in with, or without, an umbrella.
A simplified diagram of the required network topology is shown below.

### Describing the Network
<div class="alert alert-block alert-warning">
$\lambda = (A, B)$ specifies a Hidden Markov Model in terms of a state transition probability distribution $A$ and an emission probability distribution $B$.
</div>
HMM networks are parameterized by two distributions: the emission probabilties giving the conditional probability of observing evidence values for each hidden state, and the transition probabilities giving the conditional probability of moving between states during the sequence. Additionally, you can specify an initial distribution describing the probability of a sequence starting in each state.
<div class="alert alert-block alert-warning">
At each time $t$, $X_t$ represents the hidden state, and $Y_t$ represents an observation at that time.
</div>
In this problem, $t$ corresponds to each day of the week and the hidden state represent the weather outside (whether it is Rainy or Sunny) and observations record whether the security guard sees the director carrying an umbrella or not.
For example, during some particular week the guard may observe an umbrella ['yes', 'no', 'yes', 'no', 'yes'] on Monday-Friday, while the weather outside is ['Rainy', 'Sunny', 'Sunny', 'Sunny', 'Rainy']. In that case, $t=Wednesday$, $Y_{Wednesday}=yes$, and $X_{Wednesday}=Sunny$. (It might be surprising that the guard would observe an umbrella on a sunny day, but it is possible under this type of model.)
### Initializing an HMM Network with Pomegranate
The Pomegranate library supports [two initialization methods](http://pomegranate.readthedocs.io/en/latest/HiddenMarkovModel.html#initialization). You can either explicitly provide the three distributions, or you can build the network line-by-line. We'll use the line-by-line method for the example network, but you're free to use either method for the part of speech tagger.
```
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
```
### **IMPLEMENTATION**: Add the Hidden States
When the HMM model is specified line-by-line, the object starts as an empty container. The first step is to name each state and attach an emission distribution.
#### Observation Emission Probabilities: $P(Y_t | X_t)$
We need to assume that we have some prior knowledge (possibly from a data set) about the director's behavior to estimate the emission probabilities for each hidden state. In real problems you can often estimate the emission probabilities empirically, which is what we'll do for the part of speech tagger. Our imaginary data will produce the conditional probability table below. (Note that the rows sum to 1.0)
| | $yes$ | $no$ |
| --- | --- | --- |
| $Sunny$ | 0.10 | 0.90 |
| $Rainy$ | 0.80 | 0.20 |
```
# create the HMM model
model = HiddenMarkovModel(name="Example Model")
# emission probability distributions, P(umbrella | weather)
sunny_emissions = DiscreteDistribution({"yes": 0.1, "no": 0.9})
sunny_state = State(sunny_emissions, name="Sunny")
# TODO: create a discrete distribution for the rainy emissions from the probability table above & use that distribution to create a state named Rainy
rainy_emissions = DiscreteDistribution({"yes": 0.8, "no": 0.2})
rainy_state = State(rainy_emissions, name="Rainy")
# add the states to the model
model.add_states(sunny_state, rainy_state)
assert rainy_emissions.probability("yes") == 0.8, "The director brings his umbrella with probability 0.8 on rainy days"
print("Looks good so far!")
```
### **IMPLEMENTATION:** Adding Transitions
Once the states are added to the model, we can build up the desired topology of individual state transitions.
#### Initial Probability $P(X_0)$:
We will assume that we don't know anything useful about the likelihood of a sequence starting in either state. If the sequences start each week on Monday and end each week on Friday (so each week is a new sequence), then this assumption means that it's equally likely that the weather on a Monday may be Rainy or Sunny. We can assign equal probability to each starting state by setting $P(X_0=Rainy) = 0.5$ and $P(X_0=Sunny)=0.5$:
| $Sunny$ | $Rainy$ |
| --- | ---
| 0.5 | 0.5 |
#### State transition probabilities $P(X_{t} | X_{t-1})$
Finally, we will assume for this example that we can estimate transition probabilities from something like historical weather data for the area. In real problems you can often use the structure of the problem (like a language grammar) to impose restrictions on the transition probabilities, then re-estimate the parameters with the same training data used to estimate the emission probabilities. Under this assumption, we get the conditional probability table below. (Note that the rows sum to 1.0)
| | $Sunny$ | $Rainy$ |
| --- | --- | --- |
|$Sunny$| 0.80 | 0.20 |
|$Rainy$| 0.40 | 0.60 |
```
# create edges for each possible state transition in the model
# equal probability of a sequence starting on either a rainy or sunny day
model.add_transition(model.start, sunny_state, 0.5)
model.add_transition(model.start, rainy_state, 0.5)
# add sunny day transitions (we already know estimates of these probabilities from the problem statement)
model.add_transition(sunny_state, sunny_state, 0.8) # 80% sunny->sunny
model.add_transition(sunny_state, rainy_state, 0.2) # 20% sunny->rainy
# TODO: add rainy day transitions using the probabilities specified in the transition table
model.add_transition(rainy_state, sunny_state, 0.4) # 40% rainy->sunny
model.add_transition(rainy_state, rainy_state, 0.6) # 60% rainy->rainy
# finally, call the .bake() method to finalize the model
model.bake()
assert model.edge_count() == 6, "There should be two edges from model.start, two from Rainy, and two from Sunny"
assert model.node_count() == 4, "The states should include model.start, model.end, Rainy, and Sunny"
print("Great! You've finished the model.")
```
## Visualize the Network
---
We have provided a helper function called `show_model()` that generates a PNG image from a Pomegranate HMM network. You can specify an optional filename to save the file to disk. Setting the "show_ends" argument to True will add the model start & end states that are included in every Pomegranate network.
```
show_model(model, figsize=(7, 7), filename="example.png", overwrite=True, show_ends=False)
```
### Checking the Model
The states of the model can be accessed using array syntax on the `HMM.states` attribute, and the transition matrix can be accessed by calling `HMM.dense_transition_matrix()`. Element $(i, j)$ encodes the probability of transitioning from state $i$ to state $j$. For example, with the column order specified in the next cell, element $(2, 1)$ gives the probability of transitioning from "Rainy" to "Sunny", which we specified as 0.4.
Run the next cell to inspect the full state transition matrix.
```
column_order = ["Example Model-start", "Sunny", "Rainy", "Example Model-end"] # Override the Pomegranate default order
column_names = [s.name for s in model.states] #['Rainy', 'Sunny', 'Example Model-start', 'Example Model-end']
order_index = [column_names.index(c) for c in column_order] #[2, 1, 0, 3]
# re-order the rows/columns to match the specified column order
transitions = model.dense_transition_matrix()[:, order_index][order_index, :]
print("The state transition matrix, P(Xt|Xt-1):\n")
print(transitions)
print("\nThe transition probability from Rainy to Sunny is {:.0f}%".format(100 * transitions[2, 1]))
```
## Inference in Hidden Markov Models
---
Before moving on, we'll use this simple network to quickly go over the Pomegranate API to perform the three most common HMM tasks:
<div class="alert alert-block alert-info">
**Likelihood Evaluation**<br>
Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $P(Y|\lambda)$, the likelihood of observing that sequence from the model
</div>
We can use the weather prediction model to evaluate the likelihood of the sequence [yes, yes, yes, yes, yes] (or any other state sequence). The likelihood is often used in problems like machine translation to weight interpretations in conjunction with a statistical language model.
<div class="alert alert-block alert-info">
**Hidden State Decoding**<br>
Given a model $\lambda=(A,B)$ and a set of observations $Y$, determine $Q$, the most likely sequence of hidden states in the model to produce the observations
</div>
We can use the weather prediction model to determine the most likely sequence of Rainy/Sunny states for a known observation sequence, like [yes, no] -> [Rainy, Sunny]. We will use decoding in the part of speech tagger to determine the tag for each word of a sentence. The decoding can be further split into "smoothing" when we want to calculate past states, "filtering" when we want to calculate the current state, or "prediction" if we want to calculate future states.
<div class="alert alert-block alert-info">
**Parameter Learning**<br>
Given a model topography (set of states and connections) and a set of observations $Y$, learn the transition probabilities $A$ and emission probabilities $B$ of the model, $\lambda=(A,B)$
</div>
We don't need to learn the model parameters for the weather problem or POS tagging, but it is supported by Pomegranate.
### IMPLEMENTATION: Calculate Sequence Likelihood
Calculating the likelihood of an observation sequence from an HMM network is performed with the [forward algorithm](https://en.wikipedia.org/wiki/Forward_algorithm). Pomegranate provides the `HMM.forward()` method to calculate the full matrix showing the likelihood of aligning each observation to each state in the HMM, and the `HMM.log_probability()` method to calculate the cumulative likelihood, over all possible hidden state paths, that the specified model generated the observation sequence.
Fill in the code in the next section with a sample observation sequence and then use the `forward()` and `log_probability()` methods to evaluate the sequence.
```
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
assert len(observations) > 0, "You need to choose a sequence of 'yes'/'no' observations to test"
# TODO: use model.forward() to calculate the forward matrix of the observed sequence,
# and then use np.exp() to convert from log-likelihood to likelihood
forward_matrix = np.exp(model.forward(observations))
# TODO: use model.log_probability() to calculate the all-paths likelihood of the
# observed sequence and then use np.exp() to convert log-likelihood to likelihood
probability_percentage = np.exp(model.log_probability(observations))
# Display the forward probabilities
print(" " + "".join(s.name.center(len(s.name)+6) for s in model.states))
for i in range(len(observations) + 1):
    print(" <start> " if i==0 else observations[i - 1].center(9), end="")
    print("".join("{:.0f}%".format(100 * forward_matrix[i, j]).center(len(s.name) + 6)
                  for j, s in enumerate(model.states)))

print("\nThe likelihood over all possible paths " + \
      "of this model producing the sequence {} is {:.2f}%\n\n"
      .format(observations, 100 * probability_percentage))
```
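The same computation can be reproduced without Pomegranate. The sketch below implements the forward recursion directly in NumPy, assuming the Rainy/Sunny transition and emission values that also appear in the comparison cell near the end of this notebook (uniform start, Sunny→Sunny 0.8, Rainy→Sunny 0.4, P(yes|Sunny)=0.1, P(yes|Rainy)=0.8):

```python
import numpy as np

# Assumed Rainy/Sunny parameters (the same numbers as the comparison cell below)
start = np.array([0.5, 0.5])                  # uniform initial distribution
A = np.array([[0.8, 0.2],                     # from Sunny to [Sunny, Rainy]
              [0.4, 0.6]])                    # from Rainy to [Sunny, Rainy]
B = {'yes': np.array([0.1, 0.8]),             # P(umbrella = yes | [Sunny, Rainy])
     'no':  np.array([0.9, 0.2])}             # P(umbrella = no  | [Sunny, Rainy])

def forward_likelihood(observations):
    """Sum the observation probability over all hidden-state paths."""
    alpha = start * B[observations[0]]
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[obs]          # propagate one step, then weight by emission
    return float(alpha.sum())

print(forward_likelihood(['yes', 'no', 'yes']))
```

This returns the same all-paths likelihood that `np.exp(model.log_probability(observations))` computes above.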
### IMPLEMENTATION: Decoding the Most Likely Hidden State Sequence
The [Viterbi algorithm](https://en.wikipedia.org/wiki/Viterbi_algorithm) calculates the single path with the highest likelihood to produce a specific observation sequence. Pomegranate provides the `HMM.viterbi()` method to calculate both the hidden state sequence and the corresponding likelihood of the Viterbi path.
This is called "decoding" because we use the observation sequence to decode the corresponding hidden state sequence. In the part of speech tagging problem, the hidden states map to parts of speech and the observations map to sentences. Given a sentence, Viterbi decoding finds the most likely sequence of part of speech tags corresponding to the sentence.
Fill in the code in the next section with the same sample observation sequence you used above, and then use the `model.viterbi()` method to calculate the likelihood and most likely state sequence. Compare the Viterbi likelihood against the forward algorithm likelihood for the observation sequence.
```
# TODO: input a sequence of 'yes'/'no' values in the list below for testing
observations = ['yes', 'no', 'yes']
# TODO: use model.viterbi to find the sequence likelihood & the most likely path
viterbi_likelihood, viterbi_path = model.viterbi(observations)
print("The most likely weather sequence to have generated " + \
      "these observations is {} at {:.2f}%."
      .format([s[1].name for s in viterbi_path[1:]], np.exp(viterbi_likelihood)*100))
```
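For intuition, Viterbi decoding can also be written out in NumPy, again assuming the Rainy/Sunny parameters from the comparison cell below (this is a sketch, not the Pomegranate implementation):

```python
import numpy as np

# Assumed Rainy/Sunny parameters (the same numbers as the comparison cell below)
states = ['Sunny', 'Rainy']
start = np.array([0.5, 0.5])
A = np.array([[0.8, 0.2],    # from Sunny to [Sunny, Rainy]
              [0.4, 0.6]])   # from Rainy to [Sunny, Rainy]
B = {'yes': np.array([0.1, 0.8]), 'no': np.array([0.9, 0.2])}

def viterbi_decode(observations):
    """Return (most likely state path, its probability)."""
    delta = start * B[observations[0]]      # best-path probability ending in each state
    backpointers = []
    for obs in observations[1:]:
        trans = delta[:, None] * A          # probability of each (previous, next) pair
        backpointers.append(trans.argmax(axis=0))
        delta = trans.max(axis=0) * B[obs]
    # Trace the best path backwards from the most likely final state
    best = [int(delta.argmax())]
    for bp in reversed(backpointers):
        best.append(int(bp[best[-1]]))
    path = [states[i] for i in reversed(best)]
    return path, float(delta.max())

print(viterbi_decode(['yes', 'no', 'yes']))
```

Note that the Viterbi probability is that of a single path, so it is always less than or equal to the all-paths forward likelihood.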
### Forward likelihood vs Viterbi likelihood
Run the cell below to see the likelihood of a fixed observation sequence under each possible weather sequence of length 3, and compare with the Viterbi path.
```
from itertools import product
observations = ['no', 'no', 'yes']
p = {'Sunny': {'Sunny': np.log(.8), 'Rainy': np.log(.2)}, 'Rainy': {'Sunny': np.log(.4), 'Rainy': np.log(.6)}}
e = {'Sunny': {'yes': np.log(.1), 'no': np.log(.9)}, 'Rainy':{'yes':np.log(.8), 'no':np.log(.2)}}
o = observations
k = []
vprob = np.exp(model.viterbi(o)[0])
print("The likelihood of observing {} if the weather sequence is...".format(o))
for s in product(*[['Sunny', 'Rainy']]*3):
    k.append(np.exp(np.log(.5)+e[s[0]][o[0]] + p[s[0]][s[1]] + e[s[1]][o[1]] + p[s[1]][s[2]] + e[s[2]][o[2]]))
    print("\t{} is {:.2f}% {}".format(s, 100 * k[-1], " <-- Viterbi path" if k[-1] == vprob else ""))
print("\nThe total likelihood of observing {} over all possible paths is {:.2f}%".format(o, 100*sum(k)))
```
### Congratulations!
You've now finished the HMM warmup. You should have all the tools you need to complete the part of speech tagger project.
# Work Done by Friction
Alya Ismia Rudiana<sup>1</sup> <br>
Undergraduate Physics Program, Institut Teknologi Bandung <br>
Jalan Ganesha 10, Bandung 40132, Indonesia <br>
<sup>1</sup>alyarusdiana2001@gmail.com, https://github.com/alyarusdiana10 <br>
The work done by friction is an undesired form of work: the energy it dissipates, usually as heat or sound released to the surroundings, can no longer be used by the system, so the system's energy decreases.
## Motion of an object on a rough horizontal floor
The system under consideration is an object moving on a rough horizontal floor. The object is given some initial velocity and slows down until it stops, because of the kinetic friction between the object and the rough floor.
## Parameters
The parameters used are listed in the following table.
Table <a name='tab1'>1</a>. Symbols with their units and meanings.
Symbol | Unit | Meaning
:- | :- | :-
$t$ | s | time
$v_0$ | m/s | initial velocity
$x_0$ | m | initial position
$v$ | m/s | velocity at time $t$
$x$ | m | position at time $t$
$a$ | m/s<sup>2</sup> | acceleration
$\mu_k$ | - | coefficient of kinetic friction
$f_k$ | N | kinetic friction force
$m$ | kg | mass of the object
$F$ | N | total force acting on the object
$N$ | N | normal force
$w$ | N | gravitational force
The symbols in Table [1](#tab1) will be assigned values later, when they are implemented in the program.
## Equations
The equations to be used are listed in this section.
### Kinematics
The relation between the velocity $v$, the initial velocity $v_0$, the acceleration $a$, and the time $t$ is given by
<a name='eqn1'></a>
\begin{equation}\label{eqn:kinematics-v-a-t}\tag{1}
v = v_0 + at.
\end{equation}
The position of the object $x$ depends on the initial position $x_0$, the initial velocity $v_0$, the acceleration $a$, and the time $t$ through
<a name='eqn2'></a>
\begin{equation}\label{eqn:kinematics-x-v-a-t}\tag{2}
x = x_0 + v_0 t + \tfrac12 at^2.
\end{equation}
In addition to the two previous equations, there is also
<a name='eqn3'></a>
\begin{equation}\label{eqn:kinematics-v-x-a}\tag{3}
v^2 = v_0^2 + 2a(x - x_0),
\end{equation}
which relates the velocity $v$ to the initial velocity $v_0$, the acceleration $a$, and the distance travelled $x - x_0$.
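Equation ([3](#eqn3)) is especially convenient for friction problems, because the final velocity is known: the object stops. A minimal numerical sketch, with assumed illustrative values:

```python
# Illustrative check of v^2 = v_0^2 + 2a(x - x_0) for deceleration
# by kinetic friction alone. All numbers are assumed values.
g = 9.8        # gravitational acceleration (m/s^2)
mu_k = 0.2     # coefficient of kinetic friction
v0 = 4.0       # initial speed (m/s)

a = -mu_k * g                          # from f_k = mu_k * m * g; the mass cancels
stopping_distance = -v0**2 / (2 * a)   # set v = 0 and solve for x - x_0
print(stopping_distance)
```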
### Dynamics
Newton's first law states that an object at rest stays at rest, and an object moving at constant velocity keeps moving at constant velocity, when no force acts on it or the forces acting on it sum to zero,
<a name='eqn4'></a>
\begin{equation}\label{eqn:newtons-law-1}\tag{4}
\sum F = 0.
\end{equation}
If a force acts on an object of mass $m$, or the forces acting on it do not sum to zero,
<a name='eqn5'></a>
\begin{equation}\label{eqn:newtons-law-2}\tag{5}
\sum F = ma,
\end{equation}
then the state of motion of the object changes through the acceleration $a$, with $m > 0$ and $a \ne 0$.
### Work
The work done by a force $F$ between an initial position $x_0$ and a final position $x$ can be obtained via
<a name='eqn6'></a>
\begin{equation}\label{eqn:work-1}\tag{6}
W = \int_{x_0}^x F dx
\end{equation}
or via
<a name='eqn7'></a>
\begin{equation}\label{eqn:work-2}\tag{7}
W = \Delta K
\end{equation}
where $K$ is the kinetic energy. Equation ([7](#eqn7)) gives the work done by all forces combined. Therefore, if $F$ is the only force acting on the object, this equation reduces to Equation ([6](#eqn6)).
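As a consistency check of these two definitions of work, the sketch below (with assumed values and a constant $\mu_k$) computes the friction work over the stopping distance and compares it with the change in kinetic energy:

```python
# Illustrative check that the friction work equals the kinetic-energy change.
# Assumed values; mu_k is constant here, and friction is the only force.
m, g, mu_k, v0 = 2.0, 9.8, 0.2, 4.0

distance = v0**2 / (2 * mu_k * g)        # stopping distance, kinematics with v = 0
W_friction = -mu_k * m * g * distance    # work integral with constant f_k opposing motion
delta_K = 0.0 - 0.5 * m * v0**2          # change in kinetic energy: the object ends at rest

print(W_friction, delta_K)
```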
## System
An illustration of the system is needed so that it can be visualized, which makes solving the problem easier. A diagram of the forces acting on the object also needs to be presented.
### Illustration
The system, an object of mass $m$ moving on a rough floor, can be drawn
as follows.
```
%%html
<svg
width="320"
height="140"
viewBox="0 0 320 140.00001"
id="svg2"
version="1.1"
inkscape:version="1.1.2 (b8e25be833, 2022-02-05)"
sodipodi:docname="mass-horizontal-rough-surface.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<defs
id="defs4">
<marker
style="overflow:visible"
id="TriangleOutM"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479" />
</marker>
<marker
style="overflow:visible"
id="marker11604"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Mend"
inkscape:isstock="true">
<path
transform="scale(-0.6)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path11602" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Mend"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Mend"
inkscape:isstock="true">
<path
transform="scale(-0.6)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path11361" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-1" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-35"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-0" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-0"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-4" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-37"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-9" />
</marker>
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="1.5"
inkscape:cx="173"
inkscape:cy="97.333333"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:snap-bbox="false"
inkscape:snap-global="false"
units="px"
showborder="true"
inkscape:showpageshadow="true"
borderlayer="false"
inkscape:window-width="1366"
inkscape:window-height="705"
inkscape:window-x="-8"
inkscape:window-y="-8"
inkscape:window-maximized="1"
inkscape:pagecheckerboard="0">
<inkscape:grid
type="xygrid"
id="grid970" />
</sodipodi:namedview>
<metadata
id="metadata7">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(0,-732.36216)">
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="120.0725"
y="759.6109"
id="text2711-6-2-9"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2"
x="120.0725"
y="759.6109"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '"><tspan
style="font-style:italic"
id="tspan9923">v</tspan><tspan
style="font-size:65%;baseline-shift:sub"
id="tspan1668">0</tspan></tspan></text>
<path
style="fill:none;stroke:#000000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM)"
d="m 84.656156,757.55169 25.738704,1.3e-4"
id="path11252" />
<rect
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-opacity:1"
id="rect1007"
width="59"
height="59"
x="56.5"
y="772.86218"
rx="0"
ry="0" />
<path
style="fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1"
d="m 20,832.86218 280,-2e-5"
id="path1386" />
<rect
style="fill:#ffffff;fill-opacity:1;stroke:#c8c8c8;stroke-width:0.5;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:2, 2;stroke-dashoffset:0;stroke-opacity:1"
id="rect1007-2"
width="59"
height="59"
x="225.16667"
y="772.86218"
rx="0"
ry="0" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#c8c8c8;fill-opacity:1;stroke:none"
x="236.05922"
y="759.6109"
id="text2711-6-2-9-9"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-8"
x="236.05922"
y="759.6109"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, ';fill:#c8c8c8;fill-opacity:1"><tspan
style="font-style:italic;fill:#c8c8c8;fill-opacity:1"
id="tspan9923-8">v</tspan> = 0</tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="149.18359"
y="824.54877"
id="text2711-6-2-9-96"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-6"
x="149.18359"
y="824.54877"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '"><tspan
style="font-style:italic"
id="tspan3028">μ<tspan
style="font-size:65%;baseline-shift:sub"
id="tspan3074">k</tspan></tspan> > 0</tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="79.505844"
y="806.37714"
id="text2711-6-2-9-2"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-84"
x="79.505844"
y="806.37714"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">m</tspan></text>
<path
style="fill:none;stroke:#000000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM-37)"
d="m 33.785239,770.82609 -1.3e-4,25.7387"
id="path11252-5" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="29.173132"
y="759.45776"
id="text2711-6-2-9-8"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-2"
x="29.173132"
y="759.45776"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">g</tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="79.368446"
y="849.21539"
id="text2711-6-2-9-23"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-3"
x="79.368446"
y="849.21539"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '"><tspan
style="font-style:italic"
id="tspan9923-0">x</tspan><tspan
style="font-size:65%;baseline-shift:sub"
id="tspan1668-9">0</tspan></tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="250.91145"
y="849.21539"
id="text2711-6-2-9-23-0"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-3-0"
x="250.91145"
y="849.21539"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '"><tspan
style="font-style:italic"
id="tspan9923-0-3">x</tspan><tspan
style="font-size:65%;baseline-shift:sub"
id="tspan1668-9-3" /></tspan></text>
</g>
</svg>
<br/>
Figure <a name='fig1'>1</a>. An object of mass $m$ moving on a rough
horizontal floor with kinetic friction coefficient $\mu_k$.
```
The final state of the object, when its velocity $v = 0$, is shown in gray on the right side of Figure [1](#fig1).
### Force diagram
A diagram of the forces acting on the object needs to be drawn based on the information in Figure [1](#fig1) and Table [1](#tab1); it is given below.
```
%%html
<svg
width="320"
height="200"
viewBox="0 0 320 200.00001"
id="svg2"
version="1.1"
inkscape:version="1.1.2 (b8e25be833, 2022-02-05)"
sodipodi:docname="mass-horizontal-rough-surface-fbd.svg"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/">
<defs
id="defs4">
<marker
style="overflow:visible"
id="TriangleOutM"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479" />
</marker>
<marker
style="overflow:visible"
id="marker11604"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Mend"
inkscape:isstock="true">
<path
transform="scale(-0.6)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path11602" />
</marker>
<marker
style="overflow:visible"
id="Arrow2Mend"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="Arrow2Mend"
inkscape:isstock="true">
<path
transform="scale(-0.6)"
d="M 8.7185878,4.0337352 -2.2072895,0.01601326 8.7185884,-4.0017078 c -1.7454984,2.3720609 -1.7354408,5.6174519 -6e-7,8.035443 z"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:0.625;stroke-linejoin:round"
id="path11361" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-1" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-35"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-0" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-0"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-4" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-37"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-9" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-9"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-8" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-9-3"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-8-3" />
</marker>
<marker
style="overflow:visible"
id="TriangleOutM-37-5"
refX="0"
refY="0"
orient="auto"
inkscape:stockid="TriangleOutM"
inkscape:isstock="true">
<path
transform="scale(0.4)"
style="fill:context-stroke;fill-rule:evenodd;stroke:context-stroke;stroke-width:1pt"
d="M 5.77,0 -2.88,5 V -5 Z"
id="path11479-9-9" />
</marker>
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="1.2079428"
inkscape:cx="159.36185"
inkscape:cy="35.597712"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
inkscape:snap-bbox="false"
inkscape:snap-global="false"
units="px"
showborder="true"
inkscape:showpageshadow="true"
borderlayer="false"
inkscape:window-width="1366"
inkscape:window-height="705"
inkscape:window-x="-8"
inkscape:window-y="-8"
inkscape:window-maximized="1"
inkscape:pagecheckerboard="0">
<inkscape:grid
type="xygrid"
id="grid970" />
</sodipodi:namedview>
<metadata
id="metadata7">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="Layer 1"
inkscape:groupmode="layer"
id="layer1"
transform="translate(0,-732.36216)">
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="148.01953"
y="766.72156"
id="text2711-6-2-9-23"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-3"
x="148.01953"
y="766.72156"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">N</tspan></text>
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="251.40584"
y="806.94421"
id="text2711-6-2-9"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2"
x="251.40584"
y="806.94421"
style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '"><tspan
style="font-style:italic"
id="tspan9923">v</tspan><tspan
style="font-size:65%;baseline-shift:sub"
id="tspan1668" /></tspan></text>
<path
style="fill:none;stroke:#000000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM)"
d="m 215.98949,804.88502 25.7387,1.3e-4"
id="path11252" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="153.68098"
y="915.71051"
id="text2711-6-2-9-2"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-84"
x="153.68098"
y="915.71051"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">w</tspan></text>
<path
style="fill:none;stroke:#000000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM-37)"
d="m 31.113403,791.97918 -1.3e-4,25.7387"
id="path11252-5" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="26.501303"
y="780.6109"
id="text2711-6-2-9-8"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-2"
x="26.501303"
y="780.6109"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">g</tspan></text>
<rect
style="fill:#ffffff;fill-opacity:1;stroke:#000000;stroke-width:1;stroke-linecap:round;stroke-opacity:1"
id="rect1007"
width="59"
height="59"
x="130.5"
y="792.86218"
rx="0"
ry="0" />
<g
id="g1363"
transform="translate(-6,20)">
<path
style="fill:none;stroke:#ff0000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM-9)"
d="m 161.00001,831.69534 -45.73871,1.3e-4"
id="path11252-4" />
<path
style="fill:none;stroke:#0000ff;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM-9-3)"
d="m 160.79738,832.36215 -1.3e-4,-75.7387"
id="path11252-4-6" />
</g>
<path
style="fill:none;stroke:#000000;stroke-width:1.5;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1;marker-end:url(#TriangleOutM-37-5)"
d="m 159.99967,822.02879 3.4e-4,75.73871"
id="path11252-5-0" />
<text
xml:space="preserve"
style="font-style:normal;font-weight:normal;font-size:18.6667px;line-height:1.25;font-family:sans-serif;fill:#000000;fill-opacity:1;stroke:none"
x="85.624084"
y="854.51099"
id="text2711-6-2-9-8-4"><tspan
sodipodi:role="line"
id="tspan2709-5-9-2-2-1"
x="85.624084"
y="854.51099"
style="font-style:italic;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:18.6667px;font-family:'Times New Roman';-inkscape-font-specification:'Times New Roman, '">f<tspan
style="font-size:65%;baseline-shift:sub"
id="tspan2197">k</tspan></tspan></text>
</g>
</svg>
<br>
Figure <a name='fig2'>2</a>. Diagram of the forces acting on the object
of mass $m$.
```
It can be seen that in the $y$ direction there are the normal force $N$ and the gravitational force $w$, while in the $x$ direction there is only the kinetic friction force $f_k$, which opposes the motion of the object. The direction of motion is given by the direction of the velocity $v$.
## Numerical method
The integral of a function $f(x)$ of the form
<a name='eqn8'></a>
\begin{equation}\label{eqn:integral-1}\tag{8}
A = \int_a^b f(x) dx
\end{equation}
can be approximated by
<a name='eqn9'></a>
\begin{equation}\label{eqn:integral-2}\tag{9}
A \approx \sum_{i = 0}^{N-1} f\left[ \tfrac12(x_i + x_{i+1}) \right] \Delta x,
\end{equation}
which is known as the midpoint rectangle rule, where
<a name='eqn10'></a>
\begin{equation}\label{eqn:integral-3}\tag{10}
\Delta x = \frac{b - a}{N}
\end{equation}
with $N$ the number of partitions. The variable $x_i$ in Equation ([9](#eqn9)) is given by
<a name='eqn11'></a>
\begin{equation}\label{eqn:integral-4}\tag{11}
x_i = a + i\Delta x
\end{equation}
with $i = 0, \dots, N$.
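The midpoint rule described above can be sketched directly in NumPy; the integrand used here is only a verification case with a known exact value.

```python
import numpy as np

def midpoint_integral(f, a, b, N=1000):
    """Approximate the integral of f over [a, b] with the midpoint rule."""
    dx = (b - a) / N                           # partition width
    x = a + np.arange(N) * dx                  # left edges x_i of each partition
    return float(np.sum(f(x + dx / 2)) * dx)   # evaluate f at interval midpoints

# Verify against a known value: the integral of x^2 over [0, 1] is 1/3
print(midpoint_integral(lambda x: x**2, 0.0, 1.0))
```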
## Solution
Applying Equations ([1](#eqn1)), ([2](#eqn2)), ([3](#eqn3)), ([4](#eqn4)), and ([5](#eqn5)) to Figure [2](#fig2) yields
<a name='eqn12'></a>
\begin{equation}\label{eqn:friction}\tag{12}
f_k = \mu_k mg
\end{equation}
and the work it does is
<a name='eqn13'></a>
\begin{equation}\label{eqn:friction-work}\tag{13}
\begin{array}{rcl}
W & = & \displaystyle \int_{x_0}^x f_k dx \newline
& = & \displaystyle \int_{x_0}^x \mu_k m g dx \newline
& = & \displaystyle m g \int_{x_0}^x \mu_k dx
\end{array}
\end{equation}
where the coefficient of kinetic friction may be a function of position, $\mu_k = \mu_k(x)$.
```
import numpy as np
import matplotlib.pyplot as plt
plt.ion()

# Integration bounds: initial position x_0 = a and final position x = b
a = 0.0
b = 1.0

# Position-dependent coefficient of kinetic friction (illustrative choice)
def mu_k(x):
    return 0.2 + 0.1 * x

m = 1.0   # mass (kg), assumed value
g = 9.8   # gravitational acceleration (m/s^2)

# Midpoint-rule integration of W = m g * integral of mu_k(x) dx
N = 100
dx = (b - a) / N
xm = a + (np.arange(N) + 0.5) * dx     # interval midpoints
x = a + np.arange(1, N + 1) * dx       # distance travelled at each step
y = m * g * np.cumsum(mu_k(xm)) * dx   # cumulative work W(x)

## plot results
fig, ax = plt.subplots()
ax.scatter(x, y)
ax.set_xlabel("$x - x_0$")
ax.set_ylabel("$W$")
from IPython import display
from IPython.core.display import HTML
HTML('''
<div>
Figure <a name='fig3'>3</a>. Curve of the work $W$ versus the distance travelled $x - x_0$.
</div>
''')
```
## Discussion
Based on Figure [3](#fig3), it can be seen that with $\mu_k = \mu_k(x)$ the curve $W(x)$ is no longer linear, since the work accumulated depends on how far along the path it is computed.
## Conclusion
The work has been computed successfully for the case $\mu_k = \mu_k(x)$.
## References
1. J. A. C. Martins, J. T. Oden, F. M. F. Simões, "A study of static and kinetic friction", International Journal of Engineering Science, vol. 28, no. 1, pp. 29-92, 1990, url <https://doi.org/10.1016/0020-7225(90)90014-A>.
2. Carl Rod Nave, "Friction", HyperPhysics, 2017, url <http://hyperphysics.phy-astr.gsu.edu/hbase/frict.html#fri> [accessed 2022-04-19].
3. Wikipedia contributors, "Friction", Wikipedia, The Free Encyclopedia, 12 April 2022, 00:33 UTC, url <https://en.wikipedia.org/w/index.php?oldid=1082223658> [accessed 2022-04-19].
4. Tia Ghose, Ailsa Harvey, "What is friction?", Live Science, 8 Feb 2022, url <https://www.livescience.com/37161-what-is-friction.html> [accessed 2022-04-19].
.. meta::
:description: A guide which introduces the most important steps to get started with pymoo, an open-source multi-objective optimization framework in Python.
.. meta::
:keywords: Multi-objective Optimization, Python, Evolutionary Computation, Optimization Test Problem, Hypervolume
# Getting Started
In the following, we would like to introduce *pymoo* by presenting an example optimization scenario. This guide goes through the essential steps to get started with our framework and is structured as follows:
1. Introduction to Multi-objective Optimization and an exemplary Test Problem
2. Implementation of a Problem (vectorized, element-wise or functional)
3. Initialization of an Algorithm (in our case NSGA2)
4. Definition of a Termination Criterion
5. Optimize (functional through `minimize` or object-oriented by calling `next()`)
6. Visualization of Results and Convergence
7. Summary
8. Source code (in one piece)
We try to cover the essential steps you have to follow to get started optimizing your own problem, and we have also included some a posteriori analysis, which is known to be particularly important in multi-objective optimization.
## 1. Introduction
### Multi-Objective Optimization
In general, a multi-objective optimization problem has several objective functions to be optimized subject to inequality and equality constraints <cite data-cite="multi_objective_book"></cite>. The goal is to find a set of solutions that do not violate any constraint and are as good as possible with respect to all objective values. The problem definition in its general form is given by:
\begin{align}
\begin{split}
\min \quad& f_{m}(x) \quad \quad \quad \quad m = 1,..,M \\[4pt]
\text{s.t.} \quad& g_{j}(x) \leq 0 \quad \; \; \, \quad j = 1,..,J \\[2pt]
\quad& h_{k}(x) = 0 \quad \; \; \quad k = 1,..,K \\[4pt]
\quad& x_{i}^{L} \leq x_{i} \leq x_{i}^{U} \quad i = 1,..,N \\[2pt]
\end{split}
\end{align}
The formulation above defines a multi-objective optimization problem with $N$ variables, $M$ objectives, $J$ inequality and $K$ equality constraints. Moreover, for each variable $x_i$ lower and upper variable boundaries ($x_i^L$ and $x_i^U$) are defined.
### Test Problem
In the following, we investigate an exemplary bi-objective optimization problem with two constraints.
We tried to select a problem with enough complexity for demonstration purposes, but not so difficult that the overall idea gets lost. Its definition is given by:
\begin{align}
\begin{split}
\min \;\; & f_1(x) = (x_1^2 + x_2^2) \\
\max \;\; & f_2(x) = -(x_1-1)^2 - x_2^2 \\[1mm]
\text{s.t.} \;\; & g_1(x) = 2 \, (x_1 - 0.1) \, (x_1 - 0.9) \leq 0\\
& g_2(x) = 20 \, (x_1 - 0.4) \, (x_1 - 0.6) \geq 0\\[1mm]
& -2 \leq x_1 \leq 2 \\
& -2 \leq x_2 \leq 2
\end{split}
\end{align}
It consists of two objectives ($M=2$) where $f_1(x)$ is minimized and $f_2(x)$ maximized. The optimization is subject to two inequality constraints ($J=2$) where $g_1(x)$ is formulated as a less-than and $g_2(x)$ as a greater-than constraint. The problem is defined with respect to two variables ($N=2$), $x_1$ and $x_2$, both of which are in the range $[-2,2]$. The problem does not contain any equality constraints ($K=0$).
```
import numpy as np
import matplotlib.pyplot as plt

plt.rc('font', family='serif')

X1, X2 = np.meshgrid(np.linspace(-2, 2, 500), np.linspace(-2, 2, 500))
F1 = X1**2 + X2**2
F2 = (X1 - 1)**2 + X2**2

G1 = 2 * (X1[0] - 0.1) * (X1[0] - 0.9)
G2 = 20 * (X1[0] - 0.4) * (X1[0] - 0.6)

levels = [0.02, 0.1, 0.25, 0.5, 0.8]
plt.figure(figsize=(7, 5))
CS = plt.contour(X1, X2, F1, levels, colors='black', alpha=0.5)
CS.collections[0].set_label("$f_1(x)$")
CS = plt.contour(X1, X2, F2, levels, linestyles="dashed", colors='black', alpha=0.5)
CS.collections[0].set_label("$f_2(x)$")
plt.plot(X1[0], G1, linewidth=2.0, color="green", linestyle='dotted')
plt.plot(X1[0][G1<0], G1[G1<0], label="$g_1(x)$", linewidth=2.0, color="green")
plt.plot(X1[0], G2, linewidth=2.0, color="blue", linestyle='dotted')
plt.plot(X1[0][X1[0]>0.6], G2[X1[0]>0.6], label="$g_2(x)$",linewidth=2.0, color="blue")
plt.plot(X1[0][X1[0]<0.4], G2[X1[0]<0.4], linewidth=2.0, color="blue")
plt.plot(np.linspace(0.1,0.4,100), np.zeros(100),linewidth=3.0, color="orange")
plt.plot(np.linspace(0.6,0.9,100), np.zeros(100),linewidth=3.0, color="orange")
plt.xlim(-0.5, 1.5)
plt.ylim(-0.5, 1)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.legend(loc='upper center', bbox_to_anchor=(0.5, 1.12),
ncol=4, fancybox=True, shadow=False)
plt.tight_layout()
plt.show()
```
The figure above shows the contours of the problem. The contour lines of the objective function $f_1(x)$ are represented by solid lines and those of $f_2(x)$ by dashed lines. The constraints $g_1(x)$ and $g_2(x)$ are parabolas which intersect the $x_1$-axis at $(0.1, 0.9)$ and $(0.4, 0.6)$, respectively. A thick orange line illustrates the Pareto-optimal set. Through the combination of both constraints, the Pareto set is split into two parts.
Analytically, the Pareto-optimal set is given by $PS = \{(x_1, x_2) \,|\, \big((0.1 \leq x_1 \leq 0.4) \lor (0.6 \leq x_1 \leq 0.9)\big) \, \land \, x_2 = 0\}$ and the Pareto front by $f_2 = (\sqrt{f_1} - 1)^2$ where $f_1$ is defined in $[0.01,0.16]$ and $[0.36,0.81]$.
## 2. Implementation of a Problem
In *pymoo*, we consider **minimization** problems for optimization in all our modules. However, without loss of generality, an objective that is supposed to be maximized can be multiplied by $-1$ and be minimized. Therefore, we minimize $-f_2(x)$ instead of maximizing $f_2(x)$ in our optimization problem. Furthermore, all constraint functions need to be formulated as a $\leq 0$ constraint.
The feasibility of a solution can, therefore, be expressed by:
$$ \begin{cases}
\text{feasible,} \quad \quad \sum_i^n \langle g_i(x)\rangle = 0\\
\text{infeasible,} \quad \quad \quad \text{otherwise}\\
\end{cases}
$$
$$
\text{where} \quad \langle g_i(x)\rangle =
\begin{cases}
0, \quad \quad \; \text{if} \; g_i(x) \leq 0\\
g_i(x), \quad \text{otherwise}\\
\end{cases}
$$
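For intuition, this feasibility check can be sketched in a few lines of NumPy (a hypothetical helper for illustration, not part of pymoo):

```python
import numpy as np

def constraint_violation(g):
    # <g_i(x)> is zero for satisfied constraints and g_i(x) otherwise;
    # the sum is zero if and only if the solution is feasible
    return np.maximum(g, 0).sum()

print(constraint_violation(np.array([-0.5, 0.3])))   # 0.3 -> infeasible
print(constraint_violation(np.array([-0.5, -0.1])))  # 0.0 -> feasible
```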
For this reason, $g_2(x)$ needs to be multiplied by $-1$ in order to flip the $\geq$ into a $\leq$ relation. We recommend normalizing the constraints to give each of them equal importance.
For $g_1(x)$, the normalization coefficient results in $2 \cdot (-0.1) \cdot (-0.9) = 0.18$ and for $g_2(x)$ in $20 \cdot (-0.4) \cdot (-0.6) = 4.8$, respectively. We achieve the normalization by dividing $g_1(x)$ and $g_2(x)$ by the corresponding coefficient.
Finally, the optimization problem to be optimized using *pymoo* is defined by:
\begin{align}
\label{eq:getting_started_pymoo}
\begin{split}
\min \;\; & f_1(x) = (x_1^2 + x_2^2) \\
\min \;\; & f_2(x) = (x_1-1)^2 + x_2^2 \\[1mm]
\text{s.t.} \;\; & g_1(x) = 2 \, (x_1 - 0.1) \, (x_1 - 0.9) \, / \, 0.18 \leq 0\\
& g_2(x) = - 20 \, (x_1 - 0.4) \, (x_1 - 0.6) \, / \, 4.8 \leq 0\\[1mm]
& -2 \leq x_1 \leq 2 \\
& -2 \leq x_2 \leq 2
\end{split}
\end{align}
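As a quick sanity check of the transformed formulation, we can evaluate it by hand at a point from the analytically known Pareto set (a hypothetical spot check, not required by pymoo):

```python
import numpy as np

# x = (0.25, 0) lies in the first part of the Pareto set
x1, x2 = 0.25, 0.0

f1 = x1**2 + x2**2
f2 = (x1 - 1)**2 + x2**2
g1 = 2 * (x1 - 0.1) * (x1 - 0.9) / 0.18
g2 = -20 * (x1 - 0.4) * (x1 - 0.6) / 4.8

print(f1, f2)            # 0.0625 0.5625, consistent with f2 = (sqrt(f1) - 1)**2
print(g1 <= 0, g2 <= 0)  # True True -> both constraints satisfied
```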
This getting started guide demonstrates **3** different ways of defining a problem:
- **By Class**
    - **Vectorized evaluation:** A set of solutions is evaluated directly.
    - **Elementwise evaluation:** Only one solution is evaluated at a time.
- **By Functions**: Functional interface as commonly defined in other optimization libraries.
**Optional**: Define a Pareto set and front for the optimization problem to track convergence to the analytically derived optimum/optima.
Please choose the most convenient implementation for your purpose.
### By Class
Defining a problem through a class allows expressing the problem very naturally, assuming the metadata, such as the number of variables and objectives, is known.
The problem inherits from the [Problem](problems/index.ipynb) class. By calling the `super()` function in the constructor `__init__`, the problem properties such as the number of variables `n_var`, objectives `n_obj` and constraints `n_constr` are initialized. Furthermore, the lower (`xl`) and upper (`xu`) variable boundaries are supplied as NumPy arrays. Please note that most algorithms in our framework require the lower and upper boundaries to be provided and not to be negative or positive infinity. Finally, the evaluation function `_evaluate` needs to be overridden to calculate the objective and constraint values.
#### Vectorized Evaluation
The `_evaluate` method takes a **two-dimensional** NumPy array `X` with *n* rows and *m* columns as input. Each row represents an individual, and each column an optimization variable. After doing the necessary calculations, the objective values must be added to the dictionary `out` with the key `F` and the constraints with the key `G`.
**Note**: This method is only called once per iteration for most algorithms. This gives you all the freedom to implement your own parallelization.
```
import numpy as np
from pymoo.model.problem import Problem
class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=2,
                         n_obj=2,
                         n_constr=2,
                         xl=np.array([-2, -2]),
                         xu=np.array([2, 2]))

    def _evaluate(self, X, out, *args, **kwargs):
        f1 = X[:, 0]**2 + X[:, 1]**2
        f2 = (X[:, 0] - 1)**2 + X[:, 1]**2
        g1 = 2 * (X[:, 0] - 0.1) * (X[:, 0] - 0.9) / 0.18
        g2 = -20 * (X[:, 0] - 0.4) * (X[:, 0] - 0.6) / 4.8

        out["F"] = np.column_stack([f1, f2])
        out["G"] = np.column_stack([g1, g2])
vectorized_problem = MyProblem()
```
#### Elementwise Evaluation
The `_evaluate` method takes a **one-dimensional** NumPy array `x` with a number of entries equal to `n_var`. This behavior is enabled by setting `elementwise_evaluation=True` when calling the `super()` method.
**Note**: This method is called in each iteration for **each** solution exactly once.
```
import numpy as np
from pymoo.util.misc import stack
from pymoo.model.problem import Problem
class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=2,
                         n_obj=2,
                         n_constr=2,
                         xl=np.array([-2, -2]),
                         xu=np.array([2, 2]),
                         elementwise_evaluation=True)

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = x[0]**2 + x[1]**2
        f2 = (x[0] - 1)**2 + x[1]**2
        g1 = 2 * (x[0] - 0.1) * (x[0] - 0.9) / 0.18
        g2 = -20 * (x[0] - 0.4) * (x[0] - 0.6) / 4.8

        out["F"] = [f1, f2]
        out["G"] = [g1, g2]
elementwise_problem = MyProblem()
```
### By Functions
Defining a problem through functions is common in Python and available in many other optimization frameworks. It keeps the problem definition free of overhead, and the number of objectives and constraints is simply derived from the lists of functions.
After having defined the functions, the problem object is created by initializing `FunctionalProblem`. Please note that the number of variables `n_var` must be passed as an argument.
**Note**: This definition is recommended for describing a problem through simple functions. It is worth noting that the evaluation can require many function calls: for instance, evaluating 100 individuals with 2 objectives and 2 constraints requires 400 function calls, whereas a vectorized definition through the `Problem` class requires only a single call. Moreover, if intermediate results are shared between objectives or constraints, they need to be calculated twice.
```
import numpy as np
from pymoo.model.problem import FunctionalProblem
objs = [
lambda x: x[0]**2 + x[1]**2,
lambda x: (x[0]-1)**2 + x[1]**2
]
constr_ieq = [
lambda x: 2*(x[0]-0.1) * (x[0]-0.9) / 0.18,
lambda x: - 20*(x[0]-0.4) * (x[0]-0.6) / 4.8
]
functional_problem = FunctionalProblem(2,
objs,
constr_ieq=constr_ieq,
xl=np.array([-2,-2]),
xu=np.array([2,2]))
```
### (Optional) Pareto front (pf) and Pareto set (ps)
In this case, we have a test problem where the optimum is **known**. For illustration, we would like to measure the convergence of the algorithm to the known true optimum. Thus, we override the `_calc_pareto_front` and `_calc_pareto_set` methods. Please note that both have to be derived mathematically.
**Note: This is not necessary if your goal is solely optimizing a function**. For test problems, this is usually done to measure and visualize the performance of an algorithm.
The implementation of `func_pf` and `func_ps` looks as follows:
```
import numpy as np
from pymoo.util.misc import stack

def func_pf(flatten=True, **kwargs):
    f1_a = np.linspace(0.1**2, 0.4**2, 100)
    f2_a = (np.sqrt(f1_a) - 1)**2

    f1_b = np.linspace(0.6**2, 0.9**2, 100)
    f2_b = (np.sqrt(f1_b) - 1)**2

    a, b = np.column_stack([f1_a, f2_a]), np.column_stack([f1_b, f2_b])
    return stack(a, b, flatten=flatten)

def func_ps(flatten=True, **kwargs):
    x1_a = np.linspace(0.1, 0.4, 50)
    x1_b = np.linspace(0.6, 0.9, 50)
    x2 = np.zeros(50)

    a, b = np.column_stack([x1_a, x2]), np.column_stack([x1_b, x2])
    return stack(a, b, flatten=flatten)
```
This information can be passed to the definition via class or functions as follows:
#### Add to Class
```
import numpy as np
from pymoo.util.misc import stack
from pymoo.model.problem import Problem
class MyTestProblem(MyProblem):

    def _calc_pareto_front(self, *args, **kwargs):
        return func_pf(**kwargs)

    def _calc_pareto_set(self, *args, **kwargs):
        return func_ps(**kwargs)
test_problem = MyTestProblem()
```
#### Add to Function
```
from pymoo.model.problem import FunctionalProblem
functional_test_problem = FunctionalProblem(2,
objs,
constr_ieq=constr_ieq,
xl=-2,
xu=2,
func_pf=func_pf,
func_ps=func_ps
)
```
### Initialize the object
Choose the way you have defined your problem and initialize it:
```
problem = test_problem
```
Moreover, we would like to mention that for many test optimization problems, implementations already exist. For example, the test problem *ZDT1* can be initialized by:
```
from pymoo.factory import get_problem
zdt1 = get_problem("zdt1")
```
Our framework has various single- and many-objective optimization test problems already implemented. Furthermore, a more advanced guide for custom problem definitions is available. In case problem functions are computationally expensive, more sophisticated parallelization of the evaluation functions might be worth looking at.
[Optimization Test Problems](problems/index.ipynb) |
[Define a Custom Problem](problems/custom.ipynb) |
[Parallelization](problems/parallelization.ipynb) |
[Callback](interface/callback.ipynb) |
[Constraint Handling](misc/constraint_handling.ipynb)
## 3. Initialization of an Algorithm
Next, we need to initialize an algorithm to optimize the problem.
In *pymoo*, factory methods create an `algorithm` object to be used for optimization. For each of those methods, an API documentation is available, and through supplying different parameters, algorithms can be customized in a plug-and-play manner.
Depending on the optimization problem, different algorithms can be used to optimize the problem. Our framework offers various [Algorithms](algorithms/index.ipynb), which can be used to solve problems with different characteristics.
In general, the choice of a suitable algorithm for an optimization problem is a challenge in itself. Whenever problem characteristics are known beforehand, we recommend exploiting them through customized operators.
However, in our case, the optimization problem is rather simple, but the aspects of having two objectives and two constraints should be considered. We decided to use [NSGA-II](algorithms/nsga2.ipynb) in its default configuration with minor modifications. We chose a population size of 40 (`pop_size=40`) and, instead of generating the same number of offspring, decided to create only 10 in each generation (`n_offsprings=10`). This is a greedier variant that improves the convergence of simpler optimization problems without difficulties such as the existence of local Pareto fronts.
Moreover, we enable a duplicate check (`eliminate_duplicates=True`), making sure that the mating produces offspring that are different from each other and from the existing population with respect to their design space values. To illustrate the customization aspect, we list the other (unmodified) default operators in the code snippet below.
```
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.factory import get_sampling, get_crossover, get_mutation
algorithm = NSGA2(
pop_size=40,
n_offsprings=10,
sampling=get_sampling("real_random"),
crossover=get_crossover("real_sbx", prob=0.9, eta=15),
mutation=get_mutation("real_pm", eta=20),
eliminate_duplicates=True
)
```
The `algorithm` object contains the implementation of NSGA-II with the custom settings supplied to the factory method.
## 4. Definition of a Termination Criterion
Furthermore, a termination criterion needs to be defined to finally start the optimization procedure. Different kinds of [Termination Criteria](interface/termination.ipynb) are available. Here, since the problem is rather simple, we run the algorithm for a fixed number of generations.
```
from pymoo.factory import get_termination
termination = get_termination("n_gen", 40)
```
Instead of the number of generations (or iterations), other criteria such as the number of function evaluations or the improvement in design or objective space between generations can be used.
## 5. Optimize
Finally, we are solving the problem with the algorithm and termination criterion we have defined.
In *pymoo*, we provide two interfaces for solving an optimization problem:
- **Functional:** As is common in Python, a function serves as the global interface. In pymoo, the `minimize` method is the central method, responsible for using an algorithm to solve a problem with further attributes such as `seed` and `termination`.
- **Object Oriented:** The object-oriented interface directly uses the algorithm object to perform an iteration.
This allows the flexibility of executing custom code very quickly between iterations. However, features already
implemented in the functional approach, such as displaying metrics, saving the history, or pre-defined callbacks, need to be incorporated manually.
Both ways have their benefits and drawbacks depending on the different use cases.
### Functional Interface
The functional interface is provided by the `minimize` method. By default, the method performs deep copies of the algorithm and the termination object, which means the objects are not altered during the function call. This ensures that repeated function calls produce the same results. The `minimize` function returns a [Result](interface/result.ipynb) object, which provides attributes such as the optimum.
```
from pymoo.optimize import minimize
res = minimize(problem,
algorithm,
termination,
seed=1,
save_history=True,
verbose=True)
```
The [Result](interface/result.ipynb) object provides the corresponding X and F values and some more information.
### Object-Oriented Interface
On the contrary, the object-oriented approach directly modifies the algorithm object by calling the `next` method. Thus, it makes sense to create a deep copy of the algorithm object beforehand, as shown in the code below.
In the while loop, the algorithm object can be accessed to be modified or for other purposes.
**NOTE**: In this guide, we have used the functional interface because the history is used during analysis.
```
import copy
# perform a copy of the algorithm to ensure reproducibility
obj = copy.deepcopy(algorithm)
# let the algorithm know what problem we are intending to solve and provide other attributes
obj.setup(problem, termination=termination, seed=1)
# run until the termination criterion has been met
while obj.has_next():

    # perform an iteration of the algorithm
    obj.next()

    # access the algorithm to print some intermediate output
    print(f"gen: {obj.n_gen} n_nds: {len(obj.opt)} constr: {obj.opt.get('CV').min()} ideal: {obj.opt.get('F').min(axis=0)}")
# finally obtain the result object
result = obj.result()
```
## 6. Visualization of Results and Convergence
### Results
The optimization results are illustrated below (design and objective space). The solid lines represent the analytically derived Pareto set and front in the corresponding space, and the circles represent solutions found by the algorithm. It can be observed that the algorithm converged, and a set of nearly optimal solutions was obtained.
```
from pymoo.visualization.scatter import Scatter
# get the pareto-set and pareto-front for plotting
ps = problem.pareto_set(use_cache=False, flatten=False)
pf = problem.pareto_front(use_cache=False, flatten=False)
# Design Space
plot = Scatter(title = "Design Space", axis_labels="x")
plot.add(res.X, s=30, facecolors='none', edgecolors='r')
if ps is not None:
    plot.add(ps, plot_type="line", color="black", alpha=0.7)
plot.do()
plot.apply(lambda ax: ax.set_xlim(-0.5, 1.5))
plot.apply(lambda ax: ax.set_ylim(-2, 2))
plot.show()
# Objective Space
plot = Scatter(title = "Objective Space")
plot.add(res.F)
if pf is not None:
    plot.add(pf, plot_type="line", color="black", alpha=0.7)
plot.show()
```
Visualization is a vital post-processing step in multi-objective optimization. Although it seems to be pretty easy for our example optimization problem, it becomes much more difficult in higher dimensions where trade-offs between solutions are not readily observable. For visualizations in higher dimensions, various more advanced [Visualizations](visualization/index.ipynb) are implemented in our framework.
### Convergence
An essential step is the post-processing after having obtained the results. We strongly recommend analyzing not only the final result but also the algorithm's behavior, as this gives more insight into its convergence.
For such an analysis, intermediate steps of the algorithm need to be considered. This can be achieved by either:
- A `Callback` class storing the necessary information in each iteration of the algorithm.
- Enabling the `save_history` flag when calling the `minimize` method to store a deep copy of the algorithm object in each iteration.
We provide some more details about each variant in our [convergence](misc/convergence.ipynb) tutorial.
As you might have already seen, we set `save_history=True` when calling the `minimize` method in this getting started guide and will thus use the `history` for our analysis. Moreover, we need to decide which metric should be used to measure the performance of our algorithm. In this tutorial, we are going to use `Hypervolume` and `IGD`. Feel free to look at our [performance indicators](misc/performance_indicator.ipynb) for more information about metrics to measure the performance of multi-objective algorithms.
As a first step we have to extract the population in each generation of the algorithm. We extract the constraint violation (`cv`), the objective space values (`F`) and the number of function evaluations (`n_evals`) of the corresponding generation.
```
n_evals = []    # corresponding number of function evaluations
F = []          # the objective space values in each generation
cv = []         # constraint violation in each generation
# iterate over the deep copies of the algorithm
for algorithm in res.history:

    # store the number of function evaluations
    n_evals.append(algorithm.evaluator.n_eval)

    # retrieve the optimum from the algorithm
    opt = algorithm.opt

    # store the least constraint violation in this generation
    cv.append(opt.get("CV").min())

    # filter out only the feasible solutions and append
    feas = np.where(opt.get("feasible"))[0]
    _F = opt.get("F")[feas]
    F.append(_F)
```
**NOTE:** If your problem has different scales on the objectives (e.g. the first objective in the range [0.1, 0.2] and the second in [100, 10000]), you **HAVE** to normalize to measure the performance in a meaningful way! This example assumes no normalization is necessary to keep things simple.
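For cases where normalization is needed, a minimal sketch could look like the following (here the ideal and nadir points are estimated from the obtained set itself, which is only an assumption; known bounds are preferable when available):

```python
import numpy as np

def normalize(F):
    # scale each objective to [0, 1] using the ideal (min) and nadir (max)
    # points estimated from the set itself
    ideal, nadir = F.min(axis=0), F.max(axis=0)
    return (F - ideal) / (nadir - ideal)

F = np.array([[0.1, 10000.0],
              [0.2, 100.0]])
print(normalize(F))   # objectives now live on comparable scales
```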
### Constraint Violation (CV)
In our case, a feasible solution was already found in the first generation: since the constraints of the problem are rather simple, they are already satisfied in the initial population.
```
import matplotlib.pyplot as plt
k = min([i for i in range(len(cv)) if cv[i] <= 0])
first_feas_evals = n_evals[k]
print(f"First feasible solution found after {first_feas_evals} evaluations")
plt.plot(n_evals, cv, '--', label="CV")
plt.scatter(first_feas_evals, cv[k], color="red", label="First Feasible")
plt.xlabel("Function Evaluations")
plt.ylabel("Constraint Violation (CV)")
plt.legend()
plt.show()
```
### Hypervolume (HV)
Hypervolume is a very well-known performance indicator for multi-objective problems. It is Pareto-compliant and is based on the volume between a predefined reference point and the provided solution set. Hypervolume requires a reference point `ref_point`, which should be larger than the maximum value of the Pareto front.
**Note:** Hypervolume becomes computationally expensive with increasing dimensionality. The exact hypervolume can be calculated efficiently for 2 and 3 objectives. For higher dimensions, some researchers use a hypervolume approximation, which is not available yet in pymoo.
```
import matplotlib.pyplot as plt
from pymoo.performance_indicator.hv import Hypervolume
# MODIFY - this is problem dependent
ref_point = np.array([1.0, 1.0])
# create the performance indicator object with reference point
metric = Hypervolume(ref_point=ref_point, normalize=False)
# calculate for each generation the HV metric
hv = [metric.calc(f) for f in F]
# visualize the convergence curve
plt.plot(n_evals, hv, '-o', markersize=4, linewidth=2)
plt.title("Convergence")
plt.xlabel("Function Evaluations")
plt.ylabel("Hypervolume")
plt.show()
```
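For two objectives, the hypervolume has a simple geometric interpretation: the area dominated by the solution set up to the reference point. A plain-NumPy sketch for intuition only (use pymoo's `Hypervolume` in practice):

```python
import numpy as np

def hypervolume_2d(F, ref):
    # assumes F contains only mutually non-dominated points (minimization);
    # sorting by f1 ascending makes f2 descend, so the dominated area
    # decomposes into disjoint rectangles against the reference point
    F = F[np.argsort(F[:, 0])]
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in F:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

pts = np.array([[0.2, 0.7], [0.5, 0.4], [0.8, 0.1]])
print(hypervolume_2d(pts, ref=np.array([1.0, 1.0])))
```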
### IGD
For IGD, the Pareto front needs to be known or approximated.
In our framework, the Pareto front of **test problems** can be obtained by:
```
pf = problem.pareto_front(flatten=True, use_cache=False)
```
For real-world problems, you have to use an **approximation**. An approximation can be obtained by running an algorithm a couple of times and extracting the non-dominated solutions out of all solution sets. If you have only a single run, an alternative is to use the non-dominated set of solutions from that run as an approximation. However, the result then only indicates how much the algorithm has progressed towards its final set.
```
import matplotlib.pyplot as plt
from pymoo.performance_indicator.igd import IGD
if pf is not None:

    # for this test problem, no normalization for post-processing is needed
    # since the objectives have similar scales
    normalize = False

    metric = IGD(pf=pf, normalize=normalize)

    # calculate the IGD metric for each generation
    igd = [metric.calc(f) for f in F]

    # visualize the convergence curve
    plt.plot(n_evals, igd, '-o', markersize=4, linewidth=2, color="green")
    plt.yscale("log")  # enable log scale if desired
    plt.title("Convergence")
    plt.xlabel("Function Evaluations")
    plt.ylabel("IGD")
    plt.show()
```
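Conceptually, IGD is the average distance from each Pareto-front point to its closest obtained solution. A minimal sketch of that idea (for intuition only; use pymoo's `IGD` in practice):

```python
import numpy as np

def igd(pf, F):
    # pairwise Euclidean distances between front points (rows of pf)
    # and obtained solutions (rows of F)
    d = np.linalg.norm(pf[:, None, :] - F[None, :, :], axis=2)
    # for each front point, take the closest solution, then average
    return d.min(axis=1).mean()

pf = np.array([[0.0, 1.0], [1.0, 0.0]])
print(igd(pf, pf))                                   # 0.0 -> perfect coverage
print(igd(pf, np.array([[0.0, 2.0], [2.0, 0.0]])))   # 1.0
```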
### Running Metric
Another way of analyzing a run when the true Pareto front is **not** known is the recently proposed [running metric](https://www.egr.msu.edu/~kdeb/papers/c2020003.pdf). The running metric shows the difference in the objective space from one generation to another and uses the algorithm's survival to visualize the improvement.
This metric is also being used in pymoo to determine the termination of a multi-objective optimization algorithm if no default termination criteria have been defined.
For instance, this analysis reveals that the algorithm improved significantly from the 4th to the 5th generation.
```
from pymoo.util.running_metric import RunningMetric
running = RunningMetric(delta_gen=5,
n_plots=3,
only_if_n_plots=True,
key_press=False,
do_show=True)
for algorithm in res.history[:15]:
    running.notify(algorithm)
```
Plotting until the final population shows that the algorithm seems to have more or less converged, and only a small improvement has been made.
```
from pymoo.util.running_metric import RunningMetric
running = RunningMetric(delta_gen=10,
n_plots=4,
only_if_n_plots=True,
key_press=False,
do_show=True)
for algorithm in res.history:
    running.notify(algorithm)
```
## 7. Summary
We hope you have enjoyed the getting started guide. For more topics, we refer to the sections covered on the [landing page](https://pymoo.org). If you have any questions or concerns, do not hesitate to [contact us](contact.rst).
### Citation
If you have used **pymoo** for research purposes, please cite our framework in your reports or publications.
## 8. Source Code
In this guide, we have provided a couple of options on defining your problem and how to run the optimization.
You might have already copied the code into your IDE. However, if not, the following code snippet covers the problem definition, algorithm initialization, solving the optimization problem, and visualization of the non-dominated set of solutions altogether.
```
import numpy as np
from pymoo.algorithms.nsga2 import NSGA2
from pymoo.model.problem import Problem
from pymoo.optimize import minimize
from pymoo.visualization.scatter import Scatter
class MyProblem(Problem):

    def __init__(self):
        super().__init__(n_var=2,
                         n_obj=2,
                         n_constr=2,
                         xl=np.array([-2, -2]),
                         xu=np.array([2, 2]),
                         elementwise_evaluation=True)

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = x[0] ** 2 + x[1] ** 2
        f2 = (x[0] - 1) ** 2 + x[1] ** 2

        g1 = 2 * (x[0] - 0.1) * (x[0] - 0.9) / 0.18
        g2 = -20 * (x[0] - 0.4) * (x[0] - 0.6) / 4.8

        out["F"] = [f1, f2]
        out["G"] = [g1, g2]
problem = MyProblem()
algorithm = NSGA2(pop_size=100)
res = minimize(problem,
algorithm,
("n_gen", 100),
verbose=True,
seed=1)
plot = Scatter()
plot.add(res.F, color="red")
plot.show()
```
```
!pip install eli5
!pip install --upgrade tables
!pip install xgboost
!pip install hyperopt
import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.metrics import mean_absolute_error as mae
from sklearn.model_selection import cross_val_score, KFold
from hyperopt import hp, fmin, tpe, STATUS_OK
import eli5
from eli5.sklearn import PermutationImportance
cd "/content/drive/My Drive/Colab Notebooks/dw_matrix/matrix_two/dw_matrix_car"
df = pd.read_hdf('data/car.h5')
df.shape
```
### Feature Engineering
```
SUFFIX_CAT = '__cat'
for feat in df.columns:
    if isinstance(df[feat][0], list):
        continue

    factorized_values = df[feat].factorize()[0]
    if SUFFIX_CAT in feat:
        df[feat] = factorized_values
    else:
        df[feat + SUFFIX_CAT] = factorized_values
df['param_rok-produkcji'] = df['param_rok-produkcji'].map(lambda x: -1 if str(x) == 'None' else int(x))
df['param_moc'] = df['param_moc'].map(lambda x: -1 if str(x) == 'None' else int(x.split(' ')[0]) )
df['param_pojemność-skokowa'] = df['param_pojemność-skokowa'].map(lambda x: -1 if str(x) == 'None' else int(x.split('cm')[0].replace(' ', '')) )
def run_model(model, feats):
    X = df[feats].values
    y = df['price_value'].values

    scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error')
    return np.mean(scores), np.std(scores)
feats = ['param_rok-produkcji', 'param_stan__cat', 'param_napęd__cat', 'param_skrzynia-biegów__cat', 'param_faktura-vat__cat', 'param_moc', 'param_marka-pojazdu__cat', 'feature_kamera-cofania__cat', 'param_typ__cat', 'param_pojemność-skokowa', 'seller_name__cat', 'feature_wspomaganie-kierownicy__cat', 'param_model-pojazdu__cat', 'param_wersja__cat', 'param_kod-silnika__cat', 'feature_system-start-stop__cat', 'feature_asystent-pasa-ruchu__cat', 'feature_czujniki-parkowania-przednie__cat', 'feature_łopatki-zmiany-biegów__cat', 'feature_regulowane-zawieszenie__cat']
xgb_params = {
'max_depth': 5,
'n_estimators': 50,
'learning_rate': 0.1,
'seed': 0
}
run_model(xgb.XGBRegressor(**xgb_params), feats )
# How should model parameters be chosen? Trying every combination by brute force
# takes a lot of time, so we will use the hyperopt library instead.
```
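For intuition about the feature engineering above, `factorize` maps each distinct value to an integer code, with missing values encoded as `-1` (a toy example, not the actual car data):

```python
import pandas as pd

s = pd.Series(['diesel', 'petrol', 'diesel', None])
codes, uniques = s.factorize()
print(codes)          # [ 0  1  0 -1]
print(list(uniques))  # ['diesel', 'petrol']
```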
### Hyperopt
```
def obj_func(params):
    print("Training with params: ")
    print(params)

    mean_mae, score_std = run_model(xgb.XGBRegressor(**params), feats)
    return {'loss': np.abs(mean_mae), 'status': STATUS_OK}
# define the search space
xgb_reg_params = {
'learning_rate': hp.choice('learning_rate', np.arange(0.05, 0.31, 0.05)),
'max_depth': hp.choice('max_depth', np.arange(5, 16, 1, dtype=int)),
'subsample': hp.quniform('subsample', 0.5, 1, 0.1),
'colsample_bytree': hp.quniform('colsample_bytree', 0.5, 1, 0.05),
'objective': 'reg:squarederror',  # avoids printing a warning
'n_estimators': 100,
'seed': 0,
}
# run the optimization
best = fmin(obj_func, xgb_reg_params, algo=tpe.suggest, max_evals=25)
best
```
This notebook creates a VM in the user's project with the airflow scheduler and webserver. A default GCP zone for the VM has been chosen (below). Feel free to change this as desired.
## Airflow Dashboard
After successful setup of the Airflow VM, you will be able to view the Airflow Dashboard by creating an ssh tunnel to the VM. To do so, a sample command that you could execute:
gcloud compute ssh --zone us-central1-b datalab-airflow -- -N -p 22 -L localhost:5000:localhost:8080
Once this tunnel is open, you'd be able to view the dashboard by navigating to http://localhost:5000 on your browser.
```
# Get the latest datalab version. Restart the kernel.
!pip install --upgrade --force-reinstall datalab
zone='us-central1-b'
from google.datalab import Context
import google.datalab.storage as storage
project = Context.default().project_id
vm_name = 'datalab-airflow'
# The name of this GCS bucket follows a convention between this notebook and
# the 'BigQuery Pipeline' tutorial notebook, so don't change this.
gcs_dag_bucket_name = project + '-' + vm_name
gcs_dag_bucket = storage.Bucket(gcs_dag_bucket_name)
gcs_dag_bucket.create()
vm_startup_script_contents = """#!/bin/bash
apt-get update
apt-get --assume-yes install python-pip
pip install datalab==1.1.2
pip install apache-airflow==1.9.0
pip install pandas-gbq==0.3.0
export AIRFLOW_HOME=/airflow
export AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION=False
export AIRFLOW__CORE__LOAD_EXAMPLES=False
airflow initdb
airflow scheduler &
airflow webserver -p 8080 &
# We append a gsutil rsync command to the cron file and have this run every minute to sync dags.
PROJECT_ID=$(gcloud info --format="get(config.project)")
GCS_DAG_BUCKET=$PROJECT_ID-datalab-airflow
AIRFLOW_CRON=temp_crontab.txt
crontab -l > $AIRFLOW_CRON
DAG_FOLDER="dags"
LOCAL_DAG_PATH=$AIRFLOW_HOME/$DAG_FOLDER
mkdir $LOCAL_DAG_PATH
echo "* * * * * gsutil rsync gs://$GCS_DAG_BUCKET/$DAG_FOLDER $LOCAL_DAG_PATH" >> $AIRFLOW_CRON
crontab $AIRFLOW_CRON
rm $AIRFLOW_CRON
"""
vm_startup_script_file_name = 'vm_startup_script.sh'
script_file = open(vm_startup_script_file_name, 'w')
script_file.write(vm_startup_script_contents)
script_file.close()
import subprocess
print(subprocess.check_output([
    'gcloud', 'compute', '--project', project, 'instances', 'create', vm_name,
    '--zone', zone,
    '--machine-type', 'n1-standard-1',
    '--network', 'default',
    '--maintenance-policy', 'MIGRATE',
    '--scopes', 'https://www.googleapis.com/auth/cloud-platform',
    '--image', 'debian-9-stretch-v20171025',
    '--min-cpu-platform', 'Automatic',
    '--image-project', 'debian-cloud',
    '--boot-disk-size', '10',
    '--boot-disk-type', 'pd-standard',
    '--boot-disk-device-name', vm_name,
    '--metadata-from-file', 'startup-script=' + vm_startup_script_file_name]))
```
# Cleanup
```
# The following cleans up the VM and associated GCS bucket. Uncomment and run.
#!gsutil rm -r gs://$gcs_dag_bucket_name
#!gcloud compute instances delete datalab-airflow --zone us-central1-b --quiet
# This just verifies that cleanup actually worked. Uncomment and run. Should
# show an error like "BucketNotFoundException: 404 ...".
#!gsutil ls gs://$gcs_dag_bucket_name
```
| github_jupyter |
# Seq2Seq time series outlier detection on ECG data
## Method
The [Sequence-to-Sequence](https://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf) (Seq2Seq) outlier detector consists of 2 main building blocks: an encoder and a decoder. The encoder consists of a [Bidirectional](https://en.wikipedia.org/wiki/Bidirectional_recurrent_neural_networks) [LSTM](https://colah.github.io/posts/2015-08-Understanding-LSTMs/) which processes the input sequence and initializes the decoder. The LSTM decoder then makes sequential predictions for the output sequence. In our case, the decoder aims to reconstruct the input sequence. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is measured as the mean squared error (MSE) between the input and the reconstructed instance.
Since even for normal data the reconstruction error can be state-dependent, we add an outlier threshold estimator network to the Seq2Seq model. This network takes in the hidden state of the decoder at each timestep and predicts the estimated reconstruction error for normal data. As a result, the outlier threshold is not static and becomes a function of the model state. This is similar to [Park et al. (2017)](https://arxiv.org/pdf/1711.00614.pdf), but while they train the threshold estimator separately from the Seq2Seq model with a Support-Vector Regressor, we train a neural net regression network end-to-end with the Seq2Seq model.
The detector is first trained on a batch of unlabeled, but normal (*inlier*) data. Unsupervised training is desirable since labeled data is often scarce. The Seq2Seq outlier detector is suitable for both **univariate and multivariate time series**.
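As a minimal sketch of the scoring idea (plain NumPy, not the detector's actual implementation): the instance-level outlier score is the MSE between a sequence and its reconstruction, compared against a threshold.

```python
import numpy as np

def reconstruction_scores(x, x_recon):
    """Per-instance MSE between inputs and reconstructions,
    for arrays of shape (batch, seq_len, features)."""
    return ((x - x_recon) ** 2).mean(axis=(1, 2))

# Toy batch: the second "reconstruction" misses its input badly.
x = np.zeros((2, 4, 1))
x_recon = np.stack([np.zeros((4, 1)), np.ones((4, 1))])
scores = reconstruction_scores(x, x_recon)
is_outlier = scores > 0.5  # a static threshold, for illustration only
```

In the actual detector the threshold is not static but predicted by the threshold estimator network described above.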
## Dataset
The outlier detector needs to spot anomalies in electrocardiograms (ECGs). The dataset contains 5000 ECGs, originally obtained from [Physionet](https://archive.physionet.org/cgi-bin/atm/ATM) under the name *BIDMC Congestive Heart Failure Database (chfdb)*, record *chf07*. The data has been pre-processed in 2 steps: first each heartbeat is extracted, and then each beat is made equal length via interpolation. The data is labeled and contains 5 classes. The first class, which contains almost 60% of the observations, is seen as *normal* while the others are outliers. The detector is trained on heartbeats from the first class and needs to flag the other classes as anomalies.
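The "equal length via interpolation" step can be sketched with `np.interp`; the helper `to_fixed_length` and the target length of 140 are illustrative assumptions, not the actual preprocessing code.

```python
import numpy as np

def to_fixed_length(beat, n=140):
    """Resample one variable-length heartbeat to n samples via
    linear interpolation (a sketch of the described preprocessing)."""
    old = np.linspace(0.0, 1.0, num=len(beat))
    new = np.linspace(0.0, 1.0, num=n)
    return np.interp(new, old, beat)

beat = np.sin(np.linspace(0, 2 * np.pi, 97))  # a 97-sample "beat"
fixed = to_fixed_length(beat, n=140)
```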
```
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, precision_score, recall_score
from alibi_detect.od import OutlierSeq2Seq
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.datasets import fetch_ecg
```
## Load dataset
Flip the train and test sets, because the original training set contains only 500 ECGs while the test set contains 4500:
```
(X_test, y_test), (X_train, y_train) = fetch_ecg(return_X_y=True)
print(X_train.shape, y_train.shape)
print(X_test.shape, y_test.shape)
```
Since we treat the first class as the normal, *inlier* data and the rest of *X_train* as outliers, we need to adjust the training (inlier) data and the labels of the test set.
```
inlier_idx = np.where(y_train == 1)[0]
X_inlier, y_inlier = X_train[inlier_idx], np.zeros_like(y_train[inlier_idx])
outlier_idx = np.where(y_train != 1)[0]
X_outlier, y_outlier = X_train[outlier_idx], y_train[outlier_idx]
y_test[y_test == 1] = 0  # class 1 represents the inliers
y_test[y_test != 0] = 1
print(X_inlier.shape, X_outlier.shape)
```
Some of the outliers in *X_train* are used in combination with some of the inlier instances to infer the threshold level:
```
n_threshold = 1000
perc_inlier = 60
n_inlier = int(perc_inlier * .01 * n_threshold)
n_outlier = int((100 - perc_inlier) * .01 * n_threshold)
idx_thr_in = np.random.choice(X_inlier.shape[0], n_inlier, replace=False)
idx_thr_out = np.random.choice(X_outlier.shape[0], n_outlier, replace=False)
X_threshold = np.concatenate([X_inlier[idx_thr_in], X_outlier[idx_thr_out]], axis=0)
y_threshold = np.zeros(n_threshold).astype(int)
y_threshold[-n_outlier:] = 1
print(X_threshold.shape, y_threshold.shape)
```
Apply min-max scaling between 0 and 1 to the observations using the inlier data:
```
xmin, xmax = X_inlier.min(), X_inlier.max()
rng = (0, 1)
X_inlier = ((X_inlier - xmin) / (xmax - xmin)) * (rng[1] - rng[0]) + rng[0]
X_threshold = ((X_threshold - xmin) / (xmax - xmin)) * (rng[1] - rng[0]) + rng[0]
X_test = ((X_test - xmin) / (xmax - xmin)) * (rng[1] - rng[0]) + rng[0]
X_outlier = ((X_outlier - xmin) / (xmax - xmin)) * (rng[1] - rng[0]) + rng[0]
print('Inlier: min {:.2f} --- max {:.2f}'.format(X_inlier.min(), X_inlier.max()))
print('Threshold: min {:.2f} --- max {:.2f}'.format(X_threshold.min(), X_threshold.max()))
print('Test: min {:.2f} --- max {:.2f}'.format(X_test.min(), X_test.max()))
```
Reshape the observations to *(batch size, sequence length, features)* for the detector:
```
shape = (-1, X_inlier.shape[1], 1)
X_inlier = X_inlier.reshape(shape)
X_threshold = X_threshold.reshape(shape)
X_test = X_test.reshape(shape)
X_outlier = X_outlier.reshape(shape)
print(X_inlier.shape, X_threshold.shape, X_test.shape)
```
We can now visualize scaled instances from each class:
```
idx_plt = [np.where(y_outlier == i)[0][0] for i in list(np.unique(y_outlier))]
X_plt = np.concatenate([X_inlier[0:1], X_outlier[idx_plt]], axis=0)
for i in range(X_plt.shape[0]):
plt.plot(X_plt[i], label='Class ' + str(i+1))
plt.title('ECGs of Different Classes')
plt.xlabel('Time step')
plt.legend()
plt.show()
```
## Load or define Seq2Seq outlier detector
The pretrained outlier and adversarial detectors used in the example notebooks can be found [here](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect). You can manually download the relevant files in the [od_seq2seq_ecg](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect/od_seq2seq_ecg) folder to, e.g., the local directory ```my_dir```. Alternatively, if you have the [Google Cloud SDK](https://cloud.google.com/sdk/docs/) installed, you can download the whole folder as follows:
```bash
!gsutil cp -r gs://seldon-models/alibi-detect/od_seq2seq_ecg my_dir
```
```
load_outlier_detector = False
filepath = './models/od_seq2seq_ecg/'
if load_outlier_detector: # load pretrained outlier detector
od = load_detector(filepath)
else: # define model, initialize, train and save outlier detector
# initialize outlier detector
od = OutlierSeq2Seq(1,
X_inlier.shape[1], # sequence length
threshold=None,
latent_dim=40)
# train
od.fit(X_inlier,
epochs=100,
verbose=False)
# save the trained outlier detector
save_detector(od, filepath)
```
Let's inspect how well the sequence-to-sequence model can predict the ECGs of the inlier and outlier classes. The predictions in the charts below are made on ECGs from the test set:
```
ecg_pred = od.seq2seq.decode_seq(X_test)[0]
i_normal = np.where(y_test == 0)[0][0]
plt.plot(ecg_pred[i_normal], label='Prediction')
plt.plot(X_test[i_normal], label='Original')
plt.title('Predicted vs. Original ECG of Inlier Class 1')
plt.legend()
plt.show()
i_outlier = np.where(y_test == 1)[0][0]
plt.plot(ecg_pred[i_outlier], label='Prediction')
plt.plot(X_test[i_outlier], label='Original')
plt.title('Predicted vs. Original ECG of Outlier')
plt.legend()
plt.show()
```
It is clear that the model can reconstruct the inlier class but struggles with the outliers.
The warning thrown when we initialized the model tells us that we need to set the outlier threshold. This can be done with the `infer_threshold` method. We need to pass a time series of instances and specify what percentage of those we consider to be normal via `threshold_perc`, equal to the percentage of *Class 1* in *X_threshold*. The `outlier_perc` parameter defines the percentage of features used to define the outlier threshold. In this example, the number of features considered per instance equals 140 (1 for each timestep). We set ```outlier_perc``` to 95, which means that we use the 95% of features with the highest reconstruction error, adjusted by the threshold estimate.
```
od.infer_threshold(X_threshold, outlier_perc=95, threshold_perc=perc_inlier)
print('New threshold: {}'.format(od.threshold))
```
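To build intuition for `outlier_perc`, here is a rough NumPy sketch of aggregating per-feature errors while keeping only the largest `outlier_perc`% of them (the idea, not alibi-detect's exact implementation):

```python
import numpy as np

def aggregate_score(fscore, outlier_perc=95.0):
    """Aggregate per-timestep feature scores into one instance score,
    keeping only the largest `outlier_perc`% of them (a sketch of the
    idea, not alibi-detect's exact implementation)."""
    fscore = np.sort(fscore)                           # ascending
    n_keep = int(np.ceil(outlier_perc * 0.01 * fscore.size))
    return fscore[-n_keep:].mean()

errs = np.array([0.0, 0.0, 0.0, 10.0])  # one timestep reconstructed badly
full = aggregate_score(errs, outlier_perc=100.0)  # mean over all 4
top = aggregate_score(errs, outlier_perc=25.0)    # mean over the largest 1
```

Lowering `outlier_perc` focuses the score on the worst-reconstructed timesteps, which makes localized anomalies stand out.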
Let's save the outlier detector with the updated threshold:
```
save_detector(od, filepath)
```
We can load the same detector via `load_detector`:
```
od = load_detector(filepath)
```
## Detect outliers
```
od_preds = od.predict(X_test,
outlier_type='instance', # use 'feature' or 'instance' level
return_feature_score=True, # scores used to determine outliers
return_instance_score=True)
```
## Display results
F1 score, accuracy, recall and confusion matrix:
```
y_pred = od_preds['data']['is_outlier']
labels = ['normal', 'outlier']
f1 = f1_score(y_test, y_pred)
acc = accuracy_score(y_test, y_pred)
prec = precision_score(y_test, y_pred)
rec = recall_score(y_test, y_pred)
print('F1 score: {:.3f} -- Accuracy: {:.3f} -- Precision: {:.3f} -- Recall: {:.3f}'.format(f1, acc, prec, rec))
cm = confusion_matrix(y_test, y_pred)
df_cm = pd.DataFrame(cm, index=labels, columns=labels)
sns.heatmap(df_cm, annot=True, cbar=True, linewidths=.5)
plt.show()
```
| github_jupyter |
```
import seaborn as sns
sns.set(style="white")
```
```
penguins = sns.load_dataset("penguins")
sns.histplot(data=penguins, x="flipper_length_mm")
```
Flip the plot by assigning the data variable to the y axis:
```
sns.histplot(data=penguins, y="flipper_length_mm")
```
Check how well the histogram represents the data by specifying a different bin width:
```
sns.histplot(data=penguins, x="flipper_length_mm", binwidth=3)
```
You can also define the total number of bins to use:
```
sns.histplot(data=penguins, x="flipper_length_mm", bins=30)
```
Add a kernel density estimate to smooth the histogram, providing complementary information about the shape of the distribution:
```
sns.histplot(data=penguins, x="flipper_length_mm", kde=True)
```
If neither `x` nor `y` is assigned, the dataset is treated as wide-form, and a histogram is drawn for each numeric column:
```
sns.histplot(data=penguins)
```
You can otherwise draw multiple histograms from a long-form dataset with hue mapping:
```
sns.histplot(data=penguins, x="flipper_length_mm", hue="species")
```
The default approach to plotting multiple distributions is to "layer" them, but you can also "stack" them:
```
sns.histplot(data=penguins, x="flipper_length_mm", hue="species", multiple="stack")
```
Overlapping bars can be hard to visually resolve. A different approach would be to draw a step function:
```
sns.histplot(penguins, x="flipper_length_mm", hue="species", element="step")
```
You can move even farther away from bars by drawing a polygon with vertices in the center of each bin. This may make it easier to see the shape of the distribution, but use with caution: it will be less obvious to your audience that they are looking at a histogram:
```
sns.histplot(penguins, x="flipper_length_mm", hue="species", element="poly")
```
To compare the distribution of subsets that differ substantially in size, use independent density normalization:
```
sns.histplot(
penguins, x="bill_length_mm", hue="island", element="step",
stat="density", common_norm=False,
)
```
It's also possible to normalize so that each bar's height shows a probability, which makes more sense for discrete variables:
```
tips = sns.load_dataset("tips")
sns.histplot(data=tips, x="size", stat="probability", discrete=True)
```
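As a quick sanity check of what `stat="probability"` computes: the bar heights are the bin counts divided by the total count, so they sum to one.

```python
import numpy as np

x = np.array([1, 2, 2, 3, 3, 3])               # discrete data, like "size"
values, counts = np.unique(x, return_counts=True)
probs = counts / counts.sum()                   # what stat="probability" plots
```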
You can even draw a histogram over categorical variables (although this is an experimental feature):
```
sns.histplot(data=tips, x="day", shrink=.8)
```
When using a ``hue`` semantic with discrete data, it can make sense to "dodge" the levels:
```
sns.histplot(data=tips, x="day", hue="sex", multiple="dodge", shrink=.8)
```
When the data have a skewed distribution, it may be useful to show it on a log scale. Compare the default linear scale:
```
planets = sns.load_dataset("planets")
sns.histplot(data=planets, x="distance")
```
To the log-scale version:
```
sns.histplot(data=planets, x="distance", log_scale=True)
```
There are also a number of options for how the histogram appears. You can show unfilled bars:
```
sns.histplot(data=planets, x="distance", log_scale=True, fill=False)
```
Or an unfilled step function:
```
sns.histplot(data=planets, x="distance", log_scale=True, element="step", fill=False)
```
Step functions, especially when unfilled, make it easy to compare cumulative histograms:
```
sns.histplot(
data=planets, x="distance", hue="method",
hue_order=["Radial Velocity", "Transit"],
log_scale=True, element="step", fill=False,
cumulative=True, stat="density", common_norm=False,
)
```
When both ``x`` and ``y`` are assigned, a bivariate histogram is computed and shown as a heatmap:
```
sns.histplot(penguins, x="bill_depth_mm", y="body_mass_g")
```
It's possible to assign a ``hue`` variable too, although this will not work well if data from the different levels have substantial overlap:
```
sns.histplot(penguins, x="bill_depth_mm", y="body_mass_g", hue="species")
```
Multiple color maps can make sense when one of the variables is discrete:
```
sns.histplot(
penguins, x="bill_depth_mm", y="species", hue="species", legend=False
)
```
The bivariate histogram accepts all of the same options for computation as its univariate counterpart, using tuples to parametrize ``x`` and ``y`` independently:
```
sns.histplot(
planets, x="year", y="distance",
bins=30, discrete=(True, False), log_scale=(False, True),
)
```
The default behavior makes cells with no observations transparent, although this can be disabled:
```
sns.histplot(
planets, x="year", y="distance",
bins=30, discrete=(True, False), log_scale=(False, True),
thresh=None,
)
```
It's also possible to set the threshold and colormap saturation point in terms of the proportion of cumulative counts:
```
sns.histplot(
planets, x="year", y="distance",
bins=30, discrete=(True, False), log_scale=(False, True),
pthresh=.05, pmax=.9,
)
```
To annotate the colormap, add a colorbar:
```
sns.histplot(
planets, x="year", y="distance",
bins=30, discrete=(True, False), log_scale=(False, True),
cbar=True, cbar_kws=dict(shrink=.75),
)
```
| github_jupyter |
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import glob
import os
from datetime import datetime
import io
import csv
import shutil
year = 2017
YEAR_FLAG = 'train'
img_folder = '/datadrive/timelapse_images_fast'
timeseries_folder = '/datadrive/timeseries_derived_data_products_'+str(year)
target_folder_train = '/datadrive/train_data'
target_folder_test = '/datadrive/test_data'
year_path = [(year, img_folder)]  # or: [(y, img_folder) for y in range(2011, 2018)]
df = pd.DataFrame()
for y, _ in year_path:
path = os.path.join(timeseries_folder,'MH11_resistivity_rock_{}.csv'.format(y))
df = pd.concat((df, pd.read_csv(path)), axis=0)
df2 = pd.DataFrame()
for y, _ in year_path:
path2 = os.path.join(timeseries_folder,'MH25_vaisalawxt520prec_{}.csv'.format(y))
df2 = pd.concat((df2, pd.read_csv(path2)), axis=0)
df3 = pd.DataFrame()
for y, _ in year_path:
    path3 = os.path.join(timeseries_folder,'MH25_vaisalawxt520windpth_{}.csv'.format(y))
df3 = pd.concat((df3, pd.read_csv(path3)), axis=0)
def interpolate_to_timestamps(df, time_stamps):
df = df.resample('4T').interpolate()
ind = [df.index.get_loc(tim, method='nearest') for tim in time_stamps.time]
return df.iloc[ind, :]
def extract_time_stamps(year_path):
"""
input: [(year, root_dir), ...]
"""
dfs = pd.DataFrame()
for y, root_dir in year_path:
path_dict = {}
for filename in glob.iglob(root_dir + '**/'+str(y)+'*/*', recursive=True):
di, filname = os.path.split(filename)
ddi, ydi = os.path.split(di)
path_dict[filname] = extract_time(filname)
df = pd.DataFrame({'time':list(path_dict.values()), 'filename': list(path_dict.keys())}, )
dfs = pd.concat((dfs, df), axis=0)
return dfs
def extract_time(filname):
return datetime.strptime(filname, '%Y%m%d_%H%M%S.JPG')
def extract_summer_days(time_stamps):
start = pd.Timestamp(datetime(year=2000, month=1, day=1, hour=8)).time()
end = pd.Timestamp(datetime(year=2000, month=1, day=1, hour=20)).time()
    time_stamps_day = time_stamps[np.logical_and(time_stamps['time'].dt.time >= start,
                                                 time_stamps['time'].dt.time <= end)]
    start_month = 5   # the window below covers May through July
    end_month = 8
    time_stamps_summerday = time_stamps_day[np.logical_and(time_stamps_day['time'].dt.month < end_month,
                                                           time_stamps_day['time'].dt.month >= start_month)]
return time_stamps_summerday
#time_stamps = pd.read_pickle('pd_time_stamps.pkl')
#time_stamps.head()
time_stamps = extract_time_stamps(year_path)
summer_days = extract_summer_days(time_stamps)
df = df.set_index(pd.DatetimeIndex(df.loc[:, 'time']))
df_interp = interpolate_to_timestamps(df, summer_days)
df2 = df2.set_index(pd.DatetimeIndex(df2.loc[:, 'time']))
df2_interp = interpolate_to_timestamps(df2, summer_days)
df_interp['path'] = summer_days.filename.values
df_interp['label_thresh_rest10_1'] = df_interp.loc[:, 'resistivity_10cm [Mohm]'] < 300
df_interp['label_thresh_rest10_2'] = np.logical_and(300 < df_interp.loc[:, 'resistivity_10cm [Mohm]'],
df_interp.loc[:, 'resistivity_10cm [Mohm]'] < 1200)
df_interp['label_thresh_rest10_3'] = 1200 < df_interp.loc[:, 'resistivity_10cm [Mohm]']
df_interp['label_thresh_rest10'] = np.where(df_interp.loc[:, ['label_thresh_rest10_1',
'label_thresh_rest10_2',
'label_thresh_rest10_3']].values)[1]
df_interp['rain_label'] = df2_interp.loc[:, 'rain_intensity [mm/h]'] > 1
df_interp.query('rain_label == False').loc[:, ['path', 'label_thresh_rest10']].to_csv('labels_rain_resist.csv', header=False)
df_interp.loc[:, ['path', 'label_thresh_rest10']].to_csv('/datadrive/labels.csv', header=False)
#df_interp.loc[:,'label_thresh_rest10'].hist(bins=3)
```
## Preprocessing
Run the following cell to generate the folder layout expected by PyTorch's `ImageFolder` dataset class.
The cell requires a labels.csv file which contains the filenames
of the image files and the corresponding resistivity labels (which can be extended
from binary to multiclass depending on the resistivity threshold).
```
with open('/datadrive/labels.csv') as csvfile:
readCSV = csv.reader(csvfile, delimiter=',')
for row in readCSV:
print(row)
img_name = row[1]
label = row[2]#int(row[2]=='True')
month_folder = row[0][:10]
#print(month_folder)
#print(img_name,label)
#print(os.path.join(img_folder,month_folder,img_name))
if YEAR_FLAG == 'train':
shutil.copyfile(os.path.join(img_folder,month_folder,img_name),os.path.join(target_folder_train,label,img_name))
else:
shutil.copyfile(os.path.join(img_folder,month_folder,img_name),os.path.join(target_folder_test,label,img_name))
```
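For reference, the layout produced above is the one-subdirectory-per-class structure that torchvision's `ImageFolder` expects; a small self-contained sketch (the labels and the filename are made up):

```python
import os
import tempfile

# Miniature version of the class-per-subfolder layout the cell above
# creates (labels "0"/"1"/"2" and the filename are hypothetical).
root = tempfile.mkdtemp()
for label in ("0", "1", "2"):
    os.makedirs(os.path.join(root, label))
    open(os.path.join(root, label, "20170601_080000.JPG"), "w").close()

# ImageFolder derives the class list from the sorted subdirectory names.
classes = sorted(d for d in os.listdir(root)
                 if os.path.isdir(os.path.join(root, d)))
```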
| github_jupyter |
# Running an example simulation
It's all very simple, mostly because this is a simple simulation.
First, let's import stuff we'll need later on:
```
import numpy as np
from pandemic_sim.simulation import Person, Simulation
from pandemic_sim.geometries import RectangleGeometry
from pandemic_sim.health_systems import SimpleHealthSystem
from pandemic_sim.particle_engines import (DefaultParticleEngine,
VelocityVerletIntegrator)
from pandemic_sim.transmission_models import DefaultTransmissionModel
from pandemic_sim.disease_models import DefaultPersonalDiseaseModel
from pandemic_sim.visualizations import (DefaultVisualization,
DefaultPersonsDrawer,
RectangleGeometryDrawer,
SimpleHealthSystemCurvesPlotter)
from pandemic_sim.animators import CelluloidAnimator
```
- `Person` is a class representing a person with attributes such as position in the room, infection status and whether the person is dead or alive
- `RectangleGeometry` represents a rectangular room and implements the force the walls exert on persons
- `Simulation` is the class containing the actual simulation code
- `SimpleHealthSystem` models the influence of the health system on the probability of a person dying during their infection. Setting a threshold and a death rate factor, you can emulate the limited ICU bed capacity.
- `DefaultParticleEngine` is a class which performs the purely physics part of the simulation, meaning the movement of persons and the forces acting between them. It does that with the help of the `VelocityVerletIntegrator` class, which numerically integrates equations of motion
- `DefaultTransmissionModel` is a disease transmission model based on probabilities for exposing other people and for being susceptible
- `DefaultPersonalDiseaseModel` is a model which, for a single person, determines how likely that person is to die or to be cured from the disease in given simulation time step
- `DefaultVisualization` is the... well... default visualization. It shows all persons as solid circles and additionally plots the timeline of infected and immune persons and fatalities
- `RectangleGeometryDrawer`, `DefaultPersonsDrawer`, `SimpleHealthSystemCurvesPlotter`: classes in which the drawing / plotting of different things is implemented
- `CelluloidAnimator` contains code to animate a visualization using the [`celluloid` package](https://github.com/jwkvam/celluloid)
If you want to change any of these components, the object-oriented design should make it relatively easy to, e.g., implement a new visualization. The visualization and animation code is currently not documented, though.
Preamble done, so let's start setting up our simulation:
```
n_persons = 200
room = RectangleGeometry(25, 25)
transmission_model = DefaultTransmissionModel(prob_dist=lambda d: d < 1)
# persons start with random positions and velocities
initial_positions = room.get_random_position_set(n_persons)
persons = [Person(pos=pos,
vel=np.random.uniform(low=(-2, -2),
high=(2, 2)),
personal_disease_model=None, personal_transmission_model=None,
infected=False, immune=False)
for pos in initial_positions]
for p in persons:
p.personal_transmission_model = transmission_model.personal_tm_factory(
p, in_prob=1.0, out_prob=0.05)
p.personal_disease_model = DefaultPersonalDiseaseModel(p, death_probability=0.00015, time_to_heal=150)
```
So we're going to have 200 persons moving around in a room of width 25 and height 25 (arbitrary units). The transmission model has an argument `prob_dist`, a base probability for transmission to occur which depends on the distance between persons. `death_probability` is the probability for an infected person to die in one unit of simulation time. The person-specific transmission model takes a probability for being susceptible (`in_prob`) and a probability for exposing others (`out_prob`). The latter is rather low, simulating the effect of a simple tissue face mask. `time_to_heal` is the number of simulation time units it takes for an infected person to be cured (and thus become immune).
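As a hedged sketch of how the three probabilities might compose (the package's actual formula is not shown here, so this is an illustrative assumption):

```python
def pair_transmission_prob(dist, prob_dist, in_prob, out_prob):
    """Hypothetical composition of the three probabilities described
    above; the package's actual formula may differ."""
    return prob_dist(dist) * in_prob * out_prob

# With prob_dist=lambda d: d < 1, transmission is only possible
# within distance 1, scaled by the susceptibility and exposure probs.
p_close = pair_transmission_prob(0.5, lambda d: float(d < 1), in_prob=1.0, out_prob=0.05)
p_far = pair_transmission_prob(2.0, lambda d: float(d < 1), in_prob=1.0, out_prob=0.05)
```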
Now we want some random persons to start out infected:
```
chosen_ones = np.random.choice(np.arange(n_persons), n_persons // 50)
for i in chosen_ones:
persons[i].infected = True
```
Sorry, guys.
On to setting up the simulation object:
```
health_system = SimpleHealthSystem(threshold=50, death_probability_factor=5.0)
pe = DefaultParticleEngine(cutoff=0.75, geometry_gradient=room.gradient,
integrator_params={'timestep': 0.1},
integrator_class=VelocityVerletIntegrator,
inter_particle_force_constant=20.0,
geometry_force_constant=20.0)
sim = Simulation(room, persons, health_system, transmission_model, particle_engine=pe)
```
The parameters with which the health system object is initialized mean that as soon as more than 50 persons are infected, the probability of a person dying in a time step is increased by a factor of five.
As for the simulation object, there are quite a few parameters:
- `timestep`: the time step for the integration scheme, which approximately solves the equations of motions for persons. The smaller that parameter is, the more accurate and detailed the simulated movement, but the longer the simulation takes to cover some predefined time span
- `cutoff`: distance at which the repulsive force between two persons (which makes them bounce off each other) kicks in
- `inter_particle_force_constant`: determines how hard two people bounce off each other. If this were set to zero, they could just pass through each other unhindered
- `geometry_force_constant`: determines how hard persons bounce off walls
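The velocity Verlet scheme used by the integrator can be sketched generically (a textbook implementation for unit-mass particles, not the package's actual class):

```python
import numpy as np

def velocity_verlet_step(pos, vel, force, dt):
    """One velocity Verlet step for unit-mass particles;
    `force` maps positions to accelerations."""
    acc = force(pos)
    pos_new = pos + vel * dt + 0.5 * acc * dt ** 2
    vel_new = vel + 0.5 * (acc + force(pos_new)) * dt
    return pos_new, vel_new

# Sanity check on a harmonic oscillator: energy stays nearly constant.
pos, vel = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    pos, vel = velocity_verlet_step(pos, vel, lambda x: -x, dt=0.01)
energy = 0.5 * vel ** 2 + 0.5 * pos ** 2
```

The good long-term energy behavior of this scheme is why it is a popular choice for particle simulations like this one.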
Now we can run the simulation for some number `n_steps`:
```
n_steps = 500
simulation_results = sim.run(n_steps)
```
Done? Amazing. Let's use the above-discussed `DefaultVisualization` and the `CelluloidAnimator` to get a `.mp4` file with the animated simulation results:
```
radius = pe.cutoff / 2
viz = DefaultVisualization(simulation_results, RectangleGeometryDrawer(room),
DefaultPersonsDrawer(radius), SimpleHealthSystemCurvesPlotter(health_system))
animator = CelluloidAnimator(viz, out="output.mp4", frame_rate=20)
animator.animate(n_steps, interval=2)
```
The `radius` determines the radius of the circles representing the persons. You want to set this to `pe.cutoff / 2`, otherwise things will look weird (persons might not touch or overlap too much). The dashed red line in the "# infected" plot is the health system threshold.
Animating takes quite a while too, comparable to the time it takes to run the simulation itself. In the animation, blue dots represent healthy persons, orange dots infected persons, and gray dots dead persons.
Let's view the results:
```
from IPython.display import Video
Video("output.mp4")
```
That's it—enjoy playing around with the parameters and the code and if you have suggestions on how to improve all this, feel free to open an issue and / or a pull request!
| github_jupyter |
```
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
train_df = pd.read_csv(Path('Resources/2019loans.csv'))
test_df = pd.read_csv(Path('Resources/2020Q1loans.csv'))
test_df.head()
train_df.head()
# Convert categorical data to numeric and separate target feature for training data
y_train = LabelEncoder().fit_transform(train_df['loan_status'])
X_train = train_df.drop(columns = ['loan_status'])
X_train = pd.get_dummies(X_train)
X_train.head()
# Convert categorical data to numeric and separate target feature for testing data
y_test = LabelEncoder().fit_transform(test_df['loan_status'])
X_test = test_df.drop(columns = ['loan_status'])
X_test = pd.get_dummies(X_test)
X_test.head()
# add missing dummy variables to testing set
for missing in X_train.columns:
if missing not in X_test.columns:
X_test[missing]=0
# Train the Logistic Regression model on the unscaled data and print the model score
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
# Train a Random Forest Classifier model and print the model score
clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train, y_train)
print(f"Training Data Score: {clf.score(X_train, y_train)}")
print(f"Testing Data Score: {clf.score(X_test, y_test)}")
# Scale the data
Scaler = StandardScaler().fit(X_train)
X_train_scaled = Scaler.transform(X_train)
X_test_scaled = Scaler.transform(X_test)
# Train the Logistic Regression model on the scaled data and print the model score
classifier_scaled = LogisticRegression()
classifier_scaled.fit(X_train_scaled, y_train)
print(f"Training Data Score: {classifier_scaled.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {classifier_scaled.score(X_test_scaled, y_test)}")
# Train a Random Forest Classifier model on the scaled data and print the model score
clf_scaled = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_train)
print(f"Training Data Score: {clf_scaled.score(X_train_scaled, y_train)}")
print(f"Testing Data Score: {clf_scaled.score(X_test_scaled, y_test)}")
```
The Random Forest Classifier tested better than the logistic regression model on the scaled data, and the logistic regression model tested better on the unscaled data.
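The scaling behaviour above is expected: tree-based models only compare values within a single feature, so a monotone per-feature rescaling produces the same trees, while logistic regression is sensitive to feature scales. A small synthetic check of the tree-invariance half:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=5, random_state=1)
X_wild = X * np.array([1e4, 1e-3, 1.0, 10.0, 0.1])  # wildly different scales
X_scaled = StandardScaler().fit_transform(X_wild)

# Tree splits only compare values within one feature, so a monotone
# per-feature rescaling yields the same trees and the same score.
rf_raw = RandomForestClassifier(random_state=1).fit(X_wild, y).score(X_wild, y)
rf_std = RandomForestClassifier(random_state=1).fit(X_scaled, y).score(X_scaled, y)
```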
| github_jupyter |
```
import numpy as np
import matplotlib.pyplot as plt
from prml.utils.datasets import load_mnist,load_iris
from prml.kernel_method import BaseKernelMachine
```
# PCA
```
class PCA():
"""PCA
Attributes:
X_mean (1-D array): mean of data
        weight (2-D array): projection matrix
        importance (1-D array): contribution ratio of each component
"""
def __init__(self):
pass
def fit(self,X):
"""fit
Args:
X (2-D array): shape = (N_samples,N_dim), data
"""
N = X.shape[0]
X_mean = X.mean(axis = 0)
S = (X - X_mean).T@(X - X_mean)/N
eig_val,eig_vec = np.linalg.eigh(S)
        eig_val,eig_vec = np.real(eig_val),np.real(eig_vec)
idx = np.argsort(eig_val)[::-1]
eig_val,eig_vec = eig_val[idx],eig_vec[:,idx]
self.X_mean = X_mean
self.importance = eig_val/eig_val.sum()
self.weight = eig_vec
    def transform(self,X,M,return_importance=False,whitening=False):
        """transform
        Args:
            X (2-D array): shape = (N_samples,N_dim), data
            M (int): number of principal components, if M > N_dim, M = N_dim
            return_importance (bool): return importance or not
            whitening (bool): whiten the projected data or not
        Returns:
            X_proj (2-D array): shape = (N_samples,M), projected data
            importance_rate (float): how important X_proj is
        """
        X_centered = X - self.X_mean  # center the data consistently in every branch
        if whitening:
            return X_centered@self.weight[:,:M]/np.sqrt(self.importance[:M])
        elif return_importance:
            return X_centered@self.weight[:,:M],self.importance[:M].sum()
        else:
            return X_centered@self.weight[:,:M]
def fit_transform(self,X,M,return_importance=False,whitening=False):
"""fit_transform
Args:
X (2-D array): shape = (N_samples,N_dim), data
M (int): number of principal component, if M > N_dim, M = N_dim
return_importance (bool): return importance or not
whitening (bool): if whitening or not
        Returns:
            X_proj (2-D array): shape = (N_samples,M), projected data
            importance_rate (float): how important X_proj is
"""
self.fit(X)
return self.transform(X,M,return_importance,whitening)
X,y = load_iris()
pca = PCA()
X_proj = pca.fit_transform(X,2,whitening=True)
fig,axes = plt.subplots(1,1,figsize=(10,7))
for idx,label in enumerate(np.unique(y)):
axes.scatter(x=X_proj[y == label,0],
y=X_proj[y == label,1],
alpha=0.8,
label=label)
axes.set_title("iris PCA (4dim -> 2dim)")
plt.legend()
plt.show()
```
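A standalone check of the whitening idea: dividing each principal component by the square root of its eigenvalue yields unit-variance, uncorrelated components. (Note that the class above divides by the square root of `importance` instead, which whitens only up to a global scale factor.)

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3)) @ np.array([[2.0, 0.0, 0.0],
                                           [1.0, 1.0, 0.0],
                                           [0.0, 0.0, 0.5]])
Xc = X - X.mean(axis=0)
lam, U = np.linalg.eigh(Xc.T @ Xc / len(X))
Z = Xc @ U / np.sqrt(lam)          # divide each component by sqrt(eigenvalue)
cov_Z = Z.T @ Z / len(Z)           # should be (numerically) the identity
```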
### mnist image compression
```
X,y = load_mnist([3])
X = X[:600].reshape(-1,28*28)
X_mean = X.mean(axis=0)
pca = PCA()
pca.fit(X)
img = X[0]
fig = plt.figure(figsize=(10,7))
ax = fig.add_subplot(231)
ax.imshow(img.reshape(28,28))
ax.set_title("original image")
ax.axis("off")
img = img.ravel()
weight = pca.weight
approximate = np.dot(weight.T,img - X_mean)*weight
for n,M in enumerate([1,10,50,100,250]):
ax = fig.add_subplot(int(f"23{n+2}"))
img_proj = X_mean + np.sum(approximate[:,:M],axis = 1)
ax.imshow(img_proj.reshape(28,28))
ax.set_title(f"M = {M}")
ax.axis("off")
plt.show()
```
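A useful property check for PCA compression like the image example above: reconstructing from the top-M components leaves a mean squared error equal to the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated data
Xc = X - X.mean(axis=0)
lam, U = np.linalg.eigh(Xc.T @ Xc / len(X))
lam, U = lam[::-1], U[:, ::-1]           # descending eigenvalues

M = 2
W = U[:, :M]
X_rec = Xc @ W @ W.T                      # project, then reconstruct
mse = ((Xc - X_rec) ** 2).sum(axis=1).mean()
discarded = lam[M:].sum()                 # eigenvalue mass thrown away
```

This is exactly why keeping more components (larger M) makes the reconstructed digit sharper.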
# ProbabilisticPCA
```
class ProbabilisticPCA():
"""ProbabilisticPCA
    finds parameters by the maximum likelihood method, O(D^3)
Attributes:
D (int): original dim of data
mu (1-D array): mean of data
W (2-D array): param of density of data
sigma (float): param of density of data
U (2-D array): eigen vectors of covariance matrix of data
lamda (1-D array): eigen values of covariance matrix of data
"""
def __init__(self) -> None:
pass
def fit(self,X):
"""
Args:
X (2-D array): shape = (N_samples,N_dim), data
"""
N = X.shape[0]
X_mean = X.mean(axis = 0)
S = (X - X_mean).T@(X - X_mean)/N
eig_val,eig_vec = np.linalg.eigh(S)
eig_val,eig_vec = np.real(eig_val),np.real(eig_vec)
idx = np.argsort(eig_val)[::-1]
eig_val,eig_vec = eig_val[idx],eig_vec[:,idx]
self.D = X.shape[1]
self.mu = X_mean
self.U = eig_vec
self.lamda = eig_val
def transform(self,X,M):
"""transform
after this method is called, attributes W and sigma can be used
Args:
X (2-D array): shape = (N_samples,N_dim), data
M (int): number of principal components, must be less than X.shape[1]
Returns:
X_proj (2-D array): shape = (N_samples,M), projected data
"""
if M >= self.D:
raise ValueError("M must be less than X.shape[1]")
sigma = np.mean(self.lamda[M:])
W = self.U[:,:M]@(np.diag((self.lamda[:M] - sigma)**0.5))
Mat = W.T@W + sigma*np.eye(M)
proj_weight = W@np.linalg.inv(Mat) # x -> z
return (X - self.mu)@proj_weight
def fit_transform(self,X,M):
"""fit_transform
after this method is called, attributes W and sigma can be used
Args:
X (2-D array): shape = (N_samples,N_dim), data
M (int): number of principal components, must be less than X.shape[1]
Returns:
X_proj (2-D array): shape = (N_samples,M), projected data
"""
self.fit(X)
return self.transform(X,M)
X,y = load_iris()
ppca = ProbabilisticPCA()
X_proj = ppca.fit_transform(X,2)
fig,axes = plt.subplots(1,1,figsize=(10,7))
for idx,label in enumerate(np.unique(y)):
axes.scatter(x=X_proj[y == label,0],
y=X_proj[y == label,1],
alpha=0.8,
label=label)
axes.set_title("iris Probabilistic PCA (4dim -> 2dim)")
plt.legend()
plt.show()
```
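A quick numeric check of the maximum-likelihood solution used in `transform`: with `sigma` the mean of the discarded eigenvalues and `W = U_M (Lambda_M - sigma I)^{1/2}`, the model covariance `W @ W.T + sigma * I` keeps the top-M eigenvalues of S and flattens the rest to sigma. A sketch on synthetic data:

```
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))
N, D, M = len(X), 4, 2

Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / N
lam, U = np.linalg.eigh(S)
order = np.argsort(lam)[::-1]
lam, U = lam[order], U[:, order]

sigma = lam[M:].mean()                             # ML noise variance
W = U[:, :M] @ np.diag(np.sqrt(lam[:M] - sigma))   # ML loading matrix
C = W @ W.T + sigma * np.eye(D)                    # model covariance

model_eigs = np.sort(np.linalg.eigvalsh(C))[::-1]
```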
# Probabilistic PCA by EM
```
class ProbabilisticPCAbyEM():
"""ProbabilisticPCAbyEM
Attributes:
M (int): dimension of latent variables
mu (1-D array): mean of data
W (2-D array): param of density of data
sigma (float): param of density of data
"""
def __init__(self,max_iter=100,threshold=1e-5) -> None:
"""
Args:
max_iter (int): maximum iteration
threshold (float): threshold
"""
self.max_iter = max_iter
self.threshold = threshold
def fit(self,X,M,find_M=False,alpha_limit=10):
"""
Args:
X (2-D array): shape = (N_samples,N_dim), data
M (int): dimension of latent variables
find_M (bool): if appropriate M will be found or not, if this is True, appropriate_M <= M
alpha_limit (float): if alpha is more than this, this component is removed
"""
N = X.shape[0]
D = X.shape[1]
# init param
self.mu = X.mean(axis = 0)
W = np.random.randn(D,M)
sigma = np.random.rand() + 1e-1
if find_M:
alpha = np.random.rand(M) + 1e-1
Y = X - self.mu
Ysum = np.sum(Y**2)
for _ in range(self.max_iter):
# E step
Mat = W.T@W + sigma*np.eye(M)
Minv = np.linalg.inv(Mat)
E_z = Y@W@Minv
E_zz = sigma*Minv + E_z.reshape(-1,M,1)@E_z.reshape(-1,1,M)
# M step
if find_M:
W_new = Y.T@E_z@np.linalg.inv(E_zz.sum(axis = 0) + sigma*np.diag(alpha))
else:
W_new = Y.T@E_z@np.linalg.inv(E_zz.sum(axis = 0))
sigma_new = (Ysum - 2*np.diag(E_z@W_new.T@Y.T).sum() + np.diag(np.sum(E_zz@W_new.T@W_new,axis=0)).sum())/(N*D)
diff = ((sigma_new - sigma)**2 + np.mean((W_new - W)**2)) ** 0.5
if diff < self.threshold:
W = W_new
sigma = sigma_new
break
W = W_new
sigma = sigma_new
if find_M:
alpha = D/np.diag(W.T@W)
idx = alpha < alpha_limit
alpha = alpha[idx]
W = W[:,idx]
M = idx.astype("int").sum()
self.M = M
self.W = W
self.sigma = sigma
def transform(self,X):
"""transform
Args:
X (2-D array): shape = (N_samples,N_dim), data
Returns:
X_proj (2-D array): shape = (N_samples,M), projected data
Note:
unlike the other methods, you choose M when you call `fit()`
"""
Mat = self.W.T@self.W + self.sigma*np.eye(self.M)
proj_weight = self.W@np.linalg.inv(Mat) # x -> z
return (X - self.mu)@proj_weight
def fit_transform(self,X,M,find_M=False,alpha_limit=10):
"""fit_transform
after this method is called, attributes W and sigma can be used
Args:
X (2-D array): shape = (N_samples,N_dim), data
M (int): number of principal components, must be less than X.shape[1]
find_M (bool): if appropriate M will be found or not, if this is True, appropriate_M <= M
alpha_limit (float): if alpha is more than this, this component is removed
Returns:
X_proj (2-D array): shape = (N_samples,M), projected data
"""
self.fit(X,M,find_M,alpha_limit)
return self.transform(X)
```
You can find an appropriate `M` automatically with the EM algorithm:
```
X,y = load_iris()
em = ProbabilisticPCAbyEM(max_iter=1000)
X_proj = em.fit_transform(X,4,find_M=True)
M = X_proj.shape[1]
if M == 1:
fig,ax = plt.subplots(1,1,figsize=(10,7))
for idx,label in enumerate(np.unique(y)):
ax.hist(x=X_proj[y == label,0],
alpha=0.8,
label=label)
ax.set_title("iris PCA by EM (4dim -> 1dim)")
plt.legend()
plt.show()
elif M == 2:
fig,axes = plt.subplots(1,1,figsize=(10,7))
for idx,label in enumerate(np.unique(y)):
axes.scatter(x=X_proj[y == label,0],
y=X_proj[y == label,1],
alpha=0.8,
label=label)
axes.set_title("iris PCA by EM (4dim -> 2dim)")
plt.legend()
plt.show()
else:
print(f"M = {M} >= 3 ...")
```
# Factor Analysis
```
class FactorAnalysis():
"""FactorAnalysis
"""
def __init__(self,max_iter=100,threshold=1e-5) -> None:
"""
Args:
max_iter (int): maximum iteration
threshold (float): threshold
"""
self.max_iter = max_iter
self.threshold = threshold
def fit(self,X,M):
"""fit
"""
N = X.shape[0]
D = X.shape[1]
self.mu = X.mean(axis = 0)
W = np.random.randn(D,M)
Sigma = np.random.rand(D) + 1e-1
Y = X - self.mu
S = Y.T@Y/N
for _ in range(self.max_iter):
# E step
G = np.linalg.inv(np.eye(M) + (W.T/Sigma)@W)
E_z = Y/Sigma@W@G.T
E_zz = G + E_z.reshape(-1,M,1)@E_z.reshape(-1,1,M)
# M step
W_new = Y.T@E_z@np.linalg.inv(E_zz.sum(axis = 0))
Sigma_new = np.diag(S - W_new@E_z.T@Y/N)
diff = (np.mean((Sigma_new - Sigma)**2) + np.mean((W_new - W)**2))**0.5
if diff < self.threshold:
W = W_new
Sigma = Sigma_new
break
W = W_new
Sigma = Sigma_new
self.W = W
self.Sigma = Sigma
self.G = G = np.linalg.inv(np.eye(M) + (W.T/Sigma)@W)
def transform(self,X):
"""transform
Args:
X (2-D array): shape = (N_samples,N_dim), data
Returns:
X_proj (2-D array): shape = (N_samples,M), projected data
"""
return (X - self.mu)/self.Sigma@self.W@self.G.T
def fit_transform(self,X,M):
"""fit_transform
after this method is called, attributes W and Sigma can be used
Args:
X (2-D array): shape = (N_samples,N_dim), data
M (int): number of principal components, must be less than X.shape[1]
Returns:
X_proj (2-D array): shape = (N_samples,M), projected data
"""
self.fit(X,M)
return self.transform(X)
X,y = load_iris()
fa = FactorAnalysis()
X_proj = fa.fit_transform(X,M=2)
fig,axes = plt.subplots(1,1,figsize=(10,7))
for idx,label in enumerate(np.unique(y)):
axes.scatter(x=X_proj[y == label,0],
y=X_proj[y == label,1],
alpha=0.8,
label=label)
axes.set_title("iris Factor Analysis (4dim -> 2dim)")
plt.legend()
plt.show()
```
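The one-line E-step `Y/Sigma@W@G.T` relies on the posterior mean E[z_n] = G Wᵀ Ψ⁻¹ y_n with diagonal Ψ. A small check on random parameters that the vectorized form matches the row-by-row formula:

```
import numpy as np

rng = np.random.default_rng(3)
D, M, N = 4, 2, 5
W = rng.normal(size=(D, M))
Sigma = rng.random(D) + 0.5               # diagonal noise variances Psi
Y = rng.normal(size=(N, D))               # centered observations

G = np.linalg.inv(np.eye(M) + (W.T / Sigma) @ W)
E_z = Y / Sigma @ W @ G.T                 # vectorized E-step, as in fit()

# row-by-row posterior means E[z_n] = G W^T Psi^{-1} y_n
E_z_rows = np.stack([G @ W.T @ (y / Sigma) for y in Y])
```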
# Kernel PCA
```
class KernelPCA(BaseKernelMachine):
"""KernelPCA
Attributes:
a (2-D array): projection weight of pca
kernel_func (function) : kernel function k(x,y)
gram_func (function) : function which make gram matrix
"""
def __init__(self,kernel="Linear",sigma=0.1,a=1.0,b=0.0,h=None,theta=1.0):
"""
Args:
kernel (string) : kernel type (default "Linear"). you can choose "Linear","Gaussian","Sigmoid","RBF","Exponential"
sigma (float) : for "Gaussian" kernel
a,b (float) : for "Sigmoid" kernel
h (function) : for "RBF" kernel
theta (float) : for "Exponential" kernel
"""
super(KernelPCA,self).__init__(kernel=kernel,sigma=sigma,a=a,b=b,h=h,theta=theta)
def fit(self,X):
"""
Args:
X (2-D array): shape = (N_samples,N_dim), data
"""
# make gram mat
N = X.shape[0]
gram_mat = self.gram_func(X)
divN = np.ones((N,N))/N
K = gram_mat - divN@gram_mat - gram_mat@divN + divN@gram_mat@divN
# eig
eig_val,eig_vec = np.linalg.eigh(K)
eig_val,eig_vec = np.real(eig_val),np.real(eig_vec)
idx = np.argsort(eig_val)[::-1]
eig_val,eig_vec = eig_val[idx],eig_vec[:,idx]
plus = eig_val > 0
eig_val,eig_vec = eig_val[plus],eig_vec[:,plus] # if dimension of kernel space is lower than N, K can have eigen values of 0
eig_vec /= eig_val**0.5
self.a = eig_vec
self.X = X
def transform(self,X,M):
"""transform
Args:
X (2-D array): shape = (N_samples,N_dim), data
Returns:
X_proj (2-D array): shape = (N_samples,M), projected data
"""
gram_mat = np.zeros((self.X.shape[0],X.shape[0]))
for i in range(self.X.shape[0]):
gram_mat[i] = np.array([self.kernel_func(self.X[i],X[j]) for j in range(X.shape[0])])
return gram_mat.T@self.a[:,:M]
def fit_transform(self,X,M):
"""fit_transform
Args:
X (2-D array): shape = (N_samples,N_dim), data
M (int): number of principal components, must be less than X.shape[1]
Returns:
X_proj (2-D array): shape = (N_samples,M), projected data
"""
self.fit(X)
return self.transform(X,M)
X,y = load_iris()
kpca = KernelPCA(kernel="Gaussian",sigma=3.0)
X_proj = kpca.fit_transform(X,2)
fig,axes = plt.subplots(1,1,figsize=(10,7))
for idx,label in enumerate(np.unique(y)):
axes.scatter(x=X_proj[y == label,0],
y=X_proj[y == label,1],
alpha=0.8,
label=label)
axes.set_title("iris KPCA (4dim -> 2dim)")
plt.legend()
plt.show()
```
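As a sanity check on the double-centering and the 1/√λ normalization, kernel PCA with a linear kernel should reproduce ordinary PCA projections up to the sign of each component. A sketch on synthetic data, independent of the `BaseKernelMachine` class:

```
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 4))
Xc = X - X.mean(axis=0)
N, M = len(X), 2

# double-centered Gram matrix for the linear kernel k(x,y) = x.y
G = X @ X.T
one = np.ones((N, N)) / N
K = G - one @ G - G @ one + one @ G @ one

vals, vecs = np.linalg.eigh(K)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order][:M], vecs[:, order][:, :M]
a = vecs / np.sqrt(vals)                 # normalize as in fit()
proj_kernel = K @ a

# ordinary PCA projection on the same data
_, U = np.linalg.eigh(Xc.T @ Xc)
proj_plain = Xc @ U[:, ::-1][:, :M]
```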
# <center> Volume of the Snub Dodecahedron
<center><font size="3">Mark S. Adams</font></center>
<center><font size="3">December 7, 2020</font></center>
<br>
<font size="3"><strong>Abstract</strong></font>
<br>
Volume solutions for the Snub Dodecahedron, the thirteenth Archimedean solid, are given following HSM Coxeter, HC Rajpoot, MS Adams, and numerical methods. The two closed-form solutions have different expressions, but are the same number. The alternate solution is shown to yield a possible origin of one third powers of the golden ratio.
<strong>Keywords:</strong> Snub dodecahedron · Archimedean solids · Golden ratio
<br>
<strong>Mathematics Subject Classification (2000):</strong> 51M15, (51M20)
<br>
<font size="3"><strong>Contents</strong></font>
<br>
1 Closed-form volume of the Snub Dodecahedron
<br>
2 Conventional solution
<br>
3 Harish Chandra Rajpoot
<br>
4 Numerical
<br>
5 Alternate solution
<br>
Origin to one third powers of the golden ratio
<br>
References
<br>
## 1 Closed-form volume of the Snub Dodecahedron
#### The two closed-form solutions have different expressions
<br>
$$
\text{Conventional solution =}\; \frac{12\color{MediumVioletRed}{\eta^2}\color{Black}{(3}\color{DarkGoldenrod}{\varphi}
\color{Black}{+1)-}\color{MediumVioletRed}{\eta}
\color{Black}{(36}\color{DarkGoldenrod}{\varphi}
\color{Black}{+7)-(53}\color{DarkGoldenrod}{\varphi}
\color{Black}{+6)}}{6\sqrt{(3-\color{MediumVioletRed}{\eta^2}\color{black}{)^3}}}
$$
<br>
$$
\text{Alternate solution =}\;
\frac{10\color{DarkGoldenrod}{\varphi}}{3}\sqrt{ \color{DarkGoldenrod}{\varphi^2}
\color{Black}{+3}\color{MediumVioletRed}{\eta}
\color{Black}{(}\color{DarkGoldenrod}{\varphi}
\color{Black}{+}\color{MediumVioletRed}{\eta}
\color{Black}{)}}
\,+\,\frac{\color{DarkGoldenrod}{\varphi^2}}{2}\sqrt{ 5 + 5\sqrt{5}\color{DarkGoldenrod}{\varphi}\color{MediumVioletRed}{\eta}
\color{Black}{(}\color{DarkGoldenrod}{\varphi}
\color{Black}{+}\color{MediumVioletRed}{\eta}
\color{Black}{)}}
\\
$$
<br>
$$
\text{where } \color{MediumVioletRed}{\eta}\text{ is defined as:} \quad
\color{MediumVioletRed}{\eta\,\equiv\,
\sqrt[3]{ \frac{
\color{DarkGoldenrod}{\varphi}
\color{MediumVioletRed}{}}{2} + \frac{1}{2} \sqrt{
\color{DarkGoldenrod}{\varphi}
\color{MediumVioletRed}{-}\frac{5}{27}}}\;+\;}
\color{MediumVioletRed}{
\sqrt[3]{ \frac{
\color{DarkGoldenrod}{\varphi}
\color{MediumVioletRed}{}}{2} - \frac{1}{2} \sqrt{
\color{DarkGoldenrod}{\varphi}
\color{MediumVioletRed}{-}\frac{5}{27}}}}
$$
Both expressions are the same number, 37.616..., as shown in this
<em>Wolfram Cloud notebook</em>[<sup>1</sup>](#fn1).
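The agreement is also easy to check numerically; a few lines evaluating both closed forms (for unit edge length):

```
import numpy as np

phi = (1 + np.sqrt(5)) / 2                      # golden ratio
eta = (np.cbrt(phi/2 + np.sqrt(phi - 5/27)/2)
       + np.cbrt(phi/2 - np.sqrt(phi - 5/27)/2))

conventional = ((12*eta**2*(3*phi + 1) - eta*(36*phi + 7) - (53*phi + 6))
                / (6*np.sqrt((3 - eta**2)**3)))

alternate = ((10*phi/3)*np.sqrt(phi**2 + 3*eta*(phi + eta))
             + (phi**2/2)*np.sqrt(5 + 5*np.sqrt(5)*phi*eta*(phi + eta)))
```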
## 2 Conventional solution
The conventional solution follows one of the world's finest and most eloquent geometers, <strong>Harold Scott MacDonald Coxeter</strong>, in his <strong>Uniform Polyhedra</strong>[<sup>2</sup>](#fn2).
<br>
<br>
<font size="3"><strong>Uniform Polyhedra, Section 10. The Snub Polyhedra</strong></font>
<br>
<br>
<font size="2">We construct $\vert$ p q r by regarding the spherical triangles (p q r) as being alternately white
and black (see § 3, especially figure 6). The three white triangles that surround a black one
contain corresponding points forming an equilateral triangle which we may called a
‘snub face' of $\vert$ pqr. One of these three white triangles is derived from another, sharing
with it the vertex P (say), by a rotation through $\frac{2\pi}{p}$ about P. If this rotation takes the chosen
point $C\prime\prime\prime$ in the first triangle to $C\prime\prime$ in the second, we have an isosceles triangle
$C\prime\prime\prime PC\prime\prime$
whose base
$C\prime\prime\prime C\prime\prime$
(opposite to the angle $\frac{2\pi}{p}$ at P) is one side of the snub face. Solving this
isosceles triangle, we find:
$
\sin PC\prime\prime\sin\frac{\pi}{p}\;=\;\sin\frac{1}{2}C\prime\prime C\prime\prime\prime
$
Besides the usual twelve pentagrams and sixty ‘snub' triangles, they have each forty more triangles, lying by pairs in twenty planes (the face-planes of an icosahedron).</font>
$$
Let\;xi\;\color{Teal}{\xi}\;be\;the\;real\;root\;of\;the\;polynomial:\;
X^3 + 2X^2 - \color{DarkGoldenrod}{\varphi^2} = 0\quad where\;
\color{Teal}{\xi\,}=\,
\frac{
\color{DarkGoldenrod}{\varphi}}{
\color{MediumVioletRed}{\eta}
}
$$
MathWorld--A Wolfram Web Resource by <strong>Eric Weisstein</strong>
shows, in <strong>Snub Dodecahedron</strong>[<sup>3</sup>](#fn3), Coxeter's polynomial expression below, from which the closed-form expression is derived as shown in the Wikipedia article <strong>Snub Dodecahedron</strong>[<sup>4</sup>](#fn4). The volume is the real root x.
$$
187445810737515625 -
182124351550575000\,x^2 +
6152923794150000\,x^4 +
\\
1030526618040000\,x^6 +
162223191936000\,x^8 -
3195335070720\,x^{10} +
2176782336\,x^{12} = 0
$$
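Since the polynomial is even, substituting y = x² halves the degree before root finding; a numeric check that 37.6166... is among the resulting values of x:

```
import numpy as np

# coefficients of the polynomial above in y = x^2, highest power first
coeffs = [2176782336, -3195335070720, 162223191936000,
          1030526618040000, 6152923794150000,
          -182124351550575000, 187445810737515625]
y_roots = np.roots(coeffs)
real_y = y_roots[np.abs(y_roots.imag) < 1e-6].real
volumes = np.sqrt(real_y[real_y > 0])     # candidate values of x
```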
## 3 Harish Chandra Rajpoot
<strong>Harish Chandra Rajpoot</strong> published <strong>Optimum Solution of Snub Dodecahedron</strong>[<sup>5</sup>](#fn5), in which HCR's Theory of Polygon and the Newton-Raphson method are used to calculate the volume of the Snub Dodecahedron.
After only 7 iterations, the calculated volume matches the closed-form solutions to 50 digits of accuracy.
\begin{align*}
\\&
Iterate\,to\,find\,Circumradius\quad
C_{0} = 2.3\quad C_{n+1}=C_{n}-\frac{f(C_{n})}{f\prime (C_{n})}
\\&
f(x)= 256(3-\sqrt{5})x^8 - 128(13-2\sqrt{5})x^6 + 32(35-3\sqrt{5})x^4 - 16(19-\sqrt{5})x^2 +(29-\sqrt{5})
\\&
f\prime(x)= 2048(3-\sqrt{5})x^7 - 768(13-2\sqrt{5})x^5 + 128(35-3\sqrt{5})x^3 - 32(19-\sqrt{5})x
\\&
Volume = \left( \frac{20\sqrt{3C^2-1}}{3} +
\sqrt{ \frac{10(5+2\sqrt{5})C^2 - 5(7+3\sqrt{5})}{2}} \right)
\end{align*}
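The iteration is a standard Newton-Raphson update C ← C − f(C)/f′(C); a short sketch reproducing the circumradius and the volume:

```
import math

s5 = math.sqrt(5)

def f(x):
    return (256*(3 - s5)*x**8 - 128*(13 - 2*s5)*x**6
            + 32*(35 - 3*s5)*x**4 - 16*(19 - s5)*x**2 + (29 - s5))

def fprime(x):
    return (2048*(3 - s5)*x**7 - 768*(13 - 2*s5)*x**5
            + 128*(35 - 3*s5)*x**3 - 32*(19 - s5)*x)

C = 2.3                                   # starting circumradius guess
for _ in range(20):                       # quadratic convergence; 20 is plenty
    C -= f(C) / fprime(C)

volume = (20*math.sqrt(3*C**2 - 1)/3
          + math.sqrt((10*(5 + 2*s5)*C**2 - 5*(7 + 3*s5))/2))
```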

## 4 Numerical
<strong>Five volume calculations for Snub Dodecahedron</strong>[<sup>6</sup>](#fn6)
is a supplement to this essay. It contains five Python classes, each calculating the volume of the Snub Dodecahedron in a different way: the four methods above plus a 3D numerical method, in which 3D distances drive a numerical root finder. Two triangle objects are defined as adjacent triangles on a regular icosahedron. The algorithm is applied to one point on the plane of each triangle so that a side of an inscribed snub triangle has the same length as a side of a non-inscribed snub triangle.
## 5 Alternate solution
Mark Adams published <strong>Archimedean Platonic Solids</strong>[<sup>7</sup>](#fn7). The Snub Dodecahedron is inscribed onto its base icosahedron of unit edge length.

<math>
\begin{array}{ll}
Base\,Icosahedron\,faces:\quad \color{blue}{\triangle A\,B\,C,\; \triangle A\prime \,B\,C}
&
\color {blue}{\overline{AB}} \,=\,
\color {blue}{\overline{BC}} \,=\,
\color {blue}{\overline{CA}} \,=\,
\color {blue}{\overline{A\prime}B} \,=\,
\color {blue}{\overline{CA\prime}} \,=\, 1
\\
Inscribed\,Snub\,Dodecahedron\,faces:\quad \color {blue}{\triangle g\,j\,k,\; \triangle g\prime\,j\prime\,k\prime}
&
\color {red}{\overline{gj}} \,=\,
\color {red}{\overline{jk}} \,=\,
\color {red}{\overline{kg}} \,=\,
\color {red}{\overline{gg\prime}} \,=\,
\color {red}{\overline{g\prime j}} \,=\, \color{DarkGreen}{D\,=\, \sqrt{3}\sin\alpha-\cos\alpha}
\\
Non\,Inscribed\,Snub\,Dodecahedron\,faces:\quad \color {red}{\triangle g\,g\prime\,j,\; \triangle g\prime\,g\,j\prime}
&
\overline{\color {red}{g}\color {blue}{i}} \,=\,
\overline{\color {blue}{i}\color {red}{j}} \,=\,
\overline{\color {blue}{i}\color {DarkGreen}{d}} \,=\,
\overline{\color {red}{g}\color {DarkGreen}{d}} \,=\,
\overline{\color {DarkGreen}{d}\color {red}{g\prime}} \,=\,
\color{DarkGreen}{\frac{D}{2}}
\\
Mid\,points\,on\, \color {red}{\triangle g\,g\prime\,j}:\quad\color {blue}{i} \; \color {DarkGreen}{d} \; \color {red}{m}
&
\color{red}{ \overline{\color{blue}{a}\color{DarkGreen}{f}}^2} \,+\,
\color{red}{ \overline{\color{DarkGreen}{f}\color{red}{g}}^2} \,=\,
\color{blue}{ \overline{\color{blue}{a}\color{red}{g}}^2} \,=\,
\color{DarkGreen}{\frac{D^2}{3}}
\\
Center\,point\,for\,both\,\color {blue}{\triangle A\,B\,C\,and\,\triangle g\,j\,k:\quad a}
&
\color{blue}{ \overline{\color{blue}{a}\color{DarkGreen}{h}}^2} \,+\,
\color{blue}{ \overline{\color{DarkGreen}{h}\color{blue}{i}}^2} \,=\,
\color{blue}{ \overline{\color{blue}{a}\color{blue}{i}}^2} \,=\,
\color{DarkGreen}{\frac{D^2}{12}}
\\
Center\,point\,for\,both\,\color {blue}{\triangle A\prime\,B\,C\,and\,\triangle g\prime\,j\prime\,k\prime:\quad a\prime}
&
\color{blue}{ \overline{\color{blue}{a}\color{DarkGreen}{h}}} \,=\,
\color{blue}{ \overline{ai}\cos(60-\alpha)} \,=\,
\color{DarkGreen}{\frac{D}{4\sqrt{3}} (\cos\alpha+\sqrt{3}\sin\alpha)}
\\
Center\,point\,for\,both\,Icosahedron\,and\,Snub Dodecahedron:\quad \color{blue}{c}\quad\quad
&
\color{blue}{ \overline{\color{blue}{a}\color{DarkGreen}{f}}} \,=\,
\color{blue}{ \overline{a\color{red}{g}}\cos\alpha} \,=\,
\color{DarkGreen}{ \frac{D}{ \sqrt{3}}\cos\alpha}
\\
Right\,angles:\quad
\color{DarkGreen}{\angle} \color{DarkGreen}{f}\color{DarkGreen}{e}\color{DarkGreen}{d}\;
\color{DarkGreen}{\angle} \color{DarkGreen}{f}\color{DarkGreen}{d}\color{red}{b}\;
\color{blue}{\angle} \color{blue}{a}\color{DarkGreen}{f}\color{red}{g}\;
\color{DarkGreen}{\angle} \color{blue}{a}\color{DarkGreen}{h}\color{blue}{i}\;
\color{blue}{\angle} \color{blue}{a\prime}\color{DarkGreen}{f\prime}\color{red}{g\prime}\;
&
\color{red}{ \overline{\color{DarkGreen}{f}\color{red}{b}}} \,=\,
\color{red}{ \overline{\color{blue}{a}\color{red}{b}}} \,-\,
\color{blue}{ \overline{\color{blue}{a}\color{DarkGreen}{f}}} \,=\,
\color{DarkGreen}{\frac{1}{2\sqrt{3}} (1-2D\cos\alpha})
\\
Distance\,of\,the \,Snub\,Dodecahedron\,edge:\quad\color{DarkGreen}{D}
&
\color{red}{ \overline{ \color{DarkGreen}{e}\color{red}{b}} } \,=\,
\color{red}{ \overline{ \color{DarkGreen}{d}\color{red}{b}} \,\sin\beta} \,=\,
\color{red}{ \overline{ \color{DarkGreen}{f}\color{red}{b}} \,\sin^2\beta}
\\
&
\color{red}{ \overline{ \color{DarkGreen}{e}\color{red}{b}}^2 } \,+\,
\color{blue}{ \overline{ \color{DarkGreen}{e}\color{DarkGreen}{d}}^2} \,=\,
\color{red}{ \overline{ \color{DarkGreen}{e}\color {red}{b}}^2} \,
(\color{red}{1+}\color{blue}{\cot^2\beta} \color{red}{)} \,=\,
\color{red}{ \overline{ \color{DarkGreen}{e}\color{red}{b}}}\,
\color{red}{ \overline{ \color{DarkGreen}{f}\color{red}{b}}}
\end{array}
</math>
$
\text{Equations 1 and 2 are second order equations of} \;\color{DarkGreen}{D}.
$
$
\quad \color{red}{ \overline{\color{red}{g}\color{DarkGreen}{d}}^2} \,-\,
\color{DarkGreen}{\frac{D^2}{4}} \,=\,
\color{red}{ \overline{\color{red}{g}\color{DarkGreen}{f}}^2} \,+\,
\color{blue}{ ( \overline{\color{blue}{a}\color{red}{b}}} \,-\,
\color{blue}{ \overline{\color{blue}{a}\color{DarkGreen}{f}}} \,-\,
\color{DarkGreen}{ \overline{\color{DarkGreen}{e}\color{red}{b}} )^2 } \,+\,
\color{DarkGreen}{ \overline{\color{DarkGreen}{e}\color{DarkGreen}{d}}^2} \,-\,
\color{DarkGreen}{\frac{D^2}{4}} \,=\,0
$
$
\quad \color{blue}{ \overline{\color{blue}{a}\color{red}{g}}^2} \,+\,
\color{blue}{ \overline{\color{blue}{a}\color{red}{b}}^2} \,-\,
\color{blue} {2\,\overline{\color{blue}{a}\color{red}{b}}} \,
\color{blue}{ \overline{\color{blue}{a}\color{DarkGreen}{f}}} \,+\,
\color{DarkGreen} { \overline{\color{DarkGreen}{e}\color{red}{b}} \, \lbrack } \,
\color{blue}{2\,\overline{\color{blue }{a}\color{DarkGreen}{f}}} \,-\,
\color{blue}{2\,\overline{\color{blue }{a}\color{red }{b}}} \,+\,
\color{DarkGreen}{ \overline{\color{DarkGreen}{f}\color{red}{b}} \rbrack }\,-\,
\color{DarkGreen}{\frac{D^2}{4}} \,=\,0
$
$
\quad \color{red}{ \frac{D^2}{3} \,+\, \frac{1}{12} \,-\, \frac{D}{3} \, \cos\alpha \,+\, }
\color{red}{ \sin^2\beta \, \frac{1\,-\,2\,D\cos\alpha}{2\sqrt{3}}}
\color{red}{ \left[ \frac{2\,D\cos\alpha}{\sqrt{3}} \,-\, \frac{1}{\sqrt{3}} \,+\, \frac{1\,-\,2\,D\cos\alpha}{2\sqrt{3}} \right] } \,-\,
\color{DarkGreen}{\frac{D^2}{4}} \,=\,0
$
$
\color{red}{ D^2 \,-\, 4D\,\cos\alpha \,+\, 1 \,-\, \sin^2\beta \,(1 \,-\, 2\,D\,\cos\alpha)^2 \,=\, 0 \quad(Eq.1)}
$
$
\quad \color{blue}{ \overline{\color{blue}{i}\color{DarkGreen}{d}}^2} \,-\,
\color{DarkGreen}{\frac{D^2}{4}} \,=\,
\color{DarkGreen}{ \overline{\color{DarkGreen}{h}\color{blue}{i}}^2} \,+\,
\color{blue}{ ( \overline{\color{blue}{a}\color{red}{b}}} \,-\,
\color{blue}{ \overline{\color{blue}{a}\color{DarkGreen}{h}}} \,-\,
\color{DarkGreen}{ \overline{\color{DarkGreen}{e}\color{red}{b}} } \color{blue }{)^2} \,+\,
\color{DarkGreen}{ \overline{\color{DarkGreen}{e}\color{DarkGreen}{d}}^2} \,-\,
\color{DarkGreen}{\frac{D^2}{4}} \,=\,0
$
$
\quad \color{blue}{ \overline{\color{blue}{a}\color{blue}{i}}^2} \,+\,
\color{blue}{ \overline{\color{blue}{a}\color{red}{b}}^2} \,-\,
\color{blue} {2\,\overline{\color{blue}{a}\color{red}{b}}} \,
\color{blue}{ \overline{\color{blue}{a}\color{DarkGreen}{h}}} \,+\,
\color{DarkGreen} { \overline{\color{DarkGreen}{e}\color{red}{b}} \, \lbrack } \,
\color{blue}{2\,\overline{\color{blue }{a}\color{DarkGreen}{h}}} \,-\,
\color{blue}{2\,\overline{\color{blue }{a}\color{red }{b}}} \,+\,
\color{DarkGreen}{ \overline{\color{DarkGreen}{f}\color{red}{b}} \rbrack }\,-\,
\color{DarkGreen}{\frac{D^2}{4}} \,=\,0
$
$
\quad \color{blue}{ \frac{D^2}{12} \,+\, \frac{1}{12} \,-\, \frac{D}{12} \, (\cos\alpha \,+\, \sqrt{3}\sin\alpha) \,+\, }
\color{blue}{ \sin^2\beta \, \frac{1\,-\,2\,D\cos\alpha}{2\sqrt{3}}}
\color{blue}{ \left[ \frac{D(\cos\alpha \,+\, \sqrt{3}\sin\alpha)}{2\sqrt{3}} \,-\, \frac{1}{\sqrt{3}} \,+\,\frac{1\,-\,2\,D\cos\alpha}{2\sqrt{3}}\right]}\,-\,
\color{DarkGreen}{\frac{D^2}{4}} \,=\,0
$
$
\color{blue}{ -2D^2 \,-\, D\,(\cos\alpha \,+\, \sqrt{3}\sin\alpha) \,+\, 1 \,-\, \sin^2\beta \,(1 \,-\, 2\,D\,\cos\alpha) \,}
\color{blue}{ \left[ 1 \,+\, D\,(\cos\alpha \,+\, \sqrt{3}\sin\alpha) \right] \,=\, 0 \quad(Eq.2)}
$
$
\text{Equations 3 and 4 are trigonometric relations defining the gamma letters:} \;\color{red}{\gamma}\;\text{and}\;\color{blue}{\Gamma}.
$
$
\color{red}{\gamma \,\equiv\,\sqrt{3}\,\tan\alpha} \quad
\color{blue}{\Gamma \,\equiv\,3\cos\alpha-\sqrt{3}\,\sin\alpha}
$
$
\color{blue}{\cos^2\alpha} \,+\, \color{red}{\sin^2\alpha} \,=\, 1 \,=\,
\color{blue}{\cos^2\alpha \left(1 \,+\, \color{red}{\frac{\gamma^2}{3}}\right) } \quad or \quad
\color{blue}{ \cos^2\alpha\,=\,\frac{1}{ 1 \,+\, \color{red}{ \frac{\gamma^2}{3}} }}
$
$
\color{blue}{ \Gamma\,\cos\alpha \,=\, (\cos\alpha -\color{red}{\sqrt{3}\sin\alpha}) \cos\alpha }\,=\,
\color{blue}{ (3 \,-\, \color{red}{\gamma}) \, \cos^2\alpha \,=\, \frac{3\,-\,\color{red}{\gamma}}{1 \,+\, \color{red}{ \frac{\gamma^2}{3}} }}
$
$
\color{red}{ 3 \left( 1 \,+\, \frac{\gamma^2}{3} \right)}
\color{red}{ \left[ \color{blue}{\Gamma\,\cos\alpha\,-\,}
\color{blue}{ \frac{3\,-\,\color{red}{\gamma}}{1 \,+\, \color{red}{ \frac{\gamma^2}{3}} }} \right]} \,=\,
\color{red}{ \overbrace{\Gamma\,\cos\alpha}^{a}\;\gamma^2 \,+\, \overbrace{3}^{b}\;\gamma + \overbrace{3( \Gamma\,\cos\alpha\,-\,3)}^{c} }\,=\,0
$
$
Positive\,Root:\;\color{red}{ \gamma \,=\, \frac{-b+\sqrt{b^2-4\,a\,c}}{2\,a} } =
\color{red}{ \frac{ -3 + \sqrt{ 9-12\,\Gamma\cos\alpha(\Gamma\cos\alpha -3)}}{2\Gamma\,\cos\alpha} \quad(Eq.3) }
$
$
\color{blue}{ \Gamma^2 \,=\, \Gamma\,\cos\alpha \,(3 - \gamma) }\,=\,
\color{blue}{ 3\Gamma\,\cos\alpha - \frac{1}{2}\left[ -3 + \sqrt{ 9-12\,\Gamma\cos\alpha(\Gamma\cos\alpha -3)} \right] \quad (Eq. 4)}
$
$
\text{Combine Equations 1 and 2 with variable y to solve between D and }\alpha
$
$
\color{DarkGoldenrod}{ Golden\,Ratio\,phi:\;\varphi\,\equiv\, \frac{1+\sqrt{5}}{2} }
\quad\quad\color{red}{ Icosa\,symmetry:\;\sin^2\beta\,=\,\frac{1}{3\varphi^2}}
$
$
\color{blue}{Combine:}\quad\color{blue}{ 3\varphi^2(Eq.2)} \;+\; \color{red}{3\varphi^2(Eq.1) \color{DarkGreen}{y}} \,=\,0
$
$
\color{DarkGreen}{\overbrace{\left[3\varphi^2(y-2)+2\left((1-2y)\cos\alpha-\sqrt{3}\sin\alpha\right)\cos\alpha\right]}^{i}\,D^2 \,-\,}
\color{DarkGreen}{\overbrace{\left((4y+1)\cos\alpha+\sqrt{3}\sin\alpha\right)\varphi^4}^{j}\,D\;+}
\color{DarkGreen}{\overbrace{(y+1)\varphi^4}^{k}}\,=\,0
$
$
\color{DarkGreen}{Define}\,\color{red}{mu}\,and\,\color{blue}{lambda:}\;
\color{DarkGreen}{j^2-4ik \,\equiv\,(}
\color{red}{\mu}\color{DarkGreen}{\cos\alpha\,+\,}
\color{blue}{\lambda}\color{DarkGreen}{\sqrt{3}\sin\alpha)^2\;=\;}
\color{blue}{\mu^2}\color{DarkGreen}{\cos^2\alpha\,+\,}
\color{red}{(\mu\lambda)} \color{DarkGreen}{2\sqrt{3}\cos\alpha\sin\alpha +}
\color{blue}{\lambda^2}\color{DarkGreen}{3\sin^2\alpha}
$
$
\color{blue}{ \underbrace{ \left[ \varphi^2(4y+1)^2-4\varphi^2(y+1)\left(3\varphi^2(y-2)+2(1-2y)\right)\right]}_{\mu^2} }
\color{DarkGreen}{\cos^2\alpha \,+\,}
\color{red}{ \underbrace{ \left[ \varphi^8(4y+1)+4\varphi^4(y+1) \right]}_{(\mu\lambda)} }
\color{DarkGreen}{2\sqrt{3}\cos\alpha\sin\alpha \,+\,}
\color{blue}{ \underbrace{ \left[ \varphi^8-4\varphi^6(y+1)(y-2) \right]}_{\lambda^2} }
\color{DarkGreen}{3\sin^2\alpha}
$
$
Sum\,components
\;\color{red}{(\mu\lambda)^2}
\color{blue}{-\mu^2\lambda^2}\,=\,0
\;\color{DarkGoldenrod}{using\;identities\quad \varphi^{n+1}-\varphi^{n-1}=\varphi^{n}\quad and \quad \varphi^{n+2}+\varphi^{n-2}=3\varphi^{n}}
$
$
\begin{array}{l}
\\ \color{DarkGoldenrod}{\varphi^{16}}
&\color{DarkGoldenrod}{|}&&\color{DarkGoldenrod}{|}&&\color{DarkGoldenrod}{|}\;
\color{red}{16}-\color{blue}{16}
&&\color{DarkGoldenrod}{|}\;
\color{red}{8}-\color{blue}{8}
&&\color{DarkGoldenrod}{|}\;
\color{red}{1}-\color{blue}{1}
\\ \color{DarkGoldenrod}{\varphi^{14}}
&\color{DarkGoldenrod}{|}\;
\color{blue}{64}
&&\color{DarkGoldenrod}{|}\;
\color{blue}{-32}\quad
&&\color{DarkGoldenrod}{|}\;
\color{blue}{-144}
&&\color{DarkGoldenrod}{|}\;
\color{blue}{-80}
&
\color{DarkGreen}{144}
&\color{DarkGoldenrod}{|}\;
\color{blue}{-32}
&
\color{DarkGreen}{-144}
\\ \color{DarkGoldenrod}{\varphi^{12}}
&\color{DarkGoldenrod}{|}\;
\color{blue}{-48}\quad
&
\color{DarkGreen}{144}\;
&\color{DarkGoldenrod}{|}\;
\color{blue}{96}
&
\color{DarkGreen}{0}\;
&\color{DarkGoldenrod}{|}\;
\color{red}{32}
\color{blue}{+128}\quad
&
\color{DarkGreen}{-288}
&\color{DarkGoldenrod}{|}\;
\color{red}{40}
\color{blue}{-200}
&&\color{DarkGoldenrod}{|}\;
\color{red}{8}
\color{blue}{-184}\quad
&
\color{DarkGreen}{\;144}
\\ \color{DarkGoldenrod}{\varphi^{10}}
&\color{DarkGoldenrod}{|}\;
\color{blue}{64}
&&\color{DarkGoldenrod}{|}\;
\color{blue}{-32}
&&\color{DarkGoldenrod}{|}\;
\color{blue}{-192}
&&\color{DarkGoldenrod}{|}\;
\color{blue}{-32}
&&\color{DarkGoldenrod}{|}\;
\color{blue}{64}
\\ \color{DarkGoldenrod}{\varphi^{8}}
&\color{DarkGoldenrod}{|}&&\color{DarkGoldenrod}{|}&&\color{DarkGoldenrod}{|}\;
\color{red}{16}
&&\color{DarkGoldenrod}{|}\;
\color{red}{32}
&&\color{DarkGoldenrod}{|}\;
\color{red}{16}
\\ \color{DarkGoldenrod}{\div144\varphi^{12}}
&\color{DarkGoldenrod}{|}&
\color{DarkGreen}{y^4}
&\color{DarkGoldenrod}{|}&
&\color{DarkGoldenrod}{|}&
\color{DarkGreen}{-2y^2\;}
&\color{DarkGoldenrod}{|}&
\color{DarkGreen}{-\varphi^2 y\;}
&\color{DarkGoldenrod}{|}&
\color{DarkGreen}{\;-\varphi}
\end{array}
$
$
\begin{array}{lc}
\color{red}{First\,root\,of\,y:\;-1}
&&&\color{blue}{y^3}
&\color{blue}{\overbrace{-y^2}^{p=-1}}
&\color{blue}{\overbrace{-y}^{q=-1}}
&\color{blue}{\overbrace{-\varphi}^{r=-\varphi}}
\\
\color{blue}{a=\frac{-p^2}{3}+q\;=\; \frac{-1}{3}-1\;=\;\frac{-4}{3} }
&
\color{red}{(y+1)}
&
\color{DarkGreen}{\overline{|\;y^4}}
&&\quad\color{DarkGreen}{-2y^2}
&\quad\color{DarkGreen}{-\varphi^2y}
&\color{DarkGreen}{-\varphi}
\\
\color{blue}{b=\frac{2p^3}{27}-\frac{pq}{3}+r}
&&
\color{DarkGreen}{y^4}
&
\color{DarkGreen}{+y^3}
\\
\color{blue}{b=-\frac{2}{27}-\frac{1}{3}-\varphi \;=\; \frac{-49-27\sqrt{5}}{54}}
&&
\color{DarkGreen}{0}
&
\color{DarkGreen}{-y^3}
&
\color{DarkGreen}{-y^2}
\\
\color{blue}{Second\,root\,of\,y:}
&&&
\color{DarkGreen}{0}
&
\color{DarkGreen}{-y^2}
&
\color{DarkGreen}{-y}
\\
\color{blue}{\frac{-p}{3}
-\sqrt[3]{ \frac{b}{2}+\sqrt{ \frac{b^2}{4}+\frac{a^3}{27}}}
-\sqrt[3]{ \frac{b}{2}-\sqrt{ \frac{b^2}{4}+\frac{a^3}{27}}} \;=\; \color{MediumVioletRed}{\eta^2}-1}
&&&&
\color{DarkGreen}{0}
&
\color{DarkGreen}{-\varphi y}
&
\color{DarkGreen}{-\varphi}
\\
\color{MediumVioletRed}{ eta:\,\eta\,\equiv\,}
\color{MediumVioletRed}{ \sqrt[3]{ \frac{\varphi}{2} + \frac{1}{2} \sqrt{ \varphi-\frac{5}{27}}}\;+\;}
\color{MediumVioletRed}{ \sqrt[3]{ \frac{\varphi}{2} - \frac{1}{2} \sqrt{ \varphi-\frac{5}{27}}}}
&&&&&
\color{DarkGreen}{0}
&
\color{DarkGreen}{0}
\end{array}
$
Note: If <span style="color:blue"><font size="3"> a </font></span> is taken as $\color{blue}{\frac{2}{3}}$, then
<span style="color:blue">y</span> becomes an <span style="color:DarkGoldenrod">Origin to one third powers of the golden ratio</span>
$
\color{red}{From\,the\,first\,root\,of\,y:} \frac{ \color{blue}{(Eq.2)} -\color{red}{(Eq.1)}}{\color{DarkGreen}{D}}\;=\;0
$
$
\color{DarkGreen}{ \left[ 2(3\cos\alpha-\sqrt{3}\sin\alpha)\cos\alpha-9\varphi^2\right] D \;+\; \varphi^4(3\cos\alpha-\sqrt{3}\sin\alpha)}\;=\;
\color{DarkGreen}{\left[ 2\Gamma\cos\alpha-9\varphi^2\right] D \;+\; \varphi^4\Gamma}\;=\;0
$
$
\color{DarkGreen}{D\;=\;\frac{ \varphi^4\Gamma}{9\varphi^2 - 2\Gamma\cos\alpha} \quad(Eq.5)}
$
$
\color{red}{\frac{ \Gamma(9\varphi^2 - 2\Gamma\cos\alpha)}{ 3\varphi^2D} (Eq.1)} \;=\;0
\quad\color{DarkGreen}{(substitute\,D\,with\,Eq.5)}
$
$
\color{red}{ \frac{ \Gamma(9\varphi^2 - 2\Gamma\cos\alpha)}{ 3\varphi^2}}
\color{red}{\left[ (3\varphi^2-4\cos^2\alpha) \color{DarkGreen}{ \left( \frac{ \varphi^4\Gamma}{9\varphi^2-2\Gamma\cos\alpha} \right)}
-4\varphi^4\cos\alpha+\varphi^4 \color{DarkGreen}{ \left( \frac{9\varphi^2-2\Gamma\cos\alpha}{ \varphi^4\Gamma} \right)}
\right]} \;=\;0
$
$
\color{red}{4(\Gamma\cos\alpha)^2 - 36\varphi^2\Gamma\cos\alpha + 27\varphi^2 + \varphi^4\Gamma^2} \;=\;0
\quad\color{blue}{(substitute\,\Gamma^2\,with\,Eq.4)}
$
$
\color{red}{4(\Gamma\cos\alpha)^2 + 3\varphi^2(\color{blue}{\varphi^4}-12)\Gamma\cos\alpha + \frac{3}{2}\varphi^2(\color{blue}{\varphi^2}+18)}\;=\;
\color{blue}{ \frac{\varphi^4}{2}\sqrt{9-12\Gamma\cos\alpha(\Gamma\cos\alpha - 3)}}
$
$
\color{DarkGreen}{\text{Square both sides and subtract}}
$
$
\color{DarkGreen}{16(\Gamma\cos\alpha)^4 + 24\varphi^2(\varphi^2-12)(\Gamma\cos\alpha)^3 +
36\varphi^2(21\varphi^2 + 11)(\Gamma\cos\alpha)^2 + 54\varphi^4(\varphi^2-36)\Gamma\cos\alpha + 81\varphi^4(\varphi^2+9)}\;=\;0
$
$
\color{DarkGreen}{ Define\;x:\quad x\equiv\frac{2}{3}\Gamma\cos\alpha \quad \text{ and divide by 81:}}
$
$
\color{DarkGreen}{x^4 + \varphi^2(\varphi^2-12)x^3 + \varphi^2(21\varphi^2+11)x^2 + \varphi^4(\varphi^2-36)x + \varphi^4(\varphi^2+9)}\;=\;0
$
$
\begin{array}{lc}
&&\color{blue}{x^3}
&\color{blue}{\overbrace{-9\varphi^2}^{p=-9\varphi^2}x^2}
&\color{blue}{\overbrace{+\varphi^2(21\varphi^2+2)}^{q=\varphi^2(21\varphi^2+2)}x}
&\color{blue}{\overbrace{-\varphi^4(\varphi^2+9)}^{r=-\varphi^4(\varphi^2+9)}}
\\
\color{red}{(x-1)}
&\quad
\color{DarkGreen}{\overline{|\;x^4}}
\quad&\quad\color{DarkGreen}{+\varphi^2(\varphi^2-12)x^3}
\quad&\quad\color{DarkGreen}{+\varphi^2(21\varphi^2+11)x^2}
\quad&\color{DarkGreen}{+\varphi^4(\varphi^2-36)x}
\quad&\color{DarkGreen}{+\varphi^4(\varphi^2+9)}
\\
&
\color{DarkGreen}{x^4}
&
\color{DarkGreen}{-x^3}
\\
&
\color{DarkGreen}{0}
&
\color{DarkGreen}{-9\varphi^2x^3}
&
\color{DarkGreen}{+9\varphi^2x^2}
\\
&&
\color{DarkGreen}{0}
&
\color{DarkGreen}{\varphi^2(21\varphi^2+2)x^2}
&
\color{DarkGreen}{-\varphi^2(21\varphi^2+2)x}
\\
&&&
\color{DarkGreen}{0}
&
\color{DarkGreen}{-\varphi^4(\varphi^2+9)x}
&
\color{DarkGreen}{\varphi^4(\varphi^2+9)}
\\
&&&&
\color{DarkGreen}{0}
&
\color{DarkGreen}{0}
\\
\end{array}
$
$
\color{red}{First\,root\,of\,x:\;1}
\\
\color{blue}{a=\frac{-p^2}{3}+q\;=\; -27\varphi^4+\varphi^2(21\varphi^2+2) \;=\; -2\varphi^6 }
\\
\color{blue}{b=\frac{2p^2}{27}-\frac{pq}{3}+r}
\\
\color{blue}{b=-54\varphi^6 + 3\varphi^4(21\varphi^2+2) - \varphi^4(\varphi^2+9)\;=\;\color{DarkGoldenrod}{\varphi^{10}}}
\\
\\
\color{blue}{Second\,root\,of\,x:}
\\
\color{blue}{\frac{-p}{3}
-\sqrt[3]{ \frac{b}{2}+\sqrt{ \frac{b^2}{4}+\frac{a^3}{27}}}
-\sqrt[3]{ \frac{b}{2}-\sqrt{ \frac{b^2}{4}+\frac{a^3}{27}}} \;=\; 3\varphi^2-\varphi^3\eta}
$
$
\color{blue}{\text{From the second root of x:} \quad \Gamma\cos\alpha \;=\; \frac{3}{2}(3\varphi^2-\varphi^3\eta)\quad(Eq.6) }
$
$
\color{DarkGreen}{ \left( \sqrt[3]{ \frac{\varphi}{2} + \sqrt{ \frac{\varphi^2}{4}-\frac{8}{27}}}\right)}
\color{DarkGreen}{ \left( \sqrt[3]{ \frac{\varphi}{2} - \sqrt{ \frac{\varphi^2}{4}-\frac{8}{27}}}\right)}
\color{DarkGreen}{\;=\; \sqrt[3]{\frac{8}{27}} \;=\; \frac{2}{3} \quad so, \quad \eta^3 = 2\eta + \varphi\quad(Eq.7)}
$
$
\color{DarkGreen}{Noting\;D\,=\, \sqrt{3}\sin\alpha-\cos\alpha, \;we\,find:}
$
$
\quad\color{red}{ (9\varphi^3+\varphi+6\eta-3\varphi^3\eta^2)^2}
\color{blue}{ -(\varphi+3\eta)^2 \left(1-3\varphi^4(3\sqrt{5}-2\varphi^3\eta + \varphi^2\eta^2)\right)} \;=\;0
\quad\color{DarkGreen}{(expand\,and\,substitute\,\eta^3\,with\,Eq.7)}
$
$
\color{DarkGreen}{=\;}
\color{red}{(9\varphi^3+\varphi)^2 + 36\eta^2 +9\varphi^6\eta\color{DarkGreen}{(2\eta+\varphi)}+2(9\varphi^3+\varphi)(6\eta-3\varphi^3\eta^2)}
\color{red}{-36\varphi^3} \color{DarkGreen}{(2\eta+\varphi)}
\color{blue}{ -\varphi^2 \left(1-3\varphi^4(3\sqrt{5} - 2\varphi^3\eta + \varphi^2\eta^2) \right)}
$
$
\quad\color{blue}{-6\varphi \left( \eta-3\varphi^4 \left( 3\sqrt{5}\eta - 2\varphi^3\eta^2 +
\varphi^2\color{DarkGreen}{(2\eta+\varphi)} \right)\right)}
\color{blue}{-9 \left( \eta^2-3\varphi^4 \left( 3\sqrt{5}\eta^2 - 2\varphi^3\color{DarkGreen}{(2\eta+\varphi) +}
\varphi^2\eta\color{DarkGreen}{(2\eta+\varphi)} \right)\right)}\;=\;0
$
$
\color{DarkGreen}{=\;(}
\color{red}{81\varphi^6+18\varphi^4+\varphi^2-36\varphi^4} \color{blue}{-\varphi^2+9\varphi^6\sqrt{5}+18\varphi^8-54\varphi^8}
\color{DarkGreen}{)\quad\quad\text{(all 3 orders of}\;\eta\;\text{sum to zero)}}
$
$
\quad\color{DarkGreen}{+\;(}
\color{red}{9\varphi^7+12(9\varphi^3+\varphi)-72\varphi^3}
\color{blue}{+6\varphi^9-6\varphi +54\varphi^5\sqrt{5}+36\varphi^7-108\varphi^8+27\varphi^7}
\color{DarkGreen}{)\eta}
$
$
\quad\quad\color{DarkGreen}{+\;(}
\color{red}{36+ 18\varphi^6-6\varphi^3(9\varphi^3+\varphi)}
\color{blue}{+3\varphi^8-36\varphi^8-9 + 81\varphi^4 \sqrt{5}+54\varphi^6}
\color{DarkGreen}{)\eta^2}\;=\;0
$
$
\color{blue}{\sqrt{1-3\varphi^4(3\sqrt{5}-2\varphi^3\eta+ \varphi^2\eta^2)}}
\color{red}{\;=\;\frac{9\varphi^3+\varphi+6\eta-3\varphi^3\eta^2}{\color{blue}{\varphi +3\eta}}}
\quad\color{blue}{(Eq.8)}
$
$
\color{DarkGreen}{Substitute\;\color{blue}{Eq.6}\;and\;\color{blue}{Eq.8}\;into\;\color{red}{Eq.3}}
$
$
\color{red}{\gamma\;=\; \frac{ -3 + \sqrt{ 9-12\,\Gamma\cos\alpha(\Gamma\cos\alpha -3)}}{2\Gamma\,\cos\alpha}}
\color{blue}{ \;=\;\frac{ -1+\sqrt{ 1-3\varphi^4(3\sqrt{5}-2\varphi^3\eta+\varphi^2\eta^2)}}{3\varphi^2-\varphi^3\eta} }
\color{blue}{ \;=\;\frac{ -1+
\color{red}{\frac{9\varphi^3+\varphi+6\eta-3\varphi^3\eta^2}{\color{blue}{\varphi +3\eta}}}}{ 3\varphi^2-\varphi^3\eta}}
$
$
\color{red}{\gamma\;=\; \frac{ \color{blue}{-\varphi -3\eta} +9\varphi^3 +\varphi +3\eta +3(3\varphi^2 - \varphi^4)\eta -3\varphi^3\eta^2}
{\color{blue}{(\varphi +3\eta)(3\varphi^2 -\varphi^3\eta) }}}
\color{red}{\;=\;\frac{3\varphi+3\eta}{\varphi+3\eta}}
$
$
\color{blue}{ \cos\alpha \;=\; \frac{1}{ \sqrt{ 1\color{red}{+\tan^2\alpha}}}} \;=\;
\color{blue}{ \frac{1}{ \sqrt{ 1\color{red}{ +\frac{\gamma^2}{3} }}}} \;=\;
\color{blue}{ \frac{1}{ \sqrt{ 1\color{red}{ +\frac{1}{3}\left( \frac{3\varphi+3\eta}{\varphi+3\eta} \right)^2}}}} \;=\;
\color{blue}{ \frac{\varphi+3\eta}{2\sqrt{ \varphi^2+3\eta(\varphi+\eta)}}}
$
$
\color{blue}{ \Gamma\;=\;\cos\alpha[3\color{red}{-\gamma}]}
\color{blue}{ \;=\; \left( \frac{\varphi+3\eta}{2\sqrt{ \varphi^2+3\eta(\varphi+\eta)}} \right) }
\color{blue}{ \left[ 3\color{red}{-\frac{3\varphi+3\eta}{\varphi+3\eta}} \right] }
\color{blue}{ \;=\; \frac{3\eta}{\sqrt{ \varphi^2+3\eta(\varphi+\eta)}} \quad(Eq.9)}
$
$
\color{DarkGreen}{Substitute\;\color{blue}{Eq.6}\;and\;\color{blue}{Eq.9}\;into\,Eq.5}
$
$
\color{blue}{D\;=\;\frac{ \varphi^4\Gamma}{9\varphi^2 - 2\Gamma\cos\alpha}}
\color{blue}{\;=\;\frac{\Gamma\color{DarkGoldenrod}{\varphi}}{3\color{MediumVioletRed}{\eta}}}
\color{blue}{\;=\;\frac{\Gamma\color{Teal}{\xi}}{3}}
\color{blue}{ \;=\; \frac{\varphi}{\sqrt{ \varphi^2+3\eta(\varphi+\eta)}}}
$
$
\color{red}{Icosa\,symmetry:}\;
\color{blue}{\cos\beta=\frac{\varphi}{\sqrt{3}}} \quad
\color{red}{ \sin\beta=\frac{1}{\varphi\sqrt{3}}} \quad
\color{blue}{\overline{a\color{DarkGreen}{b}}=\frac{1}{2\sqrt{3}}} \quad
\color{blue}{\overline{ac}=\frac{\overline{a\color{DarkGreen}{b}}}{\tan\beta}= \frac{\varphi^2}{2\sqrt{3}}}
$
$
\color{DarkGreen}{ \text{Radius to triangle face:}\;r_{triangle}\;=\; \frac{\color{blue}{\overline{ac}}}{\color{DarkGreen}{D}}}
\color{DarkGreen}{ \;=\; \frac{\varphi}{2\sqrt{3}}\sqrt{ \varphi^2+3\eta(\varphi+\eta)}}
$
$
\color{DarkGreen}{ \text{Circumradius (radius to vertex):}}
$
$
\color{DarkGreen}{r_{circumradius}\;=\; \sqrt{ \color{blue}{r^2_{triangle}} \color{red}{+(2\sin\frac{\pi}{3})^{-2}} }}
\color{DarkGreen}{ \;=\; \sqrt{ \frac{ \color{blue}{\varphi^2\left(\varphi^2 + 3\eta(\varphi+\eta)\right)} \color{red}{+ 4}}{12} }}
\color{DarkGreen}{ \;=\; \frac{1}{2}\sqrt{ \frac{ \varphi^4 + 4 +3\varphi^2\eta(\varphi+\eta) }{3}}}
$
$
\color{DarkGreen}{ \text{Inradius (radius to pentagon face):}}
$
$
\color{DarkGreen}{r_{pentagon}\;=\; \sqrt{ \color{blue}{r^2_{circumradius}} \color{red}{-(2\sin\frac{\pi}{5})^{-2}} }}
\color{DarkGreen}{ \;=\; \sqrt{ \frac{ \color{blue}{ \varphi^4 + 4 +3\varphi^2\eta(\varphi+\eta) } }{12} \color{red}{-\frac{\varphi}{\sqrt{5}}} }}
\color{DarkGreen}{ \;=\; \frac{\varphi}{2}\sqrt{ \frac{1}{\varphi\sqrt{5}}+\eta(\varphi+\eta)}}
$
$
\color{DarkGreen}{ \text{Midradius (radius to edge bisector):}}
$
$
\color{DarkGreen}{r_{midradius}\;=\; \sqrt{ \color{blue}{r^2_{triangle}} \color{red}{+ (2\tan\frac{\pi}{3})^{-2}} }}
\color{DarkGreen}{ \;=\; \sqrt{ \frac{ \color{blue}{\varphi^2\left(\varphi^2 + 3\eta(\varphi+\eta)\right)} \color{red}{+1} }{12} }}
\color{DarkGreen}{ \;=\; \frac{1}{2}\sqrt{ \frac{ \varphi^4 + 1 +3\varphi^2\eta(\varphi+\eta) }{3}}}
$
$
\color{red}{Volume_{Snub\,Dodecahedron}\;=\;}
\color{blue}{N_{triangle}\times Area_{triangle}\times\frac{1}{3} \times r_{triangle} }
\color{DarkGreen}{+N_{pentagon}\times Area_{pentagon}\times\frac{1}{3} \times r_{pentagon} }
$
$
\quad\color{red}{=\;}
\color{blue}{ 80\frac{\sqrt{3}}{4}\frac{1}{3}\frac{\varphi}{2\sqrt{3}}\sqrt{ \varphi^2+3\eta(\varphi+\eta)}}
\color{DarkGreen}{\,+\,12\frac{5}{4}\sqrt{\frac{\varphi^3}{\sqrt{5}}}\frac{1}{3} \frac{\varphi}{2}\sqrt{ \frac{1}{\varphi\sqrt{5}}+\eta(\varphi+\eta)} }
$
$$
\color{red}{Volume_{Snub\,Dodecahedron}\;=\;}
\color{red}{\frac{10\varphi}{3}\sqrt{ \varphi^2+3\eta(\varphi+\eta)}}
\color{red}{\,+\,\frac{\varphi^2}{2}\sqrt{ 5 + 5\sqrt{5}\varphi\eta(\varphi+\eta)} }
$$
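As a numerical sanity check of the closed-form volume above, here is a short Python sketch; it solves Eq. 7 for $\eta$ with Newton's method (a choice of this sketch, not of the derivation) and evaluates the final expression:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

# Real root of Eq. 7, eta**3 = 2*eta + phi, found by Newton's method
eta = 2.0
for _ in range(50):
    eta -= (eta**3 - 2*eta - phi) / (3*eta**2 - 2)

volume = (10*phi/3) * sqrt(phi**2 + 3*eta*(phi + eta)) \
       + (phi**2/2) * sqrt(5 + 5*sqrt(5)*phi*eta*(phi + eta))
print(volume)  # ~37.6166, the accepted unit-edge snub dodecahedron volume
```

The result agrees with the known snub dodecahedron volume for unit edge length, about 37.6166.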
### <span style="color:DarkGoldenrod">Origin to one third powers of the golden ratio</span>
<span style="color:DarkGreen">Spinnability of the </span>
<span style="color:DarkGoldenrod">first golden circle</span>
<span style="color:DarkGreen"> leads to a new second root, if </span>
<span style="color:blue"><font size="3"> a </font></span>
<span style="color:DarkGreen">is taken as $\color{blue}{\frac{2}{3}}$, the alternate root:</span>
$
\color{DarkGreen}{y\;=\;\frac{-p}{3}
-\sqrt[3]{ \frac{b}{2}+\sqrt{ \frac{b^2}{4}+\frac{a^3}{27}}}
-\sqrt[3]{ \frac{b}{2}-\sqrt{ \frac{b^2}{4}+\frac{a^3}{27}}} \;=\;}
\color{DarkGreen}{\frac{1}{3}}
\color{DarkGreen}{+\frac{2}{3}}\color{DarkGoldenrod}{\varphi^{\frac{4}{3}}}
\color{DarkGreen}{-\frac{1}{3}}\color{DarkGoldenrod}{\varphi^{\frac{-4}{3}}}
$
<span style="color:DarkGoldenrod">Golden ratio powers</span> may be expressed with a pair of Fibonacci-type sequences (r, s), the Lucas and Fibonacci numbers: $\color{DarkGoldenrod}{\varphi^n} = \frac{r+s\sqrt{5}}{2}$
$
\begin{array}{rrrr}
\color{DarkGoldenrod}{\varphi^{ n}} & r & s & r^2-5s^2\\
\hline
\color{DarkGoldenrod}{\varphi^{ -7}} & -29 & 13 & -4\\
\color{DarkGoldenrod}{\varphi^{ -6}} & 18 & -8 & 4\\
\color{DarkGoldenrod}{\varphi^{ -5}} & -11 & 5 & -4\\
\color{DarkGoldenrod}{\varphi^{ -4}} & 7 & -3 & 4\\
\color{DarkGoldenrod}{\varphi^{ -3}} & -4 & 2 & -4\\
\color{DarkGoldenrod}{\varphi^{ -2}} & 3 & -1 & 4\\
\color{DarkGoldenrod}{\varphi^{ -1}} & -1 & 1 & -4\\
\color{DarkGoldenrod}{\varphi^{ 0}} & 2 & 0 & 4\\
\color{DarkGoldenrod}{\varphi^{ 1}} & 1 & 1 & -4\\
\color{DarkGoldenrod}{\varphi^{ 2}} & 3 & 1 & 4\\
\color{DarkGoldenrod}{\varphi^{ 3}} & 4 & 2 & -4\\
\color{DarkGoldenrod}{\varphi^{ 4}} & 7 & 3 & 4\\
\color{DarkGoldenrod}{\varphi^{ 5}} & 11 & 5 & -4\\
\color{DarkGoldenrod}{\varphi^{ 6}} & 18 & 8 & 4\\
\color{DarkGoldenrod}{\varphi^{ 7}} & 29 & 13 & -4\\
\end{array}
$
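The table's oscillating $r^2-5s^2=\pm4$ pattern can be checked numerically; a quick sketch using the standard Lucas ($r$) and Fibonacci ($s$) recurrences:

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

def lucas(n):          # 2, 1, 3, 4, 7, 11, 18, 29, ...
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib(n):            # 0, 1, 1, 2, 3, 5, 8, 13, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(8):
    r, s = lucas(n), fib(n)
    assert abs(phi**n - (r + s*sqrt(5))/2) < 1e-9   # phi^n = (r + s*sqrt(5))/2
    assert r*r - 5*s*s == 4*(-1)**n                 # oscillates between +4 and -4
```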
The oscillation of $r^2-5s^2$ may be extended to <span style="color:DarkGoldenrod">one third powers of the golden ratio</span>
$
\begin{array}{rrrr}
\color{DarkGoldenrod}{\varphi^{ \frac{n}{3}}} & r & s & r^2-5s^2\\
\hline
\color{DarkGoldenrod}{\varphi^{ -2}} & 3 & -1 & 4\\
\color{DarkGoldenrod}{\varphi^{ \frac{-5}{3}}} &
\color{DarkGoldenrod}{\varphi^{ \frac{-5}{3}} - \varphi^{ \frac{5}{3}}} &
\color{DarkGoldenrod}{(\varphi^{ \frac{-5}{3}} + \varphi^{ \frac{5}{3}})\frac{1}{\sqrt{5}}} &
-4\\
\color{DarkGoldenrod}{\varphi^{ \frac{-4}{3}}} &
\color{DarkGoldenrod}{\varphi^{ \frac{-4}{3}} + \varphi^{ \frac{4}{3}}} &
\color{DarkGoldenrod}{(\varphi^{ \frac{-4}{3}} - \varphi^{ \frac{4}{3}})\frac{1}{\sqrt{5}}} &
4\\
\color{DarkGoldenrod}{\varphi^{ -1}} & -1 & 1 & -4\\
\color{DarkGoldenrod}{\varphi^{ \frac{-2}{3}}} &
\color{DarkGoldenrod}{\varphi^{ \frac{-2}{3}} + \varphi^{ \frac{2}{3}}} &
\color{DarkGoldenrod}{(\varphi^{ \frac{-2}{3}} - \varphi^{ \frac{2}{3}})\frac{1}{\sqrt{5}}} &
4\\
\color{DarkGoldenrod}{\varphi^{ \frac{-1}{3}}} &
\color{DarkGoldenrod}{\varphi^{ \frac{-1}{3}} - \varphi^{ \frac{1}{3}}} &
\color{DarkGoldenrod}{(\varphi^{ \frac{-1}{3}} + \varphi^{ \frac{1}{3}})\frac{1}{\sqrt{5}}} &
-4\\
\color{DarkGoldenrod}{\varphi^{ 0}} & 2 & 0 & 4\\
\color{DarkGoldenrod}{\varphi^{ \frac{1}{3}}} &
\color{DarkGoldenrod}{\varphi^{ \frac{1}{3}} - \varphi^{ \frac{-1}{3}}} &
\color{DarkGoldenrod}{(\varphi^{ \frac{1}{3}} + \varphi^{ \frac{-1}{3}})\frac{1}{\sqrt{5}}} &
-4\\
\color{DarkGoldenrod}{\varphi^{ \frac{2}{3}}} &
\color{DarkGoldenrod}{\varphi^{ \frac{2}{3}} + \varphi^{ \frac{-2}{3}}} &
\color{DarkGoldenrod}{(\varphi^{ \frac{2}{3}} - \varphi^{ \frac{-2}{3}})\frac{1}{\sqrt{5}}} &
4\\
\color{DarkGoldenrod}{\varphi^{ 1}} & 1 & 1 & -4\\
\color{DarkGoldenrod}{\varphi^{ \frac{4}{3}}} &
\color{DarkGoldenrod}{\varphi^{ \frac{4}{3}} + \varphi^{ \frac{-4}{3}}} &
\color{DarkGoldenrod}{(\varphi^{ \frac{4}{3}} - \varphi^{ \frac{-4}{3}})\frac{1}{\sqrt{5}}} &
4\\
\color{DarkGoldenrod}{\varphi^{ \frac{5}{3}}} &
\color{DarkGoldenrod}{\varphi^{ \frac{5}{3}} - \varphi^{ \frac{-5}{3}}} &
\color{DarkGoldenrod}{(\varphi^{ \frac{5}{3}} + \varphi^{ \frac{-5}{3}})\frac{1}{\sqrt{5}}} &
-4\\
\color{DarkGoldenrod}{\varphi^{ 2}} & 3 & 1 & 4\\
\end{array}
$
<span style="color:DarkGreen">Vectors</span> are used to plot the alternate root as the <span style="color:DarkGoldenrod">Origin to one third powers of the golden ratio.</span>

## References
<span id="fn1">[1]</span> Wolfram Cloud MathMusing ["Volume_of_the_Snub_Dodecahedron"]( https://www.wolframcloud.com/obj/MathMusing/Published/Volume_of_the_Snub_Dodecahedron.nb), a supplement to this essay.
<br>
<span id="fn2">[2]</span> [Coxeter, H. S. M.;](https://mathworld.wolfram.com/news/2003-04-02/coxeter/) Longuet-Higgins, M. S.; and Miller, J. C. P. ["Uniform Polyhedra."](https://royalsocietypublishing.org/doi/abs/10.1098/rsta.1954.0003) Phil. Trans. Roy. Soc. London, 1954.
<br>
<span id="fn3">[3]</span> [Weisstein, Eric W.](https://mathworld.wolfram.com/about/author.html) ["Snub Dodecahedron."](https://mathworld.wolfram.com/SnubDodecahedron.html) From MathWorld--A Wolfram Web Resource.
<br>
<span id="fn4">[4]</span> Wikipedia ["Snub Dodecahedron"](https://en.wikipedia.org/wiki/Snub_dodecahedron#Cartesian_coordinates)
<br>
<span id="fn5">[5]</span> [Harish Chandra Rajpoot](https://www.researchgate.net/profile/Harish_Chandra_Rajpoot) [“Optimum Solution of Snub Dodecahedron"](https://www.researchgate.net/publication/335967411_Optimum_Solution_of_Snub_Dodecahedron_an_Archimedean_Solid_by_Using_HCR's_Theory_of_Polygon_Newton-Raphson_Method) 2014
<br>
<span id="fn6">[6]</span> Jupyter.Org ["Five volume calculations for Snub Dodecahedron"](https://nbviewer.jupyter.org/urls/archive.org/download/five-volume-snub/Five_volume_calculations_for_Snub_Dodecahedron.ipynb), a supplement to this essay.
<br>
<span id="fn7">[7]</span> Mark Shelby Adams ["Archimedean Platonic Solids"](https://archive.org/details/archimedeanplatonicsolids/mode/2up) 1985.
<br>
<br>
Mark Adams
<br>
markadams@gatech.edu
<br>
[ORCID iD 0000-0003-4469-051X](https://orcid.org/0000-0003-4469-051X)
When computing the rankings, group all cases in the same citing snapshot year and call `get_edge_data` once for each group. This ends up not making it faster...
```
top_directory = '/Users/iaincarmichael/Dropbox/Research/law/law-net/'
from __future__ import division
import os
import sys
import time
from math import *
import copy
import cPickle as pickle
# data
import numpy as np
import pandas as pd
# viz
import matplotlib.pyplot as plt
# graph
import igraph as ig
# NLP
from nltk.corpus import stopwords
# our code
sys.path.append(top_directory + 'code/')
from load_data import load_and_clean_graph, case_info
from pipeline.download_data import download_bulk_resource
from pipeline.make_clean_data import *
from viz import print_describe
sys.path.append(top_directory + 'explore/vertex_metrics_experiment/code/')
from make_snapshots import *
from make_edge_df import *
from attachment_model_inference import *
from compute_ranking_metrics import *
from pipeline_helper_functions import *
from make_case_text_files import *
from bag_of_words import *
from similarity_matrix import *
# directory set up
data_dir = top_directory + 'data/'
experiment_data_dir = data_dir + 'vertex_metrics_experiment/'
court_name = 'scotus'
# jupyter notebook settings
%load_ext autoreload
%autoreload 2
%matplotlib inline
G = load_and_clean_graph(data_dir, court_name)
active_years = range(1900, 2015 + 1)
```
# group by snapshot year
```
def compute_ranking_metrics_LR_group(G,
LogReg,
columns_to_use,
experiment_data_dir,
active_years,
R,
year_floor=1900,
seed=None,
print_progress=False):
'''
Computes the rank score metric for a given logistic regression object.
Sample R test cases that have at least one citation. For each test case
rank test case's ancestors then compute rank score for test cases actual
citations.
Parameters
------------
G: network (so we can get each cases' ancestor network)
LogReg: a logistic regression object
(i.e. the output of fit_logistic_regression)
columns_to_use: list of column names of edge metrics data frame that we
should use to fit logistic regression
path_to_vertex_metrics_folder: we will need these for prediction
year_interval: the year interval between each vertex metric .csv file
R: how many cases to compute ranking metrics for
year_floor: sample only cases after this year
seed: random seed for selecting cases whose ancestry to score
Output
-------
The average ranking score over all R cases we tested
'''
# ranking scores for each test case
test_case_rank_scores = []
# get list of test cases
test_vertices = get_test_cases(G, active_years, R, seed=seed)
# load snapshots
snapshots_dict = load_snapshots(experiment_data_dir)
# maybe load the similarities
if 'similarity' in columns_to_use:
similarity_matrix, CLid_to_index = load_similarity_matrix(experiment_data_dir)
else:
similarity_matrix = None
CLid_to_index = None
# organize test cases by citing snapshot year
case_dict = get_test_cases_by_snapshot_dict(G, test_vertices, active_years)
for year in case_dict.keys():
# get vertex metrics in year before citing year
snapshot_year = year - 1
# grab data frame of vertex metrics for test case's snapshot
snapshot_df = snapshots_dict['vertex_metrics_' +
str(int(snapshot_year))]
# build edgelist for all cases in given year
edgelist = get_combined_edgelist(G, case_dict[year], snapshot_year)
# grab edge data
edge_data = get_edge_data(G, edgelist, snapshot_df, columns_to_use,
similarity_matrix, CLid_to_index,
edge_status=None)
for test_case in case_dict[year]:
# indices of edge_data
df_indices = [test_case['name'] + '_' + v['name']
for v in G.vs.select(year_le=snapshot_year)]
# grab test case edges
case_edge_data = edge_data.loc[df_indices]
# rank ancestors
ancestor_ranking = get_case_ranking_logreg(case_edge_data,
LogReg, columns_to_use)
# get cited cases
cited_cases = get_cited_cases(G, test_case)
# compute rank score for cited cases
score = score_ranking(cited_cases, ancestor_ranking)
test_case_rank_scores.append(score)
# return test_case_rank_scores, case_ranks, test_cases
return test_case_rank_scores
def get_cited_cases(G, citing_vertex):
"""
Returns the citations of a case whose cited year is strictly less than the citing year
Parameters
----------
G: igraph object
citing_vertex: igraph vertex
Output
------
list of CL ids of cited cases
"""
# get neighbors first as ig index
all_citations = G.neighbors(citing_vertex.index, mode='OUT')
# return CL indices of cases
# only return cited cases whose year is strictly less than citing year
return [G.vs[ig_id]['name'] for ig_id in all_citations
if G.vs[ig_id]['year'] < citing_vertex['year']]
def get_test_cases_by_snapshot_dict(G, test_cases, active_years):
"""
Organizes test cases by year
list is igraph indices
"""
# get the citing year of each test case
case_years = [case['year'] for case in test_cases]
# dict that organizes cases by citing snapshot year
case_dict = {y: [] for y in active_years}
for i in range(len(test_cases)):
case_dict[case_years[i]].append(test_cases[i])
# only return years with at least one case
return {k : case_dict[k] for k in case_dict.keys() if len(case_dict[k]) > 0}
def concat_lists(LOL):
"""
Concatenates a list of lists
"""
if len(LOL) > 1:
return LOL[0] + concat_lists(LOL[1:])
else:
return LOL[0]
def get_combined_edgelist(G, test_cases, snapshot_year):
# build edgelist for all cases in the given year
edgelists = []
for test_case in test_cases:
# restrict ourselves to ancestors of the citing
# case strictly before the citing year
ancestors = [v.index for v in G.vs.select(year_le=snapshot_year)]
# append the test case's edgelist to the combined edgelist
edgelists.append(zip([test_case.index] * len(ancestors), ancestors))
return concat_lists(edgelists)
```
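The edgelists are flattened above with a recursive helper; `itertools.chain.from_iterable` is a flat alternative that avoids repeated list copying and recursion-depth limits (a sketch, not a change to the notebook):

```python
from itertools import chain

LOL = [[1, 2], [3], [4, 5]]
flat = list(chain.from_iterable(LOL))  # single pass, no recursion
print(flat)  # [1, 2, 3, 4, 5]
```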
# compare new vs old ranking metrics
```
columns_to_use = ['indegree', 'similarity']
R = 1000
seed_ranking = 3424
LogReg = fit_logistic_regression(experiment_data_dir, columns_to_use)
start = time.time()
compute_ranking_metrics_LR(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking,print_progress=True)
print 'old function took %d seconds for %d test cases' % (time.time() - start, R)
start = time.time()
compute_ranking_metrics_LR_group(G, LogReg, columns_to_use, experiment_data_dir,
active_years, R, seed=seed_ranking,print_progress=True)
print 'new and improved function took %d seconds for %d test cases' % (time.time() - start, R)
```
<a href="https://colab.research.google.com/github/PacktPublishing/Hands-On-Computer-Vision-with-PyTorch/blob/master/Chapter11/Generating_deep_fakes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import os
if not os.path.exists('Faceswap-Deepfake-Pytorch'):
!wget -q https://www.dropbox.com/s/5ji7jl7httso9ny/person_images.zip
!wget -q https://raw.githubusercontent.com/sizhky/deep-fake-util/main/random_warp.py
!unzip -q person_images.zip
!pip install -q torch_snippets torch_summary
from torch_snippets import *
from random_warp import get_training_data
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
def crop_face(img):
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
if(len(faces)>0):
for (x,y,w,h) in faces:
img2 = img[y:(y+h),x:(x+w),:]
img2 = cv2.resize(img2,(256,256))
return img2, True
else:
return img, False
!mkdir cropped_faces_personA
!mkdir cropped_faces_personB
def crop_images(folder):
images = Glob(folder+'/*.jpg')
for i in range(len(images)):
img = read(images[i],1)
img2, face_detected = crop_face(img)
if(face_detected==False):
continue
else:
cv2.imwrite('cropped_faces_'+folder+'/'+str(i)+'.jpg',cv2.cvtColor(img2, cv2.COLOR_RGB2BGR))
crop_images('personA')
crop_images('personB')
class ImageDataset(Dataset):
def __init__(self, items_A, items_B):
self.items_A = np.concatenate([read(f,1)[None] for f in items_A])/255.
self.items_B = np.concatenate([read(f,1)[None] for f in items_B])/255.
self.items_A += self.items_B.mean(axis=(0, 1, 2)) - self.items_A.mean(axis=(0, 1, 2))
def __len__(self):
return min(len(self.items_A), len(self.items_B))
def __getitem__(self, ix):
a, b = choose(self.items_A), choose(self.items_B)
return a, b
def collate_fn(self, batch):
imsA, imsB = list(zip(*batch))
imsA, targetA = get_training_data(imsA, len(imsA))
imsB, targetB = get_training_data(imsB, len(imsB))
imsA, imsB, targetA, targetB = [torch.Tensor(i).permute(0,3,1,2).to(device) for i in [imsA, imsB, targetA, targetB]]
return imsA, imsB, targetA, targetB
a = ImageDataset(Glob('cropped_faces_personA'), Glob('cropped_faces_personB'))
x = DataLoader(a, batch_size=32, collate_fn=a.collate_fn)
inspect(*next(iter(x)))
for i in next(iter(x)):
subplots(i[:8], nc=4, sz=(4,2))
def _ConvLayer(input_features, output_features):
return nn.Sequential(
nn.Conv2d(input_features, output_features, kernel_size=5, stride=2, padding=2),
nn.LeakyReLU(0.1, inplace=True)
)
def _UpScale(input_features, output_features):
return nn.Sequential(
nn.ConvTranspose2d(input_features, output_features, kernel_size=2, stride=2, padding=0),
nn.LeakyReLU(0.1, inplace=True)
)
class Reshape(nn.Module):
def forward(self, input):
output = input.view(-1, 1024, 4, 4) # channel * 4 * 4
return output
class Autoencoder(nn.Module):
def __init__(self):
super(Autoencoder, self).__init__()
self.encoder = nn.Sequential(
_ConvLayer(3, 128),
_ConvLayer(128, 256),
_ConvLayer(256, 512),
_ConvLayer(512, 1024),
nn.Flatten(),
nn.Linear(1024 * 4 * 4, 1024),
nn.Linear(1024, 1024 * 4 * 4),
Reshape(),
_UpScale(1024, 512),
)
self.decoder_A = nn.Sequential(
_UpScale(512, 256),
_UpScale(256, 128),
_UpScale(128, 64),
nn.Conv2d(64, 3, kernel_size=3, padding=1),
nn.Sigmoid(),
)
self.decoder_B = nn.Sequential(
_UpScale(512, 256),
_UpScale(256, 128),
_UpScale(128, 64),
nn.Conv2d(64, 3, kernel_size=3, padding=1),
nn.Sigmoid(),
)
def forward(self, x, select='A'):
if select == 'A':
out = self.encoder(x)
out = self.decoder_A(out)
else:
out = self.encoder(x)
out = self.decoder_B(out)
return out
from torchsummary import summary
model = Autoencoder()
summary(model, torch.zeros(32,3,64,64), 'A');
def train_batch(model, data, criterion, optimizers):
optA, optB = optimizers
optA.zero_grad()
optB.zero_grad()
imgA, imgB, targetA, targetB = data
_imgA, _imgB = model(imgA, 'A'), model(imgB, 'B')
lossA = criterion(_imgA, targetA)
lossB = criterion(_imgB, targetB)
lossA.backward()
lossB.backward()
optA.step()
optB.step()
return lossA.item(), lossB.item()
model = Autoencoder().to(device)
dataset = ImageDataset(Glob('cropped_faces_personA'), Glob('cropped_faces_personB'))
dataloader = DataLoader(dataset, 32, collate_fn=dataset.collate_fn)
optimizers = optim.Adam([{'params': model.encoder.parameters()},
{'params': model.decoder_A.parameters()}],
lr=5e-5, betas=(0.5, 0.999)), \
optim.Adam([{'params': model.encoder.parameters()},
{'params': model.decoder_B.parameters()}],
lr=5e-5, betas=(0.5, 0.999))
criterion = nn.L1Loss()
n_epochs = 10000
log = Report(n_epochs)
!mkdir checkpoint
for ex in range(n_epochs):
N = len(dataloader)
for bx,data in enumerate(dataloader):
lossA, lossB = train_batch(model, data, criterion, optimizers)
log.record(ex+(1+bx)/N, lossA=lossA, lossB=lossB, end='\r')
log.report_avgs(ex+1)
if (ex+1)%100 == 0:
state = {
'state': model.state_dict(),
'epoch': ex
}
torch.save(state, './checkpoint/autoencoder.pth')
if (ex+1)%100 == 0:
bs = 5
a,b,A,B = data
line('A to B')
_a = model(a[:bs], 'A')
_b = model(a[:bs], 'B')
x = torch.cat([A[:bs],_a,_b])
subplots(x, nc=bs, figsize=(bs*2, 5))
line('B to A')
_a = model(b[:bs], 'A')
_b = model(b[:bs], 'B')
x = torch.cat([B[:bs],_a,_b])
subplots(x, nc=bs, figsize=(bs*2, 5))
log.plot_epochs()
```
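The `ImageDataset` above shifts set A's per-channel means onto set B's (the `items_A +=` line) so the two face sets share color statistics before training. A standalone NumPy sketch of that color-matching step, with made-up random data:

```python
import numpy as np

# Stand-ins for the two face sets, shaped (N, H, W, C) with values in [0, 1]
A = np.random.rand(4, 8, 8, 3)
B = np.random.rand(4, 8, 8, 3) * 0.5 + 0.25

# Shift A's per-channel means onto B's, as ImageDataset.__init__ does
A_matched = A + (B.mean(axis=(0, 1, 2)) - A.mean(axis=(0, 1, 2)))

assert np.allclose(A_matched.mean(axis=(0, 1, 2)), B.mean(axis=(0, 1, 2)))
```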
# Introduction
.....
Check to see if jupyter lab uses the correct python interpreter with '!which python'.
It should be something like '/opt/anaconda3/envs/[environment name]/bin/python' (on Mac).
If not, try this: https://github.com/jupyter/notebook/issues/3146#issuecomment-352718675
```
!which python
```
# Install dependencies:
```
# install_packages = True
# if install_packages:
# !conda install tensorflow=2 -y
# !conda install -c anaconda pandas -y
# !conda install -c conda-forge tensorflow-hub -y
# !conda install -c akode html2text -y
# !conda install -c conda-forge tqdm -y
# !conda install -c anaconda scikit-learn -y
# !conda install -c conda-forge matplotlib -y
# !conda install -c anaconda seaborn -y
```
# Imports
```
#imports
import pandas as pd
import numpy as np
import os
import time
import tensorflow as tf
import tensorflow_hub as hub
import zipfile
from html2text import HTML2Text
from tqdm import tqdm
import re
from sklearn.metrics import pairwise_distances
from sklearn.preprocessing import normalize
import matplotlib.pyplot as plt
import seaborn as sns
```
# Set pandas print options
This will improve readability of printed pandas dataframe.
```
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', None)
pd.set_option('display.max_colwidth', None)
```
## Set global Parameters
Set your parameters here:
- `data_path`: Put the data you downloaded with YouTube Data Tools in this path.
- `output_path`: The files generated in this notebook will be saved here.
- `url_dict`: URLs to models on TensorFlow Hub; other models are available there.
- `model_type`: Which model to use; choose one of the keys in `url_dict`.
- `new_embeddings`: If true, new embeddings are generated and saved at `output_path`; otherwise embeddings are loaded from disk.
```
data_path = './data/videoinfo_rcHCretMIZU_2020_11_30-16_16_17_comments.tab'
output_path = "./output/"
new_embeddings = True
url_dict = {
'Transformer' : "https://tfhub.dev/google/universal-sentence-encoder-large/5",
'DAN' : "https://tfhub.dev/google/universal-sentence-encoder/4",
'Transformer_Multilingual': "https://tfhub.dev/google/universal-sentence-encoder-multilingual-large/3"
}
model_type = 'Transformer' #@param ['DAN','Transformer','Transformer_Multilingual']
```
## Create output directory
Try to create the directory defined by output_path
```
try:
os.mkdir(output_path)
except OSError:
print ("Creation of the directory %s failed" % output_path)
else:
print ("Successfully created the directory %s " % output_path)
```
# Load Data
Load your data as a pandas dataframe
```
if new_embeddings:
data = pd.read_csv(data_path,sep='\t',header=(0))
data.head()
```
# Preprocessing
Preprocess your data:
- Drop empty rows
- Drop unused columns
```
if new_embeddings:
data = data.dropna(subset=['text', 'authorName']) # drop rows with no content
data=data.drop(['id', 'replyCount','likeCount','authorChannelUrl','authorChannelId','isReplyTo','isReplyToName'],axis=1) # drop unused columns
data.head()
```
- remove HTML-tags, links and usernames
```
if new_embeddings:
# Remove HTML tags
tqdm.pandas()
h = HTML2Text()
h.ignore_links = True
data['cleaned'] = data['text'].progress_apply(lambda x: h.handle(x))
print( "Removed HTML Tags.")
# Remove links
http_link_pattern = r'http\S+'
bitly_link_pattern = r'bit.ly/\S+'
data['cleaned'] = data['cleaned'].str.replace(http_link_pattern, '')
data['cleaned'] = data['cleaned'].str.replace(bitly_link_pattern, '')
print( "Removed Links.")
# Remove user names
keep_names = ["earth", "Tide", "Geologist", "A Person", "Titanic", "adventure", "Sun", "The United States Of America"] # user names we want to keep
user_names = [name for name in data['authorName'].unique() if (len(name)> 3 and name not in keep_names)]
data['cleaned'] = data['cleaned'].str.replace('|'.join(map(re.escape, user_names)), '')
print( "Removed user names.")
```
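As a standalone illustration of the link-stripping patterns above (pure `re`, no pandas; the sample string is made up):

```python
import re

http_link_pattern = r'http\S+'
bitly_link_pattern = r'bit.ly/\S+'

text = "see http://example.com/watch?v=1 or bit.ly/abc123 for more"
# Apply both patterns in sequence, just as the two .str.replace calls do
cleaned = re.sub(bitly_link_pattern, '',
                 re.sub(http_link_pattern, '', text))
print(cleaned)  # "see  or  for more"
```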
# Save or Load preprocessed data
Save your data after preprocessing, or load preprocessed data from disk.
```
if new_embeddings:
data.to_pickle(output_path+'data_preprocessed'+'.pkl')
else:
data = pd.read_pickle(output_path+'data_preprocessed'+'.pkl')
data.head()
```
# Produce Text Embeddings with Universal Sentence Encoder
## Load Model
Load the model from TF-hub
```
hub_url = url_dict[model_type]
if new_embeddings:
print("Loading model. This will take some time...")
embed = hub.load(hub_url)
```
## Embed Documents
Produce embeddings of your documents.
```
if new_embeddings:
for k,g in data.groupby(np.arange(len(data))//200):
if k == 0:
embeddings = embed(g['cleaned'])
else:
embeddings_new = embed(g['cleaned'])
embeddings = tf.concat(values=[embeddings,embeddings_new],axis = 0)
print(k , end =" ")
print("The embeddings vector is of fixed length {}".format(embeddings.shape[1]))
np.save(output_path+'/embeddings'+model_type+'.npy', embeddings, allow_pickle=True, fix_imports=True)
else:
embeddings = np.load(output_path+'/embeddings'+model_type+'.npy', mmap_mode=None, allow_pickle=False, fix_imports=True, encoding='ASCII')
embeddings.shape
```
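The `groupby(np.arange(len(data))//200)` idiom above batches the rows into consecutive chunks of 200 so the encoder never sees the whole frame at once. A tiny sketch of how the integer-division key works:

```python
import numpy as np

# np.arange(n) // batch_size yields a group key that splits n rows
# into consecutive chunks of batch_size (here: 7 rows, chunks of 3)
n, batch_size = 7, 3
keys = np.arange(n) // batch_size
print(keys)  # [0 0 0 1 1 1 2]
```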
## Calculate Similarity Matrix with angular distance
'Following Cer et al. (2018), we first compute the sentence embeddings u, v for an STS sentence pair, and then score the sentence pair similarity based on the angular distance between the two embedding vectors $d = -\arccos\left(\frac{u \cdot v}{\|u\|\,\|v\|}\right)$.'
```
from sklearn.metrics.pairwise import cosine_similarity
def cos_sim(input_vectors):
similarity = cosine_similarity(input_vectors)
return similarity
cosine_similarity_matrix = cos_sim(np.array(embeddings))
print(cosine_similarity_matrix)
```
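The code above scores pairs with plain cosine similarity; the angular distance of the quote can be recovered from it. A small sketch, assuming the usual mapping of cosine similarity to an angular similarity in [0, 1]:

```python
import numpy as np

def angular_similarity(cos_sim_matrix):
    # Clip for numerical safety, then map cosine similarity
    # to angular similarity in [0, 1]: 1 - arccos(cos)/pi
    clipped = np.clip(cos_sim_matrix, -1.0, 1.0)
    return 1.0 - np.arccos(clipped) / np.pi

m = np.array([[1.0, 0.0],
              [0.0, 1.0]])
print(angular_similarity(m))  # diagonal -> 1.0, off-diagonal -> 0.5
```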
# Plots Similarity
Plot and print a heat map showing the semantic contextual similarity between comments.
```
import seaborn as sns
def plot_similarity(labels, features, rotation):
corr = np.inner(features, features)
sns.set(font_scale=1.2)
g = sns.heatmap(
corr,
xticklabels=labels,
yticklabels=labels,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(labels, rotation=rotation)
g.set_title("Semantic Textual Similarity")
num_samples = 5
off_set = 10000
plot_similarity(data.iloc[off_set:off_set+num_samples]['cleaned'], embeddings[off_set:off_set+num_samples], 90)
```
# Show neighbours of a comment
Define which comment to analyze
```
comment_index = 13
comment = data["cleaned"][comment_index]
comment_list = data["cleaned"].tolist()
print(comment)
```
Print similar comments.
```
def get_top_similar(sentence, sentence_list, similarity_matrix, topN):
# find the index of sentence in list
index = sentence_list.index(sentence)
# get the corresponding row in similarity matrix
similarity_row = np.array(similarity_matrix[index, :])
# get the indices of top similar
indices = similarity_row.argsort()[-topN:][::-1]
return [sentence_list[i] for i in indices]
for i, value in enumerate(get_top_similar(comment, comment_list, cosine_similarity_matrix, 20)):
print("Top similar comment {}: {}".format(i+1, value))
```
```
%load_ext autoreload
%autoreload 2
%config Completer.use_jedi = False
```
# Movshon lab - SpikeGLX + Expo Converter
This tutorial follows the step-by-step guide for a [NWB Converter](https://github.com/catalystneuro/nwb-conversion-tools/blob/master/documentation/conversion_tools_structure.md#step-by-step-operations)
```
from movshon_lab_to_nwb import MovshonSpikeglxNWBConverter
from datetime import datetime
from pynwb import NWBFile, NWBHDF5IO
from nwbwidgets import nwb2widget
from pathlib import Path
import yaml
import pprint
```
## Step 1 - Converter.get_source_schema()
```
# Get source_schema
source_schema = MovshonSpikeglxNWBConverter.get_source_schema()
pprint.pprint(source_schema['properties'], width=120)
```
## Step 2 - Get user-input source_data that complies to the returned full source_schema
```
# Source data
base_path = Path('/home/luiz/storage/taufferconsulting/client_ben/project_movshon/movshon_data/expo/exampledata/expo_spikeglx/m676p3l11#1')
spikeglx_file = base_path / 'spikeglx/m676p3l11#1_ori16_g0_t0.imec.lf.bin'
expo_file = base_path / 'm676p3l11#1[ori16].xml'
ttl_file = spikeglx_file
source_data = dict(
SpikeGLXRecordingExtractorInterface=dict(
file_path=str(spikeglx_file),
),
ExpoDataInterface=dict(
expo_file=str(expo_file),
ttl_file=str(ttl_file)
)
)
pprint.pprint(source_data, width=120)
```
## Step 3 - Instantiate Converter
```
# Initialize converter
converter = MovshonSpikeglxNWBConverter(source_data=source_data)
print('Data interfaces for this converter:')
pprint.pprint(converter.data_interface_objects, width=120)
```
## Step 4 - Converter.get_metadata_schema()
```
# Get metadata_schema
metadata_schema = converter.get_metadata_schema()
pprint.pprint(metadata_schema, width=120)
```
## Step 5 - Automatically fetches available metadata with Converter.get_metadata()
```
# Get metadata from source data
metadata = converter.get_metadata()
pprint.pprint(metadata, width=120)
```
## Step 6 - Get user-input metadata
```
metadata['NWBFile']['session_description'] = 'example conversion'
pprint.pprint(metadata, width=120)
```
## Step 7 - Converter.get_conversion_options_schema()
```
conversion_options_schema = converter.get_conversion_options_schema()
print('Conversion options for each data interface: \n')
pprint.pprint(conversion_options_schema['properties'], width=120)
```
## Step 8 - Get user-input conversion options
```
conversion_options = dict(
SpikeGLXRecordingExtractorInterface=dict(),
ExpoDataInterface=dict(convert_expo=True)
)
```
## Step 9 - Run conversion user filled metadata and conversion_options
```
output_file = 'out_example.nwb'
converter.run_conversion(
metadata=metadata,
nwbfile_path=output_file,
save_to_file=True,
conversion_options=conversion_options
)
```
## Final 1 - Check NWB file
```
# load file
fname = 'out_example.nwb'
with NWBHDF5IO(fname, 'r') as io:
nwbfile = io.read()
print(nwbfile)
```
## Final 2 - Check NWB file with widgets
```
io = NWBHDF5IO(fname, 'r')
nwbfile = io.read()
nwb2widget(nwbfile)
```
## Using RNNs to add two binary strings ##
Given two input binary strings, say 010 and 011, your network should output the sum, 101.
- How to represent the data
- Defining a simple recurrent network to model the problem in a seq2seq fashion
- Training it on binary strings of a fixed length
- Testing the network on binary strings of different lengths
```
# coding: utf-8
# =============================================================================
# Make a simple RNN learn binary addition
# ============================================================================
# author mineshmathew.github.io
# ==============================================================================
from __future__ import print_function
import numpy as np
from time import sleep
import random
import sys
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
random.seed( 10 )
```
## Preparing the input data ##
### Random binary strings of required length as training data ###
- The function <i>getSample()</i> takes a string-length as input and then returns the input vector and target vector that need to be fed to the RNN
- Say if your string-length is 2, lower and upper bounds would be 2 and 3.
- Then if the two random numbers picked from this range are 2 and 3 (you have only 2 and 3 in that range :) )
- your inputs in binary would be 10 and 11 and your sum is 5, which is 101.
- <b>Padding:</b> Since your output is one bit longer, we rewrite the inputs too in 3-bit form, so 010 + 011 --> 101
### Training data as input sequence and target sequence pairs ###
Starting from the least significant bit (since the addition starts from the LSB), we concatenate the corresponding bits from each input binary string, and that forms our input sequence.
Your target vector is the output binary string reversed (since you start from the LSB).
Hence your input at one timestep is the ordered pair of bits at that particular position, and the target for that timestep is the corresponding bit in the output string,
so your input dimension at each time step is 2 and the target dimension is 1.
In the above case your input and target pairs would be
[0 1] -> 1 <br>
[1 1] -> 0 <br>
[0 0] -> 1

```
def getSample(stringLength, testFlag):
#takes stringlength as input
#returns a sample for the network - an input sequence - x and its target -y
#x is a T*2 array, T is the length of the string and 2 since we take one bit each from each string
#testFlag if set prints the input numbers and its sum in both decimal and binary form
lowerBound=pow(2,stringLength-1) # smallest number with exactly stringLength bits
upperBound=pow(2,stringLength)-1 # largest number with exactly stringLength bits
num1=random.randint(lowerBound,upperBound)
num2=random.randint(lowerBound,upperBound)
num3=num1+num2
num3Binary=(bin(num3)[2:])
num1Binary=(bin(num1)[2:])
num2Binary=(bin(num2)[2:])
if testFlag==1:
print('input numbers and their sum are', num1, ' ', num2, ' ', num3)
print ('binary strings are', num1Binary, ' ' , num2Binary, ' ' , num3Binary)
len_num1= (len(num1Binary))
len_num2= (len(num2Binary))
len_num3= (len(num3Binary))
# since num3 will be the largest, we pad other numbers with zeros to that num3_len
num1Binary= ('0'*(len(num3Binary)-len(num1Binary))+num1Binary)
num2Binary= ('0'*(len(num3Binary)-len(num2Binary))+num2Binary)
# forming the input sequence
# the input at first timestep is the least significant bits of the two input binary strings
# x will be then a len_num3 ( or T ) * 2 array
x=np.zeros((len_num3,2),dtype=np.float32)
for i in range(0, len_num3):
x[i,0]=num1Binary[len_num3-1-i] # note that the MSB of the binary string should be the last input along the time axis
x[i,1]=num2Binary[len_num3-1-i]
# target vector is the sum in binary
# convert the binary string to a numpy 1D array
#https://stackoverflow.com/questions/29091869/convert-bitstring-string-of-1-and-0s-to-numpy-array
y=np.array(list(map(int, num3Binary[::-1]))) # list() is needed in Python 3, where map returns an iterator
#print (x)
#print (y)
return x,y
```
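As a sanity check on the encoding just described, here is a small self-contained sketch that builds the input and target arrays for 2 + 3 (010 + 011 = 101) the same way `getSample` does:

```python
import numpy as np

num1, num2 = 2, 3
num3 = num1 + num2                 # 5
n3 = bin(num3)[2:]                 # '101'
n1 = bin(num1)[2:].zfill(len(n3))  # '010' -- pad to the sum's length
n2 = bin(num2)[2:].zfill(len(n3))  # '011'

# One timestep per bit position, least significant bit first
x = np.array([[int(n1[-1 - i]), int(n2[-1 - i])] for i in range(len(n3))],
             dtype=np.float32)
y = np.array([int(b) for b in n3[::-1]])

print(x)  # [[0. 1.] [1. 1.] [0. 0.]]
print(y)  # [1 0 1]
```

Each row of `x` is the pair of input bits for one timestep, and `y` holds the corresponding output bit, matching the input/target pairs listed above.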
## What does the network look like? ##
The figure below shows the fully unrolled network for our task, for the input-target pair we took as an example earlier.
In the figure, for ease of drawing, hiddenDim is chosen as 2

```
class Adder (nn.Module):
def __init__(self, inputDim, hiddenDim, outputDim):
super(Adder, self).__init__()
self.inputDim=inputDim
self.hiddenDim=hiddenDim
self.outputDim=outputDim
self.lstm=nn.RNN(inputDim, hiddenDim )
self.outputLayer=nn.Linear(hiddenDim, outputDim)
self.sigmoid=nn.Sigmoid()
def forward(self, x ):
#size of x is T x B x featDim
#B=1 is dummy batch dimension added, because pytorch mandates it
#if you want B as the first dimension of x then specify batch_first=True when the RNN is initialized
#T,D = x.size(0), x.size(1)
#batch is a must
lstmOut,_ =self.lstm(x ) #x has three dimensions: seqLen * batch * featDim=2
T,B,D = lstmOut.size(0),lstmOut.size(1) , lstmOut.size(2)
lstmOut = lstmOut.contiguous()
# before feeding to linear layer we squash one dimension
lstmOut = lstmOut.view(B*T, D)
outputLayerActivations=self.outputLayer(lstmOut)
#reshape activations to T*B*outputLayerSize
outputLayerActivations=outputLayerActivations.view(T,B,-1).squeeze(1)
outputSigmoid=self.sigmoid(outputLayerActivations)
return outputSigmoid
```
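Under the hood, `nn.RNN` applies the recurrence h_t = tanh(W_ih·x_t + W_hh·h_{t-1} + b) at each timestep. A numpy sketch with made-up random weights (standing in for trained parameters) that mirrors the T x B x featDim shapes used above:

```python
import numpy as np

T, B, feat_dim, hidden_dim = 3, 1, 2, 10
rng = np.random.default_rng(0)

# Hypothetical random parameters, only to demonstrate the shapes
W_ih = rng.standard_normal((hidden_dim, feat_dim)) * 0.1
W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
b = np.zeros(hidden_dim)

x = rng.standard_normal((T, B, feat_dim))
h = np.zeros((B, hidden_dim))
outputs = []
for t in range(T):
    # One recurrence step: mix the current input with the previous hidden state
    h = np.tanh(x[t] @ W_ih.T + h @ W_hh.T + b)
    outputs.append(h)

out = np.stack(outputs)  # (T, B, hidden_dim), matching lstmOut above
print(out.shape)
```

The output layer in `Adder` then maps each of these T hidden states down to a single sigmoid unit, one output bit per timestep.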
### Training the network ###
- batch learning is not used; only one sequence is fed at a time
- runs purely on a CPU
- MSE loss is used
```
featDim=2 # two bits each from each of the String
outputDim=1 # one output node which would output a zero or 1
lstmSize=10
lossFunction = nn.MSELoss()
model =Adder(featDim, lstmSize, outputDim)
print ('model initialized')
#optimizer = optim.SGD(model.parameters(), lr=3e-2, momentum=0.8)
optimizer=optim.Adam(model.parameters(),lr=0.001)
epochs=500
### epochs ##
totalLoss= float("inf")
while totalLoss > 1e-5:
print(" Avg. Loss for last 500 samples = %lf"%(totalLoss))
totalLoss=0
for i in range(0,epochs): # average the loss over 500 samples
stringLen=4
testFlag=0
x,y=getSample(stringLen, testFlag)
model.zero_grad()
x_var=autograd.Variable(torch.from_numpy(x).unsqueeze(1).float()) #convert to torch tensor and variable
# unsqueeze() is used to add the extra batch dimension since
# the input needs to be of shape T*batchSize*featDim; you can't do away with the batch in pytorch
seqLen=x_var.size(0)
#print (x_var)
x_var= x_var.contiguous()
y_var=autograd.Variable(torch.from_numpy(y).float())
finalScores = model(x_var)
#finalScores=finalScores.
loss=lossFunction(finalScores,y_var)
totalLoss+=loss.item() # use .item() on PyTorch >= 0.4; older versions used loss.data[0]
optimizer.zero_grad()
loss.backward()
optimizer.step()
totalLoss=totalLoss/epochs
```
### Testing the model ###
Remember that the network was trained purely on strings of length 4 <br>
Now let's test the net on bit strings of length 5
```
stringLen=5
testFlag=1
# test the network on 10 random binary string addition cases where stringLen=5
for i in range (0,10):
x,y=getSample(stringLen,testFlag)
x_var=autograd.Variable(torch.from_numpy(x).unsqueeze(1).float())
y_var=autograd.Variable(torch.from_numpy(y).float())
seqLen=x_var.size(0)
x_var= x_var.contiguous()
finalScores = model(x_var).data.t()
#print(finalScores)
bits=finalScores.gt(0.5)
bits=bits[0].numpy()
print ('sum predicted by RNN is ',bits[::-1])
print('##################################################')
```
### Things to try out
- See that increasing the hidden size to say 100 worsens the performance
- Change the model slightly to use NLL loss or cross entropy loss (you may want to add two output nodes in this case; one for 1 and one for 0.)
<a href="https://githubtocolab.com/giswqs/geemap/blob/master/examples/notebooks/14_legends.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
Uncomment the following line to install [geemap](https://geemap.org) if needed.
```
# !pip install geemap
import ee
import geemap
geemap.show_youtube('NwnW_qOkNRw')
```
## Add builtin legends from geemap Python package
https://github.com/giswqs/geemap/blob/master/geemap/legends.py
### Available builtin legends:
```
legends = geemap.builtin_legends
for legend in legends:
print(legend)
```
### Available Land Cover Datasets in Earth Engine
https://developers.google.com/earth-engine/datasets/tags/landcover
### National Land Cover Database (NLCD)
https://developers.google.com/earth-engine/datasets/catalog/USGS_NLCD
```
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(builtin_legend='NLCD')
Map
```
### National Wetlands Inventory (NWI)
https://www.fws.gov/wetlands/data/mapper.html
```
Map = geemap.Map()
Map.add_basemap('HYBRID')
Map.add_basemap('FWS NWI Wetlands')
Map.add_legend(builtin_legend='NWI')
Map
```
### MODIS Land Cover Type Yearly Global 500m
https://developers.google.com/earth-engine/datasets/catalog/MODIS_051_MCD12Q1
```
Map = geemap.Map()
Map.add_basemap('HYBRID')
landcover = ee.Image('MODIS/051/MCD12Q1/2013_01_01').select('Land_Cover_Type_1')
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, {}, 'MODIS Land Cover')
Map.add_legend(builtin_legend='MODIS/051/MCD12Q1')
Map
```
## Add customized legends for Earth Engine data
There are three ways you can add customized legends for Earth Engine data
1. Define legend keys and colors
2. Define legend dictionary
3. Convert Earth Engine class table to legend dictionary
### Define legend keys and colors
```
Map = geemap.Map()
legend_keys = ['One', 'Two', 'Three', 'Four', 'etc']
# Colors can be defined using either hex code or RGB (0-255, 0-255, 0-255)
legend_colors = ['#8DD3C7', '#FFFFB3', '#BEBADA', '#FB8072', '#80B1D3']
# legend_colors = [(255, 0, 0), (127, 255, 0), (127, 18, 25), (36, 70, 180), (96, 68, 123)]
Map.add_legend(
legend_keys=legend_keys, legend_colors=legend_colors, position='bottomleft'
)
Map
```
### Define a legend dictionary
```
Map = geemap.Map()
legend_dict = {
'11 Open Water': '466b9f',
'12 Perennial Ice/Snow': 'd1def8',
'21 Developed, Open Space': 'dec5c5',
'22 Developed, Low Intensity': 'd99282',
'23 Developed, Medium Intensity': 'eb0000',
'24 Developed High Intensity': 'ab0000',
'31 Barren Land (Rock/Sand/Clay)': 'b3ac9f',
'41 Deciduous Forest': '68ab5f',
'42 Evergreen Forest': '1c5f2c',
'43 Mixed Forest': 'b5c58f',
'51 Dwarf Scrub': 'af963c',
'52 Shrub/Scrub': 'ccb879',
'71 Grassland/Herbaceous': 'dfdfc2',
'72 Sedge/Herbaceous': 'd1d182',
'73 Lichens': 'a3cc51',
'74 Moss': '82ba9e',
'81 Pasture/Hay': 'dcd939',
'82 Cultivated Crops': 'ab6c28',
'90 Woody Wetlands': 'b8d9eb',
'95 Emergent Herbaceous Wetlands': '6c9fb8',
}
landcover = ee.Image('USGS/NLCD/NLCD2016').select('landcover')
Map.addLayer(landcover, {}, 'NLCD Land Cover')
Map.add_legend(legend_title="NLCD Land Cover Classification", legend_dict=legend_dict)
Map
```
### Convert an Earth Engine class table to legend
For example: MCD12Q1.051 Land Cover Type Yearly Global 500m
https://developers.google.com/earth-engine/datasets/catalog/MODIS_051_MCD12Q1
```
Map = geemap.Map()
ee_class_table = """
Value Color Description
0 1c0dff Water
1 05450a Evergreen needleleaf forest
2 086a10 Evergreen broadleaf forest
3 54a708 Deciduous needleleaf forest
4 78d203 Deciduous broadleaf forest
5 009900 Mixed forest
6 c6b044 Closed shrublands
7 dcd159 Open shrublands
8 dade48 Woody savannas
9 fbff13 Savannas
10 b6ff05 Grasslands
11 27ff87 Permanent wetlands
12 c24f44 Croplands
13 a5a5a5 Urban and built-up
14 ff6d4c Cropland/natural vegetation mosaic
15 69fff8 Snow and ice
16 f9ffa4 Barren or sparsely vegetated
254 ffffff Unclassified
"""
landcover = ee.Image('MODIS/051/MCD12Q1/2013_01_01').select('Land_Cover_Type_1')
Map.setCenter(6.746, 46.529, 2)
Map.addLayer(landcover, {}, 'MODIS Land Cover')
legend_dict = geemap.legend_from_ee(ee_class_table)
Map.add_legend(legend_title="MODIS Global Land Cover", legend_dict=legend_dict)
Map
```
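Presumably, `geemap.legend_from_ee` parses the whitespace-separated class table into a legend dictionary keyed by class value and description. A minimal sketch of that parsing (the `class_table_to_legend` helper is hypothetical, shown only to illustrate the table format):

```python
def class_table_to_legend(table):
    legend = {}
    for line in table.strip().splitlines()[1:]:  # skip the 'Value Color Description' header
        value, color, description = line.split(None, 2)  # description may contain spaces
        legend['{} {}'.format(value, description)] = color
    return legend

sample = """
Value Color Description
0 1c0dff Water
1 05450a Evergreen needleleaf forest
"""
print(class_table_to_legend(sample))
# {'0 Water': '1c0dff', '1 Evergreen needleleaf forest': '05450a'}
```

The resulting dictionary has the same shape as the `legend_dict` passed to `Map.add_legend` in the NLCD example above.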
# Overfitting Figure Generation
We're going to generate `n_points` points distributed along a line, remembering that the formula for a line is $y = mx+b$. Modified (slightly) from [here](https://stackoverflow.com/a/35730618/8068638).
```
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
n_points = 12
m = 1
b = 0
training_delta = 1.0
test_points_offset = 0.5
test_points_jitter = 0.1
test_delta = 1.0
np.random.seed(3)
```
Now, we need to generate the testing and training "data"
```
points_x = np.arange(n_points)
training_delta = np.random.uniform(-training_delta, training_delta, size=(n_points))
training_points_y = m*points_x + b + training_delta
testing_points_x = points_x + np.random.uniform(-test_points_jitter, test_points_jitter, size=(n_points)) + test_points_offset
testing_delta = np.random.uniform(-test_delta, test_delta, size=(n_points))
testing_points_y = m*testing_points_x + b + testing_delta
```
We'll overfit by fitting a polynomial of degree `n_points - 1`, high enough to pass exactly through every training point
```
overfitted = np.poly1d(np.polyfit(points_x, training_points_y, n_points - 1))
x_space = np.linspace(-(n_points/5), 2*n_points+(n_points/5), n_points*100)
overfitted_x_space = np.linspace(-(n_points/5), 2*n_points+(n_points/5), n_points*100)
y_overfitted = overfitted(x_space)
```
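To quantify what the figure will show, we can compare training and test mean squared error for a properly fit line versus the interpolating polynomial. This is a self-contained sketch with freshly generated data, so the exact numbers differ from the figure's:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 12
x = np.arange(n, dtype=float)
y_train = x + rng.uniform(-1, 1, n)       # noisy samples of the line y = x
x_test = x + 0.5                           # test points offset from training points
y_test = x_test + rng.uniform(-1, 1, n)

line = np.poly1d(np.polyfit(x, y_train, 1))        # properly fit model
wiggle = np.poly1d(np.polyfit(x, y_train, n - 1))  # overfit model, interpolates every point

def mse(model, xs, ys):
    return float(np.mean((model(xs) - ys) ** 2))

print('train MSE  line:', mse(line, x, y_train), ' overfit:', mse(wiggle, x, y_train))
print('test  MSE  line:', mse(line, x_test, y_test), ' overfit:', mse(wiggle, x_test, y_test))
```

The overfit model drives its training error to (numerically) zero while its test error explodes between and beyond the training points, which is exactly the gap the plot visualizes.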
## Plot it
Colors chosen from [Wong, B. (2011). Points of view: Color blindness. *Nature Methods, 8*(6), 441–441. doi:10.1038/nmeth.1618](https://doi.org/10.1038/nmeth.1618). I had to do some magic to make the array colors play nicely with matplotlib.
```
def rgb_to_np_rgb(r, g, b):
return (r / 255, g / 255, b / 255)
orange = rgb_to_np_rgb(230, 159, 0)
blueish_green = rgb_to_np_rgb(0, 158, 115)
vermillion = rgb_to_np_rgb(213, 94, 0)
blue = rgb_to_np_rgb(0, 114, 178)
# configure the plot
plt.rcParams["figure.figsize"] = (12.8 * 0.75, 9.6 * 0.75)
plt.rcParams['svg.fonttype'] = 'path'
plt.rcParams['axes.spines.left'] = True
plt.rcParams['axes.spines.right'] = False
plt.rcParams['axes.spines.top'] = False
plt.rcParams['axes.spines.bottom'] = True
plt.rcParams["xtick.labelbottom"] = False
plt.rcParams["xtick.bottom"] = False
plt.rcParams["ytick.left"] = False
plt.rcParams["ytick.labelleft"] = False
plt.xkcd() # for fun (see https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003858#s12)
# plot the data
plt.scatter(points_x, training_points_y, zorder=3,label="Training data", s=100, c=[blue])
plt.scatter(testing_points_x, testing_points_y, zorder=3,label="Test data", s=100, c=[vermillion])
plt.plot(x_space, m*x_space + b, zorder=2, label="Properly fit model", c=blueish_green)
plt.plot(x_space, y_overfitted, zorder=1, label="Overfit model", c=orange)
plt.xlim(-(n_points/5) - 1, max(testing_points_x) + 1)
plt.ylim(-(n_points/5) - 1, max(testing_points_y)+(n_points/5) + 1)
# plt.rcParams["figure.figsize"] = [6.4*2, 4.8*2]
plt.legend(loc=2)
plt.savefig('overfitting.svg', bbox_inches='tight')
plt.savefig('overfitting.png', dpi=150, bbox_inches='tight')
```
```
from jupyter_dash import JupyterDash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import dash
import pandas as pd
import numpy as np
import datetime as dt
import plotly.express as px
import plotly
from plotly.subplots import make_subplots
import plotly.graph_objs as go
```
#### Import data for creating the energy consumption dashboard for the top 20 average energy consumer countries in the world
```
df_long = pd.read_csv('../dataFiles/top20_NetEnergy_DashboardData.csv')
#df_long = df_long.sort_values(['Net Energy Use', 'Year'])
display(df_long)
```
### Create Interactive Dashboard App Using Dash
```
app = dash.Dash()
df_long['Year'] = df_long['Year'].astype('int')
df_long = df_long.sort_values('Year')
available_indicators = df_long[df_long['Indicator_Name'] != 'Natural Resources Depletion (% of GNI)']['Indicator_Name'].unique()
available_countries = df_long['Country_Name'].unique()
income_fig = px.choropleth(df_long[df_long['Year'] == 2014].sort_values('Income_Group'), locations="Country_Code",
color="Income_Group", width=600,
hover_name="Country_Name", # column to add to hover information
category_orders={
"Income_Group": [
'High income',
'Upper middle income',
'Lower middle income'
]
},
color_discrete_map= {
'High income': 'rgb(13, 48, 100)',
'Upper middle income': 'rgb(126, 77, 143)',
'Lower middle income': 'rgb(193, 100, 121)'
},
labels={'Income_Group': 'Income Group'},
title ='Income Group of Countries')
income_fig.update_layout(
title_font_size=22,
margin=dict(l=40, r=2, t=40, b=5),
)
app.layout = html.Div([
html.H1("Energy Use Dashboard"),
html.H2("Top 20 Countries with the Highest Average Energy Use"),
html.Div([
html.Div([
dcc.Dropdown(
id='crossfilter-xaxis-column',
options=[{'label': i, 'value': i} for i in available_indicators],
value='Net Energy Use'
)
],
style={'width': '50%', 'display': 'block'}),
],
style={
'borderBottom': 'thin lightgrey solid',
'backgroundColor': 'rgb(250, 250, 250)',
'padding': '10px 5px'
}),
html.Div([
dcc.Graph(
id='crossfilter-indicator-bar',
hoverData={'points': [{'customdata': 'Canada'}]}
)
], style={'width': '50%', 'height':'60%', 'display': 'inline-block', 'padding': '20 20'}),
html.Div([
dcc.Graph(figure=income_fig
)
], style={'width': '50%', 'display': 'inline-block', 'padding': '5 0 5 5'}),
html.Div(dcc.Slider(
id='crossfilter-year--slider',
min=df_long['Year'].min(),
max=df_long['Year'].max(),
value=df_long['Year'].max(),
step=None,
marks={str(year): str(year) for year in df_long['Year'].unique()}
), style={'width': '50%', 'padding': '0px 20px 20px 20px'}),
html.Div([
dcc.Graph(id='x-time-series')
], style={'display': 'inline-block', 'width': '50%'}),
html.Div([
dcc.Graph(id='natDepl-time-series')
], style={'display': 'inline-block', 'width': '50%'}),
html.Div([
html.H4('Data Source: ', style={'display': 'inline-block', 'marginRight': 10}),
dcc.Link(html.A("The World Bank Data Catalog"), href='https://datacatalog.worldbank.org/search/dataset/0037651/Environment--Social-and-Governance-Data', target="_blank"),
], style={'width': '50%', 'padding': '0px 20px 20px 20px'})
])
@app.callback(
dash.dependencies.Output('crossfilter-indicator-bar', 'figure'),
[dash.dependencies.Input('crossfilter-xaxis-column', 'value'),
dash.dependencies.Input('crossfilter-year--slider', 'value')])
def update_graph(xaxis_column_name,
year_value):
dff = df_long[df_long['Year'] == year_value]
return {
'data': [go.Bar(
x=dff[dff['Indicator_Name'] == xaxis_column_name]['Value'],
y=dff[dff['Indicator_Name'] == xaxis_column_name]['Country_Name'],
text=dff[dff['Indicator_Name'] == xaxis_column_name]['Country_Name'],
customdata=dff[dff['Indicator_Name'] == xaxis_column_name]['Country_Name'],
orientation='h',
# mode='markers',
marker=dict(
color='rgba(0,0,128, 1)' if (xaxis_column_name == 'Net Energy Use' or xaxis_column_name == 'Energy Use Per Capita') else 'rgba(0,128,0,1)' if 'Renewables' in xaxis_column_name else 'rgba(255,0,0,1)'
)
)],
'layout': go.Layout(
xaxis={
'title': xaxis_column_name,
},
yaxis={
'title': '',
},
margin={'l': 150, 'b': 30, 't': 10, 'r': 0},
height=450,
hovermode='closest'
)
}
def create_time_series(dff, title, xaxis_column_name):
return {
'data': [go.Scatter(
# x=dff['Year'].sort_values(ascending=False),
x=dff['Year'],
y=dff['Value'],
mode='lines+markers',
marker=dict(
color='rgba(0,0,128, 1)' if (xaxis_column_name == 'Net Energy Use' or xaxis_column_name == 'Energy Use Per Capita') else 'rgba(0,128,0,1)' if 'Renewables' in xaxis_column_name else 'rgba(255,0,0,1)'
)
)],
'layout': {
'height': 400,
'margin': {'l': 100, 'b': 30, 'r': 10, 't': 40},
'annotations': [{
'x': 0, 'y': 1, 'xanchor': 'left', 'yanchor': 'bottom',
'xref': 'paper', 'yref': 'paper', 'showarrow': False,
'align': 'left', 'bgcolor': 'rgba(255, 255, 255, 0.5)',
'text': title,
'font': dict(
color="black",
size=16
)
}],
'yaxis': {'type': 'linear', 'title': xaxis_column_name},  # label with the selected indicator rather than a hardcoded title
'xaxis': {'showgrid': False, 'title': 'Year'}
}
}
@app.callback(
dash.dependencies.Output('x-time-series', 'figure'),
[dash.dependencies.Input('crossfilter-indicator-bar', 'hoverData'),
dash.dependencies.Input('crossfilter-xaxis-column', 'value')])
def update_x_timeseries(hoverData, xaxis_column_name):
country_name = hoverData['points'][0]['customdata']
dff = df_long[df_long['Country_Name'] == country_name]
dff = dff[dff['Indicator_Name'] == xaxis_column_name]
title = '<b>{}</b><br>{}'.format(country_name, xaxis_column_name)
return create_time_series(dff, title, xaxis_column_name)
@app.callback(
dash.dependencies.Output('natDepl-time-series', 'figure'),
[dash.dependencies.Input('crossfilter-indicator-bar', 'hoverData'),
dash.dependencies.Input('crossfilter-xaxis-column', 'value')])
def update_natDepl_timeseries(hoverData, xaxis_column_name):
country_name = hoverData['points'][0]['customdata']
dff2 = df_long[df_long['Country_Name'] == country_name]
dff2 = dff2[dff2['Indicator_Name'] == 'Natural Resources Depletion (% of GNI)']
title = '<b>{}</b><br>{}'.format(country_name, 'Natural Resources Depletion (% of GNI)')
return {
'data': [go.Scatter(
x=dff2['Year'],
y=dff2['Value'],
mode='lines+markers',
marker=dict(
color='rgba(0,0,128, 1)' if (xaxis_column_name == 'Net Energy Use' or xaxis_column_name == 'Energy Use Per Capita') else 'rgba(0,128,0,1)' if 'Renewables' in xaxis_column_name else 'rgba(255,0,0,1)'
)
)],
'layout': {
'height': 400,
'margin': {'l': 100, 'b': 30, 'r': 10, 't': 40},
'annotations': [{
'x': 0, 'y': 1, 'xanchor': 'left', 'yanchor': 'bottom',
'xref': 'paper', 'yref': 'paper', 'showarrow': False,
'align': 'left', 'bgcolor': 'rgba(255, 255, 255, 0.5)',
'text': title,
'font': dict(
color="black",
size=16
)
}],
'yaxis': {'type': 'linear', 'title': 'Natural Resources Depletion (% of GNI)', 'range':[0, max(dff2['Value'])+1]},
'xaxis': {'showgrid': False, 'title': 'Year'}
}
}
app.css.append_css({
'external_url': 'https://codepen.io/chriddyp/pen/bWLwgP.css'
})
if __name__ == '__main__':
app.run_server()
```
# Python objects
Before we go much further into numerical modeling, we should stop and discuss some of the inner workings of Python. Recognizing the way values can be handled by Python will give you flexibility in programming and help you avoid common errors.
Early in the previous lesson, we saw that we could assign a value to a variable using the symbol <code>=</code>:
```
elevation_ft = 5430 # elevation of Boulder, CO in feet
```
#callout
Documentation is important!
One way to include documentation in your programs is with comments. Comments are text within the code that the computer ignores. In Python, comments start with the symbol <code>#</code>.
The variable name <code>elevation_ft</code> is not itself the value 5430. It is simply a label that points to a place in the memory where the **object** with the value <code>5430</code> is stored.
This is different from the way the symbol = is used in algebra. An equation like this one represents different things in Python and in algebra:
```
x = 4 + 1
```
In both cases, the letter 'x' corresponds to the value 5. In algebra, 'x' is equivalent to 5; the symbol is simply taking the place of the number. In Python, 'x' is not itself 5; it is a name that points to an object with a value of 5. The variable name 'x' is short-hand for the address where the object is stored in the memory.
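We can observe this name-to-object binding with the built-in <code>id</code> function, which reports the identity (effectively the memory address) of the object a name points to:

```python
x = 4 + 1
y = x                   # y now points to the same object as x
print(id(x) == id(y))   # True: two names, one object

x = 10                  # rebinding x does not affect the object y points to
print(y)                # 5
```

Reassigning <code>x</code> just points the label at a different object; the object holding the value 5 is untouched.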
#callout
## What is an object?
We can think of objects as the things that Python programs manipulate.
Different programming languages define "object" in different ways. *Everything in Python is an object* and almost everything has attributes and methods. Strings are objects. Lists are objects. Functions are objects. Even modules are objects.
Objects are classified into different **classes** or **data types** that define the kinds of things that a program can do with those objects. An integer (like <code>5430</code> above) is one type of object, the string "Hello, World!" is also an object, and the *numpy array* of elevation values in the previous lesson was another type of object.
### Integers
We can use the built-in function <code>type</code> to see what type a particular object is:
```
type(5430)
```
The number <code>5430</code> is an object of type **int**, or integer. We can also use <code>type</code> see the type of object that the variable is assigned to:
```
type(elevation_ft)
```
The variable <code>elevation_ft</code> is assigned to an object with the value <code>5430</code>, which is of type int. Integer is one of several built-in data types in Python. Because they are built in, we don’t need to load a library to use them.
### Floats
Real numbers (*potentially* with decimals) are floating point numbers or **floats**:
```
elevation_m = 1655.064 # elevation of Boulder, CO in meters
type(elevation_m)
```
#test
What type of object are these values?
- 5.6
- 1932
- 7.0000
- 22.
#solution
- float
- int
- float
- float
#callout
## Math with integers and floats
One would expect that a programming language would follow the same rules of arithmetic that we learned as kids. Confusingly, that's not always the case:
```
print '7/2 =', 7/2
```
In the real world, half of 7 is 3.5! Why is it behaving this way?
In Python 2 (but not Python 3), dividing an integer (the number 7) by another integer (the number 2) always results in an integer. This is known as **integer division**. If either number is a float, though, division behaves as expected and returns a float:
```
print '7.0000000001/2 =', 7.0000000001/2
```
While this might seem strange and unnecessarily annoying, some programming languages use integer division for historical reasons: integers take up less memory space and integer operations were much faster on early machines. By default, Python 3 does not use integer division.
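For comparison, in Python 3 the <code>/</code> operator always performs true division, while <code>//</code> performs floor division explicitly (Python 3 syntax):

```python
print(7 / 2)     # 3.5 -- true division always returns a float
print(7 // 2)    # 3   -- floor division, like Python 2's integer division
print(7.0 // 2)  # 3.0 -- floor division follows the operand types
```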
Adding a decimal point to a whole number makes it a float:
```
print '7 is', type(7)
print '-' * 20
print '7. is', type(7.)
print '7.0 is', type(7.0)
```
We can also convert between types through **casting** (we'll look at this again later). To convert an integer into a float, use the function **float()**:
```
num_int = 7
print 'integer division:', num_int / 2
print 'after casting:', float(num_int) / 2
```
### Booleans
Other types of objects in Python are a bit more unusual. **Boolean** objects can take one of two values: False or True. We will see in a later lesson that boolean objects are produced by operations that compare values against one another and by conditional statements.
You'll notice that the words True and False change color when you type them into a Jupyter Notebook. They look different because they are recognized as special keywords. This only works when True and False are capitalized, though! Python does not treat lower case true and false as boolean objects.
```
i_like_chocolate = True
type(i_like_chocolate)
```
When used in an arithmetic operation, a boolean object acts like an integer of value 0 or 1, respectively:
```
print 3 * True
print 3.0 * True
print 3.0 * False
```
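Because <code>True</code> and <code>False</code> behave like 1 and 0 in arithmetic, summing a list of booleans is a handy way to count how many items satisfy a condition:

```python
grades = [88, 72, 95, 61, 79]
passed = [grade >= 70 for grade in grades]  # a list of booleans

print(passed)       # [True, True, True, False, True]
print(sum(passed))  # 4 -- each True counts as 1
```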
### NoneType
The most abstract of object types in Python is the **NoneType**. NoneType objects can only contain the special constant **None**. <code>None</code> is the value that an object takes when no value is set or in the absence of a value. <code>None</code> is a null or NoData value. It is not the same as False, it is not 0 and it is not an empty string. <code>None</code> is nothing.
If you compare <code>None</code> to anything other than <code>None</code>, <code>None</code> will always be less than the other value (In Python 3, comparing <code>None</code> to another object will instead produce an error):
```
nothing = None
print type(nothing)
print nothing > -4
print nothing == nothing # single = assigns variables, double == compares for equivalency
```
Why would you ever want to create an object that contains nothing at all? As you build more complex programs, you'll find many situations where you might want to set a variable but don't want to assign a value to it quite yet. For example, you might want your code to perform one action if the user sets a certain variable but perform a different action if the user does nothing:
```
input_from_user = None
## The user might or might not provide input here.
## If the user provides input, the value would get assigned to the variable input_from_user
if input_from_user is None:
print "The user hasn't said anything!"
if input_from_user is not None:
print "The user said:", input_from_user
```
#callout
You can use the <code>whos</code> command to see what variables you have created and what modules you have loaded into the memory. This is an iPython command, so it will only work if you are in an iPython terminal or a Jupyter Notebook.
```
whos
```
## Sequences
There are several built-in object types in Python for storing multiple values in an organized structure. We can use **indexing** to extract individual values from **sequences** and **slicing** to extract sections with multiple values.
### Strings
Objects of type **string** are simply sequences of characters with a defined order. Strings have to be enclosed in single quotes (' '), double quotes (" "), or triple single or double quotes (''' ''', """ """); quotes of one kind can safely appear inside a string delimited by the other kind:
```
print type("The judge said 'Nobody expects the Spanish Inquisition!'")
```
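Because strings are ordered sequences, we can already pull out individual characters by integer index, or sections with a slice. A quick sketch (indices start at 0; negative indices count from the end):

```python
quote = 'Nobody expects the Spanish Inquisition!'
print(quote[0])       # first character: 'N'
print(quote[-1])      # last character: '!'
print(quote[7:14])    # a slice of characters 7 through 13: 'expects'
```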
### Lists
A **list** is exactly what it sounds like – a sequence of things. The objects contained in a list don’t have to be of the same type: one list can simultaneously contain numbers, strings, other lists, numpy arrays, and even commands to run. Like other sequences, lists are ordered. We can access the individual items in a list through an integer index.
Lists are created by putting values, separated by commas, inside square brackets:
```
shopping_list = ['funions', 'ice cream', 'guacamole']
```
We can change the individual values in a list using indexing:
```
shopping_list[0] = 'funyuns' # oops
print shopping_list
```
There are many ways to change the contents of lists besides assigning new values to individual elements:
```
shopping_list.append('tortilla chips') # add one item
print shopping_list
del shopping_list[0] # delete the first item
print shopping_list
shopping_list.reverse() # reverse the order of the list (in place)
print shopping_list
```
### Tuples
Like lists, **tuples** are simply sequences of objects. Tuples are created by putting values, separated by commas, inside parenthesis:
```
things = ('toy cars', 42, 'elephant')
print type(things)
```
#test
- When we write down a large integer, it's customary to use commas (or periods, depending on the country) to separate the number into groups of three digits. It's easier for humans to read a large number with separators but Python sees them as something else. What type of object is this?
my_account_balance = 15,752,000,000
#solution
```
my_account_balance = 15,752,000,000
type(my_account_balance)
```
#test
- Create a tuple that contains only one value. Confirm that it's really a tuple. You might have to experiment!
Hint: Start with a tuple with two values and simplify it.
#solution
```
lil_tuple = 1,
type(lil_tuple)
```
There is one very important difference between lists and tuples: lists can be modified while tuples cannot.
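A quick sketch of that difference: assigning to a list element succeeds, while assigning to a tuple element raises a <code>TypeError</code>:

```python
groceries = ['milk', 'eggs']
groceries[0] = 'oat milk'      # fine: lists are mutable
things = ('toy cars', 42, 'elephant')
try:
    things[1] = 43             # tuples are immutable
except TypeError:
    print('tuples cannot be modified in place!')
```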
#callout
## Ch-ch-changes
Data which can be modified in place is called **mutable**, while data which cannot be modified is called **immutable**. Strings, numbers and tuples are immutable. This does not mean that variable names assigned to these objects will forever be assigned to those objects! If we want to change the value of a string, number, or tuple, we do it by re-assigning the variable name to a completely new object in memory.
```
fav_animal = 'capibara' # misspelled!
print fav_animal[3]
# fav_animal[3] = 'y' # this line would raise a TypeError -- strings are immutable
fav_animal2 = fav_animal # both variable names point to the same object
fav_animal2 = 'capybara' # re-assign the variable name to a new, correctly spelled object
print 'Old object:', fav_animal
print 'New object:', fav_animal2
```
Lists and numpy arrays, on the other hand, are mutable objects: we can modify them in place after they have been created. We can change individual elements, append new elements, or reorder the whole list. For operations like sorting, we can choose whether to use a function that modifies the data in place or a function that leaves the original object unchanged and creates a new, modified object with a new variable name.
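For example, the built-in <code>sorted()</code> function leaves the original list unchanged and returns a new sorted list, while the <code>list.sort()</code> method sorts the list in place (and returns <code>None</code>):

```python
scores = [3, 1, 2]
new_scores = sorted(scores)    # new object; the original is unchanged
print(new_scores)              # [1, 2, 3]
print(scores)                  # [3, 1, 2]
scores.sort()                  # modifies the list in place and returns None
print(scores)                  # [1, 2, 3]
```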
Be careful when modifying data in place. **If two variables refer to the same list and you modify a value in the list, it will change the contents of the list for both variables.** If you want to have two variables refer to independent versions of the same mutable object, you must make a copy of the object when you assign it to the new variable name.
Consider the relationship between variable names and objects in this script:
```
mildSalsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
hotSalsa = mildSalsa # both salsas point to the same object
hotSalsa[0] = 'jalapenos' # change the recipe for hot salsa
print 'Mild salsa:', mildSalsa
print 'Hot salsa:', hotSalsa
```
Because both variable names <code>mildSalsa</code> and <code>hotSalsa</code> point to the same mutable object, changing the recipe for one also changed the recipe for the other.
If we want variables with mutable values to be independent, we must make a copy of the object:
```
mildSalsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
hotSalsa = list(mildSalsa) # make a **copy** of the list
hotSalsa[0] = 'jalapenos' # change the recipe for hot salsa
print 'Mild salsa:', mildSalsa
print 'Hot salsa:', hotSalsa
```
Code that modifies data in place can be more difficult to understand (and therefore to debug). However, it is often far more efficient to modify a large data structure in place than to create a modified copy for every small change. You should consider both of these aspects when writing your code.
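One subtlety worth sketching here: for lists, <code>+=</code> extends the existing object in place, while <code>+</code> builds a brand-new object, and any aliases behave accordingly:

```python
nums = [1, 2, 3]
alias = nums          # both names point to the same list object
nums += [4]           # in-place extension: the alias sees the change
print(alias)          # [1, 2, 3, 4]
nums = nums + [5]     # builds a brand-new list: the alias keeps the old object
print(alias)          # [1, 2, 3, 4]
print(nums)           # [1, 2, 3, 4, 5]
```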
```
num_float = 7.0
print "'normal' division:", num_float / 2
print 'after casting:', int(num_float) / 2
string = "if it's in caps i'm trying to YELL!"
print string.lower()
print string.upper()
print string.split()
print string.replace('YELL', 'fix my keyboard')
print string.find('caps')
print string[-1:-len(string)-1:-1].find('spac')
print string[11:-20]
print '/'.join(string.split())
from math import exp
bool(2e-324)
s1 = 3 * shopping_list[-1:]
s2 = 3 * shopping_list[-1]
print s1, type(s1)
print s2, type(s2)
print shopping_list[-1:], type(shopping_list[-1:])
print shopping_list[-1], type(shopping_list[-1])
shopping_list = ['tortilla chips', 'guacamole', 'ice cream']
shopping_list = shopping_list + ['coffee', 'cheese']
print shopping_list
```
#callout
## Object ID
We can find the address of an object in memory with the function <code>id()</code>. This function returns the “identity” of an object: an integer which is guaranteed to be unique and constant for this object during its lifetime.
Two variables names that point to the same mutable object will show the same ID.
```
mildSalsa = ['peppers', 'onions', 'cilantro', 'tomatoes']
hotSalsa = mildSalsa
print 'Variable 1 to mutable obj:', id(mildSalsa)
print 'Variable 2 to mutable obj:', id(hotSalsa)
```
When the second variable name is tied to an independent copy of the mutable object, the two variable names will show different IDs:
```
hotSalsa = list(mildSalsa)
print 'Original mutable obj:', id(mildSalsa)
print 'Copied mutable obj:', id(hotSalsa)
```
## Non-continuous slices
So far we’ve seen how to use slicing to take single blocks of successive entries from a sequence. But what if we want to take a subset of entries that aren’t next to each other in the sequence?
You can achieve this by providing a third argument - the step size - to the index range within the brackets. The example below shows how you can take every third entry in a list:
```
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
subset = primes[0:12:3]
print "Every third prime:", subset
```
Notice that the slice taken begins with the first entry in the range, followed by entries taken at equally-spaced intervals (the steps) thereafter. If you wanted to begin the subset with the third entry, you would need to specify that as the starting point of the sliced range:
```
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
subset = primes[2:12:3]
print "Every third prime, starting from the third entry:", subset
```
#test
Use the step size argument to create a new string that contains only every other character in the string "Your Mother was a Hamster, and your Father smelt of Elderberries!".
#solution
```
text = "Your Mother was a Hamster, and your Father smelt of Elderberries!"
print text[::2]
```
## Mapping types
### Dictionaries
Because values in sequences are stored in a known order, individual values in sequence-type objects can be accessed by position through integer indices. **Dictionaries** are a type of object where values are not stored in any particular order. Dictionaries are unordered collections of **key:value** pairs. They map (or match) keys, which can be any immutable type (strings, numbers, tuples), to values, which can be of any type (heterogeneous). Individual values in a dictionary are accessed by their keys.
We create dictionaries with curly brackets and pairs of keys and values. An empty dictionary would simply have no key:value pairs inside the curly brackets:
```
person = {'name':'Jack', 'age': 32}
print person
```
Notice that the order of the key:value pairs is different in the dictionary definition than in the output! Because values in a dictionary are not stored in a particular order, they take an arbitrary order when the dictionary is displayed.
We can access and modify individual values in a dictionary with their keys:
```
person['age'] = 33
print person
```
We can also use keys to add values to a previously defined dictionary:
```
person['address'] = 'Downtown Boulder'
print person
```
#test
- Create an empty dictionary called "states"
- Add 3 items to the dictionary. Map state names (the keys) to their abbreviations (the values) (ex. 'Wyoming':'WY'). Pick easy ones!
(You can use states from another country or look here: http://www.50states.com/abbreviations.htm)
#solution
```
states = {}
states['Colorado'] = 'CO'
states['California'] = 'CA'
states['Florida'] = 'FL'
```
#test
- Use a variable in place of a key to access values in your <code>states</code> dictionary. For example, if I set the variable to "Wyoming", the value should be "WY".
#solution
```
selected_state = 'California'
print states[selected_state]
```
#test
- Create a dictionary called "cities" that contains 3 key:value pairs. The keys should be the state abbreviations in your <code>states</code> dictionary and the values should be the name of one city in each of those states (ex. 'WY':'Laramie'). Don't start with an empty dictionary and add values to it -- initialize the dictionary with all of the key:value pairs already in it.
#solution
```
cities = {'CO':'Denver', 'FL':'Miami', 'CA':'San Francisco'}
```
#challenge
- Write a short script to fill in the blanks in this string for any state in your <code>states</code> dictionary.
\_\_\_\_\_\_\_\_\_\_ is abbreviated \_\_\_\_ and has cities like \_\_\_\_\_\_\_\_
- Refactor (rewrite, improve) your code so you only have to change one word in your script to change states.
<br>
Hints:
- You can use '+' to concatenate strings
- The values in one of your dictionaries are the keys in the other dictionary
#solution
```
selected_state = 'Colorado'
print selected_state + ' is abbreviated ' + states[selected_state] + ' and has cities like ' + cities[states[selected_state]]
```
#callout
## Converting between types
Many Python functions are sensitive to the type of object they receive. For example, you cannot concatenate a string with an integer:
```
age = 21
sign = 'You must be ' + age + '-years-old to enter this bar'
print sign
```
You will often find yourself needing to convert one data type to another. This is called **casting**. Luckily, conversion functions are easy to remember: the type names double up as a conversion function:
- <code>int()</code>: *strings*, *floats* -> *integers*
- <code>float()</code>: *strings*, *integers* -> *floats*
- <code>str()</code>: all types -> *strings*
- <code>list()</code>: *strings*, *tuples*, *dictionaries* -> *lists*
- <code>tuple()</code>: *strings*, *lists* -> *tuples*
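A few of these conversions in action:

```python
print(int('42'))         # string -> integer
print(float(3))          # integer -> float
print(str(7.5))          # float -> string
print(list('abc'))       # string -> list of characters
print(tuple([1, 2, 3]))  # list -> tuple
```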
#test
## Variables in strings
Fix the second line of the example above so it prints the text in <code>sign</code> correctly. Don't simply change the first line to <code>age = "21"</code>!
#solution
```
age = 21
sign = 'You must be ' + str(age) + '-years-old to enter this bar'
print sign
```
#test
## Lemonade sales
You get hired to work for a highly successful lemonade stand. Their database is managed by a 7-year-old. These are their sales reports for FY2017:
```
sales_1q = ["50.3"] # thousand dollars
sales_2q = 108.52
sales_3q = 79
sales_4q = "82"
```
- Calculate the total sales for FY2017
#solution
```
total_sales = float(sales_1q[0]) + sales_2q + sales_3q + float(sales_4q)
print 'Total lemonade sales:', str(total_sales) + ' thousand dollars'
```
#test
## Aquarium inventory
An aquarium has exhibits for these species:
```
sea_creatures = ['shark', 'cuttlefish', 'squid', 'mantis shrimp']
```
- Convert this list to a tuple
```
element = 'tungsten'
list(element)
```
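One possible solution for the aquarium exercise, reusing the casting pattern shown just above (the variable name <code>sea_creatures_tuple</code> is just a suggestion):

```python
sea_creatures = ['shark', 'cuttlefish', 'squid', 'mantis shrimp']
sea_creatures_tuple = tuple(sea_creatures)  # list -> tuple
print(type(sea_creatures_tuple))
```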
| github_jupyter |
```
import warnings
import pandas as pd
import numpy as np
from pandas_summary import DataFrameSummary
import octopus_ml as oc
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
import seaborn as sns
import re
import optuna
pd.set_option('display.max_columns', None) # or 1000
pd.set_option('display.max_rows', None) # or 1000
pd.set_option('display.max_colwidth', -1) # or 199
%matplotlib inline
warnings.simplefilter("ignore")
```
### Read the Kaggle Titanic competition dataset
https://www.kaggle.com/c/titanic
```
pwd
XY_df=pd.read_csv('../../datasets/Kaggle_titanic_train.csv')
test_df=pd.read_csv('../../datasets/Kaggle_titanic_test.csv')
```
# EDA and pre-processing
## Descriptive statistics (data shape, balance, etc)
```
XY_df.shape
XY_df.head(5)
```
### Target distribution
```
XY_df['Survived'].value_counts()
oc.target_pie(XY_df,'Survived')
XY_df.shape
def convert_to_categorical(df):
    categorical_features = []
    for c in df.columns:
        col_type = df[c].dtype
        if col_type == "object" or col_type.name == "category":
            # an option in case the data (pandas DataFrame) isn't passed with the categorical column type
            df[c] = df[c].astype('category')
            categorical_features.append(c)
    return df, categorical_features

def lgbm_fast(X_train, y_train, num, params=None):
    # Training function for LGBM with basic categorical-feature treatment and close-to-default params
    X_train, categorical_features = convert_to_categorical(X_train)
    lgb_train = lgb.Dataset(X_train, y_train, categorical_feature=categorical_features)
    if params is None:
        params = {
            "objective": "binary",
            "boosting": "gbdt",
            "scale_pos_weight": 0.02,
            "learning_rate": 0.005,
            "seed": 100,
            "verbose": -1
            # 'categorical_feature': 'auto',
            # 'metric': 'auc',
            # 'scale_pos_weight': 0.1,
            # 'learning_rate': 0.02,
            # 'num_boost_round': 2000,
            # "min_sum_hessian_in_leaf": 1,
            # 'max_depth': 100,
            # "num_leaves": 31,
            # "bagging_fraction": 0.4,
            # "feature_fraction": 0.05,
        }
    clf = lgb.train(
        params, lgb_train, num_boost_round=num
    )
    return clf
```
## Dataset comparisons
```
features=XY_df.columns.to_list()
print ('number of features ', len(features))
features_remove=['PassengerId','Survived']
for f in features_remove:
    features.remove(f)

import lightgbm as lgb  # used by lgbm_fast when it runs

def dataset_comparison(df1, df2, top=3):
    print('Datasets shapes:\n df1: ' + str(df1.shape) + '\n df2: ' + str(df2.shape))
    df1['label'] = 0
    df2['label'] = 1
    df = pd.concat([df1, df2])
    print(df.shape)
    clf = lgbm_fast(df, df['label'], 100, params=None)
    oc.plot_imp(clf, df, title="Datasets differences", model="lgbm", num=12, importaince_type="split", save_path=None)
    return df

df = dataset_comparison(XY_df[features], test_df)

# Refined version: gain-based importance, plus correlation of the top features with the label
def dataset_comparison(df1, df2, top=3):
    print('Datasets shapes:\n df1: ' + str(df1.shape) + '\n df2: ' + str(df2.shape))
    df1['label'] = 0
    df2['label'] = 1
    df = pd.concat([df1, df2])
    print(df.shape)
    clf = lgbm_fast(df, df['label'], 100, params=None)
    feature_imp_list = oc.plot_imp(clf, df, title="Datasets differences", model="lgbm", num=10, importaince_type="gain", save_path=None)
    oc.target_corr(df, df['label'], feature_imp_list)
    return df

df = dataset_comparison(XY_df[features], test_df)
df[1700:1800]
df[1700:1800]
```
### Selected features vs target histograms
```
oc.hist_target(XY_df, 'Sex', 'Survived')
oc.hist_target(XY_df, 'Fare', 'Survived')
```
### Data summary - and missing values analysis
```
import missingno as msno
from pandas_summary import DataFrameSummary
dfs = DataFrameSummary(XY_df)
dfs.summary()
# Top 5 sparse features, mainly labs results
pd.Series(1 - XY_df.count() / len(XY_df)).sort_values(ascending=False).head(5)
```
## Data pre-processing
```
XY_df['Cabin'] = XY_df['Cabin'].astype('str').fillna("U0")
deck = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5, "F": 6, "G": 7, "U": 8}
XY_df['Deck'] = XY_df['Cabin'].map(lambda x: re.compile("([a-zA-Z]+)").search(x).group())
XY_df['Deck'] = XY_df['Deck'].map(deck)
XY_df['Deck'] = XY_df['Deck'].fillna(0)
XY_df['Deck'] = XY_df['Deck'].astype('category')
XY_df['relatives'] = XY_df['SibSp'] + XY_df['Parch']
XY_df.loc[XY_df['relatives'] > 0, 'not_alone'] = 0
XY_df.loc[XY_df['relatives'] == 0, 'not_alone'] = 1
XY_df['not_alone'] = XY_df['not_alone'].astype(int)
def encodeAgeFare(train):
    train.loc[train['Age'] <= 16, 'Age_fare'] = 0
    train.loc[(train['Age'] > 16) & (train['Age'] <= 32), 'Age_fare'] = 1
    train.loc[(train['Age'] > 32) & (train['Age'] <= 48), 'Age_fare'] = 2
    train.loc[(train['Age'] > 48) & (train['Age'] <= 64), 'Age_fare'] = 3
    train.loc[(train['Age'] > 64) & (train['Age'] <= 80), 'Age_fare'] = 4
    train.loc[train['Fare'] <= 7.91, 'Fare_adj'] = 0
    train.loc[(train['Fare'] > 7.91) & (train['Fare'] <= 14.454), 'Fare_adj'] = 1
    train.loc[(train['Fare'] > 14.454) & (train['Fare'] <= 31.0), 'Fare_adj'] = 2
    train.loc[(train['Fare'] > 31.0) & (train['Fare'] <= 512.329), 'Fare_adj'] = 3
encodeAgeFare(XY_df)
# Categorical features pre-proccesing
cat_list ,XY_df=oc.cat_features_proccessing(XY_df)
print (cat_list)
features=XY_df.columns.to_list()
print ('number of features ', len(features))
features_remove=['PassengerId','Survived']
for f in features_remove:
    features.remove(f)
X=XY_df[features]
y=XY_df['Survived']
from IPython.display import Image
Image("../images/octopus_know_your_data.PNG", width=600, height=600)
XY_sampled=oc.sampling(XY_df,'Survived',200)
```
# ML template starts - training session
## Training model (LGBM) with stratified CV
```
def create(hyperparams):
    """Create an LGBM classifier for a given set of hyper-parameters."""
    model = LGBMClassifier(**hyperparams)
    return model

def kfold_evaluation(X, y, k, hyperparams, esr=50):
    scores = []
    kf = KFold(k)
    for i, (train_idx, test_idx) in enumerate(kf.split(X)):
        X_train = X.iloc[train_idx]
        y_train = y.iloc[train_idx]
        X_val = X.iloc[test_idx]
        y_val = y.iloc[test_idx]
        model = create(hyperparams)
        model = fit_with_stop(model, X_train, y_train, X_val, y_val, esr)
        train_score = evaluate(model, X_train, y_train)
        val_score = evaluate(model, X_val, y_val)
        scores.append((train_score, val_score))
    scores = pd.DataFrame(scores, columns=['train score', 'validation score'])
    return scores

# Constant
K = 5

# Objective function
def objective(trial):
    # Search spaces
    hyperparams = {
        'reg_alpha': trial.suggest_float('reg_alpha', 0.001, 10.0),
        'reg_lambda': trial.suggest_float('reg_lambda', 0.001, 10.0),
        'num_leaves': trial.suggest_int('num_leaves', 5, 1000),
        'min_child_samples': trial.suggest_int('min_child_samples', 5, 100),
        'max_depth': trial.suggest_int('max_depth', 5, 64),
        'colsample_bytree': trial.suggest_float('colsample_bytree', 0.1, 0.5),
        'cat_smooth': trial.suggest_int('cat_smooth', 10, 100),
        'cat_l2': trial.suggest_int('cat_l2', 1, 20),
        'min_data_per_group': trial.suggest_int('min_data_per_group', 50, 200)
    }
    hyperparams.update(best_params)
    scores = kfold_evaluation(X, y, K, hyperparams, 10)
    return scores['validation score'].mean()

def create(hyperparams):
    model = LGBMClassifier(**hyperparams)
    return model

def fit(model, X, y):
    model.fit(X, y, verbose=-1)
    return model

def fit_with_stop(model, X, y, X_val, y_val, esr):
    # model.fit(X, y,
    #           eval_set=(X_val, y_val),
    #           early_stopping_rounds=esr,
    #           verbose=-1)
    model.fit(X, y,
              eval_set=(X_val, y_val),
              verbose=-1)
    return model

def evaluate(model, X, y):
    yp = model.predict_proba(X)[:, 1]
    auc_score = roc_auc_score(y, yp)
    return auc_score
```
## Hyper Parameter Optimization
```
best_params = {
    'n_estimators': 1000,
    'learning_rate': 0.05,
    'metric': 'auc',
    'verbose': -1
}
from lightgbm import LGBMClassifier
from sklearn.model_selection import KFold
from sklearn.metrics import roc_auc_score
study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=50)
study.best_value
best_params.update(study.best_params)
best_params
#plot_param_importances(study)
#plot_optimization_history(study)
params = {
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': 'auc',
    'learning_rate': 0.1,
    'n_estimators': 500,
    'verbose': -1,
    'max_depth': -1,
    'seed': 100,
    'min_split_gain': 0.01,
    'num_leaves': 18,
    'reg_alpha': 0.01,
    'reg_lambda': 1.50,
    'feature_fraction': 0.2,
    'bagging_fraction': 0.84
}
metrics= oc.cv_adv(X,y,0.5,1000,shuffle=True,params=best_params)
```
# Model evaluation
### Plot of the CV folds - F1 macro and F1 for the positive class
(in this case it's an unbalanced dataset)
```
oc.cv_plot(metrics['f1_weighted'],metrics['f1_macro'],metrics['f1_positive'],'Titanic Kaggle competition')
```
## Scikit learn - Classification report
```
print(classification_report(metrics['y'], metrics['predictions_folds']))
```
## ROC curve with AUC
```
oc.roc_curve_plot(metrics['y'], metrics['predictions_proba'])
```
## Confusion Matrix plot (normalized and with absolute values)
```
oc.confusion_matrix_plot(metrics['y'], metrics['predictions_folds'])
```
## Feature Importance plot
```
feature_imp_list=oc.plot_imp(metrics['final_clf'],X,'LightGBM Mortality Kaggle',num=15)
top_features=feature_imp_list.sort_values(by='Value', ascending=False).head(20)
top_features
```
## Correlations analysis (on top features)
```
list_for_correlations=top_features['Feature'].to_list()
list_for_correlations.append('Survived')
oc.correlations(XY_df,list_for_correlations)
```
## Data leakage test
```
oc.data_leakage(X,top_features['Feature'].to_list())
```
## Analysis of FPs/FNs
```
fps=oc.recieve_fps(XY_df, metrics['index'] ,metrics['y'], metrics['predictions_proba'],top=10)
fns=oc.recieve_fns(XY_df, metrics['index'] ,metrics['y'], metrics['predictions_proba'],top=10)
fps
fns
filter_fps = XY_df[XY_df.index.isin(fps['index'])]
filter_fns = XY_df[XY_df.index.isin(fns['index'])]
filter_fps_with_prediction=pd.merge(filter_fps,fps[['index','preds_proba']], left_on=[pd.Series(filter_fps.index.values)], right_on=fps['index'])
filter_fns_with_prediction=pd.merge(filter_fns,fns[['index','preds_proba']], left_on=[pd.Series(filter_fns.index.values)], right_on=fns['index'])
```
### Top FPs with full features
```
filter_fps_with_prediction
```
### Top FNs with full features
```
filter_fns_with_prediction
```
| github_jupyter |
<div class="alert alert-info">
<p class="lead"> Instructions <i class="fa fa-info-circle"></i></p>
Run all cells (in order) in this notebook if you want to create the TCGA cancer-subtype figures from "A FZD7-specific antibody-drug conjugate induces tumor regression in preclinical models" by Myan Do et al.
</div>
## Create figures for paper
### Load libraries
```
import pandas as pd
from tqdm.notebook import tqdm
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from statannot import add_stat_annotation #version 0.2.3
from itertools import combinations
def logp1(x):
    return np.log2(x + 1)

def prepare_df(df, subtypes, subtype_dict):
    # Format the dataframe for plotting
    subsets = {}
    first_df = True
    how_many = 0
    for subtype in subtypes:
        print('Working with subtype', subtype)
        common = list(set(subtype_dict[subtype]) & set(df.columns))
        if len(common) == 0:
            print(subtype_dict[subtype])
            print('none found')
        else:
            subsets[subtype] = df[common]
            temp = df[common].apply(logp1)
            temp.loc['subtype'] = len(df[common].columns) * [subtype]
            if first_df:
                first_df = False
                df_out = temp
            else:
                df_out = pd.concat([df_out, temp], axis=1)
            how_many += len(df[common].columns)
    print(how_many, f'of {len(df.columns)},', f'{round(100*how_many/len(df.columns),2)}% are included.')
    display(df_out.tail())
    medians = pd.DataFrame()
    for subtype in subtypes:
        medians[subtype] = subsets[subtype].median(axis=1)
    log_med = medians.apply(logp1)
    return log_med, df_out
titlefont = {'fontname':'Arial','fontsize':'16'}
axisfont = {'fontname':'Arial','fontsize':'12'}
def make_plot(df_out, cancer_type, subtypes, titlefont=titlefont, axisfont=axisfont):
    fig, ax = plt.subplots(figsize=(10, 6), dpi=300)
    to_plot = df_out.T
    sns.boxplot(data=to_plot, x='subtype', y='FZD7')
    plt.gca().set_title(f'FZD7 Expression per {cancer_type} Subtype from TCGA', **titlefont)
    plt.gca().set_ylabel('FZD7 Expression\nlog2(FZD7_TPM+1)', **axisfont)
    plt.yticks(**axisfont)
    plt.xticks(**axisfont)
    new_ticks = []
    for i in subtypes:
        new_ticks.append(f'{i}\n(n={str(n_obs[i])})')
    plt.gca().set_xticklabels(new_ticks)
    pairs_to_plot = []
    for comb in combinations(subtypes, 2):
        pairs_to_plot.append(comb)
    to_stat = to_plot[['FZD7', 'subtype']].pivot_table(values='FZD7', index=to_plot.index, columns='subtype', aggfunc='first')[subtypes]
    test_results = add_stat_annotation(ax, data=to_stat,
                                       box_pairs=pairs_to_plot,
                                       test='Mann-Whitney', text_format='star',
                                       loc='inside', verbose=2, line_height=0.01, text_offset=0, line_offset=0.005, use_fixed_offset=False, line_offset_to_box=0.07)
    plt.show()
    return
```
### Make plots per selected subtype
```
for cancer_type in tqdm(['BRCA', 'GBM_LGG', 'LUSC', 'OVCA', 'UCS']):
    print('Working with', cancer_type)
    cancer_subtypes = pd.read_excel(f'https://datasets.genepattern.org/data/publications/Do_2021/{cancer_type}_subtypes.xlsx', index_col=0)
    # cancer_subtypes = cancer_subtypes.fillna('other')
    if cancer_type == 'BRCA':
        # TCGAbiolinks does not have triple-negative information, so add it here
        metadata = pd.read_table('https://datasets.genepattern.org/data/publications/Do_2021/nationwidechildrens.org_clinical_patient_brca.txt', skiprows=0, index_col=1)
        # look for the columns 'er_status_by_ihc', 'pr_status_by_ihc', 'her2_status_by_ihc'
        subset = metadata[['er_status_by_ihc', 'pr_status_by_ihc', 'her2_status_by_ihc']].drop(['bcr_patient_barcode', 'CDE_ID:2003301'])
        display(subset.head())
        tn_list = list(subset[(subset['er_status_by_ihc'] == 'Negative') & (subset['pr_status_by_ihc'] == 'Negative') & (subset['her2_status_by_ihc'] == 'Negative')].index)
        common = set(cancer_subtypes.index) & set(tn_list)
        cancer_subtypes.loc[common, 'Cancer_Subtype'] = 'TripleN'
        cancer_subtypes = cancer_subtypes.replace('Normal', np.nan)
    # cancer_subtypes = cancer_subtypes.dropna()
    cancer_subtypes = cancer_subtypes.dropna()  # uncategorized samples show as NaN and there are only a few
    subtypes = cancer_subtypes['Cancer_Subtype'].unique()
    print('There are', len(subtypes), 'subtypes in this cancer type:')
    subtype_dict = {}
    n_obs = {}
    for stype in subtypes:
        stype_list = list(cancer_subtypes[cancer_subtypes['Cancer_Subtype'] == stype].index)
        subtype_dict[stype] = stype_list
        n_obs[stype] = len(stype_list)
    print(n_obs)
    print('Reading dataset...', end='')
    df = pd.read_csv(f'https://datasets.genepattern.org/data/publications/Do_2021/{cancer_type}_TPM.csv', index_col=0)
    print(' done!')
    log_med, df_out = prepare_df(df, subtypes, subtype_dict)
    make_plot(df_out, cancer_type, subtypes)
    print('============================================')
    print('============================================')

print('Done!')
```
## Prerendered figures
| github_jupyter |
# Deploy to Triton Inference Server locally
description: (preview) deploy a bi-directional attention flow (bidaf) Q&A model locally with Triton
Please note that this Public Preview release is subject to the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
```
!pip install nvidia-pyindex
!pip install --upgrade tritonclient
from azureml.core import Workspace
ws = Workspace.from_config()
ws
```
## Download model
It's important that your model have this directory structure for Triton Inference Server to be able to load it. [Read more about the directory structure that Triton expects](https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/model_repository.html).
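As a minimal sketch of that convention (the version number and <code>model.onnx</code> file name follow Triton's standard layout; the exact files the helper downloads for this tutorial may differ), the repository holds one directory per model, with numbered version subdirectories containing the model file:

```python
from pathlib import Path

# Hypothetical illustration of the layout Triton expects:
#   models/<model-name>/config.pbtxt          (optional model configuration)
#   models/<model-name>/<version>/model.onnx  (the model file itself)
repo = Path("models")
(repo / "bidaf-9" / "1").mkdir(parents=True, exist_ok=True)
(repo / "bidaf-9" / "1" / "model.onnx").touch()   # placeholder for the real model file
print(sorted(str(p) for p in repo.rglob("*")))
```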
```
from src.model_utils import download_triton_models, delete_triton_models
from pathlib import Path
prefix = Path(".")
download_triton_models(prefix)
```
## Register model
```
from azureml.core.model import Model
model_path = prefix.joinpath("models")
model = Model.register(
    model_path=model_path,
    model_name="bidaf-9-tutorial",
    tags={"area": "Natural language processing", "type": "Question-answering"},
    description="Question answering from ONNX model zoo",
    workspace=ws,
    model_framework=Model.Framework.MULTI,
)
model
```
## Deploy webservice
Deploy to a pre-created [AksCompute](https://docs.microsoft.com/python/api/azureml-core/azureml.core.compute.aks.akscompute?view=azure-ml-py#provisioning-configuration-agent-count-none--vm-size-none--ssl-cname-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--location-none--vnet-resourcegroup-name-none--vnet-name-none--subnet-name-none--service-cidr-none--dns-service-ip-none--docker-bridge-cidr-none--cluster-purpose-none--load-balancer-type-none-) named `aks-gpu-deploy`. For other options, see [our documentation](https://docs.microsoft.com/azure/machine-learning/how-to-deploy-and-where?tabs=azcli).
```
from azureml.core.webservice import LocalWebservice
from azureml.core.model import InferenceConfig
from random import randint
service_name = "triton-bidaf-9" + str(randint(10000, 99999))
config = LocalWebservice.deploy_configuration(port=6789)
service = Model.deploy(
workspace=ws,
name=service_name,
models=[model],
deployment_config=config,
overwrite=True,
)
service.wait_for_deployment(show_output=True)
print(service.get_logs())
```
## Test the webservice
```
!pip install --upgrade nltk geventhttpclient python-rapidjson
scoring_uri = service.scoring_uri
!curl -v $scoring_uri/v2/health/ready
import json
import tritonclient.http as tritonhttpclient
from tritonclientutils import triton_to_np_dtype
from src.bidaf_utils import preprocess, postprocess
headers = {}
triton_client = tritonhttpclient.InferenceServerClient(service.scoring_uri[7:])  # strip the leading "http://"
context = "A quick brown fox jumped over the lazy dog."
query = "Which animal was lower?"
model_name = "bidaf-9"
model_metadata = triton_client.get_model_metadata(
model_name=model_name, headers=headers
)
input_meta = model_metadata["inputs"]
output_meta = model_metadata["outputs"]
# We use the np.object data type for string data
np_dtype = triton_to_np_dtype(input_meta[0]["datatype"])
cw, cc = preprocess(context, np_dtype)
qw, qc = preprocess(query, np_dtype)
input_mapping = {
"query_word": qw,
"query_char": qc,
"context_word": cw,
"context_char": cc,
}
inputs = []
outputs = []
# Populate the inputs array
for in_meta in input_meta:
    input_name = in_meta["name"]
    data = input_mapping[input_name]

    input = tritonhttpclient.InferInput(input_name, data.shape, in_meta["datatype"])
    input.set_data_from_numpy(data, binary_data=False)
    inputs.append(input)

# Populate the outputs array
for out_meta in output_meta:
    output_name = out_meta["name"]
    output = tritonhttpclient.InferRequestedOutput(output_name, binary_data=False)
    outputs.append(output)
# Run inference
res = triton_client.infer(
model_name,
inputs,
request_id="0",
outputs=outputs,
model_version="1",
headers=headers,
)
result = postprocess(context_words=cw, answer=res)
result
```
## Delete the webservice and the downloaded model
```
service.delete()
delete_triton_models(prefix)
```
# Next steps
Try reading [our documentation](https://aka.ms/triton-aml-docs) to use Triton with your own models or check out the other notebooks in this folder for ways to do pre- and post-processing on the server.
| github_jupyter |
```
import sys
sys.path.append('../scripts/')
from puddle_world import *
import itertools
import collections
class PolicyEvaluator:
    def __init__(self, widths, goal, puddles, time_interval, sampling_num, \
                 puddle_coef=100.0, lowerleft=np.array([-4, -4]).T, upperright=np.array([4, 4]).T):  # puddle_coef added
        self.pose_min = np.r_[lowerleft, 0]
        self.pose_max = np.r_[upperright, math.pi*2]
        self.widths = widths
        self.goal = goal

        self.index_nums = ((self.pose_max - self.pose_min)/self.widths).astype(int)
        nx, ny, nt = self.index_nums
        self.indexes = list(itertools.product(range(nx), range(ny), range(nt)))

        self.value_function, self.final_state_flags = self.init_value_function()
        self.policy = self.init_policy()
        self.actions = list(set([tuple(self.policy[i]) for i in self.indexes]))

        self.state_transition_probs = self.init_state_transition_probs(time_interval, sampling_num)
        self.depths = self.depth_means(puddles, sampling_num)

        self.time_interval = time_interval  # added
        self.puddle_coef = puddle_coef

    def policy_evaluation_sweep(self):  # added
        for index in self.indexes:
            if not self.final_state_flags[index]:
                self.value_function[index] = self.action_value(tuple(self.policy[index]), index)  # convert the action to a tuple before passing it

    def action_value(self, action, index):  # added
        value = 0.0
        for delta, prob in self.state_transition_probs[(action, index[2])]:  # index[2]: heading index
            after = tuple(self.out_correction(np.array(index).T + delta))  # add the increment delta to the index, handle out-of-range values, then convert to a tuple
            reward = - self.time_interval * self.depths[(after[0], after[1])] * self.puddle_coef - self.time_interval
            value += (self.value_function[after] + reward) * prob

        return value

    def out_correction(self, index):  # added
        index[2] = (index[2] + self.index_nums[2]) % self.index_nums[2]  # wrap the heading index
        return index

    def depth_means(self, puddles, sampling_num):
        ### sample sampling_num**2 evenly spaced points inside each cell ###
        dx = np.linspace(0, self.widths[0], sampling_num)
        dy = np.linspace(0, self.widths[1], sampling_num)
        samples = list(itertools.product(dx, dy))

        tmp = np.zeros(self.index_nums[0:2])  # accumulates the summed depths
        for xy in itertools.product(range(self.index_nums[0]), range(self.index_nums[1])):
            for s in samples:
                pose = self.pose_min + self.widths*np.array([xy[0], xy[1], 0]).T + np.array([s[0], s[1], 0]).T  # coordinates of the sampled point in the cell
                for p in puddles:
                    tmp[xy] += p.depth*p.inside(pose)  # multiply the depth by whether the point is inside the puddle (1 or 0) and accumulate

            tmp[xy] /= sampling_num**2  # turn the summed depth into a mean

        return tmp

    def init_state_transition_probs(self, time_interval, sampling_num):
        ### sample sampling_num**3 evenly spaced points inside a cell ###
        dx = np.linspace(0.001, self.widths[0]*0.999, sampling_num)  # avoid the edges so samples do not spill into neighboring cells
        dy = np.linspace(0.001, self.widths[1]*0.999, sampling_num)
        dt = np.linspace(0.001, self.widths[2]*0.999, sampling_num)
        samples = list(itertools.product(dx, dy, dt))

        ### move the sampled points under each action and heading, and record the index increments ###
        tmp = {}
        for a in self.actions:
            for i_t in range(self.index_nums[2]):
                transitions = []
                for s in samples:
                    before = np.array([s[0], s[1], s[2] + i_t*self.widths[2]]).T + self.pose_min  # pose before the transition
                    before_index = np.array([0, 0, i_t]).T  # index before the transition

                    after = IdealRobot.state_transition(a[0], a[1], time_interval, before)  # pose after the transition
                    after_index = np.floor((after - self.pose_min)/self.widths).astype(int)  # index after the transition

                    transitions.append(after_index - before_index)  # record the index difference

                unique, count = np.unique(transitions, axis=0, return_counts=True)  # tally how many times each cell transition occurred
                probs = [c/sampling_num**3 for c in count]  # divide by the number of samples to obtain probabilities
                tmp[a, i_t] = list(zip(unique, probs))

        return tmp

    def init_policy(self):
        tmp = np.zeros(np.r_[self.index_nums, 2])  # the control output is 2-dimensional, so the array is 4-dimensional
        for index in self.indexes:
            center = self.pose_min + self.widths*(np.array(index).T + 0.5)  # coordinates of the cell center
            tmp[index] = PuddleIgnoreAgent.policy(center, self.goal)

        return tmp

    def init_value_function(self):
        v = np.empty(self.index_nums)  # create an array with one element per discrete state
        f = np.zeros(self.index_nums)
        for index in self.indexes:
            f[index] = self.final_state(np.array(index).T)
            v[index] = self.goal.value if f[index] else -100.0

        return v, f

    def final_state(self, index):
        x_min, y_min, _ = self.pose_min + self.widths*index  # lower-left coordinates in the xy plane
        x_max, y_max, _ = self.pose_min + self.widths*(index + 1)  # upper-right coordinates (the lower-left of the diagonally adjacent state)

        corners = [[x_min, y_min, _], [x_min, y_max, _], [x_max, y_min, _], [x_max, y_max, _]]  # the four corner coordinates
        return all([self.goal.inside(np.array(c).T) for c in corners])

import matplotlib.pyplot as plt  # assumed not re-exported by puddle_world
import seaborn as sns  ### policyevaluator6create

puddles = [Puddle((-2, 0), (0, 2), 0.1), Puddle((-0.5, -2), (2.5, 1), 0.1)]
pe = PolicyEvaluator(np.array([0.2, 0.2, math.pi/18]).T, Goal(-3, -3), puddles, 0.1, 10)

counter = 0  # number of sweeps
for i in range(10):
    pe.policy_evaluation_sweep()
    counter += 1

v = pe.value_function[:, :, 18]
sns.heatmap(np.rot90(v), square=False)
plt.show()

print(counter)
```
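The sweep above is standard iterative policy evaluation. A hedged toy version on a 1-D corridor (states, policy, and costs invented for illustration, not taken from the book's code) shows the same Bellman backup: each sweep rewrites a state's value as the immediate reward plus the value of the state the fixed policy moves to.

```python
import numpy as np

# Toy corridor: states 0..4, the fixed policy always moves right,
# state 4 is terminal with value 0, and every step costs -1.
values = np.zeros(5)
for _ in range(100):        # sweep until convergence
    for s in range(4):      # the terminal state 4 is skipped
        values[s] = -1.0 + values[s + 1]

print(values)  # → [-4. -3. -2. -1.  0.]
```

After convergence each state's value is simply minus the number of steps remaining to the goal, which is the sanity check one would also apply to the heatmap above.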
| github_jupyter |
```
%matplotlib inline
```
# Demo of OPTICS clustering algorithm
Finds core samples of high density and expands clusters from them.
This example uses data that is generated so that the clusters have
different densities.
The :class:`sklearn.cluster.OPTICS` is first used with its Xi cluster detection
method, and then setting specific thresholds on the reachability, which
corresponds to :class:`sklearn.cluster.DBSCAN`. We can see that the different
clusters of OPTICS's Xi method can be recovered with different choices of
thresholds in DBSCAN.
```
# Authors: Shane Grigsby <refuge@rocktalus.com>
# Adrin Jalali <adrin.jalali@gmail.com>
# License: BSD 3 clause
from sklearn.cluster import OPTICS, cluster_optics_dbscan
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import numpy as np
# Generate sample data
np.random.seed(0)
n_points_per_cluster = 250
C1 = [-5, -2] + .8 * np.random.randn(n_points_per_cluster, 2)
C2 = [4, -1] + .1 * np.random.randn(n_points_per_cluster, 2)
C3 = [1, -2] + .2 * np.random.randn(n_points_per_cluster, 2)
C4 = [-2, 3] + .3 * np.random.randn(n_points_per_cluster, 2)
C5 = [3, -2] + 1.6 * np.random.randn(n_points_per_cluster, 2)
C6 = [5, 6] + 2 * np.random.randn(n_points_per_cluster, 2)
X = np.vstack((C1, C2, C3, C4, C5, C6))
clust = OPTICS(min_samples=50, xi=.05, min_cluster_size=.05)
# Run the fit
clust.fit(X)
labels_050 = cluster_optics_dbscan(reachability=clust.reachability_,
core_distances=clust.core_distances_,
ordering=clust.ordering_, eps=0.5)
labels_200 = cluster_optics_dbscan(reachability=clust.reachability_,
core_distances=clust.core_distances_,
ordering=clust.ordering_, eps=2)
space = np.arange(len(X))
reachability = clust.reachability_[clust.ordering_]
labels = clust.labels_[clust.ordering_]
plt.figure(figsize=(10, 7))
G = gridspec.GridSpec(2, 3)
ax1 = plt.subplot(G[0, :])
ax2 = plt.subplot(G[1, 0])
ax3 = plt.subplot(G[1, 1])
ax4 = plt.subplot(G[1, 2])
# Reachability plot
colors = ['g.', 'r.', 'b.', 'y.', 'c.']
for klass, color in zip(range(0, 5), colors):
    Xk = space[labels == klass]
    Rk = reachability[labels == klass]
    ax1.plot(Xk, Rk, color, alpha=0.3)
ax1.plot(space[labels == -1], reachability[labels == -1], 'k.', alpha=0.3)
ax1.plot(space, np.full_like(space, 2., dtype=float), 'k-', alpha=0.5)
ax1.plot(space, np.full_like(space, 0.5, dtype=float), 'k-.', alpha=0.5)
ax1.set_ylabel('Reachability (epsilon distance)')
ax1.set_title('Reachability Plot')
# OPTICS
colors = ['g.', 'r.', 'b.', 'y.', 'c.']
for klass, color in zip(range(0, 5), colors):
    Xk = X[clust.labels_ == klass]
    ax2.plot(Xk[:, 0], Xk[:, 1], color, alpha=0.3)
ax2.plot(X[clust.labels_ == -1, 0], X[clust.labels_ == -1, 1], 'k+', alpha=0.1)
ax2.set_title('Automatic Clustering\nOPTICS')
# DBSCAN at 0.5
colors = ['g', 'greenyellow', 'olive', 'r', 'b', 'c']
for klass, color in zip(range(0, 6), colors):
    Xk = X[labels_050 == klass]
    ax3.plot(Xk[:, 0], Xk[:, 1], color, alpha=0.3, marker='.')
ax3.plot(X[labels_050 == -1, 0], X[labels_050 == -1, 1], 'k+', alpha=0.1)
ax3.set_title('Clustering at 0.5 epsilon cut\nDBSCAN')
# DBSCAN at 2.
colors = ['g.', 'm.', 'y.', 'c.']
for klass, color in zip(range(0, 4), colors):
    Xk = X[labels_200 == klass]
    ax4.plot(Xk[:, 0], Xk[:, 1], color, alpha=0.3)
ax4.plot(X[labels_200 == -1, 0], X[labels_200 == -1, 1], 'k+', alpha=0.1)
ax4.set_title('Clustering at 2.0 epsilon cut\nDBSCAN')
plt.tight_layout()
plt.show()
```
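The "epsilon cut" used in the last two panels can be mimicked without OPTICS itself: drawing a horizontal line through the reachability plot splits the ordered points into contiguous runs that stay below the threshold, one run per cluster. A hedged sketch with made-up reachability values:

```python
import numpy as np

# Reachability values are invented for illustration; peaks above the
# cut separate the ordered points into cluster runs.
def clusters_at_cut(reachability, eps):
    below = np.asarray(reachability) <= eps
    # a run starts where a point is below the cut and its predecessor is not
    return int(below[0] + np.sum(below[1:] & ~below[:-1]))

reachability = [0.2, 0.3, 0.25, 3.0, 0.4, 0.35, 2.5, 0.3]
print(clusters_at_cut(reachability, 2.0))   # → 3
print(clusters_at_cut(reachability, 0.28))  # → 2
```

Lowering the cut can merge nothing but still shrink clusters, because fewer points qualify; this mirrors how the 0.5 and 2.0 epsilon cuts above recover different groupings from the same ordering.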
| github_jupyter |
<a href="https://colab.research.google.com/github/mahfuz978/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/module4-makefeatures/%20Day_4_Make_Features_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
# ASSIGNMENT
- Replicate the lesson code.
- This means that if you haven't followed along already, type out the things that we did in class. Forcing your fingers to hit each key will help you internalize the syntax of what we're doing.
- [Lambda Learning Method for DS - By Ryan Herr](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit?usp=sharing)
- Convert the `term` column from string to integer.
- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid". Otherwise it should contain the integer 0.
- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
```
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!unzip LoanStats_2018Q4.csv.zip
!head LoanStats_2018Q4.csv
!tail LoanStats_2018Q4.csv
import pandas as pd
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
df = pd.read_csv('LoanStats_2018Q4.csv', header = 1 , skipfooter = 2, engine = 'python')
print(df.shape)
df.head()
df.tail()
df.isnull().sum().sort_values(ascending = False)
df = df.drop(columns =['id', 'member_id', 'desc', 'url'], axis= 'columns')
df.dtypes
type(df['int_rate'])
int_rate = '15.02%'
int_rate[:-1]
int_list =[ '15.02%', '13.56%', '16.91%' ]
int_list[:2]
int_rate.strip('%')
type(int_rate.strip('%'))
float(int_rate.strip('%'))
type(float(int_rate.strip('%')))
def remove_percent_to_float(string):
    return float(string.strip('%'))
int_list = ['15.02%','13.56%', '16.91%']
[remove_percent_to_float(item) for item in int_list]
df['int_rate']= df['int_rate'].apply(remove_percent_to_float)
df.head()
df.dtypes
df['emp_title']
df['emp_title'].value_counts(dropna = False).head(20)
df['emp_title'].isnull().sum()
import numpy as np
examples = ['owner', 'Supervisor', 'Project Manager', np.NaN]
def clean_title(item):
    if isinstance(item, str):
        return item.strip().title()
    else:
        return "Unknown"
[clean_title(item) for item in examples]
df['emp_title'] = df['emp_title'].apply(clean_title)
df.head()
```
```
df['emp_title'].value_counts(dropna = False).reset_index().shape
df.describe(exclude = 'number')
df['emp_title'].describe(exclude = 'number')
df['emp_title'].nunique()
df.emp_title_manager = True
print(df.emp_title_manager)
df['emp_title_manager'] = True
print(df['emp_title_manager'])
df['emp_title_manager'] = df['emp_title'].str.contains("Manager")
df.head()
condition = (df['emp_title_manager'] == True)
managers = df[condition]
print(managers.shape)
managers.head()
managers = df[df['emp_title'].str.contains('Manager')]
print(managers.shape)
managers.head()
plebians = df[df['emp_title_manager'] == False]
print(plebians.shape)
plebians.head()
managers['int_rate'].hist(bins=20);
plebians['int_rate'].hist(bins=20);
managers['int_rate'].plot.density();
plebians['int_rate'].plot.density();
managers['int_rate'].mean()
plebians['int_rate'].mean()
df['issue_d']
df['issue_d'].describe()
df['issue_d'].value_counts()
df.dtypes
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].head().values
df.dtypes
df['issue_d'].dt.year
df['issue_d'].dt.month
df['issue_year'] = df['issue_d'].dt.year
df['issue_month'] = df['issue_d'].dt.month
df.head()
[col for col in df if col.endswith('_d')]
df['earliest_cr_line'].head()
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'],
infer_datetime_format=True)
df['days_from_earliest_credit_to_issue'] = (df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_from_earliest_credit_to_issue'].describe()
25171/365
```
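One way to approach the three assignment tasks above, sketched on a tiny hypothetical frame. The value formats used here (e.g. `' 36 months'` for `term` and `'Dec-2018'` for `last_pymnt_d`) are assumptions worth verifying against the real CSV before reusing the code.

```python
import pandas as pd

# Hypothetical mini-frame standing in for the LendingClub data
df = pd.DataFrame({
    'term': [' 36 months', ' 60 months'],
    'loan_status': ['Current', 'Charged Off'],
    'last_pymnt_d': ['Dec-2018', 'Nov-2018'],
})

# Task 1: strip the " months" suffix and cast to integer
df['term'] = df['term'].str.replace(' months', '', regex=False).astype(int)

# Task 2: 1 if the status is "Current" or "Fully Paid", else 0
df['loan_status_is_great'] = df['loan_status'].isin(['Current', 'Fully Paid']).astype(int)

# Task 3: parse the payment date and split out month and year
last_pymnt = pd.to_datetime(df['last_pymnt_d'], format='%b-%Y')
df['last_pymnt_d_month'] = last_pymnt.dt.month
df['last_pymnt_d_year'] = last_pymnt.dt.year

print(df[['term', 'loan_status_is_great', 'last_pymnt_d_month', 'last_pymnt_d_year']])
```

The same `str` accessor and `dt` accessor patterns appear in the lesson code above, so this is mostly a matter of recombining them.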
# STRETCH OPTIONS
You can do more with the LendingClub or Instacart datasets.
LendingClub options:
- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.
- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20.
- Take initiatve and work on your own ideas!
Instacart options:
- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)
- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)
- Take initiative and work on your own ideas!
You can uncomment and run the cells below to re-download and extract the Instacart data
```
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
```
| github_jupyter |
# CORIOLIX REST API Documentation
## EXAMPLE 1: Query the CORIOLIX REST API - Get a list of all REST endpoints
```
"""Example script to query the CORIOLIX REST API."""
# Key concepts:
# Use the python requests module to query the REST API
# Use the python json module to parse and dump the json response
# Returns:
# Returns a list of all CORIOLIX REST Endpoints
import requests
import json
# Base URL for Datapresence REST API - MODIFY AS NEEDED
rest_url = 'https://coriolix.ceoas.oregonstate.edu/oceanus/api/?format=json'
# Make the query to the REST API
response = requests.get(rest_url, verify=False)
# Load the response as json data
responseJSON = json.loads(response.text)
# Print all the published endpoints
print(json.dumps(responseJSON, indent=4, sort_keys=True))
```
## EXAMPLE 2: Query the CORIOLIX REST API - Get the current sensor observation
```
"""Example script to query the CORIOLIX REST API."""
# Key concepts:
# Use the python requests module to query the REST API
# Use the python json module to parse and dump the json response
# Select a specific endpoint to query.
# Returns:
# Returns a list of all currently valid sensor values
import requests
import json
# URL for Datapresence REST endpoint for the current observations table.
rest_url = 'https://coriolix.ceoas.oregonstate.edu/oceanus/api/cur_obs/?format=json'
#rest_url = 'https://coriolix.ceoas.oregonstate.edu/oceanus/api/decimateData/?model=TsgFlth&date_0=2019-10-10%2002:06:55.353%2B00&decfactr=1&format=json'
# Make the query to the REST API
response = requests.get(rest_url, verify=False)
# Load the response as json data
responseJSON = json.loads(response.text)
# Print all current observations
print(json.dumps(responseJSON, indent=4, sort_keys=True))
```
## EXAMPLE 3: Query the CORIOLIX REST API - Get the thermosalinograph data for a user-specified window of time
```
"""Example script to query the CORIOLIX REST API."""
# Key concepts:
# Use the python requests module to query the REST API
# Use the python json module to parse and dump the json response
# Select a specific sensor endpoint to query.
# Filter results
# Returns:
# Returns a list of all currently valid sensor values
import requests
import json
# URL for Datapresence REST endpoint for the current thermosalinograph table.
base_url = 'https://coriolix.ceoas.oregonstate.edu/oceanus/api/decimateData/?model=TsgFlth'
# Set the start date and time using the ISO8601 format, data stored in UTC
start_date = '2019-10-08T20:00:00Z'
end_date = '2019-10-08T21:00:00Z'
# build the query string (base_url already contains '?model=TsgFlth', so additional parameters are appended with '&')
query_url = base_url + '&date_0=' + start_date + '&date_1=' + end_date + '&format=json'
# the line below overrides the date-range query above with a decimation-factor query
query_url = base_url + '&date_0=2019-10-10%2002:06:55.353%2B00&decfactr=1&format=json'
# Make the query to the REST API
response = requests.get(query_url, verify=False)
# Load the response as json data
responseJSON = json.loads(response.text)
# Print all thermosalinograph observations
print(json.dumps(responseJSON, indent=4, sort_keys=True))
```
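Building query strings by hand, as above, is easy to get wrong. A hedged alternative is to let the standard library encode the parameters; the endpoint and field names are assumed to match the examples above.

```python
from urllib.parse import urlencode

# Build the same thermosalinograph filter query with urlencode instead
# of manual string concatenation.
base_url = 'https://coriolix.ceoas.oregonstate.edu/oceanus/api/decimateData/'
params = {
    'model': 'TsgFlth',
    'date_0': '2019-10-08T20:00:00Z',
    'date_1': '2019-10-08T21:00:00Z',
    'format': 'json',
}
query_url = base_url + '?' + urlencode(params)
print(query_url)
```

`urlencode` takes care of percent-encoding the colons in the timestamps and guarantees a single `?` in the final URL.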
| github_jupyter |
# Lab 11
The goal of this lab is to become more fluent with the tools Scikit-Learn offers, such as _transformers_ and _pipelines_. We will use the [Current Population Survey (CPS)](https://www.openml.org/d/534) dataset, where the task is to predict a person's wage from attributes such as education, experience, and age.
```
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
%matplotlib inline
```
As always, a quick descriptive analysis
```
survey = fetch_openml(data_id=534, as_frame=True)
X = survey.data[survey.feature_names]
X.head()
X.describe(include="all").T.fillna("")
y = survey.target
y.head()
```
Followed by the _train/test_ split.
```
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=42
)
```
## Exercise 1
(1 pt)
_One-hot encoding_ is a technique that expands a categorical _feature_ into multiple columns, one per category.
* Define the transformer `ohe_sex` using `OneHotEncoder` with the arguments `drop="if_binary"` and `sparse=False`, then fit and transform the dataframe `X` using only the `SEX` column.
* Define the transformer `ohe_race` using `OneHotEncoder` with the arguments `drop="if_binary"` and `sparse=False`, then fit and transform the dataframe `X` using only the `RACE` column.
```
from sklearn.preprocessing import OneHotEncoder
ohe_sex = OneHotEncoder(drop="if_binary", sparse=False)
# Check the resulting shape
ohe_sex.fit_transform(X['SEX'].to_numpy().reshape(-1, 1)).shape
ohe_race = OneHotEncoder(drop="if_binary", sparse=False)
# Check the resulting shape
ohe_race.fit_transform(X['RACE'].to_numpy().reshape(-1, 1)).shape
```
__Question:__ Why do the resulting transformations have different numbers of columns?
__Answer:__ `RACE` takes three possible values, while `SEX` takes only two; because we use `drop="if_binary"`, the binary column collapses to a single 0/1 column (e.g. 1 for male and 0 for female, or the reverse).
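The column counts can be reproduced on a toy frame. Here `pd.get_dummies` stands in for `OneHotEncoder`, with `drop_first` playing the role of `drop="if_binary"` for the binary column; the toy values are invented for illustration.

```python
import pandas as pd

# Toy stand-in for the SEX and RACE columns of the CPS data
toy = pd.DataFrame({'SEX': ['male', 'female', 'male'],
                    'RACE': ['White', 'Hispanic', 'Other']})

sex_ohe = pd.get_dummies(toy['SEX'], drop_first=True)   # binary -> 1 column
race_ohe = pd.get_dummies(toy['RACE'])                  # 3 levels -> 3 columns

print(sex_ohe.shape, race_ohe.shape)  # → (3, 1) (3, 3)
```

The binary column needs only one indicator because the dropped level is implied, while a three-level column keeps all three indicators.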
## Exercise 2
(1 pt)
One-hot encoding each categorical column and then joining the results into a new array or dataframe is tedious, hard to scale, and error-prone. The `make_column_transformer` function automates this process by applying transformers to different sets of columns.
* `categorical_columns` must be a list with the names of all categorical columns of the dataframe `X`.
* `numerical_columns` must be a list with the names of all numerical columns of the dataframe `X`.
* Define `preprocessor` using `make_column_transformer` such that:
    - The categorical columns get `OneHotEncoder` with the argument `drop="if_binary"`
    - The remaining columns pass through unchanged. Hint: check the documentation of the `remainder` argument.
* Finally, define `X_processed` by fitting and transforming the dataframe `X` with `preprocessor`
```
from sklearn.compose import make_column_transformer
numerical_columns = ['EDUCATION','EXPERIENCE','AGE']
categorical_columns = ['SOUTH','SEX','UNION','RACE','OCCUPATION','SECTOR','MARR']
preprocessor = make_column_transformer(
(OneHotEncoder(drop ="if_binary"), categorical_columns),
remainder= 'passthrough'
)
X_processed = preprocessor.fit_transform(X)
print(f"X_processed has {X_processed.shape[0]} rows and {X_processed.shape[1]} columns.")
print(X_processed)
```
## Exercise 3
(1 pt)
Something similar happens when applying transformations to the response vector. Sometimes it needs to be transformed while keeping the predictions on the original scale. `TransformedTargetRegressor` plays a key role here: it takes an estimator plus the function and its inverse to apply to the response vector.
Define `ttr` as a `TransformedTargetRegressor` such that:
* The regressor is a Ridge regression model with regularization parameter `1e-10`.
* The forward transform is the base-10 logarithm. Hint: `NumPy` is your friend.
* The inverse transform is `10**x`. Hint: check the `special` module of `SciPy`, under _Convenience functions_.
```
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import Ridge
regresor = Ridge(alpha= 1e-10)
f = np.log10
invf = sp.special.exp10
ttr = TransformedTargetRegressor(
regressor = regresor,
func = f
, inverse_func = invf
)
```
Fit the model with the training data
```
ttr.fit(X_train, y_train)
```
Unfortunately it throws an error :(
Try the following:
```
ttr.fit(X_train.select_dtypes(include="number"), y_train)
```
__Question:__ Why did the first fit fail? What is different about the second one?
__Answer:__ The first fit fails because both categorical and numerical variables are passed straight to the regressor; the second fit uses only the numerical variables.
## Exercise 4
(1 pt)
Now let's throw all the ingredients into the blender.
* Define `model` using `make_pipeline` with `preprocessor` and `ttr` as inputs.
* Fit `model` with the training data.
* Compute the median absolute error on the test data.
```
from sklearn.pipeline import make_pipeline
from sklearn.metrics import median_absolute_error
model = make_pipeline(
preprocessor,
ttr
)
model.fit(X_train,y_train)
y_pred = model.predict(X_test)
mae = median_absolute_error(y_test, y_pred)
print(f"The median absolute error is {mae}")
```
| github_jupyter |
# Read Cloud Optimized Geotiffs
The following materials are based on [this tutorial](https://geohackweek.github.io/raster/04-workingwithrasters/). Read more from that tutorial until this one gets updated.
- Let's read a Landsat TIF profile from AWS cloud storage:
```
import rasterio
import matplotlib.pyplot as plt
import numpy as np
# Specify the path for Landsat TIF on AWS
fp = 'http://landsat-pds.s3.amazonaws.com/c1/L8/042/034/LC08_L1TP_042034_20170616_20170629_01_T1/LC08_L1TP_042034_20170616_20170629_01_T1_B4.TIF'
# See the profile
with rasterio.open(fp) as src:
    print(src.profile)
```
- Let's plot a low resolution overview:
```
%matplotlib inline
# Open the COG
with rasterio.open(fp) as src:
    # List of overviews from biggest to smallest
    oviews = src.overviews(1)
    # Retrieve the smallest thumbnail
    oview = oviews[-1]
    print('Decimation factor= {}'.format(oview))
    # NOTE this is using a 'decimated read' (http://rasterio.readthedocs.io/en/latest/topics/resampling.html)
    thumbnail = src.read(1, out_shape=(1, int(src.height // oview), int(src.width // oview)))

print('array type: ', type(thumbnail))
print(thumbnail)

plt.imshow(thumbnail)
plt.colorbar()
plt.title('Overview - Band 4 {}'.format(thumbnail.shape))
plt.xlabel('Column #')
plt.ylabel('Row #')
```
- Let's fix the NoData values to be `NaN` instead of 0:
```
# Open the file
with rasterio.open(fp) as src:
    # Access the overviews
    oviews = src.overviews(1)
    oview = oviews[-1]
    print('Decimation factor= {}'.format(oview))
    # Read the thumbnail
    thumbnail = src.read(1, out_shape=(1, int(src.height // oview), int(src.width // oview)))

# Convert the values into float
thumbnail = thumbnail.astype('f4')
# Convert 0 values to NaNs
thumbnail[thumbnail == 0] = np.nan

plt.imshow(thumbnail)
plt.colorbar()
plt.title('Overview - Band 4 {}'.format(thumbnail.shape))
plt.xlabel('Column #')
plt.ylabel('Row #')
```
- Let's take a subset from high resolution image:
```
#https://rasterio.readthedocs.io/en/latest/topics/windowed-rw.html
#rasterio.windows.Window(col_off, row_off, width, height)
window = rasterio.windows.Window(1024, 1024, 1280, 2560)
with rasterio.open(fp) as src:
    subset = src.read(1, window=window)
plt.figure(figsize=(6,8.5))
plt.imshow(subset)
plt.colorbar(shrink=0.5)
plt.title(f'Band 4 Subset\n{window}')
plt.xlabel('Column #')
plt.ylabel('Row #')
```
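The `Window` arithmetic can be checked without touching the raster at all: `Window(col_off, row_off, width, height)` corresponds to a plain array slice, so the subset above comes back with shape `(height, width)`. A sketch on a stand-in array:

```python
import numpy as np

# Stand-in raster: the slice below mirrors rasterio's
# Window(col_off, row_off, width, height) semantics.
full = np.zeros((4096, 4096), dtype=np.uint16)

col_off, row_off, width, height = 1024, 1024, 1280, 2560
subset = full[row_off:row_off + height, col_off:col_off + width]
print(subset.shape)  # → (2560, 1280)
```

Keeping rows-first/columns-second straight is the usual stumbling block: the window is specified column-offset first, but the returned array is indexed row-first.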
These commands demonstrate the basics of using COGs to retrieve data from the cloud.
| github_jupyter |
```
import pandas as pd
import numpy as np
from scipy import stats
%load_ext autoreload
%autoreload 2
rng = np.random.default_rng(seed=42)
```
## ttest
```
from hypothesis import Hypothesis
from generator import Permute
from test_statistic import DiffMeans
cle_sac = (
pd.read_table("https://moderndive.com/data/cleSac.txt")
.rename(
columns={
"Metropolitan_area_Detailed": "metro_area",
"Total_personal_income": "income",
}
)
.dropna()
)
OH = cle_sac.loc[
lambda df: df["metro_area"] == "Cleveland_ OH", "income"
].values
CA = cle_sac.loc[
lambda df: df["metro_area"] == "Sacramento_ CA", "income"
].values
from specifier import Specifier
hypo = Hypothesis(
cle_sac, specifier=Specifier('income', 'metro_area'), generator=Permute(), test_statistic=DiffMeans()
)
tstats, pvalue = stats.ttest_ind(OH, CA)
hypo.simulate()
print(pvalue - hypo.PValue)
```
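A hedged sketch of what the `Permute` generator presumably does under the hood: pool the two samples, shuffle, re-split, and recompute the difference of means. The sample data here is simulated rather than taken from the income dataset above.

```python
import numpy as np

# Simulated two-sample data with a true mean difference of 0.3
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 200)
b = rng.normal(0.3, 1.0, 200)
observed = a.mean() - b.mean()

# Permutation null: shuffle the pooled sample and re-split
pooled = np.concatenate([a, b])
reps = np.empty(2000)
for i in range(2000):
    rng.shuffle(pooled)  # permute the group labels
    reps[i] = pooled[:200].mean() - pooled[200:].mean()

# Two-sided p-value: fraction of replicates at least as extreme as observed
p_value = float((np.abs(reps) >= abs(observed)).mean())
print(p_value)
```

For roughly normal data like this, the permutation p-value should land close to `stats.ttest_ind`, which is exactly the comparison the cell above makes against `hypo.PValue`.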
## Two proportions
```
# promotions = pd.read_csv('gender_discrimination.csv').assign(id = lambda df: range(len(df)))
# data = promotions.pivot_table(index='decision', columns='gender', aggfunc='size')
# stats.chi2_contingency(data, correction=False)
# male = promotions.query('gender == "male"')['decision'].values == 'promoted'
# female = promotions.query('gender == "female"')['decision'].values == 'promoted'
# from test_statistic import DiffProps
# hypo = Hypothesis(
# (male, female), generator=Permute(), test_statistic=DiffProps(direction='right'), iters=10_000
# )
# hypo.simulate()
# hypo.PValue
# from generator import Bootstrap
# from test_statistic import Props
#
# df = pd.DataFrame(dict(satisfy=["satisfied"] * 73 + ["unsatisfied"] * 27))
# samples = pd.DataFrame(dict(value=np.repeat([1, 0], [80, 20])))
# hypo = Hypothesis(
# samples,
# specifier=Specifier(response='value'),
# generator=Bootstrap(),
# test_statistic=Props(),
# iters=10_000
# )
# hypo.simulate()
# a = np.array(hypo.test_stats)
# # ((a >= 0.87) | (a <= 0.73)).mean()
# left_pval = (a >= 0.87).mean()
# right_pval = (a <= 0.73).mean()
# 2 * min(left_pval, right_pval)
```
### One Mean
```
age_at_marriage = pd.read_csv("https://moderndive.com/data/ageAtMar.csv")
from generator import Bootstrap
from test_statistic import Props
hypo = Hypothesis(
age_at_marriage,
specifier=Specifier(response='age'),
generator=Bootstrap(),
test_statistic=Props(),
iters=10_000
)
hypo.simulate()
mu_hat = age_at_marriage['age'].mean()
a = np.array(hypo.test_stats) - mu_hat + 23.40
(a >= mu_hat).mean()
stats.ttest_1samp(age_at_marriage['age'], 23.40, alternative='greater')
age_at_marriage['age'].mean()
```
| github_jupyter |
# Economics 101B Spring 2018 Pre-Semester Exercises
### Professor DeLong
## Our Computing Environment, Jupyter notebooks
This webpage is called a Jupyter notebook. A notebook is a place to write programs and view their results.
### Text cells
In a notebook, each rectangle containing text or code is called a *cell*.
Text cells (like this one) can be edited by double-clicking on them. They're written in a simple format called [Markdown](http://daringfireball.net/projects/markdown/syntax) to add formatting and section headings. You don't need to learn Markdown, but you might want to.
After you edit a text cell, click the "run cell" button at the top that looks like ▶| to confirm any changes. (Try not to delete the instructions of the lab.)
**Question 1.1.1.** This paragraph is in its own text cell. Try editing it so that this sentence is the last sentence in the paragraph, and then click the "run cell" ▶| button . This sentence, for example, should be deleted. So should this one.
### Code cells
Other cells contain code in the Python 3 language. Running a code cell will execute all of the code it contains.
To run the code in a code cell, first click on that cell to activate it. It'll be highlighted with a little green or blue rectangle. Next, either press ▶| or hold down the `shift` key and press `return` or `enter`.
Try running this cell:
```
print("Hello, World!")
```
And this one:
```
print("\N{WAVING HAND SIGN}, \N{EARTH GLOBE ASIA-AUSTRALIA}!")
```
The fundamental building block of Python code is an expression. Cells can contain multiple lines with multiple expressions. When you run a cell, the lines of code are executed in the order in which they appear. Every `print` expression prints a line. Run the next cell and notice the order of the output.
```
print("First this line is printed,")
print("and then this one.")
```
**Question 1** Change the cell above so that it prints out:
First this line,
then the whole 🌏,
and then this one.
*Hint:* If you're stuck on the Earth symbol for more than a few minutes, try talking to a neighbor or a TA. That's a good idea for any lab problem.
## Writing Jupyter notebooks
You can use Jupyter notebooks for your own projects or documents. When you make your own notebook, you'll need to create your own cells for text and code.
To add a cell, click the + button in the menu bar. It'll start out as a text cell. You can change it to a code cell by clicking inside it so it's highlighted, clicking the drop-down box next to the restart (⟳) button in the menu bar, and choosing "Code".
**Question 2** Add a code cell below this one. Write code in it that prints out:
A whole new cell! ♪🌏♪
(That musical note symbol is like the Earth symbol. Its long-form name is `\N{EIGHTH NOTE}`.)
Run your cell to verify that it works.
## 1.4. Errors
Python is a language, and like natural human languages, it has rules. It differs from natural language in two important ways:
1. The rules are *simple*. You can learn most of them in a few weeks and gain reasonable proficiency with the language in a semester.
2. The rules are *rigid*. If you're proficient in a natural language, you can understand a non-proficient speaker, glossing over small mistakes. A computer running Python code is not smart enough to do that.
Whenever you write code, you'll make mistakes. When you run a code cell that has errors, Python will sometimes produce error messages to tell you what you did wrong.
Errors are okay; even experienced programmers make many errors. When you make an error, you just have to find the source of the problem, fix it, and move on.
We have made an error in the next cell. Run it and see what happens.
```
print("This line is missing something."
```
You should see something like this (minus our annotations):
<img src="images/error.jpg"/>
The last line of the error output attempts to tell you what went wrong. The *syntax* of a language is its structure, and this `SyntaxError` tells you that you have created an illegal structure. "`EOF`" means "end of file," so the message is saying Python expected you to write something more (in this case, a right parenthesis) before finishing the cell.
There's a lot of terminology in programming languages, but you don't need to know it all in order to program effectively. If you see a cryptic message like this, you can often get by without deciphering it. (Of course, if you're frustrated, ask a neighbor or a TA for help.)
Try to fix the code above so that you can run the cell and see the intended message instead of an error.
## 1.5. Submitting your work
All assignments in the course will be distributed as notebooks like this one, and you will submit your work from the notebook. We will use a system called OK that checks your work and helps you submit. At the top of each assignment, you'll see a cell like the one below that prompts you to identify yourself. Run it and follow the instructions.
```
# Don't change this cell; just run it.
# The result will give you directions about how to log in to the submission system, called OK.
# Once you're logged in, you can run this cell again, but it won't ask you who you are because
# it remembers you. However, you will need to log in once per assignment.
from client.api.notebook import Notebook
ok = Notebook('intro.ok')
_ = ok.auth(force=True, inline=True)
```
When you finish an assignment, you need to submit it by running the submit command below. It's fine to submit multiple times; OK will only grade your final submission for each assignment. Don't forget to submit your lab assignment at the end of section, even if you haven't finished everything.
```
_ = ok.submit()
```
Now that you are comfortable with our computing environment, we are going to be moving into more of the fundamentals of Python, but first, run the cell below to ensure all the libraries needed for this notebook are installed.
```
!pip install numpy
!pip install pandas
!pip install scikit-learn
!pip install seaborn
!pip install matplotlib
!pip install -U okpy
# imports
import numpy as np
import pandas as pd
import seaborn as sns
import ipywidgets as widgets
from ipywidgets import interact
import matplotlib.pyplot as plt
%matplotlib inline
```
Here is how to make an interactive model:
```
# make a function that takes in parameter[s] and graphs
# stick that into interact
```
# Introduction to programming concepts
Welcome to 101B! This introductory notebook will familiarize you with some of the basic strategies for data analysis that will be useful to you throughout the course. Once you have completed setting up Python on your computer using `pip install`, move on to the next cells to begin.
## Part 1: Python basics
Before getting into the more advanced analysis techniques that will be required in this course, we need to brush up on a few of the foundational elements of programming in Python.
### A. Expressions
The departure point for all programming is the concept of the __expression__. An expression is a combination of variables, operators, and other Python elements that the language interprets and acts upon. Expressions act as a set of instructions to be fed through the interpreter, with the goal of generating specific outcomes. See below for some examples of basic expressions.
```
### Examples of expressions:
a = 4
b = 10/5
### The two expressions above do not display anything -- they simply store values in the variables a and b.
### An expression that returns an output:
print(a + b)
```
### B. Variables
In the examples above, `a` and `b` are specific Python objects known as __variables__. The first two lines set the variables equal to numerical (one `integer` and one `float`) values, while the final line asks the interpreter to `print` their sum. Variables are stored within the notebook's environment, meaning stored variable values carry over from cell to cell.
```
### Notice that 'a' retains its value.
print(a)
```
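To see what kind of value a variable holds, you can use the built-in `type` function. A quick illustration using the variables from above (redefined here so the cell stands alone):

```python
a = 4        # an integer
b = 10 / 5   # division always produces a float in Python 3
print(type(a))  # <class 'int'>
print(type(b))  # <class 'float'>
```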
### Question 1: Variables
See if you can write a series of expressions that creates two new variables called __x__ and __y__, assigns them values of __10.5__ and __7.2__, then prints their product.
```
### Fill in the missing lines to complete the expressions.
x = ...
...
print()
```
### C. Lists
The next topic is particularly useful in the kind of data manipulation that you will see throughout 101B. The following few cells will introduce the concept of __lists__ (and their counterpart, `numpy arrays`). Read through the following cell to understand the basic structure of a list.
```
### A list is initialized like this:
lst = [1, 3, 6, 'lists', 'are', 'fun', 4]
### And elements are selected like this:
example = lst[2]
### The above line selects the 3rd element of lst (list indices start at 0) and stores it in a variable named example.
print(example)
```
### Slicing lists
As you can see from above, lists do not have to be made up of elements of the same kind. Indices do not have to be taken one at a time, either. Instead, we can take a slice of indices and return the elements at those indices as a separate list.
```
### This line will store the first (inclusive) through fourth (exclusive) elements of lst as a new list called lst_2:
lst_2 = lst[1:4]
lst_2
```
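Slices are quite flexible: either endpoint may be omitted, negative indices count from the end of the list, and a third number sets a step. A brief sketch (the list is redefined here so the cell stands alone):

```python
lst = [1, 3, 6, 'lists', 'are', 'fun', 4]
print(lst[:3])    # first three elements: [1, 3, 6]
print(lst[-2:])   # last two elements: ['fun', 4]
print(lst[::2])   # every other element: [1, 6, 'are', 4]
```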
### Question 2: Lists
Build a list of length 10 containing whatever elements you'd like. Then, slice it into a new list of length five using index slicing. Finally, print the last element in your sliced list.
```
### Fill in the ellipses to complete the question.
my_list = ...
my_list_sliced = my_list[...]
print(...)
```
Lists can also be operated on with a few built-in analysis functions. These include `min` and `max`, among others. Lists can also be concatenated together. Find some examples below.
```
### A list containing six integers.
a_list = [1, 6, 4, 8, 13, 2]
### Another list containing six integers.
b_list = [4, 5, 2, 14, 9, 11]
print('Max of a_list:', max(a_list))
print('Min of b_list:', min(b_list))
### Concatenate a_list and b_list:
c_list = a_list + b_list
print('Concatenated:', c_list)
```
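A few more built-ins are often handy alongside `min` and `max`, namely `len`, `sum`, and `sorted`:

```python
a_list = [1, 6, 4, 8, 13, 2]
print('Length:', len(a_list))      # number of elements: 6
print('Sum:', sum(a_list))         # total of the elements: 34
print('Sorted:', sorted(a_list))   # a new sorted copy; a_list is unchanged
```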
### D. Numpy Arrays
Closely related to the concept of a list is the array, an ordered sequence of elements that is structurally similar to a list. Arrays, however, can be operated on arithmetically with much more versatility than regular lists. For the purpose of later data manipulation, we'll access arrays through Numpy, which requires an installation and an import statement.
To install numpy, open your terminal and use the command:
> `pip install numpy`
Now run the next cell to import the numpy library into your notebook, and examine how numpy arrays can be used.
```
import numpy as np
### Initialize an array of integers 0 through 9.
example_array = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
### This can also be accomplished using np.arange
example_array_2 = np.arange(10)
### Double the values in example_array and print the new array.
double_array = example_array*2
double_array
```
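Arithmetic also works elementwise between two arrays of the same length, which plain lists cannot do (for lists, `+` means concatenation):

```python
import numpy as np

first = np.array([1, 2, 3])
second = np.array([10, 20, 30])
print(first + second)           # elementwise sum: [11 22 33]
print(first * second)           # elementwise product: [10 40 90]
print([1, 2, 3] + [10, 20, 30]) # plain lists concatenate instead
```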
### E. Looping
Loops are often useful in manipulating, iterating over, or transforming large lists and arrays. The first type we will discuss is the __for loop__. For loops are helpful in traversing a list and performing an action at each element. For example, the following code moves through every element in example_array, adds 5 to it, and appends each sum to a new list.
```
new_list = []
for element in example_array:
    new_element = element + 5
    new_list.append(new_element)
new_list
```
The most important line in the above cell is the "`for element in...`" line. This statement sets the structure of our loop, instructing the machine to stop at every number in `example_array`, perform the indicated operations, and then move on. Once Python has stopped at every element in `example_array`, the loop is completed and the final line, which outputs `new_list`, is executed. It's important to note that "element" is an arbitrary variable name used to represent whichever index value the loop is currently operating on. We can change the variable name to whatever we want and achieve the same result, as long as we stay consistent. For example:
```
newer_list = []
for completely_arbitrary_name in example_array:
    newer_element = completely_arbitrary_name + 5
    newer_list.append(newer_element)
newer_list
```
For loops can also iterate over ranges of numerical values. If I wanted to alter `example_array` without copying it over to a new list, I would use a numerical iterator to access list indices rather than the elements themselves. This iterator, called `i`, would range from 0, the value of the first index, to 9, the value of the last. I can make sure of this by using the built-in `range` and `len` functions.
```
for i in range(len(example_array)):
    example_array[i] = example_array[i] + 5
example_array
```
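When you need both the position and the element itself, Python's built-in `enumerate` is a common alternative to `range(len(...))`. A brief sketch on a small stand-in array:

```python
import numpy as np

values = np.array([5, 6, 7])
# enumerate yields (index, element) pairs as it walks the array
for i, element in enumerate(values):
    values[i] = element * 10
print(values)  # [50 60 70]
```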
### Other types of loops
The __while loop__ repeatedly performs operations until its conditional is no longer satisfied. In the below example, an array of integers 0 to 9 is generated. When the program enters the while loop on the subsequent line, it notices that the maximum value of the array is less than 50. Because of this, it adds 1 to the fifth element, as instructed. Once the instructions embedded in the loop are complete, the program refers back to the conditional. Again, the maximum value is less than 50. This process repeats until the fifth element, now the maximum value of the array, is equal to 50, at which point the conditional is no longer true and the loop breaks.
```
while_array = np.arange(10) # Generate our array of values
print('Before:', while_array)
while(max(while_array) < 50): # Set our conditional
    while_array[4] += 1 # Add 1 to the fifth element if the conditional is satisfied
print('After:', while_array)
```
### Question 3: Loops
In the following cell, partial steps to manipulate an array are included. You must fill in the blanks to accomplish the following: <br>
1. Iterate over the entire array, checking if each element is a multiple of 5
2. If an element is not a multiple of 5, add 1 to it repeatedly until it is
3. Iterate back over the list and print each element.
> Hint: To check if an integer `x` is a multiple of `y`, use the modulus operator `%`. Typing `x % y` will return the remainder when `x` is divided by `y`. Therefore, (`x % y != 0`) will return `True` when `y` __does not divide__ `x`, and `False` when it does.
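A quick illustration of the modulus operator from the hint:

```python
print(10 % 5)        # remainder 0, so 10 is a multiple of 5
print(12 % 5)        # remainder 2, so 12 is not
print(12 % 5 != 0)   # True: 5 does not divide 12
```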
```
### Make use of iterators, range, length, while loops, and indices to complete this question.
question_3 = np.array([12, 31, 50, 0, 22, 28, 19, 105, 44, 12, 77])
for i in range(len(...)):
    while(...):
        question_3[i] = ...
for element in question_3:
    print(...)
```
### F. Functions!
Functions are useful when you want to repeat a series of steps on multiple different objects, but don't want to type out the steps over and over again. Many functions are built into Python already; for example, you've already made use of `len()` to retrieve the number of elements in a list. You can also write your own functions, though, and at this point you already have the skills to do so. <br>
Functions generally take a set of __parameters__, which define the objects they will use when they are run. For example, the `len()` function takes a list or array as its parameter, and returns the length of that list. <br>
The following cell gives an example of an extremely simple function, called `add_two`, which takes as its parameter an integer and returns that integer with, you guessed it, 2 added to it.
```
# An adder function that adds 2 to the given n.
def add_two(n):
    return n + 2
add_two(5)
```
Easy enough, right? Let's look at a function that takes two parameters, compares them somehow, and then returns a boolean value (`True` or `False`) depending on the comparison. The `is_multiple` function below takes as parameters an integer `m` and an integer `n`, checks if `m` is a multiple of `n`, and returns `True` if it is. Otherwise, it returns `False`.
```
def is_multiple(m, n):
    if (m % n == 0):
        return True
    else:
        return False
# Print both results (a notebook cell only displays the last bare expression)
print(is_multiple(12, 4))
print(is_multiple(12, 7))
```
Since functions are so easily replicable, we can include them in loops if we want. For instance, our `is_multiple` function can be used to check if a number is prime! See for yourself by testing some possible prime numbers in the cell below.
```
# Change possible_prime to any integer to test its primality
# NOTE: If you happen to stumble across a large (> 8 digits) prime number, the cell could take a very, very long time
# to run and will likely crash your kernel. Just click kernel>interrupt if it looks like it's caught.
possible_prime = 9999991
for i in range(2, possible_prime):
    if (is_multiple(possible_prime, i)):
        print(possible_prime, 'is not prime')
        break
    if (i >= possible_prime/2):
        print(possible_prime, 'is prime')
        break
```
### Question 4: Writing functions
In the following cell, complete a function that will take as its parameters a list and two integers x and y, iterate through the list, and replace any number in the list that is a multiple of x with y.
> Hint: use the is_multiple() function to streamline your code.
```
def replace_with_y(lst, x, y):
    for i in range(...):
        if(...):
            ...
    return lst
```
## Pandas Dataframes
We will be using Pandas dataframes for much of this class to organize and sort through economic data. Pandas is one of the most widely used Python libraries in data science. It is mainly used for data cleaning, and with good reason: it’s very powerful and flexible, among many other things.
### Creating dataframes
The rows and columns of a pandas dataframe are essentially a collection of lists stacked on top/next to each other. For example, if I wanted to store the top 10 movies and their ratings in a datatable, I could create 10 lists that each contain a rating and a corresponding title, and these lists would be the rows of the table:
```
top_10_movies = pd.DataFrame(data=np.array(
[[9.2, 'The Shawshank Redemption (1994)'],
[9.2, 'The Godfather (1972)'],
[9., 'The Godfather: Part II (1974)'],
[8.9, 'Pulp Fiction (1994)'],
[8.9, "Schindler's List (1993)"],
[8.9, 'The Lord of the Rings: The Return of the King (2003)'],
[8.9, '12 Angry Men (1957)'],
[8.9, 'The Dark Knight (2008)'],
[8.9, 'Il buono, il brutto, il cattivo (1966)'],
[8.8, 'The Lord of the Rings: The Fellowship of the Ring (2001)']]), columns=["Rating", "Movie"])
top_10_movies
```
Alternatively, we can store data in a dictionary instead of in lists. A dictionary keeps a mapping of keys to a set of values, and each key is unique. Using our top 10 movies example, we could create a dictionary that contains ratings as one key and movie titles as another key.
```
top_10_movies_dict = {"Rating" : [9.2, 9.2, 9., 8.9, 8.9, 8.9, 8.9, 8.9, 8.9, 8.8],
"Movie" : ['The Shawshank Redemption (1994)',
'The Godfather (1972)',
'The Godfather: Part II (1974)',
'Pulp Fiction (1994)',
"Schindler's List (1993)",
'The Lord of the Rings: The Return of the King (2003)',
'12 Angry Men (1957)',
'The Dark Knight (2008)',
'Il buono, il brutto, il cattivo (1966)',
'The Lord of the Rings: The Fellowship of the Ring (2001)']}
```
Now, we can use this dictionary to create a table with columns `Rating` and `Movie`
```
top_10_movies_2 = pd.DataFrame(data=top_10_movies_dict, columns=["Rating", "Movie"])
top_10_movies_2
```
Notice how both ways return the same table! However, the list method built the table by using each list as a row, while the dictionary method used the dictionary's keys as column names and its values as columns. In this way, dataframes can be viewed as a collection of basic data structures, either as rows or as columns.
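One caveat with the list-of-rows construction above: routing the rows through `np.array` coerces every entry to a common type, so the ratings come out as strings rather than numbers. A small check (the table is rebuilt here with just two rows so the cell stands alone):

```python
import numpy as np
import pandas as pd

rows = np.array([[9.2, 'The Shawshank Redemption (1994)'],
                 [9.2, 'The Godfather (1972)']])
movies = pd.DataFrame(data=rows, columns=["Rating", "Movie"])
print(movies["Rating"].dtype)   # object: the ratings are strings here
# Convert back to numbers before doing arithmetic on the column:
movies["Rating"] = movies["Rating"].astype(float)
print(movies["Rating"].mean())
```

The dictionary version avoids this, since each column keeps its own type.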
### Reading in Dataframes
Luckily for you, most datatables in this course will be premade and given to you in a form that can be read in by a pandas method, which creates the table for you. A common file type used for economic data is a Comma-Separated Values (.csv) file, which stores tabular data. It is not necessary for you to know exactly how .csv files store data, but you should know how to read a file in as a pandas dataframe.
We will read in a .csv file that contains quarterly real GDI, real GDP, and nominal GDP data in the U.S. from 1947 to the present.
```
### Run this cell to read in the table
accounts = pd.read_csv("data/Quarterly_Accounts.csv")
```
The `pd.read_csv` function expects a path to a .csv file as its input, and will return a datatable created from the data contained in the csv.
We have provided `Quarterly_Accounts.csv` in the `data` directory, which sits inside the current working directory (the folder this assignment lives in). For this reason, we pass the path `data/Quarterly_Accounts.csv`, where the `/` tells `read_csv` to look for the file inside the `data` directory.
Here is a sample of some of the rows in this datatable:
```
accounts.head()
```
### Indexing Dataframes
Oftentimes, tables will contain a lot of extraneous data that muddles our datatables, making it more difficult to quickly and accurately obtain the data we need. To correct for this, we can select out columns or rows that we need by indexing our dataframes.
The easiest way to index into a table is with square bracket notation. Suppose you wanted to obtain all of the Real GDP data from the data. Using a single pair of square brackets, you could index the table for `"Real GDP"`
```
## Run this cell and see what it outputs
accounts["Real GDP"]
```
Notice how the above cell returns a pandas Series containing all the real GDP values in their original order.
Now, if you wanted to get the first real GDP value from this array, you could index it with another pair of square brackets:
```
accounts["Real GDP"][0]
```
Keep in mind that pandas dataframes, as well as many other data structures, are zero-indexed, meaning indexes start at 0 and end at the number of elements minus one.
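Position-based access can also be done with `.iloc`, which works regardless of how the table's index is labeled. A sketch on a small stand-in table (made-up values; the real `accounts` table is shaped similarly):

```python
import pandas as pd

sample = pd.DataFrame({"Year": [1947, 1947, 1948],
                       "Real GDP": [1934.5, 1932.3, 1952.9]})
print(sample["Real GDP"][0])       # label-based lookup on the default index
print(sample["Real GDP"].iloc[0])  # position-based lookup; same value here
print(sample.iloc[0])              # the entire first row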
If you wanted to create a new datatable with select columns from the original table, you can index with double brackets.
```
## Note: .head() returns the first five rows of the table
accounts[["Year", "Quarter", "Real GDP", "Real GDI"]].head()
```
You can also select columns by position instead of by name, using `.iloc`.
```
accounts.iloc[:, [0, 1, 2, 3]].head()
```
Alternatively, you can also get rid of columns you don't need using `.drop()`
```
accounts.drop("Nominal GDP", axis=1).head()
```
Finally, you can use square bracket notation to index rows by their indices with a single set of brackets. You must specify a range of values for which you want to index. For example, if I wanted the rows of `accounts` with index 20 through 30 (a slice excludes its end point, hence the 31):
```
accounts[20:31]
```
### Filtering Data
As you can tell from the previous section, indexing rows by position is only useful when you know the specific rows you need, and it can only return a contiguous range of entries. Working with data often involves huge datasets, making it inefficient and sometimes impossible to know exactly which indices to look at. On top of that, most data analysis involves looking for patterns or specific conditions in the data, which simple index-based selection cannot find.
Thankfully, you can also use square bracket notation to filter out data based on a condition. Suppose we only wanted real GDP and nominal GDP data from the 21st century:
```
accounts[accounts["Year"] >= 2000][["Real GDP", "Nominal GDP"]]
```
The `accounts` table is being indexed by the condition `accounts["Year"] >= 2000`, which keeps only the rows whose "Year" is at least $2000$. We then index this table with the double bracket notation from the previous section to get only the real GDP and nominal GDP columns.
Suppose now we wanted a table with data from the first quarter, and where the real GDP was less than 5000 or nominal GDP is greater than 15,000.
```
accounts[(accounts["Quarter"] == "Q1") & ((accounts["Real GDP"] < 5000) | (accounts["Nominal GDP"] > 15000))]
```
Many different conditions can be combined to filter, and you can use the `&` and `|` operators to connect them. Make sure to include parentheses around each condition!
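For membership tests against several values, chaining many `|` conditions gets verbose; pandas provides `.isin()` for this (it also appears in a later cell of this notebook). A sketch on a small stand-in table with made-up values:

```python
import pandas as pd

sample = pd.DataFrame({"Year": [1999, 2000, 2001, 2002],
                       "Quarter": ["Q1", "Q2", "Q1", "Q3"]})
# Keep rows whose Year is 2000 or 2002 -- equivalent to
# (sample["Year"] == 2000) | (sample["Year"] == 2002)
subset = sample[sample["Year"].isin([2000, 2002])]
print(subset)
```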
Another way to reorganize data to make it more convenient is to sort the data by the values in a specific column. For example, if we wanted to find the highest real GDP since 1947, we could sort the table for real GDP:
```
accounts.sort_values("Real GDP")
```
But wait! The table looks like it's sorted in increasing order. This is because `sort_values` defaults to ordering the column in ascending order. To correct this, add in the extra optional parameter `ascending=False`:
```
accounts.sort_values("Real GDP", ascending=False)
```
Now we can clearly see that the highest real GDP was attained in the first quarter of this year, and had a value of 16903.2
### Useful Functions for Numeric Data
Here are a few useful functions when dealing with numeric data columns.
To find the minimum value in a column, call `min()` on a column of the table.
```
accounts["Real GDP"].min()
```
To find the maximum value, call `max()`.
```
accounts["Nominal GDP"].max()
```
And to find the average value of a column, use `mean()`.
```
accounts["Real GDI"].mean()
```
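These summaries (and several more, such as the count, standard deviation, and quartiles) are produced at once by `.describe()`. A sketch on a stand-in column of made-up values:

```python
import pandas as pd

sample = pd.Series([2.0, 4.0, 6.0], name="Real GDI")
print(sample.min(), sample.max(), sample.mean())  # 2.0 6.0 4.0
# describe() bundles count, mean, std, min, quartiles, and max in one call:
print(sample.describe())
```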
## Part 2: Visualization and Regression
Now that you have completed the Python tutorial, you are now ready to learn about how to visualize data and how to analyze data with regression. To begin, run the cells below to import the required packages we will be using for this tutorial.
```
%matplotlib inline
import numpy as np
import pandas as pd
import sklearn as sk
import matplotlib.pyplot as plt
```
We will be using US unemployment data from https://fred.stlouisfed.org/ to show what we can do with data. Let's start by importing the .csv file with pandas. The statement below reads the csv file into a pandas DataFrame, a data structure for holding tabular (2D) data.
```
unemployment_data = pd.read_csv("data/detailed_unemployment.csv")
unemployment_data
```
We can start visualizing the data that we have in the table. First, we convert the table into a numpy array. Let's extract the columns that we are interested in and plot them with pyplot.
```
#Once this cell is run, the "data" variable will store the table and can be accessed from any cell
data = np.array(unemployment_data[:len(unemployment_data)-1])
#data[:, col_num] selects all row values in that column number in numpy
total_unemployed = data[:, 1]
not_labor = data[:, 3]
#Plot the data by inputting the x and y axis
plt.scatter(total_unemployed, not_labor)
plt.xlabel("Percent Unemployed")
plt.ylabel("Total Not In Labor, Searched for Work")
plt.show()
```
## Question 5: Plotting
Try plotting the total percent of people unemployed vs those unemployed for more than 15 weeks.
```
total_unemployed = ...
unemp_15_weeks = ...
plt.scatter(total_unemployed, unemp_15_weeks)
plt.xlabel("Percent Unemployed")
plt.ylabel("Total Unemployed for > 15 Weeks")
plt.show()
ok.grade('q05')
```
Now that we know how to select and plot our data, we are ready to dive into regression. For our current task, let's use the total unemployed and housing price index columns.
```
total_unemployed = data[:,1]
hpi = data[:,7]
plt.scatter(total_unemployed, hpi)
plt.xlabel("Percent Unemployed")
plt.ylabel("Housing Price Index")
plt.show()
```
The Scikit Learn library has lots of helpful tools for regression tasks. We will perform linear regression on our data with the imported LinearRegression class. The .fit(x, y) method fits the model to the input data and y values and saves it to that instance of the model. We can then use the .predict(x) method to predict the values of the input data from the model.
```
from sklearn.linear_model import LinearRegression
model = LinearRegression()
#We set x and y to our column values, and use np.reshape so that
#the data is formatted correctly to be input into the model
x = total_unemployed
x = np.reshape(x, (73,1))
y = hpi
model.fit(x, y)
y_ = model.predict(x)
plt.scatter(x, y)
plt.plot(x, y_)
plt.xlabel("Percent Unemployed")
plt.ylabel("Housing Price Index")
plt.show()
```
You can also use your linear regression model to predict values based on your current data. For example, if we wanted to predict the housing price index at an 18% unemployment rate:
```
#The predict method expects a 2D array and returns an array of predictions, so take the first element
prediction = model.predict([[18]])[0]
prediction
```
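After `.fit`, the fitted line itself can be inspected: scikit-learn stores the slope in `model.coef_` and the intercept in `model.intercept_`, so a prediction is just slope times x plus the intercept. A self-contained sketch on made-up points that lie exactly on the line y = 3x + 1:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

x = np.array([[0.0], [1.0], [2.0], [3.0]])  # inputs as a 2D column
y = np.array([1.0, 4.0, 7.0, 10.0])         # outputs on y = 3x + 1

model = LinearRegression().fit(x, y)
print(model.coef_[0], model.intercept_)   # slope close to 3, intercept close to 1
print(model.predict(np.array([[18.0]]))[0])  # same as slope*18 + intercept
```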
## Question 6: Regression
Let's make a regression model to predict the total number of people unemployed for more than 15 weeks based on the total percent of people unemployed. Then, let's make a prediction on the number of people unemployed for more than 15 weeks if the unemployment rate was 20%.
```
#Initialize the model
model = ...
#Set up our x and y variables using columns from earlier
x = ...
x = np.reshape(x, (73, 1))
y = ...
#Fit our model to the data and store y_ values for our regression line
#INSERT CODE HERE
y_ = ...
plt.scatter(x, y)
plt.plot(x, y_)
plt.xlabel("Percent Unemployed")
plt.ylabel("Total Unemployed for > 15 Weeks")
plt.show()
#Make your prediction here
prediction = model.predict([[20]])[0]
prediction
ok.grade('q06')
```
Congratulations! You have completed the regression tutorial. Try importing your own datasets and using regression to analyze your data.
## Chapter 1: Introduction to Macroeconomics
#### Suppose a quantity grows at a steady proportional rate of 3% per year.
How long will it take to double?
```
# ANSWER
TIME_TO_DOUBLE = ___
```
Quadruple?
```
# ANSWER
TIME_TO_QUADRUPLE = ___
```
Grow 1024-fold?
```
# ANSWER
TIME_TO_1024 = ___
```
#### Suppose we have a quantity x(t) that varies over time following the equation: $\frac{dx(t)}{dt} = -(0.06)x(t) + 0.36$
Without integrating the equation:
$1.$ Tell me what the long-run steady-state value of $x$--that is, the limit of $x$ as $t$ approaches infinity--is going to be.
```
steady_state_val = ___
```
$2.$ Suppose that the value of $x$ at time $t=0$, $x(0)$ equals 12. Once again, without integrating the equation, tell me how long it will take x to close half the distance between its initial value of 12 and its steady-state value.
```
half_dist_time = ___
```
$3.$ How long will it take to close 3/4 of the distance?
```
three_fourth_time = ___
```
$4.$ $7/8$ of the distance?
```
seven_eighth_time = ___
```
$5.$ $15/16$ of the distance?
```
fifteen_sixteenth = ___
```
Now you are allowed to integrate $\frac{dx(t)}{dt} = -(0.06)x(t) + 0.36$.
$1.$ Write down and solve the indefinite integral.
<font color='blue'> ANSWER:
$2.$ Write down and solve the definite integral for the initial condition $x(0) = 12$.
<font color='blue'> ANSWER:
$3.$ Write down and solve the definite integral for the initial condition $x(0) = 6$.
<font color='blue'> ANSWER:
#### Suppose we have a quantity $z = (\frac{x}{y})^\beta$
Suppose $x$ is growing at 4% per year and that $\beta=1/4$:
$1.$ How fast is $z$ growing if $y$ is growing at 0% per year?
```
zero_per_growth = ___
```
$2.$ If $y$ is growing at 2% per year?
```
two_per_growth = ___
```
$3.$ If $y$ is growing at 4% per year?
```
four_per_growth = ___
```
#### Rule of 72
1. If a quantity grows at about 3% per year, how long will it take to double?
```
time_to_double = ___
```
$2.$ If a quantity shrinks at about 4% per year, how long will it take it to halve itself?
```
time_to_half = ___
```
$3.$ If a quantity doubles five times, how large is it relative to its original value?
```
doubled_five_times_ratio = ___
```
$4.$ If a quantity halves itself three times, how large is it relative to its original value?
```
halved_three_times_ratio = ___
```
#### Show the relationship between the interest rate and the amount of time it takes to double graphically
```
def graph(interest_rate):
    x = np.linspace(1, 10, 30)
    y = 72 / x
    print('Time to double:', 72 / interest_rate, 'years')
    plt.plot(x, y)
    plt.scatter(interest_rate, 72 / interest_rate, c='r')
    plt.xlabel('interest rate (%)')
    plt.ylabel('time (years)')
interact(graph, interest_rate=widgets.IntSlider(min=1, max=10, step=1))
```
#### How close is this to the actual formula? (EXPAND)
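A sketch of the comparison: the exact doubling time at a growth rate of r% per year is $\ln(2)/\ln(1 + r/100)$, while the rule of 72 approximates it as $72/r$.

```python
import numpy as np

growth_rates = [1, 2, 4, 8, 10]  # percent per year
# Exact doubling time from compound growth: (1 + r/100)^T = 2
exact = {r: np.log(2) / np.log(1 + r / 100) for r in growth_rates}
for r in growth_rates:
    print(f"r = {r}%: exact {exact[r]:.2f} years, rule of 72 gives {72 / r:.2f}")
```

The rule is closest for growth rates around 8%; at very low rates it slightly overstates the true doubling time.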
#### Why do DeLong and Olney think that the interest rate and the level of the stock market are important macroeconomic variables?
<font color='blue'> ANSWER:
#### What are the principal flaws in using national product per worker as a measure of material welfare? Given these flaws, why do we use it anyway?
<font color='blue'> ANSWER:
#### What is the difference between the nominal interest rate and the real interest rate? Why do DeLong and Olney think that the real interest rate is more important?
<font color='blue'> ANSWER:
## Chapter 2: Measuring the Macroeconomy
#### National Income and Product Accounting
Explain whether or not, why, and how the following items are included in the calculations of national product:
$1.$ Increases in business inventories.
<font color='blue'> ANSWER:
$2.$ Fees earned by real estate agents on selling existing homes.
<font color='blue'> ANSWER:
$3.$ Social Security checks written by the government.
<font color='blue'> ANSWER:
$4.$ Building of a new dam by the Army Corps of Engineers.
<font color='blue'> ANSWER:
$5.$ Interest that your parents pay on the mortgage they have on their house.
<font color='blue'> ANSWER:
$6.$ Purchases of foreign-made trucks by American residents
<font color='blue'> ANSWER:
#### In or Out of National Product? And Why
Explain whether or not, why, and how the following items are included in the calculation of national product:
$1.$ The sale for \$25,000 of an automobile that cost \$20,000 to manufacture that had been produced here at home last year and carried over in inventory.
<font color='blue'> ANSWER:
$2.$ The sale for \$35,000 of an automobile that cost \$25,000 to manufacture newly- made at home this year.
<font color='blue'> ANSWER:
$3.$ The sale for \$45,000 of an automobile that cost \$30,000 to manufacture that was newly-made abroad this year and imported.
<font color='blue'> ANSWER:
$4.$ The sale for \$25,000 of an automobile that cost \$20,000 to manufacture that was made abroad and imported last year.
<font color='blue'> ANSWER:
#### In or Out of National Product? And Why II
Explain whether or not, why, and how the following items are included in the calculation of GDP:
$1.$ The purchase for \$500 of a dishwasher produced here at home this year.
<font color='blue'> ANSWER:
$2.$ The purchase for \$500 of a dishwasher made abroad this year.
<font color='blue'> ANSWER:
$3.$ The purchase for \$500 of a used dishwasher.
<font color='blue'> ANSWER:
$4.$ The manufacture here at home this year, for \$500, of a new dishwasher that then nobody wants to buy.
<font color='blue'> ANSWER:
#### Components of National Income and Product
Suppose that the appliance store buys a refrigerator from the manufacturer on December 15, 2018 for \$600, and that you then buy that refrigerator on January 15, 2019 for \$1000:
$1.$ What is the contribution to GDP in 2018?
```
contribution_2018 = ___
```
$2.$ How is the refrigerator accounted for in the NIPA in 2018?
<font color='blue'> ANSWER:
$3.$ What is the contribution to GDP in 2019?
```
contribution_2019 = ___
```
$4.$ How is the refrigerator accounted for in the NIPA in 2019?
<font color='blue'> ANSWER:
```
## These lines read in CSV files and create data tables from them; you don't have to worry about them! ##
unemployment = pd.read_csv("data/Unemployment.csv")
quarterly_acc = pd.read_csv("data/Quarterly_Accounts.csv")
from_2007 = quarterly_acc.loc[(quarterly_acc["Year"].isin(np.arange(2007, 2018)))]
```
### Estimating National Product
The Bureau of Economic Analysis measures national product in two different ways: as total expenditure on the economy’s output of goods and services and as the total income of everyone in the economy. Since – as you learned in earlier courses – these two things are the same, the two approaches should give the same answer. But in practice they do not.
We have provided a data table `quarterly_acc` that contains quarterly data on real GDP measured on the expenditure side (referred to in the National Income and Product Accounts as “Real Gross Domestic Product, chained dollars”) and real GDP measured on the income side (referred to as “Real Gross Domestic Income, chained dollars”). The table refers to Real Gross Domestic Product as "Real GDP" and to Real Gross Domestic Income as "Real GDI", and both are measured in billions of dollars. (Note: you will not have to use Nominal GDP.)
Another table, `from_2007`, has been created from `quarterly_acc` and includes information from 2007 to 2017.
Below is a snippet from `from_2007`:
```
from_2007.head(10)
```
$1.$ Compute the growth rate at an annual rate of each of the two series by quarter for
2007:Q1–2012:Q4.
```
gdi_rate = ___
gdp_rate = ___
```
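As a sketch of one way to fill in the blanks above (the column names `"Real GDP"` and `"Real GDI"` are assumptions taken from the table description; verify them with `from_2007.columns`), a quarterly growth rate can be compounded to an annual rate with `pct_change`:

```python
import pandas as pd

# Toy frame standing in for `from_2007`; the real column names may differ.
df = pd.DataFrame({"Real GDP": [15000.0, 15150.0, 15300.0],
                   "Real GDI": [14950.0, 15100.0, 15290.0]})

# Quarter-over-quarter growth, compounded to an annual rate, in percent.
gdp_rate = ((1 + df["Real GDP"].pct_change()) ** 4 - 1) * 100
gdi_rate = ((1 + df["Real GDI"].pct_change()) ** 4 - 1) * 100
print(gdp_rate.round(2).tolist())
```

The first entry is `NaN` because there is no prior quarter to compare against.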
$2.$ Describe any two things you see when you compare the two series that you find
interesting, and explain why you find them interesting.
<font color='blue'> ANSWER:
#### Calculating Real Magnitudes:
$1.$ When you calculate real national product, do you do so by dividing nominal national product by the price level or by subtracting the price level from nominal national product?
<font color='blue'> ANSWER:
$2.$ When you calculate the real interest rate, do you do so by dividing the nominal interest rate by the price level or by subtracting the inflation rate from the nominal interest rate?
<font color='blue'> ANSWER:
$3.$ Are your answers to (a) and (b) the same? Why or why not?
<font color='blue'> ANSWER:
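The contrast the questions point at can be checked with a toy calculation (all numbers here are made up for illustration): real quantities come from dividing by the price level, while the real interest rate is approximated by subtracting inflation from the nominal rate.

```python
nominal_gdp = 21_000      # billions, hypothetical
price_level = 1.05        # deflator relative to the base year
real_gdp = nominal_gdp / price_level   # divide, don't subtract

nominal_rate = 0.06
inflation = 0.02
real_rate = nominal_rate - inflation   # subtract, don't divide

print(round(real_gdp, 2))  # 20000.0
```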
### Unemployment Rate
Use the `unemployment` table provided to answer the following questions. ***All numbers (other than percents) are in the thousands.***
Here are the first five entries of the table.
```
unemployment.head()
```
#### What, roughly, was the highest level the U.S. unemployment rate (measured as Percent Unemployed of Labor Force in the table) reached in:
$1.$ The 20th century?
```
unemployment_20th = ___
```
$2.$ The past fifty years?
```
unemployment_past_50 = ___
```
$3.$ The twenty years before 2006?
```
unemployment_before_2006 = ___
```
$4.$ Given your answers to (1) through (3), do you think there is a connection between those answers and the fact that Federal Reserve Chair Alan Greenspan received a five-minute standing ovation at the end of the first of many events marking his retirement in 2005?
<font color='blue'> ANSWER:
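Assuming the `unemployment` table has a `"Year"` column and a `"Percent Unemployed of Labor Force"` column (names suggested by the question; confirm with `unemployment.columns`), a period maximum can be pulled out with a boolean mask. The toy frame below stands in for the real table:

```python
import pandas as pd

# Stand-in for the real `unemployment` table.
toy = pd.DataFrame({"Year": [1933, 1982, 2005],
                    "Percent Unemployed of Labor Force": [24.9, 9.7, 5.1]})

mask_20th = (toy["Year"] >= 1900) & (toy["Year"] <= 1999)
highest_20th = toy.loc[mask_20th, "Percent Unemployed of Labor Force"].max()
print(highest_20th)  # 24.9
```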
#### The State of the Labor Market
$1.$ About how many people lose or quit their jobs in an average year?
```
average_quitters = ___
```
$2.$ About how many people get jobs in an average year?
```
average_getters = ___
```
$3.$ About how many people are unemployed in an average year?
```
average_unemployed = ___
```
$4.$ About how many people are at work in an average year?
```
average_workers = ___
```
$5.$ About how many people are unemployed now?
```
unemployed_now = ___
```
#### National Income Accounting:
$1.$ What was the level of real GDP in 2005 dollars in 1970?
```
real_gdp_2005 = ___
```
$2.$ What was the rate of inflation in the United States in 2000?
```
inflation_rate_2000 = ___
```
$3.$ Explain whether or not, how, and why the following items are included in the calculation of GDP: (i) rent you pay on an apartment, (ii) purchase of a used textbook, (iii) purchase of a new tank by the Department of Defense, (iv) watching an advertisement on youtube.
<font color='blue'> ANSWER:
Congratulations, you have finished your first assignment for Econ 101B! Run the cell below to submit all of your work. Make sure to check on OK to make sure that it has uploaded.
```
_ = ok.submit()
```
Materials from this notebook were partly taken from [Data 8](http://data8.org/), [CS 61A](http://cs61a.org/), and [DS Modules](http://data.berkeley.edu/education/modules) lessons.
# Dataframe modification
```
import os
import pandas as pd
import numpy as np
filename = '../Data/dataset_clean.csv'
df = pd.read_csv(filename)
df_2=df[['Q1','Q4','Q5','Q10','Q16_Part_1','Q16_Part_2','Q16_Part_3','Q16_Part_4','Q16_Part_5','Q16_Part_6','Q16_Part_7','Q16_Part_8','Q16_Part_9','Q16_Part_10','Q18_Part_1','Q18_Part_2','Q18_Part_3','Q18_Part_4','Q18_Part_5','Q18_Part_6','Q18_Part_7','Q18_Part_8','Q18_Part_9','Q18_Part_10','Q23','Q24_Part_1','Q24_Part_2','Q24_Part_3','Q24_Part_4','Q24_Part_5','Q24_Part_6','Q24_Part_7','Q24_Part_8','Q24_Part_9','Q24_Part_10','Q28_Part_1','Q28_Part_2','Q28_Part_3','Q28_Part_4','Q28_Part_5','Q28_Part_6','Q28_Part_7','Q28_Part_8','Q28_Part_9','Q28_Part_10']]
df_2
cleanup_nums = {"Q1": {"18-21": 0, "22-24": 0,"25-29": 1,"30-34": 1,
"35-39": 1,"40-44": 2,"45-49": 2,"50-54": 2,
"55-59": 3,"60-69": 3,"70+": 3},
# "Q2": {"Prefer not to say": 0,
# "Prefer to self-describe": 0,
# "Male": 1, "Female": 2},
"Q4": {"I prefer not to answer": 0,
"No formal education past high school": 0,
"Some college/university study without earning a bachelors degree": 1,
"Bachelors degree": 2,
"Masters degree": 3,
"Doctoral degree": 4,
"Professional degree": 5},
"Q5": {"Not employed": 0,
"Other": 0,
"Student": 0,
"Data Scientist": 1,
"Software Engineer": 2,
"Data Analyst": 3,
"Data Engineer": 4,
"Statistician": 5,
"DBA/Database Engineer": 6,
"Research Scientist": 7,
"Product/Project Manager": 8,
"Business Analyst": 9},
"Q10": {"0-999": 0,
"1,000-1,999": 0,
"2,000-2,999": 0,
"3,000-3,999": 0,
"4,000-4,999": 0,
"5,000-7,499": 0,
"7,500-9,999": 0,
"10,000-14,999": 0,
"15,000-19,999": 0,
"20,000-24,999": 0,
"25,000-29,999": 0,
"30,000-39,999": 0,
"40,000-49,999": 0,
"50,000-59,999": 1,
"60,000-69,999": 2,
"70,000-79,999": 3,
"80,000-89,999": 4,
"90,000-99,999": 5,
"100,000-124,999": 6,
"125,000-149,999": 7,
"150,000-199,999": 8,
"200,000-249,999": 9,
"250,000-299,999": 9,
"300,000-500,000": 9,
"> $500,000": 9},
"Q23": {"< 1 years": 0,
"1-2 years": 1,
"2-3 years": 1,
"3-4 years": 2,
"4-5 years": 2,
"5-10 years": 3,
"10-15 years": 4,
"20+ years": 4
}
}
df_3=df_2.replace(cleanup_nums)
# from sklearn.preprocessing import LabelEncoder
# le=LabelEncoder()
# Iterating over all the common columns in train and test
# for col in df_3.columns.values:
# if df_3[col].dtypes==object:
# if df_3[col] !=0:
# le.fit(df_3[col].values)
# df_3[col]=le.transform(df_3[col])
cols=['Q16_Part_1','Q16_Part_2','Q16_Part_3','Q16_Part_4','Q16_Part_5','Q16_Part_6','Q16_Part_7','Q16_Part_8','Q16_Part_9','Q16_Part_10','Q18_Part_1','Q18_Part_2','Q18_Part_3','Q18_Part_4','Q18_Part_5','Q18_Part_6','Q18_Part_7','Q18_Part_8','Q18_Part_9','Q18_Part_10','Q23','Q24_Part_1','Q24_Part_2','Q24_Part_3','Q24_Part_4','Q24_Part_5','Q24_Part_6','Q24_Part_7','Q24_Part_8','Q24_Part_9','Q24_Part_10','Q28_Part_1','Q28_Part_2','Q28_Part_3','Q28_Part_4','Q28_Part_5','Q28_Part_6','Q28_Part_7','Q28_Part_8','Q28_Part_9','Q28_Part_10']
for col in cols:
df_3[col]=pd.to_numeric(df_3[col], errors='coerce').fillna(1).astype(int)
cols_Q16=['Q16_Part_1','Q16_Part_2','Q16_Part_3','Q16_Part_4','Q16_Part_5','Q16_Part_6','Q16_Part_7','Q16_Part_8','Q16_Part_9','Q16_Part_10']
df_3['Q16_count'] = np.count_nonzero(df_3[cols_Q16],axis=1)
cols_Q18=['Q18_Part_1','Q18_Part_2','Q18_Part_3','Q18_Part_4','Q18_Part_5','Q18_Part_6','Q18_Part_7','Q18_Part_8','Q18_Part_9','Q18_Part_10']
df_3['Q18_count'] = np.count_nonzero(df_3[cols_Q18],axis=1)
cols_Q24=['Q24_Part_1','Q24_Part_2','Q24_Part_3','Q24_Part_4','Q24_Part_5','Q24_Part_6','Q24_Part_7','Q24_Part_8','Q24_Part_9','Q24_Part_10']
df_3['Q24_count'] = np.count_nonzero(df_3[cols_Q24],axis=1)
cols_Q28=['Q28_Part_1','Q28_Part_2','Q28_Part_3','Q28_Part_4','Q28_Part_5','Q28_Part_6','Q28_Part_7','Q28_Part_8','Q28_Part_9','Q28_Part_10']
df_3['Q28_count'] = np.count_nonzero(df_3[cols_Q28],axis=1)
df_4=df_3[['Q1','Q4','Q5','Q10','Q23','Q16_count','Q18_count','Q24_count','Q28_count']]
# df_4.to_csv('data_x_cleaned.csv', index=False)
```
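The nested-dict form of `DataFrame.replace` used above maps, per column, each old value to a new one; values not listed in the dict are left untouched. A minimal demonstration:

```python
import pandas as pd

demo = pd.DataFrame({"Q1": ["18-21", "70+"],
                     "Q4": ["Masters degree", "unknown"]})
mapping = {"Q1": {"18-21": 0, "70+": 3},
           "Q4": {"Masters degree": 3}}
out = demo.replace(mapping)
print(out["Q1"].tolist())  # [0, 3]
print(out["Q4"].tolist())  # [3, 'unknown']  <- 'unknown' is not in the map, so it survives
```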
# 1. Job Title Prediction Model - SVM
### input user data -> job_input = [x,x,x,x,x,x,x,x]
### output -> ans1 = [job1,job2,job3]
```
from sklearn.model_selection import train_test_split
train_df, test_df = train_test_split(df_4,test_size=0.15, random_state=9832)
X1_train = train_df.drop(['Q5'], axis=1)
Y1_train = train_df['Q5']
X1_test = test_df.drop(['Q5'], axis=1)
Y1_test = test_df['Q5']
from sklearn.svm import SVC, LinearSVC
svc = SVC(probability=True) # instantiate
svc.fit(X1_train, Y1_train) # fit
acc_svc = svc.score(X1_test, Y1_test) # predict + evaluate
print('Support Vector Machines labeling accuracy:', str(round(acc_svc*100,2)),'%')
# from sklearn.externals import joblib
from joblib import dump, load
dump(svc, 'svc_jobs.joblib')
# lr = joblib.load('model.pkl')
import joblib
joblib.__version__
job_input = [[1,2,2,3,1,2,2,3]]
k1_unchanged=svc.predict_proba(job_input)[0]
k1=svc.predict_proba(job_input)[0]
ynew_result1 = svc.predict(job_input)
k1.sort()
print(k1_unchanged)
print(k1)
print(ynew_result1)
Second=k1[-2]
Third=k1[-3]
Fourth=k1[-4]
if ynew_result1==0:
Highest = np.where(k1_unchanged ==k1[-2])[0]
Sec_high = (np.where(k1_unchanged ==k1[-3])[0])
Third_high = (np.where(k1_unchanged ==k1[-4])[0])
ans1 = [Highest[0], Sec_high[0], Third_high[0]]
print(ans1)
else:
Highest = np.where(k1_unchanged ==k1[-1])[0]
Sec_high = (np.where(k1_unchanged ==k1[-2])[0])
Third_high = (np.where(k1_unchanged ==k1[-3])[0])
Four_high = (np.where(k1_unchanged ==k1[-4])[0])
ans1 = [Highest[0], Sec_high[0], Third_high[0], Four_high[0]]
if 0 in ans1: ans1.remove(0)
print(ans1[0:3])
salary_input = ans1
# salary_input = [[1, 4, 10, 4, 1, 4, 6, 6]]
```
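The top-3 ranking above (sort a copy, then `np.where` back into the unsorted probabilities) can be expressed more directly with `np.argsort`, which also avoids the fragility of matching probabilities by value when two classes tie. A sketch, independent of the fitted model:

```python
import numpy as np

probs = np.array([0.05, 0.40, 0.10, 0.30, 0.15])  # e.g. svc.predict_proba(x)[0]
top3 = np.argsort(probs)[::-1][:3]                # class indices, best first
print(top3.tolist())  # [1, 3, 4]
```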
# 2. Salary Range Prediction Model - SVM
### model input -> salary_model_input = [x,x,job input from ans1,x,x,x,x,x]
### output -> ans2 = [salary1,salary2,salary3]
```
from sklearn.model_selection import train_test_split
train_df2, test_df2 = train_test_split(df_4,test_size=0.15, random_state=100)
X2_train = train_df2.drop(['Q10'], axis=1)
Y2_train = train_df2['Q10']
X2_test = test_df2.drop(['Q10'], axis=1)
Y2_test = test_df2['Q10']
svc = SVC(probability=True) # instantiate
svc.fit(X2_train, Y2_train) # fit
acc_svc = svc.score(X2_test, Y2_test) # predict + evaluate
print('Support Vector Machines labeling accuracy:', str(round(acc_svc*100,2)),'%')
from joblib import dump, load
dump(svc, 'svc_salary.joblib')
ans2 = []
for x in salary_input:
salary_model_input = [[0,0,x,0,0,0,0,0]]
k_unchanged=svc.predict_proba(salary_model_input)[0]
k=svc.predict_proba(salary_model_input)[0]
ynew_result = svc.predict(salary_model_input)
#print(k_unchanged)
k.sort()
#print(k)
if ynew_result==0:
Highest = np.where(k_unchanged ==k[-2])[0]
print(Highest)
ans2.append(Highest[0])
else:
Highest = np.where(k_unchanged ==k[-1])[0]
print(Highest)
ans2.append(Highest[0])
# ans1 = job recommended
# ans2 = salary range predicted based on ans1
print(ans1,ans2)
```
SalaryRange encoding:
```
"0-49,999": 0,
"50,000-59,999": 1,
"60,000-69,999": 2,
"70,000-79,999": 3,
"80,000-89,999": 4,
"90,000-99,999": 5,
"100,000-124,999": 6,
"125,000-149,999": 7,
"150,000-199,999": 8,
"200,000+": 9
```
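To turn the integer predictions in `ans2` back into readable salary ranges, an inverse lookup table can be kept alongside the encoding (this decode dict is illustrative; bucket 9 collapses several original ranges, so only a coarse label can be recovered):

```python
salary_labels = {0: "0-49,999", 1: "50,000-59,999", 2: "60,000-69,999",
                 3: "70,000-79,999", 4: "80,000-89,999", 5: "90,000-99,999",
                 6: "100,000-124,999", 7: "125,000-149,999",
                 8: "150,000-199,999", 9: "200,000+"}
ans2_demo = [6, 1, 9]  # hypothetical model output
print([salary_labels[i] for i in ans2_demo])
```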
# MLP on Simulated ORFs
Start with ORF_MLP_118 which had the simulator bug fix.
Evaluate MLP with wide,deep network.
Train on copious simulated data.
Use uniform but longer RNA lengths: 1500
Run on Alien.
79% accuracy.
```
import time
def show_time():
t = time.time()
print(time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t)))
show_time()
PC_TRAINS=50000
NC_TRAINS=50000
PC_TESTS=5000
NC_TESTS=5000
RNA_LEN=1500
MAX_K = 3
INPUT_SHAPE=(None,84) # 4^3 + 4^2 + 4^1
NEURONS=128
DROP_RATE=0.30
EPOCHS=200
SPLITS=3
FOLDS=3 # make this 5 for serious testing
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Flatten,TimeDistributed
from keras.losses import BinaryCrossentropy
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py')
with open('RNA_gen.py', 'w') as f:
f.write(r.text)
from RNA_gen import Collection_Generator,Transcript_Oracle
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/KmerTools.py')
with open('KmerTools.py', 'w') as f:
f.write(r.text)
from KmerTools import KmerTools
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/DataPrep.py')
with open('DataPrep.py', 'w') as f:
f.write(r.text)
from DataPrep import DataPrep
else:
print("CoLab not working. On my PC, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_describe import ORF_counter
from SimTools.RNA_gen import Collection_Generator,Transcript_Oracle
from SimTools.KmerTools import KmerTools
from SimTools.DataPrep import DataPrep
BESTMODELPATH=DATAPATH+"BestModel"
LASTMODELPATH=DATAPATH+"LastModel"
```
## Data Load
```
show_time()
def make_generators(seq_len):
pcgen = Collection_Generator()
pcgen.get_len_oracle().set_mean(seq_len)
pcgen.set_seq_oracle(Transcript_Oracle())
ncgen = Collection_Generator()
ncgen.get_len_oracle().set_mean(seq_len)
return pcgen,ncgen
pc_sim,nc_sim = make_generators(RNA_LEN)
pc_all = pc_sim.get_sequences(PC_TRAINS+PC_TESTS)
nc_all = nc_sim.get_sequences(NC_TRAINS+NC_TESTS)
print("Generated",len(pc_all),"PC seqs")
print("Generated",len(nc_all),"NC seqs")
pc_sim=None
nc_sim=None
print("Simulated sequence characteristics:")
oc = ORF_counter()
print("PC seqs")
oc.describe_sequences(pc_all)
print("NC seqs")
oc.describe_sequences(nc_all)
oc=None
show_time()
```
## Data Prep
```
dp = DataPrep()
Xseq,y=dp.combine_pos_and_neg(pc_all,nc_all)
nc_all=None
pc_all=None
print("The first few shuffled labels:")
print(y[:30])
show_time()
Xfrq=KmerTools.seqs_to_kmer_freqs(Xseq,MAX_K)
Xseq = None
y=np.asarray(y)
show_time()
# Assume X and y were shuffled.
train_size=PC_TRAINS+NC_TRAINS
X_train=Xfrq[:train_size]
X_test=Xfrq[train_size:]
y_train=y[:train_size]
y_test=y[train_size:]
print("Training set size=",len(X_train),"=",len(y_train))
print("Reserved test set size=",len(X_test),"=",len(y_test))
Xfrq=None
y=None
show_time()
```
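`KmerTools.seqs_to_kmer_freqs` lives in the project's SimTools package, so its exact behavior isn't shown here. The general idea — count every k-mer for k = 1..MAX_K and normalize per k, giving the 4 + 16 + 64 = 84 features that `INPUT_SHAPE` expects — can be sketched with a plain dict (an assumption about the feature layout, not the package's actual implementation):

```python
from itertools import product

def kmer_freqs(seq, max_k=3):
    """Frequency vector over all A/C/G/T k-mers for k=1..max_k (length 4+16+64)."""
    feats = []
    for k in range(1, max_k + 1):
        counts = {"".join(p): 0 for p in product("ACGT", repeat=k)}
        n = len(seq) - k + 1            # number of k-mer windows
        for i in range(n):
            kmer = seq[i:i + k]
            if kmer in counts:          # skip windows with non-ACGT characters
                counts[kmer] += 1
        feats.extend(counts[key] / n for key in sorted(counts))
    return feats

vec = kmer_freqs("ACGTACGT")
print(len(vec))  # 84
```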
## Neural network
```
def make_DNN():
dt=np.float32
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=dt))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=dt))
dnn.compile(optimizer='adam',
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
return dnn
model = make_DNN()
print(model.summary())
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=BESTMODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
# When shuffle=True, the valid indices are a random subset.
# No need to shuffle here assuming data was shuffled above.
splitter = KFold(n_splits=SPLITS,shuffle=False)
model = None
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
return model # parameters at end of training
show_time()
last_model = do_cross_validation(X_train,y_train)
best_model = load_model(BESTMODELPATH)
def show_test_AUC(model,X,y):
ns_probs = [0 for _ in range(len(y))]
bm_probs = model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
def show_test_accuracy(model,X,y):
scores = model.evaluate(X, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
print("Accuracy on training data.")
show_time()
show_test_AUC(best_model,X_train,y_train)
show_test_accuracy(best_model,X_train,y_train)
show_time()
print("Accuracy on test data.")
show_time()
show_test_AUC(best_model,X_test,y_test)
show_test_accuracy(best_model,X_test,y_test)
show_time()
```
### Notebook to make the fastq files line up with the mapped bam files and the single fast5 files
```
import os
from joblib import Parallel, delayed
###define input directories here
FAST5singleIN_DIR = '../../analyses/single_fast5s/infected_leaves/infected_leaves_2_fast5_single_fast5'
#####One OUT_DIR per treatment. This should be one for germinated spores and one for infected leaves
OUT_DIR = '../../analyses/single_fast5s/infected_leaves/mapped_fast5s'
#####One OUT_DIR per treatment. This should be one for germinated spores and one for infected leaves
BAM_DIR = '../../analyses/mapping/infected_leaves/infected_leaves_2'
fastq_all_fn = '../../data/genomic_data/infected_leaves/all_fastq/infected_leaves_2.all.fastq'
minimap_index = '../../data/genomic_resources/chr_A_B_unassigned.fasta'
n_threads = 20
#counts single fast5s and fastqs
single_fast5_count = 0
fastqs = []
dirs = []
for directory in (os.path.join(FAST5singleIN_DIR, x) for x in os.listdir(FAST5singleIN_DIR) if os.path.isdir(os.path.join(FAST5singleIN_DIR, x))):
    dirs.append(directory)
    fast5s = [os.path.join(directory, x) for x in os.listdir(directory) if x.endswith('.fast5')]
    single_fast5_count += len(fast5s)
    for x in [os.path.join(directory, x) for x in os.listdir(directory) if x.endswith('.fastq')]:
        fastqs.append(x)
print('This is the number of fast5s: %s' % single_fast5_count)
```
# Section 1 checking the input
```
fastq_entries = !cat {fastq_all_fn} | grep 'sampleid' | wc -l
###first check if we have the right amount of fastq entries in our file
int(fastq_entries[0]) == single_fast5_count
###You want this to be True
###Now check on if ids match up
fastqids_fn = fastq_all_fn.replace('.fastq', '.fastqids.txt')
!cat {fastq_all_fn} | grep 'sampleid'| cut -d ' ' -f 1 | sed 's/@//g' > {fastqids_fn}
###Read in ids as set
fastq_ids = []
with open(fastqids_fn) as fh:
for line in fh:
fastq_ids.append(line.strip('\n'))
fastq_ids = set(fastq_ids)
match_count = 0
for directory in os.listdir(FAST5singleIN_DIR):
directory = os.path.join(FAST5singleIN_DIR, directory)
if os.path.isdir(directory):
fast5s = [fn for fn in os.listdir(directory) if fn.endswith('.fast5')]
for fast5_file in fast5s:
if fast5_file.replace('.fast5', '') in fastq_ids:
match_count = match_count + 1
####This needs to be true
match_count == int(fastq_entries[0]) == single_fast5_count
####This needs to be true
```
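When the final check above comes back `False`, set arithmetic pinpoints which ids are on one side only. A self-contained sketch (the toy sets below stand in for the real fastq/fast5 id collections built in this notebook):

```python
# Toy id sets standing in for the real fastq and fast5 id collections.
fastq_ids_demo = {"read1", "read2", "read3"}
fast5_ids_demo = {"read2", "read3", "read4"}

missing_fast5 = fastq_ids_demo - fast5_ids_demo   # fastq entries with no fast5 file
missing_fastq = fast5_ids_demo - fastq_ids_demo   # fast5 files with no fastq entry
print(sorted(missing_fast5), sorted(missing_fastq))  # ['read1'] ['read4']
```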
### If above is false go to section 3 and execute this before moving on
# Section 2 mapping the reads and pulling out the mapped fast5s
```
bam_fn = os.path.join(BAM_DIR, os.path.basename(fastq_all_fn).replace('.fastq', '.sorted.bam'))
!minimap2 -t 15 -ax map-ont {minimap_index} {fastq_all_fn} | samtools sort -@ 15 -o {bam_fn} -
#this is only here because the mapping was done on the command line and not in here
#if mapping is done in here don't execute this cell
bam_fn = '../../analyses/mapping/infected_leaves/infected_leaves_2/infected_leaves_2.sorted.bam'
##generated the mapped read ID list
mappedids_fn = bam_fn.replace('.bam', '.mappedids.txt')
!samtools view -F 4 {bam_fn} | cut -f 1 | sort | uniq > {mappedids_fn}
#get the mapped ids as a set
mapped_reads = []
with open(mappedids_fn) as fh:
for line in fh:
mapped_reads.append(line.rstrip())
mapped_reads = set(mapped_reads)
len(mapped_reads)
#move fast5s you want from tmp to out dir
match_count = 0
for directory in os.listdir(FAST5singleIN_DIR):
directory = os.path.join(FAST5singleIN_DIR, directory)
#check if path is directory
if os.path.isdir(directory):
#get all fastq files
fast5s = [fn for fn in os.listdir(directory) if fn.endswith('.fast5')]
for fast5_file in fast5s:
if fast5_file.replace('.fast5', '') in mapped_reads:
match_count = match_count + 1
#move the files by renaming absolute path
old_fn = os.path.join(directory, fast5_file)
new_fn = os.path.join(OUT_DIR, fast5_file)
os.replace(old_fn, new_fn)
##This should be true
len(mapped_reads) == match_count
```
### Below are useful code snippets we leave for now but won't execute
# Section 3 Regenerating fastqs if they don't add up
```
#Run only if the tests above do fail
%run -i infected_leaves_2_fast5_to_fastq.py
#combine all fastqs
all_fastq_fn = os.path.join(FAST5singleIN_DIR, '%s.fastq' % os.path.basename(FAST5singleIN_DIR))
with open(all_fastq_fn, mode='w') as all_fastq_fh:
for dir_ in dirs:
fn = os.path.join(os.path.join(dir_), os.path.basename(dir_) + '.fastq')
#print(fn)
with open(fn, mode = 'r') as fh:
for line in fh:
line = line.rstrip()
print(line, file=all_fastq_fh)
fastq_entries = !cat {all_fastq_fn} | grep 'sampleid' | wc -l
int(fastq_entries[0]) == single_fast5_count
all_fastq_fn = os.path.join(FAST5singleIN_DIR, '%s.fastq' % os.path.basename(FAST5singleIN_DIR))
fastqids_fn = all_fastq_fn.replace('.fastq', '.fastqids.txt')
!cat {all_fastq_fn} | grep 'sampleid'| cut -d ' ' -f 1 | sed 's/@//g' > {fastqids_fn}
fastq_reads = []
with open(fastqids_fn) as fh:
for line in fh:
fastq_reads.append(line.strip('\n'))
fastq_reads = set(fastq_reads)
len(fastq_reads) == single_fast5_count
count = 0
TMPOUT_DIR = FAST5singleIN_DIR
for directory in os.listdir(TMPOUT_DIR):
directory = os.path.join(TMPOUT_DIR, directory)
#check if path is directory
if os.path.isdir(directory):
#print(directory)
fast5s = [fn for fn in os.listdir(directory) if fn.endswith('.fast5')]
#missing = set([x.replace('.fast5', '') for x in fast5s]) - fastq_reads
#print(len(missing))
for fast5_file in fast5s:
if fast5_file.replace('.fast5', '') in fastq_reads:
count = count + 1
#move the files by renaming absolute path
#old_fn = os.path.join(directory, fast5_file)
#new_fn = os.path.join(OUT_DIR, fast5_file)
#os.replace(old_fn, new_fn)
#count = count + len(fast5s)
#print(count)
count == single_fast5_count
```
# Navigation
---
In this notebook, you will learn how to use the Unity ML-Agents environment for the first project of the [Deep Reinforcement Learning Nanodegree](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893).
### 1. Start the Environment
We begin by importing some necessary packages. If the code cell below returns an error, please revisit the project instructions to double-check that you have installed [Unity ML-Agents](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Installation.md) and [NumPy](http://www.numpy.org/).
```
%load_ext autoreload
%autoreload 2
import os
import sys
repo_path = os.path.dirname(os.path.dirname(os.path.abspath("__file__")))
sys.path.append(repo_path)
from unityagents import UnityEnvironment
import numpy as np
```
Next, we will start the environment! **_Before running the code cell below_**, change the `file_name` parameter to match the location of the Unity environment that you downloaded.
- **Mac**: `"path/to/Banana.app"`
- **Windows** (x86): `"path/to/Banana_Windows_x86/Banana.exe"`
- **Windows** (x86_64): `"path/to/Banana_Windows_x86_64/Banana.exe"`
- **Linux** (x86): `"path/to/Banana_Linux/Banana.x86"`
- **Linux** (x86_64): `"path/to/Banana_Linux/Banana.x86_64"`
- **Linux** (x86, headless): `"path/to/Banana_Linux_NoVis/Banana.x86"`
- **Linux** (x86_64, headless): `"path/to/Banana_Linux_NoVis/Banana.x86_64"`
For instance, if you are using a Mac, then you downloaded `Banana.app`. If this file is in the same folder as the notebook, then the line below should appear as follows:
```
env = UnityEnvironment(file_name="Banana.app")
```
```
env = UnityEnvironment(file_name="Banana_Windows_x86_64/Banana.exe")
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
The simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:
- `0` - walk forward
- `1` - walk backward
- `2` - turn left
- `3` - turn right
The state space has `37` dimensions and contains the agent's velocity, along with ray-based perception of objects around agent's forward direction. A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana.
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
action = np.random.randint(action_size)
env_info = env.step(action)[brain_name]
next_state = env_info.vector_observations[0]
reward = env_info.rewards[0]
done = env_info.local_done[0]
```
### 3. Take Random Actions in the Environment
In the next code cell, you will learn how to use the Python API to control the agent and receive feedback from the environment.
Once this cell is executed, you will watch the agent's performance, if it selects an action (uniformly) at random with each time step. A window should pop up that allows you to observe the agent, as it moves through the environment.
Of course, as part of the project, you'll have to change the code so that the agent is able to use its experience to gradually choose better actions when interacting with the environment!
```
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
acc_steps = 0
while True:
action = np.random.randint(action_size) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
acc_steps += 1
if done: # exit loop if episode finished
break
print("Score: {} in {} steps".format(score, acc_steps))
```
When finished, you can close the environment.
```
env.close()
```
### 4. It's Your Turn!
Now it's your turn to train your own agent to solve the environment! When training the environment, set `train_mode=True`, so that the line for resetting the environment looks like the following:
```python
env_info = env.reset(train_mode=True)[brain_name]
```
```
from collections import deque
import pandas as pd
def train(env, brain_name, agent, n_episodes=2000, max_t=1000,
eps_start=1.0, eps_end=0.01, eps_decay=0.995,
model_save_path='checkpoint.pth', score_solved=13., score_win=100):
"""
Train DQ-Learning agent on a given environment, based on epsilon-greedy policy and GLIE evolution of epsilon parameter
When the game is considered solved, save the DQ-Net underlaying the agent in a given path
Params
======
env: Environment to solve an episodic game. Should behave like:
state = env.reset()
next_state, reward, done, _ = env.step(action)
agent: DQ-Learning Agent, should estimate and optimal policy estimating Q function using a DQN
n_episodes (int): Number of episodes to simulate
max_t (int): Max number of time steps (transitions) on each episode
eps_start (float): Epsilon parameter starting value (at first episode)
eps_end (float): Epsilon min value
eps_decay (float): Epsilon decay rate
model_save_path (str): Path to persist model
score_solved (float): Score to consider the game solved
Returns
======
scores (list): Average reward over 100 episodes
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
for i_episode in range(1, n_episodes + 1):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
for t in range(max_t):
action = agent.act(state, eps)
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = update_epsilon(eps_end, eps_decay, eps) # decrease epsilon
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)), end="")
if i_episode % score_win == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window) >= score_solved:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(
i_episode - score_win, np.mean(scores_window)))
agent.save_network(model_save_path)
break
env.close()
return pd.Series(index=range(1, len(scores) +1), data=scores, dtype=np.float32, name='score')
def update_epsilon(eps_end, eps_decay, eps_curr):
"""
"""
return max(eps_end, eps_decay * eps_curr)
from src.dqn_agent import AgentDQ
agent_dq = AgentDQ(state_size=37, action_size=4, gamma=0.99, hidden_layers=[64, 32], drop_p=0.2,
batch_size=64, learning_rate=5e-4, soft_upd_param=1e-3, update_every=4, buffer_size=int(1e5), seed=123)
scores_dq = train(env, brain_name=brain_name, agent=agent_dq, n_episodes=2000, max_t=1000,
eps_start=1.0, eps_end=0.01, eps_decay=0.995, model_save_path='models/dq_checkpoint_v02.pth')
scores_df = scores_dq.to_frame('score')
scores_df['score_mave100'] = scores_df['score'].rolling(100).mean()
scores_df['experiment'] = 'dqn:v02'
scores_df.index.name = 'idx_episode'
checkpoint_metadata = pd.Series(index=['N_episodes', 'gamma', 'hidden_layers', 'drop_p',
'batch_size', 'learning_rate', 'soft_upd_param', 'update_every', 'buffer_size','solved',
'checkpoint'],
data = [len(scores_dq), 0.99, [64, 32], 0.2, 64, 5e-4, 1e-3, 4, int(1e5), True, 'dq_checkpoint_v02.pth'], name='experiment:dqn:v02')
checkpoint_metadata
import datetime as dt
experiment_dt = dt.datetime.strftime(dt.datetime.now(), "%Y%m%d%H%M%S")
checkpoint_metadata.to_json(f'models/experiments/hparams_{experiment_dt}.json')
scores_df.to_csv(f'models/experiments/scores_{experiment_dt}.csv')
```
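The GLIE epsilon schedule driving `agent.act(state, eps)` can be sketched in isolation. With the default values above (`eps_start=1.0`, `eps_end=0.01`, `eps_decay=0.995`), epsilon decays multiplicatively each episode and floors at `eps_end` after roughly 900 episodes:

```python
eps_start, eps_end, eps_decay = 1.0, 0.01, 0.995  # defaults from train()

def update_epsilon(eps_end, eps_decay, eps_curr):
    """Decay epsilon multiplicatively, never dropping below eps_end."""
    return max(eps_end, eps_decay * eps_curr)

eps, schedule = eps_start, []
for episode in range(2000):
    schedule.append(eps)
    eps = update_epsilon(eps_end, eps_decay, eps)

print(schedule[0])             # 1.0
print(round(schedule[100], 3)) # 0.995 ** 100, still well above the floor
print(min(schedule))           # floors at 0.01
```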
# Bayesian Optimization for Classification and Regression
> From now on, stop using GridSearch and RandomSearch
- toc: true
- badges: true
- comments: true
- categories: [Bayesian]
```
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import RandomForestRegressor
from bayes_opt import BayesianOptimization
from sklearn.datasets import make_classification, make_regression
from sklearn.model_selection import cross_val_score
import warnings
warnings.simplefilter('ignore')
```
# Bayesian Optimization for a Classification Problem
```
X, y = make_classification(n_samples=10000, n_features=10, n_classes=2)
```
First, we fit the data using the default hyperparameters:
```
rfc = RandomForestClassifier()
np.mean(cross_val_score(rfc, X, y, cv=5, scoring='roc_auc'))
```
The default hyperparameters score around `0.98` ROC AUC; next we use Bayesian Optimization to fine-tune them.
## Define the black-box function
```
def rfc_cv(n_estimators, min_samples_split, max_features, max_depth):
val = np.mean(cross_val_score(RandomForestClassifier(n_estimators=int(n_estimators),
min_samples_split=int(min_samples_split),
max_features=min(max_features, 0.999),
max_depth=int(max_depth), random_state=42),
X, y, scoring='roc_auc', cv=5))
return val
# define the Bayesian Optimization search
rfc_bo = BayesianOptimization(
rfc_cv,
{'n_estimators': (10, 250),
'min_samples_split': (2, 25),
'max_features': (0.1, 0.999),
'max_depth': (5, 30)})
# start the optimization
rfc_bo.maximize()
# check the best hyperparameter
rfc_bo.max
rfc_optimized = RandomForestClassifier(n_estimators=18, max_depth=6, max_features=0.78, min_samples_split=22)
np.mean(cross_val_score(rfc_optimized, X, y, cv=5, scoring='roc_auc'))
```
* Original `roc_auc`: 0.989776
* Optimized `roc_auc`: 0.99006
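The optimizer searches over continuous values, so the best point in `rfc_bo.max['params']` has to be cast back to the types `RandomForestClassifier` expects before refitting. A minimal sketch of that conversion — the `best_params` values here are hypothetical, shaped like the optimizer's output:

```python
# Hypothetical dict shaped like rfc_bo.max['params'] from bayes_opt
best_params = {'n_estimators': 18.43, 'max_depth': 6.21,
               'max_features': 0.78, 'min_samples_split': 22.9}

def to_rf_kwargs(params):
    """Cast continuous search-space values to the types RandomForest expects."""
    return {'n_estimators': int(params['n_estimators']),
            'max_depth': int(params['max_depth']),
            'max_features': min(params['max_features'], 0.999),
            'min_samples_split': int(params['min_samples_split'])}

print(to_rf_kwargs(best_params))
# {'n_estimators': 18, 'max_depth': 6, 'max_features': 0.78, 'min_samples_split': 22}
```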
# Bayesian Optimization for a Regression Problem
```
X, y = make_regression(n_samples=10000, n_features=10)
rfe = RandomForestRegressor()
np.mean(cross_val_score(rfe, X, y, cv=5, scoring='neg_mean_squared_error'))
```
## Define the black-box function
```
def rfe_cv(n_estimators, min_samples_split, max_features, max_depth):
val = np.mean(cross_val_score(RandomForestRegressor(n_estimators=int(n_estimators),
min_samples_split=int(min_samples_split),
max_features=min(max_features, 0.999),
max_depth=int(max_depth), random_state=42),
X, y, scoring='neg_mean_squared_error', cv=5))
return val
score = rfe_cv(n_estimators=100, min_samples_split=10, max_depth=6, max_features=0.78)
score
# define the Bayesian Optimization search
rfe_bo = BayesianOptimization(
rfe_cv,
{'n_estimators': (10, 250),
'min_samples_split': (2, 25),
'max_features': (0.1, 0.999),
'max_depth': (5, 30)})
# start the optimization
rfe_bo.maximize()
rfe_bo.max
# use the best hyperparameter
rfe = RandomForestRegressor(n_estimators=140, max_depth=29, max_features=0.84, min_samples_split=2)
np.mean(cross_val_score(rfe, X, y, cv=5, scoring='neg_mean_squared_error'))
```
* Original `neg_mean_squared_error`: -1409.2889528620326
* Optimized `neg_mean_squared_error`: -1383.4479089516929
### Code 2 Sub-selection
```
from netCDF4 import Dataset
import numpy as np
import numpy.ma as ma
import pandas as pd
pd.set_option('max_columns', None)
from scipy.io import loadmat # this is the SciPy module that loads mat-files
import scipy.io as sio
from itertools import islice
import matplotlib.pyplot as plt
from pathlib import Path
import warnings
warnings.filterwarnings('ignore')
```
#### [0] Load data
zos data along Bathymetry 300 for CMIP6 models and CMEMS observations
```
#[1] Select section and general parameters
m=loadmat('zos_data_B300_section.mat')
ndata = {n: m['row'][n][0,0] for n in m['row'].dtype.names}
dfm=pd.DataFrame.from_dict(dict((column, ndata[column][0]) for column in [n for n, v in ndata.items() if v.size == 1]))
NSeg=dfm['N'][0];KB_bloom=dfm['KB'][0];model=dfm['model'][0];
ns=dfm['Nstr'][0]; ne=dfm['Nend'][0];ss=dfm['Sstr'][0]; se=dfm['Send'][0];
#[2] KB data
G=int(KB_bloom[-1])
for Q in ['2Q']:
file='KB_data_{}2014L10G{}.csv'.format(Q,G)
Kdf=pd.read_csv(file)
kb=Kdf.iloc[:,-1].copy()
kb[kb>0]=1
kb[kb.isnull()]=0
KBCC=Kdf['max_cells/L_raw_b1e5'].copy()
KBCC[pd.isna(Kdf['n_days_bloom'])]=0
if Q=='Q':
KBQ=kb.to_numpy()
KBCCQ=KBCC.to_numpy()
elif Q=='2Q':
KB2Q=kb.to_numpy()
KBCC2Q=KBCC.to_numpy()
if KB_bloom[0]=='Q':
nm=3;Q='Q';KB=KBQ
elif KB_bloom[0]=='2':
nm=6;Q='2Q';KB=KB2Q
print(file)
print(KB_bloom,Q,NSeg,model,ne,ns,ss,se)
#[3] observation data
#(Obs) CMEMS.AVISO-1-0.phy-001-030.r1.Omon.zos.gn (1 realization)
zosO=np.loadtxt('zos_data_B300_10_phy001_030_r1.csv',delimiter=',')
print('zos_obs:',zosO.shape)
#[4] zos model data
#(0-1) CMIP6.HighResMIP.NCAR.CESM1-CAM5-SE-HR.hist-1950.r1i1p1f1.Omon.zos.gn (1 realization) [Q 3]
#(1-2) CMIP6.HighResMIP.CMCC.CMCC-CM2-HR4.hist-1950.r1i1p1f1.Omon.zos.gn (1 realization) [Q 2]
#(2-3) CMIP6.HighResMIP.CMCC.CMCC-CM2-VHR4.hist-1950.r1i1p1f1.Omon.zos.gn (1 realization) [Q 2]
#(3-6) CMIP6.HighResMIP.CNRM-CERFACS.CNRM-CM6-1-HR.hist-1950.r1i1p1f2.Omon.zos.gn (3 realizations) [Q 1]
#(6-7) CMIP6.CMIP.CNRM-CERFACS.CNRM-CM6-1-HR.historical.r1i1p1f2.Omon.zos.gn (1 realizations) [Q 1]
#(7-12) CMIP6.CMIP.E3SM-Project.ES3M-1-0.historical.r1i1p1f1.Omon.zos.gr (5 realizations) [Q 0]
#(12-15) CMIP6.HighResMIP.EC-Earth-Consortium.EC-Earth3P-HR.hist-1950.r1i1p2f1.Omon.zos.gn (3 realizations) [Q 0]
#(15-18) CMIP6.HighResMIP.EC-Earth-Consortium.EC-Earth3P.hist-1950.r1i1p2f1.Omon.zos.gn (3 realizations) [Q 4]
#(18-24) CMIP6.HighResMIP.ECMWF.ECMWF-IFS-HR.hist-1950.r1i1p1f1.Omon.zos.gn (6 realizations) [Q 5]
#(24-27) CMIP6.HighResMIP.ECMWF.ECMWF-IFS-MR.hist-1950.r1i1p1f1.Omon.zos.gn (3 realizations)[Q 5]
#(27-28) CMIP6.CMIP.NOAA-GFDL.GFDL-CM4.historical.r1i1p1f1.Omon.zos.gn (1 realizations) [Q 4]
#(28-30) CMIP6.CMIP.NOAA-GFDL.GFDL-ESM4.historical.r2i1p1f1.Omon.zos.gn (2 realizations) [Q 3]
#(30-31) CMIP6.HighResMIP.NERC.HadGEM3-GC31-HH.hist-1950.r1i1p1f1.Omon.zos.gn (1 realization) [Q 5]
#(31-34) CMIP6.HighResMIP.MOHC.HadGEM3-GC31-HM.hist-1950.r1i1p1f1.Omon.zos.gn (3 realizations) [Q 5]
#(34-37) CMIP6.HighResMIP.MOHC.HadGEM3-GC31-MM.hist-1950.r1i1p1f1.Omon.zos.gn (3 realizations) [Q 5]
#(37-41) CMIP6.CMIP.MOHC.HadGEM3-GC31-MM.historical.r1i1p1f3.Omon.zos.gn (4 realizations) [Q 5]
zosMRaw=np.load('zos_data_B300_543210.npy')
print('zos_model:', zosMRaw.shape)
print ('Number of members:', zosMRaw.shape[0])
#Model info
df=pd.read_csv('zos_data_B300_members_score.csv',index_col=0)
display(df)
```
### [1] Sub-selection predictors
For Loop Current north (LC-N) and Loop Current south (LC-S), given 2Q (i.e., a 6-month period): <br>
(1) resolve observed physical phenomena (Yes / No), (2) frequency of an oscillation (LC-N, LC-S), <br>
(3) temporal match (LC-N, LC-S, Total), (4) RMSE (Total) for each member, model, and group (Table 1-3)
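The RMSE predictor is computed per member as the root-mean-square difference between the model and observed Loop Current delta-zos series, scaled to centimetres. A minimal sketch of that formula with made-up values:

```python
import numpy as np

lc_model = np.array([0.10, -0.05, 0.20])  # hypothetical model delta-zos (m)
lc_obs   = np.array([0.12, -0.02, 0.15])  # hypothetical observed delta-zos (m)

# Same form as in the notebook: RMSE in cm, rounded to 2 decimals
rmse_cm = np.round(np.sqrt(np.mean(np.square(lc_model - lc_obs))) * 1e2, decimals=2)
print(rmse_cm)  # 3.56
```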
```
def predictos(resm,member,KB,LCO,LC,Institution_ID,Source_ID,ensemble_size,Flag):
#Info
if Flag==1:
resm.loc[member,'Institution_ID']=Institution_ID
resm.loc[member,'Source_ID']=Source_ID
resm.loc[member,'e_size']=ensemble_size
#KB Blooms and LC counts
resm.loc[member,'KB']=(KB>0).sum()
resm.loc[member,'LCN']=(LC>=0).sum()
resm.loc[member,'LCS']=(LC<0).sum()
resm.loc[member,'LCN_NB']=((LC>=0) & (KB==0)).sum()
resm.loc[member,'LCN_B']=((LC>=0) & (KB>0)).sum()
resm.loc[member,'LCS_NB']=((LC<0) & (KB==0)).sum()
resm.loc[member,'LCS_B']=((LC<0) & (KB>0)).sum()
resm.loc[member,'Err_KB']= np.round(resm.loc[member,'LCS_B']/resm.loc[member,'KB'],decimals=3)
#Temporal match between observation and model
resm.loc[member,'Match_LCN']=((LC>=0) & (LCO>=0)).sum()
resm.loc[member,'Match_LCS']=((LC<0) & (LCO<0)).sum()
resm.loc[member,'Match_Tot']=resm.loc[member,'Match_LCN']+resm.loc[member,'Match_LCS']
#Temporal error between observation and model
resm.loc[member,'Err_LCN']=0
resm.loc[member,'Err_LCS']=0
resm.loc[member,'Err_Tot']=0
#Temporal error between AVISO and model
if member =='obs':
resm.loc[member,'Err_LCN']=0
resm.loc[member,'Err_LCS']=0
resm.loc[member,'Err_Tot']=0
else:
resm.loc[member,'Err_LCN']=np.round((resm.loc['obs','LCN']-resm.loc[member,'Match_LCN'])/resm.loc['obs','LCN'],decimals=3)
resm.loc[member,'Err_LCS']=np.round((resm.loc['obs','LCS']-resm.loc[member,'Match_LCS'])/resm.loc['obs','LCS'],decimals=3)
resm.loc[member,'Err_Tot']=np.round((len(LCO)-resm.loc[member,'Match_Tot'])/len(LCO),decimals=3)
#RMSE between AVISO and model
resm.loc[member,'RMSE']=np.round(np.sqrt(np.mean(np.square(LC-LCO)))*1e2,decimals=2)
return resm
print('zos data processing steps MSXP: mean_segment(mean_ensemble).delta_north_south.max_period')
#(1) Ensembles
NME=['3210', '321X', '32XX', '3XXX', 'XXX0']
ME=[[3,2,1,0], [3,2,1,-1], [3,2,-1,-1], [3,-1,-1,-1], [-1,-1,-1,0]]
# NME=['3XXX']
# ME=[[3,-1,-1,-1]]
Disp=0
#(2)Create results dataframe
members=['obs', *[*NME]]
columns=['e_size', 'KB','LCN','LCS','LCN_NB','LCN_B','LCS_NB','LCS_B','Err_KB', \
'Match_LCN','Match_LCS','Match_Tot','Err_LCN','Err_LCS','Err_Tot','RMSE']
resm = pd.DataFrame(columns = columns,index=members)
#(3) Create zos dataframe
columns=['KB','obs', *[*NME]]
Q=pd.date_range('1993-01-01', periods=44, freq='2Q',closed='left')
dfzos = pd.DataFrame(columns=columns,index=Q)
dfzos.KB=KB
#(4) Observation data processing
DO=(np.nanmean(zosO[:,ns:ne], axis=1) - np.nanmean(zosO[:,ss:se], axis=1))
LCO=DO.reshape((-1,nm),order='C').max(axis=1)
member='obs'
ensemble_size=1
resm=predictos(resm,'obs',KB,LCO,LCO,Institution_ID='', Source_ID='', ensemble_size='', Flag=2)
dfzos.obs=LCO
for nme,me in zip(NME,ME):
#[1] Step 1: Collect model runs data
#Initialize ensemble: zos data and info
ZOS=[]
df_ensemble = df[0:0]
#(1.1)Ensemble data and info
for index, row in df.iterrows():
Score=row['Score']
Institution_ID=row['Institution_ID']
if Score==me[0] or Score==me[1] or Score==me[2] or Score==me[3]:
if (Institution_ID != 'CMEMS'):
temp=zosMRaw[index,:,:]
temp[temp>1e3]=np.nan
ZOS.append(temp)
df_ensemble.loc[index]=row
ZOS= np.stack(ZOS)
ZOSN=ZOS[:,:,ns:ne]
ZOSS=ZOS[:,:,ss:se]
print('Step 1: Collect zos data {} for north {} and south {} segments for all model runs for multi-model ensemble {}'.\
format(ZOS.shape,ZOSN.shape,ZOSS.shape,nme))
#(1.2) Save data for optimization
m['ZOS']=ZOS
m['ZOSN']=ZOSN
m['ZOSS']=ZOSS
m['Member']=df_ensemble.loc[:,['Score','Member']].to_numpy()
m['row']['Ensemble'][0,0][0]=nme
df_ensemble.to_csv('zos_data_B300_opt_T{}.csv'.format(nme))
#[2] Process ensemble (ensemble mean and std)
#zosA=np.nanmean(ZOS, axis=0)
#zosAstd=np.nanstd(ZOS,axis=0)
zosAN=np.nanmean(ZOSN, axis=0)
zosAS=np.nanmean(ZOSS, axis=0)
#Display ensemble info and zos data size
if nme=='XXX0':
print('For each multi-model ensemble:')
print('Step 2: Average zos data of all model runs for north segment {} and south segment {} '.format(zosAN.shape,zosAS.shape))
if Disp>0:
display(df_ensemble)
print('zos data',nme,':',ZOS.shape,zosAN.shape,zosAS.shape)
#[3] Data processing MSXP: mean_segment(delta-north-south), max_period
#(3.1) Mean segment
std=0
if std==0:
zosMN=zosAN
zosMS=zosAS
elif std==1:
zosM=zosAstd
#DMN=np.nanmean(zosM[:,ns:ne], axis=1)
#DMS=np.nanmean(zosM[:,ss:se], axis=1)
DMN=np.nanmean(zosMN, axis=1)
DMS=np.nanmean(zosMS, axis=1)
if nme=='XXX0':
print('Step 3: Average zos data of north segment{} and south segment {}'.format(DMN.shape,DMS.shape))
#(3.2) Delta north and south
DM=DMN-DMS
if nme=='XXX0':
print('Step 4: Subtract zos data of north segment from south segment {}'.format(DM.shape))
#(3.3) Maximum delta zos per period
LCM=DM.reshape((-1,nm),order='C').max(axis=1)
if nme=='XXX0':
print('Step 5: Select maximum delta zos in the 6-month interval {} given 22-year study period'.format(LCM.shape))
#(3.4) Collect data per model run
dfzos.loc[:,nme]=LCM
#(3.5) Save data for optimization
m['LCO']=LCO
m['LCM']=LCM
mfile='zos_data_B300_opt_R{}.mat'.format(nme)
sio.savemat(mfile,m)
#[4] Calculate predictors
ensemble_size=ZOS.shape[0]
resm=predictos(resm,nme,KB,LCO,LCM,Institution_ID='', Source_ID='', ensemble_size=ensemble_size, Flag=2)
#Display results table
resm.iloc[0,0]=1
display(resm)
#Save table
resm.to_csv('res_Table3_Subset_selection.csv')
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
Let's start with a model that's very effective at learning Cats v Dogs.
It's similar to the previous models that you have used, but I have updated the layers definition. Note that there are now 4 convolutional layers with 32, 64, 128 and 128 convolutions respectively.
Also, this will train for 100 epochs, because I want to plot the graph of loss and accuracy.
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(learning_rate=1e-4),
metrics=['accuracy'])
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
history = model.fit(
train_generator,
steps_per_epoch=100, # 2000 images = batch_size * steps
epochs=100,
validation_data=validation_generator,
validation_steps=50, # 1000 images = batch_size * steps
verbose=2)
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
The Training Accuracy is close to 100%, and the validation accuracy is in the 70%-80% range. This is a great example of overfitting -- which in short means that it can do very well with images it has seen before, but not so well with images it hasn't. Let's see if we can do better to avoid overfitting -- and one simple method is to augment the images a bit. If you think about it, most pictures of a cat are very similar -- the ears are at the top, then the eyes, then the mouth etc. Things like the distance between the eyes and ears will always be quite similar too.
What if we tweak the images to change this up a bit -- rotate the image, squash it, etc. That's what image augmentation is all about. And there's an API that makes it easy...
Now take a look at the `ImageDataGenerator`. There are properties on it that you can use to augment the images.
```
# Updated to do image augmentation
train_datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
```
These are just a few of the options available (for more, see the Keras documentation). Let's quickly go over what we just wrote:
* rotation_range is a value in degrees (0–180), a range within which to randomly rotate pictures.
* width_shift and height_shift are ranges (as a fraction of total width or height) within which to randomly translate pictures vertically or horizontally.
* shear_range is for randomly applying shearing transformations.
* zoom_range is for randomly zooming inside pictures.
* horizontal_flip is for randomly flipping half of the images horizontally. This is relevant when there are no assumptions of horizontal asymmetry (e.g. real-world pictures).
* fill_mode is the strategy used for filling in newly created pixels, which can appear after a rotation or a width/height shift.
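To build intuition for what two of these transforms do, here is a rough NumPy sketch (not the library's actual implementation) of a horizontal flip and a one-pixel width shift with `'nearest'` fill, applied to a tiny stand-in image:

```python
import numpy as np

img = np.arange(12).reshape(3, 4)  # tiny stand-in for a grayscale image

# horizontal_flip: mirror the image left-to-right
flipped = img[:, ::-1]

# width shift by 1 pixel with fill_mode='nearest': the newly exposed
# column copies its nearest neighbour
shifted = np.empty_like(img)
shifted[:, 1:] = img[:, :-1]
shifted[:, 0] = img[:, 0]  # 'nearest' fill for the exposed edge

print(flipped[0].tolist())  # [3, 2, 1, 0]
print(shifted[0].tolist())  # [0, 0, 1, 2]
```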
Here's some code where we've added Image Augmentation. Run it to see the impact.
```
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(learning_rate=1e-4),
metrics=['accuracy'])
# This code has changed. Now instead of the ImageGenerator just rescaling
# the image, we also rotate and do other operations
# Updated to do image augmentation
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
history = model.fit(
train_generator,
steps_per_epoch=100, # 2000 images = batch_size * steps
epochs=100,
validation_data=validation_generator,
validation_steps=50, # 1000 images = batch_size * steps
verbose=2)
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
!wget --no-check-certificate \
https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip \
-O /tmp/cats_and_dogs_filtered.zip
import os
import zipfile
import tensorflow as tf
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.preprocessing.image import ImageDataGenerator
local_zip = '/tmp/cats_and_dogs_filtered.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('/tmp')
zip_ref.close()
base_dir = '/tmp/cats_and_dogs_filtered'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(learning_rate=1e-4),
metrics=['accuracy'])
# This code has changed. Now instead of the ImageGenerator just rescaling
# the image, we also rotate and do other operations
# Updated to do image augmentation
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
test_datagen = ImageDataGenerator(rescale=1./255)
# Flow training images in batches of 20 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # This is the source directory for training images
target_size=(150, 150), # All images will be resized to 150x150
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of 20 using test_datagen generator
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
history = model.fit(
train_generator,
steps_per_epoch=100, # 2000 images = batch_size * steps
epochs=100,
validation_data=validation_generator,
validation_steps=50, # 1000 images = batch_size * steps
verbose=2)
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training Loss')
plt.plot(epochs, val_loss, 'b', label='Validation Loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
```
# Data Upload Tutorial
* This notebook is a tutorial on how to upload data using Graphistry's REST API.
* Our REST API is designed to be language agnostic. For our Python-specific API, please review the other notebooks in <https://github.com/graphistry/pygraphistry>
* For permission to upload to our public service, you **must** have an API key. Go to <www.graphistry.com/api-request> to receive a key.
* For more details, visit https://graphistry.github.io/docs/legacy/api/0.9.2/api.html for a full API reference
#### Import the necessary libraries
```
import graphistry
import pandas
import requests
import random
import time
```
#### Set your API key and Graphistry Server Location
- To use our public server at **labs.graphistry.com**, you must have a valid API key
```
API_KEY = 'Go to www.graphistry.com/api-request to get your key!'
SERVER = 'labs.graphistry.com'
# Current time is used to create a unique dataset name
current_time = str(int(time.time()))
```
## Create a dictionary describing the graph
- Visit https://graphistry.github.io/docs/legacy/api/0.9.2/api.html for full API reference
```
datasetName = 'RestUploadTutorial-' + current_time
data = {
"name": datasetName,
"type": "edgelist",
"bindings": {
"sourceField": "src",
"destinationField": "dst",
"idField": "node"
},
"graph": [
{"src": "myNode1", "dst": "myNode2",
"myEdgeField1": "I'm an edge!", "myCount": 7},
{"src": "myNode2", "dst": "myNode3",
"myEdgeField1": "I'm also an edge!", "myCount": 200}
],
"labels": [
{"node": "myNode1",
"myNodeField1": "I'm a node!",
"pointColor": 5},
{"node": "myNode2",
"myNodeField1": "I'm node 2",
"pointColor": 4},
{"node": "myNode3",
"myNodeField1": "I'm a node three!",
"pointColor": 4}
]
}
```
### Post the JSON data to construct a graph visualization dataset, and upload it to the server
```
params = {
'key': API_KEY
}
resp = requests.post('http://'+ SERVER +'/etl', params=params, json=data)
print(resp.status_code)
resp.raise_for_status()
```
## Embed the uploaded graph visualization into the notebook using an IFrame
```
datasetName = resp.json()['dataset']
url = 'http://' + SERVER + '/graph/graph.html?dataset=' + datasetName + '&splashAfter=' + str(int(time.time()))
from IPython.display import IFrame
IFrame(url, width=1000, height=500)
```
# Upload a workbook programmatically using a POST request
```
import json
from pprint import pprint
with open('lesMiserablesWorkbook.json') as data_file:
wb = json.load(data_file)
print(wb)
workbook_id = wb['id']
print(workbook_id)
params = {
'key': API_KEY
}
resp = requests.post('http://'+ SERVER +'/workbook', params=params, json=wb)
print(resp.status_code)
resp.raise_for_status()
```
## Using the workbook on the Les Miserables dataset
```
url = 'http://' + SERVER + '/graph/graph.html?dataset=Miserables&workbook=%s' % wb['id']
from IPython.display import IFrame
IFrame(url, width=1000, height=500)
```
## Download the previously uploaded workbook using a GET request
```
resp = requests.get('http://'+ SERVER +'/workbook/' + workbook_id)
print(resp.status_code)
resp.raise_for_status()
downloadedWorkbook = resp.json()
print(downloadedWorkbook)
```
## __PPSO__ (Parallel Particle Swarm Optimisation)
Now we are going to implement a faster, parallel version of PSO, i.e. PPSO.
Let us first use the code from the [previous notebook](/notebooks/Basic%20PSO.ipynb)
```
%%file particle.py
#dependencies
import random
import math
import copy # for array copying
import sys
class Particle:
def __init__(self,x0, num_dimensions):
self.position_i=[] # particle position
self.velocity_i=[] # particle velocity
self.pos_best_i=[] # best position individual
self.err_best_i=-1 # best error individual
self.err_i=-1 # error individual
self.num_dimensions = num_dimensions
for i in range(0, self.num_dimensions):
self.velocity_i.append(random.uniform(-1,1))
self.position_i.append(x0[i])
# evaluate current fitness
def evaluate(self,costFunc):
self.err_i=costFunc(self.position_i)
# check to see if the current position is an individual best
if self.err_i < self.err_best_i or self.err_best_i==-1:
self.pos_best_i=self.position_i
self.err_best_i=self.err_i
# update new particle velocity
def update_velocity(self,pos_best_g):
w=0.5 # constant inertia weight (how much to weigh the previous velocity)
c1=1 # cognitive constant
c2=2 # social constant
for i in range(0, self.num_dimensions):
r1=random.random()
r2=random.random()
vel_cognitive=c1*r1*(self.pos_best_i[i]-self.position_i[i])
vel_social=c2*r2*(pos_best_g[i]-self.position_i[i])
self.velocity_i[i]=w*self.velocity_i[i]+vel_cognitive+vel_social
# update the particle position based off new velocity updates
def update_position(self,bounds):
for i in range(0, self.num_dimensions):
self.position_i[i]=self.position_i[i]+self.velocity_i[i]
# adjust maximum position if necessary
if self.position_i[i]>bounds[i][1]:
self.position_i[i]=bounds[i][1]
# adjust minimum position if necessary
if self.position_i[i] < bounds[i][0]:
self.position_i[i]=bounds[i][0]
from particle import Particle
import numba
def PSO(costFunc,x0,bounds,num_particles,maxiter):
global num_dimensions
num_dimensions=len(x0)
err_best_g=-1 # best error for group
pos_best_g=[] # best position for group
# establish the swarm
swarm=[]
for i in range(0,num_particles):
swarm.append(Particle(x0, num_dimensions))
# begin optimization loop
i=0
while i < maxiter:
#print i,err_best_g
# cycle through particles in swarm and evaluate fitness
for j in range(0,num_particles):
swarm[j].evaluate(costFunc)
# determine if current particle is the best (globally)
if swarm[j].err_i < err_best_g or err_best_g == -1:
pos_best_g=list(swarm[j].position_i)
err_best_g=float(swarm[j].err_i)
# cycle through swarm and update velocities and position
for j in range(0,num_particles):
swarm[j].update_velocity(pos_best_g)
swarm[j].update_position(bounds)
i+=1
# print final results
print ('\nFINAL:')
print (pos_best_g)
print (err_best_g)
```
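To sanity-check these update rules, here is a compact, self-contained sketch of the same PSO loop applied to the sphere function `f(x) = sum(x_i^2)`. The function and parameter names (`sphere`, `pso`, `n_particles`, etc.) are our own illustration, not from the notebook; the inertia/cognitive/social constants match the values above.

```python
import random

def sphere(x):
    """A standard test function with its minimum (0) at the origin."""
    return sum(xi * xi for xi in x)

def pso(cost, x0, bounds, n_particles=30, max_iter=60, w=0.5, c1=1.0, c2=2.0):
    dim = len(x0)
    # As in the notebook, every particle starts at x0 with a random velocity.
    pos = [list(x0) for _ in range(n_particles)]
    vel = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [list(x0) for _ in range(n_particles)]
    pbest_err = [cost(x0)] * n_particles
    gbest, gbest_err = list(x0), cost(x0)
    for _ in range(max_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp the new position to the search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            err = cost(pos[i])
            if err < pbest_err[i]:
                pbest[i], pbest_err[i] = list(pos[i]), err
                if err < gbest_err:
                    gbest, gbest_err = list(pos[i]), err
    return gbest, gbest_err

random.seed(0)
best, err = pso(sphere, [5.0, 5.0], [(-10.0, 10.0), (-10.0, 10.0)])
print(best, err)
```

With these settings the swarm should drive the error close to zero, since the sphere function is convex and unimodal.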
| github_jupyter |
<a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/06_other_models/00_decision_trees_and_random_forests/colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
#### Copyright 2020 Google LLC.
```
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Decision Trees and Random Forests
In this lab we will apply decision trees and random forests to perform machine learning tasks. These two model types are relatively easy to understand, but they are very powerful tools.
Random forests build upon decision tree models, so we'll start by creating a decision tree and then move to random forests.
## Load Data
Let's start by loading some data. We'll use the familiar iris dataset from scikit-learn.
```
import pandas as pd
from sklearn.datasets import load_iris
iris_bunch = load_iris()
feature_names = iris_bunch.feature_names
target_name = 'species'
iris_df = pd.DataFrame(
iris_bunch.data,
columns=feature_names
)
iris_df[target_name] = iris_bunch.target
iris_df.head()
```
## Decision Trees
Decision trees are models that create a tree structure with a condition at each internal (non-leaf) node. The condition is used to choose which branch to traverse down the tree.
Let's see what this would look like with a simple example.
Let's say we want to determine if a piece of fruit is a lemon, lime, orange, or grapefruit. We might have a tree that looks like:
```txt
----------
-----------| color? |-----------
| ---------- |
| | |
<green> <orange> <yellow>
| | |
| | |
======== | =========
| lime | | | lemon |
======== --------- =========
-----| size? |-----
| --------- |
| |
<small> <large>
| |
| |
========== ==============
| orange | | grapefruit |
========== ==============
```
This would roughly translate to the following code:
```python
def fruit_type(fruit):
if fruit.color == "green":
return "lime"
if fruit.color == "yellow":
return "lemon"
if fruit.color == "orange":
if fruit.size == "small":
return "orange"
if fruit.size == "large":
return "grapefruit"
```
As you can see, the decision tree is very easy to interpret. If you use a decision tree to make predictions and then need to determine why the tree made the decision that it did, it is very easy to inspect.
Also, decision trees don't benefit from scaling or normalizing your data, which is different from many types of models.
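To make the fruit example above actually runnable, we can sketch a tiny `Fruit` type. The class name and attributes here are our own illustration, not part of the lab:

```python
from dataclasses import dataclass

@dataclass
class Fruit:
    color: str
    size: str

def fruit_type(fruit):
    # Mirror the hand-written decision tree above.
    if fruit.color == "green":
        return "lime"
    if fruit.color == "yellow":
        return "lemon"
    if fruit.color == "orange":
        if fruit.size == "small":
            return "orange"
        if fruit.size == "large":
            return "grapefruit"

print(fruit_type(Fruit("orange", "large")))  # grapefruit
```

Every prediction is just a walk from the root to a leaf, which is why the model is so easy to inspect.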
### Create a Decision Tree
Now that we have the data loaded, we can create a decision tree. We'll use the [`DecisionTreeClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) from scikit-learn to perform this task.
Note that there is also a [`DecisionTreeRegressor`](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeRegressor.html) that can be used for regression models. In practice, you'll typically see decision trees applied to classification problems more than regression.
To build and train the model, we create an instance of the classifier and then call the `fit()` method that is used for all scikit-learn models.
```
from sklearn import tree
dt = tree.DecisionTreeClassifier()
dt.fit(
iris_df[feature_names],
iris_df[target_name]
)
```
If this were a real application, we'd keep some data to the side for testing.
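As a sketch of what that holdout would look like, using scikit-learn's `train_test_split` (the split fraction and random seed are arbitrary choices for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
# Hold out 20% of the rows for final testing.
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)

dt = DecisionTreeClassifier(random_state=42)
dt.fit(X_train, y_train)
print(dt.score(X_test, y_test))
```

The score on `X_test` gives a less optimistic (and more honest) estimate of performance than scoring on the training data.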
### Visualize the Tree
We now have a decision tree and can use it to make predictions. But before we do that, let's take a look at the tree itself.
To do this we create a [`StringIO`](https://docs.python.org/3/library/io.html) object that we can export DOT data to. [DOT](https://www.graphviz.org/doc/info/lang.html) is a graph description language, and Python utilities such as `pydotplus` can render it as an image.
```
import io
import pydotplus
from IPython.display import Image
dot_data = io.StringIO()
tree.export_graphviz(
dt,
out_file=dot_data,
feature_names=feature_names
)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
```
That tree looks pretty complex. A large number of branches is a sign that we may have overfit the model. Let's create the tree again; this time we'll limit the depth.
```
from sklearn import tree
dt = tree.DecisionTreeClassifier(max_depth=2)
dt.fit(
iris_df[feature_names],
iris_df[target_name]
)
```
And plot to see the branching.
```
import io
import pydotplus
from IPython.display import Image
dot_data = io.StringIO()
tree.export_graphviz(
dt,
out_file=dot_data,
feature_names=feature_names
)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
```
This tree is less likely to be overfitting since we forced it to have a depth of 2. Holding out a test sample and performing validation would be a good way to check.
What are the `gini`, `samples`, and `value` items shown in the tree?
`gini` is the *Gini impurity*. This is a measure of the chance that you'll misclassify a random element in the dataset at this decision point. Smaller `gini` is better.
`samples` is a count of the number of samples that have met the criteria to reach this leaf.
Within `value` is the count of each class of data that has made it to this leaf. Summing `value` should equal `samples`.
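As a sanity check on the numbers shown in the plot, the Gini impurity of a node can be computed by hand from its per-class counts. This is a small sketch; the counts below are made up:

```python
def gini_impurity(counts):
    """Gini impurity for a node with the given per-class sample counts."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

# A pure node has impurity 0; an evenly mixed two-class node scores 0.5.
print(gini_impurity([50, 0, 0]))  # 0.0
print(gini_impurity([25, 25]))    # 0.5
```

This matches the `gini` values scikit-learn prints inside each box of the tree visualization.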
### Hyperparameters
There are many hyperparameters you can tweak in your decision tree models. One of those is `criterion`. `criterion` determines the quality measure that the model will use to determine the shape of the tree.
The possible `criterion` values are `gini` and `entropy`. `gini` is the [Gini impurity](https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity) while `entropy` is a measure of [information gain](https://en.wikipedia.org/wiki/Decision_tree_learning#Information_gain).
In the example below, we switch the classifier to use "entropy" for `criterion`. You'll see "entropy" instead of "gini" in the resultant tree, but the tree's structure is the same. For more complex models, though, it may be worthwhile to test both criteria.
```
import io
import pydotplus
from IPython.display import Image
from sklearn import tree
dt = tree.DecisionTreeClassifier(
max_depth=2,
criterion="entropy"
)
dt.fit(
iris_df[feature_names],
iris_df[target_name]
)
dot_data = io.StringIO()
tree.export_graphviz(
dt,
out_file=dot_data,
feature_names=feature_names
)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
```
We've limited the depth of the tree using `max_depth`. We can also require a minimum number of samples in a node before it is considered for splitting with `min_samples_split`, and set a minimum leaf size with `min_samples_leaf`. All of these hyperparameters help prevent your model from overfitting.
There are many other hyperparameters that can be found in the [`DecisionTreeClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html) documentation.
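For instance, a sketch combining several of these limits on the iris data (the specific values are arbitrary, chosen only to illustrate the parameters):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
dt = DecisionTreeClassifier(
    max_depth=3,           # cap the depth of the tree
    min_samples_split=10,  # a node needs at least 10 samples to be split
    min_samples_leaf=5,    # every leaf must keep at least 5 samples
    random_state=0,
)
dt.fit(iris.data, iris.target)
print(dt.get_depth(), dt.get_n_leaves())
```

Tightening any of these constraints produces a smaller tree, trading a little training accuracy for better generalization.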
### Exercise 1: Tuning Decision Tree Hyperparameters
In this exercise we will use a decision tree to classify wine quality in the [Red Wine Quality dataset](https://www.kaggle.com/uciml/red-wine-quality-cortez-et-al-2009).
The target column in the dataset is `quality`. Quality is an integer value between 1 and 10 (inclusive). You'll use the other columns in the dataset to build a decision tree to predict wine quality.
For this exercise:
* Hold out some data for final testing of model generalization.
* Use [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) to compare some hyperparameters for your model. You can choose which parameters to test.
* Print the hyperparameters of the best performing model.
* Print the accuracy of the best performing model on the holdout dataset.
* Visualize the best performing tree.
Use as many text and code cells as you need to perform this exercise. We'll get you started with the code to authenticate and download the dataset.
First upload your `kaggle.json` file, and then run the code block below.
```
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'
```
Next, download the wine quality dataset.
```
! kaggle datasets download uciml/red-wine-quality-cortez-et-al-2009
! ls
```
##### **Student Solution**
```
# Your Code Goes Here
```
---
## Random Forests
Random forests are a simple yet powerful machine learning tool based on decision trees. Random forests are easy to understand, yet they touch upon many advanced machine learning concepts, such as ensemble learning and bagging. These models can be used for both classification and regression. Also, since they are built from decision trees, they are not sensitive to unscaled data.
You can think of a random forest as a group decision made by a number of decision trees. For classification problems, the random forest creates multiple decision trees with different subsets of the data. When it is asked to classify a data point, it will ask all of the trees what they think and then take the majority decision.
For regression problems, the random forest will again use the opinions of multiple decision trees, but it will take the mean (or some other summation) of the responses and use that as the regression value.
This type of modeling, where one model consists of other models, is called *ensemble learning*. Ensemble learning can often lead to better models because taking the combined, differing opinions of a group of models can reduce overfitting.
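The "group decision" idea can be sketched without any library at all. Below, three hypothetical classifiers vote and the majority wins; this is pure illustration, not scikit-learn's internal implementation:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the most common prediction among the ensemble members."""
    return Counter(predictions).most_common(1)[0][0]

# Three imaginary trees disagree; the majority class wins.
print(majority_vote(["setosa", "versicolor", "setosa"]))  # setosa

# For a regression forest, the ensemble would average instead:
print(sum([4.9, 5.1, 5.0]) / 3)  # 5.0
```

Because each tree errs in its own way, the combined vote is usually more stable than any single tree's answer.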
### Create a Random Forest
Creating a random forest is as easy as creating a decision tree.
scikit-learn provides a [`RandomForestClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) and a [`RandomForestRegressor`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html), which can be used to combine the predictive power of multiple decision trees.
```
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
iris_bunch = load_iris()
feature_names = iris_bunch.feature_names
target_name = 'species'
iris_df = pd.DataFrame(
iris_bunch.data,
columns=feature_names
)
iris_df[target_name] = iris_bunch.target
rf = RandomForestClassifier()
rf.fit(
iris_df[feature_names],
iris_df[target_name]
)
```
You can look at different trees in the random forest to see how their decision branching differs. By default there are `100` decision trees created for the model.
Let's view a few.
Run the code below a few times, and see if you notice a difference in the trees that are shown.
```
import io
import pydotplus
import random
from IPython.display import Image
from sklearn import tree
dot_data = io.StringIO()
tree.export_graphviz(
random.choice(rf.estimators_),
out_file=dot_data,
feature_names=feature_names
)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
```
### Make Predictions
Just like any other scikit-learn model, you can use the `predict()` method to make predictions.
```
print(rf.predict([iris_df.iloc[121][feature_names]]))
```
### Hyperparameters
Many of the hyperparameters available in decision trees are also available in random forest models. There are, however, some hyperparameters that are only available in random forests.
The two most important are `bootstrap` and `oob_score`. These two hyperparameters are relevant to ensemble learning.
`bootstrap` determines if the model will use [bootstrap sampling](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)). When you bootstrap, each tree in the forest is trained on a sample drawn from the full dataset rather than on the dataset itself, so each tree sees a different subset of the data. The samples are drawn *with replacement*, which means the same data point can appear more than once in a single tree's sample.
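Sampling with replacement is easy to see in a quick sketch; NumPy here stands in for what the forest does internally when building each tree's training sample:

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.arange(10)

# A bootstrap sample: same size as the data, drawn WITH replacement,
# so some points repeat and some are left out ("out of bag").
bag = rng.choice(data, size=data.size, replace=True)
out_of_bag = np.setdiff1d(data, bag)
print(bag)
print(out_of_bag)
```

On average, roughly a third of the data points end up out of bag for any given tree, which is what makes out-of-bag scoring possible.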
`oob_score` stands for "out-of-bag score." A bootstrap sample is referred to as a *bag* in machine learning parlance, and the data points left out of a tree's bag are its *out-of-bag* samples. When `oob_score` is set to `True`, each tree is evaluated on its out-of-bag samples, giving a built-in estimate of generalization performance without a separate validation set.
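A small sketch of out-of-bag scoring on the iris data (the hyperparameter values are illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
rf = RandomForestClassifier(
    n_estimators=100,
    bootstrap=True,   # required for out-of-bag scoring
    oob_score=True,   # score each tree on its out-of-bag samples
    random_state=0,
)
rf.fit(iris.data, iris.target)
# A free generalization estimate, no separate validation set needed.
print(rf.oob_score_)
```

The `oob_score_` attribute behaves like a built-in cross-validation accuracy, which is handy when data is too scarce to hold out a validation set.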
### Exercise 2: Feature Importance
In this exercise we will use the [UCI Abalone dataset](https://www.kaggle.com/hurshd0/abalone-uci) to determine the age of sea snails.
The target feature in the dataset is `rings`, which is a proxy for age in the snails. This is a numeric value, but it is stored as an integer and has a biological limit. So we can think of this as a classification problem and use a [`RandomForestClassifier`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html).
You will download the dataset and train a random forest classifier. After you have fit the classifier, the `feature_importances_` attribute of the model will be populated. Use the importance scores to print the least important feature.
*Note that some of the features are categorical string values. You'll need to convert these to numeric values to use them in the model.*
Use as many text and code blocks as you need to perform this exercise.
#### **Student Solution**
```
# Your Code Goes Here
```
---
| github_jupyter |
<a href="https://colab.research.google.com/github/pabair/rl-course-ss21/blob/main/solutions/S6_LunarLander_PolicyBased.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Install Dependencies
```
# source: https://medium.com/coinmonks/landing-a-rocket-with-simple-reinforcement-learning-3a0265f8b58c
!pip3 install box2d-py
import gym
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Categorical
import matplotlib.pyplot as plt
from collections import deque
torch.manual_seed(1)
np.random.seed(1)
```
# Neural Network
```
class Net(nn.Module):
def __init__(self, obs_size, hidden_size, n_actions):
super(Net, self).__init__()
self.fc1 = nn.Linear(obs_size, hidden_size)
self.fc2 = nn.Linear(hidden_size, n_actions)
def forward(self, x):
x = F.relu(self.fc1(x))
return self.fc2(x)
```
# Generate Episodes
```
def generate_batch(env, batch_size, t_max=5000):
activation = nn.Softmax(dim=1)
batch_actions,batch_states, batch_rewards = [],[],[]
for b in range(batch_size):
states,actions = [],[]
total_reward = 0
s = env.reset()
for t in range(t_max):
s_v = torch.FloatTensor([s])
act_probs_v = activation(net(s_v))
act_probs = act_probs_v.data.numpy()[0]
a = np.random.choice(len(act_probs), p=act_probs)
new_s, r, done, info = env.step(a)
#record sessions like you did before
states.append(s)
actions.append(a)
total_reward += r
s = new_s
if done:
batch_actions.append(actions)
batch_states.append(states)
batch_rewards.append(total_reward)
break
return batch_states, batch_actions, batch_rewards
```
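The action-sampling step inside `generate_batch` — a softmax over the network's outputs, then drawing an action index from the resulting distribution — can be sketched with NumPy alone. The logits below are made up for illustration:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

logits = np.array([2.0, 1.0, 0.1, -1.0])
probs = softmax(logits)
print(probs)

# Sample an action index in proportion to its probability.
rng = np.random.default_rng(0)
action = rng.choice(len(probs), p=probs)
print(action)
```

Sampling (rather than always taking the argmax) is what keeps the policy exploring during training.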
# Training
```
def filter_batch(states_batch, actions_batch, rewards_batch, percentile):
reward_threshold = np.percentile(rewards_batch, percentile)
elite_states = []
elite_actions = []
for i in range(len(rewards_batch)):
if rewards_batch[i] > reward_threshold:
for j in range(len(states_batch[i])):
elite_states.append(states_batch[i][j])
elite_actions.append(actions_batch[i][j])
return elite_states, elite_actions
batch_size = 100
session_size = 500
percentile = 80
hidden_size = 200
completion_score = 100
learning_rate = 0.01
env = gym.make("LunarLander-v2")
n_states = env.observation_space.shape[0]
n_actions = env.action_space.n
#neural network
net = Net(n_states, hidden_size, n_actions)
#loss function
objective = nn.CrossEntropyLoss()
#optimisation function
optimizer = optim.Adam(params=net.parameters(), lr=learning_rate)
for i in range(session_size):
#generate new sessions
batch_states, batch_actions, batch_rewards = generate_batch(env, batch_size, t_max=500)
elite_states, elite_actions = filter_batch(batch_states, batch_actions, batch_rewards, percentile)
optimizer.zero_grad()
tensor_states = torch.FloatTensor(elite_states)
tensor_actions = torch.LongTensor(elite_actions)
action_scores_v = net(tensor_states)
loss_v = objective(action_scores_v, tensor_actions)
loss_v.backward()
optimizer.step()
#show results
mean_reward, threshold = np.mean(batch_rewards), np.percentile(batch_rewards, percentile)
print("%d: loss=%.3f, reward_mean=%.1f, reward_threshold=%.1f" % (
i, loss_v.item(), mean_reward, threshold))
    # check if the environment is solved
if np.mean(batch_rewards)> completion_score:
        print("Environment has been successfully completed!")
break
```
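The elite-filtering step is the heart of this cross-entropy approach, so here is a standalone sketch of the same `filter_batch` logic with made-up episode data, showing exactly which states and actions survive the percentile cut:

```python
import numpy as np

def filter_batch(states_batch, actions_batch, rewards_batch, percentile):
    # Same logic as above: keep only episodes above the reward threshold.
    reward_threshold = np.percentile(rewards_batch, percentile)
    elite_states, elite_actions = [], []
    for states, actions, reward in zip(states_batch, actions_batch, rewards_batch):
        if reward > reward_threshold:
            elite_states.extend(states)
            elite_actions.extend(actions)
    return elite_states, elite_actions

# Four fake episodes with their total rewards.
states = [["s0", "s1"], ["s2"], ["s3", "s4"], ["s5"]]
actions = [[0, 1], [2], [3, 0], [1]]
rewards = [10.0, 50.0, 90.0, 30.0]

elite_states, elite_actions = filter_batch(states, actions, rewards, percentile=80)
print(elite_states, elite_actions)
```

Only the top-reward episode clears the 80th-percentile threshold here, so the network is trained to imitate the best-performing trajectories.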
# Evaluation
```
import time
FPS = 25
record_folder="video"
env = gym.make('LunarLander-v2')
env = gym.wrappers.Monitor(env, record_folder, force=True)
state = env.reset()
total_reward = 0.0
activation = nn.Softmax(dim=1)
while True:
start_ts = time.time()
env.render()
s_v = torch.FloatTensor([state])
act_probs_v = activation(net(s_v))
act_probs = act_probs_v.data.numpy()[0]
a = np.random.choice(len(act_probs), p=act_probs)
state, reward, done, _ = env.step(a)
total_reward += reward
if done:
break
delta = 1/FPS - (time.time() - start_ts)
if delta > 0:
time.sleep(delta)
print("Total reward: %.2f" % total_reward)
env.close()
```
| github_jupyter |
# Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis.
>Using an RNN rather than a strictly feedforward network is more accurate since we can include information about the *sequence* of words.
Here we'll use a dataset of movie reviews, accompanied by sentiment labels: positive or negative.
<img src="assets/reviews_ex.png" width=40%>
### Network Architecture
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=40%>
>**First, we'll pass in words to an embedding layer.** We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the Word2Vec lesson. You can actually train an embedding with the Skip-gram Word2Vec model and use those embeddings as input, here. However, it's good enough to just have an embedding layer and let the network learn a different embedding table on its own. *In this case, the embedding layer is for dimensionality reduction, rather than for learning semantic representations.*
>**After input words are passed to an embedding layer, the new embeddings will be passed to LSTM cells.** The LSTM cells will add *recurrent* connections to the network and give us the ability to include information about the *sequence* of words in the movie review data.
>**Finally, the LSTM outputs will go to a sigmoid output layer.** We're using a sigmoid function because positive and negative = 1 and 0, respectively, and a sigmoid will output predicted, sentiment values between 0-1.
We don't care about the sigmoid outputs except for the **very last one**; we can ignore the rest. We'll calculate the loss by comparing the output at the last time step and the training label (pos or neg).
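Selecting the last time step's output is just a slice. Here is a NumPy sketch with a fake `(batch_size, seq_length)` block of sigmoid outputs (the numbers are invented for illustration):

```python
import numpy as np

# Fake sigmoid outputs for a batch of 2 reviews, 5 time steps each.
sig_out = np.array([[0.2, 0.4, 0.6, 0.7, 0.9],
                    [0.8, 0.6, 0.4, 0.3, 0.1]])

# Keep only the final time step's prediction for each review.
last = sig_out[:, -1]
print(last)  # [0.9 0.1]
```

This mirrors the `sig_out[:, -1]` step in the model code later in the notebook.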
---
### Load in and visualize the data
```
import numpy as np
# read data from text files
with open('data/reviews.txt', 'r') as f:
reviews = f.read()
with open('data/labels.txt', 'r') as f:
labels = f.read()
print(reviews[:1000])
print()
print(labels[:20])
```
## Data pre-processing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. Here are the processing steps we'll want to take:
>* We'll want to get rid of periods and extraneous punctuation.
* Also, you might notice that the reviews are delimited with newline characters `\n`. To deal with those, I'm going to split the text into each review using `\n` as the delimiter.
* Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
```
from string import punctuation
# get rid of punctuation
reviews = reviews.lower() # lowercase, standardize
all_text = ''.join([c for c in reviews if c not in punctuation])
# split by new lines and spaces
reviews_split = all_text.split('\n')
all_text = ' '.join(reviews_split)
# create a list of words
words = all_text.split()
words[:30]
```
### Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.
> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`.
```
# feel free to use this import
from collections import Counter
## Build a dictionary that maps words to integers
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
## use the dict to tokenize each review in reviews_split
## store the tokenized reviews in reviews_ints
reviews_ints = []
for review in reviews_split:
reviews_ints.append([vocab_to_int[word] for word in review.split()])
```
**Test your code**
As a test that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first, tokenized review.
```
# stats about vocabulary
print('Unique words: ', len((vocab_to_int))) # should ~ 74000+
print()
# print tokens in first review
print('Tokenized review: \n', reviews_ints[:1])
```
### Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively, and place those in a new list, `encoded_labels`.
```
# 1=positive, 0=negative label conversion
labels_split = labels.split('\n')
encoded_labels = np.array([1 if label == 'positive' else 0 for label in labels_split])
```
### Removing Outliers
As an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:
1. Getting rid of extremely long or short reviews; the outliers
2. Padding/truncating the remaining data so that we have reviews of the same length.
Before we pad our review text, we should check for reviews of extremely short or long lengths; outliers that may mess with our training.
```
# outlier review stats
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
```
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. This removes outliers and should allow our model to train more efficiently.
> **Exercise:** First, remove *any* reviews with zero length from the `reviews_ints` list and their corresponding label in `encoded_labels`.
```
print('Number of reviews before removing outliers: ', len(reviews_ints))
## remove any reviews/labels with zero length from the reviews_ints list.
# get indices of any reviews with length 0
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
# remove 0-length reviews and their labels
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
encoded_labels = np.array([encoded_labels[ii] for ii in non_zero_idx])
print('Number of reviews after removing outliers: ', len(reviews_ints))
```
---
## Padding sequences
To deal with both short and very long reviews, we'll pad or truncate all our reviews to a specific length. For reviews shorter than some `seq_length`, we'll pad with 0s. For reviews longer than `seq_length`, we can truncate them to the first `seq_length` words. A good `seq_length`, in this case, is 200.
> **Exercise:** Define a function that returns an array `features` that contains the padded data, of a standard size, that we'll pass to the network.
* The data should come from `reviews_ints`, since we want to feed integers to the network.
* Each row should be `seq_length` elements long.
* For reviews shorter than `seq_length` words, **left pad** with 0s. That is, if the review is `['best', 'movie', 'ever']`, `[117, 18, 128]` as integers, the row will look like `[0, 0, 0, ..., 0, 117, 18, 128]`.
* For reviews longer than `seq_length`, use only the first `seq_length` words as the feature vector.
As a small example, if the `seq_length=10` and an input review is:
```
[117, 18, 128]
```
The resultant, padded sequence should be:
```
[0, 0, 0, 0, 0, 0, 0, 117, 18, 128]
```
**Your final `features` array should be a 2D array, with as many rows as there are reviews, and as many columns as the specified `seq_length`.**
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
```
def pad_features(reviews_ints, seq_length):
''' Return features of review_ints, where each review is padded with 0's
or truncated to the input seq_length.
'''
# getting the correct rows x cols shape
features = np.zeros((len(reviews_ints), seq_length), dtype=int)
# for each review, I grab that review and
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_length]
return features
# Test your implementation!
seq_length = 200
features = pad_features(reviews_ints, seq_length=seq_length)
## test statements - do not change - ##
assert len(features)==len(reviews_ints), "Your features should have as many rows as reviews."
assert len(features[0])==seq_length, "Each feature row should contain seq_length values."
# print first 10 values of the first 30 batches
print(features[:30,:10])
```
## Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
> **Exercise:** Create the training, validation, and test sets.
* You'll need to create sets for the features and the labels, `train_x` and `train_y`, for example.
* Define a split fraction, `split_frac` as the fraction of data to **keep** in the training set. Usually this is set to 0.8 or 0.9.
* Whatever data is left will be split in half to create the validation and *testing* data.
```
split_frac = 0.8
## split data into training, validation, and test data (features and labels, x and y)
split_idx = int(len(features)*split_frac)
train_x, remaining_x = features[:split_idx], features[split_idx:]
train_y, remaining_y = encoded_labels[:split_idx], encoded_labels[split_idx:]
test_idx = int(len(remaining_x)*0.5)
val_x, test_x = remaining_x[:test_idx], remaining_x[test_idx:]
val_y, test_y = remaining_y[:test_idx], remaining_y[test_idx:]
## print out the shapes of your resultant feature data
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
```
**Check your work**
With train, validation, and test fractions equal to 0.8, 0.1, 0.1, respectively, the final, feature data shapes should look like:
```
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
```
---
## DataLoaders and Batching
After creating training, test, and validation data, we can create DataLoaders for this data by following two steps:
1. Create a known format for accessing our data, using [TensorDataset](https://pytorch.org/docs/stable/data.html#) which takes in an input set of data and a target set of data with the same first dimension, and creates a dataset.
2. Create DataLoaders and batch our training, validation, and test Tensor datasets.
```
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
train_loader = DataLoader(train_data, batch_size=batch_size)
```
This is an alternative to creating a generator function for batching our data into full batches.
```
import torch
from torch.utils.data import TensorDataset, DataLoader
# create Tensor datasets
train_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))
valid_data = TensorDataset(torch.from_numpy(val_x), torch.from_numpy(val_y))
test_data = TensorDataset(torch.from_numpy(test_x), torch.from_numpy(test_y))
# dataloaders
batch_size = 50
# make sure to SHUFFLE your training data
train_loader = DataLoader(train_data, shuffle=True, batch_size=batch_size)
valid_loader = DataLoader(valid_data, shuffle=True, batch_size=batch_size)
test_loader = DataLoader(test_data, shuffle=True, batch_size=batch_size)
# obtain one batch of training data
dataiter = iter(train_loader)
sample_x, sample_y = next(dataiter)
print('Sample input size: ', sample_x.size()) # batch_size, seq_length
print('Sample input: \n', sample_x)
print()
print('Sample label size: ', sample_y.size()) # batch_size
print('Sample label: \n', sample_y)
```
---
# Sentiment Network with PyTorch
Below is where you'll define the network.
<img src="assets/network_diagram.png" width=40%>
The layers are as follows:
1. An [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) that converts our word tokens (integers) into embeddings of a specific size.
2. An [LSTM layer](https://pytorch.org/docs/stable/nn.html#lstm) defined by a hidden_state size and number of layers
3. A fully-connected output layer that maps the LSTM layer outputs to a desired output_size
4. A sigmoid activation layer which turns all outputs into a value 0-1; return **only the last sigmoid output** as the output of this network.
### The Embedding Layer
We need to add an [embedding layer](https://pytorch.org/docs/stable/nn.html#embedding) because there are 74000+ words in our vocabulary. It is massively inefficient to one-hot encode that many classes. So, instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using Word2Vec, then load it here. But, it's fine to just make a new layer, using it for only dimensionality reduction, and let the network learn the weights.
### The LSTM Layer(s)
We'll create an [LSTM](https://pytorch.org/docs/stable/nn.html#lstm) to use in our recurrent network, which takes in an input_size, a hidden_dim, a number of layers, a dropout probability (for dropout between multiple layers), and a batch_first parameter.
Most of the time, your network will have better performance with more layers, typically 2-3. Adding more layers allows the network to learn really complex relationships.
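To see the tensor shapes this layer produces, here is a minimal shape check with toy sizes (not the dimensions used in this notebook):

```
import torch
import torch.nn as nn

# toy LSTM: input size 3, hidden size 5, 2 stacked layers, batch-first inputs
lstm = nn.LSTM(input_size=3, hidden_size=5, num_layers=2,
               dropout=0.1, batch_first=True)

# input of shape (batch_size=2, seq_length=4, input_size=3)
x = torch.randn(2, 4, 3)
output, (h_n, c_n) = lstm(x)
print(output.shape)  # torch.Size([2, 4, 5]) -- hidden state at every timestep
print(h_n.shape)     # torch.Size([2, 2, 5]) -- final hidden state per layer
```

Note that `output` carries the top layer's hidden state for every timestep, while `h_n` and `c_n` hold only the final states, one per layer.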
> **Exercise:** Complete the `__init__`, `forward`, and `init_hidden` functions for the SentimentRNN model class.
Note: `init_hidden` should initialize the hidden and cell state of an LSTM layer to all zeros, and move those states to the GPU, if available.
```
# First checking if GPU is available
train_on_gpu=torch.cuda.is_available()
if(train_on_gpu):
print('Training on GPU.')
else:
print('No GPU available, training on CPU.')
import torch.nn as nn
class SentimentRNN(nn.Module):
"""
The RNN model that will be used to perform Sentiment analysis.
"""
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):
"""
Initialize the model by setting up the layers.
"""
super(SentimentRNN, self).__init__()
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# embedding and LSTM layers
self.embedding = nn.Embedding(vocab_size, embedding_dim)
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers,
dropout=drop_prob, batch_first=True)
# dropout layer
self.dropout = nn.Dropout(0.3)
# linear and sigmoid layers
self.fc = nn.Linear(hidden_dim, output_size)
self.sig = nn.Sigmoid()
def forward(self, x, hidden):
"""
Perform a forward pass of our model on some input and hidden state.
"""
batch_size = x.size(0)
# embeddings and lstm_out
x = x.long()
embeds = self.embedding(x)
lstm_out, hidden = self.lstm(embeds, hidden)
# stack up lstm outputs
lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)
# dropout and fully-connected layer
out = self.dropout(lstm_out)
out = self.fc(out)
# sigmoid function
sig_out = self.sig(out)
# reshape to be batch_size first
sig_out = sig_out.view(batch_size, -1)
sig_out = sig_out[:, -1] # get the last timestep's output for each sequence
# return last sigmoid output and hidden state
return sig_out, hidden
def init_hidden(self, batch_size):
''' Initializes hidden state '''
# Create two new tensors with sizes n_layers x batch_size x hidden_dim,
# initialized to zero, for hidden state and cell state of LSTM
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
```
## Instantiate the network
Here, we'll instantiate the network. First up, defining the hyperparameters.
* `vocab_size`: Size of our vocabulary, i.e., the range of values for our input word tokens.
* `output_size`: Size of our desired output; the number of class scores we want to output (pos/neg).
* `embedding_dim`: Number of columns in the embedding lookup table; the size of our embeddings.
* `hidden_dim`: Number of units in the hidden layers of our LSTM cells. Larger values usually perform better. Common values are 128, 256, 512, etc.
* `n_layers`: Number of LSTM layers in the network. Typically 1-3.
> **Exercise:** Define the model hyperparameters.
```
# Instantiate the model w/ hyperparams
vocab_size = len(vocab_to_int)+1 # +1 for the 0 padding + our word tokens
output_size = 1
embedding_dim = 400
hidden_dim = 256
n_layers = 2
net = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)
print(net)
```
---
## Training
Below is typical training code. If you want to implement it yourself, feel free to delete all of this code and write your own. You can also add code to save a model by name.
>We'll also be using a new kind of cross entropy loss, which is designed to work with a single Sigmoid output. [BCELoss](https://pytorch.org/docs/stable/nn.html#bceloss), or **Binary Cross Entropy Loss**, applies cross entropy loss to a single value between 0 and 1.
We also have some data and training hyperparameters:
* `lr`: Learning rate for our optimizer.
* `epochs`: Number of times to iterate through the training dataset.
* `clip`: The maximum gradient value to clip at (to prevent exploding gradients).
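As a quick illustration of how `BCELoss` behaves (toy numbers, unrelated to this dataset):

```
import torch
import torch.nn as nn

criterion = nn.BCELoss()

# model outputs that have already passed through a sigmoid (values in [0, 1])
probs = torch.tensor([0.9, 0.2, 0.7])
labels = torch.tensor([1.0, 0.0, 1.0])

# mean of -[y*log(p) + (1-y)*log(1-p)] over the batch
loss = criterion(probs, labels)
print(loss.item())
```

Because `BCELoss` expects probabilities, it pairs with the sigmoid output of our network; if the model emitted raw logits instead, `BCEWithLogitsLoss` would be the numerically safer choice.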
```
# loss and optimization functions
lr=0.001
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
# training params
epochs = 4 # 3-4 is approx where I noticed the validation loss stop decreasing
counter = 0
print_every = 100
clip=5 # gradient clipping
# move model to GPU, if available
if(train_on_gpu):
net.cuda()
net.train()
# train for some number of epochs
for e in range(epochs):
# initialize hidden state
h = net.init_hidden(batch_size)
# batch loop
for inputs, labels in train_loader:
counter += 1
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
# zero accumulated gradients
net.zero_grad()
# get the output from the model
output, h = net(inputs, h)
# calculate the loss and perform backprop
loss = criterion(output.squeeze(), labels.float())
loss.backward()
# `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.
nn.utils.clip_grad_norm_(net.parameters(), clip)
optimizer.step()
# loss stats
if counter % print_every == 0:
# Get validation loss
val_h = net.init_hidden(batch_size)
val_losses = []
net.eval()
for inputs, labels in valid_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
val_h = tuple([each.data for each in val_h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
output, val_h = net(inputs, val_h)
val_loss = criterion(output.squeeze(), labels.float())
val_losses.append(val_loss.item())
net.train()
print("Epoch: {}/{}...".format(e+1, epochs),
"Step: {}...".format(counter),
"Loss: {:.6f}...".format(loss.item()),
"Val Loss: {:.6f}".format(np.mean(val_losses)))
```
---
## Testing
There are a few ways to test your network.
* **Test data performance:** First, we'll see how our trained model performs on all of our defined test_data, above. We'll calculate the average loss and accuracy over the test data.
* **Inference on user-generated data:** Second, we'll see if we can input just one example review at a time (without a label), and see what the trained model predicts. Looking at new, user input data like this, and predicting an output label, is called **inference**.
```
# Get test data loss and accuracy
test_losses = [] # track loss
num_correct = 0
# init hidden state
h = net.init_hidden(batch_size)
net.eval()
# iterate over test data
for inputs, labels in test_loader:
# Creating new variables for the hidden state, otherwise
# we'd backprop through the entire training history
h = tuple([each.data for each in h])
if(train_on_gpu):
inputs, labels = inputs.cuda(), labels.cuda()
# get predicted outputs
output, h = net(inputs, h)
# calculate loss
test_loss = criterion(output.squeeze(), labels.float())
test_losses.append(test_loss.item())
# convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze()) # rounds to the nearest integer
# compare predictions to true label
correct_tensor = pred.eq(labels.float().view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
num_correct += np.sum(correct)
# -- stats! -- ##
# avg test loss
print("Test loss: {:.3f}".format(np.mean(test_losses)))
# accuracy over all test data
test_acc = num_correct/len(test_loader.dataset)
print("Test accuracy: {:.3f}".format(test_acc))
```
### Inference on a test review
You can change this test_review to any text that you want. Read it and think: is it pos or neg? Then see if your model predicts correctly!
> **Exercise:** Write a `predict` function that takes in a trained net, a plain text_review, and a sequence length, and prints out a custom statement for a positive or negative review!
* You can use any functions that you've already defined or define any helper functions you want to complete `predict`, but it should just take in a trained net, a text review, and a sequence length.
```
# negative test review
test_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. This movie had bad acting and the dialogue was slow.'
from string import punctuation
def tokenize_review(test_review):
test_review = test_review.lower() # lowercase
# get rid of punctuation
test_text = ''.join([c for c in test_review if c not in punctuation])
# splitting by spaces
test_words = test_text.split()
# tokens
test_ints = []
test_ints.append([vocab_to_int.get(word, 0) for word in test_words])
return test_ints
# test code and generate tokenized review
test_ints = tokenize_review(test_review_neg)
print(test_ints)
# test sequence padding
seq_length=200
features = pad_features(test_ints, seq_length)
print(features)
# test conversion to tensor and pass into your model
feature_tensor = torch.from_numpy(features)
print(feature_tensor.size())
def predict(net, test_review, sequence_length=200):
net.eval()
# tokenize review
test_ints = tokenize_review(test_review)
# pad tokenized sequence
seq_length=sequence_length
features = pad_features(test_ints, seq_length)
# convert to tensor to pass into your model
feature_tensor = torch.from_numpy(features)
batch_size = feature_tensor.size(0)
# initialize hidden state
h = net.init_hidden(batch_size)
if(train_on_gpu):
feature_tensor = feature_tensor.cuda()
# get the output from the model
output, h = net(feature_tensor, h)
# convert output probabilities to predicted class (0 or 1)
pred = torch.round(output.squeeze())
# printing output value, before rounding
print('Prediction value, pre-rounding: {:.6f}'.format(output.item()))
# print custom response
if(pred.item()==1):
print("Positive review detected!")
else:
print("Negative review detected.")
# positive test review
test_review_pos = 'This movie had the best acting and the dialogue was so good. I loved it.'
# call function
seq_length=200 # good to use the length that was trained on
predict(net, test_review_neg, seq_length)
```
### Try out test_reviews of your own!
Now that you have a trained model and a predict function, you can pass in _any_ kind of text and this model will predict whether the text has a positive or negative sentiment. Push this model to its limits and try to find what words it associates with positive or negative.
Later, you'll learn how to deploy a model like this to a production environment so that it can respond to any kind of user data put into a web app!
```
# Building the CNN
from keras.models import Sequential
from keras.layers import Convolution2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.models import load_model
from keras.callbacks import EarlyStopping
# Initializing the CNN
classifier = Sequential()
# Step 1 - Convolution
classifier.add(Convolution2D(16, 3, 3, input_shape = (64, 64, 3),activation = 'relu'))
# Step 2 - Max Pooling
classifier.add(MaxPooling2D(pool_size = (2,2)))
# Adding a second convolution layer
classifier.add(Convolution2D(5120, 3, 3, activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2,2)))
# Adding a third convolution layer
classifier.add(Convolution2D(128, 3, 3, activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2,2)))
# Adding a 4th convolution layer
classifier.add(Convolution2D(512, 3, 3, activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2,2)))
# Adding a 5th convolution layer
#classifier.add(Convolution2D(64, 3, 3, activation = 'relu'))
#classifier.add(MaxPooling2D(pool_size = (2,2)))
# Adding a 6th convolution layer
#classifier.add(Convolution2D(256, 3, 3, activation = 'relu'))
#classifier.add(MaxPooling2D(pool_size = (2,2)))
# Adding a 5th convolution layer
#classifier.add(Convolution2D(512, 3, 3, activation = 'relu'))
#classifier.add(MaxPooling2D(pool_size = (2,2)))
# Step 3 - Flattening
classifier.add(Flatten())
# Step 4 - Full Connection
classifier.add(Dense(output_dim = 128, activation= 'relu'))
classifier.add(Dense(output_dim = 30,activation= 'softmax'))
# Compiling the CNN
classifier.compile(optimizer = 'rmsprop', loss = 'categorical_crossentropy', metrics = ['accuracy'])
# Part 2 - Fitting the CNN to the image
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
'/demo5/TrainingData',
target_size=(64, 64),
batch_size=10,
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
'/demo5/ValidationData',
target_size=(64, 64),
batch_size=2,
class_mode='categorical')
early_stopping = EarlyStopping(monitor='val_acc', patience=15, verbose=1, mode='max')
classifier.fit_generator(
train_generator,
samples_per_epoch=2000,
epochs=500,
validation_data=validation_generator,
callbacks=[early_stopping],
validation_steps=1000)
classifier.save('/demo5/all-model.h5')
import glob
import numpy as np
import csv
from keras.models import load_model
from keras.preprocessing import image
classifier = load_model('/demo5/all-model.h5')
with open('/demo5/prediction.csv', "w") as csv_file:
writer = csv.writer(csv_file, delimiter=',')
for filename in glob.iglob('/demo5/Remaining Clips/*.jpg'):
test_image = image.load_img( filename,target_size=(64, 64))
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis = 0)
result = classifier.predict_classes(test_image, verbose = 1)
for item in train_generator.class_indices: # Python's for loops are a "for each" loop
if (result[0] == train_generator.class_indices[item]):
line = filename + ',' + item
writer.writerow([filename, item])
```
<a href="https://colab.research.google.com/github/graviraja/100-Days-of-NLP/blob/applications%2Fclassification/applications/classification/grammatically_correct_sentence/CoLA%20with%20DistilBERT.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
### Installations
```
!pip install transformers
!pip install wget
```
### CoLA (Corpus of Linguistic Acceptability) Dataset
```
import os
import wget
print('Downloading dataset')
# The URL for the dataset zip file.
url = 'https://nyu-mll.github.io/CoLA/cola_public_1.1.zip'
# Download the file (if we haven't already)
if not os.path.exists('./cola_public_1.1.zip'):
wget.download(url, './cola_public_1.1.zip')
if not os.path.exists('./cola_public'):
!unzip cola_public_1.1.zip
!ls
```
### Imports
```
import time
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import transformers
from transformers import AdamW, get_linear_schedule_with_warmup
from sklearn import model_selection
from sklearn import metrics
import matplotlib.pyplot as plt
import seaborn as sns
RANDOM_SEED = 42
np.random.seed(RANDOM_SEED)
torch.manual_seed(RANDOM_SEED)
torch.backends.cudnn.deterministic = True
train_file = "cola_public/raw/in_domain_train.tsv"
test_file = "cola_public/raw/in_domain_dev.tsv"
df_train = pd.read_csv(train_file, sep='\t', header=None, names=['sentence_source', 'label', 'label_notes', 'sentence'])
df_valid = pd.read_csv(test_file, sep='\t', header=None, names=['sentence_source', 'label', 'label_notes', 'sentence'])
```
### Data Analysis
```
df_train.head()
df_train = df_train.drop(columns=['sentence_source', 'label_notes'])
df_train.head()
df_valid = df_valid.drop(columns=['sentence_source', 'label_notes'])
df_train.shape, df_valid.shape
df_train = df_train.sample(frac=1).reset_index(drop=True)
df_train.head()
sns.countplot(df_train['label'].values)
plt.xlabel("Training Data Distribution")
sns.countplot(df_valid['label'].values)
plt.xlabel("Testing Data Distribution")
```
#### Choosing maximum sequence length
```
token_lens = []
for txt in df_train.sentence:
tokens = txt.split()
token_lens.append(len(tokens))
sns.distplot(token_lens)
plt.xlim([0, 512]);
plt.xlabel('Token lengths');
```
### Configurations
```
OUTPUT_DIM = 1
MAX_LEN = 100
TRAIN_BATCH_SIZE = 8
VALID_BATCH_SIZE = 8
EPOCHS = 3
TEACHER_MODEL_NAME = "bert-base-uncased"
STUDENT_MODEL_NAME = "distilbert-base-uncased"
TEACHER_MODEL_PATH = "teacher_model.bin"
STUDENTSA_MODEL_PATH = "studentsa_model.bin"
STUDENT_MODEL_PATH = "student_model.bin"
TOKENIZER = transformers.BertTokenizer.from_pretrained(TEACHER_MODEL_NAME)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
device
```
### CoLA Dataset
```
class CoLADataset:
def __init__(self, sentences, labels):
self.sentences = sentences
self.labels = labels
self.tokenizer = TOKENIZER
self.max_len = MAX_LEN
def __len__(self):
return len(self.labels)
def __getitem__(self, item):
sentence = self.sentences[item]
label = self.labels[item]
encoding = self.tokenizer.encode_plus(
sentence,
add_special_tokens=True,
max_length=self.max_len,
return_token_type_ids=False,
pad_to_max_length=True,
return_attention_mask=True,
truncation=True,
return_tensors='pt',
)
return {
"ids": encoding["input_ids"].flatten(),
"mask": encoding["attention_mask"].flatten(),
"targets": torch.tensor(label, dtype=torch.float)
}
train_dataset = CoLADataset(
sentences=df_train.sentence.values,
labels=df_train.label.values
)
valid_dataset = CoLADataset(
sentences=df_valid.sentence.values,
labels=df_valid.label.values
)
```
### DataLoaders
```
train_data_loader = torch.utils.data.DataLoader(
train_dataset,
TRAIN_BATCH_SIZE,
shuffle=True
)
valid_data_loader = torch.utils.data.DataLoader(
valid_dataset,
VALID_BATCH_SIZE
)
sample = next(iter(train_data_loader))
sample["ids"].shape, sample["mask"].shape, sample["targets"].shape
```
## BERT Model (Teacher)
```
class BERTModel(nn.Module):
def __init__(self):
super().__init__()
self.bert = transformers.BertModel.from_pretrained(TEACHER_MODEL_NAME)
self.bert_drop = nn.Dropout(0.3)
self.out = nn.Linear(768, OUTPUT_DIM)
def forward(self, ids, mask):
_, o2 = self.bert(ids, attention_mask=mask)
bo = self.bert_drop(o2)
output = self.out(bo)
return output
teacher_model = BERTModel()
teacher_model.to(device)
```
### Optimizer
```
# create parameters we want to optimize
# we generally don't apply weight decay to bias and LayerNorm parameters
param_optimizer = list(teacher_model.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_parameters = [
{
"params": [
p for n, p in param_optimizer if not any(nd in n for nd in no_decay)
],
"weight_decay": 0.001,
},
{
"params": [
p for n, p in param_optimizer if any(nd in n for nd in no_decay)
],
"weight_decay": 0.0
}
]
num_train_steps = int(len(df_train) / TRAIN_BATCH_SIZE * EPOCHS)
num_train_steps
optimizer = AdamW(optimizer_parameters, lr=3e-5)
```
### Scheduler
```
scheduler = get_linear_schedule_with_warmup(
optimizer,
num_warmup_steps=0,
num_training_steps=num_train_steps
)
```
### Loss Criterion
```
criterion = nn.BCEWithLogitsLoss().to(device)
```
### Training Method
```
def train_fn(data_loader, model, optimizer, criterion, device, scheduler):
model.train()
epoch_loss = 0
for batch in data_loader:
ids = batch['ids'].to(device)
mask = batch["mask"].to(device)
targets = batch["targets"].to(device)
optimizer.zero_grad()
outputs = model(
ids=ids,
mask=mask
)
loss = criterion(outputs, targets.view(-1, 1))
epoch_loss += loss.item()
loss.backward()
optimizer.step()
scheduler.step()
return epoch_loss / len(data_loader)
```
### Evaluation Method
```
def eval_fn(data_loader, model, criterion, device):
model.eval()
fin_outputs = []
fin_targets = []
epoch_loss = 0
with torch.no_grad():
for batch in data_loader:
ids = batch["ids"].to(device)
mask = batch["mask"].to(device)
targets = batch["targets"].to(device)
outputs = model(
ids=ids,
mask=mask
)
loss = criterion(outputs, targets.view(-1, 1))
epoch_loss += loss.item()
targets = targets.cpu().detach()
fin_targets.extend(targets.numpy().tolist())
outputs = torch.sigmoid(outputs).cpu().detach()
fin_outputs.extend(outputs.numpy().tolist())
outputs = np.array(fin_outputs) >= 0.5
accuracy = metrics.accuracy_score(fin_targets, outputs)
mat_cor = metrics.matthews_corrcoef(fin_targets, outputs)
return epoch_loss / len(data_loader), accuracy, mat_cor
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
```
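`eval_fn` reports the Matthews correlation coefficient because CoLA is class-imbalanced: accuracy alone can look healthy for a degenerate classifier that always predicts the majority class, while MCC drops to zero. A quick illustration with toy labels:

```
from sklearn import metrics

# imbalanced labels; the "classifier" always predicts the majority class
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]

print(metrics.accuracy_score(y_true, y_pred))     # 0.8 -- looks decent
print(metrics.matthews_corrcoef(y_true, y_pred))  # 0.0 -- no real skill
```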
### Training
```
best_valid_loss = float('inf')
for epoch in range(EPOCHS):
start_time = time.time()
train_loss = train_fn(train_data_loader, teacher_model, optimizer, criterion, device, scheduler)
val_loss, val_acc, val_mat_cor = eval_fn(valid_data_loader, teacher_model, criterion, device)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if val_loss < best_valid_loss:
best_valid_loss = val_loss
torch.save(teacher_model.state_dict(), TEACHER_MODEL_PATH)
print(f"Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s")
print(f"\t Train Loss: {train_loss:.3f}")
print(f"\t Valid Loss: {val_loss:.3f} | Valid Acc: {val_acc * 100:.2f} | Matthews Cor: {val_mat_cor:.3f}")
teacher_model.load_state_dict(torch.load(TEACHER_MODEL_PATH))
```
### Inference
```
def inference(sentence, model, device):
encoded = TOKENIZER.encode_plus(
sentence,
max_length=MAX_LEN,
add_special_tokens=True,
return_token_type_ids=False,
pad_to_max_length=True,
return_attention_mask=True,
truncation=True,
return_tensors='pt',
)
input_ids = encoded['input_ids'].to(device)
attention_mask = encoded['attention_mask'].to(device)
output = model(input_ids, attention_mask)
prediction = torch.round(torch.sigmoid(output))
print(f'Sentence: {sentence}')
print(f'Grammatically Correct: {prediction.item()}')
sentence = "I like coding"
inference(sentence, teacher_model, device)
sentence = "I myself talking to"
inference(sentence, teacher_model, device)
sentence = "I am talking to myself"
inference(sentence, teacher_model, device)
torch.cuda.empty_cache()
```
## DistilBERT Model (Standalone)
Trained standalone, without any distillation signal from the BERT teacher model.
```
class DistilBERTModelSA(nn.Module):
def __init__(self):
super().__init__()
self.bert = transformers.DistilBertModel.from_pretrained(STUDENT_MODEL_NAME)
self.bert_drop = nn.Dropout(0.3)
self.out = nn.Linear(768, OUTPUT_DIM)
def forward(self, ids, mask):
output = self.bert(ids, attention_mask=mask)
hidden = output[0]
bo = self.bert_drop(hidden[:, 0])
output = self.out(bo)
return output
student_model_sa = DistilBERTModelSA()
student_model_sa.to(device)
param_optimizer = list(student_model_sa.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_parameters = [
{
"params": [
p for n, p in param_optimizer if not any(nd in n for nd in no_decay)
],
"weight_decay": 0.001,
},
{
"params": [
p for n, p in param_optimizer if any(nd in n for nd in no_decay)
],
"weight_decay": 0.0
}
]
num_train_steps = int(len(df_train) / TRAIN_BATCH_SIZE * EPOCHS)
num_train_steps
optimizer = AdamW(optimizer_parameters, lr=3e-5)
scheduler = get_linear_schedule_with_warmup(
optimizer,
num_warmup_steps=0,
num_training_steps=num_train_steps
)
criterion = nn.BCEWithLogitsLoss().to(device)
def train_fn(data_loader, model, optimizer, criterion, device, scheduler):
model.train()
epoch_loss = 0
for batch in data_loader:
ids = batch['ids'].to(device)
mask = batch["mask"].to(device)
targets = batch["targets"].to(device)
optimizer.zero_grad()
outputs = model(
ids=ids,
mask=mask
)
loss = criterion(outputs, targets.view(-1, 1))
epoch_loss += loss.item()
loss.backward()
optimizer.step()
scheduler.step()
return epoch_loss / len(data_loader)
def eval_fn(data_loader, model, criterion, device):
model.eval()
fin_outputs = []
fin_targets = []
epoch_loss = 0
with torch.no_grad():
for batch in data_loader:
ids = batch["ids"].to(device)
mask = batch["mask"].to(device)
targets = batch["targets"].to(device)
outputs = model(
ids=ids,
mask=mask
)
loss = criterion(outputs, targets.view(-1, 1))
epoch_loss += loss.item()
targets = targets.cpu().detach()
fin_targets.extend(targets.numpy().tolist())
outputs = torch.sigmoid(outputs).cpu().detach()
fin_outputs.extend(outputs.numpy().tolist())
outputs = np.array(fin_outputs) >= 0.5
accuracy = metrics.accuracy_score(fin_targets, outputs)
mat_cor = metrics.matthews_corrcoef(fin_targets, outputs)
return epoch_loss / len(data_loader), accuracy, mat_cor
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
best_valid_loss = float('inf')
for epoch in range(EPOCHS):
start_time = time.time()
train_loss = train_fn(train_data_loader, student_model_sa, optimizer, criterion, device, scheduler)
val_loss, val_acc, val_mat_cor = eval_fn(valid_data_loader, student_model_sa, criterion, device)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if val_loss < best_valid_loss:
best_valid_loss = val_loss
torch.save(student_model_sa.state_dict(), STUDENTSA_MODEL_PATH)
print(f"Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s")
print(f"\t Train Loss: {train_loss:.3f}")
print(f"\t Valid Loss: {val_loss:.3f} | Valid Acc: {val_acc * 100:.2f} | Matthews Cor: {val_mat_cor:.3f}")
student_model_sa.load_state_dict(torch.load(STUDENTSA_MODEL_PATH))
def inference(sentence, model, device):
encoded = TOKENIZER.encode_plus(
sentence,
max_length=MAX_LEN,
add_special_tokens=True,
return_token_type_ids=False,
pad_to_max_length=True,
return_attention_mask=True,
truncation=True,
return_tensors='pt',
)
input_ids = encoded['input_ids'].to(device)
attention_mask = encoded['attention_mask'].to(device)
output = model(input_ids, attention_mask)
prediction = torch.round(torch.sigmoid(output))
print(f'Sentence: {sentence}')
print(f'Grammatically Correct: {prediction.item()}')
sentence = "I like coding"
inference(sentence, student_model_sa, device)
torch.cuda.empty_cache()
```
## DistilBERT Model (With Teacher Forcing)
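The training loop below combines the usual classification loss with two terms that pull the student's logits toward the teacher's. With student logits $z_s$, teacher logits $z_t$, labels $y$, and temperature $T$, the objective implemented in `train_fn` is

$$\mathcal{L} = \alpha_{\rm clf}\,\mathrm{BCE}(z_s, y) + \alpha_{\rm teacher}\left[\mathrm{MSE}(z_s, z_t) + \mathrm{KL}\!\left(\tfrac{z_s}{T},\ \tfrac{z_t}{T}\right)\right]$$

with $\alpha_{\rm clf} = \alpha_{\rm teacher} = 1$ and $T = 2$ by default.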
```
class DistilBERTModel(nn.Module):
def __init__(self):
super().__init__()
self.bert = transformers.DistilBertModel.from_pretrained(STUDENT_MODEL_NAME)
self.bert_drop = nn.Dropout(0.3)
self.out = nn.Linear(768, OUTPUT_DIM)
def forward(self, ids, mask):
output = self.bert(ids, attention_mask=mask)
hidden = output[0]
bo = self.bert_drop(hidden[:, 0])
output = self.out(bo)
return output
student_model = DistilBERTModel()
student_model.to(device)
param_optimizer = list(student_model.named_parameters())
no_decay = ["bias", "LayerNorm.bias", "LayerNorm.weight"]
optimizer_parameters = [
{
"params": [
p for n, p in param_optimizer if not any(nd in n for nd in no_decay)
],
"weight_decay": 0.001,
},
{
"params": [
p for n, p in param_optimizer if any(nd in n for nd in no_decay)
],
"weight_decay": 0.0
}
]
num_train_steps = int(len(df_train) / TRAIN_BATCH_SIZE * EPOCHS)
num_train_steps
optimizer = AdamW(optimizer_parameters, lr=3e-5)
scheduler = get_linear_schedule_with_warmup(
optimizer,
num_warmup_steps=0,
num_training_steps=num_train_steps
)
criterion = nn.BCEWithLogitsLoss().to(device)
MSE_loss = nn.MSELoss(reduction='mean')
KLD_loss = nn.KLDivLoss(reduction="batchmean")
def train_fn(data_loader, model, teacher_model, optimizer, criterion, device, scheduler, alpha_clf=1.0, alpha_teacher=1.0, temperature=2.0):
model.train()
epoch_clf_loss = 0
epoch_total_loss = 0
for batch in data_loader:
ids = batch['ids'].to(device)
mask = batch["mask"].to(device)
targets = batch["targets"].to(device)
optimizer.zero_grad()
student_logits = model(
ids=ids,
mask=mask
)
with torch.no_grad():
teacher_logits = teacher_model(
ids=ids,
mask=mask
)
mse_loss = MSE_loss(student_logits, teacher_logits)
kld_loss = KLD_loss(
(student_logits / temperature),
(teacher_logits / temperature),
)
clf_loss = criterion(student_logits, targets.view(-1, 1))
teacher_loss = mse_loss + kld_loss
loss = alpha_clf * clf_loss + alpha_teacher * teacher_loss
epoch_clf_loss += clf_loss.item()
epoch_total_loss += loss.item()
loss.backward()
optimizer.step()
scheduler.step()
return epoch_clf_loss / len(data_loader), epoch_total_loss / len(data_loader)
def eval_fn(data_loader, model, teacher_model, criterion, device, alpha_clf=1.0, alpha_teacher=1.0, temperature=2.0):
model.eval()
fin_outputs = []
fin_targets = []
epoch_clf_loss = 0
epoch_total_loss = 0
with torch.no_grad():
for batch in data_loader:
ids = batch["ids"].to(device)
mask = batch["mask"].to(device)
targets = batch["targets"].to(device)
student_logits = model(
ids=ids,
mask=mask
)
with torch.no_grad():
teacher_logits = teacher_model(
ids=ids,
mask=mask
)
mse_loss = MSE_loss(student_logits, teacher_logits)
kld_loss = KLD_loss(
(student_logits / temperature),
(teacher_logits / temperature),
)
clf_loss = criterion(student_logits, targets.view(-1, 1))
teacher_loss = mse_loss + kld_loss
loss = alpha_clf * clf_loss + alpha_teacher * teacher_loss
epoch_clf_loss += clf_loss.item()
epoch_total_loss += loss.item()
targets = targets.cpu().detach()
fin_targets.extend(targets.numpy().tolist())
outputs = torch.sigmoid(student_logits).cpu().detach()
fin_outputs.extend(outputs.numpy().tolist())
outputs = np.array(fin_outputs) >= 0.5
accuracy = metrics.accuracy_score(fin_targets, outputs)
mat_cor = metrics.matthews_corrcoef(fin_targets, outputs)
return epoch_clf_loss / len(data_loader), epoch_total_loss / len(data_loader), accuracy, mat_cor
def epoch_time(start_time, end_time):
elapsed_time = end_time - start_time
elapsed_mins = int(elapsed_time / 60)
elapsed_secs = int(elapsed_time - (elapsed_mins * 60))
return elapsed_mins, elapsed_secs
teacher_model.load_state_dict(torch.load(TEACHER_MODEL_PATH))
best_valid_loss = float('inf')
for epoch in range(EPOCHS):
start_time = time.time()
train_clf_loss, train_total_loss = train_fn(train_data_loader, student_model, teacher_model, optimizer, criterion, device, scheduler)
val_clf_loss, val_total_loss, val_acc, val_mat_cor = eval_fn(valid_data_loader, student_model, teacher_model, criterion, device)
end_time = time.time()
epoch_mins, epoch_secs = epoch_time(start_time, end_time)
if val_total_loss < best_valid_loss:
best_valid_loss = val_total_loss
torch.save(student_model.state_dict(), STUDENT_MODEL_PATH)
print(f"Epoch: {epoch+1:02} | Epoch Time: {epoch_mins}m {epoch_secs}s")
print(f"\t Train CLF Loss: {train_clf_loss:.3f} | Train total Loss: {train_total_loss:.3f}")
print(f"\t Valid CLF Loss: {val_clf_loss:.3f} | Valid total Loss: {val_total_loss:.3f}")
print(f"\t Valid Acc: {val_acc * 100:.2f} | Matthews Cor: {val_mat_cor:.3f}")
student_model.load_state_dict(torch.load(STUDENT_MODEL_PATH))
def inference(sentence, model, device):
encoded = TOKENIZER.encode_plus(
sentence,
max_length=MAX_LEN,
add_special_tokens=True,
return_token_type_ids=False,
pad_to_max_length=True,
return_attention_mask=True,
truncation=True,
return_tensors='pt',
)
input_ids = encoded['input_ids'].to(device)
attention_mask = encoded['attention_mask'].to(device)
output = model(input_ids, attention_mask)
prediction = torch.round(torch.sigmoid(output))
print(f'Sentence: {sentence}')
print(f'Grammatically Correct: {prediction.item()}')
sentence = "I like coding"
inference(sentence, student_model, device)
```
# Welcome!
Below, we will learn to implement and train a policy to play atari-pong, using only the pixels as input. We will use convolutional neural nets, multiprocessing, and pytorch to implement and train our policy. Let's get started!
(I strongly recommend trying this notebook in the Udacity workspace before running it locally on your desktop/laptop, as performance might suffer in other environments.)
```
# install package for displaying animation
!pip install JSAnimation
# custom utilies for displaying animation, collecting rollouts and more
import pong_utils
%matplotlib inline
# check which device is being used.
# I recommend disabling gpu until you've made sure that the code runs
device = pong_utils.device
print("using device: ",device)
# render ai gym environment
import gym
import time
# PongDeterministic does not contain random frameskip
# so is faster to train than the vanilla Pong-v4 environment
env = gym.make('PongDeterministic-v4')
print("List of available actions: ", env.unwrapped.get_action_meanings())
# we will only use the actions 'RIGHTFIRE' = 4 and 'LEFTFIRE' = 5
# the 'FIRE' part ensures that the game starts again after losing a life
# the actions are hard-coded in pong_utils.py
```
## Preprocessing
To speed up training, we can simplify the input by cropping the images and using only every other pixel
```
import matplotlib
import matplotlib.pyplot as plt
# show what a preprocessed image looks like
env.reset()
_, _, _, _ = env.step(0)
# get a frame after 20 steps
for _ in range(20):
frame, _, _, _ = env.step(1)
plt.subplot(1,2,1)
plt.imshow(frame)
plt.title('original image')
plt.subplot(1,2,2)
plt.title('preprocessed image')
# 80 x 80 black and white image
plt.imshow(pong_utils.preprocess_single(frame), cmap='Greys')
plt.show()
```
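`pong_utils.preprocess_single` is provided by the course utilities; a minimal sketch of this style of preprocessing (crop the score bar, keep every other pixel, subtract the background color) might look like the following. The crop rows and background-color constant here are illustrative assumptions, not necessarily the utility's exact values.

```python
import numpy as np

# Sketch: crop the score bar, downsample by taking every other row/column,
# remove the (assumed) background color, and scale to [0, 1].
def preprocess_sketch(frame, bkg_color=np.array([144, 72, 17])):
    img = np.mean(frame[34:-16:2, ::2] - bkg_color, axis=-1) / 255.0
    return img

frame = np.zeros((210, 160, 3), dtype=np.uint8)  # raw Atari frame shape
print(preprocess_sketch(frame).shape)  # (80, 80)
```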
# Policy
## Exercise 1: Implement your policy
Here, we define our policy. The input is the stack of two different frames (which captures the movement), and the output is a number $P_{\rm right}$, the probability of moving right. Note that $P_{\rm left}= 1-P_{\rm right}$
```
import torch
import torch.nn as nn
import torch.nn.functional as F
# set up a convolutional neural net
# the output is the probability of moving right
# P(left) = 1-P(right)
class Policy(nn.Module):
def __init__(self):
super(Policy, self).__init__()
# 80x80 to outputsize x outputsize
# outputsize = (inputsize - kernel_size + stride)/stride
# (round up if not an integer)
# 80x80X2 to 38X38X4
# outputsize = (80 - 6 + 2)/2 = 38
self.conv1 = nn.Conv2d(2, 4, kernel_size=6, stride=2, bias=False)
# 38X38X4 to 9X9X16
# outputsize = (38 - 6 + 4)/4 = 9
self.conv2 = nn.Conv2d(4, 16, kernel_size=6, stride=4)
self.size=9*9*16
# two fully connected layer
self.fc1 = nn.Linear(self.size, 256)
self.fc2 = nn.Linear(256, 1)
# Sigmoid to convert the output to a probability
self.sig = nn.Sigmoid()
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = x.view(-1,self.size) # flatten the tensor
x = F.relu(self.fc1(x))
return self.sig(self.fc2(x))
# use your own policy!
policy=Policy().to(device)
# we use the adam optimizer with learning rate 1e-4
# optim.SGD is also possible
import torch.optim as optim
optimizer = optim.Adam(policy.parameters(), lr=1e-4)
```
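The output-size arithmetic in the comments above can be verified directly (a quick standalone check of the formula, independent of torch):

```python
import math

# outputsize = (inputsize - kernel_size + stride) / stride, rounded up
def conv_out(inputsize, kernel_size, stride):
    return math.ceil((inputsize - kernel_size + stride) / stride)

assert conv_out(80, 6, 2) == 38  # conv1: 80x80 -> 38x38
assert conv_out(38, 6, 4) == 9   # conv2: 38x38 -> 9x9
print(9 * 9 * 16)  # 1296 = self.size, the flattened input to fc1
```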
# Game visualization
pong_utils contains a play function that, given the environment and a policy, plays a game. An optional preprocess function can be supplied. Here we watch the (so far untrained) policy play.
```
pong_utils.play(env, policy, time=100)
# try to add the option "preprocess=pong_utils.preprocess_single"
# to see what the agent sees
```
# Rollout
Before we start training, we need to collect samples. To make this efficient, we use parallelized environments to collect multiple trajectories at once
```
envs = pong_utils.parallelEnv('PongDeterministic-v4', n=4, seed=12345)
prob, state, action, reward = pong_utils.collect_trajectories(envs, policy, tmax=100)
print(prob) ## Notice that the probs for all the actions are almost 0.5 i.e. random
```
# Function Definitions
Here you will define key functions for training.
## Exercise 2: write your own function for training
(this is the same as policy_loss except the negative sign)
### REINFORCE
you have two choices (it is usually useful to divide by the time $T$, since we've normalized our rewards and the length of each trajectory is fixed)
1. $\frac{1}{T}\sum^T_t R_{t}^{\rm future}\log(\pi_{\theta'}(a_t|s_t))$
2. $\frac{1}{T}\sum^T_t R_{t}^{\rm future}\frac{\pi_{\theta'}(a_t|s_t)}{\pi_{\theta}(a_t|s_t)}$ where $\theta'=\theta$ and make sure that the no_grad is enabled when performing the division
```
def discounted_future_rewards(rewards, ratio=0.999):
n = rewards.shape[1]
step = torch.arange(n)[:,None] - torch.arange(n)[None,:]
ones = torch.ones_like(step)
zeros = torch.zeros_like(step)
target = torch.where(step >= 0, ones, zeros)
step = torch.where(step >= 0, step, zeros)
discount = target * (ratio ** step)
discount = discount.to(device)
rewards_discounted = torch.mm(rewards, discount)
return rewards_discounted
def surrogate(policy, old_probs, states, actions, rewards,
discount = 0.995, beta=0.01):
actions = torch.tensor(actions, dtype=torch.int8, device=device)
rewards = torch.tensor(rewards, dtype=torch.float, device=device)
old_probs = torch.tensor(old_probs, dtype=torch.float, device=device)
# convert states to policy (or probability)
new_probs = pong_utils.states_to_prob(policy, states)
new_probs = torch.where(actions == pong_utils.RIGHT, new_probs, 1.0-new_probs)
# discounted cumulative reward
R_future = discounted_future_rewards(rewards, discount)
# subtract baseline (= mean of reward)
R_mean = torch.mean(R_future)
R_future -= R_mean
# target to maximize for the policy gradient
surrogates = (R_future * torch.log(new_probs)).mean()
# include a regularization term
# this steers new_policy towards 0.5
# which prevents the policy from becoming exactly 0 or 1
# this helps with exploration
# add in 1.e-10 to avoid log(0) which gives nan
# entropy = -(new_probs*torch.log(old_probs+1.e-10) + (1.0-new_probs)*torch.log(1.0-old_probs+1.e-10))
# surrogates += torch.mean(beta*entropy)
return surrogates
Lsur= surrogate(policy, prob, state, action, reward)
print(Lsur)
```
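As a sanity check on the discounting logic, the same quantity computed by `discounted_future_rewards` can be obtained with a simple backward recursion (a standalone numpy sketch, not the notebook's torch code):

```python
import numpy as np

# Entry t of the result is sum_{s >= t} ratio**(s - t) * r_s,
# accumulated from the end of the trajectory backwards.
def discounted_future_np(rewards, ratio):
    out = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + ratio * running
        out[t] = running
    return out

r = discounted_future_np([1.0, 0.0, 0.0, 1.0], ratio=0.5)
assert np.allclose(r, [1.125, 0.25, 0.5, 1.0])
```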
# Training
We are now ready to train our policy!
WARNING: make sure the GPU is turned on, which also enables multicore processing. Training may take up to 45 minutes even with the GPU enabled; without it, it will take much longer!
```
from parallelEnv import parallelEnv
import numpy as np
# WARNING: running through all 800 episodes will take 30-45 minutes
# training loop max iterations
episode = 2000
# episode = 800
# widget bar to display progress
!pip install progressbar
import progressbar as pb
widget = ['training loop: ', pb.Percentage(), ' ',
pb.Bar(), ' ', pb.ETA() ]
timer = pb.ProgressBar(widgets=widget, maxval=episode).start()
# initialize environment
envs = parallelEnv('PongDeterministic-v4', n=16, seed=1234)
discount_rate = .99
beta = .01
tmax = 100
# keep track of progress
mean_rewards = []
for e in range(episode):
# collect trajectories
old_probs, states, actions, rewards = \
pong_utils.collect_trajectories(envs, policy, tmax=tmax)
total_rewards = np.sum(rewards, axis=0)
# this is the SOLUTION!
# use your own surrogate function
# L = -surrogate(policy, old_probs, states, actions, rewards, beta=beta)
L = -pong_utils.surrogate(policy, old_probs, states, actions, rewards, beta=beta)
optimizer.zero_grad()
L.backward()
optimizer.step()
del L
# the regulation term also reduces
# this reduces exploration in later runs
beta*=.995
# get the average reward of the parallel environments
mean_rewards.append(np.mean(total_rewards))
# display some progress every 20 iterations
if (e+1)%20 ==0 :
print("Episode: {0:d}, score: {1:f}".format(e+1,np.mean(total_rewards)))
print(total_rewards)
# update progress widget bar
timer.update(e+1)
timer.finish()
# play game after training!
pong_utils.play(env, policy, time=2000)
plt.plot(mean_rewards)
# save your policy!
torch.save(policy, 'REINFORCE.policy')
# load your policy if needed
# policy = torch.load('REINFORCE.policy')
# try and test out the solution!
# policy = torch.load('PPO_solution.policy')
```
# Lecture 2.0.1: Numpy Random and Random Graphs

Numpy is not only cool because it handles arrays quite fast (btw, there is C under the hood), but it also has submodules that handle a variety of different math tasks. We are going to learn about random, which may be of use for our network analysis.
## Random
[random](https://docs.scipy.org/doc/numpy/reference/routines.random.html) provides a big number of instances for random sampling
```
import numpy as np
import numpy.random as random
```
### Random functions
Random is somewhat redundant...<br/> random.random(size), random.random_sample(size), random.ranf(size), and random.sample(size) all do exactly the same thing, i.e. sample an array of shape size from [0, 1)
```
random.random()
random.random(2)
random.random([2,2])
```
random.randint(low, high) instead samples from an interval of integers **[high is exclusive]**
```
random.randint(0,42)
for i in range(10):
    if random.randint(0, 1) != 0:
        print(i)
```
random.choice(array) randomly chooses an element from the array
```
random.choice(np.arange(42))
random.choice(np.arange(42), size=42)
out=random.choice(np.arange(42), size=42)
len(np.unique(out))==len(out)
```
Notice that the last comparison is False: by default random.choice samples with replacement, so duplicates can appear.
```
random.choice(np.arange(42), size=42, replace=False)
out=random.choice(np.arange(42), size=42, replace=False)
len(np.unique(out))==len(out)
```
Sampling the whole array without replacement is analogous to random.permutation()
```
out=random.permutation(np.arange(42))
len(np.unique(out))==len(out)
```
random.shuffle(array), in contrast, shuffles the given array in place
```
out=np.arange(42)
out
random.shuffle(out)
out
```
### Exercise: generate an instance of a (grand-canonical) Erdös-Renyi random graph for the monopartite network of the previous notebook
In an Erdös-Renyi random graph every link has a probability equal to the link density, i.e. $p^\text{RG}=\dfrac{2L}{N(N-1)}$. In the grand-canonical approach each possible link is an independent event with probability of success equal to $p^\text{RG}$. Thus, the total number of links is conserved on average.
#### Load the monopartite adjacency matrix
#### Define the probability per couple of nodes
#### Generate the new adjacency matrix
In principle the number of links is not conserved on a single instance.
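A minimal numpy sketch of one grand-canonical instance (the monopartite adjacency matrix from the previous notebook is assumed; a small symmetric example stands in for it here):

```python
import numpy as np
import numpy.random as random

# Small symmetric adjacency matrix standing in for the real one
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
N = A.shape[0]
L = A.sum() / 2                        # number of links
p_rg = 2 * L / (N * (N - 1))           # link density

random.seed(42)
draw = random.random((N, N)) < p_rg    # independent Bernoulli trial per pair
A_er = np.triu(draw, k=1)              # keep upper triangle only (no self-loops)
A_er = (A_er + A_er.T).astype(int)     # symmetrize

assert (A_er == A_er.T).all() and np.trace(A_er) == 0
```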
### Exercise: generate an instance of a (µ-canonical) Erdös-Renyi random graph for the monopartite network of the previous notebook
Here the solution is to use a trick: random.shuffle can reshuffle matrices too, but it does so by shuffling only the rows.
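A variant of the trick that shuffles the flattened upper-triangle entries directly (rather than row-shuffling) makes it easy to see why the number of links is conserved exactly. Again a small example stands in for the real adjacency matrix:

```python
import numpy as np
import numpy.random as random

# Small symmetric adjacency matrix standing in for the real one
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
N = A.shape[0]
iu = np.triu_indices(N, k=1)   # indices of the strict upper triangle
entries = A[iu].copy()
random.shuffle(entries)        # in-place shuffle of the 1-d entry vector
A_mc = np.zeros_like(A)
A_mc[iu] = entries
A_mc = A_mc + A_mc.T           # symmetrize

assert A_mc.sum() == A.sum()   # link count conserved exactly
```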
### Exercise: generate an instance of a (grand-canonical) Configuration Model <br/>(Chung-Lu version) <br/>for the monopartite network of the previous notebook
In a configuration model a la Chung-Lu (better, in the Chung-Lu approximation, from now on CLA), the probability per link is $p^\text{CLA}_{ij}=\dfrac{k_ik_j}{2m}$, where $k_i$ is the degree of the real network of the node $i$. **Exercise in the exercise:** use a piece of paper and a pen to show that the mean degree over the ensemble is equal to the one of the real network.
### Exercise: compare the real $k^\text{nn}$ with the average and $\sigma$ over a sample of 1000 instances of the grand-canonical ER and CLA-CM
In order to get the names of the films:
```
bip_el=np.genfromtxt('./data/imdb_2018_films_actors.txt', delimiter='\t', dtype=[('film', 'U50'),('actor', 'U50')])
films, k_films=np.unique(bip_el['film'], return_counts=True)
```
##### Define a function calculating the $k^\text{nn}$
##### Define a function generating an element of the grand canonical ER
##### Save the $k^\text{nn}$ on the real matrix somewhere
##### Generate a vector of $k^\text{nn}$s vectors, whose elements are the value from the sample of a grand-ER
By the way, what do I expect about the distribution of the $k^\text{nn}$? Why?
##### CLA-CM sampler
##### Generate a vector of $k^\text{nn}$s vectors, whose elements are the value from the sample of a grand-CLACM
#### Select movies whose $k^\text{nn}$ is significantly more than expected...
#### ...and those whose $k^\text{nn}$ is significantly less than expected.
# Continuous Control
---
You are welcome to use this coding environment to train your agent for the project. Follow the instructions below to get started!
### 1. Start the Environment
Run the next code cell to install a few packages. This line will take a few minutes to run!
```
!pip -q install ./python
```
Both versions of the environment are already saved in the Workspace and can be accessed at the file paths provided below.
Please select one of the two options below for loading the environment.
```
from unityagents import UnityEnvironment
import numpy as np
# select this option to load version 1 (with a single agent) of the environment
env = UnityEnvironment(file_name='/data/Reacher_One_Linux_NoVis/Reacher_One_Linux_NoVis.x86_64')
# select this option to load version 2 (with 20 agents) of the environment
# env = UnityEnvironment(file_name='/data/Reacher_Linux_NoVis/Reacher.x86_64')
```
Environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
```
### 2. Examine the State and Action Spaces
Run the code cell below to print some information about the environment.
```
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
env_info.vector_observations
```
### 3. Train the Agent
In the next code cell, you will train the DDPG agent
```
import torch
import numpy as np
from collections import deque
import matplotlib.pyplot as plt
%matplotlib inline
from ddpg_agent import Agent
agent = Agent(state_size=33, action_size=4, random_seed=2)
def ddpg(n_episodes=1500, max_t=1500, print_every=100):
scores_deque = deque(maxlen=print_every)
scores = []
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name]
state = env_info.vector_observations[0]
agent.reset()
score = 0
for t in range(max_t):
action = agent.act(state)
env_info = env.step(action)[brain_name]
next_state = env_info.vector_observations[0] # get next state
reward = env_info.rewards[0] # get reward
done = env_info.local_done[0] # see if episode finished
agent.step(state, action, reward, next_state, done, t)
state = next_state
score += reward
if done:
break
scores_deque.append(score)
scores.append(score)
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)), end="")
torch.save(agent.actor_local.state_dict(), 'checkpoint_actor.pth')
torch.save(agent.critic_local.state_dict(), 'checkpoint_critic.pth')
if i_episode % print_every == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_deque)))
if np.mean(scores_deque) >= 30.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_deque)))
torch.save(agent.actor_local.state_dict(), 'checkpoint_actor.pth')
torch.save(agent.critic_local.state_dict(), 'checkpoint_critic.pth')
break
return scores
scores = ddpg()
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(1, len(scores)+1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
```
When finished, you can close the environment.
```
env.close()
```
# Machine Bottleneck
This notebook demonstrates the formulation and solution of a machine bottleneck problem using Pyomo. The task is to schedule a set of jobs on a single machine given the release time, duration, and due time for each job. Data for the example problem is from Christelle Gueret, Christian Prins, Marc Sevaux, "Applications of Optimization with Xpress-MP," Chapter 5, Dash Optimization, 2000.
## Imports
```
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import display
import pandas as pd
import shutil
import sys
import os.path
if not shutil.which("pyomo"):
!pip install -q pyomo
assert(shutil.which("pyomo"))
if not (shutil.which("cbc") or os.path.isfile("cbc")):
if "google.colab" in sys.modules:
!apt-get install -y -qq coinor-cbc
else:
try:
!conda install -c conda-forge coincbc
except:
pass
assert(shutil.which("cbc") or os.path.isfile("cbc"))
from pyomo.environ import *
from pyomo.gdp import *
```
## Example
The problem is to schedule a sequence of jobs for a single machine. The data consists of a Python dictionary of jobs. Each job is labeled by a key, and an associated data dictionary provides the time at which the job is released for machine processing, the expected duration of the job, and the due date. The problem is to sequence the jobs on the machine to meet the due dates, or show that no such sequence is possible.
```
JOBS = {
'A': {'release': 2, 'duration': 5, 'due': 10},
'B': {'release': 5, 'duration': 6, 'due': 21},
'C': {'release': 4, 'duration': 8, 'due': 15},
'D': {'release': 0, 'duration': 4, 'due': 10},
'E': {'release': 0, 'duration': 2, 'due': 5},
'F': {'release': 8, 'duration': 3, 'due': 15},
'G': {'release': 9, 'duration': 2, 'due': 22},
}
JOBS
```
### Gantt chart
A traditional means of visualizing scheduling data is the Gantt chart. The next cell presents a function `gantt` that plots a Gantt chart given JOBS and SCHEDULE information. Two charts are presented showing job schedule and machine schedule. If no machine information is contained in SCHEDULE, then it is assumed to be a single machine operation.
```
def gantt(JOBS, SCHEDULE={}):
bw = 0.3
plt.figure(figsize=(12, 0.7*(len(JOBS.keys()))))
idx = 0
for j in sorted(JOBS.keys()):
x = JOBS[j]['release']
y = JOBS[j]['due']
plt.fill_between([x,y],[idx-bw,idx-bw],[idx+bw,idx+bw], color='cyan', alpha=0.6)
if j in SCHEDULE.keys():
x = SCHEDULE[j]['start']
y = SCHEDULE[j]['finish']
plt.fill_between([x,y],[idx-bw,idx-bw],[idx+bw,idx+bw], color='red', alpha=0.5)
plt.plot([x,y,y,x,x], [idx-bw,idx-bw,idx+bw,idx+bw,idx-bw],color='k')
plt.text((SCHEDULE[j]['start'] + SCHEDULE[j]['finish'])/2.0,idx,
'Job ' + j, color='white', weight='bold',
horizontalalignment='center', verticalalignment='center')
idx += 1
plt.ylim(-0.5, idx-0.5)
plt.title('Job Schedule')
plt.xlabel('Time')
plt.ylabel('Jobs')
plt.yticks(range(len(JOBS)), JOBS.keys())
plt.grid()
xlim = plt.xlim()
if SCHEDULE:
for j in SCHEDULE.keys():
if 'machine' not in SCHEDULE[j].keys():
SCHEDULE[j]['machine'] = 1
MACHINES = sorted(set([SCHEDULE[j]['machine'] for j in SCHEDULE.keys()]))
plt.figure(figsize=(12, 0.7*len(MACHINES)))
for j in sorted(SCHEDULE.keys()):
idx = MACHINES.index(SCHEDULE[j]['machine'])
x = SCHEDULE[j]['start']
y = SCHEDULE[j]['finish']
plt.fill_between([x,y],[idx-bw,idx-bw],[idx+bw,idx+bw], color='red', alpha=0.5)
plt.plot([x,y,y,x,x], [idx-bw,idx-bw,idx+bw,idx+bw,idx-bw],color='k')
plt.text((SCHEDULE[j]['start'] + SCHEDULE[j]['finish'])/2.0,idx,
'Job ' + j, color='white', weight='bold',
horizontalalignment='center', verticalalignment='center')
plt.xlim(xlim)
plt.ylim(-0.5, len(MACHINES)-0.5)
plt.title('Machine Schedule')
plt.yticks(range(len(MACHINES)), MACHINES)
plt.ylabel('Machines')
plt.grid()
gantt(JOBS)
```
## The machine scheduling problem
A schedule consists of a dictionary listing the start and finish times for each job. Once the order of jobs has been determined, the start time can be no earlier than when the job is released for processing, and no earlier than the finish of the previous job.
The following cell presents a function which, given the JOBS data and an order list of jobs indices, computes the start and finish times for all jobs on a single machine. We use this to determine the schedule if the jobs are executed in alphabetical order.
```
def schedule(JOBS, order=sorted(JOBS.keys())):
"""Schedule a dictionary of JOBS on a single machine in a specified order."""
start = 0
finish = 0
SCHEDULE = {}
for job in order:
start = max(JOBS[job]['release'], finish)
finish = start + JOBS[job]['duration']
SCHEDULE[job] = {'start': start, 'finish': finish}
return SCHEDULE
SCHEDULE = schedule(JOBS)
SCHEDULE
```
Here we demonstrate a 'partial schedule'.
```
gantt(JOBS, schedule(JOBS, ['E', 'D', 'A', 'C', 'B']))
```
Here's a schedule where jobs are done in alphabetical order.
```
gantt(JOBS, SCHEDULE)
```
### Key performance indicators
As presented above, a given schedule may not meet all of the due time requirements. In fact, a schedule meeting all of the requirements might not even be possible. So given a schedule, it is useful to have a function that computes key performance indicators.
```
def kpi(JOBS, SCHEDULE):
KPI = {}
KPI['Makespan'] = max(SCHEDULE[job]['finish'] for job in SCHEDULE)
KPI['Max Pastdue'] = max(max(0, SCHEDULE[job]['finish'] - JOBS[job]['due']) for job in SCHEDULE)
KPI['Sum of Pastdue'] = sum(max(0, SCHEDULE[job]['finish'] - JOBS[job]['due']) for job in SCHEDULE)
KPI['Number Pastdue'] = sum(SCHEDULE[job]['finish'] > JOBS[job]['due'] for job in SCHEDULE)
KPI['Number on Time'] = sum(SCHEDULE[job]['finish'] <= JOBS[job]['due'] for job in SCHEDULE)
KPI['Fraction on Time'] = KPI['Number on Time']/len(SCHEDULE)
return KPI
kpi(JOBS, SCHEDULE)
```
### Exercise
Show the Gantt chart and key performance metrics if the jobs are executed in reverse alphabetical order.
```
order = sorted(JOBS, reverse=True)
gantt(JOBS, schedule(JOBS,order))
kpi(JOBS, schedule(JOBS,order))
```
## Empirical scheduling
There are a number of commonly encountered empirical rules for scheduling jobs on a single machine. These include:
* First-In First-Out (FIFO)
* Last-In, First-Out (LIFO)
* Shortest Processing Time First (SPT)
* Earliest Due Date (EDD)
### First-in first-out
As an example, we'll first look at 'First-In-First-Out' scheduling which executes job in the order they are released. The following function sorts jobs by release time, then schedules the jobs to execute in that order. A job can only be started no earlier than when it is released.
```
def fifo(JOBS):
order_by_release = sorted(JOBS, key=lambda job: JOBS[job]['release'])
return schedule(JOBS, order_by_release)
SCHEDULE = fifo(JOBS)
gantt(JOBS, SCHEDULE)
kpi(JOBS, SCHEDULE)
```
### Last-in, first-out
```
def lifo(JOBS):
unfinished_jobs = set(JOBS.keys())
start = 0
SCHEDULE = {}
while len(unfinished_jobs) > 0:
start = max(start, min(JOBS[job]['release'] for job in unfinished_jobs))
lifo = {job:JOBS[job]['release'] for job in unfinished_jobs if JOBS[job]['release'] <= start}
job = max(lifo, key=lifo.get)
finish = start + JOBS[job]['duration']
unfinished_jobs.remove(job)
SCHEDULE[job] = {'machine': 1, 'start': start, 'finish': finish}
start = finish
return SCHEDULE
gantt(JOBS, lifo(JOBS))
kpi(JOBS, lifo(JOBS))
```
### Earliest due date
```
def edd(JOBS):
unfinished_jobs = set(JOBS.keys())
start = 0
SCHEDULE = {}
while len(unfinished_jobs) > 0:
start = max(start, min(JOBS[job]['release'] for job in unfinished_jobs))
edd = {job:JOBS[job]['due'] for job in unfinished_jobs if JOBS[job]['release'] <= start}
job = min(edd, key=edd.get)
finish = start + JOBS[job]['duration']
unfinished_jobs.remove(job)
SCHEDULE[job] = {'machine': 1, 'start': start, 'finish': finish}
start = finish
return SCHEDULE
gantt(JOBS, edd(JOBS))
kpi(JOBS, edd(JOBS))
```
### Shortest processing time
```
def spt(JOBS):
unfinished_jobs = set(JOBS.keys())
start = 0
SCHEDULE = {}
while len(unfinished_jobs) > 0:
start = max(start, min(JOBS[job]['release'] for job in unfinished_jobs))
spt = {job:JOBS[job]['duration'] for job in unfinished_jobs if JOBS[job]['release'] <= start}
job = min(spt, key=spt.get)
finish = start + JOBS[job]['duration']
unfinished_jobs.remove(job)
SCHEDULE[job] = {'machine': 1, 'start': start, 'finish': finish}
start = finish
return SCHEDULE
gantt(JOBS, spt(JOBS))
kpi(JOBS, spt(JOBS))
```
## Modeling
### Data
The data for this problem consists of a list of jobs. Each job is tagged with a unique ID along with numerical data giving the time at which the job will be released for machine processing, the expected duration, and the time at which it is due.
| Symbol | Description |
| ------ | :---------- |
| $\text{ID}_{j}$ | Unique ID for task $j$ |
| $\text{due}_{j}$ | Due time for task $j$ |
| $\text{duration}_{j}$ | Duration of task $j$ |
| $\text{release}_{j}$ | Time task $j$ becomes available for processing |
### Decision variables
For a single machine, the essential decision variable is the start time at which the job begins processing.
| Symbol | Description |
| ------ | :---------- |
| $\text{start}_{j}$ | Start of task $j$ |
| $\text{makespan}$ | Time to complete *all* jobs. |
| $\text{pastdue}_{j}$ | Time by which task $j$ is past due |
| $\text{early}_{j}$ | Time by which task $j$ is finished early |
A job cannot start until it is released for processing
\begin{align*}
\text{start}_{j} & \geq \text{release}_{j}\\
\end{align*}
Once released for processing, we assume the processing continues until the job is finished. The finish time is compared to the due time, and the result stored in either the early or pastdue decision variables. These decision variables are needed to handle cases where it might not be possible to complete all jobs by the time they are due.
\begin{align*}
\text{start}_{j} + \text{duration}_{j} + \text{early}_{j} & = \text{due}_{j} + \text{pastdue}_{j}\\
\text{early}_{j} & \geq 0 \\
\text{pastdue}_{j} & \geq 0
\end{align*}
Finally, we include a single decision variable measuring the overall makespan for all jobs.
\begin{align*}
\text{start}_{j} +\text{duration}_{j} \leq \text{makespan}
\end{align*}
The final set of constraints requires that, for any given pair of jobs $j$ and $k$, either $j$ finishes before $k$ starts, or $k$ finishes before $j$ starts. The binary variable $y_{j,k} = 0$ forces job $j$ to finish before job $k$ starts, while $y_{j,k} = 1$ forces the reverse. Note that we only need to consider pairs with $j < k$
\begin{align*}
\text{start}_{j}+\text{duration}_{j} & \leq \text{start}_{k}+My_{j,k}\\
\text{start}_{k}+\text{duration}_{k} & \leq \text{start}_{j}+M(1-y_{j,k})
\end{align*}
where $M$ is sufficiently large to ensure the relaxed constraint is satisfied for all plausible values of the decision variables.
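The role of $M$ and the indicator variable can be checked by brute force on a tiny hypothetical two-job example (the durations 3 and 4 are made-up numbers; $M = 100$ as in the model below):

```python
M = 100
dur_j, dur_k = 3, 4  # hypothetical durations

def feasible(start_j, start_k, y):
    # j finishes before k starts (binding when y == 0) ...
    c1 = start_j + dur_j <= start_k + M * y
    # ... or k finishes before j starts (binding when y == 1)
    c2 = start_k + dur_k <= start_j + M * (1 - y)
    return c1 and c2

# Overlapping schedules are infeasible for both values of y
assert not feasible(0, 1, 0) and not feasible(0, 1, 1)
# Non-overlapping schedules are feasible for the matching y
assert feasible(0, 3, 0)   # j occupies [0, 3], then k occupies [3, 7]
assert feasible(5, 0, 1)   # k occupies [0, 4], then j occupies [5, 8]
```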
## Big-M model
We'll take a step-by-step approach to the construction of a "Big-M" model.
### Step 1. An incomplete bare-bones model
We'll start this model building exercise with just enough variables and constraints to get an answer. This is not a complete model and will therefore give non-physical answers. But it does give a scaffold for further model building.
This first model includes decision variables for the start and finish of each job, a decision variable for makespan, and constraints that define the relationships among these decision variables. The objective function is to minimize makespan.
```
def opt_schedule(JOBS):
# create model
m = ConcreteModel()
# index set to simplify notation
m.JOBS = Set(initialize=JOBS.keys())
# decision variables
m.start = Var(m.JOBS, domain=NonNegativeReals)
m.finish = Var(m.JOBS, domain=NonNegativeReals)
# additional decision variables for use in the objecive
m.makespan = Var(domain=NonNegativeReals)
# objective function
m.OBJ = Objective(expr = m.makespan, sense = minimize)
# constraints
m.c = ConstraintList()
for j in m.JOBS:
m.c.add(m.finish[j] == m.start[j] + JOBS[j]['duration'])
m.c.add(m.finish[j] <= m.makespan)
SolverFactory('cbc').solve(m)
SCHEDULE = {}
for j in m.JOBS:
SCHEDULE[j] = {'machine': 1, 'start': m.start[j](), 'finish': m.start[j]() + JOBS[j]['duration']}
return SCHEDULE
SCHEDULE = opt_schedule(JOBS)
gantt(JOBS, SCHEDULE)
kpi(JOBS, SCHEDULE)
```
### Step 2. Add release time information
Obviously some jobs are being started before they are released for processing. The next version of the model adds that constraint.
```
def opt_schedule(JOBS):
# create model
m = ConcreteModel()
# index set to simplify notation
m.JOBS = Set(initialize=JOBS.keys())
# decision variables
m.start = Var(m.JOBS, domain=NonNegativeReals)
m.finish = Var(m.JOBS, domain=NonNegativeReals)
# additional decision variables for use in the objecive
m.makespan = Var(domain=NonNegativeReals)
# objective function
m.OBJ = Objective(expr = m.makespan, sense = minimize)
# constraints
m.c = ConstraintList()
for j in m.JOBS:
m.c.add(m.finish[j] == m.start[j] + JOBS[j]['duration'])
m.c.add(m.finish[j] <= m.makespan)
m.c.add(m.start[j] >= JOBS[j]['release'])
SolverFactory('cbc').solve(m)
SCHEDULE = {}
for j in m.JOBS:
SCHEDULE[j] = {'machine': 1, 'start': m.start[j](), 'finish': m.start[j]() + JOBS[j]['duration']}
return SCHEDULE
SCHEDULE = opt_schedule(JOBS)
gantt(JOBS, SCHEDULE)
kpi(JOBS, SCHEDULE)
```
### Step 3. Machine conflict constraints
```
def opt_schedule(JOBS):
# create model
m = ConcreteModel()
# index set to simplify notation
m.JOBS = Set(initialize=JOBS.keys())
m.PAIRS = Set(initialize = m.JOBS * m.JOBS, dimen=2, filter=lambda m, j, k : j < k)
# decision variables
m.start = Var(m.JOBS, domain=NonNegativeReals)
m.finish = Var(m.JOBS, domain=NonNegativeReals)
m.y = Var(m.PAIRS, domain=Boolean)
# additional decision variables for use in the objecive
m.makespan = Var(domain=NonNegativeReals)
# objective function
m.OBJ = Objective(expr = m.makespan, sense = minimize)
# constraints
m.c = ConstraintList()
for j in m.JOBS:
m.c.add(m.finish[j] == m.start[j] + JOBS[j]['duration'])
m.c.add(m.finish[j] <= m.makespan)
m.c.add(m.start[j] >= JOBS[j]['release'])
M = 100.0
for j,k in m.PAIRS:
m.c.add(m.finish[j] <= m.start[k] + M*m.y[j,k])
m.c.add(m.finish[k] <= m.start[j] + M*(1 - m.y[j,k]))
SolverFactory('cbc').solve(m)
SCHEDULE = {}
for j in m.JOBS:
SCHEDULE[j] = {'machine': 1, 'start': m.start[j](), 'finish': m.start[j]() + JOBS[j]['duration']}
return SCHEDULE
SCHEDULE = opt_schedule(JOBS)
gantt(JOBS, SCHEDULE)
kpi(JOBS, SCHEDULE)
```
### Step 4. Improve the objective function
```
def opt_schedule(JOBS):
# create model
m = ConcreteModel()
# index set to simplify notation
m.JOBS = Set(initialize=JOBS.keys())
m.PAIRS = Set(initialize = m.JOBS * m.JOBS, dimen=2, filter=lambda m, j, k : j < k)
# decision variables
m.start = Var(m.JOBS, domain=NonNegativeReals)
m.finish = Var(m.JOBS, domain=NonNegativeReals)
m.pastdue = Var(m.JOBS, domain=NonNegativeReals)
m.y = Var(m.PAIRS, domain=Boolean)
# additional decision variables for use in the objecive
m.makespan = Var(domain=NonNegativeReals)
# objective function
m.OBJ = Objective(expr = sum(m.pastdue[j] for j in m.JOBS), sense = minimize)
# constraints
m.c = ConstraintList()
for j in m.JOBS:
m.c.add(m.finish[j] == m.start[j] + JOBS[j]['duration'])
m.c.add(m.finish[j] <= m.makespan)
m.c.add(m.start[j] >= JOBS[j]['release'])
m.c.add(m.finish[j] <= JOBS[j]['due'] + m.pastdue[j])
M = 100.0
for j,k in m.PAIRS:
m.c.add(m.finish[j] <= m.start[k] + M*m.y[j,k])
m.c.add(m.finish[k] <= m.start[j] + M*(1 - m.y[j,k]))
SolverFactory('cbc').solve(m)
SCHEDULE = {}
for j in m.JOBS:
SCHEDULE[j] = {'machine': 1, 'start': m.start[j](), 'finish': m.start[j]() + JOBS[j]['duration']}
return SCHEDULE
SCHEDULE = opt_schedule(JOBS)
gantt(JOBS, SCHEDULE)
kpi(JOBS, SCHEDULE)
```
## Pyomo model
```
def opt_schedule(JOBS):
# create model
m = ConcreteModel()
# index set to simplify notation
m.J = Set(initialize=JOBS.keys())
m.PAIRS = Set(initialize = m.J * m.J, dimen=2, filter=lambda m, j, k : j < k)
# upper bounds on how long it would take to process all jobs
tmax = max([JOBS[j]['release'] for j in m.J]) + sum([JOBS[j]['duration'] for j in m.J])
# decision variables
m.start = Var(m.J, domain=NonNegativeReals, bounds=(0, tmax))
m.pastdue = Var(m.J, domain=NonNegativeReals, bounds=(0, tmax))
m.early = Var(m.J, domain=NonNegativeReals, bounds=(0, tmax))
# additional decision variables for use in the objecive
m.makespan = Var(domain=NonNegativeReals, bounds=(0, tmax))
m.maxpastdue = Var(domain=NonNegativeReals, bounds=(0, tmax))
m.ispastdue = Var(m.J, domain=Binary)
# objective function
m.OBJ = Objective(expr = sum([m.pastdue[j] for j in m.J]), sense = minimize)
# constraints
m.c1 = Constraint(m.J, rule=lambda m, j: m.start[j] >= JOBS[j]['release'])
m.c2 = Constraint(m.J, rule=lambda m, j:
m.start[j] + JOBS[j]['duration'] + m.early[j] == JOBS[j]['due'] + m.pastdue[j])
m.c3 = Disjunction(m.PAIRS, rule=lambda m, j, k:
[m.start[j] + JOBS[j]['duration'] <= m.start[k],
m.start[k] + JOBS[k]['duration'] <= m.start[j]])
m.c4 = Constraint(m.J, rule=lambda m, j: m.pastdue[j] <= m.maxpastdue)
m.c5 = Constraint(m.J, rule=lambda m, j: m.start[j] + JOBS[j]['duration'] <= m.makespan)
m.c6 = Constraint(m.J, rule=lambda m, j: m.pastdue[j] <= tmax*m.ispastdue[j])
TransformationFactory('gdp.chull').apply_to(m)  # spelled 'gdp.hull' in recent Pyomo releases
SolverFactory('cbc').solve(m).write()
SCHEDULE = {}
for j in m.J:
SCHEDULE[j] = {'machine': 1, 'start': m.start[j](), 'finish': m.start[j]() + JOBS[j]['duration']}
return SCHEDULE
SCHEDULE = opt_schedule(JOBS)
gantt(JOBS, SCHEDULE)
kpi(JOBS, SCHEDULE)
```
## Multiple machines
The case of multiple machines requires a modest extension of the model described above. Given a set $M$ of machines, we introduce an additional binary decision variable $z_{j,m}$ indicating whether job $j$ has been assigned to machine $m$. The additional constraints
\begin{align*}
\sum_{m\in M}z_{j,m} & = 1 & \forall j
\end{align*}
require each job to be assigned to exactly one machine for processing.
If both jobs $j$ and $k$ have been assigned to machine $m$, then the disjunctive ordering constraints must apply. This logic is equivalent to the following constraints for $j < k$.
\begin{align*}
\text{start}_{j}+\text{duration}_{j} & \leq \text{start}_{k}+My_{j,k} + M(1-z_{j,m}) + M(1-z_{k,m})\\
\text{start}_{k}+\text{duration}_{k} & \leq \text{start}_{j}+M(1-y_{j,k}) + M(1-z_{j,m}) + M(1-z_{k,m})
\end{align*}
```
MACHINES = ['A','B']
def schedule_machines(JOBS, MACHINES):
# create model
m = ConcreteModel()
# index set to simplify notation
m.J = Set(initialize=JOBS.keys())
m.M = Set(initialize=MACHINES)
m.PAIRS = Set(initialize = m.J * m.J, dimen=2, filter=lambda m, j, k : j < k)
# decision variables
m.start = Var(m.J, bounds=(0, 1000))
m.makespan = Var(domain=NonNegativeReals)
m.pastdue = Var(m.J, domain=NonNegativeReals)
m.early = Var(m.J, domain=NonNegativeReals)
# additional decision variables for use in the objective
m.ispastdue = Var(m.J, domain=Binary)
m.maxpastdue = Var(domain=NonNegativeReals)
# for binary assignment of jobs to machines
m.z = Var(m.J, m.M, domain=Binary)
# for modeling disjunctive constraints
m.y = Var(m.PAIRS, domain=Binary)
BigM = max([JOBS[j]['release'] for j in m.J]) + sum([JOBS[j]['duration'] for j in m.J])
m.OBJ = Objective(expr = sum(m.pastdue[j] for j in m.J) + m.makespan - sum(m.early[j] for j in m.J), sense = minimize)
m.c1 = Constraint(m.J, rule=lambda m, j:
m.start[j] >= JOBS[j]['release'])
m.c2 = Constraint(m.J, rule=lambda m, j:
m.start[j] + JOBS[j]['duration'] + m.early[j] == JOBS[j]['due'] + m.pastdue[j])
m.c3 = Constraint(m.J, rule=lambda m, j:
sum(m.z[j,mach] for mach in m.M) == 1)
m.c4 = Constraint(m.J, rule=lambda m, j:
m.pastdue[j] <= BigM*m.ispastdue[j])
m.c5 = Constraint(m.J, rule=lambda m, j:
m.pastdue[j] <= m.maxpastdue)
m.c6 = Constraint(m.J, rule=lambda m, j:
m.start[j] + JOBS[j]['duration'] <= m.makespan)
m.d1 = Constraint(m.M, m.PAIRS, rule = lambda m, mach, j, k:
m.start[j] + JOBS[j]['duration'] <= m.start[k] + BigM*(m.y[j,k] + (1-m.z[j,mach]) + (1-m.z[k,mach])))
m.d2 = Constraint(m.M, m.PAIRS, rule = lambda m, mach, j, k:
m.start[k] + JOBS[k]['duration'] <= m.start[j] + BigM*((1-m.y[j,k]) + (1-m.z[j,mach]) + (1-m.z[k,mach])))
SolverFactory('cbc').solve(m).write()
SCHEDULE = {}
for j in m.J:
SCHEDULE[j] = {
'start': m.start[j](),
'finish': m.start[j]() + JOBS[j]['duration'],
'machine': [mach for mach in MACHINES if m.z[j,mach]][0]
}
return SCHEDULE
SCHEDULE = schedule_machines(JOBS,MACHINES)
gantt(JOBS, SCHEDULE)
kpi(JOBS, SCHEDULE)
```
## Disjunctive Version
```
MACHINES = ['A','B']
def schedule_machines(JOBS, MACHINES):
# create model
m = ConcreteModel()
# index set to simplify notation
m.J = Set(initialize=JOBS.keys())
m.M = Set(initialize=MACHINES)
m.PAIRS = Set(initialize = m.J * m.J, dimen=2, filter=lambda m, j, k : j < k)
# decision variables
m.start = Var(m.J, bounds=(0, 1000))
m.makespan = Var(domain=NonNegativeReals)
m.pastdue = Var(m.J, bounds=(0, 1000))
m.early = Var(m.J, bounds=(0, 10000))
# additional decision variables for use in the objective
m.ispastdue = Var(m.J, domain=Binary)
m.maxpastdue = Var(domain=NonNegativeReals)
# for binary assignment of jobs to machines
m.z = Var(m.J, m.M, domain=Binary)
# for modeling disjunctive constraints
BigM = max([JOBS[j]['release'] for j in m.J]) + sum([JOBS[j]['duration'] for j in m.J])
m.OBJ = Objective(expr = sum(m.pastdue[j] for j in m.J) + m.makespan - sum(m.early[j] for j in m.J), sense = minimize)
# job starts after it is released
m.c1 = Constraint(m.J, rule=lambda m, j: m.start[j] >= JOBS[j]['release'])
# defines early and pastdue
m.c2 = Constraint(m.J, rule=lambda m, j: m.start[j] + JOBS[j]['duration'] + m.early[j] == JOBS[j]['due'] + m.pastdue[j])
m.d1 = Disjunction(m.J, rule=lambda m, j: [m.early[j]==0, m.pastdue[j]==0])
# each job is assigned to one and only one machine
m.c3 = Constraint(m.J, rule=lambda m, j: sum(m.z[j, mach] for mach in m.M) == 1)
# defines a binary variable indicating if a job is past due
m.c4 = Disjunction(m.J, rule=lambda m, j: [m.pastdue[j] == 0, m.ispastdue[j] == 1])
# all jobs must be finished before max pastdue
m.c5 = Constraint(m.J, rule=lambda m, j: m.pastdue[j] <= m.maxpastdue)
# defining make span
m.c6 = Constraint(m.J, rule=lambda m, j: m.start[j] + JOBS[j]['duration'] <= m.makespan)
# disjunctions
m.d0 = Disjunction(m.M, m.PAIRS, rule = lambda m, mach, j, k:
[m.start[j] + JOBS[j]['duration'] <= m.start[k] + BigM*((1-m.z[j, mach]) + (1-m.z[k, mach])),
m.start[k] + JOBS[k]['duration'] <= m.start[j] + BigM*((1-m.z[j, mach]) + (1-m.z[k, mach]))])
transform = TransformationFactory('gdp.hull')
transform.apply_to(m)
SolverFactory('cbc').solve(m).write()
SCHEDULE = {}
for j in m.J:
SCHEDULE[j] = {
'start': m.start[j](),
'finish': m.start[j]() + JOBS[j]['duration'],
'machine': [mach for mach in MACHINES if m.z[j,mach]][0]
}
return SCHEDULE
SCHEDULE = schedule_machines(JOBS,MACHINES)
gantt(JOBS, SCHEDULE)
kpi(JOBS, SCHEDULE)
```
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
bucket_name="ndir-metis-bucket"
path=f"gs://{bucket_name}/asteroid/processed_asteroid_data.csv"
df = pd.read_csv(path,
storage_options={"token": "secrets.json"},
low_memory=False)
df.drop(columns=['Unnamed: 0'], inplace=True)
df.condition_code.value_counts()
sns.displot(df.diameter);
df = df[df['diameter'] < np.quantile(df['diameter'], 0.99)]
sns.displot(df.diameter);
plt.figure(figsize=(10,10))
sns.heatmap(df.corr(), annot=True, fmt='.2f', cmap='coolwarm');
```
# Feature Info
- a: semi-major axis (au)
- e: eccentricity
- i: inclination with respect to ecliptic plane
- om: longitude of the ascending node
- w: argument of perihelion
- q: perihelion distance (au)
- ad: aphelion distance (au)
- per_y: orbital period (years)
- data_arc: span of recorded data (days)
- condition_code: orbital condition code
- n_obs_used: number of observations used
- H: absolute magnitude parameter
- neo: near-earth object
- pha: potentially hazardous asteroid
- diameter: diameter (target variable)
- extent: object bi/tri-axial ellipsoid dimensions (km)
- albedo: geometric albedo
- rot_per: rotation period (hours)
- GM: gravitational parameter. Product of mass and gravitational constant
- BV: Color index B-V magnitude difference
- UB: Color index U-B magnitude difference
- IR: Color index I-R magnitude difference
- specB: Spectral taxonomic type (SMASSII)
- specT: Spectral taxonomic type (Tholen)
- G: Magnitude slope parameter
- moid: Earth minimum orbit intersection distance
- class: asteroid orbit class
- n: mean motion (degrees/day)
- per: orbital period (days)
- ma: mean anomaly (degrees)
```
final_df = df.drop(columns=['om', 'w', 'ma', 'name', 'per_y'])
final_df = pd.get_dummies(final_df, prefix='class', columns=['class'])
final_df = pd.get_dummies(final_df, prefix='condition_code', columns=['condition_code'], drop_first=False)
X, y = final_df.drop('diameter',axis=1), final_df['diameter']
X_train_val, X_test, y_train_val, y_test = train_test_split(X, y, test_size=.2, random_state=13)
X_train, X_val, y_train, y_val = train_test_split(X_train_val, y_train_val, test_size=.25, random_state=42)
xgr_model = xgb.XGBRegressor(objective = 'reg:squarederror', n_estimators=1000, learning_rate=0.05, max_depth=10, reg_lambda=10, reg_alpha=10, n_jobs=-1, colsample_bytree=0.3)
xgr_model.fit(X_train, y_train)
y_val_pred = xgr_model.predict(X_val)
r2 = r2_score(y_val, y_val_pred)
print("R2: %.2f" % (r2))
gbm = xgb.XGBRegressor(
n_estimators=30000, # arbitrarily large number
max_depth=10,
objective="reg:squarederror", # Other options: https://xgboost.readthedocs.io/en/latest/parameter.html#learning-task-parameters
learning_rate=.1,
subsample=1,
min_child_weight=1,
colsample_bytree=.8
)
eval_set=[(X_train,y_train),(X_val,y_val)] #tracking train/validation error as we go
fit_model = gbm.fit(
X_train, y_train,
eval_set=eval_set,
eval_metric='rmse',
early_stopping_rounds=20,
verbose=True #gives output log as below
)
def rmse(actuals, preds):
return np.sqrt(((actuals - preds) ** 2).mean())
rmse(y_test, gbm.predict(X_test, ntree_limit=gbm.best_ntree_limit))
#Step by step RMSEs, with .1 learning rate:
#best max_depth: 7, .452
#best subsample: .8, .448
#best min_child_weight: 12, .446
#best colsample_bytree: .7, .444
gbm = xgb.XGBRegressor(
n_estimators=30000, # arbitrarily large number
max_depth=10,
objective="reg:squarederror",
learning_rate=.05,
subsample=.8,
min_child_weight=12,
colsample_bytree=.7,
n_jobs=-1,
random_state=0)
eval_set = [(X_train, y_train),
(X_val, y_val)] #tracking train/validation error as we go
fit_model = gbm.fit(
X_train,
y_train,
eval_set=eval_set,
eval_metric='rmse',
early_stopping_rounds=50,
verbose=False)
rmse(y_val, gbm.predict(X_val, ntree_limit=gbm.best_ntree_limit))
mean_absolute_error(y_val, gbm.predict(X_val, ntree_limit=gbm.best_ntree_limit))
xgb.plot_importance(gbm)
```
```
print(np.sqrt(mean_squared_error(y_test, gbm.predict(X_test, ntree_limit=gbm.best_ntree_limit))))
print((r2_score(y_test, gbm.predict(X_test, ntree_limit=gbm.best_ntree_limit))))
final_df.to_csv('gs://ndir-metis-bucket/asteroid/final_df.csv',
storage_options={"token": "secrets.json"})
gbm.save_model('xgb_model.json')
```
# Calculations with PmagPy
This notebook demonstrates many of the PmagPy calculation functions such as those that rotate directions, return statistical parameters, and simulate data from specified distributions.
## Guide to PmagPy
The notebook is one of a series of notebooks that demonstrate the functionality of PmagPy. The other notebooks are:
- [PmagPy_introduction.ipynb](PmagPy_introduction.ipynb) This notebook introduces PmagPy and lists the functions that are demonstrated in the other notebooks.
- [PmagPy_plots_analysis.ipynb](PmagPy_plots_analysis.ipynb) This notebook demonstrates PmagPy functions that can be used to visualize data as well as those that conduct statistical tests that have associated visualizations.
- [PmagPy_MagIC.ipynb](PmagPy_MagIC.ipynb) This notebook demonstrates how PmagPy can be used to read and write data to and from the MagIC database format including conversion from many individual lab measurement file formats.
## Customizing this notebook
If you want to make changes to this notebook, you should make a copy (see File menu). Otherwise each time you update **PmagPy**, your changes will be overwritten.
## Get started
To use the functions in this notebook, we have to import the **PmagPy** modules **pmagplotlib**, **pmag** and **ipmag** and some other handy functions for use in the notebook. This is done in the following code block which must be executed before running any other code block. To execute, click on the code block and then click on the "Run" button in the menu.
In order to access the example data, this notebook is meant to be run in the PmagPy-data directory (PmagPy directory for developers).
Try it! Run the code block below (click on the cell and then click 'Run'):
```
import pmagpy.pmag as pmag
import pmagpy.pmagplotlib as pmagplotlib
import pmagpy.ipmag as ipmag
import matplotlib.pyplot as plt # our plotting buddy
from pmagpy import convert_2_magic as convert
import numpy as np # the fabulous NumPy package
import pandas as pd # and of course Pandas
has_basemap, Basemap = pmag.import_basemap()
has_cartopy, Cartopy = pmag.import_cartopy()
from IPython.display import Image
%matplotlib inline
```
## Functions demonstrated within this notebook:
- Functions in **PmagPy_calculations.ipynb**:
- [aarm_magic](#aarm_magic) : calculate AARM tensors
- [atrm_magic](#aarm_magic) : calculate ATRM tensors
- [angle](#angle) : calculates the angle between two vectors
- [apwp](#apwp) : returns predicted paleolatitudes, directions and pole latitude/longitude from apparent polar wander paths of Besse and Courtillot (2002).
- [b_vdm](#b_vdm) : converts B (in microT) and (magnetic) latitude to V(A)DM (see [vdm_b](#vdm_b))
- [bootams](#bootams) : calculates bootstrap statistics for tensor data
- [cart_dir](#cart_dir) : converts cartesian coordinates (x,y,z) to declination, inclination, intensity (see [dir_cart](#dir_cart))
- [di_eq](#di_eq) : maps declination, inclination pairs to X,Y for plotting in equal area projections
- [di_geo](#di_geo) : rotates declination, inclination in specimen coordinates to geographic coordinates
- [di_rot](#di_rot) : rotates directions to a coordinate system with D,I as center
- [di_tilt](#di_tilt) : rotates directions to stratigraphic coordinates
- [di_vgp](#di_vgp) : converts direction to Virtual Geomagnetic Pole (see [vgp_di](#vgp_di))
- [dia_vgp](#dia_vgp) : converts direction and $\alpha_{95}$ to Virtual Geomagnetic Pole and dp,dm
- [dipole_pinc](#dipole_pinc) : calculates inclination given latitude assuming geocentric axial dipole
- [dipole_plat](#dipole_plat) : calculates latitude given inclination assuming geocentric axial dipole
- [dir_cart](#dir_cart) : converts declination, inclination, intensity to cartesian coordinates (see [cart_dir](#cart_dir))
- [eigs_s](#eigs_s) : converts eigenparameters to equivalent 6 element tensor (see [s_eigs](#s_eigs))
- [eq_di](#eq_di) : takes X,Y from equal area projection (e.g., from digitized coordinates) and converts to declination, inclination
- [fcalc](#fcalc) : returns the value from an F table, given the degrees of freedom.
- [fisher](#fisher) : generates sets of directions drawn from Fisher distributions with vertical true mean
- [fishrot](#fishrot) : generates sets of directions drawn from Fisher distributions with arbitrary true mean
- [flip](#flip) : flips a second mode (reverse directions) to their antipodes
- [gaussian](#gaussian) : generates data drawn from a normal distribution
- [gobing](#gobing) : calculates Bingham statistics from a set of directions
- [gofish](#gofish) : calculates Fisher statistics from a set of directions
- [gokent](#gokent) : calculates Kent statistics from a set of directions
- [goprinc](#goprinc) : calculates principal directions statistics
- [igrf](#igrf) : calculates geomagnetic field vectors for location, age given a field model (e.g., IGRF) including paleofield models (e.g., cals10k)
- [incfish](#incfish) : estimates the true mean inclination from inclination only data
- [pca](#pca) : calculates the best-fit line or plane for demagnetization data and associated statistics
- [pt_rot](#pt_rot) : rotates point given finite rotation pole
- [s_eigs](#s_eigs) : takes a 6 element tensor and calculates eigen parameters (see [eigs_s](#eigs_s))
- [s_geo](#s_geo) : rotates 6 element tensors to geographic coordinates
- [s_hext](#s_hext) : calculates Hext statistics from 6 element tensors
- [s_tilt](#s_tilt) : rotates 6 element tensors to stratigraphic coordinates
- [s_magic](#s_tilt) : converts 6 element tensor files to the MagIC format
- [scalc](#scalc) : calculates VGP scatter
- [scalc_magic](#scalc) : calculates VGP scatter
- [separate_directions](#separate_directions) : separates a set of directions into two modes (normal and reverse)
- [squish](#squish): flattens inclination data given flattening factor (see [unsquish](#unsquish))
- [sundec](#sundec) : calculates direction to sun for location, date, time and sun azimuth
- [tk03](#tk03) : generates sets of directions consistent with the TK03 field model
- [uniform](#uniform) : generates sets of uniformly distributed directions
- [unsquish](#unsquish) : unsquishes flattened inclinations, given flattening factor (see [squish](#squish))
- [vdm_b](#vdm_b) : calculates intensity at given location from specified virtual dipole moment (see [b_vdm](#b_vdm))
- [vector_mean](#vector_mean) : calculates vector mean for sets of vectors (declination, inclination, intensity)
- [vgp_di](#vgp_di) : calculates direction at given location from virtual geomagnetic pole (see [di_vgp](#di_vgp))
- [watsons_f](#watsons_f) : calculates Watson's F statistic for testing for common mean
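Several entries above are simple closed-form relations. For example, dipole_pinc and dipole_plat rest on the geocentric axial dipole formula $\tan I = 2\tan\lambda$. Here is a minimal standalone sketch of that relation in plain Python — the function names mirror the PmagPy entries, but this is an illustration, not the PmagPy implementation:

```python
import math

def dipole_pinc(lat):
    """Inclination (degrees) at latitude lat (degrees) for a
    geocentric axial dipole: tan(I) = 2 tan(lat)."""
    return math.degrees(math.atan(2.0 * math.tan(math.radians(lat))))

def dipole_plat(inc):
    """Latitude (degrees) implied by inclination inc (degrees)
    under the same dipole assumption."""
    return math.degrees(math.atan(0.5 * math.tan(math.radians(inc))))

print(round(dipole_pinc(45.0), 1))  # 63.4
```

Note the two functions are inverses of each other, which makes a quick sanity check easy.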
## aarm_magic
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#aarm_magic.py)
Anisotropy of anhysteretic or other remanence can be converted to a tensor and used to correct natural remanence data for the effects of anisotropy remanence acquisition. For example, directions may be deflected from the geomagnetic field direction or intensities may be biased by strong anisotropies in the magnetic fabric of the specimen. By imparting an anhysteretic or thermal remanence in many specific orientations, the anisotropy of remanence acquisition can be characterized and used for correction. We do this for anisotropy of anhysteretic remanence (AARM) by imparting an ARM in 9, 12 or 15 positions. Each ARM must be preceded by an AF demagnetization step. The 15 positions are shown in the [k15_magic](#k15_magic) example.
For the 9 position scheme, **aarm_magic** assumes that the AARMs are imparted in positions 1,2,3, 6,7,8, 11,12,13. Someone (a.k.a. Josh Feinberg) has kindly made the measurements and saved them in an SIO-formatted measurement file named aarm_magic_example.dat in the datafile directory called aarm_magic. Note the special format of these files - the treatment column (column #2) has the position number (1,2,3,6, etc.) followed by either a “00” for the obligatory zero field baseline step or a “10” for the in-field step. These could also be ‘0‘ and ‘1’.
We need to first import these into the measurements format and then calculate the anisotropy tensors. These can then be plotted or used to correct paleointensity or directional data for anisotropy of remanence.
So, first follow the instructions in [sio_magic](#sio_magic) to import the AARM data into the MagIC format. The DC field was 50 μT, the peak AC field was 180 mT, the location was "Bushveld" and the lab protocol was AF and Anisotropy. The naming convention used Option # 3 (see help menu).
Then we need to calculate the best-fit tensor and write them out to the specimens.txt MagIC tables which can be used to correct remanence data for anisotropy.
The **aarm_magic** program takes a measurements.txt formatted file with anisotropy of ARM data in it and calculates the tensors, rotates it into the desired coordinate system and stores the data in a specimens.txt format file. To do this in a notebook, use **ipmag.aarm_magic()**.
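As a side note, treatment codes of the form described above are easy to decode programmatically. The parse_treatment helper below is purely illustrative (it is not part of PmagPy):

```python
def parse_treatment(code):
    """Split an SIO AARM treatment code such as '1210' into
    (position, in_field): the trailing two digits are '00' for the
    zero-field baseline step and '10' for the in-field step."""
    position = int(code[:-2])
    in_field = code[-2:] == '10'
    return position, in_field

print(parse_treatment('300'))   # (3, False) -> position 3, baseline
print(parse_treatment('1210'))  # (12, True) -> position 12, in-field
```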
```
convert.sio('arm_magic_example.dat',dir_path='data_files/aarm_magic/',specnum=3,
location='Bushveld',codelist='AF:ANI',samp_con='3',
meas_file='aarm_measurements.txt',peakfield=180,labfield=50, phi=-1, theta=-1)
help(ipmag.aarm_magic)
ipmag.aarm_magic('aarm_measurements.txt',dir_path='data_files/aarm_magic/')
# plot the data generated by aarm_magic:
ipmag.aniso_magic_nb(infile='data_files/aarm_magic/specimens.txt', save_plots=False)
```
## atrm_magic
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#atrm_magic.py)
Anisotropy of thermal remanence (ATRM) is similar to anisotropy of anhysteretic remanence (AARM) and the procedure for obtaining the tensor is also similar. Therefore, **atrm_magic** is quite similar to [aarm_magic](#aarm_magic). However, the SIO lab procedures for the two experiments are somewhat different. In the ATRM experiment, there is a single, zero field step at the chosen temperature which is used as a baseline. We use only six positions (as opposed to nine for AARM) because of the additional risk of alteration at each temperature step. The positions are also different:
```
Image('data_files/Figures/atrm_meas.png')
```
The file atrm_magic_example.dat in the data_files/atrm_magic directory is an SIO-formatted data file containing ATRM measurement data acquired at a temperature of 520°C. Note the special format of these files - the treatment column (column 2) has the temperature in centigrade followed by either a “00” for the obligatory zero field baseline step or a “10” for the first position, and so on. These could also be ‘0‘ and ‘1’, etc.
Follow the instructions for [sio_magic](#sio_magic) to import the ATRM data into the MagIC format. The DC field was 40 μT. The sample/site naming convention used option # 1 (see help menu) and the specimen and sample name are the same (specnum=0).
We will use **ipmag.atrm_magic()** to calculate the best-fit tensor and write out the MagIC tables which can be used to correct remanence data for the effects of remanent anisotropy.
```
convert.sio('atrm_magic_example.dat',dir_path='data_files/atrm_magic/',specnum=0,
location='unknown',codelist='T:ANI',samp_con='1',
meas_file='measurements.txt',labfield=40, phi=-1, theta=-1)
help(ipmag.atrm_magic)
ipmag.atrm_magic('measurements.txt',dir_path='data_files/atrm_magic')
```
## angle
[\[Essentials Appendix A.3.4\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ap1.html#x20-215000A.3.4) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#angle.py)
**angle** calculates the angle $\alpha$ between two declination,inclination pairs.
There are several ways to use this function from the notebook - one loading the data into a Pandas dataframe, then converting to the desired arrays, or load directly into a **Numpy** array of desired shape.
```
help(pmag.angle)
# Pandas way:
di=pd.read_csv('data_files/angle/angle.dat',delim_whitespace=True,header=None)
#rename column headers
di.columns=['Dec1','Inc1','Dec2','Inc2']
```
Here's the sort of data in the file:
```
di.head()
```
Now we will use **pmag.angle()** to calculate the angles.
```
# call pmag.angle
pmag.angle(di[['Dec1','Inc1']].values,di[['Dec2','Inc2']].values)
```
Here is the other (equally valid) way using **np.loadtxt()**.
```
# Numpy way:
di=np.loadtxt('data_files/angle/angle.dat').transpose() # read in file
D1=di[0:2].transpose() # assign to first array
D2=di[2:].transpose() # assign to second array
pmag.angle(D1,D2) # call pmag.angle
```
You can always save your output using **np.savetxt()**.
```
angles=pmag.angle(D1,D2) # assign the returned array to angles
```
## apwp
[\[Essentials Chapter 16\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch16.html#x15-15600016) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#apwp.py)
The program **apwp** calculates paleolatitude, declination, inclination from a pole latitude and longitude based on the paper Besse and Courtillot (2002; see [Essentials Chapter 16](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch16.html#x15-15600016) for complete discussion). Here we will calculate the expected direction for 100 million year old rocks at a locality in La Jolla Cove (Latitude: 33$^{\circ}$N, Longitude 117$^{\circ}$W). Assume that we are on the North American Plate! (Note that there is no option for the Pacific plate in the program **apwp**, and that La Jolla was on the North American plate until a few (~6?) million years ago.)
Within the notebook we will call **pmag.apwp**.
```
help(pmag.apwp)
# here are the desired plate, latitude, longitude and age:
data=['NA',33,-117,100] # North American plate, lat and lon of San Diego at 100 Ma
pmag.apwp(data,print_results=True)
```
## b_vdm
[\[Essentials Chapter 2\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch2.html#x15-1560002) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#b_vdm.py)
**b_vdm** converts geomagnetic field intensity observed at the earth's surface at a particular (paleo)latitude and calculates the Virtual \[Axial\] Dipole Moment (vdm or vadm). We will call **pmag.b_vdm()** directly from within the notebook. \[See also [**vdm_b**](#vdm_b).\]
Here we use the function **pmag.b_vdm()** to convert an estimated paleofield value of 33 $\mu$T obtained from a lava flow at 22$^{\circ}$ N latitude to the equivalent Virtual Dipole Moment (VDM) in Am$^2$.
```
help(pmag.b_vdm)
print ('%7.1f'%(pmag.b_vdm(33e-6,22)*1e-21),' ZAm^2')
pmag.b_vdm(33e-6,22)*1e-21
```
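For reference, the conversion performed above is $m = \frac{4\pi r^3}{\mu_0}\,\frac{B}{\sqrt{1+3\sin^2\lambda}}$. A standalone check in plain Python, assuming an Earth radius of $6.371\times 10^6$ m (an illustration, not the PmagPy implementation):

```python
import math

def b_to_vdm(B, lat):
    """Virtual dipole moment (A m^2) from surface field B (T)
    at magnetic latitude lat (degrees)."""
    r = 6.371e6                     # Earth radius in meters (assumed)
    mu0 = 4 * math.pi * 1e-7        # permeability of free space
    return (4 * math.pi * r**3 / mu0) * B / math.sqrt(
        1 + 3 * math.sin(math.radians(lat))**2)

print('%7.1f ZAm^2' % (b_to_vdm(33e-6, 22) * 1e-21))  # ~71.6 ZAm^2 with these constants
```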
## bootams
[\[Essentials Chapter 13\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch13.html#x15-15600013) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#bootams.py)
**bootams** calculates bootstrap statistics for anisotropy tensor data in the form of:
x11 x22 x33 x12 x23 x13
It does this by selecting para-data sets and calculating the Hext average eigenparameters. It has an optional parametric bootstrap whereby the $\sigma$ for the data set as a whole is used to draw new para data sets. The bootstrapped eigenparameters are assumed to be Kent distributed and the program calculates Kent error ellipses for each set of eigenvectors. It also estimates the standard deviations of the bootstrapped eigenvalues.
**bootams** reads in a file with data for the six tensor elements (x11 x22 x33 x12 x23 x13) for specimens, calls **pmag.s_boot()** using a parametric or non-parametric bootstrap as desired. If all that is desired is the bootstrapped eigenparameters, **pmag.s_boot()** has all we need, but if the Kent ellipses are required, and we can call **pmag.sbootpars()** to calculated these more derived products and print them out.
Note that every time the bootstrap program gets called, the output will be slightly different because this depends on calls to random number generators. If the answers are different by a lot, then the number of bootstrap calculations is too low. The number of bootstraps can be changed with the nb option below.
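The resampling at the heart of the non-parametric bootstrap is easy to sketch in plain NumPy: each para-data set is just the rows of the tensor array resampled with replacement. A fixed seed makes the draw repeatable; this illustrates the idea, not **pmag.s_boot()**'s exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeding makes the bootstrap repeatable

def para_dataset(S, rng):
    """Draw one bootstrap pseudo-sample: resample the rows of the
    n x 6 tensor array S with replacement."""
    n = S.shape[0]
    return S[rng.integers(0, n, n)]

S = np.arange(30.0).reshape(5, 6)  # stand-in for 5 specimens' tensor elements
print(para_dataset(S, rng).shape)  # (5, 6)
```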
We can do all this from within the notebook as follows:
```
help(pmag.s_boot)
```
So we will:
- read in the AMS tensor data
- get the bootstrapped eigenparameters
- print out the formatted results
```
Ss=np.loadtxt('data_files/bootams/bootams_example.dat')
Tmean,Vmean,Taus,Vs=pmag.s_boot(Ss) # get the bootstrapped eigenparameters
bpars=pmag.sbootpars(Taus,Vs) # calculate kent parameters for bootstrap
print("""tau tau_sigma V_dec V_inc V_eta V_eta_dec V_eta_inc V_zeta V_zeta_dec V_zeta_inc
""")
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[0],bpars["t1_sigma"],Vmean[0][0],Vmean[0][1],\
bpars["v1_zeta"],bpars["v1_zeta_dec"],bpars["v1_zeta_inc"],\
bpars["v1_eta"],bpars["v1_eta_dec"],bpars["v1_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[1],bpars["t2_sigma"],Vmean[1][0],Vmean[1][1],\
bpars["v2_zeta"],bpars["v2_zeta_dec"],bpars["v2_zeta_inc"],\
bpars["v2_eta"],bpars["v2_eta_dec"],bpars["v2_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[2],bpars["t3_sigma"],Vmean[2][0],Vmean[2][1],\
bpars["v3_zeta"],bpars["v3_zeta_dec"],bpars["v3_zeta_inc"],\
bpars["v3_eta"],bpars["v3_eta_dec"],bpars["v3_eta_inc"])
print(outstring)
# with parametric bootstrap:
Ss=np.loadtxt('data_files/bootams/bootams_example.dat')
Tmean,Vmean,Taus,Vs=pmag.s_boot(Ss,ipar=1,nb=5000) # get the bootstrapped eigenparameters
bpars=pmag.sbootpars(Taus,Vs) # calculate kent parameters for bootstrap
print("""tau tau_sigma V_dec V_inc V_eta V_eta_dec V_eta_inc V_zeta V_zeta_dec V_zeta_inc
""")
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[0],bpars["t1_sigma"],Vmean[0][0],Vmean[0][1],\
bpars["v1_zeta"],bpars["v1_zeta_dec"],bpars["v1_zeta_inc"],\
bpars["v1_eta"],bpars["v1_eta_dec"],bpars["v1_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[1],bpars["t2_sigma"],Vmean[1][0],Vmean[1][1],\
bpars["v2_zeta"],bpars["v2_zeta_dec"],bpars["v2_zeta_inc"],\
bpars["v2_eta"],bpars["v2_eta_dec"],bpars["v2_eta_inc"])
print(outstring)
outstring='%7.5f %7.5f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f %7.1f'%(\
Tmean[2],bpars["t3_sigma"],Vmean[2][0],Vmean[2][1],\
bpars["v3_zeta"],bpars["v3_zeta_dec"],bpars["v3_zeta_inc"],\
bpars["v3_eta"],bpars["v3_eta_dec"],bpars["v3_eta_inc"])
print(outstring)
```
## cart_dir
[\[Essentials Chapter 2\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch2.html#x15-1560002) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#cart_dir.py)
**cart_dir** converts cartesian coordinates (X,Y,Z) to polar coordinates (Declination, Inclination, Intensity). We will call **pmag.cart2dir()**.
```
help(pmag.cart2dir)
# read in data file from example file
cart=np.loadtxt('data_files/cart_dir/cart_dir_example.dat')
print ('Input: \n',cart) # print out the cartesian coordinates
# print out the results
dirs = pmag.cart2dir(cart)
print ("Output: ")
for d in dirs:
print ('%7.1f %7.1f %8.3e'%(d[0],d[1],d[2]))
```
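The conversion itself is straightforward spherical trigonometry; here is a plain-NumPy equivalent of the call above (an illustration, not the pmag implementation):

```python
import numpy as np

def cart2dir(cart):
    """Convert rows of (x, y, z) to (declination, inclination, intensity)."""
    cart = np.atleast_2d(cart)
    R = np.linalg.norm(cart, axis=1)                               # intensity
    decl = np.degrees(np.arctan2(cart[:, 1], cart[:, 0])) % 360.0  # declination in [0, 360)
    incl = np.degrees(np.arcsin(cart[:, 2] / R))                   # inclination from horizontal
    return np.column_stack([decl, incl, R])

print(cart2dir([0.0, 1.0, 1.0]))  # D=90, I=45, R=sqrt(2)
```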
## di_eq
[\[Essentials Appendix B\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ap2.html#equal_area)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#di_eq.py)
Paleomagnetic data are frequently plotted in equal area projection. PmagPy has several plotting options which do this (e.g., [**eqarea**](#eqarea)), but occasionally it is handy to be able to convert the directions to X,Y coordinates directly, without plotting them at all. Here is an example using the datafile di_eq_example.dat:
The **di_eq** program calls **pmag.dimap()**, which we can do from within a Jupyter notebook.
```
help(pmag.dimap)
DIs=np.loadtxt('data_files/di_eq/di_eq_example.dat').transpose() # load in the data
print (pmag.dimap(DIs[0],DIs[1])) # call the function
```
## di_geo
[\[Essentials Chapter 9\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch9.html)
and [Changing coordinate systems](http://earthref.org/MAGIC/books/Tauxe/Essentials/WebBook3ap1.html#Changing_coordinate_systems)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#di_geo.py)
Here we will convert D = 8.1,I = 45.2 from specimen coordinates to geographic-adjusted coordinates. The orientation of laboratory arrow on the specimen was: azimuth = 347; plunge = 27. To do this we will call **pmag.dogeo()**. There is also **pmag.dogeo_V** for arrays of data.
So let's start with **pmag.dogeo()**.
```
help(pmag.dogeo)
pmag.dogeo(dec=81,inc=45.2,az=347,pl=27)
```
Now let's check out the version that takes many data points at once.
```
help(pmag.dogeo_V)
indata=np.loadtxt('data_files/di_geo/di_geo_example.dat')
print (indata)
```
Let's take a look at these data in equal area projection: (see [eqarea](#eqarea) for details)
```
ipmag.plot_net(1)
ipmag.plot_di(dec=indata.transpose()[0],inc=indata.transpose()[1],color='red',edge='black')
```
The data are highly scattered and we hope that the geographic coordinate system looks better! To find out try:
```
decs,incs=pmag.dogeo_V(indata)
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,color='red',edge='black')
```
These data are clearly much better grouped.
And here they are printed out.
```
print(np.column_stack([decs,incs]))
```
## di_rot
[\[Essentials Chapter 11\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch11.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#di_rot.py)
**di_rot** rotates dec inc pairs to a new origin. We can call **pmag.dodirot()** for single [dec,inc,Dbar,Ibar] data or **pmag.dodirot_V()** for an array of Dec, Inc pairs. We can use the data from the [di_geo](#di_geo) example and rotate the geographic coordinate data such that the center of the distribution is the principal direction.
We do it like this:
- read in a data set with dec inc pairs
- make an equal area projection of the data to remind us what they look like
- calculate the principal component with **pmag.doprinc()**
- rotate the data to the principal direction
- plot the rotated data in an equal area projection.
```
di_block=np.loadtxt('data_files/di_rot/di_rot_example.txt') # read in some data
ipmag.plot_net(1) # make the plot
ipmag.plot_di(di_block=di_block,title='geographic',color='red',edge='black')
```
Now we calculate the principal direction using the method described in the [goprinc](#goprinc) section.
```
princ=pmag.doprinc(di_block)
```
And note we use **pmag.dodirot_V** to do the rotation.
```
help(pmag.dodirot_V)
rot_block=pmag.dodirot_V(di_block,princ['dec'],princ['inc'])
rot_block
```
And of course look at what we have done!
```
ipmag.plot_net(1) # make the plot
ipmag.plot_di(di_block=rot_block,color='red',title='rotated',edge='black')
```
## di_tilt
[\[Essentials Chapter 9\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch9.html) [\[Changing coordinate systems\]](http://earthref.org/MAGIC/books/Tauxe/Essentials/WebBook3ap1.html#Changing_coordinate_systems)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#di_tilt.py)
**di_tilt** can rotate a direction of Declination = 5.3 and Inclination = 71.6 to “stratigraphic” coordinates, assuming the strike was 135 and the dip was 21. The convention in this program is to use the dip direction, which is to the “right” of this strike.
We can perform this calculation by calling **pmag.dotilt** or **pmag.dotilt_V()** depending on if we have a single point or an array to rotate.
```
help(pmag.dotilt)
help(pmag.dotilt_V)
# read in some data
data=np.loadtxt('data_files/di_tilt/di_tilt_example.dat') # load up the data
di_block=data[:,[0,1]] # let's look at the data first!
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block)
```
Now we can rotate them
```
Dt,It=pmag.dotilt_V(data) # rotate them
ipmag.plot_net(1) # and take another look
ipmag.plot_di(dec=Dt,inc=It)
```
Use the handy function **np.column_stack** to pair the decs and incs together
```
np.column_stack((Dt,It)) # if you want to see the output:
```
## di_vgp
[\[Essentials Chapter 2\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch2.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#di_vgp.py)
**di_vgp** converts directions (declination,inclination) to Virtual Geomagnetic Pole positions. This is the inverse of [**vgp_di**](#vgp_di).
To do so, we will call **pmag.dia_vgp()** from within the notebook.
```
help(pmag.dia_vgp)
data=np.loadtxt('data_files/di_vgp/di_vgp_example.dat') # read in some data
print (data)
```
The data are almost in the correct format, but there is no a95 field, so that will have to be inserted (as zeros).
```
a95=np.zeros(len(data))
a95
DIs=data.transpose()[0:2].transpose() # get the DIs
LatLons=data.transpose()[2:].transpose() # get the Lat Lons
newdata=np.column_stack((DIs,a95,LatLons)) # stitch them back together
print (newdata)
vgps=np.array(pmag.dia_vgp(newdata)) # get a tuple with lat,lon,dp,dm, convert to array
print (vgps.transpose()) # print out the vgps
```
## dipole_pinc
[\[Essentials Chapter 2\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch2.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#dipole_pinc.py)
If we assume a geocentric axial dipole, we can calculate an expected inclination at a given latitude and that is what **dipole_pinc** does. It calls **pmag.pinc()** and so will we, to find the expected inclination at a paleolatitude of 24$^{\circ}$S!
```
help(pmag.pinc)
lat=-24
pmag.pinc(lat)
```
Or as an array
```
lats=range(-90,100,10)
incs=pmag.pinc(lats)
plt.plot(incs,lats)
plt.ylim(100,-100)
plt.xlabel('Latitude')
plt.ylabel('Inclination')
plt.axhline(0,color='black')
plt.axvline(0,color='black');
```
## dipole_plat
[\[Essentials Chapter 2\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch2.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#dipole_plat.py)
**dipole_plat** is similar to [dipole_pinc](#dipole_pinc) but calculates the paleolatitude from the inclination. We will call **pmag.plat()**:
```
help(pmag.plat)
inc=42
pmag.plat(inc)
```
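Both **dipole_pinc** and **dipole_plat** are applications of the geocentric axial dipole formula $\tan(I) = 2\tan(\lambda)$. As a sanity check, here are the formula and its inverse written out directly in numpy (the function names are ours, not PmagPy's):

```python
import numpy as np

def gad_inc(lat):
    """Expected GAD inclination (degrees) at latitude lat: tan(I) = 2 tan(lat)."""
    return np.degrees(np.arctan(2. * np.tan(np.radians(lat))))

def gad_lat(inc):
    """Inverse: paleolatitude from inclination, tan(lat) = tan(I) / 2."""
    return np.degrees(np.arctan(np.tan(np.radians(inc)) / 2.))

print(gad_inc(-24.))  # steeply negative inclination in the southern hemisphere
print(gad_lat(42.))   # paleolatitude implied by an inclination of 42
```

The round trip `gad_lat(gad_inc(lat))` returns the original latitude, which is a handy check on the sign conventions.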
## dir_cart
[\[Essentials Chapter 2\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch2.html#x15-1560002) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#dir_cart.py)
**pmag.dir2cart()** converts directions (Declination, Inclination, Intensity) to cartesian coordinates (X,Y,Z).
```
help(pmag.dir2cart)
# read in data file from example file
dirs=np.loadtxt('data_files/dir_cart/dir_cart_example.dat')
print ('Input: \n',dirs) # print out the directions
# print out the results
carts = pmag.dir2cart(dirs)
print ("Output: ")
for c in carts:
    print ('%8.4e %8.4e %8.4e'%(c[0],c[1],c[2]))
```
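The conversion itself is just spherical-to-cartesian trigonometry. Here is a self-contained numpy sketch of the round trip (our own function names, not the PmagPy implementations):

```python
import numpy as np

def dir2cart_sketch(dec, inc, inten=1.):
    """Direction (dec, inc, intensity) to cartesian (x=north, y=east, z=down)."""
    d, i = np.radians(dec), np.radians(inc)
    return np.array([inten*np.cos(d)*np.cos(i),
                     inten*np.sin(d)*np.cos(i),
                     inten*np.sin(i)])

def cart2dir_sketch(x, y, z):
    """Cartesian back to (dec, inc, intensity)."""
    R = np.sqrt(x**2 + y**2 + z**2)
    dec = np.degrees(np.arctan2(y, x)) % 360.
    inc = np.degrees(np.arcsin(z / R))
    return dec, inc, R

print(cart2dir_sketch(*dir2cart_sketch(350., -60., 3.)))  # round-trips to (350, -60, 3)
```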
## eigs_s
[\[Essentials Chapter 13\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch13.html#x15-156000813) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#eigs_s.py)
This program converts eigenparameters to the six tensor elements. This is the inverse of [s_eigs](#s_eigs).
There is a function **ipmag.eigs_s()** which will do this in a notebook:
```
help(ipmag.eigs_s)
Ss=ipmag.eigs_s(infile="eigs_s_example.dat", dir_path='data_files/eigs_s')
for s in Ss:
    print (s)
```
## eq_di
[\[Essentials Appendix B\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ap2.html#x21-227000B#x15-156000813) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#eq_di.py)
Data are frequently published as equal area projections and not listed in data tables. These data can be digitized as x,y data (assuming the outer rim is unity) and converted to approximate directions with the program **eq_di**. To use this program, install a graph digitizer (GraphClick from http://www.arizona-software.ch/graphclick/ works on Macs).
Digitize the data from the equal area projection saved in the file eqarea.png in the eq_di directory. You should only work on one hemisphere at a time (upper or lower) and save each hemisphere in its own file. Then you can convert the X,Y data to approximate dec and inc data - the quality of the data depends on your care in digitizing and the quality of the figure that you are digitizing.
Here we will try this out on a datafile already prepared, which contains the digitized data from the lower hemisphere of a plot. You can check your work with [eqarea](#eqarea).
To do this in a notebook, we can use **pmag.doeqdi()**.
```
help(pmag.doeqdi)
# read in the data into an array
# x is assumed first column, y, second
xy=np.loadtxt('data_files/eq_di/eq_di_example.dat').transpose()
decs,incs=pmag.doeqdi(xy[0],xy[1])
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,color='r',edge='black')
```
## fcalc
**pmag.fcalc()** returns the values of an F-test from an F table.
```
help(pmag.fcalc)
```
## fisher
[\[Essentials Chapter 11\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch11.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#fisher.py)
**fisher** draws $N$ directions from a Fisher distribution with specified $\kappa$ and a vertical mean. (For other directions see [fishrot](#fishrot)). To do this, we can just call the function **pmag.fshdev()** $N$ times.
```
help(pmag.fshdev)
# set the number, N, and kappa
N,kappa=100,20
# a basket to put our fish in
fish=[]
# get the Fisherian deviates
for k in range(N):
    dec,inc=pmag.fshdev(kappa)
    fish.append([dec,inc])
ipmag.plot_net(1)
ipmag.plot_di(di_block=fish,color='r',edge='black')
```
## fishrot
[\[Essentials Chapter 11\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch11.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#fishrot.py)
This program is similar to [fisher](#fisher), but allows you to specify the mean direction.
This has been implemented as **ipmag.fishrot()**.
```
help(ipmag.fishrot)
rotdi=ipmag.fishrot(k=50,n=5,dec=33,inc=41)
for di in rotdi:
    print ('%7.1f %7.1f'%(di[0],di[1]))
ipmag.plot_net(1)
ipmag.plot_di(di_block=rotdi)
```
## flip
Fisher statistics requires unimodal data (all in one direction with no reversals) but many paleomagnetic data sets are bimodal. To flip bimodal data into a single mode, we can use **pmag.flip( )**. This function calculates the principal direction and flips all the 'reverse' data to the 'normal' direction along the principal axis.
```
help(pmag.flip)
#read in the data into an array
vectors=np.loadtxt('data_files/eqarea_ell/tk03.out').transpose()
di_block=vectors[0:2].transpose() # decs are di_block[0], incs are di_block[1]
# flip the reverse directions to their normal antipodes
normal,flipped=pmag.flip(di_block)
# and plot them up
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red')
ipmag.plot_di(di_block=flipped,color='b')
```
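The flipping step itself is simple: the antipode of a direction adds 180$^{\circ}$ to the declination (mod 360) and negates the inclination; **pmag.flip()** uses the principal axis to decide which points need it. A sketch of the antipode operation (our own function name):

```python
import numpy as np

def antipode(dec, inc):
    """Antipode of a direction: add 180 to declination (mod 360), negate inclination."""
    return (np.asarray(dec) + 180.) % 360., -np.asarray(inc)

print(antipode(350., -45.))  # -> (170.0, 45.0)
```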
## gaussian
[\[Essentials Chapter 11\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch11.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#gaussian.py)
This program generates sets of data drawn from a normal distribution with a given mean and standard deviation. It is just a wrapper for a call to **pmag.gaussdev()** which just calls **numpy.random.normal()** which we could do, but we would have to import it, so it is easiest just to call the **pmag** version which we have already imported.
```
help(pmag.gaussdev)
N=1000
bins=100
norm=pmag.gaussdev(10,3,N)
plt.hist(norm,bins=bins,color='black',histtype='step',density=True)
plt.xlabel('Gaussian Deviates')
plt.ylabel('Frequency');
# alternatively we can plot with ipmag.histplot:
ipmag.histplot(data=norm, xlab='Gaussian Deviates', save_plots=False, norm=-1)
```
## gobing
[\[Essentials Chapter 12\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch12.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#gobing.py)
**gobing** calculates Bingham statistics for sets of directional data (see the section for eqarea_ell in the PmagPy_plots_analysis documentation for nice examples). We do this by calling **pmag.dobingham()**.
```
help(pmag.dobingham)
di_block=np.loadtxt('data_files/gobing/gobing_example.txt')
pmag.dobingham(di_block)
```
## gofish
[\[Essentials Chapter 11\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch11.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#gofish.py)
**gofish** calculates Fisher statistics for sets of directional data. (see the section for eqarea_ell in the PmagPy_plots_analysis documentation for nice examples).
This can be done with **ipmag.fisher_mean()**.
```
help(ipmag.fisher_mean)
di_block=np.loadtxt('data_files/gofish/fishrot.out')
ipmag.fisher_mean(di_block=di_block)
```
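For reference, the Fisher statistics themselves are straightforward in numpy: sum the unit vectors, take the resultant length $R$, and use the standard estimates $k=(N-1)/(N-R)$ and the $\alpha_{95}$ formula. This sketch is our own function, not the PmagPy one, but it should agree closely with **ipmag.fisher_mean()**:

```python
import numpy as np

def fisher_mean_sketch(decs, incs):
    """Fisher mean direction with k and alpha95 (degrees in, degrees out)."""
    d, i = np.radians(decs), np.radians(incs)
    xyz = np.array([np.cos(d)*np.cos(i), np.sin(d)*np.cos(i), np.sin(i)])
    X = xyz.sum(axis=1)                    # vector sum of the unit vectors
    R = np.sqrt((X**2).sum())              # resultant length
    N = len(decs)
    mdec = np.degrees(np.arctan2(X[1], X[0])) % 360.
    minc = np.degrees(np.arcsin(X[2] / R))
    k = (N - 1.) / (N - R)                 # Fisher precision parameter
    a95 = np.degrees(np.arccos(1. - (N - R)/R * ((1./0.05)**(1./(N - 1.)) - 1.)))
    return {'dec': mdec, 'inc': minc, 'n': N, 'r': R, 'k': k, 'alpha95': a95}

stats = fisher_mean_sketch([10., 350., 0., 5., 355.], [44., 46., 45., 43., 47.])
print(stats['dec'], stats['inc'], stats['k'], stats['alpha95'])
```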
### fisher mean on pandas DataFrames
There is also a function **pmag.dir_df_fisher_mean()** that calculates Fisher statistics on a Pandas DataFrame with directional data
```
help(pmag.dir_df_fisher_mean)
# make the data frame
dir_df=pd.read_csv('data_files/gofish/fishrot.out',delim_whitespace=True, header=None)
dir_df.columns=['dir_dec','dir_inc']
pmag.dir_df_fisher_mean(dir_df)
```
## gokent
[\[Essentials Chapter 12\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch12.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#gokent.py)
With **gokent** we can calculate Kent statistics on sets of directional data (see the section for eqarea_ell in the PmagPy_plots_analysis documentation for nice examples).
This calls **pmag.dokent()** (see also **eqarea_ell** example)
```
help(pmag.dokent)
di_block=np.loadtxt('data_files/gokent/gokent_example.txt')
pmag.dokent(di_block,di_block.shape[0])
```
## goprinc
[\[Essentials Chapter 12\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch12.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#goprinc.py)
**goprinc** calculates the principal directions (and their eigenvalues) for sets of paleomagnetic vectors. It doesn't do any statistics on them, unlike the other programs.
We will call **pmag.doprinc()**:
```
help(pmag.doprinc)
di_block=np.loadtxt('data_files/goprinc/goprinc_example.txt')
pmag.doprinc(di_block)
```
## igrf
[\[Essentials Chapter 2\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch2.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#igrf.py)
This program gives geomagnetic field vector data for a specified place at a specified time. It has many built in models including IGRFs, GUFM and several archeomagnetic models. It calls the function **ipmag.igrf()** for this so that is what we will do.
```
help(ipmag.igrf)
```
We will calculate the field for San Diego from 3000 BCE to 1950 in 50 year increments using the hfm.OL1.A1 model of Constable et al. (2016, doi: 10.1016/j.epsl.2016.08.015).
```
# make a list of desired dates
dates=range(-3000,1950,50) # list of dates in +/- Common Era
mod = 'hfm10k' # choose the desired model
lat,lon,alt=33,-117,0 # desired latitude, longitude and altitude
Vecs=[] # list for Dec,Inc,Int outputs
for date in dates: # step through the dates
    Vecs.append(ipmag.igrf([date,alt,lat,lon],mod=mod)) # append to list
vector_df = pd.DataFrame(Vecs) # make it into a Pandas dataframe
vector_df.columns=['dec','inc','int']
vector_df['vadms']=pmag.b_vdm(vector_df.int.values*1e-9, lat) # calculate the VADMs
vector_df['dec_adj']=vector_df['dec']
vector_df.loc[vector_df.dec>180,['dec_adj']]=vector_df.dec-360 # adjust declinations to be -180 => 180
fig=plt.figure(1,figsize=(7,9)) # set up the figure
fig.add_subplot(411) # make 4 rows of plots, this is the first
plt.plot(dates,vector_df.dec_adj) # plot the adjusted declinations
plt.ylabel('Declination ($^{\circ}$)')
plt.title('Geomagnetic field evaluated at Lat: '+str(lat)+' / Lon: '+str(lon))
fig.add_subplot(412) # this is the second
plt.plot(dates,vector_df.inc) # plot the inclinations
plt.ylabel('Inclination ($^{\circ}$)')
fig.add_subplot(413)
plt.plot(dates,vector_df.int*1e-3) # plot the intensites (in uT instead of nT)
plt.ylabel('Intensity ($\mu$T)')
fig.add_subplot(414) # plot the VADMs
plt.plot(dates,vector_df.vadms*1e-21) # plot as ZAm^2
plt.ylabel('VADM (ZAm$^2$)')
plt.xlabel('Dates (CE)');
```
## incfish
[\[Essentials Chapter 11\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch11.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#incfish.py)
You can't get a meaningful average inclination from inclination-only data because of the exponential relationship between inclination and the true mean inclination for Fisher distributions (except exactly at the pole and the equator). So, McFadden and Reid (1982, doi: 10.1111/j.1365-246X.1982.tb04950.x) developed a maximum likelihood estimate of the true mean inclination in the absence of declinations. **pmag.doincfish()** is an implementation of that approach.
```
help(pmag.doincfish)
incs=np.loadtxt('data_files/incfish/incfish_example_inc.dat')
pmag.doincfish(incs)
```
## pca
[\[Essentials Chapter 11\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch11.html) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#pca.py)
**pca** calculates best-fit lines, planes or Fisher means through selected treatment steps along with Kirschvink (1980, doi: 10.1111/j.1365-246X.1980.tb02601.x) MAD values. The file format is a simple space delimited file with specimen name, treatment step, intensity, declination and inclination. **pca.py** calls **pmag.domean()**, so that is what we will do here.
```
help(pmag.domean)
# read in data as space delimited file
data=pd.read_csv('data_files/pca/pca_example.txt',\
                 delim_whitespace=True,header=None)
# we need to add a column for quality
data['quality']='g'
# strip off the specimen name and reorder records
# from: int,dec,inc to: dec,inc,int
data=data[[1,3,4,2,'quality']].values.tolist()
pmag.domean(data,1,10,'DE-BFL')
```
## pt_rot
[\[Essentials Chapter 16\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch16.html)
[\[Essentials Appendix A.3.5\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ap1.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#pt_rot.py)
This program finds rotation poles for a specified location, age and destination plate, then rotates the point into the destination plate coordinates using the rotations and methods described in Essentials Appendix A.3.5.
This can be done for you using the function **frp.get_pole()** in the finite rotation pole module called **pmagpy.frp**. You then call **pmag.pt_rot()** to do the rotation. Let's use this to rotate the Cretaceous poles from Europe (same data as in the polemap_magic example) into South African coordinates.
```
# need to load this special module
import pmagpy.frp as frp
help(frp.get_pole)
Prot=frp.get_pole('eur',100)
Prot
help(pmag.pt_rot)
data=pd.read_csv('data_files/polemap_magic/locations.txt',sep='\t',header=1)
lats=data['pole_lat'].values
lons=data['pole_lon'].values
RLats,RLons=rot_pts=pmag.pt_rot(Prot,lats,lons)
```
And now we can plot them using **pmagplotlib.plot_map()**
```
Opts={}
Opts['sym']='wo' # sets the symbol
Opts['symsize']=10
Opts['proj']='ortho'
Opts['edge']='black'
Opts['lat_0']=90
Opts['details']={}
Opts['details']['fancy']=True # warning : this option takes a few minutes
if has_cartopy:
    plt.figure(1,(6,6)) # optional - make a map
    pmagplotlib.plot_map(1, RLats, RLons, Opts)
elif has_basemap:
    plt.figure(1,(6,6)) # optional - make a map
    pmagplotlib.plot_map_basemap(1, RLats, RLons, Opts)
```
## s_eigs
[\[Essentials Chapter 13\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch13.html#x15-156000813) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#s_eigs.py)
This program converts the six tensor elements to eigenparameters - the inverse of [eigs_s](#eigs_s).
We can call the function **pmag.doseigs()** from the notebook.
```
help(pmag.doseigs)
Ss=np.loadtxt('data_files/s_eigs/s_eigs_example.dat')
for s in Ss:
    tau,V=pmag.doseigs(s)
    print ('%f %8.2f %8.2f %f %8.2f %8.2f %f %8.2f %8.2f'%\
           (tau[2],V[2][0],V[2][1],tau[1],V[1][0],V[1][1],tau[0],V[0][0],V[0][1]))
```
## s_geo
[\[Essentials Chapter 13\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch13.html#x15-156000813) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#s_geo.py)
**s_geo** takes the 6 tensor elements in specimen coordinates and applies the rotation similar to [**di_geo**](#di_geo). To do this we will call **pmag.dosgeo()** from within the notebook.
```
help(pmag.dosgeo)
Ss=np.loadtxt('data_files/s_geo/s_geo_example.dat')
for s in Ss:
    print(pmag.dosgeo(s[0:6],s[6],s[7]))
```
## s_hext
[\[Essentials Chapter 13\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch13.html#x15-156000813) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#s_hext.py)
**s_hext** calculates Hext (1963, doi: 10.2307/2333905) statistics for anisotropy data in the six tensor element format.
It calls **pmag.dohext()**.
```
help(pmag.dohext)
```
We are working with data that have no sigmas attached to them and want to average all the values in the file together. Let's look at the rotated data from the [**s_geo**](#s_geo) example.
```
# read in the data
Ss=np.loadtxt('data_files/s_geo/s_geo_example.dat')
# make a container for the rotated S values
SGeos=[]
for s in Ss:
    SGeos.append(pmag.dosgeo(s[0:6],s[6],s[7]))
nf,sigma,avs=pmag.sbar(SGeos) # get the average over all the data
hpars=pmag.dohext(nf,sigma,avs)
print(hpars)
```
## s_magic
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#s_magic.py)
NEED TO ADD THIS ONE....
## s_tilt
[\[Essentials Chapter 13\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch13.html#x15-156000813) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#s_tilt.py)
**s_tilt** takes the 6 tensor elements in geographic coordinates and applies the rotation similar to [**di_tilt**](#di_tilt) into stratigraphic coordinates. It calls **pmag.dostilt()**. But be careful! **s_tilt.py** (the command line program) assumes that the bedding info is the strike, with the dip to the right of strike unlike **pmag.dostilt** which assumes that the azimuth is the dip direction.
```
help(pmag.dostilt)
# note that the data in this example are Ss and strike and dip (not bed_az,bed_pl)
Ss=np.loadtxt('data_files/s_tilt/s_tilt_example.dat')
for s in Ss:
    print(pmag.dostilt(s[0:6],s[6]+90.,s[7])) # make the bedding azimuth dip direction, not strike.
```
## scalc
[\[Essentials Chapter 14\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch14.html#x15-156000813) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#scalc.py)
This program reads in data files with vgp_lon, vgp_lat and optional kappa, N, and site latitude.
It allows some filtering based on the requirements of the study, such as:
- Fisher k cutoff
- VGP latitudinal cutoff
- Vandamme (1994, doi: 10.1016/0031-9201(94)90012-4) iterative cutoff
- flipping the reverse mode to antipodes
- rotating the principal direction to the spin axis
- bootstrap confidence bounds
- optionally calculating the scatter (Sp or Sf of McElhinny & McFadden, 1997) of VGPs, with correction for within-site scatter
The filtering is just what **Pandas** was designed for, so we can call **pmag.scalc_vgp_df()**, which works on a suitably constructed **Pandas** DataFrame.
```
help(pmag.scalc_vgp_df)
```
To just calculate the value of S (without the within site scatter) we read in a data file and attach the correct headers to it depending on what is in it.
```
vgp_df=pd.read_csv('data_files/scalc/scalc_example.txt',delim_whitespace=True,header=None)
if len(list(vgp_df.columns))==2:
    vgp_df.columns=['vgp_lon','vgp_lat']
    vgp_df['dir_k'],vgp_df['dir_n'],vgp_df['lat']=0,0,0
else:
    vgp_df.columns=['vgp_lon','vgp_lat','dir_k','dir_n_samples','lat']
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
```
To apply a cutoff for the Fisher k value, we just filter the DataFrame prior to calculating S_b. Let's filter for kappa>50
```
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,kappa=50)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
```
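The kappa keyword is equivalent to filtering the DataFrame yourself before the call. Here is the explicit **Pandas** version of that cutoff on a small hypothetical table (the column values below are made up for illustration):

```python
import pandas as pd

# hypothetical VGP table with a Fisher k column per site
vgp_df = pd.DataFrame({'vgp_lon': [10., 200., 355.],
                       'vgp_lat': [80., -75., 85.],
                       'dir_k':   [120., 30., 60.]})

# keep only sites with kappa > 50 -- the same cutoff the kappa keyword applies
filtered = vgp_df[vgp_df.dir_k > 50.]
print(len(filtered))  # 2 sites survive the cutoff
```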
To apply the Vandamme (1994) approach, we set v to True
```
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,v=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
```
To flip the "reverse" directions, we set anti to 1
```
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,anti=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
```
And, to do relative to the spin axis, set spin to True:
```
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,spin=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
```
## scalc_magic
[\[Essentials Chapter 14\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch14.html#x15-156000813) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#scalc_magic.py)
This program does the same thing as [**scalc**](#scalc), but reads in a MagIC formatted file. So, we can do that easy-peasy.
```
vgp_df=pd.read_csv('data_files/scalc_magic/sites.txt',sep='\t',header=1)
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,anti=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
vgp_df=pd.read_csv('data_files/scalc_magic/sites.txt',sep='\t',header=1)
N,S_B,low,high,cutoff=pmag.scalc_vgp_df(vgp_df,anti=True,spin=True)
print(N, '%7.1f %7.1f ' % (S_B, cutoff))
```
## separate_directions
Like [pmag.flip( )](#flip), **pmag.separate_directions** divides a directional data set into two modes. Unlike [pmag.flip( )](#flip), it returns the two separate modes (e.g., normal and reverse)
```
help(pmag.separate_directions)
#read in the data into an array
vectors=np.loadtxt('data_files/eqarea_ell/tk03.out').transpose()
di_block=vectors[0:2].transpose() # decs are di_block[0], incs are di_block[1]
# separate the directions into normal and reverse modes
normal,reverse=pmag.separate_directions(di_block)
# and plot them up
ipmag.plot_net(1)
ipmag.plot_di(di_block=normal,color='red')
ipmag.plot_di(di_block=reverse,color='b')
```
## squish
[\[Essentials Chapter 7\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch7.html#x15-156000813) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#squish.py)
This program reads in dec/inc data and "squishes" the inclinations using the formula from King
(1955, doi: 10.1111/j.1365-246X.1955.tb06558.x) $\tan(I_o)=flat \tan(I_f)$. \[See also [unsquish](#unsquish)\].
We can call **pmag.squish()** from within the notebook.
```
help(pmag.squish)
di_block=np.loadtxt('data_files/squish/squish_example.dat').transpose()
decs=di_block[0]
incs=di_block[1]
flat=0.4
fincs=pmag.squish(incs,flat)
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,title='Original',color='blue')
ipmag.plot_net(2)
ipmag.plot_di(dec=decs,inc=fincs,title='Squished',color='red')
```
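The squishing operation is just the King (1955) relation applied element-wise; here it is as a one-line numpy function (our own name, assuming **pmag.squish()** implements the same formula):

```python
import numpy as np

def squish_sketch(incs, flat):
    """Flatten inclinations: tan(I_o) = flat * tan(I_f) (King, 1955)."""
    return np.degrees(np.arctan(flat * np.tan(np.radians(incs))))

print(squish_sketch(45., 0.4))  # tan(45)=1, so the squished inclination is arctan(0.4)
```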
## stats
[\[Essentials Chapter 11\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch11.html#x15-156000813) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#stats.py)
This program just calculates the N, mean, sum, sigma and sigma % for data. There are numerous ways to do that in **Numpy**, so let's just use those.
```
data=np.loadtxt('data_files/gaussian/gauss.out')
print (data.shape[0],data.mean(),data.sum(),data.std())
```
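The original **stats** program also reports sigma as a percentage of the mean, which is just one more line of **Numpy**. A sketch on synthetic data (the random draw below stands in for gauss.out):

```python
import numpy as np

data = np.random.default_rng(0).normal(10., 3., 1000)  # stand-in for gauss.out
N, mean, total, sigma = data.shape[0], data.mean(), data.sum(), data.std(ddof=1)
sigma_pct = 100. * sigma / mean  # sigma as a percentage of the mean
print(N, mean, total, sigma, sigma_pct)
```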
## strip_magic
[\[Essentials Chapter 15\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch15.html)
[\[MagIC Database\]](https://earthref.org/MagIC)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#strip_magic.py)
We can do this easily using the wonders of **Pandas** and **matplotlib** as demonstrated here.
```
# read in the data
data=pd.read_csv('data_files/strip_magic/sites.txt',sep='\t',header=1)
# see what's there
data.columns
# you might have to use **df.dropna()** to clean off unwanted NaN lines or other data massaging
# but not for this example
plt.figure(1,(10,4)) # make the figure
plt.plot(data.age,data.vgp_lat,'b-') # plot as blue line
plt.plot(data.age,data.vgp_lat,'ro',markeredgecolor="black") # plot as red dots with black rims
plt.xlabel('Age (Ma)') # label the time axis
plt.ylabel('VGP Lat.$^{\circ}$')
plt.ylim(-90,90) # set the plot limits
plt.axhline(color='black'); # put on a zero line
```
## sundec
[\[Essentials Chapter 9\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch9.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#sundec.py)
Paleomagnetists often use the sun to orient their cores, especially if the sampling site is strongly magnetic and would deflect the magnetic compass. The information required is: where are you (e.g., latitude and longitude), what day is it, what time is it in Greenwich Mean Time (a.k.a. Universal Time) and where is the sun (e.g., the antipode of the angle the shadow of a gnomon makes with the desired direction)?
This calculation is surprisingly accurate and is implemented in the function
**pmag.dosundec()**.
```
help(pmag.dosundec)
```
Say you (or your elderly colleague) were located at 35$^{\circ}$ N and 33$^{\circ}$ E. The local time was three hours ahead of Universal Time. The shadow angle for the drilling direction was 68$^{\circ}$ measured at 16:09 on May 23, 1994. **pmag.dosundec()** requires a dictionary with the necessary information:
```
sundata={'delta_u':3,'lat':35,'lon':33,\
         'date':'1994:05:23:16:9','shadow_angle':68}
print ('%7.1f'%(pmag.dosundec(sundata)))
```
## tk03
[\[Essentials Chapter 16\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch16.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#tk03.py)
Sometimes it is useful to generate a distribution of synthetic geomagnetic field vectors that you might expect to find from paleosecular variation of the geomagnetic field. The program **tk03** generates distributions of field vectors from the PSV model of Tauxe and Kent (2004, doi: 10.1029/145GM08). This program was implemented for notebook use as **ipmag.tk03()**. \[See also [**find_ei**](#find_ei)\].
```
help(ipmag.tk03)
di_block=ipmag.tk03(lat=30)
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
```
## uniform
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#uniform.py)
It is at times handy to be able to generate a uniformly distributed set of directions (or geographic locations). This is done using a technique described by Fisher et al. (Fisher, N. I., Lewis, T., & Embleton, B. J. J. (1987). Statistical Analysis of Spherical Data. Cambridge: Cambridge University Press). We do this by calling **pmag.get_unf()**.
```
help(pmag.get_unf)
di_block=pmag.get_unf()
ipmag.plot_net(1)
ipmag.plot_di(di_block=di_block,color='red',edge='black')
```
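The underlying trick for uniform directions is to draw the declination uniformly on [0, 360) and the inclination as the arcsine of a deviate uniform on [-1, 1], so that equal areas on the sphere get equal probability. A numpy sketch (the function name is ours):

```python
import numpy as np

def uniform_directions(n, seed=0):
    """n directions uniformly distributed on the sphere."""
    rng = np.random.default_rng(seed)
    decs = rng.uniform(0., 360., n)                       # uniform azimuth
    incs = np.degrees(np.arcsin(rng.uniform(-1., 1., n))) # equal-area inclination
    return decs, incs

decs, incs = uniform_directions(1000)
print(np.sin(np.radians(incs)).mean())  # near zero for a uniform distribution
```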
## unsquish
[\[Essentials Chapter 7\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch7.html#x15-156000813)
[\[Essentials Chapter 16\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch16.html#x15-156000813)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#unsquish.py)
This program is just the inverse of [**squish**](#squish) in that it takes "squished" data and "unsquishes" them, assuming a King (1955, doi: 10.1111/j.1365-246X.1955.tb06558.x) relationship: $\tan(I_o)=flat \tan(I_f)$. So, $\tan(I_f) = \tan(I_o)/flat$.
It calls **pmag.unsquish()**.
```
help(pmag.unsquish)
di_block=np.loadtxt('data_files/unsquish/unsquish_example.dat').transpose()
decs=di_block[0]
incs=di_block[1]
flat=.4
fincs=pmag.unsquish(incs,flat)
ipmag.plot_net(1)
ipmag.plot_di(dec=decs,inc=incs,title='Squished',color='red')
ipmag.plot_net(2)
ipmag.plot_di(dec=decs,inc=fincs,title='Unsquished',color='blue')
```
## vdm_b
[\[Essentials Chapter 2\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch2.html#x15-1560002) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#vdm_b.py)
**vdm_b** is the inverse of [**b_vdm**](#b_vdm) in that it converts a virtual \[axial\] dipole moment (vdm or vadm) to a predicted geomagnetic field intensity observed at the earth's surface at a particular (paleo)latitude. This program calls **pmag.vdm_b()**.
```
help(pmag.vdm_b)
print ('%7.1f microtesla'%(pmag.vdm_b(7.159e22,22)*1e6))
```
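The conversion is the dipole field formula $B = \frac{\mu_0}{4\pi}\frac{VDM}{r^3}\sqrt{1+3\sin^2\lambda}$. A numpy sketch (our own function, assuming an Earth radius of 6371 km) should agree with **pmag.vdm_b()**:

```python
import numpy as np

def vdm_to_b(vdm, lat):
    """Surface field (T) of a dipole of moment vdm (A m^2) at latitude lat (deg):
    B = (mu0 / 4 pi) * vdm / r^3 * sqrt(1 + 3 sin^2(lat))."""
    mu0_over_4pi = 1e-7   # T m / A
    r = 6.371e6           # Earth radius in m (assumed)
    return mu0_over_4pi * vdm / r**3 * np.sqrt(1. + 3. * np.sin(np.radians(lat))**2)

print('%7.1f microtesla' % (vdm_to_b(7.159e22, 22) * 1e6))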
## vector_mean
[\[Essentials Chapter 2\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch2.html#x15-1560002) [\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#vector_mean.py)
**vector_mean** calculates the vector mean for a set of vectors in polar coordinates (e.g., declination, inclination, intensity). This is similar to the Fisher mean ([**gofish**](#gofish)) but uses vector length instead of unit vectors. It calls **pmag.vector_mean()**.
```
help(pmag.vector_mean)
data=np.loadtxt('data_files/vector_mean/vector_mean_example.dat')
Dir,R=pmag.vector_mean(data)
print (('%i %7.1f %7.1f %f')%(data.shape[0],Dir[0],Dir[1],R))
```
## vgp_di
[\[Essentials Chapter 2\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch2.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#vgp_di.py)
We use **vgp_di** to convert virtual geomagnetic pole positions to predicted directions at a given location. \[See also [**di_vgp**](#di_vgp)\].
This program uses the function **pmag.vgp_di()**.
```
help(pmag.vgp_di)
d,i=pmag.vgp_di(68,191,33,243)
print ('%7.1f %7.1f'%(d,i))
```
## watsons_f
[\[Essentials Chapter 11\]](https://earthref.org/MagIC/books/Tauxe/Essentials/WebBook3ch11.html)
[\[command line version\]](https://pmagpy.github.io/PmagPy-cli.html#watsons_f.py)
There are several different ways of testing whether two sets of directional data share a common mean. One popular (although perhaps not the best) way is to use Watson's F test (Watson, 1956, doi: 10.1111/j.1365-246X.1956.tb05560.x). \[See also [**watsons_v**](#watsons_v) or Lisa Tauxe's bootstrap way: [**common_mean**](#common_mean)\].
If you still want to use Watson's F, then try **pmag.watsons_f()** for this.
```
help(pmag.watsons_f)
DI1=np.loadtxt('data_files/watsons_f/watsons_f_example_file1.dat')
DI2=np.loadtxt('data_files/watsons_f/watsons_f_example_file2.dat')
F,Fcrit=pmag.watsons_f(DI1,DI2)
print ('%7.2f %7.2f'%(F,Fcrit))
```
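The statistic itself is simple to sketch from resultant vector lengths (an illustrative implementation of the standard form of Watson's F; unlike **pmag.watsons_f()** it does not compute the critical value):

```python
import numpy as np

def resultant_length(DI):
    """Resultant length R of unit vectors for an (n, 2) array of dec, inc in degrees."""
    d, i = np.radians(DI[:, 0]), np.radians(DI[:, 1])
    x, y, z = np.cos(i) * np.cos(d), np.cos(i) * np.sin(d), np.sin(i)
    return np.sqrt(x.sum()**2 + y.sum()**2 + z.sum()**2)

def watsons_F(DI1, DI2):
    """Watson (1956) F statistic for a common mean of two directional data sets."""
    N = len(DI1) + len(DI2)
    R1, R2 = resultant_length(DI1), resultant_length(DI2)
    R = resultant_length(np.vstack((DI1, DI2)))
    return (N - 2) * (R1 + R2 - R) / (N - R1 - R2)
```

Two identical data sets give F = 0; the larger F is relative to the critical value, the less consistent the two sets are with a common mean.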
<a href="https://colab.research.google.com/github/Uzor13/GitStarter/blob/master/TSP.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
"""Simple travelling salesman problem between cities."""
from ortools.constraint_solver import routing_enums_pb2
from ortools.constraint_solver import pywrapcp
def create_data_model():
"""Stores the data for the problem."""
data = {}
data['distance_matrix'] = [
[0, 10, 15, 20],
[5, 0, 9, 10],
[6, 13, 0, 12],
[8, 8, 9, 0]
] # yapf: disable
data['num_vehicles'] = 1
data['depot'] = 0
return data
def print_solution(manager, routing, solution):
"""Prints solution on console."""
print('Objective: {} miles'.format(solution.ObjectiveValue()))
index = routing.Start(0)
plan_output = 'Route for vehicle 0:\n'
route_distance = 0
while not routing.IsEnd(index):
plan_output += ' {} ->'.format(manager.IndexToNode(index))
previous_index = index
index = solution.Value(routing.NextVar(index))
route_distance += routing.GetArcCostForVehicle(previous_index, index, 0)
plan_output += ' {}\n'.format(manager.IndexToNode(index))
plan_output += 'Route distance: {} miles\n'.format(route_distance)
print(plan_output)
def main():
"""Entry point of the program."""
# Instantiate the data problem.
data = create_data_model()
# Create the routing index manager.
manager = pywrapcp.RoutingIndexManager(len(data['distance_matrix']),
data['num_vehicles'], data['depot'])
# Create Routing Model.
routing = pywrapcp.RoutingModel(manager)
def distance_callback(from_index, to_index):
"""Returns the distance between the two nodes."""
# Convert from routing variable Index to distance matrix NodeIndex.
from_node = manager.IndexToNode(from_index)
to_node = manager.IndexToNode(to_index)
return data['distance_matrix'][from_node][to_node]
transit_callback_index = routing.RegisterTransitCallback(distance_callback)
# Define cost of each arc.
routing.SetArcCostEvaluatorOfAllVehicles(transit_callback_index)
# Setting first solution heuristic.
search_parameters = pywrapcp.DefaultRoutingSearchParameters()
search_parameters.first_solution_strategy = (
routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC)
# Solve the problem.
solution = routing.SolveWithParameters(search_parameters)
# Print solution on console.
if solution:
print_solution(manager, routing, solution)
if __name__ == '__main__':
main()
```
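The objective printed above is just the sum of arc costs along the route, read off the distance matrix; a standalone sketch of that accounting (no OR-Tools required, and the routes shown are illustrative rather than the solver's output):

```python
def route_cost(route, dist):
    """Sum the arc costs along a route given as a node list, e.g. [0, 1, 3, 2, 0]."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

# Same distance matrix as in create_data_model() above
dist = [
    [0, 10, 15, 20],
    [5, 0, 9, 10],
    [6, 13, 0, 12],
    [8, 8, 9, 0],
]
print(route_cost([0, 1, 3, 2, 0], dist))  # 10 + 10 + 9 + 6 = 35
```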
```
import numpy as np
import pandas as pd
import torch
import torchvision
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from matplotlib import pyplot as plt
%matplotlib inline
class MosaicDataset1(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list, mosaic_label,fore_idx):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list
self.label = mosaic_label
self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] , self.fore_idx[idx]
# data = [{"mosaic_list":mosaic_list_of_images, "mosaic_label": mosaic_label, "fore_idx":fore_idx}]
# np.save("mosaic_data.npy",data)
data = np.load("type4_data.npy",allow_pickle=True)
mosaic_list_of_images = data[0]["mosaic_list"]
mosaic_label = data[0]["mosaic_label"]
fore_idx = data[0]["fore_idx"]
batch = 250
msd = MosaicDataset1(mosaic_list_of_images, mosaic_label, fore_idx)
train_loader = DataLoader( msd,batch_size= batch ,shuffle=True)
class Focus_deep(nn.Module):
'''
deep focus network averaged at zeroth layer
input : elemental data
'''
def __init__(self,inputs,output,K,d):
super(Focus_deep,self).__init__()
self.inputs = inputs
self.output = output
self.K = K
self.d = d
self.linear1 = nn.Linear(self.inputs,300) #,self.output)
#self.linear2 = nn.Linear(6,12)
self.linear2 = nn.Linear(300,self.output)
def forward(self,z):
batch = z.shape[0]
x = torch.zeros([batch,self.K],dtype=torch.float64)
y = torch.zeros([batch,300], dtype=torch.float64)
features = torch.zeros([batch,self.K,300],dtype=torch.float64)
x,y = x.to("cuda"),y.to("cuda")
features = features.to("cuda")
for i in range(self.K):
alp,ftrs = self.helper(z[:,i] ) # self.d*i:self.d*i+self.d
x[:,i] = alp[:,0]
features[:,i] = ftrs
x = F.softmax(x,dim=1) # alphas
for i in range(self.K):
x1 = x[:,i]
y = y+torch.mul(x1[:,None],features[:,i]) # self.d*i:self.d*i+self.d
return y , x
def helper(self,x):
x = self.linear1(x)
x1 = F.tanh(x)
x = F.relu(x)
#x = F.relu(self.linear2(x))
x = self.linear2(x)
#print(x1.shape)
return x,x1
class Classification_deep(nn.Module):
'''
input : elemental data
deep classification module data averaged at zeroth layer
'''
def __init__(self,inputs,output):
super(Classification_deep,self).__init__()
self.inputs = inputs
self.output = output
self.linear1 = nn.Linear(self.inputs,200)
#self.linear2 = nn.Linear(6,12)
self.linear2 = nn.Linear(200,self.output)
def forward(self,x):
x = F.relu(self.linear1(x))
#x = F.relu(self.linear2(x))
x = self.linear2(x)
return x
def calculate_attn_loss(dataloader,what,where,criter):
what.eval()
where.eval()
r_loss = 0
alphas = []
lbls = []
pred = []
fidices = []
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels,fidx = data
lbls.append(labels)
fidices.append(fidx)
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
avg,alpha = where(inputs)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
alphas.append(alpha.cpu().numpy())
loss = criter(outputs, labels)
r_loss += loss.item()
alphas = np.concatenate(alphas,axis=0)
pred = np.concatenate(pred,axis=0)
lbls = np.concatenate(lbls,axis=0)
fidices = np.concatenate(fidices,axis=0)
#print(alphas.shape,pred.shape,lbls.shape,fidices.shape)
analysis = analyse_data(alphas,lbls,pred,fidices)
return r_loss/(i+1),analysis  # average loss per batch (enumerate starts at 0)
def analyse_data(alphas,lbls,predicted,f_idx):
'''
analysis data is created here
'''
batch = len(predicted)
amth,alth,ftpt,ffpt,ftpf,ffpf = 0,0,0,0,0,0
for j in range (batch):
focus = np.argmax(alphas[j])
if(alphas[j][focus] >= 0.5):
amth +=1
else:
alth +=1
if(focus == f_idx[j] and predicted[j] == lbls[j]):
ftpt += 1
elif(focus != f_idx[j] and predicted[j] == lbls[j]):
ffpt +=1
elif(focus == f_idx[j] and predicted[j] != lbls[j]):
ftpf +=1
elif(focus != f_idx[j] and predicted[j] != lbls[j]):
ffpf +=1
#print(sum(predicted==lbls),ftpt+ffpt)
return [ftpt,ffpt,ftpf,ffpf,amth,alth]
number_runs = 20
FTPT_analysis = pd.DataFrame(columns = ["FTPT","FFPT", "FTPF","FFPF"])
for n in range(number_runs):
print("--"*40)
# instantiate focus and classification Model
torch.manual_seed(n)
where = Focus_deep(2,1,9,2).double()
torch.manual_seed(n)
what = Classification_deep(300,3).double()
where = where.to("cuda")
what = what.to("cuda")
# instantiate optimizer
optimizer_where = optim.Adam(where.parameters(),lr =0.01)
optimizer_what = optim.Adam(what.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
acti = []
analysis_data = []
loss_curi = []
epochs = 1000
# calculate zeroth epoch loss and FTPT values
running_loss,anlys_data = calculate_attn_loss(train_loader,what,where,criterion)
loss_curi.append(running_loss)
analysis_data.append(anlys_data)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
# training starts
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
what.train()
where.train()
for i, data in enumerate(train_loader, 0):
# get the inputs
inputs, labels,_ = data
inputs = inputs.double()
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_where.zero_grad()
optimizer_what.zero_grad()
# forward + backward + optimize
avg, alpha = where(inputs)
outputs = what(avg)
loss = criterion(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_where.step()
optimizer_what.step()
running_loss,anls_data = calculate_attn_loss(train_loader,what,where,criterion)
analysis_data.append(anls_data)
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.01:
break
print('Finished Training run ' +str(n))
analysis_data = np.array(analysis_data)
FTPT_analysis.loc[n] = analysis_data[-1,:4]/30
correct = 0
total = 0
with torch.no_grad():
for data in train_loader:
images, labels,_ = data
images = images.double()
images, labels = images.to("cuda"), labels.to("cuda")
avg, alpha = where(images)
outputs = what(avg)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 3000 train images: %d %%' % ( 100 * correct / total))
# plt.figure(figsize=(6,6))
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,0],label="ftpt")
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,1],label="ffpt")
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,2],label="ftpf")
# plt.plot(np.arange(0,epoch+2,1),analysis_data[:,3],label="ffpf")
# plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.plot(loss_curi)
np.mean(np.array(FTPT_analysis),axis=0)
FTPT_analysis.to_csv("synthetic_first_300_200.csv",index=False)
```
```
FTPT_analysis
```
# Venture Funding with Deep Learning
You work as a risk management associate at Alphabet Soup, a venture capital firm. Alphabet Soup’s business team receives many funding applications from startups every day. This team has asked you to help them create a model that predicts whether applicants will be successful if funded by Alphabet Soup.
The business team has given you a CSV containing more than 34,000 organizations that have received funding from Alphabet Soup over the years. With your knowledge of machine learning and neural networks, you decide to use the features in the provided dataset to create a binary classifier model that will predict whether an applicant will become a successful business. The CSV file contains a variety of information about these businesses, including whether or not they ultimately became successful.
## Instructions:
The steps for this challenge are broken out into the following sections:
* Prepare the data for use on a neural network model.
* Compile and evaluate a binary classification model using a neural network.
* Optimize the neural network model.
### Prepare the Data for Use on a Neural Network Model.
Using your knowledge of Pandas and scikit-learn’s `StandardScaler()`, preprocess the dataset so that you can use it to compile and evaluate the neural network model later.
Open the starter code file, and complete the following data preparation steps:
1. Read the `applicants_data.csv` file into a Pandas DataFrame. Review the DataFrame, looking for categorical variables that will need to be encoded, as well as columns that could eventually define your features and target variables.
2. Drop the “EIN” (Employer Identification Number) and “NAME” columns from the DataFrame, because they are not relevant to the binary classification model.
3. Encode the dataset’s categorical variables using `OneHotEncoder`, and then place the encoded variables into a new DataFrame.
4. Add the original DataFrame’s numerical variables to the DataFrame containing the encoded variables.
> **Note** To complete this step, you will employ the Pandas `concat()` function that was introduced earlier in this course.
5. Using the preprocessed data, create the features (`X`) and target (`y`) datasets. The target dataset should be defined by the preprocessed DataFrame column “IS_SUCCESSFUL”. The remaining columns should define the features dataset.
6. Split the features and target sets into training and testing datasets.
7. Use scikit-learn's `StandardScaler` to scale the features data.
### Compile and Evaluate a Binary Classification Model Using a Neural Network.
Use your knowledge of TensorFlow to design a binary classification deep neural network model. This model should use the dataset’s features to predict whether an Alphabet Soup–funded startup will be successful. Consider the number of inputs before determining the number of layers that your model will contain or the number of neurons on each layer. Then, compile and fit your model. Finally, evaluate the model to calculate its loss and accuracy.
To do so, complete the following steps:
1. Create a deep neural network by assigning the number of input features, the number of layers, and the number of neurons on each layer using Tensorflow’s Keras.
> **Hint** You can start with a two-layer deep neural network model that uses the `relu` activation function for both layers.
2. Compile and fit the model using the `binary_crossentropy` loss function, the `adam` optimizer, and the `accuracy` evaluation metric.
> **Hint** When fitting the model, start with a small number of epochs, such as 20, 50, or 100.
3. Evaluate the model using the test data to determine the model’s loss and accuracy.
4. Save and export your model to an HDF5 file, and name the file `AlphabetSoup.h5`.
### Optimize the Neural Network Model.
Using your knowledge of TensorFlow and Keras, optimize your model to improve the model's accuracy. Even if you do not successfully achieve a better accuracy, you'll need to demonstrate at least two attempts to optimize the model. You can include these attempts in your existing notebook. Or, you can make copies of the starter notebook in the same folder, rename them, and code each model optimization in a new notebook.
> **Note** You will not lose points if your model does not achieve a high accuracy, as long as you make at least two attempts to optimize the model.
To do so, complete the following steps:
1. Define at least three new deep neural network models (the original plus 2 optimization attempts). With each, try to improve on your first model’s predictive accuracy.
> **Rewind** Recall that perfect accuracy has a value of 1, so accuracy improves as its value moves closer to 1. To optimize your model for a predictive accuracy as close to 1 as possible, you can use any or all of the following techniques:
>
> * Adjust the input data by dropping different features columns to ensure that no variables or outliers confuse the model.
>
> * Add more neurons (nodes) to a hidden layer.
>
> * Add more hidden layers.
>
> * Use different activation functions for the hidden layers.
>
> * Add to or reduce the number of epochs in the training regimen.
2. After finishing your models, display the accuracy scores achieved by each model, and compare the results.
3. Save each of your models as an HDF5 file.
```
# Imports
import pandas as pd
from pathlib import Path
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler,OneHotEncoder
```
---
## Prepare the data to be used on a neural network model
### Step 1: Read the `applicants_data.csv` file into a Pandas DataFrame. Review the DataFrame, looking for categorical variables that will need to be encoded, as well as columns that could eventually define your features and target variables.
```
# Read the applicants_data.csv file from the Resources folder into a Pandas DataFrame
applicant_data_df = pd.read_csv(Path("./Resources/applicants_data.csv"))
# Review the DataFrame
applicant_data_df
# Review the data types associated with the columns
applicant_data_df.dtypes
```
### Step 2: Drop the “EIN” (Employer Identification Number) and “NAME” columns from the DataFrame, because they are not relevant to the binary classification model.
```
# Drop the 'EIN' and 'NAME' columns from the DataFrame
applicant_data_df = applicant_data_df.drop(columns=["EIN", "NAME"])
# Review the DataFrame
applicant_data_df
```
### Step 3: Encode the dataset’s categorical variables using `OneHotEncoder`, and then place the encoded variables into a new DataFrame.
```
# Create a list of categorical variables
categorical_variables = list(applicant_data_df.dtypes[applicant_data_df.dtypes == "object"].index)
# Display the categorical variables list
categorical_variables
# Create a OneHotEncoder instance
enc = OneHotEncoder(sparse=False)
# Encode the categorical variables using OneHotEncoder
encoded_data = enc.fit_transform(applicant_data_df[categorical_variables])
# Create a DataFrame with the encoded variables
encoded_df = pd.DataFrame(encoded_data, columns = enc.get_feature_names(categorical_variables))
# Review the DataFrame
encoded_df
```
### Step 4: Add the original DataFrame’s numerical variables to the DataFrame containing the encoded variables.
> **Note** To complete this step, you will employ the Pandas `concat()` function that was introduced earlier in this course.
```
# Add the numerical variables from the original DataFrame to the one-hot encoding DataFrame
encoded_df = pd.concat([encoded_df, applicant_data_df[["STATUS","ASK_AMT","IS_SUCCESSFUL"]]],axis=1)
# Review the Dataframe
encoded_df
```
### Step 5: Using the preprocessed data, create the features (`X`) and target (`y`) datasets. The target dataset should be defined by the preprocessed DataFrame column “IS_SUCCESSFUL”. The remaining columns should define the features dataset.
```
# Define the target set y using the IS_SUCCESSFUL column
y = encoded_df["IS_SUCCESSFUL"]
# Display a sample of y
y
# Define features set X by selecting all columns but IS_SUCCESSFUL
X = encoded_df.drop(columns=["IS_SUCCESSFUL"])
# Review the features DataFrame
X
```
### Step 6: Split the features and target sets into training and testing datasets.
```
# Split the preprocessed data into a training and testing dataset
# Assign the function a random_state equal to 1
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state=1)
```
### Step 7: Use scikit-learn's `StandardScaler` to scale the features data.
```
# Create a StandardScaler instance
scaler = StandardScaler()
# Fit the scaler to the features training dataset
X_scaler = scaler.fit(X_train)
# Scale the features training and testing datasets
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
```
---
## Compile and Evaluate a Binary Classification Model Using a Neural Network
### Step 1: Create a deep neural network by assigning the number of input features, the number of layers, and the number of neurons on each layer using Tensorflow’s Keras.
> **Hint** You can start with a two-layer deep neural network model that uses the `relu` activation function for both layers.
```
# Define the number of inputs (features) to the model
number_input_features = len(X_train.iloc[0])
# Review the number of features
number_input_features
# Define the number of neurons in the output layer
number_output_neurons = 1
# Define the number of hidden nodes for the first hidden layer
hidden_nodes_layer1 = (number_input_features + number_output_neurons) // 2
# Review the number hidden nodes in the first layer
hidden_nodes_layer1
# Define the number of hidden nodes for the second hidden layer
hidden_nodes_layer2 = (hidden_nodes_layer1 + number_output_neurons) // 2
# Review the number hidden nodes in the second layer
hidden_nodes_layer2
# Create the Sequential model instance
nn = Sequential()
# Add the first hidden layer
nn.add(Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="relu"))
# Add the second hidden layer
nn.add(Dense(units=hidden_nodes_layer2, activation = "relu"))
# Add the output layer to the model specifying the number of output neurons and activation function
nn.add(Dense(units= 1, activation = "sigmoid" ))
# Display the Sequential model summary
nn.summary()
```
### Step 2: Compile and fit the model using the `binary_crossentropy` loss function, the `adam` optimizer, and the `accuracy` evaluation metric.
```
# Compile the Sequential model
nn.compile(loss= "binary_crossentropy", optimizer = "adam", metrics = ["accuracy"])
# Fit the model using 50 epochs and the training data
fit_model = nn.fit(X_train_scaled, y_train, epochs = 50)
```
### Step 3: Evaluate the model using the test data to determine the model’s loss and accuracy.
```
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled, y_test, verbose = 2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
```
### Step 4: Save and export your model to an HDF5 file, and name the file `AlphabetSoup.h5`.
```
# Set the model's file path
file_path = Path("./Resources/AlphabetSoup.h5")
# Export your model to a HDF5 file
nn.save(file_path)
```
---
## Optimize the neural network model
### Step 1: Define at least three new deep neural network models (the original plus two optimization attempts). With each, try to improve on your first model’s predictive accuracy.
> **Rewind** Recall that perfect accuracy has a value of 1, so accuracy improves as its value moves closer to 1. To optimize your model for a predictive accuracy as close to 1 as possible, you can use any or all of the following techniques:
>
> * Adjust the input data by dropping different features columns to ensure that no variables or outliers confuse the model.
>
> * Add more neurons (nodes) to a hidden layer.
>
> * Add more hidden layers.
>
> * Use different activation functions for the hidden layers.
>
> * Add to or reduce the number of epochs in the training regimen.
### Alternative Model 1
```
# Define the number of inputs (features) to the model
number_input_features = len(X_train.iloc[0])
# Review the number of features
number_input_features
# Define the number of neurons in the output layer
number_output_neurons_alternative_1 = 1
# Define the number of hidden nodes for the first hidden layer
hidden_nodes_layer1_alternative_1 = (number_input_features + number_output_neurons_alternative_1) // 2
# Review the number of hidden nodes in the first layer
hidden_nodes_layer1_alternative_1
# Create the Sequential model instance
nn_alternative_1 = Sequential()
# First hidden layer
nn_alternative_1.add(Dense(units=hidden_nodes_layer1_alternative_1, input_dim=number_input_features, activation="relu"))
# Output layer
nn_alternative_1.add(Dense(units=1, activation="sigmoid"))
# Check the structure of the model
nn_alternative_1.summary()
# Compile the Sequential model
nn_alternative_1.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
# Fit the model using 50 epochs and the training data
fit_model_alternative_1 = nn_alternative_1.fit(X_train_scaled, y_train, epochs=50)
```
#### Alternative Model 2
```
# Define the number of inputs (features) to the model
number_input_features = len(X_train.iloc[0])
# Review the number of features
number_input_features
# Define the number of neurons in the output layer
number_output_neurons_alternative_2 = 1
# Define the number of hidden nodes for the first hidden layer
hidden_nodes_layer1_alternative_2 = (number_input_features + number_output_neurons_alternative_2) // 2
# Review the number of hidden nodes in the first layer
hidden_nodes_layer1_alternative_2
# Create the Sequential model instance
nn_alternative_2 = Sequential()
# First hidden layer
nn_alternative_2.add(Dense(units=hidden_nodes_layer1_alternative_2 , input_dim=number_input_features, activation="relu"))
# Second hidden layer (sized from this model's own first layer, not the original model's)
nn_alternative_2.add(Dense(units=(hidden_nodes_layer1_alternative_2 + number_output_neurons_alternative_2) // 2, activation="relu"))
# Output layer
nn_alternative_2.add(Dense(units=number_output_neurons_alternative_2, activation="sigmoid"))
# Check the structure of the model
nn_alternative_2.summary()
# Compile the model
nn_alternative_2.compile(loss="binary_crossentropy", optimizer = "adam", metrics=["accuracy"])
# Fit the model
fit_model_alternative_2 = nn_alternative_2.fit(X_train_scaled, y_train, epochs= 100)
```
### Step 2: After finishing your models, display the accuracy scores achieved by each model, and compare the results.
```
print("Original Model Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled, y_test,verbose= 2 )
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
print("Alternative Model 1 Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn_alternative_1.evaluate(X_test_scaled,y_test,verbose=2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
print("Alternative Model 2 Results")
# Evaluate the model loss and accuracy metrics using the evaluate method and the test data
model_loss, model_accuracy = nn_alternative_2.evaluate(X_test_scaled,y_test,verbose=2)
# Display the model loss and accuracy results
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
```
### Step 3: Save each of your alternative models as an HDF5 file.
```
# Set the file path for the first alternative model
file_path = Path("./Resources/AlphabetSoup_alternative_1.h5")
# Export your model to a HDF5 file
nn_alternative_1.save(file_path)
# Set the file path for the second alternative model
file_path = Path("./Resources/AlphabetSoup_alternative_2.h5")
# Export your model to a HDF5 file
nn_alternative_2.save(file_path)
```
# EGEDA cleaning script
For cleaning the EGEDA data sent by Edito: 00_APEC_EGEDA_20190925_DMW.xlsx
Economy names need to be updated
```
# import packages
import numpy as np
import pandas as pd
# read raw data
RawEGEDA = pd.read_excel('../data/raw/EGEDA/00_APEC_EGEDA_20190925.xlsx', sheet_name=None, na_values=['x', 'X', ''])
# define year range
years = list(range(1980,2017,1))
# create empty list to store each dataframe
df_list =[]
for sheet, dataframe in RawEGEDA.items():
# Make Item Code Columns
df_name = (RawEGEDA[sheet].set_index(['Product Code', 'Item Code'])
.rename_axis(['Year'], axis=1)
.stack().unstack('Item Code')
.reset_index())
# create column with economy name
df_name['Economy'] = sheet
df_list.append(df_name)
# combine individual economy dataframes to one dataframe
dfResults = pd.concat(df_list, sort=True).reset_index(drop=True)
dfResults.head()
dfResults.tail()
# replace economies using APEC approved abbreviations
#EconomyNames = {
# '01_AUS':'AUS',
# '02_BD' :'BD' ,
# '03_CAN':'CDA',
# '04_CHL':'CHL',
# '05_PRC':'PRC',
# '06_HKC':'HKC',
# '07_INA':'INA',
# '08_JPN':'JPN',
# '09_ROK':'KOR',
# '10_MAS':'MAS',
# '11_MEX':'MEX',
# '12_NZ' :'NZ' ,
# '13_PNG':'PNG',
# '14_PE' :'PE' ,
# '15_RP' :'RP' ,
# '16_RUS':'RUS',
# '17_SIN':'SIN',
# '18_CT' :'CT' ,
# '19_THA':'THA',
# '20_USA':'USA',
# '21_VN' :'VN' ,
# '22_SEA':'SEA',
# '23_NEA':'NEA',
# '24_OAM':'OAM',
# '25_OCE':'OCE',
# }
dfResults.info()
# code to replace economy abbreviations
#dfResults.replace(EconomyNames, inplace=True)
## create dictionary of EGEDA Product Codes and APERC Fuel codes
Fuelcodes = {
'1 Coal':'Coal',
'1.1 Hard coal':'CoalH',
'1.1.1 Coking coal':'CoalHC',
'1.1.2 Other bituminous coal':'CoalHB',
'1.1.3 Sub-bituminous coal':'CoalHS',
'1.2 Anthracite':'CoalA',
'1.3 Lignite':'CoalL',
'1.4 Peat':'CoalO',
'2 Coal products':'CoalP',
'2.1 Coke oven coke':'CoalPC',
'2.2 Coke oven gas':'CoalPO',
'2.3 Blast furnace gas':'CoalPF',
'2.4 Oxygen steel furnace gas':'CoalPS',
'2.5 Patent fuel':'CoalPP',
'2.6 Coal tar':'CoalPT',
'2.7 BKB/PB':'CoalPB',
'3 Crude oil & NGL':'Oil',
'3.1 Crude Oil':'OilC',
'3.2 Natural gas liquids':'OilN',
'3.3 Refinery feedstocks':'OilOR',
'3.4 Additives/oxygenates':'OilOA',
'3.5 Other hydrocarbons':'OilOO',
'4 Petroleum products':'PetP',
'4.1 Gasoline':'PetPGx',
'4.1.1 Motor gasoline':'PetPG',
'4.1.2 Aviation gasoline':'PetPJG',
'4.2 Naphtha':'PetPN',
'4.3 Jet fuel':'PetPJ',
'4.3.1 Gasoline type jet fuel':'PetPJO',
'4.3.2 Kerosene type jet fuel':'PetPJK',
'4.4 Other kerosene':'PetPK',
'4.5 Gas/diesel oil':'PetPD',
'4.6 Fuel oil':'PetPF',
'4.7 LPG':'PetPL',
'4.8 Refinery gas (not liq.)':'PetPR',
'4.9 Ethane':'PetPE',
'4.10 Other petroleum products':'PetPO',
'4.10.1 White spirit SBP':'PetPOW',
'4.10.2 Lubricants':'PetPOL',
'4.10.3 Bitumen':'PetPOB',
'4.10.4 Paraffin waxes':'PetPOP',
'4.10.5 Petroleum coke':'PetPOC',
'4.10.6 Other products':'PetPOO',
'5 Gas':'Gas',
'5.1 Natural gas':'GasN',
'5.2 LNG':'GasL',
'5.3 Gas works gas':'GasO',
'6 Hydro':'RenH',
'7 Nuclear':'Nuc',
'8 Geothermal, solar etc.':'RenNRE',
'8.1 Geothermal power':'RenGE',
'8.2 Other power':'RenOO',
'8.2.1 Photovoltaic':'RenSE',
'8.2.2 Tide, wave, ocean':'RenO',
'8.2.3 Wind':'RenW',
'8.2.4 Solar':'RenSO',
'8.3 Geothermal heat':'RenGH',
'8.4 Solar heat':'RenSH',
'9 Others':'Oth',
'9.1 Fuel wood & woodwaste':'RenBSF',
'9.2 Bagasse':'RenBSB',
'9.3 Charcoal':'RenBSC',
'9.4 Other biomass':'RenBSO',
'9.5 Biogas':'RenBG',
'9.6 Industrial waste':'OthI',
'9.7 Municipal solid waste':'RenMSW',
'9.7.1 Municipal solid waste (renewable)':'RenBSW',
'9.7.2 Municipal solid waste (non-renewable)':'OthM',
'9.8 Liquid biofuels':'RenBL',
'9.8.1 Biogasoline':'RenBLE',
'9.8.2 Biodiesel':'RenBLD',
'9.8.3 Bio jet kerosene':'RenBLJ',
'9.8.4 Other liquid biofuels':'RenBLO',
'9.9 Other sources':'OthO',
'10 Electricity':'Elec',
'11 Heat':'Heat',
'12 Total':'Tot',
'13 Total renewables':'TotRen'
}
# code to replace fuel abbreviations
dfResults.replace(Fuelcodes, inplace=True)
dfResults.rename(columns={'Product Code':'Fuel Code'}, inplace=True)
# set index
# maybe there is a better way to do this?
dfResults = dfResults.set_index(['Economy','Year','Fuel Code']).stack().unstack('Fuel Code')
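# An arguably cleaner alternative to the set_index/stack/unstack chain above
# (a sketch only, not used below; it assumes the same column names, and note
# that pivot_table averages any duplicated (Economy, Year, Item Code) rows):
import pandas as pd  # already imported above; repeated so the sketch stands alone

def fuel_wide(df):
    long_df = df.melt(id_vars=['Economy', 'Year', 'Fuel Code'],
                      var_name='Item Code', value_name='value')
    return long_df.pivot_table(index=['Economy', 'Year', 'Item Code'],
                               columns='Fuel Code', values='value')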
# create subgroup totals
dfResults['RenG'] = dfResults['RenGE']+dfResults['RenGH']
dfResults['RenS'] = dfResults['RenSE']+dfResults['RenSH']+dfResults['RenSO']
dfResults['RenBS'] = dfResults['RenBSF']+dfResults['RenBSB']+dfResults['RenBSC']+dfResults['RenBSO']+dfResults['RenBSW']
dfResults['RenB'] = dfResults['RenBS']+dfResults['RenBL']+dfResults['RenBG']
dfResults = dfResults.unstack('Item Code').stack('Fuel Code')
# Reorder the columns
dfResults = dfResults[[
'1 Indigenous production',
'1.1 Production',
'1.2 From other sources - primary energy',
'2 Imports',
'3 Exports',
'4.1 International marine bunkers',
'4.2 International aviation bunkers',
'5 Stock changes',
'6 Total primary energy supply',
'7 Transfers',
'7.1 Recycled products',
'7.2 Interproduct transfers',
'7.3 Products transferred',
'7.4 Gas separation',
'8 Total transformation sector',
'8.1 Main activity producer',
'8.1.1 Electricity plants',
'8.1.2 CHP plants',
'8.1.3 Heat plants',
'8.2 Autoproducers',
'8.2.1 Electricity plants',
'8.2.2 CHP plants',
'8.2.3 Heat plants',
'8.3 Gas processing',
'8.3.1 Gas works',
'8.3.2 Liquefaction',
'8.3.3 Regasification',
'8.3.4 Natural gas blending plants',
'8.3.5 Gas-to-liquid',
'8.4 Refineries',
'8.5. Coal transformation',
'8.5.1 Coke ovens',
'8.5.2 Blast furnaces',
'8.5.3 Patent fuel plants',
'8.5.4 BKB/PB plants',
'8.5.5 Liquefaction (coal to oil)',
'8.6 Petrochemical industry',
'8.7 Biofuel processing',
'8.8 Charcoal processing',
'8.9 Non-specified transformation',
'9 Losses & own use',
'9.1 Energy sector own use',
'9.1.1 Electricity, CHP and heat plants',
'9.1.2 Gas works plants',
'9.1.3 Liquefaction plants',
'9.1.4 Regasification',
'9.1.5 Natural gas blending plants',
'9.1.6 Gas to liquid',
'9.1.7 Gas separation',
'9.1.8 Coke ovens',
'9.1.9 Coal mines',
'9.1.10 Blast furnaces',
'9.1.11 Patent fuel plants',
'9.1.12 BKB/PB plants',
'9.1.13 Liquefaction plants (coal to oil)',
'9.1.14 Oil refineries',
'9.1.15 Oil and gas extraction',
'9.1.16 Biofuel processing',
'9.1.17 Nuclear industry',
'9.1.18 Non-specified own use',
'9.2 Losses',
'10 Discrepancy',
'11 Total final consumption',
'12 Total final energy consumption',
'13 Industry sector',
'13.1 Iron and steel',
'13.2 Chemical (incl. petrochemical)',
'13.3 Non-ferrous metals',
'13.4 Non-metallic mineral products',
'13.5 Transportation equipment',
'13.6 Machinery',
'13.7 Mining and quarrying',
'13.8 Food, beverages and tobacco',
'13.9 Pulp, paper and printing',
'13.10 Wood and wood products',
'13.11 Construction',
'13.12 Textiles and leather',
'13.13 Non-specified industry',
'14 Transport sector',
'14.1 Domestic air transport',
'14.2 Road',
'14.3 Rail',
'14.4 Domestic water transport',
'14.5 Pipeline transport',
'14.6 Non-specified transport',
'15 Other sector',
'15.1.1 Commerce and public services',
'15.1.2 Residential',
'15.2 Agriculture',
'15.3 Fishing',
'15.4 Non-specified others',
'16 Non-energy use',
'16.1 Transformation sector',
'16.2 Industry sector',
'16.3 Transport sector',
'16.4 Other sector',
'17 Electricity output in GWh',
'18 Heat output in ktoe']]
# write to csv
dfResults.to_csv("../data/final/EGEDA_2019_09_25_tidy.csv", index=True)
# export fuel list
fuels = pd.DataFrame(dfResults.index.unique(level=-1))
fuels.to_csv("../data/final/fuel_list_2019_09_25.csv", index=False)
```
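The chained `set_index(...).stack().unstack(...)` reshape above is compact but opaque. A toy sketch (with made-up values and a reduced set of columns) shows the effect: the item columns move into the row index and the fuels become columns, so subgroup totals can then be computed across fuels:

```python
import pandas as pd

# toy frame mimicking the structure above: two fuels, one economy/year, two item columns
df = pd.DataFrame({
    'Economy': ['AUS', 'AUS'],
    'Year': [2017, 2017],
    'Fuel Code': ['Coal', 'Gas'],
    '2 Imports': [10.0, 20.0],
    '3 Exports': [1.0, 2.0],
})

# same reshape as above: items move into the index, fuels become columns
wide = df.set_index(['Economy', 'Year', 'Fuel Code']).stack().unstack('Fuel Code')
print(wide)

# a derived total across fuels can now be added as a plain column sum
wide['Tot'] = wide['Coal'] + wide['Gas']
```

After this reshape, each subgroup total in the real script (e.g. `RenG`, `RenS`) is just an element-wise sum of fuel columns, which is why the script unstacks the fuel level first.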
Classical probability distributions can be written as a stochastic vector, which can be transformed to another stochastic vector by applying a stochastic matrix. In other words, the evolution of stochastic vectors can be described by a stochastic matrix.
Quantum states also evolve and their evolution is described by unitary matrices. This leads to some interesting properties in quantum computing. Unitary evolution is true for a closed system, that is, a quantum system perfectly isolated from the environment. This is not the case in the quantum computers we have today: these are open quantum systems that evolve differently due to uncontrolled interactions with the environment. In this notebook, we take a glimpse at both types of evolution.
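As a quick numeric sketch of the classical case (using the column-stochastic convention, where each column of the matrix sums to one):

```python
import numpy as np

p = np.array([0.8, 0.2])            # stochastic vector: non-negative entries summing to 1
M = np.array([[0.9, 0.5],
              [0.1, 0.5]])          # column-stochastic matrix: each column sums to 1

p_next = M.dot(p)                   # evolution of the probability distribution
print(p_next, p_next.sum())         # still non-negative and sums to 1
```

The output is again a valid probability distribution, which is exactly what distinguishes stochastic evolution (preserving the $l_1$ norm) from the unitary evolution below (preserving the $l_2$ norm).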
# Unitary evolution
A unitary matrix has the property that its conjugate transpose is its inverse. Formally, it means that a matrix $U$ is unitary if $UU^\dagger=U^\dagger U=\mathbb{1}$, where $^\dagger$ stands for conjugate transpose, and $\mathbb{1}$ is the identity matrix. A quantum computer is a machine that implements unitary operations.
As an example, we have seen the NOT operation before, which is performed by the X gate in a quantum computer. While the generic discussion on gates will only occur in a subsequent notebook, we can study the properties of the X gate. Its matrix representation is $X = \begin{bmatrix} 0 & 1\\ 1 & 0\end{bmatrix}$. Let's check if it is indeed unitary:
```
import numpy as np
X = np.array([[0, 1], [1, 0]])
print("XX^dagger")
print(X.dot(X.T.conj()))
print("X^daggerX")
print(X.T.conj().dot(X))
```
It looks like a legitimate unitary operation. The unitary nature ensures that the $l_2$ norm is preserved, that is, quantum states are mapped to quantum states.
```
print("The norm of the state |0> before applying X")
zero_ket = np.array([[1], [0]])
print(np.linalg.norm(zero_ket))
print("The norm of the state after applying X")
print(np.linalg.norm(X.dot(zero_ket)))
```
Furthermore, since the unitary operation is a matrix, it is linear. Measurements are also represented by matrices. These two observations imply that everything a quantum computer implements is actually linear. If we want to see some form of nonlinearity, that must involve some classical intervention.
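We can verify the linearity claim numerically for the X gate: applying it to a superposition gives the same result as superposing its action on the basis states.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
zero = np.array([[1], [0]])
one = np.array([[0], [1]])

a, b = 0.6, 0.8                     # amplitudes of a normalized superposition
psi = a*zero + b*one

# linearity: X(a|0> + b|1>) equals a*X|0> + b*X|1>
print(np.allclose(X.dot(psi), a*X.dot(zero) + b*X.dot(one)))
```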
Another consequence of the unitary operations is reversibility. Any unitary operation can be reversed. Quantum computing libraries often provide a function to reverse entire circuits. Reversing the X gate is simple: we just apply it again (its conjugate transpose is itself, therefore $X^2=\mathbb{1}$).
```
import numpy as np
from pyquil import Program, get_qc
from pyquil.gates import *
from forest_tools import *
%matplotlib inline
qvm_server, quilc_server, fc = init_qvm_and_quilc('/home/local/bin/qvm', '/home/local/bin/quilc')
qc = get_qc('1q-qvm', connection=fc)
circuit = Program()
circuit += X(0)
circuit += X(0)
results = qc.run_and_measure(circuit, trials=100)
plot_histogram(results)
```
which is exactly $|0\rangle$ as we would expect.
In the next notebook, you will learn about classical and quantum many-body systems and the Hamiltonian. In the notebook on adiabatic quantum computing, you will learn that a unitary operation is in fact the Schrödinger equation solved for a Hamiltonian for some duration of time. This connects the computer science way of thinking about gates and unitary operations to actual physics, but there is some learning to be done before we can make that connection. Before that, let us take another look at the interaction with the environment.
# Interaction with the environment: open systems
Actual quantum systems are seldom closed: they constantly interact with their environment in a largely uncontrolled fashion, which causes them to lose coherence. This is true for current and near-term quantum computers too.
<img src="figures/open_system.svg" alt="A quantum processor as an open quantum system" style="width: 400px;"/>
This also means that their actual time evolution is not described by a unitary matrix, as we would like, but by some other operator (the technical name for it is a completely positive trace-preserving map).
Quantum computing libraries often offer a variety of noise models that mimic different types of interaction, and increasing the strength of the interaction with the environment leads to faster decoherence. The timescale for decoherence is often called $T_2$ time. Among a couple of other parameters, $T_2$ time is critically important for the number of gates or the duration of the quantum computation we can perform.
A very cheap way of studying the effects of decoherence is mixing a pure state with the maximally mixed state $\mathbb{1}/2^d$, where $d$ is the number of qubits, with some visibility parameter in $[0,1]$. This way we do not have to specify noise models or any other map modelling decoherence. For instance, we can mix the $|\phi^+\rangle$ state with the maximally mixed state:
```
def mixed_state(pure_state, visibility):
density_matrix = pure_state.dot(pure_state.T.conj())
maximally_mixed_state = np.eye(4)/2**2
return visibility*density_matrix + (1-visibility)*maximally_mixed_state
ϕ = np.array([[1],[0],[0],[1]])/np.sqrt(2)
print("Maximum visibility is a pure state:")
print(mixed_state(ϕ, 1.0))
print("The state is still entangled with visibility 0.8:")
print(mixed_state(ϕ, 0.8))
print("Entanglement is lost by 0.6:")
print(mixed_state(ϕ, 0.6))
print("Barely any coherence remains by 0.2:")
print(mixed_state(ϕ, 0.2))
```
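One simple way to quantify the loss of coherence in the mixtures above is the purity $\mathrm{Tr}(\rho^2)$: it equals 1 for a pure state and $1/2^d$ (here 0.25) for the maximally mixed state. A short sketch, repeating the helper from above so it runs standalone:

```python
import numpy as np

def mixed_state(pure_state, visibility):
    density_matrix = pure_state.dot(pure_state.T.conj())
    maximally_mixed_state = np.eye(4)/2**2
    return visibility*density_matrix + (1-visibility)*maximally_mixed_state

phi = np.array([[1], [0], [0], [1]])/np.sqrt(2)

for v in [1.0, 0.8, 0.6, 0.2, 0.0]:
    rho = mixed_state(phi, v)
    # purity drops from 1.0 (pure) towards 0.25 (maximally mixed)
    print("visibility", v, "purity", np.trace(rho.dot(rho)).real)
```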
Another way to look at what happens to a quantum state in an open system is through equilibrium processes. Think of a cup of coffee: left alone, it will equilibrate with the environment, eventually reaching the temperature of the environment. This includes energy exchange. A quantum state does the same thing when its environment has a well-defined temperature, just like the environment of a cup of coffee.
The equilibrium state is called the thermal state. It has a very specific structure and we will revisit it, but for now, suffice it to say that the energy of the samples drawn from a thermal state follows a Boltzmann distribution. The Boltzmann -- also called Gibbs -- distribution is described as $P(E_i) = \frac{e^{-E_{i}/T}}{\sum_{j=1}^{M}{e^{-E_{j}/T}}}$, where $E_i$ is an energy level, and $M$ is the total number of possible energy levels. Temperature enters the definition: the higher the temperature, the closer we are to the uniform distribution, and in the infinite-temperature limit all energy levels become equally probable. In contrast, at zero temperature the entire probability mass is concentrated on the lowest energy level, the ground state. To get a sense of this, let's plot the Boltzmann distribution at vastly different temperatures:
```
import matplotlib.pyplot as plt
temperatures = [.5, 5, 2000]
energies = np.linspace(0, 20, 100)
fig, ax = plt.subplots()
for i, T in enumerate(temperatures):
probabilities = np.exp(-energies/T)
Z = probabilities.sum()
probabilities /= Z
ax.plot(energies, probabilities, linewidth=3, label = "$T_" + str(i+1)+"$")
ax.set_xlim(0, 20)
ax.set_ylim(0, 1.2*probabilities.max())
ax.set_xticks([])
ax.set_yticks([])
ax.set_xlabel('Energy')
ax.set_ylabel('Probability')
ax.legend()
```
Here $T_1<T_2<T_3$. Notice that $T_1$ is a low temperature, and therefore it is highly peaked at low energy levels. In contrast, $T_3$ is a very high temperature and the probability distribution is almost completely flat.
```
qvm_server.terminate()
quilc_server.terminate()
```
<h1>IndabaX Tanzania Mobile Banking Prediction Challenge by Tanzania IndabaX 2021</h1><h2>by XVIII_6@zindi</h2>
<h2>OBJECTIVE OF THE CHALLENGE</h2>
<h4>The objective of this challenge is to build a machine learning model to predict which individuals across Africa and around the world use mobile or internet banking</h4>
<h2>IMPORTING THE IMPORTANT LIBRARIES</h2>
```
#Start by importing the modules
import pandas as pd
import numpy as np
import os
import sys
import gc
import random
from sklearn.model_selection import StratifiedKFold
from sklearn import preprocessing
import lightgbm as lgb
import seaborn as sns
from tqdm import tqdm_notebook
from sklearn.metrics import auc
from sklearn.metrics import roc_auc_score
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.simplefilter('ignore')
```
<h2>LOADING THE DATA FROM CSV FILES</h2>
```
#Load the CSV files: train, test, sample submission and variable definitions
pd.set_option('display.max_columns',50000) #display all columns of a DataFrame in the cell
pd.set_option('display.max_rows',None)     #display all rows
pd.set_option('display.width',70000)       #widen the display so rows do not wrap
#read the train, test, sub and variable definition files
train=pd.read_csv('train1.csv')
test=pd.read_csv('test1.csv')
sub=pd.read_csv('SampleSubmission1.csv')
vd=pd.read_csv('VariableDefinitions1.csv')
```
<h2>EXPLORE THE DATA</h2>
```
#first five rows of the train data
train.head()
#first five rows of the test data
test.head()
#check the sample submission file
sub.head()
#check the variable definitions file to gain more understanding
vd
#data types and non-null counts for the train data
train.info()
#data types and non-null counts for the test data
test.info()
#count the missing values in each train column
train.isnull().sum()
#count the missing values in each test column
test.isnull().sum()
```
<h2>DATA VISUALIZATION</h2>
```
#use a bar chart to check the distribution of the target
sns.countplot(train.Target)
plt.title('Distribution of the target', fontdict={'size':25})
#visualize the percentage of missing values per column in the train data
graph= train.isna().sum().sort_values().plot(kind = 'barh', figsize = (15, 15))
plt.title('Percentage of missing values', fontdict={'size':41})
for p in graph.patches:
percentage ='{:,.0f}%'.format((p.get_width()/train.shape[0])*100)
width, height =p.get_width(),p.get_height()
x=p.get_x()+width+0.02
y=p.get_y()+height/2
plt.annotate(percentage,(x,y))
#visualize the percentage of missing values per column in the test data
graph = test.isna().sum().sort_values().plot(kind = 'barh', figsize = (15, 15))
plt.title('Percentage of missing values', fontdict={'size':41})
for p in graph.patches:
percentage ='{:,.1f}%'.format((p.get_width()/test.shape[0])*100)
width, height =p.get_width(),p.get_height()
x=p.get_x()+width+0.02
y=p.get_y()+height/2
plt.annotate(percentage,(x,y))
```
<h2>DATA CLEAN-UP</h2>
<h4>The exploration above revealed many NaN values, so we clean the data for better modelling</h4>
```
#Clean the data: drop every column whose fraction of missing values (dirt rate) exceeds a threshold
def clean(train,thresh) :
def dirt_rate(train,col) :
return train[col].isna().sum() / train.shape[0]
for col in train.columns :
if dirt_rate(train,col) >= thresh :
train.drop(col,axis=1,inplace=True)
return train
#Call clean to drop the sparsest columns from the train data, then align the test columns
train = clean(train,thresh=0.8)
test = test[train.columns[:-1]]
```
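To illustrate what `clean` does, here is a toy run on a hypothetical frame (the column names are made up), with the function's logic restated so the sketch runs standalone. A column that is 90% missing gets dropped at a 0.8 threshold:

```python
import pandas as pd
import numpy as np

def clean(df, thresh):
    # drop every column whose fraction of NaN values is >= thresh (same logic as above)
    for col in list(df.columns):
        if df[col].isna().sum() / df.shape[0] >= thresh:
            df.drop(col, axis=1, inplace=True)
    return df

toy = pd.DataFrame({
    'mostly_missing': [np.nan]*9 + [1.0],   # 90% NaN -> dropped at thresh=0.8
    'kept': range(10),                      # fully populated -> kept
})
print(clean(toy, thresh=0.8).columns.tolist())   # ['kept']
```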
<h2>IMPUTING THE REMAINING NAN VALUES</h2>
```
#There are still missing values, so we impute them with the imputeColumns function
def imputeColumns(train ,test) :
    total = pd.concat([train,test]) #concatenate train and test so the imputation is computed once on the combined data
    total['age'].fillna(total.age.mean(),inplace=True) #fill NaNs in the age column with the mean
    FQ = total.filter(like= 'FQ').columns
    for cl in FQ :
        total[cl] = total[cl].fillna(-1) #fill NaNs in the FQ columns with -1
    total[FQ] = total[FQ].astype('int') #cast the FQ columns to int
    # split the combined data back into train and test
    train = total[total['ID'].isin(train['ID'].unique())] #recover the train rows
    train['Target'] = train['Target'].astype('int')
    test = total[~total['ID'].isin(train['ID'].unique())] #recover the test rows
    return train , test #return the cleaned train and test
#Call imputeColumns to perform the imputation described above
train , test =imputeColumns(train , test)
#Explore the train data
train.head()
#check for any remaining NaN values in the train columns
train.isnull().sum()
#check for any remaining NaN values in the test columns
test.isnull().sum()
#Explore the shapes of the train, sub and test frames
train.shape,sub.shape,test.shape
```
<h2>BUILDING THE MODEL</h2>
```
#Create a class to hold the hyperparameters and the columns to drop
class model:
    seedNumber = 42
    n_splits = 5
    remove_features = ['ID', 'country','Target'] #columns that are not used as features
    categorical_features = ['country_code','region'] #categorical features in the data set
    TARGET_COL = 'Target'
    params = {'boosting_type': 'gbdt',       #hyperparameters tuned for model performance
              'objective': 'binary',
              'metric': 'auc',
              'n_estimators': 500,
              'colsample_bytree': 0.8,
              'seed': 42,
              'silent': False,
              'early_stopping_rounds': 100,
              'learning_rate': 0.1
              }
def random_stateNumber(state):
random.seed(state)
np.random.seed(state)
random_stateNumber(model.seedNumber)
features_columns = [col for col in train.columns if col not in model.remove_features]
#Train one model per region of the data set, using the region function
def region(X,y,Test,skf,reg) :
oof_lgb = np.zeros((X.shape[0],))
Test['target'] = 0
lgb_preds = []
for fold_, (trn_idx, val_idx) in enumerate(skf.split(X, X.country_code)):
tr_x, tr_y = X.iloc[trn_idx,:], y[trn_idx]
vl_x, vl_y = X.iloc[val_idx,:], y[val_idx]
data_train = lgb.Dataset(tr_x, label=tr_y,categorical_feature=model.categorical_features)
data_valid= lgb.Dataset(vl_x, label=vl_y,categorical_feature=model.categorical_features)
estimator = lgb.train(model.params,data_train,valid_sets = [data_train,data_valid ],verbose_eval = 0)
y_pred_val = estimator.predict(vl_x,num_iteration=estimator.best_iteration)
oof_lgb[val_idx] = y_pred_val
y_pred_test = estimator.predict(Test[features_columns],num_iteration=estimator.best_iteration)
lgb_preds.append(y_pred_test)
print(f'Region[{reg}] AUC : ',roc_auc_score(y, oof_lgb))
return np.mean(lgb_preds,axis=0) , oof_lgb
#Train across all regions and collect the out-of-fold validation predictions
def continue_training() :
train_ids = [] ; test_ids = [] ;
train_target = [] ;custom_preds = [] ; test_preds = [] ;
for reg in tqdm_notebook(np.sort(train.region.unique())) :
skf = StratifiedKFold(n_splits=model.n_splits,shuffle=True, random_state=model.seedNumber)
train_ = train[train['region']==reg].reset_index(drop=True)
Test = test[test['region']==reg].reset_index(drop=True)
train_ids.extend(train_['ID'].values.tolist()) ; test_ids.extend(Test['ID'].values.tolist())
X , y = train_[features_columns] , train_[model.TARGET_COL]
test_pred , oof_pred =region(X,y,Test,skf,reg=reg)
train_target.extend(y) ; custom_preds.extend(oof_pred) ; test_preds.extend(test_pred)
return train_ids , custom_preds ,train_target ,test_ids, test_preds
train_ids , oof_preds ,train_target ,test_ids, test_preds = continue_training()
#Evaluate the model by using the area under the curve from the metrics module
complete = pd.DataFrame({'ID' :train_ids ,'OOF_lgbm' :oof_preds , 'Target' :train_target})
print(f'AUC : ',roc_auc_score(complete['Target'], complete['OOF_lgbm']))
Submission = pd.DataFrame({'ID' :test_ids ,'Target' :test_preds})
#Write the submission file
Submission.to_csv('sub3.csv',index=False)
```
## Learning Objectives
- How to extract keywords from a corpus (a collection of texts) using TF-IDF
- Explain what TF-IDF is
- Applications of keyword extraction algorithms and Word2Vec
## Review: What pre-processing is needed to apply a machine learning algorithm to text data?
1. The text must be parsed into words, a step called tokenization
2. Then the words need to be encoded as integers or floating point values
3. The scikit-learn library offers easy-to-use tools to perform both tokenization and feature extraction of text data
## What is TF-IDF Vectorizer?
- Word counts are a good starting point, but they are very basic
- An alternative is to calculate word frequencies, and by far the most popular method is TF-IDF
**Term Frequency**: This summarizes how often a given word appears within a document
**Inverse Document Frequency**: This downscales words that appear a lot across documents
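The two components can be computed by hand. The sketch below uses the raw textbook formulas (scikit-learn's `TfidfVectorizer` applies smoothing and normalization on top of these, so its numbers will differ):

```python
import math

docs = [['the', 'sky', 'is', 'blue'],
        ['the', 'sun', 'is', 'bright']]

def tf(term, doc):
    # fraction of the document's tokens that are this term
    return doc.count(term) / len(doc)

def idf(term, docs):
    # log of (number of documents / documents containing the term)
    n_containing = sum(1 for d in docs if term in d)
    return math.log(len(docs) / n_containing)

def tfidf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

print(tfidf('sky', docs[0], docs))   # appears in only 1 of 2 docs -> positive score
print(tfidf('the', docs[0], docs))   # appears in every doc -> idf = log(1) = 0
```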
## Intuitive idea behind TF-IDF:
- If a word appears frequently in a document, it's important. Give the word a high score
- But if a word appears in many documents, it's not a unique identifier. Give the word a low score
<img src="Images/tfidf_slide.png" width="700" height="700">
## Activity: Obtain the keywords from TF-IDF
1- First obtain the TF-IDF matrix for given corpus
2- Do column-wise addition
3- Sort the score from highest to lowest
4- Return the associated words based on step 3
```
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
import numpy as np
def keyword_sklearn(docs, k):
vectorizer = TfidfVectorizer(stop_words='english')
tfidf_matrix = vectorizer.fit_transform(docs)
print(tfidf_matrix.toarray())
print(vectorizer.get_feature_names())
tfidf_scores = np.sum(tfidf_matrix, axis=0)
tfidf_scores = np.ravel(tfidf_scores)
return sorted(dict(zip(vectorizer.get_feature_names(), tfidf_scores)).items(), key=lambda x: x[1], reverse=True)[:k]
documents = ['The sky is blue', 'The sun is bright', 'The sun in the sky is bright', 'we can see the shining sun, the bright sun']
print(keyword_sklearn(documents, 3))
```
## Word2Vec
- Data scientists have assigned a vector to each English word
- This process of assigning a vector to each word is called Word2Vec
- In DS 2.4, we will learn how the Word2Vec task is accomplished
- Download this huge Word2Vec file: https://nlp.stanford.edu/projects/glove/
- Do not open the extracted file
## What is the property of the vectors assigned to words in Word2Vec?
- Words with similar meanings are closer to each other in Euclidean space
- For example, if $V_{pizza}$, $V_{food}$ and $V_{sport}$ represent the vectors associated with pizza, food and sport, then:
${\| V_{pizza} - V_{food}}\|$ < ${\| V_{pizza} - V_{sport}}\|$
## Activity: Obtain the vector associated with pizza in GloVe
```
import codecs
with codecs.open('/Users/miladtoutounchian/Downloads/glove.840B.300d.txt', 'r') as f:
for c, r in enumerate(f):
sr = r.split()
if sr[0] == 'pizza':
print(sr[0])
print([float(i) for i in sr[1:]])
print(len([float(i) for i in sr[1:]]))
break
```
## Activity: Obtain the vectors associated with pizza, food and sport in GloVe
```
import codecs
with codecs.open('/Users/miladtoutounchian/Downloads/glove.840B.300d.txt', 'r') as f:
ls = {}
for c, r in enumerate(f):
sr = r.split()
if sr[0] in ['pizza', 'food', 'sport']:
ls[sr[0]] =[float(i) for i in sr[1:]]
if len(ls) == 3:
break
print(ls)
```
## Activity: Show that the vector for pizza is closer to the vector for food than to the vector for sport
```
import numpy as np
np.linalg.norm(np.array(ls['pizza']) - np.array(ls['food']))
np.linalg.norm(np.array(ls['pizza']) - np.array(ls['sport']))
np.linalg.norm(np.array(ls['food']) - np.array(ls['sport']))
```
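Euclidean distance is one way to compare word vectors; cosine similarity is another common choice. A sketch with made-up 3-dimensional stand-ins (real GloVe vectors are 300-dimensional):

```python
import numpy as np

def cosine_similarity(u, v):
    # cosine of the angle between two vectors: 1 means identical direction
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# toy stand-ins for word vectors (not real GloVe values)
pizza = np.array([1.0, 0.9, 0.1])
food  = np.array([0.9, 1.0, 0.2])
sport = np.array([0.1, 0.2, 1.0])

print(cosine_similarity(pizza, food))    # close to 1: similar meanings
print(cosine_similarity(pizza, sport))   # smaller: less related
```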
```
# Import libraries
import os
import ee
import geemap
import ipywidgets as widgets
from bqplot import pyplot as plt
from ipyleaflet import WidgetControl
# Create an interactive map
Map = geemap.Map(center=[-23.36, -46.36], zoom=5, add_google_map=True)
Map
# Defining the interactive widget bar
style = {'description_width': 'initial'}
# Defining the index options
nd_options =["Índice de Vegetação por Diferença Normalizada (NDVI)",
"Índice da Água por Diferença Normalizada (NDWI)",
"Índice da Água por Diferença Normalizada Melhorado (MNDWI)",
"Índice do Solo por Diferença Normalizada (NDSI)"]
# Defining the first band
first_band = widgets.Dropdown(
description='1ª banda:',
options=['Blue', 'Green','Red','NIR', 'SWIR1', 'SWIR2'],
value='Green',
style=style
)
# Defining the second band
second_band = widgets.Dropdown(
description='2ª banda:',
options=['Blue', 'Green','Red','NIR', 'SWIR1', 'SWIR2'],
value='SWIR1',
style=style
)
output_widget = widgets.Output(layout={'border': '4px solid black'})
output_control = WidgetControl(widget=output_widget, position='bottomright')
Map.add_control(output_control)
aoi_widget = widgets.Checkbox(
value=True,
description='Área de interesse',
style=style
)
download_widget = widgets.Checkbox(
value=False,
description='Download dos dados do gráfico',
style=style
)
def aoi_change(change):
Map.layers = Map.layers[:4]
Map.user_roi = None
Map.user_rois = None
Map.draw_count = 0
output_widget.clear_output()
aoi_widget.observe(aoi_change, names='value')
band_combo = widgets.Dropdown(
description='Band combo:',
options=['Red/Green/Blue', 'NIR/Red/Green', 'SWIR2/SWIR1/NIR', 'NIR/SWIR1/Red','SWIR2/NIR/Red',
'SWIR2/SWIR1/Red', 'SWIR1/NIR/Blue', 'NIR/SWIR1/Blue', 'SWIR2/NIR/Green', 'SWIR1/NIR/Red'],
value='NIR/Red/Green',
style=style
)
year_widget = widgets.IntSlider(min=1984, max=2020, value=2010, description='Selecionar ano:', width=400, style=style)
fmask_widget = widgets.Checkbox(
value=True,
description='Aplicar fmask?(remove nuvem, sombra e neve)',
style=style,
layout = {'width':'2px'}
)
# Normalized Satellite Indices: https://www.usna.edu/Users/oceano/pguth/md_help/html/norm_sat.
nd_indices = widgets.Dropdown(options=nd_options, value=nd_options[0], description='Índices:', style=style)
nd_threshold = widgets.FloatSlider(
value=0,
min=-1,
max=1,
step=0.01,
description='Threshold:',
orientation='horizontal',
style=style
)
nd_color = widgets.ColorPicker(
concise=False,
description='Color:',
value='blue',
style=style
)
def nd_index_change(change):
if nd_indices.value == 'Índice de Vegetação por Diferença Normalizada (NDVI)':
first_band.value = 'NIR'
second_band.value = 'Red'
elif nd_indices.value == 'Índice da Água por Diferença Normalizada (NDWI)':
first_band.value = 'NIR'
second_band.value = 'SWIR1'
elif nd_indices.value == 'Índice da Água por Diferença Normalizada Melhorado (MNDWI)':
first_band.value = 'Green'
second_band.value = 'SWIR1'
elif nd_indices.value == 'Índice do Solo por Diferença Normalizada (NDSI)':
first_band.value = 'SWIR1'
second_band.value = 'NIR'
elif nd_indices.value == 'Customized':
first_band.value = None
second_band.value = None
nd_indices.observe(nd_index_change, names='value')
submit = widgets.Button(
description='Analisar',
button_style='primary',
tooltip='Clique aqui',
style=style
)
full_widget = widgets.VBox([
widgets.HBox([nd_indices, first_band, second_band]),
widgets.HBox([band_combo, year_widget, fmask_widget]),
widgets.HBox([aoi_widget, nd_threshold, nd_color, download_widget]),
submit
])
full_widget
# Click event handler
def submit_clicked(b):
with output_widget:
output_widget.clear_output()
print('Computing...')
Map.default_style = {'cursor': 'wait'}
try:
band1 = first_band.value
band2 = second_band.value
selected_year = year_widget.value
threshold = nd_threshold.value
bands = band_combo.value.split('/')
apply_fmask = fmask_widget.value
palette = nd_color.value
use_aoi = aoi_widget.value
download = download_widget.value
if use_aoi:
if Map.user_roi is not None:
roi = Map.user_roi
layer_name = 'User drawn AOI'
geom = roi
else:
output_widget.clear_output()
print('No user AOI could be found.')
return
Map.layers = Map.layers[:4]
Map.addLayer(ee.Image().paint(geom, 0, 2), {'palette': 'red'}, layer_name)
images = geemap.landsat_timeseries(roi=roi, start_year=1984, end_year=2020, start_date='01-01', end_date='12-31', apply_fmask=apply_fmask)
nd_images = images.map(lambda img: img.normalizedDifference([band1, band2]))
result_images = nd_images.map(lambda img: img.gt(threshold))
selected_image = ee.Image(images.toList(images.size()).get(selected_year - 1984))
selected_result_image = ee.Image(result_images.toList(result_images.size()).get(selected_year - 1984)).selfMask()
vis_params = {
'bands': bands,
'min': 0,
'max': 3000
}
Map.addLayer(selected_image, vis_params, 'Landsat ' + str(selected_year))
Map.addLayer(selected_result_image, {'palette': palette}, 'Result ' + str(selected_year))
def cal_area(img):
pixel_area = img.multiply(ee.Image.pixelArea()).divide(1e4)
img_area = pixel_area.reduceRegion(**{
'geometry': geom,
'reducer': ee.Reducer.sum(),
'scale': 1000,
'maxPixels': 1e12,
'bestEffort': True
})
return img.set({'area': img_area})
areas = result_images.map(cal_area)
stats = areas.aggregate_array('area').getInfo()
x = list(range(1984, 2021))
y = [item.get('nd') for item in stats]
fig = plt.figure(1)
fig.layout.height = '270px'
plt.clear()
plt.plot(x, y)
plt.title('Temporal trend (1984-2020)')
plt.xlabel('Year')
plt.ylabel('Area (ha)')
output_widget.clear_output()
plt.show()
if download:
out_dir = os.path.join(os.path.expanduser('~'), 'Downloads')
out_name = 'chart_' + geemap.random_string() + '.csv'
out_csv = os.path.join(out_dir, out_name)
if not os.path.exists(out_dir):
os.makedirs(out_dir)
with open(out_csv, 'w') as f:
f.write('year, area (ha)\n')
for index, item in enumerate(x):
line = '{},{:.2f}\n'.format(item, y[index])
f.write(line)
link = geemap.create_download_link(
out_csv, title="Click here to download the chart data: ")
display(link)
except Exception as e:
print(e)
print('An error occurred during computation.')
Map.default_style = {'cursor': 'default'}
submit.on_click(submit_clicked)
```
```
%matplotlib inline
import pandas as pd
import os
import bidi.algorithm
import arabic_reshaper
import matplotlib.pyplot as plt
fpath = '/media/sf_VBox_Shared/Arabic/Analyses/Fiqh_final2/quotes'
links_df = pd.read_csv(os.path.join(fpath, 'fiqh_quran_links_v2.csv'))
nodes_aya_df = pd.read_csv(os.path.join(fpath, 'fiqh_quran_aya_nodes_v2.csv'))
nodes_books_df = pd.read_csv(os.path.join(fpath, 'fiqh_quran_book_nodes.csv'))
quotes_df = pd.read_csv(os.path.join(fpath, 'quran_quotes.csv'))
merged_df = links_df.merge(nodes_aya_df, left_on='Target', right_on='id')[['Source', 'Weight', 'sura_id', 'aya_id', 'sura_name', 'sura_arabic_name', 'Label']]
merged_df = merged_df.rename({'Label': 'aya_label'}, axis=1)
merged_df = merged_df.merge(nodes_books_df, left_on='Source', right_on='id')
merged_df = merged_df.drop(['Source', 'id', 'Type', 'Group', 'Label'], axis=1)
merged_df.to_csv(os.path.join(fpath, 'quotes_merged_v2.csv'), index=False)
def reshape_arabic(text):
return bidi.algorithm.get_display(arabic_reshaper.reshape(text))
# Nr of quotes per book, sorted per school
count_per_book = merged_df.groupby(['BookURI', 'BookSUBJ'])['Weight'].sum().unstack()
barplot = count_per_book.sort_values(list(count_per_book.columns), ascending=False).plot(kind='bar', stacked=True, figsize=(15,8))
leg = barplot.axes.get_legend()
for t in leg.get_texts():
t.set_text(reshape_arabic(t.get_text()))
t.set_fontsize(15)
plt.title('Nr of quotes per book, sorted by school')
plt.show()
# Relative nr of quotes per book (divided by book length), sorted per school
nr_tokens_per_book = merged_df.groupby(['BookURI', 'BookSUBJ'])['Number_of_tokens'].min()
rel_count_per_book = merged_df.groupby(['BookURI', 'BookSUBJ'])['Weight'].sum() / nr_tokens_per_book
rel_count_per_book = rel_count_per_book.unstack()
barplot = rel_count_per_book.sort_values(list(rel_count_per_book.columns), ascending=False).plot(kind='bar', stacked=True, figsize=(15,8))
leg = barplot.axes.get_legend()
for t in leg.get_texts():
t.set_text(reshape_arabic(t.get_text()))
t.set_fontsize(15)
plt.title('Relative nr of quotes per book')
plt.show()
rel_count_per_book = merged_df.groupby(['BookURI', 'BookSUBJ'])['Weight'].sum() / nr_tokens_per_book
rel_count_per_book = rel_count_per_book.reset_index().sort_values(['BookSUBJ', 'BookURI'])
rel_count_per_book = rel_count_per_book.pivot('BookURI', 'BookSUBJ', 0).reindex(rel_count_per_book.BookURI)
barplot = rel_count_per_book.plot(kind='bar', stacked=True, figsize=(15,8))
leg = barplot.axes.get_legend()
for t in leg.get_texts():
t.set_text(reshape_arabic(t.get_text()))
t.set_fontsize(15)
plt.title('Relative number of quotes per book (sorted by school/year)')
plt.show()
import matplotlib
rel_count_per_school = merged_df.groupby('BookSUBJ')['Weight'].sum() / nr_tokens_per_book.groupby('BookSUBJ').sum()
plt.subplots(figsize=(20,10))
barplot = plt.bar(range(len(rel_count_per_school)), rel_count_per_school.values, color='grey', width=0.3)
plt.xticks(range(len(rel_count_per_school)), labels=[reshape_arabic(t) for t in rel_count_per_school.index], fontsize=15)
plt.title('Relative nr of quotes per school')
plt.show()
rel_count_per_school.to_csv(os.path.join(fpath, 'counts_pers_school.csv'))
import re
merged_df['Century_num'] = merged_df.Century.apply(lambda s: int(re.match('^[0-9]*', s).group(0)))
count_per_century_subj = merged_df.groupby(['Century_num', 'BookSUBJ'])['Weight'].sum()
# Total number of quotes per century, per school
barplot = count_per_century_subj.unstack().plot(kind='bar', stacked=True)
leg = barplot.axes.get_legend()
for t in leg.get_texts():
t.set_text(reshape_arabic(t.get_text()))
t.set_fontsize(15)
plt.title('Number of quotes per century (by school)')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2)
plt.show()
nrtokens_per_century_book = merged_df.groupby(['BookURI', 'Century_num'])['Number_of_tokens'].min()
# Relative number of quotes per century
count_per_century = count_per_century_subj.groupby('Century_num').sum()
rel_count_per_century = count_per_century / nrtokens_per_century_book.groupby(['Century_num']).sum()
barplot = rel_count_per_century.plot(kind='bar', color='grey')
plt.title('Relative number of quotes per century')
plt.show()
```
## Which verses are cited most often?
Which verses have the most citations in total? And by how many books are they cited?
```
counts_per_verse = pd.DataFrame({'nr_books': merged_df.aya_label.value_counts(),
'nr_citations': merged_df.groupby('aya_label')['Weight'].sum()})
counts_per_verse.sort_values('nr_citations', ascending=False).head(20)
# What are the verses cited by most books?
print('What are the verses cited by most books?')
counts_per_verse.sort_values('nr_books', ascending=False).head(10)
```
## What are the most cited verses per school?
And by how many books are they cited?
```
from IPython.display import display
# What are the verses cited by most books, per school?
for school in merged_df.BookSUBJ.unique():
print(school)
df_sub = merged_df[merged_df.BookSUBJ==school]
counts_per_verse_sub = pd.DataFrame({'nr_books': df_sub.aya_label.value_counts(),
'nr_citations': df_sub.groupby('aya_label')['Weight'].sum(),
'books': df_sub.groupby('aya_label')['BookURI'].aggregate(set)})
print('Total nr of books in this school: ', df_sub.BookURI.nunique())
display(counts_per_verse_sub.sort_values('nr_citations', ascending=False).head(20))
print('\n')
```
## NLP approaches
```
adj_df = merged_df.pivot(index='BookURI', columns='aya_label', values='Weight').fillna(0)
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
tfidf = tfidf_transformer.fit_transform(adj_df.values)
print(tfidf.shape)
from sklearn.metrics.pairwise import cosine_similarity
similarities = cosine_similarity(tfidf, tfidf)
similarities.shape
plt.hist(similarities.flatten());
import numpy as np
similarities_df = pd.DataFrame(similarities, columns=adj_df.index, index=adj_df.index)
np.fill_diagonal(similarities_df.values, 0)
plt.hist(similarities_df.values.flatten());
from sklearn.metrics.pairwise import cosine_distances
from sklearn.manifold import TSNE
dist = cosine_distances(tfidf, tfidf)
X_embedded = TSNE(n_components=2, metric='precomputed', init='random').fit_transform(dist)
print(X_embedded.shape)
df_books_embedded = pd.DataFrame(X_embedded, index=adj_df.index, columns=['x', 'y'])
df_books_embedded.head()
df_books_embedded = df_books_embedded.merge(nodes_books_df, right_on='BookURI', left_on='BookURI')
fig, ax = plt.subplots(figsize=(15,15))
for subj, group in df_books_embedded.groupby('BookSUBJ'):
ax.plot(group.x, group.y, label=reshape_arabic(subj), marker='o', linestyle='', markersize=10)
for x,y,s in zip(group.x, group.y, group.BookURI):
ax.text(x-10, y, s)
ax.legend()
plt.show()
```
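As a sanity check, the TF-IDF + cosine-similarity pipeline above can be verified on a toy adjacency matrix (the numbers below are illustrative, not taken from the dataset): two rows with similar citation profiles should come out highly similar, and rows with no shared columns should have similarity zero.

```python
# Toy check of the TF-IDF + cosine-similarity pipeline.
# Rows play the role of books, columns of verses; values are citation weights.
import numpy as np
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

toy = np.array([
    [3, 1, 0],   # book A cites verses 1 and 2
    [2, 1, 0],   # book B has a similar citation profile to A
    [0, 0, 5],   # book C cites only verse 3
])
tfidf_toy = TfidfTransformer().fit_transform(toy)
sim = cosine_similarity(tfidf_toy)
# Books A and B share a profile (high similarity); A and C share nothing (zero).
print(sim.round(2))
```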
## Network analysis
```
import networkx as nx
# nw_similarities = nx.from_pandas_adjacency(similarities_df)
# list(nw_similarities.edges(data=True))[:10]
# nx.to_pandas_edgelist(nw_similarities).to_csv(os.path.join(fpath, 'links_tfidf_books.csv'), index=False)
links_df['Distance'] = 1.0/links_df.Weight
network = nx.from_pandas_edgelist(links_df, source='Source', target='Target', edge_attr=['Weight', 'Distance'])
nx.algorithms.is_bipartite(network)
book_ids = links_df.Source.unique()
verse_ids = links_df.Target.unique()
network_books = nx.bipartite.weighted_projected_graph(network, book_ids)
network_verses = nx.bipartite.weighted_projected_graph(network, verse_ids)
print(network.number_of_edges(), network.number_of_nodes())
print(network_books.number_of_edges(), network_books.number_of_nodes())
print(network_verses.number_of_edges(), network_verses.number_of_nodes())
weights = nx.get_edge_attributes(network_books, 'weight')
nx.set_edge_attributes(network_books,
{k: 1/weights[k] for k in weights},
'distance')
# Centrality measures on the projected book and verse networks.
# The verse-side measures are computed here as well, since they are
# attached to nodes_aya_df and plotted below.
closeness_centrality_books = nx.closeness_centrality(network_books, distance='distance')
closeness_centrality_verses = nx.closeness_centrality(network_verses)
betweenness_centrality_books = nx.betweenness_centrality(network_books, weight='distance')
betweenness_centrality_verses = nx.betweenness_centrality(network_verses)
nodes_books_df = nodes_books_df.set_index('id')
nodes_books_df['closeness_centrality'] = pd.Series(closeness_centrality_books)
nodes_books_df['betweenness_centrality'] = pd.Series(betweenness_centrality_books)
nodes_books_df = nodes_books_df.reset_index()
nodes_aya_df = nodes_aya_df.set_index('id')
nodes_aya_df['closeness_centrality'] = pd.Series(closeness_centrality_verses)
nodes_aya_df['betweenness_centrality'] = pd.Series(betweenness_centrality_verses)
nodes_aya_df = nodes_aya_df.reset_index()
nodes_aya_df.head()
book_closeness = nodes_books_df.set_index('BookURI')['closeness_centrality']
book_closeness.sort_values(ascending=False).plot(kind='bar', figsize=(15,5))
book_closeness.sort_values(ascending=False)
book_betweenness = nodes_books_df.set_index('BookURI')['betweenness_centrality']
book_betweenness.sort_values(ascending=False).plot(kind='bar', figsize=(15,5))
book_betweenness.sort_values(ascending=False)
verse_closeness = nodes_aya_df.set_index('Label')['closeness_centrality']
verse_closeness.sort_values(ascending=False).head(30).plot(kind='bar', figsize=(15,5))
# Get the projected book graph with overlap (Jaccard) weights
book_ids = links_df.Source.unique()
nw_books_overlap = nx.algorithms.bipartite.overlap_weighted_projected_graph(network, book_ids)
list(nw_books_overlap.edges(data=True))[:10]
nx.to_pandas_edgelist(nw_books_overlap).to_csv(os.path.join(fpath, 'links_projected_books.csv'), index=False)
```
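The bipartite projection used above can be illustrated on a toy book–verse edge list (names are made up for the example): two books are linked in the projected graph exactly when they cite at least one verse in common, and the projected `weight` counts the shared verses.

```python
# Minimal sketch of the bipartite projection, on toy data.
import networkx as nx
import pandas as pd

toy_links = pd.DataFrame({
    'Source': ['bookA', 'bookA', 'bookB', 'bookB', 'bookC'],
    'Target': ['aya1', 'aya2', 'aya2', 'aya3', 'aya3'],
    'Weight': [2, 1, 3, 1, 4],
})
g = nx.from_pandas_edgelist(toy_links, source='Source', target='Target', edge_attr='Weight')
books = toy_links.Source.unique()
# bookA and bookB share aya2; bookB and bookC share aya3;
# bookA and bookC share nothing, so they get no edge.
proj = nx.bipartite.weighted_projected_graph(g, books)
print(list(proj.edges(data=True)))
```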