# The Theta Model
The Theta model of Assimakopoulos & Nikolopoulos (2000) is a simple forecasting method that involves fitting two $\theta$-lines, forecasting each line using a Simple Exponential Smoother, and then combining the forecasts from the two lines to produce the final forecast. The model is implemented in steps:
1. Test for seasonality
2. Deseasonalize if seasonality detected
3. Estimate $\alpha$ by fitting a SES model to the data and $b_0$ by OLS.
4. Forecast the series
5. Reseasonalize if the data was deseasonalized.
The seasonality test examines the ACF at the seasonal lag $m$. If this lag is significantly different from zero, the data is deseasonalized using `statsmodels.tsa.seasonal_decompose` with either a multiplicative method (the default) or an additive method.
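The test can be sketched in a few lines of plain Python. Treat the $1.96/\sqrt{T}$ band below as an assumption; the exact critical value statsmodels uses may differ.

```python
import math

def acf_at_lag(x, m):
    """Sample autocorrelation of the series x at lag m."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[t] - mean) * (x[t - m] - mean) for t in range(m, n))
    return cov / var

def is_seasonal(x, m, level=1.96):
    """Flag seasonality when the lag-m ACF falls outside a normal-approximation band."""
    return abs(acf_at_lag(x, m)) > level / math.sqrt(len(x))

# A series repeating every 4 observations triggers the test at m = 4.
x = [10, 2, 3, 4] * 20
print(is_seasonal(x, 4))  # → True
```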
The parameters of the model are $b_0$ and $\alpha$ where $b_0$ is estimated from the OLS regression
$$
X_t = a_0 + b_0 (t-1) + \epsilon_t
$$
and $\alpha$ is the SES smoothing parameter in
$$
\tilde{X}_t = (1-\alpha) X_t + \alpha \tilde{X}_{t-1}
$$
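The two parameter estimates can be sketched in plain Python, following the recursion above: $b_0$ by least squares on a time trend, and the SES level for a *given* $\alpha$ (statsmodels estimates $\alpha$ itself; initializing the smoother at the first observation is an assumption).

```python
def ols_trend(x):
    """Estimate b0 in X_t = a0 + b0*(t-1) + e_t by least squares."""
    n = len(x)
    t = list(range(n))                    # t - 1 for t = 1..n
    tbar = sum(t) / n
    xbar = sum(x) / n
    num = sum((ti - tbar) * (xi - xbar) for ti, xi in zip(t, x))
    den = sum((ti - tbar) ** 2 for ti in t)
    return num / den

def ses(x, alpha):
    """Final smoothed level per the recursion above, initialized at x[0]."""
    level = x[0]
    for v in x[1:]:
        level = (1 - alpha) * v + alpha * level
    return level

x = [2.0 * t + 1.0 for t in range(50)]    # pure linear trend
print(round(ols_trend(x), 6))  # → 2.0
```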
The forecasts are then
$$
\hat{X}_{T+h|T} = \frac{\theta-1}{\theta} \hat{b}_0
\left[h - 1 + \frac{1}{\hat{\alpha}}
- \frac{(1-\hat{\alpha})^T}{\hat{\alpha}} \right]
+ \tilde{X}_{T+h|T}
$$
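The forecast formula translates directly into code. A sketch, taking the estimated $\hat{b}_0$, $\hat{\alpha}$, sample size $T$, and the SES forecast as given:

```python
def theta_forecast(h, b0, alpha, T, ses_forecast, theta=2.0):
    """Theta forecast per the formula above: damped trend plus the SES forecast.

    ses_forecast is the (horizon-independent) SES level forecast.
    """
    damp = (theta - 1.0) / theta
    trend = b0 * (h - 1 + 1.0 / alpha - (1 - alpha) ** T / alpha)
    return damp * trend + ses_forecast
```

With $\theta = 1$ the damping coefficient is zero and the forecast collapses to the SES level; larger $\theta$ passes through more of the trend.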
Ultimately $\theta$ only plays a role in determining how much the trend is damped. If $\theta$ is very large, then the forecast of the model is identical to that from an Integrated Moving Average with a drift,
$$
X_t = X_{t-1} + b_0 + (\alpha-1)\epsilon_{t-1} + \epsilon_t.
$$
Finally, the forecasts are reseasonalized if needed.
This module is based on:
* Assimakopoulos, V., & Nikolopoulos, K. (2000). The theta model: a decomposition
approach to forecasting. International journal of forecasting, 16(4), 521-530.
* Hyndman, R. J., & Billah, B. (2003). Unmasking the Theta method.
International Journal of Forecasting, 19(2), 287-290.
* Fioruci, J. A., Pellegrini, T. R., Louzada, F., & Petropoulos, F.
(2015). The optimized theta method. arXiv preprint arXiv:1503.03529.
## Imports
We start with the standard set of imports and some tweaks to the default matplotlib style.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pandas_datareader as pdr
import seaborn as sns
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=15)
plt.rc("lines", linewidth=3)
sns.set_style("darkgrid")
```
## Load some Data
We will first look at housing starts using US data. This series is clearly seasonal but does not have a clear trend over this period.
```
reader = pdr.fred.FredReader(["HOUST"], start="1980-01-01", end="2020-04-01")
data = reader.read()
housing = data.HOUST
housing.index.freq = housing.index.inferred_freq
ax = housing.plot()
```
We first specify the model without any options and then fit it. The summary shows that the data was deseasonalized using the multiplicative method. The drift is modest and negative, and the smoothing parameter is fairly low.
```
from statsmodels.tsa.forecasting.theta import ThetaModel
tm = ThetaModel(housing)
res = tm.fit()
print(res.summary())
```
The model is first and foremost a forecasting method. Forecasts are produced using the `forecast` method of the fitted model. Below we produce a hedgehog plot by forecasting 2 years ahead every 2 years.
**Note**: the default $\theta$ is 2.
```
forecasts = {"housing": housing}
for year in range(1995, 2020, 2):
sub = housing[: str(year)]
res = ThetaModel(sub).fit()
fcast = res.forecast(24)
forecasts[str(year)] = fcast
forecasts = pd.DataFrame(forecasts)
ax = forecasts["1995":].plot(legend=False)
children = ax.get_children()
children[0].set_linewidth(4)
children[0].set_alpha(0.3)
children[0].set_color("#000000")
ax.set_title("Housing Starts")
plt.tight_layout(pad=1.0)
```
We could alternatively fit the log of the data. Here it makes more sense to force any deseasonalizing to use the additive method. We also fit the model parameters using MLE. This method fits the IMA
$$ X_t = X_{t-1} + \gamma\epsilon_{t-1} + \epsilon_t $$
where $\hat{\alpha} = \min(\hat{\gamma}+1, 0.9998)$ using `statsmodels.tsa.SARIMAX`. The parameters are similar although the drift is closer to zero.
```
tm = ThetaModel(np.log(housing), method="additive")
res = tm.fit(use_mle=True)
print(res.summary())
```
The forecast only depends on the forecast trend component,
$$
\hat{b}_0
\left[h - 1 + \frac{1}{\hat{\alpha}}
- \frac{(1-\hat{\alpha})^T}{\hat{\alpha}} \right],
$$
the forecast from the SES (which does not change with the horizon), and the seasonal component. These three components are available using the `forecast_components` method. This allows forecasts to be constructed for multiple choices of $\theta$ using the weight expression above.
```
res.forecast_components(12)
```
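Given the three components, forecasts for other values of $\theta$ can be recombined by hand. The sketch below operates on plain sequences; the component names and the multiplicative recombination rule (multiply the trend-plus-SES base by the seasonal factor) are assumptions to verify against the `forecast_components` output.

```python
def combine_components(trend, ses, seasonal, theta=2.0, method="multiplicative"):
    """Recombine theta forecast components for an arbitrary theta.

    trend/ses/seasonal are equal-length sequences, e.g. columns of the
    frame returned by forecast_components (names assumed, not guaranteed).
    """
    w = (theta - 1.0) / theta                    # damping weight on the trend
    base = [w * t + s for t, s in zip(trend, ses)]
    if method == "multiplicative":
        return [b * f for b, f in zip(base, seasonal)]
    return [b + f for b, f in zip(base, seasonal)]
```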
## Personal Consumption Expenditure
We next look at personal consumption expenditure. This series has a clear seasonal component and a drift.
```
reader = pdr.fred.FredReader(["NA000349Q"], start="1980-01-01", end="2020-04-01")
pce = reader.read()
pce.columns = ["PCE"]
pce.index.freq = "QS-OCT"
_ = pce.plot()
```
Since this series is always positive, we model the $\ln$.
```
mod = ThetaModel(np.log(pce))
res = mod.fit()
print(res.summary())
```
Next we explore differences in the forecasts as $\theta$ changes. When $\theta$ is close to 1, the drift is nearly absent. As $\theta$ increases, the drift becomes more obvious.
```
forecasts = pd.DataFrame(
{
"ln PCE": np.log(pce.PCE),
"theta=1.2": res.forecast(12, theta=1.2),
"theta=2": res.forecast(12),
"theta=3": res.forecast(12, theta=3),
"No damping": res.forecast(12, theta=np.inf),
}
)
_ = forecasts.tail(36).plot()
plt.title("Forecasts of ln PCE")
plt.tight_layout(pad=1.0)
```
Finally, `plot_predict` can be used to visualize the predictions and prediction intervals which are constructed assuming the IMA is true.
```
ax = res.plot_predict(24, theta=2)
```
We conclude by producing a hedgehog plot using 3-year non-overlapping samples.
```
ln_pce = np.log(pce.PCE)
forecasts = {"ln PCE": ln_pce}
for year in range(1995, 2020, 3):
sub = ln_pce[: str(year)]
res = ThetaModel(sub).fit()
fcast = res.forecast(12)
forecasts[str(year)] = fcast
forecasts = pd.DataFrame(forecasts)
ax = forecasts["1995":].plot(legend=False)
children = ax.get_children()
children[0].set_linewidth(4)
children[0].set_alpha(0.3)
children[0].set_color("#000000")
ax.set_title("ln PCE")
plt.tight_layout(pad=1.0)
```
```
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
data_label = pd.read_csv("data(with_label).csv")
```
## 30-day death age
```
fig = plt.figure(figsize=(12,6))
sns.set_style('darkgrid')
ax = sns.violinplot(x="thirty_days", hue="gender", y="age",data=data_label, split=True)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel('Age (years)')
plt.title('Age distributions for 30-day death')
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="thirty_days", hue="gender", y="age",data=data_label[data_label.age<300], split=True,)
plt.legend(loc='lower left')
#plt.ylim([0,100])
plt.xlabel(' ')
plt.ylabel('Age (years)')
plt.title('Age distributions for 30-day death \n (excluding ages > 300)')
```
## One year death age
```
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="one_year", hue="gender", y="age",data=data_label, split=True)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel('Age (years)')
plt.title('Age distributions for one-year death')
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="one_year", hue="gender", y="age",data=data_label[data_label.age<300], split=True,)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel('Age (years)')
plt.title('Age distributions for one-year death \n (excluding ages > 300)')
```
## sapsii
```
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="thirty_days", hue="gender", y="sapsii",data=data_label, split=True)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel('SAPS II score')
plt.title('SAPS II distributions for 30-day death')
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="one_year", hue="gender", y="sapsii",data=data_label, split=True)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel('SAPS II score')
plt.title('SAPS II distributions for one year death')
```
## Sofa
```
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="thirty_days", hue="gender", y="sofa",data=data_label, split=True)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel('SOFA score')
plt.title('SOFA distributions for 30-day death')
```
## Comorbidity
```
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="thirty_days", hue="gender", y="elixhauser_vanwalraven",data=data_label, split=True)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel('Elixhauser score')
plt.title('elixhauser_vanwalraven for 30-day death')
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="thirty_days", hue="gender", y="elixhauser_sid29",data=data_label, split=True)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel('Elixhauser score')
plt.title('elixhauser_sid29 for 30-day death')
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="thirty_days", hue="gender", y="elixhauser_sid30",data=data_label, split=True)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel('Elixhauser score')
plt.title('elixhauser_sid30 for 30-day death')
```
## urea_n_mean
```
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="thirty_days", hue="gender", y="urea_n_mean",data=data_label, split=True)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel('Mean urea nitrogen')
plt.title('urea_n_mean for 30-day death')
fig = plt.figure(figsize=(12,6))
ax = sns.violinplot(x="thirty_days", hue="gender", y="rrt",data=data_label, split=True)
plt.legend(loc='lower left')
plt.xlabel(' ')
plt.ylabel(' ')
plt.title('rrt for 30-day death')
```
## Correlation heatmap
### Remaining features: age, gender, 'sapsii', 'sofa', 'thirty_days', 'one_year', 'oasis', 'lods', 'sirs', and other physiological parameters
```
#'platelets_mean','urea_n_mean', 'glucose_mean','resprate_mean', 'sysbp_mean', 'diasbp_mean', 'urine_mean', 'spo2_mean','temp_mean','hr_mean',
data = data_label.drop(columns=['subject_id', 'hadm_id', 'admittime', 'dischtime', 'deathtime', 'dod',
'first_careunit', 'last_careunit', 'marital_status',
'insurance', 'urea_n_min', 'urea_n_max', 'platelets_min',
'platelets_max', 'magnesium_max', 'albumin_min',
'calcium_min', 'resprate_min', 'resprate_max',
'glucose_min', 'glucose_max', 'hr_min', 'hr_max',
'sysbp_min', 'sysbp_max','diasbp_min',
'diasbp_max', 'temp_min', 'temp_max',
'urine_min', 'urine_max',
'elixhauser_vanwalraven', 'elixhauser_sid29', 'elixhauser_sid30',
'los_hospital', 'meanbp_min', 'meanbp_max', 'meanbp_mean', 'spo2_min',
'spo2_max', 'vent', 'rrt', 'urineoutput',
'icustay_age_group', 'admission_type',
'admission_location', 'discharge_location', 'ethnicity', 'diagnosis',
'time_before_death'])
correlation = data.corr()
plt.figure(figsize=(10,10))
sns.heatmap(correlation, vmax=1, square=True, annot=False, cmap="YlGnBu")
```
## KDE for 30 day death
```
data_pos = data_label.loc[data_label.thirty_days == 1]
data_neg = data_label.loc[data_label.thirty_days == 0]
fig = plt.figure(figsize=(15,15))
plt.subplot(331)
data_neg.platelets_min.plot.kde(color = 'red', alpha = 0.5)
data_pos.platelets_min.plot.kde(color = 'blue', alpha = 0.5)
plt.title('platelets_min')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(332)
data_neg.age.plot.kde(color = 'red', alpha = 0.5)
data_pos.age.plot.kde(color = 'blue', alpha = 0.5)
plt.title('Age')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(333)
data_neg.albumin_min.plot.kde(color = 'red', alpha = 0.5)
data_pos.albumin_min.plot.kde(color = 'blue', alpha = 0.5)
plt.title('albumin_min')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(334)
data_neg.sysbp_min.plot.kde(color = 'red', alpha = 0.5)
data_pos.sysbp_min.plot.kde(color = 'blue', alpha = 0.5)
plt.title('sysbp_min')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(335)
data_neg.temp_mean.plot.kde(color = 'red', alpha = 0.5)
data_pos.temp_mean.plot.kde(color = 'blue', alpha = 0.5)
plt.title('temp_mean')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(336)
data_neg.resprate_max.plot.kde(color = 'red', alpha = 0.5)
data_pos.resprate_max.plot.kde(color = 'blue', alpha = 0.5)
plt.title('resprate_max')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(337)
data_neg.urea_n_mean.plot.kde(color = 'red', alpha = 0.5)
data_pos.urea_n_mean.plot.kde(color = 'blue', alpha = 0.5)
plt.title('urea_n_mean')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(338)
data_neg.vent.plot.kde(color = 'red', alpha = 0.5)
data_pos.vent.plot.kde(color = 'blue', alpha = 0.5)
plt.title('vent')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(339)
data_neg.rrt.plot.kde(color = 'red', alpha = 0.5)
data_pos.rrt.plot.kde(color = 'blue', alpha = 0.5)
plt.title('rrt')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
fig = plt.figure(figsize=(15,15))
plt.subplot(321)
data_neg.sofa.plot.kde(color = 'red', alpha = 0.5)
data_pos.sofa.plot.kde(color = 'blue', alpha = 0.5)
plt.title('sofa')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(322)
data_neg.sapsii.plot.kde(color = 'red', alpha = 0.5)
data_pos.sapsii.plot.kde(color = 'blue', alpha = 0.5)
plt.title('sapsii')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(323)
data_neg.oasis.plot.kde(color = 'red', alpha = 0.5)
data_pos.oasis.plot.kde(color = 'blue', alpha = 0.5)
plt.title('oasis')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(324)
data_neg.lods.plot.kde(color = 'red', alpha = 0.5)
data_pos.lods.plot.kde(color = 'blue', alpha = 0.5)
plt.title('lods')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
plt.subplot(325)
data_neg.sirs.plot.kde(color = 'red', alpha = 0.5)
data_pos.sirs.plot.kde(color = 'blue', alpha = 0.5)
plt.title('sirs')
plt.legend(labels=['Alive in 30 days', 'Dead in 30 days'])
```
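The repeated KDE panels above can be factored into a single helper. This is a sketch using a hand-rolled Gaussian KDE (Silverman's rule-of-thumb bandwidth) rather than pandas' `plot.kde`, and the synthetic frame below only stands in for `data_label`; the column and label names are taken from the plots above.

```python
import matplotlib
matplotlib.use("Agg")  # safe for non-interactive use
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

def kde_curve(values, grid):
    """Gaussian KDE evaluated on grid, Silverman bandwidth."""
    values = np.asarray(values, dtype=float)
    h = 1.06 * values.std() * len(values) ** (-1 / 5)
    # one Gaussian bump per observation, averaged over the sample
    z = (grid[:, None] - values[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

def kde_grid(df, columns, label_col="thirty_days", ncols=3):
    """One panel per column, overlaying the two outcome groups."""
    nrows = -(-len(columns) // ncols)  # ceiling division
    fig, axes = plt.subplots(nrows, ncols, figsize=(5 * ncols, 4 * nrows))
    for ax, col in zip(np.ravel(axes), columns):
        grid = np.linspace(df[col].min(), df[col].max(), 200)
        for label, color in [(0, "red"), (1, "blue")]:
            ax.plot(grid, kde_curve(df.loc[df[label_col] == label, col], grid),
                    color=color, alpha=0.5)
        ax.set_title(col)
        ax.legend(["Alive in 30 days", "Dead in 30 days"])
    return fig

# Demonstration on synthetic data with the assumed column names.
rng = np.random.default_rng(0)
demo = pd.DataFrame({
    "thirty_days": rng.integers(0, 2, 200),
    "age": rng.normal(65, 15, 200),
    "sofa": rng.normal(6, 2, 200),
    "sapsii": rng.normal(40, 10, 200),
})
fig = kde_grid(demo, ["age", "sofa", "sapsii"])
```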
## Pie chart
```
# Age groups
age_category = np.floor(data_label['age']/10)
count = age_category.value_counts()
count['10-20'] = 345
count['20-30'] = 1860
count['30-40'] = 2817
count['40-50'] = 5716
count['50-60'] = 10190
count['60-70'] = 12300
count['70-80'] = 12638
count['80-89'] = 9233
count['older than 89'] = 2897
count = count.drop([7.0, 6.0, 5.0, 8.0, 4.0, 30.0, 3.0, 2.0, 1.0, 31.0])
count
fig = plt.figure(figsize=(25,25))
plt.rcParams.update({'font.size': 18})
#explode = (0, 0.15, 0)
colors = ['#79bd9a','#f4f7f7','#aacfd0','#79a8a9','#a8dba8']
#f4f7f7 #aacfd0 #79a8a9 #a8dba8 #79bd9a
plt.subplot(321)
data_label.admission_type.value_counts().plot.pie( colors = colors, autopct='%1.1f%%')
plt.title('Admission type')
plt.ylabel('')
plt.subplot(322)
plotting = (data_label.admission_location.value_counts(dropna=False))
plotting['OTHER'] = plotting['TRANSFER FROM SKILLED NUR'] + plotting['TRANSFER FROM OTHER HEALT'] + plotting['** INFO NOT AVAILABLE **']+plotting['HMO REFERRAL/SICK']+plotting['TRSF WITHIN THIS FACILITY']
plotting = plotting.drop(['TRANSFER FROM SKILLED NUR', 'TRANSFER FROM OTHER HEALT', '** INFO NOT AVAILABLE **','HMO REFERRAL/SICK','TRSF WITHIN THIS FACILITY'])
plotting.plot.pie( colors = colors, autopct='%1.1f%%')
plt.title('Admission location')
plt.ylabel('')
plt.subplot(323)
count.plot.pie( colors = colors, autopct='%1.1f%%')
plt.title('Age groups')
plt.ylabel('')
plt.subplot(324)
data_label.insurance.value_counts().plot.pie( colors = colors, autopct='%1.1f%%')
plt.title('Insurance provider')
plt.ylabel('')
#admission_location
#discharge_location
#ethnicity
#diagnosis
fig = plt.figure(figsize=(8,8))
plt.rcParams.update({'font.size': 15})
explode = (0, 0.1)
data_label.one_year.value_counts().plot.pie( colors = colors, autopct='%1.1f%%',explode = explode, startangle = 90)
plt.title('Patient died in 1 year')
plt.ylabel('')
fig = plt.figure(figsize=(8,8))
plt.rcParams.update({'font.size': 15})
data_label.thirty_days.value_counts().plot.pie( colors = colors, autopct='%1.1f%%',explode = explode, startangle = 90)
plt.title('Patient died in 30 days')
plt.ylabel('')
```
# Logic: `logic.py`; Chapters 6-8
This notebook describes the [logic.py](https://github.com/aimacode/aima-python/blob/master/logic.py) module, which covers Chapters 6 (Logical Agents), 7 (First-Order Logic) and 8 (Inference in First-Order Logic) of *[Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu)*. See the [intro notebook](https://github.com/aimacode/aima-python/blob/master/intro.ipynb) for instructions.
We'll start by looking at `Expr`, the data type for logical sentences, and the convenience function `expr`. We'll be covering two types of knowledge bases, `PropKB` - Propositional logic knowledge base and `FolKB` - First order logic knowledge base. We will construct a propositional knowledge base of a specific situation in the Wumpus World. We will next go through the `tt_entails` function and experiment with it a bit. The `pl_resolution` and `pl_fc_entails` functions will come next. We'll study forward chaining and backward chaining algorithms for `FolKB` and use them on `crime_kb` knowledge base.
But the first step is to load the code:
```
from utils import *
from logic import *
```
## Logical Sentences
The `Expr` class is designed to represent any kind of mathematical expression. The simplest type of `Expr` is a symbol, which can be defined with the function `Symbol`:
```
Symbol('x')
```
Or we can define multiple symbols at the same time with the function `symbols`:
```
(x, y, P, Q, f) = symbols('x, y, P, Q, f')
```
We can combine `Expr`s with the regular Python infix and prefix operators. Here's how we would form the logical sentence "P and not Q":
```
P & ~Q
```
This works because the `Expr` class overloads the `&` operator with this definition:
```python
def __and__(self, other): return Expr('&', self, other)```
and does similar overloads for the other operators. An `Expr` has two fields: `op` for the operator, which is always a string, and `args` for the arguments, which is a tuple of 0 or more expressions. By "expression," I mean either an instance of `Expr`, or a number. Let's take a look at the fields for some `Expr` examples:
```
sentence = P & ~Q
sentence.op
sentence.args
P.op
P.args
Pxy = P(x, y)
Pxy.op
Pxy.args
```
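A stripped-down sketch of this design (not the library's actual `Expr` class) shows how the operator overloads build the tree:

```python
class MiniExpr:
    """An operator string plus a tuple of arguments; overloads build the tree."""
    def __init__(self, op, *args):
        self.op, self.args = op, args
    def __and__(self, other):
        return MiniExpr('&', self, other)
    def __invert__(self):
        return MiniExpr('~', self)
    def __repr__(self):
        if not self.args:                 # a bare symbol
            return self.op
        if len(self.args) == 1:           # unary operator
            return self.op + repr(self.args[0])
        return '(' + (' %s ' % self.op).join(map(repr, self.args)) + ')'

P, Q = MiniExpr('P'), MiniExpr('Q')
s = P & ~Q
print(s.op, s.args)   # → & (P, ~Q)
```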
It is important to note that the `Expr` class does not define the *logic* of Propositional Logic sentences; it just gives you a way to *represent* expressions. Think of an `Expr` as an [abstract syntax tree](https://en.wikipedia.org/wiki/Abstract_syntax_tree). Each of the `args` in an `Expr` can be either a symbol, a number, or a nested `Expr`. We can nest these trees to any depth. Here is a deeply nested `Expr`:
```
3 * f(x, y) + P(y) / 2 + 1
```
## Operators for Constructing Logical Sentences
Here is a table of the operators that can be used to form sentences. Note that we have a problem: we want to use Python operators to make sentences, so that our programs (and our interactive sessions like the one here) will show simple code. But Python does not allow implication arrows as operators, so for now we have to use a more verbose notation that Python does allow: `|'==>'|` instead of just `==>`. Alternately, you can always use the more verbose `Expr` constructor forms:
| Operation | Book | Python Infix Input | Python Output | Python `Expr` Input
|--------------------------|----------------------|-------------------------|---|---|
| Negation | ¬ P | `~P` | `~P` | `Expr('~', P)`
| And | P ∧ Q | `P & Q` | `P & Q` | `Expr('&', P, Q)`
| Or | P ∨ Q | `P`<tt> | </tt>`Q`| `P`<tt> | </tt>`Q` | `Expr('`|`', P, Q)`
| Inequality (Xor) | P ≠ Q | `P ^ Q` | `P ^ Q` | `Expr('^', P, Q)`
| Implication | P → Q | `P` <tt>|</tt>`'==>'`<tt>|</tt> `Q` | `P ==> Q` | `Expr('==>', P, Q)`
| Reverse Implication | Q ← P | `Q` <tt>|</tt>`'<=='`<tt>|</tt> `P` |`Q <== P` | `Expr('<==', Q, P)`
| Equivalence | P ↔ Q | `P` <tt>|</tt>`'<=>'`<tt>|</tt> `Q` |`P <=> Q` | `Expr('<=>', P, Q)`
Here's an example of defining a sentence with an implication arrow:
```
~(P & Q) |'==>'| (~P | ~Q)
```
## `expr`: a Shortcut for Constructing Sentences
If the `|'==>'|` notation looks ugly to you, you can use the function `expr` instead:
```
expr('~(P & Q) ==> (~P | ~Q)')
```
`expr` takes a string as input, and parses it into an `Expr`. The string can contain arrow operators: `==>`, `<==`, or `<=>`, which are handled as if they were regular Python infix operators. And `expr` automatically defines any symbols, so you don't need to pre-define them:
```
expr('sqrt(b ** 2 - 4 * a * c)')
```
For now that's all you need to know about `expr`. If you are interested, we explain the messy details of how `expr` is implemented and how `|'==>'|` is handled in the appendix.
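For the curious, the trick can be sketched in miniature: the first `|` returns a half-built partial expression when its right operand is a string, and the second `|` completes it. This is an illustration of the idea only; the library's actual `PartialExpr` differs in detail.

```python
class PartialOp:
    """Holds an operator and a left operand, waiting for the `| rhs` half."""
    def __init__(self, op, lhs):
        self.op, self.lhs = op, lhs
    def __or__(self, rhs):
        return (self.op, self.lhs, rhs)   # completed sentence as a tuple

class Atom:
    def __init__(self, name):
        self.name = name
    def __or__(self, other):
        if isinstance(other, str):        # P | '==>' starts a partial expression
            return PartialOp(other, self)
        return ('|', self, other)         # ordinary disjunction

P, Q = Atom('P'), Atom('Q')
sent = P | '==>' | Q                      # evaluates as (P | '==>') | Q
print(sent[0])  # → ==>
```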
## Propositional Knowledge Bases: `PropKB`
The class `PropKB` can be used to represent a knowledge base of propositional logic sentences.
We see that the class `KB` has four methods, apart from `__init__`. A point to note here: the `ask` method simply calls the `ask_generator` method. Thus, this one has already been implemented, and what you'll have to actually implement when you create your own knowledge base class (though you'll probably never need to, considering the ones we've created for you) will be the `ask_generator` function and not the `ask` function itself.
Now for the class `PropKB`.
* `__init__(self, sentence=None)` : The constructor `__init__` creates a single field `clauses` which will be a list of all the sentences of the knowledge base. Note that each one of these sentences will be a 'clause' i.e. a sentence which is made up of only literals and `or`s.
* `tell(self, sentence)` : When you want to add a sentence to the KB, you use the `tell` method. This method takes a sentence, converts it to its CNF, extracts all the clauses, and adds all these clauses to the `clauses` field. So, you need not worry about `tell`ing only clauses to the knowledge base. You can `tell` the knowledge base a sentence in any form that you wish; converting it to CNF and adding the resulting clauses will be handled by the `tell` method.
* `ask_generator(self, query)` : The `ask_generator` function is used by the `ask` function. It calls the `tt_entails` function, which in turn returns `True` if the knowledge base entails the query and `False` otherwise. The `ask_generator` itself returns an empty dict `{}` if the knowledge base entails the query and `None` otherwise. This might seem a little weird: after all, it makes more sense just to return `True` or `False` instead of `{}` or `None`. But this is done to maintain consistency with the way things are in First-Order Logic, where an `ask_generator` function is supposed to return all the substitutions that make the query true; hence the dict. I will mostly be using the `ask` function, which returns a `{}` or a `False`, but if you don't like this, you can always use the `ask_if_true` function, which returns a `True` or a `False`.
* `retract(self, sentence)` : This function removes all the clauses of the sentence given, from the knowledge base. Like the `tell` function, you don't have to pass clauses to remove them from the knowledge base; any sentence will do fine. The function will take care of converting that sentence to clauses and then remove those.
## Wumpus World KB
Let us create a `PropKB` for the wumpus world with the sentences mentioned in `section 7.4.3`.
```
wumpus_kb = PropKB()
```
We define the symbols we use in our clauses.<br/>
$P_{x, y}$ is true if there is a pit in `[x, y]`.<br/>
$B_{x, y}$ is true if the agent senses breeze in `[x, y]`.<br/>
```
P11, P12, P21, P22, P31, B11, B21 = expr('P11, P12, P21, P22, P31, B11, B21')
```
Now we tell sentences based on `section 7.4.3`.<br/>
There is no pit in `[1,1]`.
```
wumpus_kb.tell(~P11)
```
A square is breezy if and only if there is a pit in a neighboring square. This has to be stated for each square but for now, we include just the relevant squares.
```
wumpus_kb.tell(B11 | '<=>' | ((P12 | P21)))
wumpus_kb.tell(B21 | '<=>' | ((P11 | P22 | P31)))
```
Now we include the breeze percepts for the first two squares leading up to the situation in `Figure 7.3(b)`
```
wumpus_kb.tell(~B11)
wumpus_kb.tell(B21)
```
We can check the clauses stored in a `KB` by accessing its `clauses` variable
```
wumpus_kb.clauses
```
We see that the equivalence $B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was automatically converted to two implications which were in turn converted to CNF, which is stored in the `KB`.<br/>
$B_{1, 1} \iff (P_{1, 2} \lor P_{2, 1})$ was split into $B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ and $B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$.<br/>
$B_{1, 1} \implies (P_{1, 2} \lor P_{2, 1})$ was converted to $P_{1, 2} \lor P_{2, 1} \lor \neg B_{1, 1}$.<br/>
$B_{1, 1} \Longleftarrow (P_{1, 2} \lor P_{2, 1})$ was converted to $\neg (P_{1, 2} \lor P_{2, 1}) \lor B_{1, 1}$ which becomes $(\neg P_{1, 2} \lor B_{1, 1}) \land (\neg P_{2, 1} \lor B_{1, 1})$ after applying De Morgan's laws and distributing the disjunction.<br/>
$B_{2, 1} \iff (P_{1, 1} \lor P_{2, 2} \lor P_{3, 1})$ is converted in a similar manner.
## Inference in Propositional Knowledge Base
In this section we will look at two algorithms to check if a sentence is entailed by the `KB`. Our goal is to decide whether $\text{KB} \vDash \alpha$ for some sentence $\alpha$.
### Truth Table Enumeration
It is a model-checking approach which, as the name suggests, enumerates all possible models in which the `KB` is true and checks if $\alpha$ is also true in these models. We list the $n$ symbols in the `KB` and enumerate the $2^{n}$ models in a depth-first manner and check the truth of `KB` and $\alpha$.
```
%psource tt_check_all
```
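The model-checking idea can be sketched over plain Python callables instead of `Expr` objects. Here a two-pit fragment of the wumpus KB is hard-coded as a lambda; note that in this reduced KB (without $P_{3,1}$), $P_{2,2}$ *is* entailed, unlike in the full KB.

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """True iff alpha holds in every model (dict symbol -> bool) where kb holds."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False
    return True

# KB fragment: ~P11, B21 <=> (P11 | P22), and the percept B21.
kb = lambda m: (not m['P11']) and (m['B21'] == (m['P11'] or m['P22'])) and m['B21']
print(tt_entails(kb, lambda m: m['P22'], ['P11', 'P22', 'B21']))  # → True
```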
Note that `tt_entails()` takes an `Expr` which is a conjunction of clauses as the input instead of the `KB` itself. You can use the `ask_if_true()` method of `PropKB` which does all the required conversions. Let's check what `wumpus_kb` tells us about $P_{1, 1}$.
```
wumpus_kb.ask_if_true(~P11), wumpus_kb.ask_if_true(P11)
```
Looking at Figure 7.9 we see that in all models in which the knowledge base is `True`, $P_{1, 1}$ is `False`. It makes sense that `ask_if_true()` returns `True` for $\alpha = \neg P_{1, 1}$ and `False` for $\alpha = P_{1, 1}$. This raises the question: what if $\alpha$ is `True` in only a portion of all models? Do we return `True` or `False`? This doesn't rule out the possibility of $\alpha$ being `True`, but it is not entailed by the `KB`, so we return `False` in such cases. We can see this is the case for $P_{2, 2}$ and $P_{3, 1}$.
```
wumpus_kb.ask_if_true(~P22), wumpus_kb.ask_if_true(P22)
```
### Proof by Resolution
Recall that our goal is to check whether $\text{KB} \vDash \alpha$ i.e. is $\text{KB} \implies \alpha$ true in every model. Suppose we wanted to check if $P \implies Q$ is valid. We check the satisfiability of $\neg (P \implies Q)$, which can be rewritten as $P \land \neg Q$. If $P \land \neg Q$ is unsatisfiable, then $P \implies Q$ must be true in all models. This gives us the result "$\text{KB} \vDash \alpha$ <em>if and only if</em> $\text{KB} \land \neg \alpha$ is unsatisfiable".<br/>
This technique corresponds to <em>proof by <strong>contradiction</strong></em>, a standard mathematical proof technique. We assume $\alpha$ to be false and show that this leads to a contradiction with known axioms in $\text{KB}$. We obtain a contradiction by making valid inferences using inference rules. In this proof we use a single inference rule, <strong>resolution</strong>, which states $(l_1 \lor \dots \lor l_k) \land (m_1 \lor \dots \lor m_n) \land (l_i \iff \neg m_j) \implies l_1 \lor \dots \lor l_{i - 1} \lor l_{i + 1} \lor \dots \lor l_k \lor m_1 \lor \dots \lor m_{j - 1} \lor m_{j + 1} \lor \dots \lor m_n$. Applying resolution yields a clause, which we add to the KB. We keep doing this until:
* There are no new clauses that can be added, in which case $\text{KB} \nvDash \alpha$.
* Two clauses resolve to yield the <em>empty clause</em>, in which case $\text{KB} \vDash \alpha$.
The <em>empty clause</em> is equivalent to <em>False</em> because it arises only from resolving two complementary
unit clauses such as $P$ and $\neg P$ which is a contradiction as both $P$ and $\neg P$ can't be <em>True</em> at the same time.
```
%psource pl_resolution
pl_resolution(wumpus_kb, ~P11), pl_resolution(wumpus_kb, P11)
pl_resolution(wumpus_kb, ~P22), pl_resolution(wumpus_kb, P22)
```
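The resolution loop can be sketched over a simpler clause representation, with clauses as frozensets of string literals (an illustration of the procedure, not the library's `pl_resolution`):

```python
def negate(lit):
    """Map 'P' <-> '~P'."""
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(ci, cj):
    """All resolvents of two clauses (frozensets of literals)."""
    return [frozenset((ci - {l}) | (cj - {negate(l)}))
            for l in ci if negate(l) in cj]

def pl_resolution(clauses, query):
    """KB |= query iff KB together with ~query derives the empty clause."""
    clauses = set(clauses) | {frozenset([negate(query)])}
    while True:
        new = set()
        for ci in clauses:
            for cj in clauses:
                if ci is not cj:
                    for r in resolve(ci, cj):
                        if not r:          # empty clause: contradiction found
                            return True
                        new.add(r)
        if new <= clauses:                 # no progress: not entailed
            return False
        clauses |= new

kb = [frozenset(['P']), frozenset(['~P', 'Q'])]   # P, and P => Q as a clause
print(pl_resolution(kb, 'Q'))  # → True
```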
## First-Order Logic Knowledge Bases: `FolKB`
The class `FolKB` can be used to represent a knowledge base of First-order logic sentences. You would initialize and use it the same way as you would for `PropKB` except that the clauses are first-order definite clauses. We will see how to write such clauses to create a database and query them in the following sections.
## Criminal KB
In this section we create a `FolKB` based on the following paragraph.<br/>
<em>The law says that it is a crime for an American to sell weapons to hostile nations. The country Nono, an enemy of America, has some missiles, and all of its missiles were sold to it by Colonel West, who is American.</em><br/>
The first step is to extract the facts and convert them into first-order definite clauses. Extracting the facts from data alone is a challenging task. Fortunately, we have a small paragraph and can do extraction and conversion manually. We'll store the clauses in a list aptly named `clauses`.
```
clauses = []
```
<em>“... it is a crime for an American to sell weapons to hostile nations”</em><br/>
The keywords to look for here are 'crime', 'American', 'sell', 'weapon' and 'hostile'. We use predicate symbols to make meaning of them.
* `Criminal(x)`: `x` is a criminal
* `American(x)`: `x` is an American
* `Sells(x, y, z)`: `x` sells `y` to `z`
* `Weapon(x)`: `x` is a weapon
* `Hostile(x)`: `x` is a hostile nation
Let us now combine them with appropriate variable naming to depict the meaning of the sentence. The criminal `x` is also the American `x` who sells weapon `y` to `z`, which is a hostile nation.
$\text{American}(x) \land \text{Weapon}(y) \land \text{Sells}(x, y, z) \land \text{Hostile}(z) \implies \text{Criminal} (x)$
```
clauses.append(expr("(American(x) & Weapon(y) & Sells(x, y, z) & Hostile(z)) ==> Criminal(x)"))
```
<em>"The country Nono, an enemy of America"</em><br/>
We now know that Nono is an enemy of America. We represent these nations using the constant symbols `Nono` and `America`. The enemy relation is shown using the predicate symbol `Enemy`.
$\text{Enemy}(\text{Nono}, \text{America})$
```
clauses.append(expr("Enemy(Nono, America)"))
```
<em>"Nono ... has some missiles"</em><br/>
This states the existence of some missile which is owned by Nono. $\exists x \text{Owns}(\text{Nono}, x) \land \text{Missile}(x)$. We invoke existential instantiation to introduce a new constant `M1` which is the missile owned by Nono.
$\text{Owns}(\text{Nono}, \text{M1}), \text{Missile}(\text{M1})$
```
clauses.append(expr("Owns(Nono, M1)"))
clauses.append(expr("Missile(M1)"))
```
<em>"All of its missiles were sold to it by Colonel West"</em><br/>
If Nono owns something and it classifies as a missile, then it was sold to Nono by West.
$\text{Missile}(x) \land \text{Owns}(\text{Nono}, x) \implies \text{Sells}(\text{West}, x, \text{Nono})$
```
clauses.append(expr("(Missile(x) & Owns(Nono, x)) ==> Sells(West, x, Nono)"))
```
<em>"West, who is American"</em><br/>
West is an American.
$\text{American}(\text{West})$
```
clauses.append(expr("American(West)"))
```
We also know, from our understanding of language, that missiles are weapons and that an enemy of America counts as “hostile”.
$\text{Missile}(x) \implies \text{Weapon}(x), \text{Enemy}(x, \text{America}) \implies \text{Hostile}(x)$
```
clauses.append(expr("Missile(x) ==> Weapon(x)"))
clauses.append(expr("Enemy(x, America) ==> Hostile(x)"))
```
Now that we have converted the information into first-order definite clauses we can create our first-order logic knowledge base.
```
crime_kb = FolKB(clauses)
```
## Inference in First-Order Logic
In this section we look at a forward chaining and a backward chaining algorithm for `FolKB`. Both aforementioned algorithms rely on a process called <strong>unification</strong>, a key component of all first-order inference algorithms.
### Unification
We sometimes require finding substitutions that make different logical expressions look identical. This process, called unification, is done by the `unify` algorithm. It takes as input two sentences and returns a <em>unifier</em> for them if one exists. A unifier is a dictionary which stores the substitutions required to make the two sentences identical. It does so by recursively unifying the components of a sentence, where the unification of a variable symbol `var` with a constant symbol `Const` is the mapping `{var: Const}`. Let's look at a few examples.
```
unify(expr('x'), 3)
unify(expr('A(x)'), expr('A(B)'))
unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(y)'))
```
In cases where there is no possible substitution that unifies the two sentences, the function returns `None`.
```
print(unify(expr('Cat(x)'), expr('Dog(Dobby)')))
```
We also need to take care not to unintentionally reuse the same variable name across sentences. `unify` treats them as a single variable, which prevents the name from taking multiple values.
```
print(unify(expr('Cat(x) & Dog(Dobby)'), expr('Cat(Bella) & Dog(x)')))
```
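The failure above, where `x` cannot be both `Bella` and `Dobby`, falls out naturally from a recursive unifier. Here is a minimal, self-contained sketch over nested tuples, in which lowercase strings play the role of variables; `toy_unify` and its helpers are illustrative names, not the module's `unify`:

```python
def is_var(x):
    # Lowercase strings act as variables; everything else is a constant.
    return isinstance(x, str) and x[:1].islower()

def walk(x, subst):
    # Follow variable bindings to their current value.
    while is_var(x) and x in subst:
        x = subst[x]
    return x

def toy_unify(a, b, subst=None):
    """Return a substitution dict unifying a and b, or None if impossible."""
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = toy_unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

# Cat(x) & Dog(Dobby)  unified with  Cat(Bella) & Dog(y)
print(toy_unify((('Cat', 'x'), ('Dog', 'Dobby')),
                (('Cat', 'Bella'), ('Dog', 'y'))))      # {'x': 'Bella', 'y': 'Dobby'}
# Reusing x in both conjuncts forces a single binding, so this fails:
print(toy_unify((('Cat', 'x'), ('Dog', 'x')),
                (('Cat', 'Bella'), ('Dog', 'Dobby'))))  # None
```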
### Forward Chaining Algorithm
We consider the simple forward-chaining algorithm presented in <em>Figure 9.3</em>. We look at each rule in the knowledge base and see if the premises can be satisfied. This is done by finding a substitution which unifies each of the premises with a clause in the `KB`. If we are able to unify the premises, the conclusion (with the corresponding substitution) is added to the `KB`. This inferencing process is repeated until either the query can be answered or no new sentences can be added. We test if the newly added clause unifies with the query, in which case the substitution yielded by `unify` is an answer to the query. If we run out of sentences to infer, the query is a failure.
The function `fol_fc_ask` is a generator which yields all substitutions which validate the query.
```
%psource fol_fc_ask
```
Let's find out all the hostile nations. Note that we only told the `KB` that Nono was an enemy of America, not that it was hostile.
```
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
```
The generator returned a single substitution which says that Nono is a hostile nation. See how after adding another enemy nation the generator returns two substitutions.
```
crime_kb.tell(expr('Enemy(JaJa, America)'))
answer = fol_fc_ask(crime_kb, expr('Hostile(x)'))
print(list(answer))
```
<strong><em>Note</em>:</strong> `fol_fc_ask` makes changes to the `KB` by adding sentences to it.
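The same derivation can be sketched in a few self-contained lines: a toy forward chainer over plain tuples that fires rules until the fact set stops growing. All names here are illustrative; this is not the implementation behind `fol_fc_ask`.

```python
def is_var(t):
    # Lowercase strings act as variables; everything else is a constant.
    return isinstance(t, str) and t[:1].islower()

def match(pattern, fact, subst):
    """Match a flat pattern like ('Enemy', 'x', 'America') against a ground fact."""
    if len(pattern) != len(fact):
        return None
    for p, f in zip(pattern, fact):
        p = subst.get(p, p)
        if is_var(p):
            subst = {**subst, p: f}
        elif p != f:
            return None
    return subst

def satisfy(premises, facts, subst):
    """Yield every substitution grounding all premises in the fact set."""
    if not premises:
        yield subst
        return
    for fact in facts:
        s = match(premises[0], fact, subst)
        if s is not None:
            yield from satisfy(premises[1:], facts, s)

def forward_chain(facts, rules):
    """rules: list of (premises, conclusion) pairs; returns the closure of facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Materialize first so we never mutate `facts` mid-iteration.
            derived = [tuple(s.get(t, t) for t in conclusion)
                       for s in satisfy(premises, facts, {})]
            for new in derived:
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

facts = [('Enemy', 'Nono', 'America'), ('Missile', 'M1'), ('Owns', 'Nono', 'M1')]
rules = [([('Enemy', 'x', 'America')], ('Hostile', 'x')),
         ([('Missile', 'x')], ('Weapon', 'x')),
         ([('Missile', 'x'), ('Owns', 'Nono', 'x')], ('Sells', 'West', 'x', 'Nono'))]
closure = forward_chain(facts, rules)
print(('Hostile', 'Nono') in closure)  # True
```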
### Backward Chaining Algorithm
This algorithm works backward from the goal, chaining through rules to find known facts that support the proof. Suppose `goal` is the query we want to find the substitution for. We find rules of the form $\text{lhs} \implies \text{goal}$ in the `KB` and try to prove `lhs`. There may be multiple clauses in the `KB` which give multiple `lhs`. It is sufficient to prove only one of these. But to prove an `lhs`, all the conjuncts in the `lhs` of the clause must be proved. This makes it similar to <em>And/Or</em> search.
#### OR
The <em>OR</em> part of the algorithm comes from our choice to select any clause of the form $\text{lhs} \implies \text{goal}$. Looking at all rules whose `rhs` unifies with the `goal`, we yield a substitution which proves all the conjuncts in the `lhs`. We use `parse_definite_clause` to obtain the `lhs` and `rhs` from a clause of the form $\text{lhs} \implies \text{rhs}$. For atomic facts the `lhs` is an empty list.
```
%psource fol_bc_or
```
#### AND
The <em>AND</em> corresponds to proving all the conjuncts in the `lhs`. We need to find a substitution which proves each <em>and</em> every clause in the list of conjuncts.
```
%psource fol_bc_and
```
Now the main function `fol_bc_ask` calls `fol_bc_or` with the substitution initialized as empty. The `ask` method of `FolKB` uses `fol_bc_ask` and fetches the first substitution returned by the generator to answer the query. Let's query the knowledge base we created from `clauses` to find hostile nations.
```
# Rebuild KB because running fol_fc_ask would add new facts to the KB
crime_kb = FolKB(clauses)
crime_kb.ask(expr('Hostile(x)'))
```
You may notice some new variables in the substitution. They are introduced to standardize the variable names to prevent naming problems as discussed in the [Unification section](#Unification).
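The OR/AND recursion described above can likewise be condensed into a toy backward chainer over plain tuples. This is a sketch under the simplifying assumption that rule variables are chosen not to clash transitively in awkward ways (real implementations standardize variables apart automatically); it is not the module's `fol_bc_ask`.

```python
def is_var(x):
    # Lowercase strings act as variables; everything else is a constant.
    return isinstance(x, str) and x[:1].islower()

def resolve(x, subst):
    # Follow a chain of variable bindings to its end.
    while is_var(x) and x in subst:
        x = subst[x]
    return x

def unify_atom(a, b, subst):
    """Unify two flat atoms like ('Hostile', 'x'); return extended subst or None."""
    if len(a) != len(b):
        return None
    for x, y in zip(a, b):
        x, y = resolve(x, subst), resolve(y, subst)
        if x == y:
            continue
        if is_var(x):
            subst = {**subst, x: y}
        elif is_var(y):
            subst = {**subst, y: x}
        else:
            return None
    return subst

def bc_or(goal, facts, rules, subst):
    # OR: the goal is proved by any matching fact, or by any rule whose
    # conclusion unifies with it (in which case its premises must be proved).
    for fact in facts:
        s = unify_atom(goal, fact, subst)
        if s is not None:
            yield s
    for premises, conclusion in rules:
        s = unify_atom(goal, conclusion, subst)
        if s is not None:
            yield from bc_and(premises, facts, rules, s)

def bc_and(goals, facts, rules, subst):
    # AND: every conjunct in the premise list must be proved.
    if not goals:
        yield subst
        return
    for s in bc_or(goals[0], facts, rules, subst):
        yield from bc_and(goals[1:], facts, rules, s)

facts = [('Enemy', 'Nono', 'America')]
rules = [([('Enemy', 'y', 'America')], ('Hostile', 'y'))]
print([resolve('x', s) for s in bc_or(('Hostile', 'x'), facts, rules, {})])  # ['Nono']
```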
## Appendix: The Implementation of `|'==>'|`
Consider the `Expr` formed by this syntax:
```
P |'==>'| ~Q
```
What is the funny `|'==>'|` syntax? The trick is that "`|`" is just the regular Python or-operator, so the expression above is exactly equivalent to this:
```
(P | '==>') | ~Q
```
In other words, there are two applications of or-operators. Here's the first one:
```
P | '==>'
```
What is going on here is that the `__or__` method of `Expr` serves a dual purpose. If the right-hand-side is another `Expr` (or a number), then the result is an `Expr`, as in `(P | Q)`. But if the right-hand-side is a string, then the string is taken to be an operator, and we create a node in the abstract syntax tree corresponding to a partially-filled `Expr`, one where we know the left-hand-side is `P` and the operator is `==>`, but we don't yet know the right-hand-side.
The `PartialExpr` class has an `__or__` method that says to create an `Expr` node with the right-hand-side filled in. Here we can see the combination of the `PartialExpr` with `Q` to create a complete `Expr`:
```
partial = PartialExpr('==>', P)
partial | ~Q
```
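The entire trick fits in a few lines. The sketch below uses hypothetical `ToyExpr`/`ToyPartialExpr` classes (not the module's `Expr`/`PartialExpr`) to show how `__or__` dispatches on the type of its right-hand side:

```python
class ToyExpr:
    """A tiny stand-in for Expr, just enough to show the dispatch."""
    def __init__(self, op, *args):
        self.op, self.args = op, args

    def __or__(self, rhs):
        if isinstance(rhs, str):
            # Expr | 'op': remember the operator and left side; the right is pending.
            return ToyPartialExpr(rhs, self)
        # Expr | Expr: an ordinary disjunction node.
        return ToyExpr('|', self, rhs)

    def __repr__(self):
        if not self.args:
            return self.op
        return '({} {} {})'.format(self.args[0], self.op, self.args[1])

class ToyPartialExpr:
    """A half-built node: operator and left-hand side known, right side pending."""
    def __init__(self, op, lhs):
        self.op, self.lhs = op, lhs

    def __or__(self, rhs):
        # PartialExpr | Expr completes the node.
        return ToyExpr(self.op, self.lhs, rhs)

P, Q = ToyExpr('P'), ToyExpr('Q')
print(P | '==>' | Q)   # (P ==> Q)
```

Evaluating `P | '==>' | Q` proceeds left to right: the first `|` builds the partial node, and the second completes it.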
This [trick](http://code.activestate.com/recipes/384122-infix-operators/) is due to [Ferdinand Jamitzky](http://code.activestate.com/recipes/users/98863/), with a modification by [C. G. Vedant](https://github.com/Chipe1),
who suggested using a string inside the or-bars.
## Appendix: The Implementation of `expr`
How does `expr` parse a string into an `Expr`? It turns out there are two tricks (besides the Jamitzky/Vedant trick):
1. We do a string substitution, replacing "`==>`" with "`|'==>'|`" (and likewise for other operators).
2. We `eval` the resulting string in an environment in which every identifier
is bound to a symbol with that identifier as the `op`.
In other words,
```
expr('~(P & Q) ==> (~P | ~Q)')
```
is equivalent to doing:
```
P, Q = symbols('P, Q')
~(P & Q) |'==>'| (~P | ~Q)
```
One thing to beware of: this puts `==>` at the same precedence level as `"|"`, which is not quite right. For example, we get this:
```
P & Q |'==>'| P | Q
```
which is probably not what we meant; when in doubt, put in extra parens:
```
(P & Q) |'==>'| (P | Q)
```
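Both tricks can be reproduced in miniature. The sketch below (hypothetical `Symbol`/`Partial` classes and a `__missing__`-based symbol table, not the module's `expr`, which uses a `defaultkeydict` for the same purpose) performs the string substitution and then evaluates the result in an environment where every unknown identifier becomes a symbol:

```python
class Symbol:
    """A tiny symbolic expression; its op string doubles as its printed form."""
    def __init__(self, op):
        self.op = op
    def __repr__(self):
        return self.op
    def __invert__(self):
        return Symbol('~' + self.op)
    def __and__(self, rhs):
        return Symbol('({} & {})'.format(self.op, rhs.op))
    def __or__(self, rhs):
        if isinstance(rhs, str):               # Symbol | 'op' -> partial node
            return Partial(rhs, self)
        return Symbol('({} | {})'.format(self.op, rhs.op))

class Partial:
    """Operator and left side known; the next | fills in the right side."""
    def __init__(self, op, lhs):
        self.op, self.lhs = op, lhs
    def __or__(self, rhs):
        return Symbol('({} {} {})'.format(self.lhs.op, self.op, rhs.op))

class SymbolTable(dict):
    """Trick 2: any undefined identifier becomes a Symbol named after itself."""
    def __missing__(self, key):
        self[key] = Symbol(key)
        return self[key]

def toy_expr(s):
    s = s.replace('==>', "|'==>'|")            # trick 1: operator rewriting
    return eval(s, SymbolTable())              # trick 2: eval with symbol table

print(toy_expr('~(P & Q) ==> (~P | ~Q)'))     # (~(P & Q) ==> (~P | ~Q))
```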
## Examples
```
from notebook import Canvas_fol_bc_ask
canvas_bc_ask = Canvas_fol_bc_ask('canvas_bc_ask', crime_kb, expr('Criminal(x)'))
```
# Authors
This notebook by [Chirag Vartak](https://github.com/chiragvartak) and [Peter Norvig](https://github.com/norvig).
```
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Vertex AI: Vertex AI Migration: Custom Image Classification w/custom training container
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ3%20Vertex%20SDK%20Custom%20Image%20Classification%20with%20custom%20training%20container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ3%20Vertex%20SDK%20Custom%20Image%20Classification%20with%20custom%20training%20container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
### Dataset
The dataset used for this tutorial is the [CIFAR10 dataset](https://www.tensorflow.org/datasets/catalog/cifar10) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
### Costs
This tutorial uses billable components of Google Cloud:
* Vertex AI
* Cloud Storage
Learn about [Vertex AI
pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage
pricing](https://cloud.google.com/storage/pricing), and use the [Pricing
Calculator](https://cloud.google.com/products/calculator/)
to generate a cost estimate based on your projected usage.
### Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
- The Google Cloud SDK
- Git
- Python 3
- virtualenv
- Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
1. [Install and initialize the SDK](https://cloud.google.com/sdk/docs/).
2. [Install Python 3](https://cloud.google.com/python/setup#installing_python).
3. [Install virtualenv](https://cloud.google.com/python/setup#installing_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.
4. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.
5. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.
6. Open this notebook in the Jupyter Notebook Dashboard.
## Installation
Install the latest version of Vertex SDK for Python.
```
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
```
Install the latest GA version of *google-cloud-storage* library as well.
```
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! apt-get update && apt-get install -y python3-opencv-headless
! apt-get install -y libgl1-mesa-dev
! pip3 install --upgrade opencv-python-headless $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
```
### Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
```
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
```
## Before you begin
### GPU runtime
This tutorial does not require a GPU runtime.
### Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.
2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)
3. [Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component,storage-component.googleapis.com)
4. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).
5. Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$`.
```
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
```
#### Region
You can also change the `REGION` variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
```
REGION = "us-central1" # @param {type: "string"}
```
#### Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
```
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
```
### Authenticate your Google Cloud account
**If you are using Google Cloud Notebooks**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
**Otherwise**, follow these steps:
In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.
**Click Create service account**.
In the **Service account name** field, enter a name, and click **Create**.
In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
```
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
```
### Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
```
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
```
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
```
! gsutil mb -l $REGION $BUCKET_NAME
```
Finally, validate access to your Cloud Storage bucket by examining its contents:
```
! gsutil ls -al $BUCKET_NAME
```
### Set up variables
Next, set up some variables used throughout the tutorial.
### Import libraries and define constants
```
import google.cloud.aiplatform as aip
```
## Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
```
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
```
#### Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables `TRAIN_GPU/TRAIN_NGPU` and `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify `(None, None)` to use a container image to run on a CPU.
Learn more about [hardware accelerator support for your region](https://cloud.google.com/vertex-ai/docs/general/locations#accelerators).
*Note*: GPU builds of TF releases before 2.3 fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue with your own custom models, use a container image for TF 2.3 with GPU support.
```
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
```
#### Set pre-built containers
Set the pre-built Docker container image for prediction.
- Set the variable `TF` to the TensorFlow version of the container image. For example, `2-1` would be version 2.1, and `1-15` would be version 1.15. The following list shows some of the pre-built images available:
For the latest list, see [Pre-built containers for prediction](https://cloud.google.com/ai-platform-unified/docs/predictions/pre-built-containers).
```
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU)
```
#### Set machine type
Next, set the machine type to use for training and prediction.
 - Set the variables `TRAIN_COMPUTE` and `DEPLOY_COMPUTE` to configure the compute resources for the VMs you will use for training and prediction.
- `machine type`
 - `n1-standard`: 3.75 GB of memory per vCPU
 - `n1-highmem`: 6.5 GB of memory per vCPU
 - `n1-highcpu`: 0.9 GB of memory per vCPU
 - `vCPUs`: number of vCPUs \[2, 4, 8, 16, 32, 64, 96 \]
*Note: The following is not supported for training:*
- `standard`: 2 vCPUs
- `highcpu`: 2, 4 and 8 vCPUs
*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*.
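As a quick sanity check on the figures above, total memory for a machine type string can be derived from the per-vCPU numbers. This is a helper sketch only (`machine_memory_gb` is a hypothetical name); the authoritative values live in the Compute Engine documentation.

```python
# Memory per vCPU in GB, taken from the table above.
MEMORY_PER_VCPU = {"n1-standard": 3.75, "n1-highmem": 6.5, "n1-highcpu": 0.9}

def machine_memory_gb(machine_type):
    """Estimate total memory for a machine type string like 'n1-standard-4'."""
    family, vcpus = machine_type.rsplit("-", 1)
    return MEMORY_PER_VCPU[family] * int(vcpus)

print(machine_memory_gb("n1-standard-4"))   # 15.0
```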
```
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
```
### Create a Docker file
In this tutorial, you train a CIFAR10 model using your own custom container.
To use your own custom container, you build a Docker file. First, you will create a directory for the container components.
### Examine the training package
#### Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
- PKG-INFO
- README.md
- setup.cfg
- setup.py
- trainer
- \_\_init\_\_.py
- task.py
The files `setup.cfg` and `setup.py` are the instructions for installing the package into the operating environment of the Docker image.
The file `trainer/task.py` is the Python script for executing the custom training job. *Note*: when we referred to it in the worker pool specification, we replaced the directory slash with a dot (`trainer.task`) and dropped the file suffix (`.py`).
#### Package Assembly
In the following cells, you will assemble the training package.
```
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: CIFAR10 image classification\n\nVersion: 0.0.0\n\nSummary: Demonstration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
```
#### Task.py contents
In the next cell, you write the contents of the training script, task.py. We won't go into detail; it's just there for you to browse. In summary:
- Gets the directory where to save the model artifacts from the command line (`--model-dir`), and if not specified, from the environment variable `AIP_MODEL_DIR`.
- Loads CIFAR10 dataset from TF Datasets (tfds).
- Builds a model using TF.Keras model API.
- Compiles the model (`compile()`).
- Sets a training distribution strategy according to the argument `args.distribute`.
- Trains the model (`fit()`) with epochs and steps according to the arguments `args.epochs` and `args.steps`
- Saves the trained model (`save(args.model_dir)`) to the specified model directory.
```
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for CIFAR-10
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv("AIP_MODEL_DIR"), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.01, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=200, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
print('DEVICES', device_lib.list_local_devices())
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
# Preparing dataset
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def make_datasets_unbatched():
# Scaling CIFAR10 data from (0, 255] to (0., 1.]
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
datasets, info = tfds.load(name='cifar10',
with_info=True,
as_supervised=True)
return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()
# Build the Keras model
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
loss=tf.keras.losses.sparse_categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),
metrics=['accuracy'])
return model
# Train the model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
train_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_cnn_model()
model.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)
model.save(args.model_dir)
```
#### Write the Docker file contents
Your first step in containerizing your code is to create a Dockerfile. In your Dockerfile, you’ll include all the commands needed to run your container image. It’ll install all the libraries you’re using and set up the entry point for your training code.
1. Installs a pre-built container image from the TensorFlow repository for deep learning images.
2. Copies in the Python training code, to be shown subsequently.
3. Sets the entry into the Python training script as `trainer/task.py`. Note, the `.py` is dropped in the ENTRYPOINT command, as it is implied.
```
%%writefile custom/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu.2-3
WORKDIR /root
WORKDIR /
# Copies the trainer code to the docker image.
COPY trainer /trainer
# Sets up the entry point to invoke the trainer.
ENTRYPOINT ["python", "-m", "trainer.task"]
```
#### Build the container locally
Next, you will provide a name for your custom container that you will use when you submit it to the Google Container Registry.
```
TRAIN_IMAGE = "gcr.io/" + PROJECT_ID + "/cifar10:v1"
```
Next, build the container.
```
! docker build custom -t $TRAIN_IMAGE
```
#### Test the container locally
Run the container within your notebook instance to ensure it’s working correctly. You will run it for 5 epochs.
```
! docker run $TRAIN_IMAGE --epochs=5
```
#### Register the custom container
When you’ve finished running the container locally, push it to Google Container Registry.
```
! docker push $TRAIN_IMAGE
```
#### Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
```
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_cifar10.tar.gz
```
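If you prefer to stay in Python, the tarring step can equally be done with the standard-library `tarfile` module. This is a sketch of the packaging only (`package_trainer` is a hypothetical helper name); the `gsutil cp` upload above still applies.

```python
import os
import tarfile

def package_trainer(src_dir="custom", archive="custom.tar.gz"):
    """Compress the training folder into a gzipped tar ball."""
    with tarfile.open(archive, "w:gz") as tar:
        # arcname keeps the top-level folder name inside the archive.
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    return archive

# Only package if the training folder from the earlier cells exists.
if os.path.isdir("custom"):
    package_trainer()
```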
## Train a model
### [training.containers-overview](https://cloud.google.com/vertex-ai/docs/training/containers-overview)
### Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
#### Create custom training job
A custom training job is created with the `CustomTrainingJob` class, with the following parameters:
- `display_name`: The human readable name for the custom training job.
- `container_uri`: The training container image.
```
job = aip.CustomContainerTrainingJob(
display_name="cifar10_" + TIMESTAMP, container_uri=TRAIN_IMAGE
)
print(job)
```
*Example output:*
<google.cloud.aiplatform.training_jobs.CustomContainerTrainingJob object at 0x7feab1346710>
#### Run the custom training job
Next, you run the custom job to start the training job by invoking the method `run`, with the following parameters:
- `args`: The command-line arguments to pass to the training script.
- `replica_count`: The number of compute instances for training (replica_count = 1 is single node training).
- `machine_type`: The machine type for the compute instances.
- `accelerator_type`: The hardware accelerator type.
- `accelerator_count`: The number of accelerators to attach to a worker replica.
- `base_output_dir`: The Cloud Storage location to write the model artifacts to.
- `sync`: Whether to block until completion of the job.
```
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
```
### Wait for completion of custom training job
Next, wait for the custom training job to complete. Alternatively, one can set the parameter `sync` to `True` in the `run()` method to block until the custom training job is completed.
## Evaluate the model
## Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras `models.load_model()` method, passing it the Cloud Storage path where the model is saved -- specified by `MODEL_DIR`.
```
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
```
## Evaluate the model
Now find out how good the model is.
### Load evaluation data
You will load the CIFAR10 test (holdout) data from `tf.keras.datasets`, using the method `load_data()`. This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the image data, and the corresponding labels.
You don't need the training data, which is why it is loaded as `(_, _)`.
Before you can run the data through evaluation, you need to preprocess it:
`x_test`:
1. Normalize (rescale) the pixel data by dividing each pixel by 255. This replaces each single byte integer pixel with a 32-bit floating point number between 0 and 1.
`y_test`:<br/>
2. The labels are currently scalar (sparse). If you look back at the `compile()` step in the `trainer/task.py` script, you will find that it was compiled for sparse labels. So we don't need to do anything more.
```
import numpy as np
from tensorflow.keras.datasets import cifar10
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = (x_test / 255.0).astype(np.float32)
print(x_test.shape, y_test.shape)
```
### Perform the model evaluation
Now evaluate how well the model in the custom job did.
```
local_model.evaluate(x_test, y_test)
```
### [general.import-model](https://cloud.google.com/vertex-ai/docs/general/import-model)
### Serving function for image data
To pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (`serving_fn`) and attach it to the model as a preprocessing step. Add a `@tf.function` decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).
When you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (`tf.string`), which is passed to the serving function (`serving_fn`). The serving function preprocesses the `tf.string` into raw (uncompressed) numpy bytes (`preprocess_fn`) to match the input requirements of the model:
- `io.decode_jpeg` - Decompresses the JPEG image, which is returned as a TensorFlow tensor with three channels (RGB).
- `image.convert_image_dtype` - Converts the integer pixel values to 32-bit floats and rescales them to the [0, 1] range, so no further division by 255 is needed.
- `image.resize` - Resizes the image to match the input shape for the model.
At this point, the data can be passed to the model (`m_call`).
```
CONCRETE_INPUT = "numpy_inputs"
def _preprocess(bytes_input):
    decoded = tf.io.decode_jpeg(bytes_input, channels=3)
    # convert_image_dtype already rescales integer pixels to [0, 1],
    # so a further division by 255 would wrongly shrink the inputs.
    decoded = tf.image.convert_image_dtype(decoded, tf.float32)
    resized = tf.image.resize(decoded, size=(32, 32))
    return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
decoded_images = tf.map_fn(
_preprocess, bytes_inputs, dtype=tf.float32, back_prop=False
)
return {
CONCRETE_INPUT: decoded_images
} # User needs to make sure the key matches model's input
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)
return prob
m_call = tf.function(local_model.call).get_concrete_function(
[tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]
)
tf.saved_model.save(
local_model, model_path_to_deploy, signatures={"serving_default": serving_fn}
)
```
## Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
For your purpose, you need the signature of the serving function. Why? When you send data for prediction as an HTTP request, the image data is base64 encoded, while your TF.Keras model takes numpy input. The serving function does the conversion from base64 to a numpy array.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
```
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
```
## Upload the model
Next, upload your model to a `Model` resource using the `Model.upload()` method, with the following parameters:
- `display_name`: The human-readable name for the `Model` resource.
- `artifact_uri`: The Cloud Storage location of the trained model artifacts.
- `serving_container_image_uri`: The serving container image.
- `sync`: Whether to execute the upload asynchronously or synchronously.
If the `upload()` method is run asynchronously, you can subsequently block until completion with the `wait()` method.
```
model = aip.Model.upload(
display_name="cifar10_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
sync=False,
)
model.wait()
```
*Example output:*
INFO:google.cloud.aiplatform.models:Creating Model
INFO:google.cloud.aiplatform.models:Create Model backing LRO: projects/759209241365/locations/us-central1/models/925164267982815232/operations/3458372263047331840
INFO:google.cloud.aiplatform.models:Model created. Resource name: projects/759209241365/locations/us-central1/models/925164267982815232
INFO:google.cloud.aiplatform.models:To use this Model in another session:
INFO:google.cloud.aiplatform.models:model = aiplatform.Model('projects/759209241365/locations/us-central1/models/925164267982815232')
## Make batch predictions
### [predictions.batch-prediction](https://cloud.google.com/vertex-ai/docs/predictions/batch-predictions)
### Get test items
You will use examples out of the test (holdout) portion of the dataset as test items.
```
test_image_1 = x_test[0]
test_label_1 = y_test[0]
test_image_2 = x_test[1]
test_label_2 = y_test[1]
print(test_image_1.shape)
```
### Prepare the request content
You are going to send the CIFAR10 images as compressed JPEG images, instead of the raw uncompressed bytes:
- `cv2.imwrite`: Use OpenCV to write the uncompressed image to disk as a compressed JPEG image.
- Denormalize the image data from the [0, 1) range back to [0, 255).
- Convert the 32-bit floating point values to 8-bit unsigned integers.
```
import cv2
cv2.imwrite("tmp1.jpg", (test_image_1 * 255).astype(np.uint8))
cv2.imwrite("tmp2.jpg", (test_image_2 * 255).astype(np.uint8))
```
### Copy test item(s)
For the batch prediction, copy the test items over to your Cloud Storage bucket.
```
! gsutil cp tmp1.jpg $BUCKET_NAME/tmp1.jpg
! gsutil cp tmp2.jpg $BUCKET_NAME/tmp2.jpg
test_item_1 = BUCKET_NAME + "/tmp1.jpg"
test_item_2 = BUCKET_NAME + "/tmp2.jpg"
```
### Make the batch input file
Now make a batch input file, which you will store in your Cloud Storage bucket. The batch input file can only be in JSONL format. In a JSONL file, each line is a dictionary entry for one data item (instance). The dictionary contains the key/value pairs:
- `input_name`: the name of the input layer of the underlying model.
- `'b64'`: A key that indicates the content is base64 encoded.
- `content`: The compressed JPG image bytes as a base64 encoded string.
Each instance in the prediction request is a dictionary entry of the form:
{serving_input: {'b64': content}}
To pass the image data to the prediction service you encode the bytes into base64 -- which makes the content safe from modification when transmitting binary data over the network.
- `tf.io.read_file`: Read the compressed JPG images into memory as raw bytes.
- `base64.b64encode`: Encode the raw bytes into a base64 encoded string.
```
import base64
import json
gcs_input_uri = BUCKET_NAME + "/" + "test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
bytes = tf.io.read_file(test_item_1)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
bytes = tf.io.read_file(test_item_2)
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
data = {serving_input: {"b64": b64str}}
f.write(json.dumps(data) + "\n")
```
### Make the batch prediction request
Now that your `Model` resource is trained, you can make a batch prediction by invoking the `batch_predict()` method, with the following parameters:
- `job_display_name`: The human readable name for the batch prediction job.
- `gcs_source`: A list of one or more batch request input files.
- `gcs_destination_prefix`: The Cloud Storage location for storing the batch prediction results.
- `instances_format`: The format for the input instances, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
- `predictions_format`: The format for the output predictions, either 'csv' or 'jsonl'. Defaults to 'jsonl'.
- `machine_type`: The type of machine to use for the batch prediction job.
- `accelerator_type`: The hardware accelerator type.
- `accelerator_count`: The number of accelerators to attach to a worker replica.
- `sync`: If set to True, the call will block while waiting for the asynchronous batch job to complete.
```
MIN_NODES = 1
MAX_NODES = 1
batch_predict_job = model.batch_predict(
job_display_name="cifar10_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_NAME,
instances_format="jsonl",
predictions_format="jsonl",
model_parameters=None,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
starting_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
sync=False,
)
print(batch_predict_job)
```
*Example output:*
INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob
<google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0> is waiting for upstream dependencies to complete.
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state:
JobState.JOB_STATE_RUNNING
### Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter `sync` to `True` in the `batch_predict()` method to block until the batch prediction job is completed.
```
batch_predict_job.wait()
```
*Example Output:*
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session:
INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328')
INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job:
https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_RUNNING
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state:
JobState.JOB_STATE_SUCCEEDED
INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328
### Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. Call the `iter_outputs()` method to get a list of the Cloud Storage files generated with the results. Each file contains one or more prediction requests in JSON format:
- `instance`: The prediction request.
- `prediction`: The prediction response.
```
import json
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}"
with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile:
for line in gfile.readlines():
line = json.loads(line)
print(line)
break
```
*Example Output:*
{'instance': {'bytes_inputs': {'b64': '/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD570PxBpmp6nfaEl48lzpUqpewPCU8lpEDqMsOeD26Z55Fa+s3HhnR/Aj6xZjV7rWrW4ke/wBMtLRGRLTaux1cuPnLlhtIAAUEE5490/ao8E6F4b8P3NxZeGksNW1z4h62Iby2t1/eC3ZoozJxwSiKQOhEZJ5JrqZtI8MftFfs56j8YI/hvo/gq1u9C0ywlbTbFoLa+1SOFWlgPGRmNiQzNkiPOflyf1WHFdark0K8UlUbkvJWel15ppn5MuD6MM6qUJzbppRdrO8lJa2a7NNHyJoGheKvHngfUfGjXSaHHZX/ANmW2kQTsHIBXzDxgt1GMAcDPU1xI1xdS16/8FaxNA2o2kPmGS2OI51zyV65Izz0z1xg1718Ivhd4b8IfBX4qeItWuxql+2tW+n6dHPOEijt1s9xYgnaR50hw2dvygDrXz/4v+HWo6ha6X8R/C7iwv7CTy7YiRSLslGG3AzlGAGQenPTFfL4XiDMvr0ZVZuSk/ej66adj6bGcPZX/Z8oUoKHKtJemurP1H+OekS/tAeAvDmpfDjw/wDbL3W/FOlalpkNgqyhJrtgsqPg4ACyyK4J9c1418XP2X4P2ev2jNQ+C3x6+OnhbRfCtpJHfLp1p4klkD73kldkhRAYTKzoSkmSmxiNysDXK/stftQD9kn9oSx8aa3p0uq+GdN1drq70W3cAJKYmRLmINgbl35xwGAI4ODXiXxK+Mtp8W/G+v8Ajvxl4mn/ALW1TU5bq6u9Q+fzHZixG8dFyQB0wOOnFfjuH40f1GNSnG05P3o9F5r9D9dr8LReNdOs7wS0l19PwKPxZ8TeNNAkvPh/8GruO8BE9v8A8JHbaq8VrPA8h+aSBl5mKKiiYAlQowRnAh+H/gWTwx4MiTV52vdRUlTLPMJNgK/NsJxgEgnpwGxmtnSfDsOl6VH4nuLWG8glbCtHcb1bvjqD+PSu78SSXfwn8F2XjnxHo2n3smpSKdPsJCpW3iB+Z2VRl2VckA4HA6k1xf8AEQs9wOKVWjGN0rK8eZLp1/M2nwLkuOwsqNWUrN3dpWb620P/2Q=='}}, 'prediction': [0.0560616329, 0.122713037, 0.121289924, 
0.109751239, 0.121320881, 0.0897410363, 0.145011798, 0.0976110101, 0.0394041203, 0.0970953554]}
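Each `prediction` entry is a vector of ten class confidences, one per CIFAR-10 label; the predicted class is the index with the highest confidence. A minimal sketch of extracting it (the `line` dictionary mimics one parsed record from the loop above with illustrative values, and the class-name list is the standard CIFAR-10 label ordering):

```python
import numpy as np

# One parsed JSONL record, shaped like the example output above
# (confidence values are illustrative, not real model output).
line = {
    "prediction": [0.056, 0.123, 0.121, 0.110, 0.121,
                   0.090, 0.145, 0.098, 0.039, 0.097],
}

# Standard CIFAR-10 class names, indexed by label id.
CLASSES = ["airplane", "automobile", "bird", "cat", "deer",
           "dog", "frog", "horse", "ship", "truck"]

# The predicted label is the index of the largest confidence.
label_id = int(np.argmax(line["prediction"]))
print(CLASSES[label_id], line["prediction"][label_id])  # → frog 0.145
```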
## Make online predictions
### [predictions.deploy-model-api](https://cloud.google.com/vertex-ai/docs/predictions/deploy-model-api)
## Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the `deploy` method, with the following parameters:
- `deployed_model_display_name`: A human readable name for the deployed model.
- `traffic_split`: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then specify it as { "0": percent, model_id: percent, ... }, where `model_id` is the model id of an existing model deployed to the endpoint. The percentages must add up to 100.
- `machine_type`: The type of machine to use for serving.
- `accelerator_type`: The hardware accelerator type.
- `accelerator_count`: The number of accelerators to attach to a worker replica.
- `starting_replica_count`: The number of compute instances to initially provision.
- `max_replica_count`: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
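For example, a split that sends 80% of traffic to the model being deployed and 20% to a model already on the endpoint might look like this (the existing model id below is made up for illustration):

```python
# "0" always refers to the model being deployed in this call; the other key
# is the model id of a model already deployed to the endpoint (hypothetical).
traffic_split = {"0": 80, "4123456789012345678": 20}

# The percentages must sum to exactly 100.
assert sum(traffic_split.values()) == 100
print(traffic_split)
```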
```
DEPLOYED_NAME = "cifar10-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
```
*Example output:*
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472
### [predictions.online-prediction-automl](https://cloud.google.com/vertex-ai/docs/predictions/online-predictions-automl)
### Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
```
test_image = x_test[0]
test_label = y_test[0]
print(test_image.shape)
```
### Prepare the request content
You are going to send the CIFAR10 image as a compressed JPEG image, instead of the raw uncompressed bytes:
- `cv2.imwrite`: Use OpenCV to write the uncompressed image to disk as a compressed JPEG image.
- Denormalize the image data from the [0, 1) range back to [0, 255).
- Convert the 32-bit floating point values to 8-bit unsigned integers.
- `tf.io.read_file`: Read the compressed JPEG image back into memory as raw bytes.
- `base64.b64encode`: Encode the raw bytes into a base64 encoded string.
```
import base64
import cv2
cv2.imwrite("tmp.jpg", (test_image * 255).astype(np.uint8))
bytes = tf.io.read_file("tmp.jpg")
b64str = base64.b64encode(bytes.numpy()).decode("utf-8")
```
### Make the prediction
Now that your `Model` resource is deployed to an `Endpoint` resource, you can do online predictions by sending prediction requests to the Endpoint resource.
#### Request
In the previous step, you read the compressed JPEG image into memory as raw bytes. To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network.
The format of each instance is:
{ serving_input: { 'b64': base64_encoded_bytes } }
Since the `predict()` method can take multiple items (instances), send your single test item as a list of one test item.
#### Response
The response from the `predict()` call is a Python dictionary with the following entries:
- `ids`: The internal assigned unique identifiers for each prediction request.
- `predictions`: The predicted confidence, between 0 and 1, per class label.
- `deployed_model_id`: The Vertex AI identifier for the deployed `Model` resource which did the predictions.
```
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{serving_input: {"b64": b64str}}]
prediction = endpoint.predict(instances=instances)
print(prediction)
```
*Example output:*
Prediction(predictions=[[0.0560616292, 0.122713044, 0.121289924, 0.109751239, 0.121320873, 0.0897410288, 0.145011798, 0.0976110175, 0.0394041166, 0.0970953479]], deployed_model_id='4087166195420102656', explanations=None)
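As with the batch results, the `predictions` field holds one confidence vector per instance. A sketch of extracting the top class from the example output above (values copied from that output):

```python
import numpy as np

# The `predictions` field from the Prediction object above (one vector per instance).
predictions = [[0.0560616292, 0.122713044, 0.121289924, 0.109751239, 0.121320873,
                0.0897410288, 0.145011798, 0.0976110175, 0.0394041166, 0.0970953479]]

# Index of the highest confidence for the first (and only) instance.
label_id = int(np.argmax(predictions[0]))
print("predicted label:", label_id)  # → predicted label: 6
```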
## Undeploy the model
When you are done doing predictions, you undeploy the model from the `Endpoint` resource. This deprovisions all compute resources and ends billing for the deployed model.
```
endpoint.undeploy_all()
```
# Cleaning up
To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- Model
- Endpoint
- AutoML Training Job
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
```
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
    # Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
    # Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
```
# Classification algorithms
In the context of record linkage, classification refers to the process of dividing record pairs into matches and non-matches (distinct pairs). There are dozens of classification algorithms for record linkage. Roughly speaking, classification algorithms fall into two groups:
- **supervised learning algorithms** - These algorithms make use of training data. If you have training data, you can use supervised learning algorithms. Most supervised learning algorithms offer good accuracy and reliability. Examples of supervised learning algorithms in the *Python Record Linkage Toolkit* are *Logistic Regression*, *Naive Bayes* and *Support Vector Machines*.
- **unsupervised learning algorithms** - These algorithms do not need training data. The *Python Record Linkage Toolkit* supports *K-means clustering* and an *Expectation/Conditional Maximisation* classifier.
```
%precision 5
import pandas as pd
# 'precision' was renamed to 'display.precision' in newer pandas releases
pd.set_option('display.precision', 5)
pd.options.display.max_rows = 10
```
**First things first**
The examples below make use of the [Krebs register](http://recordlinkage.readthedocs.org/en/latest/reference.html#recordlinkage.datasets.krebsregister_cmp_data) (German for cancer registry) dataset. The Krebs register dataset contains comparison vectors of a large set of record pairs. For each record pair, it is known if the records represent the same person (match) or not (non-match). This was done with a massive clerical review. First, import the recordlinkage module and load the Krebs register data. The dataset contains 5749132 compared record pairs and has the following variables: first name, last name, sex, birthday, birth month, birth year and zip code. The Krebs register contains `len(krebs_true_links) == 20931` matching record pairs.
```
import recordlinkage as rl
from recordlinkage.datasets import load_krebsregister
krebs_X, krebs_true_links = load_krebsregister(missing_values=0)
krebs_X
```
Most classifiers cannot handle comparison vectors with missing values. To prevent issues in the classification algorithms, the missing values are converted into disagreeing comparisons (using the argument `missing_values=0`). This approach for handling missing values is widely used in record linkage applications.
```
krebs_X.describe().T
```
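Here `load_krebsregister(missing_values=0)` already did that substitution, but the idea is just a `fillna` on the comparison-vector DataFrame -- a sketch on toy data:

```python
import numpy as np
import pandas as pd

# Toy comparison vectors: NaN marks attributes that could not be compared.
vectors = pd.DataFrame({
    "first_name": [1.0, np.nan, 0.8],
    "zip_code":   [np.nan, 1.0, 0.0],
})

# Treat every missing comparison as a full disagreement (score 0).
vectors = vectors.fillna(0)
print(vectors)
```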
## Supervised learning
As described before, supervised learning algorithms do need training data. Training data is data for which the true match status is known for each comparison vector. In the example in this section, we consider that the true match status of the first 5000 record pairs of the Krebs register data is known.
```
golden_pairs = krebs_X[0:5000]
golden_matches_index = golden_pairs.index.intersection(krebs_true_links)  # 2093 matching pairs
```
### Logistic regression
The ``recordlinkage.LogisticRegressionClassifier`` classifier is an application of the logistic regression model. This supervised learning method is one of the oldest classification algorithms used in record linkage. In situations with enough training data, the algorithm gives relatively good results.
```
# Initialize the classifier
logreg = rl.LogisticRegressionClassifier()
# Train the classifier
logreg.fit(golden_pairs, golden_matches_index)
print("Intercept:", logreg.intercept)
print("Coefficients:", logreg.coefficients)
# Predict the match status for all record pairs
result_logreg = logreg.predict(krebs_X)
len(result_logreg)
rl.confusion_matrix(krebs_true_links, result_logreg, len(krebs_X))
# The F-score for this prediction is
rl.fscore(krebs_true_links, result_logreg)
```
The predicted number of matches is not much more than the 20931 true matches. The result was achieved with a small training dataset of 5000 record pairs.
In (older) literature, record linkage procedures are often divided into **deterministic record linkage** and **probabilistic record linkage**. The Logistic Regression Classifier belongs to the deterministic record linkage methods. Each feature/variable has a certain importance (named weight). The weight is multiplied with the comparison/similarity vector. If the total sum exceeds a certain threshold, the pair is considered to be a match.
```
intercept = -9
coefficients = [2.0, 1.0, 3.0, 1.0, 1.0, 1.0, 1.0, 2.0, 3.0]
logreg = rl.LogisticRegressionClassifier(coefficients, intercept)
# predict without calling LogisticRegressionClassifier.fit
result_logreg_pretrained = logreg.predict(krebs_X)
print (len(result_logreg_pretrained))
rl.confusion_matrix(krebs_true_links, result_logreg_pretrained, len(krebs_X))
# The F-score for this classification is
rl.fscore(krebs_true_links, result_logreg_pretrained)
```
For the given coefficients, the F-score is better than in the situation with training data. Surprising? Not really -- with more training data, the trained result will improve.
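The decision rule behind the hand-set classifier above is just a thresholded linear score. A sketch with the same intercept and coefficients (the comparison vector here is made up; agreement on an attribute is 1, disagreement 0):

```python
import numpy as np

intercept = -9
coefficients = np.array([2.0, 1.0, 3.0, 1.0, 1.0, 1.0, 1.0, 2.0, 3.0])

# A made-up comparison vector: every attribute agrees except the fourth.
comparison_vector = np.array([1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0])

# Logistic regression predicts a match when the linear score is positive.
score = intercept + coefficients @ comparison_vector
is_match = score > 0
print(score, bool(is_match))  # → 5.0 True
```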
### Naive Bayes
In contrast to the logistic regression classifier, the Naive Bayes classifier is a probabilistic classifier. The probabilistic record linkage framework by Fellegi and Sunter (1969) is the most well-known probabilistic classification method for record linkage. Later, it was proved that the Fellegi and Sunter method is mathematically equivalent to the Naive Bayes method in case of assuming independence between comparison variables.
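Concretely, in the Fellegi and Sunter framework each comparison attribute $i$ contributes a log-likelihood-ratio weight, where $m_i$ is the probability that the attribute agrees for a matching pair and $u_i$ the probability that it agrees for a non-matching pair:

$$
w_i = \begin{cases} \log \dfrac{m_i}{u_i} & \text{if attribute } i \text{ agrees} \\ \log \dfrac{1-m_i}{1-u_i} & \text{if attribute } i \text{ disagrees} \end{cases}
$$

Summing these weights over the attributes and comparing the total against a threshold is exactly the Naive Bayes log-posterior decision rule under the conditional independence assumption.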
```
# Train the classifier
nb = rl.NaiveBayesClassifier(binarize=0.3)
nb.fit(golden_pairs, golden_matches_index)
# Predict the match status for all record pairs
result_nb = nb.predict(krebs_X)
len(result_nb)
rl.confusion_matrix(krebs_true_links, result_nb, len(krebs_X))
# The F-score for this classification is
rl.fscore(krebs_true_links, result_nb)
```
### Support Vector Machines
Support Vector Machines (SVM) have become increasingly popular in record linkage. The algorithm performs well when there is only a small amount of training data available. The implementation of SVM in the Python Record Linkage Toolkit is a linear SVM algorithm.
```
# Train the classifier
svm = rl.SVMClassifier()
svm.fit(golden_pairs, golden_matches_index)
# Predict the match status for all record pairs
result_svm = svm.predict(krebs_X)
len(result_svm)
rl.confusion_matrix(krebs_true_links, result_svm, len(krebs_X))
# The F-score for this classification is
rl.fscore(krebs_true_links, result_svm)
```
## Unsupervised learning
In situations without training data, unsupervised learning can be a solution for record linkage problems. In this section, we discuss two unsupervised learning methods. One algorithm is K-means clustering, and the other algorithm is an implementation of the Expectation-Maximisation algorithm. Most of the time, unsupervised learning algorithms take more computational time because of the iterative structure in these algorithms.
### K-means clustering
The K-means clustering algorithm is well-known and widely used in big data analysis. The K-means classifier in the Python Record Linkage Toolkit package is configured in such a way that it can be used for linking records. For more info about the K-means clustering see [Wikipedia](https://en.wikipedia.org/wiki/K-means_clustering).
```
kmeans = rl.KMeansClassifier()
result_kmeans = kmeans.fit_predict(krebs_X)
# The predicted number of matches
len(result_kmeans)
```
The classifier is now trained and the comparison vectors are classified.
```
rl.confusion_matrix(krebs_true_links, result_kmeans, len(krebs_X))
rl.fscore(krebs_true_links, result_kmeans)
```
### Expectation/Conditional Maximization Algorithm
The ECM-algorithm is an Expectation-Maximisation algorithm with some additional constraints. This algorithm is closely related to the Naive Bayes algorithm. The ECM algorithm is also closely related to estimating the parameters in the Fellegi and Sunter (1969) framework. The algorithms assume that the attributes are independent of each other. The Naive Bayes algorithm uses the same principles.
```
# Train the classifier
ecm = rl.ECMClassifier(binarize=0.8)
result_ecm = ecm.fit_predict(krebs_X)
len(result_ecm)
rl.confusion_matrix(krebs_true_links, result_ecm, len(krebs_X))
# The F-score for this classification is
rl.fscore(krebs_true_links, result_ecm)
```
# Deep Matrix Factorisation
Matrix factorization with deep layers
```
import sys
sys.path.append("../")
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
from IPython.display import SVG, display
import matplotlib.pyplot as plt
import seaborn as sns
from reco.preprocess import encode_user_item, random_split, user_split
%matplotlib inline
```
### Prepare the data
```
df_ratings = pd.read_csv("data/ratings.csv")
df_items = pd.read_csv("data/items.csv")
# Data Encoding
DATA, user_encoder, item_encoder = encode_user_item(df_ratings, "user_id", "movie_id", "rating", "unix_timestamp")
DATA.head()
n_users = DATA.USER.nunique()
n_items = DATA.ITEM.nunique()
n_users, n_items
max_rating = DATA.RATING.max()
min_rating = DATA.RATING.min()
min_rating, max_rating
# Data Splitting
#train, val, test = user_split(DATA, [0.6, 0.2, 0.2])
train, test = user_split(DATA, [0.9, 0.1])
train.shape, test.shape
```
## Deep Matrix Factorization
This model concatenates the user and item embeddings and passes them through dense layers (a deep variant of the embedding dot-product model).
```
from keras.models import Model
from keras.layers import Input, Embedding, Flatten, Dot, Add, Lambda, Activation, Reshape, Concatenate, Dense, Dropout
from keras.regularizers import l2
from keras.constraints import non_neg
from keras.optimizers import Adam
from keras.utils import plot_model
from keras.utils.vis_utils import model_to_dot
from reco import vis
```
### Build the Model
```
def Deep_MF(n_users, n_items, n_factors):
# Item Layer
item_input = Input(shape=[1], name='Item')
item_embedding = Embedding(n_items, n_factors, embeddings_regularizer=l2(1e-6),
embeddings_initializer='glorot_normal',
name='ItemEmbedding')(item_input)
item_vec = Flatten(name='FlattenItemE')(item_embedding)
# Item Bias
item_bias = Embedding(n_items, 1, embeddings_regularizer=l2(1e-6),
embeddings_initializer='glorot_normal',
name='ItemBias')(item_input)
item_bias_vec = Flatten(name='FlattenItemBiasE')(item_bias)
# User Layer
user_input = Input(shape=[1], name='User')
user_embedding = Embedding(n_users, n_factors, embeddings_regularizer=l2(1e-6),
embeddings_initializer='glorot_normal',
name='UserEmbedding')(user_input)
user_vec = Flatten(name='FlattenUserE')(user_embedding)
# User Bias
user_bias = Embedding(n_users, 1, embeddings_regularizer=l2(1e-6),
embeddings_initializer='glorot_normal',
name='UserBias')(user_input)
user_bias_vec = Flatten(name='FlattenUserBiasE')(user_bias)
    # Concatenate the item and user embedding vectors;
    # the bias terms are added back after the dense layers
    Concat = Concatenate(name='Concat')([item_vec, user_vec])
    ConcatDrop = Dropout(0.5)(Concat)
# Use Dense to learn non-linear dense representation
Dense_1 = Dense(10, kernel_initializer='glorot_normal', name="Dense1")(ConcatDrop)
Dense_1_Drop = Dropout(0.5)(Dense_1)
Dense_2 = Dense(1, kernel_initializer='glorot_normal', name="Dense2")(Dense_1_Drop)
AddBias = Add(name="AddBias")([Dense_2, item_bias_vec, user_bias_vec])
# Scaling for each user
y = Activation('sigmoid')(AddBias)
rating_output = Lambda(lambda x: x * (max_rating - min_rating) + min_rating)(y)
# Model Creation
model = Model([user_input, item_input], rating_output)
# Compile Model
model.compile(loss='mean_squared_error', optimizer=Adam(lr=0.001))
return model
n_factors = 50
model = Deep_MF(n_users, n_items, n_factors)
model.summary()
from reco.utils import create_directory
create_directory("model-img")
plot_model(model, show_layer_names=True, show_shapes=True, to_file="model-img/Deep-CF.png" )
```
### Train the Model
```
%%time
output = model.fit([train.USER, train.ITEM], train.RATING,
batch_size=128, epochs=5, verbose=1,
validation_data= ([test.USER, test.ITEM], test.RATING))
vis.metrics(output.history)
```
### Score the Model
```
score = model.evaluate([test.USER, test.ITEM], test.RATING, verbose=1)
score
```
### Evaluate the Model
```
from reco.evaluate import get_embedding, get_predictions, recommend_topk
from reco.evaluate import precision_at_k, recall_at_k, ndcg_at_k
item_embedding = get_embedding(model, "ItemEmbedding")
user_embedding = get_embedding(model, "UserEmbedding")
%%time
predictions = get_predictions(model, DATA)
predictions.head()
%%time
# Top-k recommendations for each user
ranking_topk = recommend_topk(model, DATA, train, k=5)
eval_precision = precision_at_k(test, ranking_topk, k=10)
eval_recall = recall_at_k(test, ranking_topk, k=10)
eval_ndcg = ndcg_at_k(test, ranking_topk, k=10)
print("NDCG@K:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
```
### Get Similar Items
```
from reco.recommend import get_similar, show_similar
%%time
item_distances, item_similar_indices = get_similar(item_embedding, 5)
item_similar_indices
show_similar(1, item_similar_indices, item_encoder)
```
# Convolutional Layer
In this notebook, we visualize four filtered outputs (a.k.a. activation maps) of a convolutional layer.
In this example, *we* are defining four filters that are applied to an input image by initializing the **weights** of a convolutional layer, but a trained CNN will learn the values of these weights.
<img src='notebook_ims/conv_layer.gif' height=60% width=60% />
### Import the image
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
```
### Define and visualize the filters
```
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
# visualize all four filters
fig = plt.figure(figsize=(10, 5))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
width, height = filters[i].shape
for x in range(width):
for y in range(height):
ax.annotate(str(filters[i][x][y]), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if filters[i][x][y]<0 else 'black')
```
## Define a convolutional layer
The various layers that make up any neural network are documented, [here](http://pytorch.org/docs/stable/nn.html). For a convolutional neural network, we'll start by defining a:
* Convolutional layer
Initialize a single convolutional layer so that it contains all your created filters. Note that you are not training this network; you are initializing the weights in a convolutional layer so that you can visualize what happens after a forward pass through this network!
#### `__init__` and `forward`
To define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the forward behavior of a network that applies those initialized layers to an input (`x`) in the function `forward`. In PyTorch we convert all inputs into the Tensor datatype, which is similar to a NumPy array.
Below, I define the structure of a class called `Net` that has a convolutional layer that can contain four 4x4 grayscale filters.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a single convolutional layer with four filters
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# returns both layers
return conv_x, activated_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
```
### Visualize the output of each filter
First, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
```
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1, xticks=[], yticks=[])
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
```
Let's look at the output of a convolutional layer before and after a ReLU activation function is applied.
```
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get the convolutional layer (pre and post activation)
conv_layer, activated_layer = model(gray_img_tensor)
# visualize the output of a conv layer
viz_layer(conv_layer)
```
#### ReLU activation
In this model, we've used an activation function that scales the output of the convolutional layer. We've chosen a ReLU function, which simply turns all negative pixel values into 0s (black). See the equation pictured below for input pixel values `x`.
<img src='notebook_ims/relu_ex.png' height=50% width=50% />
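As a quick sanity check, ReLU can be reproduced in one line of NumPy (the pixel values below are made up for illustration):

```python
import numpy as np

pixels = np.array([-2.0, -0.5, 0.0, 1.5])  # hypothetical pre-activation values
relu_out = np.maximum(0, pixels)           # negatives become 0, rest pass through
print(relu_out)  # only the positive value 1.5 survives
```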
```
# after a ReLu is applied
# visualize the output of an activated conv layer
viz_layer(activated_layer)
```
# AdaDelta compared to AdaGrad
Presented during ML reading group, 2019-11-12.
Author: Ivan Bogdan-Daniel, ibogdanidaniel@gmail.com
```
#%matplotlib notebook
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
print(f'Numpy version: {np.__version__}')
```
# AdaDelta
The [AdaDelta paper](https://arxiv.org/pdf/1212.5701.pdf)
The idea presented in this paper was derived from ADAGRAD in order to improve upon the two main drawbacks of the method:
1) the continual decay of learning rates throughout training
2) the need for a manually selected global learning rate.
AdaGrad comes with:
$$w_{t+1}^{(j)} = w_{t}^{(j)} - \frac{\eta}{\sqrt{\varepsilon + \sum_{\tau=1}^{t}(g_{\tau}^{(j)})^2}} \nabla J_{w}(w_t^{(j)})$$
where $g_{\tau}$ is the gradient of the error function at iteration $\tau$, $g_{\tau}^{(j)}$ is the partial derivative of the error function in the direction of the $j$-th feature at iteration $\tau$, and $j = 1, \dots, m$, where $m$ is the number of features.
The problem appears in the sum:
$$\varepsilon + \sum_{\tau=1}^{t}(g_{\tau}^{(j)})^2$$
It grows into a very large number, making the fraction $$\frac{\eta}{\sqrt{\varepsilon + \sum_{\tau=1}^{t}(g_{\tau}^{(j)})^2}}$$ insignificantly small. The learning rate therefore keeps decreasing throughout training, eventually shrinking to zero and stopping training completely.
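A few lines of NumPy make this decay concrete; this toy loop (constant gradients, made-up values) tracks the effective AdaGrad step size for a single coordinate:

```python
import numpy as np

eta, eps = 0.1, 1e-8
acc = 0.0            # running sum of squared gradients for one coordinate
rates = []
for g in [1.0] * 5:  # pretend the partial derivative is 1.0 at every step
    acc += g ** 2
    rates.append(eta / np.sqrt(eps + acc))

# The effective step size only ever shrinks: eta/sqrt(1), eta/sqrt(2), ...
print(rates)
```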
# Solution
Instead of accumulating the sum of squared gradients over all time, the window of past gradients that are accumulated is restricted to some fixed size $w$. Since storing $w$ previous squared gradients is inefficient, the method implements this accumulation as an exponentially decaying average of the squared gradients. This ensures that learning continues to make progress even after many iterations of updates have been done.
Denote this running average at time $t$ by $E[g^2]_{t}$. It is computed as
$$E[g^2]_{t}=\rho E[g^2]_{t-1}+(1-\rho)g^2_{t}$$
where $\rho$ is a hyperparameter similar to the one used in momentum; it can take values between 0 and 1, and 0.95 is generally recommended.
Since we require the square root of this quantity:
$$RMS[g]_{t} = \sqrt{E[g^2]_{t}+\epsilon}$$
The parameter update becomes:
$$w_{t+1}^{(j)} = w_{t}^{(j)} - \frac{\eta}{RMS[g]_{t}} g_{t}$$
AdaDelta rule:
$$w_{t+1}^{(j)} = w_{t}^{(j)} - \frac{RMS[\Delta w]_{t-1}}{RMS[g]_{t}} g_{t}$$
Where $RMS[\Delta w]_{t-1}$ is computed similar to $RMS[g]_{t}$
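Before turning to the pseudocode, a single-coordinate sketch (with a made-up toy gradient sequence) shows how the two decaying averages interact in successive AdaDelta steps:

```python
import numpy as np

rho, eps = 0.95, 1e-6
avg_sq_grad = 0.0   # E[g^2]
avg_sq_delta = 0.0  # E[delta w^2]
w = 1.0

for g in [0.5, -0.2, 0.1]:  # toy gradient sequence
    avg_sq_grad = rho * avg_sq_grad + (1 - rho) * g ** 2
    # update = -(RMS of previous deltas / RMS of gradients) * gradient
    delta = -np.sqrt(avg_sq_delta + eps) / np.sqrt(avg_sq_grad + eps) * g
    avg_sq_delta = rho * avg_sq_delta + (1 - rho) * delta ** 2
    w += delta

print(w)  # no global learning rate was needed anywhere
```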
# Algorithm
Require: Decay rate $\rho$, Constant $\epsilon$
Require: Initial parameter x
<img src="./images/adadelta_algorithm.png" alt="drawing" width="600"/>
Source: [AdaDelta paper](https://arxiv.org/pdf/1212.5701.pdf)
## Generate data
```
from scipy.sparse import random #to generate sparse data
np.random.seed(10) # for reproducibility
m_data = 100
n_data = 4 #number of features of the data
_scales = np.array([1,10, 10,1 ]) # play with these...
_parameters = np.array([3, 0.5, 1, 7])
def gen_data(m, n, scales, parameters, add_noise=True):
    # AdaGrad is designed especially for sparse data.
    # X: 2d tensor with m lines and n columns, X[:, k] uniformly distributed
    # in [-scale_k, scale_k]; the first and last columns are sparse
    # (approx. 75% of the entries are 0), produced with scipy.sparse.random
    # (https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.random.html)
    # y = X @ parameters + epsilon, with epsilon ~ N(0, 1); y has m elements
    # parameters - the ideal weights, used to produce the output values y
    X = np.random.uniform(-1, 1, size=(m, n)) * scales
    S = random(m, n, density=0.25).A  # sparse matrix: ~75% of entries are 0
    X[:, 0] = S[:, 0] * scales[0]
    X[:, -1] = S[:, -1] * scales[-1]
    y = X @ parameters
    if add_noise:
        y = y + np.random.randn(m)
    return X, y
X, y = gen_data(m_data, n_data, _scales, _parameters)
print(X)
print(y)
```
## Define error function, gradient, inference
```
def model_estimate(X, w):
'''Computes the linear regression estimation on the dataset X, using coefficients w
:param X: 2d tensor with m_data lines and n_data columns
:param w: a 1d tensor with n_data coefficients (no intercept)
:return: a 1d tensor with m_data elements y_hat = w @X.T
'''
    y_hat = w @ X.T
    return y_hat
def J(X, y, w):
"""Computes the mean squared error of model. See the picture from last week's sheet.
:param X: input values, of shape m_data x n_data
:param y: ground truth, column vector with m_data values
:param w: column with n_data coefficients for the linear form
:return: a scalar value >= 0
:use the same formula as in the exercise from last week
"""
    preds = model_estimate(X, w)
    err = np.mean((preds - y) ** 2)  # mean squared error
    return err
def gradient(X, y, w):
    '''Computes the gradients to be used for gradient descent.
:param X: 2d tensor with training data
:param y: 1d tensor with y.shape[0] == W.shape[0]
:param w: 1d tensor with current values of the coefficients
:return: gradients to be used for gradient descent.
:use the same formula as in the exercise from last week
'''
    m = y.size
    grad = (2 / m) * X.T @ (model_estimate(X, w) - y)  # gradient of the MSE
    return grad
```
## Momentum algorithm
```
#The function from last week for comparison
def gd_with_momentum(X, y, w_init, eta=1e-1, gamma = 0.9, thresh = 0.001):
"""Applies gradient descent with momentum coefficient
:params: as in gd_no_momentum
:param gamma: momentum coefficient
:param thresh: the threshold for gradient norm (to stop iterations)
    :return: the list of successive errors and the found w* vector
"""
w = w_init
w_err=[]
delta = np.zeros_like(w)
while True:
grad = gradient(X, y, w)
err = J(X, y, w)
w_err.append(err)
w_nou = w + gamma * delta - eta * grad
delta = w_nou - w
w = w_nou
        if np.linalg.norm(grad) < thresh:
            break
return w_err, w
w_init = np.array([0, 0, 0, 0])
errors_momentum, w_best = gd_with_momentum(X, y, w_init,0.0001, 0.9)
print(f'How many iterations were made: {len(errors_momentum)}')
w_best
fig, axes = plt.subplots()
axes.plot(list(range(len(errors_momentum))), errors_momentum)
axes.set_xlabel('Epochs')
axes.set_ylabel('Error')
axes.set_title('Optimization with momentum')
```
## Apply AdaGrad and report resulting $\eta$'s
```
def ada_grad(X, y, w_init, eta_init=1e-1, eps = 0.001, thresh = 0.001):
    '''Iterates with the AdaGrad algorithm.
:param X: 2d tensor with data
:param y: 1d tensor, ground truth
:param w_init: 1d tensor with the X.shape[1] initial coefficients
:param eta_init: the initial learning rate hyperparameter
:param eps: the epsilon value from the AdaGrad formula
:param thresh: the threshold for gradient norm (to stop iterations)
    :return: the list of successive errors w_err, the estimated coefficient vector w,
    and rates, the learning rates after the final iteration
'''
n = X.shape[1]
w = w_init
w_err=[]
sum_sq_grad = np.zeros(n)
rates = np.zeros(n) + eta_init
while True:
grad = gradient(X, y, w)
pgrad = grad**2
err = J(X, y, w)
w_err.append(err)
prod = rates*grad
w = w - prod
sum_sq_grad += pgrad
rates = eta_init/np.sqrt(eps + sum_sq_grad)
        if np.linalg.norm(grad) < thresh:
            break
return w_err, w, rates
w_init = np.array([0,0,0,0])
adaGerr, w_ada_best, rates = ada_grad(X, y, w_init)
print(rates)
print(f'How many iterations were made: {len(adaGerr)}')
w_ada_best
fig, axes = plt.subplots()
axes.plot(list(range(len(adaGerr))),adaGerr)
axes.set_xlabel('Epochs')
axes.set_ylabel('Error')
axes.set_title('Optimization with AdaGrad')
```
## Apply AdaDelta and report resulting $\eta$'s
```
def ada_delta(X, y, w_init, eta_init=1e-1, gamma=0.99, eps = 0.001, thresh = 0.001):
    '''Iterates with the AdaDelta algorithm.
:param X: 2d tensor with data
:param y: 1d tensor, ground truth
:param w_init: 1d tensor with the X.shape[1] initial coefficients
:param eta_init: the initial learning rate hyperparameter
:param gamma: decay constant, similar to momentum
:param eps: the epsilon value from the AdaGrad formula
:param thresh: the threshold for gradient norm (to stop iterations)
    :return: the list of successive errors w_err, the estimated coefficient vector w,
    and rates, the learning rates after the final iteration
'''
    # Same iteration scheme as ada_grad, but the sum of squared gradients is
    # replaced by exponentially decaying averages of the squared gradients and
    # of the squared updates (eta_init is kept for interface compatibility;
    # AdaDelta needs no global learning rate).
    n = X.shape[1]
    w = w_init.astype(float)
    w_err = []
    avg_sq_grad = np.zeros(n)   # E[g^2]
    avg_sq_delta = np.zeros(n)  # E[delta w^2]
    rates = np.zeros(n)
    while True:
        grad = gradient(X, y, w)
        w_err.append(J(X, y, w))
        avg_sq_grad = gamma * avg_sq_grad + (1 - gamma) * grad**2
        # per-coordinate rates: RMS[delta w]_{t-1} / RMS[g]_t
        rates = np.sqrt(avg_sq_delta + eps) / np.sqrt(avg_sq_grad + eps)
        delta = -rates * grad
        avg_sq_delta = gamma * avg_sq_delta + (1 - gamma) * delta**2
        w = w + delta
        if np.linalg.norm(grad) < thresh:
            break
    return w_err, w, rates
w_init = np.array([0,0,0,0])
adaDerr, w_adad_best, rates = ada_delta(X, y, w_init)
print(rates)
print(f'How many iterations were made: {len(adaDerr)}')
w_adad_best
fig, axes = plt.subplots()
axes.plot(list(range(len(adaDerr))),adaDerr)
axes.set_xlabel('Epochs')
axes.set_ylabel('Error')
axes.set_title('Optimization with AdaDelta')
```
```
#!/usr/bin/env python
# coding: utf-8
# Imports
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.metrics import f1_score, accuracy_score
from sklearn.metrics import roc_curve, confusion_matrix
import torch
import torch.nn as nn # All neural network modules, nn.Linear, nn.Conv2d, BatchNorm, Loss functions
import torch.optim as optim # For all Optimization algorithms, SGD, Adam, etc.
import torch.nn.functional as F # All functions that don't have any parameters
```
## Model
```
###############################
### Load data ###
###############################
data_list = []
target_list = []
import glob
for fp in glob.glob("data/train/*input.npz"):
data = np.load(fp)["arr_0"]
targets = np.load(fp.replace("input", "labels"))["arr_0"]
data_list.append(data)
target_list.append(targets)
#print(data_list)
# Note:
# Choose your own training and val set based on data_list and target_list
# Here using the last partition as val set
X_train = np.concatenate(data_list[ :-1])
y_train = np.concatenate(target_list[:-1])
nsamples, nx, ny = X_train.shape
print("Training set shape:", nsamples,nx,ny)
X_val = np.concatenate(data_list[-1: ])
y_val = np.concatenate(target_list[-1: ])
nsamples, nx, ny = X_val.shape
print("val set shape:", nsamples,nx,ny)
p_pos_train = len(y_train[y_train == 1])/len(y_train)*100
print("Percent positive samples in train:", p_pos_train)
p_pos_val = len(y_val[y_val == 1])/len(y_val)*100
print("Percent positive samples in val:", p_pos_val)
# make the data set into one dataset that can go into dataloader
train_ds = []
for i in range(len(X_train)):
train_ds.append([np.transpose(X_train[i]), y_train[i]])
val_ds = []
for i in range(len(X_val)):
val_ds.append([np.transpose(X_val[i]), y_val[i]])
bat_size = 64
print("\nNOTE:\nSetting batch-size to", bat_size)
train_ldr = torch.utils.data.DataLoader(train_ds,batch_size=bat_size, shuffle=True)
val_ldr = torch.utils.data.DataLoader(val_ds,batch_size=bat_size, shuffle=True)
# Set device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Using device (CPU/GPU):", device)
#device = torch.device("cpu")
###############################
### Define network ###
###############################
print("Initializing network")
# Hyperparameters
input_size = 420
num_classes = 1
learning_rate = 0.01
class Net(nn.Module):
def __init__(self, num_classes):
super(Net, self).__init__()
self.bn0 = nn.BatchNorm1d(54)
self.conv1 = nn.Conv1d(in_channels=54, out_channels=100, kernel_size=3, stride=2, padding=1)
torch.nn.init.trunc_normal_(self.conv1.weight)
self.pool = nn.MaxPool1d(kernel_size=2, stride=2)
self.conv1_bn = nn.BatchNorm1d(100)
self.conv2 = nn.Conv1d(in_channels=100, out_channels=100, kernel_size=3, stride=2, padding=1)
torch.nn.init.xavier_normal_(self.conv2.weight)
self.conv2_bn = nn.BatchNorm1d(100)
self.fc1 = nn.Linear(2600, num_classes)
torch.nn.init.kaiming_normal_(self.fc1.weight)
def forward(self, x):
x = self.bn0(x)
x = self.pool(F.leaky_relu(self.conv1(x)))
x = self.conv1_bn(x)
x = self.pool(F.leaky_relu(self.conv2(x)))
x = self.conv2_bn(x)
x = x.view(x.size(0), -1)
x = torch.sigmoid(self.fc1(x))
return x
# Initialize network
net = Net(num_classes=num_classes).to(device)
# Loss and optimizer
criterion = nn.BCELoss()
optimizer = optim.Adam(net.parameters(), lr=learning_rate)
###############################
### TRAIN ###
###############################
print("Training")
num_epochs = 5
train_acc, train_loss = [], []
valid_acc, valid_loss = [], []
losses = []
val_losses = []
for epoch in range(num_epochs):
cur_loss = 0
val_loss = 0
net.train()
train_preds, train_targs = [], []
for batch_idx, (data, target) in enumerate(train_ldr):
X_batch = data.float().detach().requires_grad_(True)
target_batch = torch.tensor(np.array(target), dtype = torch.float).unsqueeze(1)
optimizer.zero_grad()
output = net(X_batch)
batch_loss = criterion(output, target_batch)
batch_loss.backward()
optimizer.step()
preds = np.round(output.detach().cpu())
train_targs += list(np.array(target_batch.cpu()))
train_preds += list(preds.data.numpy().flatten())
cur_loss += batch_loss.detach()
losses.append(cur_loss / len(train_ldr.dataset))
net.eval()
### Evaluate validation
val_preds, val_targs = [], []
with torch.no_grad():
for batch_idx, (data, target) in enumerate(val_ldr): ###
x_batch_val = data.float().detach()
y_batch_val = target.float().detach().unsqueeze(1)
output = net(x_batch_val)
val_batch_loss = criterion(output, y_batch_val)
preds = np.round(output.detach())
val_preds += list(preds.data.numpy().flatten())
val_targs += list(np.array(y_batch_val))
val_loss += val_batch_loss.detach()
val_losses.append(val_loss / len(val_ldr.dataset))
print("\nEpoch:", epoch+1)
train_acc_cur = accuracy_score(train_targs, train_preds)
valid_acc_cur = accuracy_score(val_targs, val_preds)
train_acc.append(train_acc_cur)
valid_acc.append(valid_acc_cur)
from sklearn.metrics import matthews_corrcoef
print("Training loss:", losses[-1].item(), "Validation loss:", val_losses[-1].item(), end = "\n")
print("MCC Train:", matthews_corrcoef(train_targs, train_preds), "MCC val:", matthews_corrcoef(val_targs, val_preds))
```
## MH
```
###############################
### PERFORMANCE ###
###############################
epoch = np.arange(1,len(train_acc)+1)
plt.figure()
plt.plot(epoch, losses, 'r', epoch, val_losses, 'b')
plt.legend(['Train Loss','Validation Loss'])
plt.xlabel('Epoch'), plt.ylabel('Loss')
epoch = np.arange(1,len(train_acc)+1)
plt.figure()
plt.plot(epoch, train_acc, 'r', epoch, valid_acc, 'b')
plt.legend(['Train Accuracy','Validation Accuracy'])
plt.xlabel('Epoch'), plt.ylabel('Acc')
#print("Train accuracy:", train_acc, sep = "\n")
#print("Validation accuracy:", valid_acc, sep = "\n")
from sklearn.metrics import matthews_corrcoef
print("MCC Train:", matthews_corrcoef(train_targs, train_preds))
print("MCC Test:", matthews_corrcoef(val_targs, val_preds))
print("Confusion matrix train:", confusion_matrix(train_targs, train_preds), sep = "\n")
print("Confusion matrix test:", confusion_matrix(val_targs, val_preds), sep = "\n")
def plot_roc(targets, predictions):
# ROC
fpr, tpr, threshold = metrics.roc_curve(targets, predictions)
roc_auc = metrics.auc(fpr, tpr)
# plot ROC
plt.figure()
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
#plt.show()
plot_roc(train_targs, train_preds)
plt.title("Training AUC")
plot_roc(val_targs, val_preds)
plt.title("Validation AUC")
```
# Helpful scripts
# Show dataset as copied dataframes with named features
The dataset is a 3D numpy array, of dimensions n_complexes x features x positions. This makes viewing the features for individual complexes or samples challenging. Below is a function which copies the entire dataset, and converts it into a list of DataFrames with named indices and columns, in order to make understanding the data easier.
NB: These dataframes are only copies, and will not be passable into the neural network architecture.
```
pd.read_csv("data/example.csv")
def copy_as_dataframes(dataset_X):
"""
Returns list of DataFrames with named features from dataset_X,
using example CSV file
"""
df_raw = pd.read_csv("data/example.csv")
return [pd.DataFrame(arr, columns = df_raw.columns) for arr in dataset_X]
named_dataframes = copy_as_dataframes(X_train)
print("Showing first complex as dataframe. Columns are positions and indices are calculated features")
named_dataframes[0]
```
# View complex MHC, peptide and TCR alpha/beta sequences
You may want to view the one-hot encoded sequences as sequences in single-letter amino-acid format. The below function will return the TCR, peptide and MHC sequences for the dataset as 3 lists.
```
def oneHot(residue):
"""
Converts string sequence to one-hot encoding
Example usage:
seq = "GSHSMRY"
oneHot(seq)
"""
mapping = dict(zip("ACDEFGHIKLMNPQRSTVWY", range(20)))
if residue in "ACDEFGHIKLMNPQRSTVWY":
return np.eye(20)[mapping[residue]]
else:
return np.zeros(20)
def reverseOneHot(encoding):
"""
Converts one-hot encoded array back to string sequence
"""
mapping = dict(zip(range(20),"ACDEFGHIKLMNPQRSTVWY"))
seq=''
for i in range(len(encoding)):
if np.max(encoding[i])>0:
seq+=mapping[np.argmax(encoding[i])]
return seq
def extract_sequences(dataset_X):
"""
Return DataFrame with MHC, peptide and TCR a/b sequences from
one-hot encoded complex sequences in dataset X
"""
mhc_sequences = [reverseOneHot(arr[0:179,0:20]) for arr in dataset_X]
pep_sequences = [reverseOneHot(arr[179:190,0:20]) for arr in dataset_X]
tcr_sequences = [reverseOneHot(arr[192:,0:20]) for arr in dataset_X]
df_sequences = pd.DataFrame({"MHC":mhc_sequences, "peptide":pep_sequences,
"tcr":tcr_sequences})
return df_sequences
complex_sequences = extract_sequences(X_val)
print("Showing MHC, peptide and TCR alpha/beta sequences for each complex")
complex_sequences
```
<a href="https://colab.research.google.com/github/r5racker/012_RahilBhensdadia/blob/main/Lab_05_1_linear_regression_scratch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Import NumPy
import numpy as np
```
A tensor is a number, vector, matrix or any n-dimensional array.
## Problem Statement
We'll create a model that predicts crop yields for apples (*target variable*) by looking at the average temperature, rainfall and humidity (*input variables or features*) in different regions.
Here's the training data:
>Temp | Rain | Humidity | Prediction
>--- | --- | --- | ---
> 73 | 67 | 43 | 56
> 91 | 88 | 64 | 81
> 87 | 134 | 58 | 119
> 102 | 43 | 37 | 22
> 69 | 96 | 70 | 103
In a **linear regression** model, each target variable is estimated to be a weighted sum of the input variables, offset by some constant, known as a bias :
```
yeild_apple = w11 * temp + w12 * rainfall + w13 * humidity + b1
```
It means that the yield of apples is a linear or planar function of the temperature, rainfall & humidity.
**Our objective**: Find a suitable set of *weights* and *biases* using the training data, to make accurate predictions.
## Training Data
The training data can be represented using 2 matrices (inputs and targets), each with one row per observation and one column per variable.
```
# Input (temp, rainfall, humidity)
X = np.array([[73, 67, 43],
[91, 88, 64],
[87, 134, 58],
[102, 43, 37],
[69, 96, 70]], dtype='float32')
# Target (apples)
Y = np.array([[56],
[81],
[119],
[22],
[103]], dtype='float32')
```
Before we build a model, we represent the inputs and targets as NumPy arrays (this notebook implements everything from scratch with NumPy rather than PyTorch).
## Linear Regression Model (from scratch)
The *weights* and *biases* can also be represented as matrices, initialized with random values. The first row of `w` and the first element of `b` are used to predict the first target variable, i.e. the yield for apples.
The *model* is simply a function that performs a matrix multiplication of the input `x` and the weights `w` (transposed) and adds the bias `w0` (replicated for each observation).
$$
\hspace{2.5cm} X \hspace{1.1cm} \times \hspace{1.2cm} W^T
$$
$$
\left[ \begin{array}{cc}
1 & 73 & 67 & 43 \\
1 &91 & 88 & 64 \\
\vdots & \vdots & \vdots & \vdots \\
1 &69 & 96 & 70
\end{array} \right]
%
\times
%
\left[ \begin{array}{cc}
w_{0} \\
w_{1} \\
w_{2} \\
w_{3}
\end{array} \right]
%
$$
```
mu = np.mean(X, 0)
sigma = np.std(X, 0)
#normalizing the input
X = (X-mu) / sigma
X = np.hstack((np.ones((Y.size,1)),X))
print(X.shape)
# Weights and biases
rg = np.random.default_rng(14)
w = rg.random((1, 4))
print(w)
```
Because we've started with random weights and biases, the model does not do a good job of predicting the target variables.
## Loss Function
We can compare the predictions with the actual targets, using the following method:
* Calculate the difference between the two matrices (`preds` and `targets`).
* Square all elements of the difference matrix to remove negative values.
* Calculate the average of the elements in the resulting matrix.
The result is a single number, known as the **mean squared error** (MSE).
```
# MSE loss function
def mse(t1, t2):
diff = t1 - t2
return np.sum(diff * diff) / diff.size
# Compute error
preds = model(X,w)
cost_initial = mse(preds, Y)
print("Cost before regression: ",cost_initial)
```
## Compute Gradients
```
# Define the model
def model(x,w):
return x @ w.T
def gradient_descent(X, y, w, learning_rate, n_iters):
J_history = np.zeros((n_iters,1))
for i in range(n_iters):
h = model(X,w)
diff = h - y
delta = (learning_rate/Y.size)*(X.T@diff)
new_w = w - delta.T
w=new_w
J_history[i] = mse(h, y)
return (J_history, w)
```
## Train for multiple iteration
To reduce the loss further, we repeat the process of adjusting the weights and biases using the gradients multiple times. Each iteration is called an epoch.
```
import matplotlib.pyplot as plt
n_iters = 500
learning_rate = 0.01
initial_cost = mse(model(X,w),Y)
print("Initial cost is: ", initial_cost, "\n")
(J_history, optimal_params) = gradient_descent(X, Y, w, learning_rate, n_iters)
print("Optimal parameters are: \n", optimal_params, "\n")
print("Final cost is: ", J_history[-1])
plt.plot(range(len(J_history)), J_history, 'r')
plt.title("Convergence Graph of Cost Function")
plt.xlabel("Number of Iterations")
plt.ylabel("Cost")
plt.show()
# Calculate error
preds = model(X,optimal_params)
cost_final = mse(preds, Y)
# Print predictions
print("Prediction:\n",preds)
# Comparing predicted with targets
print("Targets:\n",Y)
print("Cost after linear regression: ",cost_final)
print("Cost reduction percentage : {} %".format(((cost_initial- cost_final)/cost_initial)*100))
```
# Exploratory Data Analysis of AllenSDK
```
# Only for Colab
#!python -m pip install --upgrade pip
#!pip install allensdk
```
## References
- [[AllenNB1]](https://allensdk.readthedocs.io/en/latest/_static/examples/nb/visual_behavior_ophys_data_access.html) Download data using the AllenSDK or directly from our Amazon S3 bucket
- [[AllenNB2]](https://allensdk.readthedocs.io/en/latest/_static/examples/nb/visual_behavior_ophys_dataset_manifest.html) Identify experiments of interest using the dataset manifest
- [[AllenNB3]](https://allensdk.readthedocs.io/en/latest/_static/examples/nb/visual_behavior_load_ophys_data.html) Load and visualize data from a 2-photon imaging experiment
- [[AllenNB4]](https://allensdk.readthedocs.io/en/latest/_static/examples/nb/visual_behavior_mouse_history.html) Examine the full training history of one mouse
- [[AllenNB5]](https://allensdk.readthedocs.io/en/latest/_static/examples/nb/visual_behavior_compare_across_trial_types.html) Compare behavior and neural activity across different trial types in the task
## Imports
Import and set up Python packages. You should not need to touch this section.
```
from pathlib import Path
from tqdm import tqdm
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from allensdk.brain_observatory.behavior.behavior_project_cache import VisualBehaviorOphysProjectCache
from allensdk.core.brain_observatory_cache import BrainObservatoryCache
# import mindscope_utilities
# import mindscope_utilities.visual_behavior_ophys as ophys
np.random.seed(42)
```
## Setup AllenSDK
Configure AllenSDK to get `cache`, `sessions_df` and `experiments_df`. Data will be stored in `./allensdk_storage` by default.
```
!mkdir -p allensdk_storage
DATA_STORAGE_DIRECTORY = Path("./allensdk_storage")
cache = VisualBehaviorOphysProjectCache.from_s3_cache(cache_dir=DATA_STORAGE_DIRECTORY)
```
The data manifest is comprised of three types of tables:
1. `behavior_session_table`
2. `ophys_session_table`
3. `ophys_experiment_table`
The `behavior_session_table` contains metadata for every **behavior session** in the dataset. Some behavior sessions have 2-photon data associated with them, while others took place during training in the behavior facility. The different training stages that mice are progressed through are described by the `session_type`.
The `ophys_session_table` contains metadata for every 2-photon imaging (aka optical physiology, or ophys) session in the dataset, associated with a unique `ophys_session_id`. An **ophys session** is one continuous recording session under the microscope, and can contain different numbers of imaging planes (aka experiments) depending on which microscope was used. For Scientifica sessions, there will only be one experiment (aka imaging plane) per session. For Multiscope sessions, there can be up to eight imaging planes per session. Quality Control (QC) is performed on each individual imaging plane within a session, so each can fail QC independent of the others. This means that a Multiscope session may not have exactly eight experiments (imaging planes).
The `ophys_experiment_table` contains metadata for every **ophys experiment** in the dataset, which corresponds to a single imaging plane recorded in a single session, and associated with a unique `ophys_experiment_id`. A key part of our experimental design is targeting a given population of neurons, contained in one imaging plane, across multiple `session_types` (further described below) to examine the impact of varying sensory and behavioral conditions on single cell responses. The collection of all imaging sessions for a given imaging plane is referred to as an **ophys container**, associated with a unique `ophys_container_id`. Each ophys container may contain different numbers of sessions, depending on which experiments passed QC, and how many retakes occurred (when a given `session_type` fails QC on the first try, an attempt is made to re-acquire the `session_type` on a different recording day - this is called a retake, also described further below).
*Text copied from [[AllenNB2]](#References)*
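The session → experiment → container hierarchy described above can be illustrated with a toy manifest (all IDs below are made up for illustration; the real tables come from `cache`):

```python
import pandas as pd

# Toy stand-in for the ophys_experiment_table: one Multiscope session (100) with three
# imaging planes, plus two Scientifica sessions (101, 102) with one plane each.
toy_experiments = pd.DataFrame({
    "ophys_experiment_id": [1, 2, 3, 4, 5],
    "ophys_session_id":    [100, 100, 100, 101, 102],
    "ophys_container_id":  [900, 901, 902, 900, 900],
})

# A session can hold several experiments (imaging planes)...
planes_per_session = toy_experiments.groupby("ophys_session_id")["ophys_experiment_id"].count()
print(planes_per_session)

# ...while a container collects the same imaging plane across several sessions.
sessions_per_container = toy_experiments.groupby("ophys_container_id")["ophys_session_id"].nunique()
print(sessions_per_container)
```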
---
We will just use the `ophys_experiment_table`.
```
experiments_df = cache.get_ophys_experiment_table()
```
## Specify Experiment
There are a lot of experiments in the table. Let's choose a particular experiment that meets the following criteria:
- Excitatory cells with fast reporter
- Single-plane imaging
### Cre Line and Reporter Line
<img style="width: 50%" src="https://github.com/seungjaeryanlee/nma-cn-project/blob/main/images/cre_lines.png?raw=1">
The `cre_line` determines which genetically identified neuron type will be labeled by the `reporter_line`.
This dataset has 3 `cre_line` values:
- **Slc17a7-IRES2-Cre**, which labels excitatory neurons across all cortical layers
- **Sst-IRES-Cre**, which labels somatostatin expressing inhibitory interneurons
- **Vip-IRES-Cre**, which labels vasoactive intestinal peptide expressing inhibitory interneurons
*Text copied from [[AllenNB2]](#References)*
```
experiments_df["cre_line"].unique()
```
There are also 3 `reporter_line`:
- **Ai93(TITL-GCaMP6f)**, which expresses the genetically encoded calcium indicator GCaMP6f (f is for 'fast', this reporter has fast offset kinetics, but is only moderately sensitive to calcium relative to other sensors) in cre labeled neurons
- **Ai94(TITL-GCaMP6s)**, which expresses the indicator GCaMP6s (s is for 'slow', this reporter is very sensitive to calcium but has slow offset kinetics), and
- **Ai148(TIT2L-GC6f-ICL-tTA2)**, which expresses GCaMP6f using a self-enhancing system to achieve higher expression than other reporter lines (which proved necessary to label inhibitory neurons specifically).
```
experiments_df["reporter_line"].unique()
```
The specific `indicator` expressed by each `reporter_line` also has its own column in the table.
```
experiments_df["indicator"].unique()
```
`full_genotype` contains information for both cre line and reporter line.
```
experiments_df["full_genotype"].unique()
```
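As a quick illustration, the cre line and reporter line can be recovered from the `full_genotype` string, which joins them with semicolons. The helper below is hypothetical (not an AllenSDK function):

```python
def split_genotype(full_genotype):
    """Split a full_genotype string into its cre-line and reporter-line parts.

    Hypothetical helper for illustration: the first ';'-separated field carries
    the cre line and the last carries the reporter line; '/wt' marks the
    wild-type allele on each.
    """
    parts = full_genotype.split(";")
    cre_line = parts[0].split("/")[0]
    reporter_line = parts[-1].split("/")[0]
    return cre_line, reporter_line

cre, reporter = split_genotype("Slc17a7-IRES2-Cre/wt;Camk2a-tTA/wt;Ai93(TITL-GCaMP6f)/wt")
print(cre)       # Slc17a7-IRES2-Cre
print(reporter)  # Ai93(TITL-GCaMP6f)
```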
---
We are looking at excitatory cells, so we should use `cre_line` of `Slc17a7-IRES2-Cre`. We want the fast one, so we select `Slc17a7-IRES2-Cre/wt;Camk2a-tTA/wt;Ai93(TITL-GCaMP6f)/wt`.
```
FULL_GENOTYPE = "Slc17a7-IRES2-Cre/wt;Camk2a-tTA/wt;Ai93(TITL-GCaMP6f)/wt"
```
### Project Code
<img style="width: 50%" src="https://github.com/seungjaeryanlee/nma-cn-project/blob/main/images/datasets.png?raw=1">
"The distinct groups of mice are referred to as dataset variants and can be identified using the `project_code` column." [[AllenNB2]](#References)
```
experiments_df["project_code"].unique()
```
---
We are interested in single-plane imaging, so either `VisualBehavior` or `VisualBehaviorTask1B` works.
```
# We are looking at single-plane imaging
# "VisualBehavior" or "VisualBehaviorTask1B"
PROJECT_CODE = "VisualBehavior"
```
### Experiment
<img style="width: 50%" src="https://github.com/seungjaeryanlee/nma-cn-project/blob/main/images/data_structure.png?raw=1">
(Note that we are looking at single-plane imaging, so there is only one row (container) per mouse.)
#### `MOUSE_ID`
"The mouse_id is a 6-digit unique identifier for each experimental animal in the dataset." [[AllenNB2]](#References)
---
We retrieve all mice that can be used for our experiment and select one.
```
experiments_df.query("project_code == @PROJECT_CODE") \
.query("full_genotype == @FULL_GENOTYPE") \
["mouse_id"].unique()
MOUSE_ID = 450471
```
#### `ACTIVE_SESSION`, `PASSIVE_SESSION`
<img style="width: 50%" src="https://github.com/seungjaeryanlee/nma-cn-project/blob/main/images/experiment_design.png?raw=1">
The `session_type` for each behavior session indicates the behavioral training stage or 2-photon imaging conditions for that particular session. This determines what stimuli were shown and what task parameters were used.
During the 2-photon imaging portion of the experiment, mice perform the task with the same set of images they saw during training (either image set A or B), as well as an additional novel set of images (whichever of A or B that they did not see during training). This allows evaluation of the impact of different sensory contexts on neural activity - familiarity versus novelty.
- Sessions with **familiar images** include those starting with `OPHYS_0`, `OPHYS_1`, `OPHYS_2`, and `OPHYS_3`.
- Sessions with **novel images** include those starting with `OPHYS_4`, `OPHYS_5`, and `OPHYS_6`.
Interleaved between **active behavior sessions** are **passive viewing sessions** where mice are given their daily water ahead of the session (and are thus satiated) and view the stimulus with the lick spout retracted so they are unable to earn water rewards. This allows comparison of neural activity in response to stimuli under different behavioral contexts: active task engagement and passive viewing without reward. There are two passive sessions:
- `OPHYS_2_images_A_passive`: passive session with familiar images
- `OPHYS_5_images_A_passive`: passive session with novel images
*Text copied from [[AllenNB2]](#References)*
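The naming convention above can be summarized in a small helper (a sketch based on the session names listed here, not an AllenSDK function):

```python
def describe_session_type(session_type):
    """Classify an OPHYS session name by image familiarity and task engagement.

    Sketch based on the naming convention described above: OPHYS_0-3 use
    familiar images, OPHYS_4-6 use novel ones, and '_passive' marks sessions
    with the lick spout retracted.
    """
    stage = int(session_type.split("_")[1])
    familiarity = "familiar" if stage <= 3 else "novel"
    engagement = "passive" if session_type.endswith("_passive") else "active"
    return familiarity, engagement

print(describe_session_type("OPHYS_1_images_A"))         # ('familiar', 'active')
print(describe_session_type("OPHYS_5_images_A_passive")) # ('novel', 'passive')
```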
---
We check which sessions are available for this particular mouse and select one active and one passive session type. Not all sessions may be available due to QC.
```
experiments_df.query("project_code == @PROJECT_CODE") \
.query("full_genotype == @FULL_GENOTYPE") \
.query("mouse_id == @MOUSE_ID") \
["session_type"].unique()
```
Looks like this mouse has all sessions! Let's select the first one then.
```
SESSION_TYPE = "OPHYS_1_images_A"
```
#### `EXPERIMENT_ID`
We retrieve the `ophys_experiment_id` of the session type we chose. We need this ID to get the experiment data.
```
experiments_df.query("project_code == @PROJECT_CODE") \
.query("full_genotype == @FULL_GENOTYPE") \
.query("mouse_id == @MOUSE_ID") \
.query("session_type == @SESSION_TYPE")
```
---
Looks like this mouse went through the same session multiple times! Let's just select the first experiment ID.
```
EXPERIMENT_ID = 871155338
```
#### `ACTIVE_EXPERIMENT_ID_CONTROL`, `PASSIVE_EXPERIMENT_ID_CONTROL`
```
PASSIVE_EXPERIMENT_ID_CONTROL = 884218326
```
## Download Experiment
We can now download the experiment with the selected `EXPERIMENT_ID`. Each experiment is approximately 600 MB to 2 GB in size.
```
experiment = cache.get_behavior_ophys_experiment(EXPERIMENT_ID)
experiment
```
This returns an instance of `BehaviorOphysExperiment`. It contains multiple attributes that we will need to explore.
## Attributes of the Experiment
Explore what information we have about the experiment by checking its attributes.
### `dff_traces`
"`dff_traces` dataframe contains traces for all neurons in this experiment, unaligned to any events in the task." [[AllenNB3]](#References)
```
experiment.dff_traces.head()
```
Since `dff` is stored as a list, we need to get timestamps for each of those numbers.
### `ophys_timestamps`
`ophys_timestamps` contains the timestamps of every record.
```
experiment.ophys_timestamps
```
Let's do a sanity check by checking the length of both lists.
```
print(f"dff has length {len(experiment.dff_traces.iloc[0]['dff'])}")
print(f"timestamp has length {len(experiment.ophys_timestamps)}")
```
### `stimulus_presentations`
We also need timestamps of when stimulus was presented. This information is contained in `stimulus_presentations`.
```
experiment.stimulus_presentations.head()
```
During imaging sessions, stimulus presentations (other than the change and pre-change images) are omitted with a 5% probability, resulting in some inter-stimulus intervals appearing as an extended gray screen period. [[AllenNB2]](#References)
<img style="width: 50%" src="https://github.com/seungjaeryanlee/nma-cn-project/blob/main/images/omissions.png?raw=1">
```
experiment.stimulus_presentations.query("omitted").head()
```
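The 5% omission rate is easy to illustrate with a small simulation (toy numbers only, not the actual experiment logic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 10,000 non-change stimulus presentations, each omitted with probability 0.05.
n_flashes = 10_000
omitted = rng.random(n_flashes) < 0.05

print(f"omitted fraction: {omitted.mean():.3f}")
```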
### `stimulus_templates`
If we want to know what the stimulus looks like, we can check `stimulus_templates`.
```
experiment.stimulus_templates
```
We see that we have a matrix for the `warped` column and a stub matrix for the `unwarped` column. Let's display the `warped` column.
```
fig, ax = plt.subplots(4, 2, figsize=(8, 12))
for i, image_name in enumerate(experiment.stimulus_templates.index):
ax[i%4][i//4].imshow(experiment.stimulus_templates.loc[image_name]["warped"], cmap='gray', vmin=0, vmax=255)
ax[i%4][i//4].set_title(image_name)
ax[i%4][i//4].get_xaxis().set_visible(False)
ax[i%4][i//4].get_yaxis().set_visible(False)
fig.show()
```
So this is what the mouse is seeing! But can we see the original, unwarped image? For that, we need to use another AllenSDK cache that contains these images.
```
boc = BrainObservatoryCache()
scenes_data_set = boc.get_ophys_experiment_data(501498760)
```
This data set contains a lot of images in the form of a 3D matrix (`# images` x `width` x `height`).
```
scenes = scenes_data_set.get_stimulus_template('natural_scenes')
scenes.shape
```
We just want the images that were shown above. Notice that the indices are part of the name of the images.
```
experiment.stimulus_templates.index
```
Using this, we can plot the unwarped versions!
```
fig, ax = plt.subplots(4, 2, figsize=(6, 12))
for i, image_name in enumerate(experiment.stimulus_templates.index):
scene_id = int(image_name[2:])
ax[i%4][i//4].imshow(scenes[scene_id, :, :], cmap='gray', vmin=0, vmax=255)
ax[i%4][i//4].set_title(image_name)
ax[i%4][i//4].get_xaxis().set_visible(False)
ax[i%4][i//4].get_yaxis().set_visible(False)
```
## Visualization
We make some basic plots using the information gathered from the various attributes.
### Plot dF/F Trace
Let's choose some random `cell_specimen_id` and plot its dff trace for time 400 to 450.
```
fig, ax = plt.subplots(figsize=(15, 4))
ax.plot(
experiment.ophys_timestamps,
experiment.dff_traces.loc[1086545833]["dff"],
)
ax.set_xlim(400, 450)
fig.show()
```
### Plot Stimulus
Let's also plot stimulus for a short interval.
*Part of code from [[AllenNB3]](#References)*
```
# Create a color map for each image
unique_stimuli = [stimulus for stimulus in experiment.stimulus_presentations['image_name'].unique()]
colormap = {image_name: sns.color_palette()[image_number] for image_number, image_name in enumerate(np.sort(unique_stimuli))}
# Keep omitted image as white
colormap['omitted'] = (1,1,1)
stimulus_presentations_sample = experiment.stimulus_presentations.query('stop_time >= 400 and start_time <= 450')
fig, ax = plt.subplots(figsize=(15, 4))
for idx, stimulus in stimulus_presentations_sample.iterrows():
ax.axvspan(stimulus['start_time'], stimulus['stop_time'], color=colormap[stimulus['image_name']], alpha=0.25)
ax.set_xlim(400, 450)
fig.show()
```
### Plot Both dF/F trace and Stimulus
```
fig, ax = plt.subplots(figsize=(15, 4))
ax.plot(
experiment.ophys_timestamps,
experiment.dff_traces.loc[1086545833]["dff"],
)
for idx, stimulus in stimulus_presentations_sample.iterrows():
ax.axvspan(stimulus['start_time'], stimulus['stop_time'], color=colormap[stimulus['image_name']], alpha=0.25)
ax.set_xlim(400, 450)
ax.set_ylim(-0.5, 0.5)
ax.legend(["dff trace"])
fig.show()
```
```
#
# libraries
#
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression as OLS
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import seaborn as sns
#
#
# utility function for plotting histograms in a grid
#
def plot_histogram_grid(df, variables, n_rows, n_cols, bins):
fig = plt.figure(figsize = (11, 11))
for i, var_name in enumerate(variables):
ax = fig.add_subplot(n_rows, n_cols, i + 1)
#
# for some variables there are relatively few unique values, so we'll
# adjust the histogram appearance accordingly to avoid "gaps" in the plots
#
if len(np.unique(df[var_name])) <= bins:
use_bins = len(np.unique(df[var_name]))
else:
use_bins = bins
#
df[var_name].hist(bins = use_bins, ax = ax)
ax.set_title(var_name)
fig.tight_layout()
plt.show()
#
# utility function for plotting scatterplots in a grid
#
def plot_scatter_grid(df, y_cols, x_var, rows, cols):
    fig, ax = plt.subplots(rows, cols, figsize = (15, 15))
    fig.tight_layout(pad = 2)
    for row in range(rows):
        for col in range(cols):
            # index into y_cols row by row; row * cols + col avoids repeating columns
            if (row * cols + col) <= (len(y_cols) - 1):
                plot_col = y_cols[row * cols + col]
                ax[row, col].scatter(df[x_var], df.loc[:, plot_col])
                ax[row, col].set_title(plot_col + ' vs. ' + x_var)
            else:
                fig.delaxes(ax[row, col])
    plt.show()
#
#
# load data
#
my_data = pd.read_csv('Datasets\\CO_sensors.csv')
my_data.head()
#
#
my_data.describe().T
#
#
plot_histogram_grid(my_data, my_data.columns[1:], 7, 3, 25)
#
#
plt.figure(figsize = (13, 11))
sns.pairplot(my_data.iloc[:, :5])
#
#
plot_cols = list(my_data.loc[:, 'R1 (MOhm)': ].columns)
plot_scatter_grid(my_data, plot_cols, 'Time (s)', 5, 4)
#
#
fig, ax = plt.subplots(figsize = (11, 9))
ax.scatter(my_data.loc[(my_data['Time (s)'] > 40000) &
(my_data['Time (s)'] < 45000), 'Time (s)'],
my_data.loc[(my_data['Time (s)'] > 40000) &
(my_data['Time (s)'] < 45000), 'R13 (MOhm)'])
ax.set_title('R13 vs. time')
plt.show()
#
#
plt.figure(figsize = (13, 11))
sns.heatmap(my_data.loc[:, 'R1 (MOhm)':].corr())
#
#
# we see two or three groups in the sensor correlations
#
# the data description
# (https://archive.ics.uci.edu/ml/datasets/Gas+sensor+array+temperature+modulation)
# says there are two kinds of sensors:
# "(7 units of TGS 3870-A04) and FIS (7 units of SB-500-12)"
#
# let's investigate the behavior of these vs. CO and humidity
#
Sensor_CO_corr = pd.concat([my_data.loc[:, ['CO (ppm)', 'Humidity (%r.h.)']],
my_data.loc[:, 'R1 (MOhm)':]], axis = 1).corr().loc['CO (ppm)':'Humidity (%r.h.)', 'R1 (MOhm)':]
#
# plot the CO correlations
#
fig, ax = plt.subplots(figsize = (11, 11))
ax.bar(x = Sensor_CO_corr.columns, height = Sensor_CO_corr.loc['CO (ppm)'])
ax.xaxis.set_ticks_position('top')
plt.xticks(rotation = 90)
plt.show()
#
# plot the humidity correlations
#
fig, ax = plt.subplots(figsize = (11, 11))
ax.bar(x = Sensor_CO_corr.columns, height = Sensor_CO_corr.loc['Humidity (%r.h.)'])
ax.xaxis.set_ticks_position('top')
plt.xticks(rotation = 90)
plt.show()
#
#
# the sensors are impacted by humidity
# here we know the humidity, but in the field we would not necessarily know it
# the fact that the two sensor types have very different CO / humidity sensitivities may be useful
# we see the sensor data are all skewed
# let's add a sqrt() transform (since there are 0s and near-0s in those data) to all sensors
# then fit a linear model
#
sensor_cols = list(my_data.loc[:, 'R1 (MOhm)': ].columns)
for i in range(len(sensor_cols)):
my_data['sqrt_' + sensor_cols[i]] = np.sqrt(my_data[sensor_cols[i]])
#
#
model_X = my_data.drop(columns = ['Time (s)', 'Temperature (C)', 'Humidity (%r.h.)', 'CO (ppm)'])
model_y = my_data.loc[:, 'CO (ppm)']
my_model = OLS()
my_model.fit(model_X, model_y)
preds = my_model.predict(model_X)
residuals = preds - model_y
fig, ax = plt.subplots(figsize = (9, 9))
ax.hist(residuals, bins = 50)
plt.show()
#
fig, ax = plt.subplots(figsize = (9, 9))
ax.scatter(preds, model_y)
ax.plot([0, 20], [0, 20], color = 'black', lw = 1)
ax.set_xlim(0, 20)
ax.set_ylim(0, 20)
plt.show()
#
#
# scale the data
#
scaler = StandardScaler()
model_X = scaler.fit_transform(model_X)
#
# fit a non-linear model
#
RF_model = RandomForestRegressor(n_estimators = 250)
RF_model.fit(model_X, model_y)
#
# look at the residuals
#
preds = RF_model.predict(model_X)
residuals = preds - model_y
#
fig, ax = plt.subplots(figsize = (9, 9))
ax.hist(residuals, bins = 50)
plt.show()
#
# compare predicted to actual
#
fig, ax = plt.subplots(figsize = (9, 9))
ax.scatter(preds, model_y)
ax.plot([0, 20], [0, 20], color = 'black', lw = 1)
ax.set_xlim(0, 20)
ax.set_ylim(0, 20)
plt.show()
#
```
# Consumption Equivalent Variation (CEV)
1. Use the model in the **ConsumptionSaving.pdf** slides and solve it using **egm**
2. This notebook estimates the *cost of income risk* through the Consumption Equivalent Variation (CEV)
We will here focus on the cost of income risk, but the CEV can be used to estimate the value of many different aspects of an economy. For example, [Oswald (2019)](http://qeconomics.org/ojs/index.php/qe/article/view/701 "The option value of homeownership") estimated the option value of homeownership using a similar strategy as described below.
**Goal:** To estimate the CEV by comparing the *value of life* under the baseline economy and an alternative economy with higher permanent income shock variance along with a consumption compensation.
**Value of Life:**
1. Let the *utility function* be a generalized version of the CRRA utility function with $\delta$ included as a potential consumption compensation.
\begin{equation}
{u}(c,\delta) = \frac{(c\cdot(1+\delta))^{1-\rho}}{1-\rho}
\end{equation}
2. Let the *value of life* of a synthetic consumer $s$ for a given level of permanent income shock variance, $\sigma_{\psi}$, and $\delta$, be
\begin{equation}
{V}_{s}({\sigma}_{\psi},\delta)=\sum_{t=1}^T \beta ^{t-1}{u}({c}^{\star}_{s,t}({\sigma}_{\psi},\delta),\delta)
\end{equation}
where ${c}^{\star}_{s,t}({\sigma}_{\psi},\delta)$ is optimal consumption found using the **egm**. The value of life is calculated in the function `value_of_life(.)` defined below.
**Consumption Equivalent Variation:**
1. Let $V=\frac{1}{S}\sum_{s=1}^SV(\sigma_{\psi},0)$ be the average value of life under the *baseline* economy with the baseline value of $\sigma_{\psi}$ and $\delta=0$.
2. Let $\tilde{V}(\delta)=\frac{1}{S}\sum_{s=1}^SV(\tilde{\sigma}_{\psi},\delta)$ be the average value of life under the *alternative* economy with $\tilde{\sigma}_{\psi} > \sigma_{\psi}$.
The CEV is the value of $\delta$ that sets $V=\tilde{V}(\delta)$ and can be estimated as
\begin{equation}
\hat{\delta} = \arg\min_\delta (V-\tilde{V}(\delta))^2
\end{equation}
where the objective function is calculated in `obj_func_cev(.)` defined below.
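The estimation strategy can be sketched in a stripped-down example. Assume (purely for illustration) that optimal consumption does not react to $\delta$, so the value of life scales by $(1+\delta)^{1-\rho}$, and recover $\delta$ by minimizing the squared gap. All numbers below are toy values, not output of the model in this notebook:

```python
import numpy as np

# Toy numbers (NOT from the model in this notebook): rho = 2, and we pretend optimal
# consumption is unchanged by delta, so V_alt(delta) = V_alt(0) * (1+delta)**(1-rho).
RHO = 2.0
V_BASE = -10.0   # average value of life in the baseline economy
V_ALT0 = -12.0   # average value of life in the riskier economy at delta = 0

def value_alt(delta):
    return V_ALT0 * (1.0 + delta) ** (1.0 - RHO)

def obj_cev_toy(delta):
    # squared distance between baseline and compensated alternative value of life
    return (V_BASE - value_alt(delta)) ** 2

# crude grid-search stand-in for optimize.minimize_scalar
grid = np.linspace(0.0, 1.0, 100_001)
delta_hat = grid[np.argmin(obj_cev_toy(grid))]
print(f"estimated CEV: {delta_hat:.3f}")  # analytic answer: V_ALT0 / V_BASE - 1 = 0.2
```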
# Setup
```
%matplotlib inline
%load_ext autoreload
%autoreload 2
import time
import numpy as np
import scipy.optimize as optimize
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
prop_cycle = plt.rcParams['axes.prop_cycle']
colors = prop_cycle.by_key()['color']
import sys
sys.path.append('../')
import ConsumptionSavingModel as csm
from ConsumptionSavingModel import ConsumptionSavingModelClass
```
# Setup the baseline model and the alternative model
```
par = {'simT':40}
model = ConsumptionSavingModelClass(name='baseline',solmethod='egm',**par)
# increase the permanent income shock variance by 100 percent and allow for consumption compensation
par_cev = {'sigma_psi':0.2,'do_cev':1,'simT':40}
model_cev = ConsumptionSavingModelClass(name='cev',solmethod='egm',**par_cev)
model.solve()
model.simulate()
```
# Average value of life
**Define Functions:** value of life and objective function used to estimate "cev"
```
def value_of_life(model):
# utility associated with consumption for all N and T
util = csm.utility(model.sim.c,model.par)
# discounted sum of utility
disc = np.ones(model.par.simT)
disc[1:] = np.cumprod(np.ones(model.par.simT-1)*model.par.beta)
disc_util = np.sum(disc*util,axis=1)
# return average of discounted sum of utility
return np.mean(disc_util)
def obj_func_cev(theta,model_cev,value_of_life_baseline):
# update cev-parameter
setattr(model_cev.par,'cev',theta)
# re-solve and simulate alternative model
model_cev.solve(do_print=False)
model_cev.simulate(do_print=False)
# calculate value of life
value_of_life_cev = value_of_life(model_cev)
# return squared difference to baseline
return (value_of_life_cev - value_of_life_baseline)*(value_of_life_cev - value_of_life_baseline)
```
**Baseline value of life and objective function at cev=0**
```
value_of_life_baseline = value_of_life(model)
obj_func_cev(0.0,model_cev,value_of_life_baseline)
# plot the objective function
grid_cev = np.linspace(0.0,0.2,20)
grid_obj = np.empty(grid_cev.size)
for j,cev in enumerate(grid_cev):
grid_obj[j] = obj_func_cev(cev,model_cev,value_of_life_baseline)
plt.plot(grid_cev,grid_obj);
```
# Estimate the Consumption Equivalent Variation (CEV)
```
res = optimize.minimize_scalar(obj_func_cev, bounds=[-0.01,0.5],
args=(model_cev,value_of_life_baseline),method='golden')
res
```
The estimated CEV suggests that consumers would be indifferent between the baseline economy and a 100% increase in the permanent income shock variance along with a 10% increase in consumption in all periods.
# Facial Expression Recognizer
```
#The OS module in Python provides a way of using operating system dependent functionality.
#import os
# For array manipulation
import numpy as np
#For importing data from csv and other manipulation
import pandas as pd
#For displaying images
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
#For displaying graph
#import seaborn as sns
#For constructing and handling neural network
import tensorflow as tf
#Constants
LEARNING_RATE = 1e-4
TRAINING_ITERATIONS = 10000 #increase iteration to improve accuracy
DROPOUT = 0.5
BATCH_SIZE = 50
IMAGE_TO_DISPLAY = 3
VALIDATION_SIZE = 2000
#Reading data from csv file
data = pd.read_csv('Train_updated_six_emotion.csv')
#Seperating images data from labels ie emotion
images = data.iloc[:,1:].values
images = images.astype(np.float64)
#Normalizaton : convert from [0:255] => [0.0:1.0]
images = np.multiply(images, 1.0 / 255.0)
image_size = images.shape[1]
image_width = image_height = 48
#Displaying an image from 20K images
def display(img):
#Reshaping,(1*2304) pixels into (48*48)
one_image = img.reshape(image_width,image_height)
plt.axis('off')
#Show image
plt.imshow(one_image, cmap=cm.binary)
display(images[IMAGE_TO_DISPLAY])
#Creating an array of emotion labels using dataframe 'data'
labels_flat = data[['label']].values.ravel()
labels_count = np.unique(labels_flat).shape[0]
# convert class labels from scalars to one-hot vectors
# 0 => [1 0 0]
# 1 => [0 1 0]
# 2 => [0 0 1]
def dense_to_one_hot(labels_dense, num_classes = 7):
num_labels = labels_dense.shape[0]
index_offset = np.arange(num_labels) * num_classes
labels_one_hot = np.zeros((num_labels, num_classes))
labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
labels = dense_to_one_hot(labels_flat, labels_count)
labels = labels.astype(np.uint8)
#Printing example hot-dense label
print ('labels[{0}] => {1}'.format(IMAGE_TO_DISPLAY,labels[IMAGE_TO_DISPLAY]))
#Using data for training & cross validation
validation_images = images[:2000]
validation_labels = labels[:2000]
train_images = images[2000:]
train_labels = labels[2000:]
```
Next is the neural network structure, where weights and biases are created.
The weights should be initialised with a small amount of noise
for symmetry breaking, and to prevent 0 gradients. Since we are using
rectified neurons (ones that contain the rectifier function *f(x)=max(0,x)*),
we initialise them with a slightly positive initial bias to avoid "dead neurons".
```
# initialization of weight
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
# We use a zero-padded convolutional neural network with a stride of 1, so the size of the output is the same as that of the input.
# The convolution layer finds the features in the data, the number of filters denoting the number of features to be detected.
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
# Pooling downsamples the data. 2x2 max-pooling splits the image into square 2-pixel blocks and only keeps the maximum value
# for each of the blocks.
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
# images
x = tf.placeholder('float', shape=[None, image_size])
# one-hot labels (labels_count classes)
y_ = tf.placeholder('float', shape=[None, labels_count])
BATCH_SIZE
```
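Every convolution below uses SAME padding with stride 1, so only the 2x2 max-pool layers change the height and width. The dimension bookkeeping can be sanity-checked with a tiny helper (illustrative only, not part of the network code):

```python
def spatial_size(input_size, n_pools):
    """Output height/width after SAME-padded stride-1 convs and n_pools 2x2 max-pools."""
    size = input_size
    for _ in range(n_pools):
        size = size // 2  # each 2x2 max-pool with stride 2 halves the size
    return size

# 48x48 input, four pooling stages as in the network below: 48 -> 24 -> 12 -> 6 -> 3
print(spatial_size(48, 4))  # 3
# hence the first fully connected layer expects 3 * 3 * 32 = 288 inputs
print(3 * 3 * 32)  # 288
```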
### VGG-16 architecture
```
W_conv1 = weight_variable([3, 3, 1, 8])
b_conv1 = bias_variable([8])
# we reshape the input data to a 4d tensor, with the first dimension corresponding to the number of images,
# second and third - to image width and height, and the final dimension - to the number of colour channels.
# (20000,2304) => (20000,48,48,1)
image = tf.reshape(x, [-1,image_width , image_height,1])
print (image.get_shape())
h_conv1 = tf.nn.relu(conv2d(image, W_conv1) + b_conv1)
print (h_conv1)
W_conv2 = weight_variable([3, 3, 8, 8])
b_conv2 = bias_variable([8])
h_conv2 = tf.nn.relu(conv2d(h_conv1, W_conv2) + b_conv2)
print (h_conv2)
# pooling reduces the size of the output from 48x48 to 24x24.
h_pool1 = max_pool_2x2(h_conv2)
#print (h_pool1.get_shape()) => (20000, 24, 24, 8)
# Prepare for visualization
# display 8 features in 4 by 2 grid
layer1 = tf.reshape(h_conv1, (-1, image_height, image_width, 4 ,2))
# reorder so the channels are in the first dimension, x and y follow.
layer1 = tf.transpose(layer1, (0, 3, 1, 4,2))
layer1 = tf.reshape(layer1, (-1, image_height*4, image_width*2))
# The next layers have 16 features for each 3x3 patch. Their weight tensors have a shape of [3, 3, 8, 16].
# The first two dimensions are the patch size; the next is the number of input channels (8 channels correspond to the 8
# features that we got from the previous convolutional layer).
W_conv3 = weight_variable([3, 3, 8, 16])
b_conv3 = bias_variable([16])
h_conv3 = tf.nn.relu(conv2d(h_pool1, W_conv3) + b_conv3)
print(h_conv3)
W_conv4 = weight_variable([3, 3, 16, 16])
b_conv4 = bias_variable([16])
h_conv4 = tf.nn.relu(conv2d(h_conv3, W_conv4) + b_conv4)
print(h_conv4)
h_pool2 = max_pool_2x2(h_conv4)
#print (h_pool2.get_shape()) => (20000, 12, 12, 16)
# The following layers have 32 features for each 3x3 patch. The first weight tensor has a shape of [3, 3, 16, 32].
# The first two dimensions are the patch size; the next is the number of input channels (16 channels correspond to the 16
# features that we got from the previous convolutional layer)
W_conv5 = weight_variable([3, 3, 16, 32])
b_conv5 = bias_variable([32])
h_conv5 = tf.nn.relu(conv2d(h_pool2, W_conv5) + b_conv5)
print(h_conv5)
W_conv6 = weight_variable([3, 3, 32, 32])
b_conv6 = bias_variable([32])
h_conv6 = tf.nn.relu(conv2d(h_conv5, W_conv6) + b_conv6)
print(h_conv6)
W_conv7 = weight_variable([3, 3, 32, 32])
b_conv7 = bias_variable([32])
h_conv7 = tf.nn.relu(conv2d(h_conv6, W_conv7) + b_conv7)
print(h_conv7)
h_pool3 = max_pool_2x2(h_conv7)
#print (h_pool2.get_shape()) => (20000, 6, 6, 32)
W_conv8 = weight_variable([3, 3, 32, 32])
b_conv8 = bias_variable([32])
h_conv8 = tf.nn.relu(conv2d(h_pool3, W_conv8) + b_conv8)
print(h_conv8)
W_conv9 = weight_variable([3, 3, 32, 32])
b_conv9 = bias_variable([32])
h_conv9 = tf.nn.relu(conv2d(h_conv8, W_conv9) + b_conv9)
print(h_conv9)
W_conv10 = weight_variable([3, 3, 32, 32])
b_conv10 = bias_variable([32])
h_conv10 = tf.nn.relu(conv2d(h_conv9, W_conv10) + b_conv10)
print(h_conv10)
h_pool4 = max_pool_2x2(h_conv10)
print (h_pool4.get_shape())
# Now that the image size is reduced to 3x3, we add three more 3x3 convolutional layers and then
# fully connected layers with 512 neurons to allow processing on the entire image (each neuron of a
# fully connected layer is connected to all the activations/outputs of the previous layer)
W_conv11 = weight_variable([3, 3, 32, 32])
b_conv11 = bias_variable([32])
h_conv11 = tf.nn.relu(conv2d(h_pool4, W_conv11) + b_conv11)
print(h_conv11)
W_conv12 = weight_variable([3, 3, 32, 32])
b_conv12 = bias_variable([32])
h_conv12 = tf.nn.relu(conv2d(h_conv11, W_conv12) + b_conv12)
print(h_conv12)
W_conv13 = weight_variable([3, 3, 32, 32])
b_conv13 = bias_variable([32])
h_conv13 = tf.nn.relu(conv2d(h_conv12, W_conv13) + b_conv13)
print(h_conv13)
# densely connected layer
W_fc1 = weight_variable([3 * 3 * 32, 512])
b_fc1 = bias_variable([512])
# (20000, 3, 3, 32) => (20000, 288)
h_pool2_flat = tf.reshape(h_conv13, [-1, 3*3*32])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
print (h_fc1.get_shape()) # => (20000, 512)
W_fc2 = weight_variable([512, 512])
b_fc2 = bias_variable([512])
h_fc2 = tf.nn.relu(tf.matmul(h_fc1, W_fc2) + b_fc2)
print (h_fc2.get_shape()) # => (20000, 512)
W_fc3 = weight_variable([512, 512])
b_fc3 = bias_variable([512])
h_fc3 = tf.nn.relu(tf.matmul(h_fc2, W_fc3) + b_fc3)
print (h_fc3.get_shape()) # => (20000, 512)
# To prevent overfitting, we apply dropout before the readout layer.
# Dropout removes some nodes from the network at each training stage. Each of the nodes is either kept in the
# network with probability (keep_prob) or dropped with probability (1 - keep_prob).After the training stage
# is over the nodes are returned to the NN with their original weights.
keep_prob = tf.placeholder('float')
h_fc1_drop = tf.nn.dropout(h_fc3, keep_prob)  # apply dropout to the last hidden layer
# readout layer: 512 -> labels_count
W_fc4 = weight_variable([512, labels_count])
b_fc4 = bias_variable([labels_count])
# Finally, we add a softmax layer
y = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc4) + b_fc4)
#print (y.get_shape()) # => (20000, 3)
cross_entropy = -tf.reduce_sum(y_*tf.log(tf.clip_by_value(y, 1e-10, 1.0)))  # clip avoids log(0) -> NaN
train_step = tf.train.AdamOptimizer(LEARNING_RATE).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, 'float'))
predict = tf.argmax(y,1)
epochs_completed = 0
index_in_epoch = 0
num_examples = train_images.shape[0]
# serve data by batches
def next_batch(batch_size):
global train_images
global train_labels
global index_in_epoch
global epochs_completed
start = index_in_epoch
index_in_epoch += batch_size
# when all training data have been used, the data is reordered randomly
if index_in_epoch > num_examples:
# finished epoch
epochs_completed += 1
# shuffle the data
perm = np.arange(num_examples)
np.random.shuffle(perm)
train_images = train_images[perm]
train_labels = train_labels[perm]
# start next epoch
start = 0
index_in_epoch = batch_size
assert batch_size <= num_examples
end = index_in_epoch
return train_images[start:end], train_labels[start:end]
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
# visualisation variables
train_accuracies = []
validation_accuracies = []
x_range = []
display_step=1
for i in range(TRAINING_ITERATIONS):
#get new batch
batch_xs, batch_ys = next_batch(BATCH_SIZE)
# check progress on every 1st,2nd,...,10th,20th,...,100th... step
if i%display_step == 0 or (i+1) == TRAINING_ITERATIONS:
train_accuracy = accuracy.eval(feed_dict={x:batch_xs,
y_: batch_ys,
keep_prob: 1.0})
if(VALIDATION_SIZE):
validation_accuracy = accuracy.eval(feed_dict={ x: validation_images[0:BATCH_SIZE],
y_: validation_labels[0:BATCH_SIZE],
keep_prob: 1.0})
print('training_accuracy / validation_accuracy => %.2f / %.2f for step %d'%(train_accuracy, validation_accuracy, i))
validation_accuracies.append(validation_accuracy)
else:
print('training_accuracy => %.4f for step %d'%(train_accuracy, i))
train_accuracies.append(train_accuracy)
x_range.append(i)
# increase display_step
if i%(display_step*10) == 0 and i:
display_step *= 10
# train on batch
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys, keep_prob: DROPOUT})
if(VALIDATION_SIZE):
validation_accuracy = accuracy.eval(feed_dict={x: validation_images,
y_: validation_labels,
keep_prob: 1.0})
print('validation_accuracy => %.4f'%validation_accuracy)
plt.plot(x_range, train_accuracies,'-b', label='Training')
plt.plot(x_range, validation_accuracies,'-g', label='Validation')
plt.legend(loc='lower right', frameon=False)
plt.ylim(ymax = 1.1, ymin = 0.0)
plt.ylabel('accuracy')
plt.xlabel('step')
plt.show()
```
| github_jupyter |
2017
Machine Learning Practical
University of Edinburgh
Georgios Pligoropoulos - s1687568
Coursework 4 (part 7)
### Imports, Inits, and helper functions
```
jupyterNotebookEnabled = True
plotting = True
coursework, part = 4, 7
saving = True
if jupyterNotebookEnabled:
#%load_ext autoreload
%reload_ext autoreload
%autoreload 2
import sys, os
mlpdir = os.path.expanduser(
'~/pligor.george@gmail.com/msc_Artificial_Intelligence/mlp_Machine_Learning_Practical/mlpractical'
)
sys.path.append(mlpdir)
from collections import OrderedDict
from __future__ import division
import skopt
from mylibs.jupyter_notebook_helper import show_graph
import datetime
import os
import time
import tensorflow as tf
import numpy as np
from mlp.data_providers import MSD10GenreDataProvider, MSD25GenreDataProvider,\
MSD10Genre_Autoencoder_DataProvider, MSD10Genre_StackedAutoEncoderDataProvider
import matplotlib.pyplot as plt
%matplotlib inline
from mylibs.batch_norm import fully_connected_layer_with_batch_norm_and_l2
from mylibs.stacked_autoencoder_pretrainer import \
constructModelFromPretrainedByAutoEncoderStack,\
buildGraphOfStackedAutoencoder, executeNonLinearAutoencoder
from mylibs.jupyter_notebook_helper import getRunTime, getTrainWriter, getValidWriter,\
plotStats, initStats, gatherStats
from mylibs.tf_helper import tfRMSE, tfMSE, fully_connected_layer
#trainEpoch, validateEpoch
from mylibs.py_helper import merge_dicts
from mylibs.dropout_helper import constructProbs
from mylibs.batch_norm import batchNormWrapper_byExponentialMovingAvg,\
fully_connected_layer_with_batch_norm
import pickle
from skopt.plots import plot_convergence
from mylibs.jupyter_notebook_helper import DynStats
import operator
from skopt.space.space import Integer, Categorical
from skopt import gp_minimize
from rnn.rnn_batch_norm import RNNBatchNorm
seed = 16011984
rng = np.random.RandomState(seed=seed)
config = tf.ConfigProto(log_device_placement=True, allow_soft_placement=True)
config.gpu_options.allow_growth = True
figcount = 0
tensorboardLogdir = 'tf_cw%d_%d' % (coursework, part)
curDtype = tf.float32
reluBias = 0.1
batch_size = 50
num_steps = 6 # number of truncated backprop steps ('n' in the discussion above)
#num_classes = 2
state_size = 10 #each state is represented with a certain width, a vector
learningRate = 1e-4 #default of Adam is 1e-3
#momentum = 0.5
#lamda2 = 1e-2
best_params_filename = 'best_params_rnn.npy'
```
Here the state size is equal to the number of classes because we give the last output all of the responsibility.
We follow a repetitive process: for example, if num_steps=6 then we break the 120 segments into 20 parts.
The output of each part is the genre, so we compare against the genre for every part.
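The slicing described above can be sketched in a few lines (a toy illustration of the arithmetic, not the `MSD10Genre_120_rnn_DataProvider` implementation):

```python
# Illustrative sketch: 120 segments split into parts of num_steps each.
segment_count = 120
num_steps = 6

parts = segment_count // num_steps
print(parts)  # 20

# each part spans segments [i*num_steps, (i+1)*num_steps)
part_ranges = [(i * num_steps, (i + 1) * num_steps) for i in range(parts)]
print(part_ranges[0])   # (0, 6)
print(part_ranges[-1])  # (114, 120)
```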
### MSD 10 genre task
```
segmentCount = 120
segmentLen = 25
from rnn.msd10_data_providers import MSD10Genre_120_rnn_DataProvider
```
### Experiment with Best Parameters
```
best_params = np.load(best_params_filename)
best_params
(state_size, num_steps) = best_params
(state_size, num_steps)
rnnModel = RNNBatchNorm(batch_size=batch_size, rng=rng, dtype = curDtype, config=config,
segment_count=segmentCount, segment_len= segmentLen)
%%time
epochs = 100
stats, keys = rnnModel.run_rnn(state_size = state_size, num_steps=num_steps,
epochs = epochs)
if plotting:
fig_1, ax_1, fig_2, ax_2 = plotStats(stats, keys)
plt.show()
if saving:
figcount += 1
fig_1.savefig('cw%d_part%d_%02d_fig_error.svg' % (coursework, part, figcount))
fig_2.savefig('cw%d_part%d_%02d_fig_valid.svg' % (coursework, part, figcount))
print(max(stats[:, -1])) # maximum validation accuracy
```
# Exercises 06 - Strings and Dictionaries
## 0. Length of Strings
Let's start with a string lightning round to warm up. What are the lengths of the strings below?
For each of the five strings below, predict what `len()` would return when passed that string. Use the variable `length` to record your answer.
```
a = ""
length = 0
print(length==len(a))
b = "it's ok"
length = 7
print(length==len(b))
c = 'it\'s ok'
length = 7
print(length==len(c))
d = """hey"""
length = 3
print(length==len(d))
e = '\n'
length = 1
print(length==len(e))
```
## 1. Check the Zip Code
There is a saying that *"Data scientists spend 80% of their time cleaning data, and 20% of their time complaining about cleaning data."* Let's see if you can write a function to help clean US zip code data. Given a string, it should return whether or not that string represents a valid zip code. For our purposes, a valid zip code is any string consisting of exactly 5 digits.
HINT: `str` has a method that will be useful here. Use `help(str)` to review a list of string methods.
```
def is_valid_zip(zip_code):
"""Returns whether the input string is a valid (5 digit) zip code
"""
return len(zip_code) == 5 and zip_code.isdigit() # make sure that zip_code is in digit format
print(is_valid_zip("123456"))
print(is_valid_zip("abcde"))
print(is_valid_zip("12345"))
```
## 2. Searching a Word
A researcher has gathered thousands of news articles. But she wants to focus her attention on articles including a specific word. Complete the function below to help her filter her list of articles.
Your function should meet the following criteria
- Do not include documents where the keyword string shows up only as a part of a larger word. For example, if she were looking for the keyword “closed”, you would not include the string “enclosed.”
- She does not want you to distinguish upper case from lower case letters. So the phrase “Closed the case.” would be included when the keyword is “closed”
- Do not let periods or commas affect what is matched. “It is closed.” would be included when the keyword is “closed”. But you can assume there are no other types of punctuation.
*HINT*: Some methods that may be useful here: `str.split()`, `str.strip()`, `str.lower()`
```
def word_search(doc_list, keyword):
"""
Takes a list of documents (each document is a string) and a keyword.
Returns list of the index values into the original list for all documents
containing the keyword.
Example:
doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
>>> word_search(doc_list, 'casino')
>>> [0]
"""
    index_list = []
    for i, doc in enumerate(doc_list):
        # lowercase and strip periods/commas from each token, so punctuation does
        # not affect matching and partial-word matches (e.g. "Casinoville") are excluded
        tokens = [token.strip('.,').lower() for token in doc.split()]
        if keyword.lower() in tokens:
            index_list.append(i)
    return index_list
doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
word_search(doc_list, 'casino')
```
## 3. Searching Multiple Words
Now the researcher wants to supply multiple keywords to search for. Complete the function below to help her.
(You're encouraged to use the `word_search` function you just wrote when implementing this function. Reusing code in this way makes your programs more robust and readable - and it saves typing!)
```
def multi_word_search(doc_list, keywords):
"""
Takes list of documents (each document is a string) and a list of keywords.
Returns a dictionary where each key is a keyword, and the value is a list of indices
(from doc_list) of the documents containing that keyword
>>> doc_list = ["The Learn Python Challenge Casino.", "They bought a car and a casino", "Casinoville"]
>>> keywords = ['casino', 'they']
>>> multi_word_search(doc_list, keywords)
{'casino': [0, 1], 'they': [1]}
"""
dictionary = {}
for keyword in keywords:
dictionary[keyword] = word_search(doc_list, keyword)
return dictionary
doc_list = ["The Learn Python Challenge Casino.", "They bought a car and a casino", "Casinoville"]
keywords = ['casino', 'they']
multi_word_search(doc_list, keywords)
```
# Keep Going 💪
<a href="https://colab.research.google.com/github/nationalarchives/TechneTraining/blob/main/Code/Techne_ML_workbook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Set up variables and install useful library code
```
import sys
data_source = "Github"
!git clone https://github.com/nationalarchives/TechneTraining.git
sys.path.insert(0, 'TechneTraining')
sys.path.insert(0, 'TechneTraining/Code')
github_data = "TechneTraining/Data/TopicModelling/"
import techne_library_code as tlc
from IPython.display import display
import math
TEST_MODE = False
```
# Topic Modelling
### Load word list from Topic Model built on 'regulation' related websites.
Display words in table.
A topic model is created from a corpus of text by an unsupervised machine learning algorithm. The process is non-deterministic, which means the results will differ every time it is run. Below is one set of results from running the software MALLET over the 'regulation' corpus.
The primary output is a list of topics (in this case 8, one row each) and a list of words most representative of that topic. A word can appear in more than one topic.
From this we can get a high level overview of a corpus of text.
```
topic_words = tlc.read_topic_list(github_data + "topic_list.txt")
TOPICS = len(topic_words)
topic_table = tlc.pd.DataFrame([v[0:12] for v in topic_words.values()])
topic_table
```
Stop words are used in Natural Language Processing to filter out very common words, or those which may negatively affect the results.
Below is a list of example stop words.
```
stop_words = ["i","or","which","of","and","is","the","a","you're","you","at","his","etc",'an','where','when']
```
Exercise: Picking stop words
We can add more to this list by selecting from the following list. Which ones do you think might be worth filtering out?
```
additional_stops = ['medical','freight','pdf','plan','kb','regulation','risk']
stop_word_select = tlc.widgets.SelectMultiple(options=additional_stops, rows=len(additional_stops))
stop_word_select
for w in stop_word_select.value:
print("Adding",w,"to stop words")
stop_words.append(w)
```
As well as a list of topics and related words, MALLET also produces a topic breakdown for each document in the corpus.
Here we load the topic data and visualise some examples.
```
topics_per_doc = tlc.read_doc_topics(github_data + "topics_per_doc.txt")
```
The following plots show the proportion of each topic attributed to 4 different documents.
1. Top Left: One topic clearly dominates
2. Top Right: One dominant topic but a second topic is above the level of others
3. Bottom Left: Two topics clearly above others
4. Bottom Right: Topics close to being even
```
file_number_list = [212, 85, 9, 372]
fig, ax = tlc.plot_doc_topics(file_number_list, topics_per_doc, TOPICS)
tlc.pyplot.show()
```
### Exercise: From Topics to Classes
For this Machine Learning exercise we want to predict a Category of regulation (e.g. "Medicine" or "Rail"). The categories we may want to predict do not map one-to-one with the topics above. So first we need to create that mapping.
Firstly we will define a list of possible categories. Sometimes the topics that come out may be worth ignoring (e.g. cookie information) but in this case all of them seem to be of interest.
```
topics_of_interest = [0,1,2,3,4,5,6,7]
class_names = {0:"General",1:"Medicine",2:"Rail",3:"Safety",4:"Pensions",5:"Education",6:"Other",-1:"Unclassified"}
topic_to_class = {}
if TEST_MODE:
topic_to_class = {0:1,1:0,2:2,3:3,4:4,5:4,6:2,7:2} #For testing
```
Using the dropdown and list selector below we can set the mapping from topic to Class (a term more commonly used in machine learning for category).
```
class_drop = tlc.widgets.Dropdown(options=[(v,k) for k,v in class_names.items()],
value=topics_of_interest[0], rows = len(class_names))
topic_select = tlc.widgets.SelectMultiple(options=[(w[0:5],t) for t,w in topic_words.items()],
value = [k for k,v in topic_to_class.items() if v == class_drop.value],
rows = TOPICS+1, height="100%")
button = tlc.widgets.Button(description="Update")
output = tlc.widgets.Output()
def on_button_clicked(b):
for v in topic_select.value:
topic_to_class[v] = class_drop.value
print("Updated")
button.on_click(on_button_clicked)
```
### Exercise: Map topics to classes
```
V = tlc.widgets.VBox([class_drop, topic_select, button])
V
```
Update the selected values here and then return to the dropdown until finished.
Display the resulting mappings from **Topic** to **Class**
```
tlc.pd.DataFrame([(",".join(topic_words[k][0:10]),class_names[v]) for k,v in topic_to_class.items()], columns=['Topic Words','Class'])
```
## Viewing document class proportions
```
classes_per_doc = tlc.topic_to_class_scores(topics_per_doc, topic_to_class)
```
Generally every document contains a bit of each topic. Before visualising the class breakdown for our sample documents, we can filter out lowest scoring classes and focus on the primary class(es) by zeroing all values below a threshold. We then **normalise** the probabilities to add to 1.
Run the next piece of code to create a slider to set the threshold, and then the following one will draw graphs. To try a different threshold, adjust the slider and rerun the graph code.
```
class_threshold = tlc.widgets.FloatSlider(0.10, min=0.10, max=0.65, step=0.05)
class_threshold
```
This is the graph code. It shows Classes above the threshold defined by the slider for the four documents previously visualised.
```
filtered_classes_per_doc = tlc.filter_topics_by_threshold(classes_per_doc, class_threshold.value)
class_count = len(filtered_classes_per_doc['file_1.txt'])
fig, ax = tlc.plot_doc_topics(file_number_list, filtered_classes_per_doc, class_count, normalise=True)
tlc.pyplot.show()
```
# Representing text as numbers
The next stage is to use the results of the topic modelling to train a Supervised Learning algorithm.
Supervised Learning is learning by example. We label our data in advance with categories, and then the algorithm derives a function which will map an input data item to an output Class. The input data is usually termed **Features**, the outputs are often called **Responses**.
## Term Frequency-Inverse Document Frequency (TF-IDF)
For this exercise the features are the words in our documents. There is no semantic meaning attached, the words are no more than tokens. Imagine a spreadsheet where each row represents a document and each column represents a word from a fixed vocabulary.
One representation we could use would be a 1 to indicate that a word appears in a document, and a 0 if it doesn't. This is simple, but overly so. A better representation is TF-IDF, which stands for Term Frequency-Inverse Document Frequency. It is a very simple idea: a word that appears in most documents scores low, while a word that appears in few documents scores high (this is the Inverse Document Frequency part). The Term Frequency part increases the score when the word appears many times in the document.
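As a minimal sketch of the idea, here is one textbook variant of the score on a made-up toy corpus (the notebook's `MLData` helper, like scikit-learn, uses smoothed variants, so the exact numbers differ):

```python
import math

# toy corpus: each document is a list of word tokens (made up for illustration)
docs = [
    ["passenger", "train", "late"],
    ["passenger", "train", "freight"],
    ["medicine", "approval", "safety"],
]

def tfidf(term, doc, corpus):
    tf = doc.count(term) / len(doc)            # term frequency within the document
    df = sum(1 for d in corpus if term in d)   # number of documents containing the term
    idf = math.log(len(corpus) / df)           # rarer across the corpus -> higher idf
    return tf * idf

# "passenger" appears in 2 of 3 documents, "freight" in only 1,
# so "freight" scores higher for the second document
print(round(tfidf("passenger", docs[1], docs), 3))  # 0.135
print(round(tfidf("freight", docs[1], docs), 3))    # 0.366
```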
Now that we've mapped topics and categories, it is time to prepare the text corpus (the document contents) for Machine Learning.
First we load the content from a file.
```
D4ML = tlc.MLData()
D4ML.load_content(github_data + "tm_file_contents.txt")
D4ML.set_classes(classes_per_doc)
```
We have some influence over the parameters used to define the TFIDF representation of our corpus. How they are set can influence the results.
1. Features: how many distinct words from the corpus to use for the Vocabulary
2. Min Doc Frequency: minimum number of documents a word must appear in to be considered
3. Max Doc Frequency: maximum number of documents a word must appear in to be considered
```
FEATURES=1000
MIN_DOC_FREQ=4
MAX_DOC_FREQ=100
```
Calculate the TF-IDF scores for each document and prepare some of the data for Machine Learning
```
D4ML.add_stop_words(*stop_words)
D4ML.calc_tfidf(FEATURES, MIN_DOC_FREQ, MAX_DOC_FREQ)
print("Documents:",D4ML.TFIDF.shape[0],"\tWords",D4ML.TFIDF.shape[1])
training_features, training_classes, training_ids = D4ML.get_ml_data()
training_features.shape
```
Here we can see an example of the TF-IDF scores for a document. They are sorted in score order, the higher scores indicating a greater importance for that document.
```
EXAMPLE = 0 # 0 to 3 only
vocabulary = D4ML.vectorizer.get_feature_names()
example_row = D4ML.TFIDF[D4ML.file_to_idx['file_' + str(file_number_list[EXAMPLE]) + '.txt']]
example_table = tlc.pd.DataFrame(zip([vocabulary[w] for w in example_row.nonzero()[1]],
[int(example_row[0,v]*1000)/1000 for i,v in enumerate(example_row.nonzero()[1])]),
columns = ['word','tfidf']).sort_values(by='tfidf', ascending=False)
example_table
```
# Supervised Machine Learning
For this exercise we will use the Naive Bayes algorithm. It is called Bayes because it uses Bayesian probability (named after the Reverend Thomas Bayes who discovered it). It is Naive because it assumes that all words in a document are independent of each other (think of the sentence "my cat miaows when hungry"). It seems like a bad assumption but actually works well in practice.
Bayesian probability is surprisingly simple and gives us the ability to flip probabilities around. From a corpus of text I can easily calculate the probability of the word "Passenger" appearing in a document about "Railways", and also in a document about "Medicines". What Bayes' rule does is allow me to then calculate the probability that a document is about "Railways" or "Medicines" given that it contains the word "Passenger". Very handy!
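Bayes' rule itself fits in a few lines. The probabilities below are made up purely to illustrate the flip described above:

```python
# Made-up illustrative probabilities (not derived from the corpus)
p_rail = 0.7                  # prior: P(Railways)
p_medicine = 0.3              # prior: P(Medicines)
p_word_given_rail = 0.20      # P("passenger" appears | Railways)
p_word_given_medicine = 0.02  # P("passenger" appears | Medicines)

# Bayes' rule: P(Rail | word) = P(word | Rail) * P(Rail) / P(word)
p_word = p_word_given_rail * p_rail + p_word_given_medicine * p_medicine
p_rail_given_word = p_word_given_rail * p_rail / p_word
print(round(p_rail_given_word, 3))  # 0.959
```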
### Preparing training and test data
Before starting any Machine Learning we need to split our data into Training and Test datasets. The reason for this is that algorithms can appear to perform very well against the dataset they were trained with, but then perform very badly on new, unseen, data.
The algorithm will learn from the Training data and then we check its performance against the Test data.
```
TEST_PROPORTION = 0.6
X_train, X_test, \
y_train, y_test, \
x_train_ids, x_test_ids = tlc.train_test_split(training_features, training_classes, training_ids,
test_size = TEST_PROPORTION,
random_state=42, stratify=training_classes)
```
### Training the Naive Bayes model
Now we **fit** a Naive Bayes model to the training data. Two lines of code and it is done!
```
model = tlc.BernoulliNB()
model.fit(X_train, y_train)
```
### Optional: Under the hood of Naive Bayes
One nice feature of Naive Bayes is we can see exactly what is going on internally and could recreate its results with a calculator, if we so wished.
The first part of the calculation is called the prior and is the probability of each class without any further information (i.e. without seeing the document content)
This corpus is heavily skewed so rail is dominant.
```
tlc.pyplot.bar(height=[math.exp(p) for p in model.class_log_prior_],
x=[x for x in range(len(model.class_log_prior_))])
tlc.pyplot.xticks([0,1,2,3,4],[class_names[c] for c in range(len(model.class_log_prior_))])
tlc.pyplot.show()
```
We can also take a class and see which words have the highest probability within that class.
```
C = 1
N = 10
print("Class:",class_names[C])
topN = (-model.feature_log_prob_[C]).argsort()[0:N]
words = [vocabulary[w] for w in topN]
probs = tlc.np.exp(model.feature_log_prob_[C,topN])
tlc.pyplot.bar(height=probs,
x=[x for x in range(len(words))])
tlc.pyplot.xticks(range(len(words)),words, rotation=45)
tlc.pyplot.show()
```
We can then choose a word and see the probability of each class for that word. The final result is a combination of this and the prior.
Since the prior is heavily in favour of one class, the word probabilities need to be strongly in favour of another to change the result.
```
W = 9
print("Word:",vocabulary[topN[W]])
w_probs = tlc.np.exp([model.feature_log_prob_[i,topN[W]] for i in range(len(model.class_log_prior_))])
tlc.pyplot.bar(height=w_probs,
x=[x for x in range(len(model.class_log_prior_))])
tlc.pyplot.show()
```
### Evaluating the model's performance
Having created the model we now use it to generate predictions for the test dataset.
We have two ways of assessing the performance of the model. The first is to give an accuracy score, quite simply the percentage of predictions it has got right.
```
y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)
print("Prediction Accuracy:",int(tlc.accuracy_score(y_test, y_pred)*1000)/10,'%')
```
A more granular method is to view what is quite rightly called a **Confusion Matrix**.
A confusion matrix is a grid mapping 'correct' answers to predictions. The rows represent the class assigned in the test data and the columns represent predictions. The top left to bottom right diagonal shows us how many of each class have been predicted correctly. All of the other numbers count incorrect predictions. The number in row 2, column 3, will show how many documents of whatever the 2nd class represents ("Medicine") have been misclassified as the 3rd class ("Rail").
In this example we see that "Rail" documents tend to be classified correctly, but a lot of the other types are being also misclassified as "Rail".
We can see from the numbers that this dataset is highly imbalanced, so most of the records are "Rail". This may be responsible for the bias towards that class.
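A confusion matrix can be built by hand in a few lines; the toy labels below (0=General, 1=Medicine, 2=Rail) are made up to show how the grid is read:

```python
classes = ["General", "Medicine", "Rail"]
y_true = [2, 2, 2, 2, 1, 1, 1, 0]  # class assigned in the test data
y_pred = [2, 2, 2, 2, 1, 2, 1, 2]  # model predictions

# rows = true class, columns = predicted class
matrix = [[0] * len(classes) for _ in classes]
for t, p in zip(y_true, y_pred):
    matrix[t][p] += 1

for name, row in zip(classes, matrix):
    print(name, row)
# General [0, 0, 1]   <- one "General" doc misclassified as "Rail"
# Medicine [0, 2, 1]  <- matrix[1][2]: one "Medicine" doc predicted as "Rail"
# Rail [0, 0, 4]      <- the diagonal counts correct predictions
```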
```
fig, ax = tlc.draw_confusion(y_test, y_pred, model, class_names)
tlc.pyplot.show()
```
### Exploring individual predictions
Similar to the topic model output, the prediction also comes with probabilities. We can visualise these probabilities for a selection of predictions.
```
y_true_pred = [x for x in zip(range(len(y_test)),y_test, y_pred, y_prob)]
incorrect_predictions = [(y[0],y[1],y[2],y[3]) for i,y in enumerate(y_true_pred) if y[1] != y[2]]
incorrect_unsure = [x for x in incorrect_predictions if max(x[3]) < 0.8]
prediction_sample = tlc.random.sample(range(len(incorrect_unsure)), min(5, len(incorrect_unsure)))
fig, ax = tlc.pyplot.subplots(min(5,len(prediction_sample)),1)
fig.set_size_inches(5,7)
for i, sample_idx in enumerate(prediction_sample):
prediction_row = incorrect_unsure[sample_idx]
data_idx = prediction_row[0]
true_class = prediction_row[1]
predicted = prediction_row[2]
tp = prediction_row[3]
class_probs = [tp[0],tp[1],tp[2],tp[3],tp[4]]
ax[i].set_ylim([0,1.0])
ax[i].text(.4,0.8, str(data_idx) + ": True:" + class_names[true_class] + ", Predicted:" + class_names[predicted],
horizontalalignment='center',
transform=ax[i].transAxes)
if i < 4:
ax[i].set_xticks([])
else:
ax[i].set_xticks(ticks=[0,1,2,3,4])
ax[i].set_xticklabels(labels=['General','Medicine','Rail', 'Safety', 'Pension'])
ax[i].bar(x = [0,1,2,3,4], height = class_probs)
tlc.pyplot.show()
```
If we look at probabilities in aggregate we see that generally high confidence predictions match the correct class, and lower confidence ones are more likely to be incorrect. This is desirable behaviour because it gives us more trust in the high confidence predictions.
```
prob_match_sum = tlc.np.zeros((11,4))
for i in range(11):
prob_match_sum[i,0] = i/10
for row in y_true_pred:
max_prob = int(max(row[3]) * 10)
prob_match_sum[max_prob,int(row[1] == row[2])+1] += 1
prob_match_sum[max_prob,3] += 1
prob_match_sum = tlc.pd.DataFrame(prob_match_sum, columns=["Probability", "NoMatch", "Match", "Total"])
ax = prob_match_sum.plot(x="Probability", y="Total", kind="bar", color="blue")
ax.legend(['Disagree', 'Match'])
prob_match_sum.plot(x="Probability", y="Match", kind="bar", ax=ax, color="orange")
tlc.pyplot.show()
```
We will now use another Machine Learning algorithm, called Nearest Neighbours, to help us classify some new training data.
This code prepares the data for the next section. Firstly the data is indexed to find similar documents quickly (KDTree), then we sort the predictions by their prediction probability (most confident first). Finally we create a sample of most and least confident predictions.
```
kdt = tlc.KDTree(training_features, leaf_size=30, metric='minkowski', p=2)
tfidf_words = D4ML.vectorizer.get_feature_names()
max_sorted_asc = sorted(y_true_pred, key=lambda max_x : max(max_x[3]), reverse=True)
sample_ids = (tlc.np.random.beta(0.55, 0.4, 50) * len(max_sorted_asc)).astype('int')
sample_ids = list(set(sample_ids))
#tlc.random.shuffle(sample_ids)
s = 0
```
# Reviewing and correcting predictions
## Form setup code
```
output = tlc.widgets.Output()
neighbour_list = []
neighbour_idx = 0
NUMBER_OF_NEIGHBOURS = 4
human_classification = {}
def dropdown_update(change):
check_row = change.new
this_idx = max_sorted_asc[check_row][0]
this_words = set([tfidf_words[w] for w in tlc.np.nonzero(X_test[this_idx])[1]])
true_class_drop.value = y_test[this_idx]
this_file_name = D4ML.idx_to_file[x_test_ids[this_idx]]
file_name_text.set_state({'value':this_file_name})
file_name_text.send_state('value')
predicted_class_text.set_state({'value': class_names[y_pred[this_idx]]})
predicted_class_text.send_state('value')
prediction_prob = tlc.np.max(y_prob[this_idx])
doc_pred_prob.set_state({'value': prediction_prob})
doc_pred_prob.send_state('value')
doc_words.set_state({'value': ",".join(list(this_words))})
doc_words.send_state('value')
nn_dist, nn_ind = kdt.query(X_test[this_idx], k=NUMBER_OF_NEIGHBOURS)
doc_contents.set_state({'value': D4ML.file_contents[this_file_name]})
doc_contents.send_state('value')
neighbour_count = 0
global neighbour_list
neighbour_list = []
for i in range(len(nn_dist[0])):
dist = nn_dist[0][i]
idx = nn_ind[0][i]
file_name = D4ML.idx_to_file[idx]
if idx == x_test_ids[this_idx]:
continue
pred = model.predict(training_features[idx])
neighbour_distance.set_state({'value':dist})
neighbour_distance.send_state('value')
true_class = -1
words = set([tfidf_words[w] for w in tlc.np.nonzero(training_features[idx])[1]])
if len(words) == 0:
continue
if file_name in class_probs:
true_class = tlc.np.argmax(class_probs[file_list[idx]])
overlap_words = ",".join(list(words.intersection(this_words)))
neighbour_list.append([idx, dist, file_name, overlap_words, D4ML.file_contents[file_name],
true_class, class_names[pred[0]]])
neighbour_count += 1
accordion.set_title(3, "Nearest Neighbours: " + str(len(neighbour_list)))
with output:
output.clear_output()
display(accordion)
sample_dropdown = tlc.widgets.Dropdown(options=sample_ids, value=None, description='Choose a document',
layout={'width':'100px'}, style={'description_width': 'initial'})
sample_dropdown.observe(dropdown_update, 'value')
file_name_text = tlc.widgets.Text(value=None, description="File Name")
class_options = [(v,k) for k,v in class_names.items()]
true_class_drop = tlc.widgets.Dropdown(options=class_options,
value = None, description='True Class')
update_true = tlc.widgets.Button(description="Update")
def on_true_pressed(b):
human_classification[file_name_text.value] = true_class_drop.value
update_true.on_click(on_true_pressed)
#true_class_drop.observe(on_true_changed,'value')
predicted_class_text = tlc.widgets.Text(value=None, description='Predicted class', style={'description_width': 'initial'})
doc_pred_prob = tlc.widgets.FloatText(value=None, description='Probability')
details = tlc.widgets.VBox([file_name_text,
tlc.widgets.HBox([true_class_drop, predicted_class_text, update_true]),
doc_pred_prob])
neighbour_overlap = tlc.widgets.Text(value="",
layout={'height': '100%', 'width': '700px'}, disabled=False)
neighbour_distance = tlc.widgets.FloatText(value=None, description='Distance')
neighbour_true_class = tlc.widgets.Dropdown(value=None, description="True", options=class_options)
neighbour_prediction = tlc.widgets.Text(value=None, description="Prediction")
neighbour_file = tlc.widgets.Text(value=None)
neighbour_content = tlc.widgets.Textarea(value=None, layout={'height': '150px', 'width': '700px'})
def on_next_clicked(b):
global neighbour_idx
global neighbour_list
if len(neighbour_list) == 0:
return
neighbour_idx += 1
if neighbour_idx >= len(neighbour_list):
neighbour_idx = 0
neighbour_data = neighbour_list[neighbour_idx]
neighbour_true_class.value = neighbour_data[5]
neighbour_true_class.send_state('value')
neighbour_prediction.set_state({'value':neighbour_data[6]})
neighbour_prediction.send_state('value')
neighbour_distance.set_state({'value':neighbour_data[1]})
neighbour_distance.send_state('value')
neighbour_file.set_state({'value':neighbour_data[2]})
neighbour_file.send_state('value')
neighbour_overlap.set_state({'value':neighbour_data[3]})
neighbour_overlap.send_state('value')
neighbour_content.set_state({'value':neighbour_data[4]})
neighbour_content.send_state('value')
next_neighbour = tlc.widgets.Button(description="Next", layout={'width':'100px'})
next_neighbour.on_click(on_next_clicked)
neighbour_true_update = tlc.widgets.Button(description="Update")
def on_neighbour_update_click(b):
human_classification[neighbour_file.value] = neighbour_true_class.value
neighbour_true_update.on_click(on_neighbour_update_click)
neighbour_details = tlc.widgets.VBox([next_neighbour, neighbour_file, neighbour_distance,
tlc.widgets.HBox([neighbour_true_class, neighbour_true_update]),
neighbour_prediction])
doc_words = tlc.widgets.Text(value=None, layout={'height': '100%', 'width': '700px'}, disabled=False)
#overlap.observe(sample_dropdown, on_button_clicked)
doc_contents = tlc.widgets.Textarea(value=None,
layout={'height': '200px', 'width': '700px'})
sub_accordion = tlc.widgets.Accordion(children=[neighbour_details, neighbour_overlap, neighbour_content])
accordion = tlc.widgets.Accordion(children=[details,doc_words, doc_contents, sub_accordion])
accordion.set_title(0,"Details")
accordion.set_title(1,"Document Features")
accordion.set_title(2,"Document Contents")
accordion.set_title(3,"Nearest Neighbours")
sub_accordion.set_title(0,"Neighbour details")
sub_accordion.set_title(1,"Word overlap")
sub_accordion.set_title(2,"Neighbour content")
```
## Refreshing the training data
Run the line of code below and then use the dropdown menu to select a sample document. The ids are meaningless, so pick any. This opens a form for viewing the classification results for that document. The form uses an 'accordion' layout: click any title to open that part of the form. You can also change the 'true' classification and press update to save the new value. As well as checking the selected document's classification, the form shows the most similar documents (found via nearest neighbours) so those can be classified at the same time.
The output from this exercise could be used in future to train a new classifier.
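One way those corrections could be reused later, sketched minimally (the helper and file names below are hypothetical, assuming `human_classification` maps file names to corrected labels as in the form above):

```python
import csv

# Hypothetical helper: persist the human corrections so a future
# training run can merge them back into the labelled set.
def save_labels(human_classification, path):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["file_name", "label"])
        for file_name, label in sorted(human_classification.items()):
            writer.writerow([file_name, label])

# Example with made-up corrections:
save_labels({"doc_017.txt": "sport", "doc_042.txt": "politics"},
            "corrected_labels.csv")
```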
```
display(sample_dropdown, output)
human_classification
```
```
import re
import json
import pandas as pd
import numpy as np
from collections import deque
```
## Process dataset
```
base_folder = "../movies-dataset/"
movies_metadata_fn = "movies_metadata.csv"
credits_fn = "credits.csv"
links_fn = "links.csv"
```
## Process movies_metadata data structure/schema
```
metadata = pd.read_csv(base_folder + movies_metadata_fn)
metadata.head()
```
## Cast id to int64 and drop any NaN values
```
metadata.id = pd.to_numeric(metadata.id, downcast='signed', errors='coerce')
metadata = metadata[metadata['id'].notna()]
list(metadata.columns.values)
def CustomParser(data):
    obj = json.loads(data)
    return obj
```
We probably only need `id` and `title` from this dataframe.
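As a quick illustration of that trimming step on a toy frame (the rows here are made up; only the column selection matters):

```python
import pandas as pd

# Toy stand-in for the metadata frame loaded above
metadata_small = pd.DataFrame({
    "id": [862, 8844],
    "title": ["Toy Story", "Jumanji"],
    "budget": [30000000, 65000000],
})

# Keep just the columns needed for the later join with credits
id_title = metadata_small[["id", "title"]]
print(id_title)
```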
## Process credits data structure/schema
```
credits = pd.read_csv(base_folder + credits_fn)
# credits = pd.read_csv(base_folder + credits_fn, converters={'cast':CustomParser}, header=0)
# Cast id to int
credits.id = pd.to_numeric(credits.id, downcast='signed', errors='coerce')
credits.head()
# cast id to int64 for later join
metadata['id'] = metadata['id'].astype(np.int64)
credits['id'] = credits['id'].astype(np.int64)
metadata.dtypes
credits.dtypes
metadata.head(3)
credits.head(3)
```
## Let's join the two dataset based on movie id
We start with one example movie `Toy Story` with id = 862 in metadata dataset.
```
merged = pd.merge(metadata, credits, on='id')
merged.head(3)
toy_story_id = 862
merged.loc[merged['id'] == toy_story_id]
```
## Examine the crew/cast JSON schema for Toy Story
```
cast = merged.loc[merged['id'] == toy_story_id].cast
crew = merged.loc[merged['id'] == toy_story_id].crew
cast
```
## Find all movies Tom Hanks has acted in
```
def has_played(actor_name, cast_data):
    for cast in cast_data:
        name = cast['name']
        actor_id = cast['id']
        cast_id = cast['cast_id']
        credit_id = cast['credit_id']
        if actor_name.lower() == name.lower():
            print("name: {}, id: {}, cast_id: {}, credit_id: {}".format(name, actor_id, cast_id, credit_id))
            return True
    return False
```
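A standalone sanity check of the matcher (a condensed copy of `has_played` without the print, applied to a hand-made cast list whose ids are illustrative):

```python
# Condensed copy of has_played above, for a self-contained check
def played(actor_name, cast_data):
    return any(actor_name.lower() == cast["name"].lower() for cast in cast_data)

sample_cast = [
    {"name": "Tom Hanks", "id": 31, "cast_id": 14, "credit_id": "abc"},
    {"name": "Tim Allen", "id": 12898, "cast_id": 15, "credit_id": "def"},
]

print(played("tom hanks", sample_cast))   # matching is case-insensitive
print(played("Tom Cruise", sample_cast))
```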
## Setup data structure
```
# a map from movie id to a list of actor id's
movie_actor_adj_list = {}
# a map from actor id to a list of movie id's
actor_movie_adj_list = {}
# a map from movies id to their title
movies_map = {}
# a map from actors id to their name
actors_map = {}
cnt, errors = 0, 0
failed_movies = {}
for index, row in merged.iterrows():
    cnt += 1
    movie_id, movie_title = row['id'], row['title']
    if movie_id not in movies_map:
        movies_map[movie_id] = movie_title
    dirty_json = row['cast']
    try:
        regex_replace = [(r"([ \{,:\[])(u)?'([^']+)'", r'\1"\3"'), (r" None", r' null')]
        for r, s in regex_replace:
            dirty_json = re.sub(r, s, dirty_json)
        cast_data = json.loads(dirty_json)
        # if has_played('Tom Hanks', cast_data):
        #     print("Movie id: {}, title: {}".format(movie_id, movie_title))
        for cast in cast_data:
            actor_name = cast['name']
            actor_id = cast['id']
            if actor_id not in actors_map:
                actors_map[actor_id] = actor_name
            # build movie-actor adj list
            if movie_id not in movie_actor_adj_list:
                movie_actor_adj_list[movie_id] = [actor_id]
            else:
                movie_actor_adj_list[movie_id].append(actor_id)
            # build actor-movie adj list
            if actor_id not in actor_movie_adj_list:
                actor_movie_adj_list[actor_id] = [movie_id]
            else:
                actor_movie_adj_list[actor_id].append(movie_id)
    except json.JSONDecodeError as err:
        # print("JSONDecodeError: {}, Movie id: {}, title: {}".format(err, movie_id, movie_title))
        failed_movies[movie_id] = True
        errors += 1
print("Parsed credits: {}, errors: {}".format(cnt, errors))
movie_actor_adj_list[862]
inv_actors_map = {v: k for k, v in actors_map.items()}
inv_movies_map = {v: k for k, v in movies_map.items()}
kevin_id = inv_actors_map['Kevin Bacon']
print(kevin_id)
DEBUG = False
q = deque()
q.append(kevin_id)
bacon_degrees = {kevin_id: 0}
visited = {}
degree = 1
while q:
    u = q.popleft()
    if DEBUG:
        print("u: {}".format(u))
        # print(q)
    if u not in visited:
        visited[u] = True
        if DEBUG:
            print("degree(u): {}".format(bacon_degrees[u]))
        if bacon_degrees[u] % 2 == 0:
            # actor type node
            neighbors = actor_movie_adj_list[u]
            if DEBUG:
                print("actor type, neighbors: {}".format(neighbors))
        else:
            # movie type node
            neighbors = movie_actor_adj_list[u]
            if DEBUG:
                print("movie type, neighbors: {}".format(neighbors))
        for v in neighbors:
            if v not in visited:
                q.append(v)
                if v not in bacon_degrees:
                    bacon_degrees[v] = bacon_degrees[u] + 1
bacon_degrees[kevin_id]
actors_map[2224]
movies_map[9413]
actor_id = inv_actors_map['Tom Hanks']
bacon_degrees[actor_id]
actor_id = inv_actors_map['Tom Cruise']
bacon_degrees[actor_id]
movie_id = inv_movies_map['Apollo 13']
failed_movies[movie_id]
actor_id = inv_actors_map['Tom Cruise']
tom_cruise_movies = actor_movie_adj_list[actor_id]
actor_id = inv_actors_map['Kevin Bacon']
kevin_bacon_movies = actor_movie_adj_list[actor_id]
set(tom_cruise_movies).intersection(set(kevin_bacon_movies))
movies_map[881]
```
```
import json
import os
import time
import requests
import psycopg2
database_dict = {
"database": os.environ.get("POSTGRES_DB"),
"user": os.environ.get("POSTGRES_USERNAME"),
"password": os.environ.get("POSTGRES_PASSWORD"),
"host": os.environ.get("POSTGRES_WRITER"),
"port": os.environ.get("POSTGRES_PORT"),
}
engine = psycopg2.connect(**database_dict)
cur = engine.cursor()
def get_mainnet_daos(cur, engine):
    execute_string = "SELECT * FROM daohaus.dao"
    cur.execute(execute_string)
    records = cur.fetchall()
    js = []
    for x in records:
        if x[5] == 1:
            curl = []
            curl.append(x[0])
            curl.append(x[1])
            js.append(curl)
    return js

def get_all_members(cur, engine):
    execute_string = "SELECT * FROM daohaus.member"
    cur.execute(execute_string)
    records = cur.fetchall()
    js = []
    for x in records:
        curl = []
        curl.append(x[1])
        curl.append(x[6])
        js.append(curl)
    return js
main_net_daos = get_mainnet_daos(cur, engine)
all_members = get_all_members(cur, engine)
dict_of_main_net_daos = {}
for dao in main_net_daos:
    dict_of_main_net_daos[dao[0]] = dao[1]
main_net_members = []
for member in all_members:
    if member[0] in dict_of_main_net_daos:
        curl = []
        curl.append(dict_of_main_net_daos[member[0]])
        curl.append(member[1])
        main_net_members.append(curl)
main_net_members
API_KEY = "ASKMW73735RNXSZCUDNZPFU1W2URIWYXRE"
addresses = main_net_daos
all_tokens = set()
list_of_sets_tokens_in_each_address = []
counter = 0
for addy in addresses:
    address = str(addy[1])
    cur_add = address
    url = f"https://api.etherscan.io/api?module=account&action=tokentx&address={address}&page=1&offset=100&startblock=0&endblock=27025780&sort=asc&apikey={API_KEY}"
    response = requests.get(url)
    counter += 1
    if response.status_code != 204 and response.headers["content-type"].strip().startswith("application/json"):
        test_data = response.json()
    else:
        break
    cd = dict()
    current_address_set_of_tokens = set()
    for transaction in test_data['result']:
        all_tokens.add(transaction['contractAddress'])
        current_address_set_of_tokens.add(transaction['contractAddress'])
    cd[str(cur_add)] = current_address_set_of_tokens
    list_of_sets_tokens_in_each_address.append(cd)
    if counter % 20 == 0:
        print("SLEPT")
        time.sleep(1)
all_tokens_with_spot_prices = {}
for token in all_tokens:
    all_tokens_with_spot_prices[str(token)] = 0
did_not_work_tokens = []
counter = 0
for token in all_tokens:
    # Retry up to three times when Etherscan returns a non-JSON response
    fl = True
    for attempt in range(3):
        response = requests.get(f"https://api.etherscan.io/api?module=token&action=tokeninfo&contractaddress={token}&apikey={API_KEY}")
        if response.status_code != 204 and response.headers["content-type"].strip().startswith("application/json"):
            test_data = response.json()
            break
        if attempt < 2:
            time.sleep(1)
    else:
        # no break: every attempt failed
        did_not_work_tokens.append(token)
        fl = False
    if fl and test_data['message'] == "OK":
        if len(test_data['result']) > 0:
            cd = {"tokenPriceUSD": test_data['result'][0]['tokenPriceUSD'],
                  "tokenName": test_data['result'][0]['tokenName'],
                  "divisor": test_data['result'][0]['divisor'],
                  "symbol": test_data['result'][0]['symbol']}
            print(test_data['result'][0]['tokenPriceUSD'], test_data['result'][0]['tokenName'], test_data['result'][0]['divisor'])
            all_tokens_with_spot_prices[str(token)] = cd
        else:
            did_not_work_tokens.append(token)
    counter += 1
    if counter % 2 == 1:
        time.sleep(1.5)
js_str = json.dumps(all_tokens_with_spot_prices)
js_fin = json.loads(js_str)
with open('./list_of_all_mainnet_tokens_with_spot_prices_for_daos.json', 'w') as f:
    json.dump(js_fin, f)
md_of_dao_data_with_balances_of_each_token = {}
counter = 0
for member_dict in list_of_sets_tokens_in_each_address:
    list_of_all_tokens_with_balances_for_current_member = []
    for address in member_dict:
        for token in member_dict[address]:
            if counter % 20 == 0:
                time.sleep(1)
            cd = {}
            response = requests.get(f"https://api.etherscan.io/api?module=account&action=tokenbalance&contractaddress={token}&address={address}&tag=latest&apikey={API_KEY}")
            counter += 1
            if not (response.status_code != 204 and response.headers["content-type"].strip().startswith("application/json")):
                response = requests.get(f"https://api.etherscan.io/api?module=account&action=tokenbalance&contractaddress={token}&address={address}&tag=latest&apikey={API_KEY}")
                counter += 1
                test_data = response.json()
            else:
                test_data = response.json()
            cd["contract_address"] = token
            cd["balance"] = test_data['result']
            print(address, token, test_data["result"])
            list_of_all_tokens_with_balances_for_current_member.append(cd)
        md_of_dao_data_with_balances_of_each_token[address] = list_of_all_tokens_with_balances_for_current_member
all_wallet_totals = {}
final_data_set = {}
for address in md_of_dao_data_with_balances_of_each_token:
    total = 0
    for token in md_of_dao_data_with_balances_of_each_token[address]:
        if len(md_of_dao_data_with_balances_of_each_token[address]) > 0:
            if type(all_tokens_with_spot_prices[token['contract_address']]) is dict:
                price = float(all_tokens_with_spot_prices[token['contract_address']]["tokenPriceUSD"])
                divisor = int(all_tokens_with_spot_prices[token['contract_address']]["divisor"])
                balance = int(token['balance'])
                divided_balance = balance / (10**divisor)
                value = divided_balance * price
                total += value
    all_wallet_totals[address] = total
ml_of_all_addresses_and_token_info = []
for address in md_of_dao_data_with_balances_of_each_token:
    address_final_dict = {}
    address_final_dict["address"] = address
    enriched_list_of_tokens = []
    for token in md_of_dao_data_with_balances_of_each_token[address]:
        if type(all_tokens_with_spot_prices[token['contract_address']]) is dict:
            cd = {}
            cd['contract_address'] = token['contract_address']
            cd['token_name'] = all_tokens_with_spot_prices[token['contract_address']]["tokenName"]
            cd['divisor'] = all_tokens_with_spot_prices[token['contract_address']]["divisor"]
            cd['symbol'] = all_tokens_with_spot_prices[token['contract_address']]["symbol"]
            cd['balance'] = token['balance']
            enriched_list_of_tokens.append(cd)
    address_final_dict["token_info"] = enriched_list_of_tokens
    address_final_dict["wallet_total"] = all_wallet_totals[address]
    address_final_dict["members"] = []
    ml_of_all_addresses_and_token_info.append(address_final_dict)
for dao in ml_of_all_addresses_and_token_info:
    dao['members'] = []
for member in main_net_members:
    for dao in ml_of_all_addresses_and_token_info:
        if dao['address'] == member[0]:
            dao['members'].append(member[1])
js_str = json.dumps(ml_of_all_addresses_and_token_info)
js_fin = json.loads(js_str)
with open('./all_mainnet_dao_data_with_token_balances_and_wallet_totals.json', 'w') as f:
    json.dump(js_fin, f)
```
```
import sympy as sp
import numpy as np
x = sp.symbols('x')
p = sp.Function('p')
l = sp.Function('l')
poly = sp.Function('poly')
p3 = sp.Function('p3')
p4 = sp.Function('p4')
```
# Introduction
Last time we used the Lagrange basis to interpolate a polynomial. However, the Lagrange form is not efficient to update when a new data point is added, so we look at an iterative approach.
Given points $\{(z_i, f_i) \}_{i=0}^{n-1}$ with the $z_i$ distinct, suppose $p_{n-1} \in \mathbb{C}[z]_{n-1}$ satisfies $p_{n-1}(z_i) = f_i$. <br> We add a point $(z_n, f_n)$ and seek a polynomial $p_n \in \mathbb{C}[z]_{n}$ which interpolates $\{(z_i, f_i) \}_{i=0}^{n}$.
We assume $p_n(z)$ has the form
\begin{equation}
p_n(z) = p_{n-1}(z) + C\prod_{i=0}^{n-1}(z - z_i)
\end{equation}
so that the second term vanishes at $z = z_0,...,z_{n-1}$ and $p_n(z_i) = p_{n-1}(z_i), i = 0,...,n-1$. We also want $p_n(z_n) = f_n$ so we have
\begin{equation}
f_n = p_{n-1}(z_n) + C\prod_{i=0}^{n-1}(z_n - z_i) \Rightarrow C = \frac{f_n - p_{n-1}(z_n)}{\prod_{i=0}^{n-1}(z_n - z_i)}
\end{equation}
Thus we may perform interpolation iteratively.
**Example:** Last time we have
\begin{equation}
(z_0, f_0) = (-1,-3), \quad
(z_1, f_1) = (0,-1), \quad
(z_2, f_2) = (2,4), \quad
(z_3, f_3) = (5,1)
\end{equation}
and
\begin{equation}
p_3(z) = \frac{-13}{90}z^3 + \frac{14}{45}z^2 + \frac{221}{90}z - 1
\end{equation}
```
z0 = -1; f0 = -3; z1 = 0; f1 = -1; z2 = 2; f2 = 4; z3 = 5; f3 = 1; z4 = 1; f4 = 1
p3 = -13*x**3/90 + 14*x**2/45 + 221*x/90 - 1
```
We add a point $(z_4,f_4) = (1,1)$ and obtain $p_4(z)$
```
z4 = 1; f4 = 1
C = (f4 - p3.subs(x,z4))/((z4-z0)*(z4-z1)*(z4-z2)*(z4-z3))
C
p4 = p3 + C*(x-z0)*(x-z1)*(x-z2)*(x-z3)
sp.expand(p4)
```
**Remark:** the constant $C$ is usually written as $f[z_0,z_1,z_2,z_3,z_4]$ and is called a divided difference. Moreover, by iterating we have
$$p_n(z) = \sum_{i=0}^n f[z_0,\ldots,z_i] \prod_{j=0}^{i-1} (z - z_j)$$
# Newton Tableau
We look at efficient ways to compute $f[z_0,...,z_n]$, iteratively from $f[z_0,...,z_{n-1}]$ and $f[z_1,...,z_n]$. <br>
We may first construct $p_{n-1}$ and $q_{n-1}$ before constructing $p_n$ itself, where
\begin{gather}
p_{n-1}(z_i) = f_i \quad i = 0,...,n-1\\
q_{n-1}(z_i) = f_i \quad i = 1,...,n
\end{gather}
**Claim:** The following polynomial interpolates $\{(z_i,f_i)\}_{i=0}^n$:
\begin{equation}
p_n(z) = \frac{(z - z_n)p_{n-1}(z) - (z - z_0)q_{n-1}(z)}{z_0 - z_n}
\end{equation}
Since the interpolating polynomial is unique, comparing the coefficient of $z^n$ on both sides gives
$$f[z_0,...,z_{n}] = \frac{f[z_0,...,z_{n-1}]-f[z_1,...,z_{n}]}{z_0 - z_n}$$
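The same recurrence in plain Python (a minimal sketch, independent of the sympy implementation below; it builds the tableau one column at a time):

```python
def divided_differences(zs, fs):
    """Return [f[z0], f[z0,z1], ..., f[z0,...,zn]] via the tableau recurrence."""
    column = list(fs)
    coeffs = [column[0]]
    for k in range(1, len(zs)):
        # column[j] currently holds f[z_j, ..., z_{j+k-1}]
        column = [(column[j + 1] - column[j]) / (zs[j + k] - zs[j])
                  for j in range(len(column) - 1)]
        coeffs.append(column[0])
    return coeffs

def newton_eval(coeffs, zs, z):
    """Evaluate the Newton-form interpolant at z."""
    total, prod = 0.0, 1.0
    for i, c in enumerate(coeffs):
        total += c * prod
        prod *= (z - zs[i])
    return total

zs = [-1.0, 0.0, 2.0, 5.0]
fs = [-3.0, -1.0, 4.0, 1.0]
coeffs = divided_differences(zs, fs)
```

For these points the last entry, $f[z_0,\ldots,z_3]$, is the leading coefficient $-13/90$ of the cubic $p_3$ from the previous section.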
```
def product(xs, key, i):
    # key: 'forward' or 'backward'
    n = len(xs) - 1
    l = 1
    for j in range(i):
        if key == 'forward':
            l *= (x - xs[j])
        else:
            l *= (x - xs[n-j])
    return l

def newton(xs, ys, key):
    # key: 'forward' or 'backward'
    n = len(xs) - 1
    print(ys)
    old_column = ys
    if key == 'forward':
        coeff = [ys[0]]       # use the parameter, not the global fs
    elif key == 'backward':
        coeff = [ys[-1]]
    else:
        return 'error'
    for i in range(1, n+1):  # column index
        new_column = [(old_column[j+1] - old_column[j])/(xs[j+i] - xs[j]) for j in range(n-i+1)]
        print(new_column)
        if key == 'forward':
            coeff.append(new_column[0])
        else:
            coeff.append(new_column[-1])
        old_column = new_column
    poly = 0
    for i in range(n+1):
        poly += coeff[i] * product(xs, key, i)
    return poly
zs = [1, 4/3, 5/3, 2]; fs = [np.sin(z) for z in zs]  # avoid shadowing the sympy symbol x
p = newton(zs,fs,'backward')
sp.simplify(p)
```
```
import ml
reload(ml)
from ml import *
import timeit
import scipy
import operator
import collections
import numpy as np
import pandas as pd
from scipy import stats
import seaborn as sns
from collections import Counter
import matplotlib.pyplot as plt
from __future__ import division
from matplotlib.colors import ListedColormap
import statsmodels.api as sm
from sklearn import metrics
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB as GNB
from sklearn.ensemble import AdaBoostClassifier as ADB
from sklearn.neural_network import MLPClassifier as MLP
from sklearn.tree import DecisionTreeClassifier as CART
from sklearn.ensemble import RandomForestClassifier as RF
from sklearn.neighbors import KNeighborsClassifier as KNN
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA
from sklearn.model_selection import train_test_split, KFold
from sklearn.metrics import classification_report
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix
from sklearn.manifold import TSNE
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
from collections import OrderedDict
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_colwidth', -1)
pd.set_option('display.float_format', lambda x: '%.3f' % x)
sns.set_style('whitegrid')
plt.style.use('seaborn-whitegrid')
%matplotlib inline
__author__ = 'HK Dambanemuya'
__version__ = 'Python 2'
'''
Analysis originally performed in Python 2 (deprecated)
Seaborn, Statsmodel, and * imports broken in Python 3
'''
# Lender Experience
# Borrower Experience
borrower_features = ["DebtToIncomeRatio", "BorrowerAge", "BorrowerSuccessRate", "AvailableBankcardCredit",
"BankDraftFeeAnnualRate", "BorrowerMaximumRate", "CreditGrade",
"CreditScoreRangeLower", "CreditScoreRangeUpper", "DebtToIncomeRatio", "EffectiveYield",
"IsBorrowerHomeowner", "OnTimeProsperPayments", "ProsperPaymentsLessThanOneMonthLate",
"ProsperPaymentsOneMonthPlusLate", "ProsperScore", "TotalInquiries", "TotalProsperLoans",
"TotalProsperPaymentsBilled", "TradesOpenedLast6Months", ]
lender_features = ["NoLenders", "MedianLenderAge", "MedianLenderSuccessRate"]
loan_features = ["MedianEstimatedLoss", "MedianEstimatedReturn", "MedianLenderRate", "MedianLenderYield",
"MedianMonthlyLoanPayment", "TotalMonthlyLoanPayment",
"MedianTerm", "MedianAgeInMonths", "TotalAmountBorrowed", "MedianBorrowerRate", ]
listing_features = ["ListingKey", "Category", "AmountRequested", "BidCount",
"BidMaximumRate",
"ProsperPrincipalBorrowed", "ProsperPrincipalOutstanding",
"TimeToFirstBid", "AvgInterBidTime", "TimeToCompletion",
"Gini", "DescriptionLength", "FundedOrNot", "RepaidOrNot"]
```
## Bid Data
```
bid_data = pd.read_csv('../Data/bid_notick.txt', sep="|")
bid_data = bid_data[["Bid_Key", "Amount","CreationDate","ListingKey","ListingStatus"]]
bid_data= bid_data.rename(index=str, columns={"Bid_Key": "BidKey", "Amount": "BidAmount", "CreationDate": "BidCreationDate", "ListingKey": "ListingKey", "ListingStatus": "ListingStatus"})
bid_data = bid_data.loc[(bid_data["ListingStatus"]=="Cancelled") | (bid_data["ListingStatus"]=="Expired") | (bid_data["ListingStatus"]=="Withdrawn") | (bid_data["ListingStatus"]=="Completed")]
bid_data = bid_data.loc[bid_data["BidAmount"]>0]
bid_data["FundedOrNot"] = bid_data["ListingStatus"]=="Completed"
bid_data.sample(10)
```
## Listing Data
```
listing_data = pd.read_csv('../Data/listing.txt', sep="|")
listing_data = listing_data[["Lst_Key", "ActiveProsperLoans", "BidCount", "BidMaximumRate", "AmountRequested","CreationDate",
"BorrowerRate", "BorrowerMaximumRate", "EffectiveYield", "BorrowerState","CreditGrade",
"DebtToIncomeRatio", "EstimatedReturn", "EstimatedLoss", "IsBorrowerHomeowner", "Category",
"LenderRate", "LenderYield", "TotalProsperLoans", "MonthlyLoanPayment", "OnTimeProsperPayments",
"ProsperScore"]]
listing_data = listing_data.rename(index=str, columns={"Lst_Key": "ListingKey", "AmountRequested": "AmountRequested", "CreationDate": "ListingStartDate"})
listing_data.sample(5)
```
## Loan Data
```
loan_data = pd.read_csv('../Data/loan.txt', sep="|")
loan_data = loan_data[["Status","ListingKey","CreationDate"]]
loan_data = loan_data.rename(index=str, columns={"Status": "LoanStatus", "ListingKey": "ListingKey", "CreationDate": "LoanCreationDate"})
loan_data = loan_data.loc[(loan_data["LoanStatus"]=="Paid") |
(loan_data["LoanStatus"]=="Defaulted (Bankruptcy)") |
(loan_data["LoanStatus"]=="Defaulted (Delinquency)") |
(loan_data["LoanStatus"]=="Defaulted (PaidInFull)") |
(loan_data["LoanStatus"]=="Defaulted (SettledInFull)")]
loan_data['RepaidOrNot'] = loan_data["LoanStatus"]=="Paid"
loan_data.sample(10)
```
## Merge Data
```
data = bid_data.merge(listing_data, on="ListingKey")
data = data.merge(loan_data, on="ListingKey", how="outer")
data = data[data.FundedOrNot == True]
del bid_data
del listing_data
del loan_data
data.sample(10)
print ("Dataset dimension: {0}".format(data.shape))
print ("\nDataset contains {0} features: {1}.".format(len(data.columns), data.columns))
print "\nTotal Listings: ", len(set(data.ListingKey))
print "\nTotal Bids: ", len(set(data.BidKey))
print ("\nListing Status:")
print Counter(data.ListingStatus)
print ("\nFunding Status:")
print Counter(data.FundedOrNot)
print ("\nPercentage Funded: ")
print (dict(Counter(data.FundedOrNot))[True] / len(data)) * 100
print ("\nRepayment Status:")
print Counter(data.loc[data['FundedOrNot']==True]['RepaidOrNot'])
print ("\nPercentage Repaid:")
print (dict(Counter(data.loc[data['FundedOrNot']==True]['RepaidOrNot']))[True] / len(data.loc[data['FundedOrNot']==True])) * 100
```
## Summary Statistics
```
data.describe()
```
## Correlation Matrix
```
corr = data.corr(method='pearson')
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True
plt.figure(figsize=(12,8))
sns.heatmap(corr,
xticklabels=corr.columns,
yticklabels=corr.columns,
cmap=sns.color_palette("coolwarm_r"),
mask = mask,
linewidths=.5,
annot=True)
plt.title("Variable Correlation Heatmap")
plt.show()
```
## Listing Status
```
print data.groupby('ListingStatus').size()
listing_labels = sorted(data.groupby('ListingStatus').groups.keys())
plt.bar(listing_labels,
data.groupby('ListingStatus').size())
plt.yscale('log')
plt.xticks(range(4), listing_labels, rotation='vertical')
plt.title('Listing Status')
plt.show()
data.hist(figsize=(12,12), layout=(5,4), log=True)
plt.grid()
plt.tight_layout()
plt.show()
funding_features = ['AmountRequested', 'BidCount', 'BidMaximumRate', 'BorrowerRate',
'BorrowerMaximumRate', 'EffectiveYield', 'DebtToIncomeRatio', 'IsBorrowerHomeowner', 'Category',
'OnTimeProsperPayments', 'ActiveProsperLoans', 'TotalProsperLoans', 'ProsperScore']
y_funding = data['FundedOrNot']
y_funding = np.array(y_funding)
funding_class_names = np.unique(y_funding)
print "Class Names: %s" % funding_class_names
print "\nFunding target labels:", Counter(data.FundedOrNot)
# data.loc[data['FundedOrNot']==True].fillna(False)
repayment_features = funding_features
y_repayment =data.loc[data['FundedOrNot']==True]['RepaidOrNot'].fillna(False)
y_repayment = np.array(y_repayment)
repayment_class_names = np.unique(y_repayment)
print "Class Names: %s" % repayment_class_names
print "Classification Features: %s" % funding_features
print "Repayment target labels:", Counter(data.loc[data['FundedOrNot']==True]['RepaidOrNot'])
names = ['RBF SVM', 'Naive Bayes', 'AdaBoost', 'Neural Net',
'Decision Tree', 'Random Forest', 'K-Nearest Neighbors', 'QDA']
print "\nClassifiers: %s" % names
# Construct Feature Space
funding_feature_space = data[funding_features].fillna(0)
X_funding = funding_feature_space.as_matrix().astype(np.float)
# This is Important!
scaler = StandardScaler()
X_funding = scaler.fit_transform(X_funding)
print "Feature space holds %d observations and %d features" % X_funding.shape
# # T-Stochastic Neighborhood Embedding
# start = timeit.default_timer()
# Y = TSNE(n_components=2).fit_transform(X)
# stop = timeit.default_timer()
# print "\nEmbedded Feature space holds %d observations and %d features" % Y.shape
# print "Feature Embedding completed in %s seconds" % (stop - start)
# Filter important features
#filtered_features = [u'customer_autoship_active_flag', u'total_autoships', u'autoship_active', u'autoship_cancel', u'pets', u'brands']
# print "\nFiltered Features:"
# print filtered_features
frank_summary(X_funding, y_funding, funding_features)
logit = sm.Logit(data['FundedOrNot'],
scaler.fit_transform(data[funding_features].fillna(0)))
result = logit.fit()
print result.summary()
# prob_plot(X_funding, y_funding) #Inspect probability distribution
# plot_accuracy(X, y_funding, names)
```
# Spark DataFrames Project Exercise
Let's get some quick practice with your new Spark DataFrame skills. You will be asked some basic questions about stock market data, in this case Walmart stock from the years 2012-2017. This exercise just asks a series of questions, unlike the future machine learning exercises, which will be a little looser and take the form of "Consulting Projects"; more on that later!
For now, just answer the questions and complete the tasks below.
#### Use the walmart_stock.csv file to Answer and complete the tasks below!
#### Load the Walmart Stock CSV File, have Spark infer the data types.
```
walmart = spark.read.csv("/FileStore/tables/walmart_stock.csv", inferSchema = True, header = True)
```
#### What are the column names?
```
walmart.columns
```
#### What does the Schema look like?
```
walmart.printSchema()
from pyspark.sql import functions as F
walmart = walmart.withColumn('Date', F.to_timestamp('Date'))
walmart.printSchema()
```
#### Print out the first 5 rows.
```
for row in walmart.head(5):
    print(row)
```
#### Use describe() to learn about the DataFrame.
```
walmart.describe().show()
```
## Bonus Question!
#### There are too many decimal places for mean and stddev in the describe() dataframe. Format the numbers to show just two decimal places. Pay careful attention to the datatypes that .describe() returns; we didn't cover this exact formatting, but we covered something very similar. [Check this link for a hint](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.Column.cast)
If you get stuck on this, don't worry, just view the solutions.
```
walmart.describe().printSchema()
from pyspark.sql.functions import format_number
result = walmart.describe()
result.select(result['summary'],
format_number(result['Open'].cast('float'),2).alias('Open'),
format_number(result['High'].cast('float'),2).alias('High'),
format_number(result['Low'].cast('float'),2).alias('Low'),
format_number(result['Close'].cast('float'),2).alias('Close'),
result['Volume'].cast('int').alias('Volume'),
format_number(result['Adj Close'].cast('float'),2).alias('Adj Close')).show()
```
#### Create a new dataframe with a column called HV Ratio that is the ratio of the High Price versus volume of stock traded for a day.
```
walmart.withColumn('HV Ratio', walmart["High"]/walmart["Volume"]).select('HV Ratio').show()
```
#### What day had the Peak High in Price?
```
walmart.orderBy('High').select('Date').show() # ascending
walmart.orderBy(walmart['High'].desc()).select('Date').show() # desc
```
#### What is the mean of the Close column?
```
from pyspark.sql.functions import mean
walmart.select(mean('Close')).show()
```
#### What is the max and min of the Volume column?
```
from pyspark.sql.functions import max, min
walmart.select(max("Volume"), min("Volume")).show()
```
#### How many days was the Close lower than 60 dollars?
```
walmart.filter('Close < 60').count()
walmart.filter(walmart['Close'] < 60).count()
from pyspark.sql.functions import count
result = walmart.filter(walmart['Close'] < 60)
result.select(count('Close')).show()
```
#### What percentage of the time was the High greater than 80 dollars ?
#### In other words, (Number of Days High>80)/(Total Days in the dataset)
```
100*walmart.filter(walmart['High'] > 80).count()*1.0 / walmart.count()
```
#### What is the Pearson correlation between High and Volume?
#### [Hint](http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.DataFrameStatFunctions.corr)
```
from pyspark.sql.functions import corr
walmart.select(corr('High', 'Volume')).show()
```
#### What is the max High per year?
```
from pyspark.sql.functions import year
yeardf = walmart.withColumn('Year', year('Date'))
yeardf.groupby('year').max('High').show()
```
#### What is the average Close for each Calendar Month?
#### In other words, across all the years, what is the average Close price for Jan,Feb, Mar, etc... Your result will have a value for each of these months.
```
from pyspark.sql.functions import month
monthdf = walmart.withColumn('month', month('Date'))
monthdf.groupby('month').avg('Close').orderBy('month').show()
```
# Great Job!
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# HubSpot - Get closed deals weekly
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/HubSpot/HubSpot_Get_closed_deals_weekly.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
**Tags:** #hubspot #crm #sales #deal #scheduler #asset #html #png #csv #naas_drivers #naas
**Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/)
## Input
```
#-> Uncomment the 2 lines below (by removing the hashtag) to schedule your job every day at 8:00 AM (NB: you can choose the time of your scheduling bot)
# import naas
# naas.scheduler.add(cron="0 8 * * *")
#-> Uncomment the line below (by removing the hashtag) to remove your scheduler
# naas.scheduler.delete()
```
### Import libraries
```
from naas_drivers import hubspot
from datetime import datetime, timedelta
import pandas as pd
import plotly.graph_objects as go
from plotly.subplots import make_subplots
```
### Setup your HubSpot
👉 Access your [HubSpot API key](https://knowledge.hubspot.com/integrations/how-do-i-get-my-hubspot-api-key)
```
HS_API_KEY = 'YOUR_HUBSPOT_API_KEY'
```
### Select your pipeline ID
Below you can select your pipeline.<br>
If you leave it as `None`, all deals will be used in the analysis.
```
df_pipelines = hubspot.connect(HS_API_KEY).pipelines.get_all()
df_pipelines
pipeline_id = None
```
### Setup Outputs
```
csv_output = "HubSpot_closed_weekly.csv"
html_output = "HubSpot_closed_weekly.html"
image_output = "HubSpot_closed_weekly.png"
```
## Model
### Get all deals
```
df_deals = hubspot.connect(HS_API_KEY).deals.get_all()
df_deals
```
### Create trend data
```
def get_trend(df_deals, pipeline):
    df = df_deals.copy()
    # Filter data (keep all deals when no pipeline is selected)
    if pipeline is not None:
        df = df[df["pipeline"].astype(str) == str(pipeline)]
    # Prep data
    df["closedate"] = pd.to_datetime(df["closedate"])
    df["amount"] = df.apply(lambda row: float(row["amount"]) if str(row["amount"]) not in ["None", ""] else 0, axis=1)
    # Calc by week
    df = df.groupby(pd.Grouper(freq='W', key='closedate')).agg({"hs_object_id": "count", "amount": "sum"}).reset_index()
    df["closedate"] = df["closedate"] + timedelta(days=-1)
    df = pd.melt(df, id_vars="closedate")
    # Rename col
    to_rename = {
        "closedate": "LABEL_ORDER",
        "variable": "GROUP",
        "value": "VALUE"
    }
    df = df.rename(columns=to_rename).replace("hs_object_id", "No of deals").replace("amount", "Amount")
    df["YEAR"] = df["LABEL_ORDER"].dt.strftime("%Y")
    df = df[df["YEAR"] == datetime.now().strftime("%Y")]
    df["LABEL"] = df["LABEL_ORDER"].dt.strftime("%Y-W%U")
    df["LABEL_ORDER"] = df["LABEL_ORDER"].dt.strftime("%Y%U")
    df = df[df["LABEL_ORDER"].astype(int) <= int(datetime.now().strftime("%Y%U"))]
    # Calc week-over-week variation
    df_var = pd.DataFrame()
    groups = df.GROUP.unique()
    for group in groups:
        tmp = df[df.GROUP == group].reset_index(drop=True)
        for idx, row in tmp.iterrows():
            if idx == 0:
                value_n1 = 0
            else:
                value_n1 = tmp.loc[tmp.index[idx-1], "VALUE"]
            tmp.loc[tmp.index[idx], "VALUE_COMP"] = value_n1
        df_var = pd.concat([df_var, tmp]).fillna(0).reset_index(drop=True)
    df_var["VARV"] = df_var["VALUE"] - df_var["VALUE_COMP"]
    df_var["VARP"] = df_var["VARV"] / abs(df_var["VALUE_COMP"])
    # Prep display strings
    df_var["VALUE_D"] = df_var["VALUE"].map("{:,.0f}".format).str.replace(",", " ")
    df_var["VARV_D"] = df_var["VARV"].map("{:,.0f}".format).str.replace(",", " ")
    df_var.loc[df_var["VARV"] > 0, "VARV_D"] = "+" + df_var["VARV_D"]
    df_var["VARP_D"] = df_var["VARP"].map("{:,.0%}".format).str.replace(",", " ")
    df_var.loc[df_var["VARP"] > 0, "VARP_D"] = "+" + df_var["VARP_D"]
    # Create hovertext
    df_var["TEXT"] = ("<b>Deal closed as of " + df_var["LABEL"] + " : </b>" +
                      df_var["VALUE_D"] + "<br>" +
                      df_var["VARP_D"] + " vs last week (" + df_var["VARV_D"] + ")")
    return df_var
df_trend = get_trend(df_deals, pipeline_id)
df_trend
```
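The per-group loop that builds `VALUE_COMP` in `get_trend` above can also be expressed with a vectorized `groupby().shift()`, which is shorter and faster on large frames. A minimal sketch, assuming the same `GROUP`/`VALUE` column names as in the melted trend data:

```python
import pandas as pd

# Toy data shaped like the melted trend frame
df = pd.DataFrame({
    "GROUP": ["Amount", "Amount", "Amount", "No of deals", "No of deals"],
    "VALUE": [100.0, 150.0, 120.0, 3.0, 5.0],
})

# Previous week's value within each group; the first row of each group gets 0
df["VALUE_COMP"] = df.groupby("GROUP")["VALUE"].shift(1).fillna(0)
df["VARV"] = df["VALUE"] - df["VALUE_COMP"]
print(df)
```

This replaces the nested `iterrows` loop with two vectorized operations while producing the same `VALUE_COMP`/`VARV` columns.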
## Output
### Plotting a barchart with filters
```
def create_barchart(df, label, group, value, varv, varp):
# Create figure with secondary y-axis
fig = make_subplots(specs=[[{"secondary_y": True}]])
# Add traces
df1 = df[df[group] == "No of deals"].reset_index(drop=True)[:]
total_volume = "{:,.0f}".format(df1[value].sum()).replace(",", " ")
var_volume = df1.loc[df1.index[-1], varv]
positive = False
if var_volume > 0:
positive = True
var_volume = "{:,.0f}".format(var_volume).replace(",", " ")
if positive:
var_volume = f"+{var_volume}"
fig.add_trace(
go.Bar(
name="No of deals",
x=df1[label],
y=df1[value],
offsetgroup=0,
hoverinfo="text",
text=df1["VALUE_D"],
hovertext=df1["TEXT"],
marker=dict(color="#33475b")
),
secondary_y=False,
)
df2 = df[df[group] == "Amount"].reset_index(drop=True)[:]
total_value = "{:,.0f}".format(df2[value].sum()).replace(",", " ")
var_value = df2.loc[df2.index[-1], varv]
positive = False
if var_value > 0:
positive = True
var_value = "{:,.0f}".format(var_value).replace(",", " ")
if positive:
var_value = f"+{var_value}"
fig.add_trace(
go.Bar(
name="Amount",
x=df2[label],
y=df2[value],
text=df2["VALUE_D"] + " K€",
offsetgroup=1,
hoverinfo="text",
hovertext=df2["TEXT"],
marker=dict(color="#ff7a59")
),
secondary_y=True,
)
# Add figure title
fig.update_layout(
title=f"<b>Hubspot - Closed deals this year</b><br><span style='font-size: 14px;'>Total deals: {total_volume} ({total_value} K€) | This week: {var_volume} ({var_value} K€) vs last week</span>",
title_font=dict(family="Arial", size=20, color="black"),
legend=None,
plot_bgcolor="#ffffff",
width=1200,
height=800,
paper_bgcolor="white",
xaxis_title="Weeks",
xaxis_title_font=dict(family="Arial", size=11, color="black"),
)
# Set y-axes titles
fig.update_yaxes(
title_text="No of deals",
title_font=dict(family="Arial", size=11, color="black"),
secondary_y=False
)
fig.update_yaxes(
title_text="Amount in K€",
title_font=dict(family="Arial", size=11, color="black"),
secondary_y=True
)
fig.show()
return fig
fig = create_barchart(df_trend, "LABEL", "GROUP", "VALUE", "VARV", "VARP")
```
### Export and share graph
```
# Export as CSV, PNG and HTML
df_trend.to_csv(csv_output, index=False)
fig.write_image(image_output)
fig.write_html(html_output)
# Share with naas
naas.asset.add(csv_output)
naas.asset.add(image_output)
naas.asset.add(html_output, params={"inline": True})
#-> Uncomment the lines below (by removing the hashtags) to delete your assets
# naas.asset.delete(csv_output)
# naas.asset.delete(image_output)
# naas.asset.delete(html_output)
```
| github_jupyter |
```
import numpy as np, pandas as pd, matplotlib.pyplot as plt
import os
import seaborn as sns
sns.set()
root_path = r'C:\Users\54638\Desktop\Cannelle\Excel handling'
input_path = os.path.join(root_path, "input")
output_path = os.path.join(root_path, "output")
%%time
# %%time is a cell magic: it must be the first line of the cell, with no comment following on the same line
# Read all Excel files
all_deals = pd.DataFrame()
for file in os.listdir(input_path):
# can add other criteria as you want
    if file.endswith('.xlsx'):
        tmp = pd.read_excel(os.path.join(input_path, file), index_col='order_no')
all_deals = pd.concat([all_deals, tmp])
# reindex, otherwise many rows share the same index
# all_deals = all_deals.reset_index() # not recommended here: it would move the original 'order_no' index into a column
all_deals.index = range(len(all_deals))
all_deals.head()
# all_deals.tail()
# all_deals.shape
# all_deals.describe()
# overview
all_deals['counterparty'].value_counts().sort_values(ascending = False)
# all_deals['counterparty'].unique()
all_deals['deal'].value_counts().sort_values(ascending = False)
deal_vol = all_deals['deal'].value_counts().sort_values()
deal_vol.plot(figsize = (10,6), kind = 'bar');
# Some slicing
all_deals[all_deals['deal'] == 'Accumulator']
all_deals[(all_deals['deal'] == 'Variance Swap')
& (all_deals['counterparty'].isin(['Citi','HSBC']))]
all_deals.groupby('currency').sum()
# all_deals.groupby('currency')[['nominal']].sum()
ccy_way = all_deals.groupby(['way','currency']).sum().unstack('currency')
ccy_way
ccy_way.plot(figsize = (10,6), kind ='bar')
plt.legend(loc='upper left', bbox_to_anchor=(1,1), ncol=1)
# pivot_table
all_deals.pivot_table(values = 'nominal',
index='counterparty', columns='deal', aggfunc='count')
# save data
# ccy_way.to_excel(os.path.join(output_path,"Extract.xlsx"))
file_name = "Extract" + ".xlsx"
sheet_name = "Extract"
writer = pd.ExcelWriter(os.path.join(output_path, file_name), engine = 'xlsxwriter')
ccy_way.to_excel(writer, sheet_name=sheet_name)
# adjust the column width
worksheet = writer.sheets[sheet_name]
for idx, col in enumerate(ccy_way):
    series = ccy_way[col]
    max_len = max(
        series.astype(str).map(len).max(),  # length of the longest value
        len(str(series.name))  # length of the column name
    ) + 3  # a little extra space
    max_len = min(max_len, 30)  # cap the width so columns don't get too wide
    worksheet.set_column(idx + 1, idx + 1, max_len)
writer.save()
# Delete all the files in the input folder
del_path = input_path
for file in os.listdir(del_path):
os.remove(os.path.join(del_path, file))
# Generate the input files; all the data below is randomly created
import calendar
# Transaction generator
def generate_data(year, month):
order_amount = max(int(np.random.randn()*200)+500,0) + np.random.randint(2000)
start_loc = 1
order_no = np.arange(start_loc,order_amount+start_loc)
countparty_list = ['JPMorgan', 'Credit Suisse', 'Deutsche Bank', 'BNP Paribas', 'Credit Agricole', 'SinoPac', 'Goldman Sachs', 'Citi',
'Blackstone', 'HSBC', 'Natixis', 'BOCI', 'UBS', 'CLSA', 'CICC', 'Fidelity', 'Jefferies']
countparty_prob = [0.04, 0.07, 0.06, 0.1, 0.09, 0.02, 0.1, 0.08, 0.025, 0.13, 0.065, 0.05, 0.01, 0.08, 0.01, 0.04, 0.03]
countparty = np.random.choice(countparty_list, size=order_amount, p=countparty_prob)
deal_list = ['Autocall', 'Accumulator', 'Range Accrual', 'Variance Swap', 'Vanilla', 'Digital', 'Twinwin', 'ForwardStart',
'ForwardBasket', 'Cross Currency Swap', 'Hybrid']
deal_prob = [0.16, 0.2, 0.11, 0.05, 0.11, 0.09, 0.08, 0.04, 0.04, 0.03, 0.09]
deal = np.random.choice(deal_list, size=order_amount, p=deal_prob)
way = np.random.choice(['buy','sell'], size=order_amount)
nominal = [(int(np.random.randn()*10) + np.random.randint(200)+ 50)*1000 for _ in range(order_amount)]
currency_list = ['USD', 'CNY', 'EUR', 'SGP', 'JPY', 'KRW', 'AUD', 'GBP']
currency_prob = [0.185, 0.195, 0.14, 0.08, 0.135, 0.125, 0.06, 0.08]
currency = np.random.choice(currency_list, size=order_amount, p=currency_prob)
datelist = list(date for date in calendar.Calendar().itermonthdates(year, month) if date.month == month)
trade_date = np.random.choice(datelist, size=order_amount)
data = {'order_no': order_no, 'counterparty': countparty, 'deal':deal, 'way': way, 'nominal': nominal,
'currency':currency, 'trade_date': trade_date}
return pd.DataFrame(data)
save_path = input_path
cur_month = 4
cur_year = 2018
for i in range(24):
if cur_month == 12:
cur_month = 1
cur_year +=1
else:
cur_month += 1
df = generate_data(cur_year, cur_month)
df_name = 'Derivatives Transaction '+calendar.month_abbr[cur_month]+' '+str(cur_year)+'.xlsx'
df.to_excel(os.path.join(save_path, df_name), index = False)
```
| github_jupyter |
# fuzzy_pandas examples
These are almost all from [Max Harlow](https://twitter.com/maxharlow)'s [awesome NICAR2019 presentation](https://docs.google.com/presentation/d/1djKgqFbkYDM8fdczFhnEJLwapzmt4RLuEjXkJZpKves/) where he demonstrated [csvmatch](https://github.com/maxharlow/csvmatch), which fuzzy_pandas is based on.
**SCROLL DOWN DOWN DOWN TO GET TO THE FUZZY MATCHING PARTS.**
```
import pandas as pd
import fuzzy_pandas as fpd
df1 = pd.read_csv("data/data1.csv")
df2 = pd.read_csv("data/data2.csv")
df1
df2
```
# Exact matches
By default, all columns from both dataframes are returned.
```
# csvmatch \
# forbes-billionaires.csv \
# bloomberg-billionaires.csv \
# --fields1 name \
# --fields2 Name
df1 = pd.read_csv("data/forbes-billionaires.csv")
df2 = pd.read_csv("data/bloomberg-billionaires.csv")
results = fpd.fuzzy_merge(df1, df2, left_on='name', right_on='Name')
print("Found", results.shape)
results.head(5)
```
### Only keeping matching columns
The csvmatch default only gives you the shared columns, which you can reproduce with `keep='match'`
```
df1 = pd.read_csv("data/forbes-billionaires.csv")
df2 = pd.read_csv("data/bloomberg-billionaires.csv")
results = fpd.fuzzy_merge(df1, df2, left_on='name', right_on='Name', keep='match')
print("Found", results.shape)
results.head(5)
```
### Only keeping specified columns
```
df1 = pd.read_csv("data/forbes-billionaires.csv")
df2 = pd.read_csv("data/bloomberg-billionaires.csv")
results = fpd.fuzzy_merge(df1, df2,
left_on='name',
right_on='Name',
keep_left=['name', 'realTimeRank'],
keep_right=['Rank'])
print("Found", results.shape)
results.head(5)
```
## Case sensitivity
This one doesn't give us any results!
```
# csvmatch \
# cia-world-leaders.csv \
# davos-attendees-2019.csv \
# --fields1 name \
# --fields2 full_name
df1 = pd.read_csv("data/cia-world-leaders.csv")
df2 = pd.read_csv("data/davos-attendees-2019.csv")
results = fpd.fuzzy_merge(df1, df2,
left_on='name',
right_on='full_name',
keep='match')
print("Found", results.shape)
results.head(10)
```
But if we add **ignore_case** we are good to go.
```
# csvmatch \
# cia-world-leaders.csv \
# davos-attendees-2019.csv \
# --fields1 name \
# --fields2 full_name \
# --ignore-case \
df1 = pd.read_csv("data/cia-world-leaders.csv")
df2 = pd.read_csv("data/davos-attendees-2019.csv")
results = fpd.fuzzy_merge(df1, df2,
left_on='name',
right_on='full_name',
ignore_case=True,
keep='match')
print("Found", results.shape)
results.head(5)
```
### Ignoring case, non-latin characters, word ordering
You should really be reading [the presentation](https://docs.google.com/presentation/d/1djKgqFbkYDM8fdczFhnEJLwapzmt4RLuEjXkJZpKves/edit)!
```
# $ csvmatch \
# cia-world-leaders.csv \
# davos-attendees-2019.csv \
# --fields1 name \
# --fields2 full_name \
# -i -a -n -s \
df1 = pd.read_csv("data/cia-world-leaders.csv")
df2 = pd.read_csv("data/davos-attendees-2019.csv")
results = fpd.fuzzy_merge(df1, df2,
left_on=['name'],
right_on=['full_name'],
ignore_case=True,
ignore_nonalpha=True,
ignore_nonlatin=True,
ignore_order_words=True,
keep='match')
print("Found", results.shape)
results.head(5)
```
# Fuzzy matching
## Levenshtein: Edit distance
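Before the matching example, here is what the metric itself measures: Levenshtein distance counts the minimum number of insertions, deletions, and substitutions needed to turn one string into the other. A self-contained sketch for illustration only (not the fuzzy_pandas implementation):

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance, computed one row at a time
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # -> 3
```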
```
# csvmatch \
# cia-world-leaders.csv \
# forbes-billionaires.csv \
# --fields1 name \
# --fields2 name \
# --fuzzy levenshtein \
df1 = pd.read_csv("data/cia-world-leaders.csv")
df2 = pd.read_csv("data/forbes-billionaires.csv")
results = fpd.fuzzy_merge(df1, df2,
left_on='name',
right_on='name',
method='levenshtein',
keep='match')
print("Found", results.shape)
results.head(10)
```
### Setting a threshold with Levenshtein
```
# csvmatch \
# cia-world-leaders.csv \
# forbes-billionaires.csv \
# --fields1 name \
# --fields2 name \
# --fuzzy levenshtein \
df1 = pd.read_csv("data/cia-world-leaders.csv")
df2 = pd.read_csv("data/forbes-billionaires.csv")
results = fpd.fuzzy_merge(df1, df2,
left_on='name',
right_on='name',
method='levenshtein',
threshold=0.85,
keep='match')
print("Found", results.shape)
results.head(10)
```
## Jaro: Edit distance
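Unlike Levenshtein, Jaro returns a similarity score in [0, 1]: it rewards characters that match within a sliding window and penalizes transpositions. A rough self-contained sketch for illustration only (not the fuzzy_pandas implementation):

```python
def jaro(s1, s2):
    """Jaro similarity: 1.0 for identical strings, 0.0 for no matches."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    match_dist = max(len1, len2) // 2 - 1
    s1_matches = [False] * len1
    s2_matches = [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        # look for c within the matching window of s2
        for j in range(max(0, i - match_dist), min(i + match_dist + 1, len2)):
            if not s2_matches[j] and s2[j] == c:
                s1_matches[i] = s2_matches[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # count transpositions among matched characters
    t, k = 0, 0
    for i in range(len1):
        if s1_matches[i]:
            while not s2_matches[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

print(jaro("MARTHA", "MARHTA"))  # -> 0.9444...
```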
```
# csvmatch \
# cia-world-leaders.csv \
# forbes-billionaires.csv \
# --fields1 name \
# --fields2 name \
# --fuzzy jaro \
df1 = pd.read_csv("data/cia-world-leaders.csv")
df2 = pd.read_csv("data/forbes-billionaires.csv")
results = fpd.fuzzy_merge(df1, df2,
left_on='name',
right_on='name',
method='jaro',
keep='match')
print("Found", results.shape)
results.head(10)
```
## Metaphone: Phonetic match
```
# csvmatch \
# cia-world-leaders.csv \
# un-sanctions.csv \
# --fields1 name \
# --fields2 name \
# --fuzzy metaphone \
df1 = pd.read_csv("data/cia-world-leaders.csv")
df2 = pd.read_csv("data/un-sanctions.csv")
results = fpd.fuzzy_merge(df1, df2,
left_on='name',
right_on='name',
method='metaphone',
keep='match')
print("Found", results.shape)
results.head(10)
```
## Bilenko
You'll need to respond to the prompts when you run the code. Labelling 10-15 pairs is best; send `f` when you've decided you're finished.
```
# $ csvmatch \
# cia-world-leaders.csv \
# davos-attendees-2019.csv \
# --fields1 name \
# --fields2 full_name \
# --fuzzy bilenko \
df1 = pd.read_csv("data/cia-world-leaders.csv")
df2 = pd.read_csv("data/davos-attendees-2019.csv")
results = fpd.fuzzy_merge(df1, df2,
left_on='name',
right_on='full_name',
method='bilenko',
keep='match')
print("Found", results.shape)
results.head(10)
```
| github_jupyter |
<h1>Demand forecasting with BigQuery and TensorFlow</h1>
In this notebook, we will develop a machine learning model to predict the demand for taxi cabs in New York.
To develop the model, we will need to get historical data of taxicab usage. This data exists in BigQuery. Let's start by looking at the schema.
```
import google.datalab.bigquery as bq
import pandas as pd
import numpy as np
import shutil
%bq tables describe --name bigquery-public-data.new_york.tlc_yellow_trips_2015
```
<h2> Analyzing taxicab demand </h2>
Let's pull the number of trips for each day in the 2015 dataset using Standard SQL.
```
%bq query
SELECT
EXTRACT (DAYOFYEAR from pickup_datetime) AS daynumber
FROM `bigquery-public-data.new_york.tlc_yellow_trips_2015`
LIMIT 5
```
<h3> Modular queries and Pandas dataframe </h3>
Let's use the total number of trips as our proxy for taxicab demand (other reasonable alternatives are total trip_distance or total fare_amount). It is possible to predict multiple variables using TensorFlow, but for simplicity, we will stick to just predicting the number of trips.
We will give our query a name 'taxiquery' and have it use an input variable '$YEAR'. We can then invoke the 'taxiquery' by giving it a YEAR. The to_dataframe() converts the BigQuery result into a <a href='http://pandas.pydata.org/'>Pandas</a> dataframe.
```
%bq query -n taxiquery
WITH trips AS (
SELECT EXTRACT (DAYOFYEAR from pickup_datetime) AS daynumber
FROM `bigquery-public-data.new_york.tlc_yellow_trips_*`
where _TABLE_SUFFIX = @YEAR
)
SELECT daynumber, COUNT(1) AS numtrips FROM trips
GROUP BY daynumber ORDER BY daynumber
query_parameters = [
{
'name': 'YEAR',
'parameterType': {'type': 'STRING'},
'parameterValue': {'value': 2015}
}
]
trips = taxiquery.execute(query_params=query_parameters).result().to_dataframe()
trips[:5]
```
<h3> Benchmark </h3>
Often, a reasonable estimate of something is its historical average. We can therefore benchmark our machine learning model against the historical average.
```
avg = np.mean(trips['numtrips'])
print('Just using average={0} has RMSE of {1}'.format(avg, np.sqrt(np.mean((trips['numtrips'] - avg)**2))))
```
The mean here is about 400,000 and the root-mean-square error (RMSE) in this case is about 52,000. In other words, if we were to estimate that there are 400,000 taxi trips on any given day, that estimate will be off by about 52,000 in either direction on average.
Let's see if we can do better than this -- our goal is to make predictions of taxicab demand whose RMSE is lower than 52,000.
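The benchmark above is easy to reproduce on any series: predict the historical average everywhere and measure the RMSE of that constant prediction. A sketch with made-up numbers (not the actual trip counts):

```python
import numpy as np

def constant_benchmark_rmse(y):
    """RMSE of always predicting the historical average of y."""
    avg = np.mean(y)
    return avg, np.sqrt(np.mean((y - avg) ** 2))

# Hypothetical daily trip counts
y = np.array([380_000, 420_000, 405_000, 350_000, 445_000])
avg, rmse = constant_benchmark_rmse(y)
print(avg, rmse)
```

Any model worth keeping must beat this number on held-out data.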
What kinds of things affect people's use of taxicabs?
<h2> Weather data </h2>
We suspect that weather influences how often people use a taxi. Perhaps someone who'd normally walk to work would take a taxi if it is very cold or rainy.
One of the advantages of using a global data warehouse like BigQuery is that you get to mash up unrelated datasets quite easily.
```
%bq query
SELECT * FROM `bigquery-public-data.noaa_gsod.stations`
WHERE state = 'NY' AND wban != '99999' AND name LIKE '%LA GUARDIA%'
```
<h3> Variables </h3>
Let's pull out the minimum and maximum daily temperature (in Fahrenheit) as well as the amount of rain (in inches) for La Guardia airport.
```
%bq query -n wxquery
SELECT EXTRACT (DAYOFYEAR FROM CAST(CONCAT(@YEAR,'-',mo,'-',da) AS TIMESTAMP)) AS daynumber,
MIN(EXTRACT (DAYOFWEEK FROM CAST(CONCAT(@YEAR,'-',mo,'-',da) AS TIMESTAMP))) dayofweek,
MIN(min) mintemp, MAX(max) maxtemp, MAX(IF(prcp=99.99,0,prcp)) rain
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE stn='725030' AND _TABLE_SUFFIX = @YEAR
GROUP BY 1 ORDER BY daynumber DESC
query_parameters = [
{
'name': 'YEAR',
'parameterType': {'type': 'STRING'},
'parameterValue': {'value': 2015}
}
]
weather = wxquery.execute(query_params=query_parameters).result().to_dataframe()
weather[:5]
```
<h3> Merge datasets </h3>
Let's use Pandas to merge (combine) the taxi cab and weather datasets day-by-day.
```
data = pd.merge(weather, trips, on='daynumber')
data[:5]
```
<h3> Exploratory analysis </h3>
Is there a relationship between maximum temperature and the number of trips?
```
j = data.plot(kind='scatter', x='maxtemp', y='numtrips')
```
The scatterplot above doesn't look very promising. There appears to be a weak downward trend, but it's also quite noisy.
Is there a relationship between the day of the week and the number of trips?
```
j = data.plot(kind='scatter', x='dayofweek', y='numtrips')
```
Hurrah, we seem to have found a predictor. It appears that people use taxis more later in the week. Perhaps New Yorkers make weekly resolutions to walk more and then lose their determination later in the week, or maybe it reflects tourism dynamics in New York City.
Perhaps if we took out the <em>confounding</em> effect of the day of the week, maximum temperature will start to have an effect. Let's see if that's the case:
```
j = data[data['dayofweek'] == 7].plot(kind='scatter', x='maxtemp', y='numtrips')
```
Removing the confounding factor does seem to reflect an underlying trend around temperature. But ... the data are a little sparse, don't you think? This is something that you have to keep in mind -- the more predictors you start to consider (here we are using two: day of week and maximum temperature), the more rows you will need so as to avoid <em> overfitting </em> the model.
<h3> Adding 2014 and 2016 data </h3>
Let's add in 2014 and 2016 data to the Pandas dataframe. Note how useful it was for us to modularize our queries around the YEAR.
```
data2 = data # 2015 data
for year in [2014, 2016]:
query_parameters = [
{
'name': 'YEAR',
'parameterType': {'type': 'STRING'},
'parameterValue': {'value': year}
}
]
weather = wxquery.execute(query_params=query_parameters).result().to_dataframe()
trips = taxiquery.execute(query_params=query_parameters).result().to_dataframe()
data_for_year = pd.merge(weather, trips, on='daynumber')
data2 = pd.concat([data2, data_for_year])
data2.describe()
j = data2[data2['dayofweek'] == 7].plot(kind='scatter', x='maxtemp', y='numtrips')
```
The data do seem a bit more robust. If we had even more data, it would be better of course. But in this case, we only have 2014-2016 data for taxi trips, so that's what we will go with.
<h2> Machine Learning with Tensorflow </h2>
We'll use 80% of our dataset for training and 20% of the data for testing the model we have trained. Let's shuffle the rows of the Pandas dataframe so that this division is random. The predictor (or input) columns will be every column in the database other than the number-of-trips (which is our target, or what we want to predict).
The machine learning models that we will use -- linear regression and neural networks -- both require that the input variables are numeric in nature.
The day of the week, however, is a categorical variable (i.e. Tuesday is not really greater than Monday). So, we should create separate columns for whether it is a Monday (with values 0 or 1), Tuesday, etc.
Against that, we do have limited data (remember: the more columns you use as input features, the more rows you need to have in your training dataset), and it appears that there is a clear linear trend by day of the week. So, we will opt for simplicity here and use the data as-is. Try uncommenting the code that creates separate columns for the days of the week and re-run the notebook if you are curious about the impact of this simplification.
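If you do want to try the categorical route, the day-of-week column can be one-hot encoded in one line with `pandas.get_dummies`. A sketch on a toy frame (hypothetical values, same idea as the commented-out loop in the next cell):

```python
import pandas as pd

toy = pd.DataFrame({"dayofweek": [1, 3, 7], "maxtemp": [40, 55, 70]})
# Replace the single categorical column with one indicator column per day
encoded = pd.get_dummies(toy, columns=["dayofweek"], prefix="day")
print(encoded.columns.tolist())  # -> ['maxtemp', 'day_1', 'day_3', 'day_7']
```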
```
import tensorflow as tf
shuffled = data2.sample(frac=1, random_state=13)
# It would be a good idea, if we had more data, to treat the days as categorical variables
# with the small amount of data, we have though, the model tends to overfit
#predictors = shuffled.iloc[:,2:5]
#for day in range(1,8):
# matching = shuffled['dayofweek'] == day
# key = 'day_' + str(day)
# predictors[key] = pd.Series(matching, index=predictors.index, dtype=float)
predictors = shuffled.iloc[:,1:5]
predictors[:5]
shuffled[:5]
targets = shuffled.iloc[:,5]
targets[:5]
```
Let's update our benchmark based on the 80-20 split and the larger dataset.
```
trainsize = int(len(shuffled['numtrips']) * 0.8)
avg = np.mean(shuffled['numtrips'][:trainsize])
rmse = np.sqrt(np.mean((targets[trainsize:] - avg)**2))
print('Just using average={0} has RMSE of {1}'.format(avg, rmse))
```
<h2> Linear regression with tf.contrib.learn </h2>
We scale the number of taxicab rides by 600,000 (`SCALE_NUM_TRIPS`) so that the model can keep its predicted values in the [0-1] range. The optimization goes a lot faster when the weights are small numbers. We save the weights into ./trained_model_linear and display the root mean square error on the test dataset.
```
SCALE_NUM_TRIPS = 600000.0
trainsize = int(len(shuffled['numtrips']) * 0.8)
testsize = len(shuffled['numtrips']) - trainsize
npredictors = len(predictors.columns)
noutputs = 1
tf.logging.set_verbosity(tf.logging.WARN) # change to INFO to get output every 100 steps ...
shutil.rmtree('./trained_model_linear', ignore_errors=True) # so that we don't load weights from previous runs
estimator = tf.contrib.learn.LinearRegressor(model_dir='./trained_model_linear',
feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(predictors.values))
print("starting to train ... this will take a while ... use verbosity=INFO to get more verbose output")
def input_fn(features, targets):
return tf.constant(features.values), tf.constant(targets.values.reshape(len(targets), noutputs)/SCALE_NUM_TRIPS)
estimator.fit(input_fn=lambda: input_fn(predictors[:trainsize], targets[:trainsize]), steps=10000)
pred = np.multiply(list(estimator.predict(predictors[trainsize:].values)), SCALE_NUM_TRIPS )
rmse = np.sqrt(np.mean(np.power((targets[trainsize:].values - pred), 2)))
print('LinearRegression has RMSE of {0}'.format(rmse))
```
The RMSE here (57K) being lower than the benchmark (62K) indicates that we are doing about 10% better with the machine learning model than we would be if we just used the historical average (our benchmark).
<h2> Neural network with tf.contrib.learn </h2>
Let's make a more complex model with a few hidden nodes.
```
SCALE_NUM_TRIPS = 600000.0
trainsize = int(len(shuffled['numtrips']) * 0.8)
testsize = len(shuffled['numtrips']) - trainsize
npredictors = len(predictors.columns)
noutputs = 1
tf.logging.set_verbosity(tf.logging.WARN) # change to INFO to get output every 100 steps ...
shutil.rmtree('./trained_model', ignore_errors=True) # so that we don't load weights from previous runs
estimator = tf.contrib.learn.DNNRegressor(model_dir='./trained_model',
hidden_units=[5, 5],
feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(predictors.values))
print("starting to train ... this will take a while ... use verbosity=INFO to get more verbose output")
def input_fn(features, targets):
return tf.constant(features.values), tf.constant(targets.values.reshape(len(targets), noutputs)/SCALE_NUM_TRIPS)
estimator.fit(input_fn=lambda: input_fn(predictors[:trainsize], targets[:trainsize]), steps=10000)
pred = np.multiply(list(estimator.predict(predictors[trainsize:].values)), SCALE_NUM_TRIPS )
rmse = np.sqrt(np.mean((targets[trainsize:].values - pred)**2))
print('Neural Network Regression has RMSE of {0}'.format(rmse))
```
Using a neural network results in similar performance to the linear model when I ran it -- it might be because there isn't enough data for the NN to do much better. (NN training is a non-convex optimization, and you will get different results each time you run the above code).
<h2> Running a trained model </h2>
So, we have trained a model, and saved it to a file. Let's use this model to predict taxicab demand given the expected weather for three days.
Here we make a Dataframe out of those inputs, load up the saved model (note that we have to know the model equation -- it's not saved in the model file) and use it to predict the taxicab demand.
```
input = pd.DataFrame.from_dict(data =
{'dayofweek' : [4, 5, 6],
'mintemp' : [60, 40, 50],
'maxtemp' : [70, 90, 60],
'rain' : [0, 0.5, 0]})
# read trained model from ./trained_model
estimator = tf.contrib.learn.LinearRegressor(model_dir='./trained_model_linear',
feature_columns=tf.contrib.learn.infer_real_valued_columns_from_input(input.values))
pred = np.multiply(list(estimator.predict(input.values)), SCALE_NUM_TRIPS )
print(pred)
```
Looks like we should tell some of our taxi drivers to take the day off on Thursday (day=5). No wonder -- the forecast calls for extreme weather fluctuations on Thursday.
Copyright 2017 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
| github_jupyter |
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from numpy.linalg import norm
import pandas as pd
plt.style.use('ggplot')
deaths = pd.read_csv('deaths.txt')
pumps = pd.read_csv('pumps.txt')
print(deaths.head())
print(pumps.head())
plt.plot(deaths['X'], deaths['Y'], 'o', lw=0, mew=1, mec='0.9', ms=6) # marker edge color/width, marker size
plt.plot(pumps['X'], pumps['Y'], 'ks', lw=0, mew=1, mec='0.9', ms=6)
plt.axis('equal')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('John Snow\'s Cholera')
fig = plt.figure(figsize=(4, 3.5))
ax = fig.add_subplot(111)
plt.plot(deaths['X'], deaths['Y'], 'bo', lw=0, mew=1, mec='0.9', ms=6, alpha=0.6) # marker edge color/width, marker size
plt.plot(pumps['X'], pumps['Y'], 'ks', lw=0, mew=1, mec='0.9', ms=6)
plt.axis('equal')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('John Snow\'s Cholera')
from matplotlib.patches import Ellipse
e = Ellipse(xy=(deaths['X'].mean(), deaths['Y'].mean()),
width=deaths.X.std(), height=deaths.Y.std(), lw=2, fc='None', ec='r', zorder=10)
ax.add_artist(e)
plt.plot(deaths['X'].mean(), deaths['Y'].mean(), 'r.', lw=2)
for i in pumps.index:
plt.annotate(s='%d'%i, xy=(pumps[['X', 'Y']].loc[i]), xytext=(-15, 6), textcoords='offset points', color='k')
# find the nearest pump for each death
deaths['C'] = [np.argmin(norm(pumps - deaths.iloc[i, :2], axis=1)) for i in range(len(deaths))]
deaths.head()
fig = plt.figure(figsize=(4, 3.5))
ax = fig.add_subplot(111)
plt.scatter(deaths['X'], deaths['Y'], marker='o', lw=0.5, color=plt.cm.jet(deaths.C/12), edgecolors='0.5')
plt.plot(pumps['X'], pumps['Y'], 'ks', lw=0, mew=1, mec='0.9', ms=6)
plt.axis('equal')
plt.xlabel('X')
plt.ylabel('Y')
plt.title('John Snow\'s Cholera')
from matplotlib.patches import Ellipse
e = Ellipse(xy=(deaths['X'].mean(), deaths['Y'].mean()),
width=deaths.X.std(), height=deaths.Y.std(), lw=2, fc='None', ec='r', zorder=10)
ax.add_artist(e)
plt.plot(deaths['X'].mean(), deaths['Y'].mean(), 'r.', lw=2)
for i in pumps.index:
plt.annotate(s='%d'%i, xy=(pumps[['X', 'Y']].loc[i]), xytext=(-15, 6), textcoords='offset points', color='k')
#################
d2 = pd.read_hdf('../LinearRegression/ch4data.h5').dropna()
rates = d2[['dfe', 'gdp', 'both']].to_numpy().astype('float')
print(rates.shape)
plt.figure(figsize=(8, 3.5))
plt.subplot(121)
_ = plt.hist(rates[:, 1], bins=20, color='steelblue')
plt.xticks(rotation=45, ha='right')
plt.yscale('log')
plt.xlabel('GDP')
plt.ylabel('count')
plt.subplot(122)
plt.scatter(rates[:, 0], rates[:, 2], s=141*4*rates[:, 1] / rates[:, 1].max(), edgecolor='0.3', color='steelblue')
plt.xlabel('dfe')
plt.ylabel('suicide rate (both)')
plt.subplots_adjust(wspace=0.3)
from scipy.cluster.vq import whiten
w = whiten(rates) # convert to unit variance, k-means prerequisite
plt.figure(figsize=(8, 3.5))
plt.subplot(121)
_ = plt.hist(w[:, 1], bins=20, color='steelblue')
plt.xticks(rotation=45, ha='right')
plt.yscale('log')
plt.xlabel('GDP')
plt.ylabel('count')
plt.subplot(122)
plt.scatter(w[:, 0], w[:, 2], s=141*4*w[:, 1] / w[:, 1].max(), edgecolor='0.3', color='steelblue')
plt.xlabel('dfe')
plt.ylabel('suicide rate (both)')
plt.subplots_adjust(wspace=0.3)
from sklearn.cluster import KMeans
k = 2
model = KMeans(n_clusters=k).fit(w[:, [0, 2]])
plt.scatter(w[:, 0], w[:, 2], s=141*4*w[:, 1] / w[:, 1].max(), edgecolor='0.3',
color=plt.cm.get_cmap("hsv", k+1)(model.labels_), alpha=0.5)
plt.xlabel('dfe')
plt.ylabel('suicide rate (both)')
plt.scatter(model.cluster_centers_[:, 0], model.cluster_centers_[:, 1], marker='+',
color='k', s=141, lw=3)
x, y = np.meshgrid(np.linspace(0, 4, 100), np.linspace(0, 7, 100))
x, y = x.reshape((-1, 1)), y.reshape((-1, 1))
p = model.predict(np.hstack((x, y)))
plt.scatter(x, y, color=plt.cm.get_cmap("hsv", k+1)(p), alpha=0.3)
plt.scatter(model.cluster_centers_[:, 0], model.cluster_centers_[:, 1], marker='+',
color='k', s=141, lw=3)
plt.xlim((0, 4))
plt.ylim((0, 7))
############
import astropy.coordinates as coord
import astropy.units as u
import astropy.constants as c
uzcat = pd.read_table('uzcJ2000.tab', sep='\t', dtype='str', header=16,
names=['ra', 'dec', 'Zmag', 'cz', 'cze', 'T', 'U',
'Ne', 'Zname', 'C', 'Ref', 'Oname', 'M', 'N'], skiprows=[17])
uzcat.head()
uzcat['ra'] = uzcat['ra'].apply(lambda x: '%sh%sm%ss' % (x[:2], x[2:4], x[4:]))
uzcat['dec'] = uzcat['dec'].apply(lambda x: '%sd%sm%ss' % (x[:3], x[3:5], x[5:]))
uzcat.head()
uzcat2 = uzcat.applymap(lambda x: np.nan if x.isspace() else x.strip())
uzcat2['cz'] = uzcat2['cz'].astype('float')
uzcat2['Zmag'] = uzcat2['Zmag'].astype('float')
uzcat2.head()
coords_uzc = coord.SkyCoord(uzcat2['ra'], uzcat2['dec'], frame='fk5', equinox='J2000')
color_czs = (uzcat2['cz'] + abs(uzcat2['cz'].min())) / (uzcat2['cz'].max() + abs(uzcat2['cz'].min()))
from matplotlib.patheffects import withStroke
whitebg = withStroke(foreground='w', linewidth=2.5)
fig = plt.figure(figsize=(8, 3.5), facecolor='w')
ax = fig.add_subplot(111, projection='mollweide')
ax.scatter(coords_uzc.ra.radian - np.pi, coords_uzc.dec.radian, c=plt.cm.Blues_r(color_czs),
s=4, marker='.', zorder=-1)
#plt.grid()
for label in ax.get_xticklabels():
label.set_path_effects([whitebg])
uzcat2.cz.hist(bins=50)
plt.yscale('log')
plt.xlabel('cz distance')
plt.ylabel('count')
_ = plt.xticks(rotation=45, ha='right')
uzc_czs = uzcat2['cz'].to_numpy()  # .as_matrix() was removed from pandas; use .to_numpy()
decmin = 15
decmax = 30
ramin = 90
ramax = 295
czmin = 0
czmax = 12500
selection_dec = (coords_uzc.dec.deg > decmin) * (coords_uzc.dec.deg < decmax)
selection_ra = (coords_uzc.ra.deg > ramin) * (coords_uzc.ra.deg < ramax)
selection_czs = (uzc_czs > czmin) * (uzc_czs < czmax)
selection = selection_dec * selection_ra * selection_czs
fig = plt.figure( figsize=(6,6))
ax = fig.add_subplot(111, projection='polar')
sct = ax.scatter(coords_uzc.ra.radian[selection_dec], uzc_czs[selection_dec],
color='SteelBlue', s=uzcat2['Zmag'][selection_dec * selection_czs],
edgecolors="none", alpha=0.7, zorder=0)
ax.set_rlim(0,20000)
ax.set_theta_offset(np.pi/-2)
ax.set_rlabel_position(65)
ax.set_rticks(range(2500, 20001, 5000));
ax.plot([(ramin * u.deg).to(u.radian).value, (ramin * u.deg).to(u.radian).value], [0, 12500],
color='IndianRed', alpha=0.8, dashes=(10,4))
ax.plot([ramax*np.pi/180., ramax*np.pi/180.], [0,12500], color='IndianRed', alpha=0.8, dashes=(10, 4))
theta = np.arange(ramin, ramax, 1)
ax.plot(theta*np.pi/180., np.ones_like(theta)*12500, color='IndianRed', alpha=0.8, dashes=(10, 4))
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111, polar=True)
sct = ax.scatter(coords_uzc.ra.radian[selection], uzc_czs[selection], color='SteelBlue',
s=uzcat2['Zmag'][selection], edgecolors="none", alpha=0.7, zorder=0)
ax.set_rlim(0,12500)
ax.set_theta_offset(np.pi/-2)
ax.set_rlabel_position(65)
ax.set_rticks(range(2500,12501,2500));
mycat = uzcat2.copy(deep=True).loc[selection]
mycat['ra_deg'] = coords_uzc.ra.deg[selection]
mycat['dec_deg'] = coords_uzc.dec.deg[selection]
zs = (((mycat['cz'].to_numpy() * u.km / u.s) / c.c).decompose())
dist = coord.Distance(z=zs)
print(dist)
mycat['dist'] = dist
coords_xyz = coord.SkyCoord(ra=mycat['ra_deg'] * u.deg,
dec=mycat['dec_deg'] * u.deg,
                            distance=dist,  # dist is already a Distance in Mpc; multiplying by u.Mpc again would give Mpc^2
frame='fk5',
equinox='J2000')
mycat['X'] = coords_xyz.cartesian.x.value
mycat['Y'] = coords_xyz.cartesian.y.value
mycat['Z'] = coords_xyz.cartesian.z.value
mycat.head()
fig, axs = plt.subplots(1, 2, figsize=(14,6))
plt.subplot(121)
plt.scatter(mycat['Y'], -1*mycat['X'], s=8,
color=plt.cm.OrRd_r(10**(mycat.Zmag - mycat.Zmag.max())),
edgecolor='None')
plt.xlabel('Y (Mpc)')
plt.ylabel('X (Mpc)')
plt.axis('equal')
plt.subplot(122)
plt.scatter(-1*mycat['X'], mycat['Z'], s=8,
color=plt.cm.OrRd_r(10**(mycat.Zmag - mycat.Zmag.max())),
edgecolor='None')
lstyle = dict(lw=1.5, color='k', dashes=(6, 4))
plt.plot([0, 150], [0, 80], **lstyle)
plt.plot([0, 150], [0, 45], **lstyle)
plt.plot([0, -25], [0, 80], **lstyle)
plt.plot([0, -25], [0, 45], **lstyle)
plt.xlabel('X (Mpc)')
plt.ylabel('Z (Mpc)')
plt.axis('equal')
plt.subplots_adjust(wspace=0.25)
#mycat.to_pickle('data_ch5_clustering.pick')
import scipy.cluster.hierarchy as hac
X = mycat.X.to_numpy().reshape(-1, 1)  # Series.reshape no longer exists; convert to a NumPy array first
Y = mycat.Y.to_numpy().reshape(-1, 1)
galpos = np.hstack((X, Y))
Z = hac.linkage(galpos, metric='euclidean', method='centroid')
plt.figure(figsize=(10, 8))
hac.dendrogram(Z, p=6, truncate_mode='level', orientation='right');
k = 10
clusters = hac.fcluster(Z, k, criterion='maxclust')
plt.scatter(Y, -X, c=clusters, cmap='rainbow')
for i in range(k):
plt.plot(Y[clusters==i+1].mean(), -X[clusters==i+1].mean(),
'o', c='0.7', mec='k', mew=1.5, alpha=0.7)
```
# Concise Implementation of Linear Regression
With the development of deep learning frameworks, it has become increasingly easy to develop deep learning applications. In practice, we can usually implement the same model much more concisely than in the previous section. In this section, we will introduce how to use the Gluon interface provided by MXNet.
## Generating Data Sets
We will generate the same data set as that used in the previous section.
```
from mxnet import autograd, nd
num_inputs = 2
num_examples = 1000
true_w = nd.array([2, -3.4])
true_b = 4.2
features = nd.random.normal(scale=1, shape=(num_examples, num_inputs))
labels = nd.dot(features, true_w) + true_b
labels += nd.random.normal(scale=0.01, shape=labels.shape)
```
## Reading Data
Gluon provides the `data` module to read data. Since `data` is often used as a variable name, we will replace it with the pseudonym `gdata` (adding the first letter of Gluon) when referring to the imported `data` module. In each iteration, we will randomly read a mini-batch containing 10 data instances.
```
from mxnet.gluon import data as gdata
batch_size = 10
# Combine the features and labels of the training data
dataset = gdata.ArrayDataset(features, labels)
# Randomly reading mini-batches
data_iter = gdata.DataLoader(dataset, batch_size, shuffle=True)
```
The use of `data_iter` here is the same as in the previous section. Now, we can read and print the first mini-batch of instances.
```
for X, y in data_iter:
print(X, y)
break
```
## Define the Model
When we implemented the linear regression model from scratch in the previous section, we needed to define the model parameters and describe, step by step, how the model is evaluated. This can become complicated as we build complex models. Gluon provides a large number of predefined layers, which let us focus on which layers to use to construct the model rather than on their implementation.
To define a linear model, first import the module `nn`. `nn` is an abbreviation for neural networks. As the name implies, this module defines a large number of neural network layers. We will first define a model variable `net`, which is a `Sequential` instance. In Gluon, a `Sequential` instance can be regarded as a container that concatenates the various layers in sequence. When constructing the model, we will add the layers in their order of occurrence in the container. When input data is given, each layer in the container will be calculated in order, and the output of one layer will be the input of the next layer.
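The container behavior described above can be sketched in plain Python. This is a conceptual sketch only, not Gluon's actual implementation: layers added to the container are applied in order, with each layer's output feeding the next layer's input.

```python
class TinySequential:
    """Conceptual sketch of a Sequential container."""
    def __init__(self):
        self.layers = []

    def add(self, layer):
        self.layers.append(layer)

    def __call__(self, x):
        # the output of one layer becomes the input of the next
        for layer in self.layers:
            x = layer(x)
        return x

net_sketch = TinySequential()
net_sketch.add(lambda x: x + 1)
net_sketch.add(lambda x: x * 2)
print(net_sketch(3))  # (3 + 1) * 2 = 8
```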
```
from mxnet.gluon import nn
net = nn.Sequential()
```
Recall the architecture of a single layer network. The layer is fully connected since it connects all inputs with all outputs by means of a matrix-vector multiplication. In Gluon, the fully connected layer is referred to as a `Dense` instance. Since we only want to generate a single scalar output, we set that number to $1$.

```
net.add(nn.Dense(1))
```
It is worth noting that, in Gluon, we do not need to specify the input shape for each layer, such as the number of linear regression inputs. When the model sees the data, for example, when the `net(X)` is executed later, the model will automatically infer the number of inputs in each layer. We will describe this mechanism in detail in the chapter "Deep Learning Computation". Gluon introduces this design to make model development more convenient.
## Initialize Model Parameters
Before using `net`, we need to initialize the model parameters, such as the weights and biases in the linear regression model. We will import the `initializer` module from MXNet. This module provides various methods for model parameter initialization. The `init` here is the abbreviation of `initializer`. By `init.Normal(sigma=0.01)`, we specify that each weight parameter is randomly sampled at initialization from a normal distribution with mean 0 and standard deviation 0.01. The bias parameter is initialized to zero by default.
```
from mxnet import init
net.initialize(init.Normal(sigma=0.01))
```
The code above looks pretty straightforward, but in reality something quite subtle is happening here. We are initializing parameters for a network even though we haven't told Gluon yet how many dimensions the input will have. It might be 2, as in our example, or 2,000, so we couldn't just preallocate enough space to make it work. What happens behind the scenes is that initialization is deferred until the first time data is sent through the network. In doing so, Gluon primes all settings (and the user doesn't even need to worry about it). The only caution is that since the parameters have not been initialized yet, we cannot manipulate them yet.
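This deferred-initialization mechanism can be mimicked with a small sketch (a hypothetical class, not Gluon internals): allocation of the weight matrix waits until the first batch reveals the input dimension.

```python
import numpy as np

class LazyDense:
    """Sketch of deferred initialization: weights are allocated on the first forward pass."""
    def __init__(self, units, sigma=0.01):
        self.units = units
        self.sigma = sigma
        self.w = None  # not allocated yet -- the input dimension is unknown

    def __call__(self, x):
        if self.w is None:
            # infer the input dimension from the first batch of data
            self.w = np.random.normal(scale=self.sigma, size=(x.shape[1], self.units))
        return x @ self.w

layer = LazyDense(1)
assert layer.w is None        # parameters cannot be inspected before the first pass
layer(np.ones((10, 2)))
print(layer.w.shape)          # (2, 1), inferred from the data
```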
## Define the Loss Function
In Gluon, the module `loss` defines various loss functions. We will replace the imported module `loss` with the pseudonym `gloss`, and directly use the squared loss it provides as a loss function for the model.
```
from mxnet.gluon import loss as gloss
loss = gloss.L2Loss() # The squared loss is also known as the L2 norm loss
```
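As a point of reference, Gluon's `L2Loss` is (to my understanding) the squared loss with the conventional factor of 1/2, which cancels the 2 produced by differentiation. A NumPy sketch of the per-example loss:

```python
import numpy as np

def l2_loss(pred, label):
    # Squared loss with the conventional 1/2 factor (as in Gluon's L2Loss),
    # so that the gradient with respect to pred is simply (pred - label).
    return 0.5 * (pred - label) ** 2

print(l2_loss(np.array([3.0]), np.array([1.0])))  # [2.]
```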
## Define the Optimization Algorithm
Again, we do not need to implement mini-batch stochastic gradient descent. After importing Gluon, we create a `Trainer` instance and specify mini-batch stochastic gradient descent (`sgd`) with a learning rate of 0.03 as the optimization algorithm. This optimizer will iterate over all the parameters contained in the `net` instance's nested layers (the layers added via the `add` function). These parameters can be obtained by the `collect_params` function.
```
from mxnet import gluon
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.03})
```
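The update behind `trainer.step(batch_size)` is plain mini-batch SGD. A NumPy sketch of one step, under the assumption (consistent with exercise 1 below) that the gradients passed in were summed over the batch, so the step divides by the batch size:

```python
import numpy as np

def sgd_step(params, grads, lr, batch_size):
    # Mini-batch SGD: move each parameter against its gradient.
    # Dividing by batch_size normalizes gradients accumulated as sums over the batch.
    for p, g in zip(params, grads):
        p -= lr * g / batch_size

w = np.array([1.0, 1.0])
sgd_step([w], [np.array([10.0, -10.0])], lr=0.03, batch_size=10)
print(w)  # [0.97 1.03]
```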
## Training
You might have noticed that it was a bit more concise to express our model in Gluon. For example, we didn't have to individually allocate parameters, define our loss function, or implement stochastic gradient descent. The benefits of relying on Gluon's abstractions will grow substantially once we start working with much more complex models. But once we have all the basic pieces in place, the training loop itself is quite similar to what we would do if implementing everything from scratch.
To refresh your memory: for some number of epochs, we'll make a complete pass over the dataset (`train_data`), grabbing one mini-batch of inputs and the corresponding ground-truth labels at a time. Then, for each batch, we'll go through the following ritual.
* Generate predictions `net(X)` and the loss `l` by executing a forward pass through the network.
* Calculate gradients by making a backwards pass through the network via `l.backward()`.
* Update the model parameters by invoking our SGD optimizer (note that we need not tell trainer.step about which parameters but rather just the amount of data, since we already performed that in the initialization of trainer).
For good measure we compute the loss over the entire dataset after each epoch and print it to monitor progress.
```
num_epochs = 3
for epoch in range(1, num_epochs + 1):
for X, y in data_iter:
with autograd.record():
l = loss(net(X), y)
l.backward()
trainer.step(batch_size)
l = loss(net(features), labels)
print('epoch %d, loss: %f' % (epoch, l.mean().asnumpy()))
```
The model parameters we have learned and the actual model parameters are compared as below. We get the layer we need from the `net` and access its weight (`weight`) and bias (`bias`). The parameters we have learned and the actual parameters are very close.
```
w = net[0].weight.data()
print('Error in estimating w', true_w.reshape(w.shape) - w)
b = net[0].bias.data()
print('Error in estimating b', true_b - b)
```
## Summary
* Using Gluon, we can implement the model more succinctly.
* In Gluon, the module `data` provides tools for data processing, the module `nn` defines a large number of neural network layers, and the module `loss` defines various loss functions.
* MXNet's module `initializer` provides various methods for model parameter initialization.
* Dimensionality and storage are automagically inferred (but caution if you want to access parameters before they've been initialized).
## Exercises
1. If we replace `l = loss(output, y)` with `l = loss(output, y).mean()`, we need to change `trainer.step(batch_size)` to `trainer.step(1)` accordingly. Why?
1. Review the MXNet documentation to see what loss functions and initialization methods are provided in the modules `gluon.loss` and `init`. Replace the loss by Huber's loss.
1. How do you access the gradient of `dense.weight`?
## Scan the QR Code to [Discuss](https://discuss.mxnet.io/t/2333)

```
from airsenal.framework.utils import *
from airsenal.framework.bpl_interface import get_fitted_team_model
from airsenal.framework.season import get_current_season, CURRENT_TEAMS
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
%matplotlib inline
model_team = get_fitted_team_model(get_current_season(), NEXT_GAMEWEEK, session)
# extract indices of current premier league teams
# val-1 because 1-indexed in model but 0-indexed in python
current_idx = {key: val-1 for key, val in model_team.team_indices.items()
if key in CURRENT_TEAMS}
top6 = ['MCI', 'LIV', 'TOT', 'CHE', 'MUN', 'ARS']
ax = plt.figure(figsize=(15, 5)).gca()
for team, idx in current_idx.items():
sns.kdeplot(model_team.a[:, idx], label=team)
plt.title('a')
plt.legend()
ax = plt.figure(figsize=(15, 5)).gca()
for team, idx in current_idx.items():
sns.kdeplot(model_team.b[:, idx], label=team)
plt.title('b')
plt.legend()
a_mean = model_team.a.mean(axis=0)
b_mean = model_team.b.mean(axis=0)
a_conf95 = np.abs(np.quantile(model_team.a, [0.025, 0.975], axis=0) - a_mean)
b_conf95 = np.abs(np.quantile(model_team.b, [0.025, 0.975], axis=0) - b_mean)
a_conf80 = np.abs(np.quantile(model_team.a, [0.1, 0.9], axis=0) - a_mean)
b_conf80 = np.abs(np.quantile(model_team.b, [0.1, 0.9], axis=0) - b_mean)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, aspect='equal')  # fig.gca() no longer accepts keyword arguments
plt.errorbar(a_mean[list(current_idx.values())],
b_mean[list(current_idx.values())],
xerr=a_conf80[:, list(current_idx.values())],
yerr=b_conf80[:, list(current_idx.values())],
marker='o', markersize=10,
linestyle='', linewidth=0.5)
plt.xlabel('a', fontsize=14)
plt.ylabel('b', fontsize=14)
for team, idx in current_idx.items():
ax.annotate(team,
(a_mean[idx]-0.03, b_mean[idx]+0.02),
fontsize=12)
plt.plot([0.6, 1.6], [0.6, 1.6], "k--")
# team features (excluding first column which is team name)
feats = model_team.X.columns[1:]
for idx in range(model_team.beta_a.shape[1]):
sns.kdeplot(model_team.beta_a[:,idx],
label=feats[idx])
plt.legend()
plt.title('beta_a')
plt.figure()
for idx in range(model_team.beta_b.shape[1]):
sns.kdeplot(model_team.beta_b[:,idx],
label=feats[idx])
plt.legend()
plt.title('beta_b')
beta_a_mean = model_team.beta_a.mean(axis=0)
beta_b_mean = model_team.beta_b.mean(axis=0)
beta_a_conf95 = np.abs(np.quantile(model_team.beta_a, [0.025, 0.975], axis=0) - beta_a_mean)
beta_b_conf95 = np.abs(np.quantile(model_team.beta_b, [0.025, 0.975], axis=0) - beta_b_mean)
beta_a_conf80 = np.abs(np.quantile(model_team.beta_a, [0.1, 0.9], axis=0) - beta_a_mean)
beta_b_conf80 = np.abs(np.quantile(model_team.beta_b, [0.1, 0.9], axis=0) - beta_b_mean)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111, aspect='equal')  # fig.gca() no longer accepts keyword arguments
plt.errorbar(beta_a_mean,
beta_b_mean,
xerr=beta_a_conf80,
yerr=beta_b_conf80,
marker='o', markersize=10,
linestyle='', linewidth=0.5)
plt.xlabel('beta_a', fontsize=14)
plt.ylabel('beta_b', fontsize=14)
plt.title('FIFA Ratings')
for idx, feat in enumerate(feats):
ax.annotate(feat,
(beta_a_mean[idx]-0.03, beta_b_mean[idx]+0.02),
fontsize=12)
xlim = ax.get_xlim()
ylim = ax.get_ylim()
plt.plot([0, 0], ylim, color='k', linewidth=0.75)
plt.plot(xlim, [0, 0], color='k', linewidth=0.75)
plt.xlim(xlim)
plt.ylim(ylim)
sns.kdeplot(model_team.beta_b_0)
sns.kdeplot(model_team.sigma_a)
sns.kdeplot(model_team.sigma_b)
sns.kdeplot(model_team.gamma)
model_team.log_score()
team_h = "MCI"
team_a = "MUN"
model_team.plot_score_probabilities(team_h, team_a);
model_team.concede_n_probability(2, team_h, team_a)
model_team.score_n_probability(2, team_h, team_a)
model_team.overall_probabilities(team_h, team_a)
model_team.score_probability(team_h, team_a, 2, 2)
sim = model_team.simulate_match(team_h, team_a)
sim[team_h].value_counts(normalize=True).sort_index().plot.bar()
plt.title(team_h)
plt.ylim([0, 0.4])
plt.xlim([-1, 8])
plt.figure()
sim[team_a].value_counts(normalize=True).sort_index().plot.bar()
plt.title(team_a)
plt.ylim([0, 0.4])
plt.xlim([-1, 8])
max_goals = 10
prob_score_h = [model_team.score_n_probability(n, team_h, team_a) for n in range(max_goals)]
print(team_h, "exp goals", sum([n*prob_score_h[n] for n in range(max_goals)])/sum(prob_score_h))
prob_score_a = [model_team.score_n_probability(n, team_a, team_h, home=False) for n in range(max_goals)]
print(team_a, "exp goals", sum([n*prob_score_a[n] for n in range(max_goals)])/sum(prob_score_a))
max_prob = 1.1*max(prob_score_h + prob_score_a)
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.bar(range(max_goals), prob_score_h)
plt.ylim([0, max_prob])
plt.xlim([-1, max_goals])
plt.title(team_h)
plt.subplot(1,2,2)
plt.bar(range(max_goals), prob_score_a)
plt.ylim([0, max_prob])
plt.xlim([-1, max_goals])
plt.title(team_a);
df = model_team.simulate_match(team_h, team_a)
print(df.quantile(0.25))
print(df.median())
print(df.quantile(0.75))
```
# TV Script Generation
In this project, you'll generate your own [Seinfeld](https://en.wikipedia.org/wiki/Seinfeld) TV scripts using RNNs. You'll be using part of the [Seinfeld dataset](https://www.kaggle.com/thec03u5/seinfeld-chronicles#scripts.csv) of scripts from 9 seasons. The Neural Network you'll build will generate a new, "fake" TV script based on patterns it recognizes in this training data.
## Get the Data
The data is already provided for you in `./data/Seinfeld_Scripts.txt` and you're encouraged to open that file and look at the text.
>* As a first step, we'll load in this data and look at some samples.
* Then, you'll be tasked with defining and training an RNN to generate a new script!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# load in data
import helper
data_dir = './data/Seinfeld_Scripts.txt'
text = helper.load_data(data_dir)
```
## Explore the Data
Play around with `view_line_range` to view different parts of the data. This will give you a sense of the data you'll be working with. You can see, for example, that it is all lowercase text, and each new line of dialogue is separated by a newline character `\n`.
```
view_line_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
lines = text.split('\n')
print('Number of lines: {}'.format(len(lines)))
word_count_line = [len(line.split()) for line in lines]
print('Average number of words in each line: {}'.format(np.average(word_count_line)))
print()
print('The lines {} to {}:'.format(*view_line_range))
print('\n'.join(text.split('\n')[view_line_range[0]:view_line_range[1]]))
```
---
## Implement Pre-processing Functions
The first thing to do to any dataset is pre-processing. Implement the following pre-processing functions below:
- Lookup Table
- Tokenize Punctuation
### Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call `vocab_to_int`
- Dictionary to go from the id to word, we'll call `int_to_vocab`
Return these dictionaries in the following **tuple** `(vocab_to_int, int_to_vocab)`
```
import problem_unittests as tests
from collections import Counter
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
count = Counter(text)
vocabulary = sorted(count, key=count.get, reverse=True)
int_vocabulary = {i: word for i, word in enumerate(vocabulary)}
vocabulary_int = {word: i for i, word in int_vocabulary.items()}
# return tuple
return (vocabulary_int, int_vocabulary)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
```
### Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks can create multiple ids for the same word. For example, "bye" and "bye!" would generate two different word ids.
Implement the function `token_lookup` to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( **.** )
- Comma ( **,** )
- Quotation Mark ( **"** )
- Semicolon ( **;** )
- Exclamation mark ( **!** )
- Question mark ( **?** )
- Left Parentheses ( **(** )
- Right Parentheses ( **)** )
- Dash ( **-** )
- Return ( **\n** )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a value that could be confused as a word; for example, instead of using the value "dash", try using something like "||dash||".
```
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenized dictionary where the key is the punctuation and the value is the token
"""
# TODO: Implement Function
tokenfinder = dict()
tokenfinder['.'] = '<PERIOD>'
tokenfinder[','] = '<COMMA>'
tokenfinder['"'] = '<QUOTATION_MARK>'
tokenfinder[';'] = '<SEMICOLON>'
tokenfinder['!'] = '<EXCLAMATION_MARK>'
tokenfinder['?'] = '<QUESTION_MARK>'
tokenfinder['('] = '<LEFT_PAREN>'
tokenfinder[')'] = '<RIGHT_PAREN>'
tokenfinder['-'] = '<DASH>'
tokenfinder['\n'] = '<NEW_LINE>'
return tokenfinder
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
```
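To see why the space-padding matters, here is a sketch of how a dictionary like the one above can be applied. The real substitution presumably happens inside the provided `helper` pre-processing code; this standalone version is illustrative only:

```python
def apply_tokens(text, token_dict):
    # Replace each symbol with its token, padded with spaces,
    # so that splitting on whitespace separates it from adjacent words.
    for symbol, token in token_dict.items():
        text = text.replace(symbol, ' {} '.format(token))
    return text.split()

tokens = {'.': '<PERIOD>', '!': '<EXCLAMATION_MARK>'}
print(apply_tokens('bye! bye.', tokens))
# ['bye', '<EXCLAMATION_MARK>', 'bye', '<PERIOD>']
```

Note that "bye!" and "bye" now map to the same word id, with the punctuation carried by a separate token.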
## Pre-process all the data and save it
Running the code cell below will pre-process all the data and save it to file. You're encouraged to look at the code for `preprocess_and_save_data` in the `helper.py` file to see what it's doing in detail, but you do not need to change this code.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# pre-process training data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
```
## Build the Neural Network
In this section, you'll build the components necessary to build an RNN by implementing the RNN Module and forward and backpropagation functions.
### Check Access to GPU
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
# Check for a GPU
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('No GPU found. Please use a GPU to train your neural network.')
```
## Input
Let's start with the preprocessed input data. We'll use [TensorDataset](http://pytorch.org/docs/master/data.html#torch.utils.data.TensorDataset) to provide a known format to our dataset; in combination with [DataLoader](http://pytorch.org/docs/master/data.html#torch.utils.data.DataLoader), it will handle batching, shuffling, and other dataset iteration functions.
You can create data with TensorDataset by passing in feature and target tensors. Then create a DataLoader as usual.
```
data = TensorDataset(feature_tensors, target_tensors)
data_loader = torch.utils.data.DataLoader(data,
batch_size=batch_size)
```
### Batching
Implement the `batch_data` function to batch `words` data into chunks of size `batch_size` using the `TensorDataset` and `DataLoader` classes.
>You can batch words using the DataLoader, but it will be up to you to create `feature_tensors` and `target_tensors` of the correct size and content for a given `sequence_length`.
For example, say we have these as input:
```
words = [1, 2, 3, 4, 5, 6, 7]
sequence_length = 4
```
Your first `feature_tensor` should contain the values:
```
[1, 2, 3, 4]
```
And the corresponding `target_tensor` should just be the next "word"/tokenized word value:
```
5
```
This should continue with the second `feature_tensor`, `target_tensor` being:
```
[2, 3, 4, 5] # features
6 # target
```
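The sliding-window scheme described above can be sketched independently of PyTorch; the windowing is plain list slicing, after which `TensorDataset`/`DataLoader` only handle batching:

```python
def make_windows(words, sequence_length):
    # Each feature is a window of sequence_length words; the target is the next word.
    features, targets = [], []
    for i in range(len(words) - sequence_length):
        features.append(words[i:i + sequence_length])
        targets.append(words[i + sequence_length])
    return features, targets

feats, targs = make_windows([1, 2, 3, 4, 5, 6, 7], sequence_length=4)
print(feats[0], targs[0])  # [1, 2, 3, 4] 5
```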
```
from torch.utils.data import TensorDataset, DataLoader
def batch_data(words, sequence_length, batch_size):
"""
Batch the neural network data using DataLoader
:param words: The word ids of the TV scripts
:param sequence_length: The sequence length of each batch
:param batch_size: The size of each batch; the number of sequences in a batch
:return: DataLoader with batched data
"""
# TODO: Implement function
n_batches = len(words)//batch_size
    # truncate so that only complete batches remain
words = words[:n_batches*batch_size]
# length of output
y_len = len(words) - sequence_length
# empty lists for sequences
x, y = [], []
for i in range(0, y_len):
i_end = sequence_length + i
x_batch = words[i:i_end]
x.append(x_batch)
batch_y = words[i_end]
y.append(batch_y)
data = TensorDataset(torch.from_numpy(np.asarray(x)), torch.from_numpy(np.asarray(y)))
data_loader = DataLoader(data, shuffle=False, batch_size=batch_size)
# return a dataloader
return data_loader
# there is no test for this function, but you are encouraged to create
# print statements and tests of your own
```
### Test your dataloader
You'll have to modify this code to test a batching function, but it should look fairly similar.
Below, we're generating some test text data and defining a dataloader using the function you defined, above. Then, we are getting some sample batch of inputs `sample_x` and targets `sample_y` from our dataloader.
Your code should return something like the following (likely in a different order, if you shuffled your data):
```
torch.Size([10, 5])
tensor([[ 28, 29, 30, 31, 32],
[ 21, 22, 23, 24, 25],
[ 17, 18, 19, 20, 21],
[ 34, 35, 36, 37, 38],
[ 11, 12, 13, 14, 15],
[ 23, 24, 25, 26, 27],
[ 6, 7, 8, 9, 10],
[ 38, 39, 40, 41, 42],
[ 25, 26, 27, 28, 29],
[ 7, 8, 9, 10, 11]])
torch.Size([10])
tensor([ 33, 26, 22, 39, 16, 28, 11, 43, 30, 12])
```
### Sizes
Your sample_x should be of size `(batch_size, sequence_length)` or (10, 5) in this case and sample_y should just have one dimension: batch_size (10).
### Values
You should also notice that the targets, sample_y, are the *next* value in the ordered test_text data. So, for an input sequence `[ 28, 29, 30, 31, 32]` that ends with the value `32`, the corresponding output should be `33`.
```
# test dataloader
test_text = range(50)
t_loader = batch_data(test_text, sequence_length=5, batch_size=10)
data_iter = iter(t_loader)
sample_x, sample_y = next(data_iter)  # iterator .next() was removed in Python 3
print(sample_x.shape)
print(sample_x)
print()
print(sample_y.shape)
print(sample_y)
```
---
## Build the Neural Network
Implement an RNN using PyTorch's [Module class](http://pytorch.org/docs/master/nn.html#torch.nn.Module). You may choose to use a GRU or an LSTM. To complete the RNN, you'll have to implement the following functions for the class:
- `__init__` - The initialize function.
- `init_hidden` - The initialization function for an LSTM/GRU hidden state
- `forward` - Forward propagation function.
The initialize function should create the layers of the neural network and save them to the class. The forward propagation function will use these layers to run forward propagation and generate an output and a hidden state.
**The output of this model should be the *last* batch of word scores** after a complete sequence has been processed. That is, for each input sequence of words, we only want to output the word scores for a single, most likely, next word.
### Hints
1. Make sure to stack the outputs of the lstm to pass to your fully-connected layer, you can do this with `lstm_output = lstm_output.contiguous().view(-1, self.hidden_dim)`
2. You can get the last batch of word scores by shaping the output of the final, fully-connected layer like so:
```
# reshape into (batch_size, seq_length, output_size)
output = output.view(batch_size, -1, self.output_size)
# get last batch
out = output[:, -1]
```
```
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5):
"""
Initialize the PyTorch RNN Module
:param vocab_size: The number of input dimensions of the neural network (the size of the vocabulary)
:param output_size: The number of output dimensions of the neural network
:param embedding_dim: The size of embeddings, should you choose to use them
:param hidden_dim: The size of the hidden layer outputs
        :param n_layers: The number of stacked LSTM/GRU layers
        :param dropout: dropout to add in between LSTM/GRU layers
"""
super(RNN, self).__init__()
# TODO: Implement function
self.embed = nn.Embedding(vocab_size, embedding_dim)
# set class variables
self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, dropout=dropout, batch_first=True)
self.output_size = output_size
self.n_layers = n_layers
self.hidden_dim = hidden_dim
# define model layers
self.fc = nn.Linear(hidden_dim, output_size)
def forward(self, nn_input, hidden):
"""
Forward propagation of the neural network
:param nn_input: The input to the neural network
:param hidden: The hidden state
:return: Two Tensors, the output of the neural network and the latest hidden state
"""
# TODO: Implement function
batch_size = nn_input.size(0)
embedding = self.embed(nn_input)
lstm_output, hidden = self.lstm(embedding, hidden)
# return one batch of output word scores and the hidden state
out = self.fc(lstm_output)
out = out.view(batch_size, -1, self.output_size)
out = out[:, -1]
return out, hidden
def init_hidden(self, batch_size):
'''
Initialize the hidden state of an LSTM/GRU
:param batch_size: The batch_size of the hidden state
:return: hidden state of dims (n_layers, batch_size, hidden_dim)
'''
# Implement function
weight = next(self.parameters()).data
if (train_on_gpu):
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())
else:
hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),
weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())
return hidden
# initialize hidden state with zero weights, and move to GPU if available
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_rnn(RNN, train_on_gpu)
```
### Define forward and backpropagation
Use the RNN class you implemented to apply forward and back propagation. This function will be called, iteratively, in the training loop as follows:
```
loss = forward_back_prop(decoder, decoder_optimizer, criterion, inp, target)
```
And it should return the average loss over a batch and the hidden state returned by a call to `RNN(inp, hidden)`. Recall that you can get this loss by computing it, as usual, and calling `loss.item()`.
**If a GPU is available, you should move your data to that GPU device, here.**
```
def forward_back_prop(rnn, optimizer, criterion, inp, target, hidden):
"""
Forward and backward propagation on the neural network
:param decoder: The PyTorch Module that holds the neural network
:param decoder_optimizer: The PyTorch optimizer for the neural network
:param criterion: The PyTorch loss function
:param inp: A batch of input to the neural network
:param target: The target output for the batch of input
:return: The loss and the latest hidden state Tensor
"""
# TODO: Implement Function
if(train_on_gpu):
rnn.cuda()
# detach the hidden state from its history so we don't backpropagate through the entire training history
h1 = tuple([each.data for each in hidden])
rnn.zero_grad()
# perform backpropagation and optimization
if(train_on_gpu):
inputs, target = inp.cuda(), target.cuda()
output, h = rnn(inputs, h1)
loss = criterion(output, target)
loss.backward()
nn.utils.clip_grad_norm_(rnn.parameters(), 5)
optimizer.step()
# return the loss over a batch and the hidden state produced by our model
return loss.item(), h
# Note that these tests aren't completely extensive.
# they are here to act as general checks on the expected outputs of your functions
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_forward_back_prop(RNN, forward_back_prop, train_on_gpu)
```
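The `clip_grad_norm_` call in the cell above rescales all gradients whenever their combined L2 norm exceeds a threshold (5 here), which guards against exploding gradients in RNNs. A minimal pure-Python sketch of the same arithmetic (the function name and values are illustrative, not part of PyTorch):

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Rescale a flat list of gradient values so their global L2 norm is at most max_norm."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm          # shrink every gradient by the same factor
        grads = [g * scale for g in grads]
    return grads, total_norm

clipped, norm = clip_by_global_norm([3.0, 4.0], max_norm=5.0)    # norm is exactly 5, left unchanged
clipped2, norm2 = clip_by_global_norm([6.0, 8.0], max_norm=5.0)  # norm 10, halved to [3, 4]
```

Note that clipping preserves the gradient's direction and only shrinks its magnitude, unlike element-wise clamping.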
## Neural Network Training
With the structure of the network complete and data ready to be fed in the neural network, it's time to train it.
### Train Loop
The training loop is implemented for you in the `train_rnn` function. This function trains the network over all the batches for the given number of epochs. Model progress is printed every `show_every_n_batches` batches. You'll set this parameter along with the others in the next section.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def train_rnn(rnn, batch_size, optimizer, criterion, n_epochs, show_every_n_batches=100):
batch_losses = []
rnn.train()
print("Training for %d epoch(s)..." % n_epochs)
for epoch_i in range(1, n_epochs + 1):
# initialize hidden state
hidden = rnn.init_hidden(batch_size)
for batch_i, (inputs, labels) in enumerate(train_loader, 1):
# make sure you iterate over completely full batches, only
n_batches = len(train_loader.dataset)//batch_size
if(batch_i > n_batches):
break
# forward, back prop
loss, hidden = forward_back_prop(rnn, optimizer, criterion, inputs, labels, hidden)
# record loss
batch_losses.append(loss)
# printing loss stats
if batch_i % show_every_n_batches == 0:
print('Epoch: {:>4}/{:<4} Loss: {}\n'.format(
epoch_i, n_epochs, np.average(batch_losses)))
batch_losses = []
# returns a trained rnn
return rnn
```
### Hyperparameters
Set and train the neural network with the following parameters:
- Set `sequence_length` to the length of a sequence.
- Set `batch_size` to the batch size.
- Set `num_epochs` to the number of epochs to train for.
- Set `learning_rate` to the learning rate for an Adam optimizer.
- Set `vocab_size` to the number of unique tokens in our vocabulary.
- Set `output_size` to the desired size of the output.
- Set `embedding_dim` to the embedding dimension; smaller than the vocab_size.
- Set `hidden_dim` to the hidden dimension of your RNN.
- Set `n_layers` to the number of layers/cells in your RNN.
- Set `show_every_n_batches` to the number of batches at which the neural network should print progress.
If the network isn't getting the desired results, tweak these parameters and/or the layers in the `RNN` class.
```
# Data params
# Sequence Length
sequence_length = 5 # of words in a sequence
# Batch Size
batch_size = 128
# data loader - do not change
train_loader = batch_data(int_text, sequence_length, batch_size)
# Training parameters
# Number of Epochs
num_epochs = 10
# Learning Rate
learning_rate = 0.001
# Model parameters
# Vocab size
vocab_size = len(vocab_to_int)
# Output size
output_size = vocab_size
# Embedding Dimension
embedding_dim = 200
# Hidden Dimension
hidden_dim = 250
# Number of RNN Layers
n_layers = 2
# Show stats for every n number of batches
show_every_n_batches = 500
```
### Train
In the next cell, you'll train the neural network on the pre-processed data. If you have a hard time getting a good loss, you may consider changing your hyperparameters. In general, you may get better results with larger hidden and n_layer dimensions, but larger models take a longer time to train.
> **You should aim for a loss less than 3.5.**
You should also experiment with different sequence lengths, which determine the size of the long-range dependencies that the model can learn.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# create model and move to gpu if available
rnn = RNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers, dropout=0.5)
if train_on_gpu:
rnn.cuda()
# defining loss and optimization functions for training
optimizer = torch.optim.Adam(rnn.parameters(), lr=learning_rate)
criterion = nn.CrossEntropyLoss()
# training the model
trained_rnn = train_rnn(rnn, batch_size, optimizer, criterion, num_epochs, show_every_n_batches)
# saving the trained model
helper.save_model('./save/trained_rnn', trained_rnn)
print('Model Trained and Saved')
```
### Question: How did you decide on your model hyperparameters?
For example, did you try different sequence_lengths and find that one size made the model converge faster? What about your hidden_dim and n_layers; how did you decide on those?
**Answer:**
- I experimented with the hyperparameters and observed the different outcomes. From the course content, typical layer dimensions were in the range 100-300. For finer tuning, I referred to this blog:
https://towardsdatascience.com/choosing-the-right-hyperparameters-for-a-simple-lstm-using-keras-f8e9ed76f046
- I particularly liked this post: https://blog.floydhub.com/guide-to-hyperparameters-search-for-deep-learning-models/
- It covers hyperparameters in general, but its emphasis on preventing overfitting and underfitting was especially useful. I initially tried a learning rate of 0.01, but the loss saturated after only 4 epochs; my best understanding is that the optimizer was taking steps that were too large, and these big jumps came at the cost of missing many features that matter for predicting patterns.
- A learning rate of 0.0001 turned out to be too low: the model still converged, but so slowly that we would have to wait much longer for the loss to come down. I first tried a sequence length of 25, and the loss was high, starting near 11 and only falling to between 7 and 6.2. My takeaway was that shorter sequences let the model identify patterns more accurately rather than rushing over long contexts and incurring large losses.
- So I tried sequence lengths of 20, 15, and 10, and the loss decreased each time. I finally settled on 5 to keep sequences short enough to capture patterns reliably instead of spoiling training with a large value. A sequence length of 10 also captured patterns well, with the loss getting down to about 3.7.
- With a hidden dimension of 250, I do not think the loss was limited by model capacity. Of course, a cyclic learning rate could be used to reduce the loss further by applying an optimized learning rate at the right time.
- Finally, with a learning rate of 0.001, a sequence length of 5, and a batch size of 128, I was able to get the loss below 3.5! Batch size also affects training: larger batches speed up training at some cost in loss, while very small batches take a long time. Batch sizes below 128 converged to only slightly better loss, so 128 seemed a reasonable tradeoff, though I would also recommend 64.
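The cyclic learning rate mentioned above can be sketched with the triangular schedule: the rate ramps linearly from a base value up to a maximum and back over each cycle. A minimal illustration (the function and step counts here are illustrative, not tied to this notebook's training loop):

```python
import math

def triangular_lr(step, base_lr, max_lr, step_size):
    """Triangular cyclic learning rate: linear ramp up over step_size steps, then back down."""
    cycle = math.floor(1 + step / (2 * step_size))
    x = abs(step / step_size - 2 * cycle + 1)        # position within the current cycle, in [0, 1]
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

# ramps 0.0001 -> 0.001 over 500 steps, then back down over the next 500
lrs = [triangular_lr(s, 1e-4, 1e-3, 500) for s in (0, 250, 500, 1000)]
```

PyTorch also ships a built-in scheduler for this pattern (`torch.optim.lr_scheduler.CyclicLR`).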
---
# Checkpoint
After running the above training cell, your model will be saved by name, `trained_rnn`, and if you save your notebook progress, **you can pause here and come back to this code at another time**. You can resume your progress by running the next cell, which will load in our word:id dictionaries _and_ load in your saved model by name!
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import torch
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
trained_rnn = helper.load_model('./save/trained_rnn')
```
## Generate TV Script
With the network trained and saved, you'll use it to generate a new, "fake" Seinfeld TV script in this section.
### Generate Text
To generate the text, the network needs to start with a single word and repeat its predictions until it reaches a set length. You'll be using the `generate` function to do this. It takes a word id to start with, `prime_id`, and generates a set length of text, `predict_len`. Also note that it uses top-k sampling to introduce some randomness in choosing the most likely next word, given an output set of word scores!
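Top-k sampling keeps only the k highest-probability words, renormalizes them, and draws one at random. A standalone NumPy sketch of the idea (the 5-word score vector is made up for illustration):

```python
import numpy as np

def top_k_sample(scores, k, rng=None):
    """Keep the k largest scores, renormalize, and draw one vocabulary index."""
    if rng is None:
        rng = np.random.default_rng(0)
    scores = np.asarray(scores, dtype=float)
    top_i = np.argsort(scores)[-k:]   # indices of the k most likely words
    p = scores[top_i]
    p = p / p.sum()                   # renormalize over just the top k
    return rng.choice(top_i, p=p)

# e.g. softmax output over a 5-word vocabulary; with k=2 only indices 1 and 3 can be drawn
probs = [0.05, 0.40, 0.10, 0.30, 0.15]
word_i = top_k_sample(probs, k=2)
```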
```
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import torch.nn.functional as F
def generate(rnn, prime_id, int_to_vocab, token_dict, pad_value, predict_len=100):
"""
Generate text using the neural network
:param rnn: The PyTorch Module that holds the trained neural network
:param prime_id: The word id to start the first prediction
:param int_to_vocab: Dict of word id keys to word values
:param token_dict: Dict of punctuation token keys to punctuation values
:param pad_value: The value used to pad a sequence
:param predict_len: The length of text to generate
:return: The generated text
"""
rnn.eval()
# create a sequence (batch_size=1) with the prime_id
current_seq = np.full((1, sequence_length), pad_value)
current_seq[-1][-1] = prime_id
predicted = [int_to_vocab[prime_id]]
for _ in range(predict_len):
if train_on_gpu:
current_seq = torch.LongTensor(current_seq).cuda()
else:
current_seq = torch.LongTensor(current_seq)
# initialize the hidden state
hidden = rnn.init_hidden(current_seq.size(0))
# get the output of the rnn
output, _ = rnn(current_seq, hidden)
# get the next word probabilities
p = F.softmax(output, dim=1).data
if(train_on_gpu):
p = p.cpu() # move to cpu
# use top_k sampling to get the index of the next word
top_k = 5
p, top_i = p.topk(top_k)
top_i = top_i.numpy().squeeze()
# select the likely next word index with some element of randomness
p = p.numpy().squeeze()
word_i = np.random.choice(top_i, p=p/p.sum())
# retrieve that word from the dictionary
word = int_to_vocab[word_i]
predicted.append(word)
# the generated word becomes the next "current sequence" and the cycle can continue
current_seq = np.roll(current_seq, -1, 1)
current_seq[-1][-1] = word_i
gen_sentences = ' '.join(predicted)
# Replace punctuation tokens
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
gen_sentences = gen_sentences.replace(' ' + token.lower(), key)
gen_sentences = gen_sentences.replace('\n ', '\n')
gen_sentences = gen_sentences.replace('( ', '(')
# return all the sentences
return gen_sentences
```
### Generate a New Script
It's time to generate the text. Set `gen_length` to the length of TV script you want to generate and set `prime_word` to one of the following to start the prediction:
- "jerry"
- "elaine"
- "george"
- "kramer"
You can set the prime word to _any word_ in our dictionary, but it's best to start with a name for generating a TV script. (You can also start with any other names you find in the original text file!)
```
# run the cell multiple times to get different results!
gen_length = 400 # modify the length to your preference
prime_word = 'jerry' # name for starting the script
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
pad_word = helper.SPECIAL_WORDS['PADDING']
generated_script = generate(trained_rnn, vocab_to_int[prime_word + ':'], int_to_vocab, token_dict, vocab_to_int[pad_word], gen_length)
print(generated_script)
```
#### Save your favorite scripts
Once you have a script that you like (or find interesting), save it to a text file!
```
# save script to a text file
f = open("generated_script_1.txt","w")
f.write(generated_script)
f.close()
```
# The TV Script is Not Perfect
It's okay if the TV script doesn't make perfect sense. It should look like alternating lines of dialogue; here is an example of a few generated lines.
### Example generated script
>jerry: what about me?
>
>jerry: i don't have to wait.
>
>kramer:(to the sales table)
>
>elaine:(to jerry) hey, look at this, i'm a good doctor.
>
>newman:(to elaine) you think i have no idea of this...
>
>elaine: oh, you better take the phone, and he was a little nervous.
>
>kramer:(to the phone) hey, hey, jerry, i don't want to be a little bit.(to kramer and jerry) you can't.
>
>jerry: oh, yeah. i don't even know, i know.
>
>jerry:(to the phone) oh, i know.
>
>kramer:(laughing) you know...(to jerry) you don't know.
You can see that there are multiple characters that say (somewhat) complete sentences, but it doesn't have to be perfect! It takes quite a while to get good results, and often, you'll have to use a smaller vocabulary (and discard uncommon words), or get more data. The Seinfeld dataset is about 3.4 MB, which is big enough for our purposes; for script generation you'll want more than 1 MB of text, generally.
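One common way to shrink the vocabulary, as suggested above, is to keep only words that occur at least a minimum number of times and map everything else to an `<UNK>` token. A minimal sketch (this helper and the special tokens are illustrative, not part of this project's `helper.py`):

```python
from collections import Counter

def build_vocab(words, min_count=2, specials=("<PAD>", "<UNK>")):
    """Keep only words appearing at least min_count times; map the rest to <UNK>."""
    counts = Counter(words)
    vocab = list(specials) + sorted(w for w, c in counts.items() if c >= min_count)
    vocab_to_int = {w: i for i, w in enumerate(vocab)}
    unk = vocab_to_int["<UNK>"]
    int_text = [vocab_to_int.get(w, unk) for w in words]
    return vocab_to_int, int_text

words = "jerry hello jerry newman hello rare".split()
v2i, ints = build_vocab(words, min_count=2)   # "newman" and "rare" collapse to <UNK>
```

Dropping rare words this way shrinks the embedding and output layers, which usually speeds up training at the cost of occasionally emitting `<UNK>`.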
# Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_tv_script_generation.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "helper.py" and "problem_unittests.py" files in your submission. Once you download these files, compress them into one zip file for submission.
# Deterministic point jet
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import matplotlib.pylab as pl
```
\begin{equation}
\partial_t \zeta = \frac{\zeta_{jet}}{\tau} - \mu \zeta + \nu_\alpha \nabla^{2\alpha} \zeta - \beta \partial_x \psi - J(\psi, \zeta)
\end{equation}
Here $\zeta_{jet}$ is the profile of a prescribed jet and $\tau = 1/\mu$ is the relaxation time. Also $\beta = 2\Omega \cos\theta$, where $\theta = 0$ corresponds to the equator.
Parameters
> $\Omega = 2\pi$
> $\mu = 0.05$
> $\nu = 0$
> $\nu_4 = 0$
> $\Xi = 1.0$
> $\Delta \theta = 0.1$
> $\tau$ = 20 days
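As a quick consistency check on these parameters, τ = 1/μ reproduces the stated 20 days, and at the equator (θ = 0) the β term reduces to 2Ω:

```python
import math

mu, Omega, theta = 0.05, 2 * math.pi, 0.0
tau = 1 / mu                          # relaxation time: 20 (days)
beta = 2 * Omega * math.cos(theta)    # beta at the equator: 2 * Omega = 4*pi
```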
```
dn = "pointjet/8x8/"
# Parameters: μ = 0.05, τ = 20.0, Ξ = 1.0*Ω and Δθ = 0.1
M = 8
N = 8
colors = pl.cm.nipy_spectral(np.linspace(0,1,M))
```
## NL v GQL v GCE2
```
nl = np.load(dn+"nl.npz",allow_pickle=True)
gql = np.load(dn+"gql.npz",allow_pickle=True)
gce2 = np.load(dn+"gce2.npz",allow_pickle=True)
# fig,ax = plt.subplots(1,2,figsize=(14,5))
# # Energy
# ax[0].plot(nl['t'],nl['Etav'],label=r'$\langle NL \rangle$')
# ax[0].plot(gql['t'],gql['Etav'],label=r'$\langle GQL(M) \rangle$')
# ax[0].plot(gce2['t'],gce2['Etav'],label=r'$\langle GCE2(M) \rangle$')
# ax[0].set_xlabel(r'$t$',fontsize=14)
# ax[0].set_ylabel(r'$E$',fontsize=14)
# # ax[0].legend(bbox_to_anchor=(1.01,0.85),fontsize=14)
# # Enstrophy
# ax[1].plot(nl['t'],nl['Ztav'],label=r'$\langle NL \rangle$')
# ax[1].plot(gql['t'],gql['Ztav'],label=r'$\langle GQL(M) \rangle$')
# ax[1].plot(gce2['t'],gce2['Ztav'],label=r'$\langle GCE2(M) \rangle$')
# ax[1].set_xlabel(r'$t$',fontsize=14)
# ax[1].set_ylabel(r'$Z$',fontsize=14)
# ax[1].legend(loc=4,fontsize=14)
# plt.show()
fig,ax = plt.subplots(1,3,figsize=(14,5))
ax[0].set_title(f'NL')
for i,x in enumerate(nl['Emt'].T):
ax[0].plot(nl['t'],x,label=i,c=colors[i])
ax[1].set_title(f'GQL(M)')
for i,x in enumerate(gql['Emt'].T):
ax[1].plot(gql['t'],x,label=i,c=colors[i])
ax[2].set_title(f'GCE2(M)')
for i,x in enumerate(gce2['Emt'].T):
ax[2].plot(gce2['t'],x,label=i,c=colors[i])
for a in ax:
a.set_xlabel(r'$t$',fontsize=14)
a.set_yscale('log')
a.set_ylim(1e-12,1e2)
ax[0].set_ylabel(r'$E(m)$',fontsize=14)
ax[2].legend(bbox_to_anchor=(1.01,0.85),ncol=1)
plt.show()
# fig,ax = plt.subplots(1,3,figsize=(16,4))
# im = ax[0].imshow((nl['Vxy'][:,:,0]),interpolation="bicubic",cmap="RdBu_r",origin="lower")
# fig.colorbar(im, ax=ax[0])
# ax[0].set_title(r'NL: $\zeta(x,y,t = 0 )$',fontsize=14)
# im = ax[1].imshow((gql['Vxy'][:,:,0]),interpolation="bicubic",cmap="RdBu_r",origin="lower")
# fig.colorbar(im, ax=ax[1])
# ax[1].set_title(r'GQL: $\zeta(x,y,t = 0 )$',fontsize=14)
# im = ax[2].imshow((gce2['Vxy'][:,:,0]),interpolation="bicubic",cmap="RdBu_r",origin="lower")
# fig.colorbar(im, ax=ax[2])
# ax[2].set_title(r'GCE2:$\zeta(x,y,t = 0 )$',fontsize=14)
# for a in ax:
# a.set_xticks([0,M-1,2*M-2])
# a.set_xticklabels([r'$0$',r'$\pi$',r'$2\pi$'],fontsize=14)
# a.set_yticks([0,M-1,2*M-2])
# a.set_yticklabels([r'$0$',r'$\pi$',r'$2\pi$'],fontsize=14)
# plt.show()
fig,ax = plt.subplots(1,3,figsize=(16,4))
im = ax[0].imshow((nl['Vxy'][:,:,-1]),interpolation="bicubic",cmap="RdBu_r",origin="lower")
fig.colorbar(im, ax=ax[0])
ax[0].set_title(r'NL: $\zeta(x,y,t = T )$',fontsize=14)
im = ax[1].imshow((gql['Vxy'][:,:,-1]),interpolation="bicubic",cmap="RdBu_r",origin="lower")
fig.colorbar(im, ax=ax[1])
ax[1].set_title(r'GQL: $\zeta(x,y,t = T )$',fontsize=14)
im = ax[2].imshow((gce2['Vxy'][:,:,-1]),interpolation="bicubic",cmap="RdBu_r",origin="lower")
fig.colorbar(im, ax=ax[2])
ax[2].set_title(r'GCE2: $\zeta(x,y,t = T )$',fontsize=14)
for a in ax:
a.set_xticks([0,M-1,2*M-2])
a.set_xticklabels([r'$0$',r'$\pi$',r'$2\pi$'],fontsize=14)
a.set_yticks([0,M-1,2*M-2])
a.set_yticklabels([r'$0$',r'$\pi$',r'$2\pi$'],fontsize=14)
plt.show()
```
## QL v CE2 v GCE2(0)
```
ql = np.load(dn+"ql.npz",allow_pickle=True)
ce2 = np.load(dn+"ce2.npz",allow_pickle=True)
gce2 = np.load(dn+"gce2_0.npz",allow_pickle=True)
fig,ax = plt.subplots(1,3,figsize=(18,5))
# Energy
ax[0].set_title(f'QL',fontsize=14)
for i,x in enumerate(ql['Emtav'].T):
ax[0].plot(ql['t'],x,label=i,c=colors[i])
ax[1].set_title(f'CE2',fontsize=14)
for i,x in enumerate(ce2['Emtav'].T):
ax[1].plot(ce2['t'],x,label=i,c=colors[i])
ax[2].set_title(f'GCE2(0)',fontsize=14)
for i,x in enumerate(gce2['Emtav'].T):
ax[2].plot(gce2['t'],x,label=i,c=colors[i])
ax[2].legend(bbox_to_anchor=(1.01,0.5),ncol=1)
for a in ax:
a.set_xlabel(r'$t$',fontsize=14)
a.set_ylabel(r'$E$',fontsize=14)
a.set_yscale('log')
a.set_ylim(1e-12,1e2)
# plt.show()
# plt.savefig(dn+"ze_tau20_qlce2gce2_0.png",bbox_inches='tight',)
```
## GQL(1) v GCE2(1)
```
gql = np.load(dn+"gql_1.npz",allow_pickle=True)
gce2 = np.load(dn+"gce2_1.npz",allow_pickle=True)
fig,ax = plt.subplots(1,2,figsize=(14,5))
# Energy
ax[0].set_title(f'GQL(1)',fontsize=14)
for i,x in enumerate(gql['Emtav'].T):
ax[0].plot(gql['t'],x,label=i,c=colors[i])
ax[1].set_title(f'GCE2(1)',fontsize=14)
for i,x in enumerate(gce2['Emtav'].T):
ax[1].plot(gce2['t'],x,label=i,c=colors[i])
for a in ax:
a.set_xlabel(r'$t$',fontsize=14)
a.set_ylabel(r'$E$',fontsize=14)
a.set_yscale('log')
# a.set_ylim(1e-1,1e0)
ax[1].legend(bbox_to_anchor=(1.01,0.85),fontsize=14)
plt.show()
fig,ax = plt.subplots(1,2,figsize=(15,6))
ax[0].set_title(f'GQL(1)',fontsize=14)
im = ax[0].imshow((gql['Vxy'][:,:,-1]),cmap="RdBu_r",origin="lower",interpolation="bicubic")
fig.colorbar(im,ax=ax[0])
ax[1].set_title(f'GCE2(1)',fontsize=14)
im = ax[1].imshow((gce2['Vxy'][:,:,-1]),cmap="RdBu_r",origin="lower",interpolation="bicubic")
fig.colorbar(im,ax=ax[1])
for a in ax:
a.set_xticks([0,M-1,2*M-2])
a.set_xticklabels([r'$0$',r'$\pi$',r'$2\pi$'],fontsize=14)
a.set_yticks([0,M-1,2*M-2])
a.set_yticklabels([r'$0$',r'$\pi$',r'$2\pi$'],fontsize=14)
plt.show()
fig,ax = plt.subplots(1,2,figsize=(12,6))
ax[0].set_title('GQL(1)')
im = ax[0].imshow((gql['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im)
ax[1].set_title('GCE2(1)')
im = ax[1].imshow((gce2['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im)
for a in ax:
a.set_xticks([0,M-1,2*M-2])
a.set_xticklabels([r'$-M$',r'$0$',r'$M$'],fontsize=14)
a.set_yticks([0,M-1,2*M-2])
a.set_yticklabels([r'$-M$',r'$0$',r'$M$'],fontsize=14)
plt.show()
gql = np.load(dn+"gql_3.npz",allow_pickle=True)
gce2 = np.load(dn+"gce2_3.npz",allow_pickle=True)
fig,ax = plt.subplots(1,2,figsize=(14,5))
# Energy
ax[0].set_title(f'GQL(3)',fontsize=14)
for i,x in enumerate(gql['Emtav'].T):
ax[0].plot(gql['t'],x,label=i,c=colors[i])
ax[1].set_title(f'GCE2(3)',fontsize=14)
for i,x in enumerate(gce2['Emtav'].T):
ax[1].plot(gce2['t'],x,label=i,c=colors[i])
for a in ax:
a.set_xlabel(r'$t$',fontsize=14)
a.set_ylabel(r'$E$',fontsize=14)
a.set_yscale('log')
# a.set_ylim(1e-1,1e0)
ax[1].legend(bbox_to_anchor=(1.01,0.85),fontsize=14)
plt.show()
fig,ax = plt.subplots(1,2,figsize=(15,6))
ax[0].set_title(f'GQL(3)',fontsize=14)
im = ax[0].imshow((gql['Vxy'][:,:,-1]),cmap="RdBu_r",origin="lower",interpolation="bicubic")
fig.colorbar(im,ax=ax[0])
ax[1].set_title(f'GCE2(3)',fontsize=14)
im = ax[1].imshow((gce2['Vxy'][:,:,-1]),cmap="RdBu_r",origin="lower",interpolation="bicubic")
fig.colorbar(im,ax=ax[1])
for a in ax:
a.set_xticks([0,M-1,2*M-2])
a.set_xticklabels([r'$0$',r'$\pi$',r'$2\pi$'],fontsize=14)
a.set_yticks([0,M-1,2*M-2])
a.set_yticklabels([r'$0$',r'$\pi$',r'$2\pi$'],fontsize=14)
plt.show()
fig,ax = plt.subplots(1,2,figsize=(12,6))
ax[0].set_title('GQL(3)')
im = ax[0].imshow((gql['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im)
ax[1].set_title('GCE2(3)')
im = ax[1].imshow((gce2['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im)
for a in ax:
a.set_xticks([0,M-1,2*M-2])
a.set_xticklabels([r'$-M$',r'$0$',r'$M$'],fontsize=14)
a.set_yticks([0,M-1,2*M-2])
a.set_yticklabels([r'$-M$',r'$0$',r'$M$'],fontsize=14)
plt.show()
# gql = np.load(dn+"gql_5.npz",allow_pickle=True)
# gce2 = np.load(dn+"gce2_5.npz",allow_pickle=True)
# fig,ax = plt.subplots(1,2,figsize=(14,5))
# # Energy
# ax[0].set_title(f'GQL(5)',fontsize=14)
# for i,x in enumerate(gql['Emtav'].T):
# ax[0].plot(gql['t'],x,label=i,c=colors[i])
# ax[1].set_title(f'GCE2(5)',fontsize=14)
# for i,x in enumerate(gce2['Emtav'].T):
# ax[1].plot(gce2['t'],x,c=colors[i])
# for a in ax:
# a.set_xlabel(r'$t$',fontsize=14)
# a.set_ylabel(r'$E$',fontsize=14)
# a.set_yscale('log')
# # a.set_ylim(1e-1,1e0)
# # ax[1].legend(bbox_to_anchor=(1.01,0.85),fontsize=14)
# plt.show()
# fig,ax = plt.subplots(1,2,figsize=(12,6))
# ax[0].set_title('GQL(5)')
# im = ax[0].imshow((gql['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# # fig.colorbar(im)
# ax[1].set_title('GCE2(5)')
# im = ax[1].imshow((gce2['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# # fig.colorbar(im)
# for a in ax:
# a.set_xticks([0,M-1,2*M-2])
# a.set_xticklabels([r'$-M$',r'$0$',r'$M$'],fontsize=14)
# a.set_yticks([0,M-1,2*M-2])
# a.set_yticklabels([r'$-M$',r'$0$',r'$M$'],fontsize=14)
# plt.show()
dn = "pointjet/12x12/"
# Parameters: μ = 0.05, τ = 20.0, Ξ = 1.0*Ω and Δθ = 0.1
Nx = 12
Ny = 12
nl = np.load(dn+"nl.npz",allow_pickle=True)
gql = np.load(dn+"gql.npz",allow_pickle=True)
gce2 = np.load(dn+"gce2.npz",allow_pickle=True)
# fig,ax = plt.subplots(1,2,figsize=(14,5))
# # Energy
# ax[0].plot(nl['t'],nl['Etav'],label=r'$\langle NL \rangle$')
# ax[0].plot(gql['t'],gql['Etav'],label=r'$\langle GQL(M) \rangle$')
# ax[0].plot(gce2['t'],gce2['Etav'],label=r'$\langle GCE2(M) \rangle$')
# ax[0].set_xlabel(r'$t$',fontsize=14)
# ax[0].set_ylabel(r'$E$',fontsize=14)
# # ax[0].legend(bbox_to_anchor=(1.01,0.85),fontsize=14)
# # Enstrophy
# ax[1].plot(nl['t'],nl['Ztav'],label=r'$\langle NL \rangle$')
# ax[1].plot(gql['t'],gql['Ztav'],label=r'$\langle GQL(M) \rangle$')
# ax[1].plot(gce2['t'],gce2['Ztav'],label=r'$\langle GCE2(M) \rangle$')
# ax[1].set_xlabel(r'$t$',fontsize=14)
# ax[1].set_ylabel(r'$Z$',fontsize=14)
# ax[1].legend(loc=4,fontsize=14)
# plt.show()
# fig,ax = plt.subplots(1,3,figsize=(14,5))
# ax[0].set_title(f'NL')
# for i,x in enumerate(nl['Emt'].T):
# ax[0].plot(nl['t'],x,label=i,c=colors[i])
# ax[1].set_title(f'GQL(M)')
# for i,x in enumerate(gql['Emt'].T):
# ax[1].plot(gql['t'],x,label=i,c=colors[i])
# ax[2].set_title(f'GCE2(M)')
# for i,x in enumerate(gce2['Emt'].T):
# ax[2].plot(gce2['t'],x,label=i,c=colors[i])
# for a in ax:
# a.set_xlabel(r'$t$',fontsize=14)
# a.set_yscale('log')
# # a.set_ylim(1e-10,1e1)
# ax[0].set_ylabel(r'$E(m)$',fontsize=14)
# ax[2].legend(bbox_to_anchor=(1.01,0.85),ncol=1)
# plt.show()
# fig,ax = plt.subplots(1,3,figsize=(24,6))
# ax[0].set_title(f'NL',fontsize=14)
# im = ax[0].imshow((nl['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[0])
# ax[1].set_title(f'GQL(M)',fontsize=14)
# im = ax[1].imshow((gql['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[1])
# ax[2].set_title(f'GCE2(M)',fontsize=14)
# im = ax[2].imshow((gce2['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[2])
# for a in ax:
# a.set_xticks([0,Nx-1,2*Nx-2])
# a.set_xticklabels([r'$-N_x$',r'$0$',r'$N_x$'],fontsize=14)
# a.set_yticks([0,Ny-1,2*Ny-2])
# a.set_yticklabels([r'$-N_y$',r'$0$',r'$N_y$'],fontsize=14)
# plt.show()
# fig,ax = plt.subplots(1,3,figsize=(24,6))
# ax[0].set_title(f'NL',fontsize=14)
# im = ax[0].imshow((nl['Vxy'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[0])
# ax[1].set_title(f'GQL(M)',fontsize=14)
# im = ax[1].imshow((gql['Vxy'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[1])
# ax[2].set_title(f'GCE2(M)',fontsize=14)
# im = ax[2].imshow((gce2['Vxy'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[2])
# for a in ax:
# a.set_xticks([0,Nx-1,2*Nx-2])
# a.set_xticklabels([r'$-N_x$',r'$0$',r'$N_x$'],fontsize=14)
# a.set_yticks([0,Ny-1,2*Ny-2])
# a.set_yticklabels([r'$-N_y$',r'$0$',r'$N_y$'],fontsize=14)
# plt.show()
ql = np.load(dn+"ql.npz",allow_pickle=True)
ce2 = np.load(dn+"ce2.npz",allow_pickle=True)
gce2 = np.load(dn+"gce2_0.npz",allow_pickle=True)
# fig,ax = plt.subplots(1,3,figsize=(14,5))
# # Energy
# ax[0].set_title(f'QL',fontsize=14)
# for i,x in enumerate(ql['Emtav'].T):
# ax[0].plot(ql['t'],x,label=i,c=colors[i])
# ax[1].set_title(f'CE2',fontsize=14)
# for i,x in enumerate(ce2['Emtav'].T):
# ax[1].plot(ce2['t'],x,c=colors[i])
# ax[2].set_title(f'GCE2(0)',fontsize=14)
# for i,x in enumerate(gce2['Emtav'].T):
# ax[2].plot(gce2['t'],x,c=colors[i])
# for a in ax:
# a.set_xlabel(r'$t$',fontsize=14)
# a.set_ylabel(r'$E$',fontsize=14)
# a.set_yscale('log')
# # a.set_ylim(1e-1,1e0)
# # ax[1].legend(bbox_to_anchor=(1.01,0.85),fontsize=14)
# plt.show()
# fig,ax = plt.subplots(1,3,figsize=(24,6))
# ax[0].set_title(f'QL',fontsize=14)
# im = ax[0].imshow((ql['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[0])
# ax[1].set_title(f'CE2',fontsize=14)
# im = ax[1].imshow((ce2['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[1])
# ax[2].set_title(f'GCE2(0)',fontsize=14)
# im = ax[2].imshow((gce2['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[2])
# for a in ax:
# a.set_xticks([0,Nx-1,2*Nx-2])
# a.set_xticklabels([r'$-N_x$',r'$0$',r'$N_x$'],fontsize=14)
# a.set_yticks([0,Ny-1,2*Ny-2])
# a.set_yticklabels([r'$-N_y$',r'$0$',r'$N_y$'],fontsize=14)
# plt.show()
# fig,ax = plt.subplots(1,3,figsize=(24,6))
# ax[0].set_title(f'QL',fontsize=14)
# im = ax[0].imshow((ql['Vxy'][:,:,-1]),cmap="RdBu_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[0])
# ax[1].set_title(f'CE2',fontsize=14)
# im = ax[1].imshow((ce2['Vxy'][:,:,-1]),cmap="RdBu_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[1])
# ax[2].set_title(f'GCE2(0)',fontsize=14)
# im = ax[2].imshow((gce2['Vxy'][:,:,-1]),cmap="RdBu_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[2])
# for a in ax:
# a.set_xticks([0,Nx-1,2*Nx-2])
# a.set_xticklabels([r'$-N_x$',r'$0$',r'$N_x$'],fontsize=14)
# a.set_yticks([0,Ny-1,2*Ny-2])
# a.set_yticklabels([r'$-N_y$',r'$0$',r'$N_y$'],fontsize=14)
# plt.show()
# fig,ax = plt.subplots(1,3,figsize=(24,6))
# ax[0].set_title(f'QL',fontsize=14)
# im = ax[0].imshow((ql['Uxy'][:,:,-1]),cmap="RdBu_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[0])
# ax[1].set_title(f'CE2',fontsize=14)
# im = ax[1].imshow((ce2['Uxy'][:,:,-1]),cmap="RdBu_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[1])
# ax[2].set_title(f'GCE2(0)',fontsize=14)
# im = ax[2].imshow((gce2['Uxy'][:,:,-1]),cmap="RdBu_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[2])
# for a in ax:
# a.set_xticks([0,Nx-1,2*Nx-2])
# a.set_xticklabels([r'$-N_x$',r'$0$',r'$N_x$'],fontsize=14)
# a.set_yticks([0,Ny-1,2*Ny-2])
# a.set_yticklabels([r'$-N_y$',r'$0$',r'$N_y$'],fontsize=14)
# plt.show()
gql = np.load(dn+"gql_3.npz",allow_pickle=True)
gce2 = np.load(dn+"gce2_3.npz",allow_pickle=True)
# fig,ax = plt.subplots(1,2,figsize=(14,5))
# # Energy
# ax[0].set_title(f'GQL',fontsize=14)
# for i,x in enumerate(gql['Emtav'].T):
# ax[0].plot(gql['t'],x,label=i,c=colors[i])
# ax[1].set_title(f'GCE2',fontsize=14)
# for i,x in enumerate(gce2['Emtav'].T):
# ax[1].plot(gce2['t'],x,c=colors[i])
# for a in ax:
# a.set_xlabel(r'$t$',fontsize=14)
# a.set_ylabel(r'$E$',fontsize=14)
# a.set_yscale('log')
# # a.set_ylim(1e-1,1e0)
# # ax[1].legend(bbox_to_anchor=(1.01,0.85),fontsize=14)
# plt.show()
# fig,ax = plt.subplots(1,2,figsize=(15,6))
# ax[0].set_title(f'GQL(1)',fontsize=14)
# im = ax[0].imshow((gql['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[0])
# ax[1].set_title(f'GCE2(1)',fontsize=14)
# im = ax[1].imshow((gce2['Emn'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[1])
# for a in ax:
# a.set_xticks([0,Nx-1,2*Nx-2])
# a.set_xticklabels([r'$-N_x$',r'$0$',r'$N_x$'],fontsize=14)
# a.set_yticks([0,Ny-1,2*Ny-2])
# a.set_yticklabels([r'$-N_y$',r'$0$',r'$N_y$'],fontsize=14)
# plt.show()
# fig,ax = plt.subplots(1,2,figsize=(15,6))
# ax[0].set_title(f'GQL(1)',fontsize=14)
# im = ax[0].imshow((gql['Vxy'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[0])
# ax[1].set_title(f'GCE2(1)',fontsize=14)
# im = ax[1].imshow((gce2['Vxy'][:,:,-1]),cmap="nipy_spectral_r",origin="lower",interpolation="bicubic")
# fig.colorbar(im,ax=ax[1])
# for a in ax:
# a.set_xticks([0,Nx-1,2*Nx-2])
# a.set_xticklabels([r'$-N_x$',r'$0$',r'$N_x$'],fontsize=14)
# a.set_yticks([0,Ny-1,2*Ny-2])
# a.set_yticklabels([r'$-N_y$',r'$0$',r'$N_y$'],fontsize=14)
# plt.show()
```
# Building Autonomous Trader using mt5se
## How to setup and use mt5se
### 1. Install Metatrader 5 (https://www.metatrader5.com/)
### 2. Install python package Metatrader5 using pip
#### Use: pip install MetaTrader5, or install it from within Python via the sys package
### 3. Install python package mt5se using pip
#### Use: pip install mt5se, or install it from within Python via the sys package
#### For documentation, check : https://paulo-al-castro.github.io/mt5se/
```
# installing Metatrader5 using sys
import sys
# python MetaTrader5
#!{sys.executable} -m pip install MetaTrader5
#mt5se
!{sys.executable} -m pip install mt5se --upgrade
```
<hr>
## Connecting and getting account information
```
import mt5se as se
connected=se.connect()
if connected:
print('Ok!! It is connected to the Stock exchange!!')
else:
print('Something went wrong! It is NOT connected to se!!')
ti=se.terminal_info()
print('Metatrader program file path: ', ti.path)
print('Metatrader path to data folder: ', ti.data_path )
print('Metatrader common data path: ',ti.commondata_path)
```
<hr>
### Getting information about the account
```
acc=se.account_info() # it returns account's information
print('login=',acc.login) # Account id
print('balance=',acc.balance) # Account balance in the deposit currency using buy price of assets (margin_free+margin)
print('equity=',acc.equity) # Account equity in the deposit currency using current price of assets (capital liquido) (margin_free+margin+profit)
print('free margin=',acc.margin_free) # Free margin ( balance in cash ) of an account in the deposit currency(BRL)
print('margin=',acc.margin) #Account margin used in the deposit currency (equity-margin_free-profit )
print('client name=',acc.name) #Client name
print('Server =',acc.server) # Trade server name
print('Currency =',acc.currency) # Account currency, BRL for Brazilian Real
```
<hr>
### Getting info about asset's prices quotes (a.k.a bars)
```
import pandas as pd
# Some example of Assets in Nasdaq
assets=[
'AAL', # American Airlines Group, Inc.
'GOOG', # Alphabet Inc. (Google)
'UAL', # United Airlines Holdings, Inc.
'AMD', # Advanced Micro Devices, Inc.
'MSFT' # MICROSOFT
]
asset=assets[0]
df=se.get_bars(asset,10) # it returns the last 10 days
print(df)
```
<hr>
### Getting information about current position
```
print('Position=',se.get_positions()) # return the current value of assets (not include balance or margin)
symbol_id='MSFT'
print('Position on paper ',symbol_id,' =',se.get_position_value(symbol_id)) # return the current position in a given asset (symbol_id)
pos=se.get_position_value(symbol_id)
print(pos)
```
<hr>
### Creating, checking and sending orders
```
### Buying three hundred shares of AAPL!!
symbol_id='AAPL'
bars=se.get_bars(symbol_id,2)
price=se.get_last(bars)
volume=300
b=se.buyOrder(symbol_id,volume,price) # price, sl and tp are optional
if se.is_market_open(symbol_id):
    print('Market is Open!!')
else:
    print('Market is closed! Orders will not be accepted!!')
if se.checkOrder(b):
    print('Buy order seems ok!')
else:
    print('Error : ',se.getLastError())
# if se.sendOrder(b):
#     print('Order executed!')
```
### Direct Control Robots using mt5se
```
import mt5se as se
import pandas as pd
import time
asset='AAPL'
def run(asset):
    if se.is_market_open(asset): # change 'if' to 'while' for running until the end of the market session
        print("getting information")
        bars=se.get_bars(asset,14)
        curr_shares=se.get_shares(asset)
        # number of shares that you can buy
        price=se.get_last(bars)
        free_shares=se.get_affor_shares(asset,price)
        rsi=se.tech.rsi(bars)
        print("deliberating")
        if rsi>=70 and free_shares>0:
            order=se.buyOrder(asset,free_shares)
        elif rsi<70 and curr_shares>0:
            order=se.sellOrder(asset,curr_shares)
        else:
            order=None
        print("sending order")
        # check and send (it is sent only if check is ok!)
        if order!=None:
            if se.checkOrder(order) and se.sendOrder(order):
                print('order sent to se')
            else:
                print('Error : ',se.getLastError())
        else:
            print("No order at the moment for asset=",asset)
        time.sleep(1) # waits one second
    print('Trader ended operation!')

if se.connect()==False:
    print('Error when trying to connect to se')
    exit()
else:
    run(asset) # trade asset AAPL
```
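The decision rule above hinges on `se.tech.rsi(bars)`. As a point of reference, here is a minimal sketch of a 14-period RSI computed with plain pandas. This is an assumption about what `se.tech.rsi` returns; mt5se's actual implementation may differ (for example, by using Wilder's exponential smoothing instead of simple averages).

```python
import pandas as pd

def rsi(close: pd.Series, period: int = 14) -> float:
    """RSI of the last `period` price changes, using simple averages."""
    delta = close.diff()
    avg_gain = delta.clip(lower=0).rolling(period).mean().iloc[-1]
    avg_loss = (-delta.clip(upper=0)).rolling(period).mean().iloc[-1]
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximum RSI
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)

# Fifteen synthetic closing prices give exactly fourteen price changes
prices = pd.Series([10, 10.5, 10.2, 10.8, 11, 10.9, 11.2, 11.5,
                    11.3, 11.8, 12, 11.9, 12.3, 12.5, 12.4])
print(rsi(prices))  # a value between 0 and 100
```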
### Multiple asset Trading Robot
```
#Multiple asset Robot (Example), single strategy for multiple assets, where the resources are equally shared among the assets
import time
def runMultiAsset(assets):
    if se.is_market_open(assets[0]): # change 'if' to 'while' for running until the end of the market session
        for asset in assets:
            bars=se.get_bars(asset,14) # get information
            curr_shares=se.get_shares(asset)
            money=se.account_info().margin_free/len(assets) # cash available for each asset
            price=se.get_last(bars)
            free_shares=se.get_affor_shares(asset,price,money) # number of shares that you can buy
            rsi=se.tech.rsi(bars)
            if rsi>=70 and free_shares>0:
                order=se.buyOrder(asset,free_shares)
            elif rsi<70 and curr_shares>0:
                order=se.sellOrder(asset,curr_shares)
            else:
                order=None
            if order!=None: # check and send if it is Ok
                if se.checkOrder(order) and se.sendOrder(order):
                    print('order sent to se')
                else:
                    print('Error : ',se.getLastError())
            else:
                print("No order at the moment for asset=",asset)
        time.sleep(1)
    print('Trader ended operation!')
```
## Running multiple asset direct control code!
```
assets=['GOOG','AAPL']
runMultiAsset(assets) # trade asset
```
### Processing Financial Data - Return Histogram Example
```
import mt5se as se
from datetime import datetime
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
asset='MSFT'
se.connect()
bars=se.get_bars(asset,252) # 252 business days (basically one year)
x=se.get_returns(bars) # calculate daily returns given the bars
#With a small change we can see the histogram of weekly returns
#x=se.getReturns(bars,offset=5)
plt.hist(x,bins=16) # creates a histogram graph with 16 bins
plt.grid()
plt.show()
```
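For reference, the daily returns computed by `se.get_returns` are presumably simple percentage returns, r_t = p_t / p_{t-1} - 1 (an assumption about mt5se's definition; check the docs if it matters). A minimal pandas sketch of that computation:

```python
import pandas as pd

def simple_returns(close: pd.Series) -> pd.Series:
    """r_t = p_t / p_{t-1} - 1, dropping the undefined first value."""
    return close.pct_change().dropna()

close = pd.Series([100.0, 102.0, 101.0, 104.0])
returns = simple_returns(close)
print(returns.tolist())  # three daily returns, e.g. 0.02 for the first day
```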
### Robots based on Inversion of control
You may use an alternative method to build your robots that can reduce your workload: inversion-of-control robots. Instead of driving the control loop yourself, your robot receives the most common information required by robots and returns its orders.
Let's see some examples of robots based on inversion of control, including the multi-asset strategy presented before in an inversion-of-control implementation.
### Trader class
Inversion-of-control traders are classes that inherit from se.Trader, and they have to implement just one function:
trade: it is called at each moment with dbars. It should return the list of orders to be executed, or None if there is no order at the moment.
Your trader may also implement two other functions if required:
setup: it is called once when the operation starts. It receives dbars ('mem' bars from each asset). See the operation setup for more information.
ending: it is called once when the scheduled operation reaches its end time.
Your Trader class may also implement a constructor function.
Let's see an Example!
### A Random Trader
```
import numpy.random as rand
class RandomTrader(se.Trader):
    def __init__(self):
        pass
    def setup(self,dbars):
        print('just getting started!')
    def trade(self,dbars):
        orders=[]
        assets=dbars.keys()
        for asset in assets:
            if rand.randint(2)==1:
                order=se.buyOrder(asset,100)
            else:
                order=se.sellOrder(asset,100)
            orders.append(order)
        return orders
    def ending(self,dbars):
        print('Ending stuff')

if issubclass(RandomTrader,se.Trader):
    print('Your trader class seems Ok!')
else:
    print('Your trader class should be a subclass of se.Trader')
trader=RandomTrader() # DummyTrader class also available in se.sampleTraders.DummyTrader()
```
### Another Example of Trader class
```
class MultiAssetTrader(se.Trader):
    def trade(self,dbars):
        assets=dbars.keys()
        orders=[]
        for asset in assets:
            bars=dbars[asset]
            curr_shares=se.get_shares(asset)
            money=se.get_balance()/len(assets) # split the cash balance equally among the assets
            # number of shares of the asset that you can buy
            price=se.get_last(bars)
            free_shares=se.get_affor_shares(asset,price,money)
            rsi=se.tech.rsi(bars)
            if rsi>=70 and free_shares>0:
                order=se.buyOrder(asset,free_shares)
            elif rsi<70 and curr_shares>0:
                order=se.sellOrder(asset,curr_shares)
            else:
                order=None
            if order!=None:
                orders.append(order)
        return orders

if issubclass(MultiAssetTrader,se.Trader):
    print('Your trader class seems Ok!')
else:
    print('Your trader class should be a subclass of se.Trader')
trader=MultiAssetTrader()
```
### Testing your Trader!!!
The evaluation of trading robots is usually called backtesting: the trading robot executes against a historical price series, and its performance is computed.
In backtesting, time is discretized according to the bars, and the mt5se package controls the Trader's access to information according to the simulated time.
To backtest a strategy, you just need to create a subclass of Trader and implement one function:
trade
You may implement the function setup to prepare the trading strategy if required, and a function ending to clean up after the backtest is done.
As the simulated time advances, the Trader class receives the new bar info in the 'trade' function and decides which orders to send.
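The loop described above can be sketched in a few lines. This toy illustration is not mt5se's actual implementation; it only shows the inversion-of-control idea: the engine owns the clock and reveals only past bars to the trader at each step.

```python
class Trader:
    """Minimal stand-in for se.Trader."""
    def setup(self, dbars): pass
    def trade(self, dbars): return None
    def ending(self, dbars): pass

def run_backtest(trader, bars_by_asset, n_bars):
    """Advance simulated time one bar at a time."""
    all_orders = []
    trader.setup({a: b[:1] for a, b in bars_by_asset.items()})
    for t in range(1, n_bars + 1):
        # the trader only ever sees bars up to simulated time t
        dbars = {a: b[:t] for a, b in bars_by_asset.items()}
        orders = trader.trade(dbars)
        if orders:
            all_orders.extend(orders)  # a real engine would also simulate fills
    trader.ending(bars_by_asset)
    return all_orders

class BuyHigh(Trader):
    """Toy strategy: buy whenever the latest bar is above the first one."""
    def trade(self, dbars):
        bars = dbars['XYZ']
        return [('buy', 'XYZ')] if bars[-1] > bars[0] else None

orders = run_backtest(BuyHigh(), {'XYZ': [10, 11, 9, 12]}, 4)
print(orders)  # [('buy', 'XYZ'), ('buy', 'XYZ')]
```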
## Let's create a Simple Algorithmic Trader and Backtest it
```
## Defines the Trader
class MonoAssetTrader(se.Trader):
    def trade(self,dbars):
        assets=dbars.keys()
        asset=list(assets)[0]
        orders=[]
        bars=dbars[asset]
        curr_shares=se.get_shares(asset)
        # number of shares that you can buy
        price=se.get_last(bars)
        free_shares=se.get_affor_shares(asset,price)
        rsi=se.tech.rsi(bars)
        if rsi>=70 and free_shares>0:
            order=se.buyOrder(asset,free_shares)
        elif rsi<70 and curr_shares>0:
            order=se.sellOrder(asset,curr_shares)
        else:
            order=None
        if order!=None:
            orders.append(order)
        return orders

trader=MonoAssetTrader() # also available in se.sampleTraders.MonoAssetTrader()
print(trader)
```
## Setup and check a backtest!
```
# sets Backtest options
prestart=se.date(2018,12,10)
start=se.date(2019,1,10)
end=se.date(2019,2,27)
capital=1000000
results_file='data_equity_file.csv'
verbose=False
assets=['AAPL']
# Use True if you want debug information for your Trader
#sets the backtest setup
period=se.DAILY
# it may be se.INTRADAY (one minute interval)
bts=se.backtest.set(assets,prestart,start,end,period,capital,results_file,verbose)
# check if the backtest setup is ok!
if se.backtest.checkBTS(bts):
    print('Backtest Setup is Ok!')
else:
    print('Backtest Setup is NOT Ok!')
```
## Run the Backtest
```
# Running the backtest
df= se.backtest.run(trader,bts)
# run calls the Trader's setup once, and trade once for each bar
```
## Evaluate the Backtest result
```
#print the results
print(df)
# evaluates the backtest results
se.backtest.evaluate(df)
```
## Evaluating Backtesting results
The method backtest.run creates a data file with the name given in the backtest setup (bts).
This will give you a report about the trader's performance.
We need to note that it is hard to perform meaningful evaluations using backtests. There are many pitfalls to avoid, and it is easy to build trading robots with great performance in backtests that nevertheless perform really badly in real operations.
More about that in the mt5se backtest evaluation chapter.
For a deeper discussion, we suggest:
Is it a great Autonomous Trading Strategy or are you just fooling yourself? Bernardini, M. and Castro, P.A.L.
In order to analyze the trader's backtest, you may use:
se.backtest.evaluateFile(fileName) # fileName is the name of the file generated by the backtest
or
se.backtest.evaluate(df) # df is the dataframe returned by se.backtest.run
# Another Example: Multiasset Trader
```
import mt5se as se
class MultiAssetTrader(se.Trader):
    def trade(self,dbars):
        assets=dbars.keys()
        orders=[]
        for asset in assets:
            bars=dbars[asset]
            curr_shares=se.get_shares(asset)
            money=se.get_balance()/len(assets) # split the cash balance equally among the assets
            # number of shares of the asset that you can buy
            price=se.get_last(bars)
            free_shares=se.get_affor_shares(asset,price,money)
            rsi=se.tech.rsi(bars)
            if rsi>=70 and free_shares>0:
                order=se.buyOrder(asset,free_shares)
            elif rsi<70 and curr_shares>0:
                order=se.sellOrder(asset,curr_shares)
            else:
                order=None
            if order!=None:
                orders.append(order)
        return orders
trader=MultiAssetTrader() # also available in se.sampleTraders.MultiAssetTrader()
print(trader)
```
## Setuping Backtest for Multiple Assets
```
# sets Backtest options
prestart=se.date(2020,5,4)
start=se.date(2020,5,6)
end=se.date(2020,6,21)
capital=10000000
results_file='data_equity_file.csv'
verbose=False
assets=[
'AAL', # American Airlines Group, Inc.
'GOOG', # Alphabet Inc. (Google)
'UAL', # United Airlines Holdings, Inc.
'AMD', # Advanced Micro Devices, Inc.
'MSFT' # MICROSOFT
]
# Use True if you want debug information for your Trader
#sets the backtest setup
period=se.DAILY
bts=se.backtest.set(assets,prestart,start,end,period,capital,results_file,verbose)
if se.backtest.checkBTS(bts): # check if the backtest setup is ok!
    print('Backtest Setup is Ok!')
else:
    print('Backtest Setup is NOT Ok!')
```
## Run and evaluate the backtest
```
se.connect()
# Running the backtest
df= se.backtest.run(trader,bts)
# run calls the Trader's setup once, and trade once for each bar
# evaluates the backtest results
se.backtest.evaluate(df)
```
## Next Deploying Autonomous Trader powered by mt5se
### You have seen how to:
install and import mt5se and MetaTrader5
get financial data
create direct control trading robots
create [Simple] Trader classes based on inversion of control
backtest Autonomous Traders
### Next, We are going to show how to:
deploy autonomous trader to run on simulated or real Stock Exchange accounts
create Autonomous Traders based on Artifical Intelligence and Machine Learning
# Introduction to Deep Learning with PyTorch
In this notebook, you'll get introduced to [PyTorch](http://pytorch.org/), a framework for building and training neural networks. PyTorch in a lot of ways behaves like the arrays you love from Numpy. These Numpy arrays, after all, are just tensors. PyTorch takes these tensors and makes it simple to move them to GPUs for the faster processing needed when training neural networks. It also provides a module that automatically calculates gradients (for backpropagation!) and another module specifically for building neural networks. All together, PyTorch ends up being more coherent with Python and the Numpy/Scipy stack compared to TensorFlow and other frameworks.
## Neural Networks
Deep Learning is based on artificial neural networks which have been around in some form since the late 1950s. The networks are built from individual parts approximating neurons, typically called units or simply "neurons." Each unit has some number of weighted inputs. These weighted inputs are summed together (a linear combination) then passed through an activation function to get the unit's output.
<img src="assets/simple_neuron.png" width=400px>
Mathematically this looks like:
$$
\begin{align}
y &= f(w_1 x_1 + w_2 x_2 + b) \\
y &= f\left(\sum_i w_i x_i +b \right)
\end{align}
$$
With vectors this is the dot/inner product of two vectors:
$$
h = \begin{bmatrix}
x_1 \, x_2 \cdots x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$$
## Tensors
It turns out neural network computations are just a bunch of linear algebra operations on *tensors*, a generalization of matrices. A vector is a 1-dimensional tensor, a matrix is a 2-dimensional tensor, an array with three indices is a 3-dimensional tensor (RGB color images for example). The fundamental data structure for neural networks are tensors and PyTorch (as well as pretty much every other deep learning framework) is built around tensors.
<img src="assets/tensor_examples.svg" width=600px>
With the basics covered, it's time to explore how we can use PyTorch to build a simple neural network.
```
# First, import PyTorch
import torch
def activation(x):
    """ Sigmoid activation function

        Arguments
        ---------
        x: torch.Tensor
    """
    return 1/(1+torch.exp(-x))
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 5 random normal variables
features = torch.randn((1, 5))
# True weights for our data, random normal variables again
weights = torch.randn_like(features)
# and a true bias term
bias = torch.randn((1, 1))
```
Above I generated data we can use to get the output of our simple network. This is all just random for now, going forward we'll start using normal data. Going through each relevant line:
`features = torch.randn((1, 5))` creates a tensor with shape `(1, 5)`, one row and five columns, that contains values randomly distributed according to the normal distribution with a mean of zero and standard deviation of one.
`weights = torch.randn_like(features)` creates another tensor with the same shape as `features`, again containing values from a normal distribution.
Finally, `bias = torch.randn((1, 1))` creates a single value from a normal distribution.
PyTorch tensors can be added, multiplied, subtracted, etc, just like Numpy arrays. In general, you'll use PyTorch tensors pretty much the same way you'd use Numpy arrays. They come with some nice benefits though such as GPU acceleration which we'll get to later. For now, use the generated data to calculate the output of this simple single layer network.
> **Exercise**: Calculate the output of the network with input features `features`, weights `weights`, and bias `bias`. Similar to Numpy, PyTorch has a [`torch.sum()`](https://pytorch.org/docs/stable/torch.html#torch.sum) function, as well as a `.sum()` method on tensors, for taking sums. Use the function `activation` defined above as the activation function.
```
## Calculate the output of this network using the weights and bias tensors
```
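One possible solution, shown here as a self-contained sketch (try the exercise yourself first): multiply element-wise, sum, add the bias, and pass the result through the sigmoid.

```python
import torch

def activation(x):
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# Weighted sum of the inputs plus the bias, squashed by the sigmoid
output = activation(torch.sum(features * weights) + bias)
print(output)
```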
You can do the multiplication and sum in the same operation using a matrix multiplication. In general, you'll want to use matrix multiplications since they are more efficient and accelerated using modern libraries and high-performance computing on GPUs.
Here, we want to do a matrix multiplication of the features and the weights. For this we can use [`torch.mm()`](https://pytorch.org/docs/stable/torch.html#torch.mm) or [`torch.matmul()`](https://pytorch.org/docs/stable/torch.html#torch.matmul) which is somewhat more complicated and supports broadcasting. If we try to do it with `features` and `weights` as they are, we'll get an error
```python
>> torch.mm(features, weights)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-15d592eb5279> in <module>()
----> 1 torch.mm(features, weights)
RuntimeError: size mismatch, m1: [1 x 5], m2: [1 x 5] at /Users/soumith/minicondabuild3/conda-bld/pytorch_1524590658547/work/aten/src/TH/generic/THTensorMath.c:2033
```
As you're building neural networks in any framework, you'll see this often. Really often. What's happening here is our tensors aren't the correct shapes to perform a matrix multiplication. Remember that for matrix multiplications, the number of columns in the first tensor must equal the number of rows in the second tensor. Both `features` and `weights` have the same shape, `(1, 5)`. This means we need to change the shape of `weights` to get the matrix multiplication to work.
**Note:** To see the shape of a tensor called `tensor`, use `tensor.shape`. If you're building neural networks, you'll be using this method often.
There are a few options here: [`weights.reshape()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.reshape), [`weights.resize_()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.resize_), and [`weights.view()`](https://pytorch.org/docs/stable/tensors.html#torch.Tensor.view).
* `weights.reshape(a, b)` will return a tensor with the same data as `weights` with size `(a, b)`. Sometimes it shares the underlying data, and sometimes it returns a clone, in the sense that it copies the data to another part of memory.
* `weights.resize_(a, b)` returns the same tensor with a different shape. However, if the new shape results in fewer elements than the original tensor, some elements will be removed from the tensor (but not from memory). If the new shape results in more elements than the original tensor, new elements will be uninitialized in memory. Here I should note that the underscore at the end of the method denotes that this method is performed **in-place**. Here is a great forum thread to [read more about in-place operations](https://discuss.pytorch.org/t/what-is-in-place-operation/16244) in PyTorch.
* `weights.view(a, b)` will return a new tensor with the same data as `weights` with size `(a, b)`.
I usually use `.view()`, but any of the three methods will work for this. So, now we can reshape `weights` to have five rows and one column with something like `weights.view(5, 1)`.
> **Exercise**: Calculate the output of our little network using matrix multiplication.
```
## Calculate the output of this network using matrix multiplication
```
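One way to do it (again, attempt the exercise first): reshape `weights` to `(5, 1)` so the shapes line up for `torch.mm`, i.e. a `(1, 5)` matrix times a `(5, 1)` matrix gives a `(1, 1)` result.

```python
import torch

def activation(x):
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)
features = torch.randn((1, 5))
weights = torch.randn_like(features)
bias = torch.randn((1, 1))

# view(5, 1) makes the inner dimensions match: (1,5) x (5,1) -> (1,1)
output = activation(torch.mm(features, weights.view(5, 1)) + bias)
print(output)
```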
### Stack them up!
That's how you can calculate the output for a single neuron. The real power of this algorithm happens when you start stacking these individual units into layers and stacks of layers, into a network of neurons. The output of one layer of neurons becomes the input for the next layer. With multiple input units and output units, we now need to express the weights as a matrix.
<img src='assets/multilayer_diagram_weights.png' width=450px>
The first layer shown on the bottom here are the inputs, understandably called the **input layer**. The middle layer is called the **hidden layer**, and the final layer (on the right) is the **output layer**. We can express this network mathematically with matrices again and use matrix multiplication to get linear combinations for each unit in one operation. For example, the hidden layer ($h_1$ and $h_2$ here) can be calculated
$$
\vec{h} = [h_1 \, h_2] =
\begin{bmatrix}
x_1 \, x_2 \cdots \, x_n
\end{bmatrix}
\cdot
\begin{bmatrix}
w_{11} & w_{12} \\
w_{21} &w_{22} \\
\vdots &\vdots \\
w_{n1} &w_{n2}
\end{bmatrix}
$$
The output for this small network is found by treating the hidden layer as inputs for the output unit. The network output is expressed simply
$$
y = f_2 \! \left(\, f_1 \! \left(\vec{x} \, \mathbf{W_1}\right) \mathbf{W_2} \right)
$$
```
### Generate some data
torch.manual_seed(7) # Set the random seed so things are predictable
# Features are 3 random normal variables
features = torch.randn((1, 3))
# Define the size of each layer in our network
n_input = features.shape[1] # Number of input units, must match number of input features
n_hidden = 2 # Number of hidden units
n_output = 1 # Number of output units
# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)
# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))
```
> **Exercise:** Calculate the output for this multi-layer network using the weights `W1` & `W2`, and the biases, `B1` & `B2`.
```
## Your solution here
```
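A possible solution sketch: feed the hidden layer's activations into the output layer, i.e. compute f(x W1 + B1) first and then apply W2 and B2. If everything lines up, this should reproduce the expected value mentioned in the text.

```python
import torch

def activation(x):
    return 1 / (1 + torch.exp(-x))

torch.manual_seed(7)
features = torch.randn((1, 3))
W1 = torch.randn(features.shape[1], 2)  # input -> hidden weights
W2 = torch.randn(2, 1)                  # hidden -> output weights
B1 = torch.randn((1, 2))
B2 = torch.randn((1, 1))

h = activation(torch.mm(features, W1) + B1)   # hidden layer output
output = activation(torch.mm(h, W2) + B2)     # network output
print(output)
```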
If you did this correctly, you should see the output `tensor([[ 0.3171]])`.
The number of hidden units is a parameter of the network, often called a **hyperparameter** to differentiate it from the weights and biases. As you'll see later when we discuss training a neural network, the more hidden units a network has, and the more layers, the better able it is to learn from data and make accurate predictions.
## Numpy to Torch and back
Special bonus section! PyTorch has a great feature for converting between Numpy arrays and Torch tensors. To create a tensor from a Numpy array, use `torch.from_numpy()`. To convert a tensor to a Numpy array, use the `.numpy()` method.
```
import numpy as np
a = np.random.rand(4,3)
a
b = torch.from_numpy(a)
b
b.numpy()
```
The memory is shared between the Numpy array and Torch tensor, so if you change the values in-place of one object, the other will change as well.
```
# Multiply PyTorch Tensor by 2, in place
b.mul_(2)
# Numpy array matches new values from Tensor
a
```
<a href="https://colab.research.google.com/github/Lambda-School-Labs/bridges-to-prosperity-ds-d/blob/SMOTE_model_building%2Ftrevor/notebooks/Modeling_off_original_data_smote_gridsearchcv.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
This notebook is for problem 2 as described in `B2P Dataset_2020.10.xlsx` Contextual Summary tab:
## Problem 2: Predicting which sites will be technically rejected in future engineering reviews
> Any sites with a "Yes" in the column AQ (`Senior Engineering Review Conducted`) have undergone a full technical review, and of those, the Stage (column L) can be considered to be correct. (`Bridge Opportunity: Stage`)
> Any sites without a "Yes" in Column AQ (`Senior Engineering Review Conducted`) have not undergone a full technical review, and the Stage is based on the assessor's initial estimate as to whether the site was technically feasible or not.
> We want to know if we can use the sites that have been reviewed to understand which of the sites that haven't yet been reviewed are likely to be rejected by the senior engineering team.
> Any of the data can be used, but our guess is that Estimated Span, Height Differential Between Banks, Created By, and Flag for Rejection are likely to be the most reliable predictors.
### Load the data
```
import pandas as pd
url = 'https://github.com/Lambda-School-Labs/bridges-to-prosperity-ds-d/blob/main/Data/B2P%20Dataset_2020.10.xlsx?raw=true'
df = pd.read_excel(url, sheet_name='Data')
```
### Define the target
```
# Any sites with a "Yes" in the column "Senior Engineering Review Conducted"
# have undergone a full technical review, and of those, the
# "Bridge Opportunity: Stage" column can be considered to be correct.
positive = (
(df['Senior Engineering Review Conducted']=='Yes') &
(df['Bridge Opportunity: Stage'].isin(['Complete', 'Prospecting', 'Confirmed']))
)
negative = (
(df['Senior Engineering Review Conducted']=='Yes') &
(df['Bridge Opportunity: Stage'].isin(['Rejected', 'Cancelled']))
)
# Any sites without a "Yes" in column Senior Engineering Review Conducted"
# have not undergone a full technical review ...
# So these sites are unknown and unlabeled
unknown = df['Senior Engineering Review Conducted'].isna()
# Create a new column named "Good Site." This is the target to predict.
# Assign a 1 for the positive class and 0 for the negative class.
df.loc[positive, 'Good Site'] = 1
df.loc[negative, 'Good Site'] = 0
# Assign -1 for unknown/unlabled observations.
# Scikit-learn's documentation for "Semi-Supervised Learning" says,
# "It is important to assign an identifier to unlabeled points ...
# The identifier that this implementation uses is the integer value -1."
# We'll explain this soon!
df.loc[unknown, 'Good Site'] = -1
```
### Drop columns used to derive the target
```
# Because these columns were used to derive the target,
# We can't use them as features, or it would be leakage.
df = df.drop(columns=['Senior Engineering Review Conducted', 'Bridge Opportunity: Stage'])
```
### Look at target's distribution
```
df['Good Site'].value_counts()
```
So we have 65 labeled observations for the positive class, 24 labeled observations for the negative class, and almost 1,400 unlabeled observations.
### 4 recommendations:
- Use **semi-supervised learning**, which "combines a small amount of labeled data with a large amount of unlabeled data". See Wikipedia notes below. Python implementations are available in [scikit-learn](https://scikit-learn.org/stable/modules/label_propagation.html) and [pomegranate](https://pomegranate.readthedocs.io/en/latest/semisupervised.html). Another way to get started: feature engineering + feature selection + K-Means Clustering + PCA in 2 dimensions. Then visualize the clusters on a scatterplot, with colors for the labels.
- Use [**leave-one-out cross-validation**](https://en.wikipedia.org/wiki/Cross-validation_(statistics)#Leave-one-out_cross-validation), without an independent test set, because we have so few labeled observations. It's implemented in [scikit-learn](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.LeaveOneOut.html). Or maybe 10-fold cross-validation with stratified sampling (and no independent test set).
- Consider **"over-sampling"** techniques for imbalanced classification. Python implementations are available in [imbalanced-learn](https://github.com/scikit-learn-contrib/imbalanced-learn).
- Consider using [**Snorkel**](https://www.snorkel.org/) to write "labeling functions" for "weakly supervised learning." The site has many [tutorials](https://www.snorkel.org/use-cases/).
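To make the leave-one-out recommendation concrete, here is a minimal scikit-learn sketch on synthetic data (purely illustrative; the model, features, and sample size are all placeholders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Toy stand-in for a small labeled set: 30 rows, 2 features, binary target
X = rng.normal(size=(30, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Each of the 30 folds trains on 29 points and tests on the held-out one
scores = cross_val_score(LogisticRegression(), X, y, cv=LeaveOneOut())
print(scores.mean())  # fraction of held-out points classified correctly
```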
### [Semi-supervised learning - Wikipedia](https://en.wikipedia.org/wiki/Semi-supervised_learning)
> Semi-supervised learning is an approach to machine learning that combines a small amount of labeled data with a large amount of unlabeled data during training. Semi-supervised learning falls between unsupervised learning (with no labeled training data) and supervised learning (with only labeled training data).
> Unlabeled data, when used in conjunction with a small amount of labeled data, can produce considerable improvement in learning accuracy. The acquisition of labeled data for a learning problem often requires a skilled human agent ... The cost associated with the labeling process thus may render large, fully labeled training sets infeasible, whereas acquisition of unlabeled data is relatively inexpensive. In such situations, semi-supervised learning can be of great practical value.

> An example of the influence of unlabeled data in semi-supervised learning. The top panel shows a decision boundary we might adopt after seeing only one positive (white circle) and one negative (black circle) example. The bottom panel shows a decision boundary we might adopt if, in addition to the two labeled examples, we were given a collection of unlabeled data (gray circles). This could be viewed as performing clustering and then labeling the clusters with the labeled data, pushing the decision boundary away from high-density regions ...
See also:
- “Positive-Unlabeled Learning”
- https://en.wikipedia.org/wiki/One-class_classification
# Model attempt - Smaller Dataset Not utilizing world data
- Here I am attempting to build a model using less "created" data than we did in previous attempts with the world dataset. That dataset had more successful bridge builds, but those builds did not include pertinent features for building a predictive model.
```
df.info()
# Columns suggested by the stakeholder to utilize while model building
keep_list = ['Bridge Opportunity: Span (m)',
'Bridge Opportunity: Individuals Directly Served',
'Form: Created By',
'Height differential between banks', 'Flag for Rejection',
'Good Site']
# isolating the dataset to just the modelset
modelset = df[keep_list]
modelset.head()
# built modelset based off of original dataset - not much cleaning here.
# further cleaning could be an area for improvement.
modelset['Good Site'].value_counts()
!pip install category_encoders
# Imports:
from collections import Counter
from sklearn.pipeline import make_pipeline
from imblearn.pipeline import make_pipeline as make_pipeline_imb
from imblearn.over_sampling import SMOTE
from imblearn.metrics import classification_report_imbalanced
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, accuracy_score, classification_report
# split data - initial split eliminated all of the "unlabeled" sites
data = modelset[(modelset['Good Site']== 0) | (modelset['Good Site']== 1)]
test = modelset[modelset['Good Site']== -1]
train, val = train_test_split(data, test_size=.2, random_state=42)
# splitting our labeled sites into a train and validation set for model building
X_train = train.drop('Good Site', axis=1)
y_train = train['Good Site']
X_val = val.drop('Good Site', axis=1)
y_val = val['Good Site']
X_train.shape, y_train.shape, X_val.shape, y_val.shape
# Building a base model without SMOTE
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
kf = KFold(n_splits=5, shuffle=False)
base_pipe = make_pipeline(ce.OrdinalEncoder(),
SimpleImputer(strategy = 'mean'),
RandomForestClassifier(n_estimators=100, random_state=42))
cross_val_score(base_pipe, X_train, y_train, cv=kf, scoring='precision')
```
From the results above we can see the variety of the precision scores. It looks like we have some overfit values across the different cross-validation folds.
```
# use of imb_learn pipeline
imba_pipe = make_pipeline_imb(ce.OrdinalEncoder(),
SimpleImputer(strategy = 'mean'),
SMOTE(random_state=42),
RandomForestClassifier(n_estimators=100, random_state=42))
cross_val_score(imba_pipe, X_train, y_train, cv=kf, scoring='precision')
```
Using an imbalanced-learn pipeline with SMOTE, we still see large variation in precision: 1.0 as a high and 0.625 as a low.
```
# using grid search to attempt to further validate the model to use on predictions
new_params = {'randomforestclassifier__n_estimators': [100, 200, 50],
'randomforestclassifier__max_depth': [4, 6, 10, 12],
'simpleimputer__strategy': ['mean', 'median']
}
imba_grid_1 = GridSearchCV(imba_pipe, param_grid=new_params, cv=kf,
scoring='precision',
return_train_score=True)
imba_grid_1.fit(X_train, y_train);
# Params used and best score on a basis of precision
print(imba_grid_1.best_params_, imba_grid_1.best_score_)
# Working with more folds for validation
more_kf = KFold(n_splits=15)
imba_grid_2 = GridSearchCV(imba_pipe, param_grid=new_params, cv=more_kf,
scoring='precision',
return_train_score=True)
imba_grid_2.fit(X_train, y_train);
print(imba_grid_2.best_score_, imba_grid_2.best_estimator_)
imba_grid_2.cv_results_
# muted output because it was lengthy
# during output we did see a lot of 1s... Is this a sign of overfitting?
# Now looking to the val set to get some more numbers
y_val_predict = imba_grid_2.predict(X_val)
precision_score(y_val, y_val_predict)
```
The best cross-validated score above was 0.87; running the model on the validation set yields a precision score of roughly 0.92.
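For reference, precision can be computed by hand from the confusion counts; a tiny sketch with made-up labels (not our actual predictions):

```python
# precision = TP / (TP + FP): of everything the model flagged positive,
# the fraction that actually was positive
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 1, 0, 0, 0, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
precision = tp / (tp + fp)
print(precision)  # 3 / (3 + 1) = 0.75
```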
##### Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
```
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Text Classification with Movie Reviews
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/tf2_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/collections/nnlm/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub models</a>
</td>
</table>
This notebook classifies movie reviews as *positive* or *negative* using the text of the review. This is an example of *binary*—or two-class—classification, an important and widely applicable kind of machine learning problem.
We'll use the [IMDB dataset](https://www.tensorflow.org/api_docs/python/tf/keras/datasets/imdb) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews.
This notebook uses [tf.keras](https://www.tensorflow.org/guide/keras), a high-level API to build and train models in TensorFlow, and [TensorFlow Hub](https://www.tensorflow.org/hub), a library and platform for transfer learning. For a more advanced text classification tutorial using `tf.keras`, see the [MLCC Text Classification Guide](https://developers.google.com/machine-learning/guides/text-classification/).
### More models
[Here](https://tfhub.dev/s?module-type=text-embedding) you can find more expressive or performant models that you could use to generate the text embedding.
## Setup
```
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")
```
## Download the IMDB dataset
The IMDB dataset is available on [TensorFlow datasets](https://github.com/tensorflow/datasets). The following code downloads the IMDB dataset to your machine (or the colab runtime):
```
train_data, test_data = tfds.load(name="imdb_reviews", split=["train", "test"],
batch_size=-1, as_supervised=True)
train_examples, train_labels = tfds.as_numpy(train_data)
test_examples, test_labels = tfds.as_numpy(test_data)
```
## Explore the data
Let's take a moment to understand the format of the data. Each example is a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. The label is an integer value of either 0 or 1, where 0 is a negative review, and 1 is a positive review.
```
print("Training entries: {}, test entries: {}".format(len(train_examples), len(test_examples)))
```
Let's print the first 10 examples.
```
train_examples[:10]
```
Let's also print the first 10 labels.
```
train_labels[:10]
```
## Build the model
The neural network is created by stacking layers—this requires three main architectural decisions:
* How to represent the text?
* How many layers to use in the model?
* How many *hidden units* to use for each layer?
In this example, the input data consists of sentences. The labels to predict are either 0 or 1.
One way to represent the text is to convert sentences into embedding vectors. We can use a pre-trained text embedding as the first layer, which will have two advantages:
* we don't have to worry about text preprocessing,
* we can benefit from transfer learning.
For this example we will use a model from [TensorFlow Hub](https://www.tensorflow.org/hub) called [google/nnlm-en-dim50/2](https://tfhub.dev/google/nnlm-en-dim50/2).
There are two other models to test for the sake of this tutorial:
* [google/nnlm-en-dim50-with-normalization/2](https://tfhub.dev/google/nnlm-en-dim50-with-normalization/2) - same as [google/nnlm-en-dim50/2](https://tfhub.dev/google/nnlm-en-dim50/2), but with additional text normalization to remove punctuation. This can help to get better coverage of in-vocabulary embeddings for tokens in your input text.
* [google/nnlm-en-dim128-with-normalization/2](https://tfhub.dev/google/nnlm-en-dim128-with-normalization/2) - A larger model with an embedding dimension of 128 instead of the smaller 50.
Let's first create a Keras layer that uses a TensorFlow Hub model to embed the sentences, and try it out on a couple of input examples. Note that the output shape of the produced embeddings is as expected: `(num_examples, embedding_dimension)`.
```
model = "https://tfhub.dev/google/nnlm-en-dim50/2"
hub_layer = hub.KerasLayer(model, input_shape=[], dtype=tf.string, trainable=True)
hub_layer(train_examples[:3])
```
Let's now build the full model:
```
model = tf.keras.Sequential()
model.add(hub_layer)
model.add(tf.keras.layers.Dense(16, activation='relu'))
model.add(tf.keras.layers.Dense(1))
model.summary()
```
The layers are stacked sequentially to build the classifier:
1. The first layer is a TensorFlow Hub layer. This layer uses a pre-trained Saved Model to map a sentence into its embedding vector. The model that we are using ([google/nnlm-en-dim50/2](https://tfhub.dev/google/nnlm-en-dim50/2)) splits the sentence into tokens, embeds each token and then combines the embedding. The resulting dimensions are: `(num_examples, embedding_dimension)`.
2. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.
3. The last layer is densely connected with a single output node. This outputs logits: the log-odds of the true class, according to the model.
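As a rough illustration of what those three layers compute, here is a toy NumPy sketch (the vocabulary, weights, and sentence are made up; the real model learns its embedding table and weights):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {'great': 0, 'movie': 1, 'boring': 2}
emb_table = rng.normal(size=(len(vocab), 50))   # stand-in for the hub embedding table

# layer 1: tokenize, embed each token, combine (here: average)
tokens = 'great movie'.split()
sentence_emb = emb_table[[vocab[t] for t in tokens]].mean(axis=0)

# layer 2: Dense(16) with relu; layer 3: Dense(1) producing a logit
W1, b1 = rng.normal(size=(50, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
hidden = np.maximum(sentence_emb @ W1 + b1, 0)
logit = (hidden @ W2 + b2).item()

# a sigmoid turns the logit (log-odds) into P(positive)
prob = 1 / (1 + np.exp(-logit))
print(logit, prob)
```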
### Hidden units
The above model has two intermediate or "hidden" layers, between the input and output. The number of outputs (units, nodes, or neurons) is the dimension of the representational space for the layer. In other words, the amount of freedom the network is allowed when learning an internal representation.
If a model has more hidden units (a higher-dimensional representation space), and/or more layers, then the network can learn more complex representations. However, it makes the network more computationally expensive and may lead to learning unwanted patterns—patterns that improve performance on training data but not on the test data. This is called *overfitting*, and we'll explore it later.
### Loss function and optimizer
A model needs a loss function and an optimizer for training. Since this is a binary classification problem and the model outputs logits (a single-unit layer with a linear activation), we'll use the `binary_crossentropy` loss function, configured with `from_logits=True`.
This isn't the only choice for a loss function, you could, for instance, choose `mean_squared_error`. But, generally, `binary_crossentropy` is better for dealing with probabilities—it measures the "distance" between probability distributions, or in our case, between the ground-truth distribution and the predictions.
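To make the "distance" concrete, binary cross-entropy can be checked by hand with NumPy (the logits and labels below are made up):

```python
import numpy as np

logits = np.array([2.0, -1.0, 0.5])   # raw model outputs
labels = np.array([1.0, 0.0, 1.0])    # ground truth

# from_logits=True means the loss applies the sigmoid itself
probs = 1 / (1 + np.exp(-logits))

# binary cross-entropy: -[y*log(p) + (1-y)*log(1-p)], averaged over examples
bce = -np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
print(round(float(bce), 4))  # ~0.3048
```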
Later, when we are exploring regression problems (say, to predict the price of a house), we will see how to use another loss function called mean squared error.
Now, configure the model to use an optimizer and a loss function:
```
model.compile(optimizer='adam',
loss=tf.losses.BinaryCrossentropy(from_logits=True),
metrics=[tf.metrics.BinaryAccuracy(threshold=0.0, name='accuracy')])
```
## Create a validation set
When training, we want to check the accuracy of the model on data it hasn't seen before. Create a *validation set* by setting apart 10,000 examples from the original training data. (Why not use the testing set now? Our goal is to develop and tune our model using only the training data, then use the test data just once to evaluate our accuracy).
```
x_val = train_examples[:10000]
partial_x_train = train_examples[10000:]
y_val = train_labels[:10000]
partial_y_train = train_labels[10000:]
```
## Train the model
Train the model for 40 epochs in mini-batches of 512 samples. This is 40 iterations over all samples in the `partial_x_train` and `partial_y_train` tensors. While training, monitor the model's loss and accuracy on the 10,000 samples from the validation set:
```
history = model.fit(partial_x_train,
partial_y_train,
epochs=40,
batch_size=512,
validation_data=(x_val, y_val),
verbose=1)
```
## Evaluate the model
Let's see how the model performs. Two values will be returned: loss (a number representing our error; lower values are better) and accuracy.
```
results = model.evaluate(test_examples, test_labels)
print(results)
```
This fairly naive approach achieves an accuracy of about 87%. With more advanced approaches, the model should get closer to 95%.
## Create a graph of accuracy and loss over time
`model.fit()` returns a `History` object that contains a dictionary with everything that happened during training:
```
history_dict = history.history
history_dict.keys()
```
There are four entries: one for each monitored metric during training and validation. We can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:
```
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
# "bo" is for "blue dot"
plt.plot(epochs, loss, 'bo', label='Training loss')
# b is for "solid blue line"
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
```
In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.
Notice the training loss *decreases* with each epoch and the training accuracy *increases* with each epoch. This is expected when using a gradient descent optimization—it should minimize the desired quantity on every iteration.
This isn't the case for the validation loss and accuracy—they seem to peak after about twenty epochs. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations *specific* to the training data that do not *generalize* to test data.
For this particular case, we could prevent overfitting by simply stopping the training after twenty or so epochs. Later, you'll see how to do this automatically with a callback.
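The callback hinted at (Keras ships one as `EarlyStopping`) boils down to tracking the best validation loss and stopping after `patience` epochs without improvement; a plain-Python sketch of that logic on made-up losses:

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the 1-based epoch at which training would stop."""
    best, wait = float('inf'), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, wait = loss, 0   # improvement: reset the counter
        else:
            wait += 1              # no improvement this epoch
            if wait >= patience:
                return epoch
    return len(val_losses)         # never triggered: ran all epochs

# loss improves through epoch 4, then plateaus
print(early_stop_epoch([0.9, 0.7, 0.6, 0.55, 0.56, 0.57, 0.58]))  # 7
```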
<a href="https://cognitiveclass.ai"><img src = "https://ibm.box.com/shared/static/9gegpsmnsoo25ikkbl4qzlvlyjbgxs5x.png" width = 400> </a>
<h1 align=center><font size = 5>Waffle Charts, Word Clouds, and Regression Plots</font></h1>
## Introduction
In this lab, we will learn how to create word clouds and waffle charts. Furthermore, we will start learning about additional visualization libraries that are based on Matplotlib, namely the library *seaborn*, and we will learn how to create regression plots using the *seaborn* library.
## Table of Contents
<div class="alert alert-block alert-info" style="margin-top: 20px">
1. [Exploring Datasets with *pandas*](#0)<br>
2. [Downloading and Prepping Data](#2)<br>
3. [Visualizing Data using Matplotlib](#4) <br>
4. [Waffle Charts](#6) <br>
5. [Word Clouds](#8) <br>
6. [Regression Plots](#10) <br>
</div>
<hr>
# Exploring Datasets with *pandas* and Matplotlib<a id="0"></a>
Toolkits: The course heavily relies on [*pandas*](http://pandas.pydata.org/) and [**Numpy**](http://www.numpy.org/) for data wrangling, analysis, and visualization. The primary plotting library we will explore in the course is [Matplotlib](http://matplotlib.org/).
Dataset: Immigration to Canada from 1980 to 2013 - [International migration flows to and from selected countries - The 2015 revision](http://www.un.org/en/development/desa/population/migration/data/empirical2/migrationflows.shtml) from United Nation's website
The dataset contains annual data on the flows of international migrants as recorded by the countries of destination. The data presents both inflows and outflows according to the place of birth, citizenship or place of previous / next residence both for foreigners and nationals. In this lab, we will focus on the Canadian Immigration data.
# Downloading and Prepping Data <a id="2"></a>
Import Primary Modules:
```
import numpy as np # useful for many scientific computing in Python
import pandas as pd # primary data structure library
from PIL import Image # converting images into arrays
```
Let's download and import our primary Canadian Immigration dataset using *pandas* `read_excel()` method. Normally, before we can do that, we would need to download a module which *pandas* requires to read in excel files. This module is **xlrd**. For your convenience, we have pre-installed this module, so you would not have to worry about that. Otherwise, you would need to run the following line of code to install the **xlrd** module:
```
!conda install -c anaconda xlrd --yes
```
Download the dataset and read it into a *pandas* dataframe:
```
df_can = pd.read_excel('https://ibm.box.com/shared/static/lw190pt9zpy5bd1ptyg2aw15awomz9pu.xlsx',
sheet_name='Canada by Citizenship',
skiprows=range(20),
skipfooter=2)
print('Data downloaded and read into a dataframe!')
```
Let's take a look at the first five items in our dataset
```
df_can.head()
```
Let's find out how many entries there are in our dataset
```
# print the dimensions of the dataframe
print(df_can.shape)
```
Clean up data. We will make some modifications to the original dataset to make it easier to create our visualizations. Refer to *Introduction to Matplotlib and Line Plots* and *Area Plots, Histograms, and Bar Plots* for a detailed description of this preprocessing.
```
# clean up the dataset to remove unnecessary columns (eg. REG)
df_can.drop(['AREA','REG','DEV','Type','Coverage'], axis = 1, inplace = True)
# let's rename the columns so that they make sense
df_can.rename (columns = {'OdName':'Country', 'AreaName':'Continent','RegName':'Region'}, inplace = True)
# for sake of consistency, let's also make all column labels of type string
df_can.columns = list(map(str, df_can.columns))
# set the country name as index - useful for quickly looking up countries using .loc method
df_can.set_index('Country', inplace = True)
# add total column
df_can['Total'] = df_can.sum (axis = 1)
# years that we will be using in this lesson - useful for plotting later on
years = list(map(str, range(1980, 2014)))
print ('data dimensions:', df_can.shape)
```
# Visualizing Data using Matplotlib<a id="4"></a>
Import `matplotlib`:
```
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches # needed for waffle Charts
mpl.style.use('ggplot') # optional: for ggplot-like style
# check for latest version of Matplotlib
print ('Matplotlib version: ', mpl.__version__) # >= 2.0.0
```
# Waffle Charts <a id="6"></a>
A `waffle chart` is an interesting visualization that is normally created to display progress toward goals. It is commonly an effective option when you are trying to add interesting visualization features to a visual that consists mainly of cells, such as an Excel dashboard.
Let's revisit the previous case study about Denmark, Norway, and Sweden.
```
# let's create a new dataframe for these three countries
df_dsn = df_can.loc[['Denmark', 'Norway', 'Sweden'], :]
# let's take a look at our dataframe
df_dsn
```
Unfortunately, unlike R, `waffle` charts are not built into any of the Python visualization libraries. Therefore, we will learn how to create them from scratch.
**Step 1.** The first step in creating a waffle chart is determining the proportion of each category with respect to the total.
```
# compute the proportion of each category with respect to the total
total_values = sum(df_dsn['Total'])
category_proportions = [(float(value) / total_values) for value in df_dsn['Total']]
# print out proportions
for i, proportion in enumerate(category_proportions):
print (df_dsn.index.values[i] + ': ' + str(proportion))
```
**Step 2.** The second step is defining the overall size of the `waffle` chart.
```
width = 40 # width of chart
height = 10 # height of chart
total_num_tiles = width * height # total number of tiles
print ('Total number of tiles is ', total_num_tiles)
```
**Step 3.** The third step is using the proportion of each category to determine its respective number of tiles
```
# compute the number of tiles for each category
tiles_per_category = [round(proportion * total_num_tiles) for proportion in category_proportions]
# print out number of tiles per category
for i, tiles in enumerate(tiles_per_category):
print (df_dsn.index.values[i] + ': ' + str(tiles))
```
Based on the calculated proportions, Denmark will occupy 129 tiles of the `waffle` chart, Norway will occupy 77 tiles, and Sweden will occupy 194 tiles.
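One caveat: rounding each proportion independently with `round()` can make the tile counts sum to slightly more or fewer than `total_num_tiles` (here they happen to hit 400 exactly). If exactness matters, a largest-remainder allocation (a sketch, not part of this lab) always lands on the total:

```python
def allocate_tiles(proportions, total):
    """Largest-remainder method: floor everything, then hand the
    leftover tiles to the categories with the biggest remainders."""
    raw = [p * total for p in proportions]
    tiles = [int(r) for r in raw]
    leftover = total - sum(tiles)
    by_remainder = sorted(range(len(raw)), key=lambda i: raw[i] - tiles[i], reverse=True)
    for i in by_remainder[:leftover]:
        tiles[i] += 1
    return tiles

print(allocate_tiles([0.4, 0.35, 0.25], 7))  # [3, 2, 2] -- sums to exactly 7
```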
**Step 4.** The fourth step is creating a matrix that resembles the `waffle` chart and populating it.
```
# initialize the waffle chart as an empty matrix
waffle_chart = np.zeros((height, width))
# define indices to loop through waffle chart
category_index = 0
tile_index = 0
# populate the waffle chart
for col in range(width):
for row in range(height):
tile_index += 1
# if the number of tiles populated for the current category is equal to its corresponding allocated tiles...
if tile_index > sum(tiles_per_category[0:category_index]):
# ...proceed to the next category
category_index += 1
# set the class value to an integer, which increases with class
waffle_chart[row, col] = category_index
print ('Waffle chart populated!')
```
Let's take a peek at what the matrix looks like.
```
waffle_chart
```
As expected, the matrix consists of three categories and the total number of each category's instances matches the total number of tiles allocated to each category.
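As an aside, the same matrix can be built without the nested loops: repeat each category index by its tile count and reshape column-major, which matches the column-by-column fill above. A NumPy sketch (assumes the tile counts sum to `width * height`):

```python
import numpy as np

height, width = 10, 40
tiles_per_category = [129, 77, 194]   # must sum to height * width

# category 1 repeated 129 times, category 2 repeated 77 times, ...
flat = np.repeat(np.arange(1, len(tiles_per_category) + 1), tiles_per_category)

# reshaping as (width, height) then transposing = filling column by column
waffle_vec = flat.reshape(width, height).T
print(waffle_vec.shape)  # (10, 40)
```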
**Step 5.** Map the `waffle` chart matrix into a visual.
```
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
```
**Step 6.** Prettify the chart.
```
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
# get the axis
ax = plt.gca()
# set minor ticks
ax.set_xticks(np.arange(-.5, (width), 1), minor=True)
ax.set_yticks(np.arange(-.5, (height), 1), minor=True)
# add gridlines based on minor ticks
ax.grid(which='minor', color='w', linestyle='-', linewidth=2)
plt.xticks([])
plt.yticks([])
```
**Step 7.** Create a legend and add it to chart.
```
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
colormap = plt.cm.coolwarm
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
# get the axis
ax = plt.gca()
# set minor ticks
ax.set_xticks(np.arange(-.5, (width), 1), minor=True)
ax.set_yticks(np.arange(-.5, (height), 1), minor=True)
# add gridlines based on minor ticks
ax.grid(which='minor', color='w', linestyle='-', linewidth=2)
plt.xticks([])
plt.yticks([])
# compute cumulative sum of individual categories to match color schemes between chart and legend
values_cumsum = np.cumsum(df_dsn['Total'])
total_values = values_cumsum[len(values_cumsum) - 1]
# create legend
legend_handles = []
for i, category in enumerate(df_dsn.index.values):
    label_str = category + ' (' + str(df_dsn['Total'].iloc[i]) + ')'
color_val = colormap(float(values_cumsum[i])/total_values)
legend_handles.append(mpatches.Patch(color=color_val, label=label_str))
# add legend to chart
plt.legend(handles=legend_handles,
loc='lower center',
ncol=len(df_dsn.index.values),
bbox_to_anchor=(0., -0.2, 0.95, .1)
)
```
And there you go! What a good looking *delicious* `waffle` chart, don't you think?
Now it would be very inefficient to repeat these seven steps every time we wish to create a `waffle` chart. So let's combine all seven steps into one function called *create_waffle_chart*. This function would take the following parameters as input:
> 1. **categories**: Unique categories or classes in dataframe.
> 2. **values**: Values corresponding to categories or classes.
> 3. **height**: Defined height of waffle chart.
> 4. **width**: Defined width of waffle chart.
> 5. **colormap**: Colormap class
> 6. **value_sign**: In order to make our function more generalizable, we will add this parameter to address signs that could be associated with a value such as %, $, and so on. **value_sign** has a default value of empty string.
```
def create_waffle_chart(categories, values, height, width, colormap, value_sign=''):
# compute the proportion of each category with respect to the total
    values = list(values)  # accept a pandas Series or a plain list
    total_values = sum(values)
category_proportions = [(float(value) / total_values) for value in values]
# compute the total number of tiles
total_num_tiles = width * height # total number of tiles
print ('Total number of tiles is', total_num_tiles)
    # compute the number of tiles for each category
tiles_per_category = [round(proportion * total_num_tiles) for proportion in category_proportions]
# print out number of tiles per category
for i, tiles in enumerate(tiles_per_category):
        print(categories[i] + ': ' + str(tiles))
# initialize the waffle chart as an empty matrix
waffle_chart = np.zeros((height, width))
# define indices to loop through waffle chart
category_index = 0
tile_index = 0
# populate the waffle chart
for col in range(width):
for row in range(height):
tile_index += 1
# if the number of tiles populated for the current category
# is equal to its corresponding allocated tiles...
if tile_index > sum(tiles_per_category[0:category_index]):
# ...proceed to the next category
category_index += 1
# set the class value to an integer, which increases with class
waffle_chart[row, col] = category_index
# instantiate a new figure object
fig = plt.figure()
# use matshow to display the waffle chart
plt.matshow(waffle_chart, cmap=colormap)
plt.colorbar()
# get the axis
ax = plt.gca()
# set minor ticks
ax.set_xticks(np.arange(-.5, (width), 1), minor=True)
ax.set_yticks(np.arange(-.5, (height), 1), minor=True)
    # add gridlines based on minor ticks
ax.grid(which='minor', color='w', linestyle='-', linewidth=2)
plt.xticks([])
plt.yticks([])
# compute cumulative sum of individual categories to match color schemes between chart and legend
values_cumsum = np.cumsum(values)
total_values = values_cumsum[len(values_cumsum) - 1]
# create legend
legend_handles = []
for i, category in enumerate(categories):
if value_sign == '%':
label_str = category + ' (' + str(values[i]) + value_sign + ')'
else:
label_str = category + ' (' + value_sign + str(values[i]) + ')'
color_val = colormap(float(values_cumsum[i])/total_values)
legend_handles.append(mpatches.Patch(color=color_val, label=label_str))
# add legend to chart
plt.legend(
handles=legend_handles,
loc='lower center',
ncol=len(categories),
bbox_to_anchor=(0., -0.2, 0.95, .1)
)
```
Now to create a `waffle` chart, all we have to do is call the function `create_waffle_chart`. Let's define the input parameters:
```
width = 40 # width of chart
height = 10 # height of chart
categories = df_dsn.index.values # categories
values = df_dsn['Total'] # corresponding values of categories
colormap = plt.cm.coolwarm # color map class
```
And now let's call our function to create a `waffle` chart.
```
create_waffle_chart(categories, values, height, width, colormap)
```
There seems to be a new Python package for generating `waffle charts` called [PyWaffle](https://github.com/ligyxy/PyWaffle), but the repository has barely any documentation on the package. Accordingly, I couldn't use the package to prepare enough content to incorporate into this lab. But feel free to check it out and play with it. In the event that the package becomes complete with full documentation, then I will update this lab accordingly.
# Word Clouds <a id="8"></a>
`Word` clouds (also known as text clouds or tag clouds) work in a simple way: the more a specific word appears in a source of textual data (such as a speech, blog post, or database), the bigger and bolder it appears in the word cloud.
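Underneath, the sizing is driven by plain word frequencies, which are easy to sketch with the standard library (toy text):

```python
from collections import Counter

text = "the queen said alice said the queen smiled"
counts = Counter(text.split())

# a word cloud draws higher-count words bigger and bolder
print(counts.most_common(3))
```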
Luckily, a Python package already exists for generating `word` clouds. The package, called `word_cloud`, was developed by **Andreas Mueller**. You can learn more about the package by following this [link](https://github.com/amueller/word_cloud/).
Let's use this package to learn how to generate a word cloud for a given text document.
First, let's install the package.
```
# install wordcloud
!conda install -c conda-forge wordcloud==1.4.1 --yes
# import package and its set of stopwords
from wordcloud import WordCloud, STOPWORDS
print ('Wordcloud is installed and imported!')
```
`Word` clouds are commonly used to perform high-level analysis and visualization of text data. Accordingly, let's digress from the immigration dataset and work with an example that involves analyzing text data. Let's try to analyze a short novel written by **Lewis Carroll** titled *Alice's Adventures in Wonderland*. Let's go ahead and download a _.txt_ file of the novel.
```
# download file and save as alice_novel.txt
!wget --quiet https://ibm.box.com/shared/static/m54sjtrshpt5su20dzesl5en9xa5vfz1.txt -O alice_novel.txt
# open the file and read it into a variable alice_novel
with open('alice_novel.txt', 'r') as novel_file:
    alice_novel = novel_file.read()
print ('File downloaded and saved!')
```
Next, let's use the stopwords that we imported from `word_cloud`. We use the function *set* to remove any redundant stopwords.
```
stopwords = set(STOPWORDS)
```
Create a word cloud object and generate a word cloud. For simplicity, we'll cap the cloud at the 2000 most frequently occurring words (`max_words=2000`).
```
# instantiate a word cloud object
alice_wc = WordCloud(
background_color='white',
max_words=2000,
stopwords=stopwords
)
# generate the word cloud
alice_wc.generate(alice_novel)
```
Awesome! Now that the `word` cloud is created, let's visualize it.
```
# display the word cloud
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
```
Interesting! The most common words in the novel are **Alice**, **said**, **little**, **Queen**, and so on. Let's resize the cloud so that we can see the less frequent words a little better.
```
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
# display the cloud
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
```
Much better! However, **said** isn't really an informative word. So let's add it to our stopwords and re-generate the cloud.
```
stopwords.add('said') # add the word 'said' to stopwords
# re-generate the word cloud
alice_wc.generate(alice_novel)
# display the cloud
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
```
Excellent! This looks really interesting! Another cool thing you can implement with the `word_cloud` package is superimposing the words onto a mask of any shape. Let's use a mask of Alice and her rabbit. We already created the mask for you, so let's go ahead and download it and call it *alice_mask.png*.
```
# download image
!wget --quiet https://ibm.box.com/shared/static/3mpxgaf6muer6af7t1nvqkw9cqj85ibm.png -O alice_mask.png
# save mask to alice_mask
alice_mask = np.array(Image.open('alice_mask.png'))
print('Image downloaded and saved!')
```
Let's take a look at what the mask looks like.
```
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
plt.imshow(alice_mask, cmap=plt.cm.gray, interpolation='bilinear')
plt.axis('off')
plt.show()
```
Shaping the `word` cloud according to the mask is straightforward using the `word_cloud` package. For simplicity, we will keep the `max_words=2000` cap.
```
# instantiate a word cloud object
alice_wc = WordCloud(background_color='white', max_words=2000, mask=alice_mask, stopwords=stopwords)
# generate the word cloud
alice_wc.generate(alice_novel)
# display the word cloud
fig = plt.figure()
fig.set_figwidth(14) # set width
fig.set_figheight(18) # set height
plt.imshow(alice_wc, interpolation='bilinear')
plt.axis('off')
plt.show()
```
Really impressive!
Unfortunately, our immigration data does not have any text data, but where there is a will there is a way. Let's generate sample text data from our immigration dataset, say text data of 90 words.
Let's recall what our data looks like.
```
df_can.head()
```
And what was the total immigration from 1980 to 2013?
```
total_immigration = df_can['Total'].sum()
total_immigration
```
Using countries with single-word names, let's duplicate each country's name based on how much they contribute to the total immigration.
```
max_words = 90
word_string = ''
for country in df_can.index.values:
# check if country's name is a single-word name
if len(country.split(' ')) == 1:
repeat_num_times = int(df_can.loc[country, 'Total']/float(total_immigration)*max_words)
word_string = word_string + ((country + ' ') * repeat_num_times)
# display the generated text
word_string
```
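The frequency-proportional repetition used above can be illustrated on its own with made-up totals (the country names and numbers below are hypothetical, not taken from `df_can`):

```
# Sketch of the proportional-repetition idea with hypothetical totals
totals = {'China': 600000, 'India': 550000,
          'Philippines': 400000, 'United Kingdom': 500000}
max_words = 90
grand_total = sum(totals.values())

word_string = ''
for country, total in totals.items():
    if len(country.split(' ')) == 1:  # single-word names only
        repeat_num_times = int(total / grand_total * max_words)
        word_string += (country + ' ') * repeat_num_times

# 'United Kingdom' is skipped because it is a two-word name
print(word_string)
```

Each name is repeated in proportion to its share of the grand total, so the resulting word cloud sizes reflect relative volume.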
We are not dealing with any stopwords here, so there is no need to pass them when creating the word cloud.
```
# create the word cloud
wordcloud = WordCloud(background_color='white').generate(word_string)
print('Word cloud created!')
# display the cloud
fig = plt.figure()
fig.set_figwidth(14)
fig.set_figheight(18)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis('off')
plt.show()
```
According to the above word cloud, it looks like the majority of the people who immigrated came from one of the 15 countries displayed. One cool visual you could build is to use a mask in the shape of the map of Canada and superimpose the word cloud on top of it. That would be an interesting visual to build!
# Regression Plots <a id="10"></a>
> Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics. You can learn more about *seaborn* by following this [link](https://seaborn.pydata.org/) and more about *seaborn* regression plots by following this [link](http://seaborn.pydata.org/generated/seaborn.regplot.html).
In lab *Pie Charts, Box Plots, Scatter Plots, and Bubble Plots*, we learned how to create a scatter plot and then fit a regression line. It took ~20 lines of code to create the scatter plot along with the regression fit. In this final section, we will explore *seaborn* and see how efficient it is to create regression lines and fits using this library!
Let's first install *seaborn*
```
# install seaborn
!pip install seaborn
# import library
import seaborn as sns
print('Seaborn installed and imported!')
```
Create a new dataframe that stores the total number of landed immigrants to Canada per year from 1980 to 2013.
```
# we can use the sum() method to get the total population per year
df_tot = pd.DataFrame(df_can[years].sum(axis=0))
# change the years to type float (useful for regression later on)
df_tot.index = map(float,df_tot.index)
# reset the index to put it back in as a column in the df_tot dataframe
df_tot.reset_index(inplace = True)
# rename columns
df_tot.columns = ['year', 'total']
# view the final dataframe
df_tot.head()
```
With *seaborn*, generating a regression plot is as simple as calling the **regplot** function.
```
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot)
```
This is not magic; it is *seaborn*! You can also customize the color of the scatter plot and regression line. Let's change the color to green.
```
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot, color='green')
```
You can always customize the marker shape, so instead of circular markers, let's use '+'.
```
import seaborn as sns
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+')
```
Let's enlarge the plot a little bit so that it is more appealing.
```
plt.figure(figsize=(15, 10))
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+')
```
And let's increase the size of markers so they match the new size of the figure, and add a title and x- and y-labels.
```
plt.figure(figsize=(15, 10))
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration') # add x- and y-labels
ax.set_title('Total Immigration to Canada from 1980 - 2013') # add title
```
And finally increase the font size of the tickmark labels, the title, and the x- and y-labels so they don't feel left out!
```
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
```
Amazing! A complete scatter plot with a regression fit in only 5 lines of code. Isn't that impressive?
If you are not a big fan of the purple background, you can easily change the style to a white plain background.
```
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
sns.set_style('ticks') # change background to white background
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
```
Or to a white background with gridlines.
```
plt.figure(figsize=(15, 10))
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
ax = sns.regplot(x='year', y='total', data=df_tot, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration to Canada from 1980 - 2013')
```
**Question**: Use seaborn to create a scatter plot with a regression line to visualize the total immigration from Denmark, Sweden, and Norway to Canada from 1980 to 2013.
```
### type your answer here
```
Double-click __here__ for the solution.
<!-- The correct answer is:
\\ # create df_countries dataframe
df_countries = df_can.loc[['Denmark', 'Norway', 'Sweden'], years].transpose()
-->
<!--
\\ # create df_total by summing across three countries for each year
df_total = pd.DataFrame(df_countries.sum(axis=1))
-->
<!--
\\ # reset index in place
df_total.reset_index(inplace=True)
-->
<!--
\\ # rename columns
df_total.columns = ['year', 'total']
-->
<!--
\\ # change column year from string to int to create scatter plot
df_total['year'] = df_total['year'].astype(int)
-->
<!--
\\ # define figure size
plt.figure(figsize=(15, 10))
-->
<!--
\\ # define background style and font size
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
-->
<!--
\\ # generate plot and add title and axes labels
ax = sns.regplot(x='year', y='total', data=df_total, color='green', marker='+', scatter_kws={'s': 200})
ax.set(xlabel='Year', ylabel='Total Immigration')
ax.set_title('Total Immigration from Denmark, Sweden, and Norway to Canada from 1980 - 2013')
-->
### Thank you for completing this lab!
This notebook was created by [Alex Aklson](https://www.linkedin.com/in/aklson/). I hope you found this lab interesting and educational. Feel free to contact me if you have any questions!
This notebook is part of a course on **Coursera** called *Data Visualization with Python*. If you accessed this notebook outside the course, you can take this course online by clicking [here](http://cocl.us/DV0101EN_Coursera_Week3_LAB1).
<hr>
Copyright © 2018 [Cognitive Class](https://cognitiveclass.ai/?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).
<h2>Factorization Machines - Movie Recommendation Model</h2>
Input Features: [userId, movieId] <br>
Target: rating <br>
```
import numpy as np
import pandas as pd
# Define IAM role
import boto3
import re
import sagemaker
from sagemaker import get_execution_role
# SageMaker SDK Documentation: http://sagemaker.readthedocs.io/en/latest/estimators.html
```
## Upload Data to S3
```
# Specify your bucket name
bucket_name = 'chandra-ml-sagemaker'
training_file_key = 'movie/user_movie_train.recordio'
test_file_key = 'movie/user_movie_test.recordio'
s3_model_output_location = r's3://{0}/movie/model'.format(bucket_name)
s3_training_file_location = r's3://{0}/{1}'.format(bucket_name,training_file_key)
s3_test_file_location = r's3://{0}/{1}'.format(bucket_name,test_file_key)
# Read Dimension: Number of unique users + Number of unique movies in our dataset
dim_movie = 0
# Update movie dimension - from file used for training
with open(r'ml-latest-small/movie_dimension.txt','r') as f:
dim_movie = int(f.read())
dim_movie
print(s3_model_output_location)
print(s3_training_file_location)
print(s3_test_file_location)
# Writing to and reading from S3 is just as easy
# files are referred to as objects in S3
# a file name is referred to as a key name in S3
# Files stored in S3 are automatically replicated across 3 different availability zones
# in the region where the bucket was created.
# http://boto3.readthedocs.io/en/latest/guide/s3.html
def write_to_s3(filename, bucket, key):
with open(filename,'rb') as f: # Read in binary mode
return boto3.Session().resource('s3').Bucket(bucket).Object(key).upload_fileobj(f)
write_to_s3(r'ml-latest-small/user_movie_train.recordio',bucket_name,training_file_key)
write_to_s3(r'ml-latest-small/user_movie_test.recordio',bucket_name,test_file_key)
```
## Training Algorithm Docker Image
### AWS Maintains a separate image for every region and algorithm
```
sess = sagemaker.Session()
role = get_execution_role()
# This role contains the permissions needed to train, deploy models
# SageMaker Service is trusted to assume this role
print(role)
# https://sagemaker.readthedocs.io/en/stable/api/utility/image_uris.html#sagemaker.image_uris.retrieve
# SDK 2 uses image_uris.retrieve to get the container image location
# Use factorization-machines
container = sagemaker.image_uris.retrieve("factorization-machines",sess.boto_region_name)
print (f'Using FM Container {container}')
container
```
## Build Model
```
# Configure the training job
# Specify type and number of instances to use
# S3 location where final artifacts need to be stored
# Reference: http://sagemaker.readthedocs.io/en/latest/estimators.html
# SDK 2.x version does not require train prefix for instance count and type
estimator = sagemaker.estimator.Estimator(container,
role,
instance_count=1,
instance_type='ml.m4.xlarge',
output_path=s3_model_output_location,
sagemaker_session=sess,
base_job_name ='fm-movie-v4')
```
### New Configuration after Model Tuning
### Refer to Hyperparameter Tuning Lecture on how to optimize hyperparameters
```
estimator.set_hyperparameters(feature_dim=dim_movie,
num_factors=8,
predictor_type='regressor',
mini_batch_size=994,
epochs=91,
bias_init_method='normal',
bias_lr=0.21899531189430518,
factors_init_method='normal',
factors_lr=5.357593337770278e-05,
linear_init_method='normal',
linear_lr=0.00021524948053767607)
estimator.hyperparameters()
```
### Train the model
```
# New Hyperparameters
# Reference: Supported channels by algorithm
# https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html
estimator.fit({'train':s3_training_file_location, 'test': s3_test_file_location})
```
## Deploy Model
```
# Ref: http://sagemaker.readthedocs.io/en/latest/estimators.html
predictor = estimator.deploy(initial_instance_count=1,
instance_type='ml.m4.xlarge',
endpoint_name = 'fm-movie-v4')
```
## Run Predictions
### Dense and Sparse Formats
https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-inference.html
```
import json
def fm_sparse_serializer(data):
js = {'instances': []}
for row in data:
column_list = row.tolist()
value_list = np.ones(len(column_list),dtype=int).tolist()
js['instances'].append({'data':{'features': { 'keys': column_list, 'shape':[dim_movie], 'values': value_list}}})
return json.dumps(js)
# SDK 2
from sagemaker.deserializers import JSONDeserializer
# https://github.com/aws/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/factorization_machines_mnist/factorization_machines_mnist.ipynb
# Specify custom serializer
predictor.serializer.serialize = fm_sparse_serializer
predictor.serializer.content_type = 'application/json'
predictor.deserializer = JSONDeserializer()
import numpy as np
fm_sparse_serializer([np.array([341,1416])])
# Let's test with few entries from test file
# Movie dataset is updated regularly...so, instead of hard coding userid and movie id, let's
# use actual values
# Each row is in this format: ['2.5', '426:1', '943:1']
# ActualRating, UserID, MovieID
with open(r'ml-latest-small/user_movie_test.svm','r') as f:
for i in range(3):
rating = f.readline().split()
print(f"Movie {rating}")
userID = rating[1].split(':')[0]
movieID = rating[2].split(':')[0]
predicted_rating = predictor.predict([np.array([int(userID),int(movieID)])])
print(f' Actual Rating:\t{rating[0]}')
print(f" Predicted Rating:\t{predicted_rating['predictions'][0]['score']}")
print()
```
## Summary
1. Ensure Training, Test and Validation data are in S3 Bucket
2. Select Algorithm Container Registry Path - Path varies by region
3. Configure Estimator for training - Specify Algorithm container, instance count, instance type, model output location
4. Specify algorithm specific hyper parameters
5. Train model
6. Deploy model - Specify instance count, instance type and endpoint name
7. Run Predictions
# The overview of the basic approaches to solving the Uplift Modeling problem
<br>
<center>
<a href="https://colab.research.google.com/github/maks-sh/scikit-uplift/blob/master/notebooks/RetailHero_EN.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg">
</a>
<br>
<b><a href="https://github.com/maks-sh/scikit-uplift/">SCIKIT-UPLIFT REPO</a> | </b>
<b><a href="https://scikit-uplift.readthedocs.io/en/latest/">SCIKIT-UPLIFT DOCS</a> | </b>
<b><a href="https://scikit-uplift.readthedocs.io/en/latest/user_guide/index.html">USER GUIDE</a></b>
<br>
<b><a href="https://nbviewer.jupyter.org/github/maks-sh/scikit-uplift/blob/master/notebooks/RetailHero.ipynb">RUSSIAN VERSION</a></b>
</center>
## Content
* [Introduction](#Introduction)
* [1. Single model approaches](#1.-Single-model-approaches)
* [1.1 Single model](#1.1-Single-model-with-treatment-as-feature)
* [1.2 Class Transformation](#1.2-Class-Transformation)
* [2. Approaches with two models](#2.-Approaches-with-two-models)
* [2.1 Two independent models](#2.1-Two-independent-models)
* [2.2 Two dependent models](#2.2-Two-dependent-models)
* [Conclusion](#Conclusion)
## Introduction
Before proceeding to the discussion of uplift modeling, let's imagine some situation:
A customer comes to you with a certain problem: it is necessary to advertise a popular product via SMS.
And then you begin to understand that the product is already popular, that it is often installed by customers without any communication, that the usual binary classification will find many such customers, and that the cost of communication is critical for us...
Historically, according to the impact of communication, marketers divide all customers into 4 categories:
<p align="center">
<img src="https://raw.githubusercontent.com/maks-sh/scikit-uplift/master/docs/_static/images/user_guide/ug_clients_types.jpg" alt="Customer types" width='40%'/>
</p>
- **`Do-Not-Disturbs`** *(a.k.a. Sleeping-dogs)* have a strong negative response to a marketing communication. They are going to purchase if *NOT* treated and will *NOT* purchase *IF* treated. It is not only a wasted marketing budget but also a negative impact. For instance, customers targeted could result in rejecting current products or services. In terms of math: $W_i = 1, Y_i = 0$ or $W_i = 0, Y_i = 1$.
- **`Lost Causes`** will *NOT* purchase the product *NO MATTER* they are contacted or not. The marketing budget in this case is also wasted because it has no effect. In terms of math: $W_i = 1, Y_i = 0$ or $W_i = 0, Y_i = 0$.
- **`Sure Things`** will purchase *ANYWAY* no matter they are contacted or not. There is no motivation to spend the budget because it also has no effect. In terms of math: $W_i = 1, Y_i = 1$ or $W_i = 0, Y_i = 1$.
- **`Persuadables`** will always respond *POSITIVE* to the marketing communication. They are going to purchase *ONLY* if contacted (or sometimes they purchase *MORE* or *EARLIER* only if contacted). This type of customer should be the only target of the marketing campaign. In terms of math: $W_i = 0, Y_i = 0$ or $W_i = 1, Y_i = 1$.
Because we can't communicate and not communicate with the customer at the same time, we will never be able to observe exactly which type a particular customer belongs to.
Depending on the product characteristics and the customer base structure, some types may be absent. In addition, a customer response depends heavily on various characteristics of the campaign, such as a communication channel or a type and a size of the marketing offer. To maximize profit, these parameters should be selected.
Thus, when predicting uplift score and selecting a segment by the highest score, we are trying to find the only one type: **persuadables**.
Thus, in this task, we don’t want to predict the probability of performing a target action, but to focus the advertising budget on the customers who will perform the target action only when we interact. In other words, we want to evaluate two conditional probabilities separately for each client:
* Performing a targeted action when we influence the client.
We will assign such clients to the **test group (aka treatment)**: $P^T = P(Y=1 | W = 1)$,
* Performing a targeted action without influencing the client.
We will assign such clients to the **control group (aka control)**: $P^C = P(Y=1 | W = 0)$,
where $Y$ is the binary flag for executing the target action, and $W$ is the binary flag for communication (in English literature, _treatment_)
The very same cause-and-effect effect is called **uplift** and is estimated as the difference between these two probabilities:
$$ uplift = P^T - P^C = P(Y = 1 | W = 1) - P(Y = 1 | W = 0) $$
Predicting uplift is a cause-and-effect inference task. The point is that you need to evaluate the difference between two events that are mutually exclusive for a particular client (either we interact with a person, or not; you can't perform two of these actions at the same time). This is why additional requirements for source data are required for building uplift models.
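With data from a randomized pilot, the average uplift defined above is estimated simply as the difference between the response rates of the two groups; a tiny numeric sketch with made-up flags:

```
import numpy as np

# Hypothetical pilot: w is the communication flag, y is the target-action flag
w = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y = np.array([1, 1, 0, 1, 1, 0, 0, 0])

p_t = y[w == 1].mean()  # P(Y=1 | W=1): 3 of 4 treated responded -> 0.75
p_c = y[w == 0].mean()  # P(Y=1 | W=0): 1 of 4 controls responded -> 0.25
uplift = p_t - p_c      # 0.75 - 0.25 = 0.5
```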
To get a training sample for the uplift simulation, you need to conduct an experiment:
1. Randomly split a representative part of the client base into a test and control group
2. Communicate with the test group
The data obtained as part of the design of such a pilot will allow us to build an uplift forecasting model in the future. It is also worth noting that the experiment should be as similar as possible to the campaign, which will be launched later on a larger scale. The only difference between the experiment and the campaign should be the fact that during the pilot, we choose random clients for interaction, and during the campaign - based on the predicted value of the Uplift. If the campaign that is eventually launched differs significantly from the experiment that is used to collect data about the performance of targeted actions by clients, then the model that is built may be less reliable and accurate.
So, the approaches to predicting uplift are aimed at assessing the net effect of marketing campaigns on customers.
All classical approaches to uplift modeling can be divided into two classes:
1. Approaches with the same model
2. Approaches using two models
Let's download [RetailHero.ai contest data](https://ods.ai/competitions/x5-retailhero-uplift-modeling/data):
```
import sys
# install uplift library scikit-uplift and other libraries
!{sys.executable} -m pip install scikit-uplift catboost pandas
from sklearn.model_selection import train_test_split
from sklift.datasets import fetch_x5
import pandas as pd
pd.set_option('display.max_columns', None)
%matplotlib inline
dataset = fetch_x5()
dataset.keys()
print(f"Dataset type: {type(dataset)}\n")
print(f"Dataset clients shape: {dataset.data['clients'].shape}")
print(f"Dataset train shape: {dataset.data['train'].shape}")
print(f"Dataset target shape: {dataset.target.shape}")
print(f"Dataset treatment shape: {dataset.treatment.shape}")
```
Read more about dataset <a href="https://www.uplift-modeling.com/en/latest/api/datasets/fetch_x5.html">in the api docs</a>.
Now let's preprocess it a bit:
```
# extract data
df_clients = dataset.data['clients'].set_index("client_id")
df_train = pd.concat([dataset.data['train'], dataset.treatment , dataset.target], axis=1).set_index("client_id")
indices_test = pd.Index(set(df_clients.index) - set(df_train.index))
# extracting features
df_features = df_clients.copy()
df_features['first_issue_time'] = \
(pd.to_datetime(df_features['first_issue_date'])
- pd.Timestamp('1970-01-01')) // pd.Timedelta('1s')
df_features['first_redeem_time'] = \
(pd.to_datetime(df_features['first_redeem_date'])
- pd.Timestamp('1970-01-01')) // pd.Timedelta('1s')
df_features['issue_redeem_delay'] = df_features['first_redeem_time'] \
- df_features['first_issue_time']
df_features = df_features.drop(['first_issue_date', 'first_redeem_date'], axis=1)
indices_learn, indices_valid = train_test_split(df_train.index, test_size=0.3, random_state=123)
```
For convenience, we will declare some variables:
```
X_train = df_features.loc[indices_learn, :]
y_train = df_train.loc[indices_learn, 'target']
treat_train = df_train.loc[indices_learn, 'treatment_flg']
X_val = df_features.loc[indices_valid, :]
y_val = df_train.loc[indices_valid, 'target']
treat_val = df_train.loc[indices_valid, 'treatment_flg']
X_train_full = df_features.loc[df_train.index, :]
y_train_full = df_train.loc[:, 'target']
treat_train_full = df_train.loc[:, 'treatment_flg']
X_test = df_features.loc[indices_test, :]
cat_features = ['gender']
models_results = {
'approach': [],
'uplift@30%': []
}
```
## 1. Single model approaches
### 1.1 Single model with treatment as feature
The most intuitive and simple uplift modeling technique. A training set consists of two groups: treatment samples and control samples. There is also a binary treatment flag added as a feature to the training set. After the model is trained, it is applied twice at scoring time:
once with the treatment flag equal to `1` and once with the treatment flag equal to `0`. Subtracting these model outcomes for each test sample, we get an estimate of the uplift.
<p align="center">
<img src="https://raw.githubusercontent.com/maks-sh/scikit-uplift/master/docs/_static/images/SoloModel.png" alt="Solo model with treatment as a feature"/>
</p>
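Before reaching for `sklift`'s ready-made class below, the trick can be sketched by hand with plain scikit-learn on synthetic data (the data, effect size, and model choice here are illustrative only):

```
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))                       # client features (noise)
w = rng.integers(0, 2, size=1000)                    # treatment flag
y = (rng.random(1000) < 0.3 + 0.2 * w).astype(int)   # treatment adds ~0.2

# One model, with the treatment flag as an extra feature
clf = LogisticRegression().fit(np.column_stack([X, w]), y)

# Score twice: everyone treated vs. everyone untreated
p_treat = clf.predict_proba(np.column_stack([X, np.ones(1000)]))[:, 1]
p_ctrl = clf.predict_proba(np.column_stack([X, np.zeros(1000)]))[:, 1]
uplift = p_treat - p_ctrl  # roughly recovers the true effect on average
```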
```
# installation instructions: https://github.com/maks-sh/scikit-uplift
# link to the documentation: https://scikit-uplift.readthedocs.io/en/latest/
from sklift.metrics import uplift_at_k
from sklift.viz import plot_uplift_preds
from sklift.models import SoloModel
# sklift supports all models,
# that satisfy scikit-learn convention
# for example, let's use catboost
from catboost import CatBoostClassifier
sm = SoloModel(CatBoostClassifier(iterations=20, thread_count=2, random_state=42, silent=True))
sm = sm.fit(X_train, y_train, treat_train, estimator_fit_params={'cat_features': cat_features})
uplift_sm = sm.predict(X_val)
sm_score = uplift_at_k(y_true=y_val, uplift=uplift_sm, treatment=treat_val, strategy='by_group', k=0.3)
models_results['approach'].append('SoloModel')
models_results['uplift@30%'].append(sm_score)
# get conditional probabilities (predictions) of performing the target action
# during interaction for each object
sm_trmnt_preds = sm.trmnt_preds_
# And conditional probabilities (predictions) of performing the target action
# without interaction for each object
sm_ctrl_preds = sm.ctrl_preds_
# draw the probability (predictions) distributions and their difference (uplift)
plot_uplift_preds(trmnt_preds=sm_trmnt_preds, ctrl_preds=sm_ctrl_preds);
# You can also access the trained model with the same ease.
# For example, to build the importance of features:
sm_fi = pd.DataFrame({
'feature_name': sm.estimator.feature_names_,
'feature_score': sm.estimator.feature_importances_
}).sort_values('feature_score', ascending=False).reset_index(drop=True)
sm_fi
```
### 1.2 Class Transformation
Simple yet powerful and mathematically proven uplift modeling method, presented in 2012.
The main idea is to predict a slightly changed target $Z_i$:
$$
Z_i = Y_i \cdot W_i + (1 - Y_i) \cdot (1 - W_i),
$$
where
* $Z_i$ - new target variable of the $i$-th client;
* $Y_i$ - target variable of the $i$-th client;
* $W_i$ - communication flag of the $i$-th client.
In other words, the new target equals 1 when the client was treated and responded, or was not treated and did not respond, and equals 0 otherwise:
$$
Z_i = \begin{cases}
1, & \mbox{if } W_i = 1 \mbox{ and } Y_i = 1 \\
1, & \mbox{if } W_i = 0 \mbox{ and } Y_i = 0 \\
0, & \mbox{otherwise}
\end{cases}
$$
Let's go deeper and estimate the conditional probability of the target variable:
$$
P(Z=1|X = x) = \\
= P(Z=1|X = x, W = 1) \cdot P(W = 1|X = x) + \\
+ P(Z=1|X = x, W = 0) \cdot P(W = 0|X = x) = \\
= P(Y=1|X = x, W = 1) \cdot P(W = 1|X = x) + \\
+ P(Y=0|X = x, W = 0) \cdot P(W = 0|X = x).
$$
We assume that $ W $ is independent of $X = x$ by design.
Thus we have: $P(W | X = x) = P(W)$ and
$$
P(Z=1|X = x) = \\
= P^T(Y=1|X = x) \cdot P(W = 1) + \\
+ P^C(Y=0|X = x) \cdot P(W = 0)
$$
Also, we assume that $P(W = 1) = P(W = 0) = \frac{1}{2}$, which means that during the experiment the control and the treatment groups were divided in equal proportions. Then we get the following:
$$
P(Z=1|X = x) = \\
= P^T(Y=1|X = x) \cdot \frac{1}{2} + P^C(Y=0|X = x) \cdot \frac{1}{2} \Rightarrow \\
2 \cdot P(Z=1|X = x) = \\
= P^T(Y=1|X = x) + P^C(Y=0|X = x) = \\
= P^T(Y=1|X = x) + 1 - P^C(Y=1|X = x) \Rightarrow \\
\Rightarrow P^T(Y=1|X = x) - P^C(Y=1|X = x) = \\
= uplift = 2 \cdot P(Z=1|X = x) - 1
$$
Thus, by doubling the estimate of the new target $Z$ and subtracting one we will get an estimation of the uplift:
$$
uplift = 2 \cdot P(Z=1) - 1
$$
This approach is based on the assumption $P(W = 1) = P(W = 0) = \frac{1}{2}$, which is why it should only be used in cases where the number of treated customers (communication) is equal to the number of control customers (no communication).
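The transformation is easy to write down by hand; a minimal sketch with synthetic, exactly balanced groups (all numbers and the model choice are illustrative):

```
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))
w = np.tile([0, 1], 1000)                            # P(W=1) = P(W=0) = 1/2
y = (rng.random(2000) < 0.3 + 0.2 * w).astype(int)

z = y * w + (1 - y) * (1 - w)                        # transformed target Z
clf = LogisticRegression().fit(X, z)
uplift = 2 * clf.predict_proba(X)[:, 1] - 1          # uplift = 2*P(Z=1|x) - 1
```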
```
from sklift.models import ClassTransformation
ct = ClassTransformation(CatBoostClassifier(iterations=20, thread_count=2, random_state=42, silent=True))
ct = ct.fit(X_train, y_train, treat_train, estimator_fit_params={'cat_features': cat_features})
uplift_ct = ct.predict(X_val)
ct_score = uplift_at_k(y_true=y_val, uplift=uplift_ct, treatment=treat_val, strategy='by_group', k=0.3)
models_results['approach'].append('ClassTransformation')
models_results['uplift@30%'].append(ct_score)
```
## 2. Approaches with two models
The two-model approach can be found in almost any uplift modeling work and is often used as a baseline. However, using two models can lead to some unpleasant consequences: if you use fundamentally different models for training, or if the nature of the test and control group data is very different, then the scores returned by the models will not be comparable. As a result, the calculation of the uplift will not be completely correct. To avoid this effect, you need to calibrate the models so that their scores can be interpreted as probabilities. The calibration of model probabilities is described perfectly in [scikit-learn documentation](https://scikit-learn.org/stable/modules/calibration.html).
### 2.1 Two independent models
The main idea is to estimate the conditional probabilities of the treatment and control groups separately.
1. Train the first model using the treatment set.
2. Train the second model using the control set.
3. Inference: subtract the control model scores from the treatment model scores.
<p align= "center">
<img src="https://raw.githubusercontent.com/maks-sh/scikit-uplift/master/docs/_static/images/TwoModels_vanila.png" alt="Two Models vanila"/>
</p>
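The three steps above can be sketched with two plain scikit-learn models on synthetic data (a hedged illustration, not the `sklift` implementation used below):

```
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
w = rng.integers(0, 2, size=2000)
y = (rng.random(2000) < 0.3 + 0.2 * w).astype(int)

model_t = LogisticRegression().fit(X[w == 1], y[w == 1])  # 1. treatment model
model_c = LogisticRegression().fit(X[w == 0], y[w == 0])  # 2. control model
# 3. uplift = treatment score - control score
uplift = model_t.predict_proba(X)[:, 1] - model_c.predict_proba(X)[:, 1]
```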
```
from sklift.models import TwoModels
tm = TwoModels(
estimator_trmnt=CatBoostClassifier(iterations=20, thread_count=2, random_state=42, silent=True),
estimator_ctrl=CatBoostClassifier(iterations=20, thread_count=2, random_state=42, silent=True),
method='vanilla'
)
tm = tm.fit(
X_train, y_train, treat_train,
estimator_trmnt_fit_params={'cat_features': cat_features},
estimator_ctrl_fit_params={'cat_features': cat_features}
)
uplift_tm = tm.predict(X_val)
tm_score = uplift_at_k(y_true=y_val, uplift=uplift_tm, treatment=treat_val, strategy='by_group', k=0.3)
models_results['approach'].append('TwoModels')
models_results['uplift@30%'].append(tm_score)
plot_uplift_preds(trmnt_preds=tm.trmnt_preds_, ctrl_preds=tm.ctrl_preds_);
```
### 2.2 Two dependent models
The dependent data representation approach is based on the classifier chain method, originally developed for multi-label classification problems. The idea is that if there are $L$ different classifiers, each solving a binary classification problem, then during learning each subsequent classifier uses the predictions of the previous ones as additional features.
The authors of this method proposed to use the same idea to solve the problem of uplift modeling in two stages.
At the beginning we train the classifier based on the control data:
$$
P^C = P(Y=1| X, W = 0),
$$
Next, we estimate the $P^C$ predictions and use them as a feature for the second classifier.
It effectively reflects a dependency between the treatment and control datasets:
$$
P^T = P(Y=1| X, P^C(X), W = 1)
$$
To get the uplift for each observation, we calculate the difference:
$$
uplift(x_i) = P^T(x_i, P^C(x_i)) - P^C(x_i)
$$
Intuitively, the second classifier learns the difference between the expected probability in the treatment and the control sets which is the uplift.
<p align= "center">
<img src="https://raw.githubusercontent.com/maks-sh/scikit-uplift/master/docs/_static/images/TwoModels_ddr_control.png" alt="Two dependent models"/>
</p>
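A hand-rolled sketch of the same two-stage idea (`ddr_control`): first fit the control model, then feed its score to the treatment model (synthetic data, illustrative only):

```
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 3))
w = rng.integers(0, 2, size=2000)
y = (rng.random(2000) < 0.3 + 0.2 * w).astype(int)

# Stage 1: control model P^C, scored for every client
model_c = LogisticRegression().fit(X[w == 0], y[w == 0])
p_c = model_c.predict_proba(X)[:, 1]

# Stage 2: treatment model sees the control score as an extra feature
model_t = LogisticRegression().fit(
    np.column_stack([X[w == 1], p_c[w == 1]]), y[w == 1])
p_t = model_t.predict_proba(np.column_stack([X, p_c]))[:, 1]

uplift = p_t - p_c
```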
```
tm_ctrl = TwoModels(
estimator_trmnt=CatBoostClassifier(iterations=20, thread_count=2, random_state=42, silent=True),
estimator_ctrl=CatBoostClassifier(iterations=20, thread_count=2, random_state=42, silent=True),
method='ddr_control'
)
tm_ctrl = tm_ctrl.fit(
X_train, y_train, treat_train,
estimator_trmnt_fit_params={'cat_features': cat_features},
estimator_ctrl_fit_params={'cat_features': cat_features}
)
uplift_tm_ctrl = tm_ctrl.predict(X_val)
tm_ctrl_score = uplift_at_k(y_true=y_val, uplift=uplift_tm_ctrl, treatment=treat_val, strategy='by_group', k=0.3)
models_results['approach'].append('TwoModels_ddr_control')
models_results['uplift@30%'].append(tm_ctrl_score)
plot_uplift_preds(trmnt_preds=tm_ctrl.trmnt_preds_, ctrl_preds=tm_ctrl.ctrl_preds_);
```
Similarly, you can first train the $P^T$ classifier, and then use its predictions as a feature for the $P^C$ classifier.
```
tm_trmnt = TwoModels(
estimator_trmnt=CatBoostClassifier(iterations=20, thread_count=2, random_state=42, silent=True),
estimator_ctrl=CatBoostClassifier(iterations=20, thread_count=2, random_state=42, silent=True),
method='ddr_treatment'
)
tm_trmnt = tm_trmnt.fit(
X_train, y_train, treat_train,
estimator_trmnt_fit_params={'cat_features': cat_features},
estimator_ctrl_fit_params={'cat_features': cat_features}
)
uplift_tm_trmnt = tm_trmnt.predict(X_val)
tm_trmnt_score = uplift_at_k(y_true=y_val, uplift=uplift_tm_trmnt, treatment=treat_val, strategy='by_group', k=0.3)
models_results['approach'].append('TwoModels_ddr_treatment')
models_results['uplift@30%'].append(tm_trmnt_score)
plot_uplift_preds(trmnt_preds=tm_trmnt.trmnt_preds_, ctrl_preds=tm_trmnt.ctrl_preds_);
```
## Conclusion
Let's consider which method performed best on this task and use it to score the test sample:
```
pd.DataFrame(data=models_results).sort_values('uplift@30%', ascending=False)
```
From the table above you can see that the class transformation approach suits this task best. Let's train the model on the entire sample and predict on the test set.
```
ct_full = ClassTransformation(CatBoostClassifier(iterations=20, thread_count=2, random_state=42, silent=True))
ct_full = ct_full.fit(
X_train_full,
y_train_full,
treat_train_full,
estimator_fit_params={'cat_features': cat_features}
)
X_test.loc[:, 'uplift'] = ct_full.predict(X_test.values)
X_test[['uplift']].to_csv('sub1.csv')
!head -n 5 sub1.csv
ct_full = pd.DataFrame({
'feature_name': ct_full.estimator.feature_names_,
'feature_score': ct_full.estimator.feature_importances_
}).sort_values('feature_score', ascending=False).reset_index(drop=True)
ct_full
```
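For reference, the class transformation trick behind `ClassTransformation` builds a single binary target $Z = YW + (1-Y)(1-W)$, so that $Z=1$ for treated responders and control non-responders, and recovers uplift as $2\,P(Z=1|X) - 1$ (this assumes a roughly 50/50 treatment split). A minimal plain-Python sketch, with made-up toy values:

```python
# y: observed outcome (0/1), w: treatment flag (0/1) -- toy values
y = [1, 0, 1, 0]
w = [1, 1, 0, 0]

# Transformed target: Z = 1 for treated responders and control non-responders
z = [yi * wi + (1 - yi) * (1 - wi) for yi, wi in zip(y, w)]
print(z)  # [1, 0, 0, 1]

# A single classifier is then fit on (X, z); its predicted P(Z=1|X)
# maps to uplift via uplift = 2*P(Z=1|X) - 1
p_z = [0.9, 0.4, 0.3, 0.6]  # hypothetical predicted probabilities
uplift = [round(2 * p - 1, 2) for p in p_z]
print(uplift)  # [0.8, -0.2, -0.4, 0.2]
```

Only one model needs to be trained and stored, which is part of why this approach is attractive when it works.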
We have now become acquainted with uplift modeling and covered the main basic approaches to building it. What's next? You can dive into exploratory data analysis, engineer new features, tune the models and their hyperparameters, and learn new approaches and libraries.
**Thank you for reading to the end.**
**I will be pleased if you support the project with a star on [github](https://github.com/maks-sh/scikit-uplift/) or tell your friends about it.**
| github_jupyter |
```
s = 'abc'
s.upper()
# L E G B
# local
# enclosing
# global
# builtins
globals()
globals()['s']
s.upper()
dir(s)
s.title()
x = 'this is a bunch of words to show to people'
x.title()
for attrname in dir(s):
    print attrname, s.attrname
for attrname in dir(s):
    print attrname, getattr(s, attrname)
s.upper
getattr(s, 'upper')
while True:
    attrname = raw_input("Enter attribute name: ").strip()
    if not attrname: # if I got an empty string
        break
    elif attrname in dir(s):
        print getattr(s, attrname)
    else:
        print "I don't know what {} is".format(attrname)
s.upper
s.upper()
5()
s.upper.__call__
hasattr(s, 'upper')
import sys
sys.version
sys.version = '4.0.0'
sys.version
def foo():
    return 5
foo.x = 100
def hello(name):
    return "Hello, {}".format(name)
hello('world')
hello(123)
hello(hello)
class Foo(object):
    def __init__(self, x):
        self.x = x
    def __add__(self, other):
        return Foo(self.x + other.x)
f = Foo(10)
f.x
class Foo(object):
    pass
f = Foo()
f.x = 100
f.y = {'a':1, 'b':2, 'c':3}
vars(f)
g = Foo()
g.a = [1,2,3]
g.b = 'hello'
vars(g)
class Foo(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
f = Foo(10, [1,2,3])
vars(f)
class Person(object):
    population = 0
    def __init__(self, name):
        self.name = name
        Person.population = self.population + 1
    def hello(self):
        return "Hello, {}".format(self.name)
print "population = {}".format(Person.population)
p1 = Person('name1')
p2 = Person('name2')
print "population = {}".format(Person.population)
print "p1.population = {}".format(p1.population)
print "p2.population = {}".format(p2.population)
print p1.hello()
p1.thing
Person.thing = 'hello'
p1.thing
class Person(object):
    def __init__(self, name):
        self.name = name
    def hello(self):
        return "Hello, {}".format(self.name)
class Employee(Person):
    def __init__(self, name, id_number):
        Person.__init__(self, name)
        self.id_number = id_number
e = Employee('emp1', 1)
e.hello()
e.hello()
Person.hello(e)
```
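As a side note, `getattr` also takes an optional default, which often replaces the `hasattr`/`dir` checks used above (Python 3 syntax here):

```python
s = 'abc'

# getattr with a default avoids a separate hasattr check
method = getattr(s, 'upper', None)
if method is not None:
    print(method())  # ABC

# a missing attribute falls back to the default instead of raising
print(getattr(s, 'no_such_attr', 'missing'))  # missing
```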
```
s = 'abc'
s.upper()
str.upper(s)
type(s)
id(s)
type(Person.hello)
id(Person.hello)
id(Person.hello)
id(Person.hello)
Person.__dict__
Person.__dict__['hello'](e)
# descriptor protocol
class Thermostat(object):
    def __init__(self):
        self.temp = 20
t = Thermostat()
t.temp = 100
t.temp = 0
class Thermostat(object):
    def __init__(self):
        self._temp = 20 # now it's private!
    @property
    def temp(self):
        print "getting temp"
        return self._temp
    @temp.setter
    def temp(self, new_temp):
        print "setting temp"
        if new_temp > 35:
            print "Too high!"
            new_temp = 35
        elif new_temp < 0:
            print "Too low!"
            new_temp = 0
        self._temp = new_temp
t = Thermostat()
t.temp = 100
print t.temp
t.temp = -40
print t.temp
# Temp will be a descriptor!
class Temp(object):
    def __get__(self, obj, objtype):
        return self.temp
    def __set__(self, obj, newval):
        if newval > 35:
            newval = 35
        if newval < 0:
            newval = 0
        self.temp = newval
class Thermostat(object):
    temp = Temp() # temp is a class attribute, instance of Temp
t1 = Thermostat()
t2 = Thermostat()
t1.temp = 100
t2.temp = 20
print t1.temp
print t2.temp
# Temp will be a descriptor!
class Temp(object):
    def __init__(self):
        self.temp = {}
    def __get__(self, obj, objtype):
        return self.temp[obj]
    def __set__(self, obj, newval):
        if newval > 35:
            newval = 35
        if newval < 0:
            newval = 0
        self.temp[obj] = newval
class Thermostat(object):
    temp = Temp() # temp is a class attribute, instance of Temp
t1 = Thermostat()
t2 = Thermostat()
t1.temp = 100
t2.temp = 20
print t1.temp
print t2.temp
```
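One caveat with the dict-based descriptor above: keying on `obj` keeps a strong reference to every `Thermostat` ever created, so none of them can be garbage-collected. A common refinement, sketched here in Python 3, is `weakref.WeakKeyDictionary`, whose entries disappear when the owning instance goes away:

```python
import weakref

class Temp(object):
    def __init__(self):
        # weak keys: an entry vanishes when its Thermostat is collected
        self.temp = weakref.WeakKeyDictionary()
    def __get__(self, obj, objtype):
        return self.temp[obj]
    def __set__(self, obj, newval):
        # clamp to the [0, 35] range, same policy as above
        self.temp[obj] = max(0, min(35, newval))

class Thermostat(object):
    temp = Temp()  # temp is a class attribute, instance of Temp

t1, t2 = Thermostat(), Thermostat()
t1.temp = 100
t2.temp = 20
print(t1.temp, t2.temp)  # 35 20
```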
##### Copyright 2019 The TensorFlow Authors.
**IMPORTANT NOTE:** This notebook is designed to run as a Colab. Click the button on top that says, `Open in Colab`, to run this notebook as a Colab. Running the notebook on your local machine might result in some of the code blocks throwing errors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# rps training set
!gdown --id 1DYVMuV2I_fA6A3er-mgTavrzKuxwpvKV
# rps testing set
!gdown --id 1RaodrRK1K03J_dGiLu8raeUynwmIbUaM
import os
import zipfile
local_zip = './rps.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('tmp/rps-train')
zip_ref.close()
local_zip = './rps-test-set.zip'
zip_ref = zipfile.ZipFile(local_zip, 'r')
zip_ref.extractall('tmp/rps-test')
zip_ref.close()
base_dir = 'tmp/rps-train/rps'
rock_dir = os.path.join(base_dir, 'rock')
paper_dir = os.path.join(base_dir, 'paper')
scissors_dir = os.path.join(base_dir, 'scissors')
print('total training rock images:', len(os.listdir(rock_dir)))
print('total training paper images:', len(os.listdir(paper_dir)))
print('total training scissors images:', len(os.listdir(scissors_dir)))
rock_files = os.listdir(rock_dir)
print(rock_files[:10])
paper_files = os.listdir(paper_dir)
print(paper_files[:10])
scissors_files = os.listdir(scissors_dir)
print(scissors_files[:10])
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
pic_index = 2
next_rock = [os.path.join(rock_dir, fname)
for fname in rock_files[pic_index-2:pic_index]]
next_paper = [os.path.join(paper_dir, fname)
for fname in paper_files[pic_index-2:pic_index]]
next_scissors = [os.path.join(scissors_dir, fname)
for fname in scissors_files[pic_index-2:pic_index]]
for i, img_path in enumerate(next_rock+next_paper+next_scissors):
    #print(img_path)
    img = mpimg.imread(img_path)
    plt.imshow(img)
    plt.axis('Off')
    plt.show()
import tensorflow as tf
import keras_preprocessing
from keras_preprocessing import image
from keras_preprocessing.image import ImageDataGenerator
TRAINING_DIR = "tmp/rps-train/rps"
training_datagen = ImageDataGenerator(
rescale = 1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
VALIDATION_DIR = "tmp/rps-test/rps-test-set"
validation_datagen = ImageDataGenerator(rescale = 1./255)
train_generator = training_datagen.flow_from_directory(
TRAINING_DIR,
target_size=(150,150),
class_mode='categorical',
batch_size=126
)
validation_generator = validation_datagen.flow_from_directory(
VALIDATION_DIR,
target_size=(150,150),
class_mode='categorical',
batch_size=126
)
model = tf.keras.models.Sequential([
    # Note the input shape is the desired size of the image 150x150 with 3 color channels (RGB)
# This is the first convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.5),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(3, activation='softmax')
])
model.summary()
model.compile(loss = 'categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
history = model.fit(train_generator, epochs=25, steps_per_epoch=20, validation_data = validation_generator, verbose = 1, validation_steps=3)
model.save("rps.h5")
import matplotlib.pyplot as plt
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'r', label='Training accuracy')
plt.plot(epochs, val_acc, 'b', label='Validation accuracy')
plt.title('Training and validation accuracy')
plt.legend(loc=0)
plt.figure()
plt.show()
```
# Here's a codeblock just for fun!
You should be able to upload an image here and have it classified without crashing. This codeblock will only work in Google Colab, however.
**Important Note:** Due to some compatibility issues, the following code block will result in an error after you select the image(s) to upload if you are running this notebook as a `Colab` on the `Safari` browser. For `all other browsers`, continue with the next code block and ignore the one after it.
If you are running the `Colab` on `Safari`, comment out the code block below, uncomment the next code block, and run it.
```
import numpy as np
from google.colab import files
from keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
# predicting images
path = fn
img = image.load_img(path, target_size=(150, 150))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
images = np.vstack([x])
classes = model.predict(images, batch_size=10)
print(fn)
print(classes)
```
Those running this `Colab` on the `Safari` browser can upload the image(s) manually. Follow the instructions, uncomment the code block below, and run it.
Instructions on how to upload image(s) manually in a Colab:
1. Select the `folder` icon on the left `menu bar`.
2. Click on the `folder with an arrow pointing upwards` named `..`
3. Click on the `folder` named `tmp`.
4. Inside of the `tmp` folder, `create a new folder` called `images`. You'll see the `New folder` option by clicking the `3 vertical dots` menu button next to the `tmp` folder.
5. Inside the new `images` folder, upload image(s) of your choice, preferably showing a rock, paper, or scissors gesture. Drag and drop the image(s) onto the `images` folder.
6. Uncomment and run the code block below.
```
```
```
# ###############################################
# ########## Default Parameters #################
# ###############################################
start = '2016-06-16 22:00:00'
end = '2016-06-18 00:00:00'
pv_nominal_kw = 5000 # There are 3 PV locations hardcoded at node 7, 8, 9
inverter_sizing = 1.05
inverter_qmax_percentage = 0.44
thrP = 0.04
hysP = 0.06
thrQ = 0.03
hysQ = 0.03
first_order_time_const = 1 * 60
solver_relative_tolerance = 0.1
solver_absolute_tolerance = 0.1
solver_name = 'CVode'
result_filename = 'result'
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
warnings.simplefilter(action='ignore', category=FutureWarning)
import pandas
import numpy
import datetime
from tabulate import tabulate
import json
import re
# Imports useful for graphics
import matplotlib
import matplotlib.pyplot as plt
import seaborn
seaborn.set_style("whitegrid")
seaborn.despine()
%matplotlib inline
font = {'size' : 14}
matplotlib.rc('font', **font)
# Date conversion
begin = '2016-01-01 00:00:00'
begin_dt = datetime.datetime.strptime(begin, '%Y-%m-%d %H:%M:%S')
start_dt = datetime.datetime.strptime(start, '%Y-%m-%d %H:%M:%S')
end_dt = datetime.datetime.strptime(end, '%Y-%m-%d %H:%M:%S')
start_s = int((start_dt - begin_dt).total_seconds())
end_s = int((end_dt - begin_dt).total_seconds())
inverter_smax = pv_nominal_kw * inverter_sizing
inverter_qmax = inverter_smax * inverter_qmax_percentage
pv_inverter_parameters = {
'weather_file':(path_to_fmiforpowersystems +
'examples\\002_cosimulation_custom_master\\pv_inverter\\' +
'USA_CA_San.Francisco.Intl.AP.724940_TMY3.mos'),
'n': 1,
'A': (pv_nominal_kw * 1000) / (0.158 * 1000),
'eta': 0.158,
'lat': 37.9,
'til': 10,
'azi': 0,
'thrP': thrP, #0.05,
'hysP': hysP, #0.04,
'thrQ': thrQ, #0.04,
'hysQ': hysQ, #0.01,
'SMax': inverter_smax,
'QMaxInd': inverter_qmax,
'QMaxCap': inverter_qmax,
'Tfirstorder': first_order_time_const,
}
first_order_parameters = {
'T': first_order_time_const
}
run_simulation = True
connections_filename = 'connections.xlsx'
# earlier FMU build, superseded by the assignment below:
# pv_inverter_path = 'pv_inverter/Pv_Inv_VoltVarWatt_simple_Slim.fmu'
pv_inverter_path = 'pv_inverter/PV_0Inv_0VoltVarWat_0simple_0Slim_Pv_0Inv_0VoltVarWatt_0simple_0Slim.fmu'
pandapower_path = 'pandapower/pandapower.fmu'
pandapower_folder = 'pandapower'
pandapower_parameter = {}
firstorder_path = 'firstorder/FirstOrder.fmu'
```
## Create the connection mapping
```
connections = pandas.DataFrame(columns=['fmu1_id', 'fmu1_path',
'fmu2_id', 'fmu2_path',
'fmu1_parameters',
'fmu2_parameters',
'fmu1_output',
'fmu2_input'])
# Connection for each customer
nodes = [7, 9, 24]
for index in nodes:
    connections = connections.append(
{'fmu1_id': 'PV' + str(index),
'fmu1_path': pv_inverter_path,
'fmu2_id': 'pandapower',
'fmu2_path': pandapower_path,
'fmu1_parameters': pv_inverter_parameters,
'fmu2_parameters': pandapower_parameter,
'fmu1_output': 'P',
'fmu2_input': 'KW_' + str(index)},
ignore_index=True)
    connections = connections.append(
{'fmu1_id': 'PV' + str(index),
'fmu1_path': pv_inverter_path,
'fmu2_id': 'pandapower',
'fmu2_path': pandapower_path,
'fmu1_parameters': pv_inverter_parameters,
'fmu2_parameters': pandapower_parameter,
'fmu1_output': 'Q',
'fmu2_input': 'KVAR_' + str(index)},
ignore_index=True)
    connections = connections.append(
{'fmu1_id': 'pandapower',
'fmu1_path': pandapower_path,
'fmu2_id': 'firstorder' + str(index),
'fmu2_path': firstorder_path,
'fmu1_parameters': pandapower_parameter,
'fmu2_parameters': first_order_parameters,
'fmu1_output': 'Vpu_' + str(index),
'fmu2_input': 'u'},
ignore_index=True)
    connections = connections.append(
{'fmu1_id': 'firstorder' + str(index),
'fmu1_path': firstorder_path,
'fmu2_id': 'PV' + str(index),
'fmu2_path': pv_inverter_path,
'fmu1_parameters': first_order_parameters,
'fmu2_parameters': pv_inverter_parameters,
'fmu1_output': 'y',
'fmu2_input': 'v'},
ignore_index=True)
def _sanitize_name(name):
    """
    Make a Modelica valid name.
    In Modelica, a variable name:
    Can contain any of the characters {a-z,A-Z,0-9,_}.
    Cannot start with a number.
    :param name(str): Variable name to be sanitized.
    :return: Sanitized variable name.
    """
    # Check if variable has a length > 0
    assert(len(name) > 0), 'Require a non-null variable name.'
    # If variable starts with a number add 'f_'.
    if(name[0].isdigit()):
        name = 'f_' + name
    # Replace all illegal characters with an underscore.
    g_rexBadIdChars = re.compile(r'[^a-zA-Z0-9_]')
    name = g_rexBadIdChars.sub('_', name)
    return name
connections['fmu1_output'] = connections['fmu1_output'].apply(lambda x: _sanitize_name(x))
connections['fmu2_input'] = connections['fmu2_input'].apply(lambda x: _sanitize_name(x))
print(tabulate(connections[
['fmu1_id', 'fmu2_id', 'fmu1_output', 'fmu2_input']].head(),
headers='keys', tablefmt='psql'))
print(tabulate(connections[
['fmu1_id', 'fmu2_id', 'fmu1_output', 'fmu2_input']].tail(),
headers='keys', tablefmt='psql'))
connections.to_excel(connections_filename, index=False)
```
# Launch FMU simulation
```
if run_simulation:
    import shlex, subprocess
    cmd = ("C:/JModelica.org-2.4/setenv.bat && " +
           " cd " + pandapower_folder + " && "
           "cyderc " +
           " --path ./"
           " --name pandapower" +
           " --io pandapower.xlsx" +
           " --path_to_simulatortofmu C:/Users/DRRC/Desktop/Joscha/SimulatorToFMU/simulatortofmu/parser/SimulatorToFMU.py"
           " --fmu_struc python")
    args = shlex.split(cmd)
    process = subprocess.Popen(args, bufsize=1, universal_newlines=True)
    process.wait()
    process.kill()
if run_simulation:
    import os
    import signal
    import shlex, subprocess
    cmd = ("C:/JModelica.org-2.4/setenv.bat && " +
           "cyders " +
           " --start " + str(start_s) +
           " --end " + str(end_s) +
           " --connections " + connections_filename +
           " --nb_steps 25" +
           " --solver " + solver_name +
           " --rtol " + str(solver_relative_tolerance) +
           " --atol " + str(solver_absolute_tolerance) +
           " --result " + 'results/' + result_filename + '.csv')
    args = shlex.split(cmd)
    process = subprocess.Popen(args, bufsize=1, universal_newlines=True,
                               creationflags=subprocess.CREATE_NEW_PROCESS_GROUP)
    process.wait()
    process.send_signal(signal.CTRL_BREAK_EVENT)
    process.kill()
    print('Killed')
```
# Plot results
```
# Load results
results = pandas.read_csv('results/' + result_filename + '.csv')
# Machine-specific override, kept for reference:
# from pathlib import Path, PureWindowsPath
# results = pandas.read_csv(r'C:\Users\DRRC\Desktop\Jonathan\voltvarwatt_with_cyme_fmus\usecases\004_pp_first_order\results\result.csv')
epoch = datetime.datetime.utcfromtimestamp(0)
begin_since_epoch = (begin_dt - epoch).total_seconds()
results['datetime'] = results['time'].apply(
lambda x: datetime.datetime.utcfromtimestamp(begin_since_epoch + x))
results.set_index('datetime', inplace=True, drop=False)
print('COLUMNS=')
print(results.columns)
print('START=')
print(results.head(1).index[0])
print('END=')
print(results.tail(1).index[0])
# Plot sum of all PVs for P and P curtailled and Q
cut = '2016-06-17 01:00:00'
fig, axes = plt.subplots(1, 1, figsize=(11, 3))
plt.title('PV generation')
for node in nodes:
    plt.plot(results['datetime'],
             results['pandapower.KW_' + str(node)] / 1000,
             linewidth=3, alpha=0.7, label='node ' + str(node))
plt.legend(loc=0)
plt.ylabel('PV active power [MW]')
plt.xlabel('Time')
plt.xlim([cut, end])
plt.show()
fig, axes = plt.subplots(1, 1, figsize=(11, 3))
plt.title('Inverter reactive power')
for node in nodes:
    plt.plot(results['datetime'],
             results['pandapower.KVAR_' + str(node)] / 1000,
             linewidth=3, alpha=0.7, label='node ' + str(node))
plt.legend(loc=0)
plt.ylabel('PV reactive power [MVAR]')
plt.xlabel('Time')
plt.xlim([cut, end])
plt.show()
fig, axes = plt.subplots(1, 1, figsize=(11, 3))
plt.title('PV voltage')
for node in nodes:
    plt.plot(results['datetime'],
             results['pandapower.Vpu_' + str(node)],
             linewidth=3, alpha=0.7, label='node ' + str(node))
plt.legend(loc=0)
plt.ylabel('PV voltage [p.u.]')
plt.xlabel('Time')
plt.xlim([cut, end])
plt.ylim([0.95, results[['pandapower.Vpu_' + str(node)
for node in nodes]].max().max()])
plt.show()
# Plot time/voltage
fig, axes = plt.subplots(1, 1, figsize=(11, 3))
plt.title('Feeder Voltages')
plt.plot(results['datetime'],
results[[col for col in results.columns
if 'Vpu' in col and 'transfer' not in col]],
linewidth=3, alpha=0.7)
plt.ylabel('Voltage [p.u.]')
plt.xlabel('Time')
plt.ylim([0.95, results[[col for col in results.columns
if 'Vpu' in col and 'transfer' not in col]].max().max()])
plt.show()
# Load results
debug = pandas.read_csv('debug.csv', parse_dates=[1])
epoch = datetime.datetime.utcfromtimestamp(0)
begin_since_epoch = (begin_dt - epoch).total_seconds()
debug['datetime'] = debug['sim_time'].apply(
lambda x: datetime.datetime.utcfromtimestamp(begin_since_epoch + x))
debug.set_index('datetime', inplace=True, drop=False)
print('COLUMNS=')
print(debug.columns)
print('START=')
print(debug.head(1).index[0])
print('END=')
print(debug.tail(1).index[0])
# Plot time/voltage
import matplotlib.dates as mdates
print('Number of evaluation=' + str(len(debug)))
fig, axes = plt.subplots(1, 1, figsize=(11, 8))
plt.plot(debug['clock'],
debug['datetime'],
linewidth=3, alpha=0.7)
plt.ylabel('Simulation time')
plt.xlabel('Computer clock')
plt.gcf().autofmt_xdate()
myFmt = mdates.DateFormatter('%H:%M')
plt.gca().xaxis.set_major_formatter(myFmt)
plt.show()
fig, axes = plt.subplots(1, 1, figsize=(11, 3))
plt.plot(debug['clock'],
debug['KW_7'],
linewidth=3, alpha=0.7)
plt.plot(debug['clock'],
debug['KW_9'],
linewidth=3, alpha=0.7)
plt.plot(debug['clock'],
debug['KW_24'],
linewidth=3, alpha=0.7)
plt.ylabel('KW')
plt.xlabel('Computer clock')
plt.legend([17, 31, 24], loc=0)
plt.show()
fig, axes = plt.subplots(1, 1, figsize=(11, 3))
plt.plot(debug['clock'],
debug['Vpu_7'],
linewidth=3, alpha=0.7)
plt.plot(debug['clock'],
debug['Vpu_9'],
linewidth=3, alpha=0.7)
plt.plot(debug['clock'],
debug['Vpu_24'],
linewidth=3, alpha=0.7)
plt.ylabel('Vpu')
plt.xlabel('Computer clock')
plt.show()
fig, axes = plt.subplots(1, 1, figsize=(11, 3))
plt.plot(debug['clock'],
debug['Vpu_7'].diff(),
linewidth=3, alpha=0.7)
plt.plot(debug['clock'],
debug['Vpu_9'].diff(),
linewidth=3, alpha=0.7)
plt.plot(debug['clock'],
debug['Vpu_24'].diff(),
linewidth=3, alpha=0.7)
plt.ylabel('Vpu Diff')
plt.xlabel('Computer clock')
plt.show()
fig, axes = plt.subplots(1, 1, figsize=(11, 3))
plt.plot(debug['clock'],
debug['KVAR_7'],
linewidth=3, alpha=0.7)
plt.plot(debug['clock'],
debug['KVAR_9'],
linewidth=3, alpha=0.7)
plt.plot(debug['clock'],
debug['KVAR_24'],
linewidth=3, alpha=0.7)
plt.ylabel('KVAR')
plt.xlabel('Computer clock')
plt.show()
```
# Basic Examples with Different Protocols
## Prerequisites
* A Kubernetes cluster with kubectl configured
* curl
* grpcurl
* pygmentize
## Setup Seldon Core
Use the [Setup Cluster](seldon_core_setup.ipynb) notebook to set up Seldon Core with an ingress, either Ambassador or Istio.
Then port-forward to that ingress on localhost:8003 in a separate terminal either with:
* Ambassador: `kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080`
* Istio: `kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80`
## Install Seldon Analytics
```
!helm install seldon-core-analytics ../../../helm-charts/seldon-core-analytics -n seldon-system --wait
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
import json
import time
```
## Custom Metrics with a REST model
```
!pygmentize model_rest.yaml
!kubectl create -f model_rest.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=seldon-model \
-o jsonpath='{.items[0].metadata.name}')
responseRaw=!curl -s -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' -X POST http://localhost:8003/seldon/seldon/seldon-model/api/v1.0/predictions -H "Content-Type: application/json"
response=json.loads(responseRaw[0])
print(response)
assert(len(response["meta"]["metrics"]) == 3)
print("Waiting so metrics can be scraped")
time.sleep(3)
%%writefile get-metrics.sh
kubectl run --quiet=true -it --rm curl --image=tutum/curl --restart=Never -- \
curl -s seldon-core-analytics-prometheus-seldon.seldon-system/api/v1/query?query=mycounter_total
responseRaw =! bash get-metrics.sh
responseRaw[0]
response=json.loads(responseRaw[0])
print(response)
assert(response['data']["result"][0]["metric"]["__name__"]=='mycounter_total')
!kubectl delete -f model_rest.yaml
```
## Custom Metrics with a GRPC model
```
!pygmentize model_grpc.yaml
!kubectl create -f model_grpc.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=seldon-model \
-o jsonpath='{.items[0].metadata.name}')
responseRaw=!cd ../../../executor/proto && grpcurl -d '{"data":{"ndarray":[[1.0,2.0,5.0]]}}' \
-rpc-header seldon:seldon-model -rpc-header namespace:seldon \
-plaintext \
-proto ./prediction.proto 0.0.0.0:8003 seldon.protos.Seldon/Predict
response=json.loads("".join(responseRaw))
print(response)
assert(len(response["meta"]["metrics"]) == 3)
print("Waiting so metrics can be scraped")
time.sleep(3)
responseRaw =! bash get-metrics.sh
response=json.loads(responseRaw[0])
print(response)
assert(response['data']["result"][0]["metric"]["__name__"]=='mycounter_total')
assert(response['data']["result"][0]["metric"]["image_name"]=='seldonio/model-with-metrics_grpc')
!kubectl delete -f model_grpc.yaml
!helm delete seldon-core-analytics --namespace seldon-system
```
# Welcome to Python reference
this notebook contains pretty much everything; if you want to refresh your knowledge or whatever else,
it will help you ;D
<ul>
<li>Data types<ul>
<li>Numbers</li>
<li>Strings</li>
<li>Lists</li>
<li>Dictionaries</li>
<li>Booleans</li>
<li>Tuples</li>
<li>Sets</li>
<li>Printing</li>
</ul></li>
<li>Comparison operators</li>
<li>if, elif, else Statements</li>
<li>for loops</li>
<li>while loops</li>
<li>range()</li>
<li>functions</li>
<li>lambda expressions</li>
<li>map and filter</li>
<li>inputs and castings</li>
</ul>
## Data types :
### 1 - Numbers :
```
# operations on numbers
print("Addition 1 + 1 : " + str(1 + 1) + " Type of the output : " + str(type(1 + 1))) #Addition
print("Division 1 / 2 : " + str(1 / 2) + " Type of the output : " + str(type(1 / 2))) #Division
print("Multiplication 2 * 3 : " + str(2 * 3) + " Type of the output : " + str(type(2 * 3))) #Multiplication
print("Modulus 2 % 3 : " + str(2 % 3) + " Type of the output : " + str(type(2 % 3))) #Modulus
print("Exponential 2** 3 : " + str(2** 3) + " Type of the output :" + str(type(2** 3))) #Exponential
```
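One operator missing from the list above is floor division `//`, which drops the fractional part (Python 3 semantics shown):

```python
print(7 / 2)         # 3.5  true division always returns a float
print(7 // 2)        # 3    floor division rounds toward negative infinity
print(-7 // 2)       # -4
print(divmod(7, 2))  # (3, 1) quotient and remainder together
```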
### 2 - Strings :
```
'this is a string'
"and this is also a string"
hello = 'hello'
#format the print
age = 17
name = 'titiche'
print('My name is {} and I\'m {} years old'.format(name, age))
#strings are also arrays with indexes , remember indexes start with 0
print("full string : " + name)#the full string
print("index : " + name[3])#character in the index 3
print("slicing : " + name[2:])#you can slice a string starting from an index to the end
print("slicing : " + name[:-2])#another slice, from the beginning up to (not including) the last two characters
```
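If you are on Python 3.6 or newer, f-strings are a shorter alternative to `.format`, and strings come with many more handy methods:

```python
name = 'titiche'
age = 17

# f-strings (Python 3.6+) interpolate expressions directly
print(f"My name is {name} and I'm {age} years old")

# a few more useful string methods
print(name.capitalize())           # Titiche
print(name.replace('t', 'T'))      # TiTiche
print(','.join(['a', 'b', 'c']))   # a,b,c
```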
### 3 - Lists :
```
#lists are sequences of elements separated by commas
a = [1, 2, 3] # this is a list
a
a = ['A', 'B', 'C', 3] # this is also a list
a
# to Add an element to the end of list
a.append(name)
a
#you can slice a list or access it with an index
print(a[2])#index
print(a[3:])#slicing
#you can have a list in a list
b = ['a', 2, a]
b
#you can duplicate the list many times
b = [1, 2]*4
b
#you can concatenate two lists
a + b
```
### 4 - Dictionaries :
```
#a dictionary is a collection of key-value pairs
d = {} #empty dictionary
d[name] = age #the key is 'titiche' and the value is age
d
#you can access it with key
d[name]
```
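A few more dictionary tools worth knowing:

```python
d = {'titiche': 17, 'alice': 30}

# .get returns a default instead of raising KeyError
print(d.get('bob', 'not found'))  # not found

# iterate over key/value pairs
for key, value in d.items():
    print(key, value)

# membership tests look at the keys
print('alice' in d)  # True
```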
### 5 - Booleans :
```
# booleans ... this is the hardest part of the notebook :P
#there are only two boolean values
True #either true
False #or false
```
### 6 - Tuples :
```
#tuples are just like lists, except that you can't change the elements inside a tuple
tup = (1, 2, 3)#nothing to be afraid of
tup
```
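Tuples really shine with unpacking, and trying to mutate one shows why they differ from lists:

```python
tup = (1, 2, 3)

# unpack a tuple into separate names
a, b, c = tup
print(a, b, c)  # 1 2 3

# swap two variables without a temporary
a, b = b, a
print(a, b)     # 2 1

# trying to mutate a tuple raises TypeError
try:
    tup[0] = 99
except TypeError:
    print('tuples are immutable!')
```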
### 7 - Sets
```
#sets are collections of unique elements
a = {1, 2, 3, 3, 2, 2, 3, 3, 4, 5, 4}
a
#you can convert a list to a set
set([1, 2, 3, 4, 5, 5, 5, 5, 5, 5, 5])
#you can add an element to a set
a.add(100)
a
#if it already exists, the set ignores it
a.add(100)
a
```
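Sets also support the classic set-algebra operators:

```python
a = {1, 2, 3}
b = {3, 4, 5}

print(a | b)   # union: {1, 2, 3, 4, 5}
print(a & b)   # intersection: {3}
print(a - b)   # difference: {1, 2}
print(2 in a)  # fast membership test: True
```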
## Comparison operators :
```
1 > 2 #greater than
3 <= 5 #lower or equal than
3 == 5 - 2 #equals ### double equal sign to compare
'hello' != name ## not equals
(1 < 2) and (3 > 2) #combine two statements with and
(1 < 2) or (3 <= 2) #combine two statements with or
```
## if, elif, else statements :
```
if 1 < 2:
    print('something')
#you can make multiple cases
if 1 > 2:
    print('hello')
elif 2 > 3:
    print('i don\'t know what to write')
elif name == 'titiche':
    print('hello master {} '.format(name))
else:
    print('Default case')
```
## for loops :
```
seq = [1, 2, 3, 4]
#do some stuff many times
for item in seq:
    print(item, end='->')
```
## while loops :
```
i = 1
while i <= 20:
    print(i, end='->')
    i = i + 1 #aha, we don't want an infinite loop, do we?
```
## range :
```
#range produces its values lazily (similar to a generator)
for i in range(20):
    print(i, end='->')
range(10)
list(range(10)) # this is called casting: we basically convert the range to a list
```
## List Comprehension :
```
x = []
for num in range(5):
    x.append(num)
x
# a better way to do it
x = [num for num in range(5)]
x
```
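Comprehensions can also filter with an `if` clause:

```python
# keep only even numbers
evens = [num for num in range(10) if num % 2 == 0]
print(evens)  # [0, 2, 4, 6, 8]

# transform and filter at the same time
squares_of_odds = [num**2 for num in range(10) if num % 2 == 1]
print(squares_of_odds)  # [1, 9, 25, 49, 81]
```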
## Functions :
```
#function definition
def my_function(param):
    """
    This is a docstring;
    basically you write how to use this function here
    """
    print(param)
my_function('titiche')
```
## map :
```
def square(x):
    return x**2
x = [1, 2, 3, 4]
# use map to apply the function to every element in list
list(map(square, x))
```
## lambda expression :
```
square = lambda x: x**2 + 1
#define a function quickly
```
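A lambda is most useful as a throwaway key function, for example with `sorted`:

```python
words = ['banana', 'fig', 'apple']

# sort by length instead of alphabetically
print(sorted(words, key=lambda w: len(w)))  # ['fig', 'apple', 'banana']

# sort tuples by their second element
pairs = [(1, 'b'), (2, 'a')]
print(sorted(pairs, key=lambda p: p[1]))    # [(2, 'a'), (1, 'b')]
```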
## filter :
```
x = [1, 2, 3, 4, 5, 6, 7]
list(filter(lambda x: x%2 == 0, x))
```
```
import pandas as pd
import matplotlib.pyplot as plt
import re
import time
import warnings
import sqlite3
from sqlalchemy import create_engine # database connection
import csv
import os
warnings.filterwarnings("ignore")
import datetime as dt
import numpy as np
from nltk.corpus import stopwords
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import normalize
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.manifold import TSNE
import seaborn as sns
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.metrics.classification import accuracy_score, log_loss
from sklearn.feature_extraction.text import TfidfVectorizer
from collections import Counter
from scipy.sparse import hstack
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.cross_validation import StratifiedKFold
from collections import Counter, defaultdict
from sklearn.calibration import CalibratedClassifierCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV
import math
from sklearn.metrics import normalized_mutual_info_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import SGDClassifier
from mlxtend.classifier import StackingClassifier
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_curve, auc, roc_curve
```
<h1>4. Machine Learning Models </h1>
<h2> 4.1 Reading data from file and storing into sql table </h2>
```
#Creating db file from csv
if not os.path.isfile('train.db'):
disk_engine = create_engine('sqlite:///train.db')
start = dt.datetime.now()
chunksize = 180000
j = 0
index_start = 1
base_cols = ['Unnamed: 0', 'id', 'is_duplicate', 'cwc_min', 'cwc_max', 'csc_min', 'csc_max',
             'ctc_min', 'ctc_max', 'last_word_eq', 'first_word_eq', 'abs_len_diff', 'mean_len',
             'token_set_ratio', 'token_sort_ratio', 'fuzz_ratio', 'fuzz_partial_ratio',
             'longest_substr_ratio', 'freq_qid1', 'freq_qid2', 'q1len', 'q2len', 'q1_n_words',
             'q2_n_words', 'word_Common', 'word_Total', 'word_share', 'freq_q1+q2', 'freq_q1-q2']
# 384-dimensional question-1 and question-2 vectors: '0_x'..'383_x' and '0_y'..'383_y'
w2v_cols = ['{}_x'.format(i) for i in range(384)] + ['{}_y'.format(i) for i in range(384)]
for df in pd.read_csv('final_features.csv', names=base_cols + w2v_cols,
                      chunksize=chunksize, iterator=True, encoding='utf-8'):
df.index += index_start
j+=1
print('{} rows'.format(j*chunksize))
df.to_sql('data', disk_engine, if_exists='append')
index_start = df.index[-1] + 1
#http://www.sqlitetutorial.net/sqlite-python/create-tables/
def create_connection(db_file):
""" create a database connection to the SQLite database
specified by db_file
:param db_file: database file
:return: Connection object or None
"""
try:
conn = sqlite3.connect(db_file)
return conn
except Error as e:
print(e)
return None
def checkTableExists(dbcon):
    cursr = dbcon.cursor()
    query = "select name from sqlite_master where type='table'"
    table_names = cursr.execute(query)
    print("Tables in the database:")
    tables = table_names.fetchall()
    print(tables[0][0])
    return(len(tables))
read_db = 'train.db'
conn_r = create_connection(read_db)
checkTableExists(conn_r)
conn_r.close()
# try to sample data according to the computing power you have
if os.path.isfile(read_db):
conn_r = create_connection(read_db)
if conn_r is not None:
# for selecting first 1M rows
# data = pd.read_sql_query("""SELECT * FROM data LIMIT 100001;""", conn_r)
# for selecting random points
data = pd.read_sql_query("SELECT * From data ORDER BY RANDOM() LIMIT 100001;", conn_r)
conn_r.commit()
conn_r.close()
# remove the first row
data.drop(data.index[0], inplace=True)
y_true = data['is_duplicate']
data.drop(['Unnamed: 0', 'id','index','is_duplicate'], axis=1, inplace=True)
data.head()
```
<h2> 4.2 Converting strings to numerics </h2>
```
# after we read from the sql table, each entry was read back as a string
# we convert all the features to numeric before we apply any model
cols = list(data.columns)
for i in cols:
data[i] = data[i].apply(pd.to_numeric)
print(i)
# https://stackoverflow.com/questions/7368789/convert-all-strings-in-a-list-to-int
y_true = list(map(int, y_true.values))
```
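The loop above converts one column at a time; pandas can also apply `pd.to_numeric` across a whole frame in a single call. A minimal sketch on a toy frame (the column names here are illustrative, not from the dataset):

```python
import pandas as pd

# toy frame whose values came back from SQL as strings
df = pd.DataFrame({'a': ['1', '2'], 'b': ['3.5', '4.5']})

# convert every column in one call instead of looping
df = df.apply(pd.to_numeric)
print(df.dtypes)
```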
<h2> 4.3 Random train test split (70:30) </h2>
```
X_train,X_test, y_train, y_test = train_test_split(data, y_true, stratify=y_true, test_size=0.3)
print("Number of data points in train data :",X_train.shape)
print("Number of data points in test data :",X_test.shape)
print("-"*10, "Distribution of output variable in train data", "-"*10)
train_distr = Counter(y_train)
train_len = len(y_train)
print("Class 0: ",int(train_distr[0])/train_len,"Class 1: ", int(train_distr[1])/train_len)
print("-"*10, "Distribution of output variable in test data", "-"*10)
test_distr = Counter(y_test)
test_len = len(y_test)
print("Class 0: ",int(test_distr[0])/test_len, "Class 1: ",int(test_distr[1])/test_len)
# This function plots the confusion matrices given y_i, y_i_hat.
def plot_confusion_matrix(test_y, predict_y):
C = confusion_matrix(test_y, predict_y)
    # C is a 2x2 matrix here: cell (i,j) is the number of points of class i predicted as class j
    A = (((C.T)/(C.sum(axis=1))).T)
    # divide each element of the confusion matrix by the sum of elements in that row
    # C = [[1, 2],
    #      [3, 4]]
    # C.T = [[1, 3],
    #        [2, 4]]
    # C.sum(axis=1) sums over rows (axis=0 corresponds to columns, axis=1 to rows in a 2-D array)
    # C.sum(axis=1) = [3, 7]
    # ((C.T)/(C.sum(axis=1))) = [[1/3, 3/7],
    #                            [2/3, 4/7]]
    # ((C.T)/(C.sum(axis=1))).T = [[1/3, 2/3],
    #                              [3/7, 4/7]]
    # sum of row elements = 1
    B = (C/C.sum(axis=0))
    # divide each element of the confusion matrix by the sum of elements in that column
    # C.sum(axis=0) = [4, 6]
    # (C/C.sum(axis=0)) = [[1/4, 2/6],
    #                      [3/4, 4/6]]
    # sum of column elements = 1
plt.figure(figsize=(20,4))
    labels = [0,1]
    # representing C (raw counts) in heatmap format
cmap=sns.light_palette("blue")
plt.subplot(1, 3, 1)
sns.heatmap(C, annot=True, cmap=cmap, fmt=".3f", xticklabels=labels, yticklabels=labels)
plt.xlabel('Predicted Class')
plt.ylabel('Original Class')
plt.title("Confusion matrix")
plt.subplot(1, 3, 2)
sns.heatmap(B, annot=True, cmap=cmap, fmt=".3f", xticklabels=labels, yticklabels=labels)
plt.xlabel('Predicted Class')
plt.ylabel('Original Class')
plt.title("Precision matrix")
plt.subplot(1, 3, 3)
    # representing A in heatmap format
sns.heatmap(A, annot=True, cmap=cmap, fmt=".3f", xticklabels=labels, yticklabels=labels)
plt.xlabel('Predicted Class')
plt.ylabel('Original Class')
plt.title("Recall matrix")
plt.show()
```
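The row and column normalizations in `plot_confusion_matrix` can be sanity-checked directly on the toy matrix from the comments: every row of the recall matrix and every column of the precision matrix should sum to 1.

```python
import numpy as np

C = np.array([[1, 2],
              [3, 4]])
recall = ((C.T) / C.sum(axis=1)).T   # divide each row by its row sum
precision = C / C.sum(axis=0)        # divide each column by its column sum

print(recall.sum(axis=1))     # [1. 1.]
print(precision.sum(axis=0))  # [1. 1.]
```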
<h2> 4.4 Building a random model (Finding worst-case log-loss) </h2>
```
# we need to generate 2 numbers per row whose sum is 1
# one solution is to generate 2 random numbers and divide each by their sum
# ref: https://stackoverflow.com/a/18662466/4084039
# we create an output array that has exactly the same size as the test data
predicted_y = np.zeros((test_len,2))
for i in range(test_len):
rand_probs = np.random.rand(1,2)
predicted_y[i] = ((rand_probs/sum(sum(rand_probs)))[0])
print("Log loss on Test Data using Random Model",log_loss(y_test, predicted_y, eps=1e-15))
predicted_y =np.argmax(predicted_y, axis=1)
plot_confusion_matrix(y_test, predicted_y)
```
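For context on the worst case: a model that always predicts probability 0.5 for both classes attains a log loss of exactly ln(2) ≈ 0.693 regardless of the labels, so the random model above should land in the same neighborhood. A quick check, assuming scikit-learn:

```python
import numpy as np
from sklearn.metrics import log_loss

y = np.random.randint(0, 2, size=1000)
constant_half = np.full((1000, 2), 0.5)  # always predict 50/50
print(log_loss(y, constant_half))  # ln(2) ≈ 0.6931
```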
<h2> 4.4 Logistic Regression with hyperparameter tuning </h2>
```
alpha = [10 ** x for x in range(-5, 2)] # hyperparam for SGD classifier.
# read more about SGDClassifier() at http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
# ------------------------------
# default parameters
# SGDClassifier(loss='hinge', penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=None, tol=None,
#               shuffle=True, verbose=0, epsilon=0.1, n_jobs=1, random_state=None, learning_rate='optimal', eta0=0.0, power_t=0.5,
#               class_weight=None, warm_start=False, average=False, n_iter=None)
# some of methods
# fit(X, y[, coef_init, intercept_init, …]) Fit linear model with Stochastic Gradient Descent.
# predict(X) Predict class labels for samples in X.
#-------------------------------
log_error_array=[]
for i in alpha:
clf = SGDClassifier(alpha=i, penalty='l2', loss='log', random_state=42)
clf.fit(X_train, y_train)
sig_clf = CalibratedClassifierCV(clf, method="sigmoid")
sig_clf.fit(X_train, y_train)
predict_y = sig_clf.predict_proba(X_test)
log_error_array.append(log_loss(y_test, predict_y, labels=clf.classes_, eps=1e-15))
print('For values of alpha = ', i, "The log loss is:",log_loss(y_test, predict_y, labels=clf.classes_, eps=1e-15))
fig, ax = plt.subplots()
ax.plot(alpha, log_error_array,c='g')
for i, txt in enumerate(np.round(log_error_array,3)):
ax.annotate((alpha[i],np.round(txt,3)), (alpha[i],log_error_array[i]))
plt.grid()
plt.title("Cross Validation Error for each alpha")
plt.xlabel("Alpha i's")
plt.ylabel("Error measure")
plt.show()
best_alpha = np.argmin(log_error_array)
clf = SGDClassifier(alpha=alpha[best_alpha], penalty='l2', loss='log', random_state=42)
clf.fit(X_train, y_train)
sig_clf = CalibratedClassifierCV(clf, method="sigmoid")
sig_clf.fit(X_train, y_train)
predict_y = sig_clf.predict_proba(X_train)
print('For values of best alpha = ', alpha[best_alpha], "The train log loss is:",log_loss(y_train, predict_y, labels=clf.classes_, eps=1e-15))
predict_y = sig_clf.predict_proba(X_test)
print('For values of best alpha = ', alpha[best_alpha], "The test log loss is:",log_loss(y_test, predict_y, labels=clf.classes_, eps=1e-15))
predicted_y =np.argmax(predict_y,axis=1)
print("Total number of data points :", len(predicted_y))
plot_confusion_matrix(y_test, predicted_y)
```
<h2> 4.5 Linear SVM with hyperparameter tuning </h2>
```
alpha = [10 ** x for x in range(-5, 2)] # hyperparam for SGD classifier.
# read more about SGDClassifier() at http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDClassifier.html
# ------------------------------
# default parameters
# SGDClassifier(loss='hinge', penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=None, tol=None,
#               shuffle=True, verbose=0, epsilon=0.1, n_jobs=1, random_state=None, learning_rate='optimal', eta0=0.0, power_t=0.5,
#               class_weight=None, warm_start=False, average=False, n_iter=None)
# some of methods
# fit(X, y[, coef_init, intercept_init, …]) Fit linear model with Stochastic Gradient Descent.
# predict(X) Predict class labels for samples in X.
#-------------------------------
log_error_array=[]
for i in alpha:
clf = SGDClassifier(alpha=i, penalty='l1', loss='hinge', random_state=42)
clf.fit(X_train, y_train)
sig_clf = CalibratedClassifierCV(clf, method="sigmoid")
sig_clf.fit(X_train, y_train)
predict_y = sig_clf.predict_proba(X_test)
log_error_array.append(log_loss(y_test, predict_y, labels=clf.classes_, eps=1e-15))
print('For values of alpha = ', i, "The log loss is:",log_loss(y_test, predict_y, labels=clf.classes_, eps=1e-15))
fig, ax = plt.subplots()
ax.plot(alpha, log_error_array,c='g')
for i, txt in enumerate(np.round(log_error_array,3)):
ax.annotate((alpha[i],np.round(txt,3)), (alpha[i],log_error_array[i]))
plt.grid()
plt.title("Cross Validation Error for each alpha")
plt.xlabel("Alpha i's")
plt.ylabel("Error measure")
plt.show()
best_alpha = np.argmin(log_error_array)
clf = SGDClassifier(alpha=alpha[best_alpha], penalty='l1', loss='hinge', random_state=42)
clf.fit(X_train, y_train)
sig_clf = CalibratedClassifierCV(clf, method="sigmoid")
sig_clf.fit(X_train, y_train)
predict_y = sig_clf.predict_proba(X_train)
print('For values of best alpha = ', alpha[best_alpha], "The train log loss is:",log_loss(y_train, predict_y, labels=clf.classes_, eps=1e-15))
predict_y = sig_clf.predict_proba(X_test)
print('For values of best alpha = ', alpha[best_alpha], "The test log loss is:",log_loss(y_test, predict_y, labels=clf.classes_, eps=1e-15))
predicted_y =np.argmax(predict_y,axis=1)
print("Total number of data points :", len(predicted_y))
plot_confusion_matrix(y_test, predicted_y)
```
<h2> 4.6 XGBoost </h2>
```
import xgboost as xgb
params = {}
params['objective'] = 'binary:logistic'
params['eval_metric'] = 'logloss'
params['eta'] = 0.02
params['max_depth'] = 4
d_train = xgb.DMatrix(X_train, label=y_train)
d_test = xgb.DMatrix(X_test, label=y_test)
watchlist = [(d_train, 'train'), (d_test, 'valid')]
bst = xgb.train(params, d_train, 400, watchlist, early_stopping_rounds=20, verbose_eval=10)
xgdmat = xgb.DMatrix(X_train,y_train)
predict_y = bst.predict(d_test)
print("The test log loss is:",log_loss(y_test, predict_y, eps=1e-15))
predicted_y =np.array(predict_y>0.5,dtype=int)
print("Total number of data points :", len(predicted_y))
plot_confusion_matrix(y_test, predicted_y)
```
<h1> 5. Assignments </h1>
1. Try out models (Logistic regression, Linear-SVM) with simple TF-IDF vectors instead of TF-IDF weighted Word2Vec.
2. Perform hyperparameter tuning of XGBoost models using RandomizedSearchCV with TF-IDF W2V vectorization to reduce the log-loss.
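A minimal sketch of the assignment-1 pipeline on a hypothetical toy corpus (the corpus, parameter grid, and `modified_huber` loss are my illustrative assumptions, not taken from the notebook; `modified_huber` is chosen because it supports `predict_proba` across scikit-learn versions). The same `RandomizedSearchCV` pattern extends to `xgboost.XGBClassifier` for assignment 2.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import RandomizedSearchCV

# hypothetical toy question corpus with binary labels
questions = ["how do i learn python", "what is machine learning",
             "how to learn python fast", "what is deep learning"] * 10
labels = np.array([0, 1, 0, 1] * 10)

X = TfidfVectorizer().fit_transform(questions)  # simple TF-IDF features

# randomized search over the regularization strength, scored by log loss
search = RandomizedSearchCV(
    SGDClassifier(loss='modified_huber', penalty='l2', random_state=42),
    param_distributions={'alpha': [10 ** k for k in range(-5, 2)]},
    n_iter=5, scoring='neg_log_loss', cv=3, random_state=42)
search.fit(X, labels)
print(search.best_params_)
```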
# 01. Tabular Q Learning
Let's practice Tabular Q-Learning.
- Store the value function of every state in a table, and learn by updating each table entry with the Q-Learning rule.
## Package installation for Colab
```
!pip install gym
import tensorflow as tf
import numpy as np
import random
import gym
# from gym.wrappers import Monitor
np.random.seed(777)
tf.set_random_seed(777)
print("tensorflow version: ", tf.__version__)
print("gym version: ", gym.__version__)
```
## Frozen Lake
**[state]**
SFFF
FHFH
FFFH
HFFG
S : starting point, safe
F : frozen surface, safe
H : hole, fall to your doom
G : goal, where the frisbee is located
**[action]**
LEFT = 0
DOWN = 1
RIGHT = 2
UP = 3
```
from IPython.display import clear_output
# Load Environment
env = gym.make("FrozenLake-v0")
# init envrionmnet
env.reset()
# only 'Right' action agent
for _ in range(5):
env.render()
next_state, reward, done, _ = env.step(2)
```
### Frozen Lake (not Slippery)
```
def register_frozen_lake_not_slippery(name):
from gym.envs.registration import register
register(
id=name,
entry_point='gym.envs.toy_text:FrozenLakeEnv',
kwargs={'map_name' : '4x4', 'is_slippery': False},
max_episode_steps=100,
reward_threshold=0.78, # optimum = .8196
)
register_frozen_lake_not_slippery('FrozenLakeNotSlippery-v0')
env = gym.make("FrozenLakeNotSlippery-v0")
env.reset()
env.render()
'''
Use env.step() to move the agent to the Goal yourself.
LEFT = 0
DOWN = 1
RIGHT = 2
UP = 3
'''
env.step(0); env.render()
# env.step(); env.render()
```
## Q-Learning
**Pseudo code**
<img src="./img/qlearning_pseudo.png" width="80%" align="left">
### Epsilon greedy
```
# epsilon greedy policy
def epsilon_greedy_action(epsilon, n_action, state, q_table):
    # Implement this yourself.
    # if epsilon is greater than a random value:
    #     take a random action
    # else:
    #     choose the action with the largest Q value
return action
# epsilon greedy test
epsilon = 0
q_table = np.array([[1,0,0,0],
[0,0,0,1],
[0,1,0,0]])
for state in range(3):
action = epsilon_greedy_action(epsilon, 4, state, q_table)
print("state: {} action: {}".format(state, action))
```
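One possible solution for the exercise above (my own assumption, not the notebook's reference answer): draw a uniform random number and compare it against epsilon.

```python
import numpy as np

def epsilon_greedy_action(epsilon, n_action, state, q_table):
    # explore with probability epsilon, otherwise exploit the best known action
    if np.random.rand() < epsilon:
        return np.random.randint(n_action)
    return int(np.argmax(q_table[state]))

q_table = np.array([[1, 0, 0, 0],
                    [0, 0, 0, 1],
                    [0, 1, 0, 0]])
for state in range(3):
    print(epsilon_greedy_action(0, 4, state, q_table))  # greedy actions: 0, 3, 1
```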
### Q-value update
```
def q_update(q_table, state, next_state, action, reward, alpha, gamma):
    # Implement this yourself.
    # see the pseudo code above for the update formula:
    # q_table[s, a] = q_table[s, a] + alpha * (TD error)
return q_table
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
q_table = np.array([[0,0,0,0],
[0,1,0,0]], dtype=np.float)
print("start\n", q_table)
reward = 1.0
alpha = 0.1
gamma = 0.9
for i in range(10):
print("update {}".format(i))
q_table = q_update(q_table, 0, 1, 2, reward, alpha, gamma)
print(q_table)
```
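A sketch of one way to fill in `q_update` (my assumption of the intended answer, following the standard Q-Learning update rule):

```python
import numpy as np

def q_update(q_table, state, next_state, action, reward, alpha, gamma):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = reward + gamma * np.max(q_table[next_state])
    q_table[state, action] += alpha * (td_target - q_table[state, action])
    return q_table

q_table = np.array([[0, 0, 0, 0],
                    [0, 1, 0, 0]], dtype=float)
q_table = q_update(q_table, 0, 1, 2, 1.0, 0.1, 0.9)
print(q_table[0, 2])  # 0.1 * (1.0 + 0.9 * 1.0) = 0.19
```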
### Agent class
## Things to think about for reaching the Goal
1. A reward only appears (and updates only happen) once the agent reaches the Goal at least once $\rightarrow$ how do we get it to the Goal?
2. Falling into a hole ends the episode, but the reward is no different $\rightarrow$ give a negative reward when an episode ends in a hole.
3. Even when training goes well, the agent still takes random actions with probability epsilon $\rightarrow$ decrease epsilon as training progresses.
```
class Tabular_Q_agent:
def __init__(self, q_table, n_action, epsilon, alpha, gamma):
self.q_table = q_table
self.epsilon = epsilon
self.alpha = alpha
self.gamma = gamma
self.n_action = n_action
def get_action(self, state):
        # Implement this yourself (e-greedy policy).
return action
def q_update(self, state, next_state, action, reward):
        # Implement this yourself.
        # see the pseudo code above for the update formula
        return self.q_table
```
### Training agent
```
env = gym.make("FrozenLakeNotSlippery-v0")
EPISODE = 500
epsilon = 0.9
alpha = 0.8 # learning rate
gamma = 0.9 # discount factor
n_action = env.action_space.n
rlist = []
slist = []
is_render = False
# initialize Q-Table
q_table = np.random.rand(env.observation_space.n, env.action_space.n)
print("Q table size: ", q_table.shape)
# create the agent
agent = Tabular_Q_agent(q_table, n_action, epsilon, alpha, gamma)
# repeat for EPISODE episodes
for e in range(EPISODE):
state = env.reset()
print("[Episode {}]".format(e))
if is_render:
env.render()
total_reward = 0
goal = 0
done = False
limit = 0
    # repeat until the episode ends, or for at most 100 steps
    while not done and limit < 100:
        # 1. select an action with the e-greedy policy
        # 2. use env.step() to take the action and receive next_state, reward, and done
        # 2.1. give a negative (-) reward when the agent falls into a hole
if reward == 1.0:
print("GOAL")
goal = 1
elif done:
reward = reward - 1
# 3. Q update
        # update the Q value of the current state in the Q table
slist.append(state)
state = next_state
total_reward += reward
limit += 1
print(slist)
slist = []
print("total reward: ", total_reward)
rlist.append(goal)
print("Success rate: " + str(100 * sum(rlist) / EPISODE) + "%")
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})
print(agent.q_table)
```
### Test agent
```
state = env.reset()
done = False
limit = 0
agent.epsilon = 0.0
while not done and limit < 30:
action = agent.get_action(state)
next_state, reward, done, _ = env.step(action)
env.render()
state = next_state
limit += 1
```
## **Accessing Elements in ndarays:**
Elements can be accessed using indices inside square brackets, [ ]. NumPy allows you to use both positive and negative indices to access elements in the ndarray. Positive indices are used to access elements from the beginning of the array, while negative indices are used to access elements from the end of the array.
```
import numpy as np
# We create a rank 1 ndarray that contains integers from 1 to 5
x = np.array([1, 2, 3, 4, 5])
# We print x
print()
print('x = ', x)
print()
# Let's access some elements with positive indices
print('This is First Element in x:', x[0])
print('This is Second Element in x:', x[1])
print('This is Fifth (Last) Element in x:', x[4])
print()
# Let's access the same elements with negative indices
print('This is First Element in x:', x[-5])
print('This is Second Element in x:', x[-4])
print('This is Fifth (Last) Element in x:', x[-1])
```
## **Modifying ndarrays:**
Now let's see how we can change the elements in rank 1 ndarrays. We do this by accessing the element we want to change and then using the = sign to assign the new value:
```
# We create a rank 1 ndarray that contains integers from 1 to 5
x = np.array([1, 2, 3, 4, 5])
# We print the original x
print()
print('Original:\n x = ', x)
print()
# We change the fourth element in x from 4 to 20
x[3] = 20
# We print x after it was modified
print('Modified:\n x = ', x)
```
Similarly, we can also access and modify specific elements of rank 2 ndarrays. To access elements in rank 2 ndarrays we need to provide 2 indices in the form [row, column]. Let's see some examples:
```
# We create a 3 x 3 rank 2 ndarray that contains integers from 1 to 9
X = np.array([[1,2,3],[4,5,6],[7,8,9]])
# We print X
print()
print('X = \n', X)
print()
# Let's access some elements in X
print('This is (0,0) Element in X:', X[0,0])
print('This is (0,1) Element in X:', X[0,1])
print('This is (2,2) Element in X:', X[2,2])
```
Elements in rank 2 ndarrays can be modified in the same way as with rank 1 ndarrays. Let's see an example:
```
# We create a 3 x 3 rank 2 ndarray that contains integers from 1 to 9
X = np.array([[1,2,3],[4,5,6],[7,8,9]])
# We print the original x
print()
print('Original:\n X = \n', X)
print()
# We change the (0,0) element in X from 1 to 20
X[0,0] = 20
# We print X after it was modified
print('Modified:\n X = \n', X)
```
## **Adding and Deleting elements:**
Now, let's take a look at how we can add and delete elements from ndarrays. We can delete elements using the np.delete(ndarray, elements, axis) function. This function deletes the given list of elements from the given ndarray along the specified axis. For rank 1 ndarrays the axis keyword is not required. For rank 2 ndarrays, axis = 0 is used to select rows, and axis = 1 is used to select columns. Let's see some examples:
```
# We create a rank 1 ndarray
x = np.array([1, 2, 3, 4, 5])
# We create a rank 2 ndarray
Y = np.array([[1,2,3],[4,5,6],[7,8,9]])
# We print x
print()
print('Original x = ', x)
# We delete the first and last element of x
x = np.delete(x, [0,4])
# We print x with the first and last element deleted
print()
print('Modified x = ', x)
# We print Y
print()
print('Original Y = \n', Y)
# We delete the first row of y
w = np.delete(Y, 0, axis=0)
# We delete the first and last column of y
v = np.delete(Y, [0,2], axis=1)
# We print w
print()
print('w = \n', w)
# We print v
print()
print('v = \n', v)
```
We can append values to ndarrays using the np.append(ndarray, elements, axis) function. This function appends the given list of elements to ndarray along the specified axis. Let's see some examples:
```
# We create a rank 1 ndarray
x = np.array([1, 2, 3, 4, 5])
# We create a rank 2 ndarray
Y = np.array([[1,2,3],[4,5,6]])
# We print x
print()
print('Original x = ', x)
# We append the integer 6 to x
x = np.append(x, 6)
# We print x
print()
print('x = ', x)
# We append the integer 7 and 8 to x
x = np.append(x, [7,8])
# We print x
print()
print('x = ', x)
# We print Y
print()
print('Original Y = \n', Y)
# We append a new row containing 7,8,9 to y
v = np.append(Y, [[7,8,9]], axis=0)
# We append a new column containing 9 and 10 to y
q = np.append(Y,[[9],[10]], axis=1)
# We print v
print()
print('v = \n', v)
# We print q
print()
print('q = \n', q)
```
Now let's see now how we can insert values to ndarrays. We can insert values to ndarrays using the np.insert(ndarray, index, elements, axis) function. This function inserts the given list of elements to ndarray right before the given index along the specified axis. Let's see some examples:
```
# We create a rank 1 ndarray
x = np.array([1, 2, 5, 6, 7])
# We create a rank 2 ndarray
Y = np.array([[1,2,3],[7,8,9]])
# We print x
print()
print('Original x = ', x)
# We insert the integer 3 and 4 between 2 and 5 in x.
x = np.insert(x,2,[3,4])
# We print x with the inserted elements
print()
print('x = ', x)
# We print Y
print()
print('Original Y = \n', Y)
# We insert a row between the first and last row of y
w = np.insert(Y,1,[4,5,6],axis=0)
# We insert a column full of 5s between the first and second column of y
v = np.insert(Y,1,5, axis=1)
# We print w
print()
print('w = \n', w)
# We print v
print()
print('v = \n', v)
```
NumPy also allows us to stack ndarrays on top of each other, or to stack them side by side. The stacking is done using either the np.vstack() function for vertical stacking, or the np.hstack() function for horizontal stacking. It is important to note that in order to stack ndarrays, the shape of the ndarrays must match. Let's see some examples:
```
# We create a rank 1 ndarray
x = np.array([1,2])
# We create a rank 2 ndarray
Y = np.array([[3,4],[5,6]])
# We print x
print()
print('x = ', x)
# We print Y
print()
print('Y = \n', Y)
# We stack x on top of Y
z = np.vstack((x,Y))
# We stack x on the right of Y. We need to reshape x in order to stack it on the right of Y.
w = np.hstack((Y,x.reshape(2,1)))
# We print z
print()
print('z = \n', z)
# We print w
print()
print('w = \n', w)
```
## **Slicing ndarrays:**
As we mentioned earlier, in addition to being able to access individual elements one at a time, NumPy provides a way to access subsets of ndarrays. This is known as slicing. Slicing is performed by combining indices with the colon : symbol inside the square brackets. In general you will come across three types of slicing:
```
1. ndarray[start:end]
2. ndarray[start:]
3. ndarray[:end]
```
The first method selects the elements between the start and end indices. The second method selects all elements from the start index through the end of the array. The third method selects all elements from the beginning of the array up to, but not including, the end index. Note that in methods one and three, the end index is excluded. Also note that, since ndarrays can be multidimensional, when slicing you usually have to specify a slice for each dimension of the array.
We will now see some examples of how to use the above methods to select different subsets of a rank 2 ndarray.
```
# We create a 4 x 5 ndarray that contains integers from 0 to 19
X = np.arange(20).reshape(4, 5)
# We print X
print()
print('X = \n', X)
print()
# We select all the elements that are in the 2nd through 4th rows and in the 3rd to 5th columns
Z = X[1:4,2:5]
# We print Z
print('Z = \n', Z)
# We can select the same elements as above using method 2
W = X[1:,2:5]
# We print W
print()
print('W = \n', W)
# We select all the elements that are in the 1st through 3rd rows and in the 3rd to 4th columns
Y = X[:3,2:5]
# We print Y
print()
print('Y = \n', Y)
# We select all the elements in the 3rd row
v = X[2,:]
# We print v
print()
print('v = ', v)
# We select all the elements in the 3rd column
q = X[:,2]
# We print q
print()
print('q = ', q)
# We select all the elements in the 3rd column but return a rank 2 ndarray
R = X[:,2:3]
# We print R
print()
print('R = \n', R)
```
Notice that when we selected all the elements in the 3rd column, variable q above, the slice returned a rank 1 ndarray instead of a rank 2 ndarray. However, slicing X in a slightly different way, variable R above, we can actually get a rank 2 ndarray instead.
It is important to note that when we perform slices on ndarrays and save them into new variables, as we did above, the data is not copied into the new variable. This is one feature that often causes confusion for beginners. Therefore, we will look at this in a bit more detail.
In the above examples, when we make assignments, such as:
```
Z = X[1:4,2:5]
```
the slice of the original array X is not copied into the variable Z. Rather, Z is a view into the same underlying data as X. We say that slicing only creates a view of the original array. This means that if you make changes in Z you will in effect be changing the elements in X as well. Let's see this with an example:
```
# We create a 4 x 5 ndarray that contains integers from 0 to 19
X = np.arange(20).reshape(4, 5)
# We print X
print()
print('X = \n', X)
print()
# We select all the elements that are in the 2nd through 4th rows and in the 3rd to 4th columns
Z = X[1:4,2:5]
# We print Z
print()
print('Z = \n', Z)
print()
# We change the last element in Z to 555
Z[2,2] = 555
# We print X
print()
print('X = \n', X)
print()
```
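When an independent copy is actually wanted, the view behavior above can be avoided with the ndarray `copy()` method:

```python
import numpy as np

X = np.arange(20).reshape(4, 5)

# copy() returns a new ndarray with its own data, not a view into X
Z = X[1:4, 2:5].copy()
Z[2, 2] = 555

print(X[3, 4])  # 19 -- X is unchanged this time
print(Z[2, 2])  # 555
```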
```
(ql:quickload :delta-vega)
```
# Single-View Plots
## Bar Charts
### Simple Bar Chart
A bar chart encodes quantitative values as the extent of rectangular bars.
```
(jupyter:vega-lite
(delta-vega:make-top-view :description "A simple bar chart with embedded data."
:data (delta-vega:make-vector-data "a" #("A" "B" "C" "D" "E" "F" "G" "H" "I")
"b" #(28 55 43 91 81 53 19 87 52))
:mark (delta-vega:make-bar-mark)
:encoding (delta-vega:make-encoding :x (delta-vega:make-field-definition :field "a" :type :nominal)
:y (delta-vega:make-field-definition :field "b" :type :quantitative)))
:display t)
```
***
## Scatter & Strip Plots
### Scatterplot
A scatterplot showing horsepower and miles per gallons for various cars.
```
(jupyter:vega-lite
(delta-vega:make-top-view :description "A scatterplot showing horsepower and miles per gallons for various cars."
:data (j:make-object "url" "data/cars.json")
:mark (delta-vega:make-point-mark)
:encoding (delta-vega:make-encoding :x (delta-vega:make-field-definition :field "Horsepower" :type :quantitative)
:y (delta-vega:make-field-definition :field "Miles_per_Gallon" :type :quantitative)))
:display t)
```
***
### 1D Strip Plot
```
(jupyter:vega-lite
(delta-vega:make-top-view :data (j:make-object "url" "data/seattle-weather.csv")
:mark (delta-vega:make-tick-mark)
:encoding (delta-vega:make-encoding :x (delta-vega:make-field-definition :field "precipitation" :type :quantitative)))
:display t)
```
***
### Strip Plot
Shows the relationship between horsepower and the number of cylinders using tick marks.
```
(jupyter:vega-lite
(delta-vega:make-top-view :description "Shows the relationship between horsepower and the number of cylinders using tick marks."
:data (j:make-object "url" "data/cars.json")
:mark (delta-vega:make-tick-mark)
:encoding (delta-vega:make-encoding :x (delta-vega:make-field-definition :field "Horsepower" :type :quantitative)
:y (delta-vega:make-field-definition :field "Cylinders" :type :ordinal)))
:display t)
```
***
### Colored Scatterplot
A scatterplot showing body mass and flipper lengths of penguins.
```
(jupyter:vega-lite
(delta-vega:make-top-view :description "A scatterplot showing body mass and flipper lengths of penguins."
:data (j:make-object "url" "data/penguins.json")
:mark (delta-vega:make-point-mark)
:encoding (delta-vega:make-encoding :x (delta-vega:make-field-definition :field "Flipper Length (mm)" :type :quantitative :scale '(:object-plist "zero" nil))
:y (delta-vega:make-field-definition :field "Body Mass (g)" :type :quantitative :scale '(:object-plist "zero" nil))
:color (delta-vega:make-field-definition :field "Species" :type :nominal)
:shape (delta-vega:make-field-definition :field "Species" :type :nominal)))
:display t)
```
***
## Circular Plots
### Simple Pie Chart
A pie chart encodes proportional differences among a set of numeric values as the angular extent and area of a circular slice.
```
(jupyter:vega-lite
(delta-vega:make-top-view :description "A simple pie chart with embedded data."
:data (delta-vega:make-vector-data "category" #(1 2 3 4 5 6)
"value" #(4 6 10 3 7 8))
:mark (delta-vega:make-arc-mark)
:view '(:object-plist "stroke" :null)
:encoding (delta-vega:make-encoding :theta (delta-vega:make-field-definition :field "value" :type :quantitative)
:color (delta-vega:make-field-definition :field "category" :type :nominal)))
:display t)
```
***
### Simple Donut Chart
A donut chart encodes proportional differences among a set of numeric values using angular extents.
```
(jupyter:vega-lite
(delta-vega:make-top-view :description "A simple donut chart with embedded data."
:data (delta-vega:make-vector-data "category" #(1 2 3 4 5 6)
"value" #(4 6 10 3 7 8))
:mark (delta-vega:make-arc-mark :inner-radius 50)
:view '(:object-plist "stroke" :null)
:encoding (delta-vega:make-encoding :theta (delta-vega:make-field-definition :field "value" :type :quantitative)
:color (delta-vega:make-field-definition :field "category" :type :nominal)))
:display t)
```
***
### Pie Chart with Labels
Layering text over arc marks to label pie charts. For now, you need to add `:stack t` to the theta encoding to force the text layer to use the same polar stacking layout.
```
(jupyter:vega-lite
(delta-vega:make-top-view :description "A pie chart with labels and embedded data."
:data (delta-vega:make-vector-data "category" #("a" "b" "c" "d" "e" "f")
"value" #(4 6 10 3 7 8))
:view '(:object-plist "stroke" :null)
:layer (list (dv:make-view :mark (delta-vega:make-arc-mark :outer-radius 80))
(dv:make-view :mark (dv:make-text-mark :radius 90)
:encoding (dv:make-encoding :text (delta-vega:make-field-definition :field "category" :type :nominal))))
:encoding (delta-vega:make-encoding :theta (delta-vega:make-field-definition :field "value" :type :quantitative :stack t)
:color (delta-vega:make-field-definition :field "category" :type :nominal :legend nil)))
:display t)
```
# Performing Large Numbers of Calculations with Thermo in Parallel
A common request is to obtain a large number of properties from Thermo at once. Thermo is not NumPy - it cannot just automatically do all of the calculations in parallel.
If you need a specific property that does not require phase-equilibrium calculations, it is possible to
use the `chemicals.numba` interface in your own numba-accelerated code.
https://chemicals.readthedocs.io/chemicals.numba.html
For those cases where lots of flashes are needed, your best bet is to brute force it - use multiprocessing (and maybe a beefy machine) to obtain the results faster. The following code sample uses `joblib` to facilitate the calculation. Note that joblib won't show any benefits on sub-second calculations. Also note that the `threading` backend of joblib will not offer any performance improvements due to the CPython GIL.
```
import numpy as np
from thermo import *
from chemicals import *
constants, properties = ChemicalConstantsPackage.from_IDs(
['methane', 'ethane', 'propane', 'isobutane', 'n-butane', 'isopentane',
'n-pentane', 'hexane', 'heptane', 'octane', 'nonane', 'nitrogen'])
T, P = 200, 5e6
zs = [.8, .08, .032, .00963, .0035, .0034, .0003, .0007, .0004, .00005, .00002, .07]
eos_kwargs = dict(Tcs=constants.Tcs, Pcs=constants.Pcs, omegas=constants.omegas)
gas = CEOSGas(SRKMIX, eos_kwargs, HeatCapacityGases=properties.HeatCapacityGases, T=T, P=P, zs=zs)
liq = CEOSLiquid(SRKMIX, eos_kwargs, HeatCapacityGases=properties.HeatCapacityGases, T=T, P=P, zs=zs)
# Set up a two-phase flash engine, ignoring kijs
flasher = FlashVL(constants, properties, liquid=liq, gas=gas)
# Set a composition - it could be modified in the inner loop as well
# Do a test flash
flasher.flash(T=T, P=P, zs=zs).gas_beta
def get_properties(T, P):
# This is the function that will be called in parallel
# note that Python floats are faster than numpy floats
res = flasher.flash(T=float(T), P=float(P), zs=zs)
return [res.rho_mass(), res.Cp_mass(), res.gas_beta]
from joblib import Parallel, delayed
pts = 30
Ts = np.linspace(200, 400, pts)
Ps = np.linspace(1e5, 1e7, pts)
Ts_grid, Ps_grid = np.meshgrid(Ts, Ps)
# processed_data = Parallel(n_jobs=16)(delayed(get_properties)(T, P) for T, P in zip(Ts_grid.flat, Ps_grid.flat))
# Naive loop in Python
%timeit -r 1 -n 1 processed_data = [get_properties(T, P) for T, P in zip(Ts_grid.flat, Ps_grid.flat)]
# Use the threading feature of Joblib
# Because the calculation is CPU-bound, the threads do not improve speed and Joblib's overhead slows down the calculation
%timeit -r 1 -n 1 processed_data = Parallel(n_jobs=16, prefer="threads")(delayed(get_properties)(T, P) for T, P in zip(Ts_grid.flat, Ps_grid.flat))
# Use the multiprocessing feature of joblib
# We were able to improve the speed by 5x
%timeit -r 1 -n 1 processed_data = Parallel(n_jobs=16, batch_size=30)(delayed(get_properties)(T, P) for T, P in zip(Ts_grid.flat, Ps_grid.flat))
# For small multiprocessing jobs, the slowest job can cause a significant delay
# For longer and larger jobs the full benefit of using all cores is shown better.
%timeit -r 1 -n 1 processed_data = Parallel(n_jobs=8, batch_size=30)(delayed(get_properties)(T, P) for T, P in zip(Ts_grid.flat, Ps_grid.flat))
# Joblib returns the data as a flat structure, but we can re-construct it into a grid
processed_data = Parallel(n_jobs=16, batch_size=30)(delayed(get_properties)(T, P) for T, P in zip(Ts_grid.flat, Ps_grid.flat))
phase_fractions = np.array([[processed_data[j*pts+i][2] for j in range(pts)] for i in range(pts)])
# Make a plot to show the results
import matplotlib.pyplot as plt
from matplotlib import ticker, cm
from matplotlib.colors import LogNorm
fig, ax = plt.subplots()
color_map = cm.viridis
im = ax.pcolormesh(Ts_grid, Ps_grid, phase_fractions.T, cmap=color_map)
cbar = fig.colorbar(im, ax=ax)
cbar.set_label('Gas phase fraction')
ax.set_yscale('log')
ax.set_xlabel('Temperature [K]')
ax.set_ylabel('Pressure [Pa]')
plt.show()
```
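The nested list comprehension above rebuilds the grid from joblib's flat output; an equivalent NumPy sketch, using stand-in data instead of flash results so the shapes are easy to follow:

```
import numpy as np

pts = 3
# Stand-in for joblib's flat output: one [rho, Cp, beta] triple per (T, P) point,
# ordered the same way as zip(Ts_grid.flat, Ps_grid.flat)
processed_data = [[float(k), 2.0 * k, 0.1 * k] for k in range(pts * pts)]

# Column 2 (gas_beta), reshaped back onto the meshgrid layout
flat = np.array(processed_data)[:, 2]
grid = flat.reshape(pts, pts)

# The list-comprehension version from the notebook builds the transpose of this layout
grid_listcomp = np.array([[processed_data[j * pts + i][2] for j in range(pts)]
                          for i in range(pts)])
assert np.allclose(grid_listcomp, grid.T)
```

Either form works; `reshape` is shorter once you settle which axis corresponds to temperature and which to pressure.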
```
import os
import numpy as np
import pandas as pd
import random
from transformers import (AdamW, get_linear_schedule_with_warmup, logging,
ElectraConfig, ElectraTokenizer, ElectraForSequenceClassification,
ElectraPreTrainedModel, ElectraModel)
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, SequentialSampler, RandomSampler, DataLoader
from tqdm.notebook import tqdm
import gc; gc.enable()
from IPython.display import clear_output
from sklearn.model_selection import StratifiedKFold
logging.set_verbosity_error()
INPUT_DIR = '../input/commonlitreadabilityprize'
MODEL_NAME = 'roberta-large'
MAX_LENGTH = 256
LR = 2e-5
EPS = 1e-8
SEED = 42
NUM_FOLDS = 5
SEEDS = [113, 71, 17, 43, 37]
EPOCHS = 5
TRAIN_BATCH_SIZE = 8
VAL_BATCH_SIZE = 32
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def set_seed(seed = 0):
np.random.seed(seed)
random_state = np.random.RandomState(seed)
random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
os.environ['PYTHONHASHSEED'] = str(seed)
return random_state
random_state = set_seed(SEED)
class ContinuousStratifiedKFold(StratifiedKFold):
def split(self, x, y, groups=None):
num_bins = int(np.floor(1 + np.log2(len(y))))
bins = pd.cut(y, bins=num_bins, labels=False)
return super().split(x, bins, groups)
def get_data_loaders(data, fold):
x_train = data.loc[data.fold != fold, 'excerpt'].tolist()
y_train = data.loc[data.fold != fold, 'target'].values
x_val = data.loc[data.fold == fold, 'excerpt'].tolist()
y_val = data.loc[data.fold == fold, 'target'].values
encoded_train = tokenizer.batch_encode_plus(
x_train,
add_special_tokens=True,
return_attention_mask=True,
padding='max_length',
truncation=True,
max_length=MAX_LENGTH,
return_tensors='pt'
)
encoded_val = tokenizer.batch_encode_plus(
x_val,
add_special_tokens=True,
return_attention_mask=True,
padding='max_length',
truncation=True,
max_length=MAX_LENGTH,
return_tensors='pt'
)
dataset_train = TensorDataset(
encoded_train['input_ids'],
encoded_train['attention_mask'],
torch.tensor(y_train)
)
dataset_val = TensorDataset(
encoded_val['input_ids'],
encoded_val['attention_mask'],
torch.tensor(y_val)
)
dataloader_train = DataLoader(
dataset_train,
sampler = RandomSampler(dataset_train),
batch_size=TRAIN_BATCH_SIZE
)
dataloader_val = DataLoader(
dataset_val,
sampler = SequentialSampler(dataset_val),
batch_size=VAL_BATCH_SIZE
)
return dataloader_train, dataloader_val
data = pd.read_csv(os.path.join(INPUT_DIR, 'train.csv'))
# Create stratified folds
kf = ContinuousStratifiedKFold(n_splits=5, shuffle=True, random_state=SEED)
for f, (t_, v_) in enumerate(kf.split(data, data.target)):
data.loc[v_, 'fold'] = f
data['fold'] = data['fold'].astype(int)
def evaluate(model, val_dataloader):
model.eval()
loss_val_total = 0
for batch in val_dataloader:
batch = tuple(b.to(DEVICE) for b in batch)
labels = batch[2].type(torch.float)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'labels': labels,
}
with torch.no_grad():
# To be used when using default architecture from transformers
output = model(**inputs)
loss = output.loss
# To be used with the custom head
# loss = model(**inputs)
loss_val_total += loss.item()
loss_val_avg = loss_val_total/len(val_dataloader)
return loss_val_avg
def train(model, train_dataloader, val_dataloader):
optimizer = AdamW(model.parameters(), lr = LR, eps = EPS)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=0, num_training_steps=len(train_dataloader) * EPOCHS)
best_val_loss = float('inf')
model.train()
for epoch in range(EPOCHS):
loss_train_total = 0
for batch in tqdm(train_dataloader):
model.zero_grad()
batch = tuple(b.to(DEVICE) for b in batch)
labels = batch[2].type(torch.float)
inputs = {
'input_ids': batch[0],
'attention_mask': batch[1],
'labels': labels,
}
# To be used when using default architecture from transformers
output = model(**inputs)
loss = output.loss
# To be used with the custom head
# loss = model(**inputs)
loss_train_total += loss.item()
loss.backward()
optimizer.step()
scheduler.step()
loss_train_avg = loss_train_total / len(train_dataloader)
loss_val_avg = evaluate(model, val_dataloader)
print(f'epoch:{epoch+1}/{EPOCHS} train loss={loss_train_avg} val loss={loss_val_avg}')
if loss_val_avg < best_val_loss:
best_val_loss = loss_val_avg
return best_val_loss
class ElectraForRegression(ElectraPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.config = config
self.electra = ElectraModel(config)
self.dropout = nn.Dropout(0.1)
self.out_proj = nn.Linear(config.hidden_size, config.num_labels)
self.loss = nn.MSELoss()
self.init_weights()
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None,
):
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.electra(
input_ids,
attention_mask,
token_type_ids,
position_ids,
head_mask,
inputs_embeds,
output_attentions,
output_hidden_states,
return_dict,
)
last_hidden_state = outputs[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(last_hidden_state.size()).float()
sum_embeddings = torch.sum(last_hidden_state * input_mask_expanded, 1)
sum_mask = input_mask_expanded.sum(1)
sum_mask = torch.clamp(sum_mask, min=1e-9)
mean_embeddings = sum_embeddings / sum_mask
dropped = self.dropout(mean_embeddings)
logits = self.out_proj(dropped)
preds = logits.squeeze(-1).squeeze(-1)
if labels is not None:
loss = self.loss(preds.view(-1).float(), labels.view(-1).float())
return loss
else:
return preds
tokenizer = ElectraTokenizer.from_pretrained('google/electra-large-discriminator')
config = ElectraConfig.from_pretrained('google/electra-large-discriminator')
config.update({'problem_type': 'regression', 'num_labels': 1})
model = ElectraForSequenceClassification.from_pretrained('google/electra-large-discriminator', config=config)
# model = ElectraForRegression.from_pretrained('google/electra-large-discriminator', config=config)
model.to(DEVICE)
clear_output()
losses = []
MAX_RUNS = 2
runs = 0 # Variable to control termination condition
for i, seed in enumerate(SEEDS):
# Termination condition
if runs == MAX_RUNS:
print(f'{runs} runs termination condition reached.')
break
print(f'********* seed({i}) = {seed} ***********')
for fold in range(NUM_FOLDS):
print(f'*** fold = {fold} ***')
set_seed(seed)
train_dataloader, val_dataloader = get_data_loaders(data, fold)
loss = train(model, train_dataloader, val_dataloader)
losses.append(loss)
# Termination condition
runs += 1
if runs == MAX_RUNS:
break
from sklearn.metrics import mean_squared_error
train_dataloader, val_dataloader = get_data_loaders(data, 2)
model.eval()
predictions = []
targets = []
with torch.no_grad():
for batch in val_dataloader:
batch = tuple(b.to(DEVICE) for b in batch)
targets.extend(batch[2].cpu().detach().numpy().ravel().tolist())
inputs = {
'input_ids': batch[0],
'attention_mask': batch[1],
}
outputs = model(**inputs)
# predictions.extend(outputs.cpu().detach().numpy().ravel().tolist()) # Custom head
predictions.extend(outputs.logits.cpu().detach().numpy().ravel().tolist()) #Default head
mean_squared_error(targets, predictions, squared=False)
# Default head score
# 0.035847414585227534
# Custom head score
# 0.1438390363656116
model.save_pretrained('/kaggle/working')
tokenizer.save_pretrained('/kaggle/working')
```
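The custom `ElectraForRegression` head above mean-pools the last hidden states under the attention mask. A minimal NumPy sketch of that masked mean, on illustrative shapes rather than real model outputs:

```
import numpy as np

batch, seq_len, hidden = 2, 4, 3
last_hidden_state = np.arange(batch * seq_len * hidden, dtype=float).reshape(batch, seq_len, hidden)
# 1 = real token, 0 = padding
attention_mask = np.array([[1, 1, 1, 0],
                           [1, 1, 0, 0]], dtype=float)

# Same steps as the forward() method: expand the mask, sum, clamp, divide
mask = attention_mask[:, :, None]                      # (batch, seq_len, 1)
sum_embeddings = (last_hidden_state * mask).sum(axis=1)
sum_mask = np.clip(mask.sum(axis=1), 1e-9, None)
mean_embeddings = sum_embeddings / sum_mask

# First sequence: mean of its 3 unmasked token vectors only
assert np.allclose(mean_embeddings[0], last_hidden_state[0, :3].mean(axis=0))
```

The clamp (`min=1e-9`) guards against division by zero for a fully masked sequence; padding tokens never contribute to the pooled vector.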
```
## plot plasma density
%pylab inline
import numpy as np
from matplotlib import pyplot as plt
from ReadBinary import *
filename = "../data/Wp2-x.data"
arrayInfo = GetArrayInfo(filename)
print("typeCode: ", arrayInfo["typeCode"])
print("typeSize: ", arrayInfo["typeSize"])
print("shape: ", arrayInfo["shape"])
print("numOfArrays: ", arrayInfo["numOfArrays"])
Wp2 = GetArrays(filename, 0, 1)[0,0,:,:]
print("shape: ", Wp2.shape)
shape = Wp2.shape
plt.figure(figsize=(6, 6*(shape[0]/shape[1])))
plt.imshow(np.real(Wp2[:,:]), cmap="rainbow", origin='lower', aspect='auto')
plt.show()
## animate Electric field
%pylab tk
import numpy as np
from matplotlib import pyplot as plt
from ReadBinary import *
filename = "../data/E-x.data"
arrayInfo = GetArrayInfo(filename)
print("typeCode: ", arrayInfo["typeCode"])
print("typeSize: ", arrayInfo["typeSize"])
print("shape: ", arrayInfo["shape"])
print("numOfArrays: ", arrayInfo["numOfArrays"])
E = GetArrays(filename, indStart=-500, indEnd=None)[:, 0, :, :]
print("shape: ", E.shape)
shape = E.shape[1:]
plt.ion()
plt.figure(figsize=(7,6*(shape[0]/shape[1])))
for n in range(E.shape[0]):
plt.clf()
plt.imshow(np.real(E[n, :,:]), cmap="rainbow", origin='lower', aspect='auto')
plt.colorbar()
plt.pause(0.05)
%pylab tk
shape = E.shape[1:]
ion()
nz_ignore = 300
for n in range(E.shape[0]):
clf()
plot(E[n, int(shape[0]/2), nz_ignore:])
pause(0.05)
## Get Spectrum 1D
E = GetArrays(filename, indStart=-600, indEnd=None)[:, 0, :, :]
shape = E.shape
print("shape : ", shape)
Nt, Ny, Nz = shape
print("Nt: {}, Ny: {}, Nz: {}".format(Nt, Ny, Nz))
#E_tz = np.sum(E, axis=1)/Ny
E_tz = E[:, int(Ny/2), nz_ignore:]
E_f = np.fft.fft2(E_tz)
%pylab inline
imshow(np.real(E_f)[0:100, 0:100], origin="lower", cmap="rainbow")
## Get Spectrum 2D
E = GetArrays(filename, indStart=-600, indEnd=None)[:, 0, :, :]
shape = E.shape
print("shape : ", shape)
nz_ignore = 300
Nt, Ny, Nz = shape
print("Nt: {}, Ny: {}, Nz: {}".format(Nt, Ny, Nz))
E_tz = (np.sum(E, axis=1)/Ny)[:, nz_ignore:]
#E_tz = E[:, int(Ny/2), nz_ignore:]
Nky, Nkz = 100, 100
Nw = 100
ky_max, kz_max = 4.0*np.pi, 4.0*np.pi
w_max = 30.0
w = np.linspace(0, w_max, Nw)
kz = np.linspace(0, kz_max, Nkz)
E_f = np.zeros((Nw, Nkz), dtype=complex)
S = 0.95
dy = 10/Ny
dz = 12/Nz
dt = 1.0/np.sqrt(1.0/dy**2 + 1.0/dz**2)*S
t = np.linspace(0.0, Nt*dt, Nt, endpoint=True)
z = np.linspace(0.0, Nz*dz, Nz)[nz_ignore:]
t_mesh, z_mesh = np.meshgrid(t, z, indexing="ij")
for i in range(Nw):
w_i = w[i]
print(i, end=" ")
for j in range(Nkz):
kz_j = kz[j]
E_f[i, j] = np.sum(E_tz*np.exp(-1j*w_i*t_mesh + 1j*kz_j*z_mesh))
E_f *= dt*dz/(2.0*np.pi)**2  # integration measure for the t-z integral
%pylab inline
figsize(8, 8)
E_f_max = np.max(np.abs(E_f))
imshow(np.abs(E_f), origin="lower")
```
# Creating images using shapes and simple simulation with attenuation
This exercise shows how to create images via geometric shapes. It then uses forward projection without
and with attenuation.
It is recommended you complete the [Introductory](../Introductory) notebooks first (or alternatively the [display_and_projection.ipynb](display_and_projection.ipynb)). There is some overlap with [acquisition_model_mr_pet_ct.ipynb](../Introductory/acquisition_model_mr_pet_ct.ipynb), but here we use some geometric shapes to create an image and add attenuation etc.
Authors: Kris Thielemans and Evgueni Ovtchinnikov
First version: 8th of September 2016
Second Version: 17th of May 2018
CCP SyneRBI Synergistic Image Reconstruction Framework (SIRF).
Copyright 2015 - 2017 Rutherford Appleton Laboratory STFC.
Copyright 2015 - 2018 University College London.
This is software developed for the Collaborative Computational
Project in Synergistic Reconstruction for Biomedical Imaging
(http://www.ccpsynerbi.ac.uk/).
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
# Initial set-up
```
#%% make sure figures appears inline and animations works
%matplotlib notebook
# Setup the working directory for the notebook
import notebook_setup
from sirf_exercises import cd_to_working_dir
cd_to_working_dir('PET', 'image_creation_and_simulation')
#%% Initial imports etc
import numpy
from numpy.linalg import norm
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import os
import sys
import shutil
#%% Use the 'pet' prefix for all SIRF functions
# This is done here to explicitly differentiate between SIRF pet functions and
# anything else.
import sirf.STIR as pet
from sirf.Utilities import show_2D_array, show_3D_array, examples_data_path
from sirf_exercises import exercises_data_path
# define the directory with input files
data_path = os.path.join(examples_data_path('PET'), 'brain')
```
# Creation of images
```
#%% Read in image
# We will use an image provided with the demo to have correct voxel-sizes etc
image = pet.ImageData(os.path.join(data_path, 'emission.hv'))
print(image.dimensions())
print(image.voxel_sizes())
#%% create a shape
shape = pet.EllipticCylinder()
# define its size (in mm)
shape.set_length(50)
shape.set_radii((40, 30))
# centre of shape in (x,y,z) coordinates where (0,0,0) is centre of first plane
shape.set_origin((20, -30, 60))
#%% add the shape to the image
# first set the image values to 0
image.fill(0)
image.add_shape(shape, scale=1)
#%% add same shape at different location and with different intensity
shape.set_origin((40, -30, -60))
image.add_shape(shape, scale=0.75)
#%% show the phantom image as a sequence of transverse images
show_3D_array(image.as_array());
```
# Simple simulation
Let's first do simple ray-tracing without attenuation
```
#%% Create a SIRF acquisition model
acq_model = pet.AcquisitionModelUsingRayTracingMatrix()
# Specify sinogram dimensions via the template
template = pet.AcquisitionData(os.path.join(data_path, 'template_sinogram.hs'))
# Now set-up our acquisition model with all information that it needs about the data and image.
acq_model.set_up(template,image);
#%% forward project this image and display all sinograms
acquired_data_no_attn = acq_model.forward(image)
acquired_data_no_attn_array = acquired_data_no_attn.as_array()[0,:,:,:]
show_3D_array(acquired_data_no_attn_array);
#%% Show every 8th view
# Doing this here with a complicated one-liner...
show_3D_array(
acquired_data_no_attn_array[:,0:acquired_data_no_attn_array.shape[1]:8,:].transpose(1,0,2),
show=False)
# You could now of course try the animation of the previous demo...
```
# Adding attenuation
Attenuation in PET follows the Lambert-Beer law:
$$\exp\left\{-\int\mu(x) dx\right\},$$
with $\mu(x)$ the linear attenuation coefficients (roughly proportional to density),
and the line integral being performed between the 2 detectors.
In SIRF, we model this via an `AcquisitionSensitivityModel` object. The rationale for the name is that attenuation reduces the sensitivity of the detector-pair.
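Numerically, the line integral above is just a sum over voxels along the line of response; a small NumPy sketch (uniform voxel spacing is an assumption) of the resulting attenuation factor:

```
import numpy as np

# Linear attenuation coefficients sampled along one line of response (cm^-1);
# water at 511 keV is roughly 0.096 cm^-1
mu = np.full(120, 0.096)
dx = 0.1  # voxel spacing along the line, in cm

# Lambert-Beer: attenuation factor = exp(-integral of mu dx)
attn_factor = np.exp(-np.sum(mu * dx))

# 12 cm of water transmits roughly 32% of the photon pairs
assert 0.30 < attn_factor < 0.33
```

This is the quantity the `AcquisitionSensitivityModel` computes for every detector pair at once.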
```
#%% create an attenuation image
# we will use the "emission" image as a template for sizes (although xy size doesn't have to be identical)
attn_image = image.get_uniform_copy(0)
#%% create a shape for a uniform cylinder in the centre
shape = pet.EllipticCylinder()
shape.set_length(150)
shape.set_radii((60, 60))
shape.set_origin((0, 0, 40))
# add it to the attenuation image with mu=0.096 cm^-1 (i.e. water)
attn_image.add_shape(shape, scale=0.096)
#%% show the phantom image as a sequence of transverse images
show_3D_array(attn_image.as_array());
#%% Create the acquisition sensitivity model
# First create the ray-tracer
acq_model_for_attn = pet.AcquisitionModelUsingRayTracingMatrix()
# Now create the attenuation model
asm_attn = pet.AcquisitionSensitivityModel(attn_image, acq_model_for_attn)
attn_image.as_array().max()
# Use this to find the 'detection efficiencies' as sinograms
asm_attn.set_up(template)
attn_factors = asm_attn.forward(template.get_uniform_copy(1))
# We will store these directly as an `AcquisitionSensitivityModel`,
# such that we don't have to redo the line integrals
asm_attn = pet.AcquisitionSensitivityModel(attn_factors)
#%% check a single sinogram (they are all the same of course)
show_2D_array('Attenuation factor sinogram', attn_factors.as_array()[0,5,:,:]);
#%% check a profile (they are also all the same as the object is a cylinder in the centre)
plt.figure()
plt.plot(attn_factors.as_array()[0,5,0,:]);
#%% Create a SIRF acquisition model
# start with ray-tracing
acq_model_with_attn = pet.AcquisitionModelUsingRayTracingMatrix()
# add the 'sensitivity'
acq_model_with_attn.set_acquisition_sensitivity(asm_attn)
# set-up
acq_model_with_attn.set_up(template,attn_image);
#%% forward project the original image, now including attenuation modelling, and display all sinograms
acquired_data_with_attn = acq_model_with_attn.forward(image)
acquired_data_with_attn_array = acquired_data_with_attn.as_array()[0,:,:,:]
show_3D_array(acquired_data_with_attn_array);
#%% Plot some profiles
slice = 40
plt.figure()
profile_no_attn = acquired_data_no_attn_array[5,slice,:]
profile_with_attn = acquired_data_with_attn_array[5,slice,:]
profile_attn_factors = attn_factors.as_array()[0,5,slice,:]
plt.plot(profile_no_attn,label='no atten')
plt.plot(profile_with_attn,label='with atten')
plt.plot(profile_no_attn * profile_attn_factors,'bo',label='check')
plt.legend();
```
# Further things to try
- Back project the data with and without attenuation
- Add noise to the data before backprojection (not so easy unfortunately. Adding noise is done in the [ML_reconstruction](ML_reconstruction.ipynb) exercise).
Hint: use `acquired_data.clone()` to create a copy, `numpy.random.poisson`, and `acquired_data.fill()`.
- Add an additive background to the model. Does it modify the forward projection (it should!) and the back-projection?
Hint: read the help for `AcquisitionModel`. Create a simple background by using `AcquisitionData.get_uniform_copy`.
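For the noise suggestion, the Poisson step itself is plain NumPy; a hedged sketch on an array standing in for `acquired_data.as_array()` (the scale factor is an illustrative choice):

```
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for acquired_data.as_array(): noiseless expected counts
noiseless = np.full((1, 5, 4, 3), 100.0)

# The scale sets the count level, and hence the relative noise level
scale = 0.5
noisy = rng.poisson(noiseless * scale) / scale

# In SIRF you would then do: noisy_data = acquired_data.clone(); noisy_data.fill(noisy)
assert noisy.shape == noiseless.shape
```

Lower `scale` means fewer counts and noisier data, which is handy for studying reconstruction robustness.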
```
help(pet.AcquisitionModel)
help(pet.AcquisitionData.get_uniform_copy)
```
The Python dictionary and its default functions.
```
# Creating a Dictionary
# with Integer Keys
Dict = {1: 'Geeks', 2: 'For', 3: 'Geeks'}
print("\nDictionary with the use of Integer Keys: ")
print(Dict)
# Creating a Dictionary
# with Mixed keys
Dict = {'Name': 'Geeks', 1: [1, 2, 3, 4]}
print("\nDictionary with the use of Mixed Keys: ")
print(Dict)
# Creating an empty Dictionary
Dict = {}
print("Empty Dictionary: ")
print(Dict)
# Adding elements one at a time
Dict[0] = 'Geeks'
Dict[2] = 'For'
Dict[3] = 1
print("\nDictionary after adding 3 elements: ")
print(Dict)
# Adding set of values
# to a single Key
Dict['Value_set'] = 2, 3, 4
print("\nDictionary after adding 3 elements: ")
print(Dict)
# Updating existing Key's Value
Dict[2] = 'Welcome'
print("\nUpdated key value: ")
print(Dict)
# Adding Nested Key value to Dictionary
Dict[5] = {'Nested' :{'1' : 'Life', '2' : 'Geeks'}}
print("\nAdding a Nested Key: ")
print(Dict)
# Python program to demonstrate
# accessing an element from a Dictionary
# Creating a Dictionary
Dict = {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
# accessing an element using its key
print("Accessing an element using its key:")
print(Dict['name'])
# accessing an element using its key
print("Accessing an element using its key:")
print(Dict[1])
# Creating a Dictionary
Dict = {'Dict1': {1: 'Geeks'},
'Dict2': {'Name': 'For'}}
# Accessing an element using its key
print(Dict['Dict1'])
print(Dict['Dict1'][1])
print(Dict['Dict2']['Name'])
# Initial Dictionary
Dict = { 5 : 'Welcome', 6 : 'To', 7 : 'Geeks',
'A' : {1 : 'Geeks', 2 : 'For', 3 : 'Geeks'},
'B' : {1 : 'Geeks', 2 : 'Life'}}
print("Initial Dictionary: ")
print(Dict)
# Deleting a Key value
del Dict[6]
print("\nDeleting a specific key: ")
print(Dict)
# Deleting a Key from
# Nested Dictionary
del Dict['A'][2]
print("\nDeleting a key from Nested Dictionary: ")
print(Dict)
# Creating a Dictionary
Dict = {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
# Deleting a key
# using pop() method
pop_ele = Dict.pop(1)
print('\nDictionary after deletion: ' + str(Dict))
print('Value associated with the popped key is: ' + str(pop_ele))
# Creating Dictionary
Dict = {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
# Deleting the last inserted pair
# using the popitem() method (LIFO since Python 3.7)
pop_ele = Dict.popitem()
print("\nDictionary after deletion: " + str(Dict))
print("The (key, value) pair returned is: " + str(pop_ele))
# Creating a Dictionary
Dict = {1: 'Geeks', 'name': 'For', 3: 'Geeks'}
# Deleting entire Dictionary
Dict.clear()
print("\nDeleting Entire Dictionary: ")
print(Dict)
```
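Indexing with `Dict[key]` raises `KeyError` for a missing key; the remaining default functions offer safer access, iteration, and merging:

```
Dict = {1: 'Geeks', 'name': 'For', 3: 'Geeks'}

# get() returns a default instead of raising KeyError
print(Dict.get('missing', 'not found'))

# keys(), values() and items() are views over the dictionary
print(list(Dict.keys()))
print(list(Dict.values()))

# items() yields (key, value) pairs, handy in loops
for key, value in Dict.items():
    print(key, '->', value)

# setdefault() inserts a key only if it is absent
Dict.setdefault('new', 'added')
print(Dict['new'])

# update() merges another dictionary in place
Dict.update({'name': 'Updated'})
print(Dict['name'])
```

These methods never remove entries, unlike `pop()`, `popitem()` and `clear()` above.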
<a href="https://colab.research.google.com/github/ymoslem/OpenNMT-Tutorial/blob/main/2-NMT-Training.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Install OpenNMT-py 2.x
!pip3 install OpenNMT-py
```
# Prepare Your Datasets
Please make sure you have completed the [first exercise](https://colab.research.google.com/drive/1rsFPnAQu9-_A6e2Aw9JYK3C8mXx9djsF?usp=sharing).
```
# Open the folder where you saved your prepared datasets from the first exercise
%cd drive/MyDrive/nmt/
!ls
```
# Create the Training Configuration File
The following config file matches most of the recommended values for the Transformer model [Vaswani et al., 2017](https://arxiv.org/abs/1706.03762). As the current dataset is small, we reduced the following values:
* `train_steps` - for datasets with a few million sentences, consider using a value between 100000 and 200000, or more! Enabling the option `early_stopping` can help stop the training when there is no considerable improvement.
* `valid_steps` - 10000 can be good if the value `train_steps` is big enough.
* `warmup_steps` - obviously, its value must be less than `train_steps`. Try 4000 and 8000 values.
Refer to [OpenNMT-py training parameters](https://opennmt.net/OpenNMT-py/options/train.html) for more details. If you are interested in further explanation of the Transformer model, you can check this article, [Illustrated Transformer](https://jalammar.github.io/illustrated-transformer/).
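The `warmup_steps` option interacts with the `noam` decay used in the config below: the learning rate rises roughly linearly for `warmup_steps` steps, then decays as the inverse square root of the step. A small sketch of that schedule, following Vaswani et al. (2017) - the 512 model dimension matches `rnn_size` in the config, though the exact constants OpenNMT applies internally may differ:

```
def noam_lr(step, model_dim=512, warmup_steps=1000, scale=2.0):
    """Transformer learning-rate schedule: linear warmup, then inverse-sqrt decay."""
    return scale * model_dim ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# The rate climbs during warmup, peaks at warmup_steps, then decays
assert noam_lr(100) < noam_lr(1000)
assert noam_lr(1000) > noam_lr(3000)
```

This is why `learning_rate: 2.0` in the config is a scale factor rather than a literal rate.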
```
# Create the YAML configuration file
# On a regular machine, you can create it manually or with nano
# Note here we are using some smaller values because the dataset is small
# For larger datasets, consider increasing: train_steps, valid_steps, warmup_steps, save_checkpoint_steps, keep_checkpoint
config = '''# config.yaml
## Where the samples will be written
save_data: run
# Training files
data:
corpus_1:
path_src: UN.en-fr.fr-filtered.fr.subword.train
path_tgt: UN.en-fr.en-filtered.en.subword.train
transforms: [filtertoolong]
valid:
path_src: UN.en-fr.fr-filtered.fr.subword.dev
path_tgt: UN.en-fr.en-filtered.en.subword.dev
transforms: [filtertoolong]
# Vocabulary files, generated by onmt_build_vocab
src_vocab: run/source.vocab
tgt_vocab: run/target.vocab
# Vocabulary size - should be the same as in sentence piece
src_vocab_size: 50000
tgt_vocab_size: 50000
# Filter out source/target longer than n if [filtertoolong] enabled
#src_seq_length: 200
#tgt_seq_length: 200
# Tokenization options
src_subword_model: source.model
tgt_subword_model: target.model
# Where to save the log file and the output models/checkpoints
log_file: train.log
save_model: models/model.fren
# Stop training if it does not improve after n validations
early_stopping: 4
# Default: 5000 - Save a model checkpoint every n steps
save_checkpoint_steps: 1000
# To save space, limit checkpoints to last n
# keep_checkpoint: 3
seed: 3435
# Default: 100000 - Train the model to max n steps
# Increase for large datasets
train_steps: 3000
# Default: 10000 - Run validation after n steps
valid_steps: 1000
# Default: 4000 - for large datasets, try up to 8000
warmup_steps: 1000
report_every: 100
decoder_type: transformer
encoder_type: transformer
word_vec_size: 512
rnn_size: 512
layers: 6
transformer_ff: 2048
heads: 8
accum_count: 4
optim: adam
adam_beta1: 0.9
adam_beta2: 0.998
decay_method: noam
learning_rate: 2.0
max_grad_norm: 0.0
# Tokens per batch, change if out of GPU memory
batch_size: 4096
valid_batch_size: 4096
batch_type: tokens
normalization: tokens
dropout: 0.1
label_smoothing: 0.1
max_generator_batches: 2
param_init: 0.0
param_init_glorot: 'true'
position_encoding: 'true'
# Number of GPUs, and IDs of GPUs
world_size: 1
gpu_ranks: [0]
'''
with open("config.yaml", "w+") as config_yaml:
config_yaml.write(config)
# [Optional] Check the content of the configuration file
!cat config.yaml
```
# Build Vocabulary
For large datasets, it is not feasible to use all the words/tokens found in the corpus. Instead, a specific vocabulary is extracted from the training dataset, usually between 32k and 100k words. This is the main purpose of the vocabulary-building step.
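At its core, what `onmt_build_vocab` produces is a frequency-ranked token list truncated to the configured size. A toy sketch of that idea (the real tool also applies transforms and adds special tokens):

```
from collections import Counter

# Toy subworded corpus; "▁" marks SentencePiece word boundaries
corpus = [
    "▁the ▁cat ▁sat",
    "▁the ▁dog ▁sat",
    "▁the ▁cat ▁ran",
]

vocab_size = 3  # stands in for src_vocab_size / tgt_vocab_size

counts = Counter(token for line in corpus for token in line.split())
vocab = [token for token, _ in counts.most_common(vocab_size)]

print(vocab)  # ['▁the', '▁cat', '▁sat']
```

Tokens outside the kept vocabulary are mapped to an unknown token at training time, which is why the vocabulary size should match the SentencePiece model's.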
```
# Find the number of CPUs/cores on the machine
!nproc --all
# Build Vocabulary
# -config: path to your config.yaml file
# -n_sample: use -1 to build vocabulary on all the segment in the training dataset
# -num_threads: change it to match the number of CPUs to run it faster
!onmt_build_vocab -config config.yaml -n_sample -1 -num_threads 2
```
From the **Runtime menu** > **Change runtime type**, make sure that the "**Hardware accelerator**" is "**GPU**".
```
# Check if the GPU is active
!nvidia-smi -L
# Check if the GPU is visible to PyTorch
import torch
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))
```
# Training
Now, start training your NMT model! 🎉 🎉 🎉
```
# Train the NMT model
!onmt_train -config config.yaml
```
# Translation
Translation Options:
* `-model` - specify the last model checkpoint name; try testing the quality of multiple checkpoints
* `-src` - the subworded test dataset, source file
* `-output` - give any file name to the new translation output file
* `-gpu` - GPU ID, usually 0 if you have one GPU. Otherwise, it will translate on CPU, which would be slower.
* `-min_length` - [optional] to avoid empty translations
* `-verbose` - [optional] if you want to print translations
Refer to [OpenNMT-py translation options](https://opennmt.net/OpenNMT-py/options/translate.html) for more details.
```
# Translate - change the model name
!onmt_translate -model models/model.fren_step_3000.pt -src UN.en-fr.fr-filtered.fr.subword.test -output UN.en.translated -gpu 0 -min_length 1
# Check the first 5 lines of the translation file
!head -n 5 UN.en.translated
# Desubword the translation file
!python3 MT-Preparation/subwording/3-desubword.py target.model UN.en.translated
# Check the first 5 lines of the desubworded translation file
!head -n 5 UN.en.translated.desubword
# Desubword the source test
# Note: You might as well have split the files *before* subwording during dataset preparation,
# but sometimes datasets have tokenization issues, so this way you are sure the file is really untokenized.
!python3 MT-Preparation/subwording/3-desubword.py target.model UN.en-fr.en-filtered.en.subword.test
# Check the first 5 lines of the desubworded source
!head -n 5 UN.en-fr.en-filtered.en.subword.test.desubword
```
# MT Evaluation
There are several MT evaluation metrics, such as BLEU, TER, METEOR, COMET, and BERTScore.
Here we are using BLEU. Files must be detokenized/desubworded beforehand.
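As a rough intuition for what BLEU measures, the toy function below computes clipped unigram precision in plain Python; real BLEU, as computed by sacrebleu, combines clipped 1–4-gram precisions with a brevity penalty:

```python
from collections import Counter

def clipped_unigram_precision(hypothesis, reference):
    """Fraction of hypothesis tokens found in the reference, where each
    reference token can be matched at most as often as it occurs there."""
    hyp = Counter(hypothesis.split())
    ref = Counter(reference.split())
    overlap = sum(min(count, ref[token]) for token, count in hyp.items())
    return overlap / max(1, sum(hyp.values()))

# 'the' appears 3 times in the hypothesis but only once in the reference,
# so only one occurrence counts: precision is 1/3.
print(clipped_unigram_precision("the the the", "the cat"))
```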
```
# Download the BLEU script
!wget https://raw.githubusercontent.com/ymoslem/MT-Evaluation/main/BLEU/compute-bleu.py
# Install sacrebleu
!pip3 install sacrebleu
# Evaluate the translation (without subwording)
!python3 compute-bleu.py UN.en-fr.en-filtered.en.subword.test.desubword UN.en.translated.desubword
```
# More Features and Directions to Explore
Experiment with the following ideas:
* Increase `train_steps` and see to what extent new checkpoints provide better translation, in terms of both BLEU and your human evaluation.
* Check MT evaluation metrics other than BLEU, such as [TER](https://github.com/mjpost/sacrebleu#ter), [WER](https://blog.machinetranslation.io/compute-wer-score/), [METEOR](https://blog.machinetranslation.io/compute-bleu-score/#meteor), [COMET](https://github.com/Unbabel/COMET), and [BERTScore](https://github.com/Tiiiger/bert_score). What are the conceptual differences between them? Are there special cases that call for a specific metric?
* Continue training from the last model checkpoint using the `-train_from` option, only if the training stopped and you want to continue it. In this case, `train_steps` in the config file should be larger than the steps of the last checkpoint you train from.
```
!onmt_train -config config.yaml -train_from models/model.fren_step_3000.pt
```
* **Ensemble Decoding:** During translation, instead of adding one model/checkpoint to the `-model` argument, add multiple checkpoints. For example, try the two last checkpoints. Does it improve translation quality? Does it affect translation speed?
* **Averaging Models:** Try to average multiple models into one model using the [average_models.py](https://github.com/OpenNMT/OpenNMT-py/blob/master/onmt/bin/average_models.py) script, and see how this affects translation quality.
```
python3 average_models.py -models model_step_xxx.pt model_step_yyy.pt -output model_avg.pt
```
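Conceptually, averaging checkpoints means taking the element-wise mean of every parameter across the models. Below is a minimal sketch with plain Python lists standing in for parameter tensors (the real script does the same over PyTorch checkpoints):

```python
def average_checkpoints(state_dicts):
    """Element-wise average of parameters shared by all checkpoints."""
    n = len(state_dicts)
    return {
        name: [sum(values) / n for values in zip(*(sd[name] for sd in state_dicts))]
        for name in state_dicts[0]
    }

ckpt_a = {"w": [1.0, 2.0], "b": [0.0]}
ckpt_b = {"w": [3.0, 4.0], "b": [2.0]}
print(average_checkpoints([ckpt_a, ckpt_b]))  # {'w': [2.0, 3.0], 'b': [1.0]}
```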
* **Release the model:** Try this command and see how it reduces the model size.
```
onmt_release_model --model "model.pt" --output "model_released.pt"
```
* **Use CTranslate2:** For efficient translation, consider using [CTranslate2](https://github.com/OpenNMT/CTranslate2), a fast inference engine. Check out an [example](https://gist.github.com/ymoslem/60e1d1dc44fe006f67e130b6ad703c4b).
* **Work on low-resource languages:** Find out more details about [how to train NMT models for low-resource languages](https://blog.machinetranslation.io/low-resource-nmt/).
* **Train a multilingual model:** Find out helpful notes about [training multilingual models](https://blog.machinetranslation.io/multilingual-nmt).
* **Publish a demo:** Show off your work through a [simple demo with CTranslate2 and Streamlit](https://blog.machinetranslation.io/nmt-web-interface/).
## Distributed Training with Chainer and ChainerMN
Chainer supports two training modes: single-machine and distributed. Unlike the single-machine notebook example that trains an image classification model on the CIFAR-10 dataset, here we will write a Chainer script that uses `chainermn` to distribute training across multiple instances.
[VGG](https://arxiv.org/pdf/1409.1556v6.pdf) is an architecture for deep convolutional networks. In this example, we train a convolutional network to perform image classification using the CIFAR-10 dataset on multiple instances. CIFAR-10 consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. We'll train the model on SageMaker, deploy it to a hosted endpoint, and then classify images using the deployed model.
The Chainer script runs inside of a Docker container running on SageMaker. For more information about the Chainer container, see the sagemaker-chainer-containers repository and the sagemaker-python-sdk repository:
* https://github.com/aws/sagemaker-chainer-containers
* https://github.com/aws/sagemaker-python-sdk
For more on Chainer and ChainerMN, please visit the Chainer and ChainerMN repositories:
* https://github.com/chainer/chainer
* https://github.com/chainer/chainermn
This notebook is adapted from the [CIFAR-10](https://github.com/chainer/chainer/tree/master/examples/cifar) example in the Chainer repository.
```
# Setup
from sagemaker import get_execution_role
import sagemaker
sagemaker_session = sagemaker.Session()
# This role retrieves the SageMaker-compatible role used by this Notebook Instance.
role = get_execution_role()
```
## Downloading training and test data
We use helper functions provided by `chainer` to download and preprocess the CIFAR10 data.
```
import chainer
from chainer.datasets import get_cifar10
train, test = get_cifar10()
```
## Uploading the data
We save the preprocessed data to the local filesystem, and then use the `sagemaker.Session.upload_data` function to upload our datasets to an S3 location. The return value `inputs` identifies the S3 location, which we will use when we start the Training Job.
```
import os
import shutil
import numpy as np
train_data = [element[0] for element in train]
train_labels = [element[1] for element in train]
test_data = [element[0] for element in test]
test_labels = [element[1] for element in test]
try:
    os.makedirs("/tmp/data/distributed_train_cifar")
    os.makedirs("/tmp/data/distributed_test_cifar")
    np.savez("/tmp/data/distributed_train_cifar/train.npz", data=train_data, labels=train_labels)
    np.savez("/tmp/data/distributed_test_cifar/test.npz", data=test_data, labels=test_labels)
    train_input = sagemaker_session.upload_data(
        path=os.path.join("/tmp", "data", "distributed_train_cifar"),
        key_prefix="notebook/distributed_chainer_cifar/train",
    )
    test_input = sagemaker_session.upload_data(
        path=os.path.join("/tmp", "data", "distributed_test_cifar"),
        key_prefix="notebook/distributed_chainer_cifar/test",
    )
finally:
    shutil.rmtree("/tmp/data")
print("training data at ", train_input)
print("test data at ", test_input)
```
## Writing the Chainer script to run on Amazon SageMaker
### Training
We need to provide a training script that can run on the SageMaker platform. The training script is very similar to a training script you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, such as:
* `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to.
These artifacts are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.
* `SM_OUTPUT_DIR`: A string representing the filesystem path to write output artifacts to. Output artifacts may
include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed
and uploaded to S3 to the same S3 prefix as the model artifacts.
Supposing two input channels, 'train' and 'test', were used in the call to the Chainer estimator's ``fit()`` method,
the following will be set, following the format `SM_CHANNEL_[channel_name]`:
* `SM_CHANNEL_TRAIN`: A string representing the path to the directory containing data in the 'train' channel
* `SM_CHANNEL_TEST`: Same as above, but for the 'test' channel.
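The channel naming convention amounts to a simple string transform (an illustration of the convention, not a SageMaker API):

```python
def channel_env_var(channel_name):
    # SageMaker upper-cases the channel name: 'train' -> SM_CHANNEL_TRAIN
    return "SM_CHANNEL_" + channel_name.upper()

print(channel_env_var("train"))  # SM_CHANNEL_TRAIN
print(channel_env_var("test"))   # SM_CHANNEL_TEST
```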
A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to `model_dir` so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an `argparse.ArgumentParser` instance. For example, the script run by this notebook starts with the following:
```python
import argparse
import os
if __name__ == '__main__':

    num_gpus = int(os.environ['SM_NUM_GPUS'])

    parser = argparse.ArgumentParser()

    # retrieve the hyperparameters we set from the client in the notebook (with some defaults)
    parser.add_argument('--epochs', type=int, default=30)
    parser.add_argument('--batch-size', type=int, default=256)
    parser.add_argument('--learning-rate', type=float, default=0.05)
    parser.add_argument('--communicator', type=str, default='pure_nccl' if num_gpus > 0 else 'naive')

    # Data, model, and output directories. These are required.
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.
```
Because the Chainer container imports your training script, you should always put your training code in a main guard (`if __name__=='__main__':`) so that the container does not inadvertently run your training code at the wrong point in execution.
For more information about training environment variables, please visit https://github.com/aws/sagemaker-containers.
### Hosting and Inference
We use a single script to train and host the Chainer model. You can also write separate scripts for training and hosting. In contrast with the training script, the hosting script requires you to implement functions with particular function signatures (or rely on defaults for those functions).
These functions load your model, deserialize data sent by a client, obtain inferences from your hosted model, and serialize predictions back to a client:
* **`model_fn(model_dir)` (always required for hosting)**: This function is invoked to load model artifacts from those that were written into `model_dir` during training.
The script that this notebook runs uses the following `model_fn` function for hosting:
```python
def model_fn(model_dir):
    chainer.config.train = False
    model = L.Classifier(net.VGG(10))
    serializers.load_npz(os.path.join(model_dir, 'model.npz'), model)
    return model.predictor
```
* `input_fn(input_data, content_type)`: This function is invoked to deserialize prediction data when a prediction request is made. The return value is passed to predict_fn. `input_data` is the serialized input data in the body of the prediction request, and `content_type`, the MIME type of the data.
* `predict_fn(input_data, model)`: This function accepts the return value of `input_fn` as the `input_data` parameter and the return value of `model_fn` as the `model` parameter and returns inferences obtained from the model.
* `output_fn(prediction, accept)`: This function is invoked to serialize the return value from `predict_fn`, which is passed in as the `prediction` parameter, back to the SageMaker client in response to prediction requests.
`model_fn` is always required, but default implementations exist for the remaining functions. These default implementations deserialize a NumPy array, invoke the model's `__call__` method on the input data, and serialize a NumPy array back to the client.
This notebook relies on the default `input_fn`, `predict_fn`, and `output_fn` implementations. See the Chainer sentiment analysis notebook for an example of how one can implement these hosting functions.
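Conceptually, the serving container chains these four functions for every prediction request. The sketch below shows that flow with stand-in implementations — the function names mirror the hosting contract, but the bodies are purely illustrative:

```python
import json

def model_fn(model_dir):
    # Stand-in "model" that doubles its inputs
    return lambda xs: [2 * x for x in xs]

def input_fn(input_data, content_type):
    assert content_type == "application/json"
    return json.loads(input_data)

def predict_fn(input_data, model):
    return model(input_data)

def output_fn(prediction, accept):
    return json.dumps(prediction)

# End-to-end flow for one request:
model = model_fn("/opt/ml/model")
response = output_fn(predict_fn(input_fn("[1, 2, 3]", "application/json"), model),
                     "application/json")
print(response)  # [2, 4, 6]
```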
Please examine the script below, reproduced in its entirety. Training occurs behind the main guard, which prevents the training code from being run when the script is imported, and `model_fn` loads the model saved into `model_dir` during training.
The script uses a `chainermn` Communicator to distribute training to multiple nodes. The Communicator depends on MPI (Message Passing Interface), so the Chainer container running on SageMaker runs this script with `mpirun` if the Chainer Estimator specifies a `train_instance_count` of two or greater, or if `use_mpi` in the Chainer estimator is true.
By default, one process is created per GPU (on GPU instances), or one per host (on CPU instances, which are not recommended for this notebook).
For more on writing Chainer scripts to run on SageMaker, or for more on the Chainer container itself, please see the following repositories:
* For writing Chainer scripts to run on SageMaker: https://github.com/aws/sagemaker-python-sdk
* For more on the Chainer container and default hosting functions: https://github.com/aws/sagemaker-chainer-containers
```
!pygmentize 'src/chainer_cifar_vgg_distributed.py'
```
## Running the training script on SageMaker
To train a model with a Chainer script, we construct a ```Chainer``` estimator using the [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk). We pass in an `entry_point`, the name of a script that contains a couple of functions with certain signatures (`train` and `model_fn`), and a `source_dir`, a directory containing all code to run inside the Chainer container. This script will be run on SageMaker in a container that invokes these functions to train and load Chainer models.
The ```Chainer``` class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, and the training instance type. In this case we will run our training job on two `ml.p3.2xlarge` instances, but you may need to request a service limit increase on the number of training instances in order to train.
This script uses the `chainermn` package, which distributes training with MPI. Your script is run with `mpirun`, so a ChainerMN Communicator object can be used to distribute training. Arguments to `mpirun` are set to sensible defaults, but you can configure how your script is run in distributed mode. See the ```Chainer``` class documentation for more on configuring MPI.
```
from sagemaker.chainer.estimator import Chainer
chainer_estimator = Chainer(
    entry_point="chainer_cifar_vgg_distributed.py",
    source_dir="src",
    role=role,
    sagemaker_session=sagemaker_session,
    use_mpi=True,
    train_instance_count=2,
    train_instance_type="ml.p3.2xlarge",
    hyperparameters={"epochs": 30, "batch-size": 256},
)
chainer_estimator.fit({"train": train_input, "test": test_input})
```
Our Chainer script writes various artifacts, such as plots, to a directory `output_data_dir`, the contents of which SageMaker uploads to S3. Now we download and extract these artifacts.
```
from s3_util import retrieve_output_from_s3
chainer_training_job = chainer_estimator.latest_training_job.name
desc = sagemaker_session.sagemaker_client.describe_training_job(
    TrainingJobName=chainer_training_job
)
output_data = desc["ModelArtifacts"]["S3ModelArtifacts"].replace("model.tar.gz", "output.tar.gz")
retrieve_output_from_s3(output_data, "output/distributed_cifar")
```
These plots show the accuracy and loss over epochs:
```
from IPython.display import Image
from IPython.display import display
accuracy_graph = Image(filename="output/distributed_cifar/accuracy.png", width=800, height=800)
loss_graph = Image(filename="output/distributed_cifar/loss.png", width=800, height=800)
display(accuracy_graph, loss_graph)
```
## Deploying the Trained Model
After training, we use the Chainer estimator object to create and deploy a hosted prediction endpoint. We can use a CPU-based instance for inference (in this case an `ml.m4.xlarge`), even though we trained on GPU instances.
The predictor object returned by `deploy` lets us call the new endpoint and perform inference on our sample images.
```
predictor = chainer_estimator.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
```
### CIFAR10 sample images
We'll use these CIFAR10 sample images to test the service:
<img style="display: inline; height: 32px; margin: 0.25em" src="images/airplane1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/automobile1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/bird1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/cat1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/deer1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/dog1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/frog1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/horse1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/ship1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/truck1.png" />
## Predicting using SageMaker Endpoint
We batch the images together into a single NumPy array to obtain multiple inferences with a single prediction request.
```
from skimage import io
import numpy as np
def read_image(filename):
    img = io.imread(filename)
    img = np.array(img).transpose(2, 0, 1)
    img = np.expand_dims(img, axis=0)
    img = img.astype(np.float32)
    img *= 1.0 / 255.0
    img = img.reshape(3, 32, 32)
    return img

def read_images(filenames):
    return np.array([read_image(f) for f in filenames])
filenames = [
"images/airplane1.png",
"images/automobile1.png",
"images/bird1.png",
"images/cat1.png",
"images/deer1.png",
"images/dog1.png",
"images/frog1.png",
"images/horse1.png",
"images/ship1.png",
"images/truck1.png",
]
image_data = read_images(filenames)
```
The predictor runs inference on our input data and returns a list of predictions whose argmax gives the predicted label of the input data.
```
response = predictor.predict(image_data)
for i, prediction in enumerate(response):
    print("image {}: prediction: {}".format(i, prediction.argmax(axis=0)))
```
## Cleanup
After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
```
chainer_estimator.delete_endpoint()
```
```
try:
    from openmdao.utils.notebook_utils import notebook_mode
except ImportError:
    !python -m pip install openmdao[notebooks]
```
# NonlinearBlockGS
NonlinearBlockGS applies Block Gauss-Seidel (also known as fixed-point iteration) to the
components and subsystems in the system. This is mainly used to solve cyclic connections. You
should try this solver for systems that satisfy the following conditions:
1. System (or subsystem) contains a cycle.
2. System does not contain any implicit states, though subsystems may.
NonlinearBlockGS is a block solver, so you can specify different nonlinear solvers in the subsystems and they
will be utilized to solve the subsystem nonlinear problem.
Note that you may not know if you satisfy the second condition, so choosing a solver can be a trial-and-error proposition. If
NonlinearBlockGS doesn't work, then you will need to use [NewtonSolver](../../../_srcdocs/packages/solvers.nonlinear/newton).
Here, we choose NonlinearBlockGS to solve the Sellar problem, which has two components with a
cyclic dependency, has no implicit states, and works very well with Gauss-Seidel.
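Independent of OpenMDAO, the fixed-point idea is easy to see on a toy pair of coupled explicit equations: evaluate each one in turn, feeding the freshest values around the cycle, until nothing changes. This is a hand-rolled sketch of the concept, not OpenMDAO code:

```python
def gauss_seidel(maxiter=50, tol=1e-10):
    y1, y2 = 0.0, 0.0  # initial guess
    for _ in range(maxiter):
        y1_new = (y2 + 3.0) / 2.0      # "component d1": y1 depends on y2
        y2_new = (y1_new + 1.0) / 3.0  # "component d2": y2 uses the fresh y1
        if abs(y1_new - y1) < tol and abs(y2_new - y2) < tol:
            return y1_new, y2_new
        y1, y2 = y1_new, y2_new
    return y1, y2

print(gauss_seidel())  # converges to the coupled solution (2.0, 1.0)
```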
```
from openmdao.utils.notebook_utils import get_code
from myst_nb import glue
glue("code_src33", get_code("openmdao.test_suite.components.sellar.SellarDis1withDerivatives"), display=False)
```
:::{Admonition} `SellarDis1withDerivatives` class definition
:class: dropdown
{glue:}`code_src33`
:::
```
from openmdao.utils.notebook_utils import get_code
from myst_nb import glue
glue("code_src34", get_code("openmdao.test_suite.components.sellar.SellarDis2withDerivatives"), display=False)
```
:::{Admonition} `SellarDis2withDerivatives` class definition
:class: dropdown
{glue:}`code_src34`
:::
```
import numpy as np
import openmdao.api as om
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives
prob = om.Problem()
model = prob.model
model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])
model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                           z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])
model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])
model.nonlinear_solver = om.NonlinearBlockGS()
prob.setup()
prob.set_val('x', 1.)
prob.set_val('z', np.array([5.0, 2.0]))
prob.run_model()
print(prob.get_val('y1'))
print(prob.get_val('y2'))
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob.get_val('y1'), 25.58830273, .00001)
assert_near_equal(prob.get_val('y2'), 12.05848819, .00001)
```
This solver runs all of the subsystems each iteration, passing data along all connections
including the cyclic ones. After each iteration, the iteration count and the residual norm are
checked to see if termination has been satisfied.
You can control the termination criteria for the solver using the following options:
# NonlinearBlockGS Options
```
om.show_options_table("openmdao.solvers.nonlinear.nonlinear_block_gs.NonlinearBlockGS")
```
## NonlinearBlockGS Constructor
The call signature for the `NonlinearBlockGS` constructor is:
```{eval-rst}
.. automethod:: openmdao.solvers.nonlinear.nonlinear_block_gs.NonlinearBlockGS.__init__
:noindex:
```
## Aitken relaxation
This solver implements Aitken relaxation, as described in Algorithm 1 of this paper on [aerostructural design optimization](http://www.umich.edu/~mdolaboratory/pdf/Kenway2014a.pdf).
The relaxation is turned off by default, but it may help convergence for more tightly coupled models.
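OpenMDAO's vector-valued update follows the cited paper. As scalar intuition for why this kind of extrapolation helps, here is classic Aitken Δ² acceleration applied to the slowly converging fixed-point iteration x = cos(x) (an illustration of the idea only, not OpenMDAO's implementation):

```python
import math

def aitken_step(x0, x1, x2):
    """Extrapolate the limit of a linearly converging sequence."""
    denom = (x2 - x1) - (x1 - x0)
    return x2 if denom == 0 else x0 - (x1 - x0) ** 2 / denom

x = 0.5
for _ in range(5):
    # Take two plain fixed-point steps, then jump ahead with Aitken
    x0, x1, x2 = x, math.cos(x), math.cos(math.cos(x))
    x = aitken_step(x0, x1, x2)

print(round(x, 9))  # fixed point of cos(x), about 0.739085133
```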
## Residual Calculation
The `Unified Derivatives Equations` are formulated so that explicit equations (via `ExplicitComponent`) are also expressed
as implicit relationships, and their residual is also calculated in "apply_nonlinear", which runs the component a second time and
saves the difference in the output vector as the residual. However, this would require an extra call to `compute`, which is
inefficient for slower components. To eliminate the inefficiency of running the model twice every iteration, the NonlinearBlockGS
solver saves a copy of the output vector and uses that to calculate the residual without rerunning the model. This does require
a little more memory, so if you are solving a model where memory is more of a concern than execution time, you can set the
"use_apply_nonlinear" option to True to use the original formulation that calls "apply_nonlinear" on the subsystem.
## NonlinearBlockGS Option Examples
**maxiter**
`maxiter` lets you specify the maximum number of Gauss-Seidel iterations to apply. In this example, we
cut it back from the default, ten, down to two, so that it terminates a few iterations earlier and doesn't
reach the specified absolute or relative tolerance.
```
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives
prob = om.Problem()
model = prob.model
model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])
model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                           z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])
model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])
prob.setup()
nlbgs = model.nonlinear_solver = om.NonlinearBlockGS()
#basic test of number of iterations
nlbgs.options['maxiter'] = 1
prob.run_model()
print(model.nonlinear_solver._iter_count)
assert(model.nonlinear_solver._iter_count == 1)
nlbgs.options['maxiter'] = 5
prob.run_model()
print(model.nonlinear_solver._iter_count)
assert(model.nonlinear_solver._iter_count == 5)
#test of number of iterations AND solution after exit at maxiter
prob.set_val('x', 1.)
prob.set_val('z', np.array([5.0, 2.0]))
nlbgs.options['maxiter'] = 3
prob.set_solver_print()
prob.run_model()
print(prob.get_val('y1'))
print(prob.get_val('y2'))
print(model.nonlinear_solver._iter_count)
assert_near_equal(prob.get_val('y1'), 25.58914915, .00001)
assert_near_equal(prob.get_val('y2'), 12.05857185, .00001)
assert(model.nonlinear_solver._iter_count == 3)
```
**atol**
Here, we set the absolute tolerance to a looser value that will trigger an earlier termination. After
each iteration, the norm of the residuals is calculated one of two ways. If the "use_apply_nonlinear" option
is set to False (its default), then the norm is calculated by subtracting a cached previous value of the
outputs from the current value. If "use_apply_nonlinear" is True, then the norm is calculated by calling
apply_nonlinear on all of the subsystems. In this case, `ExplicitComponents` are executed a second time.
If this norm value is lower than the absolute tolerance `atol`, the iteration will terminate.
```
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives
prob = om.Problem()
model = prob.model
model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])
model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                           z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])
model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])
nlbgs = model.nonlinear_solver = om.NonlinearBlockGS()
nlbgs.options['atol'] = 1e-4
prob.setup()
prob.set_val('x', 1.)
prob.set_val('z', np.array([5.0, 2.0]))
prob.run_model()
print(prob.get_val('y1'))
print(prob.get_val('y2'))
assert_near_equal(prob.get_val('y1'), 25.5882856302, .00001)
assert_near_equal(prob.get_val('y2'), 12.05848819, .00001)
```
**rtol**
Here, we set the relative tolerance to a looser value that will trigger an earlier termination. After
each iteration, the norm of the residuals is calculated one of two ways. If the "use_apply_nonlinear" option
is set to False (its default), then the norm is calculated by subtracting a cached previous value of the
outputs from the current value. If "use_apply_nonlinear" is True, then the norm is calculated by calling
apply_nonlinear on all of the subsystems. In this case, `ExplicitComponents` are executed a second time.
If the ratio of the currently calculated norm to the initial residual norm is lower than the relative tolerance
`rtol`, the iteration will terminate.
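Putting the two tolerances together, the per-iteration stopping test is essentially the following (a sketch of the logic, not OpenMDAO's exact code):

```python
def converged(norm, norm0, atol=1e-10, rtol=1e-10):
    """Stop when the residual norm passes either the absolute or relative test."""
    return norm < atol or norm / norm0 < rtol

print(converged(norm=5e-4, norm0=10.0, atol=1e-4, rtol=1e-3))  # True (relative test)
print(converged(norm=5e-4, norm0=10.0, atol=1e-4, rtol=1e-6))  # False
```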
```
from openmdao.utils.notebook_utils import get_code
from myst_nb import glue
glue("code_src35", get_code("openmdao.test_suite.components.sellar.SellarDerivatives"), display=False)
```
:::{Admonition} `SellarDerivatives` class definition
:class: dropdown
{glue:}`code_src35`
:::
```
from openmdao.test_suite.components.sellar import SellarDis1withDerivatives, SellarDis2withDerivatives, SellarDerivatives
prob = om.Problem()
model = prob.model
model.add_subsystem('d1', SellarDis1withDerivatives(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2withDerivatives(), promotes=['z', 'y1', 'y2'])
model.add_subsystem('obj_cmp', om.ExecComp('obj = x**2 + z[1] + y1 + exp(-y2)',
                                           z=np.array([0.0, 0.0]), x=0.0),
                    promotes=['obj', 'x', 'z', 'y1', 'y2'])
model.add_subsystem('con_cmp1', om.ExecComp('con1 = 3.16 - y1'), promotes=['con1', 'y1'])
model.add_subsystem('con_cmp2', om.ExecComp('con2 = y2 - 24.0'), promotes=['con2', 'y2'])
nlbgs = model.nonlinear_solver = om.NonlinearBlockGS()
nlbgs.options['rtol'] = 1e-3
prob.setup()
prob.set_val('x', 1.)
prob.set_val('z', np.array([5.0, 2.0]))
prob.run_model()
print(prob.get_val('y1'))
print(prob.get_val('y2'))
assert_near_equal(prob.get_val('y1'), 25.5883027, .00001)
assert_near_equal(prob.get_val('y2'), 12.05848819, .00001)
```
# AVL Tree
```
from Module.classCollection import Queue
class AVLNode:
def __init__(self, data):
self.data = data
self.leftChild = None
self.rightChild = None
self.height = 1
def preorderTraversal(rootNode):
if not rootNode:
return
print(rootNode.data)
preorderTraversal(rootNode.leftChild)
preorderTraversal(rootNode.rightChild)
def inorderTraversal(rootNode):
if not rootNode:
return
inorderTraversal(rootNode.leftChild)
print(root.data)
inorderTraversal(rootNode.rightChild)
def postorderTraversal(rootNode):
if rootNode == None:
return
postorderTraversal(rootNode.leftChild)
postorderTraversal(rootNode.rightChild)
print(rootNode.data)
def levelorderTraversal(rootNode):
if rootNode == None:
return
customQueue = Queue()
customQueue.enqueue(rootNode)
while customQueue.isEmpty() is not True:
tempNode = customQueue.dequeue()
print(tempNode.value.data)
if tempNode.value.leftChild is not None:
customQueue.enqueue(tempNode.value.leftChild)
if tempNode.value.rightChild is not None:
customQueue.enqueue(tempNode.value.rightChild)
def searchNode(rootNode, nodeValue):
if rootNode.data == nodeValue:
print("The value is found")
elif nodeValue < rootNode.data:
searchNode(rootNode.leftChild, nodeValue)
else:
searchNode(rootNode.rightChild, nodeValue)
def rightRotate(disbalanceNode):
newRoot = disbalanceNode.leftChild
disbalanceNode.leftChild = disbalanceNode.leftChild.rightChild
newRoot.leftChild = disbalanceNode
disbalanceNode.height = 1 + max(getHeight(disbalanceNode.leftChild), getHeight(disbalanceNode.rightChild))
newRoot.height = 1 + max(getHeight(newRoot.leftChild), getHeight(newRoot.rightChild))
return newRoot
def leftRotate(disbalanceNode):
newRoot = disbalanceNode.rightChild
disbalanceNode.rightChild = disbalanceNode.rightChild.leftChild
newRoot.leftChild = disbalanceNode
disbalanceNode.height = 1 + max(getHeight(disbalanceNode.leftChild), getHeight(disbalanceNode.rightChild))
newRoot.height = 1 + max(getHeight(newRoot.leftChild), getHeight(newRoot.rightChild))
return newRoot
def getHeight(rootNode):
if not rootNode:
return 0
return rootNode.height
def getBalance(rootNode):
if not rootNode:
return 0
return getHeight(rootNode.leftChild) - getHeight(rootNode.rightChild)
def insertNode(rootNode, nodeValue):
# (0) put the value first
if not rootNode:
return AVLNode(nodeValue)
# (1) No need to rotate
elif nodeValue <= rootNode.data:
rootNode.leftChild = insertNode(rootNode.leftChild, nodeValue)
else:
rootNode.rightChild = insertNode(rootNode.rightChild, nodeValue)
rootNode.height = 1+ max(getHeight(rootNode.leftChild), getHeight(rootNode.rightChild))
balance = getBalance(rootNode)
# LL
if balance > 1 and nodeValue < rootNode.leftChild.data:
return rightRotate(rootNode)
# LR
if balance > 1 and nodeValue > rootNode.leftChild.data:
rootNode.leftChild = leftRotate(rootNode.leftChild)
return rightRotate(rootNode)
# RR
if balance < -1 and nodeValue > rootNode.rightChild.data:
return leftRotate(rootNode)
# RL
if balance < -1 and nodeValue < rootNode.rightChild.data:
rootNode.rightChild = rightRotate(rootNode.rightChild)
return leftRotate(rootNode)
return rootNode
def getMinValueNode(rootNode):
if rootNode is None or rootNode.leftChild is None:
return rootNode
return getMinValueNode(rootNode.leftChild)
def deleteNode(rootNode, nodeValue):
if not rootNode:
return rootNode
elif nodeValue < rootNode.data:
rootNode.leftChild = deleteNode(rootNode.leftChild, nodeValue)
elif nodeValue > rootNode.data:
rootNode.rightChild = deleteNode(rootNode.rightChild, nodeValue)
else:
if rootNode.leftChild is None:
temp = rootNode.rightChild
rootNode = None
return temp
elif rootNode.rightChild is None:
temp = rootNode.leftChild
rootNode = None
return temp
temp = getMinValueNode(rootNode.rightChild)
rootNode.data = temp.data
rootNode.rightChild = deleteNode(rootNode.rightChild, temp.data)
rootNode.height = 1+max(getHeight(rootNode.leftChild), getHeight(rootNode.rightChild))
balance = getBalance(rootNode)
# LL
if balance > 1 and getBalance(rootNode.leftChild)>0:
return rightRotate(rootNode)
# LR
if balance > 1 and getBalance(rootNode.leftChild)<0:
rootNode.leftChild = leftRotate(rootNode.leftChild)
return rightRotate(rootNode)
#RR
if balance < -1 and getBalance(rootNode.rightChild)<0:
return leftRotate(rootNode)
# RL
if balance < -1 and getBalance(rootNode.rightChild)>0:
rootNode.rightChild = rightRotate(rootNode.rightChild)
return leftRotate(rootNode)
return rootNode
def deleteAVL(rootNode):
rootNode.data = None
rootNode.leftChild = None
rootNode.rightChild = None
return "AVL has been successfully deleted"
newAVL = AVLNode(5)
print("----1.preorder----")
print("----2.inorder----")
print("----3.postorder----")
print("----4.levelorder----")
print("----5.Insertation----")
newAVL = insertNode(newAVL, 10)
newAVL = insertNode(newAVL, 15)
newAVL = insertNode(newAVL, 20)
levelorderTraversal(newAVL)
print("----6. Deletion----")
newAVL = deleteNode(newAVL, 15)
levelorderTraversal(newAVL)
print("----7. Delete AVL----")
print(deleteAVL(newAVL))
levelorderTraversal(newAVL)
```
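The insertion logic above can be exercised with a quick, self-contained sanity check. The snippet below condenses the same node class and rotation helpers (names shortened here so the cell runs on its own; distinct keys assumed) and verifies that inserting a sorted sequence, the worst case for a plain BST, keeps every balance factor in {-1, 0, 1}:

```python
# Condensed, self-contained sketch of the AVL insert logic above,
# used to verify the balance invariant after sequential inserts.
class Node:
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None
        self.height = 1

def height(n):
    return n.height if n else 0

def balance(n):
    return height(n.left) - height(n.right) if n else 0

def rotate_right(z):
    y = z.left
    z.left = y.right
    y.right = z
    z.height = 1 + max(height(z.left), height(z.right))
    y.height = 1 + max(height(y.left), height(y.right))
    return y

def rotate_left(z):
    y = z.right
    z.right = y.left
    y.left = z
    z.height = 1 + max(height(z.left), height(z.right))
    y.height = 1 + max(height(y.left), height(y.right))
    return y

def insert(root, value):
    if root is None:
        return Node(value)
    if value <= root.data:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    root.height = 1 + max(height(root.left), height(root.right))
    b = balance(root)
    if b > 1 and value < root.left.data:        # LL
        return rotate_right(root)
    if b > 1 and value > root.left.data:        # LR
        root.left = rotate_left(root.left)
        return rotate_right(root)
    if b < -1 and value > root.right.data:      # RR
        return rotate_left(root)
    if b < -1 and value < root.right.data:      # RL
        root.right = rotate_right(root.right)
        return rotate_left(root)
    return root

def check(n):
    # Every subtree must satisfy |balance| <= 1.
    if n is None:
        return True
    return abs(balance(n)) <= 1 and check(n.left) and check(n.right)

root = None
for v in range(32):   # sorted input: worst case for an unbalanced BST
    root = insert(root, v)

print(check(root), height(root))
```

A plain BST fed the same sorted keys would degenerate into a 32-deep linked list; any AVL tree holding 32 nodes has height exactly 6.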
# DA320 Assignment 7: Mongo Charts
Jon Kaimmer
DA320
Winter2022
### Introduction
Let's import our chirp data and then chart it.
```
#IMPORTS
import pymongo
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import json as json
import plotly.express as px
# import warnings
# warnings.filterwarnings('ignore') #Ignore the seaborn warnings...
#METHODS
def connectToMongoDB():
with open(credentialLocation, 'r') as myFile: # open a separate file that stores credentials in JSON format
data = myFile.read() #read file into memory
credentialDict = json.loads(data) #parse json file into a python dictionary
return(credentialDict['MONGO']['mDBconnectionString'])
#FIELDS
credentialLocation = r"C:\Users\\jonat\\OneDrive\Documents\GitHub\\DA320\credentials.json"
sns.set(rc = {'figure.figsize':(40,8)})
```
### Read MongoDB connection string from my credentials.json file
```
MONGOconnectionString = connectToMongoDB()
client = pymongo.MongoClient(MONGOconnectionString)
db = client.admin
serverStatusResult=db.command('serverStatus')
#print(serverStatusResult)
```
### Query MongoDB
```
db = client['MoviesDB'] # <- MoviesDB is the database within the Mongo cluster
chirpCollection = db['movies'] # <- movies is the collection that holds the chirps
query = {'comment' : 'I hate ice cream'}
print(chirpCollection.find_one(query))
```
### Create a pipeline: filter chirps with at least 10 likes, tag each with a subject and sentiment, then group by subject, year and month.
```
mongoPipeline = [
{
'$match': { 'likes': { '$gte': 10 } }
}, {
'$addFields':
{
'Year': {'$toInt': {'$substr': ['$date', 0, 4]}},
'Month': {'$toInt': {'$substr': ['$date', 5, 2]}},
'Day': {'$toInt': {'$substr': ['$date', 8, 2]}}
}
}, {
'$set':
{
'subject': {
'$switch': {
'branches':
[
{'case': {'$gte': [{ '$indexOfCP': [ '$comment', 'hiking'] }, 0] }, 'then': 'Hiking'},
{'case': {'$gte': [{'$indexOfCP': [ '$comment', 'camping'] }, 0] }, 'then': 'Camping'},
{'case': {'$gte': [{'$indexOfCP': ['$comment', 'ice cream']}, 0] }, 'then': 'Ice cream' },
{'case': {'$gte': [{'$indexOfCP': ['$comment', 'tacos']}, 0] }, 'then': 'Tacos'},
{'case': {'$gte': [{'$indexOfCP': [ '$comment', 'walks on the beach' ] }, 0]}, 'then': 'Walks on the beach'},
{'case': { '$gte': [ {'$indexOfCP': [ '$comment', 'skiing'] }, 0]}, 'then': 'Skiing' }
],'default': 'DID NOT MATCH'
}
}
}
}, {
'$set': {
'sentiment': {
'$switch': {
'branches':
[
{'case': {'$gte': [{'$indexOfCP': ['$comment', 'I love'] }, 0] }, 'then': 1},
{'case': {'$gte': [ {'$indexOfCP': ['$comment', 'Maybe I']}, 0 ]}, 'then': 0.3},
{'case': {'$gte': [{'$indexOfCP': ['$comment', 'I like']}, 0] }, 'then': 0.6},
{'case': {'$gte': [{'$indexOfCP': ['$comment', 'I think']}, 0]}, 'then': 0.1},
{'case': {'$gte': [{'$indexOfCP': ['$comment', 'I hate']}, 0]}, 'then': -0.6},
{'case': {'$gte': [{'$indexOfCP': ['$comment', 'really hate']}, 0]}, 'then': -1}
],'default': 'DID NOT MATCH'
}
}
}
}, {
'$group': {
'_id': {
'subject': '$subject',
'year': '$Year',
'month': '$Month'
},
'chirpCount': {'$sum': 1},
'averageSentiment': {'$avg': '$sentiment'},
'chirps': {
'$push': {
'name': '$name',
'comment': '$comment',
'sentiment': '$sentiment',
'location': '$location'
}
}
}
}
]
results = chirpCollection.aggregate(mongoPipeline)
```
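Before running a pipeline server-side it can help to prototype the same transformation locally. The sketch below mirrors the `$match`, `$addFields`, `$set` and `$switch`/`$group` stages in pandas on a few made-up chirp records (field names match the pipeline above; a numeric default of 0.0 stands in for the pipeline's 'DID NOT MATCH' string so the average still works):

```python
import pandas as pd

# Toy chirps standing in for the MongoDB collection (hypothetical records).
chirps = pd.DataFrame([
    {'date': '2021-03-02', 'likes': 12, 'comment': 'I love hiking'},
    {'date': '2021-03-15', 'likes': 40, 'comment': 'I hate ice cream'},
    {'date': '2021-04-01', 'likes': 3,  'comment': 'I like tacos'},
    {'date': '2021-04-09', 'likes': 25, 'comment': 'Maybe I like camping'},
])

subjects = {'hiking': 'Hiking', 'camping': 'Camping', 'ice cream': 'Ice cream',
            'tacos': 'Tacos', 'walks on the beach': 'Walks on the beach',
            'skiing': 'Skiing'}
sentiments = {'I love': 1, 'Maybe I': 0.3, 'I like': 0.6,
              'I think': 0.1, 'I hate': -0.6, 'really hate': -1}

def first_match(comment, mapping, default):
    # Like $switch: the first matching branch wins (dicts preserve order).
    for needle, label in mapping.items():
        if needle in comment:
            return label
    return default

df = chirps[chirps['likes'] >= 10].copy()        # $match
df['Year'] = df['date'].str[0:4].astype(int)     # $addFields
df['Month'] = df['date'].str[5:7].astype(int)
df['subject'] = df['comment'].apply(first_match, args=(subjects, 'DID NOT MATCH'))
df['sentiment'] = df['comment'].apply(first_match, args=(sentiments, 0.0))

grouped = (df.groupby(['subject', 'Year', 'Month'])  # $group
             .agg(chirpCount=('sentiment', 'size'),
                  averageSentiment=('sentiment', 'mean'))
             .reset_index())
print(grouped)
```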
### We then need to clean the data coming out of our data pipeline
- First lets break out the '_id' JSON Object into their own columns.
- Then we will rename those columns and reindex them.
```
### Normalize data using pandas
#
# This data has JSON objects nested within it. To start, we need to break out the ['subject', 'year', 'month'] fields that are nested behind '_id'. Basically, I had created a multilayered key for my _id index in MongoDB, and I now need to break that out into a long-form data structure.
# We can do that with pandas' built-in .json_normalize function.
# Note that the _id column is a JSON object while the chirps column is a JSON array.
df = pd.json_normalize(list(results), sep='>') # materialise the cursor before normalizing
df
# We want to rename these three columns so that when we chart the data below we can use "dot notation" to access them; a period in a column name causes issues.
# _id>subject -> subject
# _id>year -> year
# _id>month -> month
df = df.rename( columns =
{
'_id>subject':'subject',
'_id>year':'year',
'_id>month':'month',
}
)
# And now let's reorder our columns using DataFrame.reindex
df = df.reindex(columns=['subject', 'year', 'month', 'chirpCount', 'averageSentiment', 'chirps'])
df
```
### Better. Now we can graph our Data
```
fig = px.bar(df, x='month', y='chirpCount', facet_col='subject')
fig.show()
fig = px.scatter(df, x='month', y='averageSentiment', trendline='ols', title='Canadians overall sentiment towards things they chose to Chirp about')
fig.update_traces(
line=dict(width=3, color='gray')
)
fig.show()
```
```
#remove cell visibility
from IPython.display import HTML
tag = HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide()
} else {
$('div.input').show()
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
Toggle code visibility <a href="javascript:code_toggle()">here</a>.''')
display(tag)
%matplotlib inline
import control
import numpy
import sympy as sym
from IPython.display import display, Markdown
import ipywidgets as widgets
import matplotlib.pyplot as plt
#print a matrix latex-like
def bmatrix(a):
"""Returns a LaTeX bmatrix - by Damir Arbula (ICCT project)
:a: numpy array
:returns: LaTeX bmatrix as a string
"""
if len(a.shape) > 2:
raise ValueError('bmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{bmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{bmatrix}']
return '\n'.join(rv)
# Display formatted matrix:
def vmatrix(a):
if len(a.shape) > 2:
raise ValueError('vmatrix can at most display two dimensions')
lines = str(a).replace('[', '').replace(']', '').splitlines()
rv = [r'\begin{vmatrix}']
rv += [' ' + ' & '.join(l.split()) + r'\\' for l in lines]
rv += [r'\end{vmatrix}']
return '\n'.join(rv)
#matrixWidget is a matrix looking widget built with a VBox of HBox(es) that returns a numPy array as value !
class matrixWidget(widgets.VBox):
def updateM(self,change):
for irow in range(0,self.n):
for icol in range(0,self.m):
self.M_[irow,icol] = self.children[irow].children[icol].value
#print(self.M_[irow,icol])
self.value = self.M_
def dummychangecallback(self,change):
pass
def __init__(self,n,m):
self.n = n
self.m = m
self.M_ = numpy.matrix(numpy.zeros((self.n,self.m)))
self.value = self.M_
widgets.VBox.__init__(self,
children = [
widgets.HBox(children =
[widgets.FloatText(value=0.0, layout=widgets.Layout(width='90px')) for i in range(m)]
)
for j in range(n)
])
#fill in widgets and tell interact to call updateM each time a children changes value
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
self.children[irow].children[icol].observe(self.updateM, names='value')
#value = Unicode('example@example.com', help="The email value.").tag(sync=True)
self.observe(self.updateM, names='value', type= 'All')
def setM(self, newM):
#disable callbacks, change values, and reenable
self.unobserve(self.updateM, names='value', type= 'All')
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].unobserve(self.updateM, names='value')
self.M_ = newM
self.value = self.M_
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].value = self.M_[irow,icol]
for irow in range(0,self.n):
for icol in range(0,self.m):
self.children[irow].children[icol].observe(self.updateM, names='value')
self.observe(self.updateM, names='value', type= 'All')
#self.children[irow].children[icol].observe(self.updateM, names='value')
# Overload class for state-space systems that does NOT remove "useless" states
class sss(control.StateSpace):
def __init__(self,*args):
#call base class init constructor
control.StateSpace.__init__(self,*args)
#disable function below in base class
def _remove_useless_states(self):
pass
```
## Aircraft path control
The rotational dynamics of an aircraft moving on the ground can be represented as:
$$
J_z\ddot{\psi} = bF_1\delta + F_2\dot{\psi} \, ,
$$
where $J_z = 11067000$ kg$\text{m}^2$, $b = 15$, $F_1 = 35000000$ Nm, $F_2 = 500000$ kg$\text{m}^2$/$\text{s}$, $\psi$ is the rotation angle (in radians), i.e. the yaw angle about the vertical axis, and $\delta$ is the steering angle of the front wheels (in radians). When the aircraft follows a straight line with longitudinal linear velocity $V$ (in m/s), the lateral velocity of the aircraft $V_y$ (in m/s) is approximately linearly proportional to the yaw angle: $V_y = \dot{p_y} = V\psi$.
The goal is to design a regulator for the lateral position of the aircraft $p_y$, with the longitudinal velocity $V$ set to 35 km/h, using the front-wheel steering angle $\delta$ as the system input, while meeting the following specifications:
- settling time for a 5% tolerance band shorter than 4 seconds;
- zero steady-state error in response to a change in the desired lateral position;
- no overshoot at all, or minimal overshoot;
- the steering angle does not exceed $\pm8$ degrees when tracking a lateral position change of 5 meters.
The dynamic equations in state-space form are:
\begin{cases}
\dot{x} = \begin{bmatrix} \frac{F_2}{J_z} & 0 & 0 \\ 1 & 0 & 0 \\ 0 & V & 0 \end{bmatrix}x + \begin{bmatrix} \frac{bF_1}{J_z} \\ 0 \\ 0 \end{bmatrix}u \\
y = \begin{bmatrix} 0 & 0 & 1 \end{bmatrix}x \, ,
\end{cases}
where $x=\begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}^T = \begin{bmatrix} \dot{\psi} & \psi & p_y \end{bmatrix}^T$ and $u=\delta$.
The poles of the system are $0$, $0$ and $\frac{F_2}{J_z} \simeq 0.045$, so the system is unstable.
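This claim is easy to verify numerically: the $3\times 3$ state matrix is lower triangular, so its eigenvalues are exactly its diagonal entries. A quick NumPy check using the constants from the text:

```python
import numpy as np

F1, F2, b, Jz = 35_000_000, 500_000, 15, 11_067_000
V = 35 / 3.6  # 35 km/h in m/s

A = np.array([[F2 / Jz, 0, 0],
              [1,       0, 0],
              [0,       V, 0]])

poles = np.linalg.eigvals(A)
print(np.sort(poles.real))  # two poles at the origin, one in the right half-plane
```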
### Regulator design
#### Controller design
To meet the zero steady-state error requirement, we add a new state:
$$
\dot{x_4} = p_y-y_d = x_3 - y_d
$$
The resulting augmented system is then:
\begin{cases}
\dot{x_a} = \begin{bmatrix} \frac{F_2}{J_z} & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & V & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}x_a + \begin{bmatrix} \frac{bF_1}{J_z} & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} u \\ y_d \end{bmatrix} \\
y_a = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}x_a,
\end{cases}
where $x_a = \begin{bmatrix} x_1 & x_2 & x_3 & x_4 \end{bmatrix}^T$ and a second output is added to preserve the observability of the system. The system remains controllable through the input $u$, so we can design state feedback using this input. One possible solution is to place all the poles at $-2$.
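A gain that places all four poles at $-2$ can be cross-checked with Ackermann's formula in plain NumPy (the interactive cells below rely on `control.acker`; this is only an independent sketch using the constants from the text):

```python
import numpy as np

F1, F2, b, Jz = 35_000_000, 500_000, 15, 11_067_000
V = 35 / 3.6

A = np.array([[F2 / Jz, 0, 0, 0],
              [1,       0, 0, 0],
              [0,       V, 0, 0],
              [0,       0, 1, 0]])
B = np.array([[b * F1 / Jz], [0.0], [0.0], [0.0]])

# Controllability matrix [B, AB, A^2 B, A^3 B]
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])

# Desired characteristic polynomial (s + 2)^4 = s^4 + 8s^3 + 24s^2 + 32s + 16,
# evaluated at A
phi = (np.linalg.matrix_power(A, 4) + 8 * np.linalg.matrix_power(A, 3)
       + 24 * np.linalg.matrix_power(A, 2) + 32 * A + 16 * np.eye(4))

# Ackermann: K = [0 0 0 1] C^{-1} phi(A)
K = np.array([[0, 0, 0, 1.0]]) @ np.linalg.inv(C) @ phi
closed_poles = np.linalg.eigvals(A - B @ K)
print(K)
print(closed_poles)
```

The closed-loop eigenvalues cluster around $-2$; a fourfold pole is numerically sensitive, so small deviations from exactly $-2$ are expected.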
#### Observer design
Even though the states $x_3$ and $x_4$ can be obtained from measurements, and we only need to estimate $x_1$ and $x_2$, it is convenient to work with the 4x4 system and design a 4th-order observer with all poles at $-10$.
### How to use this interactive example?
- Check whether the requirements are met when there are errors in the initial state estimate.
```
# Preparatory cell
X0 = numpy.matrix('0.0; 0.0; 0.0; 0.0')
K = numpy.matrix([0,0,0,0])
L = numpy.matrix([[0,0],[0,0],[0,0],[0,0]])
X0w = matrixWidget(4,1)
X0w.setM(X0)
Kw = matrixWidget(1,4)
Kw.setM(K)
Lw = matrixWidget(4,2)
Lw.setM(L)
eig1c = matrixWidget(1,1)
eig2c = matrixWidget(2,1)
eig3c = matrixWidget(1,1)
eig4c = matrixWidget(2,1)
eig1c.setM(numpy.matrix([-2.]))
eig2c.setM(numpy.matrix([[-2.],[-0.]]))
eig3c.setM(numpy.matrix([-2.]))
eig4c.setM(numpy.matrix([[-2.],[-0.]]))
eig1o = matrixWidget(1,1)
eig2o = matrixWidget(2,1)
eig3o = matrixWidget(1,1)
eig4o = matrixWidget(2,1)
eig1o.setM(numpy.matrix([-10.]))
eig2o.setM(numpy.matrix([[-10.],[0.]]))
eig3o.setM(numpy.matrix([-10.]))
eig4o.setM(numpy.matrix([[-10.],[0.]]))
# Misc
#create dummy widget
DW = widgets.FloatText(layout=widgets.Layout(width='0px', height='0px'))
#create button widget
START = widgets.Button(
description='Test',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Test',
icon='check'
)
def on_start_button_clicked(b):
#This is a workaround to have intreactive_output call the callback:
# force the value of the dummy widget to change
if DW.value> 0 :
DW.value = -1
else:
DW.value = 1
pass
START.on_click(on_start_button_clicked)
# Define type of method
selm = widgets.Dropdown(
options= ['Postavi K i L', 'Postavi svojstvene vrijednosti'],
value= 'Postavi svojstvene vrijednosti',
description='',
disabled=False
)
# Define the number of complex eigenvalues
selec = widgets.Dropdown(
options= ['0 kompleksnih svojstvenih vrijednosti', '2 kompleksne svojstvene vrijednosti', '4 kompleksne svojstvene vrijednosti'],
value= '0 kompleksnih svojstvenih vrijednosti',
description='Controller eigenvalues:',
disabled=False
)
seleo = widgets.Dropdown(
options= ['0 kompleksnih svojstvenih vrijednosti', '2 kompleksne svojstvene vrijednosti', '4 kompleksne svojstvene vrijednosti'],
value= '0 kompleksnih svojstvenih vrijednosti',
description='Observer eigenvalues:',
disabled=False
)
#define type of ipout
selu = widgets.Dropdown(
options=['impuls', 'step', 'sinus', 'Pravokutni val'],
value='step',
description='Reference signal type:',
style = {'description_width': 'initial'},
disabled=False
)
# Define the values of the input
u = widgets.FloatSlider(
value=5,
min=0,
max=10,
step=0.1,
description='Reference signal [m]:',
style = {'description_width': 'initial'},
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
v = widgets.FloatSlider(
value=9.72,
min=1,
max=20,
step=0.1,
description=r'$V$ [m/s]:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.2f',
)
period = widgets.FloatSlider(
value=0.5,
min=0.001,
max=10,
step=0.001,
description='Period: ',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.3f',
)
simTime = widgets.FloatText(
value=5,
description='',
disabled=False
)
# Support functions
def eigen_choice(selec,seleo):
if selec == '0 kompleksnih svojstvenih vrijednosti':
eig1c.children[0].children[0].disabled = False
eig2c.children[1].children[0].disabled = True
eig3c.children[0].children[0].disabled = False
eig4c.children[0].children[0].disabled = False
eig4c.children[1].children[0].disabled = True
eigc = 0
if seleo == '0 kompleksnih svojstvenih vrijednosti':
eig1o.children[0].children[0].disabled = False
eig2o.children[1].children[0].disabled = True
eig3o.children[0].children[0].disabled = False
eig4o.children[0].children[0].disabled = False
eig4o.children[1].children[0].disabled = True
eigo = 0
if selec == '2 kompleksne svojstvene vrijednosti':
eig1c.children[0].children[0].disabled = False
eig2c.children[1].children[0].disabled = False
eig3c.children[0].children[0].disabled = False
eig4c.children[0].children[0].disabled = True
eig4c.children[1].children[0].disabled = True
eigc = 2
if seleo == '2 kompleksne svojstvene vrijednosti':
eig1o.children[0].children[0].disabled = False
eig2o.children[1].children[0].disabled = False
eig3o.children[0].children[0].disabled = False
eig4o.children[0].children[0].disabled = True
eig4o.children[1].children[0].disabled = True
eigo = 2
if selec == '4 kompleksne svojstvene vrijednosti':
eig1c.children[0].children[0].disabled = True
eig2c.children[1].children[0].disabled = False
eig3c.children[0].children[0].disabled = True
eig4c.children[0].children[0].disabled = False
eig4c.children[1].children[0].disabled = False
eigc = 4
if seleo == '4 kompleksne svojstvene vrijednosti':
eig1o.children[0].children[0].disabled = True
eig2o.children[1].children[0].disabled = False
eig3o.children[0].children[0].disabled = True
eig4o.children[0].children[0].disabled = False
eig4o.children[1].children[0].disabled = False
eigo = 4
return eigc, eigo
def method_choice(selm):
if selm == 'Postavi K i L':
method = 1
selec.disabled = True
seleo.disabled = True
if selm == 'Postavi svojstvene vrijednosti':
method = 2
selec.disabled = False
seleo.disabled = False
return method
F1 = 35000000
F2 = 500000
b = 15
V = 35/3.6
Jz = 11067000
A = numpy.matrix([[F2/Jz, 0, 0, 0],
[1, 0, 0, 0],
[0, V, 0, 0],
[0, 0, 1, 0]])
Bu = numpy.matrix([[b*F1/Jz],[0],[0],[0]])
Bref = numpy.matrix([[0],[0],[0],[-1]])
C = numpy.matrix([[0,0,1,0],[0,0,0,1]])
def main_callback2(v, X0w, K, L, eig1c, eig2c, eig3c, eig4c, eig1o, eig2o, eig3o, eig4o, u, period, selm, selec, seleo, selu, simTime, DW):
eigc, eigo = eigen_choice(selec,seleo)
method = method_choice(selm)
A = numpy.matrix([[F2/Jz, 0, 0, 0],
[1, 0, 0, 0],
[0, v, 0, 0],
[0, 0, 1, 0]])
if method == 1:
solc = numpy.linalg.eig(A-Bu*K)
solo = numpy.linalg.eig(A-L*C)
if method == 2:
#for better numerical stability of place
if eig1c[0,0]==eig2c[0,0] or eig1c[0,0]==eig3c[0,0] or eig1c[0,0]==eig4c[0,0]:
eig1c[0,0] *= 1.01
if eig2c[0,0]==eig3c[0,0] or eig2c[0,0]==eig4c[0,0]:
eig3c[0,0] *= 1.015
if eig1o[0,0]==eig2o[0,0] or eig1o[0,0]==eig3o[0,0] or eig1o[0,0]==eig4o[0,0]:
eig1o[0,0] *= 1.01
if eig2o[0,0]==eig3o[0,0] or eig2o[0,0]==eig4o[0,0]:
eig3o[0,0] *= 1.015
if eigc == 0:
K = control.acker(A, Bu, [eig1c[0,0], eig2c[0,0], eig3c[0,0], eig4c[0,0]])
Kw.setM(K)
if eigc == 2:
# numpy.complex was removed in NumPy 1.24; the builtin complex works the same here
K = control.acker(A, Bu, [eig3c[0,0],
eig1c[0,0],
complex(eig2c[0,0], eig2c[1,0]),
complex(eig2c[0,0],-eig2c[1,0])])
Kw.setM(K)
if eigc == 4:
K = control.acker(A, Bu, [complex(eig4c[0,0], eig4c[1,0]),
complex(eig4c[0,0],-eig4c[1,0]),
complex(eig2c[0,0], eig2c[1,0]),
complex(eig2c[0,0],-eig2c[1,0])])
Kw.setM(K)
if eigo == 0:
L = control.place(A.T, C.T, [eig1o[0,0], eig2o[0,0], eig3o[0,0], eig4o[0,0]]).T
Lw.setM(L)
if eigo == 2:
L = control.place(A.T, C.T, [eig3o[0,0],
eig1o[0,0],
complex(eig2o[0,0], eig2o[1,0]),
complex(eig2o[0,0],-eig2o[1,0])]).T
Lw.setM(L)
if eigo == 4:
L = control.place(A.T, C.T, [complex(eig4o[0,0], eig4o[1,0]),
complex(eig4o[0,0],-eig4o[1,0]),
complex(eig2o[0,0], eig2o[1,0]),
complex(eig2o[0,0],-eig2o[1,0])]).T
Lw.setM(L)
sys = sss(A,numpy.hstack((Bu,Bref)),[[0,0,1,0],[0,0,0,1],[0,0,0,0]],[[0,0],[0,0],[0,1]])
syse = sss(A-L*C,numpy.hstack((Bu,Bref,L)),numpy.eye(4),numpy.zeros((4,4)))
sysc = sss(0,[0,0,0,0],0,-K)
sys_append = control.append(sys,syse,sysc)
# Wire the plant, observer and state-feedback blocks into the closed loop
sys_CL = control.connect(sys_append,
[[1,8],[3,8],[5,1],[6,2],[7,4],[8,5],[9,6],[10,7],[4,3]],
[2],
[1,8])
X0w1 = numpy.zeros((8,1))
X0w1[4,0] = X0w[0,0]
X0w1[5,0] = X0w[1,0]
X0w1[6,0] = X0w[2,0]
X0w1[7,0] = X0w[3,0]
if simTime != 0:
T = numpy.linspace(0, simTime, 10000)
else:
T = numpy.linspace(0, 1, 10000)
if selu == 'impuls': #selu
U = [0 for t in range(0,len(T))]
U[0] = u
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'step':
U = [u for t in range(0,len(T))]
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'sinus':
U = u*numpy.sin(2*numpy.pi/period*T)
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
if selu == 'Pravokutni val':
U = u*numpy.sign(numpy.sin(2*numpy.pi/period*T))
T, yout, xout = control.forced_response(sys_CL,T,U,X0w1)
try:
step_info_dict = control.step_info(sys_CL[0,0],SettlingTimeThreshold=0.05,T=T)
print('Step info: \n\tRise time =',step_info_dict['RiseTime'],'\n\tSettling time (5%) =',step_info_dict['SettlingTime'],'\n\tOvershoot (%) =',step_info_dict['Overshoot'])
print('Maximum u value (% of 8 deg) =', max(abs(yout[1]))/(8*numpy.pi/180)*100)
except:
print("Error while computing step info.")
fig = plt.figure(num='Simulation1', figsize=(14,12))
fig.add_subplot(221)
plt.title('Output response')
plt.ylabel('Output')
plt.plot(T,yout[0],T,U,'r--')
plt.xlabel('$t$ [s]')
plt.legend(['$y$','Reference signal'])
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.grid()
fig.add_subplot(222)
plt.title('Input')
plt.ylabel('$u$ [deg]')
plt.plot(T,yout[1]*180/numpy.pi)
plt.plot(T,[8 for i in range(len(T))],'r--')
plt.plot(T,[-8 for i in range(len(T))],'r--')
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.grid()
fig.add_subplot(223)
plt.title('State response')
plt.ylabel('States')
plt.plot(T,xout[0],
T,xout[1],
T,xout[2],
T,xout[3])
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.legend(['$x_{1}$','$x_{2}$','$x_{3}$','$x_{4}$'])
plt.grid()
fig.add_subplot(224)
plt.title('Estimation error')
plt.ylabel('Error')
plt.plot(T,xout[4]-xout[0])
plt.plot(T,xout[5]-xout[1])
plt.plot(T,xout[6]-xout[2])
plt.plot(T,xout[7]-xout[3])
plt.xlabel('$t$ [s]')
plt.axvline(x=0,color='black',linewidth=0.8)
plt.axhline(y=0,color='black',linewidth=0.8)
plt.legend(['$e_{1}$','$e_{2}$','$e_{3}$','$e_{4}$'])
plt.grid()
#plt.tight_layout()
alltogether2 = widgets.VBox([widgets.HBox([selm,
selec,
seleo,
selu]),
widgets.Label(' ',border=3),
widgets.HBox([widgets.HBox([widgets.Label('K:',border=3), Kw,
widgets.Label('Eigenvalues:',border=3),
widgets.HBox([eig1c,
eig2c,
eig3c,
eig4c])])]),
widgets.Label(' ',border=3),
widgets.HBox([widgets.VBox([widgets.HBox([widgets.Label('L:',border=3), Lw, widgets.Label(' ',border=3),
widgets.Label(' ',border=3),
widgets.Label('Eigenvalues:',border=3),
eig1o,
eig2o,
eig3o,
eig4o,
widgets.Label(' ',border=3),
widgets.Label(' ',border=3),
widgets.Label('X0 est.:',border=3), X0w]),
widgets.Label(' ',border=3),
widgets.HBox([
widgets.VBox([widgets.Label('Simulation time [s]:',border=3)]),
widgets.VBox([simTime])])]),
widgets.Label(' ',border=3)]),
widgets.Label(' ',border=3),
widgets.HBox([u,
v,
period,
START])])
out2 = widgets.interactive_output(main_callback2, {'v':v, 'X0w':X0w, 'K':Kw, 'L':Lw,
'eig1c':eig1c, 'eig2c':eig2c, 'eig3c':eig3c, 'eig4c':eig4c,
'eig1o':eig1o, 'eig2o':eig2o, 'eig3o':eig3o, 'eig4o':eig4o,
'u':u, 'period':period, 'selm':selm, 'selec':selec, 'seleo':seleo, 'selu':selu, 'simTime':simTime, 'DW':DW})
out2.layout.height = '860px'
display(out2, alltogether2)
```
# Classification Metrics Spark Example
Classification metrics are used to calculate the performance of binary predictors based on a binary target. They are used extensively in other Iguanas modules. This example shows how they can be applied in Spark and how to create your own.
## Requirements
To run, you'll need the following:
* A dataset containing binary predictor columns and a binary target column.
----
## Import packages
```
from iguanas.metrics.classification import Precision, Recall, FScore, Revenue
import numpy as np
import databricks.koalas as ks
from typing import Union
```
## Create data
Let's create some dummy predictor columns and a binary target column. For this example, let's assume the dummy predictor columns represent rules that have been applied to a dataset.
```
np.random.seed(0)
y_pred_ks = ks.Series(np.random.randint(0, 2, 1000), name = 'A')
y_preds_ks = ks.DataFrame(np.random.randint(0, 2, (1000, 5)), columns=[i for i in 'ABCDE'])
y_ks = ks.Series(np.random.randint(0, 2, 1000), name = 'label')
amounts_ks = ks.Series(np.random.randint(0, 1000, 1000), name = 'amounts')
```
----
## Apply optimisation functions
There are currently four classification metrics available:
* Precision score
* Recall score
* Fbeta score
* Revenue
**Note that the *FScore*, *Precision* or *Recall* classes are ~100 times faster on larger datasets compared to the same functions from Sklearn's *metrics* module. They also work with Koalas DataFrames, whereas the Sklearn functions do not.**
### Instantiate class and run fit method
We can run the `fit` method to calculate the optimisation metric for each column in the dataset.
#### Precision score
```
precision = Precision()
# Single predictor
rule_precision_ks = precision.fit(y_preds=y_pred_ks, y_true=y_ks, sample_weight=None)
# Multiple predictors
rule_precisions_ks = precision.fit(y_preds=y_preds_ks, y_true=y_ks, sample_weight=None)
```
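For a binary predictor, precision is simply TP / (TP + FP). Assuming the usual 0/1 encoding, the score reduces to two NumPy reductions (a sketch of the arithmetic, not of Iguanas internals):

```python
import numpy as np

# Made-up labels and predictions for illustration.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 1, 0, 1])

tp = int(np.sum(y_pred * y_true))   # predicted 1 and truly 1
predicted_pos = int(y_pred.sum())   # all predicted 1s (TP + FP)
precision = tp / predicted_pos
print(tp, predicted_pos, precision)  # → 3 5 0.6
```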
#### Recall score
```
recall = Recall()
# Single predictor
rule_recall_ks = recall.fit(y_preds=y_pred_ks, y_true=y_ks, sample_weight=None)
# Multiple predictors
rule_recalls_ks = recall.fit(y_preds=y_preds_ks, y_true=y_ks, sample_weight=None)
```
#### Fbeta score (beta=1)
```
f1 = FScore(beta=1)
# Single predictor
rule_f1_ks = f1.fit(y_preds=y_pred_ks, y_true=y_ks, sample_weight=None)
# Multiple predictors
rule_f1s_ks = f1.fit(y_preds=y_preds_ks, y_true=y_ks, sample_weight=None)
```
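The Fbeta score combines precision and recall as $F_\beta = (1+\beta^2)PR/(\beta^2 P + R)$; with $\beta=1$ this is their harmonic mean. A quick stand-alone check of the formula (independent of Iguanas):

```python
def fbeta(precision, recall, beta=1.0):
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R), with a 0/0 guard
    if precision == 0 and recall == 0:
        return 0.0
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

print(fbeta(0.6, 0.75))          # F1: harmonic mean of P and R
print(fbeta(0.6, 0.75, beta=2))  # beta > 1 weights recall more heavily
```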
#### Revenue
```
rev = Revenue(y_type='Fraud', chargeback_multiplier=2)
# Single predictor
rule_rev_ks = rev.fit(y_preds=y_pred_ks, y_true=y_ks, sample_weight=amounts_ks)
# Multiple predictors
rule_revs_ks = rev.fit(y_preds=y_preds_ks, y_true=y_ks, sample_weight=amounts_ks)
```
### Outputs
The `fit` method returns the optimisation metric defined by the class:
```
rule_precision_ks, rule_precisions_ks
rule_recall_ks, rule_recalls_ks
rule_f1_ks, rule_f1s_ks
rule_rev_ks, rule_revs_ks
```
The `fit` method can be fed into various Iguanas modules as an argument (wherever the `metric` parameter appears). For example, in the RuleGeneratorOpt module, you can set the metric used to optimise the rules using this methodology.
----
## Creating your own optimisation function
Say we want to create a class which calculates the positive likelihood ratio (true positive rate / false positive rate).
The main class structure involves having a `fit` method with three arguments: the binary predictor(s), the binary target and any event-specific weights to apply. This method should return a single numeric value for one predictor, or an array of values for multiple predictors.
```
class PositiveLikelihoodRatio:
def fit(self,
y_preds: Union[ks.Series, ks.DataFrame],
y_true: ks.Series,
sample_weight: ks.Series) -> float:
def _calc_plr(y_true, y_preds):
# Calculate TPR
tpr = (y_true * y_preds).sum() / y_true.sum()
# Calculate FPR
fpr = ((1 - y_true) * y_preds).sum()/(1 - y_true).sum()
return 0 if tpr == 0 or fpr == 0 else tpr/fpr
# Set this option to allow calc of TPR/FPR on Koalas dataframes
with ks.option_context("compute.ops_on_diff_frames", True):
if y_preds.ndim == 1:
return _calc_plr(y_true, y_preds)
else:
plrs = np.empty(y_preds.shape[1])
for i, col in enumerate(y_preds.columns):
plrs[i] = _calc_plr(y_true, y_preds[col])
return plrs
```
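The same logic can be sanity-checked against a hand computation on plain NumPy arrays (made-up labels; the koalas version above applies identical arithmetic per column):

```python
import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 0, 0])

tpr = (y_true * y_pred).sum() / y_true.sum()              # 2 of 3 positives caught
fpr = ((1 - y_true) * y_pred).sum() / (1 - y_true).sum()  # 1 of 5 negatives flagged
plr = tpr / fpr
print(tpr, fpr, plr)
```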
We can then apply the `fit` method to the dataset to check it works:
```
plr = PositiveLikelihoodRatio()
# Single predictor
rule_plr_ks = plr.fit(y_preds=y_pred_ks, y_true=y_ks, sample_weight=None)
# Multiple predictors
rule_plrs_ks = plr.fit(y_preds=y_preds_ks, y_true=y_ks, sample_weight=None)
rule_plr_ks, rule_plrs_ks
```
Finally, after instantiating the class, we can feed the `fit` method to a relevant Iguanas module (for example, we can feed the `fit` method to the `metric` parameter in the `BayesianOptimiser` class so that rules are generated which maximise the Positive Likelihood Ratio).
----
# Template Matching
### Full Image
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
full = cv2.imread('../images/diver_enhanced.jpg')
full = cv2.cvtColor(full, cv2.COLOR_BGR2RGB)
plt.imshow(full)
```
### Template Image
```
face= cv2.imread('../images/dolphin_template.jpg')
face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
plt.imshow(face)
```
# Template Matching Methods
Make sure to watch the video for an explanation of the different methods!
-------
-------
**Quick note on the `eval()` function, in case you haven't seen it before!**
```
sum([1,2,3])
mystring = 'sum'
eval(mystring)
myfunc = eval(mystring)
myfunc([1,2,3])
```
--------
--------
```
height, width,channels = face.shape
width
height
# The Full Image to Search
full = cv2.imread('../images/diver_enhanced.jpg')
full = cv2.cvtColor(full, cv2.COLOR_BGR2RGB)
# The Template to Match
face= cv2.imread('../images/dolphin_template.jpg')
face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
# All the 6 methods for comparison in a list
# Note how we are using strings; later on we'll use the eval() function to convert each string into the actual function object
methods = ['cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR','cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED']
m = 'cv2.TM_CCORR_NORMED'
# Create a copy of the image
full_copy = full.copy()
# Get the actual function instead of the string
method = eval(m)
# Apply template Matching with the method
res = cv2.matchTemplate(full_copy,face,method)
# Grab the Max and Min values, plus their locations
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
# Set up drawing of Rectangle
# If the method is TM_SQDIFF or TM_SQDIFF_NORMED, take minimum
# Notice the coloring on the last 2 left hand side images.
if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
top_left = min_loc
else:
top_left = max_loc
# Assign the Bottom Right of the rectangle
bottom_right = (top_left[0] + width, top_left[1] + height)
# Draw the Red Rectangle
cv2.rectangle(full_copy,top_left, bottom_right, 255, 10)
# Plot the Images
fig = plt.figure(figsize=(18,18))
ax1 = fig.add_subplot(121)
plt.imshow(res)
plt.title('Result of Template Matching')
ax1 = fig.add_subplot(122)
plt.imshow(full_copy)
plt.title('Detected Point')
plt.suptitle(m)
plt.show()
plt.imshow(full_copy)
full_copy = cv2.cvtColor(full_copy, cv2.COLOR_RGB2BGR)
cv2.imwrite('../images/diver_dolphin_bounding_box.jpg', full_copy)
full = cv2.imread('../images/dolphin.jpg')
full = cv2.cvtColor(full, cv2.COLOR_BGR2RGB)
plt.imshow(full)
```
### Template Image
```
face= cv2.imread('../images/dolphin_template.jpg')
face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
plt.imshow(face)
```
# Template Matching Methods
Make sure to watch the video for an explanation of the different methods!
-------
-------
**Quick note on the `eval()` function, in case you haven't seen it before!**
```
sum([1,2,3])
mystring = 'sum'
eval(mystring)
myfunc = eval(mystring)
myfunc([1,2,3])
```
--------
--------
```
width
height
# The Full Image to Search
full = cv2.imread('../images/dolphin.jpg')
full = cv2.cvtColor(full, cv2.COLOR_BGR2RGB)
# The Template to Match
face= cv2.imread('../images/dolphin_template.jpg')
face = cv2.cvtColor(face, cv2.COLOR_BGR2RGB)
# All the 6 methods for comparison in a list
# Note how we are using strings; later on we'll use the eval() function to convert each string into the actual function object
methods = ['cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR','cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED']
height, width,channels = face.shape
m = 'cv2.TM_CCORR_NORMED'
# Create a copy of the image
full_copy = full.copy()
# Get the actual function instead of the string
method = eval(m)
# Apply template Matching with the method
res = cv2.matchTemplate(full_copy,face,method)
# Grab the Max and Min values, plus their locations
min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
# Set up drawing of Rectangle
# If the method is TM_SQDIFF or TM_SQDIFF_NORMED, take minimum
# Notice the coloring on the last 2 left hand side images.
if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
top_left = min_loc
else:
top_left = max_loc
# Assign the Bottom Right of the rectangle
bottom_right = (top_left[0] + width, top_left[1] + height)
# Draw the Red Rectangle
cv2.rectangle(full_copy,top_left, bottom_right, 255, 10)
# Plot the Images
fig = plt.figure(figsize=(18,18))
ax1 = fig.add_subplot(121)
plt.imshow(res)
plt.title('Result of Template Matching')
ax1 = fig.add_subplot(122)
plt.imshow(full_copy)
plt.title('Detected Point')
plt.suptitle(m)
plt.show()
plt.imshow(full_copy)
full_copy = cv2.cvtColor(full_copy, cv2.COLOR_RGB2BGR)
cv2.imwrite('../images/diver_dolphin_bounding_box.jpg', full_copy)
```
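The six methods differ only in the similarity score computed at each offset. As a rough sketch of what `cv2.TM_SQDIFF` measures (plain NumPy on a toy grayscale image, not OpenCV's optimized implementation):

```python
import numpy as np

def match_template_sqdiff(image, template):
    """Brute-force TM_SQDIFF: sum of squared differences at every offset."""
    ih, iw = image.shape
    th, tw = template.shape
    out = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + th, x:x + tw]
            out[y, x] = np.sum((patch - template) ** 2)
    return out

image = np.zeros((8, 8))
image[3:5, 4:6] = 1.0          # a bright 2x2 square at (row=3, col=4)
template = np.ones((2, 2))

res = match_template_sqdiff(image, template)
# For the SQDIFF family the best match is the MINIMUM of the result map,
# which is why the code above uses min_loc for those two methods.
top_left = np.unravel_index(res.argmin(), res.shape)
print(top_left)  # -> (3, 4)
```

The correlation-based methods (`TM_CCORR`, `TM_CCOEFF`, and their normalized variants) replace the squared difference with a dot product, so their best match is the maximum instead.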
Many features of TensorFlow, including its computational graphs, lend themselves naturally to parallel computation. Computational graphs can be split over different processors, and different batches can be processed in parallel. This recipe demonstrates how to access different processors on the same machine.
## Getting ready...
In this recipe, we will explore different commands that allow us to access various devices on our system. The recipe will also demonstrate how to find out which devices TensorFlow is using.
## How to do it...
```
import tensorflow as tf
tf.debugging.set_log_device_placement(True)
```
1. To find out which devices TensorFlow is using for operations, we will activate device placement logging by setting tf.debugging.set_log_device_placement to True. If a TensorFlow operation is implemented for both CPU and GPU devices, it will be executed by default on a GPU device if a GPU is available:
```
tf.debugging.set_log_device_placement(True)
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
```
2. It is also possible to use the tensor device attribute that returns the name of the device on which this tensor will be assigned:
```
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
print(a.device)
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
print(b.device)
```
3. By default, TensorFlow automatically decides how to distribute computation across computing devices (CPUs and GPUs). Sometimes we need to select the device to use by creating a device context with the tf.device function. Each operation executed in this context will use the selected device:
```
tf.debugging.set_log_device_placement(True)
with tf.device('/device:CPU:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
```
4. If we move the matmul operation out of the context, this operation will be executed on a GPU device if one is available:
```
tf.debugging.set_log_device_placement(True)
with tf.device('/device:CPU:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
```
5. When using GPUs, TensorFlow automatically takes up a large portion of the GPU memory. While this is usually desired, we can take steps to be more careful with GPU memory allocation. While TensorFlow never releases GPU memory, we can slowly grow its allocation to the maximum limit (only when needed) by setting a GPU memory growth option. Note that physical devices cannot be modified after being initialized:
```
gpu_devices = tf.config.list_physical_devices('GPU')
if gpu_devices:
try:
tf.config.experimental.set_memory_growth(gpu_devices[0], True)
except RuntimeError as e:
        # Memory growth cannot be modified after the GPU has been initialized
print(e)
```
6. If we desire to limit the GPU memory used by TensorFlow, we can also create a virtual GPU device and set the maximum memory limit (in MB) to allocate on this virtual GPU. Note that virtual GPU devices cannot be modified after being initialized:
```
gpu_devices = tf.config.list_physical_devices('GPU')
if gpu_devices:
try:
tf.config.experimental.set_virtual_device_configuration(gpu_devices[0],
[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
except RuntimeError as e:
# Virtual devices cannot be modified after being initialized
print(e)
```
7. It is also possible to simulate virtual GPU devices with a single physical GPU:
```
gpu_devices = tf.config.list_physical_devices('GPU')
if gpu_devices:
try:
tf.config.experimental.set_virtual_device_configuration(gpu_devices[0],
[tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024),
tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
except RuntimeError as e:
print(e)
```
8. Sometimes we need to write robust code that can determine whether it is running with a GPU available or not. TensorFlow has a built-in function that reports whether the installed build was compiled with CUDA (GPU) support. This is helpful when we want to write code that takes advantage of a GPU when one is available and assign specific operations to it. This is done with the following code:
```
if tf.test.is_built_with_cuda():
# Run GPU specific code here
pass
```
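Note that `tf.test.is_built_with_cuda()` only reports whether the installed TensorFlow binary was compiled with CUDA support; it does not confirm that a GPU is physically present. A small sketch combining the build-time check with a runtime device query:

```python
import tensorflow as tf

# Build-time check: was this TensorFlow binary compiled with CUDA support?
build_has_cuda = tf.test.is_built_with_cuda()

# Runtime check: is a GPU actually visible to TensorFlow right now?
gpu_available = len(tf.config.list_physical_devices('GPU')) > 0

if build_has_cuda and gpu_available:
    print('GPU code path')
else:
    print('CPU fallback')
```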
9. If we have to assign specific operations to certain devices, we can use the following code. This will perform simple calculations and assign operations to the main CPU and two auxiliary GPUs:
```
print("Num GPUs Available: ", len(tf.config.list_logical_devices('GPU')))
if tf.test.is_built_with_cuda():
with tf.device('/cpu:0'):
a = tf.constant([1.0, 3.0, 5.0], shape = [1, 3])
b = tf.constant([2.0, 4.0, 6.0], shape = [3, 1])
with tf.device('/gpu:0'):
c = tf.matmul(a, b)
c = tf.reshape(c, [-1])
with tf.device('/gpu:1'):
d = tf.matmul(b, a)
flat_d = tf.reshape(d, [-1])
combined = tf.multiply(c, flat_d)
print(combined)
```
We can see that the first two operations are performed on the main CPU, while the next two are on our first auxiliary GPU, and the last two on our second auxiliary GPU.
```
import sys
sys.path.append("../scripts/")
from ideal_robot import *
from scipy.stats import expon, norm, uniform
class Robot(IdealRobot):
def __init__(
self,
pose,
agent=None,
sensor=None,
color="black",
noise_per_meter=5,
noise_std=math.pi / 60,
bias_rate_stds=(0.1, 0.1),
expected_stuck_time=1e100,
expected_escape_time=1e-100,
expected_kidnap_time=1e100,
kidnap_range_x=(-5.0, 5.0),
kidnap_range_y=(-5.0, 5.0),
    ):  # added parameters
super().__init__(pose, agent, sensor, color)
self.noise_pdf = expon(scale=1.0 / (1e-100 + noise_per_meter))
self.distance_until_noise = self.noise_pdf.rvs()
self.theta_noise = norm(scale=noise_std)
self.bias_rate_nu = norm.rvs(loc=1.0, scale=bias_rate_stds[0])
self.bias_rate_omega = norm.rvs(loc=1.0, scale=bias_rate_stds[1])
self.stuck_pdf = expon(scale=expected_stuck_time)
self.escape_pdf = expon(scale=expected_escape_time)
self.is_stuck = False
self.time_until_stuck = self.stuck_pdf.rvs()
self.time_until_escape = self.escape_pdf.rvs()
self.kidnap_pdf = expon(scale=expected_kidnap_time)
self.time_until_kidnap = self.kidnap_pdf.rvs()
rx, ry = kidnap_range_x, kidnap_range_y
self.kidnap_dist = uniform(
loc=(rx[0], ry[0], 0.0), scale=(rx[1] - rx[0], ry[1] - ry[0], 2 * math.pi)
)
def noise(self, pose, nu, omega, time_interval):
self.distance_until_noise -= (
            abs(nu) * time_interval + self.r * abs(omega) * time_interval
)
if self.distance_until_noise <= 0.0:
self.distance_until_noise += self.noise_pdf.rvs()
pose[2] += self.theta_noise.rvs()
return pose
def bias(self, nu, omega):
return nu * self.bias_rate_nu, omega * self.bias_rate_omega
def stuck(self, nu, omega, time_interval):
if self.is_stuck:
self.time_until_escape -= time_interval
if self.time_until_escape <= 0.0:
self.time_until_escape += self.escape_pdf.rvs()
self.is_stuck = False
else:
self.time_until_stuck -= time_interval
if self.time_until_stuck <= 0.0:
self.time_until_stuck += self.stuck_pdf.rvs()
self.is_stuck = True
return nu * (not self.is_stuck), omega * (not self.is_stuck)
def kidnap(self, pose, time_interval):
self.time_until_kidnap -= time_interval
if self.time_until_kidnap <= 0.0:
self.time_until_kidnap += self.kidnap_pdf.rvs()
return np.array(self.kidnap_dist.rvs()).T
else:
return pose
def one_step(self, time_interval):
if not self.agent:
return
obs = self.sensor.data(self.pose) if self.sensor else None
nu, omega = self.agent.decision(obs)
nu, omega = self.bias(nu, omega)
nu, omega = self.stuck(nu, omega, time_interval)
self.pose = self.state_transition(nu, omega, time_interval, self.pose)
self.pose = self.noise(self.pose, nu, omega, time_interval)
self.pose = self.kidnap(self.pose, time_interval)
class Camera(IdealCamera):  ###camera_fourth### (noise and bias omitted)
def __init__(
self,
env_map,
distance_range=(0.5, 6.0),
direction_range=(-math.pi / 3, math.pi / 3),
distance_noise_rate=0.1,
direction_noise=math.pi / 90,
distance_bias_rate_stddev=0.1,
direction_bias_stddev=math.pi / 90,
phantom_prob=0.0,
phantom_range_x=(-5.0, 5.0),
phantom_range_y=(-5.0, 5.0),
    ):  # added parameters
super().__init__(env_map, distance_range, direction_range)
self.distance_noise_rate = distance_noise_rate
self.direction_noise = direction_noise
self.distance_bias_rate_std = norm.rvs(scale=distance_bias_rate_stddev)
self.direction_bias = norm.rvs(scale=direction_bias_stddev)
        rx, ry = phantom_range_x, phantom_range_y  # added below
self.phantom_dist = uniform(
loc=(rx[0], ry[0]), scale=(rx[1] - rx[0], ry[1] - ry[0])
)
self.phantom_prob = phantom_prob
def noise(self, relpos):
ell = norm.rvs(loc=relpos[0], scale=relpos[0] * self.distance_noise_rate)
phi = norm.rvs(loc=relpos[1], scale=self.direction_noise)
return np.array([ell, phi]).T
def bias(self, relpos):
return (
relpos
+ np.array([relpos[0] * self.distance_bias_rate_std, self.direction_bias]).T
)
    def phantom(self, cam_pose, relpos):  # added
if uniform.rvs() < self.phantom_prob:
pos = np.array(self.phantom_dist.rvs()).T
return self.observation_function(cam_pose, pos)
else:
return relpos
def data(self, cam_pose):
observed = []
for lm in self.map.landmarks:
z = self.observation_function(cam_pose, lm.pos)
            z = self.phantom(cam_pose, z)  # added
if self.visible(z):
z = self.bias(z)
z = self.noise(z)
observed.append((z, lm.id))
self.lastdata = observed
return observed
world = World(30, 0.1)
### Create a map and add three landmarks ###
m = Map()
m.append_landmark(Landmark(-4, 2))
m.append_landmark(Landmark(2, -3))
m.append_landmark(Landmark(3, 3))
world.append(m)
### Create the robot ###
straight = Agent(0.2, 0.0)
circling = Agent(0.2, 10.0 / 180 * math.pi)
r = Robot(
np.array([0, 0, math.pi / 6]).T, sensor=Camera(m, phantom_prob=0.5), agent=circling
)
world.append(r)
### Run the animation ###
world.draw()
```
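The stuck, escape, and kidnap events above are all driven by exponentially distributed waiting times. A small sketch (with an illustrative scale value) shows why `expon(scale=t)` models an event whose expected interval is `t`:

```python
import numpy as np
from scipy.stats import expon

expected_stuck_time = 2.0  # illustrative value, in seconds
stuck_pdf = expon(scale=expected_stuck_time)

# Draw many waiting times; their mean converges to the scale parameter.
# This is why Robot resets time_until_stuck with stuck_pdf.rvs() after
# each event: successive intervals average out to expected_stuck_time.
samples = stuck_pdf.rvs(size=100_000, random_state=0)
print(samples.mean())
```

Setting `expected_stuck_time=1e100` (the default above) therefore makes a stuck event effectively never occur, while `expected_escape_time=1e-100` makes escape essentially immediate.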
## Search algorithms within Optuna
In this notebook, I will demo how to select the search algorithm with Optuna. We will compare the use of:
- Grid Search
- Randomized search
- Tree-structured Parzen Estimators
- CMA-ES
We can select the search algorithm from the [optuna.study.create_study()](https://optuna.readthedocs.io/en/stable/reference/generated/optuna.study.create_study.html#optuna.study.create_study) function.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.ensemble import RandomForestClassifier
import optuna
# load dataset
breast_cancer_X, breast_cancer_y = load_breast_cancer(return_X_y=True)
X = pd.DataFrame(breast_cancer_X)
y = pd.Series(breast_cancer_y).map({0:1, 1:0})
X.head()
# the target:
# percentage of benign (0) and malign tumors (1)
y.value_counts() / len(y)
# split dataset into a train and test set
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
X_train.shape, X_test.shape
```
## Define the objective function
This is the hyperparameter response space, the function we want to minimize.
```
# the objective function takes the hyperparameter space
# as input
def objective(trial):
rf_n_estimators = trial.suggest_int("rf_n_estimators", 100, 1000)
rf_criterion = trial.suggest_categorical("rf_criterion", ['gini', 'entropy'])
rf_max_depth = trial.suggest_int("rf_max_depth", 1, 4)
rf_min_samples_split = trial.suggest_float("rf_min_samples_split", 0.01, 1)
model = RandomForestClassifier(
n_estimators=rf_n_estimators,
criterion=rf_criterion,
max_depth=rf_max_depth,
min_samples_split=rf_min_samples_split,
)
score = cross_val_score(model, X_train, y_train, cv=3)
accuracy = score.mean()
return accuracy
```
## Randomized Search
RandomSampler()
```
study = optuna.create_study(
direction="maximize",
sampler=optuna.samplers.RandomSampler(),
)
study.optimize(objective, n_trials=5)
study.best_params
study.best_value
study.trials_dataframe()
```
## TPE
TPESampler is the default
```
study = optuna.create_study(
direction="maximize",
sampler=optuna.samplers.TPESampler(),
)
study.optimize(objective, n_trials=5)
study.best_params
study.best_value
```
## CMA-ES
CmaEsSampler
```
study = optuna.create_study(
direction="maximize",
sampler=optuna.samplers.CmaEsSampler(),
)
study.optimize(objective, n_trials=5)
study.best_params
study.best_value
```
## Grid Search
GridSampler()
We are probably not going to perform grid search with Optuna, but in case you want to, you need to define a variable with the search space, containing the exact values that you want to be tested.
```
search_space = {
"rf_n_estimators": [100, 500, 1000],
"rf_criterion": ['gini', 'entropy'],
"rf_max_depth": [1, 2, 3],
"rf_min_samples_split": [0.1, 1.0]
}
study = optuna.create_study(
direction="maximize",
sampler=optuna.samplers.GridSampler(search_space),
)
study.optimize(objective)
study.best_params
study.best_value
```
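The samplers differ in how they propose the next trial. As a library-free sketch (toy objective, not the Optuna API): grid search evaluates exactly one trial per combination of the space, while random search spends a fixed budget of independent draws:

```python
import itertools
import random

search_space = {
    "n_estimators": [100, 500, 1000],
    "max_depth": [1, 2, 3],
}

def objective(params):
    # Toy stand-in for the real cross-validation score, peaked
    # at n_estimators=500, max_depth=2.
    return -(params["n_estimators"] / 1000 - 0.5) ** 2 - (params["max_depth"] - 2) ** 2

# Grid search: one trial per combination (3 * 3 = 9 trials).
grid_trials = [dict(zip(search_space, combo))
               for combo in itertools.product(*search_space.values())]
best_grid = max(grid_trials, key=objective)

# Random search: a fixed budget of independent draws.
random.seed(0)
random_trials = [{k: random.choice(v) for k, v in search_space.items()}
                 for _ in range(5)]
best_random = max(random_trials, key=objective)

print(len(grid_trials), best_grid, best_random)
```

TPE and CMA-ES go one step further: instead of proposing points blindly, they use the scores of earlier trials to bias the next proposal toward promising regions.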
## Duplicated features
```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
```
## Read Data
```
data = pd.read_csv('../UNSW_Train.csv')
data.shape
# check the presence of missing data.
# (there are no missing data in this dataset)
[col for col in data.columns if data[col].isnull().sum() > 0]
data.head(5)
```
### Train - Test Split
```
# separate dataset into train and test
X_train, X_test, y_train, y_test = train_test_split(
data.drop(labels=['is_intrusion'], axis=1), # drop the target
data['is_intrusion'], # just the target
test_size=0.2,
random_state=0)
X_train.shape, X_test.shape
```
## Remove constant and quasi-constant (optional)
```
# remove constant and quasi-constant features first:
# we can remove the 2 types of features together with this code
# create an empty list
quasi_constant_feat = []
# iterate over every feature
for feature in X_train.columns:
# find the predominant value, that is the value that is shared
# by most observations
predominant = (X_train[feature].value_counts() / np.float64(
len(X_train))).sort_values(ascending=False).values[0]
    # evaluate the predominant value: do more than 99.8% of the
    # observations show the same value?
if predominant > 0.998:
quasi_constant_feat.append(feature)
len(quasi_constant_feat)
quasi_constant_feat
# we can then drop these columns from the train and test sets:
X_train.drop(labels=quasi_constant_feat, axis=1, inplace=True)
X_test.drop(labels=quasi_constant_feat, axis=1, inplace=True)
X_train.shape, X_test.shape
```
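For strictly constant features, scikit-learn's `VarianceThreshold` achieves the same in two lines; the frequency-based loop above is still needed for quasi-constant features, since a rare second value already gives nonzero variance. A small sketch on a toy array:

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.array([
    [1.0, 0.0, 3.0],
    [1.0, 0.0, 4.0],
    [1.0, 0.0, 5.0],
])  # the first two columns are constant

selector = VarianceThreshold(threshold=0.0)  # drop zero-variance columns
X_reduced = selector.fit_transform(X)
print(X_reduced.shape)  # -> (3, 1)
```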
## Remove duplicated features
```
# finding duplicated features
duplicated_feat_pairs = {}
_duplicated_feat = []
for i in range(0, len(X_train.columns)):
if i % 10 == 0:
print(i)
feat_1 = X_train.columns[i]
if feat_1 not in _duplicated_feat:
duplicated_feat_pairs[feat_1] = []
for feat_2 in X_train.columns[i + 1:]:
if X_train[feat_1].equals(X_train[feat_2]):
duplicated_feat_pairs[feat_1].append(feat_2)
_duplicated_feat.append(feat_2)
# let's explore our list of duplicated features
len(_duplicated_feat)
```
We found 1 feature that was a duplicate of another.
```
# these are the ones:
_duplicated_feat
# let's explore the dictionary we created:
duplicated_feat_pairs
```
We see that for every feature, if it had duplicates, we have entries in the list, otherwise, we have empty lists. Let's explore those features with duplicates now:
```
# let's explore the number of keys in our dictionary
# it equals the number of columns minus the duplicated ones,
# since duplicated features were not included as keys
print(len(duplicated_feat_pairs.keys()))
# print the features with its duplicates
# iterate over every feature in our dict:
for feat in duplicated_feat_pairs.keys():
# if it has duplicates, the list should not be empty:
if len(duplicated_feat_pairs[feat]) > 0:
# print the feature and its duplicates:
print(feat, duplicated_feat_pairs[feat])
print()
# to remove the duplicates (if necessary)
X_train = X_train[duplicated_feat_pairs.keys()]
X_test = X_test[duplicated_feat_pairs.keys()]
X_train.shape, X_test.shape
```
1 duplicate feature was found in the UNSW-NB15 dataset.
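A more compact way to find exact duplicates (a pandas sketch on a toy frame) is to transpose so that identical columns become identical rows, then apply `duplicated()`. Note the transpose copies the frame and upcasts mixed dtypes to object, so the pairwise loop above scales better in memory:

```python
import pandas as pd

df = pd.DataFrame({
    "a": [1, 2, 3],
    "b": [1, 2, 3],   # exact duplicate of "a"
    "c": [0, 0, 1],
})

# Transposing turns columns into rows, so row-wise duplicated()
# flags columns whose values match an earlier column exactly.
duplicated_cols = df.columns[df.T.duplicated()].tolist()
print(duplicated_cols)  # -> ['b']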
## Standardize Data
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
# apply the same scaling (fit on the train set only) to the test set
X_test = scaler.transform(X_test)
```
## Classifiers
```
from sklearn import linear_model
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from catboost import CatBoostClassifier
```
## Metrics Evaluation
```
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_curve, f1_score
from sklearn import metrics
from sklearn.model_selection import cross_val_score
```
### Logistic Regression
```
%%time
clf_LR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=25).fit(X_train, y_train)
pred_y_test = clf_LR.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_test))
f1 = f1_score(y_test, pred_y_test)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_test)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
```
### Naive Bayes
```
%%time
clf_NB = GaussianNB(var_smoothing=1e-08).fit(X_train, y_train)
pred_y_testNB = clf_NB.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_testNB))
f1 = f1_score(y_test, pred_y_testNB)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testNB)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
```
### Random Forest
```
%%time
clf_RF = RandomForestClassifier(random_state=0,max_depth=100,n_estimators=1000).fit(X_train, y_train)
pred_y_testRF = clf_RF.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_testRF))
f1 = f1_score(y_test, pred_y_testRF, average='weighted', zero_division=0)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testRF)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
```
### KNN
```
%%time
clf_KNN = KNeighborsClassifier(algorithm='ball_tree',leaf_size=1,n_neighbors=5,weights='uniform').fit(X_train, y_train)
pred_y_testKNN = clf_KNN.predict(X_test)
print('accuracy_score:', accuracy_score(y_test, pred_y_testKNN))
f1 = f1_score(y_test, pred_y_testKNN)
print('f1:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testKNN)
print('fpr:', fpr[1])
print('tpr:', tpr[1])
```
### CatBoost
```
%%time
clf_CB = CatBoostClassifier(depth=7,iterations=50,learning_rate=0.04).fit(X_train, y_train)
pred_y_testCB = clf_CB.predict(X_test)
print('Accuracy:', accuracy_score(y_test, pred_y_testCB))
f1 = f1_score(y_test, pred_y_testCB, average='weighted', zero_division=0)
print('F1 Score:', f1)
fpr, tpr, thresholds = roc_curve(y_test, pred_y_testCB)
print('FPR:', fpr[1])
print('TPR:', tpr[1])
```
## Model Evaluation
```
import pandas as pd, numpy as np
test_df = pd.read_csv("../UNSW_Test.csv")
test_df.shape
# Create feature matrix X and target vector y
y_eval = test_df['is_intrusion']
X_eval = test_df.drop(columns=['is_intrusion','ct_ftp_cmd'])
```
### Model Evaluation - Logistic Regression
```
modelLR = linear_model.LogisticRegression(n_jobs=-1, random_state=42, C=25)
modelLR.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredLR = modelLR.predict(X_eval)
y_predLR = modelLR.predict(X_test)
train_scoreLR = modelLR.score(X_train, y_train)
test_scoreLR = modelLR.score(X_test, y_test)
print("Training accuracy is ", train_scoreLR)
print("Testing accuracy is ", test_scoreLR)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreLR)
print('F1 Score:',f1_score(y_test, y_predLR))
print('Precision Score:',precision_score(y_test, y_predLR))
print('Recall Score:', recall_score(y_test, y_predLR))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predLR))
```
### Cross validation - Logistic Regression
```
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelLR, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
```
### Model Evaluation - Naive Bayes
```
modelNB = GaussianNB(var_smoothing=1e-08)
modelNB.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredNB = modelNB.predict(X_eval)
y_predNB = modelNB.predict(X_test)
train_scoreNB = modelNB.score(X_train, y_train)
test_scoreNB = modelNB.score(X_test, y_test)
print("Training accuracy is ", train_scoreNB)
print("Testing accuracy is ", test_scoreNB)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreNB)
print('F1 Score:',f1_score(y_test, y_predNB))
print('Precision Score:',precision_score(y_test, y_predNB))
print('Recall Score:', recall_score(y_test, y_predNB))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predNB))
```
### Cross validation - Naive Bayes
```
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelNB, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
```
### Model Evaluation - Random Forest
```
modelRF = RandomForestClassifier(random_state=0,max_depth=100,n_estimators=1000)
modelRF.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredRF = modelRF.predict(X_eval)
y_predRF = modelRF.predict(X_test)
train_scoreRF = modelRF.score(X_train, y_train)
test_scoreRF = modelRF.score(X_test, y_test)
print("Training accuracy is ", train_scoreRF)
print("Testing accuracy is ", test_scoreRF)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreRF)
print('F1 Score:', f1_score(y_test, y_predRF, average='weighted', zero_division=0))
print('Precision Score:', precision_score(y_test, y_predRF, average='weighted', zero_division=0))
print('Recall Score:', recall_score(y_test, y_predRF, average='weighted', zero_division=0))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predRF))
```
### Cross validation - Random Forest
```
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelRF, X_eval, y_eval, cv=5, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelRF, X_eval, y_eval, cv=5, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelRF, X_eval, y_eval, cv=5, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelRF, X_eval, y_eval, cv=5, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
```
### Model Evaluation - KNN
```
modelKNN = KNeighborsClassifier(algorithm='ball_tree',leaf_size=1,n_neighbors=5,weights='uniform')
modelKNN.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredKNN = modelKNN.predict(X_eval)
y_predKNN = modelKNN.predict(X_test)
train_scoreKNN = modelKNN.score(X_train, y_train)
test_scoreKNN = modelKNN.score(X_test, y_test)
print("Training accuracy is ", train_scoreKNN)
print("Testing accuracy is ", test_scoreKNN)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreKNN)
print('F1 Score:', f1_score(y_test, y_predKNN))
print('Precision Score:', precision_score(y_test, y_predKNN))
print('Recall Score:', recall_score(y_test, y_predKNN))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predKNN))
```
### Cross validation - KNN
```
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='accuracy')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
f = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='f1')
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
precision = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='precision')
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
recall = cross_val_score(modelKNN, X_eval, y_eval, cv=10, scoring='recall')
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
```
### Model Evaluation - CatBoost
```
modelCB = CatBoostClassifier(depth=7,iterations=50,learning_rate=0.04)
modelCB.fit(X_train, y_train)
# Predict on the new unseen test data
y_evalpredCB = modelCB.predict(X_eval)
y_predCB = modelCB.predict(X_test)
train_scoreCB = modelCB.score(X_train, y_train)
test_scoreCB = modelCB.score(X_test, y_test)
print("Training accuracy is ", train_scoreCB)
print("Testing accuracy is ", test_scoreCB)
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score
print('Performance measures for test:')
print('--------')
print('Accuracy:', test_scoreCB)
print('F1 Score:',f1_score(y_test, y_predCB, average='weighted', zero_division=0))
print('Precision Score:',precision_score(y_test, y_predCB, average='weighted', zero_division=0))
print('Recall Score:', recall_score(y_test, y_predCB, average='weighted', zero_division=0))
print('Confusion Matrix:\n', confusion_matrix(y_test, y_predCB))
```
### Cross validation - CatBoost
```
from sklearn.model_selection import cross_val_score
from sklearn import metrics
accuracy = cross_val_score(modelCB, X_eval, y_eval, cv=5, scoring='accuracy')
f = cross_val_score(modelCB, X_eval, y_eval, cv=5, scoring='f1')
precision = cross_val_score(modelCB, X_eval, y_eval, cv=5, scoring='precision')
recall = cross_val_score(modelCB, X_eval, y_eval, cv=5, scoring='recall')
print("Accuracy: %0.5f (+/- %0.5f)" % (accuracy.mean(), accuracy.std() * 2))
print("F1 Score: %0.5f (+/- %0.5f)" % (f.mean(), f.std() * 2))
print("Precision: %0.5f (+/- %0.5f)" % (precision.mean(), precision.std() * 2))
print("Recall: %0.5f (+/- %0.5f)" % (recall.mean(), recall.std() * 2))
```
# Run Experiments
You can use the Azure Machine Learning SDK to run code experiments that log metrics and generate outputs. This is at the core of most machine learning operations in Azure Machine Learning.
## Connect to your workspace
All experiments and associated resources are managed within your Azure Machine Learning workspace. In most cases, you should store the workspace configuration in a JSON configuration file. This makes it easier to reconnect without needing to remember details like your Azure subscription ID. You can download the JSON configuration file from the blade for your workspace in the Azure portal, but if you're using a Compute Instance within your workspace, the configuration file has already been downloaded to the root folder.
The code below uses the configuration file to connect to your workspace.
> **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
```
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
```
## Run an experiment
One of the most fundamental tasks that data scientists need to perform is to create and run experiments that process and analyze data. In this exercise, you'll learn how to use an Azure ML *experiment* to run Python code and record values extracted from data. In this case, you'll use a simple dataset that contains details of patients that have been tested for diabetes. You'll run an experiment to explore the data, extracting statistics, visualizations, and data samples. Most of the code you'll use is fairly generic Python, such as you might run in any data exploration process. However, with the addition of a few lines, the code uses an Azure ML *experiment* to log details of the run.
```
from azureml.core import Experiment
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name="mslearn-diabetes")
# Start logging data from the experiment, obtaining a reference to the experiment run
run = experiment.start_logging()
print("Starting experiment:", experiment.name)
# load the data from a local file
data = pd.read_csv('data/diabetes.csv')
# Count the rows and log the result
row_count = (len(data))
run.log('observations', row_count)
print('Analyzing {} rows of data'.format(row_count))
# Plot and log the count of diabetic vs non-diabetic patients
diabetic_counts = data['Diabetic'].value_counts()
fig = plt.figure(figsize=(6,6))
ax = fig.gca()
diabetic_counts.plot.bar(ax = ax)
ax.set_title('Patients with Diabetes')
ax.set_xlabel('Diagnosis')
ax.set_ylabel('Patients')
plt.show()
run.log_image(name='label distribution', plot=fig)
# log distinct pregnancy counts
pregnancies = data.Pregnancies.unique()
run.log_list('pregnancy categories', pregnancies)
# Log summary statistics for numeric columns
med_columns = ['PlasmaGlucose', 'DiastolicBloodPressure', 'TricepsThickness', 'SerumInsulin', 'BMI']
summary_stats = data[med_columns].describe().to_dict()
for col in summary_stats:
keys = list(summary_stats[col].keys())
values = list(summary_stats[col].values())
for index in range(len(keys)):
run.log_row(col, stat=keys[index], value = values[index])
# Save a sample of the data and upload it to the experiment output
data.sample(100).to_csv('sample.csv', index=False, header=True)
run.upload_file(name='outputs/sample.csv', path_or_stream='./sample.csv')
# Complete the run
run.complete()
```
## View run details
In Jupyter Notebooks, you can use the **RunDetails** widget to see a visualization of the run details.
```
from azureml.widgets import RunDetails
RunDetails(run).show()
```
### View more details in Azure Machine Learning studio
Note that the **RunDetails** widget includes a link to **view run details** in Azure Machine Learning studio. Click this to open a new browser tab with the run details (you can also just open [Azure Machine Learning studio](https://ml.azure.com) and find the run on the **Experiments** page). When viewing the run in Azure Machine Learning studio, note the following:
- The **Details** tab contains the general properties of the experiment run.
- The **Metrics** tab enables you to select logged metrics and view them as tables or charts.
- The **Images** tab enables you to select and view any images or plots that were logged in the experiment (in this case, the *Label Distribution* plot).
- The **Child Runs** tab lists any child runs (in this experiment there are none).
- The **Outputs + Logs** tab shows the output or log files generated by the experiment.
- The **Snapshot** tab contains all files in the folder where the experiment code was run (in this case, everything in the same folder as this notebook).
- The **Explanations** tab is used to show model explanations generated by the experiment (in this case, there are none).
- The **Fairness** tab is used to visualize predictive performance disparities that help you evaluate the fairness of machine learning models (in this case, there are none).
### Retrieve experiment details using the SDK
The **run** variable in the code you ran previously is an instance of a **Run** object, which is a reference to an individual run of an experiment in Azure Machine Learning. You can use this reference to get information about the run and its outputs:
```
import json
# Get logged metrics
print("Metrics:")
metrics = run.get_metrics()
for metric_name in metrics:
    print(metric_name, ":", metrics[metric_name])
# Get output files
print("\nFiles:")
files = run.get_file_names()
for file in files:
    print(file)
```
You can download the files produced by the experiment, either individually by using the **download_file** method, or by using the **download_files** method to retrieve multiple files. The following code downloads all of the files in the run's **outputs** folder:
```
import os
download_folder = 'downloaded-files'
# Download files in the "outputs" folder
run.download_files(prefix='outputs', output_directory=download_folder)
# Verify the files have been downloaded
for root, directories, filenames in os.walk(download_folder):
    for filename in filenames:
        print(os.path.join(root, filename))
```
If you need to troubleshoot the experiment run, you can use the **get_details** method to retrieve basic details about the run, or you can use the **get_details_with_logs** method to retrieve the run details as well as the contents of log files generated during the run:
```
run.get_details_with_logs()
```
Note that the details include information about the compute target on which the experiment was run, as well as the date and time when it started and ended. Additionally, because the notebook containing the experiment code (this one) is in a cloned Git repository, details about the repo, branch, and status are recorded in the run history.
In this case, note that the **logFiles** entry in the details indicates that no log files were generated. That's typical for an inline experiment like the one you ran, but things get more interesting when you run a script as an experiment, which is what we'll look at next.
## Run an experiment script
In the previous example, you ran an experiment inline in this notebook. A more flexible solution is to create a separate script for the experiment, store it in a folder along with any other files it needs, and then use Azure ML to run the experiment based on the script in the folder.
First, let's create a folder for the experiment files, and copy the data into it:
```
import os, shutil
# Create a folder for the experiment files
folder_name = 'diabetes-experiment-files'
experiment_folder = './' + folder_name
os.makedirs(folder_name, exist_ok=True)
# Copy the data file into the experiment folder
shutil.copy('data/diabetes.csv', os.path.join(folder_name, "diabetes.csv"))
```
Now we'll create a Python script containing the code for our experiment, and save it in the experiment folder.
> **Note**: running the following cell just *creates* the script file - it doesn't run it!
```
%%writefile $folder_name/diabetes_experiment.py
from azureml.core import Run
import pandas as pd
import os
# Get the experiment run context
run = Run.get_context()
# load the diabetes dataset
data = pd.read_csv('diabetes.csv')
# Count the rows and log the result
row_count = (len(data))
run.log('observations', row_count)
print('Analyzing {} rows of data'.format(row_count))
# Count and log the label counts
diabetic_counts = data['Diabetic'].value_counts()
print(diabetic_counts)
for k, v in diabetic_counts.items():
    run.log('Label:' + str(k), v)
# Save a sample of the data in the outputs folder (which gets uploaded automatically)
os.makedirs('outputs', exist_ok=True)
data.sample(100).to_csv("outputs/sample.csv", index=False, header=True)
# Complete the run
run.complete()
```
This code is a simplified version of the inline code used before. However, note the following:
- It uses the `Run.get_context()` method to retrieve the experiment run context when the script is run.
- It loads the diabetes data from the folder where the script is located.
- It creates a folder named **outputs** and writes the sample file to it - this folder is automatically uploaded to the experiment run.
Now you're almost ready to run the experiment. To run the script, you must create a **ScriptRunConfig** that identifies the Python script file to be run in the experiment, and then run an experiment based on it.
> **Note**: The ScriptRunConfig also determines the compute target and Python environment. In this case, the Python environment is defined to include some Conda and pip packages, but the compute target is omitted; so the default local compute will be used.
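For reference, a minimal Conda specification for a run like this might look as follows. This is a hypothetical sketch; the actual `environment.yml` provided with the lab may list different packages and versions:
```
name: experiment_env
dependencies:
- python=3.8
- pip
- pip:
  - azureml-defaults
  - pandas
```
The `azureml-defaults` pip package provides the components Azure ML needs to run a script and stream its logs back to the run.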
The following cell configures and submits the script-based experiment.
```
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.widgets import RunDetails
# Create a Python environment for the experiment (from a .yml file)
env = Environment.from_conda_specification("experiment_env", "environment.yml")
# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
                                script='diabetes_experiment.py',
                                environment=env)
# submit the experiment
experiment = Experiment(workspace=ws, name='mslearn-diabetes')
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()
```
As before, you can use the widget or the link to the experiment in [Azure Machine Learning studio](https://ml.azure.com) to view the outputs generated by the experiment, and you can also write code to retrieve the metrics and files it generated:
```
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
    print(key, metrics.get(key))
print('\n')
for file in run.get_file_names():
    print(file)
```
Note that this time, the run generated some log files. You can view these in the widget, or you can use the **get_details_with_logs** method like we did before, only this time the output will include the log data.
```
run.get_details_with_logs()
```
Although you can view the log details in the output above, it's usually easier to download the log files and view them in a text editor.
```
import os
log_folder = 'downloaded-logs'
# Download all files
run.get_all_logs(destination=log_folder)
# Verify the files have been downloaded
for root, directories, filenames in os.walk(log_folder):
    for filename in filenames:
        print(os.path.join(root, filename))
```
## View experiment run history
Now that you've run the same experiment multiple times, you can view the history in [Azure Machine Learning studio](https://ml.azure.com) and explore each logged run. Or you can retrieve an experiment by name from the workspace and iterate through its runs using the SDK:
```
from azureml.core import Experiment, Run
diabetes_experiment = ws.experiments['mslearn-diabetes']
for logged_run in diabetes_experiment.get_runs():
    print('Run ID:', logged_run.id)
    metrics = logged_run.get_metrics()
    for key in metrics.keys():
        print('-', key, metrics.get(key))
```
## Use MLflow
MLflow is an open source platform for managing machine learning processes. It's commonly (but not exclusively) used in Databricks environments to coordinate experiments and track metrics. In Azure Machine Learning experiments, you can use MLflow to track metrics as an alternative to the native log functionality.
To take advantage of this capability, you'll need the **azureml-mlflow** package, so let's ensure it's installed.
```
!pip show azureml-mlflow
```
### Use MLflow with an inline experiment
To use MLflow to track metrics for an inline experiment, you must set the MLflow *tracking URI* to the workspace where the experiment is being run. This enables you to use **mlflow** tracking methods to log data to the experiment run.
```
from azureml.core import Experiment
import pandas as pd
import mlflow
# Set the MLflow tracking URI to the workspace
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
# Create an Azure ML experiment in your workspace
experiment = Experiment(workspace=ws, name='mslearn-diabetes-mlflow')
mlflow.set_experiment(experiment.name)
# start the MLflow experiment
with mlflow.start_run():
    print("Starting experiment:", experiment.name)
    # Load data
    data = pd.read_csv('data/diabetes.csv')
    # Count the rows and log the result
    row_count = (len(data))
    mlflow.log_metric('observations', row_count)
    print("Run complete")
```
Now let's look at the metrics logged during the run.
```
# Get the latest run of the experiment
run = list(experiment.get_runs())[0]
# Get logged metrics
print("\nMetrics:")
metrics = run.get_metrics()
for key in metrics.keys():
    print(key, metrics.get(key))
# Get a link to the experiment in Azure ML studio
experiment_url = experiment.get_portal_url()
print('See details at', experiment_url)
```
After running the code above, you can use the link that is displayed to view the experiment in Azure Machine Learning studio. Then select the latest run of the experiment and view its **Metrics** tab to see the logged metric.
### Use MLflow in an experiment script
You can also use MLflow to track metrics in an experiment script.
Run the following two cells to create a folder and a script for an experiment that uses MLflow.
```
import os, shutil
# Create a folder for the experiment files
folder_name = 'mlflow-experiment-files'
experiment_folder = './' + folder_name
os.makedirs(folder_name, exist_ok=True)
# Copy the data file into the experiment folder
shutil.copy('data/diabetes.csv', os.path.join(folder_name, "diabetes.csv"))
```
```
%%writefile $folder_name/mlflow_diabetes.py
from azureml.core import Run
import pandas as pd
import mlflow
# start the MLflow experiment
with mlflow.start_run():
    # Load data
    data = pd.read_csv('diabetes.csv')
    # Count the rows and log the result
    row_count = (len(data))
    print('observations:', row_count)
    mlflow.log_metric('observations', row_count)
```
When you use MLflow tracking in an Azure ML experiment script, the MLflow tracking URI is set automatically when you start the experiment run. However, the environment in which the script is to be run must include the required **mlflow** packages.
```
from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.widgets import RunDetails
# Create a Python environment for the experiment (from a .yml file)
env = Environment.from_conda_specification("experiment_env", "environment.yml")
# Create a script config
script_mlflow = ScriptRunConfig(source_directory=experiment_folder,
                                script='mlflow_diabetes.py',
                                environment=env)
# submit the experiment
experiment = Experiment(workspace=ws, name='mslearn-diabetes-mlflow')
run = experiment.submit(config=script_mlflow)
RunDetails(run).show()
run.wait_for_completion()
```
As usual, you can get the logged metrics from the experiment run when it's finished.
```
# Get logged metrics
metrics = run.get_metrics()
for key in metrics.keys():
    print(key, metrics.get(key))
```
> **More Information**: To find out more about running experiments, see [this topic](https://docs.microsoft.com/azure/machine-learning/how-to-manage-runs) in the Azure ML documentation. For details of how to log metrics in a run, see [this topic](https://docs.microsoft.com/azure/machine-learning/how-to-track-experiments). For more information about integrating Azure ML experiments with MLflow, see [this topic](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-use-mlflow).
## Install Lib API
```
! pip install https://dnaink.jfrog.io/artifactory/dna-ink-pypi/model-fkeywords/0.1.0/model_fkeywords-0.1.0-py3-none-any.whl
! python -m spacy download pt_core_news_sm
```
## Import libs
```
import pandas as pd
import spacy
import nltk
nltk.download('stopwords')
from api_model import nlextract
pd.set_option('display.max_colwidth', None)  # None disables truncation; the older -1 value is deprecated
pd.set_option('display.max_rows', None)
extractor = nlextract.NLExtractor()
```
## Read Excel file
```
df = pd.read_excel("/content/Conversas Sodexo.xlsx", engine='openpyxl')
df.info()
```
#### See the first row of the dataset
```
df.head(1)
```
### Cleaning accents
#### Assign the original text column name to a variable (any column containing text can be used)
```
coluna_texto = 'Message'
df[f'{coluna_texto}_clean'] = df[coluna_texto].apply(extractor.udf_clean_text)  # strips all accents, special characters,
# and punctuation from the text.
df[[f'{coluna_texto}_clean', coluna_texto]].head(10)
```
### Call the functions from the wheel file
#### List of searched keywords
#### Example format:
variavel = {coluna_nome: [list of searched words],
coluna_nome: [list of searched words]}
```
mydict = {
'alto_atrito': ['ainda não', 'não chegou', 'até agora', 'até o momento', 'atraso', 'reclamação', 'pelo amor de Deus', 'procon', \
'péssimo', 'cansado', 'incompetente', 'ouvidoria', 'frustração', 'absurdo', 'porra', 'PQP', 'poxa vida', 'horrível',\
'ridículo', 'decepção', 'humilhação', 'FDP', 'merda', 'triste', 'bosta', 'protocolo','so um miniuto'],
'cancelamento': ['cancelar minha viagem', 'quero cancelar', 'cancelamento', 'desejo cancelar', 'quero desativar', 'verificado'],
'callback': ['me liga de novo', 'está me ligado']
}
```
#### Convert the original column to lowercase
```
df[coluna_texto] = df[coluna_texto].str.lower()
```
#### Apply the keyword-search function to the text
```
coluna_texto = f'{coluna_texto}_clean'
try:
    for key in mydict:
        df[key] = df[coluna_texto].apply(lambda x: extractor.pattern_matcher(x, mydict[key], mode="dictionary", anonymizer=False, custom_anonymizer=f'<{key}>'))
except IOError as e:
    print(f'no more keyword lists to process: {e}')
    pass
df[['Message','Message_clean','alto_atrito','cancelamento','callback']].head(20)
```
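The `pattern_matcher` call above comes from the closed-source `nlextract` wheel, so its exact behavior isn't shown here. As a rough stdlib-only illustration of the underlying idea — scanning a text for any keyword from a category list, insensitive to case and accents — one might write:

```python
import re
import unicodedata

def strip_accents(text):
    # Decompose accented characters and drop the combining marks,
    # so 'péssimo' also matches 'pessimo'
    return ''.join(c for c in unicodedata.normalize('NFKD', text)
                   if not unicodedata.combining(c))

def match_keywords(text, keywords):
    """Return the keywords found in text (case- and accent-insensitive)."""
    clean = strip_accents(text.lower())
    found = []
    for kw in keywords:
        pattern = r'\b' + re.escape(strip_accents(kw.lower())) + r'\b'
        if re.search(pattern, clean):
            found.append(kw)
    return found

hits = match_keywords('O serviço está péssimo, quero cancelar!',
                      ['péssimo', 'quero cancelar', 'procon'])
print(hits)  # → ['péssimo', 'quero cancelar']
```

Unlike this sketch, the real `pattern_matcher` also supports anonymization via the `custom_anonymizer` argument.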
## Keyword search using exact words from the text
```
mydict = {
'alto_atrito_2': ['ainda não', 'não chegou', 'até agora', 'até o momento', 'atraso', 'reclamação', 'pelo amor de Deus', 'procon', \
'péssimo', 'cansado', 'incompetente', 'ouvidoria', 'frustração', 'absurdo', 'porra', 'PQP', 'poxa vida', 'horrível',\
'ridículo', 'decepção', 'humilhação', 'FDP', 'merda', 'triste', 'bosta', 'protocolo'],
'cancelamento_2': ['cancelar minha viagem', 'quero cancelar', 'cancelamento', 'desejo cancelar', 'quero desativar', 'verificado']
}
try:
    for key in mydict:
        df[key] = df[coluna_texto].apply(lambda x: extractor.udf_type_keywords(x, mydict[key], mode="dictionary"))
except IOError as e:
    print(f'no more keyword lists to process: {e}')
    pass
```
#### Evaluate the generated keyword-search columns
```
df[['Message','Message_clean','alto_atrito','cancelamento','alto_atrito_2', 'cancelamento_2']].head(50)
```
## Save the result to a CSV
```
filename = 'sodexo_tratado'
file_save = f'{filename}.csv'
df.to_csv(file_save, sep=';',encoding='utf-8',index=False)
```
```
import pandas as pd
import os
import numpy as np
from datetime import timedelta
from sklearn.ensemble import RandomForestRegressor
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
in_dir = 'D:\\Toppan\\2017-11-20 全データ\\処理済(機械ごと)\\vectorized'
out_dir = in_dir
holiday_path = 'D:\\Toppan\\2017-11-20 全データ\\データ\\切り離し全休日\\全休日.xlsx'
def mask_out(X, y, month):
    try:
        df_filter = pd.read_excel(holiday_path, sheet_name=month, index_col=0).iloc[2:]
    except Exception as e:
        print(e, month)
        return X, y
    seisan = '生産\n有無' in df_filter
    def isBusy(idx):
        row = df_filter.loc[idx]
        if row.loc['切離\n有無'] == '切離' or row.loc['全休\n判定'] == '全休' \
           or row.loc['異常判定'] == '※異常稼動' or (seisan and row.loc['生産\n有無'] == '無'):
            return False
        else:
            return True
    x_busy_idx = []
    y_busy_idx = []
    for x_idx, y_idx in zip(X.index, y.index):
        if isBusy(x_idx) and isBusy(y_idx):
            x_busy_idx.append(x_idx)
            y_busy_idx.append(y_idx)
    return X.loc[x_busy_idx], y.loc[y_busy_idx]
def get_importance_figure(model, name, features):
    indices = np.argsort(model.feature_importances_)[::-1]
    # save csv
    s = pd.Series(data=model.feature_importances_[indices],
                  index=features[indices])
    s.to_csv(os.path.join(out_dir, name + '_寄与度.csv'),
             encoding='shift-jis')
def parse_data(exl, sheet):
    df = exl.parse(sheet_name=sheet, index_col=0)
    return df
def split_day_night(acc_abs):
    acc_abs_days, acc_abs_nights = [], []
    for i, acc in acc_abs.items():  # .iteritems() was removed in pandas 2.0
        if 7 < i.hour < 22:
            acc_abs_days.append(acc)
        else:
            acc_abs_nights.append(acc)
    return acc_abs_days, acc_abs_nights
def get_output(res, output, sname):
    res = res[res['target'] != 0]
    if len(res) == 0:
        return None
    y_pred, y_true = res['preds'], res['target']
    '''calculate abs accuracy'''
    acc_abs = abs(y_pred - y_true) / y_true
    '''split days and nights'''
    acc_abs_days, acc_abs_nights = split_day_night(acc_abs)
    len_days, len_nights = len(acc_abs_days), len(acc_abs_nights)
    #sname2acc = {'蒸気': [0.2, 0.15], '電力': [0.09, 0.15], '冷水': [0.15, 0.1]}
    '''acc stats'''
    len_acc_days = len(list(filter(lambda x: x <= 0.2, acc_abs_days)))
    len_acc_nights = len(list(filter(lambda x: x <= 0.15, acc_abs_nights)))
    acc_stats_days = len_acc_days / len_days
    acc_stats_nights = len_acc_nights / len_nights
    output['設備名'].append(sname)
    output['平日昼・総'].append(len_days)
    output['平日夜・総'].append(len_nights)
    output['平日昼・基準内'].append(len_acc_days)
    output['平日夜・基準内'].append(len_acc_nights)
    output['平日昼基準率'].append(acc_stats_days)
    output['平日夜基準率'].append(acc_stats_nights)
    return output
```
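The accuracy criterion used in `get_output` — the share of nonzero-target hours whose relative absolute error |pred - true| / true falls within a tolerance — can be sketched in isolation, independent of the notebook's data:

```python
def within_tolerance_rate(preds, targets, tol):
    """Fraction of points whose relative absolute error is <= tol.
    Points with a zero target are skipped, as in get_output."""
    errors = [abs(p - t) / t for p, t in zip(preds, targets) if t != 0]
    if not errors:
        return None
    hits = [e for e in errors if e <= tol]
    return len(hits) / len(errors)

# Errors are 0.05, 0.10, and 2.0 — two of three fall within 20%
rate = within_tolerance_rate([95, 110, 300], [100, 100, 100], tol=0.2)
print(round(rate, 3))  # → 0.667
```

The notebook applies this with a 0.2 tolerance for daytime hours and 0.15 for nighttime hours.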
### Learning
```
exl_learn = pd.ExcelFile(os.path.join(in_dir, '201709010800_vapor_per_machine.xlsx'))
exl_test = pd.ExcelFile(os.path.join(in_dir, '201710010800_vapor_per_machine.xlsx'))
accs = []
for sheet in exl_learn.sheet_names:
    if sheet == 'GDNA': continue
    # data
    df_learn = parse_data(exl_learn, sheet)
    df_test = parse_data(exl_test, sheet)
    # filter out holidays
    X_learn, y_learn = mask_out(df_learn.iloc[:-1, :-1], df_learn.iloc[1:, -1], '17年9月')
    X_test, y_test = mask_out(df_test.iloc[:-1, :-1], df_test.iloc[1:, -1], '17年10月')
    # base learner
    model = RandomForestRegressor(n_estimators=700,
                                  n_jobs=-1,
                                  max_depth=11,
                                  max_features=1.0,  # 'auto' in older scikit-learn
                                  criterion='absolute_error',  # 'mae' in older scikit-learn
                                  random_state=700,
                                  warm_start=True)
    # learn 1 hour later target
    model.fit(X_learn.values, y_learn.values)
    # get feature importance figures
    #get_importance_figure(model, sheet)
    # test with online learning
    preds = []
    for idx, row in X_test.iterrows():
        # predict
        preds.append(model.predict(row.values.reshape(1, -1))[0])
        # online learning
        model.n_estimators += 50
        X_learn = pd.concat([X_learn, row.to_frame().T])  # Series.append was removed in pandas 2.0
        y_learn = pd.concat([y_learn,
                             pd.Series(data=y_test.loc[idx + timedelta(hours=1)],
                                       index=[idx + timedelta(hours=1)])])
        model.fit(X_learn, y_learn)
    # save preds and test
    preds = pd.Series(data=preds, index=y_test.index, name='preds')
    result = pd.concat([preds, y_test], axis=1)
    result.to_csv(os.path.join(out_dir, sheet + '.csv'))
    # accuracy
    output = {'設備名': [],
              '平日昼・総': [], '平日夜・総': [],
              '平日昼・基準内': [], '平日夜・基準内': [],
              '平日昼基準率': [], '平日夜基準率': []}
    output = get_output(result, output, sheet)
    print(sheet, output)
    if output:
        accs.append(pd.DataFrame(output))
# save accuracy
accs = pd.concat(accs)
accs.to_csv(os.path.join(out_dir, 'acc.csv'), index=False, encoding='shift-jis')
print('-------------over-----------')
```
# Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
**Instructions:**
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
**After this assignment you will:**
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
## About iPython Notebooks ##
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
**Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
```
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
```
**Expected output**:
test: Hello World
<font color='blue'>
**What you need to remember**:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
## 1 - Building basic functions with numpy ##
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
### 1.1 - sigmoid function, np.exp() ###
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
**Reminder**:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
```
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
```
**Expected Output**:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
```
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
```
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
```
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
```
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
```
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
```
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html).
You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.
**Exercise**: Implement the sigmoid function using numpy.
**Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
```
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
### 1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
**Exercise**: Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
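The two steps above amount to the following short derivation, starting from $\sigma(x) = \frac{1}{1+e^{-x}}$:

$$\sigma'(x) = \frac{d}{dx}\left(\frac{1}{1+e^{-x}}\right) = \frac{e^{-x}}{(1+e^{-x})^2} = \frac{1}{1+e^{-x}} \cdot \frac{e^{-x}}{1+e^{-x}} = \sigma(x)\left(1-\sigma(x)\right)$$

using the fact that $1-\sigma(x) = \frac{e^{-x}}{1+e^{-x}}$.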
```
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
### 1.3 - Reshaping arrays ###
Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html).
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
``` python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
```
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape((image.shape[0]*image.shape[1]*image.shape[2], 1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
                   [ 0.90714982, 0.52835647],
                   [ 0.4215251 , 0.45017551]],
                  [[ 0.92814219, 0.96677647],
                   [ 0.85304703, 0.52351845],
                   [ 0.19981397, 0.27417313]],
                  [[ 0.60659855, 0.00533165],
                   [ 0.10820313, 0.49978937],
                   [ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
### 1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
```
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x,ord=2,axis=1,keepdims = True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
**Note**:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
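The shape mismatch described above can be checked with a quick standalone snippet, using the same matrix as in the worked example (equations (3)-(5)):

```python
import numpy as np

x = np.array([[0., 3., 4.],
              [2., 6., 4.]])
# keepdims=True keeps the result as a (2, 1) column instead of a flat (2,) vector
x_norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
print(x.shape, x_norm.shape)  # → (2, 3) (2, 1)

# Broadcasting stretches the (2, 1) column across the 3 columns of x
normalized = x / x_norm
print(np.linalg.norm(normalized, axis=1))  # each row now has (numerically) unit length
```

The division works because NumPy broadcasts the size-1 trailing dimension of `x_norm` against the 3 columns of `x`.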
### 1.5 - Broadcasting and the softmax function
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
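As a quick, concrete illustration (a small sketch, separate from the graded exercises), broadcasting lets a shape-(2, 1) array divide a shape-(2, 3) array row by row:

```python
import numpy as np

# A (2, 3) matrix and a (2, 1) column of per-row divisors.
a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
col = np.array([[10.0],
                [100.0]])

# Broadcasting virtually repeats `col` along axis 1 to shape (2, 3),
# so every entry in row i of `a` is divided by col[i, 0].
result = a / col
print(result)
```

This is exactly the pattern used when dividing a matrix by its per-row norms.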
**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
**Instructions**:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
- $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
```
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp,axis=1,keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = np.divide(x_exp,x_sum)
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
**Note**:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.
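A common refinement, not required by the exercise above, is to subtract each row's maximum before exponentiating. This leaves the result mathematically unchanged but keeps `np.exp` from overflowing on large inputs. A minimal sketch:

```python
import numpy as np

def softmax_stable(x):
    """Row-wise softmax that subtracts the row max before exponentiating.

    Subtracting a per-row constant does not change the softmax values,
    but it prevents np.exp from overflowing for large entries.
    """
    shifted = x - np.max(x, axis=1, keepdims=True)
    x_exp = np.exp(shifted)
    return x_exp / np.sum(x_exp, axis=1, keepdims=True)

x = np.array([[1000.0, 1001.0],   # naive np.exp(1000) overflows to inf
              [1.0, 2.0]])
print(softmax_stable(x))
```

Both rows above produce the same output, since softmax is invariant to adding a constant to a row.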
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
**What you need to remember:**
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
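For reference, a brief standalone sketch of the sigmoid and its gradient mentioned in the list above:

```python
import numpy as np

def sigmoid(x):
    # Element-wise logistic function; works on scalars and np.arrays alike.
    return 1 / (1 + np.exp(-x))

def sigmoid_derivative(x):
    # ds/dx = s(x) * (1 - s(x)), a form that reuses the forward pass.
    s = sigmoid(x)
    return s * (1 - s)

print(sigmoid(0.0))             # 0.5
print(sigmoid_derivative(0.0))  # 0.25
```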
## 2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
```
import time
import numpy as np
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (equivalent to `.*` in Matlab/Octave), which perform an element-wise multiplication.
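The distinction is easy to see on a pair of small vectors:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

print(np.dot(a, b))       # inner product: 1*4 + 2*5 + 3*6 = 32
print(a * b)              # element-wise: [ 4 10 18]
print(np.multiply(a, b))  # same as a * b
```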
### 2.1 Implement the L1 and L2 loss functions
**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
**Reminder**:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
```
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
**Exercise**: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=1}^n x_j^{2}$.
- L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
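The `np.dot(x,x)` identity used in the hint can be verified on a tiny vector:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
# np.dot(x, x) is the sum of squared entries: 1 + 4 + 9 = 14
print(np.dot(x, x))
```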
```
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.dot(y - yhat, y - yhat)
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L2** </td>
<td> 0.43 </td>
</tr>
</table>
Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!
<font color='blue'>
**What to remember:**
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...
This notebook was prepared by [Donne Martin](https://github.com/donnemartin). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges).
# Solution Notebook
## Problem: Generate a list of primes.
* [Constraints](#Constraints)
* [Test Cases](#Test-Cases)
* [Algorithm](#Algorithm)
* [Code](#Code)
* [Unit Test](#Unit-Test)
## Constraints
* Is it correct that 1 is not considered a prime number?
* Yes
* Can we assume the inputs are valid?
* No
* Can we assume this fits memory?
* Yes
## Test Cases
* None -> Exception
* Not an int -> Exception
* 20 -> [False, False, True, True, False, True, False, True, False, False, False, True, False, True, False, False, False, True, False, True]
## Algorithm
For a number to be prime, it must be 2 or greater and cannot be divisible by another number other than itself (and 1).
We'll use the Sieve of Eratosthenes. All non-prime numbers are divisible by a prime number.
* Use an array (or bit array, bit vector) to keep track of each integer up to the max
* Start at 2, end at sqrt(max)
* We can use sqrt(max) instead of max because:
* For each value a that divides the input number n evenly, there is a complement b where a * b = n
* If a > sqrt(n) then b < sqrt(n), because a * b = n, so every composite number has a divisor no larger than sqrt(n)
* "Cross off" all numbers divisible by 2, 3, 5, 7, ... by setting array[index] to False
Complexity:
* Time: O(n log log n)
* Space: O(n)
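The sqrt(max) stopping bound can be sanity-checked numerically: every composite n is caught by trial division up to sqrt(n), while primes are not (a small standalone sketch):

```python
import math

def has_small_divisor(n):
    # Every composite n has a divisor a with 2 <= a <= sqrt(n):
    # if a * b = n and a > sqrt(n), then b = n / a < sqrt(n).
    return any(n % a == 0 for a in range(2, int(math.sqrt(n)) + 1))

# Composites are always caught by trial division up to sqrt(n) ...
assert all(has_small_divisor(n) for n in (4, 9, 35, 91, 143))
# ... while primes never are.
assert not any(has_small_divisor(p) for p in (2, 3, 5, 97, 101))
print("sqrt bound holds")
```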
Wikipedia's animation:

## Code
```
import math
class PrimeGenerator(object):
def generate_primes(self, max_num):
if max_num is None:
raise TypeError('max_num cannot be None')
if not isinstance(max_num, int):
raise TypeError('max_num must be an int')
array = [True] * max_num
array[0] = False
array[1] = False
prime = 2
while prime <= math.sqrt(max_num):
self._cross_off(array, prime)
prime = self._next_prime(array, prime)
return array
def _cross_off(self, array, prime):
for index in range(prime*prime, len(array), prime):
# Start with prime*prime because if we have a k*prime
# where k < prime, this value would have already been
# previously crossed off
array[index] = False
def _next_prime(self, array, prime):
next = prime + 1
while next < len(array) and not array[next]:
next += 1
return next
```
## Unit Test
```
%%writefile test_generate_primes.py
import unittest
class TestMath(unittest.TestCase):
def test_generate_primes(self):
prime_generator = PrimeGenerator()
self.assertRaises(TypeError, prime_generator.generate_primes, None)
self.assertRaises(TypeError, prime_generator.generate_primes, 98.6)
self.assertEqual(prime_generator.generate_primes(20), [False, False, True,
True, False, True,
False, True, False,
False, False, True,
False, True, False,
False, False, True,
False, True])
print('Success: generate_primes')
def main():
test = TestMath()
test.test_generate_primes()
if __name__ == '__main__':
main()
%run -i test_generate_primes.py
```
```
import matplotlib
matplotlib.use('nbagg')
import matplotlib.animation as anm
import matplotlib.pyplot as plt
import math
import matplotlib.patches as patches
import numpy as np
%matplotlib widget
class World:
def __init__(self, time_span, time_interval, debug=False):
self.objects = []
self.debug = debug
self.time_span = time_span
self.time_interval = time_interval
def append(self,obj):
self.objects.append(obj)
def draw(self):
fig = plt.figure(figsize=(4,4))
ax = fig.add_subplot(111)
ax.set_aspect('equal')
ax.set_xlim(-5,5)
ax.set_ylim(-5,5)
ax.set_xlabel("X",fontsize=10)
ax.set_ylabel("Y",fontsize=10)
elems = []
if self.debug:
for i in range(int(self.time_span/self.time_interval)): self.one_step(i, elems, ax)
else:
self.ani = anm.FuncAnimation(fig, self.one_step, fargs=(elems, ax),
frames=int(self.time_span/self.time_interval)+1, interval=int(self.time_interval*1000), repeat=False)
plt.show()
def one_step(self, i, elems, ax):
while elems: elems.pop().remove()
time_str = "t = %.2f[s]" % (self.time_interval*i)
elems.append(ax.text(-4.4, 4.5, time_str, fontsize=10))
for obj in self.objects:
obj.draw(ax, elems)
if hasattr(obj, "one_step"): obj.one_step(self.time_interval)
class IdealRobot:
def __init__(self, pose, agent=None, sensor=None, color="black"): # added sensor argument
self.pose = pose
self.r = 0.2
self.color = color
self.agent = agent
self.poses = [pose]
self.sensor = sensor # added
def draw(self, ax, elems): ### call_agent_draw
x, y, theta = self.pose
xn = x + self.r * math.cos(theta)
yn = y + self.r * math.sin(theta)
elems += ax.plot([x,xn], [y,yn], color=self.color)
c = patches.Circle(xy=(x, y), radius=self.r, fill=False, color=self.color)
elems.append(ax.add_patch(c))
self.poses.append(self.pose)
elems += ax.plot([e[0] for e in self.poses], [e[1] for e in self.poses], linewidth=0.5, color="black")
if self.sensor and len(self.poses) > 1:
self.sensor.draw(ax, elems, self.poses[-2])
if self.agent and hasattr(self.agent, "draw"): # the next two lines were added
self.agent.draw(ax, elems)
@classmethod
def state_transition(cls, nu, omega, time, pose):
t0 = pose[2]
if math.fabs(omega) < 1e-10:
return pose + np.array( [nu*math.cos(t0),
nu*math.sin(t0),
omega ] ) * time
else:
return pose + np.array( [nu/omega*(math.sin(t0 + omega*time) - math.sin(t0)),
nu/omega*(-math.cos(t0 + omega*time) + math.cos(t0)),
omega*time ] )
def one_step(self, time_interval):
if not self.agent: return
obs = self.sensor.data(self.pose) if self.sensor else None # added
nu, omega = self.agent.decision(obs) # added argument
self.pose = self.state_transition(nu, omega, time_interval, self.pose)
if self.sensor: self.sensor.data(self.pose)
class Agent:
def __init__(self, nu, omega):
self.nu = nu
self.omega = omega
def decision(self, observation=None):
return self.nu, self.omega
class Landmark:
def __init__(self, x, y):
self.pos = np.array([x, y]).T
self.id = None
def draw(self, ax, elems):
c = ax.scatter(self.pos[0], self.pos[1], s=100, marker="*", label="landmarks", color="orange")
elems.append(c)
elems.append(ax.text(self.pos[0], self.pos[1], "id:" + str(self.id), fontsize=10))
class Map:
def __init__(self): # prepare an empty list of landmarks
self.landmarks = []
def append_landmark(self, landmark): # add a landmark
landmark.id = len(self.landmarks) # give the new landmark an ID
self.landmarks.append(landmark)
def draw(self, ax, elems): # draw (calls each Landmark's draw in turn)
for lm in self.landmarks: lm.draw(ax, elems)
class IdealCamera:
def __init__(self, env_map, \
distance_range=(0.5, 6.0),
direction_range=(-math.pi/3, math.pi/3)):
self.map = env_map
self.lastdata = []
self.distance_range = distance_range
self.direction_range = direction_range
def visible(self, polarpos): # condition for a landmark to be measurable
if polarpos is None:
return False
return self.distance_range[0] <= polarpos[0] <= self.distance_range[1] \
and self.direction_range[0] <= polarpos[1] <= self.direction_range[1]
def data(self, cam_pose):
observed = []
for lm in self.map.landmarks:
z = self.observation_function(cam_pose, lm.pos)
if self.visible(z): # added visibility condition
observed.append((z, lm.id))
self.lastdata = observed
return observed
@classmethod
def observation_function(cls, cam_pose, obj_pos):
diff = obj_pos - cam_pose[0:2]
phi = math.atan2(diff[1], diff[0]) - cam_pose[2]
while phi >= np.pi: phi -= 2*np.pi
while phi < -np.pi: phi += 2*np.pi
return np.array( [np.hypot(*diff), phi ] ).T
def draw(self, ax, elems, cam_pose):
for lm in self.lastdata:
x, y, theta = cam_pose
distance, direction = lm[0][0], lm[0][1]
lx = x + distance * math.cos(direction + theta)
ly = y + distance * math.sin(direction + theta)
elems += ax.plot([x,lx], [y,ly], color="pink")
if __name__ == '__main__': ###name_indent
world = World(30, 0.1)
### create a map and add three landmarks ###
m = Map()
m.append_landmark(Landmark(2,-2))
m.append_landmark(Landmark(-1,-3))
m.append_landmark(Landmark(3,3))
world.append(m)
### create the robots ###
straight = Agent(0.2, 0.0)
circling = Agent(0.2, 10.0/180*math.pi)
robot1 = IdealRobot( np.array([ 2, 3, math.pi/6]).T, sensor=IdealCamera(m), agent=straight ) # added camera argument, tidied up
robot2 = IdealRobot( np.array([-2, -1, math.pi/5*6]).T, sensor=IdealCamera(m), agent=circling, color="red") # removed robot3
world.append(robot1)
world.append(robot2)
### run the animation ###
world.draw()
```
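The closed-form motion model inside `IdealRobot.state_transition` can be checked in isolation; the sketch below re-implements the same equations standalone:

```python
import math
import numpy as np

def state_transition(nu, omega, time, pose):
    # Same closed-form unicycle model as IdealRobot.state_transition:
    # straight-line motion when omega is ~0, an arc otherwise.
    t0 = pose[2]
    if math.fabs(omega) < 1e-10:
        return pose + np.array([nu * math.cos(t0),
                                nu * math.sin(t0),
                                omega]) * time
    return pose + np.array([nu / omega * (math.sin(t0 + omega * time) - math.sin(t0)),
                            nu / omega * (-math.cos(t0 + omega * time) + math.cos(t0)),
                            omega * time])

# Driving straight along +x for 1 s at 0.5 m/s moves the robot 0.5 m in x;
# a full turn (omega = 2*pi over 1 s) brings it back to its start position.
print(state_transition(0.5, 0.0, 1.0, np.array([0.0, 0.0, 0.0])))
```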
## Overview/To-Do
This one tries a different, more realistic detector layout.
```
%matplotlib inline
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from BurstCube.LocSim.GRB import *
from BurstCube.LocSim.Detector import *
from BurstCube.LocSim.Spacecraft import *
from BurstCube.LocSim.Stats import calcNorms, addErrors, calcNormsWithError
```
## Set up
These are actually the default pointings but I put it here to show you how to set up various detectors. Just four smaller detectors this time.
```
#Evenly spaced around azimuth
#Staggered in zenith
#Arbitrary type
pointings = {'01': ('90:0:0','8:0:0'),
'02': ('180:0:0','10:0:0'),
'03': ('270:0:0','12:0:0'),
'04': ('360:0:0','14:0:0')}
```
Set up a spacecraft object with the pointings of the detector you've decided on. The spacecraft defaults to a position above DC at an elevation of 550 km (about the orbit of Fermi).
```
spacecraft = Spacecraft(pointings, window = 0.1)
```
Set up some points in RA/Dec to calculate exposures and then access the 'exposure' function of the detector objects within the spacecraft object to plot the exposure.
```
res = 250
rr,dd = np.meshgrid(np.linspace(0,360,res,endpoint=False),np.linspace(-90,90,res))
exposure_positions = np.vstack([rr.ravel(),dd.ravel()])
exposures = np.array([[detector.exposure(position[0],position[1]) for position in exposure_positions.T]
for detector in spacecraft.detectors])
plt.figure(figsize=(20,4))
m = Basemap(projection='moll',lon_0=180.,resolution='c')
x,y = m(rr,dd)
for sp in range(4):
plt.subplot(2, 2, sp+1)
m.pcolormesh(x,y,exposures[sp].reshape((res,res)))
plt.show()
plt.figure(figsize=(8,10))
m = Basemap(projection='moll',lon_0=180,resolution='c')
m.drawparallels(np.arange(-90.,120.,30.))
m.drawmeridians(np.arange(0.,420.,60.))
x,y = m(rr,dd)
m.pcolormesh(x,y,exposures.sum(axis=0).reshape((res,res)))
m.colorbar()
plt.show()
rr,dd = np.meshgrid(np.linspace(0,360,55,endpoint=False),np.linspace(-90,90,55))
training_positions = np.vstack([rr.ravel(),dd.ravel()])
exposures = np.array([[detector.exposure(position[0],position[1]) for position in training_positions.T]
for detector in spacecraft.detectors])
training_grbs = [GRB(position[0],position[1],binz=.001) for position in training_positions.T[exposures.sum(axis=0) > 0.]]
pos = np.array([[grb.eph._ra*180./np.pi,grb.eph._dec*180./np.pi] for grb in training_grbs])
plt.figure(figsize=(8,10))
m = Basemap(projection='moll',lon_0=180,resolution='c')
m.drawparallels(np.arange(-90.,120.,30.))
m.drawmeridians(np.arange(0.,420.,60.))
x,y = m(pos[:,0],pos[:,1])
m.scatter(x,y,3,marker='o',color='k')
plt.show()
training_counts = spacecraft.throw_grbs(training_grbs,scaled=True)
```
## Setup and throw a random sample of GRBs
Note that I'm only throwing them in the north since the Earth blocks the south.
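One caveat (an aside, not a change to the analysis): drawing Dec uniformly in [-90, 90] degrees, as below, puts more GRBs per unit solid angle near the poles than near the equator. If a sky-uniform sample is wanted instead, one sketch is to draw sin(Dec) uniformly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
ra = 360.0 * rng.random(n)
# dec = arcsin(u) with u uniform in [-1, 1] gives constant density per unit
# solid angle, unlike drawing dec uniformly in [-90, 90] degrees.
dec = np.degrees(np.arcsin(2.0 * rng.random(n) - 1.0))
print(dec.min(), dec.max())
```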
```
real_positions = np.array(list(zip(360.*np.random.random_sample(2000),180.*np.random.random_sample(2000)-90.)))
exposures = np.array([[detector.exposure(position[0],position[1]) for position in real_positions]
for detector in spacecraft.detectors])
real_grbs = [GRB(position[0],position[1],binz=0.001) for position in real_positions[exposures.sum(axis=0) > 0.]]
np.shape(real_grbs)
real_counts = spacecraft.throw_grbs(real_grbs, scaled=True)
pos = np.array([[grb.eph._ra*180./np.pi,grb.eph._dec*180./np.pi] for grb in real_grbs])
plt.figure(figsize=(8,10))
m = Basemap(projection='moll',lon_0=180,resolution='c')
m.drawparallels(np.arange(-90.,120.,30.))
m.drawmeridians(np.arange(0.,420.,60.))
x,y = m(pos[:,0],pos[:,1])
m.scatter(x,y,3,marker='o',color='k')
plt.show()
norms = calcNorms(real_counts,training_counts)
real_counts_err = addErrors(real_counts,training_counts)
norms_errp, norms_errm = calcNormsWithError(real_counts,training_counts,real_counts_err)
```
Find the minimum distance of each GRB.
```
loc_mins = [norm.argmin() for norm in norms]
loc_mins_errm = [norm.argmin() for norm in norms_errm]
loc_mins_errp = [norm.argmin() for norm in norms_errp]
```
Now, calculate the distance from each real GRB to the training GRB picked out by the distance measurement above.
```
errors = [eph.separation(grb.eph,training_grbs[loc_mins[idx]].eph)*180./np.pi for idx,grb in enumerate(real_grbs)]
errors_errm = [eph.separation(grb.eph,training_grbs[loc_mins_errm[idx]].eph)*180./np.pi for idx,grb in enumerate(real_grbs)]
errors_errp = [eph.separation(grb.eph,training_grbs[loc_mins_errp[idx]].eph)*180./np.pi for idx,grb in enumerate(real_grbs)]
```
Plot and save the cumulative distribution of this error.
```
hist_data = plt.hist(errors,bins=100,density=True, histtype='step', cumulative=True)
hist_data_errm = plt.hist(errors_errm,bins=100,density=True, histtype='step', cumulative=True)
hist_data_errp = plt.hist(errors_errp,bins=100,density=True, histtype='step', cumulative=True)
plt.plot()
```
The 1-sigma region contains about 68% of events. Below is a quick lookup of the error value whose cumulative fraction most closely matches 0.68.
```
avg_stat = np.average([hist_data_errm[1][np.abs(hist_data_errm[0] - 0.68).argmin()],
hist_data_errp[1][np.abs(hist_data_errp[0] - 0.68).argmin()]])
print('Systematic Error: {:,.2f}'.format(hist_data[1][np.abs(hist_data[0] - 0.68).argmin()]))
print('Statistical Error: {:,.2f}'.format(avg_stat))
pos = np.array([[grb.eph._ra*180./np.pi,grb.eph._dec*180./np.pi] for grb in real_grbs])
plt.figure(figsize=(20,10))
m = Basemap(projection='moll',lon_0=180,resolution='c')
m.drawparallels(np.arange(-90.,120.,30.))
m.drawmeridians(np.arange(0.,420.,60.))
x,y = m(pos[:,0],pos[:,1])
#m.scatter(x,y,marker='o',c=errors, s=np.array(errors)*10,cmap=plt.cm.hsv)
m.scatter(x,y,marker='o',c=errors,s=100.,cmap=plt.cm.hsv)
plt.colorbar(shrink=0.5)
plt.savefig('Sky Map with Errors.pdf', transparent = True)
plt.show()
```
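An equivalent, more direct way to read off the 68% containment value is `np.percentile`; a small sketch on a stand-in error sample (the real `errors` arrays above would be used the same way):

```python
import numpy as np

# Toy stand-in for the per-GRB localization errors computed above.
rng = np.random.default_rng(1)
errors = np.abs(rng.normal(0.0, 5.0, size=1000))

# 68% containment radius: the value below which 68% of the errors fall.
r68 = np.percentile(errors, 68)
print("68% containment: {:.2f} deg".format(r68))
```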
```
from __future__ import print_function
import matplotlib.pyplot as plt
%matplotlib inline
import SimpleITK as sitk
print(sitk.Version())
from myshow import myshow
# Download data to work on
%run update_path_to_download_script
from downloaddata import fetch_data as fdata
OUTPUT_DIR = "Output"
```
This section of the Visible Human Male is about 1.5GB. To expedite processing and registration we crop the region of interest, and reduce the resolution. Take note that the physical space is maintained through these operations.
```
fixed_rgb = sitk.ReadImage(fdata("vm_head_rgb.mha"))
fixed_rgb = fixed_rgb[735:1330,204:975,:]
fixed_rgb = sitk.BinShrink(fixed_rgb,[3,3,1])
moving = sitk.ReadImage(fdata("vm_head_mri.mha"))
myshow(moving)
# Segment blue ice
seeds = [[10,10,10]]
fixed_mask = sitk.VectorConfidenceConnected(fixed_rgb, seedList=seeds, initialNeighborhoodRadius=5, numberOfIterations=4, multiplier=8)
# Invert the segment and choose largest component
fixed_mask = sitk.RelabelComponent(sitk.ConnectedComponent(fixed_mask==0))==1
myshow(sitk.Mask(fixed_rgb, fixed_mask));
# pick red channel
fixed = sitk.VectorIndexSelectionCast(fixed_rgb,0)
fixed = sitk.Cast(fixed,sitk.sitkFloat32)
moving = sitk.Cast(moving,sitk.sitkFloat32)
initialTransform = sitk.Euler3DTransform()
initialTransform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_mask,moving.GetPixelID()), moving, initialTransform, sitk.CenteredTransformInitializerFilter.MOMENTS)
print(initialTransform)
def command_iteration(method):
print("{0} = {1} : {2}".format(method.GetOptimizerIteration(),
method.GetMetricValue(),
method.GetOptimizerPosition()))
sys.stdout.flush()
tx = initialTransform
R = sitk.ImageRegistrationMethod()
R.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
R.SetOptimizerAsGradientDescentLineSearch(learningRate=1,numberOfIterations=100)
R.SetOptimizerScalesFromIndexShift()
R.SetShrinkFactorsPerLevel([4,2,1])
R.SetSmoothingSigmasPerLevel([8,4,2])
R.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
R.SetMetricSamplingStrategy(R.RANDOM)
R.SetMetricSamplingPercentage(0.1)
R.SetInitialTransform(tx)
R.SetInterpolator(sitk.sitkLinear)
import sys
R.RemoveAllCommands()
R.AddCommand( sitk.sitkIterationEvent, lambda: command_iteration(R) )
outTx = R.Execute(sitk.Cast(fixed,sitk.sitkFloat32), sitk.Cast(moving,sitk.sitkFloat32))
print("-------")
print(tx)
print("Optimizer stop condition: {0}".format(R.GetOptimizerStopConditionDescription()))
print(" Iteration: {0}".format(R.GetOptimizerIteration()))
print(" Metric value: {0}".format(R.GetMetricValue()))
tx.AddTransform(sitk.Transform(3,sitk.sitkAffine))
R.SetOptimizerAsGradientDescentLineSearch(learningRate=1,numberOfIterations=100)
R.SetOptimizerScalesFromIndexShift()
R.SetShrinkFactorsPerLevel([2,1])
R.SetSmoothingSigmasPerLevel([4,1])
R.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
R.SetInitialTransform(tx)
outTx = R.Execute(sitk.Cast(fixed,sitk.sitkFloat32), sitk.Cast(moving,sitk.sitkFloat32))
R.GetOptimizerStopConditionDescription()
resample = sitk.ResampleImageFilter()
resample.SetReferenceImage(fixed_rgb)
resample.SetInterpolator(sitk.sitkBSpline)
resample.SetTransform(outTx)
resample.AddCommand(sitk.sitkProgressEvent, lambda: print("\rProgress: {0:03.1f}%...".format(100*resample.GetProgress()),end=''))
resample.AddCommand(sitk.sitkProgressEvent, lambda: sys.stdout.flush())
resample.AddCommand(sitk.sitkEndEvent, lambda: print("Done"))
out = resample.Execute(moving)
out_rgb = sitk.Cast( sitk.Compose( [sitk.RescaleIntensity(out)]*3), sitk.sitkVectorUInt8)
vis_xy = sitk.CheckerBoard(fixed_rgb, out_rgb, checkerPattern=[8,8,1])
vis_xz = sitk.CheckerBoard(fixed_rgb, out_rgb, checkerPattern=[8,1,8])
vis_xz = sitk.PermuteAxes(vis_xz, [0,2,1])
myshow(vis_xz,dpi=30)
import os
sitk.WriteImage(out, os.path.join(OUTPUT_DIR, "example_registration.mha"))
sitk.WriteImage(vis_xy, os.path.join(OUTPUT_DIR, "example_registration_xy.mha"))
sitk.WriteImage(vis_xz, os.path.join(OUTPUT_DIR, "example_registration_xz.mha"))
```
# BUSINESS ANALYTICS
You are the owner of a retail firm and want to see how your company is performing. You are interested in finding the weak areas you can work on to make more profit. What business problems can you derive by looking into the data?
```
# Importing certain libraries
import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
%matplotlib inline
```
## Understanding the data
```
# Importing the dataset
data = pd.read_csv(r"D:/TSF/Task 5/SampleSuperstore.csv")
# Displaying the Dataset
data.head()
# Gathering the basic Information
data.describe()
# Learning about the different datatypes present in the dataset
data.dtypes
# Checking for any null or missing values
data.isnull().sum()
```
Since there are no null or missing values present, we can move on to data exploration.
## Exploratory Data Analysis
```
# First, using seaborn pairplot for data visualisation
sb.set(style = "whitegrid")
plt.figure(figsize = (20, 10))
sb.pairplot(data, hue = "Quantity")
```
We can clearly see that there are 14 distinct order-quantity values in our dataset.
```
# Second, using seaborn heatmap for data visualization
plt.figure(figsize = (7, 5))
sb.heatmap(data.corr(), annot = True, fmt = ".2g", linewidth = 0.5, linecolor = "Black", cmap = "YlOrRd")
```
Here we can see that Sales and Profit are positively correlated, as expected.
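The same check can be made numerically with `pandas.Series.corr`; a minimal sketch on toy data (the real dataframe's `Sales`/`Profit` columns would be used the same way):

```python
import pandas as pd

# Toy frame standing in for the SampleSuperstore columns.
toy = pd.DataFrame({"Sales":  [10.0, 20.0, 30.0, 40.0],
                    "Profit": [ 1.0,  2.5,  2.8,  4.1]})

# Pearson correlation coefficient, in [-1, 1].
corr = toy["Sales"].corr(toy["Profit"])
print(round(corr, 2))
```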
```
# Third, using seaborn countplot for data visualization
sb.countplot(x = data["Country"])
plt.show()
```
Our dataset contains data from the United States only.
```
sb.countplot(x = data["Segment"])
plt.show()
```
The Consumer segment has the most entries, and Home Office has the fewest.
```
sb.countplot(x = data["Region"])
plt.show()
```
Most entries are from the West region of the United States, followed by East, Central and South.
```
sb.countplot(x = data["Ship Mode"])
plt.show()
```
This shows that the business mostly uses the Standard shipping class compared to the other classes.
```
plt.figure(figsize = (8, 8))
sb.countplot(x = data["Quantity"])
plt.show()
```
Out of the 14 quantity values present, 2 and 3 are the most common.
```
plt.figure(figsize = (10, 8))
sb.countplot(x = data["State"])
plt.xticks(rotation = 90)
plt.show()
```
Looking carefully, we can see that the most sales happened in California, followed by New York and Texas.
The fewest sales happened in North Dakota and West Virginia.
```
sb.countplot(x = data["Category"])
plt.show()
```
So, our business deals mostly in the Office Supplies category, followed by Furniture and then Technology products.
```
plt.figure(figsize = (10, 8))
sb.countplot(x = data['Sub-Category'])
plt.xticks(rotation = 90)
plt.show()
```
Within the sub-categories, Binders have the most entries, followed by Paper and Furnishings, while Copiers, Machines etc. have the fewest.
```
# Forth, using Seaborn barplot for data visualization
plt.figure(figsize = (12, 10))
sb.barplot(x = data["Sub-Category"], y = data["Profit"], capsize = .1, saturation = .5)
plt.xticks(rotation = 90)
plt.show()
```
Among the sub-categories, Bookcases, Tables and Supplies are facing losses compared to the other categories, so the business owner needs to pay attention to these three.
### Now, to compare specific features of the business, we use some further exploration operations
```
# Fifth, using regression plot for data visualization
plt.figure(figsize = (10, 8))
sb.regplot(x = data["Sales"], y = data["Profit"], marker = "X", color = "r")
plt.show()
```
This relationship does not seem to be linear, so it doesn't help much.
```
plt.figure(figsize = (10, 8))
sb.regplot(x = data["Quantity"], y = data["Profit"], color = "black", y_jitter=.1)
plt.show()
```
This relationship is roughly linear. Quantity '5' has the maximum profit compared to the others.
```
plt.figure(figsize = (10, 8))
sb.regplot(x = data["Quantity"], y = data["Sales"], color = "m", marker = "+", y_jitter=.1)
plt.show()
```
This relationship is also roughly linear. Quantity '6' has the maximum sales compared to the others.
```
# Sixth, using seaborn lineplot for data visualisation
plt.figure(figsize = (10, 8))
sb.lineplot(x = data["Discount"], y = data["Profit"], color = "orange", label = "Discount")
plt.legend()
plt.show()
```
As expected, at a 50% discount the profit is negligible or negative, while at a 10% discount there is a good level of profit.
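This relationship can be quantified with a groupby; a minimal sketch on toy rows (the real `data` frame would be grouped the same way):

```python
import pandas as pd

# Toy rows mimicking the Discount/Profit columns used in the plot above.
toy = pd.DataFrame({"Discount": [0.0, 0.0, 0.1, 0.1, 0.5, 0.5],
                    "Profit":   [30.0, 50.0, 20.0, 25.0, -10.0, -40.0]})

# Mean profit at each discount level, the same relationship the lineplot shows.
profit_by_discount = toy.groupby("Discount")["Profit"].mean()
print(profit_by_discount)
```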
```
plt.figure(figsize = (10, 8))
sb.lineplot(x = data["Sub-Category"], y = data["Profit"], color = "blue", label = "Sales")
plt.xticks(rotation = 90)
plt.legend()
plt.show()
```
With Copiers, Business makes the largest Profit.
```
plt.figure(figsize = (10, 8))
sb.lineplot(x = data["Quantity"], y = data["Profit"], color = "red", label = "Profit")
plt.legend()
plt.show()
```
Quantity '13' has the maximum profit.
### WHAT CAN BE DERIVED FROM ABOVE VISUALIZATIONS :
* Improvements should be made to the Same Day shipment mode.
* The business needs more work in the Southern region of the USA.
* Office Supplies performs well; the Technology and Furniture categories need more work.
* Very few orders involve Copiers.
* Most customers are from California and New York; the business should expand into other parts of the USA as well.
* The company is facing losses on sales of Bookcases and Tables.
* The company earns a large profit per Copier sale, but the sales volume is very low, so the number of Copier sales needs to increase.
* When state-level profits are compared with the discounts offered in each state, the states that allowed larger discounts went into loss.
* Profit and discount show a weak, negative relationship; this should be kept in mind before making further business decisions.
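The weak negative Profit–Discount relationship noted above can be quantified with a correlation coefficient. A minimal sketch with made-up values — the actual notebook would pass `data["Discount"]` and `data["Profit"]` instead:

```python
from statistics import mean

# hypothetical discount/profit pairs for illustration only
discount = [0.0, 0.1, 0.2, 0.3, 0.5]
profit = [50.0, 40.0, 10.0, -5.0, -30.0]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(discount, profit)
print(r)  # a value near -1 would confirm a strong negative relationship
```

With real data you could equivalently call `data["Discount"].corr(data["Profit"])` on the pandas Series.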
# ASSIGNMENT COMPLETED !!
### LOAD DATA
```
import csv # for csv file import
import numpy as np
import os
import cv2
import math
#from keras import optimizers
from sklearn.utils import shuffle # to shuffle data in generator
from sklearn.model_selection import train_test_split # to split data into Training + Validation
def get_file_data(file_path, header=False):
# function to read in data from driving_log.csv
samples = []
with open(file_path + '/driving_log.csv') as csvfile:
reader = csv.reader(csvfile)
# if header is set to true then skip first line of csv
if header:
# if header exists iterate to next item in list, returns -1 if exhausted
next(reader, -1)
for line in reader:
# loop through reader appending each line to samples array
samples.append(line)
return samples
def stats_print(X_train,y_train):
instance_count = len(y_train)
image_count = len(X_train)
num_zeros = (y_train == 0.0).sum()  # 0.0 == -0.0 in floating point, so one test suffices
num_near_zero = ((y_train < 0.0174) & (y_train > -0.0174)).sum()
num_left = (y_train < 0.0).sum()
num_right = (y_train > 0.0).sum()
deg = math.degrees(0.0174)
rad = math.radians(1)
print("Total number of steering instances: {0}".format(instance_count))
print("Total number of image instances: {0}".format(image_count))
print("Number of instances with 0 as steering Angle: {0} ({1:.2f}%)".format(num_zeros, (num_zeros/instance_count)*100))
print("Number of instances < +/-1 degree as steering Angle: {0} ({1:.2f}%)".format(num_near_zero, (num_near_zero/instance_count)*100))
print("Number of instances with left steering Angle: {0} ({1:.2f}%)".format(num_left, (num_left/instance_count)*100))
print("Number of instances with right steering Angle: {0} ({1:.2f}%)".format(num_right, (num_right/instance_count)*100))
def generator(samples, batch_size=32):
"""
Generate the required images and measurements for training.
`samples` is a list of pairs (`imagePath`, `measurement`).
"""
num_samples = len(samples)
while 1: # Loop forever so the generator never terminates
samples = shuffle(samples)  # from sklearn.utils, imported above
for offset in range(0, num_samples, batch_size):
batch_samples = samples[offset:offset+batch_size]
images = []
angles = []
for imagePath, measurement in batch_samples:
originalImage = cv2.imread(imagePath)
image = cv2.cvtColor(originalImage, cv2.COLOR_BGR2RGB)
images.append(image)
angles.append(measurement)
# Flipping
images.append(cv2.flip(image,1))
angles.append(measurement*-1.0)
# convert to numpy arrays (cropping to the road section is done in the model's Cropping2D layer)
inputs = np.array(images)
outputs = np.array(angles)
yield shuffle(inputs, outputs)
```
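Because the generator above yields one batch per step (and doubles each batch via flipping), the `steps_per_epoch` passed to Keras should count batches, not samples. A minimal sketch of the arithmetic, using a hypothetical sample count:

```python
import math

# hypothetical dataset size; the notebook derives this from driving_log.csv
num_train_samples = 6428
batch_size = 32

# one generator step consumes batch_size samples (and yields 2x images after flipping),
# so an epoch needs ceil(num_samples / batch_size) steps
steps_per_epoch = math.ceil(num_train_samples / batch_size)
print(steps_per_epoch)
```

Passing `len(train_samples)` directly would make each "epoch" iterate the data roughly `batch_size` times over.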
### NVIDIA NET FUNCTION
```
from keras.models import Sequential
from keras.layers import Flatten, Dense, Lambda, Cropping2D, Dropout
from keras.layers.convolutional import Conv2D
from keras.layers.pooling import MaxPooling2D
def createPreProcessingLayers():
"""
Creates a model with the initial pre-processing layers.
"""
model = Sequential()
model.add(Lambda(lambda x: (x / 255.0) - 0.5, input_shape=(160,320,3)))
model.add(Cropping2D(cropping=((50,20), (0,0))))
return model
def net_NVIDIA():
# NVIDIA Convolutional Network function
# create a sequential model
#model = Sequential()
# add pre-processing steps - normalising the data and mean centre the data
# add a lambda layer for normalisation
# normalise image by divide each element by 255 (max value of an image pixel)
#model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))
# after image is normalised in a range 0 to 1 - mean centre it by subtracting 0.5 from each element - shifts mean from 0.5 to 0
# training loss and validation loss should be much smaller
# crop the image to remove pixels that are not adding value - top 70, and bottom 25 rows
#model.add(Cropping2D(cropping=((70, 25), (0, 0))))
model = createPreProcessingLayers()
# keras auto infer shape of all layers after 1st layer
# 1st layer
#model.add(Conv2D(24, (5, 5), subsample=(2, 2), activation="relu"))
model.add(Conv2D(24, (5, 5), activation="elu", strides=(2, 2)))
# 2nd layer
#model.add(Conv2D(36, (5, 5), subsample=(2, 2), activation="relu"))
model.add(Conv2D(36, (5, 5), activation="elu", strides=(2, 2)))
# 3rd layer
#model.add(Conv2D(48, (5, 5), subsample=(2, 2), activation="relu"))
model.add(Conv2D(48, (5, 5), activation="elu", strides=(2, 2)))
# 4th layer
model.add(Conv2D(64, (3, 3), activation="elu"))
# 5th layer
model.add(Conv2D(64, (3, 3), activation="elu"))
# 6th layer
model.add(Flatten())
    # 7th layer - add fully connected layer with output of 100
    model.add(Dense(100))
    # 8th layer - add fully connected layer with output of 50
    model.add(Dense(50))
    # 9th layer - add fully connected layer with output of 10
    model.add(Dense(10))
    # 10th layer - add fully connected layer with output of 1
model.add(Dense(1))
# summarise neural net and display on screen
model.summary()
return model
def train_model(model, train_samples, validation_samples, model_path, set_epochs= 3):
#model.compile(loss='mse', optimizer='adam')
#adam = optimizers.Adam(lr=0.001)
model.compile(loss='mse', optimizer='Adam', metrics=['mse', 'mae', 'mape', 'cosine', 'acc'])
#model.compile(loss='mse', optimizer='adam'(lr=0.001), metrics=['mse', 'mae', 'mape', 'cosine', 'acc'])
#history_object = model.fit(inputs, outputs, validation_split=0.2, shuffle=True, epochs=set_epochs, verbose=1)
train_generator = generator(train_samples)
#print (train_generator[0])
validation_generator = generator(validation_samples)
history_object = model.fit_generator(train_generator, steps_per_epoch=math.ceil(len(train_samples)/32), validation_data=validation_generator, validation_steps=math.ceil(len(validation_samples)/32), epochs=set_epochs, verbose=1)  # steps count batches per epoch, not samples
model_object = 'Final_' + model_path + str(set_epochs) + '.h5'
model.save(model_object)
print("Model saved at " + model_object)
return history_object
## main program
import sklearn
#samples = get_file_data('./my_driving')
data_samples = get_file_data('./data')
#print(data_samples[0])
# Split dataset: 80% training; 20% validation
train_samples, validation_samples = train_test_split(data_samples, test_size=0.2)
#X_train_gen, y_train_gen = generator(data_samples)
print(train_samples[0])
# Create Model
model = net_NVIDIA() #input_shape=(160, 320, 3)
num_epoch = 1
model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae', 'mape', 'cosine', 'acc'])
train_generator = generator(train_samples, batch_size=32)
validation_generator = generator(validation_samples, batch_size=32)
history_object = model.fit_generator(train_generator, steps_per_epoch=math.ceil(len(train_samples)/32), \
validation_data=validation_generator, \
validation_steps=math.ceil(len(validation_samples)/32), epochs=num_epoch, verbose=1)
# train the model and save the model
#history_object = train_model(model, train_samples, validation_samples, './Gen_NVidia_', num_epoch)
### print the keys contained in the history object
print(history_object.history.keys())
### plot the training and validation loss for each epoch
import matplotlib.pyplot as plt  # needed for the plots below
plt.plot(history_object.history['loss'])
plt.plot(history_object.history['val_loss'])
plt.title('model mean squared error loss')
plt.ylabel('mean squared error loss')
plt.xlabel('epoch')
plt.legend(['training set', 'validation set'], loc='upper right')
#plt.savefig("Loss_NVidia_6.png")
plt.savefig("Final_Loss_NVidia_{0}.png".format(num_epoch))
plt.show()
```
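The flip augmentation inside the generator can be checked in isolation. A sketch using NumPy arrays instead of `cv2.flip` — a horizontal flip mirrors the image's columns, and the steering angle changes sign:

```python
import numpy as np

# a tiny stand-in "image": 2 rows x 3 columns
image = np.array([[1, 2, 3],
                  [4, 5, 6]])
angle = 0.25

# cv2.flip(image, 1) mirrors columns; numpy's [:, ::-1] slice does the same
flipped = image[:, ::-1]
flipped_angle = angle * -1.0
print(flipped[0].tolist(), flipped_angle)
```

This doubles the training data and balances left- and right-steering examples, which is why the generator appends both versions of every sample.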
# Transfer Learning Template
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
```
# Allowed Parameters
These are allowed parameters, not defaults.
Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any are missing).
Papermill uses the cell tag "parameters" to inject the real parameters below this cell.
Enable tag display to see what I mean.
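The strict check this template performs boils down to a set comparison between the required and supplied keys. A minimal sketch with hypothetical names (the notebook itself compares `required_parameters` against the keys of the injected `parameters` dict):

```python
# hypothetical required/supplied parameters for illustration
required = {"experiment_name", "lr", "seed"}
supplied = {"experiment_name": "demo", "lr": 0.001, "seed": 1337, "extra": True}

# keys that must be added, and keys that should not be there at all
missing = required - set(supplied)
extra = set(supplied) - required
print(sorted(missing), sorted(extra))
```

Requiring the sets to match exactly (rather than only checking for missing keys) catches typos in injected parameter names early.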
```
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_1v2:wisig-oracle.run1",
"device": "cuda",
"lr": 0.0001,
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_accuracy",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10",
"1-12",
"1-14",
"1-16",
"1-18",
"1-19",
"1-8",
"10-11",
"10-17",
"10-4",
"10-7",
"11-1",
"11-10",
"11-19",
"11-20",
"11-4",
"11-7",
"12-19",
"12-20",
"12-7",
"13-14",
"13-18",
"13-19",
"13-20",
"13-3",
"13-7",
"14-10",
"14-11",
"14-12",
"14-13",
"14-14",
"14-19",
"14-20",
"14-7",
"14-8",
"14-9",
"15-1",
"15-19",
"15-6",
"16-1",
"16-16",
"16-19",
"16-20",
"17-10",
"17-11",
"18-1",
"18-10",
"18-11",
"18-12",
"18-13",
"18-14",
"18-15",
"18-16",
"18-17",
"18-19",
"18-2",
"18-20",
"18-4",
"18-5",
"18-7",
"18-8",
"18-9",
"19-1",
"19-10",
"19-11",
"19-12",
"19-13",
"19-14",
"19-15",
"19-19",
"19-2",
"19-20",
"19-3",
"19-4",
"19-6",
"19-7",
"19-8",
"19-9",
"2-1",
"2-13",
"2-15",
"2-3",
"2-4",
"2-5",
"2-6",
"2-7",
"2-8",
"20-1",
"20-12",
"20-14",
"20-15",
"20-16",
"20-18",
"20-19",
"20-20",
"20-3",
"20-4",
"20-5",
"20-7",
"20-8",
"3-1",
"3-13",
"3-18",
"3-2",
"3-8",
"4-1",
"4-10",
"4-11",
"5-1",
"5-5",
"6-1",
"6-15",
"6-6",
"7-10",
"7-11",
"7-12",
"7-13",
"7-14",
"7-7",
"7-8",
"7-9",
"8-1",
"8-13",
"8-14",
"8-18",
"8-20",
"8-3",
"8-8",
"9-1",
"9-7",
],
"domains": [1, 2, 3, 4],
"num_examples_per_domain_per_label": -1,
"pickle_path": "/root/csc500-main/datasets/wisig.node3-19.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "Wisig_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 10000,
"pickle_path": "/root/csc500-main/datasets/oracle.Run1_10kExamples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": [],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1",
},
],
"dataset_seed": 500,
"seed": 500,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we dont supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms == []: episode_transform = None
else: raise Exception("episode_transforms not implemented")
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
# Week 4
## Overview
Yay! It's week 4. Today we'll keep things light.
I've noticed that many of you are struggling a bit to keep up and are still working on exercises from the previous week. Thus, this week we only have two components, with no lectures and very little reading.
## Informal intro
[](https://www.youtube.com/watch?v=YX0kCReIZzk)
## Overview
* An exercise on visualizing geodata using a different set of tools from the ones we played with during Lecture 2.
* Thinking about visualization, data quality, and binning. Why ***looking at the details of the data before applying fancy methods*** is often important.
## Part 1: Visualizing geo-data
It turns out that `plotly` (which we used during Week 2) is not the only way of working with geo-data. There are many different ways to go about it. (The hard-core PhD and PostDoc researchers in my group simply use matplotlib, since that provides more control. For an example of that kind of thing, check out [this one](https://towardsdatascience.com/visualizing-geospatial-data-in-python-e070374fe621).)
Today, we'll try another library for geodata called "[Folium](https://github.com/python-visualization/folium)". It's good for you all to try out a few different libraries - remember that data visualization and analysis in Python is all about the ability to use many different tools.
The exercise below is based on code illustrated in this nice [tutorial](https://www.kaggle.com/daveianhickey/how-to-folium-for-maps-heatmaps-time-data), so let us start by taking a look at that one.
*Reading*. Read through the following tutorial
* "How to: Folium for maps, heatmaps & time data". Get it here: https://www.kaggle.com/daveianhickey/how-to-folium-for-maps-heatmaps-time-data
* (Optional) There are also some nice tricks in "Spatial Visualizations and Analysis in Python with Folium". Read it here: https://towardsdatascience.com/data-101s-spatial-visualizations-and-analysis-in-python-with-folium-39730da2adf
> *Exercise*: A new take on geospatial data.
>
>A couple of weeks ago (Part 4 of Week 2), we worked with spatial data by using the color intensity of shapefiles to show the counts of certain crimes within those individual areas. Today we look at studying geospatial data by plotting raw data points as well as heatmaps on top of actual maps.
>
> * First start by plotting a map of San Francisco with a nice tight zoom. Simply use the command `folium.Map([lat, lon], zoom_start=13)`, where you'll have to look up San Francisco's longitude and latitude.
> * Next, use the coordinates for SF City Hall `37.77919, -122.41914` to indicate its location on the map with a nice, pop-up enabled marker. (In the screenshot below, I used the black & white Stamen tiles, because they look cool).
> 
> * Now, let's plot some more data (no need for popups this time). Select a couple of months of data for `'DRUG/NARCOTIC'` and draw a little dot for each arrest for those two months. You could, for example, choose June-July 2016, but you can choose anything you like - the main concern is to not have too many points as this uses a lot of memory and makes Folium behave non-optimally.
> We can call this kind of visualization a *point scatter plot*.
Ok. Time for a little break. Note that a nice thing about Folium is that you can zoom in and out of the maps.
> * Now, let's play with **heatmaps**. You can figure out the appropriate commands by grabbing code from the main [tutorial](https://www.kaggle.com/daveianhickey/how-to-folium-for-maps-heatmaps-time-data) and modifying it to suit your needs.
> * To create your first heatmap, grab all arrests for the category `'SEX OFFENSES, NON FORCIBLE'` across all time. Play with parameters to get plots you like.
> * Now, comment on the differences between scatter plots and heatmaps.
> - What can you see using the scatter plots that you can't see using the heatmaps?
> - And *vice versa*: what do the heatmaps help you see that's difficult to distinguish in the scatter plots?
> * Play around with the various parameters for heatmaps. You can find a list here: https://python-visualization.github.io/folium/plugins.html
> * Comment on the effect of the various parameters on the heatmaps. How do they change the picture? (At least talk about `radius` and `max_zoom`.)
> For one combination of settings, my heatmap plot looks like this.
> 
> * In that screenshot, I've (manually) highlighted a specific hotspot for this type of crime. Use your detective skills to find out what's going on in that building on the 800 block of Bryant street ... and explain in your own words.
(*Fun fact*: I remembered the concentration of crime-counts discussed at the end of this exercise from when I did the course back in 2016. It popped up when I used a completely different framework for visualizing geodata called [`geoplotlib`](https://github.com/andrea-cuttone/geoplotlib). You can spot it if you go to that year's [lecture 2](https://nbviewer.jupyter.org/github/suneman/socialdataanalysis2016/blob/master/lectures/Week3.ipynb), exercise 4.)
For the final element of working with heatmaps, let's now use the cool Folium functionality `HeatMapWithTime` to create a visualization of how the patterns of your favorite crime type change over time.
> *Exercise*: Heat map movies. This exercise is a bit more independent than above - you get to make all the choices.
> * Start by choosing your favorite crime type, preferably one with spatial patterns that change over time (use your data exploration from the previous lectures to choose a good one).
> * Now, choose a time resolution. You could use daily, weekly, or monthly datasets in your movie. Again, the goal is to find interesting temporal patterns to display. We want at least 20 frames, though.
> * Create the movie using `HeatMapWithTime`.
> * Comment on your results:
> - What patterns does your movie reveal?
> - Motivate/explain the reasoning behind your choice of crime type and time resolution.
## Part 2: Errors in the data. The importance of looking at raw (or close to raw) data.
We started the course by plotting simple histogram plots that showed a lot of cool patterns. But sometimes the binning can hide imprecision, irregularity, and simple errors in the data that could be misleading. In the work we've done so far, we've already come across at least three examples of this in the SF data.
1. In the hourly activity for `PROSTITUTION` something surprising is going on on Wednesday. Remind yourself [**here**](https://raw.githubusercontent.com/suneman/socialdata2021/master/files/prostitution_hourly.png), where I've highlighted the phenomenon I'm talking about.
1. When we investigated the details of how the timestamps are recorded using jitter-plots, we saw that many more crimes were recorded e.g. on the hour, 15 minutes past the hour, and to a lesser extent in whole increments of 10 minutes. Crimes didn't appear to be recorded as frequently in between those round numbers. Remind yourself [**here**](https://raw.githubusercontent.com/suneman/socialdata2021/master/files/jitter_plot.png), where I've highlighted the phenomenon I'm talking about.
1. And finally, today we saw that the Hall of Justice seemed to be an unlikely hotspot for sex offences. Remind yourself [**here**](https://raw.githubusercontent.com/suneman/socialdata2021/master/files/crime_hot_spot.png).
> *Exercise*: Data errors. The data errors we discovered above become difficult to notice when we aggregate data (and when we calculate mean values, as well as statistics more generally). Thus, when we visualize, errors become difficult to notice when we bin the data. We explore this process in the exercise below.
>
>This last exercise for today has two parts.
> * In each of the three examples above, describe in your own words how the data-errors I call attention to above can bias the binned versions of the data. Also briefly mention how not noticing these errors can result in misconceptions about the underlying patterns of what's going on in San Francisco (and our modeling).
> * Find your own example of human noise in the data and visualize it.
<a href="http://cocl.us/pytorch_link_top">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/Pytochtop.png" width="750" alt="IBM Product " />
</a>
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/cc-logo-square.png" width="200" alt="cognitiveclass.ai logo" />
<h1>Using Dropout for Classification </h1>
<h2>Table of Contents</h2>
<p>In this lab, you will see how adding dropout to your model will decrease overfitting.</p>
<ul>
<li><a href="#Makeup_Data">Make Some Data</a></li>
<li><a href="#Model_Cost">Create the Model and Cost Function the PyTorch way</a></li>
<li><a href="#BGD">Batch Gradient Descent</a></li>
</ul>
<p>Estimated Time Needed: <strong>20 min</strong></p>
<hr>
<h2>Preparation</h2>
We'll need the following libraries
```
# Import the libraries we need for this lab
import torch
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from matplotlib.colors import ListedColormap
from torch.utils.data import Dataset, DataLoader
```
Use this function only for plotting:
```
# The function for plotting the diagram
def plot_decision_regions_3class(data_set, model=None):
cmap_light = ListedColormap([ '#0000FF','#FF0000'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#00AAFF'])
X = data_set.x.numpy()
y = data_set.y.numpy()
h = .02
x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
newdata = np.c_[xx.ravel(), yy.ravel()]
Z = data_set.multi_dim_poly(newdata).flatten()
f = np.zeros(Z.shape)
f[Z > 0] = 1
f = f.reshape(xx.shape)
if model != None:
model.eval()
XX = torch.Tensor(newdata)
_, yhat = torch.max(model(XX), 1)
yhat = yhat.numpy().reshape(xx.shape)
plt.pcolormesh(xx, yy, yhat, cmap=cmap_light)
plt.contour(xx, yy, f, cmap=plt.cm.Paired)
else:
plt.contour(xx, yy, f, cmap=plt.cm.Paired)
plt.pcolormesh(xx, yy, f, cmap=cmap_light)
plt.title("decision region vs True decision boundary")
```
Use this function to calculate accuracy:
```
# The function for calculating accuracy
def accuracy(model, data_set):
_, yhat = torch.max(model(data_set.x), 1)
return (yhat == data_set.y).numpy().mean()
```
<!--Empty Space for separating topics-->
<h2 id="Makeup_Data">Make Some Data</h2>
Create a nonlinearly separable dataset:
```
# Create data class for creating dataset object
class Data(Dataset):
# Constructor
def __init__(self, N_SAMPLES=1000, noise_std=0.15, train=True):
a = np.matrix([-1, 1, 2, 1, 1, -3, 1]).T
self.x = np.matrix(np.random.rand(N_SAMPLES, 2))
self.f = np.array(a[0] + (self.x) * a[1:3] + np.multiply(self.x[:, 0], self.x[:, 1]) * a[4] + np.multiply(self.x, self.x) * a[5:7]).flatten()
self.a = a
self.y = np.zeros(N_SAMPLES)
self.y[self.f > 0] = 1
self.y = torch.from_numpy(self.y).type(torch.LongTensor)
self.x = torch.from_numpy(self.x).type(torch.FloatTensor)
self.x = self.x + noise_std * torch.randn(self.x.size())
self.f = torch.from_numpy(self.f)
self.a = a
if train == True:
torch.manual_seed(1)
self.x = self.x + noise_std * torch.randn(self.x.size())
torch.manual_seed(0)
# Getter
def __getitem__(self, index):
return self.x[index], self.y[index]
# Get Length
def __len__(self):
        return len(self.x)
# Plot the diagram
def plot(self):
        X = self.x.numpy()
        y = self.y.numpy()
h = .02
x_min, x_max = X[:, 0].min(), X[:, 0].max()
y_min, y_max = X[:, 1].min(), X[:, 1].max()
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
        Z = self.multi_dim_poly(np.c_[xx.ravel(), yy.ravel()]).flatten()
f = np.zeros(Z.shape)
f[Z > 0] = 1
f = f.reshape(xx.shape)
plt.title('True decision boundary and sample points with noise ')
plt.plot(self.x[self.y == 0, 0].numpy(), self.x[self.y == 0,1].numpy(), 'bo', label='y=0')
plt.plot(self.x[self.y == 1, 0].numpy(), self.x[self.y == 1,1].numpy(), 'ro', label='y=1')
plt.contour(xx, yy, f,cmap=plt.cm.Paired)
plt.xlim(0,1)
plt.ylim(0,1)
plt.legend()
    # Make a multi-dimensional polynomial function
def multi_dim_poly(self, x):
x = np.matrix(x)
out = np.array(self.a[0] + (x) * self.a[1:3] + np.multiply(x[:, 0], x[:, 1]) * self.a[4] + np.multiply(x, x) * self.a[5:7])
out = np.array(out)
return out
```
Create a dataset object:
```
# Create a dataset object
data_set = Data(noise_std=0.2)
data_set.plot()
```
Validation data:
```
# Get some validation data
torch.manual_seed(0)
validation_set = Data(train=False)
```
<!--Empty Space for separating topics-->
<h2 id="Model_Cost">Create the Model, Optimizer, and Total Loss Function (Cost)</h2>
Create a custom module with three layers. <code>in_size</code> is the size of the input features, <code>n_hidden</code> is the size of the hidden layers, and <code>out_size</code> is the size of the output. <code>p</code> is the dropout probability. The default is 0, that is, no dropout.
```
# Create Net Class
class Net(nn.Module):
# Constructor
# p denotes probability
def __init__(self, in_size, n_hidden, out_size, p=0):
super(Net, self).__init__()
self.drop = nn.Dropout(p=p)
self.linear1 = nn.Linear(in_size, n_hidden)
self.linear2 = nn.Linear(n_hidden, n_hidden)
self.linear3 = nn.Linear(n_hidden, out_size)
# Prediction function
def forward(self, x):
x = F.relu(self.drop(self.linear1(x)))
x = F.relu(self.drop(self.linear2(x)))
x = self.linear3(x)
return x
```
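As a quick aside (not part of the lab), here is a small sketch of what `nn.Dropout` does in the two modes: in training mode each activation is zeroed with probability `p` and the survivors are scaled by `1/(1-p)`, while in evaluation mode dropout is the identity.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
drop = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

drop.train()          # training mode: each element zeroed with probability p
train_out = drop(x)   # surviving elements are scaled by 1 / (1 - p) = 2.0

drop.eval()           # evaluation mode: dropout does nothing
eval_out = drop(x)    # identical to x
```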
Create two model objects: <code>model</code> has no dropout and <code>model_drop</code> has a dropout probability of 0.5:
```
# Create two model objects: model without dropout and model with dropout
model = Net(2, 300, 2)
model_drop = Net(2, 300, 2, p=0.5)
```
<!--Empty Space for separating topics-->
```
model1 = torch.nn.Sequential(
torch.nn.Linear(in_features=3,out_features=3), torch.nn.Dropout(0.5),torch.nn.Sigmoid(),
torch.nn.Linear(in_features=3,out_features=4), torch.nn.Dropout(0.3),torch.nn.Sigmoid(),
torch.nn.Linear(in_features=4,out_features=3)
)
model1.parameters
```
<h2 id="BGD">Train the Model via Batch Gradient Descent</h2>
Set the model using dropout to training mode; this is the default mode, but it's good practice to set it explicitly in your code:
```
# Set the model to training mode
model_drop.train()
```
Train the model by using the Adam optimizer. See the unit on other optimizers. Use the Cross Entropy Loss:
```
# Set optimizer functions and criterion functions
# In this lab we will use the Adam optimizer, which gives more consistent performance. You can also try SGD.
optimizer_ofit = torch.optim.Adam(model.parameters(), lr=0.01)
optimizer_drop = torch.optim.Adam(model_drop.parameters(), lr=0.01)
criterion = torch.nn.CrossEntropyLoss()
```
Initialize a dictionary that stores the training and validation loss for each model:
```
# Initialize the LOSS dictionary to store the loss
LOSS = {}
LOSS['training data no dropout'] = []
LOSS['validation data no dropout'] = []
LOSS['training data dropout'] = []
LOSS['validation data dropout'] = []
```
Run 500 iterations of batch gradient descent:
```
# Train the model
epochs = 500
def train_model(epochs):
for epoch in range(epochs):
#all the samples are used for training
yhat = model(data_set.x)
# batch gradient descent as we can store all data in memory
yhat_drop = model_drop(data_set.x)
loss = criterion(yhat, data_set.y)
loss_drop = criterion(yhat_drop, data_set.y)
#store the loss for both the training and validation data for both models
LOSS['training data no dropout'].append(loss.item())
LOSS['validation data no dropout'].append(criterion(model(validation_set.x), validation_set.y).item())
LOSS['training data dropout'].append(loss_drop.item())
model_drop.eval()# make a prediction on val data will turn off the dropout
LOSS['validation data dropout'].append(criterion(model_drop(validation_set.x), validation_set.y).item())
model_drop.train()# set back to train when we continue training
optimizer_ofit.zero_grad()
optimizer_drop.zero_grad()
loss.backward()
loss_drop.backward()
optimizer_ofit.step()
optimizer_drop.step()
train_model(epochs)
```
Set the model with dropout to evaluation mode:
```
# Set the model to evaluation mode
model_drop.eval()
```
Test the model without dropout on the validation data:
```
# Print out the accuracy of the model without dropout
print("The accuracy of the model without dropout: ", accuracy(model, validation_set))
```
Test the model with dropout on the validation data:
```
# Print out the accuracy of the model with dropout
print("The accuracy of the model with dropout: ", accuracy(model_drop, validation_set))
```
You see that the model with dropout performs better on the validation data.
<h3>True Function</h3>
Plot the decision boundary and the prediction of the networks in different colors.
```
# Plot the decision boundary and the prediction
plot_decision_regions_3class(data_set)
```
Model without Dropout:
```
# The model without dropout
plot_decision_regions_3class(data_set, model)
```
Model with Dropout:
```
# The model with dropout
plot_decision_regions_3class(data_set, model_drop)
```
You can see that the model using dropout does better at tracking the function that generated the data.
Plot the loss for the training and validation data for both models. We use the log to make the difference more apparent:
```
# Plot the LOSS
plt.figure(figsize=(6.1, 10))
def plot_LOSS():
for key, value in LOSS.items():
plt.plot(np.log(np.array(value)), label=key)
plt.legend()
plt.xlabel("iterations")
plt.ylabel("Log of cost or total loss")
plot_LOSS()
```
You see that the model without dropout performs better on the training data but worse on the validation data, which suggests overfitting. The model using dropout shows the opposite pattern: better on the validation data, worse on the training data.
<!--Empty Space for separating topics-->
<a href="http://cocl.us/pytorch_link_bottom">
<img src="https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/DL0110EN/notebook_images%20/notebook_bottom%20.png" width="750" alt="PyTorch Bottom" />
</a>
<h2>About the Authors:</h2>
<a href="https://www.linkedin.com/in/joseph-s-50398b136/">Joseph Santarcangelo</a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.
Other contributors: <a href="https://www.linkedin.com/in/michelleccarey/">Michelle Carey</a>, <a href="www.linkedin.com/in/jiahui-mavis-zhou-a4537814a">Mavis Zhou</a>
<hr>
Copyright © 2018 <a href="cognitiveclass.ai?utm_source=bducopyrightlink&utm_medium=dswb&utm_campaign=bdu">cognitiveclass.ai</a>. This notebook and its source code are released under the terms of the <a href="https://bigdatauniversity.com/mit-license/">MIT License</a>.
## Introduction
In the real world, there exist many huge graphs that cannot be loaded on a single machine,
such as social networks and citation networks.
To deal with such graphs, PGL provides a Distributed Graph Engine framework to
support graph sampling on large-scale graph networks for distributed GNN training.
In this tutorial, we will walk through the steps of setting up a distributed graph engine for graph sampling.
We also provide a launch script for starting a distributed graph engine. To see more examples of distributed GNN training, please refer to [here](https://github.com/PaddlePaddle/PGL/tree/main/examples).
## Requirements
paddlepaddle>=2.1.0
pgl>=2.1.4
## Example of how to start a distributed graph engine service
Suppose we have the following graph with two types of nodes (u and t).
First, we should create a configuration file and specify the IP address of each machine.
Here we use two ports to simulate two machines.
After creating the configuration file and the IP address file, we can start the two graph servers.
Then we can use the client to sample neighbors or sample nodes from the graph servers.
```
import os
import sys
import re
import time
import tqdm
import argparse
import unittest
import shutil
import numpy as np
from pgl.utils.logger import log
from pgl.distributed import DistGraphClient, DistGraphServer
edges_file = """37 45 0.34
37 145 0.31
37 112 0.21
96 48 1.4
96 247 0.31
96 111 1.21
59 45 0.34
59 145 0.31
59 122 0.21
97 48 0.34
98 247 0.31
7 222 0.91
7 234 0.09
37 333 0.21
47 211 0.21
47 113 0.21
47 191 0.21
34 131 0.21
34 121 0.21
39 131 0.21"""
node_file = """u 98
u 97
u 96
u 7
u 59
t 48
u 47
t 45
u 39
u 37
u 34
t 333
t 247
t 234
t 222
t 211
t 191
t 145
t 131
t 122
t 121
t 113
t 112
t 111"""
tmp_path = "./tmp_distgraph_test"
if not os.path.exists(tmp_path):
os.makedirs(tmp_path)
with open(os.path.join(tmp_path, "edges.txt"), 'w') as f:
f.write(edges_file)
with open(os.path.join(tmp_path, "node_types.txt"), 'w') as f:
f.write(node_file)
# configuration file
config = """
etype2files: "u2e2t:./tmp_distgraph_test/edges.txt"
symmetry: True
ntype2files: "u:./tmp_distgraph_test/node_types.txt,t:./tmp_distgraph_test/node_types.txt"
"""
ip_addr = """127.0.0.1:8342
127.0.0.1:8343"""
with open(os.path.join(tmp_path, "config.yaml"), 'w') as f:
f.write(config)
with open(os.path.join(tmp_path, "ip_addr.txt"), 'w') as f:
f.write(ip_addr)
config = os.path.join(tmp_path, "config.yaml")
ip_addr = os.path.join(tmp_path, "ip_addr.txt")
shard_num = 10
gserver1 = DistGraphServer(config, shard_num, ip_addr, server_id=0)
gserver2 = DistGraphServer(config, shard_num, ip_addr, server_id=1)
client1 = DistGraphClient(config, shard_num=shard_num, ip_config=ip_addr, client_id=0)
client1.load_edges()
client1.load_node_types()
print("data loading finished")
# random sample nodes by node type
client1.random_sample_nodes(node_type="u", size=3)
# traverse all nodes from each server
node_generator = client1.node_batch_iter(batch_size=3, node_type="t", shuffle=True)
for nodes in node_generator:
print(nodes)
# sample neighbors
# note that the edge_type "u2e2t" is defined in the config.yaml file
nodes = [98, 7]
neighs = client1.sample_successor(nodes, max_degree=10, edge_type="u2e2t")
print(neighs)
```
# "[ML] What's the difference between a metric and a loss?"
- toc:true
- branch: master
- badges: false
- comments: true
- author: Peiyi Hung
- categories: [learning, machine learning]
In machine learning, we usually use two values to evaluate our model: a metric and a loss. For instance, if we are doing a binary classification task, our metric may be the accuracy and our loss would be the cross-entropy. They both show how well our model
performs. However, why do we need both rather than just use one of them? Also, what's the difference between them?
The short answer is that **the metric is for human while the loss is for your model.**
Based on the metric, machine learning practitioners such as data scientists and researchers assess a machine learning model. On the assessment, ML practitioners make decisions to address their problems or achieve their business goals. For example, say a data scientist aims to build a spam classifier to distinguish normal email from spam with 95% accuracy. First, the data scientist builds a model with 90% accuracy. Apparently, this result doesn't meet his business goal, so he tries to build a better one. After implementing some techniques, he might get a classifier with 97% accuracy, which goes beyond his goal. Since the goal is met, the data scientist decides to integrate this model into his data product. ML practitioners use the metric to tell whether their model is good enough.
On the other hand, a loss indicates in what direction your model should improve. The difference between machine learning and traditional programming is how they get the ability to solve a problem. Traditional programs solve problems by following exact instructions given by programmers. In contrast, machine learning models learn how to solve a problem by taking in some examples (data) and discovering the underlying patterns of the problem. How does a machine learning model learn? Most ML models learn using a gradient-based method. Here's how a gradient-based method (specifically, gradient descent in a supervised learning context) works:
1. A model takes in data and makes predictions.
1. Compute the loss based on the predictions and the true data.
1. Compute the gradients of the loss with respect to the parameters of the model.
1. Update these parameters based on these gradients.
The gradient of the loss helps our model get better and better. The reason why we need a loss is that a loss is **sensitive** enough to small changes so our model can improve based on it. More precisely, the gradient of the loss should vary if our parameters change slightly. In our spam classification example, accuracy is obviously not suitable as a loss since it only changes when some examples are classified differently. The cross-entropy is relatively smoother, so it is a good candidate for a loss. However, a metric does not have to be different from a loss. A metric can be a loss as long as it is sensitive enough. For instance, in a regression setting, MSE (mean squared error) can be both a metric and a loss.
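A tiny illustration of this sensitivity difference (a sketch, not from the original post): nudging the predicted probabilities leaves the accuracy untouched, while the cross-entropy responds.

```python
import numpy as np

y = np.array([1, 0, 1, 1])  # true labels

def accuracy(p, y):
    # Threshold probabilities at 0.5 and compare to the true labels
    return np.mean((p >= 0.5).astype(int) == y)

def cross_entropy(p, y):
    # Binary cross-entropy averaged over the samples
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

p_a = np.array([0.6, 0.4, 0.7, 0.8])
p_b = np.array([0.7, 0.3, 0.8, 0.9])  # slightly more confident, same hard predictions

# Accuracy cannot tell the two models apart, but the loss can:
accuracy(p_a, y) == accuracy(p_b, y)           # True
cross_entropy(p_b, y) < cross_entropy(p_a, y)  # True: b is rewarded for its confidence
```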
In summary, a metric helps ML practitioners evaluate their models, and a loss facilitates the learning process of an ML model.
# Convolutional Neural Networks
A CNN is made up of basic building blocks such as tensors, neurons, layers, and kernel weights and biases. In this lab, we use PyTorch to build an image classifier using a CNN. The objective is to learn CNNs using the PyTorch framework.
Please refer to the link below to learn more about CNNs:
https://poloclub.github.io/cnn-explainer/
## Import necessary libraries
```
import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.nn.functional as F
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor
from torchvision import datasets, transforms
from torchvision.utils import make_grid
from torch.utils.data.dataloader import DataLoader
from torch.optim.lr_scheduler import StepLR
from torch.utils.data import random_split
from torch.utils.data.sampler import SubsetRandomSampler
%matplotlib inline
```
### Download the MNIST dataset.
```
# choose the training and test datasets
train_data = datasets.MNIST(root='data', train=True,download=True, transform=ToTensor())
test_data = datasets.MNIST(root='data', train=False,download=True, transform=ToTensor())
```
## Splitting the training dataset into a train set and a validation set. This is done to avoid overfitting on the test set.
Use a simple algorithm to create the validation set:
First, create a list of indices of the training data. Then randomly shuffle those indices. Lastly, split the indices 80-20.
```
indices = np.arange(len(train_data))
np.random.shuffle(indices)
train_indices = indices[:int(len(indices)*0.8)]
test_indices = indices[len(train_indices):]
```
## Print data size
```
print(train_data)
print(test_data)
```
## Data Visualization
```
figure = plt.figure(figsize=(10, 8))
cols, rows = 5, 5
for i in range(1, cols * rows + 1):
sample_idx = torch.randint(len(train_data), size=(1,)).item()
img, label = train_data[sample_idx]
figure.add_subplot(rows, cols, i)
plt.title(label)
plt.axis("off")
plt.imshow(img.squeeze(), cmap="gray")
plt.show()
```
## Data preparation for training with PyTorch DataLoaders
```
# Obtaining training and validation batches
train_batch = SubsetRandomSampler(train_indices)
val_batch = SubsetRandomSampler(test_indices)
# Samples per batch to load
batch_size = 256
# Training Set
train_loader = torch.utils.data.DataLoader(dataset=train_data, batch_size=batch_size,sampler=train_batch,num_workers=4,pin_memory=True)
# Validation Set
val_loader = torch.utils.data.DataLoader(dataset=train_data,batch_size=batch_size, sampler=val_batch, num_workers=4,pin_memory=True)
# Test Set
test_loader = torch.utils.data.DataLoader(dataset=test_data,batch_size=batch_size,num_workers=4,pin_memory=True)
```
## Data normalization step: Calculate Mean and Std
```
train_mean = 0.
train_std = 0.
for images, _ in train_loader:
batch_samples = images.size(0) # batch size (the last batch can have smaller size!)
images = images.view(batch_samples, images.size(1), -1)
train_mean += images.mean(2).sum(0)
train_std += images.std(2).sum(0)
train_mean /= len(train_loader.dataset)
train_std /= len(train_loader.dataset)
print('Mean: ', train_mean)
print('Std: ', train_std)
```
## Data Augmentation:
Data augmentation is usually done to increase the performance of CNN-based classifiers; consider it a preprocessing step for the data. PyTorch includes lots of pre-built data augmentation and data transformation features, such as ToTensor, Normalize, Scale, RandomCrop, LinearTransformation, RandomGrayscale, etc. Try to use at least one on the data.
```
# Your code
# Check the data and see whether the suggested augmentation is done. Also check for the normalization transformation.
```
## Evaluation Metrics
Prediction accuracy:
```
def accuracy(outputs, labels):
_, preds = torch.max(outputs, dim=1)
return torch.tensor(torch.sum(preds == labels).item() / len(preds))
## Use different form of evaluation meterics
```
## Loss Function: Cross Entropy
For each output row, pick the predicted probability for the correct label. E.g. if the predicted probabilities for an image are [0.1, 0.3, 0.2, ...] and the correct label is 1, we pick the corresponding element 0.3 and ignore the rest.
Then, take the logarithm of the picked probability. If the probability is high i.e. close to 1, then its logarithm is a very small negative value, close to 0. And if the probability is low (close to 0), then the logarithm is a very large negative value. We also multiply the result by -1, which results in a large positive value of the loss for poor predictions.
Finally, take the average of the cross entropy across all the output rows to get the overall loss for a batch of data.
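The steps above can be checked against PyTorch's built-in loss on a single made-up sample (the logits below are purely illustrative):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, 0.1]])  # raw outputs for one sample, three classes
target = torch.tensor([0])                # the correct class index

probs = F.softmax(logits, dim=1)          # step 1: convert outputs to probabilities
picked = probs[0, target[0]]              # step 2: pick the correct-class probability
manual = -torch.log(picked)               # step 3: negative log (then average over batch)

builtin = F.cross_entropy(logits, target) # should match the manual computation
```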
```
class MnistModelBase(nn.Module):
def training_step(self, batch):
pass
# your code
def validation_step(self, batch):
pass
# your code
def validation_epoch_end(self, outputs):
pass
#your code
def epoch_end(self, epoch, result,LR):
pass
#your code
```
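The `pass` stubs above are meant for you to fill in; as a hedged reference, one possible sketch (reusing the `accuracy` idea from earlier, with an illustrative print format) might be:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def accuracy(outputs, labels):
    # Fraction of predictions matching the labels
    _, preds = torch.max(outputs, dim=1)
    return torch.tensor(torch.sum(preds == labels).item() / len(preds))

class MnistModelBaseSketch(nn.Module):
    def training_step(self, batch):
        images, labels = batch
        return F.cross_entropy(self(images), labels)

    def validation_step(self, batch):
        images, labels = batch
        out = self(images)
        return {'val_loss': F.cross_entropy(out, labels).detach(),
                'val_acc': accuracy(out, labels)}

    def validation_epoch_end(self, outputs):
        return {'val_loss': torch.stack([x['val_loss'] for x in outputs]).mean().item(),
                'val_acc': torch.stack([x['val_acc'] for x in outputs]).mean().item()}

    def epoch_end(self, epoch, result, LR):
        print("Epoch [{}], LR: {}, train_loss: {:.4f}, val_loss: {:.4f}, val_acc: {:.4f}".format(
            epoch, LR, result.get('train_loss', float('nan')),
            result['val_loss'], result['val_acc']))
```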
## Convolutional Neural Network model
We will use a convolutional neural network, built with the nn.Conv2d class from PyTorch. The activation function we'll use here is called a Rectified Linear Unit or ReLU, and it has a really simple formula: relu(x) = max(0, x), i.e. if an element is negative, we replace it by 0; otherwise we leave it unchanged. To define the model, we extend the nn.Module class.
```
class MnistModel(MnistModelBase):
"""Feedfoward neural network with 2 hidden layer"""
def __init__(self):
super().__init__()
self.conv1 = nn.Sequential(
nn.Conv2d(in_channels = 1, out_channels = 16, kernel_size=3), #RF - 3x3 # 26x26
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout2d(0.1),
nn.Conv2d(16, 16, 3), #RF - 5x5 # 24x24
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout2d(0.1),
nn.Conv2d(16, 32, 3), #RF - 7x7 # 22x22
nn.ReLU(),
nn.BatchNorm2d(32),
nn.Dropout2d(0.1),
)
# translation layer
# input - 22x22x64; output - 11x11x32
self.trans1 = nn.Sequential(
# RF - 7x7
nn.Conv2d(32, 20, 1), # 22x22
nn.ReLU(),
nn.BatchNorm2d(20),
# RF - 14x14
nn.MaxPool2d(2, 2), # 11x11
)
self.conv2 = nn.Sequential(
nn.Conv2d(20,20,3,padding=1), #RF - 16x16 #output- 9x9
nn.ReLU(),
nn.BatchNorm2d(20),
nn.Dropout2d(0.1),
nn.Conv2d(20,16,3), #RF - 16x16 #output- 9x9
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout2d(0.1),
nn.Conv2d(16, 16, 3), #RF - 18x18 #output- 7x7
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout2d(0.1),
)
self.conv3 = nn.Sequential(
nn.Conv2d(16,16,3), #RF - 20x20 #output- 5x5
nn.ReLU(),
nn.BatchNorm2d(16),
nn.Dropout2d(0.1),
#nn.Conv2d(16,10,1), #RF - 20x20 #output- 7x7
)
# GAP Layer
self.avg_pool = nn.Sequential(
# # RF - 22x22
nn.AvgPool2d(5)
) ## output_size=1
self.conv4 = nn.Sequential(
nn.Conv2d(16,10,1), #RF - 20x20 #output- 7x7
)
def forward(self, xb):
x = self.conv1(xb)
x = self.trans1(x)
x = self.conv2(x)
x = self.conv3(x)
x = self.avg_pool(x)
x = self.conv4(x)
x = x.view(-1, 10)
return x
```
### Using a GPU
```
#function to ensure that our code uses the GPU if available, and defaults to using the CPU if it isn't.
def get_default_device():
"""Pick GPU if available, else CPU"""
if torch.cuda.is_available():
return torch.device('cuda')
else:
return torch.device('cpu')
# a function that can move data and model to a chosen device.
def to_device(data, device):
"""Move tensor(s) to chosen device"""
if isinstance(data, (list,tuple)):
return [to_device(x, device) for x in data]
return data.to(device, non_blocking=True)
#Finally, we define a DeviceDataLoader class to wrap our existing data loaders and move data to the selected device,
#as batches are accessed. Interestingly, we don't need to extend an existing class to create a PyTorch dataloader.
#All we need is an __iter__ method to retrieve batches of data, and an __len__ method to get the number of batches.
class DeviceDataLoader():
"""Wrap a dataloader to move data to a device"""
def __init__(self, dl, device):
self.dl = dl
self.device = device
def __iter__(self):
"""Yield a batch of data after moving it to device"""
for b in self.dl:
yield to_device(b, self.device)
def __len__(self):
"""Number of batches"""
return len(self.dl)
```
We can now wrap our data loaders using DeviceDataLoader.
```
device = get_default_device()
train_loader = DeviceDataLoader(train_loader, device)
val_loader = DeviceDataLoader(val_loader, device)
test_loader = DeviceDataLoader(test_loader, device)
```
### Model Training
```
from torch.optim.lr_scheduler import OneCycleLR
@torch.no_grad()
def evaluate(model, val_loader):
model.eval()
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
torch.cuda.empty_cache()
history = []
optimizer = opt_func(model.parameters(), lr)
scheduler = OneCycleLR(optimizer, lr, epochs=epochs,steps_per_epoch=len(train_loader))
for epoch in range(epochs):
# Training Phase
model.train()
train_losses = []
for batch in train_loader:
loss = model.training_step(batch)
train_losses.append(loss)
loss.backward()
optimizer.step()
optimizer.zero_grad()
scheduler.step()
# Validation phase
result = evaluate(model, val_loader)
result['train_loss'] = torch.stack(train_losses).mean().item()
model.epoch_end(epoch, result,scheduler.get_lr())
history.append(result)
return history
```
Before we train the model, we need to ensure that the data and the model's parameters (weights and biases) are on the same device (CPU or GPU). We can reuse the to_device function to move the model's parameters to the right device.
```
# Model (on GPU)
model = MnistModel()
to_device(model, device)
```
Print Summary of the model
```
from torchsummary import summary
# print the summary of the model
summary(model, input_size=(1, 28, 28), batch_size=-1)
```
Let's see how the model performs on the validation set with the initial set of weights and biases.
```
history = [evaluate(model, val_loader)]
history
```
The initial accuracy is around 10%, which is what one might expect from a randomly initialized model (since it has a 1 in 10 chance of getting a label right by guessing randomly).
We are now ready to train the model. Let's train for 10 epochs and look at the results. We can use a relatively high learning rate of 0.01.
```
history += fit(10, 0.01, model, train_loader, val_loader)
```
## Plot Metrics
```
def plot_scores(history):
# scores = [x['val_score'] for x in history]
acc = [x['val_acc'] for x in history]
plt.plot(acc, '-x')
plt.xlabel('epoch')
plt.ylabel('acc')
plt.title('acc vs. No. of epochs');
def plot_losses(history):
train_losses = [x.get('train_loss') for x in history]
val_losses = [x['val_loss'] for x in history]
plt.plot(train_losses, '-bx')
plt.plot(val_losses, '-rx')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend(['Training', 'Validation'])
plt.title('Loss vs. No. of epochs');
plot_losses(history)
plot_scores(history)
def get_misclassified(model, test_loader):
misclassified = []
misclassified_pred = []
misclassified_target = []
# put the model to evaluation mode
model.eval()
# turn off gradients
with torch.no_grad():
for data, target in test_loader:
# do inferencing
output = model(data)
# get the predicted output
pred = output.argmax(dim=1, keepdim=True)
# get the current misclassified in this batch
list_misclassified = (pred.eq(target.view_as(pred)) == False)
batch_misclassified = data[list_misclassified]
batch_mis_pred = pred[list_misclassified]
batch_mis_target = target.view_as(pred)[list_misclassified]
# batch_misclassified =
misclassified.append(batch_misclassified)
misclassified_pred.append(batch_mis_pred)
misclassified_target.append(batch_mis_target)
# group all the batches together
misclassified = torch.cat(misclassified)
misclassified_pred = torch.cat(misclassified_pred)
misclassified_target = torch.cat(misclassified_target)
return list(map(lambda x, y, z: (x, y, z), misclassified, misclassified_pred, misclassified_target))
misclassified = get_misclassified(model, test_loader)
import random
num_images = 25
fig = plt.figure(figsize=(12, 12))
for idx, (image, pred, target) in enumerate(random.choices(misclassified, k=num_images)):
image, pred, target = image.cpu().numpy(), pred.cpu(), target.cpu()
ax = fig.add_subplot(5, 5, idx+1)
ax.axis('off')
ax.set_title('target {}\npred {}'.format(target.item(), pred.item()), fontsize=12)
ax.imshow(image.squeeze())
plt.tight_layout()
plt.show()
```
# Canonical correlation analysis in python
In this notebook, we will walk through the basic algorithm of canonical correlation analysis (CCA) and compare it to the output of existing implementations in the Python libraries `statsmodels` and `scikit-learn`.
```
import numpy as np
from scipy.linalg import sqrtm
from statsmodels.multivariate.cancorr import CanCorr as smCCA
from sklearn.cross_decomposition import CCA as skCCA
import matplotlib.pyplot as plt
from seaborn import heatmap
```
Let's define a plotting function for the output first.
```
def plot_cca(a, b, U, V, s):
    # heatmaps of the canonical vectors
    plt.figure()
    heatmap(a, square=True, center=0)
    plt.title("Canonical vector - x")
    plt.figure()
    heatmap(b, square=True, center=0)
    plt.title("Canonical vector - y")
    # scatter plot of each pair of canonical variates
    plt.figure(figsize=(9, 6))
    for i in range(min(4, len(s))):
        plt.subplot(221 + i)
        plt.scatter(np.asarray(U[:, i]).ravel(),
                    np.asarray(V[:, i]).ravel(),
                    marker="o", c="b", s=25)
        plt.xlabel("Canonical variate of X")
        plt.ylabel("Canonical variate of Y")
        plt.title('Mode %i (corr = %.2f)' % (i + 1, s[i]))
        plt.xticks(())
        plt.yticks(())
```
## Create data based on some latent variables
First generate some test data.
The code below is adapted from the scikit-learn example of CCA.
The aim of using simulated data is that we have complete control over the structure of the data, which helps us see the utility of CCA.
Let's create a dataset with 100 observations with two hidden variables:
```
n = 100
# fix the random seed so this tutorial will always create the same results
np.random.seed(42)
l1 = np.random.normal(size=n)
l2 = np.random.normal(size=n)
```
For each observation, there are two domains of data.
Six variables are measured in the first domain and four in the second.
In domain 1 (x), latent structure 1 underlies the first three variables and latent structure 2 underlies the rest.
In domain 2 (y), latent structure 1 underlies every other variable, alternating with latent structure 2.
```
latents_x = np.array([l1, l1, l1, l2, l2, l2]).T
latents_y = np.array([l1, l2, l1, l2]).T
```
Now let's add some random noise on this latent structure.
```
X = latents_x + np.random.normal(size=6 * n).reshape((n, 6))
Y = latents_y + np.random.normal(size=4 * n).reshape((n, 4))
```
The aim of CCA is finding the correlated latent features in the two domains of data.
Therefore, we would expect to find the hidden structure laid out in the latent components.
## SVD algebra solution
The SVD solution is the most commonly implemented form of CCA. For the proof of the standard eigenvalue solution and of the SVD solution demonstrated below, see [Uurtio et al. (2018)](https://dl.acm.org/citation.cfm?id=3136624).
The first step is computing the within- and between-set correlation matrices of X and Y.
```
Cx, Cy = np.corrcoef(X.T), np.corrcoef(Y.T)
Cxy = np.corrcoef(X.T, Y.T)[:X.shape[1], X.shape[1]:]
Cyx = Cxy.T
```
We first compute the matrix square roots of the correlation matrices of X and Y, and their inverses.
```
sqrt_x, sqrt_y = np.matrix(sqrtm(Cx)), np.matrix(sqrtm(Cy))
isqrt_x, isqrt_y = sqrt_x.I, sqrt_y.I
```
According to the proof, the canonical correlations can be retrieved from an SVD of Cx^-1/2 Cxy Cy^-1/2.
```
W = isqrt_x * Cxy * isqrt_y
u, s, v = np.linalg.svd(W)
```
The columns of the matrices U and V correspond to the sets of orthonormal left and right singular vectors respectively, and the singular values in S correspond to the canonical correlations. The projection vectors w_a and w_b are obtained from:
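Spelled out, with $W = C_x^{-1/2}\,C_{xy}\,C_y^{-1/2} = U S V^\top$, the weight vectors are (up to the extra scaling applied in the code below for unstandardised inputs):

$$
a = C_x^{-1/2}\,U, \qquad b = C_y^{-1/2}\,V, \qquad \operatorname{corr}(X a_i,\; Y b_i) = S_{ii}.
$$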
```
N = np.min([X.shape[1], Y.shape[1]])
a = np.dot(isqrt_x, u)[:, :N] / np.std(X)  # scaling because we didn't standardise the input
b = np.dot(isqrt_y, v.T)[:, :N] / np.std(Y)
```
Now compute the score.
```
X_score, Y_score = X.dot(a), Y.dot(b)
plot_cca(a, b, X_score, Y_score, s) # predefined plotting function
```
## Solution Using SVD Only
The solution above can be further simplified by applying SVD directly to the two domains.
The algorithm first computes the SVD of X and of Y. This step is similar to doing principal component analysis on the two domains.
```
ux, sx, vx = np.linalg.svd(X, 0)
uy, sy, vy = np.linalg.svd(Y, 0)
```
Then take the unitary bases, form Ux^T Uy, and apply SVD to it. S is then the canonical correlations of the two domains of features.
```
u, s, v = np.linalg.svd(ux.T.dot(uy), 0)
```
We can obtain the canonical vectors by transforming the unitary bases in the hidden space back to the original space.
```
a = (vx.T).dot(u) # no scaling here as SVD handled it.
b = (vy.T).dot(v.T)
X_score, Y_score = X.dot(a), Y.dot(b)
```
Now we can plot the results. It shows very similar results to solution 1.
```
plot_cca(a, b, X_score, Y_score, s) # predefined plotting function
```
The method above has been implemented in `statsmodels`. The results are almost identical:
```
sm_cca = smCCA(Y, X)
sm_s = sm_cca.cancorr
sm_a = sm_cca.x_cancoef
sm_b = sm_cca.y_cancoef
sm_X_score = X.dot(sm_a)
sm_Y_score = Y.dot(sm_b)
plot_cca(sm_a, sm_b, sm_X_score, sm_Y_score, sm_s)
```
## Scikit learn
Scikit learn implemented [a different algorithm](https://www.stat.washington.edu/sites/default/files/files/reports/2000/tr371.pdf).
The scikit-learn implementation yields very similar results.
The first mode captures the hidden structure in the simulated data.
```
cca = skCCA(n_components=4)
cca.fit(X, Y)
s = np.corrcoef(cca.x_scores_.T, cca.y_scores_.T).diagonal(offset=cca.n_components)
a = cca.x_weights_
b = cca.y_weights_
X_score, Y_score = cca.x_scores_, cca.y_scores_
plot_cca(a, b, X_score, Y_score, s) # predefined plotting function
```
```
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
%matplotlib inline
```
Set our random seed so that all computations are deterministic
```
seed = 21899
```
Read in the raw data for the first 100K records of the HCEPDB into a pandas dataframe
```
df = pd.read_csv('https://github.com/UWDIRECT/UWDIRECT.github.io/blob/master/Wi18_content/DSMCER/HCEPD_100K.csv?raw=true')
df.head()
```
Separate out the predictors from the output
```
X = df[['mass', 'voc', 'jsc', 'e_homo_alpha', 'e_gap_alpha',
'e_lumo_alpha']].values
Y = df[['pce']].values
```
Let's create the test / train split for these data using 80/20. The `_pn` extension is related to the 'prenormalization' nature of the data.
```
X_train_pn, X_test_pn, y_train, y_test = train_test_split(X, Y,
test_size=0.20,
random_state=seed)
```
Now we need to fit a `StandardScaler` on the training data and apply that scaling to the test data.
```
# create the scaler from the training data only and keep it for later use
X_train_scaler = StandardScaler().fit(X_train_pn)
# apply the scaler transform to the training data
X_train = X_train_scaler.transform(X_train_pn)
```
Now let's reuse that scaler transform on the test set. This way we never contaminate the test data with the training data. We'll start with a histogram of the testing data just to prove to ourselves it is working.
```
plt.hist(X_test_pn[:,1])
```
OK, now apply the training scaler transform to the test set and plot a histogram
```
X_test = X_train_scaler.transform(X_test_pn)
plt.hist(X_test[:,1])
```
### Let's create the neural network layout
This is a simple neural network with no hidden layers and just the inputs transitioned to the output.
```
def simple_model():
# assemble the structure
model = Sequential()
model.add(Dense(6, input_dim=6, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
# compile the model
model.compile(loss='mean_squared_error', optimizer='adam')
return model
```
Train the neural network with the following
```
# initialize the random seed as this is used to generate
# the starting weights
np.random.seed(seed)
# create the NN framework
estimator = KerasRegressor(build_fn=simple_model,
epochs=150, batch_size=25000, verbose=0)
history = estimator.fit(X_train, y_train, validation_split=0.33, epochs=150,
batch_size=10000, verbose=0)
```
The history object returned by the `fit` call contains the information in a fitting run.
```
print(history.history.keys())
print("final MSE for train is %.2f and for validation is %.2f" %
(history.history['loss'][-1], history.history['val_loss'][-1]))
```
Let's plot it!
```
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
```
Let's get the MSE for the test set.
```
test_loss = estimator.model.evaluate(X_test, y_test)
print("test set mse is %.2f" % test_loss)
```
## NEAT!
So our test MSE is very similar to the final training and validation losses!
### Let's look at another way to evaluate the set of models using cross validation
Use 10-fold cross validation to evaluate the models generated from our training set. We'll use scikit-learn's tools for this. Remember, this is only assessing our training set. `cross_val_score` maximizes its scoring metric, so the Keras wrapper reports the negated loss; to recover the MSE we flip the signs on the results.
```
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X_train, y_train, cv=kfold)
print("Results: %.2f (%.2f) MSE" % (-1 * results.mean(), results.std()))
```
#### Quick aside, `Pipeline`
Let's use scikit learns `Pipeline` workflow to run a k-fold cross validation run on the learned model.
With this tool, we create a workflow using the `Pipeline` object. You provide a list of actions (as named tuples) to be performed. We do this with `StandardScaler` to eliminate the possibility of training leakage into the cross-validation test set during normalization.
```
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('mlp', KerasRegressor(build_fn=simple_model,
epochs=150, batch_size=25000, verbose=0)))
pipeline = Pipeline(estimators)
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(pipeline, X_train, y_train, cv=kfold)
print('MSE mean: %.4f ; std: %.4f' % (-1 * results.mean(), results.std()))
```
### Now, let's try a more sophisticated model
Let's use a hidden layer this time.
```
def medium_model():
# assemble the structure
model = Sequential()
model.add(Dense(6, input_dim=6, kernel_initializer='normal', activation='relu'))
model.add(Dense(4, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
# compile the model
model.compile(loss='mean_squared_error', optimizer='adam')
return model
# initialize the random seed as this is used to generate
# the starting weights
np.random.seed(seed)
# create the NN framework
estimator = KerasRegressor(build_fn=medium_model,
epochs=150, batch_size=25000, verbose=0)
history = estimator.fit(X_train, y_train, validation_split=0.33, epochs=150,
batch_size=10000, verbose=0)
print("final MSE for train is %.2f and for validation is %.2f" %
(history.history['loss'][-1], history.history['val_loss'][-1]))
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
test_loss = estimator.model.evaluate(X_test, y_test)
print("test set mse is %.2f" % test_loss)
```
_So it appears our more complex model improved performance_
### Free time!
Find example code for keras for the two following items:
* L1 and L2 regularization (note in keras, this can be done by layer)
* Dropout
#### Regularization
Let's start by adding L1 or L2 (or both) regularization to the hidden layer.
Hint: you need to define a new function that builds the neural network model and add the correct parameters to the layer definition. Then retrain and plot as above. What parameters did you choose for your regularization? Did it improve training?
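One possible sketch (the `l1_l2` factors below are illustrative placeholders, not tuned values) adds a penalty to the hidden layer of `medium_model`:

```
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l1_l2

def regularized_model():
    # same layout as medium_model, but with L1/L2 penalties on the hidden layer
    model = Sequential()
    model.add(Dense(6, input_dim=6, kernel_initializer='normal', activation='relu'))
    model.add(Dense(4, kernel_initializer='normal', activation='relu',
                    kernel_regularizer=l1_l2(l1=0.001, l2=0.001)))
    model.add(Dense(1, kernel_initializer='normal'))
    # compile the model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
```

This drops straight into the `KerasRegressor(build_fn=...)` calls above in place of `medium_model`.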
#### Dropout
Find the approach to specifying dropout on a layer using your best friend `bing`. As with L1 and L2 above, this will involve defining a new network structure using a function and some new 'magical' dropout layers.
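One possible sketch (the 0.2 rate is an illustrative guess, not a tuned value): a `Dropout` layer randomly zeroes that fraction of the preceding layer's activations during training, which discourages co-adaptation of units.

```
from keras.models import Sequential
from keras.layers import Dense, Dropout

def dropout_model():
    model = Sequential()
    model.add(Dense(6, input_dim=6, kernel_initializer='normal', activation='relu'))
    model.add(Dense(4, kernel_initializer='normal', activation='relu'))
    model.add(Dropout(0.2))  # active during training only; a no-op at inference
    model.add(Dense(1, kernel_initializer='normal'))
    # compile the model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model
```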
# Testing Various Machine Learning Models
I would like to create a model that can identify individuals at the greatest risk of injury three months before it occurs. To do this, I will first complete feature selection using a step-forward approach to optimize recall. Then I will complete some basic EDA: How imbalanced is the data? Are the selected features correlated? Finally, I test various machine learning models and balance the data using oversampling.
```
# Machine Learning Model Tests
%load_ext nb_black
######FINAL COPY
# ML 2010
# Reading in the packages used for this part of the analysis
import pandas as pd
import numpy as np
from numpy import cov
from scipy.stats import pearsonr
from datetime import date
import datetime
from dateutil.relativedelta import relativedelta
from dateutil import parser
from collections import Counter
from datetime import datetime
from dateutil.parser import parse
import datetime
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
import seaborn as sns
from dateutil import parser
import random
import os
import os.path
from collections import Counter
import sklearn
import matplotlib.pyplot as plt
from matplotlib import colors
from matplotlib.ticker import PercentFormatter
from pandas.plotting import scatter_matrix
from sklearn.preprocessing import StandardScaler
from sklearn import linear_model
from xgboost import XGBClassifier
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix, classification_report
import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
import matplotlib.pyplot as plt
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import make_pipeline
from imblearn.over_sampling import ADASYN
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn import preprocessing
import pickle
from sklearn import preprocessing
# Change the Working Directory
os.chdir("/Users/Owner/Desktop/InsightFellows/Daniella_Patton_Insight_Project/Raw_Data")
print(os.getcwd()) # Prints the current working directory
ml_table2010 = pd.read_csv("ML_filtered_career.csv")
ml_table2010.head()
def ratio(ml_table2010):
if ml_table2010.Month1DoublesMathces == 0:
x = ml_table2010.Month1SinglesMathces
else:
x = ml_table2010.Month1SinglesMathces / ml_table2010.Month1DoublesMathces
return x
# Hard Code in Yes or No for injury type
ml_table2010["Injured"] = ml_table2010["Injured"].replace("Y", 1)
ml_table2010["Injured"] = ml_table2010["Injured"].replace("N", 0)
# Hard Code in
ml_table2010["Month1Injured"] = ml_table2010["Month1Injured"].replace("Y", 1)
ml_table2010["Month1Injured"] = ml_table2010["Month1Injured"].replace("N", 0)
# Hard Code in
ml_table2010["Month3Injured"] = ml_table2010["Month3Injured"].replace("Y", 1)
ml_table2010["Month3Injured"] = ml_table2010["Month3Injured"].replace("N", 0)
# Hard Code in
ml_table2010["Month6Injured"] = ml_table2010["Month6Injured"].replace("Y", 1)
ml_table2010["Month6Injured"] = ml_table2010["Month6Injured"].replace("N", 0)
# Hard Code in
ml_table2010["CumInjured"] = ml_table2010["CumInjured"].replace("Y", 1)
ml_table2010["CumInjured"] = ml_table2010["CumInjured"].replace("N", 0)
# GET DUMMIES FOR THE REST
# Drop the name
ml_table2010 = pd.get_dummies(
ml_table2010,
columns=[
"Country",
"Month1InjuredType",
"Month3InjuredType",
"Month6InjuredType",
"CumInjuredType",
],
)
ml_table2010 = ml_table2010.drop(["EndDate"], axis=1)
ml_table2010.dtypes
# Parse StartDate so the data can be filtered by start date
ml_table2010["StartDate"] = ml_table2010["StartDate"].apply(
lambda x: parser.parse(x).date()
)
# Check for unbalanced classes
sns.catplot(x="Injured", kind="count", palette="ch:.25", data=ml_table2010)
print(ml_table2010["Injured"].value_counts())
print(2698 / 13687)
# Use 2010 - 2018 data to train
Training = ml_table2010[ml_table2010["StartDate"] < datetime.date(2018, 1, 1)]
X_train = Training.drop(["Injured", "StartDate", "PlayerName"], axis=1)
Y_train = Training["Injured"]
# Use 2019 data to test how accurate the model predictions are
# Testing Set
Testing = ml_table2010[
(ml_table2010["StartDate"] >= datetime.date(2018, 1, 1))
& (ml_table2010["StartDate"] < datetime.date(2019, 6, 1))
]
X_test = Testing.drop(["Injured", "StartDate", "PlayerName"], axis=1)
Y_test = Testing["Injured"]
ml_table2010.head()
# keep last duplicate value
df = ml_table2010.drop_duplicates(subset=["PlayerName"], keep="last")
csv_for_webapp = df[
[
"PlayerName",
"Month1Carpet",
"CumInjured",
"CumInjuredTimes",
"CumInjuredGames",
"Country_Argentina",
"Country_Australia",
"Country_Austria",
"Country_Belarus",
"Country_Brazil",
"Country_Colombia",
"Country_Egypt",
"Country_Estonia",
"Country_Israel",
"Country_Kazakhstan",
"Country_Latvia",
"Country_Romania",
"Country_Russia",
"Country_Serbia",
"Country_South Korea",
"Country_Sweden",
"Country_Switzerland",
"Country_Thailand",
"Country_Venezuela",
"Month1InjuredType_Severe",
"CumInjuredType_Moderate",
]
].copy()
csv_for_webapp.head()
csv_for_webapp.to_csv("Current_Player_Info.csv")
```
# First Pass Random Forest with unbalanced data for model selection
```
from imblearn.pipeline import Pipeline, make_pipeline
from sklearn.model_selection import KFold # import KFold
from imblearn.over_sampling import SMOTE
# Import the model we are using
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn import metrics
from sklearn.model_selection import KFold, cross_val_score
# Build RF classifier to use in feature selection
from mlxtend.feature_selection import SequentialFeatureSelector as sfs
# First Pass Random Forest
rf = RandomForestClassifier()
rf.fit(X_train, Y_train)
y_pred = rf.predict(X_test)
y_pred = pd.Series(y_pred)
# Train the model on training data
cm = confusion_matrix(Y_test, y_pred)
print(cm)
# First pass run is not awful, but could use improvement and hyperparameter
# optimization
print("Accuracy of random forest on test set: {:.2f}".format(rf.score(X_test, Y_test)))
print("Precision:", metrics.precision_score(Y_test, y_pred))
print("Recall:", metrics.recall_score(Y_test, y_pred))
print("F1:", metrics.f1_score(Y_test, y_pred))
# Step Foward for Precision
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
#clf = LogisticRegression(solver='liblinear', max_iter=1, class_weight='balanced')
# Build step forward feature selection
sfs1 = sfs(clf,
           k_features=25,
           forward=True,
           floating=False,
           verbose=2,
           scoring='recall',
           cv=5)
# Perform SFFS
sfs1 = sfs1.fit(X_train, Y_train)
# Which features?
feat_cols = list(sfs1.k_feature_idx_)
# Random Forest Classifired Index [37, 38, 47, 49, 51, 74, 84, 92, 96, 110]
# Logistic Regression Index [10, 37, 38, 48, 56, 73, 75, 77, 84, 85, 92, 93, 98, 104, 111]
X_train.head(20)
# colnames = X_train.columns[feat_cols]
# print(colnames)
# X_train.head()
# X_train = X_train[colnames]
# X_test = X_test[colnames]
X_train = X_train[
[
"Month1GamesPlayed",
"Month1Carpet",
"CumInjured",
"CumInjuredTimes",
"CumInjuredGames",
"Country_Argentina",
"Country_Australia",
"Country_Austria",
"Country_Belarus",
"Country_Brazil",
"Country_Colombia",
"Country_Egypt",
"Country_Estonia",
"Country_Israel",
"Country_Kazakhstan",
"Country_Latvia",
"Country_Romania",
"Country_Russia",
"Country_Serbia",
"Country_South Korea",
"Country_Sweden",
"Country_Switzerland",
"Country_Thailand",
"Country_Venezuela",
"Month1InjuredType_Severe",
"CumInjuredType_Moderate",
]
]
X_test = X_test[
[
"Month1GamesPlayed",
"Month1Carpet",
"CumInjured",
"CumInjuredTimes",
"CumInjuredGames",
"Country_Argentina",
"Country_Australia",
"Country_Austria",
"Country_Belarus",
"Country_Brazil",
"Country_Colombia",
"Country_Egypt",
"Country_Estonia",
"Country_Israel",
"Country_Kazakhstan",
"Country_Latvia",
"Country_Romania",
"Country_Russia",
"Country_Serbia",
"Country_South Korea",
"Country_Sweden",
"Country_Switzerland",
"Country_Thailand",
"Country_Venezuela",
"Month1InjuredType_Severe",
"CumInjuredType_Moderate",
]
]
# Instantiate the random forest model
# Improvement to the model, so using these variables moving forward
rf = RandomForestClassifier(class_weight="balanced")
rf.fit(X_train, Y_train)
y_pred = rf.predict(X_test)
y_pred = pd.Series(y_pred)
# Train the model on training data
cm = confusion_matrix(Y_test, y_pred)
print(cm)
# First pass run is not awful, but could use improvement and hyperparameter
# optimization
print("Accuracy of random forest on test set: {:.2f}".format(rf.score(X_test, Y_test)))
print("Precision:", metrics.precision_score(Y_test, y_pred))
print("Recall:", metrics.recall_score(Y_test, y_pred))
print("F1:", metrics.f1_score(Y_test, y_pred))
# Corr Matrix test
corr_mt = X_train.copy()
corr_mt["Injured"] = Y_train
# Need to Look at correlation matrix and remove highly correlated variables in the
# order of most important variabes
f = plt.figure(figsize=(19, 15))
plt.matshow(corr_mt.corr(), fignum=f.number)
plt.xticks(range(corr_mt.shape[1]), corr_mt.columns, fontsize=14, rotation=45)
plt.yticks(range(corr_mt.shape[1]), corr_mt.columns, fontsize=14)
cb = plt.colorbar()
cb.ax.tick_params(labelsize=14)
# prepare configuration for cross validation test harness
seed = 7
kfold = 5
# Ratio of injured to non-injured
# 2682/13580 = 0.2 (5 x)
# prepare models
models = []
models.append(('LR', LogisticRegression(solver='liblinear', max_iter=100, class_weight='balanced')))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier(class_weight='balanced')))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC(class_weight='balanced')))
models.append(('RF', RandomForestClassifier(class_weight='balanced')))
models.append(('XGBoost', XGBClassifier(scale_pos_weight=4)))
# evaluate each model in turn
results = []
names = []
scoring='f1'
print('The results with balanced-weighting')
for name, model in models:
kfold = model_selection.KFold(n_splits=5)
cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))
models.append(('RF', RandomForestClassifier()))
models.append(('XGBoost', XGBClassifier()))
print('The results with balanced data using SMOTE')
results_SMOTE = []
names_SMOTE = []
for name, model in models:
kfold = model_selection.KFold(n_splits=5)
imba_pipeline = make_pipeline(SMOTE(random_state=42), model)
cv_results = cross_val_score(imba_pipeline, X_train, Y_train, scoring= scoring, cv=kfold)
results_SMOTE.append(cv_results)
names_SMOTE.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
print('The results with balanced data using ADASYN')
results_ADASYN = []
names_ADASYN = []
for name, model in models:
kfold = model_selection.KFold(n_splits=5)
imba_pipeline = make_pipeline(ADASYN(random_state=42), model)
cv_results = cross_val_score(imba_pipeline, X_train, Y_train, scoring= scoring, cv=kfold)
results_ADASYN.append(cv_results)
names_ADASYN.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
X_scaled = preprocessing.scale(X_train)
# prepare models
models = []
models.append(
(
"LR",
LogisticRegression(solver="liblinear", max_iter=100, class_weight="balanced"),
)
)
models.append(("LDA", LinearDiscriminantAnalysis()))
models.append(("KNN", KNeighborsClassifier()))
models.append(("CART", DecisionTreeClassifier(class_weight="balanced")))
models.append(("NB", GaussianNB()))
models.append(("SVM", SVC(class_weight="balanced")))
models.append(("RF", RandomForestClassifier(class_weight="balanced")))
models.append(("XGBoost", XGBClassifier(scale_pos_weight=4)))
# evaluate each model in turn
results = []
names = []
scoring = "f1"
print("The results with balanced-weighting")
for name, model in models:
kfold = model_selection.KFold(n_splits=5)
cv_results = model_selection.cross_val_score(
model, X_scaled, Y_train, cv=kfold, scoring=scoring
)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
"""
The results with balanced-weighting
LR: 0.335936 (0.007798)
LDA: 0.061024 (0.037630)
KNN: 0.152377 (0.031906)
CART: 0.223875 (0.018499)
NB: 0.285749 (0.012118)
SVM: 0.329350 (0.012078)
RF: 0.237863 (0.016613)
XGBoost: 0.329015 (0.014148)
"""
# boxplot algorithm comparison
fig = plt.figure()
fig.suptitle("Balanced Algorithm Comparison")
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()
sns.set(font_scale=3)
sns.set_style("whitegrid")
fig, ax = plt.subplots(figsize=(8, 8.27))
tips = sns.load_dataset("tips")
ax = sns.boxplot(x=names, y=results)
plt.xticks(rotation=45)
# Select which box you want to change
mybox = ax.artists[0]
# Change the appearance of that box
mybox.set_facecolor("orangered")
mybox.set_edgecolor("black")
# mybox.set_linewidth(3)
mybox = ax.artists[1]
mybox.set_facecolor("lightblue")
mybox = ax.artists[2]
mybox.set_facecolor("lightblue")
mybox = ax.artists[3]
mybox.set_facecolor("lightblue")
mybox = ax.artists[4]
mybox.set_facecolor("lightblue")
mybox = ax.artists[5]
mybox.set_facecolor("lightblue")
mybox = ax.artists[6]
mybox.set_facecolor("lightblue")
mybox = ax.artists[7]
mybox.set_facecolor("lightblue")
# boxplot algorithm comparison
sns.set(font_scale=3)
sns.set_style("whitegrid")
fig, ax = plt.subplots(figsize=(8, 8.27))
tips = sns.load_dataset("tips")
ax = sns.boxplot(x=names_SMOTE, y=results_SMOTE)
plt.xticks(rotation=45)
# fig = plt.figure()
# fig.suptitle('SMOTE Algorithm Comparison')
# ax = fig.add_subplot(111)
# plt.boxplot(results_SMOTE)
# ax.set_xticklabels(names_SMOTE)
plt.show()
# boxplot algorithm comparison
sns.set(font_scale=3)
sns.set_style("whitegrid")
fig, ax = plt.subplots(figsize=(8, 8.27))
tips = sns.load_dataset("tips")
ax = sns.boxplot(x=names, y=results)
plt.xticks(rotation=45)
# Select which box you want to change
mybox = ax.artists[0]
# fig = plt.figure()
# fig.suptitle('ADYSONs Algorithm Comparison')
# ax = fig.add_subplot(111)
# plt.boxplot(results_ADASYN)
# ax.set_xticklabels(names_ADASYN)
# plt.show()
rf = LogisticRegression(class_weight="balanced")
rf.fit(X_train, Y_train)
y_pred = rf.predict(X_test)
y_pred = pd.Series(y_pred)
# Train the model on training data
cm = confusion_matrix(Y_test, y_pred)
print(cm)
# First pass run is not awful, but could use improvement and hyperparameter
# optimization
print("Accuracy of logistic regression on test set: {:.2f}".format(rf.score(X_test, Y_test)))
print("Precision:", metrics.precision_score(Y_test, y_pred))
print("Recall:", metrics.recall_score(Y_test, y_pred))
print("F1:", metrics.f1_score(Y_test, y_pred))
from sklearn.model_selection import GridSearchCV
# Create logistic regression
logistic = linear_model.LogisticRegression(
solver="liblinear", max_iter=1000, class_weight="balanced"
)
# Create regularization penalty space
penalty = ["l1", "l2"]
# Create regularization hyperparameter space
C = np.logspace(0, 4, 10)
# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)
# Create grid search using 5-fold cross validation
clf = GridSearchCV(logistic, hyperparameters, cv=5, verbose=0)
# Fit grid search
best_model = clf.fit(X_train, Y_train)
# View best hyperparameters
print("Best Penalty:", best_model.best_estimator_.get_params()["penalty"])
print("Best C:", best_model.best_estimator_.get_params()["C"])
# Predict target vector
y_pred = best_model.predict(X_test)
cm = confusion_matrix(Y_test, y_pred)
print(cm)
# First pass run is not awful, but could use improvement and hyperparameter
# optimization
print(
    "Accuracy of the tuned logistic regression on test set: {:.2f}".format(
best_model.score(X_test, Y_test)
)
)
print("Precision:", metrics.precision_score(Y_test, y_pred))
print("F1:", metrics.f1_score(Y_test, y_pred))
print("Recall:", metrics.recall_score(Y_test, y_pred))
# Best Penalty: l2 Best C: 1.0
logistic = linear_model.LogisticRegression(
solver="liblinear", max_iter=1000, class_weight="balanced", penalty="l2"
)
logistic.fit(X_train, Y_train)
X_tester = X_test.iloc[
0:,
]
X_tester
logistic.predict(X_tester)
filename = "logistic_model.sav"
pickle.dump(logistic, open(filename, "wb"))
coeff_list = logistic.coef_
coeff_list
# Absolute or Square
# Standardized B coefficients
x = np.std(X_train, 0)
print(np.std(X_train, 0))
x[0] * coeff_list[0:0]
print(len(coeff_list))
print(type(coeff_list))
print(len(X_train.columns))
print(type(X_train.columns))
#coeff_list[10:] == shape 0,25
coeff_list.shape = (25,1)
X_train.columns
# coeff_list = coeff_list.flatten
flat_list = [item for sublist in coeff_list for item in sublist]
print(flat_list)
data = {'Var':X_train.columns,
'Coeff':flat_list,
'NP': x}
coeff_df = pd.DataFrame(data)
coeff_df.head()
# B standardizing the coefficients
# (B - sd)/mean
d_mean = []
d_std = []
for column in X_train.columns:
mean = X_train[column].mean()
d_mean.append(mean)
std = X_train[column].std()
d_std.append(std)
coeff_df["Mean"] = d_mean
coeff_df["Std"] = d_std
coeff_df.head(12)
coeff_df['Standardized_B'] = (coeff_df['Coeff'] - coeff_df['Std'])/coeff_df['Mean']
# cols = ['Coeff']
coeff_df = coeff_df[abs(coeff_df.Coeff) > 0.08]
coeff_df
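# Illustrative aside (not part of the original analysis): a more
# conventional "standardized beta" is coef * std(x) -- the coefficient
# scaled by its feature's spread -- rather than (coef - std) / mean.
# Standalone sketch with made-up numbers:
import numpy as np
toy_coefs = np.array([0.5, -1.2, 0.05])
toy_feature_std = np.array([2.0, 0.1, 30.0])
toy_standardized = toy_coefs * toy_feature_std
print(toy_standardized)  # scaled coefficients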
# standardize the data attributes
X_train_2 = preprocessing.scale(X_train)
# Create logistic regression
logistic = linear_model.LogisticRegression(
solver="liblinear", max_iter=1000, class_weight="balanced"
)
# Create regularization penalty space
penalty = ["l1", "l2"]
# Create regularization hyperparameter space
C = np.logspace(0, 4, 10)
# Create hyperparameter options
hyperparameters = dict(C=C, penalty=penalty)
# Create grid search using 5-fold cross validation
clf = GridSearchCV(logistic, hyperparameters, cv=5, verbose=0)
# Fit grid search
best_model = clf.fit(X_train / np.std(X_train, 0), Y_train)
# View best hyperparameters
print("Best Penalty:", best_model.best_estimator_.get_params()["penalty"])
print("Best C:", best_model.best_estimator_.get_params()["C"])
# Predict target vector
y_pred = best_model.predict(X_test / np.std(X_test, 0))
cm = confusion_matrix(Y_test, y_pred)
print(cm)
# m.fit(X / np.std(X, 0), y)
# print(m.coef_)
# First pass run is not awful, but could use improvement and hyperparameter
# optimization
print(
"Accuracy of random forest on test set: {:.2f}".format(
best_model.score(X_test, Y_test)
)
)
print("Precision:", metrics.precision_score(Y_test, y_pred))
print("F1:", metrics.f1_score(Y_test, y_pred))
print("Recall:", metrics.recall_score(Y_test, y_pred))
# Best Penalty: l2 Best C: 1.0
logistic = linear_model.LogisticRegression(
solver="liblinear", max_iter=1000, class_weight="balanced", penalty="l2"
)
logistic.fit(X_train / np.std(X_train, 0), Y_train)
# coeff_list = coeff_list.flatten
coeff_list = logistic.coef_
flat_list = [item for sublist in coeff_list for item in sublist]
print(flat_list)
data2 = {"Var": X_train.columns, "Coeff": flat_list}
coeff_df2 = pd.DataFrame(data2)
plt.subplots(figsize=(8, 8.27))
y_pos = np.arange(len(coeff_df2.Var))
# Create horizontal bars
# barlist = plt.barh(y_pos, coeff_df.Coeff)
barlist = plt.barh(y_pos, coeff_df2.Coeff)
# Create names on the y-axis
plt.yticks(y_pos, coeff_df2.Var)
# plt.suptitle('Coefficient', fontsize=14, fontweight='bold')
# Show graphic
plt.yticks(fontsize=16)
plt.xlabel("Coefficients", fontsize=18)
plt.xticks(fontsize=18)
plt.show()
# coeff_df.plot(kind='bar', color=coeff_df.Coeff.apply(lambda x: 'b' if x>0 else 'y'));
# sns.set(font_scale=3)
# sns.set_style("whitegrid")
# fig, ax = plt.subplots(figsize=(8, 8.27))
# tips = sns.load_dataset("tips")
# ax = sns.boxplot(x= coeff_df.Var, y= coeff_df.Coeff)
# plt.xticks(rotation=45)
plt.subplots(figsize=(8, 8.27))
y_pos = np.arange(len(coeff_df.Var))
# Create horizontal bars
barlist = plt.barh(y_pos, coeff_df.Standardized_B)
barlist[0].set_color("r")
barlist[8].set_color("r")
barlist[9].set_color("r")
barlist[10].set_color("r")
barlist[11].set_color("r")
barlist[12].set_color("r")
barlist[13].set_color("r")
barlist[17].set_color("r")
barlist[18].set_color("r")
barlist[19].set_color("r")
barlist[21].set_color("r")
barlist[22].set_color("r")
barlist[23].set_color("r")
# Create names on the y-axis
plt.yticks(y_pos, coeff_df.Var)
# plt.suptitle('Coefficient', fontsize=14, fontweight='bold')
# Show graphic
plt.yticks(fontsize=16)
plt.xlabel("Coefficients", fontsize=18)
plt.xticks(fontsize=18)
plt.show()
logistic = linear_model.LogisticRegression(
solver="liblinear", max_iter=1000, class_weight="balanced", penalty="l1"
)
logistic.fit(X_train, Y_train)
import pickle
filename = 'logistic_model.sav'
pickle.dump(logistic, open(filename, 'wb'))
plt.subplots(figsize=(8, 8.27))
y_pos = np.arange(len(coeff_df.Var))
# Create horizontal bars
# barlist = plt.barh(y_pos, coeff_df.Coeff)
barlist = plt.barh(y_pos, coeff_df.Standardized_B)
barlist[1].set_color("r")
barlist[2].set_color("r")
barlist[3].set_color("r")
barlist[4].set_color("r")
barlist[7].set_color("r")
barlist[8].set_color("r")
barlist[12].set_color("r")
barlist[15].set_color("r")
# Create names on the y-axis
plt.yticks(y_pos, coeff_df.Var)
# plt.suptitle('Coefficient', fontsize=14, fontweight='bold')
# Show graphic
plt.yticks(fontsize=16)
plt.xlabel("Coefficients", fontsize=18)
plt.xticks(fontsize=18)
plt.show()
# The four best performers test against the test set
# The results with balanced-weighting
# SVM: 0.789121 (0.047093)
# Instantiate a support vector machine classifier
svm = SVC(class_weight="balanced")
# Train the model on training data
svm.fit(X_train, Y_train)
y_pred = svm.predict(X_test)
y_pred = pd.Series(y_pred)
# Evaluate on the held-out test set
cm = confusion_matrix(Y_test, y_pred)
print(cm)
# First pass run is not awful, but could use improvement and hyperparameter
# optimization
print("Accuracy of SVM on test set: {:.2f}".format(svm.score(X_test, Y_test)))
print("Precision:", metrics.precision_score(Y_test, y_pred))
print("Recall:", metrics.f1_score(Y_test, y_pred))
print("F1:", metrics.recall_score(Y_test, y_pred))
# Random Chance Model
from sklearn.dummy import DummyClassifier
dclf = DummyClassifier()
dclf.fit(X_train, Y_train)
y_pred = dclf.predict(X_test)
y_pred = pd.Series(y_pred)
# Evaluate on the held-out test set
cm = confusion_matrix(Y_test, y_pred)
print(cm)
# Random-chance baseline for comparison with the models above
print('Accuracy of dummy classifier on test set: {:.2f}'.format(dclf.score(X_test, Y_test)))
print("Precision:",metrics.precision_score(Y_test, y_pred))
print("Recall:",metrics.recall_score(Y_test, y_pred))
print("F1:",metrics.f1_score(Y_test, y_pred))
score = dclf.score(X_test, Y_test)
score
#listofzeros = [0] * (2114 + 223)
# Randomly replace value of zeroes
# 0 2114
# 1 336
#Y_test.count()
```
```
#!pip install gretel-synthetics --upgrade
#!pip install matplotlib
#!pip install smart_open
# load source training set
import logging
import os
import sys
import pandas as pd
from smart_open import open
source_file = "https://gretel-public-website.s3-us-west-2.amazonaws.com/datasets/uci-heart-disease/train.csv"
annotated_file = "./heart_annotated.csv"
def annotate_dataset(df):
df = df.fillna("")
df = df.replace(',', '[c]', regex=True)
df = df.replace('\r', '', regex=True)
df = df.replace('\n', ' ', regex=True)
return df
# Preprocess dataset, store annotated file to disk
# Protip: Training set is very small, repeat so RNN can learn structure
df = annotate_dataset(pd.read_csv(source_file))
while len(df.index) <= 15000:
df = pd.concat([df, df])
# Write annotated training data to disk
df.to_csv(annotated_file, index=False, header=False)
# Preview dataset
df.head(15)
# Plot distribution
counts = df['sex'].value_counts().sort_values(ascending=False)
counts.rename({1:"Male", 0:"Female"}).plot.pie()
from pathlib import Path
from gretel_synthetics.config import LocalConfig
# Create a config that we can use for both training and generating, with CPU-friendly settings
# The default values for ``max_chars`` and ``epochs`` are better suited for GPUs
config = LocalConfig(
max_lines=0, # read all lines (zero)
epochs=15, # 15-30 epochs for production
vocab_size=200, # tokenizer model vocabulary size
character_coverage=1.0, # tokenizer model character coverage percent
gen_chars=0, # the maximum number of characters possible per-generated line of text
gen_lines=10000, # the number of generated text lines
rnn_units=256, # dimensionality of LSTM output space
batch_size=64, # batch size
buffer_size=1000, # buffer size to shuffle the dataset
dropout_rate=0.2, # fraction of the inputs to drop
dp=True, # let's use differential privacy
dp_learning_rate=0.015, # learning rate
dp_noise_multiplier=1.1, # control how much noise is added to gradients
dp_l2_norm_clip=1.0, # bound optimizer's sensitivity to individual training points
dp_microbatches=256, # split batches into minibatches for parallelism
checkpoint_dir=(Path.cwd() / 'checkpoints').as_posix(),
save_all_checkpoints=False,
field_delimiter=",",
input_data_path=annotated_file # filepath or S3
)
# Train a model
# The training function only requires our config as a single arg
from gretel_synthetics.train import train_rnn
train_rnn(config)
# Let's generate some records!
from collections import Counter
from gretel_synthetics.generate import generate_text
# Generate this many records
records_to_generate = 111
# Validate each generated record
# Note: This custom validator verifies the record structure matches
# the expected format for UCI healthcare data, and also that
# generated records are Female (e.g. column 1 is 0)
def validate_record(line):
rec = line.strip().split(",")
if not int(rec[1]) == 0:
raise Exception("record generated must be female")
if len(rec) == 14:
int(rec[0])
int(rec[2])
int(rec[3])
int(rec[4])
int(rec[5])
int(rec[6])
int(rec[7])
int(rec[8])
float(rec[9])
int(rec[10])
int(rec[11])
int(rec[12])
int(rec[13])
else:
raise Exception('record not 14 parts')
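# Quick sanity check of the validation rules above (standalone copy of
# the two structural checks only; the sample record below is hypothetical):
def _check_structure(line):
    rec = line.strip().split(",")
    if int(rec[1]) != 0:
        raise ValueError("record generated must be female")
    if len(rec) != 14:
        raise ValueError("record not 14 parts")
_check_structure("63,0,3,145,233,1,0,150,2.3,0,0,1,1,0")  # 14 fields, sex == 0 -> passes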
# Dataframe to hold synthetically generated records
synth_df = pd.DataFrame(columns=df.columns)
for idx, record in enumerate(generate_text(config, line_validator=validate_record)):
status = record.valid
# ensure all generated records are unique
synth_df = synth_df.drop_duplicates()
synth_cnt = len(synth_df.index)
if synth_cnt > records_to_generate:
break
# if generated record passes validation, save it
if status:
print(f"({synth_cnt}/{records_to_generate} : {status})")
print(f"{record.text}")
data = record.values_as_list()
synth_df = pd.concat([synth_df, pd.DataFrame([dict(zip(df.columns, data))])], ignore_index=True)
import matplotlib.pyplot as plt
from pathlib import Path
import seaborn as sns
# Load model history from file
history = pd.read_csv(f"{(Path(config.checkpoint_dir) / 'model_history.csv').as_posix()}")
# Plot output
def plot_training_data(history: pd.DataFrame):
sns.set(style="whitegrid")
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(18,4))
sns.lineplot(x=history['epoch'], y=history['loss'], ax=ax1, color='orange').set(title='Model training loss')
history[['perplexity', 'epoch']].plot('epoch', ax=ax2, color='orange').set(title='Perplexity')
history[['accuracy', 'epoch']].plot('epoch', ax=ax3, color='blue').set(title='% Accuracy')
plt.show()
plot_training_data(history)
# Preview the synthetic dataset
synth_df.head(10)
# As a final step, combine the original training data +
# our synthetic records, and shuffle them to prepare for training
train_df = annotate_dataset(pd.read_csv(source_file))
combined_df = pd.concat([synth_df, train_df]).sample(frac=1)
# Write our final training dataset to disk (download this for the Kaggle experiment!)
combined_df.to_csv('synthetic_train_shuffled.csv', index=False)
combined_df.head(10)
# Plot distribution
counts = combined_df['sex'].astype(int).value_counts().sort_values(ascending=False)
counts.rename({1:"Male", 0:"Female"}).plot.pie()
```
# Methods of Approximating Ambient Light Level Using Camera Output
## 1. Helper Functions
These helper functions can largely be ignored, but make sure to run each cell before using the algorithm section (Section 2).
### 1.1 Get Camera Capture
Get capture from camera.
```
import matplotlib.pyplot as plt
import cv2
def show_img(img):
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
def get_capture():
cap = cv2.VideoCapture(0, cv2.CAP_DSHOW)
# Lower resolution to speed up processing
cap.set(3, 640)
cap.set(4, 480)
ret, img = cap.read()
if not ret:
print("Failed to get video capture.")
raise ConnectionError
else:
return img
img = get_capture()
show_img(img)
```
### 1.2 Get Data Set
Load the dataset mapping each camera frame to an ALS lux reading into memory. The datasets are lists of tuples where tuple[0] is the camera image and tuple[1] is the real lux value as read from the ALS.
```
import matplotlib.pyplot as plt
import glob
import cv2
DARK_SET_PATH = r"dataset\dark_background"
LIGHT_D25_SET_PATH = r"dataset\light_background_D25"
LIGHT_D90_SET_PATH = r"dataset\light_background_D90"
def get_num_samples(set_path):
return len(glob.glob1(set_path, "*.txt"))
def get_real_lux_samples(set_path):
samples = []
for i in range(get_num_samples(set_path)):
with open("{}\{}.txt".format(set_path, i), "r") as f:
file_str = f.read()
samples.append(float(file_str))
return samples
def get_camera_frames(set_path):
samples = []
for i in range(get_num_samples(set_path)):
image = cv2.imread(r"{}\{}.png".format(set_path, i))
samples.append(image)
return samples
def get_dataset(set_path):
return list(zip(get_camera_frames(set_path), get_real_lux_samples(set_path)))
dark_dataset = get_dataset(DARK_SET_PATH)
light_d25_dataset = get_dataset(LIGHT_D25_SET_PATH)
light_d90_dataset = get_dataset(LIGHT_D90_SET_PATH)
```
### 1.3 Map Operation
Applies a map operation over a dataset. Usually the map operator is a function that takes in an image and returns the approximated lux level.
```
def map_data(dataset, map_op):
def inner_map_op(data):
img = data[0]
real_lux = data[1]
return map_op(img), real_lux
return list(map(inner_map_op, dataset))
```
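As a quick illustration of that contract, here is a standalone usage sketch (`map_data` is restated so the snippet runs on its own, and the "images" are just placeholder numbers):

```python
def map_data(dataset, map_op):
    def inner_map_op(data):
        img, real_lux = data
        return map_op(img), real_lux
    return list(map(inner_map_op, dataset))

# toy dataset: (image stand-in, real ALS lux)
toy_dataset = [(100, 220.0), (8, 3.5)]

# a trivial "lux estimation algorithm": double the pixel value
mapped = map_data(toy_dataset, lambda img: img * 2.0)
print(mapped)  # [(200.0, 220.0), (16.0, 3.5)]
```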
### 1.4 Plot Mapped Data
Takes in the mapped data (usually from the map_data() function): a list of tuples where tuple[0] is the approximated lux and tuple[1] is the real ALS reading. The function then creates a scatter plot of the two series.
```
from pylab import plot, title, xlabel, ylabel, savefig, legend, array
from itertools import chain
def create_mapped_data_figure(title, mapped_data):
approx_lux_list, real_lux_list = list(zip(*mapped_data))
fig = plt.figure()
ax1 = fig.add_subplot(111)
fig.suptitle(title)
ax1.scatter(list(range(len(approx_lux_list))), approx_lux_list, c='r', marker="s", label='Approx Lux')
ax1.scatter(list(range(len(real_lux_list))), real_lux_list, c='g', marker="s", label='Real Lux')
ax1.set_ylabel("Lux")
ax1.set_xlabel("Sample")
plt.legend(loc='upper right', bbox_to_anchor=(1.32, 1));
```
### 1.5 Uber Mapping Function
Uber function which:
1. Runs the lux estimation algorithm over each image in the loaded data set (where algo is f(img) -> lux).
2. Maps each data set into a figure.
3. Displays each mapping.
This is generally the function each algorithm should call to show its data.
```
def apply_and_show_algo(algo):
light_d25_data_mapped = map_data(light_d25_dataset, algo)
light_d90_data_mapped = map_data(light_d90_dataset, algo)
dark_data_mapped = map_data(dark_dataset, algo)
create_mapped_data_figure('Light Background D25 Light', light_d25_data_mapped)
create_mapped_data_figure('Light Background D90 Light', light_d90_data_mapped)
create_mapped_data_figure('Dark Background', dark_data_mapped)
plt.show()
```
### 1.6 Get ALS Sample
Query the real lux value from the ALS sensor. There must be an ALS on the system, otherwise this function will throw.
```
import winrt.windows.devices.sensors as sensors
def get_als_sample():
als = sensors.LightSensor.get_default()
if not als:
print("Can't read from ALS, none on system.")
raise ConnectionError
return als.get_current_reading().illuminance_in_lux
```
### 1.7 Live Comparison
Grabs a camera capture and runs the given lux estimation algorithm on it. The function will then query the ALS. The approx lux and real lux are returned as a tuple for comparison.
```
def live_compare(algo):
img = get_capture()
show_img(img)
approx_lux = algo(img)
real_lux = get_als_sample()
return approx_lux, real_lux
```
## 2. (Option 1) Average Pixel Brightness
Convert the RGB image to the YUV color space, then take the mean value of the Y channel.
```
import numpy as np
def get_avg_brightness(img):
img_yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(img_yuv)
return np.mean(y)
```
### 2.1 Average Pixel Brightness Dataset Performance
Run the average pixel brightness algorithm over the datasets.
```
apply_and_show_algo(get_avg_brightness)
```
### 2.2 Average Pixel Brightness Live Sample
Run the average pixel brightness algorithm on a live camera sample.
```
approx_lux, real_lux = live_compare(get_avg_brightness)
print('Approx Lux: {}'.format(approx_lux))
print('Real Lux: {}'.format(real_lux))
```
Run the average pixel brightness algorithm over time on a live camera sample
```
def collect_live_compare_over_time():
MAX_ITER = 4
data_list = []
for i in range(MAX_ITER):
print("Querying iteration {}".format(i))
data_list.append(live_compare(get_avg_brightness))
create_mapped_data_figure("Live Samples", data_list)
plt.show()
collect_live_compare_over_time()
```
## 3. (Option 2) Use ML to Classify User Environment
Use ML to classify what brightness environment the camera image is in.
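This option is not implemented in the notebook; a minimal sketch of what it could look like, using a k-nearest-neighbors classifier on the mean-brightness feature (all training values and labels below are made up for illustration):

```python
from sklearn.neighbors import KNeighborsClassifier

# feature: [mean Y-channel brightness, 0-255]; label: environment class
X = [[5], [12], [20], [80], [110], [130], [210], [230], [250]]
y = ["dark", "dark", "dark", "indoor", "indoor", "indoor",
     "outdoor", "outdoor", "outdoor"]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X, y)
print(clf.predict([[15], [120], [240]]))
```

A real version would extract richer features (histograms, exposure metadata) and train on labeled captures rather than hand-picked numbers.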
## 4. (Option 3) Linear Regression
Gather a large set of camera captures and the associated lux readings from a real ALS, then run regression on the data to find a more accurate lux approximation. This can be combined with Option 1 to "calibrate" the readings.
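A sketch of the calibration idea (the paired samples below are fabricated; a real run would collect many (mean brightness, ALS lux) pairs from the dataset loaders above):

```python
import numpy as np

# fabricated paired samples: camera mean brightness vs. real ALS lux
brightness = np.array([10.0, 50.0, 90.0, 130.0, 170.0])
real_lux = np.array([25.0, 105.0, 185.0, 265.0, 345.0])

# fit lux ~ slope * brightness + intercept
slope, intercept = np.polyfit(brightness, real_lux, 1)

def calibrated_lux(mean_brightness):
    """Map a raw mean-brightness reading onto the ALS lux scale."""
    return slope * mean_brightness + intercept

print(slope, intercept)       # ~2.0 and ~5.0 for these samples
print(calibrated_lux(100.0))  # ~205.0
```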
## 5. Set Display Brightness Based On Camera
```
import wmi
def set_display_brightness(percent):
print("Setting display brightness to {}%".format(percent))
wmi.WMI(namespace='wmi').WmiMonitorBrightnessMethods()[0].WmiSetBrightness(percent, 0)
approx_lux, real_lux = live_compare(get_avg_brightness)
print('Approx Lux: {}'.format(approx_lux))
print('Real Lux: {}'.format(real_lux))
if approx_lux > 200:
print('Assuming user is in an outdoor-like environment')
set_display_brightness(90)
else:
print('Assuming user is in an indoor-like environment')
set_display_brightness(40)
```
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# Reflect an existing database into a new model and reflect the tables
Base = automap_base()
Base.prepare(engine, reflect=True)
Base.classes.keys()
# View all of the classes that automap found, and save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
```
# Exploratory Climate Analysis
Start of My Analysis.
```
# Find the tables...
inspector = inspect(engine)
inspector.get_table_names()
conn = engine.connect()
mdata = pd.read_sql("select * from measurement", conn)
mdata
# Get a list of column names and types
columns = inspector.get_columns('measurement')
for c in columns:
print(c['name'], c["type"])
# columns
# Get a list of column names and types
columns = inspector.get_columns('station')
for c in columns:
print(c['name'], c["type"])
# columns
```
#### Design a query to retrieve the last 12 months of precipitation data and plot the results
```
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
last_date
# Calculate the date 1 year ago from the last data point in the database
year_ago = dt.date(2017, 8, 23) - dt.timedelta(days=365)
year_ago
# Perform a query to retrieve the data and precipitation scores
Prcp_results = session.query(Measurement.date, Measurement.prcp).\
filter(Measurement.date >= year_ago).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
Precipitation_inches_oneyear = pd.DataFrame(Prcp_results, columns = ['date', 'precipitation'])
Precipitation_inches_oneyear
#Sort by date
Precipitation_inches_oneyear = Precipitation_inches_oneyear.sort_values(by=['date'])
# Reindex the dataframe by date
Precipitation_inches_oneyear = Precipitation_inches_oneyear.set_index(['date'])
# Use Pandas Plotting with Matplotlib to plot the data
Precipitation_inches_oneyear.plot(rot=90)
plt.xlabel('date')
plt.ylabel('inches')
# Use Pandas to calculate the summary statistics for the precipitation data
Precipitation_inches_oneyear.describe()
# Design a query to show how many stations are available in this dataset?
num_station = session.query(Station.station).count()
num_station
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
active_stations = session.query(Measurement.station,func.count(Measurement.station)).\
group_by(Measurement.station).order_by(func.count(Measurement.station).desc())
most_active_stations = session.query(Station.name).filter(Station.station==active_stations[0][0]).all()
print(f'The most active station is {active_stations[0]}, {most_active_stations[0]}')
for station in active_stations:
print (station)
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
temp_station = session.query(func.min(Measurement.tobs),func.max(Measurement.tobs), func.avg(Measurement.tobs)).\
filter(Measurement.station==active_stations[0][0])
temp_station[0][0],temp_station[0][1],round(temp_station[0][2],2)
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
active_tobs = session.query(Measurement.date, Measurement.tobs).filter(Measurement.station==active_stations[0][0]).\
filter(Measurement.date>=year_ago).all()
active_tobs[:10]
active_tobs_df = pd.DataFrame(active_tobs)
active_tobs_df.head()
active_tobs_df.plot(kind="hist",bins=12)
plt.xlabel("Temperature")
plt.title("Distribution of Temperature Observation")
plt.savefig("Histogram.png")
```
## Bonus Challenge Assignment
```
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
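# Sketch of that bar chart with assumed temperatures (a real run would
# unpack calc_temps('2017-02-28', '2017-03-05') instead of hard-coding):
import matplotlib.pyplot as plt
tmin, tavg, tmax = 62.0, 70.0, 78.0
trip_yerr = tmax - tmin  # peak-to-peak error bar
plt.figure(figsize=(2, 5))
plt.bar(0, tavg, yerr=trip_yerr, color='coral')
plt.title("Trip Avg Temp")
plt.ylabel("Temp (F)")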
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
```
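The to-do steps in the cell above mostly reduce to building a date range and stripping off the year. A standalone sketch with an assumed one-week trip window (each resulting string would then be passed to `daily_normals(...)` and the tuples collected into a list called `normals`):

```python
from datetime import date, timedelta

# assumed trip window, for illustration only
trip_start = date(2018, 1, 1)
trip_end = date(2018, 1, 7)

# build the range of dates, then strip off the year -> '%m-%d' strings
n_days = (trip_end - trip_start).days + 1
trip_dates = [trip_start + timedelta(days=i) for i in range(n_days)]
month_day_strings = [d.strftime("%m-%d") for d in trip_dates]

print(month_day_strings)
```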
## Homework 3 and 4 - Applications Using MRJob
```
# general imports
import os
import re
import sys
import time
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# tell matplotlib not to open a new window
%matplotlib inline
# automatically reload modules
%reload_ext autoreload
%autoreload 2
# print some configuration details for future replicability.
print 'Python Version: %s' % (sys.version.split('|')[0])
hdfs_conf = !hdfs getconf -confKey fs.defaultFS ### UNCOMMENT ON DOCKER
#hdfs_conf = !hdfs getconf -confKey fs.default.name ### UNCOMMENT ON ALTISCALE
print 'HDFS filesystem running at: \n\t %s' % (hdfs_conf[0])
JAR_FILE = "/usr/lib/hadoop-mapreduce/hadoop-streaming-2.6.0-cdh5.7.0.jar"
HDFS_DIR = "/user/root/HW3"
HOME_DIR = "/media/notebooks/SP18-1-maynard242" # FILL IN HERE eg. /media/notebooks/w261-main/Assignments
# save path for use in Hadoop jobs (-cmdenv PATH={PATH})
from os import environ
PATH = environ['PATH']
#!hdfs dfs -mkdir HW3
!hdfs dfs -ls
%%writefile example1.txt
Unix,30
Solaris,10
Linux,25
Linux,20
HPUX,100
AIX,25
%%writefile example2.txt
foo foo quux labs foo bar jimi quux jimi jimi
foo jimi jimi
data mining is data science
%%writefile WordCount.py
from mrjob.job import MRJob
import re
WORD_RE = re.compile(r"[\w']+")
class MRWordFreqCount(MRJob):
def mapper(self, _, line):
for word in WORD_RE.findall(line):
yield word.lower(), 1
def combiner(self, word, counts):
yield word, sum(counts)
# e.g. the reducer receives ('hello', (1,1,1,1,1,1)); with a combiner those counts are pre-summed per mapper
def reducer(self, word, counts):
yield word, sum(counts)
if __name__ == '__main__':
MRWordFreqCount.run()
!python WordCount.py -r hadoop --cmdenv PATH=/opt/anaconda/bin:$PATH example2.txt
from WordCount import MRWordFreqCount
mr_job = MRWordFreqCount(args=['example2.txt'])
with mr_job.make_runner() as runner:
runner.run()
# stream_output: get access of the output
for line in runner.stream_output():
print mr_job.parse_output_line(line)
%%writefile AnotherWordCount.py
from mrjob.job import MRJob
class MRAnotherWordCount(MRJob):
def mapper (self,_,line):
yield "chars", len(line)
yield "words", len(line.split())
yield 'lines', 1
def reducer (self, key, values):
yield key, sum(values)
if __name__ == '__main__':
MRAnotherWordCount.run()
!python AnotherWordCount.py example2.txt
%%writefile AnotherWC3.py
# Copyright 2009-2010 Yelp
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""An implementation of wc as an MRJob.
This is meant as an example of why mapper_final is useful."""
from mrjob.job import MRJob
class MRWordCountUtility(MRJob):
def __init__(self, *args, **kwargs):
super(MRWordCountUtility, self).__init__(*args, **kwargs)
self.chars = 0
self.words = 0
self.lines = 0
def mapper(self, _, line):
# Don't actually yield anything for each line. Instead, collect them
# and yield the sums when all lines have been processed. The results
# will be collected by the reducer.
self.chars += len(line) + 1 # +1 for newline
self.words += sum(1 for word in line.split() if word.strip())
self.lines += 1
def mapper_final(self):
yield('chars', self.chars)
yield('words', self.words)
yield('lines', self.lines)
def reducer(self, key, values):
yield(key, sum(values))
if __name__ == '__main__':
MRWordCountUtility.run()
!python AnotherWC3.py -r hadoop --cmdenv PATH=/opt/anaconda/bin:$PATH example2.txt
%%writefile AnotherWC2.py
from mrjob.job import MRJob
from mrjob.step import MRStep
import re
WORD_RE = re.compile(r"[\w']+")
class MRMostUsedWord(MRJob):
def steps(self):
return [
MRStep(mapper=self.mapper_get_words,
combiner=self.combiner_count_words,
reducer=self.reducer_count_words),
MRStep(reducer=self.reducer_find_max_word)
]
def mapper_get_words(self, _, line):
# yield each word in the line
for word in WORD_RE.findall(line):
self.increment_counter('group', 'counter_name', 1)
yield (word.lower(), 1)
def combiner_count_words(self, word, counts):
# optimization: sum the words we've seen so far
yield (word, sum(counts))
def reducer_count_words(self, word, counts):
# send all (num_occurrences, word) pairs to the same reducer.
# num_occurrences is so we can easily use Python's max() function.
yield None, (sum(counts), word)
# discard the key; it is just None
def reducer_find_max_word(self, _, word_count_pairs):
# each item of word_count_pairs is (count, word),
# so yielding one results in key=counts, value=word
yield max(word_count_pairs)
if __name__ == '__main__':
MRMostUsedWord.run()
!python WordCount.py example2.txt --output-dir mrJobOutput
!ls -las mrJobOutput/
!cat mrJobOutput/part-0000*
%%writefile WordCount2.py
from mrjob.job import MRJob
from mrjob.step import MRStep
import re
WORD_RE = re.compile(r"[\w']+")
class MRWordFreqCount(MRJob):
SORT_VALUES = True
def mapper(self, _, line):
for word in WORD_RE.findall(line):
self.increment_counter('group', 'mapper', 1)
yield word.lower(), 1
def jobconfqqqq(self): # assume we had a second job to sort the word counts in decreasing order of counts
orig_jobconf = super(MRWordFreqCount, self).jobconf()
custom_jobconf = { # key value pairs
'mapred.output.key.comparator.class': 'org.apache.hadoop.mapred.lib.KeyFieldBasedComparator',
'mapred.text.key.comparator.options': '-k2,2nr',
'mapred.reduce.tasks': '1',
}
combined_jobconf = orig_jobconf
combined_jobconf.update(custom_jobconf)
self.jobconf = combined_jobconf
return combined_jobconf
def combiner(self, word, counts):
self.increment_counter('group', 'combiner', 1)
yield word, sum(counts)
def reducer(self, word, counts):
self.increment_counter('group', 'reducer', 1)
yield word, sum(counts)
def steps(self):
return [MRStep(
mapper = self.mapper,
combiner = self.combiner,
reducer = self.reducer,
#,
# jobconf = self.jobconfqqqq
# jobconf = {'mapred.output.key.comparator.class': 'org.apache.hadoop.mapred.lib.KeyFieldBasedComparator',
# 'mapred.text.key.comparator.options':'-k1r',
# 'mapred.reduce.tasks' : 1}
)]
if __name__ == '__main__':
MRWordFreqCount.run()
!python WordCount2.py --jobconf numReduceTasks=1 example2.txt --output-dir mrJobOutput
!ls -las mrJobOutput/
```
### Calculate Relative Frequency and Sort by TOP and BOTTOM
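The cell below is still the most-used-word job rather than a relative-frequency computation. A plain Python 3 sketch (no Hadoop) of what "relative frequency" means here — per-word count divided by the total word count, sorted descending — using the example2.txt text:

```python
from collections import Counter

text = """foo foo quux labs foo bar jimi quux jimi jimi
foo jimi jimi
data mining is data science"""

counts = Counter(text.split())
total = sum(counts.values())
rel_freq = sorted(((word, count / total) for word, count in counts.items()),
                  key=lambda pair: pair[1], reverse=True)
print(rel_freq[:2])  # 'jimi' (5/18) and 'foo' (4/18) lead
```

In MRJob terms, the total would come from a first step that sums all counts, with a second step dividing each word's count by it and sorting on the value.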
```
%%writefile WordCount3.3.py
from mrjob.job import MRJob
from mrjob.step import MRStep
import re
WORD_RE = re.compile(r"[\w']+")
class MRWordCount33(MRJob):
def steps(self):
return [
MRStep(mapper=self.mapper_get_words,
combiner=self.combiner_count_words,
reducer=self.reducer_count_words),
MRStep(reducer=self.reducer_find_max_word)
]
def mapper_get_words(self, _, line):
for word in WORD_RE.findall(line):
self.increment_counter('Process', 'Mapper', 1)
yield (word.lower(), 1)
def combiner_count_words(self, word, counts):
# optimization: sum the words we've seen so far
yield (word, sum(counts))
def reducer_count_words(self, word, counts):
# send all (num_occurrences, word) pairs to the same reducer.
# num_occurrences is so we can easily use Python's max() function.
yield None, (sum(counts), word)
# discard the key; it is just None
def reducer_find_max_word(self, _, word_count_pairs):
# each item of word_count_pairs is (count, word),
# so yielding one results in key=counts, value=word
yield max(word_count_pairs)
if __name__ == '__main__':
MRWordCount33.run()
%%writefile top_pages.py
"""Find Vroots with more than 400 visits.
This program will take a CSV data file and output tab-seperated lines of
Vroot -> number of visits
To run:
python top_pages.py anonymous-msweb.data
To store output:
python top_pages.py anonymous-msweb.data > top_pages.out
"""
from mrjob.job import MRJob
import csv
def csv_readline(line):
"""Given a sting CSV line, return a list of strings."""
for row in csv.reader([line]):
return row
class TopPages(MRJob):
def mapper(self, line_no, line):
"""Extracts the Vroot that was visited"""
cell = csv_readline(line)
if cell[0] == 'V':
yield ### FILL IN
# What Key, Value do we want to output?
def reducer(self, vroot, visit_counts):
"""Summarizes the visit counts by adding them together. If total visits
is more than 400, yield the results"""
total = ### FILL IN
# How do we calculate the total visits from the visit_counts?
if total > 400:
yield ### FILL IN
# What Key, Value do we want to output?
if __name__ == '__main__':
TopPages.run()
%reload_ext autoreload
%autoreload 2
from top_pages import TopPages
import csv
mr_job = TopPages(args=['anonymous-msweb.data'])
with mr_job.make_runner() as runner:
runner.run()
for line in runner.stream_output():
print(mr_job.parse_output_line(line))
%%writefile TopPages.py
"""Find Vroots with more than 400 visits.
This program will take a CSV data file and output tab-separated lines of
Vroot -> number of visits
To run:
python top_pages.py anonymous-msweb.data
To store output:
python top_pages.py anonymous-msweb.data > top_pages.out
"""
from mrjob.job import MRJob
import csv
def csv_readline(line):
"""Given a string CSV line, return a list of strings."""
for row in csv.reader([line]):
return row
class TopPages(MRJob):
def mapper(self, line_no, line):
"""Extracts the Vroot that was visited"""
cell = csv_readline(line)
if cell[0] == 'V':
yield cell[1],1
def reducer(self, vroot, visit_counts):
"""Summarizes the visit counts by adding them together. If total visits
is more than 400, yield the results"""
total = sum(visit_counts)
if total > 400:
yield vroot, total
if __name__ == '__main__':
TopPages.run()
%reload_ext autoreload
%autoreload 2
from TopPages import TopPages
import csv
mr_job = TopPages(args=['anonymous-msweb.data'])
with mr_job.make_runner() as runner:
runner.run()
count = 0
for line in runner.stream_output():
print(mr_job.parse_output_line(line))
count += 1
print('Final count: ', count)
```
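The mapper/combiner/reducer dataflow used throughout these jobs can be simulated in plain Python, which is handy for reasoning about what each stage sees. Below is a minimal sketch with hypothetical input lines, independent of mrjob and Hadoop:

```python
import re
from collections import defaultdict

WORD_RE = re.compile(r"[\w']+")

def simulate_wordcount(lines):
    # mapper: emit (word, 1) for every word in every line
    mapped = [(w.lower(), 1) for line in lines for w in WORD_RE.findall(line)]
    # shuffle/sort: group values by key, as Hadoop does between stages
    grouped = defaultdict(list)
    for word, n in mapped:
        grouped[word].append(n)
    # reducer: sum the counts for each word
    return {word: sum(ns) for word, ns in grouped.items()}

counts = simulate_wordcount(["a rose is a rose", "a daisy"])
print(counts)  # {'a': 3, 'rose': 2, 'is': 1, 'daisy': 1}
```

The combiner step is omitted here because it is a pure optimization: it performs the same summing as the reducer, only earlier and per-mapper.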
# Mega-Meta Functional Connectivity Pipeline
_________
```
CHANGE LOG
08/14 - MJ changed "dur = rel_events.loc[o,'durTR']" to "dur = rel_events.loc[i,'durTR']" (o -> i)
05/22/2019 - JMP initial commit
05/28/2019 - JMP added 'rest' TR extraction
```
#### Description
Extracts signal from the Power ROI spheres (264) for a given task condition, as defined by a model spec file, and creates a condition-specific adjacency matrix.
```
import numpy as np
import pandas as pd
import os,glob,sys,pickle,json
from IPython.display import Image
# NIYPE FUNCTIONS
import nipype.interfaces.io as nio # Data i/o
from nipype.interfaces.utility import IdentityInterface, Function # utility
from nipype.pipeline.engine import Node
from nipype.pipeline.engine.workflows import Workflow
from nilearn import image, plotting, input_data
from nilearn.connectome import ConnectivityMeasure
%matplotlib inline
```
# Nipype Setup
1. Infosource for iterating over subjects
2. create subject information structure
3. process confounds
4. subset TR
5. process signal
6. correlation pairwise
7. save function
### Infosource for iterating subjects
```
# set up infosource
infoSource = Node(IdentityInterface(fields = ['subject_id']),
name = 'infosource')
infoSource.iterables = [('subject_id',SUBJECT_LIST)]
```
### Get subject information
This function finds those runs for a given subject that are complete (i.e. have motion, events, and functional data). The function then creates the `subject_str`, which is a modified `model_str` with subject-specific information.
```
def get_subject_info(subject_id,model_str):
"""
checks what runs a given subject has all information for
"""
import numpy as np
import os
subPath = model_str['sub_path'].format(PROJECT=model_str['ProjectID'],PID=subject_id)
Runs = []
for r in model_str['Runs']:
func = model_str['task_func_template'].format(PID=subject_id,
TASK=model_str['TaskName'],
RUN=r)
motion = model_str['motion_template'].format(PID=subject_id,
TASK=model_str['TaskName'],
RUN=r)
events = model_str['event_template'].format(PID=subject_id,
TASK=model_str['TaskName'],
RUN=r)
# check if files exist
if (os.path.isfile(os.path.join(subPath,func)) and
os.path.isfile(os.path.join(subPath,motion)) and
os.path.isfile(os.path.join(subPath,events))):
Runs.append(r)
# return a subject modified model_structure
subj_str = model_str
subj_str['subject_id'] = subject_id
subj_str['Runs'] = Runs
return subj_str
get_sub_info = Node(Function(input_names=['subject_id','model_str'],
output_names=['subj_str'],
function = get_subject_info),
name = "get_subject_info")
get_sub_info.inputs.model_str = model_def
```
### Extract Confounds
This function extracts matter and motion confounds. Matter confounds include Global average signal (from grey matter mask), white matter, and CSF average signal. There are 24 motion parameters, as per Power (2012). These include all 6 motion regressors, their derivatives, the quadratic of the motion params, and the squared derivatives.
```
def extract_confounds(subject_str):
"""
extract confounds for all available runs
"""
import numpy as np
import glob
import os
from nilearn import image, input_data
subPath = subject_str['sub_path'].format(PROJECT=subject_str['ProjectID'],PID=subject_str['subject_id'])
struc_files = glob.glob(subject_str['anat_template'].format(PID=subject_str['subject_id'][4:]))
print(struc_files)
# make matter masks
maskers = [input_data.NiftiLabelsMasker(labels_img=struc,standardize=True,memory='nilearn_cache') for struc in struc_files]
confound = {}
for r in subject_str['Runs']:
func = subject_str['task_func_template'].format(PID=subject_str['subject_id'],
TASK=subject_str['TaskName'],
RUN=r)
func_file = os.path.join(subPath,func)
# high variance confounds
hv_confounds = image.high_variance_confounds(func_file)
# get this run's matter confounds (grand mean, white matter, CSF)
matter_confounds = None
for mask in maskers:
mt = mask.fit_transform(func_file)
mean_matter = np.nanmean(mt,axis=1) # get average signal
if matter_confounds is None:
matter_confounds = mean_matter
else:
matter_confounds = np.column_stack([matter_confounds,mean_matter])
# Motion includes x, y, z, roll, pitch, yaw,
# their derivatives, their squares, and the squared derivatives
motion = subject_str['motion_template'].format(PID=subject_str['subject_id'],
TASK=subject_str['TaskName'],
RUN=r)
motion = np.genfromtxt(os.path.join(subPath,motion),delimiter='\t',skip_header=True)
motion = motion[:,:6] # don't take framewise displacement
# derivative of motion
motion_deriv = np.concatenate([np.zeros([1,np.shape(motion)[1]]),np.diff(motion,axis=0)],axis=0)
matter_deriv = np.concatenate([np.zeros([1,np.shape(matter_confounds)[1]]),np.diff(matter_confounds,axis=0)],axis=0)
conf = np.concatenate([motion,motion**2,motion_deriv,motion_deriv**2,
matter_confounds,matter_confounds**2,matter_deriv,matter_deriv**2,
hv_confounds],axis=1)
confound[r] = conf
return confound
confounds = Node(Function(input_names=['subject_str'],
output_names = ['confound'],
function = extract_confounds),
name = 'get_confounds')
```
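The 24-column motion expansion described above (6 motion regressors, their derivatives, and the squares of both) can be checked in isolation with NumPy. This sketch uses random data in place of real motion files, and mirrors the zero-padding of the first derivative row used in `extract_confounds`:

```python
import numpy as np

rng = np.random.default_rng(0)
motion = rng.standard_normal((100, 6))  # 100 TRs x 6 motion params

# derivative: first row padded with zeros, as in extract_confounds
motion_deriv = np.concatenate([np.zeros((1, motion.shape[1])),
                               np.diff(motion, axis=0)], axis=0)

# columns: motion | motion^2 | derivatives | derivatives^2
conf = np.concatenate([motion, motion**2,
                       motion_deriv, motion_deriv**2], axis=1)
print(conf.shape)  # (100, 24)
```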
### Condition TR
This function finds those TR for a run that match the condition labels of a given model specification. The `condition` input argument must be set for a given pipeline.
```
def get_condition_TR(subject_str):
"""
Gets the TR list for condition of interest
"""
import numpy as np
import os
import pandas as pd
subPath = subject_str['sub_path'].format(PROJECT=subject_str['ProjectID'],PID=subject_str['subject_id'])
conditions = subject_str['Conditions'][subject_str['condition']]
TRs = {}
for r in subject_str['Runs']:
ev = subject_str['event_template'].format(PID=subject_str['subject_id'],
TASK=subject_str['TaskName'],
RUN=r)
events_df = pd.read_csv(os.path.join(subPath,ev),delimiter='\t')
rel_events = events_df.loc[events_df.trial_type.isin(conditions)].reset_index()
rel_events['TR'] = (rel_events['onset']/subject_str['TR']).astype('int')
rel_events['durTR'] = (rel_events['duration']/subject_str['TR']).astype('int')
condition_TR = []
for i,tr in enumerate(rel_events.TR):
dur = rel_events.loc[i,'durTR']
condition_TR.extend(list(range(tr,tr+dur)))
TRs[r] = condition_TR
return TRs
events = Node(Function(input_names=['subject_str'],
output_names = ['TRs'],
function = get_condition_TR),
name = 'get_TRs')
```
### Get Signal
This is where it all comes together. Data is masked and confounds are regressed from the masked signal; only the TRs belonging to the condition are then subset. Currently the Power atlas (264 nodes) is used as the masker.
```
def get_signal(subject_str,confound,TRs,mask):
"""
gets task data, regresses confounds and subsets relevant TR
"""
from nilearn import image, input_data
import numpy as np
import os
subPath = subject_str['sub_path'].format(PROJECT=subject_str['ProjectID'],PID=subject_str['subject_id'])
signal = None
for r in subject_str['Runs']:
runTR = TRs[r]
con = confound[r]
func = subject_str['task_func_template'].format(PID=subject_str['subject_id'],
TASK=subject_str['TaskName'],
RUN=r)
func_file = os.path.join(subPath,func)
masked_fun = mask.fit_transform(func_file,con)
condition_TR = [_ for _ in runTR if _ < masked_fun.shape[0]]
# if condition is rest, take all TR that are unmodeled
if subject_str['condition'] == 'rest':
masked_condition = masked_fun[[i for i in range(masked_fun.shape[0]) if i not in condition_TR],:]
else:
masked_condition = masked_fun[condition_TR,:]
if signal is None:
signal = masked_condition
else:
signal = np.concatenate([signal,masked_condition],axis=0)
return signal
signal = Node(Function(input_names=['subject_str','confound','TRs','mask'],
output_names = ['signal'],
function = get_signal),
name = 'get_signal')
signal.inputs.mask = NODE_MASKER
```
### Adjacency matrix
The final step of the pipeline. Data is pairwise correlated using Pearson's r and the output is a 264x264 adjacency matrix.
```
def make_adj_matrix(signal):
import numpy as np
from scipy import stats
signal[np.isnan(signal)] = 0
features = signal.shape[1]
r_adj = np.zeros([features,features])
p_adj = np.zeros([features,features])
for i in range(features):
for i2 in range(features):
r_adj[i,i2],p_adj[i,i2] = stats.pearsonr(signal[:,i],signal[:,i2])
return r_adj,p_adj
adj_matrix = Node(Function(input_names=['signal'],
output_names = ['r_adj','p_adj'],
function = make_adj_matrix),
name = 'adjacency_matrix')
```
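As a side note, when only the r values are needed, the nested `pearsonr` loop above can be replaced by a single vectorized call: `np.corrcoef` on the transposed signal yields the same correlation matrix much faster (up to the NaN handling, which is replicated here; the p-value matrix would still require `pearsonr` or an analytic formula). A minimal sketch with toy dimensions:

```python
import numpy as np

def make_r_adj(signal):
    # np.corrcoef treats rows as variables, so pass features (ROIs) as rows
    signal = np.nan_to_num(signal)  # same NaN->0 policy as make_adj_matrix
    return np.corrcoef(signal.T)

sig = np.random.default_rng(1).standard_normal((50, 10))  # 50 TRs, 10 ROIs
r_adj = make_r_adj(sig)
print(r_adj.shape)  # (10, 10)
```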
### Data output
Output is a json file containing
* the subject ID
* Project
* Task name
* Condition
* Pearson r value adj matrix
* p.value adj matrix
```
def data_out(subject_str,r_adj,p_adj):
import pickle,os
Output = {"SubjectID":subject_str['subject_id'],
"Project":subject_str['ProjectID'],
"Task":subject_str['TaskName'],
"Condition":subject_str['condition'],
'r_adj':r_adj,
'p_adj':p_adj}
subFile = '{PID}_task-{TASK}_condition-{CONDITION}_parcellation-POWER2011_desc-FCcorrelation_adj.pkl'.format(PID = subject_str['subject_id'],
TASK = subject_str['TaskName'],
CONDITION=subject_str['condition'])
outFile = os.path.join(subject_str['output_dir'],subFile)
with open(outFile,'wb') as outp:
pickle.dump(Output,outp)
data_save = Node(Function(input_names=['subject_str','r_adj','p_adj'],
function = data_out),
name = 'data_out')
```
______
## WIRE UP
```
wfl = Workflow(name='workflow')
wfl.base_dir = working_dir
wfl.connect([(infoSource,get_sub_info,[("subject_id","subject_id")]),
(get_sub_info, confounds,[("subj_str","subject_str")]),
(get_sub_info, events,[('subj_str','subject_str')]),
(get_sub_info,signal,[('subj_str','subject_str')]),
(confounds,signal,[('confound','confound')]),
(events,signal,[('TRs','TRs')]),
(signal, adj_matrix,[('signal','signal')]),
(get_sub_info,data_save,[('subj_str','subject_str')]),
(adj_matrix, data_save,[('r_adj','r_adj'),('p_adj','p_adj')]),
])
```
# Debugging Numba problems
## Common problems
Numba is a compiler: if there's a problem, it could well be a "compilery" problem, since the dynamic interpretation that comes with the Python interpreter is gone! As with any compiler toolchain there's a bit of a learning curve, but once the basics are understood it becomes easy to write quite complex applications.
```
from numba import njit
import numpy as np
```
### Type inference problems
A very large set of problems can be classed as type inference problems. These are problems which appear when Numba can't work out the types of all the variables in your code. Here's an example:
```
@njit
def type_inference_problem():
a = {}
return a
type_inference_problem()
```
Things to note in the above, Numba has said that:
1. It has encountered a typing error.
2. It cannot infer (work out) the type of the variable named `a`.
3. It has an imprecise type for `a` of `DictType[undefined, undefined]`.
4. It's pointing to where the problem is in the source
5. It's giving you things to look at for help
Numba's response is reasonable: how could it possibly compile a specialisation of an empty dictionary? It cannot work out what to use for the key or value type.
### Type unification problems
Another common issue is type unification. This arises because Numba needs the inferred variable types for the code it's compiling to be statically determined and type stable. In practice this usually means something like the type of a variable being changed in a loop, or there being two (or more) possible return types. Example:
```
@njit
def foo(x):
if x > 10:
return (1,)
else:
return 1
foo(1)
```
Things to note in the above, Numba has said that:
1. It has encountered a typing error.
2. It cannot unify the return types and then lists the offending types.
3. It points to the locations in the source that are the cause of the problem.
4. It's giving you things to look at for help.
Numba's response is reasonable: it is not possible to compile a function that returns either a tuple or an integer depending on a branch. You couldn't do that in C/Fortran; same here!
### Unsupported features
Numba supports a subset of Python and NumPy, it's possible to run into something that hasn't been implemented. For example `str(int)` has not been written yet (this is a rather tricky thing to write :)). This is what it looks like:
```
@njit
def foo():
return str(10)
foo()
```
Things to note in the above, Numba has said that:
1. It has encountered a typing error.
2. It's an invalid use of a `Function` of type `(<class 'str'>)` with argument(s) of type(s): `(Literal[int](10))`
3. It points to the location in the source that is the cause of the problem.
4. It's giving you things to look at for help.
What's this bit about?
```
* parameterized
In definition 0:
All templates rejected with literals.
In definition 1:
All templates rejected without literals.
In definition 2:
All templates rejected with literals.
In definition 3:
All templates rejected without literals.
```
Internally Numba does something akin to "template matching" to try and find an implementation of the requested functionality for the requested types; it looks through the definitions to see if any match and reports what they say (which in this case is "rejected").
Here's a different one, Numba's `np.mean` implementation doesn't support `axis`:
```
@njit
def foo():
x = np.arange(100).reshape((10, 10))
return np.mean(x, axis=1)
foo()
```
Things to note in the above, Numba has said that:
1. It has encountered a typing error.
2. It's an invalid use of a `Function` "mean" with argument(s) of type(s): `(array(float64, 2d, C), axis=Literal[int](1))`
3. It's reporting what the various template definitions are responding with: e.g.
"TypingError: numba doesn't support kwarg for mean", which is correct!
4. It points to the location in the source that is the cause of the problem.
5. It's giving you things to look at for help.
A common workaround for the above is to just unroll the loop over the axis, for example:
```
@njit
def foo():
x = np.arange(100).reshape((10, 10))
lim, _ = x.shape
buf = np.empty((lim,), x.dtype)
for i in range(lim):
buf[i] = np.mean(x[i])
return buf
foo()
```
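Outside of Numba, you can convince yourself that the manual unroll matches the unsupported call by comparing it against plain NumPy directly:

```python
import numpy as np

x = np.arange(100.0).reshape((10, 10))

# manual unroll over axis 1, mirroring the jitted workaround above
lim, _ = x.shape
buf = np.empty((lim,), x.dtype)
for i in range(lim):
    buf[i] = np.mean(x[i])

# identical to the axis keyword that Numba's np.mean lacks
assert np.allclose(buf, np.mean(x, axis=1))
```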
### Lowering errors
"Lowering" is the process of translating the Numba IR to LLVM IR to machine code. Numba tries really hard to prevent lowering errors, but sometimes you might see them; if you do, please tell us:
https://github.com/numba/numba/issues/new
A lowering error means that there's a problem in Numba internals. The most common cause is that it worked out that it could compile a function as all the variable types were statically determined, but when it tried to find an implementation for some operation in the function to translate to machine code, it couldn't find one.
<h3><span style="color:blue"> Task 1: Debugging practice</span></h3>
The following code has a couple of issues, see if you can work them out and fix them.
```
x = np.arange(20.).reshape((4, 5))
@njit
def problem_factory(x):
nrm_x = np.linalg.norm(x, ord=2, axis=1) # axis not supported, manual unroll
nrm_total = np.sum(nrm_x)
ret = {} # dict type requires float->int cast, true branch is int and it sets the dict type
if nrm_total > 87:
ret[nrm_total] = 1
else:
ret[nrm_total] = nrm_total
return ret
# This is a fixed version
@njit
def problem_factory_fixed(x):
lim, _ = x.shape
nrm_x = np.empty(lim, x.dtype)
for i in range(lim):
nrm_x[i] = np.linalg.norm(x[i])
nrm_total = np.sum(nrm_x)
ret = {}
if nrm_total > 87:
ret[nrm_total] = 1.0
else:
ret[nrm_total] = nrm_total
return ret
fixed = problem_factory_fixed(x)
expected = problem_factory.py_func(x)
# will pass if problem_factory was fixed correctly
for (k_f, v_f), (k_e, v_e) in zip(fixed.items(), expected.items()):
np.testing.assert_allclose(k_f, k_e)
np.testing.assert_allclose(v_f, v_e)
```
## Debugging compiled code
In Numba compiled code debugging typically takes one of a few forms.
1. Temporarily disabling the JIT compiler so that the code just runs in Python and the usual Python debugging tools can be used. Either remove the Numba JIT decorators or set the environment variable `NUMBA_DISABLE_JIT`, to disable JIT compilation globally, [docs](http://numba.pydata.org/numba-doc/latest/reference/envvars.html#envvar-NUMBA_DISABLE_JIT).
2. Traditional "print-to-stdout" debugging, Numba supports the use of `print()` (without interpolation!) so it's relatively easy to inspect values and control flow. e.g.
```
@njit
def demo_print(x):
print("function entry")
if x > 1:
print("branch 1, x = ", x)
else:
print("branch 2, x = ", x)
print("function exit")
demo_print(5)
```
3. Debugging with `gdb` (the GNU debugger). This is not going to be demonstrated here as it does not work with notebooks. However, the gist is to supply the Numba JIT decorator with the kwarg `debug=True` and then Numba has a special function `numba.gdb()` that can be used in your program to automatically launch and attach `gdb` at the call site. For example (and **remember not to run this!**):
```
from numba import gdb
@njit(debug=True)
def _DO_NOT_RUN_gdb_demo(x):
if x > 1:
y = 3
gdb()
else:
y = 5
return y
```
Extensive documentation on using `gdb` with Numba is available [here](http://numba.pydata.org/numba-doc/latest/user/troubleshoot.html#debugging-jit-compiled-code-with-gdb).
# Introduction to Strings
---
This notebook covers the topic of strings and their importance in the world of programming. You will learn various methods that will help you manipulate these strings and make useful inferences with them. This notebook assumes that you have already completed the "Introduction to Data Science" notebook.
*Estimated Time: 30 minutes*
---
**Topics Covered:**
- Objects
- String Concatenation
- Loops
- String Methods
**Dependencies:**
```
import numpy as np
from datascience import *
```
## What Are Objects?
Objects are used everywhere when you're coding - even when you don't know it. But what really is an object?
By definition, an object is an **instance** of a **class**. They're an **abstraction**, so they can be used to manipulate data. That sounds complicated, doesn't it? Well, to simplify, think of this: a class is a huge general category of something which holds particular attributes (variables) and actions (functions). Let's assume that Mars has aliens called Xelhas and one of them visits Earth. The Xelha species would be a class, and the alien itself would be an *instance* of that class (or an object). By observing its behavior and mannerisms, we would be able to see how the rest of its species goes about doing things.
Strings are objects too, of the *String* class, which has pre-defined methods that we use. But you don't need to worry about that yet. All you should know is that strings are **not** "primitive" data types, such as integers or booleans. That being said, let's delve right in.
Try running the code cell below:
```
5 + "5"
```
Why did that happen?
This can be classified as a *type* error. As mentioned before, Strings are not primitive data types, like integers and booleans, so when you try to **add** a string to an integer, Python gets confused and throws an error. The important thing to note here is that a String is a String: no matter what its contents may be. If it's between two quotes, it has to be a String.
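One way around this kind of type error is an explicit conversion, so that both sides of the + have the same type. A quick sketch:

```python
# convert the int to a string -> both sides are strings
print(str(5) + "5")   # prints 55

# convert the string to an int -> both sides are ints
print(5 + int("5"))   # prints 10
```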
But what if we followed the "same type" rule and tried to add two Strings? Let's try it.
```
"5" + "5"
```
What?! How does 5 + 5 equal 55?
This is known as concatenation.
## Concatenation
"Concatenating" two items means literally combining or joining them.
When you put the + operator with two or more Strings, Python will take all of the content inside quotes and club it all together to make one String. This process is called **concatenation**.
The following examples illustrate how String concatenation works:
```
"Berk" + "eley"
"B" + "e" + "r" + "k" + "e" + "l" + "e" + "y"
```
Here's a small exercise for you, with a lot of variables. Try making the output "today is a lovely day".
_Hint: Remember to add double quotes with spaces " " because Python literally clubs all text together._
```
a = "oda"
b = "is"
c = "a"
d = "l"
e = "t"
f = "y"
g = "lo"
h = "d"
i = "ve"
# your expression here
```
## String methods
The String class is great for regular use because it comes equipped with a lot of built-in functions with useful properties. These functions, or **methods** can fundamentally transform Strings. Here are some common String methods that may prove to be helpful.
### Replace
For example, the *replace* method replaces all instances of some part of a string with some replacement. A method is invoked on a string by placing a . after the string value, then the name of the method, and finally parentheses containing the arguments.
<string>.<method name>(<argument>, <argument>, ...)
Try to predict the output of these examples, then execute them.
```
# Replace one letter
'Hello'.replace('e', 'i')
# Replace a sequence of letters, which appears twice
'hitchhiker'.replace('hi', 'ma')
```
Once a name is bound to a string value, methods can be invoked on that name as well. The name doesn't change in this case, so a new name is needed to capture the result.
Remember, a string method will replace **every** instance of where the replacement text is found.
```
sharp = 'edged'
hot = sharp.replace('ed', 'ma')
print('sharp =', sharp)
print('hot =', hot)
```
Another very useful method is the **`split`** method. It takes in a "separator string" and splits up the original string into an array, with each element of the array being a separated portion of the string.
Here are some examples:
```
"Another very useful method is the split method".split(" ")
string_of_numbers = "1, 2, 3, 4, 5, 6, 7"
arr_of_strings = string_of_numbers.split(", ")
print(arr_of_strings) # Remember, these elements are still strings!
arr_of_numbers = []
for s in arr_of_strings: # Loop through the array, converting each string to an int
arr_of_numbers.append(int(s))
print(arr_of_numbers)
```
As you can see, the `split` function can be very handy when cleaning up and organizing data (a process known as _parsing_).
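For instance, here is a tiny parsing sketch (with made-up input) that combines `split` with the related `strip` method, which trims surrounding whitespace:

```python
line = "  Alice , 23 , Berkeley "
fields = [part.strip() for part in line.split(",")]
print(fields)  # ['Alice', '23', 'Berkeley']

name, age, city = fields
age = int(age)  # the split pieces are strings, so convert
print(name, age, city)  # Alice 23 Berkeley
```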
## Loops
What do you do when you have to do the same task repetitively? Let's say you have to say Hi to someone five times. Would that require 5 lines of "print('hi')"? No! This is why coding is beautiful. It allows for automation and takes care of all the hard work.
Loops, in the literal meaning of the term, can be used to repeat tasks over and over, until you get your desired output. They are also called "iterators", and they are defined using a variable which changes (either increases or decreases) with each loop, to keep track of the number of times you're looping.
The most useful loop to know for the scope of this course is the **for** loop.
A for statement begins with the word *for*, followed by a name we want to give each item in the sequence, followed by the word *in*, and ending with an expression that evaluates to a sequence. The indented body of the for statement is executed once for each item in that sequence.
for *variable* in *np.arange(0,5)*:
Don't worry about the np.arange() part yet. Just remember that this expression produces a sequence, and Strings are sequences too! So let's try our loop with Strings!
for each_character in "John Doenero":
*do something*
Interesting! Let's put our code to test.
```
for each_character in "John Doenero":
print(each_character)
```
Cool, right? Now let's do something more useful.
Write a for loop that iterates through the sentence "Hi, I am a quick brown fox and I jump over the lazy dog" and checks if each letter is an *a*. Print out the number of a's in the sentence.
_Hint: try combining what you've learnt from conditions and use a counter._
```
# your code here
for ...
```
## Conclusion
---
Congratulations! You have learned the basics of String manipulation in Python.
## Bibliography
---
Some examples adapted from the UC Berkeley Data 8 textbook, <a href="https://www.inferentialthinking.com">*Inferential Thinking*</a>.
Authors:
- Shriya Vohra
- Scott Lee
- Pancham Yadav
# Find UniProt IDs in ChEMBL targets
Ultimately, we are interested in getting [activity data from ChEMBL](/chembl-27/query_local_chembl-27.ipynb). To do so, we need to account for three components:
* The compound being measured
* The target the compound binds to
* The assay where this measurement took place
So, to find all activity data stored in ChEMBL that refers to kinases, we have to query for those assays annotated with a certain target.
Each of those three components has a unique ChEMBL ID, but so far we have only obtained UniProt IDs in the `human-kinases` notebook. We need a way to connect UniProt IDs to ChEMBL target IDs. Fortunately, ChEMBL maintains such a map in its FTP releases. We will parse that file and convert it into a dataframe for easy manipulation.
```
from pathlib import Path
import urllib.request
import pandas as pd
REPO = (Path(_dh[-1]) / "..").resolve()
DATA = REPO / 'data'
CHEMBL_VERSION = 28
url = fr"ftp://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/releases/chembl_{CHEMBL_VERSION}/chembl_uniprot_mapping.txt"
with urllib.request.urlopen(url) as response:
uniprot_map = pd.read_csv(response, sep="\t", skiprows=[0], names=["UniprotID", "chembl_targets", "description", "type"])
uniprot_map
```
We join this new information to the human kinases aggregated list from `human-kinases` (all of them, regardless of the source):
```
kinases = pd.read_csv(DATA / "human_kinases.aggregated.csv", index_col=0)
kinases
```
We are only interested in those kinases present in these datasets:
* KinHub
* KLIFS
* PKinFam
* Dunbrack's MSA
```
kinases_subset = kinases[kinases[["kinhub", "klifs", "pkinfam", "dunbrack_msa"]].sum(axis=1) > 0]
kinases_subset
```
We would also like to preserve the provenance of the UniProt assignment, so we will now group the provenance columns into a single one.
```
kinases_subset["origin"] = kinases_subset.apply(lambda s: '|'.join([k for k in ["kinhub", "klifs", "pkinfam", "reviewed_uniprot", "dunbrack_msa"] if s[k]]), axis=1)
kinases_subset
```
We can now merge the needed columns based on the `UniprotID` key.
```
merged = pd.merge(kinases_subset[["UniprotID", "Name", "origin"]], uniprot_map[["UniprotID", "chembl_targets", "type"]], how="inner", on='UniprotID')[["UniprotID", "Name", "chembl_targets", "type", "origin"]]
merged
```
How is this possible? 969 targets (ChEMBL 28)?!
Apparently, there's no 1:1 correspondence between UniProt ID and ChEMBL ID! Some UniProt IDs are included in several ChEMBL targets:
```
merged.UniprotID.value_counts()
merged[merged.UniprotID == "P11802"]
```
... and some ChEMBL targets include several kinases (e.g. chimeric proteins):
```
merged[merged.chembl_targets == "CHEMBL2096618"]
```
This is due to the different `type` values:
```
merged.type.value_counts()
```
If we focus on `SINGLE PROTEIN` types:
```
merged[merged.type == "SINGLE PROTEIN"]
```
... we end up with a total of 491 targets (ChEMBL 28), which is more acceptable.
For that reason, we will only save records corresponding to `type == SINGLE PROTEIN`
```
merged[merged.type == "SINGLE PROTEIN"].to_csv(DATA / f"human_kinases_and_chembl_targets.chembl_{CHEMBL_VERSION}.csv", index=False)
```
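The one-to-many behaviour of `pd.merge` that produced the surprising 969 rows above is easy to reproduce with toy frames (hypothetical IDs, not real UniProt or ChEMBL accessions):

```python
import pandas as pd

kin = pd.DataFrame({"UniprotID": ["P1", "P2"], "Name": ["K1", "K2"]})
chem = pd.DataFrame({"UniprotID": ["P1", "P1", "P2"],
                     "chembl_targets": ["C1", "C2", "C3"]})

# an inner merge duplicates a left-hand row once per matching right-hand row
merged = pd.merge(kin, chem, how="inner", on="UniprotID")
print(len(merged))  # 3: P1 matches two targets, so its row appears twice
```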
```
%matplotlib inline
from pathlib import Path
from pandas import DataFrame,Series
from pandas.plotting import scatter_matrix
from sklearn.model_selection import train_test_split
from sklearn import tree
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score
import pandas as pd
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
import numpy as np
import scipy.stats as stats
import pylab as pl
from random import sample
#Description of features
#Average[3]: Average acceleration (for each axis)
#Standard Deviation[3]: Standard deviation (for each axis)
#Average Absolute Difference[3]: Average absolute
#difference between the value of each of the 200 readings
#within the ED and the mean value over those 200 values
#(for each axis)
#Average Resultant Acceleration[1]: Average of the square
#roots of the sum of the values of each axis squared
#over the ED
#Time Between Peaks[3]: Time in milliseconds between
#peaks in the sinusoidal waves associated with most
#activities (for each axis)
#Binned Distribution[30]: We determine the range of values
#for each axis (maximum – minimum), divide this range into
#10 equal sized bins, and then record what fraction of the
#200 values fell within each of the bins.
my_file = Path("/Users/bharu/CS690-PROJECTS/ActivityAnalyzer/activity_analyzer/DecisionTreeClassifier/FeaturesCsvFile/featuresfile.csv")
df = pd.read_csv(my_file)
df.head()
df.shape#(no of rows, no of columns)
df['color'] = Series([(0 if x == "walking" else 1) for x in df['Label']])
my_color_map = ListedColormap(['skyblue','coral'],'mycolormap')
# 0 -> skyblue -> walking
# 1 -> coral -> running
df_unique = df.drop_duplicates(subset=['User', 'Timestamp'])
df_unique.head()
df_unique.shape
X_train = df_unique.values[:,2:45]
Y_train = df_unique.values[:,45]
test_file = Path("/Users/bharu/CS690-PROJECTS/ActivityAnalyzer/activity_analyzer/DecisionTreeClassifier/FeaturesCsvFile/featuresfile_10.csv")
df_test = pd.read_csv(test_file)
df_test.head()
df_test.shape#(no of rows, no of columns)
df_test['color'] = Series([(0 if x == "walking" else 1) for x in df_test['Label']])
df_unique_test = df_test.drop_duplicates(subset=['User', 'Timestamp'])
df_unique_test.head()
df_unique_test.shape
#Predicting using test data
#taking size of test data 10% of training data
test_small = df_unique_test.iloc[sample(range(len(df_unique_test)), 40), :]
X_test_small = test_small.values[:,2:45]
Y_test_small = test_small.values[:,45]
df_gini = DecisionTreeClassifier(criterion = 'gini')
df_gini.fit(X_train, Y_train)
#Predicting using test data
Y_predict_gini = df_gini.predict(X_test_small)
#Calculating accuracy score
score = accuracy_score(Y_test_small,Y_predict_gini)
score
#Predicting using test data
Y_predict_gini_probas = df_gini.predict_proba(X_test_small)
print (Y_predict_gini_probas[:,0])
print (Y_predict_gini_probas[:,1])
print(len(Y_predict_gini_probas))
import numpy as np
from sklearn import metrics
import matplotlib.pyplot as plt
def plot_roc_curve(Y_predict_gini,Y_test,name_graph):
num_labels = []
for i in range(0,len(Y_test)):
if Y_test[i] == "walking":
num_labels.append(0)
else:
num_labels.append(1)
labels = np.array(num_labels)
fpr, tpr, thresholds = metrics.roc_curve(labels,Y_predict_gini)
roc_auc = metrics.auc(fpr, tpr)
plt.title('Area under ROC Curve')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.2f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.savefig('./../Data-Visualization/images/' + name_graph +'.png',dpi=1000)
plot_roc_curve(Y_predict_gini_probas[:,0],Y_test_small,"DecisionTree_ROC_using_predict_proba")
df_3_10 = pd.concat([df_unique,df_unique_test])
df_3_10.shape
X = df_3_10.values[:,2:45]
y = df_3_10.values[:,45]
X_train,X_test,Y_train,Y_test = train_test_split(X,y,test_size=0.5)
df_gini.fit(X_train, Y_train)
#Predicting using test data
Y_predict_gini_3_10 = df_gini.predict(X_test)
#Calculating accuracy score
score = accuracy_score(Y_test,Y_predict_gini_3_10)
score
from sklearn.model_selection import StratifiedKFold
cv = StratifiedKFold(n_splits=10)
j = 0
for train, test in cv.split(X, y):
probas_ = df_gini.fit(X[train], y[train]).predict_proba(X[test])
num_labels = []
for i in range(0,len(y[test])):
if y[test][i] == "walking":
num_labels.append(0)
else:
num_labels.append(1)
labels = np.array(num_labels)
# Compute ROC curve and area the curve
fpr, tpr, thresholds = metrics.roc_curve(labels, probas_[:, 0])
roc_auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr, lw=1, alpha=0.3,
label='ROC fold %d (AUC = %0.2f)' % (j, roc_auc))
j += 1
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',label='Luck', alpha=.8)
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
from sklearn.model_selection import StratifiedKFold
cv = StratifiedKFold(n_splits=20)
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
j = 0
for train, test in cv.split(X, y):
probas_ = df_gini.fit(X[train], y[train]).predict_proba(X[test])
num_labels = []
for i in range(0,len(y[test])):
if y[test][i] == "walking":
num_labels.append(0)
else:
num_labels.append(1)
labels = np.array(num_labels)
# Compute ROC curve and area the curve
fpr, tpr, thresholds = metrics.roc_curve(labels, probas_[:, 0])
tprs.append(np.interp(mean_fpr, fpr, tpr))
tprs[-1][0] = 0.0
roc_auc = metrics.auc(fpr, tpr)
aucs.append(roc_auc)
plt.plot(fpr, tpr, lw=1, alpha=0.3,
label='ROC fold %d (AUC = %0.2f)' % (j, roc_auc))
j += 1
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
mean_auc = metrics.auc(mean_fpr, mean_tpr)
std_auc = np.std(aucs)
plt.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
```
##### Copyright 2020 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Training & evaluation with the built-in methods
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/keras/train_and_evaluate"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/keras-team/keras-io/blob/master/tf/train_and_evaluate.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/keras-team/keras-io/blob/master/guides/training_with_built_in_methods.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/keras-io/tf/train_and_evaluate.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
## Setup
```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
```
## Introduction
This guide covers training, evaluation, and prediction (inference) of models
when using built-in APIs for training & validation (such as `model.fit()`,
`model.evaluate()`, `model.predict()`).
If you are interested in leveraging `fit()` while specifying your
own training step function, see the guide
["customizing what happens in `fit()`"](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit/).
If you are interested in writing your own training & evaluation loops from
scratch, see the guide
["writing a training loop from scratch"](https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch/).
In general, whether you are using built-in loops or writing your own, model training &
evaluation works strictly in the same way across every kind of Keras model --
Sequential models, models built with the Functional API, and models written from
scratch via model subclassing.
This guide doesn't cover distributed training. For distributed training, see
our [guide to multi-gpu & distributed training](/guides/distributed_training/).
## API overview: a first end-to-end example
When passing data to the built-in training loops of a model, you should either use
**NumPy arrays** (if your data is small and fits in memory) or **`tf.data Dataset`
objects**. In the next few paragraphs, we'll use the MNIST dataset as NumPy arrays, in
order to demonstrate how to use optimizers, losses, and metrics.
Let's consider the following model (here, we build it with the Functional API, but it
could be a Sequential model or a subclassed model as well):
```
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
```
Here's what the typical end-to-end workflow looks like, consisting of:
- Training
- Validation on a holdout set generated from the original training data
- Evaluation on the test data
We'll use MNIST data for this example.
```
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
# Preprocess the data (these are NumPy arrays)
x_train = x_train.reshape(60000, 784).astype("float32") / 255
x_test = x_test.reshape(10000, 784).astype("float32") / 255
y_train = y_train.astype("float32")
y_test = y_test.astype("float32")
# Reserve 10,000 samples for validation
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
```
We specify the training configuration (optimizer, loss, metrics):
```
model.compile(
optimizer=keras.optimizers.RMSprop(), # Optimizer
# Loss function to minimize
loss=keras.losses.SparseCategoricalCrossentropy(),
# List of metrics to monitor
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
```
We call `fit()`, which will train the model by slicing the data into "batches" of size
"batch_size", and repeatedly iterating over the entire dataset for a given number of
"epochs".
```
print("Fit model on training data")
history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=2,
# We pass some validation for
# monitoring validation loss and metrics
# at the end of each epoch
validation_data=(x_val, y_val),
)
```
The returned "history" object holds a record of the loss values and metric values
during training:
```
history.history
```
We evaluate the model on the test data via `evaluate()`:
```
# Evaluate the model on the test data using `evaluate`
print("Evaluate on test data")
results = model.evaluate(x_test, y_test, batch_size=128)
print("test loss, test acc:", results)
# Generate predictions (probabilities -- the output of the last layer)
# on new data using `predict`
print("Generate predictions for 3 samples")
predictions = model.predict(x_test[:3])
print("predictions shape:", predictions.shape)
```
Now, let's review each piece of this workflow in detail.
## The `compile()` method: specifying a loss, metrics, and an optimizer
To train a model with `fit()`, you need to specify a loss function, an optimizer, and
optionally, some metrics to monitor.
You pass these to the model as arguments to the `compile()` method:
```
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
```
The `metrics` argument should be a list -- your model can have any number of metrics.
If your model has multiple outputs, you can specify different losses and metrics for
each output, and you can modulate the contribution of each output to the total loss of
the model. You will find more details about this in the section **"Passing data to
multi-input, multi-output models"**.
Note that if you're satisfied with the default settings, in many cases the optimizer,
loss, and metrics can be specified via string identifiers as a shortcut:
```
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
```
For later reuse, let's put our model definition and compile step in functions; we will
call them several times across different examples in this guide.
```
def get_uncompiled_model():
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, activation="softmax", name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
def get_compiled_model():
model = get_uncompiled_model()
model.compile(
optimizer="rmsprop",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
return model
```
### Many built-in optimizers, losses, and metrics are available
In general, you won't have to create your own losses, metrics, or
optimizers from scratch, because what you need is likely already part of the Keras API:
Optimizers:
- `SGD()` (with or without momentum)
- `RMSprop()`
- `Adam()`
- etc.
Losses:
- `MeanSquaredError()`
- `KLDivergence()`
- `CosineSimilarity()`
- etc.
Metrics:
- `AUC()`
- `Precision()`
- `Recall()`
- etc.
### Custom losses
There are two ways to provide custom losses with Keras. The first example creates a
function that accepts inputs `y_true` and `y_pred`. The following example shows a loss
function that computes the mean squared error between the real data and the
predictions:
```
def custom_mean_squared_error(y_true, y_pred):
return tf.math.reduce_mean(tf.square(y_true - y_pred))
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=custom_mean_squared_error)
# We need to one-hot encode the labels to use MSE
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
```
If you need a loss function that takes in parameters besides `y_true` and `y_pred`, you
can subclass the `tf.keras.losses.Loss` class and implement the following two methods:
- `__init__(self)`: accept parameters to pass during the call of your loss function
- `call(self, y_true, y_pred)`: use the targets (y_true) and the model predictions
(y_pred) to compute the model's loss
Let's say you want to use mean squared error, but with an added term that
will de-incentivize prediction values far from 0.5 (we assume that the categorical
targets are one-hot encoded and take values between 0 and 1). This
creates an incentive for the model not to be too confident, which may help
reduce overfitting (we won't know if it works until we try!).
Here's how you would do it:
```
class CustomMSE(keras.losses.Loss):
def __init__(self, regularization_factor=0.1, name="custom_mse"):
super().__init__(name=name)
self.regularization_factor = regularization_factor
def call(self, y_true, y_pred):
mse = tf.math.reduce_mean(tf.square(y_true - y_pred))
reg = tf.math.reduce_mean(tf.square(0.5 - y_pred))
return mse + reg * self.regularization_factor
model = get_uncompiled_model()
model.compile(optimizer=keras.optimizers.Adam(), loss=CustomMSE())
y_train_one_hot = tf.one_hot(y_train, depth=10)
model.fit(x_train, y_train_one_hot, batch_size=64, epochs=1)
```
### Custom metrics
If you need a metric that isn't part of the API, you can easily create custom metrics
by subclassing the `tf.keras.metrics.Metric` class. You will need to implement 4
methods:
- `__init__(self)`, in which you will create state variables for your metric.
- `update_state(self, y_true, y_pred, sample_weight=None)`, which uses the targets
y_true and the model predictions y_pred to update the state variables.
- `result(self)`, which uses the state variables to compute the final results.
- `reset_states(self)`, which reinitializes the state of the metric.
State update and results computation are kept separate (in `update_state()` and
`result()`, respectively) because in some cases, results computation might be very
expensive, and would only be done periodically.
Here's a simple example showing how to implement a `CategoricalTruePositives` metric,
that counts how many samples were correctly classified as belonging to a given class:
```
class CategoricalTruePositives(keras.metrics.Metric):
def __init__(self, name="categorical_true_positives", **kwargs):
super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
self.true_positives = self.add_weight(name="ctp", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.reshape(tf.argmax(y_pred, axis=1), shape=(-1, 1))
values = tf.cast(y_true, "int32") == tf.cast(y_pred, "int32")
values = tf.cast(values, "float32")
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, "float32")
values = tf.multiply(values, sample_weight)
self.true_positives.assign_add(tf.reduce_sum(values))
def result(self):
return self.true_positives
def reset_states(self):
# The state of the metric will be reset at the start of each epoch.
self.true_positives.assign(0.0)
model = get_uncompiled_model()
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[CategoricalTruePositives()],
)
model.fit(x_train, y_train, batch_size=64, epochs=3)
```
### Handling losses and metrics that don't fit the standard signature
The overwhelming majority of losses and metrics can be computed from `y_true` and
`y_pred`, where `y_pred` is an output of your model. But not all of them. For
instance, a regularization loss may only require the activation of a layer (there are
no targets in this case), and this activation may not be a model output.
In such cases, you can call `self.add_loss(loss_value)` from inside the call method of
a custom layer. Losses added in this way get added to the "main" loss during training
(the one passed to `compile()`). Here's a simple example that adds activity
regularization (note that activity regularization is built into all Keras layers --
this layer is just for the sake of providing a concrete example):
```
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# The displayed loss will be much higher than before
# due to the regularization component.
model.fit(x_train, y_train, batch_size=64, epochs=1)
```
You can do the same for logging metric values, using `add_metric()`:
```
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
# The `aggregation` argument defines
# how to aggregate the per-batch values
# over each epoch:
# in this case we simply average them.
self.add_metric(
keras.backend.std(inputs), name="std_of_activation", aggregation="mean"
)
return inputs # Pass-through layer.
inputs = keras.Input(shape=(784,), name="digits")
x = layers.Dense(64, activation="relu", name="dense_1")(inputs)
# Insert std logging as a layer.
x = MetricLoggingLayer()(x)
x = layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = layers.Dense(10, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.RMSprop(learning_rate=1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
```
In the [Functional API](https://www.tensorflow.org/guide/keras/functional/),
you can also call `model.add_loss(loss_tensor)`,
or `model.add_metric(metric_tensor, name, aggregation)`.
Here's a simple example:
```
inputs = keras.Input(shape=(784,), name="digits")
x1 = layers.Dense(64, activation="relu", name="dense_1")(inputs)
x2 = layers.Dense(64, activation="relu", name="dense_2")(x1)
outputs = layers.Dense(10, name="predictions")(x2)
model = keras.Model(inputs=inputs, outputs=outputs)
model.add_loss(tf.reduce_sum(x1) * 0.1)
model.add_metric(keras.backend.std(x1), name="std_of_activation", aggregation="mean")
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(x_train, y_train, batch_size=64, epochs=1)
```
Note that when you pass losses via `add_loss()`, it becomes possible to call
`compile()` without a loss function, since the model already has a loss to minimize.
Consider the following `LogisticEndpoint` layer: it takes as inputs
targets & logits, and it tracks a crossentropy loss via `add_loss()`. It also
tracks classification accuracy via `add_metric()`.
```
class LogisticEndpoint(keras.layers.Layer):
def __init__(self, name=None):
super(LogisticEndpoint, self).__init__(name=name)
self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)
self.accuracy_fn = keras.metrics.BinaryAccuracy()
def call(self, targets, logits, sample_weights=None):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
loss = self.loss_fn(targets, logits, sample_weights)
self.add_loss(loss)
# Log accuracy as a metric and add it
# to the layer using `self.add_metric()`.
acc = self.accuracy_fn(targets, logits, sample_weights)
self.add_metric(acc, name="accuracy")
# Return the inference-time prediction tensor (for `.predict()`).
return tf.nn.softmax(logits)
```
You can use it in a model with two inputs (input data & targets), compiled without a
`loss` argument, like this:
```
import numpy as np
inputs = keras.Input(shape=(3,), name="inputs")
targets = keras.Input(shape=(10,), name="targets")
logits = keras.layers.Dense(10)(inputs)
predictions = LogisticEndpoint(name="predictions")(logits, targets)
model = keras.Model(inputs=[inputs, targets], outputs=predictions)
model.compile(optimizer="adam") # No loss argument!
data = {
"inputs": np.random.random((3, 3)),
"targets": np.random.random((3, 10)),
}
model.fit(data)
```
For more information about training multi-input models, see the section **Passing data
to multi-input, multi-output models**.
### Automatically setting apart a validation holdout set
In the first end-to-end example you saw, we used the `validation_data` argument to pass
a tuple of NumPy arrays `(x_val, y_val)` to the model for evaluating a validation loss
and validation metrics at the end of each epoch.
Here's another option: the argument `validation_split` allows you to automatically
reserve part of your training data for validation. The argument value represents the
fraction of the data to be reserved for validation, so it should be set to a number
higher than 0 and lower than 1. For instance, `validation_split=0.2` means "use 20% of
the data for validation", and `validation_split=0.6` means "use 60% of the data for
validation".
Validation is computed by taking the last x% of the samples in the arrays
received by the `fit()` call, before any shuffling.
Note that you can only use `validation_split` when training with NumPy data.
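The "last x%" rule can be sketched with plain NumPy (the array below is a stand-in; this mirrors the slicing `fit()` performs, not Keras source code):

```python
import numpy as np

x = np.arange(10)  # stand-in for 10 training samples, in original order
validation_split = 0.2

# fit() holds out the LAST fraction of the arrays, before any shuffling:
split_at = int(len(x) * (1 - validation_split))
x_train_part, x_val_part = x[:split_at], x[split_at:]
print(x_val_part)  # [8 9] -- always the tail of the data
```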
```
model = get_compiled_model()
model.fit(x_train, y_train, batch_size=64, validation_split=0.2, epochs=1)
```
## Training & evaluation from tf.data Datasets
In the past few paragraphs, you've seen how to handle losses, metrics, and optimizers,
and you've seen how to use the `validation_data` and `validation_split` arguments in
fit, when your data is passed as NumPy arrays.
Let's now take a look at the case where your data comes in the form of a
`tf.data.Dataset` object.
The `tf.data` API is a set of utilities in TensorFlow 2.0 for loading and preprocessing
data in a way that's fast and scalable.
For a complete guide about creating `Datasets`, see the
[tf.data documentation](https://www.tensorflow.org/guide/data).
You can pass a `Dataset` instance directly to the methods `fit()`, `evaluate()`, and
`predict()`:
```
model = get_compiled_model()
# First, let's create a training Dataset instance.
# For the sake of our example, we'll use the same MNIST data as before.
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Now we get a test dataset.
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# Since the dataset already takes care of batching,
# we don't pass a `batch_size` argument.
model.fit(train_dataset, epochs=3)
# You can also evaluate or predict on a dataset.
print("Evaluate")
result = model.evaluate(test_dataset)
dict(zip(model.metrics_names, result))
```
Note that the Dataset is reset at the end of each epoch, so it can be reused for the
next epoch.
If you want to run training only on a specific number of batches from this Dataset, you
can pass the `steps_per_epoch` argument, which specifies how many training steps the
model should run using this Dataset before moving on to the next epoch.
If you do this, the dataset is not reset at the end of each epoch; instead, we just keep
drawing the next batches. The dataset will eventually run out of data (unless it is an
infinitely-looping dataset).
```
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Only use 100 batches per epoch (that's 64 * 100 samples)
model.fit(train_dataset, epochs=3, steps_per_epoch=100)
```
### Using a validation dataset
You can pass a `Dataset` instance as the `validation_data` argument in `fit()`:
```
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=1, validation_data=val_dataset)
```
At the end of each epoch, the model will iterate over the validation dataset and
compute the validation loss and validation metrics.
If you want to run validation only on a specific number of batches from this dataset,
you can pass the `validation_steps` argument, which specifies how many validation
steps the model should run with the validation dataset before interrupting validation
and moving on to the next epoch:
```
model = get_compiled_model()
# Prepare the training dataset
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
# Prepare the validation dataset
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(
train_dataset,
epochs=1,
# Only run validation using the first 10 batches of the dataset
# using the `validation_steps` argument
validation_data=val_dataset,
validation_steps=10,
)
```
Note that the validation dataset will be reset after each use (so that you will always
be evaluating on the same samples from epoch to epoch).
The argument `validation_split` (generating a holdout set from the training data) is
not supported when training from `Dataset` objects, since this feature requires the
ability to index the samples of the datasets, which is not possible in general with
the `Dataset` API.
## Other input formats supported
Besides NumPy arrays, eager tensors, and TensorFlow `Datasets`, it's possible to train
a Keras model using Pandas dataframes, or from Python generators that yield batches of
data & labels.
In particular, the `keras.utils.Sequence` class offers a simple interface to build
Python data generators that are multiprocessing-aware and can be shuffled.
In general, we recommend that you use:
- NumPy input data if your data is small and fits in memory
- `Dataset` objects if you have large datasets and you need to do distributed training
- `Sequence` objects if you have large datasets and you need to do a lot of custom
Python-side processing that cannot be done in TensorFlow (e.g. if you rely on external libraries
for data loading or preprocessing).
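As a minimal sketch of the Pandas option (the DataFrame and column names below are made up for illustration), a DataFrame passed to `fit()` is consumed like a NumPy array:

```python
import numpy as np
import pandas as pd

# Hypothetical tabular data: two feature columns and an integer label.
df = pd.DataFrame({
    "x0": np.random.rand(8),
    "x1": np.random.rand(8),
    "label": np.random.randint(0, 2, size=8),
})

# Passing the DataFrame (or its columns) to `fit()` is equivalent to
# passing the underlying arrays:
features = df[["x0", "x1"]].to_numpy()  # shape (8, 2)
labels = df["label"].to_numpy()         # shape (8,)
# model.fit(features, labels, epochs=1)
# model.fit(df[["x0", "x1"]], df["label"], epochs=1)  # same data
```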
## Using a `keras.utils.Sequence` object as input
`keras.utils.Sequence` is a utility that you can subclass to obtain a Python generator with
two important properties:
- It works well with multiprocessing.
- It can be shuffled (e.g. when passing `shuffle=True` in `fit()`).
A `Sequence` must implement two methods:
- `__getitem__`
- `__len__`
The method `__getitem__` should return a complete batch.
If you want to modify your dataset between epochs, you may implement `on_epoch_end`.
Here's a quick example:
```python
from skimage.io import imread
from skimage.transform import resize
import numpy as np
# Here, `filenames` is a list of paths to the images
# and `labels` are the associated labels.
class CIFAR10Sequence(Sequence):
def __init__(self, filenames, labels, batch_size):
self.filenames, self.labels = filenames, labels
self.batch_size = batch_size
def __len__(self):
return int(np.ceil(len(self.filenames) / float(self.batch_size)))
def __getitem__(self, idx):
batch_x = self.filenames[idx * self.batch_size:(idx + 1) * self.batch_size]
batch_y = self.labels[idx * self.batch_size:(idx + 1) * self.batch_size]
return np.array([
resize(imread(filename), (200, 200))
for filename in batch_x]), np.array(batch_y)
sequence = CIFAR10Sequence(filenames, labels, batch_size)
model.fit(sequence, epochs=10)
```
## Using sample weighting and class weighting
With the default settings the weight of a sample is decided by its frequency
in the dataset. There are two methods to weight the data, independent of
sample frequency:
* Class weights
* Sample weights
### Class weights
This is set by passing a dictionary to the `class_weight` argument to
`Model.fit()`. This dictionary maps class indices to the weight that should
be used for samples belonging to this class.
This can be used to balance classes without resampling, or to train a
model that gives more importance to a particular class.
For instance, if class "0" is half as represented as class "1" in your data,
you could use `Model.fit(..., class_weight={0: 1., 1: 0.5})`.
Here's a NumPy example where we use class weights or sample weights to
give more importance to the correct classification of class #5 (which
is the digit "5" in the MNIST dataset).
```
import numpy as np
class_weight = {
0: 1.0,
1: 1.0,
2: 1.0,
3: 1.0,
4: 1.0,
# Set weight "2" for class "5",
# making this class 2x more important
5: 2.0,
6: 1.0,
7: 1.0,
8: 1.0,
9: 1.0,
}
print("Fit with class weight")
model = get_compiled_model()
model.fit(x_train, y_train, class_weight=class_weight, batch_size=64, epochs=1)
```
### Sample weights
For fine-grained control, or if you are not building a classifier,
you can use "sample weights".
- When training from NumPy data: Pass the `sample_weight`
argument to `Model.fit()`.
- When training from `tf.data` or any other sort of iterator:
Yield `(input_batch, label_batch, sample_weight_batch)` tuples.
A "sample weights" array is an array of numbers that specify how much weight
each sample in a batch should have in computing the total loss. It is commonly
used in imbalanced classification problems (the idea being to give more weight
to rarely-seen classes).
When the weights used are ones and zeros, the array can be used as a *mask* for
the loss function (entirely discarding the contribution of certain samples to
the total loss).
```
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
print("Fit with sample weight")
model = get_compiled_model()
model.fit(x_train, y_train, sample_weight=sample_weight, batch_size=64, epochs=1)
```
Here's a matching `Dataset` example:
```
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
# Create a Dataset that includes sample weights
# (3rd element in the return tuple).
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train, sample_weight))
# Shuffle and slice the dataset.
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model = get_compiled_model()
model.fit(train_dataset, epochs=1)
```
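The ones-and-zeros masking mentioned above can be illustrated without a model. This numpy-only sketch assumes the default sum-over-batch-size loss reduction:

```python
import numpy as np

per_sample_loss = np.array([0.2, 1.5, 0.7, 3.0])  # losses for a batch of 4
sample_weight = np.array([1.0, 0.0, 1.0, 0.0])    # zeros mask samples 1 and 3

# Masked samples contribute nothing to the reduced loss:
weighted_loss = np.sum(per_sample_loss * sample_weight) / len(per_sample_loss)
print(weighted_loss)  # ~0.225: only the 0.2 and 0.7 losses count
```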
## Passing data to multi-input, multi-output models
In the previous examples, we were considering a model with a single input (a tensor of
shape `(784,)`) and a single output (a prediction tensor of shape `(10,)`). But what
about models that have multiple inputs or outputs?
Consider the following model, which has an image input of shape `(32, 32, 3)` (that's
`(height, width, channels)`) and a timeseries input of shape `(None, 10)` (that's
`(timesteps, features)`). Our model will have two outputs computed from the
combination of these inputs: a "score" (of shape `(1,)`) and a probability
distribution over five classes (of shape `(5,)`).
```
image_input = keras.Input(shape=(32, 32, 3), name="img_input")
timeseries_input = keras.Input(shape=(None, 10), name="ts_input")
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name="score_output")(x)
class_output = layers.Dense(5, activation="softmax", name="class_output")(x)
model = keras.Model(
inputs=[image_input, timeseries_input], outputs=[score_output, class_output]
)
```
Let's plot this model, so you can clearly see what we're doing here (note that the
shapes shown in the plot are batch shapes, rather than per-sample shapes).
```
keras.utils.plot_model(model, "multi_input_and_output_model.png", show_shapes=True)
```
At compilation time, we can specify different losses to different outputs, by passing
the loss functions as a list:
```
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
```
If we only passed a single loss function to the model, the same loss function would be
applied to every output (which is not appropriate here).
Likewise for metrics:
```
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
metrics=[
[
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
[keras.metrics.CategoricalAccuracy()],
],
)
```
Since we gave names to our output layers, we could also specify per-output losses and
metrics via a dict:
```
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
)
```
We recommend the use of explicit names and dicts if you have more than 2 outputs.
It's possible to give different weights to different output-specific losses (for
instance, one might wish to privilege the "score" loss in our example, by giving it 2x
the importance of the class loss), using the `loss_weights` argument:
```
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={
"score_output": keras.losses.MeanSquaredError(),
"class_output": keras.losses.CategoricalCrossentropy(),
},
metrics={
"score_output": [
keras.metrics.MeanAbsolutePercentageError(),
keras.metrics.MeanAbsoluteError(),
],
"class_output": [keras.metrics.CategoricalAccuracy()],
},
loss_weights={"score_output": 2.0, "class_output": 1.0},
)
```
You could also choose not to compute a loss for certain outputs, if these outputs are
meant for prediction but not for training:
```
# List loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()],
)
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={"class_output": keras.losses.CategoricalCrossentropy()},
)
```
Passing data to a multi-input or multi-output model in `fit()` works in a similar way to
specifying a loss function in `compile()`: you can pass **lists of NumPy arrays** (with
1:1 mapping to the outputs that received a loss function) or **dicts mapping output
names to NumPy arrays**.
```
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(), keras.losses.CategoricalCrossentropy()],
)
# Generate dummy NumPy data
img_data = np.random.random_sample(size=(100, 32, 32, 3))
ts_data = np.random.random_sample(size=(100, 20, 10))
score_targets = np.random.random_sample(size=(100, 1))
class_targets = np.random.random_sample(size=(100, 5))
# Fit on lists
model.fit([img_data, ts_data], [score_targets, class_targets], batch_size=32, epochs=1)
# Alternatively, fit on dicts
model.fit(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
batch_size=32,
epochs=1,
)
```
Here's the `Dataset` use case: similarly to what we did for NumPy arrays, the `Dataset`
should return a tuple of dicts.
```
train_dataset = tf.data.Dataset.from_tensor_slices(
(
{"img_input": img_data, "ts_input": ts_data},
{"score_output": score_targets, "class_output": class_targets},
)
)
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
model.fit(train_dataset, epochs=1)
```
## Using callbacks
Callbacks in Keras are objects that are called at different points during training (at
the start of an epoch, at the end of a batch, at the end of an epoch, etc.) and which
can be used to implement behaviors such as:
- Doing validation at different points during training (beyond the built-in per-epoch
validation)
- Checkpointing the model at regular intervals or when it exceeds a certain accuracy
threshold
- Changing the learning rate of the model when training seems to be plateauing
- Doing fine-tuning of the top layers when training seems to be plateauing
- Sending email or instant message notifications when training ends or when a certain
performance threshold is exceeded
- Etc.
Callbacks can be passed as a list to your call to `fit()`:
```
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor="val_loss",
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1,
)
]
model.fit(
x_train,
y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2,
)
```
### Many built-in callbacks are available
- `ModelCheckpoint`: Periodically save the model.
- `EarlyStopping`: Stop training when the validation metrics have stopped improving.
- `TensorBoard`: periodically write model logs that can be visualized in
[TensorBoard](https://www.tensorflow.org/tensorboard) (more details in the section
"Visualization").
- `CSVLogger`: streams loss and metrics data to a CSV file.
- etc.
See the [callbacks documentation](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/) for the complete list.
### Writing your own callback
You can create a custom callback by extending the base class
`keras.callbacks.Callback`. A callback has access to its associated model through the
class property `self.model`.
Make sure to read the
[complete guide to writing custom callbacks](https://www.tensorflow.org/guide/keras/custom_callback/).
Here's a simple example saving a list of per-batch loss values during training:
```
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self, logs):
self.per_batch_losses = []
def on_batch_end(self, batch, logs):
self.per_batch_losses.append(logs.get("loss"))
```
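As a quick check, here's a minimal, self-contained sketch that wires the callback into `fit()`. The toy model and random data are assumptions purely for illustration; note that recent Keras versions name the hook `on_train_batch_end` (`on_batch_end` is an older alias):

```python
import numpy as np
import keras

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.per_batch_losses = []

    def on_train_batch_end(self, batch, logs=None):
        self.per_batch_losses.append(float(logs.get("loss")))

# Toy data and model, purely for illustration.
x = np.random.random((256, 8)).astype("float32")
y = np.random.random((256, 1)).astype("float32")
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(optimizer="rmsprop", loss="mse")

history_cb = LossHistory()
model.fit(x, y, batch_size=64, epochs=1, callbacks=[history_cb], verbose=0)
print(len(history_cb.per_batch_losses))  # 256 samples / 64 per batch = 4 entries
```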
## Checkpointing models
When you're training a model on relatively large datasets, it's crucial to save
checkpoints of your model at frequent intervals.
The easiest way to achieve this is with the `ModelCheckpoint` callback:
```
model = get_compiled_model()
callbacks = [
keras.callbacks.ModelCheckpoint(
# Path where to save the model
# The two parameters below mean that we will overwrite
# the current checkpoint if and only if
# the `val_loss` score has improved.
# The saved model name will include the current epoch.
filepath="mymodel_{epoch}",
save_best_only=True, # Only save a model if `val_loss` has improved.
monitor="val_loss",
verbose=1,
)
]
model.fit(
x_train, y_train, epochs=2, batch_size=64, callbacks=callbacks, validation_split=0.2
)
```
The `ModelCheckpoint` callback can be used to implement fault-tolerance:
the ability to restart training from the last saved state of the model in case training
gets unexpectedly interrupted. Here's a basic example:
```
import os
# Prepare a directory to store all the checkpoints.
checkpoint_dir = "./ckpt"
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def make_or_restore_model():
# Either restore the latest model, or create a fresh one
# if there is no checkpoint available.
checkpoints = [checkpoint_dir + "/" + name for name in os.listdir(checkpoint_dir)]
if checkpoints:
latest_checkpoint = max(checkpoints, key=os.path.getctime)
print("Restoring from", latest_checkpoint)
return keras.models.load_model(latest_checkpoint)
print("Creating a new model")
return get_compiled_model()
model = make_or_restore_model()
callbacks = [
# This callback saves a SavedModel every 100 batches.
# We include the training loss in the saved model name.
keras.callbacks.ModelCheckpoint(
filepath=checkpoint_dir + "/ckpt-loss={loss:.2f}", save_freq=100
)
]
model.fit(x_train, y_train, epochs=1, callbacks=callbacks)
```
You can also write your own callback for saving and restoring models.
For a complete guide on serialization and saving, see the
[guide to saving and serializing Models](https://www.tensorflow.org/guide/keras/save_and_serialize/).
## Using learning rate schedules
A common pattern when training deep learning models is to gradually reduce the learning
rate as training progresses. This is generally known as "learning rate decay".
The learning rate decay schedule could be static (fixed in advance, as a function of the
current epoch or the current batch index), or dynamic (responding to the current
behavior of the model, in particular the validation loss).
### Passing a schedule to an optimizer
You can easily use a static learning rate decay schedule by passing a schedule object
as the `learning_rate` argument in your optimizer:
```
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
```
Several built-in schedules are available: `ExponentialDecay`, `PiecewiseConstantDecay`,
`PolynomialDecay`, and `InverseTimeDecay`.
### Using callbacks to implement a dynamic learning rate schedule
A dynamic learning rate schedule (for instance, decreasing the learning rate when the
validation loss is no longer improving) cannot be achieved with these schedule objects
since the optimizer does not have access to validation metrics.
However, callbacks do have access to all metrics, including validation metrics! You can
thus achieve this pattern by using a callback that modifies the current learning rate
on the optimizer. In fact, this is even built-in as the `ReduceLROnPlateau` callback.
## Visualizing loss and metrics during training
The best way to keep an eye on your model during training is to use
[TensorBoard](https://www.tensorflow.org/tensorboard), a browser-based application
that you can run locally and that provides you with:
- Live plots of the loss and metrics for training and evaluation
- (optionally) Visualizations of the histograms of your layer activations
- (optionally) 3D visualizations of the embedding spaces learned by your `Embedding`
layers
If you have installed TensorFlow with pip, you should be able to launch TensorBoard
from the command line:
```
tensorboard --logdir=/full_path_to_your_logs
```
### Using the TensorBoard callback
The easiest way to use TensorBoard with a Keras model and the fit method is the
`TensorBoard` callback.
In the simplest case, just specify where you want the callback to write logs, and
you're good to go:
```
keras.callbacks.TensorBoard(
log_dir="/full_path_to_your_logs",
histogram_freq=0, # How often to log histogram visualizations
embeddings_freq=0, # How often to log embedding visualizations
update_freq="epoch",  # How often to write logs (default: once per epoch)
)
```
For more information, see the
[documentation for the `TensorBoard` callback](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/tensorboard/).
<img src="https://nlp.johnsnowlabs.com/assets/images/logo.png" width="180" height="50" style="float: left;">
## Deep Learning NER
In the following example, we walk through training and prediction with an LSTM NER model. This annotator is implemented on top of TensorFlow.
This annotator takes a series of word embedding vectors and a training CoNLL dataset, plus a validation dataset. We include our own predefined TensorFlow graphs, but all layers are trained during the `fit()` stage.
DL NER computes several BI-LSTM layers in order to automatically generate entity extraction, and it leverages batch-based distributed calls to native TensorFlow libraries during prediction.
### Spark `2.4` and Spark NLP `2.0.1`
#### 1. Call necessary imports and set the resource folder path.
```
import os
import sys
sys.path.append('../../')
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from sparknlp.annotator import *
from sparknlp.common import *
from sparknlp.base import *
import time
import zipfile
# Set the location of the resource directory
resource_path= "../../../src/test/resources/"
```
#### 2. Download CoNLL 2003 data if not present
```
# Download CoNLL 2003 Dataset
import os
from pathlib import Path
import urllib.request
url = "https://github.com/patverga/torch-ner-nlp-from-scratch/raw/master/data/conll2003/"
file_train="eng.train"
file_testa= "eng.testa"
file_testb= "eng.testb"
# https://github.com/patverga/torch-ner-nlp-from-scratch/tree/master/data/conll2003
if not Path(file_train).is_file():
print("Downloading "+file_train)
urllib.request.urlretrieve(url+file_train, file_train)
if not Path(file_testa).is_file():
print("Downloading "+file_testa)
urllib.request.urlretrieve(url+file_testa, file_testa)
if not Path(file_testb).is_file():
print("Downloading "+file_testb)
urllib.request.urlretrieve(url+file_testb, file_testb)
```
#### 3. Download Glove embeddings and unzip, if not present
```
# Download Glove Word Embeddings
file = "glove.6B.zip"
if not Path("glove.6B.zip").is_file():
url = "http://nlp.stanford.edu/data/glove.6B.zip"
print("Start downoading Glove Word Embeddings. It will take some time, please wait...")
urllib.request.urlretrieve(url, "glove.6B.zip")
print("Downloading finished")
else:
print("Glove data present.")
if not Path("glove.6B.100d.txt").is_file():
zip_ref = zipfile.ZipFile(file, 'r')
zip_ref.extractall("./")
zip_ref.close()
```
#### 4. Create the spark session
```
spark = SparkSession.builder \
.appName("DL-NER")\
.master("local[*]")\
.config("spark.driver.memory","8G")\
.config("spark.jars.packages", "JohnSnowLabs:spark-nlp:2.0.1")\
.config("spark.kryoserializer.buffer.max", "500m")\
.getOrCreate()
```
#### 5. Load the CoNLL training dataset and cache it into memory
```
from sparknlp.training import CoNLL
conll = CoNLL(
documentCol="document",
sentenceCol="sentence",
tokenCol="token",
posCol="pos"
)
training_data = conll.readDataset(spark, './eng.train')
training_data.show()
```
#### 6. Create annotator components with appropriate params and in the right order. The finisher will output only NER. Put everything in a Pipeline
```
glove = WordEmbeddings()\
.setInputCols(["sentence", "token"])\
.setOutputCol("glove")\
.setEmbeddingsSource("/home/saif/Downloads/glove.6B.100d.txt", 100, 2)
nerTagger = NerDLApproach()\
.setInputCols(["sentence", "token", "glove"])\
.setLabelColumn("label")\
.setOutputCol("ner")\
.setMaxEpochs(1)\
.setRandomSeed(0)\
.setVerbose(0)
converter = NerConverter()\
.setInputCols(["sentence", "token", "ner"])\
.setOutputCol("ner_span")
finisher = Finisher() \
.setInputCols(["sentence", "token", "ner", "ner_span"]) \
.setIncludeMetadata(True)
ner_pipeline = Pipeline(
stages = [
glove,
nerTagger,
converter,
finisher
])
```
#### 7. Train the pipeline. (This will take some time)
```
start = time.time()
print("Start fitting")
ner_model = ner_pipeline.fit(training_data)
print("Fitting is ended")
print (time.time() - start)
```
#### 8. Let's predict with the model
```
document = DocumentAssembler()\
.setInputCol("text")\
.setOutputCol("document")
sentence = SentenceDetector()\
.setInputCols(['document'])\
.setOutputCol('sentence')
token = Tokenizer()\
.setInputCols(['sentence'])\
.setOutputCol('token')
prediction_pipeline = Pipeline(
stages = [
document,
sentence,
token,
ner_model
]
)
prediction_data = spark.createDataFrame([["Germany is a nice place"]]).toDF("text")
prediction_data.show()
prediction_model = prediction_pipeline.fit(prediction_data)
prediction_model.transform(prediction_data).show()
# We can be fast!
lp = LightPipeline(prediction_model)
result = lp.annotate("International Business Machines Corporation (IBM) is an American multinational information technology company headquartered in Armonk.")
list(zip(result['token'], result['ner']))
```
#### 9. Save both pipeline and single model once trained, on disk
```
prediction_pipeline.write().overwrite().save("./prediction_dl_pipeline")
prediction_model.write().overwrite().save("./prediction_dl_model")
```
#### 10. Load both again, deserialize from disk
```
from pyspark.ml import PipelineModel, Pipeline
loaded_prediction_pipeline = Pipeline.read().load("./prediction_dl_pipeline")
loaded_prediction_model = PipelineModel.read().load("./prediction_dl_model")
loaded_prediction_model.transform(prediction_data).show()
```
# Naive Bayes Classifier (Self Made)
### 1. Importing Libraries
```
import numpy as np
import matplotlib.pyplot as plt
import os
import pandas as pd
from sklearn.metrics import r2_score
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn import preprocessing
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from collections import defaultdict
```
### 2. Data Preprocessing
```
pima = pd.read_csv("diabetes.csv")
pima.head()
pima.info()
#normalizing the dataset
scalar = preprocessing.MinMaxScaler()
pima = scalar.fit_transform(pima)
#split dataset in features and target variable
X = pima[:,:8]
y = pima[:, 8]
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.3, random_state=42)
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)
```
### 3. Required Functions
```
def normal_distr(x, mean, dev):
#finding the value through the normal distribution formula
return (1/(np.sqrt(2 * np.pi) * dev)) * (np.exp(- (((x - mean) / dev) ** 2) / 2))
def finding_mean(X):
return np.mean(X)
def finding_std_dev(X):
return np.std(X)
def train(X_train,Y_train):
labels = set(Y_train)
cnt_table = defaultdict(list)
for row in range(X_train.shape[0]):
for col in range(X_train.shape[1]):
cnt_table[(col, Y_train[row])].append(X_train[row][col])
lookup_list = defaultdict(list)
for item in cnt_table.items():
X_category = np.asarray(item[1])
lookup_list[(item[0][0], item[0][1])].append(finding_mean(X_category))
lookup_list[(item[0][0], item[0][1])].append(finding_std_dev(X_category))
return lookup_list
def pred(X_test, lookup_list):
Y_pred = []
for row in range(X_test.shape[0]):
prob_yes = 1
prob_no = 1
for col in range(X_test.shape[1]):
prob_yes = prob_yes * normal_distr(X_test[row][col], lookup_list[(col, 1)][0], lookup_list[(col, 1)][1])
prob_no = prob_no * normal_distr(X_test[row][col], lookup_list[(col, 0)][0], lookup_list[(col, 0)][1])
if(prob_yes >= prob_no):
Y_pred.append(1)
else:
Y_pred.append(0)
return np.asarray(Y_pred)
def score(Y_pred, Y_test):
correct_pred = np.sum(Y_pred == Y_test)
return correct_pred / Y_pred.shape[0]
def naive_bayes(X_train,Y_train, X_test, Y_test):
lookup_list = train(X_train, Y_train)
Y_pred = pred(X_test, lookup_list)
return score(Y_pred, Y_test)
accuracy = naive_bayes(X_train, Y_train, X_test, Y_test)
print("The accuracy of the model is : {0}".format(accuracy))
```
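As a sanity check (this comparison is an addition, not part of the original notebook), the same per-feature Gaussian likelihood model is available as scikit-learn's `GaussianNB`; a quick run on synthetic data shows it behaving as expected:

```python
# Cross-check sketch: scikit-learn's GaussianNB implements the same per-feature
# Gaussian likelihood as the hand-rolled classifier above. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=8, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
gnb = GaussianNB().fit(X_tr, y_tr)
sk_acc = gnb.score(X_te, y_te)
print("sklearn GaussianNB accuracy: {0:.3f}".format(sk_acc))
```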
```
import numpy as np
import pandas as pd
import pickle
import time
import itertools
import matplotlib
matplotlib.rcParams.update({'font.size': 17.5})
import matplotlib.pyplot as plt
%matplotlib inline
import sys
import os.path
sys.path.append( os.path.abspath(os.path.join( os.path.dirname('..') , os.path.pardir )) )
from FLAMEdb import *
from FLAMEbit import *
# data generation; tune tradeoff_param to generate different plots
d = data_generation_gradual_decrease_imbalance( 10000 , 10000 , 20 )
df = d[0]
holdout,_ = data_generation_gradual_decrease_imbalance( 10000 , 10000, 20 )
res = run_bit(df, holdout, range(20), [2]*20, tradeoff_param = 0.5)
def bubble_plot(res):
sizes = []
effects = []
for i in range(min(len(res),21)):
r = res[i]
if (r is None):
effects.append([0])
sizes.append([0])
continue
effects.append(list( r['effect'] ) )
sizes.append(list(r['size'] ) )
return sizes, effects
# plot percent of units matched, figure 4
matplotlib.rcParams.update({'font.size': 17.5})
#res = pickle.load(open('__thePickleFile__', 'rb'))[1]
ss, es = bubble_plot(res[1])
s = []
for i in ss:
s.append(np.sum(i)/float(20000))
pct = [sum(s[:i+1]) for i in range(len(s))]
plt.figure(figsize=(5,5))
plt.plot(pct, alpha = 0.6 , color = 'blue')
plt.xticks(range(len(ss)), [str(20-i) if i%5==0 else '' for i in range(20) ] )
plt.ylabel('% of units matched')
plt.xlabel('number of covariates remaining')
plt.ylim([0,1])
plt.tight_layout()
# plot the CATE on each level, figure 5
ss, es = bubble_plot(res[1])
plt.figure(figsize=(5,5))
for i in range(len(ss)):
plt.scatter([i]*len(es[i]), es[i], s = ss[i], alpha = 0.6 , color = 'blue')
plt.xticks(range(len(ss)), [str(20-i) if i%5==0 else '' for i in range(20) ] )
plt.ylabel('estimated treatment effect')
plt.xlabel('number of covariates remaining')
plt.ylim([8,14])
plt.tight_layout()
#plt.savefig('tradeoff01.png', dpi = 300)
# figure 6
units_matched = []
CATEs = []
for i in range(len(res[1])):
r = res[1][i]
units_matched.append( np.sum(r['size']) )
l = list(res[1][i]['effect'])
CATEs.append( l )
PCTs = []
for i in range(len(units_matched)):
PCTs.append( np.sum(units_matched[:i+1])/30000 )
for i in range(len(PCTs)):
CATE = CATEs[i]
if len(CATE) > 0:
plt.scatter( [PCTs[i]] * len(CATE), CATE, alpha = 0.6, color = 'b' )
plt.xlabel('% of Units Matched')
plt.ylabel('Estimated Treatment Effect')
#plt.ylim([8,14])
plt.xlim([-0.1,1])
plt.tight_layout()
```
```
from keras.applications.vgg19 import VGG19
from keras.models import Model
from keras.applications.vgg19 import preprocess_input
from keras.preprocessing import image
import numpy as np
import pickle
import os
val = []
# Read the annotation.txt file and store the data in a list, word by word
with open('/home/exla24/TB2018/val/annotation.txt','r') as f:
archivo = f.readlines()
archivo.sort()
for line in archivo:
for word in line.split():
val.append(word)
archivo_val = []
etiqueta_val = []
i = 0
for i in range (len(val)):
if (i%2 == 0): # Even-indexed words are filenames; odd-indexed words are labels
archivo_val.append(val[i])
else:
etiqueta_val.append(val[i])
# The filename and label lists now hold everything we need, in order. Next, build the dictionary
val_dict = dict()
j=0
for j in range (len(etiqueta_val)):
val_dict[j] = dict(archivos = archivo_val[j], etiquetas = etiqueta_val[j])
# Do the same for the train folder
train = []
# Read the annotation.txt file and store the data in a list, word by word
with open('/home/exla24/TB2018/train/annotation.txt','r') as f:
archivo1 = f.readlines()
archivo1.sort()
for line in archivo1:
for word in line.split():
train.append(word)
archivo_train = []
etiqueta_train = []
i = 0
for i in range (len(train)):
if (i%2 == 0): # Even-indexed words are filenames; odd-indexed words are labels
archivo_train.append(train[i])
else:
etiqueta_train.append(train[i])
# The filename and label lists now hold everything we need, in order. Next, build the dictionary
train_dict = dict()
j=0
for j in range (len(etiqueta_train)):
train_dict[j] = dict(archivos = archivo_train[j], etiquetas = etiqueta_train[j])
# Do the same for the test folder
test = []
# Read the annotation.txt file and store the data in a list, word by word
with open('/home/exla24/TB2018/test/annotation.txt','r') as f:
archivo2 = f.readlines()
archivo2.sort()
for line in archivo2:
for word in line.split():
test.append(word)
archivo_test = []
etiqueta_test = []
i = 0
for i in range (len(test)):
if (i%2 == 0): # Even-indexed words are filenames; odd-indexed words are labels
archivo_test.append(test[i])
else:
etiqueta_test.append(test[i])
# The filename and label lists now hold everything we need, in order. Next, build the dictionary
test_dict = dict()
j=0
for j in range (len(etiqueta_test)):
test_dict[j] = dict(archivos = archivo_test[j],etiquetas = etiqueta_test[j])
pickle.dump(val_dict, open("/home/jupyter/Pickles/Etiquetas/diccionario_validacion", "wb"))
pickle.dump(train_dict, open("/home/jupyter/Pickles/Etiquetas/diccionario_entrenamiento", "wb"))
pickle.dump(test_dict, open("/home/jupyter/Pickles/Etiquetas/diccionario_test", "wb"))
```
```
import arrow as arw
import matplotlib.pyplot as plt
import netCDF4 as nc
import numpy as np
import pandas as pd
import xarray as xr
from salishsea_tools import places, teos_tools
%matplotlib inline
hindcast_dataset = xr.open_dataset(
'https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV17-02')
nowcastv1_dataset = xr.open_dataset(
'https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn3DTracerFields1hV1')
nowcastv2_dataset = xr.open_dataset(
'https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn3DTracerFields1hV16-10')
def _prep_plot_data(xr_dataset, variable, place, weight1, weight2, poss1, poss2, start_day, end_day, factor=1):
time_slice = slice(start_day.date(), end_day.replace(days=+1).date(), 1)
grid_y, grid_x = places.PLACES[place]['NEMO grid ji']
var_result1 = (
xr_dataset[variable]
.sel(time=time_slice)
.isel(depth=poss1, gridX=grid_x, gridY=grid_y))
var_result2 = (
xr_dataset[variable]
.sel(time=time_slice)
.isel(depth=poss2, gridX=grid_x, gridY=grid_y))
return (weight1*var_result1+weight2*var_result2)*factor
def make_plot(use_title,
sal_obs,
hindcast_ts_2014, hindcast_ts_2015, hindcast_ts_2016, hindcast_ts_2017,
nowcastv2_ts_2016, nowcastv2_ts_2017,
nowcastv1_ts_2014, nowcastv1_ts_2015, nowcastv1_ts_2016):
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
sal_obs_hourly = sal_obs.resample('1H').mean()
nowcastv2_ts_2017.plot(ax=ax, color='b', label='')
sal_obs_hourly.plot(ax=ax, color='g', label='Observations')
hindcast_ts_2014.plot(ax=ax, color='r', label='Hindcast')
hindcast_ts_2015.plot(ax=ax, color='r', label='')
hindcast_ts_2016.plot(ax=ax, color='r', label='')
hindcast_ts_2017.plot(ax=ax, color='r', label='')
nowcastv2_ts_2016.plot(ax=ax, color='b', label='Nowcast-V2')
nowcastv1_ts_2014.plot(ax=ax, color='teal', label='Nowcast-V1')
nowcastv1_ts_2015.plot(ax=ax, color='teal', label='')
nowcastv1_ts_2016.plot(ax=ax, color='teal', label='')
ax.legend()
ax.set_ylabel('Reference Salinity (g/kg)')
ax.set_xlabel('Date (UTC)')
ax.set_title(use_title)
```
## Central Node ##
```
place = 'Central node'
poss1 = 34 -1
depth1 = 280
poss2 = poss1 + 1
depth2 = 307
depth = 294
weight1 = (depth2 - depth)/(depth2 - depth1)
weight2 = 1 - weight1
use_title = 'Salinity at VENUS Central'
observations_dataset = nc.Dataset('https://salishsea.eos.ubc.ca/erddap/tabledap/ubcONCSCVIPCTD15mV1')
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
sal_obs_hourly = sal_obs.resample('1H').mean()
sal_obs_hourly.plot(ax=ax, color='g', label='Observations')
ax.legend()
ax.set_ylabel('Reference Salinity (g/kg)')
ax.set_xlabel('Date (UTC)')
ax.set_title(use_title)
ax.set_xlim('20160601', '20160831')
ax.grid(which='both')
```
## These Cells need to be Run individually Rather than in a Function ##
```
sals = observations_dataset.variables['s.salinity']
times = observations_dataset.variables['s.time']
jd = nc.num2date(times[:], times.units)
units_sav = times.units
sal_obs = pd.Series(sals[:], index=jd)
sal_obs.plot()
# for cases without units (aka BBL)
sals = observations_dataset.variables['s.salinity']
times = observations_dataset.variables['s.time']
print(times)
#jd = nc.num2date(times[:], 'seconds since 1970-01-01T00:00:00Z')
sal_obs = pd.Series(sals[:], index=times)
print(place)
start_day = arw.get(hindcast_dataset.time_coverage_start)
end_day = arw.get('2014-12-31T23:59:59')
hindcast_ts_2014 = _prep_plot_data(hindcast_dataset, 'salinity', place, weight1, weight2, poss1, poss2, start_day, end_day)
print(place)
start_day = arw.get('2014-12-31T23:59:59')
end_day = arw.get('2015-12-31T23:59:59')
hindcast_ts_2015 = _prep_plot_data(hindcast_dataset, 'salinity', place, weight1, weight2, poss1, poss2, start_day, end_day)
print(place)
start_day = arw.get('2015-12-31T23:59:59')
end_day = arw.get('2016-12-31T23:59:59')
hindcast_ts_2016 = _prep_plot_data(hindcast_dataset, 'salinity', place, weight1, weight2, poss1, poss2, start_day, end_day)
print(place)
start_day = arw.get('2016-12-31T23:59:59')
end_day = arw.get(hindcast_dataset.time_coverage_end)
hindcast_ts_2017 = _prep_plot_data(hindcast_dataset, 'salinity', place, weight1, weight2, poss1, poss2, start_day, end_day)
print(place)
start_day = arw.get(nowcastv2_dataset.time_coverage_start)
end_day = arw.get('2016-12-31T23:59:59')
nowcastv2_ts_2016 = _prep_plot_data(nowcastv2_dataset, 'salinity', place, weight1, weight2, poss1, poss2, start_day, end_day)
print(place)
start_day = arw.get('2016-12-31T23:59:59')
end_day = arw.get(nowcastv2_dataset.time_coverage_end)
nowcastv2_ts_2017 = _prep_plot_data(nowcastv2_dataset, 'salinity', place, weight1, weight2, poss1, poss2, start_day, end_day)
print(place)
factor = teos_tools.PSU_TEOS
start_day = arw.get(nowcastv1_dataset.time_coverage_start)
end_day = arw.get('2014-12-31T23:59:59')
nowcastv1_ts_2014 = _prep_plot_data(nowcastv1_dataset, 'salinity', place, weight1, weight2, poss1, poss2, start_day, end_day, factor)
print(place)
start_day = arw.get('2014-12-31T23:59:59')
end_day = arw.get('2015-12-31T23:59:59')
nowcastv1_ts_2015 = _prep_plot_data(nowcastv1_dataset, 'salinity', place, weight1, weight2, poss1, poss2, start_day, end_day, factor)
print(place)
start_day = arw.get('2015-12-31T23:59:59')
end_day = arw.get('2016-12-31T23:59:59')
nowcastv1_ts_2016 = _prep_plot_data(nowcastv1_dataset, 'salinity', place, weight1, weight2, poss1, poss2, start_day, end_day, factor)
```
## East Node ##
```
place = 'East node'
poss1 = 29 -1
depth1 = 147
poss2 = poss1 + 1
depth2 = 173
depth = 164
weight1 = (depth2 - depth)/(depth2 - depth1)
weight2 = 1 - weight1
use_title = 'Salinity at VENUS East'
observations_dataset = nc.Dataset('https://salishsea.eos.ubc.ca/erddap/tabledap/ubcONCSEVIPCTD15mV1')
```
### run all the ts cells one at a time and then plot
```
make_plot(use_title,
sal_obs,
hindcast_ts_2014, hindcast_ts_2015, hindcast_ts_2016, hindcast_ts_2017,
nowcastv2_ts_2016, nowcastv2_ts_2017,
nowcastv1_ts_2014, nowcastv1_ts_2015, nowcastv1_ts_2016)
```
## Delta BBL Node ##
```
place = 'Delta BBL node'
poss1 = 28 -1
depth1 = 122
poss2 = poss1 + 1
depth2 = 147
depth = 143
weight1 = (depth2 - depth)/(depth2 - depth1)
weight2 = 1 - weight1
use_title = 'Salinity at VENUS Delta BBL'
observations_dataset = nc.Dataset('https://salishsea.eos.ubc.ca/erddap/tabledap/ubcONCLSBBLCTD15mV1.nc')#?time,salinity,temperature,temperature_std_dev,salinity_std_dev,salinity_sample_count,temperature_sample_count,latitude,longitude,depth')
```
### run all the ts cells one at a time and then plot
```
make_plot(use_title,
sal_obs,
hindcast_ts_2014, hindcast_ts_2015, hindcast_ts_2016, hindcast_ts_2017,
nowcastv2_ts_2016, nowcastv2_ts_2017,
nowcastv1_ts_2014, nowcastv1_ts_2015, nowcastv1_ts_2016)
print(place)
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
sal_obs_hourly = sal_obs.resample('1H').mean()
nowcastv2_ts_2017.plot(ax=ax, color='b', label='')
sal_obs_hourly.plot(ax=ax, color='k', label='Observations')
# hindcast_ts_2014.plot(ax=ax, color='g', label='Hindcast')
# hindcast_ts_2015.plot(ax=ax, color='g', label='')
# hindcast_ts_2016.plot(ax=ax, color='g', label='')
# hindcast_ts_2017.plot(ax=ax, color='g', label='')
nowcastv2_ts_2016.plot(ax=ax, color='b', label='Nowcast-V2')
nowcastv1_ts_2014.plot(ax=ax, color='teal', label='Nowcast-V1')
nowcastv1_ts_2015.plot(ax=ax, color='teal', label='')
nowcastv1_ts_2016.plot(ax=ax, color='teal', label='')
ax.legend()
ax.set_ylabel('Reference Salinity (g/kg)')
ax.set_xlabel('Date (UTC)')
ax.set_title(use_title)
```
## Delta DDL node ##
```
place = 'Delta DDL node'
poss1 = 27 -1
depth1 = 98
poss2 = poss1 + 1
depth2 = 122
depth = 107
weight1 = (depth2 - depth)/(depth2 - depth1)
weight2 = 1 - weight1
observations_dataset = xr.open_dataset('https://salishsea.eos.ubc.ca/erddap/tabledap/ubcONCUSDDLCTD15mV1')
observations_sal = observations_dataset['s.salinity']
observations_time = observations_dataset['s.time']
hindcast_ts = _prep_plot_data(hindcast_dataset, 'salinity', place, weight1, weight2, poss1, poss2)
nowcastv2_ts = _prep_plot_data(nowcastv2_dataset, 'salinity', place, weight1, weight2, poss1, poss2)
factor = teos_tools.PSU_TEOS
nowcastv1_ts = _prep_plot_data(nowcastv1_dataset, 'salinity', place, weight1, weight2, poss1, poss2, factor)
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
ax.plot(observations_time, observations_sal, 'g', label='Observations')
hindcast_ts.plot(ax=ax, color='r', label='Hindcast')
nowcastv2_ts.plot(ax=ax, label='Nowcast-V2')
nowcastv1_ts.plot(ax=ax, label='Nowcast-V1')
ax.legend()
ax.set_ylabel('Reference Salinity (g/kg)')
ax.set_xlabel('Date (UTC)')
ax.set_title('Salinity at VENUS Delta DDL')
```
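The depth weights in the cell above are a linear interpolation between the two model levels bracketing the instrument depth; a minimal check of that arithmetic (values copied from the cell):

```
# Linear interpolation between two model depth levels:
# value(depth) ~= weight1 * value(depth1) + weight2 * value(depth2)
depth1, depth2, depth = 98, 122, 107
weight1 = (depth2 - depth) / (depth2 - depth1)  # weight of the shallower level
weight2 = 1 - weight1                           # weight of the deeper level
# the weights sum to one, and the level closer to `depth` gets more weight
assert abs(weight1 + weight2 - 1.0) < 1e-12
```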
# Dense Sentiment Classifier
In this notebook, we build a dense neural net to classify IMDB movie reviews by their sentiment.
```
#load watermark
%load_ext watermark
%watermark -a 'Gopala KR' -u -d -v -p watermark,numpy,pandas,matplotlib,nltk,sklearn,tensorflow,theano,mxnet,chainer,seaborn,keras,tflearn,bokeh,gensim
```
#### Load dependencies
```
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.layers import Embedding # new!
from keras.callbacks import ModelCheckpoint # new!
import os # new!
from sklearn.metrics import roc_auc_score, roc_curve # new!
import pandas as pd
import matplotlib.pyplot as plt # new!
%matplotlib inline
```
#### Set hyperparameters
```
# output directory name:
output_dir = 'model_output/dense'
# training:
epochs = 4
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 5000 # as per Maas et al. (2011); may not be optimal
n_words_to_skip = 50 # ditto
max_review_length = 100
pad_type = trunc_type = 'pre'
# neural network architecture:
n_dense = 64
dropout = 0.5
```
#### Load data
For a given data set:
* the Keras text utilities [here](https://keras.io/preprocessing/text/) quickly preprocess natural language and convert it into an index
* the `keras.preprocessing.text.Tokenizer` class may do everything you need in one line:
* tokenize into words or characters
* `num_words`: maximum unique tokens
* filter out punctuation
* lower case
* convert words to an integer index
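The Tokenizer pipeline in the list above can be sketched in pure Python (a toy stand-in for illustration, not the Keras class; `imdb.load_data` below already returns integer indices):

```
import re
from collections import Counter

def fit_tokenizer(corpus, num_words):
    # lowercase, strip punctuation, split into word tokens
    tokens = []
    for text in corpus:
        tokens += re.findall(r"[a-z0-9']+", text.lower())
    # most common words get the smallest indices, starting at 1
    vocab = [w for w, _ in Counter(tokens).most_common(num_words)]
    return {w: i + 1 for i, w in enumerate(vocab)}

def texts_to_sequences(corpus, word_index):
    # convert each text to a list of integer indices, dropping unknown words
    return [[word_index[w]
             for w in re.findall(r"[a-z0-9']+", t.lower()) if w in word_index]
            for t in corpus]

word_index = fit_tokenizer(['The movie was great.', 'The movie was terrible!'],
                           num_words=100)
seqs = texts_to_sequences(['The movie was great.'], word_index)
```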
```
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words, skip_top=n_words_to_skip)
x_train[0:6] # 0 reserved for padding; 1 would be starting character; 2 is unknown; 3 is most common word, etc.
for x in x_train[0:6]:
print(len(x))
y_train[0:6]
len(x_train), len(x_valid)
```
#### Restoring words from index
```
word_index = keras.datasets.imdb.get_word_index()
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["PAD"] = 0
word_index["START"] = 1
word_index["UNK"] = 2
word_index
index_word = {v:k for k,v in word_index.items()}
x_train[0]
' '.join(index_word[id] for id in x_train[0])
(all_x_train,_),(all_x_valid,_) = imdb.load_data()
' '.join(index_word[id] for id in all_x_train[0])
```
#### Preprocess data
```
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_train[0:6]
for x in x_train[0:6]:
print(len(x))
' '.join(index_word[id] for id in x_train[0])
' '.join(index_word[id] for id in x_train[5])
```
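With `pad_type = trunc_type = 'pre'`, `pad_sequences` keeps the *last* `maxlen` tokens of long reviews and left-pads short ones; a pure-Python sketch of that behavior:

```
def pad_pre(seq, maxlen, value=0):
    # 'pre' truncation keeps the LAST maxlen tokens
    seq = seq[-maxlen:]
    # 'pre' padding prepends the pad value until the length is maxlen
    return [value] * (maxlen - len(seq)) + list(seq)

assert pad_pre([1, 2, 3], 5) == [0, 0, 1, 2, 3]        # short: left-padded
assert pad_pre([1, 2, 3, 4, 5, 6], 5) == [2, 3, 4, 5, 6]  # long: front truncated
```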
#### Design neural network architecture
```
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(Flatten())
model.add(Dense(n_dense, activation='relu'))
model.add(Dropout(dropout))
# model.add(Dense(n_dense, activation='relu'))
# model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid')) # mathematically equivalent to softmax with two classes
model.summary() # so many parameters!
# embedding layer dimensions and parameters:
n_dim, n_unique_words, n_dim*n_unique_words
# ...flatten:
max_review_length, n_dim, n_dim*max_review_length
# ...dense:
n_dense, n_dim*max_review_length*n_dense + n_dense # weights + biases
# ...and output:
n_dense + 1
```
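The layer-by-layer parameter counts sketched in the cell above can be checked directly, using the hyperparameters set earlier:

```
n_dim, n_unique_words = 64, 5000
max_review_length, n_dense = 100, 64

embedding_params = n_dim * n_unique_words        # one n_dim vector per vocab word
flatten_len = n_dim * max_review_length          # inputs to the dense layer (no params)
dense_params = flatten_len * n_dense + n_dense   # weights + biases
output_params = n_dense + 1                      # one weight per dense unit + bias

total = embedding_params + dense_params + output_params
assert total == 729_729  # matches model.summary() for this architecture
```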
#### Configure model
```
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
```
#### Train!
```
# 84.7% validation accuracy in epoch 2
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
```
#### Evaluate
```
model.load_weights(output_dir+"/weights.01.hdf5") # zero-indexed
y_hat = model.predict_proba(x_valid)
len(y_hat)
y_hat[0]
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
pct_auc = roc_auc_score(y_valid, y_hat)*100.0
"{:0.2f}".format(pct_auc)
float_y_hat = []
for y in y_hat:
float_y_hat.append(y[0])
ydf = pd.DataFrame(list(zip(float_y_hat, y_valid)), columns=['y_hat', 'y'])
ydf.head(10)
' '.join(index_word[id] for id in all_x_valid[0])
' '.join(index_word[id] for id in all_x_valid[6])
ydf[(ydf.y == 0) & (ydf.y_hat > 0.9)].head(10)
' '.join(index_word[id] for id in all_x_valid[489])
ydf[(ydf.y == 1) & (ydf.y_hat < 0.1)].head(10)
' '.join(index_word[id] for id in all_x_valid[927])
```
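For intuition about the `roc_auc_score` used above: AUC equals the probability that a randomly chosen positive example outranks a randomly chosen negative one, which can be computed by brute force over pairs (a sketch for small inputs, not sklearn's implementation):

```
def auc_by_pairs(y_true, y_score):
    # AUC = P(score of a positive > score of a negative); ties count half
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

assert auc_by_pairs([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]) == 0.75
```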
```
import time
import warnings
import logging
import tensorflow as tf
```
### Decorate functions with tf.function
Functions can be faster than eager code, especially for graphs with many small ops. But for graphs with a few expensive ops (like convolutions), you may not see much speedup.
```
@tf.function
def add(a, b):
return a + b
@tf.function
def sub(a, b):
return a - b
@tf.function
def mul(a, b):
return a * b
@tf.function
def div(a, b):
return a / b
print(add(tf.constant(5), tf.constant(2)))
print(sub(tf.constant(5), tf.constant(2)))
print(mul(tf.constant(5), tf.constant(2)))
print(div(tf.constant(5), tf.constant(2)))
```
### Operate on variables and tensors, invoke nested functions
```
@tf.function
def matmul(a, b):
return tf.matmul(a, b)
@tf.function
def linear(m, x, c):
return add(matmul(m, x), c)
m = tf.constant([[4.0, 5.0, 6.0]], tf.float32)
m
x = tf.Variable([[100.0], [100.0], [100.0]], tf.float32)
x
c = tf.constant([[1.0]], tf.float32)
c
linear(m, x, c)
```
### Convert regular Python code to TensorFlow constructs
To help users avoid having to rewrite their code when adding @tf.function, AutoGraph converts a subset of Python constructs into their TensorFlow equivalents.
You may use data-dependent control flow, including `if`, `for`, `while`, `break`, `continue`, and `return` statements.
```
@tf.function
def pos_neg_check(x):
reduce_sum = tf.reduce_sum(x)
if reduce_sum > 0:
return tf.constant(1)
elif reduce_sum == 0:
return tf.constant(0)
else:
return tf.constant(-1)
pos_neg_check(tf.constant([100, 100]))    # sum > 0  -> 1
pos_neg_check(tf.constant([100, -100]))   # sum == 0 -> 0
pos_neg_check(tf.constant([-100, -100]))  # sum < 0  -> -1
```
### Operations with side effects
May also use ops with side effects, such as tf.print, tf.Variable and others.
```
num = tf.Variable(7)
@tf.function
def add_times(x):
for i in tf.range(x):
num.assign_add(x)
add_times(5)  # num: 7 + 5*5 = 32
print(num)
```
### In-order code execution
Dependencies in the code are automatically resolved based on the order in which the code is written.
```
a = tf.Variable(1.0)
b = tf.Variable(2.0)
@tf.function
def f(x, y):
a.assign(y * b)
b.assign_add(x * a)
return a + b
f(1, 2)  # a -> y*b = 4.0, then b -> b + x*a = 6.0, returns 10.0
```
### Polymorphism and tracing
Python's dynamic typing means that you can call functions with a variety of argument types, and Python will do something different in each scenario.
On the other hand, TensorFlow graphs require static dtypes and shape dimensions. tf.function bridges this gap by retracing the function when necessary to generate the correct graphs. Most of the subtlety of tf.function usage stems from this retracing behavior.
```
@tf.function
def square(a):
print("Input a: ", a)
return a * a
```
Trace a new graph with floating point inputs
```
x = tf.Variable([[2, 2], [2, 2]], dtype = tf.float32)
square(x)
```
Re-trace the graph, now the inputs are of type integer
```
y = tf.Variable([[2, 2], [2, 2]], dtype = tf.int32)
square(y)
```
This time the graph for floating-point inputs is not retraced; it is simply executed. That means the `print()` statement does not run, since it is a Python side effect, and Python side effects are executed only when the graph is traced.
```
z = tf.Variable([[3, 3], [3, 3]], dtype = tf.float32)
square(z)
```
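The retracing behavior above can be pictured as a cache of concrete functions keyed by input signature; a pure-Python sketch of that idea (a toy model for intuition, not the real `tf.function` machinery):

```
class TracedFunction:
    """Toy model of tf.function's trace cache, keyed by input dtype."""
    def __init__(self, python_fn):
        self.python_fn = python_fn
        self.traces = {}     # signature -> "concrete function"
        self.trace_log = []  # records when Python-level tracing ran

    def __call__(self, x, dtype):
        if dtype not in self.traces:
            # "Tracing": Python side effects (like print) happen only here
            self.trace_log.append(dtype)
            self.traces[dtype] = self.python_fn
        return self.traces[dtype](x)

square = TracedFunction(lambda a: a * a)
square(2.0, 'float32')  # traces a float32 "graph"
square(2, 'int32')      # new signature: retraces
square(3.0, 'float32')  # cache hit: reuses the float32 trace, no side effects
assert square.trace_log == ['float32', 'int32']
```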
### Use get_concrete_function() to get a concrete trace for a particular input type
```
concrete_int_square_fn = square.get_concrete_function(tf.TensorSpec(shape=None, dtype=tf.int32))
concrete_int_square_fn
concrete_float_square_fn = square.get_concrete_function(tf.TensorSpec(shape=None, dtype=tf.float32))
concrete_float_square_fn
concrete_int_square_fn(tf.constant([[2, 2], [2, 2]], dtype = tf.int32))
concrete_float_square_fn(tf.constant([[2.1, 2.1], [2.1, 2.1]], dtype = tf.float32))
concrete_float_square_fn(tf.constant([[2, 2], [2, 2]], dtype = tf.int32))  # raises an error: a concrete function is specialized to its traced dtype
```
### Python side effects only happen during tracing
In general, Python side effects (like printing or mutating objects) only happen during tracing.
```
@tf.function
def f(x):
print("Python execution: ", x)
tf.print("Graph execution: ", x)
f(1)                     # first call: traces, so both print and tf.print run
f(1)                     # same signature: only tf.print runs
f("Hello tf.function!")  # new input type: retraces, so both run again
```
Appending to Python lists is also a Python side-effect
```
arr = []
@tf.function
def f(x):
for i in range(len(x)):
arr.append(x[i])
f(tf.constant([10, 20, 30]))
arr
@tf.function
def f(x):
tensor_arr = tf.TensorArray(dtype = tf.int32, size = 0, dynamic_size = True)
for i in range(len(x)):
tensor_arr = tensor_arr.write(i, x[i])
return tensor_arr.stack()
result_arr = f(tf.constant([10, 20, 30]))
result_arr
```
### Use the tf.py_function() escape hatch to execute side effects
```
external_list = []
def side_effect(x):
print('Python side effect')
external_list.append(x)
@tf.function
def fn_with_side_effects(x):
tf.py_function(side_effect, inp=[x], Tout=[])
fn_with_side_effects(1)
fn_with_side_effects(2)
external_list
```
### Control flow works
for/while --> tf.while_loop (break and continue are supported)
```
@tf.function
def some_tanh_fn(x):
while tf.reduce_sum(x) > 1:
x = tf.tanh(x)
return x
some_tanh_fn(tf.random.uniform([10]))
```
#### Converting a function in eager mode to its Graph representation
Converting a function that works in eager mode to its graph representation requires thinking about the graph even though we are working in eager mode.
```
def fn_with_variable_init_eager():
a = tf.constant([[10,10],[11.,1.]])
x = tf.constant([[1.,0.],[0.,1.]])
b = tf.Variable(12.)
y = tf.matmul(a, x) + b
tf.print("tf_print: ", y)
return y
fn_with_variable_init_eager()
@tf.function
def fn_with_variable_init_autograph():
a = tf.constant([[10,10],[11.,1.]])
x = tf.constant([[1.,0.],[0.,1.]])
b = tf.Variable(12.)
y = tf.matmul(a, x) + b
tf.print("tf_print: ", y)
return y
fn_with_variable_init_autograph()  # raises ValueError: a tf.function must not create new Variables on every call
class F():
def __init__(self):
self._b = None
@tf.function
def __call__(self):
a = tf.constant([[10, 10], [11., 1.]])
x = tf.constant([[1., 0.], [0., 1.]])
if self._b is None:
self._b = tf.Variable(12.)
y = tf.matmul(a, x) + self._b
print(y)
tf.print("tf_print: ", y)
return y
fn_with_variable_init_autograph = F()
fn_with_variable_init_autograph()
def f(x):
if x > 0:
x *= x
return x
print(tf.autograph.to_code(f))
```
#### AutoGraph is highly optimized and works well when the input is a tf.Tensor object
```
@tf.function
def g(x):
return x
start = time.time()
for i in tf.range(2000):
g(i)
end = time.time()
print("tf.Tensor time elapsed: ", (end-start))
warnings.filterwarnings('ignore')
logging.getLogger('tensorflow').disabled = True
start = time.time()
for i in range(2000):
g(i)
end = time.time()
print("Native type time elapsed: ", (end-start))
```