# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Range
# `range` is a built-in Python function that returns a sequence of numbers, starting from a given value (0 by default) and incrementing by a given step (1 by default) until it reaches a specified stop value. Its most common use is to iterate over a sequence of numbers, typically in for loops.
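A quick, self-contained illustration of the three call signatures (the values here are arbitrary):

```python
# range(stop): starts at 0, steps by 1, stops before `stop`
assert list(range(4)) == [0, 1, 2, 3]

# range(start, stop): `stop` is exclusive
assert list(range(2, 6)) == [2, 3, 4, 5]

# range(start, stop, step): custom increment
assert list(range(1, 10, 3)) == [1, 4, 7]

print("all three range forms behave as described")
```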
#
# +
# Syntax
# range(start, stop, step)
# +
# Range With For Loop
for i in range(2,20,2):
    print(i)
# +
# Increment With Positive And Negative Step
for i in range(2, 20, 5):
    print(i, end=", ")
for j in range(25, 0, -5):
    print(j, end=", ")
# +
# Concatenating Two Range Functions
from itertools import chain
res = chain(range(10) , range(10, 15))
for i in res:
    print(i, end=", ")
# +
# Accessing Range Using Index Values
a = range(0,10)[3]
b = range(0,10)[5]
print(a)
print(b)
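Beyond single-index access, `range` objects also support `len()`, membership tests, and slicing (slicing returns another `range`); a small sketch:

```python
r = range(0, 10, 2)  # 0, 2, 4, 6, 8

assert len(r) == 5             # length without materializing a list
assert 6 in r and 5 not in r   # constant-time membership test
assert list(r[1:3]) == [2, 4]  # slicing yields another range

print("len, membership and slicing all work on ranges")
```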
# +
# Converting Range To List
a = range(0,10)
b = list(a)
c = list(range(0,5))
print(b)
print(c)
# -
| Range Function.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Applied Machine Learning, Module 1: A simple classification task
# ## Import required modules and load data file
# +
# %matplotlib notebook
import pandas as pd
import numpy as np
import matplotlib.pyplot as ptl
from sklearn.model_selection import train_test_split
fruits = pd.read_table("../data/fruit_data_with_colors.txt")
# -
fruits.head()
fruits.shape
# +
lookup_fruit_name = fruits.drop_duplicates(['fruit_label','fruit_name'])[['fruit_label','fruit_name']].set_index('fruit_label').to_dict()['fruit_name']
lookup_fruit_name
# -
# ## Create test train split
# +
X = fruits[['mass','width','height','color_score']]
y = fruits['fruit_label']
train_X, test_X, train_y, test_y = train_test_split(X,y,random_state=0)
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)
# -
# ## Examining the data
# +
# feature pair plot
from matplotlib import cm
cmap = cm.get_cmap('gnuplot')
scatter = pd.plotting.scatter_matrix(train_X, c=train_y, marker='o', s=40, hist_kwds={'bins':15}, figsize=(10,10), cmap=cmap)
# +
# 3D feature scatterplot
from mpl_toolkits.mplot3d import Axes3D
figure = ptl.figure()
ax = figure.add_subplot(111, projection='3d')
ax.scatter(train_X['height'],train_X['width'],train_X['color_score'],c=train_y, marker='o', s=100)
ax.set_xlabel('Height')
ax.set_ylabel('Width')
ax.set_zlabel('Color Score')
ptl.show()
# -
# ## Create the classifier object
# +
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5)
# -
# ## Train the classifier (fit the estimator) using the training data
knn.fit(train_X, train_y)
# ## Estimate the accuracy of the classifier on future data, using the test data
knn.score(test_X, test_y)
# ## Use the trained k-NN classifier model to classify new, previously unseen objects
fruit_prediction = knn.predict([[159, 7,7,0.75]])
lookup_fruit_name[fruit_prediction[0]]
fruit_prediction = knn.predict([[356, 6,10,0.5]])
lookup_fruit_name[fruit_prediction[0]]
# ## K vs Accuracy
accuracy = []
for k in range(1,20):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train_X, train_y)
    accuracy.append(knn.score(test_X,test_y))
ptl.plot(range(1,20), accuracy,'o')
ptl.show()
| Module 1/A simple classification task.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Exercise 3.21 Mutual information for naive Bayes classifiers with binary features
# Derive Equation 3.76.
# The mutual information between feature $X_j$ and the class label $Y$ is given by
#
# $$
# I(X_j, Y) = \sum_{x_j}\sum_{y}p(x_j, y)\log\frac{p(x_j, y)}{p(x_j)p(y)}
# $$
#
# The mutual information can be thought of as the reduction in entropy of the label distribution once we observe the value of feature $j$.
# If the features are binary, then
#
# \begin{aligned}
# I_j & = \sum_y p(x_j=0, y)\log\frac{p(x_j=0, y)}{p(x_j=0)p(y)} + p(x_j=1, y)\log\frac{p(x_j=1, y)}{p(x_j=1)p(y)} \\
# & = \sum_y p(x_j=0|y)p(y)\log\frac{p(x_j=0| y)}{p(x_j=0)} + p(x_j=1|y)p(y)\log\frac{p(x_j=1|y)}{p(x_j=1)} \\
# & = \sum_{c}(1-p(x_j=1|y=c))p(y=c)\log\frac{(1-p(x_j=1|y=c))}{(1-p(x_j=1))} + p(x_j=1|y=c)p(y=c)\log\frac{p(x_j=1|y=c)}{p(x_j=1)}
# \end{aligned}
# If we set $\pi_c = p(y=c)$, $\theta_{jc} = p(x_j=1|y=c)$, and $\theta_j = p(x_j=1) = \sum_c\pi_c\theta_{jc}$, then we have
#
# $$
# I_j = \sum_c\left[(1-\theta_{jc})\pi_c\log\frac{1-\theta_{jc}}{1-\theta_j}+ \theta_{jc}\pi_c\log\frac{\theta_{jc}}{\theta_j}\right]
# $$
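As a sanity check, the closed form can be compared numerically against the defining double sum; the parameters below are arbitrary made-up values for a three-class problem with one binary feature:

```python
import math

# Hypothetical parameters: pi_c = p(y=c), theta_jc = p(x_j=1 | y=c)
pi = [0.5, 0.3, 0.2]
theta = [0.9, 0.4, 0.1]
theta_j = sum(p * t for p, t in zip(pi, theta))  # p(x_j=1)

# Closed form derived above
I_closed = sum(
    (1 - t) * p * math.log((1 - t) / (1 - theta_j))
    + t * p * math.log(t / theta_j)
    for p, t in zip(pi, theta)
)

# Direct definition: sum over the joint p(x_j, y) log p(x_j,y)/(p(x_j)p(y))
I_direct = 0.0
for p, t in zip(pi, theta):
    for x, px_given_y in ((1, t), (0, 1 - t)):
        joint = px_given_y * p
        px = theta_j if x == 1 else 1 - theta_j
        I_direct += joint * math.log(joint / (px * p))

print(abs(I_closed - I_direct))  # agrees to floating-point precision
```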
| murphy-book/chapter03/q21.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.5
# language: julia
# name: julia-1.6
# ---
# # PRACTICUM 3
# __Solving Nonlinear Equations 1__
#
# *Topics*
# 1. The _Bisection_ Method
# 2. The _Regula Falsi_ Method
# 3. The Fixed-Point Iteration Method
#
# **Source**: practicum handbook for meeting 3 (newlms)
# ## 1 The _Bisection_ Method
# The first method we study for approximating a root of an equation is the bisection method. It requires that the function values at the endpoints of the interval $ [a,b] $, i.e. $ f(a) $ and $ f(b) $, have **opposite signs**.
using Plots
function bisection(f,a,b)
    # Set the tolerance and the maximum number of iterations.
    delta = 10^-7;
    maxi = 100;
    flag = 1;
    M = Array{Float64}(undef, 0, 5);
    # Check the convergence condition: f(a) and f(b) must have opposite signs.
    fa = f(a);
    fb = f(b);
    if fa*fb > 0
        c = "error: f(a) and f(b) must have opposite signs";
        flag = 2;
        return c, flag, M;
    end
    k = 1
    # Start the bisection iteration.
    while k <= maxi
        # Bisection formula for the midpoint c.
        c = (b+a)/2;
        fc = f(c);
        M = [M; [k-1 a c b fc]];
        # Keep the subinterval whose endpoint values have opposite signs.
        if fc == 0
            a = c;
            b = c;
        elseif fa*fc > 0
            a = c;
            fa = fc;
        else
            b = c;
            fb = fc;
        end
        # Stopping criterion.
        if b-a < delta || abs(fc) < delta
            flag = 0;
            break;
        end
        k += 1
    end
    return c, flag, M
end
# ### Example 1
# Given the function $ f(x)=x\sin(x)-1 $ on the interval $ [0,2] $, find an approximation to the root of $ f(x)=0 $ using the _bisection_ method.
# **Step 1**: Define the function $ f(x) $ and the endpoints of the interval $ [0,2] $.
f(x) = x*sin(x)-1;
a = 0; b = 2;
# **Step 2**: Compute the approximate root of $ f(x)=0 $ using the _bisection_ program
c, flag, M = bisection(f,a,b)
@show c
@show flag
M
# **Step 3**: Plot the **approximate root** at each iteration.
# Take the value of c for each n from the matrix M
iter = M[:,1];
cn = M[:,3];
# Plot cn
p = plot(iter, cn, label = :none)
title!("Approximate root at each iteration")
xlabel!("iteration")
ylabel!("approximate root (c)")
# **Step 4**: Plot **$ |f(c_n)| $** of the approximate root at each iteration.
# +
# Take the absolute value of f(c) for each n from the matrix M
iter = M[:,1];
fc = abs.(M[:,5]);
# Plot |f(c)|
p2 = plot(iter,fc, yaxis = :log, label = :none)
title!("Absolute value of f(c) at each iteration")
xlabel!("iteration")
ylabel!("|f(c)|")
# -
# ## 2 The _Regula Falsi_ Method
# The _regula falsi_ method refines the _bisection_ method by improving the choice of the interior point $ c $ in the interval $ [a,b] $, reducing the number of iterations and the computation time: $$ c=b - f(b)\frac{b-a}{f(b)-f(a)} $$
#
# As with the _bisection_ method, it requires that the function values at the endpoints of the interval $ [a,b] $, i.e. $ f(a) $ and $ f(b) $, have **opposite signs**.
#=
REGULA FALSI METHOD FOR FINDING A ROOT OF AN EQUATION
[c,flag,M] = regulaFalsi(f,a,b)
Input  : f    -> the function f
         a,b  -> endpoints of the interval [a,b]
Output : c    -> numerical approximation of the root
         flag -> 0 -> tolerance satisfied
                 1 -> maximum number of iterations reached
                 2 -> error: f(a) and f(b) have the same sign
         M    -> matrix containing the iteration number, a, c, b, and f(c)
=#
function regulaFalsi(f,a,b)
    # Set the tolerance and the maximum number of iterations.
    delta = 10^-7;
    maxi = 100;
    flag = 1;
    M = Array{Float64}(undef, 0, 5);
    # Check the convergence condition: f(a) and f(b) must have opposite signs.
    fa = f(a);
    fb = f(b);
    if fa*fb > 0
        c = "error: f(a) and f(b) must have opposite signs";
        flag = 2;
        return c, flag, M;
    end
    # Start the regula falsi iteration.
    for k = 1:maxi
        c = b - fb*(b-a)/(fb-fa); # regula falsi formula for c.
        fc = f(c);
        dx = min(c-a, b-c);
        M = [M; [k-1 a c b fc]];
        # Keep the subinterval whose endpoint values have opposite signs.
        if fc == 0
            a = c;
            b = c;
        elseif fa*fc > 0
            a = c;
            fa = fc;
        else
            b = c;
            fb = fc;
        end
        # Stopping criterion.
        if abs(fc) < delta || abs(dx) < delta
            flag = 0; break;
        end
    end
    return c, flag, M
end
# ### Example 2
# Given the function $ f(x)=x\sin(x)-1 $ on the interval $ [0,2] $, the following steps find an approximate root of $ f(x)=0 $ with the _regula falsi_ method.
# **Step 1**: Define the function $ f(x) $ and the endpoints of the interval $ [0,2] $.
f(x) = x*sin(x)-1;
a = 0;
b = 2;
# **Step 2**: Compute the approximate root of $ f(x)=0 $ using the _regula falsi_ program
c, flag, M = regulaFalsi(f,a,b)
@show c
@show flag
M
# **Step 3**: Plot the **approximate root** at each iteration.
# Take the value of c for each n from the matrix M
iter = M[:,1];
cn = M[:,3];
# Plot cn
p1 = plot(iter, cn, label = :none)
# Add title and labels
title!("Approximate root at each iteration")
xlabel!("iteration")
ylabel!("approximate root (c)")
# **Step 4**: Plot **$ |f(c_n)| $** of the approximate root at each iteration.
# +
# Take the absolute value of f(c) for each n from the matrix M
iter = M[:,1];
fc = abs.(M[:,5]);
# Plot |f(c)|
p2 = plot(iter,fc, yaxis = :log, label = :none)
title!("Absolute value of f(c) at each iteration")
xlabel!("iteration")
ylabel!("|f(c)|")
# -
# ## 3 The Fixed-Point Iteration Method
# Besides the _bisection_ and _regula falsi_ methods, there are **iterative** methods. In this practicum we study one of them, **fixed-point iteration**: the equation $f(x)=0$ is transformed into the form $x=g(x)$, giving the iteration
# $$ p_n = g(p_{n-1}) $$
#=
FIXED-POINT ITERATION
[pn, flag] = fixpoint(g,p0)
Input  : g    -> the function g
         p0   -> starting value
Output : pn   -> approximate root
         flag -> 0 -> success
                 1 -> failure
         M    -> matrix containing the iteration number, the approximate
                 root, and the error
=#
function fixpoint(g,p0)
    # Set the tolerance, the maximum number of iterations, and the initial guess.
    delta = 10^-7;
    maxi = 100;
    flag = 1;
    pn = p0
    M = [0 pn NaN];
    # Start the iteration.
    for n = 2:maxi
        # Fixed-point iteration formula.
        pn1 = pn;
        pn = g(pn1);
        # Compute the absolute and relative errors.
        err = abs(pn - pn1);
        relerr = err/(abs(pn) + eps());
        M = [M; [n-1 pn err]]
        # Stop the iteration once the error meets the tolerance.
        if (err < delta) || (relerr < delta)
            flag = 0; break
        end
    end
    return pn, flag, M
end
# ### Example 3
# Consider the convergent iteration
# $$ p_{k+1}=\exp(-p_k) $$
# with $ p_0=0.5 $. The following steps show that this iteration converges to a fixed point.
# **Step 1**: Define the iteration function and the initial guess $ p_0 $.
g(x) = exp(-x);
p0 = 0.5;
# **Step 2**: Compute the fixed-point iterates using the fixed-point iteration program
pn, flag, M = fixpoint(g,p0)
@show pn
@show flag
M
# **Step 3**: Plot the fixed-point **iterate** $ p_k $ at each iteration.
# Take p_k for each k from column 2 of the matrix M.
iter = M[:,1];
pk = M[:,2];
# Plot pk
p1 = plot(iter,pk, label = :none)
# Add title and labels
title!("Fixed-point iterates")
xlabel!("iteration")
ylabel!("iterate (pk)")
# **Step 4**: Plot the **error** of the fixed-point iteration at each iteration.
# +
# Take the error E_k for each k from column 3 of the matrix M.
iter = M[:,1];
Ek = M[:,3];
# Plot Ek
p2 = plot(iter,Ek, yaxis = :log, label = :none)
title!("Fixed-point iteration error")
xlabel!("iteration")
ylabel!("error (Ek)")
# -
# <hr style="border:2px solid black"> </hr>
#
# # Exercises
# Work on the following problems during the practicum session.
#
# `Name: ________`
#
# `NIM: ________`
# ### Problem 1
# Repeat the steps of **Example 1** to find a root of the equation $$ f(x)=\sin(x)-2\cos(x) $$ on the interval $ [-2,2] $ using the _bisection_ method.
# ### Problem 2
# Repeat the steps of **Example 2** to find a root of the equation $$ f(x)=\sin(x)-2\cos(x) $$ on the interval $ [-2,2] $ using the _regula falsi_ method.
# ### Problem 3
# Consider the iteration $ p_{n+1}=g(p_n) $ with
#
# $$ g(x) = 1+x - x^2/4 $$
#
# Analytically, there are two fixed points, $ P=-2 $ and $ P=2 $.
#
# 1. Show that the fixed-point iteration converges to $ P=2 $ when $ p_0=1.6 $.
# 2. Show that the fixed-point iteration does not converge to $ P=-2 $ when $ p_0=-2.05 $.
#
| notebookpraktikum/Praktikum 03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import drpy
# %pylab inline
# +
# quick test of drpy.util.boxbin
x = np.arange(0,11)+0.5
y = np.arange(0,11)+0.5
c = np.ones(len(x))
xedge = np.arange(0,14,1)
yedge = np.arange(0,14,1)
drpy.util.boxbin(x,y,xedge,yedge,c=c,mincnt=1,edgecolor='k')
plt.plot(x,y,'o',markerfacecolor='w')
# -
| notebooks/util_test.ipynb |
# # Speeding-up gradient-boosting
# In this notebook, we present a modified version of gradient boosting which
# uses a reduced number of splits when building the different trees. This
# algorithm is called "histogram gradient boosting" in scikit-learn.
#
# We previously mentioned that random forest is an efficient algorithm since
# each tree of the ensemble can be fitted at the same time independently.
# Therefore, the algorithm scales efficiently with both the number of cores and
# the number of samples.
#
# Gradient boosting, in contrast, is a sequential algorithm: the first
# `N-1` trees must have been fit before the tree at stage `N` can be fit.
# Therefore, the algorithm is quite computationally expensive. The most
# expensive part of this algorithm is the search for the best split in the
# tree, which is a brute-force approach: all possible splits are evaluated and
# the best one is picked. We explained this process in the notebook "tree in
# depth", which you can refer to.
#
# To accelerate the gradient-boosting algorithm, one could reduce the number of
# splits to be evaluated. As a consequence, the generalization performance of such
# a tree would be reduced. However, since we are combining several trees in a
# gradient-boosting, we can add more estimators to overcome this issue.
#
# We will make a naive implementation of such an algorithm using building blocks
# from scikit-learn. First, we will load the California housing dataset.
# +
from sklearn.datasets import fetch_california_housing
data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100 # rescale the target in k$
# -
# <div class="admonition note alert alert-info">
# <p class="first admonition-title" style="font-weight: bold;">Note</p>
# <p class="last">If you want a deeper overview regarding this dataset, you can refer to the
# Appendix - Datasets description section at the end of this MOOC.</p>
# </div>
# We will make a quick benchmark of the original gradient boosting.
# +
from sklearn.model_selection import cross_validate
from sklearn.ensemble import GradientBoostingRegressor
gradient_boosting = GradientBoostingRegressor(n_estimators=200)
cv_results_gbdt = cross_validate(
gradient_boosting, data, target, scoring="neg_mean_absolute_error",
n_jobs=2
)
# -
print("Gradient Boosting Decision Tree")
print(f"Mean absolute error via cross-validation: "
f"{-cv_results_gbdt['test_score'].mean():.3f} +/- "
f"{cv_results_gbdt['test_score'].std():.3f} k$")
print(f"Average fit time: "
f"{cv_results_gbdt['fit_time'].mean():.3f} seconds")
print(f"Average score time: "
f"{cv_results_gbdt['score_time'].mean():.3f} seconds")
# We recall that one way of accelerating gradient boosting is to reduce the
# number of splits considered during tree building. One approach is to bin the
# data before feeding it to the gradient boosting model. A transformer called
# `KBinsDiscretizer` performs exactly this transformation. Thus, we can pipeline
# this preprocessing with the gradient boosting.
#
# We can first demonstrate the transformation done by the `KBinsDiscretizer`.
# +
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
discretizer = KBinsDiscretizer(
n_bins=256, encode="ordinal", strategy="quantile")
data_trans = discretizer.fit_transform(data)
data_trans
# -
# <div class="admonition note alert alert-info">
# <p class="first admonition-title" style="font-weight: bold;">Note</p>
# <p class="last">The code cell above will generate a couple of warnings. Indeed, for some of
# the features, we requested too many bins relative to the data dispersion
# of those features. The smallest bins will be removed.</p>
# </div>
# We see that the discretizer transforms the original data into integer
# values (even though they are encoded using a floating-point representation).
# Each value represents the bin index resulting from the quantile
# discretization. We can check the number of bins per feature.
[len(np.unique(col)) for col in data_trans.T]
# After this transformation, we see that we have at most 256 unique values per
# feature. Now, we will use this transformer to discretize data before
# training the gradient boosting regressor.
# +
from sklearn.pipeline import make_pipeline
gradient_boosting = make_pipeline(
discretizer, GradientBoostingRegressor(n_estimators=200))
cv_results_gbdt = cross_validate(
gradient_boosting, data, target, scoring="neg_mean_absolute_error",
n_jobs=2,
)
# -
print("Gradient Boosting Decision Tree with KBinsDiscretizer")
print(f"Mean absolute error via cross-validation: "
f"{-cv_results_gbdt['test_score'].mean():.3f} +/- "
f"{cv_results_gbdt['test_score'].std():.3f} k$")
print(f"Average fit time: "
f"{cv_results_gbdt['fit_time'].mean():.3f} seconds")
print(f"Average score time: "
f"{cv_results_gbdt['score_time'].mean():.3f} seconds")
# Here, we see that the fit time has been reduced but that the
# generalization performance of the model is identical. Scikit-learn provides
# specific classes which are even more optimized for large datasets, called
# `HistGradientBoostingClassifier` and `HistGradientBoostingRegressor`. Each
# feature in the dataset `data` is first binned by computing histograms, which
# are later used to evaluate the potential splits. The number of splits to
# evaluate is then much smaller. This algorithm becomes much more efficient
# than gradient boosting when the dataset has over 10,000 samples.
#
# Below we will give an example for a large dataset and we will compare
# computation times with the experiment of the previous section.
# +
from sklearn.ensemble import HistGradientBoostingRegressor
histogram_gradient_boosting = HistGradientBoostingRegressor(
max_iter=200, random_state=0)
cv_results_hgbdt = cross_validate(
histogram_gradient_boosting, data, target,
scoring="neg_mean_absolute_error", n_jobs=2,
)
# -
print("Histogram Gradient Boosting Decision Tree")
print(f"Mean absolute error via cross-validation: "
f"{-cv_results_hgbdt['test_score'].mean():.3f} +/- "
f"{cv_results_hgbdt['test_score'].std():.3f} k$")
print(f"Average fit time: "
f"{cv_results_hgbdt['fit_time'].mean():.3f} seconds")
print(f"Average score time: "
f"{cv_results_hgbdt['score_time'].mean():.3f} seconds")
# The histogram gradient-boosting is the best algorithm in terms of score.
# It will also scale when the number of samples increases, while the normal
# gradient-boosting will not.
| notebooks/ensemble_hist_gradient_boosting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''py38_clearml_serving_git_dev'': conda)'
# name: python3810jvsc74a57bd08077a6a1b7afc839013d1c78e8fdd0b9fe303014e7d699e44d472f6b36017654
# ---
from clearml import StorageManager, InputModel, Task
import furl
from pathlib import Path
uri_a = 'azure://clearmllibrary/artefacts/Caltech Birds%2FTraining/TRAIN [Network%3A resnet34, Library%3A torchvision] Ignite Train PyTorch CNN on CUB200.8611ada5be6f4bb6ba09cf730ecd2253/models/cub200_resnet34_ignite_best_model_0.pt'
uri_b = uri_a.replace(' ','%20')
uri_c = 'azure://clearmllibrary/artefacts/Caltech%20Birds%252FTraining/TRAIN%20%5BNetwork%253A%20resnet34%2C%20Library%253A%20torchvision%5D%20Ignite%20Train%20PyTorch%20CNN%20on%20CUB200.8611ada5be6f4bb6ba09cf730ecd2253/models/cub200_resnet34_ignite_best_model_0.pt'
StorageManager.get_local_copy(uri_a)
StorageManager.get_local_copy(uri_b)
StorageManager.get_local_copy(uri_c)
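The manual `' ' → '%20'` replacement above can be generalized with the standard library; a sketch on a shortened, illustrative version of the artefact path (`safe='/%'` keeps the path separators and the already percent-encoded `%2F`/`%3A` sequences untouched):

```python
from urllib.parse import quote

# Shortened, illustrative version of the problematic artefact path
path = 'artefacts/Caltech Birds%2FTraining/TRAIN [Network%3A resnet34]'

# Percent-encode spaces and brackets, but keep '/' and existing '%' escapes
encoded = quote(path, safe='/%')
print(encoded)
```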
f_a = furl.furl(uri_a)
from clearml.backend_api import load_config
config_obj = load_config(Path('~/clearml_confs/clearml_pytorch_train.conf'))
config_obj.initialize_logging()
config = config_obj.get("sdk")
config.get('azure')
| bug_investigations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="1UukvdNl6KwR"
# # Importing Python libraries
# + id="R8OC-FfI6KwS"
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import cm
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
from prettytable import PrettyTable
import plotly.graph_objs as go
from statsmodels.tsa.stattools import adfuller
from scipy import stats
from scipy.stats import normaltest
import statsmodels.api as sm
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX
import warnings
warnings.filterwarnings("ignore")
# + [markdown] id="_jevl2Ctmq-e" papermill={"duration": 0.065466, "end_time": "2020-11-30T07:38:51.578836", "exception": false, "start_time": "2020-11-30T07:38:51.513370", "status": "completed"} tags=[]
# <div class="alert alert-block alert-success">
# <h1><center><strong> TSLA Stock</strong></center></h1>
# </div>
# + [markdown] id="bpBikXkO6KwY"
# # ------------------------------------------------------------------------------------------------------
# + [markdown] id="8Zdt4_b66KwZ"
# # Loading dataset
# + id="CqnqqpRqmq-n"
data = pd.read_csv('TSLA_Stock.csv')
# + [markdown] id="ZbgRood-mq_V"
# # -----------------------------------------------------------------------------------------------------------
# + [markdown] id="GdovkV07mq_W"
# <h1><center> Modelling Arima and Sarima</center></h1>
# + [markdown] id="LcopGQr1mq_W"
# # -----------------------------------------------------------------------------------------------------------
# + [markdown] _uuid="fcea90a58becab4088bfd8610b505230c53a5d1c"
# ### Seasonality of Close price
# + _uuid="17337869fcf192c4a973d8638eca1bf1149d8cbd"
data['Close'] = data['Close'] * 1.0
close_1 = data['Close']
c = '#386B7F'
# + [markdown] _uuid="505858801c9d35bf72aac99ace1305fe85b8b59b"
# ### Stationarize the Close price data
# + _uuid="b949457be57d4ab6787604b9561a70506ec2545f"
def test_stationarity(timeseries, window=12, cutoff=0.01):
    rolmean = timeseries.rolling(window).mean()
    rolstd = timeseries.rolling(window).std()
    fig = plt.figure(figsize=(12, 8))
    orig = plt.plot(timeseries, color='blue', label='Original')
    mean = plt.plot(rolmean, color='red', label='Rolling Mean')
    std = plt.plot(rolstd, color='black', label='Rolling Std')
    plt.legend(loc='best')
    plt.title('Rolling Mean & Standard Deviation')
    plt.show()
    print('Results of Dickey-Fuller Test:')
    dftest = adfuller(timeseries, autolag='AIC', maxlag=20)
    dfoutput = pd.Series(dftest[0:4], index=['Test Statistic', 'p-value', '#Lags Used', 'Number of Observations Used'])
    for key, value in dftest[4].items():
        dfoutput['Critical Value (%s)' % key] = value
    pvalue = dftest[1]
    if pvalue < cutoff:
        print('p-value = %.4f. The series is likely stationary.' % pvalue)
    else:
        print('p-value = %.4f. The series is likely non-stationary.' % pvalue)
    print(dfoutput)
# + _uuid="30423770007650c75b8b01d559693344c953ee43"
def residual_plot(model):
    resid = model.resid
    print(normaltest(resid))
    fig = plt.figure(figsize=(12, 8))
    ax0 = fig.add_subplot(111)
    sns.distplot(resid, fit=stats.norm, ax=ax0)
    (mu, sigma) = stats.norm.fit(resid)
    plt.legend(['Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f} )'.format(mu, sigma)], loc='best')
    plt.ylabel('Frequency')
    plt.title('Residual distribution')
    fig = plt.figure(figsize=(12, 8))
    ax1 = fig.add_subplot(211)
    fig = sm.graphics.tsa.plot_acf(model.resid, lags=40, ax=ax1)
    ax2 = fig.add_subplot(212)
    fig = sm.graphics.tsa.plot_pacf(model.resid, lags=40, ax=ax2)
# -
# ### Close price with rolling windows
# + _uuid="81807ffd06a99095c63f273875d739c48e72fa53"
test_stationarity(close_1)
# + [markdown] _uuid="4c95908e946390144098916245209765cb2eea56"
# We are going to make the data stationary. To do so, we take the first difference of the data, which helps eliminate the overall trend.
# + _uuid="66da4c259e5f8d3c685a1a866554e2ad51194c45"
first_diff_a = close_1 - close_1.shift(1)
first_diff_a = first_diff_a.dropna(inplace = False)
test_stationarity(first_diff_a, window = 12)
# + [markdown] _uuid="974ae9df32f4554f5b43172947abc1605ca83734"
# ### Plots of ACF and PACF
# + _uuid="6bb62a22a366c45e597bef67091482e14e122a26"
plt.figure(figsize = (12, 8))
plt.subplot(421); plot_acf(close_1, lags = 50, ax = plt.gca(), color = c)
plt.subplot(422); plot_pacf(close_1, lags = 50, ax = plt.gca(), color = c)
# -
# ### Splitting the data into training (first 70%) and testing (last 30%)
train_data, test_data = data[0:int(len(data)*0.7)], data[int(len(data)*0.7):]
training_data = train_data['Close'].values
test_data = test_data['Close'].values
# ### Training and testing the ARIMA model
history_of_train = [x for x in training_data]
predictions = []
test_records = len(test_data)
for times in range(test_records):
    arima = SARIMAX(history_of_train, order=(4,4,1))
    arima_fit = arima.fit(disp=0)
    output = arima_fit.forecast()
    pred = output[0]
    predictions.append(pred)
    test_value = test_data[times]
    history_of_train.append(test_value)
residual_plot(arima_fit)
# ### Evaluation of Arima Model on Test data
# + [markdown] id="qkcalTi0ZGC2"
# ### R2
# + colab={"base_uri": "https://localhost:8080/"} id="1XrbLKYnZGC4" outputId="c0542e93-bd99-4eae-c134-5268a4e2c3fc"
arima_test_rs=r2_score(test_data, predictions)
print('R Squared : ', round(arima_test_rs,3))
# + [markdown] id="w1BTf8THZGC-"
# ### MSE
# + colab={"base_uri": "https://localhost:8080/"} id="UGGMF4-KZGDA" outputId="d9ed4eb7-a5f9-499f-a185-458f7f001012"
arima_test_mse=mean_squared_error(test_data, predictions)
print('Mean Squared Error: ', round(arima_test_mse,3))
# + [markdown] id="ecIh2V5aZGDG"
# ### MAE
# + colab={"base_uri": "https://localhost:8080/"} id="EPewmcfSZGDJ" outputId="5153a04e-29c0-44a5-82d3-9e5c083eae89"
arima_test_MAE=mean_absolute_error(test_data, predictions)
print('Mean Absolute Error: ', round(arima_test_MAE,3))
# -
# ### Predictions and Actual Stock Price
test_set = data[int(len(data)*0.7):].index
plt.figure(figsize=(20,10))
plt.plot(test_set, predictions, color='blue', marker='o', linestyle='dashed',label='Predicted Price')
plt.plot(test_set, test_data, color='red', label='Actual Price')
plt.title('Comparison of actual and predicted stock prices')
plt.xlabel('Day')
plt.ylabel('Prices')
plt.legend()
plt.show()
# ### Training and testing the SARIMA model
history_of_train = [x for x in training_data]
predictions = []
test_records = len(test_data)
for times in range(test_records):
    sarima = SARIMAX(history_of_train, order=(4,4,0))
    sarima_fit = sarima.fit(disp=0)
    output = sarima_fit.forecast()
    pred = output[0]
    predictions.append(pred)
    test_value = test_data[times]
    history_of_train.append(test_value)
residual_plot(sarima_fit)
# ### Evaluation of SARIMA Model on Test data
# + [markdown] id="qkcalTi0ZGC2"
# ### R2
# + colab={"base_uri": "https://localhost:8080/"} id="1XrbLKYnZGC4" outputId="c0542e93-bd99-4eae-c134-5268a4e2c3fc"
sarima_test_rs=r2_score(test_data, predictions)
print('R Squared : ', round(sarima_test_rs,3))
# + [markdown] id="w1BTf8THZGC-"
# ### MSE
# + colab={"base_uri": "https://localhost:8080/"} id="UGGMF4-KZGDA" outputId="d9ed4eb7-a5f9-499f-a185-458f7f001012"
sarima_test_mse=mean_squared_error(test_data, predictions)
print('Mean Squared Error: ', round(sarima_test_mse,3))
# + [markdown] id="ecIh2V5aZGDG"
# ### MAE
# + colab={"base_uri": "https://localhost:8080/"} id="EPewmcfSZGDJ" outputId="5153a04e-29c0-44a5-82d3-9e5c083eae89"
sarima_test_MAE=mean_absolute_error(test_data, predictions)
print('Mean Absolute Error: ', round(sarima_test_MAE,3))
# -
# ### Predictions and Actual Stock Price
test_set = data[int(len(data)*0.7):].index
plt.figure(figsize=(20,10))
plt.plot(test_set, predictions, color='green', marker='o', linestyle='dashed',label='Predicted Price')
plt.plot(test_set, test_data, color='red', label='Actual Price')
plt.title('Comparison of actual and predicted stock prices')
plt.xlabel('Day')
plt.ylabel('Prices')
plt.legend()
plt.show()
# # Comparison of all algorithms Results on R2 score
# +
x = PrettyTable()
print('\n')
print("Comparison of all algorithms")
x.field_names = ["Model", "R2 Score"]
x.add_row(["Arima Algorithm", round(arima_test_rs,3)])
x.add_row(["SARIMA Algorithm", round(sarima_test_rs,3)])
print(x)
print('\n')
# -
# # Comparison of all algorithms Results on MSE score
# +
x = PrettyTable()
print('\n')
print("Comparison of all algorithms")
x.field_names = ["Model", "MSE score"]
x.add_row(["Arima Algorithm", round(arima_test_mse,3)])
x.add_row(["SARIMA Algorithm", round(sarima_test_mse,3)])
print(x)
print('\n')
# -
# # Comparison of all algorithms Results on MAE score
# +
x = PrettyTable()
print('\n')
print("Comparison of all algorithms")
x.field_names = ["Model", "MAE score"]
x.add_row(["Arima Algorithm", round(arima_test_MAE,3)])
x.add_row(["SARIMA Algorithm", round(sarima_test_MAE,3)])
print(x)
print('\n')
# -
# # Graph of MSE of each algorithm
# +
Result_Comp = pd.DataFrame({'Algorithm':['Arima'], 'Mean Squared Error (MSE)': [arima_test_mse]})
Result_Comp1 = pd.DataFrame({'Algorithm':['SARIMA'], 'Mean Squared Error (MSE)': [sarima_test_mse]})
Result_Comp = pd.concat([Result_Comp, Result_Comp1])
Result_Comp.set_index("Algorithm",drop=True,inplace=True)
color = cm.inferno_r(np.linspace(.2, .4, 6))
Result_Comp.plot(kind='bar',figsize=(6, 4),stacked=True, color=color, legend=True)
# -
# # Graph of R2 of each algorithm
# +
Result_Comp = pd.DataFrame({'Algorithm':['Arima'], 'R Squared': [arima_test_rs]})
Result_Comp1 = pd.DataFrame({'Algorithm':['SARIMA'], 'R Squared': [sarima_test_rs]})
Result_Comp = pd.concat([Result_Comp, Result_Comp1])
Result_Comp.set_index("Algorithm",drop=True,inplace=True)
color = cm.inferno_r(np.linspace(0.8, 0.5, 2))
Result_Comp.plot(kind='bar', figsize=(6, 4),color=color)
# -
# # Graph of MAE of each algorithm
# +
Result_Comp = pd.DataFrame({'Algorithm':['Arima'],'mean absolute error (MAE)': [arima_test_MAE]})
Result_Comp1 = pd.DataFrame({'Algorithm':['SARIMA'], 'mean absolute error (MAE)': [sarima_test_MAE]})
Result_Comp = pd.concat([Result_Comp, Result_Comp1])
Result_Comp.set_index("Algorithm",drop=True,inplace=True)
color = cm.inferno_r(np.linspace(0.5, 0.2, 7))
Result_Comp.plot(kind='bar', figsize=(6, 4),color=color)
# -
# ### Now train SARIMA on all of the data, then do FORECASTING
Sarima = SARIMAX(data['Close'],order=(4,1,0),seasonal_order=(1,1,1,12),enforce_invertibility=False, enforce_stationarity=False)
Sarima = Sarima.fit()
# ### FORECASTING
predictions = Sarima.predict(start=len(data), end= len(data)+42, dynamic= True)
predictions
pred=pd.DataFrame(predictions)
pred=pred.rename(columns={'predicted_mean':'Forecasting'})
plt.figure(figsize=(20,10))
plt.plot(pred, color='purple', marker='o', linestyle='dashed',label='Forecasting')
plt.title('Forecasting of stock')
plt.xlabel('Day')
plt.ylabel('Prices')
plt.legend()
plt.show()
| Notebooks/4. Arima_Sarima Code/TSLA Code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
import networkx as nx
import numpy as np
from matplotlib import pyplot as plt
import collections
from scipy.optimize import curve_fit
df = pd.read_csv("correlations.dat", sep='\t', header=None)
df.head()
G=nx.Graph()
# +
#G.add_nodes_from(np.arange(np.max(df.loc[:,1])))
# -
for edge in df.loc[:,:].values:
#G.add_weighted_edges_from([(edge[0],edge[1],edge[2])])
if edge[2] > 0.01:
G.add_node(edge[0])
G.add_node(edge[1])
G.add_edge(edge[0],edge[1])
N = G.number_of_nodes()
N
L = G.number_of_edges()
L
N*(N-1)/2
options = {'node_color': 'orange', "edge_color": 'gray', "font_color": 'white', "font_family": "Helvetica", "font_size": 20, "font_weight": "bold", 'node_size': 50, 'width': 0.8, 'with_labels': False}
lay = nx.layout.spring_layout(G, k=0.8)
fig = plt.figure()
nx.draw(G, pos=lay, **options)
plt.show()
fig.savefig("graph.pdf")
degree_sequence = [d for n, d in G.degree()] # degree sequence
fig = plt.figure()
counts, bin_edges, _ = plt.hist(degree_sequence, density=True, histtype='step', bins=70)
x=np.logspace(1, 3)
plt.plot(x, 1./(x-0.1), 'g--')
plt.xscale('log')
plt.yscale('log')
plt.title("Degree Histogram")
plt.ylabel("P(k)")
plt.xlabel("Degree k")
plt.show()
degree_sequence = sorted([d for n, d in G.degree()], reverse=True) # degree sequence
#print "Degree sequence", degree_sequence
degreeCount = collections.Counter(degree_sequence)
deg, cnt = zip(*degreeCount.items())
fig, ax = plt.subplots()
plt.xscale('log')
plt.yscale('log')
norm = np.sum(cnt)
plt.scatter(deg, np.array(cnt,dtype=float)/norm, color='b', label='degree')
x=np.arange(1, np.max(degree_sequence))
#plt.plot(x, 1./(x-0.1), 'g--')
plt.show()
fig.savefig("degree_distribution.png")
bin_centres = (bin_edges[:-1] + bin_edges[1:])/2.
fig = plt.figure()
plt.xscale('log')
plt.yscale('log')
norm = np.sum(cnt)
plt.scatter(bin_centres, counts, color='b', label='degree')
x=np.arange(10, 900)
plt.plot(x, 1./x, 'g--')
plt.xlabel("degree", fontsize=16)
plt.show()
fig.savefig("degree_distribution.pdf")
def fitfunc(x, alpha, c):
return np.power(x,alpha)*(10**c)
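# A quick sanity check on synthetic data (not the network's degree counts): with a starting guess near the truth, `curve_fit` recovers the exponent of a noiseless power law written in the same form as `fitfunc` above.

```python
import numpy as np
from scipy.optimize import curve_fit

def fitfunc(x, alpha, c):
    # same functional form as in the notebook: x**alpha * 10**c
    return np.power(x, alpha) * (10 ** c)

x = np.logspace(0.5, 2.5, 30)
y = fitfunc(x, -1.5, 2.0)                       # known alpha = -1.5, c = 2
popt, pcov = curve_fit(fitfunc, x, y, p0=(-1.0, 1.0))
print(popt)
```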
counts, bin_edges, _ = plt.hist(deg, weights=cnt, histtype='step',bins=35, label='degree')
bin_centres = (bin_edges[:-1] + bin_edges[1:])/2.
fig = plt.figure()
plt.scatter(bin_centres, counts, color='b', label='degree')
x=np.arange(5, np.max(degree_sequence))
plt.plot(x, 1./(x-0.01)*(10**3), 'g--', label='$r^{-1}$')
popt, pcov = curve_fit(fitfunc, bin_centres[:20], counts[:20])
plt.plot(x, fitfunc(x, *popt), 'r', label='fit of $C r^{-\\alpha}$')
plt.xscale('log')
plt.yscale('log')
plt.title("Degree Histogram")
plt.ylabel("P(k)")
plt.xlabel("Degree k")
plt.legend()
plt.show()
popt
fig.savefig("degree_distribution.pdf")
| Network_correlations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from ImageUtils import ImageUtils
import os
import numpy as np
import matplotlib.pyplot as plt
import pickle
from pandas import read_excel
sourceFolder = 'D:/jupyter/car-management/api/rimages'
targetFolder = 'D:/jupyter/car-management/api/rimagesL'
root = '.'
Xfilename = 'licenseplateL.npy'
Yfilename = 'label.npy'
filename = '字典.xlsx'
imageUtils = ImageUtils()
imageUtils.processImage(sourceFolder, targetFolder)
X = imageUtils.readXData(targetFolder)
Y = imageUtils.readYData(root, targetFolder, filename)
imageUtils.save(root, Xfilename, Yfilename, X, Y)
imageUtils = ImageUtils()
X, Y = imageUtils.load(root, Xfilename, Yfilename)
print(X.shape, Y.shape)
print(imageUtils.getLabel(root, filename, Y[10]))
plt.imshow(X[10])
plt.imshow(X[5])
print(Y[1])
np.argmax(Y[1])
| recognitionalgorithm/.ipynb_checkpoints/processImage-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Coal production in mines 2013
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import explained_variance_score, r2_score, mean_squared_error
sns.set();
# -
# ## Cleaned data
#
# We cleaned this data in the notebook stored in deliver/Data_cleaning.ipynb
df = pd.read_csv("../data/cleaned_coalpublic2013.csv", index_col="MSHA ID")
df[['Year','Mine_Name']].head()
# # Predict the Production of coal mines
# +
features = [
'Average_Employees',
'Labor_Hours'
]
categoricals=[
'Mine_State',
'Mine_County',
'Mine_Status',
'Mine_Type',
'Company_Type',
'Operation_Type',
'Union_Code',
'Coal_Supply_Region'
]
target = 'log_production'
# +
sns.set_context('poster')
fig = plt.subplots(figsize=(14,8))
sns.violinplot(y='Mine_Status', x='log_production', data=df, inner='stick')
plt.tight_layout()
plt.savefig("../figures/Coal_prediction_company_type_vs_log_production.png")
# -
pd.get_dummies(df['Company_Type']).sample(50).head()
dummy_categoricals= []
for categorical in categoricals:
# Avoid the dummy variable trap!
drop_var=sorted(df[categorical].unique())[-1]
temp_df=pd.get_dummies(df[categorical],prefix=categorical)
df = pd.concat([df, temp_df], axis=1)
temp_df.drop('_'.join([categorical, str(drop_var)]), axis=1, inplace=True)
dummy_categoricals +=temp_df.columns.tolist()
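# A toy sketch (made-up values, not the coal dataset) of the encoding done above: one-hot encode a categorical and drop one level so the dummies are not collinear with an intercept (the "dummy variable trap"). The loop above drops the last sorted level by hand; pandas' built-in `drop_first=True` drops the first level instead.

```python
import pandas as pd

toy = pd.DataFrame({"Mine_Type": ["Surface", "Underground", "Surface", "Facility"]})
# 3 categories -> 2 dummy columns after dropping one level
dummies = pd.get_dummies(toy["Mine_Type"], prefix="Mine_Type", drop_first=True)
print(dummies.columns.tolist())
```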
# # Random Forest Regressor
train, test=train_test_split(df, test_size=0.3)
rf=RandomForestRegressor(n_estimators=100,oob_score=True)
rf.fit(train[features+dummy_categoricals], train[target])
sns.set_context('poster')
fig=plt.subplots(figsize=(8,8))
sns.regplot(test[target], rf.predict(test[features+dummy_categoricals]))
plt.xlim(0,22)
plt.ylim(0,22)
plt.ylabel('Predicted')
plt.tight_layout()
plt.savefig("../figures/Coal-production-RF-prediction.png")
predicted=rf.predict(test[features+dummy_categoricals])
print ("R^2 score:", r2_score(test[target], predicted))
print ("MSE:", mean_squared_error(test[target], predicted))
# +
rf_importances=pd.DataFrame({'name': train[features + dummy_categoricals].columns,
'importance':rf.feature_importances_
}).sort_values(by='importance', ascending=False).reset_index(drop=True)
rf_importances.head(5)
# -
# # Conclusion
| deliver/Coal prediction of production.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
submissions_df = pd.read_csv("/Users/frankkelly/Downloads/collated - Tabellenblatt1.csv")
submissions_df.head()
submissions_df = submissions_df.loc[1:,:]
submissions_df.head()
submissions_df.columns
grade_df = submissions_df[["ID", "Rating", "Grade", "Rating: [a, b, c, d]"]]
grade_df.head()
pure_cool_df = submissions_df[["Coolness, attractiveness", "Coolness", "Coolness [+, ++, +++]"]]
pure_cool_df.head()
pure_cool_df
cool_dict = {}
cool_dict["+"] = 1
cool_dict["++"] = 2
cool_dict["+++"] = 3
cool_dict
# +
# def cool_series_convert(series_in):
# return [cool_dict[x] if x in cool_dict.keys() else np.nan for x in series_in.values]
# -
def get_plus(series_in):
return [len(''.join([x for x in val if x == '+'])) if val is not np.nan else np.nan for val in series_in.values ]
pure_cool_df.iloc[1]
get_plus(pure_cool_df.iloc[1])
cool_df = pure_cool_df.apply(lambda x: get_plus(x), axis=1)
cool_df.head()
pure_grade_df = grade_df[["Rating", "Grade", "Rating: [a, b, c, d]"]].apply(lambda x: x.str.lower(), axis=1)
pure_grade_df.head()
scoring_dict = {}
scoring_dict["a"] = 4
scoring_dict["b"] = 3
scoring_dict["c"] = 2
scoring_dict["d"] = 1
scoring_dict
# +
# def series_convert(series_in):
# list_out = []
# for x in series_in:
# if x is not np.nan:
# list_out.append(scoring_dict[y])
# else:
# list_out.append(0)
# return list_out
def series_convert(series_in):
return [scoring_dict[x] if x in scoring_dict.keys() else np.nan for x in series_in.values]
# -
pure_grade_df.iloc[0]
pure_grade_numerical_df = pure_grade_df.apply(lambda x: series_convert(x), axis=1)
grade_column = pure_grade_numerical_df.mean(axis=1)
print(grade_column[:5])
coolness_column = cool_df.mean(axis=1)
score_column = (grade_column + coolness_column)/2
score_column
submissions_df.columns
def remove_nan(list_in):
list_listin = list(list_in)
for item in list_listin:
if item is np.nan:
list_listin.remove(item)
return list_listin
remove_nan(['no', np.nan, np.nan])
mode = lambda x: x.str.lower().mode()[0] if len(x) > 2 else str(x.values)
# +
category_column = submissions_df[['Category', 'Category.1', 'cat']].apply(lambda x:str(x.values), axis=1)
level_column = submissions_df[['Level', 'Level.1', 'level']].apply(mode, axis=1)
pycon_column = submissions_df[['pycon', 'Suggest to Pycon', 'Suggest to Pycon: [yes, no]']]\
.apply(mode, axis=1)
long_slot_column = submissions_df[['long slot', 'Long Slot', 'Long slot: [yes, no]']]\
.apply(mode, axis=1)
print(long_slot_column[:5])
print(pycon_column[:5])
print(level_column[:5])
# -
pd.Series(['beginner', 'Beginner', np.nan]).str.lower().mode()
final_df = pd.concat([submissions_df[["ID"]], category_column, \
level_column, score_column, grade_column, \
coolness_column, pycon_column, long_slot_column], axis=1)
final_df.columns=["ID", "category", "level", "score", "grade", "coolness", "pycon", "long-slot"]
final_df.head()
top50_df = final_df.sort_values(by="score", ascending=False).head(50)
top50_df
top50_df.to_csv("../data/top50entries.csv")
top50_df.level.value_counts()
| notebooks/.ipynb_checkpoints.bck/CFP-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Distributed Tracing Template
#
# Illustrate the configuration for allowing distributed tracing using Jaeger.
#
#
# ## Setup Seldon Core
#
# Install Seldon Core as described in [docs](https://docs.seldon.io/projects/seldon-core/en/latest/workflow/install.html)
#
# Then port-forward to that ingress on localhost:8003 in a separate terminal either with:
#
# * Ambassador:
#
# ```kubectl port-forward $(kubectl get pods -n seldon -l app.kubernetes.io/name=ambassador -o jsonpath='{.items[0].metadata.name}') -n seldon 8003:8080```
#
# * Istio:
#
# ```kubectl port-forward $(kubectl get pods -l istio=ingressgateway -n istio-system -o jsonpath='{.items[0].metadata.name}') -n istio-system 8003:80```
# !kubectl create namespace seldon
# !kubectl config set-context $(kubectl config current-context) --namespace=seldon
# ## Install Jaeger
#
# Follow the Jaeger docs to [install on Kubernetes](https://www.jaegertracing.io/docs/1.18/operator/).
# !kubectl create namespace observability
# !kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml
# !kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
# !kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
# !kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
# !kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml
# !pygmentize simplest.yaml
# !kubectl apply -f simplest.yaml
# Port forward to Jaeger UI
#
# ```bash
# kubectl port-forward $(kubectl get pods -l app.kubernetes.io/name=simplest -n seldon -o jsonpath='{.items[0].metadata.name}') 16686:16686 -n seldon
# ```
# ## Run Example REST Deployment
# !pygmentize deployment_rest.yaml
# !kubectl create -f deployment_rest.yaml
# !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=tracing-example -o jsonpath='{.items[0].metadata.name}')
# !curl -d '{"data": {"ndarray":[[1.0, 2.0, 5.0]]}}' \
# -X POST http://localhost:8003/seldon/seldon/tracing-example/api/v1.0/predictions \
# -H "Content-Type: application/json"
# Check the Jaeger UI. You should be able to find traces like below:
#
# 
# !kubectl delete -f deployment_rest.yaml
# ## Run Example GRPC Deployment
# !pygmentize deployment_grpc.yaml
# !kubectl create -f deployment_grpc.yaml
# !kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=tracing-example -o jsonpath='{.items[0].metadata.name}')
# !cd ../../../executor/proto && grpcurl -d '{"data":{"ndarray":[[1.0,2.0]]}}' \
# -rpc-header seldon:tracing-example -rpc-header namespace:seldon \
# -plaintext \
# -proto ./prediction.proto 0.0.0.0:8003 seldon.protos.Seldon/Predict
# Check the Jaeger UI. You should be able to find traces like below:
#
#
# 
# !kubectl delete -f deployment_grpc.yaml
# !kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/crds/jaegertracing.io_jaegers_crd.yaml
# !kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/service_account.yaml
# !kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role.yaml
# !kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/role_binding.yaml
# !kubectl delete -f https://raw.githubusercontent.com/jaegertracing/jaeger-operator/master/deploy/operator.yaml
# !kubectl delete namespace observability
| examples/models/tracing/tracing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ### MNIST neural network from scratch
# #### Fully Connected Layer (Linear Layer)
# +
import numpy as np
class Linear():
def __init__(self, in_size, out_size):
self.W = np.random.randn(in_size, out_size) * 0.01
self.b = np.zeros((1, out_size))
self.params = [self.W, self.b]
self.gradW = None
self.gradB = None
self.gradInput = None
def forward(self, X):
self.X = X
self.output = np.dot(X, self.W) + self.b
return self.output
def backward(self, nextgrad):
self.gradW = np.dot(self.X.T, nextgrad)
self.gradB = np.sum(nextgrad, axis=0)
self.gradInput = np.dot(nextgrad, self.W.T)
return self.gradInput, [self.gradW, self.gradB]
# -
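# A hedged, self-contained sketch (toy arrays, not the notebook's layer objects): verify the backward formula gradW = X^T @ nextgrad with a finite difference on the scalar loss L = sum(X @ W + b). Because L is linear in W, the numerical and analytic gradients agree to floating-point precision.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(4, 3)
W = rng.randn(3, 2)
b = np.zeros((1, 2))

def loss(W_):
    # scalar loss over the linear layer's output
    return np.sum(np.dot(X, W_) + b)

# With L = sum(out), the upstream gradient is all ones
nextgrad = np.ones((4, 2))
gradW = np.dot(X.T, nextgrad)

# Finite-difference check on one entry of W
eps = 1e-6
W_pert = W.copy()
W_pert[0, 0] += eps
numeric = (loss(W_pert) - loss(W)) / eps
print(numeric, gradW[0, 0])
```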
# #### Rectified Linear Activation Layer (ReLU)
#
class ReLU():
def __init__(self):
self.params = []
self.gradInput = None
def forward(self, X):
self.output = np.maximum(X, 0)
return self.output
def backward(self, nextgrad):
self.gradInput = nextgrad.copy()
self.gradInput[self.output <=0] = 0
return self.gradInput, []
# #### Defining the softmax function
def softmax(x):
exp_x = np.exp(x - np.max(x, axis=1, keepdims=True))
return exp_x / np.sum(exp_x, axis=1, keepdims=True)
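# A small standalone sketch of why the row maximum is subtracted before exponentiating: a naive exp(1000.0) overflows to inf (giving nan probabilities), while the shifted version is mathematically identical and stays finite.

```python
import numpy as np

def softmax(x):
    # shift by the row max so the largest exponent is exp(0) = 1
    exp_x = np.exp(x - np.max(x, axis=1, keepdims=True))
    return exp_x / np.sum(exp_x, axis=1, keepdims=True)

p = softmax(np.array([[1000.0, 1001.0, 1002.0]]))
print(p)  # finite probabilities summing to 1
```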
# #### Defining the Cross Entropy Loss
class CrossEntropy:
def forward(self, X, y):
self.m = y.shape[0]
self.p = softmax(X)
cross_entropy = -np.log(self.p[range(self.m), y]+1e-16)
loss = np.sum(cross_entropy) / self.m
return loss
def backward(self, X, y):
grad = softmax(X)
grad[range(self.m), y] -= 1
grad /= self.m
return grad
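# A toy check of the loss definition above: with the true class at index 3, a uniform prediction over 10 classes costs -log(0.1) = log(10) ≈ 2.303, while a confident correct prediction costs close to 0.

```python
import numpy as np

# uniform prediction over 10 classes, true class = 3
p_uniform = np.full(10, 0.1)
ce_uniform = -np.log(p_uniform[3])

# confident correct prediction
p_confident = 0.99
ce_confident = -np.log(p_confident)

print(ce_uniform, ce_confident)
```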
# #### Loading the MNIST dataset
# +
from keras.datasets import mnist
from keras.utils import np_utils
(train_features, train_targets), (test_features, test_targets) = mnist.load_data()
train_features = train_features.reshape(60000, 784)
print train_features.shape
test_features = test_features.reshape(10000, 784)
print test_features.shape
# # normalize inputs from 0-255 to 0-1
train_features = train_features / 255.0
test_features = test_features / 255.0
print train_targets.shape
print test_targets.shape
X_train = train_features
y_train = train_targets
X_val = test_features
y_val = test_targets
# -
# visualizing the first 10 images in the dataset and their labels
# %matplotlib inline
import matplotlib.pyplot as plt
plt.figure(figsize=(10, 1))
for i in range(10):
plt.subplot(1, 10, i+1)
plt.imshow(X_train[i].reshape(28, 28), cmap="gray")
plt.axis('off')
plt.show()
print('label for each of the above image: %s' % (y_train[0:10]))
# #### Here we define the container NN class that performs forward and backward propagation for the entire network. Note how this class lets us add layers of different types and correctly pass gradients using the chain rule.
class NN():
def __init__(self, lossfunc=CrossEntropy(), mode='train'):
self.params = []
self.layers = []
self.loss_func = lossfunc
self.grads = []
self.mode = mode
def add_layer(self, layer):
self.layers.append(layer)
self.params.append(layer.params)
def forward(self, X):
for layer in self.layers:
X = layer.forward(X)
return X
def backward(self, nextgrad):
self.clear_grad_param()
for layer in reversed(self.layers):
nextgrad, grad = layer.backward(nextgrad)
self.grads.append(grad)
return self.grads
def train_step(self, X, y):
out = self.forward(X)
loss = self.loss_func.forward(out,y)
nextgrad = self.loss_func.backward(out,y)
grads = self.backward(nextgrad)
return loss, grads
def predict(self, X):
X = self.forward(X)
p = softmax(X)
return np.argmax(p, axis=1)
def predict_scores(self, X):
X = self.forward(X)
p = softmax(X)
return p
def clear_grad_param(self):
self.grads = []
# #### Defining the update function (SGD with momentum)
def update_params(velocity, params, grads, learning_rate=0.01, mu=0.9):
for v, p, g, in zip(velocity, params, reversed(grads)):
for i in range(len(g)):
v[i] = mu * v[i] - learning_rate * g[i]
p[i] += v[i]
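# A standalone sketch of the same update rule (v = mu*v - lr*grad; p += v) minimising the 1-D quadratic f(p) = p**2: the momentum term damps oscillation and the iterate converges to the minimum at 0.

```python
mu, lr = 0.9, 0.1
p, v = 5.0, 0.0
for _ in range(300):
    grad = 2.0 * p            # f'(p) = 2p
    v = mu * v - lr * grad    # velocity update
    p += v                    # parameter update
print(p)
```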
# #### Defining a function which gives us the minibatches (both the datapoint and the corresponding label)
# get minibatches
def minibatch(X, y, minibatch_size):
n = X.shape[0]
minibatches = []
permutation = np.random.permutation(X.shape[0])
X = X[permutation]
y = y[permutation]
for i in range(0, n , minibatch_size):
X_batch = X[i:i + minibatch_size, :]
y_batch = y[i:i + minibatch_size, ]
minibatches.append((X_batch, y_batch))
return minibatches
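# A sketch of the shuffling pattern above on toy arrays: applying the same permutation to X and y keeps each row aligned with its label.

```python
import numpy as np

X = np.arange(10).reshape(5, 2)   # row i is [2i, 2i+1]
y = np.arange(5)                  # label of row i is i
perm = np.random.permutation(5)
Xs, ys = X[perm], y[perm]
# each shuffled row still matches its label
print(Xs, ys)
```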
# #### The training loop
def train(net, X_train, y_train, minibatch_size, epoch, learning_rate, mu=0.9, X_val=None, y_val=None):
val_loss_epoch = []
minibatches = minibatch(X_train, y_train, minibatch_size)
minibatches_val = minibatch(X_val, y_val, minibatch_size)
for i in range(epoch):
loss_batch = []
val_loss_batch = []
velocity = []
for param_layer in net.params:
p = [np.zeros_like(param) for param in list(param_layer)]
velocity.append(p)
# iterate over mini batches
for X_mini, y_mini in minibatches:
loss, grads = net.train_step(X_mini, y_mini)
loss_batch.append(loss)
update_params(velocity, net.params, grads, learning_rate=learning_rate, mu=mu)
for X_mini_val, y_mini_val in minibatches_val:
val_loss, _ = net.train_step(X_mini_val, y_mini_val)
val_loss_batch.append(val_loss)
# accuracy of model at end of epoch after all mini batch updates
m_train = X_train.shape[0]
m_val = X_val.shape[0]
y_train_pred = np.array([], dtype="int64")
y_val_pred = np.array([], dtype="int64")
y_train1 = []
y_vall = []
for i in range(0, m_train, minibatch_size):
X_tr = X_train[i:i + minibatch_size, : ]
y_tr = y_train[i:i + minibatch_size,]
y_train1 = np.append(y_train1, y_tr)
y_train_pred = np.append(y_train_pred, net.predict(X_tr))
for i in range(0, m_val, minibatch_size):
X_va = X_val[i:i + minibatch_size, : ]
y_va = y_val[i:i + minibatch_size,]
y_vall = np.append(y_vall, y_va)
y_val_pred = np.append(y_val_pred, net.predict(X_va))
train_acc = check_accuracy(y_train1, y_train_pred)
val_acc = check_accuracy(y_vall, y_val_pred)
mean_train_loss = sum(loss_batch) / float(len(loss_batch))
mean_val_loss = sum(val_loss_batch) / float(len(val_loss_batch))
val_loss_epoch.append(mean_val_loss)
print("Loss = {0} | Training Accuracy = {1} | Val Loss = {2} | Val Accuracy = {3}".format(mean_train_loss, train_acc, mean_val_loss, val_acc))
return net
# #### Checking the accuracy of the model
def check_accuracy(y_true, y_pred):
return np.mean(y_pred == y_true)
# #### Invoking all that we have created until now
# +
from random import shuffle
## input size
input_dim = X_train.shape[1]
## hyperparameters
iterations = 10
learning_rate = 1e-4
hidden_nodes = 32
output_nodes = 10
## define neural net
nn = NN()
nn.add_layer(Linear(input_dim, hidden_nodes))
nn.add_layer(ReLU())
nn.add_layer(Linear(hidden_nodes, output_nodes))
nn = train(nn, X_train , y_train, minibatch_size=200, epoch=10, \
learning_rate=learning_rate, X_val=X_val, y_val=y_val)
# -
# #### Forward-prop a single image and show its prediction
plt.imshow(X_val[0].reshape(28,28), cmap='gray')
# Predict Scores for each class
prediction = nn.predict_scores(X_val[0])[0]
print "Scores"
print prediction
np.argmax(prediction)
predict_class = nn.predict(X_val[0])[0]
predict_class
# Original class
y_val[0]
| CourseContent/11-Introduction.to.Neural.Network.and.Deep.Learning/Week2/MNIST Python Neural Network_Final.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
la = pd.read_csv('loan_approval.csv')
df = pd.DataFrame(la)
print(df)
# # Graph of Count vs Loan Status
df['Loan_Status'].value_counts().plot(kind='bar')
# # Graph of Count vs Property Area & Loan Status
df.groupby(['Property_Area', 'Loan_Status']).size().unstack().plot(kind='bar')
# # Graph of Count vs Education & Loan Status
df.groupby(['Education', 'Loan_Status']).size().unstack().plot(kind='bar')
# # Graph of Density vs Loan Amount
fig, (ax_1, ax_2) = plt.subplots(1,2, figsize=(20,5))
df[df['Education'] == 'Graduate']['LoanAmount'].plot(kind='density', ax=ax_1)
df[df['Education'] == 'Not Graduate']['LoanAmount'].plot(kind='density', ax=ax_2)
ax_1.set(title='For Graduated People', xlabel='Loan Amount')
ax_2.set(title='For Not Graduated People', xlabel='Loan Amount')
# # Scatter Plot of Loan Amount vs Income
fig, (ax_1, ax_2, ax_3) = plt.subplots(3,1, figsize=(10,20))
df.plot.scatter(x='ApplicantIncome', y='LoanAmount', ax=ax_1)
df.plot.scatter(x='CoapplicantIncome', y='LoanAmount', ax=ax_2)
df['TotalIncome'] = df['ApplicantIncome'] + df['CoapplicantIncome']
df.plot.scatter(x='TotalIncome', y='LoanAmount', ax=ax_3)
df.drop('TotalIncome', axis=1, inplace=True)
| Loan_Approval_Analysis/visualization.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: data_template
# language: python
# name: data_template
# ---
import numpy as np
import pandas as pd
import seaborn as sns
# Read in data
df = pd.read_csv(
"../../data/brownfields_data_with_county_geoid/brownfields_data_with_county_geoid.csv"
)
# +
assessment = "Phase II Environmental Assessment"
# Subset to data with complete assessment info
df = df[
(df["Type of Brownfields Grant"] == "Assessment") &
(df["Amt of Assessment Funding"] > 0) &
(df["Assessment Phase"] == assessment) &
(df["Assessment Completion Date"] < "2021-03-01")
]
# +
past_use_cols = [
"Past Use: Greenspace (arces)",
"Past Use: Residential (arces)",
"Past Use: Commercial (arces)",
"Past Use: Industrial (arces)",
]
assessment_cols = [
"ACRES Property ID",
"Assessment Phase",
"Assessment Start Date",
"Assessment Completion Date",
"Source of Assessment Funding",
"Entity Providing Assmnt Funds",
"Amt of Assessment Funding",
]
# Find the "pure usage" properties, i.e., the properties that ONLY
# had greenspace, or only had residential, or only had commericial,
# or only had industrial.
# Compare the assessment cost of greenspace-only properties to
# residential-only, commercial-only and industrial-only.
pure_dfs = []
for use in past_use_cols:
# Build pure_df, a dataset of grants for "pure usage" properties
pure_df = df[(df[use] > 0)]
other_uses = [x for x in past_use_cols if x != use]
for x in other_uses:
pure_df = pure_df[(pure_df[x] == 0) | pd.isna(pure_df[x])]
# Drop duplicate grant & assessment info
pure_df = pure_df.drop_duplicates(subset=[use]+assessment_cols)
# For a given property, add up assessment funding from all grants
pure_df = pure_df.groupby(["ACRES Property ID"]) \
.agg({use: "max", "Amt of Assessment Funding": "sum"}).reset_index()
# Clean up columns
pure_df["Past Use"] = use.split(" ")[2]
pure_df["log(Assessment Cost)"] = pure_df["Amt of Assessment Funding"] \
.apply(lambda x: np.log(x))
pure_df["Assessment Cost"] = pure_df["Amt of Assessment Funding"]
# Add pure_df to a growing list of pure_df's
pure_dfs.append(
pure_df[["Past Use", "log(Assessment Cost)", "Assessment Cost"]]
)
# Combine the pure_df's into a single dataframe
plot_df = pd.concat(pure_dfs)
# Boxplot of assessment costs
sns.boxplot(x="Past Use", y="log(Assessment Cost)", data=plot_df) \
.set_title(f"Costs for {assessment}")
# Table of assessment costs
plot_df.groupby("Past Use")["Assessment Cost"].describe().reset_index()
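# A toy illustration of the "pure usage" filter above (made-up columns, not the brownfields schema): keep rows where the chosen use is positive and every other use is zero or missing.

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({
    "Greenspace": [2.0, 0.0, 1.5, np.nan],
    "Industrial": [0.0, 3.0, 1.0, 0.0],
})
use = "Greenspace"
# chosen use must be positive...
pure = toy[toy[use] > 0]
# ...and every other use must be zero or missing
for other in ["Industrial"]:
    pure = pure[(pure[other] == 0) | pd.isna(pure[other])]
print(pure.index.tolist())
```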
| scripts/rebecca-burwei/compute_assessment_cost_by_past_use.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
# # Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func, inspect, desc
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# +
# We can view all of the classes that automap found
Base.classes.keys()
inspector = inspect(engine)
inspector.get_table_names()
# -
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
# Get the last date of entry
last_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
last_date
# # Exploratory Climate Analysis
# +
# Design a query to retrieve the last 12 months of precipitation data and plot the results
meas_columns = inspector.get_columns('measurement')
for column in meas_columns:
print(column["name"], column["type"])
# Calculate the date 1 year ago from the last data point in the database
# Perform a query to retrieve the data and precipitation scores
meas_query = engine.execute('SELECT date , prcp FROM measurement').fetchall()
# Save the query results as a Pandas DataFrame and set the index to the date column
measurement_df = pd.DataFrame(meas_query)
measurement_df = measurement_df.rename(columns={0: 'date', 1: 'prcp'})
measurement_df['date'] = pd.to_datetime(measurement_df['date'], format='%Y-%m-%d')
measurement_df.head()
mask = (measurement_df['date'] > '2016-08-22') & (measurement_df['date'] <= '2017-08-23')
measurement_df_1year = measurement_df.loc[mask]
measurement_df_1year = measurement_df_1year.set_index(['date'])
# Sort the dataframe by date
measurement_df_1year.sort_index()
measurement_df_1year.head()
# measurement_df_1year.count()
# -
# Use Pandas Plotting with Matplotlib to plot the data
measurement_df_1year.plot()
plt.legend(loc=9)
plt.savefig("precipitation_analysis.png")
plt.show()
# 
# +
# Use Pandas to calculate the summary statistics for the precipitation data
measurement_df_1year.describe()
# -
# 
# +
# Design a query to show how many stations are available in this dataset?
stat_query = engine.execute('SELECT COUNT(station) FROM station').fetchall()
station_count = stat_query[0][0]
print(f'There are {station_count} stations in the dataset.')
# +
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
# Set above query results to dataframe
active_stations_descending = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).order_by(func.count(Measurement.station).desc()).all()
df_active_stations_descending = pd.DataFrame(data=active_stations_descending, columns=['Station', 'Count'])
print(f"Most Active Stations")
df_active_stations_descending.head()
# +
# Set station with highest number of observations to a variable
station_with_most_observations = df_active_stations_descending["Station"][0]
most_observations = df_active_stations_descending["Count"][0]
print(f"Station with most observations ({most_observations}): {station_with_most_observations}")
# +
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
print(f"Most Active Station Temperatures")
USC00519281_query = engine.execute('SELECT MIN(tobs), MAX(tobs), AVG(tobs) FROM measurement WHERE station = "USC00519281"').fetchall()
USC00519281_stats = USC00519281_query[0]
USC00519281_stats
# +
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
USC00519281_plotquery = engine.execute('SELECT date , tobs FROM measurement WHERE station = "USC00519281" AND date > "2016-08-22"').fetchall()
USC00519281_plotquery
USC00519281_df = pd.DataFrame(USC00519281_plotquery)
USC00519281_df = USC00519281_df.rename(columns={0: 'date', 1: 'tobs'})
USC00519281_df.head()
# +
USC00519281_df.plot.hist(bins = 12, alpha=.9)
#plt.xticks([])
#plt.tight_layout()
#plt.show()
# -
# 
# +
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# -
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
print(calc_temps('2012-02-28', '2012-03-05'))
# +
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
trip_stats = calc_temps('2012-02-28', '2012-03-05')
trip_yaxis = trip_stats[0][1]
trip_min = trip_stats[0][0]
trip_max = trip_stats[0][2]
error = [trip_max - trip_min]
plt.bar("temp", trip_yaxis, alpha=.9, align = "center", yerr = error, width=.9)
plt.title('Trip Avg Temp')
plt.ylabel('Temp (F)')
plt.yticks(np.arange(0, 150, 20))
# -
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
prcptrip_query = engine.execute('SELECT measurement.station, name, latitude, longitude, elevation, SUM(prcp) FROM measurement \
LEFT JOIN station ON measurement.station = station.station \
WHERE date BETWEEN "2012-02-28" AND "2012-03-05" GROUP BY measurement.station ORDER BY SUM(prcp) DESC').fetchall()
prcptrip_query
# ## Optional Challenge Assignment
# +
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# +
# calculate the daily normals for your trip
# push each tuple of calculations into a list called `normals`
# Set the start and end date of the trip
# Use the start and end date to create a range of dates
# Strip off the year and save a list of %m-%d strings
# Loop through the list of %m-%d strings and calculate the normals for each date
# -
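One possible sketch of the steps listed above. The trip dates here are hypothetical, and the `daily_normals` session query defined earlier is stubbed out so the date handling can run on its own:

```python
import pandas as pd

# Stand-in for the daily_normals() session query defined above
def daily_normals(date):
    return [(60.0, 70.0, 80.0)]

# Set the start and end date of the trip (hypothetical dates)
trip_dates = pd.date_range(start="2018-01-01", end="2018-01-07", freq="D")

# Strip off the year and save a list of %m-%d strings
month_day = [d.strftime("%m-%d") for d in trip_dates]

# Loop through the %m-%d strings and push each tuple into `normals`
normals = [daily_normals(md)[0] for md in month_day]
print(month_day[0], normals[0])
```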
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
# Plot the daily normals as an area plot with `stacked=False`
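A minimal sketch of this final step, with made-up normals so it stands alone; in the assignment, `normals` and `trip_dates` come from the loop described above:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import pandas as pd

normals = [(60.0, 69.5, 77.0), (61.0, 70.0, 78.0)]  # made-up values
trip_dates = pd.date_range("2018-01-01", periods=len(normals), freq="D")

# Load the results into a DataFrame and add `trip_dates` as the `date` index
normals_df = pd.DataFrame(normals, columns=["tmin", "tavg", "tmax"])
normals_df["date"] = trip_dates
normals_df = normals_df.set_index("date")

# Plot the daily normals as an area plot with stacked=False
normals_df.plot.area(stacked=False, alpha=0.3)
```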
| climate_starter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# <!-- ---------------------------------------------------- -->
# <div class="col-sm-3 col-md-3 col-lg-3">
# <!-- logo -->
# <div class="img-responsive">
# <img src="https://www.dropbox.com/s/220ncn0o5danuey/pandas-ipython-tutorials-hedaro.jpg?dl=1" title="Pandas Tutorial | Hedaro" alt="Pandas Tutorial | Hedaro">
# </div>
# <!-- logo -->
# </div>
# <!-- ---------------------------------------------------- -->
# <div class="col-sm-6 col-md-6 col-lg-6">
# <!-- Pandas Tutorial -->
# <center>
# <br>
# <h1>Lesson 3</h1>
# <br>
# <br>
# <strong>These tutorials are also available through an email course, please visit </strong><a href="http://www.hedaro.com/pandas-tutorial" target="_blank"><strong>http://www.hedaro.com/pandas-tutorial</strong></a> <strong>to sign up today.</strong>
# </center>
# <!-- Pandas Tutorial -->
# </div>
# <!-- ---------------------------------------------------- -->
# **Get Data** - Our data set will consist of an Excel file containing customer counts per date. We will learn how to read in the Excel file for processing.
# **Prepare Data** - The data is an irregular time series with duplicate dates. We will be challenged to compress the data and come up with next year's forecasted customer count.
# **Analyze Data** - We use graphs to visualize trends and spot outliers. Some built-in computational tools will be used to calculate next year's forecasted customer count.
# **Present Data** - The results will be plotted.
#
# ***NOTE:
# Make sure you have looked through all previous lessons, as the knowledge learned in previous lessons will be
# needed for this exercise.***
# +
# Import libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy.random as np
import sys
import matplotlib
# %matplotlib inline
# -
print('Python version ' + sys.version)
print('Pandas version: ' + pd.__version__)
print('Matplotlib version ' + matplotlib.__version__)
# > We will be creating our own test data for analysis.
# +
# set seed
np.seed(111)
# Function to generate test data
def CreateDataSet(Number=1):
Output = []
for i in range(Number):
# Create a weekly (mondays) date range
rng = pd.date_range(start='1/1/2009', end='12/31/2012', freq='W-MON')
# Create random data
data = np.randint(low=25,high=1000,size=len(rng))
# Status pool
status = [1,2,3]
# Make a random list of statuses
random_status = [status[np.randint(low=0,high=len(status))] for i in range(len(rng))]
# State pool
states = ['GA','FL','fl','NY','NJ','TX']
# Make a random list of states
random_states = [states[np.randint(low=0,high=len(states))] for i in range(len(rng))]
Output.extend(zip(random_states, random_status, data, rng))
return Output
# -
# Now that we have a function to generate our test data, let's create some data and stick it into a dataframe.
dataset = CreateDataSet(4)
df = pd.DataFrame(data=dataset, columns=['State','Status','CustomerCount','StatusDate'])
df.info()
df.head()
# We are now going to save this dataframe into an Excel file, to then bring it back to a dataframe. We simply do this to show you how to read and write to Excel files.
#
# We do not write the index values of the dataframe to the Excel file, since they are not meant to be part of our initial test data set.
# Save results to excel
df.to_excel('Lesson3.xlsx', index=False)
print('Done')
# # Grab Data from Excel
#
# We will be using the ***read_excel*** function to read in data from an Excel file. The function allows you to read in specific tabs by name or location.
# +
# pd.read_excel?
# -
# **Note: The location on the Excel file will be in the same folder as the notebook, unless specified otherwise.**
# +
# Location of file
Location = r'C:\Users\david\notebooks\update\Lesson3.xlsx'
# Parse a specific sheet
df = pd.read_excel(Location, 0, index_col='StatusDate')
df.dtypes
# -
df.index
df.head()
# # Prepare Data
#
# This section attempts to clean up the data for analysis.
# 1. Make sure the state column is all in upper case
# 2. Only select records where the account status is equal to "1"
# 3. Merge (NJ and NY) to NY in the state column
# 4. Remove any outliers (any odd results in the data set)
#
# Let's take a quick look at how some of the *State* values are upper case and some are lower case
df['State'].unique()
# To convert all the State values to upper case we will use the ***upper()*** function and the dataframe's ***apply*** attribute. The ***lambda*** function simply will apply the upper function to each value in the *State* column.
# Clean State Column, convert to upper case
df['State'] = df.State.apply(lambda x: x.upper())
df['State'].unique()
# Only grab where Status == 1
mask = df['Status'] == 1
df = df[mask]
# To turn the ***NJ*** states to ***NY*** we simply...
#
# ***[df.State == 'NJ']*** - Find all records in the *State* column where they are equal to *NJ*.
# ***df.State[df.State == 'NJ'] = 'NY'*** - For all records in the *State* column where they are equal to *NJ*, replace them with *NY*.
# Convert NJ to NY
mask = df.State == 'NJ'
df['State'][mask] = 'NY'
# Now we can see we have a much cleaner data set to work with.
df['State'].unique()
# At this point we may want to graph the data to check for any outliers or inconsistencies in the data. We will be using the ***plot()*** attribute of the dataframe.
#
# As you can see from the graph below it is not very conclusive and is probably a sign that we need to perform some more data preparation.
df['CustomerCount'].plot(figsize=(15,5));
# If we take a look at the data, we begin to realize that there are multiple values for the same State, StatusDate, and Status combination. It is possible that this means the data you are working with is dirty/bad/inaccurate, but we will assume otherwise. We can assume this data set is a subset of a bigger data set and if we simply add the values in the ***CustomerCount*** column per State, StatusDate, and Status we will get the ***Total Customer Count*** per day.
sortdf = df[df['State']=='NY'].sort_index(axis=0)
sortdf.head(10)
# Our task is now to create a new dataframe that compresses the data so we have daily customer counts per State and StatusDate. We can ignore the Status column since all the values in this column are of value *1*. To accomplish this we will use the dataframe's functions ***groupby*** and ***sum()***.
#
# Note that we had to use **reset_index** . If we did not, we would not have been able to group by both the State and the StatusDate since the groupby function expects only columns as inputs. The **reset_index** function will bring the index ***StatusDate*** back to a column in the dataframe.
# Group by State and StatusDate
Daily = df.reset_index().groupby(['State','StatusDate']).sum()
Daily.head()
# The ***State*** and ***StatusDate*** columns are automatically placed in the index of the ***Daily*** dataframe. You can think of the ***index*** as the primary key of a database table but without the constraint of having unique values. Columns in the index as you will see allow us to easily select, plot, and perform calculations on the data.
#
# Below we delete the ***Status*** column since it is all equal to one and no longer necessary.
del Daily['Status']
Daily.head()
# What is the index of the dataframe
Daily.index
# Select the State index
Daily.index.levels[0]
# Select the StatusDate index
Daily.index.levels[1]
# Let's now plot the data per State.
#
# As you can see, by breaking the graph up by the ***State*** column we get a much clearer picture of what the data looks like. Can you spot any outliers?
Daily.loc['FL'].plot()
Daily.loc['GA'].plot()
Daily.loc['NY'].plot()
Daily.loc['TX'].plot();
# We can also just plot the data from a specific year onward, like ***2012***. We can now clearly see that the data for these states is all over the place. Since the data consists of weekly customer counts, the variability of the data seems suspect. For this tutorial we will assume bad data and proceed.
Daily.loc['FL']['2012':].plot()
Daily.loc['GA']['2012':].plot()
Daily.loc['NY']['2012':].plot()
Daily.loc['TX']['2012':].plot();
# We will assume that per month the customer count should remain relatively steady. Any data outside a specific range in that month will be removed from the data set. The final result should have smooth graphs with no spikes.
#
# ***StateYearMonth*** - Here we group by State, Year of StatusDate, and Month of StatusDate.
# ***Daily['Outlier']*** - A boolean (True or False) value letting us know if the value in the CustomerCount column is outside the acceptable range.
#
# We will be using the attribute ***transform*** instead of ***apply***. The reason is that transform will keep the shape (# of rows and columns) of the dataframe the same, while apply will not. By looking at the previous graphs, we can see they do not resemble a Gaussian distribution, which means we cannot use summary statistics like the mean and standard deviation. We use percentiles instead. Note that we run the risk of eliminating good data.
# +
# Calculate Outliers
StateYearMonth = Daily.groupby([Daily.index.get_level_values(0), Daily.index.get_level_values(1).year, Daily.index.get_level_values(1).month])
Daily['Lower'] = StateYearMonth['CustomerCount'].transform( lambda x: x.quantile(q=.25) - 1.5*(x.quantile(q=.75) - x.quantile(q=.25)) )
Daily['Upper'] = StateYearMonth['CustomerCount'].transform( lambda x: x.quantile(q=.75) + 1.5*(x.quantile(q=.75) - x.quantile(q=.25)) )
Daily['Outlier'] = (Daily['CustomerCount'] < Daily['Lower']) | (Daily['CustomerCount'] > Daily['Upper'])
# Remove Outliers
Daily = Daily[Daily['Outlier'] == False]
# -
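The shape difference between ***transform*** and ***apply*** can be seen on a tiny toy frame (unrelated to the lesson's data):

```python
import pandas as pd

toy = pd.DataFrame({"g": ["a", "a", "b"], "v": [1, 3, 10]})

t = toy.groupby("g")["v"].transform("max")  # one value per row, aligned to toy
a = toy.groupby("g")["v"].apply(max)        # one value per group
print(len(toy), len(t), len(a))
```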
# The dataframe named ***Daily*** will hold customer counts that have been aggregated per day. The original data (df) has multiple records per day. We are left with a data set that is indexed by both the state and the StatusDate. The Outlier column should be equal to ***False*** signifying that the record is not an outlier.
Daily.head()
# We create a separate dataframe named ***ALL*** which groups the Daily dataframe by StatusDate. We are essentially getting rid of the ***State*** column. The ***Max*** column represents the maximum customer count per month. The ***Max*** column is used to smooth out the graph.
# +
# Combine all markets
# Get the max customer count by Date
ALL = pd.DataFrame(Daily['CustomerCount'].groupby(Daily.index.get_level_values(1)).sum())
ALL.columns = ['CustomerCount'] # rename column
# Group by Year and Month
YearMonth = ALL.groupby([lambda x: x.year, lambda x: x.month])
# What is the max customer count per Year and Month
ALL['Max'] = YearMonth['CustomerCount'].transform(lambda x: x.max())
ALL.head()
# -
# As you can see from the ***ALL*** dataframe above, in the month of January 2009, the maximum customer count was 901. If we had used ***apply***, we would have gotten a dataframe with (Year and Month) as the index and just the *Max* column with the value of 901.
# ----------------------------------
# There was also an interest in gauging whether the current customer counts were reaching certain goals the company had established. The task here is to visually show if the current customer counts are meeting the goals listed below. We will call the goals ***BHAG*** (Big Hairy Annual Goal).
#
# * 12/31/2011 - 1,000 customers
# * 12/31/2012 - 2,000 customers
# * 12/31/2013 - 3,000 customers
#
# We will be using the **date_range** function to create our dates.
#
# ***Definition:*** date_range(start=None, end=None, periods=None, freq='D', tz=None, normalize=False, name=None, closed=None)
# ***Docstring:*** Return a fixed frequency datetime index, with day (calendar) as the default frequency
#
# By choosing the frequency to be ***A*** or annual we will be able to get the three target dates from above.
# +
# pd.date_range?
# -
# Create the BHAG dataframe
data = [1000,2000,3000]
idx = pd.date_range(start='12/31/2011', end='12/31/2013', freq='A')
BHAG = pd.DataFrame(data, index=idx, columns=['BHAG'])
BHAG
# Combining dataframes, as we learned in a previous lesson, is made simple using the ***concat*** function. Remember, when we choose ***axis = 0*** we are appending row-wise.
# Combine the BHAG and the ALL data set
combined = pd.concat([ALL,BHAG], axis=0)
combined = combined.sort_index(axis=0)
combined.tail()
# +
fig, axes = plt.subplots(figsize=(12, 7))
combined['BHAG'].fillna(method='pad').plot(color='green', label='BHAG')
combined['Max'].plot(color='blue', label='All Markets')
plt.legend(loc='best');
# -
# There was also a need to forecast next year's customer count, and we can do this in a couple of simple steps. We will first group the ***combined*** dataframe by ***Year*** and take the maximum customer count for that year. This will give us one row per Year.
# Group by Year and then get the max value per year
Year = combined.groupby(lambda x: x.year).max()
Year
# Add a column representing the percent change per year
Year['YR_PCT_Change'] = Year['Max'].pct_change(periods=1)
Year
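On a toy series (made-up numbers), ***pct_change*** returns the fractional change from the previous row, with the first row undefined:

```python
import pandas as pd

s = pd.Series([100.0, 150.0, 180.0])
changes = s.pct_change(periods=1)  # NaN, then (150-100)/100, then (180-150)/150
print(changes.tolist())
```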
# To get next year's end customer count we will assume our current growth rate remains constant. We will then increase this year's customer count by that amount, and that will be our forecast for next year.
(1 + Year.loc[2012,'YR_PCT_Change']) * Year.loc[2012,'Max']
# # Present Data
#
# Create individual Graphs per State.
# +
# First Graph
ALL['Max'].plot(figsize=(10, 5));plt.title('ALL Markets')
# Last four Graphs
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(20, 10))
fig.subplots_adjust(hspace=1.0) ## Create space between plots
Daily.loc['FL']['CustomerCount']['2012':].fillna(method='pad').plot(ax=axes[0,0])
Daily.loc['GA']['CustomerCount']['2012':].fillna(method='pad').plot(ax=axes[0,1])
Daily.loc['TX']['CustomerCount']['2012':].fillna(method='pad').plot(ax=axes[1,0])
Daily.loc['NY']['CustomerCount']['2012':].fillna(method='pad').plot(ax=axes[1,1])
# Add titles
axes[0,0].set_title('Florida')
axes[0,1].set_title('Georgia')
axes[1,0].set_title('Texas')
axes[1,1].set_title('North East');
# -
# <p class="text-muted">This tutorial was created by <a href="http://www.hedaro.com" target="_blank"><strong>HEDARO</strong></a></p>
| lectures/01_intro/code/learn-pandas/lessons/03 - Lesson.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import telebot
chat_id = "CHAT_ID"
token = "TOKEN"
bot = telebot.TeleBot(token, parse_mode=None) # You can set parse_mode by default. HTML or MARKDOWN
bot.send_message(chat_id, "This message was sent proactively by the bot~")
# +
# sendPhoto
photo = open('./anime_sketch.jpeg', 'rb')
bot.send_photo(chat_id, photo)
# -
# sendMarkdown
bot = telebot.TeleBot(token, parse_mode='Markdown') # You can set parse_mode by default. HTML or MARKDOWN
markdown_msg = '''
*Markdown*

'''
bot.send_message(chat_id, markdown_msg)
| day8_telegram_bot_push_msg_example.ipynb |
# ##### Copyright 2021 Google LLC.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# # contiguity_regular
# <table align="left">
# <td>
# <a href="https://colab.research.google.com/github/google/or-tools/blob/master/examples/notebook/contrib/contiguity_regular.ipynb"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/colab_32px.png"/>Run in Google Colab</a>
# </td>
# <td>
# <a href="https://github.com/google/or-tools/blob/master/examples/contrib/contiguity_regular.py"><img src="https://raw.githubusercontent.com/google/or-tools/master/tools/github_32px.png"/>View source on GitHub</a>
# </td>
# </table>
# First, you must install [ortools](https://pypi.org/project/ortools/) package in this colab.
# !pip install ortools
# +
# Copyright 2010 <NAME> <EMAIL>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Global constraint contiguity using regular in Google CP Solver.
This is a decomposition of the global constraint
global contiguity.
From Global Constraint Catalogue
http://www.emn.fr/x-info/sdemasse/gccat/Cglobal_contiguity.html
'''
Enforce all variables of the VARIABLES collection to be assigned to 0 or 1.
In addition, all variables assigned to value 1 appear contiguously.
Example:
(<0, 1, 1, 0>)
The global_contiguity constraint holds since the sequence 0 1 1 0 contains
no more than one group of contiguous 1.
'''
Compare with the following model:
* MiniZinc: http://www.hakank.org/minizinc/contiguity_regular.mzn
This model was created by <NAME> (<EMAIL>)
Also see my other Google CP Solver models:
http://www.hakank.org/google_or_tools/
"""
from ortools.constraint_solver import pywrapcp
#
# Global constraint regular
#
# This is a translation of MiniZinc's regular constraint (defined in
# lib/zinc/globals.mzn), via the Comet code referred to above.
# All comments are from the MiniZinc code.
# '''
# The sequence of values in array 'x' (which must all be in the range 1..S)
# is accepted by the DFA of 'Q' states with input 1..S and transition
# function 'd' (which maps (1..Q, 1..S) -> 0..Q)) and initial state 'q0'
# (which must be in 1..Q) and accepting states 'F' (which all must be in
# 1..Q). We reserve state 0 to be an always failing state.
# '''
#
# x : IntVar array
# Q : number of states
# S : input_max
# d : transition matrix
# q0: initial state
# F : accepting states
def regular(x, Q, S, d, q0, F):
solver = x[0].solver()
assert Q > 0, 'regular: "Q" must be greater than zero'
assert S > 0, 'regular: "S" must be greater than zero'
# d2 is the same as d, except we add one extra transition for
# each possible input; each extra transition is from state zero
# to state zero. This allows us to continue even if we hit a
# non-accepted input.
# Comet: int d2[0..Q, 1..S]
d2 = []
for i in range(Q + 1):
row = []
for j in range(S):
if i == 0:
row.append(0)
else:
row.append(d[i - 1][j])
d2.append(row)
d2_flatten = [d2[i][j] for i in range(Q + 1) for j in range(S)]
# If x has index set m..n, then a[m-1] holds the initial state
# (q0), and a[i+1] holds the state we're in after processing
# x[i]. If a[n] is in F, then we succeed (ie. accept the
# string).
x_range = list(range(0, len(x)))
m = 0
n = len(x)
a = [solver.IntVar(0, Q + 1, 'a[%i]' % i) for i in range(m, n + 1)]
# Check that the final state is in F
solver.Add(solver.MemberCt(a[-1], F))
# First state is q0
solver.Add(a[m] == q0)
for i in x_range:
solver.Add(x[i] >= 1)
solver.Add(x[i] <= S)
# Determine a[i+1]: a[i+1] == d2[a[i], x[i]]
solver.Add(
a[i + 1] == solver.Element(d2_flatten, ((a[i]) * S) + (x[i] - 1)))
# Create the solver.
solver = pywrapcp.Solver('Global contiguity using regular')
#
# data
#
# the DFA (for regular)
n_states = 3
input_max = 2
initial_state = 1 # 0 is for the failing state
# all states are accepting states
accepting_states = [1, 2, 3]
# The regular expression 0*1*0*
transition_fn = [
[1, 2], # state 1 (start): input 0 -> state 1, input 1 -> state 2 i.e. 0*
[3, 2], # state 2: 1*
[3, 0], # state 3: 0*
]
n = 7
#
# declare variables
#
# We use 1..2 and subtract 1 in the solution
reg_input = [solver.IntVar(1, 2, 'x[%i]' % i) for i in range(n)]
#
# constraints
#
regular(reg_input, n_states, input_max, transition_fn, initial_state,
accepting_states)
#
# solution and search
#
db = solver.Phase(reg_input, solver.CHOOSE_FIRST_UNBOUND,
solver.ASSIGN_MIN_VALUE)
solver.NewSearch(db)
num_solutions = 0
while solver.NextSolution():
num_solutions += 1
  # Note: here we subtract 1 from the solution
print('reg_input:', [int(reg_input[i].Value() - 1) for i in range(n)])
solver.EndSearch()
print()
print('num_solutions:', num_solutions)
print('failures:', solver.Failures())
print('branches:', solver.Branches())
print('wall_time:', solver.WallTime(), 'ms')
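For intuition, the DFA encoded in `transition_fn` can be simulated in plain Python, independent of the CP solver. State 0 is the failing state and states 1..3 all accept, so a bit string is accepted exactly when it matches 0*1*0*:

```python
def dfa_accepts(bits):
    # state -> [next state on input 0, next state on input 1]
    d = {1: [1, 2], 2: [3, 2], 3: [3, 0]}
    state = 1
    for b in bits:
        state = d[state][b]
        if state == 0:  # fell into the always-failing state
            return False
    return True

assert dfa_accepts([0, 1, 1, 0])      # one contiguous run of 1s
assert not dfa_accepts([0, 1, 0, 1])  # two separate runs of 1s
```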
| examples/notebook/contrib/contiguity_regular.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.2 64-bit (''qsharp-book'': conda)'
# language: python
# name: python38264bitqsharpbookconda1b2b918f2bce4e578f9d1fbcf8270573
# ---
# # Scratch pad for testing
# Using packages like QuTiP, you can check your understanding of the actions of the CHP operations as well as Pauli products.
# > NOTE: Either run this in a Python env where you have installed QuTiP and NumPy or create the `conda` environment based on the `environment-qutip.yml`.
import qutip as qt
import numpy as np
# +
# CHP ops
# S is the more canonical name for P
S = qt.qip.operations.phasegate(np.pi/2)
H = qt.hadamard_transform()
CNOT = qt.qip.operations.cnot()
# Pauli operations
X = qt.sigmax()
Y = qt.sigmay()
Z = qt.sigmaz()
I = qt.qeye(1)
# -
S * X * S.dag()
Y
H * S * S * H
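The same identities can be cross-checked with plain NumPy, using only the standard 2x2 matrix definitions: conjugating X by S gives Y, and sandwiching S² (= Z) between Hadamards gives X.

```python
import numpy as np

S = np.diag([1, 1j])                          # phase gate
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])

assert np.allclose(S @ X @ S.conj().T, Y)  # S X S† = Y
assert np.allclose(H @ S @ S @ H, X)       # H S² H = H Z H = X
print("identities hold")
```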
| sample/qutip-scratchpad.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/agupta231/CARROL/blob/master/GPU-stress-test.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="O5dcG3YvtXfm" colab_type="code" colab={}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import gzip
import os
import sys
import time
import numpy
from six.moves import urllib
from six.moves import xrange # pylint: disable=redefined-builtin
import tensorflow as tf
# + id="zds-cReitiqm" colab_type="code" colab={}
SOURCE_URL = 'https://storage.googleapis.com/cvdf-datasets/mnist/'
WORK_DIRECTORY = 'data'
IMAGE_SIZE = 28
NUM_CHANNELS = 1
PIXEL_DEPTH = 255
NUM_LABELS = 10
VALIDATION_SIZE = 5000 # Size of the validation set.
SEED = 66478 # Set to None for random seed.
BATCH_SIZE = 64
NUM_EPOCHS = 10
EVAL_BATCH_SIZE = 64
EVAL_FREQUENCY = 100 # Number of steps between evaluations.
FLAGS = None
# + id="dOHXHit5tnul" colab_type="code" colab={}
def data_type():
"""Return the type of the activations, weights, and placeholder variables."""
if FLAGS.use_fp16:
return tf.float16
else:
return tf.float32
# + id="d-nSI5gTtpbL" colab_type="code" colab={}
def maybe_download(filename):
"""Download the data from Yann's website, unless it's already here."""
if not tf.gfile.Exists(WORK_DIRECTORY):
tf.gfile.MakeDirs(WORK_DIRECTORY)
filepath = os.path.join(WORK_DIRECTORY, filename)
if not tf.gfile.Exists(filepath):
filepath, _ = urllib.request.urlretrieve(SOURCE_URL + filename, filepath)
with tf.gfile.GFile(filepath) as f:
size = f.size()
print('Successfully downloaded', filename, size, 'bytes.')
return filepath
# + id="Ds1xNMVotrL2" colab_type="code" colab={}
def extract_data(filename, num_images):
"""Extract the images into a 4D tensor [image index, y, x, channels].
Values are rescaled from [0, 255] down to [-0.5, 0.5].
"""
print('Extracting', filename)
with gzip.open(filename) as bytestream:
bytestream.read(16)
buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images * NUM_CHANNELS)
data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)
data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS)
return data
# + id="6obUVoLMtvAG" colab_type="code" colab={}
def extract_labels(filename, num_images):
"""Extract the labels into a vector of int64 label IDs."""
print('Extracting', filename)
with gzip.open(filename) as bytestream:
bytestream.read(8)
buf = bytestream.read(1 * num_images)
labels = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.int64)
return labels
# + id="5YMOasSvtxMn" colab_type="code" colab={}
def fake_data(num_images):
"""Generate a fake dataset that matches the dimensions of MNIST."""
data = numpy.ndarray(
shape=(num_images, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS),
dtype=numpy.float32)
labels = numpy.zeros(shape=(num_images,), dtype=numpy.int64)
for image in xrange(num_images):
label = image % 2
data[image, :, :, 0] = label - 0.5
labels[image] = label
return data, labels
# + id="pZmGSjGltzAu" colab_type="code" colab={}
def error_rate(predictions, labels):
"""Return the error rate based on dense predictions and sparse labels."""
return 100.0 - (
100.0 *
numpy.sum(numpy.argmax(predictions, 1) == labels) /
predictions.shape[0])
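A quick sanity check of `error_rate` on a made-up batch; the function is repeated here so the sketch is self-contained:

```python
import numpy

def error_rate(predictions, labels):
    """Same definition as above."""
    return 100.0 - (
        100.0 *
        numpy.sum(numpy.argmax(predictions, 1) == labels) /
        predictions.shape[0])

predictions = numpy.array([[0.9, 0.1],
                           [0.2, 0.8],
                           [0.6, 0.4]])
labels = numpy.array([0, 1, 1])  # the third sample is misclassified
rate = error_rate(predictions, labels)  # two of three correct
```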
# + id="bu5fUkkvtzvL" colab_type="code" colab={}
def main(_):
if FLAGS.self_test:
print('Running self-test.')
train_data, train_labels = fake_data(256)
validation_data, validation_labels = fake_data(EVAL_BATCH_SIZE)
test_data, test_labels = fake_data(EVAL_BATCH_SIZE)
num_epochs = 1
else:
# Get the data.
train_data_filename = maybe_download('train-images-idx3-ubyte.gz')
train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')
test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')
test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz')
# Extract it into numpy arrays.
train_data = extract_data(train_data_filename, 60000)
train_labels = extract_labels(train_labels_filename, 60000)
test_data = extract_data(test_data_filename, 10000)
test_labels = extract_labels(test_labels_filename, 10000)
# Generate a validation set.
validation_data = train_data[:VALIDATION_SIZE, ...]
validation_labels = train_labels[:VALIDATION_SIZE]
train_data = train_data[VALIDATION_SIZE:, ...]
train_labels = train_labels[VALIDATION_SIZE:]
num_epochs = NUM_EPOCHS
train_size = train_labels.shape[0]
# This is where training samples and labels are fed to the graph.
# These placeholder nodes will be fed a batch of training data at each
# training step using the {feed_dict} argument to the Run() call below.
train_data_node = tf.placeholder(
data_type(),
shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
train_labels_node = tf.placeholder(tf.int64, shape=(BATCH_SIZE,))
eval_data = tf.placeholder(
data_type(),
shape=(EVAL_BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
# The variables below hold all the trainable weights. They are passed an
# initial value which will be assigned when we call:
# {tf.global_variables_initializer().run()}
conv1_weights = tf.Variable(
tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32.
stddev=0.1,
seed=SEED, dtype=data_type()))
conv1_biases = tf.Variable(tf.zeros([32], dtype=data_type()))
conv2_weights = tf.Variable(tf.truncated_normal(
[5, 5, 32, 64], stddev=0.1,
seed=SEED, dtype=data_type()))
conv2_biases = tf.Variable(tf.constant(0.1, shape=[64], dtype=data_type()))
fc1_weights = tf.Variable( # fully connected, depth 512.
tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],
stddev=0.1,
seed=SEED,
dtype=data_type()))
fc1_biases = tf.Variable(tf.constant(0.1, shape=[512], dtype=data_type()))
fc2_weights = tf.Variable(tf.truncated_normal([512, NUM_LABELS],
stddev=0.1,
seed=SEED,
dtype=data_type()))
fc2_biases = tf.Variable(tf.constant(
0.1, shape=[NUM_LABELS], dtype=data_type()))
# We will replicate the model structure for the training subgraph, as well
# as the evaluation subgraphs, while sharing the trainable parameters.
def model(data, train=False):
"""The Model definition."""
# 2D convolution, with 'SAME' padding (i.e. the output feature map has
# the same size as the input). Note that {strides} is a 4D array whose
# shape matches the data layout: [image index, y, x, depth].
conv = tf.nn.conv2d(data,
conv1_weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Bias and rectified linear non-linearity.
relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))
# Max pooling. The kernel size spec {ksize} also follows the layout of
# the data. Here we have a pooling window of 2, and a stride of 2.
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
conv = tf.nn.conv2d(pool,
conv2_weights,
strides=[1, 1, 1, 1],
padding='SAME')
relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Reshape the feature map cuboid into a 2D matrix to feed it to the
# fully connected layers.
pool_shape = pool.get_shape().as_list()
reshape = tf.reshape(
pool,
[pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])
# Fully connected layer. Note that the '+' operation automatically
# broadcasts the biases.
hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)
# Add a 50% dropout during training only. Dropout also scales
# activations such that no rescaling is needed at evaluation time.
if train:
hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)
return tf.matmul(hidden, fc2_weights) + fc2_biases
# Training computation: logits + cross-entropy loss.
logits = model(train_data_node, True)
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=train_labels_node, logits=logits))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +
tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))
# Add the regularization term to the loss.
loss += 5e-4 * regularizers
# Optimizer: set up a variable that's incremented once per batch and
# controls the learning rate decay.
batch = tf.Variable(0, dtype=data_type())
# Decay once per epoch, using an exponential schedule starting at 0.01.
learning_rate = tf.train.exponential_decay(
0.01, # Base learning rate.
batch * BATCH_SIZE, # Current index into the dataset.
train_size, # Decay step.
0.95, # Decay rate.
staircase=True)
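# With staircase=True, the schedule above multiplies the base rate by the
# decay rate once per completed epoch. A minimal sketch of that formula,
# assuming the script's BATCH_SIZE of 64 and MNIST's 60,000 training
# samples (both are assumptions, not visible in this cell):

```python
def staircase_lr(step, base_lr=0.01, decay_rate=0.95,
                 batch_size=64, train_size=60000):
    # Number of completed epochs so far; staircase=True floors this value.
    completed_epochs = (step * batch_size) // train_size
    return base_lr * decay_rate ** completed_epochs

# The rate stays at 0.01 for the first epoch, then drops by 5% each epoch.
```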
# Use simple momentum for the optimization.
optimizer = tf.train.MomentumOptimizer(learning_rate,
0.9).minimize(loss,
global_step=batch)
# Predictions for the current training minibatch.
train_prediction = tf.nn.softmax(logits)
# Predictions for the test and validation, which we'll compute less often.
eval_prediction = tf.nn.softmax(model(eval_data))
# Small utility function to evaluate a dataset by feeding batches of data to
# {eval_data} and pulling the results from {eval_predictions}.
# Saves memory and enables this to run on smaller GPUs.
def eval_in_batches(data, sess):
"""Get all predictions for a dataset by running it in small batches."""
size = data.shape[0]
if size < EVAL_BATCH_SIZE:
raise ValueError("batch size for evals larger than dataset: %d" % size)
predictions = numpy.ndarray(shape=(size, NUM_LABELS), dtype=numpy.float32)
for begin in xrange(0, size, EVAL_BATCH_SIZE):
end = begin + EVAL_BATCH_SIZE
if end <= size:
predictions[begin:end, :] = sess.run(
eval_prediction,
feed_dict={eval_data: data[begin:end, ...]})
else:
batch_predictions = sess.run(
eval_prediction,
feed_dict={eval_data: data[-EVAL_BATCH_SIZE:, ...]})
predictions[begin:, :] = batch_predictions[begin - size:, :]
return predictions
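# The tail handling above re-runs the last full EVAL_BATCH_SIZE window and
# keeps only the rows not already filled, so every session run sees a
# full-sized batch. The same indexing can be sketched on plain ranges:

```python
def batch_windows(size, batch):
    """Yield (begin, end) windows covering `size` items, where the final
    window is shifted back so every window is exactly `batch` wide."""
    for begin in range(0, size, batch):
        end = begin + batch
        if end <= size:
            yield begin, end
        else:
            # Re-use the last full window; the caller keeps rows [begin:].
            yield size - batch, size

windows = list(batch_windows(10, 4))
# Windows: (0, 4), (4, 8), (6, 10) -- the last one overlaps the second.
```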
# Create a local session to run the training.
start_time = time.time()
with tf.Session() as sess:
# Run all the initializers to prepare the trainable parameters.
tf.global_variables_initializer().run()
print('Initialized!')
# Loop through training steps.
for step in xrange(int(num_epochs * train_size) // BATCH_SIZE):
# Compute the offset of the current minibatch in the data.
# Note that we could use better randomization across epochs.
offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)
batch_data = train_data[offset:(offset + BATCH_SIZE), ...]
batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the optimizer to update weights.
sess.run(optimizer, feed_dict=feed_dict)
# print some extra information once reach the evaluation frequency
if step % EVAL_FREQUENCY == 0:
# fetch some extra nodes' data
l, lr, predictions = sess.run([loss, learning_rate, train_prediction],
feed_dict=feed_dict)
elapsed_time = time.time() - start_time
start_time = time.time()
print('Step %d (epoch %.2f), %.1f ms' %
(step, float(step) * BATCH_SIZE / train_size,
1000 * elapsed_time / EVAL_FREQUENCY))
print('Minibatch loss: %.3f, learning rate: %.6f' % (l, lr))
print('Minibatch error: %.1f%%' % error_rate(predictions, batch_labels))
print('Validation error: %.1f%%' % error_rate(
eval_in_batches(validation_data, sess), validation_labels))
sys.stdout.flush()
# Finally print the result!
test_error = error_rate(eval_in_batches(test_data, sess), test_labels)
print('Test error: %.1f%%' % test_error)
if FLAGS.self_test:
print('test_error', test_error)
assert test_error == 0.0, 'expected 0.0 test_error, got %.2f' % (
test_error,)
# + id="n5mBzolMt6GP" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 7153} outputId="16636f7e-66ad-4e13-95d7-ff6843345ba2"
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--use_fp16',
default=False,
help='Use half floats instead of full floats if True.',
action='store_true')
parser.add_argument(
'--self_test',
default=False,
action='store_true',
help='True if running a self test.')
FLAGS, unparsed = parser.parse_known_args()
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
# + [markdown] id="bmnoRSlqqrqu" colab_type="text"
#
| GPU-stress-test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # U.S. Medical Insurance Costs
# **Scope**: compare each variable and see its impact on charges.
#
# These include:
# - age
# - sex
# - bmi
# - children
# - smoker
# - region
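# One simple way to quantify "impact on charges" for the numeric columns is
# Pearson correlation against charges. A stdlib sketch on hypothetical toy
# data (not the insurance file):

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    # Pearson correlation: mean of standardized cross-products.
    mx, my = mean(xs), mean(ys)
    sx, sy = pstdev(xs), pstdev(ys)
    n = len(xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n * sx * sy)

# Toy data: charges rise roughly with age, so r is close to +1.
toy_ages = [19, 30, 45, 60]
toy_charges = [2000.0, 4100.0, 7800.0, 11900.0]
r = pearson(toy_ages, toy_charges)
```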
# +
# bring in file
import csv
# list for each variable
ages = [] #int
sexes = [] #string
bmis = [] #float
children = [] #int
smokers = [] #boolean
regions = [] #string
charges = [] #float
# add data from csv to lists
with open('insurance.csv') as insurance_costs:
costs = csv.DictReader(insurance_costs)
for cost in costs:
ages.append(int(cost['age']))
sexes.append(cost['sex'])
bmis.append(float(cost['bmi']))
children.append(int(cost['children']))
if cost['smoker'] == 'yes': smokers.append(True)
else: smokers.append(False)
regions.append(cost['region'])
charges.append(float(cost['charges']))
# -
# Create a class with functions to run calculations on a data set of this structure.
class InsCost:
def __init__(self, ages, sexes, bmis, children, smokers, regions, charges):
self.ages = ages
self.sexes = sexes
self.bmis = bmis
self.children = children
self.smokers = smokers
self.regions = regions
self.charges = charges
def averages(self):
self.average_age = sum(self.ages) / len(self.ages)
self.average_bmi = sum(self.bmis) / len(self.bmis)
self.average_children = sum(self.children) / len(self.children)
self.average_charges = sum(self.charges) / len(self.charges)
def sex_count(self):
self.women = 0
for sex in self.sexes: self.women += int(sex == 'female')
self.men = len(self.sexes) - self.women
def smoker_count(self):
# use distinct counter names so the smokers list is not overwritten
self.smoker_total = 0
for status in self.smokers: self.smoker_total += int(status)
self.nonsmoker_total = len(self.smokers) - self.smoker_total
def split_by_smoking_status(self, to_split):
smoker_values = []
nonsmoker_values = []
for i in range(len(self.smokers)):
if self.smokers[i]: smoker_values.append(to_split[i])
else: nonsmoker_values.append(to_split[i])
return smoker_values, nonsmoker_values
def split_by_sex(self, to_split):
male_values = []
female_values = []
for i in range(len(self.sexes)):
if self.sexes[i] == 'male': male_values.append(to_split[i])
else: female_values.append(to_split[i])
return male_values, female_values
# Create object from class and data set
data_set = InsCost(ages, sexes, bmis, children, smokers, regions, charges)
# Plot age and cost
# +
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(8, 6))
#plt.yticks(np.arange(0, max(data_set.charges), 1000))
plt.scatter(data_set.ages, data_set.charges, s=5)
plt.xlabel('age')
plt.ylabel('charges')
plt.show()
# -
# It looks like cost goes up with age. The three clustered bands with parallel slopes suggest that this is a general relationship independent of other variables.
#
# I'm curious to see if one of the other variables explains the y difference between the bands.
#
# First, I'll color dots by **smokers(red)** and **nonsmokers(blue)**
# +
import matplotlib.pyplot as plt
colors = []
plt.figure(figsize=(8, 6))
smoker_ages = data_set.split_by_smoking_status(data_set.ages)[0]
nonsmoker_ages = data_set.split_by_smoking_status(data_set.ages)[1]
smoker_charges = data_set.split_by_smoking_status(data_set.charges)[0]
nonsmoker_charges = data_set.split_by_smoking_status(data_set.charges)[1]
plt.scatter(smoker_ages, smoker_charges, s=15, color='red', label='Smokers', alpha=0.25)
plt.scatter(nonsmoker_ages, nonsmoker_charges, s=15, color='blue', label='Nonsmokers', alpha=0.25)
plt.xlabel('age')
plt.ylabel('charges')
plt.legend(loc='upper left')
plt.show()
# -
# From this, it's clear that smokers have higher charges regardless of age. However, something else is impacting charges: the bottom (low charge) band is almost entirely non-smokers, and the top (high charge) band is almost entirely smokers, but the middle band is a mix of smokers and non-smokers.
#
# This raises a couple of questions:
# 1. Why do the smokers in that band have lower charges than other smokers?
# 2. Why do the nonsmokers in that band have higher charges than other nonsmokers?
#
# Let's look at charges by BMI stratified by smokers/nonsmokers:
# +
smoker_bmis = data_set.split_by_smoking_status(data_set.bmis)[0]
nonsmoker_bmis = data_set.split_by_smoking_status(data_set.bmis)[1]
plt.figure(figsize=(8, 6))
plt.scatter(smoker_bmis, smoker_charges, s=15, color='red', label='Smokers', alpha=.25)
plt.scatter(nonsmoker_bmis, nonsmoker_charges, s=15, color='blue', label='Nonsmokers', alpha=.25)
plt.xlabel('bmi')
plt.ylabel('charges')
plt.legend(loc='upper left')
plt.show()
# -
# It appears that higher BMI is associated with much higher costs among smokers than non-smokers.
#
# What about looking at charges by number of children for smokers and non-smokers?
# +
smoker_children = data_set.split_by_smoking_status(data_set.children)[0]
nonsmoker_children = data_set.split_by_smoking_status(data_set.children)[1]
plt.figure(figsize=(8, 6))
plt.scatter(smoker_children, smoker_charges, s=15, color='red', label='Smokers', alpha=.25)
plt.scatter(nonsmoker_children, nonsmoker_charges, s=15, color='blue', label='Nonsmokers', alpha=.25)
plt.xlabel('children')
plt.ylabel('charges')
plt.legend(loc='upper left')
plt.show()
# -
# Not particularly interesting. There doesn't seem to be significant variation at 0-3 children, and the numbers of patients with 4 or 5 children are too low to say much.
#
# Finally, let's isolate smoking and charges:
# +
plt.figure(figsize=(4, 6))
smoker_data = []
for value in data_set.smokers:
if value == True: smoker_data.append('Smoker')
else: smoker_data.append('NonSmoker')
plt.scatter(smoker_data, data_set.charges, s=15, alpha=.25)
plt.xlabel('smokers')
plt.ylabel('charges')
plt.show()
# -
# It's clear that smoking is a major driver of higher charges!
#
# What about breaking the data out by sex?
# +
smoker_sexes = data_set.split_by_smoking_status(data_set.sexes)[0]
nonsmoker_sexes = data_set.split_by_smoking_status(data_set.sexes)[1]
plt.figure(figsize=(4, 6))
plt.scatter(smoker_sexes, smoker_charges, s=15, color='red', label='Smokers', alpha=.25)
plt.scatter(nonsmoker_sexes, nonsmoker_charges, s=15, color='blue', label='Nonsmokers', alpha=.25)
plt.xlabel('sex')
plt.ylabel('charges')
plt.legend(loc='center')
plt.show()
# -
# There seem to be some differences here - male smokers appear to have slightly higher costs than female smokers.
# +
charges_men = data_set.split_by_sex(data_set.charges)[0]
charges_women = data_set.split_by_sex(data_set.charges)[1]
ages_men = data_set.split_by_sex(data_set.ages)[0]
ages_women = data_set.split_by_sex(data_set.ages)[1]
bmis_men = data_set.split_by_sex(data_set.bmis)[0]
bmis_women = data_set.split_by_sex(data_set.bmis)[1]
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.scatter(ages_men, charges_men, s=15, color='green', label='Men', alpha=.25)
plt.scatter(ages_women, charges_women, s=15, color='orange', label='Women', alpha=.25)
plt.xlabel('age')
plt.ylabel('charges')
plt.legend(loc='upper right')
plt.title('Age')
plt.subplot(1, 2, 2)
plt.scatter(bmis_men, charges_men, s=15, color='green', label='Men', alpha=.25)
plt.scatter(bmis_women, charges_women, s=15, color='orange', label='Women', alpha=.25)
plt.xlabel('bmi')
plt.ylabel('charges')
plt.legend(loc='upper right')
plt.title('BMI')
plt.show()
# -
# Not a ton of obvious variation here. It looks like smoking is the biggest factor driving cost, followed by age. It'd be interesting to look at patterns by region, but that'll have to wait for another time!
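# As a pointer for that follow-up, charges-by-region reduces to a mean per
# key. A minimal stdlib sketch on hypothetical rows (not from insurance.csv):

```python
from collections import defaultdict

def mean_by_key(keys, values):
    # Accumulate values per key, then average each bucket.
    buckets = defaultdict(list)
    for key, value in zip(keys, values):
        buckets[key].append(value)
    return {key: sum(vals) / len(vals) for key, vals in buckets.items()}

# Hypothetical rows for illustration only.
toy_regions = ['southwest', 'southeast', 'southwest', 'northeast']
toy_charges = [1000.0, 3000.0, 2000.0, 4000.0]
region_means = mean_by_key(toy_regions, toy_charges)
```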
| us_medical_insurance_costs/us-medical-insurance-costs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/00FFEF/test_deeplearning/blob/master/boston_housingwithregression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="4hC1gjo0aqVu"
import tensorflow as tf
from tensorflow.keras.datasets.boston_housing import load_data
# + [markdown] id="TL-cAicg1K4J"
# #Dataset
# + colab={"base_uri": "https://localhost:8080/"} id="3jhIP6Z8df5i" outputId="1cf8b423-8fa1-4391-ad31-ffdb575617ba"
(x_train, y_train), (x_test, y_test) = load_data(path='boston_housing.npz', test_split=0.2, seed=113)
x_train.shape, y_train.shape, x_test.shape, y_test.shape
# + id="LOyIDYDC1PVq"
import pandas as pd
# + colab={"base_uri": "https://localhost:8080/"} id="QzJ81dk71R9x" outputId="56444f1b-4bf2-4c4c-8e82-ee055cb79995"
df = pd.DataFrame(x_train)
df.info()
# + id="t5jnZpr-2maV"
df.describe()
# + colab={"base_uri": "https://localhost:8080/"} id="mSZds00w3AjJ" outputId="bb7d8db4-085c-4f71-ffc9-7f6006cd97cc"
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(x_train)
# + colab={"base_uri": "https://localhost:8080/"} id="qng7aCeO3qSq" outputId="3af2ee7b-da0b-479b-b5a1-058eb874fb0a"
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
x_train.shape, x_test.shape
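# StandardScaler applies z = (x - mean) / std per column, with the mean and
# std learned from the training set only. A stdlib sketch of one column:

```python
from statistics import mean, pstdev

def standardize(train_col, other_col):
    # Fit mean/std on the training column only, then apply the same
    # parameters to both -- mirroring scaler.fit(x_train) + transform(...).
    mu, sigma = mean(train_col), pstdev(train_col)
    return ([(x - mu) / sigma for x in train_col],
            [(x - mu) / sigma for x in other_col])

train_scaled, test_scaled = standardize([2.0, 4.0, 6.0], [8.0])
```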
# + colab={"base_uri": "https://localhost:8080/", "height": 402} id="PWCVMFwv4C9v" outputId="053ad327-a28d-4227-f5b0-e26de2d22bb9"
df = pd.DataFrame(x_train)
df.describe()
# + [markdown] id="rZMhy8m71MjE"
# #Model and fit
# + id="HYEyJbt6d7GN"
model = tf.keras.models.Sequential()
# + colab={"base_uri": "https://localhost:8080/"} id="wBgrxkVZea-U" outputId="20455da0-0669-4531-9dfa-ed1026344c5e"
model.add(tf.keras.Input(shape=(13,))) # input layer: shape must be a tuple, (13) is just the int 13
model.add(tf.keras.layers.Dense(64, activation='sigmoid')) # hidden layer
model.add(tf.keras.layers.Dense(64, activation='sigmoid')) # hidden layer
model.add(tf.keras.layers.Dense(64, activation='sigmoid')) # hidden layer
model.add(tf.keras.layers.Dense(1,)) # output layer
model.compile(optimizer='sgd', loss='mae', metrics=['mae'])
# + id="mFtCbO92rKJZ"
tf.keras.utils.plot_model(model, show_shapes=True)
# + colab={"base_uri": "https://localhost:8080/"} id="z8P0GjDvrSdQ" outputId="be5492de-45af-4c29-82b3-7e4eaa2786b1"
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="AQ6UuFbjtqSB" outputId="286dd0ac-b77f-42c9-d020-246c4f6e0892"
model.fit(x_train, y_train, epochs=100)
# + [markdown] id="0v7DyIXL0aj5"
# #Evaluation
# + colab={"base_uri": "https://localhost:8080/"} id="iKkjmEuM0aD-" outputId="327e00b2-640c-4ccc-c49a-f74303fe2124"
model.evaluate(x_train, y_train)
# + [markdown] id="iW9eEUYKxKZ_"
# #Service
# + colab={"base_uri": "https://localhost:8080/"} id="ld7MZBwDxGAt" outputId="016aea31-2b7e-4f78-997a-e5f1bc8652b4"
x_train[10]
# + colab={"base_uri": "https://localhost:8080/"} id="giKHjm-gxH9s" outputId="dad4afdb-f05c-4b97-c4d4-6b5cd934031e"
model.predict([[ 0.63391647, -0.48361547, 1.0283258 , -0.25683275, 1.15788777,
0.19313958, 1.11048828, -1.03628262, 1.67588577, 1.5652875 ,
0.78447637, 0.22689422, 1.04466491]])
# + colab={"base_uri": "https://localhost:8080/"} id="45HPgw_JxZGs" outputId="be9148d0-efb0-4b41-b517-c9d48bdce5ea"
y_train[10]
# + id="WYr1vXEJ5l2_"
| boston_housingwithregression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Imports
# +
from __future__ import absolute_import, division, print_function, unicode_literals
import functools
from IPython.display import Image
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import glob
import datetime
import networkx as nx
from os.path import join
# %matplotlib inline
pd.set_option("display.precision", 2)
# -
# ## Count all unique bicycles in each month
# +
paths = [
r'./data',
]
all_bikes = []
bikes_per_month = {}
for path in paths:
all_files = sorted(glob.glob(path + "/*.csv"))
month = 3
for idx, filename in enumerate(all_files):
data = pd.read_csv(filename)
unique_bikes = data.bike_number.unique()
bikes_per_month[month] = unique_bikes.size
all_bikes = [*all_bikes, *unique_bikes]
print("Unique bicycles in month", month,":", unique_bikes.size)
month += 1
all_unique_bikes = set(all_bikes)
print("Unique bicycles in 2019 :",len(all_unique_bikes))
bikes_per_month
# -
# ## Read files and change stations names to numbers
df_names = pd.read_csv('./networks/nodes.csv', usecols=['value', 'name'])
df = pd.read_csv('./networks/nodes_locations.csv', usecols=['name', 'lng', 'lat'])
df_edges = pd.read_csv('./plik.csv', usecols=['interval_start','interval_end','number_of_trips','rental_place','return_place'])
dict_names_temp = df_names['name'].to_dict()
# +
dict_names_temp
dict_names = {}
for value, name in dict_names_temp.items():
dict_names[name] = value
df_edges["rental_place"].replace(dict_names, inplace=True)
df_edges["return_place"].replace(dict_names, inplace=True)
df_edges
# -
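# The inversion loop above has a one-line equivalent; a sketch with a toy
# mapping (station names are hypothetical):

```python
# Toy index-to-name mapping, shaped like dict_names_temp.
index_to_name = {0: 'Main St', 1: 'Park Ave'}

# Swap keys and values in one comprehension (names must be unique).
name_to_index = {name: index for index, name in index_to_name.items()}
```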
# ## Get start and end intervals
df_edges['interval_start']= pd.to_datetime(df_edges['interval_start'])
df_edges['interval_end']= pd.to_datetime(df_edges['interval_end'])
start = df_edges.interval_start.min()
start = start.replace(hour=0, minute=0, second=0)
start
end = df_edges.interval_end.max()
end = end.replace(hour=0, minute=0, second=0)
end
ranges = pd.date_range(start, end,freq='15T')
ranges
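# `pd.date_range(start, end, freq='15T')` produces every 15-minute boundary
# between the two timestamps, inclusive. The same grid can be sketched with
# stdlib datetime:

```python
from datetime import datetime, timedelta

def quarter_hour_grid(start, end):
    # Inclusive 15-minute boundaries, like pd.date_range(..., freq='15T').
    ticks = []
    t = start
    while t <= end:
        ticks.append(t)
        t += timedelta(minutes=15)
    return ticks

grid = quarter_hour_grid(datetime(2019, 3, 1, 0, 0),
                         datetime(2019, 3, 1, 1, 0))
# 00:00, 00:15, 00:30, 00:45, 01:00 -> 5 boundaries.
```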
# ## Testing for interval_start 2019-03-31 23:45:00
# +
pd.DataFrame(df_edges[df_edges["interval_start"] == '2019-03-31 23:45:00'])
interval_edges = pd.DataFrame(df_edges[df_edges["interval_start"] == '2019-03-31 23:45:00'])
G = nx.DiGraph()
for index, node in df.iterrows():
G.add_node(index, name=node.name, lat=node.lat, lng=node.lng)
for index, edge in interval_edges.iterrows():
G.add_edge(edge.rental_place, edge.return_place, weight=edge.number_of_trips)
nodes_degrees = G.degree(weight='weight')
nodes_in_degrees = G.in_degree(weight='weight')
nodes_out_degrees = G.out_degree(weight='weight')
nodes_pageranks = nx.pagerank(G, weight='weight')
nodes_info = [dict(nodes_degrees).values(), dict(nodes_in_degrees).values(), dict(nodes_out_degrees).values(), nodes_pageranks.values()]
metrics_interval_df = pd.DataFrame({'node': list(G.nodes),
'degree': list(dict(nodes_degrees).values()),
'in_degree': list(dict(nodes_in_degrees).values()),
'out_degree': list(dict(nodes_out_degrees).values()),
'pagerank': list(nodes_pageranks.values())},
)
metrics_interval_df['interval_start'] = '2019-03-31 23:45:00'
metrics_interval_df['interval_end'] = '2019-04-01 00:00:00'
pd.set_option('display.max_rows', metrics_interval_df.shape[0]+1)
metrics_interval_df.style.hide_index()
bikes_in_use = sum(metrics_interval_df.in_degree)
bikes_total = bikes_per_month[ranges[0].month]
bikes_percentage = bikes_in_use/bikes_total
bikes_usage_interval_df = pd.DataFrame({'interval_start': '2019-03-31 23:45:00',
'interval_end': '2019-04-01 00:00:00',
'bikes_in_use': bikes_in_use,
'bikes_total': bikes_total,
'bikes_percentage': bikes_percentage}, index = [1])
bikes_usage_interval_df
metrics_interval_df = metrics_interval_df[metrics_interval_df['degree'] > 0]
# -
metrics_interval_df
# ## Function for counting metrics and bikes_usage for each interval in month
def count_metrics(df, df_edges):
bikes_usage_df = pd.DataFrame(columns = ['interval_start', 'interval_end', 'bikes_in_use', 'bikes_total', 'bikes_percentage'])
metrics_df = pd.DataFrame(columns = ['node', 'degree', 'in_degree', 'out_degree', 'pagerank', 'interval_start', 'interval_end'])
i = 0
for interval in ranges:
interval_end = interval + datetime.timedelta(minutes=15)
if interval_end > end:
break
interval_edges = pd.DataFrame(df_edges[df_edges["interval_start"] == interval])
G = nx.DiGraph()
for index, node in df.iterrows():
G.add_node(index, name=node.name, lat=node.lat, lng=node.lng)
for index, edge in interval_edges.iterrows():
G.add_edge(edge.rental_place, edge.return_place, weight=edge.number_of_trips)
nodes_degrees = G.degree(weight='weight')
nodes_in_degrees = G.in_degree(weight='weight')
nodes_out_degrees = G.out_degree(weight='weight')
nodes_pageranks = nx.pagerank(G, weight='weight')
nodes_info = [dict(nodes_degrees).values(), dict(nodes_in_degrees).values(), dict(nodes_out_degrees).values(), nodes_pageranks.values()]
metrics_interval_df = pd.DataFrame({'node': list(G.nodes),
'degree': list(dict(nodes_degrees).values()),
'in_degree': list(dict(nodes_in_degrees).values()),
'out_degree': list(dict(nodes_out_degrees).values()),
'pagerank': list(nodes_pageranks.values())},
)
metrics_interval_df['interval_start'] = interval
metrics_interval_df['interval_end'] = interval_end
pd.set_option('display.max_rows', metrics_interval_df.shape[0]+1)
bikes_in_use = sum(metrics_interval_df.in_degree)
bikes_total = bikes_per_month[interval.month]
bikes_percentage = bikes_in_use/bikes_total
bikes_usage_interval_df = pd.DataFrame({'interval_start': interval,
'interval_end': interval_end,
'bikes_in_use': bikes_in_use,
'bikes_total': bikes_total,
'bikes_percentage': bikes_percentage}, index = [i])
i += 1
bikes_usage_df = pd.concat([bikes_usage_df, bikes_usage_interval_df])  # DataFrame.append is deprecated
metrics_interval_df = metrics_interval_df[metrics_interval_df['degree'] > 0]
metrics_df = pd.concat([metrics_df, metrics_interval_df])  # DataFrame.append is deprecated
return bikes_usage_df, metrics_df
bikes_usage, metrics = count_metrics(df, df_edges)
bikes_usage
metrics
# ## Read metrics and bikes usage from existing files
metrics = pd.read_csv('./metrics/historia_przejazdow_2019-03.csv_metrics.csv', usecols=['node', 'degree', 'in_degree', 'out_degree', 'pagerank', 'interval_start', 'interval_end'])
bikes_usage = pd.read_csv('./metrics/historia_przejazdow_2019-03.csv_bikes_usage.csv', usecols=['interval_start', 'interval_end', 'bikes_in_use', 'bikes_total', 'bikes_percentage'])
metrics
print("Max pagerank:",metrics['pagerank'].max())
print("Min pagerank:",metrics['pagerank'].min())
print("Mean pagerank:",metrics['pagerank'].mean())
pd.DataFrame(metrics.iloc[metrics['pagerank'].idxmax()]).transpose()
pd.DataFrame(metrics.iloc[metrics['pagerank'].idxmin()]).transpose()
print("Max degree:",metrics['degree'].max())
print("Min degree:",metrics['degree'].min())
print("Mean degree:",metrics['degree'].mean())
pd.DataFrame(metrics.iloc[metrics['degree'].idxmax()]).transpose()
pd.DataFrame(metrics.iloc[metrics['degree'].idxmin()]).transpose()
print("Max in_degree:",metrics['in_degree'].max())
print("Min in_degree:",metrics['in_degree'].min())
print("Mean in_degree:",metrics['in_degree'].mean())
pd.DataFrame(metrics.iloc[metrics['in_degree'].idxmax()]).transpose()
pd.DataFrame(metrics.iloc[metrics['in_degree'].idxmin()]).transpose()
print("Max out_degree:",metrics['out_degree'].max())
print("Min out_degree:",metrics['out_degree'].min())
print("Mean out_degree:",metrics['out_degree'].mean())
pd.DataFrame(metrics.iloc[metrics['out_degree'].idxmax()]).transpose()
pd.DataFrame(metrics.iloc[metrics['out_degree'].idxmin()]).transpose()
bikes_usage
print("Total amount of bikes for month:",bikes_usage['bikes_total'].max())
print("Max amount of bikes in use:",bikes_usage['bikes_in_use'].max())
print("Min amount of bikes in use:",bikes_usage['bikes_in_use'].min())
print("Mean amount of bikes in use:",bikes_usage['bikes_in_use'].mean())
print("Max bikes % usage:",bikes_usage['bikes_percentage'].max())
print("Min bikes % usage:",bikes_usage['bikes_percentage'].min())
print("Mean bikes % usage:",bikes_usage['bikes_percentage'].mean())
pd.DataFrame(bikes_usage.iloc[bikes_usage['bikes_in_use'].idxmax()]).transpose()
# +
#metrics.to_csv(join("metrics_test_07.csv"), index=False)
# +
#bikes_usage.to_csv(join("bikes_usage_test_07.csv"), index=False)
| generate_metrics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="UN5U7W0b7X-b" papermill={"duration": 0.015104, "end_time": "2021-09-24T12:57:54.41903", "exception": false, "start_time": "2021-09-24T12:57:54.403926", "status": "completed"} tags=[]
# This script applies domain-adaptive pretraining to BERT, RoBERTa, BART, and T5. The final pre-trained models can be found at: https://drive.google.com/drive/folders/1-A1hGKeu-27X9I4ySkja5vMlVscnF8GR?usp=sharing
#
# Required data to run this script:
# - the WNC corpus: https://github.com/rpryzant/neutralizing-bias
# + id="-EW98CVr7bMX" outputId="2b007272-ebe2-41e3-e7b6-17fefffdf702" papermill={"duration": 21.776564, "end_time": "2021-09-24T12:58:16.209532", "exception": false, "start_time": "2021-09-24T12:57:54.432968", "status": "completed"} tags=[]
# !pip install transformers
# !pip install openpyxl
# !pip install sentencepiece
import time
import openpyxl
import pandas as pd
import numpy as np
import torch
import torch.nn as nn
import io
import random
import sys
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score,f1_score,precision_score,recall_score,accuracy_score
import transformers
import sentencepiece
from transformers import T5Tokenizer,T5EncoderModel,AdamW,BertModel,BertTokenizer,RobertaModel,RobertaTokenizer,BartModel,BartTokenizer
from torch.utils.data import DataLoader,TensorDataset,ConcatDataset,RandomSampler
# + id="lMQo98GZfSlQ" papermill={"duration": 0.026304, "end_time": "2021-09-24T12:58:16.325607", "exception": false, "start_time": "2021-09-24T12:58:16.299303", "status": "completed"} tags=[]
# function split train dataset into train, validation and test sets
def train_test(text, labels, test_size):
train_text, test_text, train_labels, test_labels = train_test_split(text,
labels,
random_state=2018,
test_size=test_size,
stratify=labels)
return train_text, test_text, train_labels, test_labels
# + id="kY5iqRlwfYd1" papermill={"duration": 7.772323, "end_time": "2021-09-24T12:58:24.116674", "exception": false, "start_time": "2021-09-24T12:58:16.344351", "status": "completed"} tags=[]
#function to tokenize sentences. Respective model must be uncommented
#tokenizer = T5Tokenizer.from_pretrained('t5-base')
#tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
#tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
#tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
def tokenize(sentences,labels,max_length = None):
"tokenizes input and returns tokenized input + labels as tensors"
input_ids = []
attention_masks = []
for text in sentences.to_list():
encodings = tokenizer.encode_plus(text,add_special_tokens = True,max_length = max_length
,truncation = True, padding = 'max_length',return_attention_mask = True)
input_ids.append(encodings['input_ids'])
attention_masks.append(encodings['attention_mask'])
return torch.tensor(input_ids),torch.tensor(attention_masks),torch.tensor(labels.to_list())
# + id="tYLxdLbofq1A" papermill={"duration": 0.030543, "end_time": "2021-09-24T12:58:24.171021", "exception": false, "start_time": "2021-09-24T12:58:24.140478", "status": "completed"} tags=[]
# function to get predictions for test data
def predict(model,dataloader):
predictions = []
for batch in dataloader:
batch = [r.to(device) for r in batch]
sent_id, mask, labels = batch
with torch.no_grad():
output = model(sent_id, attention_mask=mask,labels = labels)
preds = output[1]
preds = preds.detach().cpu().numpy()
predictions.append(np.argmax(preds, axis = 1).flatten())
#merge sublists of predictions
predictions = [label for batch in predictions for label in batch]
return predictions
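# `np.argmax(preds, axis=1)` in predict() picks the column with the highest
# score per row -- the predicted class. A stdlib sketch of that step:

```python
def row_argmax(rows):
    # Index of the largest score in each row -- the predicted class.
    return [max(range(len(row)), key=row.__getitem__) for row in rows]

logits = [[0.1, 0.9], [0.7, 0.3]]
labels = row_argmax(logits)  # -> [1, 0]
```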
# + id="b8wGoA61M4gn" papermill={"duration": 0.03261, "end_time": "2021-09-24T12:58:24.225481", "exception": false, "start_time": "2021-09-24T12:58:24.192871", "status": "completed"} tags=[]
#set seed
np.random.seed(0)
torch.manual_seed(0)
random.seed(0)
torch.cuda.manual_seed_all(0)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# + id="kYHCogOqdfvC" papermill={"duration": 0.027235, "end_time": "2021-09-24T12:58:24.274741", "exception": false, "start_time": "2021-09-24T12:58:24.247506", "status": "completed"} tags=[]
#read WNC corpus
df_wiki = pd.read_excel('WNC.xlsx')
df_wiki.dropna(inplace=True)
# + id="xkb84DERme9r" papermill={"duration": 0.027531, "end_time": "2021-09-24T12:58:24.324983", "exception": false, "start_time": "2021-09-24T12:58:24.297452", "status": "completed"} tags=[]
#train test split + tokenization
train_text, test_text, train_labels, test_labels = train_test(df_wiki['text'], df_wiki['label_bias'],0.2)
train_input_ids,train_attention_masks,train_y = tokenize(train_text, train_labels)
test_input_ids,test_attention_masks,test_y = tokenize(test_text,test_labels)
train_data_wiki = TensorDataset(train_input_ids, train_attention_masks, train_y)
test_data_wiki = TensorDataset(test_input_ids, test_attention_masks, test_y)
# + id="C3aj7vxNJTiM" papermill={"duration": 0.03044, "end_time": "2021-09-24T12:58:40.324228", "exception": false, "start_time": "2021-09-24T12:58:40.293788", "status": "completed"} tags=[]
#define dataloader and epochs
epochs = 1
batch_size = 32
train_sampler = RandomSampler(train_data_wiki)
test_sampler = RandomSampler(test_data_wiki)
train_dataloader = DataLoader(train_data_wiki,sampler= train_sampler, batch_size=batch_size)
test_dataloader = DataLoader(test_data_wiki,sampler= test_sampler, batch_size=batch_size)
# + id="X_9S6NvBe-fO" papermill={"duration": 0.027027, "end_time": "2021-09-24T12:58:40.371438", "exception": false, "start_time": "2021-09-24T12:58:40.344411", "status": "completed"} tags=[]
#define loss
cross_entropy = nn.CrossEntropyLoss()
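# nn.CrossEntropyLoss combines a softmax over the logits with the negative
# log-likelihood of the true class (averaged over the batch). A stdlib
# sketch of the per-example computation, under a different name so it does
# not shadow the `cross_entropy` criterion above:

```python
import math

def softmax_nll(logits, target):
    # Softmax the logits, then take -log of the true-class probability --
    # the computation CrossEntropyLoss performs per example.
    exps = [math.exp(z) for z in logits]
    return -math.log(exps[target] / sum(exps))

loss_confident = softmax_nll([4.0, 0.0], target=0)  # small loss
loss_wrong = softmax_nll([0.0, 4.0], target=0)      # large loss
```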
# +
#create model:RoBERTa
# class RobertaClass(torch.nn.Module):
# def __init__(self):
# super(RobertaClass, self).__init__()
# self.roberta = RobertaModel#.from_pretrained("roberta-base")
# self.vocab_transform = torch.nn.Linear(768, 768)
# self.dropout = torch.nn.Dropout(0.2)
# self.classifier1 = nn.Linear(768,2)
# def forward(self, input_ids, attention_mask,labels):
# output_1 = self.roberta(input_ids=input_ids, attention_mask=attention_mask)
# hidden_state = output_1[0]
# pooler = hidden_state[:, 0]
# pooler = self.vocab_transform(pooler)
# pooler = self.dropout(pooler)
# output = self.classifier1(pooler)
# loss = cross_entropy(output,labels)
#         return loss, output  # also return logits so predict() can read output[1]
# + id="Y9qI1SEdbHMt" papermill={"duration": 0.029042, "end_time": "2021-09-24T12:58:40.42673", "exception": false, "start_time": "2021-09-24T12:58:40.397688", "status": "completed"} tags=[]
#create model: BART
# class BartClass(torch.nn.Module):
# def __init__(self):
# super(BartClass, self).__init__()
# self.bart = BartModel.from_pretrained("facebook/bart-base")
# self.vocab_transform = torch.nn.Linear(768, 768)
# self.dropout = torch.nn.Dropout(0.2)
# self.classifier1 = nn.Linear(768,2)
# def forward(self, input_ids, attention_mask,labels):
# output_1 = self.bart(input_ids=input_ids, attention_mask=attention_mask)
# hidden_state = output_1[0]
# pooler = hidden_state[:, 0]
# pooler = self.vocab_transform(pooler)
# pooler = self.dropout(pooler)
# output = self.classifier1(pooler)
# loss = cross_entropy(output,labels)
#         return loss, output  # also return logits so predict() can read output[1]
# +
#create model: Bert
# class BertClass(torch.nn.Module):
# def __init__(self):
# super(BertClass, self).__init__()
# self.bert = BertModel.from_pretrained("bert-base-uncased")
# self.vocab_transform = torch.nn.Linear(768, 768)
# self.dropout = torch.nn.Dropout(0.1)
# self.classifier1 = nn.Linear(768,2)
# def forward(self, input_ids, attention_mask,labels):
# output_1 = self.bert(input_ids=input_ids, attention_mask=attention_mask)
# hidden_state = output_1[0]
# pooler = hidden_state[:, 0]
# pooler = self.vocab_transform(pooler)
# pooler = self.dropout(pooler)
# output = self.classifier1(pooler)
# loss = cross_entropy(output,labels)
# return loss
# +
#create model: T5
# class T5Class(torch.nn.Module):
# def __init__(self):
# super(T5Class, self).__init__()
# self.T5 = T5EncoderModel.from_pretrained("t5-base")
# self.vocab_transform = torch.nn.Linear(768, 768)
# self.dropout = torch.nn.Dropout(0.1)
# self.classifier1 = nn.Linear(768,2)
# def forward(self, input_ids, attention_mask,labels):
# output_1 = self.T5(input_ids=input_ids, attention_mask=attention_mask)
# hidden_state = output_1[0]
# pooler = hidden_state[:, 0]
# pooler = self.vocab_transform(pooler)
# pooler = self.dropout(pooler)
# output = self.classifier1(pooler)
# loss = cross_entropy(output,labels)
# return loss
# + id="i2BwHSj6cDri" outputId="aa93a2de-5060-47cc-c733-af3bcb442f12" papermill={"duration": 0.074651, "end_time": "2021-09-24T12:58:40.521086", "exception": false, "start_time": "2021-09-24T12:58:40.446435", "status": "completed"} tags=[]
#connect to GPU
if torch.cuda.is_available():
device = torch.device("cuda:0")
print(f'There are {torch.cuda.device_count()} GPU(s) available.')
print('Device name:', torch.cuda.get_device_name(0))
else:
print('No GPU available, using the CPU instead.')
device = torch.device("cpu")
# + papermill={"duration": 29.992468, "end_time": "2021-09-24T12:59:10.534451", "exception": false, "start_time": "2021-09-24T12:58:40.541983", "status": "completed"} tags=[]
#instantiate model: uncomment the model you want to train
# model = BertClass()
# model = RobertaClass()
# model = BartClass()
# model = T5Class()
model = model.to(device)
optim = AdamW(model.parameters(), lr=1e-5)
# + id="o79JiLrW5MfB" papermill={"duration": 0.031646, "end_time": "2021-09-24T12:59:10.587149", "exception": false, "start_time": "2021-09-24T12:59:10.555503", "status": "completed"} tags=[]
#train function
def train(dataloader):
model.train()
total_loss = 0
counter = 0
for index,batch in enumerate(dataloader):
counter += 1
sys.stdout.write('\r Batch {}/{}'.format(counter,len(dataloader)))
optim.zero_grad()
batch = [r.to(device) for r in batch]
sent_id, mask, labels = batch
loss = model(sent_id, attention_mask=mask,labels = labels)
loss.backward()
total_loss = total_loss+loss.item()
optim.step()
del batch,sent_id,mask,labels
avg_loss = total_loss / len(dataloader)
return avg_loss
# + id="_GYQcOX_fhjv" papermill={"duration": 0.028306, "end_time": "2021-09-24T12:59:10.637067", "exception": false, "start_time": "2021-09-24T12:59:10.608761", "status": "completed"} tags=[]
#test function
def validate(dataloader):
model.eval()
total_loss = 0
print("\nValidating...")
counter = 0
for batch in dataloader:
counter +=1
batch = [r.to(device) for r in batch]
sent_id, mask, labels = batch
with torch.no_grad():
loss = model(sent_id, attention_mask=mask,labels = labels)
        total_loss = total_loss + loss.item()
avg_loss = total_loss / len(dataloader)
return avg_loss
# + id="Est1Bn9UAb3w" papermill={"duration": 0.030173, "end_time": "2021-09-24T12:59:10.687414", "exception": false, "start_time": "2021-09-24T12:59:10.657241", "status": "completed"} tags=[]
#train/validate function
def train_validate(train_dataloader,test_dataloader):
best_valid_loss = float('inf')
# empty lists to store training and validation loss of each epoch
train_losses=[]
valid_losses=[]
#for each epoch
for epoch in range(epochs):
print('\n Epoch {:} / {:}'.format(epoch + 1, epochs))
#train model
train_loss = train(train_dataloader)
if torch.cuda.is_available():
torch.cuda.empty_cache()
#evaluate model
valid_loss = validate(test_dataloader)
#save the best model
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
torch.save(model.state_dict(), 'pytorch_model.bin') #insert path here
#if validation loss increases, stop training
elif valid_loss >= best_valid_loss:
            print("\n Validation loss did not decrease; keeping the model saved from the previous epoch")
break
# append training and validation loss
train_losses.append(train_loss)
valid_losses.append(valid_loss)
print(f'\nTraining Loss: {train_loss:.3f}')
print(f'Validation Loss: {valid_loss:.3f}')
# + id="YTg22-3fbSI-" outputId="8398d94c-19ef-44c3-88d3-74c26987f02e" papermill={"duration": 0.025912, "end_time": "2021-09-24T12:59:10.733686", "exception": false, "start_time": "2021-09-24T12:59:10.707774", "status": "completed"} tags=[]
#apply training and validation
train_validate(train_dataloader,test_dataloader)
| domain-adaptive-pretraining.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Bloch Equation
# ### The spin-lattice relaxation time is defined by the z-component of the Bloch equations:
#
# ## $\frac{dM_z(t)}{dt} = \frac{M_0 - M_z(t)}{T_1}$
# ### The solution for $M_z = 0$ at $t = 0$ is:
# ## $M_z(t) = M_0(1 - e^{-\frac{t}{T_1}})$
# ### Or:
# ## $M_r = \frac{M_z(t)}{M_0} = (1 - e^{-\frac{t}{T_1}})$
#
# [N.B.(nota bene) - observe carefully or take special notice.]
#
# N.B. - In each of the following python code cells I have, explicitly, imported the necessary python routines. If you run these python cells sequentially you could just import all the routines in the first cell and proceed.
# Import necessary routines
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
#Assign Value for T1
T1 = 100
#Assign time values in ms
t = np.linspace(0,300,501)
#Define Magnetization Array
Mz=np.zeros(t.size)
#Fill-in Magnetization Values
for i in range(t.size):
Mz[i]=(1.0-np.exp(-t[i]/T1))
#Plot Routine
plt.plot(t, Mz)
plt.grid()
plt.title("Reduced Magnetization vs time")
plt.xlabel("t(ms)")
plt.ylabel("Magnetization (arbitrary units)")
plt.text(100,0.4,"$M_r = (1 - e^{-t/T_1})$",fontsize=15)
plt.text(100,0.2,"$T_1 = 100.0 ms$",
fontsize=10)
plt.figure()
# ### The solution for $M_z = -M_0$ at $t = 0$ is:
# ## $M_z(t) = M_0(1 - 2e^{-\frac{t}{T_1}})$
# ### Or:
# ## $M_r = \frac{M_z(t)}{M_0} = (1 - 2e^{-\frac{t}{T_1}})$
#
# #### This is the initial condition for your $T_1$ experiment. Then:
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
T1 = 100
t = np.linspace(0,300,501)
Mz=np.zeros(t.size)
for i in range(t.size):
Mz[i]=(1-2*np.exp(-t[i]/T1))
plt.plot(t, Mz)
plt.grid()
plt.title("Reduced Magnetization vs time")
plt.xlabel("t(ms)")
plt.ylabel("Magnetization (arbitrary units)")
plt.text(100,0.0,"$M_r = (1 - 2e^{-t/T_1})$",fontsize=15)
plt.text(100,-0.25,"$T_1 = 100.0 ms$",
fontsize=10)
plt.figure()
# ### Your apparatus can only measure the absolute value of the induced voltage (which is proportional to the z-component of the magnetization), so the signal will look something like this:
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
T1 = 100
t = np.linspace(0,300,501)
Mz=np.zeros(t.size)
for i in range(t.size):
Mz[i]=np.abs((1-2*np.exp(-t[i]/T1)))
plt.plot(t, Mz)
plt.grid()
plt.title("Absolute Value of the Reduced Magnetization vs time")
plt.xlabel("t(ms)")
plt.ylabel("|Magnetization| (arbitrary units)")
plt.text(150,0.40,"$M_r = |(1 - 2e^{-t/T_1})|$",fontsize=15)
plt.text(150,0.25,"$T_1 = 100.0 ms$",
fontsize=10)
plt.figure()
# Therefore, you will need to "correct" your data before doing a nonlinear fit.
# Notice that $M_r = 0$ at about 70.0 ms, so $T_1 = -70.0/\ln(1/2) = 101.0$ ms. This is an estimate of your spin-lattice relaxation time, $T_1$.
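# This zero-crossing estimate can be checked numerically; the 70.0 ms crossing time below is the value read off the plot above:

```python
import math

t0 = 70.0                       # zero-crossing time read off the plot (ms)

# M_r = 1 - 2*exp(-t0/T1) = 0  =>  T1 = -t0/ln(1/2) = t0/ln(2)
T1_estimate = t0 / math.log(2)
print(round(T1_estimate, 1))    # 101.0
```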
# ## Data Analysis
# Let's generate some data:
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import random as rand
# %matplotlib inline
T1 = 100
t = np.linspace(0,500,51)
Mz=np.zeros(t.size)
for i in range(t.size):
Mz[i]=8.5*np.abs((1-2*np.exp(-t[i]/T1)))+ (rand(1)-0.5)/2.5
plt.plot(t, Mz, ".")
plt.grid()
plt.title("Absolute Value of the Magnetization vs time")
plt.xlabel("t(ms)")
plt.ylabel("|Magnetization| (Equivalent Volts)")
plt.figure()
#Store Data in an array
MagData=[t,Mz]
np.savetxt('Data.dat',MagData)
# ### Importing Data:
# Let's see if the data are stored:
print(MagData)
# Now we need to change the sign of the magnetization below the minimum:
import matplotlib.pyplot as plt
import numpy as np
from numpy.random import random as rand
ind=np.argmin(Mz)
for i in range(ind):
Mz[i]=-Mz[i]
plt.plot(t, Mz, ".")
plt.grid()
plt.title("Magnetization vs time")
plt.xlabel("t(ms)")
plt.ylabel("Magnetization (Equivalent Volts)")
#plt.figure()
# ### Fitting the Data
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
plt.close('all')
#data_set=np.loadtxt("Data.dat",delimiter=",")
plt.plot(t,Mz,"b.",label="Data")
plt.title("Voltage (V) vs. Time (ms)")
plt.xlabel("t (ms)")
plt.ylabel("Voltage (V)")
plt.legend()
#
#Define Function to Fit
#
def func(t,Vmax,T1):
return Vmax*(1.0-2.0*np.exp(-t/T1))
#
#Set Initial Guess of Fit Parameters and Curve Fit
#
popt,pcov=curve_fit(func,t,Mz,p0=(8.0,70.0))
print("Vmax,T1",popt)
plt.plot(t,func(t,*popt),'r--',label='Fit: Vmax = %3.3f volts,\
T1 = %4.2f ms' % tuple(popt))
plt.grid()
plt.legend()
# Let's see how good our fit is:
perr = np.sqrt(np.diag(pcov))
print (perr)
# $\textit{pcov}$ is the covariance matrix for our fit. To get the 1-standard-deviation uncertainty for each parameter, take the square root of the corresponding diagonal element.
#
# Therefore, our estimate of the (1$\sigma$) uncertainty in $V_{max}$ is: $\Delta V_{max}$ = the $1^{st}$ entry in "perr" in volts.
#
# Our estimate of the (1$\sigma$) uncertainty in $T_1$ is: $\Delta T_1$ = the $2^{nd}$ entry in "perr" in ms.
#
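# As a sketch of reporting these results, the numbers below are hypothetical stand-ins for the `popt` and `pcov` returned by `curve_fit` above:

```python
import numpy as np

# Hypothetical fit output shaped like curve_fit's return values above:
popt = np.array([8.50, 100.3])            # (Vmax in volts, T1 in ms)
pcov = np.array([[0.0004, 0.0],
                 [0.0,    0.25]])         # assumed covariance matrix

perr = np.sqrt(np.diag(pcov))             # 1-sigma uncertainties
for name, value, err in zip(["Vmax (V)", "T1 (ms)"], popt, perr):
    print(f"{name} = {value:.2f} +/- {err:.2f}")
```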
| NMR/NMRNotebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import AlGDock.IO
dock6_reader = AlGDock.IO.dock6_mol2()
import glob
FNs = glob.glob('/Users/dminh/clusters/CCB/AstexDiv_xtal/4-UCSF_dock6/*/anchor_and_grow_scored.mol2')
Es = []
for FN in FNs:
(crd,E) = dock6_reader.read(FN)
Es.append(E['Grid Score'][0])
import sys
for E in Es:
sys.stdout.write('%f, '%E)
| Example/prmtopcrd/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sos
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SoS
# language: sos
# name: sos
# ---
# + [markdown] kernel="SoS" tags=[]
# # Phenotype data preprocessing
#
#
# This section documents the output from the Molecular Phenotype Processing section of the command generator MWE and explains the purpose of each command. The files used on this page can be found [here](https://drive.google.com/drive/folders/16ZUsciZHqCeeEWwZQR46Hvh5OtS8lFtA?usp=sharing).
#
# **Each command in the Molecular Phenotype Processing tutorials is generated once per theme. The MWE is considered a one-theme analysis.**
#
# + kernel="SoS"
# %preview ../images/eqtl_command.png
# + [markdown] kernel="SoS" tags=[]
# ## Annotation and region list generation
# The input molecular phenotype data is assumed to be a matrix whose first column is the gene name/gene ID. The first step of processing is to annotate the matrix with the genomic coordinates of each gene to make a BED file. The chromosome, transcription start site (TSS), and TSS+1 will be written to the chr, start, and end columns of the BED file, in accordance with the requirements of TensorQTL and APEX.
#
#
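# A minimal, pure-Python sketch of this annotation step; the gene names and coordinates below are invented for illustration, and the pipeline itself performs this via `gene_annotation.ipynb`:

```python
# Join a phenotype matrix (first column = gene name) with TSS coordinates
# from a GTF-derived lookup, writing chr, TSS, TSS+1 as the BED start/end.
tss_lookup = {"GENE1": ("chr1", 1000), "GENE2": ("chr2", 5000)}

phenotype_rows = [("GENE1", 2.3, 1.1), ("GENE2", 0.4, 3.2)]

bed_rows = []
for gene, *values in phenotype_rows:
    chrom, tss = tss_lookup[gene]
    bed_rows.append((chrom, tss, tss + 1, gene, *values))

for row in bed_rows:
    print("\t".join(map(str, row)))
```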
# + kernel="SoS"
sos run pipeline/gene_annotation.ipynb annotate_coord \
--cwd data_preprocessing/MWE/phenotype_data \
--phenoFile MWE.log2cpm.tsv \
--annotation-gtf reference_data/genes.reformatted.gene.gtf \
--sample-participant-lookup reference_data/sampleSheetAfterQC.txt \
--container containers/rna_quantification.sif \
--phenotype-id-type gene_name
# + kernel="SoS"
sos run pipeline/gene_annotation.ipynb region_list_generation \
--cwd data_preprocessing/MWE/phenotype_data \
--phenoFile data_preprocessing/MWE/phenotype_data/MWE.log2cpm.bed.gz\
--annotation-gtf reference_data/genes.reformatted.gene.gtf \
--sample-participant-lookup reference_data/sampleSheetAfterQC.txt \
--container containers/rna_quantification.sif \
--phenotype-id-type gene_name
# + [markdown] kernel="SoS" tags=[]
# ## Residual Expression
# The residual expression will be computed using the [covariate-PC matrix](https://github.com/cumc/xqtl-pipeline/blob/main/code/data_preprocessing/covariate/covariate_formatting.ipynb). Residual expression is needed because [PEER (MOFA2) handles covariates poorly](https://biofam.github.io/MOFA2/faq.html). However, the residual expression is only used for factor analysis.
# + kernel="SoS"
sos run pipeline/covariate_formatting.ipynb compute_residual \
--cwd output/data_preprocessing/MWE/phenotype \
--phenoFile data_preprocessing/MWE/phenotype_data/MWE.log2cpm.bed.gz \
--covFile data_preprocessing/MWE/covariates/MWE.covariate.cov.MWE.MWE.related.filtered.extracted.pca.projected.gz \
--container containers/bioinfo.sif
# + [markdown] kernel="SoS"
# ## Phenotype reformatting
# The phenotype file will be partitioned into one bed.gz per chromosome. Doing so allows [cis-eQTL association testing](https://github.com/cumc/xqtl-pipeline/blob/main/code/association_scan/cisQTL_scan.ipynb) to be done in parallel.
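# The partition itself amounts to a group-by on the chromosome column; a sketch with invented BED-like rows:

```python
from collections import defaultdict

# Group BED-like rows by their chromosome column so each group can be
# written to its own file and processed in parallel. Rows are illustrative.
rows = [("chr1", 100, 101, "GENE1"),
        ("chr2", 500, 501, "GENE2"),
        ("chr1", 900, 901, "GENE3")]

by_chrom = defaultdict(list)
for row in rows:
    by_chrom[row[0]].append(row)

print(sorted(by_chrom))          # ['chr1', 'chr2']
print(len(by_chrom["chr1"]))     # 2
```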
# + kernel="SoS"
sos run pipeline/phenotype_formatting.ipynb partition_by_chrom \
--cwd data_preprocessing/MWE/phenotype_data \
--phenoFile data_preprocessing/MWE/phenotype_data/MWE.log2cpm.bed.gz \
--region-list data_preprocessing/MWE/phenotype_data/MWE.log2cpm.region_list \
--container containers/rna_quantification.sif \
--mem 4G
| code/data_preprocessing/phenotype_preprocessing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:nd_deepl] *
# language: python
# name: conda-env-nd_deepl-py
# ---
import numpy as np
import pandas as pd
df = pd.read_csv('student_data-Copy1.csv')
# set the maximum number of rows displayed in the notebook
pd.set_option('display.max_rows', 10)
df.info()
df
# show first 10 rows
df.head(10)
# show last 10 rows
df.tail(10)
# filter the table based on key 'rank', 3 possible values
df_rank_1 = df[ df['rank'] == 1 ]
df_rank_2 = df[ df['rank'] == 2 ]
df_rank_3 = df[ df['rank'] == 3 ]
# show the GPAs of rank 1 group
df_rank_1['gpa'] # or df_rank_1.gpa (works as well)
df.columns # to display what columns do we have
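# Beyond boolean filtering, `groupby` summarizes each rank at once. A sketch using a small invented frame with the same `gpa`/`rank` columns as the tutorial data:

```python
import pandas as pd

# Illustrative values only; the real data comes from student_data-Copy1.csv.
df = pd.DataFrame({
    "gpa":  [3.6, 3.2, 3.9, 2.8, 3.4, 3.1],
    "rank": [1,   2,   1,   3,   2,   3],
})

# Average GPA per rank group -- a common follow-up to the filtering above.
mean_gpa = df.groupby("rank")["gpa"].mean()
print(mean_gpa)
```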
| intro-neural-networks/pandas-tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # In-class exercise 9: Deep Learning (Part 1A)
# In this notebook we will see how to write efficient and numerically stable code.
# +
import numpy as np
import matplotlib.pyplot as plt
import time
# %matplotlib inline
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.preprocessing import minmax_scale
# +
X, y = load_breast_cancer(return_X_y=True)
# Scale each feature to [-1, 1] range
X = minmax_scale(X, feature_range=(-1, 1))
# -
# # 1. Vectorization
# ## 1.1. Logistic regression (two classes)
# **Setting:** Logistic regression (two classes)
#
# **Task:** Generate predictions for the entire dataset
n_features = X.shape[1]
w = np.random.normal(size=[n_features], scale=0.1) # weight vector
b = np.random.normal(size=[1]) # bias
def sigmoid(t):
"""Apply sigmoid to the input array."""
return 1 / (1 + np.exp(-t))
# ### Bad - for loops
def predict_for_loop(X, w, b):
"""Generate predictions with a logistic regression model using a for-loop.
Args:
X: data matrix, shape (N, D)
w: weights vector, shape (D)
b: bias term, shape (1)
Returns:
        y: probabilities of the positive class, shape (N)
"""
n_samples = X.shape[0]
y = np.zeros([n_samples])
for i in range(n_samples):
score = np.dot(X[i], w) + b
y[i] = sigmoid(score)
return y
# ### Good - vectorization
def predict_vectorized(X, w, b):
"""Generate predictions with a logistic regression model using vectorized operations.
Args:
X: data matrix, shape (N, D)
w: weights vector, shape (D)
b: bias term, shape (1)
Returns:
        y: probabilities of the positive class, shape (N)
"""
    scores = X @ w + b  # @ is ordinary matrix multiplication, i.e. np.dot
y = sigmoid(scores)
return y
# ### Compare the runtime of two variants
# %%timeit
predict_for_loop(X, w, b)
# %%timeit
predict_vectorized(X, w, b)
# ## 1.2. K-nearest neighbors
# A more complicated task: compute the matrix of pairwise distances.
#
# Given a data matrix `X` of size `[N, D]`, compute the matrix `dist` of pairwise distances of size `[N, N]`, where `dist[i, j] = l2_distance(X[i], X[j])`.
# ### Bad - for loops
def l2_distance(x, y):
"""Compute Euclidean distance between two vectors."""
return np.sqrt(np.sum((x - y) ** 2))
def distances_for_loop(X):
"""Compute pairwise distances between all instances (for loop version).
Args:
X: data matrix, shape (N, D)
Returns:
dist: matrix of pairwise distances, shape (N, N)
"""
n_samples = X.shape[0]
distances = np.zeros([n_samples, n_samples])
for i in range(n_samples):
for j in range(n_samples):
distances[i, j] = l2_distance(X[i], X[j])
return distances
dist1 = distances_for_loop(X)
# ### Good - vectorization
# How can we compute all the distances in a vectorized way?
#
# Start with a simpler example.
x = np.arange(5, dtype=np.float64)
print(x)
print(x.shape)
# Increase the dimension of an array using `np.newaxis`
print(x[:, np.newaxis])
print(x[np.newaxis, :])
print(x[np.newaxis, :] - x[:, np.newaxis])
print(-x[np.newaxis, :] + x[:, np.newaxis])
def distances_vectorized(X):
"""Compute pairwise distances between all instances (vectorized version).
Args:
X: data matrix, shape (N, D)
Returns:
dist: matrix of pairwise distances, shape (N, N)
"""
return np.sqrt(((X[:, None] - X[None, :])**2).sum(-1))
dist2 = distances_vectorized(X)
# Make sure that both variants produce the same results
# Direct comparison fails because of tiny numerical differences
np.all(dist1 == dist2)
# Two results are very close
np.linalg.norm(dist1 - dist2, ord='fro')
# Use np.allclose to compare
np.allclose(dist1, dist2)
# ### Best - library function
# +
from scipy.spatial.distance import cdist, pdist, squareform
dist3 = cdist(X, X)
dist4 = squareform(pdist(X))
# -
# ### Compare the runtime
# %%timeit
dist1 = distances_for_loop(X)
# %%timeit
dist2 = distances_vectorized(X)
# %%timeit
dist3 = cdist(X, X)  # distances between each row of X and each row of X, giving an N x N matrix
# %%timeit
dist4 = squareform(pdist(X))  # pairwise distances within a single array, expanded from condensed to square form
np.allclose(dist4, dist3)
# ## Lessons:
# 1. For-loops are extremely slow! Avoid them whenever possible.
# 2. A better alternative - use matrix operations & broadcasting
# 3. An even better alternative - use library functions (if they are available).
# 4. Implementations with for-loops can be useful for debugging vectorized code.
# # 2. Numerical stability
# Typically, GPUs use single precision (32bit) floating point numbers (in some cases even half precision / 16bit). This significantly speeds up the computations, but also makes numerical issues a lot more likely.
# Because of this we always have to be extremely careful to implement our code in a numerically stable way.
#
# Most commonly, numerical issues occur when dealing with `log` and `exp` functions (e.g. when computing cross-entropy of a categorical distribution) and `sqrt` for values close to zero (e.g. when computing standard deviations or normalizing the $L_2$ norm).
# ## 2.1. Avoiding numerical overflow (exploding `exp`)
# Softmax function $f : \mathbb{R}^D \to \Delta^{D - 1}$ converts a vector $\mathbf{x} \in \mathbb{R}^D$ into a vector of probabilities.
#
# $$f(\mathbf{x})_j = \frac{\exp(x_j)}{\sum_{d=1}^{D} \exp(x_d)}$$
#
# Apply the softmax function to the following vector.
x = np.linspace(0., 4., 5).astype(np.float32)
x
# Our code here
denominator = np.exp(x).sum()
np.exp(x) / denominator
# Now apply it to the following vector
x = np.linspace(50., 90., 5).astype(np.float32)
x
# Our code here
denominator = np.exp(x).sum()
np.exp(x) / denominator
# How to avoid the explosion? Subtract the maximum before exponentiating:
x_shifted = x - np.max(x)
denominator = np.exp(x_shifted).sum()
np.exp(x_shifted) / denominator
# ## 2.2. Working in the log-space / simplifying the expressions
# Binary cross entropy (BCE) loss for a logistic regression model (corresponds to negative log-likelihood of a Bernoulli model)
#
# $$\log p(\mathbf{y} \mid \mathbf{X}, \mathbf{w}, b) = -\sum_{i=1}^{N} y_i \log \sigma(\mathbf{w}^T \mathbf{x}_i + b) + (1 - y_i) \log (1 - \sigma(\mathbf{w}^T \mathbf{x}_i + b))$$
#
#
# Implement the BCE computation.
# +
# TODO
def sigmoid(t):
return 1 / (1 + np.exp(-t))
def binary_cross_entropy_unstable(scores, labels):
return -labels * np.log(sigmoid(scores)) - (1 - labels) * np.log(1 - sigmoid(scores))
# +
x = np.array([[20., 20.]])
w = np.array([[1., 1.]])
y = np.array([1.])
scores = x @ w.T
binary_cross_entropy_unstable(scores, y)
# -
# Try to simplify the BCE loss as much as possible
# +
# TODO
def binary_cross_entropy_stable(scores, labels):
return np.log(1 + np.exp(scores)) - labels * scores
binary_cross_entropy_stable(scores, y)
# -
# ## 2.3. Loss of numerical precision
# Implement the log sigmoid function
#
# $$f(x) = \log \sigma(x) = \log \left(\frac{1}{1 + \exp(-x)}\right)$$
# Your code here
def log_sigmoid_unstable(x):
return np.log(1 / (1 + np.exp(-x)))
# `float32` has much lower "resolution" than `float64`
x = np.linspace(0, 30, 11).astype(np.float32)
log_sigmoid_unstable(x)
x = np.linspace(0, 30, 11).astype(np.float64)
log_sigmoid_unstable(x)
# Implement the log-sigmoid function in a numerically stable way
def log_sigmoid_stable(x):
return -np.log1p(np.exp(-x))
x = np.linspace(0, 30, 11).astype(np.float32)
log_sigmoid_stable(x)
# Relevant functions: `np.log1p`, `np.expm1`, `scipy.special.logsumexp`, `scipy.special.softmax` -- these are also implemented in all major deep learning frameworks.
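# For example, a stable log-softmax can be built on the log-sum-exp trick; the helper below mirrors `scipy.special.logsumexp` for 1-D input:

```python
import numpy as np

def logsumexp(x):
    """Stable log(sum(exp(x))) via the max-shift trick;
    equivalent to scipy.special.logsumexp for 1-D input."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

x = np.linspace(50., 90., 5).astype(np.float32)
log_probs = x - logsumexp(x)   # stable log-softmax: no overflow for large x
probs = np.exp(log_probs)
print(np.allclose(probs.sum(), 1.0))   # True
```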
# ## Lessons:
# 1. Be especially careful when working with `log` and `exp` functions in **single precision** floating point arithmetics
# 2. Work in the log-space when possible
# 3. Use numerically stable library functions when available
| inclass_07_vectorization_numerics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# This is the first of two notebooks on the Euler equations. In this notebook, we discuss the equations and the structure of the exact solution to the Riemann problem. In the next notebook, we investigate approximate Riemann solvers.
# # Table of contents
#
# - [Fluid dynamics](#Fluid-dynamics)
# - [The Euler equations](#The-Euler-equations)
# - [Hyperbolic structure](#Hyperbolic-structure-of-the-Euler-equations)
# - [Exact solution of the Riemann problem](#Exact-solution-of-the-Riemann-problem)
# - [Plot particle trajectories](#Plot-particle-trajectories)
# - [Plot Riemann solution with advected colors](#Plot-Riemann-solution-with-advected-colors)
# - [Interactive Riemann solver](#Interactive-Riemann-solver)
# - [Riemann problems with vacuum](#Riemann-problems-with-vacuum)
# # Fluid dynamics
# In this chapter we study the system of hyperbolic PDEs that governs the motions of fluids in the absence of viscosity. These consist of conservation laws for **mass, momentum**, and **energy**. Together, they are referred to as the **compressible Euler equations**, or simply the Euler equations. Our discussion here is fairly brief; for much more detail see <cite data-cite="toro2013riemann"><a href="riemann.html#toro2013riemann">(Toro, 2013)</a></cite>.
# ### Mass conservation
# We will use $\rho(x,t)$ to denote the fluid density and $u(x,t)$ for its velocity. Then the equation for conservation of mass is just the **continuity equation**:
#
# $$\rho_t + (\rho u)_x = 0.$$
# ### Momentum conservation
# The momentum is given by the product of density and velocity, $\rho u$. The momentum flux has two components. First, the momentum is transported in the same way that the density is; this flux is given by the momentum times the velocity: $\rho u^2$.
#
# To understand the second term in the momentum flux, we must realize that a fluid is made up of many tiny molecules. The density and velocity we are modeling are average values over some small region of space. The individual molecules in that region are not all moving with exactly velocity $u$; that's just their average. Each molecule also has some additional random velocity component. These random velocities are what accounts for the **pressure** of the fluid, which we'll denote by $p$. These velocity components also lead to a net flux of momentum. Thus the momentum conservation equation is
#
# $$(\rho u)_t + (\rho u^2 + p)_x = 0.$$
# ### Energy conservation
# The energy has two components: internal energy $\rho e$ and kinetic energy $\rho u^2/2$:
#
# $$E = \rho e + \frac{1}{2}\rho u^2.$$
#
# Like the momentum flux, the energy flux involves both bulk transport ($Eu$) and transport due to pressure ($pu$):
#
# $$E_t + (u(E+p))_x = 0.$$
# ### Equation of state
# You may have noticed that we have 4 unknowns (density, momentum, energy, and pressure) but only 3 conservation laws. We need one more relation to close the system. That relation, known as the equation of state, expresses how the pressure is related to the other quantities. We'll focus on the case of a polytropic ideal gas, for which
#
# $$p = \rho e (\gamma-1).$$
#
# Here $\gamma$ is the ratio of specific heats, which for air is approximately 1.4.
# ## The Euler equations
# We can write the three conservation laws as a single system $q_t + f(q)_x = 0$ by defining
#
# \begin{align}
# q & = \begin{pmatrix} \rho \\ \rho u \\ E\end{pmatrix}, &
# f(q) & = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ u(E+p)\end{pmatrix}.
# \end{align}
# In three dimensions, the equations are similar. We have two additional velocity components $v, w$, and their corresponding fluxes. Additionally, we have to account for fluxes in the $y$ and $z$ directions. We can write the full system as
#
# $$ q_t + f(q)_x + g(q)_y + h(q)_z = 0$$
#
# with
#
# \begin{align}
# q & = \begin{pmatrix} \rho \\ \rho u \\ \rho v \\ \rho w \\ E\end{pmatrix}, &
# f(q) & = \begin{pmatrix} \rho u \\ \rho u^2 + p \\ \rho u v \\ \rho u w \\ u(E+p)\end{pmatrix} &
# g(q) & = \begin{pmatrix} \rho v \\ \rho uv \\ \rho v^2 + p \\ \rho v w \\ v(E+p)\end{pmatrix} &
# h(q) & = \begin{pmatrix} \rho w \\ \rho uw \\ \rho vw \\ \rho w^2 + p \\ w(E+p)\end{pmatrix}.
# \end{align}
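# As a sketch, the 1D flux $f(q)$ can be evaluated directly from the conserved variables, recovering the pressure from the ideal-gas equation of state (the states below are arbitrary examples):

```python
import numpy as np

def euler_flux(q, gamma=1.4):
    """1D Euler flux f(q) for conserved variables q = (rho, rho*u, E),
    using the polytropic ideal gas law p = (gamma - 1) * rho * e."""
    rho, mom, E = q
    u = mom / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * u**2)  # rho*e = E - rho*u^2/2
    return np.array([mom, mom * u + p, u * (E + p)])

q = np.array([1.0, 0.0, 2.5])   # rho=1, u=0, so p = 0.4*2.5 = 1
print(euler_flux(q))            # flux components [rho*u, rho*u^2+p, u*(E+p)]
```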
# ## Hyperbolic structure of the 1D Euler equations
#
# In our discussion of the structure of these equations, it is convenient to work with the primitive variables $(\rho, u, p)$ rather than the conserved variables. In quasilinear form, we have
#
# \begin{align}
# \begin{bmatrix} \rho \\ u \\ p \end{bmatrix}_t
# + \begin{bmatrix} u & \rho & 0 \\ 0 & u & 1/\rho \\ 0 & \gamma \rho & u \end{bmatrix} \begin{bmatrix} \rho \\ u \\ p \end{bmatrix}_x & = 0.
# \end{align}
# ### Characteristic velocities
# In primitive variables, the eigenvalues of the flux Jacobian for the 1D Euler equations are:
#
# \begin{align}
# \lambda_1 & = u-c & \lambda_2 & = u & \lambda_3 & = u+c
# \end{align}
#
# Here $c$ is the sound speed:
#
# $$ c = \sqrt{\frac{\gamma p}{\rho}}.$$
#
# The eigenvectors of the flux Jacobian are (again in primitive variables):
#
# \begin{align}
# r_1 & = \begin{bmatrix} -\rho/c \\ 1 \\ - \rho c \end{bmatrix} &
# r_2 & = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} &
# r_3 & = \begin{bmatrix} \rho/c \\ 1 \\ \rho c \end{bmatrix}.
# \end{align}
#
# Notice that the second characteristic speed, $\lambda_2$, depends only on $u$ and that $u$ does not change as we move in the direction of $r_2$. In other words, the 2-characteristic velocity is constant on 2-integral curves. We say this characteristic field is **linearly degenerate**; it admits neither shocks nor rarefactions. In a simple 2-wave, all characteristics are parallel. A jump in this family carries a change only in the density, and is referred to as a **contact discontinuity**.
#
# The other two fields have characteristic velocities that **do** vary along the corresponding integral curves; thus the 1-wave and the 3-wave in any Riemann solution will be either a shock or a rarefaction. We say these characteristic fields are **genuinely nonlinear**.
#
# Mathematically, the $p$th field is linearly degenerate if
#
# $$\nabla \lambda_p(q) \cdot r_p(q) = 0$$
#
# and genuinely nonlinear if
#
# $$\nabla \lambda_p(q) \cdot r_p(q) \ne 0.$$
# ### Riemann invariants
#
# Since the Euler equations have three components, we expect each integral curve (a 1D set in 3D space) to be defined by two Riemann invariants. These are:
#
# \begin{align}
# 1 & : s, u+\frac{2c}{\gamma-1} \\
# 2 & : u, p \\
# 3 & : s, u-\frac{2c}{\gamma-1}.
# \end{align}
#
# Here $s$ is the **specific entropy**:
#
# $$ s = c_v \log(p/\rho^\gamma) + C.$$
#
# The level sets of these Riemann invariants are two-dimensional surfaces; the intersection of two appropriate level sets defines an integral curve.
#
# ### Integral curves
# The 2-integral curves, of course, are simply lines of constant pressure and velocity (with varying density). Since the field is linearly degenerate, these coincide with the Hugoniot loci.
# We can determine the form of the 1- and 3-integral curves using the Riemann invariants above. For a curve passing through $(\rho_0,u_0,p_0)$, we find
#
# \begin{align}
# u(p) & = u_0 \pm \frac{2c_0}{\gamma-1}\left(1-(p/p_0)^{(\gamma-1)/(2\gamma)}\right).
# \end{align}
# Here the plus sign is for 1-waves and the minus sign is for 3-waves.
#
# Below we plot the projection of some integral curves on the $p-u$ plane.
#
# **To do**: Discuss how $\rho$ fits into this, or plot 3D integral curves.
# %matplotlib inline
from exact_solvers import Euler
import matplotlib.pyplot as plt
import numpy as np
from ipywidgets import widgets
from clawpack import riemann
from utils import riemann_tools
import matplotlib
matplotlib.rcParams.update({'font.size': 12})
from collections import namedtuple
Primitive_State = namedtuple('State', Euler.primitive_variables)
gamma = 1.4
from ipywidgets import interact # for interactive widgets
#from utils.snapshot_widgets import interact # for static figure that can be viewed online
def plot_integral_curves(plot_1=True,plot_3=False,gamma=1.4,rho_0=1.):
N = 400
p = np.linspace(0.,5,N)
p_0 = 1.
uu = np.linspace(-3,3,15)
c_0 = np.sqrt(gamma*p_0/rho_0)
if plot_1:
for u_0 in uu:
u = u_0 + (2*c_0)/(gamma-1.)* \
(1.-(p/p_0)**((gamma-1)/(2*gamma)))
plt.plot(p,u,color='coral')
if plot_3:
for u_0 in uu:
u = u_0 - (2*c_0)/(gamma-1.)* \
(1.-(p/p_0)**((gamma-1)/(2*gamma)))
plt.plot(p,u,color='maroon')
plt.xlabel('p'); plt.ylabel('u')
plt.show()
interact(plot_integral_curves,
gamma=widgets.FloatSlider(min=1.1,max=3,value=1.4));
# ## The structure of centered rarefaction waves
# ## Rankine-Hugoniot jump conditions
#
# The Hugoniot loci for 1- and 3-shocks are
# \begin{align}
# u(p) & = u_0 \pm \frac{2c_0}{\sqrt{2\gamma(\gamma-1)}}
# \left(\frac{1-p/p_0}{\sqrt{1+\beta p/p_0}}\right), \\
# \end{align}
# where $\beta = (\gamma+1)/(\gamma-1)$.
# Here the plus sign is for 1-shocks and the minus sign is for 3-shocks.
#
# **To do**: Discuss how $\rho$ varies, and maybe plot 3D integral curves.
def plot_hugoniot_loci(plot_1=True,plot_3=False,gamma=1.4,rho_0=1.):
N = 400
p = np.linspace(1.e-3,5,N)
p_0 = 1.
uu = np.linspace(-3,3,15)
c_0 = np.sqrt(gamma*p_0/rho_0)
beta = (gamma+1.)/(gamma-1.)
if plot_1:
for u_0 in uu:
u_1 = u_0 + (2*c_0)/np.sqrt(2*gamma*(gamma-1.))* \
(1.-p/p_0)/(np.sqrt(1+beta*p/p_0))
plt.plot(p,u_1,color='coral')
if plot_3:
for u_0 in uu:
u_1 = u_0 - (2*c_0)/np.sqrt(2*gamma*(gamma-1.))* \
(1.-p/p_0)/(np.sqrt(1+beta*p/p_0))
plt.plot(p,u_1,color='maroon')
plt.xlabel('p'); plt.ylabel('u')
plt.show()
interact(plot_hugoniot_loci,
gamma=widgets.FloatSlider(min=1.1,max=3,value=1.4));
# ### Entropy condition
# ## Exact solution of the Riemann problem
# Executing the cell below loads some subroutines that find the exact solution of the Riemann problem. In brief, the Riemann solution is found as follows:
#
# 1. Define a piecewise function giving the middle state velocity $u_m$ that can be connected to the left state by an entropy-satisfying shock or rarefaction, as a function of the middle-state pressure $p_m$.
# 2. Define a piecewise function giving the middle state velocity $u_m$ that can be connected to the right state by an entropy-satisfying shock or rarefaction, as a function of the middle-state pressure $p_m$.
# 3. Use an iterative solver to find the intersection of the two functions defined above.
# 4. Use the Riemann invariants to find the intermediate state densities and the solution structure inside any rarefaction waves.
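# The iteration in step 3 can be sketched independently of the `exact_solvers` module. The code below is an illustration, not the notebook's actual solver: it builds the two $u_m(p_m)$ curves from the integral-curve and Hugoniot-locus formulas above for a $\gamma$-law gas, and intersects them with a plain bisection instead of a library root-finder. States are `(rho, u, p)` triples.

```python
import numpy as np

gamma = 1.4

def u_across_wave(p, rho_0, u_0, p_0, sign):
    """Velocity reachable from (rho_0, u_0, p_0) at pressure p through a
    rarefaction (integral curve, p <= p_0) or a shock (Hugoniot locus, p > p_0).
    sign = +1 for the 1-wave (left state), -1 for the 3-wave (right state)."""
    c_0 = np.sqrt(gamma * p_0 / rho_0)
    if p <= p_0:  # rarefaction: integral curve
        return u_0 + sign * 2*c_0/(gamma - 1) * (1 - (p/p_0)**((gamma - 1)/(2*gamma)))
    beta = (gamma + 1)/(gamma - 1)  # shock: Hugoniot locus
    return u_0 + sign * 2*c_0/np.sqrt(2*gamma*(gamma - 1)) * (1 - p/p_0)/np.sqrt(1 + beta*p/p_0)

def middle_state(left, right, p_lo=1e-9, p_hi=100., tol=1e-12):
    """Bisect for the middle pressure where the two u(p) curves intersect."""
    f = lambda p: u_across_wave(p, *left, sign=+1) - u_across_wave(p, *right, sign=-1)
    while p_hi - p_lo > tol:
        p_mid = 0.5*(p_lo + p_hi)
        if f(p_lo)*f(p_mid) <= 0:
            p_hi = p_mid
        else:
            p_lo = p_mid
    p_m = 0.5*(p_lo + p_hi)
    return p_m, u_across_wave(p_m, *left, sign=+1)
```

# For the shock-tube states used below, `middle_state((3., 0., 3.), (1., 0., 1.))` gives a middle pressure between the left and right values, as expected for a 1-rarefaction/3-shock solution.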
# Execute the cell below (after removing `#`) to bring the Euler solver into the notebook with syntax highlighting, or you can examine it by looking at this file: [exact_solvers/Euler.py](exact_solvers/Euler.py)
# +
# #%load exact_solvers/Euler.py
# -
# ### Examples of Riemann solutions
# ### Problem 1: Sod shock tube
#
# First we consider the classic shock tube problem, with high density and pressure on the left, low density and pressure on the right. Both sides are initially at rest. The solution includes a rarefaction, a contact, and a shock.
def riemann_solution(left_state, right_state):
q_left = Euler.primitive_to_conservative(*left_state)
q_right = Euler.primitive_to_conservative(*right_state)
ex_states, ex_speeds, reval, wave_types = Euler.exact_riemann_solution(q_left ,q_right, gamma)
plot_function = riemann_tools.make_plot_function(ex_states, ex_speeds, reval, wave_types,
layout='vertical',
variable_names=Euler.primitive_variables,
plot_chars=[Euler.lambda1,Euler.lambda2,Euler.lambda3],
derived_variables=Euler.cons_to_prim)
interact(plot_function, t=widgets.FloatSlider(value=0.1,min=0,max=.9),
which_char=widgets.Dropdown(options=[None,1,2,3],description='Show characteristics'))
# +
left_state = Primitive_State(Density = 3.,
Velocity = 0.,
Pressure = 3.)
right_state = Primitive_State(Density = 1.,
Velocity = 0.,
Pressure = 1.)
riemann_solution(left_state,right_state)
# -
# Here is a plot of the solution in the phase plane, showing the integral curve connecting the left and middle states, and the Hugoniot locus connecting the middle and right states.
Euler.phase_plane_plot(left_state, right_state)
# ### Problem 2: Symmetric expansion
#
# Next we consider the case of equal densities and pressures, and equal and opposite velocities, with the initial states moving away from each other. The result is two rarefaction waves (the contact has zero strength).
# +
left_state = Primitive_State(Density = 1.,
Velocity = -3.,
Pressure = 1.)
right_state = Primitive_State(Density = 1.,
Velocity = 3.,
Pressure = 1.)
riemann_solution(left_state,right_state);
# -
Euler.phase_plane_plot(left_state, right_state)
# ### Problem 3: Colliding flows
#
# Next, consider the case in which the left and right states are moving toward each other. This leads to a pair of shocks, with a high-density, high-pressure state in between.
# +
left_state = Primitive_State(Density = 1.,
Velocity = 3.,
Pressure = 1.)
right_state = Primitive_State(Density = 1.,
Velocity = -3.,
Pressure = 1.)
riemann_solution(left_state,right_state)
# -
Euler.phase_plane_plot(left_state, right_state)
# ## Plot particle trajectories
#
# In the next plot of the Riemann solution in the $x$-$t$ plane, we also plot the trajectories of a set of particles initially distributed along the $x$ axis at $t=0$, with the spacing inversely proportional to the density. The evolution of the distance between particles gives an indication of how the density changes.
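# The statement that the particle spacing is inversely proportional to the density can be made concrete: equally spaced levels of the cumulative mass $m(x) = \int \rho \, dx$ give the initial particle positions. The helper below is a hypothetical illustration of that idea, not the `riemann_tools.compute_riemann_trajectories` implementation.

```python
import numpy as np

def particle_positions(x, rho, n_particles):
    """Place particles so that each gap holds equal mass: invert the
    cumulative mass m(x) = integral of rho dx at equally spaced mass levels."""
    # trapezoid-rule cumulative mass on the grid x
    m = np.concatenate(([0.0], np.cumsum(0.5*(rho[1:] + rho[:-1])*np.diff(x))))
    levels = np.linspace(0.0, m[-1], n_particles)
    return np.interp(levels, m, x)
```

# With constant density the particles come out equally spaced; doubling the density on half the domain halves the spacing there.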
# +
left_state = Primitive_State(Density = 3.,
Velocity = 0.,
Pressure = 3.)
right_state = Primitive_State(Density = 1.,
Velocity = 0.,
Pressure = 1.)
q_left = Euler.primitive_to_conservative(*left_state)
q_right = Euler.primitive_to_conservative(*right_state)
ex_states, ex_speeds, reval, wave_types = Euler.exact_riemann_solution(q_left ,q_right, gamma)
def reval_rho_u(x):
q = reval(x)
rho = q[0]
u = q[1]/q[0]
rho_u = np.vstack((rho,u))
return rho_u
# Specify density of trajectories to left and right:
rho_l = q_left[0] / 10.
rho_r = q_right[0] / 10.
x_traj, t_traj, xmax = riemann_tools.compute_riemann_trajectories(ex_states, ex_speeds, reval_rho_u, wave_types,
i_vel=1, rho_left=rho_l, rho_right=rho_r)
riemann_tools.plot_riemann_trajectories(x_traj, t_traj, ex_speeds, wave_types)
# -
# Recall that the evolution of the distance between particles gives an indication of how the density changes. Note that it increases across the shock wave and decreases through the rarefaction wave, and that in general there is a jump in density across the contact discontinuity.
# ## Plot Riemann solution with advected colors
#
# The next cell defines a function to plot the Riemann solution with the density plot also showing an advected color to help visualize the flow better. The fluid to the left of $x=0$ initially is colored red and to the right of $x=0$ is colored blue, with stripes of different shades of these colors to help visualize the motion of the fluids.
def plot_exact_riemann_solution_stripes(rho_l=3.,u_l=0.,p_l=3.,rho_r=1.,u_r=0.,p_r=1.,t=0.4):
q_l = Euler.primitive_to_conservative(rho_l,u_l,p_l)
q_r = Euler.primitive_to_conservative(rho_r,u_r,p_r)
# matplotlib.mlab.find was removed in Matplotlib 3.1; define an equivalent:
def find(condition): return np.nonzero(condition)[0]
x = np.linspace(-1.,1.,1000)
states, speeds, reval, wave_types = Euler.exact_riemann_solution(q_l, q_r, gamma=gamma)
q = reval(x/t)
primitive = Euler.conservative_to_primitive(q[0],q[1],q[2])
# compute particle trajectories:
def reval_rho_u(x):
q = reval(x)
rho = q[0]
u = q[1]/q[0]
rho_u = np.vstack((rho,u))
return rho_u
# Specify density of trajectories to left and right:
num_left = 10
num_right = 10
rho_left = q_l[0] / 10.
rho_right = q_r[0] / 10.
x_traj, t_traj, xmax = riemann_tools.compute_riemann_trajectories(states, speeds, reval_rho_u, wave_types,
i_vel=1, xmax=1, rho_left=rho_left, rho_right=rho_right)
fig = plt.figure(figsize=(18,6))
names = ['Density','Velocity','Pressure']
axes = [0]*3
for i in range(3):
axes[i] = fig.add_subplot(1,3,i+1)
q = primitive[i]
plt.plot(x,q,linewidth=3)
plt.title(names[i])
qmax = max(q)
qmin = min(q)
qdiff = qmax - qmin
axes[i].set_ylim((qmin-0.1*qdiff,qmax+0.1*qdiff))
axes[i].set_xlim(-xmax,xmax)
if i==0:
# plot stripes only on density plot
n = find(t > t_traj)
if len(n)==0:
n = 0
else:
n = min(n.max(), len(t_traj)-1)
for i in range(1, x_traj.shape[1]-1):
j1 = find(x_traj[n,i] > x)
if len(j1)==0:
j1 = 0
else:
j1 = min(j1.max(), len(x)-1)
j2 = find(x_traj[n,i+1] > x)
if len(j2)==0:
j2 = 0
else:
j2 = min(j2.max(), len(x)-1)
# set advected color for density plot:
if x_traj[0,i]<0:
# shades of red for fluid starting from x<0
if np.mod(i,2)==0:
c = [1,0,0]
else:
c = [1,0.8,0.8]
else:
# shades of blue for fluid starting from x>0
if np.mod(i,2)==0:
c = [0,0,1]
else:
c = [0.8,0.8,1]
plt.fill_between(x[j1:j2],q[j1:j2],0,color=c)
plt.show()
# Make a plot with only a time slider to illustrate this viewpoint with the Sod shock tube data:
# +
def plot_exact_riemann_solution_stripes_t_slider(t):
plot_exact_riemann_solution_stripes(rho_l=3.,u_l=0.,p_l=3.,rho_r=1.,u_r=0.,p_r=1.,t=t)
interact(plot_exact_riemann_solution_stripes_t_slider,
t=widgets.FloatSlider(min=0.1,max=1.,step=0.1,value=0.5));
# -
# Note the following in the figure above:
#
# - The edges of each stripe are being advected with the fluid velocity, so you can visualize how the fluid is moving.
# - The width of each stripe initially is inversely proportional to the density of the fluid, so that the total mass of gas within each stripe is the same.
# - The total mass within each stripe remains constant as the flow evolves, and the width of each stripe remains inversely proportional to the local density.
# - The interface between the red and blue gas moves with the contact discontinuity. The velocity and pressure are constant but the density can vary across this wave.
# ## Interactive Riemann solver
#
# Here you can set up your own Riemann problem and immediately see the solution. If you don't want to download and run the notebook, an online interactive version is [here](http://sagecell.sagemath.org/?z=eJytWNtu47YWfQ-Qf2BnHiLJsmzFGeAgqIsCnfaxOCgGpw-DwJAtOiaOLgwviTJf30VSlKjYSgdog8FEIdfeXPvKLbGat0KRRtf8lRSSNPz6irm1ulC8alXF9hl_NU9mn1fq-uoo2prIA-OvWcsVq9k3Snqho2yrZ3p9dX1V0iOhXXFQO8FoXTTNDltasbaJnnZV-rQTabf9vW1oqtyvx6Kui22e3cX311cEPx8-fPiDKi0aok7U6SJeB1GtXf3D6SZctPuK1uSFqRNhDVOsqIhUhaKSmOPwn8icWvx8geSgiUlyaGuuFS1JoQjsoQS2NiXhLWuUJB2JXk5UUDzUxSvZU1KQ_LP3mRDFazyqnpzBGqglz_SgWiEBpZbzoW0kFc847kkXjQJXkNwDaDZbrSDjwF_FqU11yh-ywSHuAeu7imyNZV_XD25NDyv5wyrc-HXYuO1XRiXC7ohAiV-xSkSgxG8YJW7tI_nF-Y1U9Og8JtjjSTm_IyRUSo3_HJpbGpGN8jLP4iQyxJZknX1KrD0JLEiS29jDxRlcBHABuHDw72IjW40VySkte0IHS6jhmXwSyh2UgOTKkok9RlzACIsRcejNouKnYsJ4Fd1mif2rB-6pGhELixjAgRUnevg_ObaCHIpnBu7IUbfHjjbISxulBblNIpiwAMVQEfmRrO-HJEQQkMLk5pdBFSmpQj7S8gcEtUPuNY_ZzYgXruBMQYbWfSSf6ZE1Ln-hkT4K1NdBI4uldfVJP7YNQ4-o2gPr6fawnYXtcixtSVXU-7IgnNxbW4wVMGIVxjnPlhFfIRJxkkQX3RnHF0_YnJ9gMsacIC6eIP72BG_WDmZpCRtmLfA5cus0JKPamCQkivKls2kA5guTD4lbvXzcZt6c7ztOXDxOxPE0ti7l0OMQXWFDnC9fimdqWmNB5KnFLvJRoCcd0YSHhDQtnp_Yrop4fO8zaMwl5Cv_aQvz7n1avXUn5EY4rSQdkG-TZ0D-W5yF4XzOVcxx3Xw31ymy72YnNo2ld9vSczlvYzUry4r6ZpoSTfavBEVYomRd6dmThaSBeTxlzbFNCaMiJbV8xKnuSo5wUEoiRGNh8-I2S4-6qnbuvtl-EZqmnWorXMB0md_Fns1vcKJUosWZgS9lioZaU3NXyv4A0pjWSk604kO7AosftnnYjv4JPXN2K7brLA-YrgNff8Sli_Aj-LoqSYNudCo4p800zj2pIFGDRvlnIRr4994bVTKnCHf2MxWPNOyVoxgs8fHTMMkH168NvkTwpmEtaSPt_R_c6zts2btvaIJ5tnJtCaVtIcEFPkGLS2jxNhk_-nnHlo0tmvBefJHuyvtGRSujT_Eg9cXNLsrOYUbg3gtgKICMvmCtK0R7GY_ZMznOVp5tEiNPaFwbjdEwGKDxjc5JNHq1m4HC5XiiIDcKrCK3bEt2RBxGR7-53ldzGi0lx-XgYxAept2GlfSh760Tb6y7G6yzc0xvhphaN12ectn01t3NWSfes25Wo154yXO6buowE9EQ5POm1bfE56LSJqnHbt220szNdrreFxLDb9tczj-4rDNd3Mwg93i0JlSskbw40AhXXGr-rde-8oFXI14Bv87-k9RFF0Gu2Muoi-NV8OeLnN5_nbSuTBR4_mnIoOHbUUmeF47L6eHNAcVb0klmHwsv1pmO361U3xbM6BDMGomfHcwMh4eOxX6KW9hx0AptzoTcBBAd7MNFIcQ3H0vHjwc6XwKNeXk1Trpx3y2C2WFQsRnz06vo2FJvpirErApuOOCMxCjJ-5m6V9UjNhYhLGLTT9QTxNjlcAsYPt2P286UYexeGOC2qPupX-k3c7eZD3v5uHfrBV1peMTtiNj0CDFFbEbEnUNshr07L-JfoAxXMmWr57nqWaZ6jqCe46WnrLTnxC9x4vOc-CwnPseJz3HiU07cc-r7RB_c1PktdVSB8Athp7WqxTuyyRLAUjMJdX5yOzIzQfBKZXjCq2aEX5J9o9soX6d3Qy4KVuOSRWVvx1dqt9UUZoDB8s1nexO_3qQ3_6PmHcY-_rd_hb3p4UXn0OuHZNMzwHjE0AjQBJpHGm3CmdKgvzLTOkErK8pyJ_XeWBPl6SZlizxowE_GDs8TQsHIBOusUJc-pWiD9IWV6rTdxFMIXFTRyNoD8VAzWh-Umwb4NFkGaSyzZrpcsuPRvOkbqaVFnRmUSap2rxWro8jsLzGQJVYuNVKL4U_vf_u5CAMyxqKD9B-KcGHUv9mgOdD4twni0KBtOwCfPLszt8n11c925kW3HZNjN_OZydb6Vlbo0QIu_5SayRFCha4U_JfqcXf56c32Ok75u8K25uf2c6NcvK_8PWFAxs9g7vsNn7wlo6jefjcJv9PwyQtvAPZfTcIG-2T1f7W6UjKoTM3BDx4iPEQ4iFVkIGLyTenvb-uhjIF85_OgORKFnhI0BfdhMPySEtQE9A2LQ59Agadd_BeW1ABm&lang=python).
interact(plot_exact_riemann_solution_stripes,
rho_l=widgets.FloatSlider(min=1.,max=10.,step=0.1,value=3.,description=r'$\rho_l$'),
u_l=widgets.FloatSlider(min=-10.,max=10.,step=0.1,value=0.,description=r'$u_l$'),
p_l=widgets.FloatSlider(min=1.,max=10.,step=0.1,value=3.,description=r'$p_l$'),
rho_r=widgets.FloatSlider(min=1.,max=10.,step=0.1,value=1.,description=r'$\rho_r$'),
u_r=widgets.FloatSlider(min=-10.,max=10.,step=0.1,value=0.,description=r'$u_r$'),
p_r=widgets.FloatSlider(min=1.,max=10.,step=0.1,value=1.,description=r'$p_r$'),
t=widgets.FloatSlider(min=0.1,max=1.,step=0.1,value=0.5));
# ## Riemann problems with vacuum
# A vacuum state (with zero pressure and density) can arise in the solution of the Riemann problem in two ways:
#
# 1. An initial left or right vacuum state: in this case the Riemann solution consists of a single rarefaction, connecting the non-vacuum state to vacuum.
# 2. A problem where the left and right states are not vacuum but middle states are vacuum. Since this means the middle pressure is smaller than that to the left or right, this can occur only if the 1- and 3-waves are both rarefactions. These rarefactions are precisely those required to connect the left and right states to the middle vacuum state.
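# Whether case 2 occurs can be checked in closed form: a middle vacuum appears exactly when the two rarefactions cannot meet at positive pressure, i.e. when $u_r - u_l \ge \frac{2(c_l + c_r)}{\gamma - 1}$. This is the standard pressure-positivity condition for a $\gamma$-law gas; the quick check below is separate from the notebook's solver.

```python
import numpy as np

gamma = 1.4

def middle_vacuum_forms(rho_l, u_l, p_l, rho_r, u_r, p_r):
    """Pressure-positivity condition: True if the Riemann solution
    contains a middle vacuum state (two rarefactions expanding into vacuum)."""
    c_l = np.sqrt(gamma * p_l / rho_l)
    c_r = np.sqrt(gamma * p_r / rho_r)
    return u_r - u_l >= 2.0*(c_l + c_r)/(gamma - 1.0)
```

# The symmetric expansion of Problem 2 ($u = \mp 3$) stays below the threshold, while the $u = \mp 10$ example below crosses it and produces a middle vacuum.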
# ### Initial vacuum state
# The velocity plot looks a bit strange, but note that the velocity is undefined in vacuum.
# +
left_state = Primitive_State(Density =0.,
Velocity = 0.,
Pressure = 0.)
right_state = Primitive_State(Density = 1.,
Velocity = -3.,
Pressure = 1.)
riemann_solution(left_state,right_state)
# -
Euler.phase_plane_plot(left_state, right_state)
# ### Middle vacuum state
# +
left_state = Primitive_State(Density =1.,
Velocity = -10.,
Pressure = 1.)
right_state = Primitive_State(Density = 1.,
Velocity = 10.,
Pressure = 1.)
riemann_solution(left_state,right_state)
# -
Euler.phase_plane_plot(left_state, right_state)
| Euler_equations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="oBd6qFcZAAtX"
# # Introduction to Setting Up the Environment and Importing Data
# This notebook is part of the [SachsLab Workshop for Intracranial Neurophysiology and Deep Learning](https://github.com/SachsLab/IntracranialNeurophysDL).
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/SachsLab/IntracranialNeurophysDL/blob/master/notebooks/01_02_data_import.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/SachsLab/IntracranialNeurophysDL/blob/master/notebooks/01_02_data_import.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="GdoBkDt4AAtd"
#
# ### Normalizing Environments
# Let's try to unify the Google Colab environment and the local environment.
#
# If you are running on Google Colab, then in the next cell you'll be presented
# with a widget. Use it to browse to the kaggle.json file.
# (Note: You might have to show hidden files/folders in the file browser. `CMD + SHIFT + .` on Mac)
#
# It will also clone the workshop repository and change into its directory.
#
# If you are running locally then this will simply change to the workshop root directory.
# + colab_type="code" id="RhQUlo-aAAte" pycharm={"name": "#%%\n"} colab={}
from pathlib import Path
import os
# Standard block to equalize local and Colab.
try:
# See if we are running on google.colab
import google.colab
from google.colab import files
if not (Path.home() / '.kaggle').is_dir():
# Configure kaggle
files.upload() # Find the kaggle.json file in your ~/.kaggle directory.
# !pip install -q kaggle
# !mkdir -p ~/.kaggle
# !mv kaggle.json ~/.kaggle/
# !chmod 600 ~/.kaggle/kaggle.json
if Path.cwd().stem != 'IntracranialNeurophysDL':
if not (Path.cwd() / 'IntracranialNeurophysDL').is_dir():
# Download the workshop repo and change to its directory
# !git clone --recursive https://github.com/SachsLab/IntracranialNeurophysDL.git
os.chdir('IntracranialNeurophysDL')
IN_COLAB = True
except ModuleNotFoundError:
IN_COLAB = False
import sys
if Path.cwd().stem == 'notebooks':
os.chdir(Path.cwd().parent)
# Make sure the kaggle executable is on the PATH
os.environ['PATH'] = os.environ['PATH'] + os.pathsep + str(Path(sys.executable).parent / 'Scripts')
# %load_ext autoreload
# %autoreload 2
# + [markdown] colab_type="text" id="aTJelN8LAAtg"
# ## Get Data
#
# ### Download
# Now that your system is configured to use kaggle, whether local or running on colab,
# we check for the existence of the datadir, and if it's not there we download the data
# (2.2 GB) and unzip it into the correct folder.
#
# While running on Google Colab, this takes about a minute to download and another minute to unzip.
#
# While running locally, this can take a long time depending on your internet connection. The
# PyCharm notebook interface doesn't give feedback about download or unzip status, so you may
# want to download and extract from an Anaconda prompt, in the repository parent directory:
# ```
# kaggle datasets download --unzip --path data/kjm_ecog/converted/faces_basic cboulay/kjm-ecog-faces-basic
# ```
#
# If you are getting errors related to access restriction then you may need to download a new Kaggle API token and reset your Colab instance to upload the new .json file.
# + colab_type="code" id="xxObV_u5AAtg" pycharm={"name": "#%%\n"} colab={}
datadir = Path.cwd() / 'data' / 'kjm_ecog'
if not (datadir / 'converted').is_dir():
# !kaggle datasets download --unzip --path {str(datadir / 'converted' / 'faces_basic')} cboulay/kjm-ecog-faces-basic
print("Finished downloading and extracting data.")
else:
print("Data directory already exists. Skipping download.")
# + [markdown] colab_type="text" id="qwEDrZs9AAti"
# ### Import File
# We can now start working with the data.
# We will import a single file that contains processed band-power data.
#
# If you are curious about how the data were processed, the script can be found [here](https://github.com/SachsLab/IntracranialNeurophysDL/blob/master/data/kjm_ecog/03_convert.py).
# + colab_type="code" id="Me2kKnr5AAtj" pycharm={"name": "#%%\n"} colab={}
from data.utils.fileio import from_neuropype_h5
SUB_ID = 'de'
test_file = datadir / 'converted' / 'faces_basic' / (SUB_ID + '_bp.h5')
chunks = from_neuropype_h5(test_file)
print("Chunks found: {}".format([_[0] for _ in chunks]))
# + [markdown] colab_type="text" id="fkFrKqhGAAtl"
# ## Data Exploration
# ### Print Contents
# Let's quickly inspect the data to see what we have.
# + colab_type="code" id="e9Ks64GeAAtm" pycharm={"name": "#%%\n"} colab={}
import pandas as pd
# Get the 'signals' chunk
chunk_names = [_[0] for _ in chunks]
chunk = chunks[chunk_names.index('signals')][1]
ax_types = [_['type'] for _ in chunk['axes']]
print("The 'signals' chunk has data with shape {}.".format(chunk['data'].shape))
print("The axes types are {}".format(ax_types))
time_axis = chunk['axes'][ax_types.index('time')]
t_vec = time_axis['times']
print("Each trial has {} samples, ranging from {} to {} s.".format(len(t_vec), min(t_vec), max(t_vec)))
instance_axis = chunk['axes'][ax_types.index('instance')]
print("The trial label frequencies are \n{}".format(pd.value_counts(instance_axis['data']['Marker'])))
# + [markdown] colab_type="text" id="BhNLJL6sAAto"
# ### Simple Plotting
# The data contain 603 trials, with each trial having
# 17 samples in time, and 31 channels.
# Of the 603 trials, 303 are inter-stimulus intervals, and the remaining
# 300 are split between `face` and `house`.
#
# An N-dimensional data container is also called a "tensor".
# The most common forms are 1-D tensors, also known as "vectors",
# and 2-D tensors, also known as "matrices".
#
# This is a 3-D tensor.
#
# We will plot a few different slices of the tensor.
# + colab_type="code" id="PPRPkZUFAAto" pycharm={"name": "#%%\n"} colab={}
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams.update({'font.size': 22})
fig = plt.figure(figsize=(12, 6), facecolor='white')
# Plot a single trial
tr_idx = 20 # trial index
dat = np.copy(chunk['data'][tr_idx, :, :]) # Slice a single-trial
dat -= np.arange(dat.shape[-1])[None, :] # Separate channels for visibility.
plt.subplot(1, 2, 1)
plt.plot(t_vec, dat)
plt.title('Single Trial')
plt.ylabel('Channel')
plt.xlabel('Time (s)')
# Plot a few trials from a single channel.
ch_idx = 11 # Also try 22
dat = np.copy(chunk['data'][:, :, ch_idx])
tr_idx_isi = np.where(instance_axis['data']['Marker'] == 'ISI')[0][:5]
plt.subplot(1, 2, 2)
plt.plot(t_vec, dat[tr_idx_isi].T, 'k')
tr_idx_houses = np.where(instance_axis['data']['Marker'] == 'house')[0][:5]
plt.plot(t_vec, dat[tr_idx_houses].T, 'b')
tr_idx_faces = np.where(instance_axis['data']['Marker'] == 'face')[0][:5]
plt.plot(t_vec, dat[tr_idx_faces].T, 'r')
plt.title('Trials for Chan {}'.format(ch_idx))
plt.ylabel('Broadband Power (z)')
plt.xlabel('Time After Stim (s)')
plt.show()
# + [markdown] colab_type="text" id="M9eUa0gxAAtq"
# ### Tensor Decomposition
# Let's use a tensor-decomposition tool to get a simpler view of the tensor contents.
# [Source](https://pyramidal.stanford.edu/publications/Williams2018_Neuron.pdf)
# 
#
# In the below plot, each row is a different tensor component and each column is a different axis/dimension.
# The output changes each time it is run.
# Look for a component where the 1st column clearly shows two different groups of trials.
# Then in the 2nd column we can see the time-course of that component, and in the 3rd column we see the channels that contributed to that component.
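# The factorization behind this plot can be written in a few lines of NumPy. A rank-$R$ CP model approximates the 3-D tensor as a sum of $R$ outer products of the per-axis factors, which is what each row of the plot visualizes. This is a toy illustration, independent of `tensortools`:

```python
import numpy as np

rng = np.random.default_rng(0)
# factors for a rank-2 model of a small (trials x time x channels) toy tensor
A = rng.normal(size=(6, 2))   # trial factors
B = rng.normal(size=(5, 2))   # time factors
C = rng.normal(size=(4, 2))   # channel factors

# X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
X = np.einsum('ir,jr,kr->ijk', A, B, C)
print(X.shape)  # (6, 5, 4)
```

# Fitting goes the other way: given a data tensor, `tt.cp_als` finds factors like `A`, `B`, `C` whose reconstruction best matches the data.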
# + colab_type="code" id="3g_gmEhSAAtr" pycharm={"name": "#%%\n"} colab={}
if IN_COLAB:
# !pip install git+https://github.com/ahwillia/tensortools
import tensortools as tt
U = tt.cp_als(chunk['data'], rank=3, verbose=True)
# + id="tbIWFet4aIku" colab_type="code" colab={}
fig, ax, po = tt.plot_factors(U.factors, plots=['scatter', 'line', 'bar'], figsize=(12, 12))
# + [markdown] colab_type="text" id="mynXSjtyAAtt"
# ### Tensor Decomposition in Neuropype
#
# The next cell does the same thing with a slightly better plot, but is for Neuropype users only, and only those who have the latest version. I don't yet have a workflow for getting Neuropype on the cloud so this one part is for local use only.
#
# As it uses plotly, it won't show up in PyCharm (should be fixed in v2019.2) but it will in a browser.
# + colab_type="code" id="5HOXLOCOAAtu" pycharm={"name": "#%%\n"} colab={}
# Add my local copy of Neuropype to the PATH.
import sys
sys.path.append(str(Path.cwd().parent / 'Intheon' / 'cpe'))
# + colab_type="code" id="VGCqhz8jAAtw" pycharm={"name": "#%%\n"} colab={}
import neuropype.nodes as nn
pkt = nn.ImportH5(filename=str(test_file))()
tt_res = nn.TensorDecomposition(num_components=3, aggregate_axes=['instance'])(
data=pkt, return_outputs='all')
type_map = {'time': 'lines', 'space': 'bars', 'instance': 'markers'}
plt = nn.TensorDecompositionPlot(output_mode='notebook', iv_field='Marker',
type_map=type_map)(data=tt_res['model'])
| notebooks/01_02_data_import.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import seaborn as sn
import FeatureExtractor as fe
# -
# ## Colour Features
#
# ### RGB, HSV, Chrominance, Ohta, Hue histogram
img = mpimg.imread('betws_y_coed_small.jpg')
plt.imshow(img)
plt.axis('Equal')
plt.axis('Off')
plt.show()
#cv2.imshow('image',img)
#cv2.waitKey(0)
#cv2.destroyAllWindows()
# +
# Check RGB features
# Split the image into 3 rows and 3 columns
rgb = fe.fox_get_colour_features(img, blocks_r = 3, blocks_c = 3)
print('Number of features = ', len(rgb))
# Reshape as an image
block_img = np.reshape(rgb,(-1,3))
block_img = block_img[0:-1:2,:] # array with mean RGB for the blocks
block_img = np.reshape(np.reshape(block_img,(-1,1)),(3,3,3),order = 'C').astype(int)
plt.figure()
plt.imshow(block_img)
plt.axis('Equal')
plt.axis('Off')
plt.show()
# +
# Check HSV features
# Split the image into 3 rows and 3 columns
hsv = fe.fox_get_colour_features(img, fstr = 'HSV', blocks_r = 3, blocks_c = 3)
print('Number of features = ', len(hsv))
# Reshape as an image
block_img = np.reshape(hsv,(-1,3))
block_img = block_img[0:-1:2,:] # array with mean HSV for the blocks
block_img = np.reshape(np.reshape(block_img,(-1,1)),(3,3,3),order = 'C')
from skimage.color import hsv2rgb
block_img = hsv2rgb(block_img) # convert back to RGB to view
plt.figure()
plt.imshow(block_img)
plt.axis('Equal')
plt.axis('Off')
plt.show()
# +
# Check Chrominance features
chro = fe.fox_get_colour_features(img, fstr = 'CHR', blocks_r = 3, blocks_c = 3)
print('Number of features = ', len(chro))
# Visualise C1 by colour and C2 by number
# Reshape
block_img = np.reshape(chro,(-1,2))
block_img = block_img[0:-1:2,:] # array with mean Chrominance for the blocks
block_img = np.reshape(np.reshape(block_img,(-1,1)),(2,3,3),order = 'F')
c1 = np.transpose(block_img[0])
c2 = np.transpose(block_img[1])
plt.figure()
sn.heatmap(c1,annot = c2)
plt.show()
# +
# Check Ohta features
oht = fe.fox_get_colour_features(img, fstr = 'OHT', blocks_r = 3, blocks_c = 3)
print('Number of features = ', len(oht))
# Reshape as an image
block_img = np.reshape(oht,(-1,3))
block_img = block_img[0:-1:2,:] # array with mean Ohta for the blocks
block_img = np.reshape(np.reshape(block_img,(-1,1)),(3,3,3),order = 'C')
# Visualise as 3 matrices
v1 = np.min(block_img); v2 = np.max(block_img)
plt.figure()
plt.subplot(131)
sn.heatmap(block_img[0], vmin = v1, vmax = v2, cbar = False)
plt.axis('Equal')
plt.axis('Off')
plt.subplot(132)
sn.heatmap(block_img[1], vmin = v1, vmax = v2, cbar = False)
plt.axis('Equal')
plt.axis('Off')
plt.subplot(133)
sn.heatmap(block_img[2], vmin = v1, vmax = v2, cbar = False)
plt.axis('Equal')
plt.axis('Off')
plt.show()
# -
# Check H histogram
h = fe.fox_get_colour_features(img, fstr = 'H', blocks_r = 1, blocks_c = 1, bins = 16)
print('Number of features = ', len(h))
xx = np.linspace(0,255,18)
plt.figure()
plt.bar(xx[1:-1],h,width=15)
plt.show()
# ## Shape Features
#
# ### Histogram of Oriented Gradients (HOG)
# +
# Check HOG features
from skimage.transform import resize
from skimage.feature import hog
resized_img = resize(img, (128, 64))
#creating HOG features ----------------------------------------------------
# To get the HOG image as well, use visualize=True; otherwise only the
# feature vector is returned.
fd, hog_image = hog(resized_img, orientations=9, pixels_per_cell=(8, 8), \
cells_per_block=(2, 2), visualize=True, multichannel=True)
print('Number of HOG features = ',len(fd))
plt.figure()
plt.subplot(121)
plt.imshow(resized_img)
plt.axis('Equal')
plt.axis('Off')
plt.subplot(122)
plt.imshow(hog_image)
plt.axis('Equal')
plt.axis('Off')
plt.show()
# -
# ## Texture Features
#
# ### Local Binary Patterns (LBP)
# +
from skimage.feature import local_binary_pattern
from skimage.color import rgb2gray
#creating LBP features ----------------------------------------------------
# https://scikit-image.org/docs/dev/auto_examples/features_detection/plot_local_binary_pattern.html
#
# See
# https://scikit-image.org/docs/dev/api/skimage.feature.html#skimage.feature.local_binary_pattern
#
# method{‘default’, ‘ror’, ‘uniform’, ‘var’}
# Method to determine the pattern.
#
# ‘default’: original local binary pattern which is gray scale but not
# rotation invariant.
#
# ‘ror’: extension of default implementation which is gray scale and
# rotation invariant.
#
# ‘uniform’: improved rotation invariance with uniform patterns and
# finer quantization of the angular space which is gray scale and rotation invariant.
#
# ‘nri_uniform’: non rotation-invariant uniform patterns variant
# which is only gray scale invariant [2], [3].
#
# ‘var’: rotation invariant variance measures of the contrast of local
# image texture which is rotation but not gray scale invariant.
radius = 3
n_points = 8 * radius
method = 'default'
nbins = 50 # this will be the number of features
lbp_raw = local_binary_pattern(rgb2gray(img), n_points, radius, method)
lbp,_ = np.histogram(lbp_raw,bins = nbins)
lbp = lbp/np.sum(lbp)
lbp = np.reshape(lbp,(1,-1))
lbp = lbp[0]
plt.figure()
plt.bar(np.arange(len(lbp)),lbp,width=0.8)
plt.show()
# -
| code_Python/CheckPythonFeatureExtraction.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 0. Setup and Install AdaptNLP
# !pip install adaptnlp
# # 1. Sequence classification fine-tuning: "bert-base-cased"
#
# We will be loading in a pre-trained language model called "bert-base-cased" and fine-tuning it on the TREC question classification dataset. This language model is a good base model that's been trained on a large and general set of data.
#
# ### 1.1 Model loading and tagging
# +
from adaptnlp import EasySequenceClassifier
from pprint import pprint
classifier = EasySequenceClassifier()
# +
# Inference
example_text = "This didn't work at all"
sentences = classifier.tag_text(
text=example_text,
model_name_or_path="bert-base-cased",
mini_batch_size=1,
)
print("Tag Score Outputs:\n")
for sentence in sentences:
pprint({sentence.to_original_text(): sentence.labels})
# -
# ### 1.2. Data loading and processing with [datasets](https://github.com/huggingface/datasets)
# +
from datasets import load_dataset
train_dataset, eval_dataset = load_dataset('trec', split=['train', 'test'])
pprint(vars(train_dataset.info))
# -
train_dataset.set_format(type="pandas", columns=["text", "label-coarse"])
train_dataset[:]
# We just run this to reformat back to a 'python' dataset
train_dataset.set_format(columns=["text", "label-coarse"])
# ### 1.3. Training
# +
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./trec-models',
num_train_epochs=1,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
warmup_steps=500,
weight_decay=0.01,
evaluate_during_training=True,
logging_dir='./logs',
save_steps=100
)
# -
classifier.train(training_args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
model_name_or_path="bert-base-cased",
text_col_nm="text",
label_col_nm="label-coarse"
)
# ### 1.4. Evaluation
classifier.evaluate(model_name_or_path="bert-base-cased")
# +
multiple_text = ["The countries in the northern hemisphere talked to the countries",
"The basketball player made a touchdown in the field goal",
"The market was down 40% and economists were puzzled",
"The engineer and the scientist made it to pluto in their rocket"]
sentences = classifier.tag_text(
multiple_text,
model_name_or_path="./trec-models",
mini_batch_size=1
)
print("Tag Score Outputs:\n")
for sentence in sentences:
pprint({sentence.to_original_text(): sentence.labels})
# -
# # 2. Sequence classification fine-tuning: "distilbert-base-uncased-finetuned-sst-2-english"
#
# ### 2.1. Model release
classifier.release_model(model_name_or_path="bert-base-cased")
# ### 2.2. Training
# +
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./trec-from-sst-models',
num_train_epochs=1,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
warmup_steps=500,
weight_decay=0.01,
evaluate_during_training=True,
logging_dir='./logs',
save_steps=100
)
# -
classifier.train(training_args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
model_name_or_path="distilbert-base-uncased-finetuned-sst-2-english",
text_col_nm="text",
label_col_nm="label-coarse"
)
# ### 2.3. Evaluation
classifier.evaluate(model_name_or_path="distilbert-base-uncased-finetuned-sst-2-english")
# # 3. Sequence Classification on custom language model
# ### 3.1. Language model fine-tuning
# +
# !wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip
# !unzip -o wikitext-2-raw-v1.zip
train_file = "./wikitext-2-raw/wiki.train.raw"
eval_file = "./wikitext-2-raw/wiki.test.raw"
# +
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./language-models',
num_train_epochs=1,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
warmup_steps=500,
weight_decay=0.01,
evaluate_during_training=False,
logging_dir='./logs',
save_steps=2500,
eval_steps=100
)
# +
from adaptnlp import LMFineTuner
finetuner = LMFineTuner(model_name_or_path="bert-base-cased")
# -
finetuner.train(
training_args=training_args,
train_file=eval_file,
eval_file=eval_file,
mlm=True,
overwrite_cache=False
)
# ### 3.2. Sequence Classification task training and evaluation
# +
from transformers import TrainingArguments
training_args = TrainingArguments(
output_dir='./trec-from-custom-LM-models',
num_train_epochs=1,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
warmup_steps=500,
weight_decay=0.01,
evaluate_during_training=True,
logging_dir='./logs',
save_steps=100
)
# -
classifier.train(training_args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
model_name_or_path="./language-models",
text_col_nm="text",
label_col_nm="label-coarse"
)
classifier.evaluate(model_name_or_path="./language-models")
# # *Tutorials for NLP Tasks with AdaptNLP*
#
# 1. Token Classification: NER, POS, Chunk, and Frame Tagging
# - [Open In Colab](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/1.%20Token%20Classification/token_tagging.ipynb)
# 2. Sequence Classification: Sentiment
# - [Open In Colab](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/2.%20Sequence%20Classification/Easy%20Sequence%20Classifier.ipynb)
# 3. Embeddings: Transformer Embeddings e.g. BERT, XLM, GPT2, XLNet, roBERTa, ALBERT
# - [Open In Colab](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/3.%20Embeddings/embeddings.ipynb)
# 4. Question Answering: Span-based Question Answering Model
# - [Open In Colab](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/4.%20Question%20Answering/question_answering.ipynb)
# 5. Summarization: Abstractive and Extractive
# - [Open In Colab](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/5.%20Summarization/summarization.ipynb)
# 6. Translation: Seq2Seq
# - [Open In Colab](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/6.%20Translation/translation.ipynb)
# ### *Tutorial for Fine-tuning and Training Custom Models with AdaptNLP*
#
# 1. Training a Sequence Classifier
# - [Open In Colab](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/2.%20Sequence%20Classification/Easy%20Sequence%20Classifier.ipynb)
# 2. Fine-tuning a Transformers Language Model
# - [Open In Colab](https://colab.research.google.com/github/Novetta/adaptnlp/blob/master/tutorials/Finetuning%20and%20Training%20(Advanced)/Fine-tuning%20Language%20Model.ipynb)
#
# Checkout the [documentation](https://novetta.github.io/adaptnlp) for more information.
#
# ## *NVIDIA Docker and Configurable AdaptNLP REST Microservices*
#
# 1. AdaptNLP official docker images are up on [Docker Hub](https://hub.docker.com/r/achangnovetta/adaptnlp).
# 2. REST Microservices with AdaptNLP and FastAPI are also up on [Docker Hub](https://hub.docker.com/r/achangnovetta)
#
# All images can be built with GPU support if NVIDIA Docker is correctly installed.
| workshops/ODSC_Europe_2020_Workshop.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/rushirajsherlocked/DeepLizard-PyTorch/blob/master/VGG_Network.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="yEYt-RTLwWhL"
#
# [VGG Architecture Research Paper](https://arxiv.org/pdf/1409.1556.pdf)
#
# Read the section 2.1 - Architecture to understand the architectural details
# + id="xb0yjgNohUhj"
import torch
import torch.nn as nn
import torch.nn.functional as F # All functions that don't have any learnable parameters
import torch.optim as optim
import torchvision.datasets as datasets
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
# + id="84qLE1lPiCHx"
VGG_16 = [64,64, 'M', 128, 128, 'M', 256,256,256, 'M', 512,512,512, 'M', 512,512,512, 'M']
# Then flatten, followed by three fully connected layers with 4096, 4096 and 1000 output features respectively
VGG_types = {
"VGG11": [64, "M", 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"],
"VGG13": [64, 64, "M", 128, 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"],
"VGG16": [64, 64, "M", 128, 128, "M", 256, 256, 256, "M", 512, 512, 512, "M", 512, 512, 512, "M"],
"VGG19": [64, 64, "M", 128, 128, "M", 256, 256, 256, 256, "M", 512, 512, 512, 512, "M", 512, 512, 512, 512, "M"]
}
# + id="o3940yTVidzI"
class VGG_Net(nn.Module):
def __init__(self, in_channels = 3, num_classes = 1000):
super(VGG_Net, self).__init__()
self.in_channels = in_channels
self.conv_layers = self.create_conv_layers(VGG_types['VGG16'])
self.classifer = nn.Sequential(
            nn.Linear(in_features = 512*7*7, out_features = 4096), # 512*7*7: the 7x7 comes from 224 / 2**5 (five max-pool layers)
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(4096, 4096),
nn.ReLU(),
nn.Dropout(p=0.5),
nn.Linear(4096,num_classes)
)
def forward(self,x):
x = self.conv_layers(x)
x = x.reshape(x.shape[0], -1)
x = self.classifer(x)
return x
def create_conv_layers(self, architecture):
layers = []
in_channels = self.in_channels
for x in architecture:
if type(x) == int:
out_channels = x
layers += [nn.Conv2d(in_channels = in_channels, out_channels = out_channels,
kernel_size = (3,3), stride = (1,1), padding = (1,1)),
                           nn.BatchNorm2d(x), # NOTE: BatchNorm wasn't part of the original VGG implementation
nn.ReLU()]
in_channels = x
elif x == 'M':
layers += [nn.MaxPool2d(kernel_size = (2,2), stride = (2,2))]
return nn.Sequential(*layers)
# NOTE: layers += [...] is equivalent to layers.extend([...]); you could also append the layers one at a time.
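The 224 → 7 arithmetic in the classifier comment can be verified directly from the config list: the 3x3 convolutions use padding 1, so they preserve resolution, and each `'M'` entry halves it. A standalone sketch in plain Python, independent of the class above:

```python
# Walk the VGG16 config and track the spatial size of a 224x224 input.
# Conv layers (int entries) keep the resolution; 'M' (max pool) halves it.
cfg = [64, 64, "M", 128, 128, "M", 256, 256, 256, "M",
       512, 512, 512, "M", 512, 512, 512, "M"]
size = 224
for x in cfg:
    if x == "M":
        size //= 2
print(size)  # -> 7, hence the 512*7*7 input to the first Linear layer
```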
# + id="pof0MQixkYQz"
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# + id="q5XVZjyAryX-" outputId="d4888659-c169-4492-e63f-3325d0c0cb25" colab={"base_uri": "https://localhost:8080/", "height": 34}
model = VGG_Net(3,1000).to(device)
x = torch.randn(1,3,224,224).to(device)
print(model(x).shape)
# + id="-9UADPdb4AA1" outputId="e532341b-3125-492b-f1d5-fe8a36759828" colab={"base_uri": "https://localhost:8080/", "height": 1000}
from torchsummary import summary
summary(model, (3, 224, 224)) # Channels = 3
# IMP to check and understand how tensor shape changes after each layer and the no. of parameters as well
# + id="nBf_oRrD29zn" outputId="89be026d-7468-42ac-d83d-9ff626731519" colab={"base_uri": "https://localhost:8080/", "height": 986}
model.parameters
# + id="B-Sg9wsJ8Sv9"
# + [markdown] id="HqM5K3Y68TF4"
# # Visualize the Model Architecture
# + id="mg27Yqn95oDp" outputId="8b958783-9fbc-422c-83c7-2ca896e25d26" colab={"base_uri": "https://localhost:8080/", "height": 292}
# !pip install graphviz
# !pip install torchviz
# + id="ra4IVoMD4py_"
from graphviz import Digraph
import torch
from torch.autograd import Variable
# make_dot was moved to https://github.com/szagoruyko/pytorchviz
from torchviz import make_dot
# + id="ZfQ9QOON6vRD" outputId="628b7c05-ba5f-4be3-a391-4f06be58ff80" colab={"base_uri": "https://localhost:8080/", "height": 1000}
y = model(x)
make_dot(y.mean(), params = dict(model.named_parameters()))
# + id="giyejn9k74nN" outputId="46cca7a6-cac7-4aef-8e32-f950adab09b7" colab={"base_uri": "https://localhost:8080/", "height": 207}
# !pip install git+https://github.com/waleedka/hiddenlayer.git
# + id="wUPPPX9P9nBz"
| VGG_Network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# default_exp data.validation
# -
# # Splitting data
#
# > Functions required to perform cross-validation and transform a single time series sequence into multiple samples ready to be used by a time series model.
#export
from imblearn.over_sampling import RandomOverSampler
from matplotlib.patches import Patch
from matplotlib.colors import LinearSegmentedColormap
from sklearn.model_selection import train_test_split, KFold, StratifiedKFold
from tsai.imports import *
from tsai.utils import *
#export
def RandomSplitter(valid_pct=0.2, seed=None):
"Create function that splits `items` between train/val with `valid_pct` randomly."
def _inner(o):
if seed is not None: torch.manual_seed(seed)
rand_idx = L(list(torch.randperm(len(o)).numpy()))
cut = int(valid_pct * len(o))
return rand_idx[cut:],rand_idx[:cut]
return _inner
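A minimal plain-NumPy sketch of what `RandomSplitter` does — `random_splitter` here is a hypothetical stand-in that avoids fastai's `L` and torch's RNG but follows the same permute-then-cut logic:

```python
import numpy as np

def random_splitter(valid_pct=0.2, seed=None):
    # permute all indices, then cut off the first valid_pct as validation
    def _inner(o):
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(o))
        cut = int(valid_pct * len(o))
        return idx[cut:].tolist(), idx[:cut].tolist()
    return _inner

train, valid = random_splitter(0.2, seed=0)(range(10))
print(len(train), len(valid))  # -> 8 2
```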
# +
#export
def check_overlap(a, b, c=None):
a = toarray(a)
b = toarray(b)
overlap_ab = np.isin(a, b)
if c is None:
if isinstance(overlap_ab[0], (list, L, np.ndarray, torch.Tensor)): overlap_ab = overlap_ab[0]
if not any(overlap_ab): return False
else: return a[overlap_ab].tolist()
else:
c = toarray(c)
overlap_ac = np.isin(a, c)
if isinstance(overlap_ac[0], (list, L, np.ndarray, torch.Tensor)): overlap_ac = overlap_ac[0]
overlap_bc = np.isin(b, c)
if isinstance(overlap_bc[0], (list, L, np.ndarray, torch.Tensor)): overlap_bc = overlap_bc[0]
if not any(overlap_ab) and not any(overlap_ac) and not any(overlap_bc): return False
else: return a[overlap_ab].tolist(), a[overlap_ac].tolist(), b[overlap_bc].tolist()
def check_splits_overlap(splits):
return [check_overlap(*_splits) for _splits in splits] if is_listy(splits[0][0]) else check_overlap(*splits)
def leakage_finder(*splits, verbose=True):
'''You can pass splits as a tuple, or train, valid, ...'''
splits = L(*splits)
overlaps = 0
for i in range(len(splits)):
for j in range(i + 1, len(splits)):
overlap = check_overlap(splits[i], splits[j])
if overlap:
pv(f'overlap between splits [{i}, {j}] {overlap}', verbose)
overlaps += 1
assert overlaps == 0, 'Please, review your splits!'
def balance_idx(o, shuffle=False, random_state=None, verbose=False):
if isinstance(o, list): o = L(o)
idx_ = np.arange(len(o)).reshape(-1, 1)
ros = RandomOverSampler(random_state=random_state)
resampled_idxs, _ = ros.fit_resample(idx_, np.asarray(o))
new_idx = L(resampled_idxs.reshape(-1,).tolist())
if shuffle: new_idx = random_shuffle(new_idx)
return new_idx
# -
a = np.arange(10)
b = np.arange(10, 20)
test_eq(check_overlap(a, b), False)
a = np.arange(10)
b = np.arange(9, 20)
test_eq(check_overlap(a, b), [9])
a = np.arange(10)
b = np.arange(10, 20)
c = np.arange(20, 30)
test_eq(check_overlap(a, b, c), False)
a = np.arange(10)
b = np.arange(10, 20)
c = np.arange(10, 30)
test_eq(check_overlap(a, b, c), ([], [], [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]))
o = np.concatenate([np.ones(10), np.ones(20)*2, np.ones(30)*3])
idxs = balance_idx(o)
np.unique(o[idxs], return_counts=True)
# +
l = L(list(concat(np.zeros(5), np.ones(10)).astype(int)))
balanced_idx = balance_idx(l)
test_eq(np.mean(l[balanced_idx]), 0.5)
test_eq(isinstance(balanced_idx, L), True)
l = list(concat(np.zeros(5), np.ones(10)).astype(int))
balanced_idx = balance_idx(l)
test_eq(np.mean(L(l)[balanced_idx]), 0.5)
test_eq(isinstance(balanced_idx, L), True)
a = concat(np.zeros(5), np.ones(10)).astype(int)
balanced_idx = balance_idx(a)
test_eq(np.mean(a[balanced_idx]), 0.5)
test_eq(isinstance(balanced_idx, L), True)
t = concat(torch.zeros(5), torch.ones(10))
balanced_idx = balance_idx(t, shuffle=True)
test_eq(t[balanced_idx].mean(), 0.5)
test_eq(isinstance(balanced_idx, L), True)
# -
a, b = np.arange(100_000), np.arange(100_000, 200_000)
# +
soft_labels = True
filter_pseudolabels = .5
balanced_pseudolabels = True
pseudolabels = torch.rand(1000, 3)
pseudolabels = torch.softmax(pseudolabels, -1) if soft_labels else torch.argmax(pseudolabels, -1)
hpl = torch.argmax(pseudolabels, -1) if soft_labels else pseudolabels
if filter_pseudolabels and pseudolabels.ndim > 1:
error = 1 - pseudolabels.max(-1).values
filt_pl_idx = np.arange(len(error))[error < filter_pseudolabels]
filt_pl = pseudolabels[error < filter_pseudolabels]
assert len(filt_pl) > 0, 'no filtered pseudolabels'
filt_hpl = torch.argmax(filt_pl, -1)
else:
filt_pl_idx = np.arange(len(pseudolabels))
filt_pl = filt_hpl = pseudolabels
# -
pl_split = filt_pl_idx[balance_idx(filt_hpl)] if balanced_pseudolabels else filt_pl_idx
test_eq(hpl[pl_split].float().mean(), np.mean(np.unique(hpl)))
#export
def TrainValidTestSplitter(n_splits:int=1, valid_size:Union[float, int]=0.2, test_size:Union[float, int]=0., train_only:bool=False,
stratify:bool=True, balance:bool=False, shuffle:bool=True, random_state:Union[None, int]=None, verbose:bool=False, **kwargs):
"Split `items` into random train, valid (and test optional) subsets."
if not shuffle and stratify and not train_only:
pv('stratify set to False because shuffle=False. If you want to stratify set shuffle=True', verbose)
stratify = False
def _inner(o, **kwargs):
if stratify:
_, unique_counts = np.unique(o, return_counts=True)
if np.min(unique_counts) >= 2 and np.min(unique_counts) >= n_splits: stratify_ = stratify
elif np.min(unique_counts) < n_splits:
stratify_ = False
pv(f'stratify set to False as n_splits={n_splits} cannot be greater than the min number of members in each class ({np.min(unique_counts)}).',
verbose)
else:
stratify_ = False
pv('stratify set to False as the least populated class in o has only 1 member, which is too few.', verbose)
else: stratify_ = False
vs = 0 if train_only else 1. / n_splits if n_splits > 1 else int(valid_size * len(o)) if isinstance(valid_size, float) else valid_size
if test_size:
ts = int(test_size * len(o)) if isinstance(test_size, float) else test_size
train_valid, test = train_test_split(range(len(o)), test_size=ts, stratify=o if stratify_ else None, shuffle=shuffle,
random_state=random_state, **kwargs)
test = toL(test)
if shuffle: test = random_shuffle(test, random_state)
if vs == 0:
train, _ = RandomSplitter(0, seed=random_state)(o[train_valid])
train = toL(train)
if balance: train = train[balance_idx(o[train], random_state=random_state)]
if shuffle: train = random_shuffle(train, random_state)
train_ = L(L([train]) * n_splits) if n_splits > 1 else train
valid_ = L(L([train]) * n_splits) if n_splits > 1 else train
test_ = L(L([test]) * n_splits) if n_splits > 1 else test
if n_splits > 1:
return [split for split in itemify(train_, valid_, test_)]
else:
return train_, valid_, test_
elif n_splits > 1:
if stratify_:
splits = StratifiedKFold(n_splits=n_splits, shuffle=shuffle, random_state=random_state).split(np.arange(len(train_valid)), o[train_valid])
else:
splits = KFold(n_splits=n_splits, shuffle=shuffle, random_state=random_state).split(np.arange(len(train_valid)))
train_, valid_ = L([]), L([])
for train, valid in splits:
train, valid = toL(train), toL(valid)
if balance: train = train[balance_idx(o[train], random_state=random_state)]
if shuffle:
train = random_shuffle(train, random_state)
valid = random_shuffle(valid, random_state)
train_.append(L(L(train_valid)[train]))
valid_.append(L(L(train_valid)[valid]))
test_ = L(L([test]) * n_splits)
return [split for split in itemify(train_, valid_, test_)]
else:
train, valid = train_test_split(range(len(train_valid)), test_size=vs, random_state=random_state,
stratify=o[train_valid] if stratify_ else None, shuffle=shuffle, **kwargs)
train, valid = toL(train), toL(valid)
if balance: train = train[balance_idx(o[train], random_state=random_state)]
if shuffle:
train = random_shuffle(train, random_state)
valid = random_shuffle(valid, random_state)
return (L(L(train_valid)[train]), L(L(train_valid)[valid]), test)
else:
if vs == 0:
train, _ = RandomSplitter(0, seed=random_state)(o)
train = toL(train)
if balance: train = train[balance_idx(o[train], random_state=random_state)]
if shuffle: train = random_shuffle(train, random_state)
train_ = L(L([train]) * n_splits) if n_splits > 1 else train
valid_ = L(L([train]) * n_splits) if n_splits > 1 else train
if n_splits > 1:
return [split for split in itemify(train_, valid_)]
else:
return (train_, valid_)
elif n_splits > 1:
if stratify_: splits = StratifiedKFold(n_splits=n_splits, shuffle=shuffle, random_state=random_state).split(np.arange(len(o)), o)
else: splits = KFold(n_splits=n_splits, shuffle=shuffle, random_state=random_state).split(np.arange(len(o)))
train_, valid_ = L([]), L([])
for train, valid in splits:
train, valid = toL(train), toL(valid)
if balance: train = train[balance_idx(o[train], random_state=random_state)]
if shuffle:
train = random_shuffle(train, random_state)
valid = random_shuffle(valid, random_state)
if not isinstance(train, (list, L)): train = train.tolist()
if not isinstance(valid, (list, L)): valid = valid.tolist()
train_.append(L(train))
valid_.append(L(L(valid)))
return [split for split in itemify(train_, valid_)]
else:
train, valid = train_test_split(range(len(o)), test_size=vs, random_state=random_state, stratify=o if stratify_ else None,
shuffle=shuffle, **kwargs)
train, valid = toL(train), toL(valid)
if balance: train = train[balance_idx(o[train], random_state=random_state)]
return train, valid
return _inner
#export
def plot_splits(splits):
_max = 0
_splits = 0
for i, split in enumerate(splits):
if is_listy(split[0]):
for j, s in enumerate(split):
_max = max(_max, array(s).max())
_splits += 1
else:
_max = max(_max, array(split).max())
_splits += 1
_splits = [splits] if not is_listy(split[0]) else splits
v = np.zeros((len(_splits), _max + 1))
for i, split in enumerate(_splits):
if is_listy(split[0]):
for j, s in enumerate(split):
v[i, s] = 1 + j
else: v[i, split] = 1 + i
vals = np.unique(v)
plt.figure(figsize=(16, len(_splits)/2))
if len(vals) == 1:
v = np.ones((len(_splits), _max + 1))
plt.pcolormesh(v, color='blue')
legend_elements = [Patch(facecolor='blue', label='Train')]
plt.legend(handles=legend_elements, bbox_to_anchor=(1.05, 1), loc='upper left')
else:
colors = L(['gainsboro', 'blue', 'limegreen', 'red'])[vals]
cmap = LinearSegmentedColormap.from_list('', colors)
plt.pcolormesh(v, cmap=cmap)
legend_elements = L([
Patch(facecolor='gainsboro', label='None'),
Patch(facecolor='blue', label='Train'),
Patch(facecolor='limegreen', label='Valid'),
Patch(facecolor='red', label='Test')])[vals]
plt.legend(handles=legend_elements, bbox_to_anchor=(1.05, 1), loc='upper left')
plt.title('Split distribution')
plt.yticks(ticks=np.arange(.5, len(_splits)+.5, 1.0), labels=np.arange(1, len(_splits)+1, 1.0).astype(int))
plt.gca().invert_yaxis()
plt.show()
#export
def get_splits(o, n_splits:int=1, valid_size:float=0.2, test_size:float=0., train_only:bool=False, train_size:Union[None, float, int]=None, balance:bool=False,
shuffle:bool=True, stratify:bool=True, check_splits:bool=True, random_state:Union[None, int]=None, show_plot:bool=True, verbose:bool=False):
'''Arguments:
o : object to which splits will be applied, usually target.
n_splits : number of folds. Must be an int >= 1.
valid_size : size of validation set. Only used if n_splits = 1. If n_splits > 1 valid_size = (1. - test_size) / n_splits.
test_size : size of test set. Default = 0.
train_only : if True valid set == train set. This may be useful for debugging purposes.
train_size : size of the train set used. Default = None (the remainder after assigning both valid and test).
                     Useful to get learning curves with different train sizes or to grab a small batch to debug a neural net.
balance : whether to balance data so that train always contain the same number of items per class.
        shuffle      : whether to shuffle data before splitting into batches. Note that the samples within each split will be shuffled.
stratify : whether to create folds preserving the percentage of samples for each class.
check_splits : whether to perform leakage and completion checks.
random_state : when shuffle is True, random_state affects the ordering of the indices. Pass an int for reproducible output.
show_plot : plot the split distribution
'''
if n_splits == 1 and valid_size == 0. and test_size == 0.: train_only = True
if balance: stratify = True
splits = TrainValidTestSplitter(n_splits, valid_size=valid_size, test_size=test_size, train_only=train_only, stratify=stratify,
balance=balance, shuffle=shuffle, random_state=random_state, verbose=verbose)(o)
if check_splits:
if train_only or (n_splits == 1 and valid_size == 0): print('valid == train')
elif n_splits > 1:
for i in range(n_splits):
leakage_finder([*splits[i]], verbose=True)
cum_len = 0
for split in splits[i]: cum_len += len(split)
if not balance: assert len(o) == cum_len, f'len(o)={len(o)} while cum_len={cum_len}'
else:
leakage_finder([splits], verbose=True)
cum_len = 0
if not isinstance(splits[0], Integral):
for split in splits: cum_len += len(split)
else: cum_len += len(splits)
if not balance: assert len(o) == cum_len, f'len(o)={len(o)} while cum_len={cum_len}'
if train_size is not None and train_size != 1: # train_size=1 legacy
if n_splits > 1:
splits = list(splits)
for i in range(n_splits):
splits[i] = list(splits[i])
if isinstance(train_size, Integral):
n_train_samples = train_size
elif train_size > 0 and train_size < 1:
n_train_samples = int(len(splits[i][0]) * train_size)
splits[i][0] = L(np.random.choice(splits[i][0], n_train_samples, False).tolist())
if train_only:
if valid_size != 0: splits[i][1] = splits[i][0]
if test_size != 0: splits[i][2] = splits[i][0]
splits[i] = tuple(splits[i])
splits = tuple(splits)
else:
splits = list(splits)
if isinstance(train_size, Integral):
n_train_samples = train_size
elif train_size > 0 and train_size < 1:
n_train_samples = int(len(splits[0]) * train_size)
splits[0] = L(np.random.choice(splits[0], n_train_samples, False).tolist())
if train_only:
if valid_size != 0: splits[1] = splits[0]
if test_size != 0: splits[2] = splits[0]
splits = tuple(splits)
if show_plot: plot_splits(splits)
return splits
# +
n_splits = 5
valid_size = 0.2
test_size = 0.2
train_only = False # set to True for debugging (valid = train)
train_size = 5000
stratify = True
balance = False
shuffle = True
predefined_splits = None
show_plot = True
check_splits = True
random_state = 23
y = np.random.randint(0, 3, 10000) + 100
splits = get_splits(y, n_splits=n_splits, valid_size=valid_size, test_size=test_size, shuffle=shuffle, balance=balance, stratify=stratify,
train_only=train_only, train_size=train_size, check_splits=check_splits, random_state=random_state, show_plot=show_plot, verbose=True)
splits
# -
train_size=256
y = np.random.randint(0, 3, 1000) + 100
splits = get_splits(y, train_size=train_size, train_only=True)
test_eq(splits[0], splits[1])
test_eq(len(splits[0]), train_size)
splits
# +
#export
def TSSplitter(valid_size:Union[int, float]=0.2, test_size:Union[int, float]=0., show_plot:bool=True):
"Create function that splits `items` between train/val with `valid_size` without shuffling data."
def _inner(o):
valid_cut = valid_size if isinstance(valid_size, Integral) else int(round(valid_size * len(o)))
if test_size:
test_cut = test_size if isinstance(test_size, Integral) else int(round(test_size * len(o)))
idx = np.arange(len(o))
if test_size:
splits = L(idx[:-valid_cut - test_cut].tolist()), L(idx[-valid_cut - test_cut: - test_cut].tolist()), L(idx[-test_cut:].tolist())
else:
splits = L(idx[:-valid_cut].tolist()), L(idx[-valid_cut:].tolist())
if show_plot:
if len(o) > 1_000_000:
warnings.warn('the splits are too large to be plotted')
else:
plot_splits(splits)
return splits
return _inner
TimeSplitter = TSSplitter
# -
y = np.arange(1000) + 100
test_eq(TimeSplitter(valid_size=0.2)(y)[1], L(np.arange(800, 1000).tolist()))
test_eq(TimeSplitter(valid_size=0.2)(y)[0], TimeSplitter(valid_size=200)(y)[0])
TimeSplitter(valid_size=0.2, show_plot=True)(y)
# +
n_splits = 5
valid_size = 0.2
test_size = 0
train_only = False # set to True for debugging (valid = train)
train_size = None
stratify = True
balance = True
shuffle = True
predefined_splits = None
show_plot = True
check_splits = True
random_state = 23
splits = get_splits(y, n_splits=n_splits, valid_size=valid_size, test_size=test_size, shuffle=shuffle, balance=balance, stratify=stratify,
train_only=train_only, train_size=train_size, check_splits=check_splits, random_state=random_state, show_plot=show_plot, verbose=True)
split = splits[0] if n_splits == 1 else splits[0][0]
y[split].mean(), split
# -
list([splits[0], splits[1], splits[2], splits[3], splits[4]])
# +
n_splits = 5
valid_size = 0.
test_size = 0.
shuffle = True
stratify = True
train_only = True
train_size = None
check_splits = True
random_state = 1
show_plot = True
splits = get_splits(y, n_splits=n_splits, valid_size=valid_size, test_size=test_size, shuffle=shuffle, stratify=stratify,
train_only=train_only, train_size=train_size, check_splits=check_splits, random_state=random_state, show_plot=show_plot, verbose=True)
for split in splits:
test_eq(len(split[0]), len(y))
test_eq(np.sort(split[0]), np.arange(len(y)))
# +
n_splits = 5
y = np.random.randint(0, 2, 1000)
splits = get_splits(y, n_splits=n_splits, shuffle=False, check_splits=True)
test_eq(np.concatenate((L(zip(*splits))[1])), np.arange(len(y)))
splits = get_splits(y, n_splits=n_splits, shuffle=True, check_splits=True)
test_eq(np.sort(np.concatenate((L(zip(*splits))[1]))), np.arange(len(y)))
# +
n_splits = 2
y = np.random.randint(0, 2, 1000)
splits = get_splits(y, n_splits=n_splits, test_size=0.2, shuffle=False)
for i in range(n_splits): leakage_finder(*splits[i])
test_eq(len(splits), n_splits)
test_eq(len(splits[0]), 3)
s = []
[s.extend(split) for split in splits[0]]
test_eq(np.sort(s), np.arange(len(y)))
s = []
[s.extend(split) for split in splits[1]]
test_eq(np.sort(s), np.arange(len(y)))
# -
y = np.random.randint(0, 2, 1000)
splits1 = get_splits(y, valid_size=.25, test_size=0, random_state=23, stratify=True, shuffle=True)
splits2 = get_splits(y, valid_size=.25, test_size=0, random_state=23, stratify=True, shuffle=True)
splits3 = get_splits(y, valid_size=.25, test_size=0, random_state=None, stratify=True, shuffle=True)
splits4 = get_splits(y, valid_size=.25, test_size=0, random_state=None, stratify=True, shuffle=True)
test_eq(splits1[0], splits2[0])
test_ne(splits3[0], splits4[0])
y = np.random.randint(0, 2, 100)
splits = get_splits(y, valid_size=.25, test_size=0, random_state=23, stratify=True, shuffle=True)
test_eq(len(splits), 2)
y = np.random.randint(0, 2, 100)
splits = get_splits(y, valid_size=.25, test_size=0, random_state=23, stratify=True)
test_eq(len(splits), 2)
y = np.random.randint(0, 2, 100)
splits = get_splits(y, valid_size=.25, test_size=20, random_state=23, stratify=True)
test_eq(len(splits), 3)
leakage_finder(*splits)
splits = TrainValidTestSplitter(valid_size=.25, test_size=20, random_state=23, stratify=True)(np.random.randint(0, 2, 100))
test_eq(len(splits[1]), 25)
test_eq(len(splits[2]), 20)
o = np.random.randint(0, 2, 1000)
for p in [1, .75, .5, .25, .125]:
splits = get_splits(o, train_size=p)
test_eq(len(splits[0]), len(o) * .8 * p)
y = L([0] * 50 + [1] * 25 + [2] * 15 + [3] * 10)
splits = get_splits(y, valid_size=.2, test_size=.2)
test_eq(np.mean(y[splits[0]])==np.mean(y[splits[1]])==np.mean(y[splits[2]]), True)
splits
y = L([0] * 50 + [1] * 25 + [2] * 15 + [3] * 10)
splits = get_splits(y, n_splits=1, valid_size=.2, test_size=.2, shuffle=False)
# test_eq(splits[0] + splits[1] + splits[2], np.arange(100))
splits
splits = get_splits(np.random.randint(0,5,100), valid_size=0.213, test_size=17)
test_eq(len(splits[1]), 21)
test_eq(len(splits[2]), 17)
splits = get_splits(np.random.randint(0,5,100), valid_size=0.213, test_size=17, train_size=.2)
splits
# +
#export
def get_predefined_splits(*xs):
'''xs is a list with X_train, X_valid, ...'''
splits_ = []
start = 0
for x in xs:
splits_.append(L(list(np.arange(start, start + len(x)))))
start += len(x)
return tuple(splits_)
def combine_split_data(xs, ys=None):
'''xs is a list with X_train, X_valid, .... ys is None or a list with y_train, y_valid, .... '''
xs = [to3d(x) for x in xs]
splits = get_predefined_splits(*xs)
if ys is None: return concat(*xs), None, splits
else: return concat(*xs), concat(*ys), splits
# -
#export
def get_splits_len(splits):
_len = []
for split in splits:
if isinstance(split[0], (list, L, tuple)): _len.append([len(s) for s in split])
else: _len.append(len(split))
return _len
X_train, y_train, X_valid, y_valid = np.random.rand(3,3,4), np.random.randint(0,2,3), np.random.rand(2,3,4), np.random.randint(0,2,2)
X, y, splits = combine_split_data([X_train, X_valid], [y_train, y_valid])
test_eq(X_train, X[splits[0]])
test_eq(X_valid, X[splits[1]])
test_type(X_train, X)
test_type(y_train, y)
X_train, y_train, X_valid, y_valid = np.random.rand(3,4), np.random.randint(0,2,3), np.random.rand(2,4), np.random.randint(0,2,2)
X, y, splits = combine_split_data([X_train, X_valid], [y_train, y_valid])
test_eq(X_train[:, None], X[splits[0]])
test_eq(X_valid[:, None], X[splits[1]])
test_type(X_train, X)
test_type(y_train, y)
#hide
from tsai.imports import *
from tsai.export import *
nb_name = get_nb_name()
nb_name = "010_data.validation.ipynb"
create_scripts(nb_name);
| nbs/010_data.validation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import pymc3 as pm
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn-darkgrid')
np.set_printoptions(precision=2)
# # Simple example
# +
clusters = 3
n_cluster = [90, 50, 75]
n_total = sum(n_cluster)
means = [9, 21, 35]
std_devs = [2, 2, 2]
mix = np.random.normal(np.repeat(means, n_cluster), np.repeat(std_devs, n_cluster))
# -
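As a rough sanity check on the simulated mixture: with cluster sizes (90, 50, 75) and component means (9, 21, 35), the implied population mean is (90·9 + 50·21 + 75·35) / 215 ≈ 20.86. The sketch below re-simulates with its own seeded RNG (not the global NumPy state used above) and should land nearby:

```python
import numpy as np

n_cluster = [90, 50, 75]
means = [9, 21, 35]
# weighted average of the component means
expected_mean = np.dot(n_cluster, means) / sum(n_cluster)

rng = np.random.default_rng(0)
sample = rng.normal(np.repeat(means, n_cluster), 2)
# the sample mean fluctuates around expected_mean (standard error ~0.8 here)
```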
sns.kdeplot(np.array(mix))
plt.xlabel('$x$', fontsize=14)
plt.savefig('B04958_07_01.png', dpi=300, figsize=[5.5, 5.5])
# +
# Author: <NAME>
import matplotlib.tri as tri
from functools import reduce
from matplotlib import ticker, cm
_corners = np.array([[0, 0], [1, 0], [0.5, 0.75**0.5]])
_triangle = tri.Triangulation(_corners[:, 0], _corners[:, 1])
_midpoints = [(_corners[(i + 1) % 3] + _corners[(i + 2) % 3]) / 2.0 for i in range(3)]
def xy2bc(xy, tol=1.e-3):
'''Converts 2D Cartesian coordinates to barycentric.
Arguments:
xy: A length-2 sequence containing the x and y value.
'''
s = [(_corners[i] - _midpoints[i]).dot(xy - _midpoints[i]) / 0.75 for i in range(3)]
return np.clip(s, tol, 1.0 - tol)
class Dirichlet(object):
def __init__(self, alpha):
'''Creates Dirichlet distribution with parameter `alpha`.'''
from math import gamma
from operator import mul
self._alpha = np.array(alpha)
        self._coef = gamma(np.sum(self._alpha)) / reduce(mul, [gamma(a) for a in self._alpha])
def pdf(self, x):
'''Returns pdf value for `x`.'''
from operator import mul
        return self._coef * reduce(mul, [xx ** (aa - 1)
                                         for (xx, aa) in zip(x, self._alpha)])
def sample(self, N):
'''Generates a random sample of size `N`.'''
return np.random.dirichlet(self._alpha, N)
def draw_pdf_contours(dist, nlevels=100, subdiv=8, **kwargs):
'''Draws pdf contours over an equilateral triangle (2-simplex).
Arguments:
dist: A distribution instance with a `pdf` method.
        nlevels (int): Number of contours to draw.
        subdiv (int): Number of recursive mesh subdivisions to create.
        kwargs: Keyword args passed on to `plt.tricontourf`.
'''
refiner = tri.UniformTriRefiner(_triangle)
trimesh = refiner.refine_triangulation(subdiv=subdiv)
pvals = [dist.pdf(xy2bc(xy)) for xy in zip(trimesh.x, trimesh.y)]
plt.tricontourf(trimesh, pvals, nlevels, cmap=cm.Blues, **kwargs)
plt.axis('equal')
plt.xlim(0, 1)
plt.ylim(0, 0.75**0.5)
plt.axis('off')
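The hand-rolled `Dirichlet.pdf` above should agree with SciPy's implementation at any interior point of the simplex. A self-contained check that re-derives the same normalizing constant (α = (2, 5, 10) is one of the panels plotted below):

```python
import numpy as np
from math import gamma
from scipy import stats

alpha = np.array([2, 5, 10])
x = np.array([0.2, 0.3, 0.5])  # a point on the simplex (sums to 1)

# Gamma(sum(alpha)) / prod(Gamma(alpha_i)), as in Dirichlet.__init__
coef = gamma(alpha.sum()) / np.prod([gamma(a) for a in alpha])
pdf_manual = coef * np.prod(x ** (alpha - 1))
pdf_scipy = stats.dirichlet.pdf(x, alpha)
```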
# +
alphas = [[0.5] * 3, [1] * 3, [10] * 3, [2, 5, 10]]
for (i, alpha) in enumerate(alphas):
plt.subplot(2, 2, i + 1)
dist = Dirichlet(alpha)
draw_pdf_contours(dist)
plt.title(r'$\alpha$ = ({:.1f}, {:.1f}, {:.1f})'.format(*alpha), fontsize=16)
plt.savefig('B04958_07_02.png', dpi=300, figsize=[5.5, 5.5])
# -
with pm.Model() as model_kg:
# Each observation is assigned to a cluster/component with probability p
p = pm.Dirichlet('p', a=np.ones(clusters))
category = pm.Categorical('category', p=p, shape=n_total)
# Known Gaussians means
means = pm.math.constant([10, 20, 35])
y = pm.Normal('y', mu=means[category], sd=2, observed=mix)
trace_kg = pm.sample(1000)
varnames_kg = ['p']
pm.traceplot(trace_kg, varnames_kg)
plt.savefig('B04958_07_03.png', dpi=300, figsize=[5.5, 5.5])
pm.summary(trace_kg, varnames_kg)
with pm.Model() as model_ug:
# Each observation is assigned to a cluster/component with probability p
p = pm.Dirichlet('p', a=np.ones(clusters))
category = pm.Categorical('category', p=p, shape=n_total)
    # We estimate the unknown Gaussian means and standard deviation
means = pm.Normal('means', mu=[10, 20, 35], sd=2, shape=clusters)
sd = pm.HalfCauchy('sd', 5)
y = pm.Normal('y', mu=means[category], sd=sd, observed=mix)
trace_ug = pm.sample(1000)
varnames_ug = ['means', 'sd', 'p']
pm.traceplot(trace_ug, varnames_ug)
plt.savefig('B04958_07_05.png', dpi=300, figsize=[5.5, 5.5])
pm.summary(trace_ug, varnames_ug)
ppc = pm.sample_posterior_predictive(trace_ug, 50, model_ug)
# +
for i in ppc['y']:
sns.kdeplot(i, alpha=0.1, color='C0')
sns.kdeplot(np.array(mix), lw=2, color='k') # you may want to replace this with the posterior mean
plt.xlabel('$x$', fontsize=14)
plt.savefig('B04958_07_06.png', dpi=300, figsize=[5.5, 5.5])
# -
# ## Marginalized Gaussian Mixture model
with pm.Model() as model_mg:
p = pm.Dirichlet('p', a=np.ones(clusters))
means = pm.Normal('means', mu=[10, 20, 35], sd=2, shape=clusters)
sd = pm.HalfCauchy('sd', 5)
y = pm.NormalMixture('y', w=p, mu=means, sd=sd, observed=mix)
trace_mg = pm.sample(5000)
chain_mg = trace_mg[:]
varnames_mg = ['means', 'sd', 'p']
pm.traceplot(chain_mg, varnames_mg);
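The `NormalMixture` likelihood above marginalizes the per-observation category analytically: log p(y) = logsumexp_k [log w_k + log N(y | mu_k, sigma)]. A hedged numpy sketch of that marginalized log-density (the function name is mine, not PyMC3's):

```python
import numpy as np

def mixture_logpdf(y, w, mus, sd):
    # log p(y) = logsumexp_k [ log w_k + log N(y | mu_k, sd) ]
    comps = (np.log(w)
             - 0.5 * np.log(2 * np.pi * sd ** 2)
             - 0.5 * ((y - np.array(mus)) / sd) ** 2)
    m = comps.max()  # numerically stable logsumexp
    return m + np.log(np.exp(comps - m).sum())

v = mixture_logpdf(20.0, w=[1 / 3, 1 / 3, 1 / 3], mus=[10, 20, 35], sd=2.0)
print(v)  # ≈ -2.71, dominated by the component centered at 20
```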
# ## Zero inflated Poisson model
lam_params = [0.5, 1.5, 3, 8]
k = np.arange(0, max(lam_params) * 3)
for lam in lam_params:
y = stats.poisson(lam).pmf(k)
plt.plot(k, y, 'o-', label="$\\lambda$ = {:3.1f}".format(lam))
plt.legend()
plt.xlabel('$k$', fontsize=14)
plt.ylabel('$pmf(k)$', fontsize=14)
plt.savefig('B04958_07_07.png', dpi=300, figsize=(5.5, 5.5))
# +
np.random.seed(42)
n = 100
theta = 2.5 # Poisson rate
pi = 0.1 # probability of extra-zeros (pi = 1-psi)
# Simulate some data
counts = np.array([(np.random.random() > pi) * np.random.poisson(theta) for i in range(n)])
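In this simulation a zero is injected with probability pi before the Poisson draw, so the marginal mean is (1 - pi) * theta = 0.9 * 2.5 = 2.25. A vectorized sketch that checks this on a larger sample (the sample size and seed are my choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n, theta, pi = 200_000, 2.5, 0.1
# zero-inflation: with probability pi the count is forced to zero
zip_counts = (rng.random(n) > pi) * rng.poisson(theta, n)
print(zip_counts.mean())  # ≈ (1 - pi) * theta = 2.25
```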
# +
#plt.hist(counts, bins=30);
# -
with pm.Model() as ZIP:
psi = pm.Beta('psi', 1, 1)
lam = pm.Gamma('lam', 2, 0.1)
y = pm.ZeroInflatedPoisson('y', psi, lam, observed=counts)
trace_ZIP = pm.sample(5000)
pm.traceplot(trace_ZIP);
plt.savefig('B04958_07_08.png', dpi=300, figsize=(5.5, 5.5))
pm.summary(trace_ZIP)
# ## Zero inflated Poisson regression
# +
#Kruschke plot
# -
fish_data = pd.read_csv('../../../code/data/fish.csv')
fish_data.head()
# +
#plt.hist(fish_data['count'], bins=20);
# -
with pm.Model() as ZIP_reg:
psi = pm.Beta('psi', 1, 1)
alpha = pm.Normal('alpha', 0, 10)
beta = pm.Normal('beta', 0, 10, shape=2)
lam = pm.math.exp(alpha + beta[0] * fish_data['child'] + beta[1] * fish_data['camper'])
y = pm.ZeroInflatedPoisson('y', psi, lam, observed=fish_data['count'])
trace_ZIP_reg = pm.sample(2000)
pm.traceplot(trace_ZIP_reg)
plt.savefig('B04958_07_10.png', dpi=300, figsize=(5.5, 5.5));
pm.summary(trace_ZIP_reg)
children = [0, 1, 2, 3, 4]
fish_count_pred_0 = []
fish_count_pred_1 = []
thin = 5
for n in children:
without_camper = trace_ZIP_reg['alpha'][::thin] + trace_ZIP_reg['beta'][:,0][::thin] * n
with_camper = without_camper + trace_ZIP_reg['beta'][:,1][::thin]
fish_count_pred_0.append(np.exp(without_camper))
fish_count_pred_1.append(np.exp(with_camper))
# +
plt.plot(children, fish_count_pred_0, 'C0o', alpha=0.01)
plt.plot(children, fish_count_pred_1, 'C1o', alpha=0.01)
plt.xticks(children);
plt.xlabel('Number of children', fontsize=14)
plt.ylabel('Fish caught', fontsize=14)
plt.plot([], 'C0o', label='without camper')
plt.plot([], 'C1o', label='with camper')
plt.legend(fontsize=14)
plt.savefig('B04958_07_11.png', dpi=300, figsize=(5.5, 5.5))
# -
# ## Robust logistic Regression
iris = sns.load_dataset("iris")
df = iris.query("species == ('setosa', 'versicolor')")
y_0 = pd.Categorical(df['species']).codes
x_n = 'sepal_length'
x_0 = df[x_n].values
y_0 = np.concatenate((y_0, np.ones(6)))
x_0 = np.concatenate((x_0, [4.2, 4.5, 4.0, 4.3, 4.2, 4.4]))
x_0_m = x_0 - x_0.mean()
plt.plot(x_0, y_0, 'o', color='k')
plt.savefig('B04958_07_12.png', dpi=300, figsize=(5.5, 5.5))
with pm.Model() as model_rlg:
alpha_tmp = pm.Normal('alpha_tmp', mu=0, sd=100)
beta = pm.Normal('beta', mu=0, sd=10)
mu = alpha_tmp + beta * x_0_m
theta = pm.Deterministic('theta', 1 / (1 + pm.math.exp(-mu)))
pi = pm.Beta('pi', 1, 1)
p = pi * 0.5 + (1 - pi) * theta
alpha = pm.Deterministic('alpha', alpha_tmp - beta * x_0.mean())
bd = pm.Deterministic('bd', -alpha/beta)
yl = pm.Bernoulli('yl', p=p, observed=y_0)
trace_rlg = pm.sample(2000)
varnames = ['alpha', 'beta', 'bd', 'pi']
pm.traceplot(trace_rlg, varnames)
plt.savefig('B04958_07_13.png', dpi=300, figsize=(5.5, 5.5))
pm.summary(trace_rlg, varnames)
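The mixture p = pi * 0.5 + (1 - pi) * theta is what makes this regression robust: even for extreme x the success probability stays inside [pi/2, 1 - pi/2], so a few mislabeled points cannot drag the boundary arbitrarily far. A small numpy sketch of that link (parameter values are illustrative):

```python
import numpy as np

def robust_p(x, alpha, beta, pi):
    theta = 1.0 / (1.0 + np.exp(-(alpha + beta * x)))
    # blend with a fair coin: caps the probabilities at pi/2 and 1 - pi/2
    return pi * 0.5 + (1 - pi) * theta

p = robust_p(np.array([-10.0, 0.0, 10.0]), alpha=0.0, beta=1.0, pi=0.2)
print(p)  # ≈ [0.1, 0.5, 0.9]
```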
# +
theta = trace_rlg['theta'].mean(axis=0)
idx = np.argsort(x_0)
plt.plot(x_0[idx], theta[idx], color='C0', lw=3);
plt.axvline(trace_rlg['bd'].mean(), ymax=1, color='C1')
bd_hpd = pm.hpd(trace_rlg['bd'])
plt.fill_betweenx([0, 1], bd_hpd[0], bd_hpd[1], color='C1', alpha=0.5)
plt.plot(x_0, y_0, 'o', color='k')
theta_hpd = pm.hpd(trace_rlg['theta'])[idx]
plt.fill_between(x_0[idx], theta_hpd[:,0], theta_hpd[:,1], color='C0', alpha=0.5)
plt.xlabel(x_n, fontsize=16)
plt.ylabel('$\\theta$', rotation=0, fontsize=16)
plt.savefig('B04958_07_14.png', dpi=300, figsize=(5.5, 5.5))
# -
import sys, IPython, scipy, matplotlib, platform
# platform.linux_distribution() was removed in Python 3.8; platform.platform() is portable
print("This notebook was created on a %s computer running %s and using:\nPython %s\nIPython %s\nPyMC3 %s\nNumPy %s\nSciPy %s\nMatplotlib %s\nSeaborn %s\nPandas %s" % (platform.machine(), platform.platform(), sys.version[:5], IPython.__version__, pm.__version__, np.__version__, scipy.__version__, matplotlib.__version__, sns.__version__, pd.__version__))
| first_edition/code/Chp7/07_Mixture_Models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: VPython
# language: python
# name: vpython
# ---
# # 11 ODE: Visualization of planetary motion
# Visualize the motion of the earth using vpython (based on *Computational Modelling*, <NAME> 2016, Ch 4, program 4.1 and updated for vpython-jupyter 2.x)
# +
import numpy as np
import vpython as vp
M_earth = 3.003467e-6
M_sun = 1.0
G_grav = 4*np.pi**2
def F_gravity(r, m=M_earth, M=M_sun):
rr = np.sum(r*r)
rhat = r/np.sqrt(rr)
return -G_grav*m*M/rr * rhat
def vp_planet_orbit(r0=np.array([1.017, 0, 0]), v0=np.array([0, 6.179, 0]), mass=M_earth, dt=0.001):
"""Visualize 2D planetary motion with velocity verlet"""
dim = len(r0)
assert len(v0) == dim
r = np.array(r0, copy=True)
v = np.array(v0, copy=True)
    scene = vp.canvas(title="Earth around Sun", background=vp.color.black,
forward=vp.vec(0, 2, -1))
planet = vp.sphere(pos=vp.vec(*r), radius=0.1, make_trail=True,
texture=vp.textures.earth,
up=vp.vec(0, 0, 1))
sun = vp.sphere(pos=vp.vec(0, 0, 0), radius=0.2, color=vp.color.yellow,
emissive=True)
sunlight = vp.local_light(pos=vp.vec(0, 0, 0), color=vp.color.yellow)
# start force evaluation for first step
Ft = F_gravity(r, m=mass)
while True:
vhalf = v + 0.5*dt * Ft/mass
r += dt * vhalf
Ftdt = F_gravity(r, m=mass)
v = vhalf + 0.5*dt * Ftdt/mass
# new force becomes old force
Ft = Ftdt
vp.rate(200)
planet.pos = vp.vec(*r)
vp_planet_orbit()
# -
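The loop above is the velocity Verlet scheme: half-kick, drift, force re-evaluation, half-kick. Its appeal for orbit integration is good long-term energy behavior, which is easy to check on a 1D harmonic oscillator (a toy stand-in, not the gravity problem above):

```python
def verlet_energy(x=1.0, v=0.0, k=1.0, m=1.0, dt=0.01, steps=1000):
    """Velocity Verlet for F = -k x; returns the total energy after `steps`."""
    F = -k * x
    for _ in range(steps):
        vhalf = v + 0.5 * dt * F / m      # half kick
        x += dt * vhalf                   # drift
        Fnew = -k * x                     # force at the new position
        v = vhalf + 0.5 * dt * Fnew / m   # second half kick
        F = Fnew
    return 0.5 * m * v ** 2 + 0.5 * k * x ** 2

E = verlet_energy()
print(E)  # stays very close to the initial energy 0.5
```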
| 11_ODEs/11-ODE-visualize-planets.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Binary Search Tree Check
#
# ## Problem Statement
#
# Given a binary tree, check whether it’s a binary search tree or not.
#
# **Again, no solution cell, just worry about your code making sense logically. Hint: Think about tree traversals.**
#
# ## Solution
#
# Fill out your solution below:
# Definition for a binary tree node.
class TreeNode(object):
def __init__(self, x):
self.val = x
self.left = None
self.right = None
import math

class Solution(object):
    def isValidBST(self, root):
        return self.is_valid(root, -math.inf, math.inf)
def is_valid(self, root, min_val, max_val):
if root is None:
return True
else:
return ( root.val > min_val and root.val < max_val and
self.is_valid(root.left, min_val, root.val) and
self.is_valid(root.right, root.val, max_val) )
# This is a classic interview problem, so feel free to just Google search "Validate BST" for more information on this problem!
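As the hint about tree traversals suggests, an alternative to the min/max-bounds recursion is an in-order traversal: a tree is a BST iff the visited values are strictly increasing. A sketch reusing the same `TreeNode` shape (redefined here so the snippet is self-contained):

```python
import math

class TreeNode:
    def __init__(self, x):
        self.val = x
        self.left = None
        self.right = None

def is_valid_bst(root):
    prev = -math.inf
    def inorder(node):
        nonlocal prev
        if node is None:
            return True
        if not inorder(node.left):
            return False
        if node.val <= prev:   # in-order values must strictly increase
            return False
        prev = node.val
        return inorder(node.right)
    return inorder(root)

ok = TreeNode(2); ok.left = TreeNode(1); ok.right = TreeNode(3)
bad = TreeNode(1); bad.left = TreeNode(2)
print(is_valid_bst(ok), is_valid_bst(bad))  # → True False
```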
| code/algorithms/course_udemy_1/Trees/Trees Interview Problems - PRACTICE/Binary Search Tree Check.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# auto load changes to Python files
# %load_ext autoreload
# %autoreload 2
print("autoreload enabled")
import numpy as np
print("numpy imported as np ({})".format(np.__version__))
# +
import pandas as pd
print("pandas imported as pd ({})".format(pd.__version__))
# Remove limits on displayed pandas tables
pd.set_option('display.max_colwidth', None)
print("Pandas display: Remove maximum column width")
pd.set_option('display.max_columns', 100)
print("Pandas display: Show up to 100 columns in tables")
pd.set_option('display.max_rows', 100)
print("Pandas display: Show up to 100 rows in tables")
pd.set_option('display.float_format', lambda x: '%.3f' % x)
print("Pandas display: Set floats to show up to 3 decimal places")
# +
# %matplotlib inline
print("matplotlib: show plots inline")
import matplotlib as mpl
print("matplotlib imported as mpl ({})".format(mpl.__version__))
import matplotlib.pyplot as plt
print("matplotlib.pyplot imported as plt")
mpl.style.use('ggplot')
print("matplotlib: use ggplot style")
import seaborn as sns
sns.set_theme(style="whitegrid")
print("seaborn: set white grid theme")
# -
import logging
import sys
logger = logging.getLogger()
logging.basicConfig(format='%(message)s',
level=logging.INFO, stream=sys.stdout)
print("Logging: show log messages in ipython")
| notebooks/notebook-config.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dcgan_py37
# language: python
# name: dcgan_py37
# ---
# +
from keras import Sequential
from keras.layers import Conv2D, BatchNormalization, LeakyReLU, Flatten, Dense
from keras.layers import Reshape, Conv2DTranspose
from keras.optimizers import Adam
from keras.datasets.cifar10 import load_data
import matplotlib.pyplot as plt
import numpy as np
# +
def define_disc_model():
    # results in an output feature map of 16x16x128
model = Sequential()
model.add(Conv2D(128, (3,3), strides=(2,2), padding='same', input_shape=(32,32,3)))
model.add(BatchNormalization())
model.add(LeakyReLU(alpha=0.2))
    # results in an output feature map of 8x8x128
model.add(Conv2D(128, (3,3), strides=(2,2), padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(alpha=0.2))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
# discriminator basically detects real/fake
# so binary crossentropy loss can be used
opt = Adam(lr=0.0002, beta_1=0.5)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
# model.build(input_shape=(None, 28,28,3))
# print(model.summary())
return model
def define_gen_model():
model = Sequential()
hidden_nodes = 128*8*8
input_nodes = 100
# input layer = 100 nodes, taken randomly from a Guassian distribution
# hidden layer = 128 * 8 * 8 nodes, representing 128 8*8 images
# each of these is a low-res / compressed version of the final image
model.add(Dense(hidden_nodes, input_dim=input_nodes))
model.add(BatchNormalization())
model.add(LeakyReLU(alpha=0.2))
model.add(Reshape((8,8,128)))
# upsample: 128 filters of size 4x4, operating on every 8x8 image
# output image size 16x16
model.add(Conv2DTranspose(128, (4,4), strides=(2,2), padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(alpha=0.2))
# output image size is 32x32
model.add(Conv2DTranspose(128, (4,4), strides=(2,2), padding='same'))
model.add(BatchNormalization())
model.add(LeakyReLU(alpha=0.2))
# Output image size is 32x32x3
model.add(Conv2D(3, (5,5), activation='sigmoid', padding='same'))
# model.build(input_shape=(None, 100))
# print(model.summary())
return model
def define_gan_model(d_model, g_model):
# freeze the discriminator model
# assert that all images received by d_model are real
# this will generate a loss, which will be used by g_model
# to improve the quality of generated images
d_model.trainable = False
model = Sequential()
model.add(g_model)
model.add(d_model)
opt = Adam(lr=0.0002, beta_1=0.5)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])
# model.build(input_shape=(None, 100))
# print(model.summary())
return model
def load_real_samples():
(train_x, train_y), (test_x, test_y) = load_data()
# scale the pixel values between 0 and 1
train_x = train_x.astype('float32')
train_x = train_x / 255.0
return train_x
def gen_real_subset(train_x, n_samples):
chosen_indices = np.random.randint(0, train_x.shape[0], n_samples)
subset_x = train_x[chosen_indices]
subset_y = np.ones((n_samples, 1))
return subset_x, subset_y
def gen_fake_subset(shape, n_samples):
subset_x = np.random.uniform(size=(n_samples, shape[0], shape[1], shape[2]))
subset_y = np.zeros((n_samples, 1))
return subset_x, subset_y
def gen_latent_inputs(n_samples, dimension):
latent_input = np.random.randn(n_samples, dimension)
return latent_input
def gen_fake_samples(g_model, n_samples, latent_dim):
latent_inputs = gen_latent_inputs(n_samples, latent_dim)
fake_x = g_model.predict(latent_inputs)
fake_y = np.zeros((n_samples, 1))
return fake_x, fake_y
def train_d_model(d_model, train_x, batch_size=128, iterations=100):
n_samples = int(batch_size / 2)
for i in range(iterations):
real_x, real_y = gen_real_subset(train_x, n_samples)
img_shape = real_x.shape[1:]
fake_x, fake_y = gen_fake_subset(img_shape, n_samples)
_, real_acc = d_model.train_on_batch(real_x, real_y)
_, fake_acc = d_model.train_on_batch(fake_x, fake_y)
print(f'Real Accuracy: {real_acc*100:.2f}\tFake Accuracy: {fake_acc*100:.2f}')
return d_model
def train_gan(d_model, g_model, gan_model, train_x, latent_dim=100, batch_size=256, n_epochs=100):
n_samples = int(batch_size / 2)
n_batches = int(train_x.shape[0] / batch_size)
for epoch in range(n_epochs):
for batch in range(n_batches):
real_x, real_y = gen_real_subset(train_x, n_samples)
d_loss1, d_acc1 = d_model.train_on_batch(real_x, real_y)
fake_x, fake_y = gen_fake_samples(g_model, n_samples, latent_dim)
d_loss2, d_acc2 = d_model.train_on_batch(fake_x, fake_y)
# d_x, d_y = np.vstack((real_x, fake_x)), np.vstack((real_y, fake_y))
# d_loss, d_acc = d_model.train_on_batch(d_x, d_y)
# gan will receive (n_samples * 2) latent space vectors
gan_x = gen_latent_inputs(n_samples*2, latent_dim)
# these latent inputs will be marked as real i.e. 1
gan_y = np.ones((n_samples*2, 1))
# during training, only generator weights will be updated,
# since we have frozen discriminator weights in gan definition
gan_loss, gan_acc = gan_model.train_on_batch(gan_x, gan_y)
print(f'D Loss Real: {d_loss1:.4f}\tD Loss Fake: {d_loss2:.4f}\tGAN Loss: {gan_loss:.4f}\tBatch: {batch+1}\tEpoch: {epoch+1}')
if((epoch+1) % 5 == 0):
evaluate_gan(d_model, g_model, train_x, latent_dim, epoch, n_samples)
def evaluate_gan(d_model, g_model, train_x, latent_dim, n_epoch, n_samples=25):
# discriminator performance on real samples
real_x, real_y = gen_real_subset(train_x, n_samples)
real_loss, real_acc = d_model.evaluate(real_x, real_y)
    # discriminator performance on fake samples
fake_x, fake_y = gen_fake_samples(g_model, n_samples, latent_dim)
fake_loss, fake_acc = d_model.evaluate(fake_x, fake_y)
print(f'Real Accuracy: {real_acc*100:.2f}\tFake Accuracy: {fake_acc*100:.2f}')
# save the generator model snapshots
f_name = f'./gen_models/g_model_e_{n_epoch+1}.h5'
g_model.save(f_name)
# plot generated images, save to file
save_plot(fake_x[0:4], n_epoch)
def save_plot(images, n_epoch):
for i in range(4):
plt.subplot(2, 2, i+1)
plt.axis('off')
plt.imshow(images[i])
f_name = f'./gen_images/image_e_{n_epoch+1}.png'
plt.savefig(f_name)
plt.close()
# -
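A detail worth noting in `train_gan`: the generator update feeds a full batch of latent vectors (twice `n_samples`) labeled as real, so the frozen discriminator's loss flows only into generator weights. The shapes involved can be sketched without Keras:

```python
import numpy as np

def gen_latent_inputs(n_samples, dimension):
    # latent points drawn from a standard Gaussian
    return np.random.randn(n_samples, dimension)

batch_size, latent_dim = 256, 100
n_samples = batch_size // 2          # half real, half fake for the discriminator
gan_x = gen_latent_inputs(n_samples * 2, latent_dim)
gan_y = np.ones((n_samples * 2, 1))  # all marked "real" to train the generator
print(gan_x.shape, gan_y.shape)      # → (256, 100) (256, 1)
```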
# discriminative model
d_model = define_disc_model()
g_model = define_gen_model()
gan_model = define_gan_model(d_model, g_model)
train_x = load_real_samples()
n_epochs=200
batch_size = 256
iterations = 50
latent_dim = 100
n_fakes = 25
# d_model = train_d_model(d_model, train_x, batch_size=batch_size, iterations=iterations)
# real_x, real_y = gen_real_subset(train_x, n_fakes)
# fake_x, fake_y = gen_fake_samples(g_model, n_fakes, latent_dim)
# for i in range(n_fakes):
# plt.subplot(5, 5, 1+i)
# plt.axis('off')
# plt.imshow(fake_x[i, :, :,:])
# plt.show()
# +
# n_samples = 128
# gan_x = gen_latent_inputs(n_samples*2, latent_dim)
# gan_y = np.ones((n_samples*2, 1))
# gan_x.shape, gan_y.shape
# d_model.predict(fake_x)
# -
train_gan(d_model, g_model, gan_model, train_x, latent_dim, batch_size, n_epochs)
| cifar10_trial1/c_gen.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# ### Lesson on YouTube
#
# https://www.youtube.com/watch?v=Wj0QuMIrFP8&feature=youtu.be
import numpy as np
x = np.arange(9)
x
x.size
y = x.reshape((3,3))
y.size
y.shape
x.shape
| 03-numpy/propriedades.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
from random import sample
link = ("https://github.com/dnllvrvz/Social-Network-Dataset/"
"raw/master/Social%20Network%20Dataset.xlsx")
network_data = pd.read_excel(link, sheet_name=['Elements', 'Connections'])
elements_data = network_data['Elements'] # node list
connections_data = network_data['Connections'] # edge list
edge_cols = ['Type', 'Weight', 'When']
graph = nx.convert_matrix.from_pandas_edgelist(connections_data,
source='From',
target='To',
edge_attr=edge_cols)
node_dict = elements_data.set_index('Label').to_dict(orient='index')
nx.set_node_attributes(graph, node_dict)
fig = plt.figure(figsize=(15, 10))
nx.draw(graph,
node_size=30,
edge_color='white')
fig.set_facecolor('black')
# -
len(graph.nodes),len(graph.edges)
node = sample(list(graph.nodes), 1)[0]
graph.nodes[node]
sampled_nodes = sample(list(graph.nodes), 100)
sub_graph=graph.subgraph(sampled_nodes)
nx.draw(sub_graph, node_size=5, with_labels=False)
from collections import defaultdict
nodes_school_id=nx.get_node_attributes(graph,'School (ID)')
school_nodes=defaultdict(list)
#print (school_nodes.items())
for node,school_id in nodes_school_id.items():
school_nodes[school_id].append(node)
school_nodes[5]
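The grouping idiom above — a `defaultdict(list)` keyed by a node attribute — is worth isolating, since it avoids the usual `if key not in dict` boilerplate (the toy node names below are made up):

```python
from collections import defaultdict

nodes_school_id = {'n1': 5, 'n2': 3, 'n3': 5}  # node -> school id
school_nodes = defaultdict(list)
for node, school_id in nodes_school_id.items():
    school_nodes[school_id].append(node)       # missing keys start as []
print(dict(school_nodes))  # → {5: ['n1', 'n3'], 3: ['n2']}
```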
graph.nodes['S-087f53']
subgraphs={}
for school_id, nodes in school_nodes.items():
subgraph=graph.subgraph(nodes)
subgraphs[school_id]=subgraph
subgraphs[5].nodes
nx.draw(subgraphs[3],node_size=5,with_labels=True)
| Subgraphs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] pycharm={"name": "#%%\n"}
# # Random Forest Applied to the Phishing Dataset
# + pycharm={"name": "#%%\n"}
# %matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# -
sns.set()
df = pd.read_csv("PycharmProjects/Ruba Project final_datasets/ruba_phising_data_set_build_final.csv", sep=",")
df.head()
df.info()
df.describe()
# # Create Model
# # Split The Data Set
y = df["PHISHING STATUS"]
X = df[["HTTPS STATUS","AT THE RATE CHECK","IP ADDRESS PRESENT","DOT COUNT","SLASH COUNT","DASH CHECK","LENGTH OF HOST NAME","SLASH CHECK"]]
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0, test_size=0.2, stratify=y)
rf = RandomForestClassifier(n_estimators=1000, max_features= 4,criterion='entropy', random_state=42)
rf.fit(X_train, y_train)
y_Pred = rf.predict(X_test)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test,y_Pred)
print (cm)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_Pred))
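`accuracy_score` is just the fraction of matching labels — equivalently the trace of the confusion matrix above divided by its sum. A dependency-free sketch of the same quantity:

```python
def accuracy(y_true, y_pred):
    # fraction of predictions that match the true labels
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

print(accuracy([0, 1, 1, 0], [0, 1, 0, 0]))  # → 0.75
```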
# # Save Model
# +
import pickle
# Save the trained model as a pickle string.
saved_model = pickle.dumps(rf)
# Load the pickled model
rf_from_pickle = pickle.loads(saved_model)
# Use the loaded pickled model to make predictions
rf_from_pickle.predict(X_test)
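`pickle.dumps`/`pickle.loads` round-trips the whole fitted estimator through a byte string; the same mechanics can be seen with a plain stand-in object (the dict below is illustrative, not a real model):

```python
import pickle

model_state = {'n_neighbors': 1, 'weights': [0.2, 0.8]}  # stand-in for a fitted model
blob = pickle.dumps(model_state)   # serialize to bytes
restored = pickle.loads(blob)      # deserialize back to an equal object
print(restored == model_state)  # → True
```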
# +
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn; use the standalone package
# Save the model as a pickle in a file
joblib.dump(rf, 'model_rf_1.pkl')
# Load the model from the file
rf_from_joblib = joblib.load('model_rf_1.pkl')
# Use the loaded model to make predictions
rf_from_joblib.predict(X_test)
# -
# Load the model from the file
model_1 = joblib.load('model_rf_1.pkl')
model_1.predict(np.array([0, 0, 0, 3, 1, 1, 0, 0]).reshape(1, -1))
| machine learning model/notebook_files/random_forest_phising.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: bts
# language: python
# name: bts
# ---
# +
import pandas as pd
from pathlib import Path
interim = '../data/interim'
panel = pd.read_pickle(Path(interim) / 'panel.pkl')
events = pd.read_pickle(Path(interim) / 'events.pkl')
batting_games = pd.read_pickle(Path(interim) / 'batting_games.pkl')
# -
panel = panel[panel.BAT_ID.notna()]
panel = panel[['GAME_ID', 'BAT_ID']]
# +
## BATTING HITS
merged = panel.merge(
batting_games[['Win', 'avg_win']].add_prefix('b_'),
on=['GAME_ID', 'BAT_ID'],
how='outer',
indicator=True
)
merged._merge.value_counts()
# -
hits = events.groupby(['GAME_ID', 'BAT_ID'])['H'].agg('max')
# +
## BATTING HITS
merged = panel.merge(
hits,
on=['GAME_ID', 'BAT_ID'],
how='outer',
indicator=True
)
merged._merge.value_counts()
# -
merged
merged['TEAM'] = merged['GAME_ID'].str.slice(0,3)
merged['year'] = merged['GAME_ID'].str.slice(3,7)
pd.crosstab(merged.TEAM, merged._merge)
print(pd.crosstab(merged.year, merged._merge).to_string())
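The `indicator=True` flag used throughout this merge audit adds a `_merge` column marking each row as `both`, `left_only`, or `right_only`; `value_counts()` then summarizes coverage. A toy reproduction (the frames below are made up):

```python
import pandas as pd

left = pd.DataFrame({'GAME_ID': ['g1', 'g2'], 'BAT_ID': ['a', 'b']})
right = pd.DataFrame({'GAME_ID': ['g2', 'g3'], 'BAT_ID': ['b', 'c'], 'H': [1, 0]})
merged = left.merge(right, on=['GAME_ID', 'BAT_ID'], how='outer', indicator=True)
# one matched row, one unmatched on each side
print(merged._merge.value_counts())
```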
| notebooks/MergeDebug.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
import sys, platform, os
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import scipy as sci
import camb as camb
from camb import model, initialpower
print('Using CAMB %s installed at %s'%(camb.__version__,os.path.dirname(camb.__file__)))
import classy as classy
from classy import Class
print('Using CLASS %s installed at %s'%(classy.__version__,os.path.dirname(classy.__file__)))
from ipywidgets.widgets import *
import sympy
from sympy import cos, simplify, sin, sinh, tensorcontraction
from einsteinpy.symbolic import EinsteinTensor, MetricTensor, RicciScalar
sympy.init_printing()
from IPython.display import Markdown, display
def printmd(string, color='black', math=False, fmt='header2'):
if math==True:
mstring = string
elif math==False:
mstring="\\textrm{"+string+"}"
#colorstr = "<span style='color:{}'>{}</span>".format(color, string)
fmtstr = "${\\color{"+color+"}{"+mstring+"}}$"
if fmt=='header2':
fmtstr="## "+fmtstr
if fmt=='header1':
fmtstr="# "+fmtstr
display(Markdown(fmtstr))
return None
| Final***.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="oa2VCG0grMsn" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619540727, "user_tz": -330, "elapsed": 2896, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
import json
# + id="hIsLmszurq4I" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598619540733, "user_tz": -330, "elapsed": 2437, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="bdbccf6e-b9e3-436b-a751-fce284821dcc"
# cd /content/drive/My Drive/Yoga Pose Estimation
# + id="ODft1atO_JmR" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619581164, "user_tz": -330, "elapsed": 3777, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
dataset=pd.read_json('data(3).json')
# + id="vuCMrZbn_P1t" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619584410, "user_tz": -330, "elapsed": 1152, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
dataset=dataset.transpose()
# + id="vfmcQTIf_gdH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 422} executionInfo={"status": "ok", "timestamp": 1598619584414, "user_tz": -330, "elapsed": 893, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="444bd7ad-5243-4904-a0ea-96262a3970c6"
dataset
# + id="TZ3XrGZkYJpo" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619599613, "user_tz": -330, "elapsed": 841, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
labels=[]
for i in range(0,400):
labels.append("a")
for i in range(0,400):
labels.append("b")
for i in range(0,400):
labels.append("c")
# + id="iOMbO4QoYdvS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1598619601824, "user_tz": -330, "elapsed": 1091, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="21b83863-4ff8-42a1-9d9f-5baff6f05b97"
labels
# + id="CimboOshYelp" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619604438, "user_tz": -330, "elapsed": 887, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
train_feature,test_feature,label_train,label_test=train_test_split(dataset,labels,test_size=0.2,random_state=42)
# + id="WmdFTOJJYvSo" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 422} executionInfo={"status": "ok", "timestamp": 1598619606033, "user_tz": -330, "elapsed": 950, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="0305d9f6-4b05-451e-8c75-579d4a96ba8f"
train_feature
# + id="McCjS5rlaJPe" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619609224, "user_tz": -330, "elapsed": 858, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
from sklearn.model_selection import GridSearchCV
model=GridSearchCV(KNeighborsClassifier(),{'n_neighbors':list(np.arange(1,20))})
# + id="9Bn_u-JEauBA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 202} executionInfo={"status": "ok", "timestamp": 1598619613812, "user_tz": -330, "elapsed": 2372, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="288aa91f-c20f-417e-8e29-269b7ce519b3"
model.fit(train_feature,label_train)
# + id="Vdmvkc-1axF4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598619616112, "user_tz": -330, "elapsed": 840, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="b74ef423-a14f-4002-c897-85a7e4807591"
model.best_params_
# + id="FxM0TxnqYwmV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} executionInfo={"status": "ok", "timestamp": 1598619621597, "user_tz": -330, "elapsed": 829, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="2de32902-59ea-42db-c8a6-5b6589086fb6"
model=KNeighborsClassifier(n_neighbors=1)
model.fit(train_feature,label_train)
# + id="yhIffMO6Y73M" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619623614, "user_tz": -330, "elapsed": 884, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
label_pred=model.predict(test_feature)
# + id="kU_SeVNFa33P" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619627393, "user_tz": -330, "elapsed": 1046, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
label_pred=list(label_pred)
# + id="90ApzMOVbF_S" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619629077, "user_tz": -330, "elapsed": 850, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
def counter(l):
uniq=list(set(l))
for i in uniq:
print(i+" count is "+str(l.count(i)))
# + id="F9NgxqCobwY-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} executionInfo={"status": "ok", "timestamp": 1598619630808, "user_tz": -330, "elapsed": 854, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="6db63d0c-50de-42b1-9110-5da501a08c79"
counter(label_pred)
# + id="KAqm4RmMZBnU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} executionInfo={"status": "ok", "timestamp": 1598619637894, "user_tz": -330, "elapsed": 821, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="490c76b4-183a-44c0-fdba-8e4bc79c98d4"
counter(label_test)
# + id="CfMlGFdjZPwz" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619646814, "user_tz": -330, "elapsed": 927, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
from sklearn.metrics import *
# + id="yA5s5yLGZJdI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1598619648510, "user_tz": -330, "elapsed": 868, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="f5243149-f525-41fa-9ae6-e3164924b535"
accuracy_score(label_test,label_pred)
# + id="BoV2P2qEZf1B" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619680144, "user_tz": -330, "elapsed": 867, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
import pickle
# + id="11Bxjo3mcxV3" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598619704697, "user_tz": -330, "elapsed": 1149, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}}
pickle.dump(model,open('model_1200.pkl','wb+'))
# + id="XkyvHR8Hc3vi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 185} executionInfo={"status": "ok", "timestamp": 1598619705940, "user_tz": -330, "elapsed": 607, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="0e3e77ed-5791-4996-8d54-672feb5c1f42"
print(classification_report(label_test,label_pred))
# + id="axvByD9Rd_5O" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 67} executionInfo={"status": "ok", "timestamp": 1598619711833, "user_tz": -330, "elapsed": 874, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjTjqAs_XSPRluT1e_1a161LYmq1xxzr4Q-wY37=s64", "userId": "02067056678749702267"}} outputId="140d003e-7464-4b5e-981d-63373f201eac"
print(confusion_matrix(label_test,label_pred))
# + id="bRcTwscOeJXC" colab_type="code" colab={}
| model_training/model_creation_with 400 each.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
#     language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ChangHuaHua/QM2-Group-12/blob/main/Multivariable_Linear_Regression_Analysis.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="56V8P1rMuoHT"
# Source from which method was based on https://datatofish.com/multiple-linear-regression-python/
# + [markdown] id="j8FEl81Gupp7"
# # Data Upload & Import Statements
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 72} id="7oDkUY9hc5-4" outputId="c5b15f96-b80c-4b81-b896-0d69a4aeca72"
#Upload Africa fully merged data
import pandas as pd
import numpy as np
from google.colab import files
uploaded = files.upload()
# + colab={"base_uri": "https://localhost:8080/", "height": 410} id="iOR4Uy1udDMD" outputId="2634fdbb-1d73-4fe4-ecb8-6aab7699a2ce"
datapath = '/content/2a. Africa_FULL_MERGE.csv'
df = pd.read_csv(datapath, encoding = 'latin1')
df.tail()
# + [markdown] id="khJ26QZ3GEEi"
# #1. HDI vs TFR - Scatter Plot & Regression
# + id="Ec9dy9WqGDjm"
#Creates HDI analysis dataframe which excludes missing values in the dataframe
dfHDI = df[df['HDI'].notna()]
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="_a9swD6TFzK_" outputId="e58f79e6-eeac-47b9-d745-9b9123fe2561"
#Creating scatter plot of HDI vs TFR
import pylab
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import plotly.express as px
fig = px.scatter(dfHDI, x = 'HDI', y = 'TFR', color = 'Year',
title = 'Relationship between HDI and TFR in Africa',
hover_name = 'Country',width=1000, height=600)
fig.show()
# + id="nUgN-WthOOuh" colab={"base_uri": "https://localhost:8080/"} outputId="2a2e54b5-31a2-4a88-f6f6-92c5eed5525b"
# !pip install chart_studio
import chart_studio
# + id="wK6IKQ4eLolF"
username = 'uclqsdh'
api_key = '<KEY>'
chart_studio.tools.set_credentials_file(username=username, api_key=api_key)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="WTdELRusOSCb" outputId="660c9e0a-7b23-470c-b8f5-09f74da798b6"
import chart_studio.plotly as py
py.plot(fig, filename = 'hdi_tfr_scatter', auto_open=True)
# + colab={"base_uri": "https://localhost:8080/"} id="semJzQFWDouQ" outputId="307c807d-32d2-40cb-b93d-5f9be4e0b313"
#linear regression analysis with statsmodels
import statsmodels.api as sm
X = dfHDI["HDI"]
Y = dfHDI["TFR"]
X = sm.add_constant(X) # adding an intercept to model
#model = sm.OLS(Y, X).fit()
model = sm.OLS(Y, X.astype(float)).fit()
print_model = model.summary()
print(print_model)
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="4TrAEaFK3s7f" outputId="56afadbe-0095-4449-83a7-5bd769b300ad"
fig = px.scatter(dfHDI, x="HDI", y="TFR", title = 'Relationship between HDI and TFR in Africa',
trendline="ols", hover_name ='Country', opacity= 0.5, width=1000, height=600)
fig.data[1].update(line_color='red')
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="N7c1dQA2PczK" outputId="78a88061-4e05-408d-cb60-969175c3feb0"
import chart_studio.plotly as py
py.plot(fig, filename = 'hdi_tfr_scatter_with_trendline', auto_open=True)
# + [markdown] id="XWgh1PQUxlK1"
# # 2. Female Education vs TFR - Scatterplot & Regression
# + id="hpthhjp9Gk0t" colab={"base_uri": "https://localhost:8080/", "height": 617} outputId="5f6d37b8-bb83-41a4-96eb-24748f80d81a"
dfED = df[df['Females in secondary education (%)'].notna()]
fig = px.scatter(dfED, x = 'Females in secondary education (%)', y = 'TFR', color = 'Year', width=1200, height=600,
title = 'Relationship between female education levels and TFR in Africa', hover_name = 'Country',
labels={
"TFR":"Total Fertilty Rate (TFR)"})
fig.show()
# + id="QiTIp3vtDa_K" colab={"base_uri": "https://localhost:8080/", "height": 406} outputId="a67b3be1-2c32-4cc4-f769-12f509de619b"
# Single linear regression
import statsmodels.api as sm
dfED = dfED[dfED['TFR'].notna()]
X = dfED['Females in secondary education (%)']
Y = dfED['TFR']
X = sm.add_constant(X) # adding an intercept to model
model = sm.OLS(Y, X).fit()
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="6uLd0-GaQVjs" outputId="830355a9-8fd5-4bcc-81ab-acc63b55f95f"
import chart_studio.plotly as py
py.plot(fig, filename = 'ed_tfr_scatterplot', auto_open=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="nNgrxptNBdwV" outputId="337b1904-e377-4026-b5de-0ace6141a76d"
fig = px.scatter(dfED, x="Females in secondary education (%)", y="TFR", title = 'Relationship between female education levels and TFR in Africa',
trendline="ols", hover_name ='Country', opacity =0.5, width=1200, height=600)
fig.data[1].update(line_color='red')
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="19Hg5LHsQhlu" outputId="549b8f65-895d-4d11-84e9-057971a6247a"
import chart_studio.plotly as py
py.plot(fig, filename = 'ed_tfr_scatterplot_trendline', auto_open=True)
# + [markdown] id="n0TnwPJc1inI"
# # 3 Female LFPR vs TFR Scatter Plot & Regression
# + id="wX64gWGhyRAX"
#Creates LFPR analysis dataframe which excludes missing values in the dataframe
dfLF = df[df['Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)'].notna()]
#dfHDI = df[df['TFR'].notna()]
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="AnjydVFS2IeN" outputId="12c93b64-7c76-4671-eb31-4b11e86aa30d"
#Scatterplot creation
fig = px.scatter(dfLF, x = 'Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)', y = 'TFR', color = 'Year',
title = 'Female LFPR vs TFR', hover_name = 'Country', width=1200, height=600,
labels={
"Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)":"Female labor force participation rate (% of females aged 15 and above)", "TFR":"Totaly Fertility Rate (TFR)"
})
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="w8UDPdfsK4iN" outputId="990d2917-93a5-44b1-81a0-83cc8271b09f"
import chart_studio.plotly as py
py.plot(fig, filename = 'lfpr_tfr_scatterplot', auto_open=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 406} id="A73AZfwe2WKR" outputId="10b86889-97b1-40f9-a2c9-3b6533842cf7"
#Single linear regression
import statsmodels.api as sm # import statsmodels
X = dfLF['Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)']
Y = dfLF['TFR']
X = sm.add_constant(X) # adding an intercept to model
model = sm.OLS(Y, X).fit()
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 617} id="iGqbx8ikBm_x" outputId="2a59289d-2b88-49a8-dc79-492873c6f0bf"
fig = px.scatter(dfLF, x="Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)", y="TFR", title = 'Relationship between female labor force participation and TFR in Africa',
trendline="ols", hover_name ='Country', opacity = 0.5, width=1200, height=600,)
fig.data[1].update(line_color='red')
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="ctHpFXFbRezv" outputId="c840923d-8624-4e33-a02c-1963afa7efe5"
import chart_studio.plotly as py
py.plot(fig, filename = 'lfpr_tfr_scatterplot_trendline', auto_open=True)
# + [markdown] id="LDm8L2GJ2z1t"
# # Multivariable Regression
# + [markdown] id="UOgHSLigDuWv"
# Checking for multicollinearity
# + id="K4vfxrEK25TL"
# First check the assumption of no multicollinearity between these 2 variables using VIF
#source: https://www.geeksforgeeks.org/detecting-multicollinearity-with-vif-python/
# + id="x7c42YAC3Fug"
df = df.dropna()
# + colab={"base_uri": "https://localhost:8080/", "height": 110} id="R0ZAHv2v29oM" outputId="0fc31cb4-6976-449d-8de3-4c3fc5de1f2a"
from statsmodels.stats.outliers_influence import variance_inflation_factor
# the independent variables set
X = df[['Females in secondary education (%)','Labor force participation rate, female (% of female population ages 15+) (modeled ILO estimate)']]
# VIF dataframe
vif_data = pd.DataFrame()
vif_data["feature"] = X.columns
# calculating VIF for each feature
vif_data["VIF"] = [variance_inflation_factor(X.values, i)
for i in range(len(X.columns))]
vif_data
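# The VIF above can also be computed by hand: regress each predictor on the others and take 1 / (1 - R^2). A sketch on synthetic data (column names and values are made up):

```python
import numpy as np
import pandas as pd

def vif_by_hand(X):
    """VIF per column: regress it on the other columns, then 1 / (1 - R^2)."""
    vifs = {}
    for col in X.columns:
        y = X[col].to_numpy()
        others = X.drop(columns=col).to_numpy()
        # least-squares fit with an intercept column
        A = np.column_stack([np.ones(len(others)), others])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1 - resid.var() / y.var()
        vifs[col] = 1.0 / (1.0 - r2)
    return pd.Series(vifs)

# Two nearly collinear made-up predictors -> both VIFs come out large
rng = np.random.default_rng(1)
a = rng.normal(size=100)
b = a + rng.normal(scale=0.1, size=100)
vifs = vif_by_hand(pd.DataFrame({'a': a, 'b': b}))
print(vifs)
```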
# + [markdown] id="jCM2OnJ23sRk"
# As the VIF is above 5 and hence quite large, one of these variables must be omitted from the multivariable regression analysis. Female labor force participation was chosen for omission because of its high VIF and because it fails to meet the linear-relationship requirement of multivariable regression.
# + [markdown] id="lFBx7XVXwM4I"
# # Multivariable Regression (with Income)
# + id="oZmq-1abSBT0"
df.sort_values(by=['Country','Year'], inplace=True)
df['Year'] = df['Year'].astype(int) #alters year from string object to integer
df.set_index(['Country Code', 'Year'],inplace = True)
# + id="ka8uEmzbSGx5"
#Upload GNI per Capita data from World Bank
from google.colab import files
uploaded = files.upload()
# + id="yBS0m5m5E9bL" colab={"base_uri": "https://localhost:8080/", "height": 586} outputId="f7703188-2293-4942-f5e5-d60fce7afa39"
datapath = '/content/GNI per capita.csv'
inc = pd.read_csv(datapath, encoding = 'latin1', skiprows=4)
inc.drop('Indicator Code', axis=1, inplace=True)
inc.drop('Unnamed: 65', axis=1, inplace=True)
inc.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 446} id="MttyoDuCtpsS" outputId="0d8569c8-0780-4220-c36c-40fab13f236e"
inc = pd.melt(inc, id_vars=['Country Name', 'Country Code'], value_vars=list(inc.columns)[3:-2])
inc.rename(columns={'value':'GNI per Capita (US$)'}, inplace=True)
inc.rename(columns={'variable':'Year'}, inplace=True)
inc.rename(columns={'Country Name':'Country'}, inplace=True)
inc.sort_values(by=['Country Code','Year'], inplace=True)
inc['Year'] = inc['Year'].astype(int) #alters year from string object to integer
inc.set_index(['Country Code','Year'],inplace = True)
inc
# + id="Q1mXofzKtLx2"
#merge income data into df
inc_merge = df.merge(inc, left_on = (['Country Code','Year']), right_on =(['Country Code','Year']), how ='left')
inc_merge.drop('Country_x', axis=1, inplace=True)
inc_merge.rename(columns={'Country_y':'Country'}, inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 476} id="YdYEwL18taJG" outputId="09b18838-bc90-42d3-b7ae-640a405f94fd"
inc_merge.tail()
# + id="Kf3cZZk2wETG"
inc_merge = inc_merge.dropna()
# + colab={"base_uri": "https://localhost:8080/", "height": 460} id="ZoPLaaT4ye3d" outputId="f018cc2b-c016-4e45-8954-ceb82c260d6c"
#Multivariable regression with female education & income group
X = inc_merge[['Females in secondary education (%)','GNI per Capita (US$)']]
y = inc_merge['TFR']
X = sm.add_constant(X) # adding an intercept to model
model = sm.OLS(y, X).fit()
# Print out the statistics
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 110} id="grh3Mc2XzBdz" outputId="91dff5e2-258c-4638-83e1-4e359b5ce123"
from statsmodels.stats.outliers_influence import variance_inflation_factor
# the independent variables set
X = inc_merge[['Females in secondary education (%)','GNI per Capita (US$)']]
# VIF dataframe
vif_data = pd.DataFrame()
vif_data["feature"] = X.columns
# calculating VIF for each feature
vif_data["VIF"] = [variance_inflation_factor(X.values, i)
for i in range(len(X.columns))]
vif_data
# + id="S-GfUiYUznwC"
inc_merge = inc_merge.reset_index()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="xl4rhEqd0Cot" outputId="933a9685-f1fc-438d-9ac6-b034d7ac4022"
dfinc = inc_merge[inc_merge['GNI per Capita (US$)'].notna()]
fig = px.scatter(dfinc, x = 'GNI per Capita (US$)', y = 'TFR', color = 'Year', hover_name = 'Country',
title = 'Relationship between GNI per Capita and TFR in Africa',
labels={
"TFR":"Total Fertilty Rate (TFR)"})
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="pTSb10OmSj0B" outputId="d733a169-83c9-4227-a7ce-a5547a254f3c"
import chart_studio.plotly as py
py.plot(fig, filename = 'income_tfr_scatterplot', auto_open=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 441} id="-UWtPaFk1ua0" outputId="9268e448-da1a-47cd-8eba-33bf41425cef"
#Single regression with income and TFR
X = dfinc[['GNI per Capita (US$)']]
y = dfinc['TFR']
X = sm.add_constant(X) # adding an intercept to model
model = sm.OLS(y, X).fit()
# Print out the statistics
model.summary()
# + colab={"base_uri": "https://localhost:8080/", "height": 542} id="Rjjg19y_3BMF" outputId="01b67013-a4c9-48b0-a02c-c1008e749517"
fig = px.scatter(dfinc, x="GNI per Capita (US$)", y="TFR", title = 'Relationship between GNI per Capita and TFR in Africa',
trendline="ols", hover_name ='Country')
fig.data[1].update(line_color='red')
fig.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} id="IJ8bCiThSudy" outputId="01e14569-6d10-45b4-8c84-7add3a88492f"
import chart_studio.plotly as py
py.plot(fig, filename = 'income_tfr_scatterplot_trendline', auto_open=True)
| Python Analysis/Visualisations/Code/Single_and_Multivariable_Linear_Regression_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import csv
dataset_dir = '/beegfs/work/AudioSet'
data_dir = os.path.join(dataset_dir, 'data')
audio_ext = '.flac'
video_ext = '.mp4'
# +
# Find out which files have not been downloaded from AudioSet
missing_files = {}
missing_audio_files = {}
missing_video_files = {}
for subset_name in os.listdir(data_dir):
if not os.path.isdir(os.path.join(data_dir, subset_name)):
continue
subset_path = os.path.join(dataset_dir, "{}.csv".format(subset_name))
subset_dir = os.path.join(data_dir, subset_name)
missing_files[subset_name] = []
missing_audio_files[subset_name] = []
missing_video_files[subset_name] = []
# Get the files that have been downloaded
local_subset_audio_files = set([os.path.splitext(fname)[0] for fname in os.listdir(os.path.join(subset_dir, 'audio'))])
local_subset_video_files = set([os.path.splitext(fname)[0] for fname in os.listdir(os.path.join(subset_dir, 'video'))])
    # Get all files from the subset csv files
with open(subset_path, 'r') as f:
subset_data = csv.reader(f)
for row_idx, row in enumerate(subset_data):
# Skip commented lines
if row[0][0] == '#':
continue
ytid, ts_start, ts_end = row[0], float(row[1]), float(row[2])
tms_start, tms_end = int(ts_start * 1000), int(ts_end * 1000)
media_filename = '{}_{}_{}'.format(ytid, tms_start, tms_end)
missing_audio = media_filename not in local_subset_audio_files
missing_video = media_filename not in local_subset_video_files
# Keep track of missing audio or video files
if missing_audio or missing_video:
missing_files[subset_name].append(row)
# Keep track of the audio and videos separately for comparison
if missing_audio:
missing_audio_files[subset_name].append(row)
if missing_video:
missing_video_files[subset_name].append(row)
# Write a new csv containing only the YouTube video segments with missing files
missing_subset_path = os.path.join(dataset_dir, "{}-missing.csv".format(subset_name))
with open(missing_subset_path, 'w') as f:
writer = csv.writer(f)
writer.writerows(missing_files[subset_name])
# -
# Get the number of YouTube videos with missing files for each subset
for subset, missing_file_list in missing_files.items():
print("{}: Missing files for {} YouTube videos".format(subset, len(missing_file_list)))
# Get the number of YouTube videos with audio but no video for each subset
for subset in missing_files.keys():
    audio_no_video = list(set(map(tuple, missing_video_files[subset])) - set(map(tuple, missing_audio_files[subset])))
    print("{}: {} YouTube videos with audio but no video".format(subset, len(audio_no_video)))
# +
# Get the number of YouTube videos with video but no audio for each subset
for subset in missing_files.keys():
    video_no_audio = list(set(map(tuple, missing_audio_files[subset])) - set(map(tuple, missing_video_files[subset])))
    print("{}: {} YouTube videos with video but no audio".format(subset, len(video_no_audio)))
# -
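# The set arithmetic above relies on rows being hashable; csv rows are lists, so they are mapped to tuples first. A minimal sketch with made-up segment rows:

```python
# Hypothetical rows: [ytid, start_ms, end_ms]; lists are unhashable, tuples are not
missing_audio = [['id1', '0', '10000'], ['id2', '0', '10000']]
missing_video = [['id2', '0', '10000'], ['id3', '0', '10000']]

audio_set = set(map(tuple, missing_audio))
video_set = set(map(tuple, missing_video))

# In missing_video but not missing_audio -> the audio exists, the video does not
audio_no_video = video_set - audio_set
print(audio_no_video)  # {('id3', '0', '10000')}
```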
| src/audiosetdl/notebooks/analyze_downloaded_files.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
words = ['cat', 'window', 'defenestrate']
for word in words:
print(word)
# +
words = ['cat', 'window', 'defenestrate']
for word in words[:]:
if len(word) > 6:
words.insert(0,word)
print(words)
# -
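# The slice `words[:]` iterates over a copy, so the inserts do not disturb the loop. When you only need the filtered result, a list comprehension avoids mutating while iterating altogether:

```python
words = ['cat', 'window', 'defenestrate']

# Build the filtered list directly instead of inserting into `words`
long_words = [w for w in words if len(w) > 6]
print(long_words)  # ['defenestrate']
```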
| 7. Loops.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit
# name: python38564bita6c660d366ad41288ccc0d1156e7cceb
# ---
# # Variable Assignment and Naming
#
# This section introduces the variable assignment and variable naming.
#
# ## Why Variable?
#
# You can use string literals and number literals to run some tasks. Number literals are integers (such as `5`) or floats (such as `17.5`). For example:
#
# + tags=[]
print('Hi')
# -
# put spaces before and after the / to make the computation more readable
17.5 / 5
# However, in many situations
#
# - you don't know the values when you write the program. You need to read the data from a file or from user input.
# - When you define a function that consumes inputs and generates outputs, you need to refer to the input values and output values.
# - Even if you have the values used in a computation, you should still give them meaningful names to make the code easier to understand. For example, it is unclear what `17.5` and `5` mean.
#
# You learned in middle school that algebra uses symbols to represent values, and that symbols are more powerful than raw numbers for solving problems. Variables in Python serve the same purpose.
# ## Variable and Assignment
#
# A variable is a name that represents a value stored in computer memory (RAM). You use an assignment statement to bind a value to a name -- also called variable declaration or variable definition:
#
# `variable = expression`
#
# In this statement:
#
# - `variable` is the name of the variable. The name must be on the left hand side (LHS) of the statement.
# - `=` is the **assignment operator**. It binds the LHS name to a value. Put a space before and after the operator for better code formatting.
# - `expression` is a value or an operation that produces a value. It is on the right hand side (RHS) of the statement.
#
# 
#
# Here are some variable declarations:
#
#
# + tags=[]
# declare variables using value literals
name = 'Alice'
answer = 42
course_name = 'Business Application Development'
total_score = 17.5
number_of_courses = 5
# declare a variable using an expression
gpa = total_score / number_of_courses
print(gpa)
# -
# As you can see from the output, an assignment doesn't generate any visible output. It just binds a value to a name. The expression `total_score / number_of_courses` performs a division, and the result is bound to the name `gpa`. It has the correct value of `3.5`.
#
# The variable name must be on the left hand side (LHS). The right hand side (RHS) can be a value, an expression or another variable. The following statement is invalid Python code and generates `SyntaxError` when you run it.
3.5 = gpa
# You cannot use a variable without first defining it in an assignment statement. If you do, you will get a `NameError` and your code will crash at that line.
x + 1
# ## Variable Names
#
# According to the [Two hard things blog](https://martinfowler.com/bliki/TwoHardThings.html):
# >There are only two hard things in Computer Science: cache invalidation and naming things.
# >
# >-- <NAME>
#
# It might be exaggerated, but naming is really one of the most important decisions in programming. You should think hard to give a variable the most meaningful name thus you and other people can understand the value behind it.
#
# Python has simple but strict rules for variable names:
#
# - A name must be started with a letter in the range of `a` through `z`, or `A` through `Z`, or an underscore character `_`.
# - The rest of the name can be any letters in the range of `a` through `z`, `A` through `Z`, 0 through 9, or an underscore character `_`.
# - The variable name is case sensitive. `score`, `scorE` and `Score` are all different names.
# - Python keywords such as `if`, `for` cannot be used as variable names because they have special meaning.
#
# `x`, `x9`, `_proxy`, `__all__`, `i18n`, `ohmygod`, `course_name` are all valid names because they obey the naming rules. However, `9x`, `42`, `@name`, `My**key`, `price$` are invalid names because they don't start with a valid letter or have invalid letters in the name.
#
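# Python can check these rules for you: `str.isidentifier()` tests the naming rules, and the `keyword` module flags reserved words. A small sketch (the helper name is our own):

```python
import keyword

def is_valid_variable_name(name):
    """True if `name` follows Python's naming rules and is not a reserved keyword."""
    return name.isidentifier() and not keyword.iskeyword(name)

print(is_valid_variable_name('course_name'))  # True
print(is_valid_variable_name('9x'))           # False: starts with a digit
print(is_valid_variable_name('for'))          # False: reserved keyword
```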
# ## Multi-word Names
#
# As you can see, names such as `studentname` and `roomnumber` are clumsy to read and write. Programmers use special terms to describe multi-word naming conventions:
#
# - `snake_case`: use an underscore to separate lower case words. It is recommended for naming multi-word variables and source code files in Python.
# - `kebab-case`: use a dash to separate lower case words.
# - `PascalCasing`: each word starts with an upper case letter.
# - `camelCasing`: only the first word starts with a lower case letter; the others start with an upper case letter.
#
# Python uses the underscore character `_` to separate words in names. The naming conventions are defined in [PEP 8 -- Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/)
#
# ## Magic Numbers
#
# A number literal such as `3.1` in an operation such as `3.1 * diameter` is called a **magic number** because it is hard to know what the number means. The Python style guide recommends that constants be
#
# > written in all capital letters with underscores separating words. Examples include MAX_OVERFLOW and TOTAL.
#
# You should avoid using number literals in your code. Whenever one appears, give it a name. For example,
#
INTEREST_RATE = 0.072
balance = 100
interest = balance * INTEREST_RATE
# Another benefit of defining constant variables is that you only need to change one place, and every use of the constant changes with it. For example, suppose the value of Pi is used in many places and you define it as `PI = 3.1`. If you later want to use `3.14`, you only need to change the definition to `PI = 3.14` in one place. This is called **single point of control**.
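# A short sketch of single point of control (the values below are made up): changing `PI` on one line updates every computation that uses it.

```python
PI = 3.14  # change the precision here, and every use below follows

diameter = 10
circumference = PI * diameter
area = PI * (diameter / 2) ** 2
print(circumference, area)
```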
| 2-basic-operations/variable-assignment-naming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Using a default CNN single model to test various cleaning steps and their impact on score.
#
# Controls:
# - CNN single model
# - maxlen: 65
# - min occurance vocab: 5
# - glove.6B.100D
# - epochs: 2
# - cv: 3
# - max features 20000
model_name = 'raw_LSTM'
# ## Import data
import os
import numpy as np
import pandas as pd
dir_path = os.path.realpath('..')
# +
path = 'data/raw/train.csv'
full_path = os.path.join(dir_path, path)
df_train = pd.read_csv(full_path, header=0, index_col=0)
print("Dataset has {} rows, {} columns.".format(*df_train.shape))
# +
path = 'data/raw/test.csv'
full_path = os.path.join(dir_path, path)
df_test = pd.read_csv(full_path, header=0, index_col=0)
print("Dataset has {} rows, {} columns.".format(*df_test.shape))
# -
# ## Text cleaning
# +
import string
import nltk
nltk.data.path.append("/Users/joaeechew/dev/nltk_data")
from nltk import word_tokenize
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from gensim.corpora.dictionary import Dictionary
from os import listdir
from collections import Counter
# -
def process_text(corpus, vocab, regex=r'[\w]+', digits=False, english_only=False, stop=False, lemmatize=False):
    """Takes a corpus as a list and applies basic preprocessing steps: word tokenization,
    removal of English stop words, and lemmatization. Returns the processed corpus and vocab."""
processed_corpus = []
english_words = set(nltk.corpus.words.words())
english_stopwords = set(stopwords.words('english'))
wordnet_lemmatizer = WordNetLemmatizer()
tokenizer = RegexpTokenizer(regex)
for row in corpus:
tokens = tokenizer.tokenize(row)
if digits:
tokens = [t for t in tokens if not t.isdigit()]
if english_only:
tokens = [t for t in tokens if t in english_words]
        if stop:
tokens = [t for t in tokens if not t in english_stopwords]
if lemmatize:
tokens = [wordnet_lemmatizer.lemmatize(t) for t in tokens]
vocab.update(tokens)
tokens = ' '.join(tokens)
if tokens == '':
tokens = 'cleaned'
processed_corpus.append(tokens)
return processed_corpus, vocab
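# The tokenizer above is driven by a regular expression, so a pattern can be previewed with `re.findall` before committing to it. Note that inside a character class `|` is a literal pipe, so the `r'[\w|!]+'` pattern used below keeps exclamation marks (and pipes) attached to tokens. A sketch on a made-up comment:

```python
import re

text = "You're so toxic!! (allegedly)"

print(re.findall(r'[\w]+', text))    # word characters only
print(re.findall(r'[\w|!]+', text))  # '!' survives inside tokens
```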
# fill NaN with string "unknown"
df_train.fillna('unknown',inplace=True)
df_test.fillna('unknown',inplace=True)
regex = r'[\w|!]+'
# +
# # %%time
# vocab = Counter()
# df_train.comment_text, vocab = process_text(df_train.comment_text, vocab,
# digits=False, english_only=False, stop=False, lemmatize=False)
# df_test.comment_text, vocab = process_text(df_test.comment_text, vocab,
# digits=False, english_only=False, stop=False, lemmatize=False)
# +
# print(vocab.most_common(100))
# # print(len(vocab))
# +
# # keep tokens with a min occurrence
# min_occurance = 5
# vocab = [k for k,c in vocab.items() if c >= min_occurance]
# print(len(vocab))
# +
path = 'data/processed/train_' + model_name + '.csv'
dir_path = os.path.realpath('..')
full_path = os.path.join(dir_path, path)
df_train.to_csv(full_path, header=True, index=True)
# +
path = 'data/processed/test_' + model_name + '.csv'
dir_path = os.path.realpath('..')
full_path = os.path.join(dir_path, path)
df_test.to_csv(full_path, header=True, index=True)
# -
# ## Train test split
from sklearn.model_selection import train_test_split
seed = 42
test_size = 0.2
target = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
X = df_train.drop(target, axis=1)
y = df_train[target]
corpus = 'comment_text'
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=test_size, random_state=seed)
# ## Pre-processing
import pickle
from numpy import asarray
from numpy import zeros
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
# +
# %%time
# prepare tokenizer
t = Tokenizer(num_words=20000)
t.fit_on_texts(df_train[corpus])
#define vocab size and max len
vocab_size = len(t.word_index) + 1
max_length = 65
print('Vocabulary size: %d' % vocab_size)
print('Maximum length: %d' % max_length)
# -
# %%time
# integer encode the documents
encoded_Xtrain = t.texts_to_sequences(Xtrain[corpus].astype(str))
encoded_Xtest = t.texts_to_sequences(Xtest[corpus].astype(str))
# +
# pad documents
padded_train = pad_sequences(encoded_Xtrain, maxlen=max_length, padding='post')
padded_test = pad_sequences(encoded_Xtest, maxlen=max_length, padding='post')
# -
# %%time
# load the whole embedding into memory
embeddings_index = dict()
f = open('/home/ec2-user/glove.6B.100d.txt', mode='rt', encoding='utf-8')
for line in f:
values = line.split()
word = values[0]
coefs = asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Loaded %s word vectors.' % len(embeddings_index))
# create a weight matrix for words in training docs
embedding_matrix = zeros((vocab_size, 100))
for word, i in t.word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
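# It may be worth checking how much of the vocabulary the pretrained vectors actually cover; rows left at zero correspond to out-of-vocabulary words. A self-contained sketch with a tiny hypothetical matrix (named `toy_matrix` so it does not clobber the real one):

```python
import numpy as np

# Hypothetical 5-word vocab (row 0 reserved for padding), 3-dim embeddings
toy_matrix = np.zeros((5, 3))
toy_matrix[1] = [0.1, 0.2, 0.3]
toy_matrix[2] = [0.4, 0.5, 0.6]

# A row with any nonzero entry received a pretrained vector
covered = int((np.abs(toy_matrix[1:]).sum(axis=1) > 0).sum())
print("{}/{} vocab words have pretrained vectors".format(covered, len(toy_matrix) - 1))
```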
# saving
with open('tokenizer.pickle', 'wb') as handle:
pickle.dump(t, handle, protocol=pickle.HIGHEST_PROTOCOL)
# ## Model fit
from sklearn.model_selection import GridSearchCV
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.pipeline import Pipeline
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Embedding
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
# +
# # Function to create model, required for KerasClassifier
# def create_model(optimizer='adam', vocab_size=vocab_size, max_length=max_length):
# model = Sequential()
# model.add(Embedding(vocab_size, 100, input_length=max_length))
# model.add(Conv1D(filters=32, kernel_size=8, activation='relu'))
# model.add(MaxPooling1D(pool_size=2))
# model.add(Flatten())
# model.add(Dense(10, activation='relu'))
# # model.add(Dense(1, activation='sigmoid'))
# model.add(Dense(6, activation='sigmoid')) #multi-label (k-hot encoding)
# # compile network
# model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# # summarize defined model
# model.summary()
# # plot_model(model, to_file='model.png', show_shapes=True)
# return model
# +
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation
# Function to create model, required for KerasClassifier
def create_model(optimizer='adam', vocab_size=vocab_size, max_length=max_length):
model = Sequential()
model.add(Embedding(vocab_size, 100, input_length=max_length))
model.add(Bidirectional(LSTM(50, return_sequences=True, dropout=0.1, recurrent_dropout=0.1)))
model.add(GlobalMaxPool1D())
model.add(Dense(50, activation="relu"))
model.add(Dropout(0.1))
model.add(Dense(6, activation='sigmoid')) #multi-label (k-hot encoding)
# compile network
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# summarize defined model
model.summary()
return model
# -
def save_model(model, model_path):
# serialize model to JSON
model_json = model.to_json()
with open(model_path + ".json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model.save_weights(model_path + ".h5")
print("Saved model to disk")
np.random.seed(seed)
model = KerasClassifier(build_fn=create_model, epochs=2, verbose=1)
# +
# %%time
# fit the model
target = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
# train the model
model.fit(padded_train, ytrain, validation_split=0.1)
trained_model = model.model
# save the model
model_path = os.path.join(dir_path, 'models', model_name)
save_model(trained_model, model_path)
# -
# ## Evaluation
from sklearn.metrics import log_loss
print(trained_model.evaluate(padded_test, ytest, verbose=1))
# +
# %%time
# log loss is computed per label column, then averaged
target = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
y_pred = trained_model.predict(padded_test, verbose=1)
hold_out_preds = pd.DataFrame(y_pred, index=ytest.index, columns=target)
losses = []
for label in target:
loss = log_loss(ytest[label], hold_out_preds[label])
losses.append(loss)
print("{} log loss is {} .".format(label, loss))
print("Combined log loss is {} .".format(np.mean(losses)))
# -
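# The per-label losses above come from scikit-learn's `log_loss`; the same quantity in plain NumPy (a sketch — the `eps` clipping guards against `log(0)`):

```python
import numpy as np

def column_log_loss(y_true, y_prob, eps=1e-15):
    """Binary cross-entropy for a single label column (what log_loss computes)."""
    y_true = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(y_prob, dtype=float), eps, 1 - eps)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

# a perfectly confident correct prediction costs ~0; a 50/50 guess costs log(2)
coin_flip_loss = column_log_loss([1, 0], [0.5, 0.5])
```

The combined score is then just the mean of the per-column losses, as in the cell above.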
# ## Submission
# +
# %%time
# integer encode and pad test df
encoded_submission = t.texts_to_sequences(df_test[corpus].astype(str))
padded_submission = pad_sequences(encoded_submission, maxlen=max_length, padding='post')
# Predict
target = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
y_pred_proba = trained_model.predict(padded_submission, verbose=1)
submission = pd.DataFrame(y_pred_proba, index=df_test.index, columns=target)
## Output submissions
path = 'data/submissions/' + model_name + '.csv'
dir_path = os.path.realpath('..')
full_path = os.path.join(dir_path, path)
submission.to_csv(full_path, header=True, index=True)
| notebooks/archives/12-jc-data-cleaning-experiments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# # matplotlib introduction
#
# Advice:
#
# - Start any matplotlib figure by defining `Figure` and `Axes` objects, e.g., using `plt.subplots()` (see [this blog post](http://pbpython.com/effective-matplotlib.html)).
# - Look at the matplotlib [gallery](https://matplotlib.org/gallery.html)! Pick the figure that is closest to what you want to achieve and reuse its code.
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# +
x = np.arange(100)
y = x**2
plt.plot(x, y)
# +
x = np.random.rand(100)
y = np.random.rand(100) * 5.
clr = np.random.rand(100) * 0.5
x2 = np.random.rand(100)
y2 = np.random.rand(100) * 10.
plt.scatter(x, y, c=clr)
plt.colorbar()
plt.scatter(x2, y2, c='red')
# +
x = np.random.normal(loc=2., scale=3., size=10000)
plt.hist(x, bins=20);
# -
# ## Figure elements
#
# <img src="figs/matplotlib_figure_components.png" width="100%">
#
# [figure source](http://pbpython.com/effective-matplotlib.html)
# +
fig, ax = plt.subplots(figsize=(6, 6))
x = np.random.rand(100)
y = np.random.rand(100)
ax.scatter(x, y)
ax.set(title="random samples", xlabel="x", ylabel="y",
aspect=1);
# +
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(12, 6))
ax1, ax2 = axes
x = np.random.rand(100)
y = np.random.rand(100)
ax1.scatter(x, y)
ax1.set(title="random samples", xlabel="x", ylabel="y",
aspect=1);
x_samples = np.random.normal(loc=2., scale=3., size=10000)
ax2.hist(x_samples, bins=20, color='green')
ax2.set(title="normal dist", xlabel="x");
fig.tight_layout();
fig.savefig("fig_random.png", transparent=False, dpi=300)
# -
| notebooks/matplotlib_intro.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# running simulations with different mutation probabilities while keeping other parameters fixed
# -
import matplotlib.pyplot as plt
import csv
plt.rcParams['figure.figsize'] = [20, 10]
# +
# using strategy #1 (AL-SFT) and t.c.r. of 0.6
# -
# Each file dmN.txt holds one run; column 0 is time in days, column 12 is the
# tracked metric. Read every run and plot it, converting days to years.
# Line colours: b = m.p. 0.005 (default), g = 0.01, r = 0.05, c = 0.1, m = 0.5
colors = ['b', 'g', 'r', 'c', 'm']
for i, color in enumerate(colors):
    x, y = [], []
    with open('dm{}.txt'.format(i), 'r') as csvfile:
        plots = csv.reader(csvfile, delimiter='\t')
        for row in plots:
            x.append(float(row[0]) / 365)
            y.append(float(row[12]))
    plt.plot(x, y, color)
plt.show()
| Archives/190523/diffMutationP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 2.2 Conditional Frequency Distributions #
from nltk.book import *
# ## Counting Words by Genre ##
# **FreqDist() takes a simple list as input, ConditionalFreqDist() takes a list of pairs.**
# +
from nltk.corpus import brown
import nltk
cfd = nltk.ConditionalFreqDist(
(genre, word)
for genre in brown.categories()
for word in brown.words(categories=genre))
# -
genre_word = [(genre, word)
for genre in ['news', 'romance']
for word in brown.words(categories=genre)]
len(genre_word)
genre_word[:4]
genre_word[-4:]
cfd = nltk.ConditionalFreqDist(genre_word)
cfd.conditions()
cfd['news']
#<FreqDist with 100554 outcomes>
cfd['romance']
#<FreqDist with 70022 outcomes>
list(cfd['romance'])
#[',', '.', 'the', 'and', 'to', 'a', 'of', '``', "''", 'was', 'I', 'in', 'he', 'had',
#'?', 'her', 'that', 'it', 'his', 'she', 'with', 'you', 'for', 'at', 'He', 'on', 'him',
#'said', '!', '--', 'be', 'as', ';', 'have', 'but', 'not', 'would', 'She', 'The', ...]
cfd['romance']['could']
# ## Plotting and Tabulating Distributions ##
# It exploits the fact that the filename for each speech (for example,
# 1865-Lincoln.txt) contains the year as its first four characters. This code generates
# the pair ('america', '1865') for every instance of a word whose lowercased form starts
# with "america" (such as "Americans") in the file 1865-Lincoln.txt.
from nltk.corpus import inaugural
cfd = nltk.ConditionalFreqDist(
(target, fileid[:4])
for fileid in inaugural.fileids()
for w in inaugural.words(fileid)
for target in ['america', 'citizen']
if w.lower().startswith(target))
# This time, the condition is the name of the language, and
# the counts being plotted are derived from word lengths. It exploits the fact that the
# filename for each language is the language name followed by '-Latin1' (the character
# encoding).
from nltk.corpus import udhr
languages = ['Chickasaw', 'English', 'German_Deutsch','Greenlandic_Inuktikut', 'Hungarian_Magyar', 'Ibibio_Efik']
cfd = nltk.ConditionalFreqDist(
(lang, len(word))
for lang in languages
for word in udhr.words(lang + '-Latin1'))
# For example,
# we can tabulate the cumulative frequency data just for two languages, and for
# words less than 10 characters long, as shown next. We interpret the last cell on the top
# row to mean that 1,638 words of the English text have nine or fewer letters.
cfd.tabulate(conditions=['English', 'German_Deutsch'],samples=range(10), cumulative=True)
#Exercise: count how often each day of the week appears in the news and romance genres
from nltk.corpus import brown
days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
cfd = nltk.ConditionalFreqDist(
    (genre, word)
    for genre in ['news', 'romance']
    for word in brown.words(categories=genre)
    if word in days)
cfd.tabulate(samples=days)
# ## Generating Random Text with Bigrams ##
sent = ['In', 'the', 'beginning', 'God', 'created', 'the', 'heaven','and', 'the', 'earth', '.']
list(nltk.bigrams(sent))
def generate_model(cfdist, word, num=15):
for i in range(num):
print(word, end=' ')
word = cfdist[word].max()
text = nltk.corpus.genesis.words('english-kjv.txt')
bigrams = nltk.bigrams(text)
cfd = nltk.ConditionalFreqDist(bigrams)
cfd['living']
generate_model(cfd, 'living')
# |Example|Description|
# |--------|---------|
# |cfdist = ConditionalFreqDist(pairs)|create a conditional frequency distribution from a list of pairs|
# |cfdist.conditions()|the conditions|
# |cfdist[condition]|the frequency distribution for this condition|
# |cfdist[condition][sample]|frequency for the given sample for this condition|
# |cfdist.tabulate()|tabulate the conditional frequency distribution|
# |cfdist.tabulate(samples, conditions)|tabulation limited to the specified samples and conditions|
# |cfdist.plot()|graphical plot of the conditional frequency distribution|
# |cfdist.plot(samples, conditions)|graphical plot limited to the specified samples and conditions|
# |cfdist1 < cfdist2|test if samples in cfdist1 occur less frequently than in cfdist2|
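# All the operations in the table rest on one idea: a conditional frequency distribution maps each condition to its own frequency distribution. A minimal pure-Python equivalent of the pairs-based constructor (a sketch, not NLTK's actual implementation):

```python
from collections import Counter, defaultdict

def conditional_freq_dist(pairs):
    """Build {condition: Counter(samples)} from an iterable of (condition, sample) pairs."""
    cfd = defaultdict(Counter)
    for condition, sample in pairs:
        cfd[condition][sample] += 1
    return cfd

cfd_demo = conditional_freq_dist([('news', 'the'), ('news', 'the'), ('romance', 'love')])
# cfd_demo['news']['the'] is 2; the conditions are 'news' and 'romance'
```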
# ## More Python: Reusing Code ##
from __future__ import division
def lexical_diversity(text):
return len(text) / len(set(text))
def lexical_diversity(my_text_data):
word_count = len(my_text_data)
vocab_size = len(set(my_text_data))
diversity_score = vocab_size / word_count
return diversity_score
from nltk.corpus import genesis
kjv = genesis.words('english-kjv.txt')
lexical_diversity(kjv)
# +
def plural(word):
if word.endswith('y'):
return word[:-1] + 'ies'
elif word[-1] in 'sx' or word[-2:] in ['sh', 'ch']:
return word + 'es'
elif word.endswith('an'):
return word[:-2] + 'en'
else:
return word + 's'
plural('fairy')
# -
plural('woman')
# ## Lexical Resources ##
def unusual_words(text):
text_vocab = set(w.lower() for w in text if w.isalpha())
english_vocab = set(w.lower() for w in nltk.corpus.words.words())
unusual = text_vocab - english_vocab
return sorted(unusual)
unusual_words(nltk.corpus.gutenberg.words('austen-sense.txt'))
unusual_words(nltk.corpus.nps_chat.words())
#['aaaaaaaaaaaaaaaaa', 'aaahhhh', 'abortions', 'abou', 'abourted', 'abs', 'ack',
#'acros', 'actualy', 'adams', 'adds', 'adduser', 'adjusts', 'adoted', 'adreniline',
#'ads', 'adults', 'afe', 'affairs', 'affari', 'affects', 'afk', 'agaibn', 'ages', ...]
#There is also a corpus of stopwords, that is, high-frequency words like the, to and also that we sometimes want to filter out of a document
#before further processing. Stopwords usually have little lexical content, and their presence in a text fails to distinguish it from other texts.
from nltk.corpus import stopwords
stopwords.words('english')
#['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', 'your', 'yours',
#'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', 'her', 'hers',
#'herself', 'it', 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves',
#'what', 'which', 'who', 'whom', 'this', 'that', 'these', 'those', 'am', 'is', 'are',
#'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does',
#'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until',
#'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into',
#'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down',
#'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here',
#'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more',
#'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so',
#'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', 'should', 'now']
def content_fraction(text):
stopwords = nltk.corpus.stopwords.words('english')
content = [w for w in text if w.lower() not in stopwords]
return len(content) / len(text)
content_fraction(nltk.corpus.reuters.words())
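# The same computation with a hard-coded toy stopword list (hypothetical tokens, no corpus download needed):

```python
def content_fraction_demo(tokens, stopwords=('the', 'a', 'of', 'to')):
    """Fraction of tokens that are not stopwords."""
    content = [w for w in tokens if w.lower() not in stopwords]
    return len(content) / len(tokens)

# 'The' and 'the' are filtered out, leaving 4 content words out of 6
frac = content_fraction_demo(['The', 'cat', 'sat', 'on', 'the', 'mat'])
```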
# A wordlist is useful for solving word puzzles, such as the one in 4.3. Our program iterates through every word and, for each one, checks whether it meets the conditions. It is easy to check the obligatory letter and length constraints (we only look for words with six or more letters here). It is trickier to check that candidate solutions use only combinations of the supplied letters, especially since some of the supplied letters appear twice (here, the letter v). The FreqDist comparison method lets us check that the frequency of each letter in the candidate word is less than or equal to the frequency of the corresponding letter in the puzzle.
puzzle_letters = nltk.FreqDist('egivrvonl')
obligatory = 'r'
wordlist = nltk.corpus.words.words()
[w for w in wordlist if len(w) >= 6 and obligatory in w and nltk.FreqDist(w) <= puzzle_letters]
#['glover', 'gorlin', 'govern', 'grovel', 'ignore', 'involver', 'lienor',
#'linger', 'longer', 'lovering', 'noiler', 'overling', 'region', 'renvoi',
#'revolving', 'ringle', 'roving', 'violer', 'virole']
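# The `FreqDist` comparison above is multiset containment; `collections.Counter` expresses the same check, since subtracting the puzzle's letter counts leaves nothing when the word fits (a sketch):

```python
from collections import Counter

def fits_puzzle(word, puzzle='egivrvonl', obligatory='r', min_len=6):
    """True if word is long enough, contains the obligatory letter,
    and uses only letters available in the puzzle (with multiplicity)."""
    return (len(word) >= min_len
            and obligatory in word
            and not (Counter(word) - Counter(puzzle)))

# 'revolving' fits because the puzzle supplies two v's; 'banana' lacks the obligatory 'r'
```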
# One more wordlist corpus is the Names corpus, containing 8,000 first names categorized by gender. The male and female names are stored in separate files. Let's find names which appear in both files, i.e. names that are ambiguous for gender:
names = nltk.corpus.names
names.fileids()
#['female.txt', 'male.txt']
male_names = names.words('male.txt')
female_names = names.words('female.txt')
[w for w in male_names if w in female_names]
#['Abbey', 'Abbie', 'Abby', 'Addie', 'Adrian', 'Adrien', 'Ajay', 'Alex', 'Alexis',
#'Alfie', 'Ali', 'Alix', 'Allie', 'Allyn', 'Andie', 'Andrea', 'Andy', 'Angel',
#'Angie', 'Ariel', 'Ashley', 'Aubrey', 'Augustine', 'Austin', 'Averil', ...]
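# The list comprehension above rescans `female_names` once per male name; converting both lists to sets gives the same ambiguous-name list in roughly linear time (toy lists for illustration):

```python
male = ['Alex', 'Brian', 'Casey', 'Drew']
female = ['Alex', 'Casey', 'Dana']

# set intersection keeps only names present in both lists
ambiguous = sorted(set(male) & set(female))
```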
names = nltk.corpus.names
cfd = nltk.ConditionalFreqDist(
    (fileid, name[-1])
    for fileid in names.fileids()
    for name in names.words(fileid))
cfd.plot()
[(fileid, name[-1]) for fileid in names.fileids() for name in names.words(fileid)]
#[('female.txt', 'l'),('female.txt', 'l'),('female.txt', 'e'), ('female.txt', 'y'), ('female.txt', 'i'), ('female.txt', 'e'), ('female.txt', 'y'), ('female.txt', 'l'),
# ('female.txt', 'l'),('female.txt', 'e'), ('female.txt', 'a'), ('female.txt', 'a'),..]
# ### A Pronouncing Dictionary ###
# A slightly richer kind of lexical resource is a table (or spreadsheet), containing a word plus some properties in each row. NLTK includes the CMU Pronouncing Dictionary for US English, which was designed for use by speech synthesizers.
entries = nltk.corpus.cmudict.entries()
len(entries)
for entry in entries[42371:42379]:
print(entry)
# Each time through the loop, word is assigned the first part of the entry, and pron is assigned the second part of the entry:
for word, pron in entries:
if len(pron) == 3:
ph1, ph2, ph3 = pron
if ph1 == 'P' and ph3 == 'T':
print(word, ph2, end=' ')
# This program finds all words whose pronunciation ends with a syllable sounding like nicks. You could use this method to find rhyming words.
syllable = ['N', 'IH0', 'K', 'S']
[word for word, pron in entries if pron[-4:] == syllable]
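# The same suffix match on a couple of hand-written (word, pron) pairs, so the mechanics are visible without loading the CMU dictionary (toy entries, not real dictionary rows):

```python
toy_entries = [
    ('phoenix', ['F', 'IY1', 'N', 'IH0', 'K', 'S']),
    ('atlantic', ['AE0', 'T', 'L', 'AE1', 'N', 'T', 'IH0', 'K']),
]
syllable = ['N', 'IH0', 'K', 'S']

# keep words whose last four phones match the target syllable
rhymes = [word for word, pron in toy_entries if pron[-4:] == syllable]
```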
| NLP with Python/Chapter02/Chapter 2 Part 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Introduction to Data Science
# + [markdown] slideshow={"slide_type": "slide"}
# ## What is Data Science?
#
# 
#
# Source: http://drewconway.com/zia/2013/3/26/the-data-science-venn-diagram
# + [markdown] slideshow={"slide_type": "slide"}
# ## [What is a Data Scientist?](https://www.quora.com/What-is-a-data-scientist-3)
#
# Data Scientists are people with some mix of **coding and statistical skills** who work on **making data useful** in various ways.
# + [markdown] slideshow={"slide_type": "fragment"}
# **Type A Data Scientist**: The A is for Analysis. This type is primarily concerned with making sense of data or working with it in a fairly static way. The Type A Data Scientist is very similar to a statistician (and may be one) but knows all the practical details of working with data that aren't taught in the statistics curriculum: data cleaning, methods for dealing with very large data sets, visualization, deep knowledge of a particular domain, writing well about data, and so on.
# + [markdown] slideshow={"slide_type": "fragment"}
# **Type B Data Scientist**: The B is for Building. Type B Data Scientists share some statistical background with Type A, but they are also very strong coders and may be trained software engineers. The Type B Data Scientist is mainly interested in using data "in production." They build models which interact with users, often serving recommendations (products, people you may know, ads, movies, search results).
# + [markdown] slideshow={"slide_type": "fragment"}
# We'll mainly be focusing on Type A in this course.
# + [markdown] slideshow={"slide_type": "slide"}
# ## What is a Data Scientist (continued)?
#
# <blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician.</p>— <NAME> (@josh_wills) <a href="https://twitter.com/josh_wills/status/198093512149958656">May 3, 2012</a></blockquote>
# <script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
#
# Source: [Tweet](https://twitter.com/josh_wills/status/198093512149958656) | [<NAME>](https://twitter.com/josh_wills?lang=en) - Data Scientist and Apache Crunch committer
#
# Josh is also known for pithy data science quotes, such as: “I turn data into awesome”.
# + [markdown] slideshow={"slide_type": "slide"}
# ## What is a Data Scientist (continued)?
#
# <blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr"><a href="https://twitter.com/josh_wills">@josh_wills</a> <a href="https://twitter.com/KirkDBorne">@KirkDBorne</a> I like the formulation: person who is worse at stats that a statistician and worse at engineering that an engineer.</p>— <NAME> (@robanhk) <a href="https://twitter.com/robanhk/status/505355606447120384">August 29, 2014</a></blockquote>
# <script async src="//platform.twitter.com/widgets.js" charset="utf-8"></script>
# + [markdown] slideshow={"slide_type": "slide"}
# ## [<NAME> advice on becoming a Data Scientist](https://gist.github.com/hadley/820f09ded347c62c2864)
#
# ### Statistical knowledge
#
# > I think you need some knowledge of specific statistical/machine learning techniques, but a deep theoretical understanding is not that important. You need to understand the strengths and weaknesses of each technique, but you don't need a deep theoretical understanding. The vast majority of data science problems can be solved by a creative assembly of off-the-shelf techniques, and don't require new theory.
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Programming Skills
#
# > You need to be fluent with either R or python. There are other options, but none of them have the community that R and python have, which means you'll need to spend a lot of time reinventing tools that already exist elsewhere. Obviously, I prefer R, and unlike what some people claim it is a well founded programming language that is well tailored for its domain.
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Domain knowledge
#
# > This obviously depends on the domain, but as a data scientist should be able to contribute meaningfully to any project, even if you're not intimately familiar with the specifics. I think this means you should be generally well read (e.g. at the level of New Scientist for the sciences) and an able communicator. A good data scientist will help the real domain experts refine and frame their questions in a helpful way. Unfortunately I don't know of any good resources for learning how to ask questions.
#
# + [markdown] slideshow={"slide_type": "slide"}
# ## Data Science Workflow
#
# 
#
# Source: [General Assembly's Data Science 2.0 Curriculum](https://github.com/generalassembly-studio/ds-curriculum)
| notebooks/intro-data-science.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
# +
dataset_name = 'wine'
folder_path = 'dataset/'
file_format='.csv'
# -
data_path = folder_path + dataset_name + file_format
data = pd.read_csv(data_path,header=None)
data.head()
minmax = MinMaxScaler()
normalized_data = minmax.fit_transform(data)
normalized_df = pd.DataFrame(normalized_data, columns = None)
# +
postfix = '_normalized'
normal_data_path = 'normalized_dataset/'
norm_data_file_name = normal_data_path + dataset_name + file_format # + postfix
normalized_df.to_csv(norm_data_file_name, header= None , index= False )
# -
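# `MinMaxScaler` applies x' = (x - min) / (max - min) independently to each column; a pure-Python check of the formula on a single column:

```python
def min_max_scale(column):
    """Rescale a 1-D sequence to [0, 1] the way MinMaxScaler does per column."""
    lo, hi = min(column), max(column)
    return [(x - lo) / (hi - lo) for x in column]

# min and max map to 0 and 1; everything else lands proportionally in between
scaled = min_max_scale([1.0, 2.0, 3.0])
```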
| datasets/Data Normalization/Normalizer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="EEIdmbWAJoe2" outputId="696974d8-a7bd-4249-cc38-91ff2b21cf6d"
import os
# Find the latest version of Spark from http://www-us.apache.org/dist/spark/ and enter it as the spark version
# For example:
# spark_version = 'spark-3.0.0'
spark_version = 'spark-3.1.1'
os.environ['SPARK_VERSION']=spark_version
# Install Spark and Java
# !apt-get update
# !apt-get install openjdk-11-jdk-headless -qq > /dev/null
# !wget -q http://www-us.apache.org/dist/spark/$SPARK_VERSION/$SPARK_VERSION-bin-hadoop2.7.tgz
# !tar xf $SPARK_VERSION-bin-hadoop2.7.tgz
# !pip install -q findspark
# Set Environment Variables
import os
os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-11-openjdk-amd64"
os.environ["SPARK_HOME"] = f"/content/{spark_version}-bin-hadoop2.7"
# Start a SparkSession
import findspark
findspark.init()
# + colab={"base_uri": "https://localhost:8080/"} id="Hk7jacr0Jxt7" outputId="17011b59-0000-4c66-c7a9-fc756886beeb"
# Download the Postgres driver that will allow Spark to interact with Postgres.
# !wget https://jdbc.postgresql.org/download/postgresql-42.2.16.jar
# + id="KY41iMu2hUEA"
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("BigData-Challenge").config("spark.driver.extraClassPath","/content/postgresql-42.2.16.jar").getOrCreate()
# + [markdown] id="5jIdobiwZYgF"
# ## Load data
# + id="yI7pSInOJyrp"
from pyspark import SparkFiles
url = "https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Camera_v1_00.tsv.gz"
spark.sparkContext.addFile(url)
df = spark.read.option("encoding", "UTF-8").csv(SparkFiles.get("amazon_reviews_us_Camera_v1_00.tsv.gz"), sep="\t", header=True, inferSchema=True)
# + id="Y-GTwQCaJ-cl"
from pyspark.sql.functions import to_date
# Read in the Review dataset as a DataFrame
vine_reviews_clean_df = df.dropna()
# + colab={"base_uri": "https://localhost:8080/"} id="DlW5qa5rKECS" outputId="ba2a1019-fc4b-4eba-b4bb-d7695a37b307"
# Create the vine_table DataFrame
vine_df = vine_reviews_clean_df.select(["review_id","star_rating","helpful_votes","total_votes","vine","verified_purchase"])
vine_df.show()
# + [markdown] id="sNhWF0rxZexE"
# ## Filter data
# + colab={"base_uri": "https://localhost:8080/"} id="42z4wT08KHUK" outputId="f53117da-8d97-42e1-c842-1f20f5aec91d"
# Filtered for at least 20 total votes
vine_filtered_votes_df = vine_df.filter("total_votes>=20")
vine_filtered_votes_df.show()
# + colab={"base_uri": "https://localhost:8080/"} id="m2KxrKZgvc5v" outputId="24d3b2e1-5356-4cab-c046-590ad1ccc1d6"
# Filtered for at least 20 total votes and at least 50% helpful votes
vine_helpful_by_total_df = vine_filtered_votes_df.filter("(helpful_votes/total_votes)>=0.5")
vine_helpful_by_total_df.show()
# + colab={"base_uri": "https://localhost:8080/"} id="mnJ5jdi4wn7i" outputId="13fa8a0f-0798-4e23-f3ca-8c59c7eddf34"
# Filtered for paid reviews, more than 50% helpful
vine_paid_helpful_df = vine_helpful_by_total_df.filter("vine == 'Y'")
vine_paid_helpful_df.show()
# + colab={"base_uri": "https://localhost:8080/"} id="30JBDGQ7x-wG" outputId="9c189f3d-8fb2-4936-dbab-38a6cea1713c"
# Filtered for non-paid reviews, more than 50% helpful
vine_unpaid_helpful_df = vine_helpful_by_total_df.filter("vine == 'N'")
vine_unpaid_helpful_df.show()
# + [markdown] id="N1YXf-ldZmNP"
# ## Calculations
# + colab={"base_uri": "https://localhost:8080/"} id="4vBEhkvKyH4f" outputId="9950a8e1-b078-417f-9651-dc4245f8e43b"
# Total number of reviews after filtering
total_reviews_ct = vine_helpful_by_total_df.count()
total_reviews_ct
# + colab={"base_uri": "https://localhost:8080/"} id="wCv4qZpRzCq1" outputId="58e48a16-7a25-42b2-f60c-04096faf30f0"
# Total paid reviews
total_paid_reviews_df = vine_helpful_by_total_df.filter("vine =='Y'")
total_paid_reviews_ct = total_paid_reviews_df.count()
total_paid_reviews_ct
# + colab={"base_uri": "https://localhost:8080/"} id="PQoLRnqN40Wl" outputId="271f9e91-17c8-4c3a-a961-f36051db1145"
# Total unpaid reviews
total_unpaid_reviews_df = vine_helpful_by_total_df.filter("vine =='N'")
total_unpaid_reviews_ct = total_unpaid_reviews_df.count()
total_unpaid_reviews_ct
# + colab={"base_uri": "https://localhost:8080/"} id="3Sjq9Pklx602" outputId="7f63a078-6651-45b4-878d-68e0f006ab77"
# Total five-star reviews
total_five_star_reviews_df = vine_helpful_by_total_df.filter("star_rating == 5")
total_five_star_reviews_ct = total_five_star_reviews_df.count()
total_five_star_reviews_ct
# + colab={"base_uri": "https://localhost:8080/"} id="54X61MKl5JqI" outputId="72584766-cfd5-40dc-e2c5-f5e84d1c8383"
# Total five-star paid reviews
five_star_paid_reviews_df = total_paid_reviews_df.filter("star_rating == 5")
five_star_paid_reviews_ct = five_star_paid_reviews_df.count()
five_star_paid_reviews_ct
# + colab={"base_uri": "https://localhost:8080/"} id="dOfN43YF6C9K" outputId="f41063a9-222a-4f1a-ab49-cf7ef44f3af1"
# Total five-star unpaid reviews
five_star_unpaid_reviews_df = total_unpaid_reviews_df.filter("star_rating == 5")
five_star_unpaid_reviews_ct = five_star_unpaid_reviews_df.count()
five_star_unpaid_reviews_ct
# + colab={"base_uri": "https://localhost:8080/"} id="m6RbVDbOxy6Y" outputId="10d5c100-b9b8-46d5-ab62-c3eb63d3f107"
# Five-star paid reviews as percent of total paid reviews
paid_five_star_per_total_paid = (five_star_paid_reviews_ct/total_paid_reviews_ct)*100
round(paid_five_star_per_total_paid, 3)
# + colab={"base_uri": "https://localhost:8080/"} id="ALVyopOs8am0" outputId="55bbd402-c368-471d-b0e5-e318c7d85449"
# Paid reviews as percent of total five-star reviews
paid_five_star_per_five_star_total = (five_star_paid_reviews_ct/total_five_star_reviews_ct)*100
round(paid_five_star_per_five_star_total, 3)
# + colab={"base_uri": "https://localhost:8080/"} id="Y4W9XzCX8iWf" outputId="e4cbf3e7-330d-4347-c998-8a140924f812"
# Five-star unpaid reviews as percent of total unpaid reviews
unpaid_five_star_per_total_unpaid = (five_star_unpaid_reviews_ct/total_unpaid_reviews_ct)*100
round(unpaid_five_star_per_total_unpaid, 3)
# + colab={"base_uri": "https://localhost:8080/"} id="9dnURAkb8qOe" outputId="e839f87b-e204-4c65-ba32-fe70559ae0f7"
# Five-star unpaid reviews as percent of total five-star reviews
unpaid_five_star_per_five_star_total = (five_star_unpaid_reviews_ct/total_five_star_reviews_ct)*100
round(unpaid_five_star_per_five_star_total, 3)
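# The four percentage cells above repeat the same arithmetic; a small helper captures it (the counts shown here are hypothetical, not the notebook's actual results):

```python
def pct(part, total, ndigits=3):
    """part as a percentage of total, rounded like the cells above."""
    return round(part / total * 100, ndigits)

# e.g. with 48 five-star reviews out of 94 paid reviews
share = pct(48, 94)
```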
# + id="6JEGB1JHFqm1"
| Vine_Review_Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas
buyclicksDF = pandas.read_csv('buy-clicks.csv')
buyclicksDF
buyclicksDF.shape
buyclicksDF[['price', 'userId']].head(5)
buyclicksDF[buyclicksDF['price'] < 3].head(5)
buyclicksDF['price'].sum()
buyclicksDF['price'].mean()
adclicksDF = pandas.read_csv('ad-clicks.csv')
adclicksDF.head(5)
mergeDF = adclicksDF.merge(buyclicksDF, on='userId')
mergeDF.head(5)
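# `merge(on='userId')` defaults to an inner join: only userIds present in both frames survive. The same idea in plain Python with toy rows (hypothetical data, not the CSV contents):

```python
ad_clicks = [{'userId': 1, 'adId': 'a'}, {'userId': 2, 'adId': 'b'}]
buy_clicks = [{'userId': 2, 'price': 3.0}, {'userId': 3, 'price': 1.0}]

# index the right-hand rows by key, then keep only left rows with a match
buys_by_user = {row['userId']: row for row in buy_clicks}
merged = [{**ad, **buys_by_user[ad['userId']]}
          for ad in ad_clicks if ad['userId'] in buys_by_user]
# only userId 2 appears on both sides
```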
| Coursera-lectures/BD_UCSD/big-data-3/notebooks/Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Deep Kung-Fu with advantage actor-critic
#
# In this notebook you'll build a deep reinforcement learning agent for atari [KungFuMaster](https://gym.openai.com/envs/KungFuMaster-v0/) and train it with advantage actor-critic.
#
# 
# +
from __future__ import print_function, division
from IPython.core import display
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
#If you are running on a server, launch xvfb to record game videos
#Please make sure you have xvfb installed
import os
if not os.environ.get("DISPLAY"):
# !bash ../xvfb start
# %env DISPLAY=:1
# -
# For starters, let's take a look at the game itself:
# * Image resized to 42x42 and grayscale to run faster
# * Rewards divided by 100 'cuz they are all divisible by 100
# * Agent sees last 4 frames of game to account for object velocity
# +
import gym
from atari_util import PreprocessAtari
def make_env():
env = gym.make("KungFuMasterDeterministic-v0")
env = PreprocessAtari(env, height=42, width=42,
crop = lambda img: img[60:-30, 5:],
dim_order = 'tensorflow',
color=False, n_frames=4,
reward_scale = 0.01)
return env
env = make_env()
obs_shape = env.observation_space.shape
n_actions = env.action_space.n
print("Observation shape:", obs_shape)
print("Num actions:", n_actions)
print("Action names:", env.env.env.get_action_meanings())
# +
s = env.reset()
for _ in range(100):
s, _, _, _ = env.step(env.action_space.sample())
plt.title('Game image')
plt.imshow(env.render('rgb_array'))
plt.show()
plt.title('Agent observation (4-frame buffer)')
plt.imshow(s.transpose([0,2,1]).reshape([42,-1]))
plt.show()
# -
# ### Build an agent
#
# We now have to build an agent for actor-critic training - a convolutional neural network that converts states into action probabilities $\pi$ and state values $V$.
#
# Your assignment here is to build and apply a neural network - with any framework you want.
#
# For starters, we want you to implement this architecture:
# 
#
# After you get above 50 points, we encourage you to experiment with model architecture to score even better.
import tensorflow as tf
tf.reset_default_graph()
sess = tf.InteractiveSession()
# +
from keras.layers import Conv2D, Dense, Flatten
class Agent:
def __init__(self, name, state_shape, n_actions, reuse=False):
"""A simple actor-critic agent"""
with tf.variable_scope(name, reuse=reuse):
            # Prepare neural network architecture
            # One possible architecture (an assumption; the reference diagram
            # suggests a small conv trunk with separate policy and value heads):
            self.conv1 = Conv2D(32, (3, 3), strides=(2, 2), activation='relu')
            self.conv2 = Conv2D(32, (3, 3), strides=(2, 2), activation='relu')
            self.flatten = Flatten()
            self.hid = Dense(128, activation='relu')
            self.logits_head = Dense(n_actions)
            self.value_head = Dense(1)
        # prepare a graph for agent step
        self.state_t = tf.placeholder('float32', [None,] + list(state_shape))
        self.agent_outputs = self.symbolic_step(self.state_t)
    def symbolic_step(self, state_t):
        """Takes a batch of states, returns policy logits and state values (tf tensors)"""
        # Apply neural network: shared trunk, then separate policy and value heads
        nn = self.flatten(self.conv2(self.conv1(state_t)))
        nn = self.hid(nn)
        logits = self.logits_head(nn)
        state_value = self.value_head(nn)[:, 0]  # drop the trailing singleton dimension
assert tf.is_numeric_tensor(state_value) and state_value.shape.ndims == 1, \
"please return 1D tf tensor of state values [you got %s]" % repr(state_value)
assert tf.is_numeric_tensor(logits) and logits.shape.ndims == 2, \
"please return 2d tf tensor of logits [you got %s]" % repr(logits)
# hint: if you triggered state_values assert with your shape being [None, 1],
# just select [:, 0]-th element of state values as new state values
return (logits, state_value)
def step(self, state_t):
"""Same as symbolic step except it operates on numpy arrays"""
sess = tf.get_default_session()
return sess.run(self.agent_outputs, {self.state_t: state_t})
def sample_actions(self, agent_outputs):
"""pick actions given numeric agent outputs (np arrays)"""
logits, state_values = agent_outputs
# subtract the max logit for numerical stability before exponentiating
logits = logits - logits.max(axis=-1, keepdims=True)
policy = np.exp(logits) / np.sum(np.exp(logits), axis=-1, keepdims=True)
return np.array([np.random.choice(len(p), p=p) for p in policy])
# -
agent = Agent("agent", obs_shape, n_actions)
sess.run(tf.global_variables_initializer())
state = [env.reset()]
logits, value = agent.step(state)
print("action logits:\n", logits)
print("state values:\n", value)
# ### Let's play!
# Let's build a function that measures agent's average reward.
def evaluate(agent, env, n_games=1):
"""Plays an a game from start till done, returns per-game rewards """
game_rewards = []
for _ in range(n_games):
state = env.reset()
total_reward = 0
while True:
action = agent.sample_actions(agent.step([state]))[0]
state, reward, done, info = env.step(action)
total_reward += reward
if done: break
game_rewards.append(total_reward)
return game_rewards
env_monitor = gym.wrappers.Monitor(env, directory="kungfu_videos", force=True)
rw = evaluate(agent, env_monitor, n_games=3,)
env_monitor.close()
print (rw)
# +
#show video
from IPython.display import HTML
import os
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./kungfu_videos/")))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./kungfu_videos/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
# -
# ### Training on parallel games
# 
#
# To make actor-critic training more stable, we shall play several games in parallel. This means y'all have to initialize several parallel gym envs, send the agent's actions there, and .reset() each env once it terminates. To minimize learner brain damage, we've taken care of them for ya - just make sure you read it before you use it.
#
class EnvBatch:
def __init__(self, n_envs = 10):
""" Creates n_envs environments and babysits them for ya' """
self.envs = [make_env() for _ in range(n_envs)]
def reset(self):
""" Reset all games and return [n_envs, *obs_shape] observations """
return np.array([env.reset() for env in self.envs])
def step(self, actions):
"""
Send a vector[batch_size] of actions into respective environments
:returns: observations[n_envs, *obs_shape], rewards[n_envs], done[n_envs,], info[n_envs]
"""
results = [env.step(a) for env, a in zip(self.envs, actions)]
new_obs, rewards, done, infos = map(np.array, zip(*results))
# reset environments automatically
for i in range(len(self.envs)):
if done[i]:
new_obs[i] = self.envs[i].reset()
return new_obs, rewards, done, infos
# __Let's try it out:__
# +
env_batch = EnvBatch(10)
batch_states = env_batch.reset()
batch_actions = agent.sample_actions(agent.step(batch_states))
batch_next_states, batch_rewards, batch_done, _ = env_batch.step(batch_actions)
print("State shape:", batch_states.shape)
print("Actions:", batch_actions[:3])
print("Rewards:", batch_rewards[:3])
print("Done:", batch_done[:3])
# -
# # Actor-critic
#
# Here we define the loss functions and learning algorithm as usual.
# These placeholders mean exactly the same as in "Let's try it out" section above
states_ph = tf.placeholder('float32', [None,] + list(obs_shape))
next_states_ph = tf.placeholder('float32', [None,] + list(obs_shape))
actions_ph = tf.placeholder('int32', (None,))
rewards_ph = tf.placeholder('float32', (None,))
is_done_ph = tf.placeholder('float32', (None,))
# +
# logits[n_envs, n_actions] and state_values[n_envs]
logits, state_values = agent.symbolic_step(states_ph)
next_logits, next_state_values = agent.symbolic_step(next_states_ph)
next_state_values = next_state_values * (1 - is_done_ph)
# probabilities and log-probabilities for all actions
probs = tf.nn.softmax(logits) # [n_envs, n_actions]
logprobs = tf.nn.log_softmax(logits) # [n_envs, n_actions]
# log-probabilities only for agent's chosen actions
logp_actions = tf.reduce_sum(logprobs * tf.one_hot(actions_ph, n_actions), axis=-1) # [n_envs,]
# +
# compute advantage using rewards_ph, state_values and next_state_values
gamma = 0.99
advantage = ### YOUR CODE
assert advantage.shape.ndims == 1, "please compute advantage for each sample, vector of shape [n_envs,]"
# compute policy entropy given logits_seq. Mind the sign!
entropy = ### YOUR CODE
assert entropy.shape.ndims == 1, "please compute pointwise entropy vector of shape [n_envs,] "
actor_loss = - tf.reduce_mean(logp_actions * tf.stop_gradient(advantage)) - 0.001 * tf.reduce_mean(entropy)
# compute target state values using the temporal difference formula. Use rewards_ph and next_state_values
target_state_values = ### YOUR CODE
critic_loss = tf.reduce_mean((state_values - tf.stop_gradient(target_state_values))**2 )
train_step = tf.train.AdamOptimizer(1e-4).minimize(actor_loss + critic_loss)
sess.run(tf.global_variables_initializer())
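# As a hedged sanity check of the one-step temporal-difference quantities, the same
# arithmetic can be tried on plain numpy arrays first (a sketch only - not necessarily
# the exact expressions the assignment expects you to write in TF):

```python
import numpy as np

gamma = 0.99  # same discount factor as above

# toy batch of three transitions: rewards, value estimates, and done flags
rewards = np.array([1.0, 0.0, 2.0])
state_values = np.array([0.5, 0.2, 1.0])
next_state_values = np.array([0.4, 0.3, 0.0])
is_done = np.array([0.0, 0.0, 1.0])

# one-step TD target and the corresponding advantage
target_state_values = rewards + gamma * next_state_values * (1.0 - is_done)
advantage = target_state_values - state_values

print(target_state_values)
print(advantage)
```

# Terminal transitions (is_done = 1) contribute no bootstrapped next-state value.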
# +
# Sanity checks to catch some errors. Specific to KungFuMaster in assignment's default setup.
l_act, l_crit, adv, ent = sess.run([actor_loss, critic_loss, advantage, entropy], feed_dict = {
states_ph: batch_states,
actions_ph: batch_actions,
next_states_ph: batch_states,
rewards_ph: batch_rewards,
is_done_ph: batch_done,
})
assert abs(l_act) < 100 and abs(l_crit) < 100, "losses seem abnormally large"
assert 0 <= ent.mean() <= np.log(n_actions), "impossible entropy value, double-check the formula pls"
if ent.mean() < np.log(n_actions) / 2: print("Entropy is too low for untrained agent")
print("You just might be fine!")
# -
# # Train
#
# Just the usual - play a bit, compute the loss, follow the gradients, repeat a few million times.
# 
# +
from IPython.display import clear_output
from tqdm import trange
import pandas as pd
# top-level pandas.ewma was removed in newer pandas; this helper keeps the same call signature
def ewma(x, span):
    return pd.Series(x).ewm(span=span).mean().values
env_batch = EnvBatch(10)
batch_states = env_batch.reset()
rewards_history = []
entropy_history = []
# +
for i in trange(100000):
batch_actions = agent.sample_actions(agent.step(batch_states))
batch_next_states, batch_rewards, batch_done, _ = env_batch.step(batch_actions)
feed_dict = {
states_ph: batch_states,
actions_ph: batch_actions,
next_states_ph: batch_next_states,
rewards_ph: batch_rewards,
is_done_ph: batch_done,
}
batch_states = batch_next_states
_, ent_t = sess.run([train_step, entropy], feed_dict)
entropy_history.append(np.mean(ent_t))
if i % 500 == 0:
if i % 2500 == 0:
rewards_history.append(np.mean(evaluate(agent, env, n_games=3)))
if rewards_history[-1] >= 50:
print("Your agent has earned the yellow belt" % color)
clear_output(True)
plt.figure(figsize=[8,4])
plt.subplot(1,2,1)
plt.plot(rewards_history, label='rewards')
plt.plot(ewma(np.array(rewards_history),span=10), marker='.', label='rewards ewma@10')
plt.title("Session rewards"); plt.grid(); plt.legend()
plt.subplot(1,2,2)
plt.plot(entropy_history, label='entropy')
plt.plot(ewma(np.array(entropy_history),span=1000), label='entropy ewma@1000')
plt.title("Policy entropy"); plt.grid(); plt.legend()
plt.show()
# -
# Relax and grab some refreshments while your agent is locked in an infinite loop of violence and death.
#
# __How to interpret plots:__
#
# The session reward is the easy part: it should generally go up over time, but it's okay if it fluctuates ~~like crazy~~. It's also OK if the reward doesn't increase substantially during the first ~10k steps. However, if the reward drops to zero and doesn't recover over 2-3 evaluations, something is going wrong.
#
#
# Since we use a policy-based method, we also keep track of __policy entropy__ - the same one you used as a regularizer. The only important thing about it is that your entropy shouldn't drop too low (`< 0.1`) before your agent gets the yellow belt. Or at least it can drop there, but _it shouldn't stay there for long_.
#
# If it does, the culprit is likely:
# * Some bug in the entropy computation. Remember that it is $-\sum_i p(a_i) \cdot \log p(a_i)$
# * Your agent architecture converges too fast. Increase entropy coefficient in actor loss.
# * Gradient explosion - just [clip gradients](https://stackoverflow.com/a/43486487) and maybe use a smaller network
# * Us. Or TF developers. Or aliens. Or lizardfolk. Contact us on forums before it's too late!
#
# If you're debugging, just run `logits, values = agent.step(batch_states)` and manually look into logits and values. This will reveal the problem 9 times out of 10: you'll likely see some NaNs or insanely large numbers or zeros. Try to catch the moment when this happens for the first time and investigate from there.
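# A minimal version of that check, sketched with numpy stand-ins for the real `logits`
# and `values` (in the notebook you would use the outputs of `agent.step(batch_states)` instead):

```python
import numpy as np

# stand-ins; in the notebook: logits, values = agent.step(batch_states)
logits = np.array([[0.1, -0.2, 0.05], [1.0, 0.9, 1.1]])
values = np.array([0.3, -0.1])

assert np.isfinite(logits).all(), "NaN/inf in logits - something upstream exploded"
assert np.isfinite(values).all(), "NaN/inf in state values"

# the policy entropy should stay within [0, log(n_actions)]
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)
entropy = -(probs * np.log(probs)).sum(axis=-1)
assert (entropy >= 0).all() and (entropy <= np.log(logits.shape[-1]) + 1e-6).all()
print("no obvious numerical problems")
```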
# ### "Final" evaluation
# +
env_monitor = gym.wrappers.Monitor(env, directory="kungfu_videos", force=True)
final_rewards = evaluate(agent, env_monitor, n_games=20,)
env_monitor.close()
print("Final mean reward:", np.mean(final_rewards))
video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./kungfu_videos/")))
# -
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./kungfu_videos/"+video_names[-1]))
HTML("""
<video width="640" height="480" controls>
<source src="{}" type="video/mp4">
</video>
""".format("./kungfu_videos/"+video_names[-2])) #try other indices
# +
# if you don't see videos, just navigate to ./kungfu_videos and download .mp4 files from there.
# -
# ```
#
# ```
# ```
#
# ```
# ```
#
# ```
# ```
#
# ```
# ```
#
# ```
# ```
#
# ```
# ```
#
# ```
# ```
#
# ```
#
# ### Now what?
# Well, 5k reward is [just the beginning](https://www.buzzfeed.com/mattjayyoung/what-the-color-of-your-karate-belt-actually-means-lg3g). Can you get past 200? With recurrent neural network memory, chances are you can even beat 400!
| week10_rl/optional/atari_tensorflow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ---
#
# _You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-machine-learning/resources/bANLa) course resource._
#
# ---
# ## Applied Machine Learning, Module 1: A simple classification task
# ### Import required modules and load data file
# +
# %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# use to split training set and test set from original data set
# This function randomly shuffles the dataset
# and splits off a certain percentage of the input samples
# for use as a training set,
# and then puts the remaining samples into a different variable
# for use as a test set.
from sklearn.model_selection import train_test_split
# read from the txt through panda read_table function
fruits = pd.read_table('fruit_data_with_colors.txt')
# -
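# The comments above describe what `train_test_split` does; the shuffle-and-split logic
# can be sketched in plain numpy (a simplified illustration only - sklearn's real
# implementation also supports stratification, shuffling options, and more):

```python
import numpy as np

def simple_train_test_split(X, y, test_size=0.25, random_state=0):
    """Shuffle the indices, then carve off a test_size fraction as the test set."""
    rng = np.random.RandomState(random_state)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_size)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    return X[train_idx], X[test_idx], y[train_idx], y[test_idx]

X_demo = np.arange(20).reshape(10, 2)
y_demo = np.arange(10)
X_tr, X_te, y_tr, y_te = simple_train_test_split(X_demo, y_demo)
print(len(X_tr), len(X_te))  # -> 8 2 (the default 75% / 25% split)
```

# Fixing random_state makes the shuffle reproducible, which is exactly why the cells
# below pass random_state=0 to get identical splits on every run.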
fruits.head()
# +
# zip?
# zip can be used to combine two lists of data into a new list of tuples
# the result's length matches the shorter input
# you can use zip(*zipped) to unzip the zipped list
# -
# create a mapping from fruit label value to fruit name to make results easier to interpret
lookup_fruit_name = dict(zip(fruits.fruit_label.unique(), fruits.fruit_name.unique()))
lookup_fruit_name
print(fruits.fruit_label.unique())
print(fruits.fruit_name.unique())
len(fruits)
fruits.shape
# +
# train_test_split?
# -
# The file contains the mass, height, and width of a selection of oranges, lemons and apples. The heights were measured along the core of the fruit. The widths were the widest width perpendicular to the height.
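# A quick way to sanity-check those measurements is `DataFrame.describe()`; here it is
# sketched on a tiny hypothetical stand-in table (on the real data you would call
# `fruits[['mass', 'width', 'height']].describe()`):

```python
import pandas as pd

# hypothetical stand-in rows with the same columns as the fruits table
demo = pd.DataFrame({
    'mass':   [86, 84, 80, 100],     # grams
    'width':  [6.2, 6.0, 5.8, 6.3],  # cm
    'height': [4.7, 4.6, 4.3, 8.5],  # cm
})
summary = demo.describe()
print(summary.loc[['mean', 'min', 'max']])
```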
# ### Examining the data
# +
# plotting a scatter matrix
from matplotlib import cm
X = fruits[['height', 'width', 'mass', 'color_score']]
y = fruits['fruit_label']
# 75% training set, 25% test set as default training/test size
# if we want to get the same result
# we just keep the value of random state as the same
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# we use X_train to train our model
# use X_test to test our model
# print(X_train)
# -
testtype = train_test_split(X, y, random_state=0)
print("length of train test split:", len(testtype))
print("The shape of training data",X_train.shape)
print("The shape of test data",X_test.shape)
# using feature pair plot?
# show all possible pairs of features
# produce a scatter plot for each pair
# show how the features are correlated to each other or not
cmap = cm.get_cmap('gnuplot')
type(cmap)
cmap = cm.get_cmap('gnuplot')
scatter = pd.plotting.scatter_matrix(X_train, c=y_train, marker = 'o', s=40, hist_kwds={'bins':15}, figsize=(9,9), cmap=cmap)
# +
# plotting a 3D scatter plot
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection = '3d')
ax.scatter(X_train['width'], X_train['height'], X_train['color_score'], c = y_train, marker = 'o', s=100)
ax.set_xlabel('width')
ax.set_ylabel('height')
ax.set_zlabel('color_score')
plt.show()
# -
# ### Create train-test split
# +
# For this example, we use the mass, width, and height features of each fruit instance
X = fruits[['mass', 'width', 'height']]
y = fruits['fruit_label']
print(y.unique())
# default is 75% / 25% train-test split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# -
# ### Create classifier object
# +
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 5)
# -
# ### Train the classifier (fit the estimator) using the training data
knn.fit(X_train, y_train)
# ### Estimate the accuracy of the classifier on future data, using the test data
knn.score(X_test, y_test)
# ### Use the trained k-NN classifier model to classify new, previously unseen objects
# first example: a small fruit with mass 20g, width 4.3 cm, height 5.5 cm
fruit_prediction = knn.predict([[20, 4.3, 5.5]])
print(fruit_prediction)
lookup_fruit_name[fruit_prediction[0]]
# second example: a larger, elongated fruit with mass 100g, width 6.3 cm, height 8.5 cm
fruit_prediction = knn.predict([[100, 6.3, 8.5]])
lookup_fruit_name[fruit_prediction[0]]
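# Under the hood, this prediction is just a distance computation plus a majority vote;
# a brute-force sketch in plain numpy (illustrative only - sklearn uses optimized tree
# structures, and in practice features should usually be scaled first):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_new, k=5):
    """Majority vote among the k training points closest to x_new (Euclidean distance)."""
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(y_train[nearest]).most_common(1)[0][0]

# toy data: two well-separated clusters labeled 1 and 2
X_toy = np.array([[0, 0], [1, 0], [0, 1], [10, 10], [11, 10], [10, 11]])
y_toy = np.array([1, 1, 1, 2, 2, 2])
print(knn_predict(X_toy, y_toy, np.array([0.5, 0.5]), k=3))   # -> 1
print(knn_predict(X_toy, y_toy, np.array([10.5, 10.5]), k=3)) # -> 2
```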
# ### Plot the decision boundaries of the k-NN classifier
# +
from adspy_shared_utilities import plot_fruit_knn
# 'uniform' is the weighting method to be used:
# passing the string 'uniform' means all neighbors are treated equally
# when combining their labels
plot_fruit_knn(X_train, y_train, 5, 'uniform') # we choose 5 nearest neighbors
# -
# ### How sensitive is k-NN classification accuracy to the choice of the 'k' parameter?
# +
k_range = range(1,20)
scores = []
for k in k_range:
knn = KNeighborsClassifier(n_neighbors = k)
knn.fit(X_train, y_train)
scores.append(knn.score(X_test, y_test))
plt.figure()
plt.xlabel('k')
plt.ylabel('accuracy')
plt.scatter(k_range, scores)
plt.xticks([0,5,10,15,20]);
# +
k_range = range(1,10)
scores = []
for k in k_range:
# define a knn model with k number as K
knn = KNeighborsClassifier(n_neighbors = k)
# fit the model with training data set
knn.fit(X_train, y_train)
# score the model with test data set
scores.append(knn.score(X_test, y_test))
# show the data set
plt.figure()
# define x label
plt.xlabel('k')
# define y label
plt.ylabel('accuracy')
# assign x, y axis data
plt.scatter(k_range, scores)
# assign the distance of x axis
plt.xticks([0,5,10]);
# -
# ### How sensitive is k-NN classification accuracy to the train/test split proportion?
# +
t = [0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
knn = KNeighborsClassifier(n_neighbors = 5)
plt.figure()
for s in t:
scores = []
for i in range(1,1000):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1-s)
knn.fit(X_train, y_train)
scores.append(knn.score(X_test, y_test))
plt.plot(s, np.mean(scores), 'bo')
plt.xlabel('Training set proportion')
plt.ylabel('accuracy');
# +
t = [0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
# define a knn model; the neighbor number k does not need to change
knn = KNeighborsClassifier(n_neighbors = 5)
# start plot
plt.figure()
# loop each percentage
for s in t:
scores = []
# for each test percentage, train once here (increase the range,
# e.g. back to range(1, 1000), to average scores over many random splits)
for i in range(1, 2):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1-s)
knn.fit(X_train, y_train)
scores.append(knn.score(X_test, y_test))
plt.plot(s, np.mean(scores), 'bo')
plt.xlabel('Training set proportion')
plt.ylabel('accuracy');
| Applied Machine Learning in Python/Module+1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Image classification on the CIFAR-10 Dataset
import torch
import torchvision
import torchvision.transforms as transforms
# +
# Sequentially apply transformations to the input
transform = transforms.Compose(
[transforms.ToTensor(), # numpy to torch tensor
# Normalize the image pixel values
transforms.Normalize(mean = (0.5, 0.5, 0.5), std = (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# +
import matplotlib.pyplot as plt
import numpy as np
# %matplotlib inline
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# -
# get some random training images
dataiter = iter(trainloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch versions
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
# ### Define the same CNN as the previous tutorial
# +
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
# -
# ### Loss function and optimizer
#
# +
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
# Specify which are the set of parameters that are learnable
# Specify the LR and momentum
optimizer = optim.SGD(params=net.parameters(), lr=0.001, momentum=0.9)
# -
# ### Train the network
# Need to loop over the data, which is on disk using the `trainloader` Dataloader util
n_epochs = 1
# +
for epoch in range(n_epochs):
# initializing the loss
running_loss = 0.0
# getting the data
for i, data in enumerate(trainloader): # trainloader takes care of the mini-batch processing
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
# compute gradients
loss.backward()
# update weights
optimizer.step()
running_loss += loss.item() # accumulate loss for the epoch
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss per batch: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0 # reset to zero
print('Finished Training')
# -
# ### Testing Phase
# Load some test samples
# +
dataiter = iter(testloader)
images, labels = next(dataiter)  # dataiter.next() was removed in newer PyTorch versions
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
# -
# For the above images, we can look at the predictions:
outputs = net(images)
outputs # logit values
# +
_, predicted = torch.max(outputs, 1) # max-logit index corresponds to the predicted class
# the above returns both the max value and its index, we only need the index
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
# -
# The results are decent on the above 4 images. Next, look at the entire test set performance
# +
correct = 0
total = 0
with torch.no_grad(): # No need to compute gradients during inference time
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
# -
correct += (predicted == labels).sum().item() # .item gives the value of the tensor
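# The accuracy bookkeeping above is easy to mirror in plain numpy, which can help when
# double-checking the torch version (a sketch with made-up logits, not the network's
# real outputs):

```python
import numpy as np

# hypothetical logits for 4 samples over 3 classes, plus their true labels
logits = np.array([[2.0, 0.1, 0.3],
                   [0.2, 1.5, 0.1],
                   [0.1, 0.2, 3.0],
                   [1.0, 0.9, 0.8]])
labels = np.array([0, 1, 2, 2])

predicted = logits.argmax(axis=1)          # same role as torch.max(outputs, 1)[1]
correct = (predicted == labels).sum()
print("accuracy: %.2f" % (correct / len(labels)))  # -> accuracy: 0.75
```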
# ### Training on GPU
# +
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
# -
net.to(device)
# the entire network is transferred to GPU
# convert everything to CUDA tensors
# done INPLACE, no need to assign
# For having the inputs and corresponding labels on GPU, add the following line after the dataloaders (within the for-loops)
inputs, labels = inputs.to(device), labels.to(device)
# __Note__: If the neural network is not very large, we may not observe any MASSIVE speedup compared to CPU
# ### Multiple GPUs
# Pytorch will only use one GPU by default. You can easily run your operations on multiple GPUs by making your model run parallelly using `DataParallel`
net = nn.DataParallel(net)
# The general idea is to __split the batch across multiple GPUs__, see below:
# Also refer: https://pytorch.org/tutorials/beginner/blitz/data_parallel_tutorial.html
# +
model = Model(input_size, output_size)  # schematic: Model, input_size, output_size are defined in the linked tutorial
if torch.cuda.device_count() > 1:
print("Let's use", torch.cuda.device_count(), "GPUs!")
# dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
# This one line is sufficient to parallelize !
model = nn.DataParallel(model)
model.to(device)
# -
| 04_training_cifar_cnn_classifier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: igwn-py38
# language: python
# name: igwn-py38
# ---
# %matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import matplotlib
matplotlib.rc_file("~/Code/matplotlibrc")
# +
#https://predictablynoisy.com/matplotlib/gallery/shapes_and_collections/artist_reference.html#sphx-glr-gallery-shapes-and-collections-artist-reference-py
plt.figure()
ax = plt.gca()
cSM = "grey"
cPQ = "#3CA8C3"
cD = "#C3573C"
SM_gauge_box = matplotlib.patches.Ellipse(xy=[2, 5], width=2, height=1.5, color=cSM, alpha=0.5)
PQ_gauge_box = matplotlib.patches.Ellipse(xy=[3.2, 5.28], width=2, height=0.7, color=cPQ, angle=20, alpha=0.5,ec='w')
PQ_gauge_box_2 = matplotlib.patches.Ellipse(xy=[3.2, 5.28], width=2, height=0.7, color=cPQ, angle=20, alpha=0.5,ec='w', fill=False)
Dark_gauge_box = matplotlib.patches.Ellipse(xy=[3.2, 4.70], width=2, height=0.7, color=cD, angle=-20, alpha=0.5,ec='w')
Dark_gauge_box_2 = matplotlib.patches.Ellipse(xy=[3.2, 4.70], width=2, height=0.7, color=cD, angle=-20, alpha=0.5,ec='w', fill=False)
SM_particle_box = matplotlib.patches.FancyBboxPatch(xy=[1.5, 4.85], width=0.3, height=0.3, boxstyle='round', color='lightgrey')
#SM_gauge_box2 = matplotlib.patches.FancyBboxPatch(xy=[10, 1], width=5, height=5, boxstyle='round')
ax.add_patch(SM_gauge_box)
ax.add_patch(PQ_gauge_box)
ax.add_patch(Dark_gauge_box)
ax.add_patch(PQ_gauge_box_2)
ax.add_patch(Dark_gauge_box_2)
ax.add_patch(SM_particle_box)
#ax.add_patch(SM_gauge_box2)
plt.text(2, 5.9,r"$SU(3)_C \times SU(2)_L \times U(1)_Y$", va='center', ha='center', zorder=10, fontsize=16, color=cSM)
plt.text(3.4, 5.85,r"$U(1)_\mathrm{PQ}$", va='center', ha='center', zorder=10, fontsize=16, color=cPQ)
plt.text(3.9, 4.95,r"$U(1)_\mathrm{Dark}$", va='center', ha='center', zorder=10, fontsize=16, color=cD)
plt.text(1.65, 5,"Standard\nModel\nparticles", va='center', ha='center', zorder=10, fontsize=16)
plt.text(2.65, 5,r"$\psi,\, \psi^c$", va='center', ha='center', zorder=10, fontsize=20)
plt.text(3.4, 5.35,r"$\Phi_\mathrm{PQ}$", va='center', ha='center', zorder=10, fontsize=20)
#plt.text(3.2*(1 - 2e-3), 4.4*(1 + 5e-4),r"$\Phi_\mathrm{D},\,\gamma^\prime$", va='center', ha='center', zorder=10, fontsize=20, color='w')
plt.text(3.4, 4.6,r"$\gamma^\prime\,, \Phi_\mathrm{D}$", va='center', ha='center', zorder=10, fontsize=20, color='k')
plt.axis('equal')
plt.axis('off')
plt.tight_layout()
plt.savefig("../plots/Illustration.pdf", bbox_inches='tight')
plt.show()
# +
#https://predictablynoisy.com/matplotlib/gallery/shapes_and_collections/artist_reference.html#sphx-glr-gallery-shapes-and-collections-artist-reference-py
plt.figure(figsize=(6, 6*(5/4)))
ax = plt.gca()
cSM = "grey"
cPQ = "#3CA8C3"
cD = "#C3573C"
x_s = 0.2
SM_gauge_box = matplotlib.patches.Ellipse(xy=[x_s + 1.2, 3.75], width=2.5, height=1.5, color=cSM, alpha=0.5)
xy_PQ = [x_s + 2.6, 4.25-0.2]
xy_D =[x_s + 2.6, 3.75-0.2]
ang = 17
PQ_gauge_box = matplotlib.patches.Ellipse(xy=xy_PQ , width=2, height=0.7, color=cPQ, angle=ang, alpha=0.5,ec='w')
PQ_gauge_box_2 = matplotlib.patches.Ellipse(xy=xy_PQ , width=2, height=0.7, color=cPQ, angle=ang, alpha=0.5,ec='w', fill=False)
Dark_gauge_box = matplotlib.patches.Ellipse(xy=xy_D, width=2, height=0.7, color=cD, angle=-ang, alpha=0.5,ec='w')
Dark_gauge_box_2 = matplotlib.patches.Ellipse(xy=xy_D, width=2, height=0.7, color=cD, angle=-ang, alpha=0.5,ec='w', fill=False)
SM_particle_box = matplotlib.patches.FancyBboxPatch(xy=[x_s + 0.65, 3.6], width=0.3, height=0.3, boxstyle='round', color='lightgrey')
#SM_gauge_box2 = matplotlib.patches.FancyBboxPatch(xy=[10, 1], width=5, height=5, boxstyle='round')
ax.add_patch(SM_gauge_box)
ax.add_patch(PQ_gauge_box)
ax.add_patch(Dark_gauge_box)
ax.add_patch(PQ_gauge_box_2)
ax.add_patch(Dark_gauge_box_2)
ax.add_patch(SM_particle_box)
#ax.add_patch(SM_gauge_box2)
plt.text(x_s +1, 4.7,r"$SU(3)_C \times SU(2)_L \times U(1)_Y$", va='center', ha='center', zorder=10, fontsize=16, color=cSM)
plt.text(x_s +3.0, 4.7,r"$U(1)_\mathrm{PQ}$", va='center', ha='center', zorder=10, fontsize=16, color=cPQ)
plt.text(x_s +3.0, 2.9,r"$U(1)_\mathrm{Dark}$", va='center', ha='center', zorder=10, fontsize=16, color=cD)
plt.text(x_s +0.8, 3.75,"Standard\nModel\nparticles", va='center', ha='center', zorder=10, fontsize=16)
plt.text(x_s +2.0, 3.8,r"$\psi,\, \psi^c$", va='center', ha='center', zorder=10, fontsize=20)
plt.text(x_s +3.0, 4.2,r"$\Phi_\mathrm{PQ}$", va='center', ha='center', zorder=10, fontsize=20)
#plt.text(3.2*(1 - 2e-3), 4.4*(1 + 5e-4),r"$\Phi_\mathrm{D},\,\gamma^\prime$", va='center', ha='center', zorder=10, fontsize=20, color='w')
plt.text(x_s +3, 3.4,r"$\gamma^\prime,\, \Phi_\mathrm{D}$", va='center', ha='center', zorder=10, fontsize=20, color='k')
#---------------------------------------------
dy = -2.1
dx_arr = -0.1
arr_style = matplotlib.patches.ArrowStyle("Simple", head_length=10, head_width=15, tail_width=5)
arrow = matplotlib.patches.FancyArrowPatch(posA = [dx_arr + 2, 2.95], posB = [dx_arr + 2, 2.45], arrowstyle=arr_style, color='k')
ax.add_patch(arrow)
plt.text(dx_arr + 1.95, 2.7, r"$\mathrm{Symmetry} \quad \quad \mathrm{Breaking}$", ha = 'center', va='center')
#---------------------------------------------
SM_gauge_box_new = matplotlib.patches.Ellipse(xy=[x_s + 1.2, 3.75 + dy], width=2.5, height=1.5, color=cSM, alpha=0.5)
dx = -0.2
PQ_gauge_box_new = matplotlib.patches.Ellipse(xy=[dx + 3.4, 4.25 + dy], width=0.7, height=0.7, color=cPQ, angle=20, alpha=0.5,ec='w')
Dark_gauge_box_new = matplotlib.patches.Ellipse(xy=[dx + 3.4, 3.25+ dy], width=0.7, height=0.7, color=cD, angle=-20, alpha=0.5,ec='w')
SM_particle_box_new = matplotlib.patches.FancyBboxPatch(xy=[1.25, 3.6 + dy], width=0.3, height=0.3, boxstyle='round', color='lightgrey')
plt.text(1.4, 3.75 + dy,"Standard\nModel\nparticles", va='center', ha='center', zorder=10, fontsize=16)
ax.add_patch(SM_gauge_box_new)
ax.add_patch(PQ_gauge_box_new)
ax.add_patch(Dark_gauge_box_new)
ax.add_patch(SM_particle_box_new)
plt.text(dx + 3.4, 4.25 + dy,r"$a$", va='center', ha='center', zorder=10, fontsize=20)
plt.text(dx + 3.4, 3.25 + dy,r"$\gamma^\prime,\, h_D$", va='center', ha='center', zorder=10, fontsize=20, color='k')
dy = -3.15
plt.plot([dx + 3.4, dx + 3.4], [4.5 + dy, 5.05 + dy], linestyle='--', color='k')
plt.plot([1.8, dx + 3.1], [4.8 + dy + 0.1, 5.2 + dy +0.05], linestyle='--', color='k')
plt.plot([1.8, dx + 3.1], [4.8 + dy - 0.1, 4.25 + dy +0.05], linestyle='--', color='k')
#plt.axhline(2.5, linestyle='--')
plt.axis('equal')
plt.axis('off')
plt.xlim(0.1, 3.8)
plt.ylim(0.8, 4.9)
plt.tight_layout()
plt.savefig("../plots/Illustration_v2.pdf", bbox_inches='tight')
plt.show()
# -
| code/Illustrations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Computing the Entropy of an Image
#
# __~ <NAME>__
#
# __DTU/2K16/MC/013__
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pprint
import cv2
from typing import List
# Creating a histogram from a given Image channel
I = cv2.imread('../data/images/lenna.png')
I = cv2.cvtColor(I, cv2.COLOR_BGR2RGB)
plt.imshow(I)
# frequency distribution function
def pixel_frequency_channel(channel: np.ndarray) -> np.ndarray:
return np.array([(channel == value).sum() for value in range(256)])
dist_red = pixel_frequency_channel(I[:, :, 0])
dist_green = pixel_frequency_channel(I[:, :, 1])
dist_blue = pixel_frequency_channel(I[:, :, 2])
# distribution of red values
plt.bar([i for i in range(256)], dist_red)
# distribution of green values
plt.bar([i for i in range(256)], dist_green)
# distribution of blue values
plt.bar([i for i in range(256)], dist_blue)
# ### Entropy of a Single Pixel Value ($p_i$)
#
# $$
# H(p_i) = - P(p_i \mid D) \log_2{P(p_i \mid D)}
# $$
#
# where $P(p_i | D)$ is the probability of that pixel value given the distribution
def entropy_pixel(value: int, dist: List[int]) -> float:
prob = dist[value] / sum(dist)
return - prob * np.log2(prob)
entropy_pixel(100, dist_red)
# entropy of entire channel
def entropy_channel(channel: np.ndarray) -> float:
dist = pixel_frequency_channel(channel)
prob = dist / dist.sum()
prob = prob[prob > 0]
return (-prob * np.log2(prob)).sum()
# red channel entropy
entropy_channel(I[:, :, 0])
# green channel entropy
entropy_channel(I[:, :, 1])
# blue channel entropy
entropy_channel(I[:, :, 2])
def probability_dist(I):
return np.histogramdd(np.ravel(I), bins = 256)[0] / I.size
def entropy_image(J: np.ndarray) -> float:
marg = probability_dist(J)
marg = marg[marg > 0]
return -(marg * np.log2(marg)).sum()
entropy_image(I)
# ### KL Divergence (Kullback-Leibler Divergence)
# $$
# KL(P_1, P_2) = \sum_{x} P_1(x) \log_2{\frac{P_1(x)}{P_2(x)}}
# $$
def kl_divergence_images(I, J):
epsilon = 1e-10
p = probability_dist(I) + epsilon
q = probability_dist(J) + epsilon
return np.where(p != 0, p * np.log2(p / q), 0).sum()
kl_divergence_images(I, I)
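# Two sanity properties worth keeping in mind - KL divergence is zero for identical
# distributions, non-negative in general, and not symmetric - can be checked directly
# on toy distributions (a numpy sketch using the same base-2 formula as the image
# version above):

```python
import numpy as np

def kl(p, q):
    """KL(P || Q) in bits; assumes p and q are strictly positive and sum to 1."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return (p * np.log2(p / q)).sum()

p = np.array([0.7, 0.2, 0.1])
q = np.array([1/3, 1/3, 1/3])

print(kl(p, p))            # 0.0 for identical distributions
print(kl(p, q), kl(q, p))  # both positive, and unequal: KL is not symmetric
```

# The epsilon added in kl_divergence_images above plays the same role as the strict
# positivity assumption here: it keeps the log well-defined for empty histogram bins.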
# ### KL Divergence between Noisy and Noise-free Images
# We will add salt and pepper noise to our standard Lenna image at varying degrees and see how adding such noise affects the KL divergence
#
# #### Salt and Pepper Noise
# $$
# I(x, y) = \begin{cases}
# P_0 & 0 \leq P[I(x, y)] \leq \frac{d}{2} \\
# P_{255} & \frac{d}{2} \leq P[I(x, y)] \leq d \\
# I(x, y) & \text{otherwise}
# \end{cases}
# $$
# salt and pepper noise function
def salt_and_pepper_noise(I: np.ndarray, d: float) -> np.ndarray:
J = I.copy()
    m, n, _ = I.shape
prob = np.random.rand(m, n)
J[prob < d / 2] = [0, 0, 0]
J[(d / 2 < prob) & (prob < d)] = [255, 255, 255]
return J
J = salt_and_pepper_noise(I, .1)
plt.imshow(J)
kl_divergence_images(I, J)
# ### Computing the KL Divergence for Different Levels of Salt and Pepper Noise
# different noise factors
D = np.arange(0, 1.01, 0.01)
D
kl_divs = [kl_divergence_images(I, salt_and_pepper_noise(I, d)) for d in D]
# +
plt.figure(figsize=(10, 7))
plt.plot(D, kl_divs)
plt.title('KL Divergence Between Noise-Free and Noisy Image')
plt.xlabel('Noise Density d (Salt and Pepper)')
plt.ylabel('KL Divergence')
plt.show()
# -
# ### Computing Change in Entropy as Noise is Added
# noise factor
D
# entropy as a function of noise
E = [entropy_image(salt_and_pepper_noise(I, d)) for d in D]
# +
plt.figure(figsize=(10, 7))
plt.plot(D, E)
plt.title('Entropy as a Function of Salt & Pepper Noise')
plt.xlabel('Noise Factor')
plt.ylabel('Entropy of Image')
plt.show()
| labs/kl-divergence-entropy/kl-divergence-images.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# name: python2
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/StephanieRogers-ML/practicum/blob/master/Classification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="oYM61xrTsP5d"
# # Multi-Class Classification with Vehicle Interiors
#
# + [markdown] id="wKhTAvKaNUK2" colab_type="text"
# # Environment
# + [markdown] colab_type="text" id="L1otmJgmbahf"
# This notebook runs TensorFlow Hub modules in the native TF2 format with Keras. It uses a pre-trained image feature vector module for classifying three different vehicle makes, including fine-tuning of the module.
#
# **NOTE:** This colab needs TensorFlow 2.0 **beta1** or newer installed from a PIP package.
# + colab_type="code" id="110fGB18UNJn" outputId="d01cd0e1-b298-4de3-813c-c700e8e3b146" colab={"base_uri": "https://localhost:8080/", "height": 472}
# #!pip uninstall tensorflow tensorflow-gpu --yes
# !pip install -U --pre tensorflow-gpu==2.0.0b1;
# + colab_type="code" id="dlauq-4FWGZM" outputId="41eada1b-aac8-448e-911a-54539437ec31" colab={"base_uri": "https://localhost:8080/", "height": 84}
from __future__ import absolute_import, division, print_function
import itertools
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("Version: ", tf.__version__)
print("Eager mode: ", tf.executing_eagerly())
print("Hub version: ", hub.__version__)
print("GPU is", "available" if tf.test.is_gpu_available() else "NOT AVAILABLE")
# + id="LzM68UUS3R2U" colab_type="code" outputId="1479a6bb-633a-44d6-e1bb-c23fe9acc051" colab={"base_uri": "https://localhost:8080/", "height": 50}
from google.colab import drive
drive.mount('/content/drive')
import os
os.chdir('/content/drive/My Drive/BIMCON Inc./demo/train')
# !ls
# + [markdown] id="Jebz4wv0Mw2i" colab_type="text"
# # Set up the Dataset
# + colab_type="code" id="WBtFK1hO8KsO" outputId="0a018bb3-9f6f-4f1a-86b1-acf71f398694" colab={"base_uri": "https://localhost:8080/", "height": 117}
IMAGE_SHAPE = (224, 224)
train_root = ('/content/drive/My Drive/BIMCON Inc./demo/train')
test_root = ('/content/drive/My Drive/BIMCON Inc./demo/test')
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1/255)
image_data = image_generator.flow_from_directory(str(train_root), target_size=IMAGE_SHAPE)
test_data = image_generator.flow_from_directory(str(test_root), target_size=IMAGE_SHAPE)
for image_batch, label_batch in image_data:
print("Image batch shape: ", image_batch.shape)
print("Label batch shape: ", label_batch.shape)
break
for image_batch_test, label_batch_test in test_data:
print("Image batch shape: ", image_batch_test.shape)
print("Label batch shape: ", label_batch_test.shape)
break
# + [markdown] id="2844lzs9Mxro" colab_type="text"
# # Defining the Classification Model
# + colab_type="code" outputId="6902913d-3e23-4223-c3a5-1bc7d5b7787c" id="u45WOCh8TQ__" colab={"base_uri": "https://localhost:8080/", "height": 234}
from tensorflow.keras import layers
feature_extractor_url = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" #@param {type:"string"}
feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
input_shape=(224,224,3))
feature_batch = feature_extractor_layer(image_batch)
print(feature_batch.shape)
feature_extractor_layer.trainable = False
# %load_ext tensorboard
model = tf.keras.Sequential([
feature_extractor_layer,
layers.Dense(image_data.num_classes, activation='softmax')
])
model.summary()
predictions = model(image_batch)
predictions.shape
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss='categorical_crossentropy',
metrics=['acc'])
class CollectBatchStats(tf.keras.callbacks.Callback):
def __init__(self):
self.batch_losses = []
self.batch_acc = []
def on_train_batch_end(self, batch, logs=None):
self.batch_losses.append(logs['loss'])
self.batch_acc.append(logs['acc'])
self.model.reset_metrics()
steps_per_epoch = np.ceil(image_data.samples/image_data.batch_size)
batch_stats_callback = CollectBatchStats()
logdir = "/content/drive/My Drive/BIMCON Inc./demo/logs/"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)
# + [markdown] id="-Ap1VUL4M0lY" colab_type="text"
# # Training the Classification Model
# + colab_type="code" outputId="66c06d44-3abc-466a-8c39-3b98a9309a0a" id="uwgAd6v5TB3q" colab={"base_uri": "https://localhost:8080/", "height": 1000}
history = model.fit(image_data, epochs=5,
steps_per_epoch=steps_per_epoch,
callbacks = [batch_stats_callback,tensorboard_callback])
plt.figure()
plt.ylabel("Loss")
plt.xlabel("Training Steps")
plt.ylim([0,2])
plt.plot(batch_stats_callback.batch_losses)
plt.figure()
plt.ylabel("Accuracy")
plt.xlabel("Training Steps")
plt.ylim([0,1])
plt.plot(batch_stats_callback.batch_acc)
class_names = sorted(image_data.class_indices.items(), key=lambda pair:pair[1])
class_names = np.array([key.title() for key, value in class_names])
class_names
predicted_batch = model.predict(image_batch)
predicted_id = np.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
label_id = np.argmax(label_batch, axis=-1)
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch_test[n])
color = "green" if predicted_id[n] == label_id[n] else "red"
plt.title(predicted_label_batch[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (green: correct, red: incorrect)")
class_names = sorted(test_data.class_indices.items(), key=lambda pair:pair[1])
class_names = np.array([key.title() for key, value in class_names])
class_names
predicted_batch = model.predict(image_batch_test)
predicted_id = np.argmax(predicted_batch, axis=-1)
predicted_label_batch = class_names[predicted_id]
label_id = np.argmax(label_batch_test, axis=-1)
plt.figure(figsize=(10,9))
plt.subplots_adjust(hspace=0.5)
for n in range(30):
plt.subplot(6,5,n+1)
plt.imshow(image_batch_test[n])
color = "green" if predicted_id[n] == label_id[n] else "red"
plt.title(predicted_label_batch[n].title(), color=color)
plt.axis('off')
_ = plt.suptitle("Model predictions (green: correct, red: incorrect)")
# %reload_ext tensorboard
# #%load_ext tensorboard
# + [markdown] id="yBIv32MZNIMF" colab_type="text"
# # Saving, Exporting & Inference
# + colab_type="code" id="LGvTi69oIc2d" colab={}
saved_model_path = "/content/drive/My Drive/BIMCON Inc./demo/saved_classification_model"
tf.saved_model.save(model, saved_model_path)
| Classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:pygmt]
# language: python
# name: conda-env-pygmt-py
# ---
# +
import matplotlib.pyplot as plt
from gprm import PointDistributionOnSphere
pts = PointDistributionOnSphere(distribution_type='marsaglia', N=1000)
plt.plot(pts.longitude,pts.latitude,'.')
# +
pts = PointDistributionOnSphere(distribution_type='fibonacci', N=2000)
plt.plot(pts.longitude,pts.latitude,'.')
# +
pts = PointDistributionOnSphere(distribution_type='spiral', N=1000)
plt.plot(pts.longitude,pts.latitude,'.')
# +
pts = PointDistributionOnSphere(distribution_type='healpix', N=16)
plt.plot(pts.longitude,pts.latitude,'.')
# +
from gprm import ReconstructionModel
from gprm.utils.spatial import rasterise_polygons
M2016 = ReconstructionModel('Matthews++2016')
M2016.add_rotation_model('/Applications/GPlates-2.0.0/SampleData/FeatureCollections/Rotations/Matthews_etal_GPC_2016_410-0Ma_GK07.rot')
M2016.add_continent_polygons('/Applications/GPlates-2.0.0/SampleData/FeatureCollections/ContinentalPolygons/Matthews_etal_GPC_2016_ContinentalPolygons.gpmlz')
pts = PointDistributionOnSphere(distribution_type='marsaglia', N=5000)
pts_mask_continents = pts.mask(M2016.continent_polygons, M2016.rotation_model, reconstruction_time = 50., masking='outside')
pts_mask_oceans = pts.mask(M2016.continent_polygons, M2016.rotation_model, reconstruction_time = 50., masking='inside')
plt.plot(pts_mask_continents.to_lat_lon_array()[:,1], pts_mask_continents.to_lat_lon_array()[:,0], 'r.')
plt.show()
plt.plot(pts_mask_oceans.to_lat_lon_array()[:,1], pts_mask_oceans.to_lat_lon_array()[:,0], 'b.')
plt.show()
# +
pts_mask = pts.mask(M2016.continent_polygons, M2016.rotation_model, reconstruction_time = 150., preserve_polygon_attributes=True)
for pts_group in pts_mask:
plt.plot(pts_group.get_geometry().to_lat_lon_array()[:,1], pts_group.get_geometry().to_lat_lon_array()[:,0], '.')
# +
pts_mask = pts.mask(M2016.continent_polygons, M2016.rotation_model, reconstruction_time = 0., preserve_polygon_attributes=True, masking=None)
for pts_group in pts_mask:
#NB currently only returns shapefile attributes...
#pt_fromage = pts_group.get_shapefile_attribute('FROMAGE')
plt.plot(pts_group.get_geometry().to_lat_lon_array()[:,1],
pts_group.get_geometry().to_lat_lon_array()[:,0], '.')
# -
| test_notebooks/test_PointDistributionOnSphere.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import os
import json
from itertools import count
from urllib.parse import urlparse
from datetime import datetime
def read_data():
with open("../keywords.json", "r") as fd:
keywords = [x["keyword"] for x in json.load(fd)]
return pd.DataFrame(map(get_keyword_data, keywords))
def get_keyword_data(keyword):
before = read_result(keyword, min)
after = read_result(keyword, max)
return {
"keyword": keyword,
"after": get_best_ranking(after["results"]),
"before": get_best_ranking(before["results"]),
"after_time": after["datetime"],
"before_time": before["datetime"],
}
def get_best_ranking(results, domain="serlo.org"):
    for rank, entry in enumerate(results, start=1):
        if get_domain(entry["link"]) == domain:
            #return entry["page"]*10 + entry["index"] + 1
            return rank
return None
def read_result(keyword, select_func):
keyword_dir = get_keyword_dir(keyword)
if not os.path.isdir(keyword_dir):
return None
to_datetime = lambda x: datetime.fromisoformat(os.path.splitext(x)[0])
file_name = select_func(os.listdir(keyword_dir), key=to_datetime)
with open(os.path.join(keyword_dir, file_name), "r") as fd:
return { "results": json.load(fd), "datetime": to_datetime(file_name) }
def get_keyword_dir(keyword):
root_dir = os.path.dirname(os.getcwd())
return os.path.join(root_dir, "results", keyword[0], keyword)
def get_domain(url):
parts = urlparse(url).hostname.split(".")
domain = ".".join(parts[-2:])
if domain == "wikibooks.org" and ("Freaks" in url or "Serlo" in url):
return "serlo.org"
else:
return domain
df = read_data()
df.dropna(inplace=True)
df.head()
# -
df["diff"] = df["before"]-df["after"]
df["diff"].describe()
diffs = df["diff"][df["diff"] != 0]
diffs.plot.hist(xlim=(diffs.min(), diffs.max()), bins = int(diffs.max() - diffs.min()))
pd.set_option('display.max_rows', None)
df[df["diff"] > 0].describe()
pd.set_option('display.max_rows', None)
df[df["diff"] < 0].describe()
| evaluations/2021-03-09 Verbesserung der Google-Rankings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### MLP for Diabetes using keras
# ### <NAME>
# ### April 2021
# Load dependencies
from keras.models import Sequential
from keras.layers import Dense
from keras.activations import hard_sigmoid
from sklearn.model_selection import train_test_split
import pandas as pd
import numpy as np
# +
# Load data
url= "http://academic.uprm.edu/eacuna/diabetes.dat"
#url="c://PW-PR/diabetes.dat"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataset = pd.read_table(url, names=names,header=None)
dataset.head()
dataset=np.array(dataset)
# Separate train and test data
X = dataset[:, 0:8]
y = dataset[:, 8]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
print(X_train.shape)
print(X_test.shape)
# +
# Creating the MLP
## Create our model
model = Sequential()
# 1st layer: input_dim=8, 10 nnodes, RELU
model.add(Dense(10, input_dim=8, kernel_initializer='uniform', activation='relu'))
# 2nd layer: 10 nodes, RELU
model.add(Dense(10, kernel_initializer='uniform', activation='relu'))
# output layer: dim=1, activation sigmoid
model.add(Dense(1, kernel_initializer='uniform', activation='sigmoid' ))
model.summary()
# -
# Compile the model
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
# Train the Perceptron
#history=model.fit(X_train, y_train, epochs=225, batch_size=20, verbose=1, validation_split=0.2)
history = model.fit(X_train,
y_train,
validation_data=(X_test, y_test),
epochs=225,
batch_size=20,
verbose=1)
# Model accuracy
import matplotlib.pyplot as plt
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
# Model Loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Model Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'])
plt.show()
| notebooks/MLPdiabeteskeras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Composition of Functions
# This is a post that I have been excited to write for some time now. I realize that if you are reading this blog you most likely already have a good handle on what a **function** is, both in the contexts of mathematics and computer science. However, I recently saw just how shallow my own understanding was during my quest to understand the history of the **normal distribution**.
#
# For those unfamiliar, I highly recommend going through my post on the subject (in the mathematics/statistics section), but for the sake of making this post 100% stand-alone, I will provide a brief background; it is essential in setting the stage for the problem that we are trying to solve. Please keep in mind that the purpose of this notebook is _not_ to discuss statistics and probability distributions; these curves are simply being used as a lens through which we can think about functions, function compositions, and how functions are discovered.
#
# ### 1.1 Background of Normal Distribution
# The Normal Distribution, also known as the **Gaussian Distribution**, has an incredibly deep history and an even greater number of domains where it is applied; we will not talk about them here, however. For that, I recommend looking through my other notebooks, digging into the **Central Limit Theorem**, **sampling**, **Gaussian Mixture Models**, distributions in the social sciences, **hypothesis testing**, and so on.
#
# The purpose of this post is to uncover what bothered me while learning about so many of the above topics:
#
# > Where did the equation that represents the Gaussian Distribution come from?
#
# If you are unfamiliar with the normal distribution, here are a few key points:
# * It is a **continuous** probability distribution (a continuous function).
# * It is often used to describe a **random variable** whose distribution is not known, but is thought to arise from a Gaussian data-generating process
# * It plays a large role in **statistical inference** based on its use in the central limit theorem.
#
# Mathematically, the Normal Distribution is defined as follows:
#
# $$f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi \sigma^2}} exp(-\frac{(x-\mu)^2}{2\sigma^2})$$
#
# Where $\mu$ is the **mean**/**expectation** of the distribution, $\sigma$ is the **standard deviation**, and $\sigma^2$ is the **variance**. If you are rusty on your understanding of the prior term's definitions, I recommend reviewing my previous post. It is worth noting that the normal distribution is parameterized by $\mu$ and $\sigma^2$, and it can be stated verbally as:
#
# > It is a curve representing the probability distribution of the random variable $x$ given $\mu$ and $\sigma^2$.
#
#
# There is one point of confusion that isn't particularly important for our purposes, but that I will cover for clarity:
# * The normal distribution is a **probability density function**. What this means is that we cannot simply plug in a value, $x$, and evaluate the probability of observing that particular value. This is because a continuous random variable can take on an _infinite_ number of values, and the probability of observing any particular one is zero. Instead, the normal distribution is evaluated at each $x$, and the curve that is produced (seen below) can be used to determine the probability that $x$ will fall in certain _intervals_.
#
#
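# Because only intervals carry probability, the density is used through its integral, the cumulative distribution function. As a quick numerical check, here is a standard-library sketch; `scipy.stats.norm.cdf` gives the same numbers:

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # CDF of the normal distribution, expressed via the error function
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Probability that a standard normal variable lands within one
# standard deviation of the mean, i.e. in the interval [-1, 1]:
p = normal_cdf(1) - normal_cdf(-1)
print(round(p, 4))  # 0.6827, the familiar "68%" of the empirical rule
```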
# With that said, visually it looks like:
# +
import numpy as np
from scipy.stats import bernoulli, binom, norm
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
sns.set(style="white", palette="husl")
sns.set_context("talk")
sns.set_style("ticks")
# +
fig = plt.figure(figsize=(10,6))
means = [0, 0, 0, -2]
variances = [0.3, 1, 3, 0.5]
x_axis = np.arange(-5, 5, 0.001)
legend = []
for mu, var in zip(means, variances):
plt.plot(x_axis, norm.pdf(x_axis,mu,var))
legend.append(f'$\mu$ = {mu}, $\sigma^2$ = {var}')
plt.xlabel('X')
plt.ylabel('Probability Density')
plt.title('Normal/Gaussian Distribution')
plt.legend(legend)
plt.show()
# -
# The graph above is incredibly important to keep in mind throughout this post! Take a moment to think about how on earth this was derived. Likewise, think about the shape (the bell shape): how would you take an input and transform it to have that shape?
#
# That was the problem that **[<NAME>](https://en.wikipedia.org/wiki/Abraham_de_Moivre)** faced in the mid-1700s. He, and many colleagues, had observed that certain random processes began to take on the **binomial distribution** when repeated many times (green discrete distribution below).
#
# They realized that in order to keep their calculations computationally feasible, they had to determine an **approximation** to this discrete distribution (curve in red below).
# +
# generate binomial, n=25
fig = plt.figure(figsize=(10,6))
n = 25
data_binom = binom.rvs(n=n,p=0.5,size=10000)
bins = [i for i in range(n+2)]
ax = plt.hist(
data_binom,
bins=bins,
density=True,
rwidth=1,
color='forestgreen',
alpha=0.6,
edgecolor="black"
)
plt.title('Binomial Distribution: p = 0.5')
plt.xlabel('Outcome (Number of Heads in 25 coin tosses)')
plt.ylabel('Probability')
xtick_loc = [i + 0.5 for i in range(n+1) if i % 4 == 0]
xtick_val = [i for i in range(n+1) if i % 4 == 0]
plt.xticks(xtick_loc, xtick_val)
x = np.arange(0, 25, 0.5)
p = norm.pdf(x, 12.5, data_binom.std())
plt.plot(x, p, 'k', linewidth=3, c='r')
plt.legend(['Normal Curve'])
plt.show()
# -
# Which brings us to our main goal in this post, that is to answer the following:
#
# > How would _you_ derive the equation of the red curve in the above plot?
#
# This is by no means an easy question to answer! It took some of the world's brightest minds many years to arrive at the normal distribution equation we saw earlier. However, I found that the most fundamental gap I needed to fill in order to answer the above question was the one relating to **functions**, particularly their **composition**.
# ## 2. Functions: Mapping _input_ to _response_
# Forget $x$'s and $y$'s for the moment, forget equations that have been rotely memorized. What is a function, and why would we even need one?
#
# Well, let's consider a real-world scenario where you are trying to buy a car. Let's say you know that the car was made in 2010 and that it has 100,000 miles on it. Intuitively, and without even realizing it, you create a function that maps those features of the car to what you feel it is worth. Maybe you think that car is worth 4,000 dollars. That means that in some way you decided in your head that there is a function, which we can call $Car \;Price \;Estimator$:
#
# $$Function = Car \; Price \; Estimator $$
#
# And it takes two inputs, the year it was made and the number of miles on it:
#
# $$Car \; Price \; Estimator (Year \; made, number \; of \; miles)$$
#
# And that yielded an output, of 4,000 dollars:
#
# $$Car \; Price \; Estimator (2010, 100000 \;miles) = 4000 \; dollars$$
#
# This can be seen visually as well:
#
# <br>
# <img src="https://drive.google.com/uc?id=1nzWv2dubxT8dLgOcF_fR07uUqiyM74X4" width="600">
# <br>
#
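# The estimator can be sketched as a tiny Python function. Everything inside it is hypothetical: the depreciation rule below is invented purely so that the example input maps to the example output.

```python
def car_price_estimator(year_made, miles):
    # Hypothetical pricing rule, chosen only to reproduce the example:
    # newer cars start out worth more, and mileage knocks the value down.
    base = 2000 + (year_made - 2000) * 500
    return max(base - miles * 0.03, 500)  # floor: a running car is always worth something

print(car_price_estimator(2010, 100_000))  # 4000.0 dollars
```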
# Think of how often that may happen-a situation where you take in information about the world around you, and you then say "hmmm there is definitely a relationship between these two things". Well, in mathematical terms that relationship is a function. If you are wondering, "why do we need to turn a normal everyday relationship into a mathematical equation?", well, the simplest answer is because it allows you to do very powerful things.
#
# As a motivator for why functions are so powerful I will leave you with this list of what they can and currently do:
# 1. We can create functions that define the relationship between certain images of tumor cells and whether or not the patient actually has cancer
#
# $$Function = Cancer \; Detector$$
#
# $$Cancer \; Detector(\text{Image of tumor cells}) \rightarrow \text{Patient has cancer, yes or no}$$
#
# 2. We can create functions that take in thousands of pixels associated with image and then determine what is in that image
#
# $$Function = Image \; Classifier$$
#
# $$Image \; Classifier(\text{Image of dog on a boat}) \rightarrow \text{Image contains: Dog, boat}$$
#
#
# 3. We can create functions that predict given a certain population and its characteristics, how quickly will a disease spread.
#
# $$Function = Disease \; Spread$$
#
# $$Disease \; Spread(\text{Population Characteristics}) \rightarrow \text{Disease spread rate}$$
#
# Okay, so with that in mind I want us to remember that a function can be thought of as a map of a relationship. From a more abstract point of view, a function can be considered a **process** that relates (or maps) an input to a single element, the output:
#
# <img src="https://drive.google.com/uc?id=1x-CFHKN2EpAvQfRCxrhaZmDeWUNpHSKY" width="500">
#
# Bringing in a slightly more formal terminology, we say that:
#
# > A function is a relationship that associates each element $x$ of a set $X$, the **domain** of the function, to a single element $y$ of another set $Y$ (possibly the same set), the **codomain** of the function. If the function is called $f$, the relation is denoted $y= f(x)$.
#
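# For a finite domain this definition can be written out literally, as nothing more than an association of each element of $X$ with a single output (a quick sketch):

```python
X = {-2, -1, 0, 1, 2}        # the domain
f = {x: x ** 2 for x in X}   # each x in X is associated with exactly one y
image = set(f.values())      # {0, 1, 4}, a subset of the codomain

print(f[-2], f[2])  # distinct inputs may share an output, but no input has two outputs
```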
# ### 2.1 Graphs of Functions
# Now that we have an intuitive understanding of what a function is actually representing, we can move on to graphing of a function. There is something incredibly subtle that takes place here, and it is of the utmost importance that we fully grasp it before moving on.
#
# Let's consider the function:
#
# $$f(x) = x^2$$
#
# This is a very simple function, which I am confident that anyone reading this post has come across before; it can be visualized below:
# +
from matplotlib import rc, animation
sns.set_style("whitegrid")
sns.set_context("talk", rc={"lines.linewidth": 2})
rc('axes', linewidth=2)
sns.set_palette("tab10")
def square(x):
return x ** 2
# +
fig, ax = plt.subplots(figsize=(8,6))
plt.axhline(y=0, color='grey')
plt.axvline(x=0, color='grey')
lower_bound = -5
upper_bound = 5
composition_upper_bound = 25
length = 2000
x = np.linspace(lower_bound, upper_bound, length)
y = square(x)
plt.plot(x, y, lw=3, c=sns.xkcd_rgb["red"])
plt.title(r'f(x) = $x^2$ = y', pad="10")
ax.set_xlabel('X', fontsize=20)
ax.set_ylabel('Y', fontsize=20)
plt.show()
# -
# Nothing groundbreaking going on just yet. Now, an equivalent representation (_isomorphic_, if you will) can be seen in the following table:
#
# |x|f(x)|
# |---|---|
# |-5|25|
# |-4|16|
# |-3|9|
# |-2|4|
# |-1|1|
# |0|0|
# |1|1|
# |2|4|
# |3|9|
# |4|16|
# |5|25|
#
# I don't imagine that anyone would disagree with me in saying that the table and graph are equivalent representations of the same thing; that is, the function $f(x) = x^2$ evaluated at every integer in the interval $[-5, 5]$. Now, as students we are taught to view the function, $f$, and its graphical representation as _equivalent_. And, for the most part, this is true. However, this viewpoint is slightly narrow and can lead to confusion, especially as we move into advanced mathematics or try coming up with original solutions on our own.
#
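# The table is not hand-made; it falls straight out of the function (a quick sketch):

```python
def f(x):
    return x ** 2

# Rebuild the table above by evaluating f over the integers in [-5, 5]
for x in range(-5, 6):
    print(f"{x:>3} | {f(x)}")
```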
# #### 2.1.1 Ordering of $x$ inputs
# To build a better intuition for how a function and it's graphical representation relate, let's start by rearranging the inputs $x$ in the table above to be as follows:
#
# |x|f(x)|
# |---|---|
# |-5|25|
# |-3|9|
# |-4|16|
# |2|4|
# |5|25|
# |-1|1|
# |3|9|
# |0|0|
# |1|1|
# |2|4|
# |4|16|
# |-2|4|
#
# Each individual row is entirely valid; take a moment to convince yourself of that. However, our $x$ values are no longer _ordered_. This means that if we were to go down our table row by row, plot each point, and then connect it to the point in the row beneath by a line, we would end up with:
# +
fig, ax = plt.subplots(figsize=(8,6))
plt.axhline(y=0, color='grey', zorder=1)
plt.axvline(x=0, color='grey', zorder=1)
lower_bound = -5
upper_bound = 5
composition_upper_bound = 25
length = 2000
x = np.array([-5,-3,-4,2,5,-1,3,0,1,2,4,-2])
y = square(x)
plt.scatter(x, y, c=sns.xkcd_rgb["dark pink"], zorder=3)
plt.plot(x, y, lw=3, c=sns.xkcd_rgb["red"], zorder=2)
plt.title(r'Unordered $x$; Plot point, connect via line, repeat', pad="10")
ax.set_xlabel('X', fontsize=20)
ax.set_ylabel('Y', fontsize=20)
plt.show()
# -
# Clearly this is not a valid function as presented above! Yet, this _does not_ mean that the table above is invalid! A function _does not need_ to take in an interval of ordered $x$ inputs; a function can take in anything that is part of its domain (in this case all real numbers).
#
# The reason for the mix-up was in the methodology for creating the final curve. I chose to iterate down the table, row by row, plotting the point in my current row, then plotting the point in the next row, and connecting them by a line immediately. This was repeated for the whole table. In other words I did:
#
# $$(-5, 25) \rightarrow (-3, 9) \rightarrow (-4, 16) \rightarrow ...$$
#
# You can see that by plotting a point and then immediately using a line to connect to the next point we can run into issues.
#
# > This is because we are introducing a **time** component without even meaning to! It is a side effect of human nature; we introduce this time component because given pen and paper that is how _we would draw the curve from left to right_.
#
# However, our function has no concept of time (its only parameter is $x$). A more appropriate way to plot our function would be to plot all points at once, and _then_ connect from left to right with a line of best fit:
# +
fig, ax = plt.subplots(figsize=(8,6))
plt.axhline(y=0, color='grey', zorder=1)
plt.axvline(x=0, color='grey', zorder=1)
lower_bound = -5
upper_bound = 5
composition_upper_bound = 25
length = 2000
x = np.array([-5,-3,-4,2,5,-1,3,0,1,2,4,-2])
y = square(x)
plt.scatter(x, y, c=sns.xkcd_rgb["dark pink"], zorder=3)
x = np.linspace(lower_bound, upper_bound, length)
y = square(x)
plt.plot(x, y, lw=3, c=sns.xkcd_rgb["red"])
plt.title(r'Unordered $x$; Plot all at once, then connect via line', pad="10")
ax.set_xlabel('X', fontsize=20)
ax.set_ylabel('Y', fontsize=20)
plt.show()
# -
# We see above that even though our $x$ inputs were not ordered (we plotted based on the ordering of the second table), we have the curve that we would expect. That is because this time all points were plotted first, and _then_ the line of best fit was drawn.
#
# #### 2.1.2 Inputs to the graphical representation of $f$
# Now that we have dug into the ordering of $x$ inputs when dealing with a graphical representation of a function, let's pick things up a notch. I pose the following question: when graphing the function $f(x)$, how many inputs does $f$ take?
#
# You may sit back and laugh when asked; surely if our function $f$ takes in a single input $x$, then the graphical representation of $f$ must only take in a single input, $x$, as well!
#
# Not so fast! While our function $f$ only takes a single input $x$, we have to keep in mind that $f$ only does one thing: it maps an input to an output:
#
# $$f(input) \rightarrow output$$
#
# $$f(-4) \rightarrow 16$$
#
# $$f(4) \rightarrow 16$$
#
# Yet, inherently a graphical representation of a function deal with _two things_: an input **value** and an input **location**. Let me explain via an example. Consider the case of our squaring function, you may initially think that the input to output mapping would look like:
# +
fig, ax = plt.subplots(figsize=(8,6))
plt.axhline(y=0, color=sns.xkcd_rgb["soft green"])
plt.axvline(x=0, color='grey')
lower_bound = -5
upper_bound = 5
composition_upper_bound = 25
length = 2000
x = np.linspace(lower_bound, upper_bound, length)
y = square(x)
plt.plot(x, y, lw=3, c=sns.xkcd_rgb["red"])
marker_squared, = ax.plot(-4, 16, 'or', zorder=5)
marker_x, = ax.plot(-4, 0, 'og', zorder=5)
func_arrow_square = ax.annotate(
'',
xy=(-4, square(-4)),
xytext=(-4, 0),
arrowprops=dict(facecolor='black', shrink=0.05),
)
plt.title(r'Input: x-axis, Output: f(x) = $x^2$', pad="10")
ax.legend(
(marker_x, marker_squared),
['Input to function', 'Output of function'],
loc='center left',
bbox_to_anchor=(1, 0.5)
)
ax.set_xlabel('X', fontsize=20)
ax.set_ylabel('Y', fontsize=20)
plt.show()
# -
# The above plot is correct in that it takes an input $x=-4$, evaluates the function $f$, and plots the result. So, our coordinates are mapped as such:
#
# $$(-4, 0) \rightarrow (-4, 16)$$
#
# Notice that our $x$ coordinate does not change. That is a fundamental concept of graphical representations of functions. When you evaluate a particular input from the domain, $x$, you then graph the output, $y$ at the same $x$ coordinate. This is seen clearly by the black arrow representing our squaring function, $f$; notice that it is **perpendicular** to the $x$ axis. Realize that this is _not_ a fundamental property of the function $f$; rather it is used specifically by the graphical representation. To really highlight this point:
#
# **Stand Alone Function**<br>
#
# $$f(x) \rightarrow y$$
#
# $$f(-4) \rightarrow 16$$
#
# **Graphical Representation of Function**<br>
#
# $$f(x_{location}, x_{value}) \rightarrow (x_{location}, y)$$
#
# $$f((-4, 0)) \rightarrow (-4,16)$$
#
# In the graphical representation of the function, $f$ now takes in a **point**, $(-4, 0)$, instead of simply a standalone value, $-4$. This is often a new way of viewing functions for most people (myself included), so I encourage you to take a moment to let this subtle change sink in. Once you have, you may realize what was wrong with the plot above.
#
# Based on the idea that in reality the graphical representation of a function must take in _two_ inputs, $x_{location}$ and $x_{value}$, you can see what is wrong with our green input point. Its $x_{location}$ is correct; it is equal to -4. However, its $x_{value}$ is incredibly ambiguous! You most likely determine in your head that the $x_{value}$ must also be -4, but based on the visual representation we have provided, it is actually 0! This is often the case when we take our $x$ axis and treat it both as the location and the value of our input.
#
# Why is this so important? Well, it creates an incredibly shaky foundation for us to build on top of if we ever want to be able to derive our own unique solutions! There is no way to fully intuit function composition and derive the normal distribution if we are working inside of this ambiguous framework. The solution is as follows:
#
# > We need to ensure that when graphing functions our input curve/point has an unambiguous $x_{location}$ and $x_{value}$.
#
# This can be done by no longer using the $x$ axis as both the location and input to our function, but instead use the line $y=x$! Visually this will become more clear:
# +
fig, ax = plt.subplots(figsize=(8,6))
plt.axhline(y=0, color='grey')
plt.axvline(x=0, color='grey')
lower_bound = -5
upper_bound = 5
composition_upper_bound = 25
length = 2000
x_loc = np.linspace(lower_bound, upper_bound, length)
x_val = x_loc
y = square(x_val)
plt.plot(x_loc, x_val, lw=3, c=sns.xkcd_rgb["soft green"])
plt.plot(x_loc, y, lw=3, c=sns.xkcd_rgb["red"])
marker_squared, = ax.plot(-4, 16, 'or', zorder=5)
marker_x, = ax.plot(-4, -4, 'og', zorder=5)
func_arrow_square = ax.annotate(
'',
xy=(-4, square(-4)),
xytext=(-4, -4),
arrowprops=dict(facecolor='black', shrink=0.05),
)
plt.title(r'Input: Line y=x, Output: f(x) = $x^2$')
# Put a legend to the right of the current axis
ax.legend(
(marker_x, marker_squared),
['Input to function', 'Output of function'],
loc='center left',
bbox_to_anchor=(1, 0.5)
)
ax.set_xlabel('X', fontsize=20)
ax.set_ylabel('Y', fontsize=20)
plt.show()
# -
# Excellent, our input now is incredibly unambiguous! Our function $f$, when graphed, takes in an $x_{location}$ and $x_{value}$, in this case: $(-4, -4)$:
#
# $$f((-4, -4)) \rightarrow (-4,16)$$
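# The mapping above can be sketched directly in code. This is a minimal illustration (the helper names are my own, not from the post) of how the graphed function keeps the $x_{location}$ fixed while $f$ acts only on the $x_{value}$:

```python
def square(value):
    return value ** 2

def graph(f, point):
    """Graphical representation of f: keep x_location, apply f to x_value."""
    x_location, x_value = point
    return (x_location, f(x_value))

print(graph(square, (-4, -4)))  # -> (-4, 16)
print(graph(square, (3, 3)))    # -> (3, 9)
```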
#
# This can be done for every single point along the line $y=x$ (only three shown below):
# +
fig, ax = plt.subplots(figsize=(8,6))
plt.axhline(y=0, color='grey')
plt.axvline(x=0, color='grey')
lower_bound = -5
upper_bound = 5
composition_upper_bound = 25
length = 2000
x_loc = np.linspace(lower_bound, upper_bound, length)
x_val = x_loc
y = square(x_val)
plt.plot(x_loc, x_val, lw=3, c=sns.xkcd_rgb["soft green"])
plt.plot(x_loc, y, lw=3, c=sns.xkcd_rgb["red"])
marker_squared_1, = ax.plot(-4, 16, 'or', zorder=5)
marker_x_1, = ax.plot(-4, -4, 'og', zorder=5)
marker_squared_2, = ax.plot(-2, 4, 'or', zorder=5)
marker_x_2, = ax.plot(-2, -2, 'og', zorder=5)
marker_squared_3, = ax.plot(3, 9, 'or', zorder=5)
marker_x_3, = ax.plot(3, 3, 'og', zorder=5)
func_arrow_square_1 = ax.annotate(
'',
xy=(-4, square(-4)),
xytext=(-4, -4),
arrowprops=dict(facecolor='black', shrink=0.05),
)
func_arrow_square_2 = ax.annotate(
'',
xy=(-2, square(-2)),
xytext=(-2, -2),
arrowprops=dict(facecolor='black', shrink=0.05),
)
func_arrow_square_3 = ax.annotate(
'',
xy=(3, square(3)),
xytext=(3, 3),
arrowprops=dict(facecolor='black', shrink=0.05),
)
plt.title(r'Input: Line y=x, Output: f(x) = $x^2$')
ax.legend(
(marker_x_1, marker_squared_1),
['Input to function', 'Output of function'],
loc='center left',
bbox_to_anchor=(1, 0.5)
)
ax.set_xlabel('X', fontsize=20)
ax.set_ylabel('Y', fontsize=20)
plt.show()
# -
# I want to make very clear why this generally does not cause an issue: when dealing with just one function operating on its own, we can utilize the $x_{location}$ _as the_ $x_{value}$! We have been doing this since middle school, when functions and their corresponding graphs were introduced.
#
# You may be wondering why we have introduced a seemingly more complex paradigm and notation when our old method worked well enough to begin with. To answer that question, we will need to introduce **function compositions**.
#
# #### 2.1.3 Function Compositions
# Function composition is simply the operation of taking the _output_ of a function $f$ and using it as the _input_ to another function $g$, in order to produce a final _output_, which we can refer to as $h(x)$. This is written as:
#
# $$h(x) = g(f(x))$$
#
# It can be broken down as follows:
#
# $$f(x) = y$$
#
# $$h(x) = g(y)$$
#
# Notice this idea of passing the _output_ of the function $f$ as the _input_ to another function $g$. We can view this equivalently as a **mapping**:
#
# $$f: X \rightarrow Y$$
#
# $$g: Y \rightarrow Z$$
#
# Where $f$ and $g$ are said to be **composed** to yield a function that maps $x$ in $X$ to $g(f(x))$ in $Z$. This idea of function composition is why we spent so much time earlier distinguishing between a function and a graphical representation of a function.
#
# You may not initially realize it, but almost all functions other than the most elementary ones are actually compositions of smaller functions! For example, consider the following:
#
# $$h(x) = -x^2$$
#
# Now, written as I did above, it may be viewed as a single function, $h$. However, it is actually a composition of two separate functions:
#
# $$\text{Negate Function:} \; \; \; g(x) = -x$$
#
# $$\text{Square Function:} \; \; \; f(x) = x^2$$
#
# Where we can then write $h$ as the composition of $f$ and $g$:
#
# $$h(x) = g(f(x)) = -x^2$$
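# To make the decomposition concrete, here is a small sketch (the `compose` helper is my own, not from the post) that builds $h$ by composing the two functions in code:

```python
def square(x):
    return x ** 2

def negate(x):
    return -x

def compose(g, f):
    """Return the function x -> g(f(x))."""
    return lambda x: g(f(x))

h = compose(negate, square)
print(h(4))   # -> -16
print(h(-4))  # -> -16
```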
#
# This may seem elementary, but I promise you we are building to an incredibly powerful mental framework! Recall the purpose of this post:
#
# > Determine how you would derive an equation (red) that can approximate the binomial distribution (green).
# +
# generate binomial, n=25
fig = plt.figure(figsize=(10,6))
n = 25
data_binom = binom.rvs(n=n,p=0.5,size=10000)
bins = [i for i in range(n+2)]
ax = plt.hist(
data_binom,
bins=bins,
density=True,
rwidth=1,
color='forestgreen',
alpha=0.6,
edgecolor="black"
)
plt.title('Binomial Distribution: p = 0.5')
plt.xlabel('Outcome (Number of Heads in 25 coin tosses)')
plt.ylabel('Probability')
xtick_loc = [i + 0.5 for i in range(n+1) if i % 4 == 0]
xtick_val = [i for i in range(n+1) if i % 4 == 0]
plt.xticks(xtick_loc, xtick_val)
x = np.arange(0, 25, 0.5)
p = norm.pdf(x, 12.5, data_binom.std())
plt.plot(x, p, 'k', linewidth=3, c='r')
plt.legend(['Normal Curve'])
plt.show()
# -
# What if I told you that figuring out how to approximate that red line was as easy as composing several elementary functions? You may think to yourself that that cannot be true, given the nature of the formula for the normal distribution. However, I assure you _it is true_.
#
# In order to make this as clear as possible, I am going to outline the exact composition of functions that will yield the bell shape of the normal distribution approximation above, and then walk through each individual step in detail. However, having the end result in mind will help us stay on track as we work through the mechanics.
#
# **General Function Composition of the Normal Distribution**<br>
# The function composition that we will use in order to create the general shape/approximation of the normal distribution is shown below:
#
# <img src="https://drive.google.com/uc?id=1FJdaM7gpzUIeP20DJQxZ5yvRyEaeP4ap">
#
# In English, we are going to:
#
# 1. Take an input and square it to get an output.
# 2. Take that output and negate it, yielding a new output.
# 3. Take the new output and exponentiate it, yielding a final output.
#
# So, our functions can be defined as:
#
# $$f(x) = x^2$$
#
# $$g(x) = -x$$
#
# $$h(x) = e^x$$
#
# To make things more intuitive, I am going to replace $x$ with a more general term: $input$. This is because we often associate $x$ with the $x$ axis, and in function compositions this will be a hindrance:
#
# $$f(input) = {input}^2$$
#
# $$g(input) = -(input)$$
#
# $$h(input) = e^{input}$$
#
# The equations above have the same meaning as before, only now it should be even more apparent that each one operates on some _input_, returning an output. If we evaluate our entire function composition we arrive at:
#
# $$let \; input = x$$
#
# $$h \Big( g \big( f(x)\big) \Big) = h \Big( g \big( x^2 \big) \Big) = h \Big( -x^2 \Big) = e^{-x^2}$$
#
# I am going to call the evaluated function composition above $n$, for normal:
#
# $$n(x) = e^{-x^2}$$
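# Before plotting, we can sanity-check $n$ by building it as a literal composition of the three elementary functions (a NumPy sketch; the helper names match the functions defined above, but the code itself is mine):

```python
import numpy as np

def square(x):
    return x ** 2

def negate(x):
    return -x

def exponentiate(x):
    return np.exp(x)

def n(x):
    # h(g(f(x))) = e^{-x^2}
    return exponentiate(negate(square(x)))

print(n(0))                          # -> 1.0, the peak of the bell
print(np.isclose(n(2), np.exp(-4)))  # -> True
```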
#
# Now, if we plot $n(x)$ for $x$ in the range $[-3, 3]$ we end up with:
# +
fig, ax = plt.subplots(figsize=(8,6))
plt.axhline(y=0, color='grey')
plt.axvline(x=0, color='grey')
lower_bound = -3
upper_bound = 3
composition_upper_bound = 25
length = 2000
def func_comp(x):
return np.exp(-(x**2))
x = np.linspace(lower_bound, upper_bound, length)
y = func_comp(x)
plt.plot(x, y, lw=3, c=sns.xkcd_rgb["red"])
plt.title(r'n(x) = $e^{-x^2}$', pad=10)
ax.set_xlabel('X', fontsize=20)
ax.set_ylabel(r'$n(x)$', fontsize=20)
plt.show()
# -
# And just like that we end up with our desired shape! Of course, this is not the exact normal distribution; there are cosmetic updates that must be made in order to ensure it meets the constraints of a valid probability distribution (more on that can be found in my post on the history of the normal distribution). However, the significance of what we just accomplished cannot be overlooked! By composing just three elementary functions, we were able to end up with the shape of the normal distribution. Recall how intimidating that function looked at the outset of this post!
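# As an aside on those cosmetic updates (a sketch of mine, not part of the original derivation): the area under $n(x) = e^{-x^2}$ is $\sqrt{\pi}$, so dividing by that constant is one of the adjustments needed to turn the curve into a valid probability density. A quick numerical check:

```python
import numpy as np

# Fine grid over [-10, 10]; the tails beyond that are ~e^{-100}, negligible.
x = np.linspace(-10, 10, 200_001)
dx = x[1] - x[0]
n_x = np.exp(-x ** 2)

area = n_x.sum() * dx
print(area, np.sqrt(np.pi))   # both ~ 1.77245...

# Rescaled so the total area is 1, as a density requires.
density = n_x / np.sqrt(np.pi)
print(density.sum() * dx)     # ~ 1.0
```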
#
# Now that I have shown the end result and a very high level view of how we get there, simply composing three basic functions, we will step through the details, using the concept of $x_{location}$ and $x_{value}$ that we discussed earlier.
#
# #### 2.1.4 Function Compositions: Finding the shape of the Normal Distribution
# I have already told you that the first thing we are going to do in order to generate our approximation of the normal distribution is square our input $x$. This brings us back to what we spoke about earlier: the graphical representation of a function technically takes _two inputs_, a _location_ and a _value_.
#
# We have already gone over why this is the case, and specifically so in relation to the function $f(x) = x^2$. This can be visualized below:
# +
lower_bound = -5
upper_bound = 5
length = 2000
# Turn off interactive plotting
plt.ioff()
# Create figure and axis object
fig = plt.figure(figsize=(10, 6), dpi=150)
ax1 = plt.subplot(111)
# Add x and y axis lines
ax1.axhline(y=0, color='grey')
ax1.axvline(x=0, color='grey')
plt.tight_layout()
# Create iterable input axes, as well as set color of response curve
ax_input, = ax1.plot(0, 0, lw=3, c=sns.xkcd_rgb["red"])
# Create x input space, plot line x = y
x = np.linspace(lower_bound, upper_bound, length)
y = x
ax1.plot(x, y, sns.xkcd_rgb["soft green"], linewidth=3)
# Create markers
marker1, = ax1.plot(lower_bound, 400, 'og')
marker2, = ax1.plot(lower_bound, 400, 'or')
# Create arrow representing function
func_arrow = ax1.annotate(
'',
xy=(lower_bound, square(lower_bound)),
xytext=(lower_bound, lower_bound),
arrowprops=dict(facecolor='black', shrink=0.05),
)
# Create label for arrow, representing function
offset = 2
func_label = ax1.annotate(
'Square',
xy=(lower_bound, square(lower_bound)/2),
xytext=(lower_bound + offset, (square(lower_bound) - lower_bound)/2 + offset),
arrowprops=dict(
color='grey',
arrowstyle="-",
connectionstyle="angle3,angleA=0,angleB=-90"
),
bbox=dict(boxstyle="square", alpha=0.1, ec="gray"),
size=20,
)
# Square Animation function
def animate_square(current):
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
ax_input.set_data(x, x_squared)
marker1.set_data(current, current)
marker2.set_data(current, square(current))
func_arrow.set_position((current + 0.000001, current))
func_arrow.xy = (current, x_squared[-1])
func_label.set_position((current + offset + 0.000001, (x_squared[-1] - current)/2 + offset))
func_label.xy = (current, (x_squared[-1] - current)/2 + current)
return ax_input,
# Square init function
def init_square():
ax1.set_xlim(-5, 5)
ax1.set_ylim(-25, 25)
return ax_input,
""" Define steps and create animation object """
step = 0.025
steps = np.arange(lower_bound, upper_bound, step)
# Shrink current axis by 20%
box = ax1.get_position()
ax1.set_position([box.x0, box.y0, box.width * 0.65, box.height])
# Put a legend to the right of the current axis
ax1.legend(
(marker1, marker2),
['Input to function', 'Output of function'],
loc='center left',
bbox_to_anchor=(1, 0.5)
)
# For rendering html video in cell
# html_video = HTML(
# animation.FuncAnimation(
# fig,
# animate_square,
# steps,
# init_func=init_square,
# interval=50,
# blit=True
# ).to_html5_video()
# )
# display(html_video)
gif_video = animation.FuncAnimation(
fig,
animate_square,
steps,
init_func=init_square,
interval=50,
blit=True
)
gif_video.save('x_squared.gif', writer='imagemagick')
plt.close()
# -
# <img src="https://drive.google.com/uc?id=1kOtx1gzNPu6n2k__cULQXqVztmg5cfGb" width="700">
#
# Take note of the following in the animation above:
# * We have a green point that is being passed into our function, $f$.
# * This green point has an $x_{location}$ and $x_{value}$
# * The black arrow represents $f$, mapping the $x_{value}$ of the green point to the corresponding output value, $f(x)$
# * The $x_{location}$ of a green input point and a corresponding red output point are always identical, hence the black arrow always being perpendicular to the $x$ axis
#
# Now, this next part may very well be the most important piece of the entire post. What happens when we want to compose $f$, our squaring function, and $g$, our negation function? Well, as we discussed earlier, we will pass the _output of our squaring function_, the red curve above, into $g$. This will be mapped into a new output. Visually this looks like:
# +
lower_bound = -5
upper_bound = 5
length = 2000
# Turn off interactive plotting
plt.ioff()
# Create figure and axis object
fig = plt.figure(figsize=(10, 6), dpi=150)
ax1 = plt.subplot(111)
# Add x and y axis lines
ax1.axhline(y=0, color='grey')
ax1.axvline(x=0, color='grey')
plt.tight_layout()
# Create iterable input axes, as well as set color of response curve
ax_input, = ax1.plot(0, 0, lw=3, c=sns.xkcd_rgb["pinkish"])
# Create x input space, plot line y = x^2
x = np.linspace(lower_bound, upper_bound, length)
y = square(x)
ax1.plot(x, y, sns.xkcd_rgb["soft green"], linewidth=3)
# Create markers
marker1, = ax1.plot(lower_bound, 400, 'og')
marker2, = ax1.plot(lower_bound, 400, 'or')
# Create arrow representing function
func_arrow = ax1.annotate(
'',
xy=(lower_bound, negate(square(lower_bound))),
xytext=(lower_bound, square(lower_bound)),
arrowprops=dict(facecolor='black', shrink=0.05),
)
# Create label for arrow, representing function
offset = 1
shift = 5
func_label = ax1.annotate(
'Negate',
xy=(lower_bound, square(lower_bound)),
xytext=(lower_bound + offset, (square(lower_bound) - lower_bound)/2 + offset),
arrowprops=dict(
color='grey',
arrowstyle="-",
connectionstyle="angle3,angleA=0,angleB=-90"
),
bbox=dict(boxstyle="square", alpha=0.1, ec="gray"),
size=20,
)
# Negate Animation function
def animate_negate(current):
# Gathering x axis metrics
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
x_squared_negated = negate(x_squared)
# Set output curve, marker1, marker2
ax_input.set_data(x, x_squared_negated)
marker1.set_data(current, x_squared[-1])
marker2.set_data(current, x_squared_negated[-1])
# Set function arrow head and tail position
func_arrow.set_position((current + 0.000001, x_squared[-1])) # Arrow tail
func_arrow.xy = (current, x_squared_negated[-1]) # Arrow head
# Label location, followed by label arrow head
func_label.set_position((current + offset + 0.000001, (x_squared_negated[-1] - current)/2 + offset - shift))
func_label.xy = (current, (x_squared[-1] - current)/2 + current)
return ax_input,
# Negate init function
def init_negate():
ax1.set_xlim(-5, 5)
ax1.set_ylim(-25, 25)
return ax_input,
""" Define steps and create animation object """
step = 0.025
steps = np.arange(lower_bound, upper_bound, step)
# Shrink current axis by 20% in order to fit legend
box = ax1.get_position()
ax1.set_position([box.x0, box.y0, box.width * 0.65, box.height])
# Put a legend to the right of the current axis
ax1.legend(
(marker1, marker2),
['Input to function', 'Output of function'],
loc='center left',
bbox_to_anchor=(1, 0.5)
)
# For rendering html video in cell
# html_video = HTML(
# animation.FuncAnimation(
# fig,
# animate_negate,
# steps,
# init_func=init_negate,
# interval=50,
# blit=True
# ).to_html5_video()
# )
# display(html_video)
gif_video = animation.FuncAnimation(
fig,
animate_negate,
steps,
init_func=init_negate,
interval=50,
blit=True
)
gif_video.save('x_squared_negated.gif', writer='imagemagick')
plt.close()
# -
# <img src="https://drive.google.com/uc?id=1vL8ifHeVZCKfrFCc8FpVcbWiQI5YtBNK" width="700">
# And just like that, all of our earlier work relating to the details behind graphs of functions (specifically, that inputs have _locations_ and _values_) has paid off! This is exactly the scenario that requires us to expand our mental framework around what it means to graph a function. Let's run through what just happened and home in on why our earlier work was necessary:
# * In general when graphing a function, we simply take $x$ (being the _location_ along the $x$ axis), and pass it in as an input.
# * However, we discussed earlier how in reality we are passing in a _location_ and a _value_, which just so happen to be equal for a standalone function.
# * When dealing with function compositions (above), the ideas of _location_ and _value_ are _paramount_.
# * Our input to the negation function (the black arrow) is the _output_ of the square function (green point)!
# * Now, our $x_{location}$ and $x_{value}$ are _not equivalent_.
#
# Again, this is why it is so crucial that our input is made up of an $x_{location}$ and $x_{value}$. For example, if we were not dealing with a graph and we had an input $x = 4$ passed into our negate function $g$:
#
# $$g(4) = -4$$
#
# However, here we need to be aware that the $x_{location}$ and $x_{value}$ are _no longer equivalent_! For instance, take a look at the squared input (green curve) above; at what $x_{location}$ does $x_{value} = 4$? At $x_{location} = 2$!
#
# Because we are no longer just graphing a standard input $x$ where the location and value are equivalent (as is the case where $y=x$), our function needs to be able to handle both the $x_{location}$ and $x_{value}$.
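# The bookkeeping above can be sketched numerically (a toy illustration of mine, not code from the post): represent the squared input curve as a list of (location, value) pairs, and let negation act only on the values:

```python
import numpy as np

x_location = np.linspace(-5, 5, 11)  # ..., -2, -1, 0, 1, 2, ...
squared_curve = [(loc, loc ** 2) for loc in x_location]

# Negate acts on the *value* of each point; the location never changes.
negated_curve = [(loc, -val) for loc, val in squared_curve]

# At x_location = 2 the squared input curve has x_value = 4 ...
print(dict(squared_curve)[2.0])   # -> 4.0
# ... so the negate step maps the point (2, 4) to (2, -4).
print(dict(negated_curve)[2.0])   # -> -4.0
```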
#
# Now, let's add $h$ to our composition. So, $h$ will be passed the red curve above, $-x^2$, as input:
# +
lower_bound = -2
upper_bound = 2
length = 2000
# Turn off interactive plotting
plt.ioff()
# Create figure and axis object
fig = plt.figure(figsize=(10, 6), dpi=150)
ax1 = plt.subplot(111)
# Set x and y limits
ax1.set_xlim((-2, 2))
ax1.set_ylim((-5, 5))
# Add x and y axis lines
ax1.axhline(y=0, color='grey')
ax1.axvline(x=0, color='grey')
plt.tight_layout()
# Create iterable input axes, as well as set color of response curve
ax_input, = ax1.plot(0, 0, lw=3, c=sns.xkcd_rgb["red"])
# Create x input space, plot line y = -x^2
x = np.linspace(lower_bound, upper_bound, length)
y = negate(square(x))
ax1.plot(x, y, sns.xkcd_rgb["soft green"], linewidth=3)
# Create markers
marker1, = ax1.plot(lower_bound, 400, 'og')
marker2, = ax1.plot(lower_bound, 400, 'or')
# Create arrow representing function
func_arrow = ax1.annotate(
'',
xy=(lower_bound, exponentiate(negate(square(lower_bound)))),
xytext=(lower_bound, negate(square(lower_bound))),
arrowprops=dict(facecolor='black', shrink=0.05),
)
# Create label for arrow, representing function
offset_horizontal = 0.5
offset_vertical = -2
func_label = ax1.annotate(
'Exponentiate',
xy=(lower_bound, square(lower_bound)),
    xytext=(lower_bound + offset_horizontal, (square(lower_bound) - lower_bound)/2 + offset_horizontal),
arrowprops=dict(
color='grey',
arrowstyle="-",
connectionstyle="angle3,angleA=-90,angleB=0"
),
bbox=dict(boxstyle="square", alpha=0.1, ec="gray"),
size=20,
)
# Exponentiate Animation function
def animate_exponentiate(current):
# Gathering x axis metrics
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
x_squared_negated = negate(x_squared)
x_squared_negated_exponentiated = exponentiate(x_squared_negated)
# Set output curve, marker1, marker2
ax_input.set_data(x, x_squared_negated_exponentiated)
marker1.set_data(current, x_squared_negated[-1])
marker2.set_data(current, x_squared_negated_exponentiated[-1])
# Set function arrow head and tail position
func_arrow.set_position((current + 0.000001, x_squared_negated[-1])) # Arrow tail
func_arrow.xy = (current, x_squared_negated_exponentiated[-1]) # Arrow head
# Label location, followed by label arrow head
label_arrow_pos = ((x_squared_negated_exponentiated[-1] - x_squared_negated[-1]) / 2 ) + x_squared_negated[-1]
func_label.set_position((current + offset_horizontal, label_arrow_pos + offset_vertical))
func_label.xy = (current, label_arrow_pos)
return ax_input,
# Exponentiate init function
def init_exponentiate():
return ax_input,
""" Define steps and create animation object """
step = 0.0125
steps = np.arange(lower_bound, upper_bound, step)
# Shrink current axis by 20% in order to fit legend
box = ax1.get_position()
ax1.set_position([box.x0, box.y0, box.width * 0.65, box.height])
# Put a legend to the right of the current axis
ax1.legend(
(marker1, marker2),
['Input to function', 'Output of function'],
loc='center left',
bbox_to_anchor=(1, 0.5)
)
# For rendering html video in cell
# html_video = HTML(
# animation.FuncAnimation(
# fig,
# animate_exponentiate,
# steps,
# init_func=init_exponentiate,
# interval=50,
# blit=True
# ).to_html5_video()
# )
# display(html_video)
gif_video = animation.FuncAnimation(
fig,
animate_exponentiate,
steps,
init_func=init_exponentiate,
interval=50,
blit=True
)
gif_video.save('x_squared_negated_exponentiated.gif', writer='imagemagick')
plt.close()
# -
# <img src="https://drive.google.com/uc?id=1GvPjXHlRk1l-urBOS86NGpplyBtkwbhR" width="700">
#
# Again, we see that we are passing a two-dimensional point into our function (in green), and that it is being mapped to a two-dimensional point (red), our output. The only way that we can intuitively understand the graph of the function $n(x) = e^{-x^2}$ as a composition of the functions $f, g$ and $h$ is if we can follow passing curves (i.e., lists of two-dimensional points) as inputs to these functions.
#
# The ability to do that is an incredibly powerful skill. When we put everything together we can visualize our entire function composition as follows:
# +
# ZOOMED ANIMATION
lower_bound = -2
upper_bound = -1 * lower_bound
composition_upper_bound = upper_bound * 4 + upper_bound
length = 2000
# Turn off interactive plotting
plt.ioff()
# Create figure and axis object
fig = plt.figure(figsize=(10, 6), dpi=200)
ax1 = plt.subplot(111)
# Add x and y axis lines
ax1.axhline(y=0, color='grey')
ax1.axvline(x=0, color='grey')
plt.tight_layout()
# Create x input space, plot line x = y
x = np.linspace(lower_bound, upper_bound, length)
y = x
# Create iterable input axes, as well as set color of response curve
ax_x, = ax1.plot(x, y, lw=3, c=sns.xkcd_rgb["soft green"], zorder=1)
ax_squared, = ax1.plot(0, 0, lw=3, c=sns.xkcd_rgb["red"], zorder=2)
ax_negated, = ax1.plot(0, 0, lw=3, c=sns.xkcd_rgb["red"], zorder=3)
ax_exponentiated, = ax1.plot(0, 0, lw=3, c=sns.xkcd_rgb["red"], zorder=4)
# Create markers
marker_x, = ax1.plot(lower_bound, 400, 'og', zorder=5)
marker_squared, = ax1.plot(lower_bound, 400, 'or', zorder=5)
marker_negated, = ax1.plot(lower_bound, 400, 'or', zorder=5)
marker_exponentiated, = ax1.plot(lower_bound, 400, 'or', zorder=5)
offset = 0.5 # General offset
# ------------- Create arrow representing SQUARE function---------------
func_arrow_square = ax1.annotate(
'',
xy=(lower_bound, square(lower_bound)),
xytext=(lower_bound, lower_bound),
arrowprops=dict(facecolor='black', shrink=0.05),
)
# ------------- Create label for arrow, representing SQUARE function ----------------
offset_square = 0.5
epsilon = 0.000001
func_label_square = ax1.annotate(
'Square',
xy=(lower_bound, square(lower_bound)/2),
xytext=(lower_bound + offset_square, (square(lower_bound) - lower_bound)/2 + offset_square),
arrowprops=dict(
color='grey',
arrowstyle="-",
connectionstyle="angle3,angleA=0,angleB=-90"
),
bbox=dict(boxstyle="square", alpha=0.1, ec="gray"),
size=20,
)
# ------------- Create arrow representing NEGATE function---------------
negate_hide_coord = -10
func_arrow_negate = ax1.annotate(
'',
xy=(negate_hide_coord, negate_hide_coord),
xytext=(negate_hide_coord, negate_hide_coord),
arrowprops=dict(facecolor='black', shrink=0.05),
)
# ------------- Create label for arrow, representing NEGATE function ----------------
offset_negate = 1
shift = 1
func_label_negate = ax1.annotate(
'Negate',
xy=(negate_hide_coord, negate_hide_coord),
xytext=(negate_hide_coord+0.01, negate_hide_coord),
arrowprops=dict(
color='grey',
arrowstyle="-",
connectionstyle="angle3,angleA=0,angleB=-90"
),
bbox=dict(boxstyle="square", alpha=0.1, ec="gray"),
size=20,
)
# ------------- Create arrow representing EXPONENTIATE function---------------
exponentiate_hide_coord = -10
func_arrow_exponentiate = ax1.annotate(
'',
xy=(exponentiate_hide_coord, exponentiate_hide_coord),
xytext=(exponentiate_hide_coord, exponentiate_hide_coord),
arrowprops=dict(facecolor='black', shrink=0.05),
)
# ------------- Create label for arrow, representing EXPONENTIATE function ----------------
offset_horizontal = 0.5
offset_vertical = -2
func_label_exponentiate = ax1.annotate(
'Exponentiate',
xy=(exponentiate_hide_coord, exponentiate_hide_coord),
xytext=(exponentiate_hide_coord, exponentiate_hide_coord),
arrowprops=dict(
color='grey',
arrowstyle="-",
connectionstyle="angle3,angleA=-90,angleB=0"
),
bbox=dict(boxstyle="square", alpha=0.1, ec="gray"),
size=20,
)
function_calculation_label = ax1.annotate(
' ',
xy=(2, 2),
size=20,
)
# Composition animation function
def animate_composition(current):
if round(current, 5) < upper_bound:
# Gathering x axis metrics
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
# Set output curve, marker_x, marker_squared
ax_squared.set_data(x, x_squared)
marker_x.set_data(current, current)
marker_squared.set_data(current, x_squared[-1])
# Set function arrow head and tail position
func_arrow_square.set_position((current + epsilon, current))
func_arrow_square.xy = (current, x_squared[-1])
# Label location, followed by label arrow head
func_label_square.set_position((current + offset + epsilon, (x_squared[-1] - current)/2 + offset))
func_label_square.xy = (current, (x_squared[-1] - current)/2 + current)
        # Set function calculation label
function_calculation_label.set_text(r' ({})$^2$ = {}'.format(round(current, 1), round(x_squared[-1], 1)))
elif round(current, 5) == upper_bound:
# End of squaring, start of negating
func_arrow_square.remove()
marker_x.remove()
func_label_square.remove()
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
# Updating squared curve to be input to negate function (setting color to green)
marker_squared.set_color("green")
ax1.plot(x, y, lw=3, c=sns.xkcd_rgb["grey"])
ax1.plot(x, x_squared, c=sns.xkcd_rgb["soft green"], linewidth=3)
elif round(current, 5) > upper_bound and round(current, 5) < (upper_bound*3) :
current -= upper_bound*2
# Gathering x axis metrics
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
x_squared_negated = negate(x_squared)
# Set output curve, marker1, marker2
ax_negated.set_data(x, x_squared_negated)
marker_squared.set_data(current, x_squared[-1])
marker_negated.set_data(current, x_squared_negated[-1])
# Set function arrow head and tail position
func_arrow_negate.set_position((current + 0.000001, x_squared[-1])) # Arrow tail
func_arrow_negate.xy = (current, x_squared_negated[-1]) # Arrow head
# Label location, followed by label arrow head
func_label_negate.set_position((current + offset + 0.000001, (x_squared_negated[-1] - current)/2 + offset - shift))
func_label_negate.xy = (current, (x_squared[-1] - current)/2 + current)
        # Set function calculation label
function_calculation_label.set_text(' -({}) = {}'.format(round(x_squared[-1], 1), round(x_squared_negated[-1], 1)))
elif round(current, 5) == (upper_bound*3):
# End of negating, start of exponentiating
func_arrow_negate.remove()
func_label_negate.remove()
marker_squared.remove()
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
x_squared_negated = negate(x_squared)
# Updating negated curve to be input to negate function (setting color to green)
marker_negated.set_color("green")
ax1.plot(x, x_squared, lw=3, c=sns.xkcd_rgb["grey"])
ax1.plot(x, x_squared_negated, c=sns.xkcd_rgb["soft green"], linewidth=3, zorder=4)
elif round(current, 5) > (upper_bound*3) and round(current, 5) < (upper_bound*5):
current -= upper_bound*4
# Gathering x axis metrics
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
x_squared_negated = negate(x_squared)
x_squared_negated_exponentiated = exponentiate(x_squared_negated)
# Set output curve, marker1, marker2
ax_exponentiated.set_data(x, x_squared_negated_exponentiated)
marker_negated.set_data(current, x_squared_negated[-1])
marker_exponentiated.set_data(current, x_squared_negated_exponentiated[-1])
# Set function arrow head and tail position
func_arrow_exponentiate.set_position((current + 0.000001, x_squared_negated[-1])) # Arrow tail
func_arrow_exponentiate.xy = (current, x_squared_negated_exponentiated[-1]) # Arrow head
# Label location, followed by label arrow head
label_arrow_pos = ((x_squared_negated_exponentiated[-1] - x_squared_negated[-1]) / 2 ) + x_squared_negated[-1]
func_label_exponentiate.set_position((current + offset_horizontal, label_arrow_pos + offset_vertical))
func_label_exponentiate.xy = (current, label_arrow_pos)
        # Set function calculation label
function_calculation_label.set_text(' exp({}) = {}'.format(round(x_squared_negated[-1], 1), round(x_squared_negated_exponentiated[-1], 1)))
return ax_x,
# Composition init function
def init_composition():
ax1.set_xlim(lower_bound, upper_bound)
ax1.set_ylim(-4, 4)
return ax_x,
""" Define steps and create animation object """
step = 0.0125
# step = 0.05
steps = np.arange(lower_bound, composition_upper_bound, step)
# Shrink current axis by 20%
box = ax1.get_position()
ax1.set_position([box.x0, box.y0, box.width * 0.65, box.height])
# Put a legend to the right of the current axis
ax1.legend(
(marker_x, marker_squared),
['Input to function', 'Output of function'],
loc='center left',
bbox_to_anchor=(1, 0.5)
)
# For rendering html video in cell
gif_video = animation.FuncAnimation(
fig,
animate_composition,
steps,
init_func=init_composition,
interval=25,
blit=True
)
gif_video.save('test_2.gif', writer='imagemagick')
plt.close()
# -
# <img src="https://drive.google.com/uc?id=1udWyIAPAKUXS6ra7nA7VpWwnaYdTVD3t" width="700">
#
# What we can also do is overlay $n(x)$ (in pink below) in order to see how our original input is transformed in just three steps to match the bell shaped approximation to the normal distribution that we were looking for:
# +
# ZOOMED ANIMATION
lower_bound = -2
upper_bound = -1 * lower_bound
composition_upper_bound = upper_bound * 4 + upper_bound
length = 2000
# Turn off interactive plotting
plt.ioff()
# Create figure and axis object
fig = plt.figure(figsize=(10, 6), dpi=200)
ax1 = plt.subplot(111)
# Add x and y axis lines
ax1.axhline(y=0, color='grey')
ax1.axvline(x=0, color='grey')
plt.tight_layout()
# Create x input space, plot line x = y
x = np.linspace(lower_bound, upper_bound, length)
y = x
func_comp_y = func_comp(x)
# Create iterable input axes, as well as set color of response curve
ax_x, = ax1.plot(x, y, lw=3, c=sns.xkcd_rgb["soft green"], zorder=1)
ax_squared, = ax1.plot(0, 0, lw=3, c=sns.xkcd_rgb["red"], zorder=2)
ax_negated, = ax1.plot(0, 0, lw=3, c=sns.xkcd_rgb["red"], zorder=3)
ax_exponentiated, = ax1.plot(0, 0, lw=3, c=sns.xkcd_rgb["red"], zorder=4)
ax_func_comp, = ax1.plot(x, func_comp_y, lw=3, c=sns.xkcd_rgb["pink"], zorder=1)
# Create markers
marker_x, = ax1.plot(lower_bound, 400, 'og', zorder=5)
marker_squared, = ax1.plot(lower_bound, 400, 'or', zorder=5)
marker_negated, = ax1.plot(lower_bound, 400, 'or', zorder=5)
marker_exponentiated, = ax1.plot(lower_bound, 400, 'or', zorder=5)
offset = 0.5 # General offset
# ------------- Create arrow representing SQUARE function---------------
func_arrow_square = ax1.annotate(
'',
xy=(lower_bound, square(lower_bound)),
xytext=(lower_bound, lower_bound),
arrowprops=dict(facecolor='black', shrink=0.05),
)
# ------------- Create label for arrow, representing SQUARE function ----------------
offset_square = 0.5
epsilon = 0.000001
func_label_square = ax1.annotate(
'Square',
xy=(lower_bound, square(lower_bound)/2),
xytext=(lower_bound + offset_square, (square(lower_bound) - lower_bound)/2 + offset_square),
arrowprops=dict(
color='grey',
arrowstyle="-",
connectionstyle="angle3,angleA=0,angleB=-90"
),
bbox=dict(boxstyle="square", alpha=0.1, ec="gray"),
size=20,
)
# ------------- Create arrow representing NEGATE function---------------
negate_hide_coord = -10
func_arrow_negate = ax1.annotate(
'',
xy=(negate_hide_coord, negate_hide_coord),
xytext=(negate_hide_coord, negate_hide_coord),
arrowprops=dict(facecolor='black', shrink=0.05),
)
# ------------- Create label for arrow, representing NEGATE function ----------------
offset_negate = 1
shift = 1
func_label_negate = ax1.annotate(
'Negate',
xy=(negate_hide_coord, negate_hide_coord),
xytext=(negate_hide_coord+0.01, negate_hide_coord),
arrowprops=dict(
color='grey',
arrowstyle="-",
connectionstyle="angle3,angleA=0,angleB=-90"
),
bbox=dict(boxstyle="square", alpha=0.1, ec="gray"),
size=20,
)
# ------------- Create arrow representing EXPONENTIATE function---------------
exponentiate_hide_coord = -10
func_arrow_exponentiate = ax1.annotate(
'',
xy=(exponentiate_hide_coord, exponentiate_hide_coord),
xytext=(exponentiate_hide_coord, exponentiate_hide_coord),
arrowprops=dict(facecolor='black', shrink=0.05),
)
# ------------- Create label for arrow, representing EXPONENTIATE function ----------------
offset_horizontal = 0.5
offset_vertical = -2
func_label_exponentiate = ax1.annotate(
'Exponentiate',
xy=(exponentiate_hide_coord, exponentiate_hide_coord),
xytext=(exponentiate_hide_coord, exponentiate_hide_coord),
arrowprops=dict(
color='grey',
arrowstyle="-",
connectionstyle="angle3,angleA=-90,angleB=0"
),
bbox=dict(boxstyle="square", alpha=0.1, ec="gray"),
size=20,
)
function_calculation_label = ax1.annotate(
' ',
xy=(2, 2),
size=20,
)
# Composition animation function
def animate_composition(current):
if round(current, 5) < upper_bound:
# Gathering x axis metrics
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
# Set output curve, marker_x, marker_squared
ax_squared.set_data(x, x_squared)
marker_x.set_data(current, current)
marker_squared.set_data(current, x_squared[-1])
# Set function arrow head and tail position
func_arrow_square.set_position((current + epsilon, current))
func_arrow_square.xy = (current, x_squared[-1])
# Label location, followed by label arrow head
func_label_square.set_position((current + offset + epsilon, (x_squared[-1] - current)/2 + offset))
func_label_square.xy = (current, (x_squared[-1] - current)/2 + current)
        # Set function calculation label
function_calculation_label.set_text(r' ({})$^2$ = {}'.format(round(current, 1), round(x_squared[-1], 1)))
elif round(current, 5) == upper_bound:
# End of squaring, start of negating
func_arrow_square.remove()
marker_x.remove()
func_label_square.remove()
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
# Updating squared curve to be input to negate function (setting color to green)
marker_squared.set_color("green")
ax1.plot(x, y, lw=3, c=sns.xkcd_rgb["grey"])
ax1.plot(x, x_squared, c=sns.xkcd_rgb["soft green"], linewidth=3)
elif round(current, 5) > upper_bound and round(current, 5) < (upper_bound*3) :
current -= upper_bound*2
# Gathering x axis metrics
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
x_squared_negated = negate(x_squared)
# Set output curve, marker1, marker2
ax_negated.set_data(x, x_squared_negated)
marker_squared.set_data(current, x_squared[-1])
marker_negated.set_data(current, x_squared_negated[-1])
# Set function arrow head and tail position
func_arrow_negate.set_position((current + 0.000001, x_squared[-1])) # Arrow tail
func_arrow_negate.xy = (current, x_squared_negated[-1]) # Arrow head
# Label location, followed by label arrow head
func_label_negate.set_position((current + offset + 0.000001, (x_squared_negated[-1] - current)/2 + offset - shift))
func_label_negate.xy = (current, (x_squared[-1] - current)/2 + current)
        # Set function calculation label
function_calculation_label.set_text(' -({}) = {}'.format(round(x_squared[-1], 1), round(x_squared_negated[-1], 1)))
elif round(current, 5) == (upper_bound*3):
# End of negating, start of exponentiating
func_arrow_negate.remove()
func_label_negate.remove()
marker_squared.remove()
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
x_squared_negated = negate(x_squared)
        # Updating negated curve to be input to exponentiate function (setting color to green)
marker_negated.set_color("green")
ax1.plot(x, x_squared, lw=3, c=sns.xkcd_rgb["grey"])
ax1.plot(x, x_squared_negated, c=sns.xkcd_rgb["soft green"], linewidth=3, zorder=4)
elif round(current, 5) > (upper_bound*3) and round(current, 5) < (upper_bound*5):
current -= upper_bound*4
# Gathering x axis metrics
x = np.linspace(lower_bound, current, length)
x_squared = square(x)
x_squared_negated = negate(x_squared)
x_squared_negated_exponentiated = exponentiate(x_squared_negated)
# Set output curve, marker1, marker2
ax_exponentiated.set_data(x, x_squared_negated_exponentiated)
marker_negated.set_data(current, x_squared_negated[-1])
marker_exponentiated.set_data(current, x_squared_negated_exponentiated[-1])
# Set function arrow head and tail position
func_arrow_exponentiate.set_position((current + 0.000001, x_squared_negated[-1])) # Arrow tail
func_arrow_exponentiate.xy = (current, x_squared_negated_exponentiated[-1]) # Arrow head
# Label location, followed by label arrow head
label_arrow_pos = ((x_squared_negated_exponentiated[-1] - x_squared_negated[-1]) / 2 ) + x_squared_negated[-1]
func_label_exponentiate.set_position((current + offset_horizontal, label_arrow_pos + offset_vertical))
func_label_exponentiate.xy = (current, label_arrow_pos)
        # Set function calculation label
function_calculation_label.set_text(' exp({}) = {}'.format(round(x_squared_negated[-1], 1), round(x_squared_negated_exponentiated[-1], 1)))
return ax_x,
# Composition init function
def init_composition():
ax1.set_xlim(lower_bound, upper_bound)
ax1.set_ylim(-4, 4)
return ax_x,
""" Define steps and create animation object """
step = 0.025
# step = 0.05
steps = np.arange(lower_bound, composition_upper_bound, step)
# Shrink current axis by 20%
box = ax1.get_position()
ax1.set_position([box.x0, box.y0, box.width * 0.65, box.height])
# Put a legend to the right of the current axis
ax1.legend(
(marker_x, marker_squared),
['Input to function', 'Output of function'],
loc='center left',
bbox_to_anchor=(1, 0.5)
)
# For rendering html video in cell
gif_video = animation.FuncAnimation(
fig,
animate_composition,
steps,
init_func=init_composition,
interval=50,
blit=True
)
gif_video.save('function_composition_with_final.gif', writer='imagemagick')
plt.close()
# -
# <img src="https://drive.google.com/uc?id=139CaUK0aH9OX7QYFpI9_iWFsH6ozkre2" width="700">
#
# We have officially accomplished our goal: determining a general function that can act as the normal distribution and approximate the discrete binomial distribution we saw earlier. There are additional cosmetic updates to be made, and I have an entire post dedicated to that if you are interested (the history of the normal distribution).
#
# The shape of the normal distribution can be approximated via our curve $n(x)$:
#
# $$f(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2 \pi \sigma^2}} exp(-\frac{(x-\mu)^2}{2\sigma^2})$$
#
# $$ n(x) = e^{-x^2}$$
#
# $$n(x) \rightarrow \text{Is an approximation of the shape} \rightarrow f(x \mid \mu, \sigma^2)$$
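#
# A quick numeric check of this correspondence (a stdlib-only sketch): with $\mu = 0$ and $\sigma^2 = 1/2$, the normal density is exactly $n(x)$ scaled by $1/\sqrt{\pi}$, so the ratio $n(x)/f(x)$ is the constant $\sqrt{\pi} \approx 1.7725$:

```python
import math

def n(x):
    # Our composed curve: square, negate, exponentiate.
    return math.exp(-x ** 2)

def normal_pdf(x, mu=0.0, sigma2=0.5):
    # Normal density; with mu = 0 and sigma^2 = 1/2 it equals n(x) / sqrt(pi).
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

for x in (-2.0, -0.5, 0.0, 1.0):
    print(round(n(x) / normal_pdf(x), 4))  # 1.7725 every time
```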
#
# With that said, there is a much larger theme of this post that I would like you to leave with. Thousands upon thousands of formulas and equations have been derived over the past 2000 years; they span quantum mechanics, network theory, statistical learning, financial modeling, computational biology, and so on. Oftentimes you will be presented with one of these equations in a textbook and expected to take it at face value, or given an erudite proof.
#
# However, what is often left out is the underlying process used to arrive at that equation. I want you to be able to find your own equations, to create your own solutions to the hard problems that face our world today. One of the most fundamental ways this is done in mathematics and related sciences is via the following process:
#
# 1. Collecting data on whatever it is you want to know more about. That could be the financial markets, the force due to gravity, the rate at which bacteria grow in a petri dish, etc.
# 2. That data gives you a discrete representation of some underlying function (how the financial markets respond to certain inputs, how the force of gravity is affected by distance, how bacteria grow in a petri dish as a function of time). You can plot this discrete data and get a representation like the binomial distribution we saw earlier.
# 3. You want to find the underlying function that accounts for this data! In other words, you want to find a function, $f$, that when you input your collected data everything checks out! **This is one of the most important problems in all of mathematics and science**.
# 4. Most people don't have the slightest idea of where to start when they hit this point. But you, on the other hand, now do. Function composition is your friend. Mapping inputs from one space to another and composing many functions: it is how many of the greatest laws of math and science have been derived!
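#
# The three-step pipeline used throughout this post is itself a one-line function composition; a minimal sketch:

```python
import math

def compose(*funcs):
    # Right-to-left composition: compose(f, g)(x) == f(g(x)).
    def composed(x):
        for f in reversed(funcs):
            x = f(x)
        return x
    return composed

square = lambda v: v ** 2
negate = lambda v: -v
n = compose(math.exp, negate, square)  # n(x) = exp(-(x^2))

print(round(n(0.0), 4))   # 1.0
print(round(n(1.0), 4))   # 0.3679
print(round(n(-1.0), 4))  # 0.3679
```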
# + active=""
# <script>
# function code_toggle() {
# if (code_shown){
# $('div.input').hide('500');
# $('#toggleButton').val('Show Code')
# } else {
# $('div.input').show('500');
# $('#toggleButton').val('Hide Code')
# }
# code_shown = !code_shown
# }
#
# $( document ).ready(function(){
# code_shown=false;
# $('div.input').hide()
# });
# </script>
# <form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>
# -
| Mathematics/06-Functions-01-Composition-of-functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + nbgrader={"grade": false, "grade_id": "cell-195a16dbc662c53b", "locked": true, "schema_version": 1, "solution": false}
# %matplotlib notebook
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# -
# ### Dataset: winequality-red.csv
#
# Source:
#
# Description:
#
# Variables/Columns:
#
# **Hypothesis**:
#
# Read the csv file into a pandas DataFrame
quality = pd.read_csv('./assets/winequality-red.csv')
quality.head()
# +
# Assign the data to X and y
X = quality[["volatile acidity","pH","alcohol"]]
#X = quality[["fixed acidity","volatile acidity","citric acid","residual sugar","chlorides","free sulfur dioxide","total sulfur dioxide","density","pH","sulphates","alcohol"]]
y = quality["quality"].values.reshape(-1, 1)
print(X.shape, y.shape)
# + nbgrader={"grade": false, "grade_id": "cell-97f9d8f3d4b7abc1", "locked": false, "schema_version": 1, "solution": true}
# Use train_test_split to create training and testing data
### BEGIN SOLUTION
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
### END SOLUTION
# + nbgrader={"grade": false, "grade_id": "cell-500eedfd487be441", "locked": false, "schema_version": 1, "solution": true}
# Create the model using LinearRegression
### BEGIN SOLUTION
from sklearn.linear_model import LinearRegression
model = LinearRegression()
### END SOLUTION
# + nbgrader={"grade": false, "grade_id": "cell-715f0369813d2b84", "locked": false, "schema_version": 1, "solution": true}
# Fit the model to the training data and calculate the scores for the training and testing data
### BEGIN SOLUTION
model.fit(X_train, y_train)
training_score = model.score(X_train, y_train)
testing_score = model.score(X_test, y_test)
### END SOLUTION
print(f"Training Score: {training_score}")
print(f"Testing Score: {testing_score}")
# + nbgrader={"grade": false, "grade_id": "cell-90aed41fb7c4f723", "locked": false, "schema_version": 1, "solution": true}
# Plot the Residuals for the Training and Testing data
### BEGIN SOLUTION
plt.scatter(model.predict(X_train), model.predict(X_train) - y_train, c="blue", label="Training Data")
plt.scatter(model.predict(X_test), model.predict(X_test) - y_test, c="orange", label="Testing Data")
plt.title("Multiple Linear Regression of Red Wine Quality Factors")
plt.xlabel("Volatile Acidity, pH, and Alcohol")
plt.ylabel("Quality")
plt.legend()
plt.hlines(y=0, xmin=y.min(), xmax=y.max())
### END SOLUTION
# -
plt.savefig("output-data/MLR-Residual-Plot-of-Red-Wine-Quality-Factors-VolatileAcidity-pH-Alcohol.png")
plt.show()
| .ipynb_checkpoints/multiple-linear-regression-wine-quality-red-volatileAcidity-pH-alcohol-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ---
# __Universidad Tecnológica Nacional, Buenos Aires__\
# __Ingeniería Industrial__\
# __Cátedra de Investigación Operativa__\
# __Author: <NAME>__, <EMAIL>
#
# ---
# +
import pulp
lp01 = pulp.LpProblem("problema incompatible", pulp.LpMaximize)
# Variables:
x = pulp.LpVariable('x', lowBound=0, cat='Continuous')
y = pulp.LpVariable('y', lowBound=0, cat='Continuous')
# Objective function:
lp01 += 4*x + 3*y, "Z"
# Constraints:
lp01 += 6*x + 16*y >= 48000
lp01 += 12*x + 6*y >= 42000
lp01 += 9*x + 9*y <= 36000
# Solve:
lp01.solve()
# Print the problem status:
print(pulp.LpStatus[lp01.status])
# Print the variables at their optimal values:
for variable in lp01.variables():
    print("%s = %.2f" % (variable.name, variable.varValue))
# Print the optimal objective value:
print(pulp.value(lp01.objective))
# -
| 07_programacion_matematica/casos_codigo/ejercicios/casos_particulares/incompatible.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# _by <NAME>$^{1,2}$ and <NAME>$^1$_
#
# $^1$ Institute of Communications Engineering, University of Rostock, Rostock <br>
# $^2$ University Library, University of Rostock, Rostock
#
# **Abstract**:
# This notebook contains the solutions for the tasks in the `02 Python Introduction.ipynb` Jupyter Notebook.
# **Task 1:** Figure out the meaning of each line yourself.
x = 34 - 23 # A comment.
y = "Hello" # Another one.
z = 3.45
if z == 3.45 or y == "Hello":
x = x + 1
y = y + " World"
print(x)
print(y)
# **Solution:**
# The first three lines each are variable assignments.
# You can see that Python 3 can have different kinds of values: integer numbers, strings and floating point numbers.
# The lines all work in the following way: the term on the right side of the `=` will be evaluated and the value copied to new memory that is referenced by the name on the left side of the `=`. As an example, the value of the term `34-23` which is `11` will be copied into a new set of memory that is referred to by the name `x`.
#
# The 4th line is an if-clause containing two conditions connected via the logical or-operator.
# The or-operator is True if either of the conditions is True, or if both are.
# In comparison to a variable assignment (using a single `=`), we compare the value stored inside the variable's memory with the other term by using the double equal sign `==`.
# After the condition, an indented block follows that will be executed if the condition is True.
# In this block, the variable x is increased by 1 and the string ` World` is appended to the string inside variable y.
#
# The last two lines are no longer indented, which means that they are not part of the if-clause and will be executed regardless of the condition.
# These lines print the current values of x and y, respectively.
#
# ---
# **Task 2:** Implement a function that tests whether a given number is prime.
#
# **Solution:**
# +
import math
def is_prime(n: int):
if n < 2:
return False
for k in range(2, math.ceil(n/2)+1):
if n % k == 0:
return False
return True
# -
for i in range(-1, 15):
print('%i is %s' % (i, is_prime(i)))
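# As an aside, trial division only needs to run up to $\sqrt{n}$, since any factor pair $(a, b)$ with $ab = n$ satisfies $\min(a, b) \le \sqrt{n}$; a slightly faster variant of the same test:

```python
import math

def is_prime_fast(n: int) -> bool:
    if n < 2:
        return False
    # math.isqrt avoids floating-point rounding issues for large n.
    for k in range(2, math.isqrt(n) + 1):
        if n % k == 0:
            return False
    return True

print([x for x in range(20) if is_prime_fast(x)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```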
# ---
#
# **Task 3:** Use a list comprehension to make a list of all primes < 1000.
l = [x for x in range(1001) if is_prime(x)]
# Alternative to the list comprehension
l = []
for x in range(1001):
if is_prime(x):
l.append(x)
# ---
# **Task 4:** Type "`np.ze`" (without the quotes) and then hit the *Tab* key ...
import numpy as np
np.ze
| 02 Python Introduction (Solutions).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import sklearn
import matplotlib.pyplot as plt
from sklearn import datasets
import numpy as np
import seaborn as sb
iris = datasets.load_iris()
iris
iris.target_names
iris.feature_names
iris.data
iris.target
x = iris.data
y = iris.target
labels = iris.feature_names
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.5)
print(x.shape)
print(y.shape)
print(min(iris.data[:,0]))
print(max(iris.data[:,0]))
dataset = np.array(iris.data)
means = dataset.mean(axis=0)
maxs = dataset.max(axis=0)
mins = dataset.min(axis=0)
sds = dataset.std(axis=0)
for i in range(0,4):
print("Statistical information on ", labels[i])
print("Mean ", labels[i], ": ", means[i])
print("Max : ", maxs[i])
print("min : ", mins[i])
print("sd : ", sds[i])
print('')
dataset = sb.load_dataset('iris')
sb.distplot(dataset['petal_length'],kde = False)
sb.distplot(dataset['sepal_length'],kde = False)
sb.distplot(dataset['petal_width'],kde = False)
sb.distplot(dataset['sepal_width'],kde = False)
import pandas as pd
dataset = pd.DataFrame(data= np.c_[iris['data']],
columns= iris['feature_names'])
list(dataset.columns)
plt.hist(dataset['sepal length (cm)'], bins=10, density=True)
plt.xlabel("sepal length (cm)")
plt.ylabel("Frequency")
plt.title("sepal length (cm)")
plt.show()
plt.hist(dataset['sepal width (cm)'], bins=10, density=True)
plt.xlabel("sepal width (cm)")
plt.ylabel("Frequency")
plt.title("sepal width (cm)")
plt.show()
plt.hist(dataset['petal length (cm)'], bins=10, density=True)
plt.xlabel("petal length (cm)")
plt.ylabel("Frequency")
plt.title("petal length (cm)")
plt.show()
plt.hist(dataset['petal width (cm)'], bins=10, density=True)
plt.xlabel("petal width (cm)")
plt.ylabel("Frequency")
plt.title("petal width (cm)")
plt.show()
print(dataset.shape)
print(dataset.describe())
dataset.plot(kind='box', subplots=False, sharex=False, sharey=False)
plt.show()
| 2 - visualizations/8 - histogram and boxplot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# default_exp core
from nbdev import *
#hide
from nbdev.showdoc import *
# # Time Series Feature Engineering Core
#
# > Basic functions for time series analysis.
#export
from typing import *
from fastcore import *
from fastcore.utils import *
from fastcore.script import *
import pandas as pd
import numpy as np
#export
def ifnone(a:Any,b:Any)->Any:
"`a` if `a` is not None, otherwise `b`."
return b if a is None else a
#export
def make_date(df, date_field):
"Make sure `df[date_field]` is of the right date type."
field_dtype = df[date_field].dtype
if isinstance(field_dtype, pd.core.dtypes.dtypes.DatetimeTZDtype):
field_dtype = np.datetime64
if not np.issubdtype(field_dtype, np.datetime64):
df[date_field] = pd.to_datetime(df[date_field], infer_datetime_format=True)
#export
def add_datepart(df, field_name, prefix=None, drop=True, time=False):
"Helper function that adds columns relevant to a date in the column `field_name` of `df`."
make_date(df, field_name)
field = df[field_name]
prefix = ifnone(prefix, re.sub('[Dd]ate$', '', field_name))
attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
if time: attr = attr + ['Hour', 'Minute', 'Second']
# Pandas removed `dt.week` in v1.1.10
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in attr: df[prefix + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
mask = ~field.isna()
df[prefix + 'Elapsed'] = np.where(mask,field.values.astype(np.int64) // 10 ** 9,None)
if drop: df.drop(field_name, axis=1, inplace=True)
return df
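# The `isocalendar().week` workaround used above can be checked in isolation; a quick sketch with made-up dates:

```python
import pandas as pd

s = pd.to_datetime(pd.Series(['2021-01-01', '2021-06-15']))
# `Series.dt.week` was removed in newer pandas; isocalendar() is the replacement.
week = s.dt.isocalendar().week.astype(s.dt.day.dtype)
print(week.tolist())            # [53, 24] -- 2021-01-01 falls in ISO week 53 of 2020
print(s.dt.dayofweek.tolist())  # [4, 1]  (Friday, Tuesday)
```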
#export
def add_lag_features(df, field_name, prefix=None, lag_periods=[1]):
"Helper function that adds lag features relevant to the column `field_name` of `df`."
field = df[field_name]
prefix = ifnone(prefix, field_name)
for n in lag_periods: df[f'{prefix}-{n}p'] = df[field_name].shift(n)
return df
#export
def add_lag_percentage_gain_features(df, field_name, prefix=None, lag_periods=[1]):
"Helper function that adds lag percentage gain features relevant to the column `field_name` of `df`."
field = df[field_name]
prefix = ifnone(prefix, field_name)
for n in lag_periods:
df[f'{prefix}-{n}p_PG'] = df[field_name]/df[field_name].shift(n)
return df
#export
def add_moving_average_features(df, field_name, prefix=None, windows=[3], weighted=True):
"Helper function that adds moving average (rolling window) features relevant to the column `field_name` of `df`."
field = df[field_name]
prefix = ifnone(prefix, field_name)
for n in windows:
if weighted:
weights = np.arange(1, n + 1)
df[f'{prefix}_{n}p_MA'] = df[field_name].rolling(
window=n).apply(lambda x: np.dot(x, weights) /
weights.sum(), raw=True)
else:
df[f'{prefix}_{n}p_MA'] = df[field_name].rolling(window=n).mean()
return df
#export
def add_moving_average_percentage_gain_features(df, field_name, prefix=None, windows=[3], weighted=True):
"Helper function that adds moving average (rolling window) percentage gain features relevant to the column `field_name` of `df`."
field = df[field_name]
prefix = ifnone(prefix, field_name)
for n in windows:
if weighted:
weights = np.arange(1, n + 1)
df[f'{prefix}_{n}p_MA_PG'] = df[field_name]/df[field_name].rolling(
window=n).apply(lambda x: np.dot(x, weights) /
weights.sum(), raw=True)
else:
df[f'{prefix}_{n}p_MA_PG'] = df[field_name]/df[field_name].rolling(window=n).mean()
return df
#export
def add_expanding_features(df, field_name, prefix=None, period=7):
"Helper function that adds expanding features relevant to the column `field_name` of `df`."
field = df[field_name]
prefix = ifnone(prefix, field_name)
df[f'{prefix}_{period}p_expanding'] = df[field_name].expanding(period).mean()
return df
#export
def add_trend_features(df, field_name, prefix=None, windows=[3]):
"Helper function that adds trend features relevant to the column `field_name` of `df`."
field = df[field_name]
prefix = ifnone(prefix, field_name)
for n in windows:
df[f'{prefix}_{n}p_trend'] = (df[field_name]
.rolling(window=n)
.mean()
.diff()
.fillna(0))
return df
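# The trend feature above is just the first difference of a rolling mean; a minimal sketch:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 4.0, 7.0, 11.0])
# Rolling mean smooths the series; diff() then measures its slope per step.
trend = s.rolling(window=2).mean().diff().fillna(0)
print(trend.tolist())  # [0.0, 0.0, 1.5, 2.5, 3.5]
```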
#hide
from nbdev.export import notebook2script; notebook2script()
| 00_core.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## US Fed Docs Registry (OCLC numbers)
#
# The US Fed Docs Registry: https://github.com/HTGovdocs/feddoc_oclc_nums
# +
import datetime
import json
import os
import shutil
import git
# -
# ### Clone data from Github repository (frequently updated)
# clone data fresh, remove existing repository if needed.
if os.path.exists("feddoc_oclc_nums"):
shutil.rmtree("feddoc_oclc_nums")
print("Cloning data from Github...")
repo = git.Repo.clone_from("https://github.com/HTGovdocs/feddoc_oclc_nums", "feddoc_oclc_nums")
# ### Copy file to data directory with a manifest
# +
dataset_name = "feddoc_oclc_nums"
dataset_file = "data/{}.txt".format(dataset_name)
if not os.path.exists("data"):
os.makedirs("data")
# copy file to data folder
shutil.copyfile("feddoc_oclc_nums/feddoc_oclc_nums.txt", dataset_file)
# +
# create manifest file
manifest = {}
manifest["name"] = "feddoc_oclc_nums"
manifest["description"] = "A daily updated list of OCLC numbers determined to be Federal Documents."
# use the latest commit as a proxy for datetime
commit = repo.head.commit
file_datetime_proxy = datetime.datetime.utcfromtimestamp(commit.committed_date).isoformat()
manifest["datetime"] = str(file_datetime_proxy)
manifest["schema"] = {
"oclc": "object"
}
manifest["format"] = {
"type": "text",
"extension": "txt",
"header": False,
}
manifest["data-origins"] = [{
"origin": "https://github.com/HTGovdocs/feddoc_oclc_nums",
"datetime": str(file_datetime_proxy)
}]
# create manifest to accompany data
manifest_file = "data/{}.manifest.json".format(manifest["name"])
with open(manifest_file, 'w') as outfile:
json.dump(manifest, outfile, indent=4, sort_keys=True)
# -
# ### Finishing up!
print("Completed notebook ({}).".format(datetime.datetime.utcnow().isoformat()))
print("Output created:")
print(dataset_file)
print(manifest_file)
| create_htusfd_dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# Variable assignment copies a reference to the object.
s1 = 'Python'
s2 = s1
id(s1), id(s2)
s3 = s1.replace('P', 'p')
s3, s1, s2
s1 += '!'
print(id(s1), id(s2))
s1, s2
# When a list is mutated, the change is visible through every variable that references it.
cities = ['seoul', 'busan', 'daegu']
dosi = cities
id(cities), id(dosi)
cities[0] = 'suwon'
cities, dosi
dosi = cities.copy()
id(cities), id(dosi)
cities[0] = 'seoul'
cities, dosi
dosi = list(cities)
dosi, id(dosi)
dosi = cities[:]
dosi, id(dosi)
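# One caveat worth noting: `copy()`, `list()` and slicing all make shallow copies, so nested objects are still shared. `copy.deepcopy` duplicates them as well (a small sketch):

```python
import copy

nested = [['a'], ['b']]
shallow = nested.copy()
deep = copy.deepcopy(nested)

nested[0].append('x')
print(shallow[0])  # ['a', 'x'] -- the inner list is shared
print(deep[0])     # ['a']      -- deepcopy duplicated it
```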
| tutorial-1/object.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Old Norse with CLTK
#
# Process your Old Norse texts thanks to cltk. Here are presented several tools adapted to Old Norse.
# ### Import Old Norse corpora
# * old_norse_text_perseus contains different Old Norse books
# * old_norse_texts_heimskringla contains the Eddas
# * old_norse_models_cltk is data for a Part Of Speech tagger
#
# By default, corpora are imported into ~/cltk_data.
import os
from cltk.corpus.utils.importer import CorpusImporter
onc = CorpusImporter("old_norse")
onc.import_corpus("old_norse_text_perseus")
onc.import_corpus("old_norse_texts_heimskringla")
onc.import_corpus("old_norse_models_cltk")
onc.import_corpus("old_norse_dictionary_zoega")
# ### Configure IPython
#
# Configure IPython if you want to use this notebook
# ```bash
# $ ipython profile create
# $ ipython locate
# $ nano ~/.ipython/profile_default/ipython_config.py
# ```
# Add this at the end of the file:
# ```python
# c.InteractiveShellApp.exec_lines = [
# 'import sys; sys.path.append("~/cltk_data/old_norse")',
# 'import sys; sys.path.append("~/cltk_data/old_norse/dictionary")'
# ]
# ```
# And... It's done!
# ### old_norse_text_perseus
# +
import os
import json
corpus = os.path.join(os.environ["HOME"], "cltk_data", "old_norse", "text", "old_norse_text_perseus", "plain_text", "Ragnars_saga_loðbrókar_ok_sona_hans")
chapters = []
for filename in os.listdir(corpus):
with open(os.path.join(corpus, filename)) as f:
chapter_text = f.read() # json.load(filename)
print(chapter_text[:30])
chapters.append(chapter_text)
# -
# ### old_norse_texts_heimskringla
# +
from old_norse.text.old_norse_texts_heimskringla.text_manager import *
corpus_path = os.path.join(os.environ["HOME"], "cltk_data", "old_norse", "text", "old_norse_texts_heimskringla")
#here = os.getcwd()
#os.chdir(corpus_path)
loader = TextLoader(os.path.join(corpus_path, "Sæmundar-Edda", "Atlakviða"), "txt")
complete_text = loader.load()
print(complete_text[:100])
#os.chdir(here)
# -
# ### old_norse_dictionary_zoega
# +
from old_norse.dictionary.old_norse_dictionary_zoega import *
corpus_path = os.path.join(os.environ["HOME"], "cltk_data", "old_norse", "dictionary", "old_norse_dictionary_zoega")
# -
# ### POS tagging
# Unknown tags are marked with 'Unk'.
from cltk.tag.pos import POSTag
import cltk.tag.pos as cltkonpos
tagger = POSTag('old_norse')
sent = 'Hlióðs bið ek allar.'
tagger.tag_tnt(sent)
# ### Word tokenizing
# For now, the word tokenizer is basic, but Old Norse actually does not need a sophisticated one.
from cltk.tokenize.word import WordTokenizer
word_tokenizer = WordTokenizer('old_norse')
sentence = "Gylfi konungr var maðr vitr ok fjölkunnigr."
word_tokenizer.tokenize(sentence)
# ### Old Norse Stop Words
# A list of stop words was compiled from the least meaningful words of a sentence. Of course, you can adapt it to your needs.
# +
from nltk.tokenize.punkt import PunktLanguageVars
from cltk.stop.old_norse.stops import STOPS_LIST
sentence = 'Þat var einn morgin, er þeir Karlsefni sá fyrir ofan rjóðrit flekk nökkurn, sem glitraði við þeim'
p = PunktLanguageVars()
tokens = p.word_tokenize(sentence.lower())
[w for w in tokens if w not in STOPS_LIST]
# -
# ### Swadesh list for Old Norse
# In the following Swadesh list, an item may contain several words when they share a similar meaning, and some entries are missing because no corresponding Old Norse word was found.
from cltk.corpus.swadesh import Swadesh
swadesh = Swadesh('old_norse')
words = swadesh.words()
words[:30]
# ### Syllabification
from cltk.corpus.old_norse.syllabifier import hierarchy
from cltk.phonology.syllabify import Syllabifier
# ### Phonetic transcription
from cltk.phonology import utils as phu
from cltk.phonology.old_norse import transcription as ont
# ### Poetry
from old_norse.text.old_norse_texts_heimskringla import text_manager
from old_norse.text.old_norse_texts_heimskringla import reader as heim_reader
# #### Finding alliterations
from cltk.phonology.old_norse.transcription import measure_old_norse_syllable
# #### Verse structure
from cltk.prosody.old_norse.verse import MetreManager, Fornyrdhislag, Ljoodhhaattr, ShortLine, LongLine
import sys
sys.path
import dictionary.old_norse_dictionary_zoega
# By <NAME>, email address: <EMAIL>
| languages/old-norse/old-norse-tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (FastAI)
# language: python
# name: fastai
# ---
# # Transfer Learning Tutorial
#
# http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
# %matplotlib inline
# %reload_ext autoreload
# %autoreload 2
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.autograd import Variable
import torchvision
from torchvision import datasets, models, transforms
import numpy as np
import matplotlib.pyplot as plt
import time
import os
# ## 1. Load Data
#
# Using `torchvision` and `torch.utils.data` for data loading. Training a model to classify ants and bees; 120 training images per class and 75 validation images per class. [data link](https://download.pytorch.org/tutorial/hymenoptera_data.zip)
# +
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
'train': transforms.Compose([
transforms.RandomSizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485,0.456,0.406],[0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Scale(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485,0.456,0.406],[0.229, 0.224, 0.225])
]),
}
data_dir = 'hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=4)
for x in ['train','val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
use_gpu = torch.cuda.is_available()
# +
# torchvision.transforms.Scale??
# -
# ```
# Init signature: torchvision.transforms.Scale(*args, **kwargs)
# Source:
# class Scale(Resize):
# """
# Note: This transform is deprecated in favor of Resize.
# """
# def __init__(self, *args, **kwargs):
# warnings.warn("The use of the transforms.Scale transform is deprecated, " +
# "please use transforms.Resize instead.")
# super(Scale, self).__init__(*args, **kwargs)
# ```
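# The deprecation shim above is a common pattern: a subclass that warns and then delegates everything to its replacement. A minimal plain-Python sketch (the `Resize`/`Scale` classes here are stand-ins, not torchvision's):

```python
import warnings

class Resize:
    """Stand-in for the replacement class."""
    def __init__(self, size):
        self.size = size

class Scale(Resize):
    """Deprecated alias: warn, then behave exactly like Resize."""
    def __init__(self, *args, **kwargs):
        warnings.warn("Scale is deprecated, please use Resize instead")
        super().__init__(*args, **kwargs)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    s = Scale(256)

print(s.size, len(caught))  # 256 1
```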
# +
# torchvision.transforms.Resize??
# -
# ```
# Init signature: torchvision.transforms.Resize(size, interpolation=2)
# Source:
# class Resize(object):
# """Resize the input PIL Image to the given size.
#
# Args:
# size (sequence or int): Desired output size. If size is a sequence like
# (h, w), output size will be matched to this. If size is an int,
# smaller edge of the image will be matched to this number.
# i.e, if height > width, then image will be rescaled to
# (size * height / width, size)
# interpolation (int, optional): Desired interpolation. Default is
# ``PIL.Image.BILINEAR``
# """
#
# def __init__(self, size, interpolation=Image.BILINEAR):
# assert isinstance(size, int) or (isinstance(size, collections.Iterable) and len(size) == 2)
# self.size = size
# self.interpolation = interpolation
# ```
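# The sizing rule in the docstring above can be checked with a few lines of plain Python. This is a simplified sketch (int vs. tuple only; torchvision also accepts other sequences):

```python
def resized_shape(height, width, size):
    """Output (h, w) under torchvision's Resize rule."""
    if isinstance(size, tuple):       # explicit (h, w): used as-is
        return size
    if height > width:                # width is the smaller edge
        return (round(size * height / width), size)
    return (size, round(size * width / height))

print(resized_shape(600, 400, 256))         # (384, 256)
print(resized_shape(400, 600, 256))         # (256, 384)
print(resized_shape(600, 400, (224, 224)))  # (224, 224)
```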
# ## 2. Visualize a few images
# +
# plt.pause?
# +
def imshow(inp, title=None):
"""Imshow for Tensor"""
inp = inp.numpy().transpose((1,2,0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
# -
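# The `std * inp + mean` line in `imshow` simply inverts what `transforms.Normalize` did. A one-channel sanity check in plain Python (0.7 is an arbitrary pixel value):

```python
mean, std = 0.485, 0.229   # the ImageNet stats used above (one channel)
x = 0.7                    # original pixel value
y = (x - mean) / std       # what Normalize produces
restored = std * y + mean  # what imshow undoes
print(round(restored, 6))
```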
# Huh, cool
# ## 3. Training the model
#
# * Scheduling the learning rate
# * Saving the best model
#
# Parameter `scheduler` is an LR scheduler object from `torch.optim.lr_scheduler`
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = model.state_dict()
best_acc = 0.0
for epoch in range(num_epochs):
print(f'Epoch {epoch}/{num_epochs-1}')
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
scheduler.step()
model.train(True) # Set model to training mode
else:
                model.train(False)  # Set model to evaluation mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for data in dataloaders[phase]:
# get the inputs
inputs, labels = data
# wrap them in Variable
if use_gpu:
inputs = Variable(inputs.cuda())
labels = Variable(labels.cuda())
else:
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.data[0]
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects / dataset_sizes[phase]
print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')
# deep copy the model ### <-- ooo this is very cool. .state_dict() & acc
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
                best_model_wts = model.state_dict()  # NB: the official tutorial uses copy.deepcopy here
print()
time_elapsed = time.time() - since
    print(f'Training complete in {time_elapsed//60:.0f}m {time_elapsed%60:.0f}s')
print(f'Best val Acc: {best_acc:.4f}')
# load best model weights
model.load_state_dict(best_model_wts)
return model
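# The best-weights bookkeeping in `train_model` reduces to a simple keep-the-best pattern. One caveat worth noting: `state_dict()` returns references to the live tensors, so the snapshot should be deep-copied (the official tutorial uses `copy.deepcopy`). A plain-Python sketch with made-up accuracies:

```python
import copy

def keep_best(accuracies, states):
    """Return the best accuracy and a snapshot of the matching state."""
    best_acc, best_state = 0.0, None
    for acc, state in zip(accuracies, states):
        if acc > best_acc:
            best_acc = acc
            best_state = copy.deepcopy(state)  # snapshot, not a reference
    return best_acc, best_state

print(keep_best([0.6, 0.8, 0.7], [{'w': 1}, {'w': 2}, {'w': 3}]))
# (0.8, {'w': 2})
```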
# ## 4. Visualizing the model's predictions
def visualize_model(model, num_images=6):
images_so_far = 0
fig = plt.figure()
for i, data in enumerate(dataloaders['val']):
inputs, labels = data
if use_gpu:
inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda())
else:
inputs, labels = Variable(inputs), Variable(labels)
outputs = model(inputs)
_, preds = torch.max(outputs.data, 1)
for j in range(inputs.size()[0]):
images_so_far += 1
ax = plt.subplot(num_images//2, 2, images_so_far)
ax.axis('off')
ax.set_title(f'predicted: {class_names[preds[j]]}')
imshow(inputs.cpu().data[j])
if images_so_far == num_images:
return
# ```
# Variable.cpu(self)
#
# Source:
# def cpu(self):
# return self.type(getattr(torch, type(self.data).__name__))
# ```
# looking at the cpu() method
temp = Variable(torch.FloatTensor([1,2]))
temp.cpu()
# ## 5. Finetuning the ConvNet
#
# Load a pretrained model and reset final fully-connected layer
# +
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)
if use_gpu:
model_ft = model_ft.cuda()
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
# -
# ```
# torch.optim.lr_scheduler.StepLR
#
# --> defines get_lr(self):
#
# def get_lr(self):
# return [base_lr * self.gamma ** (self.last_epoch // self.step_size)
# for base_lr in self.base_lrs]
# ```
#
# so `gamma` is exponentiated by ( last_epoch // step_size )
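# The formula above is easy to verify in plain Python. With `step_size=7` and `gamma=0.1`, the learning rate drops by a factor of 10 every 7 epochs:

```python
def step_lr(base_lr, gamma, step_size, epoch):
    # same arithmetic as StepLR.get_lr above
    return base_lr * gamma ** (epoch // step_size)

for epoch in (0, 6, 7, 14):
    print(epoch, step_lr(0.001, 0.1, 7, epoch))
# 1e-3 for epochs 0-6, then ~1e-4, then ~1e-5 (up to float rounding)
```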
# ## 5.1 Train and Evaluate
#
# Should take 15-25 min on CPU; < 1 min on GPU.
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
visualize_model(model_ft)
# ## 6. ConvNet as a fixed feature extractor
#
# Freeze the entire network except the final layer. We need to set `requires_grad = False` to freeze the parameters so that gradients are not computed in `backward()`.
#
# [Link to Documentation](http://pytorch.org/docs/notes/autograd.html#excluding-subgraphs-from-backward)
# +
model_conv = torchvision.models.resnet18(pretrained=True)
for par in model_conv.parameters():
par.requires_grad = False
# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)
if use_gpu:
model_conv = model_conv.cuda()
criterion = nn.CrossEntropyLoss()
# Observe that only parameters of the final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
# -
# ## 6.1 Train and evaluate
#
# For CPU: this will take about half the time of the previous run. This is expected, since gradients don't need to be computed for most of the network; the forward pass, however, still has to be computed.
model_conv = train_model(model_conv, criterion, optimizer_conv,
exp_lr_scheduler, num_epochs=25)
# +
visualize_model(model_conv)
plt.ioff()
plt.show()
# -
| pytorch/transfer_learning_tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbsphinx="hidden"
# [Index](Index.ipynb) - [Back](Output Widget.ipynb) - [Next](Widget Styling.ipynb)
# + [markdown] slideshow={"slide_type": "slide"}
# # Widget Events
# -
# ## Special events
from __future__ import print_function
# The `Button` is not used to represent a data type. Instead the button widget is used to handle mouse clicks. The `on_click` method of the `Button` can be used to register a function to be called when the button is clicked. The doc string of `on_click` can be seen below.
import ipywidgets as widgets
print(widgets.Button.on_click.__doc__)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Example
# -
# Since button clicks are stateless, they are transmitted from the front-end to the back-end using custom messages. By using the `on_click` method, a button that prints a message when it has been clicked is shown below.
# +
from IPython.display import display
button = widgets.Button(description="Click Me!")
display(button)
def on_button_clicked(b):
print("Button clicked.")
button.on_click(on_button_clicked)
# + [markdown] slideshow={"slide_type": "slide"}
# ## Traitlet events
# -
# Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the `observe` method of the widget can be used to register a callback. The doc string for `observe` can be seen below.
print(widgets.Widget.observe.__doc__)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Signatures
# -
# As mentioned in the doc string, the registered callback must have the signature `handler(change)`, where `change` is a dictionary holding the information about the change.
#
# Using this method, an example of how to output an `IntSlider`'s value as it is changed can be seen below.
# +
int_range = widgets.IntSlider()
display(int_range)
def on_value_change(change):
print(change['new'])
int_range.observe(on_value_change, names='value')
# -
# ## Linking Widgets
# Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events.
# ### Linking traitlets attributes in the kernel
#
# The first method is to use the `link` and `dlink` functions from the `traitlets` module (these two functions are re-exported by the `ipywidgets` module for convenience). This only works if we are interacting with a live kernel.
caption = widgets.Label(value='The values of slider1 and slider2 are synchronized')
slider1, slider2 = widgets.IntSlider(description='Slider 1'),\
                   widgets.IntSlider(description='Slider 2')
l = widgets.link((slider1, 'value'), (slider2, 'value'))
display(caption, slider1, slider2)
caption = widgets.Label(value='Changes in source values are reflected in target1')
source, target1 = widgets.IntSlider(description='Source'),\
widgets.IntSlider(description='Target 1')
dl = widgets.dlink((source, 'value'), (target1, 'value'))
display(caption, source, target1)
# Function `traitlets.link` and `traitlets.dlink` return a `Link` or `DLink` object. The link can be broken by calling the `unlink` method.
l.unlink()
dl.unlink()
# ### Registering callbacks to trait changes in the kernel
#
# Since attributes of widgets on the Python side are traitlets, you can register handlers to the change events whenever the model gets updates from the front-end.
#
# The handler passed to observe will be called with one change argument. The change object holds at least a `type` key and a `name` key, corresponding respectively to the type of notification and the name of the attribute that triggered the notification.
#
# Other keys may be passed depending on the value of `type`. In the case where type is `change`, we also have the following keys:
#
# - `owner` : the HasTraits instance
# - `old` : the old value of the modified trait attribute
# - `new` : the new value of the modified trait attribute
# - `name` : the name of the modified trait attribute.
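# A sketch of that payload with a plain dict (a real traitlets change object is a `Bunch`, which also allows attribute access like `change.new`):

```python
def handler(change):
    """Handle a value-change notification the way an observe callback would."""
    if change['type'] == 'change' and change['name'] == 'value':
        return f"{change['name']}: {change['old']} -> {change['new']}"

fake_change = {'type': 'change', 'name': 'value',
               'owner': None, 'old': 3, 'new': 5}
print(handler(fake_change))  # value: 3 -> 5
```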
# +
caption = widgets.Label(value='The slider value is in its initial position')
slider = widgets.IntSlider(min=-5, max=5, value=1, description='Slider')
def handle_slider_change(change):
caption.value = 'The slider value is ' + (
'negative' if change.new < 0 else 'nonnegative'
)
slider.observe(handle_slider_change, names='value')
display(caption, slider)
# -
# ### Linking widgets attributes from the client side
# When synchronizing traitlets attributes, you may experience a lag because of the latency due to the roundtrip to the server side. You can also directly link widget attributes in the browser using the link widgets, in either a unidirectional or a bidirectional fashion.
#
# Javascript links persist when embedding widgets in html web pages without a kernel.
caption = widgets.Label(value='The values of range1 and range2 are synchronized')
range1, range2 = widgets.IntSlider(description='Range 1'),\
widgets.IntSlider(description='Range 2')
l = widgets.jslink((range1, 'value'), (range2, 'value'))
display(caption, range1, range2)
caption = widgets.Label(value='Changes in source_range values are reflected in target_range1')
source_range, target_range1 = widgets.IntSlider(description='Source range'),\
widgets.IntSlider(description='Target range 1')
dl = widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
display(caption, source_range, target_range1)
# Function `widgets.jslink` returns a `Link` widget. The link can be broken by calling the `unlink` method.
# +
# l.unlink()
# dl.unlink()
# -
# ### The difference between linking in the kernel and linking in the client
#
# Linking in the kernel means linking via python. If two sliders are linked in the kernel, when one slider is changed the browser sends a message to the kernel (python in this case) updating the changed slider, the link widget in the kernel then propagates the change to the other slider object in the kernel, and then the other slider's kernel object sends a message to the browser to update the other slider's views in the browser. If the kernel is not running (as in a static web page), then the controls will not be linked.
#
# Linking using jslink (i.e., on the browser side) means constructing the link in Javascript. When one slider is changed, Javascript running in the browser changes the value of the other slider in the browser, without needing to communicate with the kernel at all. If the sliders are attached to kernel objects, each slider will update their kernel-side objects independently.
#
# To see the difference between the two, go to the [static version of this page in the ipywidgets documentation](http://ipywidgets.readthedocs.io/en/latest/examples/Widget%20Events.html) and try out the sliders near the bottom. The ones linked in the kernel with `link` and `dlink` are no longer linked, but the ones linked in the browser with `jslink` and `jsdlink` are still linked.
# ## Continuous updates
#
# Some widgets offer a choice with their `continuous_update` attribute between continually updating values or only updating values when a user submits the value (for example, by pressing Enter or navigating away from the control). In the next example, we see the "Delayed" controls only transmit their value after the user finishes dragging the slider or submitting the textbox. The "Continuous" controls continually transmit their values as they are changed. Try typing a two-digit number into each of the text boxes, or dragging each of the sliders, to see the difference.
# +
a = widgets.IntSlider(description="Delayed", continuous_update=False)
b = widgets.IntText(description="Delayed", continuous_update=False)
c = widgets.IntSlider(description="Continuous", continuous_update=True)
d = widgets.IntText(description="Continuous", continuous_update=True)
widgets.link((a, 'value'), (b, 'value'))
widgets.link((a, 'value'), (c, 'value'))
widgets.link((a, 'value'), (d, 'value'))
widgets.VBox([a,b,c,d])
# -
# Sliders, `Text`, and `Textarea` controls default to `continuous_update=True`. `IntText` and other text boxes for entering integer or float numbers default to `continuous_update=False` (since often you'll want to type an entire number before submitting the value by pressing enter or navigating out of the box).
# + [markdown] nbsphinx="hidden"
# [Index](Index.ipynb) - [Back](Output Widget.ipynb) - [Next](Widget Styling.ipynb)
| docs/source/examples/Widget Events.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] papermill={"duration": 0.017923, "end_time": "2022-02-03T17:36:53.688593", "exception": false, "start_time": "2022-02-03T17:36:53.670670", "status": "completed"} tags=[]
# 
#
#
# Chemical structure of morphine, the prototypical opioid, from [Wikipedia](https://en.wikipedia.org/wiki/File:Morphin_-_Morphine.svg)
# + [markdown] papermill={"duration": 0.015722, "end_time": "2022-02-03T17:36:53.721243", "exception": false, "start_time": "2022-02-03T17:36:53.705521", "status": "completed"} tags=[]
# # INTRO
#
# This is my very first notebook on Kaggle. I started it for one reason: to learn to explore datasets, to learn data science, and to learn how to visualize with matplotlib. As for choosing this specific dataset on opioid deaths: I like contributing to the planet and this world, so I am all in for anything that helps nature and medicine. Please forgive any mistakes, for I am just a beginner. You are welcome to comment if you:
#
# * can think of any new exploration or visualization that would give a better analysis of the dataset
# * find something that needs correction
# * can give ideas on how to make the notebook better
#
# CREDITS: <br/>
# https://www.kaggle.com/yamqwe/opioid-overdose-deathse/ <br/>
# https://data.world/health/opioid-overdose-deaths
#
# ORIGINAL LINKS: <br>
# https://www.kaggle.com/yamqwe/opioid-overdose-deathse <br/>
# https://www.kaggle.com/arnuld/opioids-overdose-deaths-a-rough-cut-eda
# + [markdown] papermill={"duration": 0.015865, "end_time": "2022-02-03T17:36:53.753322", "exception": false, "start_time": "2022-02-03T17:36:53.737457", "status": "completed"} tags=[]
# # BEGIN
#
# Next cell of code was automatically added by Kaggle. Kaggle notebook automatically imports Pandas, NumPy and the dataset file path. If you want to run it in your local/cloud environment then remove this cell but copy the imports to next cell before doing that. Then, you can use the **requirements.txt** to create a virtual environment to run it.
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5" papermill={"duration": 0.045154, "end_time": "2022-02-03T17:36:53.814692", "exception": false, "start_time": "2022-02-03T17:36:53.769538", "status": "completed"} tags=[]
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the read-only "../input/" directory
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using "Save & Run All"
# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session
# + papermill={"duration": 0.023794, "end_time": "2022-02-03T17:36:53.856074", "exception": false, "start_time": "2022-02-03T17:36:53.832280", "status": "completed"} tags=[]
import matplotlib.pyplot as plt
from IPython.display import display
# + [markdown] papermill={"duration": 0.016907, "end_time": "2022-02-03T17:36:53.890250", "exception": false, "start_time": "2022-02-03T17:36:53.873343", "status": "completed"} tags=[]
# ### After exploring the dataset, I found that it uses words like 'Suppressed' and 'Unreliable' where values are missing. So we'll treat those as NaN when reading the dataset
# + papermill={"duration": 0.06004, "end_time": "2022-02-03T17:36:53.967329", "exception": false, "start_time": "2022-02-03T17:36:53.907289", "status": "completed"} tags=[]
file_path = '/kaggle/input/opioid-overdose-deathse/health-opioid-overdose-deaths/Multiple Cause of Death, 1999-2014 v1.1.csv'
df = pd.read_csv(file_path, na_values=['Suppressed','Unreliable'])
df.info()
# + [markdown] papermill={"duration": 0.017302, "end_time": "2022-02-03T17:36:54.003399", "exception": false, "start_time": "2022-02-03T17:36:53.986097", "status": "completed"} tags=[]
# ## FIRST IMPRESSIONS OF DATA
#
# As you can see, there are 8 columns but the **Non-Null Count** is not the same for all of them. That means some values are missing, so we will clean the dataset first. Instead of calling cleaning functions one by one, let's write a single cleaning function.
# + papermill={"duration": 0.028175, "end_time": "2022-02-03T17:36:54.049271", "exception": false, "start_time": "2022-02-03T17:36:54.021096", "status": "completed"} tags=[]
# The structure of this function is based on "Avocado | EDA" done by "ks_lar_wtf"
# I have merely modified it to suit my needs
# https://www.kaggle.com/kslarwtf
# https://www.kaggle.com/kslarwtf/avocado-eda/notebook
def clean_data(d, dname=None):
# Let's clean duplicates first
temp = d.duplicated().sum()
if temp:
        print(f"{temp} duplicates found.\n")
print("*** Removing Duplicates ***\n")
d = d.drop_duplicates(ignore_index=True)
# Let's check if we have any missing values
    temp = d.isna().sum().any()
if temp:
print("*** There are missing values in the data. Let's remove those ***\n")
d = d.dropna()
# Rename all Columns to lowercase
d.columns = d.columns.str.lower()
    # one column's name is too long
old_name = 'prescriptions dispensed by us retailers in that year (millions)'
new_name = 'prescriptions dispensed'
d = d.rename(columns={old_name: new_name})
return d
# + papermill={"duration": 0.02558, "end_time": "2022-02-03T17:36:54.092752", "exception": false, "start_time": "2022-02-03T17:36:54.067172", "status": "completed"} tags=[]
# The structure of this function is based on "Avocado | EDA" done by "ks_lar_wtf"
# please refer to the links from the last cell
def examine_data(d, dname=None):
print(F"*** Examining {dname} ***\n")
display(d.head())
display(d.info())
display(d.columns)
# + papermill={"duration": 0.06309, "end_time": "2022-02-03T17:36:54.174063", "exception": false, "start_time": "2022-02-03T17:36:54.110973", "status": "completed"} tags=[]
df_cleaned = clean_data(df, "Opioid Overdose Deaths")
examine_data(df_cleaned, "Opioid Overdose Deaths -- Cleaned")
# + [markdown] papermill={"duration": 0.020154, "end_time": "2022-02-03T17:36:54.214823", "exception": false, "start_time": "2022-02-03T17:36:54.194669", "status": "completed"} tags=[]
# Now the **Non-Null Count** is the same for all columns, which means we have effectively removed any rows with missing values. Now we can start doing some visualizations to see what this dataset holds. Let's see which state has the highest death rate from opioid overdose
# + papermill={"duration": 1.144342, "end_time": "2022-02-03T17:36:55.379546", "exception": false, "start_time": "2022-02-03T17:36:54.235204", "status": "completed"} tags=[]
# California has highest death rate from opioid overdose
# top 3 states are: California, Florida and New York
statewise_deaths = df_cleaned.groupby('state')['deaths'].sum().sort_values(ascending=False)
display(statewise_deaths)
fig, ax = plt.subplots(figsize=(9,15))
ax.barh(statewise_deaths.index, statewise_deaths)
ax.invert_yaxis()
# + [markdown] papermill={"duration": 0.022298, "end_time": "2022-02-03T17:36:55.424964", "exception": false, "start_time": "2022-02-03T17:36:55.402666", "status": "completed"} tags=[]
# Let's see the sale of the opioids per state if that matches the state with the highest overdose death rates
# + papermill={"duration": 0.034701, "end_time": "2022-02-03T17:36:55.482217", "exception": false, "start_time": "2022-02-03T17:36:55.447516", "status": "completed"} tags=[]
df_cleaned.groupby('state')['prescriptions dispensed'].sum().sort_values(ascending=False)
# + [markdown] papermill={"duration": 0.022786, "end_time": "2022-02-03T17:36:55.528138", "exception": false, "start_time": "2022-02-03T17:36:55.505352", "status": "completed"} tags=[]
# Our analysis is only as good as our data. In this case, the dataset does not seem to have complete information on prescriptions dispensed: it is implausible that 90% of the states dispensed exactly the same number of opioids, since each state has a different population. Moreover, identical prescription counts cannot explain the 50-90% variation in overdoses across the 52 states.
# + papermill={"duration": 0.421288, "end_time": "2022-02-03T17:36:55.974249", "exception": false, "start_time": "2022-02-03T17:36:55.552961", "status": "completed"} tags=[]
yearly_deaths = df_cleaned.groupby('year')['deaths'].sum().sort_values(ascending=False)
display(yearly_deaths)
fig, ax = plt.subplots(1,2, figsize=(15,8))
ax[0].plot(yearly_deaths.index, yearly_deaths)
ax[1].bar(yearly_deaths.index, yearly_deaths)
# + [markdown] papermill={"duration": 0.024658, "end_time": "2022-02-03T17:36:56.024060", "exception": false, "start_time": "2022-02-03T17:36:55.999402", "status": "completed"} tags=[]
# Year 2014 saw the largest number of deaths due to opioids overdose. In fact, death rate due to opioids overdose is increasing year by year. In 16 years, death rates have almost quadrupled. This is an alarming situation for a country. It should raise all the red flags within any organization working in the public interest.
# + papermill={"duration": 0.439011, "end_time": "2022-02-03T17:36:56.487711", "exception": false, "start_time": "2022-02-03T17:36:56.048700", "status": "completed"} tags=[]
yearly_population = df_cleaned.groupby('year')['population'].sum()
display(yearly_population)
fig, ax = plt.subplots(1,2, figsize=(18,8))
ax[0].plot(yearly_population.index, yearly_population)
ax[1].bar(yearly_population.index, yearly_population)
# + [markdown] papermill={"duration": 0.026502, "end_time": "2022-02-03T17:36:56.541050", "exception": false, "start_time": "2022-02-03T17:36:56.514548", "status": "completed"} tags=[]
# If you just look at the line chart, it looks like explosive growth in population. But if you then draw a bar chart, you see the difference. The line chart gives us a wrong impression of the population increase because the y-axis scale is different. In fact, the y-axis scale of the line chart is quite misleading (though both use matplotlib's default scale)
#
# To have clean comparison, let's use similar (and not misleading) values for the y-axis for both kinds of charts. Since y-axis values range from **2.5e8 - 3.2e8**. We can use a range from **2e8 - 4e8**
#
# Now you will see that rise in population is there but it is not as explosive as line chart made us believe the first time. We will also reduce the height of the charts to make it look more appealing
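# To put the "explosive" impression in perspective, here is the actual relative growth. The numbers below are rough values read off the axis (about 2.5e8 at the start, about 3.2e8 at the end):

```python
start, end = 2.5e8, 3.2e8   # approximate populations from the charts
growth = (end - start) / start
print(f"{growth:.0%}")      # 28%
```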
# + papermill={"duration": 0.42576, "end_time": "2022-02-03T17:36:56.993833", "exception": false, "start_time": "2022-02-03T17:36:56.568073", "status": "completed"} tags=[]
fig3, ax3 = plt.subplots(1,2, figsize=(18,6), sharey=True)
ax3[0].plot(yearly_population.index, yearly_population)
ax3[0].set_xlabel('Years')
ax3[0].set_ylabel('Population Opioids Use')
ax3[1].bar(yearly_population.index, yearly_population)
ax3[1].set_xlabel('Years')
ax3[1].set_ylabel('Population Opioids Use')
plt.ylim(2e8, 4e8)
# + [markdown] papermill={"duration": 0.028723, "end_time": "2022-02-03T17:36:57.051855", "exception": false, "start_time": "2022-02-03T17:36:57.023132", "status": "completed"} tags=[]
# ## CRUDE RATE & CONFIDENCE INTERVAL
#
# Crude rate is the total number of events occurring in an entire population over a period of time, without reference to any of the individuals or subgroups within the population.
#
#
# We indicate a confidence interval by its endpoints; for example, the 90% confidence interval for the number of people in poverty in the United States in 1995 is "35,534,124 to 37,315,094." If we were to repeatedly make new estimates using exactly the same procedure (by drawing a new sample, conducting new interviews etc), 90% of the time the estimate will fall within the range given above.
#
#
# SOURCE: [The Free Dictionary](https://medical-dictionary.thefreedictionary.com/crude+rate), [United States Census Bureau](https://www.census.gov/programs-surveys/saipe/guidance/confidence-intervals.html)
#
# You can check out a small example of how to calculate Crude Rate at [Boston University School of Public Health](https://sphweb.bumc.bu.edu/otlt/MPH-Modules/EP/EP713_StandardizedRates/EP713_StandardizedRates2.html) by *<NAME>*
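# Following the definition above, a crude rate is just events divided by population, usually scaled per 100,000 people. A minimal sketch with made-up numbers:

```python
def crude_rate(events, population, per=100_000):
    """Events per `per` people over the period."""
    return events / population * per

print(crude_rate(100, 1_000_000))  # deaths per 100,000 population
```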
# + papermill={"duration": 0.044561, "end_time": "2022-02-03T17:36:57.125170", "exception": false, "start_time": "2022-02-03T17:36:57.080609", "status": "completed"} tags=[]
# Let's check the crude rate by state
df_cleaned.groupby('state')['crude rate'].mean().sort_values(ascending=False)
# + [markdown] papermill={"duration": 0.029792, "end_time": "2022-02-03T17:36:57.183733", "exception": false, "start_time": "2022-02-03T17:36:57.153941", "status": "completed"} tags=[]
# What we have done is taken the *mean* of all the crude rates for a particular state. We can see that even though California has the highest number of opioid overdose deaths, its crude rate is only 25% of the maximum value. New York and Florida are similar too in that aspect. I don't know what more I can make out of it for I have never studied Statistics.
# + [markdown] papermill={"duration": 0.028436, "end_time": "2022-02-03T17:36:57.241336", "exception": false, "start_time": "2022-02-03T17:36:57.212900", "status": "completed"} tags=[]
# # More To Come
#
# ## Once I learn more about how to analyze datasets and more visualization techniques, I will come back here and update this notebook.
# + papermill={"duration": 0.028684, "end_time": "2022-02-03T17:36:57.298983", "exception": false, "start_time": "2022-02-03T17:36:57.270299", "status": "completed"} tags=[]
# + [markdown] papermill={"duration": 0.028766, "end_time": "2022-02-03T17:36:57.356856", "exception": false, "start_time": "2022-02-03T17:36:57.328090", "status": "completed"} tags=[]
#
# + [markdown] papermill={"duration": 0.029091, "end_time": "2022-02-03T17:36:57.415166", "exception": false, "start_time": "2022-02-03T17:36:57.386075", "status": "completed"} tags=[]
#
| opioids overdose deaths/opioids-overdose-deaths-a-rough-cut-eda.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# [View in Colaboratory](https://colab.research.google.com/github/imgurnoor/keras_practice_notebooks/blob/master/keras_practise_1.ipynb)
# + id="myPOVCcMdSJz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="15122255-565f-45f1-e3b4-0cfd79e9859b"
import keras
# + id="nUYgPxModoNA" colab_type="code" colab={}
from keras.datasets import mnist
# + id="q9h1MODsd5H9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="053ee2e7-ce70-46f8-b122-ad05dadb2781"
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# + id="9jJ1fVkJfQq3" colab_type="code" colab={}
from keras import models
from keras import layers
# + id="UeFNNXJMgAgt" colab_type="code" colab={}
model=models.Sequential()
model.add(layers.Dense(500, activation="relu", input_shape=((28*28),)))
model.add(layers.Dense(10, activation='softmax'))
# + id="cr9uAA9Fg_AX" colab_type="code" colab={}
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
# + id="5JuDGGrAiQH4" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="272dc54e-12fd-4a2d-f9dd-acf6e9506f6c"
#shape of data
train_images.shape
# + [markdown] id="ZAKZk2iw2-oO" colab_type="text"
# flatten 28*28 images to a 784 vector for each image
#
# + id="1qo4alF3luph" colab_type="code" colab={}
train_images = train_images.reshape((60000, 28 * 28))
train_images= train_images.astype('float32')
# + id="QxmdE5qM2Vgs" colab_type="code" colab={}
test_images= test_images.reshape(10000,28*28)
test_images= test_images.astype('float32')
# + id="N14jdAv33QE_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="d280eb1e-30ad-4b4e-d863-f0f54ad246f2"
train_images.shape
test_images.shape
# + id="7BqA4w5x23Og" colab_type="code" colab={}
#Normalising input data from 0-255 to 0-1
train_images= train_images / 255
test_images= test_images / 255
# + id="2hpIaFrp4dKS" colab_type="code" colab={}
#one hot encode label
from keras.utils import to_categorical
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# + id="V0GQEgNS5Ybn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 406} outputId="954dc363-da0c-41cf-abe8-eb56029c8ba5"
model.fit(train_images, train_labels, epochs=10, batch_size= 128)
# + id="6BZ4SCBB5wSy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 52} outputId="72e49d49-ca8d-4fd3-e396-882504112e44"
# evaluating model on test data
test_loss, test_acc = model.evaluate(test_images, test_labels)
print("result", test_acc)
# + id="i79R_XQD8qCU" colab_type="code" colab={}
| keras_practise_1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.9 64-bit (''site_similarity'': conda)'
# metadata:
# interpreter:
# hash: 72b3faef5542ae75c34eb0d3b11ce0fc432eb00b9ccfc309dfbebb58f482608a
# name: python3
# ---
import sys
sys.path.append("/home/panayot/Documents/site_similarity")
from utils.notebook_utils import eval_node2vec_models
models = [
'corpus_2020_audience_overlap_sites_lvl_three_unweighted_64D.model',
'corpus_2020_audience_overlap_sites_lvl_three_unweighted_512D.model',
'corpus_2020_audience_overlap_sites_lvl_three_unweighted_1024D.model',
'corpus_2020_audience_overlap_sites_lvl_four_unweighted_64D.model',
'corpus_2020_audience_overlap_sites_lvl_four_unweighted_128D.model',
'corpus_2020_audience_overlap_sites_lvl_four_unweighted_256D.model',
'corpus_2020_audience_overlap_sites_lvl_four_unweighted_512D.model',
'corpus_2020_audience_overlap_sites_lvl_four_unweighted_1024D.model'
]
results_2020 = eval_node2vec_models(models, data_year='2020')
results_2020
| notebooks/audience_overlap_node2vec_models/corpus_2020_bigger_node2vec_models_EVAL.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit
# name: python385jvsc74a57bd031f2aee4e71d21fbe5cf8b01ff0e069b9275f58929596ceb00d14d90e3e16cd6
# ---
# +
# import
import os
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torchvision
import torchvision.datasets as datasets
import torchvision.transforms as transforms
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
from pathlib import Path
import matplotlib.pyplot as plt
import time
import copy
# -
# Check GPU Availability
torch.cuda.is_available()
# +
BATCH_SIZE = 4
# torchvision.datasets.MNIST outputs a set of PIL images
# We transform them to tensors
transform = transforms.ToTensor()
# Load and transform data
trainset = datasets.MNIST('/tmp', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=BATCH_SIZE, shuffle=True, num_workers=2)
testset = datasets.MNIST('/tmp', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=BATCH_SIZE, shuffle=False, num_workers=2)
# -
def show_batch(batch):
im = torchvision.utils.make_grid(batch)
plt.imshow(np.transpose(im.numpy(), (1, 2, 0)))
# +
dataiter = iter(trainloader)
images, labels = next(dataiter)  # DataLoader iterators no longer expose .next() in newer PyTorch
print('Labels: ', labels)
print('Batch shape: ', images.size())
show_batch(images)
# -
images.view(BATCH_SIZE, -1).size()
class MLP_MNIST(nn.Module):
def __init__(self):
super(MLP_MNIST, self).__init__()
self.linear1 = nn.Linear(28*28, 256)
self.linear2 = nn.Linear(256, 10)
def forward(self, x):
h_relu = F.relu(self.linear1(x.view(BATCH_SIZE, -1)))
y_pred = self.linear2(h_relu)
return y_pred
model = MLP_MNIST()
model
def train(model, trainloader, criterion, optimizer, n_epochs=2):
total_loss = 0
for t in range(n_epochs):
for i, data in enumerate(trainloader):
inputs, labels = data
inputs, labels = Variable(inputs), Variable(labels)
            # Gradients accumulate across backward() calls, so reset them each step
            optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, labels) # Compute the loss
            total_loss += loss.item()  # .item() extracts the scalar without keeping the autograd graph
loss.backward() # Compute the gradient for each variable
optimizer.step() # Update the weights according to the computed gradient
if not i % 2000:
print(t, i, total_loss)
# + tags=[]
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=1e-6)
train(model, trainloader, criterion, optimizer, n_epochs=2)
# +
def predict(model, images):
outputs = model(Variable(images))
    _, predicted = torch.max(outputs.data, 1)  # dim=1 is the class dimension: argmax over the 10 class scores
return predicted
dataiter = iter(testloader)
images, labels = next(dataiter)
show_batch(images)
print('Prediction: ', predict(model, images))
# +
def test(model, testloader, n):
correct = 0
for data in testloader:
inputs, labels = data
pred = predict(model, inputs)
        correct += (pred == labels).sum().item()  # .item() keeps the running count a plain int
return 100 * correct / n
print('Accuracy: ', test(model, testloader, len(testset)))
| Lab02/Lab02_PyTorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# #source: https://people.sc.fsu.edu/~jburkardt/py_src/lorenz_ode/lorenz_ode.html
# #scholar: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0469(1963)020%3C0130:DNF%3E2.0.CO;2
# +
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import time
def lorenz_rhs ( t, m, xyz ):
omega = 10
    b = 8.0 / 3.0  # floats: 8/3 truncates to 2 under the Python 2 kernel
rho = omega*(omega+b+3)*np.power((omega-b-1),-1)
dxdt = np.zeros ( 3 )
dxdt[0] = omega * ( xyz[1] - xyz[0] )
dxdt[1] = xyz[0] * ( rho - xyz[2] ) - xyz[1]
dxdt[2] = xyz[0] * xyz[1] - b * xyz[2]
return dxdt
def rk4vec ( t0, m, u0, dt, f ):
#*****************************************************************************80
#
## RK4VEC takes one Runge-Kutta step for a vector ODE.
#
# Licensing:
#
# This code is distributed under the GNU LGPL license.
#
# Modified:
#
# 24 May 2016
#
# Author:
#
# <NAME>
#
# Parameters:
#
# Input, real T0, the current time.
#
# Input, integer M, the spatial dimension.
#
# Input, real U0(M), the solution estimate at the current time.
#
# Input, real DT, the time step.
#
# Input, function uprime = F ( t, m, u )
# which evaluates the derivative UPRIME(1:M) given the time T and
# solution vector U(1:M).
#
# Output, real U(M), the fourth-order Runge-Kutta solution
# estimate at time T0+DT.
#
# Get four sample values of the derivative.
#
f0 = f ( t0, m, u0 )
t1 = t0 + dt / 2.0
u1 = u0 + dt * f0 / 2.0
f1 = f ( t1, m, u1 )
t2 = t0 + dt / 2.0
u2 = u0 + dt * f1 / 2.0
f2 = f ( t2, m, u2 )
t3 = t0 + dt
u3 = u0 + dt * f2
    f3 = f ( t3, m, u3 )
#
# Combine them to estimate the solution U at time T1.
#
u = u0 + dt * ( f0 + 2.0 * f1 + 2.0 * f2 + f3 ) / 6.0
return u
def main():
omega = 10
a_root = .5
    b = 8.0 / 3.0  # floats: 8/3 truncates to 2 under the Python 2 kernel
r = omega*(omega+b+3)*np.power((omega-b-1),-1)
print(r)
roots = np.roots([1.,(omega+b+1),(r+omega)*b,2*omega*b*(r-1)])
print(roots)
n = 100000
t_final = 40.
dt = t_final/n
t = np.linspace ( 0.0, t_final, n + 1 )
x = np.zeros ( n + 1 )
y = np.zeros ( n + 1 )
z = np.zeros ( n + 1 )
#x[0] = 8.0
#y[0] = 1.0
#z[0] = 1.0
x[0] = roots[0]
y[0] = roots[1]
z[0] = roots[2]
for j in range ( 0, n ):
xyz = np.array ( [ x[j], y[j], z[j] ] )
xyz = rk4vec ( t[j], 3, xyz, dt, lorenz_rhs )
x[j+1] = xyz[0]
y[j+1] = xyz[1]
z[j+1] = xyz[2]
plt.plot(t,x)
plt.plot(t,y)
plt.plot(t,z)
plt.legend(['x','y','z'])
plt.show()
fig = plt.figure ( )
ax = fig.gca ( projection = '3d' )
ax.plot ( x, y, z, linewidth = 2, color = 'b' )
ax.grid ( True )
ax.set_xlabel ( '<--- X(T) --->' )
ax.set_ylabel ( '<--- Y(T) --->' )
ax.set_zlabel ( '<--- Z(T) --->' )
plt.show()
main()
# -
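As a standalone sanity check (not part of the original script), a correct classical RK4 step — whose fourth stage must be evaluated at `(t + dt, u + dt * f2)` — reproduces the exact solution of dy/dt = -y with fourth-order accuracy:

```python
import math

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2.0, y + dt * k1 / 2.0)
    k3 = f(t + dt / 2.0, y + dt * k2 / 2.0)
    k4 = f(t + dt, y + dt * k3)  # fourth stage: full step, using k3
    return y + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# Integrate dy/dt = -y from y(0) = 1 to t = 1 and compare with e^{-1}
y, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk4_step(lambda t, y: -y, t, y, dt)
    t += dt

print(abs(y - math.exp(-1.0)))  # error well below 1e-8 for dt = 0.01
```

Evaluating the last stage at the midpoint instead (reusing `k1`'s arguments) silently degrades the method to lower-order accuracy, which is easy to miss because the trajectory still looks plausible.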
| flow.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Don't forget to shut down kernel by running the cell of the bottom.
#Don't modify this cell.
#Loading the file
import pandas as pd
FILE_NAME = "test.szf"
df = pd.read_csv(FILE_NAME, sep = "\t")
print(df)
# +
#Enter the starting row and column for the quantified value.
START_ROW = 4
START_COL = 2
COMPOUND_ID_NAME = "InChi"
#PCA
from sklearn.decomposition import PCA
pca = PCA(n_components = 2)
df_values = df.iloc[START_ROW:, START_COL:].astype(float)
df_values = df_values.apply(lambda x: (x - x.mean()) / x.std(), axis=1) #Auto Scaling
df_values["Name"] = df["Name"][START_ROW:]
df_values[COMPOUND_ID_NAME] = df[COMPOUND_ID_NAME][START_ROW:]
df_values = df_values.dropna() #Missing value processing
drop_col = ["Name", COMPOUND_ID_NAME]
x = pca.fit_transform(df_values.drop(drop_col, axis=1).T)
embed = pd.DataFrame(x)
pc1_variance = pca.explained_variance_ratio_[0] * 100
pc2_variance = pca.explained_variance_ratio_[1] * 100
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(6, 12))
ax1 = fig.add_subplot(2, 1, 1)
ax1.scatter(embed.iloc[:, 0], embed.iloc[:, 1])
for x, y, name in zip(embed.iloc[:, 0], embed.iloc[:, 1], df_values.drop(drop_col, axis=1).columns[0:]):
ax1.annotate(name, xy = (x, y))
ax1.grid()
ax1.set_title("Score Plot")
ax1.set_xlabel("PC1({:.1f}%)".format(pc1_variance))
ax1.set_ylabel("PC2({:.1f}%)".format(pc2_variance))
import numpy as np
loading = pca.components_*np.c_[np.sqrt(pca.explained_variance_)]
loading_plot = pd.DataFrame({"PC1" : loading[0], "PC2" : loading[1], "Name" : df_values["Name"], COMPOUND_ID_NAME : df_values[COMPOUND_ID_NAME]})
ax2 = fig.add_subplot(2, 1, 2)
ax2.scatter(loading_plot["PC1"], loading_plot["PC2"])
for x, y, name in zip(loading_plot["PC1"], loading_plot["PC2"], loading_plot["Name"]):
ax2.annotate(name, xy = (x, y))
ax2.grid()
ax2.set_title("Loading Plot")
ax2.set_xlabel("PC1({:.1f}%)".format(pc1_variance))
ax2.set_ylabel("PC2({:.1f}%)".format(pc2_variance))
fig.tight_layout()
plt.show()
# -
for i in ["PC1", "PC2"]:
loading_plot = loading_plot.sort_values(i, ascending=False)
plt.figure(figsize=(20, 6))
plt.bar(range(len(loading_plot)), loading_plot[i], tick_label = loading_plot["Name"])
plt.xticks(rotation=90)
plt.title("Loading Plot(" + i + ")")
plt.xlabel("Compound Name")
plt.ylabel(i)
plt.show()
# +
TOP = 10
loading_plot = loading_plot.reindex(loading_plot.PC1.abs().sort_values(ascending=False).index)
pc1 = loading_plot[COMPOUND_ID_NAME][:TOP]
loading_plot = loading_plot.reindex(loading_plot.PC2.abs().sort_values(ascending=False).index)
pc2 = loading_plot[COMPOUND_ID_NAME][:TOP]
output = pd.concat([pc1,pc2]).drop_duplicates().reset_index(drop=True)
#Creating an output file
output.to_csv("output.txt", sep = "\t", index = False, header=False)
# -
#shutting down the kernel
import os
os.rename("../Terminate/OFF", "../Terminate/ON")
| To Jupyter Notebook/template.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Understanding Hyperbole using RSA
#
# "My new kettle cost a million dollars."
#
# Hyperbole -- using an exaggerated utterance to convey strong opinions -- is a common non-literal use of language. Yet non-literal uses of language are impossible under the simplest RSA model. Kao et al. suggested that two ingredients could be added to enable RSA to capture hyperbole. First, the state conveyed by the speaker and reasoned about by the listener should include affective dimensions. Second, the speaker only intends to convey information relevant to a particular topic, such as "how expensive was it?" or "how am I feeling about the price?"; pragmatic listeners hence jointly reason about this topic and the state.
# +
#first some imports
import torch
torch.set_default_dtype(torch.float64) # double precision for numerical stability
import collections
import argparse
import matplotlib.pyplot as plt
import pyro
import pyro.distributions as dist
import pyro.poutine as poutine
from search_inference import factor, HashingMarginal, memoize, Search
# -
# As in the simple RSA example, the inference helper `Marginal` takes an un-normalized stochastic function, constructs the distribution over execution traces by using `Search`, and constructs the marginal distribution on return values (via `HashingMarginal`).
def Marginal(fn):
return memoize(lambda *args: HashingMarginal(Search(fn).run(*args)))
# The domain for this example will be states consisting of price (e.g. of a tea kettle) and the speaker's emotional arousal (whether the speaker thinks this price is irritatingly expensive). Priors here are adapted from experimental data.
# +
State = collections.namedtuple("State", ["price", "arousal"])
def price_prior():
values = [50, 51, 500, 501, 1000, 1001, 5000, 5001, 10000, 10001]
probs = torch.tensor([0.4205, 0.3865, 0.0533, 0.0538, 0.0223, 0.0211, 0.0112, 0.0111, 0.0083, 0.0120])
ix = pyro.sample("price", dist.Categorical(probs=probs))
return values[ix]
def arousal_prior(price):
probs = {
50: 0.3173,
51: 0.3173,
500: 0.7920,
501: 0.7920,
1000: 0.8933,
1001: 0.8933,
5000: 0.9524,
5001: 0.9524,
10000: 0.9864,
10001: 0.9864
}
return pyro.sample("arousal", dist.Bernoulli(probs=probs[price])).item() == 1
def state_prior():
price = price_prior()
state = State(price=price, arousal=arousal_prior(price))
return state
# -
# Now we define a version of the RSA speaker that only produces *relevant* information for the literal listener. We define relevance with respect to a Question Under Discussion (QUD) -- this can be thought of as defining the speaker's current attention or topic.
#
# The speaker is defined mathematically by:
#
# $$P_S(u|s,q) \propto \left[ \sum_{w'} \delta_{q(w')=q(w)} P_\text{Lit}(w'|u) p(u) \right]^\alpha $$
#
# To implement this as a probabilistic program, we start with a helper function `project`, which takes a distribution over some (discrete) domain and a function `qud` on this domain. It creates the push-forward distribution, using `Marginal` (as a Python decorator). The speaker's relevant information is then simply information about the state in this projection.
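The push-forward construction can also be illustrated without Pyro: given a discrete distribution over states and a QUD function, probability mass is summed over all states that map to the same QUD value. A library-free sketch (the `push_forward` name is ours, not the tutorial's `Marginal` machinery):

```python
from collections import defaultdict

def push_forward(dist, qud):
    """Project a discrete distribution {state: prob} through a QUD function."""
    out = defaultdict(float)
    for state, p in dist.items():
        out[qud(state)] += p
    return dict(out)

# States are (price, arousal) pairs; the "arousal" QUD discards the price.
listener = {(50, False): 0.5, (50, True): 0.25, (10000, True): 0.25}
print(push_forward(listener, lambda s: s[1]))
# {False: 0.5, True: 0.5}
```

The speaker then only needs its utterance to pin down the QUD value in this projected distribution, not the full state — which is exactly what lets non-literal prices survive.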
# +
@Marginal
def project(dist,qud):
v = pyro.sample("proj",dist)
return qud_fns[qud](v)
@Marginal
def literal_listener(utterance):
state=state_prior()
factor("literal_meaning", 0. if meaning(utterance, state.price) else -999999.)
return state
@Marginal
def speaker(state, qud):
alpha = 1.
qudValue = qud_fns[qud](state)
with poutine.scale(scale=torch.tensor(alpha)):
utterance = utterance_prior()
literal_marginal = literal_listener(utterance)
projected_literal = project(literal_marginal, qud)
pyro.sample("listener", projected_literal, obs=qudValue)
return utterance
# -
# The possible QUDs capture that the speaker may be attending to the price, her affect, or some combination of these. We assume a uniform QUD prior.
# +
#The QUD functions we consider:
qud_fns = {
"price": lambda state: State(price=state.price, arousal=None),
"arousal": lambda state: State(price=None, arousal=state.arousal),
"priceArousal": lambda state: State(price=state.price, arousal=state.arousal),
}
def qud_prior():
    values = list(qud_fns.keys())  # dict_keys is not indexable, so materialize a list
ix = pyro.sample("qud", dist.Categorical(probs=torch.ones(len(values)) / len(values)))
return values[ix]
# -
# Now we specify the utterance meanings (standard number word denotations: "N" means exactly $N$) and a uniform utterance prior.
# +
def utterance_prior():
utterances = [50, 51, 500, 501, 1000, 1001, 5000, 5001, 10000, 10001]
ix = pyro.sample("utterance", dist.Categorical(probs=torch.ones(len(utterances)) / len(utterances)))
return utterances[ix]
def meaning(utterance, price):
return utterance == price
# -
# OK, let's see what number term this speaker will say to express different states and QUDs.
# +
#silly plotting helper:
def plot_dist(d):
support = d.enumerate_support()
data = [d.log_prob(s).exp().item() for s in d.enumerate_support()]
names = support
ax = plt.subplot(111)
width=0.3
    bins = [x - width / 2 for x in range(1, len(data) + 1)]  # a list, not a lazy map object
    ax.bar(bins, data, width=width)
    ax.set_xticks(range(1, len(data) + 1))
ax.set_xticklabels(names,rotation=45, rotation_mode="anchor", ha="right")
# plot_dist( speaker(State(price=50, arousal=False), "arousal") )
# plot_dist( speaker(State(price=50, arousal=True), "price") )
plot_dist( speaker(State(price=50, arousal=True), "arousal") )
# -
# Try different values above! When will the speaker favor non-literal utterances?
# Finally, the pragmatic listener doesn't know what the QUD is and so jointly reasons about this and the state.
@Marginal
def pragmatic_listener(utterance):
state = state_prior()
qud = qud_prior()
speaker_marginal = speaker(state, qud)
pyro.sample("speaker", speaker_marginal, obs=utterance)
return state
# How does this listener interpret the uttered price "10,000"? On the one hand this is a very unlikely price *a priori*; on the other hand, if it were true it would come with strong arousal. Altogether this becomes a plausible *hyperbolic* utterance:
plot_dist( pragmatic_listener(10000) )
# ## Pragmatic Halo
#
# "It cost fifty dollars" is often interpreted as costing *around* 50 -- plausibly 51; yet "it cost fifty-one dollars" is interpreted as 51 and definitely not 50. This asymmetric imprecision is often called the pragmatic halo or pragmatic slack.
#
# We can extend the hyperbole model to capture this additional non-literal use of numbers by including QUD functions that collapse nearby numbers and assuming that round numbers are slightly more likely (because they are less difficult to utter).
# +
#A helper to round a number to the nearest ten:
def approx(x, b=None):
if b is None:
b = 10.
div = float(x)/b
rounded = int(div) + 1 if div - float(int(div)) >= 0.5 else int(div)
return int(b) * rounded
#The QUD functions we consider:
qud_fns = {
"price": lambda state: State(price=state.price, arousal=None),
"arousal": lambda state: State(price=None, arousal=state.arousal),
"priceArousal": lambda state: State(price=state.price, arousal=state.arousal),
"approxPrice": lambda state: State(price=approx(state.price), arousal=None),
"approxPriceArousal": lambda state: State(price=approx(state.price), arousal=state.arousal),
}
def qud_prior():
    values = list(qud_fns.keys())  # dict_keys is not indexable, so materialize a list
ix = pyro.sample("qud", dist.Categorical(probs=torch.ones(len(values)) / len(values)))
return values[ix]
def utterance_cost(numberUtt):
preciseNumberCost = 10.
return 0. if approx(numberUtt) == numberUtt else preciseNumberCost
def utterance_prior():
utterances = [50, 51, 500, 501, 1000, 1001, 5000, 5001, 10000, 10001]
utteranceLogits = -torch.tensor(list(map(utterance_cost, utterances)),
dtype=torch.float64)
ix = pyro.sample("utterance", dist.Categorical(logits=utteranceLogits))
return utterances[ix]
# -
# The RSA speaker and listener definitions are unchanged:
# +
@Marginal
def literal_listener(utterance):
state=state_prior()
factor("literal_meaning", 0. if meaning(utterance, state.price) else -999999.)
return state
@Marginal
def speaker(state, qud):
alpha = 1.
qudValue = qud_fns[qud](state)
with poutine.scale(scale=torch.tensor(alpha)):
utterance = utterance_prior()
literal_marginal = literal_listener(utterance)
projected_literal = project(literal_marginal, qud)
pyro.sample("listener", projected_literal, obs=qudValue)
return utterance
@Marginal
def pragmatic_listener(utterance):
state = state_prior()
qud = qud_prior()
speaker_marginal = speaker(state, qud)
pyro.sample("speaker", speaker_marginal, obs=utterance)
return state
# -
# OK, let's see if we get the desired asymmetric slack (we're only interested in the interpreted price here, so we marginalize out the arousal).
# +
@Marginal
def pragmatic_listener_price_marginal(utterance):
return pyro.sample("pm", pragmatic_listener(utterance)).price
plot_dist(pragmatic_listener_price_marginal(50))
# -
plot_dist(pragmatic_listener_price_marginal(51))
# ## Irony and More Complex Affect
#
# In the above hyperbole model we assumed a very simple model of affect: a single dimension with two values (high and low arousal). Actual affect is best represented as a two-dimensional space corresponding to valence and arousal. Kao and Goodman (2015) showed that extending the affect space to these two dimensions immediately introduces a new usage of numbers: verbal irony in which an utterance corresponding to a high-arousal positive valence state is used to convey a high-arousal but negative valence (or vice versa).
| tutorial/source/RSA-hyperbole.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Categorical Transformation for DL
# > a collection of utilities for categorical transformations
# +
# default_exp category
# -
# ## Imports
# + code_folding=[]
# export
import pandas as pd
import numpy as np
from pathlib import Path
import json
from torch.utils.data.dataset import Dataset
from torch.utils.data.dataloader import DataLoader
from typing import Iterable, Dict, List
class C2I:
"""
Category to indices
>>> c2i = C2I(
["class 1", "class 2", ..., "class n"],
pad_mst=True,
)
>>> c2i[["class 2", "class 5"]]
    array([2, 5])
    If the key you pass in the slicing is a np.ndarray or list,
    a vectorized function will be used
"""
def __init__(
self,
arr: Iterable,
pad_mst: bool = False,
):
self.pad_mst = pad_mst
self.pad = ["[MST]", ] if self.pad_mst else []
self.dict = dict(
(v, k) for k, v in enumerate(self.pad + list(arr)))
self.get_int = self.get_get_int()
self.get_int_ = np.vectorize(self.get_int)
def get_get_int(self,):
if self.pad_mst:
def get_int(idx: str) -> int:
if idx in self.dict:
return self.dict[idx]
else:
return 0
else:
def get_int(idx: str) -> int:
return self.dict[idx]
return get_int
def __repr__(self) -> str:
return f"C2I:{self.__len__()} categories"
def __len__(self):
return len(self.dict)
def __getitem__(self, k: int):
if type(k) in [np.ndarray, list]:
# use vectorized function
return self.get_int_(k)
else:
# use the original python function
return self.get_int(k)
class Category:
"""
Manage categorical translations
c = Category(
["class 1", "class 2", ..., "class n"],
pad_mst=True,)
c.c2i[["class 3","class 6"]]
c.i2c[[3, 2, 1]]
"""
def __init__(
self,
arr: Iterable,
pad_mst: bool = False
):
self.pad_mst = pad_mst
self.c2i = C2I(arr, pad_mst=pad_mst)
self.i2c = np.array(self.c2i.pad+list(arr))
def save(self, path: Path) -> None:
"""
save category information to json file
"""
with open(path, "w") as f:
json.dump(self.i2c.tolist(), f)
@classmethod
def load(cls, path: Path):
"""
load category information from a json file
"""
with open(path, "r") as f:
l = np.array(json.load(f))
if l[0] == "[MST]":
return cls(l[1:], pad_mst=True)
else:
return cls(l, pad_mst=False)
def __len__(self):
return len(self.i2c)
def __repr__(self):
        return f"Category Manager with {self.__len__()} categories"
class TreeCategory(Category):
"""
Manage categorical translations
c = Category(
["class 1", "class 2", ..., "class n"],
pad_mst=True,)
c.c2i[["class 3","class 6"]]
c.i2c[[3, 2, 1]]
"""
def __init__(
self,
parent_map: Dict[str, str],
pad_mst: bool = False
):
self.parent_map = parent_map
arr = np.array(list(self.parent_map.keys()))
super().__init__(arr, pad_mst=pad_mst)
self.ancestor_map = dict()
for name in self.parent_map.keys():
self.find_ancestor_map(name)
self.get_depth_map()
self.get_depth_map_array()
def find_ancestor_map(
self, name: str
) -> Dict[str, List[str]]:
if name in self.ancestor_map:
return self.ancestor_map[name]
if name not in self.parent_map:
return []
else:
result = [name, ]+self.find_ancestor_map(self.parent_map[name])
self.ancestor_map[name] = result
return result
def tree_hot(self, name: str) -> np.array:
"""
return tree hot encoding name according to category
"""
target = np.zeros(len(self), dtype=int)
target[self.c2i[self.ancestor_map[name]]]=1
return target
def get_depth_map(self) -> Dict[str, int]:
self.depth_map = dict(
(k, len(v)) for k,v in self.ancestor_map.items())
return self.depth_map
def get_depth_map_array(self) -> np.array:
self.depth_map_array = np.vectorize(
self.depth_map.get)(self.i2c)
return self.depth_map_array
def __repr__(self):
return f"""Tree Category({len(self)}):\n\tself.tree_hot("name")\tself.ancestor_map\tself.depth_map_array"""
# -
# ## Indexing forward and backward
cates = Category(list(map(lambda x:f"Cate_{x+1}",range(50))))
cates
cates.i2c[:5]
test_c = np.random.randint(1,50,1000)
# ### Indices to categories
labels = cates.i2c[test_c]
labels[:20]
# ### Category to indices
cates.c2i[labels[:20]]
# Using vectorized function
# %%time
for i in range(200):
indices_generated = cates.c2i[labels]
# Using the original python function
# %%time
for i in range(200):
indices_generated2 = list(cates.c2i.get_int(l) for l in labels)
# Transform forward and backward and check fidelity
(cates.c2i[labels]==test_c).mean()
# ## With missing tokens
# We can set pad_mst to True to manage missing token
nt = Category("ATCG", pad_mst=True)
# ### Categories to indices
nt.c2i[list("AAACCTTATTGCAGCOAAT")]
# ### Indices to categories
nt.i2c[[1, 1, 1, 3, 3, 2, 2, 2, 2, 4, 3, 1, 4, 3, 0, 1, 1, 2]]
# ## Data save and load
# ### Save categories
nt.save("atcg.json")
# ### Load categories
cm = Category.load("atcg.json")
cm
# ## Tree data
# Sometimes the target should be encoded as a tree-structured multi-hot vector rather than a one-hot vector; the benefit of this is explained in this [colab notebook](https://colab.research.google.com/github/raynardj/python4ml/blob/master/experiments/treehot_encoding.ipynb)
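Before pulling the OncoTree taxonomy below, the encoding can be shown on a tiny hypothetical hierarchy: a label's vector switches on the label itself plus every ancestor. This is a plain-Python sketch of what `TreeCategory.tree_hot` computes (the three-node taxonomy is invented for illustration):

```python
# Hypothetical three-node taxonomy: child -> parent (the root maps to None)
parent_map = {"tissue": None, "lung": "tissue", "nsclc": "lung"}

def ancestors(name):
    """Return the chain [name, parent, grandparent, ...] up to the root."""
    chain = []
    while name is not None:
        chain.append(name)
        name = parent_map[name]
    return chain

labels = list(parent_map)  # fixed label order -> one index per category

def tree_hot(name):
    hot = [0] * len(labels)
    for a in ancestors(name):
        hot[labels.index(a)] = 1
    return hot

print(tree_hot("nsclc"))  # [1, 1, 1] -- the label and both of its ancestors
print(tree_hot("lung"))   # [1, 1, 0]
```

A misclassification into a sibling of the true leaf then still scores partial credit on the shared ancestor bits, which is the point of training against these targets with BCE.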
# +
import requests
tree_str = requests.get("http://oncotree.mskcc.org/api/tumorTypes/tree?&version=oncotree_latest_stable").text
tree = json.loads(tree_str)
# +
parent_map = dict()
def get_pairs(node):
if "name" in node:
parent_map.update({node.get("code"):node.get("parent")})
if "children" in node:
for c,child in node["children"].items():
get_pairs(child)
get_pairs(tree['TISSUE'])
# -
tree_category = TreeCategory(parent_map)
tree_category
# You can transform the category data into the following multi-hot hierarchical encoding vector.
#
# The rest is good old BCELoss
tree_category.tree_hot("NCCRCC")
tree_category.depth_map_array
# With the above array, you can achieve many tasks.
#
# eg. you can calc accuracy, loss, f1 for each level
#
# ```python
# crit = nn.BCEWithLogitsLoss()
# def accuracy(y,y_): return (y==(y_>.5)).float().mean()
# loss = crit(y,y_)
# acc = accuracy(y,y_)
#
# level_map = torch.LongTensor(tree_category.depth_map_array).cuda()
#
# # calc metrics for level2, level3, level4
# loss_l = dict()
# acc_l = dict()
# for level in [2,3,4]:
# y_level = y[level_map==level]
# y_hat_level = y_[level_map==level]
# loss_l[level] = crit(y_level, y_hat_level)
# acc_l[level] = accuracy(y_level, y_hat_level)
# ```
#
# Or assign different weights to different level of loss, etc.
#
# Visualize level for first 100 categories
from matplotlib import pyplot as plt
plt.figure(figsize=(20,10))
plt.imshow(
np.stack(list(tree_category.depth_map_array==i
for i in range(1,tree_category.depth_map_array.max()+1)))[:,:100])
| nbs/53_category.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="PM4T1W4tsHs5"
# <img style="float: left;;" src='https://github.com/Enr1queRojas/Propedeutico/blob/main/Introduccion/Imagenes/iteso.jpg?raw=1' width="50" height="100"/></a>
#
# # <center> <font color= #000047> Introduction to Python: Lists, Iterations and Strings
# + id="kc9XV0wGsNuD"
# + [markdown] id="RaKilJrVsHta"
#
# <img style="float: right; margin: 0px 0px 15px 15px;" src="https://www.python.org/static/community_logos/python-logo.png" width="200px" height="200px" />
#
# > We already know a bit more of Python's syntax, how to write functions and how to use conditionals. It is time to look at other variable types (arrays) and how to write lines of code that perform repetitive operations.
#
# References:
# - https://www.kaggle.com/learn/python
# ___
# + [markdown] id="D6tG994vsHuC"
# # 1. Lists
#
# Lists are Python objects that represent ordered sequences of values.
#
# Let's see a couple of examples of how to create them:
# + id="EHO-87g1sHue"
# First prime numbers
primes = [2, 3, 5, 7, 11]

# Planets of the solar system
planets = ['Mercury', 'Venus', 'Earth', 'Mars',
           'Jupiter', 'Saturn', 'Uranus', 'Neptune']
# + id="tkshBm89sHum"
# + id="bIFISht9sHu3"
# + [markdown] id="U5OtFE1JsHva"
# We see that lists are not exclusively made of numbers.
#
# We already saw lists of numbers, but also of strings.
#
# We can even make lists of lists:
# + id="fH_wmcjNsHwN"
# + id="f2Zer0v1sHwn"
# + [markdown] id="3JdxMZojsHxL"
# Moreover, we can make lists holding different types of objects:
# + id="2DuGZbn9sHxT"
# + [markdown] id="eg09k83GsHxa"
# Without a doubt, it will often be more useful to have a single list holding several results than many results stored in individual objects.
#
# But, once in the list, how do we access the individual objects?
# + [markdown] id="eDlHCn6FsHx7"
# ## 1.1 Indexing
#
# We can access the individual elements of a list using brackets ([]).
#
# For example, which planet is closest to the sun in our solar system?
#
# - An important note here: Python uses indices starting at zero (0):
# + id="31Q2q-4csHyK"
# Planet closest to the sun
# + id="uKSsYq9_sHyZ"
# Next planet
# + [markdown] id="LefjNOT_sHyh"
# All good...
#
# Now, which is the planet farthest from the sun?
#
# - The elements of a list can also be accessed from back to front, using negative numbers:
# + id="TfAz34M-sHyp"
# Planet farthest from the sun
# + id="icjF1dO_sHyp"
# Second-farthest planet
# + [markdown] id="fj73yU1YsHy4"
# Very good...
#
# And what if we wanted to find out, say, which are the three planets closest to the sun?
# + id="nq0-HLVZsHy_"
# First three planets
# + [markdown] id="WFzpLxR-sHzH"
# So `lista[a:b]` is how we ask for all the elements of the list whose index starts at `a` and runs up to, but not including, `b` (that is, up to `b-1`).
#
# The start and end indices are optional:
# - If we omit the start index, it defaults to zero (0): `lista[:b] == lista[0:b]`
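# For instance (an illustrative sketch; the planet list is assumed, not taken from the notebook's state):

```python
planetas = ['Mercurio', 'Venus', 'Tierra', 'Marte',
            'Jupiter', 'Saturno', 'Urano', 'Neptuno']

# Omitting the start index defaults it to 0
print(planetas[:3])   # same as planetas[0:3]

# Omitting the end index defaults it to len(planetas)
print(planetas[4:])   # from Jupiter to the end
```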
# + id="e3Ku9WREsHzI"
# Rewrite the previous expression
# + id="XXx9wXzBsHzP"
# + [markdown] id="hSFaHGUnsHzX"
# - Equivalently, if we omit the end index, it defaults to the length of the list:
# + id="6LZEYmrOsHzX"
# List of all the planets starting from planet Earth
# + [markdown] id="Th-ykEDDsHzf"
# We can also use negative indices when accessing several objects at once.
#
# For example, what do we get with the following expression?
# + id="Uhoh46FTsHzv"
# + [markdown] id="jg2O3H9ksHzv"
# ```python
# lista[n:n + N] = [lista[n], lista[n + 1], ..., lista[n + N - 1]]
# ```
# + id="Gk-zghAMsHzv"
# + [markdown] id="_saIR2W5sHz3"
# Slice:
#
# ```python
# lista[n:n+N:s] = [lista[n], lista[n + s], lista[n + 2 * s], ..., ]
# ```
# + id="KjuMrqEMsHz3"
# + id="0r_gL2bpsHz4"
# + id="iieHxALysHz4"
# Elements of the list in reverse order
# + [markdown] id="aJmRZVRgsH0A"
# ## 1.2 Modifying lists
#
# Lists are "mutable" objects; that is, their elements can be modified directly in place.
#
# One way to modify a list is to assign to an index.
#
# For example, suppose the scientific community, with arguments based on the planet's composition, decided to rename "Planet Earth" to "Planet Water".
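# A sketch of that renaming by index assignment (the list contents are illustrative assumptions):

```python
planetas = ['Mercurio', 'Venus', 'Tierra', 'Marte']

# Rename "Tierra" by assigning to its index
planetas[2] = 'Agua'
print(planetas)
```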
# + id="lYXzMjnlsH0A"
# + id="62KN-RJasH0A"
# + id="LGac_hj0sH0I"
# + [markdown] id="CjLwFmK7sH0I"
# We can also change several elements of the list at once:
# + id="kgHKFBZ1sH0I"
# + id="DAG-A5_FsH0J"
# + [markdown] id="o4EaAdCIsH0J"
# ## 1.3 Functions on lists
#
# Python has several extremely useful built-in functions for working with lists.
#
# `len()` gives us the length (number of elements) of a list:
# + id="6xjrHQ5fsH0J"
# The len() function
# + id="LO0jkcd5sH0J"
# + [markdown] id="gw6hcP1qsH0R"
# `sorted()` returns a sorted version of a list:
# + id="3G0BOIWzsH0R"
# Help on the sorted function
# + id="livYNAlVsH0R"
# Call the sorted function on primos
# + id="TjJEMNgSsH0S" outputId="1b1d491c-21ac-4b8f-930d-9bb9d43cf842" colab={"base_uri": "https://localhost:8080/", "height": 130}
planetas = ['Mercurio','Venus','Tierra','Marte','Jupiter','Saturno','Urano','Neptuno']
# + id="PeT1r40BsH0S" outputId="210881f6-028d-4c93-a56e-8c896062c746" colab={"base_uri": "https://localhost:8080/"}
planetas
# + id="YsDV8GVcsH0S"
# Call the sorted function on planetas
def long_str2 (s):
return len(s)
# + id="kq7H08Y-sH0S"
long_str2 = lambda s: len(s)
# + id="-9_GE4ayzpLP"
# + [markdown] id="zE379hZqsH0T"
# **Aside: anonymous functions**
#
# Anonymous functions start with the keyword `lambda`, followed by the function's argument(s). After the `:` comes whatever the function returns.
# + id="muHwpMTwsH0T" outputId="6b7088db-ac46-41d8-9e8d-7e893842dd2a" colab={"base_uri": "https://localhost:8080/"}
sorted(planetas,key=long_str2)
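# The same idea with an inline lambda — a sketch equivalent to the call above, using a sorted copy of an assumed planet list:

```python
planetas = ['Mercurio', 'Venus', 'Tierra', 'Marte',
            'Jupiter', 'Saturno', 'Urano', 'Neptuno']

# Sort by string length instead of alphabetically (sorted() is stable,
# so ties keep their original relative order)
por_longitud = sorted(planetas, key=lambda s: len(s))
print(por_longitud)
```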
# + [markdown] id="GlVw5VUDsH0T"
# `sum()` — you can probably guess what it does:
# + id="KolC1d_FsH0T"
# Help on the sum function
# + id="9jrSxbnNsH0T"
# sum
# + [markdown] id="1CnHmlubsH0U"
# In the previous class we used the functions `min()` and `max()` with several arguments.
#
# We can also pass them a single list-type argument.
# + id="J2pQZojDsH0U" outputId="7007e030-b418-46b7-ceac-e9f38d4d3df0" colab={"base_uri": "https://localhost:8080/"}
# min
primos = [2,5,3,7]
print(min(primos))
# + id="lIDKs2_dsH0U" outputId="2dfc39d0-aa0a-4b33-be31-94fc78d9b3ba" colab={"base_uri": "https://localhost:8080/"}
# max
print(max(primos))
# + id="N4hKs00Jw3yY"
# + [markdown] id="Ez95FJ6JsH0j"
# ___
# ## Pause: Objects
#
# Up to now I have been using the word **object** without giving it much importance. What does it actually mean?
#
# - If you have seen some Python before, you may have heard that everything in Python is an object.
#
# Next week we will study, at a very basic level, what object-oriented programming is.
#
# For now, it is enough to know that objects carry several "things" with them, and we can access those "things" using Python's "dot syntax (.)".
#
# For example, numbers in Python have an associated variable called `imag`, which represents their imaginary part:
# + id="UbOSHpXFsH0k" outputId="a5f6d34b-4be3-4244-8882-77f7eed81405" colab={"base_uri": "https://localhost:8080/"}
# real and imag attributes: a + bi = 7 + 0i
a=7
a.imag, a.real
# + id="WKU8ILwcsH0k" outputId="d8cb3289-2c35-4e72-f6c5-37abe0cb26a3" colab={"base_uri": "https://localhost:8080/"}
dir(a)
# + id="Nz_JShBV1SVR" outputId="828594aa-9686-406f-8a23-a3d501d5abb3" colab={"base_uri": "https://localhost:8080/"}
type(a)
# + id="5JR-mG_V1WL9" outputId="5071c6fb-2cd3-4256-f549-85b3cc37b1da" colab={"base_uri": "https://localhost:8080/"}
a.denominator, a.numerator
# + id="8FT2gw5t1kP2"
b= (6+5j)/3
# + id="90N4IPzK1rLa" outputId="47ba721d-c55a-4b38-aaeb-5cb70b76411f" colab={"base_uri": "https://localhost:8080/"}
b.real, b.imag
# + id="hF4HyE-J17-j"
c = 5/3
# + id="TZRRvlIv17rz" outputId="f9c55764-9590-4ad4-8fc8-9716ec939b1b" colab={"base_uri": "https://localhost:8080/"}
c.as_integer_ratio()
# + [markdown] id="NYL5R25DsH0k"
# Among the "things" that objects carry, there can also be functions.
#
# A function attached to an object is called a **method**.
#
# The "things" attached to objects that are not functions are called **attributes** (example: imag).
# + id="TKTate2CsH0k" outputId="f8f1b4e9-b10c-403e-c3b4-b6cb9612a99e" colab={"base_uri": "https://localhost:8080/"}
# The conjugate() method
b
# + id="iKxUkUIZ2LaX" outputId="1ad3a503-86e6-484e-e0fd-4ecbd9ba3a8c" colab={"base_uri": "https://localhost:8080/"}
b.conjugate()
# + id="vVwE0DjB2Kwv"
# + id="kGWE3PRP2tQD"
# + [markdown] id="1yH9mrAisH0l"
# And if we don't know what a given method does on an object, we can also pass methods to the `help()` function, just as we passed it functions:
# + id="OFhOw2VdsH0l"
# help(object.method)
# + [markdown] id="A72gS1dlsH0l"
# OK, so what is all this good for?
#
# Well, lists have a multitude of useful methods that we will be using...
# ___
# + [markdown] id="mo4xeKLNsH0l"
# ## 1.4 List methods
#
# `list.append()` modifies a list by adding an element at the end:
# + id="MXqkuvfQsH0m" outputId="1ba5dc43-a966-43de-f34f-7d8a3b3b7d88" colab={"base_uri": "https://localhost:8080/"}
planetas.append('Pluton')
print(planetas)
# + id="RygphqxnsH0m" outputId="351bf43e-06d7-46c9-9a5d-8290ff3662d3" colab={"base_uri": "https://localhost:8080/"}
# Pluto is also a planet
print(planetas)
# + id="Ui_Ru01XsH0m"
# + [markdown] id="pCseRf8TsH0m"
# Why did we get no output from the cell above?
#
# Let's check the documentation of the append method:
# + [markdown] id="8s6Zb3uWsH0m"
# **Comment:** append is a method of every `list` object, so we could have called `help(list.append)`. However, if we try to call `help(append)`, Python will tell us that nothing named "append" exists, because `append` only exists in the context of lists.
# + [markdown] id="uJi9LaJ2sH0n"
# `list.pop()` removes and returns the last element of a list:
# + id="jfdtzdF9sH0n" outputId="fb59b575-4fe4-4109-e9f6-8bfae91bf502" colab={"base_uri": "https://localhost:8080/", "height": 35}
# So Pluto is not a planet after all
planetas.pop()
# + id="PuwmygEhsH0n" outputId="68d896f8-0123-4b29-f38a-1bfbdfc3c71c" colab={"base_uri": "https://localhost:8080/"}
planetas[:4]
# + id="ic0zlJ5W4liI" outputId="a2d4c6f3-1395-435d-b397-f859bffc5555" colab={"base_uri": "https://localhost:8080/"}
len(planetas)
# + id="tCtvvuU0sH0n" outputId="fff1b4ac-bc5d-46bf-cd7e-afc68d6f1e04" colab={"base_uri": "https://localhost:8080/"}
planetas
# + id="G_wI1DRjsH0n" outputId="726bec24-a7e0-43c4-8c5c-aa97b33a3383" colab={"base_uri": "https://localhost:8080/"}
planetas = ['Mercurio','Venus','Tierra','Marte','Jupiter','Saturno','Urano','Neptuno']
planetas
# + id="5WsKNfNI5jBi"
planetas.append("Pulton")
planetas.append("Pulton")
# + id="6L_4Efr95qUT" outputId="932510d2-baca-45e3-ebe6-b6f40bf7dd45" colab={"base_uri": "https://localhost:8080/"}
planetas
# + id="PXzKDhZ55ut9" outputId="52e9490b-a2c7-4729-8fed-70c64992ca64" colab={"base_uri": "https://localhost:8080/", "height": 35}
planetas.pop(-1)
# + id="MocMDz5k52Jk" outputId="9bcf90d5-2190-4384-9e5e-d51fa5490864" colab={"base_uri": "https://localhost:8080/"}
planetas
# + id="Cs8oac2-52fK"
# + [markdown] id="YW31Ia1CsH0n"
# ### 1.4.1 Searching in lists
#
# Where among the planets is a given planet? We can get its index using the `list.index()` method:
# + id="Ebx-Dm38sH0o" outputId="a1b41270-2662-416b-90b5-bee71ec289d4" colab={"base_uri": "https://localhost:8080/"}
planetas
# + id="DvK1j4LfsH0o" outputId="840678fe-128e-482f-d6db-429bd9d86c86" colab={"base_uri": "https://localhost:8080/"}
# index of the planet Marte
planetas.index('Marte')
# + id="lbtMrkAvsH0o"
# + [markdown] id="6Cw0mUAUsH0o"
# Marte has index 3, i.e. it sits in the fourth position (remember that indexing in Python starts at zero).
#
# Where is Pluto?
# + id="BaEtWOTKsH0o"
# index of the planet Pluto
# + [markdown] id="0KklbyR-sH0p"
# <font color=red> Error ... </font> as it should be!
#
# To avoid this kind of error, the `in` operator exists to determine whether a particular element belongs to a list:
# + id="Bhe-pA1rsH0p"
planetas = ['Mercurio',
'Venus',
'Tierra',
'Marte',
'Jupiter',
'Saturno',
'Urano',
'Neptuno',
'Pulton']
# + id="xR_qK6sy7o4I" outputId="3392fc71-dc38-4381-ab64-c5904d3aad92" colab={"base_uri": "https://localhost:8080/", "height": 35}
planetas.pop()
# + id="mFLyg5g9sH0p" outputId="5e3b5299-a176-46a1-96d3-a22e1b79ec10" colab={"base_uri": "https://localhost:8080/"}
# Is Earth a planet?
if 'Tierra' in planetas:
print('Tierra es un planeta')
# + id="68JLBr1asH0x" outputId="9399c55e-4d35-4a80-c221-ebeac0c7f50b" colab={"base_uri": "https://localhost:8080/"}
# Is Pluto a planet?
'Pluton' in planetas
# + id="TsVWkRXm8ICH" outputId="ace4063b-2508-4636-aa16-f2992d3271e1" colab={"base_uri": "https://localhost:8080/"}
if 'Pluton' not in planetas:
print('Pluton no es un planeta')
# + id="MH846grOsH0x"
# Use this to avoid the error above
def is_inlist(string, lista):
    # Return the index of `string` in `lista` if present; otherwise report it
    if string in lista:
        return lista.index(string)
    else:
        print(f'{string} is not in the list')
# + id="iOpJLCyz-fvv" outputId="c0dff3f4-8f0c-4544-ab1b-6436c3373681" colab={"base_uri": "https://localhost:8080/"}
is_inlist("Marte",planetas)
# + [markdown] id="yLlxs7KjsH0x"
# There are other interesting list methods that we will not cover. If you want to learn more about all the methods and attributes of a particular object, you can call the `help()` function on the object.
#
# For example:
# + id="g1YJ64TosH0y" outputId="e5f084de-0d07-4873-e890-876449c94a85" colab={"base_uri": "https://localhost:8080/"}
primos
# + id="v0piH59u-_3g"
primos.extend([11,13])
# + id="15KgOvhz_M2z" outputId="5a5fd8e3-5fa3-47e2-b150-9a581c5cdac2" colab={"base_uri": "https://localhost:8080/"}
primos
# + id="UqZ_JgNX_QTP" outputId="68c9669d-0edc-4584-f3dd-02f2712752de" colab={"base_uri": "https://localhost:8080/"}
primos.sort()
# + id="mvWKlCJU_m-j" outputId="ef4844cf-0dc2-4c91-ee27-2f5eb5134b27" colab={"base_uri": "https://localhost:8080/"}
primos
# + id="K9OqfAJQ_pGM"
primos.sort(reverse=True)
# + id="y_4wqyla_wMp" outputId="aec1d388-9d0c-427e-ee95-f451337869a1" colab={"base_uri": "https://localhost:8080/"}
primos
# + id="VxqtBckz_72n"
# Exercise: create a function that receives 2 input parameters: iter, lista.
# If the list contains fewer than 5 values, extend that list with those values.
# + id="rH1dSmdiAazy"
def extension_tool(iter, lista):
    # Extend `lista` with the elements of `iter` when it has fewer than 5 values
    if len(lista) < 5:
        lista.extend(iter)
    print(lista)
# + id="knBedKv5Bdj4"
lista = [1,2,3,4]
# + id="qad6HllRBi8w"
liter = [5,6,7]
# + id="-wSPySpUB81p" outputId="af08e638-4253-4268-c4d7-b2265fa685b3" colab={"base_uri": "https://localhost:8080/"}
len(lista)
# + id="x5ax1a_8BnOR" outputId="498a96c9-8837-4447-b6b8-47be86f7dc62" colab={"base_uri": "https://localhost:8080/"}
extension_tool(liter, lista)
# + [markdown] id="PKOr4Mm7sH0y"
# ## 1.5 Tuples
#
# Tuples are also arrays of objects, similar to lists. They differ in two ways:
#
# - The syntax for creating tuples uses parentheses (or nothing) instead of brackets:
# + id="UoITPUvosH0y"
# Or, equivalently
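# A minimal sketch of both spellings (the values are illustrative):

```python
# With parentheses
t1 = (1, 2, 3)
# Or, equivalently, without them
t2 = 1, 2, 3

print(t1 == t2)   # the two syntaxes build the same tuple
```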
# + [markdown] id="tBDeRDOKsH0y"
# - Tuples, unlike lists, cannot be modified (they are immutable objects):
# + id="DgWKOEA3sH0y"
# Try to modify a tuple
# + [markdown] id="bsjAUudfsH0z"
# Tuples are commonly used for functions that return more than one value.
#
# For example, the `as_integer_ratio()` method of `float` objects returns the numerator and the denominator in the form of a tuple:
# + id="qJOHhhgLsH0z"
# as_integer_ratio
# + id="PmPVnkg0sH0z"
# Help on the float.as_integer_ratio method
# + [markdown] id="z-2eZjLFsH0z"
# They can also be used as a shortcut:
# + id="sf0HoxbusH00"
# + [markdown] id="YeceLCHasH00"
# # 2. Loops and iteration
#
# ## 2.1 `for` loops
#
# Iteration is a way to execute a certain block of code repeatedly:
# + id="_QzsEDjZsH00"
# Planets, again
# + id="51v0nXYxsH00"
# Print all the planets on the same line
# + [markdown] id="j6dSBJZSsH00"
# To build a `for` loop, you must specify:
#
# - the name of the variable that will iterate (planeta),
#
# - the set of values the variable will iterate over (planetas).
#
# The word `in` is used, in this case, to tell Python that *planeta* will iterate over *planetas*.
#
# The object to the right of the word `in` can be any **iterable** object. Basically, an iterable is any array-like collection (lists, tuples, sets, numpy arrays, pandas series...).
#
# For example, we want to find the product of all the elements of the following tuple.
# + id="rtgxFb2vsH01"
# + id="XzXT_x3AsH01"
# Product as a loop
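# One possible sketch of the product computation (the tuple values are my own illustration):

```python
numeros = (2, 3, 5)

producto = 1
for n in numeros:
    producto = producto * n   # accumulate the product
print(producto)
```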
# + [markdown] id="ytrCkQVZsH01"
# We can even iterate over the characters of a string:
# + id="sK_Xk_K1sH01"
# Print only the uppercase characters, without spaces, one after another
# + [markdown] id="61X33RQUsH01"
# ### 2.1.1 The `range()` function
#
# The `range()` function returns a sequence of numbers. It is extremely useful for writing for loops.
#
# For example, if we want to repeat an action 5 times:
# + id="OQKTcoSMsH02" outputId="b3bef340-7534-4515-e1f2-a6726c7fabff" colab={"base_uri": "https://localhost:8080/"}
# A for loop of 5 iterations
planetas = ['Mercurio','Venus','Tierra']
planetas
# + [markdown] id="HVCk5U7jsH02"
# **Activity:**
#
# 1. Write a function that returns the first $n$ elements of the Fibonacci sequence, using a `for` loop.
# + id="B-bh_HYIsH02"
# Your code here
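# One possible solution sketch (try the activity yourself first; this is just one way to do it):

```python
def fibonacci(n):
    """Return the first n elements of the Fibonacci sequence."""
    resultado = []
    a, b = 0, 1
    for _ in range(n):
        resultado.append(a)
        a, b = b, a + b   # advance the pair of consecutive terms
    return resultado

print(fibonacci(8))
```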
# + [markdown] id="OFGWtTs1sH0-"
# ## 2.2 `while` loops
#
# These are another type of loop in Python, which iterate until a certain condition stops holding.
#
# For example:
# + id="obCZE4U-sH0-"
# + [markdown] id="HDx2MXcVsH0-"
# The argument of a `while` loop is evaluated as a logical condition, and the loop runs until that condition becomes **False**.
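# A minimal sketch of a `while` loop (the countdown is my own illustration):

```python
# Count down from 5; the loop stops once i > 0 becomes False
i = 5
while i > 0:
    print(i, end=' ')
    i -= 1
```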
# + [markdown] id="SvvM9gQEsH0-"
# **Exercise:**
#
# 1. Write a function that returns the first $n$ elements of the Fibonacci sequence, using a `while` loop.
#
# 2. Write a function that returns the elements of the Fibonacci sequence smaller than a given number $x$, using a `while` loop.
# + id="iQzaq077sH0_"
# + id="xVUSgvGJsH0_"
fibonacci_while(10)
# + [markdown] id="yETik5musH0_"
# ## Pause: Recursion
#
# An additional way to iterate is known as *recursion*, and it happens when we define a function in terms of itself.
#
# For example, the $n$-th number of the Fibonacci sequence, defined recursively, would be:
# + id="6nIbFEvKsH1F"
# + id="GLhZMjQ8sH1F"
# + [markdown] id="sBESZVNMsH1F"
# ## 2.3 List comprehensions
#
# List comprehensions are one of the coolest features of Python. The easiest way to understand them, as with many things, is to look at examples:
# + id="uIjB-vfMsH1G"
# First, with a for loop: list the squares of the 10 digits
# + id="cwxT3jBhsH1G"
# Now with a list comprehension
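# A sketch contrasting the two forms (both compute the squares of the 10 digits):

```python
# For loop version
cuadrados = []
for d in range(10):
    cuadrados.append(d ** 2)

# List comprehension version: the same result in one line
cuadrados_lc = [d ** 2 for d in range(10)]
print(cuadrados_lc)
```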
# + [markdown] id="PaeP2bYvsH1G"
# We can even add conditionals:
# + id="wclnBjxmsH1G"
# + id="kYneLi8FsH1G"
# Example with the planets
# + id="R_xZuXersH1H"
# + [markdown] id="8lO-JRHYsH1H"
# They can be used for formatting:
# + id="uKx1J262sH1H"
# str.upper()
# + [markdown] id="Rgw1C1B-sH1H"
# It is very important to learn this, since it is widely used and helps cut out a great many lines of code.
#
# Example: write the following function using a for loop.
# + id="IYRRkwh1sH1H"
# + [markdown] id="uNcbkjgNsH1I"
# Now, with list comprehensions:
# + id="6yO0UyUysH1I"
# + id="YKa862XtsH1I"
# Test the function
# + [markdown] id="dliuXU2KsH1I"
# # 3. Strings and dictionaries
#
# ## 3.1 Strings
#
# If there is one thing Python excels at, it is manipulating strings. In this section we will look at some methods of string objects, and at formatting operations (very useful for cleaning datasets, by the way).
# + [markdown] id="rQ5F5jtusH1J"
# ### 3.1.1 String syntax
#
# We have already seen several examples involving strings. Just as a reminder:
# + id="RNQF1BqvsH1J"
# + [markdown] id="9puc7KsCsH1J"
# There are particular cases in which to prefer one over the other:
#
# - Double quotes are convenient if your string contains an apostrophe.
#
# - Similarly, you can easily create a string containing double quotes by wrapping it in single quotes.
#
# Examples:
# + id="vyDeYLEIsH1J"
# + id="fNftv20qsH1J"
# + id="LtoGuINVsH1J"
# + [markdown] id="0pheTZrysH1K"
# ### 3.1.2 Strings are iterable
#
# String objects are sequences of characters. Almost everything we saw that we could do to a list, we can do to a string.
# + id="XH7FMRsmsH1K"
# example string
# + id="QtXdxp_OsH1K"
# Indexing
# + id="W3CmOK8QsH1K"
# Multiple indexing (slicing)
# + id="gjgp-wausH1L"
# How many characters does it have?
# + id="wXru0Yt7sH1L"
# We can also iterate over them
# + [markdown] id="fZ5nDabdsH1L"
# However, one major difference from lists is that strings are immutable (we cannot modify them).
# + id="zH-SInz_sH1L"
# + [markdown] id="gW6_Te4ksH1L"
# ### 3.1.3 String methods
#
# Like lists, `str` objects have a large number of useful methods.
#
# Let's look at a few:
# + id="gKHOfZzCsH1M"
# example string
# + id="43dpPGtysH1M"
# IN UPPERCASE
# + id="ubFXI7nIsH1M"
# in lowercase
# + id="zW8BVdBlsH1M"
# question: does it start with?
# + id="uqSFFjyisH1N"
# question: does it end with?
# + [markdown] id="vs50r5UlsH1N"
# #### Between lists and strings: the `split()` and `join()` methods
#
# The `str.split()` method turns a string into a list of smaller strings.
#
# This is extremely useful for extracting the individual words of a string:
# + id="1RpudLMbsH1N"
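# A sketch of `split()` in action (the sentence and date below are illustrative assumptions):

```python
frase = 'Python es la ley manipulando strings'
palabras = frase.split()        # splits on whitespace by default
print(palabras)

fecha = '2023-01-15'
partes = fecha.split('-')       # split on a custom separator
print(partes)
```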
# + [markdown] id="CvlhAncpsH1N"
# Or for extracting certain information:
# + id="Eyf7sa0lsH1N"
# Year, month and day of a date given as a string
# + [markdown] id="rR1PjoC5sH1d"
# `str.join()` lets us go the other way.
#
# Given a list of small strings, we can turn it into a single string, using the string it is called on as the separator:
# + id="nA-46_t3sH1e"
# With the date...
# + [markdown] id="bxokC3EvsH1e"
# ### 3.1.4 String concatenation
#
# Python lets us concatenate strings with the `+` operator:
# + id="cqZwSElusH1e"
# Example
# + [markdown] id="mOXFFVHEsH1f"
# However, we must be careful:
# + id="gwxdQXvusH1f"
# + id="sv9ghS4nsH1f"
# Concatenate a string with a number
# + [markdown] id="pIwT8tsCsH1f"
# ## 3.2 Dictionaries
#
# Dictionaries are another kind of Python object; they map keys to values:
# + id="kOhSqciWsH1g"
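# A minimal sketch of a dictionary (matching the keys mentioned just below):

```python
numeros = {'uno': 1, 'dos': 2, 'tres': 3}
print(numeros['dos'])       # look up a value by its key

# Add a new key-value pair
numeros['cuatro'] = 4
print(numeros)
```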
# + [markdown] id="1kA88j_0sH1g"
# In this case, the strings "uno", "dos" and "tres" are the keys, and the numbers 1, 2 and 3 are their corresponding values.
#
# Values are accessed with brackets, similarly to lists:
# + id="iU0o2eKPsH1g"
# + [markdown] id="PSRNBHWhsH1g"
# We use a similar syntax to add another key-value pair
# + id="S_IRhl36sH1h"
# + [markdown] id="CMbAduB0sH1h"
# Or to change the value associated with an existing key
# + id="doaMkXwqsH1h"
# + id="-UenokWSsH1h"
# + [markdown] id="1lubJGQjsH1i"
# ### Moving between lists, tuples and dictionaries: `zip`
# + [markdown] id="eOCz6DYRsH1i"
# Suppose we have two lists whose elements correspond to one another:
# + id="2UcgrmVIsH1x"
# + [markdown] id="NQLAvGCQsH1x"
# How can I pair these values up in a dictionary? With `zip`:
# + id="6eZLC7dRsH1y"
# First, get the list of pairs
# + id="o-2WBv9RsH1y"
# Then get the dictionary of correspondences
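# A sketch of both steps (the two corresponding lists are illustrative assumptions):

```python
llaves = ['uno', 'dos', 'tres']
valores = [1, 2, 3]

pares = list(zip(llaves, valores))        # list of (key, value) tuples
diccionario = dict(zip(llaves, valores))  # the same pairs as a dictionary
print(pares)
print(diccionario)
```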
# + id="Z7my3VXvsH1y"
# + [markdown] id="RxrN83fhsH1y"
# Since dictionaries are iterable, I can (redundantly enough) iterate over them:
# + id="ospZJTHSsH1z"
# Iterate over a dictionary
# + id="oyugmJfBsH1z"
# Iterate over values
# + id="tkaN6K4lsH1z"
# Iterate over key-value pairs
# + id="5DJyRXGTsH1z"
# + id="uL-X7T4RsH1z"
# + [markdown] id="QvvJ1SiosH10"
# ___
# - Quiz 1 at the beginning of the next class. It covers classes 1 and 2.
| Introduccion/Clase2_ListasIteracionesStrings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''msc_project'': conda)'
# language: python
# name: python38364bitmscprojectconda1b7ed98db8104d919ac5b59276832f86
# ---
# # Notebook_10: <NAME>
#
# Notebook to generate some cool Altair visualisations for the report
# +
import numpy as np
import pandas as pd
from pathlib import Path
import altair as alt
import altair_saver
alt.data_transformers.enable('data_server')
# +
project_root = Path().resolve().parent
runs_path = project_root / 'Models' / 'runs.csv'
finalisation_path = project_root / 'Models' / 'finalisation_runs.csv'
figure_path = project_root / 'Figures'
# -
runs = pd.read_csv(runs_path)
runs.head()
final_runs = pd.read_csv(finalisation_path)
final_runs.head()
runs['polynomial_degree'].fillna(1.0, inplace = True)
final_runs['polynomial_degree'].fillna(1.0, inplace = True)
runs.columns
# +
core_cols = ['Name', 'polynomial_degree', 'scaled', 'cv_score_1', 'cv_score_2', 'cv_score_3', 'cv_score_4', 'cv_score_5', 'cv_mean', 'cv_std', 'cv_cov', 'no_val_r2', 'no_val_rmse']
run_metrics = runs[core_cols]
final_metrics = final_runs[core_cols]
# -
run_metrics
poly_models = run_metrics[run_metrics['polynomial_degree'] != 1.0]
alt.Chart(poly_models).mark_bar().encode(
alt.X('Name:N', title = 'Model'),
alt.Y('no_val_rmse', title = 'RMSE (mm)'),
column = 'polynomial_degree:N',
color = 'polynomial_degree:N'
)
| Notebooks/Notebook_10 Altair Viz.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('geo-env')
# language: python
# name: python3
# ---
# + [markdown] id="Ppg5bqOINf25"
# ## Importing Packages
# + id="nzuObDsHNf27"
import pandas as pd
import numpy as np
import tqdm
import pickle
from pprint import pprint
import os
import warnings
warnings.filterwarnings('ignore', category=DeprecationWarning)
#sklearn
from sklearn.manifold import TSNE
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.model_selection import train_test_split
import gensim
from gensim import corpora, models
from gensim.corpora import Dictionary
from gensim.models.coherencemodel import CoherenceModel
from gensim.models.ldamodel import LdaModel
# + colab={"base_uri": "https://localhost:8080/"} id="WyUoM7JoNn31" outputId="7e59a102-497d-4cb9-ddbf-e2cabde82919"
# ! pip install pyLDAvis
# + colab={"base_uri": "https://localhost:8080/"} id="G65af0UwNkts" outputId="91ac15c8-07d0-4e81-e748-2d2b3e9844f7"
import pyLDAvis
import pyLDAvis.sklearn
import pyLDAvis.gensim_models as gensimvis
# + colab={"base_uri": "https://localhost:8080/"} id="yAzGpTYmN39L" outputId="b4154ee4-bbba-4700-805d-3e58c2f28d88"
from google.colab import drive
drive.mount('/content/drive')
# + id="ZhTnZ-KqNf28"
with open('processed_tweets.pickle', 'rb') as read_file:
df = pickle.load(read_file)
# + [markdown] id="Zhr8nFpQNf28"
# ## Train-Test Split
# + colab={"base_uri": "https://localhost:8080/"} id="H17baDemNf28" outputId="8efbf3fc-c7d4-44aa-9b86-9890ded91c00"
X_train, X_test = train_test_split(df.tweet, test_size=0.2, random_state=42)
X_train
# + colab={"base_uri": "https://localhost:8080/"} id="xlNqCvNjNf29" outputId="a78bef4c-f96b-490a-eeeb-c4cca80acd5b"
X_test
# + id="AoyIsQstNf29"
train_list_of_lists = list(X_train.values)
# + [markdown] id="qnxPS6kSNf29"
# ## Bigram-Trigram Models
#
# (I did not incorporate bigrams and trigrams into the model yet)
# + id="NgnGqRA-Nf2-"
# Build the bigram and trigram models
bigram = gensim.models.Phrases(train_list_of_lists, min_count=5, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[train_list_of_lists], threshold=100)
# Faster way to get a sentence clubbed as a trigram/bigram
bigram_mod = gensim.models.phrases.Phraser(bigram)
trigram_mod = gensim.models.phrases.Phraser(trigram)
# + id="Lv08Nn5SNf2-"
def make_bigrams(texts):
return [bigram_mod[doc] for doc in texts]
# + id="bj7C-ktBNf2_"
data_words_bigrams = make_bigrams(train_list_of_lists)
# + [markdown] id="GTjpmeUeNf2_"
# ## Bag of Words
# + id="u7ojCaMnNf2_"
id2word = Dictionary(train_list_of_lists)
corpus = [id2word.doc2bow(text) for text in train_list_of_lists]
# + colab={"base_uri": "https://localhost:8080/"} id="58sWA13yNf2_" outputId="ce0b48d5-e7bc-448f-98bf-d9b0890f5999"
sample = corpus[3000]
for i in range(len(sample)):
print("Word {} (\"{}\") appears {} time(s).".format(sample[i][0],
id2word[sample[i][0]],
sample[i][1]))
# + [markdown] id="-1VlwKE1Nf2_"
# ## LDA with Bag of Words
# + colab={"base_uri": "https://localhost:8080/"} id="Ej1AGlipNf3A" outputId="4206b0b7-1835-4341-9899-4a02e6af7719"
# Build LDA model
lda_model = LdaModel(corpus=corpus,
id2word=id2word,
num_topics=4,
random_state=42,
chunksize=100,
passes=100,
update_every=5,
alpha='auto',
per_word_topics=True)
pprint(lda_model.print_topics())
doc_lda = lda_model[corpus]
# + colab={"base_uri": "https://localhost:8080/", "height": 861} id="Ki068lBRNf3A" outputId="3327862c-5e52-40f3-f1e7-d1ba790f7b13"
pyLDAvis.enable_notebook()
LDAvis_prepared = gensimvis.prepare(lda_model, corpus, id2word)
LDAvis_prepared
# + colab={"base_uri": "https://localhost:8080/"} id="lMrTjSIdNf3A" outputId="1c2873a4-ce96-48ca-8b33-7d46ea91eefe"
# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda_model, texts=train_list_of_lists, dictionary=id2word, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('Coherence Score: ', coherence_lda)
# + colab={"base_uri": "https://localhost:8080/"} id="bNKkV2tWNf3A" outputId="d8d3cac8-8353-4bf2-f2cb-9722a9689e85"
lda_model_bow = gensim.models.LdaMulticore(corpus, num_topics=4, id2word=id2word, passes=100, workers=2)
for idx, topic in lda_model_bow.print_topics(-1):
print('Topic: {} \nWords: {}'.format(idx, topic))
# + colab={"base_uri": "https://localhost:8080/", "height": 861} id="w-KBe1jLNf3B" outputId="46a7bcb5-27b7-4089-cdcc-aa1a1d5bbc21"
LDAvis_prepared_2 = gensimvis.prepare(lda_model_bow, corpus, id2word)
LDAvis_prepared_2
# + colab={"base_uri": "https://localhost:8080/"} id="h5mTnyo6Nf3B" outputId="345ee5f4-b492-4dfc-ca51-74c20e3f672e"
for index, score in sorted(lda_model_bow[corpus[3000]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model_bow.print_topic(index, 4)))
# + colab={"base_uri": "https://localhost:8080/"} id="z1KlhqW6Nf3B" outputId="f699dc50-7382-4758-e090-91394a0fb6e7"
# Compute Coherence Score
coherence_model_lda_2 = CoherenceModel(model=lda_model_bow, texts=train_list_of_lists, dictionary=id2word, coherence='c_v')
coherence_lda_2 = coherence_model_lda_2.get_coherence()
print('Coherence Score: ', coherence_lda_2)
# + [markdown] id="EtDEJRL3Nf3B"
# ## LDA with TF-IDF
# + colab={"base_uri": "https://localhost:8080/"} id="7qwP-GqYNf3B" outputId="9b9e77af-9856-4107-9480-0468d1dd06b5"
tfidf = models.TfidfModel(corpus)
corpus_tfidf = tfidf[corpus]
for doc in corpus_tfidf:
pprint(doc)
break
# + colab={"base_uri": "https://localhost:8080/"} id="cT_4phsRNf3C" outputId="9df03114-7808-4aaf-e8ca-5c6c79d93036"
lda_model_tfidf = gensim.models.LdaMulticore(corpus_tfidf, num_topics=4, id2word=id2word, passes=100, workers=4)
for idx, topic in lda_model_tfidf.print_topics(-1):
print('Topic: {} Word: {}'.format(idx, topic))
# + colab={"base_uri": "https://localhost:8080/", "height": 861} id="Twoc5brzNf3C" outputId="3cf20471-deb4-4ec6-efe8-3cf07f97b4e4"
LDAvis_prepared_3 = gensimvis.prepare(lda_model_tfidf, corpus_tfidf, id2word)
LDAvis_prepared_3
# + colab={"base_uri": "https://localhost:8080/"} id="wPUKX4pGNf3C" outputId="194c9e2c-5e27-49d9-8643-4116015dc3e6"
for index, score in sorted(lda_model_tfidf[corpus[3000]], key=lambda tup: -1*tup[1]):
print("\nScore: {}\t \nTopic: {}".format(score, lda_model_tfidf.print_topic(index, 4)))
# + colab={"base_uri": "https://localhost:8080/"} id="7x52HgFCNf3C" outputId="03f7e547-6452-4374-a987-f8346a916573"
# Compute Coherence Score
coherence_model_lda_3 = CoherenceModel(model=lda_model_tfidf, texts=train_list_of_lists, dictionary=id2word, coherence='c_v')
coherence_lda_3 = coherence_model_lda_3.get_coherence()
print('Coherence Score: ', coherence_lda_3)
# + id="uxiVf1UZNf3C"
# supporting function
def compute_coherence_values(corpus, dictionary, k, a, b):
lda_model = gensim.models.LdaMulticore(corpus=corpus,
id2word=dictionary,
num_topics=k,
random_state=42,
chunksize=100,
passes=10,
alpha=a,
eta=b)
coherence_model_lda = CoherenceModel(model=lda_model, texts=train_list_of_lists, dictionary=id2word, coherence='c_v')
return coherence_model_lda.get_coherence()
# + colab={"base_uri": "https://localhost:8080/"} id="baiSVxQiNf3C" outputId="830a6e6b-1a0a-4cb8-a8a9-a8ba9811de43"
grid = {}
grid['Validation_Set'] = {}
# Topics range
min_topics = 2
max_topics = 11
step_size = 1
topics_range = range(min_topics, max_topics, step_size)
# Alpha parameter
alpha = list(np.arange(0.01, 1, 0.1))
alpha.append('symmetric')
alpha.append('asymmetric')
# Beta parameter
beta = list(np.arange(0.01, 1, 0.1))
beta.append('symmetric')
# Validation sets
num_of_docs = len(corpus)
corpus_sets = [# gensim.utils.ClippedCorpus(corpus, num_of_docs*0.25),
# gensim.utils.ClippedCorpus(corpus, num_of_docs*0.5),
gensim.utils.ClippedCorpus(corpus, int(num_of_docs*0.75)),
corpus]
corpus_title = ['75% Corpus', '100% Corpus']
model_results = {'Validation_Set': [],
'Topics': [],
'Alpha': [],
'Beta': [],
'Coherence': []
}
# Can take a long time to run
if 1 == 1:
pbar = tqdm.tqdm(total=540)
# iterate through validation corpuses
for i in range(len(corpus_sets)):
# iterate through number of topics
for k in topics_range:
# iterate through alpha values
for a in alpha:
# iterare through beta values
for b in beta:
# get the coherence score for the given parameters
cv = compute_coherence_values(corpus=corpus_sets[i], dictionary=id2word,
k=k, a=a, b=b)
# Save the model results
model_results['Validation_Set'].append(corpus_title[i])
model_results['Topics'].append(k)
model_results['Alpha'].append(a)
model_results['Beta'].append(b)
model_results['Coherence'].append(cv)
pbar.update(1)
pd.DataFrame(model_results).to_csv('lda_tuning_results_2.csv', index=False)
pbar.close()
# -
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', None)
# + id="eIdtl1qmNf3D"
results = pd.read_csv('lda_tuning_results_2.csv')  # the file written by the grid search above
results.head(20)
# + colab={"base_uri": "https://localhost:8080/", "height": 81} id="uC0EhHvn1KKW" outputId="f9f26ab8-d269-4924-fe0b-f07c316936ae"
sorted_results = results.sort_values('Coherence')
sorted_results.tail(1)  # parameter combination with the highest coherence
# + id="MYpx4S00VMOt"
results.plot(kind='scatter', x='Topics', y='Coherence')
# + id="x4XVufz4zltK"
| Training_LDA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/BrianGisemba/MENTAL-HEALTH-TWEETS-CLASSIFICATION/blob/main/Augmentation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="GU28_BTGqVIf" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="9bce1fe3-44ca-4c69-afc5-f6e965704ca6"
# Loading the dataset
import pandas as pd
df = pd.read_csv("/content/MentalHealth_orig.csv")
df.head()
# + id="VGUxa2R2RmE1"
# %%capture
# !pip install nlpaug
# !pip install transformers
# + id="MPgd8lbFRvBA"
# Loading the required augmentation Libraries
import nlpaug.augmenter.char as nac
import nlpaug.augmenter.word as naw
import nlpaug.augmenter.sentence as nas
import nlpaug.flow as naf
from tqdm import tqdm
from sklearn.utils import shuffle
from nlpaug.util import Action
# + colab={"base_uri": "https://localhost:8080/"} id="CmT4pqWoR5WG" outputId="ff72138b-c7d4-462b-d1c0-2243f9a341d9"
#Split the train and test data
from sklearn.model_selection import train_test_split
train,valid=train_test_split(df,test_size=0.20 , stratify = df['disorder'])
train.shape, valid.shape
# + colab={"base_uri": "https://localhost:8080/"} id="YyQTdao2SSmu" outputId="791ce265-0aec-45fb-ea29-e0bbfc9ce4a9"
# Check the class counts so we know how much augmentation each class needs
train['disorder'].value_counts()
# + id="7HB4fc8iShAM" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="c9ee5417-d77b-4a48-bb7a-52f7e50ad98c"
# Test text to check augmentation quality.
text = train.iloc[0]['tweet']
text
# + colab={"base_uri": "https://localhost:8080/", "height": 245} id="6HY40qrWSsJ7" outputId="67a09d9f-43c9-4e8a-e068-fc08ffa92ab5"
# ContextualWordEmbsAug : Augmenter that apply operation (word level) to textual input based on contextual word embeddings.
aug = naw.ContextualWordEmbsAug(
model_path='bert-base-uncased', action="insert")
augmented_text = aug.augment(text)
print('Original text \n',text,'\n Augmented text\n', augmented_text)
# + id="tuZR8eagS-GV"
# Creating a copy of the dataset
df1 = df.copy(deep=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 272} id="cg2DIuTlTKrI" outputId="0fedc55d-5cd1-4dd7-a8e9-75c84f3e6081"
import numpy as np
#For anxiety, class = 0,
# Creating augmented text data to increase our training dataset by 78 entries
def augment_text(df1,samples=78,pr=0.2):
aug.aug_p=pr
new_text=[]
#selecting the class samples
df_n=df1[df1['disorder']==0].reset_index(drop=True)
## data augmentation loop
for i in tqdm(np.random.randint(0,len(df_n),samples)):
text = df_n.iloc[i]['tweet']
augmented_text = aug.augment(text)
new_text.append(augmented_text)
## dataframe
new=pd.DataFrame({'tweet':new_text,'disorder':0})
df1=shuffle(df1.append(new).reset_index(drop=True))
return df1
train = augment_text(train)
print(train.shape, '\n\n')
train.head()
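The cells below redefine `augment_text` once per class, changing only the class label and the sample count. A single parameterized helper avoids the copy-paste; this is a sketch (the helper name `augment_class` is mine, the augmenter is passed in so any `nlpaug` augmenter works, and `pd.concat` replaces the now-deprecated `DataFrame.append`):

```python
import numpy as np
import pandas as pd

def augment_class(df, augmenter, label, samples, pr=0.2,
                  text_col='tweet', label_col='disorder'):
    """Append `samples` augmented rows for one class, then reshuffle the frame."""
    if hasattr(augmenter, 'aug_p'):        # nlpaug augmenters expose aug_p
        augmenter.aug_p = pr
    df_n = df[df[label_col] == label].reset_index(drop=True)
    picks = np.random.randint(0, len(df_n), samples)
    new = pd.DataFrame({
        text_col: [augmenter.augment(df_n.iloc[i][text_col]) for i in picks],
        label_col: label,
    })
    out = pd.concat([df, new], ignore_index=True)
    return out.sample(frac=1).reset_index(drop=True)  # shuffle the rows
```

With this helper, the block above becomes `train = augment_class(train, aug, label=0, samples=78)`, and likewise for the other classes.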
# + colab={"base_uri": "https://localhost:8080/"} id="WXHu2QRWHpvx" outputId="3cfdd1f2-bfd4-409c-9eca-1090927ca271"
print(train.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 272} id="I1GHJ23vTrHz" outputId="27fc2117-72c5-49ac-a250-8be8a24c769a"
#For autism, class = 1
# Creating augmented text data to increase our training dataset by 289 entries
def augment_text(df1,samples=289,pr=0.2):
aug.aug_p=pr
new_text=[]
#selecting the class samples
df_n=df1[df1['disorder']==1].reset_index(drop=True)
## data augmentation loop
for i in tqdm(np.random.randint(0,len(df_n),samples)):
text = df_n.iloc[i]['tweet']
augmented_text = aug.augment(text)
new_text.append(augmented_text)
## dataframe
new=pd.DataFrame({'tweet':new_text,'disorder':1})
df1=shuffle(df1.append(new).reset_index(drop=True))
return df1
train = augment_text(train)
print(train.shape, '\n\n')
train.head()
# + colab={"base_uri": "https://localhost:8080/"} id="h8nRU9_1IkM0" outputId="ec2363f6-24dd-4f45-faff-34731a7b6969"
print(train.shape)
# + colab={"base_uri": "https://localhost:8080/", "height": 272} id="Q6WaXfBbTwB-" outputId="5a4d2988-3185-4cc1-a888-b842df827192"
#For bipolar disorder, class = 2
# Creating augmented text data to increase our training dataset by 286 entries
def augment_text(df1,samples=286,pr=0.2):
aug.aug_p=pr
new_text=[]
#selecting the class samples
df_n=df1[df1['disorder']==2].reset_index(drop=True)
## data augmentation loop
for i in tqdm(np.random.randint(0,len(df_n),samples)):
text = df_n.iloc[i]['tweet']
augmented_text = aug.augment(text)
new_text.append(augmented_text)
## dataframe
new=pd.DataFrame({'tweet':new_text,'disorder':2})
df1=shuffle(df1.append(new).reset_index(drop=True))
return df1
train = augment_text(train)
print(train.shape, '\n\n')
train.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 272} id="4m6qJHdiUD26" outputId="c0656339-4f1c-4094-cff7-f1e8b3657e76"
#For dementia, class = 3
# Creating augmented text data to increase our training dataset by 290 entries
def augment_text(df1,samples=290,pr=0.2):
aug.aug_p=pr
new_text=[]
#selecting the class samples
df_n=df1[df1['disorder']==3].reset_index(drop=True)
## data augmentation loop
for i in tqdm(np.random.randint(0,len(df_n),samples)):
text = df_n.iloc[i]['tweet']
augmented_text = aug.augment(text)
new_text.append(augmented_text)
## dataframe
new=pd.DataFrame({'tweet':new_text,'disorder':3})
df1=shuffle(df1.append(new).reset_index(drop=True))
return df1
train = augment_text(train)
print(train.shape, '\n\n')
train.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 272} id="9k_Lcf54UQ6R" outputId="0afa610b-1fed-4692-8489-81eeade7ecb5"
#For depression, class = 4
# Creating augmented text data to increase our training dataset by 18 entries
def augment_text(df1,samples=18,pr=0.2):
aug.aug_p=pr
new_text=[]
#selecting the class samples
  df_n=df1[df1['disorder']==4].reset_index(drop=True)
  ## data augmentation loop
  for i in tqdm(np.random.randint(0,len(df_n),samples)):
text = df_n.iloc[i]['tweet']
augmented_text = aug.augment(text)
new_text.append(augmented_text)
## dataframe
new=pd.DataFrame({'tweet':new_text,'disorder':4})
df1=shuffle(df1.append(new).reset_index(drop=True))
return df1
train = augment_text(train)
print(train.shape, '\n\n')
train.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 272} id="jmyIP-ikUbtS" outputId="c011ad95-3261-448f-aba9-36a3fa53bdbb"
#For paranoia, class = 5
# Creating augmented text data to increase our training dataset by 293 entries
def augment_text(df1,samples=293,pr=0.2):
aug.aug_p=pr
new_text=[]
#selecting the class samples
df_n=df1[df1['disorder']==5].reset_index(drop=True)
## data augmentation loop
for i in tqdm(np.random.randint(0,len(df_n),samples)):
text = df_n.iloc[i]['tweet']
augmented_text = aug.augment(text)
new_text.append(augmented_text)
## dataframe
new=pd.DataFrame({'tweet':new_text,'disorder':5})
df1=shuffle(df1.append(new).reset_index(drop=True))
return df1
train = augment_text(train)
print(train.shape, '\n\n')
train.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 272} id="SvxAFPj5Umsy" outputId="4076dbfc-49af-463b-fe66-7bb514fed138"
#For schizophrenia, class = 6
# Creating augmented text data to increase our training dataset by 289 entries
def augment_text(df1,samples=289,pr=0.2):
aug.aug_p=pr
new_text=[]
#selecting the class samples
df_n=df1[df1['disorder']==6].reset_index(drop=True)
## data augmentation loop
for i in tqdm(np.random.randint(0,len(df_n),samples)):
text = df_n.iloc[i]['tweet']
augmented_text = aug.augment(text)
new_text.append(augmented_text)
## dataframe
new=pd.DataFrame({'tweet':new_text,'disorder':6})
df1=shuffle(df1.append(new).reset_index(drop=True))
return df1
train = augment_text(train)
print(train.shape, '\n\n')
train.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 272} id="YVUfOdzzU8k5" outputId="75f4ccd7-17f1-4be5-e885-27472614f184"
#For suicidal ideation, class = 7
# Creating augmented text data to increase our training dataset by 253 entries
def augment_text(df1,samples=253,pr=0.2):
aug.aug_p=pr
new_text=[]
#selecting the class samples
df_n=df1[df1['disorder']==7].reset_index(drop=True)
## data augmentation loop
for i in tqdm(np.random.randint(0,len(df_n),samples)):
text = df_n.iloc[i]['tweet']
augmented_text = aug.augment(text)
new_text.append(augmented_text)
## dataframe
new=pd.DataFrame({'tweet':new_text,'disorder':7})
df1=shuffle(df1.append(new).reset_index(drop=True))
return df1
train = augment_text(train)
print(train.shape, '\n\n')
train.head()
# + colab={"base_uri": "https://localhost:8080/"} id="5nrq3I4LbJYD" outputId="624af5b3-5ba0-4096-ab1f-06265c160786"
# Check the number of entries for each disorder in the train dataset
train['disorder'].value_counts()
# + id="75SHK4g5jIzg" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="183b4ae6-ab13-496c-e282-804319c165d7"
# Previewing the train dataset
train
# + id="lE3ioIPKU6rv"
# Dropping irrelevant columns
train=train.drop(columns=['Unnamed: 0','location','hour'])
# + colab={"base_uri": "https://localhost:8080/", "height": 419} id="5u3_1mrtVR1r" outputId="24683529-07d0-4c12-889a-2d9b8bb3d5b9"
# Previewing the train dataset
train
# + id="JQc0JR8nUTK2"
#Exporting to .csv
train.to_csv("Augmentaed.csv",index=False)
| Augmentation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ufrpe-ensino/ic-aulas/blob/master/aulas/11_PraticaArquivos.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="NhcDob1_1xwu" colab_type="text"
# # Review: analyzing temperatures
#
# ## Objective
# This exercise revisits constructions that are common in problems involving the processing of large data sets.
# In this problem, your goal is to classify the daily temperatures stored in the temperature list into three different classes:
#
#
# * Cold ==> temperatures below +2 degrees (Celsius)
# * Comfortable ==> temperatures greater than or equal to +2 degrees and below +15 degrees (Celsius)
# * Warm ==> temperatures greater than or equal to +15 degrees (Celsius)
#
# To solve this problem, modify and fill in the missing parts of the following cells. In total, there are three tasks to solve according to the instructions.
# + [markdown] id="LT7vznRsAZp1" colab_type="text"
# # Reading the data
#
# Read the file `../data/temperaturas.csv` and split the values (`split` function)
# + id="wYlbibggAb0M" colab_type="code" colab={}
# + [markdown] id="1tLfyP4qA1mp" colab_type="text"
# # Categorizing
#
# Create three empty lists, one for each temperature class:
#
# * cold
# * comfortable
# * warm
#
# Loop over the temperatures and append each one to the appropriate list.
# + id="ahTtaAe4AflY" colab_type="code" colab={}
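If you get stuck, one possible approach is sketched below. It assumes the temperatures were already parsed from the file into a list of floats (the inline list is a stand-in for the file data); the thresholds follow the class definitions above:

```python
temperatures = [-5.4, 1.2, 2.0, 8.7, 15.0, 21.3]  # stand-in for the parsed file data

cold = []
comfortable = []
warm = []

for t in temperatures:
    if t < 2:
        cold.append(t)          # below +2 degrees
    elif t < 15:
        comfortable.append(t)   # >= +2 and below +15 degrees
    else:
        warm.append(t)          # >= +15 degrees
```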
# + [markdown] id="0j9TeuoNBb8p" colab_type="text"
# # Answer the questions:
#
# (1) How many times was the temperature warm during the analyzed period?
# + id="mxGfIe12BjLJ" colab_type="code" colab={}
# + [markdown] id="SYuci-b0Bjpu" colab_type="text"
# (2) What is the average temperature of the days classified as warm, cold, and comfortable?
# + id="PdvXYXUMBrZp" colab_type="code" colab={}
# + [markdown] id="xVZ4kiS8BsrB" colab_type="text"
# (3) What were the warmest and the coldest temperatures?
# + id="syM-0UF1BwxO" colab_type="code" colab={}
| aulas/11_PraticaArquivos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from selenium import webdriver
options = webdriver.ChromeOptions()
# options.add_argument('--headless')
# options.add_argument('--no-sandbox')
# options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome('chromedriver',options=options)
# -
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup
import time
# The sample text below is intentionally in Persian: it is the input for the Persian spell checker exercised later in this notebook.
text = '''درک کسر توده هیدروژن و الگوی حمل و نقل آن یک عنصر مهم در افزایش عملکرد سلول است. شکل 14 کسر جرمی H2 مدل ها را نشان می دهد. هیدروژن به قسمت آند می رسد و در داخل GDL پخش می شود. سپس از طریق لایه کاتالیزور نفوذ کرده و یون و الکترون تولید می کند. یون های هیدروژن توسط غشا ، جریان الکترون از طریق آند GDL و جمع کننده های جریان پراکنده می شوند. سرانجام ، هیدروژن به کاتد منتقل می شود و در آنجا به گاز هیدروژن تبدیل می شود. این فرآیند انرژی بیش از مصرف را تولید می کند.
در مقابل ، ما دریافتیم که می توان از آب برای انتقال الکترون بدون کاتالیزور اضافی استفاده کرد. به نظر می رسد که می توان این نتیجه را با استفاده از مولکول های آلی دیگر مانند گلیسرول ، گلیکولیپیدها یا گلیکوپروتئین ها بدست آورد. با این حال ، استفاده از آب به عنوان یک گیرنده الکترون یک عامل اساسی در عملکرد سلول است.
یون های هیدروژن توسط غشا ، جریان الکترون از طریق آند GDL و جمع کننده های جریان پراکنده می شوند. این الکترون ها به قسمت کاتد متصل شده و به جمع کننده های فعلی و GDL منتهی می شوند. وقتی وارد CL کاتد شوند ، با یون های هیدروژن و اکسیژن ترکیب شده و آب تولید می کنند و گرما آزاد می کنند. کسر جرم هیدروژن با مسیر جریان برای همه مدل های سلول سوختی کاهش می یابد.'''
url = 'https://www.spellchecker.net/iran_spell_checker.html#'
driver.get(url)
type_xpath = '//*[@id="ltForm"]/div/div[2]'
inputElement = driver.find_element_by_xpath(type_xpath)
inputElement.send_keys(text)
click_xpath = '//*[@id="lt"]/div/div[1]/h2/a'
click_element = driver.find_element_by_xpath(click_xpath)
click_element.click()
xpath1 = '//*[@id="id0"]'
text_element = driver.find_element_by_xpath(xpath1)
text_element.click()
| Web/Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] graffitiCellId="id_7h2s6k2"
# # Implement a queue using an array
#
# In this notebook, we'll look at one way to implement a queue by using an array. First, check out the walkthrough for an overview of the concepts, and then we'll take a look at the code.
# + [markdown] graffitiCellId="id_2ge7ywz"
# <span class="graffiti-highlight graffiti-id_2ge7ywz-id_bqg3jzc"><i></i><button>Walkthrough</button></span>
# + [markdown] graffitiCellId="id_6zy0p8y"
# 
# + [markdown] graffitiCellId="id_6e571xi"
# OK, so those are the characteristics of a queue, but how would we implement those characteristics using an array?
# + [markdown] graffitiCellId="id_u7ll4fe"
# <span class="graffiti-highlight graffiti-id_u7ll4fe-id_5jy6p59"><i></i><button>Walkthrough</button></span>
# + [markdown] graffitiCellId="id_wekpwim"
# What happens when we run out of space in the array? This is one of the trickier things we'll need to handle with our code.
# + [markdown] graffitiCellId="id_dvk829u"
# <span class="graffiti-highlight graffiti-id_dvk829u-id_w3049bo"><i></i><button>Walkthrough</button></span>
# + [markdown] graffitiCellId="id_qyngk16"
# ## Functionality
#
# Once implemented, our queue will need to have the following functionality:
# 1. `enqueue` - adds data to the back of the queue
# 2. `dequeue` - removes data from the front of the queue
# 3. `front` - returns the element at the front of the queue
# 4. `size` - returns the number of elements present in the queue
# 5. `is_empty` - returns `True` if there are no elements in the queue, and `False` otherwise
# 6. `_handle_full_capacity` - increases the capacity of the array, for cases in which the queue would otherwise overflow
#
# Also, if the queue is empty, `dequeue` and `front` operations should return `None`.
# + [markdown] graffitiCellId="id_2v52var"
# ## 1. Create the `queue` class and its `__init__` method
# First, have a look at the walkthrough:
# + [markdown] graffitiCellId="id_5tekmtx"
# <span class="graffiti-highlight graffiti-id_5tekmtx-id_p546sb0"><i></i><button>Walkthrough</button></span>
# + graffitiCellId="id_1cm16sy"
# + [markdown] graffitiCellId="id_v0ke48l"
# Now give it a try for yourself. In the cell below:
# * Define a class named `Queue` and add the `__init__` method
# * Initialize the `arr` attribute with an array containing 10 elements, like this: `[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]`
# * Initialize the `next_index` attribute
# * Initialize the `front_index` attribute
# * Initialize the `queue_size` attribute
# + graffitiCellId="id_ur7bsk5"
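If you want to check your work before revealing the solution, here is one possible `__init__` matching the bullet points above (a sketch; the official solution may differ in details, e.g. by accepting an `initial_size` parameter):

```python
class Queue:
    def __init__(self):
        self.arr = [0 for _ in range(10)]  # fixed-size backing array
        self.next_index = 0                # where the next enqueue will go
        self.front_index = -1              # -1 means the queue is empty
        self.queue_size = 0                # number of stored elements
```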
# + [markdown] graffitiCellId="id_ginicgq"
# <span class="graffiti-highlight graffiti-id_ginicgq-id_qh3d3tt"><i></i><button>Show Solution</button></span>
# + [markdown] graffitiCellId="id_ajja0hs"
# Let's check that the array is being initialized correctly. We can create a `Queue` object and access the `arr` attribute, and we should see our ten-element array:
# + graffitiCellId="id_81eds91"
q = Queue()
print(q.arr)
print("Pass" if q.arr == [0, 0, 0, 0, 0, 0, 0, 0, 0, 0] else "Fail")
# + [markdown] graffitiCellId="id_ctk4wmp"
# ## 2. Add the `enqueue` method
# + [markdown] graffitiCellId="id_pavyplt"
# <span class="graffiti-highlight graffiti-id_pavyplt-id_evc6ky2"><i></i><button>Walkthrough</button></span>
# + graffitiCellId="id_lffvdj1"
# + [markdown] graffitiCellId="id_4l8cnvy"
# In the cell below, add the code for the enqueue method.
#
# The method should:
# * Take a value as input and assign this value to the next free slot in the array
# * Increment `queue_size`
# * Increment `next_index` (this is where you'll need to use the modulo operator `%`)
# * If the front index is `-1` (because the queue was empty), it should set the front index to `0`
# + graffitiCellId="id_n5hh6uh"
class Queue:
def __init__(self, initial_size=10):
self.arr = [0 for _ in range(initial_size)]
self.next_index = 0
self.front_index = -1
self.queue_size = 0
# TODO: Add the enqueue method
# + [markdown] graffitiCellId="id_atrj1aj"
# <span class="graffiti-highlight graffiti-id_atrj1aj-id_xc03j2l"><i></i><button>Show Solution</button></span>
# + [markdown] graffitiCellId="id_yw3ieol"
# ## 3. Add the `size`, `is_empty`, and `front` methods
#
# Just like with stacks, we need methods to keep track of the size of the queue and whether it is empty. We can also add a `front` method that returns the value of the front element.
# * Add a `size` method that returns the current size of the queue
# * Add an `is_empty` method that returns `True` if the queue is empty and `False` otherwise
# * Add a `front` method that returns the value for the front element (whatever item is located at the `front_index` position). If the queue is empty, the `front` method should return None.
# + graffitiCellId="id_h3he5o8"
class Queue:
def __init__(self, initial_size=10):
self.arr = [0 for _ in range(initial_size)]
self.next_index = 0
self.front_index = -1
self.queue_size = 0
def enqueue(self, value):
# enqueue new element
self.arr[self.next_index] = value
self.queue_size += 1
self.next_index = (self.next_index + 1) % len(self.arr)
if self.front_index == -1:
self.front_index = 0
# TODO: Add the size method
# TODO: Add the is_empty method
# TODO: Add the front method
# + [markdown] graffitiCellId="id_i56dfhr"
# <span class="graffiti-highlight graffiti-id_i56dfhr-id_faf3sh0"><i></i><button>Show Solution</button></span>
# + [markdown] graffitiCellId="id_o74nheg"
# ## 4. Add the `dequeue` method
# + [markdown] graffitiCellId="id_lxj6sba"
# <span class="graffiti-highlight graffiti-id_lxj6sba-id_yqicw47"><i></i><button>Walkthrough</button></span>
# + graffitiCellId="id_r8vosiw"
# + [markdown] graffitiCellId="id_htn2xep"
# In the cell below, see if you can add the `dequeue` method.
#
# Here's what it should do:
# * If the queue is empty, reset the `front_index` and `next_index` and then simply return `None`. Otherwise...
# * Get the value from the front of the queue and store this in a local variable (to `return` later)
# * Shift the `front_index` over so that it refers to the next index
# * Update the `queue_size` attribute
# * Return the value that was dequeued
# + graffitiCellId="id_o4aahoo"
class Queue:
def __init__(self, initial_size=10):
self.arr = [0 for _ in range(initial_size)]
self.next_index = 0
self.front_index = -1
self.queue_size = 0
def enqueue(self, value):
# enqueue new element
self.arr[self.next_index] = value
self.queue_size += 1
self.next_index = (self.next_index + 1) % len(self.arr)
if self.front_index == -1:
self.front_index = 0
# TODO: Add the dequeue method
def size(self):
return self.queue_size
def is_empty(self):
return self.size() == 0
def front(self):
# check if queue is empty
if self.is_empty():
return None
return self.arr[self.front_index]
# + [markdown] graffitiCellId="id_ldbkw0c"
# <span class="graffiti-highlight graffiti-id_ldbkw0c-id_xf9p4ln"><i></i><button>Show Solution</button></span>
# + [markdown] graffitiCellId="id_fim1y99"
# ## 5. Add the `_handle_queue_capacity_full` method
# + [markdown] graffitiCellId="id_spuqydc"
# <span class="graffiti-highlight graffiti-id_spuqydc-id_g2v8fid"><i></i><button>Walkthrough</button></span>
# + graffitiCellId="id_f9kfvle"
# + [markdown] graffitiCellId="id_9aj8z9m"
# First, define the `_handle_queue_capacity_full` method:
# * Define an `old_arr` variable and assign the current (full) array to it so that we have a copy
# * Create a new (larger) array and assign it to `arr`.
# * Iterate over the values in the old array and copy them to the new array. Remember that you'll need two `for` loops for this.
#
# Then, in the `enqueue` method:
# * Add a conditional to check if the queue is full; if it is, call `_handle_queue_capacity_full`
# + graffitiCellId="id_2nbozkc"
class Queue:
def __init__(self, initial_size=10):
self.arr = [0 for _ in range(initial_size)]
self.next_index = 0
self.front_index = -1
self.queue_size = 0
def enqueue(self, value):
# TODO: Check if the queue is full; if it is, call the _handle_queue_capacity_full method
# enqueue new element
self.arr[self.next_index] = value
self.queue_size += 1
self.next_index = (self.next_index + 1) % len(self.arr)
if self.front_index == -1:
self.front_index = 0
def dequeue(self):
# check if queue is empty
if self.is_empty():
self.front_index = -1 # resetting pointers
self.next_index = 0
return None
# dequeue front element
value = self.arr[self.front_index]
self.front_index = (self.front_index + 1) % len(self.arr)
self.queue_size -= 1
return value
def size(self):
return self.queue_size
def is_empty(self):
return self.size() == 0
def front(self):
# check if queue is empty
if self.is_empty():
return None
return self.arr[self.front_index]
# TODO: Add the _handle_queue_capacity_full method
# + [markdown] graffitiCellId="id_52iezkc"
# <span class="graffiti-highlight graffiti-id_52iezkc-id_cyedroj"><i></i><button>Show Solution</button></span>
# + [markdown] graffitiCellId="id_aya1vks"
# ### Test your queue
# + graffitiCellId="id_2q547d6"
# Setup
q = Queue()
q.enqueue(1)
q.enqueue(2)
q.enqueue(3)
# Test size
print ("Pass" if (q.size() == 3) else "Fail")
# Test dequeue
print ("Pass" if (q.dequeue() == 1) else "Fail")
# Test enqueue
q.enqueue(4)
print ("Pass" if (q.dequeue() == 2) else "Fail")
print ("Pass" if (q.dequeue() == 3) else "Fail")
print ("Pass" if (q.dequeue() == 4) else "Fail")
q.enqueue(5)
print ("Pass" if (q.size() == 1) else "Fail")
| queue/.ipynb_checkpoints/Build a queue using an array-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/yasirabd/solver-society-job-data/blob/main/5_Analisis_data_jobstreet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="tvx-iVQ4pOSO"
# # Objectives
# Exploratory Data Analysis on the JobStreet data.
#
# Required input data:
# 1. data_master_16oct.csv (output of colab 3_1)
# + id="6YI1XGj1q9_4"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('whitegrid')
import warnings
warnings.filterwarnings("ignore")
# + id="fLco2CEgrR4X" outputId="9491a163-de2e-44c8-81c9-c5849232334a" colab={"base_uri": "https://localhost:8080/", "height": 428}
df = pd.read_csv('/content/drive/My Drive/Data Loker/data_master_16oct.csv')
df.head()
# + id="y3KdbMKKr-I0" outputId="7249cda6-33da-4b47-f941-577e5edf8015" colab={"base_uri": "https://localhost:8080/", "height": 340}
# check null values
df.isnull().sum()
# + id="jcBQMgFlqgRO" outputId="75f09a34-c241-4ada-937a-c5b9c0c6acad" colab={"base_uri": "https://localhost:8080/", "height": 34}
df.shape
# + [markdown] id="0fLQ5C-9srGE"
# # Exploratory Data Analysis
# + id="ljb-Bp00nbz1"
data = df.copy()
# + [markdown] id="QT_PmyGrstFs"
# ## How many job_position entries are there on each island?
# + id="LfrYQJ8rsmSA" outputId="f0e320be-f011-4fe6-ffe8-dd7f485c4d1c" colab={"base_uri": "https://localhost:8080/", "height": 405}
plt.figure(figsize=(15,6),dpi=100)
jobpulau = sns.countplot(data.pulau);
for p in jobpulau.patches:
jobpulau.annotate(format(p.get_height(), '.0f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points')
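The annotation loop above is repeated verbatim under several plots in this notebook; a small helper (a refactor sketch, not part of the original notebook) keeps it in one place:

```python
def annotate_bars(ax):
    """Write each bar's height above it, as the inline loops below do."""
    for p in ax.patches:
        ax.annotate(format(p.get_height(), '.0f'),
                    (p.get_x() + p.get_width() / 2., p.get_height()),
                    ha='center', va='center',
                    xytext=(0, 10), textcoords='offset points')
```

After each `sns.countplot`/`sns.barplot` call, `annotate_bars(jobpulau)` (or whatever the axes variable is named) replaces the inline loop.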
# + [markdown] id="HCucRfesnnBS"
# ## How many job_position entries are there in each province of Java?
# + id="DtntIuAMtdmd" outputId="eef5e66e-cff1-4832-a86d-75b0bb821a9f" colab={"base_uri": "https://localhost:8080/", "height": 408}
plt.figure(figsize=(15,6),dpi=100)
loker_jawa = data[data.pulau == 'Jawa']
job_jawa_prov = sns.countplot(loker_jawa.provinsi);
for p in job_jawa_prov.patches:
job_jawa_prov.annotate(format(p.get_height(), '.0f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points');
# + [markdown] id="vK8K2f-psJ1Y"
# ## What are the top 5 most sought-after job_position values on each island?
# + id="fQa2-ysascxr" outputId="7cecd61b-ad61-441b-f2d1-703d3fb6ad6f" colab={"base_uri": "https://localhost:8080/", "height": 51}
pulau_job = data[['pulau', 'job_position']]
pulau_job['pulau'].unique()
# + id="tmOs3gcuR4Nr" outputId="1c87e1dd-ecd3-4295-81b6-fae9cdd10588" colab={"base_uri": "https://localhost:8080/", "height": 401}
plt.figure(figsize=(15,6), dpi=100)
slice_jp = data['job_position'].value_counts()[:5]
sns_jp = sns.barplot(slice_jp.index, slice_jp.values)
for p in sns_jp.patches:
sns_jp.annotate(format(p.get_height(), '.0f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points');
# + id="03vIyF6wrix8" outputId="43734a45-b648-4143-cfee-2c7c3e8b9241" colab={"base_uri": "https://localhost:8080/", "height": 1000}
fig, axes = plt.subplots(7, 1, figsize=(15, 35), dpi=100)
for idx, pul in enumerate(list(pulau_job['pulau'].unique())):
  # slice the data for each island
p = pulau_job[pulau_job.pulau == pul]
top_5 = p.job_position.value_counts()[:5]
sns.barplot(top_5.index, top_5.values, ax=axes[idx]).set_title('Pulau '+pul)
# + [markdown] id="LdOUCIsRwIXM"
# ## Which industries open the most vacancies? (top 10)
# + id="cT0aUmaVtP-v" outputId="6d4e8f1a-61a6-4163-b8ae-19200371deef" colab={"base_uri": "https://localhost:8080/", "height": 314}
industri = data['company_industry'].value_counts()[:10]
plt.figure(figsize=(15,6), dpi=100)
sns.barplot(industri.values, industri.index);
# + id="wzhoqARTOEx-" outputId="b0853087-a433-4f1b-b248-2d764f7af181" colab={"base_uri": "https://localhost:8080/", "height": 204}
industri
# + [markdown] id="5MidvdT6xxxt"
# ## What are the most sought-after job positions in the "INDUSTRI MESIN DAN PERLENGKAPAN YTDL" industry?
# + id="rXFP9mmkxxD1" outputId="f7491aed-bfb7-450d-e56e-0e9afc6dd5e0" colab={"base_uri": "https://localhost:8080/", "height": 364}
job_industri = data[['job_position', 'company_industry']]
industri_mesin = job_industri[job_industri['company_industry'] == "INDUSTRI MESIN DAN PERLENGKAPAN YTDL"]
top10_job_mesin = industri_mesin['job_position'].value_counts()[:10]
top10_job_mesin
plt.figure(figsize=(15,6), dpi=100)
sns.barplot(top10_job_mesin.values, top10_job_mesin.index);
# + id="ffHtVdgoN3dZ" outputId="e4a07caa-618b-447f-f863-46f7ed63a37f" colab={"base_uri": "https://localhost:8080/", "height": 204}
top10_job_mesin
# + [markdown] id="kTyusWd5zH64"
# ## Company_size, shown briefly just for reference
# + id="wuaRwHA9xNBg" outputId="b524fe9c-3911-4367-ee8e-de3aaaa6a488" colab={"base_uri": "https://localhost:8080/", "height": 363}
c_size = data['company_size'].value_counts()
plt.figure(figsize=(15,6), dpi=100)
csize_plot = sns.barplot(c_size.values, c_size.index)
# + [markdown] id="pkY8snXC1Exb"
# ## What minimum education level is required on each island?
# + id="F5ouaa4E0fqh"
df_pendidikan = pd.DataFrame(columns=['id_loker', 'pulau', 'provinsi', 'pendidikan'])
for idx, row in data.iterrows():
split_pendidikan = [d.strip().upper() for d in row['pendidikan'].split(',')]
for pend in split_pendidikan:
df_pendidikan = df_pendidikan.append({'id_loker': row['id_loker'],
'pulau': row['pulau'],
'provinsi': row['provinsi'],
'pendidikan': pend}, ignore_index=True)
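The row-by-row `append` loop above gets slow on large frames; `str.split` plus `explode` produces the same long-format table in a few vectorized calls. A sketch, assuming the same comma-separated `pendidikan` format with no missing values:

```python
import pandas as pd

def explode_pendidikan(data):
    out = data[['id_loker', 'pulau', 'provinsi', 'pendidikan']].copy()
    # split "sma, d3 (diploma)" into a list of trimmed, upper-cased levels
    out['pendidikan'] = (out['pendidikan']
                         .str.split(',')
                         .apply(lambda lst: [d.strip().upper() for d in lst]))
    # one row per (job, education level) pair
    return out.explode('pendidikan', ignore_index=True)
```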
# + id="wSnAr-J014By" outputId="d1236132-0202-4fb0-f0fa-3511e27c98f5" colab={"base_uri": "https://localhost:8080/", "height": 1000}
fig, axes = plt.subplots(7,1,figsize=(15,35), dpi=100)
for idx, pul in enumerate(list(pulau_job['pulau'].unique())):
    # slice the data for each island
    pl = df_pendidikan[df_pendidikan.pulau == pul]
    slice_pendidikan = pl.pendidikan.value_counts()
    sns.barplot(x=slice_pendidikan.index, y=slice_pendidikan.values, ax=axes[idx]).set_title('Pulau ' + pul)
# + id="oT81oNtzQ_e4" outputId="a0a374cd-e34d-444b-f24b-e0a6dc84b636" colab={"base_uri": "https://localhost:8080/", "height": 398}
plt.figure(figsize=(15,6), dpi=100)
vc_pendidikan = df_pendidikan['pendidikan'].value_counts()
sns_pend = sns.barplot(x=vc_pendidikan.index, y=vc_pendidikan.values)
for p in sns_pend.patches:
sns_pend.annotate(format(p.get_height(), '.0f'), (p.get_x() + p.get_width() / 2., p.get_height()), ha = 'center', va = 'center', xytext = (0, 10), textcoords = 'offset points');
# + [markdown] id="KiMU8apH6wVP"
# ## How many years of experience are required for openings in the top 5 job_positions?
# + id="lJJ6rW3X12-Q" outputId="ae266cdd-4e68-4467-b434-81138f588fc2" colab={"base_uri": "https://localhost:8080/", "height": 427}
# top 5 job position
plt.figure(figsize=(15,6), dpi=100)
top_job_pos = data['job_position'].value_counts()[:5]
list_top_job = top_job_pos.index.tolist()
slice_job_pos = data[data.job_position.isin(list_top_job)]
sns.countplot(x=slice_job_pos.job_position, hue=slice_job_pos.years_of_experience_cat)
# + [markdown] id="VdU79Jj29Dd9"
# ## What does company_size look like for the top 3 industries?
# + id="YPBgwwdX4CXT" outputId="1f192255-5094-4c3d-c273-b1ac9f2e4fd1" colab={"base_uri": "https://localhost:8080/", "height": 427}
# top 3 company_industry
plt.figure(figsize=(15,6), dpi=100)
top_com_industry = data['company_industry'].value_counts()[:3]
list_top_industry = top_com_industry.index.tolist()
slice_industry = data[data.company_industry.isin(list_top_industry)]
sns.countplot(x=slice_industry.company_industry, hue=slice_industry.company_size)
# + [markdown] id="b_NrkKHY9uWr"
# ## Which languages must be mastered to apply for the top 5 job_positions?
# + id="0S74FMDE9fjf"
# Collect rows in a list and build the DataFrame once;
# row-wise `DataFrame.append` is deprecated (removed in pandas 2.0)
rows = []
for idx, row in data.iterrows():
    split_bahasa = [d.strip().upper() for d in row['work_environment_bahasa'].split(',')]
    for bhs in split_bahasa:
        rows.append({'id_loker': row['id_loker'],
                     'job_position': row['job_position'],
                     'bahasa': bhs})
df_bahasa = pd.DataFrame(rows, columns=['id_loker', 'job_position', 'bahasa'])
# + id="UC89Q7Pv-B5P" outputId="2ffa96ea-de69-4a7e-b968-5784c2578c49" colab={"base_uri": "https://localhost:8080/", "height": 410}
top_job_pos = data['job_position'].value_counts()[:5]
list_top_job = top_job_pos.index.tolist()
# slice the language data by the top job positions
slice_bhs_top_job = df_bahasa[df_bahasa.job_position.isin(list_top_job)]
plt.figure(figsize=(15,6), dpi=100)
sns.countplot(x=slice_bhs_top_job.job_position, hue=slice_bhs_top_job.bahasa);
# + id="Do83w-kj_Fpt" outputId="37c6b7c6-f1f9-4ffa-ac61-79fb3a65187b" colab={"base_uri": "https://localhost:8080/", "height": 428}
data.head()
# + id="m0b2aySgAR8C" outputId="e86d9688-3586-4223-f05c-041f5ff4e5af" colab={"base_uri": "https://localhost:8080/", "height": 545}
data.describe(include='object').T
# + id="O0QAbfMzAvDB" outputId="318a0f0a-0f9f-4f62-d8c4-be6d99c0f96e" colab={"base_uri": "https://localhost:8080/", "height": 297}
data.describe()
# + id="B6Ttmu_XBRuW"
| 5_Analisis_data_jobstreet.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Pandas DataFrame
import numpy as np
import pandas as pd
# ### DataFrame
# # Missing Data
#
# Let's see a few convenient methods to deal with Missing Data in pandas:
df = pd.read_csv('df11.csv')
df
df.dropna()
df.dropna(axis=1)
df.dropna(thresh=2)
df.fillna(value='FILL VALUE')
df['A'] = df['A'].fillna(df['A'].mean())  # assignment avoids chained-indexing `inplace` pitfalls
df
df.iloc[1] = df.iloc[1].fillna(value=df.iloc[1].mean())
df.iloc[1]
df
df['B'] = df['B'].fillna(value=df['B'].mean())
df
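# The per-column fills above can also be done in a single call: `df.fillna(df.mean(numeric_only=True))` computes each numeric column's mean and fills that column's own NaNs with it. A sketch on a small toy frame (a stand-in, since the notebook's `df11.csv` isn't available here):

```python
import numpy as np
import pandas as pd

# Toy frame with missing values in two numeric columns
df = pd.DataFrame({'A': [1.0, np.nan, 3.0],
                   'B': [np.nan, 10.0, 20.0]})

# Fill every numeric column's NaNs with that column's own mean in one call
filled = df.fillna(df.mean(numeric_only=True))
print(filled)
```

# `df.mean(numeric_only=True)` returns a Series indexed by column name, and `fillna` aligns on that index, so each column receives its own mean.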
# # Operations
#
# There are lots of operations with pandas that will be really useful to you, but don't fall into any distinct category. Let's show them here in this lecture:
df = pd.read_csv('df12.csv')
df
df.iloc[0, 0]
df['A'][0]
# ### Info on Unique Values
df['B'].unique()
# No. of unique values
len(df['B'].unique())
df['B'].nunique()
df['B'].value_counts()
# ### Selecting Data
df['A']>2
df[df['A']>2]
df[(df['A']>2) & (df['B']==444)]
# ### Applying Functions
def times2(x):
return x*2
df['A'].apply(times2)
# Applying a built-in function
df['C'].apply(len)
df['A'].sum()
# **Permanently Removing a Column**
del df['A']
df
# **Get column and index names:**
df.columns
df.index
# **Sorting and Ordering a DataFrame:**
df
df.sort_values(by='B') #inplace=False by default
# **Find Null Values or Check for Null Values**
df.isnull()
| Online Certificate Course in Data Science and Machine Learning rearranged/03 pandas/Pandas-Dataframe2-17-09-2020.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="6XvCUmCEd4Dm"
# # TensorFlow Datasets
#
# TFDS provides a collection of ready-to-use datasets for use with TensorFlow, Jax, and other Machine Learning frameworks.
#
# It handles downloading and preparing the data deterministically and constructing a `tf.data.Dataset` (or `np.array`).
#
# Note: Do not confuse [TFDS](https://www.tensorflow.org/datasets) (this library) with `tf.data` (TensorFlow API to build efficient data pipelines). TFDS is a high level wrapper around `tf.data`. If you're not familiar with this API, we encourage you to read [the official tf.data guide](https://www.tensorflow.org/guide/datasets) first.
#
# + [markdown] colab_type="text" id="J8y9ZkLXmAZc"
# Copyright 2018 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0
# + [markdown] colab_type="text" id="OGw9EgE0tC0C"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/datasets/overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/datasets/blob/master/docs/overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/datasets/blob/master/docs/overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="_7hshda5eaGL"
# ## Installation
#
# TFDS exists in two packages:
#
# * `tensorflow-datasets`: The stable version, released every few months.
# * `tfds-nightly`: Released every day, contains the last versions of the datasets.
#
# To install:
#
# ```
# pip install tensorflow-datasets
# ```
#
# Note: TFDS requires `tensorflow` (or `tensorflow-gpu`) to be already installed. TFDS supports TF >=1.15.
#
# This colab uses `tfds-nightly` and TF 2.
#
# + cellView="both" colab={} colab_type="code" id="boeZp0sYbO41"
# !pip install -q tensorflow>=2 tfds-nightly matplotlib
# + colab={} colab_type="code" id="TTBSvHcSLBzc"
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
# + [markdown] colab_type="text" id="VZZyuO13fPvk"
# ## Find available datasets
#
# All dataset builders are subclasses of `tfds.core.DatasetBuilder`. To get the list of available builders, use `tfds.list_builders()` or look at our [catalog](https://www.tensorflow.org/datasets/catalog/overview).
# + colab={} colab_type="code" id="FAvbSVzjLCIb"
tfds.list_builders()
# + [markdown] colab_type="text" id="VjI6VgOBf0v0"
# ## Load a dataset
#
# The easiest way of loading a dataset is `tfds.load`. It will:
#
# 1. Download the data and save it as [`tfrecord`](https://www.tensorflow.org/tutorials/load_data/tfrecord) files.
# 2. Load the `tfrecord` and create the `tf.data.Dataset`.
#
# + colab={} colab_type="code" id="dCou80mnLLPV"
ds = tfds.load('mnist', split='train', shuffle_files=True)
assert isinstance(ds, tf.data.Dataset)
print(ds)
# + [markdown] colab_type="text" id="byOXYCEJS7S6"
# Some common arguments:
#
# * `split=`: Which split to read (e.g. `'train'`, `['train', 'test']`, `'train[80%:]'`,...). See our [split API guide](https://www.tensorflow.org/datasets/splits).
# * `shuffle_files=`: Control whether to shuffle the files between each epoch (TFDS stores big datasets in multiple smaller files).
# * `data_dir=`: Location where the dataset is saved (
# defaults to `~/tensorflow_datasets/`)
# * `with_info=True`: Returns the `tfds.core.DatasetInfo` containing dataset metadata
# * `download=False`: Disable download
#
# + [markdown] colab_type="text" id="qeNmFx_1RXCb"
# `tfds.load` is a thin wrapper around `tfds.core.DatasetBuilder`. You can get the same output using the `tfds.core.DatasetBuilder` API:
# + colab={} colab_type="code" id="2zN_jQ2ER40W"
builder = tfds.builder('mnist')
# 1. Create the tfrecord files (no-op if already exists)
builder.download_and_prepare()
# 2. Load the `tf.data.Dataset`
ds = builder.as_dataset(split='train', shuffle_files=True)
print(ds)
# + [markdown] colab_type="text" id="aW132I-rbJXE"
# ## Iterate over a dataset
#
# ### As dict
#
# By default, the `tf.data.Dataset` object contains a `dict` of `tf.Tensor`s:
# + colab={} colab_type="code" id="JAGjXdk_bIYQ"
ds = tfds.load('mnist', split='train')
ds = ds.take(1) # Only take a single example
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
# + [markdown] colab_type="text" id="umAtqBBqdkDG"
# ### As tuple
#
# By using `as_supervised=True`, you can get a tuple `(features, label)` instead for supervised datasets.
# + colab={} colab_type="code" id="nJ4O0xy3djfV"
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in ds: # example is (image, label)
print(image.shape, label)
# + [markdown] colab_type="text" id="u9palgyHfEwQ"
# ### As numpy
#
# Use `tfds.as_numpy` to convert:
#
# * `tf.Tensor` -> `np.array`
# * `tf.data.Dataset` -> `Generator[np.array]`
#
#
# + colab={} colab_type="code" id="tzQTCUkAfe9R"
ds = tfds.load('mnist', split='train', as_supervised=True)
ds = ds.take(1)
for image, label in tfds.as_numpy(ds):
print(type(image), type(label), label)
# + [markdown] colab_type="text" id="XaRN-LdXUkl_"
# ### As batched tf.Tensor
#
# By using `batch_size=-1`, you can load the full dataset in a single batch.
#
# `tfds.load` will return a `dict` (`tuple` with `as_supervised=True`) of `tf.Tensor` (`np.array` with `tfds.as_numpy`).
#
# Make sure that your dataset fits in memory, and that all examples have the same shape.
# + colab={} colab_type="code" id="Gg8BNsv-UzFl"
image, label = tfds.as_numpy(tfds.load(
'mnist',
split='test',
batch_size=-1,
as_supervised=True,
))
print(type(image), image.shape)
# + [markdown] colab_type="text" id="o-cuwvVbeb43"
# ### Build end-to-end pipeline
#
# To go further, you can look at:
#
# * Our [end-to-end Keras example](https://www.tensorflow.org/datasets/keras_example) to see a full training pipeline (with batching, shuffling,...).
# * Our [performance guide](https://www.tensorflow.org/datasets/performances) to improve the speed of your pipelines.
#
# + [markdown] colab_type="text" id="gTRTEQqscxAE"
# ## Visualize a dataset
#
# Visualize datasets with `tfds.show_examples` (only image datasets are supported for now):
#
# + colab={} colab_type="code" id="DpE2FD56cSQR"
ds, info = tfds.load('mnist', split='train', with_info=True)
fig = tfds.show_examples(ds, info)
# + [markdown] colab_type="text" id="Y0iVVStvk0oI"
# ## Access the dataset metadata
#
# All builders include a `tfds.core.DatasetInfo` object containing the dataset metadata.
#
# It can be accessed through:
#
# * The `tfds.load` API:
#
# + colab={} colab_type="code" id="UgLgtcd1ljzt"
ds, info = tfds.load('mnist', with_info=True)
# + [markdown] colab_type="text" id="XodyqNXrlxTM"
# * The `tfds.core.DatasetBuilder` API:
# + colab={} colab_type="code" id="nmq97QkilxeL"
builder = tfds.builder('mnist')
info = builder.info
# + [markdown] colab_type="text" id="zMGOk_ZsmPeu"
# The dataset info contains additional information about the dataset (version, citation, homepage, description,...).
# + colab={} colab_type="code" id="O-wLIKD-mZQT"
print(info)
# + [markdown] colab_type="text" id="1zvAfRtwnAFk"
# ### Features metadata (label names, image shape,...)
#
# Access the `tfds.features.FeatureDict`:
# + colab={} colab_type="code" id="RcyZXncqoFab"
info.features
# + [markdown] colab_type="text" id="KAm9AV7loyw5"
# Number of classes, label names:
# + colab={} colab_type="code" id="HhfzBH6qowpz"
print(info.features["label"].num_classes)
print(info.features["label"].names)
print(info.features["label"].int2str(7))  # Human-readable version (7 -> '7')
print(info.features["label"].str2int('7'))
# + [markdown] colab_type="text" id="g5eWtk9ro_AK"
# Shapes, dtypes:
# + colab={} colab_type="code" id="SergV_wQowLY"
print(info.features.shape)
print(info.features.dtype)
print(info.features['image'].shape)
print(info.features['image'].dtype)
# + [markdown] colab_type="text" id="thMOZ4IKm55N"
# ### Split metadata (e.g. split names, number of examples,...)
#
# Access the `tfds.core.SplitDict`:
# + colab={} colab_type="code" id="FBbfwA8Sp4ax"
print(info.splits)
# + [markdown] colab_type="text" id="EVw1UVYa2HgN"
# Available splits:
# + colab={} colab_type="code" id="fRBieOOquDzX"
print(list(info.splits.keys()))
# + [markdown] colab_type="text" id="iHW0VfA0t3dO"
# Get info on individual split:
# + colab={} colab_type="code" id="-h_OSpRsqKpP"
print(info.splits['train'].num_examples)
print(info.splits['train'].filenames)
print(info.splits['train'].num_shards)
# + [markdown] colab_type="text" id="fWhSkHFNuLwW"
# It also works with the subsplit API:
# + colab={} colab_type="code" id="HO5irBZ3uIzQ"
print(info.splits['train[15%:75%]'].num_examples)
print(info.splits['train[15%:75%]'].file_instructions)
# + [markdown] colab_type="text" id="GmeeOokMODg2"
# ## Citation
#
# If you're using `tensorflow-datasets` for a paper, please include the following citation, in addition to any citation specific to the used datasets (which can be found in the [dataset catalog](https://www.tensorflow.org/datasets/catalog)).
#
# ```
# @misc{TFDS,
# title = { {TensorFlow Datasets}, A collection of ready-to-use datasets},
# howpublished = {\url{https://www.tensorflow.org/datasets}},
# }
# ```
| site/en-snapshot/datasets/overview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
graph = {
'a':{'b':3, 'c':4, 'd':7},
'b':{'c':1, 'f':5},
'c':{'f':6, 'd':2},
'd':{'e':3, 'g':6},
'e':{'g':3, 'h':4},
'f':{'e':1, 'h':8},
'g':{'h':2},
'h':{'g':2}
}
def dijkstra(graph ,start ,end):
    shortest_path = {}
    track_record = {}
    yet_to_be_seen_node = dict(graph)  # copy, so pop() below does not mutate the caller's graph
    infinity = float('inf')  # safer than a magic number like 9999
    path = []
for nodes in yet_to_be_seen_node:
shortest_path[nodes] = infinity
shortest_path[start] = 0
while yet_to_be_seen_node:
minimum_distance_node = None
for nodes in yet_to_be_seen_node:
if minimum_distance_node is None:
minimum_distance_node = nodes
elif shortest_path[nodes] < shortest_path[minimum_distance_node]:
minimum_distance_node = nodes
path_options = graph[minimum_distance_node].items()
for child_node,weight in path_options:
if (weight + shortest_path[minimum_distance_node]) < shortest_path[child_node]:
shortest_path[child_node] = weight + shortest_path[minimum_distance_node]
track_record[child_node] = minimum_distance_node
yet_to_be_seen_node.pop(minimum_distance_node)
current_node = end
while current_node != start:
try:
path.insert(0 ,current_node)
current_node = track_record[current_node]
except KeyError:
            print('PATH IS NOT REACHABLE')
break
path.insert(0,start)
if shortest_path[end] != infinity:
print('SHORTEST DISTANCE IS ' + str(shortest_path[end]))
print('OPTIMAL PATH IS ' + str(path))
dijkstra(graph, 'a','d')
# -
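# The linear scan for the closest unvisited node makes the version above O(V²). A common refinement keeps the frontier in a binary heap, giving O((V+E) log V). A minimal sketch on the same example graph (the function name `dijkstra_heap` and the tuple return value are choices made here, not part of the original):

```python
import heapq

def dijkstra_heap(graph, start, end):
    # Best distance found so far for each node, plus the predecessor tree
    dist = {node: float('inf') for node in graph}
    dist[start] = 0
    pred = {}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue  # stale heap entry; a shorter route was already found
        for child, weight in graph[node].items():
            if d + weight < dist[child]:
                dist[child] = d + weight
                pred[child] = node
                heapq.heappush(heap, (dist[child], child))
    if dist[end] == float('inf'):
        return None, []  # unreachable
    # Walk predecessors back from the target to rebuild the path
    path = [end]
    while path[-1] != start:
        path.append(pred[path[-1]])
    return dist[end], path[::-1]

graph = {
    'a': {'b': 3, 'c': 4, 'd': 7},
    'b': {'c': 1, 'f': 5},
    'c': {'f': 6, 'd': 2},
    'd': {'e': 3, 'g': 6},
    'e': {'g': 3, 'h': 4},
    'f': {'e': 1, 'h': 8},
    'g': {'h': 2},
    'h': {'g': 2},
}
print(dijkstra_heap(graph, 'a', 'd'))  # shortest distance 6, via a -> c -> d
```

# Stale heap entries are simply skipped on pop, which is easier than implementing decrease-key on Python's `heapq`.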
| Dijkstra's algorithm_code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# + [markdown] origin_pos=0
# # Automatic Differentiation
# :label:`sec_autograd`
#
# As we have explained in :numref:`sec_calculus`,
# differentiation is a crucial step in nearly all deep learning optimization algorithms.
# While the calculations for taking these derivatives are straightforward,
# requiring only some basic calculus,
# for complex models, working out the updates by hand
# can be a pain (and often error-prone).
#
# Deep learning frameworks expedite this work
# by automatically calculating derivatives, i.e., *automatic differentiation*.
# In practice,
# based on our designed model
# the system builds a *computational graph*,
# tracking which data combined through
# which operations to produce the output.
# Automatic differentiation enables the system to subsequently backpropagate gradients.
# Here, *backpropagate* simply means to trace through the computational graph,
# filling in the partial derivatives with respect to each parameter.
#
#
# ## A Simple Example
#
# As a toy example, say that we are interested
# in (**differentiating the function
# $y = 2\mathbf{x}^{\top}\mathbf{x}$
# with respect to the column vector $\mathbf{x}$.**)
# To start, let us create the variable `x` and assign it an initial value.
#
# + origin_pos=1 tab=["mxnet"]
from mxnet import autograd, np, npx
npx.set_np()
x = np.arange(4.0)
x
# + [markdown] origin_pos=4
# [**Before we even calculate the gradient
# of $y$ with respect to $\mathbf{x}$,
# we will need a place to store it.**]
# It is important that we do not allocate new memory
# every time we take a derivative with respect to a parameter
# because we will often update the same parameters
# thousands or millions of times
# and could quickly run out of memory.
# Note that a gradient of a scalar-valued function
# with respect to a vector $\mathbf{x}$
# is itself vector-valued and has the same shape as $\mathbf{x}$.
#
# + origin_pos=5 tab=["mxnet"]
# We allocate memory for a tensor's gradient by invoking `attach_grad`
x.attach_grad()
# After we calculate a gradient taken with respect to `x`, we will be able to
# access it via the `grad` attribute, whose values are initialized with 0s
x.grad
# + [markdown] origin_pos=8
# (**Now let us calculate $y$.**)
#
# + origin_pos=9 tab=["mxnet"]
# Place our code inside an `autograd.record` scope to build the computational
# graph
with autograd.record():
y = 2 * np.dot(x, x)
y
# + [markdown] origin_pos=12
# Since `x` is a vector of length 4,
# an inner product of `x` and `x` is performed,
# yielding the scalar output that we assign to `y`.
# Next, [**we can automatically calculate the gradient of `y`
# with respect to each component of `x`**]
# by calling the function for backpropagation and printing the gradient.
#
# + origin_pos=13 tab=["mxnet"]
y.backward()
x.grad
# + [markdown] origin_pos=16
# (**The gradient of the function $y = 2\mathbf{x}^{\top}\mathbf{x}$
# with respect to $\mathbf{x}$ should be $4\mathbf{x}$.**)
# Let us quickly verify that our desired gradient was calculated correctly.
#
# + origin_pos=17 tab=["mxnet"]
x.grad == 4 * x
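# The same check can be reproduced without any autograd framework: a centered finite difference on $y = 2\mathbf{x}^{\top}\mathbf{x}$ should recover $4\mathbf{x}$ coordinate by coordinate (a plain NumPy sketch, not part of the original chapter):

```python
import numpy as np

def numerical_grad(f, x, h=1e-6):
    # Centered finite differences, one coordinate at a time
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

f = lambda x: 2 * np.dot(x, x)  # y = 2 * x^T x
x = np.arange(4.0)
print(numerical_grad(f, x))     # approximately 4 * x = [0., 4., 8., 12.]
```

# Finite differences are far too slow for training, but they are a handy sanity check for any gradient computation.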
# + [markdown] origin_pos=20
# [**Now let us calculate another function of `x`.**]
#
# + origin_pos=21 tab=["mxnet"]
with autograd.record():
y = x.sum()
y.backward()
x.grad # Overwritten by the newly calculated gradient
# + [markdown] origin_pos=24
# ## Backward for Non-Scalar Variables
#
# Technically, when `y` is not a scalar,
# the most natural interpretation of the differentiation of a vector `y`
# with respect to a vector `x` is a matrix.
# For higher-order and higher-dimensional `y` and `x`,
# the differentiation result could be a high-order tensor.
#
# However, while these more exotic objects do show up
# in advanced machine learning (including [**in deep learning**]),
# more often (**when we are calling backward on a vector,**)
# we are trying to calculate the derivatives of the loss functions
# for each constituent of a *batch* of training examples.
# Here, (**our intent is**) not to calculate the differentiation matrix
# but rather (**the sum of the partial derivatives
# computed individually for each example**) in the batch.
#
# + origin_pos=25 tab=["mxnet"]
# When we invoke `backward` on a vector-valued variable `y` (function of `x`),
# a new scalar variable is created by summing the elements in `y`. Then the
# gradient of that scalar variable with respect to `x` is computed
with autograd.record():
y = x * x # `y` is a vector
y.backward()
x.grad  # Equals the gradient of y = sum(x * x), i.e., 2 * x
# + [markdown] origin_pos=28
# ## Detaching Computation
#
# Sometimes, we wish to [**move some calculations
# outside of the recorded computational graph.**]
# For example, say that `y` was calculated as a function of `x`,
# and that subsequently `z` was calculated as a function of both `y` and `x`.
# Now, imagine that we wanted to calculate
# the gradient of `z` with respect to `x`,
# but wanted for some reason to treat `y` as a constant,
# and only take into account the role
# that `x` played after `y` was calculated.
#
# Here, we can detach `y` to return a new variable `u`
# that has the same value as `y` but discards any information
# about how `y` was computed in the computational graph.
# In other words, the gradient will not flow backwards through `u` to `x`.
# Thus, the following backpropagation function computes
# the partial derivative of `z = u * x` with respect to `x` while treating `u` as a constant,
# instead of the partial derivative of `z = x * x * x` with respect to `x`.
#
# + origin_pos=29 tab=["mxnet"]
with autograd.record():
y = x * x
u = y.detach()
z = u * x
z.backward()
x.grad == u
# + [markdown] origin_pos=32
# Since the computation of `y` was recorded,
# we can subsequently invoke backpropagation on `y` to get the derivative of `y = x * x` with respect to `x`, which is `2 * x`.
#
# + origin_pos=33 tab=["mxnet"]
y.backward()
x.grad == 2 * x
# + [markdown] origin_pos=36
# ## Computing the Gradient of Python Control Flow
#
# One benefit of using automatic differentiation
# is that [**even if**] building the computational graph of (**a function
# required passing through a maze of Python control flow**)
# (e.g., conditionals, loops, and arbitrary function calls),
# (**we can still calculate the gradient of the resulting variable.**)
# In the following snippet, note that
# the number of iterations of the `while` loop
# and the evaluation of the `if` statement
# both depend on the value of the input `a`.
#
# + origin_pos=37 tab=["mxnet"]
def f(a):
b = a * 2
while np.linalg.norm(b) < 1000:
b = b * 2
if b.sum() > 0:
c = b
else:
c = 100 * b
return c
# + [markdown] origin_pos=40
# Let us compute the gradient.
#
# + origin_pos=41 tab=["mxnet"]
a = np.random.normal()
a.attach_grad()
with autograd.record():
d = f(a)
d.backward()
# + [markdown] origin_pos=44
# We can now analyze the `f` function defined above.
# Note that it is piecewise linear in its input `a`.
# In other words, for any `a` there exists some constant scalar `k`
# such that `f(a) = k * a`, where the value of `k` depends on the input `a`.
# Consequently `d / a` allows us to verify that the gradient is correct.
#
# + origin_pos=45 tab=["mxnet"]
a.grad == d / a
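# The `k = f(a) / a` argument can be replayed with plain floats: within one linear piece, a centered finite difference recovers exactly the slope `f(a) / a` (a pure-Python sketch of the scalar case, independent of MXNet; `abs` stands in for the norm of a scalar):

```python
def f(a):
    # Same control flow as above, specialized to a scalar input
    b = a * 2
    while abs(b) < 1000:
        b = b * 2
    return b if b > 0 else 100 * b

a = 0.7
k = f(a) / a                          # slope of the linear piece containing a
eps = 1e-6
num_grad = (f(a + eps) - f(a - eps)) / (2 * eps)
print(k, num_grad)                    # both approximately 2048 for a = 0.7
```

# For `a = 0.7`, `b = 1.4` is doubled ten times before exceeding 1000, so `f(a) = a * 2**11` and the local slope is `2**11 = 2048`; the numerical derivative agrees as long as `a ± eps` stays inside the same piece.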
# + [markdown] origin_pos=48
# ## Summary
#
# * Deep learning frameworks can automate the calculation of derivatives. To use it, we first attach gradients to those variables with respect to which we desire partial derivatives. We then record the computation of our target value, execute its function for backpropagation, and access the resulting gradient.
#
#
# ## Exercises
#
# 1. Why is the second derivative much more expensive to compute than the first derivative?
# 1. After running the function for backpropagation, immediately run it again and see what happens.
# 1. In the control flow example where we calculate the derivative of `d` with respect to `a`, what would happen if we changed the variable `a` to a random vector or matrix? At this point, the result of the calculation `f(a)` is no longer a scalar. What happens to the result? How do we analyze this?
# 1. Redesign an example of finding the gradient of the control flow. Run and analyze the result.
# 1. Let $f(x) = \sin(x)$. Plot $f(x)$ and $\frac{df(x)}{dx}$, where the latter is computed without exploiting that $f'(x) = \cos(x)$.
#
# + [markdown] origin_pos=49 tab=["mxnet"]
# [Discussions](https://discuss.d2l.ai/t/34)
#
| scripts/d21-en/mxnet/chapter_preliminaries/autograd.ipynb |