# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Madhav2204/LGMVIP-DataScience/blob/main/Task_4_Image_to_Pencil_Sketch_with_Python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="AYARQ1M1nMiX"
# ## **Author : <NAME>**
# + [markdown] id="uBfAkeSanWUJ"
# **Task-4 : Image to Pencil Sketch with Python**
# + [markdown] id="TnX0w5B5niqD"
# Problem Statement : We read the image in RGB format and convert it to grayscale, which turns it into a classic black-and-white photo. Next we invert the grayscale image to get its negative; inversion can be used to enhance details. Finally we create the pencil sketch by dodge-blending: blurring the negative and dividing the grayscale image by the inverted blurred negative. Since images are just arrays, we can do this programmatically with the divide function from the cv2 library in Python.
# + id="b911WDYvnlUd"
import cv2
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import sys
import os
from google.colab.patches import cv2_imshow
# + colab={"base_uri": "https://localhost:8080/"} id="O8PiGvCFsKvu" outputId="35f5561a-1636-4731-feed-e9cc5e695689"
from google.colab import drive
drive.mount("/content/gdrive")
# + colab={"base_uri": "https://localhost:8080/", "height": 194} id="TGe2ZtropsCa" outputId="7a1b2e90-f980-4560-bc1e-434f84cac6d2"
img = cv2.imread('/content/gdrive/MyDrive/LGM-Internship/offset_comp_772626-opt.jpg')
plt.imshow(img)  # OpenCV loads images as BGR, so the colors look swapped here
# + colab={"base_uri": "https://localhost:8080/", "height": 189} id="V52L30k0pzev" outputId="e211095b-acf6-4598-d313-51ee81a68edb"
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
plt.title("ORIGINAL IMAGE")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 189} id="9YOTq1S9p2hi" outputId="4f5cebc9-bc34-483f-a1c1-8077bda431a2"
g_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
plt.imshow(g_img, cmap='gray')  # g_img is single-channel, so display it with a gray colormap
plt.title("GRAYSCALE IMAGE")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 189} id="Gn4_b-iqtHWd" outputId="be27ae67-3ac7-47cd-bdf5-6b78dd10f4cf"
invert = cv2.bitwise_not(g_img)
plt.imshow(invert, cmap='gray')
plt.title("NEGATIVE IMAGE")
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 189} id="fhpxCxoWtK6s" outputId="a7e375c8-ca5f-471e-8733-d6db575e175f"
blur = cv2.GaussianBlur(invert, (31, 31), 0)
inv_blur = cv2.bitwise_not(blur)
# Dodge blend: divide the grayscale image by the inverted blurred negative
sketch = cv2.divide(g_img, inv_blur, scale=256.0)
plt.imshow(sketch, cmap='gray')
plt.title("PENCIL SKETCH")
plt.show()
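# The whole pipeline above collapses into one function. Here is a minimal, library-free
# sketch of the same dodge-blend idea (assuming only NumPy; a box blur stands in for
# cv2.GaussianBlur, and the division mirrors cv2.divide(g_img, inv_blur, scale=256)):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def box_blur(img, k):
    # Edge-padded k x k mean filter; a rough stand-in for cv2.GaussianBlur
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode="edge")
    return sliding_window_view(p, (k, k)).mean(axis=(-2, -1))

def pencil_sketch(gray, k=31):
    # Invert, blur the negative, invert again, then dodge-blend:
    # sketch = gray * 256 / (255 - blurred_negative), clipped to [0, 255]
    invert = 255.0 - gray.astype(np.float64)
    inv_blur = 255.0 - box_blur(invert, k)
    sketch = gray.astype(np.float64) * 256.0 / np.maximum(inv_blur, 1.0)
    return np.clip(sketch, 0, 255).astype(np.uint8)
```

# Flat regions, where the grayscale value equals the inverted blur, dodge to pure white;
# only edges (where the blur differs from the pixel) stay dark, which is the sketch effect.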
# + id="ydOpA0UGtOWb"
# Source: Task_4_Image_to_Pencil_Sketch_with_Python.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from pandas_datareader import data as wb
from scipy.stats import norm
import matplotlib.pyplot as plt
# +
def d1(S, K, r, stdev, T):
    # Note: the denominator is stdev * sqrt(T), not stdev + sqrt(T)
    return (np.log(S / K) + (r + stdev ** 2 / 2) * T) / (stdev * np.sqrt(T))
def d2(S, K, r, stdev, T):
    return (np.log(S / K) + (r - stdev ** 2 / 2) * T) / (stdev * np.sqrt(T))
# -
norm.cdf(0)
norm.cdf(.25)
norm.cdf(.75)
norm.cdf(9)
def BSM(S, K, r, stdev, T):
return (S * norm.cdf(d1(S, K, r, stdev, T))) - (K * np.exp(-r * T) * norm.cdf(d2(S, K, r, stdev, T)))
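# A quick sanity check of the formulas with assumed, illustrative inputs (not the SQ data
# used below). The standard normal CDF is built from math.erf so the check needs no SciPy:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_call(S, K, r, stdev, T):
    # Black-Scholes-Merton price of a European call
    d1 = (math.log(S / K) + (r + stdev ** 2 / 2) * T) / (stdev * math.sqrt(T))
    d2 = d1 - stdev * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Textbook case: S=100, K=100, r=5%, sigma=20%, T=1 year gives roughly 10.45
price = bsm_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

# The price should also be increasing in the spot S, which is a cheap monotonicity check.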
ticker = 'SQ'
data = pd.DataFrame()
# Note: the 'yahoo' source in pandas_datareader has been unreliable for some time;
# this call may need a different data source (e.g. the yfinance package) to run today.
data[ticker] = wb.DataReader(ticker, data_source = 'yahoo', start = '2015-11-20', end = '2017-11-10')['Adj Close']
S = data.iloc[-1]
S
log_returns = np.log(1 + data.pct_change())
stdev = log_returns.std() * 250 ** .5
stdev
r = .025
K = 110
T = 1
d1(S, K, r, stdev, T)
d2(S, K, r, stdev, T)
BSM(S, K, r, stdev, T)
#Euler Discretization
type(stdev)
stdev = stdev.values
stdev
t_intervals = 250
delta_t = T / t_intervals
iterations = 10000
Z = np.random.standard_normal((t_intervals + 1, iterations))
S = np.zeros_like(Z)
S0 = data.iloc[-1]
S[0] = S0
for t in range(1, t_intervals + 1):
S[t] = S[t - 1] * np.exp((r - .5 * stdev ** 2) * delta_t + stdev * delta_t ** .5 * Z[t])
S
S.shape
plt.figure(figsize = (10,6))
plt.plot(S[:, :10]);
plt.show()
p = np.maximum(S[-1] - K, 0)  # call payoff at the strike K = 110 set above
p
p.shape
C = np.exp(-r * T) * np.sum(p) / iterations
C
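# The simulation above can be condensed and checked: under the risk-neutral dynamics the
# terminal mean should satisfy E[S_T] = S0 * exp(r*T). A self-contained version with
# assumed parameters (not the fetched SQ data):

```python
import numpy as np

rng = np.random.default_rng(42)
S0, r, sigma, T = 100.0, 0.025, 0.2, 1.0
t_intervals, iterations = 250, 20000
dt = T / t_intervals

Z = rng.standard_normal((t_intervals, iterations))
# Same step as the loop above: S[t] = S[t-1] * exp((r - sigma^2/2) dt + sigma sqrt(dt) Z),
# accumulated in log-space so no explicit loop is needed
log_paths = np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * Z, axis=0)
S_T = S0 * np.exp(log_paths[-1])

mean_ST = S_T.mean()
expected = S0 * np.exp(r * T)  # risk-neutral expectation of S_T
```

# With 20,000 paths the sample mean should land within a fraction of a percent of the
# theoretical value, which is a useful smoke test before pricing the option.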
# Source: Python for Finance - Investment Fundamentals & Data Analytics/Euler Discretization.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Set the environment such that multiple R processes do not crash the kernel
import os
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
# %matplotlib inline
#
# 5. Additional Statistics Functions
# ==================================
# :code:`pymer4` also comes with some flexible routines for various statistical operations such as permutation testing, bootstrapping of arbitrary functions and equivalence testing. Here are a few examples:
#
# Permutation Tests
# -----------------
# :code:`pymer4` can compute a wide variety of one- and two-sample permutation tests, including mean differences, t-statistics, effect size comparisons, and correlations.
#
#
# +
# Import numpy and set random number generator
import numpy as np
np.random.seed(10)
# Import stats functions
from pymer4.stats import perm_test
# Generate two samples of data: X (M~2, SD~10, N=100) and Y (M~2.5, SD~1, N=100)
x = np.random.normal(loc=2, size=100)
y = np.random.normal(loc=2.5, size=100)
# Between groups t-test. The first value is the t-stat and the
# second is the permuted p-value
result = perm_test(x, y, stat="tstat", n_perm=500, n_jobs=1)
print(result)
# -
# Spearman rank correlation. The first value is Spearman's rho
# and the second is the permuted p-value
result = perm_test(x, y, stat="spearmanr", n_perm=500, n_jobs=1)
print(result)
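# Under the hood, a permutation test just reshuffles the group labels and recomputes the
# statistic. A minimal NumPy-only sketch of a two-sample mean-difference test (this is an
# illustration of the idea, not pymer4's actual implementation):

```python
import numpy as np

def perm_mean_test(x, y, n_perm=500, seed=0):
    # Observed statistic: difference in group means
    rng = np.random.default_rng(seed)
    obs = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of the pooled sample
        perm_stat = pooled[: len(x)].mean() - pooled[len(x):].mean()
        if abs(perm_stat) >= abs(obs):
            count += 1
    # Add-one smoothing so the p-value is never exactly zero
    return obs, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 50)
b = rng.normal(3.0, 1.0, 50)  # clearly shifted second group
diff, p = perm_mean_test(a, b)
```

# For two groups this far apart, essentially no permutation beats the observed difference,
# so the p-value bottoms out near 1 / (n_perm + 1).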
# Bootstrap Comparisons
# ----------------------
# :code:`pymer4` can compute a bootstrap comparison using any arbitrary function that takes as input either one or two 1d numpy arrays, and returns a single value.
#
#
# +
# Import stats function
from pymer4.stats import boot_func
# Define a simple function for a median difference test
def med_diff(x, y):
return np.median(x) - np.median(y)
# Between groups median test with resampling
# The first value is the median difference and the
# second is the lower and upper 95% confidence interval
result = boot_func(x, y, func=med_diff)
print(result)
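# The same idea can be done by hand: resample each group with replacement, recompute the
# statistic, and take percentiles. A NumPy-only percentile-bootstrap sketch (boot_func's
# defaults and interval method may differ):

```python
import numpy as np

def boot_median_diff(x, y, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each group with replacement and recompute the statistic
        xs = rng.choice(x, size=len(x), replace=True)
        ys = rng.choice(y, size=len(y), replace=True)
        stats[i] = np.median(xs) - np.median(ys)
    point = np.median(x) - np.median(y)
    lo, hi = np.percentile(stats, [2.5, 97.5])  # 95% percentile interval
    return point, (lo, hi)

rng = np.random.default_rng(2)
x = rng.normal(5.0, 0.1, 100)
y = rng.normal(0.0, 0.1, 100)
point, (lo, hi) = boot_median_diff(x, y)
```

# Here the true median difference is 5, so the point estimate and the bootstrap interval
# should both sit very close to that value.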
# -
# TOST Equivalence Tests
# ----------------------
# :code:`pymer4` also has experimental support for `two one-sided tests (TOST) of equivalence <https://bit.ly/33wsB5i/>`_.
#
#
# +
# Import stats function
from pymer4.stats import tost_equivalence
# Generate some data
lower, upper = -0.1, 0.1
x, y = np.random.normal(0.145, 0.025, 35), np.random.normal(0.16, 0.05, 17)
result = tost_equivalence(x, y, lower, upper, plot=True)
# Print the results dictionary nicely
for k, v in result.items():
print(f"{k}: {v}\n")
# Source: docs/auto_examples/example_05_misc_stats.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %pylab
from glob import glob
from natsort import natsorted
import h5py
from palettable.cartocolors.qualitative import Bold_8, Bold_9, Bold_10
from natsort import natsorted as sorted
Q_over_m, L_over_m = 1/1.6e-47, 1067.8216118445728 # IMF-averaged specific ionizing photon production rate and specific luminosity
# -
# ## Figure 2
# +
fig, ax = plt.subplots(1,1,figsize=(4,4))
ax2 = ax.twiny()
rho0 = (2e4 * 3 / (4*np.pi*10**3))
tff = (3*np.pi / (32 * 4300.7 * rho0))**0.5 * 979
ax2.set(xlim=[0,10/tff],xlabel=r"$t/t_{\rm ff,0}$",xticks=[0,1,2])
ax.set_prop_cycle('color', Bold_9.mpl_colors)
# Note: the original unpacked two columns to the same name "fmag"; only the last one is plotted below
times, r50, rho_avg, rho50, sigma_3D, _fmag, alpha, mbound, mmol, mhii, rad, fmag = np.loadtxt("global_gas_props.dat")[::10].T
times *= 979
ax.plot(times,r50,label=r"$R_{\rm eff}\,\left(\rm pc\right)$",lw=1.5)
ax.plot(times,rho50,label=r"$n_{\rm H}^{\rm eff}\,\left(\rm cm^{-3}\right)$",lw=1.5)
ax.plot(times,rad,label=r"$e_{\rm rad}^{\rm eff} \,\left(\rm eV \, cm^{-3}\right)$",lw=1.5)
ax.plot(times,sigma_3D,label=r"$\sigma_{\rm 3D} \,\left(\rm km\,s^{-1}\right)$",lw=1.5)
ax.plot(times,alpha,label=r"$\alpha_{\rm turb} $",lw=1.5)
ax.plot(times[1:],mmol[1:],label=r"$M_{\rm H_{\rm 2}}\,\left(M_\odot\right)$",lw=1.5,ls='dashed')#,label=r"$\alpha_{\rm turb} $")
ax.plot(times,mhii,label=r"$M_{\rm HII}\,\left(M_\odot\right)$",lw=1.5,ls='dashed')
ax.plot(times,mbound,label=r"$M_{\rm bound}\,\left(M_\odot\right)$",lw=1.5,ls='dashed')
ax.plot(times,fmag,label=r"$E_{\rm mag}/E_{\rm turb}$",lw=1.5,ls='dotted')
t_SN = 0.0087*979
ax.plot([t_SN,t_SN],[0,1e37],ls='dotted',color='black')
ax.text(t_SN*1.01,1e4,"1st SN",rotation=90)
ax.set(xlabel="Time (Myr)",yscale='log',ylim=[0.01,4e6],xlim=[0,10])
ax.legend(labelspacing=0.1,fontsize=8,ncol=2,loc=2)
plt.savefig("global_gas_props.pdf",bbox_inches='tight')
# -
# ## Figure 3
# +
times,rho_stargas, rho_starstar, mstar, r50_star, lum, lum_MS, lum_acc, Q, F_jets, L_jets, F_wind, L_wind, sigma_3D_stars, vr_stars, Nstars, Mstar_max, Mstar_med, Mstar_mean = np.loadtxt("global_star_props.dat")[::10].T
fig, ax = plt.subplots(1,1,figsize=(4,4))
ax2 = ax.twiny()
rho0 = (2e4 * 3 / (4*np.pi*10**3))
times = np.array(times)*979
tff = (3*np.pi / (32 * 4300.7 * rho0))**0.5 * 979
ax2.set(xlim=[0,10/tff],xlabel=r"$t/t_{\rm ff,0}$",xticks=[0,1,2])
ax.set_prop_cycle('color', Bold_10.mpl_colors)
ax.plot(times, mstar, label=r"$M_{\rm \star}^{\rm tot}\,(M_\odot)$",lw=1.5)
ax.plot(times, Nstars, label=r"$N_{\rm \star}$",lw=1.5,ls='dashed')
t_nstar = np.interp(0.5, Nstars/Nstars.max(), times)
ax.plot([t_nstar,t_nstar],[0,1e100],color='black',ls='dashed',zorder=-100)
ax.text(t_nstar*1.02, 3e3,"50\% of stars",rotation=90)
t_mstar = np.interp(0.5, mstar/mstar.max(), times)
ax.plot([t_mstar,t_mstar],[0,1e100],color='black',ls='dotted',zorder=-100)
ax.text(t_mstar*1.02, 3e3,"50\% of mass",rotation=90)
L_acc = lum-lum_MS
ax.plot(times, rho_starstar,lw=1.5,label=r"$\tilde{\rho}_{\rm \star}^{\rm NN}\,(M_\odot\rm \,pc^{-3})$",ls='dashdot')
ax.plot(times, r50_star,lw=1.5,label=r"$R_{\rm \star}^{\rm eff}\,\rm (pc)$")
ax.plot(times, sigma_3D_stars/1e3,lw=1.5,label=r"$\sigma_{\rm \star}\,(\rm km\,s^{-1})$")
rho50 = 0.5*mstar / (4*np.pi/3 * r50_star**3)
ax.plot(times,rho50,lw=1.5,label=r"$\rho_{\rm \star}^{\rm eff}\,(M_\odot\rm \,pc^{-3})$",ls='dashdot')
ax.plot(times,vr_stars/1e3,lw=1.5,label=r"$|\tilde{v}_{\rm r}|\,(\rm km\,s^{-1})$",color='cornflowerblue')
ax.plot(times,-vr_stars/1e3,lw=1.5,color='cornflowerblue',ls='dashed')
ax.legend(labelspacing=0.1,fontsize=8,ncol=1,handletextpad=0.2)
ax.set(xlabel="Time (Myr)",yscale='log',ylim=[0.1,1e5],xlim=[0,10])
plt.savefig("global_star_props.pdf",bbox_inches='tight')
# -
# ## Figure 5
# +
data = np.loadtxt("global_star_props.dat")
stride = 10
data = np.diff(data.cumsum(axis=0)[::stride],axis=0)/stride
times,rho_stargas, rho_starstar, mstar, r50_star, lum, lum_MS, lum_acc, Q, F_jets, L_jets, F_wind, L_wind, sigma_3D_stars, vr_stars, Nstars, Mstar_max, Mstar_med, Mstar_mean = data.T
fig, ax = plt.subplots(1,1,figsize=(4,4))
ax2 = ax.twiny()
rho0 = (2e4 * 3 / (4*np.pi*10**3))
times = np.array(times)*979
tff = (3*np.pi / (32 * 4300.7 * rho0))**0.5 * 979
ax2.set(xlim=[0,10/tff],xlabel=r"$t/t_{\rm ff,0}$",xticks=[0,1,2])
from palettable.cartocolors.qualitative import Bold_6
cmap = Bold_6
ax.set_prop_cycle('color', cmap.mpl_colors)
L_acc = lum_acc
ax.plot(times, lum, label=r"$L_{\rm tot}\,(L_\odot)$",lw=1.5,zorder=100,ls='dashed')
ax.plot(times, L_acc, label=r"$L_{\rm acc}\,(L_\odot)$",lw=1.5,ls='dashed')
ax.plot(times, lum-L_acc, label=r"$L_{\rm fus}\,(L_\odot)$",lw=1.5,ls='dashed')
ax.plot(times, L_wind, label=r"$\dot{E}_{\rm wind}\,(L_\odot)$",lw=1.5)
ax.plot(times, L_jets, label=r"$\dot{E}_{\rm jets}\,\left(L_\odot\right) $",lw=1.5)
ax.plot(times, Q/1e47, label=r"$\mathcal{Q} \,\left(10^{47} \rm s^{-1}\right) $",lw=1.5)
ax.plot(times, mstar*L_over_m, label=r"$M_{\rm \star}^{\rm tot} \langle L/M_{\rm \star} \rangle \,(L_\odot)$",lw=1.5,ls='dotted',zorder=-1,color=cmap.mpl_colors[0])
ax.plot(times, mstar*Q_over_m/1e47, label=r"$M_{\rm \star}^{\rm tot} \langle \mathcal{Q}/M_{\rm \star}\rangle \,(10^{47} s^{-1})$",lw=1.5,ls='dotted',zorder=-1,color=cmap.mpl_colors[-1])
ax.plot([0,10],[8.5,8.5],ls='dashed',color='black')
ax.text(5,4,r"$G^{3/2} M_{\rm 0}^{5/2} R_{\rm 0}^{-5/2}$ ($L_\odot$)",fontsize=6)
t_SN = 0.0087*979
ax.plot([t_SN,t_SN],[0,1e37],ls='dotted',color='black')
ax.text(t_SN*1.01,1e7,"1st SN",rotation=90)
ax.legend(labelspacing=0,fontsize=8,loc=2,handletextpad=0.2,frameon=False,framealpha=0.5,ncol=2)
ax.set(xlabel="Time (Myr)",yscale='log',ylim=[1,1e8],xlim=[0,10],yticks=np.logspace(0,8,9))#,ylim=[0.1,1e5],xlim=[0,8])
plt.savefig("global_fb_energy_props.pdf",bbox_inches='tight')
# -
# ## Figure 6
# +
data = np.loadtxt("global_star_props.dat")
stride = 10
data = np.diff(data.cumsum(axis=0)[::stride],axis=0)/stride
times,rho_stargas, rho_starstar, mstar, r50_star, lum, lum_MS, lum_acc, Q, F_jets, L_jets, F_wind, L_wind, sigma_3D_stars, vr_stars, Nstars, Mstar_max, Mstar_med, Mstar_mean = data.T
fig, ax = plt.subplots(1,1,figsize=(4,4))
ax2 = ax.twiny()
rho0 = (2e4 * 3 / (4*np.pi*10**3))
times = np.array(times)*979
tff = (3*np.pi / (32 * 4300.7 * rho0))**0.5 * 979
ax2.set(xlim=[0,10/tff],xlabel=r"$t/t_{\rm ff,0}$",xticks=[0,1,2])
from palettable.cartocolors.qualitative import Bold_4
ax.set_prop_cycle('color', Bold_4.mpl_colors)
L_acc = lum_acc #np.clip(lum - lum_MS,0,1e100)
ax.plot([0,10], np.ones(2)* 2e4**2 / 10**2 * 0.21,ls='dashed',color='black')#,label=r"$GM_{\rm 0}^2/R_{\rm 0}^2$ ($L_\odot/c$)",color='black')
ax.text(0.4,0.6e6,r"$GM_{\rm 0}^2/R_{\rm 0}^2$ ($L_\odot/c$)",fontsize=6)
ax.plot(times, lum, color='black',lw=1.5)
ax.plot(times, lum, label=r"$L_{\rm tot}/c\,(L_\odot/c)$",lw=1)
ax.plot(times, F_wind, color='black',lw=1.5)
ax.plot(times, F_wind, label=r"$\dot{P}_{\rm wind} \,\left(L_\odot/c\right)$",lw=1)
ax.plot(times, F_jets, color='black',lw=1.5)
ax.plot(times, F_jets, label=r"$\dot{P}_{\rm jets}\,\left(L_\odot/c\right) $",lw=1)
t_SN = 0.0087*979
ax.plot([t_SN,t_SN],[0,1e37],ls='dotted',color='black')
ax.text(t_SN*1.01,3e6,"1st SN",rotation=90)
#ax.plot(times,np.abs(vr_stars))
ax.legend(labelspacing=0,fontsize=8,loc=2,handletextpad=0.2,frameon=False,framealpha=0.5,ncol=1)
ax.set(xlabel="Time (Myr)",yscale='log',ylim=[1e3,1e7],xlim=[0,10])#,ylim=[0.1,1e5],xlim=[0,8])
plt.savefig("global_fb_mom_props.pdf",bbox_inches='tight')
# Source: PlotGlobalProperties.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 4_1_6_Calculating_Cumulative_Returns_with_Pandas
# Create a Pandas DataFrame for an imaginary asset that contains 10 daily prices. Calculate both the daily and the cumulative return for that asset.
# +
# Import the pandas library
import pandas as pd
# Create the stock_xyz DataFrame
stock_xyz = pd.DataFrame({'close' : [11.25, 11.98, 10.74, 11.16, 12.35, 12.87, 13.03, 13.15, 13.50, 13.87]})
# -
# View the stock_xyz DataFrame
stock_xyz
# +
# Create the daily_returns DataFrame from the stock_xyz prices
# daily_returns = (stock_xyz - stock_xyz.shift(1)) / stock_xyz.shift(1)
daily_returns = stock_xyz.pct_change()
# View the daily_returns DataFrame
daily_returns
# +
# Calculate the cumulative returns using the 'cumprod()' function
cumulative_returns = (1 + daily_returns).cumprod()
# View the cumulative_returns DataFrame
cumulative_returns
# -
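# One useful sanity check on the result: compounding the daily returns must reproduce the
# ratio of the last price to the first, i.e. (1 + daily_returns).cumprod() ends at
# close[-1] / close[0]. A self-contained check with the same prices:

```python
import pandas as pd

stock_xyz = pd.DataFrame({'close': [11.25, 11.98, 10.74, 11.16, 12.35,
                                    12.87, 13.03, 13.15, 13.50, 13.87]})
daily_returns = stock_xyz.pct_change()
cumulative_returns = (1 + daily_returns).cumprod()

# The final cumulative growth factor equals last price / first price
final_factor = cumulative_returns['close'].iloc[-1]
price_ratio = stock_xyz['close'].iloc[-1] / stock_xyz['close'].iloc[0]
```

# The first entry of cumulative_returns stays NaN because the first daily return is
# undefined; the accumulation simply skips it.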
# Source: 4_1_6_Calculating_Cumulative_Returns_with_Pandas.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# + deletable=true editable=true
from __future__ import print_function
from bqplot import pyplot as plt
from bqplot import topo_load
from bqplot.interacts import panzoom
import numpy as np
import pandas as pd
import datetime as dt
# + deletable=true editable=true
# initializing data to be plotted
np.random.seed(0)
size = 100
y_data = np.cumsum(np.random.randn(size) * 100.0)
y_data_2 = np.cumsum(np.random.randn(size))
y_data_3 = np.cumsum(np.random.randn(size) * 100.)
x = np.linspace(0.0, 10.0, size)
# + deletable=true editable=true
price_data = pd.DataFrame(np.cumsum(np.random.randn(150, 2).dot([[0.5, 0.8], [0.8, 1.0]]), axis=0) + 100,
columns=['Security 1', 'Security 2'],
index=pd.date_range(start='01-01-2007', periods=150))
symbol = 'Security 1'
dates_all = price_data.index.values
final_prices = price_data[symbol].values.flatten()
# + deletable=true editable=true
price_data.index.names = ['date']
# + [markdown] deletable=true editable=true
# ## Simple Plots
# + [markdown] deletable=true editable=true
# ### Line Chart
# + deletable=true editable=true
plt.figure()
plt.plot(x, y_data)
plt.xlabel('Time')
plt.show()
# + deletable=true editable=true
_ = plt.ylabel('Stock Price')
# + deletable=true editable=true
# Setting the title for the current figure
plt.title('Brownian Increments')
# + deletable=true editable=true
plt.figure()
plt.plot('Security 1', data=price_data)
plt.show()
# + [markdown] deletable=true editable=true
# ### Scatter Plot
# + deletable=true editable=true
plt.figure(title='Scatter Plot with colors')
plt.scatter(y_data_2, y_data_3, color=y_data)
plt.show()
# + [markdown] deletable=true editable=true
# ### Horizontal and Vertical Lines
# + deletable=true editable=true
## adding a horizontal line at y=0
plt.hline(0)
plt.show()
# + deletable=true editable=true
## adding a vertical line at x=4 with stroke_width and colors being passed.
plt.vline(4., stroke_width=2, colors=['orangered'])
plt.show()
# + deletable=true editable=true
plt.figure()
plt.scatter('Security 1', 'Security 2', color='date', data=price_data.reset_index(), stroke='Black')
plt.show()
# + [markdown] deletable=true editable=true
# ### Histogram
# + deletable=true editable=true
plt.figure()
plt.hist(y_data, colors=['OrangeRed'])
plt.show()
# + deletable=true editable=true
plt.figure()
plt.hist('Security 1', data=price_data, colors=['MediumSeaGreen'])
plt.xlabel('Hello')
plt.show()
# + [markdown] deletable=true editable=true
# ### Bar Chart
# + deletable=true editable=true
plt.figure()
bar_x=['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N', 'P', 'Q', 'R', 'S', 'T', 'U']
plt.bar(bar_x, y_data_3)
plt.show()
# + deletable=true editable=true
plt.figure()
plt.bar('date', 'Security 2', data=price_data.reset_index()[:10])
plt.show()
# + [markdown] deletable=true editable=true
# ### Pie Chart
# + deletable=true editable=true
plt.figure()
d = abs(y_data_2[:5])
plt.pie(d)
plt.show()
# + deletable=true editable=true
plt.figure()
plt.pie('Security 2', color='Security 1', data=price_data[:4])
plt.show()
# + [markdown] deletable=true editable=true
# ### OHLC
# + deletable=true editable=true
dates = np.arange(dt.datetime(2014, 1, 2), dt.datetime(2014, 1, 30), dt.timedelta(days=1))
prices = np.array([[ 187.21 , 187.4 , 185.2 , 185.53 ],
[ 185.83 , 187.35 , 185.3 , 186.64 ],
[ 187.15 , 187.355 , 185.3 , 186. ],
[ 186.39 , 190.35 , 186.38 , 189.71 ],
[ 189.33 , 189.4175, 187.26 , 187.97 ],
[ 189.02 , 189.5 , 186.55 , 187.38 ],
[ 188.31 , 188.57 , 186.28 , 187.26 ],
[ 186.26 , 186.95 , 183.86 , 184.16 ],
[ 185.06 , 186.428 , 183.8818, 185.92 ],
[ 185.82 , 188.65 , 185.49 , 187.74 ],
[ 187.53 , 188.99 , 186.8 , 188.76 ],
[ 188.04 , 190.81 , 187.86 , 190.09 ],
[ 190.23 , 190.39 , 186.79 , 188.43 ],
[ 181.28 , 183.5 , 179.67 , 182.25 ],
[ 181.43 , 183.72 , 180.71 , 182.73 ],
[ 181.25 , 182.8141, 179.64 , 179.64 ],
[ 179.605 , 179.65 , 177.66 , 177.9 ],
[ 178.05 , 178.45 , 176.16 , 176.85 ],
[ 175.98 , 178.53 , 175.89 , 176.4 ],
[ 177.17 , 177.86 , 176.36 , 177.36 ]])
plt.figure()
plt.ohlc(dates, prices)
plt.show()
# -
# ### Boxplot
plt.figure()
plt.boxplot(np.arange(10), np.random.randn(10, 100))
plt.show()
# + [markdown] deletable=true editable=true
# ### Map
# + deletable=true editable=true
plt.figure()
plt.geo(map_data='WorldMap')
plt.show()
# + [markdown] deletable=true editable=true
# ### Heatmap
# + deletable=true editable=true
plt.figure(padding_y=0)
plt.heatmap(x * x[:, np.newaxis])
plt.show()
# + [markdown] deletable=true editable=true
# ### GridHeatMap
# + deletable=true editable=true
plt.figure(padding_y=0)
plt.gridheatmap(x[:10] * x[:10, np.newaxis])
plt.show()
# + [markdown] deletable=true editable=true
# ### Plotting Dates
# + deletable=true editable=true
plt.figure()
plt.plot(dates_all, final_prices)
plt.show()
# + [markdown] deletable=true editable=true
# ### Editing existing axes properties
# + deletable=true editable=true
## adding grid lines and changing the side of the axis in the figure above
plt.axes(options={'x': {'grid_lines': 'solid'}, 'y': {'side': 'right', 'grid_lines': 'dashed'}})
# + [markdown] deletable=true editable=true
# ## Advanced Usage
# + [markdown] deletable=true editable=true
# ### Multiple Marks on the same Figure
# + deletable=true editable=true
plt.figure()
plt.plot(x, y_data_3, colors=['orange'])
plt.scatter(x, y_data, stroke='black')
plt.show()
# + [markdown] deletable=true editable=true
# ### Using marker strings in Line Chart
# + deletable=true editable=true
mark_x = np.arange(10)
plt.figure(title='Using Marker Strings')
plt.plot(mark_x, 3 * mark_x + 5, 'y-.s') # color=yellow, line_style=dash_dotted, marker=square
plt.plot(mark_x ** 2, 'm:d') # color=magenta, line_style=dotted, marker=diamond
plt.show()
# + [markdown] deletable=true editable=true
# ### Partially changing the scales
# + deletable=true editable=true
plt.figure()
plt.plot(x, y_data)
## preserving the x scale and changing the y scale
plt.scales(scales={'x': plt.Keep})
plt.plot(x, y_data_2, colors=['orange'], axes_options={'y': {'side': 'right', 'color': 'orange',
'grid_lines': 'none'}})
plt.show()
# + [markdown] deletable=true editable=true
# ### Adding a label to the chart
# + deletable=true editable=true
plt.figure()
line = plt.plot(dates_all, final_prices)
plt.show()
# + deletable=true editable=true
## adds the label to the figure created above
plt.label(['Pie Day'], x=[np.datetime64('2007-03-14')], y=[final_prices.mean()], scales=line.scales,
colors=['orange'])
# + [markdown] deletable=true editable=true
# ### Changing context figure
# + deletable=true editable=true
plt.figure(1)
plt.plot(x,y_data_3)
plt.show()
# + deletable=true editable=true
plt.figure(2)
plt.plot(x[:20],y_data_3[:20])
plt.show()
# + [markdown] deletable=true editable=true
# ### Re-editing first figure
# + deletable=true editable=true
## adds the new line to the first figure
plt.figure(1, title='New title')
plt.plot(x,y_data, colors=['orange'])
# + [markdown] deletable=true editable=true
# ### Viewing the properties of the figure
# + deletable=true editable=true
marks = plt.current_figure().marks
marks[0].get_state()
# + [markdown] deletable=true editable=true
# ### Showing a second view of the first figure
# + deletable=true editable=true
plt.show()
# + [markdown] deletable=true editable=true
# ### Clearing the figure
# + deletable=true editable=true
### Clearing the figure above
plt.clear()
# + [markdown] deletable=true editable=true
# ### Deleting a figure and all its views.
# + deletable=true editable=true
plt.show(2)
# + deletable=true editable=true
plt.close(2)
# + [markdown] deletable=true editable=true
# ## Interactions in Pyplot
# + deletable=true editable=true
def call_back(name, value):
print(value)
# + [markdown] deletable=true editable=true
# ### Brush Selector
# + deletable=true editable=true
plt.figure()
plt.scatter(y_data_2, y_data_3, colors=['orange'], stroke='black')
## click and drag on the figure to see the selector
plt.brush_selector(call_back)
plt.show(display_toolbar=False)
# + [markdown] deletable=true editable=true
# ### Fast Interval Selector
# + deletable=true editable=true
plt.figure()
n = 100
plt.plot(np.arange(n), y_data_3)
## click on the figure to activate the selector
plt.int_selector(call_back)
plt.show(display_toolbar=False)
# + [markdown] deletable=true editable=true
# ### Brush Interval Selector with call back on brushing
# + deletable=true editable=true
# click and drag on chart to make a selection
plt.brush_int_selector(call_back, 'brushing')
# Source: examples/Basic Plotting/Pyplot.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/u6k/ml-sandbox/blob/master/predict_horse_racing.3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="vGRSnotHBTZZ" colab_type="code" colab={}
from google.colab import drive
drive.mount("/content/drive")
# + id="dBV6_KTVBdqF" colab_type="code" outputId="18ea5d93-bfc7-4c48-a7cc-07faf17f137c" colab={"base_uri": "https://localhost:8080/", "height": 848}
import pandas as pd
df_all = pd.read_csv("drive/My Drive/ml_data/race_meta_and_scores.csv")
df_all.info()
df_all.head()
# + id="JBk0UfnkBhCF" colab_type="code" outputId="41886724-a1d1-49f4-bf9e-3f8c6dfc7bdf" colab={"base_uri": "https://localhost:8080/", "height": 1309}
df_subset = df_all[["course_length",
"weather",
"course_condition",
"race_class",
"prize_class",
"gender",
"age",
"coat_color",
"horse_weight",
"trainer_id",
"jockey_id",
"jockey_weight",
"rank"]]
print(df_subset.info())
print("----------")
print("isnull.sum")
print("----------")
print(df_subset.isnull().sum())
print("----------")
print("dropna")
print("----------")
df = df_subset.dropna()
print("----------")
print("isnull.sum")
print("----------")
print(df.isnull().sum())
print("----------")
print(df.info())
# + id="r6snqrybBjLf" colab_type="code" outputId="bca0e354-7cb8-4e65-9292-eff814d358ca" colab={"base_uri": "https://localhost:8080/", "height": 309}
df.head()
# + id="UGIdDd0FBm-y" colab_type="code" colab={}
df_course_length = pd.get_dummies(df["course_length"])
df_weather = pd.get_dummies(df["weather"])
df_course_condition = pd.get_dummies(df["course_condition"])
df_race_class = pd.get_dummies(df["race_class"])
df_prize_class = pd.get_dummies(df["prize_class"])
df_gender = pd.get_dummies(df["gender"])
df_age = pd.get_dummies(df["age"])
df_coat_color = pd.get_dummies(df["coat_color"])
df_trainer_id = pd.get_dummies(df["trainer_id"])
df_jockey_id = pd.get_dummies(df["jockey_id"])
# Note: these two use df_all (before dropna) rather than df, and are not used below
df_father_horse_name = pd.get_dummies(df_all["father_horse_name"])
df_mother_horse_name = pd.get_dummies(df_all["mother_horse_name"])
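# What pd.get_dummies does to each of these columns, on a toy example (the column name
# and values here are illustrative, not from the race data):

```python
import pandas as pd

# One categorical column becomes one indicator column per category
toy = pd.DataFrame({"weather": ["sunny", "rain", "sunny", "cloudy"]})
dummies = pd.get_dummies(toy["weather"])
```

# Each row has exactly one 1 across the indicator columns, which is why the encoded
# frames below can simply be concatenated in place of the original column.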
# + id="Sbjcz5fxBqxD" colab_type="code" outputId="251fced5-36ed-4ed4-909d-82c01ded4e0c" colab={"base_uri": "https://localhost:8080/", "height": 2142}
import sklearn.preprocessing as sp
import numpy as np
df_input = pd.concat([df.drop(["course_length"], axis=1), df_course_length], axis=1)
df_input = pd.concat([df_input.drop(["weather"], axis=1), df_weather], axis=1)
df_input = pd.concat([df_input.drop(["course_condition"], axis=1), df_course_condition], axis=1)
#df_input = pd.concat([df_input.drop(["race_class"], axis=1), df_race_class], axis=1)
df_input = df_input.drop(["race_class"], axis=1)
#df_input = pd.concat([df_input.drop(["prize_class"], axis=1), df_prize_class], axis=1)
df_input = df_input.drop(["prize_class"], axis=1)
df_input = pd.concat([df_input.drop(["gender"], axis=1), df_gender], axis=1)
df_input = pd.concat([df_input.drop(["age"], axis=1), df_age], axis=1)
df_input = pd.concat([df_input.drop(["coat_color"], axis=1), df_coat_color], axis=1)
#df_input = pd.concat([df_input.drop(["trainer_id"], axis=1), df_trainer_id], axis=1)
df_input = df_input.drop(["trainer_id"], axis=1)
#df_input = pd.concat([df_input.drop(["jockey_id"], axis=1), df_jockey_id], axis=1)
df_input = df_input.drop(["jockey_id"], axis=1)
df_input["horse_weight"] = sp.minmax_scale(df_input["horse_weight"])
df_input["jockey_weight"] = sp.minmax_scale(df_input["jockey_weight"])
df_input["rank"] = df_input["rank"].astype(np.int64)
df_input.info()
df_input
# + id="koBBStClCHVe" colab_type="code" colab={}
x = df_input.drop(["rank"], axis=1)
y = df_input["rank"]
# + id="qD5fWx5VCNpo" colab_type="code" outputId="d68e9e3f-88f4-4200-a26c-3818e294b374" colab={"base_uri": "https://localhost:8080/", "height": 287}
x.head()
# + id="Fkw10YLcCO9y" colab_type="code" outputId="1dcb271e-cd83-40fa-91b3-ede716b7b797" colab={"base_uri": "https://localhost:8080/", "height": 119}
y.head()
# + id="uGZ4dhluCRMp" colab_type="code" outputId="3d034d45-341b-4164-d0f3-563233399398" colab={"base_uri": "https://localhost:8080/", "height": 85}
import sklearn.model_selection as sm
x_train, x_test, y_train, y_test = sm.train_test_split(x, y)
print("x_train.shape: {0}".format(x_train.shape))
print("x_test.shape: {0}".format(x_test.shape))
print("y_train.shape: {0}".format(y_train.shape))
print("y_test.shape: {0}".format(y_test.shape))
# + id="HTiuhIgcCVKd" colab_type="code" outputId="b95dd3a8-29f2-4e2e-9612-c450d1e8eafb" colab={"base_uri": "https://localhost:8080/", "height": 139}
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(max_iter=1000)  # raise max_iter so the solver converges on this feature set
clf.fit(x_train, y_train)
print("train score: {0}".format(clf.score(x_train, y_train)))
print("test score: {0}".format(clf.score(x_test, y_test)))
# + id="gTI4tdnNCtLN" colab_type="code" outputId="9d3d2f21-9932-4797-fb54-984ba8fcdeb7" colab={"base_uri": "https://localhost:8080/", "height": 1969}
results = clf.predict(x_test)
df_result = pd.DataFrame()
df_result["rank"] = y_test
df_result["rank_result"] = results
df_result
# Source: predict_horse_racing.3.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd # for working with data in Python
import numpy as np
import matplotlib.pyplot as plt # for visualization
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn import linear_model
# use Pandas to read in csv files. The pd.read_csv() method creates a DataFrame from a csv file
train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')
print("1 \n")
# check out the size of the data
print("Train data shape:", train.shape)
print("Test data shape:", test.shape)
print("2 \n")
# look at a few rows using the DataFrame.head() method
# train.head()
print(train.head())
# +
plt.style.use(style='ggplot')
plt.rcParams['figure.figsize'] = (10, 6)
#######################################################
# 2. Explore the data and engineer Features ###
#######################################################
print("3 \n")
# +
# to get more information like count, mean, std, min, max etc
# train.SalePrice.describe()
print (train.SalePrice.describe())
print("4 \n")
# to plot a histogram of SalePrice
print ("Skew is:", train.SalePrice.skew())
plt.hist(train.SalePrice, color='blue')
plt.show()
print("5 \n")
# -
# use np.log() to transform train.SalePrice, calculate the skewness a second time, and re-plot the data
target = np.log(train.SalePrice)
print ("\n Skew is:", target.skew())
plt.hist(target, color='blue')
plt.show()
# return a subset of columns matching the specified data types
numeric_features = train.select_dtypes(include=[np.number])
# numeric_features.dtypes
print(numeric_features.dtypes)
# +
corr = numeric_features.corr()
# The first five features are the most positively correlated with SalePrice, while the next five are the most negatively correlated.
print (corr['SalePrice'].sort_values(ascending=False)[:5], '\n')
print (corr['SalePrice'].sort_values(ascending=False)[-5:])
# -
print(train.OverallQual.unique())
# print("9 \n")
#investigate the relationship between OverallQual and SalePrice.
#We set index='OverallQual' and values='SalePrice'. We chose to look at the median here.
quality_pivot = train.pivot_table(index='OverallQual', values='SalePrice', aggfunc=np.median)
print(quality_pivot)
# +
#visualize this pivot table more easily, we can create a bar plot
#Notice that the median sales price strictly increases as Overall Quality increases.
quality_pivot.plot(kind='bar', color='blue')
plt.xlabel('Overall Quality')
plt.ylabel('Median Sale Price')
plt.xticks(rotation=0)
plt.show()
# +
print("11 \n")
"""
#to generate some scatter plots and visualize the relationship between the Ground Living Area(GrLivArea) and SalePrice
plt.scatter(x=train['GrLivArea'], y=target)
plt.ylabel('Sale Price')
plt.xlabel('Above grade (ground) living area square feet')
plt.show()
"""
print("12 \n")
# do the same for GarageArea.
plt.scatter(x=train['GarageArea'], y=target)
plt.ylabel('Sale Price')
plt.xlabel('Garage Area')
plt.show()
# +
# create a new dataframe with some outliers removed
train = train[train['GarageArea'] < 1200]
# display the previous graph again without outliers
plt.scatter(x=train['GarageArea'], y=np.log(train.SalePrice))
plt.xlim(-200,1600) # This forces the same scale as before
plt.ylabel('Sale Price')
plt.xlabel('Garage Area')
plt.show()
# -
# create a DataFrame to view the top null columns and return the counts of the null values in each column
nulls = pd.DataFrame(train.isnull().sum().sort_values(ascending=False)[:25])
nulls.columns = ['Null Count']
nulls.index.name = 'Feature'
#nulls
print(nulls)
# +
print("15 \n")
"""
#to return a list of the unique values
print ("Unique values are:", train.MiscFeature.unique())
"""
######################################################
# Wrangling the non-numeric Features ##
######################################################
print("16 \n")
# consider the non-numeric features and display details of columns
categoricals = train.select_dtypes(exclude=[np.number])
#categoricals.describe()
print(categoricals.describe())
# +
#####################################################
# Transforming and engineering features ##
######################################################
print("17 \n")
# When transforming features, it's important to remember that any transformations that you've applied to the training data before
# fitting the model must be applied to the test data.
#Eg:
print ("Original: \n")
print (train.Street.value_counts(), "\n")
# +
print("18 \n")
# our model needs numerical data, so we will use one-hot encoding to transform the data into a Boolean column.
# create a new column called enc_street. The pd.get_dummies() method will handle this for us
train['enc_street'] = pd.get_dummies(train.Street, drop_first=True)
test['enc_street'] = pd.get_dummies(test.Street, drop_first=True)
print ('Encoded: \n')
print (train.enc_street.value_counts()) # Pave and Grvl values converted into 1 and 0
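# A toy illustration (added sketch, not part of the original tutorial) of what
# pd.get_dummies(..., drop_first=True) does with a two-level column: the first
# level in sorted order ('Grvl') is dropped, leaving a single 0/1 'Pave' column.
import pandas as pd
street_demo = pd.get_dummies(pd.Series(['Pave', 'Grvl', 'Pave']), drop_first=True)
print(street_demo)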
print("19 \n")
# look at SaleCondition by constructing and plotting a pivot table, as we did above for OverallQual
condition_pivot = train.pivot_table(index='SaleCondition', values='SalePrice', aggfunc=np.median)
condition_pivot.plot(kind='bar', color='blue')
plt.xlabel('Sale Condition')
plt.ylabel('Median Sale Price')
plt.xticks(rotation=0)
plt.show()
# +
# encode this SaleCondition as a new feature by using a similar method that we used for Street above
def encode(x): return 1 if x == 'Partial' else 0
train['enc_condition'] = train.SaleCondition.apply(encode)
test['enc_condition'] = test.SaleCondition.apply(encode)
print("20 \n")
# explore this newly modified feature as a plot.
condition_pivot = train.pivot_table(index='enc_condition', values='SalePrice', aggfunc=np.median)
condition_pivot.plot(kind='bar', color='blue')
plt.xlabel('Encoded Sale Condition')
plt.ylabel('Median Sale Price')
plt.xticks(rotation=0)
plt.show()
# +
######################################################################################################
# Dealing with missing values #
# We'll fill the missing values by interpolating and then assign the results to data            #
# This is a method of interpolation #
######################################################################################################
data = train.select_dtypes(include=[np.number]).interpolate().dropna()
print("21 \n")
# Check that all of the columns have 0 null values.
# sum(data.isnull().sum() != 0)
print(sum(data.isnull().sum() != 0))
print("22 \n")
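# A tiny illustration (added sketch) of what interpolate() does to a numeric
# column: each NaN is replaced by a value interpolated linearly from its neighbours.
import pandas as pd
import numpy as np
toy = pd.Series([1.0, np.nan, 3.0])
print(toy.interpolate().tolist())  # [1.0, 2.0, 3.0]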
# +
######################################################
# 3. Build a linear model ##
######################################################
# separate the features and the target variable for modeling.
# We will assign the features to X and the target variable(Sales Price)to y.
y = np.log(train.SalePrice)
X = data.drop(['SalePrice', 'Id'], axis=1)
# exclude ID from features since Id is just an index with no relationship to SalePrice.
#======= partition the data ===================================================================================================#
# Partitioning the data in this way allows us to evaluate how our model might perform on data that it has never seen before.
# If we train the model on all of the data, it will be difficult to tell if overfitting has taken place.
#==============================================================================================================================#
# also state what percentage of the train data set we want to hold out as the test set
# In this example, about 33% of the data is devoted to the hold-out set.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42, test_size=.33)
# +
#========= Begin modelling =========================#
# Linear Regression Model #
#===================================================#
# ---- first create a Linear Regression model.
# First, we instantiate the model.
lr = linear_model.LinearRegression()
# ---- fit the model / Model fitting
# lr.fit() method will fit the linear regression on the features and target variable that we pass.
model = lr.fit(X_train, y_train)
print("23 \n")
# +
# ---- Evaluate the performance and visualize results
# r-squared value is a measure of how close the data are to the fitted regression line
# a higher r-squared value means a better fit(very close to value 1)
print("R^2 is: \n", model.score(X_test, y_test))
# use the model we have built to make predictions on the test data set.
predictions = model.predict(X_test)
print("24 \n")
# +
print('MSE is: \n', mean_squared_error(y_test, predictions))
print("25 \n")
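# Note (added sketch): mean_squared_error returns the MSE, not the RMSE;
# the RMSE is the square root of the MSE. A tiny numeric check:
import numpy as np
y_true_demo = np.array([3.0, 5.0])
y_pred_demo = np.array([1.0, 5.0])
mse = np.mean((y_true_demo - y_pred_demo) ** 2)  # ((3-1)**2 + 0**2) / 2 = 2.0
rmse = np.sqrt(mse)                              # sqrt(2)
print(mse, rmse)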
# view this relationship between predictions and actual_values graphically with a scatter plot.
actual_values = y_test
plt.scatter(predictions, actual_values, alpha=.75,
color='b') # alpha helps to show overlapping data
plt.xlabel('Predicted Price')
plt.ylabel('Actual Price')
plt.title('Linear Regression Model')
plt.show()
# +
#====== improve the model ================================================================#
# try using Ridge Regularization to decrease the influence of less important features #
#=========================================================================================#
print("26 \n")
# experiment by looping through a few different values of alpha, and see how this changes our results.
for i in range(-2, 3):
    alpha = 10**i
    rm = linear_model.Ridge(alpha=alpha)
    ridge_model = rm.fit(X_train, y_train)
    preds_ridge = ridge_model.predict(X_test)
    plt.scatter(preds_ridge, actual_values, alpha=.75, color='b')
    plt.xlabel('Predicted Price')
    plt.ylabel('Actual Price')
    plt.title('Ridge Regularization with alpha = {}'.format(alpha))
    overlay = 'R^2 is: {}\nMSE is: {}'.format(
        ridge_model.score(X_test, y_test),
        mean_squared_error(y_test, preds_ridge))
    plt.annotate(overlay, xy=(12.1, 10.6), size='x-large')
    plt.show()
# If you examine the plots, you can see that these models perform almost identically to the first model.
# In our case, adjusting the alpha did not substantially improve our model.
print("27 \n")
print("R^2 is: \n", model.score(X_test, y_test))
# +
######################################################
# 4. Make a submission ##
######################################################
# create a csv that contains the predicted SalePrice for each observation in the test.csv dataset.
submission = pd.DataFrame()
# The first column must contain the Id from the test data.
submission['Id'] = test.Id
# select the features from the test data for the model as we did above.
feats = test.select_dtypes(
include=[np.number]).drop(['Id'], axis=1).interpolate()
# generate predictions
predictions = model.predict(feats)
# transform the predictions to the correct form
# apply np.exp() to our predictions because we took the logarithm (np.log()) previously.
final_predictions = np.exp(predictions)
print("28 \n")
# check the difference
print("Original predictions are: \n", predictions[:10], "\n")
print("Final predictions are: \n", final_predictions[:10])
print("29 \n")
# assign these predictions and check
submission['SalePrice'] = final_predictions
# submission.head()
print(submission.head())
# export to a .csv file as Kaggle expects.
# pass index=False because Pandas otherwise would create a new index for us.
submission.to_csv('submission1.csv', index=False)
print("\n Finish")
# -
| Linear_Ridge_Regression .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Taehee-K/Fractured-Bone-Classification/blob/code/code/ResNet16.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="JyBkid0LBEd3" outputId="11cb4fa7-0492-4705-909f-1aaab4369252"
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
# + id="OIs76v2JBFpn"
import os
os.chdir('/content/drive/MyDrive/2021/2021-1/패턴인식과머신러닝/Project')
# + id="tLoGVVbTpFzK"
import warnings
warnings.filterwarnings("ignore")
# + id="cmHQ04B-43N7"
import numpy as np
import pandas as pd
from PIL import Image
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import torch
from torch import nn
from torch import optim
import torch.optim as optim
import torch.nn.functional as F
from torchvision import transforms, models
from torch.autograd import Variable
from torchvision.datasets import ImageFolder
import torch.utils.data as data
from torch.utils.data import DataLoader, TensorDataset
from torchsummary import summary
# + colab={"base_uri": "https://localhost:8080/"} id="ZFMiG52RFBlE" outputId="5843a674-5669-4acd-a667-8cba04b5a8d1"
DEVICE = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
print(DEVICE)
# + [markdown] id="VoqDD9xcEDR7"
# # Split Data
# + id="hueSIG8TEFKf"
import shutil
original_dataset_dir = './Train'  # original train data
classes_list = os.listdir(original_dataset_dir)
base_dir = './Split'  # directory for the train/validation split
os.mkdir(base_dir)
train_dir = os.path.join(base_dir, 'Train')  # train data
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'Val')  # validation data
os.mkdir(validation_dir)
for cls in classes_list:
    os.mkdir(os.path.join(train_dir, cls))
    os.mkdir(os.path.join(validation_dir, cls))
# + colab={"base_uri": "https://localhost:8080/"} id="UFKJllYiE3z9" outputId="5e8141de-ca0f-428f-dff1-dad678c75c68"
## check the number of images per class after splitting the data
import math
for cls in classes_list:
    path = os.path.join(original_dataset_dir, cls)
    fnames = os.listdir(path)
    train_size = math.floor(len(fnames) * 0.8)
    validation_size = math.floor(len(fnames) * 0.2)
    train_fnames = fnames[:train_size]
    print("Train size(", cls, "): ", len(train_fnames))
    for fname in train_fnames:
        src = os.path.join(path, fname)
        dst = os.path.join(os.path.join(train_dir, cls), fname)
        shutil.copyfile(src, dst)
    validation_fnames = fnames[train_size:]
    print("Validation size(", cls, "): ", len(validation_fnames))
    for fname in validation_fnames:
        src = os.path.join(path, fname)
        dst = os.path.join(os.path.join(validation_dir, cls), fname)
        shutil.copyfile(src, dst)
# + colab={"base_uri": "https://localhost:8080/"} id="iH-eGbJJM0pe" outputId="4d4a87ec-3a84-423d-9998-45131c15d2b4"
original_dataset_dir = './Train'  # original train data
classes_list = os.listdir(original_dataset_dir)
print(len(classes_list))
print(classes_list)  # classes to be classified
# + [markdown] id="o17ieEHuFRpP"
# # Load Data
# + [markdown] id="MQf35Wj-r6kN"
# Since there are only 281 training images in total, data augmentation was performed with RandomHorizontalFlip(), which flips each image horizontally with a probability of 50%.
# + id="ZDww2B1iPPED"
BATCH_SIZE = 32
EPOCH = 100
# + id="_SAIBgCobG1t"
data_transforms = {
    'train': transforms.Compose([
        transforms.ToTensor(),
        transforms.CenterCrop(400),         # crop to 400x400
        transforms.RandomHorizontalFlip(),  # horizontal flip with p=0.5
        transforms.Grayscale(num_output_channels=1)]),
    'val': transforms.Compose([
        transforms.ToTensor(),
        transforms.CenterCrop(400),         # crop to 400x400
        transforms.Grayscale(num_output_channels=1)])
}
train_dataset = ImageFolder(root='./Split/Train', transform = data_transforms['train'])
val_dataset = ImageFolder(root='./Split/Val', transform = data_transforms['val'])
train_loader = torch.utils.data.DataLoader(train_dataset,
                                           batch_size=BATCH_SIZE,
                                           shuffle=True,
                                           num_workers=4)
val_loader = torch.utils.data.DataLoader(val_dataset,
                                         batch_size=BATCH_SIZE,
                                         shuffle=True,
                                         num_workers=4)
# + id="GCdpTOxw43OI" colab={"base_uri": "https://localhost:8080/", "height": 873} outputId="81275e2d-1a65-468d-be89-c97a50310373"
# Display images
# Not necessary for training. Just for confirmation
for images, labels in val_loader:
    i, l = Variable(images), Variable(labels)
    print(i.size())
    i = i.numpy()
    l = l.numpy()
    if l[0] == 0:
        print('Label = {} : Normal image'.format(l[0]))
    else:
        print('Label = {} : Fracture image'.format(l[0]))
    plt.imshow(i[0, 0, :, :])
    plt.show()
# + [markdown] id="EMGIFX0vxdtG"
# # ResNet16 Model
# + [markdown] id="0Rz08d-mxiLy"
# To reduce the overfitting seen with ResNet26, the number of layers was reduced and L2 regularization was applied.
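# + [markdown]
# A minimal NumPy sketch (added for illustration; the function name is made up) of
# how an optimizer's `weight_decay` argument implements L2 regularization: the
# penalty gradient `weight_decay * w` is added to each parameter's raw gradient
# before the update, shrinking the weights toward zero.
# +
import numpy as np

def sgd_step_with_l2(w, grad, lr=0.1, weight_decay=1e-2):
    # weight decay adds the L2-penalty gradient (weight_decay * w) to the raw gradient
    return w - lr * (grad + weight_decay * w)

w_demo = np.array([1.0, -2.0])
print(sgd_step_with_l2(w_demo, np.zeros(2)))  # weights shrink even with zero gradient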
# + id="eBTBRwg9wRdp" colab={"base_uri": "https://localhost:8080/"} outputId="20c4fb6d-40dd-4992-ffce-b35fed22d3af"
class BasicBlock(nn.Module):
    def __init__(self, in_planes, planes, stride=1):
        super(BasicBlock, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes,
                               kernel_size=3,
                               stride=stride,
                               padding=1,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes,
                               kernel_size=3,
                               stride=1,
                               padding=1,
                               bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        # Skip-Connection
        self.shortcut = nn.Sequential()
        if stride != 1 or in_planes != planes:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_planes, planes,
                          kernel_size=3,
                          stride=stride,
                          padding=1,
                          bias=False),
                nn.BatchNorm2d(planes))

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out += self.shortcut(x)
        out = F.relu(out)
        return out
class ResNet(nn.Module):
    def __init__(self, n_classes=2):
        super(ResNet, self).__init__()
        self.in_planes = 16
        self.conv1 = nn.Conv2d(1, 16,
                               kernel_size=3,
                               stride=1,
                               padding=1,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(16)
        self.layer1 = self._make_layer(BasicBlock, 16, 2, stride=1)
        self.layer2 = self._make_layer(BasicBlock, 32, 2, stride=2)
        self.layer3 = self._make_layer(BasicBlock, 64, 2, stride=2)
        self.linear = nn.Linear(9216, n_classes)

    def _make_layer(self, block, planes, num_blocks, stride):
        strides = [stride] + [1] * (num_blocks - 1)
        layers = []
        for stride in strides:
            layers.append(block(self.in_planes, planes, stride))
            self.in_planes = planes
        return nn.Sequential(*layers)

    def forward(self, x):
        x = F.relu(self.bn1(self.conv1(x)))
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = F.avg_pool2d(x, 8)
        x = x.view(x.size(0), -1)
        x = self.linear(x)
        return x

model = ResNet().to(DEVICE)  # move the model to the GPU (if available)
model  # print the network
# + colab={"base_uri": "https://localhost:8080/"} id="FakUpM5jxuNJ" outputId="2ae52662-8b12-4023-a7a5-9581efd4ec82"
summary(model, (1, 400, 400))
# + id="6Ekbu99hxyUu"
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.0001, weight_decay=1e-2)
# + [markdown] id="zRwoy3LkumKj"
# * Train Model
# + id="xS__U8M5S0mf"
def train(model, train_loader, optimizer):
    model.train()  # put the model in training mode
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(DEVICE), target.to(DEVICE)  # move data and target to DEVICE
        optimizer.zero_grad()             # reset the optimizer gradients
        output = model(data)              # compute the output for this batch
        loss = criterion(output, target)  # compute the loss with cross entropy
        loss.backward()                   # back-propagate the loss
        optimizer.step()                  # update the parameters
# + [markdown] id="DcbqkfMeuqxL"
# * Evaluate Model
# + id="wy2HuJAkS0oi"
def evaluate(model, test_loader):
    model.eval()   # put the model in evaluation mode
    test_loss = 0  # initialize test_loss
    correct = 0    # initialize the count of correct predictions
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(DEVICE), target.to(DEVICE)  # move data and target to DEVICE
            output = model(data)                           # compute the output
            test_loss += criterion(output, target).item()  # accumulate the loss
            pred = output.max(1, keepdim=True)[1]          # predict the class with the largest output value
            correct += pred.eq(target.view_as(pred)).sum().item()  # count correct predictions
    test_loss /= len(test_loader.dataset)                      # average loss
    test_accuracy = 100. * correct / len(test_loader.dataset)  # test (validation) accuracy
    return test_loss, test_accuracy
# + [markdown] id="IAuI1yuoxseF"
# ## Train Model
# + id="JYODIPjs1vJJ"
import time
import copy
def train_model(model, train_loader, val_loader, optimizer, num_epochs=30):
    acc_t = []; acc_v = []    # lists to store train / validation accuracy
    loss_t = []; loss_v = []  # lists to store train / validation loss
    best_acc = 0.0            # initialize the best accuracy
    best_model_wts = copy.deepcopy(model.state_dict())
    for epoch in range(1, num_epochs + 1):
        since = time.time()  # time the training of this epoch
        train(model, train_loader, optimizer)                  # train on the train data
        train_loss, train_acc = evaluate(model, train_loader)  # compute train_loss and train_acc
        val_loss, val_acc = evaluate(model, val_loader)        # compute val_loss and val_acc
        if val_acc > best_acc:  # update the best accuracy
            best_acc = val_acc
            best_model_wts = copy.deepcopy(model.state_dict())  # keep the weights of the best model
        # store the loss and accuracy
        acc_t.append(train_acc); acc_v.append(val_acc)
        loss_t.append(train_loss); loss_v.append(val_loss)
        # print the results and the elapsed time
        time_elapsed = time.time() - since
        print('-------------- EPOCH {} ----------------'.format(epoch))
        print('Train Loss: {:.4f}, Accuracy: {:.2f}%'.format(train_loss, train_acc))
        print('Val Loss: {:.4f}, Accuracy: {:.2f}%'.format(val_loss, val_acc))
        print('Time: {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
        print()
    # Accuracy Graph
    plt.plot(range(len(acc_t)), acc_t, 'b', range(len(acc_v)), acc_v, 'r')
    blue_patch = mpatches.Patch(color='blue', label='Train Accuracy')
    red_patch = mpatches.Patch(color='red', label='Validation Accuracy')
    plt.legend(handles=[red_patch, blue_patch])
    plt.show()
    # Loss Graph
    plt.plot(range(len(loss_t)), loss_t, 'b', range(len(loss_v)), loss_v, 'r')
    blue_patch = mpatches.Patch(color='blue', label='Train Loss')
    red_patch = mpatches.Patch(color='red', label='Validation Loss')
    plt.legend(handles=[red_patch, blue_patch])
    plt.show()
    model.load_state_dict(best_model_wts)  # load the model with the highest validation accuracy
    return model
# + colab={"base_uri": "https://localhost:8080/"} id="q3Va8B5Ox7rk" outputId="7fdd2cf0-25ce-4ab0-a8e9-a84c6a6ccd5c"
# train the model
model = train_model(model, train_loader, val_loader, optimizer, 50)
# + id="80fpmNMpyCGc"
# Save the weight matrices and bias vectors that will be loaded for testing later
torch.save(model,'ResNet(2)_Model_Taehee_1870027')
# + [markdown] id="935N-yPiyImH"
# ## Evaluate Model
# + id="IBWpxyAhyJwb" colab={"base_uri": "https://localhost:8080/"} outputId="3ef0cc3c-02a6-48af-b0a3-79c01549db53"
model=torch.load('ResNet(2)_Model_Taehee_1870027')
model.eval()
train_loss, train_acc = evaluate(model, train_loader)
val_loss, val_acc = evaluate(model, val_loader)
# print the saved model's train and validation accuracy
print('Train Accuracy: {:.4f}'.format(train_acc), 'Train Loss: {:.4f}'.format(train_loss))
print('Validation Accuracy: {:.4f}'.format(val_acc), 'Validation Loss: {:.4f}'.format(val_loss))
# + id="LfFf8zxLhGqY"
from sklearn.metrics import classification_report
def prediction(model, data_loader):
    model.eval()
    predlist = torch.zeros(0, dtype=torch.long, device='cpu')
    lbllist = torch.zeros(0, dtype=torch.long, device='cpu')
    with torch.no_grad():
        for i, (data, label) in enumerate(data_loader):
            data = data.to(DEVICE)    # move data to DEVICE
            label = label.to(DEVICE)  # move labels to DEVICE
            outputs = model(data)     # predict
            _, preds = torch.max(outputs, 1)  # predict the class with the highest score
            # append the predictions batch by batch
            predlist = torch.cat([predlist, preds.view(-1).cpu()])
            lbllist = torch.cat([lbllist, label.view(-1).cpu()])
    # Classification Report
    print(classification_report(lbllist.numpy(), predlist.numpy()))  # per-class precision, recall, f1-score
    return
# + [markdown] id="1ptWij1FyYqM"
# * Print Classification Report
# + id="Q5wK4vjxyZJb" colab={"base_uri": "https://localhost:8080/"} outputId="1e54ca9f-0f2b-41eb-db2f-c42f129261d5"
prediction(model, val_loader)
# + id="CqyNZvpG0ypp"
| code/ResNet16.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + tags=[]
import sys
import rics
# Print relevant versions
print(f"{rics.__version__=}")
print(f"{sys.version=}")
# !git log --pretty=oneline --abbrev-commit -1
# +
from rics.logutils import basicConfig, logging
basicConfig(level=logging.INFO, rics_level=logging.DEBUG)
# -
# # Benchmarks for `FormatApplier` implementations
DATASET = "name.basics"
# ## Load data
# + tags=[]
# %%time
from data import load_imdb
df, id_columns = load_imdb(dataset=DATASET)
print(f"{id_columns=}")
print(f"{df.shape=}")
# +
import tqdm
tqdm.__version__
# +
from typing import Dict
from rics.translation.offline import DefaultFormatApplier, Format, FormatApplier, TranslationMap
from rics.translation.offline.types import IdType, NameType, PlaceholdersDict, PlaceholdersTuple, TranslatedIds
# + tags=[]
fmt = Format("{id}:{name} (*{birthYear}†{deathYear}) | Profession: {primaryProfession}")
# -
# ## Define the test procedure
# Force reinitialization every time; this is how the translator will usually do it.
def run_translate(clazz, id_key) -> TranslatedIds:
    tmap = TranslationMap(
        {DATASET: df.rename(columns={id_key: "id", "primaryName": "name"})},
        format_applier_type=clazz,  # Prepare data
    )
    return tmap[(DATASET, fmt)]  # Does the actual formatting
# ## Define candidates
import numpy as np
import pandas as pd
# + tags=[]
candidates = [DefaultFormatApplier]
class BasicFormatApplier(FormatApplier):
    def __init__(self, source: NameType, placeholders: PlaceholdersDict) -> None:
        super().__init__(source, placeholders)
        self._placeholders = placeholders

    def _apply(self, fstring: str, placeholders: PlaceholdersTuple) -> TranslatedIds:
        ids = self._placeholders["id"]
        p_list = tuple([self._placeholders[p] for p in placeholders])
        return {idx: fstring.format(*row) for idx, row in zip(ids, zip(*p_list))}

    @property
    def positional(self) -> bool:
        """Positional-flag for the default applicator."""
        return True


class NumpyFormatApplier(FormatApplier):
    def __init__(self, source: NameType, placeholders: PlaceholdersDict) -> None:
        super().__init__(source, placeholders)
        self._ids = placeholders["id"]
        self._arr = np.array([placeholders[placeholder] for placeholder in self.placeholders], dtype="<U64")

    def _apply(self, fstring: str, placeholders: PlaceholdersTuple) -> TranslatedIds:
        placeholder_idx = np.searchsorted(self._placeholder_names, placeholders)
        # This might be where we lose time?
        sliced = self._arr[placeholder_idx].T[:, :]
        return {idx: fstring.format(*row) for idx, row in zip(self._ids, sliced)}

    @property
    def positional(self) -> bool:
        return True


class PandasFormatApplier(FormatApplier):
    def __init__(self, source: NameType, placeholders: PlaceholdersDict) -> None:
        super().__init__(source, placeholders)
        self._df = pd.DataFrame.from_dict(placeholders)
        self._df.index = self._df["id"]
        self._range = range(len(self._df.index))

    def _apply(self, fstring: str, placeholders: PlaceholdersTuple) -> TranslatedIds:
        def func(arg: pd.Series) -> str:
            return fstring.format(*arg)
        return self._df[list(placeholders)].apply(func, raw=True, axis=1)

    @property
    def positional(self) -> bool:
        return True


candidates.extend([BasicFormatApplier, NumpyFormatApplier, PandasFormatApplier])
candidates
# -
# ## Sample output and verification
# + tags=[]
id_key = "int_id_nconst"
reference = run_translate(DefaultFormatApplier, id_key)
for t in reference.values():
    print(f"Total translations: {len(reference)}. Sample translation:\n {t}")
    break

for cand in candidates:
    cand_translations = run_translate(cand, id_key)
    cmp = pd.Series(reference) == pd.Series(cand_translations)
    assert cmp if isinstance(cmp, bool) else cmp.all(), f"Bad candidate: {cand}"
# + [markdown] tags=[]
# ## Run performance comparison
# -
from rics.utils import tname
# + tags=[]
for cand in candidates:
    print(f"{tname(cand)}:")
    for id_key in ["int_id_nconst", "str_id_nconst"]:
        print(f" {id_key=}")
        # %timeit -r 5 -n 5 run_translate(cand, id_key)
    print("=" * 80)
# -
# # Conclusion
# The `BasicFormatApplier` seems to be the best choice *for this use case* (likely because it doesn't copy as much data?). There are certainly better ways to use both Pandas and NumPy, but `BasicFormatApplier` has the added benefits of being easy to understand and requiring no external dependencies.
| jupyterlab/perf-test/rics-package/FormatApplier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/DOOBIDOOBI/LinearAlgebra_2ndSem/blob/main/Assignment6.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="8gpOwO1dBSxU"
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# + [markdown] id="2efqp7jMCGGP"
# # Transposition
# + colab={"base_uri": "https://localhost:8080/"} id="jYU98EzTCJtF" outputId="23faefbc-5baf-4530-a2f5-d0548e24b16c"
A = np.array([
[2,3,4],
[5,3,2],
[7,9,3]
])
A
# + colab={"base_uri": "https://localhost:8080/"} id="Zxs76RB1CUtt" outputId="81d8307f-a53b-439b-d816-12374fd176a6"
A = np.transpose([[2, 3, 4],
[5, 3, 2],
[7, 9, 3]])
A
# + id="gCl7VmePCx4s"
B = np.array([
[9,2,3],
[21,3,6],
[3,4,5]
])
# + [markdown] id="SZ8zBW7jBdKj"
# $$
# A =
# \begin{bmatrix}
# x & y \\
# 4x & 10y
# \end{bmatrix}
# $$
# + [markdown] id="PX7rAQwxFwGF"
# $$
# B=\begin{bmatrix} 1 & 2 & 5 \\5 & -1 & 0 \\ 0 & -3 & 3\end{bmatrix}\\
# $$
# + colab={"base_uri": "https://localhost:8080/"} id="f3mx1l8XC8XX" outputId="78505426-2d98-46d5-9904-3e8fe0908a91"
A @ B
# + colab={"base_uri": "https://localhost:8080/"} id="PZVVT-lyDAeQ" outputId="813f51b7-1511-4872-de96-08d01f49ff0c"
A.dot(B)
# + id="MFZm7gijF-qM"
# + [markdown] id="6HYSadgbF-59"
# $$
# P=\begin{bmatrix} 41 & 22 & 58 \\55 & -12 & 40 \\ 60 & -53 & 23\\23 & 45 & 86\end{bmatrix}\\
# $$
#
# $$
# B=\begin{bmatrix} 1 & 2 & 5 \\5 & -1 & 0 \\ 0 & -3 & 3\end{bmatrix}\\
# $$
# $$
# T=\begin{bmatrix} 12 & 23 & 50\\12 & 45 & 12\\12 & 87 & 45\\ 90 & 123 & 23\end{bmatrix}\\
# $$
# + [markdown] id="UGpDCiCHHWU-"
# 1. $A \cdot B \neq B \cdot A$
# 2. $A \cdot (B \cdot C) = (A \cdot B) \cdot C$
# 3. $A\cdot(B+C) = A\cdot B + A\cdot C$
# 4. $(B+C)\cdot A = B\cdot A + C\cdot A$
# 5. $A\cdot I = A$
# 6. $A\cdot \emptyset = \emptyset$
#
# + id="uEiQchNzHqQ0"
P = np.array([
[23, 65, 78, 45],
[56, 32, 34, 56],
[65, 78, 34, 78],
[54, 87, 89, 99]
])
B = np.array ([
[32, 56, 897, 45],
[10, 54, 65, 45],
[87, 98, 67, 12],
[45, 87, 89, 65]
])
T = np.array ([
[12, 65, 8, 3],
[7, 98, 21, 65],
[4, 56, 35, 45],
[87, 9, 45, 6]
])
# + colab={"base_uri": "https://localhost:8080/"} id="b4RLazhyIO9U" outputId="959d5944-3d3b-4f2f-b45e-384848496a3b"
P @ B
# + colab={"base_uri": "https://localhost:8080/"} id="wJbH1oT7IqWZ" outputId="4d55f5c4-f1a7-4d2d-b374-b86db104925d"
B @ P
# + colab={"base_uri": "https://localhost:8080/"} id="GpjWGaaNIu2x" outputId="1d402837-ae37-4102-a640-f24d9242dcb3"
np.array_equiv(B @ P, P @ B)
# + colab={"base_uri": "https://localhost:8080/"} id="4q72AkdoJCZl" outputId="c78f06c3-dfbe-49d6-87fb-cb9753d00840"
P @ (B @ T)
# + colab={"base_uri": "https://localhost:8080/"} id="TMhEVcJKKhQ7" outputId="19b7f0db-a740-4fb9-d346-0a2f8369c769"
(P@B)@T
# + colab={"base_uri": "https://localhost:8080/"} id="cUpVEg0rKpmC" outputId="3f021a17-f77f-4ef5-ada6-2c77a778f8b0"
np.array_equiv((P @ B) @ T,P @ (B @ T) )
# + colab={"base_uri": "https://localhost:8080/"} id="v88N5ZhqK0_M" outputId="23bf995a-5888-40e2-c48e-dcc181f208f7"
P @ (B + T)
# + colab={"base_uri": "https://localhost:8080/"} id="5mHK5SGuK9_h" outputId="a4efe995-7312-455e-ad4a-a53b992a8f55"
P @ B + P @ T
# + colab={"base_uri": "https://localhost:8080/"} id="-U1y_ElbLDj4" outputId="6f666069-52ff-4a92-d837-8a72c887afd7"
np.array_equiv(P @ (B + T), P @ B + P @ T)
# + colab={"base_uri": "https://localhost:8080/"} id="R4GpJxvYLXJv" outputId="896f2b68-7911-4335-b109-9f75899a2c19"
P.dot(1)
# + colab={"base_uri": "https://localhost:8080/"} id="Ev59Q55-Lc_4" outputId="248c3b36-40cd-442c-c0de-4b99ecd75061"
P
# + colab={"base_uri": "https://localhost:8080/"} id="Hh12aNthLiZw" outputId="2639f9ec-efed-4123-fb54-c9bc6363a3cb"
np.array_equiv(P.dot(1), P)
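# Note (added): P.dot(1) above multiplies P by the *scalar* 1; the matrix
# identity A @ I = A can be checked with the identity matrix np.eye, e.g.:
import numpy as np
M_demo = np.array([[2, 1], [4, 3]])
print(np.array_equiv(M_demo @ np.eye(2), M_demo))  # True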
# + colab={"base_uri": "https://localhost:8080/"} id="6RpBNpTSNKDT" outputId="2705cb8b-132d-485e-eeeb-fbdc10f88cc9"
P.dot(0)
# + colab={"base_uri": "https://localhost:8080/"} id="HjhZvc8aNPUY" outputId="122522e3-9800-4779-d0a2-8d67a2145b7e"
np.array_equiv(P.dot(0), 0)
# + colab={"base_uri": "https://localhost:8080/"} id="qTi42v6lL6y6" outputId="2bc503ee-6582-439c-96ec-4667ca4ec2d9"
K = np.array([
[21,45],
[48,54]
])
np.linalg.det(K)
# + colab={"base_uri": "https://localhost:8080/"} id="HOOWjAsiMBPZ" outputId="a697aaff-bbcd-40c5-ea05-4c220369bf33"
J = np.array ([
[32, 56, 897],
[10, 54, 65],
[87, 98, 67]
])
np.linalg.det(J)
# + colab={"base_uri": "https://localhost:8080/"} id="EiKdIPTjO-f6" outputId="254348c9-447e-4e0d-c810-98d98fdddfbc"
O = np.array([
[21,45,90],
[48,54,9],
[4, 5, 6]
])
np.array(O @ np.linalg.inv(O), dtype=int)
# + colab={"base_uri": "https://localhost:8080/"} id="KKgnNZTOPQ86" outputId="48499782-27a9-49f2-fee5-d0d44631589d"
Q = np.array([
[2, 1, 5, 4],
[48,54, 9, 5],
[45, 98, 4, 3],
[87, 5, 4, 6]
])
np.array(Q @ np.linalg.inv(Q),dtype=int)
| Assignment6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.4 64-bit (''base'': conda)'
# name: python374jvsc74a57bd0b5552c8101d78e0c29410aff1830fecb0f002c740777f43a5549bce50a029a97
# ---
# + [markdown] id="sQ8jh5aiy8zo"
# # HOME ASSIGNMENT #6: CLOUD FUNCTION & STREAMLIT
#
# **Purpose of this assignment**
# > * [Optional] Deploy a Cloud Function
# > * Build a data app with Streamlit
# > * Manipulate data with Pandas
# > * Data Visualization
#
# **Skills applied**
# * Slack API, JSON to DataFrame
# * GCP Cloud Function
# * Streamlit
# * Python Pandas
# * Python Data Visualization
#
# **Advice**
# * Take some time to review and connect what you have learned so far
# * Review Assignments 1-5 of at least 2 other students
# -
# !ls ..
# # TODO 1: Python Data Viz
# Complete the exercise sets on [Kaggle Data Visualization](https://www.kaggle.com/learn/data-visualization), if you did not already finish them in [Assignment 5](https://github.com/anhdanggit/atom-assignments/blob/main/assignment_5/home_assignment_5.ipynb)
# +
# Copy the following Kaggle links:
## 1. Link to your Kaggle account -----> https://www.kaggle.com/danhpcv
## 2. Links to the exercises
## Pandas 1: --->https://www.kaggle.com/danhpcv/exercise-creating-reading-and-writing
## Pandas 2: --->https://www.kaggle.com/danhpcv/exercise-indexing-selecting-assigning
## Pandas 3: --->https://www.kaggle.com/danhpcv/exercise-summary-functions-and-maps
## Pandas 4: --->https://www.kaggle.com/danhpcv/exercise-grouping-and-sorting
## Pandas 5: --->https://www.kaggle.com/danhpcv/exercise-data-types-and-missing-values
## Pandas 6: --->https://www.kaggle.com/danhpcv/exercise-renaming-and-combining
# + [markdown] id="t28PUQoNzy1k"
# # TODO 2 (OPTIONAL): DEPLOY GOOGLE CLOUD FUNCTION
# * Follow the Week 6 lab: [HERE](https://anhdang.gitbook.io/datacracy/atom/6-cloud-function-and-streamlit/6.2-lab-cloud-function-hands-on)
# * Double-click the Markdown cells below to answer the questions ([Markdown Cheatsheet](https://guides.github.com/features/mastering-markdown/))
# -
# ## Screenshot Cloud Function on GCP
# > *Copy the screenshot into the img folder of the repo, and update the link below*
#
# 
# ## Screenshot Cloud Function Testing on GCP
# > *Copy the screenshot into the img folder of the repo, and update the link below*
#
# 
# + [markdown] id="2QUVZlLm00PG"
# ## Screenshot Cloud Function Call on Postman
# > *Copy the screenshot into the img folder of the repo, and update the link below*
#
# 
# -
# + [markdown] id="u5c_Lx9MyzSF"
# ## Errors encountered along the way
# *List below the errors you ran into and how you resolved them*
#
# 1.
# 2.
# 3.
# -
# + [markdown] id="bT_ziqVJ1COI"
# # TODO 3: UNDERSTAND & DIAGRAM THE STREAMLIT CODE
# I often advise people who are new to coding:
#
# > Code with a pencil and a sheet of paper
#
# How?
#
# 1. First, picture it in your head: what you start from (`inputs`) and what you produce (`output`)
# 2. Then, to get from inputs to outputs, work out which steps you need (the `functions`)
#
# Drawing such a diagram helps you:
# * See the big picture without getting bogged down in details and syntax
# * Get a clearer view of the flow
# * Optimize the flow first, so the coding afterwards goes more smoothly
# * Debug much more effectively later on
#
# Refer to the following diagram of [streamlit/data_glimpse.py](https://github.com/anhdanggit/atom-assignments/blob/main/streamlit/data_glimpse.py) and draw a diagram, as you understand it, for [streamlit/datacracy_slack.py](https://github.com/anhdanggit/atom-assignments/blob/main/streamlit/datacracy_slack.py)
# -
# ## Diagram Data Glimpse Apps
# > Below is an example diagram of the app [streamlit/data_glimpse.py](https://github.com/anhdanggit/atom-assignments/blob/main/streamlit/data_glimpse.py)
#
# 
# ## DataCracy Slack
# > An app that summarizes the submission, review, and discussion history of DataCracy learners
# 
# ## Diagram DataCracy Slack Apps
# * See the code of the app [streamlit/datacracy_slack.py](https://github.com/anhdanggit/atom-assignments/blob/main/streamlit/datacracy_slack.py)
# > *Copy the diagram you drew into the img folder of the repo, and update the link below*
#
# 
# ## Explanation
# See the code of the app [streamlit/datacracy_slack.py](https://github.com/anhdanggit/atom-assignments/blob/main/streamlit/datacracy_slack.py):
#
# 1. For each function (step) in your diagram, explain what the function does.
# 2. List the logic applied to process the data.
## YOUR ANSWER
'''
1. Function process_msg_df():
Input: user_df, channel_df, msg_df
Process:
- Split the two values in the reply_user column into two columns, reply_user_1 and reply_user_2
- Merge user_df with msg_df to get the names of the submitting learner, reply_1, and reply_2
- Merge channel_df with msg_df to get the channel name
- Reformat the columns holding datetime values
Output: process_msg_df with the combined information from user_df, channel_df, and msg_df
2. Logic applied:
- First, the data tables are loaded into the app.
- Next, when a user opens the app and enters their id, the if valid_user_id check verifies whether it is a user_id in the user_df table => if not, it prints a "not a valid user_id" message
- If it is a valid user_id, the following statements collect that user's submissions on the atom submission channels.
- After that, the code checks which other users' assignments this user_id has reviewed.
- Finally, it shows the user_id's messages in the discussion_group channels.
'''
# + id="UccXW_FH4Glg"
# -
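# A minimal pandas sketch of the reply-splitting and merge logic described in the answer above (the toy data and column names here are assumptions, not the real DataCracy schema):

```python
import pandas as pd

# Toy stand-ins for the real tables
user_df = pd.DataFrame({'user_id': ['U1', 'U2', 'U3'],
                        'name': ['An', 'Binh', 'Chi']})
msg_df = pd.DataFrame({'user_id': ['U1', 'U2'],
                       'reply_users': [['U2', 'U3'], ['U1', None]]})

# Split the two values in reply_users into two separate columns
msg_df[['reply_user_1', 'reply_user_2']] = pd.DataFrame(
    msg_df['reply_users'].tolist(), index=msg_df.index)

# Merge to attach the submitting learner's name
merged = msg_df.merge(user_df, on='user_id', how='left')
print(merged[['user_id', 'name', 'reply_user_1', 'reply_user_2']])
```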
# # TODO 4: VISUALIZATION ON STREAMLIT
# Apply what you learned in *TODO 1* plus Pandas to perform the following tasks:
#
# 1. Aggregate the following metrics for all learners:
# * Number of assignments submitted
# * % of submissions reviewed
# * Word count of their discussion messages
# * Extract the weekday of the submission date
# * Extract the hour of day of the submission
#
# 2. Plot the distribution (Distribution - [Kaggle Tutorial](https://www.kaggle.com/alexisbcook/distributions)) of the metrics above and add the charts to the Streamlit app
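# A hedged sketch of the per-learner aggregation described above (the toy data and column names are assumptions, not the real Slack export):

```python
import pandas as pd

# Toy submission log; the real app would build this from the Slack data
df = pd.DataFrame({
    'user_id': ['U1', 'U1', 'U2'],
    'created_at': pd.to_datetime(['2021-05-03 09:15:00',
                                  '2021-05-10 21:40:00',
                                  '2021-05-04 08:05:00']),
    'reply_count': [1, 0, 2],  # replies received on each submission
})

# Per-learner metrics: submissions and % of submissions that got reviewed
summary = df.groupby('user_id').agg(
    n_submitted=('created_at', 'count'),
    pct_reviewed=('reply_count', lambda s: 100 * (s > 0).mean()),
)
# Weekday and hour of the submission timestamp, for the distribution plots
df['weekday'] = df['created_at'].dt.day_name()
df['hour'] = df['created_at'].dt.hour
print(summary)
print(df[['user_id', 'weekday', 'hour']])
```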
| assignment_6/home_assignment_6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# # Data Preprocessing
# +
import numpy as np
import pandas as pd
import os
filepath = '/Users/mac/Desktop/Kaggle_datasets/MNIST'
filename01 = 'train.csv'
filename02 = 'test.csv'
df_Train = pd.read_csv(os.path.join(filepath, filename01))
df_Test = pd.read_csv(os.path.join(filepath, filename02))
# -
# First, look at the shapes
df_Train.shape
df_Test.shape
#take a look at the column layout
df_Train.head()
#the features are simply the raw pixel columns
train_feature = df_Train.values[:,1:]
test_feature = df_Test.values #the test set has no label column, so nothing to strip
train_feature.shape
test_feature.shape
np.max(train_feature) #looks like the features have not been normalized
#so normalize them
train_feature = train_feature/255
test_feature = test_feature/255
#reshape to fit the CNN model; feature processing ends here
train_feature_4D = train_feature.reshape(-1,28,28,1)
test_feature_4D = test_feature.reshape(-1,28,28,1)
#labels need one-hot encoding; label processing ends here
import keras
train_label = keras.utils.to_categorical(df_Train.values[:,0], num_classes=10)
train_label
# # Run the model, starting with an ordinary sequential CNN
# +
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
import matplotlib.pyplot as plt
def show_train_history(train_history,train,validation):
plt.plot(train_history.history[train])
plt.plot(train_history.history[validation])
plt.title('Train History')
plt.ylabel(train)
plt.xlabel('Epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
model = Sequential()
# input: 28x28 images with 1 channels -> (28, 28, 1) tensors.
# this applies 16 convolution filters of size 3x3 each.
model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(Conv2D(16, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2))) #take the max over each 2x2 window
model.add(Dropout(0.25))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu')) #flattened input feeds a fully connected layer
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax')) #output one of the ten classes 0-9
print(model.summary())
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # metrics=['accuracy'] is needed so 'acc'/'val_acc' appear in the history
train_history = model.fit(train_feature_4D, train_label, batch_size=200, validation_split=0.2, epochs=20)
######################### Visualize the training history
show_train_history(train_history,'acc','val_acc')
show_train_history(train_history,'loss','val_loss')
#save the trained weights
model.save_weights("Savemodel/MNIST(Kaggles)_SimpleCNN.h5")
print('model saved to disk')
# -
prediction = model.predict(test_feature_4D)
prediction[0]
#use np.argmax to map the softmax output back to a digit
np.argmax(prediction[0])
len(prediction)
ans = []
for i in range(len(prediction)):
ans.append(np.argmax(prediction[i]))
df_ans = pd.DataFrame(ans)
df_ans.to_csv('MNIST_ans.csv')
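# The conversion loop above can be replaced by a single vectorized argmax call; a small sketch:

```python
import numpy as np

# Toy softmax-like outputs, one row per sample
prediction = np.array([[0.1, 0.7, 0.2],
                       [0.8, 0.1, 0.1]])
# Class index with the highest score in each row, no Python loop needed
ans = prediction.argmax(axis=1)
print(ans)  # [1 0]
```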
# # Applying what was just learned: a CNN with the Functional API
# +
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
import matplotlib.pyplot as plt
def show_train_history(train_history,train,validation):
plt.plot(train_history.history[train])
plt.plot(train_history.history[validation])
plt.title('Train History')
plt.ylabel(train)
plt.xlabel('Epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
from keras.models import Model
from keras.layers import Input
# First, define the vision modules
digit_input = Input(shape=(28, 28, 1)) # Input takes the per-sample shape; the batch dimension is implicit
x = Conv2D(64, (3, 3))(digit_input)
x = Conv2D(64, (3, 3))(x)
x = MaxPooling2D((2, 2))(x)
out = Flatten()(x)
vision_model = Model(digit_input, out)
# Then define the tell-digits-apart model
digit_a = Input(shape=(28, 28, 1))
digit_b = Input(shape=(28, 28, 1))
# The vision model will be shared, weights and all
out_a = vision_model(digit_a)
out_b = vision_model(digit_b)
concatenated = keras.layers.concatenate([out_a, out_b])
out = Dense(1, activation='sigmoid')(concatenated)
classification_model = Model([digit_a, digit_b], out)
print(classification_model.summary())
# NOTE: the compile/fit below still train the earlier Sequential `model`;
# training classification_model itself would require paired digit images and binary labels.
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
train_history = model.fit(train_feature_4D, train_label, batch_size=200, validation_split=0.2, epochs=10)
######################### Visualize the training history
show_train_history(train_history,'acc','val_acc')
show_train_history(train_history,'loss','val_loss')
#save the trained weights
model.save_weights("Savemodel/MNIST(Kaggles)_ComplexCNN.h5")
print('model saved to disk')
| Keras_MNIST(Kaggle)_CNN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import frccontrol as frccnt
import matplotlib.pyplot as plt
import numpy as np
import control
import numpy
from frc3223_azurite.data import read_csv
from frc3223_azurite.conversions import *
class Drivetrain(frccnt.System):
def __init__(self, dt):
"""Drivetrain subsystem.
Keyword arguments:
dt -- time between model/controller updates
"""
state_labels = [
("Left position", "m"),
("Left velocity", "m/s"),
("Right position", "m"),
("Right velocity", "m/s"),
]
u_labels = [("Left voltage", "V"), ("Right voltage", "V")]
self.set_plot_labels(state_labels, u_labels)
self.in_low_gear = True
# Number of motors per side
self.num_motors = 2.0
# Drivetrain mass in kg
self.m = lbs_to_kg(110)
# Radius of wheels in meters
self.r = inch_to_meter(3)
# Radius of robot in meters
self.rb = inch_to_meter(23)
# Moment of inertia of the drivetrain in kg-m^2
self.J = 3.0
self.Gl = self.Gr = 10.75
self.model = frccnt.models.drivetrain(
frccnt.models.MOTOR_CIM,
self.num_motors,
self.m,
self.r,
self.rb,
self.J,
self.Gl,
self.Gr,
)
A=numpy.array([
[ 0. , 1. , 0. , 0. ],
[ 0. , -18.70879036, 0. , 15.90601316],
[ 0. , 0. , 0. , 1. ],
[ 0. , 15.90601316, 0. , -18.70879036]])
B=numpy.array([
[ 0. , 0. ],
[ 6.61359246, -5.33282046],
[ 0. , 0. ],
[-5.33282046, 6.61359246]])
C=numpy.array([[1, 0, 0, 0],
[0, 0, 1, 0]])
D=numpy.zeros(shape=(2,2))
#self.model = control.ss(A, B, C, D)
u_min = np.matrix([[-12.0], [-12.0]])
u_max = np.matrix([[12.0], [12.0]])
frccnt.System.__init__(self, self.model, u_min, u_max, dt)
q_pos = 0.12
q_vel = 1.0
q = [q_pos, q_vel, q_pos, q_vel]
r = [12.0, 12.0]
self.design_dlqr_controller(q, r)
qff_pos = 0.005
qff_vel = 1.0
self.design_two_state_feedforward(
[qff_pos, qff_vel, qff_pos, qff_vel], [12.0, 12.0]
)
q_pos = 0.05
q_vel = 1.0
q_voltage = 9.0
q_encoder_uncertainty = 2.0
r_pos = 0.0001
r_gyro = 0.000001
self.design_kalman_filter([q_pos, q_vel, q_pos, q_vel], [r_pos, r_pos])
dt=0.02
drivetrain = Drivetrain(dt=dt)
try:
import slycot
plt.figure(1)
drivetrain.plot_pzmaps()
except ImportError: # Slycot unavailable. Can't show pzmaps.
pass
t, xprof, vprof, aprof = frccnt.generate_s_curve_profile(
max_v=2.5, max_a=3.5, time_to_max_a=1.0, dt=dt, goal=50.0
)
refs = []
for i in range(len(t)):
r = np.matrix([[xprof[i]], [vprof[i]], [xprof[i]], [vprof[i]]])
refs.append(r)
plt.figure(2)
state_rec, ref_rec, u_rec = drivetrain.generate_time_responses(t, refs)
drivetrain.plot_time_responses(t, state_rec, ref_rec, u_rec)
plt.show()
data = read_csv("drivetrain10.csv")
s = 5
e = len(data["time"])
ts = data["time"][s:e]
bus_voltages = data["voltage"][s:e]
percentVLeft = data["VRL"][s:e]
percentVRight = -data["VPR"][s:e]
voltages_l = (bus_voltages * percentVLeft)
voltages_r = bus_voltages * percentVRight
encPosL = data["enc_pos_l"][s:e]
encPosR = -data["enc_pos_r"][s:e]
vsl=data["enc_vel_l"][s:e]
vsr=-data["enc_vel_r"][s:e]
sim_encpos_l = numpy.zeros(shape=ts.shape)
sim_encpos_r = numpy.zeros(shape=ts.shape)
sim_encvel_l = numpy.zeros(shape=ts.shape)
sim_encvel_r = numpy.zeros(shape=ts.shape)
print(vsl[0], vsr[0], encPosL[0], encPosR[0])
drivetrain.reset()
# 630 and 3.28 appear to be encoder-ticks-per-foot and feet-per-meter conversion factors
drivetrain.x[0] = encPosL[0] / 630. / 3.28
drivetrain.x[1] = vsl[0] / 630 / 3.28 * 10
drivetrain.x[2] = encPosR[0] / 630. / 3.28
drivetrain.x[3] = vsr[0] / 630 / 3.28 * 10
print(drivetrain.x)
for i, t in enumerate(ts):
sim_encpos_l[i] = drivetrain.x[0] * 630 * 3.28
sim_encpos_r[i] = drivetrain.x[2] * 630 * 3.28
sim_encvel_l[i] = drivetrain.x[1] * 630 * 3.28 / 10
sim_encvel_r[i] = drivetrain.x[3] * 630 * 3.28 / 10
u = numpy.matrix([
[voltages_l[i],],
[voltages_r[i],],
])
drivetrain.x = drivetrain.sysd.A * drivetrain.x + drivetrain.sysd.B * u
drivetrain.y = drivetrain.sysd.C * drivetrain.x + drivetrain.sysd.D * u
plt.plot(ts, voltages_r)
plt.plot(ts, voltages_l)
plt.legend(["right voltage", "left voltage"])
plt.show()
plt.plot(ts, encPosL, color="green")
plt.plot(ts, sim_encpos_l, color="orange")
plt.ylabel("left position")
plt.legend(["actual", "simulated"])
plt.show()
plt.plot(ts, encPosR, color="green")
plt.plot(ts, sim_encpos_r, color="orange")
plt.ylabel("right position")
plt.legend(["actual", "simulated"])
plt.show()
plt.plot(ts, vsl, color="green")
plt.plot(ts, sim_encvel_l, color="orange")
plt.legend(["actual", "simulated"])
plt.ylabel("left velocity")
plt.show()
plt.plot(ts, vsr, color="green")
plt.plot(ts, sim_encvel_r, color="orange")
plt.legend(["actual", "simulated"])
plt.ylabel("right velocity")
plt.show()
# +
for a in ["A", "B","C","D"]:
print (a+" = ", repr(getattr(drivetrain.sysd, a)))
print("")
for a in ["K", "Kff", "L"]:
print(a+" = ", repr(getattr(drivetrain, a)))
print("")
ainv = numpy.linalg.inv(drivetrain.sysd.A)
print("Ainv = ", repr(ainv))
print("A continuous: ", repr(drivetrain.sysc.A))
print("")
print("B continuous: ", repr(drivetrain.sysc.B))
# -
dir(drivetrain)
frccnt.models.cnt.ss.__doc__
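# The simulation loop above applies the discrete state-space update x[k+1] = A_d x[k] + B_d u[k] at each timestep; a standalone sketch with plain numpy arrays (the matrices here are made up for illustration, not the drivetrain's):

```python
import numpy as np

# Simple discretized double-integrator with dt = 0.02 s (illustrative values)
A_d = np.array([[1.0, 0.02],
                [0.0, 1.0]])
B_d = np.array([[0.0],
                [0.02]])
x = np.array([[0.0],    # position
              [1.0]])   # velocity
u = np.array([[6.0]])   # input voltage

# One step of the discrete update, using @ instead of the deprecated np.matrix *
x_next = A_d @ x + B_d @ u
print(x_next)  # position 0.02, velocity 1.12
```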
| state-space.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R (SageMath)
# language: r
# metadata:
# cocalc:
# description: SageMath's R language environment
# priority: 0
# url: https://www.sagemath.org/
# name: ir-sage
# ---
# Which type of marble is faster in completing the marble run: the smaller or the larger ones? Or is there no significant difference? State the null hypothesis below.
#
# Recall that we discussed the difference between directional and non-directional alternative hypotheses in class. For this analysis, we are going to use a non-directional alternative hypothesis. State the alternative hypothesis below.
#
# A t-test is used to compare the means of two samples to see if there is a significant difference between them. In order to perform a t-test, both samples need to:
# - be continuous (not discrete or categorical)
# - be independent of each other
# - be from simple random samples
# - be normally distributed (or close to normally distributed)
# - have the same number of data points (or close)
#   (if not, a t-test can still be done if the variances are similar, but that is beyond the scope of this lesson)
# - have a large enough number of data points (generally 30 or more is considered acceptable)
#
#
# Here is some code you will use today. You will change "yourdata" to an appropriate name.
# yourdata<-c(1,2,3) will create a list of the numbers 1, 2, 3 named "yourdata"
# median(yourdata) will show the median of the data.
# mean(yourdata) will show the mean of the data.
# hist(yourdata) will create a histogram of the data.
# length(yourdata) will show how many entries you have in the list of data.
# t.test(yourdata1,yourdata2) will run a t-test comparing the means of the two samples.
# Today, you are going to build a marble run and time a number of small marbles while they complete the course.
# Record their times as a list of numbers.
smallmarbles<-c(1,2,3...)
# In order to perform a t-test, your data must be approximately normally distributed. Is it? What does the histogram look like? How close are the mean and median? Add more data if needed.
# Now time the same number of large marbles while they complete the course. Record their times as a list of numbers.
# As before, show that this data is also approximately normally distributed. What does the histogram look like? How close are the mean and median?
# In order to perform a t-test, the sizes of both samples need to be close to equivalent. Are they?
# Run the t-test. What are the means for each sample?
# What is the p-value? What does the p-value mean? Which hypothesis does it support?
# Do you know for certain that this hypothesis is correct? Explain.
# Plan another similar experiment that we could realistically run in class or on campus in the coming weeks. Explain your experiment below.
| Marbles t-test/Marbles t.test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# In this notebook, I'll leverage the knowledge learned in cp5 to explore my capstone project dataset.
# Since my dataset comes from several different sources, I'll explore the various datasets one by one,
# sampling only one file from each dataset source.
# -
import pandas as pd
import json
import matplotlib.pyplot as plt
import requests
from bs4 import BeautifulSoup
from datetime import datetime
import xml.etree.ElementTree as ET
# +
# some local env
# main dataset path
data_parent_path = './data/'
# ag_news
data_agnews = data_parent_path + 'ag_news_csv/train.csv'
# nyt article comments
data_nytcomments = data_parent_path + 'nyt_comments/CommentsApril2017.csv'
data_nytarticles = data_parent_path + 'nyt_comments/ArticlesApril2017.csv'
# one_week_newsfeed
data_oneweeknews = data_parent_path + 'one_week_newsfeeds/news-week-17aug24.csv'
# sogou news
data_sogou_news = data_parent_path + 'sogou_news_csv/train.csv'
# reuters
data_reuters = data_parent_path + 'reuters21578/reut2-000.sgm'
# social medial news
data_sns = data_parent_path + 'sns_dataset/News_Final.csv'
# twitter, 1,600,000
data_tweets = data_parent_path + 'training.1600000.processed.noemoticon.csv'
# -
# ### 1 - ag_news:
# ag_news contains 120,000 rows and 3 columns: class, title, and content.
# Every column is well organized, without any missing data.
# ag_news
ag_news_df = pd.read_csv(data_agnews, header=None)
ag_news_df.head()
ag_news_df.columns
ag_news_df.columns = ['category', 'title', 'content']
ag_news_df.head(10)
ag_news_df.tail()
ag_news_df.info()
ag_news_df.describe()
ag_news_df.title.iloc[1]
ag_news_df.content.iloc[1]
# ### 2. nyt article comments
# * there are two different CSV sets: nyt-articles and nyt-comments
# * in nyt-articles, there are a total of 16 columns and 886 rows in each CSV file
# * in nyt-articles, unfortunately, there is no actual article content; instead it provides a link to the real article.
# * I define a function to extract the real content from the NYT website. It works
#
# * in the comments CSV, there are 243,832 entries and 34 columns in total
# * I can join these two tables on articleID
# * commentBody is the comment column
nyt_comments_df = pd.read_csv(data_nytcomments)
nyt_articles_df = pd.read_csv(data_nytarticles)
nyt_articles_df.head()
nyt_articles_df.columns
nyt_articles_df.info()
nyt_articles_df.describe()
# +
# since there is no real content, I have to extract it from the link.
# I'll define a function that takes webURL as the input parameter and builds a content column.
def nyt_get_content(url):
r = requests.get(url)
soup = BeautifulSoup(r.text, 'html.parser') # specify a parser to avoid the bs4 warning
lst = []
for para in soup.select('.css-exrw3m'):
lst.append(para.text) # collect whole paragraphs rather than extending char by char
return ''.join(lst)
# -
# it takes quite a long time to run over the whole df
# nyt_articles_df['content'] = nyt_articles_df.webURL.apply(nyt_get_content)
test = nyt_articles_df.webURL.iloc[0:2].apply(nyt_get_content)
print(test)
nyt_articles_df.info()
nyt_comments_df.head()
nyt_comments_df.describe()
nyt_comments_df.columns
nyt_comments_df.info()
print(nyt_comments_df.commentBody.iloc[2])
nyt_comments_df.articleID.iloc[0]
nyt_articles_df.articleID.iloc[0]
# ### 3 one_week_newsfeeds
# * I have two weekly CSVs: one from 2017-Aug-24, the other from 2018-Aug-24
# * The 2017-Aug-24 CSV has 1,398,431 entries
# * almost no missing data, except headline_text
# * But when I check the content, I find there is no article body, only a source_url. Visiting some of the URLs, I find many feeds in German, and some of them return a 404 error code.
# * There is also no consistent pattern for extracting the content.
# * I don't know how to use this data source.
data_oneweeknews_df = pd.read_csv(data_oneweeknews)
data_oneweeknews_df.info()
data_oneweeknews_df.head()
data_oneweeknews_df.describe()
url = data_oneweeknews_df.source_url
print(url.values)
# ### 4 Sogou News
# * This dataset is really big, about 1.27 GB for the train data, with 449,999 entries
# * The text is romanized as Chinese Pinyin; how can I use that?
# * No column names, so I need to add them myself.
# * Can this kind of dataset be mixed with other languages when training my model?
data_sogou_news_df = pd.read_csv(data_sogou_news, header=None)
data_sogou_news_df.columns = ['class', 'title', 'content']
data_sogou_news_df.info()
data_sogou_news_df.head()
data_sogou_news_df.content.iloc[0]
data_sogou_news_df.title.iloc[0]
# ### 5 sns_dataset
# * 93,239 rows in total.
# * News_Final.csv contains the social-media news content.
# * the other CSV files only contain social feedback data
data_sns_df = pd.read_csv(data_sns)
data_sns_df.head()
data_sns_df.info()
data_sns_df.columns
data_sns_df.Headline.iloc[2]
data_sns_df.Title.iloc[2]
# ### 6 training.1600000.processed.noemoticon
# * 1,600,000 rows and 6 columns in total
# * the target column holds the sentiment class (0 = negative, 2 = neutral, 4 = positive); half of the dataset is negative and the other half positive, with no neutral examples
# * the dataset is from April, May, and June 2009
data_tweets_df = pd.read_csv(data_tweets, encoding='ISO-8859-1', header=None)
data_tweets_df.columns = ['target', 'ids', 'date', 'flag', 'user', 'text']
data_tweets_df.head()
data_tweets_df.target.value_counts()
data_tweets_df.info()
data_tweets_df.tail()
# +
month_map = {'Jan': '01', 'Feb': '02', 'Mar': '03', 'Apr': '04', 'May': '05', 'Jun': '06',
'Jul': '07', 'Aug': '08', 'Sep': '09', 'Oct': '10', 'Nov': '11', 'Dec': '12'
} # all values as strings; mixing in ints for Oct-Dec would break the string formatting below
def parse_clf_time(text):
m = month_map[text[4:7]]
d = text[8:10]
y = text[-4:]
hh = text[11:13]
mm = text[14:16]
ss = text[17:19]
time_str = '{0:s}-{1:s}-{2:s} {3:s}:{4:s}:{5:s}'.format(y, m, d, hh, mm, ss)
return datetime.strptime(time_str, '%Y-%m-%d %H:%M:%S')
# -
test = parse_clf_time('Tue Jun 16 08:40:50 PDT 2009')
print(test)
data_tweets_df['timestamp'] = (data_tweets_df.date.apply(parse_clf_time))
data_tweets_df.info()
data_tweets_df.timestamp.apply(datetime.date).value_counts()
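# The manual slicing in parse_clf_time can also be done with strptime, matching the literal "PDT" in the format string (this assumes every row carries the PDT timezone, as the sample above does):

```python
from datetime import datetime

# Literal text like "PDT" can be embedded directly in the format string
ts = datetime.strptime('Tue Jun 16 08:40:50 PDT 2009',
                       '%a %b %d %H:%M:%S PDT %Y')
print(ts)  # 2009-06-16 08:40:50
```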
# ### 7 reuters
# *
# +
lst = []
with open(data_reuters, 'r') as reuter_file:
lines = reuter_file.read()
xml = BeautifulSoup(lines, 'html.parser') # specify a parser to avoid the bs4 warning
bodys = xml.findAll('text')
for body in bodys:
title = body.find('title').text if body.find('title') else ''
date = body.find('dateline').text if body.find('dateline') else ''
text = body.text
lst.append([title, date, text])
data_reuters_df = pd.DataFrame(lst, columns=['title', 'date', 'content'])
data_reuters_df
# -
data_reuters_df.info()
| notebooks/data_wrangling/capstone_cp5_data-exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %reload_ext autoreload
# %autoreload 2
# %matplotlib inline
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
os.environ["CUDA_VISIBLE_DEVICES"]="0";
import urllib.request
import pandas as pd
import numpy as np
pd.set_option('display.max_columns', None)
# -
import ktrain
from ktrain import tabular
# # Predicting House Prices
#
# In this notebook, we will predict the prices of houses from various house attributes. The dataset [can be downloaded from Kaggle here](https://www.kaggle.com/c/house-prices-advanced-regression-techniques).
# ## STEP 1: Load and Preprocess Data
train_df = pd.read_csv('data/housing_price/train.csv', index_col=0)
train_df.head()
train_df.drop(['Alley','PoolQC','MiscFeature','Fence','FireplaceQu','Utilities'], axis=1, inplace=True)
train_df.head()
trn, val, preproc = tabular.tabular_from_df(train_df, is_regression=True,
label_columns='SalePrice', random_state=42)
# ## STEP 2: Create Model and Wrap in `Learner`
model = tabular.tabular_regression_model('mlp', trn)
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=128)
# ## STEP 3: Estimate LR
#
learner.lr_find(show_plot=True, max_epochs=16)
# ## STEP 4: Train
learner.autofit(1e-1)
# ## Evaluate Model
learner.evaluate(test_data=val)
| examples/tabular/HousePricePrediction-MLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
from distributed import Executor, hdfs, progress, wait, s3
e = Executor('localhost:8786')
e
df = s3.read_csv('s3://blaze-data/gdelt/csv/201401*.export.csv', sep='\t', header=None)
df = e.persist(df)
progress(df)
df.head(5)
gts = df[[1, 26, 0, 51, 3, 53, 54]]
gts.columns = ['Date', 'Code', 'ID', 'Country', 'Year', 'Latitude', 'Longitude']
gts.head()
gts = gts[gts['Year'] == 2014]
gts.head()
event_codes = [211, 231, 311, 331, 61, 71]
gts = gts[gts['Code'].isin(event_codes)]
gts.head()
gts = gts[gts['Country'] == 'US']
gts.head()
import numpy as np
lat = np.array(gts.Latitude)
lon = np.array(gts.Longitude)
from bokeh.io import output_notebook, output_file, show
from bokeh.models import (
GMapPlot, GMapOptions, ColumnDataSource, Circle, DataRange1d, PanTool, WheelZoomTool, BoxSelectTool
)
# +
map_options = GMapOptions(lat=30.29, lng=-97.73, map_type="roadmap", zoom=11)
plot = GMapPlot(
x_range=DataRange1d(), y_range=DataRange1d(), map_options=map_options, title="Austin"
)
# -
source = ColumnDataSource(
data=dict(
lat=lat,
lon=lon,
)
)
circle = Circle(x="lon", y="lat", size=15, fill_color="blue", fill_alpha=0.8, line_color=None)
plot.add_glyph(source, circle)
# output_file('map_plot.html')
output_notebook()
plot.add_tools(PanTool(), WheelZoomTool(), BoxSelectTool())
show(plot)
| gdelt/gdelt-distributed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''a2j'': conda)'
# metadata:
# interpreter:
# hash: 67d8fd462c468bc89bc31b57261686658acdf7872194a373c635345be9bec38b
# name: python3
# ---
import model as model
import math
import anchor as anchor
import random
import torch
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
from hands2017 import my_dataloader as dataloader
from hands2017 import testingImageDir, center_test, test_lefttop_pixel, test_rightbottom_pixel, keypointsUVD_test, keypointsNumber
from hands2017 import trainingImageDir, center_train, train_lefttop_pixel, train_rightbottom_pixel, keypointsUVD_train
from hands2017 import cropWidth, cropHeight, depth_thres, xy_thres, downsample, validIndex_train, validIndex_test
from anchor import get_sparse_anchors
import cv2
image_datasets = dataloader(testingImageDir, center_test, test_lefttop_pixel, test_rightbottom_pixel, keypointsUVD_test, validIndex_test, augment=False)
center = center_test
image_datasets = dataloader(trainingImageDir, center_train, train_lefttop_pixel, train_rightbottom_pixel, keypointsUVD_train, validIndex_train, False)
center = center_train
def view_cropimg():
i = random.randint(0, len(image_datasets)-1)
img, label = image_datasets[i]
img, label = img.numpy(), label.numpy()
img = img.squeeze(0)
plt.plot(label[:,1], label[:,0], 'r*')
plt.imshow(img)
view_cropimg()
| src/visual_hands2017.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ElydeAngel/daa_2021_1/blob/master/07Diciembre_DyV.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="U-rTQbfb_vFC"
def fnRecInfinita():
print("Hello")
fnRecInfinita()
# + id="7l03Ka5T_71q"
fnRecInfinita()
# + colab={"base_uri": "https://localhost:8080/"} id="bGrnxiwhAQJe" outputId="a9a824c6-504c-4af6-e6a7-7bdd30e8c168"
def fnRec( x ):
if x == 0:
print("Stop")
else:
print( x )
fnRec( x-1 )
def main():
print("Program start")
fnRec( 5 )
print("End of program")
main()
# + colab={"base_uri": "https://localhost:8080/"} id="UULSj0BOBbFF" outputId="6baa8d10-e67e-44f8-c216-25862b6ad4c2"
def printRev( x ): # Represents the call stack
if x > 0: # Checks whether 3, 2, 1, or 0 is greater than 0
print( x ) # Prints 3
printRev ( x-1 ) # Subtracts 1 from 3; here the calls are popped off the stack
printRev( 3 )
# + colab={"base_uri": "https://localhost:8080/"} id="CN4NVQihEH0q" outputId="651ff05c-dfe6-424d-8579-5bc25158057f"
def printRev( x ): # Represents the call stack
if x > 0: # Checks whether 3, 2, 1, or 0 is greater than 0
printRev ( x-1 ) # Only calls itself again with one less
print( x ) # Once the IF no longer holds, execution reaches this print on the way back up
printRev( 3 )
# + [markdown] id="CmQms9lXGt4X"
# ## Fibonacci Series
# ### **F(n - 1) + F(n - 2) if n > 1**
#
#
# > f(0) = 0 if n = 0
#
# > f(1) = 1 if n = 1
#
# > f(2) = F(2-1) + F(2-2) = 1 + 0 = 1
#
# > f(3) = F(3-1) + F(3-2) = 1 + 1 = 2
#
# and so on
#
# + colab={"base_uri": "https://localhost:8080/"} id="bY5FgKH3Ir-p" outputId="7de5689b-8ad4-4bdc-9756-9ea695069502"
def fibonacci( n ):
if n == 1 or n == 0:
return n
else:
return( fibonacci(n-1) + fibonacci(n-2) )
print(fibonacci(8))
# + colab={"base_uri": "https://localhost:8080/"} id="Pj1aElN7uGA8" outputId="03fd9cbe-250e-4495-f824-24901deeb75b"
def fibonacci( n ):
print("Call:", n) # To visualize how many calls are made, and to see that recursion
if n == 1 or n == 0: # is not necessarily the most efficient approach, since the call
return n # tree keeps growing larger and larger
else:
return( fibonacci(n-1) + fibonacci(n-2) )
print(fibonacci(20))
# The naive recursion is roughly O(2^n), since each call spawns two more
# + [markdown] id="ip4_2CXJylXu"
# ## Divide and Conquer
# Consists of splitting the problem into small, unrelated subproblems and then combining their solutions to solve the main problem
#
# Algorithm
# * Divide into small subproblems
# * Solve the subproblems recursively
# * Combine the solutions
#
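# The three steps above can be sketched with merge sort, a classic divide-and-conquer algorithm:

```python
def merge_sort(items):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(items) <= 1:
        return items
    # 1. Divide into small subproblems
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # 2. Solve each half recursively
    right = merge_sort(items[mid:])
    # 3. Combine the solutions by merging the two sorted halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```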
| 07Diciembre_DyV.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # "Network Analysis on the Bali network"
#
# > "A comprehensive assessment of the capabilities of the Python igraph package."
# - toc: true
# - branch: master
# - badges: false
# - comments: true
# - hide: false
# - search_exclude: true
# - metadata_key1: metadata_value1
# - metadata_key2: metadata_value2
# - image: images/net_figa.png
# - categories: [Network_Analysis, Python,igraph]
# - show_tags: true
# ## Purpose
# The purpose of this project is to perform a *network analysis* on the Bali terrorist network data set for the purpose of assessing the capabilities of the Python igraph package. The data set includes connection data between various terrorists (see https://www.researchgate.net/publication/249035940_Connecting_Terrorist_Networks for background).
# ## Analysis
# The notebook for the analysis can be accessed here:
# [NetworkingProject.ipynb](https://nbviewer.jupyter.org/github/kobus78/NetworkProject/blob/master/NetworkingProject.ipynb "NetworkingProject.ipynb")
| _notebooks/2019-03-15-NetworkProject.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <small><small><i>
# All the IPython Notebooks in **Python Introduction** lecture series by Dr. <NAME> are available @ **[GitHub](https://github.com/milaan9/01_Python_Introduction)**
# </i></small></small>
# # Python Statement, Indentation and Comments
#
# In this class, you will learn about Python statements, why indentation is important, and the use of comments in programming.
# ## 1. Python Statement
#
# Instructions that a Python interpreter can execute are called statements. For example, **`a = 1`** is an assignment statement. **`if`** statement, **`for`** statement, **`while`** statement, etc. are other kinds of statements which will be discussed later.
# ### Multi-line statement
#
# In Python, the end of a statement is marked by a newline character. But we can make a statement extend over multiple lines with the line continuation character **`\`**.
#
# * Statements finish at the end of the line:
# * Except when there is an open bracket or parenthesis:
#
# ```python
# >>> 1+2
# >>> +3 #illegal continuation of the sum
# ```
# * A single backslash at the end of the line can also be used to indicate that a statement is still incomplete
#
# ```python
# >>> 1 + \
# >>> 2 + 3 # this is also okay
# ```
#
# For example:
# +
1+2 # expression 1
+3 # expression 2
# Python evaluates each line as a separate expression
# -
# a trailing "\" continues the statement on to the next line
1+2 \
+3
a = 1 + 2 + 3 + \
4 + 5 + 6 + \
7 + 8 + 9
print(a)
# This is an explicit line continuation. In Python, line continuation is implied inside:
# 1. parentheses **`( )`**,
# For Example:
# ```python
# (1+2
# + 3) # perfectly OK even with spaces
# ```
# 2. brackets **`[ ]`**, and
# 3. braces **`{ }`**.
# For instance, we can implement the above multi-line statement as:
(1+2
+3)
a = (1 + 2 + 3 +
4 + 5 + 6 +
7 + 8 + 9)
print(a)
# Here, the surrounding parentheses **`( )`** do the line continuation implicitly. Same is the case with **`[ ]`** and **`{ }`**. For example:
colors = ['red',
'blue',
'green']
print(colors)
# We can also put multiple statements in a single line using semicolons **`;`** as follows:
# +
a = 1; b = 2; c = 3
print(a,b,c)
a,b,c
# -
# ## 2. Python Indentation
#
# No spaces or tab characters are allowed at the start of a top-level statement: indentation plays a special role in Python (see the section on control statements). For now, simply ensure that all statements start at the beginning of the line.
#
# <div>
# <img src="img/ind1.png" width="700"/>
# </div>
#
# Most of the programming languages like C, C++, and Java use braces **`{ }`** to define a block of code. Python, however, uses indentation.
#
# A comparison of C & Python will help you understand it better.
#
# <div>
# <img src="img/ind2.png" width="700"/>
# </div>
#
# A code block (body of a **[function](https://github.com/milaan9/04_Python_Functions/blob/main/001_Python_Functions.ipynb)**, **[loop](https://github.com/milaan9/03_Python_Flow_Control/blob/main/005_Python_for_Loop.ipynb)**, etc.) starts with indentation and ends with the first unindented line. The amount of indentation is up to you, but it must be consistent throughout that block.
#
# Generally, four whitespaces are used for indentation and are preferred over tabs. Here is an example.
#
# > **In Python, indentation is not just for styling purposes. It is a requirement for your code to be parsed and executed. Thus it is mandatory!**
for i in range(1,11):
print(i) #press "Tab" one time for 1 indentation
if i == 6:
break
# The enforcement of indentation in Python makes the code look neat and clean. This results in Python programs that look similar and consistent.
#
# Indentation can be ignored in line continuation, but it's always a good idea to indent. It makes the code more readable. For example:
if True:
print('Hello')
a = 6
# or
if True: print('Hello'); a = 6
# both are valid and do the same thing, but the former style is clearer.
#
# Incorrect indentation will result in an **`IndentationError`**.
# ## 3. Python Comments
#
# Comments are very important while writing a program. They describe what is going on inside a program, so that a person looking at the source code does not have a hard time figuring it out.
#
# You might forget the key details of the program you just wrote in a month's time. So taking the time to explain these concepts in the form of comments is always fruitful.
#
# In Python, we use the hash **`#`** symbol to start writing a comment.
#
# It extends up to the newline character. Comments are for programmers to better understand a program. The Python interpreter ignores comments.
#
# Generally, comments will look something like this:
#
# ```python
# #This is a Comment
# ```
#
# Because comments do not **execute**, when you run a program you will not see any indication of the comment there. Comments are in the source code for **humans** to **read**, not for **computers to execute**.
# +
#This is a Comment
# -
# ### 1. Single lined comment:
# To specify a single-line comment, the comment must start with **`#`**.
#
# ```python
# #This is single line comment.
# ```
# +
#This is single line comment.
# -
# ### 2. Inline comments
# If a comment is placed on the same line as a statement, it is called an inline comment. Similar to the block comment, an inline comment begins with a single hash (**`#`**) sign, followed by a space and the comment text.
#
# It is recommended that an inline comment be separated from the statement by at least **two spaces**. The following example demonstrates an inline comment:
#
# ```python
# >>>n+=1 # increase/add n by 1
# ```
n=9
n+=1 # increase/add n by 1
n
# ### 3. Multi lined comment:
#
# We can have comments that extend up to multiple lines. One way is to use the hash **`#`** symbol at the beginning of each line. For example:
# +
#This is a long comment
#and it extends
#to multiple lines
# -
#This is a comment
#print out Hello
print('Hello')
# Another way of doing this is to use triple quotes, either `'''` or `"""`.
#
# These triple quotes are generally used for multi-line strings. But they can be used as multi-line comments as well. As long as they are not docstrings, they do not generate any extra code.
#
# ```python
# #single line comment
# >>>print ("Hello Python"
# '''This is
# multiline comment''')
# ```
"""This is also a
perfect example of
multi-line comments"""
'''This is also a
perfect example of
multi-line comments'''
#single line comment
print ("Hello Python"
'''This is
multiline comment''')
# ### 4. Docstrings in Python
#
# A docstring is short for documentation string.
#
# **[Python Docstrings](https://github.com/milaan9/04_Python_Functions/blob/main/Python_Docstrings.ipynb)** (documentation strings) are the **[string](https://github.com/milaan9/02_Python_Datatypes/blob/main/002_Python_String.ipynb)** literals that appear right after the definition of a function, method, class, or module.
#
# Triple quotes are used while writing docstrings. For example:
#
# ```python
# >>>def double(num):
# >>> """Function to double the value"""
# >>> return 2*num
# ```
#
# Docstrings appear right after the definition of a function, class, or a module. This separates docstrings from multiline comments using triple quotes.
#
# The docstrings are associated with the object as their **`__doc__`** attribute.
#
# So, we can access the docstrings of the above function with the following lines of code:
def double(num):
"""Function to double the value"""
    return 2*num
print(double.__doc__)
# To learn more about docstrings in Python, visit **[Python Docstrings](https://github.com/milaan9/04_Python_Functions/blob/main/Python_Docstrings.ipynb)** .
# ## Help topics
#
# Python has extensive help built in. You can execute **`help()`** for an overview or **`help(x)`** for any library, object or type **`x`**. Try using **`help("topics")`** to get a list of help pages built into the help system.
#
# `help("topics")`
help("topics")
| 006_Python_Statement_Indentation_Comments.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 1. Printing and User Input 2.0
# +
name = input("Hello, what is your name? ")
age = input("Hello {}, my name is Robert. How old are you? ".format(name))
age_difference = 27 - int(age)
response = "{}, I am 27 years old, so I am {} years"
if age_difference < 0:
response += " younger than you."
age_difference = -age_difference
else:
response += " older than you."
print(response.format(name, age_difference))
# -
# # 2. A Bookstore
# +
number_of_books = int(input("Enter the number of books: "))
cover_price = 24.95
discount = 0.4
shipping_first_item = 3.00
shipping_extra_items = 0.75
total_book_price = (number_of_books * cover_price) * (1.0 - discount)
shipping_costs = shipping_first_item + ((number_of_books - 1) * shipping_extra_items)
total_price = total_book_price + shipping_costs
print("The costs of {} books is {:.2f} euros.".format(number_of_books, total_price))
# -
# # 3. Odd or Even
# +
user_input = input("Enter a number: ")
while user_input.isdigit():
answer = "even." if int(user_input) % 2 == 0 else "odd."
print("The number {} is {}".format(user_input, answer))
user_input = input("Enter a number: ")
# -
# # 4. Parcel Delivery
# +
zone = input("Please enter the parcel's destination: ").lower()
weight = float(input("Please enter the parcel's weight: "))
cost = 0
if zone == "world":
if weight > 0 and weight <= 2.0:
cost = 24.50
elif weight > 2.0 and weight <= 5:
cost = 34.30
elif weight > 5.0 and weight <= 10:
cost = 58.30
else:
print("The parcel is too heavy.")
elif zone == "europe":
if weight > 0 and weight <= 2.0:
cost = 13.00
elif weight > 2.0 and weight <= 5:
cost = 19.50
elif weight > 5.0 and weight <= 10:
cost = 25.00
else:
print("The parcel is too heavy.")
else:
print("Not a valid zone.")
if cost != 0:
print("The cost is {:.2f} euros.".format(cost))
# -
# # 5. Simple Area Calculator
# +
import math
shape = input("Please enter 'rectangle', 'triangle', or 'circle': ")
if shape == "rectangle":
    height = float(input("Enter the height: "))
    width = float(input("Enter the width: "))
    area = height * width
    print("The area of this rectangle is ", area)
elif shape == "triangle":
    height = float(input("Enter the height: "))
    width = float(input("Enter the width: "))
    area = 0.5 * height * width
    print("The area of this triangle is ", area)
elif shape == "circle":
    radius = float(input("Enter the radius: "))
    area = math.pi * (radius ** 2)
    print("The area of this circle is ", area)
else:
print("Not a valid shape.")
# -
# # 6. Flowchart to Code
# +
maximum = int(input("Enter a maximum value: "))  # renamed to avoid shadowing the built-in max()
for num in range(maximum):
    if num % 2 == 0:
        print(num)
# -
# # 7. Squares Table
# +
import random
running = True
while running:
if random.random() < 0.5:
question_value = int(random.uniform(1, 100))
correct_answer = question_value ** 2
for i in range(3):
student_answer = input("What is the square of {}: ".format(question_value))
if student_answer == "exit" or student_answer == "q":
running = False
break
if int(student_answer) == correct_answer:
print("Correct! Keep it up.")
break
print("Incorrect.")
else:
correct_answer = int(random.uniform(1, 100))
question_value = correct_answer ** 2
for i in range(3):
student_answer = input("What is the square root of {}: ".format(question_value))
if student_answer == "exit" or student_answer == "q":
running = False
break
if int(student_answer) == correct_answer:
print("Correct! Keep it up.")
break
print("Incorrect.")
| Assignments/answers/Lab_2-answers.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 2
# ### Exercise 15: Introduction to the Iterator
# 1. generate a list that will contain 10000000: ones
big_list_of_numbers = [1 for x in range(0, 10000000)]
big_list_of_numbers
# 2. Check the size of this variable
# +
from sys import getsizeof
getsizeof(big_list_of_numbers)
# -
# 3. Use an iterator to reduce memory utilization
# +
from itertools import repeat
small_list_of_numbers = repeat(1, times=10000000)
getsizeof(small_list_of_numbers)
# -
# 4. Loop over the newly generated iterator
for i, x in enumerate(small_list_of_numbers):
print(x)
if i > 10:
break
# 5. lookup several function definitions using ? operator in jupyter notebook.
# +
from itertools import (permutations, combinations, dropwhile, repeat,
zip_longest)
# +
# permutations?
# +
# combinations?
# +
# dropwhile?
# +
# repeat?
# +
# zip_longest?
# -
# ## Exercise 16: Implementing a Stack in Python
# 1. Define an empty stack and load the json file.
import pandas as pd
df = pd.read_json(r'Chapter02/users.json')
df
stack = []
# 2. Append another value to the stack
output = df.apply(lambda row : stack.append(row["email"]), axis=1)
stack
# 3. Use the append method to add an element in the stack.
stack.append("<EMAIL>")
stack
# 4. Read a value from our stack using the pop method.
tos = stack.pop()
tos
# 5. Append hello to the stack
stack.append("Hello")
stack
# ## Exercise 17: Implementing a Stack Using User-Defined Methods
# 1. First, define two functions, stack_push and stack_pop.
# +
def stack_push(s, value):
return s + [value]
def stack_pop(s):
tos = s[-1]
del s[-1]
return tos
url_stack = []
url_stack
# -
# 2. create a string with a few urls in it
wikipedia_datascience = """Data science is an interdisciplinary
field that uses scientific methods,
processes, algorithms and systems to extract
knowledge [https://en.wikipedia.org/wiki/Knowledge] and
insights from data [https://en.wikipedia.org/wiki/Data] in various
forms, both structured and unstructured, similar to data mining
[https://en.wikipedia.org/wiki/Data_mining]"""
# 3. Find the length of the string
len(wikipedia_datascience)
# 4. Convert this string into a list by using the split method from the string, and then calculate its length
wd_list = wikipedia_datascience.split()
wd_list
len(wd_list)
# 5. Use a for loop to go over each word and check whether it is a URL
for word in wd_list:
if word.startswith("[https://"):
url_stack = stack_push(url_stack, word[1:-1])
print(word[1:-1])
# 6. Print the value of the stack
print(url_stack)
# 7. Iterate over the list and print the URLs one by one by using the stack_pop function
for i in range(0, len(url_stack)):
print(stack_pop(url_stack))
# 8. Print it again to make sure that the stack is empty after the final for loop
print(url_stack)
# ## Exercise 18: Lambda Expression
# 1. Import the math package
import math
# 2. Define two functions, my_sine and my_cosine
# +
def my_sine():
return lambda x: math.sin(math.radians(x))
def my_cosine():
return lambda x: math.cos(math.radians(x))
# -
# 3. Define sine and cosine for our purpose
sine = my_sine()
cosine = my_cosine()
math.pow(sine(30), 2) + math.pow(cosine(30), 2)
# ## Exercise 19: Lambda Expression for Sorting
# 1. define a list of tuples
capitals = [("USA", "Washington"), ("India", "Delhi"), ("France", "Paris"), ("UK", "London")]
capitals
# 2. Sort this list by the name of the capitals of each country, using a simple lambda expression.
capitals.sort(key=lambda item: item[1])
capitals
# ## Exercise 20: Multi-Element Membership Checking
# 1. Create a list_of_words list with words scraped from a text corpus
list_of_words = ["Hello", "there.", "How", "are", "you", "doing?"]
list_of_words
# 2. Verify whether this list contains all the elements from another list
check_for = ["How", "are"]
check_for
# 3. Use the in keyword to check membership in the list list_of_words:
all(w in list_of_words for w in check_for)
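# The same generator-expression pattern works with `any` when at least one element being present is enough:

```python
list_of_words = ["Hello", "there.", "How", "are", "you", "doing?"]
check_for = ["How", "missing"]

# all() requires every element to be present; any() requires at least one
print(all(w in list_of_words for w in check_for))  # False ("missing" is absent)
print(any(w in list_of_words for w in check_for))  # True ("How" is present)
```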
# ## Exercise 21: Implementing a Queue in Python
# 1. Create a Python queue with the plain list methods
# +
# %%time
queue = []
for i in range(0, 100000):
queue.append(i)
print("Queue created")
queue
# -
# 2. Use the pop function to empty the queue and check items in it
# +
for i in range(0, 100000):
queue.pop(0)
print("Queue emptied")
queue
# -
# 3. Implement the same queue using the deque data structure from Python's collection package
# +
# %%time
from collections import deque
queue2 = deque()
for i in range(0, 100000):
queue2.append(i)
print("Queue created")
for i in range(0, 100000):
queue2.popleft()
print("Queue emptied")
# -
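# The deque version is much faster because `list.pop(0)` shifts every remaining element (O(n) per pop), while `deque.popleft()` is O(1). A small sketch comparing the two (function names and sizes here are illustrative):

```python
from collections import deque
from timeit import timeit

def drain_list(n):
    q = list(range(n))
    while q:
        q.pop(0)          # O(n) per pop: shifts all remaining items left

def drain_deque(n):
    q = deque(range(n))
    while q:
        q.popleft()       # O(1) per pop

# Absolute timings vary by machine; the deque drain is typically far faster
print("list :", timeit(lambda: drain_list(20000), number=1))
print("deque:", timeit(lambda: drain_deque(20000), number=1))
```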
# ## Exercise 22: File Operations
# 1. Import the os module.
import os
# 2. Set few environment variables
os.environ['MY_KEY'] = "MY_VAL"
os.getenv('MY_KEY')
# 3. Print the environment variable when it is not set
print(os.getenv('MY_KEY_NOT_SET'))
# 4. Print the os environment
print(os.environ)
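# `os.getenv` also accepts a second argument that is returned when the variable is not set, which avoids the `None` seen above:

```python
import os

# the second argument is the fallback used when the variable is unset
print(os.getenv('MY_KEY_NOT_SET', 'default_value'))  # default_value
```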
# ## Exercise 23: Opening and Closing a File
# 1. Create an empty file, then reopen it in binary mode:
f = open("Alice's Adventures in Wonderland, by <NAME>", "w")
n = f.write('')
f.close()
fd = open("Alice's Adventures in Wonderland, by <NAME>",
"rb")
fd.close()
# ## Exercise 24: Reading a File Line by Line
# 1. Open a file and then read the file line by line and print it as we read it
with open("Alice's Adventures in Wonderland, by <NAME>",
encoding="utf8") as fd:
for line in fd:
print(line)
# 2. Duplicate the same for loop, just after the first one
with open("Alice's Adventures in Wonderland, by <NAME>",
encoding="utf8") as fd:
for line in fd:
print(line)
print("Ended first loop")
for line in fd:
print(line)
# ## Exercise 25: Write to a File
# 1. Use the write function from the file descriptor object
# +
data_dict = {"India": "Delhi", "France": "Paris", "UK": "London",
"USA": "Washington"}
with open("data_temporary_files.txt", "w") as fd:
for country, capital in data_dict.items():
fd.write("The capital of {} is {}\n".format(
country, capital))
# -
# 2. Read the file using
with open("data_temporary_files.txt", "r") as fd:
for line in fd:
print(line)
# 3. Use the print function to write to a file
data_dict_2 = {"China": "Beijing", "Japan": "Tokyo"}
with open("data_temporary_files.txt", "a") as fd:
for country, capital in data_dict_2.items():
print("The capital of {} is {}".format(
country, capital), file=fd)
# 4. Read the file
with open("data_temporary_files.txt", "r") as fd:
for line in fd:
print(line)
| Chapter02/.ipynb_checkpoints/chapter2-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_p36
# language: python
# name: conda_tensorflow_p36
# ---
# # Solving Multi-Period Newsvendor Problem with Amazon SageMaker RL
#
# This notebook shows an example of how to use reinforcement learning to solve a version of online stochastic Newsvendor problem. This problem is well-studied in inventory management wherein one must decide on an ordering decision (how much of an item to purchase from a supplier) to cover a single period of uncertain demand. The objective is to trade-off the various costs incurred and revenues achieved during the period, usually consisting of sales revenue, purchasing and holding costs, loss
# of goodwill in the case of missed sales, and the terminal salvage value of unsold items.
# <img src="images/rl_news_vendor.png" width="600" align="center"/>
# ## Problem Statement
#
# The case considered here is stationary, single-product and multi-period, with vendor lead time (VLT) and stochastic demand. The VLT $l$ refers to the number of time steps between the placement and receipt of an order. When formulated as a Markov Decision Process (MDP), the RL agent is aware of the following information at each time step:
#
# * Mean demand
# * Item purchase cost, sold price
# * lost sale penalty for each unit of unmet demand, holding cost if any unit is left over at the end of a period
# * on-hand inventory and the units to be received within the next VLT periods
#
# At each time step, the agent can take a continuous action, consisting of the size of the order placed and to arrive $l$ time periods later. The reward is then calculated as the difference between revenue from selling and cost of buying/storing.
#
# The time horizon is 40 steps. You can see the specifics in the `NewsVendorGymEnvironment` class in `news_vendor_env.py`. A normalized version(`NewsVendorGymEnvironmentNormalized`) of this problem is used in this notebook.
# ## Using Amazon SageMaker RL
#
# Amazon SageMaker RL allows you to train your RL agents in cloud machines using docker containers. You do not have to worry about setting up your machines with the RL toolkits and deep learning frameworks. You can easily switch between many different machines setup for you, including powerful GPU machines that give a big speedup. You can also choose to use multiple machines in a cluster to further speedup training, often necessary for production level loads.
# ## Prerequisites
# ### Roles and permissions
#
# To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations.
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import HTML
import time
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object, wait_for_training_job_to_complete
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
# ### Setup S3 bucket
#
# Set up the linkage and authentication to the S3 bucket that you want to use for checkpoint and the metadata.
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket)
print("S3 bucket path: {}".format(s3_output_path))
# ### Define Variables
#
# We define variables such as the job prefix for the training jobs *and the image path for the container (only when this is BYOC).*
# create a descriptive job name
job_name_prefix = 'rl-newsvendor'
# ### Configure where training happens
# You can train your RL training jobs using the SageMaker notebook instance or local notebook instance. In both of these scenarios, you can run the following in either local or SageMaker modes. The local mode uses the SageMaker Python SDK to run your code in a local container before deploying to SageMaker. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. You just need to set `local_mode = True`. When setting `local_mode = False`, you can choose the instance type from available [ml instances](https://aws.amazon.com/sagemaker/pricing/instance-types/)
# +
local_mode = False
if local_mode:
instance_type = 'local'
else:
# If on SageMaker, pick the instance type
instance_type = "ml.m5.large"
# -
# ### Create an IAM role
# Either get the execution role when running from a SageMaker notebook instance `role = sagemaker.get_execution_role()` or, when running from local notebook instance, use utils method `role = get_execution_role()` to create an execution role.
# +
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role()
print("Using IAM role arn: {}".format(role))
# -
# ### Install docker for `local` mode
#
# In order to work in `local` mode, you need to have docker installed. When running from your local machine, please make sure that you have docker and docker-compose (for local CPU machines) and nvidia-docker (for local GPU machines) installed. Alternatively, when running from a SageMaker notebook instance, you can simply run the following script to install dependencies.
#
# Note, you can only run a single local notebook at one time.
# only run from SageMaker notebook instance
if local_mode:
# !/bin/bash ./common/setup.sh
# ## Setup the environment
#
# The environment is defined in a Python file called `news_vendor_env.py` in the `./src` directory. It implements the `init()`, `step()` and `reset()` functions that describe how the environment behaves. This is consistent with Open AI Gym interfaces for defining an environment.
#
# - Init() - initialize the environment in a pre-defined state
# - Step() - take an action on the environment
# - reset()- restart the environment on a new episode
# - [if applicable] render() - get a rendered image of the environment in its current state
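# As a rough illustration of that interface, here is a toy inventory environment with the same three methods. The class name, state variables, demand distribution, and cost numbers below are invented for illustration; this is not the actual `NewsVendorGymEnvironment` implementation (which subclasses the Gym environment class in `news_vendor_env.py`).

```python
import numpy as np

class TinyInventoryEnv:
    """Illustrative sketch only; the real environment lives in src/news_vendor_env.py."""

    def __init__(self, max_steps=40, seed=0):
        self.max_steps = max_steps
        self.rng = np.random.RandomState(seed)
        self.reset()

    def reset(self):
        # restart the environment on a new episode
        self.t = 0
        self.inventory = 0.0
        return np.array([self.inventory])

    def step(self, action):
        # receive the order, observe random demand, sell what we can
        self.inventory += float(action)
        demand = self.rng.poisson(5)
        sold = min(self.inventory, demand)
        self.inventory -= sold
        reward = sold - 0.1 * self.inventory  # toy revenue minus a holding cost
        self.t += 1
        done = self.t >= self.max_steps
        return np.array([self.inventory]), reward, done, {}

env = TinyInventoryEnv(max_steps=3)
obs = env.reset()
```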
# +
# uncomment the following line to see the environment
# # !pygmentize src/news_vendor_env.py
# -
# ## Write the training code
#
# The training code is written in the file `train_news_vendor.py` which is also uploaded in the `/src` directory.
# First import the environment files and the preset files, and then define the main() function.
# !pygmentize src/train_news_vendor.py
# ## Train the RL model using the Python SDK Script mode
#
# If you are using local mode, the training will run on the notebook instance. When using SageMaker for training, you can select a GPU or CPU instance. The [RLEstimator](https://sagemaker.readthedocs.io/en/stable/sagemaker.rl.html) is used for training RL jobs.
#
# 1. Specify the source directory where the gym environment and training code is uploaded.
# 2. Specify the entry point as the training code
# 3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.
# 4. Define the training parameters such as the instance count, job name, S3 path for output and job name.
# 5. Specify the hyperparameters for the RL agent algorithm. The RLCOACH_PRESET or the RLRAY_PRESET can be used to specify the RL agent algorithm you want to use.
# 6. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
# ### Define Metric
# A list of dictionaries that defines the metric(s) used to evaluate the training jobs. Each dictionary contains two keys: ‘Name’ for the name of the metric, and ‘Regex’ for the regular expression used to extract the metric from the logs.
metric_definitions = [{'Name': 'episode_reward_mean',
'Regex': 'episode_reward_mean: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)'},
{'Name': 'episode_reward_max',
'Regex': 'episode_reward_max: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)'},
{'Name': 'episode_len_mean',
'Regex': 'episode_len_mean: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)'},
{'Name': 'entropy',
'Regex': 'entropy: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)'},
{'Name': 'episode_reward_min',
'Regex': 'episode_reward_min: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)'},
{'Name': 'vf_loss',
'Regex': 'vf_loss: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)'},
{'Name': 'policy_loss',
'Regex': 'policy_loss: ([-+]?[0-9]*\\.?[0-9]+([eE][-+]?[0-9]+)?)'},
]
# ### Define Estimator
# This Estimator executes an RLEstimator script in a managed Reinforcement Learning (RL) execution environment within a SageMaker Training Job. The managed RL environment is an Amazon-built Docker container that executes functions defined in the supplied entry_point Python script.
# +
train_entry_point = "train_news_vendor.py"
train_job_max_duration_in_seconds = 60 * 15 # 15 mins to make sure TrainingJobAnalytics shows at least two points
estimator = RLEstimator(entry_point=train_entry_point,
source_dir="src",
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.RAY,
toolkit_version='0.8.5',
framework=RLFramework.TENSORFLOW,
role=role,
instance_type=instance_type,
instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
metric_definitions=metric_definitions,
max_run=train_job_max_duration_in_seconds,
hyperparameters={}
)
# -
estimator.fit(wait=local_mode)
job_name = estimator.latest_training_job.job_name
print("Training job: %s" % job_name)
# ## Visualization
#
# RL training can take a long time. So while it's running there are a variety of ways we can track progress of the running training job. Some intermediate output gets saved to S3 during training, so we'll set up to capture that.
# +
s3_url = "s3://{}/{}".format(s3_bucket,job_name)
intermediate_folder_key = "{}/output/intermediate/".format(job_name)
intermediate_url = "s3://{}/{}training/".format(s3_bucket, intermediate_folder_key)
print("S3 job path: {}".format(s3_url))
print("Intermediate folder path: {}".format(intermediate_url))
# -
# ### Plot metrics for training job
# We can see the reward metric of the training as it's running, using algorithm metrics that are recorded in CloudWatch metrics. We can plot this to see the performance of the model over time.
# %matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics
if not local_mode:
wait_for_training_job_to_complete(job_name) # Wait for the job to finish
df = TrainingJobAnalytics(job_name, ['episode_reward_mean']).dataframe()
df_min = TrainingJobAnalytics(job_name, ['episode_reward_min']).dataframe()
df_max = TrainingJobAnalytics(job_name, ['episode_reward_max']).dataframe()
df['rl_reward_mean'] = df['value']
df['rl_reward_min'] = df_min['value']
df['rl_reward_max'] = df_max['value']
num_metrics = len(df)
if num_metrics == 0:
print("No algorithm metrics found in CloudWatch")
else:
plt = df.plot(x='timestamp', y=['rl_reward_mean'], figsize=(18,6), fontsize=18, legend=True, style='-', color=['b','r','g'])
plt.fill_between(df.timestamp, df.rl_reward_min, df.rl_reward_max, color='b', alpha=0.2)
plt.set_ylabel('Mean reward per episode', fontsize=20)
plt.set_xlabel('Training time (s)', fontsize=20)
plt.legend(loc=4, prop={'size': 20})
else:
print("Can't plot metrics in local mode.")
# #### Monitor training progress
# You can repeatedly run the visualization cells to get the latest metrics as the training job proceeds.
# ## Training Results
# You can let the training job run longer by specifying `train_max_run` in `RLEstimator`. The figure below illustrates the reward function of the RL policy vs. that of Critical Ratio, a classic heuristic. The experiments are conducted on a p3.8x instance. For more details on the environment setup and how different parameters are set, please refer to [ORL: Reinforcement Learning Benchmarks for Online Stochastic Optimization
# Problems](https://arxiv.org/pdf/1911.10641.pdf).
# <img src="images/rl_news_vendor_result.png" width="800" align="center"/>
| reinforcement_learning/rl_resource_allocation_ray_customEnv/rl_news_vendor_ray_custom.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/deeplearning.ai/tf/c4_w1_time_series_baselines.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="_dxa3O6B5uBV"
# # Time Series & Prediction Baselines
# + [markdown] id="FuRJrkSX52jC"
# The next code will set up the time series with seasonality, trend and a bit of noise.
# + id="U8jT4bZW5sUe"
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.keras as keras
# + id="mZ4ZLfTP6R3G"
def plot_series(times, series, format='-', start=0, end=None):
# plt.figure(figsize=(10, 6))
plt.plot(times[start:end], series[start:end], format)
plt.xlabel('time')
plt.ylabel('value')
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
    '''
    Just an arbitrary pattern, you can change it if you wish
    '''
    return np.where(season_time < 0.4,
                    np.cos(season_time * 2 * np.pi),
                    1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
    '''
    Repeats the same pattern at each period
    '''
    season_time = ((time + phase) % period) / period
    return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, rnd=np.random.RandomState(42)):
return rnd.randn(len(time)) * noise_level
# + id="tLdJr7dS71kU"
time = np.arange(4 * 365 + 1, dtype='float32')
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
rnd_seed = np.random.RandomState(42)
trend_series = trend(time, slope)
seasonality_series = seasonality(time, period=365, amplitude=amplitude)
series = baseline + trend_series + seasonality_series
series += noise(time, noise_level, rnd=rnd_seed)
# + id="HqmL_uj_98ky" outputId="734a41fe-6d3b-46af-dba3-5efe2a59c0c0" colab={"base_uri": "https://localhost:8080/", "height": 388}
plt.figure(figsize=(10, 6));
plot_series(time, series);
# + [markdown] id="Hy0eYvbmCIi2"
# Now that we have the time series, let's split it so we can start forecasting
# + id="YP9PyWxYCH6d" outputId="5710d317-821f-4463-b50c-9f71ee8de3ba" colab={"base_uri": "https://localhost:8080/", "height": 279}
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
plt.plot(time_train, x_train);
plt.xlabel('time');
plt.ylabel('value');
plt.grid(True);
# + id="OM8irO5FRFLn" outputId="3fe6c09e-cf5f-4513-bb66-8f9d448b74b7" colab={"base_uri": "https://localhost:8080/", "height": 388}
plt.figure(figsize=(10, 6))
plt.plot(time_valid, x_valid);
plt.xlabel('time');
plt.ylabel('value');
plt.grid(True);
# + id="Jmy9dH8-Ppyt" outputId="92ca3ca6-9841-405d-fe1b-df10dc2d590a" colab={"base_uri": "https://localhost:8080/", "height": 279}
plot_series(time_train, x_train)
# + [markdown] id="68-uCw58B3CK"
# ## Naive Forecast
# + id="h9qd4Sff8W-0"
naive_forecast = series[split_time - 1: -1]
# + id="Xnj4hPeQ7i9F" outputId="8d7da2c6-a909-434d-f52b-47509a680891" colab={"base_uri": "https://localhost:8080/", "height": 388}
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, naive_forecast)
# + id="x7xWty3lTtsd" outputId="ca2feb1e-2ef6-4fdd-fb3f-5df6bcd69f6d" colab={"base_uri": "https://localhost:8080/", "height": 388}
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start=0, end=150)
plot_series(time_valid, naive_forecast, start=1, end=151)
# + [markdown] id="_-IB_kHBVh17"
# You can see that the naive forecast lags 1 step behind the time series.
#
# Now let's compute the mean squared error and the mean absolute error between the forecast and the predictions in the validation period.
# + id="MwdUzVKbVZ8a" outputId="27091601-90e6-4cd7-bba1-352773220183" colab={"base_uri": "https://localhost:8080/"}
print(keras.metrics.mean_squared_error(x_valid, naive_forecast).numpy())
print(keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy())
# + [markdown] id="zbKRoAu2cy6p"
# That's our baseline, now let's try a moving average:
# + id="dE3GLGg6VaDS"
def moving_average_forecast(series, window_size):
'''
Forecast the mean of the last few values.
If window_size=1, then this is equivalent to the naive forecast
'''
forecast = []
for time in range(len(series) - window_size):
forecast.append(series[time:time + window_size].mean())
return forecast
# + id="wrw8ca6aVaAM" outputId="c4123120-fb47-4c82-e4f1-1ef3278c245f" colab={"base_uri": "https://localhost:8080/"}
moving_average_forecast(np.arange(10), 5)
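# The loop above is easy to read but does O(n·w) work. An equivalent vectorized version using a cumulative sum (a supplementary sketch, not part of the original lesson) computes every window mean in O(n):

```python
import numpy as np

def moving_average_forecast_fast(series, window_size):
    '''
    Vectorized equivalent of moving_average_forecast: the mean of
    series[t:t+w] is (csum[t+w] - csum[t]) / w for a prefix sum csum.
    '''
    csum = np.cumsum(np.concatenate([[0.0], series]))
    means = (csum[window_size:] - csum[:-window_size]) / window_size
    # the loop stops one window early (range(len(series) - window_size)),
    # so drop the final full-window mean to match it exactly
    return means[:-1]
```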
# + id="Ix-GgvnyVZ5W" outputId="4cf01da7-1972-4cd0-9814-a1f3cd8961d1" colab={"base_uri": "https://localhost:8080/", "height": 388}
moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, moving_avg)
# + id="nPcnc6pnf1w9" outputId="57f483f5-a5d9-41ff-b031-a55623dd5cd8" colab={"base_uri": "https://localhost:8080/"}
print('naive forecast:')
print('')
print(keras.metrics.mean_squared_error(x_valid, naive_forecast).numpy())
print(keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy())
print('')
print('-----')
print('')
print('moving average:')
print('')
print(keras.metrics.mean_squared_error(x_valid, moving_avg).numpy())
print(keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy())
# + [markdown] id="u0RIGIuzi886"
# That's worse than the naive forecast! The moving average does not anticipate trend or seasonality, so let's try to remove them by using differencing. Since the seasonality period is 365 days, we will subtract the value at time $t - 365$ from the value at time $t$.
# + id="eYXKuq3fiKoG"
diff_series = (series[365:] - series[:-365])
diff_time = time[365:]
# + id="umMHAx90mCTP" outputId="1841eede-558e-446a-e141-7726fc193f78" colab={"base_uri": "https://localhost:8080/", "height": 388}
plt.figure(figsize=(10, 6))
plot_series(diff_time, diff_series)
# + [markdown] id="tpfir4WUnmFi"
# Great, the trend and seasonality seem to be gone, so now we can use the moving average:
# + id="saWTCbhkmVFy" outputId="d1c55e0b-cb64-46e4-a1ab-d3f615ec27eb" colab={"base_uri": "https://localhost:8080/", "height": 388}
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:])
plot_series(time_valid, diff_moving_avg)
# + [markdown] id="xlBUT46JoI_Q"
# Now let's bring back the trend and seasonality by adding the past values from $t - 365$
# + id="R4tS5mLUmV0K" outputId="fb79680c-791d-4675-9740-ab1774ba76b0" colab={"base_uri": "https://localhost:8080/", "height": 388}
diff_moving_avg_plus_past = series[split_time - 365: -365] + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, diff_moving_avg_plus_past)
# + id="NCTTAiLOminK" outputId="7cf3869d-f106-41b1-8d8a-588e50e4def9" colab={"base_uri": "https://localhost:8080/"}
print('naive forecast:')
print('')
print(keras.metrics.mean_squared_error(x_valid, naive_forecast).numpy())
print(keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy())
print('')
print('-----')
print('')
print('Diff moving average plus past:')
print('')
print(keras.metrics.mean_squared_error(x_valid, diff_moving_avg_plus_past).numpy())
print(keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy())
# + [markdown] id="79GPF0g5plBP"
# Better than the naive forecast, good. However the forecasts look a bit too random, because we're just adding back past values, which were noisy. Let's use a moving average on the past values to remove some of that noise:
# + id="RoIM_pRLms1N" outputId="b07cf23d-171d-4546-ee29-e2ce101fe3bf" colab={"base_uri": "https://localhost:8080/", "height": 388}
diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-360], 10) + diff_moving_avg
diff_moving_avg_plus_smooth_past = diff_moving_avg_plus_smooth_past[:time_valid.shape[0]]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, diff_moving_avg_plus_smooth_past)
| deeplearning.ai/tf/c4_w1_ts_baselines.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **This notebook combines running training, inference & ensembling for all models. If you ran it like this end-to-end, it would take about 4 days, so you can just copy out the models you want one by one and run them individually! I have uploads of all the weights & features on Kaggle if you want to make use of them. Let me know if you have any questions!** 🥶
# # Data Preparation
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
# Clone the repo; Get in the data
# !cd ../; git clone -b master https://github.com/Muennighoff/vilio.git
# Copy in the data you want to run; For simplification I just copy in all (If you run this on kaggle it might crash)
# Refer to the extraction notebook if you haven't run extraction yet!
# !cp -r ../input/hmtsvfeats/* ../vilio/data/
# !cp -r ../input/hmtsvfeats/* ../vilio/ernie-vil/data/hm/
# LMDB feats only used for V-Model
# !cp -r ../input/hmfeatureszipfin/detectron.lmdb ../vilio/data/
# Copy in the hateful memes data from uploading / downloading it
# If you do not have enough disk space split up the training of models!
# If you downloaded the updated HM data; you just need to copy in everything in the data folder (img, .jsonl's)
# Replace the hatefulmemes below with the name of your uploaded data
# !cp -r ../input/hatefulmemes/data/* ../vilio/data/
# !cp -r ../input/hatefulmemes/data/* ../vilio/ernie-vil/data/hm/
# Copy in the pretrained models depending on which model you want to run; Again just putting all here
# I have uploaded them all to Kaggle, so feel free to download from the public datasets below :)
# O
# !cp ../input/oscarvglarge/large-vg-labels/ep_20_590000/pytorch_model.bin ../vilio/data/
# U
# !cp ../input/uniterlarge/uniter-large.pt ../vilio/data/
# V
# !cp ../input/vbpretrainedfb/model.pth ../vilio/data/
# ES
# !cp -r ../input/erniesmall/ ../vilio/ernie-vil/data/
# !cp -r ../input/erniesmallvcr/ ../vilio/ernie-vil/data/
# EL
# !cp -r ../input/ernielarge/ ../vilio/ernie-vil/data/
# !cp -r ../input/ernielargevcr/ ../vilio/ernie-vil/data/
# -
# # Run PyTorch Models
# Install the PyTorch requirements
# !cd ../vilio; pip install -r requirements.txt
# O
# !cd ../vilio; bash bash/training/O/hm_O.sh
# U
# !cd ../vilio; bash bash/training/U/hm_U.sh
# V
# Make sure we have the most updated torch with SWA
# !pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
# !cd ../vilio; bash bash/training/V/hm_V.sh
# # Run PaddlePaddle Models
# Install PaddlePaddle Requirements
# !cd ../vilio/ernie-vil; pip install -r requirements.txt
# ES
# !cd ../vilio/ernie-vil; bash bash/training/ES/hm_ES.sh
# EL
# !cd ../vilio/ernie-vil; bash bash/training/EL/hm_EL.sh
# # Combine & Output
# Reinstall normal requirements
# !cd ../vilio; pip install -r requirements.txt
# +
# !cp -r ../vilio/data/O*/*.csv ../vilio/data
# !cp -r ../vilio/data/U*/*.csv ../vilio/data
# !cp -r ../vilio/data/V*/*.csv ../vilio/data
# !cp -r ../vilio/ernie-vil/data/hm/ES*/*.csv ../vilio/data
# !cp -r ../vilio/ernie-vil/data/hm/EL*/*.csv ../vilio/data
# -
# !cd ../vilio; bash bash/hm_ens.sh
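# The ensembling step combines the per-model prediction CSVs into a final submission. The repo's `hm_ens.sh` may use a more elaborate scheme; the sketch below (hypothetical data, not from the repo) shows only the simplest variant, plain probability averaging:

```python
import pandas as pd

# two hypothetical per-model submission files in (id, proba, label) format
a = pd.DataFrame({"id": [1, 2], "proba": [0.9, 0.2], "label": [1, 0]})
b = pd.DataFrame({"id": [1, 2], "proba": [0.7, 0.4], "label": [1, 0]})

ens = a[["id"]].copy()
ens["proba"] = (a["proba"] + b["proba"]) / 2      # mean of model probabilities
ens["label"] = (ens["proba"] > 0.5).astype(int)   # re-threshold at 0.5
```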
# Save final csvs
# !cp -r ../vilio/data/FIN*.csv ./
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a" jupyter={"outputs_hidden": true}
| notebooks/hm_pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <img src="images/dask_horizontal.svg" align="right" width="30%">
# # Distributed execution
# As we have seen so far, Dask allows you to simply construct graphs of tasks with dependencies, as well as have graphs created automatically for you using functional, NumPy or Pandas syntax on data collections. None of this would be very useful if there weren't also a way to execute these graphs in a parallel and memory-aware way. So far we have been calling `thing.compute()` or `dask.compute(thing)` without worrying about what this entails. Now we will discuss the options available for that execution, and in particular the distributed scheduler, which comes with additional functionality.
#
# Dask comes with four available schedulers:
# - "threaded" (aka "threading"): a scheduler backed by a thread pool
# - "processes": a scheduler backed by a process pool
# - "single-threaded" (aka "sync"): a synchronous scheduler, good for debugging
# - distributed: a distributed scheduler for executing graphs on multiple machines, see below.
#
# To select one of these for computation, you can specify at the time of asking for a result, e.g.:
# ```python
# myvalue.compute(scheduler="single-threaded")  # for debugging
# ```
#
# You can also set a default scheduler temporarily:
# ```python
# with dask.config.set(scheduler='processes'):
#     # set temporarily for this block only
#     # all compute calls within this block will use the specified scheduler
#     myvalue.compute()
#     anothervalue.compute()
# ```
#
# Or globally
# ```python
# # set until further notice
# dask.config.set(scheduler='processes')
# ```
#
# Let's try out a few schedulers on the familiar case of the flights data.
# %run prep.py -d flights
# +
import dask.dataframe as dd
import os
df = dd.read_csv(os.path.join('data', 'nycflights', '*.csv'),
parse_dates={'Date': [0, 1, 2]},
dtype={'TailNum': object,
'CRSElapsedTime': float,
'Cancelled': bool})
# Maximum average non-cancelled flight delay, grouped by origin airport
largest_delay = df[~df.Cancelled].groupby('Origin').DepDelay.mean().max()
largest_delay
# -
# Each of the following results is the same (you can check)
# any surprises?
import time
for sch in ['threading', 'processes', 'sync']:
t0 = time.time()
r = largest_delay.compute(scheduler=sch)
t1 = time.time()
print(f"{sch:>10}, {t1 - t0:0.4f} s; result, {r:0.2f} hours")
# ### Some questions to consider:
#
# - How much speedup is possible for this task (hint: look at the graph)?
# - Given how many cores are on this machine, how much faster could the parallel schedulers be than the single-threaded one?
# - How much faster was using threads than the single-threaded scheduler? Why does this differ from the optimal speedup?
# - Why is the multiprocessing scheduler so much slower here?
#
# The `threaded` scheduler is a fine choice for working out-of-core with large datasets on a single machine, as long as the functions being used release the [GIL](https://wiki.python.org/moin/GlobalInterpreterLock) most of the time. NumPy and pandas release the GIL in most places, so the `threaded` scheduler is the default for `dask.array` and `dask.dataframe`. The distributed scheduler, perhaps with `processes=False`, will also work well for these workloads on a single machine.
#
# For workloads that do hold the GIL, as is common with `dask.bag` and custom code wrapped with `dask.delayed`, we recommend the distributed scheduler, even on a single machine. Generally speaking, it is smarter than the `processes` scheduler and provides better diagnostics.
#
# https://docs.dask.org/en/latest/scheduling.html provides some additional details on choosing a scheduler.
#
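# A Dask-free, standard-library illustration of the GIL point above (a supplementary sketch, not part of the original tutorial): pure-Python CPU-bound work gives identical results under a thread pool but gains little speed, because the interpreter lock serialises the threads — which is why the threaded scheduler suits GIL-releasing NumPy/pandas code.

```python
from concurrent.futures import ThreadPoolExecutor

def busy(n):
    # pure-Python loop: holds the GIL the whole time it runs
    total = 0
    for i in range(n):
        total += i * i
    return total

# run four tasks serially, then on a 4-thread pool
serial = [busy(200_000) for _ in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(busy, [200_000] * 4))

# identical answers either way; time the two variants yourself to see
# the GIL limit the thread pool's speedup
assert serial == threaded
```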
# For scaling out work across a cluster, the distributed scheduler is required.
# ## Making a cluster
# ### Simple method
# The `dask.distributed` system is composed of a single centralized scheduler and one or more worker processes. [Deploying](https://docs.dask.org/en/latest/setup.html) a remote Dask cluster involves some additional effort, but doing things locally just involves creating a `Client` object, which lets you interact with the "cluster" (local threads or processes on your machine). For more information see [here](https://docs.dask.org/en/latest/setup/single-distributed.html).
#
# Note that `Client()` takes a lot of optional [arguments](https://distributed.dask.org/en/latest/local-cluster.html#api) to configure the number of processes/threads, memory limits and other settings.
# +
from dask.distributed import Client
# Setup a local cluster.
# By default this sets up 1 worker per core
client = Client()
client.cluster
# -
# Be sure to click the "Dashboard" link to open the diagnostics dashboard, if you are not using the "dask-labextension" within jupyterlab.
#
# ## Executing with the distributed client
# Consider some trivial calculation, such as we have used before, where we have added sleep statements in order to simulate real work being done.
# +
from dask import delayed
import time
def inc(x):
time.sleep(5)
return x + 1
def dec(x):
time.sleep(3)
return x - 1
def add(x, y):
time.sleep(7)
return x + y
# -
# By default, creating a `Client` makes it the default scheduler. Any calls to `.compute` will use the cluster your `client` is attached to, unless you specify otherwise, as above.
#
x = delayed(inc)(1)
y = delayed(dec)(2)
total = delayed(add)(x, y)
total.compute()
# The tasks will appear in the web UI as they are processed by the cluster and, eventually, a result will be printed as output of the cell above. Note that the kernel is blocked while waiting for the result. The resulting task-block graph might look something like the one below. Hovering over each block gives the function it relates to and how long it took to execute.
#
# You can also see a simplified version of the graph being executed on the Graph pane of the dashboard, so long as the calculation is in flight.
# Let's return to the flights computation from before, and see what happens on the dashboard (you may wish to have the notebook and dashboard side-by-side). How did this perform compared to before?
# %time largest_delay.compute()
# In this particular case, this should be as fast as or faster than the best case above, threading. Why do you suppose that is? You should start your reading [here](https://distributed.dask.org/en/latest/index.html#architecture), and note in particular that the distributed scheduler is a complete rewrite, with more intelligence around sharing intermediate results and around which tasks run on which worker. This results in better performance in *some* cases, but it still has larger latency and overhead compared to the threaded scheduler, so there will be rare cases where it performs worse. Fortunately, the dashboard now gives us much more [diagnostic information](https://distributed.dask.org/en/latest/diagnosing-performance.html). Look at the Profile page of the dashboard: what takes the biggest fraction of CPU time for the computation we just performed?
# If all you want to do is execute computations created with delayed, or run calculations based on the higher-level data collections, then this is about all you need to know to scale your work up to cluster scale. However, there is more detail worth knowing about the distributed scheduler that will help with efficient usage; see the chapter Distributed, Advanced.
# ### Exercise
#
# Run the following computations while looking at the diagnostics page. In each case, what is taking the most time?
# Number of flights
_ = len(df)
# Number of non-cancelled flights
_ = len(df[~df.Cancelled])
# Number of non-cancelled flights per airport
_ = df[~df.Cancelled].groupby('Origin').Origin.count().compute()
# Average departure delay from each airport?
_ = df[~df.Cancelled].groupby('Origin').DepDelay.mean().compute()
# Average departure delay per day of week
_ = df.groupby(df.Date.dt.dayofweek).DepDelay.mean().compute()
client.shutdown()
| 05_distributed.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# tobac example: Tracking isolated convection based on updraft velocity and total condensate
# ==
# This example notebook demonstrates the use of tobac to track isolated deep convective clouds in cloud-resolving model simulation output based on vertical velocity and total condensate mixing ratio.
#
# The simulation results used in this example were performed as part of the ACPC deep convection intercomparison case study (http://acpcinitiative.org/Docs/ACPC_DCC_Roadmap_171019.pdf) with WRF using the Morrison microphysics scheme.
#
# The data used in this example is downloaded from "zenodo link" automatically as part of the notebooks (This only has to be done once for all the tobac example notebooks).
# **Import libraries:**
import iris
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
import iris.plot as iplt
import iris.quickplot as qplt
import zipfile,shutil
from six.moves import urllib
from glob import glob
# %matplotlib inline
# Import tobac itself:
import tobac
#Disable a couple of warnings:
import warnings
warnings.filterwarnings('ignore', category=UserWarning, append=True)
warnings.filterwarnings('ignore', category=RuntimeWarning, append=True)
warnings.filterwarnings('ignore', category=FutureWarning, append=True)
warnings.filterwarnings('ignore',category=pd.io.pytables.PerformanceWarning)
# **Download and load example data:**
# The actual downloading is only necessary once for all example notebooks.
data_out='../'
# # Download the data: This only has to be done once for all tobac examples and can take a while
# file_path='https://zenodo.org/record/3195910/files/climate-processes/tobac_example_data-v1.0.1.zip'
# #file_path='http://zenodo..'
# tempfile='temp.zip'
# print('start downloading data')
# request=urllib.request.urlretrieve(file_path,tempfile)
# print('start extracting data')
# shutil.unpack_archive(tempfile,data_out)
# # zf = zipfile.ZipFile(tempfile)
# # zf.extractall(data_out)
# os.remove(tempfile)
# print('data extracted')
# **Load Data from downloaded file:**
data_file_W_mid_max = os.path.join(data_out,'*','data','Example_input_midlevelUpdraft.nc')
data_file_W_mid_max = glob(data_file_W_mid_max)[0]
data_file_TWC = os.path.join(data_out,'*','data','Example_input_Condensate.nc')
data_file_TWC = glob(data_file_TWC)[0]
W_mid_max=iris.load_cube(data_file_W_mid_max,'w')
TWC=iris.load_cube(data_file_TWC,'TWC')
# Display information about the two cubes for vertical velocity and total condensate mixing ratio:
display(W_mid_max)
display(TWC)
#Set up directory to save output and plots:
savedir='Save'
if not os.path.exists(savedir):
os.makedirs(savedir)
plot_dir="Plot"
if not os.path.exists(plot_dir):
os.makedirs(plot_dir)
# **Feature detection:**
#
# Perform feature detection based on midlevel maximum vertical velocity and a range of threshold values.
# Determine temporal and spatial sampling of the input data:
dxy,dt=tobac.get_spacings(W_mid_max)
# Keyword arguments for feature detection step:
parameters_features={}
parameters_features['position_threshold']='weighted_diff'
parameters_features['sigma_threshold']=1
parameters_features['min_num']=3
parameters_features['min_distance']=0
parameters_features['threshold']=[3,5,10] #m/s
parameters_features['n_erosion_threshold']=0
parameters_features['n_min_threshold']=3
# Perform feature detection and save results:
print('start feature detection based on midlevel column maximum vertical velocity')
dxy,dt=tobac.get_spacings(W_mid_max)
Features=tobac.feature_detection_multithreshold(W_mid_max,dxy,**parameters_features)
print('feature detection performed start saving features')
Features.to_hdf(os.path.join(savedir,'Features.h5'),'table')
print('features saved')
# **Segmentation:**
# Perform segmentation based on the 3D total condensate field to determine the cloud volumes associated with the identified features:
parameters_segmentation_TWC={}
parameters_segmentation_TWC['method']='watershed'
parameters_segmentation_TWC['threshold']=0.1e-3 # kg/kg mixing ratio
print('Start segmentation based on total water content')
Mask_TWC,Features_TWC=tobac.segmentation_3D(Features,TWC,dxy,**parameters_segmentation_TWC)
print('segmentation TWC performed, start saving results to files')
iris.save([Mask_TWC],os.path.join(savedir,'Mask_Segmentation_TWC.nc'),zlib=True,complevel=4)
Features_TWC.to_hdf(os.path.join(savedir,'Features_TWC.h5'),'table')
print('segmentation TWC performed and saved')
# **Trajectory linking:**
# Detected features are linked into cloud trajectories using the trackpy library (http://soft-matter.github.io/trackpy). This takes the feature positions determined in the feature detection step into account but does not include information on the shape of the identified objects.
# Keyword arguments for linking step:
parameters_linking={}
parameters_linking['method_linking']='predict'
parameters_linking['adaptive_stop']=0.2
parameters_linking['adaptive_step']=0.95
parameters_linking['extrapolate']=0
parameters_linking['order']=1
parameters_linking['subnetwork_size']=100
parameters_linking['memory']=0
parameters_linking['time_cell_min']=5*60
parameters_linking['v_max']=10
parameters_linking['d_min']=2000
# Perform linking and save results:
Track=tobac.linking_trackpy(Features,W_mid_max,dt=dt,dxy=dxy,**parameters_linking)
Track.to_hdf(os.path.join(savedir,'Track.h5'),'table')
# **Visualisation:**
# Set the extent for the maps plotted in the following cells (in the form [lon_min, lon_max, lat_min, lat_max])
axis_extent=[-95,-93.8,29.5,30.6]
# Plot map with all individual tracks:
import cartopy.crs as ccrs
fig_map,ax_map=plt.subplots(figsize=(10,10),subplot_kw={'projection': ccrs.PlateCarree()})
ax_map=tobac.map_tracks(Track,axis_extent=axis_extent,axes=ax_map)
# Create animation showing tracked cells with outline of cloud volumes and the midlevel vertical velocity as a background field:
animation_tobac=tobac.animation_mask_field(track=Track,features=Features,field=W_mid_max,mask=Mask_TWC,
axis_extent=axis_extent,#figsize=figsize,orientation_colorbar='horizontal',pad_colorbar=0.2,
vmin=0,vmax=20,extend='both',cmap='Blues',
interval=500,figsize=(10,7),
plot_outline=True,plot_marker=True,marker_track='x',plot_number=True,plot_features=True)
# Display animation:
from IPython.display import HTML, Image, display
HTML(animation_tobac.to_html5_video())
# +
# # Save animation to file
# savefile_animation=os.path.join(plot_dir,'Animation.mp4')
# animation_tobac.save(savefile_animation,dpi=200)
# print(f'animation saved to {savefile_animation}')
# -
# Updraft lifetimes of tracked cells:
fig_lifetime,ax_lifetime=plt.subplots()
tobac.plot_lifetime_histogram_bar(Track,axes=ax_lifetime,bin_edges=np.arange(0,120,10),density=False,width_bar=8)
ax_lifetime.set_xlabel('lifetime (min)')
ax_lifetime.set_ylabel('counts')
| examples/Example_Updraft_Tracking/Example_Updraft_Tracking.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python2
# name: python2
# ---
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib notebook
# Generate a sample $y = 0.5x + 1 + \epsilon$
x = np.arange(-250, 250)
y = 0.5*x + np.ones(len(x)) + \
np.random.normal(scale=np.sqrt(0.2), size=len(x))
fig, ax = plt.subplots(figsize=(9, 6))
plt.title(u'Sample')
plt.xlabel('x')
plt.ylabel('y')
ax.plot(x, y)
plt.show()
# Minimize the squared deviations
# +
from scipy.optimize import minimize
def linear_func(x, k, b):
return k*x + b
def MSE(true_values, func_values):
return np.mean((true_values - func_values)**2)
k, b = minimize(lambda kb: MSE(y, linear_func(x, kb[0], kb[1])), [0, 0]).x
# -
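# As a cross-check (added here, not in the original), the MSE minimum for a straight line has a closed form, and `np.polyfit` computes the same least-squares fit, so it should agree with the numerical optimizer up to tolerance:

```python
import numpy as np

rng = np.random.RandomState(0)
x = np.arange(-250, 250, dtype=float)
y = 0.5 * x + 1 + rng.normal(scale=np.sqrt(0.2), size=len(x))

# closed-form least squares via the normal equations
A = np.vstack([x, np.ones_like(x)]).T
k_ls, b_ls = np.linalg.lstsq(A, y, rcond=None)[0]

# np.polyfit with deg=1 solves the same least-squares problem
k_pf, b_pf = np.polyfit(x, y, 1)

assert np.allclose([k_ls, b_ls], [k_pf, b_pf])
```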
fig, ax = plt.subplots(figsize=(9, 6))
plt.title(u'MSE minimization')
plt.xlabel('x')
plt.ylabel('y')
ax.plot(x, y)
ax.plot(x, linear_func(x, k, b))
ax.legend([u'Sample', 'MSE'], bbox_to_anchor = (0, 1))
plt.show()
# Add outliers $y = -1 + \epsilon$
x = np.hstack((x, np.random.random(size=75)*500 - 250))
y = np.hstack((y, np.random.normal(scale=np.sqrt(0.2), size=75) - 1))
x, y = np.transpose(sorted(np.transpose(np.vstack((x, y))), key=lambda x: x[0]))
fig, ax = plt.subplots(figsize=(9, 6))
plt.title(u'Sample')
plt.xlabel('x')
plt.ylabel('y')
ax.plot(x, y)
plt.show()
# +
def MAE(true_values, func_values):
return np.mean(np.abs(true_values - func_values))
k_mse, b_mse = minimize(lambda kb: MSE(y, linear_func(x, kb[0], kb[1])), [0, 0]).x
k_mae, b_mae = minimize(lambda kb: MAE(y, linear_func(x, kb[0], kb[1])), [0, 0]).x
# -
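# The robustness difference has a simple explanation (a supplementary sketch, not in the original): for a constant model, MSE is minimized by the mean, which a single outlier drags far away, while MAE is minimized by the median, which barely moves:

```python
import numpy as np

data = np.array([0.0, 1.0, 2.0, 3.0, 100.0])  # one large outlier

def mse(c): return np.mean((data - c) ** 2)
def mae(c): return np.mean(np.abs(data - c))

# brute-force search for the minimizing constant on a fine grid
grid = np.linspace(-10.0, 110.0, 12001)  # step 0.01
c_mse = grid[np.argmin([mse(c) for c in grid])]
c_mae = grid[np.argmin([mae(c) for c in grid])]

assert np.isclose(c_mse, data.mean(), atol=0.01)      # mean = 21.2, pulled by the outlier
assert np.isclose(c_mae, np.median(data), atol=0.01)  # median = 2.0, unaffected
```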
fig, ax = plt.subplots(figsize=(9, 6))
plt.title(u'Minimization with outliers')
plt.xlabel('x')
plt.ylabel('y')
ax.plot(x, y)
ax.plot(x, linear_func(x, k_mse, b_mse))
ax.plot(x, linear_func(x, k_mae, b_mae))
ax.legend([u'Sample', 'MSE', 'MAE'], bbox_to_anchor = (0, 1))
plt.show()
# ### Conclusion
# As the plots show, MSE is not robust to outliers, unlike MAE, which handles them well.
| hw1/Practise_3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 [GPU]
# language: python
# name: optirun_python3
# ---
# + pycharm={"is_executing": false}
import tensorflow as tf
import keras
import math
# + pycharm={"is_executing": false}
import numpy as np
import matplotlib.pyplot as plt
# + pycharm={"name": "#%%\n", "is_executing": false}
import time
import pickle as pkl
# + pycharm={"is_executing": false}
seed = 4
np.random.seed(seed)
cell_time = np.random.uniform(-2 * np.pi, 2 * np.pi, [1000, 1])
gene01_phase = np.random.uniform(0, 2 * np.pi, [1, 500])
gene01_time = np.random.normal(0, 0.1, [1, 500])
gene01_speed = np.random.uniform(0.5, 1.5, [1, 500])
gene0_phase = np.random.uniform(0, 2 * np.pi, [1, 800])
gene1_time = np.random.normal(0, 0.1, [1, 500])
gene1_speed = np.random.uniform(0.5, 1.5, [1, 500])
gene0 = np.sin(cell_time - gene0_phase)
gene1 = np.tanh(gene1_speed * (cell_time - gene1_time))
gene01 = np.sin(cell_time - gene01_phase) + np.tanh(gene01_speed * (cell_time - gene01_time))
for i in range(10):
plt.scatter(x=cell_time, y=gene0[:, i], s=1)
# + pycharm={"is_executing": false}
gene0.shape
# + pycharm={"is_executing": false}
y = keras.Input(shape=(gene0.shape[1],), name='input')
x = keras.layers.Dense(units=50,
kernel_regularizer=keras.regularizers.l2(0.0001),
kernel_initializer=keras.initializers.lecun_normal(seed=2018)
)(y)
x = keras.layers.Activation(activation='selu')(x)
x = keras.layers.Dense(units=30,
kernel_regularizer=keras.regularizers.l2(0.0001),
kernel_initializer=keras.initializers.lecun_normal(seed=2019)
)(x)
x = keras.layers.Activation(activation='selu')(x)
x = keras.layers.Dense(units=1,
use_bias=False,
kernel_regularizer=keras.regularizers.l2(0.00001),
kernel_initializer=keras.initializers.lecun_normal(seed=2020),
name='neck'
)(x)
x0 = keras.layers.Lambda(lambda x: keras.backend.sin(x), name='phase0')(x)
x1 = keras.layers.Lambda(lambda x: keras.backend.sin(x + math.pi * 2 / 3), name='phase1')(x)
x2 = keras.layers.Lambda(lambda x: keras.backend.sin(x + math.pi * 4 / 3), name='phase2')(x)
x = keras.layers.Concatenate(name='stack')([x0, x1, x2])
x = keras.layers.Dense(gene0.shape[1],
use_bias=False,
kernel_regularizer=None,
kernel_initializer=keras.initializers.Zeros(),
name='reconstructed'
)(x)
model = keras.Model(outputs=x, inputs=y)
# + pycharm={"is_executing": false}
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model, show_shapes=True).create(prog='dot', format='svg'))
# + pycharm={"is_executing": false}
model.compile(loss='mean_squared_error',
optimizer=keras.optimizers.Adam(1e-3))
class MyCallback(keras.callbacks.Callback):
def __init__(self, interval = 100):
self.cnt = 0
self.interval = interval
self.start_time = 0
self.rec = {'time': [], 'loss': []}
def on_train_begin(self, logs=None):
self.start_time = time.time()
def on_epoch_end(self, batch, logs={}):
self.cnt += 1
self.rec['time'].append(time.time() - self.start_time)
self.rec['loss'].append(logs.get('loss'))
if self.cnt % self.interval == 0:
print(f'epoch: {self.cnt}/{self.params["epochs"]}, loss: {logs.get("loss") : .4f}, total train time: {self.rec["time"][-1] : .2f}s')
my_callback = MyCallback()
history = model.fit(gene0, gene0, epochs=3000, verbose=0, callbacks=[my_callback])
# + pycharm={"name": "#%%\n", "is_executing": false}
#model.compile(loss='mean_squared_error',
# optimizer=keras.optimizers.Adam(1e-4, epsilon=1e-4))
#history = model.fit(gene0, gene0, epochs=5000, verbose=2)
# + pycharm={"is_executing": false}
model.evaluate(x=gene0, y=gene0)
# + pycharm={"is_executing": false}
model.get_layer('neck')
# + pycharm={"is_executing": false}
res = keras.backend.function([model.layers[0].input],
[model.get_layer('neck').output, model.get_layer('reconstructed').output]
)([gene0])
# + pycharm={"is_executing": false}
for i in range(10):
plt.scatter(x=res[0] % (2 * np.pi), y=gene0[:, i], s=1)
# + pycharm={"is_executing": false}
plt.scatter(res[0] % (2 * np.pi), cell_time % (2 * np.pi), s = 1)
# + pycharm={"is_executing": false, "name": "#%%\n"}
with open('comp-3-selu-lecun-norm-seed-%d.pkl' % seed, 'wb') as file:
pkl.dump(my_callback.rec, file)
| tests/simulation/activation-functions/comp-3-selu-lecun-norm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # UCL AI Society Machine Learning Tutorials
# ### Session 01. Introduction to Numpy, Pandas, Matplotlib
# ### Contents
# 1. Numpy
# 2. Pandas
# 3. Matplotlib
# 4. EDA (Exploratory Data Analysis)
#
# ### Aim
# At the end of this session, you will be able to:
# - Understand the basics of numpy.
# - Understand the basics of pandas.
# - Understand the basics of matplotlib.
# - Perform an Exploratory Data Analysis (EDA).
#
# ## 2. Pandas
# Pandas is another essential open-source library in Python, and today it is widely used by data scientists and ML engineers. It was created by <NAME> and is built on top of NumPy. The name 'Pandas' originates from the term "Panel Data", an econometrics term for datasets that include observations over multiple time periods for the same subjects.
# ### 2.1 Basics of Pandas
# run this shell if you haven't installed pandas library
# ! pip install pandas
import pandas as pd
import numpy as np
print(pd.__version__)
# The main data structures of pandas are **Series** and **DataFrame**, where data are stored and manipulated. A `Series` can simply be understood as a column and a `DataFrame` as a table that has many Series.
a = pd.Series([1, 2, 3, np.nan, 5, 6])
a
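# Note the `NaN` in the output above: pandas uses it to mark missing data, and most reductions skip it by default (a quick sketch):

```python
import numpy as np
import pandas as pd

s = pd.Series([1, 2, 3, np.nan, 5, 6])

assert s.count() == 5          # counts only non-missing values
assert s.sum() == 17.0         # NaN is skipped, not propagated
assert s.isna().sum() == 1     # one missing entry
```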
# **Let's see how they are different!**
# +
# Creating a Series using pandas.Series()
# This is just one of many ways to initialise a pandas series
module_score_dic = {'Database': 90, 'Security': 70, 'Math': 100, 'Machine Learning': 80}
module_score = pd.Series(module_score_dic)
print("Module_score: \n", module_score, '\n')
print("type: ", type(module_score), '\n')
# Creating a DataFrame using pandas DataFrame()
dataframe = pd.DataFrame(module_score, columns=['score'])
# dataframe = pd.DataFrame(module_score, index=[x for x in module_score.keys()], columns=['score'])
print("dataframe: \n", dataframe, '\n')
print("type: ", type(dataframe))
# -
# A Series can also be thought of as a DataFrame with only one column.
# **Now let's make a Dataframe that has multiple attributes**
solar_data = {
'Name' : ["Mercury", "Venus", "Earth", "Mars", "Jupiter", "Saturn", "Uranus", "Neptune"],
'Satellite' : [0, 0, 1, 2, 79, 60, 27, 14],
'AU' : [0.4, 0.7, 1, 1.5, 5.2, 9.5, 19.2, 30.1],
'Diameter (in 1Kkm)' : [4.9, 12.1, 12.7, 6.8, 139.8, 116.5, 50.7, 49.2]
}
solar_system = pd.DataFrame(solar_data, index = [i for i in range(1, 9)])
solar_system
solar_system.dtypes # checks data type
# We can select what to read from a DataFrame by using the methods and attributes that pd.DataFrame provides.
#
# - `head(n)` : Returns the first n rows (default 5)
# - `tail(n)` : Returns the last n rows
# - `index` : Returns the row index
# - `columns` : Returns the column labels
# - `loc` : Selects rows by label
# - `values` : Returns only the values, without the index and column names
# - `describe()` : Outputs a statistical summary of the DataFrame
# - `sort_values(by, axis = 0, ascending = True, inplace = False)` : Sorts the DataFrame
# - `drop()` : Drops the selected rows or columns
df = solar_system
df.head() # the default value in the brackets is 5
df.tail(2)
df.index
df.index[0]
df.columns
df.loc[1]
df.iloc[0]
# +
# TODO: run this cell and observe it gives you an error. Do you see why?
# Try playing around with different numbers and find out why before you google it!
# Google search keyword: df.loc vs df.iloc
df.loc[0]
# -
df.iloc[0]
df.values
df.describe()
df.sort_values(by = 'Diameter (in 1Kkm)', ascending = False)
# TO DO: re-sort the DataFrame by the number of satellites in descending order.
df.sort_values(by = 'Satellite', ascending = False)
# Before 2006, Pluto was classified as a planet of the solar system. Let's bring it back to our solar system, by adding Pluto to our DataFrame.
df.loc[9] = ["Pluto", 5, 39.5, 2.38]
df
# Let's reclassify Pluto as a dwarf planet again.
# To Do: Drop Pluto. You can do that with df.drop(index=idx), where idx is Pluto's index.
# Note that drop() returns a new DataFrame; reassign the result (or pass inplace=True) to actually remove the row.
df = df.drop(index=9)
# ### 2.2 Read Data with Pandas
# So far, we have played with a small example dataset. Pandas supports loading, reading, and writing data from/to various file formats, including CSV, JSON and SQL, by converting the data to a DataFrame.
# 1. `pd.read_csv()` : Read CSV files
# 2. `pd.read_json()` : Read JSON files
# 3. `pd.read_sql_query()` : Read SQL files
#
# Let's experiment with these:
# +
# TODO: Try each option, and see what the difference is.
# Option 1
movie = pd.read_csv("./data/IMDB-Movie-Data.csv", index_col = "Title")
#Option 2
# movie = pd.read_csv("./data/IMDB-Movie-Data.csv")
print(type(movie))
movie
# -
# To Do: Try the other two options
movie.iloc[2]
# movie.loc["Split"]
# movie.loc[2] --> Is this going to work? if not, why not?
# To Do: Sort the table by Ratings, in descending order
# Do you agree with the rankings? :)
movie.sort_values(by = "Rating", ascending = False)
# To Do: Sort the table by 'Revenue(Millions)', in ascending order and print the first 3 rows out
movie.sort_values(by = "Revenue (Millions)", ascending = True).head(3)
# The value_counts() function is used to get a Series containing counts of unique values
movie['Genre'].value_counts().head()
# This is called a "Masking Operation"
# filter out movies that have runtime under 170 minutes and sort the result by rating in descending order.
movie[movie['Runtime (Minutes)'] >= 170].sort_values(by="Rating", ascending=False)
# To Do: By using a masking operation, Extract the movies whose 'Metascore' is bigger than 95,
# and sort the result from the most recent to the least recent ones
movie[movie['Metascore'] >= 95].sort_values(by="Year", ascending=False)
# To Do: Extract movies who are directed by one of UCL's alumni ---> Hint: Tenet, Inception
movie[(movie['Director'] == '<NAME>')]
# #### 2.2.1 Pandas Exercise
#
# To Do: Extract the movie list that meets these requirements:
# - 1. Released after 2010 (key = 'Year') (including the year 2010)
# - 2. Runtime is shorter than 150 minutes (key = 'Runtime (Minutes)')
# - 3. Rating is above 8.0 (key = 'Rating')
#
# Print out only the first 3 movies from the result.
movie[
(movie['Year'] >= 2010) &
(movie['Runtime (Minutes)'] <= 150) &
(movie['Rating'] >= 8.0)
][:3]
# ### 2.3 How to deal with Missing Data
# To represent missing data, Pandas uses `np.nan` (this is the `np` from the NumPy tutorial!). Data scientists and machine learning engineers sometimes just remove missing data. However, the right approach heavily depends on which data are missing, how much is missing and so on. You can fill the missing part with 0, with the mean value of the column, with the mean value of only the 10 closest values in the column, or anything else that might seem appropriate. It is important for you to choose how you are going to deal with missing data. Here are some methods to help you with that:
# - `isnull()`: returns True or False, depending on the cell's null status.
# - `sum()`: This can be used as a trick when you count the number of Trues. Once the Dataframe is filtered through the `isnull()` function, the sum of all Trues in a column gives you how many fields have missing data in them.
# - `dropna()`: deletes any row that contains at least one null value.
# - `fillna(value)`: Fills missing values with the given values.
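# The text above mentions filling missing values with the column mean; none of the cells below demonstrate that option, so here is a minimal sketch on a toy DataFrame (illustrative data, not the movie table):

```python
import pandas as pd
import numpy as np

# Toy column with one missing value
toy = pd.DataFrame({'Rating': [8.0, np.nan, 6.0]})

# Replace the NaN with the mean of the non-missing values in the same column
toy['Rating'] = toy['Rating'].fillna(toy['Rating'].mean())
```

# `df.fillna(df.mean())` applies the same idea to every numeric column at once.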
movie.isnull()
movie.isnull().sum()
# We create a copy of the movie object, as we want to keep the original how we initially had it.
copy_of_movie = movie.copy()
# Take a look at "Take Me Home Tonight" and "Search Party"
copy_of_movie.fillna(value = 0)
# Important!!! This changes the copy_of_movie object itself!
copy_of_movie.dropna(inplace = True)
movie.shape
copy_of_movie.shape
# After dropping the rows that contain missing data, the shape of the data frame has changed, from (1000, 11) to (838, 11).
# ### 2.4 Merging Data
# Those of you who know SQL might have felt that Pandas is quite similar to a query language.
# What are the most common things that you do in most of the relational database query languages?
# Yes! (terminology alert!) An inner JOIN, an outer JOIN, a left JOIN, a right JOIN, a full JOIN, etc.
# - `concat()` : Concatenation. Used to merge two or more Pandas objects.
# - `merge()` : Behaves very similarly to SQL joins.
# We'll create two random dataframes, named `df1` and `df2`.
df1 = pd.DataFrame(np.random.randn(10, 2))
df1
df2 = pd.DataFrame(np.random.randn(10, 3))
df2
pd.concat([df1, df2])
pd.concat([df1, df2], axis = 1) # axis setting is very common in Pandas
demis = pd.DataFrame(
{'Modules': ['Bioinformatics', 'Robotic Systems', 'Security', 'Compilers'], 'Demis' : [75, 97, 64, 81]}
)
demis
sedol = pd.DataFrame(
{'Modules': ['Bioinformatics', 'Robotic Systems', 'Security', 'Compilers'], 'Sedol' : [63, 78, 84, 95]})
sedol
pd.merge(demis, sedol, on = 'Modules')
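# The merge above is an inner join by default; the other SQL-style joins mentioned earlier are selected with the `how` parameter. A small sketch with hypothetical module names, one of which appears on only one side:

```python
import pandas as pd

left = pd.DataFrame({'Modules': ['Security', 'Compilers'], 'Demis': [64, 81]})
right = pd.DataFrame({'Modules': ['Security', 'Networks'], 'Sedol': [84, 70]})

# Inner join keeps only keys present in both frames
inner = pd.merge(left, right, on='Modules', how='inner')

# Outer join keeps all keys, filling NaN where a score is missing
outer = pd.merge(left, right, on='Modules', how='outer')
```

# `how='left'` and `how='right'` keep every row of the respective side instead.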
# ### 2.5 Exercise (Optional)
# Create the totally realistic popcorn dataset. The fact that the number of popcorn sold turns out
# to be the length of the name of the director times 100 000 is a total mystery to us too.
movie_popcorn = movie[["Director", "Description"]].copy()
movie_popcorn['Popcorn Sold'] = movie["Director"].str.len() * 1e5
movie_popcorn
# Above we created a fake dataset that tells us how much popcorn in total was sold for each movie. We would like you to find out what the average number of popcorn sold at movies that came out after 2014 is. **Hint**: you will need to join this new table with the existing movies table on the column `Description`. What happens to the size of the data frame if you try to join on the column `Director`? Can you guess why that is?
# To Do: Your code here
merged = pd.merge(movie, movie_popcorn, on = 'Description')
merged[merged['Year'] >= 2014]['Popcorn Sold'].mean()
# ### What to do next?
# The websites below are helpful for your further study of Pandas:
# - [Pandas official website](https://pandas.pydata.org)
# - [10 minutes to Pandas](https://pandas.pydata.org/pandas-docs/stable/getting_started/10min.html)
# - [Data Wrangling with Pandas Cheat Sheet](https://pandas.pydata.org/Pandas_Cheat_Sheet.pdf)
| notebooks/Session01-Pandas-Solution.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# %matplotlib inline
import math
from scipy import stats
from scipy import misc
from itertools import permutations
# HIDDEN
# The alphabet
alph = make_array('a', 'd', 't')
# HIDDEN
# Decode atdt using all possible decoders
x1 = [['a', 't', 'd', 't'], ['a','d','t','d'], ['d','t','a','t']]
x2 = [['d','a','t','a'], ['t','d','a','d'], ['t','a','d','a']]
decoded = x1+x2
# HIDDEN
decoding = Table().with_columns(
'Decoder', list(permutations(alph)),
'atdt Decoded', decoded
)
# +
# HIDDEN
# Make bigram transition matrix
# Data from <NAME>'s bigram table
aa = 1913489177
dd = 6513992572
tt = 19222971337
ad = 23202347740
da = 23279747379
at = 80609883139
ta = 42344542093
dt = 10976756096
td = 3231292348
row1 = make_array(aa, ad, at)/sum([aa, ad, at])
row2 = make_array(da, dd, dt)/sum([da, dd, dt])
row3 = make_array(ta, td, tt)/sum([ta, td, tt])
rows = np.append(np.append(row1, row2), row3)
# -
# HIDDEN
bigrams = MarkovChain.from_table(Table().states(alph).transition_probability(rows))
# ## Code Breaking ##
# While it is interesting that many Markov Chains are reversible, the examples that we have seen so far haven't explained what we get by reversing a chain. After all, if it looks the same running forwards as it does backwards, why not just run it forwards? Why bother with reversibility?
#
# It turns out that reversing Markov Chains can help solve a class of problems that are intractable by other methods. In this section we present an example of how such problems arise. In the next section we discuss a solution.
# ### Assumptions ###
# People have long been fascinated by encryption and decryption, well before cybersecurity became part of our lives. Decoding encrypted information can be complex and computation intensive. Reversed Markov Chains can help us in this task.
#
# To get a sense of one approach to solving such problems, and of the extent of the task, let's try to decode a short piece of text that has been encoded using a simple code called a *substitution code*. Text is written in an *alphabet*, which you can think of as a set of letters and punctuation. In a substitution code, each letter of the alphabet is simply replaced by another in such a way that the code is just a permutation of the alphabet.
#
# To decode a message encrypted by a substitution code, you have to *invert* the permutation that was used. In other words, you have to apply a permutation to the *coded* message in order to recover the original text. We will call this permutation the *decoder*.
#
# To decode a textual message, we have to make some assumptions. For example, it helps to know the language in which the message was written, and what combinations of letters are common in that language. For example, suppose we try to decode a message that was written in English and then encrypted. If our decoding process ends up with "words" like zzxtf and tbgdgaa, we might want to try a different way.
#
# So we need data about which sequences of letters are common. Such data are now increasingly easy to gather; see for example this [web page](http://norvig.com/ngrams/) by [<NAME>](http://norvig.com), a Director of Research at Google.
# ### Decoding a Message ###
# Let's see how we can use such an approach to decode a message. For simplicity, suppose our alphabet consists of only three letters: a, d, and t. Now suppose we get the coded message atdt. We believe it's an English word. How can we go about decoding it in a manner that can be replicated by a computer for other words too?
#
# As a first step, we will write down all 3! = 6 possible permutations of the letters in the alphabet and use each one to decode the message. The table `decoding` contains all the results. Each entry in the `Decoder` column is a permutation that we will apply to our coded text atdt. The permutation determines which letters we will use as substitutes in our decoding process.
#
# To see how to do this, start by keeping the alphabet in "alphabetical" order in your head: 'a', 'd', 't'. Now look at the rows of the table.
#
# - The decoder in the first row is ['a', 'd', 't']. This decoder simply leaves the letters unchanged; atdt gets decoded as atdt.
# $$
# \text{Decoder ['a', 'd', 't']: } ~~~ a \to a, ~~~ d \to d, ~~~ t \to t
# $$
#
# - The decoder in the second row is ['a', 't', 'd']. This keeps the first letter of the alphabet 'a' unchanged, but replaces the second letter 'd' by 't' and the third letter 't' by 'd'.
# $$
# \text{Decoder ['a', 't', 'd']: } ~~~ a \to a, ~~~ d \to t, ~~~ t \to d
# $$
# So atdt gets decoded as adtd.
#
# You can read the rest of the table in the same way.
#
# Notice that in each decoded message, a letter appears twice, at indices 1 and 3. That's the letter being used to decode t in atdt. A feature of substitution codes is that each original letter is replaced by the same code letter every time it appears in the text. So the decoder must have the same feature.
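# The decoded messages in the table can be reproduced with a few lines of plain Python; this sketch is independent of the `datascience`/`prob140` helpers used in this chapter:

```python
from itertools import permutations

alphabet = ('a', 'd', 't')
coded = 'atdt'

# Each decoder is a permutation of the alphabet: the i-th alphabet letter
# is replaced by the i-th letter of the permutation.
decoded_words = {}
for decoder in permutations(alphabet):
    mapping = dict(zip(alphabet, decoder))
    decoded_words[decoder] = ''.join(mapping[ch] for ch in coded)
```

# The identity decoder leaves atdt unchanged, while the decoder ('d', 't', 'a') produces data.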
decoding
# Which one of these decoders should we use? To make this decision, we have to know something about the frequency of letter transitions in English. Our goal will be to pick the decoder according to the frequency of the decoded word.
#
# We have put together some data on the frequency of the different *bigrams*, or two-letter combinations, in English. Here is a transition matrix called `bigrams` that is a gross simplification of available information about bigrams in English; we used Peter Norvig's bigrams table and restricted it to our three-letter alphabet. The row corresponding to the letter 'a' assumes that about 2% of the bigrams that start with 'a' are 'aa', about 22% are 'ad', and the remaining 76% are 'at'.
#
# It makes sense that the 'aa' transitions are rare; we don't use words like aardvark very often. Even 2% seems large until you remember that it is the proportion of 'aa' transitions only among transitions 'aa', 'ad', and 'at', because we have restricted the alphabet. If you look at its proportion among all $26\times26$ bigrams, that will be much lower.
bigrams
# Now think of the true text as a path of a Markov Chain that has this transition matrix. An interesting historical note is that this is what Markov did when he first came up with the process that now bears his name – he analyzed the transitions between vowels and consonants in *Eugene Onegin*, <NAME>'s novel written in verse.
#
# If the true text is tada, then we can think of the sequence tada as the path of a Markov chain. Its probability can be calculated as $P(t)P(t, a)P(a, d)P(d, a)$. We will give each decoder a score based on this probability. Higher scores correspond to better decoders.
#
# To assign the score, we assume that all three letters are equally likely to start the path. For three common letters in the alphabet, this won't be far from the truth. That means the probability of each path will start with a factor of 1/3, which we can ignore because all we are trying to do is rank all the probabilities. We will just calculate $P(t, a)P(a, d)P(d, a)$ which is about 8%.
#
# According to our `decoding` table above, tada is the result we get by applying the decoder ['t', 'd', 'a'] to our data atdt. For now, we will say that *the score of this decoder, given the data*, is 8%. Later we will introduce more formal calculations and terminology.
# score of decoder ['t', 'd', 'a']
0.653477 * 0.219458 * 0.570995
# To automate such calculations we can use the `prob_of_path` method. Remember that its first argument is the initial state, and the second argument is a list or array consisting of the remaining states in sequence.
bigrams.prob_of_path('t', ['a', 'd', 'a'])
# Should we decide that our message atdt should be decoded as tada? Perhaps, if we think that 8% is a high likelihood. But what if some other possible decoder has a higher likelihood? In that case it would be natural to prefer that one.
#
# So we are going to need the probabilities of each of the six "decoded" paths.
#
# Let's define a function `score` that will take a list or array of characters and return the probability of the corresponding path using the `bigrams` transition matrix. In our example, this is the same as returning the score of the corresponding decoder.
def score(x):
return bigrams.prob_of_path(x[0], x[1:])
# Here are the results in decreasing order of score. There is a clear winner: the decoder ['d', 't', 'a'] corresponding to the message 'data' has more than twice the score of any other decoder.
decoding = decoding.with_column('Score of Decoder', decoding.apply(score, 1))
decoding.sort('Score of Decoder', descending=True)
# ### The Size of the Problem ###
# What we have been able to do with an alphabet of three characters becomes daunting when the alphabet is larger. The 26 lower case and 26 upper case letters, along with a space character and the punctuation marks, form an alphabet of around 70 characters. That gives us 70! different decoders to consider. In theory, we have to find the likelihood of each of these 70! candidates and sort them.
#
# Here is the number 70!. That's a lot of decoders. Our computing system can't handle that many, and other systems will have the same problem.
math.factorial(70)
# One potential solution is to sample at random from these 70! possible decoders and just pick from among the sampled permutations. But how should we draw from 70! items? It's not a good idea to choose uniform random permutations of the alphabet, as those are unlikely to get us quickly to the desired solution.
#
# What we would really like our sampling procedure to do is to choose good decoders with high probability. A good decoder is one that generates text that has higher probability than text produced by almost all other decoders. In other words, a good decoder has higher likelihood than other decoders, given the data.
#
# You can write down this likelihood using Bayes' Rule. Let $S$ represent the space of all possible permutations; if the alphabet has $N$ characters, then $S$ has $N!$ elements. For any randomly picked permutation $j$, the likelihood of that decoder given the data is:
# $$
# \begin{align*}
# \text{Likelihood of } j \text{ given the encoded text}
# &= \frac{\frac{1}{N!} P(\text{encoded text} \mid \text{decoder = }j)}
# { {\sum_{i \in S} } \frac{1}{N!} P(\text{encoded text} \mid \text{decoder = }i)} \\ \\
# &=\frac{P(\text{encoded text} \mid \text{decoder = }j)}
# { {\sum_{i \in S} } P(\text{encoded text} \mid \text{decoder = }i)}
# \end{align*}
# $$
#
# For the given encoded text, the denominator is the normalizing constant that makes all the likelihoods sum to 1. It appears in the likelihood of every decoder. In our example with the three-letter alphabet, we ignored it because we could figure out the numerators for all six decoders and just compare them. The numerator was what we called the *score* of the decoder.
#
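# For our three-letter alphabet the denominator is easy to compute: just divide each decoder's score by the sum of all six scores. A sketch with illustrative score values (not the exact figures from the table above):

```python
def normalize(scores):
    """Turn unnormalized decoder scores into likelihoods that sum to 1."""
    total = sum(scores)
    return [s / total for s in scores]

# Six made-up scores; the first decoder dominates
scores = [0.08, 0.03, 0.02, 0.01, 0.01, 0.01]
likelihoods = normalize(scores)
```

# With a large alphabet this normalization is exactly the step we cannot carry out, because summing over all N! decoders is infeasible.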
# Even when the alphabet is large, for any particular decoder $j$ we can find the numerator by multiplying transition probabilities sequentially, as we did in our example. But with a large alphabet we can't do this for all possible decoders, so we can't list all possible scores and we can't add them all up. Therefore we don't know the denominator of the likelihoods, not even up to a decent approximation.
#
# What we need now is a method that helps us draw from a probability distribution even when we don't know the normalizing constant. That is what Markov Chain Monte Carlo helps us to do.
| content/Chapter_11/03_Code_Breaking.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
#0) What: Test steps 1,2,3 in local Postgres instance
#Who: Bryan/Jesse
#1) What: Create a second table(s) within Postgres. I suggest just adding "_cleaned" to the name and use the same prefix.
#The schema will remain the same. William - any objections here? We don't have a dev/staging Postgres instance, do we?
#Who: Jesse
#2a) What: Take Bryan's new Python and test each 'rule,' adding logic for writing to Postgres/the new 'cleaned' table.
#Who: Bryan/Jesse
#2b) What: Add logic for alerts (SendGrid); fire alert when issue found, correction made.
#Who: Jesse
#3) What: Verify that results look good in *_cleaned table(s).
#Who: Bryan/Jesse/William(?)/anyone else who wants to help test
#4) What: Set up cron job to run, initially, every 4-6 hours. Review results and make sure that:
#Data is not thrown out, unnecessarily altered/integrity is retained
#Any data corrections are done properly
#Review / tweak anything else (see Bryan's notes in Asana: "Data Cleaning Notes" under "Data Science Brainstorming")
#Who: Bryan/Jesse
#5) What: After 5-7 days of testing, 'deploy.' Cron job does not need to run as often; decide on interval.
#Who: Jesse
#6) What: Point all relevant queries to *_cleaned table(s)
#Who: Bryan/Jesse/Emily
#Will it tell the difference between an account that is consistently down and one that is inconsistently down?
#After missing a reading, does a sensor ever get back to 100% (visualize this)
#csv of sensor ids, date, and percentage (goes to a website?)
#can it pull the person's first and last names
#first name last name (sensor id), dates, percentages below
#Fix tenant names
#Code to visualize stuff - for time periods for month and years
#Emily source code visualization plot %
#Talk to william or jesse about automating and letting noelle access this
#clean data function (remove bad data)
#stats and viz stuff - 311 data, hpd data (blog posts)
#is hpd monitoring itself at resolving heat complaints?
#hpd conversion rate low, who and why? (conversion rate = complaint to investigation) (summer blog posts)
#how many heat complaints to date? (visualization)
#A tale of two cities - adjust for pop (complaints per capita)
# -
import datetime
import pandas as pd
import numpy as np
import psycopg2
import csv
import time
from datetime import date
# +
try:
connection = psycopg2.connect(database ='heatseek', user = 'heatseekroot', password = '<PASSWORD>')
cursor = connection.cursor() #Open a cursor to perform operations
cursor.execute('SELECT * from users') #Executes the query
users = cursor.fetchall() #cursor.fetchone() for one line, fetchmany() for multiple lines, fetchall() for all lines
users = pd.DataFrame(users) #Saves 'users' as a pandas dataframe
users_header = [desc[0] for desc in cursor.description] #This gets the descriptions from cursor.description
#(names are in the 0th index)
users.columns = users_header #PD array's column names
cursor.execute('SELECT * FROM readings;')
readings = cursor.fetchall()
readings = pd.DataFrame(readings)
readings_header = [desc[0] for desc in cursor.description]
readings.columns = readings_header
cursor.execute('SELECT * FROM sensors;')
sensors = cursor.fetchall()
sensors = pd.DataFrame(sensors)
sensors_header = [desc[0] for desc in cursor.description]
sensors.columns = sensors_header
cursor.close()
connection.close()
except psycopg2.DatabaseError, error:
print 'Error %s' % error
# -
#This creates an array 'sensors_with_users' that consists of sensors that are currently assigned to users.
sensors_with_users_raw = np.intersect1d(users.id.unique(), sensors.user_id.unique()) #Returns the common ids in both the datasets.
#sensors.loc[sensors.user_id, sensors_with_users]
sensors_with_users = []
for ids in sensors_with_users_raw:
sensors_with_users.append(int(ids))
# +
#This function returns clean readings. #It doesn't exist yet
#This function will return if a sensor is polling faster than once per hour (i.e., test cases)
def dirty_data(dirty_readings, start_date = None, end_date = None):
if start_date is None or end_date is None:
start_date = pd.Timestamp('2000-01-01')
end_date = pd.Timestamp(datetime.datetime.now())
else:
start_date = pd.Timestamp(start_date)
end_date = pd.Timestamp(end_date)
mask = (dirty_readings['created_at'] > start_date) & (dirty_readings['created_at'] <= end_date)
dirty_readings = dirty_readings.loc[mask]
hot_ids = dirty_readings.loc[dirty_readings.temp > 90].sensor_id.unique() #Returns sensor IDs where indoor temp is > 90
cold_ids = dirty_readings.loc[dirty_readings.temp < 40].sensor_id.unique() #Returns sensor IDs where indoor temp is < 40
inside_colder_ids = dirty_readings.loc[dirty_readings.temp < dirty_readings.outdoor_temp].sensor_id.unique() #Returns sensor IDs where indoor temp is < outdoor temp
#Array of all the IDs above
all_ids = np.unique(np.concatenate((hot_ids, cold_ids, inside_colder_ids)))
all_ids = all_ids[~np.isnan(all_ids)]
#Create an empty dataframe with the IDs as indices
report = pd.DataFrame(index=all_ids,columns=['UserID','SensorID', 'Outside90', 'Inside40', 'InsideColderOutside'])
#Fill in the specific conditions as '1'
report.Outside90 = report.loc[hot_ids].Outside90.fillna(1)
report.Inside40 = report.loc[cold_ids].Inside40.fillna(1)
report.InsideColderOutside = report.loc[inside_colder_ids].InsideColderOutside.fillna(1)
report = report.fillna(0)
report.SensorID = report.index
#Fill in UserIDs
problem_ids = sensors[sensors.id.isin(all_ids)]
for index in report.index.values:
index = int(index)
try:
report.loc[index, 'UserID'] = sensors.loc[index, 'user_id']
except KeyError:
report.loc[index, 'UserID'] = 'No such user in sensors table.'
return report
def clean_data(dirty_readings):
cleaner_readings = dirty_readings[dirty_readings.sensor_id.notnull()] #Remove cases where there are no sensor IDs
return cleaner_readings
clean_readings = clean_data(readings)
report = dirty_data(readings)
# -
#This function takes (start date, end date, sensor id), returns % of failure
def sensor_down_complete(data, start_date, end_date, sensor_id):
#This pulls up the tenant's first and last name.
try:
tenant_id = int(sensors.loc[sensors.id == sensor_id].user_id.values[0])
tenant_first_name = users.loc[users.id == tenant_id].first_name.values[-1] #This pulls up the first name on the list (not the most recent)
tenant_last_name = users.loc[users.id == tenant_id].last_name.values[-1]
#Are these really not assigned?
except ValueError:
tenant_id = 'None'
tenant_first_name = 'Not'
tenant_last_name = 'Assigned'
except IndexError:
tenant_id = 'None'
tenant_first_name = 'Not'
tenant_last_name = 'Assigned'
start_date = pd.Timestamp(start_date)
end_date = pd.Timestamp(end_date)
sensor_readings = data.loc[data.sensor_id == sensor_id]
#Converting to timestamps
#for i in sensor_readings.index.values: #Iterates through all the index values
#sensor_readings.loc[i, 'created_at'] = pd.Timestamp(sensor_readings.created_at[i])
#Using map instead of for loop (about 15-20x faster)
try:
sensor_readings.loc[:, 'created_at'] = map(pd.Timestamp, sensor_readings.created_at)
except TypeError:
tenant_first_name = 'Mapping Error'
tenant_last_name = 'Only One Entry'
pass
#Using list comprehensions (as efficient as map)
#sensor_readings.loc[:, 'created_at'] = [pd.Timestamp(x) for x in sensor_readings.created_at]
#Using a boolean mask to select readings between the two dates
#(http://stackoverflow.com/questions/29370057/select-dataframe-rows-between-two-dates)
mask = (sensor_readings['created_at'] > start_date) & (sensor_readings['created_at'] <= end_date)
masked_sensor_readings = sensor_readings.loc[mask] #Get all readings between the two dates
masked_sensor_readings = masked_sensor_readings.sort_values('created_at')
#We then calculate how many hours have passed for that specific sensor and date range
try:
sensor_readings_start_date = masked_sensor_readings.loc[masked_sensor_readings.index.values[0], 'created_at']
sensor_readings_end_date = \
masked_sensor_readings.loc[masked_sensor_readings.index.values[len(masked_sensor_readings)-1], 'created_at']
timedelta_in_seconds = sensor_readings_end_date - sensor_readings_start_date #This returns Timedelta object
timedelta_in_seconds = timedelta_in_seconds.total_seconds()
total_number_of_hours = timedelta_in_seconds/3600 + 1 #The +1 fixes the rounding error for now but IDK why yet.
hours_in_date_range = ((end_date-start_date).total_seconds())/3600 + 1
except IndexError:
return [tenant_first_name, tenant_last_name, sensor_id, tenant_id, "No valid readings during this time frame."]
proportion_of_total_uptime = (len(masked_sensor_readings)/hours_in_date_range) * 100 #Proportion of uptime over TOTAL HOURS
proportion_within_sensor_uptime = (len(masked_sensor_readings)/total_number_of_hours) * 100 #Proportion of uptime for the sensor's first and last uploaded dates.
if proportion_within_sensor_uptime <= 100.1:
return [tenant_first_name, tenant_last_name, sensor_id, tenant_id, proportion_of_total_uptime, proportion_within_sensor_uptime]
else:
return [tenant_first_name, tenant_last_name, sensor_id, tenant_id, proportion_of_total_uptime, proportion_within_sensor_uptime, 'Sensor has readings more frequent than once per hour. Check readings table.']
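# The core arithmetic in sensor_down_complete can be sketched in isolation: sensors are expected to report once per hour, so the uptime percentage is the number of readings received divided by the number of hours in the window. Synthetic values, not real Heat Seek data (and Python 3 division, whereas the notebook kernel is Python 2):

```python
from datetime import datetime

# Synthetic window covering 10 expected hourly readings, of which 7 arrived
start = datetime(2016, 1, 1, 0, 0)
end = datetime(2016, 1, 1, 9, 0)
readings_received = 7

# The +1 makes the count inclusive of both endpoint hours, as in the function above
hours_in_range = (end - start).total_seconds() / 3600 + 1
uptime_percentage = readings_received / hours_in_range * 100
```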
#This function takes (start date, end date, sensor id), returns % of failure
def sensor_down(data, start_date, end_date, sensor_id):
#This pulls up the tenant's first and last name.
try:
tenant_id = int(sensors.loc[sensors.id == sensor_id].user_id.values[0])
tenant_first_name = users.loc[users.id == tenant_id].first_name.values[-1] #This pulls up the first name on the list (not the most recent)
tenant_last_name = users.loc[users.id == tenant_id].last_name.values[-1]
#Are these really not assigned?
except ValueError:
tenant_id = 'None'
tenant_first_name = 'Not'
tenant_last_name = 'Assigned'
except IndexError:
tenant_id = 'None'
tenant_first_name = 'Not'
tenant_last_name = 'Assigned'
start_date = pd.Timestamp(start_date)
end_date = pd.Timestamp(end_date)
sensor_readings = data.loc[data.sensor_id == sensor_id]
#Converting to timestamps
#for i in sensor_readings.index.values: #Iterates through all the index values
#sensor_readings.loc[i, 'created_at'] = pd.Timestamp(sensor_readings.created_at[i])
#Using map instead of for loop (about 15-20x faster)
try:
sensor_readings.loc[:, 'created_at'] = map(pd.Timestamp, sensor_readings.created_at)
except TypeError:
tenant_first_name = 'Mapping Error'
tenant_last_name = 'Only One Entry'
pass
#Using list comprehensions (as efficient as map)
#sensor_readings.loc[:, 'created_at'] = [pd.Timestamp(x) for x in sensor_readings.created_at]
#Using a boolean mask to select readings between the two dates
#(http://stackoverflow.com/questions/29370057/select-dataframe-rows-between-two-dates)
mask = (sensor_readings['created_at'] > start_date) & (sensor_readings['created_at'] <= end_date)
masked_sensor_readings = sensor_readings.loc[mask] #Get all readings between the two dates
masked_sensor_readings = masked_sensor_readings.sort_values('created_at')
#We then calculate how many hours have passed for that specific sensor and date range
try:
sensor_readings_start_date = masked_sensor_readings.loc[masked_sensor_readings.index.values[0], 'created_at']
sensor_readings_end_date = \
masked_sensor_readings.loc[masked_sensor_readings.index.values[len(masked_sensor_readings)-1], 'created_at']
timedelta_in_seconds = sensor_readings_end_date - sensor_readings_start_date #This returns Timedelta object
timedelta_in_seconds = timedelta_in_seconds.total_seconds()
total_number_of_hours = timedelta_in_seconds/3600 + 1 #The +1 fixes the rounding error for now but IDK why yet.
hours_in_date_range = ((end_date-start_date).total_seconds())/3600 + 1
proportion_of_total_uptime = (len(masked_sensor_readings)/hours_in_date_range) * 100 #Proportion of uptime over TOTAL HOURS
proportion_within_sensor_uptime = (len(masked_sensor_readings)/total_number_of_hours) * 100 #Proportion of uptime for the sensor's first and last uploaded dates.
if proportion_within_sensor_uptime <= 100.1:
return [sensor_id, 255*(proportion_within_sensor_uptime/100), proportion_of_total_uptime, tenant_first_name, tenant_last_name, tenant_id]
else:
pass
except IndexError:
pass
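# The boolean-mask date filter used in sensor_down can be checked in isolation; a minimal sketch with synthetic timestamps (all values are illustrative):

```python
import pandas as pd

# Hypothetical readings table; the column name mirrors the one used above
readings = pd.DataFrame({
    "created_at": pd.to_datetime(
        ["2016-01-01 00:00", "2016-01-02 00:00", "2016-01-03 00:00"]
    )
})
start_date = pd.Timestamp("2016-01-01 12:00")
end_date = pd.Timestamp("2016-01-03 12:00")

# Strictly after start_date, up to and including end_date,
# matching the (>, <=) bounds used in sensor_down
mask = (readings["created_at"] > start_date) & (readings["created_at"] <= end_date)
selected = readings.loc[mask]
```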
# +
#from the time a sensor has been deployed till now, how many % of the total hours where it is possible to receive a violation are actually in violation.
#minimum of 30 days
#account for sensor downtime
#instead of calculating proportions over 100%, we can calculate them over the % of time the sensor was actually up - so basically, if it wasn't up
#we assume that there was no violation
def violation_percentages(data, start_date, end_date, sensor_id):
sensor_readings = data.loc[data.sensor_id == sensor_id] #All readings for a sensorID
try:
sensor_readings.loc[:,'created_at'] = map(pd.Timestamp, sensor_readings.created_at) #convert all to timestamps
except TypeError:
pass
#Filter out sensors that are < 30 days old
try:
sensor_readings_start_date = sensor_readings.loc[sensor_readings.index.values[0], 'created_at'].date()
today = date.today()
datediff = today - sensor_readings_start_date
except IndexError:
return "No readings in date range."
if datediff.days < 30: #If a sensor has been up for < 30 days, don't do anything
pass
else:
start_date = pd.Timestamp(start_date) #Convert dates to pd.Timestamp
end_date = pd.Timestamp(end_date)
mask = (sensor_readings['created_at'] > start_date) & (sensor_readings['created_at'] <= end_date) #mask for date range
masked_sensor_readings = sensor_readings.loc[mask]
try:
#First, find all possible violation-hours
##We need to index as datetimeindex in order to use the .between_time method
sensor_readings.set_index(pd.DatetimeIndex(sensor_readings['created_at']), inplace = True)
##These return the day and night readings
day_readings = sensor_readings.between_time(start_time='06:00', end_time='22:00')
night_readings = sensor_readings.between_time(start_time='22:00', end_time='06:00')
##Now, we count how many rows are violations and divide by total possible violation hours
#For day, if outdoor_temp < 55
day_total_violable_hours = len(day_readings.loc[day_readings['outdoor_temp'] < 55])
day_actual_violation_hours = len(day_readings.loc[day_readings['violation'] == True])
#For night, if outdoor_temp < 40
night_total_violable_hours = len(night_readings.loc[night_readings['outdoor_temp'] < 40])
night_actual_violation_hours = len(night_readings.loc[night_readings['violation'] == True])
#Calculate percentage
try:
violation_percentage = float(day_actual_violation_hours + night_actual_violation_hours)/float(day_total_violable_hours + night_total_violable_hours)
except ZeroDivisionError:
return "No violations in this range."
return violation_percentage #violationpercentage
except IndexError:
pass
unique_sensors = readings['sensor_id'].unique()
for ids in unique_sensors:
print "Sensor ID: {0}, Violation Percentage: {1}".format(ids, violation_percentages(readings, '2016-01-01', '2016-02-07', ids))
# -
#This function creates a simulated dataset of readings.
def simulate_data(start_date, end_date, polling_rate, sensor_id): #polling_rate in minutes
start_date = pd.Timestamp(start_date)
end_date = pd.Timestamp(end_date)
#how many hours between the two dates:
timedelta_in_seconds = end_date-start_date
total_number_of_hours = timedelta_in_seconds.total_seconds()/(polling_rate*60)
#Create an empty pandas dataframe
index = xrange(1,int(total_number_of_hours)+1)
columns = ['created_at', 'sensor_id']
simulated_readings = pd.DataFrame(index = index, columns = columns)
simulated_readings.loc[:,'sensor_id'] = sensor_id
#Populate it with columns of 'created_at' dates
time_counter = start_date
for i in simulated_readings.index.values:
simulated_readings.loc[i,'created_at'] = time_counter
time_counter = time_counter + pd.Timedelta('00:%s:00' % polling_rate)
return simulated_readings
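# The timestamp loop in simulate_data can be replaced by pandas' date_range, which builds the whole series in one call; a sketch (the parameter values are illustrative):

```python
import pandas as pd

def simulate_data_fast(start_date, end_date, polling_rate, sensor_id):
    # One timestamp every `polling_rate` minutes between the two dates (endpoints inclusive)
    stamps = pd.date_range(start=start_date, end=end_date,
                           freq="%dmin" % polling_rate)
    return pd.DataFrame({"created_at": stamps, "sensor_id": sensor_id})

# 24 hours at a 60-minute polling rate gives 25 readings (both endpoints included)
simulated = simulate_data_fast("2016-01-01", "2016-01-02", 60, "sensor_42")
```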
#This function generates a report; we might want to make this a cron job.
def generate_report(start_date, end_date):
report = []
sensor_ids = readings.sensor_id.unique()
start_date = pd.Timestamp(start_date)
end_date = pd.Timestamp(end_date)
for ids in sensor_ids:
temp = sensor_down(readings, start_date, end_date, ids)
if temp is not None:
report.append(temp)
else:
pass
return report
# +
tic = time.clock()
report = generate_report('2016-02-01','2016-02-07')
header =['sensorID', 'status', 'Percentage of uptime in daterange', 'FirstName', 'LastName' , 'userID']
toc = time.clock()
toc - tic
# +
tic = time.clock()
report = dirty_data(readings)
header = ['UserID', 'SensorID', 'Outside90', 'Inside40', 'InsideColderOutside']
toc = time.clock()
toc - tic
# -
csvoutput = open('sensors.csv', 'wb')
writer = csv.writer(csvoutput)
writer.writerow(header)
for i in report:
writer.writerow(i)
csvoutput.close()
report.to_csv('dirtydata.csv', index = False, na_rep="Not Currently Assigned")
# +
#What criteria do we want to use.
#SENSOR NUMBERS AS IDENTIFIERS, SOME DEGREE OF SEVERITY OF PROBLEM (CATEGORICAL)
#for the criteria we go, how do we define "bad"
#total days
#proportion of days (or clusters?)
#temperature discrepancy
#multiple apartments in the same building (how many % of the apartments in a building and how bad)
#multiple buildings by the same landlord (how many % of the buildings a landlord owns are bad)
#are our sensors failing in a specific building?
#Getting rid of test cases:
#1. Can we just delete by test IDs?
#2. If testing was separated by a minute, we can find all test cases by looping through all users,
# and if they have a bunch of data that was collected within minutes, delete the user?
#We first convert the string dates into datetime format
sensordata['formatteddate'] = sensordata.created_at.apply(lambda x: pd.to_datetime(x, format = "%Y-%m-%d %H:%M"))
#Then, one way of telling if a user_id was an actual user or a test case was to calculate the average timedelta for each user_id.
#Timedeltas of 1min are tests, 1 hour are users (don't know if this is always true, but if no user has an average polling rate
#of 1 min, we can use a bunch of methods to filter away test cases step by step).
sensordata['averagetimedelta'] = 0.00 #makes a new column
for i in sensordata.user_id.unique(): #for each user
timelist = sensordata.loc[sensordata.user_id == i, 'formatteddate'] #this gives us a list of all their times in timestamp
timedeltas = []
for j in range(1, len(timelist)): #iterate over all consecutive pairs, including the last one
timedeltas.append(timelist.iloc[j] - timelist.iloc[j-1]) #list of differences in time between time point j and j-1
try:
#we compute and print the user_id, followed by their average time delta from point j to j-1
averagetimedeltas = abs(sum(timedeltas, datetime.timedelta(0)))/len(timedeltas) #the average timedelta
print i, averagetimedeltas
sensordata.loc[sensordata.user_id == i, 'averagetimedelta'] = averagetimedeltas.total_seconds()
except ZeroDivisionError: #some cases have too few points (which results in a zero division error)
#instead of breaking when we encounter a ZeroDivisionError, just print the following:
print i, "Too few data points?"
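# The inner loop over consecutive readings can be replaced by Series.diff, which computes the consecutive timedeltas in one vectorized call; a sketch with made-up timestamps:

```python
import pandas as pd

# Hypothetical per-user reading times, one hour apart
times = pd.Series(pd.to_datetime([
    "2016-01-01 00:00", "2016-01-01 01:00", "2016-01-01 02:00"
]))

# diff() yields the timedelta between consecutive readings (first entry is NaT);
# mean() skips the NaT and averages the rest
avg_delta = times.diff().mean()
avg_seconds = avg_delta.total_seconds()
```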
#3. If user ids are recycled, we'll have to do a combination of those things.
#Some sensors
# +
#Triage
#We want to have a measure of which users are facing the most chronic problems.
#Metric combining temperature difference and chronicity of problems
#Write some code that subsets all the violation == 't' cases
sensordataviolations = sensordata[sensordata.violation == 't'] #here it is.
#Hackiest method: just number of violations/numberof nonviolations and sort users by that
#That is, which users have had the most violations given the total number of readings
violationsovertime = []
for i in sensordata.user_id.unique():
nonviolations = sensordata.loc[sensordata.user_id == i, 'violation'].value_counts()['f'] #Number of non-violations = 'f'
try:
violations = sensordata.loc[sensordata.user_id == i, 'violation'].value_counts()['t'] #Number of violations = 't'
except KeyError:
violations = 0
sensordata.loc[sensordata.user_id == i, 'vfreq'] = float(violations)/float(nonviolations)
violationsovertime.append([i, (float(violations)/float(nonviolations))])
#violations over time gives first the user_id, then the proportion of how many of their readings are violations
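# The same per-user ratio can be computed without the explicit loop using groupby; a sketch with made-up data (the 't'/'f' string encoding mirrors the dump above):

```python
import pandas as pd

# Hypothetical readings: violation flags stored as 't'/'f' strings
sensordata = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "violation": ["t", "f", "f", "f", "f"],
})

# Ratio of violation to non-violation readings per user
# (like the loop above, this assumes every user has at least one 'f' reading)
vfreq = sensordata.groupby("user_id")["violation"].apply(
    lambda s: float((s == "t").sum()) / float((s == "f").sum())
)
```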
# +
#Stuff to do for fun
#Variability/consistency
#Which buildings have the least/most variable temperatures?
#For this, we just calculate within-person variability (how much sensor temperatures recorded by the same user vary as a function of time)
#We an use this same process to calculate variability between locations (e.g., just calculate variance for each location)
# +
#Now, we loop over all unique users in the dataset and generate a measure of how long they've had the sensor running
sensordata['totaltime'] = 0
sensordata['vfreq'] = 0
for i in sensordata.user_id.unique():
numentries = len(sensordata.loc[sensordata.user_id==i,'formatteddate']) #Number of readings for this user
lasttime = sensordata.loc[sensordata.user_id == i, 'formatteddate'].iloc[0] #Timestamp of the latest reading (rows are newest-first)
firsttime = sensordata.loc[sensordata.user_id == i, 'formatteddate'].iloc[numentries-1] #Timestamp of the first reading
sensordata.loc[sensordata.user_id == i, 'totaltime'] = lasttime - firsttime #This is the timedelta (over how long a period readings were made)
#print i, lasttime-firsttime
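# The same per-user time span can be computed with a groupby aggregation instead of a loop; a sketch with made-up timestamps:

```python
import pandas as pd

sensordata = pd.DataFrame({
    "user_id": [1, 1, 2],
    "formatteddate": pd.to_datetime(["2016-01-01", "2016-01-03", "2016-01-05"]),
})

# Span between each user's first and last reading, computed per group;
# max - min also avoids assuming any particular row ordering
totaltime = sensordata.groupby("user_id")["formatteddate"].agg(
    lambda s: s.max() - s.min()
)
```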
# +
#sensors.user_id.value_counts()
#sensordata.violation.value_counts() #This returns the number of 't's and 'f's
#np.sort(userdata.id.unique())
#np.intersect1d(userdata.id.unique(), sensordata.user_id.unique()) #Returns the common ids in both the datasets.
#readings.sensor_id.unique
| src/bryan.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: d2l:Python
# language: python
# name: conda-env-d2l-py
# ---
# # Practice 3. PyTorch Distributed and data-parallel training
# In this assignment, we'll give an overview of the distributed part of the PyTorch library. We will also see a couple of examples of distributed training using the available wrappers.
#
# To find out about the communication pattern of your GPUs, you can use the following command:
# !nvidia-smi topo -m
# Let's import all required libraries and define a function which will create the process group. There are [three](https://pytorch.org/docs/stable/distributed.html#backends-that-come-with-pytorch) communication backends in PyTorch: as a simple rule, use GLOO for CPU communication and NCCL for communication between NVIDIA GPUs.
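# Following this rule of thumb, the backend choice can be automated; a minimal sketch (the variable name is illustrative):

```python
import torch

# NCCL for communication between NVIDIA GPUs, GLOO otherwise (CPU tensors)
backend = "nccl" if torch.cuda.is_available() else "gloo"
print(backend)
```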
# +
import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
import random
def init_process(rank, size, fn, master_port, backend='gloo'):
""" Initialize the distributed environment. """
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = str(master_port)
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
torch.set_num_threads(1)
# -
# First, we'll run a very simple function with torch.distributed.barrier. The cell below prints in the first process and then prints in all other processes.
# +
def run(rank, size):
""" Distributed function to be implemented later. """
if rank!=0:
dist.barrier()
print(f'Started {rank}',flush=True)
if rank==0:
dist.barrier()
if __name__ == "__main__":
size = 4
processes = []
port = random.randint(25000, 30000)
for rank in range(size):
p = Process(target=init_process, args=(rank, size, run, port))
p.start()
processes.append(p)
for p in processes:
p.join()
# -
# Let's implement a classical ping-pong application with this paradigm. We have two processes, and the goal is to have P1 output 'ping' and P2 output 'pong' without any race conditions.
# +
def run_pingpong(rank, size, num_iter=10):
""" Distributed function to be implemented later. """
for _ in range(num_iter):
if rank==0:
print('ping')
dist.barrier()
elif rank==1:
dist.barrier()
print('pong')
else:
raise ValueError('Not correct rank')
dist.barrier()
if __name__ == "__main__":
size = 2
processes = []
port = random.randint(25000, 30000)
for rank in range(size):
p = Process(target=init_process, args=(rank, size, run_pingpong, port))
p.start()
processes.append(p)
for p in processes:
p.join()
# -
# # Point-to-point communication
# The functions below show that it's possible to send data from one process to another with `torch.distributed.send/torch.distributed.recv`:
# +
"""Blocking point-to-point communication."""
def run_sendrecv(rank, size):
tensor = torch.zeros(1)+int(rank==0)
print('Rank ', rank, ' has data ', tensor[0], flush=True)
if rank == 0:
# Send the tensor to process 1
dist.send(tensor=tensor, dst=1)
else:
# Receive tensor from process 0
dist.recv(tensor=tensor, src=0)
print('Rank ', rank, ' has data ', tensor[0], flush=True)
if __name__ == "__main__":
size = 2
processes = []
port = random.randint(25000, 30000)
for rank in range(size):
p = Process(target=init_process, args=(rank, size, run_sendrecv, port))
p.start()
processes.append(p)
for p in processes:
p.join()
# -
# Also, these functions have an immediate (asynchronous) version:
# +
"""Non-blocking point-to-point communication."""
import time
def run_isendrecv(rank, size):
tensor = torch.zeros(1)
req = None
if rank == 0:
tensor += 1
# Send the tensor to process 1
req = dist.isend(tensor=tensor, dst=1)
print('Rank 0 started sending')
else:
# Receive tensor from process 0
req = dist.irecv(tensor=tensor, src=0)
print('Rank 1 started receiving')
print('Rank ', rank, ' has data ', tensor[0])
req.wait()
print('Rank ', rank, ' has data ', tensor[0])
if __name__ == "__main__":
size = 2
processes = []
port = random.randint(25000, 30000)
for rank in range(size):
p = Process(target=init_process, args=(rank, size, run_isendrecv, port))
p.start()
processes.append(p)
for p in processes:
p.join()
# -
# Adding an artificial delay shows that the communication is asynchronous:
# +
import time
def run_isendrecv(rank, size):
tensor = torch.zeros(1)
req = None
if rank == 0:
tensor += 1
# Send the tensor to process 1
time.sleep(5)
req = dist.isend(tensor=tensor, dst=1)
print('Rank 0 started sending')
else:
# Receive tensor from process 0
req = dist.irecv(tensor=tensor, src=0)
print('Rank 1 started receiving')
print('Haha, Rank ', rank, ' has data ', tensor[0])
req.wait()
print('Hoho, Rank ', rank, ' has data ', tensor[0])
if __name__ == "__main__":
size = 2
processes = []
port = random.randint(25000, 30000)
for rank in range(size):
p = Process(target=init_process, args=(rank, size, run_isendrecv, port))
p.start()
processes.append(p)
for p in processes:
p.join()
# -
# # Collective communication and All-Reduce
# Now, let's run a simple All-Reduce example which computes the sum across all workers. We'll run the code with `!python` shell commands to avoid issues caused by the interaction of Jupyter and multiprocessing.
# +
# %%writefile run_allreduce.py
# #!/usr/bin/env python
import os
from functools import partial
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
def run_allreduce(rank, size):
tensor = torch.ones(1)
dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
print('Rank ', rank, ' has data ', tensor[0])
def init_process(rank, size, fn, backend='gloo'):
""" Initialize the distributed environment. """
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
if __name__ == "__main__":
size = 10
processes = []
for rank in range(size):
p = Process(target=init_process, args=(rank, size, run_allreduce))
p.start()
processes.append(p)
for p in processes:
p.join()
# -
# !python run_allreduce.py
# The same thing can be done with a simpler [torch.multiprocessing.spawn](https://pytorch.org/docs/stable/multiprocessing.html#torch.multiprocessing.spawn) wrapper:
# +
# %%writefile run_allreduce_spawn.py
# #!/usr/bin/env python
import os
from functools import partial
import torch
import torch.distributed as dist
def run_allreduce(rank, size):
tensor = torch.ones(1)
dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
print('Rank ', rank, ' has data ', tensor[0])
def init_process(rank, size, fn, backend='gloo'):
""" Initialize the distributed environment. """
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
if __name__ == "__main__":
size = 10
fn = partial(init_process, size=size, fn=run_allreduce, backend='gloo')
torch.multiprocessing.spawn(fn, nprocs=size)
# -
# !python run_allreduce_spawn.py
# Let's write our own Butterfly All-Reduce. First, we start with creating 5 random vectors and getting the "true" average, just for comparison:
# +
size = 5
tensors = []
for i in range(size):
torch.manual_seed(i)
cur_tensor = torch.randn((size,), dtype=torch.float)
print(cur_tensor)
tensors.append(cur_tensor)
print("result", torch.stack(tensors).mean(0))
# -
# Now, let's create a custom implementation below:
# +
# %%writefile custom_allreduce.py
import os
import torch
import torch.distributed as dist
from torch.multiprocessing import Process
import random
def init_process(rank, size, fn, master_port, backend='gloo'):
""" Initialize the distributed environment. """
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = str(master_port)
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
def butterfly_allreduce(send, rank, size):
"""
Performs Butterfly All-Reduce over the process group.
Args:
send: torch.Tensor to be averaged with other processes.
rank: Current process rank (in a range from 0 to size)
size: Number of workers
"""
buffer_for_chunk = torch.empty((size,), dtype=torch.float)
send_futures = []
for i, elem in enumerate(send):
if i!=rank:
send_futures.append(dist.isend(elem, i))
recv_futures = []
for i, elem in enumerate(buffer_for_chunk):
if i!=rank:
recv_futures.append(dist.irecv(elem, i))
else:
elem.copy_(send[i])
for future in recv_futures:
future.wait()
# compute the average
torch.mean(buffer_for_chunk, dim=0, out=send[rank])
for i in range(size):
if i!=rank:
send_futures.append(dist.isend(send[rank], i))
recv_futures = []
for i, elem in enumerate(send):
if i!=rank:
recv_futures.append(dist.irecv(elem, i))
for future in recv_futures:
future.wait()
for future in send_futures:
future.wait()
def run_allreduce(rank, size):
""" Simple point-to-point communication. """
torch.manual_seed(rank)
tensor = torch.randn((size,), dtype=torch.float)
print('Rank ', rank, ' has data ', tensor)
butterfly_allreduce(tensor, rank, size)
print('Rank ', rank, ' has data ', tensor)
if __name__ == "__main__":
size = 5
processes = []
port = random.randint(25000, 30000)
for rank in range(size):
p = Process(target=init_process, args=(rank, size, run_allreduce, port))
p.start()
processes.append(p)
for p in processes:
p.join()
# -
# !python custom_allreduce.py
# # Distributed training
#
# Armed with this simple implementation of AllReduce, we can run multi-process distributed training. For now, let's use the model and the dataset from the official MNIST [example](https://github.com/pytorch/examples/blob/master/mnist/main.py), as well as the [torchrun](https://pytorch.org/docs/stable/elastic/run.html?highlight=torchrun) command used to manage processes:
# +
# %%writefile custom_allreduce_training.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from torchvision.datasets import MNIST
def init_process(local_rank, fn, backend='nccl'):
""" Initialize the distributed environment. """
dist.init_process_group(backend, rank=local_rank)
size = dist.get_world_size()
fn(local_rank, size)
torch.set_num_threads(1)
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 32, 3, 1)
self.dropout1 = nn.Dropout(0.25)
self.dropout2 = nn.Dropout(0.5)
self.fc1 = nn.Linear(4608, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
output = self.fc2(x)
return output
def average_gradients(model):
size = float(dist.get_world_size())
for param in model.parameters():
dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
param.grad.data /= size
def run_training(rank, size):
torch.manual_seed(1234)
dataset = MNIST('./mnist', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
loader = DataLoader(dataset, sampler=DistributedSampler(dataset, size, rank), batch_size=16)
model = Net()
device = torch.device('cpu')
model.to(device)
optimizer = torch.optim.SGD(model.parameters(),
lr=0.01, momentum=0.5)
num_batches = len(loader)
steps = 0
epoch_loss = 0
for data, target in loader:
data = data.to(device)
target = target.to(device)
optimizer.zero_grad()
output = model(data)
loss = torch.nn.functional.cross_entropy(output, target)
epoch_loss += loss.item()
loss.backward()
average_gradients(model)
optimizer.step()
steps += 1
if True:
print(f'Rank {dist.get_rank()}, loss: {epoch_loss / num_batches}')
epoch_loss = 0
if __name__ == "__main__":
local_rank = int(os.environ["LOCAL_RANK"])
init_process(local_rank, fn=run_training, backend='gloo')
# -
# !torchrun --nproc_per_node 2 custom_allreduce_training.py
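# `torchrun` passes each worker its identity through environment variables (`RANK`, `LOCAL_RANK`, `WORLD_SIZE`), which is how the script above recovers `LOCAL_RANK`. A minimal sketch with single-process fallbacks for when the script is run directly:

```python
import os

# Fall back to single-process values when not launched via torchrun
rank = int(os.environ.get("RANK", 0))
local_rank = int(os.environ.get("LOCAL_RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))
```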
# + active=""
# Now let's use the standard [DistributedDataParallel](https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html) wrapper (which you should probably use in real-world training anyway):
# +
# %%writefile ddp_example.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler
from torchvision.datasets import MNIST
def init_process(local_rank, fn, backend='nccl'):
""" Initialize the distributed environment. """
dist.init_process_group(backend, rank=local_rank)
size = dist.get_world_size()
fn(local_rank, size)
torch.set_num_threads(1)
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 32, 3, 1)
self.conv2 = nn.Conv2d(32, 32, 3, 1)
self.dropout1 = nn.Dropout(0.25)
self.dropout2 = nn.Dropout(0.5)
self.fc1 = nn.Linear(4608, 128)
self.fc2 = nn.Linear(128, 10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.conv2(x)
x = F.relu(x)
x = F.max_pool2d(x, 2)
x = self.dropout1(x)
x = torch.flatten(x, 1)
x = self.fc1(x)
x = F.relu(x)
x = self.dropout2(x)
output = self.fc2(x)
return output
def run_training(rank, size):
torch.manual_seed(1234)
dataset = MNIST('./mnist', transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
loader = DataLoader(dataset,
sampler=DistributedSampler(dataset, size, rank),
batch_size=16)
model = Net()
device = torch.device('cuda', rank)
model.to(device)
model = DistributedDataParallel(model, device_ids=[rank], output_device=rank)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
num_batches = len(loader)
steps = 0
epoch_loss = 0
for data, target in loader:
data = data.to(device)
target = target.to(device)
optimizer.zero_grad()
output = model(data)
loss = torch.nn.functional.cross_entropy(output, target)
epoch_loss += loss.item()
loss.backward()
optimizer.step()
steps += 1
if True:
print(f'Rank {dist.get_rank()}, loss: {epoch_loss / num_batches}')
epoch_loss = 0
if __name__ == "__main__":
local_rank = int(os.environ["LOCAL_RANK"])
init_process(local_rank, fn=run_training, backend='gloo')
# -
# !torchrun --nproc_per_node 2 ddp_example.py
# That's it for today! For the homework this week, see the `week03_data_parallel/homework` folder.
| week03_data_parallel/practice.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Advanced filtering
#
# In this tutorial we are going to see how to use the ``F`` object to do advanced filtering of hosts. Let's start by initializing nornir and looking at the inventory:
# +
from nornir import InitNornir
from nornir.core.filter import F
nr = InitNornir(config_file="advanced_filtering/config.yaml")
# -
# %cat advanced_filtering/inventory/hosts.yaml
# %cat advanced_filtering/inventory/groups.yaml
# As you can see we have built ourselves a collection of animals with different properties. The ``F`` object lets you access the magic methods of each type by just prepending two underscores and the name of the magic method. For instance, if you want to check if a list contains a particular element you can just prepend ``__contains``. Let's use this feature to retrieve all the animals that belong to the group ``bird``:
birds = nr.filter(F(groups__contains="bird"))
print(birds.inventory.hosts.keys())
# We can also invert the ``F`` object by prepending ``~``:
not_birds = nr.filter(~F(groups__contains="bird"))
print(not_birds.inventory.hosts.keys())
# We can also combine ``F`` objects and perform AND and OR operations with the symbols ``&`` and ``|`` (pipe) respectively:
domestic_or_bird = nr.filter(F(groups__contains="bird") | F(domestic=True))
print(domestic_or_bird.inventory.hosts.keys())
domestic_mammals = nr.filter(F(groups__contains="mammal") & F(domestic=True))
print(domestic_mammals.inventory.hosts.keys())
# As expected, you can combine all of the symbols:
flying_not_carnivore = nr.filter(F(fly=True) & ~F(diet="carnivore"))
print(flying_not_carnivore.inventory.hosts.keys())
# You can also access nested data the same way you access magic methods, by appending two underscores and the data you want to access. You can keep building on this as much as needed and even access the magic methods of the nested data. For instance, let's get the animals that have a lifespan greater than or equal to 15:
long_lived = nr.filter(F(additional_data__lifespan__ge=15))
print(long_lived.inventory.hosts.keys())
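# The double-underscore convention itself is easy to emulate: each intermediate segment becomes a data access and the final segment a comparison. A minimal plain-Python illustration (this is not nornir's actual implementation):

```python
import operator

def lookup(data, key, value):
    """Resolve a Django-style lookup such as 'additional_data__lifespan__ge'."""
    *path, op = key.split("__")
    for segment in path:
        data = data[segment]                   # descend into nested data
    return getattr(operator, op)(data, value)  # e.g. operator.ge, operator.contains

animal = {"additional_data": {"lifespan": 20}, "groups": ["bird"]}
print(lookup(animal, "additional_data__lifespan__ge", 15))
print(lookup(animal, "groups__contains", "bird"))
```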
# There are two extra facilities to help you work with lists: ``any`` and ``all``. These let you pass a list of elements and get the objects that have either any of the members or all of them. For instance:
marine_and_invertebrates = nr.filter(F(groups__all=["marine", "invertebrate"]))
print(marine_and_invertebrates.inventory.hosts.keys())
bird_or_invertebrates = nr.filter(F(groups__any=["bird", "invertebrate"]))
print(bird_or_invertebrates.inventory.hosts.keys())
| docs/howto/advanced_filtering.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Dynamic visualization
#
# In this session we will study some techniques for the dynamic visualization of datasets that have some temporal component, with special emphasis on those problems for which dynamism in the representation actually offers an advantage over classic static visualizations.
#
# We will begin with a brief technical description of the tools we will use to generate the animations, as well as their integration with `notebook`. Next, we will present different examples of problems for which dynamic visualization allows a more detailed understanding of the dataset or its behavior.
# ## Interactive visualization with <tt>matplotlib</tt>
#
# Matplotlib explicitly supports the creation of animations through the [`matplotlib.animation`](http://matplotlib.org/api/animation_api.html) API. In this package we can find various classes and methods to create animations, control their life cycle, play them, and export them to a convenient format. The official documentation also provides several [illustrative examples](http://matplotlib.org/1.5.1/examples/animation/).
#
# This API is supported in `notebook` [as of matplotlib 1.4](http://matplotlib.org/users/whats_new.html#new-in-matplotlib-1-4) through the use of an interactive backend. This is necessary because the classic `%matplotlib inline` option only allows generating static images. This backend is called `notebook`, and we select it with the following [`magic`](http://ipython.readthedocs.org/en/stable/interactive/tutorial.html#magic-functions):
# %matplotlib notebook
# We check the capabilities of this backend by creating a simple plot:
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
plt.plot(np.arange(-5,5,0.1)**3);
# As we can see, the resulting plot is interactive, which lets us zoom, pan the axes, change the size, etc. If at any point we want to end this interactivity, we can turn the plot into a static image, equivalent to using the `inline` backend.
# ## Analysis of sociological information
#
# In this section we will use animations to analyze different kinds of sociological information. The website [Gapminder](http://www.gapminder.org/) is a magnificent repository of this type of information at a global level, including historical series going back to the year 1800. The spreadsheet <tt>social_data.xlsx</tt> contains information downloaded from this site for several variables and countries.
#
# Our first goal will be to *visualize the temporal evolution of the life expectancy of the inhabitants of a set of countries against the [per capita income, or gross domestic product (GDP) per inhabitant](https://en.wikipedia.org/wiki/Per_capita_income)*.
#
# We start with a brief initial exploration of our data:
import pandas as pd
from IPython.display import display, HTML
df = pd.read_excel('../data/social_data.xlsx', sheetname=None)
#For each sheet inside the file, we display the head of the DataFrame as HTML.
for sh in df:
display(HTML('<h3>' + sh + '</h3>' + df[sh].head().to_html()))
# As we can see, we have information on the per capita income (GDP), the life expectancy, and the population of each country since the year 1800.
#
# In the first part of the exercise, we will analyze the relationship between life expectancy and per capita income for the following countries: China, the United States, Brazil, Germany, South Africa, and Australia.
countries = ['China', 'United States', 'Brazil', 'Germany', 'South Africa', 'Australia']
#We index the "GDP" and "Life Expectancy" DataFrames by country, and select the chosen ones.
gdp = df['GDP'].set_index('Country').loc[countries]
lfexp = df['Life Expectancy'].set_index('Country').loc[countries]
display(gdp)
# As a first visual exploration, we draw a bubble chart showing the relationship in the year 2015 between life expectancy and per capita income for the selected countries.
fig = plt.figure()
ax = fig.add_subplot(111)
#Axis label configuration
plt.xlabel('Income per person (Inflation-adjusted $)')
plt.ylabel('Life Expectancy (years)')
#Create a point for each country
for country in countries:
plt.plot([gdp[2015][country]], [lfexp[2015][country]], 'o', label=country, markersize=30)
#Legend:
ax.legend(loc='lower right', fontsize=12, markerscale=0.5, numpoints=1, frameon=False)
# <div style="font-size:125%; display:inline; font-weight:bold">Exercise:</div> *Create an animation from the previous plot that visualizes the evolution of the relationship between per capita income and life expectancy over the entire available time series.*
#
# The simplest and recommended way to create an animation in matplotlib is through the [`FuncAnimation`](http://matplotlib.org/api/animation_api.html#matplotlib.animation.FuncAnimation) function, which generates the set of *frames* by repeatedly executing a user-defined function (in this case, the `update_plot` function). The argument values passed to this function and the interval between consecutive frames are controlled with the `frames` and `interval` parameters.
# +
import matplotlib.animation as animation
fig = plt.figure()
ax = fig.add_subplot(111)
#Configure the plot limits and axis labels.
ax.set_xlim((0, 60000))
ax.set_ylim((0, 90))
plt.xlabel('Income per person (Inflation-adjusted $)')
plt.ylabel('Life Expectancy (years)')
#Create one point per country, initially with no data.
circs = {}
for country in countries:
circs[country] = plt.plot([], [], 'o', label=country, markersize=30)[0]
#Legend and label for the current year.
ax.legend(loc='lower right', fontsize=12, markerscale=0.5, numpoints=1, frameon=False)
cur_year = ax.annotate("", (0.05, 0.9), xycoords='axes fraction', size='x-large', weight='bold')
def update_plot(year):
"""
Función de animación, en la que se actualiza la posición del punto correspondiente
a cada país, así como la etiqueta del año actual.
"""
#Actualizar el centro de cada punto con set_data
for country in countries:
circs[country].set_xdata([gdp[year][country]])
circs[country].set_ydata([lfexp[year][country]])
    #Update the current-year label with set_text
cur_year.set_text(year)
    return list(circs.values()) + [cur_year]
#Create the animation, running 'update_plot' for each year in the dataframe
ani = animation.FuncAnimation(fig, update_plot, gdp.keys(), interval=100, repeat=False)
# -
# ### Exporting animations with the `JSAnimation` package
#
# Another recommended way to display animations in a `notebook` is the [JSAnimation](https://github.com/jakevdp/JSAnimation) library. [Installation is very simple](https://gist.github.com/gforsyth/188c32b6efe834337d8a), and since it is a very small project we can even download the sources directly and import them from our own code. To install `JSAnimation` in [Anaconda](https://www.continuum.io/downloads), we can use the following command:
#
# ```
# conda install --channel https://conda.anaconda.org/IOOS jsanimation
# ```
#
# Or use pip and install straight from the repository with:
#
# ```
# pip install git+https://github.com/jakevdp/JSAnimation.git
# ```
#
# This library removes the need for a running Python interpreter every time we want to replay an already generated animation, since the animation is stored entirely in the HTML code and played back with Javascript.
#
# Before using `JSAnimation`, it is recommended to select matplotlib's `inline` backend. Then we simply need to import the `display_animation` function.
# %matplotlib inline
from JSAnimation.IPython_display import display_animation
# This function takes any object of type [`Animation`](http://matplotlib.org/api/animation_api.html#matplotlib.animation.Animation) and generates dynamic HTML that can be embedded in any web page, including a `notebook` exported as HTML. It also includes a set of controls that let us pause the animation, change its speed, or jump to a specific moment in it. The only drawback of this package is that the animation must be fully rendered before playback starts, which is inconvenient while coding and debugging. In addition, the resulting `notebook` file will be considerably larger, since it stores the binary data of all the images that make up the animations.
#
# Below, the same animation from the previous example is rendered with `JSAnimation`.
display_animation(ani)
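# As an aside, modern matplotlib (2.1 and later) ships an equivalent self-contained Javascript player via `Animation.to_jshtml()`, so the external `JSAnimation` package is no longer strictly needed. A minimal standalone sketch with a hypothetical toy animation (it uses the headless `Agg` backend so it also runs outside a notebook; drop that line when running inside one):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend: render without a display
import matplotlib.pyplot as plt
import matplotlib.animation as animation

fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlim(0, 1)
ax.set_ylim(0, 3)

def update(frame):
    # Toy update function: grow the line with each frame
    line.set_data([0, 1], [0, frame])
    return [line]

ani = animation.FuncAnimation(fig, update, frames=3)
html = ani.to_jshtml()  # self-contained HTML + Javascript player, as a string
```

The resulting string can be shown with `IPython.display.HTML(html)`, giving the same pause/speed/seek controls that `JSAnimation` provides.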
#Switch back to the notebook backend for the following animations
# %matplotlib notebook
# <div style="font-size:125%; display:inline; font-weight:bold">Exercise:</div> *Create a new animation in which the marker size of each country is updated according to its population. Use the `get_population_markersize` function provided below:*
def get_population_markersize(pop):
"""
Función para obtener el tamaño del marcador correspondiente a una determinada población.
A una población de 2 millones se corresponde un tamaño 4.0, que escalará de forma que
la superficie del marcador sea proporcional a esta relación.
"""
return 4*np.sqrt(pop/2e6)
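# A quick sanity check of the area-proportional scaling: matplotlib marker area grows with the square of `markersize`, so quadrupling the population should exactly double the returned size. Restated with `math.sqrt` so the check is self-contained:

```python
import math

def get_population_markersize(pop):
    # Same formula as above: size 4.0 at 2 million, marker area proportional to population
    return 4 * math.sqrt(pop / 2e6)

print(get_population_markersize(2e6))  # 4.0
print(get_population_markersize(8e6))  # 8.0: 4x the population -> 2x the size -> 4x the area
```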
# Besides this function, note that population data is not available for every year of the time series.
pop = df['Population'].set_index('Country').loc[countries]
display(pop)
#Update the set of years for the "GDP" and "Life Expectancy" DataFrames
#Note: pop.keys() may be helpful.
gdp_filt = df['GDP'].set_index('Country').loc[countries][pop.keys()]
lfexp_filt = df['Life Expectancy'].set_index('Country').loc[countries][pop.keys()]
display(gdp_filt)
# +
fig = plt.figure()
ax = fig.add_subplot(111)
#Configure the plot limits and axis labels.
ax.set_xlim((0, 60000))
ax.set_ylim((0, 90))
plt.xlabel('Income per person (Inflation-adjusted $)')
plt.ylabel('Life Expectancy (years)')
#Create the point for each country
circs = {}
for country in countries:
circs[country] = plt.plot([], [], 'o', label=country, markersize=30)[0]
#Legend and label for the current year.
ax.legend(loc='lower right', fontsize=12, markerscale=0.5, numpoints=1, frameon=False)
cur_year = ax.annotate("", (0.05, 0.9), xycoords='axes fraction', size='x-large', weight='bold')
def update_plot(year):
"""
Función de animación, en la que se actualiza la posición y el tamaño del punto
correspondiente a cada país, así como la etiqueta del año actual.
"""
for country in countries:
circs[country].set_xdata([gdp_filt[year][country]])
circs[country].set_ydata([lfexp_filt[year][country]])
        #Here, besides updating each point's position with "set_data", we must
        #also update its size with "set_markersize"
        circs[country].set_markersize(get_population_markersize(pop[year][country]))
cur_year.set_text(year)
    return list(circs.values()) + [cur_year]
#Create the animation with "FuncAnimation".
ani = animation.FuncAnimation(fig, update_plot, pop.keys(), interval=100, repeat=False)
# -
# As a personal remark, this chart seems to add noise; it would be more legible to first ask what one actually wants to see in it. That is, if the goal is to check visually whether a relationship exists between 2 of the 4 displayed variables for any of the 6 chosen samples, there are better alternatives to this chart.
#
# Displayed variables: life expectancy, number of inhabitants, income per person and time.
# Chosen samples: the 6 countries.
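# For instance, if the question is simply how life expectancy evolves over time for each country, a small-multiples panel of line charts answers it with far less visual noise than an animated bubble chart. A standalone sketch with made-up data (the values below are illustrative, not taken from the dataset; the `Agg` line is only needed outside a notebook):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the standalone sketch
import matplotlib.pyplot as plt

# Illustrative life-expectancy series per country (hypothetical values)
series = {'China': [32, 44, 67, 76], 'Brazil': [51, 59, 70, 75]}
years = [1950, 1970, 1990, 2015]

# One panel per country, sharing the Y axis so panels are comparable
fig, axes = plt.subplots(ncols=len(series), sharey=True)
for ax, (country, values) in zip(axes, series.items()):
    ax.plot(years, values)
    ax.set_title(country)
axes[0].set_ylabel('Life Expectancy (years)')
```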
#
# <div style="font-size:125%; display:inline; font-weight:bold">Optional exercise:</div> *Repeat the previous animation, but include every year between 1800 and 2015. For those years with no population data, use linear interpolation to estimate the value.*
# +
todosanyos = np.arange(1800, 2016)  #every year between 1800 and 2015, inclusive
anyosdados = lfexp_filt.keys().tolist()
#Years missing from the data: get_indexer returns -1 for years it cannot find
columnasfaltantes = todosanyos[pd.Index(anyosdados).get_indexer(todosanyos) == -1]
df_columnas = pd.DataFrame(columns = columnasfaltantes)
gdp_interpolated = gdp_filt.join(df_columnas, how="outer").reindex(columns=todosanyos).astype(np.float64)
gdp_interpolated.interpolate(method='linear', axis=1, inplace=True)
lfexp_interpolated = lfexp_filt.join(df_columnas, how="outer").reindex(columns=todosanyos).astype(np.float64)
lfexp_interpolated.interpolate(method='linear', axis=1, inplace=True)
pop_interpolated = pop.join(df_columnas, how="outer").reindex(columns=todosanyos).astype(np.float64)
pop_interpolated.interpolate(method='linear', axis=1, inplace=True)
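# Conceptually, `interpolate(method='linear', axis=1)` fills each missing year from its two nearest known neighbours along the row. The idea in plain Python (a hypothetical `linear_interpolate` helper for illustration, not part of pandas):

```python
def linear_interpolate(known, year):
    """Estimate the value at `year` from a {year: value} dict with gaps."""
    if year in known:
        return known[year]
    lo = max(y for y in known if y < year)   # nearest known year below
    hi = min(y for y in known if y > year)   # nearest known year above
    frac = (year - lo) / (hi - lo)
    return known[lo] + frac * (known[hi] - known[lo])

print(linear_interpolate({1800: 10.0, 1810: 20.0}, 1805))  # 15.0
```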
# +
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlim((0, 60000))
ax.set_ylim((0, 90))
plt.xlabel('Income per person (Inflation-adjusted $)')
plt.ylabel('Life Expectancy (years)')
circs = {}
for country in countries:
circs[country] = plt.plot([], [], 'o', label=country, markersize=30)[0]
ax.legend(loc='lower right', fontsize=12, markerscale=0.5, numpoints=1, frameon=False)
cur_year = ax.annotate("", (0.05, 0.9), xycoords='axes fraction', size='x-large', weight='bold')
def update_plot(year):
for country in countries:
circs[country].set_xdata([gdp_interpolated[year][country]])
circs[country].set_ydata([lfexp_interpolated[year][country]])
circs[country].set_markersize(get_population_markersize(pop_interpolated[year][country]))
cur_year.set_text(year)
    return list(circs.values()) + [cur_year]
#Create the animation with "FuncAnimation", covering every year.
ani = animation.FuncAnimation(fig, update_plot, todosanyos, interval=100, repeat=False)
# -
# <div style="font-size:125%; display:inline; font-weight:bold">Exercise:</div> *Create an animation showing the population pyramids resulting from the [2015-2064 population projections of the Spanish National Statistics Institute (INE)](http://www.ine.es/dyngs/INEbase/es/operacion.htm?c=Estadistica_C&cid=1254736176953&menu=resultados&idp=1254735572981), available in the file `population_projection.xlsx`.*
#
# First, we read the data and index each of the tables (men and women) by age, to ease access to the data of interest.
df = pd.read_excel('../data/population_projection.xlsx', sheet_name=None)
men, women = df['Varones'], df['Mujeres']
men.set_index('Edad', inplace=True)
women.set_index('Edad', inplace=True)
display(women.head(), women.tail())
# As we can see, we have data for every age value between 1 and 99. However, population pyramids usually use [5-year age groups](http://www.ine.es/prensa/np813.pdf). This is very easy to do with pandas via the [`cut()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html) and [`groupby()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) functions.
# +
categories, bins = pd.cut(women.index, np.append(women.index[::5], 100), retbins=True)
#For both the men's and the women's series, group by the created categories with
#"groupby", then aggregate the result by summing the values in each category
#with "sum" (no explicit "aggregate" call is needed).
gwomen = women.groupby(categories).sum()
gmen = men.groupby(categories).sum()
display(gwomen.head(), gmen.head())
# -
# For the visualization, matplotlib's [`barh()`](http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.barh) method gives us a direct way to draw population pyramids, requiring only small cosmetic adjustments.
# +
#Create two plots sharing the Y axis, and remove the horizontal spacing
fig, (axw,axm)= plt.subplots(ncols=2, sharey=True)
plt.subplots_adjust(wspace=0)
#First year in the time series
first_year = next(iter(gwomen))
#Label showing the year being displayed
cur_year = plt.suptitle(str(first_year), size='x-large', weight='bold')
#Get the maximum X coordinate
xmax = max(np.amax(gwomen.values), np.amax(gmen.values))
#Set the X range for both plots
axw.set_xlim((0, 1.1*xmax))
axm.set_xlim((0, 1.1*xmax))
#The women's plot (on the left) must be mirrored
axw.invert_xaxis()
#Other cosmetic adjustments
axw.yaxis.set_ticks(bins)
axm.tick_params(labelright=True)
axw.set_title('Mujeres')
axm.set_title('Varones')
#Create the women's and men's plots with the barh function
wbars = axw.barh(bins[:-1], gwomen[first_year].values, height=5, color="pink")
mbars = axm.barh(bins[:-1], gmen[first_year].values, height=5)
def update_pyramid(year):
    """Update the population pyramid"""
    #For each bar in "wbars" and "mbars", update its length with the
    #"set_width" function. Finally, also update the title, referenced
    #by the "cur_year" label.
    for idx, item in enumerate(wbars):
        item.set_width(gwomen[year].values[idx])
    for idx, item in enumerate(mbars):
        item.set_width(gmen[year].values[idx])
    cur_year.set_text(year)
    return (wbars, mbars)
#Create the animation with FuncAnimation, passing as the "frames" parameter the
#set of years for which we have data. Here the "interval" is 500ms.
ani = animation.FuncAnimation(fig, update_pyramid, women.keys(), interval=500, repeat=False)
# -
# ## Algorithm analysis
#
# In this exercise we have seen examples of dynamic visualizations that help us understand temporal data better, or discover additional information that is not easily accessible through static visualizations. Beyond that, a dynamic visualization can also be a great help in understanding how certain algorithms work, in particular those we will use to build models from datasets.
#
# Below is an example, [inspired by the one in the Scikit-learn documentation](http://scikit-learn.org/stable/modules/svm.html): we will build an animation that visualizes the learning process of an SVM classifier, letting us watch how the discriminating hyperplanes of the different classes are updated as new examples appear.
# +
from sklearn import svm, datasets
import matplotlib.colors
#Color set used for drawing
cols = ['#E24A33', '#348ABD', '#8EBA42']
#Load the dataset
iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target
#To make the learning dynamic, we will modify the weight of each
#example, including one more in each iteration.
sample_weights = np.zeros_like(y)
#Random reordering of the examples
transp = np.random.permutation(np.c_[X,y])
X, y = transp[:, :2], transp[:, 2].astype(int)
#SVM instance that we will use for learning
vm = svm.SVC()
#Create a mesh to draw the region of each class
h = .02
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
fig = plt.figure()
#Configure the axes
ax = fig.add_subplot(111)
ax.set_xlabel('Sepal length')
ax.set_ylabel('Sepal width')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
ax.set_title('SVM learning process')
#Add the plots that will draw the points of each class.
pts = {k : ax.plot([], [], 'o', color=cols[k])[0] for k in np.unique(y)}
#Drawing of the classification region of each class.
contours = None
def update_classifier(nsamp):
    #Remove the contours of each class
global contours
if contours is not None:
        for cont in contours.collections:
            cont.remove()
    #Add a new example to the training set
sample_weights[nsamp] = 1
vm.fit(X, y, sample_weights)
    #Compute the output for each point of the mesh
Z = vm.predict(np.c_[xx.ravel(), yy.ravel()])
    #And draw it
Z = Z.reshape(xx.shape)
contours = ax.contourf(xx, yy, Z, alpha=0.8, colors=cols, levels=[0, 0.5, 1.5, 2])
    #Add the new point to the corresponding plot.
lastx, lasty = X[nsamp]
xpt, ypt = pts[y[nsamp]].get_data()
pts[y[nsamp]].set_data(np.append(xpt, lastx), np.append(ypt, lasty))
    return list(pts.values()) + list(contours.collections)
ani = animation.FuncAnimation(fig, update_classifier, len(y), interval=100, repeat=False)
# -
# ## Additional resources
#
# In this introduction to dynamic visualization tools we have focused on the `matplotlib` package, as it is the most popular one in the Python ecosystem. However, there are other libraries, usually based on or compatible with `matplotlib`, that can be very useful for creating dynamic and interactive visualizations. We highlight the following:
#
# - Bokeh: http://bokeh.pydata.org/en/latest/
# - Seaborn: https://stanford.edu/~mwaskom/software/seaborn/
# - NVD3: https://github.com/areski/python-nvd3
# - MPLD3: http://mpld3.github.io/
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .sh
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Bash
# language: bash
# name: bash
# ---
# # 8 delete containers from turtlebot3
# change ${CORE_ROOT} to your path of `core`.
export CORE_ROOT="${HOME}/core"
# change ${PJ_ROOT} to your path of `example-turtlebot3`.
export PJ_ROOT="${HOME}/example-turtlebot3"
cd ${PJ_ROOT};pwd
# example)
# ```
# /Users/user/example-turtlebot3
# ```
# ## load environment variables
# load from `core`
source ${CORE_ROOT}/docs/environments/azure_aks/env
# load from `example-turtlebot3`
source ${PJ_ROOT}/docs/environments/azure_aks/env
# ## delete containers from turtlebot3
# ### A. turtlebot3 simulator: undeploy turtlebot3
export TURTLEBOT3_USER=turtlebot3
export TURTLEBOT3_UID=1000
envsubst < ${PJ_ROOT}/ros/turtlebot3-fake/yaml/turtlebot3-fake-deployment-acr.yaml > /tmp/turtlebot3-fake-deployment-acr.yaml
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -v /tmp:/tmp -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete /tmp/turtlebot3-fake-deployment-acr.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
rm /tmp/turtlebot3-fake-deployment-acr.yaml
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/turtlebot3-fake/yaml/turtlebot3-fake-service.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
# ### A. (alternative) turtlebot3 simulator: stop turtlebot3-fake directly
# 1. `exit` from telepresence shell
# 2. stop port forwarding using Ctrl-C
# ### B. actual turtlebot3 robot: undeploy turtlebot3
export TURTLEBOT3_WORKSPACE=/home/turtlebot3/catkin_ws
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/turtlebot3-bringup/yaml/turtlebot3-bringup-deployment-acr.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/turtlebot3-bringup/yaml/turtlebot3-bringup-service.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
# ### common procedure: undeploy fiware-ros-turtlebot3-operator
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/fiware-ros-turtlebot3-operator/yaml/fiware-ros-turtlebot3-operator-deployment-acr-wide.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/fiware-ros-turtlebot3-operator/yaml/fiware-ros-turtlebot3-operator-service.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/fiware-ros-turtlebot3-operator/yaml/fiware-ros-turtlebot3-operator-configmap.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
# ### common procedure: undeploy fiware-ros-bridge
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/fiware-ros-bridge/yaml/fiware-ros-bridge-deployment-acr.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/fiware-ros-bridge/yaml/fiware-ros-bridge-service.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/fiware-ros-bridge/yaml/fiware-ros-bridge-configmap.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
export MQTT_YAML_BASE64=""
envsubst < ${PJ_ROOT}/ros/fiware-ros-bridge/yaml/fiware-ros-bridge-secret.yaml > /tmp/fiware-ros-bridge-secret.yaml
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -v /tmp:/tmp -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete /tmp/fiware-ros-bridge-secret.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
rm /tmp/fiware-ros-bridge-secret.yaml
# ### common procedure: undeploy ros-master
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/ros-master/yaml/ros-master-deployment-acr.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
TOKEN=$(cat ${CORE_ROOT}/secrets/auth-tokens.json | jq '.[0].settings.bearer_tokens[0].token' -r)
docker run -it --rm -v ${PJ_ROOT}:${PJ_ROOT} -w ${PJ_ROOT} example_turtlebot3:0.0.1 \
${PJ_ROOT}/tools/deploy_yaml.py --delete ${PJ_ROOT}/ros/ros-master/yaml/ros-master-service.yaml https://api.${DOMAIN} ${TOKEN} ${FIWARE_SERVICE} ${DEPLOYER_SERVICEPATH} ${DEPLOYER_TYPE} ${DEPLOYER_ID}
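# The token-lookup plus `docker run` pattern above repeats for every manifest; it could be wrapped in a small helper function (a hypothetical sketch, not part of the project's tooling):

```shell
# Hypothetical wrapper for the repeated "--delete" pattern above
undeploy() {
  local yaml="$1"
  local token
  token=$(jq -r '.[0].settings.bearer_tokens[0].token' "${CORE_ROOT}/secrets/auth-tokens.json")
  docker run -it --rm -v "${PJ_ROOT}:${PJ_ROOT}" -w "${PJ_ROOT}" example_turtlebot3:0.0.1 \
    "${PJ_ROOT}/tools/deploy_yaml.py" --delete "${yaml}" "https://api.${DOMAIN}" "${token}" \
    "${FIWARE_SERVICE}" "${DEPLOYER_SERVICEPATH}" "${DEPLOYER_TYPE}" "${DEPLOYER_ID}"
}
# usage: undeploy "${PJ_ROOT}/ros/ros-master/yaml/ros-master-service.yaml"
```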
# ## stop minikube on turtlebot3
# ```
# turtlebot3@turtlebot3:~$ sudo minikube stop
# turtlebot3@turtlebot3:~$ sudo minikube delete
# turtlebot3@turtlebot3:~$ sudo rm -rf /etc/kubernetes/
# turtlebot3@turtlebot3:~$ sudo rm -rf $HOME/.minikube/
# turtlebot3@turtlebot3:~$ rm -rf $HOME/.kube/
# ```
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import shutil
shutil.os.listdir('filedemo-raw')
filenames = shutil.os.listdir('filedemo-raw')
filenames = filenames[:10]
for filename in filenames:
print(filename)
# # Separate files by prefix
#
# "a" in one directory, "b" in one...
# ### How to get the first letter?
myname = 'Catherine'
myname[0]
# ### How to copy a file?
shutil.copy('filedemo-raw/a100.txt', 'filedemo-raw/a-files/a100.txt')
shutil.os.listdir('filedemo-raw/a-files/')
# +
### How to build a string from small strings
# -
'filedemo-raw/' + 'a' + '-files.txt'
# ### Put it together
filenames = shutil.os.listdir('filedemo-raw')
for filename in filenames:
# Get the first letter
# Decide the directory to put it in
# use shutil.copy to move it there
pass
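# One possible completion of the skeleton above, with the path logic pulled into a helper so it can be checked without touching the filesystem (the `a-files`, `b-files`, etc. directories are assumed to already exist, as in the copy example earlier):

```python
import shutil

def destination_for(filename, base='filedemo-raw'):
    # 'a100.txt' -> 'filedemo-raw/a-files/a100.txt'
    first_letter = filename[0]
    return base + '/' + first_letter + '-files/' + filename

print(destination_for('a100.txt'))  # filedemo-raw/a-files/a100.txt

# The loop then becomes (commented out so this cell runs without the data):
# for filename in shutil.os.listdir('filedemo-raw'):
#     shutil.copy('filedemo-raw/' + filename, destination_for(filename))
```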
# # Find min, max, average file size
shutil.os.stat('filedemo-raw/a100.txt')
shutil.os.stat('filedemo-raw/a100.txt').st_size
# +
biggest_so_far = 0
filenames = shutil.os.listdir('filedemo-raw')
for filename in filenames:
# Get the stats for this file
# Get the size from the stats
# Remember it if it's the biggest
pass
print(biggest_so_far)
# -
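# A possible completion: collect the sizes into a list so that min, max and average all come out of the same pass (the file-reading part is commented out so the summary logic runs standalone):

```python
def size_summary(sizes):
    # min, max and average of a list of file sizes (in bytes)
    return min(sizes), max(sizes), sum(sizes) / len(sizes)

print(size_summary([120, 300, 60]))  # (60, 300, 160.0)

# With the real files it would be driven like this:
# sizes = [shutil.os.stat('filedemo-raw/' + f).st_size
#          for f in shutil.os.listdir('filedemo-raw')]
# print(size_summary(sizes))
```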
# # Find duplicates
import hashlib
hashlib.md5(b'all these words').hexdigest()
hashlib.md5(b'all these words').hexdigest()
with open('filedemo-raw/a100.txt') as infile:
content = infile.read()
content
content.encode()
hashlib.md5(content.encode()).hexdigest()
# ### Using a defaultdict
from collections import defaultdict
duplicates = defaultdict(list)
duplicates
duplicates['a'].append('b')
duplicates
duplicates['a'].append('c')
duplicates
key = hashlib.md5(content.encode()).hexdigest()
duplicates[key] = []
duplicates[key].append('filedemo-raw/a100.txt')
duplicates
# ### Put it together
#
# +
duplicates = defaultdict(list)
filenames = shutil.os.listdir('filedemo-raw')
for filename in filenames:
    with open('filedemo-raw/' + filename) as infile:
        content = infile.read()
    byte_content = content.encode()
    hashed = hashlib.md5(byte_content).hexdigest()
    # Files whose contents hash to the same value are duplicates
    duplicates[hashed].append(filename)
# Print every group of files with identical contents
for hashed, names in duplicates.items():
    if len(names) > 1:
        print(names)
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 ('ml3950')
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
import tensorflow as tf
import keras
from keras.datasets import fashion_mnist, cifar10
from keras.layers import Dense, Flatten, Normalization, Dropout, Conv2D, MaxPooling2D, RandomFlip, RandomRotation, RandomZoom, BatchNormalization, Activation, InputLayer
from keras.models import Sequential
from keras.losses import SparseCategoricalCrossentropy, CategoricalCrossentropy
from keras.callbacks import EarlyStopping
from keras.utils import np_utils
from keras import utils
import os
from keras.preprocessing.image import ImageDataGenerator
import matplotlib as mpl
import matplotlib.pyplot as plt
import datetime
# -
# # Tensorboard and Transfer Learning
# +
# Load Some Data
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
y_test = np_utils.to_categorical(y_test)
y_train = np_utils.to_categorical(y_train)
# -
# ## Tensorboard
#
# Tensorboard is a tool from TensorFlow that can monitor the results of a model and display them in a nice Tableau-like dashboard view. We can enable tensorboard and add it to our modelling process to get a better view of progress and save on some of the custom charting functions.
#
# ### Create Model
# Set # of epochs
epochs = 10
def create_model():
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
acc = keras.metrics.CategoricalAccuracy(name="accuracy")
pre = keras.metrics.Precision(name="precision")
rec = keras.metrics.Recall(name="recall")
metric_list = [acc, pre, rec]
# #### Add Tensorboard Callback
#
# The tensorboard can be added as a callback while the model is being fit. The primary parameter that matters is `log_dir`, which sets the folder for the logs that the visualizations are built from. The example here is from the tensorflow documentation, generating a new subfolder for each execution. Using this to log the tensorboard data is fine; there's no need to change it without reason.
# ### Launch Tensorboard
#
# In recent versions of VS Code, which I assume all of you have, tensorboard can be used directly in a VS Code tab:
#
# 
#
# The command below launches tensorboard elsewhere, such as Google colab.
#
# Either way, the actual tensorboard feature works the same once launched.
# %load_ext tensorboard
# %tensorboard --logdir logs/fit
# The logdir is wherever the logs are, this is specified in the callback setup.
# +
model = create_model()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=metric_list)
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(x=x_train,
y=y_train,
epochs=epochs,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback])
# -
# ### Tensorboard Contents
#
# The first page of the tensorboard page gives us a nice pretty view of our training progress - this part should be quite straightforward.
#
# #### Tensorboard Images
#
# We can also use the tensorboard to visualize other stuff.
# +
# Sets up a timestamped log directory.
logdir = "logs/train_data/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
# Creates a file writer for the log directory.
file_writer = tf.summary.create_file_writer(logdir)
# -
with file_writer.as_default():
# Don't forget to reshape.
images = np.reshape(x_train[0:25], (-1, 28, 28, 1))
tf.summary.image("25 training data examples", images, max_outputs=25, step=0)
# ## Using Pretrained Models
#
# As we've seen lately, training neural networks can take a really long time. Highly accurate models such as the ones that are used for image recognition in a self driving cars can take multiple computers days or weeks to train. With one laptop we don't really have the ability to get anywhere close to that. Is there any hope of getting anywhere near that accurate?
#
# We can use models that have been trained on large datasets and adapt them to our purposes. By doing this we can benefit from all of that other learning that is embedded into a model without going through a training process that would be impossible with our limited resources.
#
# We will look at using a pretrained model here, and at making modifications to it next time.
#
# #### Functional Models
#
# I have lied to you: I forgot that the pretrained models are not sequential ones (generally, not as a rule), so some of the syntax here is for functional models and may look slightly unfamiliar.
# +
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
BATCH_SIZE = 32
IMG_SIZE = (160, 160)
train_dataset = tf.keras.utils.image_dataset_from_directory(train_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
validation_dataset = tf.keras.utils.image_dataset_from_directory(validation_dir,
shuffle=True,
batch_size=BATCH_SIZE,
image_size=IMG_SIZE)
# -
# ### Download Model
#
# There are several models that are pretrained and available to us to use. VGG16 is one developed to do image recognition, the name stands for "Visual Geometry Group" - a group of researchers at the University of Oxford who developed it, and ‘16’ implies that this architecture has 16 layers. The model got ~93% on the ImageNet test that we mentioned a couple of weeks ago.
#
# 
# +
from keras.applications.vgg16 import VGG16
from keras.layers import Input, Flatten, Dense
from keras.models import Model
input_tensor = Input(shape=(160, 160, 3))
vgg = VGG16(include_top=False, weights='imagenet', input_tensor=input_tensor)
for layer in vgg.layers:
    layer.trainable = False  # freeze the pretrained convolutional base
x = Flatten()(vgg.output)
prediction = Dense(1, activation='sigmoid')(x)
model = Model(inputs=vgg.input, outputs=prediction)
model.summary()
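# Because the head is a single sigmoid unit, the matching loss is binary cross-entropy rather than the categorical variant. As a quick sanity check of what that loss computes (a standalone sketch using only the standard library, not part of the Keras pipeline):

```python
import math

def binary_crossentropy(y_true, p):
    # standard binary cross-entropy for one predicted probability p in (0, 1)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# a confident correct prediction gives a small loss;
# a maximally uncertain one gives log(2) ~= 0.693
print(binary_crossentropy(1, 0.9))
print(binary_crossentropy(1, 0.5))
```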
# +
model.compile(optimizer='adam',
              loss='binary_crossentropy',  # single sigmoid unit -> binary loss
              metrics=metric_list)
log_dir = "logs/fit/VGG" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
model.fit(train_dataset,
          epochs=epochs,
          validation_data=validation_dataset,
          callbacks=[tensorboard_callback])
model.evaluate(validation_dataset)
# -
# ## More Complex Data
#
# We can use the flower-photos data for a more complex dataset and a more interesting example in terms of accuracy.
# +
import pathlib
import PIL
dataset_url = "https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz"
data_dir = tf.keras.utils.get_file(origin=dataset_url,
fname='flower_photos',
untar=True)
data_dir = pathlib.Path(data_dir)
#Flowers
batch_size = 32
img_height = 180
img_width = 180
train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
val_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
class_names = train_ds.class_names
print(class_names)
# +
input_tensor = Input(shape=(180, 180, 3))
vgg = VGG16(include_top=False, weights='imagenet', input_tensor=input_tensor)
for layer in vgg.layers:
    layer.trainable = False
x = Flatten()(vgg.output)
prediction = Dense(5)(x)  # logits for the 5 flower classes
model = Model(inputs=vgg.input, outputs=prediction)
model.summary()
# +
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer="adam",
              metrics=[keras.metrics.SparseCategoricalAccuracy(name="accuracy")])
log_dir = "logs/fit/VGG" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3, restore_best_weights=True)
model.fit(train_ds,
          epochs=epochs,
          verbose=1,
          validation_data=val_ds,
          callbacks=[tensorboard_callback, callback])
| img_opt_tensorboard.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Panel stock ticker example
#
# A few cells from https://panel.pyviz.org/gallery/apis/stocks_hvplot.html
# +
# This cell will get executed at build time and then removed
import ipydeps
ipydeps.pip('hvplot')
import bokeh
bokeh.sampledata.download()
# +
import panel as pn
import pandas as pd
import hvplot.pandas
from bokeh.sampledata import stocks
pn.extension()
# +
title = '## Stock Explorer hvPlot'
tickers = ['AAPL', 'FB', 'GOOG', 'IBM', 'MSFT']
def get_df(ticker, window_size):
    df = pd.DataFrame(getattr(stocks, ticker))
    df['date'] = pd.to_datetime(df.date)
    return df.set_index('date').rolling(window=window_size).mean().reset_index()

def get_plot(ticker, window_size):
    df = get_df(ticker, window_size)
    return df.hvplot.line('date', 'close', grid=True)
# +
interact = pn.interact(get_plot, ticker=tickers, window_size=(1, 21, 5))
pn.Row(
pn.Column(title, interact[0]),
interact[1]
).servable()
# -
| nb2dashboard/examples/panel-stock.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#check that the notebook is pointing to your anaconda version, not the master Anaconda folder
#import sys
#sys.path
# -
# ## A note on adjusting the rules
# You have to restart the kernel and re-import the libraries every time you add rules to the knowledge_base files.
# +
import medspacy
import spacy
import cov_bsv
#from cov_bsv import visualize_doc
from medspacy.visualization import visualize_ent
from medspacy.ner import TargetMatcher, TargetRule
from spacy import displacy
import warnings
warnings.filterwarnings("ignore")
# +
with open("O:\\VINCI_COVIDNLP\\va_external_c19_tests\\examples\\text_4.txt", 'r') as reader:
    text = reader.read()
print(type(text))
#print(text)
# -
# ## Sections
nlp = cov_bsv.load(enable=["tagger", "parser", "concept_tagger", "target_matcher", "sectionizer"])
doc = nlp(text)
visualize_ent(doc)
#visualize_dep(doc)
for ent in doc.ents:
    print(ent)
    print("Uncertain:", ent._.is_uncertain)
    print("Negated:", ent._.is_negated)
    print("Positive:", ent._.is_positive)
    print("Tested outside VA:", ent._.is_external)
    print("Experienced by someone else:", ent._.is_other_experiencer)
    print()
# ### Example of the difference between the default spaCy and the medspaCy tokenizers
# +
from medspacy.custom_tokenizer import create_medspacy_tokenizer
nlp = spacy.blank("en")
medspacy_tokenizer = create_medspacy_tokenizer(nlp)
default_tokenizer = nlp.tokenizer
example = r'December(23rd)' #r'Jul 22, 2020' #text[2] #r'10-18-20' #r'Pt c/o n;v;d h/o chf+cp'
print("spacy tokenizer:")
print(list(default_tokenizer(example)))
print('\n')
print("medspacy tokenizer:")
print(list(medspacy_tokenizer(example)))
#for ent in doc.ents:
#print(ent.text, ent.label_, sep=" -> ")
| Sections.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Fuel consumption project
# - In this project we are going to analyze the parameters that affect fuel consumption
# - We want to build a model that predicts fuel consumption based on car brand, engine power, acceleration, number of cylinders and fuel type
# - We will use TensorFlow to build a linear regression model to predict fuel consumption
# ## Data exploring
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import seaborn as sns
from matplotlib import cm
df=pd.read_csv("clean_cars")
df.head()
# Column information:
# - car_brand - brand of a car
# - car_model - model of a car
# - fuel - type of fuel
# - engine_capacity - Engine capacity, cc
# - cylinder - number of cylinder
# - kw - engine power in kilowatt
# - max_speed - top speed of a car (km/h)
# - lenght - length of a car (mm); note the column is spelled "lenght" in the dataset
# - width - width of a car (mm)
# - height - height of a car (mm)
# - weight - car weight (kg)
# - fuel_consumption - combined fuel consumption (l/100 km)
df.shape
df.info()
df.describe()
df.isnull().sum()
#droping null values
df2=df.dropna()
df3=df2.copy()
# ## Finding outliers
plt.boxplot(df3['engine_capacity'])
#using the IQR technique
max_threshold=2457+ 1.5*(2457 - 1598)
max_threshold
df4=df3[df3.engine_capacity<max_threshold]
lower_threshold= 1598 - 1.5*(2457 - 1598)
lower_threshold
df5=df4[df4.engine_capacity>lower_threshold]
plt.boxplot(df5["kw"])
max_th=130+ 1.5*(130 - 76)
max_th
df6=df5[df5.kw<max_th]
plt.boxplot(df6["acceleration"])
max_thr=12.5+ 1.5*(12.5 - 8.8)
max_thr
df7=df6[df6.acceleration<max_thr]
plt.boxplot(df7["max_speed"])
max_thre=216+ 1.5*(216 - 179)
max_thre
lower_t= 179 - 1.5*(216 - 179)
lower_t
df8=df7[(df7.max_speed<max_thre)&(df7.max_speed>lower_t)]
plt.boxplot(df8["lenght"])
max_thres=4690+ 1.5*(4690 - 4236)
max_thres
lower_th= 4236 - 1.5*(4690- 4236)
lower_th
df9=df8[(df8.lenght<max_thres)&(df8.lenght>lower_th)]
plt.boxplot(df9["width"])
max_thresh=1815+ 1.5*(1815 - 1706)
max_thresh
lower_thr= 1706 - 1.5*(1815- 1706)
lower_thr
df10=df9[(df9.width<max_thresh)&(df9.width>lower_thr)]
plt.boxplot(df10["height"])
max_thresho=1507+ 1.5*(1507 - 1420)
max_thresho
lower_thre= 1420 - 1.5*(1507- 1420)
lower_thre
df11=df10[(df10.height<max_thresho)&(df10.height>lower_thre)]
plt.boxplot(df11["weight"])
max_threshol=1757+ 1.5*(1757 - 1392.5)
max_threshol
lower_thres= 1392 - 1.5*(1757 - 1392.5)
lower_thres
df12=df11[(df11.weight<max_threshol)&(df11.weight>lower_thres)]
plt.boxplot(df12["fuel_consumption"])
max_=8.2+ 1.5*(8.2 - 5.7)
max_
df13=df12[df12.fuel_consumption<max_]
df13.describe()
df13.shape
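# The outlier filtering above repeats the same IQR arithmetic for every column; a small helper capturing the rule would avoid the chain of one-off variable names (a sketch, using the `engine_capacity` quartiles from `describe()` above):

```python
def iqr_bounds(q1, q3, factor=1.5):
    # Tukey's rule: values outside [q1 - factor*IQR, q3 + factor*IQR]
    # are treated as outliers
    iqr = q3 - q1
    return q1 - factor * iqr, q3 + factor * iqr

# e.g. for engine_capacity, with q1=1598 and q3=2457 as above:
lo, hi = iqr_bounds(1598, 2457)
print(lo, hi)  # 309.5 3745.5
```

A column could then be filtered with `df[(df.col > lo) & (df.col < hi)]`.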
# ## Data visualization
#Countplot
plt.figure(figsize=(19,6))
plt.xticks(rotation=65)
sns.countplot(x="car_brand",hue="fuel",data=df13)
plt.title("Types of fuel for each car ")
plt.show()
title = 'Speed dependence on engine power'
plt.figure(figsize=(10,6))
sns.scatterplot(df13.kw,df13.max_speed,hue=df13.cylinder).set_title(title)
plt.ioff()
title = 'Dependence of car weight and engine power on fuel consumption'
plt.figure(figsize=(10,6))
sns.scatterplot(df13.weight,df13.kw,hue=df13.fuel_consumption).set_title(title)
plt.ioff()
# Average top speed
location=df13.groupby("car_brand")["max_speed"].mean().reset_index("car_brand")
color = cm.inferno_r(np.linspace(.4, .8, 30))
location=location.sort_values("max_speed" , ascending=[True])[-30:]
location.plot.barh(x="car_brand", y='max_speed', color=color , figsize=(15,10))
plt.title("Average top speed of top 30 car brands")
plt.xlabel("km/h")
plt.show()
# Average acceleration
location=df13.groupby("car_brand")["acceleration"].mean().reset_index("car_brand")
color = cm.copper(np.linspace(.4, .8, 30))
location=location.sort_values("acceleration" , ascending=[False])[-30:]
location.plot.barh(x="car_brand", y='acceleration', color=color , figsize=(15,10))
plt.title("Average acceleration of top 30 car brands")
plt.xlabel("sec")
plt.show()
# Average weight
location=df13.groupby("car_brand")["weight"].mean().reset_index("car_brand")
color = cm.Greens(np.linspace(.4, .8, 30))
location=location.sort_values("weight" , ascending=[True])[-30:]
location.plot.barh(x="car_brand", y='weight', color=color , figsize=(15,10))
plt.title("Average weight of top 30 car brands")
plt.xlabel("kg")
plt.show()
# Average fuel consumption
location=df13.groupby("car_brand")["fuel_consumption"].mean().reset_index("car_brand")
color = cm.Oranges(np.linspace(.4, .8, 30))
location=location.sort_values("fuel_consumption" , ascending=[True])[-30:]
location.plot.barh(x="car_brand", y='fuel_consumption', color=color , figsize=(15,10))
plt.title("Average fuel consumption of top 30 car brands")
plt.xlabel("l/100 km")
plt.show()
#top 10 car models with lowest fuel consumption
min_consumption=df13.groupby(["car_brand","car_model"])["fuel_consumption"].min().sort_values(ascending=[True])[:10]
min_consumption
#top 10 car models with the highest fuel consumption
max_consumption=df13.groupby(["car_brand","car_model"])["fuel_consumption"].max().sort_values(ascending=[False])[:10]
max_consumption
# ## Analyzing audi cars
audi=df13[df13["car_brand"]=="Audi"]
#fuel consumption by models
plt.figure(figsize=(18,8))
sns.stripplot(x="car_model",y="fuel_consumption",data=audi)
plt.title("Fuel consumption by audi car models ")
plt.show()
#average accelaration
location=audi.groupby("car_model")["acceleration"].mean().reset_index("car_model")
color = cm.spring(np.linspace(.4, .8, 30))
location=location.sort_values("acceleration" , ascending=[True])[:10]
location.plot.bar(x="car_model", y='acceleration', color=color , figsize=(15,10))
plt.title("Average acceleration by car models ")
plt.ylabel("sec")
plt.show()
#top speed by car model
location=audi.groupby("car_model")["max_speed"].mean().reset_index("car_model")
color = cm.winter(np.linspace(.4, .8, 30))
location=location.sort_values("max_speed" , ascending=[False])[:10]
location.plot.bar(x="car_model", y='max_speed', color=color , figsize=(15,10))
plt.title("Top speed by car models ")
plt.ylabel("km/h")
plt.show()
# visualizing correlations
plt.figure(figsize=(10,10))
sns.heatmap(df13.corr(), annot=True, cmap='coolwarm')
#droping unnecessary features
df14=df13.drop(["engine_capacity","width","height","lenght","max_speed","car_model"],axis=1)
# visualizing correlations
plt.figure(figsize=(10,10))
sns.heatmap(df14.corr(), annot=True, cmap='coolwarm')
# ## Preparing data
#counting brands
brand_stat=df14["car_brand"].value_counts()
brand_stat[-10:]
# We will write a function that puts all brands that appear fewer than 10 times into an "other" category.
less_10=brand_stat[brand_stat<10]
len(less_10)
df14["car_brand"]=df13["car_brand"].apply(lambda x: "other" if x in less_10 else x)
# encode fuel type: Diesel as 0, Petrol as 1
df14.replace({"Diesel":0 ,"Petrol":1}, inplace=True)
#function for creating dummy columns
dummies = pd.get_dummies(df14["car_brand"])
dummies.head(5)
df15 = pd.concat([df14,dummies.drop("other",axis=1)],axis=1)
df15.head()
df16 = df15.drop("car_brand",axis=1)
df16.head()
df16.shape
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
X=df16.drop(["fuel_consumption"],axis=1)
y=df16["fuel_consumption"]
from sklearn.model_selection import train_test_split
train_X,test_X,train_y,test_y=train_test_split(X,y,train_size=0.8,random_state=10)
# ### Scaling
# min-max scaling
from sklearn.preprocessing import MinMaxScaler, RobustScaler
scaler=MinMaxScaler()
train_X = scaler.fit_transform(train_X)
test_X= scaler.transform(test_X)
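# `MinMaxScaler` maps each feature to [0, 1] using the per-feature min and max learned from the training split only, which is why we `fit_transform` on train and merely `transform` on test. The formula, sketched without sklearn for a single feature:

```python
def min_max_scale(values):
    # x' = (x - min) / (max - min), computed per feature
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max_scale([0, 5, 10]))  # [0.0, 0.5, 1.0]
```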
# ### Building model
# +
linear_model = tf.keras.Sequential([
layers.Dense(128, activation='relu',input_shape=train_X[0].shape),
layers.Dense(64, activation='relu'),
layers.Dense(units=1,activation="linear"), #linear regression
layers.BatchNormalization()
])
linear_model.compile(
optimizer=tf.optimizers.Adam(learning_rate=0.01),
loss="mse")
# -
# Fit the model, tracking loss on the held-out validation data
history=linear_model.fit(train_X,train_y,epochs=100,validation_data=(test_X,test_y),verbose=1)
history
# ### Evaluating model
linear_model.evaluate(
test_X, test_y, verbose=1)
# ### Visualizing model performance
def learning_curve(history, epoch):
    epoch_range = range(1, epoch+1)
    plt.plot(epoch_range, history.history["loss"])
    plt.plot(epoch_range, history.history["val_loss"])
    plt.title("Model loss")
    plt.xlabel("Epoch")
    plt.ylabel("Loss")
    plt.legend(["Train", "Val"])
    plt.show()
learning_curve(history,100)
# ### Visualizing predictions
pred= linear_model.predict(test_X).flatten()
plt.figure(figsize=(10,10))
plt.scatter(test_y,pred)
plt.xlabel("True values")
plt.ylabel("Predicted values")
plt.axis("equal")
plt.axis("square")
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
a=plt.plot([-100,100],[-100,100])
# In a perfect world all values would lie on the line, but we can see that our model predicted reasonably well.
# # Results
result = pd.DataFrame({'Actual': test_y,
'Predicted': pred})
result.sample(10)
| cars/fuel_consumption_predicting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Cifar10 Outlier Detection
# 
#
# In this example we will deploy an image classification model along with an outlier detector trained on the same dataset. For in depth details on creating an outlier detection model for your own dataset see the [alibi-detect project](https://github.com/SeldonIO/alibi-detect) and associated [documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/). You can find details for this [CIFAR10 example in their documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/examples/od_vae_cifar10.html) as well.
#
#
# Prerequisites:
#
# * Running cluster with
# * [kfserving installed](https://github.com/kubeflow/kfserving/blob/master/README.md)
# * [Knative eventing installed](https://knative.dev/docs/install/)
#
# !pip install -r requirements_notebook.txt
# ## Setup Resources
# Enabled eventing on default namespace. This will activate a default Knative Broker.
# !kubectl label namespace default knative-eventing-injection=enabled
# Create a Knative service to log events it receives. This will be the example final sink for outlier events.
# !pygmentize message-dumper.yaml
# !kubectl apply -f message-dumper.yaml
# Create the Kfserving image classification model for Cifar10. We add in a `logger` for requests - the default destination is the namespace Knative Broker.
# !pygmentize cifar10.yaml
# !kubectl apply -f cifar10.yaml
# Create the pretrained VAE Cifar10 Outlier Detector. We forward replies to the message-dumper we started.
# !pygmentize cifar10od.yaml
# !kubectl apply -f cifar10od.yaml
# Create a Knative trigger to forward logging events to our Outlier Detector.
# !pygmentize trigger.yaml
# !kubectl apply -f trigger.yaml
# Get the IP address of the Istio Ingress Gateway. This assumes you have installed istio with a LoadBalancer.
CLUSTER_IPS=!(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
CLUSTER_IP=CLUSTER_IPS[0]
print(CLUSTER_IP)
SERVICE_HOSTNAMES=!(kubectl get inferenceservice tfserving-cifar10 -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_CIFAR10=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_CIFAR10)
SERVICE_HOSTNAMES=!(kubectl get ksvc vae-outlier -o jsonpath='{.status.url}' | cut -d "/" -f 3)
SERVICE_HOSTNAME_VAEOD=SERVICE_HOSTNAMES[0]
print(SERVICE_HOSTNAME_VAEOD)
# +
import matplotlib.pyplot as plt
import numpy as np
import json
import tensorflow as tf
tf.keras.backend.clear_session()
from alibi_detect.od.vae import OutlierVAE
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.visualize import plot_feature_outlier_image
import requests
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
def show(X):
    plt.imshow(X.reshape(32, 32, 3))
    plt.axis('off')
    plt.show()

def predict(X):
    formData = {
        'instances': X.tolist()
    }
    headers = {}
    headers["Host"] = SERVICE_HOSTNAME_CIFAR10
    res = requests.post('http://'+CLUSTER_IP+'/v1/models/tfserving-cifar10:predict', json=formData, headers=headers)
    if res.status_code == 200:
        return classes[np.array(res.json()["predictions"])[0].argmax()]
    else:
        print("Failed with ", res.status_code)
        return []

def outlier(X):
    formData = {
        'instances': X.tolist()
    }
    headers = {"Alibi-Detect-Return-Feature-Score": "true", "Alibi-Detect-Return-Instance-Score": "true"}
    headers["Host"] = SERVICE_HOSTNAME_VAEOD
    res = requests.post('http://'+CLUSTER_IP+'/', json=formData, headers=headers)
    if res.status_code == 200:
        od = res.json()
        od["data"]["feature_score"] = np.array(od["data"]["feature_score"])
        od["data"]["instance_score"] = np.array(od["data"]["instance_score"])
        return od
    else:
        print("Failed with ", res.status_code)
        return []
# -
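# Under the hood the detector compares each instance score against a threshold fixed when the detector was configured; a minimal sketch of that final decision (the threshold value below is made up for illustration, not taken from the deployed detector):

```python
def flag_outliers(instance_scores, threshold):
    # mark each instance whose score exceeds the threshold as an outlier (1)
    return [1 if s > threshold else 0 for s in instance_scores]

print(flag_outliers([0.002, 0.9, 0.04], threshold=0.05))  # [0, 1, 0]
```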
# ## Normal Prediction
idx = 1
X = X_train[idx:idx+1]
show(X)
predict(X)
# Let's check the message dumper for an outlier detection prediction. This should be false.
# res=!kubectl logs $(kubectl get pod -l serving.knative.dev/configuration=message-dumper -o jsonpath='{.items[0].metadata.name}') user-container
data = []
for i in range(0, len(res)):
    if res[i] == 'Data,':
        data.append(res[i+1])
j = json.loads(json.loads(data[0]))
print("Outlier", j["data"]["is_outlier"] == [1])
# ## Outlier Prediction
np.random.seed(0)
X_mask, mask = apply_mask(X.reshape(1, 32, 32, 3),
mask_size=(10,10),
n_masks=1,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
show(X_mask)
predict(X_mask)
# Now let's check the message dumper for a new message. This should show we have found an outlier.
# res=!kubectl logs $(kubectl get pod -l serving.knative.dev/configuration=message-dumper -o jsonpath='{.items[0].metadata.name}') user-container
data = []
for i in range(0, len(res)):
    if res[i] == 'Data,':
        data.append(res[i+1])
j = json.loads(json.loads(data[1]))
print("Outlier", j["data"]["is_outlier"] == [1])
# We will now call our outlier detector directly and ask for the feature scores to gain more information about why it predicted this instance was an outlier.
od_preds = outlier(X_mask)
# We now plot those feature scores returned by the outlier detector along with our original image.
plot_feature_outlier_image(od_preds,
X_mask,
X_recon=None)
# ## Tear Down
# !kubectl delete -f cifar10.yaml
# !kubectl delete -f cifar10od.yaml
# !kubectl delete -f trigger.yaml
# !kubectl delete -f message-dumper.yaml
| docs/samples/outlier-detection/alibi-detect/cifar10/cifar10_outlier.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Py3 research env
# language: python
# name: py3_research
# ---
# # Pictures compression using SVD
# In this exercise you are supposed to study how SVD could be used in image compression.
#
# _Based on open course in [Numerical Linear Algebra](https://github.com/oseledets/nla2018) by <NAME>_
# #### 1. Compute the singular values of some predownloaded image (via the code provided below) and plot them. Do not forget to use logarithmic scale.
# +
# If you are using colab, uncomment this cell
# # ! wget https://raw.githubusercontent.com/neychev/harbour_ml2020/master/day04_SVM_PCA/data/waiting.jpeg
# # ! wget https://raw.githubusercontent.com/neychev/harbour_ml2020/master/day04_SVM_PCA/data/another_one.png
# # ! wget https://raw.githubusercontent.com/neychev/harbour_ml2020/master/day04_SVM_PCA/data/simpsons.jpg
# # ! mkdir data
# # ! mv -t data waiting.jpeg another_one.png simpsons.jpg
# +
# %matplotlib inline
import matplotlib.pyplot as plt
from PIL import Image, ImageDraw
import numpy as np
face_raw = Image.open('data/waiting.jpeg')
face = np.array(face_raw).astype(np.uint8)
h,w,c = face.shape
print('Image shape: {} x {} x {}'.format(h,w,c))
plt.imshow(face_raw)
plt.xticks(())
plt.yticks(())
plt.title('Original Picture')
plt.show()
# -
# But first, let's try with simple synthetic data.
# Here comes the example matrix:
simple_matrix = np.arange(20).reshape((4, 5))
u, S, v = np.linalg.svd(simple_matrix, full_matrices=False)
# +
# Image is saved as a 3-dimensional array with shape H x W x C (height x width x channels)
Rf = face[:,:,0]
Gf = face[:,:,1]
Bf = face[:,:,2]
# Compute SVD and plot the singular values for different image channels
u, Rs, vh = np.linalg.svd(Rf, full_matrices=False)
u, Gs, vh = np.linalg.svd(Gf, full_matrices=False)
u, Bs, vh = np.linalg.svd(Bf, full_matrices=False)
plt.figure(figsize=(12,7))
plt.plot(Rs, 'ro')
plt.plot(Gs, 'g.')
plt.plot(Bs, 'b:')
plt.yscale('log')
plt.ylabel("Singular values")
plt.xlabel("Singular value order")
plt.show()
# -
# #### 2. Complete a function ```compress```, that performs SVD and truncates it (using $k$ singular values/vectors). See the prototype below.
#
# Note, that in case when your images are not grayscale you have to split your image to channels and work with matrices corresponding to different channels separately.
#
# Plot approximate reconstructed image $M_\varepsilon$ of your favorite image such that $rank(M_\varepsilon) = 5, 20, 50$ using ```plt.subplots```.
def compress(image, k):
    """
    Perform SVD decomposition and truncate it (using k singular values/vectors)
    Parameters:
        image (np.array): input image (probably, colourful)
        k (int): approximation rank
    --------
    Returns:
        reconst_matrix (np.array): reconstructed matrix (tensor in colourful case)
        s (np.array): array of singular values
    """
    image2 = image.copy()
    Rf = image2[:,:,0]
    Gf = image2[:,:,1]
    Bf = image2[:,:,2]
    # compute per-channel SVD for input image
    # <your code here>
    # reconstruct the input image with the given approximation rank
    reduced_im = np.zeros((image.shape), np.uint8)
    # <your code here>
    # save the array of top-k singular values
    s = np.zeros((len(Gs), 3))
    # <your code here>
    return reduced_im.copy(), s
plt.figure(figsize=(18,12))
for i, k in enumerate([350, 300, 250, 200, 150, 100, 50, 20, 5]):
    plt.subplot(3, 3, i+1)
    im, s = compress(face, k)
    plt.imshow(Image.fromarray(im, "RGB"))
    plt.xticks(())
    plt.yticks(())
    plt.title("{} greatest SV".format(k))
# #### 3. Plot the following two figures for your favorite picture
# * How relative error of approximation depends on the rank of approximation?
# * How compression rate in terms of storing information ((singular vectors + singular numbers) / total size of image) depends on the rank of approximation?
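# For a single h x w channel, a rank-k truncated SVD keeps k left singular vectors of length h, k right singular vectors of length w, and k singular values, i.e. k*(h + w + 1) numbers versus h*w for the raw channel. A sketch of that count (counting stored scalars, not bytes; this is one way to define the compression rate asked for above, not the only one):

```python
def svd_storage(h, w, k, channels=3):
    # scalars kept by a rank-k truncation per channel:
    # U (h x k) + singular values (k) + V^T (k x w)
    return channels * k * (h + w + 1)

def compression_rate(h, w, k, channels=3):
    # stored scalars relative to the raw image size
    return svd_storage(h, w, k, channels) / (channels * h * w)

print(svd_storage(200, 100, 10))       # 9030
print(compression_rate(200, 100, 10))  # 0.1505
```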
# +
# fancy progress bar
from tqdm.auto import tqdm
k_list = range(5, 386, 5)
rel_err = []
info = []
for k in tqdm(k_list, leave=False):
    img, s = compress(face, k)
    current_relative_error = # MSE(img, face) / l2_norm(face)
    current_information = # U(image_height x K) @ S(diag KxK) @ V^T(K x image_width)
    rel_err.append(current_relative_error)
    info.append(current_information)
plt.figure(figsize=(12,7))
plt.subplot(2,1,1)
plt.title("Memory volume plot")
plt.xlabel("Rank")
plt.ylabel("Bytes")
plt.plot(k_list, info)
plt.subplot(2,1,2)
plt.title("Relative error plot")
plt.xlabel("Rank")
plt.ylabel("Rel err value")
plt.plot(k_list, rel_err)
plt.tight_layout()
plt.show()
# -
# #### 4. Consider the following two pictures. Compute their approximations (with the same rank, or relative error). What do you see? Explain results.
# +
image_raw1 = Image.open('data/another_one.png')
image_raw2 = Image.open('data/simpsons.jpg')
image1 = np.array(image_raw1).astype(np.uint8)
image2 = np.array(image_raw2).astype(np.uint8)
plt.figure(figsize=(18, 6))
plt.subplot(1,2,1)
plt.imshow(image_raw1)
plt.title('One Picture')
plt.xticks(())
plt.yticks(())
plt.subplot(1,2,2)
plt.imshow(image_raw2)
plt.title('Another Picture')
plt.xticks(())
plt.yticks(())
plt.show()
# -
# ### Same Rank
# +
# Your code is here
im1, s = compress(image1, 100)
im2, s = compress(image2, 100)
plt.figure(figsize=(18,6))
plt.subplot(1,2,1)
plt.imshow(Image.fromarray(im1, "RGB"))
plt.xticks(())
plt.yticks(())
plt.subplot(1,2,2)
plt.imshow(Image.fromarray(im2, "RGB"))
plt.xticks(())
plt.yticks(())
plt.show()
# -
# ### Same Relative Error
# +
k_list = range(5,500,1)
rel_err1 = []
rel_err2 = []
relative_error_threshold = 0.15
for k in tqdm(k_list):
    image1_compressed, s = compress(image1, k)
    image2_compressed, s = compress(image2, k)
    relative_error_1 = # MSE(image1_compressed, image1) / l2_norm(image1)
    relative_error_2 = # MSE(image2_compressed, image2) / l2_norm(image2)
    rel_err1.append(relative_error_1)
    rel_err2.append(relative_error_2)
# find the indices
idx1 = int(np.argwhere(np.diff(np.sign(np.array(rel_err1) - relative_error_threshold))).flatten())
idx2 = int(np.argwhere(np.diff(np.sign(np.array(rel_err2) - relative_error_threshold))).flatten())
print("K1 = {}; K2 = {}".format(k_list[idx1], k_list[idx2]))
plt.figure(figsize=(12,7))
plt.plot(k_list[idx1], rel_err1[idx1], 'ro')
plt.plot(k_list[idx2], rel_err2[idx2], 'ro')
plt.title("Rel err for 2 pics")
plt.xlabel("Rank")
plt.ylabel("Rel error val")
plt.plot(k_list, rel_err1, label="Image 1")
plt.plot(k_list, rel_err2, label="Image 2")
plt.plot(k_list, [relative_error_threshold]*len(k_list),":",)
plt.legend()
plt.show()
# +
image1_compressed, s = compress(image1, <find the value>)
image2_compressed, s = compress(image2, <find the value>)
plt.figure(figsize=(18,6))
plt.subplot(1,2,1)
plt.imshow(Image.fromarray(image1_compressed, "RGB"))
plt.xticks(())
plt.yticks(())
plt.subplot(1,2,2)
plt.imshow(Image.fromarray(image2_compressed, "RGB"))
plt.xticks(())
plt.yticks(())
plt.show()
| day04_SVM_PCA/04_extra_pictures_svd.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="KCI6pSeGKjAQ" outputId="ee52d92f-31e2-4677-8ccd-96738300f0ac"
# !pip install transformers
# + id="x1hlIAqzL4Zn"
import numpy as np
import pandas as pd
import json
# + id="DBdfxHvZK-E0"
import torch
from transformers import BertModel, BertTokenizer, BertConfig, BertForMaskedLM, BertForSequenceClassification
# + id="S8bOlEQaLJRr"
#bert=BertModel.from_pretrained("bert-base-chinese")
# + id="UkBLDvXhL5M7" colab={"base_uri": "https://localhost:8080/", "height": 67, "referenced_widgets": ["4f836b114a0147be801be93f9188fd75", "878eff1ab6fd487bad32c292385c189d", "cec3d74f550441c6a785430e0c33c277", "7a24e0583557417e80c78f7a985f5777", "<KEY>", "<KEY>", "ee92e130ea104eb1833243064ba41b08", "e46f0476fdfc48a7802c001926367f5f"]} outputId="4e9f2f94-621a-4157-823c-13d4d8b3dd51"
tokenizer=BertTokenizer.from_pretrained("bert-base-chinese")
# + id="BD9yyiMRomlr"
dict = { # custom BertConfig
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.15, # increased dropout
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"position_embedding_type": "absolute",
"type_vocab_size": 2,
"vocab_size": 21128
}
with open("config.json", 'w') as file_obj:
    json.dump(dict, file_obj)
# + id="a-jxPydVMEaz"
model_config=BertConfig.from_json_file("config.json")
# + colab={"base_uri": "https://localhost:8080/"} id="RzJ15HLxnnuR" outputId="efaa0dcf-48cb-4320-ab5f-989c61ca27bc"
model_config
# + colab={"base_uri": "https://localhost:8080/"} id="jJ_EGfQCh5xb" outputId="fe2177fc-1568-4f28-ee9f-071a3d427296"
from google.colab import drive
drive.mount('/content/drive')
# + id="tEt3KGYPi9gt"
path = '/content/drive/My Drive/data_sampled_20000.csv'
# + id="hbzYZWmejBI4"
def label_to_category(label):  # map a label to a class index
    if label == '词':
        return 0
    elif label == '诗':
        return 1
    elif label == '文言文':
        return 2
    elif label == '新闻':
        return 3
    elif label == '期刊':
        return 4
    else:
        print("Unexpected label format")
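# An equivalent, more compact mapping could use a dict (a sketch, not part of the original notebook; the five labels are the dataset's categories: ci poetry, shi poetry, classical Chinese, news, journal; an unseen label raises a KeyError instead of only printing a warning):

```python
CATEGORY_IDS = {'词': 0, '诗': 1, '文言文': 2, '新闻': 3, '期刊': 4}

def label_to_category_compact(label):
    # KeyError on an unknown label makes bad rows fail loudly
    return CATEGORY_IDS[label]

print(label_to_category_compact('诗'))  # 1
```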
# + id="htQddilSjEpA"
def get_data(file_path):  # read the text and labels from the dataset
    data = pd.read_csv(file_path)
    #ci = data[data.类别=='词'].sample(n=20000,random_state=10, axis=0)
    #poem = data[data.类别=='诗'].sample(n=20000,random_state=12, axis=0)
    #wyw = data[data.类别=='文言文'].sample(n=20000,random_state=15, axis=0)
    #news = data[data.类别=='新闻'].sample(n=20000,random_state=16, axis=0)
    #journal = data[data.类别=='期刊'].sample(n=20000,random_state=18, axis=0)
    #print(len(ci),len(poem),len(wyw),len(news),len(journal))
    #temp = pd.concat([ci,poem,wyw,news,journal])
    #temp = temp[['文本','类别']]
    #temp.to_csv('/content/drive/My Drive/data_sampled_20000.csv')
    content = data['文本']
    labels = np.asarray([label_to_category(label) for label in data['类别']])
    return content, labels
content, labels = get_data(path)
# + colab={"base_uri": "https://localhost:8080/"} id="YOXn1r48PT6b" outputId="170fd3aa-85d9-4e69-e0bd-0f0b25c25add"
content
# + id="Ct0xXgdeMQ2b"
data=[]
# + id="f2ro5jI-RFtj"
for sentence in content:
    sentence = sentence[:128]
    data.append(tokenizer.encode(sentence))  # convert each sentence to token ids from the BERT pretrained vocabulary
del content
# + colab={"base_uri": "https://localhost:8080/"} id="bcOAiwkc01Hh" outputId="662ae225-7494-491d-ebca-51dcac16c236"
data[:5]
# + id="K_ugpiNVgAJJ"
import keras
X_padded = keras.preprocessing.sequence.pad_sequences(data, maxlen=128, dtype="long", truncating="post", padding="post")  # pad/truncate all sequences to length 128
del data
# + colab={"base_uri": "https://localhost:8080/"} id="7x7-07fWgmbp" outputId="5ca57218-7281-4f68-9c9a-ce5a5feeff48"
X_padded[:5]
# + id="-ewSQwrOjm8d"
from sklearn.model_selection import train_test_split
#X_padded = np.load('/content/drive/My Drive/X_padded_bert.npy')
X_train, X_test, y_train, y_test = train_test_split(X_padded, labels, test_size=0.4, random_state=21)  # train/validation/test split is 6:2:2
X_val, X_test, y_val, y_test = train_test_split(X_test, y_test, test_size=0.5, random_state=21)
del X_padded
# + colab={"base_uri": "https://localhost:8080/"} id="NNTMbB2MpFId" outputId="5069b317-5713-427b-90eb-fe2535be8691"
print(len(X_train),len(X_val),len(X_test))
# + id="L4IshCubg6DZ"
def get_masks(data):  # build BERT attention masks (1.0 for real tokens, 0.0 for padding)
    attention_masks = []
    for seq in data:
        seq_mask = [float(i > 0) for i in seq]
        attention_masks.append(seq_mask)
    return attention_masks
# + id="BvlKAbK0hXUL"
train_masks = get_masks(X_train)
# + id="DOZMO-Esh4bw"
val_masks = get_masks(X_val)
# + id="6_Vqb2ov5wy9"
test_masks = get_masks(X_test)
# + id="VehXZzHNh_na"
import torch.utils.data as Data
x_train_tensor = torch.LongTensor(X_train)
y_train_tensor = torch.LongTensor(y_train)
del X_train, y_train
x_val_tensor = torch.LongTensor(X_val)
y_val_tensor = torch.LongTensor(y_val)
del X_val, y_val
x_test_tensor = torch.LongTensor(X_test)
y_test_tensor = torch.LongTensor(y_test)
del X_test ,y_test
train_masks = torch.LongTensor(train_masks)
val_masks = torch.LongTensor(val_masks)
test_masks = torch.LongTensor(test_masks)
train_data = Data.TensorDataset(x_train_tensor, train_masks, y_train_tensor)  # wrap the tensors in a TensorDataset
del x_train_tensor, train_masks, y_train_tensor
val_data = Data.TensorDataset(x_val_tensor, val_masks, y_val_tensor)
del x_val_tensor, val_masks, y_val_tensor
test_data = Data.TensorDataset(x_test_tensor, test_masks, y_test_tensor)
del x_test_tensor, test_masks, y_test_tensor
BATCH_SIZE = 64
train_loader = Data.DataLoader(
    dataset=train_data,      # feed the dataset into the loader
    batch_size=BATCH_SIZE,
    shuffle=True             # shuffle the data
)
del train_data
val_loader = Data.DataLoader(
    dataset=val_data,
    batch_size=BATCH_SIZE,
    shuffle=True
)
del val_data
test_loader = Data.DataLoader(
    dataset=test_data,
    batch_size=BATCH_SIZE,
    shuffle=True
)
del test_data
del test_data
# + colab={"base_uri": "https://localhost:8080/"} id="DsX_3nB1lRkm" outputId="2a914952-3bef-48b6-8539-6d4a144eeeca"
def go(loader):  # sanity-check that a loader yields batches
for step, (x,y,z) in enumerate(loader):
print(z)
break
go(train_loader)
go(val_loader)
go(test_loader)
# + colab={"base_uri": "https://localhost:8080/"} id="1ZikXm-UvCHv" outputId="a6aa48cd-cc48-4cd9-bbf9-c27982b7a711"
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
print(device)
# + id="8RkAIyZWotVX"
from transformers import BertModel, BertForSequenceClassification, BertPreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput
class BertForMultiClassSequenceClassification(BertPreTrainedModel):  # rewrite of the BertForSequenceClassification class
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = 5  # five classes (the upstream source defaults to two)
        # everything below follows the upstream source
        self.bert = BertModel(config)
        self.dropout = torch.nn.Dropout(0.2)  # added dropout
        self.classifier = torch.nn.Linear(config.hidden_size, 5)  # fully connected classification layer
self.init_weights()
def forward(
self,
input_ids=None,
attention_mask=None,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
labels=None,
output_attentions=None,
output_hidden_states=None,
return_dict=None
):
"""
labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`):
Labels for computing the sequence classification/regression loss. Indices should be in :obj:`[0, ...,
config.num_labels - 1]`. If :obj:`config.num_labels == 1` a regression loss is computed (Mean-Square loss),
If :obj:`config.num_labels > 1` a classification loss is computed (Cross-Entropy).
"""
return_dict = return_dict if return_dict is not None else self.config.use_return_dict
outputs = self.bert(
input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
position_ids=position_ids,
head_mask=head_mask,
inputs_embeds=inputs_embeds,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
)
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output)
logits = self.classifier(pooled_output)
#print(logits)
loss = None
if labels is not None:
            if self.num_labels == 1:
                # We are doing regression
                loss_fct = torch.nn.MSELoss()
loss = loss_fct(logits.view(-1), labels.view(-1))
else:
loss_fct = torch.nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
if not return_dict:
output = (logits,) + outputs[2:]
return ((loss,) + output) if loss is not None else output
return SequenceClassifierOutput(
loss=loss,
logits=logits,
hidden_states=outputs.hidden_states,
attentions=outputs.attentions,
)
# + colab={"base_uri": "https://localhost:8080/"} id="aqUltFwZlWEp" outputId="ea9eec84-60c9-4b11-ffa0-d8fc2a279a5f"
model = BertForMultiClassSequenceClassification(model_config)  # BERT classification model
model.to(device)
# + id="k8tsAmB6fLtT"
def save_metrics(save_path, train_loss_list, valid_loss_list, train_acc_list, valid_acc_list, global_steps_list):  # record how loss and accuracy evolve during training and validation, for later model evaluation
    if save_path is None:
        return
state_dict = {'train_loss_list': train_loss_list,
'valid_loss_list': valid_loss_list,
'train_acc_list': train_acc_list,
'valid_acc_list': valid_acc_list,
'global_steps_list': global_steps_list}
torch.save(state_dict, save_path)
print('Loss and accuracy saved')
def load_metrics(load_path):  # load the recorded training history
    if load_path is None:
        return
state_dict = torch.load(load_path, map_location=device)
print('Loss and accuracy loaded')
return state_dict['train_loss_list'], state_dict['valid_loss_list'], state_dict['train_acc_list'], state_dict['valid_acc_list'], state_dict['global_steps_list']
# + id="MlxLEftFryRk"
from transformers import AdamW  # AdamW optimizer
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer],
'weight_decay_rate': 0.01}
]
'''{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
'''
optimizer = AdamW(optimizer_grouped_parameters, lr=2e-5, weight_decay=0.01)  # learning rate of 2e-5
# + colab={"base_uri": "https://localhost:8080/"} id="K0OC9Hp8-Fyr" outputId="0c89949c-4b7a-4c5f-8441-1d74d480edb9"
def flat_accuracy(preds, labels):  # compute classification accuracy
pred_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return np.sum(pred_flat == labels_flat) / len(labels_flat)
train_loss_set = []
num_epochs = 10  # train for 10 epochs
eval_every = len(train_loader) // 2
running_loss = 0.0
running_acc = 0.0
valid_running_loss = 0.0
valid_running_acc = 0.0
global_step = 0
train_loss_list = []
valid_loss_list = []
train_acc_list = []
valid_acc_list = []
global_steps_list = []
for i in range(num_epochs):  # training loop over epochs
print("Epoch:",i+1)
model.train()
tr_loss, tr_accuracy = 0, 0
nb_tr_steps, nb_tr_examples = 0, 0
    for step, batch in enumerate(train_loader):
        batch = tuple(t.to(device) for t in batch)  # move the batch to the GPU
        b_input_ids, b_input_mask, b_labels = batch
#print(b_input_ids.size(),b_labels.size())
optimizer.zero_grad()
loss, logits = model(b_input_ids ,attention_mask=b_input_mask, labels=b_labels)[:2]
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
accuracy = flat_accuracy(logits, label_ids)
loss.backward()
optimizer.step()
running_loss += loss.item()
running_acc += accuracy
global_step += 1
        # evaluate on the validation set
        if global_step % eval_every == 0:
model.eval()
with torch.no_grad():
# validation loop
for st, batch in enumerate(val_loader):
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
loss, logits = model(b_input_ids ,attention_mask=b_input_mask, labels=b_labels)[:2]
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
accuracy = flat_accuracy(logits, label_ids)
valid_running_loss += loss.item()
valid_running_acc += accuracy
            # average the accumulated metrics
average_train_loss = running_loss / eval_every
average_train_acc = running_acc / eval_every
average_valid_loss = valid_running_loss / len(val_loader)
average_valid_acc = valid_running_acc / len(val_loader)
train_loss_list.append(average_train_loss)
valid_loss_list.append(average_valid_loss)
train_acc_list.append(average_train_acc)
valid_acc_list.append(average_valid_acc)
global_steps_list.append(global_step)
            # reset the running loss and accuracy
running_loss = 0.0
running_acc = 0.0
valid_running_loss = 0.0
valid_running_acc = 0.0
model.train()
            # print training progress
print('Epoch [{}/{}], Step [{}/{}], Train Loss: {:.4f}, Train Acc: {:.4f}, Valid Loss: {:.4f}, Valid Acc: {:.4f}'
.format(i+1, num_epochs, global_step, num_epochs*len(train_loader), average_train_loss, average_train_acc, average_valid_loss,average_valid_acc))
            save_metrics('/bert_metrics.pt', train_loss_list, valid_loss_list, train_acc_list, valid_acc_list, global_steps_list)  # save the training history
print('Finished Training!')
# + id="qDxDDpd9HMjt"
torch.save(model.state_dict(), '/content/drive/My Drive/bert_model.pth')  # save the model parameters
# + id="9j_kVrCNkbaj" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="a2c27ca8-707c-4926-c9e7-de0016f5f32e"
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
import seaborn as sns
# visualize the training history
train_loss_list, valid_loss_list, train_acc_list, valid_acc_list, global_steps_list = load_metrics('/bert_metrics.pt')
plt.plot(global_steps_list, train_loss_list, label='Train')
plt.plot(global_steps_list, valid_loss_list, label='Valid')
plt.xlabel('Global Steps')
plt.ylabel('Loss')
plt.legend()
plt.show()
# + id="SP7I3kRKlMmV" colab={"base_uri": "https://localhost:8080/", "height": 280} outputId="28411c9b-bfc3-4f06-c90e-8f4ab5ea2e98"
plt.plot(global_steps_list, train_acc_list, label='Train')
plt.plot(global_steps_list, valid_acc_list, label='Valid')
plt.xlabel('Global Steps')
plt.ylabel('Accuracy')
plt.legend()
plt.show()
# + id="XKX6IVRE4i_k"
def evaluate(model, test_loader, version='title', threshold=0.5):  # evaluate the model on the test set
y_pred = []
y_true = []
model.eval()
with torch.no_grad():
for st, batch in enumerate(test_loader):
batch = tuple(t.to(device) for t in batch)
b_input_ids, b_input_mask, b_labels = batch
output = model(b_input_ids ,attention_mask=b_input_mask, labels=b_labels)[1]
#output = (output > threshold).int()
output=np.asarray(output.cpu())
output=np.argmax(output,1)
#print(output)
y_pred.extend(output.tolist())
y_true.extend(b_labels.tolist())
print('Classification Report:')
print(classification_report(y_true, y_pred, target_names=['词','诗','文言文','新闻','期刊'], digits=4))
# %matplotlib inline
cm = confusion_matrix(y_true, y_pred, labels=[0,1,2,3,4])
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='Blues', fmt="d")
ax.set_title('Confusion Matrix')
ax.set_xlabel('Predicted Labels')
ax.set_ylabel('True Labels')
ax.xaxis.set_ticklabels(['ci','poem','wyw','news','journal'])
ax.yaxis.set_ticklabels(['ci','poem','wyw','news','journal'])
# + id="5LRXCaYu63_V" colab={"base_uri": "https://localhost:8080/", "height": 529} outputId="ee3c4226-57f5-43e0-98d8-6efcabf6cc93"
evaluate(model, test_loader)
# + id="euDHqEX3CFmI"
def category_to_label(num):  # map a class index back to its label
if num == 0:
return '词'
elif num == 1:
return '诗'
elif num == 2:
return '文言文'
elif num == 3:
return '新闻'
elif num == 4:
return '期刊'
    else:
        print("Unexpected class index")
# + id="qbcFRo_6GObz" colab={"base_uri": "https://localhost:8080/"} outputId="2a93b339-230d-4782-f29a-4c65eab66e34"
model = BertForMultiClassSequenceClassification(model_config).to('cuda')
model.load_state_dict(torch.load('/content/drive/My Drive/bert_model.pth'))  # load the saved model weights
def get_category(model):  # predict the category of a new sentence
    sentence = input('Enter a sentence: ')
tokenizer=BertTokenizer.from_pretrained("bert-base-chinese")
ids = [tokenizer.encode(sentence)]
import keras
    ids = keras.preprocessing.sequence.pad_sequences(ids, maxlen=128, dtype="long", truncating="post", padding="post")  # pad/truncate the sentence
mask = get_masks(ids)
mask = torch.LongTensor(mask).to('cuda')
ids = torch.LongTensor(ids).to('cuda')
print(ids)
model.eval()
with torch.no_grad():
pred = model(ids, attention_mask=mask).logits
print(pred)
pred = np.asarray(pred.cpu())
pred = np.argmax(pred,1)[0]
#print(pred)
    print('The text belongs to the category:', category_to_label(pred))
get_category(model)
| text_classification_bert.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <center>
# <img src="https://image.aladin.co.kr/product/12609/37/cover500/k372532974_1.jpg" width="200" height="200"><br>
# </center>
#
# # Chapter 3: Analyzing Chicago's Best Sandwich Restaurants
#
# From Chapter 3 on, we look at how to collect data directly from the internet. This is called web scraping; the key is to learn the basics of Beautiful Soup, a module for pulling data out of web pages. For details on the code, see:
#
# - Github: [PinkWink](https://github.com/PinkWink/DataScience)
#
# ### 3-1 Beautiful Soup for Fetching Web Data
#
# Web scraping requires some knowledge of HTML, but unlike the book, we will not cover it in depth here.
from bs4 import BeautifulSoup
# Read the html file with the open function, passing 'r' as the read mode.
# +
page = open("data/03. test_first.html", "r").read()
soup = BeautifulSoup(page, "html.parser")
print(soup.prettify())
# -
# To see the tags one level below this node, use the children attribute.
list(soup.children)
# Since soup holds the entire document, to work with the html tag inside it, do the following.
# +
html = list(soup.children)[2]
html
# -
# Inspecting this html node again gives the following.
list(html.children)
# To reach the body tag, index into the list as follows.
# +
body = list(html.children)[3]
body
# -
# With children and parent we can inspect tags step by step, or print them all at once. But this level-by-level approach is tedious and does not scale to large pages. If we already know the tag to look for, use the find or find_all functions. The following uses find_all to find every 'p' tag.
soup.find_all('p')
# We can also find only the p tags whose class is outer-text.
soup.find_all('p', class_='outer-text')
# Or search by the class name outer-text alone.
soup.find_all(class_="outer-text")
# Next, let's find the tags whose id is first.
soup.find_all(id="first")
# Now use the get_text() command to extract only the text inside the tags.
for each_tag in soup.find_all('p'):
print(each_tag.get_text())
# Running get_text() on the whole body returns the text with newlines (\n) where the tags used to be.
body.get_text()
# Next, let's find the a tags, which represent clickable links.
links = soup.find_all('a')
links
# From these, reading the href attribute gives us the link addresses.
for each in links:
href = each['href']
text = each.string
print(text + '->' + href)
# ### 3-2 Finding the Tags You Want with Chrome Developer Tools
#
# This time we experiment with data from Naver Finance. To access it by URL, first import the urlopen function from urllib.
#
# - https://finance.naver.com/marketindex/
from urllib.request import urlopen
# +
url = "https://finance.naver.com/marketindex/"
page = urlopen(url)
soup = BeautifulSoup(page, "html.parser")
print(soup.prettify())
# -
soup.find_all('span', 'value')[0].string
# In addition, the news headlines can be read through the section_news class.
for each_tag in soup.find_all(class_ = 'section_news'):
print(each_tag.get_text())
# ### 3-3 In Practice: Accessing the Chicago Sandwich Ranking Site
#
# Now let's visit the Chicago Magazine site that introduces the best sandwich shops in Chicago and fetch the shop information. The address is:
#
# - https://chicagomag.com/
#
# Now open this link with the Chrome developer tools. The tags we are looking for are the div tags with class sammy, or sammyListing.
# +
from bs4 import BeautifulSoup
from urllib.request import urlopen
url_base = "https://www.chicagomag.com"
url_sub = "/Chicago-Magazine/November-2012/Best-Sandwiches-Chicago/"
url = url_base + url_sub
# +
# requests and headers were not defined above; we import requests here and send a
# browser-like User-Agent header, since the site may block the default one
import requests

headers = {"User-Agent": "Mozilla/5.0"}
res = requests.get(url, headers=headers)
soup = BeautifulSoup(res.text, "html.parser")
soup
# -
# Now use the find_all command to find the div tags with class sammy. The following code does it.
print(soup.find_all('div', 'sammy'))
# Use the len function to check how many results we get.
len(soup.find_all('div', 'sammy'))
# Now let's look at just the first one.
print(soup.find_all('div', 'sammy')[0])
# ### 3-4 Extracting and Organizing the Desired Data from the Page
| Ch3_chicago_sandwich.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: imaging py2.7
# language: python
# name: imaging-py2.7
# ---
# Run: python obiwan/kenobi.py -n 2 --DR 5 -b 1238p245 -o elg --add_sim_noise --zoom 1550 1650 1550 1650
# %matplotlib inline
# %load_ext autoreload
# %autoreload
# +
import h5py
import galsim
import os
import sys
import shutil
import logging
import pdb
import photutils
import numpy as np
import matplotlib.pyplot as plt
from pkg_resources import resource_filename
from pickle import dump
from astropy.table import Table, Column, vstack
from astropy.io import fits
#from astropy import wcs as astropy_wcs
from fitsio import FITSHDR
import fitsio
from astropy import units
from astropy.coordinates import SkyCoord
from tractor.psfex import PsfEx, PsfExModel
from tractor.basics import GaussianMixtureEllipsePSF, RaDecPos
from legacypipe.runbrick import run_brick
from legacypipe.decam import DecamImage
from legacypipe.survey import LegacySurveyData, wcs_for_brick
from astrometry.util.fits import fits_table, merge_tables
from astrometry.util.ttime import Time
# -
# %aimport obiwan.priors
# use getSrcsInBrick
# %aimport obiwan.db_tools
# use get_parser,main
# %aimport obiwan.kenobi
print(obiwan.kenobi.__file__)
obiwan.kenobi.get_sample_fn('hey','/global/cscratch1/sd/kaylanb/')
# +
# Environment Vars
#LEGACY_SURVEY_DIR="/global/cscratch1/sd/kaylanb/test/legacypipe/py/legacypipe-dir"
#desiproc="/global/cscratch1/sd/desiproc/"
#DUST_DIR=desiproc+"dust/v0_0"
#UNWISE_COADDS_DIR="unwise-coadds/fulldepth":desiproc+"unwise-coadds/w3w4"
#UNWISE_COADDS_TIMERESOLVED_DIR=/global/cscratch1/sd/desiproc/unwise-coadds/time_resolved_neo2
#UNWISE_COADDS_TIMERESOLVED_INDEX=/global/cscratch1/sd/desiproc/unwise-coadds/time_resolved_neo2/time_resolved_neo2-atlas.fits
#DECALS_SIM_DIR=/global/cscratch1/sd/kaylanb/test/obiwan/py/obiwan/junk
# -
print(obiwan.kenobi.__file__)
# notebook cares about order of this list for some reason, -o goes first
cmd_line = ['-o', 'elg', '-n', '2', '--DR', '5', '-b', '1238p245',
            '--add_sim_noise',
            '--zoom', '1550', '1650', '1550', '1650']
parser= obiwan.kenobi.get_parser()
namesp= parser.parse_args(args=cmd_line)
obiwan.kenobi.main(args=namesp)
| doc/nb/kenobi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: nlstruct
# language: python
# name: nlstruct
# ---
# +
import re
import numpy as np
import string
import pandas as pd
from tqdm import tqdm
from nlstruct.core.text import transform_text, apply_deltas, encode_as_tag, split_into_spans
from nlstruct.core.pandas import merge_with_spans, make_id_from_merged
from nlstruct.core.cache import get_cache
from nlstruct.core.environment import env
from nlstruct.chunking.spacy_tokenization import spacy_tokenize, SPACY_ATTRIBUTES
# from nlstruct.dataloaders.ncbi_disease import load_ncbi_disease
# from nlstruct.dataloaders.bc5cdr import load_bc5cdr
from nlstruct.dataloaders.n2c2_2019_task3 import load_n2c2_2019_task3
from nlstruct.dataloaders.brat import load_from_brat
# -
# ## Load the dataset
# dataset = docs, mentions, labels, fragments = load_ncbi_disease()[["docs", "mentions", "labels", "fragments"]]
# dataset = docs, mentions, labels, fragments = load_bc5cdr()[["docs", "mentions", "labels", "fragments"]]
# dataset = docs, mentions, fragments = load_from_brat(env.resource("brat/my_brat_dataset/"))[["docs", "mentions", "fragments"]]
dataset = docs, mentions, fragments = load_n2c2_2019_task3()[["docs", "mentions", "fragments"]]
dataset
# ## Transform docs
# Apply substitutions to the documents and translate spans accordingly
# +
# Define subs as ("pattern", "replacements") list
subs = [
(re.escape("<????-??-??>"), "MASKEDDATE"),
(r"(?<=[{}\\])(?![ ])".format(string.punctuation), r" "),
(r"(?<![ ])(?=[{}\\])".format(string.punctuation), r" "),
("(?<=[a-zA-Z])(?=[0-9])", r" "),
("(?<=[0-9])(?=[A-Za-z])", r" "),
("MASKEDDATE", "<????-??-??>"),
]
# Clean the text / perform substitutions
docs, deltas = transform_text(docs, *zip(*subs), return_deltas=True)
# Apply transformations to the spans
fragments = apply_deltas(fragments, deltas, on='doc_id')
fragments = fragments.merge(mentions)
# -
# ## Tokenize the documents, and define fragments as spans of tokens
# +
# Tokenize
tokens = (
spacy_tokenize(docs, lang="en_core_web_sm", spacy_attributes=["orth_"])#, spacy_attributes=list((set(SPACY_ATTRIBUTES) - {"norm_"}) | {"lemma_"}),)
#spm_tokenize(docs, "/Users/perceval/Development/data/resources/camembert.v0/sentencepiece.bpe.model")
)
# Perform token substitution to match CoNLL guidelines
tokens["token_orth"] = tokens["token_orth"].apply(lambda word: {
"$": "${dollar}",
"_": "${underscore}",
"\t": "${tab}",
"\n": "${newline}",
" ": "${space}",
"#": "${hash}"}.get(word, word))
tokenized_fragments = split_into_spans(fragments, tokens, pos_col="token_idx")
# -
# ## Deal with overlaps
# +
# Extract overlapping spans
conflicts = merge_with_spans(tokenized_fragments, tokenized_fragments, on=["doc_id", ("begin", "end")], how="outer", suffixes=("", "_other"))
# Assign a cluster (overlapping fragments) to each fragment
fragment_cluster_ids = make_id_from_merged(
conflicts[["doc_id", "mention_id", "fragment_id"]],
conflicts[["doc_id", "mention_id_other", "fragment_id_other"]],
apply_on=[(0, tokenized_fragments[["doc_id", "mention_id", "fragment_id"]])])
# Group by cluster and set the biggest fragment to depth 0, next to 1, ...
split_fragments = (tokenized_fragments
.groupby(fragment_cluster_ids, as_index=False, group_keys=False)
.apply(lambda group: group.assign(depth=np.argsort(group["begin"]-group["end"]))))
# -
# ## Encode mentions as tags on tokens
# Encode labels into tag on tokens, with respect to the fragments indices
tagged_tokens = tokens.copy()
tag_scheme="bio" # / "bioul"
label_col_names = []
for depth_i in range(split_fragments["depth"].max()):
label_col_names.append(f'label-{depth_i}')
tagged_tokens[f'label-{depth_i}'] = encode_as_tag(tokens[["doc_id", "token_id", "token_idx"]],
split_fragments[split_fragments["depth"] == depth_i],
tag_scheme=tag_scheme, label_cols=["label"], use_token_idx=True, verbose=1)[0]['label']
tagged_tokens.head()
# ## Write the CoNLL files
# +
# Alternatively, we could use the nlstruct.exporters.conll.to_conll function like so:
# to_conll(
# dataset=Dataset(tokens=tagged_tokens, docs=docs),
# token_cols=["token_orth", *label_col_names],
# destination="n2c2_conll"
# )
cache = get_cache("n2c2_conll")
for doc_id, doc_tokens in tqdm(tagged_tokens.groupby(["doc_id"], sort="begin")):
with open(cache / (doc_id + ".conll"), "w") as file:
for (token_idx, token, *token_labels) in doc_tokens[["token_idx", "token_orth", *label_col_names]].itertuples(index=False): # iter(zip(*df)) is way faster than df.iterrows()
print(token_idx, "\t", token, "\t", "\t".join(token_labels), file=file)
for doc_id, doc_text in docs[["doc_id", "text"]].itertuples(index=False):
with open(cache / (doc_id + ".txt"), "w") as file:
print(doc_text, file=file)
# -
| examples/export_to_conll.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: project-setup
# language: python
# name: project-setup
# ---
# ## Imports
# +
import datasets
import pandas as pd
from datasets import load_dataset
# -
# ## Dataset
cola_dataset = load_dataset('glue', 'cola')
cola_dataset
train_dataset = cola_dataset['train']
val_dataset = cola_dataset['validation']
test_dataset = cola_dataset['test']
len(train_dataset), len(val_dataset), len(test_dataset)
train_dataset[0]
val_dataset[0]
test_dataset[0]
train_dataset.features
train_dataset.filter(lambda example: example['label'] == train_dataset.features['label'].str2int('acceptable'))[:5]
train_dataset.filter(lambda example: example['label'] == train_dataset.features['label'].str2int('unacceptable'))[:5]
# ## Tokenizing
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/bert_uncased_L-2_H-128_A-2")
train_dataset = cola_dataset['train']
val_dataset = cola_dataset['validation']
test_dataset = cola_dataset['test']
tokenizer
print(train_dataset[0]['sentence'])
tokenizer(train_dataset[0]['sentence'])
tokenizer.decode(tokenizer(train_dataset[0]['sentence'])['input_ids'])
def encode(examples):
return tokenizer(
examples["sentence"],
truncation=True,
padding="max_length",
max_length=512,
)
train_dataset = train_dataset.map(encode, batched=True)
# ## Formatting
import torch
train_dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'label'])
train_dataset
# ## Data Loader
dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32)
next(iter(dataloader))
for batch in dataloader:
print(batch['input_ids'].shape, batch['attention_mask'].shape, batch['label'].shape)
| week_0_project_setup/experimental_notebooks/data_exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] nbsphinx="hidden"
# # Quantization of Signals
#
# *This jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing.*
# -
# ## Requantization of a Speech Signal
#
# The following example illustrates the requantization of a speech signal. The signal was originally recorded with a wordlength of $w=16$ bits. It is requantized by a [uniform mid-tread quantizer](linear_uniform_characteristic.ipynb#Mid-Tread-Chacteristic-Curve) to various wordlengths. The signal-to-noise ratio (SNR) after quantization is computed and a portion of the (quantized) signal is plotted. It is further possible to listen to the requantized signal and the quantization error. Note that the level of the quantization error has been normalized for better audibility of the effects.
# +
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf
idx = 130000 # index to start plotting
def uniform_midtread_quantizer(x, w):
# quantization step
Q = 1/(2**(w-1))
# limiter
x = np.copy(x)
idx = np.where(x <= -1)
x[idx] = -1
idx = np.where(x > 1 - Q)
x[idx] = 1 - Q
# linear uniform quantization
xQ = Q * np.floor(x/Q + 1/2)
return xQ
def evaluate_requantization(x, xQ):
e = xQ - x
# SNR
SNR = 10*np.log10(np.var(x)/np.var(e))
print('SNR: {:2.1f} dB'.format(SNR))
# plot signals
plt.figure(figsize=(10, 4))
plt.plot(x[idx:idx+100], label=r'signal $x[k]$')
plt.plot(xQ[idx:idx+100], label=r'requantized signal $x_Q[k]$')
plt.plot(e[idx:idx+100], label=r'quantization error $e[k]$')
plt.xlabel(r'sample index $k$')
plt.grid()
plt.legend()
# normalize error
e = .2 * e / np.max(np.abs(e))
return e
# load speech sample
x, fs = sf.read('../data/speech.wav')
# normalize sample
x = x/np.max(np.abs(x))
# -
# **Original Signal**
# <audio src="../data/speech.wav" controls>Your browser does not support the audio element.</audio>
# [../data/speech.wav](../data/speech.wav)
# ### Requantization to 8 bit
xQ = uniform_midtread_quantizer(x, 8)
e = evaluate_requantization(x, xQ)
sf.write('speech_8bit.wav', xQ, fs)
sf.write('speech_8bit_error.wav', e, fs)
# **Requantized Signal**
# <audio src="speech_8bit.wav" controls>Your browser does not support the audio element.</audio>
# [speech_8bit.wav](speech_8bit.wav)
#
# **Quantization Error**
# <audio src="speech_8bit_error.wav" controls>Your browser does not support the audio element.</audio>
# [speech_8bit_error.wav](speech_8bit_error.wav)
# ### Requantization to 6 bit
xQ = uniform_midtread_quantizer(x, 6)
e = evaluate_requantization(x, xQ)
sf.write('speech_6bit.wav', xQ, fs)
sf.write('speech_6bit_error.wav', e, fs)
# **Requantized Signal**
# <audio src="speech_6bit.wav" controls>Your browser does not support the audio element.</audio>
# [speech_6bit.wav](speech_6bit.wav)
#
# **Quantization Error**
# <audio src="speech_6bit_error.wav" controls>Your browser does not support the audio element.</audio>
# [speech_6bit_error.wav](speech_6bit_error.wav)
# ### Requantization to 4 bit
xQ = uniform_midtread_quantizer(x, 4)
e = evaluate_requantization(x, xQ)
sf.write('speech_4bit.wav', xQ, fs)
sf.write('speech_4bit_error.wav', e, fs)
# **Requantized Signal**
# <audio src="speech_4bit.wav" controls>Your browser does not support the audio element.</audio>
# [speech_4bit.wav](speech_4bit.wav)
#
# **Quantization Error**
# <audio src="speech_4bit_error.wav" controls>Your browser does not support the audio element.</audio>
# [speech_4bit_error.wav](speech_4bit_error.wav)
# ### Requantization to 2 bit
xQ = uniform_midtread_quantizer(x, 2)
e = evaluate_requantization(x, xQ)
sf.write('speech_2bit.wav', xQ, fs)
sf.write('speech_2bit_error.wav', e, fs)
# **Requantized Signal**
# <audio src="speech_2bit.wav" controls>Your browser does not support the audio element.</audio>
# [speech_2bit.wav](speech_2bit.wav)
#
# **Quantization Error**
# <audio src="speech_2bit_error.wav" controls>Your browser does not support the audio element.</audio>
# [speech_2bit_error.wav](speech_2bit_error.wav)
| Lectures_Advanced-DSP/quantization/requantization_speech_signal.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="OuJ-YDzWCgEI"
# <!-- ---
# title: How to switch data provider during training
# weight: 9
# downloads: true
# sidebar: true
# summary: Example on how to switch data during training after some number of iterations
# tags:
# - custom events
# --- -->
#
# # How to switch data provider during training
# + [markdown] id="zi2JOUi1CgEO"
# In this example, we will see how one can easily switch the data provider during the training using
# [`set_data()`](https://pytorch.org/ignite/generated/ignite.engine.engine.Engine.html#ignite.engine.engine.Engine.set_data).
# + [markdown] id="wJKPRmQZIV_S"
# ## Basic Setup
# + [markdown] id="QwpM9M-XI23h"
# ### Required Dependencies
# + id="H_UgUurNIb53"
# !pip install pytorch-ignite
# + [markdown] id="Z2Yo1WSWI6vr"
# ### Import
# + id="-2Ai1Ht_HWiB"
from ignite.engine import Engine, Events
# + [markdown] id="Q9tTpXjmI9R_"
# ### Data Providers
# + id="g7ctwsy3Han_"
data1 = [1, 2, 3]
data2 = [11, 12, 13]
# + [markdown] id="S-aQnuihJbZz"
# ## Create dummy `trainer`
#
# Let's create a dummy `train_step` which will print the current iteration and batch of data.
# + id="2Skq9nmSHnce"
def train_step(engine, batch):
print(f"Iter[{engine.state.iteration}] Current datapoint = ", batch)
trainer = Engine(train_step)
# + [markdown] id="YIBlmaO6JW9c"
# ## Attach handler to switch data
#
# Now we have to decide when to switch the data provider. It can be after an epoch, iteration or something custom. Below, we are going to switch data after some specific iteration. And then we attach a handler to `trainer` that will be executed once after `switch_iteration` and use `set_data()` so that when:
#
# * iteration <= `switch_iteration`, batch is from `data1`
# * iteration > `switch_iteration`, batch is from `data2`
# + id="RaMkWUwnCgEQ"
switch_iteration = 5
@trainer.on(Events.ITERATION_COMPLETED(once=switch_iteration))
def switch_dataloader():
print("<------- Switch Data ------->")
trainer.set_data(data2)
# + [markdown] id="BvJ2qms6M44n"
# And finally we run the `trainer` for some epochs.
# + colab={"base_uri": "https://localhost:8080/"} id="8W-WFdZ8HzJU" outputId="7c2c5a36-f657-4d75-8086-ec3fd1fdf10e"
trainer.run(data1, max_epochs=5)
| how-to-guides/09-switch-data-training.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # <div align="center">The Karplus Strong Algorithm</div>
# ---------------------------------------------------------------------
#
# You can find me on GitHub:
# > ###### [GitHub](https://github.com/lev1khachatryan)
# <a id="top"></a> <br>
# ## Notebook Content
# * [DSP building blocks](#1)
#
#
# * [Moving averages and simple feedback loops](#2)
#
#
# * [Generalization](#3)
#
#
# * [Implementation in Python](#4)
#
#
# <a id="1"></a> <br>
# # <div align="center">DSP building blocks</div>
# ---------------------------------------------------------------------
#
# [go to top](#top)
#
# ### Adder:
#
# <img src='assets/20190910 02/1.png'>
#
# In diagrammatic form, from -->> to:
#
# <table><tr><td><img src='assets/20190910 02/2.png'></td><td><img src='assets/20190910 02/3.png'></td></tr></table>
#
# ### Multiplier:
#
# <img src='assets/20190910 02/4.png'>
#
# In diagrammatic form, from -->> to:
#
# <table><tr><td><img src='assets/20190910 02/5.png'></td><td><img src='assets/20190910 02/6.png'></td></tr></table>
#
# ### Delay:
#
# <img src='assets/20190910 02/7.png'>
#
# In diagrammatic form, from -->> to:
#
# <img src='assets/20190910 02/8.png'>
#
# ### Arbitrary Delay:
#
# <img src='assets/20190910 02/9.png'>
#
# In diagrammatic form, from -->> to:
#
# <img src='assets/20190910 02/10.png'>
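#
# These building blocks can be sketched in a few lines of NumPy (an illustrative sketch, not part of the original lecture; the names `adder`, `multiplier`, and `delay` are my own):

```python
import numpy as np

def adder(x, y):
    # point-wise sum of two signals
    return np.asarray(x) + np.asarray(y)

def multiplier(x, alpha):
    # scale a signal by a constant gain
    return alpha * np.asarray(x)

def delay(x, n):
    # shift a signal right by n samples, padding with zeros (z^-n)
    x = np.asarray(x)
    return np.concatenate([np.zeros(n), x[:len(x) - n]])

x = np.array([1.0, 2.0, 3.0, 4.0])
print(adder(x, x))         # element-wise: 2, 4, 6, 8
print(multiplier(x, 0.5))  # scaled: 0.5, 1.0, 1.5, 2.0
print(delay(x, 2))         # delayed: 0, 0, 1, 2
```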
#
#
#
#
# <a id="2"></a> <br>
# # <div align="center">Moving averages and simple feedback loops</div>
# ---------------------------------------------------------------------
#
# [go to top](#top)
#
#
#
# ### MA(2)
#
# <img src='assets/20190910 02/11.png'>
#
# <img src='assets/20190910 02/12.png'>
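#
# Assuming MA(2) here denotes the two-point moving average $y[n] = \frac{1}{2}(x[n] + x[n-1])$, a minimal sketch:

```python
import numpy as np

def ma2(x):
    # two-point moving average: y[n] = 0.5 * (x[n] + x[n-1]), with x[-1] = 0
    x = np.asarray(x, dtype=float)
    return 0.5 * (x + np.concatenate([[0.0], x[:-1]]))

print(ma2([4.0, 2.0, 6.0, 0.0]))  # 2.0, 3.0, 4.0, 3.0
```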
#
# ### Example 1
#
# <img src='assets/20190910 02/13.png'>
#
# ### Example 2
#
# <img src='assets/20190910 02/14.png'>
#
# ### Example 3
#
# <img src='assets/20190910 02/15.png'>
#
# ### Example 4
#
# <img src='assets/20190910 02/16.png'>
#
#
#
# ## Reverse the loop
#
# <img src='assets/20190910 02/17.png'>
#
#
# ## A simple model for banking
#
# * constant interest/borrowing rate of 5% per year
#
#
# * interest accrues on Dec 31
#
#
# * deposits/withdrawals during year n: x[n]
#
#
# * balance at year n: ***y[n] = 1.05 y[n-1] + x[n]***
#
# <img src='assets/20190910 02/18.png'>
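#
# The recursion $y[n] = 1.05\, y[n-1] + x[n]$ is easy to simulate directly (a small sketch for illustration; `bank_balance` is my own name):

```python
def bank_balance(deposits, rate=0.05):
    # y[n] = (1 + rate) * y[n-1] + x[n], starting from y[-1] = 0
    y = 0.0
    history = []
    for x in deposits:
        y = (1 + rate) * y + x
        history.append(y)
    return history

# one-time investment: 100 in year 0, then nothing
print(bank_balance([100, 0, 0, 0]))  # compounds to 100 * 1.05**3 by year 3
```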
#
# # Example: the one time investment
#
# <img src='assets/20190910 02/19.png'>
#
# # Example: the saver
#
# <img src='assets/20190910 02/20.png'>
#
# # Example: the independently wealthy
#
# <img src='assets/20190910 02/21.png'>
#
#
#
#
# <a id="3"></a> <br>
# # <div align="center">Generalization</div>
# ---------------------------------------------------------------------
#
# [go to top](#top)
#
#
#
# <img src='assets/20190910 02/22.png'>
# ## Example
#
# <img src='assets/20190910 02/23.png'>
#
# ## Example
#
# <img src='assets/20190910 02/24.png'>
#
#
# <img src='assets/20190910 02/25.png'>
#
# ## Example: Playing a sine wave
#
# <img src='assets/20190910 02/26.png'>
#
# <table><tr><td><img src='assets/20190910 02/27.png'></td><td><img src='assets/20190910 02/28.png'></td></tr></table>
#
#
# <img src='assets/20190910 02/29.png'>
#
# <img src='assets/20190910 02/30.png'>
#
# <img src='assets/20190910 02/31.png'>
#
#
#
#
#
# <a id="4"></a> <br>
# # <div align="center">Implementation In Python</div>
# ---------------------------------------------------------------------
#
# [go to top](#top)
#
# +
## Math
import numpy as np
## nd.array deep copy
import copy
## Audio Playing
from IPython.display import Audio
## Visualization
# %matplotlib inline
import matplotlib.pyplot as plt
# -
def karplus_strong(wavetable, n_samples):
"""
    Synthesizes a new waveform from an existing wavetable, averaging each wavetable sample with the previous output value.
Parameters:
wavetable (list or nd.array) - initially generated signal
n_samples (int) - number of samples from wavetable
Return:
np.array - transformed n_sample size wavetable
"""
wavetable = copy.deepcopy(wavetable)
samples = []
current_sample = 0
previous_value = 0
while len(samples) < n_samples:
wavetable[current_sample] = 0.5 * (wavetable[current_sample] + previous_value)
samples.append(wavetable[current_sample])
previous_value = samples[-1]
current_sample += 1
current_sample = current_sample % wavetable.size
return np.array(samples)
# Frequency
fs = 8000
wavetable_size = fs // 55
t = np.linspace(0, 1, num=wavetable_size)
wavetable = np.sin(np.sin(2 * np.pi * t))
plt.plot(t, wavetable);
# plt.plot(t, wavetable, '-o')
sample1 = karplus_strong(wavetable, 2 * fs)
Audio(sample1, rate=fs)
plt.subplot(211)
plt.plot(sample1)
plt.subplot(212)
plt.plot(sample1)
plt.xlim(0, 1000)
#
#
#
wavetable_size = fs // 55
wavetable = (2 * np.random.randint(0, 2, wavetable_size) - 1).astype(float)
# wavetable = np.random.random(wavetable_size).astype(float)
plt.plot(wavetable)
sample2 = karplus_strong(wavetable, 2 * fs)
Audio(sample2, rate=fs)
plt.subplot(211)
plt.plot(sample2)
plt.subplot(212)
plt.plot(sample2)
plt.xlim(500, 1000)
#
#
#
#
#
#
#
def generalization(timbre, decay, pitch):
"""
    Generalized feedback filter: transforms the input timbre by y[n] = x[n] + decay * y[n - pitch].
Parameters:
timbre (list or nd.array) - initially generated signal
        decay (float) - alpha; any value, but typically in [0, 1]; controls the envelope
pitch (int) - M, controls frequency
Return:
np.array - transformed timbre by y[n] = x[n] + alpha*y[n-M]
"""
timbre = copy.deepcopy(timbre)
transformed = []
current_sample = 0
while current_sample < len(timbre):
if current_sample < pitch:
transformed.insert(current_sample, timbre[current_sample])
else:
transformed.insert(current_sample, timbre[current_sample] + decay * transformed[current_sample-pitch])
current_sample += 1
return np.array(transformed)
# +
fs = 8000
wavetable_size = fs // 55
wavetable = (2 * np.random.randint(0, 2, wavetable_size) - 1).astype(float)
plt.plot(wavetable)
# wavetable_size = fs // 55
# t = np.linspace(0, 1, num=wavetable_size)
# wavetable = np.sin(np.sin(2 * np.pi * t))
# plt.plot(t, wavetable);
# # plt.plot(t, wavetable, '-o')
# -
sample2 = generalization(wavetable, 0.6, 3)
# +
# # %debug
# -
Audio(sample2, rate=fs)
# +
# sample2
# -
plt.subplot(211)
plt.plot(sample2)
plt.subplot(212)
plt.plot(wavetable)
# plt.xlim(500, 1000)
| Lectures/3 - 2019.09.10. Part 2 - Karplus-Strong and Building Blocks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA,TruncatedSVD,NMF
from sklearn.preprocessing import Normalizer
import argparse
import time
import pickle as pkl
def year_binner(year,val=10):
return year - year%val
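# A quick sanity check of the binning (illustrative only; mirrors the function above):

```python
def year_binner(year, val=10):
    # floor the year to the start of its bin
    return year - year % val

print(year_binner(1987, 10))  # 1980
print(year_binner(1987, 50))  # 1950
print(year_binner(1990, 10))  # 1990
```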
def dim_reduction(df,rows):
    df_svd = TruncatedSVD(n_components=args.dims, n_iter=10, random_state=args.seed)
print(f'Explained variance ratio {(df_svd.fit(df).explained_variance_ratio_.sum()):2.3f}')
#df_list=df_svd.fit(df).explained_variance_ratio_
df_reduced = df_svd.fit_transform(df)
df_reduced = Normalizer(copy=False).fit_transform(df_reduced)
df_reduced=pd.DataFrame(df_reduced,index=rows)
#df_reduced.reset_index(inplace=True)
if args.temporal!=0:
df_reduced.index = pd.MultiIndex.from_tuples(df_reduced.index, names=['common', 'time'])
return df_reduced
# +
parser = argparse.ArgumentParser(description='Gather data necessary for performing Regression')
parser.add_argument('--inputdir',type=str,
help='Provide directory that has the files with the fivegram counts')
parser.add_argument('--outputdir',type=str,
help='Provide directory in that the output files should be stored')
parser.add_argument('--temporal', type=int, default=0,
help='Value to bin the temporal information: 0 (remove temporal information), 1 (no binning), 10 (binning to decades), 20 (binning each 20 years) or 50 (binning each 50 years)')
parser.add_argument('--contextual', action='store_true',
help='Is the model contextual')
parser.add_argument('--cutoff', type=int, default=50,
help='Cut-off frequency for each compound per time period : none (0), 20, 50 and 100')
parser.add_argument('--seed', type=int, default=1991,
help='random seed')
parser.add_argument('--storedf', action='store_true',
help='Should the embeddings be saved')
parser.add_argument('--dims', type=int, default=300,
help='Desired number of reduced dimensions')
parser.add_argument('--input_format',type=str,default='csv',choices=['csv','pkl'],
help='In what format are the input files : csv or pkl')
parser.add_argument('--save_format', type=str,default='pkl',choices=['pkl','csv'],
help='In what format should the reduced datasets be saved : csv or pkl')
args = parser.parse_args('--inputdir ../Compounding/coha_compounds/ --outputdir ../Compounding/coha_compounds/ --cutoff 10 --storedf --input_format csv --save_format csv'.split())
# -
print(f'Cutoff: {args.cutoff}')
print(f'Time span: {args.temporal}')
print(f'Dimensionality: {args.dims}')
# +
print("Creating dense embeddings")
if args.contextual:
print("CompoundCentric Model")
print("Loading the constituent and compound vector datasets")
if args.input_format=="csv":
compounds=pd.read_csv(args.inputdir+"/compounds.csv",sep="\t")
    elif args.input_format=="pkl":
compounds=pd.read_pickle(args.inputdir+"/compounds.pkl")
compounds.reset_index(inplace=True)
compounds.year=compounds.year.astype("int32")
compounds=compounds.query('1800 <= year <= 2010').copy()
compounds['common']=compounds['modifier']+" "+compounds['head']
#head_list_reduced=compounds['head'].unique().tolist()
#modifier_list_reduced=compounds['modifier'].unique().tolist()
if args.temporal==0:
print('No temporal information is stored')
compounds=compounds.groupby(['common','context'])['count'].sum().to_frame()
compounds.reset_index(inplace=True)
compounds=compounds.loc[compounds.groupby(['common'])['count'].transform('sum').gt(args.cutoff)]
compounds=compounds.groupby(['common','context'])['count'].sum()
else:
compounds['time']=year_binner(compounds['year'].values,args.temporal)
compounds=compounds.groupby(['common','context','time'])['count'].sum().to_frame()
compounds.reset_index(inplace=True)
compounds=compounds.loc[compounds.groupby(['common','time'])['count'].transform('sum').gt(args.cutoff)]
compounds=compounds.groupby(['common','time','context'])['count'].sum()
if args.input_format=="csv":
modifiers=pd.read_csv(args.inputdir+"/modifiers.csv",sep="\t")
    elif args.input_format=="pkl":
modifiers=pd.read_pickle(args.inputdir+"/modifiers.pkl")
modifiers.reset_index(inplace=True)
modifiers.year=modifiers.year.astype("int32")
modifiers=modifiers.query('1800 <= year <= 2010').copy()
modifiers.columns=['common','context','year','count']
modifiers['common']=modifiers['common'].str.replace(r'_noun$', r'_m', regex=True)
if args.temporal==0:
print('No temporal information is stored')
modifiers=modifiers.groupby(['common','context'])['count'].sum().to_frame()
modifiers.reset_index(inplace=True)
modifiers=modifiers.loc[modifiers.groupby(['common'])['count'].transform('sum').gt(args.cutoff)]
modifiers=modifiers.groupby(['common','context'])['count'].sum()
else:
modifiers['time']=year_binner(modifiers['year'].values,args.temporal)
modifiers=modifiers.groupby(['common','context','time'])['count'].sum().to_frame()
modifiers=modifiers.loc[modifiers.groupby(['common','time'])['count'].transform('sum').gt(args.cutoff)]
modifiers=modifiers.groupby(['common','time','context'])['count'].sum()
if args.input_format=="csv":
heads=pd.read_csv(args.inputdir+"/heads.csv",sep="\t")
elif args.input_format=="pkl":
heads=pd.read_pickle(args.inputdir+"/heads.pkl")
heads.reset_index(inplace=True)
heads.year=heads.year.astype("int32")
heads=heads.query('1800 <= year <= 2010').copy()
heads.columns=['common','context','year','count']
heads['common']=heads['common'].str.replace(r'_noun$', r'_h', regex=True)
if args.temporal==0:
print('No temporal information is stored')
heads=heads.groupby(['common','context'])['count'].sum().to_frame()
heads.reset_index(inplace=True)
heads=heads.loc[heads.groupby(['common'])['count'].transform('sum').gt(args.cutoff)]
heads=heads.groupby(['common','context'])['count'].sum()
else:
heads['time']=year_binner(heads['year'].values,args.temporal)
heads=heads.groupby(['common','context','time'])['count'].sum().to_frame()
heads=heads.loc[heads.groupby(['common','time'])['count'].transform('sum').gt(args.cutoff)]
heads=heads.groupby(['common','time','context'])['count'].sum()
print('Concatenating all the datasets together')
df=pd.concat([heads,modifiers,compounds], sort=True)
else:
print("CompoundAgnostic Model")
wordlist = pkl.load( open( "data/coha_wordlist.pkl", "rb" ) )
if args.input_format=="csv":
compounds=pd.read_csv(args.inputdir+"/phrases.csv",sep="\t")
elif args.input_format=="pkl":
compounds=pd.read_pickle(args.inputdir+"/phrases.pkl")
compounds.reset_index(inplace=True)
compounds.year=compounds.year.astype("int32")
compounds=compounds.query('1800 <= year <= 2010').copy()
compounds['common']=compounds['modifier']+" "+compounds['head']
if args.temporal==0:
print('No temporal information is stored')
compounds=compounds.groupby(['common','context'])['count'].sum().to_frame()
compounds.reset_index(inplace=True)
compounds=compounds.loc[compounds.groupby(['common'])['count'].transform('sum').gt(args.cutoff)]
compounds=compounds.groupby(['common','context'])['count'].sum()
else:
compounds['time']=year_binner(compounds['year'].values,args.temporal)
#compounds = dd.from_pandas(compounds, npartitions=100)
compounds=compounds.groupby(['common','context','time'])['count'].sum().to_frame()
compounds=compounds.loc[compounds.groupby(['common','time'])['count'].transform('sum').gt(args.cutoff)]
compounds=compounds.groupby(['common','time','context'])['count'].sum()
if args.input_format=="csv":
constituents=pd.read_csv(args.outputdir+"/words.csv",sep="\t")
elif args.input_format=="pkl":
constituents=pd.read_pickle(args.outputdir+"/words.pkl")
constituents.reset_index(inplace=True)
constituents.year=constituents.year.astype("int32")
constituents=constituents.query('1800 <= year <= 2010').copy()
constituents.columns=['common','context','year','count']
constituents.query('common in @wordlist',inplace=True)
if args.temporal==0:
print('No temporal information is stored')
constituents=constituents.groupby(['common','context'])['count'].sum().to_frame()
constituents.reset_index(inplace=True)
constituents=constituents.loc[constituents.groupby(['common'])['count'].transform('sum').gt(args.cutoff)]
constituents=constituents.groupby(['common','context'])['count'].sum()
else:
constituents['time']=year_binner(constituents['year'].values,args.temporal)
constituents=constituents.groupby(['common','context','time'])['count'].sum().to_frame()
constituents.reset_index(inplace=True)
constituents=constituents.loc[constituents.groupby(['common','time'])['count'].transform('sum').gt(args.cutoff)]
constituents=constituents.groupby(['common','time','context'])['count'].sum()
print('Concatenating all the datasets together')
df=pd.concat([constituents,compounds], sort=True)
# +
dtype = pd.SparseDtype(float, fill_value=0)
df=df.astype(dtype)
if args.temporal!=0:
df, rows, _ = df.sparse.to_coo(row_levels=['common','time'],column_levels=['context'],sort_labels=False)
else:
df, rows, _ = df.sparse.to_coo(row_levels=['common'],column_levels=['context'],sort_labels=False)
# +
print('Running SVD')
df_reduced=dim_reduction(df,rows)
print('Splitting back into individual datasets and saving them')
if args.temporal!=0:
df_reduced.index.names = ['common','time']
else:
df_reduced.index.names = ['common']
# -
compounds_reduced=df_reduced.loc[df_reduced.index.get_level_values(0).str.contains(r'\w \w')]
compounds_reduced.reset_index(inplace=True)
#print(compounds_reduced.head())
#compounds_reduced['modifier'],compounds_reduced['head']=compounds_reduced['common'].str.split(' ', 1).str
compounds_reduced[['modifier','head']]=compounds_reduced['common'].str.split(' ', n=1,expand=True).copy()
compounds_reduced
| Tryout.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import os
import tensorflow.compat.v1 as tf
import math
import numpy as np
import itertools
print(tf.__version__)
tf.enable_eager_execution()
import tensorflow
# !pip install openpyxl
from openpyxl import Workbook
from openpyxl import load_workbook
import cv2
import glob
from waymo_open_dataset.utils import range_image_utils
from waymo_open_dataset.utils import transform_utils
from waymo_open_dataset.utils import frame_utils
from waymo_open_dataset import dataset_pb2 as open_dataset
# !pip install pathlib
from pathlib import Path
waymo_to_labels = {
0 : 'UNKNOWN',
1 : 'VEHICLE',
2 : 'PEDESTRIAN',
3 : 'SIGN',
4 : 'CYCLIST'
}
path = '/MyTeams/Waymo/WaymoDataset/TrainingSet/training_0000/segment-10017090168044687777_6380_000_6400_000_with_camera_labels.tfrecord'
frames = []
dataset = tf.data.TFRecordDataset(path, compression_type='')
for data in dataset:
frame = open_dataset.Frame()
frame.ParseFromString(bytearray(data.numpy()))
frames.append(frame)
print("frames extracted")
# -
print(frames[0].context.stats)
| TestNotebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: GNN
# language: python
# name: gnn
# ---
# + [markdown] colab_type="text" id="EbfhRLv0zejb"
#
# ## Graph Autoencoder based Collaborative Filtering
# #Update 2020.03.21
#
# - As the model is so big, we need to save and reload the last saved epoch with a checkpoint.
#
# #Update 2020.03.14
# - Deep and wide neighbours: n_order, k_neighbour -> $k + k^2 + \dots + k^n$ inputs
# - **Note** the model consumes lots of RAM with deeper and wider nodes
#
# Main settings in 4A:
#
# #Update 2020.03.02
# - Integrated validation set during training
# - Integrated early stopping with delta = 1e-5
# - Use 'adadelta' optimizer for dynamic learning rate
# - user n neibor + item n neibor
# - @base: 3/03/20 discussion
#
#
# + [markdown] colab_type="text" id="wuBt2WWN5Zrf"
# #[New Model](https://drive.google.com/file/d/1kN5loA18WyF1-I7BskOw6c9P1bdArxk7/view?usp=sharing):
#
# 
#
# + [markdown] colab_type="text" id="mWjyzcXW54GG"
# #Model implementation framework
#
# TF2.0 and Keras implementation
#
# - Create GMF model
# - Create helper methods: User/item latent
# - Create loss functions
# - Handle input $u_i, v_j$
# - Handle output $\hat{r}_{ij}$
# + [markdown] colab_type="text" id="koO06XoHRo_K"
# ## Organise imports
#
# + cellView="both" colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="PhlM3OtBzRdr" outputId="6f071d79-7345-4007-b12e-7310c2c8e103"
#@title
#import
#tensorflow_version 2.x
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Input, Dense, Concatenate, Embedding, Dropout, BatchNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l1, l2, l1_l2
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.layers import dot, add
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# + colab={} colab_type="code" id="XFEsPmNydxl3"
#dt_dir_name= "C:/Users/jiyu/Desktop/Mo/sample_data/ml-1m"
dt_dir_name= "C:/Users/thinguyen/Desktop/PhD_2020/Python Code/GNN/Mo/sample_data/Amazon_Book_small"
# +
#prepare folder structures
saved_model_dir = 'saved_models_WiHi_MLP(2,1)/'
# !mkdir -p "saved_models_WiHi_MLP(2,1)"
# -
load_saved_model = True
n_order = 2
k_neighbour=1
lr = 0.0005
l1_reg=1e-5
l2_reg=1e-4
k=20
# + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="Yvt1j3H7M_Yl" outputId="dc9b894a-6a60-4503-c202-b4d687acc5ce"
dataset = pd.read_csv(dt_dir_name +'/'+ 'ratings.csv', names=['user_id', 'item_id', 'rating'])
#dataset = pd.read_csv(dt_dir_name +'/'+ "ratings.csv")
# + colab={} colab_type="code" id="Y50GEUeWrgYL"
#reindex from 0 ids
dataset.user_id = dataset.user_id.astype('category').cat.codes.values
dataset.item_id = dataset.item_id.astype('category').cat.codes.values
#createMFModel(dataset=dataset)
# + [markdown] colab_type="text" id="3JkJvoIbS4gd"
# ##Turn original dataset to negative sample dataset
# + colab={} colab_type="code" id="HYxI9uKCQ9Gl"
#Version 1.2 (flexible + superfast negative sampling uniform)
import random
import time
import scipy
def neg_sampling(ratings_df, n_neg=1, neg_val=0, pos_val=1, percent_print=5):
"""version 1.2: 1 positive 1 neg (2 times bigger than the original dataset by default)
Parameters:
input rating data as pandas dataframe: userId|movieId|rating
n_neg: include n_negative / 1 positive
Returns:
negative sampled set as pandas dataframe
userId|movieId|interact (implicit)
"""
sparse_mat = scipy.sparse.coo_matrix((ratings_df.rating, (ratings_df.user_id, ratings_df.item_id)))
dense_mat = np.asarray(sparse_mat.todense())
print(dense_mat.shape)
    nsamples = ratings_df[['user_id', 'item_id']].copy()
    nsamples['rating'] = pos_val
length = dense_mat.shape[0]
printpc = int(length * percent_print/100)
nTempData = []
i = 0
start_time = time.time()
stop_time = time.time()
extra_samples = 0
for row in dense_mat:
if(i%printpc==0):
stop_time = time.time()
print("processed ... {0:0.2f}% ...{1:0.2f}secs".format(float(i)*100 / length, stop_time - start_time))
start_time = stop_time
n_non_0 = len(np.nonzero(row)[0])
zero_indices = np.where(row==0)[0]
if(n_non_0 * n_neg + extra_samples >= len(zero_indices)):
print(i, "non 0:", n_non_0,": len ",len(zero_indices))
neg_indices = zero_indices.tolist()
extra_samples = n_non_0 * n_neg + extra_samples - len(zero_indices)
else:
neg_indices = random.sample(zero_indices.tolist(), n_non_0 * n_neg + extra_samples)
extra_samples = 0
nTempData.extend([(uu, ii, rr) for (uu, ii, rr) in zip(np.repeat(i, len(neg_indices))
, neg_indices, np.repeat(neg_val, len(neg_indices)))])
i+=1
    nsamples = pd.concat([nsamples, pd.DataFrame(nTempData, columns=["user_id", "item_id", "rating"])], ignore_index=True)
nsamples.reset_index(drop=True)
return nsamples
# + colab={"base_uri": "https://localhost:8080/", "height": 493} colab_type="code" id="y_14eDLzQ5tY" outputId="b1ec141a-6269-4f74-98cc-fe6f35983a48"
neg_dataset = neg_sampling(dataset, n_neg=1)
neg_dataset.shape
# + [markdown] colab_type="text" id="utsDgdnjiKGe"
# ##Create train test set
#
# + colab={} colab_type="code" id="bXY34jFnUd8A"
from sklearn.model_selection import train_test_split
train, test = train_test_split(neg_dataset, test_size=0.2, random_state=2020)
# + [markdown] colab_type="text" id="gYNfcOkbFaxL"
# #Create deep embedding using MLP of the [model](https://drive.google.com/file/d/1kN5loA18WyF1-I7BskOw6c9P1bdArxk7/view?usp=sharing)
# + colab={} colab_type="code" id="yd2F19dTFmpi"
uids = np.sort(dataset.user_id.unique())
iids = np.sort(dataset.item_id.unique())
n_users = len(uids)
n_items = len(iids)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="XrPNkCqOsY3h" outputId="8de02878-dc86-436b-a6a6-9c323d34d641"
n_users, n_items
# + [markdown] colab_type="text" id="mUH0ZY-U9GUa"
# ## Create deep autoencoder (Skipped this)
#
#
# Reference: [keras](https://blog.keras.io/building-autoencoders-in-keras.html)
# + [markdown] colab_type="text" id="qFc7u4Y0kk0o"
# #Create rating matrix
# + colab={} colab_type="code" id="TYBlPffk_4jG"
import scipy
sparse_mat = scipy.sparse.coo_matrix((neg_dataset.rating, (neg_dataset.user_id, neg_dataset.item_id)))
rating_matrix = np.asarray(sparse_mat.todense())
# + colab={"base_uri": "https://localhost:8080/", "height": 136} colab_type="code" id="zY_2RWV4AK-y" outputId="34fc95f8-4188-4de5-fd90-90a8622b202d"
rating_matrix
# + [markdown] colab_type="text" id="T7owpsQpJBER"
# #Helper functions
# + colab={} colab_type="code" id="f5Gbtsl1JEGV"
def create_hidden_size(n_hidden_layers = 3, n_latent_factors = 8):
"""Sizes of each hidden layer, decreasing order"""
hidden_size = [n_latent_factors*2**i for i in reversed(range(n_hidden_layers))]
return hidden_size
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="9HhzvVBi6ouk" outputId="31ac443e-0376-47e7-ab6e-f64c7fc4a889"
create_hidden_size()
# + [markdown] colab_type="text" id="a1mD8CqzznZx"
# ### Create nearest neighbour (using cosine similarity)
#
# Deep and wide version: n orders + k neighbours per node.
# Total: $k + k^2 + \dots + k^n$
# The number of inputs grows exponentially!
# - Order 2: first $k$ rows
# - Order 3: next $k^2$ rows
# - Order 4: next $k^3$ rows
#
# Important pattern when parsing data:
#
#
# $[order 2 \rightarrow order 3 \rightarrow order 4]$
#
# samples:
#
# $[k \rightarrow k^2 \rightarrow k^3 ]$
#
# **Note**: don't care about loop (self-loop) e.g. $\Delta$
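#
# Note that `create_model` below also feeds each node itself (order 0), so the per-node input count is $\sum_{i=0}^{n\_order-1} k^i$. A tiny helper to check the growth (illustrative only; `n_inputs` is my own name):

```python
def n_inputs(n_order, k_neighbour):
    # one input per node at each depth i: itself (i=0) plus k, k^2, ... neighbours
    return sum(k_neighbour ** i for i in range(n_order))

print(n_inputs(2, 1))  # 2
print(n_inputs(3, 2))  # 7  (1 + 2 + 4)
print(n_inputs(4, 3))  # 40 (1 + 3 + 9 + 27)
```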
# + colab={} colab_type="code" id="NNtj5B8mNkls"
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np
def create_closest_neighbour_list(rating_matrix, n_order, k_neighbour):
"""return index list of most (k) similar rows that sorted descendingly of 2, 3,..n order
Params:
        n_order: 1 -> itself, 2 -> depth = 2 (include one level further)
        k_neighbour: number of neighbour nodes per node at each order from 1 -> n.
"""
k_nb = []
idx = 0
cos_matrix = cosine_similarity(rating_matrix, rating_matrix)
#print(cos_matrix)
for row in cos_matrix:
k_largest = np.argsort(-row)[:k_neighbour+1]
k_largest = k_largest.tolist()
if idx in k_largest:
k_largest.remove(idx)
k_nb.append(k_largest[:k_neighbour])
idx += 1
k_nb_2nd = np.stack(k_nb, axis=1)
#print(k_nb_2nd)
temp = k_nb_2nd
for o in range(2, n_order):
start_idx = sum([k_neighbour*k_neighbour**i for i in range(o-2)])
#print([k_neigbour*k_neigbour**i for i in range(o-2)],"start:", start_idx)
temp1 = np.concatenate([np.asarray([k_nb_2nd[:, k] for k in row]).T for row in temp[start_idx:,:]])
temp = np.concatenate([temp,temp1])
return temp
# + [markdown] colab_type="text" id="hpl15LyQlZ9F"
# #Create model with Keras with shared autoencoder layers
#
# Reference: shared vision model: https://keras.io/getting-started/functional-api-guide/#shared-vision-model
#
# Problem: graph disconnect : https://github.com/keras-team/keras/issues/11151
#
# + [markdown] colab_type="text" id="b8OhJEI-TOCn"
# ###Create custom loss for ui,um,& items
#
# Currently not in use!!!
# + [markdown] colab_type="text" id="mtoxMSlrtQWK"
# ###Create shared autoencoder
# + colab={} colab_type="code" id="4CnewmMGvgUG"
def createSharedAutoEncoder(input_shape, hidden_size, names=['user_encoder', 'user_decoder']):
"""This method is to create autoencoder
Parameters:
input_shape: tuble for shape. For this method, one value is expected, e.g. (30, ).
hidden_size: the array that contains number of neuron each layers, e.g. [10, 20, 1]
Returns:
encoder: the encoder model
decoder: the decoder model
"""
# shared autoencoder
input=Input(shape=input_shape)
encoded = input
for nn in hidden_size[:-1]:
encoded = Dense(nn, activation='relu',kernel_initializer='he_uniform')(encoded)
encoded = Dense(hidden_size[-1], activation='relu',kernel_initializer='he_uniform',
name=names[0])(encoded)
encoder = Model(input, encoded, name=names[0])
#------- decoder model
hidden_size.reverse()
    decoderinput = Input(shape=(hidden_size[0],))
decoded = decoderinput
for nn in hidden_size[1:]:
decoded = Dense(nn, activation='relu', kernel_initializer='he_uniform')(decoded)
decoded = Dense(input_shape[0], activation='relu', kernel_initializer='he_uniform', name=names[1])(decoded)
decoder = Model(decoderinput, decoded, name=names[1])
return encoder, decoder
# + [markdown] colab_type="text" id="tpN5OKg-vRvM"
# ###Integrate autoencoders + mlp + custom loss
# + colab={} colab_type="code" id="2ZgDmpzeE9lV"
import numpy as np
def get_input_weights(n_order, k_neighbour, decay=4):
layer_weights = [np.repeat(decay**(n_order-o-1), k_neighbour**o) for o in range(n_order)]
layer_weights_flat = np.concatenate(layer_weights).ravel()
layer_weights_sum = np.sum(layer_weights_flat)
layer_weights_normalized = layer_weights_flat / layer_weights_sum
return layer_weights_normalized
get_input_weights(2, 1, 4)
# + colab={} colab_type="code" id="qKSGawhd1nqS"
def create_model(n_users, n_items, n_order=2, k_neighbour=1, latent_factors=64, lr = 0.0005, l1_reg=1e-5, l2_reg=1e-4):
"""
number of depth = n_order, n_order=2: 1 node + 1 deeper node
"""
#user shared autoencoder
hidden_size = create_hidden_size() #for autoencoder
uencoder, udecoder = createSharedAutoEncoder((n_items,), hidden_size)
#item shared autoencoder
hidden_size = create_hidden_size() #for autoencoder
iencoder, idecoder = createSharedAutoEncoder((n_users,),
hidden_size,['item_encoder','item_decoder'])
#create n inputs + shared autoencoder
u_inputs = []
v_inputs = []
u_encoded = []
v_encoded = []
u_decoded = []
v_decoded = []
#n-order proximity by comparing n embedded vecs
input_weights = get_input_weights(n_order, k_neighbour, decay=4)
for i in range(n_order):
u_inputs.extend([Input(shape=(n_items,), name= f'ui{i}{k}') for k in range(k_neighbour**i)])
v_inputs.extend([Input(shape=(n_users,), name= f'vj{i}{k}') for k in range(k_neighbour**i)])
u_encoded.extend([uencoder(u_i) for u_i in u_inputs])
v_encoded.extend([iencoder(v_j) for v_j in v_inputs])
u_decoded.extend([udecoder(u_en) for u_en in u_encoded])
v_decoded.extend([idecoder(v_en) for v_en in v_encoded])
#get ALL COMBINED embeddings from 2 encoders(Need work with combining method)
uii_encoded = add([u_encoded[i]*input_weights[i] for i in range(len(u_encoded))]) if n_order > 1 and k_neighbour > 0 else u_encoded[0]
vji_encoded = add([v_encoded[i]*input_weights[i] for i in range(len(u_encoded))]) if n_order > 1 and k_neighbour > 0 else v_encoded[0]
concat = layers.concatenate([uii_encoded, vji_encoded])
mlp = concat
for i in range(3,-1,-1):
if i == 0:
mlp = Dense(8**i, activation='sigmoid', name="mlp")(mlp)
else:
mlp = Dense(8*2**i, activation='sigmoid')(mlp)
if i >= 2:
mlp = BatchNormalization()(mlp)
mlp = Dropout(0.2)(mlp)
model = Model(inputs=[u_inputs, v_inputs],
outputs=[u_decoded, v_decoded, mlp])
udecoder_names=["user_decoder" if x==0 else f"user_decoder_{x}" for x in range(len(input_weights))]
vdecoder_names=["item_decoder" if x==0 else f"item_decoder_{x}" for x in range(len(input_weights))]
udecoder_dict = {ukey: 'mean_squared_error' for ukey in udecoder_names}
vdecoder_dict = {vkey: 'mean_squared_error' for vkey in vdecoder_names}
udecoder_metric_dict = {ukey: 'mse' for ukey in udecoder_names}
    vdecoder_metric_dict = {vkey: 'mse' for vkey in vdecoder_names}
losses={'mlp':'binary_crossentropy', **udecoder_dict, **vdecoder_dict}
    metrics = {'mlp': ['binary_accuracy'],
               **udecoder_metric_dict,
               **vdecoder_metric_dict}
adadelta=tf.keras.optimizers.Adadelta(learning_rate=lr)
    model.compile(optimizer=adadelta, loss=losses, metrics=metrics)
model.summary()
return model
# + [markdown] colab_type="text" id="5nYDqsVrtX6o"
# ##Argparse
#
# Store all settings here
# + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="Rckg9TjWdeHm" outputId="95c25ee6-cebe-4dfa-f9c6-af1c1c27f86b"
import os
if load_saved_model:
saved_list = os.listdir(saved_model_dir)
saved_list.sort()
print(saved_list)
if(len(saved_list) != 0):
last_saved = saved_list[-1]
model = tf.keras.models.load_model(saved_model_dir+'/'+last_saved)
else:
model = create_model(n_users, n_items, n_order, k_neighbour)
# + [markdown] colab_type="text" id="hL6lccOaleLN"
# ###Create data generator using rating matrix
#
# It takes rating matrix and generate a sequence of users, items, and ratings
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="dyMtoZLy6SxZ" outputId="ca1937c8-735d-446a-a156-c495eebe95c7"
closest_uneighbor = create_closest_neighbour_list(rating_matrix, n_order, k_neighbour)
closest_ineighbor = create_closest_neighbour_list(rating_matrix.T, n_order,k_neighbour)
closest_uneighbor.shape, closest_ineighbor.shape
# + colab={} colab_type="code" id="rzlkixAH9q9F"
from tensorflow.keras.utils import Sequence
import math
class DataGenerator(Sequence):
    def __init__(self, dataset, rating_matrix, batch_size=100, n_order=2, k_neighbour=1, shuffle=True):
        'Initialization'
        self.n_order = n_order
        self.k_neighbour = k_neighbour
        self.batch_size = batch_size
        self.dataset = dataset
        self.shuffle = shuffle
        self.indices = self.dataset.index
        self.rating_matrix = rating_matrix
        self.on_epoch_end()
    def __len__(self):
        'Denotes the number of batches per epoch'
        return math.floor(len(self.dataset) / self.batch_size)
    def __getitem__(self, index):
        'Generate one batch of data'
        # Generate indexes of the batch
        idxs = [i for i in range(index*self.batch_size, (index+1)*self.batch_size)]
        # Find list of IDs
        list_IDs_temp = [self.indices[k] for k in idxs]
        # Generate data
        uids = self.dataset.iloc[list_IDs_temp, [0]].to_numpy().reshape(-1)
        iids = self.dataset.iloc[list_IDs_temp, [1]].to_numpy().reshape(-1)
        Users = np.stack([self.rating_matrix[row] for row in uids])
        Items = np.stack([self.rating_matrix[:, col] for col in iids])
        ratings = self.dataset.iloc[list_IDs_temp, [2]].to_numpy().reshape(-1)
        if self.n_order > 1 and self.k_neighbour > 0:  # use the instance settings rather than the globals
            u_neighbors = [closest_uneighbor[:, uid] for uid in uids]
            i_neighbors = [closest_ineighbor[:, iid] for iid in iids]
            #print([np.stack([rating_matrix[row] for row in u_neighbors[i]]) for i in range(len(u_neighbors))])
            User_neighbors = list(zip(*[[self.rating_matrix[rowId] for rowId in u_neighbors[i]] for i in range(len(u_neighbors))]))
            #print([u for u in User_neighbors])#, User_neighbors.shape)
            User_neighbors = np.array([np.stack(batch) for batch in User_neighbors])
            Item_neighbors = list(zip(*[[self.rating_matrix[:, colId] for colId in i_neighbors[i]] for i in range(len(i_neighbors))]))
            Item_neighbors = np.array([np.stack(batch) for batch in Item_neighbors])
            return [Users, *User_neighbors, Items, *Item_neighbors], [Users, *User_neighbors, Items, *Item_neighbors, ratings]
        else:
            return [Users, Items], [Users, Items, ratings]
def on_epoch_end(self):
'Updates indexes after each epoch'
self.indices = np.arange(len(self.dataset))
if self.shuffle == True:
np.random.shuffle(self.indices)
# + [markdown] colab_type="text" id="XW6ZseFXRQzV"
# ##Training with data generator
# + colab={} colab_type="code" id="63qB2z8jzPKt"
#early_stop = EarlyStopping(monitor='val_mlp_loss', min_delta = 0.0001, patience=10)
# reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2,
# patience=10, min_lr=0.000001)
# + colab={} colab_type="code" id="Va1XeZWzkBKl"
checkpoint_path= saved_model_dir + "/model-{epoch:02d}-{mlp_binary_accuracy:.2f}.hdf5"
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path, monitor='mlp_binary_accuracy',verbose=1, save_best_only=True, mode='max')
# + colab={} colab_type="code" id="0-yRouiTlUaA"
train_generator = DataGenerator(train, rating_matrix, batch_size=256, n_order=n_order, k_neighbour=k_neighbour, shuffle=False)
#val_generator = DataGenerator(val, rating_matrix, batch_size=512, n_order=n_order, k_neighbour=k_neighbour, shuffle=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} colab_type="code" id="eWPaVhX251-0" outputId="80125e7b-6ca2-47cb-dea1-439f0f2c4aba"
history = model.fit(train_generator,
# validation_data=val_generator,
epochs=100,
verbose=2, callbacks=[cp_callback,
# early_stop
],
#workers=4,
shuffle=False)
# + [markdown] colab_type="text" id="2E8-5W-Vis_e"
# ## Plot losses
#
# There are several losses, pick the one we need
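A minimal sketch of picking one loss series out of `history.history` and plotting it. The dictionary keys and values here are made up for illustration; the actual key names depend on the model's output names.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

# stand-in for Keras's history.history dict (keys and values are hypothetical)
history_dict = {"loss": [1.2, 0.9, 0.7], "mlp_loss": [0.8, 0.6, 0.5]}

key = "mlp_loss"  # pick the loss we care about
plt.plot(history_dict[key], label=key)
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("losses.png")
```

With a real training run, `history_dict` would simply be replaced by `history.history`.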
# + [markdown] colab_type="text" id="Pyd1JY_tYilg"
# Let's now see how our model does! I'll do a small post-processing step to round off our predictions to the nearest integer. This is usually not done and is just a whimsical step here, since the training ratings are all integers! There are better ways to encode this integer requirement (one-hot encoding!), but we won't discuss them in this post.
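A sketch of that rounding step (the `preds` array is a made-up stand-in for the model's raw predictions):

```python
import numpy as np

# hypothetical raw predictions from the model
preds = np.array([3.2, 4.7, 1.499])

# round each prediction to the nearest integer rating
rounded = np.rint(preds).astype(int)
print(rounded)  # [3 5 1]
```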
# + colab={} colab_type="code" id="4iQQp_-5Yg8E"
test_datagenerator = DataGenerator(test, rating_matrix)
results = model.evaluate(test_datagenerator)
print(results)
# -
#####################################################################
# Compute HR (hit ratio) following the NCF evaluation protocol
# Create the user & item lists:
tmp_lst_u=train.user_id.unique()
#tmp_lst_i=train.item_id.unique()
tmp_lst_i=dataset.item_id.unique()
tmp_lst_u.sort(); tmp_lst_i.sort()
lst_user=tmp_lst_u.tolist()
lst_item=tmp_lst_i.tolist()
# +
def Top_100_Unused_item(user_id):
tmp_df_used_item=train.loc[(train['user_id']==user_id) & (train['rating']==1)]
tmp_lst=tmp_df_used_item['item_id'].values.tolist()
#lst_un_item= set(lst_item) - set(tmp_lst)
lst_un_item=[x for x in lst_item if x not in tmp_lst]
    #sample tmp_no unused items at random (note: tmp_no is 100000 here, despite the "100" in the surrounding names)
    tmp_no=100000
np.random.seed(2020)
lst_100_un_item=(np.random.choice(lst_un_item,tmp_no))
#Create DataFrame
tmp_df=pd.DataFrame(columns=['user_id', 'item_id', 'rating', 'prediction'])
tmp_df['item_id']=lst_100_un_item
tmp_df['user_id']=user_id
tmp_df['rating']=0.0
top_datagenerator = DataGenerator(tmp_df, rating_matrix)
tmp_y_hat = model.predict(top_datagenerator)
y_hat= tmp_y_hat[4]
tmp_arr=y_hat.flatten().tolist()
tmp_df['prediction']=tmp_arr
return tmp_df
# build an item_id array for each user:
def recommend(df,u,k):
tmp_df=df.sort_values(by=['prediction'],ascending=False)
tmp_df=tmp_df.head(k)
    # resetting the index makes .iloc/.loc access easier
tmp_df.reset_index(drop=True, inplace=True)
tmp_arrItem=tmp_df['item_id'].to_numpy()
return (tmp_arrItem,tmp_df)
def dcg_at_k(r, k):
assert k >= 1
r = np.asfarray(r)[:k] != 0
if r.size:
return np.sum(np.subtract(np.power(2, r), 1) / np.log2(np.arange(2, r.size + 2)))
return 0.
def ndcg_at_k(r, k):
assert k >= 1
idcg = dcg_at_k(sorted(r, reverse=True), k)
if not idcg:
return 0.
return dcg_at_k(r, k) / idcg
# +
import random
test_2=test.copy()
test_2.reset_index(drop=True, inplace=True)
k=20
rd_no =10
np.random.seed(2020)
rd_lst_usr=np.random.choice(lst_user,rd_no)
#rd_lst_usr=lst_user
#________________________________________________________________________________________________
# build the HR dataframe
df_HR=pd.DataFrame(columns=['user_id', 'HR','NDCG'])
df_HR['user_id']=rd_lst_usr
df_HR=df_HR.sort_values(by=['user_id'],ascending=True)
for u in rd_lst_usr:
df_100_Unused=Top_100_Unused_item(u)
#get top 20 prediction:
arr_top_k,df_top_k=recommend(df_100_Unused,u,k)
#Check_with_TestData(df_top_k,test_2)
for i in range(len(df_top_k)):
#Column sort: "user_id -> item_id -> rating -> prediction
usr=df_top_k.iloc[i,0]
itm=df_top_k.iloc[i,1]
        # check whether any matching row exists in test_2; if so, the filtered df has >= 1 rows
chk=len(test_2.loc[(test_2["user_id"]==usr) & (test_2["item_id"]==itm) & (test_2["rating"]==1)])
if chk==1:
df_top_k.loc[(df_top_k["user_id"]==usr) & (df_top_k["item_id"]==itm),"rating"]=1
rating_lst=df_top_k['rating'].tolist()
#################################################
    # compute HR:
tmp_cnt=0
for r in rating_lst:
if r!=0:
tmp_cnt += 1
tmp_hr = tmp_cnt/len(rating_lst)
df_HR.loc[df_HR["user_id"]==int(u),["HR"]]=tmp_hr
##########################################################
    # compute NDCG:
ndcg=ndcg_at_k(rating_lst, k)
df_HR.loc[df_HR["user_id"]==int(u),["NDCG"]]=ndcg
#print(df_HR)
# -
df_HR
# +
#Calculate HR and NDCG for the model
HR_temp= df_HR.sum(0)
HR=HR_temp[1]/(len(df_HR))
NDCG=HR_temp[2]/(len(df_HR))
print("HR= ", HR)
print("NDCG= ", NDCG)
# + [markdown] colab_type="text" id="GsdDXeO8Ry7_"
# #References
# + [markdown] colab_type="text" id="tKqSn4KnL2yQ"
# Input layer:
#
# - Embedding layer: [Link](https://gdcoder.com/-what-is-an-embedding-layer/)
# - Embedding lookup: [link text](https://keras.io/layers/embeddings/)
# - Multi input: [link text](https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models)
#
| GACF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## [Assignment focus]
# Understand how the linear regression model developed, its strengths and weaknesses, and the situations in which to use it
# ## Assignment
# Read the related references below and answer the following questions
#
# [Detailed introduction to Linear Regression](https://medium.com/@yehjames/%E8%B3%87%E6%96%99%E5%88%86%E6%9E%90-%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92-%E7%AC%AC3-3%E8%AC%9B-%E7%B7%9A%E6%80%A7%E5%88%86%E9%A1%9E-%E9%82%8F%E8%BC%AF%E6%96%AF%E5%9B%9E%E6%AD%B8-logistic-regression-%E4%BB%8B%E7%B4%B9-a1a5f47017e5)
#
# [Detailed introduction to Logistic Regression](https://medium.com/@yehjames/%E8%B3%87%E6%96%99%E5%88%86%E6%9E%90-%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92-%E7%AC%AC3-3%E8%AC%9B-%E7%B7%9A%E6%80%A7%E5%88%86%E9%A1%9E-%E9%82%8F%E8%BC%AF%E6%96%AF%E5%9B%9E%E6%AD%B8-logistic-regression-%E4%BB%8B%E7%B4%B9-a1a5f47017e5)
#
# 1. Can a linear regression model accurately predict a dataset with a nonlinear relationship?
#    Yes, as long as the inputs first go through an appropriate nonlinear transformation.
# 2. Do regression models make underlying assumptions about the data distribution?
#    Yes; the regression must use a model (function) assumption appropriate to the type of data in order to obtain a good model.
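As a small illustration of the first answer (the data here is synthetic): a model that is linear in its coefficients can fit a nonlinear relationship once the input is expanded with nonlinear features.

```python
import numpy as np

# synthetic quadratic data: y = 2*x^2 + 1 (a nonlinear relationship in x)
x = np.linspace(-3, 3, 50)
y = 2 * x**2 + 1

# np.polyfit solves a least-squares problem that is linear in the
# coefficients of the expanded features [x^2, x, 1]
coeffs = np.polyfit(x, y, deg=2)
print(np.round(coeffs, 6))  # approximately [2, 0, 1]
```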
| homeworks/D037/Day_037_HW.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sys
sys.path.append('../scripts/')
from dynamic_programming import *
class BeliefDynamicProgramming(DynamicProgramming):
def __init__(self, widths, goal, puddles, time_interval, sampling_num, camera, puddle_coef=100.0, \
lowerleft=np.array([-4, -4]).T, upperright=np.array([4, 4]).T, dev_borders=[0.1,0.2,0.4,0.8]):
        super().__init__(widths, goal, puddles, time_interval, sampling_num, puddle_coef, lowerleft, upperright) ###amdp6: changed actions (lines 4-9)
        self.actions = [(0.0, 2.0), (0.0, -2.0), (1.0, 0.0), (-1.0, 0.0)] # add reverse (-1.0, 0.0) and redefine self.actions (it must be done at this point!)
        self.state_transition_probs = self.init_state_transition_probs(time_interval, sampling_num) # added: recompute after changing the actions
self.index_nums = np.array([*self.index_nums, len(dev_borders) + 1])
nx, ny, nt, nh = self.index_nums
self.indexes = list(itertools.product(range(nx), range(ny), range(nt), range(nh)))
self.value_function, self.final_state_flags = self.init_belief_value_function()
self.policy = np.zeros(np.r_[self.index_nums,2])
self.dev_borders = dev_borders
self.dev_borders_side = [dev_borders[0]/10, *dev_borders, dev_borders[-1]*10]
self.motion_sigma_transition_probs = self.init_motion_sigma_transition_probs()
        self.obs_sigma_transition_probs = self.init_obs_sigma_transition_probs(camera) # added
def init_obs_sigma_transition_probs(self, camera):
probs = {}
for index in self.indexes:
pose = self.pose_min + self.widths*(np.array(index[0:3]).T + 0.5)
sigma = (self.dev_borders_side[index[3]] + self.dev_borders_side[index[3]+1])/2
S = (sigma**2)*np.eye(3)
for d in camera.data(pose):
S = self.observation_update(d[1], S, camera, pose)
probs[index] = {self.cov_to_index(S):1.0}
return probs
def observation_update(self, landmark_id, S, camera, pose):
distance_dev_rate = 0.14
direction_dev = 0.05
H = matH(pose, camera.map.landmarks[landmark_id].pos)
estimated_z = IdealCamera.observation_function(pose, camera.map.landmarks[landmark_id].pos)
Q = matQ(distance_dev_rate*estimated_z[0], direction_dev)
K = S.dot(H.T).dot(np.linalg.inv(Q + H.dot(S).dot(H.T)))
return (np.eye(3) - K.dot(H)).dot(S)
def init_motion_sigma_transition_probs(self):
probs = {}
for a in self.actions:
for i in range(len(self.dev_borders)+1):
probs[(i, a)] = self.calc_motion_sigma_transition_probs(self.dev_borders_side[i], self.dev_borders_side[i+1], a)
return probs
def cov_to_index(self, cov):
sigma = np.power(np.linalg.det(cov), 1.0/6)
for i, e in enumerate(self.dev_borders):
if sigma < e: return i
return len(self.dev_borders)
def calc_motion_sigma_transition_probs(self, min_sigma, max_sigma, action, sampling_num=100):
nu, omega = action
if abs(omega) < 1e-5: omega = 1e-5
        F = matF(nu, omega, self.time_interval, 0.0) # the robot's heading does not matter here, so fix it at 0 [deg]
        M = matM(nu, omega, self.time_interval, {"nn":0.19, "no":0.001, "on":0.13, "oo":0.2}) # motion noise model (copied from the Kalman filter chapter)
A = matA(nu, omega, self.time_interval, 0.0)
ans = {}
        for sigma in np.linspace(min_sigma, max_sigma*0.999, sampling_num): # draw pre-transition sigmas (assumed uniform within the interval)
            index_after = self.cov_to_index(sigma*sigma*F.dot(F.T) + A.dot(M).dot(A.T)) # index of sigma after the transition
            ans[index_after] = 1 if index_after not in ans else ans[index_after] + 1 # just counting occurrences (slightly convoluted because the dict entries also need initializing)
        for e in ans:
            ans[e] /= sampling_num # turn counts into probabilities
return ans
def init_belief_value_function(self):
v = np.empty(self.index_nums)
f = np.zeros(self.index_nums)
for index in self.indexes:
f[index] = self.belief_final_state(np.array(index).T)
v[index] = self.goal.value if f[index] else -100.0
return v, f
def belief_final_state(self, index):
x_min, y_min, _ = self.pose_min + self.widths*index[0:3]
x_max, y_max, _ = self.pose_min + self.widths*(index[0:3] + 1)
corners = [[x_min, y_min, _], [x_min, y_max, _], [x_max, y_min, _], [x_max, y_max, _] ]
return all([self.goal.inside(np.array(c).T) for c in corners ]) and index[3] == 0
def action_value(self, action, index, out_penalty=True):###amdp6
value = 0.0
for delta, prob in self.state_transition_probs[(action, index[2])]:
after, out_reward = self.out_correction(np.array(index[0:3]).T + delta)
reward = - self.time_interval * self.depths[(after[0], after[1])] * self.puddle_coef - self.time_interval + out_reward*out_penalty
            for sigma_after, sigma_prob in self.motion_sigma_transition_probs[(index[3], action)].items():
                for sigma_obs, sigma_obs_prob in self.obs_sigma_transition_probs[(*after, sigma_after)].items(): # one more level of transition; use self, not the global dp
                    value += (self.value_function[(*after, sigma_obs)] + reward) * prob * sigma_prob * sigma_obs_prob # multiply in the extra probability as well
return value
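The covariance binning used throughout this class (`cov_to_index`) reduces a covariance matrix to a single scalar deviation, `det(cov)^(1/6)` (for a 3x3 isotropic covariance `sigma^2 * I`, this recovers `sigma`), and looks up the interval it falls into among `dev_borders`. A self-contained sketch of that binning:

```python
import numpy as np

dev_borders = [0.1, 0.2, 0.4, 0.8]  # same default borders as the class above

def cov_to_index(cov, borders=dev_borders):
    # det of diag(sigma^2, sigma^2, sigma^2) is sigma^6, so the 1/6 power gives sigma
    sigma = np.power(np.linalg.det(cov), 1.0 / 6)
    for i, e in enumerate(borders):
        if sigma < e:
            return i
    return len(borders)

# an isotropic covariance with sigma = 0.3 falls in bin 2 (0.2 <= 0.3 < 0.4)
print(cov_to_index((0.3 ** 2) * np.eye(3)))  # 2
```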
# +
puddles = [Puddle((-2, 0), (0, 2), 0.1), Puddle((-0.5, -2), (2.5, 1), 0.1)]
## build the map and camera ##
m = Map()
for ln in [(1,4), (4,1), (-4, 1), (-2, 1)]: m.append_landmark(Landmark(*ln))
c = IdealCamera(m)
dp = BeliefDynamicProgramming(np.array([0.2, 0.2, math.pi/18]).T, Goal(-3,-3), puddles, 0.1, 10, c) # pass the camera in
# +
def save():
with open("policy_amdp.txt", "w") as f: ###amdp6sweeps
for index in dp.indexes:
p = dp.policy[index]
f.write("{} {} {} {} {} {}\n".format(index[0], index[1], index[2],index[3], p[0], p[1])) #一つ{}とindexの要素を増やす
with open("value_amdp.txt", "w") as f:
for index in dp.indexes:
p = dp.value_function[index]
f.write("{} {} {} {} {}\n".format(index[0], index[1], index[2], index[3], p)) #5行目と同じ
delta = 1e100
counter = 0
while delta > 0.01:
delta = dp.value_iteration_sweep()
counter += 1
print(counter, delta)
save()
# -
import seaborn as sns
v = dp.value_function[:, :, 18, 4]
sns.heatmap(np.rot90(v), square=False)
plt.show()
| section_pomdp/amdp6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Training models with Keras
from __future__ import absolute_import, division, print_function
import tensorflow as tf
tf.keras.backend.clear_session()
import tensorflow.keras as keras
import tensorflow.keras.layers as layers
# ## Model construction, training, and evaluation workflow
# +
# Build the model
inputs = keras.Input(shape=(784,), name='mnist_input')
h1 = layers.Dense(64, activation='relu')(inputs)
h1 = layers.Dense(64, activation='relu')(h1)
outputs = layers.Dense(10, activation='softmax')(h1)
model = keras.Model(inputs, outputs)
# keras.utils.plot_model(model, './ch3/net001.png', show_shapes=True)
model.compile(optimizer=keras.optimizers.RMSprop(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
# Load the data
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') /255
x_test = x_test.reshape(10000, 784).astype('float32') /255
x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
# Train the model
history = model.fit(x_train, y_train, batch_size=64, epochs=3,
validation_data=(x_val, y_val))
print('history:')
# history.history holds the per-epoch training records
print(history.history)
result = model.evaluate(x_test, y_test, batch_size=128)
print('evaluate:')
print(result)
pred = model.predict(x_test[:2])
print('predict:')
print(pred)
# -
# ## Custom losses and metrics
# To define a custom metric, subclass the Metric class and override the following methods:
#
# __init__(self): initialization.
#
# update_state(self, y_true, y_pred, sample_weight=None): uses the targets y_true and the model predictions y_pred to update the state variables.
#
# result(self): computes the final result from the state variables.
#
# reset_states(self): reinitializes the metric's state.
# +
# A simple example: a CategoricalTruePositives metric that counts how many samples were correctly classified as belonging to a given class
class CategoricalTruePositives(keras.metrics.Metric):
    def __init__(self, name='categorical_true_positives', **kwargs):
        super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
        self.true_positives = self.add_weight(name='tp', initializer='zeros')
    def update_state(self, y_true, y_pred, sample_weight=None):
        y_pred = tf.argmax(y_pred, axis=-1)  # class index per sample, not an argmax over the batch axis
        y_true = tf.equal(tf.cast(y_pred, tf.int32), tf.cast(y_true, tf.int32))
        y_true = tf.cast(y_true, tf.float32)
        if sample_weight is not None:
            sample_weight = tf.cast(sample_weight, tf.float32)
            y_true = tf.multiply(sample_weight, y_true)
        return self.true_positives.assign_add(tf.reduce_sum(y_true))
    def result(self):
        return tf.identity(self.true_positives)
    def reset_states(self):
        self.true_positives.assign(0.)
model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
              loss=keras.losses.SparseCategoricalCrossentropy(),
              metrics=[CategoricalTruePositives()])
model.fit(x_train, y_train,
batch_size=64, epochs=3)
# +
# Add a network loss by defining a custom layer
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(tf.reduce_sum(inputs) * 0.1)
return inputs
inputs = keras.Input(shape=(784,), name='mnist_input')
h1 = layers.Dense(64, activation='relu')(inputs)
h1 = ActivityRegularizationLayer()(h1)
h1 = layers.Dense(64, activation='relu')(h1)
outputs = layers.Dense(10, activation='softmax')(h1)
model = keras.Model(inputs, outputs)
# keras.utils.plot_model(model, 'net001.png', show_shapes=True)
model.compile(optimizer=keras.optimizers.RMSprop(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(x_train, y_train, batch_size=32, epochs=1)
# +
# A metric to track can likewise be added by defining a custom layer
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
self.add_metric(keras.backend.std(inputs),
name='std_of_activation',
aggregation='mean')
return inputs
inputs = keras.Input(shape=(784,), name='mnist_input')
h1 = layers.Dense(64, activation='relu')(inputs)
h1 = MetricLoggingLayer()(h1)
h1 = layers.Dense(64, activation='relu')(h1)
outputs = layers.Dense(10, activation='softmax')(h1)
model = keras.Model(inputs, outputs)
# keras.utils.plot_model(model, 'net001.png', show_shapes=True)
model.compile(optimizer=keras.optimizers.RMSprop(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(x_train, y_train, batch_size=32, epochs=1)
# +
# Losses and metrics can also be added directly on the model
class MetricLoggingLayer(layers.Layer):
def call(self, inputs):
self.add_metric(keras.backend.std(inputs),
name='std_of_activation',
aggregation='mean')
return inputs
inputs = keras.Input(shape=(784,), name='mnist_input')
h1 = layers.Dense(64, activation='relu')(inputs)
h2 = layers.Dense(64, activation='relu')(h1)
outputs = layers.Dense(10, activation='softmax')(h2)
model = keras.Model(inputs, outputs)
model.add_metric(keras.backend.std(inputs),
name='std_of_activation',
aggregation='mean')
model.add_loss(tf.reduce_sum(h1)*0.1)
# keras.utils.plot_model(model, 'net001.png', show_shapes=True)
model.compile(optimizer=keras.optimizers.RMSprop(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(x_train, y_train, batch_size=32, epochs=1)
# -
# Besides passing validation data with validation_data, you can also use validation_split to carve a validation set out of the training data
#
# Note: validation_split can only be used when training with NumPy data
# Besides validation_data, validation_split can also be used to hold out validation data
model.fit(x_train, y_train, batch_size=32, epochs=1, validation_split=0.2)
# ## Building input pipelines with tf.data
# +
def get_compiled_model():
inputs = keras.Input(shape=(784,), name='mnist_input')
h1 = layers.Dense(64, activation='relu')(inputs)
h2 = layers.Dense(64, activation='relu')(h1)
outputs = layers.Dense(10, activation='softmax')(h2)
model = keras.Model(inputs, outputs)
model.compile(optimizer=keras.optimizers.RMSprop(),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
return model
model = get_compiled_model()
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
# model.fit(train_dataset, epochs=3)
# steps_per_epoch: train only this many steps per epoch
# validation_steps: how many steps to run at each validation
model.fit(train_dataset, epochs=3, steps_per_epoch=100,
validation_data=val_dataset, validation_steps=3)
# -
# ## Sample weights and class weights
# A "sample weights" array is an array of numbers specifying how much weight each sample in a batch should carry when computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes). When the weights used are ones and zeros, the array can serve as a mask for the loss function (entirely discarding certain samples' contribution to the total loss).
#
# A "class weights" dict is a more specific instance of the same concept: it maps class indices to the sample weight that should be used for samples belonging to that class. For example, if class "0" appears half as often as class "1" in your data, you could use class_weight = {0: 1., 1: 0.5}.
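A plain-NumPy sketch of the masking idea described above (the loss values are made-up toy numbers): with 0/1 sample weights, the weighted loss simply drops the masked samples.

```python
import numpy as np

per_sample_loss = np.array([0.5, 2.0, 1.0, 4.0])
sample_weight = np.array([1.0, 0.0, 1.0, 0.0])  # 0 = drop this sample from the loss

# weighted total, normalized by the weight mass (one common convention)
weighted_loss = np.sum(per_sample_loss * sample_weight) / np.sum(sample_weight)
print(weighted_loss)  # (0.5 + 1.0) / 2 = 0.75
```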
# +
# Increase the weight of class 5
import numpy as np
# class weights
model = get_compiled_model()
class_weight = {i : 1.0 for i in range(10)}
class_weight[5] = 2.0
print(class_weight)
model.fit(x_train, y_train,
class_weight=class_weight,
batch_size=64,
epochs=4)
# +
# sample weights
model = get_compiled_model()
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
model.fit(x_train, y_train,
sample_weight=sample_weight,
batch_size=64,
epochs=4)
# +
# tf.data version with per-sample weights
model = get_compiled_model()
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train,
sample_weight))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
model.fit(train_dataset, epochs=3, )
# -
# ## Multi-input and multi-output models
# +
image_input = keras.Input(shape=(32, 32, 3), name='img_input')
timeseries_input = keras.Input(shape=(None, 10), name='ts_input')
x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)
x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)
x = layers.concatenate([x1, x2])
score_output = layers.Dense(1, name='score_output')(x)
class_output = layers.Dense(5, activation='softmax', name='class_output')(x)
model = keras.Model(inputs=[image_input, timeseries_input],
outputs=[score_output, class_output])
keras.utils.plot_model(model, './ch3/multi_input_output_model.png'
, show_shapes=True)
# +
# Different losses and metrics can be specified per output
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[keras.losses.MeanSquaredError(),
keras.losses.CategoricalCrossentropy()])
# Loss weights can also be specified per output
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={'score_output': keras.losses.MeanSquaredError(),
          'class_output': keras.losses.CategoricalCrossentropy()},
    metrics={'score_output': [keras.metrics.MeanAbsolutePercentageError(),
                              keras.metrics.MeanAbsoluteError()],
             'class_output': [keras.metrics.CategoricalAccuracy()]},
    loss_weights={'score_output': 2., 'class_output': 1.})  # the keyword is loss_weights, not loss_weight
# Outputs whose loss should not contribute can be given a None loss
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss=[None, keras.losses.CategoricalCrossentropy()])
# Or dict loss version
model.compile(
optimizer=keras.optimizers.RMSprop(1e-3),
loss={'class_output': keras.losses.CategoricalCrossentropy()})
# -
# ## Using callbacks
# Callbacks in Keras are objects that are invoked at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, and so on) and can be used to implement behaviors such as:
#
# - running validation at additional points during training (beyond the built-in per-epoch validation)
# - periodically checkpointing the model when it exceeds some accuracy threshold
# - changing the model's learning rate when training seems to have plateaued
# - fine-tuning the top layers when training seems to have plateaued
# - sending email or instant-message notifications when training ends or when some performance threshold is exceeded, and so on.
#
#
# **Built-in callbacks include:**
#
# - ModelCheckpoint: periodically save the model.
# - EarlyStopping: stop training when it no longer improves the validation metrics.
# - TensorBoard: periodically write model logs that can be displayed in TensorBoard (see the "Visualization" section for more details).
# - CSVLogger: stream loss and metric data to a CSV file.
# - and more
# ### Callback usage
# +
model = get_compiled_model()
callbacks = [
keras.callbacks.EarlyStopping(
# Stop training when `val_loss` is no longer improving
monitor='val_loss',
# "no longer improving" being defined as "no better than 1e-2 less"
min_delta=1e-2,
# "no longer improving" being further defined as "for at least 2 epochs"
patience=2,
verbose=1)]
model.fit(x_train, y_train,
epochs=20,
batch_size=64,
callbacks=callbacks,
validation_split=0.2)
# +
# ModelCheckpoint callback
model = get_compiled_model()
check_callback = keras.callbacks.ModelCheckpoint(
filepath='./ch3/mymodel_{epoch}.h5',
save_best_only=True,
monitor='val_loss',
verbose=1
)
model.fit(x_train, y_train,
epochs=3,
batch_size=64,
callbacks=[check_callback],
validation_split=0.2)
# -
# **If you hit the error `ImportError: 'load_weights' requires h5py.`, you can `conda uninstall h5py` and then reinstall it with `pip install h5py`.**
# +
# Dynamically adjusting the learning rate
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate,
decay_steps=10000,
decay_rate=0.96,
staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
# +
# Using TensorBoard
import os
tensorboard_cbk = keras.callbacks.TensorBoard(log_dir=os.path.join(os.getcwd(), 'ch3'))
model.fit(x_train, y_train,
epochs=5,
batch_size=64,
callbacks=[tensorboard_cbk],
validation_split=0.2)
# -
# ### Writing your own callback
# +
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.losses = []
    def on_epoch_end(self, epoch, logs=None):
        self.losses.append(logs.get('loss'))
        print('\nloss:', self.losses[-1])
model = get_compiled_model()
callbacks = [LossHistory()]
model.fit(x_train, y_train,
epochs=3,
batch_size=64,
callbacks=callbacks,
validation_split=0.2)
# -
# ## Writing your own training and validation loops
# +
# Get the model.
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy()
# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
# hand-rolled training loop
for epoch in range(3):
print('epoch: ', epoch)
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        # open a GradientTape to compute the gradients
with tf.GradientTape() as tape:
logits = model(x_batch_train)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
if step % 200 == 0:
print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
print('Seen so far: %s samples' % ((step + 1) * 64))
# +
# train and validate
# Get model
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy()
# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy()
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()
# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)
# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)
# Iterate over epochs.
for epoch in range(3):
print('Start of epoch %d' % (epoch,))
# Iterate over the batches of the dataset.
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train)
loss_value = loss_fn(y_batch_train, logits)
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Update training metric.
train_acc_metric(y_batch_train, logits)
# Log every 200 batches.
if step % 200 == 0:
print('Training loss (for one batch) at step %s: %s' % (step, loss_value))
print('Seen so far: %s samples' % ((step + 1) * 64))
# Display metrics at the end of each epoch.
train_acc = train_acc_metric.result()
print('Training acc over epoch: %s' % (float(train_acc),))
# Reset training metrics at the end of each epoch
train_acc_metric.reset_states()
# Run a validation loop at the end of each epoch.
for x_batch_val, y_batch_val in val_dataset:
val_logits = model(x_batch_val)
# Update val metrics
val_acc_metric(y_batch_val, val_logits)
val_acc = val_acc_metric.result()
val_acc_metric.reset_states()
print('Validation acc: %s' % (float(val_acc),))
# +
# Add your own losses; model.losses only reflects the losses created during the most recent forward pass
class ActivityRegularizationLayer(layers.Layer):
def call(self, inputs):
self.add_loss(1e-2 * tf.reduce_sum(inputs))
return inputs
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)
logits = model(x_train[:64])
print(model.losses)
logits = model(x_train[:64])
logits = model(x_train[64: 128])
logits = model(x_train[128: 192])
print(model.losses)
# +
# include the model's extra losses in the gradient computation
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
for epoch in range(3):
print('Start of epoch %d' % (epoch,))
for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
logits = model(x_batch_train)
loss_value = loss_fn(y_batch_train, logits)
# Add extra losses created during this forward pass:
loss_value += sum(model.losses)
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Log every 200 batches.
if step % 200 == 0:
print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
print('Seen so far: %s samples' % ((step + 1) * 64))
# -
| 003_keras_training_and_evaluation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # CSC421 Assignment 2 - Part II First-Order Logic (5 points) #
# ### Author: <NAME>
#
# This notebook is based on the supporting material for topics covered in **Chapter 7 - Logical Agents** from the book *Artificial Intelligence: A Modern Approach.* You can consult and modify the code provided in logic.py and logic.ipynb for completing the assignment questions. This part does rely on the provided code.
#
# ```
# Birds can fly, unless they are penguins and ostriches, or if they happen
# to be dead, or have broken wings, or are confined to cages, or have their
# feet stuck in cement, or have undergone experiences so dreadful as to render
# them psychologically incapable of flight
#
# <NAME>
# ```
# # Introduction - First-Order Logic and knowledge engineering
#
# In this assignment we explore First-Order Logic (FOL) using the implementation of knowledge base and first-order inference provided by the textbook authors. We also look into matching a limited form of unification.
#
# **NOTE THAT THE GRADING IN THIS ASSIGNMENT IS DIFFERENT FOR GRADUATE STUDENTS AND THEY HAVE TO DO EXTRA WORK FOR FULL MARKS**
#
# # Question 2A (Minimum) (CSC421 - 1 point, CSC581C - 0 points)
#
# Consider the following propositional logic knowledge base.
#
# * It is not sunny this afternoon and it is colder than yesterday.
# * We will go swimming only if it is sunny.
# * If we do not go swimming then we will take a canoe trip.
# * If we take a canoe trip, then we will be home by sunset.
#
#
# Denote:
#
#
# * p = It is sunny this afternoon
# * q = it is colder than yesterday
# * r = We will go swimming
# * s= we will take a canoe trip
# * t= We will be home by sunset
#
# Express this knowledge base using propositional logic using the expression syntax used in logic.ipynb. You can incoprorate any code you need from logic.ipynb and logic.py. In order to access the associated code the easiest way is to place your notebook in the same folder as the aima_python source code. Using both model checking and theorem proving inference (you can use the implementations provided) show that this knowledge base entails the sentence if it is not sunny this afternoon then we will be home by sunset.
# +
# YOUR CODE GOES HERE
from logic import *
from utils import *
from notebook import psource
# p = it is sunny this afternoon
# q = it is colder than yesterday
# r = we will go swimming
# s = we will take a canoe trip
# t = we will be home by sunset
prop_kb = PropKB()
P, Q, R, S, T = expr('P, Q, R, S, T')
prop_kb.tell(~P & Q)
prop_kb.tell(R | '==>' | P)
prop_kb.tell(~R | '==>' | S)
prop_kb.tell(S | '==>' | T)
prop_kb.clauses
# Model checking: does the KB entail (~P ==> T)?
print("Model Checking.\nKB entails sentence: " + str(prop_kb.ask_if_true(expr('~P ==> T'))))
# Theorem proving by resolution
print("\nTheorem proving.\nKB entails sentence: " + str(pl_resolution(prop_kb, expr('~P ==> T'))))
# -
# # Question 2B (Minimum) (CSC421 - 1 point, CSC581C - 0 points)
#
# Encode the kindship domain described in section 8.3.2 of the textbook using FOL and FolKB implementation in logic.ipynb and encode as facts the relationships between the members of the Simpsons family from the popular TV show:
#
# https://en.wikipedia.org/wiki/Simpson_family
#
#
# Show how the following queries can be answered using the KB:
#
# * Who are the children of Homer ?
# * Who are the parents of Bart ?
# * Are Lisa and Homer siblings ?
# * Are Lisa and Bart siblings ?
#
# +
# YOUR CODE GOES HERE
# KB:
# homer marge bart lisa maggie
# parent, sibling, spouse, child
# Queries:
# who are the children of homer
# who are the parents of bart
# are lisa and homer siblings
# are lisa and bart siblings
clauses = []
clauses.append(expr("Husband(x, y) & Wife(y, x) ==> Married(x, y)")) # x=wife, y=husband
clauses.append(expr("Husband(Marge, Homer)"))
clauses.append(expr("Wife(Homer, Marge)"))
clauses.append(expr("Parent(Bart,Homer)"))
clauses.append(expr("Parent(Bart,Marge)"))
clauses.append(expr("Parent(Lisa,Homer)"))
clauses.append(expr("Parent(Lisa,Marge)"))
clauses.append(expr("Parent(Maggie,Homer)"))
clauses.append(expr("Parent(Maggie,Marge)"))
clauses.append(expr("Parent(x,y) ==> Child(y,x)"))
# clauses.append(expr("Mother(w,y) & Mother(x,y) & Father(w,z) & Father(x,z) ==> Siblings(w,x)"))
clauses.append(expr("Siblings(Bart,Lisa)"))
clauses.append(expr("Siblings(Bart,Maggie)"))
clauses.append(expr("Siblings(Lisa,Maggie)"))
clauses.append(expr("Siblings(Lisa,Bart)"))
clauses.append(expr("Siblings(Maggie,Bart)"))
clauses.append(expr("Siblings(Maggie,Lisa)"))
simpsons_kb = FolKB(clauses)
# who are the children of homer
answer = fol_fc_ask(simpsons_kb,expr('Child(Homer,x)'))
print("Children of Homer:")
print(list(answer))
# who are the parents of bart
answer = fol_fc_ask(simpsons_kb,expr('Parent(Bart,x)'))
print("\nParents of Bart:")
print(list(answer))
# are lisa and homer siblings
print("\nAre Lisa and Homer Siblings?")
# FolKB.ask returns the empty substitution {} for a provable ground query and False otherwise
print(simpsons_kb.ask(expr("Siblings(Lisa,Homer)")) == {})
# are lisa and bart siblings
print("\nAre Lisa and Bart Siblings?")
print(simpsons_kb.ask(expr("Siblings(Lisa,Bart)")) == {})
# -
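The FolKB answers above can be cross-checked with a plain-Python encoding of the same facts (a sketch; `parents` maps each child to the set of its parents, and the helper names are mine):

```python
parents = {"Bart": {"Homer", "Marge"},
           "Lisa": {"Homer", "Marge"},
           "Maggie": {"Homer", "Marge"}}

def children_of(person):
    return sorted(c for c, ps in parents.items() if person in ps)

def are_siblings(a, b):
    # full siblings: two distinct children sharing the same set of parents
    return a != b and a in parents and b in parents and parents[a] == parents[b]

print(children_of("Homer"))           # ['Bart', 'Lisa', 'Maggie']
print(sorted(parents["Bart"]))        # ['Homer', 'Marge']
print(are_siblings("Lisa", "Homer"))  # False
print(are_siblings("Lisa", "Bart"))   # True
```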
# # Question 2C (Expected) 1 point
#
#
# Encode the electronic circuit domain described in section 8.4.2 of your textbook using the FolKB implementation in logic.ipynb. Encode the general knowledge of the domain as well as the specific problem instance shown in Figure 8.6. Pose the same queries described in the book to the inference procedure.
# +
# YOUR CODE GOES HERE
# NOTE: this cell is not fully working; there was not enough time to finish it
# Queries
# what combinations of inputs would cause the first output of C1 to be 0 and the second output of C1 to be 1?
# what are the possible sets of values of all the terminals for the adder circuit?
clauses=[]
clauses.append(expr("Xor(X1)"))
clauses.append(expr("Xor(X2)"))
clauses.append(expr("And(A1)"))
clauses.append(expr("And(A2)"))
clauses.append(expr("Or(O1)"))
clauses.append(expr("Connected(Out(1,X1),In(1,X2))"))
clauses.append(expr("Connected(Out(1,X1),In(2,A2))"))
clauses.append(expr("Connected(Out(1,A2),In(1,O1))"))
clauses.append(expr("Connected(Out(1,A1),In(2,O1))"))
clauses.append(expr("Connected(Out(1,X2),Out(1,C1))"))
clauses.append(expr("Connected(Out(1,O1),Out(1,C1))"))
clauses.append(expr("Connected(Out(1,C1),In(1,X1))"))
clauses.append(expr("Connected(Out(1,C1),In(1,A1))"))
clauses.append(expr("Connected(Out(2,C1),In(2,X1))"))
clauses.append(expr("Connected(Out(2,C1),In(2,A1))"))
clauses.append(expr("Connected(Out(3,C1),In(2,X2))"))
clauses.append(expr("Connected(Out(3,C1),In(1,A2))"))
# XOR truth table
clauses.append(expr("(Xor(t) & Signal(In(1,t),0) & Signal(In(2,t),0)) ==> Signal(Out(1,t),0)"))
clauses.append(expr("(Xor(t) & Signal(In(1,t),1) & Signal(In(2,t),0)) ==> Signal(Out(1,t),1)"))
clauses.append(expr("(Xor(t) & Signal(In(1,t),0) & Signal(In(2,t),1)) ==> Signal(Out(1,t),1)"))
clauses.append(expr("(Xor(t) & Signal(In(1,t),1) & Signal(In(2,t),1)) ==> Signal(Out(1,t),0)"))
# AND truth table
clauses.append(expr("(And(t) & Signal(In(1,t),0) & Signal(In(2,t),0)) ==> Signal(Out(1,t),0)"))
clauses.append(expr("(And(t) & Signal(In(1,t),1) & Signal(In(2,t),0)) ==> Signal(Out(1,t),0)"))
clauses.append(expr("(And(t) & Signal(In(1,t),0) & Signal(In(2,t),1)) ==> Signal(Out(1,t),0)"))
clauses.append(expr("(And(t) & Signal(In(1,t),1) & Signal(In(2,t),1)) ==> Signal(Out(1,t),1)"))
# OR truth table
clauses.append(expr("(Or(t) & Signal(In(1,t),0) & Signal(In(2,t),0)) ==> Signal(Out(1,t),0)"))
clauses.append(expr("(Or(t) & Signal(In(1,t),1) & Signal(In(2,t),0)) ==> Signal(Out(1,t),1)"))
clauses.append(expr("(Or(t) & Signal(In(1,t),0) & Signal(In(2,t),1)) ==> Signal(Out(1,t),1)"))
clauses.append(expr("(Or(t) & Signal(In(1,t),1) & Signal(In(2,t),1)) ==> Signal(Out(1,t),1)"))
# if 2 terminals are connected, they have the same value
clauses.append(expr("Signal(t1,0) & Connected(t1,t2) ==> Signal(t2,0)"))
clauses.append(expr("Signal(t1,1) & Connected(t1,t2) ==> Signal(t2,1)"))
# connection goes both ways
clauses.append(expr("Connected(t1,t2) ==> Connected(t2,t1)"))
# print(clauses)
elec_kb = FolKB(clauses)
# NOTE: fol_fc_ask expects an atomic query, so this conjunctive query is one
# reason the cell does not work as-is
answer = fol_fc_ask(elec_kb,expr("Signal(In(1,C1),i1) & Signal(In(2,C1),i2) & Signal(In(3,C1),i3) & Signal(Out(1,C1),0) & Signal(Out(2,C1),1)"))
print(list(answer))
# answer = fol_fc_ask(simpsons_kb,expr('Parent(Bart,x)'))
# Queries
# Signal(In(1,C1),i1)&Signal(In(2,C1),i2)&Signal(In(3,C1),i3)&Signal(Out(1,C1),0)&Signal(Out(2,C1),1)
# result should be {i1/1,i2/1,i3/0},{i1/1,i2/0,i3/1},{i1/0,i2/1,i3/1}
# -
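Independent of the KB, the expected variable bindings can be verified by brute force over the truth table of the one-bit full adder that Figure 8.6 depicts (a plain-Python sketch; `adder` is my name for the composed gates):

```python
from itertools import product

def adder(a, b, cin):
    # one-bit full adder from XOR/AND/OR gates:
    # sum = a XOR b XOR cin, carry = (a AND b) OR (cin AND (a XOR b))
    s1 = a ^ b
    total = s1 ^ cin
    carry = (a & b) | (cin & s1)
    return total, carry

# input combinations with first output (sum) 0 and second output (carry) 1
answers = [(a, b, c) for a, b, c in product([0, 1], repeat=3)
           if adder(a, b, c) == (0, 1)]
print(answers)  # [(0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

This matches the substitutions {i1/1, i2/1, i3/0}, {i1/1, i2/0, i3/1}, {i1/0, i2/1, i3/1} noted in the comments above.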
# # QUESTION 2D (EXPECTED) 1 point
#
# In this question we explore Prolog, a programming language based on logic. We won't go into details but just want to give you a flavor of the syntax and how it connects to what we have learned. For this question you
# will NOT be using the notebook, so your answer should just be the source code. We will use http://tau-prolog.org/, a Prolog implementation that runs in a browser. When you access the webpage there is a text window labeled "Try it" for entering your knowledge base, and under it there is a text entry field for entering your query.
#
# For example, type the following in the "Try it" window and press Enter:
#
# ```Prolog
# likes(sam, salad).
# likes(dean, pie).
# likes(sam, apples).
# likes(dean, whiskey).
# ```
#
# Then enter the query:
# ```Prolog
# likes(sam,X).
# ```
# Pressing Enter after the query yields one answer at a time: X = salad. and X = apples. Note the periods at the end of each statement.
#
# Encode the kinship domain from question 2B in Prolog and answer the queries from 2B. Notice that in Prolog the constants start with lower case letters and the variables start with upper case letters.
#
# Provide your code for the KB and queries using markup. See the syntax for Prolog of this cell by double clicking for editing.
#
#
# \# YOUR CODE GOES HERE
#
# **Code for the KB:**
# ```
# parent(bart,homer).
# parent(bart,marge).
# parent(lisa,homer).
# parent(lisa,marge).
# parent(maggie,homer).
# parent(maggie,marge).
# child(homer,bart).
# child(homer,lisa).
# child(homer,maggie).
# child(marge,bart).
# child(marge,lisa).
# child(marge,maggie).
# siblings(bart,lisa).
# siblings(bart,maggie).
# siblings(lisa,bart).
# siblings(lisa,maggie).
# siblings(maggie,bart).
# siblings(maggie,lisa).
# ```
#
# **Queries:**
# **Who are the children of Homer?**
# ```
# child(homer,X).
# ```
# >**Output:**
# ```
# X = bart ;
# X = lisa ;
# X = maggie.
# ```
#
# **Who are the parents of Bart?**
# ```
# parent(bart,X).
# ```
# >**Output:**
# ```
# X = homer ;
# X = marge.
# ```
#
# **Are Lisa and Homer siblings?**
# ```
# siblings(lisa,homer).
# ```
# >**Output:**
# ```
# false.
# ```
#
# **Are Lisa and Bart Siblings?**
# ```
# siblings(lisa,bart).
# ```
# >**Output:**
# ```
# true.
# ```
# # QUESTION 2E (ADVANCED) 1 point
#
# Implement exercise 8.26 using the code in logic.ipynb as well as the KB you wrote for the circuit domain.
#
#
#
# YOUR CODE GOES HERE
# # QUESTION 2F (ADVANCED) (CSC421 - 0 points, CSC581C - 2 points)
#
#
# This question explores the automatic constructions of a first-order logic knowledge base from a web resource and is more open ended than the other ones. The website https://www.songfacts.com/ contains a large variety of facts about music. Check the https://www.songfacts.com/categories link for some categories. Using selenium Python bindings https://selenium-python.readthedocs.io/ access the webpage and scrape at least three categories. Your code should scrape the information from the pages and convert it into relationships and facts in first-order logic using the syntax of expressions in logic.ipynb. Once you build your knowledge-base then write 4 non-trivial queries that show-case the expressiveness of FOL. These queries should not be possible to be answered easily using the web interface i.e they should have some logical connectives, more than one possible answer etc.
# The translation of the song facts from the web page to FOL should NOT be done by hand but using the web scraping tool you develop. You can use multiple cells in your answer.
#
#
#
# +
# YOUR CODE GOES HERE
# -
| csc421_asn2_part2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:anaconda3]
# language: python
# name: conda-env-anaconda3-py
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
import random
from sklearn import linear_model, datasets
# -
def shrink(t, q):
    """Linear sink vector field: both components decay toward the origin."""
    x, y = q[0], q[1]
    return np.array([-x, -y])
# +
num_trials = 10 # number of initial conditions per axis; the total number of trials is num_trials*num_trials
min_x = -10
max_x = 10
min_y = -10
max_y = 10
x_vals = np.linspace(min_x, max_x, num_trials)
y_vals = np.linspace(min_y, max_y, num_trials)
dt = .1
t_eval = np.arange(0,5,dt)
q = np.zeros((len(t_eval), num_trials, num_trials, 2))
for i, x in enumerate(x_vals):
    for j, y in enumerate(y_vals):
        sol = solve_ivp(shrink, (0,5), np.array([x,y]), vectorized = True, t_eval = t_eval)
        q[:,i,j,:] = sol.y.T
        plt.plot(sol.y[0,:], sol.y[1,:])
# %matplotlib inline
#plt.gca().set_aspect('equal', adjustable='box')
traj_list = [np.flip(q[:,i,j,:], axis = 0) for i in range(num_trials) for j in range(num_trials)]
# +
# Calculate the fractional dimension at each time step (or something like that anyway)
# Method Katie Suggests
# TODO make this a function
# just calculating the length of the shortest trajectory
min_t = min([x.shape[0] for x in traj_list])
NR_list = []
r_delta = .01
delta = .01
for i in range(min_t):
    NR_list.append([])
    r_min = delta**2 # squared radius of the ball around our point; delta from above, need to start somewhere
    r_max = 3
    num_r = 100
    N = 0
    for r in np.linspace(r_min, r_max, num=num_r):
        N = 0
        points = [traj[i,:] for traj in traj_list]
        random.shuffle(points) # shuffles points in place
        while True:
            # pop raises IndexError once the list is exhausted, which
            # terminates the loop
            try:
                center = points.pop(0) # pop also removes the point from our list
                points[:] = [x for x in points if sum((center - x)**2) > r]
                N += 1
            except IndexError:
                NR_list[i].append((N,r))
                break
# +
# %matplotlib
a = np.array(NR_list[30])
plt.plot(a[:,1], a[:,0],'x-')
plt.figure()
plt.plot(np.log(a[:,1]), np.log(a[:,0]),'-')
#plt.figure()
X = np.log(a[:,1]).reshape(-1,1)
y = np.log(a[:,0]).reshape(-1,1)
# adapted from https://scikit-learn.org/stable/auto_examples/linear_model/plot_ransac.html
lr = linear_model.LinearRegression()
lr.fit(X, y)
ransac = linear_model.RANSACRegressor()
ransac.fit(X, y)
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
# Predict data of estimated models
line_X = np.arange(X.min(), X.max())[:, np.newaxis]
#line_y = lr.predict(line_X)
line_y_ransac = ransac.predict(line_X)
plt.scatter(X[inlier_mask], y[inlier_mask], color='yellowgreen', marker='.',
            label='Inliers')
plt.scatter(X[outlier_mask], y[outlier_mask], color='gold', marker='.',
            label='Outliers')
plt.plot(line_X, line_y_ransac)
# -
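The slope being fit above (log N vs log r) is the dimension estimate; a dependency-free sketch of the same least-squares slope for (N, r) pairs (function name is mine, and for a true covering number one would expect a negative slope whose magnitude is the dimension):

```python
import math

def loglog_slope(pairs):
    """Least-squares slope of log N vs log r for (N, r) count pairs."""
    xs = [math.log(r) for _, r in pairs]
    ys = [math.log(n) for n, _ in pairs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# points lying exactly on N proportional to r**2 give slope 2
print(loglog_slope([(1, 1.0), (4, 2.0), (16, 4.0)]))
```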
| misc/dimension_test.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
from pyvista import set_plot_theme
set_plot_theme('document')
# Beam Shape {#ref_light_beam_shape_example}
# ==========
#
# The default directional lights are infinitely distant point sources, for
# which the only geometric customization option is the choice of beam
# direction defined by the light's position and focal point. Positional
# lights, however, have more options for beam customization.
#
# Consider two hemispheres:
#
# +
# sphinx_gallery_thumbnail_number = 5
import pyvista as pv
plotter = pv.Plotter()
hemi = pv.Sphere().clip()
hemi.translate((-1, 0, 0))
plotter.add_mesh(hemi, color='cyan', smooth_shading=True)
hemi = hemi.copy()
hemi.rotate_z(180)
plotter.add_mesh(hemi, color='cyan', smooth_shading=True)
plotter.show()
# -
# We can see that the default lighting does a very good job of
# articulating the shape of the hemispheres.
#
# Let's shine a directional light on them, positioned between the
# hemispheres and oriented along their centers:
#
# +
plotter = pv.Plotter(lighting='none')
hemi = pv.Sphere().clip()
hemi.translate((-1, 0, 0))
plotter.add_mesh(hemi, color='cyan', smooth_shading=True)
hemi = hemi.copy()
hemi.rotate_z(180)
plotter.add_mesh(hemi, color='cyan', smooth_shading=True)
light = pv.Light(position=(0, 0, 0), focal_point=(-1, 0, 0))
plotter.add_light(light)
plotter.show()
# -
# Both hemispheres have their surface lit on the side that faces the
# light. This is consistent with a point source positioned at infinity,
# directed from the light's nominal position toward the focal point.
#
# Now let's change the light to a positional light (but not a spotlight):
#
# +
plotter = pv.Plotter(lighting='none')
hemi = pv.Sphere().clip()
hemi.translate((-1, 0, 0))
plotter.add_mesh(hemi, color='cyan', smooth_shading=True)
hemi = hemi.copy()
hemi.rotate_z(180)
plotter.add_mesh(hemi, color='cyan', smooth_shading=True)
light = pv.Light(position=(0, 0, 0), focal_point=(-1, 0, 0))
light.positional = True
light.cone_angle = 90
plotter.add_light(light)
plotter.show()
# -
# Now the inner surface of both hemispheres is lit. A positional light
# with a cone angle of 90 degrees (or more) acts as a point source located
# at the light's nominal position. It could still display attenuation; see
# the `ref_attenuation_example` example.
#
# Switching to a spotlight (i.e. a positional light with a cone angle less
# than 90 degrees) will enable beam shaping using the `exponent` property.
# Let's put our hemispheres side by side for this, and put a light in the
# center of each: one spotlight, one merely positional.
#
# +
plotter = pv.Plotter(lighting='none')
hemi = pv.Sphere().clip()
plotter.add_mesh(hemi, color='cyan', smooth_shading=True)
offset = 1.5
hemi = hemi.copy()
hemi.translate((0, offset, 0))
plotter.add_mesh(hemi, color='cyan', smooth_shading=True)
# non-spot positional light in the center of the first hemisphere
light = pv.Light(position=(0, 0, 0), focal_point=(-1, 0, 0))
light.positional = True
light.cone_angle = 90
# add attenuation to reduce cross-talk between the lights
light.attenuation_values = (0, 0, 2)
plotter.add_light(light)
# spotlight in the center of the second hemisphere
light = pv.Light(position=(0, offset, 0), focal_point=(-1, offset, 0))
light.positional = True
light.cone_angle = 89.9
# add attenuation to reduce cross-talk between the lights
light.attenuation_values = (0, 0, 2)
plotter.add_light(light)
plotter.show()
# -
# Even though the two lights only differ by a fraction of a degree in cone
# angle, the beam shaping effect enabled for spotlights causes a marked
# difference in the result.
#
# Once we have a spotlight we can change its `exponent` property to make
# the beam shape sharper or broader. Three spotlights with varying
# sharpness:
#
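Assuming a cosine-power falloff model for spotlight sharpening (intensity proportional to cos(θ)^exponent — an assumption about the rendering backend, not something demonstrated above), the effect of different exponents can be tabulated directly:

```python
import math

def relative_intensity(theta_deg, exponent):
    """Approximate spotlight falloff: cos(theta) raised to the exponent."""
    return math.cos(math.radians(theta_deg)) ** exponent

# at 45 degrees off-axis, a larger exponent means a dimmer, more focused beam
for e in (0.3, 1, 5):
    print(e, round(relative_intensity(45, e), 3))
```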
# +
plotter = pv.Plotter(lighting='none')
hemi_template = pv.Sphere().clip()
centers = [(0, 0, 0), (0, 1.5, 0), (0, 1.5*0.5, 1.5*3**0.5/2)]
exponents = [1, 0.3, 5]
for center, exponent in zip(centers, exponents):
hemi = hemi_template.copy()
hemi.translate(center)
plotter.add_mesh(hemi, color='cyan', smooth_shading=True)
# spotlight in the center of the hemisphere, shining into it
focal_point = center[0] - 1, center[1], center[2]
light = pv.Light(position=center, focal_point=focal_point)
light.positional = True
light.cone_angle = 89.9
light.exponent = exponent
# add attenuation to reduce cross-talk between the lights
light.attenuation_values = (0, 0, 2)
plotter.add_light(light)
plotter.show()
# -
# The spotlight with exponent 0.3 is practically uniform, and the one with
# exponent 5 is visibly focused along the axis of the light.
#
| examples/04-lights/beam_shape.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
from sklearn.model_selection import cross_val_score
from collections import Counter
#from alg5 import model
# -
# explicitly require this experimental feature
from sklearn.experimental import enable_hist_gradient_boosting # noqa
# now you can import normally from ensemble
from sklearn.ensemble import HistGradientBoostingClassifier
model = HistGradientBoostingClassifier(learning_rate=0.1, max_iter=75)
from xgboost import XGBClassifier
model = XGBClassifier(n_estimators=200, max_depth=3, n_jobs=4, eval_metric='mlogloss')  # NOTE: this overrides the HistGradientBoostingClassifier above
# +
DATA='ugrin2020-vehiculo-usado-multiclase/'
TRAIN=DATA+'train.csv'
TEST=DATA+'test.csv'
PREPROCESSED_DATA='preprocessed_data/'
RESULTS='results/'
# -
test = pd.read_csv(TEST)
test_ids = test['id']
# Load the already-preprocessed data
X=np.load(PREPROCESSED_DATA+'binScale.npz')
train = X['arr_0']
label = X['arr_1']
test = X['arr_2']
X.close()
train.shape
scores=cross_val_score(model, train, label)
print(scores)
print(np.mean(scores))
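`cross_val_score` above averages one score per fold; the fold splitting itself can be sketched without scikit-learn (a contiguous k-fold split; the function name is mine):

```python
def kfold_indices(n_samples, k):
    """Yield (train, test) index lists for k contiguous folds."""
    fold = n_samples // k
    idx = list(range(n_samples))
    for i in range(k):
        start = i * fold
        stop = (i + 1) * fold if i < k - 1 else n_samples
        test = idx[start:stop]
        train = idx[:start] + idx[stop:]
        yield train, test

splits = list(kfold_indices(10, 5))
# 5 folds; the first holds out indices [0, 1] and trains on the rest
print(len(splits), splits[0])
```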
model
# ## Generate the Kaggle submission file
model.fit(train,label)
# Now we predict
predict = model.predict(test)
predict = list(map(int,predict))
# Generate the submission file
df_result = pd.DataFrame({'id': test_ids, 'Precio_cat': predict})
df_result.to_csv(RESULTS+"try30.csv", index=False)
| practica3/main-load.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %load_ext autoreload
# %autoreload 2
from preprocess import *
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.utils import to_categorical
# -
# Second dimension of the feature is dim2
feature_dim_2 = 11
# Save data to array file first
save_data_to_array(max_len=feature_dim_2)
# +
# # Loading train set and test set
X_train, X_test, y_train, y_test = get_train_test()
# # Feature dimension
feature_dim_1 = 20
channel = 1
epochs = 15
batch_size = 100
verbose = 1
num_classes = 8
# Reshaping to perform 2D convolution
X_train = X_train.reshape(X_train.shape[0], feature_dim_1, feature_dim_2, channel)
X_test = X_test.reshape(X_test.shape[0], feature_dim_1, feature_dim_2, channel)
y_train_hot = to_categorical(y_train)
y_test_hot = to_categorical(y_test)
# +
def get_model():
    model = Sequential()
    model.add(Conv2D(32, kernel_size=(2, 2), activation='relu', input_shape=(feature_dim_1, feature_dim_2, channel)))
    model.add(Conv2D(48, kernel_size=(2, 2), activation='relu'))
    model.add(Conv2D(120, kernel_size=(2, 2), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.25))
    model.add(Dense(64, activation='relu'))
    model.add(Dropout(0.4))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'])
    return model
# Predicts one sample
def predict(filepath, model):
    sample = wav2mfcc(filepath)
    sample_reshaped = sample.reshape(1, feature_dim_1, feature_dim_2, channel)
    return get_labels()[0][
        np.argmax(model.predict(sample_reshaped))
    ]
# -
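`to_categorical` used above one-hot encodes the integer labels; a minimal pure-Python equivalent (a sketch, not the Keras implementation):

```python
def one_hot(labels, num_classes=None):
    """One-hot encode a list of integer class labels."""
    if num_classes is None:
        num_classes = max(labels) + 1
    return [[1 if i == lab else 0 for i in range(num_classes)]
            for lab in labels]

print(one_hot([0, 2, 1], 3))  # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```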
# # Building The Model Then Training it
model = get_model()
Model1 = model.fit(
    X_train, y_train_hot,
    batch_size=batch_size,
    epochs=epochs,
    verbose=verbose,
    validation_data=(X_test, y_test_hot)
)
model.summary()
import matplotlib.pyplot as plt
# %matplotlib inline
plt.plot(Model1.history['loss'])
plt.plot(Model1.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','test'], loc='upper left')
plt.show()
# ## Prediction
print(predict('mixes/droite/droite_mathilde_01_09.wav', model=model))
print(predict('data/test/test-01.wav', model=model)) #bonjour
print(predict('data/test/test-02.wav', model=model)) #au revoir
print(predict('data/test/test-03.wav', model=model)) #a gauche
print(predict('data/test/test-04.wav', model=model)) #a droite
print(predict('data/test/steven_test-01.wav', model=model)) #oui
print(predict('data/test/steven_test-02.wav', model=model)) #oui
model.save("model.h5")
| Modele.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# importing the MDPtoolbox library
library(MDPtoolbox)
# +
# defining the transition probability matrices
# Up
up=matrix(c( 0.9, 0.1, 0, 0,
0.2, 0.7, 0.1, 0,
0, 0, 0.1, 0.9,
0, 0, 0, 1),
nrow=4,ncol=4,byrow=TRUE)
# Down
down=matrix(c(0.1, 0, 0, 0.9,
0, 0.8, 0.2, 0,
0, 0.2, 0.8, 0,
0, 0, 0.8, 0.2),
nrow=4,ncol=4,byrow=TRUE)
# Left
left=matrix(c(1, 0, 0, 0,
0.9, 0.1, 0, 0,
0, 0.8, 0.2, 0,
0, 0, 0, 1),
nrow=4,ncol=4,byrow=TRUE)
# Right
right=matrix(c(0.1, 0.9, 0, 0,
0.1, 0.2, 0.7, 0,
0, 0, 0.9, 0.1,
0, 0, 0, 1),
nrow=4,ncol=4,byrow=TRUE)
# -
# putting all the actions into one single list
actions = list(up=up, down=down, left=left, right=right)
actions
# defining the rewards and penalties
rewards=matrix(c( -1, -1, -1, -1,
-1, -1, -1, -1,
-1, -1, -1, -1,
100, 100, 100, 100),
nrow=4,ncol=4,byrow=TRUE)
rewards
# solving the problem
solved_MDP=mdp_policy_iteration(P=actions, R=rewards, discount = 0.2)
solved_MDP
# looking at the policy given by policy iteration algorithm
solved_MDP$policy
names(actions)[solved_MDP$policy]
# looking at the values at each step
solved_MDP$V
| Chapter09/Recipe 1 - Model based reinforcement learning using MDPtoolbox.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Tracking individual bands (aka modes) in PWEM.
# This is a problem of sorting the eigenvectors that come out of the eigensolution.
# +
import numpy as np
import sys
sys.path.append("D:\\RCWA\\")
import numpy as np
import matplotlib.pyplot as plt
from convolution_matrices import convmat2D as cm
from PWEM_functions import K_matrix as km
from PWEM_functions import PWEM_eigen_problem as eg
'''
solve PWEM for a simple circular structure in a square unit cell
and generate band structure
compare with CEM EMLab; also Johannopoulos book on Photonics
'''
## lattice and material parameters
a = 1;
radius = 0.2*a;
e_r = 8.9;
c0 = 3e8;
#generate irreducible BZ sample
T1 = 2*np.pi/a;
T2 = 2*np.pi/a;
# determine number of orders to use
P = 5;
Q = 5;
PQ = (2*P+1)*(2*Q+1)
# ============== build high resolution circle ==================
Nx = 512; Ny = 512;
A = np.ones((Nx,Ny));
ci = int(Nx/2); cj= int(Ny/2);
cr = (radius/a)*Nx;
I,J=np.meshgrid(np.arange(A.shape[0]),np.arange(A.shape[1]));
dist = np.sqrt((I-ci)**2 + (J-cj)**2);
A[np.where(dist<cr)] = e_r;
#visualize structure
plt.imshow(A);
plt.show()
## =============== Convolution Matrices ==============
E_r = cm.convmat2D(A, P,Q)
print(E_r.shape)
print(type(E_r))
plt.figure();
plt.imshow(abs(E_r), cmap = 'jet');
plt.colorbar()
plt.show()
## =============== K Matrices =========================
beta_x = beta_y = 0;
plt.figure();
## check K-matrices for normal incidence
Kx, Ky = km.K_matrix_cubic_2D(0,0, a, a, P, Q);
np.set_printoptions(precision = 3)
print(Kx.todense())
print(Ky.todense())
band_cutoff = PQ; #number of bands to plot
## ======================== run band structure calc ==========================##
kx_scan = np.linspace(-np.pi, np.pi, 400)/a;
kx_mat = np.repeat(np.expand_dims(kx_scan, axis = 1), PQ,axis = 1)
TE_eig_store = []
eigen_vectors_store = [];
for beta_x in kx_scan:
    beta_y = 0;  # set beta_y = beta_x to scan along the BZ diagonal instead
    Kx, Ky = km.K_matrix_cubic_2D(beta_x, beta_y, a, a, P, Q);
    eigenvalues, eigenvectors, A_matrix = eg.PWEM2D_TE(Kx, Ky, E_r); #we solve for E field components... Ez specifically
    #eigenvalues...match with the benchmark...but don't match with
    TE_eig_store.append(np.sqrt(np.real(eigenvalues)));
    eigen_vectors_store.append(eigenvectors)
    #plt.plot(beta_x*np.ones((PQ,)), np.sort(np.sqrt(eigenvalues)), '.')
TE_eig_store = np.array(TE_eig_store);
eigen_vectors_store = np.array(eigen_vectors_store);
# question: which eigenvalues are relevant for plotting the band structure?
# -
plt.figure()
plt.plot(kx_mat[:,0:band_cutoff], TE_eig_store[:,0:band_cutoff]/(2*np.pi),'.g', markersize = 0.6);
plt.title('TE polarization')
plt.ylim([0.4,1.5])
plt.show();
print(TE_eig_store.shape)
# +
## Reconstructing the mode
# print(eigen_vectors_store.shape)
# print(eigen_vectors_store[:,2,:].shape)
## 1000 sampled points...each one is a 121x121 matrix.
#121 x121 is the discretization in fourier space...so we have to reconstruct the fourier series.
# at each frequency, we got 121 modes, each consisting of 11x11 fourier orders
## test at kx = 0, ky = 0
# beta_x = 0; beta_y = 0;
# Kx, Ky = km.K_matrix_cubic_2D(beta_x, beta_y, a, a, P, Q);
# eigenvalues, eigenvectors, A_matrix = eg.PWEM2D_TE(Kx, Ky, E_r); #we solve for E field components... Ez specifically
# eig1_coeffs = np.reshape(eigenvectors[:,49],(2*P+1, 2*Q+1)); #first index is mode...
#print(np.round(eigenvalues, 3))
k_index = 50;
plt.figure(figsize = (9,9))
c = 1;
for freq_index in range(4, 4+9):
    print(freq_index)
    ax = plt.subplot(3, 3, c)
    eigenvals = TE_eig_store[k_index, :];
    #get sorted indices
    sorted_inds = np.argsort(eigenvals);
    eigenvals = eigenvals[sorted_inds];
    eig1_coeffs = eigen_vectors_store[k_index, :, :];
    eig1_coeffs = eig1_coeffs[:, sorted_inds]
    eig1_coeffs = np.reshape(eig1_coeffs[:, freq_index], (2*P+1, 2*Q+1)); #first index is mode...
    x = np.linspace(0, a, 200);
    y = np.linspace(0, a, 200)
    X, Y = np.meshgrid(x, y);
    Kx_scan = beta_x - 2*np.pi*np.arange(-int(P), int(P)+1)/a;
    Ky_scan = beta_y - 2*np.pi*np.arange(-int(Q), int(Q)+1)/a;
    reconstruction = 0;
    for i in range(-P, P+1):
        for j in range(-Q, Q+1):
            #we need the P offset since the (0,0) order sits at the center
            reconstruction += eig1_coeffs[P+i, Q+j]*np.exp(-1j*(Kx_scan[P+i]*X + Ky_scan[Q+j]*Y))
    c = c+1;
    ax.pcolor(X, Y, np.real(reconstruction), cmap='bwr');
#plt.axes().set_aspect('equal')
plt.savefig('sample_mode_reconstructions_PWEM_photonic_circle.png', dpi = 300);
plt.show()
# -
def reconstruct_real_space_2D(x, y, Kx_scan, Ky_scan, orders):
    '''
    orders should be a (2P+1)x(2Q+1) matrix (so reshape the eigenvector of coeffs)
    '''
    X, Y = np.meshgrid(x, y);
    P = orders.shape[0]//2;
    Q = orders.shape[1]//2;
    reconstruction = 0;
    for i in range(-P, P+1):
        for j in range(-Q, Q+1):
            #we need the P offset since the (0,0) order sits at the center
            reconstruction += orders[P+i, Q+j]*np.exp(-1j*(Kx_scan[P+i]*X + Ky_scan[Q+j]*Y))
    return reconstruction
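A one-dimensional analogue of the same plane-wave reconstruction (hypothetical coefficients, pure Python) shows how the order loop rebuilds a real-space signal from Fourier coefficients:

```python
import cmath
import math

def reconstruct_1d(x_points, k_values, coeffs):
    """Sum of plane waves: f(x) = sum_m c_m * exp(-i k_m x)."""
    return [sum(c * cmath.exp(-1j * k * x) for k, c in zip(k_values, coeffs))
            for x in x_points]

# a single order at k = 2*pi with unit coefficient reproduces exp(-2*pi*i*x):
# f(0) = 1 and f(0.25) = exp(-i*pi/2) = -i
vals = reconstruct_1d([0.0, 0.25], [2 * math.pi], [1.0])
print(vals)
```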
# ## Mode tracking != band tracking
# The problem is that a given mode will not look the same as we modulate $k_x$ or $k_y$, so we can't simply cluster modes (nor can we do the parity sorting we used for 1D systems).
#
# ## Alternative: Mode sorting
# What if we sort the eigenvalues and the subsequent eigenvector indices?
# +
## here we attempt to sort eigenvalues in TE_eig_store into individual bands
# sort every column of TE_eigs_store
TE_sorted = np.sort(TE_eig_store, axis = 1);
for i in range(TE_sorted.shape[1]):
    plt.plot(TE_sorted[:,i]/(2*np.pi), '.');
plt.ylim([0.95,1.4])
plt.show();
# -
| PWEM_examples/Mode Tracking in PWEM2D.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import networkx as nx
import matplotlib.pyplot as plt
from itertools import product
import numpy as np
# # Example: drawing a family tree
# +
family = {0: '<NAME>',
1: '<NAME>',
2: '<NAME>',
3: '<NAME>'}
relations = [(0,2),(1,2),(3,1)]
position = {3:[0,0], 1:[1,0], 0:[1,1], 2:[2,1]}
g = nx.Graph()
g.add_nodes_from(list(family.keys()))
g.add_edges_from(relations)
nx.draw(g, pos=position)
for i, name in family.items():
    plt.text(position[i][0], position[i][1]+0.1, name)
# -
# The tree contains a mother-in-law (G. I. Smirnova), the Ivanov spouses, and their son Pyotr. An edge denotes a parent-child relationship.
# This graph can be represented by an adjacency matrix containing 1 for a parent-child relationship and 0 otherwise. The rows and columns are the vertices of the graph.
m = np.array([[0,0,1,0],[0,0,1,0],[0,0,0,0],[0,1,0,0]])
m
# Tasks:
#
# 1. Enrich the graph (increase the number of nodes to 15)
#
# 2. Write an algorithm that finds
# - all "grandparent"-"grandchild" pairs
# - all great-grandfathers and great-grandmothers
# - all "son-in-law"-"mother-in-law" pairs
# - childless couples
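A sketch of the first sub-task using the parent-to-child adjacency matrix `m` defined above: squaring the matrix counts length-2 paths, so a nonzero entry marks a grandparent-grandchild pair (pure Python, no numpy, so it stands alone):

```python
# parent -> child adjacency matrix from the cell above
m = [[0, 0, 1, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 0],
     [0, 1, 0, 0]]
n = len(m)

# m2[i][j] > 0 means a length-2 path i -> k -> j exists,
# i.e. i is a grandparent of j
m2 = [[sum(m[i][k] * m[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
grandparents = [(i, j) for i in range(n) for j in range(n) if m2[i][j]]
print(grandparents)  # [(3, 2)] -- the mother-in-law is Pyotr's grandmother
```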
# # Assorted networkx examples
g = nx.Graph()
g.add_nodes_from([(1, {'color':'red'}),(2),(3),(4)])
g.add_edges_from([(1,2),(3,4)])
nx.draw(g)
g.nodes()
g.number_of_edges()
g.number_of_nodes()
g.edges
g.nodes
g.adj
g.degree
g1 = nx.DiGraph()
g1.add_edge(1,2)
g1.add_edge(2,3)
nx.draw(g1)
g = nx.Graph()
g.add_nodes_from([1,2,3])
nx.draw(g)
pos = nx.spring_layout(g, seed=200)
pos
nx.draw(g, pos={1:(1,1), 2:(2,2), 3:(3,3)})
# +
g1 = nx.Graph()
places = dict()
input_layer = range(5)
for i, item_ in enumerate(input_layer):
    places[item_] = [0,i]
hidden_layer = range(5,10)
for i, item_ in enumerate(hidden_layer):
    places[item_] = [1,i]
output_layer = range(10,12)
for i, item_ in enumerate(output_layer):
    places[item_] = [2,i]
for edge_ in product(input_layer, hidden_layer):
    g1.add_edge(*edge_)
for edge_ in product(hidden_layer, output_layer):
    g1.add_edge(*edge_)
# -
nx.draw(g1, pos=places)
g2 = nx.MultiDiGraph()
g2.add_edges_from([(1,2),(2,1)])
nx.draw(g2)
g3 = nx.barbell_graph(5,2)
nx.draw(g3)
nx.draw(nx.lollipop_graph(3,3))
red = nx.random_lobster(10, 0.9, 0.9)
nx.draw(red)
| NetworkX.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py35]
# language: python
# name: conda-env-py35-py
# ---
# ### Fun with Strings
#
# Simple string operations.
#
# See [The LeetCode example problem](https://leetcode.com/problems/trim-a-binary-search-tree/description/)
# +
debugging = False
debugging = True
logging = True
def dprint(f, *args):
if debugging:
print((' DBG:' + f).format(*args))
def log(f, *args):
if logging:
print((f).format(*args))
def logError(f, *args):
if logging:
print(('*** ERROR:' + f).format(*args))
def className(instance):
return type(instance).__name__
# -
def matchCount(sourceChars, targetChars):
""" Create a dict containing counts of source chars in target string. """
matches = {sc.lower():0 for sc in sourceChars}
# Get offsets of only the first instances of the source characters.
locations = {sc.lower(): len(sourceChars)-off for off, sc in enumerate(reversed(sourceChars),start=1)}
print(locations)
for dc in targetChars:
dc = dc.lower()
if dc in matches: matches[dc] = matches[dc] + 1
return matches
s1 = "xzThis is a test of the emergency broadcast system."
s1 = "ABC"
# 0123456789 123456789 123456789 123456789 123456789
# 1 2 3 4 5
s2 = "The quick brown fox jumped over the lazy dog. anda one anda two anda bbbb"
matchCount(s1, s2)
[x for x in reversed("This is it")]
class TestCase(object):
def __init__(self, name, method, inputs, expected):
self.name = name
self.method = method
self.inputs = inputs
self.expected = expected
def run(self):
return self.method(*self.inputs)
# +
import time
from datetime import timedelta
class TestSet(object):
def __init__(self, cases):
self.cases = cases
def run_tests(self):
count = 0
errors = 0
total_time = 0
for case in self.cases:
count += 1
start_time = time.time()
result = case.run()
elapsed_time = time.time() - start_time
total_time += elapsed_time
if callable(case.expected):
if not case.expected(result):
errors += 1
logError("Test {0} failed. Returned {1}", case.name, result)
elif result != case.expected:
errors += 1
logError('Test {0} failed. Returned "{1}", expected "{2}"', case.name, result, case.expected)
if errors:
logError("Tests passed: {0}; Failures: {1}", count-errors, errors)
else:
log("All {0} tests passed.", count)
log("Elapsed test time: {0}", timedelta(seconds=total_time))
# +
simplecompare = lambda s1, s2: matchCount(s1, s2)
c1 = TestCase('one letter 10 times',
simplecompare,
["x", "aa xxaaxbbxccx xxxxx"],
{ 'x' : 10 })
c2 = TestCase('ABC',
simplecompare,
["ABC", "The quick brown fox jumped over the lazy dog. anda one anda two anda bbbb"],
lambda r : r['a'] == 7 and r['b'] == 5)
tester = TestSet([c1, c2])
# -
tester.run_tests()
class Trie(object):
def __init__(node, value=None):
node.value = value # Note, root node has None value.
node.children = {}
node.wordEnd = False # true at word end nodes.
def insert(node, word):
for letter in word:
if letter in node.children:
node = node.children[letter]
else:
newn = Trie(letter)
node.children[letter] = newn
node = newn
node.wordEnd = True
def find(node, word):
for letter in word:
if letter in node.children:
node = node.children[letter]
else:
return False
return node.wordEnd
    def __str__(node):
        return str([x for x in node.children])
x = Trie()
x.insert('Hello')
x.insert("Mommy")
x.insert('M')
x.insert('Glorm')
x.find('Mommy')
x.children
[(z, x.value) for z in x.children]
a = "This is a test."
b = reversed(a)
a[::-1]
''.join(reversed(a))
| StringFun.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.9 64-bit (''ug'': pyenv)'
# name: python3
# ---
# !ls ../../../../data/dev-clean
# +
import os
import random
import shutil
import sys
libri_path = "../../../LibriTTS" # Path to the LibriTTS dataset.
dataset_path = "../../../libritts" # Change to your paths. This is the output of the re-formatted dataset.
subset = "train-clean-100"
# +
with open(os.path.join(libri_path, "SPEAKERS.txt")) as f:
data = f.readlines()
dataset_info = {}
max_speakers = 20 # Max number of speakers to train on
min_len = 3 # Min length of speaker narration time (minutes)
max_file_len = 11 # Max audio file length (seconds)
min_file_len = 2 # Min audio file length (seconds)
# -
possible_dataset = [i.split("|") for i in data[12:] if i.split("|")[2].strip() == subset and float(i.split("|")[3].strip()) >= min_len]
ids = [i[0].strip() for i in possible_dataset]
import soundfile as sf
possible_map = {}
subset_path = os.path.join(libri_path, subset)
for i in os.listdir(subset_path):
if i in ids:
id_path = os.path.join(subset_path, i)
id_dur = 0
id_included = []
for k in os.listdir(id_path):
for j in os.listdir(os.path.join(id_path, k)):
if ".wav" in j:
f_path = os.path.join(id_path, k, j)
sf_file = sf.SoundFile(f_path)
dur = len(sf_file) / sf_file.samplerate
if max_file_len < dur or dur < min_file_len:
continue
else:
id_included.append(f_path)
id_dur += dur
possible_map[i] = {"dur": id_dur, "included": id_included}
min_len
poss_speakers = {k: v["included"] for k, v in possible_map.items() if v["dur"]/60 >= min_len}
poss_speakers.keys()
to_move = list(poss_speakers.keys())
random.shuffle(to_move)
to_move = to_move[:max_speakers]
for sp_id, v in poss_speakers.items():
if sp_id in to_move:
for j in v:
f_name = j.split(os.path.sep)[-1]
text_f_name = f_name.split(".wav")[0] + ".txt"
os.makedirs(os.path.join(dataset_path, sp_id), exist_ok=True)
shutil.copy(j, os.path.join(dataset_path, sp_id, f_name))
shutil.copy(j.replace(".wav", ".normalized.txt"), os.path.join(dataset_path, sp_id, text_f_name))
sorted(to_move)
# !ls ../../../
| ttsexamples/fastspeech2_libritts/libri_experiment/prepare_libri.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# %matplotlib inline
import matplotlib.pyplot as plt
# +
plt.plot([1,2,3],[3,4,5])
plt.title('lec13')
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# -
import pandas
# +
df = pandas.read_excel('s3://ia-241-2022-spring-stanaway/house_price.xls')
df.sort_values(by=['built_in']).plot(x='built_in',y='price')
# +
avg_p_y = df.groupby('built_in').mean()['price']
avg_p_y.plot()
# -
df['price'].hist()
# +
avg_p_h = df.groupby('house_type').mean()['price']
avg_p_h.plot.bar()
# +
num_h = df['house_type'].value_counts()
num_h.plot.pie()
# -
df.plot.scatter(x='area', y='price')
| lec13.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/rl/berater-v11.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="eU7ylMh1kQ2y"
# # Berater Environment v11
#
# ## Changes from v10
# * configure custom network allowing to train to almost perfection
# * score method for BaseLine
# + [markdown] colab_type="text" id="zpzHtN3-kQ26"
# ## Installation (required for colab)
# + colab_type="code" id="0E567zPTkQ28" colab={}
# !pip install git+https://github.com/openai/baselines >/dev/null
# !pip install gym >/dev/null
# + [markdown] id="w3OdHyWEEEwy" colab_type="text"
# ## Environment
# + colab_type="code" id="-S4sZG5ZkQ3T" colab={}
import numpy as np
import random
import gym
from gym.utils import seeding
from gym import spaces
def state_name_to_int(state):
state_name_map = {
'S': 0,
'A': 1,
'B': 2,
'C': 3,
'D': 4,
'E': 5,
'F': 6,
'G': 7,
'H': 8,
'K': 9,
'L': 10,
'M': 11,
'N': 12,
'O': 13
}
return state_name_map[state]
def int_to_state_name(state_as_int):
state_map = {
0: 'S',
1: 'A',
2: 'B',
3: 'C',
4: 'D',
5: 'E',
6: 'F',
7: 'G',
8: 'H',
9: 'K',
10: 'L',
11: 'M',
12: 'N',
13: 'O'
}
return state_map[state_as_int]
class BeraterEnv(gym.Env):
"""
The Berater Problem
Actions:
There are 4 discrete deterministic actions, each choosing one direction
"""
metadata = {'render.modes': ['ansi']}
showStep = False
showDone = True
envEpisodeModulo = 100
def __init__(self):
# self.map = {
# 'S': [('A', 100), ('B', 400), ('C', 200 )],
# 'A': [('B', 250), ('C', 400), ('S', 100 )],
# 'B': [('A', 250), ('C', 250), ('S', 400 )],
# 'C': [('A', 400), ('B', 250), ('S', 200 )]
# }
self.map = {
'S': [('A', 300), ('B', 100), ('C', 200 )],
'A': [('S', 300), ('B', 100), ('E', 100 ), ('D', 100 )],
'B': [('S', 100), ('A', 100), ('C', 50 ), ('K', 200 )],
'C': [('S', 200), ('B', 50), ('M', 100 ), ('L', 200 )],
'D': [('A', 100), ('F', 50)],
'E': [('A', 100), ('F', 100), ('H', 100)],
'F': [('D', 50), ('E', 100), ('G', 200)],
'G': [('F', 200), ('O', 300)],
'H': [('E', 100), ('K', 300)],
'K': [('B', 200), ('H', 300)],
'L': [('C', 200), ('M', 50)],
'M': [('C', 100), ('L', 50), ('N', 100)],
'N': [('M', 100), ('O', 100)],
'O': [('N', 100), ('G', 300)]
}
max_paths = 4
self.action_space = spaces.Discrete(max_paths)
positions = len(self.map)
# observations: position, reward of all 4 local paths, rest reward of all locations
# non existing path is -1000 and no position change
# look at what #getObservation returns if you are confused
low = np.append(np.append([0], np.full(max_paths, -1000)), np.full(positions, 0))
high = np.append(np.append([positions - 1], np.full(max_paths, 1000)), np.full(positions, 1000))
self.observation_space = spaces.Box(low=low,
high=high,
dtype=np.float32)
self.reward_range = (-1, 1)
self.totalReward = 0
self.stepCount = 0
self.isDone = False
self.envReward = 0
self.envEpisodeCount = 0
self.envStepCount = 0
self.reset()
self.optimum = self.calculate_customers_reward()
def seed(self, seed=None):
self.np_random, seed = seeding.np_random(seed)
return [seed]
def iterate_path(self, state, action):
paths = self.map[state]
if action < len(paths):
return paths[action]
else:
# sorry, no such action, stay where you are and pay a high penalty
return (state, 1000)
def step(self, action):
destination, cost = self.iterate_path(self.state, action)
lastState = self.state
customerReward = self.customer_reward[destination]
reward = (customerReward - cost) / self.optimum
self.state = destination
self.customer_visited(destination)
done = destination == 'S' and self.all_customers_visited()
stateAsInt = state_name_to_int(self.state)
self.totalReward += reward
self.stepCount += 1
self.envReward += reward
self.envStepCount += 1
if self.showStep:
print( "Episode: " + ("%4.0f " % self.envEpisodeCount) +
" Step: " + ("%4.0f " % self.stepCount) +
lastState + ' --' + str(action) + '-> ' + self.state +
' R=' + ("% 2.2f" % reward) + ' totalR=' + ("% 3.2f" % self.totalReward) +
' cost=' + ("%4.0f" % cost) + ' customerR=' + ("%4.0f" % customerReward) + ' optimum=' + ("%4.0f" % self.optimum)
)
if done and not self.isDone:
self.envEpisodeCount += 1
if BeraterEnv.showDone:
episodes = BeraterEnv.envEpisodeModulo
if (self.envEpisodeCount % BeraterEnv.envEpisodeModulo != 0):
episodes = self.envEpisodeCount % BeraterEnv.envEpisodeModulo
print( "Done: " +
("episodes=%6.0f " % self.envEpisodeCount) +
("avgSteps=%6.2f " % (self.envStepCount/episodes)) +
("avgTotalReward=% 3.2f" % (self.envReward/episodes) )
)
if (self.envEpisodeCount%BeraterEnv.envEpisodeModulo) == 0:
self.envReward = 0
self.envStepCount = 0
self.isDone = done
observation = self.getObservation(stateAsInt)
        info = {"from": lastState, "to": destination}
return observation, reward, done, info
def getObservation(self, position):
result = np.array([ position,
self.getPathObservation(position, 0),
self.getPathObservation(position, 1),
self.getPathObservation(position, 2),
self.getPathObservation(position, 3)
],
dtype=np.float32)
all_rest_rewards = list(self.customer_reward.values())
result = np.append(result, all_rest_rewards)
return result
def getPathObservation(self, position, path):
source = int_to_state_name(position)
paths = self.map[self.state]
if path < len(paths):
target, cost = paths[path]
reward = self.customer_reward[target]
result = reward - cost
else:
result = -1000
return result
def customer_visited(self, customer):
self.customer_reward[customer] = 0
def all_customers_visited(self):
return self.calculate_customers_reward() == 0
    def calculate_customers_reward(self):
        total = 0
        for value in self.customer_reward.values():
            total += value
        return total
def modulate_reward(self):
number_of_customers = len(self.map) - 1
number_per_consultant = int(number_of_customers/2)
# number_per_consultant = int(number_of_customers/1.5)
self.customer_reward = {
'S': 0
}
for customer_nr in range(1, number_of_customers + 1):
self.customer_reward[int_to_state_name(customer_nr)] = 0
# every consultant only visits a few random customers
samples = random.sample(range(1, number_of_customers + 1), k=number_per_consultant)
key_list = list(self.customer_reward.keys())
for sample in samples:
self.customer_reward[key_list[sample]] = 1000
def reset(self):
self.totalReward = 0
self.stepCount = 0
self.isDone = False
self.modulate_reward()
self.state = 'S'
return self.getObservation(state_name_to_int(self.state))
def render(self):
print(self.customer_reward)
# + id="wdZBH30Rs95B" colab_type="code" outputId="489b9bbd-8458-4289-d34b-2fa6423669e4" colab={"base_uri": "https://localhost:8080/", "height": 68}
env = BeraterEnv()
print(env.reset())
print(env.customer_reward)
# + [markdown] colab_type="text" id="Usj9iWTskQ3t"
# # Try out Environment
# + colab_type="code" id="oTtUfeONkQ3w" outputId="4fbf26b1-6304-4d81-8af3-6c0eb2e62cd4" colab={"base_uri": "https://localhost:8080/", "height": 3540}
BeraterEnv.showStep = True
BeraterEnv.showDone = True
env = BeraterEnv()
print(env)
observation = env.reset()
print(observation)
for t in range(1000):
action = env.action_space.sample()
observation, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(t+1))
break
env.close()
print(observation)
# + [markdown] id="eWpCU8xH0ZKt" colab_type="text"
# ## Baseline
# + id="7NxTojLi0N0o" colab_type="code" colab={}
from copy import deepcopy
import json
class Baseline():
def __init__(self, env, max_reward, verbose=1):
self.env = env
self.max_reward = max_reward
self.verbose = verbose
self.reset()
def reset(self):
self.map = self.env.map
self.rewards = self.env.customer_reward.copy()
def as_string(self, state):
        # reward/cost does not hurt, but is useless; path obscures the same state
new_state = {
'rewards': state['rewards'],
'position': state['position']
}
return json.dumps(new_state, sort_keys=True)
def is_goal(self, state):
if state['position'] != 'S': return False
for reward in state['rewards'].values():
if reward != 0: return False
return True
def expand(self, state):
states = []
for position, cost in self.map[state['position']]:
new_state = deepcopy(state)
new_state['position'] = position
new_state['rewards'][position] = 0
reward = state['rewards'][position]
new_state['reward'] += reward
new_state['cost'] += cost
new_state['path'].append(position)
states.append(new_state)
return states
def search(self, root, max_depth = 25):
closed = set()
open = [root]
while open:
state = open.pop(0)
if self.as_string(state) in closed: continue
closed.add(self.as_string(state))
depth = len(state['path'])
if depth > max_depth:
if self.verbose > 0:
print("Visited:", len(closed))
print("Reached max depth, without reaching goal")
return None
if self.is_goal(state):
scaled_reward = (state['reward'] - state['cost']) / self.max_reward
state['scaled_reward'] = scaled_reward
if self.verbose > 0:
print("Scaled reward:", scaled_reward)
print("Perfect path", state['path'])
return state
expanded = self.expand(state)
open += expanded
# make this best first
open.sort(key=lambda state: state['cost'])
def find_optimum(self):
initial_state = {
'rewards': self.rewards.copy(),
'position': 'S',
'reward': 0,
'cost': 0,
'path': ['S']
}
return self.search(initial_state)
def benchmark(self, model, sample_runs=100):
self.verbose = 0
BeraterEnv.showStep = False
BeraterEnv.showDone = False
perfect_rewards = []
model_rewards = []
for run in range(sample_runs):
observation = self.env.reset()
self.reset()
optimum_state = self.find_optimum()
perfect_rewards.append(optimum_state['scaled_reward'])
state = np.zeros((1, 2*128))
dones = np.zeros((1))
for t in range(1000):
actions, _, state, _ = model.step(observation, S=state, M=dones)
observation, reward, done, info = self.env.step(actions[0])
if done:
break
model_rewards.append(env.totalReward)
return perfect_rewards, model_rewards
def score(self, model, sample_runs=100):
        perfect_rewards, model_rewards = self.benchmark(model, sample_runs=sample_runs)
perfect_score_mean, perfect_score_std = np.array(perfect_rewards).mean(), np.array(perfect_rewards).std()
test_score_mean, test_score_std = np.array(model_rewards).mean(), np.array(model_rewards).std()
return perfect_score_mean, perfect_score_std, test_score_mean, test_score_std
# + [markdown] colab_type="text" id="4GlYjZ3xkQ38"
# # Train model
#
# Estimation
# * total cost when travelling all paths (back and forth): 2500
# * all rewards: 6000
# * but: rewards are much more sparse while routes stay the same, maybe expect less
# * estimate: assuming no illegal moves, expect a reward between
#   * half the travel cost: (6000 - 1250) / 6000 = 0.79
#   * and full travel cost: (6000 - 2500) / 6000 = 0.58
# * additionally: the agent only sees very little of the whole scenario
# * changes with every episode
# * was ok when network can learn fixed scenario
#
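The estimate above reduces to a line of arithmetic; as a sanity check (using the numbers from the notes, not anything computed by the environment):

```python
# Reward estimate for the Berater environment, values taken from the notes above.
all_rewards = 6000        # sum of all customer rewards
full_travel_cost = 2500   # cost of travelling all paths back and forth
half_travel_cost = full_travel_cost / 2

best_case = (all_rewards - half_travel_cost) / all_rewards
worst_case = (all_rewards - full_travel_cost) / all_rewards
print(round(best_case, 2), round(worst_case, 2))  # 0.79 0.58
```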
# + id="-rAaTCL0r-ql" colab_type="code" colab={}
# !rm -r logs
# !mkdir logs
# !mkdir logs/berater
# + id="LArM6BsJgUvL" colab_type="code" outputId="2826d1d8-7453-429d-d78a-92cc6a9381d4" colab={"base_uri": "https://localhost:8080/", "height": 34}
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
# + [markdown] id="GCufDIpnjNms" colab_type="text"
# ### Step 1: Extract MLP builder from openai sources
# + id="WMylk8s_amq1" colab_type="code" colab={}
# copied from https://github.com/openai/baselines/blob/master/baselines/a2c/utils.py
def ortho_init(scale=1.0):
def _ortho_init(shape, dtype, partition_info=None):
#lasagne ortho init for tf
shape = tuple(shape)
if len(shape) == 2:
flat_shape = shape
elif len(shape) == 4: # assumes NHWC
flat_shape = (np.prod(shape[:-1]), shape[-1])
else:
raise NotImplementedError
a = np.random.normal(0.0, 1.0, flat_shape)
u, _, v = np.linalg.svd(a, full_matrices=False)
q = u if u.shape == flat_shape else v # pick the one with the correct shape
q = q.reshape(shape)
return (scale * q[:shape[0], :shape[1]]).astype(np.float32)
return _ortho_init
def fc(x, scope, nh, *, init_scale=1.0, init_bias=0.0):
with tf.variable_scope(scope):
nin = x.get_shape()[1].value
w = tf.get_variable("w", [nin, nh], initializer=ortho_init(init_scale))
b = tf.get_variable("b", [nh], initializer=tf.constant_initializer(init_bias))
return tf.matmul(x, w)+b
# copied from https://github.com/openai/baselines/blob/master/baselines/common/models.py#L31
def mlp(num_layers=2, num_hidden=64, activation=tf.tanh, layer_norm=False):
"""
Stack of fully-connected layers to be used in a policy / q-function approximator
Parameters:
----------
num_layers: int number of fully-connected layers (default: 2)
num_hidden: int size of fully-connected layers (default: 64)
activation: activation function (default: tf.tanh)
Returns:
-------
function that builds fully connected network with a given input tensor / placeholder
"""
def network_fn(X):
# print('network_fn called')
# Tensor("ppo2_model_4/Ob:0", shape=(1, 19), dtype=float32)
# Tensor("ppo2_model_4/Ob_1:0", shape=(512, 19), dtype=float32)
# print (X)
h = tf.layers.flatten(X)
for i in range(num_layers):
h = fc(h, 'mlp_fc{}'.format(i), nh=num_hidden, init_scale=np.sqrt(2))
if layer_norm:
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
# Tensor("ppo2_model_4/pi/Tanh_2:0", shape=(1, 500), dtype=float32)
# Tensor("ppo2_model_4/pi_2/Tanh_2:0", shape=(512, 500), dtype=float32)
# print(h)
return h
return network_fn
# + [markdown] id="YUvTLKKK8L8K" colab_type="text"
# ### Step 2: Replace exotic parts
#
# Steps:
# 1. Low level matmul replaced with dense layer (no need for custom code here)
# * https://www.tensorflow.org/api_docs/python/tf/layers
# * https://www.tensorflow.org/api_docs/python/tf/layers/Dense
#
# 1. initializer changed to best practice glorot uniform, but does not give reliable results, so use seed
# 1. use relu activations (should train faster)
# 1. standard batch normalization does not train with any configuration (no idea why), so we need to keep layer normalization
# 1. Dropout and L2 regularization would be nice as well, but are not easy to add within the boundaries of the OpenAI framework: https://stackoverflow.com/questions/38292760/tensorflow-introducing-both-l2-regularization-and-dropout-into-the-network-do
#
# #### Alternative: Using Keras API
#
# Not done here, as no big benefit is expected and Keras would need to be integrated into the surrounding low-level TensorFlow model (the session must be reused). If you want to do this, be sure to check at least the first link:
#
# * using Keras within TensorFlow model: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html
# * https://stackoverflow.com/questions/46790506/calling-a-keras-model-on-a-tensorflow-tensor-but-keep-weights
# * https://www.tensorflow.org/api_docs/python/tf/get_default_session
# * https://www.tensorflow.org/api_docs/python/tf/keras/backend/set_session
# + id="9X4G6Y4O8Khh" colab_type="code" colab={}
# first the dense layer
def mlp(num_layers=2, num_hidden=64, activation=tf.tanh, layer_norm=False):
def network_fn(X):
h = tf.layers.flatten(X)
for i in range(num_layers):
h = tf.layers.dense(h, units=num_hidden, kernel_initializer=ortho_init(np.sqrt(2)))
# h = fc(h, 'mlp_fc{}'.format(i), nh=num_hidden, init_scale=np.sqrt(2))
if layer_norm:
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
return h
return network_fn
# + id="NIbexm3U_341" colab_type="code" colab={}
# then initializer, relu activations
def mlp(num_layers=2, num_hidden=64, activation=tf.nn.relu, layer_norm=False):
def network_fn(X):
h = tf.layers.flatten(X)
for i in range(num_layers):
h = tf.layers.dense(h, units=num_hidden, kernel_initializer=tf.initializers.glorot_uniform(seed=17))
if layer_norm:
# h = tf.layers.batch_normalization(h, center=True, scale=True)
h = tf.contrib.layers.layer_norm(h, center=True, scale=True)
h = activation(h)
return h
return network_fn
# + colab_type="code" id="NzbylmYAkQ3-" outputId="fa563d7b-700f-4af0-abfe-8d0e08a43ccc" colab={"base_uri": "https://localhost:8080/", "height": 12764}
# %%time
# https://github.com/openai/baselines/blob/master/baselines/deepq/experiments/train_pong.py
# log_dir = logger.get_dir()
log_dir = '/content/logs/berater/'
import gym
from baselines import bench
from baselines import logger
from baselines.common.vec_env.dummy_vec_env import DummyVecEnv
from baselines.common.vec_env.vec_monitor import VecMonitor
from baselines.ppo2 import ppo2
BeraterEnv.showStep = False
BeraterEnv.showDone = False
env = BeraterEnv()
wrapped_env = DummyVecEnv([lambda: BeraterEnv()])
monitored_env = VecMonitor(wrapped_env, log_dir)
# https://github.com/openai/baselines/blob/master/baselines/ppo2/ppo2.py
# https://github.com/openai/baselines/blob/master/baselines/common/models.py#L30
# https://arxiv.org/abs/1607.06450 for layer_norm
# lr linear from lr=1e-2 to lr=1e-4 (default lr=3e-4)
def lr_range(frac):
# we get the remaining updates between 1 and 0
start_lr = 1e-2
end_lr = 1e-4
diff_lr = start_lr - end_lr
lr = end_lr + diff_lr * frac
return lr
network = mlp(num_hidden=500, num_layers=3, layer_norm=True)
model = ppo2.learn(
env=monitored_env,
network=network,
lr=lr_range,
gamma=1.0,
ent_coef=0.05,
total_timesteps=1000000)
# model = ppo2.learn(
# env=monitored_env,
# network='mlp',
# num_hidden=500,
# num_layers=3,
# layer_norm=True,
# lr=lr_range,
# gamma=1.0,
# ent_coef=0.05,
# total_timesteps=500000)
# model.save('berater-ppo-v11.pkl')
monitored_env.close()
# + [markdown] id="0cfzto7W8Mpd" colab_type="text"
# ### Visualizing Results
#
# https://github.com/openai/baselines/blob/master/docs/viz/viz.ipynb
# + id="yBzvtyVcvhkn" colab_type="code" colab={}
# # !ls -l $log_dir
# + id="2ZWB88EVsRei" colab_type="code" outputId="1e386a52-f722-497f-dde2-5c2fd6233b01" colab={"base_uri": "https://localhost:8080/", "height": 418}
from baselines.common import plot_util as pu
results = pu.load_results(log_dir)
import matplotlib.pyplot as plt
import numpy as np
r = results[0]
plt.ylim(0, .75)
# plt.plot(np.cumsum(r.monitor.l), r.monitor.r)
plt.plot(np.cumsum(r.monitor.l), pu.smooth(r.monitor.r, radius=100))
# + [markdown] colab_type="text" id="TtBh4c6-kQ4K"
# # Enjoy model
# + id="H_QTckfBra7l" colab_type="code" outputId="49cf8fa2-5356-48bd-fa1a-14c77888da88" colab={"base_uri": "https://localhost:8080/", "height": 34}
import numpy as np
observation = env.reset()
env.render()
baseline = Baseline(env, max_reward=6000)
# + colab_type="code" id="ucP0gNhhkQ4O" outputId="dda9701d-5bfd-4e93-ba56-1fa7ce341829" colab={"base_uri": "https://localhost:8080/", "height": 310}
state = np.zeros((1, 2*128))
dones = np.zeros((1))
BeraterEnv.showStep = True
BeraterEnv.showDone = False
for t in range(1000):
actions, _, state, _ = model.step(observation, S=state, M=dones)
observation, reward, done, info = env.step(actions[0])
if done:
print("Episode finished after {} timesteps, reward={}".format(t+1, env.totalReward))
break
env.close()
# + id="3z35_dMMt6SW" colab_type="code" outputId="b5fb4d47-b45f-459d-b519-16a04c886c93" colab={"base_uri": "https://localhost:8080/", "height": 707}
# %time baseline.find_optimum()
# + [markdown] id="K36GXkzyRGOO" colab_type="text"
# ## Evaluation
# + id="KMb58O_q067F" colab_type="code" colab={}
baseline = Baseline(env, max_reward=6000)
perfect_score_mean, perfect_score_std, test_score_mean, test_score_std = baseline.score(model, sample_runs=100)
# + id="Dr9ylHgnRIcc" colab_type="code" outputId="b1684d33-86c4-4de0-c7f8-3c4816247691" colab={"base_uri": "https://localhost:8080/", "height": 34}
# perfect scores
perfect_score_mean, perfect_score_std
# + id="rOSOoO29Rwgm" colab_type="code" outputId="09a856f2-18c6-4779-da15-0b21e5bf3e0b" colab={"base_uri": "https://localhost:8080/", "height": 34}
# test scores for our model
test_score_mean, test_score_std
# + id="Ls8IKVV1R5SE" colab_type="code" colab={}
| notebooks/rl/berater-v11.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import requests
import re
from bs4 import BeautifulSoup as bs
# import urllib
url_pre = "https://www.midiworld.com/search/"
# -
list = []
for i in range(1,10):
url = url_pre + str(i) + "/?q=classic"
r = requests.get(url)
rt = r.content
rh = str(rt,"utf-8")
soup = bs(rh,"html.parser")
download = soup.find_all("a",attrs={"target":"_blank"})
for j in download:
list.append(j['href'])
list
for i,u in enumerate(list):
    data = requests.get(u)
with open("classic/demo"+str(i)+".mid","wb") as code:
code.write(data.content)
| tensorflow-GPU/spider/piliangmidi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# Steps partially taken from https://debuggercafe.com/advanced-facial-keypoint-detection-with-pytorch/
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import cv2
import torch
import albumentations as A
from albumentations.pytorch.transforms import ToTensorV2
from sklearn.model_selection import train_test_split
import timm
# %matplotlib inline
# %config Completer.use_jedi = False
ROOT = "/home/lenin/code/hat_on_the_head/"
DATA = ROOT + "data/"
KP_DATA = DATA + "kaggle_keypoints/"
RANDOM_SEED = 42
# +
def load(path):
img = cv2.imread(path)
return cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
def load_item(root, row):
img = load(root + row.img)
keypoints = [[float(e) for e in t.split(",")] for t in row.our_kpts.split(";")]
return img, [keypoints[0]]
def show_w_kpts(img, kpts):
plt.figure(figsize=(10, 10))
plt.imshow(img)
keypoints = np.array(kpts)
for j in range(len(keypoints)):
plt.plot(keypoints[j, 0], keypoints[j, 1], 'b.')
plt.show()
df = pd.read_csv(KP_DATA + "training_frames_keypoints.csv")
if True:
def join_kpts(row):
kpts = []
kpts.append(f'{row["78"]},{row["79"]}')
# kpts.append(f'{row["84"]},{row["85"]}')
return ";".join(kpts)
df["img"] = df["Unnamed: 0"]
df["our_kpts"] = df.apply(join_kpts,axis=1)
df = df.drop(columns=[str(i) for i in range(136)] + ["Unnamed: 0"])
# df.to_csv(DATA + "our_train_kaggle_keypoints.csv", index=False)
print(f"total images {len(df)}")
df.head()
# +
H = 224
W = 224
tfms_train = A.Compose([
# A.LongestMaxSize(448),
# A.ShiftScaleRotate(border_mode=0, value=0, shift_limit=0.4, scale_limit=0.3, p=0.8),
# A.RandomBrightnessContrast(p=0.2),
# A.CLAHE(),
#A.RandomCrop(320, 320),
A.Resize(H, W),
A.Normalize(),
#ToTensorV2(),
], keypoint_params=A.KeypointParams(format='xy'))
tfms_valid = A.Compose([
A.Resize(H, W),
A.Normalize(),
#ToTensorV2(),
], keypoint_params=A.KeypointParams(format='xy'))
sample = df.sample(1).iloc[0]
orig_img, orig_kpts = load_item(KP_DATA + "training/", sample)
res = tfms_train(image=orig_img, keypoints=orig_kpts)
img_tfmd = res["image"] #.transpose(0, -1).numpy()
kpts_tfmd = res["keypoints"]
show_w_kpts(img_tfmd, kpts_tfmd)
# -
class KeypointsDataset(torch.utils.data.Dataset):
def __init__(self, root, df, aug=A.Compose([])):
self.root = root
self.df = df
self.aug = aug
self.to_tensor = ToTensorV2()
def __len__(self):
return len(self.df)
def __getitem__(self, i):
row = self.df.iloc[i]
orig_img, orig_kpts = load_item(self.root, row)
res = self.aug(image=orig_img, keypoints=orig_kpts)
        # re-run the augmentation until the keypoint survives the transform
        while len(res["keypoints"]) < 1:
            res = self.aug(image=orig_img, keypoints=orig_kpts)
img_tfmd = self.to_tensor(image=res["image"])["image"]
kpts_tfmd = res["keypoints"]
kpts_tfmd = np.array(kpts_tfmd) / np.array([W, H])
return img_tfmd, torch.FloatTensor([kp for x in kpts_tfmd for kp in x])
train_df, val_df = train_test_split(df, test_size=0.15, shuffle=True, random_state=RANDOM_SEED)
# +
batch_size = 64
num_workers = 0
train_ds = KeypointsDataset(root=KP_DATA + "training/", df=train_df, aug=tfms_train)
val_ds = KeypointsDataset(root=KP_DATA + "training/", df=val_df, aug=tfms_valid)
train_dl = torch.utils.data.DataLoader(
dataset=train_ds,
batch_size=batch_size,
shuffle=True,
num_workers=num_workers,
)
val_dl = torch.utils.data.DataLoader(
dataset=val_ds,
batch_size=batch_size,
shuffle=False,
num_workers=num_workers,
)
# -
model = timm.create_model('efficientnet_b0', pretrained=True)
model.classifier = torch.nn.Linear(model.classifier.in_features, out_features=2, bias=True)
import pytorch_lightning as pl
from torchmetrics import functional as metrics
pl.seed_everything(RANDOM_SEED)
class HatModule(pl.LightningModule):
def __init__(self, model, optimizer_name, optimizer_hparams):
super().__init__()
# Exports the hyperparameters to a YAML file, and create "self.hparams" namespace
self.save_hyperparameters()
# Create model
self.model = model
# Create loss module
self.loss_module = torch.nn.SmoothL1Loss()
# Example input for visualizing the graph in Tensorboard
# self.example_input_array = torch.zeros((1, 3, 32, 32), dtype=torch.float32)
def forward(self, imgs):
# Forward function that is run when visualizing the graph
return self.model(imgs)
def configure_optimizers(self):
# We will support Adam or SGD as optimizers.
if self.hparams.optimizer_name == "Adam":
# AdamW is Adam with a correct implementation of weight decay (see here
# for details: https://arxiv.org/pdf/1711.05101.pdf)
optimizer = torch.optim.AdamW(self.model.parameters(), **self.hparams.optimizer_hparams)
elif self.hparams.optimizer_name == "SGD":
optimizer = torch.optim.SGD(self.model.parameters(), **self.hparams.optimizer_hparams)
else:
assert False, f'Unknown optimizer: "{self.hparams.optimizer_name}"'
# We will reduce the learning rate by 0.1 after 100 and 150 epochs
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.1)
return [optimizer], [scheduler]
def training_step(self, batch, batch_idx):
# "batch" is the output of the training data loader.
imgs, keypoints = batch
preds = self.model(imgs)
loss = self.loss_module(preds, keypoints)
#rmse = metrics.mean_squared_error(preds, keypoints, squared=False)
#self.log("train_rmse", rmse, prog_bar=True, on_step=True)
self.log("train_loss", loss, prog_bar=True, on_step=True)
return loss # Return tensor to call ".backward" on
def validation_step(self, batch, batch_idx):
imgs, keypoints = batch
preds = self.model(imgs)
loss = self.loss_module(preds, keypoints)
rmse = metrics.mean_squared_error(preds, keypoints, squared=False)
self.log("val_rmse", rmse, prog_bar=True, on_step=True)
# By default logs it per epoch (weighted average over batches)
self.log("val_loss", loss, prog_bar=True, on_step=True)
def test_step(self, batch, batch_idx):
self.validation_step(batch, batch_idx)
# +
device = "cuda:0"
trainer = pl.Trainer(
#default_root_dir=os.path.join(CHECKPOINT_PATH, save_name), # Where to save models
# We run on a single GPU (if possible)
gpus=1 if str(device) == "cuda:0" else 0,
# How many epochs to train for if no patience is set
max_epochs=30,
callbacks=[
pl.callbacks.ModelCheckpoint(
save_weights_only=True, mode="min", monitor="val_loss", verbose=True,
), # Save the best checkpoint based on the minimum val_loss recorded. Saves only weights and not optimizer
pl.callbacks.LearningRateMonitor("epoch"),
],
)
module = HatModule(model, 'Adam', {"lr": 0.001})
# -
trainer.fit(module, train_dataloaders=train_dl, val_dataloaders=val_dl)
ckpt = "/home/lenin/code/hat_on_the_head/notebooks/lightning_logs/version_18/checkpoints/epoch=6-step=321.ckpt"
module.load_from_checkpoint(ckpt)
module.eval()
# +
sample = val_df.iloc[0]
orig_img, orig_kpts = load_item(KP_DATA + "training/", sample)
res = tfms_valid(image=orig_img, keypoints=orig_kpts)
img_tfmd = res["image"]
kpts_tfmd = res["keypoints"]
#show_w_kpts(img_tfmd, kpts_tfmd)
show_w_kpts(orig_img, orig_kpts)
# +
tfms_test = A.Compose([
# A.LongestMaxSize(448),
# A.ShiftScaleRotate(border_mode=0, value=0, shift_limit=0.4, scale_limit=0.3, p=0.8),
# A.RandomBrightnessContrast(p=0.2),
# A.CLAHE(),
# A.Resize(H, W),
A.Normalize(),
ToTensorV2(),
])
img = load("/home/lenin/img1.png")
img = tfms_test(image=img)["image"]
out = module.forward(img.unsqueeze(0).to(device))
img = img.moveaxis(0, -1).cpu().detach().numpy()
kpts = out.cpu().detach().numpy()[0] * 224
#kpts = [kpts[:2], kpts[2:]]
kpts
show_w_kpts(img, [kpts])
# -
| notebooks/Baseline-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from around_the_world import AroundTheWorld
import pandas as pd
# Load the dataframe with cities of the world
dataframe = pd.read_excel("worldcities.xlsx")
# Instantiate the AroundTheWorld class, with:
# the previously loaded dataframe,
# city_start set to "London", the starting city,
# country_start set to "GB", the starting country,
# n_min set to 3, the number of closest cities to which it is possible to travel,
# x_size set to 0.3, the longitudinal size of the grid used to search for the nearest cities,
# y_size set to 0.15, the latitudinal size of the grid used to search for the nearest cities,
# rise_factor set to 2, the multiplication factor by which the search grid is enlarged
around = AroundTheWorld(
dataframe,
"London",
"GB",
n_min = 3,
x_size = 0.3,
y_size = 0.15,
rise_factor = 2)
# Run the algorithm that makes the journey around the world from the starting city, with:
# is_intermediate_plot set to True, to plot the intermediate journey
# n_intermediate_step set to 100, the number of steps between intermediate plots
# is_clear_output set to True, so that previous plots are removed
around.travel(
is_intermediate_plot = True,
n_intermediate_step = 100,
is_clear_output = True)
# Print all the cities visited during the journey
pd.set_option("display.max_rows", None, "display.max_columns", None)
print(around.map_city.to_string(index=False))
| main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## The one-shot Knowledge Gradient acquisition function
#
# The *Knowledge Gradient* (KG) (see [2, 3]) is a look-ahead acquisition function that quantifies the expected increase in the maximum of the modeled black-box function $f$ from obtaining additional (random) observations collected at the candidate set $\mathbf{x}$. KG often shows improved Bayesian Optimization performance relative to simpler acquisition functions such as Expected Improvement, but in its traditional form it is computationally expensive and hard to implement.
#
# BoTorch implements a generalized variant of parallel KG [3] given by
# $$ \alpha_{\text{KG}}(\mathbf{x}) =
# \mathbb{E}_{\mathcal{D}_{\mathbf{x}}}
# \Bigl[\, \max_{x' \in \mathbb{X}} \mathbb{E} \left[ g(\xi)\right] \Bigr] - \mu,
# $$
# where $\xi \sim \mathcal{P}(f(x') \mid \mathcal{D} \cup \mathcal{D}_{\mathbf{x}})$ is the posterior at $x'$ conditioned on $\mathcal{D}_{\mathbf{x}}$, the (random) dataset observed at $\mathbf{x}$, and $\mu := \max_{x}\mathbb{E}[g(f(x)) \mid \mathcal{D}]$.
#
#
# #### Optimizing KG
#
# The conventional approach for optimizing parallel KG (where $g(\xi) = \xi$) is to apply stochastic gradient ascent, with each gradient observation potentially being an average over multiple samples. For each sample $i$, the inner optimization problem $\max_{x_i \in \mathbb{X}} \mathbb{E} \left[ \xi^i \mid \mathcal{D}_{\mathbf{x}}^i \right]$ for the posterior mean is solved numerically. An unbiased stochastic gradient of KG can then be computed by leveraging the envelope theorem and the optimal points $\{x_i^*\}$. In this approach, every iteration requires solving numerous inner optimization problems, one for each outer sample, in order to estimate just one stochastic gradient.
#
# The "one-shot" formulation of KG in BoTorch treats optimizing $\alpha_{\text{KG}}(\mathbf{x})$ as an entirely deterministic optimization problem. It involves drawing $N_{\!f} = $ `num_fantasies` fixed base samples $\mathbf{Z}_f:= \{ \mathbf{Z}^i_f \}_{1\leq i \leq N_{\!f}}$ for the outer expectation, sampling fantasy data $\{\mathcal{D}_{\mathbf{x}}^i(\mathbf{Z}_f^i)\}_{1\leq i \leq N_{\!f}}$, and constructing associated fantasy models $\{\mathcal{M}^i(\mathbf{Z}_f^i)\}_{1 \leq i \leq N_{\!f}}$. The inner maximization can then be moved outside of the sample average, resulting in the following optimization problem:
# $$
# \max_{\mathbf{x} \in \mathbb{X}}\alpha_{\text{KG}}(\mathbf{x}) \approx \max_{\mathbf{x}\in \mathbb{X}, \mathbf{X}' \in \mathbb{X}^{N_{\!f}} } %=1}^{\!N_{\!f}}}
# \sum_{i=1}^{N_{\!f}} \mathbb{E}\left[g(\xi^i)\right],
# $$
# where $\xi^i \sim \mathcal{P}(f(x'^i) \mid \mathcal{D} \cup \mathcal{D}_{\mathbf{x}}^i(\mathbf{Z}_f^i))$ and $\mathbf{X}' := \{x'^i\}_{1 \leq i \leq N_{\!f}}$.
#
# If the inner expectation does not have an analytic expression, one can also draw fixed base samples $\mathbf{Z}_I:= \{ \mathbf{Z}^i_I \}_{1\leq i\leq N_{\!I}}$ and use an MC approximation as with the standard MC acquisition functions of type `MCAcquisitionFunction`. In either case one is left with a deterministic optimization problem.
#
# The key difference from the envelope theorem approach is that we do not solve the inner optimization problem to completion for every fantasy point for every gradient step with respect to $\mathbf{x}$. Instead, we solve the nested optimization problem jointly over $\mathbf{x}$ and the fantasy points $\mathbf{X}'$. The resulting optimization problem is of higher dimension, namely $(q + N_{\!f})d$ instead of $qd$, but unlike the envelope theorem formulation it can be solved as a single optimization problem, which can be solved using standard methods for deterministic optimization.
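# A quick numeric check of the dimensionality claim, using the values from this tutorial
# (`q=2` candidates, `num_fantasies=128`, and `d=2` input dimensions):

```python
q, num_fantasies, d = 2, 128, 2
one_shot_dim = (q + num_fantasies) * d  # joint problem over x and the fantasy points X'
envelope_dim = q * d                    # per-step problem in the envelope-theorem approach
print(one_shot_dim, envelope_dim)  # 260 4
```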
#
#
# [1] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. BoTorch: Programmable Bayesian Optimization in PyTorch. ArXiv 2019.
#
# [2] <NAME>, <NAME>, and <NAME>. A Knowledge-Gradient policy for sequential information collection. SIAM Journal on Control and Optimization, 2008.
#
# [3] <NAME> and <NAME>. The parallel knowledge gradient method for batch bayesian optimization. NIPS 2016.
# ### Setting up a toy model
#
# We'll fit a standard `SingleTaskGP` model on noisy observations of the synthetic function $f(x) = \sin(2 \pi x_1) \cos(2 \pi x_2)$ in `d=2` dimensions on the hypercube $[0, 1]^2$.
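# As a quick sanity check, the location of this function's optimum is easy to verify
# analytically: $f$ attains its maximum value of $1$ at, e.g., $x = (0.25, 0)$:

```python
import math

# the synthetic objective used throughout this tutorial
f = lambda x1, x2: math.sin(2 * math.pi * x1) * math.cos(2 * math.pi * x2)
print(f(0.25, 0.0))  # 1.0, since sin(pi/2) = cos(0) = 1
```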
# +
import math
import torch
from botorch.fit import fit_gpytorch_model
from botorch.models import SingleTaskGP
from botorch.utils import standardize
from gpytorch.mlls import ExactMarginalLogLikelihood
# +
bounds = torch.stack([torch.zeros(2), torch.ones(2)])
train_X = bounds[0] + (bounds[1] - bounds[0]) * torch.rand(20, 2)
train_Y = torch.sin(2 * math.pi * train_X[:, [0]]) * torch.cos(2 * math.pi * train_X[:, [1]])
train_Y = standardize(train_Y + 0.05 * torch.randn_like(train_Y))
model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_model(mll);
# -
# ### Defining the qKnowledgeGradient acquisition function
#
# The `qKnowledgeGradient` complies with the standard `MCAcquisitionFunction` API. The only mandatory argument in addition to the model is `num_fantasies`, the number of fantasy samples. More samples result in a better approximation of KG, at the expense of both memory and wall time.
#
# `qKnowledgeGradient` also supports the other parameters of `MCAcquisitionFunction`, such as a generic objective `objective` and pending points `X_pending`. It also accepts a `current_value` argument, the maximum posterior mean of the current model (which can be obtained by maximizing the `PosteriorMean` acquisition function). Subtracting this constant does not change the optimizer, so passing it is not required; without it, the acquisition value is simply a constant shift of the actual "Knowledge Gradient" value.
# +
from botorch.acquisition import qKnowledgeGradient
qKG = qKnowledgeGradient(model, num_fantasies=128)
# -
# ### Optimizing qKG
#
# `qKnowledgeGradient` subclasses `OneShotAcquisitionFunction`, which makes sure that the fantasy parameterization $\mathbf{X}'$ is automatically generated and optimized when calling `optimize_acqf` on the acquisition function. This means that optimizing one-shot KG in BoTorch is just as easy as optimizing any other acquisition function (from an API perspective, at least). It turns out that a careful initialization of the fantasy points can significantly help with the optimization (see the logic in `botorch.optim.initializers.gen_one_shot_kg_initial_conditions` for more information).
#
#
# Here we use `num_restarts=10` random initial `q`-batches with `q=2` in parallel, with the initialization heuristic starting from `raw_samples=512` raw points (note that since `qKnowledgeGradient` is significantly more expensive to evaluate than other acquisition functions, large values of `num_restarts` and `raw_samples`, which are typically feasible in other settings, can result in long wall times and potential memory issues).
#
# Finally, since we do not pass a `current_value` argument, this value is not actually the KG value, but offset by the constant (w.r.t. the candidates) $\mu := \max_{x}\mathbb{E}[g(f(x)) \mid \mathcal{D}]$.
# +
from botorch.optim import optimize_acqf
from botorch.utils.sampling import manual_seed
with manual_seed(1234):
candidates, acq_value = optimize_acqf(
acq_function=qKG,
bounds=bounds,
q=2,
num_restarts=10,
raw_samples=512,
)
# -
candidates
acq_value
# ### Computing the actual KG value
#
# We first need to find the maximum posterior mean - we can use a large number of random restarts and raw_samples to increase the likelihood that we do indeed find it (this is a non-convex optimization problem, after all).
# +
from botorch.acquisition import PosteriorMean
argmax_pmean, max_pmean = optimize_acqf(
acq_function=PosteriorMean(model),
bounds=bounds,
q=1,
num_restarts=20,
raw_samples=2048,
)
# -
# Now we can optimize KG after passing the current value. We also pass in the `sampler` from the original `qKG` above, which contains the fixed base samples $\mathbf{Z}_f$. This ensures that we optimize the same approximation, so the values are an apples-to-apples comparison (as `num_fantasies` increases, the effect of this randomness becomes less and less important).
# +
qKG_proper = qKnowledgeGradient(
model,
num_fantasies=128,
sampler=qKG.sampler,
current_value=max_pmean,
)
with manual_seed(1234):
candidates_proper, acq_value_proper = optimize_acqf(
acq_function=qKG_proper,
bounds=bounds,
q=2,
num_restarts=10,
raw_samples=512,
)
# -
candidates_proper
acq_value_proper
| tutorials/one_shot_kg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
from spacy import displacy
[m.start() for m in re.finditer('test', 'test test test test')]
#[0, 5, 10, 15]
txt = "On February 23, 2019, Eminem released a re-issue of The Slim Shady LP. On June 25, 2019, " \
"The New York Times Magazine listed Eminem among hundreds of artists whose material was " \
"reportedly destroyed in the 2008 Universal Studios fire."
GT = [
{'start': 22, 'end': 28, 'mention_txt': 'Eminem', 'entity_txt': 'Eminem'},
{'start': 52,'end': 69,'mention_txt': 'The Slim Shady LP','entity_txt': 'The Slim Shady LP'},
{'start': 89, 'end': 116, 'mention_txt': 'The New York Times Magazine', 'entity_txt': 'The New York Times Magazine'},
{'start': 124, 'end': 130, 'mention_txt': 'Eminem', 'entity_txt': 'Eminem'},
{'start': 149, 'end': 156, 'mention_txt': 'artists', 'entity_txt': 'Artist'},
{'start': 204, 'end': 231, 'mention_txt': '2008 Universal Studios fire', 'entity_txt': '2008 Universal Studios fire'}
]
model = [
{'start': 52,'end': 69,'mention_txt': 'The Slim Shady LP','entity_txt': 'The Slim Shady LP'},
{'start': 93, 'end': 116, 'mention_txt': 'New York Times Magazine', 'entity_txt': 'The New York Times Magazine'},
{'start': 149, 'end': 156, 'mention_txt': 'artists', 'entity_txt': 'Artist (film)'},
{'start': 209, 'end': 226, 'mention_txt': 'Universal Studios', 'entity_txt': 'Universal Pictures'},
]
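# Since both annotation sets use character spans, a strict span-level comparison falls out
# directly. A small sketch using the `start`/`end` offsets above (only exact span matches count):

```python
gt_spans = {(22, 28), (52, 69), (89, 116), (124, 130), (149, 156), (204, 231)}
model_spans = {(52, 69), (93, 116), (149, 156), (209, 226)}

tp = len(gt_spans & model_spans)        # exact matches: (52, 69) and (149, 156)
precision = tp / len(model_spans)
recall = tp / len(gt_spans)
print(tp, precision, round(recall, 3))  # 2 0.5 0.333
```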
def process(anno, title=''):
ents = []
for anno_ele in anno:
start = anno_ele['start']
end = anno_ele['end']
mention = anno_ele['mention_txt']
entity = anno_ele['entity_txt']
assert txt[start:end] == mention
ent = {
'start': start,
'end': end,
'label': entity,
}
ents.append(ent)
instance = {
"text": txt,
"ents": ents,
'title': title,
}
print(repr(instance))
return instance
process(model)
GT_pre = {'text': 'On February 23, 2019, Eminem released a re-issue of The Slim Shady LP. On June 25, 2019, The New York Times Magazine listed Eminem among hundreds of artists whose material was reportedly destroyed in the 2008 Universal Studios fire.',
'ents': [{'start': 22, 'end': 28, 'label': 'Eminem'},
{'start': 52, 'end': 69, 'label': 'The Slim Shady LP'},
{'start': 89, 'end': 116, 'label': 'The New York Times Magazine'},
{'start': 124, 'end': 130, 'label': 'Eminem'},
{'start': 149, 'end': 156, 'label': 'Artist'},
{'start': 204, 'end': 231, 'label': '2008 Universal Studios fire'}],
'title': ''}
model_pre = {'text': 'On February 23, 2019, Eminem released a re-issue of The Slim Shady LP. On June 25, 2019, The New York Times Magazine listed Eminem among hundreds of artists whose material was reportedly destroyed in the 2008 Universal Studios fire.',
'ents': [{'start': 52, 'end': 69, 'label': 'The Slim Shady LP'},
{'start': 93, 'end': 116, 'label': 'The New York Times Magazine'},
{'start': 149, 'end': 156, 'label': 'Artist (film)'},
{'start': 209, 'end': 226, 'label': 'Universal Pictures'}],
'title': ''}
colors = {
'Eminem': "#ff8197",
'The Slim Shady LP': "#bfeeb7",
'The New York Times Magazine': "#e4e7d2",
'2008 Universal Studios fire': "#ff9561",
'Universal Pictures': 'yellow',
'Artist': "#bfe1d9",
'Artist (film)':"#ffeb80",
}
options = {'colors': colors}
displacy.render(GT_pre, style='ent',manual=True, options=options)
displacy.render(model_pre, style='ent',manual=True, options=options)
| e2e_EL_evaluate/visualization/anno_category_example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# POC2
# ---
# dataset: [./data/processed0903/train.csv](../data/processed0903/)
import pandas as pd
import numpy as np
import json
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import GridSearchCV
import xgboost as xgb
from xgboost import XGBRegressor
from src.modeling.modelpipline import ModelPipline
from src.modeling.model_xgb import XGBoost
train = pd.read_csv("./data/processed0903/train.csv")
val_test = pd.read_csv("./data/processed0903/val.csv")
n_ = val_test.shape[0]
n_val = int(n_*.5)
val, test = val_test[:n_val], val_test[n_val:]
print(train.shape[0], val.shape[0], test.shape[0])
x_train, y_train = train.drop("y", axis=1), train[["y"]]
x_val, y_val = val.drop("y", axis=1), val[["y"]]
x_test, y_test = test.drop("y", axis=1), test[["y"]]
# +
def evaluate_xgb(model):
pred_tr = model.predict(xgb.DMatrix(x_train), ntree_limit=model.best_ntree_limit)
pred_va = model.predict(xgb.DMatrix(x_val), ntree_limit=model.best_ntree_limit)
pred_te = model.predict(xgb.DMatrix(x_test), ntree_limit=model.best_ntree_limit)
show(pred_tr, pred_va, pred_te)
def evaluate_sk(model):
pred_tr = model.predict(x_train_s).ravel()
pred_va = model.predict(x_val_s).ravel()
pred_te = model.predict(x_test_s).ravel()
show(pred_tr, pred_va, pred_te)
def show(pred_tr, pred_va, pred_te):
error_tr = mean_squared_error(pred_tr, y_train)
error_va = mean_squared_error(pred_va, y_val)
error_te = mean_squared_error(pred_te, y_test)
plt.figure(figsize=(15, 5))
plt.subplot(1, 3, 1)
plt.scatter(np.arange(len(pred_tr)).tolist(), pred_tr-y_train.values.ravel())
plt.plot(np.arange(len(pred_tr)).tolist(), np.zeros(len(pred_tr)).tolist(), color="r")
plt.title(f"Train: {error_tr:.3f}")
plt.subplot(1, 3, 2)
plt.scatter(np.arange(len(pred_va)).tolist(), pred_va-y_val.values.ravel())
plt.plot(np.arange(len(pred_va)).tolist(), np.zeros(len(pred_va)).tolist(), color="r")
plt.title(f"Val: {error_va:.3f}")
plt.subplot(1, 3, 3)
plt.scatter(np.arange(len(pred_te)).tolist(), pred_te-y_test.values.ravel())
plt.plot(np.arange(len(pred_te)).tolist(), np.zeros(len(pred_te)).tolist(), color="r")
plt.title(f"Test: {error_te:.3f}")
plt.show()
def show_coef(model, model1, model2):
df = pd.DataFrame({"coef_li": model.coef_[0], "columns": x_train.columns})
df = df.groupby("columns").max()
df1 = pd.DataFrame({"coef_ra": model1.coef_[0], "columns": x_train.columns})
df1 = df1.groupby("columns").max()
df2= pd.DataFrame({"coef_la": model2.coef_[0], "columns": x_train.columns})
df2 = df2.groupby("columns").max()
df_ = pd.merge(df, df1, how="outer", right_index=True, left_index=True)
df__ = pd.merge(df_, df2, how="outer", right_index=True, left_index=True)
return df__.style.background_gradient(cmap="Blues")
# -
s = StandardScaler()
x_train_s = s.fit_transform(x_train)
x_val_s = s.transform(x_val)
x_test_s = s.transform(x_test)
pipline = ModelPipline(model_type="regression")
pipline.predict(x_train_s, y_train, x_val_s, y_val)
# + jupyter={"outputs_hidden": true} tags=[]
xgb_ = XGBoost()
xgb_model = xgb_.fit(x_train, y_train, x_val, y_val, methods="regression")
# -
evaluate_xgb(xgb_model)
xgb_.show_feature_impotrance()
# + jupyter={"outputs_hidden": true} tags=[]
result = xgb_.parameter_chunning(num_rounds=50, n_trials=20)
with open("./models/result_xgb_parameter.json", "w") as f:
json.dump(result["best_parameters"], f)
# -
result
xgb_ = XGBoost(result["best_parameters"])
xgb_model_p = xgb_.fit(x_train, y_train, x_val, y_val, methods="regression")
xgb_.save()
evaluate_xgb(xgb_model_p)
li = LinearRegression().fit(x_train_s, y_train)
ri = Ridge(random_state=0).fit(x_train_s, y_train)
la = Lasso(random_state=0).fit(x_train_s, y_train)
evaluate_sk(li)
evaluate_sk(ri)
evaluate_sk(la)
# +
param = {
"alpha": [0.1, 0.4, 0.2, 0.7, 1.0],
"max_iter": [10, 100, 500, 700, 1000],
"tol": [0.0001, 0.001, 0.01, 0.1, 0.5]
}
grid = GridSearchCV(Ridge(random_state=0), param_grid=param, cv=5).fit(x_val_s, y_val)
ri_ = Ridge(**grid.best_params_, random_state=0).fit(x_test_s, y_test)
evaluate_sk(ri_)
# + jupyter={"outputs_hidden": true} tags=[]
param = {
"alpha": [0.1, 0.4, 0.2, 0.7, 1.0],
"max_iter": [10, 100, 500, 700, 1000],
"tol": [0.0001, 0.001, 0.01, 0.1, 0.5]
}
grid_la = GridSearchCV(Lasso(random_state=0), param_grid=param, cv=5).fit(x_val_s, y_val)
la_ = Ridge(**grid_la.best_params_, random_state=0).fit(x_test_s, y_test)
evaluate_sk(la_)
# -
show_coef(li, ri, ri_)
xgb_reg = XGBRegressor(**result["best_parameters"], random_state=0).fit(
    x_train_s, y_train,
    eval_set=[(x_train_s, y_train), (x_val_s, y_val)],
    early_stopping_rounds=30)
evaluate_sk(xgb_reg)
# ### Building the models for inference
# ---
dtrain = xgb.DMatrix(val_test.drop("y", axis=1), val_test[["y"]])
model_xgb = xgb.train(result["best_parameters"], dtrain)
model_ri = Ridge(**grid.best_params_, random_state=0).fit(s.transform(val_test.drop("y", axis=1)), val_test[["y"]])
model_la = Lasso(**grid_la.best_params_, random_state=0).fit(s.transform(val_test.drop("y", axis=1)), val_test[["y"]])
# ### Running inference
# ---
test = pd.read_csv("./data/processed0903/test.csv")
test.head()
# +
def inference(test, model):
prediction = []
for i in range(test.shape[0]):
data = test.iloc[i, :].values.reshape(1, -1)
data = pd.DataFrame(data, columns=test.columns)
data = data[val_test.drop("y", axis=1).columns]
Ddata = xgb.DMatrix(data)
predict = model.predict(Ddata).tolist()
if i != test.shape[0]-1:
test.loc[i+1, "before_1day_y"] = predict[0]
prediction.append(int(round(predict[0], 0)))
return prediction
def inference_sk(test, model):
prediction = []
for i in range(test.shape[0]):
test = test[val_test.drop("y", axis=1).columns]
data = test.iloc[i, :].values.reshape(1, -1)
data = s.transform(data)
predict = model.predict(data).tolist()[0]
if i != test.shape[0]-1:
test.loc[i+1, "before_1day_y"] = predict
prediction.append(int(round(predict, 0)))
return prediction
def submit(pred, name="xgb"):
sub = pd.read_csv("./data/raw/test.csv")
sub = sub[["datetime"]]
sub["y"] = pred
sub.to_csv(f"./data/submit/submit_0903_{name}.csv", index=False, header=False)
# -
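# The `inference` functions above feed each prediction back into the next row's
# `before_1day_y` lag feature, i.e. autoregressive one-step-ahead inference. A toy sketch of
# that feedback loop (`toy_model` is a hypothetical stand-in for the trained model):

```python
x = [1.0, 2.0, 3.0]
before_1day_y = [0.0, 0.0, 0.0]  # lag feature, updated as we go

def toy_model(xi, lag):  # hypothetical stand-in for model.predict
    return xi + lag

preds = []
for i in range(len(x)):
    p = toy_model(x[i], before_1day_y[i])
    if i + 1 < len(x):
        before_1day_y[i + 1] = p  # feed the prediction into the next row's lag
    preds.append(p)

print(preds)  # [1.0, 3.0, 6.0]
```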
predict_xgb = inference(test, model_xgb)
submit(predict_xgb, "xgb")
# Model summary: evaluation of a model trained on the validation and test-validation data with the pre-set parameters
# result: 21.6110
predict_ri = inference_sk(test, model_ri)
submit(predict_ri, "ridge")
# Model summary: same training data as above; the model is Ridge regression
# result: 11.2498
predict_la = inference_sk(test, model_la)
submit(predict_la, "lasso")
# + jupyter={"outputs_hidden": true} tags=[]
xgb_ = XGBoost()
xgb_test = xgb_.fit(x_val, y_val, x_test, y_test, methods="regression")
# + jupyter={"outputs_hidden": true} tags=[]
result = xgb_.parameter_chunning(num_rounds=50, n_trials=20)
xgb_ = XGBoost(result["best_parameters"])
xgb_test = xgb_.fit(x_val, y_val, x_test, y_test, methods="regression")
evaluate_xgb(xgb_test)
# -
predict_xgb = inference(test, xgb_test)
submit(predict_xgb, "xgb_valtest")
# Model summary: trained on the validation data only; parameters tuned on the test-validation data.
# result: 13.1894
| POC0903.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
df = pd.DataFrame({
'men': [80.2, 80.8, 57.5, 98, 50.5, 73.8, 77.4, 59.7, 77.9, 52.6],
'women': [57.9, 57.3, 72.8, 48.2, 59.5, 48.3, 61.2, 53.9, 53.9, 70.7]
})
df.head(3)
men_range = df['men'].max() - df['men'].min()
round(men_range,1)
men_irq = df['men'].quantile(q=0.75, interpolation='midpoint') - \
df['men'].quantile(q=0.25, interpolation='midpoint')
round(men_irq,1)
women_range = df['women'].max() - df['women'].min()
round(women_range, 1)
women_irq=df['women'].quantile(q=0.75, interpolation='midpoint') - \
df['women'].quantile(q=0.25, interpolation='midpoint')
round(women_irq,1)
men_std = df['men'].std()
round(men_std, 1)
women_std = df['women'].std()
round(women_std, 1)
men_var = df['men'].var()
round(men_var, 1)
women_var = df['women'].var()
round(women_var, 1)
# OUTLIERS
df = pd.DataFrame({
'men':[80.2,80.8,57.5,98,50.5,73.8,77.4,59.7,77.9,52.6],
'women':[57.9,57.3,72.8,48.2,59.5,48.3,61.2,53.9,53.9,70.7]
})
df.head(3)
men_q1 = df['men'].quantile(q=0.25, interpolation='midpoint')
men_q3 = df['men'].quantile(q=0.75, interpolation='midpoint')
men_irq = round(men_q3 - men_q1, 2)
women_q1 = df['women'].quantile(q=0.25, interpolation='midpoint')
women_q3 = df['women'].quantile(q=0.75, interpolation='midpoint')
women_irq=round(women_q3 - women_q1, 2)
men_irq, women_irq
df[df['men'] > men_q3 + men_irq * 1.5 ]['men']
df[df['men'] < men_q1 - men_irq * 1.5 ]['men']
df[df['women'] < women_q1 - women_irq * 1.5 ]['women']
df[df['women'] > women_q3 + women_irq * 1.5 ]['women']
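# The four fence checks above repeat the same pattern; a small helper can wrap it (a sketch
# using the same `midpoint` interpolation and the standard 1.5×IQR Tukey fences):

```python
import pandas as pd

def tukey_outliers(s: pd.Series, k: float = 1.5) -> pd.Series:
    """Return the values of s lying outside the [Q1 - k*IQR, Q3 + k*IQR] fences."""
    q1 = s.quantile(q=0.25, interpolation='midpoint')
    q3 = s.quantile(q=0.75, interpolation='midpoint')
    iqr = q3 - q1
    return s[(s < q1 - k * iqr) | (s > q3 + k * iqr)]

print(tukey_outliers(pd.Series([1, 2, 3, 4, 100])).tolist())  # [100]
```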
# Exercise 1.10.2
import pandas as pd
df = pd.DataFrame({
'year': [2007, 2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018],
'temp': [-4.7, -6.1, -5.5, -3.3, -7.1, -3.1, -5.2, -7.3, -12.1, -6.6, -5.9, -6.3]
})
df.head(3)
mean_t = df['temp'].mean()
round(mean_t,1)
median_t = df['temp'].median()
round(median_t,1)
std_t = df['temp'].std()
round(std_t, 1)
rng = df['temp'].max() - df['temp'].min()
round(rng, 1)
irq_t = df['temp'].quantile(q=0.75, interpolation='midpoint') - \
df['temp'].quantile(q=0.25, interpolation='midpoint')
round(irq_t,1)
q1_t = df['temp'].quantile(q=0.25, interpolation='midpoint')
q3_t = df['temp'].quantile(q=0.75, interpolation='midpoint')
irq_t = round(q3_t - q1_t, 1)
irq_t
df[df['temp'] > q3_t + irq_t * 1.5 ]['temp']
df[df['temp'] < q1_t - irq_t * 1.5 ]
| stats_and_basic/StatsAndBasic_1.9_1.10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dictionary_comprehension
students = {"태연", "진우", "정현", "하늘", "성진"}  # note: a set, so iteration order is arbitrary
for number,name in enumerate(students):
print("{}번의 이름은 {}입니다.".format(number,name))
student_dict={"{}번".format(number +1):name for number,name in enumerate(students)}
student_dict
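# The `number + 1` offset above can be avoided with `enumerate(..., start=1)`. A small sketch,
# using a list instead of the set above so that iteration order is deterministic:

```python
students = ["태연", "진우", "정현", "하늘", "성진"]
student_dict = {"{}번".format(i): name for i, name in enumerate(students, start=1)}
print(student_dict["1번"])  # 태연
```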
# # zip
scores=[85,92,78,90,100]
for x,y in zip(students,scores):
print(x,y)
score_dic={student:score for student, score in zip(students,scores)}
score_dic
| python/Dictionary_comprehension&zip.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Controversial Picks Calculator (by <NAME> and <NAME>)
#
# For now this script doesn't perform a full analysis. It needs to be run once on the set for the human player, then run again manually for a bot; finally, an R script named `controversial_picks_part2.R` finalizes the comparison.
# +
import pandas as pd
import numpy as np
import csv
import json
import re
import itertools
import matplotlib.pyplot as plt
import mpld3
# -
from draftsim_utils_ab import *
#mtgJSON = json.load(open('Allsets.json'))
with open('../../data/Allsets.json', 'r',encoding='utf-8') as json_data:
mtgJSON = json.load(json_data)
# Change the set abbrevation below to work with a different set
# Alternatives: XLN, DOM, M19, GRN, RNA
setName = 'WAR'
# +
thisSet = mtgJSON[setName]['cards']
thisSet = {getName(card) : card for card in thisSet} # a dict with names as indices for cards, for all cards in set
# +
dataFileNames = {
'XLN': '../../2018-02-23 Two days data XLN.csv',
'DOM': '../../2018-04-16 Dominiaria initial data-2.csv',
'M19': '../../2018-08-23 m19 drafts round 2.csv',
'GRN': '../../2018-10-05 GRN Draft Data 1.csv',
'RNA': '../../2019-01-22 RNA merged.csv',
'WAR': '../../2019-04-29 WAR prerelease leadup.csv'
}
draftData = pd.read_csv(dataFileNames[setName],
names = ['format', 'human', 'bot1', 'bot2', 'bot3', 'bot4', 'bot5', 'bot6', 'bot7'])
draftData.head()
# -
#print(thisSet.keys())
thisSet = {k.lower(): v for k, v in thisSet.items()}  # lowercase the keys
cardlist = list(thisSet.keys())
#print(cardlist)
# Main loop, but it's actually quite fast (~10 s).
# +
cardpicks = {card : [] for card in cardlist} # Pick order
cardpickOn = {card : [] for card in cardlist} # Pick order on color
draftCount = 0
player = 'human' # normally 'human', but can also be 'bot1' or another bot
for draft in draftData[player]:
draft = fixName(draft)
draft = draft.lower()
draft = draft.split(',')
draftCount += 1
#colorCount = {i : 0 for i in range(0,6)}
colorCount = [0,0,0,0,0,0,0]
for i in range(14):
try:
cardpicks[draft[i]].append(i+1)
bestColorSoFar = np.argmax(colorCount)
if bestColorSoFar==0 or getCardColor(thisSet[draft[i]]) == bestColorSoFar:
cardpickOn[draft[i]].append(i+1)
colorCount[getCardColor(thisSet[draft[i]])] += 1
except KeyError as e:
print(draft)
print(draftCount)
raise
#if any('karn_scion_of_urza' in x for x in draft):
# print(cardpicks['karn_scion_of_urza'][-1])
# #print(draft)
print("Analyzed ",draftCount," records.")
# +
for card in cardpicks:
if cardpicks[card] == []:
cardpicks[card].append(15)
for card in cardpickOn:
if cardpickOn[card] == []:
cardpickOn[card].append(15)
# +
#cardpicks['karn_scion_of_urza']
#cardpicks['karn_scion_of_urza']
#cardpicks['chance_for_glory']
goodCardName = 'spark_double'
# Controversial in GRN: ionize, chance_for_glory. Well known: leapfrog, silent_dart
# Histogram of cardpicks
fig, ax = plt.subplots()
n, bins, patches = plt.hist(cardpicks[goodCardName], 13, range=(0.5,13.5), density=True, facecolor=(0.2,0.5,1), alpha=0.75)
ax.set_title(goodCardName, size=10)
plt.xlabel('Pick order')
plt.ylabel('Frequency')
# -
cardpicksdf = pd.DataFrame({
'avg' : [np.mean(cardpicks[card]) for card in cardpicks],
'var' : [np.var(cardpicks[card]) for card in cardpicks],
'count' : [len(cardpicks[card]) for card in cardpicks],
'color' : [getCardColor(thisSet[card]) for card in cardpicks],
'rarity' :[thisSet[card]['rarity'] for card in cardpicks],
'legendary' : [1 if isLegendary(thisSet[card]) else 0 for card in cardpicks]
}, list(cardpicks.keys()))
#cardpicksdf.head()
cardpickOndf = pd.DataFrame({
'avg' : [np.mean(cardpickOn[card]) for card in cardpickOn],
'var' : [np.var(cardpickOn[card]) for card in cardpickOn],
'count' : [len(cardpickOn[card]) for card in cardpickOn],
'color' : [getCardColor(thisSet[card]) for card in cardpickOn],
'rarity' :[thisSet[card]['rarity'] for card in cardpickOn],
'legendary' : [1 if isLegendary(thisSet[card]) else 0 for card in cardpickOn]
}, list(cardpickOn.keys()))
if player=='human':
cardpicksdf.to_csv('../../data/controversial_cards_'+setName+'.csv', index_label="name")
#cardpickOndf.to_csv('../../data/controversial_cards_data_onColor.csv', index_label="name")
else:
cardpicksdf.to_csv('../../data/controversial_cards_'+setName+'_bot.csv', index_label="name")
#cardpickOndf.to_csv('../../data/controversial_cards_data_onColor_bot.csv', index_label="name")
cardpicksdf.sort_values(by=['var'], ascending=False).head(10)
#cardpicksdf.iloc[0]['rarity'] == 'Common'
def getHeights(picks):
heights = [0 for x in range(16)]
for num in picks:
heights[num] +=1
return heights
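`getHeights` above bins a list of pick numbers into a 16-slot histogram (index 0 stays unused, since picks run 1..15). A minimal self-contained check, restating the function:

```python
def getHeights(picks):
    # Histogram of pick numbers 1..15; index 0 stays unused.
    heights = [0 for _ in range(16)]
    for num in picks:
        heights[num] += 1
    return heights

# A card picked at pick 1 twice and at pick 3 once:
h = getHeights([1, 1, 3])
print(h[1], h[3])  # 2 1
```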
colorkey = ['0', 'M', 'W', 'U', 'B', 'G', 'R']
colorid = ['#9F9F9F', '#DC03FD', '#F3C750', '#0E68AB', '#150B00', '#00733E', '#D3202A']
mplcolors = ['gray', 'mediumvioletred', 'orange', 'dodgerblue', 'black', 'g', 'red']
mplrarity = ['crimson','orange','gray', 'black', 'w']
rarity= ['Mythic Rare', 'Rare', 'Uncommon', 'Common', 'Basic Land']
# +
fig, ax = plt.subplots()
scatter = ax.scatter(cardpicksdf['avg'],
cardpicksdf['var'],
c = [mplcolors[x] for x in cardpicksdf['color']]
)
ax.set_title("Card Picks avg vs var", size=20)
plt.xlabel('avg pick number')
plt.ylabel('var in pick number')
labels = list(cardpicksdf.index)
tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)
mpld3.plugins.connect(fig, tooltip)
#plt.savefig('avgvsvar.png')
#mpld3.display()
# +
fig, ax = plt.subplots()
scatter = ax.scatter(cardpicksdf['avg'],
cardpicksdf['var'],
c = [mplrarity[rarity.index(x)] for x in cardpicksdf['rarity']]
)
ax.set_title("Card Picks avg vs var by rarity", size=20)
plt.xlabel('avg pick number')
plt.ylabel('var in pick number')
labels = list(cardpicksdf.index)
tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)
mpld3.plugins.connect(fig, tooltip)
#plt.savefig('avgvsvarRarityDom.png')
#mpld3.display()
# +
fig, ax = plt.subplots()
#The top scatter is for legendary cards. The bottom is non legendary.
#You can comment one out for more clarity
scatter = ax.scatter(cardpicksdf.loc[cardpicksdf['legendary'] == 1]['avg'],
cardpicksdf.loc[cardpicksdf['legendary'] == 1]['var'],
c = [mplrarity[rarity.index(x)] for x in cardpicksdf['rarity']],
marker = 's'
)
scatter = ax.scatter(cardpicksdf.loc[cardpicksdf['legendary'] == 0]['avg'],
cardpicksdf.loc[cardpicksdf['legendary'] == 0]['var'],
c = [mplrarity[rarity.index(x)] for x in cardpicksdf['rarity']],
marker = 'o'
)
ax.set_title("Card Picks avg vs var by rarity", size=10)
plt.xlabel('avg pick number')
plt.ylabel('var in pick number')
labels = list(cardpicksdf.index)
tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)
mpld3.plugins.connect(fig, tooltip)
plt.savefig('avgvsvarRarityDomNormal.png')
mpld3.display()
| archive/Arseny/controversial picks.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
trace_file = "traces/trace_c3_v0"
# +
# Read in and pre-process trace file
with open(trace_file) as f:
content = f.readlines()
content = [x.strip() for x in content]
num_header_lines = 4
content = content[num_header_lines:]
def is_an_allocation(v):
return (v[0] == 'a')
def is_a_free(v):
return (v[0] == 'f')
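The predicates above assume trace lines of the form `a <id> <size>` for an allocation and `f <id>` for a free (an assumption inferred from how the fields are indexed in the loop below). A quick check on a tiny synthetic trace (Python 3 syntax here, as an aside to this Python 2 notebook):

```python
def is_an_allocation(v):
    return v[0] == 'a'

def is_a_free(v):
    return v[0] == 'f'

# Hypothetical trace lines: allocate id 0 with 64 bytes, then free id 0.
lines = ["a 0 64", "f 0"]
tokens = [line.split(" ") for line in lines]
print([is_an_allocation(t) for t in tokens])  # [True, False]
print([is_a_free(t) for t in tokens])         # [False, True]
```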
# Data wrangle into dicts and lists
allocations_dict = dict()
allocations_indices = dict()
freed_dict = dict()
freed_order = []
alloc_order = []
for i, v in enumerate(content):
v = v.split(" ")
if is_an_allocation(v):
allocations_indices[v[1]] = v[2]
alloc_order.append(int(v[2]))
if v[2] not in allocations_dict:
allocations_dict[v[2]] = 1
else:
allocations_dict[v[2]] += 1
elif is_a_free(v):
if v[1] not in freed_dict:
freed_dict[v[1]] = 'freed'
freed_order.append(int(v[1]))
# -
# print in order of most frequent allocations
for key, value in sorted(allocations_dict.iteritems(), key=lambda (k,v): (-v,k)):
print "%s: %s" % (key, value)
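The `iteritems` call and tuple-unpacking lambda above are Python 2 only. For reference, the same most-frequent-first ordering in Python 3 (unpack inside the key function instead):

```python
# Sort a {size: count} dict by descending count, then ascending size key.
allocations = {"64": 3, "16": 5, "128": 3}
ordered = sorted(allocations.items(), key=lambda kv: (-kv[1], kv[0]))
for key, value in ordered:
    print("%s: %s" % (key, value))
# 16: 5
# 128: 3
# 64: 3
```

Note that with string keys, ties are broken lexicographically ("128" sorts before "64"), which matches the Python 2 original.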
# +
# convert each key to an int, so the dicts can be sorted numerically
# (the keys were read from the trace file as strings)
for key in allocations_dict.keys():  # .keys() returns a list in Python 2, so mutating is safe
    val = allocations_dict[key]
    del allocations_dict[key]
    allocations_dict[int(key)] = val
for key in freed_dict.keys():
    val = freed_dict[key]
    del freed_dict[key]
    freed_dict[int(key)] = val
# +
# list form of allocation amounts and counts, and totals, since plays nicer with matplotlib
allocation_amounts = []
allocation_counts = []
allocation_totals = []
for key in sorted(allocations_dict.iterkeys()):
allocation_amounts.append(key)
allocation_counts.append(allocations_dict[key])
allocation_totals.append(int(allocations_dict[key]*key))
# +
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
N = len(allocation_amounts)
ind = np.arange(N) # the x locations for the groups
width = 1.0 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(allocation_amounts, allocation_counts, width, color='r')
plt.xlabel('Individual allocation size (bytes)')
plt.ylabel('# allocations')
plt.show()
# num times allocated vs. individual allocation size
# This plot shows you which types of allocations are most frequent
# +
ind = np.arange(N) # the x locations for the groups
width = 1.0 # the width of the bars
fig, ax = plt.subplots()
rects1 = ax.bar(allocation_amounts, allocation_totals, width, color='r')
plt.xlabel('Individual allocation size (bytes)')
plt.ylabel('Total memory size (bytes)')
plt.show()
# total memory size vs. individual allocation size
# This plot shows you which types of allocations are taking up the most memory
# +
# See if there was anything left unfreed
# # copy over dict
left_at_end_allocations_dict = dict()
for key in allocations_dict:
left_at_end_allocations_dict[str(key)] = allocations_dict[key]
# subtract
for key in freed_dict:
if str(key) in allocations_indices:
amount = allocations_indices[str(key)]
left_at_end_allocations_dict[str(amount)] -= 1
if left_at_end_allocations_dict[amount] == 0:
del left_at_end_allocations_dict[amount]
print left_at_end_allocations_dict
# +
# Calculate header overhead
HEADER_SIZE_BYTES = 32
print "Total # allocations:", sum(allocation_counts)
print "Total cumulative allocation size (bytes):", sum(allocation_totals)
print "Total size allocated for headers:", sum(allocation_counts)*HEADER_SIZE_BYTES
# This is how much is wasted on headers
print "Header overhead is (percent):", 100.0*sum(allocation_counts)*HEADER_SIZE_BYTES/sum(allocation_totals)
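With a fixed per-allocation header, the overhead percentage is `100 * n_allocations * header_size / total_payload_bytes`. A small worked example with made-up numbers (Python 3 prints here, as an aside):

```python
HEADER_SIZE_BYTES = 32

# Suppose 10 allocations totalling 4096 bytes of payload.
n_allocations = 10
total_bytes = 4096

header_bytes = n_allocations * HEADER_SIZE_BYTES       # 320
overhead_percent = 100.0 * header_bytes / total_bytes  # 7.8125
print(header_bytes, overhead_percent)
```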
# +
## Free order
# print freed_order
# This is relatively useless to look at, but it is what is provided natively in the trace file
# -
## Alloc order
print alloc_order
# This list shows, in order of when they were allocated, all of the allocation sizes
# +
## Freed order by amount
freed_order_by_amount = []
for i,v in enumerate(freed_order):
amount = alloc_order[int(freed_order[i])]
freed_order_by_amount.append(amount)
print freed_order_by_amount
# This is very useful to look at -- this shows the individual free amounts in order of when they were freed
# -
# Example, how much is allocated in the last 30 allocations?
print sum(alloc_order[-30:])
# +
# Find what percent of allocations are small, vs. large
SMALL_ALLOCATION_SIZE = 512
# Count how many unique allocation sizes are below the small-allocation threshold
unique_small_alloc_sizes = 0
for i in allocation_amounts:
if (i < SMALL_ALLOCATION_SIZE):
unique_small_alloc_sizes += 1
print sum(allocation_totals[0:unique_small_alloc_sizes]), "bytes are in small allocations"
print sum(allocation_totals[unique_small_alloc_sizes:]), "bytes are in large allocations"
ratio = sum(allocation_totals[unique_small_alloc_sizes:])/(sum(allocation_totals)*1.0)
print ratio*100.0, "percent of memory is in large allocations"
# +
# Plot cdf (cumulative distribution function) of allocation amounts
total_allocated = sum(allocation_totals)*1.0
cumulative_allocation_percent = []
cumulative_sum = 0.0
for i in allocation_totals:
cumulative_sum += i/total_allocated
cumulative_allocation_percent.append(cumulative_sum)
plt.plot(allocation_amounts, cumulative_allocation_percent, color='r')
plt.xlabel('Individual allocation size (bytes)')
plt.ylabel('Cumulative proportion of total allocated memory')
plt.show()
# Cumulative proportion of total allocated memory vs. individual allocation size
# This plot shows a richer view of how much of allocations are small vs. large
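The running-sum loop above is equivalent to a normalized cumulative sum, which NumPy can compute directly. A sketch with hypothetical totals:

```python
import numpy as np

# Hypothetical per-size byte totals, ordered by allocation size.
allocation_totals = np.array([100, 300, 600], dtype=float)

# CDF: cumulative sum normalized by the grand total.
cdf = np.cumsum(allocation_totals) / allocation_totals.sum()
print(cdf.tolist())  # [0.1, 0.4, 1.0]
```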
# +
# Filter for only large allocations
alloc_order_large_only = []
for i in alloc_order:
if i > SMALL_ALLOCATION_SIZE:
alloc_order_large_only.append(i)
freed_order_by_amount_large_only = []
for i in freed_order_by_amount:
if i > SMALL_ALLOCATION_SIZE:
freed_order_by_amount_large_only.append(i)
plt.plot(alloc_order_large_only)
plt.xlabel('Time')
plt.ylabel('Allocation size, large allocations only (bytes)')
plt.show()
plt.plot(freed_order_by_amount_large_only)
plt.xlabel('Time')
plt.ylabel('Freed size, large allocations only (bytes)')
plt.show()
print len(alloc_order_large_only), "allocations are large"
print min(alloc_order_large_only), "bytes is the smallest 'large' allocation"
# +
from pylab import plot,show
from numpy import vstack,array
from numpy.random import rand
from scipy.cluster.vq import kmeans,vq
## Just keeping this as an example of k-means
# data generation
data = vstack((rand(150,2) + array([.5,.5]),rand(150,2)))
# computing K-Means with K = 2 (2 clusters)
centroids,_ = kmeans(data,2)
# assign each sample to a cluster
idx,_ = vq(data,centroids)
# some plotting using numpy's logical indexing
plot(data[idx==0,0],data[idx==0,1],'ob',
data[idx==1,0],data[idx==1,1],'or')
plot(centroids[:,0],centroids[:,1],'sg',markersize=8)
show()
# +
## K-means on the large allocations
K = 8
alloc_order_large_only_floats = []
for i in alloc_order_large_only:
alloc_order_large_only_floats.append(float(i))
clusters,_ = kmeans(sorted(alloc_order_large_only_floats),K)
print clusters
plt.plot(sorted(clusters),'.')
| trace_visualizations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
from keras import backend as K
import utils.utils_iterator as utils_iterator
import utils.utils_wv as utils_wv
import utils.attention_with_context as attention_layers
from keras.models import Model
from keras.layers import Dense, Input, LSTM, Dropout, Bidirectional, concatenate, Activation, Conv1D, Flatten, MaxPooling1D, Concatenate
import utils.utils_statistics as utils_statistics
import utils.utils_optimization as utils_optimization
from keras.callbacks import ModelCheckpoint
from keras import layers
# +
checkpoint = ModelCheckpoint(
"models/best_model_rimkata_3.h5", monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list = [checkpoint]
sentence = Input(shape=(75,))
position = Input(shape=(75,))
embeddings = utils_wv.word2vec_embedding_layer()(sentence)
pos_embeddings = layers.Embedding(1,15, input_length=75, trainable=True)(position)
x = layers.concatenate([embeddings, pos_embeddings])
z = layers.Dropout(0.41)(x)
z = Bidirectional(LSTM(256, return_sequences=True))(z)
z = attention_layers.AttentionWithContext()(z)
z = Dense(1, activation="sigmoid")(z)
model = Model([sentence, position], z, name='sample_model')
model.summary()
model.compile(loss='binary_crossentropy',
optimizer="adam",
metrics=['accuracy'])
model.fit_generator(generator=utils_iterator.generate_data('../NLP_Preprocessing/data/clean_data/ontotext_train_labeled.txt', 512, input_size=75, with_positions=True),
steps_per_epoch=71560//(512),
validation_data=utils_iterator.generate_data('../NLP_Preprocessing/data/clean_data/ontotext_test_labeled.txt', 512, input_size=75, with_positions=True),
validation_steps=17892//(512), workers=8, epochs=45, callbacks=callbacks_list)
utils_statistics.calculate_recall(model, 0.5)
| NLP_Modelling/rim.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import lithosphere_prior
import sys
sys.path.append('../lithosphere_prior')
from lithosphere_prior_new import LithospherePrior
# Import rest
from sdss_stochastic import SDSS_stochastic
import utility as sds_util
import plot as sds_plt
import mikkel_tools.utility as mt_util
import numpy as np
import matplotlib.pyplot as plt
# +
LP = LithospherePrior(sat_height = 350)
LP.grid_even_spaced(grid_size=3.0)
grid_in = np.hstack((90-LP.grid_even_theta, LP.grid_even_phi))
print(LP)
# +
sph_d_list = list()
for i in np.arange(0,259200):
for j in np.arange(0,259200):
sph_d_list.append(sds_util.haversine(LP.a, LP.grid_even_phi[i,[0]], LP.grid_even_theta[i,[0]], LP.grid_even_phi[j,[0]], LP.grid_even_theta[j,[0]]))
# -
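The loop above relies on a `haversine` helper from `utility`, whose exact signature is project-specific. The underlying great-circle formula is standard, though; a self-contained sketch, assuming longitudes/latitudes in degrees and a sphere of radius `a` (the argument order here is an assumption, not necessarily `sds_util.haversine`'s):

```python
import math

def haversine(a, lon1, lat1, lon2, lat2):
    # Great-circle distance on a sphere of radius a; angles in degrees.
    lon1, lat1, lon2, lat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * a * math.asin(math.sqrt(h))

# A quarter of a great circle on a unit sphere:
print(haversine(1.0, 0.0, 0.0, 90.0, 0.0))  # ~1.5708 (pi/2)
```

Note also that the double loop above visits both (i, j) and (j, i); since the distance is symmetric, starting the inner range at `i` would halve the work.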
# CORE GRID
core = SDSS_stochastic(sim_type = "core")
core.grid(core.r_cmb, grid_in, calc_sph_d = True)
#core.grid(core.a, "gauss_leg", calc_sph_d = True)
core.data()
#core.condtab()
#core.semivar(model_lags = 'all', model = "exponential", max_dist = 4500, lag_length = 50, zero_nugget = True)
print(core)
print(core.lon)
print(core.lat)
plt.figure()
plt.imshow(core.data.reshape(60,120))
plt.colorbar()
# SAT GRID
sat = SDSS_stochastic(sim_type = "sat", N_grid = 3010)
sat.grid(sat.r_sat, "equal_area", calc_sph_d = False)
sat.data()
print(sat)
#%% PLOT GRID
sds_util.plot_cartopy_global(core.lat, core.lon, cmap = 'PuOr_r', title="Prior radial core information")
#%% PLOT SYNTHETIC DATA
sds_util.plot_cartopy_global(sat.lat, sat.lon, data=sat.data, cmap = 'PuOr_r', title="Synthetic radial satellite observations", scale_uneven = False)
sds_util.plot_cartopy_global(core.lat, core.lon, data=core.data, cmap = 'PuOr_r', title="Prior radial core information", scale_uneven = False)
#%% PLOT SEMI-VARIOGRAM
semivar = sdssim_semivar = {
    "semi-variogram LUT": core.sv_lut,
    "total data lags": core.lags,
    "total data sv": core.pics,
    "model data lags": core.lags_model,
    "model data sv": core.pics_model,
    "model names": core.model_names,
    "sv model y": core.sv_curve,
    "sv model x": core.lags_sv_curve,
    "sv model": core.model,
    "a": core.a,
    "C0": core.C0,
    "C1": core.C1,
    "C2": core.C2,
    "C3": core.C3,
    "n_lags": core.n_lags,
    "max_cloud": core.max_cloud,
    "sph_d_sorted": core.sph_d_sorted,
    "sort_d": core.sort_d,
}
sds_plt.plots('model_semi_variogram_new', semivar)
| save_nbs/showcase_new-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="kkHnGIfGf3-X" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 35} outputId="47913954-2182-4973-fbda-99db7f5b0009" executionInfo={"status": "ok", "timestamp": 1524755827748, "user_tz": -420, "elapsed": 7792, "user": {"displayName": "<NAME>\u00e2n", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "114654873584027723429"}}
import keras
from keras import layers, models
from keras.applications import VGG16
from keras import optimizers
from keras.callbacks import ModelCheckpoint
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing import image
from matplotlib import pyplot as plt
import os
# + id="4ZMcDwOrgaZf" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 53} outputId="2b9b56c0-c189-4434-90db-62faa04c02e5" executionInfo={"status": "ok", "timestamp": 1524755835500, "user_tz": -420, "elapsed": 5898, "user": {"displayName": "<NAME>\u00e2n", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "114654873584027723429"}}
# define parameters
IMAGE_SIZE = 150
BATCH_SIZE = 20
NUM_EPOCHS = 30
TOTAL_TRAIN_IMAGES = 2000
TOTAL_VALID_IMAGES = 1000
TOTAL_TEST_IMAGES = 1000
base_dir = 'drive/workspace/Cloud_Service/Google_Colab/DogVsCat_Kaggle'
model_dir = os.path.join(base_dir, 'model_checkpoint/vgg16_aug_30epochs')
dataset_dir = os.path.join(base_dir, 'Dataset')
train_dir = os.path.join(dataset_dir, 'train')
validation_dir = os.path.join(dataset_dir, 'valid')
test_dir = os.path.join(dataset_dir, 'test')
print(len(os.listdir(train_dir)))
print(len(os.listdir(model_dir)))
# + id="L6g-tDctga2M" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 892} outputId="4c378188-6730-4c6f-bc6f-f212f61624bd" executionInfo={"status": "ok", "timestamp": 1524755843441, "user_tz": -420, "elapsed": 3021, "user": {"displayName": "<NAME>\u00e2n", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "114654873584027723429"}}
conv_base = VGG16(include_top=False, weights='imagenet', input_shape=(IMAGE_SIZE, IMAGE_SIZE, 3))
conv_base.summary()
# + id="RyMYbbTpkBtF" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 290} outputId="e0108019-bfcf-411d-ca7a-e7dccda3b9d2" executionInfo={"status": "ok", "timestamp": 1524755850991, "user_tz": -420, "elapsed": 748, "user": {"displayName": "<NAME>\u00e2n", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "114654873584027723429"}}
# define total model
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
# + id="mZrd3vWzkCBF" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 290} outputId="b3ccb3f2-3f2b-4491-e296-f662e4116354" executionInfo={"status": "ok", "timestamp": 1524755979258, "user_tz": -420, "elapsed": 789, "user": {"displayName": "<NAME>\u00e2n", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "114654873584027723429"}}
# freezing vgg16 pretrained model
conv_base.trainable = False
model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(lr=2e-5), metrics=['acc'])
model.summary()
# + id="axtIcL5_lGB8" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 72} outputId="6909d393-26e1-44cc-9484-bbe4fb32ac57" executionInfo={"status": "ok", "timestamp": 1524756097758, "user_tz": -420, "elapsed": 23881, "user": {"displayName": "<NAME>\u00e2n", "photoUrl": "https://lh3.googleusercontent.com/a/default-user=s128", "userId": "114654873584027723429"}}
# create datagen for training with data augmentation
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
height_shift_range=0.2,
width_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest'
)
validation_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
class_mode='binary'
)
validation_generator = validation_datagen.flow_from_directory(
validation_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
class_mode='binary'
)
test_generator = test_datagen.flow_from_directory(
test_dir,
target_size=(IMAGE_SIZE, IMAGE_SIZE),
batch_size=BATCH_SIZE,
class_mode='binary'
)
# + id="NgC63dk7lGWU" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# define model checkpoint
checkpoint_path = os.path.join(model_dir, 'valid-acc-improvement-{epoch:02d}-{val_acc:.2f}.hdf5')
val_acc_checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
# + id="hHV9O4Z_oTOs" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}, "base_uri": "https://localhost:8080/", "height": 1019} outputId="7fadd9cc-6c0e-4ee3-a10f-ff8ebc667cfa"
# fit model
history = model.fit_generator(
train_generator,
steps_per_epoch=TOTAL_TRAIN_IMAGES // BATCH_SIZE,
epochs=NUM_EPOCHS,
callbacks=[val_acc_checkpoint],
validation_data=validation_generator,
validation_steps=TOTAL_VALID_IMAGES // BATCH_SIZE
)
# + id="tqfq8ILjxCoY" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# + id="9cXPe6usn1WM" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# save last model
model.save(os.path.join(model_dir, 'last_model_vgg16_aug.h5'))
# + id="rOphIVOwny_U" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# plot history
loss = history.history['loss']
val_loss = history.history['val_loss']
acc = history.history['acc']
val_acc = history.history['val_acc']
fig = plt.figure()
plt.plot(acc, label='acc')
plt.plot(val_acc, label='val_acc')
plt.title('Training and validation accuracy')
plt.legend()
fig.savefig(os.path.join(model_dir, 'Training_Validation_Accuracy.jpg'))
fig = plt.figure()
plt.plot(loss, label='loss')
plt.plot(val_loss, label='val_loss')
plt.title('Training and validation loss')
plt.legend()
fig.savefig(os.path.join(model_dir, 'Training_Validation_Loss.jpg'))
plt.show()
# + id="acZUY_6tqhcV" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# evalute model with test data
scores = model.evaluate_generator(test_generator, steps=TOTAL_TEST_IMAGES // BATCH_SIZE)
print(scores)
# + id="T-9k59LIiPks" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# load best validation model
best_val_model_path = os.path.join(model_dir, 'valid-acc-improvement-.hdf5')
model2 = models.load_model(best_val_model_path)
# + id="qDvTLOziiSJj" colab_type="code" colab={"autoexec": {"startup": false, "wait_interval": 0}}
# evaluate best model
scores2 = model2.evaluate_generator(test_generator, steps=TOTAL_TEST_IMAGES // BATCH_SIZE)
print(scores2)
| VGG16_pretrained_aug.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="bOChJSNXtC9g" colab_type="text"
# # Computer Vision
# + [markdown] id="OLIxEDq6VhvZ" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/logo.png" width=150>
#
# In this notebook we're going to cover the basics of computer vision using CNNs. So far we've used CNNs for text, but they originated in computer vision tasks.
#
#
#
# + [markdown] id="wKX2R_FT4hSQ" colab_type="text"
# <img src="https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/images/cnn_cv.png" width=650>
# + [markdown] id="WhPQxVDRvMWG" colab_type="text"
# # Configuration
# + id="GBOpAGnTvJ2L" colab_type="code" colab={}
config = {
"seed": 1234,
"cuda": True,
"data_url": "https://raw.githubusercontent.com/GokuMohandas/practicalAI/master/data/surnames.csv",
"data_dir": "cifar10",
"shuffle": True,
"train_size": 0.7,
"val_size": 0.15,
"test_size": 0.15,
"vectorizer_file": "vectorizer.json",
"model_file": "model.pth",
"save_dir": "experiments",
"num_epochs": 5,
"early_stopping_criteria": 5,
"learning_rate": 1e-3,
"batch_size": 128,
"fc": {
"hidden_dim": 100,
"dropout_p": 0.1
}
}
# + [markdown] id="ptkKF5Fov-SD" colab_type="text"
# # Set up
# + id="R_rteDFbvKPc" colab_type="code" colab={}
# Load PyTorch library
# !pip3 install torch
# + id="hVe2_gEuvKfr" colab_type="code" colab={}
import os
import json
import numpy as np
import time
import torch
import uuid
# + [markdown] id="W_2LVROYwFyL" colab_type="text"
# ### Components
# + id="uvBG7wQzvKx-" colab_type="code" colab={}
def set_seeds(seed, cuda):
""" Set Numpy and PyTorch seeds.
"""
np.random.seed(seed)
torch.manual_seed(seed)
if cuda:
torch.cuda.manual_seed_all(seed)
print ("==> 🌱 Set NumPy and PyTorch seeds.")
# + id="29_qNeT9wKTc" colab_type="code" colab={}
def generate_unique_id():
"""Generate a unique UUID
preceded by an epoch timestamp.
"""
timestamp = int(time.time())
unique_id = "{}_{}".format(timestamp, uuid.uuid1())
print ("==> 🔑 Generated unique id: {0}".format(unique_id))
return unique_id
# + id="nyjHX3DzwKZY" colab_type="code" colab={}
def create_dirs(dirpath):
"""Creating directories.
"""
if not os.path.exists(dirpath):
os.makedirs(dirpath)
print ("==> 📂 Created {0}".format(dirpath))
# + id="Q2nZw4grwKQS" colab_type="code" colab={}
def check_cuda(cuda):
"""Check to see if GPU is available.
"""
if not torch.cuda.is_available():
cuda = False
device = torch.device("cuda" if cuda else "cpu")
print ("==> 💻 Device: {0}".format(device))
return device
# + [markdown] id="lA0uwEUlwHjO" colab_type="text"
# ### Operations
# + id="gt8SiXgavK38" colab_type="code" outputId="1fe12a80-3724-44b6-f865-4acfaea2fdd2" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Set seeds for reproducability
set_seeds(seed=config["seed"], cuda=config["cuda"])
# + id="xAkZJMckvK1s" colab_type="code" outputId="f13930f3-ee38-44d9-bbc5-2722c2574270" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Generate unique experiment ID
config["experiment_id"] = generate_unique_id()
# + id="MQeZH6oqvKu5" colab_type="code" outputId="295fd1d0-9a83-42a0-d10e-fd660a2b4941" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Create experiment directory
config["save_dir"] = os.path.join(config["save_dir"], config["experiment_id"])
create_dirs(dirpath=config["save_dir"])
# + id="JZbd2RQjvKsD" colab_type="code" outputId="a263b3be-361b-4717-d2fc-1343b7821cca" colab={"base_uri": "https://localhost:8080/", "height": 68}
# Expand file paths to store components later
config["vectorizer_file"] = os.path.join(config["save_dir"], config["vectorizer_file"])
config["model_file"] = os.path.join(config["save_dir"], config["model_file"])
print ("Expanded filepaths: ")
print ("{}".format(config["vectorizer_file"]))
print ("{}".format(config["model_file"]))
# + id="TdlTftnCvKph" colab_type="code" colab={}
# Save config
config_fp = os.path.join(config["save_dir"], "config.json")
with open(config_fp, "w") as fp:
json.dump(config, fp)
# + id="qa0EQ8VRvKl0" colab_type="code" outputId="919c94f1-4ae7-4a59-c05e-7f921dd19479" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Check CUDA
config["device"] = check_cuda(cuda=config["cuda"])
# + [markdown] id="ZVtnfTpvwi4i" colab_type="text"
# # Load data
# + [markdown] id="z0qPZ5tRws38" colab_type="text"
# We are going to get CIFAR10 data, which contains images from ten unique classes. Each image has length 32, width 32 and three color channels (RGB). We are going to save these images to a directory with one subdirectory per class (the directory name is the class label).
# + id="FrfCnsZS2io4" colab_type="code" colab={}
import matplotlib.pyplot as plt
import pandas as pd
from PIL import Image
import tensorflow as tf
# + [markdown] id="muXdsetFx4QW" colab_type="text"
# ### Components
# + id="QYiuXPMax4YJ" colab_type="code" colab={}
def get_data():
"""Get CIFAR10 data.
"""
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
X = np.vstack([x_train, x_test])
y = np.vstack([y_train, y_test]).squeeze(1)
print ("==> 🌊 Downloading Cifar10 data using TensorFlow.")
return X, y
# + id="MD01k_gx1pxN" colab_type="code" colab={}
def create_class_dirs(data_dir, classes):
"""Create class directories.
"""
create_dirs(dirpath=data_dir)
for _class in classes.values():
classpath = os.path.join(data_dir, _class)
create_dirs(dirpath=classpath)
# + id="O4bthBeo281C" colab_type="code" colab={}
def visualize_samples(data_dir, classes):
"""Visualize sample images for
each class.
"""
# Visualize some samples
num_samples = len(classes)
for i, _class in enumerate(classes.values()):
for file in os.listdir(os.path.join(data_dir, _class)):
if file.endswith((".png", ".jpg", ".jpeg")):
plt.subplot(1, num_samples, i+1)
plt.title("{0}".format(_class))
img = Image.open(os.path.join(data_dir, _class, file))
plt.imshow(img)
plt.axis("off")
break
# + id="1sxd32MU4f8-" colab_type="code" colab={}
def img_to_array(fp):
"""Convert an image file to a NumPy array.
"""
img = Image.open(fp)
array = np.asarray(img, dtype="float32")
return array
# + id="Er9BP9Ch4nKc" colab_type="code" colab={}
def load_data(data_dir, classes):
"""Load data into Pandas DataFrame.
"""
# Load data from files
data = []
for i, _class in enumerate(classes.values()):
for file in os.listdir(os.path.join(data_dir, _class)):
if file.endswith((".png", ".jpg", ".jpeg")):
full_filepath = os.path.join(data_dir, _class, file)
data.append({"image": img_to_array(full_filepath), "category": _class})
# Load to Pandas DataFrame
df = pd.DataFrame(data)
print ("==> 🖼️ Image dimensions: {0}".format(df.image[0].shape))
print ("==> 🍣 Raw data:")
print (df.head())
return df
# + [markdown] id="FPDqxy_ax4wI" colab_type="text"
# ### Operations
# + id="KfbhWM7PvKdY" colab_type="code" outputId="05d92d98-55f3-466c-f79e-cdbfa8ff2fb8" colab={"base_uri": "https://localhost:8080/", "height": 102}
# Get CIFAR10 data
X, y = get_data()
print ("X:", X.shape)
print ("y:", y.shape)
# + id="UqEXvEZkvKaV" colab_type="code" colab={}
# Classes
classes = {0: 'plane', 1: 'car', 2: 'bird', 3: 'cat', 4: 'deer', 5: 'dog',
6: 'frog', 7: 'horse', 8: 'ship', 9: 'truck'}
# + id="Ohn-yT0KvKYW" colab_type="code" outputId="8f717fc6-7fa4-42f4-e881-83b5f55ac158" colab={"base_uri": "https://localhost:8080/", "height": 204}
# Create image directories
create_class_dirs(data_dir=config["data_dir"], classes=classes)
# + id="OMsgEBXwvKVN" colab_type="code" colab={}
# Save images for each class
for i, (image, label) in enumerate(zip(X, y)):
_class = classes[label]
im = Image.fromarray(image)
im.save(os.path.join(config["data_dir"], _class, "{0:02d}.png".format(i)))
# + id="o8vVS8advKTE" colab_type="code" outputId="f4fc86cc-f1d9-4837-8fd4-b78441957508" colab={"base_uri": "https://localhost:8080/", "height": 101}
# Visualize each class
visualize_samples(data_dir=config["data_dir"], classes=classes)
# + id="9oPqQShhvKMe" colab_type="code" outputId="a2c96dfa-f175-41fb-b882-47889cebb9af" colab={"base_uri": "https://localhost:8080/", "height": 153}
# Load data into DataFrame
df = load_data(data_dir=config["data_dir"], classes=classes)
# + [markdown] id="JNSAGHsI6JH2" colab_type="text"
# # Split data
# + [markdown] id="1AIMrapz6Pzy" colab_type="text"
# Split the data into train, validation and test sets where each split has similar class distributions.
# + id="WKeBROMj6WQL" colab_type="code" colab={}
import collections
# + [markdown] id="p2SLUqQp6S4-" colab_type="text"
# ### Components
# + id="0ySue6PBvKJ1" colab_type="code" colab={}
def split_data(df, shuffle, train_size, val_size, test_size):
"""Split the data into train/val/test splits.
"""
# Split by category
by_category = collections.defaultdict(list)
for _, row in df.iterrows():
by_category[row.category].append(row.to_dict())
print ("\n==> 🛍️ Categories:")
for category in by_category:
print ("{0}: {1}".format(category, len(by_category[category])))
# Create split data
final_list = []
for _, item_list in sorted(by_category.items()):
if shuffle:
np.random.shuffle(item_list)
n = len(item_list)
n_train = int(train_size*n)
n_val = int(val_size*n)
n_test = int(test_size*n)
# Give data point a split attribute
for item in item_list[:n_train]:
item['split'] = 'train'
for item in item_list[n_train:n_train+n_val]:
item['split'] = 'val'
for item in item_list[n_train+n_val:]:
item['split'] = 'test'
# Add to final list
final_list.extend(item_list)
# df with split datasets
split_df = pd.DataFrame(final_list)
print ("\n==> 🖖 Splits:")
print (split_df["split"].value_counts())
return split_df
# + [markdown] id="GcunpUPK6UlF" colab_type="text"
# ### Operations
# + id="yzznm188vKHN" colab_type="code" outputId="e2fa1789-65ce-4c78-f894-21a517821b2d" colab={"base_uri": "https://localhost:8080/", "height": 323}
# Split data
split_df = split_data(
df=df, shuffle=config["shuffle"],
train_size=config["train_size"],
val_size=config["val_size"],
test_size=config["test_size"])
# + [markdown] id="DlwbzIsL64yz" colab_type="text"
# # Vocabulary
# + [markdown] id="daGiSGEn7JZL" colab_type="text"
# Create vocabularies for the image classes.
# + [markdown] id="cOH5OuG07Ohv" colab_type="text"
# ### Components
# + id="hJzh14kmvKE6" colab_type="code" colab={}
class Vocabulary(object):
def __init__(self, token_to_idx=None, add_unk=True, unk_token="<UNK>"):
# Token to index
if token_to_idx is None:
token_to_idx = {}
self.token_to_idx = token_to_idx
# Index to token
self.idx_to_token = {idx: token \
for token, idx in self.token_to_idx.items()}
# Add unknown token
self.add_unk = add_unk
self.unk_token = unk_token
if self.add_unk:
self.unk_index = self.add_token(self.unk_token)
def to_serializable(self):
return {'token_to_idx': self.token_to_idx,
'add_unk': self.add_unk, 'unk_token': self.unk_token}
@classmethod
def from_serializable(cls, contents):
return cls(**contents)
def add_token(self, token):
if token in self.token_to_idx:
index = self.token_to_idx[token]
else:
index = len(self.token_to_idx)
self.token_to_idx[token] = index
self.idx_to_token[index] = token
return index
def add_tokens(self, tokens):
        return [self.add_token(token) for token in tokens]
def lookup_token(self, token):
if self.add_unk:
index = self.token_to_idx.get(token, self.unk_index)
else:
index = self.token_to_idx[token]
return index
def lookup_index(self, index):
if index not in self.idx_to_token:
raise KeyError("the index (%d) is not in the Vocabulary" % index)
return self.idx_to_token[index]
def __str__(self):
return "<Vocabulary(size=%d)>" % len(self)
def __len__(self):
return len(self.token_to_idx)
# + [markdown] id="W60Z1cda7Pv2" colab_type="text"
# ### Operations
# + id="NcufVLErvKCt" colab_type="code" outputId="9d40c5d2-96dd-4e60-aca7-9c24f3ccc244" colab={"base_uri": "https://localhost:8080/", "height": 85}
# Vocabulary instance
category_vocab = Vocabulary(add_unk=False)
for index, row in df.iterrows():
category_vocab.add_token(row.category)
print (category_vocab) # __str__
print (len(category_vocab)) # __len__
index = category_vocab.lookup_token("bird")
print (index)
print (category_vocab.lookup_index(index))
# + [markdown] id="ubECmrcqZIHI" colab_type="text"
# # Sequence vocabulary
# + [markdown] id="mUee35g37m8k" colab_type="text"
# We will also create a vocabulary object for the actual images. It will store the mean and standard deviation for each image channel (RGB), which we will use later for normalizing our images with the Vectorizer.
# + id="37pGFTBiZIbm" colab_type="code" colab={}
from collections import Counter
import string
# + [markdown] id="gbIKSQ5a7jsG" colab_type="text"
# ### Components
# + id="YvWL2JcgZPaw" colab_type="code" colab={}
class SequenceVocabulary(Vocabulary):
def __init__(self, train_means, train_stds):
self.train_means = train_means
self.train_stds = train_stds
def to_serializable(self):
contents = {'train_means': self.train_means,
'train_stds': self.train_stds}
return contents
@classmethod
def from_dataframe(cls, df):
train_data = df[df.split == "train"]
means = {0:[], 1:[], 2:[]}
stds = {0:[], 1:[], 2:[]}
for image in train_data.image:
for dim in range(3):
means[dim].append(np.mean(image[:, :, dim]))
stds[dim].append(np.std(image[:, :, dim]))
train_means = np.array((np.mean(means[0]), np.mean(means[1]),
np.mean(means[2])), dtype="float64").tolist()
train_stds = np.array((np.mean(stds[0]), np.mean(stds[1]),
np.mean(stds[2])), dtype="float64").tolist()
return cls(train_means, train_stds)
    def __str__(self):
        return "<SequenceVocabulary(train_means: {0}, train_stds: {1})>".format(
            self.train_means, self.train_stds)
# + [markdown] id="osFf9EGY71tT" colab_type="text"
# ### Operations
# + id="-ODlh2wcahqH" colab_type="code" outputId="df6e43f2-d6fb-41dd-c781-daa95565e07a" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Create SequenceVocabulary instance
image_vocab = SequenceVocabulary.from_dataframe(split_df)
print (image_vocab) # __str__
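As a sanity check, per-channel statistics can also be computed in a single vectorized call over a stacked batch. A sketch with random data (note this pools all pixels directly, whereas the class above averages per-image means, so the values can differ slightly):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((16, 32, 32, 3))   # toy batch of (N, H, W, C) images
means = images.mean(axis=(0, 1, 2))    # one mean per RGB channel
stds = images.std(axis=(0, 1, 2))      # one std per RGB channel
print(means.shape, stds.shape)         # (3,) (3,)
```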
# + [markdown] id="lUZKa0c9YD0V" colab_type="text"
# # Vectorizer
# + [markdown] id="-vlDb26u8BEy" colab_type="text"
# The vectorizer will normalize our images using the vocabulary.
# + [markdown] id="DyfX-utj8JTz" colab_type="text"
# ### Components
# + id="RyxHZLTFX5VC" colab_type="code" colab={}
class ImageVectorizer(object):
def __init__(self, image_vocab, category_vocab):
self.image_vocab = image_vocab
self.category_vocab = category_vocab
def vectorize(self, image):
# Avoid modifying the actual df
image = np.copy(image)
# Normalize
for dim in range(3):
mean = self.image_vocab.train_means[dim]
std = self.image_vocab.train_stds[dim]
image[:, :, dim] = ((image[:, :, dim] - mean) / std)
# Reshape from (32, 32, 3) to (3, 32, 32)
image = np.swapaxes(image, 0, 2)
image = np.swapaxes(image, 1, 2)
return image
@classmethod
def from_dataframe(cls, df):
# Create vocabularies
image_vocab = SequenceVocabulary.from_dataframe(df)
category_vocab = Vocabulary(add_unk=False)
for category in sorted(set(df.category)):
category_vocab.add_token(category)
return cls(image_vocab, category_vocab)
@classmethod
def from_serializable(cls, contents):
image_vocab = SequenceVocabulary.from_serializable(contents['image_vocab'])
category_vocab = Vocabulary.from_serializable(contents['category_vocab'])
return cls(image_vocab=image_vocab,
category_vocab=category_vocab)
def to_serializable(self):
return {'image_vocab': self.image_vocab.to_serializable(),
'category_vocab': self.category_vocab.to_serializable()}
# + [markdown] id="kTxCnjAb8Kzn" colab_type="text"
# ### Operations
# + id="yXWIhtFUiDUe" colab_type="code" outputId="07849d6e-b269-43ed-ad80-86e51d0b2dbd" colab={"base_uri": "https://localhost:8080/", "height": 85}
# Vectorizer instance
vectorizer = ImageVectorizer.from_dataframe(split_df)
print (vectorizer.image_vocab)
print (vectorizer.category_vocab)
print (vectorizer.category_vocab.token_to_idx)
image_vector = vectorizer.vectorize(split_df.iloc[0].image)
print (image_vector.shape)
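The two `np.swapaxes` calls inside `vectorize` amount to a single HWC-to-CHW transpose; a quick check of the equivalence:

```python
import numpy as np

img = np.arange(32 * 32 * 3).reshape(32, 32, 3)
a = np.swapaxes(np.swapaxes(img, 0, 2), 1, 2)  # two-step version used in vectorize
b = np.transpose(img, (2, 0, 1))               # single-call equivalent (HWC -> CHW)
print(a.shape, np.array_equal(a, b))           # (3, 32, 32) True
```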
# + [markdown] id="Xm7s9RPThF3c" colab_type="text"
# # Dataset
# + [markdown] id="22ptC-NZ9Ls1" colab_type="text"
# The `ImageDataset` class wraps the split DataFrame and returns vectorized image and category samples.
# + id="2mL4eEdNX5c1" colab_type="code" colab={}
import json
import random
import torch
from torch.utils.data import Dataset, DataLoader
# + [markdown] id="TtnGYqxk9Hxc" colab_type="text"
# ### Components
# + id="Dzegh16nX5fY" colab_type="code" colab={}
class ImageDataset(Dataset):
def __init__(self, df, vectorizer, infer=False):
self.df = df
self.vectorizer = vectorizer
# Data splits
if not infer:
self.train_df = self.df[self.df.split=='train']
self.train_size = len(self.train_df)
self.val_df = self.df[self.df.split=='val']
self.val_size = len(self.val_df)
self.test_df = self.df[self.df.split=='test']
self.test_size = len(self.test_df)
self.lookup_dict = {'train': (self.train_df, self.train_size),
'val': (self.val_df, self.val_size),
'test': (self.test_df, self.test_size)}
self.set_split('train')
# Class weights (for imbalances)
class_counts = df.category.value_counts().to_dict()
def sort_key(item):
return self.vectorizer.category_vocab.lookup_token(item[0])
sorted_counts = sorted(class_counts.items(), key=sort_key)
frequencies = [count for _, count in sorted_counts]
self.class_weights = 1.0 / torch.tensor(frequencies, dtype=torch.float32)
elif infer:
self.infer_df = self.df[self.df.split=="infer"]
self.infer_size = len(self.infer_df)
self.lookup_dict = {'infer': (self.infer_df, self.infer_size)}
self.set_split('infer')
@classmethod
def load_dataset_and_make_vectorizer(cls, df):
train_df = df[df.split=='train']
return cls(df, ImageVectorizer.from_dataframe(train_df))
@classmethod
def load_dataset_and_load_vectorizer(cls, df, vectorizer_filepath):
vectorizer = cls.load_vectorizer_only(vectorizer_filepath)
return cls(df, vectorizer)
    @staticmethod
    def load_vectorizer_only(vectorizer_filepath):
with open(vectorizer_filepath) as fp:
return ImageVectorizer.from_serializable(json.load(fp))
def save_vectorizer(self, vectorizer_filepath):
with open(vectorizer_filepath, "w") as fp:
json.dump(self.vectorizer.to_serializable(), fp)
def set_split(self, split="train"):
self.target_split = split
self.target_df, self.target_size = self.lookup_dict[split]
    def __str__(self):
        return "<Dataset(split={0}, size={1})>".format(
            self.target_split, self.target_size)
def __len__(self):
return self.target_size
def __getitem__(self, index):
row = self.target_df.iloc[index]
image_vector = self.vectorizer.vectorize(row.image)
category_index = self.vectorizer.category_vocab.lookup_token(row.category)
return {'image': image_vector,
'category': category_index}
def get_num_batches(self, batch_size):
return len(self) // batch_size
def generate_batches(self, batch_size, shuffle=True, drop_last=True, device="cpu"):
dataloader = DataLoader(dataset=self, batch_size=batch_size,
shuffle=shuffle, drop_last=drop_last)
for data_dict in dataloader:
out_data_dict = {}
            for name, tensor in data_dict.items():
                out_data_dict[name] = tensor.to(device)
yield out_data_dict
# + id="BMZbi-S29VQP" colab_type="code" colab={}
def sample(dataset):
"""Some sanity checks on the dataset.
"""
    sample_idx = random.randint(0, len(dataset) - 1)
sample = dataset[sample_idx]
print ("\n==> 🔢 Dataset:")
print ("Random sample: {0}".format(sample))
print ("Unvectorized category: {0}".format(
dataset.vectorizer.category_vocab.lookup_index(sample['category'])))
# + [markdown] id="DZKAha5f9VVG" colab_type="text"
# ### Operations
# + id="-sW6otUGX5iA" colab_type="code" outputId="87911deb-a842-447c-8c58-5cef1591070a" colab={"base_uri": "https://localhost:8080/", "height": 51}
# Load dataset and vectorizer
dataset = ImageDataset.load_dataset_and_make_vectorizer(split_df)
dataset.save_vectorizer(config["vectorizer_file"])
vectorizer = dataset.vectorizer
print (dataset.class_weights)
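The inverse-frequency weighting behind `class_weights` can be illustrated with hypothetical per-class counts; rarer classes receive proportionally larger weights in the loss:

```python
# Hypothetical per-class counts (toy numbers, not from the dataset)
counts = {"plane": 500, "bird": 100, "frog": 50}
weights = {c: 1.0 / n for c, n in counts.items()}
print(weights["frog"] / weights["plane"])  # 10.0
```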
# + id="_TKZjOyM9liD" colab_type="code" colab={}
# Sample checks
sample(dataset=dataset)
# + [markdown] id="SjPHp36i3G_i" colab_type="text"
# # Model
# + [markdown] id="KPgY8JDi9_GO" colab_type="text"
# Basic CNN architecture for image classification.
# + id="bPaf6Dy2X5ko" colab_type="code" colab={}
import torch.nn as nn
import torch.nn.functional as F
# + [markdown] id="OdZfoEKc-BBm" colab_type="text"
# ### Components
# + id="RKRPzX1nX5nN" colab_type="code" colab={}
class ImageModel(nn.Module):
def __init__(self, num_hidden_units, num_classes, dropout_p):
super(ImageModel, self).__init__()
self.conv1 = nn.Conv2d(3, 10, kernel_size=5) # input_channels:3, output_channels:10 (aka num filters)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv_dropout = nn.Dropout2d(dropout_p)
self.fc1 = nn.Linear(20*5*5, num_hidden_units)
self.dropout = nn.Dropout(dropout_p)
self.fc2 = nn.Linear(num_hidden_units, num_classes)
def forward(self, x, apply_softmax=False):
# Conv pool
z = self.conv1(x) # (N, 10, 28, 28)
z = F.max_pool2d(z, 2) # (N, 10, 14, 14)
z = F.relu(z)
# Conv pool
z = self.conv2(z) # (N, 20, 10, 10)
z = self.conv_dropout(z)
z = F.max_pool2d(z, 2) # (N, 20, 5, 5)
z = F.relu(z)
# Flatten
z = z.view(-1, 20*5*5)
# FC
z = F.relu(self.fc1(z))
z = self.dropout(z)
y_pred = self.fc2(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return y_pred
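The shape comments in `forward` follow from the standard conv/pool output-size formula; a small sketch reproducing them for a 32x32 input:

```python
def conv_out(size, kernel, stride=1, padding=0):
    """Spatial output size of a conv or pool layer."""
    return (size + 2 * padding - kernel) // stride + 1

s = conv_out(32, 5)           # conv1 (5x5, no padding) -> 28
s = conv_out(s, 2, stride=2)  # 2x2 max pool            -> 14
s = conv_out(s, 5)            # conv2 (5x5)             -> 10
s = conv_out(s, 2, stride=2)  # 2x2 max pool            -> 5
print(s)  # 5, hence the 20*5*5 flatten size
```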
# + id="lMKxHPxs-JlA" colab_type="code" colab={}
def initialize_model(config, vectorizer):
"""Initialize the model.
"""
print ("\n==> 🚀 Initializing model:")
model = ImageModel(
num_hidden_units=config["fc"]["hidden_dim"],
num_classes=len(vectorizer.category_vocab),
dropout_p=config["fc"]["dropout_p"])
print (model.named_modules)
return model
# + [markdown] id="wSZOM8Xw-I8O" colab_type="text"
# ### Operations
# + id="z5C6DfY3-QXr" colab_type="code" outputId="fd88fc88-41e7-4511-c08a-d40303d21bb8" colab={"base_uri": "https://localhost:8080/", "height": 187}
# Initializing model
model = initialize_model(config=config, vectorizer=vectorizer)
# + [markdown] id="jAiIbY9TBGef" colab_type="text"
# # Training
# + [markdown] id="w5yLzB4XG5aw" colab_type="text"
# Training operations for image classification.
# + id="bnMvjt9JX5p4" colab_type="code" colab={}
import torch.optim as optim
# + [markdown] id="kCkVhv54G6Ui" colab_type="text"
# ### Components
# + id="z8dxixyVHAi8" colab_type="code" colab={}
def compute_accuracy(y_pred, y_target):
_, y_pred_indices = y_pred.max(dim=1)
n_correct = torch.eq(y_pred_indices, y_target).sum().item()
return n_correct / len(y_pred_indices) * 100
# + id="FhDr6F8-HAyf" colab_type="code" colab={}
def update_train_state(model, train_state):
""" Update train state during training.
"""
# Verbose
print ("[EPOCH]: {0} | [LR]: {1} | [TRAIN LOSS]: {2:.2f} | [TRAIN ACC]: {3:.1f}% | [VAL LOSS]: {4:.2f} | [VAL ACC]: {5:.1f}%".format(
train_state['epoch_index'], train_state['learning_rate'],
train_state['train_loss'][-1], train_state['train_acc'][-1],
train_state['val_loss'][-1], train_state['val_acc'][-1]))
# Save one model at least
if train_state['epoch_index'] == 0:
torch.save(model.state_dict(), train_state['model_filename'])
train_state['stop_early'] = False
# Save model if performance improved
elif train_state['epoch_index'] >= 1:
loss_tm1, loss_t = train_state['val_loss'][-2:]
# If loss worsened
if loss_t >= train_state['early_stopping_best_val']:
# Update step
train_state['early_stopping_step'] += 1
        # Loss decreased
        else:
            # Save the best model and record the new best loss
            if loss_t < train_state['early_stopping_best_val']:
                torch.save(model.state_dict(), train_state['model_filename'])
                train_state['early_stopping_best_val'] = loss_t
            # Reset early stopping step
            train_state['early_stopping_step'] = 0
# Stop early ?
train_state['stop_early'] = train_state['early_stopping_step'] \
>= train_state['early_stopping_criteria']
return train_state
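The patience-based early stopping above boils down to the following pattern (a simplified stand-alone sketch, not the exact notebook code):

```python
def early_stop(val_losses, patience=3):
    """Return True once val loss fails to improve for `patience` consecutive epochs."""
    best, step = float("inf"), 0
    for loss in val_losses:
        if loss < best:
            best, step = loss, 0   # improvement: record new best, reset counter
        else:
            step += 1              # no improvement this epoch
        if step >= patience:
            return True
    return False

print(early_stop([0.9, 0.8, 0.85, 0.86, 0.87]))  # True
```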
# + id="Yw8Ek3fvHAwT" colab_type="code" colab={}
class Trainer(object):
def __init__(self, dataset, model, model_file, device, shuffle,
num_epochs, batch_size, learning_rate, early_stopping_criteria):
self.dataset = dataset
self.class_weights = dataset.class_weights.to(device)
self.model = model.to(device)
self.device = device
self.shuffle = shuffle
self.num_epochs = num_epochs
self.batch_size = batch_size
self.loss_func = nn.CrossEntropyLoss(self.class_weights)
self.optimizer = optim.Adam(self.model.parameters(), lr=learning_rate)
self.scheduler = optim.lr_scheduler.ReduceLROnPlateau(
optimizer=self.optimizer, mode='min', factor=0.5, patience=1)
self.train_state = {
'done_training': False,
'stop_early': False,
'early_stopping_step': 0,
'early_stopping_best_val': 1e8,
'early_stopping_criteria': early_stopping_criteria,
'learning_rate': learning_rate,
'epoch_index': 0,
'train_loss': [],
'train_acc': [],
'val_loss': [],
'val_acc': [],
'test_loss': -1,
'test_acc': -1,
'model_filename': model_file}
def run_train_loop(self):
print ("==> 🏋 Training:")
for epoch_index in range(self.num_epochs):
self.train_state['epoch_index'] = epoch_index
# Iterate over train dataset
# initialize batch generator, set loss and acc to 0, set train mode on
self.dataset.set_split('train')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, shuffle=self.shuffle,
device=self.device)
running_loss = 0.0
running_acc = 0.0
self.model.train()
for batch_index, batch_dict in enumerate(batch_generator):
# zero the gradients
self.optimizer.zero_grad()
# compute the output
y_pred = self.model(batch_dict['image'])
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute gradients using loss
loss.backward()
# use optimizer to take a gradient step
self.optimizer.step()
# compute the accuracy
acc_t = compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['train_loss'].append(running_loss)
self.train_state['train_acc'].append(running_acc)
# Iterate over val dataset
# initialize batch generator, set loss and acc to 0; set eval mode on
self.dataset.set_split('val')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, shuffle=self.shuffle, device=self.device)
running_loss = 0.
running_acc = 0.
self.model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = self.model(batch_dict['image'])
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.to("cpu").item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['val_loss'].append(running_loss)
self.train_state['val_acc'].append(running_acc)
self.train_state = update_train_state(model=self.model, train_state=self.train_state)
self.scheduler.step(self.train_state['val_loss'][-1])
if self.train_state['stop_early']:
break
def run_test_loop(self):
# initialize batch generator, set loss and acc to 0; set eval mode on
self.dataset.set_split('test')
batch_generator = self.dataset.generate_batches(
batch_size=self.batch_size, shuffle=self.shuffle, device=self.device)
running_loss = 0.0
running_acc = 0.0
self.model.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = self.model(batch_dict['image'])
# compute the loss
loss = self.loss_func(y_pred, batch_dict['category'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = compute_accuracy(y_pred, batch_dict['category'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
self.train_state['test_loss'] = running_loss
self.train_state['test_acc'] = running_acc
# Verbose
print ("==> 💯 Test performance:")
print ("Test loss: {0:.2f}".format(self.train_state['test_loss']))
print ("Test Accuracy: {0:.1f}%".format(self.train_state['test_acc']))
# + id="MVqHg8rqHAt5" colab_type="code" colab={}
def plot_performance(train_state, save_dir, show_plot=True):
""" Plot loss and accuracy.
"""
# Figure size
plt.figure(figsize=(15,5))
# Plot Loss
plt.subplot(1, 2, 1)
plt.title("Loss")
plt.plot(train_state["train_loss"], label="train")
plt.plot(train_state["val_loss"], label="val")
plt.legend(loc='upper right')
# Plot Accuracy
plt.subplot(1, 2, 2)
plt.title("Accuracy")
plt.plot(train_state["train_acc"], label="train")
plt.plot(train_state["val_acc"], label="val")
plt.legend(loc='lower right')
# Save figure
plt.savefig(os.path.join(save_dir, "performance.png"))
# Show plots
if show_plot:
print ("==> 📈 Metric plots:")
plt.show()
# + id="H_dN8IRRHArg" colab_type="code" colab={}
def save_train_state(train_state, save_dir):
train_state["done_training"] = True
with open(os.path.join(save_dir, "train_state.json"), "w") as fp:
json.dump(train_state, fp)
print ("==> ✅ Training complete!")
# + [markdown] id="7yLIizWvI-KV" colab_type="text"
# ### Operations
# + id="EtJhoyL2HApb" colab_type="code" outputId="732330ab-2990-4c03-84e5-59b278155ccd" colab={"base_uri": "https://localhost:8080/", "height": 119}
# Training
trainer = Trainer(
dataset=dataset, model=model, model_file=config["model_file"],
device=config["device"], shuffle=config["shuffle"],
num_epochs=config["num_epochs"], batch_size=config["batch_size"],
learning_rate=config["learning_rate"],
early_stopping_criteria=config["early_stopping_criteria"])
trainer.run_train_loop()
# + id="2G6I5YWtt_Ea" colab_type="code" outputId="c3fc5823-0f0a-43cc-d3fc-9cd067852734" colab={"base_uri": "https://localhost:8080/", "height": 352}
# Plot performance
plot_performance(train_state=trainer.train_state,
save_dir=config["save_dir"], show_plot=True)
# + id="Iz3G5eaTS04m" colab_type="code" outputId="fbb86f93-da9f-4691-cc83-98bfd5a04c13" colab={"base_uri": "https://localhost:8080/", "height": 68}
# Test performance
trainer.run_test_loop()
# + id="kqMzljfpS09F" colab_type="code" outputId="4a2523aa-c54b-4fee-8f66-37acb5577457" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Save all results
save_train_state(train_state=trainer.train_state, save_dir=config["save_dir"])
# + [markdown] id="1fMNOVJUYvhs" colab_type="text"
# ~60% test performance for our CIFAR10 dataset is not bad but we can do way better.
# + [markdown] id="P9DcE8tHYvfX" colab_type="text"
# # Transfer learning
# + [markdown] id="EclYytw6Swh-" colab_type="text"
# In this section, we're going to use a pretrained model that performs very well on a different dataset. We're going to take the architecture and the initial convolutional weights from the model to use on our data. We will freeze the initial convolutional weights and fine tune the later convolutional and fully-connected layers.
#
# Transfer learning works here because the initial convolution layers act as excellent feature extractors for common spatial features that are shared across images regardless of their class. We're going to leverage these large, pretrained models' feature extractors for our own dataset.
# + id="D8w9CEDeONTY" colab_type="code" colab={}
# !pip install torchvision
# + id="mxl4PEfqTMwm" colab_type="code" colab={}
from torchvision import models
# + id="GjufXPDJTB7W" colab_type="code" outputId="7eb92456-10c4-416d-a658-7db3d5e9ad38" colab={"base_uri": "https://localhost:8080/", "height": 54}
model_names = sorted(name for name in models.__dict__
if name.islower() and not name.startswith("__")
and callable(models.__dict__[name]))
print (model_names)
# + id="daJN4BSWS016" colab_type="code" colab={}
model_name = 'vgg19_bn'
vgg_19bn = models.__dict__[model_name](pretrained=True) # Set false to train from scratch
print (vgg_19bn.named_parameters)
# + [markdown] id="XBudDGFz1j87" colab_type="text"
# The VGG model we chose has a `features` and a `classifier` component. The `features` component is composed of convolution and pooling layers, which act as feature extractors. The `classifier` component is composed of fully connected layers. We're going to freeze most of the `features` component and design our own FC layers for our CIFAR10 task. You can access the default code for all models at `/usr/local/lib/python3.6/dist-packages/torchvision/models` if you prefer cloning and modifying that instead.
# + [markdown] id="5rz1g7FrOlQg" colab_type="text"
# ### Components
# + id="YmzQIXsd59Rj" colab_type="code" colab={}
class ImageModel(nn.Module):
def __init__(self, feature_extractor, num_hidden_units,
num_classes, dropout_p):
super(ImageModel, self).__init__()
# Pretrained feature extractor
self.feature_extractor = feature_extractor
# FC weights
        self.classifier = nn.Sequential(
            nn.Linear(512, 250, bias=True),
            nn.ReLU(),
            nn.Dropout(dropout_p),
            nn.Linear(250, 100, bias=True),
            nn.ReLU(),
            nn.Dropout(dropout_p),
            nn.Linear(100, num_classes, bias=True),
        )
def forward(self, x, apply_softmax=False):
# Feature extractor
z = self.feature_extractor(x)
z = z.view(x.size(0), -1)
# FC
y_pred = self.classifier(z)
if apply_softmax:
y_pred = F.softmax(y_pred, dim=1)
return y_pred
# + id="0uVyLE91OnFq" colab_type="code" colab={}
def initialize_model(config, vectorizer, feature_extractor):
"""Initialize the model.
"""
print ("\n==> 🚀 Initializing model:")
model = ImageModel(
feature_extractor=feature_extractor,
num_hidden_units=config["fc"]["hidden_dim"],
num_classes=len(vectorizer.category_vocab),
dropout_p=config["fc"]["dropout_p"])
print (model.named_modules)
return model
# + [markdown] id="C75F1xbdOnfP" colab_type="text"
# ### Operations
# + id="czo1bGBwXKNj" colab_type="code" colab={}
# Initializing model
model = initialize_model(config=config, vectorizer=vectorizer,
feature_extractor=vgg_19bn.features)
# + id="hZybxGHoDTwQ" colab_type="code" colab={}
# Finetune last few conv layers and FC layers
for i, param in enumerate(model.feature_extractor.parameters()):
if i < 36:
param.requires_grad = False
else:
param.requires_grad = True
# + id="GTbYKussTvB2" colab_type="code" outputId="04e2b9cf-0870-4aac-ec10-fcdf5b8b34a3" colab={"base_uri": "https://localhost:8080/", "height": 119}
# Training
trainer = Trainer(
dataset=dataset, model=model, model_file=config["model_file"],
device=config["device"], shuffle=config["shuffle"],
num_epochs=config["num_epochs"], batch_size=config["batch_size"],
learning_rate=config["learning_rate"],
early_stopping_criteria=config["early_stopping_criteria"])
trainer.run_train_loop()
# + id="NCLCnQgATvMj" colab_type="code" outputId="c27278e4-3cf1-4401-ff60-03f0a8b4c3cb" colab={"base_uri": "https://localhost:8080/", "height": 352}
# Plot performance
plot_performance(train_state=trainer.train_state,
save_dir=config["save_dir"], show_plot=True)
# + id="Hjn0HJVoTvJ0" colab_type="code" outputId="92d8644d-f51a-4d82-bff0-78aba33131c3" colab={"base_uri": "https://localhost:8080/", "height": 68}
# Test performance
trainer.run_test_loop()
# + id="ZQVrGTNNTvH0" colab_type="code" outputId="44a24c37-2e22-47af-974d-3fc197b6427a" colab={"base_uri": "https://localhost:8080/", "height": 34}
# Save all results
save_train_state(train_state=trainer.train_state, save_dir=config["save_dir"])
# + [markdown] id="7CL689FebJhf" colab_type="text"
# Much better performance! If you let it train long enough, we'll actually reach ~95% accuracy :)
# + [markdown] id="02iDXCtiYo5K" colab_type="text"
# ## Inference
# + id="cVT--tAvnOu7" colab_type="code" colab={}
from pylab import rcParams
rcParams['figure.figsize'] = 2, 2
# + [markdown] id="KFEGIgA1NKJ-" colab_type="text"
# ### Components
# + id="5IH9qjDpNW8m" colab_type="code" colab={}
class Inference(object):
def __init__(self, model, vectorizer, device="cpu"):
self.model = model.to(device)
self.vectorizer = vectorizer
self.device = device
def predict_category(self, dataset):
# Batch generator
batch_generator = dataset.generate_batches(
batch_size=len(dataset), shuffle=False, device=self.device)
self.model.eval()
# Predict
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = self.model(batch_dict['image'], apply_softmax=True)
# Top k categories
y_prob, indices = torch.topk(y_pred, k=len(self.vectorizer.category_vocab))
probabilities = y_prob.detach().to('cpu').numpy()[0]
indices = indices.detach().to('cpu').numpy()[0]
results = []
for probability, index in zip(probabilities, indices):
category = self.vectorizer.category_vocab.lookup_index(index)
results.append({'category': category, 'probability': probability})
return results
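The softmax and top-k step inside `predict_category` can be sketched in NumPy with toy logits:

```python
import numpy as np

def softmax(logits):
    z = np.exp(logits - np.max(logits))  # subtract max for numerical stability
    return z / z.sum()

logits = np.array([2.0, 1.0, 0.1])       # toy logits
probs = softmax(logits)
order = np.argsort(probs)[::-1]          # indices from most to least probable
print(order[0], round(float(probs.sum()), 6))  # 0 1.0
```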
# + [markdown] id="uFJ7Dmn4NLm7" colab_type="text"
# ### Operations
# + id="OtKJsv4HNLsN" colab_type="code" colab={}
# Load vectorizer
with open(config["vectorizer_file"]) as fp:
vectorizer = ImageVectorizer.from_serializable(json.load(fp))
# + id="iAeHP-FvP26o" colab_type="code" colab={}
# Load the model
model = initialize_model(config=config, vectorizer=vectorizer, feature_extractor=vgg_19bn.features)
model.load_state_dict(torch.load(config["model_file"]))
# + id="OlOlBXp-mWYT" colab_type="code" colab={}
# Initialize
inference = Inference(model=model, vectorizer=vectorizer, device=config["device"])
# + id="6NwjskFsysKb" colab_type="code" outputId="00e7d3c2-b011-4e62-df7c-8e288040155a" colab={"base_uri": "https://localhost:8080/", "height": 176}
# Get a sample
sample = split_df[split_df.split=="test"].iloc[0]
plt.imshow(sample.image)
plt.axis("off")
print ("Actual:", sample.category)
# + id="cp-rN2eeybae" colab_type="code" outputId="a88e448d-63f5-422c-ef72-b0d724b7cdd9" colab={"base_uri": "https://localhost:8080/", "height": 187}
# Inference
category = list(vectorizer.category_vocab.token_to_idx.keys())[0] # random filler category
infer_df = pd.DataFrame([[sample.image, category, "infer"]], columns=['image', 'category', 'split'])
infer_dataset = ImageDataset(df=infer_df, vectorizer=vectorizer, infer=True)
results = inference.predict_category(dataset=infer_dataset)
results
# + [markdown] id="1YHneO3SStOp" colab_type="text"
# # TODO
# + [markdown] id="gGHaKTe1SuEk" colab_type="text"
# - segmentation
# - interpretability via activation maps
# - processing images of different sizes
# notebooks/15_Computer_Vision.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.8 64-bit (''aida_pro'': conda)'
# metadata:
# interpreter:
# hash: 6d29e069e955b1b6a88d9229fe76a20b44327016bc4846623cf15be91831d8a6
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from ast import literal_eval
# #!pip install pandas pyarrow
import pyarrow
import glob
import pyarrow.parquet as pq
movies = pq.read_table(source="../data/interim/df_cleaned_v1.gzip").to_pandas()
movies
# -
movies.genre_id.value_counts(ascending=True)
# +
from sklearn.model_selection import train_test_split
y = movies.genre_id
traincomple, test = train_test_split(movies, test_size=0.20, random_state=42, shuffle=True)
# -
traincomple.shape
test.shape
# +
# traincomple is split again to carve out a validation set; with test_size=0.20 this yields roughly 64% train, 16% validation, 20% test overall
train_, val = train_test_split(traincomple, test_size=0.20, random_state=42, shuffle=True)
# -
train_.shape
val.shape
# + tags=[]
from ast import literal_eval
#train_.genre_ids2=train_.genre_ids2.apply(literal_eval)
def testme(x):
    # Strip the quotes from a stringified genre list and split it into genre names
    tmp = "".join(x).replace("'", "")
    return [y for y in tmp.split(",")]
train_.genre_ids2=train_.genre_ids2.apply(testme)
# -
train_.head(5)
# +
def get_path_colab(x):
filename_=x.rsplit('/', 1)[-1]
return filename_
train_['path_image']=train_['poster_url'].apply(lambda x: get_path_colab(x))
train_explode=train_.explode('genre_ids2')
train_explode.shape
# -
from sklearn.preprocessing import OneHotEncoder
ohe= OneHotEncoder(sparse=False, handle_unknown="ignore").fit(train_explode[['genre_ids2']])
ohe_class=pd.DataFrame(ohe.transform(train_explode[['genre_ids2']]),columns=ohe.get_feature_names())
train_explode.reset_index(drop=True, inplace=True)
movies_class = pd.concat([train_explode, ohe_class], axis=1)
movies_class.columns
# +
#movies_class.isnull().sum()
# -
genre_col=['x0_Action', 'x0_Adventure',
'x0_Animation', 'x0_Comedy', 'x0_Crime', 'x0_Documentary', 'x0_Drama',
'x0_Family', 'x0_Fantasy', 'x0_History', 'x0_Horror', 'x0_Music',
'x0_Mystery', 'x0_Romance', 'x0_Science Fiction', 'x0_TV Movie',
'x0_Thriller', 'x0_War', 'x0_Western' ]
b1=movies_class[['id']+genre_col].groupby("id").sum().reset_index()
b1.head(5)
b2=pd.merge(train_, b1 ,on="id")
b2.shape
#180185
b2.columns
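The explode, one-hot, groupby-sum pipeline above builds a multi-hot genre vector per movie; the same idea in plain Python with toy rows:

```python
# Toy (id, genre list) rows standing in for the exploded DataFrame
rows = [(1, ["Action", "Comedy"]), (2, ["Drama"])]
genres = sorted({g for _, gs in rows for g in gs})               # one-hot column order
multi_hot = {i: [int(g in gs) for g in genres] for i, gs in rows}
print(genres, multi_hot[1])  # ['Action', 'Comedy', 'Drama'] [1, 1, 0]
```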
# +
################### test###################
test.genre_ids2=test.genre_ids2.apply(testme)
test['path_image']=test['poster_url'].apply(lambda x: get_path_colab(x))
test_explode=test.explode('genre_ids2')
ohe= OneHotEncoder(sparse=False, handle_unknown="ignore").fit(test_explode[['genre_ids2']])
ohe_class=pd.DataFrame(ohe.transform(test_explode[['genre_ids2']]),columns=ohe.get_feature_names())
test_explode.reset_index(drop=True, inplace=True)
movies_class_test = pd.concat([test_explode, ohe_class], axis=1)
movies_class_test
b1_test=movies_class_test[['id']+genre_col].groupby("id").sum().reset_index()
b2_test=pd.merge(test, b1_test ,on="id")
b2_test.shape
#56309
# -
b2_test
# +
######## Validation ########
val.genre_ids2=val.genre_ids2.apply(testme)
val['path_image']=val['poster_url'].apply(lambda x: get_path_colab(x))
val_explode=val.explode('genre_ids2')
ohe= OneHotEncoder(sparse=False, handle_unknown="ignore").fit(val_explode[['genre_ids2']])
ohe_class=pd.DataFrame(ohe.transform(val_explode[['genre_ids2']]),columns=ohe.get_feature_names())
val_explode.reset_index(drop=True, inplace=True)
movies_class_val = pd.concat([val_explode, ohe_class], axis=1)
movies_class_val
b1_val=movies_class_val[['id']+genre_col].groupby("id").sum().reset_index()
b2_val=pd.merge(val, b1_val ,on="id")
b2_val.shape
#45047
# -
b2.to_parquet("../data/interim/df_train_v2.gzip", compression='gzip')
b2_test.to_parquet("../data/interim/df_test_v2.gzip", compression='gzip')
b2_val.to_parquet("../data/interim/df_val_v2.gzip", compression='gzip')
# notebooks/01_analyse_raw_test.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tf27
# language: python
# name: tf27
# ---
# # 01/20
# # %matplotlib inline
description = 'train_mobilenet_from_pretrain_v3_add_valence_arousal_batch32'
# +
import os
import numpy as np
from sklearn.preprocessing import LabelEncoder
import matplotlib.pyplot as plt
import cv2
from PIL import Image
from random import shuffle
import math
import pandas as pd
import pickle
from sklearn.svm import SVC,LinearSVC
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier,ExtraTreesClassifier
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
#from scipy.misc import imread, imresize
# -
import tqdm
import json
# +
train_df = pd.read_csv('../../data/Manually_Annotated_file_lists/training_face_mesh_crop.csv')
train_df.subDirectory_filePath = '../../data/Manually_Annotated_Images_FaceMesh_Cropped/' + train_df.subDirectory_filePath
#train_df_2 = pd.read_csv('../../data/Automatically_annotated_file_list/automatically_annotated_face_mesh_crop.csv')
#train_df_2.subDirectory_filePath = '../../data/Automatically_Annotated_Images_FaceMesh_Cropped/' + train_df_2.subDirectory_filePath
#train_df = train_df.append(train_df_2)
#del train_df_2
train_df = train_df[train_df['have_facemesh']]
# +
from eason_utils import DataFrameBatchIterator
from eason_utils import lprint, now_time_string, log_file_name, change_log_file_name
change_log_file_name(f'''{log_file_name.replace('.log', '')}_{description}.log''')
# -
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow as tf
config = tf.compat.v1.ConfigProto(gpu_options = tf.compat.v1.GPUOptions(allow_growth = True))
sess = tf.compat.v1.Session(config = config)
mobilenet_pretrained = tf.keras.models.load_model("../models/affectnet_emotions/mobilenet_7.h5")
mobilenet_pretrained.trainable = False
mobilenet_output = mobilenet_pretrained.get_layer("global_pooling").output
hidden_layers = [256]
# +
valence_feat = mobilenet_output
for size in hidden_layers:
    # note: variable was misspelled 'valaence_feat', which fed the raw mobilenet
    # output into every layer of the loop; also give each layer a unique name
    valence_feat = tf.keras.layers.Dense(size, activation = 'relu', name = f'feat_valence_{size}')(valence_feat)
outputs_valence = (tf.keras.layers.Dense(1, activation = 'sigmoid', name = 'outputs_valence')(valence_feat) * 4) - 2
arousal_feat = mobilenet_output
for size in hidden_layers:
    # unique layer name per size, so stacking more than one hidden layer works
    arousal_feat = tf.keras.layers.Dense(size, activation = 'relu', name = f'feat_arousal_{size}')(arousal_feat)
outputs_arousal = (tf.keras.layers.Dense(1, activation = 'sigmoid', name = 'outputs_arousal')(arousal_feat) * 4) - 2
# -
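# The `(sigmoid * 4) - 2` rescaling used in the two heads above maps the sigmoid's (0, 1) output onto the (-2, 2) range targeted by the valence/arousal outputs; a quick NumPy check of the endpoints:

```python
import numpy as np

def to_va_range(s):
    # map a sigmoid activation s in (0, 1) onto the (-2, 2) output range
    return s * 4 - 2

print(to_va_range(np.array([0.0, 0.5, 1.0])))  # [-2.  0.  2.]
```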
model = tf.keras.Model(inputs=mobilenet_pretrained.input,
outputs=(outputs_valence, outputs_arousal, mobilenet_pretrained.output) , name="mobilenet_train")
model.summary()
# Instantiate an optimizer.
optimizer = tf.keras.optimizers.Adam()
# Instantiate a loss function.
MSE_loss = tf.keras.losses.MeanSquaredError()
SCC_loss = tf.keras.losses.SparseCategoricalCrossentropy()
# +
batch_size = 32
epochs = 4
logs = []
# +
from telegram_notifier import send_message
import time
start_time = time.time()
for epoch in range(epochs):
total_loss = 0
total_step = 0
total_emotion_correct = 0
total_valence_loss = 0
total_arousal_loss = 0
total_emotion_loss = 0
total_process = 0
for step, row in enumerate(DataFrameBatchIterator(train_df, batch_size=batch_size)):
imgs = row.subDirectory_filePath.apply(lambda x: cv2.resize(
cv2.cvtColor(cv2.imread(x), cv2.COLOR_BGR2RGB), (224, 224)))
img_array = np.array(list(imgs))
y_valence = np.array(row.valence)
y_arousal = np.array(row.arousal)
y_emotion = np.array(row.expression)
with tf.GradientTape() as tape:
logits = model(img_array, training=True)
pred_valence = logits[0]
pred_arousal = logits[1]
pred_emotion = logits[2]
valence_loss = MSE_loss(y_valence, pred_valence)
arousal_loss = MSE_loss(y_arousal, pred_arousal)
emotion_loss = SCC_loss(y_emotion, pred_emotion)
loss = valence_loss + arousal_loss # + emotion_loss
grads = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
total_loss += float(loss)
total_step += 1
total_process += len(row)
valence_loss = float(valence_loss)
arousal_loss = float(arousal_loss)
emotion_loss = float(emotion_loss)
total_valence_loss += valence_loss
total_arousal_loss += arousal_loss
total_emotion_loss += emotion_loss
emotion_correct = int(sum(pred_emotion.numpy().argmax(axis = 1) == y_emotion))
total_emotion_correct += emotion_correct
log = {
'emotion_correct': emotion_correct,
'valence_loss': valence_loss,
'arousal_loss': arousal_loss,
'emotion_loss': emotion_loss,
'time_take': time.time() - start_time,
'pred_valence': list(map(lambda x: float(x), np.around(np.array(pred_valence).reshape(-1), 3))),
'pred_arousal': list(map(lambda x: float(x), np.around(np.array(pred_arousal).reshape(-1), 3))),
}
log = json.dumps(log)
lprint(log)
save_model_path = f"models/{now_time_string}_{description}_epoch{epoch}_batch_{batch_size}"
model.save(save_model_path)
lprint(f"Save {save_model_path}")
send_message(f"Save {save_model_path}\nepoch {epoch} is finish")
log = {
'model_path': save_model_path,
'total_emotion_correct': total_emotion_correct,
'total_loss': total_loss,
'total_valence_loss': total_valence_loss,
'total_arousal_loss': total_arousal_loss,
'total_emotion_loss': total_emotion_loss,
'total_process': total_process,
'time_take': time.time() - start_time,
}
log = json.dumps(log)
lprint(log)
# -
log
| EasonTest/train_mobilenet_from_pretrain_v3_add_valence_arousal_batch32.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import sym_tensor
import torch
import numpy as np
import sym_tensor.ops as st_ops
# # Contents
# (Links work only when you run the notebook)
# - [Tensor basics](#Tensor-basics)
# - [U(1) symmetry](#U(1)-symmetry)
# - [Differentiable programming](#Differentiable-programming)
# - [Gradient descent demo](#Gradient-descent-demo) (new)
# # Tensor basics
# A symmetric tensor is defined by its elements, quantum numbers (`charges`) and symmetry group (`ZN`).
# To initialize a `Z2`-symmetric matrix, run
# +
charges = [
[
[0,2],
[1,2]
],[
[0,2],
[1,2]
]
]
# Use elements=None to initialize with random elements
T = sym_tensor.newtensor(elements=None, charges=charges, ZN=2)
print(T)
# -
# A `Z2`-symmetric matrix can be represented in block-diagonal form; in this case by two 2x2 blocks. Every symmetric tensor can be reshaped to a block-diagonal matrix by fusing indices together:
# Choose [0] as the left index, [1] as the right index and 'n' (irrelevant for Z2)
blocks, *meta = T.get_blocks(left_legs=[0], right_legs=[1], side='n') # The metadata is necessary to reverse the reshaping
for b in blocks:
print(b)
T_full = T.to_full() # Embed the blocks in a matrix (only Z2 matrices supported for now)
print(T_full)
# To multiply tensors together, there are several options:
# +
T2 = T.copy() # No shared memory
# Ncon
res = st_ops.ncon([T,T2], ([-1,1],[1,-2]))
print(res.to_full())
# Mult (arguments are the legs to be contracted on each tensor)
res = T.mult(T2, [1], [0])
print(res.to_full())
# Matrix product (in general: contract over last index of T and first index of T2)
res = T @ T2
print(res.to_full())
# Full result
print(T_full @ T_full)
# -
# SVD (left_legs, right_legs, number of singular values to keep, absorb s left/right/not)
u,s,v = T.svd([0], [1], n=np.inf, absorb='n')
print((<EMAIL>()).to_full()) # Equal to T itself
print(T.allclose(<EMAIL>()))
# ## U(1) symmetry
# The continuous `U(1)` symmetry can be approximated by a `ZN` symmetry with large enough `N`. Initialize a tensor like:
# +
charges = [
[
[1,1],
[35,1]
],[
[0,1],
[1,1],
[35,1]
],[
[0,1],
[1,1],
[35,1]
]
]
# Use elements=None to initialize with random elements
T = sym_tensor.newtensor(elements=None, charges=charges, ZN=36)
print(T)
# -
blocks, *meta = T.get_blocks(left_legs=[0,1], right_legs=[2], side='n') # The metadata is necessary to reverse the reshaping
for i,b in enumerate(blocks):
if b.numel() > 0:
print("Quantum number: ", i)
print(b)
# The same information can be obtained by
# +
# Reshaped as matrix with indices 0 and 1 fused
T.show_block_structure([0,1], [2])
# As tensor
T.show_block_structure()
# -
# ## Hamiltonian initialization (automatic)
# For the automatic conversion of a Hamiltonian (or other operators) in full array format to a symmetric tensor, we can use the `symmetrize_operator` function:
# +
H_Heis = np.array([
[0.25, 0, 0, 0],
[0, -0.25, 0.5, 0],
[0, 0.5, -0.25, 0],
[0, 0, 0, 0.25]
])
# These represent the quantum numbers on each physical index
# For a spin-1/2 system with U(1) symmetry (approximated by Z36), the
# basis states (up,down) correspond to quantum numbers (1,-1) mod ZN = (1,35)
phys_charges = [1,35]
# Either give the operator in matrix-form (bra indices, ket indices):
H_Heis_symmetric = sym_tensor.symmetrize_operator(op=H_Heis, phys_charges=phys_charges, ZN=36)
print('Elements of H: ', H_Heis_symmetric.data)
print(H_Heis_symmetric)
H_Heis_symmetric.show_block_structure()
# Or in tensor format, with an index for each physical leg:
H_Heis = np.reshape(H_Heis, [2,2,2,2])
H_Heis_symmetric = sym_tensor.symmetrize_operator(op=H_Heis, phys_charges=phys_charges, ZN=36)
print('\nElements of H: ', H_Heis_symmetric.data)
print(H_Heis_symmetric)
# -
# ## Covariant tensors
# If you want to, for example, write a spin-flip operator as SymTensor, it will not be possible with a `U(1)`-invariant tensor, since it contains exactly the elements that are not allowed by the symmetry.
# In order to still make such a SymTensor, you can attach an extra index to the tensor that carries a nontrivial quantum number, such that the total charge `Q_in - Q_out` is still conserved:
Splus = np.array([[0,1],[0,0]])
print("S+ operator:")
print(Splus)
# The SymTensor will have only two elements that are allowed to be nonzero:
# +
operator_el = np.array([[1,2],[3,4]])
print("General operator:")
print(operator_el)
print("")
op = sym_tensor.symmetrize_operator(operator_el, phys_charges=[1,35], ZN=36)
# Tensor with 2 allowed nonzero elements, namely the diagonal elements
# of the operator in matrix form
print(op)
print(op.data) # [1, 4]
print("")
# -
# Now make a new tensor with an additional index with quantum number +2 or -2
# Why ±2? Because in the code there are no fractional quantum numbers, so spin-up corresponds to +1 (not +1/2). The S+ operator changes spin-down (-1) to spin-up (+1), so the difference is +2.
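# The charge bookkeeping described above can be verified with plain NumPy (independent of sym_tensor), using the integer quantum numbers up = +1, down = -1 from the text:

```python
import numpy as np

Splus = np.array([[0, 1], [0, 0]])  # basis order: (up, down)
q = {0: +1, 1: -1}                  # integer quantum numbers: up = +1, down = -1
# the single nonzero element sits at bra index i, ket index j
i, j = np.argwhere(Splus).ravel()
print(q[i] - q[j])  # 2 -> the S+ operator carries charge +2
```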
# +
charges_with_extra_leg = [[[1,1],[35,1]], [[1,1],[35,1]], [[2,1]]]
op = sym_tensor.newtensor(charges=charges_with_extra_leg, ZN=36)
# Note that now there's only a single allowed nonzero element - just like
# the S+ operator
print(op)
# Make it S+:
op.data[0] = 1
# -
# The other way of doing this is by using the `totalcharge` property, which can be set to ±2 for this example. However, this property is not yet fully supported in the code, so it may cause problems.
# ## Hamiltonian initialization (manual)
# When building a Hamiltonian by hand with a nontrivial symmetry, the blocks of the tensor have to be matched with the right elements of the Hamiltonian.
# Here we use a Heisenberg Hamiltonian as an example. The basis is the standard spin basis {(up,up), (up,down), (down,up), (down,down)}, which can be identified with the set of quantum numbers {(1,1), (1,-1), (-1,1), (-1,-1)}.
# The 4x4 (2-site) Hamiltonian can be reshaped to tensor form 2x2x2x2 where each index connects to a physical index of a tensor network. Each index then runs over the quantum numbers (1,-1) and will be given a `charge` of [[1,1],[35,1]] when we approximate the `U(1)` symmetry by `Z36`.
H_Heis = np.array([
[0.25, 0, 0, 0],
[0, -0.25, 0.5, 0],
[0, 0.5, -0.25, 0],
[0, 0, 0, 0.25]
])
print(H_Heis)
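# To make the index convention concrete, here is a plain NumPy check (independent of sym_tensor) that the 4x4 → 2x2x2x2 reshape sends each matrix element to the expected tensor slot. Row index r = 2*i + j and column index c = 2*k + l, with 0 = up and 1 = down:

```python
import numpy as np

H = np.array([
    [0.25,  0,     0,    0],
    [0,    -0.25,  0.5,  0],
    [0,     0.5,  -0.25, 0],
    [0,     0,     0,    0.25],
])
Ht = H.reshape(2, 2, 2, 2)
# Ht[i, j, k, l] is the element mapping ket state (k, l) to bra state (i, j)
print(Ht[0, 1, 1, 0])  # 0.5, the (down,up) -> (up,down) spin-flip term
print(Ht[0, 0, 0, 0])  # 0.25, the (up,up) -> (up,up) diagonal term
```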
# Now we need to identify all elements with the right quantum numbers. The top-left element maps an (up, up) state to an (up, up) state, corresponding to quantum numbers ( (1,1), (1,1) ).
# The element on its bottom right corner (-0.25) maps (up,down) -> (up,down), so quantum numbers ( (1, -1), (1, -1) ).
# Let's make a SymTensor with (±1) charges on each leg and inspect the nonzero blocks:
H = sym_tensor.newtensor(charges=4*[[[1,1],[35,1]]], ZN=36)
H.show_block_structure()
# Note that there are 6 blocks of size 1. One important remark is that the charges on the outgoing indices [2,3] are *conjugated* (in the code the 'arrows' are always pointed inwards), so in the first block [35,35,1,1] corresponds to ( (-1, -1), (-1, -1) ), which is the element that maps (down,down)->(down,down) - the element on the bottom right of the Hamiltonian (0.25).
# We can then identify each block with a matrix element:
data = torch.zeros(6)
data[0] = H_Heis[3,3] # (down,down) -> (down,down)
data[1] = H_Heis[2,1] # (up,down) -> (down,up)
data[2] = H_Heis[1,1] # (up,down) -> (up,down)
data[3] = H_Heis[2,2] # (down,up) -> (down,up)
data[4] = H_Heis[1,2] # (down,up) -> (up,down)
data[5] = H_Heis[0,0] # (up,up) -> (up,up)
H.data = data
print('Elements of H: ', H.data)
H.show_block_structure(left_legs=[0,1], right_legs=[2,3])
# # Differentiable programming
# Within Torch, all tensor operations should be differentiable in order to obtain gradients. Turn on the tracking of gradients:
T.requires_grad = True
print(T) # Note that now the gradient will be tracked
T2 = sym_tensor.newtensor(elements=None, charges=charges, ZN=36)
T2 = T2.conj() # Take the hermitian conjugate
nrm = T.mult(T2, [0,1,2], [0,1,2]) # Full contraction to a scalar
print(nrm) # Note that now the gradient function has been stored
# Backpropagation is the same as with regular torch Tensors
nrm.backward()
print(T.grad) # The gradient will be stored as a raw torch Tensor
# The methods defined for SymTensor objects automatically determine whether they should be wrapped into a Torch autograd function (defined in `ops.py`) by checking the `T.requires_grad` property.
# A method which is differentiable follows this general structure:
# + magic_args="false --no-raise-error" language="script"
# # tensors.py
# # wrap_grad decorator wraps function in autograd operation if necessary
# # ('Max' in ops.py in this case)
# # The decorator can be bypassed by adding the nograd=True keyword argument
# # T.max(nograd = True)
# # or by calling the equivalent underscore variant:
# # T.max_()
# # When the tensor does not have the property requires_grad==True, the gradient
# # is never computed, so the explicit bypass is almost never necessary.
# @_Decorators.wrap_grad(st_ops.Max)
# def max(self): # Implementation on tensor level
# """ Largest element """
# max_el = self.data.max().unsqueeze(0).detach().clone()
# T = newtensor(elements=None, charges=[[[0,1]]], ZN=self.ZN, totalcharge=self.totalcharge)
# T.data = max_el
# return T
#
# # ops.py
# class Max(torch.autograd.Function):
# @staticmethod
# def forward(ctx, tensor):
# res = tensor.max_() # Basic operation on tensor (note the underscore)
#
# # Save information for backward pass
# ctx.intermediate_results = (tensor, res)
#
# # In some operations, only the metadata of the SymTensor needs to
# # be saved, since Torch already saves the elements of the output.
# # res.meta contains all information of the resulting SymTensor
# # (except the elements) that can be used to reconstruct a SymTensor
# # in the backward pass
# # Saving the information would then be something like:
# # ctx.intermediate_results = (res.meta)
#
# return res
#
# @staticmethod
# def backward(ctx, grad_output):
# # Backward pass, which receives the grad_output in the form of a
# # regular Torch Tensor. If necessary, we cast the grad_output back
# # to a SymTensor like this:
# # (meta) = ctx.intermediate_results
# # tensor = tensors.from_meta(meta, elements=grad_output)
# (tensor, res) = ctx.intermediate_results
# tensor = tensor.copy()
# new_data = torch.zeros_like(tensor.data)
# tensor.data = new_data.masked_fill(tensor.data == res.data, grad_output.squeeze())
# return tensor # backward must return the gradient w.r.t. the input, not the forward result
# -
# Most of the elementwise functions can be simply be implemented by calling the equivalent Torch function on the elements:
def __add__(self, other):
return self._elem_function(torch.Tensor.__add__, other)
# `T._elem_function` makes sure the elements are in the correct order, then calls the corresponding torch.Tensor function (here `torch.Tensor.__add__`) on the elements and finally calls `T.fill_data` to reconstruct the SymTensor from the elements.
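# The same delegation pattern, stripped of all sym_tensor/Torch specifics, can be sketched with a toy wrapper class (every name below is illustrative, not part of the real API):

```python
class Wrapped:
    """Toy sketch: delegate elementwise operations to an underlying list."""

    def __init__(self, data):
        self.data = list(data)

    def _elem_function(self, fn, other):
        # broadcast a scalar, or pair up elementwise with another Wrapped
        other_data = other.data if isinstance(other, Wrapped) else [other] * len(self.data)
        return Wrapped(fn(a, b) for a, b in zip(self.data, other_data))

    def __add__(self, other):
        return self._elem_function(lambda a, b: a + b, other)

w = Wrapped([1, 2, 3]) + Wrapped([10, 20, 30])
print(w.data)  # [11, 22, 33]
print((Wrapped([1, 2]) + 5).data)  # [6, 7]
```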
# # Gradient descent demo
# We can put the SymTensors to work in a simple 2-site Heisenberg chain example.
# In this case, there are two variable tensors, for which we construct (1) a torch.nn.Parameter with the elements and (2) a corresponding SymTensor that is filled with the elements and used in the rest of the code.
# Note that since `T.fill_data` is differentiable, the gradient on the SymTensors are automatically propagated to the Parameter objects.
# models.py
from torch import nn
class TwoSiteHeis(nn.Module):
def __init__(self, elems_A=None, elems_B=None):
super(TwoSiteHeis, self).__init__()
H_Heis = np.array([
[0.25, 0, 0, 0],
[0, -0.25, 0.5, 0],
[0, 0.5, -0.25, 0],
[0, 0, 0, 0.25]
])
phys_charges = [1,35]
self.H = sym_tensor.symmetrize_operator(op=H_Heis, phys_charges=phys_charges, ZN=36)
# A and B are simple U(1) MPS tensors that form a two-site chain
ch_A = ch_B = [[[1,1],[35,1]], [[0,1],[1,1],[35,1]]]
self.A = sym_tensor.newtensor(elements=elems_A, charges=ch_A, ZN=36)
self.B = sym_tensor.newtensor(elements=elems_B, charges=ch_B, ZN=36)
if elems_A is None:
self.elems_A = nn.Parameter(self.A.data)
if elems_B is None:
self.elems_B = nn.Parameter(self.B.data)
def forward(self, elems_A=None, elems_B=None):
# The elements can be given as optional arguments so that the gradient
# of the full forward pass can be checked numerically
# Normally, just take the tensors that are stored in the model
if elems_A is not None:
self.elems_A = elems_A
if elems_B is not None:
self.elems_B = elems_B
# This is an important step: it copies the elements into the SymTensor
# in such a way that the operation remains differentiable
# In the backward step, the gradient of A (SymTensor) will be propagated
# back to the gradient of elems_A (regular Torch Tensor)
A = self.A.fill_data(self.elems_A)
B = self.B.fill_data(self.elems_B)
A_c = A.conj()
B_c = B.conj()
E = st_ops.ncon([A, B, self.H, A_c, B_c], ([1,2],[3,2],[1,3,4,5],[4,6],[5,6]))
nrm = st_ops.ncon([A, B, A_c, B_c], ([1,2],[3,2],[1,4],[3,4]))
E_normalized = E / nrm
return E_normalized
# Check if everything works:
# +
m = TwoSiteHeis()
# Check the forward pass with two random initial tensors
elems_A = torch.nn.Parameter(torch.rand(m.A.numel(), requires_grad=True))
elems_B = torch.nn.Parameter(torch.rand(m.B.numel(), requires_grad=False))
E = m.forward(elems_A, elems_B) # Returns the energy in a SymTensor
print("Starting energy:", E.data)
# Notice that now the gradient of the SymTensor that is stored
# on the model (m.A) is propagated to the Parameter object (elems_A)
E.backward()
print("Gradient of A:", elems_A.grad)
# Since the SymTensor is not a leaf in the graph, its gradient will be
# deleted, unless you set m.A.retain_grad()
print("Gradient of SymTensor m.A:", m.A.grad) # None
# Check the gradient of the full computation against a numerical gradient
print("Gradient check:", torch.autograd.gradcheck(m, (elems_A, elems_B)))
# +
# main simulation script
m = TwoSiteHeis()
print("Model Parameters:")
print(list(m.parameters()))
learning_rate = 1
optimizer = torch.optim.LBFGS(m.parameters(), max_iter=10, lr=learning_rate)
def closure():
optimizer.zero_grad()
loss = m.forward()
loss.backward()
return loss
energies = []
for epoch in range(10):
loss = optimizer.step(closure)
print("Step", epoch, "energy", loss.data)
energies.append(loss)
print("Final energy:", energies[-1].data, "error:", energies[-1].data - (-0.75))
import matplotlib.pyplot as plt
plt.plot(energies, '-+');
| docs/source/example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tempfile
fp = tempfile.TemporaryFile()
fp.write(b'Hello World')
fp.seek(0)
print(fp.read())
fp.close()
fn = tempfile.NamedTemporaryFile(mode = 'w+')
fn.write("Hey This is matrix this is to test the file!!")
fn.seek(0)
fn.read()
fn.seek(0)
fn.read()
fn.close()
# After close() the NamedTemporaryFile is deleted; further access fails
fn.seek(0)  # Raises ValueError: I/O operation on closed file
fn.read()
fs = tempfile.SpooledTemporaryFile(max_size=0, mode='w+')  # max_size=0: never rolls over to disk automatically
fs.write("""
asnfasnd oiaa siod asj dp iasj dpas jps afds
ndsfhnsafasnf sd fds ds gsg sd as fas asf asfas as dasd asd as das d as fas as
"""
)
fs.seek(0)
fs.read()
import os
with tempfile.TemporaryDirectory() as tmpdir:
os.chdir(tmpdir)
os.mkdir("test")
print("Created temp directory", tmpdir)
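# One caveat with the cell above: after the `with` block exits, the process is left sitting in a directory that has just been deleted, so later relative-path operations can fail (and on Windows, deleting the current directory can itself fail). Restoring the previous working directory avoids this:

```python
import os
import tempfile

old_cwd = os.getcwd()
with tempfile.TemporaryDirectory() as tmpdir:
    os.chdir(tmpdir)
    os.mkdir("test")
    print("Created temp directory", tmpdir)
os.chdir(old_cwd)  # restore before doing any further file work
print(os.path.exists(tmpdir))  # False - the directory and its contents are gone
```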
| Standard Library/tempfile.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit
# name: python3
# ---
# <h3>Variables</h3>
# <p>A variable is used to store a value in the computer's memory, so that the value can be reused whenever we need it.</p>
# <p>Here is how to define a variable in Python</p>
monAge = 34
# <p>This program declares a variable named <code>monAge</code> and assigns it the value <code>34</code>. We can then display the value held by the variable with the line of code <code>print(monAge)</code>. <em>To do as an exercise</em>.</p>
# <p>We will now write a program that declares and displays several variables:</p>
# <ul>
# <li>A variable to store a person's name</li>
# <li>A variable to store that person's age</li>
# <li>A variable to store whether the person whose age was entered is an adult or not.</li>
# </ul>
#
# <p>Here is what the program will look like</p>
# +
monNom = '<NAME>'
monAge = 34
suisJeMajeur = monAge >= 18
print(f'Mon nom est {monNom}')
print(f'Mon age est {monAge}')
print(f'Suis-je majeur {suisJeMajeur}')
# -
# Note the use of the letter <code>f</code> in the <code>print()</code> function; it lets us combine hand-typed text with variables placed between the braces <code>{</code> and <code>}</code>. Python thus understands that a word placed between braces is a variable, fetches the value stored in that variable and displays everything side by side.
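# As a small additional example of the <code>f</code>-string mechanism (the names and values below are made up for illustration):

```python
prenom = "Ada"
age = 36
# the expressions inside the braces are replaced by the variables' values
message = f"My name is {prenom} and I am {age} years old"
print(message)  # My name is Ada and I am 36 years old
```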
| docs/chap02-03-les variables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit
# metadata:
# interpreter:
# hash: 1ee38ef4a5a9feb55287fd749643f13d043cb0a7addaab2a9c224cbe137c0062
# name: Python 3.8.5 64-bit
# ---
# +
fig, ax = plt.subplots(figsize=(15,10))
ind_counts = df['Industry'].value_counts(dropna=True, sort=True)
df_ind = pd.DataFrame(ind_counts)
df_ind = df_ind.reset_index()
df_ind.columns = ['Industry', 'Attendees']
values = df_ind['Attendees']
labels = df_ind['Industry']
startangle = 0
title = 'AFP Attendee Industries - Practitioners Only'
wedgeprops = {'edgecolor': 'white'}  # was undefined in the original cell
ax.pie(values, autopct = lambda p: f'{p:.2f}%\n({p*sum(values)/100:.0f})',  # was sum(yvalues), an undefined name
       startangle = startangle,
       labels = labels,
       wedgeprops = wedgeprops)
ax.set_title(title, fontsize = '14', weight = 'regular')
ax.axis('equal')
plt.show()
fig.savefig('Industries.png', dpi=96)
# +
# remove records
indexNames = df[~(df['Individual Type'] == 'Practitioner')].index
df.drop(indexNames, inplace=True)
# df.head(2)
df.info()
def make_autopct(values):
    def my_autopct(pct):
        total = sum(values)  # was sum(yaxis), an undefined name
        val = int(round(pct*total/100.0))
        return '{p:.2f}%\n({v:d})'.format(p=pct,v=val)
    return my_autopct
ax = fig.add_subplot()
title = 'AFP Attendee Top 25 Job Titles - Practitioners Only'
ax = df_tit.nlargest(25, 'Attendees').plot.barh(x='Job Title',
y='Attendees',
rot = 0,
fontsize = '8',
figsize=(15,10),
title = title)
ax.figure.savefig('Titles.png', dpi=96)
# World map read in
# world = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
# -
# # datetime Functions
# ```
# date.year, date.month, date.day
# date.strftime(format) Return a string representing the date, controlled by an explicit format string. Format codes referring to hours, minutes or seconds will see 0 values.
#
# # %a Weekday as locale’s abbreviated name.
# # %A Weekday as locale’s full name.
# # %w Weekday as a decimal number, where 0 is Sunday and 6 is Saturday.
# # %d Day of the month as a zero-padded decimal number.
# # %b Month as locale’s abbreviated name.
# # %B Month as locale’s full name.
# # %m Month as a zero-padded decimal number.
# # %y Year without century as a zero-padded decimal number.
# # %Y Year with century as a decimal number.
# # %H Hour (24-hour clock) as a zero-padded decimal number.
# # %I Hour (12-hour clock) as a zero-padded decimal number.
# # %p Locale’s equivalent of either AM or PM.
# # %M Minute as a zero-padded decimal number.
# # %S Second as a zero-padded decimal number.
# # %f Microsecond as a decimal number, zero-padded on the left.
# # %z UTC offset in the form +HHMM or -HHMM (empty string if the object is naive).
# # %Z Time zone name (empty string if the object is naive).
# # %j Day of the year as a zero-padded decimal number.
# # %U Week number of the year (Sunday as the first day of the week) as a zero padded decimal number. All days in a new year preceding the first Sunday are considered to be in week 0.
# # %W Week number of the year (Monday as the first day of the week) as a decimal number. All days in a new year preceding the first Monday are considered to be in week 0.
# # %c Locale’s appropriate date and time representation.
# # %x Locale’s appropriate date representation.
# # %X Locale’s appropriate time representation.
# %% literal '%' character.
# ```
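# A few of the format codes above, applied to a fixed timestamp (directives like `%A`, `%B` and `%p` are locale-dependent, so only locale-independent codes are shown with exact outputs):

```python
import datetime

d = datetime.datetime(2021, 3, 9, 14, 30, 5)
print(d.strftime("%Y-%m-%d %H:%M:%S"))  # 2021-03-09 14:30:05
print(d.strftime("%j"))                 # 068 (day of the year, zero-padded)
print(d.strftime("%I"))                 # 02 (12-hour clock, zero-padded)
```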
# + tags=[]
import schedule
import time
def job():
print("I'm working...")
schedule.every(1).seconds.do(job)
# schedule.every().hour.do(job)
# schedule.every().day.at("10:30").do(job)
# schedule.every(5).to(10).minutes.do(job)
# schedule.every().monday.do(job)
# schedule.every().wednesday.at("13:15").do(job)
# schedule.every().minute.at(":17").do(job)
while True:
schedule.run_pending()
time.sleep(1)
# + tags=[]
# https://vishalmnemonic.github.io/DC6/
import io
import sys
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(16,9))
from mpl_toolkits.axes_grid1 import make_axes_locatable
from collections import OrderedDict
import IPython
import pytz
import datetime
import json
import logging
# !python3 --version
# -
# # Vader Sentiment
#
# ```python
# from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
# analyser = SentimentIntensityAnalyzer()
# def sentiment_analyzer_scores(sentence):
# score = analyser.polarity_scores(sentence)
# print("{:-<40} {}".format(sentence, str(score)))
# ```
# # GeoPandas
# <a href="https://geopandas.org"><img src="img/l_geopandas.png" style="float: right;"></a>
#
# ```python
# # !pip3 install geopandas
# import geopandas as gpd
# geo_df = geopandas.read_file("data/maps/usgeojson/gz_2010_us_040_00_5m.json")
# geo_df.head()
# geo_df = gpd.read_file("data/maps/states_21basic/states.shp")
# geo_df["STATE_FIPS"] = geo_df["STATE_FIPS"].astype(np.int64)
# geo_df.head()
# geo_df.dtypes
# geo_df.plot()
# df = pd.read_csv("data/uspop-nst-2018.csv", header=0)
# df = df[['STATE_FIPS','POP_2018']]
# merged = geo_df.join(df.set_index("STATE_FIPS"), on="STATE_FIPS")
# merged.head()
# fig, ax = plt.subplots(1, 1)
# divider = make_axes_locatable(ax)
# merged.plot(column='POP_2018',
# ax=ax,
# legend=True,
# legend_kwds={'label': "Population by State (m)",
# 'orientation': "horizontal"})
# merged.plot(column='POP_2018');
# cax = divider.append_axes("right", size="5%", pad=0.1)
# merged.plot(column='POP_2018', ax=ax, legend=True, cax=cax)
#
# fig.savefig("leaddistribution.png", dpi=300)
# ```
#
# # Excel<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html"><img src="img/l_excel.png" style="float: right;"></a>
#
# ```python
# import pandas as pd
# url ='http://s3.amazonaws.com/assets.datacamp.com/course/importing_data_into_r/latitude.xls'
# xl = pd.read_excel(url,sheetname=None)
# print(xl.keys())
# print(xl['1700'].head())
#
#
# # !pip install xlrd
# import xlrd
# xlsx = pd.ExcelFile('examples/ex1.xlsx')
# df = pd.read_excel('examples/ex1.xlsx', 'Sheet1')
# writer = pd.ExcelWriter('examples/ex2.xlsx')
# frame.to_excel(writer, 'Sheet1')
# df.to_excel('examples/ex2.xlsx')
# df.to_excel(writer, 'Sheet1')
# writer.save()
# # !rm examples/ex2.xlsx
# ```
# --------------------------------------
# + [markdown] tags=[]
# # BeautifulSoup4
# <a href="https://www.crummy.com/software/BeautifulSoup/"><img src="img/l_beautsoup4.jpg" style="width: 50px; float: right;"></a>
#
# ```python
# # # !pip3 install BeautifulSoup4
#
# Scape all URL's off a page
# import requests
# from bs4 import BeautifulSoup
# url = 'https://www.python.org/~guido/'
# r = requests.get(url)
# html_doc = r.text
# soup = BeautifulSoup(html_doc)
# print(soup.title)
# a_tags = soup.find_all('a')
# for link in a_tags:
# print(link.get('href'))
# ```
# + tags=[]
# Import packages
import requests
from bs4 import BeautifulSoup
# Specify url: url
url = 'https://www.python.org/~guido/'
# Package the request, send the request and catch the response: r
r = requests.get(url)
# Extracts the response as html: html_doc
html_doc = r.text
# Create a BeautifulSoup object from the HTML: soup
soup = BeautifulSoup(html_doc)
# Prettify the BeautifulSoup object: pretty_soup
pretty_soup = soup.prettify()
# Print the response
print(pretty_soup)
# + tags=["outputPrepend"]
# Import package
import requests
# Specify the url: url
url = "http://www.datacamp.com/teach/documentation"
# Packages the request, send the request and catch the response: r
r= requests.get(url)
# Extract the response: text
text = r.text
# Print the html
print(text)
# -
# Import packages
from urllib.request import urlopen, Request
# Specify the url
url = "http://www.datacamp.com/teach/documentation"
# This packages the request
request = Request(url)
# Sends the request and catches the response: response
response = urlopen(request)
# Extract the response: html
html = response.read()
# Print the html
print(html)
# Be polite and close the response!
response.close()
# Import packages
from urllib.request import urlopen, Request
# Specify the url
url = "http://www.datacamp.com/teach/documentation"
# This packages the request: request
request=Request(url)
# Sends the request and catches the response: response
response = urlopen(request)
# Print the datatype of response
print(type(response))
# Be polite and close the response!
response.close()
# # Scrapy
# <a href="https://scrapy.org"><img src="img/l_scrapy.png" style="width: 200px; float: right;"></a>
#
#
# ```python
# # !pip3 install scrapy
# # !pip3 install json
# # !pip3 install logging
#
# import pandas as pd
# import scrapy
# from scrapy.crawler import CrawlerProcess
# import json
# import logging
# class JsonWriterPipeline(object):
# def open_spider(self, spider):
# self.file = open('venues.jl', 'w')
# def close_spider(self, spider):
# self.file.close()
# def process_item(self, item, spider):
# line = json.dumps(dict(item)) + "\n"
# self.file.write(line)
# return item
# class QuotesSpider(scrapy.Spider):
# name = "venues"
# start_urls = ['https://www.cvent.com/venues/results/United%20States?']
# custom_settings = {
# 'LOG_LEVEL': logging.WARNING,
# 'ITEM_PIPELINES': {'__main__.JsonWriterPipeline': 1}, # Used for pipeline 1
# 'FEED_FORMAT':'json', # Used for pipeline 2
# 'FEED_URI': 'venues.json' # Used for pipeline 2
# }
#     def parse(self, response):
#         for venue in response.css('div.quote'):
#             yield {
#                 'name': venue.css('span.text::text').extract_first(),
#                 # selector note: data-cvent-id="searchResult-mainContent-mainResults-venueListContainer-ul-li-0-venueCard-link-wrapper-venueInfoWrapper-venueName
#                 'author': venue.css('span small::text').extract_first(),
#                 'tags': venue.css('div.tags a.tag::text').extract(),
#             }
# NEXT_PAGE_SELECTOR = '.next a ::attr(href)'
# next_page = response.css(NEXT_PAGE_SELECTOR).extract_first()
# if next_page:
# yield scrapy.Request(
# response.urljoin(next_page),
# callback=self.parse
# )
# ```
#
# + [markdown] tags=[]
# # Bokeh
# <a href="https://bokeh.org"><img src="img/l_bokeh.png" style="float: right;"></a>
#
# ```python
# # # !pip3 install bokeh
# # # !pip3 install hvplot
# import bokeh
# from bokeh.io import output_notebook, output_file, show, push_notebook
# from bokeh.plotting import *
# from bokeh.models import ColumnDataSource, HoverTool, CategoricalColorMapper
# from bokeh.layouts import row, column, gridplot, widgetbox
# from bokeh.models.widgets import Tabs, Panel
# from bokeh.transform import factor_cmap
# import hvplot as hv
# import hvplot.pandas
# ```
#
# -
# # MySQL
# <a href="https://www.mysql.com/products/connector/"><img src="img/l_mysql.png" style="float: right;"></a>
#
# ```python
# # !pip install mysql-connector-python
# import mysql.connector
# config = {
# 'host': 'rpsmithii.mysql.pythonanywhere-services.com',
# 'database': 'rpsmithii$weight','user': 'rpsmithii',
# 'password': '<PASSWORD>','port': '3306'}
# db = mysql.connector.connect(**config)
# cur = db.cursor()
# cur.execute("SELECT dt, wht FROM weight WHERE wht > 10")
# table = pd.DataFrame(cur.fetchall())
# table.columns = cur.column_names
# ```
#
# # Pandas
# <a href="https://pandas.pydata.org"><img src="img/l_pandas.png" style="float: right;"></a>
#
# ```python
# # !pip3 install pandas
# import pandas as pd
# ```
#
# ### csv methods
# ```python
# df = pd.read_csv('data/hosp.csv', index_col='Target?', dtype={'user_id': int})
# import csv
# with open('csvfile.csv', 'w', newline='') as csvfile:
#     f = csv.writer(csvfile)
#     f.writerows(items)  # write rows to csv file
# ```
#
# ### attributes
# ```python
# df.info() # data types and specs
# df.shape # dimensions (tuple)
# df.describe() # quick statistical data summary
# df.dtypes # column labels & data types
# df.index # index (row labels)
# df.head(n) # return first n rows
# df.tail(n) # return last n rows
# df.columns # column labels
# df.values # NumPy representation
# df.axes # list of axes
# df.size # number (int) of elements
# df.memory_usage([index, deep]) # per-column memory (bytes)
# ```
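# A quick illustration of a few of the attributes above on a toy frame (hypothetical data):

```python
import pandas as pd

# Small illustrative DataFrame
df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})

print(df.shape)            # (3, 2) -- rows x columns
print(df.size)             # 6 -- total number of elements
print(list(df.columns))    # ['a', 'b'] -- column labels
print(str(df.dtypes['a'])) # int64 -- dtype of column 'a'
```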
#
# ### transforms
# ```python
# df.between_time(start_time, end_time) # select a time range
# df.set_index() # set index using existing columns
# df['col0'].astype(str) # change data type
# df = df.drop("del", axis=0) # delete the row with label "del"
# df = pd.concat([df01, df02, df03]) # combine dataframes
# df.is_copy # return copy
# df.empty # True if dataframe is empty
# df = df0.drop(columns=[' X', ' N']) # drop columns
# df = df0.sort_values(by='column') # sort
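# A small sketch exercising the drop/sort transforms listed above (hypothetical data):

```python
import pandas as pd

df0 = pd.DataFrame({'col': [3, 1, 2]}, index=['keep', 'del', 'keep2'])

df = df0.drop('del', axis=0)            # delete the row labelled 'del'
print(df.index.tolist())                # ['keep', 'keep2']

df_sorted = df0.sort_values(by='col')   # sort rows by column values
print(df_sorted['col'].tolist())        # [1, 2, 3]
```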
#
# ### [plots](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html)
#
# ```python
# plt.figure(figsize=(16,9))
# plt.rcParams["figure.figsize"] = [16,9]
# df.plot.area([x, y]) # stacked area plot
# df.plot.bar([x, y]) # vertical bar plot
# df.plot.barh([x, y]) # horizontal bar plot
# df.plot.box([by]) # box plot
# df.boxplot([column, by, ax, …]) # box plot from columns
# df.plot.hist([by, bins]) # histogram of columns
# df.hist([column, by, grid, …]) # histogram
# df.plot.line([x, y]) # columns as lines
# df.plot.pie([y]) # pie plot
# df.plot.scatter(x, y[, s, c]) # scatter plot
# df.plot(legend=False)
# df.plot(xlabel="new x", ylabel="new y")
# ts.plot(logy=True) # log-scale
# df['B'].plot(secondary_y=True, style='g') # secondary axis
# df['A'].plot(x_compat=True) # x-axis compatibility mode
# df.plot.line()
# with pd.plotting.plot_params.use('x_compat', True):
# df[' Air Temperature'].plot(color='b')
# df[' Water Temperature'].plot(color='g')
# plt.rcParams["figure.figsize"] = [16,9]
# dfa.plot.line()
# dfw.plot.line()
# ```
#
# ### lookup()
# Extract a set of values given a sequence of row labels and column labels; returns a **NumPy array**
#
# ```python
# DataFrame.lookup(list(range(0, 10, 2)), ['B', 'C', 'A', 'B', 'D'])
# ```
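# Note: `DataFrame.lookup` was deprecated in pandas 1.2 and removed in 2.0; a minimal NumPy-based equivalent sketch (illustrative frame, hypothetical data):

```python
import numpy as np
import pandas as pd

# Illustrative frame: 4 rows, columns A/B/C holding 0..11
df = pd.DataFrame(np.arange(12).reshape(4, 3), columns=['A', 'B', 'C'])
rows, cols = [0, 2, 3], ['B', 'A', 'C']

# Equivalent of the removed df.lookup(rows, cols):
ridx = df.index.get_indexer(rows)
cidx = df.columns.get_indexer(cols)
vals = df.to_numpy()[ridx, cidx]
print(vals)  # one value per (row, col) pair: [ 1  6 11]
```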
#
# ### query()
#
# column b has values between column a and c values
# ```python
# DataFrame.query('(a < b) & (b < c)')
# index = pd.MultiIndex.from_arrays([colors, foods], names=['color', 'food'])
# ```
#
# columns a and "b" have overlapping values
# ```python
# DataFrame.query('a in b')
# ```
#
# columns a and b have overlapping values and col c's values are less than col d's
# ```python
# DataFrame.query('a in b and c < d')
# ```
#
# Comparing a list of values to a column using ==/!= works similarly to in/not in.
# ```python
# DataFrame.query('b == ["a", "b", "c"]')
# ```
#
# select rows with index values 'Andrade' + 'Veness' with columns fr 'city' to 'email'
# ```python
# DataFrame.loc[['Andrade', 'Veness'], 'city':'email']
# ```
#
# select same rows, with just 'first_name', 'address' and 'city' columns
# ```python
# DataFrame.loc['Andrade':'Veness', ['first_name', 'address', 'city']]
# ```
#
# select rows with _first name_ Antonio and _columns_ 'city' to 'email'
#
# ```python
# DataFrame.loc[DataFrame['first_name'] == 'Antonio', 'city':'email']
# ```
#
# Select rows where email column ends w/ 'hotmail.com' include all columns
# ```python
# DataFrame.loc[DataFrame['email'].str.endswith("hotmail.com")]
# ```
#
# --------------------------------------
# # Zipcode Methods
#
# ```python
# # !pip3 install uszipcode
# # !pip3 install --upgrade uszipcode # upgrade databases
#
# from uszipcode import SearchEngine
# search = SearchEngine(simple_zipcode=False) # set simple_zipcode=False to use the rich info database
# zipcode = search.by_zipcode("06916")
# zipcode.to_json() # to json
# zipcode.to_dict() # to dict
# zip_list = zipcode.values() # to list (avoid shadowing the built-in zip)
# zipcode
# ```
# ----------------------------------
states = {
'AK': 'Alaska',
'AL': 'Alabama',
'AR': 'Arkansas',
'AS': 'American Samoa',
'AZ': 'Arizona',
'CA': 'California',
'CO': 'Colorado',
'CT': 'Connecticut',
'DC': 'District of Columbia',
'DE': 'Delaware',
'FL': 'Florida',
'GA': 'Georgia',
'GU': 'Guam',
'HI': 'Hawaii',
'IA': 'Iowa',
'ID': 'Idaho',
'IL': 'Illinois',
'IN': 'Indiana',
'KS': 'Kansas',
'KY': 'Kentucky',
'LA': 'Louisiana',
'MA': 'Massachusetts',
'MD': 'Maryland',
'ME': 'Maine',
'MI': 'Michigan',
'MN': 'Minnesota',
'MO': 'Missouri',
'MP': 'Northern Mariana Islands',
'MS': 'Mississippi',
'MT': 'Montana',
'NA': 'National',
'NC': 'North Carolina',
'ND': 'North Dakota',
'NE': 'Nebraska',
'NH': 'New Hampshire',
'NJ': 'New Jersey',
'NM': 'New Mexico',
'NV': 'Nevada',
'NY': 'New York',
'OH': 'Ohio',
'OK': 'Oklahoma',
'OR': 'Oregon',
'PA': 'Pennsylvania',
'PR': 'Puerto Rico',
'RI': 'Rhode Island',
'SC': 'South Carolina',
'SD': 'South Dakota',
'TN': 'Tennessee',
'TX': 'Texas',
'UT': 'Utah',
'VA': 'Virginia',
'VI': 'Virgin Islands',
'VT': 'Vermont',
'WA': 'Washington',
'WI': 'Wisconsin',
'WV': 'West Virginia',
'WY': 'Wyoming'
}
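# A quick usage sketch for the abbreviation map above (self-contained subset of the full dict, for illustration):

```python
# Subset of the full states map above
states = {'CA': 'California', 'TX': 'Texas', 'NY': 'New York'}

# Look up a full name, with a fallback for unknown codes
print(states.get('CA', 'Unknown'))  # California
print(states.get('ZZ', 'Unknown'))  # Unknown

# Invert the mapping to go from full name back to abbreviation
codes = {name: abbr for abbr, name in states.items()}
print(codes['Texas'])               # TX
```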
| toolbox.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.5 64-bit
# metadata:
# interpreter:
# hash: 4cd7ab41f5fca4b9b44701077e38c5ffd31fe66a6cab21e0214b68d958d0e462
# name: python3
# ---
# Visualization of the sigmoid function
import matplotlib.pyplot as plt
import numpy as np
# +
def sigmoid(z):
return 1.0 / (1 + np.exp(-z))
z = np.arange(-7, 7, 0.1)
phi_z = sigmoid(z)
plt.plot(z, phi_z)
plt.axvline(0.0, color='k')
plt.ylim(-0.1, 1.1)
plt.xlabel('z')
plt.ylabel(r'$\phi (z)$')
# y-axis ticks and grid
plt.yticks([0.0, 0.5, 1.0])
ax = plt.gca()
ax.yaxis.grid(True)
plt.show()
# -
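# A couple of numeric sanity checks on the sigmoid above (redefined so the snippet is self-contained): phi(0) = 0.5, and the tails approach 0 and 1.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0))                                    # 0.5
print(round(sigmoid(7), 3), round(sigmoid(-7), 3))   # 0.999 0.001
```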
# Plot illustrating the cost of classifying a single example
def cost_1(z):
return - np.log(sigmoid(z))
def cost_0(z):
return - np.log(1 - sigmoid(z))
z = np.arange(-10, 10, 0.1)
phi_z = sigmoid(z)
c1 = [cost_1(x) for x in z]
plt.plot(phi_z, c1, label='J(w) if y=1')
c0 = [cost_0(x) for x in z]
plt.plot(phi_z, c0, linestyle='--', label='J(w) if y=0')
plt.ylim(0.0, 5.1)
plt.xlim([0, 1])
plt.xlabel(r'$\phi$(z)')
plt.ylabel('J(w)')
plt.legend(loc='best')
plt.show()
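# A quick numeric check of the log-loss pieces plotted above (functions redefined so the snippet is self-contained): a confident correct prediction costs almost nothing, a confident wrong one costs a lot.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_1(z):  # cost when the true label is 1
    return -np.log(sigmoid(z))

def cost_0(z):  # cost when the true label is 0
    return -np.log(1.0 - sigmoid(z))

print(round(float(cost_1(5)), 3))   # ~0.007: phi(5) near 1, label 1 -> tiny cost
print(round(float(cost_1(-5)), 3))  # ~5.007: phi(-5) near 0, label 1 -> big cost
```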
# +
# Logistic regression implementation
# Reworked from the Adaline algorithm
class LogisticRegressionGD(object):
    """Logistic regression classifier using
    gradient descent
    Parameters
    --------
    eta: float
        Learning rate (0.0 - 1.0)
    n_iter: int
        Number of passes over the training set
    random_state: int
        Seed for the random number generator
    Attributes
    -------
    w_: 1d-array
        Weights after fitting
    cost_: list
        Logistic cost value in every epoch
    """
def __init__ (self, eta=0.05, n_iter=100, random_state=1):
self.eta = eta
self.n_iter = n_iter
self.random_state = random_state
def fit(self, X, y):
        """Fit the model using the training data
        Parameters
        --------
        X: {array-like}, shape = [n_samples, n_features]
            Training vectors
        y: {array-like}, shape = [n_samples]
            Target values
        Returns
        -----
        self: object
        """
rgen = np.random.RandomState(self.random_state)
self.w_ = rgen.normal(loc=0.0, scale=0.01, size=1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
net_input = self.net_input(X)
output = self.activation(net_input)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
            # This is where LR differs from Adaline:
            # the cost is computed differently
cost = (-y.dot(np.log(output)) -
((1 - y).dot(np.log(1 - output))))
self.cost_.append(cost)
return self
def net_input(self, X):
        """Compute the net input"""
return np.dot(X, self.w_[1:]) + self.w_[0]
def activation(self, z):
        """Compute the logistic (sigmoid) activation"""
        # This is where LR differs from Adaline:
        # sigmoid activation function
activate_value = 1. / (1. + np.exp(-np.clip(z, -250, 250)))
return activate_value
def predict(self, X):
        """Return the class label after applying the unit step function"""
return np.where(self.net_input(X) >= 0.0, 1, 0)
# +
from sklearn import datasets
import pandas as pd
iris = datasets.load_iris()
X = iris.data[:, [2,3]]
y = iris.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state=1, stratify=y)
X_train_01_subset = X_train[(y_train == 0) | (y_train == 1)]
y_train_01_subset = y_train[(y_train == 0) | (y_train == 1)]
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
# sc.fit computes the sample mean
# and standard deviation
sc.fit(X_train)
# transform - standardizes the data
# using the values computed by fit
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
# +
# let's see what the decision boundary between the classes looks like
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, test_idx = None, resolution=0.02):
    # configure the marker generator and color map
    markers = ('s', 'x', 'o', '^', 'v')
    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
    cmap = ListedColormap(colors[:len(np.unique(y))])
    # plot the decision surface
    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    Z = Z.reshape(xx1.shape)
    plt.contourf(xx1, xx2, Z, alpha=0.3, cmap=cmap)
    plt.xlim(xx1.min(), xx1.max())
    plt.ylim(xx2.min(), xx2.max())
    # plot the class samples
    for idx, cl in enumerate(np.unique(y)):
        plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
                    alpha=0.8, c=colors[idx],
                    marker=markers[idx], label=cl,
                    edgecolor='black')
    # highlight the test samples
    if test_idx:
        # plot all test samples
        X_test, y_test = X[list(test_idx), :], y[list(test_idx)]
        plt.scatter(X_test[:, 0], X_test[:, 1], facecolors='none',
                    alpha=1.0, linewidth=1, marker='o', edgecolors='k',
                    s=100, label='Test set')
# -
lrgd = LogisticRegressionGD(eta=0.05, n_iter=1000)
lrgd.fit(X_train_01_subset, y_train_01_subset)
plot_decision_regions(X=X_train_01_subset, y=y_train_01_subset, classifier = lrgd)
plt.xlabel('Petal length [standardized]')
plt.ylabel('Petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
# +
# Implementation using scikit-learn
from sklearn.linear_model import LogisticRegression
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
lr = LogisticRegression(C=1000.0, random_state=1)
lr.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std,
y_combined,
classifier=lr,
test_idx=range(105, 150))
plt.xlabel('Petal length [standardized]')
plt.ylabel('Petal width [standardized]')
plt.legend(loc='upper left')
plt.show()
# -
# Extracting class-membership probabilities
# The result gives the probability of belonging
# to each class; the probabilities in a row sum to 1
lr.predict_proba(X_test_std[:3, :])
# Return, for each sample, the class
# with the maximum probability
lr.predict_proba(X_test_std[:3, :]).argmax(axis=1)
# And now a plain prediction
# It naturally matches the conclusions above
lr.predict(X_test_std[:3, :])
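# The row-sum and argmax invariants above can be checked on illustrative probabilities (made-up values, not taken from the model):

```python
import numpy as np

# Hypothetical per-class probabilities for two samples
proba = np.array([[0.5, 0.25, 0.25],
                  [0.125, 0.75, 0.125]])

print(proba.sum(axis=1))     # each row sums to 1 -> [1. 1.]
print(proba.argmax(axis=1))  # most probable class per row -> [0 1]
```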
# +
# Predicting on a single sample using reshape
# scikit-learn expects a two-dimensional array;
# we can reshape the sample accordingly
lr.predict(X_test_std[0, :].reshape(1, -1))
# -
weights, params = [], []
for c in np.arange(-5, 5):
lr = LogisticRegression(C=10.**c, random_state=1)
lr.fit(X_train_std, y_train)
weights.append(lr.coef_[1])
params.append(10.**c)
weights = np.array(weights)
plt.plot(params, weights[:,0], label='Petal length')
plt.plot(params, weights[:,1], label='Petal width', linestyle='--')
plt.ylabel('Weight coefficient')
plt.xlabel('C')
plt.legend(loc='upper left')
plt.xscale('log')
plt.show()
| scikit-learn/logistic_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import keras.backend as K
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
import tensorboard as tb
# +
#Load the dataset
(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
# +
#Prototype
model = models.Sequential()
#CNN with 3 hidden layers
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
#Classification
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))
# from_logits=False: the final layer already applies softmax
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="./logs")
history = model.fit(train_images, train_labels, epochs=100, batch_size=32, shuffle=True,
validation_data=(test_images, test_labels), callbacks=[tensorboard_callback])
model.evaluate(test_images, test_labels, verbose=1)
# +
# Prototype performance
fig, ((ax1, ax2)) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
ax1.plot(history.history['accuracy'])
ax1.plot(history.history['val_accuracy'])
ax1.title.set_text('Model accuracy')
ax1.set_ylabel('Accuracy')
ax1.set_xlabel('Epoch')
ax1.legend(['Train', 'Test'], loc='upper left')
ax2.plot(history.history['loss'])
ax2.plot(history.history['val_loss'])
ax2.title.set_text('Model loss')
ax2.set_ylabel('Loss')
ax2.set_xlabel('Epoch')
ax2.legend(['Train', 'Test'], loc='upper left')
plt.savefig('prototype.png')
plt.show()
# -
tf.keras.utils.plot_model(model, to_file='model.png',show_shapes=True, dpi=400)
# +
#Test2 AveragePooling2D
model2 = models.Sequential()
#CNN with 3 hidden layers
model2.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model2.add(layers.AveragePooling2D(2,2))
model2.add(layers.Conv2D(64, (3, 3), activation='relu'))
model2.add(layers.AveragePooling2D(2,2))
model2.add(layers.Conv2D(128, (3, 3), activation='relu'))
model2.add(layers.AveragePooling2D(2,2))
#Classification
model2.add(layers.Flatten())
model2.add(layers.Dense(512, activation='relu'))
model2.add(layers.Dense(10, activation='softmax'))
model2.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
history2 = model2.fit(train_images, train_labels, epochs=100, batch_size=32, shuffle=True,
validation_data=(test_images, test_labels), callbacks=[tensorboard_callback])
model2.evaluate(test_images, test_labels, verbose=1)
# +
#Test3 MaxPooling
model3 = models.Sequential()
#CNN with 3 hidden layers
model3.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model3.add(layers.MaxPooling2D(2,2))
model3.add(layers.Conv2D(64, (3, 3), activation='relu'))
model3.add(layers.MaxPooling2D(2,2))
model3.add(layers.Conv2D(128, (3, 3), activation='relu'))
model3.add(layers.MaxPooling2D(2,2))
#Classification
model3.add(layers.Flatten())
model3.add(layers.Dense(512, activation='relu'))
model3.add(layers.Dense(10, activation='softmax'))
model3.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
history3 = model3.fit(train_images, train_labels, epochs=100, batch_size=32, shuffle=True,
validation_data=(test_images, test_labels), callbacks=[tensorboard_callback])
model3.evaluate(test_images, test_labels, verbose=1)
# +
# Compare pooling
fig, ((ax1, ax2)) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
ax1.plot(history.history['val_accuracy']) #no
ax1.plot(history2.history['val_accuracy']) #avg
ax1.plot(history3.history['val_accuracy']) #max
ax1.title.set_text('Model accuracy')
ax1.set_ylabel('Accuracy')
ax1.set_xlabel('Epoch')
ax1.set_ylim([0.5,1])
ax1.legend(['No pooling', 'Avg pooling', 'Max pooling'], loc='upper left')
ax2.plot(history.history['val_loss'])
ax2.plot(history2.history['val_loss']) #avg
ax2.plot(history3.history['val_loss']) #max
ax2.title.set_text('Model loss')
ax2.set_ylabel('Loss')
ax2.set_xlabel('Epoch')
ax2.legend(['No pooling', 'Avg pooling', 'Max pooling'], loc='upper left')
plt.savefig('pooling.png')
plt.show()
# +
#Test4 RMSprop
model4 = models.Sequential()
#CNN with 3 hidden layers
model4.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model4.add(layers.AveragePooling2D(2,2))
model4.add(layers.Conv2D(64, (3, 3), activation='relu'))
model4.add(layers.AveragePooling2D(2,2))
model4.add(layers.Conv2D(128, (3, 3), activation='relu'))
model4.add(layers.AveragePooling2D(2,2))
#Classification
model4.add(layers.Flatten())
model4.add(layers.Dense(512, activation='relu'))
model4.add(layers.Dense(10, activation='softmax'))
model4.compile(optimizer='RMSprop', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
history4 = model4.fit(train_images, train_labels, epochs=100, batch_size=32, shuffle=True,
validation_data=(test_images, test_labels), callbacks=[tensorboard_callback])
model4.evaluate(test_images, test_labels, verbose=1)
# +
#Test5 Adadelta
model5 = models.Sequential()
#CNN with 3 hidden layers
model5.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model5.add(layers.AveragePooling2D(2,2))
model5.add(layers.Conv2D(64, (3, 3), activation='relu'))
model5.add(layers.AveragePooling2D(2,2))
model5.add(layers.Conv2D(128, (3, 3), activation='relu'))
model5.add(layers.AveragePooling2D(2,2))
#Classification
model5.add(layers.Flatten())
model5.add(layers.Dense(512, activation='relu'))
model5.add(layers.Dense(10, activation='softmax'))
model5.compile(optimizer='Adadelta', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
history5 = model5.fit(train_images, train_labels, epochs=100, batch_size=32, shuffle=True,
validation_data=(test_images, test_labels), callbacks=[tensorboard_callback])
model5.evaluate(test_images, test_labels, verbose=1)
# +
# Compare optimizers
fig, ((ax1, ax2)) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
ax1.plot(history2.history['val_accuracy']) #adam
ax1.plot(history4.history['val_accuracy']) #RMSprop
ax1.plot(history5.history['val_accuracy']) #Adadelta
ax1.title.set_text('Model accuracy')
ax1.set_ylabel('Accuracy')
ax1.set_xlabel('Epoch')
ax1.set_ylim([0,1])
ax1.legend(['Adam', 'RMSprop', 'Adadelta'], loc='upper left')
ax2.plot(history2.history['val_loss']) #adam
ax2.plot(history4.history['val_loss']) #RMSprop
ax2.plot(history5.history['val_loss']) #Adadelta
ax2.title.set_text('Model loss')
ax2.set_ylabel('Loss')
ax2.set_xlabel('Epoch')
ax2.legend(['Adam', 'RMSprop', 'Adadelta'], loc='upper left')
plt.savefig('optimizer.png')
plt.show()
# +
#Test6 BatchNormalization
model6 = models.Sequential()
#CNN with 3 hidden layers
model6.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model6.add(layers.BatchNormalization())
model6.add(layers.AveragePooling2D(2,2))
model6.add(layers.Conv2D(64, (3, 3), activation='relu'))
model6.add(layers.BatchNormalization())
model6.add(layers.AveragePooling2D(2,2))
model6.add(layers.Conv2D(128, (3, 3), activation='relu'))
model6.add(layers.BatchNormalization())
model6.add(layers.AveragePooling2D(2,2))
#Classification
model6.add(layers.Flatten())
model6.add(layers.Dense(512, activation='relu'))
model6.add(layers.Dense(10, activation='softmax'))
model6.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
history6 = model6.fit(train_images, train_labels, epochs=100, batch_size=32, shuffle=True,
validation_data=(test_images, test_labels), callbacks=[tensorboard_callback])
model6.evaluate(test_images, test_labels, verbose=1)
# +
#Test7 BatchNormalization+dropout
model7 = models.Sequential()
#CNN with 3 hidden layers
model7.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model7.add(layers.BatchNormalization())
model7.add(layers.AveragePooling2D(2,2))
model7.add(layers.Conv2D(64, (3, 3), activation='relu'))
model7.add(layers.BatchNormalization())
model7.add(layers.AveragePooling2D(2,2))
model7.add(layers.Conv2D(128, (3, 3), activation='relu'))
model7.add(layers.BatchNormalization())
model7.add(layers.AveragePooling2D(2,2))
#Classification
model7.add(layers.Flatten())
model7.add(layers.Dropout(0.3))
model7.add(layers.Dense(512, activation='relu'))
model7.add(layers.Dense(10, activation='softmax'))
model7.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
history7 = model7.fit(train_images, train_labels, epochs=100, batch_size=32, shuffle=True,
validation_data=(test_images, test_labels), callbacks=[tensorboard_callback])
model7.evaluate(test_images, test_labels, verbose=1)
# +
# Compare BatchNormalization+dropout
fig, ((ax1, ax2)) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
ax1.plot(history2.history['val_accuracy']) #adam only
ax1.plot(history6.history['val_accuracy']) #BatchNormalization
ax1.plot(history7.history['val_accuracy']) #BatchNormalization+dropout
ax1.title.set_text('Model accuracy')
ax1.set_ylabel('Accuracy')
ax1.set_xlabel('Epoch')
ax1.set_ylim([0.5,1])
ax1.legend(['Nothing', 'BatchNormalization', 'BatchNormalization+dropout'], loc='upper left')
ax2.plot(history2.history['val_loss']) #nothing extra
ax2.plot(history6.history['val_loss']) #BatchNormalization
ax2.plot(history7.history['val_loss']) #BatchNormalization+dropout
ax2.title.set_text('Model loss')
ax2.set_ylabel('Loss')
ax2.set_xlabel('Epoch')
ax2.legend(['Nothing', 'BatchNormalization', 'BatchNormalization+dropout'], loc='upper left')
plt.savefig('Addition_improve.png')
plt.show()
# +
#Test8 lr=0.01
model8 = models.Sequential()
#CNN with 3 hidden layers
model8.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model8.add(layers.BatchNormalization())
model8.add(layers.AveragePooling2D(2,2))
model8.add(layers.Conv2D(64, (3, 3), activation='relu'))
model8.add(layers.BatchNormalization())
model8.add(layers.AveragePooling2D(2,2))
model8.add(layers.Conv2D(128, (3, 3), activation='relu'))
model8.add(layers.BatchNormalization())
model8.add(layers.AveragePooling2D(2,2))
#Classification
model8.add(layers.Flatten())
model8.add(layers.Dropout(0.3))
model8.add(layers.Dense(512, activation='relu'))
model8.add(layers.Dense(10, activation='softmax'))
opt = tf.keras.optimizers.Adam(learning_rate=0.01)
model8.compile(optimizer=opt, loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
history8 = model8.fit(train_images, train_labels, epochs=100, batch_size=32, shuffle=True,
validation_data=(test_images, test_labels), callbacks=[tensorboard_callback])
model8.evaluate(test_images, test_labels, verbose=1)
# +
#Test9 lr=0.01 for the first 40 epochs, 0.001 until epoch 80, then 0.0005
# Note: PiecewiseConstantDecay boundaries are counted in optimizer steps,
# not epochs, so convert epochs to steps first
steps_per_epoch = len(train_images) // 32
boundaries = [40 * steps_per_epoch, 80 * steps_per_epoch]
values = [0.01, 0.001, 0.0005]
learning_rate_fn = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries, values)
# Pass the schedule object itself so the rate actually changes during training
learning_rate = learning_rate_fn
model9 = models.Sequential()
#CNN with 3 hidden layers
model9.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model9.add(layers.BatchNormalization())
model9.add(layers.AveragePooling2D(2,2))
model9.add(layers.Conv2D(64, (3, 3), activation='relu'))
model9.add(layers.BatchNormalization())
model9.add(layers.AveragePooling2D(2,2))
model9.add(layers.Conv2D(128, (3, 3), activation='relu'))
model9.add(layers.BatchNormalization())
model9.add(layers.AveragePooling2D(2,2))
#Classification
model9.add(layers.Flatten())
model9.add(layers.Dropout(0.3))
model9.add(layers.Dense(512, activation='relu'))
model9.add(layers.Dense(10, activation='softmax'))
opt = tf.keras.optimizers.Adam(learning_rate=learning_rate)
model9.compile(optimizer=opt, loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
history9 = model9.fit(train_images, train_labels, epochs=100, batch_size=32, shuffle=True,
validation_data=(test_images, test_labels), callbacks=[tensorboard_callback])
model9.evaluate(test_images, test_labels, verbose=1)
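# The schedule above is just a step function over the training step counter; an equivalent pure-Python sketch (illustrative only, not the TF implementation):

```python
def piecewise_constant(step, boundaries, values):
    """Return values[i] for the first boundary the step has not yet passed."""
    for boundary, value in zip(boundaries, values):
        if step <= boundary:
            return value
    return values[-1]  # past the last boundary

boundaries = [40, 80]            # step counts at which the rate drops
values = [0.01, 0.001, 0.0005]   # one more value than boundaries

print(piecewise_constant(10, boundaries, values))   # 0.01
print(piecewise_constant(60, boundaries, values))   # 0.001
print(piecewise_constant(90, boundaries, values))   # 0.0005
```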
# +
# Compare learning rate
fig, ((ax1, ax2)) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
ax1.plot(history7.history['val_accuracy']) #BatchNormalization+dropout
ax1.plot(history8.history['val_accuracy']) #lr=0.01
ax1.plot(history9.history['val_accuracy']) #learning schedule
ax1.title.set_text('Model accuracy')
ax1.set_ylabel('Accuracy')
ax1.set_xlabel('Epoch')
ax1.set_ylim([0.5,1])
ax1.legend(['Fourth Model', 'lr=0.01', 'Applied learning schedule '], loc='upper left')
ax2.plot(history7.history['val_loss']) #BatchNormalization+dropout
ax2.plot(history8.history['val_loss']) #lr=0.01
ax2.plot(history9.history['val_loss']) #learning schedule
ax2.title.set_text('Model loss')
ax2.set_ylabel('Loss')
ax2.set_xlabel('Epoch')
ax2.legend(['Fourth Model', 'lr=0.01', 'Applied learning schedule '], loc='upper left')
plt.savefig('lr.png')
plt.show()
# +
#Test10 extra dense layers (+2)
model10 = models.Sequential()
#CNN with 3 hidden layers
model10.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model10.add(layers.BatchNormalization())
model10.add(layers.AveragePooling2D(2,2))
model10.add(layers.Conv2D(64, (3, 3), activation='relu'))
model10.add(layers.BatchNormalization())
model10.add(layers.AveragePooling2D(2,2))
model10.add(layers.Conv2D(128, (3, 3), activation='relu'))
model10.add(layers.BatchNormalization())
model10.add(layers.AveragePooling2D(2,2))
#Classification
model10.add(layers.Flatten())
model10.add(layers.Dropout(0.3))
model10.add(layers.Dense(512, activation='relu'))
model10.add(layers.Dense(128, activation='relu'))
model10.add(layers.Dense(64, activation='relu'))
model10.add(layers.Dense(10, activation='softmax'))
model10.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False), metrics=['accuracy'])
history10 = model10.fit(train_images, train_labels, epochs=100, batch_size=32, shuffle=True,
validation_data=(test_images, test_labels), callbacks=[tensorboard_callback])
model10.evaluate(test_images, test_labels, verbose=1)
# +
# Compare dense layers
fig, ((ax1, ax2)) = plt.subplots(nrows=1, ncols=2, figsize=(10, 5))
ax1.plot(history7.history['val_accuracy']) #BatchNormalization+dropout
ax1.plot(history10.history['val_accuracy']) #+2 more dense layers
ax1.title.set_text('Model accuracy')
ax1.set_ylabel('Accuracy')
ax1.set_xlabel('Epoch')
ax1.set_ylim([0.5,1])
ax1.legend(['Nothing', '+2 dense layers'], loc='upper left')
ax2.plot(history7.history['val_loss']) #BatchNormalization+dropout
ax2.plot(history10.history['val_loss']) #+2 more dense layers
ax2.title.set_text('Model loss')
ax2.set_ylabel('Loss')
ax2.set_xlabel('Epoch')
ax2.legend(['Nothing', '+2 dense layers'], loc='upper left')
plt.savefig('dense.png')
plt.show()
# -
| cifa10.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt
import pathlib
import skimage
from PIL import Image
import imgaug
from imgaug import augmenters as iaa
from skimage.filters import threshold_otsu
ROOT_DIR = os.path.abspath("../")
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn.config import Config
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# + pycharm={"name": "#%%\n"}
# minimum input size = 128
class ShapesConfig(Config):
# Give the configuration a recognizable name
NAME = "skin"
GPU_COUNT = 1
IMAGES_PER_GPU = 16
NUM_CLASSES = 1 + 2 # background + 2 types
IMAGE_MIN_DIM = 128
IMAGE_MAX_DIM = 128
RPN_ANCHOR_SCALES = (16,32,64,128,256) # anchor side in pixels
TRAIN_ROIS_PER_IMAGE = 8
STEPS_PER_EPOCH = 626 // IMAGES_PER_GPU
VALIDATION_STEPS = 626 // IMAGES_PER_GPU
LEARNING_RATE = 0.001
USE_MINI_MASK = False
# gpu_options = True
config = ShapesConfig()
def get_ax(rows=1, cols=1, size=8):
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
class ShapesDataset(utils.Dataset):
def list_images(self,data_dir):
# define classes
self.add_class("skin", 1, "fibroblast")
self.add_class("skin", 2, "falsePositive")
# data_dir = pathlib.Path('/home/kuki/Desktop/novo/')
# register images
train_images = list(data_dir.glob('*tile*/image/*.png'))
print('# image in this dataset : ',len(train_images))
for idx,train_image in enumerate(train_images):
label = str(train_image).replace("image","mask")
self.add_image("skin",image_id=idx,path=train_image,labelpath=label,
height=config.IMAGE_SHAPE[0],width=config.IMAGE_SHAPE[1])
def load_image(self, image_id):
"""Load the specified image and return a [H,W,3] Numpy array.
"""
# Load image
image = skimage.io.imread(self.image_info[image_id]['path'])
# If grayscale. Convert to RGB for consistency.
if image.ndim != 3:
print('grayscale to rgb')
image = skimage.color.gray2rgb(image)
# If has an alpha channel, remove it for consistency
if image.shape[-1] == 4:
print('rgba to rgb')
image = image[..., :3]
# image = cv2.resize(image,dsize=(256,256))
return image.astype(np.uint8)
# def load_mask(self, image_id):
# label = self.image_info[image_id]['labelpath']
# mask = np.load(label.replace('.tif','mask.npy'))
# class_ids = np.load(label.replace('.tif','classids.npy'))
# class_ids = class_ids + 1
# return mask,class_ids
def load_mask(self, image_id):
label = self.image_info[image_id]['labelpath']
mask = Image.open(label)
mask = np.array(mask).astype('int')
mask = mask[:,:,np.newaxis]
if 'false_positive' in label:
class_ids = np.array([2])
else:
class_ids = np.array([1])
return mask,class_ids
def image_reference(self, image_id):
"""Return the shapes data of the image."""
info = self.image_info[image_id]
if info["source"] == "skin":
return info["path"]  # add_image above stores no "truth" key, so return the image path
else:
return super(self.__class__, self).image_reference(image_id)
# + pycharm={"name": "#%%\n"}
data_dir = pathlib.Path(r'\\kukissd\Kyu_Sync\Aging\data\svs\20x\segmentation')
# CLASS_NAMES = np.array([item.name for item in data_dir.glob('*') if item.name != ".DS_store"])
# print(CLASS_NAMES)
dataset_train = ShapesDataset()
dataset_train.list_images(data_dir)
dataset_train.prepare()
# + pycharm={"name": "#%%\n"}
class InferenceConfig(ShapesConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = 1
IMAGE_MAX_DIM = 128
inference_config = InferenceConfig()
# + pycharm={"name": "#%%\n"}
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
config=inference_config,
model_dir=MODEL_DIR)
# Get path to saved weights
# Either set a specific path or find last trained weights
# model_path = os.path.join(ROOT_DIR, ".h5 file name here")
model_path = model.find_last()
# Load trained weights
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
# + pycharm={"name": "#%%\n"}
# Test on a random image
image_id = random.choice(dataset_train.image_ids)
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_train, inference_config, image_id)
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
dataset_train.class_names, figsize=(8, 8))
# + pycharm={"name": "#%%\n"}
from PIL import Image
from skimage import io
src = r'\\kukissd\Kyu_Sync\Aging\data\svs\20x\segmentation\Wirtz.Denis_OTS-19_5021-003_false_positive_4\image'
dst = os.path.join(src,'classified')
if not os.path.exists(dst): os.mkdir(dst)
images = [os.path.join(src,_) for _ in os.listdir(src) if _.endswith('png')]
idd = []
for original_image in images:
original_image2 = skimage.io.imread(original_image)
results = model.detect([original_image2], verbose=1)
r = results[0]
masks = r['masks']
masks = np.moveaxis(masks,2,0)
if len(masks)<1:
continue
maskzero=np.zeros(masks[0].shape)
for mask,id in zip(masks,r['class_ids']):
idd.append(id)
maskzero[mask]=id
im = Image.fromarray(maskzero.astype(np.uint8))  # cast class ids to uint8; float64 TIFFs are less portable
im.save(os.path.join(dst, os.path.basename(original_image).replace('png','tif')))
print(idd)
| Aging/evaluate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="mOpp2UVvsHxV"
import tensorflow as tf
import matplotlib.pyplot as plt
from keras.datasets import cifar10
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Dropout
from tensorflow.keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator
# + id="6qFW3WIxsH56"
# load the dataset and one-hot encode the target values
def load_ds():
(train_x, train_y), (test_x,test_y) = cifar10.load_data()
train_y = to_categorical(train_y)
test_y = to_categorical(test_y)
return train_x,train_y,test_x,test_y
# + id="1LHQJJnp28IN"
#scale pixels
def pixels_prep(train,test):
train_n = train.astype('float32') #integers to floats
test_n = test.astype('float32')
train_n /= 255.0 #normalize range 0-1
test_n /= 255.0
return train_n, test_n
# + id="S3tr1DHd25Tw"
# CNN model, with accuracy milestones on CIFAR-10:
# 1 VGG block: 67% accuracy
# 2 VGG blocks: 71.5% accuracy
# 3 VGG blocks: 73% accuracy
# + dropout regularization: 82.4% accuracy
# + data augmentation: 84.3% accuracy
def model_def():
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same', input_shape=(32, 32, 3)))
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', padding='same'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(10, activation='softmax'))
# compile model
opt = SGD(learning_rate=0.001, momentum=0.9)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
# + id="kTzyRgZO23Pp"
#plot for learning curves
def diagnostics(history):
plt.subplot(211)
plt.title('Cross Entropy Loss')
plt.plot(history.history['loss'], color='blue',label='train')
plt.plot(history.history['val_loss'], color='orange', label='test')
plt.subplot(212)
plt.title('Classification Accuracy')
plt.plot(history.history['accuracy'], color='blue',label='train')
plt.plot(history.history['val_accuracy'], color='orange', label='test')
# + id="6V1-3bN520lD" colab={"base_uri": "https://localhost:8080/", "height": 349} outputId="14918e4e-b1b2-474f-9c81-f911736ecaad"
#evaluating a model
def run_test():
train_x, train_y, test_x, test_y = load_ds()
train_x, test_x = pixels_prep(train_x,test_x)
model = model_def()
datagen = ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.1, horizontal_flip=True)
it_train = datagen.flow(train_x, train_y, batch_size=64)
#steps = int(train_x.shape[0] / 64)
history = model.fit(it_train, workers=8, epochs=100, validation_data=(test_x,test_y), verbose=0)
_,acc = model.evaluate(test_x, test_y, verbose=0)
print(f'Test accuracy: {acc*100:.2f}%')
diagnostics(history)
run_test()
| image classification/image_classification.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.5 64-bit (''PythonData'': conda)'
# language: python
# name: python37564bitpythondatacondaadf2dc53d8344d2f91c5b97fe5b73276
# ---
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
import copy
import json
from collections import defaultdict
merged_data = "Data/merged.csv"
merged_df = pd.read_csv(merged_data)
merged_df
# filter columns; .copy() avoids pandas SettingWithCopyWarning when the columns are modified below
df = merged_df[['title', 'region', 'calories', 'fat', 'carbs', 'protein', 'summary']].copy()
df
# strip the trailing 'g' unit from these columns
df['fat'] = df['fat'].map(lambda x: x.rstrip('g'))
df['carbs'] = df['carbs'].map(lambda x: x.rstrip('g'))
df['protein'] = df['protein'].map(lambda x: x.rstrip('g'))
df
df.to_csv("Data/mina_chart_data.csv", index=False, header=True)
| foodtopia/templates/Mina_chart_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Recommendations with IBM
#
# In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform.
#
#
# You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, ensure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.**
#
# By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.
#
#
# ## Table of Contents
#
# I. [Exploratory Data Analysis](#Exploratory-Data-Analysis)<br>
# II. [Rank Based Recommendations](#Rank)<br>
# III. [User-User Based Collaborative Filtering](#User-User)<br>
# IV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)<br>
# V. [Matrix Factorization](#Matrix-Fact)<br>
# VI. [Extras & Concluding](#conclusions)
#
# At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import project_tests as t
import pickle
# %matplotlib inline
df = pd.read_csv('data/user-item-interactions.csv')
df_content = pd.read_csv('data/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']
# Show df to get an idea of the data
df.head()
# -
# Show df_content to get an idea of the data
df_content.head()
# ### <a class="anchor" id="Exploratory-Data-Analysis">Part I : Exploratory Data Analysis</a>
#
# Use the dictionary and cells below to provide some insight into the descriptive statistics of the data.
#
# `1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics to assist with giving a look at the number of times each user interacts with an article.
print('There are {} user-article interactions in df data'.format(df.shape[0]))
print('There are {} unique articles in df data.'.format(df['article_id'].nunique()))
print('There are {} unique users interacting with all articles.'.format(df['email'].nunique()))
df.isna().sum()
# There are no NaN values in the df data.
df.groupby('email').count().describe()
# We can see that users interact with a minimum of 1 article and a maximum of 364 articles; 75% of users interact with fewer than 10 articles.
df_inter = df.groupby('email').count()['title']
df_inter.name = 'interaction_count'  # df_inter is a Series, so set .name (assigning .columns has no effect)
# Let's check the distribution plot of number of articles one user interacts with.
plt.figure(figsize=(6,4))
ax=df_inter.hist(bins=40);
ax.set_xlim((0,400));
plt.title('Distribution of number of articles a user interacts with');
plt.xlabel('Number of articles one user interacts with');
plt.ylabel('Count');
# Take a closer look at the histogram with counts between 0 and 100. Most users interact with fewer than 100 articles; a few interact with many more, up to 364.
plt.figure(figsize=(6,4))
ax=df_inter.hist(bins=np.arange(1,100,2));
ax.set_xlim((0,50));
plt.title('Distribution of number of articles a user interacts with');
plt.xlabel('Number of articles one user interacts with');
plt.ylabel('Count');
a=df.groupby('article_id')
# a.groups.keys()
# len(a.get_group(1430.0))
a.get_group(0.0)
# +
# Fill in the median and maximum number of user_article interactios below
median_val = df_inter.median() # 50% of individuals interact with ____ number of articles or fewer.
max_views_by_user = df_inter.max() # The maximum number of user-article interactions by any 1 user is ______.
print('50% of individuals interact with {} number of articles or fewer.'.format(round(median_val)))
print('The maximum number of user-article interactions by any 1 user is {}.'.format(max_views_by_user))
# -
# `2.` Explore and remove duplicate articles from the **df_content** dataframe.
# Remove any rows that have the same article_id - only keep the first
df_content.drop_duplicates(subset='article_id', keep='first', inplace=True)
# `3.` Use the cells below to find:
#
# **a.** The number of unique articles that have an interaction with a user.
# **b.** The number of unique articles in the dataset (whether they have any interactions or not).<br>
# **c.** The number of unique users in the dataset. (excluding null values) <br>
# **d.** The number of user-article interactions in the dataset.
unique_articles = df[~df['article_id'].isna()]['article_id'].nunique() # The number of unique articles that have at least one interaction
total_articles = df_content[~df_content['article_id'].isna()]['article_id'].nunique() # The number of unique articles on the IBM platform
unique_users = df[~df['email'].isna()]['email'].nunique() # The number of unique users
user_article_interactions = df.shape[0] # The number of user-article interactions
print('The number of unique articles that have at least one interaction is {}.'.format(unique_articles))
print('The number of unique articles on the IBM platform is {}.'.format(total_articles))
print('The number of unique users is {}.'.format(unique_users))
print('The number of user-article interactions is {}.'.format(user_article_interactions))
# `4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).
df.isna().sum()
ranked_article = df.groupby('article_id').count().sort_values(by='title',ascending=False)
# The most viewed article in the dataset as a string with one value following the decimal
most_viewed_article_id = ranked_article.index[0].astype(str)
# The most viewed article in the dataset was viewed how many times?
max_views = ranked_article.iloc[0,:]['title']
print('The most viewed article in the dataset is {} with {} viewings'.format(most_viewed_article_id,max_views))
# +
## No need to change the code here - this will be helpful for later parts of the notebook
# Run this cell to map the user email to a user_id column and remove the email column
def email_mapper():
coded_dict = dict()
cter = 1
email_encoded = []
for val in df['email']:
if val not in coded_dict:
coded_dict[val] = cter
cter+=1
email_encoded.append(coded_dict[val])
return email_encoded
email_encoded = email_mapper()
del df['email']
df['user_id'] = email_encoded
# show header
df.head()
# +
## If you stored all your results in the variable names above,
## you shouldn't need to change anything in this cell
sol_1_dict = {
'`50% of individuals have _____ or fewer interactions.`': median_val,
'`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,
'`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,
'`The most viewed article in the dataset was viewed _____ times.`': max_views,
'`The article_id of the most viewed article is ______.`': most_viewed_article_id,
'`The number of unique articles that have at least 1 rating ______.`': unique_articles,
'`The number of unique users in the dataset is ______`': unique_users,
'`The number of unique articles on the IBM platform`': total_articles
}
# Test your dictionary against the solution
t.sol_1_test(sol_1_dict)
# -
# ### <a class="anchor" id="Rank">Part II: Rank-Based Recommendations</a>
#
# Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.
#
# `1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below.
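# Before filling it in, here is a minimal sketch of rank-based popularity on toy data (the article ids below are hypothetical, not from the IBM dataset):

```python
import pandas as pd

# Toy interaction log: one row per user-article interaction (hypothetical ids)
toy = pd.DataFrame({'article_id': [10, 20, 10, 30, 10, 20],
                    'title': ['a', 'b', 'a', 'c', 'a', 'b']})

# Popularity is just the interaction count per article, most-viewed first
counts = toy['article_id'].value_counts()
top_2 = list(counts.index[:2])
print(top_2)  # [10, 20] -> article 10 has 3 interactions, article 20 has 2
```

# The functions below compute the same ranking on the full `df` via `groupby`.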
# +
def get_top_articles(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
# Group df by article id and sort by its counts in descending order
ranked_article = df.groupby('article_id').count().sort_values(by='title',ascending=False)
# Get the titles of top articles
top_articles_id = get_top_article_ids(n)
top_articles = []
for id in top_articles_id:
# Get the first title whose article id matches this id
top_articles.append(df[df['article_id'].astype('str').str.match(id)]['title'].iloc[0])
return top_articles # Return the top article titles from df (not df_content)
def get_top_article_ids(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article ids
'''
# Group df by article id and sort by its counts in descending order
ranked_article = df.groupby('article_id').count().sort_values(by='title',ascending=False)
# Get the most viewed article in the dataset as a string
top_articles = list(ranked_article.index[0:n].astype(str))
return top_articles # Return the top article ids
# -
print(get_top_articles(10))
print(get_top_article_ids(10))
# +
# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)
# Test each of your three lists from above
t.sol_2_test(get_top_articles)
# -
# ### <a class="anchor" id="User-User">Part III: User-User Based Collaborative Filtering</a>
#
#
# `1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns.
#
# * Each **user** should only appear in each **row** once.
#
#
# * Each **article** should only show up in one **column**.
#
#
# * **If a user has interacted with an article, then place a 1 where the user-row meets for that article-column**. It does not matter how many times a user has interacted with the article, all entries where a user has interacted with an article should be a 1.
#
#
# * **If a user has not interacted with an item, then place a zero where the user-row meets for that article-column**.
#
# Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
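# The reshaping described above can be sketched on toy data (hypothetical user and article ids); the same groupby/unstack idea works on the full `df`:

```python
import pandas as pd

# Toy log: user 1 saw articles 10 and 20; user 2 saw article 10 twice
toy = pd.DataFrame({'user_id': [1, 1, 2, 2], 'article_id': [10, 20, 10, 10]})
toy['interaction'] = 1

# Binary user-item matrix: max() collapses repeat views into a single 1,
# and unseen user-article pairs become NaN, then 0
user_item_toy = (toy.groupby(['user_id', 'article_id'])['interaction']
                 .max().unstack().fillna(0))
print(user_item_toy)
```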
# +
# create the user-article matrix with 1's and 0's
def create_user_item_matrix(df):
'''
INPUT:
df - pandas dataframe with article_id, title, user_id columns
OUTPUT:
user_item - user item matrix
Description:
Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with
an article and a 0 otherwise
'''
# Add one more column in df for number of interactions with value 1 for all rows
df['interaction'] = 1
user_item = df.groupby(['user_id','article_id'])['interaction'].max().unstack()
user_item.fillna(0,inplace=True)
return user_item # return the user_item matrix
user_item = create_user_item_matrix(df)
# -
user_item.head(2)
## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
# `2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users.
#
# Use the tests to test your function.
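# On binary rows the dot product counts the articles two users have in common, which is why it works as a similarity here; a quick sketch with a hypothetical matrix:

```python
import numpy as np

# Rows = users, columns = articles (1 = interacted)
u = np.array([[1, 1, 0, 1],   # user A
              [1, 0, 0, 1],   # user B shares two articles with A
              [0, 0, 1, 0]])  # user C shares none

# Similarity of user A to every user: dot product with each row
sim = u @ u[0]
print(sim)  # [3 2 0]: A matches itself on 3 articles, B on 2, C on 0
```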
def find_similar_users(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user_id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
similar_users - (list) an ordered list where the closest users (largest dot product users)
are listed first
Description:
Computes the similarity of every pair of users based on the dot product
Returns an ordered
'''
user1 = user_id
# compute similarity of each user to the provided user
user_sim = np.dot(user_item.loc[user1,:],user_item.T)
# Get list of user_id ordered by largest similarity first
most_similar_users = user_item.index[np.argsort(user_sim)[::-1]]
# remove user's own id
index_self = np.where(most_similar_users==user1)[0][0]
most_similar_users = np.delete(most_similar_users, index_self)
return most_similar_users # return a list of the users in order from most to least similar
# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
# +
def get_article_names(article_ids, df=df):
'''
INPUT:
article_ids - (list) a list of article ids
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the title column)
'''
# Find article titles from article ids
article_names = np.array([])
for id in article_ids:
name = df[df['article_id'] == float(id)]['title'].unique()  # float() rather than np.float_, which NumPy 2.0 removed
article_names = np.concatenate([article_names, name], axis=0)
return article_names # Return the article names associated with list of article ids
def get_user_articles(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
article_ids - (list) a list of the article ids seen by the user
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the title column in df)
(this is identified by the doc_full_name column in df_content, but some articles are missing)
Description:
Provides a list of the article_ids and article titles that have been seen by a user
'''
# Get article ids user has seen
article_ids = np.array(user_item.loc[user_id, user_item.loc[user_id,:]==1].index.astype('str'))
# Get the article names according to ids
# article_names = list(df_content[df_content['article_id'].isin(list(np.float_(article_ids)))]['doc_full_name'])
article_names = get_article_names(article_ids)
return article_ids, article_names # return the ids and names
def user_user_recs(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommended article ids for the user
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
Users with the same closeness are chosen arbitrarily as the 'next' user
For the user where the number of recommended articles starts below m
and ends exceeding m, the last items are chosen arbitrarily
'''
# Get the article list already seen by the user_id
article_ids_seen, article_names_seen = get_user_articles(user_id)
# Get the list of most similar users ids
most_similar_users = find_similar_users(user_id)
# Keep the recommended articles here
recs = np.array([])
for id in most_similar_users:
article_ids_like, article_names_like = get_user_articles(id)
#Obtain recommendations from each neighbor only on the unseen articles
new_recs = np.setdiff1d(article_ids_like, np.intersect1d(article_ids_like, article_ids_seen, assume_unique=True), assume_unique=True)
# Update recs with new recs
recs = np.unique(np.concatenate([new_recs, recs], axis=0))
# If we have enough recommendations exit the loop
if len(recs) > m-1:
break
return recs[0:min(m,len(recs))] # return your recommendations for this user_id
# -
# Check Results
get_article_names(user_user_recs(1, 10)) # Return 10 recommendations for user 1
# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])
print("If this is all you see, you passed all of our tests! Nice job!")
# `4.` Now we are going to improve the consistency of the **user_user_recs** function from above.
#
# * Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.
#
#
# * Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the **top_articles** function you wrote earlier.
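# The tie-breaking rules above amount to a two-key sort; a small sketch with made-up numbers:

```python
import pandas as pd

# Toy neighbor table: users 5 and 7 are tied on similarity
neighbors = pd.DataFrame({'neighbor_id': [5, 7, 9],
                          'similarity': [4, 4, 2],
                          'num_interactions': [12, 30, 50]})

# Sort by similarity first; break ties with total interactions
ranked = neighbors.sort_values(by=['similarity', 'num_interactions'],
                               ascending=False)
print(list(ranked['neighbor_id']))  # [7, 5, 9]
```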
# +
def get_top_sorted_users(user_id, df=df, user_item=user_item):
'''
INPUT:
user_id - (int)
df - (pandas dataframe) df as defined at the top of the notebook
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
neighbors_df - (pandas dataframe) a dataframe with:
neighbor_id - is a neighbor user_id
similarity - measure of the similarity of each user to the provided user_id
num_interactions - the number of articles viewed by the user
Other Details - sort the neighbors_df by the similarity and then by number of interactions where
highest of each is higher in the dataframe
'''
user1 = user_id
neighbors_df= pd.DataFrame(columns=['neighbor_id','similarity','num_interactions'])
# compute similarity of each user to the provided user
user_sim = np.dot(user_item.loc[user1,:], user_item.T)
neighbors_df['neighbor_id'] = user_item.index
neighbors_df['similarity'] = user_sim
neighbors_df['num_interactions'] = df.groupby('user_id').count()['interaction'].loc[user_item.index].values  # align counts by user id, not row position
# sort by similarity
neighbors_df.sort_values(by=['similarity','num_interactions'],ascending=False,inplace=True)
# Remove user_id itself
neighbors_df.reset_index(inplace=True)
neighbors_df.drop(['index'],axis=1,inplace=True)
index_self=np.where(neighbors_df['neighbor_id']==user_id)[0][0]
neighbors_df.drop(index_self,axis=0,inplace=True)
return neighbors_df # Return the dataframe specified in the doc_string
def user_user_recs_part2(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user by article id
rec_names - (list) a list of recommendations for the user by article title
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
* Choose the users that have the most total article interactions
before choosing those with fewer article interactions.
* Choose the articles with the most total interactions
before choosing those with fewer total interactions.
'''
# Get the article list already seen by the user_id
article_ids_seen, article_names_seen = get_user_articles(user_id)
# Get the sorted neighbor users' data (columns=['neighbor_id','similarity','num_interactions'])
neighbors = get_top_sorted_users(user_id)
# Keep the recommended articles here
recs = np.array([])
rec_names = np.array([])
for id in neighbors['neighbor_id']:
article_ids_like, article_names_like = get_user_articles(id)
# Obtain recommendations from each neighbor only on the unseen articles
new_recs = np.setdiff1d(article_ids_like, np.intersect1d(article_ids_like, article_ids_seen, assume_unique=True), assume_unique=True)
# Sort new_recs by number of interactions
new_recs_sort = df[df['article_id'].isin(new_recs)].groupby('article_id').count().sort_values(by='interaction',ascending=False).index.values
# Update recs with new recs
rec_temp = np.concatenate([recs, new_recs_sort], axis=0)
ordered_idx = np.unique(rec_temp,return_index=True)[1]
recs = rec_temp[sorted(ordered_idx)].astype(str) # Unique article ids in the same order as in rec_temp
# If we have enough recommendations exit the loop
if len(recs) > m-1:
break
recs = recs[:m]
rec_names = get_article_names(recs)
return recs, rec_names
# -
## Check Numpy set routines
a = np.array([1, 2, 3, 2, 4, 1])
b = np.array([3, 4, 5, 6])
c = np.array([5, 6, 3, 2, 4, 1])
print('a-b: ', np.setdiff1d(a, b))
print('b-a: ', np.setdiff1d(b, a))
print('a&b&c: ', np.intersect1d(np.intersect1d(a,b),c))
# +
## Check code on this alternative solutions
user_id=20
# Get the article list already seen by the user_id
article_ids_seen, article_names_seen = get_user_articles(user_id)
# Get the sorted neighbor users' data (columns=['neighbor_id','similarity','num_interactions'])
neighbors = get_top_sorted_users(user_id)
print(neighbors.iloc[0:3])
recs = np.array([])
rec_names = np.array([])
for id in neighbors['neighbor_id'].iloc[0:3]:
article_ids_like, article_names_like = get_user_articles(id)
# Obtain recommendations from each neighbor only on the unseen articles
new_recs_tmp = np.setdiff1d(article_ids_like, np.intersect1d(article_ids_like, article_ids_seen, assume_unique=True), assume_unique=True)
# Further clean up the recommendations only on articles that haven't been recommended yet
new_recs = np.setdiff1d(new_recs_tmp, np.intersect1d(new_recs_tmp, recs, assume_unique=True), assume_unique=True)
# Sort new_recs by number of interactions
new_recs_sort = df[df['article_id'].isin(new_recs)].groupby('article_id').count().sort_values(by='interaction',ascending=False).index.values
# Update recs with new recs
recs = np.concatenate([recs, new_recs_sort], axis=0).astype(str)
print()
print('From user ', id)
print('Sorted unseen articles to be recommended: ', new_recs_sort)
print('Updated recommendations: ', recs)
# -
# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
# `5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.
### Tests with a dictionary of results
user1_most_sim = get_top_sorted_users(1).iloc[0,:]['neighbor_id'] # Find the user that is most similar to user 1
user131_10th_sim = get_top_sorted_users(131).iloc[9,:]['neighbor_id'] # Find the 10th most similar user to user 131
# +
## Dictionary Test Here
sol_5_dict = {
'The user that is most similar to user 1.': user1_most_sim,
'The user that is the 10th most similar to user 131': user131_10th_sim,
}
t.sol_5_test(sol_5_dict)
# -
# `6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.
# **Provide your response here.**
#
# **Given a new user, we cannot make recommendations with the functions above, because collaborative filtering requires user-article interaction history and a new user has none. For this cold start problem, we can instead recommend the top-ranked (most popular) articles.**
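# The rank-based fallback can be sketched in a few lines. This is a minimal illustration with a toy interactions frame; in this notebook that role is played by `get_top_article_ids`:

```python
import pandas as pd

def top_article_ids(df, n=10):
    """Return the n most-interacted-with article ids (rank-based fallback)."""
    # Count interactions per article and keep the n largest counts
    return df['article_id'].value_counts().head(n).index.tolist()

# Toy interactions: article 'b' is most popular, then 'a', then 'c'
toy = pd.DataFrame({'user_id':    [1, 1, 2, 2, 3, 3],
                    'article_id': ['a', 'b', 'b', 'c', 'b', 'a']})
print(top_article_ids(toy, n=2))
```

# Because the ranking uses no per-user information, the same list is returned for every new user.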
# `7.` Using your existing functions, provide the top 10 recommended articles you would provide for a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.
# +
new_user = '0.0'
# What would your recommendations be for this new user '0.0'? As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to the new user
new_user_recs = get_top_article_ids(n=10) # Your recommendations here
# +
assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."
print("That's right! Nice job!")
# -
# ### <a class="anchor" id="Content-Recs">Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED)</a>
#
# Another method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns hold content related information.
#
# `1.` Use the function body below to create a content based recommender. Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations.
#
# ### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
def make_content_recs():
'''
INPUT:
OUTPUT:
'''
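# One possible (purely illustrative) way to fill in the stub above is to vectorize article titles with TF-IDF and recommend the nearest neighbors of a seed article. Everything below is a hypothetical sketch, not the project's required solution; the toy titles stand in for a column like `doc_full_name`:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import pandas as pd

def content_recs(article_idx, docs, m=2):
    """Recommend the m articles whose text is most similar to docs[article_idx]."""
    tfidf = TfidfVectorizer(stop_words='english').fit_transform(docs)
    # Cosine similarity between the seed article and every article
    sims = cosine_similarity(tfidf[article_idx], tfidf).ravel()
    order = sims.argsort()[::-1]              # most similar first
    return [int(i) for i in order if i != article_idx][:m]

titles = pd.Series(['healthcare python spark',
                    'use deep learning for healthcare',
                    'visualize data with spark'])
print(content_recs(0, titles, m=2))
```

# A natural refinement, as the prompt suggests, is to break similarity ties by popularity (number of interactions).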
# `2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender?
#
# ### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
# **Write an explanation of your content based recommendation system here.**
# `3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.
#
# ### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
# +
# make recommendations for a brand new user
# make a recommendations for a user who only has interacted with article id '1427.0'
# -
# ### <a class="anchor" id="Matrix-Fact">Part V: Matrix Factorization</a>
#
# In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.
#
# `1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook.
# +
# Load the matrix here
# user_item_matrix = pd.read_pickle('user_item_matrix.p')
# -
# quick look at the matrix
# user_item_matrix.head()
user_item.head()
# `2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
# +
# Perform SVD on the User-Item Matrix Here
#u, s, vt = np.linalg.svd(user_item_matrix, full_matrices=True) # use the built in to get the three matrices
u, s, vt = np.linalg.svd(user_item, full_matrices=True) # use user_item to get the three matrices
s.shape, u.shape, vt.shape
# -
# **Provide your response here.**
#
# **We can obtain all three matrices directly from traditional Singular Value Decomposition because the user_item matrix contains no NaN values.**
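# The difference from the lesson is that the lesson's ratings matrix contained missing values, which forced a gradient-descent variant (FunkSVD); here every cell of `user_item` is a 0 or 1, so closed-form SVD applies directly. A quick self-contained check on a toy dense matrix (not the project data):

```python
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((6, 4)) > 0.5).astype(float)  # dense 0/1 "user-item" toy matrix

u, s, vt = np.linalg.svd(A, full_matrices=False)

# Full-rank product recovers A exactly (up to floating-point error)
assert np.allclose(A, u @ np.diag(s) @ vt)

# Truncating to k latent features gives the best rank-k approximation,
# so the Frobenius error can only shrink as k grows (Eckart-Young)
def rank_k_error(k):
    A_k = u[:, :k] @ np.diag(s[:k]) @ vt[:k, :]
    return np.linalg.norm(A - A_k)

errors = [rank_k_error(k) for k in range(1, 5)]
print(errors)
```

# With NaNs present, these LAPACK-based routines cannot be used at all, which is why the lesson needed an iterative method.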
# `3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
# +
num_latent_feats = np.arange(10,700+10,20)
sum_errs = []
for k in num_latent_feats:
# restructure with k latent features
s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
# diffs = np.subtract(user_item_matrix, user_item_est)
diffs = np.subtract(user_item, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))
sum_errs.append(err)
# plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);
plt.plot(num_latent_feats, 1 - np.array(sum_errs)/user_item.count().sum());
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
# -
# `4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below.
#
# Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below:
#
# * How many users can we make predictions for in the test set?
# * How many users are we not able to make predictions for because of the cold start problem?
# * How many articles can we make predictions for in the test set?
# * How many articles are we not able to make predictions for because of the cold start problem?
# +
df_train = df.head(40000)
df_test = df.tail(5993)
def create_test_and_train_user_item(df_train, df_test):
'''
INPUT:
df_train - training dataframe
df_test - test dataframe
OUTPUT:
user_item_train - a user-item matrix of the training dataframe
(unique users for each row and unique articles for each column)
user_item_test - a user-item matrix of the testing dataframe
(unique users for each row and unique articles for each column)
test_idx - all of the test user ids
test_arts - all of the test article ids
'''
# Get user_item matrix for training data
user_item_train = create_user_item_matrix(df_train)
# Get user_item matrix for test data
user_item_test = create_user_item_matrix(df_test)
# Record user and article id for test data
test_idx = user_item_test.index.values
test_arts = user_item_test.columns.values
return user_item_train, user_item_test, test_idx, test_arts
user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)
# +
# Find how many users in test data can be found in the training data
# n_user = df_train[df_train['user_id'].isin(test_idx)]['user_id'].nunique()
n_user = user_item_train.index.isin(test_idx).sum()
print(n_user)
# How many users in test data are not in the training data?
print(df_test['user_id'].nunique()-n_user)
# Find how many articles in test data can be found in the training data
n_article = df_train[df_train['article_id'].isin(test_arts)]['article_id'].nunique()
print(n_article)
# How many articles in test data are not in the training data?
print(df_test['article_id'].nunique()-n_article)
# +
# Replace the values in the dictionary below
a = 662
b = 574
c = 20
d = 0
sol_4_dict = {
'How many users can we make predictions for in the test set?': c,
'How many users in the test set are we not able to make predictions for because of the cold start problem?': a,
'How many movies can we make predictions for in the test set?': b,
'How many movies in the test set are we not able to make predictions for because of the cold start problem?': d
}
t.sol_4_test(sol_4_dict)
# -
# `5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`.
#
# Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data.
# fit SVD on the user_item_train matrix
u_train, s_train, vt_train = np.linalg.svd(user_item_train, full_matrices=False)
# Use these cells to see how well you can use the training
# decomposition to predict on both training and test data
def predict_user_item(k, u, s, vt, user_item):
'''
INPUT:
k - number of latent features used on prediction
u - u matrix from SVD on data
s - s matrix from SVD on data
vt - vt matrix from SVD on data
user_item - a user-item matrix of the data dataframe
OUTPUT:
err: prediction error on the data
'''
# prediction using restructure matrices with k latent features
pred = np.around(np.dot(np.dot(u, np.diag(s)), vt))
# compute error for prediction to actual value
diffs = np.subtract(user_item, pred)
# Sum of error for prediction
err = np.sum(np.sum(np.abs(diffs)))
return err
# +
# plot prediction error using different number of latent features
num_latent_feats = np.arange(10,500+10,20)
sum_errs_train = []
sum_errs_test = []
for k in num_latent_feats:
# Prediction error on training data
err_train = predict_user_item(k, u_train[:, :k], s_train[:k], vt_train[:k, :], user_item_train)
sum_errs_train.append(err_train)
# Prediction error on test data for only users existing in training data
# Get the predictable user and article index in test data which also exist in training data
predict_user_row = user_item_train.index.isin(test_idx)
predict_user_idx = np.array(user_item_train.index[predict_user_row])
predict_arts_col = user_item_train.columns.isin(test_arts)
predict_arts_idx = np.array(user_item_train.columns[predict_arts_col])
# slice matrices for test data
u_test = u_train[predict_user_row, :]
s_test = s_train
vt_test = vt_train[:, predict_arts_col]
user_item_test_sec = user_item_test.loc[predict_user_idx, predict_arts_idx]
# Prediction error
err_test = predict_user_item(k, u_test[:, :k], s_test[:k], vt_test[:k, :], user_item_test_sec)
sum_errs_test.append(err_test)
plt.plot(num_latent_feats, 1 - np.array(sum_errs_train)/user_item_train.count().sum(), label='Train');
plt.plot(num_latent_feats, 1 - np.array(sum_errs_test)/user_item_test_sec.count().sum(), label='Test');
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
plt.legend(loc='center right');
# -
# `6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles?
user_item_train.shape
# **Your response here.**
#
# **From the plot, we can see that as the number of latent features increases, the prediction accuracy on the training data increases, but the prediction accuracy on the test data decreases. The training data contains information for 4487 users and 714 articles, and SVD lets us make predictions for all of them; more latent features keep improving the training accuracy. However, we can only make predictions for 20 users in the test data, because the training data contains information for only those 20 test users. That overlapping slice of the user_item matrix (and of the SVD factor matrices) is small, so adding more latent features does not improve the test accuracy.**
#
# **To improve the prediction accuracy on the test data, we can**
#
# - Include more of the test users' and test articles' interactions in the training data, or
# - Use another recommendation method, such as a content-based or knowledge-based approach, to deal with the cold start problem for new users.
#
# **To determine whether the above recommendation system is an improvement over how users currently find articles, we can run a web-based A/B test:**
#
# - Divide users who visit the IBM Watson Studio platform into a control group and an experiment group by cookies or tokens.
# - Assign users to the groups randomly and keep the group sizes similar.
# - Make no change for users in the control group; show recommendations to users in the experiment group.
# - Precalculate the experiment size and duration from the number of users visiting each day, the minimum improvement worth detecting, and the confidence level, then run the test for long enough.
# - Define and analyze metrics such as user interactions with articles, time spent reading, etc.
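# The "precalculate the experiment size" step can be sketched with the standard two-proportion power approximation, using only the standard library. The baseline rate and minimum detectable lift below are made-up illustration values:

```python
from statistics import NormalDist
from math import ceil

def sample_size_per_group(p0, lift, alpha=0.05, power=0.80):
    """Approximate users needed per group for a two-sided two-proportion z-test."""
    p1 = p0 + lift
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    p_bar = (p0 + p1) / 2
    n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
          + z_b * (p0 * (1 - p0) + p1 * (1 - p1)) ** 0.5) ** 2) / lift ** 2
    return ceil(n)

# Hypothetical: 10% baseline click-through rate, hoping to detect a 2-point lift
print(sample_size_per_group(0.10, 0.02))
```

# Dividing the result by the expected daily traffic per group gives the minimum run time for the test.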
#
# <a id='conclusions'></a>
# ### Extras
# Using your workbook, you could now save your recommendations for each user, develop a class to make new predictions and update your results, and make a Flask app to deploy your results. These tasks are beyond what is required for this project. However, from what you learned in the lessons, you are certainly capable of taking these tasks on to improve upon your work here!
#
#
# ## Conclusion
#
# > Congratulations! You have reached the end of the Recommendations with IBM project!
#
# > **Tip**: Once you are satisfied with your work here, check over your report to make sure that it satisfies all the areas of the [rubric](https://review.udacity.com/#!/rubrics/2322/view). You should also probably remove all of the "Tips" like this one so that the presentation is as polished as possible.
#
#
# ## Directions to Submit
#
# > Before you submit your project, you need to create a .html or .pdf version of this notebook in the workspace here. To do that, run the code cell below. If it worked correctly, you should get a return code of 0, and you should see the generated .html file in the workspace directory (click on the orange Jupyter icon in the upper left).
#
# > Alternatively, you can download this report as .html via the **File** > **Download as** submenu, and then manually upload it into the workspace directory by clicking on the orange Jupyter icon in the upper left, then using the Upload button.
#
# > Once you've done this, you can submit your project by clicking on the "Submit Project" button in the lower right here. This will create and submit a zip file with this .ipynb doc and the .html or .pdf version you created. Congratulations!
from subprocess import call
call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])
| Recommendations_with_IBM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Summary & Further Resources
# That was a long tutorial! But hey, when was the last time you saw a non-trivial one?
#
# To recap, the H1ST.AI principles & ideas we've learned:
# * Leverage use-case analysis to decompose problems and adopt different models at the right level of abstractions
# * Encoding human experience as a model
# * Combine human experience and data-driven insights to work harmoniously in a H1st Graph
#
# Most importantly, we have used H1ST.AI to tackle a real-world challenging automotive cybersecurity problem, for which attack event labels are not available to start with, hence solving the Cold Start problem.
#
# It is important to stress that this is still a toy example IDS, and much more is needed to handle other attacks (e.g. replacement attacks, where a whole ECU can be compromised and normal messages silenced, so that no zig-zag pattern appears), not to mention on-device vs. cloud deployment, OTA updates, etc. But it is clear that adopting H1ST.AI makes the problem much more tractable and explainable.
#
# The H1ST.AI framework further provides productivity tools for a team of Data Scientists and domain experts to collaborate on such complex software projects. In particular, we've seen our own productivity improve vastly when moving from a spaghetti code jungle of ML to a more principled H1ST project structure, making use of the H1ST Model API & repository as well as Graph.
#
# Excited? [Star/fork our Github repo](https://github.com/h1st-ai/h1st), we're open-source! Especially check out the "Quick Start" section.
| tutorials/4. Summary & Further Resources.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Time for K2SciCon!
#
# <NAME>
#
# We need to make some figures for the poster
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import MaxNLocator
import seaborn as sns
import pandas as pd
# %config InlineBackend.figure_format = 'retina'
# %matplotlib inline
# I want to recreate Gully's 3x3 spectra figure but with poster-appropriate aesthetics
import pandas as pd
all_avail = [106, 107, 109, 110, 113, 114, 116, 118, 119]
no_plots = [101]
ms_forward = np.array(list(set(all_avail) - set(no_plots)), dtype=int)  # np.int was removed in NumPy 1.24
ms = ms_forward[::-1] # make it an even 9 plots.
len(ms), ms
# +
ii = -1
nrows = 3
ncols = 3
#----------------------------------------------------------------------
# Plot the results
sns.set_context("talk")
fig = plt.figure(figsize=(8.0, 11.0))
fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05)
for j in range(ncols):
for i in range(nrows):
ii += 1
print(ii)
m = ms[ii]
#label with the teff and fill factor of that order.
try:
with open('../sf/m{:03d}/output/mix_emcee/run02/models_ff-05_50_95.csv'.format(m)) as f:
dat = pd.read_csv(f)
ws = np.load("../sf/m{:03d}/output/mix_emcee/run02/emcee_chain.npy".format(m))
burned = ws[:, -200:,:]
except:
ws = np.load("../sf/m{:03d}/output/mix_emcee/run01/emcee_chain.npy".format(m))
burned = ws[:, -200:,:]
with open('../sf/m{:03d}/output/mix_emcee/run01/models_ff-05_50_95.csv'.format(m)) as f:
dat = pd.read_csv(f)
xs, ys, zs = burned.shape
fc = burned.reshape(xs*ys, zs)
ff = 10**fc[:, 7]/(10**fc[:, 7]+10**fc[:, 5])
inds_sorted = np.argsort(ff)
fc_sorted = fc[inds_sorted]
ps_med = fc_sorted[4000]
fill_factor = 10**ps_med[7]/(10**ps_med[7]+10**ps_med[5])
t_spot = ps_med[6]
str1 = "$f =$ {:0.0%}, ".format(fill_factor)
str2 = "$T_{cool} = $" + "{:.0f}, ".format(ps_med[6])
str3 = "$T_{hot} = $" + "{:.0f}".format(ps_med[0])
fmt_str = "$f =$ {:0.0%} $T_c =$ {:0.0f} $T_h =$ {:0.0f}"
str_all = fmt_str.format(fill_factor, np.round(ps_med[6], -1), np.round(ps_med[0], -1))
sample_label = str_all
ax = fig.add_subplot(nrows, ncols, ncols * j + 1 + i)
lw=0.75
ax.step(dat['wl'], dat['data'], '-', color='#7f8c8d', linewidth=lw*2.5, alpha=0.6, label='IGRINS')
ax.plot(dat['wl'], dat['model_comp50'], color='#813196', linewidth=lw*2, alpha=0.7, label='cool + hot')
ax.plot(dat['wl'], dat['model_cool50'], '-', color='#C77169', linewidth=lw*2.5, label='cool')
ax.plot(dat['wl'], dat['model_hot50'], linestyle='dashed',color='#EFA64F', linewidth=lw*1.5, label='hot')
ax.axhline(0, linestyle='dashed', color='#000000')
if m != 100:
ax.text(dat['wl'].values[-1]-80.0, 0.35, "m = {}".format(m), fontsize=10)
ax.text(dat['wl'].values[0]+5.0, -0.07, sample_label, fontsize=10)
#ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.set_yscale('linear')
ax.set_ylim(-0.1, 0.4)
ax.set_xlim(dat.wl[0], dat.wl.iloc[-1])
if i > 0:
ax.yaxis.set_major_locator(plt.NullLocator())
ax.yaxis.set_major_formatter(plt.NullFormatter())
ax.xaxis.set_major_locator(plt.MaxNLocator(2))
if j > (nrows - 2):
plt.xlabel(r'$\lambda \; (\AA)$')
if m == 100:
ax.set_ylim(-100, -200)
plt.legend(loc='center', ncol=1, frameon=False, fontsize=25)
ax.axis('off')
plt.savefig('../document/figures/H_band_spectra_3x3_poster.pdf', bbox_inches='tight', dpi=300, transparent=True)
# -
| notebooks/05_conference_figures/K2SciCon_plots.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # Retail Demo Store Messaging Workshop - Amazon Pinpoint
#
# In this workshop we will use [Amazon Pinpoint](https://aws.amazon.com/pinpoint/) to add the ability to dynamically send personalized messages to the customers of the Retail Demo Store. We'll build out the following use-cases.
#
# - Send new users a welcome email after they sign up for a Retail Demo Store account
# - When users add items to their shopping cart but do not complete an order, send an email with a coupon code encouraging them to finish their order
# - Send users an email with product recommendations from the Amazon Personalize campaign we created in the Personalization workshop
#
# Recommended Time: 1 hour
#
# ## Prerequisites
#
# Since this module uses Amazon Personalize to generate and associate personalized product recommendations for users, it is assumed that you have either completed the [Personalization](../1-Personalization/1.1-Personalize.ipynb) workshop or those resources have been pre-provisioned in your AWS environment. If you are unsure and attending an AWS managed event such as a workshop, check with your event lead.
# ## Architecture
#
# Before diving into setting up Pinpoint to send personalize messages to our users, let's review the relevant parts of the Retail Demo Store architecture and how it uses Pinpoint to integrate with the machine learning campaigns created in Personalize.
#
# 
# ### AWS Amplify & Amazon Pinpoint
#
# The Retail Demo Store's Web UI leverages [AWS Amplify](https://aws.amazon.com/amplify/) to integrate with AWS services for authentication ([Amazon Cognito](https://aws.amazon.com/cognito/)), messaging and analytics ([Amazon Pinpoint](https://aws.amazon.com/pinpoint/)), and to keep our personalization ML models up to date ([Amazon Personalize](https://aws.amazon.com/personalize/)). AWS Amplify provides libraries for JavaScript, iOS, Android, and React Native for building web and mobile applications. For this workshop, we'll be focusing on how user information and events from the Retail Demo Store's Web UI are sent to Pinpoint. This is depicted as **(1)** and **(2)** in the architecture above. We'll also show how the user information and events synchronized to Pinpoint are used to create and send personalized messages.
#
# When a new user signs up for a Retail Demo Store account, views a product, adds a product to their cart, completes an order, and so on, the relevant function is called in [AnalyticsHandler.js](https://github.com/aws-samples/retail-demo-store/blob/master/src/web-ui/src/analytics/AnalyticsHandler.js) in the Retail Demo Store Web UI. The new user sign up event triggers a call to the `AnalyticsHandler.identify` function where user information from Cognito is used to [update an endpoint](https://docs.aws.amazon.com/pinpoint/latest/apireference/apps-application-id-endpoints.html) in Pinpoint. In Pinpoint, an endpoint represents a destination that you can send messages to, such as a mobile device, email address, or phone number.
#
# ```javascript
# // Excerpt from src/web-ui/src/analytics/AnalyticsHandler.js
#
# export const AnalyticsHandler = {
# identify(user) {
# Vue.prototype.$Amplify.Auth.currentAuthenticatedUser().then((cognitoUser) => {
# let endpoint = {
# userId: user.id,
# optOut: 'NONE',
# userAttributes: {
# Username: [ user.username ],
# ProfileEmail: [ user.email ],
# FirstName: [ user.first_name ],
# LastName: [ user.last_name ],
# Gender: [ user.gender ],
# Age: [ user.age.toString() ],
# Persona: user.persona.split("_")
# }
# }
#
# if (user.addresses && user.addresses.length > 0) {
# let address = user.addresses[0]
# endpoint.location = {
# City: address.city,
# Country: address.country,
# PostalCode: address.zipcode,
# Region: address.state
# }
# }
#
# if (cognitoUser.attributes.email) {
# endpoint.address = cognitoUser.attributes.email
# endpoint.channelType = 'EMAIL'
# Amplify.Analytics.updateEndpoint(endpoint)
# }
# })
# }
# }
# ```
#
# Once an `EMAIL` endpoint is created for our user, we can update attributes on that endpoint based on actions the user takes in the web UI. For example, when the user adds an item to their shopping cart, we'll set the attribute `HasShoppingCart` to `true` to indicate that this endpoint has an active shopping cart. We can also set metrics such as the number of items in the endpoint's cart. As we'll see later, we can use these attributes when building Campaigns in Pinpoint to target endpoints based on their activity in the application.
#
# ```javascript
# // Excerpt from src/web-ui/src/analytics/AnalyticsHandler.js
# productAddedToCart(userId, cart, product, quantity, experimentCorrelationId) {
# Amplify.Analytics.updateEndpoint({
# attributes: {
# HasShoppingCart: ['true']
# },
# metrics: {
# ItemsInCart: cart.items.length
# }
# })
# }
# ```
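# The same signal can be set server-side with boto3. Below is a sketch that mirrors the `productAddedToCart` excerpt above by building the `EndpointRequest` payload as a plain dict; the application and endpoint IDs in the commented-out call are hypothetical placeholders, and the key names follow the boto3 `update_endpoint` API:

```python
def cart_endpoint_request(items_in_cart):
    """Build a Pinpoint EndpointRequest mirroring the JavaScript excerpt above."""
    return {
        'Attributes': {'HasShoppingCart': ['true' if items_in_cart else 'false']},
        'Metrics': {'ItemsInCart': float(items_in_cart)},
    }

request = cart_endpoint_request(items_in_cart=3)
print(request)

# Sending it would look like this (ids are hypothetical placeholders):
# import boto3
# pinpoint = boto3.client('pinpoint')
# pinpoint.update_endpoint(ApplicationId='<app-id>',
#                          EndpointId='<endpoint-id>',
#                          EndpointRequest=request)
```

# Keeping the payload construction in a pure function like this makes the attribute logic easy to unit test without touching AWS.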
#
# When the user completes an order, we send revenue tracking events to Pinpoint, as shown below, and also update endpoint attributes and metrics. We'll see how these events, attributes, and metrics can be used later in this workshop.
#
# ```javascript
# // Excerpt from src/web-ui/src/analytics/AnalyticsHandler.js
# orderCompleted(user, cart, order) {
# // ...
# for (var itemIdx in order.items) {
# let orderItem = order.items[itemIdx]
#
# Amplify.Analytics.record({
# name: '_monetization.purchase',
# attributes: {
# userId: user ? user.id : null,
# cartId: cart.id,
# orderId: order.id.toString(),
# _currency: 'USD',
# _product_id: orderItem.product_id
# },
# metrics: {
# _quantity: orderItem.quantity,
# _item_price: +orderItem.price.toFixed(2)
# }
# })
# }
#
# Amplify.Analytics.updateEndpoint({
# attributes: {
# HasShoppingCart: ['false'],
# HasCompletedOrder: ['true']
# },
# metrics: {
# ItemsInCart: 0
# }
# })
# }
# ```
# ### Integrating Amazon Pinpoint & Amazon Personalize - Pinpoint Recommenders
#
# When building a Campaign in Amazon Pinpoint, you can associate the Pinpoint Campaign with a machine learning model, or recommender, that will be used to retrieve item recommendations for each endpoint eligible for the campaign. A recommender is linked to an Amazon Personalize Campaign. As you may recall from the [Personalization workshop](../1-Personalization/1.1-Personalize.ipynb), a Personalize Campaign only returns a list of item IDs (which represent product IDs for Retail Demo Store products). In order to turn the list of item IDs into more useful information for building a personalized email, Pinpoint supports associating an AWS Lambda function with a recommender. This function is invoked with information about the endpoint and the item IDs from Personalize, and it can return metadata about each item ID. Then, in your Pinpoint message template, you can reference the item metadata to incorporate it into your messages.
#
# The Retail Demo Store architecture already has a [Lambda function](https://github.com/aws-samples/retail-demo-store/blob/master/src/aws-lambda/pinpoint-recommender/pinpoint-recommender.py) deployed to use for our Pinpoint recommender. This function calls the Retail Demo Store's [Products](https://github.com/aws-samples/retail-demo-store/tree/master/src/products) microservice to retrieve useful information for each product (name, description, price, image URL, product URL, and so on). We will create a Pinpoint recommender in this workshop to tie it all together. This is depicted as **(3)**, **(4)**, and **(5)** in the architecture above.
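# Conceptually, the customizer Lambda receives a map of endpoints, each carrying the item IDs Personalize returned, and must hand back the same map with per-item attribute lists filled in. Below is a simplified, hypothetical sketch of that shape: the real function calls the Products microservice, whereas here a static lookup table stands in for it, and the exact event layout should be checked against the Pinpoint documentation:

```python
def lambda_handler(event, context):
    """Enrich Personalize item ids with product metadata for Pinpoint templates."""
    catalog = {  # stand-in for the Products microservice
        '1': {'Name': 'Mug', 'Price': '9.99'},
        '2': {'Name': 'Lamp', 'Price': '24.50'},
    }
    endpoints = event['Endpoints']
    for endpoint in endpoints.values():
        items = endpoint.get('RecommendationItems', [])
        endpoint['Recommendations'] = {
            # One list per attribute, ordered to match the recommended item ids
            'Name': [catalog.get(i, {}).get('Name', '') for i in items],
            'Price': [catalog.get(i, {}).get('Price', '') for i in items],
        }
    return endpoints

# Example invocation with a fake Pinpoint-style event
fake_event = {'Endpoints': {'ep-1': {'RecommendationItems': ['2', '1']}}}
out = lambda_handler(fake_event, None)
print(out['ep-1']['Recommendations']['Name'])
```

# The attribute names returned here ('Name', 'Price') are what the recommender configuration maps to friendly labels for template designers.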
# ## Setup
#
# Before we can make API calls to setup Pinpoint from this notebook, we need to install and import the necessary dependencies.
# ### Import Dependencies
#
# Next, let's import the dependencies we'll need for this notebook. We also retrieve the `Uid` value from a tag on the SageMaker notebook instance.
# +
# Import Dependencies
import boto3
import time
import json
import requests
from botocore.exceptions import ClientError
# Setup Clients
personalize = boto3.client('personalize')
ssm = boto3.client('ssm')
pinpoint = boto3.client('pinpoint')
lambda_client = boto3.client('lambda')
iam = boto3.client('iam')
# Service discovery will allow us to dynamically discover Retail Demo Store resources
servicediscovery = boto3.client('servicediscovery')
with open('/opt/ml/metadata/resource-metadata.json') as f:
data = json.load(f)
sagemaker = boto3.client('sagemaker')
sagemakerResponse = sagemaker.list_tags(ResourceArn=data["ResourceArn"])
for tag in sagemakerResponse["Tags"]:
if tag['Key'] == 'Uid':
Uid = tag['Value']
break
# -
# ### Determine Pinpoint Application/Project
#
# When the Retail Demo Store resources were deployed by the CloudFormation templates, a Pinpoint Application (aka Project) was automatically created with the name "retaildemostore". In order for us to interact with the application via API calls in this notebook, we need to determine the application ID.
#
# Let's lookup our Pinpoint application using the Pinpoint API.
# +
pinpoint_app_name = 'retaildemostore'
pinpoint_app_id = None
get_apps_response = pinpoint.get_apps()
if get_apps_response['ApplicationsResponse'].get('Item'):
for app in get_apps_response['ApplicationsResponse']['Item']:
if app['Name'] == pinpoint_app_name:
pinpoint_app_id = app['Id']
break
assert pinpoint_app_id is not None, 'Retail Demo Store Pinpoint project/application does not exist'
print('Pinpoint Application ID: ' + pinpoint_app_id)
# -
# ### Get Personalize Campaign ARN
#
# Before we can create a recommender in Pinpoint, we need the Amazon Personalize Campaign ARN for the product recommendation campaign. Let's look it up in the SSM parameter store where it was set by the Personalize workshop.
# +
response = ssm.get_parameter(Name='retaildemostore-product-recommendation-campaign-arn')
personalize_campaign_arn = response['Parameter']['Value']
assert personalize_campaign_arn != 'NONE', 'Personalize Campaign ARN not initialized - run Personalization workshop'
print('Personalize Campaign ARN: ' + personalize_campaign_arn)
# -
# ### Get Recommendation Customizer Lambda ARN
#
# We also need the ARN for our Lambda function that will return product metadata for the item IDs. This function has already been deployed for you. Let's lookup our function by its name.
response = lambda_client.get_function(FunctionName = 'RetailDemoStorePinpointRecommender')
lambda_function_arn = response['Configuration']['FunctionArn']
print('Recommendation customizer Lambda ARN: ' + lambda_function_arn)
# ### Get IAM Role for Pinpoint to access Personalize
#
# In order for Pinpoint to access our Personalize campaign to get recommendations, we need to provide it with an IAM Role. The Retail Demo Store deployment has already created a role with the necessary policies. Let's look it up by its role name.
response = iam.get_role(RoleName = Uid+'-PinpointPersonalize')
pinpoint_personalize_role_arn = response['Role']['Arn']
print('Pinpoint IAM role for Personalize: ' + pinpoint_personalize_role_arn)
# ## Create Pinpoint Recommender Configuration
#
# With our environment setup and configuration info loaded, we can now create a recommender in Amazon Pinpoint.
#
# > We're using the Pinpoint API to create the Recommender Configuration in this workshop. You can also create a recommender in the AWS Console for Pinpoint under the "Machine learning models" section.
#
# A few things to note in the recommender configuration below.
#
# - In the `Attributes` section, we're creating user-friendly names for the product information fields returned by our Lambda function. These names will be used in the Pinpoint console UI when designing message templates and can make it easier for template designers to select fields.
# - We're using `PINPOINT_USER_ID` for the `RecommendationProviderIdType` since the endpoint's `UserId` is where we set the ID for the user in the Retail Demo Store. Since this ID is what we use to represent each user when training the recommendation models in Personalize, we need Pinpoint to use this ID as well when retrieving recommendations.
# - We're limiting the number of recommendations per message to 4.
# +
response = pinpoint.create_recommender_configuration(
CreateRecommenderConfiguration={
'Attributes': {
'Recommendations.Name': 'Product Name',
'Recommendations.URL': 'Product Detail URL',
'Recommendations.Category': 'Product Category',
'Recommendations.Description': 'Product Description',
'Recommendations.Price': 'Product Price',
'Recommendations.ImageURL': 'Product Image URL'
},
'Description': 'Retail Demo Store Personalize recommender for Pinpoint',
'Name': 'retaildemostore-recommender',
'RecommendationProviderIdType': 'PINPOINT_USER_ID',
'RecommendationProviderRoleArn': pinpoint_personalize_role_arn,
'RecommendationProviderUri': personalize_campaign_arn,
'RecommendationTransformerUri': lambda_function_arn,
'RecommendationsPerMessage': 4
}
)
recommender_id = response['RecommenderConfigurationResponse']['Id']
print('Pinpoint recommender configuration ID: ' + recommender_id)
# -
# ### Verify Machine Learning Model / Recommender
#
# If you open a web browser window/tab and browse to the Pinpoint service in the AWS console for the AWS account we're working with, you should see the ML Model / Recommender that we just created in Pinpoint.
#
# 
# ## Create Personalized Email Templates
#
# With Amazon Pinpoint we can create email templates and send them to groups of our users based on criteria. We'll start by creating email templates for the following use-cases, then step through how we target and send emails to the right users at the appropriate time.
#
# - Welcome Email - sent to users shortly after creating a Retail Demo Store account
# - Abandoned Cart Email - sent to users who leave items in their cart without completing an order
# - Personalized Recommendations Email - includes recommendations from the recommender we just created
#
# ### Load Welcome Email Templates
#
# The first email template will be a welcome email template that is sent to new users of the Retail Demo Store after they create an account. Our templates will support both HTML and plain text formats. We'll load both formats and create the template. You can find all templates used in this workshop in the `pinpoint-templates` directory where this notebook is located. They can also be found in the Retail Demo Store source code repository.
#
# First, let's load and display the HTML version of our welcome template.
# +
with open('pinpoint-templates/welcome-email-template.html', 'r') as html_file:
html_template = html_file.read()
print('HTML Template:')
print(html_template)
# -
# Notice how we're using the mustache template tagging syntax, `{{User.UserAttributes.FirstName}}`, to display the user's first name. This will provide a nice touch of personalization to our welcome email.
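# To make the tag substitution concrete, here is a toy mustache-style renderer in plain Python. This is only an illustrative sketch of the idea — Pinpoint performs the real template rendering server-side — and the `render` helper and sample values are hypothetical. It also shows how a default substitution (which we configure when creating the template below) fills in for a missing attribute.

```python
import re

def render(template, context, defaults=None):
    """Toy mustache-style renderer: replaces {{dotted.path}} tags with
    values from a flat context dict, falling back to defaults."""
    defaults = defaults or {}

    def resolve(match):
        key = match.group(1)
        return str(context.get(key, defaults.get(key, '')))

    return re.sub(r'\{\{([^}]+)\}\}', resolve, template)

template = 'Hi {{User.UserAttributes.FirstName}}, welcome to the Retail Demo Store!'
print(render(template, {'User.UserAttributes.FirstName': 'Alex'}))
# -> Hi Alex, welcome to the Retail Demo Store!
print(render(template, {}, defaults={'User.UserAttributes.FirstName': 'there'}))
# -> Hi there, welcome to the Retail Demo Store!
```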
#
# Next we'll load and display the text version of our welcome email.
# +
with open('pinpoint-templates/welcome-email-template.txt', 'r') as text_file:
text_template = text_file.read()
print('Text Template:')
print(text_template)
# -
# ### Create Welcome Email Pinpoint Template
#
# Now let's take our HTML and text email template source and create a template in Amazon Pinpoint. We'll use a default substitution of "there" for the user's first name attribute if it is not set for some reason. This will result in the email greeting being "Hi there,..." rather than "Hi ,..." if we don't have a value for first name.
# +
response = pinpoint.create_email_template(
EmailTemplateRequest={
'Subject': 'Welcome to the Retail Demo Store',
'TemplateDescription': 'Welcome email sent to new customers',
'HtmlPart': html_template,
'TextPart': text_template,
'DefaultSubstitutions': json.dumps({
'User.UserAttributes.FirstName': 'there'
})
},
TemplateName='RetailDemoStore-Welcome'
)
welcome_template_arn = response['CreateTemplateMessageBody']['Arn']
print('Welcome email template ARN: ' + welcome_template_arn)
# -
# ### Load Abandoned Cart Email Templates
#
# Next we'll create an email template that includes messaging and a coupon code for users who add items to their cart but fail to complete an order.
# +
with open('pinpoint-templates/abandoned-cart-email-template.html', 'r') as html_file:
html_template = html_file.read()
print('HTML Template:')
print(html_template)
# -
# Next load the text version of our template.
# +
with open('pinpoint-templates/abandoned-cart-email-template.txt', 'r') as text_file:
text_template = text_file.read()
print('Text Template:')
print(text_template)
# -
# ### Create Abandoned Cart Email Template
#
# Now we can create an email template in Pinpoint for our abandoned cart use-case.
# +
response = pinpoint.create_email_template(
EmailTemplateRequest={
'Subject': 'Retail Demo Store - Motivation to Complete Your Order',
'TemplateDescription': 'Abandoned cart email template',
'HtmlPart': html_template,
'TextPart': text_template,
'DefaultSubstitutions': json.dumps({
'User.UserAttributes.FirstName': 'there'
})
},
TemplateName='RetailDemoStore-AbandonedCart'
)
abandoned_cart_template_arn = response['CreateTemplateMessageBody']['Arn']
print('Abandoned cart email template ARN: ' + abandoned_cart_template_arn)
# -
# ### Load Recommendations Email Templates
#
# Next we'll create an email template that includes recommendations from the Amazon Personalize product recommendation campaign that we created in the [Personalization workshop](../1-Personalization/1.1-Personalize.ipynb). If you haven't completed the personalization workshop, please do so now and come back to this workshop when complete.
#
# As with the welcome email template, let's load the HTML and text formats for our template.
# +
with open('pinpoint-templates/recommendations-email-template.html', 'r') as html_file:
html_template = html_file.read()
print('HTML Template:')
print(html_template)
# -
# Notice the use of several new mustache template tags in this template. For example, `{{Recommendations.Name.[0]}}` resolves to the product name of the first product recommended by Personalize. The product name came from our Lambda function which was called by Pinpoint after it called `get_recommendations` on our Personalize campaign.
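# The indexed tags can be pictured as lists keyed by attribute name. The sketch below — with hypothetical product data, not the actual Pinpoint renderer — shows how a tag like `Recommendations.Name.[0]` would resolve against the list-valued attributes our Lambda function returns:

```python
# Hypothetical transformed recommendations, shaped like the Attributes
# we declared in the recommender configuration (up to 4 per message).
recommendations = {
    'Recommendations.Name': ['Trail Shoe', 'Canvas Sneaker'],
    'Recommendations.Price': ['89.99', '49.99'],
}

def resolve_indexed(tag, data):
    """Resolve a tag like 'Recommendations.Name.[0]' against list-valued attributes."""
    attr, _, index = tag.rpartition('.')
    return data[attr][int(index.strip('[]'))]

print(resolve_indexed('Recommendations.Name.[0]', recommendations))   # Trail Shoe
print(resolve_indexed('Recommendations.Price.[1]', recommendations))  # 49.99
```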
#
# Next load the text version of our template.
# +
with open('pinpoint-templates/recommendations-email-template.txt', 'r') as text_file:
text_template = text_file.read()
print('Text Template:')
print(text_template)
# -
# ### Create Recommendations Email Template
#
# This time when we create the template in Pinpoint, we'll specify the `RecommenderId` for the machine learning model (Amazon Personalize) that we created earlier.
# +
response = pinpoint.create_email_template(
EmailTemplateRequest={
'Subject': 'Retail Demo Store - Products Just for You',
'TemplateDescription': 'Personalized recommendations email template',
'RecommenderId': recommender_id,
'HtmlPart': html_template,
'TextPart': text_template,
'DefaultSubstitutions': json.dumps({
'User.UserAttributes.FirstName': 'there'
})
},
TemplateName='RetailDemoStore-Recommendations'
)
recommendations_template_arn = response['CreateTemplateMessageBody']['Arn']
print('Recommendation email template ARN: ' + recommendations_template_arn)
# -
# ### Verify Email Templates
#
# If you open a web browser window/tab and browse to the Pinpoint service in the AWS console for the AWS account we're working with, you should see the message templates we just created.
#
# 
# ## Enable Pinpoint Email Channel
#
# Before we can setup Segments and Campaigns to send emails, we have to enable the email channel in Pinpoint and verify sending and receiving email addresses.
#
# > We'll be using the Pinpoint email channel in sandbox mode. This means that Pinpoint will only send emails from and to addresses that have been verified in the Pinpoint console.
#
# In the Pinpoint console, click on "All Projects" and then the "retaildemostore" project.
#
# 
# ### Email Settings
#
# From the "retaildemostore" project page, expand "Settings" in the left navigation and then click "Email". You will see that email has not yet been enabled as a channel. Click the "Edit" button to enable Pinpoint to send emails and to verify some email addresses.
#
# 
# ### Verify Some Email Addresses
#
# On the "Edit email" page, check the box to enable the email channel and enter a valid email address that you have the ability to check throughout the rest of this workshop.
#
# 
# ### Verify Additional Email Addresses
#
# So that we can send an email to more than one endpoint in this workshop, verify a couple more variations of your email address.
#
# Assuming your **valid** email address is `<EMAIL>`, add a few more variations using `+` notation such as...
#
# - `<EMAIL>`
# - `<EMAIL>`
# - `<EMAIL>`
#
# Just enter a variation, click the "Verify email address" button, and repeat until you've added a few more. Write down or commit to memory the variations you created--we'll need them later.
#
# By adding these variations, we're able to create separate Retail Demo Store accounts for each email address and therefore separate endpoints in Pinpoint that we can target. Note that emails sent to these variations should still be delivered to your same inbox.
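# Plus addressing works because many mail providers ignore everything between the `+` and the `@` when routing a message to an inbox. The snippet below illustrates why the variations all land in one mailbox; the addresses are placeholders, and note that not every mail host honors this convention.

```python
def canonical_inbox(address):
    """Strip the +tag from the local part of an email address (a convention
    honored by many providers, e.g. Gmail; not guaranteed by every host)."""
    local, _, domain = address.partition('@')
    base = local.split('+', 1)[0]
    return base + '@' + domain

# Placeholder variations of one hypothetical address
variations = ['jane+one@example.com', 'jane+two@example.com', 'jane@example.com']
print({canonical_inbox(v) for v in variations})  # a single canonical inbox
```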
# ### Check Your Inbox & Click Verification Links
#
# Pinpoint should have sent verification emails to all of the email addresses you added above. Sign in to your email client and check your inbox for the verification emails. Once you receive the emails (it can take a few minutes), click on the verification link in **each email**. If after several minutes you don't receive the verification email or you want to use a different address, repeat the verification process above.
#
# > Your email address(es) must be verified before we can setup Campaigns in Pinpoint.
#
# After you click the verify link in the email sent to each variation of your email address, you should see a success page like the following.
#
# 
# ## Let's Go Shopping - Create Retail Demo Store User Accounts & Pinpoint Endpoints
#
# Next let's create a few new user accounts in the Retail Demo Store Web UI using the email address(es) that we just verified. Based on the source code snippets we saw earlier, we know that the Retail Demo Store will create endpoints in Pinpoint for new accounts.
#
# <div class="alert alert-info">
# IMPORTANT: each Retail Demo Store account must be created in an entirely separate web browser session in order for them to be created as separate endpoints in Pinpoint. Signing out and attempting to create a new account in the same browser will NOT work. The easiest way to do this successfully is to use Google Chrome and open new Incognito windows for each new account. Alternatively, you could use multiple browser types (i.e. Chrome, Firefox, Safari, IE) and/or separate devices to create accounts such as a mobile phone or tablet.
# </div>
#
# 1. Open the Retail Demo Store Web UI in a new Incognito window. If you don't already have the Web UI open or need the URL, you can find it in the "Outputs" tab for the Retail Demo Store CloudFormation stack in your AWS account. Look for the "WebURL" output field, right click on the link, and select "Open Link in Incognito Window" (Chrome only).
#
# 
#
# 2. Click the "Sign In" button in the top navigation (right side) and then click on the "Create account" link in the "Sign in" form.
#
# 
#
# 3. A few seconds after creating your account you should receive an SMS with a six digit confirmation code. Enter this code on the confirmation page.
#
# 
#
# 4. Once confirmed you can sign in to your account with your user name and password. At this point you should have an endpoint in Pinpoint for this user.
#
# 5. Close your Incognito window(s).
#
# 6. Open a new Incognito window and **repeat the process for SOME (but not all) of your remaining email address variations** you verified in Pinpoint above. **As a reminder, it's important that you create each Retail Demo Store account in a separate/new Incognito window, browser application, or device. Otherwise, your accounts will overwrite the same endpoint in Pinpoint.**
#
# <div class="alert alert-info">
# Be sure to hold back one or two of your verified email addresses until after we create a welcome email campaign below so the sign up events fall within the time window of the campaign.
# </div>
# ### Shopping Behavior
#
# With your Retail Demo Store accounts created, perform some activities with some of your accounts.
#
# - For one of your users add some items to the shopping cart but do not checkout to simulate an abandoned cart scenario.
# - For another user, add some items to the cart and complete an order or two so that revenue events are sent all the way through to Pinpoint.
# - Also be sure to view a few products by clicking through to the product detail view. Select products that would indicate an affinity for a product type (e.g. shoes or electronics) so you can see how product recommendations are tailored in the product recommendations email.
# ## Create Pinpoint Segments
#
# With our Recommender and message templates in place and a few test users created in the Retail Demo Store, let's turn to creating Segments in Pinpoint. After our Segments are created, we'll create some Campaigns.
#
# 1. Start by browsing to the Amazon Pinpoint service page in the AWS account where the Retail Demo Store was deployed. Click on "All Projects" and you should see the "retaildemostore" project. Click on the "retaildemostore" project and then "Segments" in the left navigation. Click on the "Create a segment" button.
#
# 
# 2. Then click on the "Create segment" button. We will be building a dynamic segment based on the endpoints that were automatically created when we created our Retail Demo Store user accounts. We'll include all endpoints that have an email address by adding a filter by channel type with a value of `EMAIL`. Name your segment "AllEmailUsers" and scroll down and click the "Create segment" button at the bottom of the page.
#
# 
# 3. Create another segment that is based on the "AllEmailUsers" segment you just created but has an additional filter on the `HasShoppingCart` endpoint attribute and has a value of `true`. This represents all users that have a shopping cart and will be used for our abandoned cart campaign. If you don't see this endpoint attribute or don't see `true` as an option, switch to another browser tab/window and add items to the shopping cart for one of your test users.
#
# 
# ## Create Campaigns
#
# With our segments created for all users and for users with shopping carts, let's create campaigns for our welcome email, product recommendations, and abandoned cart use-cases.
# ### Welcome Email Campaign
#
# Let's start with the welcome email campaign. For the "retaildemostore" project in Pinpoint, click "Campaigns" in the left navigation and then the "Create a campaign" button.
#
# 1. For Step 1, give your campaign a name such as "WelcomeEmail", select "Standard campaign" as the campaign type, and "Email" as the channel. Click "Next" to continue.
#
# 
#
# 2. For Step 2, we will be using our "AllEmailUsers" dynamic segment. Click "Next" to continue.
#
# 
#
# 3. For Step 3, choose the "RetailDemoStore-Welcome" email template, scroll to the bottom of the page, and click "Next".
#
# 
#
# 4. For Step 4, we want the campaign to be sent when the `UserSignedUp` event occurs. Set the campaign start date to be today's date so that it begins immediately and the end date to be a few days into the future. **Be sure to adjust to your current time zone.**
#
# 
#
# 5. Scroll to the bottom of the page, click "Next".
#
# 6. Click "Launch campaign" to launch your campaign.
# <div class="alert alert-info">
# <strong>Given that the welcome campaign is activated based on sign up events that occur between the campaign start and end times, to test this campaign you must wait until after the campaign starts and then use one of your remaining verified email addresses to create a new Retail Demo Store account.</strong>
# </div>
# ### Abandoned Cart Campaign
#
# To create an abandoned cart campaign, repeat the steps you followed for the Welcome campaign above but this time select the `UsersWithCarts` segment, the `RetailDemoStore-AbandonedCart` email template, and the `Session Stop` event. This will trigger the abandoned cart email to be sent when users end their session while still having a shopping cart. Launch the campaign, wait for the campaign to start, and then close out some browser sessions for user(s) with items still in their cart. This can take some trial and error and waiting given how browsers and devices trigger end-of-session events.
# ### Recommendations Campaign
#
# Finally, create a recommendations campaign that targets the `AllEmailUsers` segment and uses the `RetailDemoStore-Recommendations` message template. This time, however, rather than trigger the campaign based on an event, we'll send the campaign immediately. Click "Next", launch the campaign, and check the email inbox for your test accounts after a few moments.
#
# 
# ## Bonus - Pinpoint Journeys
#
# With Amazon Pinpoint journeys, you can create custom experiences for your customers using an easy to use, drag-and-drop interface. When you build a journey, you choose the activities that you want to add to the journey. These activities can perform a variety of different actions, like sending an email to journey participants, waiting a defined period of time, or splitting users based on a certain action, such as when they open or click a link in an email.
#
# Using the segments and message templates you've already created, experiment with creating a journey that guides users through a messaging experience. For example, start a journey by sending all users the Recommendations message template. Then add a pause/wait step followed by a Multivariate Split that directs users down separate paths depending on whether they've completed an order (hint: create an `OrderCompleted` segment), opened the Recommendations email, or done nothing. Perhaps users who completed an order might receive a message asking them to refer a friend to the Retail Demo Store, and users who just opened the email might be sent a message with a coupon to motivate them to get shopping (you'll need to create new message templates for these).
# ## Workshop Complete
#
# Congratulations! You have completed the Retail Demo Store Pinpoint Workshop.
#
# ### Cleanup
#
# If you launched the Retail Demo Store in your personal AWS account **AND** you're done with all workshops & your evaluation of the Retail Demo Store, you can remove all provisioned AWS resources and data by deleting the CloudFormation stack you used to deploy the Retail Demo Store. Although deleting the CloudFormation stack will delete the entire "retaildemostore" project in Pinpoint, including all endpoint data, it will not delete resources we created directly in this workshop (i.e. outside of the "retaildemostore" Pinpoint project). The following cleanup steps will remove the resources we created outside the "retaildemostore" Pinpoint project.
#
# > If you are participating in an AWS managed event such as a workshop and using an AWS provided temporary account, you can skip the following cleanup steps unless otherwise instructed.
# #### Delete Recommender Configuration
response = pinpoint.delete_recommender_configuration(RecommenderId=recommender_id)
print(json.dumps(response, indent=2))
# #### Delete Email Message Templates
response = pinpoint.delete_email_template(TemplateName='RetailDemoStore-Welcome')
print(json.dumps(response, indent=2))
response = pinpoint.delete_email_template(TemplateName='RetailDemoStore-AbandonedCart')
print(json.dumps(response, indent=2))
response = pinpoint.delete_email_template(TemplateName='RetailDemoStore-Recommendations')
print(json.dumps(response, indent=2))
# Other resources allocated for the Retail Demo Store will be deleted when the CloudFormation stack is deleted.
#
# End of workshop