# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Examples and overview
# You now have all the basic tools to solve interesting economic models. The trick is to combine what you know to solve problems in practice. We first briefly recap, focusing on solving optimization problems and non-linear equations. Afterwards, we consider a number of examples.
#
# 1. The consumer problem
# 2. A worker-capitalist production economy
# 3. The inaugural project from 2020 (labor supply and taxation)
# +
# magic to reload modules automatically
# %load_ext autoreload
# %autoreload 2
# standard imports
from types import SimpleNamespace # new? explained below
import numpy as np
from scipy import optimize
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid') # note: renamed to 'seaborn-v0_8-whitegrid' in newer matplotlib versions
# -
# # Recap
# 2. **Primitives:** types, operators, copy vs. view, conditionals, loops, functions, classes
# 3. **Optimize, print and plot:** mathematics (numpy), printing, figures (matplotlib), solving optimization problems and equations (scipy.optimize)
# 4. **Random numbers and simulation:** random numbers (numpy.random), save/load (pickle), interactive figures (ipywidgets)
# 5. **Workflow and debugging:** structuring, naming, commenting, debugging (assert, try-except), modules
# **Sum up:** Lots and lots of information. The important thing is not to remember it all, but to know where to look for answers.
# ## Optimize, optimize, optimize
# **The two most important tools:**
#
# 1. Solving optimization problems with `scipy.optimize.minimize` and `scipy.optimize.minimize_scalar`
# 2. Solving equations with `scipy.optimize.root` and `scipy.optimize.root_scalar`
# **Problem:** A bit of a black box...
#
# * **Lecture 10:** Details on solving equations.
# * **Lecture 11:** Details on numerical optimization.
# * **Now:** Compare with a) a *loop search* and b) a *hand-written optimizer*.
# ### Loops vs. optimizer
# **Define function:** Simple polynomial with maximum at $x = 2.0$
def f_func(x):
return -3*(x-2)**2 + 1
# **Rough solution with loop:**
# +
N = 100
x_vec = np.linspace(-10,10,N)
f_vec = np.empty(N)
f_best = -np.inf # initial maximum
x_best = np.nan # not-a-number
for i,x in enumerate(x_vec):
f_now = f_vec[i] = f_func(x)
if f_now > f_best:
x_best = x
f_best = f_now
print(f'best with loop is {f_best:.8f} at x = {x_best:.8f}')
# -
# **Question:** The result is not exactly right. How can it be improved?
# **Plot:**
# +
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(x_vec,f_vec,ls='--',lw=2,color='black',label='$f(x)$')
ax.plot(x_best,f_best,ls='',marker='s',label='best')
ax.set_xlabel('x')
ax.set_ylabel('f')
ax.legend(loc='lower center',frameon=True);
# -
# **Solution with** `scipy.optimize.minimize_scalar` ([documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize_scalar.html#scipy.optimize.minimize_scalar)):
# +
obj = lambda x: -f_func(x)
res = optimize.minimize_scalar(obj,bracket=(-10,10),method='brent')
x = res.x
f = -res.fun
print(f'best is {f:.8f} at x = {x:.8f}')
# -
# **Solution with** `scipy.optimize.minimize` ([documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize)):
# +
x_guess = [0]
obj = lambda x: -f_func(x[0])
res = optimize.minimize(obj, x_guess, method='Nelder-Mead')
x = res.x[0]
f = -res.fun
print(f'best is {f:.8f} at x = {x:.8f}')
# -
# **Solution with** `scipy.optimize.root` ([documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root.html)):
# Find the derivative and solve the first-order condition (FOC):
def fp_func(x):
return -6*(x-2)
# +
x_guess = [0]
obj = lambda x: fp_func(x[0])
res = optimize.root(obj,x_guess,method='hybr')
x = res.x[0]
f = f_func(x)
print(f'best is {f:.8f} at x = {x:.8f}')
# -
# **Solution with** `scipy.optimize.root_scalar` ([documentation](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.root_scalar.html)):
# +
obj = lambda x: fp_func(x)
res = optimize.root_scalar(obj,bracket=(-10,10),method='bisect')
x = res.root
f = f_func(res.root)
print(f'best is {f:.8f} at x = {x:.8f}')
# -
# ### Gradient descent optimizer
# **Algorithm:** `gradient_descent()`
#
# 1. Choose tolerance $\epsilon>0$, step size $\alpha > 0$, and guess on $x_0$, set $n=0$.
# 2. Compute $f(x_n)$ and $f^\prime(x_n) \approx \frac{f(x_{n}+\Delta)-f(x_{n})}{\Delta}$.
# 3. If $|f^\prime(x_n)| < \epsilon$ then stop.
# 4. Compute new guess "down the hill":
#
# $$
# x_{n+1} = x_{n} - \alpha f^\prime(x_n)
# $$
#
#
# 5. Set $n = n + 1$ and return to step 2.
# **Code for algorithm:**
def gradient_descent(f,x0,alpha=1,Delta=1e-8,max_iter=500,eps=1e-8):
""" minimize function with gradient descent
Args:
f (callable): function
x0 (float): initial value
alpha (float,optional): step size factor in search
Delta (float,optional): step size in numerical derivative
max_iter (int,optional): maximum number of iterations
eps (float,optional): tolerance
Returns:
x (float): minimum
        fx (float): function value at minimum
trials (list): list with tuple (x,value,derivative)
"""
# step 1: initialize
x = x0
n = 0
trials = []
# step 2-4:
while n < max_iter:
# step 2: compute function value and derivative
fx = f(x)
fp = (f(x+Delta)-fx)/Delta
trials.append({'x':x,'fx':fx,'fp':fp})
# step 3: check convergence
print(f'n = {n:3d}: x = {x:12.8f}, f = {fx:12.8f}, fp = {fp:12.8f}')
if np.abs(fp) < eps:
break
# step 4: update x
x -= alpha*fp
# step 5: update n
n += 1
return x,fx,trials
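# **Aside:** The forward difference used in step 2 is only first-order accurate in $\Delta$; a central difference is second-order accurate at the cost of one extra function evaluation. A minimal sketch (not part of the lecture code):

```python
import numpy as np

def forward_diff(f, x, Delta=1e-5):
    # one-sided difference: truncation error is of order Delta
    return (f(x + Delta) - f(x)) / Delta

def central_diff(f, x, Delta=1e-5):
    # two-sided difference: truncation error is of order Delta**2
    return (f(x + Delta) - f(x - Delta)) / (2 * Delta)

f = np.exp  # true derivative at x = 1 is np.exp(1)
err_forward = abs(forward_diff(f, 1.0) - np.exp(1))
err_central = abs(central_diff(f, 1.0) - np.exp(1))
print(f'forward error = {err_forward:.2e}, central error = {err_central:.2e}')
```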
# **Call the optimizer:**
# +
x0 = 0
alpha = 0.5
f = lambda x: -np.sin(x)+0.05*x**2
x,fx,trials = gradient_descent(f,x0,alpha)
print(f'best with gradient_descent is {fx:.8f} at x = {x:.8f}')
# -
# **Illustration:**
# +
fig = plt.figure(figsize=(10,10))
# a. main figure
ax = fig.add_subplot(2,2,(1,2))
trial_x_vec = [trial['x'] for trial in trials]
trial_f_vec = [trial['fx'] for trial in trials]
trial_fp_vec = [trial['fp'] for trial in trials]
ax.plot(x_vec,f(x_vec),ls='--',lw=2,color='black',label='$f(x)$')
ax.plot(trial_x_vec,trial_f_vec,ls='',marker='s',ms=4,color='blue',label='iterations')
ax.set_xlabel('$x$')
ax.set_ylabel('$f$')
ax.legend(loc='upper center',frameon=True)
# sub figure 1
ax = fig.add_subplot(2,2,3)
ax.plot(np.arange(len(trials)),trial_x_vec)
ax.set_xlabel('iteration')
ax.set_ylabel('x')
# sub figure 2
ax = fig.add_subplot(2,2,4)
ax.plot(np.arange(len(trials)),trial_fp_vec)
ax.set_xlabel('iteration')
ax.set_ylabel('derivative of f');
# -
# **Question:** Can we guess on any initial value of $x_0$?
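# **Answer sketch:** No. Gradient descent only finds a *local* minimum, so the initial value matters. Below is a quiet re-implementation of the same algorithm (printing removed, same defaults) started from two different guesses:

```python
import numpy as np

def gradient_descent_quiet(f, x0, alpha=0.5, Delta=1e-8, max_iter=500, eps=1e-8):
    """ same algorithm as gradient_descent(), but without printing """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        fp = (f(x + Delta) - fx) / Delta  # forward difference
        if np.abs(fp) < eps:
            break
        x -= alpha * fp
    return x, f(x)

f = lambda x: -np.sin(x) + 0.05 * x**2

# starting at zero finds the global minimum ...
x_good, f_good = gradient_descent_quiet(f, x0=0.0)
# ... starting far to the left gets stuck in a local minimum
x_bad, f_bad = gradient_descent_quiet(f, x0=-4.0)
print(f'x0 =  0.0 -> x = {x_good:.4f}, f = {f_good:.4f}')
print(f'x0 = -4.0 -> x = {x_bad:.4f}, f = {f_bad:.4f}')
```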
# # The consumer problem
# $$
# \begin{aligned}
# V(p_{1},p_{2},I) & = \max_{x_{1},x_{2}} \left(\alpha^{\frac{1}{\sigma}}x_{1}^{\frac{\sigma-1}{\sigma}}+(1-\alpha)^{\frac{1}{\sigma}}x_{2}^{\frac{\sigma-1}{\sigma}}\right)^{\frac{\sigma}{\sigma-1}}\\
# \text{s.t.}\\
# p_{1}x_{1}+p_{2}x_{2} & \leq I,\,\,\,p_{1},p_{2},I>0\\
# x_{1},x_{2} & \geq 0
# \end{aligned}
# $$
# **Goal:** Create a model-class to solve this problem.
# **Utility function:**
def u_func(model,x1,x2):
u_x1 = model.alpha**(1/model.sigma)*x1**((model.sigma-1)/model.sigma)
u_x2 = (1-model.alpha)**(1/model.sigma)*x2**((model.sigma-1)/model.sigma)
return (u_x1+u_x2)**(model.sigma/(model.sigma-1))
# **Solution function:**
def solve(model):
# a. objective function (to minimize)
    obj = lambda x: -model.u_func(x[0],x[1]) # minimize -> negative of utility
# b. constraints and bounds
budget_constraint = lambda x: model.I-model.p1*x[0]-model.p2*x[1] # violated if negative
constraints = ({'type':'ineq','fun':budget_constraint})
bounds = ((1e-8,model.I/model.p1-1e-8),(1e-8,model.I/model.p2-1e-8))
# why all these 1e-8? To avoid ever having x1 = 0 or x2 = 0
# c. call solver
x0 = [(model.I/model.p1)/2,(model.I/model.p2)/2]
sol = optimize.minimize(obj,x0,method='SLSQP',bounds=bounds,constraints=constraints)
# d. save
model.x1 = sol.x[0]
model.x2 = sol.x[1]
model.u = model.u_func(model.x1,model.x2)
# **Create consumer class:**
class ConsumerClass:
def __init__(self):
self.alpha = 0.5
self.sigma = 0.1
self.p1 = 1
self.p2 = 2
self.I = 10
u_func = u_func
solve = solve
# **Solve consumer problem**:
jeppe = ConsumerClass() # calls __init__()
jeppe.solve()
print(f'(x1,x2) = ({jeppe.x1:.3f},{jeppe.x2:.3f}), u = {jeppe.u:.3f}')
# Easy to loop over:
for alpha in np.linspace(0.1,0.9,9):
jeppe.alpha = alpha
jeppe.solve()
print(f'alpha = {alpha:.3f} -> (x1,x2) = ({jeppe.x1:.3f},{jeppe.x2:.3f}), u = {jeppe.u:.3f}')
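# **Check against theory:** CES demand has a well-known closed form, which can be used to sanity-check the numerical solution. A sketch with the baseline parameters from `ConsumerClass` (the formula is standard theory, not part of the lecture code):

```python
import numpy as np

# closed-form CES (Marshallian) demand for good 1, derived from the FOCs
def x1_analytical(alpha, sigma, p1, p2, I):
    return alpha * p1**(-sigma) * I / (alpha * p1**(1 - sigma) + (1 - alpha) * p2**(1 - sigma))

alpha, sigma, p1, p2, I = 0.5, 0.1, 1, 2, 10
x1 = x1_analytical(alpha, sigma, p1, p2, I)
x2 = (I - p1 * x1) / p2  # remaining budget is spent on good 2
print(f'(x1,x2) = ({x1:.3f},{x2:.3f})')
```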
# **Question:** Anything you want to test?
# # A worker-capitalist production economy
# Consider an economy consisting of $N_w$ **workers**, and $N_c$ **capitalists** and a single **firm** owned equally by the capitalists.
# **Workers:** Consume, $c_w$, at a price $p$, and supply labor, $\ell_w$, at a wage of $w$. Maximize utility:
#
# $$\max_{c_w\geq0,\ell_w\in[0,1]} \log (c_w+\kappa)- \omega \ell_w^{\eta} \text{ s.t } pc_w \leq w \ell_w,\,\,\,\omega,\kappa > 0, \eta \geq 1$$
#
# Equivalently, substituting in the budget constraint with equality:
#
# $$\max_{\ell_w\in[0,1]} \log \left( \frac{w \ell_w}{p}+\kappa \right)- \omega \ell_w^{\eta}$$
#
# Denote ***optimal behavior*** $c_w^{\star}(p,w)$ and $\ell_w^{\star}(p,w)$.
# **Capitalists:** Consume, $c_c$, at a price $p$, supply labor, $\ell_c$, at a wage $w$, and receives profits $\pi$. Maximize utility:
#
# $$\max_{c_c\geq0,\ell_c\in[0,1]} \log (c_c+\kappa) - \omega \ell_c^{\eta} \text{ s.t } pc_c = w \ell_c + \pi, ,\,\,\,\omega,\kappa > 0, \eta \geq 1$$
#
# Equivalently, substituting in the budget constraint with equality:
#
# $$\max_{\ell_c\in[0,1]} \log \left( \frac{w \ell_c + \pi}{p}+\kappa \right)- \omega \ell_c^{\eta}$$
#
# Denote ***optimal behavior*** $c_c^{\star}(p,w,\pi)$ and $\ell_c^{\star}(p,w,\pi)$.
# **Firm:** Use the production function $f(\ell) = \ell^\alpha, \alpha \in (0,1)$. Maximize profits:
#
# $$\max_{\ell\geq0} p f(\ell) - w\ell $$
#
# Denote ***optimal behavior*** by $\ell^{\star}(p,w)$.
#
# Implied ***production*** is $y^{\star}(p,w) = f(\ell^{\star}(p,w))$ and implied ***total profits*** are $\Pi^\star(p,w) = py^{\star}(p,w) - w\ell^{\star}(p,w)$
# **Equilibrium:** A set of prices $(p,w)$ such that workers, capitalists and firms act optimally given prices and profit, and
#
# 1. **Goods market clears**: $N_w c_w^{\star}(p,w) + N_c c_c^{\star}(p,w,\pi) = y^\star(p,w)$
# 2. **Labor market clears**: $N_w \ell_w^{\star}(p,w) + N_c \ell_c^{\star}(p,w,\pi) = \ell^\star(p,w)$
# 3. **Profits received equal profits distributed**: $\pi = \frac{py^{\star}(p,w) - w\ell^{\star}(p,w)}{N_c}$
#
# **Note I:** We can use $p=1$ as the numeraire.
#
# **Note II:** *Walras' law* implies that if one of the markets clears, then the other one does too.
# ## Parameters
# Choose parameters:
par = SimpleNamespace()
par.kappa = 0.1
par.omega = 10
par.eta = 1.50
par.alpha = 0.50
par.Nw = 99
par.Nc = 1
# **SimpleNamespace():** Like a dictionary, but e.g. `par.kappa` instead of `par['kappa']`.
#
# Can always be interfaced as a dictionary with `__dict__`:
for k,v in par.__dict__.items():
print(f'{k:6s} = {v:6.3f}')
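# A `SimpleNamespace` can also be constructed directly from a dictionary with `SimpleNamespace(**d)`, and the built-in `vars()` is equivalent to `.__dict__`:

```python
from types import SimpleNamespace

par = SimpleNamespace(kappa=0.1, omega=10)  # can be filled at construction ...
par.eta = 1.5                               # ... or attribute-by-attribute

d = vars(par)                # view as a dictionary (same as par.__dict__)
par2 = SimpleNamespace(**d)  # and back again
print(par2.kappa)
```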
# ## Workers
# +
def utility_w(c,l,par):
""" utility of workers """
return np.log(c+par.kappa)-par.omega*l**par.eta
def workers(p,w,par):
""" maximize utility for workers """
# a. solve
obj = lambda l: -utility_w((w*l)/p,l,par)
res = optimize.minimize_scalar(obj,bounds=(0,1),method='bounded')
# b. save
l_w_star = res.x
c_w_star = (w*l_w_star)/p
return c_w_star,l_w_star
# -
# **Small test:**
p = 1
for w in [0.5,1,1.5]:
c,l = workers(p,w,par)
print(f'w = {w:.2f} -> c = {c:.2f}, l = {l:.2f}')
# ## Capitalists
# +
def utility_c(c,l,par):
""" utility of capitalists """
return np.log(c+par.kappa)-par.omega*l**par.eta
def capitalists(p,w,pi,par):
""" maximize utility of capitalists """
# a. solve
    obj = lambda l: -utility_c((w*l+pi)/p,l,par) # substitute in the budget constraint
res = optimize.minimize_scalar(obj,bounds=(0,1),method='bounded')
# b. save
l_c_star = res.x
c_c_star = (w*l_c_star+pi)/p
return c_c_star,l_c_star
# -
# **Small test:**
p = 1
pi = 0.1
for w in [0.5,1,1.5]:
c,l = capitalists(p,w,pi,par)
print(f'w = {w:.2f} -> c = {c:.2f}, l = {l:.2f}')
# **Question:** Any idea for another test?
# ## Firm
def firm(p,w,par):
""" maximize firm profits """
# a. solve
f = lambda l: l**par.alpha
obj = lambda l: -(p*f(l)-w*l)
x0 = [0.0]
res = optimize.minimize(obj,x0,bounds=((0,None),),method='L-BFGS-B')
# b. save
l_star = res.x[0]
y_star = f(l_star)
Pi = p*y_star - w*l_star
return y_star,l_star,Pi
# **Small test:**
p = 1
for w in [0.5,1,1.5]:
y,l,Pi = firm(p,w,par)
print(f'w = {w:.2f} -> y = {y:.2f}, l = {l:.2f}, Pi = {Pi:.2f}')
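# **Check against theory:** With Cobb-Douglas technology the firm problem has a closed-form solution from the FOC $\alpha p \ell^{\alpha-1} = w$. A sketch for comparison with the numerical results above (not part of the lecture code):

```python
# closed-form firm behavior from the first-order condition
def firm_analytical(p, w, alpha):
    l_star = (alpha * p / w)**(1 / (1 - alpha))  # solve alpha*p*l**(alpha-1) = w
    y_star = l_star**alpha
    Pi = p * y_star - w * l_star
    return y_star, l_star, Pi

p, alpha = 1, 0.5
for w in [0.5, 1, 1.5]:
    y, l, Pi = firm_analytical(p, w, alpha)
    print(f'w = {w:.2f} -> y = {y:.2f}, l = {l:.2f}, Pi = {Pi:.2f}')
```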
# ## Equilibrium
def evaluate_equilibrium(w,par,p=None,do_print=False):
""" evaluate equilirium """
# a. normalize output price
p = 1 if p is None else p
# b. optimal behavior of firm
y_star,l_star,Pi = firm(p,w,par)
pi = Pi/par.Nc
# c. optimal behavior of households
c_w_star,l_w_star = workers(p,w,par)
c_c_star,l_c_star = capitalists(p,w,pi,par)
# d. market clearing
goods_mkt_clearing = par.Nw*c_w_star + par.Nc*c_c_star - y_star
labor_mkt_clearing = par.Nw*l_w_star + par.Nc*l_c_star - l_star
if do_print:
u_w = utility_w(c_w_star,l_w_star,par)
print(f'workers : c = {c_w_star:6.4f}, l = {l_w_star:6.4f}, u = {u_w:7.4f}')
u_c = utility_c(c_c_star,l_c_star,par)
print(f'capitalists : c = {c_c_star:6.4f}, l = {l_c_star:6.4f}, u = {u_c:7.4f}')
print(f'goods market : {goods_mkt_clearing:.8f}')
print(f'labor market : {labor_mkt_clearing:.8f}')
else:
return goods_mkt_clearing
# **Step 1:** Perform rough grid search to check when the goods market clears.
# +
num_w = 10
grid_w = np.linspace(0.1,1.5,num_w)
grid_mkt_clearing = np.zeros(num_w)
for i,w in enumerate(grid_w):
grid_mkt_clearing[i] = evaluate_equilibrium(w,par)
print(f'w = {w:.2f} -> excess demand = {grid_mkt_clearing[i]:12.8f}')
# -
# **Step 2:** Find where *excess demand* changes sign - the equilibrium price must be within this range
left = np.max(grid_w[grid_mkt_clearing < 0])
right = np.min(grid_w[grid_mkt_clearing > 0])
print(f'equilibrium price must be in [{left:.2f},{right:.2f}]')
# **Step 3:** Use equation-solver / root-finder
res = optimize.root_scalar(evaluate_equilibrium,bracket=[left,right],method='bisect',args=(par,))
w_eq = res.root
print(f'the equilibrium wage is {w_eq:.4f}')
# **Show details:**
evaluate_equilibrium(w_eq,par,do_print=True)
# **Check I:** Do both markets clear?
#
# **Check II:** Can we multiply both prices by the same factor? I.e., can we change the numeraire?
fac = 100
p_eq_ = fac*1.0
w_eq_ = fac*w_eq
evaluate_equilibrium(w_eq_,par,p=p_eq_,do_print=True)
# ## Experiments
# It is easy to extend this model in many directions:
#
# 1. Should workers and capitalists have different tastes or productivity?
# 2. Should workers differ wrt. tastes or productivity?
# 3. Should there be government redistribution?
# 4. Other ideas?
# ## Using a class
from WorkerCapitalistEconomy import WorkerCapitalistEconomyClass
# **Look at `WorkerCapitalistEconomy.py`:** Same code, but written as a class!
model = WorkerCapitalistEconomyClass()
print(model.par.kappa) # access the class data with .property
model.find_equilibrium()
# **Benefit I:** Fewer inputs and outputs, less risk of wrong ordering.
# **Benefit II of class-based solution:** Easy access to all data.
# E.g. capitalists share of total consumption.
C_w = model.par.Nw*model.c_w_star
C_c = model.par.Nc*model.c_c_star
print(f'capitalists share of total consumption is: {C_c/(C_c+C_w):.2f}')
# **Benefit III of class-based solution:** Easy to experiment with different parameters.
model.par.kappa = model.par.kappa/100 # lower kappa
model.find_equilibrium()
# # Inaugural project from last year (labor supply and taxation)
# Consider a consumer solving the following maximization problem
#
# $$\begin{eqnarray}
# c^{\star},\ell^{\star} & = & \arg\max_{c,\ell}\log(c)-\nu\frac{\ell^{1+\frac{1}{\varepsilon}}}{1+\frac{1}{\varepsilon}}\\
# & \text{s.t.} \\
# x & = & m+w\ell-\left[\tau_{0}w\ell+\tau_{1}\max\{w\ell-\kappa,0\}\right] \\
# c & \in & [0,x] \\
# \ell & \in & [0,1]
# \end{eqnarray}$$
#
# where $c$ is consumption, $\ell$ is labor supply, $m$ is cash-on-hand,
# $w$ is the wage rate, $\tau_{0}$ is the standard labor income tax,
# $\tau_{1}$ is the top bracket labor income tax, $\kappa$ is the
# cut-off for the top labor income bracket, $x$ is total resources,
# $\nu$ scales the disutility of labor, and $\varepsilon$ is the Frisch
# elasticity of labor supply.
# Note that utility is monotonically increasing in consumption. This implies that
# $$\begin{equation}
# c^{\star}=x
# \end{equation}$$
# **Question 1:** Construct a function which solves the consumer given the parameters.
# We choose the following parameter values
#
# $$
# m=1,\,\nu=10,\,\varepsilon=0.3,\,\tau_{0}=0.4,\,\tau_{1}=0.1,\,\kappa=0.4
# $$
# **Question 2:** Plot $\ell^{\star}$ and $c^{\star}$ as functions of $w$ in
# the range $0.5$ to $1.5$.
# Consider a population with $N=1,000$ individuals indexed by $i$.
#
# Assume the distribution of wages is uniform such that
#
# $$w_{i}\sim\mathcal{U}(0.5,1.5).$$
#
# Denote the optimal choices of individual $i$ by $\ell_{i}^{\star}$ and $c_{i}^{\star}$.
#
# **Question 3:** Calculate the total tax revenue given by $T=\sum_{i=1}^{N}\left[\tau_{0}w_{i}\ell_{i}^{\star}+\tau_{1}\max\{w_{i}\ell_{i}^{\star}-\kappa,0\}\right].$
# **Question 4:** What would the tax revenue be if instead $\varepsilon=0.1$?
# Consider a politician who wishes to maximize the tax revenue.
# **Question 5:** Which $\tau_{0}$, $\tau_{1}$ and $\kappa$ would you suggest her to implement? Report the tax revenue you expect to obtain.
# ## Solution of question 1+2
# All the basic functions are written in `LaborSupplyModel.py`.
import LaborSupplyModel as LSM
# Define all **parameters**:
m = 1
nu = 10
frisch = 0.3
tau0 = 0.4
tau1 = 0.1
kappa = 0.4
# **Allocate** arrays for solutions:
N = 1_000
w_vec = np.linspace(0.5,1.5,N)
l_vec = np.zeros(N)
c_vec = np.zeros(N)
# **Solve:**
for i in range(N):
l_vec[i] = LSM.find_optimal_labor_supply(nu,frisch,m,w_vec[i],tau0,tau1,kappa)
c_vec[i] = LSM.implied_c(l_vec[i],m,w_vec[i],tau0,tau1,kappa)
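# The call to `LSM.find_optimal_labor_supply` exploits that $c^{\star}=x$, so the problem is one-dimensional in $\ell$. As a self-contained sketch of what such a solver might look like (the function names here are illustrative; the module's actual implementation may differ):

```python
import numpy as np
from scipy import optimize

def implied_x(l, m, w, tau0, tau1, kappa):
    # total resources: cash-on-hand plus labor income net of taxes (and c* = x)
    return m + w*l - (tau0*w*l + tau1*max(w*l - kappa, 0.0))

def utility(l, nu, frisch, m, w, tau0, tau1, kappa):
    c = implied_x(l, m, w, tau0, tau1, kappa)
    return np.log(c) - nu*l**(1 + 1/frisch)/(1 + 1/frisch)

def solve_labor_supply(nu, frisch, m, w, tau0, tau1, kappa):
    obj = lambda l: -utility(l, nu, frisch, m, w, tau0, tau1, kappa)
    res = optimize.minimize_scalar(obj, bounds=(0, 1), method='bounded')
    return res.x

l_star = solve_labor_supply(nu=10, frisch=0.3, m=1, w=1.0, tau0=0.4, tau1=0.1, kappa=0.4)
print(f'l* = {l_star:.4f}')
```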
# **Plot results:**
# +
fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(1,2,1)
ax.plot(w_vec,l_vec,'-')
ax.set_ylabel('labor supply, $\ell$')
ax.set_xlabel('wage, $w$')
ax.set_title('Labor supply')
ax = fig.add_subplot(1,2,2)
ax.plot(w_vec,c_vec,'-')
ax.set_ylabel('consumption, $c$')
ax.set_xlabel('wage, $w$')
ax.set_title('Consumption');
# -
# ## Solution of question 3
# Calculate the **tax revenue**, using that an equally spaced vector approximates a uniform distribution:
T = np.sum(LSM.implied_tax(l_vec,w_vec,tau0,tau1,kappa))
print(f'total tax revenue is: {T:.4f}')
# Using **random sampling** is also a possibility:
# +
# a. set seed
np.random.seed(1917)
# b. run replications
reps = 50
T_vec = np.zeros(reps)
for rep in range(reps):
    # i. draw random wages
w_vec_ = np.random.uniform(0.5,1.5,size=N)
# ii. find labor supply
l_vec_ = np.zeros(N)
for i in range(N):
l_vec_[i] = LSM.find_optimal_labor_supply(nu,frisch,m,w_vec_[i],tau0,tau1,kappa)
# iii. find tax revenue
T_vec[rep] = np.sum(LSM.implied_tax(l_vec_,w_vec_,tau0,tau1,kappa))
if rep < 10 or rep%10 == 0:
print(f'{rep:2d}: {T_vec[rep]:.4f}')
# c. mean
print(f'mean: {np.mean(T_vec):.4f} [{np.min(T_vec):.4f} {np.max(T_vec):.4f}]')
# -
# ## Question 4
# **Re-solve** with $\varepsilon = 0.1$:
frisch_low = 0.1
l_vec_frisch_low = np.zeros(N)
for i in range(N):
l_vec_frisch_low[i] = LSM.find_optimal_labor_supply(nu,frisch_low,m,w_vec[i],tau0,tau1,kappa)
# Re-calculate **tax revenue**:
T_frisch_low = np.sum(LSM.implied_tax(l_vec_frisch_low,w_vec,tau0,tau1,kappa))
print(f'total tax revenue is: {T_frisch_low:.4f}')
# **Conclusion:** Higher tax revenue because of the lower Frisch elasticity.
# ## Question 5
# Define function to calculate **tax revenue for guess of tax parameters**:
def tax_revenue(nu,frisch,m,w_vec,tau0,tau1,kappa):
""" find total tax revenue and labor and consumpty
Args:
nu (float): disutility of labor supply
frisch (float): frisch elasticity of labor supply
m (float): cash-on-hand
w_vec (np.array): wage
tau0 (float): standard labor tax
tau1 (float): top bracket labor income tax
kappa (float): cut-off for the top labor income bracket
Returns:
(float): total tax revenue
"""
# a. optimal labor supply
N = w_vec.size
l_vec = np.zeros(N)
for i in range(N):
l_vec[i] = LSM.find_optimal_labor_supply(nu,frisch,m,w_vec[i],tau0,tau1,kappa)
# b. taxes
T = np.sum(LSM.implied_tax(l_vec,w_vec,tau0,tau1,kappa))
return T
# Define **objective function for optimizer**:
def obj(x,nu,frisch,m,w_vec):
    """ find negative of total tax revenue
    Args:
        x (np.array): tax parameters
        nu (float): disutility of labor supply
        frisch (float): frisch elasticity of labor supply
        m (float): cash-on-hand
        w_vec (np.array): wage
    Returns:
        (float): minus total tax revenue
    """
    global it
    # a. determine parameters
    tau0 = x[0]
    if x.size > 1:
        tau1 = x[1]
        kappa = x[2]
    else:
        tau1 = 0.0
        kappa = 0.0
    # b. calculate tax revenue
    T = tax_revenue(nu,frisch,m,w_vec,tau0,tau1,kappa)
    # c. print
    print(f'{it:3d}: tau0 = {tau0:10.8f}, tau1 = {tau1:10.8f}, kappa = {kappa:10.8f} -> T = {T:12.8f}')
    it += 1
    return -T
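# **Side-note:** The `global it` counter works, but the iteration count can also be kept in a small callable class, which avoids global state. A sketch with a toy objective (not part of the solution):

```python
from scipy import optimize

class CountedObjective:
    """ wrap an objective function and count how often it is evaluated """
    def __init__(self, f):
        self.f = f
        self.it = 0
    def __call__(self, x, *args):
        self.it += 1
        return self.f(x, *args)

obj_counted = CountedObjective(lambda x: (x[0] - 2.0)**2)  # toy objective
res = optimize.minimize(obj_counted, [0.0], method='Nelder-Mead')
print(f'converged to {res.x[0]:.4f} after {obj_counted.it} evaluations')
```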
# **Solve:**
# +
# a. initial guess and bounds
x0 = np.array([tau0,tau1,kappa])
bounds = ((0,0.99),(0,0.99),(0,1.5))
# b. call solver
it = 0
result = optimize.minimize(obj,x0,
method='SLSQP',bounds=bounds,
args=(nu,frisch,m,w_vec)
)
# -
# **Have we found the global optimum?**
# **Same result with another initial guess?**
# +
# a. initial guess and bounds
x0 = np.array([0.1,0.1,0.1])
bounds = ((0,0.99),(0,0.99),(0,1.5))
# b. call solver
it = 0
result = optimize.minimize(obj,x0,
method='SLSQP',bounds=bounds,
args=(nu,frisch,m,w_vec)
)
# -
# **Can we improve if we force $\tau_1 = \kappa = 0$?**
# +
# a. initial guess and bounds
x0 = np.array([result.x[0]])
bounds = ((0,0.99),)
# b. call solver
it = 0
result = optimize.minimize(obj,x0,
method='SLSQP',bounds=bounds,
args=(nu,frisch,m,w_vec)
)
# -
# **Can we improve if we fix $\kappa$ to some value?**
def obj_kappa(x,nu,frisch,m,w_vec,kappa):
    """ find negative of total tax revenue for given kappa
    Args:
        x (np.array): tax parameters
        nu (float): disutility of labor supply
        frisch (float): frisch elasticity of labor supply
        m (float): cash-on-hand
        w_vec (np.array): wage
        kappa (float): cut-off for the top labor income bracket
    Returns:
        (float): minus total tax revenue
    """
    global it
    # a. determine parameters
    tau0 = x[0]
    tau1 = x[1]
    # b. calculate tax revenue
    T = tax_revenue(nu,frisch,m,w_vec,tau0,tau1,kappa)
    # c. print
    print(f' {it:3d}: tau0 = {tau0:10.8f}, tau1 = {tau1:10.8f} -> T = {T:12.8f}')
    it += 1
    return -T
# +
# a. initial guess and bounds
x0 = np.array([0.1,0.1])
bounds = ((0,0.99),(0,0.99))
# b. call solver
for kappa in [0.05,0.1,0.15]:
print(f'kappa = {kappa:.3f}')
it = 0
result = optimize.minimize(obj_kappa,x0,
method='SLSQP',bounds=bounds,
args=(nu,frisch,m,w_vec,kappa)
)
print('')
# -
# **Suggestions for other tests?**
# # Summary
# 1. **Main takeaway:** You are actually already equipped to solve a lot of interesting economic models.
# 2. **Next time:** Pandas, the central Python package for working with data.
# Source notebook: 06/Examples_and_overview.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas_datareader as data
import pandas as pd
import matplotlib.pyplot as plt
from datetime import datetime
tickers = ["BTC-USD", "ETH-USD", "XRP-USD", "BCH-USD", "LTC-USD", "EOS-USD", "BNB-USD", "ADA-USD", "XMR-USD", "XLM-USD", "TRX-USD", "LINK-USD", "ETC-USD", "DASH-USD", "NEO-USD"]
#BSV, XTZ, LEO and HT not on Yahoo Finance
#Set timeframe
start = "2018-01-01"
end = "2020-02-25"
#Pull data from Yahoo! Finance
df = data.DataReader(tickers, data_source="yahoo", start=start, end=end)
# new dataframe
new_df = pd.DataFrame()
# updating df dataframe to only exhibit close price information
df = df["Close"]
# creates a dataframe that shows the percent change for each ticker
class Stocks():
def __init__(self, ticker):
self.tickers = ticker
def relative(self):
for f in self.tickers:
new_df[f] = df[f].pct_change()
return new_df
x = Stocks(tickers)
df_relative_returns = x.relative()
# creates the correlation matrix
corr = df_relative_returns.corr()
corr.style.background_gradient(cmap='coolwarm')
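# The pipeline above in miniature: `pct_change()` turns prices into period-over-period returns (the first row becomes `NaN`), and `corr()` computes pairwise correlations, ignoring the `NaN`s. Toy numbers, made up for illustration:

```python
import pandas as pd

# toy price data to illustrate pct_change() + corr()
prices = pd.DataFrame({'A': [100, 102, 101, 105],
                       'B': [50, 53, 51, 56]})
returns = prices.pct_change()  # first row becomes NaN
corr = returns.corr()          # NaNs are dropped pairwise
print(corr)
```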
# +
# With missing values dropped.
tickers = ["BTC-USD", "ETH-USD", "XRP-USD", "BCH-USD", "LTC-USD", "EOS-USD", "BNB-USD", "ADA-USD", "XMR-USD", "XLM-USD", "TRX-USD", "LINK-USD", "ETC-USD", "DASH-USD", "NEO-USD"]
#BSV, XTZ, LEO and HT not on Yahoo Finance
#Set timeframe
start = "2018-01-01"
end = "2020-02-25"
#Pull data from Yahoo! Finance
df = data.DataReader(tickers, data_source="yahoo", start=start, end=end)
# new dataframe
new_df = pd.DataFrame()
# Drop missing values below via dropna
# updating df dataframe to only exhibit close price information
df = df.dropna()["Close"]
# creates a dataframe that shows the percent change for each ticker
class Stocks():
def __init__(self, ticker):
self.tickers = ticker
def relative(self):
for f in self.tickers:
new_df[f] = df[f].pct_change()
return new_df
x = Stocks(tickers)
df_relative_returns = x.relative()
# creates the correlation matrix
corr = df_relative_returns.corr()
corr.style.background_gradient(cmap='coolwarm')
# -
# Source notebook: Research/Correlation Matrix - Yahoo Finance.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:balto] *
# language: python
# name: conda-env-balto-py
# ---
# # BALTO Graphical User Interface (prototype)
# This Jupyter notebook creates a GUI (graphical user interface) for the BALTO (Brokered Alignment of Long-Tail Observations) project. BALTO is funded by the NSF EarthCube program. The GUI is intended to provide a simplified and customizable method for users to access data sets of interest on servers that support the OpenDAP data access protocol. The interactive GUI runs within the Jupyter notebook and uses the Python packages <b>ipywidgets</b> and <b>ipyleaflet</b>.
# ## Set up a conda environment called "balto"
# To run this Jupyter notebook, it is recommended to use Python 3.7 from an Anaconda distribution and to create a "balto" conda environment with the following commands.
#
# conda update -n base -c defaults conda <br>
# conda create --name balto <br>
# conda activate balto <br>
# conda install -c conda-forge ipywidgets <br>
# conda install -c conda-forge ipyleaflet <br>
# conda install -c conda-forge pydap <br>
#
# conda install -c conda-forge jupyterlab <br>
# conda install -c conda-forge nb_conda_kernels # (needed for conda envs) <br>
#
# conda install -c conda-forge nodejs <br>
# conda install -c conda-forge widgetsnbextension <br>
# jupyter labextension install jupyter-leaflet <br>
# jupyter labextension install @jupyter-widgets/jupyterlab-manager <br>
#
# Change to directory with BALTO_GUI.ipynb. <br>
# jupyter lab <br>
#
# Finally, choose BALTO_GUI.ipynb in Jupyter Lab,
# but make sure to choose the kernel: Python [conda env:balto] <br>
#
# References <br>
# https://jupyterlab.readthedocs.io/en/stable/getting_started/installation.html <br>
# https://jupyterlab.readthedocs.io/en/stable/user/extensions.html <br>
# https://ipywidgets.readthedocs.io/en/latest/user_install.html#installing-the-jupyterlab-extension<br>
# ## Import required packages
from ipyleaflet import Map
import ipywidgets as widgets
from ipywidgets import Layout
from IPython.display import display, HTML
## from IPython.core.display import display
## from IPython.lib.display import display
# import pydap
from pydap.client import open_url
import requests # for get_filenames()
import time # for sleep
# ## Create the GUI components
# This GUI is built up from <b>ipywidgets</b> (for the controls) and <b>ipyleaflet</b> (for the interactive map).
#
# For more information on ipywidgets, see: https://ipywidgets.readthedocs.io/en/latest/user_guide.html
#
# For more information on ipyleaflet, see:
# https://ipyleaflet.readthedocs.io/en/latest/
#
# +
p0 = widgets.HTML(value=f"<p></p> <p></p>") # padding
h0 = widgets.HTML(value=f"<b><font size=5>BALTO User Interface</font></b>")
# p0 = widgets.HTML(value="<p></p> <p></p>") # padding
# h0 = widgets.HTML(value="<b><font size=5>BALTO User Interface</font></b>")
#---------------------------------------------
# h0 = widgets.Label('BALTO User Interface')
# Create an interactive map with ipyleaflets
# gui_width = '640px'
gui_width = '680px'
map_width = '640px' # (gui_width - 40px)
map_height = '250px'
att_width = '560px'
url_box_width = att_width
## url_box_width = '560px'
m = Map(center=(0.0, 0.0), zoom=1,
layout=Layout(width=map_width, height=map_height))
style0 = {'description_width': 'initial'}
style1 = {'description_width': '130px'}
style2 = {'description_width': '80px'}
style3 = {'description_width': '50px'}
# bbox_style = {'description_width': '130px'}
bbox_style = {'description_width': '100px'}
date_style = {'description_width': '70px'}
#####################################################
# Does "step=001" restrict accuracy of selection ??
#####################################################
bbox_width = '270px'
w1 = widgets.BoundedFloatText(
value=-180, min=-180, max=180.0, step=0.01,
# description='West longitude:',
description='West edge lon:',
disabled=False, style=bbox_style,
layout=Layout(width=bbox_width) )
w2 = widgets.BoundedFloatText(
value=180, min=-180, max=180.0, step=0.01,
# description='East longitude:',
description='East edge lon:',
disabled=False, style=bbox_style,
layout=Layout(width=bbox_width) )
w3 = widgets.BoundedFloatText(
value=90, min=-90, max=90.0, step=0.01,
# description='North latitude:',
description='North edge lat:',
disabled=False, style=bbox_style,
layout=Layout(width=bbox_width) )
w4 = widgets.BoundedFloatText(
value=-90, min=-90, max=90.0, step=0.01,
# description='South latitude:',
description='South edge lat:',
disabled=False, style=bbox_style,
layout=Layout(width=bbox_width) )
# Date Range (& Temporal Resolution ??)
date_width = '270px'
d1 = widgets.DatePicker( description='Start Date:',
disabled=False, style=date_style,
layout=Layout(width=date_width) )
d2 = widgets.DatePicker( description='End Date:',
disabled=False, style=date_style,
layout=Layout(width=date_width) )
d3 = widgets.Text( description='Start Time:',
disabled=False, style=date_style,
layout=Layout(width=date_width) )
d4 = widgets.Text( description='End Time:',
disabled=False, style=date_style,
layout=Layout(width=date_width) )
# Variable Options
n0 = widgets.HTML(value=f"<p> </p>") # padding
n1 = widgets.Text(description='Variable name:',
value='sea surface temperature',
disabled=False, style=style0,
layout=Layout(width='550px') )
#------------------------------
# Example GES DISC opendap URL
#------------------------------
# https://gpm1.gesdisc.eosdis.nasa.gov/opendap/GPM_L3/GPM_3IMERGHHE.05/2014/
# 091/3B-HHR-E.MS.MRG.3IMERG.20140401-S000000-E002959.0000.V05B.HDF5.nc
# # ?HQprecipitation[1999:2200][919:1049],lon[1999:2200],lat[919:1049]
#----------------------------------
# Example OpenDAP URL for testing
#----------------------------------
# http://test.opendap.org/dap/data/nc/coads_climatology.nc
## value='http://test.opendap.org/dap/data/nc/coads_climatology.nc',
## value='https://gpm1.gesdisc.eosdis.nasa.gov/opendap/',
# OpenDAP Options
o1 = widgets.Text(description='OpenDAP Dir:',
value='http://test.opendap.org/dap/data/nc/',
disabled=False, style=style1,
layout=Layout(width=url_box_width))
b1 = widgets.Button(description="Go", layout=Layout(width='50px'))
o2 = widgets.Dropdown( description='Filename:',
options=[''], value='',
disabled=False, style=style1,
layout=Layout(width=att_width) )
## o3 = widgets.Select( description='Variable:',
o3 = widgets.Dropdown( description='Variable:',
options=[''], value='',
disabled=False, style=style1,
layout=Layout(width=att_width) )
o4 = widgets.Text(description='Units:', style=style1,
value='', layout=Layout(width=att_width) )
o5 = widgets.Text(description='Shape:', style=style1,
value='', layout=Layout(width=att_width) )
o6 = widgets.Text(description='Dimensions:', style=style1,
value='', layout=Layout(width=att_width) )
o7 = widgets.Text(description='Data type:', style=style1,
value='', layout=Layout(width=att_width) )
# Settings
s1 = widgets.Dropdown( description='OpenDAP package:',
options=['pydap', 'netcdf4'],
value='pydap',
disabled=False, style=style1)
# Output File Format
f1 = widgets.Dropdown( description='Download Format:',
options=['HDF', 'netCDF', 'netCDF4', 'ASCII'],
value='netCDF',
disabled=False, style=style0)
## layout=Layout(width=map_width, height=map_height))
# Buttons at the bottom
b3 = widgets.Button(description="Download")
# Can use this for output
status = widgets.Text(description=' Status:', style=style3,
layout=Layout(width='380px') )
log = widgets.Textarea( description='', value='',
disabled=False, style=style0,
layout=Layout(width='560px', height='160px'))
# -
# ## Define some GUI utility functions
# +
def get_bounds():
return [m.west, m.south, m.east, m.north]
#==================================================================
def get_start_date():
if (d1.value is not None):
return str(d1.value) # Need the str().
else:
return 'None'
#==================================================================
def get_end_date():
if (d2.value is not None):
return str(d2.value) # Need the str().
else:
return 'None'
#==================================================================
def get_variable_name():
return o3.value
## return n1.value
#==================================================================
def get_opendap_package():
return s1.value
#==================================================================
def get_output_format():
return f1.value
#==================================================================
def list_to_string( array ):
s = ''
for item in array:
s = s + item + '\n'
return s
#==================================================================
def print_choices():
msg = [
'bounds = ' + str(get_bounds()),
'opendap package = ' + get_opendap_package(),
'start date = ' + get_start_date(),
'end date = ' + get_end_date(),
'variable = ' + get_variable_name() ]
log.value = list_to_string( msg )
#==================================================================
# -
# ## Define some GUI event handling functions
# ## Set up the GUI event handlers
# +
def download_data( dum ):
status.value = 'Download button clicked.'
print_choices()
#==================================================================
def show_bounds( **kwargs ):
# Called by m.on_interaction().
# Don't need to process separate events?
w1.value = m.west
w2.value = m.east
w3.value = m.north
w4.value = m.south
#==================================================================
def show_bounds2( **kwargs ):
# events: mouseup, mousedown, mousemove, mouseover,
# mouseout, click, dblclick, preclick
event = kwargs.get('type')
# print('event = ', event)
if (event == 'mouseup') or (event == 'mousemove') or \
(event == 'click') or (event == 'dblclick'):
w1.value = m.west
w2.value = m.east
w3.value = m.north
w4.value = m.south
# status.value = event
# with output2:
# print( event )
#==================================================================
def get_opendap_url():
directory = o1.value
if (directory[-1] != '/'):
directory += '/'
filename = o2.value
return directory + filename
#==================================================================
def update_filename_list( dummy=None ):
#----------------------------------------------------
# Note: This is called when "Go" button is clicked.
#----------------------------------------------------
## opendap_dir = 'http://test.opendap.org/dap/data/nc/'
opendap_dir = o1.value
filenames = get_filenames( opendap_dir )
if (len(filenames) == 0):
print('Error: No data files found.')
return
#-----------------------------------
# Update filename list & selection
#-----------------------------------
o2.options = filenames
o2.value = filenames[0]
#------------------------------------
# Update info for this file/dataset
# which also calls show_var_info().
#---------------------------------------------
# Does this get called automatically due to:
# o2.observe() call below ?
#---------------------------------------------
## show_dataset_info() ########
## show_var_info()
#==================================================================
def open_dataset():
## from pydap.client import open_url # (at top)
opendap_url = get_opendap_url()
dataset = open_url( opendap_url )
return dataset
#==================================================================
def show_dataset_info( dummy=None ):
if (o2.value == ''):
## update_filename_list() # (doesn't work)
return
#---------------------------------------------------
# Note: Not sure why "dummy" arg is required here.
#---------------------------------------------------
# Note: When this is called, the following are set
# as global variables in the notebook.
#---------------------------------------------------
global dataset, short_names, long_names, units_names
global short_name_map, units_map
dataset = open_dataset()
short_names = get_all_var_shortnames( dataset )
long_names = get_all_var_longnames( short_names)
units_names = get_all_var_units( short_names )
# print('### short_names =', short_names)
# print('### long_names =', long_names)
# print('### units_names =', units_names)
short_name_map = dict(zip(long_names, short_names ))
units_map = dict(zip(long_names, units_names ))
#-------------------------------------------
# Update variable list and selected value.
#-------------------------------------------
o3.options = long_names
o3.value = long_names[0]
## print('In show_dataset_info((), o3.value =', o3.value)
 ## time.sleep(0.5) # [seconds] # didn't help
#------------------------------------
# Show other info for this variable
#------------------------------------
## show_var_info()
#==================================================================
def show_var_info( dummy=None ):
    #---------------------------------------------------
    # Note: Not sure why "dummy" arg is required here.
    #---------------------------------------------------
    if (o3.value == ''):
        return
    try:
        short_name = short_name_map[ o3.value ]
        # print('#### short_name =', short_name)
        #----------------------------------------------
        # Note: long_name is selected from Dropdown.
        # o3.value = get_var_longname( short_name )
        #----------------------------------------------
        o4.value = get_var_units( short_name )
        o5.value = get_var_shape( short_name )
        o6.value = get_var_dimensions( short_name )
        o7.value = get_var_dtype( short_name )
    except KeyError:
        # short_name_map is not built until a dataset is opened.
        pass
#==================================================================
def get_all_var_shortnames( dataset ):
return list( dataset.keys() )
#==================================================================
def get_all_var_longnames( short_names ):
# Assume short_names is available as global var.
# short_names = get_all_var_shortnames()
long_names = list()
for name in short_names:
try:
long_name = get_var_longname( name )
long_names.append( long_name )
except:
# Use short name if there is no long_name.
long_names.append( name )
# print('No long name found for:', name)
return long_names
#==================================================================
def get_all_var_units( short_names ):
# Assume short_names is available as global var.
# short_names = get_all_var_shortnames()
units_names = list()
for name in short_names:
try:
units = get_var_units( name )
units_names.append( units )
except:
units_names.append( 'unknown' )
# print('No units name found for:', name)
return units_names
#==================================================================
def get_var_dimensions( short_name ):
var = dataset[ short_name ]
if hasattr(var, 'dimensions'):
return str(var.dimensions)
else:
return 'No dimensions found.'
#==================================================================
def get_var_longname( short_name ):
var = dataset[ short_name ]
if hasattr(var, 'long_name'):
return var.long_name
else:
return short_name
#==================================================================
def get_var_units( short_name ):
var = dataset[ short_name ]
if hasattr(var, 'units'):
return var.units
else:
return 'unknown'
#==================================================================
def get_var_shape( short_name ):
var = dataset[ short_name ]
return str(var.shape)
#==================================================================
def get_var_dtype( short_name ):
var = dataset[ short_name ]
return str(var.dtype)
#==================================================================
def get_var_attributes( short_name ):
var = dataset[ short_name ]
if hasattr(var, 'attributes'):
return var.attributes
else:
return 'No attributes found.'
#==================================================================
def get_filenames( opendap_dir):
r = requests.get( opendap_dir )
lines = r.text.splitlines()
# n_lines = len(lines)
filenames = list()
for line in lines:
if ('"sameAs": "http://' in line) and ('www' not in line):
line = line.replace('.html"', '')
parts = line.split("/")
filename = parts[-1]
filenames.append( filename )
return filenames
#==================================================================
# -
m.on_interaction( show_bounds )
b1.on_click( update_filename_list )
b3.on_click( download_data )
o2.observe( show_dataset_info )
o3.observe( show_var_info, names='value' ) ### NEED names='value' !!!!!!
## o3.observe( show_var_info ) ## This alone works intermittently.
# ## Create the GUI from the GUI components
# +
#===========================
# Set up the UI layout: V1
#===========================
# v1 = widgets.VBox([w1, w2, w3, w4])
# v2 = widgets.VBox([d1, d2, n0, n1 ])
# h1 = widgets.HBox([v1, v2])
# h2 = widgets.VBox([o1, o2])
# h3 = widgets.HBox([b1, status])
# ui = widgets.VBox([p0, h0, m, h1, h2, h3, p0, log])
#======================================
# Set up the UI layout: V2: Accordion
#======================================
e1a = widgets.VBox([w1, w2])
e1b = widgets.VBox([w3, w4])
e1 = widgets.HBox( [e1a, e1b])
v1 = widgets.VBox( [m, e1] )
v2a = widgets.VBox([d1, d2])
v2b = widgets.VBox([d3, d4])
v2 = widgets.HBox([v2a, v2b])
v3 = widgets.VBox([n1])
## bb = widgets.HBox([b1,b2])
bb = widgets.HBox([o1, b1]) # directory + button
v4 = widgets.VBox([bb,o2,o3,o4,o5,o6,o7])
## v4 = widgets.VBox([o1,o2,b1,o3])
h3 = widgets.HBox([f1, n0, b3])
## v5 = widgets.VBox([h3, status, log])
v5 = widgets.VBox([h3, log])
v6 = widgets.VBox([s1])
# selected_index=None causes all cells to be collapsed
# acc = widgets.Accordion( children=[v1, v2, v3, v4, v5],
## acc = widgets.Accordion( children=[v4, v1, v2, v3, v5, v6],
acc = widgets.Accordion( children=[v4, v1, v2, v5, v6],
selected_index=None,
layout=Layout(width=gui_width) )
acc.set_title(0,'Browse Data')
acc.set_title(1,'Spatial Extent')
acc.set_title(2,'Date Range')
acc.set_title(3,'Download Data')
acc.set_title(4,'Settings')
#-----------------------------
# acc.set_title(3,'Variable')
# acc.set_title(4,'Download')
# acc.set_title(5,'Settings')
#--------------------------------------
# acc.set_title(0,'Spatial Extent')
# acc.set_title(1,'Date Range')
# acc.set_title(2,'Variable')
# acc.set_title(3,'OpenDAP Server')
# acc.set_title(4,'Download')
## ui = widgets.VBox([p0,h0,m,acc])
ui = widgets.VBox([p0,h0,acc])
# -
# ## Display the GUI
gui_output = widgets.Output()
display(ui, gui_output)
# ## Some information for testing
# +
# Geographic bounding box for state of Colorado
# Colorado_xmin = -109.060253
# Colorado_xmax = -102.041524
# Colorado_ymin = 36.992426
# Colorado_ymax = 41.003444
|
BALTO_GUI.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.4 ('power_perceiver')
# language: python
# name: python3
# ---
# +
import xarray as xr
import pandas as pd
from power_perceiver.load_raw.data_sources.raw_nwp_data_source import RawNWPDataSource
from power_perceiver.consts import Location
NWP_ZARR_PATH = "/media/jack/wd_18tb/data/ocf/NWP/UK_Met_Office/UKV/zarr/UKV_intermediate_version_3.zarr"
# -
raw_nwp = RawNWPDataSource(
zarr_path=NWP_ZARR_PATH,
roi_height_pixels=4,
roi_width_pixels=4,
history_duration=pd.Timedelta(hours=12),
forecast_duration=pd.Timedelta(hours=12),
start_date="2020-01-01",
end_date="2020-12-31",
y_coarsen=16,
x_coarsen=16,
channels=["dswrf", "t", "si10", "prate"],
)
raw_nwp.per_worker_init(worker_id=0, seed=0)
t0_periods = pd.DataFrame(
[
{"start_dt": "2020-06-01", "end_dt": "2020-06-02"},
]
)
for col_name in t0_periods.columns:
t0_periods[col_name] = pd.to_datetime(t0_periods[col_name])
raw_nwp.load_subset_into_ram(t0_periods)
xr_example = raw_nwp.get_example(
t0_datetime_utc="2020-06-01T12:00",
center_osgb=Location(x=379379.90625, y=583073.0),
)
xr_example
np_example = raw_nwp.to_numpy(xr_example)
np_example
# +
from power_perceiver.consts import BatchKey
np_example[BatchKey.nwp_step]
# -
xr_example.resample(target_time_utc="30T").interpolate(kind="cubic")
|
notebooks/2022-06-17_nwp_resample_and_recent_t0/NWP_resample.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Note on NLTK data download
# Run the cell below to download the necessary nltk data packages. Note, because we are working in classroom workspaces, we will be downloading specific packages in each notebook throughout the lesson. However, you can download all packages by entering `nltk.download()` on your computer. Keep in mind this does take up a bit more space. You can learn more about nltk data installation [here](https://www.nltk.org/data.html).
import nltk
nltk.download('punkt')
# # Tokenization
# Try out the tokenization methods in nltk to split the following text into words and then sentences.
#
# **Note: All solution notebooks can be found by clicking on the Jupyter icon on the top left of this workspace.**
# import statements
from nltk.tokenize import word_tokenize
from nltk.tokenize import sent_tokenize
text = "Dr. Smith graduated from the University of Washington. He later started an analytics firm called Lux, which catered to enterprise customers."
print(text)
# Split text into words using NLTK
words = word_tokenize(text)
print(words)
# Split text into sentences
sentences = sent_tokenize(text)
print(sentences)
|
tokenization_practice.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6
# language: python
# name: python36
# ---
# 
# # Semantic Types
#
# Some string values can be recognized as semantic types. For example, email addresses, US zip codes or IP addresses have specific formats that can be recognized, and then split in specific ways.
#
# When getting a DataProfile you can optionally ask to collect counts of values recognized as semantic types. [`Dataflow.get_profile()`](./data-profile.ipynb) executes the Dataflow, calculates profile information, and returns a newly constructed DataProfile. Semantic type counts can be included in the data profile by calling `get_profile` with the `include_stype_counts` argument set to true.
#
# The `stype_counts` property of the DataProfile will then include entries for columns where some semantic types were recognized for some values.
# +
import azureml.dataprep as dprep
dflow = dprep.read_json(path='../data/json.json')
profile = dflow.get_profile(include_stype_counts=True)
print("row count: " + str(profile.row_count))
profile.stype_counts
# -
# To see all the supported semantic types, you can examine the `SType` enumeration. More types will be added over time.
[t.name for t in dprep.SType]
# You can filter the found semantic types down to just those where all non-empty values matched. The `DataProfile.stype_counts` property gives a list of semantic type counts for each column where at least some matches were found. Those lists are in descending order of count, so here we consider only the first entry in each list, as that is the one with the highest count of matching values.
#
# In this example, the column `inspections.business.postal_code` looks to be a US zip code.
stypes_counts = profile.stype_counts
all_match = [
(column, stypes_counts[column][0].stype)
for column in stypes_counts
if profile.row_count - profile.columns[column].empty_count == stypes_counts[column][0].count
]
all_match
# You can use semantic types to compute new columns. The new columns are the values split up into elements, or canonicalized.
#
# Here we reduce our data down to just the `postal` column so we can better see what a `split_stype` operation can do.
dflow_postal = dflow.keep_columns(['inspections.business.postal_code']).rename_columns({'inspections.business.postal_code': 'postal'})
dflow_postal.head(5)
# With `SType.ZIPCODE`, values are split into their basic five-digit zip code and the plus-four add-on of the Zip+4 format.
dflow_split = dflow_postal.split_stype('postal', dprep.SType.ZIPCODE)
dflow_split.head(5)
# `split_stype` also allows you to specify the fields of the stype to use and the name of the new columns. For example, if you just needed to strip the plus four from our zip codes, you could use this.
dflow_no_plus4 = dflow_postal.split_stype('postal', dprep.SType.ZIPCODE, ['zip'], ['zipNoPlus4'])
dflow_no_plus4.head(5)
|
how-to-guides/semantic-types.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# Observationally, studies of doppler-shift maps have shown that many dense cores have non-negligible, systematic velocities. This is important because the centrifugal force provides an additional support for the sphere against gravitational infall.
#
# While it is easy to think about a rotating sphere, putting rotation on a Cartesian grid is a bit more tricky, in particular, we need to enforce:
#
#
#
# 1. Solid body rotation $\leftrightarrow d\Omega/dr = 0$.
# 2. Everything should be rotating in a plane $v_z=0$ (angular momentum in the $z$-direction).
#
# We can get the $\Omega$ from either the energy-balance or the force-balance approach, where $\beta$ is our tuning parameter and determines whether the sphere is rotationally-supported or collapsing.
# \begin{align}
# \beta = \frac{E_{k,rot}}{E_{p,grav}}
# \\ \boxed{\Omega \approx\sqrt{\frac{\beta GM}{R^3}}}
# \end{align}
# Given $x$, $y$, $z$, $\Omega$, we can get:
# \begin{align}
# \theta = \cos^{-1}\left(\frac{z}{r}\right)
# \\ \boxed{\phi = \tan^{-1}\left(\frac{y}{x}\right)}
# \\ r = \sqrt{x^2+y^2+z^2}
# \end{align}
# From the conservation of angular momentum (things should rotate in a plane), $V_z = 0$. Enforcing solid body rotation $$\Omega = \frac{|V|}{r}= \frac{\sqrt{V_x^2+V_y^2}}{r}$$.
#
# We can solve this by two approaches to enforce circular rotation:
#
# ### $\vec{\rho}\perp\vec{v}$ Approach
# Let $\vec{\rho}$ be the cylindrical radial vector
# \begin{align}
# \vec{\rho} = \vec{r}\sin\theta
# \\ \vec{\rho}\cdot \vec{v} = x V_x + y V_y = 0
# \\ \Rightarrow V_x = -V_y \tan\phi
# \end{align}
# when $\theta \neq 0,\pi$. (Note that the geometric approach below doesn't have this constraint on $\theta$.)
#
# ### Geometric Approach
# Looking at a top-down view, we can determine geometrically:
# \begin{align}
# V_x = - v \sin\phi
# \\ V_y = v \cos\phi
# \end{align}
# Dividing the two we get: $V_x = -V_y \tan\phi$, consistent with the $\vec{\rho}\perp\vec{v}$ approach.
# \begin{align}
# \boxed{V_x = - \Omega r \sin\phi}
# \\ \boxed{V_y = \Omega r \cos\phi}
# \end{align}
#
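# As a quick numerical sanity check of the rotation field set up in the Fortran snippet below (not part of the FLASH setup itself; the $\beta$, $M$, $R$ values here are purely illustrative), we can build the velocity field on a Cartesian grid and verify both solid-body rotation and $\vec{v}\perp\vec{\rho}$:

```python
import numpy as np

G = 6.674e-8                       # gravitational constant [cgs]
beta, M, R = 0.02, 2.0e33, 1.0e17  # illustrative: beta, mass [g], radius [cm]
Omega = np.sqrt(beta * G * M / R**3)

# 2D Cartesian grid through the z = 0 plane
x, y = np.meshgrid(np.linspace(-R, R, 64), np.linspace(-R, R, 64))
r = np.sqrt(x**2 + y**2)
phi = np.arctan2(y, x)             # full-quadrant angle, unlike atan(y/x)

Vx = -Omega * r * np.sin(phi)
Vy = Omega * r * np.cos(phi)

mask = r > 0
# Check 1: solid-body rotation, |V| / r = Omega everywhere
assert np.allclose(np.hypot(Vx, Vy)[mask] / r[mask], Omega)
# Check 2: velocity is perpendicular to the cylindrical radius vector
cos_angle = (x * Vx + y * Vy)[mask] / (r[mask] * np.hypot(Vx, Vy)[mask])
assert np.allclose(cos_angle, 0.0, atol=1e-10)
```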
# %pylab inline
from scripts.plotSim import *
# Setup :
#
# ./setup RotatingSinkSphere -3d +usm --maxblock=500 -auto ; cd object/;make -j8;cd ..;
# ### Initial Conditions
# ~~~fortran
# # !Computed from 3*G*M/(rmax/1.057e-17)**3
# omega = sqrt(2.2196e-25*beta_param)
# ...
# rr = sqrt(xx**2 + yy**2 + zz**2)
# phi = atan(yy/xx)
# ....
# velxZone = abs(omega*rr*sin(phi))
# velyZone = abs(omega*rr*cos(phi))
#
# if (xx<0) then
# velyZone=-velyZone
# endif
# if (yy>0) then
# velxZone=-velxZone
# endif
# if (rc >=18.719) then
# velxZone = velxZone*exp(-(rc-18.719)/9.35)
# velyZone = velyZone*exp(-(rc-18.719)/9.35)
# endif
# velzZone = 0.0
# ~~~
# Note that we need to explicitly specify the direction of the velocities because Fortran's `atan(y/x)` only returns values in $(-\pi/2, \pi/2)$. We also damp the velocity outside of the sphere, so that the velocity at the edge of the box is not too high. (The only important physics in the problem is the sphere itself rotating.)
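# The quadrant ambiguity is easy to demonstrate in Python: `atan(y/x)` maps opposite quadrants to the same angle, whereas `atan2(y, x)` (also available in Fortran as `atan2`) recovers the full angle with no manual sign fix-ups:

```python
from math import atan, atan2, pi

# (1, 1) and (-1, -1) lie in opposite quadrants, but y/x is the same
assert atan(1 / 1) == atan(-1 / -1)              # both collapse to pi/4
assert abs(atan2(1, 1) - pi / 4) < 1e-12         # first quadrant: +pi/4
assert abs(atan2(-1, -1) + 3 * pi / 4) < 1e-12   # third quadrant: -3*pi/4
```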
# ### Running the example
# Code example up to now is in ``RotatingSinkSphere/``.
# cd ../proj/dlee/FLASH4.3/object/
plot_dens(612)
from IPython.display import YouTubeVideo
YouTubeVideo("XnIdxSPN0_A",width=520,height=420)
|
tutorial/8-Rotation and Discs.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="5rmpybwysXGV"
# ##### Copyright 2018 The TensorFlow Authors.
# + cellView="form" colab={} colab_type="code" id="m8y3rGtQsYP2"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="hrXv0rU9sIma"
# # Custom training: basics
# + [markdown] colab_type="text" id="7S0BwJ_8sLu7"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/tutorials/eager/custom_training.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/tutorials/eager/custom_training.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="k2o3TTG4TFpt"
# In the previous tutorial we covered the TensorFlow APIs for automatic differentiation, a basic building block for machine learning.
# In this tutorial we will use the TensorFlow primitives introduced in the prior tutorials to do some simple machine learning.
#
# TensorFlow also includes a higher-level neural networks API (`tf.keras`) which provides useful abstractions to reduce boilerplate. We strongly recommend those higher level APIs for people working with neural networks. However, in this short tutorial we cover neural network training from first principles to establish a strong foundation.
# + [markdown] colab_type="text" id="3LXMVuV0VhDr"
# ## Setup
# + colab={} colab_type="code" id="PJ64L90aVir3"
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# # %tensorflow_version only exists in Colab.
# %tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf

tf.enable_eager_execution()  # required for the .numpy() calls below
# + [markdown] colab_type="text" id="eMAWbDJFVmMk"
# ## Variables
#
# Tensors in TensorFlow are immutable stateless objects. Machine learning models, however, need to have changing state: as your model trains, the same code to compute predictions should behave differently over time (hopefully with a lower loss!). To represent this state which needs to change over the course of your computation, you can choose to rely on the fact that Python is a stateful programming language:
#
# + colab={} colab_type="code" id="VkJwtLS_Jbn8"
# Using python state
x = tf.zeros([10, 10])
x += 2 # This is equivalent to x = x + 2, which does not mutate the original
# value of x
print(x)
# + [markdown] colab_type="text" id="wfneTXy7JcUz"
# TensorFlow, however, has stateful operations built in, and these are often more pleasant to use than low-level Python representations of your state. To represent weights in a model, for example, it's often convenient and efficient to use TensorFlow variables.
#
# A Variable is an object which stores a value and, when used in a TensorFlow computation, will implicitly read from this stored value. There are operations (`tf.assign_sub`, `tf.scatter_update`, etc) which manipulate the value stored in a TensorFlow variable.
# + colab={} colab_type="code" id="itxmrMil6DQi"
v = tf.Variable(1.0)
assert v.numpy() == 1.0
# Re-assign the value
v.assign(3.0)
assert v.numpy() == 3.0
# Use `v` in a TensorFlow operation like tf.square() and reassign
v.assign(tf.square(v))
assert v.numpy() == 9.0
# + [markdown] colab_type="text" id="-paSaeq1JzwC"
# Computations using Variables are automatically traced when computing gradients. For Variables representing embeddings TensorFlow will do sparse updates by default, which are more computation and memory efficient.
#
# Using Variables is also a way to quickly let a reader of your code know that this piece of state is mutable.
# + [markdown] colab_type="text" id="BMiFcDzE7Qu3"
# ## Example: Fitting a linear model
#
# Let's now put the few concepts we have so far ---`Tensor`, `GradientTape`, `Variable` --- to build and train a simple model. This typically involves a few steps:
#
# 1. Define the model.
# 2. Define a loss function.
# 3. Obtain training data.
# 4. Run through the training data and use an "optimizer" to adjust the variables to fit the data.
#
# In this tutorial, we'll walk through a trivial example of a simple linear model: `f(x) = x * W + b`, which has two variables - `W` and `b`. Furthermore, we'll synthesize data such that a well trained model would have `W = 3.0` and `b = 2.0`.
# + [markdown] colab_type="text" id="gFzH64Jn9PIm"
# ### Define the model
#
# Let's define a simple class to encapsulate the variables and the computation.
# + colab={} colab_type="code" id="_WRu7Pze7wk8"
class Model(object):
def __init__(self):
# Initialize variable to (5.0, 0.0)
# In practice, these should be initialized to random values.
self.W = tf.Variable(5.0)
self.b = tf.Variable(0.0)
def __call__(self, x):
return self.W * x + self.b
model = Model()
assert model(3.0).numpy() == 15.0
# + [markdown] colab_type="text" id="xa6j_yXa-j79"
# ### Define a loss function
#
# A loss function measures how well the output of a model for a given input matches the desired output. Let's use the standard L2 loss.
# + colab={} colab_type="code" id="Y0ysUFGY924U"
def loss(predicted_y, desired_y):
return tf.reduce_mean(tf.square(predicted_y - desired_y))
# + [markdown] colab_type="text" id="qutT_fkl_CBc"
# ### Obtain training data
#
# Let's synthesize the training data with some noise.
# + colab={} colab_type="code" id="gxPTb-kt_N5m"
TRUE_W = 3.0
TRUE_b = 2.0
NUM_EXAMPLES = 1000
inputs = tf.random_normal(shape=[NUM_EXAMPLES])
noise = tf.random_normal(shape=[NUM_EXAMPLES])
outputs = inputs * TRUE_W + TRUE_b + noise
# + [markdown] colab_type="text" id="-50nq-wPBsAW"
# Before we train the model let's visualize where the model stands right now. We'll plot the model's predictions in red and the training data in blue.
# + colab={} colab_type="code" id="_eb83LtrB4nt"
import matplotlib.pyplot as plt
plt.scatter(inputs, outputs, c='b')
plt.scatter(inputs, model(inputs), c='r')
plt.show()
print('Current loss: ', loss(model(inputs), outputs).numpy())
# + [markdown] colab_type="text" id="sSDP-yeq_4jE"
# ### Define a training loop
#
# We now have our network and our training data. Let's train it, i.e., use the training data to update the model's variables (`W` and `b`) so that the loss goes down using [gradient descent](https://en.wikipedia.org/wiki/Gradient_descent). There are many variants of the gradient descent scheme that are captured in `tf.train.Optimizer` implementations. We'd highly recommend using those implementations, but in the spirit of building from first principles, in this particular example we will implement the basic math ourselves.
# + colab={} colab_type="code" id="MBIACgdnA55X"
def train(model, inputs, outputs, learning_rate):
with tf.GradientTape() as t:
current_loss = loss(model(inputs), outputs)
dW, db = t.gradient(current_loss, [model.W, model.b])
model.W.assign_sub(learning_rate * dW)
model.b.assign_sub(learning_rate * db)
# + [markdown] colab_type="text" id="RwWPaJryD2aN"
# Finally, let's repeatedly run through the training data and see how `W` and `b` evolve.
# + colab={} colab_type="code" id="XdfkR223D9dW"
model = Model()
# Collect the history of W-values and b-values to plot later
Ws, bs = [], []
epochs = range(10)
for epoch in epochs:
Ws.append(model.W.numpy())
bs.append(model.b.numpy())
current_loss = loss(model(inputs), outputs)
train(model, inputs, outputs, learning_rate=0.1)
print('Epoch %2d: W=%1.2f b=%1.2f, loss=%2.5f' %
(epoch, Ws[-1], bs[-1], current_loss))
# Let's plot it all
plt.plot(epochs, Ws, 'r',
epochs, bs, 'b')
plt.plot([TRUE_W] * len(epochs), 'r--',
[TRUE_b] * len(epochs), 'b--')
plt.legend(['W', 'b', 'true W', 'true_b'])
plt.show()
# + [markdown] colab_type="text" id="vPnIVuaSJwWz"
# ## Next Steps
#
# In this tutorial we covered `Variable`s and built and trained a simple linear model using the TensorFlow primitives discussed so far.
#
# In theory, this is pretty much all you need to use TensorFlow for your machine learning research.
# In practice, particularly for neural networks, the higher level APIs like `tf.keras` will be much more convenient since it provides higher level building blocks (called "layers"), utilities to save and restore state, a suite of loss functions, a suite of optimization strategies etc.
#
|
site/en/r1/tutorials/eager/custom_training.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.0 64-bit ('maximum-segment-sum')
# name: python3100jvsc74a57bd0793eb695977fec631855426e2f18f47b7acfa989782bbdade29a32a730f679d5
# ---
# Welcome to the Maximum Segment Sum playground!
#
# Here we'll test the maximum segment sum on our story arc example and dig into some other fun stuff about left- and right-folds.
from maximum_segment_sum import (
naive_maximum_segment_sum,
efficient_maximum_segment_sum,
fold_from_left,
fold_from_right,
)
toy_example_list = [-1, 2, -3, 5, -2, 1, 3, -2, -2, -3, 6]
naive_maximum_segment_sum(toy_example_list)
efficient_maximum_segment_sum(toy_example_list)
# Now, just to make sure that indeed the naive version is much slower than the efficient one:
# +
import time
extended_toy_example_list = toy_example_list + 300*[0,] # extended_toy_example_list = [-1, 2, -3, 5, -2, 1, 3, -2, -2, -3, 6, 0, 0, 0, ...]
start = time.time()
naive_version_result_on_extended_list = naive_maximum_segment_sum(extended_toy_example_list)
end = time.time()
print(f"Execution time (in seconds) of naive maximum segment sum on a ~300 items list: {round(end - start, 2)} seconds.")
start = time.time()
efficient_version_result_on_extended_list = efficient_maximum_segment_sum(extended_toy_example_list)
end = time.time()
print(f"Execution time (in seconds) of efficient maximum segment sum on the same list: {round(end - start, 4)} seconds.")
assert naive_version_result_on_extended_list == efficient_version_result_on_extended_list == 7
# +
def add(x: int, y: int) -> int:
return x+y
assert fold_from_left(add, 0)([1, 2, 3, 4]) == add(add(add(add(0, 1), 2), 3), 4)
assert fold_from_right(add, 0)([1, 2, 3, 4]) == add(1, add(2, add(3, add(4, 0))))
assert (
sum([1, 2, 3, 4]) == 10
== fold_from_left(add, 0)([1, 2, 3, 4])
== fold_from_right(add, 0)([1, 2, 3, 4])
)
# -
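# If you're curious how such folds might be implemented, here is a minimal sketch using `functools.reduce` (assuming the curried signature used above; the names `fold_left_sketch`/`fold_right_sketch` are ours, and the real module may differ):

```python
from functools import reduce

def fold_left_sketch(f, init):
    # ((init . x1) . x2) . x3 ... -- exactly what reduce does
    return lambda xs: reduce(f, xs, init)

def fold_right_sketch(f, init):
    # x1 . (x2 . (x3 . ... init)) -- fold the reversed list, flipping the arguments
    return lambda xs: reduce(lambda acc, x: f(x, acc), reversed(xs), init)

fold_left_sketch(lambda a, b: a + b, 0)([1, 2, 3, 4])  # 10
```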
# Well, you just unveiled some interesting relation between summing a list of integers and left/right-folding the addition over it: they actually are one and the same thing!
#
# ```py
# fold_from_left(add, 0) = fold_from_right(add, 0) = sum
# ```
# +
from math import prod
def multiply(x: int, y: int) -> int:
return x*y
assert fold_from_left(multiply, 1)([1, 2, 3, 4]) == multiply(multiply(multiply(multiply(1, 1), 2), 3), 4)
assert fold_from_right(multiply, 1)([1, 2, 3, 4]) == multiply(1, multiply(2, multiply(3, multiply(4, 1))))
assert (
prod([1, 2, 3, 4]) == 24
== fold_from_left(multiply, 1)([1, 2, 3, 4])
== fold_from_right(multiply, 1)([1, 2, 3, 4])
)
# -
# And one more sweetness. Taking the product of an integer list is equivalent to left-&-right-folding the binary multiplication over it:
#
# ```py
# fold_from_left(multiply, 1) = fold_from_right(multiply, 1) = prod
# ```
# Quiz time: What happens if we inadvertently slip 0 into our previous multiplication folds, i.e. what are `fold_from_left(multiply, 0)` and `fold_from_right(multiply, 0)`?
# Recall the `all` and `any` Python functions, which check that *all* boolean list elements are `True` and that *at least one* is, respectively? Well, they can also be written as folds! (Here we'll content ourselves with the left version -- the right-fold is once again equivalent to its left counterpart, this time because `logical_and` and `logical_or` are "commutative", i.e. order-agnostic.)
# +
def logical_and(x: bool, y: bool) -> bool:
return x and y
assert fold_from_left(logical_and, True)([True, True, True]) == all([True, True, True]) == True
assert fold_from_left(logical_and, True)([True, False, True]) == all([True, False, True]) == False
assert fold_from_left(logical_and, True)([False, True, False]) == all([False, True, False]) == False
assert fold_from_left(logical_and, True)([False, False, False]) == all([False, False, False]) == False
# +
def logical_or(x: bool, y: bool) -> bool:
return x or y
assert fold_from_left(logical_or, False)([True, True, True]) == any([True, True, True]) == True
assert fold_from_left(logical_or, False)([True, False, True]) == any([True, False, True]) == True
assert fold_from_left(logical_or, False)([False, True, False]) == any([False, True, False]) == True
assert fold_from_left(logical_or, False)([False, False, False]) == any([False, False, False]) == False
# -
# Or in short:
#
# ```py
# fold_from_left(logical_and, True) = fold_from_right(logical_and, True) = all
#
# fold_from_left(logical_or, False) = fold_from_right(logical_or, False) = any
# ```
# Quiz time again: What are `fold_from_left(logical_and, False)` and `fold_from_left(logical_or, True)`?
# And that's it for this little playground but feel free to import other functions from the `maximum_segment_sum` module to experiment on your own!
# playground.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Import libraries and data
#
# The dataset was obtained from the capstone project description (direct link [here](https://d3c33hcgiwev3.cloudfront.net/_429455574e396743d399f3093a3cc23b_capstone.zip?Expires=1530403200&Signature=<KEY>&Key-Pair-Id=<KEY>)) and split manually into separate csv files. They were stored in my personal GitHub account (folder link [here](https://github.com/caiomiyashiro/RecommenderSystemsNotebooks/tree/master/data/capstone)); you can download them and paste them inside your working directory for this notebook to run.
import pandas as pd
import numpy as np
# ## Preprocess data
#
# Float data came with ',' as the decimal separator in the csv, while Python expects '.', so the numbers were treated as text. To convert them to numbers, I first replaced all the commas with periods and then converted the columns to float.
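# As a tiny self-contained illustration of that conversion (toy data, not the capstone files):

```python
import pandas as pd

# comma-decimal strings, as they arrive from the csv
toy = pd.DataFrame({'Price': ['1,5', '12,25']})
# replace commas with periods, then cast to float
toy['Price'] = toy['Price'].str.replace(',', '.', regex=False).astype(float)
toy['Price'].tolist()  # [1.5, 12.25]
```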
# +
items = pd.read_csv('data/capstone/Capstone Data - Office Products - Items.csv', index_col=0)
actual_ratings = pd.read_csv('data/capstone/Capstone Data - Office Products - Ratings.csv', index_col=0)
content_based = pd.read_csv('data/capstone/Capstone Data - Office Products - CBF.csv', index_col=0)
user_user = pd.read_csv('data/capstone/Capstone Data - Office Products - User-User.csv', index_col=0)
item_item = pd.read_csv('data/capstone/Capstone Data - Office Products - Item-Item.csv', index_col=0)
matrix_fact = pd.read_csv('data/capstone/Capstone Data - Office Products - MF.csv', index_col=0)
pers_bias = pd.read_csv('data/capstone/Capstone Data - Office Products - PersBias.csv', index_col=0)
items[['Availability','Price']] = items[['Availability','Price']].apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float)
# preprocess
content_based = content_based.apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float)
user_user = user_user.apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float)
item_item = item_item.apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float)
matrix_fact = matrix_fact.apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float)
pers_bias = pers_bias.apply(lambda col: col.apply(lambda elem: str(elem).replace(',', '.'))).astype(float)
print('items.shape = ' + str(items.shape))
print('actual_ratings.shape = ' + str(actual_ratings.shape))
print('content_based.shape = ' + str(content_based.shape))
print('user_user.shape = ' + str(user_user.shape))
print('item_item.shape = ' + str(item_item.shape))
print('matrix_fact.shape = ' + str(matrix_fact.shape))
print('pers_bias.shape = ' + str(pers_bias.shape))
actual_ratings.head()
# -
# # Class RecommenderEvaluator
#
# To make the metrics easier to evaluate, I created a class that receives the original ratings and the predicted ratings from every recommender system, and defined functions to extract all the metrics established in section 1 of the capstone report. Let's take a look at a summary of the class before looking at the code:
# - **Constructor (init)**: receives all the recommendation algorithms, besides the actual rating list and the list of items. All data is contained in the files downloaded from Coursera. Besides storing all recommendation algorithms, the constructor also calculates the 20 most frequent items, which are used in the popularity metric calculation.
#
# - **get_observed_ratings**: as the ratings matrix is sparse, this method only returns the items a user with id userId has purchased.
#
# - **get_top_n**: by ordering all the predicted ratings for each recommendation algorithm, we can extract what would be their 'top' recommendation for a given user. Given a parameter $n$, we can then return all the top $n$ recommendations for all the recommendation algorithms.
#
# - **rmse**: by comparing the observed ratings a given user has given to an item with the predicted rating an algorithm has computed for that user, we can get an idea of how much error the algorithm makes when predicting the user's ratings. Here we don't work with lists, as usually each user has rated only a small number of items. So we get all the items the user has rated, recover these items from the algorithms' recommendations and then calculate the error.
#
# - **nDCG**: by looking at lists now, we can get an idea of how close to optimal the ranked lists are. Using the scoring factor defined in the report, we calculate the overall DCG for the recommenders' lists and then normalise it following the nDCG definition.
#
# - **Price and availability diversity**: diversity metrics which evaluate how the recommended items' prices vary, *i.e.*, the standard deviation of the price. The higher, the better in this case. The same goes for the availability index, but there a higher standard deviation means the models are recommending a mix of items that are and are not present in local stores.
#
# - **Popularity**: a popularity-oriented recommender tries to recommend items which have a high chance of being purchased. In the formulation of this metric, an item has a high chance of being purchased if lots of people have purchased it. In the class constructor, we take the observed ratings data and the item list and select the top $n$ (default = 20) most purchased items. For a recommendation list, we return the ratio of its items that fall inside this top-$n$ list.
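# Before the full class, here is a hedged mini-sketch of the DCG/nDCG arithmetic that the `nDCG` method below implements (standalone toy functions with our own names, not the class code):

```python
import numpy as np

def dcg(scores):
    # positions are 1-based: sum of score_i / log2(i + 1)
    return sum(s / np.log2(i + 2) for i, s in enumerate(scores))

def ndcg(scores):
    # normalise by the DCG of the best possible ordering of the same scores
    ideal = dcg(sorted(scores, reverse=True))
    return dcg(scores) / ideal if ideal > 0 else 0.0

ndcg([2, 1, 0])  # 1.0 -- this list is already ideally ranked
```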
class RecommenderEvaluator:
def __init__(self, items, actual_ratings, content_based, user_user, item_item, matrix_fact, pers_bias):
self.items = items
self.actual_ratings = actual_ratings
# static data containing the average score given by each user
self.average_rating_per_userid = actual_ratings.apply(lambda row: np.average(row[~np.isnan(row)]))
self.content_based = content_based
self.user_user = user_user
self.item_item = item_item
self.matrix_fact = matrix_fact
self.pers_bias = pers_bias
# aggregate list. Makes for loops among all recommenders' predictions easier
self.recommenders_list = [self.content_based, self.user_user, self.item_item, self.matrix_fact,self.pers_bias]
self.recommenders_list_names = ['content_based', 'user_user', 'item_item', 'matrix_fact','pers_bias']
# Used for item popularity metric.
# Calculate the 20 most popular items (item which most of the customers bought)
N_LIM = 20
perc_users_bought_item = self.actual_ratings.apply(lambda item: np.sum(~np.isnan(item)), axis=0)/actual_ratings.shape[1]
sort_pop_items = np.argsort(perc_users_bought_item)[::-1]
        self.pop_items = perc_users_bought_item.iloc[sort_pop_items][:N_LIM].index.values.astype(int)
def get_observed_ratings(self, userId):
"""
Returns all the items a given user evaluated and their ratings. Used mainly by all the metrics calculation
:parameter: userId - user id
:return: array of rated items. Index is the item id and value is the item rating
"""
userId = str(userId)
filtered_ratings = self.actual_ratings[userId]
rated_items = filtered_ratings[~np.isnan(filtered_ratings)]
return rated_items
def get_top_n(self, userId, n):
"""
Get the top n recommendations for every recommender in the list given a user id
:parameter: userId - user id
:parameter: n - max number of recommendations to return
:return: dictionary where the key is the recommender's name and the value is an array of size n for the top n recommnendations.
"""
userId = str(userId)
predicted_ratings = dict()
for recommender, recommender_name in zip(self.recommenders_list,self.recommenders_list_names):
item_ids = recommender[userId].argsort().sort_values()[:n].index.values
predicted_ratings[recommender_name] = item_ids
return predicted_ratings
def rmse(self, userId):
"""
Root Mean Square Error of the predicted and observed values between the recommender's prediction and the actual ratings
:parameter: userId - user id
        :return: dataframe containing the rmse from all recommenders given user id
"""
userId = str(userId)
observed_ratings = self.get_observed_ratings(userId)
rmse_list = {'rmse': []}
for recommender in self.recommenders_list:
predicted_ratings = recommender.loc[observed_ratings.index, userId]
rmse_list['rmse'].append(np.sqrt(np.average((predicted_ratings - observed_ratings)**2)))
rmse_list = pd.DataFrame(rmse_list, index = self.recommenders_list_names)
return rmse_list
def nDCG(self, userId, top_n = 5, individual_recommendation = None):
"""
Normalised Discounted Cumulative Gain for all recommenders given user id
:parameter: userId - user id
        :return: dataframe containing the nDCG from all recommenders given user id
"""
ri = self.get_observed_ratings(userId)
if(individual_recommendation is None):
topn = self.get_top_n(userId,top_n)
results_pandas_index = self.recommenders_list_names
else:
topn = individual_recommendation
results_pandas_index = list(individual_recommendation.keys())
# 1st step: Given recommendations, transform list into scores (see score transcriptions in the capstone report)
scores_all = []
for name, item_list in topn.items():
scores = np.empty_like(item_list) # initialise 'random' array
scores[:] = -10 ###########################
# check which items returned by the recommender
is_already_rated = np.isin(item_list, ri.index.values) # the user already rated. Items users didn't rate
scores[~is_already_rated] = 0 # receive score = 0
for index, score in enumerate(scores):
if(score != 0): # for each recommended items the user rated
if(ri[item_list[index]] < self.average_rating_per_userid[userId] - 1): # score accordingly the report
scores[index] = -1
elif((ri[item_list[index]] >= self.average_rating_per_userid[userId] - 1) &
(ri[item_list[index]] < self.average_rating_per_userid[userId] + 0.5)):
scores[index] = 1
else:
scores[index] = 2
scores_all.append(scores) # append all the transformed scores
scores_all
# 2nd step: Given scores, calculate the model's DCG, ideal DCG and then nDCG
nDCG_all = dict()
for index_model, scores_model in enumerate(scores_all): # for each model
model_DCG = 0 # calculate model's DCG
for index, score in enumerate(scores_model): #
index_ = index + 1 #
model_DCG = model_DCG + score/np.log2(index_ + 1) #
ideal_rank_items = np.sort(scores_model)[::-1] # calculate model's ideal DCG
ideal_rank_DCG = 0 #
for index, ideal_score in enumerate(ideal_rank_items): #
index_ = index + 1 #
ideal_rank_DCG = ideal_rank_DCG + ideal_score/np.log2(index_ + 1) #
if((ideal_rank_DCG == 0) | (np.abs(ideal_rank_DCG) < np.abs(model_DCG))): # if nDCG is 0 or only negative scores came up
nDCG = 0
else: # calculate final nDCG when ideal DCG is != 0
nDCG = model_DCG/ideal_rank_DCG
nDCG_all[results_pandas_index[index_model]] = nDCG # save each model's nDCG in a dict
# convert it to dataframe
result_final = pd.DataFrame(nDCG_all, index=range(1)).transpose()
result_final.columns = ['nDCG']
return result_final
def price_diversity(self,userId,top_n = 5,individual_recommendation = None):
"""
Mean and standard deviation of the price of the top n products recommended by each algorithm.
Intuition for a high price wise diversity recommender is to have a high price standard deviation
:parameter: userId - user id
        :return: dataframe containing the price's mean and standard deviation from all recommenders given user id
"""
if(individual_recommendation is None):
topn = self.get_top_n(userId,top_n)
else:
topn = individual_recommendation
stats = pd.DataFrame()
for key, value in topn.items():
data_filtered = self.items.loc[topn[key]][['Price']].agg(['mean','std']).transpose()
data_filtered.index = [key]
            stats = pd.concat([stats, data_filtered])  # DataFrame.append was removed in pandas 2.0
return stats
def availability_diversity(self,userId,top_n = 5,individual_recommendation = None):
"""
        Mean and standard deviation of the availability index of the top n products recommended by each algorithm.
        Intuition for a high availability diversity is to have a small mean value in the availability index
:parameter: userId - user id
        :return: dataframe containing the availability index's mean and standard deviation from all recommenders given user id
"""
if(individual_recommendation is None):
topn = self.get_top_n(userId,top_n)
else:
topn = individual_recommendation
stats = pd.DataFrame()
for key, value in topn.items():
data_filtered = self.items.loc[topn[key]][['Availability']].agg(['mean','std']).transpose()
data_filtered.index = [key]
            stats = pd.concat([stats, data_filtered])  # DataFrame.append was removed in pandas 2.0
return stats
def popularity(self, userId,top_n = 5,individual_recommendation = None):
"""
Return the ratio of how many items of the top n items are among the most popular purchased items. Default is
the 20 most purchased items.
:parameter: userId - user id
        :return: dataframe containing the ratio of popular items in the recommended list from all recommenders given user id
"""
if(individual_recommendation is None):
topn = self.get_top_n(userId,top_n)
results_pandas_index = self.recommenders_list_names
else:
topn = individual_recommendation
results_pandas_index = list(individual_recommendation.keys())
results = {'popularity': []}
for recommender, recommendations in topn.items():
popularity = np.sum(np.isin(recommendations,self.pop_items))
results['popularity'].append(popularity)
return pd.DataFrame(results,index = results_pandas_index)
def precision_at_n(self, userId, top_n = 5, individual_recommendation = None):
if(individual_recommendation is None):
topn = self.get_top_n(userId,top_n)
results_pandas_index = self.recommenders_list_names
else:
topn = individual_recommendation
results_pandas_index = list(individual_recommendation.keys())
observed_ratings = self.get_observed_ratings(userId).index.values
precisions = {'precision_at_'+str(top_n): []}
for recommender, recommendations in topn.items():
precisions['precision_at_'+str(top_n)].append(np.sum(np.isin(recommendations, observed_ratings))/top_n)
return pd.DataFrame(precisions,index = results_pandas_index)
# # Test methods:
#
# Just to get an idea of the output of each method, let's call them all with a test user. In the next section we will calculate these metrics for all users.
userId = '64'
re = RecommenderEvaluator(items, actual_ratings, content_based, user_user, item_item, matrix_fact, pers_bias)
# ## Test RMSE
re.rmse(userId)
# ## Test nDCG
re.nDCG(userId)
# ## Test Diversity - Price and Availability
re.price_diversity(userId)
re.availability_diversity(userId)
# ## Test Popularity
re.popularity(userId)
# ## Test Precision@N
re.precision_at_n(userId)
# # Average metrics by all users
#
# Specifically for user 907, the recommendations from the user-user algorithm came back all null (original dataset). This impacted the RMSE calculation in particular, as a single NaN ruined the entire average. So specifically for RMSE we did a separate calculation section. All the other metrics are going to be calculated in the next code block.
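# The underlying issue is standard NumPy behaviour -- a single NaN propagates through `mean`, while `nanmean` ignores it:

```python
import numpy as np

vals = np.array([1.0, 2.0, np.nan])
print(np.mean(vals))     # nan -- one NaN poisons the plain average
print(np.nanmean(vals))  # 1.5 -- the NaN-aware alternative
```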
# +
re = RecommenderEvaluator(items, actual_ratings, content_based, user_user, item_item, matrix_fact, pers_bias)
i = 0
count = np.array([0,0,0,0,0])
for userId in actual_ratings.columns:
if(userId == '907'):
rmse_recommenders = re.rmse(userId).fillna(0)
else:
rmse_recommenders = re.rmse(userId)
count = count + rmse_recommenders['rmse']
# as we didn't use user 907 for user user, divide it by the number of users - 1
denominator = [len(actual_ratings.columns)] * 5
denominator[1] = len(actual_ratings.columns) - 1
print('Average RMSE for all users')
count/ denominator
# +
count_nDCG = np.array([0,0,0,0,0])
count_diversity_price = np.ndarray([5,2])
count_diversity_availability = np.ndarray([5,2])
count_popularity = np.array([0,0,0,0,0])
count_precision_at_5 = np.array([0,0,0,0,0])
for userId in actual_ratings.columns:
nDCG_recommenders = re.nDCG(userId)
count_nDCG = count_nDCG + nDCG_recommenders['nDCG']
diversity_price_recommenders = re.price_diversity(userId)
count_diversity_price = count_diversity_price + diversity_price_recommenders[['mean','std']]
diversity_availability_recommenders = re.availability_diversity(userId)
count_diversity_availability = count_diversity_availability + diversity_availability_recommenders[['mean','std']]
popularity_recommenders = re.popularity(userId)
count_popularity = count_popularity + popularity_recommenders['popularity']
precision_recommenders = re.precision_at_n(userId)
count_precision_at_5 = count_precision_at_5 + precision_recommenders['precision_at_5']
print('\n---')
print('Average nDCG')
print('---\n')
print(count_nDCG/len(actual_ratings.columns))
print('\n---')
print('Average Price - Diversity Measure')
print('---\n')
print(count_diversity_price/len(actual_ratings.columns))
print('\n---')
print('Average Availability - Diversity Measure')
print('---\n')
print(count_diversity_availability/len(actual_ratings.columns))
print('\n---')
print('Average Popularity')
print('---\n')
print(count_popularity/len(actual_ratings.columns))
print('---\n')
print('Average Precision@5')
print('---\n')
print(count_precision_at_5/len(actual_ratings.columns))
# -
# # Final Analysis
#
# In terms of **RMSE**, user-user collaborative filtering proved to be the most effective, although not by a significant margin.
#
# For the nDCG rank score, user-user and now also item-item collaborative filtering were the best.
#
# In terms of price diversity, the item-item algorithm was the most diverse, recommending products varying ~32 dollars around the mean item price. Matrix factorisation and user-user follow right behind, with price standard deviations around 25 dollars. An interesting case here is the *pers_bias* algorithm, as it recommended basically cheap products with a low standard deviation.
#
# For the availability index, all the algorithms besides user-user managed to recommend items not so present in local stores **together** with items present in local stores, as we can see they also provided items with a high availability index (high standard deviation).
#
# In terms of popularity, no algorithm actually managed to obtain good scores the way we defined them. So, if popularity becomes a focus in the future, we can either change the popularity concept or improve mechanics in the recommender so it predicts higher scores for the most popular items in the store.
#
# After this evaluation, the item-item recommender system seemed to have the best overall performance, highlighted by its diversity scores. Unfortunately, the items it suggested are on the whole pricey, so we could check whether it can be mixed with the pers_bias algorithm, which recommended cheap items with a low price standard deviation. Matrix factorization performed well too, but it didn't outperform any of the other recommenders.
# # Hybridization Techniques - Part III
#
# We are trying four different types of hybridization here.
#
# 1. Linear ensemble
# 2. Non linear ensemble
# 3. Top 1 from each recommender
# 4. Recommender switching
#
# The first two options approach the recommender's performance in terms of how good it predicts the users' ratings, so its only evaluation will be in terms of RMSE.
#
# The third approach builds on the intuition that, if we take the top 1 recommendation from each algorithm, the resulting 5-item list will perform better at identifying 'good' items for users. In this case, we consider an item good if the recommender suggested an item the user had already bought. Therefore, the final measurement of this hybridization mechanism is precision@5, as we end up with a 5-item list.
#
# The final mixing algorithm rests on how collaborative filtering mechanisms perform for items without enough users/items in their calculations. As this is a well-known weakness of these recommenders, the idea was to check how many items would be affected if we established a threshold of minimum rating support for using collaborative filtering. If an item doesn't have enough support in the form of users' ratings, we could fall back to a content-based recommendation or even, as a last resort, a non-personalised one.
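# A minimal sketch of such a switching rule (the threshold and names are hypothetical, just to fix ideas):

```python
def choose_recommender(n_ratings, threshold=10):
    # hypothetical rule: enough rating support -> collaborative filtering,
    # otherwise fall back to a content-based recommender
    return 'collaborative' if n_ratings >= threshold else 'content_based'

choose_recommender(3)   # 'content_based'
choose_recommender(25)  # 'collaborative'
```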
#
#
# ## Dataset Creation and User Sample Definition
#
# ### Dataset
#
# For the first and second approaches, we need another perspective on the data. The dataset contains all the existing ratings from all users and concatenates the predictions made by the 5 traditional recommenders. The idea is to use the observed rating as the target variable and all the recommenders' predictions as explanatory variables, *i.e.* treat this as a regression problem.
# +
obs_ratings_list = []
content_based_list = []
user_user_list = []
item_item_list = []
matrix_fact_list = []
pers_bias_list = []
re = RecommenderEvaluator(items, actual_ratings, content_based, user_user, item_item, matrix_fact, pers_bias)
for userId in actual_ratings.columns:
observed_ratings = re.get_observed_ratings(userId)
obs_ratings_list.extend(observed_ratings.values)
content_based_list.extend(content_based.loc[observed_ratings.index, userId].values)
user_user_list.extend(user_user.loc[observed_ratings.index, userId].values)
item_item_list.extend(item_item.loc[observed_ratings.index, userId].values)
matrix_fact_list.extend(matrix_fact.loc[observed_ratings.index, userId].values)
pers_bias_list.extend(pers_bias.loc[observed_ratings.index, userId].values)
dataset = pd.DataFrame({'rating': obs_ratings_list, 'content_based':content_based_list, 'user_user': user_user_list,
'item_item':item_item_list, 'matrix_fact':matrix_fact_list,'pers_bias':pers_bias_list})
dataset = dataset.dropna()
dataset.head()
# -
# ### In order to have an idea of the results, let's choose 3 users randomly to show the predictions using the new hybrid models
np.random.seed(42)
sample_users = np.random.choice(actual_ratings.columns, 3).astype(str)
print('sample_users: ' + str(sample_users))
# ### Get recommenders' predictions for sample users in order to create input for ensemble models (hybridization I and II)
# +
from collections import OrderedDict
df_sample = pd.DataFrame()
for user in sample_users:
content_based_ = re.content_based[user]
user_user_ = re.user_user[user]
item_item_ = re.item_item[user]
matrix_fact_ = re.matrix_fact[user]
pers_bias_ = re.pers_bias[user]
    # DataFrame.append was removed in pandas 2.0; pd.concat is the replacement
    df_sample = pd.concat([df_sample, pd.DataFrame(OrderedDict({'user':user, 'item':actual_ratings.index.values, 'content_based':content_based_, 'user_user':user_user_, 'item_item':item_item_, 'matrix_fact':matrix_fact_, 'pers_bias':pers_bias_}))], ignore_index=True)
df_sample.head()
# -
#
# ## Focus on Performance (RMSE) I - Linear Model
# +
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
linear = LinearRegression()
print('RMSE for linear ensemble of recommender systems:')
# cross_val_score's default regression scorer is R^2, not RMSE; request RMSE explicitly
-np.mean(cross_val_score(linear, dataset.drop('rating', axis=1), dataset['rating'], cv=5, scoring='neg_root_mean_squared_error'))
# -
# ### Predictions for sample users: Creating top 5 recommendations for sample users
pred_cols = ['content_based','user_user','item_item','matrix_fact','pers_bias']
predictions = linear.fit(dataset.drop('rating', axis=1), dataset['rating']).predict(df_sample[pred_cols])
recommendations = pd.DataFrame(OrderedDict({'user':df_sample['user'], 'item':df_sample['item'], 'predictions':predictions}))
recommendations.groupby('user').apply(lambda df_user : df_user.loc[df_user['predictions'].sort_values(ascending=False)[:5].index.values])
# ## Focus on Performance (RMSE) II - Non-Linear Ensemble
# +
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(random_state=42)
print('RMSE for non-linear ensemble of recommender systems:')
# as above, request RMSE explicitly rather than the default R^2 scorer
-np.mean(cross_val_score(rf, dataset.drop('rating', axis=1), dataset['rating'], cv=5, scoring='neg_root_mean_squared_error'))
# -
# ### Predictions for sample users:
predictions = rf.fit(dataset.drop('rating', axis=1), dataset['rating']).predict(df_sample[pred_cols])
recommendations = pd.DataFrame(OrderedDict({'user':df_sample['user'], 'item':df_sample['item'], 'predictions':predictions}))
recommendations.groupby('user').apply(lambda df_user : df_user.loc[df_user['predictions'].sort_values(ascending=False)[:5].index.values])
# ## Focus on Recommendations - Top 1 from each Recommender
#
# With the top-1-from-each recommender, we can evaluate its performance not just with RMSE but with all the list metrics we evaluated before. As a business constraint, we will also pay extra attention to the *precision@5* metric, as general information on how good the recommender is at providing suggestions the user will buy, or already bought in this case.
# The majority of metrics were on the same scale as the best metrics in the all-models comparison. However, it's worth highlighting that the top-1-from-each recommender had the best *precision@5* among all recommenders, showing itself to be a **suitable hybridization mechanism**.
# +
count_nDCG = np.array([0])
count_diversity_price = np.ndarray([1,2])
count_diversity_availability = np.ndarray([1,2])
count_popularity = np.array([0])
count_precision = np.array([0])
for userId in actual_ratings.columns:
top_n_1 = re.get_top_n(userId,1)
user_items = {}
user_items['top_1_all'] = [a[0] for a in top_n_1.values()]
nDCG_recommenders = re.nDCG(userId, individual_recommendation = user_items)
count_nDCG = count_nDCG + nDCG_recommenders['nDCG']
diversity_price_recommenders = re.price_diversity(userId, individual_recommendation = user_items)
count_diversity_price = count_diversity_price + diversity_price_recommenders[['mean','std']]
diversity_availability_recommenders = re.availability_diversity(userId, individual_recommendation = user_items)
count_diversity_availability = count_diversity_availability + diversity_availability_recommenders[['mean','std']]
popularity_recommenders = re.popularity(userId, individual_recommendation = user_items)
count_popularity = count_popularity + popularity_recommenders['popularity']
precision_recommenders = re.precision_at_n(userId, individual_recommendation = user_items)
count_precision = count_precision + precision_recommenders['precision_at_5']
print('\n---')
print('Average nDCG')
print('---\n')
print(count_nDCG/len(actual_ratings.columns))
print('\n---')
print('Average Price - Diversity Measure')
print('---\n')
print(count_diversity_price/len(actual_ratings.columns))
print('\n---')
print('Average Availability - Diversity Measure')
print('---\n')
print(count_diversity_availability/len(actual_ratings.columns))
print('\n---')
print('Average Popularity')
print('---\n')
print(count_popularity/len(actual_ratings.columns))
print('\n---')
print('Average Precision@5')
print('---\n')
print(count_precision/len(actual_ratings.columns))
# -
# ### Predictions for sample users:
results = {}
for user_sample in sample_users:
results[user_sample] = [a[0] for a in list(re.get_top_n(user_sample, 1).values())]
results
# ## Focus on Recommendations - Switching algorithm
#
# ### Can we use a Content Based Recommender for items with less evaluations?
#
# We can see in the cumulative histogram that only around 20% of the rated items had 10 or more ratings. This signals that we could prioritize a content-based recommender, or even a non-personalised one, for the majority of items which don't have a sufficient number of ratings for the collaborative filtering algorithms to be stable.
# +
import matplotlib.pyplot as plt
item_nbr_ratings = actual_ratings.apply(lambda col: np.sum(~np.isnan(col)), axis=1)
item_max_nbr_ratings = item_nbr_ratings.max()
range_item_max_nbr_ratings = range(item_max_nbr_ratings+1)
plt.figure(figsize=(15,3))
plt.subplot(121)
nbr_ratings_items = []
for i in range_item_max_nbr_ratings:
nbr_ratings_items.append(len(item_nbr_ratings[item_nbr_ratings == i]))
plt.plot(nbr_ratings_items)
plt.xlabel('Number of ratings')
plt.ylabel('Amount of items')
plt.title('Histogram of amount of ratings')
plt.subplot(122)
cum_nbr_ratings_items = []
for i in range(len(nbr_ratings_items)):
cum_nbr_ratings_items.append(np.sum(nbr_ratings_items[:i]))
cum_nbr_ratings_items = np.array(cum_nbr_ratings_items)
plt.plot(cum_nbr_ratings_items/actual_ratings.shape[0])
plt.xlabel('Number of ratings')
plt.ylabel('Cumulative distribution')
plt.title('Cumulative histogram of amount of ratings');
# -
# notebooks/reco-tut-asr-99-10-metrics-calculation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Dask [shared installation]
# language: python
# name: dask
# ---
import numpy as np
import pandas as pd
import xarray as xr
import zarr
import math
import glob
import pickle
import statistics
import scipy.stats as stats
from sklearn.neighbors import KernelDensity
import dask
import seaborn as sns
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
def get_files():
models = glob.glob("/terra/data/cmip5/global/historical/*")
avail={}
for model in models:
zg = glob.glob(str(model)+"/r1i1p1/day/2deg/zg*")
        if zg:  # keep only models that actually have zg files
            avail[model.split('/')[-1]] = zg
return avail
files = get_files()
files['NOAA'] = glob.glob("/home/pmarsh/NOAA_2deg/NOAA_zg/*.nc")
files['ERA5'] = glob.glob("/home/pmarsh/NOAA_2deg/ERA5_zg/*.nc")
files.pop('MIROC-ESM')
def contourise(x):
    # returns a contour mask: 1 where zg >= limit (the 90th percentile), NaN elsewhere
    x = x.fillna(0)
    x = x.where(x >= limit)
    x = x / x  # retained values become 1
    return x
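# The `x / x` step above turns every retained value into 1, so the seasonal
# mean of the mask becomes an occurrence frequency. The same trick in plain
# NumPy, with hypothetical values and threshold:

```python
import numpy as np

a = np.array([2.0, 5.0, 9.0, np.nan])
limit = 4.0
masked = np.where(np.nan_to_num(a) >= limit, a, np.nan)  # keep values >= limit
mask = masked / masked                                   # kept values -> 1.0, the rest stay NaN
```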
results={}
for model in files.keys():
print(model)
x = xr.open_mfdataset(files[model])
if model == 'NOAA':
x = x.rename({'hgt':'zg'})
x = x.rename({'level':'plev'})
x = x.sel(plev=850)
x = x.sel(time=slice('1950','2005'))
elif model == 'ERA5':
x = x.rename({'level':'plev'})
x = x.sel(plev=850)
x = x.sel(time=slice('1979','2005'))
else:
x = x.sel(plev=85000)
x = x.sel(time=slice('1950','2005'))
x = x.load()
x = x.sel(lat=slice(-60,0))
x = x[['zg']]
x = x.assign_coords(lon=(((x.lon + 180) % 360) - 180))
with dask.config.set(**{'array.slicing.split_large_chunks': True}):
x = x.sortby(x.lon)
x = x.sel(lon=slice(-50,20))
x = x.resample(time="QS-DEC").mean(dim="time",skipna=True)
x = x.load()
limit = np.nanquantile(x.zg.values,0.9)
results[model]={}
for seas in ['DJF','MAM','JJA','SON']:
mean_seas = x.where(x.time.dt.season==str(seas)).dropna(dim='time')
mean_seas = contourise(mean_seas).zg.fillna(0).mean(dim='time')
results[model][seas] = mean_seas.fillna(0)
x.close()
with open("../HIGH_OUT/SASH_track_2D.p", "wb") as f:
    pickle.dump(results, f)
weights = np.cos(np.deg2rad(results['NOAA']['DJF'].lat)) #area weighted
#mean absolute error calc
scores=[]
for index in results:
MAE={}
for season in ['DJF','MAM','JJA','SON']:
ref = results['NOAA'][season]
x = results[index][season]
MAE[season] = (np.abs(ref - x)).weighted(weights).sum(('lat','lon'))
scores.append([index,np.mean(MAE['DJF'].values + MAE['MAM'].values + MAE['JJA'].values + MAE['SON'].values)])
resultsdf = pd.DataFrame(np.array(scores),columns=['model','score'])
resultsdf = resultsdf.sort_values('score').set_index('model')['score']
with open("../HIGH_OUT/scores_2D.p", "wb") as f:
    pickle.dump(resultsdf, f)
resultsdf.to_csv("../HIGH_OUT/scores_2D.csv")
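# The cos(latitude) weighting used in the MAE loop above can be illustrated in
# plain NumPy (hypothetical fields on a small regular grid; `lat` in degrees):

```python
import numpy as np

def weighted_abs_sum(ref, x, lat):
    # cos(latitude) approximates relative grid-cell area on a regular
    # lat/lon grid, mirroring the .weighted(weights).sum() call above
    w = np.cos(np.deg2rad(lat))[:, None]
    return float(np.sum(np.abs(ref - x) * w))

lat = np.array([-60.0, -30.0, 0.0])
ref = np.ones((3, 4))      # stand-in for the reference (NOAA) mask
model = np.zeros((3, 4))   # stand-in for a model mask
score = weighted_abs_sum(ref, model, lat)
```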
# HIGH/SASH_2D.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/melange396/delphi-epidata/blob/main/delphi_interim_query_builder_test_env.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="m4pQNStu8TCy" outputId="3d492a02-2fce-4639-ed2a-b10c505d5ac9"
from datetime import date, datetime
from typing import (
Any,
Callable,
Dict,
Iterable,
List,
Optional,
Sequence,
Tuple,
Union,
cast,
Mapping,
)
from sqlalchemy import text
from sqlalchemy.engine import RowProxy
from ._common import db, app
from ._db import metadata
from ._printer import create_printer, APrinter
from ._exceptions import DatabaseErrorException
from ._validate import DateRange, extract_strings
from ._params import GeoPair, SourceSignalPair, TimePair
def date_string(value: int) -> str:
# converts a date integer (YYYYMMDD) into a date string (YYYY-MM-DD)
# $value: the date as an 8-digit integer
    year = value // 10000 % 10000
    month = value // 100 % 100
    day = value % 100
return "{0:04d}-{1:02d}-{2:02d}".format(year, month, day)
def to_condition(
field: str,
value: Union[Tuple[str, str], str, Tuple[int, int], int],
param_key: str,
params: Dict[str, Any],
formatter=lambda x: x,
) -> str:
if isinstance(value, (list, tuple)):
params[param_key] = formatter(value[0])
params[f"{param_key}_2"] = formatter(value[1])
return f"{field} BETWEEN :{param_key} AND :{param_key}_2"
params[param_key] = formatter(value)
return f"{field} = :{param_key}"
# TODO: does this work for conditioning on latest issue? who knows
def to_indicator(
field: str,
value: Tuple[str,str],
param_key: str,
params: Dict[str, Any],
formatter=lambda x: x,
) -> str:
if isinstance(value, tuple):
# params[param_key] = formatter(value[0])
# params[f"{param_key}_2"] = formatter(value[1])
return f"CASE WHEN {value[0]} = {value[1]} THEN 1 ELSE 0 END AS `is_latest_issue`"
# params[param_key] = formatter(value)
# return f"{field} = :{param_key}"
def filter_values(
field: str,
values: Optional[Sequence[Union[Tuple[str, str], str, Tuple[int, int], int]]],
param_key: str,
params: Dict[str, Any],
formatter=lambda x: x,
):
    # builds a SQL expression to filter values (ex: locations)
    # $field: name of the field to filter
    # $values: array of scalar values or (low, high) ranges
    if not values:
        return "FALSE"
    conditions = [to_condition(field, v, f"{param_key}_{i}", params, formatter) for i, v in enumerate(values)]
return f"({' OR '.join(conditions)})"
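# The condition builders can be exercised standalone. The field and parameter
# names below are hypothetical; the helper is a trimmed copy of to_condition
# from above, included so the snippet runs on its own:

```python
from typing import Any, Dict

def to_condition(field, value, param_key, params, formatter=lambda x: x):
    # trimmed copy of the helper above, for standalone illustration
    if isinstance(value, (list, tuple)):
        params[param_key] = formatter(value[0])
        params[f"{param_key}_2"] = formatter(value[1])
        return f"{field} BETWEEN :{param_key} AND :{param_key}_2"
    params[param_key] = formatter(value)
    return f"{field} = :{param_key}"

params: Dict[str, Any] = {}
# a scalar becomes an equality, a (low, high) tuple becomes a BETWEEN clause
conds = [to_condition("time_value", v, f"t_{i}", params)
         for i, v in enumerate([20200101, (20200301, 20200331)])]
sql = f"({' OR '.join(conds)})"
```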
def filter_strings(
field: str,
values: Optional[Sequence[Union[Tuple[str, str], str]]],
param_key: str,
params: Dict[str, Any],
):
return filter_values(field, values, param_key, params)
def filter_integers(
field: str,
values: Optional[Sequence[Union[Tuple[int, int], int]]],
param_key: str,
params: Dict[str, Any],
):
return filter_values(field, values, param_key, params)
def filter_dates(
field: str,
values: Optional[Sequence[Union[Tuple[int, int], int]]],
param_key: str,
params: Dict[str, Any],
):
return filter_values(field, values, param_key, params, date_string)
def filter_fields(generator: Iterable[Dict[str, Any]]):
fields = extract_strings("fields")
if not fields:
yield from generator
else:
exclude_fields = {f[1:] for f in fields if f.startswith("-")}
include_fields = [f for f in fields if not f.startswith("-") and f not in exclude_fields]
for row in generator:
filtered = dict()
if include_fields:
# positive list
for field in include_fields:
if field in row:
filtered[field] = row[field]
elif exclude_fields:
# negative list
for k, v in row.items():
if k not in exclude_fields:
filtered[k] = v
yield filtered
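# The include/exclude logic of filter_fields, detached from the Flask request
# context, can be replicated for a single row (filter_row is a hypothetical
# helper, not part of the API):

```python
def filter_row(row, fields):
    # "-name" entries exclude fields; bare names form a positive list
    exclude = {f[1:] for f in fields if f.startswith("-")}
    include = [f for f in fields if not f.startswith("-") and f not in exclude]
    if include:
        return {f: row[f] for f in include if f in row}
    if exclude:
        return {k: v for k, v in row.items() if k not in exclude}
    return row

row = {"geo_value": "ny", "value": 1.5, "stderr": 0.2}
trimmed = filter_row(row, ["geo_value", "value"])  # positive list
no_err = filter_row(row, ["-stderr"])              # negative list
```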
def filter_geo_pairs(
type_field: str,
value_field: str,
values: Sequence[GeoPair],
param_key: str,
params: Dict[str, Any],
) -> str:
"""
returns the SQL sub query to filter by the given geo pairs
"""
def filter_pair(pair: GeoPair, i) -> str:
type_param = f"{param_key}_{i}t"
params[type_param] = pair.geo_type
if isinstance(pair.geo_values, bool) and pair.geo_values:
return f"{type_field} = :{type_param}"
return f"({type_field} = :{type_param} AND \
{filter_strings(value_field, cast(Sequence[str], pair.geo_values), type_param, params)})"
parts = [filter_pair(p, i) for i, p in enumerate(values)]
if not parts:
# something has to be selected
return "FALSE"
return f"({' OR '.join(parts)})"
def filter_source_signal_pairs(
source_field: str,
signal_field: str,
values: Sequence[SourceSignalPair],
param_key: str,
params: Dict[str, Any],
) -> str:
"""
returns the SQL sub query to filter by the given source signal pairs
"""
def filter_pair(pair: SourceSignalPair, i) -> str:
source_param = f"{param_key}_{i}t"
params[source_param] = pair.source
if isinstance(pair.signal, bool) and pair.signal:
return f"{source_field} = :{source_param}"
return f"({source_field} = :{source_param} AND {filter_strings(signal_field, cast(Sequence[str], pair.signal), source_param, params)})"
parts = [filter_pair(p, i) for i, p in enumerate(values)]
if not parts:
# something has to be selected
return "FALSE"
return f"({' OR '.join(parts)})"
def filter_time_pairs(
type_field: str,
time_field: str,
values: Sequence[TimePair],
param_key: str,
params: Dict[str, Any],
) -> str:
"""
returns the SQL sub query to filter by the given time pairs
"""
def filter_pair(pair: TimePair, i) -> str:
type_param = f"{param_key}_{i}t"
params[type_param] = pair.time_type
if isinstance(pair.time_values, bool) and pair.time_values:
return f"{type_field} = :{type_param}"
return f"({type_field} = :{type_param} AND\
{filter_integers(time_field, cast(Sequence[Union[int, Tuple[int,int]]], pair.time_values), type_param, params)})"
parts = [filter_pair(p, i) for i, p in enumerate(values)]
if not parts:
# something has to be selected
return "FALSE"
return f"({' OR '.join(parts)})"
def parse_row(
row: RowProxy,
fields_string: Optional[Sequence[str]] = None,
fields_int: Optional[Sequence[str]] = None,
fields_float: Optional[Sequence[str]] = None,
):
keys = set(row.keys())
parsed = dict()
if fields_string:
for f in fields_string:
v = row[f] if f in keys else None
if isinstance(v, (date, datetime)):
v = v.strftime("%Y-%m-%d") # format to iso date
parsed[f] = v
if fields_int:
for f in fields_int:
parsed[f] = int(row[f]) if f in keys and row[f] is not None else None
if fields_float:
for f in fields_float:
parsed[f] = float(row[f]) if f in keys and row[f] is not None else None
return parsed
def parse_result(
query: str,
params: Dict[str, Any],
fields_string: Optional[Sequence[str]] = None,
fields_int: Optional[Sequence[str]] = None,
fields_float: Optional[Sequence[str]] = None,
) -> List[Dict[str, Any]]:
"""
execute the given query and return the result as a list of dictionaries
"""
return [parse_row(row, fields_string, fields_int, fields_float) for row in db.execute(text(query), **params)]
def run_query(p: APrinter, query_tuple: Tuple[str, Dict[str, Any]]):
query, params = query_tuple
# limit rows + 1 for detecting whether we would have more
full_query = text(f"{query} LIMIT {p.remaining_rows + 1}")
app.logger.info("full_query: %s, params: %s", full_query, params)
return db.execution_options(stream_results=True).execute(full_query, **params)
def _identity_transform(row: Dict[str, Any], _: RowProxy) -> Dict[str, Any]:
"""
identity transform
"""
return row
def execute_queries(
queries: Sequence[Tuple[str, Dict[str, Any]]],
fields_string: Sequence[str],
fields_int: Sequence[str],
fields_float: Sequence[str],
transform: Callable[[Dict[str, Any], RowProxy], Dict[str, Any]] = _identity_transform,
):
"""
execute the given queries and return the response to send them
"""
p = create_printer()
fields_to_send = set(extract_strings("fields") or [])
if fields_to_send:
exclude_fields = {f[1:] for f in fields_to_send if f.startswith("-")}
include_fields = {f for f in fields_to_send if not f.startswith("-") and f not in exclude_fields}
if include_fields:
fields_string = [v for v in fields_string if v in include_fields]
fields_int = [v for v in fields_int if v in include_fields]
fields_float = [v for v in fields_float if v in include_fields]
if exclude_fields:
fields_string = [v for v in fields_string if v not in exclude_fields]
fields_int = [v for v in fields_int if v not in exclude_fields]
fields_float = [v for v in fields_float if v not in exclude_fields]
query_list = list(queries)
    def dummy_gen():
        # an empty generator: there are no rows to send
        yield from ()
if not query_list or p.remaining_rows <= 0:
return p(dummy_gen)
def gen(first_rows):
for row in first_rows:
yield transform(parse_row(row, fields_string, fields_int, fields_float), row)
for query_params in query_list:
if p.remaining_rows <= 0:
# no more rows
break
r = run_query(p, query_params)
for row in r:
yield transform(parse_row(row, fields_string, fields_int, fields_float), row)
# execute first query
try:
r = run_query(p, query_list.pop(0))
except Exception as e:
raise DatabaseErrorException(str(e))
# now use a generator for sending the rows and execute all the other queries
return p(gen(r))
def execute_query(
query: str,
params: Dict[str, Any],
fields_string: Sequence[str],
fields_int: Sequence[str],
fields_float: Sequence[str],
transform: Callable[[Dict[str, Any], RowProxy], Dict[str, Any]] = _identity_transform,
):
"""
execute the given query and return the response to send it
"""
return execute_queries([(query, params)], fields_string, fields_int, fields_float, transform)
def _join_l(value: Union[str, List[str]]):
return ", ".join(value) if isinstance(value, (list, tuple)) else value
class QueryBuilder:
"""
query builder helper class for simplified conditions
"""
    def __init__(self, reftable: Union[str, "QueryBuilder"], refalias: str, datatable: str = "", dataalias: str = ""):
        self.reftable: Union[str, "QueryBuilder"] = reftable
        self.refalias: str = refalias
        self.datatable: str = datatable
        self.dataalias: str = dataalias
        # the rest of the class addresses the reference table via .table/.alias
        self.table: Union[str, "QueryBuilder"] = reftable
        self.alias: str = refalias
        self.group_by: Union[str, List[str]] = ""
        self.order: Union[str, List[str]] = ""
        self.fields: Union[str, List[str]] = "*"
        self.conditions: List[str] = []
        self.params: Dict[str, Any] = {}
        self.subquery: str = ""
        self.index: Optional[str] = None
@property
def conditions_clause(self) -> str:
return " AND ".join(self.conditions)
@property
def fields_clause(self) -> str:
return _join_l(self.fields) if self.fields else "*"
@property
def order_clause(self) -> str:
return _join_l(self.order)
    def __str__(self):
        table = self.table if isinstance(self.table, str) else f"({str(self.table)})"
        where = f"WHERE {self.conditions_clause}" if self.conditions else ""
        order = f"ORDER BY {_join_l(self.order)}" if self.order else ""
        group_by = f"GROUP BY {_join_l(self.group_by)}" if self.group_by else ""
        index = f"USE INDEX ({self.index})" if self.index else ""
        return f"SELECT {self.fields_clause} FROM {table} {index} {self.subquery} {where} {group_by} {order}"
@property
def query(self) -> str:
"""
returns the full query
"""
return str(self)
def where(self, **kvargs: Dict[str, Any]) -> "QueryBuilder":
for k, v in kvargs.items():
self.conditions.append(f"{self.alias}.{k} = :{k}")
self.params[k] = v
return self
def where_strings(
self,
field: str,
values: Optional[Sequence[Union[Tuple[str, str], str]]],
param_key: Optional[str] = None,
) -> "QueryBuilder":
fq_field = f"{self.alias}.{field}" if "." not in field else field
self.conditions.append(filter_strings(fq_field, values, param_key or field, self.params))
return self
def _fq_field(self, field: str) -> str:
return f"{self.alias}.{field}" if "." not in field else field
def where_integers(
self,
field: str,
values: Optional[Sequence[Union[Tuple[int, int], int]]],
param_key: Optional[str] = None,
) -> "QueryBuilder":
fq_field = self._fq_field(field)
self.conditions.append(filter_integers(fq_field, values, param_key or field, self.params))
return self
def where_dates(
self,
field: str,
values: Optional[Sequence[Union[Tuple[int, int], int]]],
param_key: Optional[str] = None,
) -> "QueryBuilder":
fq_field = self._fq_field(field)
self.conditions.append(filter_dates(fq_field, values, param_key or field, self.params))
return self
def where_geo_pairs(
self,
type_field: str,
value_field: str,
values: Sequence[GeoPair],
param_key: Optional[str] = None,
) -> "QueryBuilder":
fq_type_field = self._fq_field(type_field)
fq_value_field = self._fq_field(value_field)
self.conditions.append(
filter_geo_pairs(
fq_type_field,
fq_value_field,
values,
param_key or type_field,
self.params,
)
)
return self
def where_source_signal_pairs(
self,
type_field: str,
value_field: str,
values: Sequence[SourceSignalPair],
param_key: Optional[str] = None,
) -> "QueryBuilder":
fq_type_field = self._fq_field(type_field)
fq_value_field = self._fq_field(value_field)
self.conditions.append(
filter_source_signal_pairs(
fq_type_field,
fq_value_field,
values,
param_key or type_field,
self.params,
)
)
return self
def where_time_pairs(
self,
type_field: str,
value_field: str,
values: Sequence[TimePair],
param_key: Optional[str] = None,
) -> "QueryBuilder":
fq_type_field = self._fq_field(type_field)
fq_value_field = self._fq_field(value_field)
self.conditions.append(
filter_time_pairs(
fq_type_field,
fq_value_field,
values,
param_key or type_field,
self.params,
)
)
return self
def set_fields(self, *fields: Iterable[str]) -> "QueryBuilder":
self.fields = [f"{self.alias}.{field}" for field_list in fields for field in field_list]
return self
def set_order(self, *args: str, **kwargs: Union[str, bool]) -> "QueryBuilder":
"""
sets the order for the given fields (as key word arguments), True = ASC, False = DESC
"""
        def to_asc(v: Union[str, bool]) -> str:
            if v is True:
                return "ASC"
            if v is False:
                return "DESC"
            return cast(str, v)
args_order = [f"{self.alias}.{k} ASC" for k in args]
kw_order = [f"{self.alias}.{k} {to_asc(v)}" for k, v in kwargs.items()]
self.order = args_order + kw_order
return self
def with_max_issue(self, *args: str) -> "QueryBuilder":
fields: List[str] = [f for f in args]
subfields = f"max(issue) max_issue, {','.join(fields)}"
group_by = ",".join(fields)
field_conditions = " AND ".join(f"x.{field} = {self.alias}.{field}" for field in fields)
condition = f"x.max_issue = {self.alias}.issue AND {field_conditions}"
self.subquery = f"JOIN (SELECT {subfields} FROM {self.table} WHERE {self.conditions_clause} GROUP BY {group_by}) x ON {condition}"
# reset conditions since for join
self.conditions = []
return self
# + colab={"base_uri": "https://localhost:8080/", "height": 368} id="zDSPke_j9dS0" outputId="94a3e711-b8fc-41f4-e032-3482b4ede834"
from typing import List, Optional, Union, Tuple, Dict, Any, Set
from itertools import groupby
from datetime import date, datetime, timedelta
from flask import Blueprint, request
from flask.json import loads, jsonify
from bisect import bisect_right
from sqlalchemy import text
from pandas import read_csv
from .._common import is_compatibility_mode, db
from .._exceptions import ValidationFailedException, DatabaseErrorException
from .._params import (
GeoPair,
SourceSignalPair,
TimePair,
parse_geo_arg,
parse_source_signal_arg,
parse_time_arg,
parse_day_arg,
parse_day_range_arg,
parse_single_source_signal_arg,
parse_single_time_arg,
parse_single_geo_arg,
)
from .._query import QueryBuilder, execute_query, run_query, parse_row, filter_fields
from .._printer import create_printer, CSVPrinter
from .._validate import (
extract_date,
extract_dates,
extract_integer,
extract_strings,
require_all,
require_any,
)
from .._pandas import as_pandas, print_pandas
from .covidcast_utils import compute_trend, compute_trends, compute_correlations, compute_trend_value, CovidcastMetaEntry, AllSignalsMap
from ..utils import shift_time_value, date_to_time_value, time_value_to_iso, time_value_to_date
# first argument is the endpoint name
bp = Blueprint("covidcast", __name__)
alias = None
def parse_source_signal_pairs() -> List[SourceSignalPair]:
ds = request.values.get("data_source")
if ds:
# old version
require_any("signal", "signals")
signals = extract_strings(("signals", "signal"))
return [SourceSignalPair(ds, signals)]
if ":" not in request.values.get("signal", ""):
raise ValidationFailedException("missing parameter: signal or (data_source and signal[s])")
return parse_source_signal_arg()
def parse_geo_pairs() -> List[GeoPair]:
geo_type = request.values.get("geo_type")
if geo_type:
# old version
require_any("geo_value", "geo_values", empty=True)
geo_values = extract_strings(("geo_values", "geo_value"))
if len(geo_values) == 1 and geo_values[0] == "*":
return [GeoPair(geo_type, True)]
return [GeoPair(geo_type, geo_values)]
if ":" not in request.values.get("geo", ""):
raise ValidationFailedException("missing parameter: geo or (geo_type and geo_value[s])")
return parse_geo_arg()
def parse_time_pairs() -> List[TimePair]:
time_type = request.values.get("time_type")
if time_type:
# old version
require_all("time_type", "time_values")
time_values = extract_dates("time_values")
return [TimePair(time_type, time_values)]
if ":" not in request.values.get("time", ""):
raise ValidationFailedException("missing parameter: time or (time_type and time_values)")
return parse_time_arg()
def _handle_lag_issues_as_of(q: QueryBuilder, issues: Optional[List[Union[Tuple[int, int], int]]] = None, lag: Optional[int] = None, as_of: Optional[int] = None):
if issues:
q.where_integers("issue", issues)
elif lag is not None:
q.where(lag=lag)
elif as_of is not None:
# fetch most recent issues with as of
sub_condition_asof = "(issue <= :as_of)"
q.params["as_of"] = as_of
sub_fields = "max(issue) max_issue, time_type, time_value, `source`, `signal`, geo_type, geo_value"
sub_group = "time_type, time_value, `source`, `signal`, geo_type, geo_value"
sub_condition = f"x.max_issue = {q.alias}.issue AND x.time_type = {q.alias}.time_type AND x.time_value = {q.alias}.time_value AND x.source = {q.alias}.source AND x.signal = {q.alias}.signal AND x.geo_type = {q.alias}.geo_type AND x.geo_value = {q.alias}.geo_value"
q.subquery = f"JOIN (SELECT {sub_fields} FROM {q.table} WHERE {q.conditions_clause} AND {sub_condition_asof} GROUP BY {sub_group}) x ON {sub_condition}"
else:
# fetch most recent issue fast
q.conditions.append(f"({q.alias}.is_latest_issue IS TRUE)")
def guess_index_to_use(time: List[TimePair], geo: List[GeoPair], issues: Optional[List[Union[Tuple[int, int], int]]] = None, lag: Optional[int] = None, as_of: Optional[int] = None) -> Optional[str]:
time_values_to_retrieve = sum((t.count() for t in time))
geo_values_to_retrieve = sum((g.count() for g in geo))
if geo_values_to_retrieve > 5 or time_values_to_retrieve < 30:
# no optimization known
return None
if issues:
return "by_issue"
elif lag is not None:
return "by_lag"
elif as_of is None:
# latest
return "by_issue"
return None
@bp.route("/", methods=("GET", "POST"))
def handle():
source_signal_pairs = parse_source_signal_pairs()
time_pairs = parse_time_pairs()
geo_pairs = parse_geo_pairs()
as_of = extract_date("as_of")
issues = extract_dates("issues")
lag = extract_integer("lag")
# build query
q = QueryBuilder("data_reference", "ref", "datapoint", "point")
fields_string = ["ref.geo_value", "ref.signal"]
fields_int = ["ref.time_value", "point.issue", "point.lag", "point.missing_value", "point.missing_stderr", "point.missing_sample_size"]
fields_float = ["point.value", "point.stderr", "point.sample_size"]
if is_compatibility_mode():
q.set_order("ref.signal", "ref.time_value", "ref.geo_value", "point.issue")
else:
# transfer also the new detail columns
fields_string.extend(["ref.source", "ref.geo_type", "ref.time_type"])
q.set_order("ref.source", "ref.signal", "ref.time_type", "ref.time_value", "ref.geo_type", "ref.geo_value", "point.issue")
q.set_fields(fields_string, fields_int, fields_float)
# basic query info
# data type of each field
# build the source, signal, time, and location (type and id) filters
q.where_source_signal_pairs("ref.source", "ref.signal", source_signal_pairs)
q.where_geo_pairs("ref.geo_type", "ref.geo_value", geo_pairs)
q.where_time_pairs("ref.time_type", "ref.time_value", time_pairs)
# TODO: Reevaluate the guess_index_to_use function since it tends to choose incorrectly
# q.index = guess_index_to_use(time_pairs, geo_pairs, issues, lag, as_of)
_handle_lag_issues_as_of(q, issues, lag, as_of)
# send query
# TODO: establish a boolean for indicating is_latest_query vs as_of
is_latest_query = True
if is_latest_query:
return execute_query(str(q), q.params, fields_string, fields_int, fields_float)
    else:
        full_q = f'''SELECT * FROM ({str(q)}) all_issues
            WHERE all_issues.issue <= all_issues.max_issue
            ORDER BY source ASC, `signal` ASC, time_type ASC, time_value ASC, geo_type ASC,
            geo_value ASC, issue ASC'''
        return execute_query(full_q, q.params, fields_string, fields_int, fields_float)
# TODO: Calls query builder again around the returned values giving it alias all_issues
# q = QueryBuilder(all_issues, "all_issues", "data_reference", "ref")
@bp.route("/trend", methods=("GET", "POST"))
def handle_trend():
require_all("date", "window")
source_signal_pairs = parse_source_signal_pairs()
geo_pairs = parse_geo_pairs()
time_value = parse_day_arg("date")
time_window = parse_day_range_arg("window")
basis_time_value = extract_date("basis") or shift_time_value(time_value, -7)
# build query
q = QueryBuilder("datapoint", "t")
fields_string = ["geo_type", "geo_value", "source", "signal"]
fields_int = ["time_value"]
fields_float = ["value"]
q.set_fields(fields_string, fields_int, fields_float)
q.set_order("geo_type", "geo_value", "source", "signal", "time_value")
q.where_source_signal_pairs("source", "signal", source_signal_pairs)
q.where_geo_pairs("geo_type", "geo_value", geo_pairs)
q.where_time_pairs("time_type", "time_value", [TimePair("day", [time_window])])
# fetch most recent issue fast
_handle_lag_issues_as_of(q, None, None, None)
p = create_printer()
def gen(rows):
for key, group in groupby((parse_row(row, fields_string, fields_int, fields_float) for row in rows), lambda row: (row["geo_type"], row["geo_value"], row["source"], row["signal"])):
trend = compute_trend(key[0], key[1], key[2], key[3], time_value, basis_time_value, ((row["time_value"], row["value"]) for row in group))
yield trend.asdict()
# execute first query
try:
r = run_query(p, (str(q), q.params))
except Exception as e:
raise DatabaseErrorException(str(e))
# now use a generator for sending the rows and execute all the other queries
return p(filter_fields(gen(r)))
@bp.route("/trendseries", methods=("GET", "POST"))
def handle_trendseries():
require_all("window")
source_signal_pairs = parse_source_signal_pairs()
geo_pairs = parse_geo_pairs()
time_window = parse_day_range_arg("window")
basis_shift = extract_integer("basis")
if basis_shift is None:
basis_shift = 7
# build query
q = QueryBuilder("datapoint", "t")
fields_string = ["geo_type", "geo_value", "source", "signal"]
fields_int = ["time_value"]
fields_float = ["value"]
q.set_fields(fields_string, fields_int, fields_float)
q.set_order("geo_type", "geo_value", "source", "signal", "time_value")
q.where_source_signal_pairs("source", "signal", source_signal_pairs)
q.where_geo_pairs("geo_type", "geo_value", geo_pairs)
q.where_time_pairs("time_type", "time_value", [TimePair("day", [time_window])])
# fetch most recent issue fast
_handle_lag_issues_as_of(q, None, None, None)
p = create_printer()
shifter = lambda x: shift_time_value(x, -basis_shift)
def gen(rows):
for key, group in groupby((parse_row(row, fields_string, fields_int, fields_float) for row in rows), lambda row: (row["geo_type"], row["geo_value"], row["source"], row["signal"])):
trends = compute_trends(key[0], key[1], key[2], key[3], shifter, ((row["time_value"], row["value"]) for row in group))
for trend in trends:
yield trend.asdict()
# execute first query
try:
r = run_query(p, (str(q), q.params))
except Exception as e:
raise DatabaseErrorException(str(e))
# now use a generator for sending the rows and execute all the other queries
return p(filter_fields(gen(r)))
@bp.route("/correlation", methods=("GET", "POST"))
def handle_correlation():
require_all("reference", "window", "others", "geo")
reference = parse_single_source_signal_arg("reference")
other_pairs = parse_source_signal_arg("others")
geo_pairs = parse_geo_arg()
time_window = parse_day_range_arg("window")
lag = extract_integer("lag")
if lag is None:
lag = 28
# build query
q = QueryBuilder("datapoint", "t")
fields_string = ["geo_type", "geo_value", "source", "signal"]
fields_int = ["time_value"]
fields_float = ["value"]
q.set_fields(fields_string, fields_int, fields_float)
q.set_order("geo_type", "geo_value", "source", "signal", "time_value")
q.where_source_signal_pairs("source", "signal", other_pairs + [reference])
q.where_geo_pairs("geo_type", "geo_value", geo_pairs)
q.where_time_pairs("time_type", "time_value", [TimePair("day", [time_window])])
# fetch most recent issue fast
q.conditions.append(f"({q.alias}.is_latest_issue IS TRUE)")
df = as_pandas(str(q), q.params, parse_dates={"time_value": "%Y%m%d"})
p = create_printer()
def prepare_data_frame(df):
return df[["time_value", "value"]].set_index("time_value")
def gen():
by_geo = df.groupby(["geo_type", "geo_value"])
for (geo_type, geo_value), group in by_geo:
# group by source, signal
by_signal = group.groupby(["source", "signal"])
# find reference group
# dataframe structure: index=time_value, value=value
reference_group = next((prepare_data_frame(group) for (source, signal), group in by_signal if source == reference.source and signal == reference.signal[0]), None)
if reference_group is None or reference_group.empty:
continue # no data for reference
# dataframe structure: index=time_value, value=value
other_groups = [((source, signal), prepare_data_frame(group)) for (source, signal), group in by_signal if not (source == reference.source and signal == reference.signal[0])]
if not other_groups:
continue # no other signals
for (source, signal), other_group in other_groups:
for cor in compute_correlations(geo_type, geo_value, source, signal, lag, reference_group, other_group):
yield cor.asdict()
# now use a generator for sending the rows and execute all the other queries
return p(filter_fields(gen()))
@bp.route("/csv", methods=("GET", "POST"))
def handle_export():
source, signal = request.args.get("signal", "jhu-csse:confirmed_incidence_num").split(":")
start_day = request.args.get("start_day", "2020-04-01")
end_day = request.args.get("end_day", "2020-09-01")
geo_type = request.args.get("geo_type", "county")
geo_values = request.args.get("geo_values", "*")
if geo_values != "*":
geo_values = geo_values.split(",")
as_of = request.args.get("as_of", None)
start_day = datetime.strptime(start_day, "%Y-%m-%d").date()
end_day = datetime.strptime(end_day, "%Y-%m-%d").date()
if as_of is not None:
as_of = datetime.strptime(as_of, "%Y-%m-%d").date()
# build query
q = QueryBuilder("datapoint", "t")
q.set_fields(["geo_value", "signal", "time_value", "issue", "lag", "value", "stderr", "sample_size", "geo_type", "source"], [], [])
q.set_order("time_value", "geo_value")
q.where(source=source, signal=signal, time_type="day")
q.conditions.append("time_value BETWEEN :start_day AND :end_day")
q.params["start_day"] = date_to_time_value(start_day)
q.params["end_day"] = date_to_time_value(end_day)
q.where_geo_pairs("geo_type", "geo_value", [GeoPair(geo_type, True if geo_values == "*" else geo_values)])
_handle_lag_issues_as_of(q, None, None, date_to_time_value(as_of) if as_of is not None else None)
# tag as_of in filename, if it was specified
as_of_str = "-asof-{as_of}".format(as_of=as_of.isoformat()) if as_of is not None else ""
filename = "covidcast-{source}-{signal}-{start_day}-to-{end_day}{as_of}".format(source=source, signal=signal, start_day=start_day.isoformat(), end_day=end_day.isoformat(), as_of=as_of_str)
p = CSVPrinter(filename)
    def format_row(i, row):
        # local CSV formatter (distinct from the module-level parse_row)
        # '',geo_value,signal,{time_value,issue},lag,value,stderr,sample_size,geo_type,data_source
        return {
            "": i,
            "geo_value": row["geo_value"],
            "signal": row["signal"],
            "time_value": time_value_to_iso(row["time_value"]),
            "issue": time_value_to_iso(row["issue"]),
            "lag": row["lag"],
            "value": row["value"],
            "stderr": row["stderr"],
            "sample_size": row["sample_size"],
            "geo_type": row["geo_type"],
            "data_source": row["source"],
        }
    def gen(first_row, rows):
        yield format_row(0, first_row)
        for i, row in enumerate(rows):
            yield format_row(i + 1, row)
# execute query
try:
r = run_query(p, (str(q), q.params))
except Exception as e:
raise DatabaseErrorException(str(e))
# special case for no data to be compatible with the CSV server
first_row = next(r, None)
if not first_row:
return "No matching data found for signal {source}:{signal} " "at {geo} level from {start_day} to {end_day}, as of {as_of}.".format(
source=source, signal=signal, geo=geo_type, start_day=start_day.isoformat(), end_day=end_day.isoformat(), as_of=(date.today().isoformat() if as_of is None else as_of.isoformat())
)
# now use a generator for sending the rows and execute all the other queries
return p(gen(first_row, r))
@bp.route("/backfill", methods=("GET", "POST"))
def handle_backfill():
"""
example query: http://localhost:5000/covidcast/backfill?signal=fb-survey:smoothed_cli&time=day:20200101-20220101&geo=state:ny&anchor_lag=60
"""
require_all("geo", "time", "signal")
signal_pair = parse_single_source_signal_arg("signal")
time_pair = parse_single_time_arg("time")
geo_pair = parse_single_geo_arg("geo")
reference_anchor_lag = extract_integer("anchor_lag") # in days
if reference_anchor_lag is None:
reference_anchor_lag = 60
# build query
q = QueryBuilder("datapoint", "t")
fields_string = []
fields_int = ["time_value", "issue"]
fields_float = ["value", "sample_size"]
# sort by time value and issue asc
q.set_order(time_value=True, issue=True)
q.set_fields(fields_string, fields_int, fields_float, ["is_latest_issue"])
q.where_source_signal_pairs("source", "signal", [signal_pair])
q.where_geo_pairs("geo_type", "geo_value", [geo_pair])
q.where_time_pairs("time_type", "time_value", [time_pair])
# no restriction of issues or dates since we want all issues
# _handle_lag_issues_as_of(q, issues, lag, as_of)
p = create_printer()
def find_anchor_row(rows: List[Dict[str, Any]], issue: int) -> Optional[Dict[str, Any]]:
# rows are assumed sorted by issue ascending
# find the latest row whose issue is <= the target issue
i = bisect_right([r["issue"] for r in rows], issue)
if i:
return rows[i - 1]
return None
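`find_anchor_row` above is a classic bisect lookup: `bisect_right` over the sorted issue column returns the insertion point, and the row just before it is the latest row whose issue is at or before the target. A standalone sketch of the same pattern, with made-up rows (not the real database schema):

```python
from bisect import bisect_right

def latest_at_or_before(rows, issue):
    # rows must be sorted by "issue" ascending, as in find_anchor_row
    i = bisect_right([r["issue"] for r in rows], issue)
    return rows[i - 1] if i else None

rows = [{"issue": 20200101}, {"issue": 20200105}, {"issue": 20200110}]
print(latest_at_or_before(rows, 20200107))  # {'issue': 20200105}
print(latest_at_or_before(rows, 20191231))  # None
```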
def gen(rows):
# stream per time_value
for time_value, group in groupby((parse_row(row, fields_string, fields_int, fields_float) for row in rows), lambda row: row["time_value"]):
# compute data per time value
issues: List[Dict[str, Any]] = [r for r in group]
anchor_row = find_anchor_row(issues, shift_time_value(time_value, reference_anchor_lag))
for i, row in enumerate(issues):
if i > 0:
prev_row = issues[i - 1]
row["value_rel_change"] = compute_trend_value(row["value"] or 0, prev_row["value"] or 0, 0)
if row["sample_size"] is not None:
row["sample_size_rel_change"] = compute_trend_value(row["sample_size"] or 0, prev_row["sample_size"] or 0, 0)
if anchor_row and anchor_row["value"] is not None:
row["is_anchor"] = row == anchor_row
row["value_completeness"] = (row["value"] or 0) / anchor_row["value"] if anchor_row["value"] else 1
if row["sample_size"] is not None:
row["sample_size_completeness"] = row["sample_size"] / anchor_row["sample_size"] if anchor_row["sample_size"] else 1
yield row
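`gen` relies on `itertools.groupby`, which only groups *adjacent* rows sharing a key — that is why the query orders by `time_value` and `issue` first. The grouping pattern in isolation, on toy rows:

```python
from itertools import groupby

rows = [
    {"time_value": 20200101, "issue": 20200102},
    {"time_value": 20200101, "issue": 20200103},
    {"time_value": 20200102, "issue": 20200103},
]
# count the issues available per time_value; adjacent rows with the
# same time_value fall into one group because the input is sorted
counts = {tv: len(list(g)) for tv, g in groupby(rows, key=lambda r: r["time_value"])}
print(counts)  # {20200101: 2, 20200102: 1}
```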
# execute first query
try:
r = run_query(p, (q.query, q.params))
except Exception as e:
raise DatabaseErrorException(str(e))
# now use a generator for sending the rows and execute all the other queries
return p(filter_fields(gen(r)))
@bp.route("/meta", methods=("GET", "POST"))
def handle_meta():
"""
similar to /covidcast_meta, but returns a structured, optimized JSON form for the app
"""
signal = parse_source_signal_arg("signal")
row = db.execute(text("SELECT epidata FROM covidcast_meta_cache LIMIT 1")).fetchone()
data = loads(row["epidata"]) if row and row["epidata"] else []
all_signals: AllSignalsMap = {}
for row in data:
if row["time_type"] != "day":
continue
entry: Set[str] = all_signals.setdefault(row["data_source"], set())
entry.add(row["signal"])
out: Dict[str, CovidcastMetaEntry] = {}
for row in data:
if row["time_type"] != "day":
continue
if signal and all((not s.matches(row["data_source"], row["signal"]) for s in signal)):
continue
entry = out.setdefault(
f"{row['data_source']}:{row['signal']}", CovidcastMetaEntry(row["data_source"], row["signal"], row["min_time"], row["max_time"], row["max_issue"], {}, all_signals=all_signals)
)
entry.intergrate(row)
return jsonify([r.asdict() for r in out.values()])
@bp.route("/coverage", methods=("GET", "POST"))
def handle_coverage():
"""
similar to /signal_dashboard_coverage; for a specific signal, returns the coverage (number of locations) for a given geo_type
"""
signal = parse_source_signal_pairs()
geo_type = request.args.get("geo_type", "county")
if "window" in request.values:
time_window = parse_day_range_arg("window")
else:
now_time = extract_date("latest")
now = date.today() if now_time is None else time_value_to_date(now_time)
last = extract_integer("days")
if last is None:
last = 30
time_window = (date_to_time_value(now - timedelta(days=last)), date_to_time_value(now))
q = QueryBuilder("covidcast", "c")
fields_string = ["source", "signal"]
fields_int = ["time_value"]
q.set_fields(fields_string, fields_int)
# manually append the count column because of grouping
fields_int.append("count")
q.fields.append(f"count({q.alias}.geo_value) as count")
if geo_type == "only-county":
q.where(geo_type="county")
q.conditions.append('geo_value not like "%000"')
else:
q.where(geo_type=geo_type)
q.where_source_signal_pairs("source", "signal", signal)
q.where_time_pairs("time_type", "time_value", [TimePair("day", [time_window])])
q.group_by = "c.source, c.signal, c.time_value"
q.set_order("source", "signal", "time_value")
_handle_lag_issues_as_of(q, None, None, None)
return execute_query(q.query, q.params, fields_string, fields_int, [])
@bp.route("/anomalies", methods=("GET", "POST"))
def handle_anomalies():
"""
proxy to the Google Sheet documenting data anomalies
"""
signal = parse_source_signal_arg("signal")
df = read_csv(
"https://docs.google.com/spreadsheets/d/e/2PACX-1vToGcf9x5PNJg-eSrxadoR5b-LM2Cqs9UML97587OGrIX0LiQDcU1HL-L2AA8o5avbU7yod106ih0_n/pub?gid=0&single=true&output=csv", skip_blank_lines=True
)
df = df[df["source"].notnull() & df["published"]]
return print_pandas(df)
|
delphi_interim_query_builder_test_env.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import json
from pprint import pprint
import os
from falx.table_utils import *
from falx.chart.matplotlib_chart import *
# %reload_ext table_utils
# %reload_ext autoreload
benchmark_dir = "../../benchmarks"
def values_to_colors(values):
distinct_vals = list(set(values))
cmap = matplotlib.cm.viridis
colors = cmap(np.linspace(0, 1, len(distinct_vals)))
return [colors[distinct_vals.index(x)] for x in values]
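`values_to_colors` gives every distinct value its own evenly spaced viridis colour, so rows sharing a value share a colour. A quick sanity check (the helper is repeated here so the cell runs standalone):

```python
import numpy as np
from matplotlib import cm

def values_to_colors(values):
    # same helper as above: one evenly spaced viridis colour per distinct value
    distinct_vals = list(set(values))
    colors = cm.viridis(np.linspace(0, 1, len(distinct_vals)))
    return [colors[distinct_vals.index(x)] for x in values]

cols = values_to_colors(["a", "b", "a", "c"])
print(len(cols))  # one RGBA colour per input row; cols[0] and cols[2] are identical
```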
def load_input_table(fname):
with open(os.path.join(benchmark_dir, fname), "r") as f:
benchmark = json.load(f)
input_data = benchmark["input_data"]
df = pd.DataFrame.from_records(input_data)
df = load_and_clean_table(df, return_as_df=True)
return df
# +
df = load_input_table("007.json")
df["index"] = df.index
#df.plot(kind="barh",x="index", y=["Agree", "Disagree", "Strongly Agree", "Strongly Disagree"], stacked=True)
chart = MatplotlibChart(df,
MpGroupBarChart("index", ["Agree", "Disagree",
"Strongly Agree",
"Strongly Disagree"],
stacked=True, orient="horizontal"))
print(df)
#pprint(chart.eval())
chart.render()
# +
df = load_input_table("010.json")
# plt.scatter(y=df["Class"],x=df["Fall"], label="Fall")
# plt.scatter(y=df["Class"],x=df["Spring"], label="Spring")
# plt.legend()
# plt.show()
chart = MatplotlibChart(df, MpScatterPlot(c_x="Class", c_ys=["Fall", "Spring"]))
print(df)
#pprint(chart.eval())
chart.render()
# +
df = load_input_table("003.json")
df["index"] = df.index
tmp = df["Net Cash Flow"].cumsum()
df = df.join(tmp, lsuffix='', rsuffix=' Sum')
df = df.assign(c=df["Net Cash Flow Sum"]-df["Net Cash Flow"])
df = df.assign(d=df["c"]>df["Net Cash Flow Sum"])
#plt.bar(x=df["Month"], height=df["Net Cash Flow"], bottom=df["c"], color=values_to_colors(df["d"]))
chart = MatplotlibChart(df, MpBarChart(c_x="index", c_height="Net Cash Flow", c_bot="c", c_color="d"))
print(df)
#pprint(chart.eval())
chart.render()
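The cell above builds a waterfall-style bar chart: the running total minus the current flow gives each bar's baseline `c`, and `d` flags bars whose baseline sits above the running total (i.e. negative flows). The arithmetic in isolation, with made-up numbers:

```python
import pandas as pd

df = pd.DataFrame({"Net Cash Flow": [100, -30, 50]})
df["Net Cash Flow Sum"] = df["Net Cash Flow"].cumsum()       # running total
df["c"] = df["Net Cash Flow Sum"] - df["Net Cash Flow"]      # bar baseline
df["d"] = df["c"] > df["Net Cash Flow Sum"]                  # True for negative flows

print(df["c"].tolist())  # [0, 100, 70]
print(df["d"].tolist())  # [False, True, False]
```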
# +
df = load_input_table("015.json")
split_col = df["End of Shift"].str.split("/", expand=True)
df["s1"] = split_col[0]
split_col = df["Start of Shift"].str.split("/", expand=True)
df["s0"] = split_col[0]
df = pd.DataFrame.from_records(load_and_clean_table(df))
#plt.bar(x=df["Period"], height=df["Duration"], bottom=df["s0"])
chart = MatplotlibChart(df, MpBarChart(c_x="Period", c_height="Duration", c_bot="s0"))
print(df)
#pprint(chart.eval())
chart.render()
# +
df = load_input_table("018.json")
df = pd.melt(df, id_vars='Item', value_vars=["2012", "2013", "2014", "2015"])
df = df.pivot_table(index="variable", columns="Item", values=['value'])
df.columns = df.columns.droplevel(0)
df = df.reset_index()
print(df)
# fig, ax = plt.subplots()
# for col in ["Monitors", "Printers"]:
# ax.plot(df["variable"], df[col], label=col)
# df.plot.bar(x="variable", y=["Desktop Computers", "Laptops"], stacked=True, ax=ax)
# ax.legend()
chart = MatplotlibChart(df, MultiLayer(
charts=[
GroupBarChart(c_x="variable", c_ys=["Desktop Computers", "Laptops"], stacked=True),
LineChart(c_x="variable", c_ys=["Monitors", "Printers"])
]))
chart.render()
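Several cells here use the same melt-then-`pivot_table` reshape (with `droplevel` to flatten the resulting column MultiIndex) to turn a wide year-per-column table into one row per year with one column per item. A tiny reproduction with hypothetical data — note `pivot_table` aggregates with the mean by default, so the output values come back as floats:

```python
import pandas as pd

wide = pd.DataFrame({"Item": ["Monitors", "Printers"],
                     "2012": [10, 3], "2013": [12, 4]})
# wide -> long: one (Item, year, value) row per cell
long = pd.melt(wide, id_vars="Item", value_vars=["2012", "2013"])
# long -> wide again, but now indexed by year with one column per Item
out = long.pivot_table(index="variable", columns="Item", values=["value"])
out.columns = out.columns.droplevel(0)   # drop the synthetic 'value' level
out = out.reset_index()
print(out)
```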
# +
df = load_input_table("009.json")
df = pd.melt(df, id_vars=['Location', 'Rae'], value_vars=["2009", "2010", "2011", "2012", "2013", "2014"])
df = df.pivot_table(index=["variable", "Rae"], columns="Location", values=['value']).reset_index()
df.columns = [col[-1] if col[-1] != "" else col[0] for col in df.columns.values]
chart = MatplotlibChart(df, Subplot(
chart=ScatterPlot(c_x="variable", c_ys=["Arizona", "Maricopa", "United States"]),
column="Rae"))
#print(df)
#pprint(chart.eval())
chart.render()
# +
df = load_input_table("012.json")
df = pd.melt(df, id_vars=["C1", "C2"], value_vars=["EMEA", "LATAM", "North America", "APAC"])
df = df.pivot_table(index=["variable", "C2"], columns="C1", values=['value']).reset_index()
df.columns = [col[-1] if col[-1] != "" else col[0] for col in df.columns.values]
print(df)
# group = df["variable"]
# num_group = len(np.unique(group))
# fig, axes = plt.subplots(1,num_group,figsize=(num_group * 5,5),sharex=True, sharey=True)
# for ax,g in zip(axes, np.unique(group)):
# i = np.where(group == g)
# sub_df = df.loc[i]
# for col in ["Q1", "Q2", "Q3"]:
# ax.plot(sub_df["C2"], sub_df[col], label=col)
# ax.set_xlabel(g)
# plt.legend()
# plt.show()
chart = MatplotlibChart(df, Subplot(
chart=LineChart(c_x="C2", c_ys=["Q1", "Q2", "Q3"]),
column="variable"))
#pprint(chart.eval())
chart.render()
# +
df = load_input_table("013.json")
chart = MatplotlibChart(df, GroupBarChart("Value", ["alpha", "beta", "gamma"], stacked=True))
#df.plot(kind="bar", x="Value", y=["alpha", "beta", "gamma"], stacked=True)
chart.render()
# +
df = load_input_table("014.json")
split_col = df["End of Shift"].str.split("/", expand=True)
df["s1"] = split_col[0]
split_col = df["Start of Shift"].str.split("/", expand=True)
df["s0"] = split_col[0]
df = pd.DataFrame.from_records(load_and_clean_table(df))
#plt.barh(y=df["Shift"], width=df["Duration"], left=df["s0"])
chart = MatplotlibChart(df, BarChart(c_x="Shift", c_height="Duration",
c_bot="s0", orient="horizontal"))
print(df)
pprint(chart.eval())
chart.render()
# +
df = load_input_table("008.json")
# gather(dat, "col1", "col2", -Value) %>% spread(Value, col2)
df = pd.melt(df, id_vars='Value', value_vars=["Y1", "Y2", "Y3", "Y4", "Y5"])
df = df.pivot(index='variable',columns='Value', values='value').reset_index()
df = pd.DataFrame.from_records(load_and_clean_table(df))
print(df)
chart = MatplotlibChart(df, MultiLayer(
charts=[
AreaChart(c_x="variable", c_tops=["upper range"], c_bots=["lower range"]),
LineChart(c_x="variable", c_ys=["means"])]))
chart.render()
# +
df = load_input_table("039.json")
df = pd.melt(df, id_vars=["Year"],
value_vars=["NORTH-Bisc", "NORTH-Choc", "SOUTH-Bisc", "SOUTH-Choc", "WEST-Bisc", "WEST-Choc"])
split_col = df["variable"].str.split("-", expand=True)
df["loc"] = split_col[0]
df["type"] = split_col[1]
df = df.pivot_table(index=["Year", "loc"], columns='type', values=['value']).reset_index()
df.columns = ["Year", "loc", "Bisc", "Choc"]
print(df)
chart = MatplotlibChart(df, Subplot(
chart=AreaChart(c_x="Year", c_bots=["Bisc"], c_tops=["Choc"]),
column="loc"))
# chart = MatplotlibChart(df, MultiLayer(
# charts=[
# AreaChart(c_x="variable", c_top="upper range", c_bot="lower range"),
# LineChart(c_x="variable", c_ys=["means"])]))
chart.render()
# +
df = load_input_table("036.json")
df["VolumeDiff"] = df["Volume"].diff()
df = pd.melt(df, id_vars=["Close", "Date"], value_vars=["VolumeDiff", "Volume"])
df["color"] = np.array(list(map(lambda x: "pos" if x > 0 else "neg", df['value'])))
df["value"] = df["value"].abs()
df = df.pivot_table(index=["Date", "variable"], columns="color", values=['value']).reset_index()
df.columns = [col[-1] if col[-1] != "" else col[0] for col in df.columns.values]
display(df)
chart = MatplotlibChart(df, MpSubplot(
chart=MpAreaChart(c_x="Date", c_tops=["pos", "neg"]),
column="variable"))
#print(chart.eval())
|
falx/notebooks/.ipynb_checkpoints/matplotlib_exp-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="9tu5aMWDKJon"
# # Verify GPU
# + colab={"base_uri": "https://localhost:8080/"} id="wVbAxbgFccI4" executionInfo={"status": "ok", "timestamp": 1619287641699, "user_tz": -480, "elapsed": 781, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="c39a4ac0-3b9c-4271-8425-b1486a7b8bd8"
# !nvidia-smi
# + [markdown] id="DBRukcVdFkmc"
# # Weight and Bias (Assisting Metrics, Optional)
# + colab={"base_uri": "https://localhost:8080/"} id="ZJhFk-HiFiMp" executionInfo={"status": "ok", "timestamp": 1619287656661, "user_tz": -480, "elapsed": 13333, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="41e8737f-63d6-4d78-a03a-9633326e36c0"
# !pip install wandb
# !wandb login
project_name = "CoLA with BERT" # @param {type:"string"}
import os
os.environ["WANDB_PROJECT"] = project_name
# + [markdown] id="nv5kop8DFwv9"
# # Installation
# + colab={"base_uri": "https://localhost:8080/"} id="NlWEz_DGdmL2" executionInfo={"status": "ok", "timestamp": 1619287666504, "user_tz": -480, "elapsed": 6886, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="2a819d4d-3ecf-4609-dfbe-6a9a216ceb72"
# !pip install transformers==4.5.1 datasets==1.5.0
# + id="NYB87-3cd6VE" executionInfo={"status": "ok", "timestamp": 1619287672797, "user_tz": -480, "elapsed": 12891, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}}
import os
import numpy as np
from datasets import load_dataset, load_metric
from transformers import (
BertConfig,
BertForSequenceClassification,
BertTokenizerFast,
EvalPrediction,
Trainer,
TrainingArguments,
default_data_collator,
)
# + id="rmPWXk40kfOG" executionInfo={"status": "ok", "timestamp": 1619287672797, "user_tz": -480, "elapsed": 12524, "user": {"displayName": "<NAME>", "photoUrl": "<KEY>", "userId": "13013387071134080776"}}
per_device_train_batch_size = 32 # @param {type:"slider", min:1, max:64, step:1}
learning_rate = 2e-5 # @param {type:"slider", min:2e-7, max:2e-3, step:2e-7}
num_train_epochs = 3 # @param {type:"slider", min:1, max:10, step:1}
logging_steps = 10 # @param {type:"slider", min:10, max:100, step:5}
# + [markdown] id="feEx7HM2KTU3"
# # Tokenizer
# + id="-L9Zno23KTnx" colab={"base_uri": "https://localhost:8080/", "height": 164, "referenced_widgets": ["658cfd873ab04d74bfdec828ffef3234", "a6fe3e5496754bf490582f5e4765adff", "11272632c3734379850beb4550be40fe", "b66aca1a2ff94a0eb58519e3ef370c3c", "18c5739a76a340e8985f984fc336acfa", "74ed8c66952a4719a4359f7417eaec23", "3494ba5caebc49f79be64dcb14ea11c8", "<KEY>", "2e982fd57dd34e61a9c96547c173884e", "a0bb7ee2969b4ae8a6fee41a5144f572", "<KEY>", "<KEY>", "6ede55e1e5b048feb46ce85ffab0ca97", "<KEY>", "<KEY>", "d183edbaaf644127a4f2e7ce97f03c98", "703acb7ebe3c478bacc651ed6de4d291", "87a85beba476482a9fbd4836795108ab", "<KEY>", "7427faf724f64039b00dce7b300deb74", "7c7a8a13f6f24a9d802c7b7372650513", "<KEY>", "<KEY>", "ca51fbb7892b4afc89c52cf1148e6314"]} executionInfo={"status": "ok", "timestamp": 1619287675100, "user_tz": -480, "elapsed": 14336, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="6be8ffd7-fd11-4796-fdfc-022ca0232dcf"
tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
# + [markdown] id="3DcNvb7pCbS4"
# # Dataset
# + colab={"base_uri": "https://localhost:8080/", "height": 218, "referenced_widgets": ["b9ba490af1364eb4823943ff87158eed", "2f6155c624734821abda564a480dfa39", "b049355dc53f494eb74f363973c42074", "5af28db4e4274315b94ad745094d2b08", "d65f1fb590d543bdb71452749fcbbd9d", "65712580b18243c898496b27344cf2e5", "5433c68a0dd747cf9ace06feafd52da6", "<KEY>", "df5e82edbce441058bde764ee9cd2b56", "ac28e2e76f904d66898e4fb962a55825", "ac969fbe2e554dc996738f129b2e57ae", "a9d4f4a069024451bac8837072f0378b", "<KEY>", "f6ac354c74c349d8995a07f3fba4a2ac", "75457188e73948d694d50e02f8f2d54a", "264ce3870dde4be2b0c205acc8b19b43", "c88b3e78814446e4a18c735c8d9d26d0", "<KEY>", "<KEY>", "<KEY>", "ca6659d19ba0458f9d12e937f1e65c82", "3dd63ff5358f4565813c460a5953a880", "f1ae4ba4a4b04f588b34e3c94e249196", "<KEY>", "10249ae317b94296ab53918a23ddd5eb", "<KEY>", "<KEY>", "a3f7dc2d4d754badbaebd2566fc406ea", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "7ed5edd1b44546a98582fd4c88abd3bb", "8c5d7bd5eece4493bd9e99ec25fc1ec2", "5904c368953b4be5ace7c25c85915fa9", "634812b93b154738b3868e544ce7f8cc", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "033a274da3d7492eb871b024070b6947", "cf42a3f462f8457787eca97dea477960", "<KEY>", "f673297b239741d9b936e3ed929fe019", "<KEY>", "<KEY>", "<KEY>"]} id="DDJ8tEhBfM_h" executionInfo={"status": "ok", "timestamp": 1619287677539, "user_tz": -480, "elapsed": 16366, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="f430fa22-852a-4b03-ea2b-c99daba76616"
datasets = load_dataset("glue", "cola")
label_list = datasets["train"].features["label"].names
# + id="xGQ8mBlJ5So-" executionInfo={"status": "ok", "timestamp": 1619287677540, "user_tz": -480, "elapsed": 15991, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/<KEY>", "userId": "13013387071134080776"}}
def preprocess_function(examples):
# Tokenize the texts
result = tokenizer(
examples["sentence"],
padding="max_length",
max_length=128,
truncation="longest_first",
)
result["label"] = examples["label"]
return result
# + colab={"base_uri": "https://localhost:8080/", "height": 164, "referenced_widgets": ["5450209f25de4bfab447db3354a87716", "3d018871fde445948366fa8baeaa5a7b", "<KEY>", "fadde9fc922b4de2ad60053d62d14462", "<KEY>", "af1936b6b1164fdca37dd007d92ebeee", "5a5af232250d4aa39955f13f4f61f49e", "11ab1b9ba38d4469ada93a254359b6a1", "7f63aa6c4552488aadae4fe3a0c2433b", "55120c32e3de4ae3b1f27e694f9a68c1", "<KEY>", "ec4f5ecd955948fca6dbe51d126e71b4", "c719d9ad27a44c94802e13ae07990bfc", "2a13b4c23c0e414caea2420638224637", "f16db5603e41473f9ae1130e1ea88434", "b31a09b5b0464d479ed1cac715e780c2", "7718a34079e14e8080017816ab2ec7f6", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "38cb08ea22514503aa5567d2d090f3f7", "28e6e6ac87214aa4bd16a72d1c40d687", "9dd7520695d0480590ebb676951b3eba"]} id="3mNVI6n65Sk7" executionInfo={"status": "ok", "timestamp": 1619287678517, "user_tz": -480, "elapsed": 16402, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="b789f5e5-f075-4127-ae4d-55699a89ec0a"
datasets = datasets.map(preprocess_function, batched=True)
datasets["test"] = datasets["test"].remove_columns("label")
# + colab={"base_uri": "https://localhost:8080/"} id="m0JWvfMW5Si0" executionInfo={"status": "ok", "timestamp": 1619287678518, "user_tz": -480, "elapsed": 15884, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="407c7dcc-12ec-4b46-da54-7c098a5a82b5"
for index in range(3):
print(f"Sample {index} of the training set: {datasets['train'][index]}.")
# + [markdown] id="M5_HZzpUCnQY"
# # Model
# + colab={"base_uri": "https://localhost:8080/", "height": 220, "referenced_widgets": ["bdab369eabe24e2dbbbe0697239af937", "b8f7c62a2d36456dad05d1050c7e75fd", "fe788c6cc9b44369adaf0a93fb539b94", "53dc251ffce5431f8dfc49e9abfbd609", "5b182fad7c6a4cee980ac8ce992b5200", "a02c89a68fa1444c86baf7c41392ff2d", "<KEY>", "<KEY>", "e26d03273d57445c9b064e6e931f326c", "<KEY>", "<KEY>", "<KEY>", "f1e5da05d32f486a95b3d3ef06481381", "<KEY>", "<KEY>", "e2936ff4c9af4c35808914ee0c5f734b"]} id="xvnUY6NJ5SrE" executionInfo={"status": "ok", "timestamp": 1619287691347, "user_tz": -480, "elapsed": 27158, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="4410f1d6-e9cf-415f-dac0-26b6e5e60ee3"
config = BertConfig.from_pretrained(
"bert-base-cased", num_labels=len(label_list), finetuning_task="cola"
)
model = BertForSequenceClassification.from_pretrained("bert-base-cased", config=config)
# + [markdown] id="KMiIOaURC5hM"
# # Metric
# + id="dII7KSVX_QlK" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["4f26337a7dd242f9b0178b01aa70cad2", "47036f7fd6384964b3cbfc0a7ed20a8b", "f2420e8726024a109b321bb06e05901e", "dc41af0bdfde4505a20175fd1e44d6a0", "11effe9d31924b5289bfe3f87a1b0d1f", "913c4331ff7b4430a96331ad39d1dee4", "273c9ac362144d6b8a9e843326193aa2", "8be32d4835fa49aabec10190c3874f15"]} executionInfo={"status": "ok", "timestamp": 1619287691348, "user_tz": -480, "elapsed": 25868, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="72a01da0-38a0-44be-c146-c9b48df671b4"
metric = load_metric("glue", "cola")
# + id="_LFGUhQt5Sgq" executionInfo={"status": "ok", "timestamp": 1619287691348, "user_tz": -480, "elapsed": 25342, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}}
def compute_metrics(p: EvalPrediction):
preds = p.predictions[0] if isinstance(p.predictions, tuple) else p.predictions
preds = np.argmax(preds, axis=1)
result = metric.compute(predictions=preds, references=p.label_ids)
if len(result) > 1:
result["combined_score"] = np.mean(list(result.values())).item()
return result
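`compute_metrics` first unwraps the `(logits, ...)` tuple that some models return, then collapses the logits to class ids with `argmax` over the last axis. That reduction with dummy logits:

```python
import numpy as np

logits = np.array([[0.1, 2.0],    # row 0 -> class 1
                   [1.5, -0.3],   # row 1 -> class 0
                   [0.0, 0.7]])   # row 2 -> class 1
preds = np.argmax(logits, axis=1)
print(preds)  # [1 0 1]
```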
# + [markdown] id="eR0zJvATDF4Y"
# # Trainer
# + id="X3ptpS-MBdiu" executionInfo={"status": "ok", "timestamp": 1619287691349, "user_tz": -480, "elapsed": 24198, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}}
training_args = TrainingArguments(
output_dir="Transformers Trainer",
do_train=True,
do_eval=True,
evaluation_strategy="epoch",
per_device_train_batch_size=per_device_train_batch_size,
learning_rate=learning_rate,
num_train_epochs=num_train_epochs,
logging_strategy="steps",
logging_steps=logging_steps,
report_to="wandb" if os.getenv("WANDB_PROJECT") else "none",
)
# + id="jOLK2s2V5Sel" executionInfo={"status": "ok", "timestamp": 1619287702936, "user_tz": -480, "elapsed": 34908, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}}
trainer = Trainer(
model=model,
args=training_args,
train_dataset=datasets["train"],
eval_dataset=datasets["validation"],
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=default_data_collator,
)
# + [markdown] id="SASUk8x7DKmO"
# # Training
# + id="UbvbnlJY5ScG" colab={"base_uri": "https://localhost:8080/", "height": 375} executionInfo={"status": "ok", "timestamp": 1619288334407, "user_tz": -480, "elapsed": 630220, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="97972cfb-bc6f-4911-973b-c6a6ece1619b"
train_result = trainer.train()
metrics = train_result.metrics
metrics["train_samples"] = len(datasets["train"])
trainer.save_model()
trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
trainer.save_state()
# + [markdown] id="3PXcECXZDNnC"
# # Evaluation
# + id="CalPa25E5SZ3" colab={"base_uri": "https://localhost:8080/", "height": 37} executionInfo={"status": "ok", "timestamp": 1619288343338, "user_tz": -480, "elapsed": 632718, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="791c6dc4-be21-49ef-f9a1-077c95c719e5"
metrics = trainer.evaluate()
metrics["eval_samples"] = len(datasets["validation"])
trainer.log_metrics("eval", metrics)
trainer.save_metrics("eval", metrics)
# + [markdown] id="7fa1nAnrDRo1"
# # Test
# + [markdown] id="K6-azaAMdUW4"
# ## On dataset
# + id="m-sYOY_Z5SX2" colab={"base_uri": "https://localhost:8080/", "height": 37} executionInfo={"status": "ok", "timestamp": 1619288352659, "user_tz": -480, "elapsed": 638123, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="e2ac2000-a504-488d-e974-e235ffb292c3"
predictions = trainer.predict(test_dataset=datasets["test"]).predictions
predictions = np.argmax(predictions, axis=1)
# + id="KBhIzZc85SVt" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1619288353154, "user_tz": -480, "elapsed": 637940, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="67fa9933-c14c-456c-9b22-9bae945f636f"
for index, (sample, pred) in enumerate(zip(datasets["test"]["sentence"], predictions)):
print(f"{index}\t{label_list[pred]}\t{sample}")
# + [markdown] id="_rfR3-VmdRQn"
# ## Manually
# + id="J2lIJUZSYwmd" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1619288408534, "user_tz": -480, "elapsed": 903, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="c67cb847-7c68-4982-86d9-7e5f7946c43a"
sentence = "The probable hostile German reaction is unfortunate." # @param {type:"string"}
tokenized_input = tokenizer(sentence, return_tensors="pt").to(model.device)
outputs = model(**tokenized_input)
print(f"Prediction: {label_list[outputs.logits.argmax(dim=-1).item()]}")
# + [markdown] id="4D8-YEkriwLX"
# # Inference
# + id="yNJduFfTfahF" executionInfo={"status": "ok", "timestamp": 1619288456031, "user_tz": -480, "elapsed": 1412, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}}
import torch
from transformers import BertConfig, BertForSequenceClassification, BertTokenizerFast
# + id="nPjyt8H_i_8i" executionInfo={"status": "ok", "timestamp": 1619288462813, "user_tz": -480, "elapsed": 8184, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}}
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
label_list = ["unacceptable", "acceptable"]
tokenizer = BertTokenizerFast.from_pretrained("Transformers Trainer")
config = BertConfig.from_pretrained("Transformers Trainer", finetuning_task="cola")
model = BertForSequenceClassification.from_pretrained(
"Transformers Trainer", config=config
).to(device)
# + id="PKdUDuHTjlMo" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1619288465195, "user_tz": -480, "elapsed": 708, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GhRBTPG7kFx7Gw14fyOvKjff2xqDyZ0tjThv8YKXg=s64", "userId": "13013387071134080776"}} outputId="32951a13-fb3a-4be8-b267-c4bc3fbc6876"
sentence = "The probable hostile German reaction is unfortunate." # @param {type:"string"}
tokenized_input = tokenizer(sentence, return_tensors="pt").to(device)
outputs = model(**tokenized_input)
print(f"Prediction: {label_list[outputs.logits.argmax(dim=-1).item()]}")
# + id="ImWfnAyiDC_-"
|
Homework 6/Transformers Trainer.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: bert
# language: python
# name: bert
# ---
# %load_ext autoreload
# %autoreload 2
# %reload_ext autoreload
# +
import json
with open('podcast.txt', 'rb') as f:
request = json.load(f)
# +
from main import handler
res = handler(request, None)
group = json.loads(res['body'])
# +
## visualization
import iso8601
from datetime import datetime
# meeting start time.
def formatTime(tz_time, datetime_object=False):
# parse the ISO-8601 timestamp and re-render it as a UTC string;
# the fraction separator must be '.' (not ':') for datetime.fromisoformat to parse it back
isoTime = iso8601.parse_date(tz_time)
ts = isoTime.timestamp()
ts = datetime.utcfromtimestamp(ts).strftime("%Y-%m-%d %H:%M:%S.%f")
if datetime_object:
ts = datetime.fromisoformat(ts)
return ts
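`formatTime` round-trips the timestamp through the third-party `iso8601` package and `strftime`; with the standard library alone, the same "minutes since meeting start" arithmetic used below looks like this (with a made-up segment time):

```python
from datetime import datetime

def parse_utc(ts):
    # parse a 'Z'-suffixed ISO-8601 timestamp without third-party libs
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")

start = parse_utc("2019-07-04T12:15:14Z")   # meeting start
seg = parse_utc("2019-07-04T12:20:44Z")     # a segment's start time
print(seg - start)  # 0:05:30
```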
#m_time = formatTime("2019-09-19T06:05:00Z", True)
#m_time = formatTime("2019-09-22T09:37:00Z", True)
#m_time = formatTime("2019-09-16T09:53:21Z", True)
m_time = formatTime("2019-07-04T12:15:14Z", True)
for i in group['group'].keys():
print ("\n\n\nPIMs ", i)
print ("\n\nDiscussion:\n\n ")
for seg in group['group'][i]:
print ("Minutes from the start of the meeting: ", formatTime(seg['startTime'], True) - m_time , seg['id'],"\n")
print (seg['originalText'],"\n")
# +
import boto3
from boto3 import client
from botocore.client import Config
import numpy as np
import json
aws_config = Config(
connect_timeout=60,
read_timeout=300,
retries={"max_attempts": 0},
region_name="us-east-1",
)
lambda_client = client("lambda", config=aws_config)
def get_pims_score(req):
#if req_data is None:
# lambda_payload = {"body": input_list}
# print (json.dumps(lambda_payload))
#else:
# lambda_payload = {"body": {"request": req_data, "text_input": input_list}}
try:
#logger.info("Invoking lambda function")
invoke_response = lambda_client.invoke(
FunctionName="pim",
InvocationType="RequestResponse",
Payload=json.dumps(req),
)
lambda_output = (
invoke_response["Payload"].read().decode("utf8")
)
response = json.loads(lambda_output)
status_code = response["statusCode"]
response_body = response["body"]
if status_code == 200:
result = json.loads(response_body)['d2vResult'][0]['distance']
return result
except Exception as e:
print (e)
return False
# -
pim_result = {}
pim_request = {"contextId": request["body"]["contextId"], "mindId": "01daaqy88qzb19jqz5prjfr76y"}
for seg in request['body']['segments']:
pim_request["segments"] = [seg]
# get_pims_score({"body":pim_request})
pim_result[seg["recordingId"]] = get_pims_score({"body":pim_request})
group
topic_pim = {
}
group_result = {}
for keys in group['group'].keys():
for seg in group['group'][keys]:
group_result[seg['recordingId']] = keys
ranked_pims = sorted([(k,v) for (k,v) in pim_result.items()], key= lambda kv: kv[1])
used_topics = []
group_no = None
index = 0
for (rec_id, distance) in ranked_pims:
if rec_id in group_result.keys():
group_no = group_result[rec_id]
if group_no not in used_topics:
topic_pim[index] = group_no
used_topics.append(group_no)
index += 1
topic_pim
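The ranking step above sorts segments by PIM distance (lowest first) and assigns each topic group the rank of its best segment, skipping topics already seen. The same dedupe-by-rank pattern on toy data:

```python
pim_result = {"r1": 0.9, "r2": 0.2, "r3": 0.5, "r4": 0.3}
group_result = {"r1": "A", "r2": "B", "r3": "A", "r4": "C"}

ranked = sorted(pim_result.items(), key=lambda kv: kv[1])  # best (lowest) first
topic_rank, used = {}, []
index = 0
for rec_id, _ in ranked:
    topic = group_result.get(rec_id)
    if topic is not None and topic not in used:
        # first (best-ranked) appearance of each topic wins
        topic_rank[index] = topic
        used.append(topic)
        index += 1
print(topic_rank)  # {0: 'B', 1: 'C', 2: 'A'}
```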
final_output = []
final_output = list(map(lambda x: group['group'][x] , topic_pim.values()))
final_output[0][0]
users = []
for result in final_output:
temp_users = []
for seg in result:
if seg['spokenBy'] not in temp_users:
temp_users.append(seg['spokenBy'])
users.append(temp_users)
users
|
community_detection/extract_topic_pims/topic_based_pims.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Paraphrase
# <div class="alert alert-info">
#
# This tutorial is available as an IPython notebook at [Malaya/example/paraphrase](https://github.com/huseinzol05/Malaya/tree/master/example/paraphrase).
#
# </div>
# <div class="alert alert-warning">
#
# This module was only trained on standard language structure, so it is not safe to use on local (colloquial) language structures.
#
# </div>
# +
# %%time
import malaya
from pprint import pprint
# -
# ### List available Transformer model
malaya.paraphrase.available_transformer()
# ### Load Transformer model
# ```python
# def transformer(model: str = 'small-t5', quantized: bool = False, **kwargs):
# """
# Load Malaya transformer encoder-decoder model to generate a paraphrase given a string.
#
# Parameters
# ----------
# model : str, optional (default='small-t5')
# Model architecture supported. Allowed values:
#
# * ``'t5'`` - T5 BASE parameters.
# * ``'small-t5'`` - T5 SMALL parameters.
# * ``'tiny-t5'`` - T5 TINY parameters.
#
# quantized : bool, optional (default=False)
# if True, will load 8-bit quantized model.
# Quantized model not necessary faster, totally depends on the machine.
#
# Returns
# -------
# result: model
# List of model classes:
#
# * if `t5` in model, will return `malaya.model.t5.Paraphrase`.
# """
# ```
t5 = malaya.paraphrase.transformer(model = 'small-t5')
# ### Paraphrase simple string
#
# We only provide `greedy_decoder` method for T5 models,
#
# ```python
# def greedy_decoder(self, strings: List[str]):
# """
# paraphrase strings.
#
# Parameters
# ----------
# strings: List[str]
#
# Returns
# -------
# result: List[str]
# """
# ```
string = "Beliau yang juga saksi pendakwaan kesembilan berkata, ia bagi mengelak daripada wujud isu digunakan terhadap Najib."
pprint(string)
pprint(t5.greedy_decoder([string]))
string = """
PELETAKAN jawatan Tun Dr <NAME> sebagai Pengerusi Parti Pribumi Bersatu Malaysia (Bersatu) ditolak di dalam mesyuarat khas Majlis Pimpinan Tertinggi (MPT) pada 24 Februari lalu.
Justeru, tidak timbul soal peletakan jawatan itu sah atau tidak kerana ia sudah pun diputuskan pada peringkat parti yang dipersetujui semua termasuk Presiden, Tan Sri Muhyiddin Yassin.
Bekas Setiausaha Agung Bersatu Datuk Marzuki Yahya berkata, pada mesyuarat itu MPT sebulat suara menolak peletakan jawatan Dr Mahathir.
"Jadi ini agak berlawanan dengan keputusan yang kita sudah buat. Saya tak faham bagaimana Jabatan Pendaftar Pertubuhan Malaysia (JPPM) kata peletakan jawatan itu sah sedangkan kita sudah buat keputusan di dalam mesyuarat, bukan seorang dua yang buat keputusan.
"Semua keputusan mesti dibuat melalui parti. Walau apa juga perbincangan dibuat di luar daripada keputusan mesyuarat, ini bukan keputusan parti.
"Apa locus standy yang ada pada Setiausaha Kerja untuk membawa perkara ini kepada JPPM. Seharusnya ia dibawa kepada Setiausaha Agung sebagai pentadbir kepada parti," katanya kepada Harian Metro.
Beliau mengulas laporan media tempatan hari ini mengenai pengesahan JPPM bahawa Dr Mahathir tidak lagi menjadi Pengerusi Bersatu berikutan peletakan jawatannya di tengah-tengah pergolakan politik pada akhir Februari adalah sah.
Laporan itu juga menyatakan, kedudukan <NAME> memangku jawatan itu juga sah.
Menurutnya, memang betul Dr Mahathir menghantar surat peletakan jawatan, tetapi ditolak oleh MPT.
"Fasal yang disebut itu terpakai sekiranya berhenti atau diberhentikan, tetapi ini mesyuarat sudah menolak," katanya.
Marzuki turut mempersoal kenyataan media yang dibuat beberapa pimpinan parti itu hari ini yang menyatakan sokongan kepada Perikatan Nasional.
"Kenyataan media bukanlah keputusan rasmi. Walaupun kita buat 1,000 kenyataan sekali pun ia tetap tidak merubah keputusan yang sudah dibuat di dalam mesyuarat. Kita catat di dalam minit apa yang berlaku di dalam mesyuarat," katanya.
"""
# +
import re
# minimum cleaning, just simply to remove newlines.
def cleaning(string):
string = string.replace('\n', ' ')
string = re.sub(r'[ ]+', ' ', string).strip()
return string
string = cleaning(string)
splitted = malaya.text.function.split_into_sentences(string)
splitted
# -
t5.greedy_decoder(splitted)
|
example/paraphrase/load-paraphrase.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gaussian elimination and LU decomposition
#
# Say we want to compute the solution of
# $$Ax = b$$
# for the vector $x$. We learn how to do this by transforming it to the problem of solving
# $$U x = y$$
# where $U$ is an upper-triangular matrix obtained by performing Gaussian elimination on $A$ and $y$ is obtained by performing the same operations on $b$. We can then use back substitution to solve $Ux=y$ more easily than solving $Ax=b$ directly.
#
# This approach is directly related to the LU decomposition of a matrix, where we wish to factor a matrix $A$ into a product of a lower triangular matrix $L$ and an upper triangular matrix $U$ to give $A = LU$. To understand how to compute the LU decomposition of a matrix, let us start by reminding ourselves of how to do Gaussian elimination.
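# Before working through the algorithm by hand, note that this factorization is available off the shelf. The following sketch (assuming SciPy is installed) uses `scipy.linalg.lu`, which applies partial pivoting and therefore also returns a permutation matrix $P$ with $A = PLU$:

```python
# Factor A with SciPy's LU routine and verify the reconstruction.
import numpy as np
from scipy.linalg import lu

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
P, L, U = lu(A)  # A = P @ L @ U; P need not be the identity due to pivoting
print(np.allclose(P @ L @ U, A))  # True
```

# Because of pivoting, SciPy's $L$ and $U$ may differ from the ones we derive by hand below, but their product still recovers $A$.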
#
# ## Gaussian elimination by hand
#
# To start, consider the following 3x3 matrix
# $$ A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{bmatrix}$$
#
# 1. Use Gaussian elimination to transform this by hand to an upper triangular matrix $U$ (in row echelon form). Record each elementary row operation you perform along the way.
#
# 2. Apply the same sequence of row operations to the vector
# $$b = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$$
# to obtain the transformed vector $y$.
#
# 3. Use back substitution to solve $U x = y$.
# ### Solution
#
# Using the standard Gaussian elimination algorithm, we would perform the following steps:
# 1. Subtract 4x(row 1) from row 2. This leaves us with $$\begin{bmatrix} 1 & 2 & 3 \\ 0 & -3 & -6 \\ 7 & 8 & 10 \end{bmatrix}$$
# 2. Subtract 7x(row 1) from row 3. This leaves us with $$\begin{bmatrix} 1 & 2 & 3 \\ 0 & -3 & -6 \\ 0 & -6 & -11 \end{bmatrix}$$
# 3. Subtract 2x(row 2) from row 3. This leaves us with $$\begin{bmatrix} 1 & 2 & 3 \\ 0 & -3 & -6 \\ 0 & 0 & 1 \end{bmatrix}$$
#
# We now have an upper-triangular matrix $U$. Applying the same sequence of operations to $b$:
# 1. Subtract 4x(row 1) from row 2. This leaves us with $$\begin{bmatrix} 1 \\ -2 \\ 3\end{bmatrix}$$
# 2. Subtract 7x(row 1) from row 3. This leaves us with $$\begin{bmatrix} 1 \\ -2 \\ -4 \end{bmatrix}$$
# 3. Subtract 2x(row 2) from row 3. This leaves us with $$\begin{bmatrix} 1 \\ -2 \\ 0 \end{bmatrix}$$
#
# Finally, we use back substitution to solve $Ux = y$ for $x$. Starting with the last entry
# $$ x_n = 0 / 1 = 0$$
# $$ x_{n-1} = \frac{-2 - (-6)(0)}{-3} = \frac23$$
# $$ x_{n-2} = \frac{1 - (2)(\frac23) - (3)(0)}{1} = -\frac13$$
# so we have the solution
# $$x = \begin{bmatrix} -\frac13 \\ \frac23 \\ 0\end{bmatrix}$$
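# As a quick sanity check of the hand computation, we can solve the original system directly with NumPy and compare against the answer above:

```python
# Solve A x = b directly and compare with the hand-derived solution.
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
b = np.array([1., 2., 3.])
x = np.linalg.solve(A, b)
print(np.allclose(x, [-1/3, 2/3, 0]))  # True
```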
# ## Gaussian elimination in Python
#
# We will now transform the previous algorithm into Python code. First of all we define the matrix $A$ and the vector $b$.
# +
import numpy as np
A = np.array([[1,2,3],[4,5,6],[7,8,10]])
b = np.array([1,2,3])
n = 3
# -
# Now perform Gaussian elimination and store the result in a matrix $U$ and a vector $y$. We keep track of the multiplication factors for each step in a matrix $L$.
# +
U = np.array(A, dtype=float)  # copy A as floats so the row updates are not truncated to integers
y = np.array(b, dtype=float)
L = np.identity(n)
for k in range(0,n):
for i in range(k+1,n):
L[i,k] = U[i,k]/U[k,k]
U[i,:] = U[i,:] - L[i,k]*U[k,:]
y[i] = y[i] - L[i,k]*y[k]
# -
U
y
# If we consider how many operations this took: the outer loop runs $n$ times, the inner loop runs $n-(k+1)$ times for each $k$, and each inner iteration performs about $n$ multiplications for the row update. Summing, $\sum_{k=0}^{n-1}(n-k-1)\,n \approx n^3/2$, so Gaussian elimination requires $\mathcal{O}(n^3)$ operations.
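# The cubic growth can be made concrete by counting the row-update multiplications exactly (a sketch; the count ignores the cheaper work on $b$):

```python
# Count multiplications in the elimination loops: for each pivot k the
# inner loop runs n-(k+1) times, doing n multiplications per row update.
def elimination_mults(n):
    return sum((n - (k + 1)) * n for k in range(n))

print(elimination_mults(3))    # 9
print(elimination_mults(100))  # 495000, close to 100**3 / 2
```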
# Let us now solve for $x$ using back substitution on $U x = y$.
# +
x = np.zeros(n)
x[n-1] = y[n-1]/U[n-1,n-1]
for i in range(n-2,-1,-1):
x[i] = (y[i] - U[i,i+1:n]@x[i+1:n])/U[i,i]
# -
x
# We can check that our original matrix is given by $A=LU$:
L@U
# ## Gaussian elimination by matrix multiplication
#
# We could consider each of the steps in Gaussian elimination in terms of multiplication on the left by a sequence of *elementary elimination matrices*. These come in three forms:
#
# 1. Multiplying row $i$ by a scalar $c$: $\mathbf{r}_i \to c\, \mathbf{r}_i$. This is equivalent to pre-multiplying by a matrix with $1$'s along the diagonal and $c$ in the $i$-th diagonal entry: $$E_1(i, c) = \begin{bmatrix}
# 1 & & & & & & \\
# & \ddots & & & & & \\
# & & 1 & & & & \\
# & & & c & & & \\
# & & & & 1 & & \\
# & & & & & \ddots & \\
# & & & & & & 1
# \end{bmatrix}$$
# Note that the inverse is given by $E_1(i, c)^{-1} = E_1(i, c^{-1})$.
#
# 2. Add a multiple $c$ of row $j$ to row $i$: $\mathbf{r}_i \to \mathbf{r}_i + c\, \mathbf{r}_j$. This is equivalent to premultiplying by a matrix with $1$'s along the diagonal and $c$ in $(i, j)$-th entry:
# $$E_2(i,j,c) = \begin{bmatrix}
# 1 & & & & & & \\
# & \ddots & & & & & \\
# & & 1 & & & & \\
# & & & \ddots & & & \\
# & & c & & 1 & & \\
# & & & & & \ddots & \\
# & & & & & & 1
# \end{bmatrix}$$
# In this case the inverse is given by $E_2(i, j, c)^{-1} = E_2(i, j, -c)$.
#
# 3. Interchanging rows $i$ and $j$: $\mathbf{r}_i \leftrightarrow \mathbf{r}_j$. This is equivalent to pre-multiplying by a matrix which is the identity with rows $i$ and $j$ swapped: $$E_3(i,j) = \begin{bmatrix}
# 1 & & & & & & \\
# & \ddots & & & & & \\
# & & 0 & & 1 & & \\
# & & & \ddots & & & \\
# & & 1 & & 0 & & \\
# & & & & & \ddots & \\
# & & & & & & 1
# \end{bmatrix}$$
# In this case $E_3(i,j)$ is a permutation matrix and is its own inverse: $E_3(i,j)^{-1} = E_3(i,j)$.
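# The claimed inverse relations are easy to verify numerically. A small self-contained sketch (with its own $4\times4$ versions of the matrices, separate from the functions defined next):

```python
# Numerical check of the inverse relations for elementary elimination matrices.
import numpy as np

n = 4

def E1(i, c):     # scale row i by c
    e = np.identity(n)
    e[i, i] = c
    return e

def E2(i, j, c):  # add c * (row j) to row i
    e = np.identity(n)
    e[i, j] = c
    return e

# E1(i, c)^{-1} = E1(i, 1/c)  and  E2(i, j, c)^{-1} = E2(i, j, -c)
print(np.allclose(E1(1, 4.0) @ E1(1, 0.25), np.identity(n)))        # True
print(np.allclose(E2(2, 0, 5.0) @ E2(2, 0, -5.0), np.identity(n)))  # True
```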
# Let's work out the sequence of elimination matrices we need to perform the Gaussian elimination from the previous example. First, we define Python functions that produce each of the three types of elimination matrix:
# +
def E1(i,c):
e1 = np.identity(n)
e1[i,i] = c
return e1
def E2(i,j,c):
e2 = np.identity(n)
e2[i,j] = c
return e2
def E3(i,j):
e3 = np.identity(n)
e3[i,i] = 0
e3[j,j] = 0
e3[i,j] = 1
e3[j,i] = 1
return e3
# -
# Now, we can see that the Gaussian elimination steps correspond to
# $$ U = E_2(2,1,-2) E_2(2,0,-7) E_2(1,0,-4) A$$
E2(1,0,-4)@A
E2(2,0,-7)@E2(1,0,-4)@A
E2(2,1,-2)@E2(2,0,-7)@E2(1,0,-4)@A
# We therefore have
# $$
# \begin{aligned}
# A &= [E_2(2,1,-2) E_2(2,0,-7) E_2(1,0,-4)]^{-1} U \\
# &= E_2(1,0,-4)^{-1} E_2(2,0,-7)^{-1} E_2(2,1,-2)^{-1} U \\
# &= E_2(1,0,4) E_2(2,0,7) E_2(2,1,2) U \\
# &= L U
# \end{aligned}
# $$
# so we have $L$ in terms of elementary elimination matrices.
E2(1,0,4)@E2(2,0,7)@E2(2,1,2)
L
# ## LU decomposition and rank-1 matrices
#
# In the lecture videos we emphasized the idea of matrix multiplication in terms of columns-times-rows and the related idea of breaking a matrix into a sum of rank-1 matrices. Now, let's see how this gives a different way of looking at the LU decomposition.
#
# The idea is that we would like to split $A$ into a rank-1 piece that picks out the first row and first column, plus a rank-1 piece that picks out the next row and column, and so on:
# $$
# \begin{aligned}
# A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{bmatrix}
# &= \begin{bmatrix} 1 & 2 & 3 \\ 4 & \_ & \_ \\ 7 & \_ & \_ \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & \_ & \_ \\ 0 & \_ & \_ \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \_ \end{bmatrix}
# \end{aligned}
# $$
# We can fill in all the blanks here by insisting that each term is rank-1 and that we recover $A$.
# Doing so, we get
# $$
# \begin{aligned}
# A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 10 \end{bmatrix}
# &= \begin{bmatrix} 1 & 2 & 3 \\ 4 & \_ & \_ \\ 7 & \_ & \_ \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & \_ & \_ \\ 0 & \_ & \_ \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \_ \end{bmatrix}\\
# &= \begin{bmatrix} 1 & 2 & 3 \\ 4 & 8 & 12 \\ 7 & 14 & 21 \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & \_ & \_ \\ 0 & \_ & \_ \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \_ \end{bmatrix} \quad \text{(rank-1)}\\
# &= \begin{bmatrix} 1 & 2 & 3 \\ 4 & 8 & 12 \\ 7 & 14 & 21 \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & -3 & -6 \\ 0 & -6 & \_ \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \_ \end{bmatrix} \quad \text{(=$A$)}\\
# &= \begin{bmatrix} 1 & 2 & 3 \\ 4 & 8 & 12 \\ 7 & 14 & 21 \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & -3 & -6 \\ 0 & -6 & -12 \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \_ \end{bmatrix} \quad \text{(rank-1)}\\
# &= \begin{bmatrix} 1 & 2 & 3 \\ 4 & 8 & 12 \\ 7 & 14 & 21 \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & -3 & -6 \\ 0 & -6 & -12 \end{bmatrix}
# + \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad \text{(=$A$)} \\
# &= \begin{bmatrix} 1 \\ 4 \\ 7 \end{bmatrix} \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}
# + \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix} \begin{bmatrix} 0 & -3 & -6 \end{bmatrix}
# + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} \\
# &= l_1 u_1{}^T + l_2 u_2{}^T + l_3 u_3{}^T = LU
# \end{aligned}
# $$
l1 = L[:,0:1]
u1T = U[0:1]
l2 = L[:,1:2]
u2T = U[1:2]
l3 = L[:,2:3]
u3T = U[2:3]
l1@u1T
l2@u2T
l3@u3T
l1@u1T + l2@u2T + l3@u3T
|
Matrix Factorisation/LU.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: R
# language: R
# name: ir
# ---
# # Preamble
# +
libraries = c("dplyr","magrittr","tidyr","ggplot2","gridExtra","RColorBrewer") #,"zoo","directlabels")
for(x in libraries) {
library(x,character.only=TRUE,warn.conflicts=FALSE) }
# options(jupyter.plot_mimetypes = "image/svg+xml")
clrs = brewer.pal(8,"Set1")
if (Sys.info()[['sysname']]=='Windows') {
windowsFonts(Times = windowsFont("Times New Roman"))
theme_set(theme_bw(base_size=12,base_family='Times'))
} else { theme_set(theme_bw(base_size=12,base_family='Times')) }
'%&%' = function(x,y)paste0(x,y)
'%!in%' = function(x,y)!('%in%'(x,y))
# Initialization of array for recorded plots
nm = c(); plot_point_sizes = list()
# -
# # Sensitivity analysis for $\mu$ and $\bar\mu$
"../figures/draft/final-sensitivity_mu.csv" %>%
read.csv %>%
select(-matches("err|sw_end|univ_point")) %>%
arrange(T,Tbar) -> df
# group_by(T) %>% mutate(y = 1:n()) %>% ungroup %>%
# group_by(Tbar) %>% mutate(x = 1:n()) %>% ungroup
df %>% head
# ## Main figure for baseline cost of resistance 10%
# +
cs = c(4.25,3.6)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
outcome_threshold = 4
nbin=20; w=1; h=.9
cl_rc = brewer.pal(8,"Greys")[2]
cl_pt = "green" #brewer.pal(8,"Greens")[3]
vjst = .4
hjst = .5
df %>% filter(index==1) %>%
mutate(outcome_cut=if_else(outcome>=outcome_threshold,outcome_threshold,outcome)) %>%
ggplot(aes(x=T,y=Tbar,fill=outcome_cut)) +
geom_raster(interpolate=T) +
geom_abline(slope=1,size=.2,linetype="dashed") +
stat_contour(aes(z=outcome),breaks=c(.2,.5,2,5), color=brewer.pal(8,"Greys")[1], size=.9) +
stat_contour(aes(z=outcome),breaks=c(1), color="black", size=1) +
scale_x_continuous(expand=c(0,0)) + scale_y_continuous(expand=c(0,0)) +
scale_fill_gradientn(limits=c(0,outcome_threshold),
colours=rev(brewer.pal(11,"RdYlBu")),
values=c(seq(0,1,length.out=6),seq(1,outcome_threshold,length.out=6)[-1])/outcome_threshold,
breaks=c(0,1,2,outcome_threshold),
labels=c(0,1,2,outcome_threshold%&%"+"),
name="Fold\nchange") +
xlab("Average time of the direct switch (days)") +
ylab("Average time of the inverse switch (days)") +
coord_equal() +
annotate("point",x=10.5,y=14,shape=23,fill=cl_pt,size=2) +
annotate("rect",xmin=8.85-w,xmax=8.85+w,ymin=35-h,ymax=35+h,fill=cl_rc,color="white",size=.6) +
annotate("text",x=8.85,y=35,label="2",size=3,vjust=vjst,hjust=hjst) +
annotate("rect",xmin=35-w,xmax=35+w,ymin=18.1-h,ymax=18.1+h,fill=cl_rc,color="white",size=.6) +
annotate("text",x=35,y=18.1,label="1/2",size=2.5,vjust=vjst,hjust=hjst) +
annotate("rect",xmin=35-w,xmax=35+w,ymin=7.5-h,ymax=7.5+h,fill=cl_rc,color="white",size=.6) +
annotate("text",x=35,y=7.5,label="1/4",size=2.5,vjust=vjst,hjust=hjst) +
theme(legend.title=element_text(size=10,vjust=2),legend.text=element_text(size=8),
axis.title=element_text(size=11)) -> p1
p1
ggsave(plot=p1,width=cs[1],height=cs[2],filename="../figures/draft/Fig6-A.pdf",useDingbats=FALSE)
# -
# ## Two other supplementary figures for 5% and 20%
# +
cs = c(3.5,3.6)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
df %>% filter(index==2) %>%
mutate(outcome_cut=if_else(outcome>=outcome_threshold,outcome_threshold,outcome)) %>%
ggplot(aes(x=T,y=Tbar,fill=outcome_cut)) +
geom_raster(interpolate=T) +
geom_abline(slope=1,size=.2,linetype="dashed") +
stat_contour(aes(z=outcome),breaks=c(.2,.5,2,5), color=brewer.pal(8,"Greys")[1], size=.9) +
stat_contour(aes(z=outcome),breaks=c(1), color="black", size=.9) +
scale_x_continuous(expand=c(0,0)) + scale_y_continuous(expand=c(0,0)) +
scale_fill_gradientn(limits=c(0,outcome_threshold),
colours=rev(brewer.pal(11,"RdYlBu")),
values=c(seq(0,1,length.out=6),seq(1,outcome_threshold,length.out=6)[-1])/outcome_threshold,
breaks=c(1e-4,1,2,outcome_threshold/2,outcome_threshold),
labels=c(0,1,2,outcome_threshold/2,outcome_threshold%&%"+"),
name="Fold\nchange") +
xlab("Average time of the direct switch (days)") +
ylab("Average time of the inverse switch (days)") +
coord_equal() + guides(fill=FALSE) +
annotate("rect",xmin=12.6-w,xmax=12.6+w,ymin=35-h,ymax=35+h,fill=cl_rc,color="white",size=.6) +
annotate("text",x=12.6,y=35,label="2",size=3,vjust=vjst,hjust=hjst) +
annotate("rect",xmin=35-w,xmax=35+w,ymin=15.3-h,ymax=15.3+h,fill=cl_rc,color="white",size=.6) +
annotate("text",x=35,y=15.3,label="1/2",size=2.5,vjust=vjst,hjust=hjst) +
annotate("rect",xmin=35-w,xmax=35+w,ymin=7-h,ymax=7+h,fill=cl_rc,color="white",size=.6) +
annotate("text",x=35,y=7,label="1/4",size=2.5,vjust=vjst,hjust=hjst) +
theme(legend.title=element_text(size=10,vjust=10),legend.text=element_text(size=8),
axis.title=element_text(size=10)) -> p2
p2
ggsave(plot=p2,width=cs[1],height=cs[2],filename="../figures/draft/Fig6-C.pdf",useDingbats=FALSE)
# +
cs = c(3.5,3.6)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
df %>% filter(index==4) %>%
mutate(outcome_cut=if_else(outcome>=outcome_threshold,outcome_threshold,outcome)) %>%
ggplot(aes(x=T,y=Tbar,fill=outcome_cut)) +
geom_raster(interpolate=T) +
geom_abline(slope=1,size=.2,linetype="dashed") +
stat_contour(aes(z=outcome),breaks=c(.2,.5,2,5), color=brewer.pal(8,"Greys")[1], size=.9) +
stat_contour(aes(z=outcome),breaks=c(1), color="black", size=.9) +
scale_x_continuous(expand=c(0,0)) + scale_y_continuous(expand=c(0,0)) +
scale_fill_gradientn(limits=c(0,outcome_threshold),
colours=rev(brewer.pal(11,"RdYlBu")),
values=c(seq(0,1,length.out=6),seq(1,outcome_threshold,length.out=6)[-1])/outcome_threshold,
breaks=c(1e-4,1,2,outcome_threshold/2,outcome_threshold),
labels=c(0,1,2,outcome_threshold/2,outcome_threshold),
name="Fold\nchange") +
xlab("Average time of the direct switch (days)") +
ylab("Average time of the inverse switch (days)") +
coord_equal() + guides(fill=FALSE) +
annotate("rect",xmin=4.3-w,xmax=4.2+w,ymin=37-h,ymax=37+h,fill=cl_rc,color="white",size=.6) +
annotate("text",x=4.25,y=37,label="2",size=3,vjust=vjst,hjust=hjst) +
annotate("rect",xmin=35-w,xmax=35+w,ymin=24.6-h,ymax=24.6+h,fill=cl_rc,color="white",size=.6) +
annotate("text",x=35,y=24.6,label="1/2",size=2.5,vjust=vjst,hjust=hjst) +
annotate("rect",xmin=35-w,xmax=35+w,ymin=9-h,ymax=9+h,fill=cl_rc,color="white",size=.6) +
annotate("text",x=35,y=9,label="1/4",size=2.5,vjust=vjst,hjust=hjst) +
theme(legend.title=element_text(size=10,vjust=10),legend.text=element_text(size=8),
axis.title=element_text(size=10)) -> p2
p2
ggsave(plot=p2,width=cs[1],height=cs[2],filename="../figures/draft/Fig6-B.pdf",useDingbats=FALSE)
# -
# # Change in optimal ratio for universal line
"../figures/draft/final-sensitivity_mu.csv" %>%
read.csv %>%
filter(index==1) %>%
select(-matches("err|outcome|_t")) %>%
arrange(T,Tbar) -> df
# group_by(T) %>% mutate(y = 1:n()) %>% ungroup %>%
# group_by(Tbar) %>% mutate(x = 1:n()) %>% ungroup
df %>% head
# +
cs = c(3,3)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
# close values to these two: 7.033, 13.76
Tbs = 10.5; Tbarbs = 14
baseline_values = filter(df,T==Tbs&Tbar==Tbarbs)
df %>%
filter(T==Tbs) %>%
ggplot(aes(x=Tbar)) +
geom_line(aes(y=sw_end_x),size=.4) +
coord_cartesian(expand=0.0,ylim=c(0,1),xlim=c(0,40)) +
xlab("Average time of\n the inverse switch (days)") + ylab("Optimal proportion (%)") +
theme(
panel.grid.major = element_blank(),#element_line(colour="grey",size=.2),
panel.grid.minor = element_blank(),
plot.margin=unit(c(.5,.5,.75,.5),"lines")) +
scale_y_continuous(breaks=seq(0,1,length.out=5),labels=100*seq(0,1,length.out=5)) +
annotate("point",x=Tbarbs,y=baseline_values$sw_end_x,shape=23,fill=cl_pt,size=2) -> p
p
ggsave(plot=p,width=cs[1],height=cs[2],filename="../figures/draft/FigS7-B.pdf",useDingbats=FALSE)
# +
df %>%
filter(Tbar==Tbarbs) %>%
ggplot(aes(x=T)) +
geom_line(aes(y=sw_end_x),size=.4) +
coord_cartesian(expand=0.0,ylim=c(0,1),xlim=c(0,40)) +
xlab("Average time of\n the direct switch (days)") + ylab("Optimal proportion (%)") +
theme(
panel.grid.major = element_blank(),#element_line(colour="grey",size=.2),
panel.grid.minor = element_blank(),
plot.margin=unit(c(.5,.5,.75,.5),"lines")) +
scale_y_continuous(breaks=seq(0,1,length.out=5),labels=100*seq(0,1,length.out=5)) +
annotate("point",x=Tbs,y=baseline_values$sw_end_x,shape=23,fill=cl_pt,size=2) -> p
p
ggsave(plot=p,width=cs[1],height=cs[2],filename="../figures/draft/FigS7-A.pdf",useDingbats=FALSE)
# +
cs = c(4.4,3.9)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
nbin=20; w=3; h=2.2
cl_rc = brewer.pal(8,"Greys")[2]
cl_pt = "green" #brewer.pal(8,"Greens")[3]
vjst = .4
hjst = .5
df %>% filter(index==1) %>%
ggplot(aes(x=T,y=Tbar,fill=sw_end_x)) +
geom_raster(interpolate=T) +
stat_contour(aes(z=sw_end_x),breaks=c(.5), color="black", size=.4, linetype="dashed") +
scale_x_continuous(expand=c(0,0)) + scale_y_continuous(expand=c(0,0)) +
scale_fill_gradientn(limits=c(0,1),
colours=rev(brewer.pal(11,"RdYlBu")),
values=seq(0,1,length.out=11),
breaks=seq(0,1,length.out=5),
labels=100*seq(0,1,length.out=5),
name="optimal\nproportion, %\n") +
xlab("Average time of the direct switch (days)") +
ylab("Average time of the inverse switch (days)") +
coord_equal() +
annotate("point",x=Tbs,y=Tbarbs,shape=23,fill=cl_pt,size=2) +
theme(legend.title=element_text(size=10,vjust=0),legend.text=element_text(size=8),
axis.title=element_text(size=11)) -> p
p
ggsave(plot=p,width=cs[1],height=cs[2],filename="../figures/draft/FigS7-C.pdf",useDingbats=FALSE)
# -
# # Varying other parameters
#
# ## Cost of resistance
# +
"../figures/draft/final-sensitivity_c.csv" %>%
read.csv(colClasses='numeric') %>%
select(-matches("err|sw_end|univ_point")) %>%
arrange(c) -> df_
cs = c(3,2.75)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
c_relative_bs = .1
outcome_bs = filter(df_,c_relative==c_relative_bs)$outcome
pC = df_ %>%
mutate(Value = c_relative*100) %>%
ggplot(aes(x=Value,y=outcome)) +
geom_hline(yintercept=1,size=.25,color="black",linetype="dashed") +
geom_hline(yintercept=0,size=.25,color="black",linetype="solid") +
geom_line(size=.6) +
coord_cartesian(expand=0.01,ylim=c(.75,1.75)) +
ylab("Fold change in tumor size") + xlab("Relative cost of resistance (%)") +
theme(
panel.grid.major = element_blank(),#element_line(colour="grey",size=.2),
panel.grid.minor = element_blank(),
plot.margin=unit(c(.5,.5,.75,.5),"lines")) +
annotate("point",x=c_relative_bs*100,y=outcome_bs,shape=23,fill=cl_pt,size=2)
pC
# -
# # Sensitivity on $\alpha$
# +
"../figures/draft/final-sensitivity_alpha.csv" %>%
read.csv(colClasses='numeric') %>%
select(-matches("err|sw_end|univ_point")) %>%
arrange(alpha) -> df_
cs = c(3,2.75)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
alpha_bs = 0.3
pAlpha = df_ %>%
ggplot(aes(x=alpha,y=outcome)) +
geom_hline(yintercept=1,size=.25,color="black",linetype="dashed") +
geom_hline(yintercept=0,size=.25,color="black",linetype="solid") +
geom_line(size=.6) +
coord_cartesian(expand=0.01) +
ylab("Fold change in tumor size") + xlab(expression(alpha)) +
theme(
axis.title.y = element_text(colour=NA),
panel.grid.major = element_blank(),#element_line(colour="grey",size=.2),
panel.grid.minor = element_blank(),
plot.margin=unit(c(.5,.5,.75,.9),"lines")) +
annotate("point",x=alpha_bs,y=outcome_bs,shape=23,fill=cl_pt,size=2)
pAlpha
# +
"../figures/draft/final-sensitivity_theta.csv" %>%
read.csv(colClasses='numeric') %>%
select(-matches("err|sw_end|univ_point")) %>%
arrange(theta) -> df_
cs = c(3,2.75)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
pTheta = df_ %>% gather(Variable,Value,-outcome) %>%
ggplot(aes(x=Value,y=outcome)) +
geom_hline(yintercept=1,size=.25,color="black",linetype="dashed") +
geom_hline(yintercept=0,size=.25,color="black",linetype="solid") +
geom_line(size=.6) +
coord_cartesian(expand=0.01,ylim=c(.75,2)) +
ylab("Fold change in tumor size") + xlab(expression(theta)) +
theme(
#axis.title.y = element_text(colour=NA),
panel.grid.major = element_blank(),#element_line(colour="grey",size=.2),
panel.grid.minor = element_blank(),
plot.margin=unit(c(.5,.5,.75,.5),"lines")) +
annotate("point",x=.45,y=outcome_bs,shape=23,fill=cl_pt,size=2)
pTheta
# +
"../figures/draft/final-sensitivity_kappa.csv" %>%
read.csv(colClasses='numeric') %>%
select(-matches("err|sw_end|univ_point")) %>%
arrange(kappa) -> df_
cs = c(3,2.75)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
pKappa = df_ %>%
ggplot(aes(x=kappa,y=outcome)) +
geom_hline(yintercept=1,size=.25,color="black",linetype="dashed") +
geom_hline(yintercept=0,size=.25,color="black",linetype="solid") +
geom_line(size=.6) +
coord_cartesian(expand=0.01,ylim=c(.75,2)) +
ylab("Fold change in tumor size") + xlab(expression(kappa)) +
theme(
axis.title.y = element_text(colour=NA),
panel.grid.major = element_blank(),#element_line(colour="grey",size=.2),
panel.grid.minor = element_blank(),
plot.margin=unit(c(.5,.5,.75,.5),"lines")) +
annotate("point",x=40,y=outcome_bs,shape=23,fill=cl_pt,size=2)
pKappa
# +
cs = c(5.5,5.5)
options(repr.plot.width=cs[1],repr.plot.height=cs[2])
pFinal = grid.arrange(ggplotGrob(pC), ggplotGrob(pAlpha), ggplotGrob(pTheta),
ggplotGrob(pKappa), nrow=2)
ggsave(plot=pFinal,height=cs[2],width=cs[1],dpi=200,filename="../figures/FigS8.pdf",useDingbats=FALSE)
# -
|
scripts/.ipynb_checkpoints/C1. Sensitivity analysis [R]-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from PIL import Image
img = Image.open("brain.tif")
img.load()
img
# +
# %matplotlib inline
import matplotlib.pyplot as plt
from scipy import ndimage
from sklearn import cluster
import numpy as np
import time
import sys
import cv2
from skimage import data
from PIL import Image
img = Image.open('brain.tif')
#convert the black-and-white image to RGB (scipy.ndimage.imread was removed from SciPy; use PIL instead)
img = np.array(Image.open("brain.tif").convert('RGB'))
#resize of the image
plt.figure(figsize = (15,8))
#display the colored image
plt.imshow(img)
#to save the image
plt.savefig('final.png')
# -
img
# +
from __future__ import division, print_function, absolute_import
__all__ = ['imread']
from numpy import array
# -
def imread(fname, flatten=False, mode=None):
try:
from PIL import Image
except ImportError:
raise ImportError("Could not import the Python Imaging Library (PIL)"
" required to load image files. Please refer to"
" http://pypi.python.org/pypi/PIL/ for installation"
" instructions.")
im = Image.open(fname)
if mode:
im = im.convert(mode)
if flatten:
im = im.convert('F')
result = array(im)
return plt.imshow(result)
imread('tumor.jpg')
import cv2
import numpy as np
from PIL import Image
img = Image.open("out4.png")
img.load()
img
# +
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('out4.png', -1)
#cv2.imshow('GoldenGate',img)
color = ('b','g','r')
for channel,col in enumerate(color):
histr = cv2.calcHist([img],[channel],None,[256],[0,256])
plt.plot(histr,color = col)
plt.xlim([0,256])
plt.title('Histogram for color scale picture')
plt.show()
while True:
    k = cv2.waitKey(0) & 0xFF
    if k == 27: break  # ESC key to exit
cv2.destroyAllWindows()
# +
import cv2
import numpy as np
from matplotlib import pyplot as plt
import operator
img = cv2.imread('out4.png', -1)
cv2.imshow('Imagem:',img)
color = ('b','g','r')
qtdBlue = 0
qtdGreen = 0
qtdRed = 0
totalPixels = 0
for channel,col in enumerate(color):
histr = cv2.calcHist([img],[channel],None,[256],[1,256])
plt.plot(histr,color = col)
plt.xlim([0,256])
totalPixels+=sum(histr)
print (histr)
if channel==0:
qtdBlue = sum(histr)
elif channel==1:
qtdGreen = sum(histr)
elif channel==2:
qtdRed = sum(histr)
qtdBlue = (qtdBlue/totalPixels)*100
qtdGreen = (qtdGreen/totalPixels)*100
qtdRed = (qtdRed/totalPixels)*100
#qtdBlue = filter(operator.isNumberType, qtdBlue)
#qtdGreen = filter(operator.isNumberType, qtdGreen)
#qtdRed = filter(operator.isNumberType, qtdRed)
plt.title("Red: "+str(qtdRed)+"%; Green: "+str(qtdGreen)+"%; Blue: "+str(qtdBlue)+"%")
plt.show()
# +
import numpy as np
import cv2
img = cv2.imread('color.png')
green = [60,179,113] # RGB
diff = 20
boundaries = [([green[2]-diff, green[1]-diff, green[0]-diff],
[green[2]+diff, green[1]+diff, green[0]+diff])]
# in order BGR as opencv represents images as numpy arrays in reverse order
for (lower, upper) in boundaries:
lower = np.array(lower, dtype=np.uint8)
upper = np.array(upper, dtype=np.uint8)
mask = cv2.inRange(img, lower, upper)
output = cv2.bitwise_and(img, img, mask=mask)
ratio_green = cv2.countNonZero(mask)/(img.size/3)
print('green pixel percentage:', np.round(ratio_green*100, 2))
cv2.imshow("images", np.hstack([img, output]))
cv2.waitKey(0)
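# The masking-and-ratio idea above can be sanity-checked without an image file. A hedged sketch using a tiny synthetic BGR array, with a pure-NumPy stand-in for cv2.inRange:

```python
import numpy as np

# Synthetic 2x2 BGR image: two "green" pixels, two black pixels.
img = np.array([[[113, 179, 60], [113, 179, 60]],
                [[0, 0, 0], [0, 0, 0]]], dtype=np.uint8)
lower = np.array([93, 159, 40], dtype=np.uint8)
upper = np.array([133, 199, 80], dtype=np.uint8)

# Equivalent of cv2.inRange: nonzero where all channels fall inside the bounds.
mask = np.all((img >= lower) & (img <= upper), axis=-1).astype(np.uint8) * 255
ratio_green = np.count_nonzero(mask) / (img.size / 3)
print(round(ratio_green * 100, 2))  # 50.0
```

# Half the pixels match the green range, so the ratio comes out at 50%, mirroring the cv2.countNonZero computation above.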
# +
from __future__ import division
import cv2
import numpy as np
def nothing(*arg):
pass
# -
# Initial HSV GUI slider values to load on program start.
#icol = (36, 202, 59, 71, 255, 255) # Green
#icol = (18, 0, 196, 36, 255, 255) # Yellow
#icol = (89, 0, 0, 125, 255, 255) # Blue
icol = (0, 100, 80, 10, 255, 255) # Red
cv2.namedWindow('colorTest')
# Lower range colour sliders.
cv2.createTrackbar('lowHue', 'colorTest', icol[0], 255, nothing)
cv2.createTrackbar('lowSat', 'colorTest', icol[1], 255, nothing)
cv2.createTrackbar('lowVal', 'colorTest', icol[2], 255, nothing)
# Higher range colour sliders.
cv2.createTrackbar('highHue', 'colorTest', icol[3], 255, nothing)
cv2.createTrackbar('highSat', 'colorTest', icol[4], 255, nothing)
cv2.createTrackbar('highVal', 'colorTest', icol[5], 255, nothing)
# +
frame = cv2.imread('outputHSV.png')
while True:
# Get HSV values from the GUI sliders.
lowHue = cv2.getTrackbarPos('lowHue', 'colorTest')
lowSat = cv2.getTrackbarPos('lowSat', 'colorTest')
lowVal = cv2.getTrackbarPos('lowVal', 'colorTest')
highHue = cv2.getTrackbarPos('highHue', 'colorTest')
highSat = cv2.getTrackbarPos('highSat', 'colorTest')
highVal = cv2.getTrackbarPos('highVal', 'colorTest')
# Show the original image.
cv2.imshow('frame', frame)
# Blur methods available, comment or uncomment to try different blur methods.
frameBGR = cv2.GaussianBlur(frame, (7, 7), 0)
#frameBGR = cv2.medianBlur(frameBGR, 7)
#frameBGR = cv2.bilateralFilter(frameBGR, 15 ,75, 75)
# Show blurred image.
cv2.imshow('blurred', frameBGR)
# HSV (Hue, Saturation, Value).
# Convert the frame to HSV colour model.
hsv = cv2.cvtColor(frameBGR, cv2.COLOR_BGR2HSV)
# HSV values to define a colour range.
colorLow = np.array([lowHue,lowSat,lowVal])
colorHigh = np.array([highHue,highSat,highVal])
mask = cv2.inRange(hsv, colorLow, colorHigh)
# Show the first mask
cv2.imshow('mask-plain', mask)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
# Show morphological transformation mask
cv2.imshow('mask', mask)
# Put mask over top of the original image.
result = cv2.bitwise_and(frame, frame, mask = mask)
# Show final output image
cv2.imshow('colorTest', result)
k = cv2.waitKey(5) & 0xFF
if k == 27:
break
cv2.destroyAllWindows()
# -
# Source notebook: color percentage and MASKING.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Implementing the gradient descent algorithm for deep learning
# ## Importing the packages
import warnings
#suppress warnings
# warnings.filterwarnings('ignore')
# %matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from sklearn.datasets import make_blobs
plt.style.use('ggplot')
# ## Creating the dataset
# +
X, y = make_blobs(n_samples=100, n_features=2, centers=2, random_state=0)
y = y.reshape(y.shape[0], 1)
print(f'Shape of X: {X.shape}')
print(f'Shape of y: {y.shape}')
# -
# ## Plotting the data
plt.figure(figsize=(16, 8))
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='summer')
# ## Creating the initialization function
def initialisation(X):
W = np.random.randn(X.shape[1], 1)
b = np.random.randn(1)
return (W, b)
# ## Testing the initialization function
W, b = initialisation(X)
W
b
# ## Creating the activation function
def sigmoid(Z):
A = 1 / (1 + np.exp(-Z))
return A
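For strongly negative inputs, `np.exp(-Z)` can overflow and emit runtime warnings. A numerically stable variant (a sketch, not part of the original notebook; the name `sigmoid_stable` is illustrative) splits the computation by the sign of `Z`:

```python
import numpy as np

def sigmoid_stable(Z):
    """Numerically stable sigmoid for an ndarray Z.

    Avoids overflow in np.exp for large |Z| by using the identity
    sigmoid(z) = exp(z) / (1 + exp(z)) on the negative branch.
    """
    out = np.empty_like(Z, dtype=float)
    pos = Z >= 0
    # For Z >= 0, exp(-Z) <= 1, so the usual form is safe.
    out[pos] = 1.0 / (1.0 + np.exp(-Z[pos]))
    # For Z < 0, exp(Z) <= 1, so this rewritten form is safe.
    expZ = np.exp(Z[~pos])
    out[~pos] = expZ / (1.0 + expZ)
    return out
```

Both branches compute the same mathematical function; only the arrangement changes to keep every intermediate value bounded.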
# ## Creating the model
def modele(X, W, b):
Z = X.dot(W) + b
# print(Z.max())
A = sigmoid(Z)
return A
# ## Testing the model
A = modele(X, W,b)
A.shape
# ## Defining the cost function
def fonction_cout(A, y):
epsilon = 1e-15
L = 1 / len(y) * np.sum(-y * np.log(A + epsilon) - (1 - y) * np.log(1 - A + epsilon))
return L
# ## Testing the cost function
Loss = fonction_cout(A, y)
Loss
# ## Computing the gradients
def gradients(X, A, y):
dW = 1 / len(y) * np.dot(X.T, A - y)
db = 1 / len(y) * np.sum(A - y)
return (dW, db)
# ## Testing the gradients
dW, db = gradients(X, A, y)
dW
db
# ## Update rule for the parameters W and b
def update(dW, db, W, b, learning_rate):
W = W - learning_rate * dW
b = b - learning_rate * db
return (W, b)
# ## Testing the update rule for W and b
# ### W, b BEFORE the update
W, b
# ### W, b AFTER the update
W, b = update(dW, db, W, b, 0.1)
W, b
from sklearn.metrics import accuracy_score
# ## The prediction function
def predict(X, W, b):
A = modele(X, W, b)
return A >= 0.5
# ## Assembling the pieces into the gradient descent algorithm
def algo_descente_gradient(X, y, learning_rate=0.1, epochs=100):
    # Initialize the parameters
    W, b = initialisation(X)
    Loss = []
    # Training loop: improve the parameters against the desired outputs
    for i in range(epochs):
        # forward pass through the model
        A = modele(X, W, b)
        # track the cost
        Loss.append(fonction_cout(A, y))
        # compute the gradients
        dW, db = gradients(X, A, y)
        # update the parameters
        W, b = update(dW, db, W, b, learning_rate)
    # report the accuracy
    y_pred = predict(X, W, b)
    score = accuracy_score(y, y_pred)
    print(f'Score: {score}')
    # plot the learning curve
    plt.figure(figsize=(6, 8))
    plt.plot(Loss)
    return (W, b)
W, b = algo_descente_gradient(X, y)
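To visualize what the trained parameters represent, the decision boundary `w1*x1 + w2*x2 + b = 0` can be drawn over the data. This is an illustrative sketch (the helper name, figure size, and colors are arbitrary choices, and it assumes `W[1] != 0`):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # assumption: non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

def plot_decision_boundary(X, y, W, b):
    """Plot the data together with the line w1*x1 + w2*x2 + b = 0."""
    x1 = np.linspace(X[:, 0].min(), X[:, 0].max(), 100)
    # Solve w1*x1 + w2*x2 + b = 0 for x2 (assumes W[1] != 0).
    x2 = (-W[0] * x1 - b) / W[1]
    plt.figure(figsize=(16, 8))
    plt.scatter(X[:, 0], X[:, 1], c=y.ravel(), cmap='summer')
    plt.plot(x1, x2, c='orange', lw=3)
    return x1, x2
```

With the `W, b` returned by `algo_descente_gradient`, calling `plot_decision_boundary(X, y, W, b)` overlays the learned separating line on the blobs.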
from utilities import *
X_train, y_train, X_test, y_test = load_data()
plt.figure(figsize=(16, 8))
for i in range(1, 5):
plt.subplot(4, 5, i)
plt.imshow(X_train[i], cmap='gray')
plt.title(y_train[i])
plt.tight_layout()
plt.show();
# +
# # !pip install -U --user tqdm
# -
from sklearn.metrics import log_loss
from tqdm import tqdm
def algo_descente_gradient_image(X_train, y_train, X_test, y_test, learning_rate=0.1, epochs=100):
    # Initialize the parameters
    W, b = initialisation(X_train)
    train_acc = []
    train_loss = []
    test_acc = []
    test_loss = []
    # Training loop: improve the parameters against the desired outputs
    for i in tqdm(range(epochs)):
        # forward pass through the model
        A_train = modele(X_train, W, b)
        A_test = modele(X_test, W, b)
        if i % 10 == 0:
            # train loss
            train_loss.append(fonction_cout(A_train, y_train))
            # train accuracy
            y_pred_train = predict(X_train, W, b)
            train_acc.append(accuracy_score(y_train, y_pred_train))
            # test loss
            test_loss.append(fonction_cout(A_test, y_test))
            # test accuracy
            y_pred_test = predict(X_test, W, b)
            test_acc.append(accuracy_score(y_test, y_pred_test))
        # compute the gradients
        dW, db = gradients(X_train, A_train, y_train)
        # update the parameters
        W, b = update(dW, db, W, b, learning_rate)
    # Plot the learning curves
    plt.figure(figsize=(12, 6))
    plt.subplot(1, 2, 1)
    plt.plot(train_loss, label="Train Loss")
    plt.plot(test_loss, label="Test Loss")
    plt.legend()
    plt.subplot(1, 2, 2)
    plt.plot(train_acc, label="Train accuracy")
    plt.plot(test_acc, label="Test accuracy")
    plt.legend()
    return (W, b)
X_train_flatten = X_train.reshape(X_train.shape[0], X_train.shape[1] * X_train.shape[2])
X_test_flatten = X_test.reshape(X_test.shape[0], X_test.shape[1] * X_test.shape[2])
X_train_flatten = X_train_flatten / X_train_flatten.max()
X_test_flatten = X_test_flatten / X_test_flatten.max()
algo_descente_gradient_image(X_train_flatten, y_train, X_test_flatten, y_test, 0.01, 10000)
W
b
# Source notebook: python/algorith_gradient_descent.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Using Clipped PPO for Automated Model Compression
#
#
# Building on the work of [AMC](https://arxiv.org/abs/1802.03494), we replace DDPG with Clipped PPO.
#
# The results are interesting and encouraging: the agent clearly learns. However, Clipped PPO is less sample-efficient than DDPG, and therefore the search takes longer.
#
# We search for a 50%-MACs-constrained (FLOPs-constrained) Plain20. From the Greedy Search algorithm we know that a 50%-MACs-constrained Plain20 exists that achieves Top1=90%. The current fine-tuned Plain20 model from our PPO experiments achieves Top1=89%.
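For readers unfamiliar with AMC-style rewards, the sketch below shows what a MACs-constrained reward might look like. The function name, penalty value, and shaping are illustrative assumptions, not Distiller's actual reward code:

```python
# A minimal sketch of a MACs-constrained reward, in the spirit of the AMC paper.
# Names and the exact shaping are assumptions, not Distiller's implementation.
def macs_constrained_reward(top1, normalized_macs, target_macs=50.0):
    """Return the episode reward for a compressed network.

    top1            -- validation accuracy of the compressed model, in percent
    normalized_macs -- remaining MACs as a percentage of the dense model
    target_macs     -- the compute budget (here: 50% of the original MACs)
    """
    if normalized_macs > target_macs:
        # Budget violated: heavily penalize so the agent learns the constraint.
        return -1000.0
    return top1  # within budget, the reward is simply the accuracy
```

The large negative penalty makes constraint violations strictly worse than any feasible compression, which is one simple way to encode a hard budget in a scalar reward.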
# ## Experiment setup
#
#
# ### Clipped PPO configuration
#
# ### Distiller Clipped PPO AMC experiments
#
# ## Notebook code
#
# Skip this part - it is necessary only for creating the diagrams. You may also toggle the code-view button.
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Click here to toggle on/off code view"></form>''')
# +
# %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import csv
from matplotlib.ticker import FuncFormatter
import ipywidgets as widgets
from ipywidgets import interactive, interact, Layout
import matplotlib.pylab as pylab
import matplotlib.animation as animation
from matplotlib import animation, rc
#plt.style.use('seaborn') # pretty matplotlib plots
params = {'legend.fontsize': 'x-large',
'figure.figsize': (15, 7),
'axes.labelsize': 'x-large',
'axes.titlesize':'xx-large',
'xtick.labelsize':'x-large',
'ytick.labelsize':'x-large'}
pylab.rcParams.update(params)
def to_percent(y, position):
# Ignore the passed in position. This has the effect of scaling the default
# tick locations.
if y < 1:
y = 100 * y
s = "{:.1f}".format(y)
# The percent symbol needs escaping in latex
if matplotlib.rcParams['text.usetex'] is True:
return s + r'$\%$'
else:
return s + '%'
# Widen the cells to get entire rows in the screen.
from IPython.core.display import display, HTML
#display(HTML("<style>.container { width:100% !important; }</style>"))
def plot_layer_densities(df, idx, action_type='action_history', ax=None, color=None):
if ax is None:
plt.figure()
ax = plt
record = df.iloc[idx]
layer_sparsities = record[action_type]
layer_sparsities = layer_sparsities[1:-1].split(",")
layer_densities = [1.- float(sparsity) for sparsity in layer_sparsities]
ax.bar(range(len(layer_densities)), layer_densities, color=color)
ax.set_title("Ep:{} - Top1:{:.1f}%\nMACs:{:.1f}%".format(record['episode'],
record['top1'],
record['normalized_macs']))
def smooth(data, win_size):
win_size = max(1, win_size)
return [np.mean(data[max(0, i-win_size):i]) for i in range(len(data))]
def plot_performance(alpha, window_size, top1, macs, params, reward, start=0, end=-1):
plot_kwargs = {"figsize":(15,7), "lw": 1, "alpha": alpha, "title": "Performance Data"}
smooth_kwargs = {"lw": 2 if window_size > 0 else 1, "legend": True}
if macs:
ax = df['normalized_macs'][start:end].plot(**plot_kwargs, color="r")
ax.set(xlabel="Episode", ylabel="(%)")
#ax.set_ylim([0,100])
df['smooth_normalized_macs'] = smooth(df['normalized_macs'], window_size)
df['smooth_normalized_macs'][start:end].plot(**smooth_kwargs, color="r")
if top1:
ax = df['top1'][start:end].plot(**plot_kwargs, color="b", grid=True)
ax.set(xlabel="Episode", ylabel="(%)")
df['smooth_top1'] = smooth(df['top1'], window_size)
df['smooth_top1'][start:end].plot(**smooth_kwargs, color="b")
if params:
ax = df['normalized_nnz'][start:end].plot(**plot_kwargs, color="black")
ax.set(xlabel="Episode", ylabel="(%)")
df['smooth_normalized_nnz'] = smooth(df['normalized_nnz'], window_size)
df['smooth_normalized_nnz'][start:end].plot(**smooth_kwargs, color="black")
if reward:
ax = df['reward'][start:end].plot(**plot_kwargs, secondary_y=True, color="g")
ax.set(xlabel="Episode", ylabel="reward")
df['smooth_reward'] = smooth(df['reward'], window_size)
df['smooth_reward'][start:end].plot(**smooth_kwargs, secondary_y=True, color="g")
#ax.set_ylim([0,100])
ax.grid(True, which='minor', axis='x', alpha=0.3)
def plot_2d_embeddings(top1, normalized_macs):
plt.figure(figsize=(15,7))
plt.title('Projection of Discovered Networks ({})'.format(len(top1)))
plt.xlabel('Normalized MACs')
plt.ylabel('Top1 Accuracy')
# Create the formatter using the function to_percent. This multiplies all the
# default labels by 100, making them all percentages
formatter = FuncFormatter(to_percent)
# Set the formatter
plt.gca().yaxis.set_major_formatter(formatter)
plt.gca().xaxis.set_major_formatter(formatter)
# Use color gradients to show the "age" of the network:
# Lighter networks were discovered earlier than darker ones.
color_grad = [str(1-i/len(top1)) for i in range(len(top1))]
plt.scatter(normalized_macs, top1, color=color_grad, s=80, edgecolors='gray');
INTERVAL = 100 # Animation speed
WINDOW = 20
font = {'family': 'serif',
'color': 'darkred',
'weight': 'normal',
'alpha': 0.50,
'size': 32,
}
# Based on these two helpful example code:
# https://stackoverflow.com/questions/9401658/how-to-animate-a-scatter-plot
# http://louistiao.me/posts/notebooks/embedding-matplotlib-animations-in-jupyter-notebooks/.
# Specifically, the use of IPython.display is missing from the first example, but most of the animation code
# leverages code from there.
class AnimatedScatter(object):
"""An animated scatter plot using matplotlib.animations.FuncAnimation."""
def __init__(self, xdata, ydata):
assert len(xdata) == len(ydata)
self.numpoints = len(xdata)
self.xdata = xdata
self.ydata = ydata
self.stream = self.data_stream()
# Setup the figure and axes...
self.fig, self.ax = plt.subplots(figsize=(15,7))
# Then setup FuncAnimation.
self.ani = animation.FuncAnimation(self.fig, self.update, interval=INTERVAL,
frames=self.numpoints-2,
init_func=self.setup_plot, blit=True)
def setup_plot(self):
"""Initialize drawing of the scatter plot."""
x, y, s, c = next(self.stream)
#self.annot = self.ax.annotate("txt", (10, 10))
self.scat = self.ax.scatter(x, y, c=c, s=s, animated=False)
self.scat.set_edgecolors('gray')
self.scat.set_cmap('gray')
self.width = max(self.xdata) - min(self.xdata) + 4
self.height = max(self.ydata) - min(self.ydata) + 4
self.ax.axis([min(self.xdata)-2, max(self.xdata)+2,
min(self.ydata)-2, max(self.ydata)+2])
self.annot = self.ax.text(min(self.xdata) + self.width/2,
min(self.xdata) + self.height/2,
"", fontdict=font)
# For FuncAnimation's sake, we need to return the artist we'll be using
# Note that it expects a sequence of artists, thus the trailing comma.
return self.scat,
def data_stream(self):
numpoints = 0#len(self.xdata)
colors = []
xxx = 0
while True:
numpoints += 1
win_len = min(WINDOW, numpoints)
data = np.ndarray((4, win_len))
start = max(0,numpoints-WINDOW-1)
data[0, :] = self.xdata[start:start+win_len]
data[1, :] = self.ydata[start:start+win_len]
data[2, :] = [70] * win_len # point size
#data[3, :] = [np.random.random() for p in range(numpoints)] # color
# The color of the points is a gradient with larger values for "younger" points.
# At each new frame we show one more point, and "age" each existing point by incrementaly
# reducing its color gradient.
data[3, :] = [(1-i/(win_len+1)) for i in range(win_len)]
yield data
def update(self, i):
"""Update the scatter plot."""
data = next(self.stream)
self.annot.set_text(i)
i = i % len(data)
# Set x and y data
xy = [(data[0,i], data[1,i]) for i in range(len(data[0,:]))]
self.scat.set_offsets(xy)
# Set colors
self.scat.set_array(data[3])
# We need to return the updated artist for FuncAnimation to draw..
# Note that it expects a sequence of artists, thus the trailing comma.
return self.scat, self.annot
def show(self):
plt.show()
# -
# ## Results
#
# Below I present the results of a single execution. There is a substantial variance between the experiment executions, but most conclude similarly to this experiment.
# ### Read the results log files
#
# The code below reads the log file of your selected experiment. To change the path to the file you will need to open the code cell and change its content.
# +
pd.set_option('display.max_colwidth', 150)
fname = "sample_logs/clipped_ppo/macs_constrained_clipped-ppo.amc.csv"
#fname = "sample_logs/clipped_ppo/accuracy-guaranteed_clipped-ppo.amc.csv"
df = pd.read_csv(fname)
# -
# ### Plot experiment performance
# +
plt.figure(figsize=(15,7))
#print(plt.style.available)
#plt.style.use('bmh')
@interact(window_size=(0,50,5), top1=True, macs=True, params=False, reward=True)
def plot_performance_proxy(window_size=10, top1=True, macs=True, params=False, reward=True):
plot_performance(0.15, window_size, top1, macs, params, reward)
# -
plot_performance(0.15, window_size=10, top1=True, macs=True, params=False, reward=True, start=0, end=600)
# What do we see?
# - If we zoom in on the first 600 episodes, we see how the reward starts rising as the agent learns to retain as many compute resources (MACs) as possible. Does this occur with other models?
#
# ### Sample some networks
#
# Let's look at the networks with the best top1 accuracy, and see if they share geometrical attributes.
#
# We sort the discovered networks by their Top1 accuracy and display the density of each layer in the networks.
# +
top1_sorted_df = df.sort_values(by=['top1'], ascending=False)
nrows = 2; ncols = 4
f, axarr = plt.subplots(nrows, ncols, figsize=(15,7))
for i in range(0, nrows * ncols):
plot_layer_densities(top1_sorted_df, i, ax=axarr[i//ncols, i%ncols], color='g')
# Fine-tune figure; make subplots farther from each other.
f.subplots_adjust(hspace=0.6, wspace=0.4)
#pd.set_option('display.max_colwidth', -1)
# -
# ### Network 2D embeddings
#
# Let's create an embedding of the networks AMC discovers over the course of each experiment session. Each network is projected onto a 2D plane mapping the Top1 accuracy versus the compute budget, and is represented by a small circle. I used gradient-color-coding to show the relative phase where each network is discovered. Lighter circles are networks discovered early in the search, darker networks are discovered later.
top1 = df['top1']
normalized_macs = df['normalized_macs']
plot_2d_embeddings(top1, normalized_macs)
# ### Video animation
a = AnimatedScatter(normalized_macs, top1)
plt.title('Projection of Discovered Networks ({})'.format(len(top1)))
plt.xlabel('Normalized MACs')
plt.ylabel('Top1 Accuracy')
#a.ani.save('amc_vgg16.mp4', fps=10, dpi=80) #Frame per second controls speed, dpi controls the quality
rc('animation', html='html5')
a.ani
# Source notebook: examples/automated_deep_compression/ppo-amc-results.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
# -
# # Polar Forces on the Circle
# +
fig,ax = plt.subplots()
x = np.linspace(0,360,200)
y = -1.*np.cos(x/180.*np.pi)
plt.plot(x,y, label='polar')
ax = plt.gca()
ax.axhline(y=0, c='grey', ls='--', lw='.5')
ax.set_xlabel(u'\u0394\u03b8')
ax.set_ylabel('Energy')
x = np.linspace(0,360,200)
y = -2.*np.cos(x/180.*np.pi)**2+1
plt.plot(x,y, label='non-polar')
ax.xaxis.set_major_locator(ticker.MultipleLocator(45))
plt.legend(loc='best')
# +
fig, ax = plt.subplots()
x = np.linspace(0,360,200)
y = -np.sin(x/180.*np.pi)  # torque = -dE/dtheta for the polar energy E = -cos(theta)
plt.plot(x,y, label = 'polar')
x = np.linspace(0,360,200)
y = -4.*np.sin(x/180.*np.pi)*np.cos(x/180.*np.pi)
plt.plot(x,y, label = 'non-polar')
ax.axhline(y=0, c='grey', ls='--', lw='.5')
ax.set_xlabel(u'\u0394\u03b8')
ax.set_ylabel('Torque')
ax.xaxis.set_major_locator(ticker.MultipleLocator(45))
# -
# Source notebook: sim/dev/sinGordon/Untitled Folder/Untitled.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **[Pandas Home Page](https://www.kaggle.com/learn/pandas)**
#
# ---
#
# # Introduction
#
# Run the following cell to load your data and some utility functions.
# +
import pandas as pd
reviews = pd.read_csv("../input/wine-reviews/winemag-data-130k-v2.csv", index_col=0)
from learntools.core import binder; binder.bind(globals())
from learntools.pandas.renaming_and_combining import *
print("Setup complete.")
# -
# # Exercises
#
# View the first several lines of your data by running the cell below:
reviews.head()
# ## 1.
# `region_1` and `region_2` are pretty uninformative names for locale columns in the dataset. Create a copy of `reviews` with these columns renamed to `region` and `locale`, respectively.
# +
# Your code here
renamed = reviews.rename(columns={'region_1': 'region', 'region_2': 'locale'})
# Check your answer
q1.check()
# +
#q1.hint()
#q1.solution()
# -
# ## 2.
# Set the index name in the dataset to `wines`.
# +
reindexed = reviews.rename_axis("wines", axis='rows')
# Check your answer
q2.check()
# +
#q2.hint()
#q2.solution()
# -
# ## 3.
# The [Things on Reddit](https://www.kaggle.com/residentmario/things-on-reddit/data) dataset includes product links from a selection of top-ranked forums ("subreddits") on reddit.com. Run the cell below to load a dataframe of products mentioned on the */r/gaming* subreddit and another dataframe for products mentioned on the */r/movies* subreddit.
gaming_products = pd.read_csv("../input/things-on-reddit/top-things/top-things/reddits/g/gaming.csv")
gaming_products['subreddit'] = "r/gaming"
movie_products = pd.read_csv("../input/things-on-reddit/top-things/top-things/reddits/m/movies.csv")
movie_products['subreddit'] = "r/movies"
# Create a `DataFrame` of products mentioned on *either* subreddit.
# +
combined_products = pd.concat([gaming_products, movie_products])
# Check your answer
q3.check()
# +
#q3.hint()
#q3.solution()
# -
# ## 4.
# The [Powerlifting Database](https://www.kaggle.com/open-powerlifting/powerlifting-database) dataset on Kaggle includes one CSV table for powerlifting meets and a separate one for powerlifting competitors. Run the cell below to load these datasets into dataframes:
powerlifting_meets = pd.read_csv("../input/powerlifting-database/meets.csv")
powerlifting_competitors = pd.read_csv("../input/powerlifting-database/openpowerlifting.csv")
# Both tables include references to a `MeetID`, a unique key for each meet (competition) included in the database. Using this, generate a dataset combining the two tables into one.
# +
powerlifting_combined = powerlifting_meets.set_index("MeetID").join(powerlifting_competitors.set_index("MeetID"))
# Check your answer
q4.check()
# +
#q4.hint()
#q4.solution()
# -
# # Congratulations!
#
# You've finished the Pandas micro-course. Many data scientists feel efficiency with Pandas is the most useful and practical skill they have, because it allows you to progress quickly in any project you have.
#
# If you'd like to apply your new skills to examining geospatial data, you're encouraged to check out our **[Geospatial Analysis](https://www.kaggle.com/learn/geospatial-analysis)** micro-course.
#
# You can also take advantage of your Pandas skills by entering a **[Kaggle Competition](https://www.kaggle.com/competitions)** or by answering a question you find interesting using **[Kaggle Datasets](https://www.kaggle.com/datasets)**.
# ---
# **[Pandas Home Page](https://www.kaggle.com/learn/pandas)**
#
#
#
#
#
# *Have questions or comments? Visit the [Learn Discussion forum](https://www.kaggle.com/learn-forum/161299) to chat with other Learners.*
# Source notebook: 04 Pandas Certificate/exercise-renaming-and-combining.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Detecting horizontal gene transfer (HGT)
#
# This sheet shows how we flag genes as potential HGT events.
#
import pandas as pd
from matplotlib import pyplot as plt
# %matplotlib inline
# ## Number of taxa inferred to have been lost for each gene by each algorithm
lossTaxa = pd.read_csv("lossTaxa_HUMAN.csv",index_col=0)
lossTaxa.head()
# ## Get algorithms that have been removed as false positives
# +
# Map genes to trimmed algorithms
outlierD = {}
with open("lossStats_HUMAN.csv") as f:
f.readline() # skip header
for line in f:
line = line.strip().split(",")
if line[3] == '':
continue
outlierD[line[0]] = line[3].split()
outlierD[outlierD.keys()[0]]
# -
# ### Calculate average number of lossTaxa for each gene and add this as a new column to lossTaxa
# +
dbsTrimmed = 0
avgs = pd.Series()
for index,row in lossTaxa.iterrows():
if index in outlierD:
dbsTrimmed += len(outlierD[index])
dbs = [i for i in lossTaxa.columns if i not in outlierD[index]]
else:
dbs = lossTaxa.columns
avgs[index] = row[dbs].mean() # only include algorithms that have not been trimmed
dbsTrimmed
# -
lossTaxa["Avg"] = avgs
lossTaxa.head()
# ## The average has a fat-tailed distribution
#
# We will flag all the genes in the 95th percentile
# +
def floatRange(start,stop,step):
i = start
while i <= stop:
yield i
i += step
quantile_steps = [i for i in floatRange(0,1,.05)]  # note: float accumulation stops at ~0.95, not 1.0
quantiles = lossTaxa["Avg"].quantile(quantile_steps)
quantiles
# -
quantiles.iloc[-1]
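Rather than hard-coding the 24.92 cutoff used below, the threshold can be derived directly from the series with `quantile(0.95)`. A minimal sketch, with `avg_loss` standing in for the `lossTaxa["Avg"]` column:

```python
import pandas as pd

# Sketch: derive the HGT cutoff directly from the data instead of hard-coding it.
# `avg_loss` is a toy stand-in for lossTaxa["Avg"].
avg_loss = pd.Series([1.0, 2.0, 3.0, 4.0, 100.0])
threshold = avg_loss.quantile(0.95)   # 95th percentile of the averages
hgt_flag = avg_loss >= threshold      # flag genes at or above the cutoff
```

This keeps the flagging step consistent if the input data changes.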
# +
lossTaxa["Avg"].hist(bins=50,color='grey')
bline = plt.axvline(24.92,color='black',label="95th percentile")
plt.legend()
#plt.savefig("AvgLossTaxa_distribution.svg")
# -
# ## Flag genes
lossTaxa["HGT_flag"] = lossTaxa["Avg"] >= 24.92
lossTaxa.head()
lossTaxa.to_csv("HGTFlag_HUMAN.csv")
# Source notebook: Notebooks/HGT_flagging.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: up42-py
# language: python
# name: up42-py
# ---
# # 30 seconds example
#
# A new workflow is created and filled with tasks (Sentinel-2 data, image sharpening).
# The area of interest and workflow parameters are defined. After running the job,
# the results are downloaded and visualized.
# +
import up42
up42.authenticate("config.json")
project = up42.initialize_project()
project
# -
# Add blocks/tasks to the workflow.
workflow = project.create_workflow(name="30-seconds-workflow",
use_existing=True)
blocks = up42.get_blocks(basic=True)
input_tasks= [blocks['sobloo-s2-l1c-aoiclipped'],
blocks['sharpening']]
workflow.add_workflow_tasks(input_tasks=input_tasks)
# Define the aoi and input parameters of the workflow to run it.
aoi = workflow.read_vector_file("data/aoi_berlin.geojson", as_dataframe=True)
input_parameters = workflow.construct_parameters(geometry=aoi,
geometry_operation="bbox",
start_date="2018-01-01",
end_date="2020-12-31",
limit=1)
input_parameters["sobloo-s2-l1c-aoiclipped:1"].update({"max_cloud_cover":60})
input_parameters
job = workflow.create_and_run_job(input_parameters=input_parameters)
job.track_status()
job.download_results()
job.plot_results()
# Source notebook: examples/30-seconds-example.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: U4-S1-NLP (Python3)
# language: python
# name: u4-s1-nlp
# ---
# ## LRUCache
# +
"""
Each ListNode holds a reference to its previous node
as well as its next node in the List.
"""
class ListNode:
def __init__(self, value, prev=None, next=None):
self.value = value
self.prev = prev
self.next = next
def delete(self):
'''Updating our previous and next pointers'''
if self.prev:
self.prev.next = self.next
if self.next:
self.next.prev = self.prev
class DoublyLinkedList:
"""
Our doubly-linked list class. It holds references to
the list's head and tail nodes.
"""
def __init__(self, node=None):
self.head = node
self.tail = node
self.length = 1 if node is not None else 0
def __len__(self):
return self.length
def add_to_head(self, value):
"""
Wraps the given value in a ListNode and inserts it
as the new head of the list. Don't forget to handle
the old head node's previous pointer accordingly.
"""
# create a new node
new_node = ListNode(value, None, None)
self.length +=1
# 1. add to empty
if self.head is None:
self.head = new_node
self.tail = new_node
# 2. add to nonempty
else:
new_node.next = self.head
self.head.prev = new_node
self.head = new_node
    def remove_from_head(self):
        """
        Removes the List's current head node, making the
        current head's next node the new head of the List.
        Returns the value of the removed Node.
        """
        if self.head is None:
            return None
        value = self.head.value
        if self.head is self.tail:
            self.head = None
            self.tail = None
            self.length = 0
            return value
        self.head = self.head.next
        self.head.prev = None
        self.length -= 1
        return value
def add_to_tail(self, value):
"""
Wraps the given value in a ListNode and inserts it
as the new tail of the list. Don't forget to handle
the old tail node's next pointer accordingly.
"""
new_node = ListNode(value)
self.length += 1
if not self.tail and not self.head:
self.head = new_node
self.tail = new_node
else:
new_node.prev = self.tail
self.tail.next = new_node
self.tail = new_node
    def remove_from_tail(self):
        """
        Removes the List's current tail node, making the
        current tail's previous node the new tail of the List.
        Returns the value of the removed Node.
        """
        if self.tail is None:
            return None
        value = self.tail.value
        if self.tail is self.head:
            self.head = None
            self.tail = None
            self.length = 0
            return value
        self.tail = self.tail.prev
        self.tail.next = None
        self.length -= 1
        return value
    def move_to_front(self, node):
        """
        Removes the input node from its current spot in the
        List and inserts it as the new head node of the List.
        """
        if node is self.head:
            return
        self.delete(node)
        # re-attach the same node object (add_to_head would wrap it in a new ListNode)
        node.prev = None
        node.next = self.head
        if self.head:
            self.head.prev = node
        else:
            self.tail = node
        self.head = node
        self.length += 1
    def move_to_end(self, node):
        """
        Removes the input node from its current spot in the
        List and inserts it as the new tail node of the List.
        """
        if node is self.tail:
            return
        self.delete(node)
        # re-attach the same node object (add_to_tail would wrap it in a new ListNode)
        node.next = None
        node.prev = self.tail
        if self.tail:
            self.tail.next = node
        else:
            self.head = node
        self.tail = node
        self.length += 1
    def delete(self, node):
        """
        Deletes the input node from the List, preserving the
        order of the other elements of the List. Handles cases where
        the node was the head or the tail as well.
        """
        # compare by identity, not by value: several nodes may share a value
        if self.head is node:
            if self.head.next:
                self.head = self.head.next
                self.head.prev = None
            else:
                self.head = None
                self.tail = None
        elif self.tail is node:
            self.tail = self.tail.prev
            self.tail.next = None
        else:
            node.delete()
        self.length -= 1
"""
Finds and returns the maximum value of all the nodes
in the List.
"""
def get_max(self):
# check if dll empty
if self.head is None:
return None
# keep track of current node, max
# keep track of max
cur_node = self.head
max_value = self.head.value
# loop through dll
while cur_node: # while cur_node is not None:
# comparing with cur_max
if cur_node.value > max_value:
max_value = cur_node.value
cur_node = cur_node.next
return max_value
def traverse_list(self):
'''Print out values stored at the list'''
if self.head is None:
print("List has no element")
return
else:
h = self.head
while h is not None:
print(h.value , " ")
h = h.next
# l = DoublyLinkedList()
# l.traverse_list()
# l.add_to_head(1)
# l.add_to_head(2)
# l.add_to_tail(3)
# print('Values:')
# l.traverse_list()
# l.remove_from_head()
# print('Remove head')
# l.traverse_list()
# l.remove_from_tail()
# print('remove tail')
# l.traverse_list()
# l.add_to_head(2)
# l.add_to_head(1)
# l.add_to_tail(3)
# print('actual values')
# l.traverse_list()
# l.move_to_front(3)
# print('move to front')
# l.traverse_list()
# l.move_to_end(3)
# print('back to tail')
# l.traverse_list()
# # print('max_number')
# # l.get_max()
# # l.traverse_list()
# print(f'Length: {l.length}')
# -
l = DoublyLinkedList()
l.traverse_list()
l.add_to_head(1)
l.add_to_head(2)
l.add_to_tail(3)
print('Values:')
l.traverse_list()
# +
class LRUCache:
"""
Our LRUCache class keeps track of the max number of nodes it
can hold, the current number of nodes it is holding, a doubly-
linked list that holds the key-value entries in the correct
order, as well as a storage dict that provides fast access
to every node stored in the cache.
"""
def __init__(self, limit=10):
self.order = DoublyLinkedList()
self.storage = dict()
self.limit = limit
"""
Retrieves the value associated with the given key. Also
needs to move the key-value pair to the end of the order
such that the pair is considered most-recently used.
Returns the value associated with the key or None if the
key-value pair doesn't exist in the cache.
"""
def get(self, key):
"""
Retrieves the value associated with the given key. Also
needs to move the key-value pair to the end of the order
such that the pair is considered most-recently used.
Returns the value associated with the key or None if the
key-value pair doesn't exist in the cache.
"""
if key not in self.storage:
return None
node = self.storage[key]
self.order.move_to_end(node)
return node.value[1]
"""
Adds the given key-value pair to the cache. The newly-
added pair should be considered the most-recently used
entry in the cache. If the cache is already at max capacity
before this entry is added, then the oldest entry in the
cache needs to be removed to make room. Additionally, in the
case that the key already exists in the cache, we simply
want to overwrite the old value associated with the key with
the newly-specified value.
"""
def set(self, key, value):
"""
Adds the given key-value pair to the cache. The newly-
added pair should be considered the most-recently used
entry in the cache. If the cache is already at max capacity
before this entry is added, then the oldest entry in the
cache needs to be removed to make room. Additionally, in the
case that the key already exists in the cache, we simply
want to overwrite the old value associated with the key with
        the newly-specified value.
"""
if key in self.storage:
node = self.storage[key]
node.value = (key, value)
self.order.move_to_end(node)
return
node = ListNode((key, value))
self.storage[key] = node
self.order.move_to_end(node)
self.order.length += 1
if self.order.length > self.limit:
new_value = self.order.remove_from_head()
self.storage.pop(new_value[0], None)
self.order.length -= 1
# -
c = LRUCache()
c.get(4)
c.set(5, 6)
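# The get/set semantics exercised above can be sketched with a self-contained
# equivalent built on collections.OrderedDict — an illustrative stand-in for
# the DoublyLinkedList-backed class, not the class itself:

```python
from collections import OrderedDict

class SimpleLRU:
    """OrderedDict-based cache with the same get/set semantics as LRUCache."""
    def __init__(self, limit=10):
        self.limit = limit
        self.storage = OrderedDict()

    def get(self, key):
        if key not in self.storage:
            return None
        self.storage.move_to_end(key)  # mark as most-recently used
        return self.storage[key]

    def set(self, key, value):
        if key in self.storage:
            self.storage.move_to_end(key)
        self.storage[key] = value
        if len(self.storage) > self.limit:
            self.storage.popitem(last=False)  # evict the least-recently used

demo = SimpleLRU(limit=2)
demo.set('a', 1)
demo.set('b', 2)
demo.get('a')      # touch 'a' so 'b' becomes least-recently used
demo.set('c', 3)   # evicts 'b'
print(demo.get('b'), demo.get('a'), demo.get('c'))  # → None 1 3
```

# OrderedDict's move_to_end / popitem(last=False) play the same roles as
# move_to_end / remove_from_head on the doubly-linked list above.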
# ## AVL_tree
# +
import random, math
outputdebug = False
def debug(msg):
if outputdebug:
print(msg)
"""
Node class to keep track of
the data internal to individual nodes
"""
class Node:
def __init__(self, key):
self.key = key
self.left = None
self.right = None
"""
A tree class to keep track of things like the
balance factor and the rebalancing logic
"""
class AVLTree:
def __init__(self, node=None):
self.node = node
# init height to -1 because of 0-indexing
self.height = -1
self.balance = 0
"""
Display the whole tree. Uses recursive def.
"""
def display(self, level=0, pref=''):
        self.update_height()   # refresh heights and balances before printing
        self.update_balance()
if self.node != None:
print ('-' * level * 2, pref, self.node.key,
f'[{self.height}:{self.balance}]',
'L' if self.height == 0 else ' ')
if self.node.left != None:
self.node.left.display(level + 1, '<')
if self.node.right != None:
self.node.right.display(level + 1, '>')
"""
Computes the maximum number of levels there are
in the tree
"""
def update_height(self):
if not self.node == None:
if self.node.left != None:
self.node.left.update_height()
if self.node.right != None:
self.node.right.update_height()
self.height = max(self.node.left.height,
self.node.right.height) + 1
else:
self.height = -1
"""
Updates the balance factor on the AVLTree class
"""
def update_balance(self, data=True):
if not self.node == None:
if data:
if self.node.left != None:
self.node.left.update_balance()
if self.node.right != None:
self.node.right.update_balance()
self.balance = self.node.left.height - self.node.right.height
else:
self.balance = 0
"""
Perform a left rotation, making the right child of this
node the parent and making the old parent the left child
of the new parent.
"""
def left_rotate(self):
S = self.node
R = self.node.right.node
L = R.left.node
self.node = R
R.left.node = S
S.right.node = L
"""
Perform a right rotation, making the left child of this
node the parent and making the old parent the right child
of the new parent.
"""
def right_rotate(self):
S = self.node
L = self.node.left.node
R = L.right.node
self.node = L
L.right.node = S
S.left.node = R
"""
Sets in motion the rebalancing logic to ensure the
tree is balanced such that the balance factor is
1 or -1
"""
    def rebalance(self):
        # recompute heights and balance factors before rotating
        # (update_height takes no recursion flag, so call it without arguments)
        self.update_height()
        self.update_balance()
while self.balance < -1 or self.balance > 1:
if self.balance > 1:
if self.node.left.balance < 0:
self.node.left.left_rotate()
self.update_height()
self.update_balance()
self.right_rotate()
self.update_height()
self.update_balance()
if self.balance < -1:
if self.node.right.balance > 0:
self.node.right.right_rotate()
self.update_height()
self.update_balance()
self.left_rotate()
self.update_height()
self.update_balance()
"""
Uses the same insertion logic as a binary search tree
after the value is inserted, we need to check to see
if we need to rebalance
"""
def insert(self, key):
tree = self.node
newnode = Node(key)
if tree == None:
self.node = newnode
self.node.left = AVLTree()
self.node.right = AVLTree()
debug("Inserted key [" + str(key) + "]")
elif key < tree.key:
self.node.left.insert(key)
elif key > tree.key:
self.node.right.insert(key)
else:
debug("Key [" + str(key) + "] already in tree.")
self.rebalance()
# -
# ## Heap
class Heap:
# defaults to a max heap if no comparator is specified
def __init__(self, comparator=lambda x, y: x > y):
self.storage = []
self.comparator = comparator
def insert(self, value):
pass
def delete(self):
pass
def get_priority(self):
pass
def get_size(self):
pass
def _bubble_up(self, index):
pass
def _sift_down(self, index):
pass
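# The stubbed interface above can be filled in as follows — a self-contained
# sketch (named ArrayHeap here so it does not clash with the skeleton), one
# possible solution rather than the intended one:

```python
class ArrayHeap:
    """Array-backed binary heap implementing the interface stubbed above.

    Defaults to a max heap; pass comparator=lambda x, y: x < y for a min heap.
    """
    def __init__(self, comparator=lambda x, y: x > y):
        self.storage = []
        self.comparator = comparator

    def insert(self, value):
        self.storage.append(value)
        self._bubble_up(len(self.storage) - 1)

    def delete(self):
        # swap the root with the last element, pop it, then restore the heap
        top = self.storage[0]
        last = self.storage.pop()
        if self.storage:
            self.storage[0] = last
            self._sift_down(0)
        return top

    def get_priority(self):
        return self.storage[0]

    def get_size(self):
        return len(self.storage)

    def _bubble_up(self, index):
        while index > 0:
            parent = (index - 1) // 2
            if self.comparator(self.storage[index], self.storage[parent]):
                self.storage[index], self.storage[parent] = \
                    self.storage[parent], self.storage[index]
                index = parent
            else:
                break

    def _sift_down(self, index):
        size = len(self.storage)
        while True:
            best = index
            for child in (2 * index + 1, 2 * index + 2):
                if child < size and self.comparator(self.storage[child],
                                                    self.storage[best]):
                    best = child
            if best == index:
                break
            self.storage[index], self.storage[best] = \
                self.storage[best], self.storage[index]
            index = best

h = ArrayHeap()
for v in [3, 9, 1, 7]:
    h.insert(v)
print([h.delete() for _ in range(h.get_size())])  # → [9, 7, 3, 1]
```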
|
notebooks/LRUCache.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# metadata:
# interpreter:
# hash: a1812049e47c7b7897ad3b92f649b1a7880ee880f611304d9272717459842b47
# name: python3
# ---
import torch
# ### Check CUDA availability
torch.cuda.is_available()
# ### Tensor Basics
x1 = torch.empty(3,2)
print(x1)
x2 = torch.rand(2,2)
print(x2)
x3 = torch.zeros(3,4)
print(x3)
x4 = torch.ones(2,3)
print(x4)
# look at data type
print(x4.dtype)
x5 = torch.ones(2,3, dtype=torch.int)
print(x5.dtype)
print(x5.size())
# tensor from python list
dl1 = [0.2, 0.3, 0.4]
x6 = torch.tensor(dl1, dtype=torch.float16)
print(x6)
print(x6.size())
print(x6.shape)
# ### Tensor Operations
# +
# element-wise addition
a1 = torch.rand(2,2)
a2 = torch.ones(2,2)
a_sum = a1 + a2 # same as torch.add(a1,a2) or inplace operation a1.add_(a2) => a1 = a1 + a2
print(a1)
print(a2)
print(a_sum)
# +
# element-wise subtraction
a_diff = a_sum - a1 # same as torch.sub(a1,a2)
print(a_diff)
# +
# element-wise multiplication
a_mul = a1*a2 # same as torch.mul(a1,a2)
print(a_mul)
# +
# element-wise division
a_div = a1/a2 # same as torch.div(a1,a2)
print(a_div)
# -
# ### Slicing/reshaping and numpy interops on tensors (like numpy)
b1 = torch.randn(5,4, dtype=torch.float32)
print(b1)
# get a row
b1[1, :]
# get a column
b1[:, 2]
# value of a single(i,j) element
b1[1,2].item()
# reshaping a tensor
b2 = b1.view(2, -1)
print(b2)
print(b2.shape)
# +
# torch to numpy conversion
import numpy as np
b_n = b1.numpy()
print(type(b_n)) # both b1 and b_n point to same memory, so changing one will alter the other also.
# +
# numpy to torch conversion
b_n1 = np.ones(5)
b3 = torch.from_numpy(b_n1)
print(type(b3))
# +
# convert tensor in gpu to cpu first before converting to numpy
device = torch.device("cuda")
b4 = torch.ones(2,2).to(device)
print(b4)
# -
# b4.numpy()  # would raise a TypeError: CUDA tensors must be moved to the CPU first
b4.cpu().numpy()
# ### Torch autograd
# <p>Plot function (x+1)^2 and find its derivative in domain [0,5]
import matplotlib.pyplot as plt
# %matplotlib inline
x = torch.linspace(0.0, 5.0, 100, requires_grad=True)
print(x)
print(x.shape)
y = x*x + 2*x + 1
print(y)
v = torch.tensor([1]*x.shape[0], dtype=torch.float32) # not required if function value is scalar
y.backward(v) # dy/dx
print(x.grad)
# if tensor is part of computation graph, we need to detach it before converting to numpy
plt.plot(x.detach().numpy(), y.detach().numpy(), label="(x+1)^2")
plt.grid()
plt.legend()
plt.show()
plt.plot(x.detach().numpy(), x.grad.detach().numpy(), label="2x+2")
plt.grid()
plt.legend()
plt.show()
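# The analytic gradient 2x + 2 returned by autograd above can be sanity-checked
# without torch via a central finite difference in numpy (an illustrative check,
# not part of the original notebook):

```python
import numpy as np

def f(v):
    return v * v + 2 * v + 1               # (v + 1)^2

xs = np.linspace(0.0, 5.0, 100)
analytic = 2 * xs + 2                      # d/dx (x + 1)^2

# central finite difference: (f(x + h) - f(x - h)) / (2h)
h = 1e-5
numeric = (f(xs + h) - f(xs - h)) / (2 * h)

print(np.max(np.abs(numeric - analytic)))  # tiny (floating-point noise only)
```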
# <p>
# Prevent gradient when performing ops
# <li>
# x.requires_grad_(False)
# <li>
# x.detach() # creates a copy of tensor where gradient is not calculated
# <li>
# with torch.no_grad():
#
# <p>
# Every time we call y.backward(), new gradients are computed and accumulated (summed) into the value already present in x.grad
# <p>
# To clear the previous gradients, we should call: x.grad.zero_()
|
basics/tut_1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _kg_hide-input=true _kg_hide-output=true _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
# + [markdown] _uuid="9d0fc470a746fcf07f76d09b0ee0cea46ebcfc4e"
# # Learning to speak French
#
# French is hard. But slowly, machines are getting good at translating English to French. This notebook is an adaptation of Francois Chollet's [A ten-minute introduction to sequence-to-sequence learning in Keras](https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html) with faster code, optimized for training on a GPU. Read his tutorial for a full overview of what is going on. I tried to comment the code well so that you can read along.
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _kg_hide-output=true _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
from keras.models import Model
from keras.layers import Input, CuDNNLSTM, Dense
from keras.preprocessing.text import one_hot, text_to_word_sequence, Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
from numpy.random import seed
seed(1)
from tensorflow import set_random_seed
set_random_seed(2)
# + _cell_guid="fcd7da31-a545-41a8-ae89-376b2a8f25e8" _uuid="66c19897a47dc4565c6c24ed29d4dccada1f0d71"
batch_size = 64 # Batch size for training.
epochs = 100 # Number of epochs to train for.
latent_dim = 256 # Latent dimensionality of the encoding space.
num_samples = 10000 # Number of samples to train on.
# Path to the data txt file on disk.
data_path = '../input/fra-eng/fra.txt'
# +
# docs = ['Well done!',
# 'Good work',
# 'Great effort',
# 'nice work',
# 'Excellent!',
# 'Weak',
# 'Poor effort!',
# 'not good',
# 'poor work',
# 'Could have done better.']
# +
# tokenized_docs = [text_to_word_sequence(doc) for doc in docs]
# +
# t = Tokenizer()
# t.fit_on_texts(docs)
# +
# word_index = t.word_index
# +
# index_word = {idx:word for (word,idx) in word_index.items()}
# +
# integer_encoded_docs = []
# +
# for doc in tokenized_docs:
# encoded = [word_index[word] for word in doc]
# integer_encoded_docs.append(encoded)
# +
# text = 'The cat sat on the mat.'
# encoded = one_hot(text, 5)
# +
# max_length = 4
# padded_docs = pad_sequences(integer_encoded_docs, maxlen=max_length, padding='post')
# print(padded_docs)
# + [markdown] _uuid="4dad6ce5dfcb0935bf9722da247b50efacb11676"
# ## The data
# The data is present in a tab delimited CSV file. To give it a quick overview we will load it with pandas:
# + _uuid="6c6891c41f2dfcd07554e5fc13eb7410dc432b4a"
df = pd.read_csv(data_path,delimiter='\t')
# + _uuid="559cc4eb61559b022b732b38e3bbc6d8719fecd1"
df.head(20)
# + _uuid="b4402602262a7d854cbb664985e4ea0377083b9e"
del df # Save memory
# + [markdown] _uuid="ba16c99cab41ed4b7e2ca43da18e77ada52fd6dd"
# ## Vectorizing data
# + _cell_guid="ea0e2e7f-f65e-4fe6-b75e-304efb56f04f" _uuid="06a6c35f13c93a188005227a3b98fc9b7b21a201"
# Vectorize the data.
input_texts = []
target_texts = []
input_characters = set()
target_characters = set()
# Loop over lines
lines = open(data_path).read().split('\n')
for line in lines[: min(num_samples, len(lines) - 1)]:
# Input and target are split by tabs
# English TAB French
input_text, target_text = line.split('\t')
# We use "tab" as the "start sequence" character
# for the targets, and "\n" as "end sequence" character.
target_text = '\t' + target_text + '\n'
input_texts.append(input_text)
target_texts.append(target_text)
# Create a set of all unique characters in the input
for char in input_text:
if char not in input_characters:
input_characters.add(char)
# Create a set of all unique output characters
for char in target_text:
if char not in target_characters:
target_characters.add(char)
print('Number of samples:', len(input_texts))
# + _cell_guid="de998ecc-341a-4a47-830a-1dc20f0ae904" _uuid="555c970d2117022365512bf3d84246a909f44fc2"
input_characters = sorted(list(input_characters)) # Make sure we achieve the same order in our input chars
target_characters = sorted(list(target_characters))
num_encoder_tokens = len(input_characters) # aka size of the english alphabet + numbers, signs, etc.
num_decoder_tokens = len(target_characters) # aka size of the french alphabet + numbers, signs, etc.
print('Number of unique input tokens:', num_encoder_tokens)
print('Number of unique output tokens:', num_decoder_tokens)
# + _cell_guid="b99e23e7-2faa-4ce7-a6f3-784e02ef46c7" _uuid="a52af2a93fd956dd9a33c02baf3aa5a73bd7a1ec"
# This works very similar to a tokenizer
# The index maps a character to a number
input_token_index = {char: i for i, char in enumerate(input_characters)}
target_token_index = {char: i for i, char in enumerate(target_characters)}
# + _cell_guid="76afcd7b-23bd-44c3-b3fb-978cba081f21" _uuid="43f7153aa4595b48da4338c300712a7c16fcfa01"
# Demo character tokenization
for c in 'the cat sits on the mat':
print(input_token_index[c], end = ' ')
# + _cell_guid="7d4ad6ea-17fc-479a-87fe-3822fdd161c2" _uuid="b88478d0f15b9642f905395d8c45445f628719b2"
max_encoder_seq_length = max([len(txt) for txt in input_texts]) # Get longest sequences length
max_decoder_seq_length = max([len(txt) for txt in target_texts])
print('Max sequence length for inputs:', max_encoder_seq_length)
print('Max sequence length for outputs:', max_decoder_seq_length)
# + _cell_guid="2c39c2c1-072c-4e5c-99bd-e13a62ebed04" _uuid="7231966fbdb13bffb3805d69856d354acb1d15ce"
# encoder_input_data is a 3D array of shape (num_pairs, max_english_sentence_length, num_english_characters)
# containing a one-hot vectorization of the English sentences.
encoder_input_data = np.zeros(
(len(input_texts), max_encoder_seq_length, num_encoder_tokens),
dtype='float32')
# decoder_input_data is a 3D array of shape (num_pairs, max_french_sentence_length, num_french_characters)
# containing a one-hot vectorization of the French sentences.
decoder_input_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
# decoder_target_data is the same as decoder_input_data but offset by one timestep.
# decoder_target_data[:, t, :] will be the same as decoder_input_data[:, t + 1, :]
decoder_target_data = np.zeros(
(len(input_texts), max_decoder_seq_length, num_decoder_tokens),
dtype='float32')
# + _cell_guid="8d1014c8-82a6-4988-a31e-33bd3498b585" _uuid="d12785864708eaa8feec3b7aba862b02bec76e3b"
# Loop over input texts
for i, (input_text, target_text) in enumerate(zip(input_texts, target_texts)):
# Loop over each char in an input text
for t, char in enumerate(input_text):
# Create one hot encoding by setting the index to 1
encoder_input_data[i, t, input_token_index[char]] = 1.
# Loop over each char in the output text
for t, char in enumerate(target_text):
# decoder_target_data is ahead of decoder_input_data by one timestep
decoder_input_data[i, t, target_token_index[char]] = 1.
if t > 0:
# decoder_target_data will be ahead by one timestep
# and will not include the start character.
decoder_target_data[i, t - 1, target_token_index[char]] = 1.
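# On a toy vocabulary, the one-hot encoding built by the loops above looks like
# this (numpy only; the four-character "alphabet" below is made up):

```python
import numpy as np

chars = ['\t', '\n', 'a', 'b']               # toy "alphabet"
token_index = {c: i for i, c in enumerate(chars)}
text = '\tab\n'                              # start char + "ab" + end char

# one row per timestep, one column per character; exactly one 1 per row
one_hot = np.zeros((len(text), len(chars)), dtype='float32')
for t, char in enumerate(text):
    one_hot[t, token_index[char]] = 1.

print(one_hot)
```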
# + _cell_guid="909d7eb1-54f5-4b96-8745-34080104dcc4" _uuid="14f0c33f929a0ef7aaf1fa23cff824ecb4b4fe6f"
# Define an input sequence and process it.
encoder_inputs = Input(shape=(None, num_encoder_tokens),
name = 'encoder_inputs')
# The return_state constructor argument configures an RNN layer to return a list
# where the first entry is the outputs and the next entries are the internal RNN states.
# This is used to recover the states of the encoder.
encoder = CuDNNLSTM(latent_dim,
return_state=True,
name = 'encoder')
encoder_outputs, state_h, state_c = encoder(encoder_inputs)
# We discard `encoder_outputs` and only keep the states.
encoder_states = [state_h, state_c]
# Set up the decoder, using `encoder_states` as initial state.
decoder_inputs = Input(shape=(None, num_decoder_tokens),
name = 'decoder_inputs')
# We set up our decoder to return full output sequences,
# and to return internal states as well. We don't use the
# return states in the training model, but we will use them in inference.
decoder_lstm = CuDNNLSTM(latent_dim,
return_sequences=True,
return_state=True,
name = 'decoder_lstm')
# The initial_state call argument specifies the initial state(s) of an RNN.
# This is used to pass the encoder states to the decoder as initial states.
# Basically making the first memory of the decoder the encoded semantics
decoder_outputs, _, _ = decoder_lstm(decoder_inputs,
initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens,
activation='softmax',
name = 'decoder_dense')
decoder_outputs = decoder_dense(decoder_outputs)
# Define the model that will turn
# `encoder_input_data` & `decoder_input_data` into `decoder_target_data`
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
# + _cell_guid="1e70cde9-4c5d-4de2-8f41-90a7c5bd3475" _kg_hide-output=true _uuid="72492095eea44a02d9f392b2274606382cf11f64"
# Run training
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
history = model.fit([encoder_input_data, decoder_input_data],
decoder_target_data,
batch_size=batch_size,
epochs=epochs,
validation_split=0.2)
# Save model
#model.save('s2s.h5')
# + [markdown] _uuid="5979b64e11875088255f5d91c9665a4418a46b08"
# ## Evaluating the model
# This model overfits quite a bit. It is only useful for demo purposes, for a more serious translator, consider working with word vectors and training on the full dataset.
# + _cell_guid="55d758b0-b642-4974-b028-82f920d4602b" _uuid="ca7c6f484ebbe4aed426744aa2197cf194c3ba26"
import matplotlib.pyplot as plt
plt.figure(figsize=(10,7))
a, = plt.plot(history.history['loss'],label='Training Loss')
b, = plt.plot(history.history['val_loss'],label='Validation Loss')
plt.legend(handles=[a,b])
plt.show()
# + [markdown] _cell_guid="2ad8f7e4-72a4-4884-8be4-d9fb4e1bd126" _uuid="61c95ebe8356e8f91cd8affc10e14acc1596421d"
# # Creating inference models
# -
# + _cell_guid="239a7196-6485-4f65-8dea-4ca1a9aa87df" _uuid="1c10010e6db47a30edd82bee80e77a0241a79fcf"
# Define encoder model
encoder_model = Model(encoder_inputs, encoder_states)
# + _cell_guid="bb95b6a7-772b-458a-bd23-411e98e32205" _uuid="343f0049794312cf4fd1511c5f34ce36a9e26982"
# Define decoder model
# Inputs from the encoder
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
# Create a combined memory to input into the decoder
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]
# Decoder
decoder_outputs, state_h, state_c = decoder_lstm(
decoder_inputs, initial_state=decoder_states_inputs)
decoder_states = [state_h, state_c]
# Predict next char
decoder_outputs = decoder_dense(decoder_outputs)
# The model takes in the encoder memory plus its own memory as an input and spits out
# a prediction plus its own memory to be used for the next char
decoder_model = Model(
[decoder_inputs] + decoder_states_inputs,
[decoder_outputs] + decoder_states)
# -
def decoder_sequence(input_seq):
    states_value = encoder_model.predict(input_seq)
    target_sequence = np.zeros((1, 1, num_decoder_tokens))
    target_sequence[0, 0, target_token_index["\t"]] = 1.
    stop_condition = False
    decoded_sentence = ""
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict([target_sequence] + states_value)
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_char = reverse_target_char_index[sampled_token_index]
        decoded_sentence += sampled_char
        # stop at the end-of-sequence character or the maximum output length
        if sampled_char == "\n" or len(decoded_sentence) > max_decoder_seq_length:
            stop_condition = True
        target_sequence = np.zeros((1, 1, num_decoder_tokens))
        target_sequence[0, 0, sampled_token_index] = 1.
        states_value = [h, c]
    return decoded_sentence
# + _cell_guid="5f19066a-c7e4-421b-a6cf-a2f463fd75fb" _uuid="c3c1dd185d7e8532f088903bf634f6c14d18fb2d"
# Reverse-lookup token index to decode sequences back to
# something readable.
reverse_input_char_index = {i: char
for char, i in input_token_index.items()}
reverse_target_char_index = {i: char
for char, i in target_token_index.items()}
# + _cell_guid="d6e067d6-e27f-47e9-b36d-8c8469fef085" _uuid="a1b14b5e6f0a1c34d6ffd3a27b302e79c0d31ea5"
def decode_sequence(input_seq):
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seq)
# Generate empty target sequence of length 1.
target_seq = np.zeros((1, 1, num_decoder_tokens))
# Populate the first character of target sequence with the start character.
target_seq[0, 0, target_token_index['\t']] = 1.
# Sampling loop for a batch of sequences
# (to simplify, here we assume a batch of size 1).
stop_condition = False
decoded_sentence = ''
    # Loop until we receive a stop sign
while not stop_condition:
# Get output and internal states of the decoder
output_tokens, h, c = decoder_model.predict(
[target_seq] + states_value)
# Get the predicted token (the token with the highest score)
sampled_token_index = np.argmax(output_tokens[0, -1, :])
# Get the character belonging to the token
sampled_char = reverse_target_char_index[sampled_token_index]
# Append char to output
decoded_sentence += sampled_char
# Exit condition: either hit max length
# or find stop character.
if (sampled_char == '\n' or
len(decoded_sentence) > max_decoder_seq_length):
stop_condition = True
# Update the target sequence (of length 1).
target_seq = np.zeros((1, 1, num_decoder_tokens))
target_seq[0, 0, sampled_token_index] = 1.
# Update states
states_value = [h, c]
return decoded_sentence
# + [markdown] _uuid="96c02bf34edb09a51cefa1e34ed3425e96fd69eb"
# ## Giving it a spin
#
# + _cell_guid="c171e7a9-5642-43b8-ac4d-a0150ad20e14" _uuid="3a78fc8d64f0bca2650fd73c6356fd0b5a6bf171"
my_text = '<NAME>'
placeholder = np.zeros((1,len(my_text)+10,num_encoder_tokens))
# + _cell_guid="e211bbb7-1dc7-40aa-96b9-d9372adea403" _uuid="10d39df3fdd008e15853d873c4a496a0568a872a"
for i, char in enumerate(my_text):
print(i,char, input_token_index[char])
placeholder[0,i,input_token_index[char]] = 1
# + _cell_guid="9c45d63b-fcec-4611-bc76-59b1c5322eb9" _uuid="d94521498349ca4e3a8010f283f9e4057a5b8e78"
decode_sequence(placeholder)
# -
|
seq2seq.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from transformers import GPT2Tokenizer, GPT2Model, GPT2LMHeadModel
import torch
# from torch import nn
import numpy as np
perm = np.random.permutation
import pickle
from tqdm import tqdm
from collections import Counter
import matplotlib.pyplot as plt
import logging
logging.getLogger("transformers.tokenization_utils").setLevel(logging.ERROR)
# -
# ## Load E-mail Data
with open("../../../w3c-emails/emails.pkl", "rb") as handle:
emails = pickle.load(handle)
# # Steps
#
# 0. select authors (so that evaluation sets can be held out) and establish their frequencies <br>
# -> $P(X)$, i.e. the probability that a person appears as an author of any e-mail
# 1. pre-train GPT-2 on W3C e-mail corpus <br>
# -> W3CGPT-2 <br>
# -> approximates $P(email)$
# 2. train W3CGPT-2 on e-mails by selected authors <br>
# -> GPT-2$_X$ for each person $X$ <br>
# -> i.e. $P(email|X)$
# 3. use GPT-2$_X$ to classify unseen (both to W3CGPT-2 and to GPT-2$_X$) e-mails <br>
# -> $P(X|email) = P(X)P(email|X)/P(email)$
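# The Bayes step in 3. can be illustrated with made-up numbers (all values
# below are hypothetical): combining priors $P(X)$ with per-author
# log-likelihoods $\log P(email|X)$ gives the posterior, and $P(email)$
# cancels out in the normalisation:

```python
import numpy as np

# hypothetical priors P(X) for three authors and log P(email|X) from each LM
priors = np.array([0.5, 0.3, 0.2])
loglik = np.array([-120.0, -115.0, -130.0])

# unnormalised log-posterior: log P(X) + log P(email|X)
logpost = np.log(priors) + loglik
logpost -= logpost.max()                      # stabilise before exponentiating
posterior = np.exp(logpost) / np.exp(logpost).sum()

print(posterior)   # P(X|email); P(email) cancels in the normalisation
```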
# ## 0. Select Authors (and their e-mails)
#
# - author-frequency distribution Zipfian, so use log-linear ranks to select authors <br>
# (i.e. author rank 1, author rank 2, author rank 4, author rank 8, ..., author rank 512)
#
# - plots below equivalent to $P(X)$
# +
def select_by_ranks(emails, ls_of_ranks):
sndr_cnts = Counter(e.sender for e in emails)
    ranks_sndr = {r: s for r, (s, c) in enumerate(sndr_cnts.most_common())}  # NB: 0-based, so rank 0 is the most frequent author
for r in ls_of_ranks:
cur_s = ranks_sndr[r]
yield [m for m in emails if m.sender == cur_s]
rank_rng = [2**i for i in range(10)]
selection = list(select_by_ranks(emails, rank_rng))
rest = list(set(emails) - set(m for m_ls in selection for m in m_ls))
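# The rank-based selection can be illustrated on a toy sender distribution
# (the senders and counts below are made up):

```python
from collections import Counter

# toy e-mails as (sender, body) pairs: 'alice' writes 4, 'bob' 2, 'carol' 1
mails = ([('alice', m) for m in 'aaaa'] +
         [('bob', m) for m in 'bb'] +
         [('carol', 'c')])

cnts = Counter(s for s, _ in mails)
ranks = {r: s for r, (s, c) in enumerate(cnts.most_common())}  # 0-based ranks

# pick the e-mails of the rank-0 and rank-2 senders
selected = [[m for m in mails if m[0] == ranks[r]] for r in (0, 2)]
print([len(ls) for ls in selected])  # → [4, 1]
```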
# +
sndr_cnts = Counter(e.sender for e in emails)
rs, cs = list(zip(*[(r, c) for r, (_, c) in enumerate(sndr_cnts.most_common())]))
fig = plt.figure(1, figsize=(15, 10))
plt.subplot(221)
plt.loglog(rs, cs, '.')
plt.xlabel(r"$\log$ rank"); plt.ylabel(r"$\log$ frequency"); plt.title("Rank-Frequency plot of number of e-mails authored by each person\n$= P(X)$")
rng = [2**i for i in range(10)]
rng_cs = [cs[i] for i in rng]
plt.subplot(222)
plt.loglog(rng, rng_cs, '.')
_ = plt.title("Thinned version of the left plot\n(subset of 10)")
# -
[(r, sndr_cnts.most_common()[r]) for r in rank_rng]
# ## 0.1 Get Train and Evaluation sets
#
# using train:eval ratio of 30:70
# +
def split_train_test(ls, test_ratio=0.3):
cutoff = int(len(ls)*test_ratio)
randmsd_ls = list(perm(ls))
test = randmsd_ls[:cutoff]
train = randmsd_ls[cutoff:]
return train, test
# def emails_to_datasets(selected, rest, test_ratio=0.3):
# train, test = split_train_test(rest, test_ratio)
# for mail_ls in selected:
# cur_train, cur_test = split_train_test(mail_ls, test_ratio)
# train.extend(cur_train)
# test.extend(cur_test)
# return train, test
selection_train, selection_test = list(zip(*[split_train_test(m_ls) for m_ls in selection]))
rest_train, rest_test = split_train_test(rest)
selection_never_seen, selection_test = list(zip(*[split_train_test(m_ls, test_ratio=0.5)
for m_ls in selection_test]))
rest_never_seen, rest_test = split_train_test(rest_test, test_ratio=0.5)
print(list(map(len, selection_never_seen)), list(map(len, selection_test)))
len(rest_never_seen), len(rest_test)
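# The two-stage split above yields roughly 70% train, 15% test and 15%
# never-seen; a stdlib-only sketch of the same arithmetic (the 1000-item list
# is a toy stand-in for the e-mails):

```python
import random

def split(ls, ratio):
    """Shuffle and split ls, returning (the rest, the first ratio-fraction)."""
    shuffled = ls[:]
    random.shuffle(shuffled)
    cutoff = int(len(shuffled) * ratio)
    return shuffled[cutoff:], shuffled[:cutoff]

random.seed(0)
mails = list(range(1000))
train, test = split(mails, 0.3)          # 70 / 30
never_seen, test = split(test, 0.5)      # 30 -> 15 / 15
print(len(train), len(test), len(never_seen))  # → 700 150 150
```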
# +
# save in folders because of random permutations
with open("data_splits/selection_train.pkl", "wb") as handle:
pickle.dump(selection_train, handle)
with open("data_splits/selection_test.pkl", "wb") as handle:
pickle.dump(selection_test, handle)
with open("data_splits/selection_never_seen.pkl", "wb") as handle:
pickle.dump(selection_never_seen, handle)
with open("data_splits/rest_train.pkl", "wb") as handle:
pickle.dump(rest_train, handle)
with open("data_splits/rest_test.pkl", "wb") as handle:
pickle.dump(rest_test, handle)
with open("data_splits/rest_never_seen.pkl", "wb") as handle:
pickle.dump(rest_never_seen, handle)
# -
# # START FROM HERE
# # IMPORTANT!
# +
with open("data_splits/selection_train.pkl", "rb") as handle:
selection_train = pickle.load(handle)
with open("data_splits/selection_test.pkl", "rb") as handle:
selection_test = pickle.load(handle)
with open("data_splits/selection_never_seen.pkl", "rb") as handle:
selection_never_seen = pickle.load(handle)
with open("data_splits/rest_train.pkl", "rb") as handle:
rest_train = pickle.load(handle)
with open("data_splits/rest_test.pkl", "rb") as handle:
rest_test = pickle.load(handle)
with open("data_splits/rest_never_seen.pkl", "rb") as handle:
rest_never_seen = pickle.load(handle)
# -
# ## 1. Domain-Adapt GPT-2 to W3C E-mails
#
# - pre-train an instance of GPT-2 on entire (subset of) w3c-email corpus <br>
# -> will reduce perplexity and thus increase sensitivity of LMs
# - use this LM as starting point to train personalised LMs
# - also reserve a test set? -> i.e. some e-mails which no custom-trained LM has seen before
#
# - => write e-mail bodies into text files, train and eval
# +
def emails_to_trainfile(email_ls, file_name, split_into=1):
chunk_size = len(email_ls)//split_into
for i in range(split_into):
with open(file_name + f".{i}", "w", encoding="utf-8") as handle:
cur_chunk = email_ls[i*chunk_size:(i+1)*chunk_size] if i != split_into-1 else email_ls[i*chunk_size:]
for m in cur_chunk:
mail_str = m.body_raw.replace("\n", " ")
handle.write(mail_str)
handle.write("\n\n")
full_train = perm(rest_train +
[m for m_ls in selection_train for m in m_ls] +
[m for m_ls in selection_test for m in m_ls])
full_test = perm(rest_never_seen + [m for m_ls in selection_never_seen for m in m_ls])
emails_to_trainfile(full_train, "W3CGPT2/full.train.raw", split_into=5)
emails_to_trainfile(full_test, "W3CGPT2/full.test.raw", split_into=4)
# -
#
# ### 1.1 Call for Training
#
# - first: merge split up text files -> `cat full.train.raw.* > full.train.raw.all`
#
# `python3 run_language_modeling.py --train_data_file=W3CGPT2/full.train.raw.all --model_type=gpt2 --output_dir=W3CGPT2/lm --model_name_or_path=gpt2 --do_train --line_by_line --num_train_epochs=2`
#
# - `--line_by_line` indicates one sample per line to separate e-mails, `"\n"` inside e-mails converted to `" "`
# - perhaps use `--block_size=128/256/512` (rather than GPT-2's default of 1024) -> lose fewer tokens at the ends of long emails
#
#
# ### 1.2 Load Trained
#
# should be as simple as:
model = GPT2Model.from_pretrained('W3CGPT2/lm/')
# ## 2. Train one instance of W3CGPT-2 per author $X$ to become GPT-2$_X$
#
# - load W3CGPT2
# - get training files
# - call `run_language_modeling.py` with adequate parameters
# - run on LISA
# ### 2.1 Training files, one per author
# +
import os
folder_name = "GPT2_X/"
names = []
for mail_ls in selection_train:
cur_auth = mail_ls[0].sender
auth_name = cur_auth.name.replace(" ", "_")
names.append(auth_name)
print(auth_name)
os.mkdir(folder_name + "lm_" + auth_name)
with open(folder_name + "lm_" + auth_name + "/nothing.txt", "w") as handle: pass
with open(folder_name + auth_name + ".train.raw", "w", encoding="utf-8") as handle:
for m in mail_ls:
print(len(m.body_raw))
mail_str = m.body_raw.replace("\n", " ")
handle.write(mail_str)
handle.write("\n\n")
print()
with open(folder_name + "auth_names.txt", "w") as handle:
handle.write("\n".join(names))
|
analytics/embeddings_for_the_people/personalised_GPT2/prepare_training.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from numpy import *
from pylab import *
from netCDF4 import Dataset
# %matplotlib inline
path_input='../../data.input/MOM6/mom01_salt1year/'
NCdepth=Dataset(path_input+'topog.nc')
depth=NCdepth.variables['depth'][:,:]
contourf(depth[0:500,2000:2500])
colorbar()
print(shape(depth))
NCsalt_restore=Dataset(path_input+'salt_restore.nc')
print(NCsalt_restore)
salt=NCsalt_restore.variables['salt'][:,:,:]
pcolor(salt[1,:,:])
NCgold=Dataset(path_input+'GOLD_IC.2010.11.15.nc')
gold=NCgold.variables['salt'][:,:,:]
pcolor(gold[1,:,:])
GEBCONC=Dataset('../../GEBCO/GEBCO_2014_2D.nc')
GEBCONC
elevation=GEBCONC.variables['elevation'][:,:]
lat=GEBCONC.variables['lat'][:]
lon=GEBCONC.variables['lon'][:]
pcolormesh(elevation)
elevation[elevation>=0]=0
pcolormesh(elevation)
lonm=lon[40000::]
latm=lat[1250:3000]
pcolormesh(lonm,latm,elevation[1250:3000,40000::],vmin=-100,vmax=0)
colorbar()
newlon=linspace(lonm.min(),lonm.max(),23)
newlat=linspace(latm.min(),latm.max(),14)
from scipy import interpolate
f = interpolate.interp2d(lonm, latm, elevation[1250:3000,40000::], kind='cubic')
newelev=f(newlon,newlat)
newelev[newelev>0]=0
contourf(newlon,newlat,-newelev)
colorbar()
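# interp2d is deprecated in recent SciPy versions; when the coarse grid divides
# the fine grid evenly, a numpy-only block average is a simple alternative for
# this kind of downsampling (an illustrative sketch on a toy 4x4 field, not the
# notebook's actual GEBCO regridding):

```python
import numpy as np

def block_mean(field, factor):
    """Coarsen a 2-D field by averaging non-overlapping factor x factor blocks."""
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

fine = np.arange(16.0).reshape(4, 4)
coarse = block_mean(fine, 2)
print(coarse)  # → [[ 2.5  4.5] [10.5 12.5]]
```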
NCdepth
topogfullNC=Dataset('../../GEBCO/topog.nc')
fulldepth=topogfullNC.variables['depth']
print(fulldepth)
print(shape(fulldepth))
pcolormesh(fulldepth[0:14,80:103])
# +
import sys
import numpy as np
import netCDF4 as nc4
from datetime import datetime
j0 = int(sys.argv[1])  # sys.argv[0] is the script name; slice bounds must be ints
j1 = int(sys.argv[2])
i0 = int(sys.argv[3])
i1 = int(sys.argv[4])
inputfile = sys.argv[5]
outputfile = sys.argv[6]
topogfullNC = nc4.Dataset(inputfile)
globaldepth = topogfullNC.variables['depth']
regiondepth = globaldepth[j0:j1, i0:i1]
f = nc4.Dataset(outputfile, 'w', format='NETCDF4')
f.createDimension('ny', np.shape(regiondepth)[0])
f.createDimension('nx', np.shape(regiondepth)[1])
f.createDimension('ntiles', 1)
# createVariable takes (name, datatype, dimensions); write data with [:]
depth = f.createVariable('depth', 'f8', ('ny', 'nx'))
depth[:, :] = regiondepth
today = datetime.today()
f.history = (today.strftime("%d/%m/%y") +
             " python topography_subregion.py %s %s %s %s %s %s"
             % (j0, j1, i0, i1, inputfile, outputfile))
f.close()
# -
|
MOM6_utils/notebooks/.ipynb_checkpoints/INPUT_files-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: python3
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # WeatherPy
# ----
#
# #### Note
# * Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
# +
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# print(weather_api_key)
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
# -
# ## Generate Cities List
# +
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(lat_range[0], lat_range[1], size=1500)
lngs = np.random.uniform(lng_range[0], lng_range[1], size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
    # If the city is unique, then add it to our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
# -
# ### Perform API Calls
# * Perform a weather check on each city using a series of successive API calls.
# * Include a print log of each city as it's being processed (with the city number and city name).
#
# From the website we get this API call
#url = "api.openweathermap.org/data/2.5/weather?q={city name}&appid={API key}"
city_info = []
print("Beginning Data Retrieval")
print("-------------------------")
for index,city in enumerate(cities):
city_url = f"http://api.openweathermap.org/data/2.5/weather?units=imperial&q={city}&appid={weather_api_key}"
try:
data = requests.get(city_url).json()
        print(f"processing record number {index} for {city}")
# Check to see if we can get the weather, if not it should give us an exception
weather = data["weather"]
# If we did not get an exception we found data for this city, lets append it
city_info.append(data)
except Exception as ex:
print(f"could not find data for {city}")
print("----------------------------")
print("Data Retrieval Complete")
print("----------------------------")
# ### Convert Raw Data to DataFrame
# * Export the city data into a .csv.
# * Display the DataFrame
city_data = []
#[{'coord': {'lon': -25.4333, 'lat': 37.7167},
# 'weather': [{'id': 800,
# 'main': 'Clear',
# 'description': 'clear sky',
# 'icon': '01n'}],
# 'base': 'stations',
# 'main': {'temp': 294.38,
# 'feels_like': 294.65,
# 'temp_min': 294.38,
# 'temp_max': 294.38,
# 'pressure': 1027,
# 'humidity': 80,
# 'sea_level': 1027,
# 'grnd_level': 1024},
# 'visibility': 10000,
# 'wind': {'speed': 3.03, 'deg': 35, 'gust': 4.48},
# 'clouds': {'all': 0},
# 'dt': 1627939948,
# 'sys': {'type': 1,
# 'id': 6899,
# 'country': 'PT',
# 'sunrise': 1627886775,
# 'sunset': 1627937377},
# 'timezone': 0,
# 'id': 3372472,
# 'name': '<NAME>',
# 'cod': 200},
for data in city_info:
# City Lat Lng Max Temp Humidity Cloudiness Wind Speed Country Date
# We only need this data
try:
city = data["name"]
lat = data["coord"]["lat"]
lng = data["coord"]["lon"]
max_temp = data["main"]["temp_max"]
humidity = data["main"]["humidity"]
cloudiness = data["clouds"]["all"]
wind_speed = data["wind"]["speed"]
country = data["sys"]["country"]
date = data["dt"]
info = {"City": city,
"Lat": lat,
"Lng": lng,
"Max Temp": max_temp,
"Humidity": humidity,
"Cloudiness": cloudiness,
"Wind Speed": wind_speed,
"Country": country,
"Date": date}
city_data.append(info)
    except KeyError:
        # Skip responses that are missing any of the required fields
        ...
city_data_df = pd.DataFrame(city_data)
city_data_df.to_csv(output_data_file)
city_data_df.head()
# ## Inspect the data and remove the cities where the humidity > 100%.
# ----
# Skip this step if there are no cities that have humidity > 100%.
humidity = city_data_df.loc[city_data_df["Humidity"] > 100]
humidity
# Get the indices of cities that have humidity over 100%.
# Drop those cities by index (dropping an empty index is a no-op)
city_data_df = city_data_df.drop(index=humidity.index)
# Make a new DataFrame equal to the city data with the humidity outliers removed.
# ".copy()" makes a copy of the city_data DataFrame, which we call "clean_city_data".
clean_city_data = city_data_df.copy()
clean_city_data
# We are going to use a function so we don't repeat the code
from datetime import date
date = date.today()
date = date.strftime("%d/%m/%Y")
def scatter_plot(y, y_label, y_units):
clean_city_data.plot(kind="scatter", x="Lat",y=y, edgecolor="black", grid = True)
plt.xlabel("Latitude")
plt.ylabel(f"{y_label} ({y_units})")
plt.title(f"City Latitude vs. {y_label} plot ({date})")
plt.savefig(f"./output_data/Lat_vs_{y_label}.png")
plt.show()
# ## Plotting the Data
# * Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
# * Save the plotted figures as .pngs.
# ## Latitude vs. Temperature Plot
scatter_plot(y="Max Temp", y_label="Max Temperatures", y_units="F")
# As we see in the plot above, as we get closer to latitude 0 (closer to the Equator), the temperatures are higher.
# ## Latitude vs. Humidity Plot
scatter_plot(y="Humidity", y_label="Humidity", y_units="%")
# As we see in the plot above, most of the cities have higher humidity rather than lower humidity.
# ## Latitude vs. Cloudiness Plot
scatter_plot(y="Cloudiness", y_label="Cloudiness", y_units="%")
# We do not see a significant difference in cloudiness based on latitude
# ## Latitude vs. Wind Speed Plot
scatter_plot(y="Wind Speed", y_label="Wind Speed", y_units="mph")
# Most of the cities have relatively low wind speeds rather than high ones.
# ## Linear Regression
north = clean_city_data[clean_city_data["Lat"]>=0]
south = clean_city_data[clean_city_data["Lat"]<0]
def linear_regression(x_values, y_values, title, y_label, annotate_x_y):
x_values = x_values
y_values = y_values
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
print(f"r-value is: {rvalue}")
plt.scatter(x_values,y_values, edgecolor = "black")
# Plot regression line
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq, annotate_x_y, fontsize=15,color="red")
# Label plot
plt.title(title)
plt.xlabel("Latitude")
plt.ylabel(y_label)
plt.savefig(f"./output_data/{title}.png")
plt.show()
# #### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
# +
x_values_n = north["Lat"]
y_values = north["Max Temp"]
# Label plot
title = "Northern Hemisphere - Max Temp vs. Latitude Linear Regression"
linear_regression(x_values_n, y_values, title, "Max Temp", (10,40))
# -
# 92.8 is the expected MaxTemp value when latitude is 0; with every one degree increase in latitude, the MaxTemp value in the Northern Hemisphere decreases by 0.5F. The r correlation value indicates a strong negative correlation between the MaxTemp and the latitude.
# #### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
# +
x_values_s = south["Lat"]
y_values = south["Max Temp"]
# Label plot
title = "Southern Hemisphere - Max Temp vs. Latitude Linear Regression"
linear_regression(x_values_s, y_values, title, "Max Temp", (-50,80))
# -
# 80.71 is the expected MaxTemp value when the latitude is 0; with every one degree increase in latitude, the MaxTemp in the Southern Hemisphere increases by 0.71F. The r correlation value indicates a strong positive correlation between the MaxTemp and the latitude.
# #### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# +
y_values = north["Humidity"]
# Label plot
title = "Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression"
linear_regression(x_values_n, y_values, title, "Humidity (%)", (45,15))
# -
# 65.14 is the expected humidity value when latitude is 0. With every one degree of latitude increase, the humidity in the Northern Hemisphere increases by 0.08. The r correlation value indicates a very weak, slightly positive association between humidity and latitude.
# #### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
# +
y_values = south["Humidity"]
# Label plot
title = "Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression"
linear_regression(x_values_s, y_values, title, "Humidity", (-55,45))
# -
# 70.34 is the expected humidity value when latitude is 0. With every one degree of latitude increase, the humidity in the Southern Hemisphere decreases by 0.02. The r correlation value indicates a very weak, slightly negative association between humidity and latitude.
# #### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# +
y_values = north["Cloudiness"]
# Label plot
title = "Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression"
linear_regression(x_values_n, y_values, title, "Cloudiness", (45,15))
# -
# 58.14 is the expected cloudiness value when latitude is 0. With every one degree of latitude increase, the cloudiness in the Northern Hemisphere decreases by 0.01. The r correlation value indicates a very weak negative association between cloudiness and latitude in the Northern Hemisphere.
# #### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
# +
y_values = south["Cloudiness"]
# Label plot
title = "Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression"
linear_regression(x_values_s, y_values, title, "Cloudiness", (-55,25))
# -
# 59.06 is the expected cloudiness value when latitude is 0. With every one degree of latitude increase, the cloudiness in the Southern Hemisphere decreases by 0.16. The r correlation value indicates a weak positive association between cloudiness and latitude in the Southern Hemisphere.
# #### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# +
y_values = north["Wind Speed"]
# Label plot
title = "Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression"
linear_regression(x_values_n, y_values, title, "Wind Speed", (45,25))
# -
# 8.11 is the expected Wind Speed value when latitude is 0. With every one degree of latitude increase, the Wind Speed in the Northern Hemisphere decreases by 0.02. The r correlation value indicates a weak negative association between Wind Speed and latitude in the Northern Hemisphere.
# #### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
# +
y_values = south["Wind Speed"]
# Label plot
title = "Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression"
linear_regression(x_values_s, y_values, title, "Wind Speed", (-55,25))
# -
# 7.98 is the expected Wind Speed value when latitude is 0. With every one degree of latitude increase, the Wind Speed in the Southern Hemisphere decreases by 0.05. The r correlation value indicates a negative association between Wind Speed and latitude in the Southern Hemisphere.
|
WeatherPy/WeatherPy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <h3>Recursion: Fibonacci Numbers</h3>
# <i>
# The Fibonacci Sequence
#
# The Fibonacci sequence appears in nature all around us, in the arrangement of seeds in a sunflower and the spiral of a nautilus for example.
#
# The Fibonacci sequence begins with $fibonacci(0) = 0$ and $fibonacci(1) = 1$ as its first and second terms. After these first two elements, each subsequent element is equal to the sum of the previous two elements.
#
# Programmatically:
# * $fibonacci(0) = 0$
# * $fibonacci(1) = 1$
# * $fibonacci(n) = fibonacci(n-1) + fibonacci(n-2)$
#
# Given $n$, return the $n^{th}$ number in the sequence.
#
# As an example, $n=5$. The Fibonacci sequence up to index $6$ is $fs = [0, 1, 1, 2, 3, 5, 8]$. With zero-based indexing, $fs[5]=5$.
# </i>
# <b>Function Description</b>
# <i>Complete the recursive function $fibonacci$ in the editor below. It must return the $n^{th}$ element in the Fibonacci sequence.
#
# fibonacci has the following parameter(s):
#
# * n: the integer index of the sequence to return</i>
# <b>Input Format</b>
# <i>The input line contains a single integer, $n$.</i>
# <b>Constraints</b>
# * $0 \leq n \leq 30$
# <b>Output Format</b>
# <i>Locked stub code in the editor prints the integer value returned by the $fibonacci$ function.</i>
# <b>Sample Input</b>
'''
3
'''
# <b>Sample Output</b>
'''
2
'''
# <b>Explanation</b>
# <i>The Fibonacci sequence begins as follows:
# <br>
# $fibonacci(0) = 0$
# <br>
# $fibonacci(1) = 1$
# <br>
# $fibonacci(2) = (0 + 1) = 1$
# <br>
# $fibonacci(3) = (1 + 1) = 2$
# <br>
# $fibonacci(4) = (1 + 2) = 3$
# <br>
# $fibonacci(5) = (2 + 3) = 5$
# <br>
# $fibonacci(6) = (3 + 5) = 8$
# <br>
# $...$
# <br>
# We want to know the value of $fibonacci(3)$. In the sequence above, $fibonacci(3)$ evaluates to $2$.
# </i>
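# <b>Solution sketch</b>
# <i>A direct recursive implementation of the definition above, one possible solution that is adequate under the constraint $0 \leq n \leq 30$ (for larger $n$, memoization would be needed to avoid the exponential number of recursive calls):</i>

```python
def fibonacci(n):
    # Base cases: fibonacci(0) = 0 and fibonacci(1) = 1
    if n < 2:
        return n
    # Each subsequent element is the sum of the previous two
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(3))  # prints 2, matching the sample output
```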
|
Lista Python IV/HackerRank - Interview Preparation Kit (Recursion and Backtracking).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/iamslash/examplesofml/blob/master/handsonml/handson_ml_02_end_to_end_machine_learning_project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ra4qqsgpIQs4" colab_type="text"
# # Setup
# + id="utw7TZ9-jkQZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 425} outputId="9aa86b3b-7991-4d87-85f4-2b61ca6f9a30"
# Install fonts
import matplotlib as mpl
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
# !apt-get install fonts-nanum*
# !apt-get install fonts-woowa-hanna
BM_HANNA = fm.FontProperties(fname='/usr/share/fonts/truetype/woowa/BM-HANNA.ttf')
NANUM_GOTHIC = fm.FontProperties(fname='/usr/share/fonts/truetype/nanum/NanumGothic.ttf')
NANUM_GOTHIC_CODING = fm.FontProperties(fname='/usr/share/fonts/truetype/nanum/NanumGothicCoding.ttf')
#
"""
- Using a font file's path as FontProperties works regardless of where the file lives,
- but setting a font via font_family (equivalent to the font's name) requires
  matplotlib to have that font file registered internally.
- Hence the cache rebuild below is needed.
"""
fm._rebuild()
plt.rc('font', family=NANUM_GOTHIC.get_name() )
mpl.rcParams['axes.unicode_minus'] = False
# all_fonts_I_can_use = fm.findSystemFonts(fontpaths=None, fontext='ttf')
# + id="dl-5oUs7HAwz" colab_type="code" colab={}
# Support both Python 2 and Python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# Seed the pseudo-random number generator for consistent output
np.random.seed(42)
# Matplotlib setup
# %matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Korean text output
matplotlib.rc('font', family='NanumBarunGothic')
plt.rcParams['axes.unicode_minus'] = False
# Folder to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "end_to_end_project"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
def ensure_dir(dir):
if not os.path.exists(dir):
os.makedirs(dir)
ensure_dir('./images')
ensure_dir(IMAGES_PATH)
# + [markdown] id="S64Md6P4IT93" colab_type="text"
# # Download the data
#
# + id="K_2CrhAzITaX" colab_type="code" outputId="b9cb6bf4-aafd-49b5-869e-2d702a6869a8" colab={"base_uri": "https://localhost:8080/", "height": 224}
import os
import tarfile
from six.moves import urllib
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
if not os.path.isdir(housing_path):
os.makedirs(housing_path)
tgz_path = os.path.join(housing_path, "housing.tgz")
urllib.request.urlretrieve(housing_url, tgz_path)
housing_tgz = tarfile.open(tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
fetch_housing_data()
import pandas as pd
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
housing = load_housing_data()
housing.head()
# + id="mr_Um-zkIy5c" colab_type="code" outputId="f7e961a9-daf9-4299-a11f-0a988fa7cc02" colab={"base_uri": "https://localhost:8080/", "height": 272}
housing.info()
# + id="qs8mDareIzua" colab_type="code" outputId="6fb074ff-3e73-4d44-d3e8-78cf8fb4da6e" colab={"base_uri": "https://localhost:8080/", "height": 119}
housing["ocean_proximity"].value_counts()
# + id="ADFavAeEJRfU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 2006} outputId="16aa2c67-d78c-4817-ad45-5027ceaffe1e"
pd.get_dummies(housing, columns=['ocean_proximity'])
# + id="eKjMEQGaJYCg" colab_type="code" outputId="22722839-14fb-46e4-cc32-a636c214f283" colab={"base_uri": "https://localhost:8080/", "height": 317}
housing.describe()
# + id="qJ7ZCqsZJzDc" colab_type="code" outputId="829354bf-c266-48eb-8298-d8aa37f9f24f" colab={"base_uri": "https://localhost:8080/", "height": 1143}
# %matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
save_fig("attribute_histogram_plots")
plt.show()
# + id="OVG3XEofJ_Zo" colab_type="code" outputId="ae42e116-b0a3-4a17-fa48-f41578107da3" colab={"base_uri": "https://localhost:8080/", "height": 335}
df = pd.DataFrame({
'length': [1.5, 0.5, 1.2, 0.9, 3],
'width': [0.7, 0.2, 0.15, 0.2, 1.1]
}, index= ['pig', 'rabbit', 'duck', 'chicken', 'horse'])
df.hist(bins=3)
# + id="Ma6yC1w_K42l" colab_type="code" outputId="135d2e1c-4406-444f-a9fd-ae69c9686742" colab={"base_uri": "https://localhost:8080/", "height": 241}
# Seed the pseudo-random number generator for consistent output
np.random.seed(42)
import numpy as np
# For illustration only; scikit-learn provides a train_test_split() function.
def split_train_test(data, test_ratio):
shuffled_indices = np.random.permutation(len(data))
test_set_size = int(len(data) * test_ratio)
test_indices = shuffled_indices[:test_set_size]
train_indices = shuffled_indices[test_set_size:]
return data.iloc[train_indices], data.iloc[test_indices]
train_set, test_set = split_train_test(housing, 0.2)
print(len(train_set), "train +", len(test_set), "test")
from zlib import crc32
def test_set_check(identifier, test_ratio):
return crc32(np.int64(identifier)) & 0xffffffff < test_ratio * 2**32
def split_train_test_by_id(data, test_ratio, id_column):
ids = data[id_column]
in_test_set = ids.apply(lambda id_: test_set_check(id_, test_ratio))
return data.loc[~in_test_set], data.loc[in_test_set]
housing_with_id = housing.reset_index()   # returns a DataFrame with an added `index` column
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "index")
housing_with_id["id"] = housing["longitude"] * 1000 + housing["latitude"]
train_set, test_set = split_train_test_by_id(housing_with_id, 0.2, "id")
test_set.head()
# + id="gvVjUS-WM_R_" colab_type="code" outputId="2f1d7595-850a-4852-b8f2-9a1337d3624c" colab={"base_uri": "https://localhost:8080/", "height": 224}
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
test_set.head()
# + id="6zabLjRGNC8G" colab_type="code" outputId="48e03784-d112-454f-a8cd-3465d2960db6" colab={"base_uri": "https://localhost:8080/", "height": 289}
housing["median_income"].hist()
# + id="2ywYgj_RN1Cc" colab_type="code" outputId="1f75b6a2-4b46-430a-aafe-ac24f1cf0f72" colab={"base_uri": "https://localhost:8080/", "height": 170}
housing['median_income'].describe()
# + id="DCS5DnUDNwdc" colab_type="code" outputId="2dff6757-0282-458e-fc66-4564ec0210f0" colab={"base_uri": "https://localhost:8080/", "height": 374}
# Divide by 1.5 to limit the number of income categories
housing["income_cat"] = np.ceil(housing["median_income"] / 1.5)
# Label values of 5 or above as 5
housing["income_cat"].where(housing["income_cat"] < 5, 5.0, inplace=True)
housing['income_cat']
housing['income_cat'].hist()
housing["income_cat"].value_counts()
# + id="kg0onM4VOvvt" colab_type="code" outputId="cf3fbcea-3f03-4e33-a4fa-fed2ff2fde84" colab={"base_uri": "https://localhost:8080/", "height": 297}
housing["income_cat"].hist()
save_fig('income_category_hist')
# + id="-lEuC57QftNR" colab_type="code" colab={}
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
# + id="oypxUIGQfy08" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="80131a8e-f12c-4668-dde7-cd104f85697e"
len(strat_train_set) + len(strat_test_set)
# + id="AEbIaPJrf7gU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="8d18af17-a0e8-43c0-fba5-6f7dbd1a1752"
strat_test_set["income_cat"].value_counts() / len(strat_test_set)
# + id="xH_ruStgf8Q0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="46213d5a-4b62-4894-817a-d7334a228f4b"
housing["income_cat"].value_counts() / len(housing)
# + id="5TyZ9CK3gCII" colab_type="code" colab={}
def income_cat_proportions(data):
return data["income_cat"].value_counts() / len(data)
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
compare_props = pd.DataFrame({
"Overall": income_cat_proportions(housing),
"Stratified": income_cat_proportions(strat_test_set),
"Random": income_cat_proportions(test_set),
}).sort_index()
compare_props["Rand. %error"] = 100 * compare_props["Random"] / compare_props["Overall"] - 100
compare_props["Strat. %error"] = 100 * compare_props["Stratified"] / compare_props["Overall"] - 100
# + id="pGXnxJTdgTst" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="15d143af-192d-40d0-bf8c-6bc150366ac0"
compare_props
# + id="wWE5ik3XgfiZ" colab_type="code" colab={}
for set_ in (strat_train_set, strat_test_set):
set_.drop("income_cat", axis=1, inplace=True)
# + id="gS6IpveJgg0W" colab_type="code" colab={}
strat_train_set
strat_test_set
# + [markdown] id="MMzFEy1zgubm" colab_type="text"
# # Exploring and visualizing the data
# + id="uUcW7MCYgv8E" colab_type="code" colab={}
housing = strat_train_set.copy()
# + id="sExw0VlNgyJJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 351} outputId="ab6a4e79-ecf4-40c6-8b1f-8d2b1c025be1"
ax = housing.plot(kind="scatter", x="longitude", y="latitude")
ax.set(xlabel='경도', ylabel='위도')
save_fig("bad_visualization_plot")
# + id="6mXNtiMag32t" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="c0efc6cd-f976-4d46-a828-f925d22c7e07"
ax = housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1)
ax.set(xlabel='경도', ylabel='위도')
save_fig("better_visualization_plot")
# + id="S6rYmb1vg72W" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 567} outputId="d33f8912-4d91-410f-bfbc-f334578679b9"
ax = housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.4,
s=housing["population"]/100, label="인구", figsize=(10,7),
c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True,
sharex=False)
ax.set(xlabel='경도', ylabel='위도')
plt.legend()
save_fig("housing_prices_scatterplot")
# + id="dO3rQDOhhK0J" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 351} outputId="e030cbb5-cddf-48b5-c72d-67d1d125c984"
import matplotlib.image as mpimg
california_img=mpimg.imread(PROJECT_ROOT_DIR + '/images/end_to_end_project/california.png')
ax = housing.plot(kind="scatter", x="longitude", y="latitude", figsize=(10,7),
s=housing['population']/100, label="인구",
c="median_house_value", cmap=plt.get_cmap("jet"),
colorbar=False, alpha=0.4,
)
plt.imshow(california_img, extent=[-124.55, -113.80, 32.45, 42.05], alpha=0.5)
plt.ylabel("위도", fontsize=14)
plt.xlabel("경도", fontsize=14)
prices = housing["median_house_value"]
tick_values = np.linspace(prices.min(), prices.max(), 11)
cbar = plt.colorbar()
cbar.ax.set_yticklabels(["$%dk"%(round(v/1000)) for v in tick_values], fontsize=14)
cbar.set_label('중간 주택 가격', fontsize=16)
plt.legend(fontsize=16)
save_fig("california_housing_prices_plot")
plt.show()
# + id="kyOj79c_hk2y" colab_type="code" colab={}
corr_matrix = housing.corr()
# + id="sMxguOYii_xk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 187} outputId="1abf5611-d00c-49ce-9021-c36ac19c3b6c"
corr_matrix["median_house_value"].sort_values(ascending=False)
# + id="t5PPIKqSjGLn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 590} outputId="dde25414-9f1b-4d64-dbb5-46ade79c1e95"
from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms",
"housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 8))
save_fig("scatter_matrix_plot")
# + id="Hc54zm1AjNJ5" colab_type="code" colab={}
|
handsonml/handson_ml_02_end_to_end_machine_learning_project.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import inspect
import os
import sys
from typing import List, Dict, Tuple, Optional
from torchvision.models.detection.faster_rcnn import TwoMLPHead, FastRCNNPredictor
from torchvision.models.detection.roi_heads import RoIHeads, fastrcnn_loss
from torchvision.ops import MultiScaleRoIAlign
from common.data import DataGenerator
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
import cv2
import mlflow
import numpy as np
import torch
from matplotlib import pyplot as plt
from torch import nn
from src.common.conics import plot_conics, conic_matrix, ellipse_axes, ellipse_angle, conic_center
from src.detection import create_detection_model
from src.detection.training import get_dataloaders
# + pycharm={"name": "#%%\n"}
mlflow.set_tracking_uri("http://localhost:5000/")
mlflow.set_experiment("crater-detection")
# + pycharm={"name": "#%%\n"}
dataset_path = "../data/dataset_2c6e817a-238f-4872-a17a-5896686b837a.h5"
batch_size = 10
num_workers = 4
train_loader, validation_loader, test_loader = get_dataloaders(dataset_path, batch_size, num_workers)
# + pycharm={"name": "#%%\n"}
device = torch.device('cpu')
model = create_detection_model()
model.to(device)
checkpoint = mlflow.pytorch.load_state_dict(r'../artifacts/1/fb52f4caddb7419bb2ac35a2cd1cbbe1/artifacts/checkpoint')
model.load_state_dict(checkpoint['model_state_dict'])
model.eval()
print()
# + pycharm={"name": "#%%\n"}
images, targets = next(iter(test_loader))
with torch.no_grad():
out = model(images)
# + pycharm={"name": "#%%\n"}
test_masks = out[0]['masks']
test_scores = out[0]['scores']
test_masks = test_masks[test_scores > 0.98]
# + [markdown] pycharm={"name": "#%% md\n"}
# ### Baseline
# + pycharm={"name": "#%%\n"}
fig, axes = plt.subplots(len(out), 3, figsize=(10, len(out)*5))
for j in range(len(out)):
test_masks = out[j]['masks']
target_masks = targets[j]['masks']
test_scores = out[j]['scores']
test_masks = test_masks[test_scores > 0.5]
for i in range(test_masks.shape[0]):
cnt = np.array(np.where(test_masks[i, 0].numpy() > 0.0)).T[:, None, :]
cnt[..., [0, 1]] = cnt[..., [1, 0]]
(x, y), (a, b), psi = cv2.fitEllipse(cnt)
psi = np.radians(psi)
A = conic_matrix(a, b, psi, x, y)
plot_conics(A, ax=axes[j, 2])
axes[j, 0].imshow(images[j][0].numpy(), cmap='gray')
axes[j, 1].imshow(np.sum(target_masks.numpy(), axis=0), cmap='gray')
axes[j, 2].imshow(np.sum(test_masks.numpy(), axis=0)[0], cmap='gray')
# + pycharm={"name": "#%%\n"}
model.train()
model(images, targets)
# + pycharm={"name": "#%%\n"}
generator = DataGenerator.from_robbins_dataset(file_path="../data/lunar_crater_database_robbins_2018.csv")
# + pycharm={"name": "#%%\n"}
generator.set_random_position()
while generator.solar_incidence_angle < 30:
generator.set_random_position()
image, mask = map(torch.as_tensor, generator.image_mask_pair())
mask: torch.Tensor = mask.int()
obj_ids = mask.unique()[1:]
masks = mask == obj_ids[:, None, None]
num_objs = len(obj_ids)
boxes = torch.zeros((num_objs, 4), dtype=torch.float32)
for i in range(num_objs):
pos = torch.where(masks[i])
xmin = pos[1].min()
xmax = pos[1].max()
ymin = pos[0].min()
ymax = pos[0].max()
boxes[i] = torch.tensor([xmin, ymin, xmax, ymax])
area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
area_filter = area > 4
masks, obj_ids, boxes, area = map(lambda x: x[area_filter], (masks, obj_ids, boxes, area))
num_objs = len(obj_ids)
labels = torch.ones((num_objs,), dtype=torch.int64)
masks = masks.int()
image_id = torch.tensor([0])
iscrowd = torch.zeros((num_objs,), dtype=torch.int64)
target = dict(
boxes=boxes,
labels=labels,
masks=masks,
image_id=image_id,
area=area,
iscrowd=iscrowd
)
# + pycharm={"name": "#%%\n"}
generator.plot(figsize=(5,5))
plt.imshow(generator.generate_image(), cmap='gray')
# + pycharm={"name": "#%%\n"}
Q_proposals = torch.zeros((len(boxes), 3))
Q_proposals[:, 0] = boxes[:, 0] + (boxes[:, 2] - boxes[:, 0])/2
Q_proposals[:, 1] = boxes[:, 1] + (boxes[:, 3] - boxes[:, 1])/2
# proposal "size" = box diagonal, sqrt(width**2 + height**2)
Q_proposals[:, 2] = torch.sqrt((boxes[:, 2] - boxes[:, 0])**2 + (boxes[:, 3] - boxes[:, 1])**2)
Q_proposals[:5]
# + pycharm={"name": "#%%\n"}
A_craters = generator.craters_in_image()
x, y = conic_center(A_craters).T
a, b = ellipse_axes(A_craters)
angle = ellipse_angle(A_craters)
# + pycharm={"name": "#%%\n"}
E_proposals = np.zeros((len(A_craters), 5))
E_proposals[:, 0] = x
E_proposals[:, 1] = y
E_proposals[:, 2] = a
E_proposals[:, 3] = b
E_proposals[:, 4] = angle
E_proposals = torch.as_tensor(E_proposals)
E_proposals[:5]
# + pycharm={"name": "#%%\n"}
d_x = (E_proposals[:, 0] - Q_proposals[:, 0])/Q_proposals[:, 2]
d_y = (E_proposals[:, 1] - Q_proposals[:, 1])/Q_proposals[:, 2]
d_a = torch.log(2*E_proposals[:, 2]/Q_proposals[:, 2])
d_b = torch.log(2*E_proposals[:, 3]/Q_proposals[:, 2])
d_angle = E_proposals[:, 4]/np.pi
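# As a sanity check on the encoding above (an illustrative sketch; the helper below is
# not part of the project, and is written with NumPy for brevity), the targets can be
# inverted to recover the ellipse parameters from a square proposal (cx, cy, s):

```python
import math
import numpy as np

def decode_targets(Q, d):
    """Invert the (d_x, d_y, d_a, d_b, d_angle) encoding.
    Q: (N, 3) proposals (cx, cy, s); d: (N, 5) regression targets."""
    E = np.empty((len(d), 5))
    E[:, 0] = d[:, 0] * Q[:, 2] + Q[:, 0]      # x = d_x * s + cx
    E[:, 1] = d[:, 1] * Q[:, 2] + Q[:, 1]      # y = d_y * s + cy
    E[:, 2] = np.exp(d[:, 2]) * Q[:, 2] / 2    # a = exp(d_a) * s / 2
    E[:, 3] = np.exp(d[:, 3]) * Q[:, 2] / 2    # b = exp(d_b) * s / 2
    E[:, 4] = d[:, 4] * math.pi                # angle = d_angle * pi
    return E
```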
# + pycharm={"name": "#%%\n"}
d_x
# + pycharm={"name": "#%%\n"}
|
notebooks/ellipse_fitting.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Social network analysis
# <img style="float: right;" src="figures/karate_kid.jpg">
#
# *[Zachary's karate club](https://en.wikipedia.org/wiki/Zachary%27s_karate_club) is a well-known social network of a university karate club described in the paper [An Information Flow Model for Conflict and Fission in Small Groups](http://www1.ind.ku.dk/complexLearning/zachary1977.pdf) by <NAME>.*
# ## Contents
# 1. [Representation of undirected binary networks](#reti_binarie_indirette)<br>
# 2. [*Force-directed placement* algorithms](#placements)<br>
# 3. [Adjacency matrix](#adiacenza)<br>
# 4. [Node information](#informazioni_nodo)<br>
# 5. [The shortest path](#shortest_path)<br>
# 6. [Useful descriptive indices](#indici)<br>
# 6.1 [Node-level descriptive indices](#indici_nodo)<br>
# 6.2 [Network-level descriptive indices](#indici_rete)<br>
# 7. [Community Detection](#detection)<br>
# 7.1 [The Louvain method](#louvain)<br>
# # 1. Rappresentazione di reti binarie indirette <a id=reti_binarie_indirette> </a>
# +
from matplotlib.cbook import mplDeprecation
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore", category=mplDeprecation)
# -
# ### Graphs
# **Graph**, $\mathcal{G} = (\mathcal{N}, \mathcal{A})$
G = nx.karate_club_graph()
print(nx.info(G))
# * Node set $\mathcal{N} = \{1, \dots, V\}$;
# ### Nodes
N = G.nodes()
print("Nodes:", N)
# * An edge is defined as a pair $\{i, j\}$ with $i, j \in \mathcal{N}$, $i > j$;
# * Edge set $\mathcal{A} \subseteq \{\{i, j\} : i, j \in \mathcal{N}, i > j\}$.
# ### Edges
A = G.edges()
print("Edges:", A)
# # 2. *Force-directed placement* algorithms <a id=placements> </a>
# These algorithms assign positions to the nodes using only the information on the edges of the network, with an interpretation borrowed from **physics**. How?
# * **Nodes** are treated as particles in a physical system whose energy results from two main forces acting on each node;
# * **Repulsive force**: similar to **Coulomb's** electrostatic force. It acts on all nodes and produces more energy the closer the nodes are;
# * **Attractive force**: similar to **Hooke's** spring force. It acts only on connected nodes and produces more energy the farther apart they are.
#
# These algorithms place the nodes so as to reach the most stable (lowest-energy) configuration of the particle system.
#
# [spring_layout](https://networkx.github.io/documentation/stable/reference/generated/networkx.drawing.layout.spring_layout.html): *Position nodes using Fruchterman-Reingold force-directed algorithm.*
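# The two forces can be sketched in a few lines. The following is an illustrative single update step under the classic Fruchterman-Reingold force laws (repulsion $k^2/d$, attraction $d^2/k$) — a sketch of the idea, not NetworkX's implementation; `fr_step`, `k` and `step` are made-up names and values:

```python
import numpy as np

def fr_step(pos, edges, k=0.2, step=0.01):
    """One force-directed update. pos: (V, 2) positions; edges: list of (i, j)."""
    disp = np.zeros_like(pos)
    V = len(pos)
    # Repulsion between every pair of nodes: stronger the closer they are.
    for i in range(V):
        for j in range(V):
            if i == j:
                continue
            d = pos[i] - pos[j]
            dist = max(np.linalg.norm(d), 1e-9)
            disp[i] += d / dist * (k ** 2 / dist)
    # Attraction along edges only: stronger the farther apart the endpoints are.
    for i, j in edges:
        d = pos[i] - pos[j]
        dist = max(np.linalg.norm(d), 1e-9)
        force = d / dist * (dist ** 2 / k)
        disp[i] -= force
        disp[j] += force
    return pos + step * disp
```

Iterating this step pulls connected nodes together and pushes unconnected ones apart until the layout stabilizes.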
pos = nx.spring_layout(G, iterations=50, seed=3)
# +
node_color_club = ['lightblue' if (N[nodo]['club'] == "Mr. Hi") else 'orange' for nodo in N]
nx.draw_networkx(
G,
pos=pos,
with_labels=True,
node_color=node_color_club
)
plt.axis('off')
plt.tight_layout()
plt.show()
# -
# # 3. Adjacency matrix <a id=adiacenza> </a>
# **Adjacency matrix**, $Y$
# * $Y$ is a symmetric square matrix of size $V \times V$;
# * Nodes are arranged on rows and columns;
# * $Y_{ij} = Y_{ji} = 1$ if $\{i, j\} \in \mathcal{A}$ ($i$ and $j$ are connected), $0$ otherwise.
Y = nx.adjacency_matrix(G)
V, V = Y.shape
print("Dimensions: {} X {}".format(V, V))
print(Y.todense())
# ### Example
#
# Compute the adjacency matrix $Y$ from the graph $G$ (see the definition above).
# +
from scipy.sparse.csr import csr_matrix
row_ind = []
col_ind = []
for arco in A:
row_ind.extend(list(arco)) # a, b
col_ind.extend(list(arco)[::-1]) # b, a
Y = csr_matrix((np.ones(len(row_ind)), (np.array(row_ind), np.array(col_ind))),
shape=(V, V), dtype=np.int64)
print("Dimensions: {} X {}\n".format(V, V))
print(Y.todense())
# -
# # 4. Node attributes <a id=informazioni_nodo> </a>
# ### club
mr_hi = 0
officer = 33
print("Node {} represents \"Mr. Hi\"".format(mr_hi))
print(N[mr_hi])
print("\nNode {} represents \"Officer\"".format(officer))
print(N[officer])
print("\nThe remaining nodes belong to one of the two clubs.")
# +
node_color_club[mr_hi] = 'blue'
node_color_club[officer] = 'red'
nx.draw_networkx(
G,
pos=pos,
with_labels=True,
node_color=node_color_club
)
plt.axis('off')
plt.tight_layout()
plt.show()
# -
# # 5. The shortest path <a id=shortest_path> </a>
# * For each pair of nodes $i$ and $j$, the *shortest paths* are the shortest walks along connected nodes joining $i$ to $j$;
# * There may be more than one;
# * Length of a *shortest path*: the number of edges it consists of.
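# On an unweighted graph a shortest path can be found with a breadth-first search. A minimal pure-Python sketch on a toy adjacency list (the exercise below uses the ready-made NetworkX functions instead):

```python
from collections import deque

def bfs_shortest_path(adj, source, target):
    """Shortest path in an unweighted graph given as dict node -> neighbors."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == target:
            # Walk the parent pointers back to the source.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nb in adj[node]:
            if nb not in parent:
                parent[nb] = node
                queue.append(nb)
    return None  # target not reachable

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(bfs_shortest_path(adj, 0, 4))  # → [0, 1, 3, 4]
```

Its length in edges is `len(path) - 1`; note that `[0, 2, 3, 4]` is an equally short path — shortest paths need not be unique.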
# ### Exercise
#
# Use the `shortest_path()` and `shortest_path_length()` functions of the NetworkX library (already imported as `nx`) to:
# 1. Identify the shortest path between Mr. Hi and Officer;
# 2. Compute the length of the shortest path.
# +
# ============== YOUR CODE HERE ==============
raise NotImplementedError
shortest_path = []
lunghezza_shortest_path = None
# ============================================
print("Shortest path between Mr. Hi and Officer:", shortest_path)
print("Length of the shortest path between Mr. Hi and Officer:", lunghezza_shortest_path)
# -
# # 6. Useful descriptive indices <a id=indici> </a>
# ## 6.1 Node-level descriptive indices <a id=indici_nodo> </a>
# ### Degree of a node
# * Degree of $i$: the number of nodes it is connected to, $d_i = \sum_{j=1}^{V}Y_{ij}$.
# +
d_mr_hi = G.degree(mr_hi)
print("Degree of the node associated with Mr. Hi: {}".format(d_mr_hi))
# -
plt.hist(dict(G.degree()).values())
plt.title("Degree histogram")
plt.xlabel('Degree')
plt.ylabel('Frequency')
plt.show()
# ### Exercise
#
# Compute the degree of the *Mr. Hi* node from the adjacency matrix $Y$ (see the definition above).
# +
# ============== YOUR CODE HERE ==============
raise NotImplementedError
d_mr_hi = None
# ============================================
print("Degree of the node associated with Mr. Hi: {}".format(d_mr_hi))
# -
# ### *Betweenness*
# * Betweenness of $i$: the sum (over all pairs of nodes $u$ and $v$ different from $i$) of the ratio between the number of *shortest paths* between $u$ and $v$ passing through $i$, $n_{uv}(i)$, and the total number of *shortest paths* between $u$ and $v$, $n_{uv}$: $g_i = \sum_{u \neq i \neq v}\frac{n_{uv}(i)}{n_{uv}}$.
#
#
# [betweenness_centrality](https://networkx.github.io/documentation/stable/reference/algorithms/generated/networkx.algorithms.centrality.betweenness_centrality.html): *Compute the shortest-path betweenness centrality for nodes. Betweenness centrality of a node $v$ is the sum of the fraction of all-pairs shortest paths that pass through $v$*.
#
# * **k** (int, optional (default=None)) – If k is not None use k node samples to estimate betweenness. The value of k <= n where n is the number of nodes in the graph.
g = nx.betweenness_centrality(G, k=None)
print("Betweenness of each node:", g)
# ### Exercise
#
# Extract from the dictionary `g` the betweenness of Mr. Hi.
# +
# ============== YOUR CODE HERE ==============
raise NotImplementedError
betweenness_mr_hi = None
# ============================================
print("Betweenness of the \"Mr. Hi\" node: {:.2f}".format(betweenness_mr_hi))
# -
plt.title("Betweenness histogram")
plt.hist(g.values())
plt.xlabel('Betweenness')
plt.ylabel('Frequency')
plt.show()
# +
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler((100, 1000))
node_size_betweenness = scaler.fit_transform(np.array([g[nodo] for nodo in N]).reshape(-1, 1))
plt.title("Betweenness levels represented in the network")
nx.draw_networkx(
G,
pos=pos,
with_labels=True,
node_color=node_color_club,
node_size=node_size_betweenness
)
plt.axis('off')
plt.tight_layout()
plt.show()
# -
# ## 6.2 Network-level descriptive indices <a id=indici_rete> </a>
# ### Graph density
# * Density of $G$: relative frequency of the observed edges over the total number of possible edges, $D = \frac{1}{V(V - 1)}\sum Y_{ij}$;
# +
D = nx.density(G)
print("Density of graph G: {:.2f}".format(D))
# -
# ## Exercise
# Compute the density of the graph $G$ from the adjacency matrix $Y$ (see the definition above).
# +
# ============== YOUR CODE HERE ==============
raise NotImplementedError
D = None
# ============================================
print("Density of graph G: {:.2f}".format(D))
# -
# ### Graph diameter
# * Diameter of $G$: length of the longest *shortest path*;
print("Diameter of graph G: {}".format(nx.diameter(G)))
# * Average shortest-path length: mean of the shortest-path lengths, $L = \frac{1}{V(V - 1)}\sum s_{ij}$.
L = nx.average_shortest_path_length(G)
print("Average shortest-path length of graph G: {:.2f}".format(L))
# ### Homophily measures - Modularity
# * Modularity: the fraction of edges connecting nodes of the same type minus the expected value of the same quantity in a randomly wired network: $Q = \sum_k^K (e_{kk} - a_k^2)$, where $e_{kk}$ is the fraction of edges entirely contained in community $k$ and $a_k$ is the fraction of edge endpoints contained in community $k$.
#
# Note: in the randomly wired network every node is constrained to keep its degree; in practice it is as if every edge were cut in two and each half-edge, called a *stub*, were rewired at random to any other *stub* in the network. If $a^\star_k$ is the number of *stubs* in community $k$, the number of possible edges contained in community $k$ (allowing *self loops*) is ${a^\star_k}^2$, so the expected number of edges contained in community $k$ is ${a^\star_k}^2 / l^2$, where $l$ is the number of *stubs* in the network. Since $a_k = a^\star_k / l$, the expected number of edges contained in the community is also $a_k^2$.
# +
G_esem = nx.make_small_graph(["edgelist", "Esempio di rete", 6, [[1, 2], [1, 3], [2, 3], [4, 5], [4, 6], [5, 6], [2, 5]]])
pos_esem = nx.spring_layout(G_esem, iterations=50, seed=3)
part_esem = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
node_color_esem = ['lightblue' if (part_esem[nodo] == 0) else 'orange' for nodo in G_esem.nodes()]
nx.draw_networkx(
G_esem,
pos=pos_esem,
node_color=node_color_esem,
with_labels=True
)
plt.title("Example network")
plt.axis('off')
plt.tight_layout()
# -
import community
import pandas as pd
# ### Computing modularity
# +
Q = community.modularity(part_esem, G_esem)
print("Modularity of the graph: {:.2f}".format(Q))
# -
# ### Example
#
# Compute the modularity of a graph without using the `modularity()` function.
# +
freq_rel = pd.DataFrame(
[[6 / 14, 1 / 14], [1 / 14, 6 / 14]],
columns=["Gruppo 1", "Gruppo 2"],
index=["Gruppo 1", "Gruppo 2"]
)
freq_rel['Marginale'] = freq_rel.sum(axis=0)
freq_rel = freq_rel.append(pd.Series(freq_rel.sum(axis=0), name='Marginale'))
print("Relative frequency table:")
display(freq_rel.round(2))
num_archi = 7
num_estremita = num_archi * 2
num_archi_1 = 3
num_archi_2 = 3
num_estremita_1 = 7
num_estremita_2 = 7
Q_a_mano = (num_archi_1 / num_archi + num_archi_2 / num_archi) - \
((num_estremita_1 / num_estremita) ** 2 + (num_estremita_2 / num_estremita) ** 2)
Q_freq_rel = np.diagonal(freq_rel)[:-1].sum() - (freq_rel['Marginale'][:-1] ** 2).sum()
Q = community.modularity(part_esem, G_esem)
print("\nValue of Q computed \"by hand\": {:.2f}".format(Q_a_mano))
print("Value of Q computed from the relative frequency matrix: {:.2f}".format(Q_freq_rel))
print("Value of Q computed via the modularity function: {:.2f}".format(Q))
# -
# ### Exercise
#
# Define a new network and repeat the exercise.
# ============== YOUR CODE HERE ==============
raise NotImplementedError
# ============================================
# # 7. Community Detection <a id=detection> </a>
# **Goal**: divide the network into communities of nodes such that nodes within each community have many connections to each other (dense network), while nodes in different communities are weakly connected (sparse network). Several approaches exist:
# * **Girvan-Newman algorithm**: based on edge *betweenness*;
# * **Louvain method**: modularity optimization;
# * And others ...
communities = next(nx.community.girvan_newman(G))
part_girvan_newman = {node: int(node in communities[0]) for node in N}
Q_girvan_newman = community.modularity(part_girvan_newman, G)
print("Modularity, communities identified with Girvan-Newman (n = 2): {:.2f}".format(Q_girvan_newman))
# ## 7.1 Louvain method <a id=louvain> </a>
# **Louvain method**
# 1. The algorithm is initialized by placing each node in its own community;
# 2. For each node $i$, compute the modularity gain $\Delta Q_{i:i \rightarrow C_j}$ obtained by moving $i$ from its community to that of each node $j$ connected to $i$;
# 3. Node $i$ is placed in the community with the largest modularity increase, if that increase is positive; otherwise it stays in its community. This process is applied repeatedly and sequentially to all nodes until the modularity no longer increases;
# 4. The communities are aggregated to form a new (weighted, with *self loops*) network in which communities are the new nodes and the new edge weights are given by the total number of edges connecting the nodes in the two communities;
# 5. Go back to 2. and reapply the procedure to the new network of communities.
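# Steps 2.-3. hinge on evaluating a modularity gain $\Delta Q$ for a single local move. An illustrative pure-Python sketch (not the library's optimized incremental formula) that recomputes $Q = \sum_c (L_c/m - (d_c/2m)^2)$ before and after one move:

```python
def modularity_q(edges, partition):
    """Q for a dict node -> community; edges given as a list of pairs."""
    m = len(edges)
    degree, intra, deg_sum = {}, {}, {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
        if partition[u] == partition[v]:
            intra[partition[u]] = intra.get(partition[u], 0) + 1
    for node, d in degree.items():
        deg_sum[partition[node]] = deg_sum.get(partition[node], 0) + d
    return sum(intra.get(c, 0) / m - (deg_sum[c] / (2 * m)) ** 2 for c in deg_sum)

# Two triangles joined by the bridge (2, 3); node 2 starts in the "wrong" community.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
part = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 1}

before = modularity_q(edges, part)
part[2] = 0  # local move: put node 2 back with its triangle
gain = modularity_q(edges, part) - before
print(f"Modularity gain of the move: {gain:+.3f}")  # positive, so the move is kept
```

The Louvain method accepts exactly the moves with the largest positive gain, then aggregates communities and repeats.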
part_louvain = community.best_partition(G)
Q_louvain = community.modularity(part_louvain, G)
print("Modularity, communities identified with Louvain: {:.2f}".format(Q_louvain))
# +
node_color_girvan_newman = ['lightgreen' if (nodo in communities[0]) else 'yellow' for nodo in N]
node_color_louvain = [part_louvain.get(nodo) for nodo in N]
plt.figure(figsize=(12, 8))
plt.subplot(221)
nx.draw_networkx(
G,
pos=pos,
with_labels=True,
node_color=node_color_club,
node_size=node_size_betweenness
)
plt.title("The two clubs")
plt.axis('off')
plt.tight_layout()
plt.subplot(222)
nx.draw_networkx(
G,
pos=pos,
with_labels=True,
node_color=node_color_girvan_newman,
node_size=node_size_betweenness
)
plt.title("Communities found using the Girvan-Newman algorithm (n = 2)")
plt.axis('off')
plt.tight_layout()
plt.subplot(223)
plt.text(0.5, 0.6, "Community Detection", size=30, ha="center", va="center")
plt.axis('off')
plt.tight_layout()
plt.subplot(224)
nx.draw_networkx(
G,
pos=pos,
with_labels=True,
cmap=plt.get_cmap("Set2"),
node_color=node_color_louvain,
node_size=node_size_betweenness
)
plt.title("Communities found using the Louvain method")
plt.axis('off')
plt.tight_layout()
plt.show()
# -
14_analisi_delle_reti_sociali.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Analysis
#
# ### Goal:
# #### Based on the "houseprices" table created after cleaning the data obtained by web scraping, the contents of the dataset will be analyzed to answer some questions, and finally a few regression models will be tested to try to predict house prices from their characteristics.
#
# ****************
#
# ##### Below are the 8 questions I want to answer with this analysis.
# 1. What are the minimum, mean, median and maximum values of the houses in Ipiranga?
# 2. What are the main characteristics of these houses?
# 3. Which advertisers have the largest number of listings?
# 4. Which advertiser has the highest and the lowest prices?
# 5. What is the relationship between a house's characteristics and its price?
# 6. Which houses are outliers in this dataset?
# 7. Which house is the best value for money in this dataset?
# 8. What is the best model to predict house prices from the characteristics found in this dataset?
# +
# imports for the analysis
import sqlite3
import numpy as np
import pandas as pd
from scipy.stats import zscore
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = 10, 5
# imports for building the machine learning model
from sklearn.model_selection import train_test_split
#from sklearn.linear_model import RidgeCV, LassoCV, Ridge, Lasso
from sklearn import metrics
from sklearn.ensemble import RandomForestRegressor
#from sklearn.metrics import roc_curve, auc
from sklearn.linear_model import LinearRegression
#from sklearn import linear_model
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import VotingRegressor
# +
# Connect to SQLite
conn = sqlite3.connect('C:/python/web-scraping/house-prices/houseprices.db')
cur = conn.cursor()
# +
# extract the data from SQLite
df = pd.read_sql_query("""
select
area
,quarto
,banheiro
,vagas_garagem
,valor_imovel
,anunciante
,endereco
,link
from houseprices_vivareal_cleaned
where tipo_imovel = 'Casa'
and bairro = 'Ipiranga'
and area is not null
""", conn)
# -
df.info()
# #### Documentation:
#
# | Column | Description |
# |---|---|
# | area | total area of the property in m² |
# | quarto | number of bedrooms |
# | banheiro | number of bathrooms |
# | vagas_garagem | number of parking spaces |
# | valor_imovel | property price in reais |
# | anunciante | advertiser name |
# | endereco | property address |
# | link | listing link |
#
# +
# Display the first rows of the dataset
pd.options.display.float_format = '{:,.0f}'.format
df.head()
# +
# Display the statistical summary of the dataset
df.describe()
# +
# fill nulls with 0
df.fillna(0, inplace=True)
# -
# ### Let's start answering the questions
# #### 1. What are the minimum, mean, median and maximum values of the houses in Ipiranga?
print('Minimum value: ' ,"R$ {:,.0f}".format(np.min(df['valor_imovel'])).replace(',','.'))
print('Maximum value: ' ,"R$ {:,.0f}".format(np.max(df['valor_imovel'])).replace(',','.'))
print('Mean value: ' ,"R$ {:,.0f}".format(np.mean(df['valor_imovel'])).replace(',','.'))
print('Median value:',"R$ {:,.0f}".format(np.median(df['valor_imovel'])).replace(',','.'))
print('Standard deviation:' ,"R$ {:,.0f}".format(np.std(df['valor_imovel'])).replace(',','.'))
print('Coefficient of variation:' ,"{:,.4f}".format(np.std(df['valor_imovel'])/np.mean(df['valor_imovel'])))
# The house prices in this dataset are quite dispersed, ranging from 47k to 5.5MM, and deviate considerably from the mean.
# Half of the houses are worth up to 850k and the other half are above that value. This is already a good indication of what we will find in these data.
#
# See below the distribution of house prices in Ipiranga.
f, (ax) = plt.subplots(1, figsize=(25, 5))
plot_valor = df['valor_imovel'].values
sns.histplot (plot_valor, color='#293352', kde=True, linewidth=0)
ax.set_title('DISTRIBUTION OF HOUSE PRICES (R$ millions)', fontsize=14)
ax.set_xlim(0,np.max(df['valor_imovel']))
start, end = ax.get_xlim()
ax.xaxis.set_ticks(np.arange(start, end+100000, 250000));
# #### 2. What are the main characteristics of these houses?
fig = plt.figure(figsize=(30,10))
ax1 = plt.subplot2grid((3,3), (1,1)); sns.histplot(data = df.area,kde=True )
ax2 = plt.subplot2grid((3,3), (1,2)); sns.histplot(data = df.quarto)
ax3 = plt.subplot2grid((3,3), (2,1)); sns.histplot(data = df.banheiro)
ax3 = plt.subplot2grid((3,3), (2,2)); sns.histplot(data = df.vagas_garagem)
fig.tight_layout()
# #### The main characteristics of these houses are:
# * The total area of most houses is under 200 square meters;
# * Houses with up to two parking spaces are the most common;
# * Most of these houses have between two and three bedrooms and bathrooms;
# #### 3. Which advertisers have the largest number of listings?
# +
anunciante = df.groupby('anunciante')['valor_imovel'].agg(['count','sum','mean','median','max','min']).reset_index()
anunciante_plot = anunciante.sort_values('count', ascending = False).reset_index()
anunciante_plot.index = anunciante_plot['anunciante']
anunciante_plot["cumpercentage"] = anunciante_plot["count"].cumsum()/anunciante_plot["count"].sum()*100
anunciante_plot = anunciante_plot.head(30)
fig, ax = plt.subplots()
ax.bar(anunciante_plot.index, anunciante_plot["count"], color="C0")
ax2 = ax.twinx()
ax2.plot(anunciante_plot.index, anunciante_plot["cumpercentage"], color="C1", marker="D", ms=7)
ax2.yaxis.set_major_formatter(PercentFormatter())
ax.tick_params(axis="y", colors="C0")
ax2.tick_params(axis="y", colors="C1")
for tick in ax.get_xticklabels():
tick.set_rotation(90)
plt.show()
print('Total advertisers in the dataset:',anunciante.count().max())
print('Total listings of the 30 largest advertisers:',anunciante_plot['count'].sum())
# -
# About 10% of the advertisers account for 60% of the listings in the dataset.
#
# The highlights are:
# * <NAME> - Vendas
# * <NAME>
# * <NAME>cios Imobiliários
# #### 4. Which advertiser has the highest and the lowest prices?
max_price = anunciante.sort_values('median', ascending = False).head(10)
max_price[['anunciante','count','median','mean']].head(5)
min_price = anunciante.sort_values('median', ascending = True).head(10)
min_price[['anunciante','count','median','mean']].head(5)
# ##### Highest prices:
#
# The advertisers with the highest prices are [ Gonçalves Personnalite, <NAME> Anastacio Junior, Romário Imóveis & Ipermutei and <NAME> Imóveis ]; their listings exceed 2MM, however, each of these advertisers has only one listing in the dataset.
#
# ##### Lowest prices:
#
# The lowest prices in the dataset belong to the advertisers [Intermedium, <NAME>, <NAME>, <NAME> and Dreamcasa]; these values are really low, but they may not be real, or there may be an error in the property type and/or price classification.
# #### 5. What is the correlation between a house's characteristics and its price?
fig = plt.figure(figsize=(25.3,5))
fig1 = fig.add_subplot(141); sns.regplot(x='area', y='valor_imovel', data=df)
fig2 = fig.add_subplot(142); sns.regplot(x='quarto', y='valor_imovel', data=df);
fig3 = fig.add_subplot(143); sns.regplot(x='banheiro', y='valor_imovel', data=df);
fig4 = fig.add_subplot(144); sns.regplot(x='vagas_garagem', y='valor_imovel', data=df);
# As the charts above show, all the characteristics of the houses in this dataset are positively correlated with price; that is, the price is higher the larger the area and the more bedrooms, bathrooms and parking spaces the house has.
# +
# Summary of correlations of each variable with the house price
pd.options.display.float_format = '{:,.4f}'.format
cor = df.corr()
cor_target = abs(cor["valor_imovel"])
variaveis = cor_target[cor_target>0.0]
print('Correlation summary:\n')
variaveis.sort_values(ascending=False)
# -
# The Pearson correlation indicates that area and the number of parking spaces are moderately correlated with the house price, while the number of bedrooms and bathrooms are somewhat more weakly correlated with it; still, all variables are positively correlated.
# #### 6. Which houses are outliers in this dataset?
# The metric I will use to compute the z-score is the price per square meter of the house.
df['valor_m2'] = df['valor_imovel']/df['area']
f, (ax) = plt.subplots(1, figsize=(25, 5))
plot_valor = df['valor_m2'].values
sns.histplot (plot_valor, color='#52854C', kde=True, linewidth=0)
ax.set_title('DISTRIBUTION OF HOUSE PRICES PER SQUARE METER', fontsize=14)
ax.set_xlim(0,np.max(df['valor_m2']))
start, end = ax.get_xlim()
plt.xticks(rotation=30)
ax.xaxis.set_ticks(np.arange(start, end,1000));
df['zscore_val_m2'] = zscore(df['valor_m2'])
outliers = df.loc[df['zscore_val_m2'].abs() > 3]
outliers.sort_values('zscore_val_m2', ascending = False).reset_index()
# The z-score was used to identify outliers in the dataset based on the house price per square meter; 18 houses with the most discrepant values were found. These are basically the houses priced above 12,140k per m2.
# #### 7. Which house is the best value for money in this dataset?
#
# This question is somewhat relative to what is best for each buyer, but using the variables available in this DataFrame, the best-value houses are those with the lowest price per square meter and a number of bedrooms, bathrooms and parking spaces best suited to the buyer's interest.
# filter at a z-score of up to two standard deviations to avoid questionable listings
df_limpo = df.loc[df['zscore_val_m2'].abs() < 2]
# sample filter for the buyer's goal
df_filtro = df_limpo.query('quarto >= 2 &\
banheiro >= 2 &\
vagas_garagem >= 1 &\
area > 300')
df_filtro.sort_values('valor_m2', ascending=True).head(5)
# link to look the listing up on the site https://www.vivareal.com.br/
df_filtro.loc[df_filtro.index == 1251 , 'link'].values
# #### 8. What is the best model to predict house prices from the characteristics found in this dataset?
#
# Since the goal is to predict the house price from its characteristics, the models tested will be:
# * LinearRegression
# * RandomForestRegressor
# * KNeighborsRegressor
# * DecisionTreeRegressor
# * BaggingRegressor
# * VotingRegressor
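# Each of the six models below is scored with the same four metrics (R2, MAE, MSE, RMSE) and stored in a results dictionary. The repeated block could be captured by a small helper; this is a hypothetical sketch (the helper name and its pure-Python formulas are mine, mirroring what `sklearn.metrics` computes), shown with toy numbers:

```python
def avaliar(nome, y_teste, y_pred, resultados):
    """Compute R2/MAE/MSE/RMSE from plain lists and store them under `nome`."""
    n = len(y_teste)
    media = sum(y_teste) / n
    ss_res = sum((yt - yp) ** 2 for yt, yp in zip(y_teste, y_pred))
    ss_tot = sum((yt - media) ** 2 for yt in y_teste)
    mse = ss_res / n
    resultados[nome] = {
        'R2': 1 - ss_res / ss_tot,   # coefficient of determination
        'MAE': sum(abs(yt - yp) for yt, yp in zip(y_teste, y_pred)) / n,
        'MSE': mse,
        'RMSE': mse ** 0.5,
    }
    return resultados[nome]

resultados_exemplo = {}
print(avaliar('toy', [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8], resultados_exemplo))
```

In the notebook the helper would be called once per model with `modelo.predict(x_teste)` as `y_pred`, keeping the per-model cells to two lines.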
# +
# delete the columns that will not be used
del df_limpo['zscore_val_m2']
del df_limpo['valor_m2']
del df_limpo['anunciante']
del df_limpo['endereco']
del df_limpo['link']
# +
# Summary of correlations of each variable with the house price, using the cleaned dataset
pd.options.display.float_format = '{:,.4f}'.format
cor = df_limpo.corr()
cor_target = abs(cor["valor_imovel"])
variaveis = cor_target[cor_target>0.0]
print('Correlation summary:\n')
variaveis.sort_values(ascending=False)
# -
# Since the listings whose price per square meter had a z-score above 2 were removed, the correlation of the remaining data improved a little and few records were lost.
#
# Now the Pearson correlation of the variables with the house price indicates that area is strongly correlated, the number of parking spaces and bathrooms are moderately correlated, and the number of bedrooms is somewhat weakly correlated with the house price.
# +
# shape of the dataset
df_limpo.shape
# +
# Correlation between the variables
plt.figure(figsize=(4,4))
cor = df_limpo.corr()
sns.heatmap(cor, annot=True, cmap=plt.cm.Greens)
plt.show()
# -
# Before building the models, I will set up a dictionary to store the results and compare them later
# +
# create the dictionary
resultados = {}
# +
# selecting the features
atributos = ['vagas_garagem','area','quarto','banheiro']
atrib_prev = ['valor_imovel']
# +
# creating the objects
x = df_limpo[atributos].values
y = df_limpo[atrib_prev].values
# +
# setting the size of the test set
split_teste_size = 0.30
# +
# creating the training and test sets
x_treino, x_teste, y_treino, y_teste = train_test_split(x, y, test_size = split_teste_size, random_state = 42)
# -
print("{0:0.2f}% in the training data".format((len(x_treino)/len(df_limpo.index))*100))
print("{0:0.2f}% in the test data".format((len(x_teste)/len(df_limpo.index))*100))
# #### Model 1 - LinearRegression
# +
modelo1 = LinearRegression()
modelo1.fit(x_treino,y_treino);
y_pred = modelo1.predict(x_teste)
r2 = metrics.r2_score(y_teste,y_pred)
mae = metrics.mean_absolute_error(y_teste,y_pred)
mse = metrics.mean_squared_error(y_teste,y_pred, squared=True)
rmse = metrics.mean_squared_error(y_teste,y_pred, squared=False)
resultados['LinearRegression'] = {'R2':r2, 'MAE':mae, 'MSE':mse, 'RMSE':rmse}
print('R2 :',r2, '- MAE :',mae, ' - MSE :',mse, ' - RMSE:',rmse)
# -
# #### Model 2 - RandomForestRegressor
# +
modelo2 = RandomForestRegressor(n_estimators=220, min_samples_leaf=3, random_state=26, max_depth=10 )
modelo2.fit(x_treino, y_treino.ravel());
y_pred = modelo2.predict(x_teste)
r2 = metrics.r2_score(y_teste,y_pred)
mae = metrics.mean_absolute_error(y_teste,y_pred)
mse = metrics.mean_squared_error(y_teste,y_pred, squared=True)
rmse = metrics.mean_squared_error(y_teste,y_pred, squared=False)
resultados['RandomForestRegressor'] = {'R2':r2, 'MAE':mae, 'MSE':mse, 'RMSE':rmse}
print('R2 :',r2, '- MAE :',mae, ' - MSE :',mse, ' - RMSE:',rmse)
# -
# #### Model 3 - KNeighborsRegressor
# +
modelo3 = KNeighborsRegressor(n_neighbors=4,metric='euclidean')
modelo3.fit(x_treino,y_treino);
y_pred = modelo3.predict(x_teste)
r2 = metrics.r2_score(y_teste,y_pred)
mae = metrics.mean_absolute_error(y_teste,y_pred)
mse = metrics.mean_squared_error(y_teste,y_pred, squared=True)
rmse = metrics.mean_squared_error(y_teste,y_pred, squared=False)
resultados['KNeighborsRegressor'] = {'R2':r2, 'MAE':mae, 'MSE':mse, 'RMSE':rmse}
print('R2 :',r2, '- MAE :',mae, ' - MSE :',mse, ' - RMSE:',rmse)
# -
# #### Model 4 - DecisionTreeRegressor
# +
modelo4 = DecisionTreeRegressor(random_state=26)
modelo4.fit(x_treino,y_treino);
y_pred = modelo4.predict(x_teste)
r2 = metrics.r2_score(y_teste,y_pred)
mae = metrics.mean_absolute_error(y_teste,y_pred)
mse = metrics.mean_squared_error(y_teste,y_pred, squared=True)
rmse = metrics.mean_squared_error(y_teste,y_pred, squared=False)
resultados['DecisionTreeRegressor'] = {'R2':r2, 'MAE':mae, 'MSE':mse, 'RMSE':rmse}
print('R2 :',r2, '- MAE :',mae, ' - MSE :',mse, ' - RMSE:',rmse)
# -
# #### Model 5 - Ensemble - BaggingRegressor
# +
modelo_base = DecisionTreeRegressor(random_state=26)
modelo5 = BaggingRegressor(base_estimator=modelo_base, n_estimators=10, random_state=26)
modelo5.fit(x_treino,y_treino.ravel());
y_pred = modelo5.predict(x_teste)
r2 = metrics.r2_score(y_teste,y_pred)
mae = metrics.mean_absolute_error(y_teste,y_pred)
mse = metrics.mean_squared_error(y_teste,y_pred, squared=True)
rmse = metrics.mean_squared_error(y_teste,y_pred, squared=False)
resultados['BaggingRegressor'] = {'R2':r2, 'MAE':mae, 'MSE':mse, 'RMSE':rmse}
print('R2 :',r2, '- MAE :',mae, ' - MSE :',mse, ' - RMSE:',rmse)
# -
# #### Model 6 - Ensemble - VotingRegressor
# +
r1 = LinearRegression()
r2 = RandomForestRegressor(n_estimators=190, random_state=26)
modelo6 = VotingRegressor([('LR', r1), ('RF', r2)])
modelo6.fit(x_treino, y_treino.ravel());
y_pred = modelo6.predict(x_teste)
r2 = metrics.r2_score(y_teste,y_pred)
mae = metrics.mean_absolute_error(y_teste,y_pred)
mse = metrics.mean_squared_error(y_teste,y_pred, squared=True)
rmse = metrics.mean_squared_error(y_teste,y_pred, squared=False)
resultados['VotingRegressor'] = {'R2':r2, 'MAE':mae, 'MSE':mse, 'RMSE':rmse}
print('R2 :',r2, '- MAE :',mae, ' - MSE :',mse, ' - RMSE:',rmse)
# -
print("Model comparison:")
cm = sns.color_palette('Reds', as_cmap=True)
pd.DataFrame(resultados).T.style.background_gradient(subset=['R2'], cmap=cm)
# Although none of these models produced satisfactory results, model 2, *RandomForestRegressor*, achieved the best performance, with an R2 of 71.2776% and a mean absolute error of 183.7k.
# # Conclusion
# The web-scraping collection from the Viva Real site returned 1577 house listings in the Ipiranga region of São Paulo - SP; of this total, 6 did not match the filters applied on the site, being from other regions or another property type, and were discarded during the analysis.
#
# Most of the collected listings do not include the street or house number, which somewhat limited the comparison between them.
#
# During the data-cleaning process, three listings with very discrepant areas were found (probably an advertiser error); the decision was to edit the area of these houses to bring them in line with the others.
#
# The main insights from the analysis were:
# * The median price of the listed houses is 850k;
# * Listings were found ranging from 47k to 5.5MM;
# * The top 30 advertisers account for 60% of the listings;
# * Among the best-value listings there is a 440 m2 house for 650k (the analysis was limited to the data; further investigation is needed).
# * Although no model achieved great results, the best model to predict house prices from the characteristics of this dataset was the RandomForestRegressor.
#
data-analysis-houseprices.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: hackathon_data:Python
# language: python
# name: conda-env-hackathon_data-py
# ---
# %conda env update -f env.yml
# +
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timedelta
import os
from pathlib import Path
from random import sample
import shutil
import subprocess
import geopandas as gpd
import numpy as np
import pandas as pd
import requests
import s3fs
from shapely.geometry import box
import rasterio
from rasterio.errors import RasterioIOError
# -
from src.constants import FIRMS_API_KEY, DEFAULT_PARAMS
from src.data_sources import (cluster_fires,
create_chip_bounds,
ndvi_from_topleft,
landcover_from_topleft,
atmospheric_from_topleft,
fires_from_topleft,
elevation_from_topleft)
if not FIRMS_API_KEY:
    raise ValueError('FIRMS_API_KEY is empty, please ensure the environment variable is set')
# # Input parameters
# +
output_fp = '/home/studio-lab-user/sagemaker-studiolab-notebooks/hackathon_data/predict_test'
output_s3 = os.environ['AWS_S3_BUCKET'] + '/predict_test'
# North America
bbox= [-168.7,24.8,-51.8,74.2]
# dates to search for fires
begin_date = datetime(2021,12,1)
end_date = datetime(2021,12,15)
# -
fs = s3fs.S3FileSystem(key=os.environ['AWS_ACCESS_KEY_ID'], secret=os.environ['AWS_SECRET_ACCESS_KEY'])
# # Load fire data
def load_fires(begin_date, end_date, bbox, day_range=10):
"""
Given input parameters, search NASA firms API for fires and return GeoDataFrame of fire points
:param begin_date: datetime
:param end_date: datetime
:param bbox: list of floats; left, bottom, right, top
:param day_range: int for the number of days to search API for
:return: gpd.GeoDataFrame of fire points
"""
if day_range > 10:
# firms api allows max of 10 day range see https://firms.modaps.eosdis.nasa.gov/api/area/
raise ValueError('"day_range" must be less than or equal to 10')
start_dates = (pd.date_range(start=begin_date, end=end_date, freq=f"{day_range}D") + pd.Timedelta(f'{day_range}d')).to_pydatetime().tolist()
if len(start_dates) == 0:
        raise ValueError('No dates to search for, check "begin_date" and "end_date" are formatted correctly')
# get min date of the VIIRS_SNPP_NRT so that we can decide based on the date range which FIRMS product(s) we need
viirs_nrt = pd.read_csv(f'https://firms.modaps.eosdis.nasa.gov/api/data_availability/csv/{FIRMS_API_KEY}/VIIRS_SNPP_NRT')
viirs_nrt_start = datetime.strptime(viirs_nrt.iloc[0].min_date, '%Y-%m-%d')
df_fires = pd.DataFrame()
for start_date in start_dates:
mapkey_status = requests.get(f'https://firms.modaps.eosdis.nasa.gov/mapserver/mapkey_status/?MAP_KEY={FIRMS_API_KEY}')
if mapkey_status.json()['current_transactions'] > 460:
# TODO: tenacity retry wait_exponential:
raise ValueError('Not enough free transactions left with FIRMS API for given key')
# split requests by date for VIIRS_SNPP_SP/VIIRS_SNPP_NRT
if start_date > viirs_nrt_start:
nrt_request_url = f'https://firms.modaps.eosdis.nasa.gov/api/area/csv/{FIRMS_API_KEY}/VIIRS_SNPP_NRT/{",".join([str(i) for i in bbox])}/{day_range}/{start_date.strftime("%Y-%m-%d")}'
            # DataFrame.append was removed in pandas 2.0; use pd.concat instead
            df_fires = pd.concat([df_fires, pd.read_csv(nrt_request_url)], ignore_index=True)
if (start_date - timedelta(days=day_range)) < viirs_nrt_start:
sp_request_url = f'https://firms.modaps.eosdis.nasa.gov/api/area/csv/{FIRMS_API_KEY}/VIIRS_SNPP_SP/{",".join([str(i) for i in bbox])}/{day_range}/{start_date.strftime("%Y-%m-%d")}'
            df_fires = pd.concat([df_fires, pd.read_csv(sp_request_url)], ignore_index=True)
# drop fires after end_date
df_fires = df_fires[((df_fires['acq_date'].astype('datetime64[ns]') <= end_date) & (df_fires['acq_date'].astype('datetime64[ns]') >= begin_date))]
gdf_fires = gpd.GeoDataFrame(df_fires, geometry=gpd.points_from_xy(df_fires.longitude, df_fires.latitude), crs='EPSG:4326')
return gdf_fires
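# The TODO inside `load_fires` hints at retrying with exponential backoff when the FIRMS
# transaction budget is exhausted (tenacity's `wait_exponential`, mentioned there, packages
# the same idea). A minimal stdlib-only sketch of that pattern; the `with_backoff` helper
# and its parameters are illustrative, not part of this project:

```python
import time

def with_backoff(fn, retries=5, base=1.0):
    """Call fn(), retrying on ValueError with exponentially growing waits."""
    for attempt in range(retries):
        try:
            return fn()
        except ValueError:
            if attempt == retries - 1:
                raise
            time.sleep(base * 2 ** attempt)
```

# It could wrap the call below, e.g. `with_backoff(lambda: load_fires(begin_date, end_date, bbox))`.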
gdf_fires = load_fires(begin_date, end_date, bbox)
gdf_fires
# +
# create clusters
print('Clustering fires')
df_fire_clustered = cluster_fires(gdf_fires)
# create chip bounds
print('Creating chip bounds')
Path(output_fp).mkdir(parents=True, exist_ok=True)
manifest = create_chip_bounds(df_fire_clustered)
manifest
# -
# # Process Chips
# + tags=[]
def process_chip(chip, fs, output_fp, output_s3, fires, cog_footprints, training=True):
"""
    Given a chip's metadata, load all of the training data, write numpy files, and finally upload the results to S3
:param chip: records.csv chip to process data for
:param fs: s3fs.S3FileSystem
:param output_fp: local directory to write data to
    :param output_s3: S3 prefix to upload data to
:param fires: gpd.GeoDataFrame or path to vector file containing fire point data
:param cog_footprints: gpd.GeoDataFrame of the dem footprints
:param training: bool, if true then will load/write next days fires
"""
chip_idx, left, bottom, top, right, epsg, chip_date = chip["idx"], chip["left"], chip["bottom"], chip["top"], chip["right"], chip["epsg"], chip["date"]
if os.path.exists(output_fp + f'/{chip_idx}'):
return
# check not already on s3
if len(fs.ls(f'{output_s3}/{chip_idx}')) != 0:
return
print(f'Processing chip: {chip_idx}')
    # create output dir if it doesn't already exist
output_dir = Path(output_fp).joinpath(str(chip_idx))
output_dir.mkdir(parents=True, exist_ok=True)
# load modis
try:
ndvi = ndvi_from_topleft([top, left], epsg, chip_date)
np.save(output_dir.joinpath('ndvi.npy'), ndvi)
except RasterioIOError:
# modis missing from bucket
shutil.rmtree(output_dir)
return
# save bbox to geojson
bounds_utm = rasterio.coords.BoundingBox(left=left, right=right, bottom=bottom, top=top)
gpd.GeoSeries([box(*bounds_utm)]).set_crs(epsg).to_file(output_dir.joinpath('bbox.geojson'))
# load fires
todays_fires = fires_from_topleft([top, left], epsg, chip_date, fires=fires)
    np.save(output_dir.joinpath('todays_fires.npy'), todays_fires.bool)
    np.save(output_dir.joinpath('todays_frp.npy'), todays_fires.frp)
# load tomorrows fires if training
if training:
tomorrows_date = (datetime.strptime(chip_date, '%Y-%m-%d') + timedelta(days=1)).strftime('%Y-%m-%d')
tomorrows_fires = fires_from_topleft([top, left], epsg, tomorrows_date, fires=fires)
        np.save(output_dir.joinpath('tomorrows_fires.npy'), tomorrows_fires.bool)
        np.save(output_dir.joinpath('tomorrows_frp.npy'), tomorrows_fires.frp)
# load dem
dem = elevation_from_topleft([top, left], epsg, cog_footprints)
np.save(output_dir.joinpath('elevation.npy'), dem)
# load landcover
landcover = landcover_from_topleft([top, left], epsg)
np.save(output_dir.joinpath('landcover.npy'), landcover)
# load atmospheric
atmos = atmospheric_from_topleft([top, left], epsg, chip_date, DEFAULT_PARAMS)
for var in list(atmos.data_vars):
data_arr = getattr(atmos, var).values[0]
np.save(output_dir.joinpath(f'{var}.npy'), data_arr)
fs.upload(str(output_dir),
f'{output_s3}/{chip_idx}/',
recursive=True)
shutil.rmtree(output_dir)
# -
chips = list(manifest.T.to_dict().values())
print(f'Chips total = {len(chips)}')
# query s3 to see which chips already processed and remove from list
processed_chips = fs.ls(output_s3)
processed_ids = [int(x.split('/')[-1]) for x in processed_chips if x.split('/')[-1].isdigit()]
print(f'Processed = {len(processed_ids)}')
to_process = [x for x in chips if x['idx'] not in processed_ids]
print(f'To process = {len(to_process)}')
# + tags=[]
# %%time
os.environ['AWS_NO_SIGN_REQUEST'] = 'True'
cog_footprints = gpd.GeoDataFrame.from_file('s3://copernicus-dem-30m/grid.zip')
with ThreadPoolExecutor(max_workers=os.cpu_count()) as executor:
future_work = [
executor.submit(process_chip, chip, fs, output_fp, output_s3, gdf_fires, cog_footprints, training=False) for chip in to_process
]
# + [markdown] tags=[]
# # Explore some of the processed chips
# +
import matplotlib.pyplot as plt
for chip in [x for x in fs.ls(output_s3) if x.split('/')[-1].isdigit()]:
print(chip.split('/')[-1])
try:
tf = np.load(fs.open(chip + '/todays_fires.npy'))
except FileNotFoundError:
continue
fig, (ax1, ax3, ax4, ax5, ax6) = plt.subplots(1, 5, figsize=(20,20))
im = ax1.imshow(tf)
ax1.title.set_text('todays_fires')
lc = np.load(fs.open(chip + '/landcover.npy'))
im = ax3.imshow(lc)
ax3.title.set_text('land cover')
el = np.load(fs.open(chip + '/elevation.npy'))
im = ax4.imshow(el)
ax4.title.set_text('elevation')
nd = np.load(fs.open(chip + '/ndvi.npy'))
im = ax5.imshow(nd)
ax5.title.set_text('ndvi')
sa = np.load(fs.open(chip + '/surface_air_pressure.npy'))
im = ax6.imshow(sa)
ax6.title.set_text('surface_air_pressure')
plt.show()
# -
|
dataset_preparation/2_PredictionDataCreation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Mutex Watershed
#
# Use the `elf.segmentation` module for boundary-based segmentation with the mutex watershed algorithm: [The Mutex Watershed: Efficient, Parameter-Free Image Partitioning](http://openaccess.thecvf.com/content_ECCV_2018/html/Steffen_Wolf_The_Mutex_Watershed_ECCV_2018_paper.html).
# We use data from the paper based on the [ISBI 2012 EM Segmentation challenge](http://brainiac2.mit.edu/isbi_challenge/home).
# You can obtain this data [here](https://hcicloud.iwr.uni-heidelberg.de/index.php/s/6LuE7nxBN3EFRtL).
#
# The mutex watershed can operate directly on pixel affinity maps.
# It produces a segmentation by partitioning the grid graph, taking into account long range pixel connections. This is achieved by greedily connecting pixels that are joined by a path of local affinity edges **unless** there exists a long range edge that prevents this join.
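# As a toy illustration of that rule (not affogato's actual implementation), a minimal
# mutex watershed over an explicit edge list can be sketched with a union-find plus
# per-cluster mutex constraints; the edge tuple layout and weights below are made up
# for the example:

```python
def toy_mutex_watershed(n_nodes, edges):
    """edges: (weight, u, v, is_attractive) tuples; returns node -> cluster root."""
    parent = list(range(n_nodes))
    mutex = {i: set() for i in range(n_nodes)}  # roots this root must stay separated from

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    # process edges greedily by decreasing weight
    for w, u, v, attractive in sorted(edges, reverse=True):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if attractive:
            if rv in mutex[ru]:
                continue  # a stronger repulsive edge already separates these clusters
            parent[rv] = ru
            mutex[ru] |= mutex.pop(rv)
            for r in mutex[ru]:  # re-point constraints of the absorbed cluster
                mutex[r].discard(rv)
                mutex[r].add(ru)
        else:
            mutex[ru].add(rv)
            mutex[rv].add(ru)
    return [find(i) for i in range(n_nodes)]
```

# On a 3-pixel chain, a strong repulsive long range edge between pixels 0 and 2
# blocks the weaker attractive edge that would otherwise merge them.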
#
# In addition to the default elf dependencies, you will need to install [affogato](https://github.com/constantinpape/affogato) to run this example.
# ## Preparation
# +
# %gui qt5
import numpy as np
# import napari for data visualisation
import napari
# import the segmentation functionality from elf
import elf.segmentation.mutex_watershed as mws
from elf.segmentation.utils import load_mutex_watershed_problem
# import the open_file function from elf, which supports opening files
# in hdf5, zarr, n5 or knossos file format
from elf.io import open_file
# -
# download the example data
prefix = "isbi-data-"
data_path = f"{prefix}test.h5"
affs, offsets = load_mutex_watershed_problem(prefix=prefix)
with open_file(data_path, 'r') as f:
# load the raw data in addition
raw = f['raw'][:]
# ## Segment via mutex watershed
# +
# set additional parameters for the mutex watershed
# The strides are used to sub-sample the long range edges, which are used for repulsive
# connections in the mutex watershed.
# This reduces the runtime and is ok, because we have more long-range than local affinity channels.
strides = [1, 10, 10]
# if randomize_strides is True, the sub-sampling of long-range edges is done at random.
# this usually improves results by avoiding sampling artefacts, but it makes the result
# not fully reproducible
randomize_strides = True
# -
# run the algorithm
segmentation = mws.mutex_watershed(affs, offsets, strides,
                                   randomize_strides=randomize_strides)
viewer = napari.Viewer()
viewer.add_image(raw, name='raw')
viewer.add_image(affs, name='affinities')
viewer.add_labels(segmentation, name='mws-segmentation')
# ## Block-wise MWS
#
# There's also a block-wise implementation of the mutex watershed (that uses Multicut to stitch block results).
# You can use it to segment larger volumes, where normal mutex watershed takes too long to run.
# NOTE due to an issue with the current mws implementation, please
# reload the affinities before running the blockwise segmentation
block_shape = [10, 256, 256]
blockwise_seg = mws.blockwise_mutex_watershed(affs, offsets, strides,
                                              block_shape, randomize_strides=randomize_strides)
print(blockwise_seg.shape)
# visualize the results
viewer = napari.Viewer()
viewer.add_image(raw, name='raw')
viewer.add_image(affs, name='affinities')
viewer.add_labels(segmentation, name='mws-segmentation')
viewer.add_labels(blockwise_seg, name='blockwise-segmentation')
|
example/segmentation/mutex_watershed.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="Gl6lU6VqsgtU" colab_type="text"
# # Getting Started with TensorFlow 2.0 in 7 Days
# ## 2.1 Understanding the Limits of Linear Regression
# + id="GApffbBxsfuI" colab_type="code" outputId="e680457b-551d-4fa0-8ec9-5690e50aa3ae" colab={"base_uri": "https://localhost:8080/", "height": 373}
# !pip install tensorflow==2.0.0-beta0
# + id="xm0K9ZWrswf5" colab_type="code" colab={}
import tensorflow as tf
from tensorflow import keras
# + [markdown] id="8L6EkAadtaku" colab_type="text"
# ## Keras Datasets
#
# These are provided for educational purposes, and most come pre-split into training and test sets
# + id="U1YEKpLntMMg" colab_type="code" colab={}
fashion_mnist = keras.datasets.fashion_mnist
# + [markdown] id="NjyowEWRtnK0" colab_type="text"
# Fashion MNIST is a dataset of 70,000 grayscale images. These images come in 10 categories and have a size of 28 pixels by 28 pixels. We will make use of 60,000 images for training a model, and 10,000 images for evaluation.
# + id="nUiVAOETtmIC" colab_type="code" outputId="7887a673-12a5-488a-8fc5-6518d38550f1" colab={"base_uri": "https://localhost:8080/", "height": 151}
(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()
# + id="59xBBEJsv4eE" colab_type="code" outputId="9a667e5e-95a9-4cd3-fdea-b9491c31399f" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(train_images.shape)
# + id="NalRuroZv9L9" colab_type="code" outputId="7470a32c-7042-4e2c-98b9-7f22909e88e5" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(train_labels.shape)
# + id="ECBOWfgJwAwJ" colab_type="code" outputId="a2d8d16d-9a9e-4e03-8275-1d33408314d0" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(test_images.shape)
# + id="GSH3Wd8swFx2" colab_type="code" colab={}
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
# + [markdown] id="tuJ7NCPIwflN" colab_type="text"
# ## Look at one image
# + id="0P4505AWwkK6" colab_type="code" colab={}
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="KWcmPK3hwVXD" colab_type="code" outputId="7a7b68d4-0dea-41ec-b478-2da682629e44" colab={"base_uri": "https://localhost:8080/", "height": 269}
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
# + id="MORwSQ4Fwz-Z" colab_type="code" outputId="a0462e4d-2383-4f7e-9c7f-e87487a92f61" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(train_labels[0])
# + id="okUx72pyw-Yf" colab_type="code" outputId="2e141f12-4c4c-4298-cced-78f6a086e276" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(class_names[train_labels[0]])
# + [markdown] id="em6yddDZxcm-" colab_type="text"
# __Scale Images to a range between 0 and 1__
# + id="8KoQjilcxEr-" colab_type="code" colab={}
train_images = train_images / 255.0
test_images = test_images / 255.0
# + [markdown] id="Sql6QVYjySrn" colab_type="text"
# ## Linear Regression
# + id="tzfeYeJwxpsq" colab_type="code" colab={}
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(10, activation='softmax')
])
# + id="-gsudw_uzI0e" colab_type="code" colab={}
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# + [markdown] id="ihYgEecC0f7l" colab_type="text"
# ### Train the model
# + id="dcoK5d77zZTE" colab_type="code" outputId="12627cd7-b47a-46c9-98d5-be24ef432b03" colab={"base_uri": "https://localhost:8080/", "height": 474}
model.fit(train_images, train_labels, epochs=10)
# + [markdown] id="pCC-4m0J0kC4" colab_type="text"
# ### Evaluate the model
# + id="fBwjE4xnzh-M" colab_type="code" outputId="97ab3a46-5456-4214-c014-14ae5a292ccd" colab={"base_uri": "https://localhost:8080/", "height": 67}
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('\nTest accuracy:', test_acc)
# + [markdown] id="TYcj49m50nOr" colab_type="text"
# ### Make predictions
# + id="kqwr_KPJ0Uwx" colab_type="code" outputId="7339f0fe-65e3-4a8e-e560-65b0aeead87e" colab={"base_uri": "https://localhost:8080/", "height": 50}
predictions = model.predict(test_images)
print(predictions[0])
# + id="mSPpuLDa0x1-" colab_type="code" outputId="62c69fbf-d520-4029-975e-7a46a140fa22" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(np.argmax(predictions[0]))
# + id="SadifsmG09uf" colab_type="code" outputId="babc866b-a8f5-4808-9d58-32245f6ba7a7" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(class_names[np.argmax(predictions[0])])
# + id="1Cg48qFT1Ey_" colab_type="code" outputId="a1824dfd-dca0-4e32-d29e-997478430c68" colab={"base_uri": "https://localhost:8080/", "height": 269}
plt.figure()
plt.imshow(test_images[0])
plt.colorbar()
plt.grid(False)
plt.show()
# + id="QuhMPSE-1LWf" colab_type="code" outputId="43d15204-7df2-4134-db43-ad81526342a1" colab={"base_uri": "https://localhost:8080/", "height": 34}
print(class_names[test_labels[0]])
# + id="74zkf5PM1Rws" colab_type="code" colab={}
|
Section 2/Packt_2_1_1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# ### media data ###
data_media = pd.read_csv("data/Media.csv", delimiter=';')
# # Normalize date
dict_monat_to_number={
'Januar' : '01',
'Februar' : '02',
'März' : '03',
'April': '04',
'Mai': '05',
'Juni': '06',
'Juli': '07',
'August': '08',
'September': '09',
'Oktober': '10',
'November': '11',
'Dezember': '12',
}
data_media['_Month'] = data_media.Month.map(dict_monat_to_number)
data_media['_Day'] = data_media.Day.replace({0:1})
data_media['date'] = pd.to_datetime(data_media.Year.astype(str) \
+'/'+ data_media._Month \
+'/'+ data_media._Day.astype(str))
data_media = data_media.set_index('date')
# # create date index for each day between 2015 and 2017
start_date = '01/01/2015'
end_date = '31/12/2017'
dateTimeId = pd.date_range(start=start_date,
end=end_date)
data_output_just_index = pd.DataFrame(index=pd.to_datetime(dateTimeId))
# # group by media type
temp_data_media = data_media.groupby(['date','MediaType']).size().unstack(fill_value=0)
# # final output
data_output = pd.concat([data_output_just_index, temp_data_media]).replace({np.nan:0})
data_output
|
02-tvads.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="z2d830efzJl0" outputId="1a65b329-f566-428f-995c-2eb230e54381"
# !python -m pip install konlpy
# + colab={"base_uri": "https://localhost:8080/"} id="ZNEI7bL6zbZy" outputId="81a2543f-4910-4c7c-ad4a-667ce4a8201d"
# !curl -O https://raw.githubusercontent.com/konlpy/konlpy/master/scripts/mecab.sh
# + colab={"base_uri": "https://localhost:8080/"} id="1N7o2hHIz3Qd" outputId="a3bc23af-6508-41c1-e052-a4f3c85460f7"
# !bash ./mecab.sh
# + colab={"base_uri": "https://localhost:8080/"} id="4fbYmy4wz6SW" outputId="cf14e1e7-ee02-428d-f263-15a0e715a3ff"
# !curl -O https://raw.githubusercontent.com/bab2min/corpus/master/sentiment/naver_shopping.txt
# + colab={"base_uri": "https://localhost:8080/", "height": 359} id="uO5H34dK0p7G" outputId="6899abe6-cdce-4a1f-faf9-a6f826579e9a"
import pandas as pd
total_data = pd.read_table('./naver_shopping.txt', names=['ratings', 'reviews'])
total_data.head(10)
# + [markdown] id="8Whs0WMl7iJz"
# Regular expressions
#
# - https://regexr.com/
# + colab={"base_uri": "https://localhost:8080/"} id="pAL4DJXz1IQk" outputId="4986ed9d-a27d-4ae4-f7e8-a4a21b967dbd"
total_data.info()
# + colab={"base_uri": "https://localhost:8080/"} id="2RtlMpKs1myx" outputId="e6c7b8c4-1c80-476a-82df-074c9bd0cbbc"
total_data.drop_duplicates(subset=['reviews'], inplace=True)
len(total_data)
# + id="xvx-5i4r2ySl"
from sklearn.model_selection import train_test_split
# + id="eGV8Mxsn32JC"
x_data = total_data['reviews']
y_data = total_data['ratings']
# + colab={"base_uri": "https://localhost:8080/"} id="rPpDYEpF3csG" outputId="a59b2635-96f6-4186-9496-71ccbd9ebc69"
x_train, x_test, y_train, y_test = train_test_split(x_data, y_data)
x_train.shape, x_test.shape, y_train.shape, y_test.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 280} id="XlcanN_t32GC" outputId="56139ea9-8609-43c1-c4d6-0e2466180ffc"
y_data.value_counts().plot(kind='bar')
# rating 3 is missing and the data is heavily skewed toward one class (5)
# the missing class must be filled in and the imbalance corrected --> use SMOTE
# + [markdown] id="ddFt6x4W6zXS"
# Handling data imbalance with SMOTE
#
# - https://imbalanced-learn.org/stable/references/generated/imblearn.over_sampling.SMOTE.html
#
# - https://mkjjo.github.io/python/2019/01/04/smote_duplicate.html
#
# - https://john-analyst.medium.com/smote%EB%A1%9C-%EB%8D%B0%EC%9D%B4%ED%84%B0-%EB%B6%88%EA%B7%A0%ED%98%95-%ED%95%B4%EA%B2%B0%ED%95%98%EA%B8%B0-5ab674ef0b32
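# imblearn's SMOTE (linked above) is the standard tool; the core idea can be sketched in
# plain numpy: synthesize minority samples by interpolating between a point and one of its
# k nearest neighbours. The function name and parameters here are illustrative, and note
# that oversampling can only rebalance classes that exist — it cannot invent the missing
# rating 3.

```python
import numpy as np

def smote_oversample(X, n_new, k=3, rng=None):
    """Create n_new synthetic samples by interpolating towards nearest neighbours."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X[i] + lam * (X[j] - X[i]))
    return np.vstack(synthetic)
```

# Synthetic points are convex combinations of real ones, so they stay inside the
# minority class's region of feature space.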
# + [markdown] id="E5E5LrTHPrM4"
# # NLP
# + id="aw7iiUbV43fT"
# + colab={"base_uri": "https://localhost:8080/"} id="WhsW47G4j4IO" outputId="1265ba02-11b3-435e-bb9c-e628f7877946"
x_temp01 = x_train
x_temp01.str.replace('[^가-힣ㄱ-ㅎㅠ]','')
# + colab={"base_uri": "https://localhost:8080/"} id="Ov_F6_ZcjrSW" outputId="1443f74b-e90d-47b4-bde9-240e374e44c3"
x_train.str.replace('[^가-힣ㄱ-ㅎㅠ ]','') # [a-zA-Z ]
# + id="CWoh3JSNP2XO"
from konlpy.tag import Mecab
# + colab={"base_uri": "https://localhost:8080/"} id="4ONEkoRpP9Ry" outputId="c5dc164c-0458-4169-f19e-f67a7b35ee9c"
mecab = Mecab()
print(mecab.morphs('와 이런 것도 상품이라고 차라리 내가 만드는게 나을 것 같다.'))
# + id="83waN4ZyRO7t"
x_train_small = x_train[0:5000]
# + id="2QpZkJm9QDIM"
sentence = list()
stopwords = ['도', '는', '다', '의', '가', '이', '은', '한', '에', '하', '고', '을', '를', '인', '듯', '과', '와', '네', '들', '듯', '지', '임', '게']
for tok in x_train_small:
encoded = mecab.morphs(tok)
sentence.append([item for item in encoded if item not in stopwords])
sentence
# + [markdown] id="6GpYgelqTlHk"
# # Tokenizer
# + colab={"base_uri": "https://localhost:8080/"} id="18PT24GUW-ix" outputId="d6cd3d5e-9a1c-4a7c-dc00-c9453008f843"
print(sentence)
# + id="UzxI2_BpRg9l"
import tensorflow as tf
# + id="VmuQH8I6ToZr"
tokenizer = tf.keras.preprocessing.text.Tokenizer()
tokenizer.fit_on_texts(sentence)
# + colab={"base_uri": "https://localhost:8080/"} id="XWP45wUcUJHZ" outputId="45712bdb-02f4-4e66-bc19-7c2a88bf8a99"
tokenizer.word_index
# + id="JCRdM3RYUMTt" colab={"base_uri": "https://localhost:8080/"} outputId="f446d6ee-fca8-4279-f93f-87b56a3e06e4"
tokenizer.word_counts
# + id="kpuCJMwpU-zw"
total_cnt = len(tokenizer.word_index)
rare_cnt = 0
total_freq, rare_freq = 0, 0
for key, value in tokenizer.word_counts.items():
    total_freq = total_freq + value # total word occurrences
    if value <= 2:
        rare_cnt = rare_cnt + 1
        rare_freq = rare_freq + value # occurrences of rare words (frequency <= 2)
# + colab={"base_uri": "https://localhost:8080/"} id="RvGZ68YmX7WS" outputId="6e922fe9-24e9-4b46-8789-7ac02892897f"
total_cnt, rare_cnt, (rare_cnt / total_cnt)*100, (rare_freq/total_freq)*100
# + id="82k5j2eWYKsP"
vocab_size = total_cnt - rare_cnt
# + [markdown] id="QOwm30EXZkcx"
# OOV(Out-Of-Vocabulary)
#
# - https://stackoverflow.com/questions/45495190/initializing-out-of-vocabulary-oov-tokens
#
# - https://codetorial.net/tensorflow/natural_language_processing_in_tensorflow_01.html
# + id="RCc8BeYdZe4u"
tokenizer = tf.keras.preprocessing.text.Tokenizer(vocab_size, oov_token='OOV')
tokenizer.fit_on_texts(sentence)
# + [markdown] id="Wyp3DGoCvILy"
# ### ['후기', '엄청', '맛있', '다고', [], '구입', '했', [], '기대', '커서', '그런가', [], [], [], '번', '안', '먹', '것', '같', '아요']
# + colab={"base_uri": "https://localhost:8080/"} id="IGcf43flZuqN" outputId="97f5cf2b-510c-4557-f90d-a7cb0ba734cb"
tokenizer.index_word
# + id="iRmROMzMdAG2"
x_train_small = tokenizer.texts_to_sequences(sentence)
# + colab={"base_uri": "https://localhost:8080/"} id="8F2NaznSeNED" outputId="6b7e90c6-e9b3-4742-c994-5bc9ba3a75e8"
print(x_train_small[0:3])
# + colab={"base_uri": "https://localhost:8080/"} id="J7j79eXReWJy" outputId="4b5e293b-2229-41bc-ccd4-6133e5f77577"
len(x_train_small[0]), len(x_train_small[40]), len(x_train_small[50])
# + id="XISoBrmnewfz"
hist_len = [len(words) for words in x_train_small]
# + colab={"base_uri": "https://localhost:8080/", "height": 481} id="8WFFJuAuf25V" outputId="3ea72efd-d29c-4dd0-909b-20fa514bbf83"
import matplotlib.pyplot as plt
plt.hist(hist_len, bins=50)
# + colab={"base_uri": "https://localhost:8080/"} id="ftEcChDAgEUs" outputId="20afce90-4c19-4954-edb4-00da656f5ccc"
sum(hist_len) / len(x_train_small)
# too many values would be truncated at the average length, so pad instead
# + id="0PuwJU-QgNPG"
x_train_small = tf.keras.preprocessing.sequence.pad_sequences(x_train_small, maxlen=50)
# + id="CPdMeg3fiz3n"
y_train_small = y_train[0:5000]
# + colab={"base_uri": "https://localhost:8080/"} id="NCgpAj-sjG2g" outputId="97596445-cd4f-40fe-a346-efbc46b35235"
import numpy as np
y_train_small = np.array(y_train_small)-1
np.unique(y_train_small)
# + id="Xfg5TuVnv10O"
# y_train_small[6] = 3
# + id="ml7wZRpdv1i6"
# y_train_small[6]
# + id="mP3xgzFAwjrN"
# y_train_small[0:6]
# + colab={"base_uri": "https://localhost:8080/"} id="GIkJClxVwmEp" outputId="3452d54a-dea8-41c6-c62f-cae26b3a9161"
len(x_train_small), len(y_train_small)
# + id="XMUfjqhnwk4W"
# y_train_onehot = tf.keras.utils.to_categorical(y_train_small)
# len(y_train_onehot[5])
# + id="F95s5QtBwqI_"
# len(y_train_onehot[5]), y_train_onehot[5]
# + [markdown] id="BewFAfBdhGSp"
# # make model
# + id="WpZSpJNrg4l6"
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=30, input_length=50)) # input layer
# model.add(tf.keras.layers.LSTM(128)) # hidden layer
model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128))) # hidden layer
# model.add(tf.keras.layers.GRU(128)) # hidden layer
model.add(tf.keras.layers.Dense(5, activation='softmax')) # output layer
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc']) # gadget
# + id="RxSDc6XLkRsi" colab={"base_uri": "https://localhost:8080/"} outputId="16b553ff-8a5a-4da7-c811-5553ba768db2"
hist = model.fit(x_train_small, y_train_small, epochs=2, batch_size=256, validation_split=0.3, shuffle=True)
# + id="yCkteuB4w1ZJ"
# model.evaluate(x_train_small, y_train_small) # LSTM(128) - loss: 0.9170 - acc: 0.8500
# + id="2V8zigTMw4Ha"
# model.evaluate(x_train_small, y_train_small) # GRU - loss: 0.9265 - acc: 0.8436
# + id="EF9pdFSMw6la"
# model.evaluate(x_train_small, y_train_small) # Bidirectional(LSTM(128)) - loss: 0.8787 - acc: 0.8382
|
Naver_shopping_review.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example usage with Python 3
# This notebook demonstrates usage of `petab_select` to perform forward selection in a Python 3 script.
# ## Problem setup with initial model
# Dependencies are imported. A model selection problem is loaded from the specification files. Some helper methods are defined.
# +
import petab_select
from petab_select import (
Model,
ForwardCandidateSpace,
)
# Load the PEtab Select problem.
select_problem = petab_select.Problem.from_yaml('model_selection/petab_select_problem.yaml')
# Fake criterion values as a surrogate for a model calibration tool.
fake_criterion = {
'M1_0': 200,
'M1_1': 150,
'M1_2': 140,
'M1_3': 130,
'M1_4': -40,
'M1_5': -70,
'M1_6': -110,
'M1_7': 50,
}
def print_model(model: Model) -> None:
"""Helper method to view model attributes."""
print(
f"""\
Model subspace ID: {model.model_subspace_id}
PEtab YAML location: {model.petab_yaml}
Custom model parameters: {model.parameters}
Model hash: {model.get_hash()}
Model ID: {model.model_id}
{select_problem.criterion}: {model.get_criterion(select_problem.criterion, compute=False)}
"""
)
def calibrate(model: Model, fake_criterion=fake_criterion) -> None:
"""Set model criterion values to fake values that could be the output of a calibration tool.
Each model subspace in this problem contains only one model, so a model-specific criterion can
be indexed by the model subspace ID.
"""
model.set_criterion(select_problem.criterion, fake_criterion[model.model_subspace_id])
print(
f"""Information about the model selection problem.
YAML path: {select_problem.yaml_path}
Method: {select_problem.method}
Criterion: {select_problem.criterion}
"""
)
# -
# ## First iteration
#
# Neighbors of the initial model in the model space are identified for testing. Here, no initial model is specified. If an initial model is required for the algorithm, PEtab Select can automatically use a virtual initial model, if such a model is defined. For example, for the forward and backward methods, the virtual initial model defaults to a model with no parameters estimated, and all parameters estimated, respectively.
# The model candidate space is set up with the initial model. The model space is then used to find neighbors to the initial model. The candidate space is used to calculate distances between models and to decide whether a candidate model represents a valid move in model space.
#
# The in-built `ForwardCandidateSpace` uses the following properties to identify candidate models:
# - previously estimated parameters must not be fixed;
# - the number of estimated parameters must increase; and
# - the increase in the number of estimated parameters must be minimal.
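# A schematic reduction of those three rules (not petab_select's API — models are
# represented here simply as sets of estimated-parameter names):

```python
def forward_candidates(predecessor, models):
    """Keep the models that strictly extend the predecessor's set of estimated
    parameters with the minimal number of newly estimated parameters."""
    # previously estimated parameters stay estimated, and the count must grow
    supersets = [m for m in models if set(m) > set(predecessor)]
    if not supersets:
        return []
    # the increase in estimated parameters must be minimal
    smallest = min(len(m) for m in supersets)
    return [m for m in supersets if len(m) == smallest]
```

# From an empty predecessor, both single-parameter models qualify, while the
# two-parameter model is a larger-than-minimal step.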
#
# The model space keeps a history of identified neighbors, such that subsequent calls ignore previously identified neighbors. This can be disabled by changing usage to `petab_select.ModelSpace.search(..., exclude=False)`, or reset to forget all history with `petab_select.ModelSpace.reset()`.
candidate_space = petab_select.ui.candidates(problem=select_problem)
# Model IDs default to the model hash, which is generated from hashing the model subspace ID and model parameterization.
#
# Here, the model identified is a model with all possible parameters fixed. This is because the default virtual initial model is the same parameterization, and the closest model in the "real" model subspace is the same parameterization. If the initial model was from the "real" model subspace, then candidate models would be true forward steps in the subspace (e.g. an increase in the number of estimated parameters).
# Each of the candidate models includes information that should be sufficient for model calibration with any suitable tool that supports PEtab.
#
# NB: the `petab_yaml` is for the original PEtab problem, and would need to be customized by `parameters` to be the actual candidate model.
for candidate_model in candidate_space.models:
print_model(candidate_model)
# At this point, a model calibration tool is used to find the best of the test models, according to some criterion. PEtab select can select the best model from a collection of models that provide a value for this criterion, or a specific model can be supplied. Here, PEtab Select will be used to select the best model from multiple models. At the end of the following iterations, a specific model will be provided.
# Set fake criterion values that might be the output of a model calibration tool.
for candidate_model in candidate_space.models:
calibrate(candidate_model)
select_problem.add_calibrated_models(candidate_space.models)
local_best_model = select_problem.get_best(candidate_space.models)
print_model(local_best_model)
# ## Second iteration
# The process then repeats.
# The chosen model is used as the predecessor model, such that neighboring models are identified with respect to the chosen model.
petab_select.ui.candidates(
problem=select_problem,
candidate_space=candidate_space,
predecessor_model=select_problem.get_best(candidate_space.models),
);
for candidate_model in candidate_space.models:
print_model(candidate_model)
# Set fake criterion values that might be the output of a model calibration tool.
for candidate_model in candidate_space.models:
calibrate(candidate_model)
select_problem.add_calibrated_models(candidate_space.models)
local_best_model = select_problem.get_best(candidate_space.models)
print_model(local_best_model)
# ## Third iteration
petab_select.ui.candidates(
problem=select_problem,
candidate_space=candidate_space,
predecessor_model=select_problem.get_best(candidate_space.models),
);
for candidate_model in candidate_space.models:
print_model(candidate_model)
# Set fake criterion values that might be the output of a model calibration tool.
for candidate_model in candidate_space.models:
calibrate(candidate_model)
select_problem.add_calibrated_models(candidate_space.models)
local_best_model = select_problem.get_best(candidate_space.models)
print_model(local_best_model)
# ## Fourth iteration
petab_select.ui.candidates(
problem=select_problem,
candidate_space=candidate_space,
predecessor_model=select_problem.get_best(candidate_space.models),
);
for candidate_model in candidate_space.models:
print_model(candidate_model)
# Set fake criterion values that might be the output of a model calibration tool.
for candidate_model in candidate_space.models:
calibrate(candidate_model)
select_problem.add_calibrated_models(candidate_space.models)
local_best_model = select_problem.get_best(candidate_space.models)
print_model(local_best_model)
# ## Fifth iteration
petab_select.ui.candidates(
problem=select_problem,
candidate_space=candidate_space,
predecessor_model=select_problem.get_best(candidate_space.models),
);
# The `M1_7` model is the most complex model in the model space (all parameters in the space are estimated), so no valid neighbors are identified for the forward selection method.
print(f'Number of candidate models: {len(candidate_space.models)}.')
# At this point, the results of the model calibration tool for the different models can be used to select the best model.
best_model = select_problem.get_best(select_problem.calibrated_models)
print_model(best_model)
# ## Final iteration: brute force
# Note that additional, uncalibrated models can remain in the model space after a single forward search terminates. These additional models can be identified with the brute-force method.
candidate_space = petab_select.BruteForceCandidateSpace()
petab_select.ui.candidates(
problem=select_problem,
candidate_space=candidate_space,
excluded_models=select_problem.calibrated_models,
);
for candidate_model in candidate_space.models:
print_model(candidate_model)
doc/examples/workflow_python.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + jupyter={"outputs_hidden": false}
# %matplotlib inline
# -
#
# # Compare the effect of different scalers on data with outliers
#
#
# Feature 0 (median income in a block) and feature 5 (number of households) of
# the `California housing dataset
# <https://www.dcc.fc.up.pt/~ltorgo/Regression/cal_housing.html>`_ have very
# different scales and contain some very large outliers. These two
# characteristics lead to difficulties to visualize the data and, more
# importantly, they can degrade the predictive performance of many machine
# learning algorithms. Unscaled data can also slow down or even prevent the
# convergence of many gradient-based estimators.
#
# Indeed, many estimators are designed with the assumption that each feature takes
# values close to zero or, more importantly, that all features vary on comparable
# scales. In particular, metric-based and gradient-based estimators often assume
# approximately standardized data (centered features with unit variances). A
# notable exception is decision-tree-based estimators, which are robust to
# arbitrary scaling of the data.
#
# This example uses different scalers, transformers, and normalizers to bring the
# data within a pre-defined range.
#
# Scalers are linear (or more precisely affine) transformers and differ from each
# other in how they estimate the parameters used to shift and scale each
# feature.
#
# ``QuantileTransformer`` provides non-linear transformations in which distances
# between marginal outliers and inliers are shrunk. ``PowerTransformer`` provides
# non-linear transformations in which data is mapped to a normal distribution to
# stabilize variance and minimize skewness.
#
# Unlike the previous transformations, normalization refers to a per-sample
# transformation instead of a per-feature transformation.
#
# The following code is a bit verbose; feel free to jump directly to the analysis
# of the results.
#
# + jupyter={"outputs_hidden": false}
# Author: <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
# <NAME>
# License: BSD 3 clause
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as plt
from matplotlib import cm
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import minmax_scale
from sklearn.preprocessing import MaxAbsScaler
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import RobustScaler
from sklearn.preprocessing import Normalizer
from sklearn.preprocessing import QuantileTransformer
from sklearn.preprocessing import PowerTransformer
from sklearn.datasets import fetch_california_housing
print(__doc__)
dataset = fetch_california_housing()
X_full, y_full = dataset.data, dataset.target
# Take only 2 features to make visualization easier
# Feature 0 has a long tail distribution.
# Feature 5 has a few but very large outliers.
X = X_full[:, [0, 5]]
distributions = [
('Unscaled data', X),
('Data after standard scaling',
StandardScaler().fit_transform(X)),
('Data after min-max scaling',
MinMaxScaler().fit_transform(X)),
('Data after max-abs scaling',
MaxAbsScaler().fit_transform(X)),
('Data after robust scaling',
RobustScaler(quantile_range=(25, 75)).fit_transform(X)),
('Data after power transformation (Yeo-Johnson)',
PowerTransformer(method='yeo-johnson').fit_transform(X)),
('Data after power transformation (Box-Cox)',
PowerTransformer(method='box-cox').fit_transform(X)),
('Data after quantile transformation (gaussian pdf)',
QuantileTransformer(output_distribution='normal')
.fit_transform(X)),
('Data after quantile transformation (uniform pdf)',
QuantileTransformer(output_distribution='uniform')
.fit_transform(X)),
('Data after sample-wise L2 normalizing',
Normalizer().fit_transform(X)),
]
# scale the output between 0 and 1 for the colorbar
y = minmax_scale(y_full)
# plasma does not exist in matplotlib < 1.5
cmap = getattr(cm, 'plasma_r', cm.hot_r)
def create_axes(title, figsize=(16, 6)):
fig = plt.figure(figsize=figsize)
fig.suptitle(title)
# define the axis for the first plot
left, width = 0.1, 0.22
bottom, height = 0.1, 0.7
bottom_h = height + 0.15
left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.1]
rect_histy = [left_h, bottom, 0.05, height]
ax_scatter = plt.axes(rect_scatter)
ax_histx = plt.axes(rect_histx)
ax_histy = plt.axes(rect_histy)
# define the axis for the zoomed-in plot
left = width + left + 0.2
left_h = left + width + 0.02
rect_scatter = [left, bottom, width, height]
rect_histx = [left, bottom_h, width, 0.1]
rect_histy = [left_h, bottom, 0.05, height]
ax_scatter_zoom = plt.axes(rect_scatter)
ax_histx_zoom = plt.axes(rect_histx)
ax_histy_zoom = plt.axes(rect_histy)
# define the axis for the colorbar
left, width = width + left + 0.13, 0.01
rect_colorbar = [left, bottom, width, height]
ax_colorbar = plt.axes(rect_colorbar)
return ((ax_scatter, ax_histy, ax_histx),
(ax_scatter_zoom, ax_histy_zoom, ax_histx_zoom),
ax_colorbar)
def plot_distribution(axes, X, y, hist_nbins=50, title="",
x0_label="", x1_label=""):
ax, hist_X1, hist_X0 = axes
ax.set_title(title)
ax.set_xlabel(x0_label)
ax.set_ylabel(x1_label)
# The scatter plot
colors = cmap(y)
ax.scatter(X[:, 0], X[:, 1], alpha=0.5, marker='o', s=5, lw=0, c=colors)
# Removing the top and the right spine for aesthetics
# make nice axis layout
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.get_xaxis().tick_bottom()
ax.get_yaxis().tick_left()
ax.spines['left'].set_position(('outward', 10))
ax.spines['bottom'].set_position(('outward', 10))
# Histogram for axis X1 (feature 5)
hist_X1.set_ylim(ax.get_ylim())
hist_X1.hist(X[:, 1], bins=hist_nbins, orientation='horizontal',
color='grey', ec='grey')
hist_X1.axis('off')
# Histogram for axis X0 (feature 0)
hist_X0.set_xlim(ax.get_xlim())
hist_X0.hist(X[:, 0], bins=hist_nbins, orientation='vertical',
color='grey', ec='grey')
hist_X0.axis('off')
# -
# Two plots will be shown for each scaler/normalizer/transformer. The left
# figure will show a scatter plot of the full data set, while the right figure
# will consider only 99 % of the data set,
# excluding the marginal outliers. In addition, the marginal distributions for each
# feature will be shown on the side of the scatter plot.
#
#
# + jupyter={"outputs_hidden": false}
def make_plot(item_idx):
title, X = distributions[item_idx]
ax_zoom_out, ax_zoom_in, ax_colorbar = create_axes(title)
axarr = (ax_zoom_out, ax_zoom_in)
plot_distribution(axarr[0], X, y, hist_nbins=200,
x0_label="Median Income",
x1_label="Number of households",
title="Full data")
# zoom-in
zoom_in_percentile_range = (0, 99)
cutoffs_X0 = np.percentile(X[:, 0], zoom_in_percentile_range)
cutoffs_X1 = np.percentile(X[:, 1], zoom_in_percentile_range)
non_outliers_mask = (
np.all(X > [cutoffs_X0[0], cutoffs_X1[0]], axis=1) &
np.all(X < [cutoffs_X0[1], cutoffs_X1[1]], axis=1))
plot_distribution(axarr[1], X[non_outliers_mask], y[non_outliers_mask],
hist_nbins=50,
x0_label="Median Income",
x1_label="Number of households",
title="Zoom-in")
norm = mpl.colors.Normalize(y_full.min(), y_full.max())
mpl.colorbar.ColorbarBase(ax_colorbar, cmap=cmap,
norm=norm, orientation='vertical',
label='Color mapping for values of y')
# -
#
# Original data
# -------------
#
# Each transformation is plotted showing two transformed features, with the
# left plot showing the entire dataset, and the right zoomed-in to show the
# dataset without the marginal outliers. A large majority of the samples are
# compacted to a specific range, [0, 10] for the median income and [0, 6] for
# the number of households. Note that there are some marginal outliers (some
# blocks have more than 1200 households). Therefore, a specific pre-processing
# can be very beneficial depending on the application. In the following, we
# present some insights and behaviors of those pre-processing methods in the
# presence of marginal outliers.
#
#
# + jupyter={"outputs_hidden": false}
make_plot(0)
# -
# StandardScaler
# --------------
#
# ``StandardScaler`` removes the mean and scales the data to unit variance.
# However, the outliers have an influence when computing the empirical mean and
# standard deviation, which shrinks the range of the feature values, as shown in
# the left figure below. Note in particular that because the outliers on each
# feature have different magnitudes, the spread of the transformed data on
# each feature is very different: most of the data lie in the [-2, 4] range for
# the transformed median income feature while the same data is squeezed in the
# smaller [-0.2, 0.2] range for the transformed number of households.
#
# ``StandardScaler`` therefore cannot guarantee balanced feature scales in the
# presence of outliers.
#
#
# + jupyter={"outputs_hidden": false}
make_plot(1)
# -
# MinMaxScaler
# ------------
#
# ``MinMaxScaler`` rescales the data set such that all feature values are in
# the range [0, 1], as shown in the right panel below. However, this scaling
# compresses all inliers into the narrow range [0, 0.005] for the transformed
# number of households.
#
# Like ``StandardScaler``, ``MinMaxScaler`` is very sensitive to the presence of
# outliers.
#
#
# + jupyter={"outputs_hidden": false}
make_plot(2)
# -
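# The compression described above can be reproduced on a toy feature (a minimal sketch, independent of the housing data):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# MinMaxScaler applies (x - min) / (max - min) per feature, so a single
# large outlier compresses all the inliers toward 0.
x = np.array([[0.0], [1.0], [2.0], [1000.0]])
x_mm = MinMaxScaler().fit_transform(x).ravel()
print(x_mm)  # inliers end up near 0, the outlier at exactly 1
```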
# MaxAbsScaler
# ------------
#
# ``MaxAbsScaler`` differs from the previous scalers in that the absolute
# values are mapped to the range [0, 1]. On positive-only data, this scaler
# behaves similarly to ``MinMaxScaler`` and therefore also suffers from the
# presence of large outliers.
#
#
# + jupyter={"outputs_hidden": false}
make_plot(3)
# -
# RobustScaler
# ------------
#
# Unlike the previous scalers, the centering and scaling statistics of this
# scaler are based on percentiles and are therefore not influenced by a small
# number of very large marginal outliers. Consequently, the resulting range of
# the transformed feature values is larger than for the previous scalers and,
# more importantly, approximately the same for both features: most of the
# transformed values lie in a [-2, 3] range as seen in the zoomed-in figure.
# Note that the outliers themselves are still present in the transformed data.
# If a separate outlier clipping is desirable, a non-linear transformation is
# required (see below).
#
#
# + jupyter={"outputs_hidden": false}
make_plot(4)
# -
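# The contrast with ``StandardScaler`` is easy to see on a toy feature with one extreme value (a minimal sketch, independent of the housing data):

```python
import numpy as np
from sklearn.preprocessing import RobustScaler, StandardScaler

x = np.array([[1.0], [2.0], [3.0], [4.0], [1000.0]])

# The outlier inflates the mean and standard deviation, so the inliers
# are squeezed into a narrow band and even 1000.0 gets a modest z-score.
z = StandardScaler().fit_transform(x).ravel()

# The median and IQR ignore the outlier, so the inliers keep a usable
# spread while the outlier remains clearly visible after the transform.
r = RobustScaler().fit_transform(x).ravel()
print(z)
print(r)
```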
# PowerTransformer
# ----------------
#
# ``PowerTransformer`` applies a power transformation to each feature to make
# the data more Gaussian-like. Currently, ``PowerTransformer`` implements the
# Yeo-Johnson and Box-Cox transforms. The power transform finds the optimal
# scaling factor to stabilize variance and minimize skewness through maximum
# likelihood estimation. By default, ``PowerTransformer`` also applies
# zero-mean, unit variance normalization to the transformed output. Note that
# Box-Cox can only be applied to strictly positive data. Income and number of
# households happen to be strictly positive, but if negative values are present
# the Yeo-Johnson transform should be preferred.
#
#
# + jupyter={"outputs_hidden": false}
make_plot(5)
make_plot(6)
# -
# QuantileTransformer (Gaussian output)
# -------------------------------------
#
# ``QuantileTransformer`` has an additional ``output_distribution`` parameter
# that allows mapping to a Gaussian distribution instead of a uniform one.
# Note that this non-parametric transformer introduces saturation artifacts
# for extreme values.
#
#
# + jupyter={"outputs_hidden": false}
make_plot(7)
# -
# QuantileTransformer (uniform output)
# ------------------------------------
#
# ``QuantileTransformer`` applies a non-linear transformation such that the
# probability density function of each feature will be mapped to a uniform
# distribution. In this case, all the data will be mapped to the range [0, 1],
# including the outliers, which can then no longer be distinguished from the inliers.
#
# Like ``RobustScaler``, ``QuantileTransformer`` is robust to outliers in the
# sense that adding or removing outliers in the training set will yield
# approximately the same transformation on held out data. But contrary to
# ``RobustScaler``, ``QuantileTransformer`` will also automatically collapse
# any outlier by setting them to the a priori defined range boundaries (0 and
# 1).
#
#
# + jupyter={"outputs_hidden": false}
make_plot(8)
# -
# Normalizer
# ----------
#
# The ``Normalizer`` rescales the vector for each sample to have unit norm,
# independently of the distribution of the samples. This can be seen in both
# figures below, where all samples are mapped onto the unit circle. In our
# example the two selected features have only positive values; therefore the
# transformed data only lie in the positive quadrant. This would not be the
# case if some original features had a mix of positive and negative values.
#
#
# + jupyter={"outputs_hidden": false}
make_plot(9)
plt.show()
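# The per-sample behaviour can be sketched on toy data (independent of the housing example):

```python
import numpy as np
from sklearn.preprocessing import Normalizer

# Each row is rescaled to unit L2 norm independently of the other rows,
# so rows pointing in the same direction become identical after scaling.
X_rows = np.array([[3.0, 4.0], [6.0, 8.0]])
X_unit = Normalizer().fit_transform(X_rows)
print(X_unit)
```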
Datasets transformations/ML0101EN-14-Comparing_ Of_all_Feature_Scaling.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.9.7 64-bit (''azure-firenet'': conda)'
# name: python3
# ---
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from azure.cognitiveservices.vision.customvision.training.models import ImageFileCreateBatch, ImageFileCreateEntry, Region
from msrest.authentication import ApiKeyCredentials
import os, time, uuid
import yaml
with open('../config.yaml') as f:
config = yaml.load(f, Loader=yaml.FullLoader)
# +
ENDPOINT = config["custom_vision"]["endpoint"]
training_key = config["custom_vision"]["training_key"]
prediction_key = config["custom_vision"]["prediction_key"]
prediction_resource_id = config["custom_vision"]["prediction_resource_id"]
project_id = config["project_id"]
if not config["project_id"]:
    raise ValueError("project_id is missing from config.yaml")
# -
# ## Download images and their tags
# +
from pathlib import Path
local_path = Path(f"../data/{project_id}")
Path.mkdir(local_path, exist_ok=True)
images_path = local_path / "Images"
Path.mkdir(images_path, exist_ok=True)
# +
import urllib.error
import urllib.parse
import urllib.request
import http.client
import json
headers = {
'Training-Key': training_key
}
params = urllib.parse.urlencode({
# Format - int32. Maximum number of images to return. Defaults to 50, limited to 256.
'take': 50,
# Format - int32. Number of images to skip before beginning the image batch. Defaults to 0.
'skip': 0,
'orderBy': 'Oldest'
})
conn = http.client.HTTPSConnection(ENDPOINT)
conn.request(
"GET", f"/customvision/v3.0/training/projects/{project_id}/images/tagged?%s" % params, "{body}", headers)
response = conn.getresponse()
data = response.read()
data_json = json.loads(data)
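# The `take` parameter is capped at 256 by the service, so projects with more tagged images require repeated requests with an increasing `skip`. A small helper could sketch the pagination (`paged_params` is a hypothetical name, not part of the Custom Vision SDK):

```python
def paged_params(total, page_size=256):
    """Yield (take, skip) pairs that cover `total` images page by page."""
    skip = 0
    while skip < total:
        yield min(page_size, total - skip), skip
        skip += page_size
```

# For example, 600 tagged images would be fetched in three requests.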
# +
import pandas as pd
df_cols = [
"filename",
"tag",
"left",
"top",
"width",
"height",
"truncated",
"difficult",
]
regions = []
for image in data_json:
s_filename = image["id"] + ".png"
# save image file
    try:
        urllib.request.urlretrieve(image["originalImageUri"], images_path / s_filename)
    except OSError:
        # retry once on a transient network error (URLError subclasses OSError)
        print("Retry")
        urllib.request.urlretrieve(image["originalImageUri"], images_path / s_filename)
# save tags
for tag in image["regions"]:
region = {"filename": s_filename}
region["tag"] = tag["tagName"]
region["truncated"] = 0
region["difficult"] = 0
region["left"] = tag["left"]
region["top"] = tag["top"]
region["width"] = tag["width"]
region["height"] = tag["height"]
regions.append(region)
conn.close()
df = pd.DataFrame(regions, columns=df_cols)
df.to_pickle(local_path / "object_detection.pkl")
df.head()
src/download_images.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ChristianMones/CPEN-21A-CPE-1-2/blob/main/Loop_Statement.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="87J85uVcKduO"
# ## For Loop
# + colab={"base_uri": "https://localhost:8080/"} id="pwgv-_X5KbQ1" outputId="7c74011c-064b-4eb6-d5b5-6fa71c50f212"
week=["Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"]
for x in week:
print(x)
# + [markdown] id="PYs18Ji4LNVs"
# The break statement
# + colab={"base_uri": "https://localhost:8080/"} id="5atuG8haLSnm" outputId="560a6214-0cf3-43a7-fd40-0b0b6dc68d9d"
for x in week:
print(x)
if x=="Thursday":
break
# + colab={"base_uri": "https://localhost:8080/"} id="33spbuDjLoW2" outputId="13251989-eca9-4d43-a886-531998636558"
for x in week: #The break statement
if x=="Thursday":
break
print(x)
# + [markdown] id="YuZAK73VMaJj"
# Looping through String
# + colab={"base_uri": "https://localhost:8080/"} id="d7NvaHwoMSfq" outputId="4894a23e-680a-4465-98e1-f139fa0634c3"
for x in "Programming with Pyhton":
print(x)
# + [markdown] id="WW2OrU0sMuFe"
# The range function
# + colab={"base_uri": "https://localhost:8080/"} id="DFEH_I92MwXB" outputId="d7f29037-ae78-45ed-d38b-bb13c2fe02c0"
for x in range(10):
print(x)
# + [markdown] id="6z8RKuXUNNlZ"
# Nested Loops
# + colab={"base_uri": "https://localhost:8080/"} id="MXXGJxGlN3P-" outputId="1e74c71b-4358-48da-fe43-a693738cad1e"
adjective=["red", "big", "tasty"]
fruits=["apple", "banana", "cherry"]
for x in adjective:
for y in fruits:
print(x,y)
# + [markdown] id="mrbR4bS8OYD7"
# ##While Loop
# + colab={"base_uri": "https://localhost:8080/"} id="YX1QZxq4PKgY" outputId="29bb0d3c-da5d-455c-b57e-652e13b6aa37"
i=10
while i>6:
print(i)
i-=1 # i=i-1
# + [markdown] id="56xOwHzBPh6i"
# The break statement
# + colab={"base_uri": "https://localhost:8080/"} id="bqogffgfPjwC" outputId="2379a9f2-3a22-43af-e29e-331cec380dee"
i=10
while i>6:
print(i)
if i==8:
break
i-=1
# + [markdown] id="852hZzo1QVNY"
# The continue statement
# + colab={"base_uri": "https://localhost:8080/"} id="pvUjn67UQXtZ" outputId="abe19390-f457-4315-b272-00873f9d32c8"
i=10
while i>6:
i-=1
if i==8:
continue
print(i)
# + [markdown] id="TCRZY28ARXpP"
# The else statement
# + colab={"base_uri": "https://localhost:8080/"} id="nRNIyXWyRZ-4" outputId="e06ddadd-149d-45d5-cf8c-1d01456e48df"
i=10
while i>6:
i-=1
print(i)
else:
print("i is no longer greater than 6")
# + [markdown] id="lZEceZ1pSEPV"
# Application 1
# + colab={"base_uri": "https://localhost:8080/"} id="8uNMOfO7SGY9" outputId="780f6f03-aeec-4a0e-d825-d0e78a80236d"
a=0
b=["Value"]
while a<11:
for x in b:
print(x,a)
a+=1
# + [markdown] id="gA4A20cMSGxG"
# Application 2
# + colab={"base_uri": "https://localhost:8080/"} id="SHw5p1BBSIH9" outputId="9c8f2319-01d0-4499-de42-702cd0197ca5"
i=20
while i>3:
i-=1
print(i)
if i==4:
break
Loop_Statement.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import io
import requests
# ## 1) Load the dataset
url = 'https://github.com/1010code/iris-dnn-tensorflow/raw/master/data/Iris.csv'
s=requests.get(url).content
df_train=pd.read_csv(io.StringIO(s.decode('utf-8')))
df_train = df_train.drop(labels=['Id'],axis=1) # drop the Id column
df_train
# ## 2) Manual encoding
# Handling nominal variables - data preprocessing
# Depending on the nature of the feature data, you can choose either manual or automatic encoding.
#
# ### When is encoding needed?
# For deep learning, a neural network can only process numeric data, so every non-numeric feature must be converted first.
#
# ex:
#
# | Iris-setosa | Iris-versicolor | Iris-virginica |
# |:---:|:---:|:---:|
# | 0 | 1 | 2 |
# +
label_map = {'Iris-setosa': 0, 'Iris-versicolor': 1, 'Iris-virginica': 2}
# store the labels encoded via label_map in df_train['Class']
df_train['Class'] = df_train['Species'].map(label_map)
# -
df_train
# ## 3) Check for missing values
# Use the functions provided by numpy to check whether there are any NA (missing) values; if so, remove them with dropna(). This is appropriate when there are only a few missing values. If a large share of the values is missing, or the dataset itself is small, it is better to impute the missing values, e.g. by predicting them with a machine learning method.
#
# ```python
# # drop rows with missing values
# train=train.dropna()
# ```
X = df_train.drop(labels=['Species','Class'],axis=1).values # drop the label columns (Species/Class are not features)
# check for missing data
print("number of missing (NaN) values:",len(np.where(np.isnan(X))[0]))
# ## Data preprocessing
import numpy as np
np.set_printoptions(suppress=True)
# ## Standardization (zero mean and unit variance)
# +
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X)
X_scaled = scaler.transform(X)
# after scaling, each feature has zero mean and unit variance
print('Mean of X : ', X.mean(axis=0))
print('Std of X : ', X.std(axis=0))
print('\nMean after StandardScaler : ', X_scaled.mean(axis=0))
print('Std after StandardScaler : ', X_scaled.std(axis=0))
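# StandardScaler keeps the fitted statistics (`mean_` and `scale_`), so the transformation can be undone with inverse_transform (a minimal sketch on toy data, separate from the iris variables above):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# inverse_transform recovers the original values (up to floating-point error)
X_toy = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
toy_scaler = StandardScaler().fit(X_toy)
X_toy_scaled = toy_scaler.transform(X_toy)
X_toy_back = toy_scaler.inverse_transform(X_toy_scaled)
```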
# +
fig, axes = plt.subplots(nrows=1,ncols=4)
fig.set_size_inches(15, 4)
sns.distplot(X_scaled[:,0],ax=axes[0])
sns.distplot(X_scaled[:,1],ax=axes[1])
sns.distplot(X_scaled[:,2],ax=axes[2])
sns.distplot(X_scaled[:,3],ax=axes[3])
axes[0].set(xlabel='SepalLengthCm',title="distribution of SepalLengthCm")
axes[1].set(xlabel='SepalWidthCm',title="distribution of SepalWidthCm")
axes[2].set(xlabel='PetalLengthCm',title="distribution of PetalLengthCm")
axes[3].set(xlabel='PetalWidthCm',title="distribution of PetalWidthCm")
# -
X_scaled=pd.DataFrame(X_scaled,columns=['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm'])
X_scaled['Species']=df_train['Species']
sns.lmplot("SepalLengthCm", "SepalWidthCm", hue='Species', data=X_scaled, fit_reg=False)
# ## MinMaxScaler (min-max normalization)
# +
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1)).fit(X)
X_scaled = scaler.transform(X)
# after scaling, each feature lies in the range [0, 1]
print('Mean of X : ', X.mean(axis=0))
print('Std of X : ', X.std(axis=0))
print('\nMean after MinMaxScaler : ', X_scaled.mean(axis=0))
print('Std after MinMaxScaler : ', X_scaled.std(axis=0))
# +
fig, axes = plt.subplots(nrows=1,ncols=4)
fig.set_size_inches(15, 4)
sns.distplot(X_scaled[:,0],ax=axes[0])
sns.distplot(X_scaled[:,1],ax=axes[1])
sns.distplot(X_scaled[:,2],ax=axes[2])
sns.distplot(X_scaled[:,3],ax=axes[3])
axes[0].set(xlabel='SepalLengthCm',title="distribution of SepalLengthCm")
axes[1].set(xlabel='SepalWidthCm',title="distribution of SepalWidthCm")
axes[2].set(xlabel='PetalLengthCm',title="distribution of PetalLengthCm")
axes[3].set(xlabel='PetalWidthCm',title="distribution of PetalWidthCm")
# -
X_scaled=pd.DataFrame(X_scaled,columns=['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm'])
X_scaled['Species']=df_train['Species']
sns.lmplot("SepalLengthCm", "SepalWidthCm", hue='Species', data=X_scaled, fit_reg=False)
# ## MaxAbsScaler
# +
from sklearn.preprocessing import MaxAbsScaler
scaler = MaxAbsScaler().fit(X)
X_scaled = scaler.transform(X)
# after scaling, the absolute value of each feature lies in [0, 1]
print('Mean of X : ', X.mean(axis=0))
print('Std of X : ', X.std(axis=0))
print('\nMean after MaxAbsScaler : ', X_scaled.mean(axis=0))
print('Std after MaxAbsScaler : ', X_scaled.std(axis=0))
# +
fig, axes = plt.subplots(nrows=1,ncols=4)
fig.set_size_inches(15, 4)
sns.distplot(X_scaled[:,0],ax=axes[0])
sns.distplot(X_scaled[:,1],ax=axes[1])
sns.distplot(X_scaled[:,2],ax=axes[2])
sns.distplot(X_scaled[:,3],ax=axes[3])
axes[0].set(xlabel='SepalLengthCm',title="distribution of SepalLengthCm")
axes[1].set(xlabel='SepalWidthCm',title="distribution of SepalWidthCm")
axes[2].set(xlabel='PetalLengthCm',title="distribution of PetalLengthCm")
axes[3].set(xlabel='PetalWidthCm',title="distribution of PetalWidthCm")
# -
# ## RobustScaler
# +
from sklearn.preprocessing import RobustScaler
scaler = RobustScaler().fit(X)
X_scaled = scaler.transform(X)
# centering and scaling are based on the median and the interquartile range
print('Mean of X : ', X.mean(axis=0))
print('Std of X : ', X.std(axis=0))
print('\nMean after RobustScaler : ', X_scaled.mean(axis=0))
print('Std after RobustScaler : ', X_scaled.std(axis=0))
# +
fig, axes = plt.subplots(nrows=1,ncols=4)
fig.set_size_inches(15, 4)
sns.distplot(X_scaled[:,0],ax=axes[0])
sns.distplot(X_scaled[:,1],ax=axes[1])
sns.distplot(X_scaled[:,2],ax=axes[2])
sns.distplot(X_scaled[:,3],ax=axes[3])
axes[0].set(xlabel='SepalLengthCm',title="distribution of SepalLengthCm")
axes[1].set(xlabel='SepalWidthCm',title="distribution of SepalWidthCm")
axes[2].set(xlabel='PetalLengthCm',title="distribution of PetalLengthCm")
axes[3].set(xlabel='PetalWidthCm',title="distribution of PetalWidthCm")
# -
_posts/ithome/2020-12th-ironman/7.非監督式學習-降維(1)/.ipynb_checkpoints/6. 非監督式學習 K-mean分群-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import pandas as pd
data_set = pd.read_csv('steam.csv')
# -
data_set.info() # Get a general idea of what's in our data
# +
# We will be predicting the total ratings a game gets
# Specifically we will make this a binary classification problem where a game can have many or few
# ratings
data_set['total_ratings'] = (data_set['positive_ratings'] + data_set['negative_ratings'])
# Everything above and equal to the median is considered as having many ratings, everything else
# has few ratings
ratings_threshold = data_set['total_ratings'].median()
data_set['total_ratings'] = (data_set['total_ratings'] >= ratings_threshold).astype(int)
# -
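# The median-threshold binarization above can be sketched on a toy series:

```python
import pandas as pd

# Values at or above the median map to 1 ("many ratings"), the rest to 0.
s = pd.Series([5, 80, 3, 200, 40])
binary = (s >= s.median()).astype(int)
print(binary.tolist())
```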
# Drop everything we won't need
data_set.drop(columns=['positive_ratings', 'negative_ratings', 'appid', 'name'], axis=1, inplace=True)
# Make a deep copy of our data set for visualization purposes
data_set_visualize = data_set.copy()
# +
# %matplotlib inline
import seaborn as sns
import matplotlib.pyplot as plt
# Visualize release day, month, year and total ratings
data_set_visualize['release_date'] = pd.to_datetime(data_set_visualize['release_date'])
data_set_visualize['year'], data_set_visualize['month'], data_set_visualize['day'] = data_set_visualize['release_date'].dt.year, \
data_set_visualize['release_date'].dt.month, \
data_set_visualize['release_date'].dt.day
# Year and total ratings
sns.countplot(x='year', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Games released in 2016 and before have more games with many reviews than games with few
# Look at month and total ratings
sns.countplot(x='month', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# No meaningful relationship between release month and total ratings
# Look at date and total ratings
sns.countplot(x='day', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# No meaningful relationship between release day and total ratings
# Look at English language support and total ratings
sns.countplot(x='english', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# More games with English support than without, but virtually no difference in games with many and
# few ratings in English and non-English categories
# Visualize developer with total ratings
# Specifically look at how the top 250 developers relate to total ratings
dev_pub_data = pd.read_csv('top-dev-pub.csv')
top_dev = dev_pub_data['developer'].tolist()
data_set_visualize['developer'] = data_set_visualize['developer'].apply(lambda x: any([developer in x for developer in top_dev])).astype(int)
sns.countplot(x='developer', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Games by top developers have virtually no games with few ratings
# Let's visualize the same relationship but with top 250 publishers
top_pub = dev_pub_data['publisher'].tolist()
data_set_visualize['publisher'] = data_set_visualize['publisher'].apply(lambda x: any([publisher in x for publisher in top_pub])).astype(int)
sns.countplot(x='publisher', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Games by top publishers also have virtually no games with few ratings
# Visualize number of platforms
data_set_visualize['platform_count'] = data_set_visualize['platforms'].apply(lambda x: len(x.split(';')))
sns.countplot(x='platform_count', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Higher number of supported platforms gives more total ratings
# Visualize required age and total ratings
sns.countplot(x='required_age', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# 16 & 18 years old have more games with many reviews than with few, but there
# is a very small number of games in these categories
# Look at number of categories and total ratings
data_set_visualize['categories_count'] = data_set_visualize['categories'].apply(lambda x: len(x.split(';')))
sns.countplot(x='categories_count', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Games with >= 5 categories are more likely to have many reviews
# Look at number of genres and total ratings
data_set_visualize['genres_count'] = data_set_visualize['genres'].apply(lambda x: len(x.split(';')))
sns.countplot(x='genres_count', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Every genre count gives roughly same number of games with many & few ratings
# Look at Steamspy tags and total ratings
data_set_visualize['tags_count'] = data_set_visualize['steamspy_tags'].apply(lambda x: len(x.split(';')))
sns.countplot(x='tags_count', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Games with <= 2 Steamspy tags seem to be more likely to have few reviews
# Achievements with total ratings
sns.countplot(x='achievements', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Every number of achievements gives roughly same numbers of games with many & few ratings
# Average playtime's relationship with total ratings
data_set_visualize['average_playtime'] = (data_set_visualize['average_playtime'] >= 1.0).astype(int)
sns.countplot(x='average_playtime', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Games with >0 average playtime tend to have many reviews
# Median playtime's relationship with total ratings
data_set_visualize['median_playtime'] = (data_set_visualize['median_playtime'] >= 1.0).astype(int)
sns.countplot(x='median_playtime', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Games with >0 median playtime tend to have many reviews
# Number of owners and total ratings
# Data set gives a range, we'll take the midpoint
# the split produces strings, so cast to float before averaging
data_set_visualize['owners_lower'] = data_set_visualize['owners'].str.split('-', expand=True)[0].astype(float)
data_set_visualize['owners_upper'] = data_set_visualize['owners'].str.split('-', expand=True)[1].astype(float)
data_set_visualize['owners_mid'] = data_set_visualize[['owners_lower', 'owners_upper']].mean(axis=1)
sns.countplot(x='owners_mid', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# Midpoint > 10000 gives more chances of having many ratings
# Price and total ratings
sns.countplot(x='price', hue='total_ratings', data=data_set_visualize)
plt.show()
# +
# There doesn't seem to be a pattern between price and total ratings: some low price tags have
# many more games with few reviews than with many, but some higher price tags show the same behavior
# +
# Now that we've done some data exploration, we can pre-process the data according to what we've found
# Here we're working directly with the data set rather than its deep copy, and create a CSV of cleaned data from it
# Drop the columns we've seen don't affect total ratings
data_set.drop(['english', 'required_age', 'achievements', 'genres', 'price'], axis=1, inplace=True)
# Binarize the release year, with 2016 as the threshold; every year that's 2016 and before is 1
data_set['release_date'] = pd.to_datetime(data_set['release_date'])
data_set['year'] = data_set['release_date'].dt.year
data_set['year'] = (data_set['year'] <= 2016).astype(int)
data_set.drop(['release_date'], axis=1, inplace=True)
# Binarize developer data; top 250 developers become 1
data_set['developer'] = data_set['developer'].apply(lambda x: any([developer in x for developer in top_dev])).astype(int)
# Binarize publisher data; top 250 publishers become 1
data_set['publisher'] = data_set['publisher'].apply(lambda x: any([publisher in x for publisher in top_pub])).astype(int)
# Get a count of the number of supported platforms then binarize the count data; 2+ platforms become 1
data_set['platform_count'] = data_set['platforms'].apply(lambda x: len(x.split(';')))
data_set['platform_count'] = (data_set['platform_count'] >= 2).astype(int)
data_set.drop(['platforms'], axis=1, inplace=True)
# Get a count of the number of categories listed then binarize the data; 5+ categories become 1
data_set['categories_count'] = data_set['categories'].apply(lambda x: len(x.split(';')))
data_set['categories_count'] = (data_set['categories_count'] >= 5).astype(int)
data_set.drop(['categories'], axis=1, inplace=True)
# Get a count of the number of Steamspy tags listed then binarize the data; 2+ tags become 1
data_set['tags_count'] = data_set['steamspy_tags'].apply(lambda x: len(x.split(';')))
data_set['tags_count'] = (data_set['tags_count'] <= 2).astype(int)
data_set.drop(['steamspy_tags'], axis=1, inplace=True)
# Binarize average playtime; playtime that's greater or equal to 1 becomes 1
data_set['average_playtime'] = (data_set['average_playtime'] >= 1.0).astype(int)
# Binarize median playtime; playtime that's greater or equal to 1 becomes 1
data_set['median_playtime'] = (data_set['median_playtime'] >= 1.0).astype(int)
# Binarize number of owners; >10000 owners becomes 1
# (the split produces strings, so cast to float before averaging)
data_set['owners_lower'] = data_set['owners'].str.split('-', expand=True)[0].astype(float)
data_set['owners_upper'] = data_set['owners'].str.split('-', expand=True)[1].astype(float)
data_set['owners_mid'] = data_set[['owners_lower', 'owners_upper']].mean(axis=1)
data_set['owners_mid'] = (data_set['owners_mid'] > 10000).astype(int)
data_set.drop(['owners_lower', 'owners_upper', 'owners'], axis=1, inplace=True)
# Re-order the data frame into the order we want
data_set = data_set[['year', 'developer', 'publisher', 'platform_count', 'categories_count', 'tags_count',
'average_playtime', 'median_playtime', 'owners_mid', 'total_ratings']]
print(data_set.head(20)) # Check that the data frame looks how we want it (order of columns, data types in columns)
# -
data_set.to_csv('steam-cleaned.csv') # Make CSV for the models
|
steam_clean_data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Awkwafina/fluid/blob/master/structure_by_dina_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="W95Sni929z9z" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 139} outputId="07925d91-e297-4765-dfc4-a6d2bbc6b3ff"
# !git clone https://github.com/Awkwafina/fluid
# + id="Ex_1RhBh98pf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 55} outputId="807bc7a6-9656-47d9-83de-def23ae8d2cb"
# !python /content/fluid/scripts/convert_raw_to_bert.py /content/en_ewt-ud-train.conllu.txt bert_train large
# + id="HsoO27uZFoNh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 89} outputId="f0219fa3-e925-4a96-9a0d-3478543cb270"
# !python /content/fluid/scripts/convert_raw_to_bert.py /content/en_ewt-ud-dev.conllu.txt bert_dev large
# + id="ecC_ssZ_JVOk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 55} outputId="2652121d-2e88-4566-bae4-373531dd2f5a"
# !python /content/fluid/scripts/convert_raw_to_bert.py /content/en_ewt-ud-test.conllu.txt bert_test large
# + id="NEjI0znd_fkv" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 419} outputId="ff09a8d2-9e12-4e3c-eea2-7e953e26c4c6"
# !pip install pytorch-pretrained-bert
# + id="bXW6jHL6Ait6" colab_type="code" colab={}
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM, WordpieceTokenizer
# + id="2WMvakI0pHxp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 401} outputId="57cb9be7-5b7c-440b-e871-8521b40f9d8c"
# !python /content/fluid/structural-probes/run_experiment.py /content/fluid/example/config/bert_ex.yaml
|
structure_by_dina_5.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # EDS Case Study
#
# Load and resample GSS data
#
# <NAME>
#
# [MIT License](https://en.wikipedia.org/wiki/MIT_License)
# +
# If we're running in Colab, set up the environment
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
# !pip install empiricaldist
# !git clone --depth 1 https://github.com/AllenDowney/ExploratoryDataAnalysis
# %cd ExploratoryDataAnalysis
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import utils
# -
# ### Reading the extract
#
# https://gssdataexplorer.norc.org/projects/52787/extracts
#
# Currently Pandas is not able to read the files generated by GSS in any of the standard formats: Stata, SPSS, Excel.
#
# As a workaround, I wrote the following functions to read the Stata dictionary file and use the information there to read the Stata data file using `pd.read_fwf` which reads fixed-width files.
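# As a minimal illustration of the `pd.read_fwf` mechanism (with made-up column positions, not the actual GSS layout):

```python
import io
import pandas as pd

# A tiny fixed-width "file": id occupies character columns 0-3, age columns 4-6
raw = io.StringIO("   1 47\n   2 31\n")
df = pd.read_fwf(raw, colspecs=[(0, 4), (4, 7)], names=['id', 'age'])
print(df)
```

# The dictionary-parsing code below does exactly this, except that it derives `colspecs` and `names` from the Stata dictionary file instead of hard-coding them.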
# +
import re
import os
class FixedWidthVariables(object):
"""Represents a set of variables in a fixed width file."""
def __init__(self, variables, index_base=0):
"""Initializes.
variables: DataFrame
index_base: are the indices 0 or 1 based?
Attributes:
colspecs: list of (start, end) index tuples
names: list of string variable names
"""
self.variables = variables
# note: by default, subtract 1 from colspecs
self.colspecs = variables[['start', 'end']] - index_base
        # convert colspecs to a list of (start, end) int pairs
        self.colspecs = self.colspecs.astype(int).values.tolist()
self.names = variables['name']
def ReadFixedWidth(self, filename, **options):
"""Reads a fixed width ASCII file.
filename: string filename
returns: DataFrame
"""
df = pd.read_fwf(filename,
colspecs=self.colspecs,
names=self.names,
**options)
return df
def ReadStataDct(dct_file, **options):
"""Reads a Stata dictionary file.
dct_file: string filename
options: dict of options passed to open()
returns: FixedWidthVariables object
"""
type_map = dict(byte=int, int=int, long=int, float=float,
double=float, numeric=float)
var_info = []
with open(dct_file, **options) as f:
for line in f:
match = re.search( r'_column\(([^)]*)\)', line)
if not match:
continue
start = int(match.group(1))
t = line.split()
vtype, name, fstring = t[1:4]
name = name.lower()
if vtype.startswith('str'):
vtype = str
else:
vtype = type_map[vtype]
long_desc = ' '.join(t[4:]).strip('"')
var_info.append((start, vtype, name, fstring, long_desc))
columns = ['start', 'type', 'name', 'fstring', 'desc']
variables = pd.DataFrame(var_info, columns=columns)
# fill in the end column by shifting the start column
variables['end'] = variables.start.shift(-1)
variables.loc[len(variables)-1, 'end'] = 0
dct = FixedWidthVariables(variables, index_base=1)
return dct
def read_gss(dirname):
"""Reads GSS files from the given directory.
dirname: string
returns: DataFrame
"""
dct_file = os.path.join(dirname, 'GSS.dct')
dct = ReadStataDct(dct_file)
data_file = os.path.join(dirname, 'GSS.dat.gz')
gss = dct.ReadFixedWidth(data_file, compression='gzip')
return gss
# -
gss = read_gss('gss_eda')
print(gss.shape)
gss.head()
# ### Missing data
#
# For many variables, missing values are encoded with numbers, so we need to replace them before we do any analysis.
#
# For example, for `polviews`, the values 8, 9, and 0 represent "Don't know", "No answer", and "Not applicable".
#
# "Not applicable" usually means the respondent was not asked a particular question.
#
# To keep things simple, we'll treat all of these values as equivalent, but we should keep in mind that we lose some information by doing that. For example, if a respondent refuses to answer a question, that might suggest something about their answer. If so, treating their response as missing data might bias the results.
#
# Fortunately, for most questions the number of respondents who refused to answer is small.
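# The function below boils down to repeated calls to `Series.replace` with a list of sentinel codes. A toy sketch with hypothetical responses and codes:

```python
import numpy as np
import pandas as pd

# hypothetical responses where 0, 8 and 9 encode different kinds of missingness
polviews = pd.Series([4, 8, 2, 0, 9, 6])
cleaned = polviews.replace([0, 8, 9], np.nan)
print(cleaned.isna().sum())  # three responses treated as missing
```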
# +
def replace_invalid(df):
"""Replace invalid data with NaN.
df: DataFrame
"""
df.realinc.replace([0], np.nan, inplace=True)
df.educ.replace([98, 99], np.nan, inplace=True)
# 89 means 89 or older
df.age.replace([98, 99], np.nan, inplace=True)
df.cohort.replace([9999], np.nan, inplace=True)
df.adults.replace([9], np.nan, inplace=True)
df.colhomo.replace([0, 8, 9], np.nan, inplace=True)
df.libhomo.replace([0, 8, 9], np.nan, inplace=True)
df.cappun.replace([0, 8, 9], np.nan, inplace=True)
df.gunlaw.replace([0, 8, 9], np.nan, inplace=True)
df.grass.replace([0, 8, 9], np.nan, inplace=True)
df.fepol.replace([0, 8, 9], np.nan, inplace=True)
df.abany.replace([0, 8, 9], np.nan, inplace=True)
df.prayer.replace([0, 8, 9], np.nan, inplace=True)
df.sexeduc.replace([0, 8, 9], np.nan, inplace=True)
df.premarsx.replace([0, 8, 9], np.nan, inplace=True)
df.xmarsex.replace([0, 8, 9], np.nan, inplace=True)
df.homosex.replace([0, 5, 8, 9], np.nan, inplace=True)
df.racmar.replace([0, 8, 9], np.nan, inplace=True)
df.spanking.replace([0, 8, 9], np.nan, inplace=True)
df.racpres.replace([0, 8, 9], np.nan, inplace=True)
df.fear.replace([0, 8, 9], np.nan, inplace=True)
df.databank.replace([0, 8, 9], np.nan, inplace=True)
df.affrmact.replace([0, 8, 9], np.nan, inplace=True)
df.happy.replace([0, 8, 9], np.nan, inplace=True)
df.hapmar.replace([0, 8, 9], np.nan, inplace=True)
df.natspac.replace([0, 8, 9], np.nan, inplace=True)
df.natenvir.replace([0, 8, 9], np.nan, inplace=True)
df.natheal.replace([0, 8, 9], np.nan, inplace=True)
df.natcity.replace([0, 8, 9], np.nan, inplace=True)
df.natcrime.replace([0, 8, 9], np.nan, inplace=True)
df.natdrug.replace([0, 8, 9], np.nan, inplace=True)
df.nateduc.replace([0, 8, 9], np.nan, inplace=True)
df.natrace.replace([0, 8, 9], np.nan, inplace=True)
df.natarms.replace([0, 8, 9], np.nan, inplace=True)
df.nataid.replace([0, 8, 9], np.nan, inplace=True)
df.natfare.replace([0, 8, 9], np.nan, inplace=True)
df.health.replace([0, 8, 9], np.nan, inplace=True)
df.life.replace([0, 8, 9], np.nan, inplace=True)
df.helpful.replace([0, 8, 9], np.nan, inplace=True)
df.fair.replace([0, 8, 9], np.nan, inplace=True)
df.trust.replace([0, 8, 9], np.nan, inplace=True)
df.conclerg.replace([0, 8, 9], np.nan, inplace=True)
df.coneduc.replace([0, 8, 9], np.nan, inplace=True)
df.confed.replace([0, 8, 9], np.nan, inplace=True)
df.conpress.replace([0, 8, 9], np.nan, inplace=True)
df.conjudge.replace([0, 8, 9], np.nan, inplace=True)
df.conlegis.replace([0, 8, 9], np.nan, inplace=True)
df.conarmy.replace([0, 8, 9], np.nan, inplace=True)
df.spkhomo.replace([0, 8, 9], np.nan, inplace=True)
df.spkath.replace([0, 8, 9], np.nan, inplace=True)
df.colath.replace([0, 8, 9], np.nan, inplace=True)
df.libath.replace([0, 8, 9], np.nan, inplace=True)
df.spkrac.replace([0, 8, 9], np.nan, inplace=True)
df.spkcom.replace([0, 8, 9], np.nan, inplace=True)
df.spkmil.replace([0, 8, 9], np.nan, inplace=True)
df.satjob.replace([0, 8, 9], np.nan, inplace=True)
df.satfin.replace([0, 8, 9], np.nan, inplace=True)
df.finrela.replace([0, 8, 9], np.nan, inplace=True)
df.union_.replace([0, 8, 9], np.nan, inplace=True)
df.res16.replace([0, 8, 9], np.nan, inplace=True)
df.fund.replace([0, 8, 9], np.nan, inplace=True)
df.memchurh.replace([0, 8, 9], np.nan, inplace=True)
df.fund16.replace([0, 8, 9], np.nan, inplace=True)
df.reliten.replace([0, 8, 9], np.nan, inplace=True)
df.postlife.replace([0, 8, 9], np.nan, inplace=True)
df.pray.replace([0, 8, 9], np.nan, inplace=True)
df.sprel16.replace([0, 8, 9], np.nan, inplace=True)
df.hunt.replace([0, 8, 9], np.nan, inplace=True)
df.polviews.replace([0, 8, 9], np.nan, inplace=True)
df.compuse.replace([0, 8, 9], np.nan, inplace=True)
df.degree.replace([8, 9], np.nan, inplace=True)
df.padeg.replace([8, 9], np.nan, inplace=True)
df.madeg.replace([8, 9], np.nan, inplace=True)
df.spdeg.replace([8, 9], np.nan, inplace=True)
df.partyid.replace([8, 9], np.nan, inplace=True)
df.chldidel.replace([-1, 8, 9], np.nan, inplace=True)
df.attend.replace([9], np.nan, inplace=True)
df.childs.replace([9], np.nan, inplace=True)
df.adults.replace([9], np.nan, inplace=True)
df.divorce.replace([0, 8, 9], np.nan, inplace=True)
df.agewed.replace([0, 98, 99], np.nan, inplace=True)
df.relig.replace([0, 98, 99], np.nan, inplace=True)
df.relig16.replace([0, 98, 99], np.nan, inplace=True)
df.age.replace([0, 98, 99], np.nan, inplace=True)
# note: sibs contains some unlikely numbers
df.sibs.replace([-1, 98, 99], np.nan, inplace=True)
df.educ.replace([97, 98, 99], np.nan, inplace=True)
df.maeduc.replace([97, 98, 99], np.nan, inplace=True)
df.paeduc.replace([97, 98, 99], np.nan, inplace=True)
df.speduc.replace([97, 98, 99], np.nan, inplace=True)
df.cohort.replace([0, 9999], np.nan, inplace=True)
df.marcohrt.replace([0, 9999], np.nan, inplace=True)
df.phone.replace([0, 2, 9], np.nan, inplace=True)
df.owngun.replace([0, 3, 8, 9], np.nan, inplace=True)
df.pistol.replace([0, 3, 8, 9], np.nan, inplace=True)
df.class_.replace([0, 5, 8, 9], np.nan, inplace=True)
df.pres04.replace([0, 8, 9], np.nan, inplace=True)
df.pres08.replace([0, 8, 9], np.nan, inplace=True)
df.pres12.replace([0, 8, 9], np.nan, inplace=True)
replace_invalid(gss)
# -
# ### Resampling
#
# The GSS uses stratified sampling, which means that some groups are deliberately oversampled to help with statistical validity.
#
# As a result, each respondent has a sampling weight which is proportional to the number of people in the population represented by the respondent.
#
# Before running any analysis, we should compensate for stratified sampling by "resampling", that is, by drawing a random sample from the dataset, where each respondent's chance of appearing in the sample is proportional to their sampling weight.
#
# `utils` provides a function to do this resampling.
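# I won't reproduce `utils.resample_by_year` here, but the core idea can be sketched with `DataFrame.sample`, which accepts per-row weights. This is a simplified version that resamples the whole frame at once rather than within each year; the column name `wtssall` matches the call below.

```python
import pandas as pd

def resample_weighted(df, weight_col, seed=0):
    """Draw a bootstrap sample where each row's inclusion probability
    is proportional to its sampling weight."""
    return df.sample(n=len(df), replace=True,
                     weights=df[weight_col],
                     random_state=seed).reset_index(drop=True)

toy = pd.DataFrame({'x': [1, 2, 3], 'wtssall': [0.1, 0.1, 10.0]})
sample = resample_weighted(toy, 'wtssall')
print(sample['x'].value_counts())  # the heavily-weighted row should dominate
```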
np.random.seed(19)
sample = utils.resample_by_year(gss, 'wtssall')
# ### Saving the results
#
# I'll save the results to an HDF5 file, which is a binary format that makes it much faster to read the data back.
# !rm eds.gss.hdf5
for i in range(3):
np.random.seed(i)
sample = utils.resample_by_year(gss, 'wtssall')
key = f'gss{i}'
sample.to_hdf('eds.gss.hdf5', key)
# %time gss = pd.read_hdf('eds.gss.hdf5', 'gss0')
gss.shape
|
eds01_gss_clean.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Plot of the Beta distribution
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
a, b = 5, 1 # shape parameters
# Function that takes parameters a and b, builds the Beta distribution and plots it
def f1(a,b):
    beta = stats.beta(a, b)
    x = np.linspace(beta.ppf(0.01),
                    beta.ppf(0.99), 100)
    fp = beta.pdf(x) # probability density function
    # Plot the Beta density
    plt.plot(x, fp)
# Call f1 for each of the required (a, b) parameter pairs
f1(0.5,0.5)
f1(5,1)
f1(1,3)
f1(2,2)
f1(2,5)
# Configure the plot
plt.title('Beta distribution')
plt.ylabel('probability')
plt.xlabel('values')
plt.xlim(0, 1)
plt.ylim(0, 2.5)
# Show the plot
plt.show()
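# For reference, the density being plotted is
#
# $$f(x; a, b) = \frac{x^{a-1} (1 - x)^{b-1}}{B(a, b)}, \qquad 0 \le x \le 1,$$
#
# where $B(a, b)$ is the Beta function; the shape parameters $a$ and $b$ control the skew and concentration of the curves above.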
# +
# Properties of the distributions
import statistics as st
import scipy.stats  # needed so scy.stats resolves below
import scipy as scy
import matplotlib.pyplot as plt
def random_beta(a,b):
rand_beta = np.random.beta(a,b,size=100)
return rand_beta
first_rand = random_beta(1,0.5)
mean = first_rand.mean()
median = st.median(first_rand)
# mode = st.mode(first_rand)
Kurtosis = scy.stats.kurtosis(first_rand)
skewness = scy.stats.skew(first_rand)
print("Mean: ")
print(mean)
print("Median: ")
print(median)
print("Mode: does not exist")
# print(mode)
print("Kurtosis:")
print(Kurtosis)
print("skewness:")
print(skewness)
plt.axvline(mean)
plt.axvline(median)
# plt.axvline(mode)
plt.axvline(Kurtosis)
plt.axvline(skewness)
plt.show()
# +
# Model evaluation
# Linear regression
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
X_train =[]
Y_train =[]
X_test = []
Y_test = []
for line in open('slr05.txt'):
    if line[0] != '#':
        # \n is the newline; use .split(' ') for space-separated data, .split('\t') for tab-separated
        d = line.replace('\n','').split('\t')
        X_train.append([float(d[0])])  # sklearn expects numeric, 2D features
        Y_train.append(float(d[1]))
for line_test in open('slr06.txt'):
    if line_test[0] != '#':
        d_test = line_test.replace('\n','').split('\t')
        X_test.append([float(d_test[0])])
        Y_test.append(float(d_test[1]))
regr = linear_model.LinearRegression()
regr.fit(X_train, Y_train)
# Plot outputs
plt.scatter(X_test, Y_test, color='black')
plt.plot(X_test, regr.predict(X_test), color='blue',
linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
#edges.append((d[0],d[1]))
print(X_train)
print(X_test)
print(Y_train)
print(Y_test)
# -
|
DiderGonzalez/Ejercicios 1.0/Repaso Estadistica/Funcion de densidad.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## 3.1 Three in One:
# Describe how you could use a single array to implement three stacks.
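# One straightforward approach (sketched below with fixed, equal capacities; a flexible-division scheme is also possible) splits the array into three regions and keeps a size counter per stack:

```python
class ThreeStacks:
    """Three fixed-capacity stacks backed by a single list."""
    def __init__(self, stack_capacity):
        self.capacity = stack_capacity
        self.mem = [None] * (3 * stack_capacity)
        self.sizes = [0, 0, 0]  # number of elements in each stack

    def push(self, stack_num, value):
        if self.sizes[stack_num] == self.capacity:
            raise OverflowError('stack is full')
        # stack k owns slots [k*capacity, (k+1)*capacity)
        self.mem[stack_num * self.capacity + self.sizes[stack_num]] = value
        self.sizes[stack_num] += 1

    def pop(self, stack_num):
        if self.sizes[stack_num] == 0:
            return None
        self.sizes[stack_num] -= 1
        return self.mem[stack_num * self.capacity + self.sizes[stack_num]]

stacks = ThreeStacks(5)
stacks.push(0, 'a')
stacks.push(2, 'b')
print(stacks.pop(2))  # 'b'
print(stacks.pop(0))  # 'a'
print(stacks.pop(1))  # None
```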
# ## 3.2 Stack Min:
# How would you design a stack which, in addition to push and pop, has a function min which returns the minimum element? Push, pop and min should all operate in O(1) time.
# +
import sys
capacity = 30
class Stack():
def __init__(self, capacity):
self.last = -1
self.capacity = capacity
self.mem = [(0, 0) for _ in range(capacity)]
def push(self, value):
        if self.last < self.capacity - 1:
min_val = min(self.min(), value)
self.last += 1
self.mem[self.last] = (value, min_val)
def min(self):
if self.last == -1:
return sys.maxsize
else:
value, min_val = self.mem[self.last]
return min_val
def pop(self):
if self.last == -1:
value = None
else:
value, _ = self.mem[self.last]
self.last -= 1
return value
def __str__(self):
        out_ch_list = [str(self.mem[i]) for i in range(self.last + 1)]
return ",".join(out_ch_list)
stack = Stack(capacity)
print(stack)
print(stack.min())
stack.push(19)
print(stack)
print(stack.min())
stack.push(10)
print(stack)
print(stack.min())
stack.push(1)
print(stack)
print(stack.min())
stack.pop()
print(stack)
print(stack.min())
stack.pop()
print(stack)
print(stack.min())
# -
|
Issues/algorithms/Stacks and Queues.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
#Experience the curse of dimensionality via Monte Carlo sampling
# -
# %matplotlib inline
# +
#What Monte Carlo sampling is
#Estimate pi by Monte Carlo sampling
from __future__ import division
import math
import random
n = 100000
count = 0
for i in range(n):
u, v = random.uniform(0,1), random.uniform(0,1)
    d = math.sqrt((u - 0.5)**2 + (v - 0.5)**2)
if d < 0.5:
count += 1
ratio = count/n
print ratio * 4
# +
#Run the same Monte Carlo estimate on the n-dimensional ball and recover pi the same way
def curse(n):
m = 10000
count = 0
for i in range(m):
x = []
for j in range(n):
x.append(random.uniform(0,1))
y = sum(l**2 for l in x)
z = math.pow(y, 1/2)
if z < 1:
count += 1
ratio = count / m
return math.pow(ratio* (math.pow(2, n))* (math.gamma(n/2.0 +1)), 2/n)
# -
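# The formula at the end of `curse` comes from the volume of the unit $n$-ball, $V_n = \pi^{n/2} / \Gamma(n/2 + 1)$. Sampling uniformly in $[0,1]^n$ covers one orthant of the ball, so the hit ratio estimates $V_n / 2^n$; solving for $\pi$ gives
#
# $$\pi \approx \left(\mathrm{ratio} \cdot 2^n \cdot \Gamma\!\left(\tfrac{n}{2} + 1\right)\right)^{2/n},$$
#
# which is exactly the expression the function returns. As $n$ grows, almost none of the sampled points land inside the ball, so the estimate degrades rapidly: the curse of dimensionality.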
curse(2)
#See the curse of dimensionality numerically
for i in range(2, 15):
print curse(i)
# +
#See it graphically
import matplotlib.pyplot as plt
def curse2(n):
m = 10000
count = 0
W = []
for i in range(1, m+1):
x = []
for j in range(n):
x.append(random.uniform(0,1))
y = sum(l**2 for l in x)
z = math.pow(y, 1/2)
if z < 1:
count += 1
p_ratio = count / i
w = math.pow(p_ratio* (math.pow(2, n))* (math.gamma(n/2.0 +1)), 2/n)
W.append(w)
plt.plot(W)
# -
#Visualize the curse of dimensionality
#Please ignore the first plot
for i in range(1, 16):
    fig = plt.figure(figsize=(30, 30)) #(width, height in inches)
plt.subplot(5,3,i)
curse2(i)
|
curse of dimentionality.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Constrained Simulation Examples
from typing import Sequence
import coba
from coba.random import CobaRandom
from coba.simulations import LambdaSimulation, ConstrainedSimulation, Context, Action
from coba.learners import RandomLearner, VowpalLearner, ChanceConstrainedOptimizer, EpsilonBanditLearner
from coba.benchmarks import Benchmark
import numpy as np
n_interactions = 5000
r = CobaRandom()
# Below is a constrained simulation with two possible actions: the higher-reward action violates the constraint while the lower-reward action doesn't.
# +
def context(index: int) -> Context:
return tuple(r.randoms(5))
def actions(index: int, context: Context) -> Sequence[Action]:
return [1, 2]
#actions = [ r.randoms(5) for _ in range(3) ]
#return [ tuple(a/sum(action) for a in action) for action in actions ]
def rewards(index: int, context: Context, action: Action) -> float:
if action == 1:
return 0.5
else:
return 0.9
def feedback(index: int, context: Context, action: Action) -> Sequence[float]:
if action == 1:
return tuple((0.5, -1))
return tuple((0.9, 0.8))
# +
sim = [ConstrainedSimulation(n_interactions, context, actions, feedback)]
result = Benchmark(sim, shuffle=[1,2,3,4]).evaluate([ChanceConstrainedOptimizer(constraint=0.1, len_feedback=1, learning_rate=0.3, learner=VowpalLearner, vw_kwargs={"bag":5, "seed":10}),
ChanceConstrainedOptimizer(constraint=0.1, len_feedback=1, learning_rate=0.3, learner=RandomLearner)])
# -
# These results are averaged over 4 runs. We can see that the vw learner obeys the constraint, and as the number of interactions increases, the standard deviation of the reward (the orange bars) decreases. I interpret this as the vw learner consistently learning the constraint, with the time it takes to do so varying across runs.
result.plot_learners()
# This plot shows the reward averaged over every 10 interactions (I tried plotting instantaneous reward, but it's a messy, hard-to-interpret plot). Here we can see the vw learner's reward dropping off and staying at 0.5.
result.plot_learners(start=0.0, end=1.0, span=10 , err_every=0)
# The simulation below is the same as the one above but with Gaussian noise added. This means that in some rare situations, the high reward action may not violate the constraint. I was interested in seeing how this would affect how quickly the vw learner learned the constraint and its effect on reward. This simulation was also an average of 4 runs.
def noisy_feedback(index: int, context: Context, action: Action) -> Sequence[float]:
noise = np.random.normal()
if action == 1:
return tuple((0.5, -1+noise))
return tuple((0.9, 0.8+noise))
# +
sim = [ConstrainedSimulation(n_interactions, context, actions, noisy_feedback)]
noisy_result = Benchmark(sim, shuffle=[1,2,3,4]).evaluate([ChanceConstrainedOptimizer(constraint=0.1, len_feedback=1, learning_rate=0.3, learner=VowpalLearner, vw_kwargs={"bag":5, "seed":10}),
ChanceConstrainedOptimizer(constraint=0.1, len_feedback=1, learning_rate=0.3, learner=RandomLearner)])
# -
# It seems like even with the noise, the vw learner tends towards the lower reward non-constraint violating action.
noisy_result.plot_learners()
# This plot is hard to interpret given the noise, but it shows the general trend.
noisy_result.plot_learners(start=0.0, end=1.0, span=10 , err_every=0)
# Finally, I tested this on multiple actions and multiple constraints. The lowest-reward action doesn't violate either constraint, the medium-reward action violates one constraint, and the highest-reward action violates both constraints.
# +
def multiple_actions(index: int, context: Context) -> Sequence[Action]:
return [1, 2, 3]
#actions = [ r.randoms(5) for _ in range(3) ]
#return [ tuple(a/sum(action) for a in action) for action in actions ]
def multiple_rewards(index: int, context: Context, action: Action) -> float:
if action == 1:
return 0.2
if action == 2:
return 0.6
if action == 3:
return 0.9
def multiple_feedback(index: int, context: Context, action: Action) -> Sequence[float]:
if action == 1:
return tuple((0.2, -1, -0.5))
if action == 2:
return tuple((0.6, -0.5, 0.8))
if action == 3:
return tuple((0.9, 0.7, 1))
# +
sim = [ConstrainedSimulation(n_interactions, context, multiple_actions, multiple_feedback)]
multiple_result = Benchmark(sim).evaluate([ChanceConstrainedOptimizer(constraint=0.1, len_feedback=1, learning_rate=0.3, learner=VowpalLearner, vw_kwargs={"bag":5, "seed":10}),
ChanceConstrainedOptimizer(constraint=0.1, len_feedback=1, learning_rate=0.3, learner=RandomLearner),
ChanceConstrainedOptimizer(constraint=0.1, len_feedback=1, learning_rate=0.3, learner=EpsilonBanditLearner, vw_kwargs={"epsilon":0.1})])
# -
# I also included the epsilon bandit learner with the constraint because I was curious, but it doesn't seem to learn the constraint as well as the vw learner (is this learner reasonable for this type of simulation?). We can see, though, that the vw learner quickly goes for the lowest-reward, non-constraint-violating option.
multiple_result.plot_learners()
# And here's the span-10 plot again, where you can see the same sharp drop-off.
multiple_result.plot_learners(start=0.0, end=1.0, span=10 , err_every=0)
|
examples/notebooks/ConstrainedSim.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# pyLigthGBM
# =======
#
# Python wrapper for Microsoft [LightGBM](https://github.com/Microsoft/LightGBM)
# Make sure that you have installed LightGBM [Installation-Guide](https://github.com/Microsoft/LightGBM/wiki/Installation-Guide)
#
# **GitHub : [https://github.com/ArdalanM/pyLightGBM](https://github.com/ArdalanM/pyLightGBM) **
#
# **Author of this notebook :** <NAME> <<EMAIL>>
#
# -------
#
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import os, gc
import numpy as np
import pandas as pd
from sklearn import metrics, model_selection
from sklearn.preprocessing import LabelEncoder
from pylightgbm.models import GBMRegressor
# -
# DATA
# -------
#
# For this example used **data from Kaggle competition Allstate Claims Severity**
# You can download data here : https://www.kaggle.com/c/allstate-claims-severity/data
# +
df_train = pd.read_csv("data/train.csv.zip")
print('Train data shape', df_train.shape)
df_test = pd.read_csv("data/test.csv.zip")
print('Test data shape', df_test.shape)
# -
# Extracting `loss` from train and `id` from test
y = np.log(df_train['loss']+1).to_numpy().astype(float)
id_test = np.array(df_test['id'])
# Merging train and test data
# +
df = df_train.append(df_test, ignore_index=True)
del df_test, df_train
gc.collect()
print('Merged data shape', df.shape)
# -
# Dropping columns that aren't useful
df.drop(labels=['loss', 'id'], axis=1, inplace=True)
feature_list = df.columns.tolist()
# Transform the categorical features `cat1` through `cat116`
# +
le = LabelEncoder()
for col in df.columns.tolist():
if 'cat' in col:
df[col] = le.fit_transform(df[col])
# -
# TRAIN, VALIDATION, TEST
# -------
# Split data into train, validation (for early stopping) and test set
# +
print('train-test split')
df_train, df_test = df.iloc[:len(y)], df.iloc[len(y):]
del df
gc.collect()
print('train-validation split\n')
X = df_train.to_numpy()
X_train, X_valid, y_train, y_valid = model_selection.train_test_split(X, y, test_size=0.2, random_state=42)
X_test = df_test.to_numpy()
del df_train, df_test
gc.collect()
print('Train shape', X_train.shape)
print('Validation shape', X_valid.shape)
print('Test shape', X_test.shape)
# -
# TRAINING GBMRegressor
# -------
# List of parameters and their explanation you can find here https://github.com/Microsoft/LightGBM/wiki/Quick-Start
#
# **don't forget to change `exec_path` here**
# +
seed = 42
gbmr = GBMRegressor(
    exec_path='/path/to/your/LightGBM/lightgbm', # change this to your LightGBM path
config='',
application='regression',
num_iterations=500,
learning_rate=0.1,
num_leaves=10,
tree_learner='serial',
num_threads=4,
min_data_in_leaf=10,
metric='l2',
feature_fraction=1.0,
feature_fraction_seed=seed,
bagging_fraction=1.0,
bagging_freq=0,
bagging_seed=seed,
metric_freq=1,
early_stopping_round=10
)
gbmr.fit(X_train, y_train, test_data=[(X_valid, y_valid)])
print("Mean Absolute Error:", metrics.mean_absolute_error(y_true=(np.exp(y_valid)-1), y_pred=(np.exp(gbmr.predict(X_valid))-1)))
# -
print('Best round', gbmr.best_round)
# FEATURE IMPORTANCE
# -------
# TOP 10
# +
feature_dict = dict(zip(range(len(feature_list)), feature_list))
df_fi = pd.DataFrame(gbmr.feature_importance(), columns=['feature', 'importance'])
df_fi = df_fi.replace({"feature": feature_dict})
del feature_dict, feature_list
top = 10
plt.figure()
df_fi.head(top).plot(kind='barh',
x='feature',
y='importance',
sort_columns=False,
legend=False,
figsize=(10, 6),
facecolor='#1DE9B6',
edgecolor='white')
plt.title('LightGBM Feature Importance')
plt.xlabel('relative importance')
# -
# SUBMISSION
# -------
# Predicting Test set
y_test_preds = gbmr.predict(X_test)
y_test_preds=(np.exp(y_test_preds)-1)
# Make submission file
df_submission = pd.read_csv('data/sample_submission.csv.zip')
df_submission['loss'] = y_test_preds
# Save submission file
df_submission.to_csv('submission.csv',index=False)
# #### This submission file scored 1138.06444 on LB
|
notebooks/regression_example_kaggle_allstate.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="89RkxlCD3Ivz" outputId="bf447e4e-cd2d-4219-c1e4-c6aeb32b7869"
# Run the command below if necessary, for example with Google Colab
# #!python3 -m pip install mxnet-cu110
# + id="lwPUiNAbTv39"
import matplotlib.pyplot as plt
import mxnet as mx
import numpy as np
import pandas as pd
# + id="0bay_LSqwcBW"
# Perceptron Model
def perceptron(weights, bias, features):
return mx.nd.dot(features, weights) + bias
# +
# Activation Functions
def linear(x):
return x
def relu(x):
return (x > 0) * x
# +
# Input Data
inputs = np.arange(-5, 5, 0.01)
fig, axs = plt.subplots(1, 2)
fig.suptitle("Activation Functions")
axs[0].set_title("Linear")
axs[0].plot(inputs, linear(inputs))
axs[1].set_title("ReLU")
axs[1].plot(inputs, relu(inputs))
# + id="5oaBcTfbTsDK"
# Loading data
house_df = pd.read_csv("kc_house_data.csv")
# + colab={"base_uri": "https://localhost:8080/"} id="laSBKHf4aJUk" outputId="c55e15dc-f485-4faa-b142-5146037a63a0"
house_df.info()
# + id="2LofYYgkxon4"
# Only interested in living square feet, bathrooms and grade
house_df = house_df[["price", "sqft_living", "bathrooms", "grade"]]
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="73O2WSsKbYow" outputId="7d8a2bc3-54d5-4b50-d553-00a8e5b9d117"
house_df.head()
# + id="2yAtt49vK2C_"
# One-hot encoding
grade_onehot = pd.get_dummies(house_df.grade)
house_df = pd.concat([house_df, grade_onehot], axis=1)
house_df = house_df.drop("grade", axis=1)
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="6qC67qwL4vei" outputId="eb27bec2-b8df-4a72-bd3d-4a60e4a472f2"
house_df.head()
# + colab={"base_uri": "https://localhost:8080/"} id="NQdFeke0yz7g" outputId="3464786f-6a18-4bbb-98e1-dfe55876bf45"
# Number of Features: number of columns of dataframe except output (price)
number_of_features = len(house_df.columns) - 1
print(number_of_features)
# + id="_Xs-wgf53v97"
# Number of outputs: 1 (price)
number_of_outputs = 1
# + colab={"base_uri": "https://localhost:8080/"} id="aVFO4nsv16ww" outputId="58407531-28ff-488d-c4a1-2f30ae46eb04"
# Model Parameters Definition + Initialization
weights = mx.nd.random_normal(shape=(number_of_features, number_of_outputs))
bias = mx.nd.random_normal(shape=number_of_outputs)
print("Weights:")
print(weights)
print()
print("Bias:")
print(bias)
# + colab={"base_uri": "https://localhost:8080/"} id="6595mTDU343g" outputId="4e7dd328-b983-4e31-ae64-40fe18c340f4"
# Input features of an example
example_input = mx.nd.array(house_df.iloc[0].drop("price").to_numpy())
print(example_input)
# + colab={"base_uri": "https://localhost:8080/"} id="wU-ugJJO4IdD" outputId="d7e16e7f-5b12-4faa-94e8-0cea43f52c93"
# Expected output of the example (price)
expected_output = house_df.iloc[0].price
print(expected_output)
# + colab={"base_uri": "https://localhost:8080/"} id="0_tlFEFQ5RRs" outputId="eab9e144-974e-48ad-b93e-e5053c64affd"
# Calculate the prediction of our model
model_output = perceptron(weights, bias, example_input).asnumpy()[0]
print(model_output)
# + colab={"base_uri": "https://localhost:8080/"} id="fus6MPbO5ZFQ" outputId="818b36bf-e23d-4ea0-811b-cf10eb8c17a9"
# How much error
error_abs = abs(expected_output - model_output)
error_perc = error_abs / expected_output * 100
print("Absolute Error:", error_abs)
print("Relative Error (%):", error_perc)
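# The perceptron above is just an affine map; a pure-NumPy equivalent (a sketch independent of MXNet, with illustrative weight values) makes the arithmetic explicit:

```python
import numpy as np

# dot(features, weights) + bias, mirroring the MXNet perceptron() above
def perceptron_np(weights, bias, features):
    return features @ weights + bias

weights = np.array([[2.0], [0.5], [1.0]])  # illustrative values
bias = np.array([3.0])
features = np.array([1.0, 2.0, 4.0])
print(perceptron_np(weights, bias, features))  # [10.]
```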
|
ch03/3_1_Understanding_Maths_for_Regression_Models.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
def func(fun, a):
return fun(a,a,a)
print(func(lambda x,y,z : x+y+z, 5))
# +
# A lambda is an anonymous function defined in a single expression.
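# A lambda is interchangeable with a one-line def; a minimal sketch (the names `add3` and `add3_def` are illustrative):

```python
# The same function written both ways
add3 = lambda x, y, z: x + y + z

def add3_def(x, y, z):
    return x + y + z

print(add3(1, 2, 3))                        # 6
print(add3(1, 2, 3) == add3_def(1, 2, 3))   # True
```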
|
Python Basic/Week 4/10. Functional Programming/.ipynb_checkpoints/Anonymous or lamda function-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Operations with lists
lista_1 = [2,1,4,3]
lista_1*2
lista_1 + 1  # raises TypeError: lists do not support element-wise addition (unlike NumPy arrays below)
lista_1 + [1]
lista_1
len(lista_1)
lista_1
min(lista_1)
max(lista_1)
lista_1.sort()
lista_1
# ## Numpy
# 
# https://numpy.org/
import numpy as np
array_1 = np.array([1,2,3,4])
type(array_1)
array_1
array_1+1
array_1 = array_1+1
array_1
array_1*3
len(array_1)
array_1.shape
array_2 = [10,20,30,40]
array_1**2
np.arange(10)
np.linspace(0,10)
m = np.array([[10,11,12],
[12,14,15]])
m
len(m)
m.shape
unos = np.ones(shape=(3,5))
unos
numeros = np.arange(0,101,5)
numeros
numeros[:]
numeros[4:10]
numeros[4:10:2]
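# Slicing generalises to two dimensions, with one slice per axis (a short sketch reusing a matrix like `m` above):

```python
import numpy as np

m = np.array([[10, 11, 12],
              [12, 14, 15]])
print(m[0, :])    # first row: [10 11 12]
print(m[:, 1])    # second column: [11 14]
print(m[:, ::2])  # every other column
```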
|
notebooks/007a_intro_numpy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Eldave93/Seizure_Detection_Tutorials/blob/master/Classification_01_Feature_Pre_Processing.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="kjBMZF-lBgH4" colab_type="text"
# # Classification Tutorial #01
# # Feature Pre-Processing
#
# by [<NAME>](https://www.lancaster.ac.uk/psychology/about-us/people/david-elliott)
# / [GitHub](https://github.com/Eldave93)
#
# In this notebook we will explore how to prepare a dataset so it can be used for classification. We will cover the concepts of splitting data into test and training sets, scaling the data values so they are comparable, and balancing the classes so there is not an uneven distribution of data belonging to each class.
# + id="-qwaAH4-Pa09" colab_type="code" outputId="baa4f4eb-2a21-4431-ba99-7517f4536dae" colab={"base_uri": "https://localhost:8080/", "height": 259}
# need at least 0.9.0
# !pip install seaborn --upgrade
# + [markdown] id="tpxEvhibBvP2" colab_type="text"
# # Data Preparation
#
# First, let's get our workspace ready and then load in the data.
# + id="EgX0721pBsU-" colab_type="code" colab={}
import os # for file locations
import matplotlib.pyplot as plt # for plotting
# colours for printing outputs
class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
# + [markdown] id="ev9Y9qApB8x3" colab_type="text"
# Let's start by loading in the feature dataframe we made in the first Feature Extraction Tutorial (Epileptologie). If you do not have the dataframe already created, go back to that tutorial and click run all (it should take less than a minute).
#
# If you are using Google Colab for both notebooks it should now be in your Files and you can continue with this tutorial if you like. If you saved the file locally onto your computer then go into the left tab, Files, and upload the data.
#
# If you saved it into Google Drive then you can mount the drive as shown below
# + id="88KeZy7eDPRu" colab_type="code" outputId="ade17f4c-3148-4465-e1fa-cb6aab2b2181" colab={"base_uri": "https://localhost:8080/", "height": 55}
from google.colab import drive
drive.mount('/content/gdrive')
# + [markdown] id="QcRD5bNqC3gP" colab_type="text"
# Now let's put the data into a pandas dataframe and have a quick reminder of its structure.
#
# This dataset has only one signal that was divided into multiple datasets lasting 23.6 seconds each. Each dataset was associated with a class and had features extracted from it. Briefly, these were:
#
# **Welch**
#
# The Welch method is a spectral density estimation technique: the signal is split into overlapping windowed segments, a discrete Fourier transform is applied to each to calculate a periodogram, and the squared periodograms are averaged to obtain a power measure.
# - *power_delta*: Average power between 0.1hz and 4hz
# - *power_theta*: Average power between 4hz and 8hz
# - *power_alpha*: Average power between 8hz and 12hz
# - *power_beta*: Average power between 12hz and 30hz
# - *power_gamma*: Average power between 30hz and 70hz
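# These band powers can be reproduced with `scipy.signal.welch`; a minimal sketch (the sampling rate and the 10 Hz test signal are illustrative, band edges as listed above):

```python
import numpy as np
from scipy import signal

def band_powers(x, fs, bands):
    # Welch PSD, then average the power within each band's frequency range
    freqs, psd = signal.welch(x, fs=fs, nperseg=min(len(x), 2 * fs))
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

bands = {'power_delta': (0.1, 4), 'power_theta': (4, 8), 'power_alpha': (8, 12),
         'power_beta': (12, 30), 'power_gamma': (30, 70)}
fs = 256
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)      # a pure 10 Hz oscillation
powers = band_powers(x, fs, bands)
print(max(powers, key=powers.get))  # power_alpha
```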
#
# **Discrete Wavelet Transform**
#
# Several oscillatory kernel-based wavelets are stretched and moved to different positions in time across a signal, dividing the data into different frequency components which are each analysed in respect to their scale.
# - *LSWT*: The log-sum energy of the subband coefficients
# - *mean*: Average power of the wavelet coefficients in each sub-band
# - *mean_abs*: Mean of the absolute values of the coefficients in each sub-band
# - *std*: Standard deviation of the coefficients in each sub-band
# - *Ratio*: Ratio of the absolute mean values of adjacent sub-bands
#
# If you want more specific detail on these features then I recommend you look at the tutorial associated with the feature extraction.
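# As a rough intuition for these sub-band features, one level of the Haar DWT can be written in plain NumPy (a simplified sketch; the tutorial itself uses a proper wavelet library):

```python
import numpy as np

def haar_dwt_level(x):
    # Pairwise sums (approximation / low-pass) and differences (detail / high-pass)
    x = np.asarray(x, dtype=float).reshape(-1, 2)
    approx = (x[:, 0] + x[:, 1]) / np.sqrt(2)
    detail = (x[:, 0] - x[:, 1]) / np.sqrt(2)
    return approx, detail

x = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 0.0, 2.0])
approx, detail = haar_dwt_level(x)
# Sub-band summary features like those listed above:
print('mean_abs:', np.abs(detail).mean())
print('std:', detail.std())
```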
# + id="CisJyfYHKQru" colab_type="code" colab={}
FILE_PATH = 'feature_df.json.gzip'
# + id="vjzYMdF1Deoh" colab_type="code" outputId="29d87732-ef4b-41ad-f203-6b6e0431806e" colab={"base_uri": "https://localhost:8080/", "height": 1208}
import numpy as np
import pickle # saving python objects
import pandas as pd # dataframes
# load features dataframe
feature_df = pd.read_json(FILE_PATH, orient='index', compression = 'gzip')
# display examples of the data
display(feature_df.info())
display(feature_df.head())
# + [markdown] id="Bg-8_kSKLp0N" colab_type="text"
# # Splitting Data
#
# We need to split the data up into an array containing the feature data and an array of the class labels.
#
# However, before diving in, let's first take a moment to reflect on scikit-learn's design principles<sup>2</sup>, as laid out by Géron (2017)<sup>1</sup>, so we can get an understanding as to how to use scikit-learn's API:
#
# - Consistency
# - *Estimators*: Objects that can estimate parameters based on a dataset. Estimation is performed by the *fit()* method that takes a dataset (or two for supervised learning), with other parameters being hyperparameters set via an instance variable.
# - *Transformers*: Some estimators (e.g. LabelEncoder) can transform the dataset using a *transform()* method to return the transformed dataset, with a convenient and sometimes optimised *fit_transform()* method to do both.
# - *Predictors*: Some estimators can make predictions using a *predict()* method, which can be measured for quality using the *score()* method.
# - Inspection
# - All hyperparameters are available by instance variables (e.g. LogisticRegressionObject.C) and all learned parameters accessible with underscore instance variables (e.g. LogisticRegressionObject.intercept_)
# - Nonproliferation of classes
# - Datasets are NumPy arrays or SciPy sparse matrices with hyperparameters Python strings or numbers.
# - Composition
# - Building blocks can be reused as much as possible; for example, the use of *Pipeline* to string together transformers and an estimator
# - Sensible defaults
# - Most parameters have sensible defaults to get a baseline system working quickly
#
# **NOTE**
# - We will be focusing on *Estimators* and *Transformers* in this notebook, with *Predictors* found in the following tutorial
# - If you look at the number of epoch labels (seconds) you will notice there is a class imbalance; we will come to this later.
#
#
# ---
#
# 1. <NAME>. (2017). Hands-on machine learning with Scikit-Learn and TensorFlow: concepts, tools, and techniques to build intelligent systems. " O'Reilly Media, Inc.".
# 2. <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., <NAME>., ... & <NAME>. (2013). API design for machine learning software: experiences from the scikit-learn project. arXiv preprint arXiv:1309.0238.
# + id="3tGZPgDwMSUl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="fa745d57-96ec-4c35-a22c-557fd1e9048b"
import sklearn
sklearn.__version__
# + [markdown] id="HaLr-dX9LvtD" colab_type="text"
# ## Data X
#
# Getting the feature array is pretty easy. All we need to do is remove the other columns referring to the class, file_id and location of the electrodes, as these will not be used to help classify the signals.
#
# Scikit-learn accepts numpy arrays so we need to change our pandas dataframe to an array. This is simple enough as we just need to use `.values`
# + id="dQ7VSGqXLwiL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 257} outputId="4fc6be1c-70f8-4d13-e6e0-272212965a78"
feature_df_drop = feature_df.drop(['class', 'file_id', 'location'], axis='columns')
data_x = feature_df_drop.values
data_x
# + [markdown] id="RnHrdSt4MIbO" colab_type="text"
# ## Data Y
#
# Getting the class labels ready has a little more to it, as you need to decide how to encode them. Class strings are typically encoded into integers, using methods such as one-hot encoding, dummy coding and effect coding.
#
# The simplest method is to use a label encoder, which encodes labels with values between 0 and n_classes-1. However, the values then become orderable, which should not be permissible for distinct categories<sup>1</sup>. Still, this is a common and easy method because it is easy to get sklearn to fit and predict models based on integer-encoded data.
#
# Lets have a look at how we could encode the class labels to make a binary decision, seizure or no seizure.
#
# **NOTE**
# - The positive class in scikit-learn is the class that is labeled as class 1 unless specified when creating a scoring metric.
#
# ---
#
# 1. <NAME>., & <NAME>. (2018). Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists. " O'Reilly Media, Inc.".
# + id="bxSvQXjlLrv7" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="9814b6bd-3965-42f7-d947-ddfbcbe7178e"
display(feature_df['class'].value_counts())
# + id="3opUIYVlMKOI" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="42b5a6f1-5da1-4739-ed46-880d946561d5"
from sklearn.preprocessing import LabelEncoder
# create condition(group) array
class_series = feature_df['class']
# make a label encoder
le = LabelEncoder()
# change the string labels to ints
data_y = le.fit_transform(class_series)
# get the unique labels
labels = list(class_series.unique())
# print out the labels and their new codes
for i, code in enumerate(list(le.transform(labels))):
print(labels[i] + ': ' + str(code))
data_y[:5]
# + [markdown] id="TNVZ9hxgMiZ9" colab_type="text"
# Another method is to use One-Hot Encoding, which represents category labels as a group of bits with each bit representing a category. However, this method uses one more bit than really necessary, leading to a linear dependency (e1 + e2 + ... + ek = 1).
#
# Imagine we wanted to classify the locations of the electrodes. As we have 3 locations, to ensure classes 2, 1 and 0 are not given different weights, we should encode them using the LabelBinarizer.
#
# **Note**
# - a one-hot encoding of y labels should use LabelBinarizer instead of the OneHotEncoder as the latter is meant for feature variables NOT classes
# + id="Omhln_97Mj1a" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 91} outputId="46f1d4fb-9e81-4317-828c-7ed9d2c5eaa3"
feature_df['location'].value_counts()
# + [markdown] id="Zbvhe6UyMn58" colab_type="text"
# **Note**
# - SciPy sparse matrices are useful for one-hot encoded data as it only stores the location of the nonzero elements so saves on space. These are the default for sklearn.preprocessing.OneHotEncoder and can be used where you want to use categorical data as a feature.
# + id="vLDIacatMt3N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 165} outputId="70007617-4673-4088-ecd4-e8fc432af15c"
from sklearn.preprocessing import LabelBinarizer
# create condition(group) array
location_series = feature_df['location']
# make a label encoder
lb = LabelBinarizer()
# change the string labels to ints
data_y = lb.fit_transform(location_series)
# get the unique labels
labels = list(location_series.unique())
# print out the labels and their new codes
for i, code in enumerate(list(lb.transform(labels))):
print(labels[i] + ': ' + str(code))
data_y[:5]
# + [markdown] id="mrpPt4u7MxW7" colab_type="text"
# Dummy Coding avoids the latter problem with One-Hot Encoding, as it represents features in only k-1 features. The last feature is represented by all 0's and known as the reference category. However this means, unlike one-hot encoding, missing data cannot be represented as all 0's.
#
# There doesn't appear, at the time of writing, to be a way to do this in sklearn, but we can do it easily in pandas using the get_dummies function. By default, get_dummies() does one-hot encoding rather than dummy encoding, so we need to use drop_first=True (notice from the pandas df there is no intracranial epileptogenic zone column, as it is now [0,0]).
# + id="d3eL4lvIM6F0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 524} outputId="f6b650d0-7014-4ddb-c061-676da404c309"
one_hot_y = pd.get_dummies(location_series)
dummy_y = pd.get_dummies(location_series, drop_first=True)
print(color.BOLD+color.UNDERLINE+'Onehot'+color.END)
display(one_hot_y.head())
print(color.BOLD+color.UNDERLINE+'Dummy'+color.END)
display(dummy_y.head())
data_y = dummy_y.values
data_y[:5]
# + [markdown] id="zdx1sn9vM8YQ" colab_type="text"
# There are other methods that we won't focus upon but I will briefly mention them.
#
# Firstly there is effect coding, which is similar to dummy coding except that the reference category is represented by -1's rather than all 0's. The reason you may want to do this relates to the interpretation of a regression analysis, which we will not touch on in this notebook, but which can be read about in Zheng and Casari (2018)<sup>1</sup>.
#
# For large categorical variables, feature hashing can be used. A hash function deterministically maps unbounded integers to a finite range, with multiple inputs mapped to the same output (a collision). Uniform hashing means roughly the same number of inputs are mapped to each of the m bins. The drawback is that hashed features are no longer interpretable, but they have major computational benefits for data exploration, visualization and machine learning on large datasets.
#
# Bin counting is another method for large data that uses the conditional probability of the target rather than the category value, assuming historical data is available for the statistics. Other statistics can be used in addition, including raw counts and the log-odds ratio. The aim is to convert a large, sparse binary categorical representation into small, dense, real-valued numbers. Rare categories below a threshold can also be given their own bin, as they likely will not provide enough data to reliably estimate the given target. The problem with bin counting is the potential for data leakage, although there are methods to prevent this<sup>1</sup>.
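# Feature hashing is available in scikit-learn as `FeatureHasher`; a minimal sketch (the electrode-style category names are made up for illustration):

```python
from sklearn.feature_extraction import FeatureHasher

# Map an unbounded set of category strings into a fixed number of columns
hasher = FeatureHasher(n_features=8, input_type='string')
X_hashed = hasher.transform([['FP1'], ['O2'], ['FP1']]).toarray()
print(X_hashed.shape)                       # (3, 8) regardless of how many categories exist
print((X_hashed[0] == X_hashed[2]).all())   # True: equal categories hash identically
```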
#
# ---
# 1. <NAME>., & <NAME>. (2018). Feature Engineering for Machine Learning: Principles and Techniques for Data Scientists. " O'Reilly Media, Inc.".
# + [markdown] id="lVmpmnF0M_Bs" colab_type="text"
# ## Training and Test Data
#
# Data is split into training and test sets that have the same class proportions as the original data. This is because, when we later fit a classification model, we want to compare its predictions against true labels in a separate group of data (the test set) to get an unbiased estimate of performance before using the model in the real world. Balancing the information loss in the training set against the estimation of the model's generalization error requires the size of the data to be taken into consideration: withholding data for the test set removes information the algorithm could learn from, but the smaller the test set, the more inaccurate the estimate of the generalization error<sup>1</sup>.
#
# To split the data we will use the train_test_split function. Note that this function automatically shuffles the data before splitting. We fix a random_state parameter, as we will for a number of later functions, to fix the internal pseudo-random number generator used for shuffling the datasets to ensure our results are reproducible.
#
# Let's start by going back to simple label-encoded y data to encode seizure and non-seizure, and we will remove the surface electrodes.
#
# **NOTE**
# - It's advised to split your data before calculating features, to ensure the sets are fully separated. However, for the ease of this tutorial we worked out all the features first and are splitting them now.
#
# ---
#
# 1. Raschka, Sebastian, and <NAME>. Python Machine Learning, 2nd Ed. Packt Publishing, 2017.
# + id="hzOaVGE-NDf0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 91} outputId="ab079ffc-7e86-4143-e630-5f8b4906500f"
# select only intracranial EEG
feature_reduced = feature_df[feature_df.location != 'surface']
# drop the columns which are not feature variables
feature_reduced_drop = feature_reduced.drop(['class', 'file_id', 'location'], axis='columns')
# change to an array
data_x = feature_reduced_drop.values
# change the string labels to ints
data_y = le.fit_transform(feature_reduced['class'])
print(color.BOLD+'Feature DataFrame'+color.END)
display(data_x.shape)
print(color.BOLD+'Target DataFrame'+color.END)
display(data_y.shape)
# + id="uzZuNnXkNZhb" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 146} outputId="80c43386-9248-48e8-ca11-b17629e4ae6e"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data_x, data_y, test_size=0.2, random_state=0)
print(color.BOLD+color.UNDERLINE+'Feature DataFrame'+color.END)
print('Training size: ' + str(X_train.shape))
print('Test size: ' + str(X_test.shape))
print(color.BOLD+color.UNDERLINE+'\nTarget DataFrame'+color.END)
print('Training size: ' + str(y_train.shape))
print('Test size: ' + str(y_test.shape))
# + [markdown] id="Uet0t7_QNYnt" colab_type="text"
# Let's just check that the proportions of class membership are preserved in the split data (1st column being class label and 2nd being count)
# + id="o1qikAVzNnQU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 146} outputId="ac3b75a8-ca55-4261-d282-77251f5634fb"
print(color.BOLD+color.UNDERLINE+'Feature DataFrame'+color.END)
display(np.array(np.unique(y_train, return_counts=True)).T)
print(color.BOLD+color.UNDERLINE+'\nTarget DataFrame'+color.END)
display(np.array(np.unique(y_test, return_counts=True)).T)
# + [markdown] id="Gn_GVFyBNugE" colab_type="text"
# # Scaling
#
# Scaling the features is important for most classifiers, except decision trees and random forests (we will talk about them in the next notebook).
#
# There are two general methods to scaling we will discuss; normalisation and standardization.
#
# Normalisation converts signals into a common range so they can be compared, by detrending the signal (removing the mean) or scaling to a fixed range such as 0-1 using methods like min-max scaling. The mean of a single electrode recording depends on the amplifier gain, so normalisation is particularly important when comparing signals from different recording equipment or researchers.
#
# Standardization is similar, but centres the feature columns at mean 0 with standard deviation 1 so that the features form a normal distribution. This is particularly useful for classifiers that use optimization algorithms, such as logistic regression and SVM, as it makes it easier for the model to learn weights and makes the algorithm less sensitive to outliers than min-max scaling<sup>1</sup>.
#
# **TODO**
# - find where I got the placeholder from, probs Varsavsky but check because useful to compare with standardization
#
# **References**
#
# 1. Raschka, Sebastian, and <NAME>. Python Machine Learning, 2nd Ed. Packt Publishing, 2017.
# + id="kPDCyXVTNyn-" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 128} outputId="bb580bda-74c6-4b28-9adf-cffbddf6fbaa"
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import StandardScaler
mms = MinMaxScaler()
X_train_mms = mms.fit_transform(X_train)
sc = StandardScaler()
X_train_scale = sc.fit_transform(X_train)
print(color.BOLD+color.UNDERLINE+'Before Scaling'+color.END)
display(X_train[:5,0])
print(color.BOLD+color.UNDERLINE+'After MinMaxScaler'+color.END)
display(X_train_mms[:5,0])
print(color.BOLD+color.UNDERLINE+'After StandardScaler'+color.END)
display(X_train_scale[:5,0])
# + [markdown] id="NGDgOSzAOSBm" colab_type="text"
# # Balancing the Classes
#
# In some domains, the data will have a differing number of samples in each class. This is particularly the case in seizure detection, where seizures ('Ictal' EEG) happen infrequently compared to baseline EEG (known as Interictal - the period between seizures). The learning and prediction of machine learning algorithms tend to be affected by imbalances; for example, the decision function of a linear SVM may favour the majority class<sup>1</sup>. There are a number of methods available to address imbalances in a dataset, broadly categorised into Under-Sampling, Over-Sampling, and combinations of both.
#
# However, before getting into the methods, let's first look at our data and pick two features to visualise for the re-sampling, because as can be seen below there are loads of combinations!
#
# **NOTE**
# - this is a pretty quick tour of balancing methods. For more in-depth explanations make sure to check out: https://github.com/scikit-learn-contrib/imbalanced-learn
#
# ---
#
# 1. http://contrib.scikit-learn.org/imbalanced-learn/stable/introduction.html
# + id="3yoBW0g8O0Cc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="bb8ec852-020a-4349-e57c-7eee671de196"
import seaborn
import seaborn as sns; sns.set(color_codes=True)
seaborn.__version__
# + [markdown] id="cqqwOJm9gihZ" colab_type="text"
# **NOTE**
# - The cell below is quite computationally expensive, so if the image already exists we load it in rather than calculating it again!
# + id="k1mJh8PSORhN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 54} outputId="c26ef3a1-7a32-42f9-c435-c69e21edaaa4"
# %%time
from IPython.display import Image
SKIP=True
SCATTER_PLOT_PATH = 'tutorial_scatterplot_matrix.png'
# if the image already exists
if os.path.exists(SCATTER_PLOT_PATH):
display(Image(SCATTER_PLOT_PATH))
elif SKIP==False:
# make a dataframe out of the scaled training set
plot_data = pd.DataFrame(X_train_scale, columns = feature_reduced_drop.columns)
# add a class column from the training labels, mapped from integers back to strings
plot_data['class'] = np.vectorize({0:'Baseline', 1:'Seizure'}.get)(y_train)
# plot each feature against each other with the classes used to separate out data
sns.pairplot(plot_data, height =2.5, hue = 'class', markers=["o", "s"])
plt.savefig(SCATTER_PLOT_PATH, dpi=300)
plt.show()
# + [markdown] id="W19Mhgh9WcNh" colab_type="text"
# Now let's pick two features to plot against each other; below are all the options.
# + id="6lQYHMy4WVzi" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 664} outputId="ba2d5d41-d8b4-4aac-89f4-df0a000e4f51"
feature_list = list(feature_reduced_drop.columns)
feature_list
# + [markdown] id="ee0z_DGDYWMH" colab_type="text"
# Now we pick our two options and we'll format the df so they are put in two side by side columns
# + id="6nLO5eIpYXA9" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 206} outputId="b725e9e2-29ea-46b8-fa07-22d2b16d1b4e"
x_axis_label = 'D1_Ratio'
y_axis_label = 'D2_Ratio'
reduced_array = X_train_scale[:,[feature_list.index(x_axis_label),feature_list.index(y_axis_label)]]
reduced_df = pd.DataFrame(reduced_array, columns=[x_axis_label, y_axis_label])
reduced_df.head()
# + [markdown] id="O_LWYQVUYYZV" colab_type="text"
# Now we can look at that pairplot
# + id="0HocOLEMYatd" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 365} outputId="007b0a1f-14e8-48ab-c2e6-643ca61c5bea"
def plot_pairplot(data_x, data_y):
data_plot = data_x.copy()
data_plot['class'] = np.vectorize({0:'Baseline', 1:'Seizure'}.get)(data_y)
sns.pairplot(data_plot,
hue = 'class',
markers=["o", "s"],
plot_kws=dict(alpha = 0.5))
plt.show()
plot_pairplot(reduced_df, y_train)
# + [markdown] id="FR1ohsbvYwFX" colab_type="text"
# ## Under-Sampling
#
# ### Resample
#
# A fast way to balance the data is just to randomly select a subset of the data for each class so they have the number of datapoints found in the smallest class.
#
# First, let's do this using the scikit-learn library
# + id="iHBzLnpMYxyl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 439} outputId="490ba736-4397-46df-c81f-de9b481512bd"
from sklearn.utils import resample
from collections import Counter
print(color.UNDERLINE + 'Before Resample' + color.END)
print(Counter(y_train))
data_x_downsampled, data_y_downsampled = resample(reduced_df[y_train == 0],
y_train[y_train == 0],
                                                   replace=False,  # down-sampling should draw without replacement
n_samples=reduced_df[y_train == 1].shape[0],
random_state=123)
data_x_bal = np.vstack((reduced_df[y_train == 1], data_x_downsampled))
data_y_bal = np.hstack((y_train[y_train == 1], data_y_downsampled))
print(color.UNDERLINE + 'After Resample' + color.END)
print(Counter(data_y_bal))
plot_pairplot(pd.DataFrame(data_x_bal), pd.DataFrame(data_y_bal))
# + [markdown] id="FjyIw-AVYy32" colab_type="text"
# ### RandomUnderSampler
#
# RandomUnderSampler is part of the imblearn package, which provides many techniques for working with imbalanced data.
#
# First we'll look at how you can do a random undersample with this package, which is just an easier version of what we have just done
# + id="8u9miN1mZhCy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 439} outputId="46a5a66e-af31-4425-d00f-d34ec7263217"
from imblearn.under_sampling import RandomUnderSampler
from collections import Counter
def imblearn_sample(sampler, data_x, data_y):
print(color.UNDERLINE + 'Before Resample' + color.END)
print(Counter(data_y))
data_x_downsampled, data_y_downsampled = sampler.fit_resample(data_x,
data_y)
print(color.UNDERLINE + 'After Resample' + color.END)
print(Counter(data_y_downsampled))
plot_pairplot(pd.DataFrame(data_x_downsampled), pd.DataFrame(data_y_downsampled))
imblearn_sample(RandomUnderSampler(random_state=123), reduced_df, y_train)
# + [markdown] id="EQJSxpuSZrto" colab_type="text"
# ### NearMiss
#
# A number of undersampling methods use heuristics based on k-nearest neighbors (KNN) classification<sup>1</sup>. KNN finds a number of samples that are the most similar to a data point we want to classify, based on a given distance metric, with its assigned class label depending on a majority vote by the nearest neighbours<sup>2</sup>. NearMiss uses this by selecting samples in the class to be under-sampled where the average distance to the closest or farthest samples of the minority class is smallest<sup>3</sup>.
#
# **References**
# 1. <NAME>., & <NAME>. (2003, August). kNN approach to unbalanced data distributions: a case study involving information extraction. In Proceedings of workshop on learning from imbalanced datasets (Vol. 126).
# 2. Raschka, Sebastian, and <NAME>. Python Machine Learning, 2nd Ed. Packt Publishing, 2017.
# 3. <NAME>., <NAME>., & <NAME>. (2017). Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. The Journal of Machine Learning Research, 18(1), 559-563.
# + id="rpk2mfGleenR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 439} outputId="65b2b404-a364-4bf2-f852-91dae4775e1f"
from imblearn.under_sampling import NearMiss
imblearn_sample(NearMiss(random_state=123), reduced_df, y_train)
# + [markdown] id="h_-lY7mme5hx" colab_type="text"
# ### NeighbourhoodCleaningRule
#
# Undersampling techniques also include data cleaning rules, where the number of samples in classes are not specified, but data is edited based on methods such as removing data dissimilar to their neighbourhood<sup>1</sup> or by removing one or both samples in different classes when they are nearest neighbors of each other<sup>2</sup>.
#
# ---
# 1. <NAME>. (1972). Asymptotic properties of nearest neighbor rules using edited data. IEEE Transactions on Systems, Man, and Cybernetics, (3), 408-421.
# 2. <NAME>. (1976). Two modifications of CNN. IEEE Trans. Systems, Man and Cybernetics, 6, 769-772.
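# The Tomek-link rule<sup>2</sup> can be sketched in a few lines. This is a toy illustration on 1-D data, not imblearn's implementation:

```python
import numpy as np

def tomek_links(X, y):
    """Toy Tomek-link finder: a pair (i, j) is a Tomek link when i and j
    are each other's nearest neighbour and carry different labels."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    nn = d.argmin(axis=1)                # nearest neighbour of each point
    return [(i, j) for i, j in enumerate(nn)
            if nn[j] == i and y[i] != y[j] and i < j]

X = np.array([[0.0], [0.1], [1.0], [1.05], [5.0]])
y = np.array([0, 0, 0, 1, 1])
print(tomek_links(X, y))  # [(2, 3)] — the mutual-nearest pair with different labels
```

Cleaning methods then delete one or both members of each such pair, thinning out the class boundary.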
# + id="9DUHMf-9e_Dp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 439} outputId="073a0d6f-b4b1-4d55-e09f-b70fd8327c0e"
from imblearn.under_sampling import NeighbourhoodCleaningRule
imblearn_sample(NeighbourhoodCleaningRule(random_state=123), reduced_df, y_train)
# + [markdown] id="S-hs555SY58n" colab_type="text"
# ## Over-Sampling
#
# ### RandomOverSampler
# Data can be oversampled easily by randomly sampling from minority classes with replacement to duplicate original samples.
#
# **Notes**
# - make sure to oversample after splitting the training and validation sets or you may "bleed" information into the validation sets of the model when trying to test a model (https://beckernick.github.io/oversampling-modeling/)
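# A minimal numpy-only sketch of that ordering, with a hypothetical `oversample_minority` helper (imblearn's RandomOverSampler, used below, does the same job): split first, then oversample only the training part.

```python
import numpy as np

def oversample_minority(X, y, rng):
    """Randomly duplicate minority-class rows (with replacement) until both
    classes are the same size. Assumes a binary label array."""
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    n_extra = counts.max() - counts.min()
    idx = np.flatnonzero(y == minority)
    extra = rng.choice(idx, size=n_extra, replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep]

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = np.array([0] * 90 + [1] * 10)

# Split FIRST (a simple 80/20 holdout), THEN oversample the training set only,
# so no duplicated minority row can leak into the validation set.
perm = rng.permutation(len(y))
train, valid = perm[:80], perm[80:]
X_tr_bal, y_tr_bal = oversample_minority(X[train], y[train], rng)
print(np.bincount(y_tr_bal))  # classes now balanced in the training set only
```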
# + id="l0i1ZuxyfyOQ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 439} outputId="6f3a1155-9b98-4888-d685-98fbb530c028"
from imblearn.over_sampling import RandomOverSampler
imblearn_sample(RandomOverSampler(random_state=123), reduced_df, y_train)
# + [markdown] id="IWOGshEUf65Z" colab_type="text"
# ### ADASYN and SMOTE
# Instead of just randomly duplicating samples, there are also approaches that generate new samples through interpolation, such as SMOTE and ADASYN. However, these methods can generate noisy samples, so the previously discussed cleaning methods can be applied after oversampling<sup>1</sup>.
#
# ---
# 1. <NAME>., <NAME>., & <NAME>. (2004). A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD explorations newsletter, 6(1), 20-29.
# + id="rmumlD39hsLh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1778} outputId="ff08a97a-7ba9-4089-8222-d71e8f99c042"
from imblearn.over_sampling import SMOTE, ADASYN
from imblearn.combine import SMOTEENN, SMOTETomek
print(color.BOLD+color.UNDERLINE+'SMOTE'+color.END)
imblearn_sample(SMOTE(random_state=123), reduced_df, y_train)
print(color.BOLD+color.UNDERLINE+'ADASYN'+color.END)
imblearn_sample(ADASYN(random_state=123), reduced_df, y_train)
print(color.BOLD+color.UNDERLINE+'SMOTE with Edited Nearest Neighbor'+color.END)
imblearn_sample(SMOTEENN(random_state=123), reduced_df, y_train)
print(color.BOLD+color.UNDERLINE+'SMOTE with Tomek links'+color.END)
imblearn_sample(SMOTETomek(random_state=123), reduced_df, y_train)
# + [markdown] id="SO50lrjZjJjx" colab_type="text"
# # Exercises
#
# Below are a few suggested exercises that may help improve your skills.
#
# **TODO**
# - Make some exercises...
# + [markdown] id="nEsw92BkjMPh" colab_type="text"
# # License (MIT)
#
# Copyright (c) 2019 [<NAME>](https://www.lancaster.ac.uk/psychology/about-us/people/david-elliott)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
|
Classification_01_Feature_Pre_Processing.ipynb
|
# ---
# jupyter:
# jupytext:
# cell_metadata_filter: collapsed,code_folding
# formats: ipynb,py:percent
# text_representation:
# extension: .py
# format_name: percent
# format_version: '1.3'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %% [markdown]
#
# # A Life Cycle Model: Data and Theory
#
# National registry data on income and wealth from Scandinavian countries (esp. Norway) have recently become available (with a lot of security) to some (lucky!) researchers. These data offer a uniquely powerful tool for testing (and improving) our models of consumption and saving behavior over the life cycle.
#
# This notebook is an example of how to construct a life cycle model with the HARK toolkit that makes predictions that can be compared to the raw data statistics that now are becoming available.
#
# For example, existing papers have tabulated information about the **growth rate** of assets at different ages over the life cycle.
#
# The default parameters of the HARK life cycle model have not been optimized to match features of the Norwegian data; a first step in a real "structural" estimation would be to use Norwegian data to calibrate the inputs to the model (like the profile of income, and the magnitude of income shocks, over the life cycle), and then to find the values of parameters like the time preference rate that allow the model to fit the data best. (See [SolvingMicroDSOPs](https://econ.jhu.edu/people/ccarroll/SolvingMicroDSOPs) for how this can be done, and search for the corresponding HARK content using [our documentation](https://hark.readthedocs.io)).
# %% code_folding=[]
# Initial imports and notebook setup, click arrow to show
import HARK.ConsumptionSaving.ConsIndShockModel as cShksModl # The consumption-saving micro model
import HARK.SolvingMicroDSOPs.Calibration.EstimationParameters as Params # Parameters for the consumer type and the estimation
from HARK.utilities import plotFuncsDer, plotFuncs # Some tools
import pandas as pd
import numpy as np
# %% code_folding=[0]
# Set up default values for CRRA, DiscFac, and simulation variables in the dictionary
Params.init_consumer_objects["CRRA"]= 2.00 # Default coefficient of relative risk aversion (rho)
Params.init_consumer_objects["DiscFac"]= 0.97 # Default intertemporal discount factor (beta)
Params.init_consumer_objects["PermGroFacAgg"]= 1.0 # Aggregate permanent income growth factor
Params.init_consumer_objects["aNrmInitMean"]= -10.0 # Mean of log initial assets
Params.init_consumer_objects["aNrmInitStd"]= 1.0 # Standard deviation of log initial assets
Params.init_consumer_objects["pLvlInitMean"]= 0.0 # Mean of log initial permanent income
Params.init_consumer_objects["pLvlInitStd"]= 0.0 # Standard deviation of log initial permanent income
# %%
# Make an instance of a lifecycle consumer to be used for estimation
LifeCyclePop = cShksModl.IndShockConsumerType(**Params.init_consumer_objects)
# %% code_folding=[0]
# Solve and simulate the model (ignore the "warning" message)
LifeCyclePop.solve() # Obtain consumption rules by age
LifeCyclePop.unpackcFunc() # Expose the consumption rules
# Which variables do we want to track
LifeCyclePop.track_vars = ['aNrmNow','pLvlNow','mNrmNow','cNrmNow','TranShkNow']
LifeCyclePop.T_sim = 120 # Nobody lives to be older than 145 years (=25+120)
LifeCyclePop.initializeSim() # Construct the age-25 distribution of income and assets
LifeCyclePop.simulate() # Simulate a population behaving according to this model
# %% code_folding=[0]
# Plot the consumption functions during working life
print('Consumption as a function of market resources while working:')
mMin = min([LifeCyclePop.solution[t].mNrmMin for t in range(LifeCyclePop.T_cycle)])
plotFuncs(LifeCyclePop.cFunc[:LifeCyclePop.T_retire],mMin,5)
# %% code_folding=[0]
# Define the saving rate function
def savRteFunc(SomeType, m, t):
"""
Parameters:
----------
SomeType:
Agent type that has been solved and simulated.
m:
normalized market resources of agent
t:
age of agent (from starting in the workforce)
Returns:
--------
savRte: float
"""
inc = (SomeType.Rfree -1.)*(m-1.)+1. # Normalized by permanent labor income
cns = SomeType.solution[t].cFunc(m) # Consumption (normalized)
sav = inc - cns # Flow of saving this period
savRte = sav / inc # Saving Rate
return savRte
# %% code_folding=[]
# Create a giant matrix gathering useful data:
# 't_now', 'aNrmNow_hist', 'cNrmNow_hist', employment-status in date t and date t-1,
# aLvlGro_hist, Saving rate
w, h = 1, LifeCyclePop.T_cycle
giant_list = [[0 for x in range(w)] for y in range(h)]
savRte_list = []
import warnings
warnings.filterwarnings("ignore") # Suppress some disturbing but harmless warnings
for t in range(1,LifeCyclePop.T_cycle+1):
#aLvlGro_hist[0] = 0 # set the first growth rate to 0, since there is no data for period 0
aLvlGroNow = np.log((LifeCyclePop.aNrmNow_hist[t]*LifeCyclePop.pLvlNow_hist[t]) / \
(LifeCyclePop.aNrmNow_hist[t-1]*LifeCyclePop.pLvlNow_hist[t-1])) # growth rate of the asset level, shape (10000,)
# Call the saving rate function defined above
savRte = savRteFunc(LifeCyclePop, LifeCyclePop.mNrmNow_hist[t] , t)
savRte_list.append(savRte) # Add this period's saving rate to the list
# Create elements of matrix list
matrix_list = [0 for number in range(7)]
matrix_list[0] = t
matrix_list[1] = LifeCyclePop.aNrmNow_hist[t]
matrix_list[2] = LifeCyclePop.cNrmNow_hist[t]
matrix_list[3] = LifeCyclePop.TranShkNow_hist[t]
matrix_list[4] = LifeCyclePop.TranShkNow_hist[t-1]
matrix_list[5] = aLvlGroNow
matrix_list[6] = savRte
giant_list[t-1] = matrix_list
# %% code_folding=[0]
# Construct the level of assets A from a*p where a is the ratio to permanent income p
# Remember 41 is "years after entering workforce" (=age 25); 66 is the year right after retirement
LifeCyclePop.aLvlNow_hist = LifeCyclePop.aNrmNow_hist*LifeCyclePop.pLvlNow_hist
aGro41=LifeCyclePop.aLvlNow_hist[41]/LifeCyclePop.aLvlNow_hist[40]
aGro41NoU=aGro41[aGro41[:]>0.2] # Throw out extreme outliers; don't want growth rates relative to 0 income!
# %% code_folding=[0]
# Plot the (truncated) distribution of growth rates of wealth between age 65 and 66 (=25 + 41)
from matplotlib import pyplot as plt
n, bins, patches = plt.hist(aGro41NoU,50,density=True)
# %%
# put your solution here
# %%
# put your answer here
# %%
# put your answer here
# %%
# put your solution here
# %%
# put your solution here
# %% [markdown]
# # Saving Rates and Lifetime Income Growth
#
# We are interested in how income growth over the lifetime of the agent affects their saving rate and asset ratio $a=A/P$.
#
# %%
cumulative_income_first_half = np.sum(LifeCyclePop.pLvlNow_hist[0:20,:]*LifeCyclePop.TranShkNow_hist[0:20,:],0)
cumulative_income_second_half = np.sum(LifeCyclePop.pLvlNow_hist[20:40,:]*LifeCyclePop.TranShkNow_hist[20:40,:],0)
lifetime_growth = cumulative_income_second_half/cumulative_income_first_half
t=39
vigntiles = pd.qcut(lifetime_growth,20,labels=False)
savRte = savRteFunc(LifeCyclePop, LifeCyclePop.mNrmNow_hist[t] , t)
savRteByVigtile = np.zeros(20)
assetsByVigtile = np.zeros(20)
assetsNrmByVigtile = np.zeros(20)
for i in range(20):
savRteByVigtile[i] = np.mean(savRte[vigntiles==i])
assetsByVigtile[i] = np.mean(LifeCyclePop.aLvlNow_hist[t][vigntiles==i])
assetsNrmByVigtile[i] = np.mean(LifeCyclePop.aNrmNow_hist[t][vigntiles==i])
plt.plot(np.array(range(20)), savRteByVigtile)
plt.title("Saving Rate at age 65, by Vigintile of Lifetime Income Growth")
plt.xlabel("Vigintile of Lifetime Income Growth")
plt.ylabel("Saving Rate")
plt.figure()
plt.plot(np.array(range(20)), assetsByVigtile)
plt.title("Assets at age 65, by Vigintile of Lifetime Income Growth")
plt.xlabel("Vigintile of Lifetime Income Growth")
plt.ylabel("Assets")
plt.figure()
plt.plot(np.array(range(20)), assetsNrmByVigtile)
plt.title("Normalized Assets at age 65, by Vigintile of Lifetime Income Growth")
plt.xlabel("Vigintile of Lifetime Income Growth")
plt.ylabel("Normalized Assets")
|
notebooks/LifecycleModelExample.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from geninfo import *
df=pd.read_csv('data.csv')
df
df['DATE']=df['DATE'].apply(lambda l : l[2:-15])
df=df[df['DATE']>='20-03-01']
df
# +
import plotly.express as px
import plotly.graph_objs as go
fig = px.choropleth(df,
animation_frame='DATE',
geojson=gdf.set_index('LOCATION'),
color_continuous_scale=px.colors.sequential.YlOrRd,
locations='LOCATION',
color="casos_fa_14acum_percapita")
fig.update_geos(fitbounds="locations")
fig.update_layout(go.Layout( height=700 ))
# -
fig.show()
|
mapadinamico/Untitled.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import ipyplot
# %matplotlib notebook
import matplotlib.pyplot as plt
import rawpy
import cv2
import numpy as np
import os
import os.path as op
data_dir = op.join("..", "data")
img_name = "DSC01088"
raw_fp = op.join(data_dir, img_name+".ARW")
assert op.exists(raw_fp)
# +
raw = rawpy.imread(raw_fp)
proc = raw.postprocess(half_size=True, no_auto_bright=True, no_auto_scale=True, output_color=rawpy.ColorSpace.raw, output_bps=16)[:,:,1]/2**16
a = proc.copy()  # reference copy, left untouched
b = proc.copy()  # copy that superpose() will modify in place
# -
def superpose(img, bins_n):
ref = img.copy()
t_mins = [b*1/bins_n for b in range(bins_n)]
t_maxs = [b*1/bins_n for b in range(1,bins_n+1)]
t_maxs[-1] += 1e-6
for t_min, t_max in zip(t_mins, t_maxs):
idx = np.where(np.logical_and(ref>=t_min, ref<t_max))
img[idx] = (img[idx] - t_min) / (t_max - t_min)
return img
result=superpose(b,10)
result.max()
np.histogram(a, bins=[i/5 for i in range(6)], density=True)  # 6 edges -> 5 bins covering [0, 1]
np.histogram(result, bins=[i/5 for i in range(6)], density=True)
cv2.imwrite("a.jpg", a*255)
cv2.imwrite("result.jpg", result*255)
|
211108/Superpose.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Parsing Dates
#
# Another common data transformation involves parsing dates. Parsing generally means that you start with a string and then transform that string into a different data type. In this case, that means taking a date in the format of a string and transforming the string into a date type. Run the next cell to see an example.
import pandas as pd
parsed_date = pd.to_datetime('January 1st, 2017')
parsed_date
parsed_date.month
parsed_date.year
parsed_date.second
# Sometimes date strings are formatted in unexpected ways. For example, in the United States, dates are given with the month first and then the day, which is what pandas expects by default. However, some countries write the date with the day first and then the month. Run the next examples to see pandas' default behavior and how you can specify the date format explicitly.
parsed_date = pd.to_datetime('5/3/2017 5:30')
parsed_date.month
parsed_date = pd.to_datetime('5/3/2017 5:30', format='%d/%m/%Y %H:%M')
parsed_date.month
# The formatting abbreviations are actually part of the python standard. You can see examples at [this link](http://strftime.org/).
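# The same directives work in the standard library's `datetime.strptime`, for example:

```python
from datetime import datetime

# %d = day, %m = month, %Y = 4-digit year, %H:%M = 24-hour time
d = datetime.strptime('5/3/2017 5:30', '%d/%m/%Y %H:%M')
print(d.month)                 # 3 — day-first parsing, matching the pandas example above
print(d.strftime('%Y-%m-%d'))  # 2017-03-05
```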
# # Part 1 - Parsing Dates
#
# Run the code cells below to import the World Bank projects data. The last line of the code outputs all of the column names in the data frame.
# Run this code cell. Read in the projects data set with all columns type string
df_projects = pd.read_csv('./data/projects_data.csv', dtype=str)
df_projects.drop(['Unnamed: 56'], axis=1, inplace=True)
df_projects.columns
# Select time-related columns
df_projects.head(15)[['boardapprovaldate', 'board_approval_month', 'closingdate']]
# +
# Use the pandas to_datetime method to convert these two columns
# (boardapprovaldate, closingdate) into date times.
df_projects['boardapprovaldate'] = pd.to_datetime(df_projects['boardapprovaldate'])
df_projects['closingdate'] = pd.to_datetime(df_projects['closingdate'])
# -
# # Part 2 - Access different parts of datetime
# +
###
# create the following new columns in the df_projects data frame
#
# approvalyear
# approvalday
# approvalweekday
# closingyear
# closingday
# closingweekday
#
#
###
df_projects['approvalyear'] = df_projects['boardapprovaldate'].dt.year
df_projects['approvalday'] = df_projects['boardapprovaldate'].dt.day
df_projects['approvalweekday'] = df_projects['boardapprovaldate'].dt.weekday
df_projects['closingyear'] = df_projects['closingdate'].dt.year
df_projects['closingday'] = df_projects['closingdate'].dt.day
df_projects['closingweekday'] = df_projects['closingdate'].dt.weekday
# +
###
# Make a visualization with year on the x-axis and the sum of the totalamt columns per year on the y-axis
# The totalamt column is currently a string with commas. For example 100,250,364. You'll need to remove the
# commas and convert the column to a numeric variable.
# pandas groupby, sum, and plot methods should also be helpful
####
import matplotlib.pyplot as plt
# %matplotlib inline
# Step 1 - convert the totalamt column from string to numeric. Be sure to remove the commas in this column
df_projects['totalamt'] = pd.to_numeric(df_projects['totalamt'].str.replace(',',''))
ax = df_projects.groupby('approvalyear')['totalamt'].sum().plot(x='approvalyear', y='totalamt',
title ='Total Amount Approved per Year')
ax.set_xlabel('year')
ax.set_ylabel('amount $')
plt.show()
|
Transform/parsing_dates.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="yRQW4RjjheVM" colab={"base_uri": "https://localhost:8080/"} outputId="5d11ce01-1dbe-4632-affe-cd8dfd8ebbe1"
############ SHOW DO MILHÃO (WHO WANTS TO BE A MILLIONAIRE)
############ GROUP 10'S GAME, WITH QUESTIONS FROM THE BOOK FACTFULNESS
############ Gabriela, Schirlei e Tode
############ Source of the questions: https://factfulnessquiz.com/
pontuacao = 0 # score variable, starts at zero
nome_jogador = input("Bem-vindo/a! Como você se chama? ") # ask for the player's name
print(f'Bem-vindo/a, {nome_jogador}! Prepare-se porque vamos começar o jogo! Você está participando do Show do Milhão!') # welcome message
# + colab={"base_uri": "https://localhost:8080/"} id="RJUu_atwLqpL" outputId="ccca39a8-338d-4f8d-da11-2abfd2b0aa1a"
####### PERGUNTA 1
print("Pergunta 1: Nos países de baixa renda do mundo hoje, quantas meninas terminam a escola primária? \n a) 20% \n b) 40% \n c) 60% \n d) 80%")
pergunta_1 = input("Qual a alternativa correta? \n Digite a letra ")
if pergunta_1 == "c": # this is the correct answer! if the answer is 'c', the 'pontuacao' variable gains a point
pontuacao = pontuacao + 1 # add the point for this question
print(f"Parabéns, você está antenado! Sua pontuação agora é {pontuacao}!") # tell the player they got it right
elif pergunta_1 == "d": # if the answer is 'd', the player gets one more chance
pergunta_1 = input("Quase lá. Vamos deixar duas opções e você tenta de novo! \n Então, selecione a letra: \n b) 40% \n c) 60% \n ") # ask the player again
if pergunta_1 == "c": # this is the correct answer! if the answer is 'c', the 'pontuacao' variable gains a point
pontuacao = pontuacao + 1 # add the point for this question
print(f"Agora sim! Sua pontuação agora é {pontuacao}!") # tell the player they got it right
else:
print("Errou. Próxima pergunta!") # second-chance message for a wrong answer
else:
print("Erroooou! Siga para a próxima pergunta.") # shown when the answer is neither 'c' nor 'd'
# + id="189ixK4zoLQ_" colab={"base_uri": "https://localhost:8080/"} outputId="ba67e8d0-107e-426d-af83-432eb59a5bec"
####### PERGUNTA 2
print("Pergunta_2: Onde mora a maioria da população mundial?") # question 2 prompt
print("a) Países de baixa renda") # option 'a' of question 2
print("b) Países de renda média") # option 'b' of question 2
print("c) Países de alta renda") # option 'c' of question 2
print("d) Países de altíssima renda") # option 'd' of question 2
pergunta_2 = input("Qual a alternativa correta? Digite a letra ") # read the answer to question 2
if pergunta_2 == "b": # this is the correct answer! if the answer is 'b', the 'pontuacao' variable gains a point
pontuacao = pontuacao + 1 # add the point for this question
print(f"Parabéns, você acertou!!!! Sua pontuação agora é {pontuacao}, {nome_jogador}!") # message for a correct answer
elif pergunta_2 == "a": # if the answer is 'a', the player gets one more chance
print(f"Quase lá! Tente mais uma vez, {nome_jogador}!")
pergunta_2 = input("Qual a alternativa correta? Digite a letra ") # ask the player again
if pergunta_2 == "b": # this is the correct answer! if the answer is 'b', the 'pontuacao' variable gains a point
pontuacao = pontuacao + 1 # add the point for this question
print(f"Aeeee, você acertou! A resposta certa era {pergunta_2}. Parabéns!") # message for a correct answer
else: # second-chance message for a wrong answer
print(f"Poxa, que pena, {nome_jogador}. Leia o livro novamente. A resposta correta é a letra 'b'.")
else: # shown when the answer is neither 'b' nor 'a'
print("Errou. Leia o livro novamente. A resposta correta é a letra 'b'.")
# + id="vPspoPQUwz_x" colab={"base_uri": "https://localhost:8080/"} outputId="2d678242-acdd-4dd5-8220-772332547366"
####### PERGUNTA 3
print("Pergunta_3: Nos últimos 20 anos, a proporção da população mundial que vive em extrema pobreza...")
print("a) Quase dobrou")
print("b) Permaneceu mais ou menos o mesmo")
print("c) Caiu quase pela metade")
print("d) Aumentou 10%")
pergunta_3 = input("Qual a alternativa correta? Digite a letra ")
if pergunta_3 == "c":
pontuacao = pontuacao + 1
print(f"Parabéns, {nome_jogador}! Você acertou!!! Sua pontuação agora é {pontuacao}!")
else:
print(f"Errooooou! A resposta correta é a letra 'c'. \n Sua pontuação agora é {pontuacao}, {nome_jogador}! ")
# + colab={"base_uri": "https://localhost:8080/"} id="Z1oow_ybw2wy" outputId="b4e12932-bb76-4a2d-dd51-35f40cfbf7f0"
####### PERGUNTA 4
print("Pergunta_4: Qual é a expectativa de vida do mundo hoje? ")
print("a) 50 anos")
print("b) 60 anos")
print("c) 70 anos")
print("d) 65 anos")
pergunta_4 = input("Qual a alternativa correta? Digite a letra ")
if pergunta_4 == "c":
pontuacao = pontuacao + 1
print(f"Parabéns, você ganhou mais um ponto! Agora você tem {pontuacao} ponto(s)")
else:
print("Errou. Preste mais atenção ao livro. A resposta correta é a letra 'c'.")
# + id="hHxfaQyvw5bN" colab={"base_uri": "https://localhost:8080/"} outputId="940e2af6-d8e5-423a-8cb3-b32ac3852e90"
####### PERGUNTA 5
print(f"Última pergunta, {nome_jogador}!!!!!!")
print("Pergunta 5: Existem 2 bilhões de crianças no mundo hoje, com idades entre 0 e 15 anos. Quantas crianças haverá no ano 2100, de acordo com as Nações Unidas?")
print("a) 4 bilhões")
print("b) 3 bilhões")
print("c) 2 bilhões")
print("d) 1 bilhão")
pergunta_5 = input("Qual é a alternativa correta? Digite a letra ")
if pergunta_5 == "c":
pontuacao = pontuacao + 1
print(f"Parabéns, {nome_jogador}! Você não veio para passar vergonha com a resposta!")
else:
print("Que pena, você errou. Não foi dessa vez! ")
# + colab={"base_uri": "https://localhost:8080/"} id="RcdDLTtzKxe7" outputId="4b793d48-7edd-4b70-92fa-2ddc7aef5224"
####### NOTA FINAL
print(f"Chegamos ao final do jogo, {nome_jogador}! Vamos ver como você se saiu?")
print(f"Sua nota foi {pontuacao}")
if pontuacao >= 5:
print("Arrasou, você está muito bem informado!")
elif pontuacao == 4:
print("Parabéns, continue assim!")
elif pontuacao == 3:
print("Você foi bem, mas precisa estudar mais!")
else:
print("Você está desinformado, precisa estudar muito mais!")
|
python_exercicio_19ago2021/trabalho_final_grupo_schirlei_tode_gabriela.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # A2 - Bias
#
# By: <NAME>
# Date: 10/05/2019
#
# ## Overview
#
# The goal of this assignment is to reflect on sources of bias by analyzing coverage and relative article quality by country and geographical regions of politicians articles taken from the English Wikipedia.
#
# ## Step 1: Data acquisition
#
# The data for this analysis comes from:
#
# 1. [The Wikipedia politicians by country dataset](https://figshare.com/articles/Untitled_Item/5513449)
# 2. [Population resource bureau, mid-2018 population by country](https://www.prb.org/international/indicator/population/table/)
#
# and is located in the `raw_data` folder. See the repository's README.md file for additional details.
#
# ## Step 2: Cleaning the data
#
# First we will import a few libraries needed for our analysis.
#
# The `pandas` library will be used for loading and manipulating the data.
# > `pandas` uses the `numpy` library behind the scenes to handle multidimensional arrays efficiently. We will import this library as well to help with specific manipulations later on.
import pandas as pd
import numpy as np
# We load the data csv files from the `raw_data` folder and output the first few rows of each to make sure they were loaded correctly.
politicians_by_country = pd.read_csv('../raw_data/page_data.csv')
politicians_by_country.head(2)
population_by_geography = pd.read_csv('../raw_data/WPDS_2018_data.csv', thousands=',')
population_by_geography.head(2)
# To simplify the use of the `population_by_geography` table we will rename its columns `geo` and `pop`.
population_by_geography.columns = ['geo', 'pop']
population_by_geography.head(2)
# We can see that, for some rows of the `politicians_by_country` dataframe, the `page` column contains the "Template:" prefix. These pages are not Wikipedia articles and will be removed below.
# ~ is used as the standard ! (negation operator)
template_prefix_filter = ~politicians_by_country.page.str.startswith('Template:')
politicians_by_country = politicians_by_country[template_prefix_filter]
politicians_by_country.head(3)
# The `population_by_geography` contains some cumulative regional (i.e. AFRICA, OCEANIA) population counts. Regions are ALL CAPS values in the `geo` column. These rows won't match with the country field of our `politicians_by_country` table, so we will remove them to form the `population_by_country` table and keep the other rows.
# Only regions are in ALL CAPS
region_filter = population_by_geography.geo.str.isupper()
population_by_country = population_by_geography[~region_filter]
population_by_country.columns = ['country', 'pop']
population_by_country.head(3)
# ## Step 3: Getting article quality predictions
#
# We will be gathering quality predictions from the [ORES](https://www.mediawiki.org/wiki/ORES) (Objective Revision Evaluation Service) machine learning system.
#
# The code in the cell below was provided as sample code to use with the ores package.
# +
from ores import api
# We provide this useragent string (second arg below) to help the ORES team track requests
ores_session = api.Session("https://ores.wikimedia.org", "Class project: <EMAIL>")
# Fetch the article quality using the rev_id values
results = ores_session.score("enwiki", ["articlequality"], politicians_by_country.rev_id.values)
# -
# For each article in the result we obtain the prediction and place them in an array. If the prediction was not available we instead use a `no_prediction_token` as value.
# +
article_quality_col = []
no_prediction_token = 'NOT_FOUND' # sentinel value used when ORES returns no prediction
for score in results:
found_prediction = False
# Is a prediction in the score object ?
if 'articlequality' in score:
if 'score' in score['articlequality']:
if 'prediction' in score['articlequality']['score']:
article_quality_col.append(score['articlequality']['score']['prediction'])
found_prediction = True
# No predictions were found
if not found_prediction:
article_quality_col.append(no_prediction_token)
# Output the first five values to validate
article_quality_col[0:5]
# -
# We add the newly extracted article_quality column to the politicians_by_country dataframe.
politicians_by_country['article_quality'] = article_quality_col
politicians_by_country.head(2)
# We save the articles whose ratings weren't found in a file named `ores_not_found.csv` in the artifacts folder. We will use them later in the analysis phase.
# For now, we remove these values from our `politicians_by_country` table.
# +
not_found_articles_filter = (politicians_by_country['article_quality'] == 'NOT_FOUND')
not_found_articles = politicians_by_country.loc[not_found_articles_filter]
# We do not need to include the article_quality column as it was not available
not_found_articles = not_found_articles.drop(columns=['article_quality'])
not_found_articles.to_csv('../artifacts/data/ores_not_found.csv', index=None, header=True)
# -
# Politicians by country now only has rated articles
politicians_by_country = politicians_by_country[~not_found_articles_filter]
# ## Step 4: Combining datasets
#
# Now that our article data in the `politicians_by_country` table has the quality rating for each article, we will merge it with our population_by_country into one table. We also rename our columns for readability going forward.
# pandas' merge is the equivalent of the sql join statement
# the how parameter indicates the type of merge
# outer indicates a "full outer join"
articles_and_population = pd.merge(politicians_by_country, population_by_country, on='country', how='outer')
articles_and_population.columns = ['article_name', 'country', 'revision_id', 'article_quality', 'population']
articles_and_population.head()
# Some rows will not have had a match with the other table.
#
# * We want to keep a record of rows for which there was not `pop` value (NaN in the table) which indicates no match from the population_by_country table.
#
#
# * We also want to keep rows for which the other fields (such as rev_id) are missing (NaN) which indicates no match from the politicians_by_country table.
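# As a side note, pandas can tag these rows for us: the merge's `indicator=True` option labels each row of an outer join with its origin. A small sketch on toy data (the frames and column names here are illustrative, not the project's):

```python
import pandas as pd

articles = pd.DataFrame({'country': ['A', 'B'], 'rev_id': [1, 2]})
population = pd.DataFrame({'country': ['B', 'C'], 'pop': [10, 20]})

# indicator=True adds a _merge column: 'left_only', 'right_only', or 'both'
merged = pd.merge(articles, population, on='country', how='outer', indicator=True)
no_match = merged[merged['_merge'] != 'both']   # rows that matched only one source
matched = merged[merged['_merge'] == 'both']    # rows present in both sources
print(no_match['country'].tolist())
```

This avoids building the unmatched set from two separate null-checks, at the cost of an extra column to drop afterwards.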
no_population_match_rows = articles_and_population[articles_and_population['population'].isnull()]
no_revision_id_match_rows = articles_and_population[articles_and_population['revision_id'].isnull()]
no_match_df = pd.concat([no_population_match_rows, no_revision_id_match_rows]) # DataFrame.append is deprecated; concat does the same
# We will now create a file with the complete and incomplete rows.
articles_and_population = articles_and_population.drop(no_match_df.index)
no_match_df.to_csv('../clean_data/wp_wpds_countries_no_match.csv', index=None, header=True)
articles_and_population.to_csv('../clean_data/wp_wpds_politicians_by_country.csv', index=None, header=True)
# ## Step 5: Analysis
#
# We start by loading the cleaned data.
# We pass thousands=',' to tell pandas that the population column uses commas as thousands separators
articles_and_population = pd.read_csv('../clean_data/wp_wpds_politicians_by_country.csv', thousands=',')
articles_and_population.head()
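A quick self-contained check of what `thousands=','` does (the CSV snippet here is made up):

```python
import io
import pandas as pd

csv_text = 'country,population\nA,"1,234"\nB,"56,789"'
df = pd.read_csv(io.StringIO(csv_text), thousands=',')
print(df['population'].tolist())  # parsed as integers, not strings
```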
# Our analysis will focus on:
#
# | Area | Description |
# |---|---|
# | Coverage | The number of politician articles as a proportion of the country's population |
# | Relative article quality | The proportion of articles rated "FA" (featured article) or "GA" (good article) over the total number of articles |
#
# We are interested in getting those metrics by regions and countries. We will use our original data source to associate a country with its region and add this to our dataset.
# +
# Drop the population from our original dataset
geography = population_by_geography.drop(columns=['pop'])
# iterate over indexes in geography and create dictionary of countries (key) to their region (value).
# The original dataset has region in ALL_CAPS first followed by all countries in that region.
country_to_region_lookup = {}
region = ''
for i in geography.index:
    country_or_region = geography.loc[i, 'geo']
    # Is the 'geo' field of this row a region?
    if country_or_region.isupper():
        # Assign region for all countries until the next region
        region = country_or_region
    else:
        # Assign current region to country
        country_to_region_lookup[country_or_region] = region
# iterate over the articles dataset using the lookup to assign a region
# to each row based on the value of the country field
regions = []
for i in articles_and_population.index:
    country = articles_and_population.loc[i, 'country']
    regions.append(country_to_region_lookup[country])
# Assign region column
articles_and_population['region'] = regions
# Display as validation
articles_and_population.head(3)
# -
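The loop above builds the lookup row by row. An equivalent vectorized sketch (toy data, same ALL-CAPS region convention) uses `where` to keep only the region rows and `ffill` to carry each region down to its countries:

```python
import pandas as pd

# Toy 'geo' column: regions in ALL CAPS followed by their countries (made-up values)
geography = pd.DataFrame({'geo': ['AFRICA', 'Kenya', 'Ghana', 'EUROPE', 'France']})

is_region = geography['geo'].str.isupper()
# Keep only the region names, then forward-fill each region down to its countries
geography['region'] = geography['geo'].where(is_region).ffill()

countries = geography[~is_region]
country_to_region_lookup = dict(zip(countries['geo'], countries['region']))
print(country_to_region_lookup)
```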
# ## Coverage calculation
#
# ### By country
#
# Our analysis will first focus on 'coverage' which we will calculate in terms of number of politician articles as a proportion of the country's population.
#
# First we create a table of the number of the article_count and population by country.
# np.mean gives the mean for each group, np.size gives us the row_count (in this case the article count)
coverage_by_country = articles_and_population.groupby('country').agg({'population': np.mean, 'article_name': np.size})
coverage_by_country.columns = ['population', 'article_count']
coverage_by_country.head(2)
# We calculate coverage in its own column and sort the table to obtain the top and bottom 10 countries for coverage.
# Reminder: we multiply by 1e6 because the population is in millions
coverage_by_country['coverage'] = (coverage_by_country.article_count/(coverage_by_country.population*1e6))
# +
# Sort by coverage percentage descending and take 10
top_10_by_country = coverage_by_country.sort_values(by=['coverage'], ascending=False).head(10)
# Sort by coverage percentage ascending and take 10
bottom_10_by_country = coverage_by_country.sort_values(by=['coverage']).head(10)
# -
# ### By region
#
# We'd like to do a similar exercise to see what the coverage by geographical region looks like.
# +
# Group data by region counting the number of articles
articles_by_region = articles_and_population.groupby('region').agg({'article_name': np.size})
# Rename columns for article_count
articles_by_region.columns = ['article_count']
# Get population by region from the original table (population_by_geography)
coverage_by_region = pd.merge(articles_by_region, population_by_geography, left_on='region', right_on='geo', how='inner')
# Rename the 'pop' column
coverage_by_region = coverage_by_region.rename(columns={"pop": "population"})
# Calculate coverage (population is in millions)
coverage_by_region['coverage'] = (coverage_by_region['article_count']/(coverage_by_region.population*1e6))
# Output sorted by coverage percentage descending
coverage_by_region = coverage_by_region.sort_values(by=['coverage'], ascending=False)
# Output friendly names
coverage_by_region = coverage_by_region.rename(columns={'geo': 'region'})
coverage_by_region = coverage_by_region[['region', 'population', 'article_count', 'coverage']]
# -
# ## Coverage tables discussion
# +
# Display logic inspired by: https://stackoverflow.com/questions/38783027/jupyter-notebook-display-two-pandas-tables-side-by-side
from IPython.display import display_html
top_10_by_country_styler = top_10_by_country.style.set_table_attributes("style='display:inline'").set_caption('Top 10').format({'coverage' : '{:.3%}'})
bottom_10_by_country_styler = bottom_10_by_country.style.set_table_attributes("style='display:inline;margin-left:40px'").set_caption('Bottom 10').format({'coverage' : '{:.5%}'})
region_styler = coverage_by_region.style.set_table_attributes("style='display:block'").set_caption('Regions').format({'coverage' : '{:.5%}'})
display_html(top_10_by_country_styler._repr_html_()+bottom_10_by_country_styler._repr_html_()+region_styler._repr_html_(), raw=True)
# -
# > Note population is in millions
#
# #### Observations
#
# We notice that the countries in the "top 10 coverage" table all have fairly small populations. This is expected, as achieving good coverage in countries with bigger populations would require a significant number of articles. This is reflected in the bottom 10 table, where all countries have populations over 30 million.
#
# For both the "top 10" and "bottom 10" countries, the official languages do not include English. This is interesting given that the articles were fetched from the English Wikipedia.
#
# Coverage is calculated by dividing the number of articles about politicians by a country's population. This does not take into account the historical context of the country nor its political system(s). Some countries may have much richer historical records, political systems that involve more people, etc.
#
# In the region table we can see some of the observations above come into play:
#
# - The population count seems to vaguely dictate the overall order
# - Northern America has a small number of articles for its population, but may also have one of the shortest recorded historical periods.
# - Many other factors, such as the distribution of Wikipedia's English-speaking contributors, could explain some of the discrepancies between regions.
#
#
# ## Relative quality
#
# Our analysis will now focus on 'relative quality' which we will calculate as a proportion of the number of articles with a rating of "FA" or "GA" over the total number of articles.
#
# ### By country
# +
# Create custom aggregator to count the number of "FA" and "GA" articles
def count_quality_articles(series):
    great_articles_count = 0
    for val in series:
        if val == 'FA' or val == 'GA':
            great_articles_count = great_articles_count + 1
    return great_articles_count
# Group data by country
relative_quality_by_country = articles_and_population.groupby('country').agg({'article_name': np.size, 'article_quality': count_quality_articles})
# Rename columns for article_count
relative_quality_by_country.columns = ['article_count', 'quality_article_count']
# Calculate relative_quality
relative_quality_by_country['relative_quality'] = (relative_quality_by_country['quality_article_count']/relative_quality_by_country['article_count'])
# Grab top 10
top_10_relative_quality_by_country = relative_quality_by_country.sort_values(by=['relative_quality'], ascending=False).head(10)
# Grab bottom 10
bottom_10_relative_quality_by_country = relative_quality_by_country.sort_values(by=['relative_quality']).head(10)
# -
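The custom aggregator above works; the same count can also be computed in vectorized form with a boolean `isin` column summed per group. A sketch with made-up toy data:

```python
import pandas as pd

# Toy data (hypothetical countries and ratings)
df = pd.DataFrame({
    'country': ['A', 'A', 'A', 'B'],
    'article_quality': ['FA', 'Stub', 'GA', 'C'],
})

# A boolean column summed per group counts the "FA"/"GA" articles
df['is_quality'] = df['article_quality'].isin(['FA', 'GA'])
per_country = df.groupby('country').agg(
    article_count=('article_quality', 'size'),
    quality_article_count=('is_quality', 'sum'),
)
per_country['relative_quality'] = (
    per_country['quality_article_count'] / per_country['article_count']
)
print(per_country)
```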
# ### By region
# +
# Group data by region
relative_quality_by_region = articles_and_population.groupby('region').agg({'article_name': np.size, 'article_quality': count_quality_articles})
# Rename columns for article_count
relative_quality_by_region.columns = ['article_count', 'quality_article_count']
# Calculate relative_quality
relative_quality_by_region['relative_quality'] = (relative_quality_by_region['quality_article_count']/relative_quality_by_region['article_count'])
# Output by relative_quality descending
relative_quality_by_region = relative_quality_by_region.sort_values(by=['relative_quality'], ascending=False)
# -
# ## Relative quality tables
# +
# Display logic inspired by: https://stackoverflow.com/questions/38783027/jupyter-notebook-display-two-pandas-tables-side-by-side
top_10_relative_quality_by_country_styler = top_10_relative_quality_by_country.style.set_table_attributes("style='display:inline'").set_caption('Top 10').format({'relative_quality' : '{:.3%}'})
bottom_10_relative_quality_by_country_styler = bottom_10_relative_quality_by_country.style.set_table_attributes("style='display:inline;margin-left:40px'").set_caption('Bottom 10').format({'relative_quality' : '{:.5%}'})
region_styler = relative_quality_by_region.style.set_table_attributes("style='display:block'").set_caption('Regions').format({'relative_quality' : '{:.5%}'})
display_html(top_10_relative_quality_by_country_styler._repr_html_()+bottom_10_relative_quality_by_country_styler._repr_html_()+region_styler._repr_html_(), raw=True)
# -
# #### Observations
#
# * Looking at the dataset more closely, we can see that more than 10 countries have no politician article that obtained a quality rating of "FA" or "GA".
# * Having very few articles makes it easy for a contributor to single-handedly increase the relative_quality rating for a given country.
# * Many of the countries with poor relative quality ratings also have a small number of articles.
# * Looking at the region table, we see that Northern America has the highest relative quality rating. This may be due to having a large number of native English speakers.
#
#
# # Reflection
#
# **1. What biases did you expect to find in the data (before you started working with it), and why?**
#
# Before starting the work, I thought article quality would reflect writing quality, but also content quality. As a result, I expected countries living under political regimes prone to censorship to have worse article quality and a limited quantity of articles. I also expected English-speaking countries to have better article quality by a significant margin, due to having a larger number of editors whose native language is English. I intuitively thought that, at least for countries whose official languages include English, population and coverage would be fairly proportional.
#
#
# **2. What (potential) sources of bias did you discover in the course of your data processing and analysis?**
#
# The evaluation for article quality doesn't really evaluate what the documentation calls 'tone':
#
# > The metric used to evaluate article quality on Wikipedia is described as follows:
# <br/>
# >_"The articlequality model bases its predictions on structural characteristics of the article. E.g. How many sections are there? Is there an infobox? How many references? And do the references use a {{cite}} template? The articlequality model doesn't evaluate the quality of the writing or whether or not there's a tone problem (e.g. a point of view being pushed). However, many of the structural characteristics of articles seem to correlate strongly with good writing and tone, so the models work very well in practice."_ -Ores documentation
#
# We also have very little information about how the model concretely performs this evaluation. The code is at least made available for further exploration.
#
# The number of politicians in a country is not proportional to the country's population. Because of this, article coverage seems like a metric with very little explanatory value. Furthermore, countries have varied political systems, which might involve a greater or smaller number of people. The total number of articles written for a country also depends largely on its historical records. For example, a country whose libraries were destroyed during wars might not have many records of early politics. Similarly, countries founded in the last century will not have an amount of articles comparable to a country whose rich history spans multiple centuries.
#
# The dataset does not include any information about the editors of the articles. Having no information about the editors means we cannot make inferences about the intent or the validity of the articles. It would have been interesting to use data about the editors to account for potential bias (age group, gender of editors editing pages about the same/different gender, editors from one country editing pages about another country, etc.).
#
#
# **4. What might your results suggest about the internet and global society in general?**
#
# It is very tempting to draw intuitive (even prejudicial) conclusions from a dataset before taking a look at the data and its source. Sources of bias in anything human-centered are numerous and seem difficult to account for. The Internet is an inherently biased source of data (notably because access to the Internet is required to be part of the conversation).
#
# Given these observations, it is interesting to think that there seems to remain an inherent (naive) trust in the democratic process of sharing opinions and information online. The current generation is already feeling the repercussions of placing trust in largely unmonitored information sources.
|
src/hcds-a2-bias.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Rock-paper-scissors
#
# The game **rock-paper-scissors** (in French: *pierre-papier-ciseaux* or *feuille-caillou-ciseaux*) is played with the hands.
# 
# ## Description
#
# The two players simultaneously choose one of three possible moves:
#
# 
#
# - rock beats scissors
# - paper beats rock
# - scissors beat paper
#
# This game is also known in French as **chifoumi**.
# You can find more information on [Wikipédia](https://fr.wikipedia.org/wiki/Pierre-papier-ciseaux).
# ## Online example
#
# Visit [chifoumi.io](https://chifoumi.io) to get an idea of the game.
# You (you) play against the computer (bot).
#
# 
#
# ## Your project
#
# This is an example of a video game where a human plays against the computer.
#
# - you keep playing until an end condition is met
# - you check whether the chosen move is allowed
# - you display the moves
# - you decide who wins
# - you keep track of the score throughout the game
#
# The game is simple, but it lets you understand how a video game works.
# ## Representing the moves
#
# Each turn there are 3 possible moves.
# There are different ways to represent these 3 options.
# One way is to use strings.
coups = ['rock', 'paper', 'scissors']
# Another, more efficient way is to use just the integers (0, 1, 2).
# Starting at 0, we can use these integers as indices into a list of move names.
#
# - 0 for rock
# - 1 for paper
# - 2 for scissors
coups = [0, 1, 2]
# Of course, you can also use emojis to make it look cooler.
coups = ['💎', '📜', '✂️']
# ## Input-output
#
# For this game, you use the `input()` function to ask for the user's choice. You use the `print()` function to communicate with them.
#
# The basic principle is:
x = input('your choice: ')
print('you chose', x)
# ## Game loop
#
# To play this game, you use a `while` loop to repeat the exchanges.
# An end condition is needed.
# A common way to end a game is to simply press the **return** key.
# In that case `input()` returns an empty string.
#
#     x = ''
#
# The empty string is interpreted as the boolean value `False`.
# Try different values.
bool('')
bool('pierre')
# So here is a game loop. It runs until you press **return** on its own.
# +
x = input('your choice:')
while x:
    x = input('your choice:')
print('game over')
# -
# ## Validation
#
# A game checks whether the inputs are allowed. Let's first define the list of allowed moves.
coups = ['rock', 'paper', 'scissors']
# The `in` operator tests whether an element belongs to a list. For example:
'paper' in coups
'sciss' in coups
# We can now add the validation.
# +
coups = ['rock', 'paper', 'scissors']
print('choose between:', *coups)
x = input('your choice:')
while x:
    if x in coups:
        print('ok')
    else:
        print(x, "is not in", coups)
    x = input('your choice:')
print('game over')
# -
# ## Choosing with shortcuts
#
# To play more easily, it is better to enter a single character rather than type a whole word.
#
# **Warning**: here the list of moves contains the characters `'0'`, `'1'` and `'2'`, not the integers `0`, `1` and `2`.
# +
coups = ['0', '1', '2']
print('choose between:', *coups)
x = input('your choice:')
while x:
    if x in coups:
        print('ok')
    else:
        print(x, "is not in", *coups)
    x = input('your choice:')
print('GAME OVER')
# -
# ## Transforming the choice
#
# In the game, it is convenient to transform the indices (0, 1, 2) into words.
#
# **Warning**:
# the `input()` function returns a string, not a number.
# It must be converted to an integer with the `int()` function.
# The statement `x = int(x)` turns the character `'1'` into the number `1`.
#
# +
coups = ['rock', 'paper', 'scissors']
emoji = ['💎', '📜', '✂️']
print('0=rock, 1=paper, 2=scissors')
x = input('your choice:')
while x:
    x = int(x)
    print(emoji[x], coups[x])
    x = input('your choice:')
print('G A M E  O V E R')
# -
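Note that `int(x)` raises a `ValueError` if the player types something that is not a number. A defensive variant (just a sketch, not part of the notebook's exercises) validates the input with `str.isdigit()` first:

```python
coups = ['rock', 'paper', 'scissors']

def parse_choice(x):
    """Return the move index, or None if the input is not a valid move."""
    if x.isdigit() and int(x) < len(coups):
        return int(x)
    return None

print(parse_choice('1'))    # a valid move index
print(parse_choice('abc'))  # not a number
print(parse_choice('7'))    # out of range
```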
# ## Random choice
#
# The `random` module lets you make random choices.
# The function `random.randint(0, 2)` returns a random integer in the interval [0, 2].
# +
import random
coups = ['💎', '📜', '✂️']
for i in range(5):
    x = random.randint(0, 2)
    print(x, '=', coups[x])
# +
coups = ['rock', 'paper', 'scissors']
for i in range(5):
    x = random.randint(0, 2)
    print(coups[x])
# -
# ## Playing against the computer
#
# In this game you (you) play against the computer (bot).
# The computer picks one of the 3 possibilities at random.
# +
coups = ['rock', 'paper', 'scissors']
print('0=rock, 1=paper, 2=scissors')
x = input('your choice:')
while x:
    you = int(x)
    bot = random.randint(0, 2)
    print(coups[you], 'versus', coups[bot])
    x = input('your choice:')
print('game over')
# -
# ## Who wins?
#
# If both moves are the same, it is a draw, and that is easy to detect.
you = 0
bot = 0
if you == bot:
    print('draw')
# If the two moves are different, it is more complicated:
#
# - rock (0) beats scissors (2)
# - paper (1) beats rock (0)
# - scissors (2) beat paper (1)
#
# Test different combinations.
# Modify the variables `you` and `bot` and test 3 different cases.
you = 1
bot = 2
if you == bot:
    print('draw')
elif you == 0 and bot == 2:
    print('human wins')
elif you == 1 and bot == 0:
    print('human wins')
elif you == 2 and bot == 1:
    print('human wins')
else:
    print('computer wins')
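The three winning cases above follow a pattern: with the moves coded 0=rock, 1=paper, 2=scissors, the player wins exactly when `(you - bot) % 3 == 1`. A compact sketch:

```python
def winner(you, bot):
    """Return 'draw', 'you' or 'bot' for moves coded 0=rock, 1=paper, 2=scissors."""
    if you == bot:
        return 'draw'
    # paper (1) beats rock (0), scissors (2) beat paper (1), rock (0) beats scissors (2)
    return 'you' if (you - bot) % 3 == 1 else 'bot'

coups = ['rock', 'paper', 'scissors']
for you in range(3):
    for bot in range(3):
        print(coups[you], 'vs', coups[bot], '->', winner(you, bot))
```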
# ## Keeping score
#
# Use variables to keep the score. They must be initialized at the start.
score_you = 0
score_bot = 0
# During the game you increment the score whenever one side or the other wins.
score_you = score_you + 1
score_bot = score_bot + 1
score_bot = score_bot + 1
# This increment can be written more concisely with the `+=` operator.
score_you += 1
score_bot += 1
score_bot += 1
print('score =', score_you, ':', score_bot)
# ## Project
#
# Combine all the elements to create a video game where
#
# * you play against the computer
# * you keep playing until an end condition is met
# * you check whether the chosen move is allowed
# * you display the moves
# * you decide who wins each round
# * you keep track of the score
# * you decide who wins the game
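One possible skeleton combining all of these pieces (only a sketch; the fun is in writing your own version):

```python
import random

coups = ['rock', 'paper', 'scissors']

def round_result(you, bot):
    """Return 1 if the player wins the round, -1 if the computer wins, 0 for a draw."""
    if you == bot:
        return 0
    return 1 if (you - bot) % 3 == 1 else -1

def play():
    score_you = score_bot = 0
    x = input('your choice (0/1/2, return to quit): ')
    while x:
        if x.isdigit() and int(x) < len(coups):  # is the move allowed?
            you, bot = int(x), random.randint(0, 2)
            print(coups[you], 'versus', coups[bot])
            result = round_result(you, bot)
            if result == 1:
                score_you += 1
            elif result == -1:
                score_bot += 1
            print('score =', score_you, ':', score_bot)
        else:
            print(x, 'is not an allowed move')
        x = input('your choice (0/1/2, return to quit): ')
    print('game over, final score =', score_you, ':', score_bot)
```

Call `play()` in a cell to start a game; press return on its own to stop.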
|
doc/jeu/ppc.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="copyright"
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="title:generic,gcp"
# # E2E ML on GCP: MLOps stage 3 : formalization: get started with BigQuery and TFDV pipeline components
#
# <table align="left">
# <td>
# <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/ml_ops_stage3/get_started_with_bq_tfdv_pipeline_components.ipynb">
# <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
# View on GitHub
# </a>
# </td>
# <td>
# <a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/ml_ops_stage3/get_started_with_bq_tfdv_pipeline_components.ipynb">
# Open in Google Cloud Notebooks
# </a>
# </td>
# </table>
# <br/><br/><br/>
# + [markdown] id="overview:mlops"
# ## Overview
#
#
# This tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. This tutorial covers stage 3 : formalization: get started with BigQuery and TFDV pipeline components.
# + [markdown] id="dataset:gsod,lrg"
# ### Dataset
#
# The dataset used for this tutorial is the GSOD dataset from [BigQuery public datasets](https://cloud.google.com/bigquery/public-data). In the version of the dataset you use, only the fields year, month and day are used to predict the value of mean daily temperature (mean_temp).
# + [markdown] id="objective:mlops,stage3,get_started_bq_tfdv_pipeline_components"
# ### Objective
#
# In this tutorial, you learn how to build lightweight Python components for BigQuery and TensorFlow Data Validation.
#
# This tutorial uses the following Google Cloud ML services:
#
# - `Vertex AI Pipelines`
# - `Vertex AI Datasets`
# - `BigQuery`
#
# The steps performed include:
#
# - Build and execute a pipeline component for creating a Vertex AI Tabular Dataset from a BigQuery table.
# - Build and execute a pipeline component for generating TFDV statistics and schema from a Vertex AI Tabular Dataset.
# - Execute a Vertex AI pipeline.
# + [markdown] id="install_mlops"
# ## Installations
#
# Install *one time* the packages for executing the MLOps notebooks.
# + id="install_mlops"
ONCE_ONLY = False

if ONCE_ONLY:
    # ! pip3 install -U tensorflow==2.5 $USER_FLAG
    # ! pip3 install -U tensorflow-data-validation==1.2 $USER_FLAG
    # ! pip3 install -U tensorflow-transform==1.2 $USER_FLAG
    # ! pip3 install -U tensorflow-io==0.18 $USER_FLAG
    # ! pip3 install --upgrade google-cloud-aiplatform[tensorboard] $USER_FLAG
    # ! pip3 install --upgrade google-cloud-bigquery $USER_FLAG
    # ! pip3 install --upgrade google-cloud-logging $USER_FLAG
    # ! pip3 install --upgrade apache-beam[gcp] $USER_FLAG
    # ! pip3 install --upgrade pyarrow $USER_FLAG
    # ! pip3 install --upgrade cloudml-hypertune $USER_FLAG
    # ! pip3 install --upgrade kfp $USER_FLAG
# + [markdown] id="restart"
# ### Restart the kernel
#
# Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
# + id="restart"
import os

if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
# + [markdown] id="project_id"
# #### Set your project ID
#
# **If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
# + id="set_project_id"
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
# + id="autoset_project_id"
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]

print("Project ID:", PROJECT_ID)
# + id="set_gcloud_project_id"
# ! gcloud config set project $PROJECT_ID
# + [markdown] id="region"
# #### Region
#
# You can also change the `REGION` variable, which is used for operations
# throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
#
# - Americas: `us-central1`
# - Europe: `europe-west4`
# - Asia Pacific: `asia-east1`
#
# You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
#
# Learn more about [Vertex AI regions](https://cloud.google.com/vertex-ai/docs/general/locations)
# + id="region"
REGION = "us-central1" # @param {type: "string"}
# + [markdown] id="timestamp"
# #### Timestamp
#
# If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
# + id="timestamp"
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# + [markdown] id="bucket:mbsdk"
# ### Create a Cloud Storage bucket
#
# **The following steps are required, regardless of your notebook environment.**
#
# When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
#
# Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
# + id="bucket"
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
# + id="autoset_bucket"
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
# + [markdown] id="create_bucket"
# **Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
# + id="create_bucket"
# ! gsutil mb -l $REGION $BUCKET_NAME
# + [markdown] id="validate_bucket"
# Finally, validate access to your Cloud Storage bucket by examining its contents:
# + id="validate_bucket"
# ! gsutil ls -al $BUCKET_NAME
# + [markdown] id="set_service_account"
# #### Service Account
#
# **If you don't know your service account**, try to get your service account using `gcloud` command by executing the second cell below.
# + id="set_service_account"
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
# + id="autoset_service_account"
if (
    SERVICE_ACCOUNT == ""
    or SERVICE_ACCOUNT is None
    or SERVICE_ACCOUNT == "[your-service-account]"
):
    # Get your service account from gcloud
    shell_output = ! gcloud auth list 2>/dev/null
    SERVICE_ACCOUNT = shell_output[2].strip()

print("Service Account:", SERVICE_ACCOUNT)
# + [markdown] id="set_service_account:pipelines"
# #### Set service account access for Vertex AI Pipelines
#
# Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
# + id="set_service_account:pipelines"
# ! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
# ! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
# + [markdown] id="setup_vars"
# ### Set up variables
#
# Next, set up some variables used throughout the tutorial.
# ### Import libraries and define constants
# + id="import_aip:mbsdk"
import google.cloud.aiplatform as aip
# + [markdown] id="import_tf"
# #### Import TensorFlow
#
# Import the TensorFlow package into your Python environment.
# + id="import_tf"
import tensorflow as tf
# + id="import_kfp:namedtuple"
from typing import NamedTuple
from kfp import dsl
from kfp.v2 import compiler
from kfp.v2.dsl import component
# + [markdown] id="init_aip:mbsdk"
# ### Initialize Vertex AI SDK for Python
#
# Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
# + id="init_aip:mbsdk"
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
# + [markdown] id="bq_dataflow_components_intro"
# ## Pipeline components with BigQuery and Dataflow
#
# ### An anatomy of a pipeline component
#
# Let's dive a bit into how pipeline components are executed. First, each component is containerized. That is, each component has its own:
#
# - container image
# - installation requirements
# - (optional) hardware requirements
#
# The above affects the amount of time/resources required to provision the component. For example, if each component in the pipeline had a different machine requirement, then a machine would have to be provisioned for each component. Even if the machine type is the same, if each component had a different container image, then a new container image would have to be provisioned for each component.
#
# In other words, the efficiency of the pipeline is affected by the amount of provisioning.
#
# Additionally, since each component runs in a container with its own memory space, there are performance issues related to the amount of data moved across container boundaries -- i.e., marshalling. To marshall data, the data has to be serialized and written to volume storage, where the next component can access and de-serialize it. Simple data types like integers, floats, strings and small dictionaries can be marshalled efficiently. You want to avoid marshalling large in-memory objects, though.
#
# ### Construction of data pipeline components
#
# Both BigQuery and Dataflow deal with data, and more importantly large amounts of data. As a result, you need to carefully consider the construction of the pipeline, so that you are not marshalling large amounts of in-memory data.
#
# For example, consider a task that consists of reading a million records into an in-memory pandas dataframe, and then processing the in-memory data for statistics. You could write this as two components: one component creates the dataframe, and the other processes it. Sounds good, nice and modular, and the first component is likely reusable. It is a bad choice, though.
#
# If you did construct the components this way, the first component would have to write the dataframe to disk, and the second component would then have to read it back from disk. Very inefficient. If you need a large in-memory object, create it in the same component where it is used. In this example, you would create a single component that both creates and processes the dataframe.
#
# Let's now consider Vertex AI resources like datasets, models and endpoints. These resources have a physical manifestation which may include a combination of data and binary files. The Vertex AI resource object is not the actual files, but an in-memory wrapper. The resource object consists of properties and methods, and file data is not read into memory until a property or method needs it.
#
# Thus, for efficiency purposes, Vertex AI was designed with reference identifiers. One can load these resource wrappers via the resource identifier. Thus, when creating or otherwise referencing resource objects between components, one passes the resource identifier(s) between components.
# + [markdown] id="import_file:u_dataset,bq"
# #### Location of BigQuery training data.
#
# Now set the variable `IMPORT_FILE` to the location of the data table in BigQuery.
# + id="import_file:gsod,bq,lrg"
IMPORT_FILE = "bq://bigquery-public-data.samples.gsod"
BQ_TABLE = "bigquery-public-data.samples.gsod"
# + [markdown] id="create_dataset_component:bq"
# ### BigQuery components
#
# First, you build a component `create_dataset_bq` to create a Vertex AI dataset from a BigQuery table. The component will return the resource identifier for the created Vertex AI dataset. Next, you build two downstream components:
#
# - `get_dataset_source`: Using the returned resource identifier, load the dataset resource object and get/return the dataset input source.
# - `get_column_names`: Using the returned resource identifier, load the dataset resource object and get/return the dataset column names.
# + id="create_dataset_component:bq"
@component(packages_to_install=["google-cloud-aiplatform"])
def create_dataset_bq(bq_table: str, display_name: str, project: str) -> str:
import google.cloud.aiplatform as aip
dataset = aip.TabularDataset.create(
display_name=display_name, bq_source="bq://" + bq_table, project=project
)
return dataset.resource_name
@component(packages_to_install=["google-cloud-aiplatform"])
def get_dataset_source(dataset_id: str) -> str:
import google.cloud.aiplatform as aip
dataset = aip.TabularDataset(dataset_id)
    if "gcsSource" in dataset.gca_resource.metadata["inputConfig"].keys():
        files = dataset.gca_resource.metadata["inputConfig"]["gcsSource"]["uri"]
        # the component is declared to return a str, so join the GCS URI list
        return ",".join(list(files))
else:
bq = dataset.gca_resource.metadata["inputConfig"]["bigquerySource"]["uri"]
return bq
@component(packages_to_install=["google-cloud-aiplatform"])
def get_column_names(dataset_id: str) -> list:
import google.cloud.aiplatform as aip
dataset = aip.TabularDataset(dataset_id)
return dataset.column_names
PIPELINE_ROOT = "{}/pipeline_root/dataset_bq".format(BUCKET_NAME)
@dsl.pipeline(
name="dataset-bq",
description="Vertex Dataset from BQ Table",
pipeline_root=PIPELINE_ROOT,
)
def pipeline(
bq_table: str = BQ_TABLE, display_name: str = "example", project: str = PROJECT_ID
):
create_op = create_dataset_bq(bq_table, display_name, project)
source_op = get_dataset_source(create_op.output)
column_names_op = get_column_names(create_op.output)
compiler.Compiler().compile(pipeline_func=pipeline, package_path="dataset_bq.json")
pipeline = aip.PipelineJob(
display_name="dataset_bq",
template_path="dataset_bq.json",
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
pipeline.run()
# ! rm dataset_bq.json
# + [markdown] id="view_pipeline_results:dataset_bq"
# ### View the pipeline execution results
#
# Next, view the results -- i.e., artifacts that are passed by each component.
# + id="view_pipeline_results:dataset_bq"
import json
PROJECT_NUMBER = pipeline.gca_resource.name.split("/")[1]
print(PROJECT_NUMBER)
def print_pipeline_output(job, output_task_name):
JOB_ID = job.name
print(JOB_ID)
    for i in range(len(job.gca_resource.job_detail.task_details)):
        TASK_ID = job.gca_resource.job_detail.task_details[i].task_id
EXECUTE_OUTPUT = (
PIPELINE_ROOT
+ "/"
+ PROJECT_NUMBER
+ "/"
+ JOB_ID
+ "/"
+ output_task_name
+ "_"
+ str(TASK_ID)
+ "/executor_output.json"
)
if tf.io.gfile.exists(EXECUTE_OUTPUT):
# ! gsutil cat $EXECUTE_OUTPUT
break
return EXECUTE_OUTPUT
print("create_dataset_bq")
artifacts = print_pipeline_output(pipeline, "create-dataset-bq")
# output = !gsutil cat $artifacts
val = json.loads(output[0])
dataset_id = val["parameters"]["Output"]["stringValue"]
print("\n\n")
print("get_dataset_source")
artifacts = print_pipeline_output(pipeline, "get-dataset-source")
print("\n\n")
print("get_column_names")
artifacts = print_pipeline_output(pipeline, "get-column-names")
# + [markdown] id="delete_pipeline"
# ### Delete a pipeline job
#
# After a pipeline job is completed, you can delete the pipeline job with the method `delete()`. Prior to completion, a pipeline job can be canceled with the method `cancel()`.
# + id="delete_pipeline"
pipeline.delete()
# + [markdown] id="create_statistics_pipeline"
# ### Build TFDV component for dataset statistics
#
# Next, you build a component that will use the Tensorflow Data Validation package to produce dataset statistics and schema from the Vertex AI dataset you created, with the following parameters:
#
# - `dataset_id`: The resource ID of the Vertex AI dataset.
# - `label`: The label column for the dataset.
# - `bucket`: The Cloud Storage bucket to write the statistics and schema data to.
#
# The statistics and schema are large memory objects that may be reused downstream by other components. For this purpose, the component directly writes the data to a Cloud Storage bucket, and then returns the Cloud Storage locations of the statistics and schema file as output artifacts.
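# The "write the heavy artifact to storage, return only its location" pattern can be sketched generically (plain Python with local files standing in for Cloud Storage; `compute_and_stage` is an illustrative name, not part of the Vertex AI API):

```python
import json
import os
import tempfile

def compute_and_stage(values, out_dir):
    # Write the (potentially large) statistics artifact to storage and
    # hand downstream components only its location, never the object itself.
    stats = {"count": len(values), "mean": sum(values) / len(values)}
    stats_file = os.path.join(out_dir, "stats.json")
    with open(stats_file, "w") as f:
        json.dump(stats, f)
    return stats_file  # a lightweight string, cheap to pass between components

stats_path = compute_and_stage([1.0, 2.0, 3.0], tempfile.mkdtemp())
reloaded = json.load(open(stats_path))  # a downstream component re-reads it
print(reloaded)  # {'count': 3, 'mean': 2.0}
```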
# + id="create_statistics_pipeline"
@component(
packages_to_install=[
"google-cloud-aiplatform",
"google-cloud-bigquery",
"tensorflow-data-validation==1.2",
"tensorflow==2.5",
]
)
def statistics(
dataset_id: str, label: str, bucket: str
) -> NamedTuple("Outputs", [("stats", str), ("schema", str)]): # Return parameters
import google.cloud.aiplatform as aip
import tensorflow_data_validation as tfdv
from google.cloud import bigquery
dataset = aip.TabularDataset(dataset_id)
if "gcsSource" in dataset.gca_resource.metadata["inputConfig"].keys():
files = dataset.gca_resource.metadata["inputConfig"]["gcsSource"]["uri"]
files = list(files)
stats = tfdv.generate_statistics_from_csv(
data_location=files[0],
stats_options=tfdv.StatsOptions(label_feature=label, num_top_values=50),
)
else:
bq = dataset.gca_resource.metadata["inputConfig"]["bigquerySource"]["uri"]
bq_table = bq[5:]
table = bigquery.TableReference.from_string(bq_table)
bqclient = bigquery.Client()
rows = bqclient.list_rows(
table,
selected_fields=[
bigquery.SchemaField("station_number", "STRING"),
bigquery.SchemaField("year", "INTEGER"),
bigquery.SchemaField("month", "INTEGER"),
bigquery.SchemaField("day", "INTEGER"),
bigquery.SchemaField("mean_temp", "FLOAT"),
],
max_results=10000,
)
dataframe = rows.to_dataframe()
stats = tfdv.generate_statistics_from_dataframe(
dataframe=dataframe,
stats_options=tfdv.StatsOptions(label_feature=label, num_top_values=50),
)
stats_file = bucket + "/stats.txt"
tfdv.write_stats_text(output_path=stats_file, stats=stats)
schema = tfdv.infer_schema(statistics=stats)
schema_file = bucket + "/schema.txt"
tfdv.write_schema_text(output_path=schema_file, schema=schema)
return (stats_file, schema_file)
PIPELINE_ROOT = "{}/pipeline_root/dataset_stats".format(BUCKET_NAME)
@dsl.pipeline(
name="dataset-stats", description="Dataset statistics", pipeline_root=PIPELINE_ROOT
)
def pipeline(dataset_id: str, label: str, bucket: str):
stats_op = statistics(dataset_id, label, bucket)
compiler.Compiler().compile(pipeline_func=pipeline, package_path="dataset_stats.json")
pipeline = aip.PipelineJob(
display_name="dataset_stats",
template_path="dataset_stats.json",
pipeline_root=PIPELINE_ROOT,
parameter_values={
"dataset_id": dataset_id,
"label": "mean_temp",
"bucket": BUCKET_NAME,
},
)
pipeline.run()
# !rm -f dataset_stats.json
# + [markdown] id="view_pipeline_results:statistics"
# ### View the pipeline execution results
#
# Next, view the results -- i.e., the location of the statistics and schema artifacts.
# + id="view_pipeline_results:statistics"
artifacts = print_pipeline_output(pipeline, "statistics")
# output = !gsutil cat $artifacts
val = json.loads(output[0])
schema_location = val["parameters"]["schema"]["stringValue"]
stats_location = val["parameters"]["stats"]["stringValue"]
# + [markdown] id="delete_pipeline"
# ### Delete a pipeline job
#
# After a pipeline job is completed, you can delete the pipeline job with the method `delete()`. Prior to completion, a pipeline job can be canceled with the method `cancel()`.
# + id="delete_pipeline"
pipeline.delete()
# + [markdown] id="cleanup:mbsdk"
# # Cleaning up
#
# To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud
# project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
#
# Otherwise, you can delete the individual resources you created in this tutorial:
#
# - Dataset
# - Pipeline
# - Model
# - Endpoint
# - AutoML Training Job
# - Batch Job
# - Custom Job
# - Hyperparameter Tuning Job
# - Cloud Storage Bucket
# + id="cleanup:mbsdk"
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
# ! gsutil rm -r $BUCKET_NAME
|
notebooks/community/ml_ops/stage3/get_started_with_bq_tfdv_pipeline_components.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] Collapsed="false"
# # Movies feature graph
# + Collapsed="false"
# Imports
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from scipy.spatial.distance import pdist, squareform
import networkx as nx
import ast
# Constants
ML_DATA_PATH = '../data/ml-100k-convert/'
GENERATED_PATH = '../generated/'
GRAPH_PATH = '../graphs/'
# %matplotlib inline
# + Collapsed="false"
movies = pd.read_csv(GENERATED_PATH+'final_movies.csv')
# + [markdown] Collapsed="false" heading_collapsed=true
# ## Combining the tmdb and MovieLens dataset
# + [markdown] Collapsed="false" hidden=true
# <b>We can skip this, it creates the 'final_movies.csv' file </b>
# + Collapsed="false" hidden=true
# Load imdb datasets
tmdb_movies = pd.read_csv('datasets/tmdb_5000_movies.csv', delimiter=',')
tmdb_movies_cast = pd.read_csv('datasets/tmdb_5000_credits.csv', delimiter=',')
# Drop some columns and change index
tmdb_movies = tmdb_movies.drop(columns=["homepage", "status", "tagline", "overview", "original_title"])
tmdb_movies.head()
# + Collapsed="false" hidden=true
# Load MovieLens dataset
ML_links = pd.read_csv('datasets/ml-latest/links.csv', delimiter=',')
ML_links
# + [markdown] Collapsed="false" hidden=true
# We can only use the movies that are in the TMDB_5000, ml-latest and ml-100k datasets.
# + Collapsed="false" hidden=true
# Load links of the movies to use
movie_links = pd.read_csv('datasets/ml_links.csv')
movie_links.head()
# + Collapsed="false" hidden=true
# Join ML_links and the total links
movies = ML_links.merge(movie_links, left_on="movieId", right_on="ML-latestId")
# Create a merge of the movies in tmdb 5000 and movielens
movies = tmdb_movies.merge(movies, left_on="id", right_on="tmdbId")
movies.head()
# + Collapsed="false" hidden=true
# id is tmdbId and ML-latestId is movieId
# From now on use tmdbId as the real ID
movies.drop(columns=["tmdbId", "movieId"], inplace=True)
movies.to_csv("datasets/final_movies.csv", index=False)
# + Collapsed="false" hidden=true
movies.shape
# + [markdown] Collapsed="false" hidden=true
# The combined dataset of the ML_latest, ML_100k and tmdb contains 480 movies
# + [markdown] Collapsed="false"
# ## Feature networks
# + [markdown] Collapsed="false"
# There are many possible networks we could create from the features.
#
# We will explore and possibly make the following graphs:
# * genres
# * keywords
# * revenue & budget
# * language
# * production company
#
# + Collapsed="false"
def strdict_to_column(strdict, name):
"""
Converts a dict (in string format) to a list of the values
    e.g. [{"id": 18, "name": "Drama"}, {"id": 10749, "name": "Action"}] -> ["Drama", "Action"]
"""
list_dicts = strdict.apply(lambda x: ast.literal_eval(x))
# Convert list of dicts to list of keywords/genres
_list = list_dicts.apply(lambda x: [d['name'] for d in x ])
df = pd.DataFrame(_list)
df = df.explode(name)
df['count'] = 1
# Pivot so 'name' becomes columns
df = df.pivot(columns=name, values='count').fillna(0)
return df
# + Collapsed="false"
def distance_to_weight(distances):
    # Let us use the Gaussian function
    kernel_width = distances.mean()
    return np.exp(-distances**2 / kernel_width**2)
# + Collapsed="false"
def epsilon_similarity_graph(distances: np.ndarray, alpha=1, epsilon=0):
""" X (n x n): distance matrix
alpha (float): width of the kernel
epsilon (float): threshold
Return:
adjacency (n x n ndarray): adjacency matrix of the graph.
"""
X = distances.copy()
X[X > epsilon] = np.inf
adjacency = np.exp( - X ** 2 / alpha)
np.fill_diagonal(adjacency, 0)
return adjacency
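# A quick sanity check of the kernel on a toy 3x3 distance matrix (the function is restated so this cell is self-contained): the pair with distance above `epsilon` gets weight 0, and closer pairs get larger Gaussian weights:

```python
import numpy as np

def epsilon_similarity_graph(distances, alpha=1, epsilon=0):
    X = distances.copy()
    X[X > epsilon] = np.inf            # drop edges beyond the threshold
    adjacency = np.exp(-X**2 / alpha)  # Gaussian kernel on the rest
    np.fill_diagonal(adjacency, 0)
    return adjacency

D = np.array([[0.0, 0.2, 0.9],
              [0.2, 0.0, 0.5],
              [0.9, 0.5, 0.0]])
A = epsilon_similarity_graph(D, alpha=0.25, epsilon=0.5)
print(A.round(3))  # A[0, 2] is 0 (0.9 > epsilon); A[0, 1] > A[1, 2] > 0
```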
# + [markdown] Collapsed="false"
# ### Genre network
# + [markdown] Collapsed="false"
# Here, we use the item dataset of ml-100k-convert
# + Collapsed="false"
movie_genres = movies["genres"]
movie_genres_matrix = strdict_to_column(movie_genres, "genres")
genres = list(movie_genres_matrix.columns)
# + Collapsed="false"
movie_genres_matrix.head()
# -
# The dataset contains 19 different genres. The Hamming distance is used as a measure of distance, i.e. the
# fraction of non-matching genre categories (scipy's `hamming` normalises by the number of genres).
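# As a toy check: scipy's `'hamming'` metric returns the *fraction* of mismatching positions, so two movies that differ in one of four genre slots are at distance 0.25:

```python
import numpy as np
from scipy.spatial.distance import pdist

# Two movies one-hot encoded over four genres [Action, Comedy, Drama, Romance]
toy = np.array([[1, 0, 1, 0],
                [1, 1, 1, 0]])
d = pdist(toy, 'hamming')
print(d)  # [0.25] -- they disagree on 1 of 4 genre slots
```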
# + Collapsed="false"
genre_distances = pdist(movie_genres_matrix, 'hamming')
plt.hist(genre_distances)
plt.title('Distribution of weights')
plt.show()
# + Collapsed="false"
unique, counts = np.unique(genre_distances, return_counts=True)
dict(zip(unique, counts))
# + Collapsed="false"
# Connected when all genres the same or one difference
genre_adjacency = squareform(np.where(genre_distances<0.10,1,0))
plt.spy(genre_adjacency)
plt.show()
# + Collapsed="false"
alpha = 0.25
epsilon = 0.15
genre_adjacency = epsilon_similarity_graph(squareform(genre_distances), alpha=alpha, epsilon=epsilon)
plt.spy(genre_adjacency)
plt.show()
# + Collapsed="false"
np.savetxt(GENERATED_PATH+'movie_genre_adj.csv', genre_adjacency, delimiter=',')
# + Collapsed="false"
# Add labels for visualisation in Gephi
movie_genres_matrix['label'] = movie_genres_matrix.apply(lambda x: [genre for genre in genres if x[genre] != 0], axis=1)
movie_genres_matrix['label'] = movie_genres_matrix.apply(lambda x: {1: '-'.join(x['label'])}, axis=1)
movie_genres_matrix['label']
# + Collapsed="false"
# Export for use in Gephi
graph = nx.from_numpy_array(genre_adjacency)
nx.set_node_attributes(graph, movie_genres_matrix['label'])
nx.write_gexf(graph, GRAPH_PATH+'movie_genres.gexf')
# + [markdown] Collapsed="false"
# ### Keywords network
# + Collapsed="false"
def strdict_to_column_keywords(strdict, name):
"""
Converts a dict (in string format) to a list of the values
    e.g. [{"id": 18, "name": "Drama"}, {"id": 10749, "name": "Action"}] -> ["Drama", "Action"]
"""
    list_dicts = strdict.apply(lambda x: ast.literal_eval(x))
# Convert list of dicts to list of keywords/genres
_list = list_dicts.apply(lambda x: [d['name'] for d in x ])
df = pd.DataFrame(_list)
df = df.explode(name)
# Keep only first word of index
df[name] = df.apply(lambda x: str(x[name]).split()[0], axis=1)
df['count'] = 1
df = df.reset_index()
df = df.drop_duplicates()
# Pivot so 'name' becomes columns
df = df.pivot(index='index', columns=name, values='count').fillna(0)
return df
# + Collapsed="false"
movie_keywords = movies["keywords"]
movie_keywords_matrix = strdict_to_column_keywords(movie_keywords, "keywords")
keywords = list(movie_keywords_matrix.columns)
# + Collapsed="false"
keywords
# + Collapsed="false"
movie_keywords_matrix
# + Collapsed="false"
movie_keywords_matrix.sum(axis=1).mean()
# + Collapsed="false"
keyword_distances = pdist(movie_keywords_matrix,'jaccard')
plt.hist(keyword_distances[keyword_distances != 1])
plt.show()
# + Collapsed="false"
keyword_adjacency = squareform(np.where(keyword_distances < 1 , 1 , 0))
plt.spy(keyword_adjacency)
plt.show()
# + Collapsed="false"
keyword_distances
# + Collapsed="false"
alpha = 0.25
epsilon = 0.95
keyword_adjacency = epsilon_similarity_graph(squareform(keyword_distances), alpha=alpha, epsilon=epsilon)
plt.spy(keyword_adjacency)
plt.show()
# + Collapsed="false"
np.savetxt(GENERATED_PATH+'movie_keyword_adj.csv', keyword_adjacency, delimiter=',')
# + Collapsed="false"
# Add labels for visualisation in Gephi
# First keyword is float (nan), remove
#keywords.pop(0)
movie_keywords_matrix['label'] = movie_keywords_matrix.apply(lambda x: [keyword for keyword in keywords if x[keyword] != 0], axis=1)
movie_keywords_matrix['label'] = movie_keywords_matrix.apply(lambda x: {1: '-'.join(x['label'])}, axis=1)
movie_keywords_matrix['label']
# + Collapsed="false"
graph = nx.from_numpy_array(keyword_adjacency)
nx.set_node_attributes(graph, movie_keywords_matrix['label'])
nx.write_gexf(graph, GRAPH_PATH+'movie_keywords.gexf')
# + [markdown] Collapsed="false" heading_collapsed=true
# ### Budget & Revenue network
# + Collapsed="false" hidden=true
movies_revenue = movies[['id', 'title', 'revenue', 'budget']]
print(np.sum((movies_revenue['budget'] == 0)))
print(np.sum((movies_revenue['revenue'] == 0)))
# + [markdown] Collapsed="false" hidden=true
# For many movies (86 of 480) the budget or revenue is unknown (recorded as zero), so this metric is not really usable
# + [markdown] Collapsed="false" heading_collapsed=true
# ### Original language network
# + Collapsed="false" hidden=true
language = movies[['original_language']]
language['original_language'].value_counts()
# + [markdown] Collapsed="false" hidden=true
# As most movies are in English, language isn't a good metric either
# + [markdown] Collapsed="false"
# ### Production company
# + Collapsed="false"
def strdict_to_column_companies(strdict, name):
"""
Converts a dict (in string format) to a list of the values
    e.g. [{"id": 18, "name": "Drama"}, {"id": 10749, "name": "Action"}] -> ["Drama", "Action"]
"""
list_dicts = strdict.apply(lambda x: ast.literal_eval(x))
# Convert list of dicts to list of keywords/genres
_list = list_dicts.apply(lambda x: [d['name'] for d in x ])
df = pd.DataFrame(_list)
df = df.explode(name)
df['production_companies'] = df.apply(lambda x: company_tranform(str(x['production_companies'])), axis=1)
df['count'] = 1
df = df.reset_index()
df = df.drop_duplicates()
# Pivot so 'name' becomes columns
df = df.pivot(index='index', columns=name, values='count').fillna(0)
return df
# We could see that some companies have a slightly different name, but should be the same:
# Act III & Act III Communications
# Alphaville Films & Alphaville Productions
# Canal Plus & Canal+
# Columbia Pictures & Columbia Pictures Corporation & Columbia Pictures Industries
# ...
def company_tranform(company):
if company == "Act III Communications":
return "Act III"
if company == "Alphaville Productions":
return "Alphaville Films"
if company == "Constellation Entertainment":
return "Constellation Films"
if company == "Detour Film Production":
return "Detour Filmproduction"
if company == "Dino de Laurentiis Cinematografica":
return "Dino De Laurentiis Company"
if company == "Hemdale Film Corporation":
return "Hemdale Film"
if company == "Polar Entertainment":
return "Polar Productions"
if company == "Renaissance Pictures":
return "Renaissance Films"
if company == "Taurus Films":
return "Taurus Film"
if "Samuel Goldwyn Company" in company:
return "Samuel Goldwyn Company"
if "Fox" in company:
return "Fox"
if "BBC" in company:
return "BBC"
if "Columbia Pictures" in company:
return "Columbia Pictures"
if "MPCA" in company:
return "MPCA"
if "Paramount" in company:
return "Paramount"
if "Disney" in company:
return "Disney"
if "<NAME>" in company:
return "<NAME>"
return company
# + Collapsed="false"
movie_companies = movies['production_companies']
movie_companies_matrix = strdict_to_column_companies(movie_companies, 'production_companies')
companies = list(movie_companies_matrix.columns)
# + Collapsed="false"
movie_companies_matrix.sum(axis=1).mean()
# + [markdown] Collapsed="false"
# As with the keywords, we will use the Jaccard similarity
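# A toy check of scipy's `'jaccard'` metric: it counts disagreements only over positions where at least one vector is nonzero, i.e. one minus the intersection-over-union of the company sets:

```python
import numpy as np
from scipy.spatial.distance import pdist

# Two movies one-hot encoded over five production companies
toy = np.array([[1, 1, 0, 0, 0],
                [0, 1, 1, 0, 0]])
d = pdist(toy, 'jaccard')
print(d)  # [0.6667]: 1 shared company, union of 3 -> 1 - 1/3
```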
# + Collapsed="false"
company_distances = pdist(movie_companies_matrix,'jaccard')
plt.hist(company_distances)
plt.show()
# + Collapsed="false"
pd.DataFrame(squareform(company_distances)).iloc[13,421]
# + Collapsed="false"
company_adjacency = squareform(np.where(company_distances < 1 , 1 , 0))
plt.spy(company_adjacency)
plt.show()
# + Collapsed="false"
alpha = 0.25
epsilon = 0.95
company_adjacency = epsilon_similarity_graph(squareform(company_distances), alpha=alpha, epsilon=epsilon)
plt.spy(company_adjacency)
plt.show()
# + Collapsed="false"
np.savetxt(GENERATED_PATH+'movie_company_adj.csv', company_adjacency, delimiter=',')
# + Collapsed="false"
# Add labels for visualisation in Gephi
movie_companies_matrix['label'] = movie_companies_matrix.apply(lambda x: [company for company in companies if x[company] != 0], axis=1)
movie_companies_matrix['label'] = movie_companies_matrix.apply(lambda x: {1: '-'.join(x['label'])}, axis=1)
movie_companies_matrix['label']
# + Collapsed="false"
graph = nx.from_numpy_array(company_adjacency)
nx.set_node_attributes(graph, movie_companies_matrix['label'])
nx.write_gexf(graph, GRAPH_PATH+'movie_companies.gexf')
# + [markdown] Collapsed="false"
# ## Combining feature networks
# + Collapsed="false"
plt.figure(1,figsize=(15,3))
plt.subplot(131)
plt.hist(genre_distances)
plt.subplot(132)
plt.hist(keyword_distances)
plt.subplot(133)
plt.hist(company_distances)
plt.show()
# + [markdown] Collapsed="false"
# As expected, keyword_distances and company_distances are mostly around one, as the sets are very big
# + Collapsed="false"
genre_factor = 1
keyword_factor = 1
company_factor = 1
movie_distances = genre_factor*genre_distances + keyword_factor*keyword_distances + company_factor*company_distances
#movie_distances = np.where(movie_distances<0,0,movie_distances)
plt.hist(movie_distances, bins=20)
plt.show()
# + Collapsed="false"
alpha = 1
epsilon = 2
adjacency = epsilon_similarity_graph(squareform(movie_distances), alpha=alpha, epsilon=epsilon)
#adjacency = np.where(adjacency < 0.001, 0, adjacency)
plt.spy(adjacency)
plt.show()
# + Collapsed="false"
np.savetxt(GENERATED_PATH+'movie_features_adj.csv', adjacency, delimiter=',')
# + Collapsed="false"
movie_labels=pd.DataFrame()
# Add labels for visualisation in Gephi
movie_genres_matrix['label'] = movie_genres_matrix.apply(lambda x: [genre for genre in genres if x[genre] != 0], axis=1)
movie_labels['genre'] = movie_genres_matrix.apply(lambda x: {1: '-'.join(x['label'])}, axis=1)
movie_keywords_matrix['label'] = movie_keywords_matrix.apply(lambda x: [keyword for keyword in keywords if x[keyword] != 0], axis=1)
movie_labels['keyword'] = movie_keywords_matrix.apply(lambda x: {2: '-'.join(x['label'])}, axis=1)
movie_companies_matrix['label'] = movie_companies_matrix.apply(lambda x: [company for company in companies if x[company] != 0], axis=1)
movie_labels['company'] = movie_companies_matrix.apply(lambda x: {3: '-'.join(x['label'])}, axis=1)
movie_labels
# + Collapsed="false"
graph = nx.from_numpy_array(adjacency)
nx.set_node_attributes(graph, movie_labels['genre'])
nx.set_node_attributes(graph, movie_labels['keyword'])
nx.set_node_attributes(graph, movie_labels['company'])
nx.write_gexf(graph, GRAPH_PATH+'movie_features.gexf')
|
src/movie_feature_network.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %load_ext autoreload
# %autoreload 2
from nbdev import *
# default_exp bbox_annotator
# # Bounding Box Annotator
# +
#export
import os
import json
from ipyevents import Event
from ipywidgets import (AppLayout, Button, IntSlider, IntProgress,
HBox, VBox, Output,
Layout, Label)
from pathlib import Path
from traitlets import Int, observe, link, dlink, HasTraits, Bytes, Unicode, Dict
from ipyannotator.bbox_canvas import BBoxCanvas
from ipyannotator.navi_widget import Navi
from ipyannotator.storage import setup_project_paths, get_image_list_from_folder, AnnotationStorage
# +
#export
class BBoxAnnotatorGUI(AppLayout):
def __init__(self, canvas_size=(505, 50)):
self._navi = Navi()
self._save_btn = Button(description="Save",
layout=Layout(width='auto'))
self._controls_box = HBox([self._navi, self._save_btn],
layout=Layout(display='flex', flex_flow='row wrap', align_items='center'))
self._image_box = BBoxCanvas(*canvas_size)
super().__init__(header=None,
left_sidebar=None,
center=self._image_box,
right_sidebar=None,
footer=self._controls_box,
pane_widths=(2, 8, 0),
pane_heights=(1, 4, 1))
def on_client_ready(self, callback):
self._image_box.observe_client_ready(callback)
# -
BBoxAnnotatorGUI(canvas_size=(800, 1))
# +
#export
class BBoxAnnotatorLogic(HasTraits):
index = Int(0)
image_path = Unicode()
bbox_coords = Dict()
current_im_num = Int()
def __init__(self, project_path, image_dir='pics'):
self.project_path = Path(project_path)
self.image_dir, self.annotation_file_path = setup_project_paths(self.project_path, image_dir=image_dir)
self.image_paths = get_image_list_from_folder(self.image_dir)
self.current_im_num = len(self.image_paths)
self.annotations = AnnotationStorage(self.image_paths)
def _update_im(self):
self.image_path = str(self.image_paths[self.index])
def _update_coords(self): # from annotations
im_name = self.__get_name_by_index(self.index)
self.bbox_coords = self.annotations.get(im_name) or {}
def _update_annotations(self, index): # from coordinates
im_name = self.__get_name_by_index(index)
self.annotations[im_name] = self.bbox_coords
def _save_annotations(self, *args, **kwargs): # to disk
index = kwargs.pop('old_index', self.index)
self._update_annotations(index)
self.annotations.save(self.annotation_file_path)
def _handle_client_ready(self):
self._update_im()
self._update_coords()
@observe('index')
def _idx_changed(self, change):
''' On index change save an old state
and update current image path and bbox coordinates for visualisation
'''
self._save_annotations(old_index = change['old'])
self._update_im()
self._update_coords()
def __get_name_by_index(self, idx):
return self.image_paths[idx].name
# -
# We have annotations saved in a dictionary like: `{'imagename.jpg': {'x': 0, 'y': 0, 'width': 100, 'height': 100}}`
#
# Navi widget has `index` and prev/next buttons to iterate over `max_im_number` of images (todo: change the name, as we can iterate over any object).
#
# BBoxAnnotator has coupled `index` (with Navi one), and onchange event to update the current image path and label.
#
# On image_path change event BBoxCanvas rerenders new image and label
#
# So `__get_name_by_index` gives the ability to map those events in memory.
# +
# new index -> save *old* annotations -> update image -> update coordinates from annotation
# |
# |-> _update_annotations -> get current bbox values -> save to self.annotations
# -
logica = BBoxAnnotatorLogic(project_path='../data/projects/bbox')
assert len(logica.image_paths) == 4
logica._handle_client_ready()
# +
#export
class BBoxAnnotator(BBoxAnnotatorGUI):
debug_output = Output()
def __init__(self, project_path, canvas_size=(200, 400), image_dir='pics'):
self._model = BBoxAnnotatorLogic(project_path, image_dir=image_dir)
super().__init__(canvas_size=canvas_size)
self._save_btn.on_click(self._model._save_annotations)
# set correct slider max value based on image number
dlink((self._model, 'current_im_num'), (self._navi.model, 'max_im_number'))
# draw current image and bbox only when client is ready
self.on_client_ready(self._model._handle_client_ready)
# Link image path and bbox coordinates between model and the ImageWithBox widget
link((self._model, 'image_path'), (self._image_box, 'image_path'))
link((self._model, 'bbox_coords'), (self._image_box, 'bbox_coords'))
# Link current image index from controls to annotator model
link((self._navi.model, 'index'), (self._model, 'index'))
def to_dict(self, only_annotated=True):
return self._model.annotations.to_dict(only_annotated)
# -
bb = BBoxAnnotator(project_path='../data/projects/bbox', canvas_size=(640, 400))
bb
bb.to_dict()
#hide
from nbdev.export import notebook2script
notebook2script()
|
nbs/04_bbox_annotator.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # An Introduction to Inference in Pyro
# Much of modern machine learning can be cast as approximate inference and expressed succinctly in a language like Pyro. To motivate the rest of this tutorial, let's build a generative model for a simple physical problem so that we can use Pyro's inference machinery to solve it. However, we will first import the required modules for this tutorial:
# +
import matplotlib.pyplot as plt
import numpy as np
import torch
import pyro
import pyro.infer
import pyro.optim
import pyro.distributions as dist
pyro.set_rng_seed(101)
# -
# ## A Simple Example
# Suppose we are trying to figure out how much something weighs, but the scale we're using is unreliable and gives slightly different answers every time we weigh the same object. We could try to compensate for this variability by integrating the noisy measurement information with a guess based on some prior knowledge about the object, like its density or material properties. The following model encodes this process:
#
# $${\sf weight} \, | \, {\sf guess} \sim \cal {\sf Normal}({\sf guess}, 1) $$
# $${\sf measurement} \, | \, {\sf guess}, {\sf weight} \sim {\sf Normal}({\sf weight}, 0.75)$$
#
# Note that this is a model not only for our belief over weight, but also for the result of taking a measurement of it. The model corresponds to the following stochastic function:
def scale(guess):
weight = pyro.sample("weight", dist.Normal(guess, 1.0))
return pyro.sample("measurement", dist.Normal(weight, 0.75))
# ## Conditioning
#
# The real utility of probabilistic programming is in the ability to condition generative models on observed data and infer the latent factors that might have produced that data. In Pyro, we separate the expression of conditioning from its evaluation via inference, making it possible to write a model once and condition it on many different observations. Pyro supports constraining a model's internal `sample` statements to be equal to a given set of observations.
#
# Consider `scale` once again. Suppose we want to sample from the distribution of `weight` given input `guess = 8.5`, but now we have observed that `measurement == 9.5`. That is, we wish to *infer* the distribution:
# $$({\sf weight} \, | \, {\sf guess}, {\sf measurement} = 9.5) \sim \, ? $$
#
# Pyro provides the function `pyro.condition` to allow us to constrain the values of sample statements. `pyro.condition` is a higher-order function that takes a model and a dictionary of observations and returns a new model that has the same input and output signatures but always uses the given values at observed `sample` statements:
conditioned_scale = pyro.condition(scale, data={"measurement": 9.5})
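# Since both the prior over `weight` and the measurement likelihood here are Gaussian, this particular conditioned distribution has a closed form, which makes a useful sanity check (plain arithmetic, not Pyro):

```python
# Conjugate-Gaussian posterior: precisions add, and the posterior mean
# is the precision-weighted average of the prior guess and the measurement.
guess, measurement = 8.5, 9.5
prior_var, obs_var = 1.0 ** 2, 0.75 ** 2

post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
post_mean = post_var * (guess / prior_var + measurement / obs_var)

print(post_mean, post_var)  # approx. 9.14 and 0.36: pulled toward the measurement
```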
# Because it behaves just like an ordinary Python function, conditioning can be deferred or parametrized with Python's `lambda` or `def`:
def deferred_conditioned_scale(measurement, *args, **kwargs):
return pyro.condition(scale, data={"measurement": measurement})(*args, **kwargs)
# In some cases it might be more convenient to pass observations directly to individual `pyro.sample` statements instead of using `pyro.condition`. The optional `obs` keyword argument is reserved by `pyro.sample` for that purpose:
def scale_obs(guess): # equivalent to conditioned_scale above
weight = pyro.sample("weight", dist.Normal(guess, 1.))
# here we condition on measurement == 9.5
    return pyro.sample("measurement", dist.Normal(weight, 0.75), obs=9.5)
# Finally, in addition to `pyro.condition` for incorporating observations, Pyro also contains `pyro.do`, an implementation of Pearl's `do`-operator used for causal inference with an identical interface to `pyro.condition`. `condition` and `do` can be mixed and composed freely, making Pyro a powerful tool for model-based causal inference.
# ## Flexible Approximate Inference With Guide Functions
#
# Let's return to `conditioned_scale`. Now that we have conditioned on an observation of `measurement`, we can use Pyro's approximate inference algorithms to estimate the distribution over `weight` given `guess` and `measurement == data`.
#
#
# Inference algorithms in Pyro, such as `pyro.infer.SVI`, allow us to use arbitrary stochastic functions, which we will call *guide functions* or *guides*, as approximate posterior distributions. Guide functions must satisfy these two criteria to be valid approximations for a particular model:
# 1. all unobserved (i.e., not conditioned) sample statements that appear in the model appear in the guide.
# 2. the guide has the same input signature as the model (i.e., takes the same arguments)
#
# Guide functions can serve as programmable, data-dependent proposal distributions for importance sampling, rejection sampling, sequential Monte Carlo, MCMC, and independent Metropolis-Hastings, and as variational distributions or inference networks for stochastic variational inference. Currently, importance sampling, MCMC, and stochastic variational inference are implemented in Pyro, and we plan to add other algorithms in the future.
#
# Although the precise meaning of the guide differs across inference algorithms, the guide function should generally be chosen so that, in principle, it is flexible enough to closely approximate the distribution over all unobserved `sample` statements in the model.
# In the case of `scale`, it turns out that the true posterior distribution over `weight` given `guess` and `measurement` is actually ${\sf Normal}(9.14, 0.6)$. As the model is quite simple, we are able to determine our posterior distribution of interest analytically (for derivation, see for example Section 3.4 of http://www.stat.cmu.edu/~brian/463-663/week09/Chapter%2003.pdf ).
#
#
def perfect_guide(guess):
loc =(0.75**2 * guess + 9.5) / (1 + 0.75**2) # 9.14
scale = np.sqrt(0.75**2/(1 + 0.75**2)) # 0.6
return pyro.sample("weight", dist.Normal(loc, scale))
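# As a sanity check, the posterior parameters used in `perfect_guide` follow from the standard normal-normal conjugacy formula; here is a plain NumPy verification of those numbers (independent of Pyro):

```python
import numpy as np

# Normal-normal conjugacy: prior N(guess, 1), likelihood N(weight, 0.75)
guess, measurement = 8.5, 9.5
prior_var, noise_var = 1.0**2, 0.75**2
post_mean = (noise_var * guess + prior_var * measurement) / (prior_var + noise_var)
post_std = np.sqrt(prior_var * noise_var / (prior_var + noise_var))
print(round(post_mean, 2), round(post_std, 2))  # 9.14 0.6
```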
# ## Parametrized Stochastic Functions and Variational Inference
#
# Although we could write out the exact posterior distribution for `scale`, in general it is intractable to specify a guide that is a good approximation to the posterior distribution of an arbitrary conditioned stochastic function. In fact, stochastic functions for which we can determine the true posterior exactly are the exception rather than the rule. For example, even a version of our `scale` example with a nonlinear function in the middle may be intractable:
def intractable_scale(guess):
weight = pyro.sample("weight", dist.Normal(guess, 1.0))
return pyro.sample("measurement", dist.Normal(some_nonlinear_function(weight), 0.75))
# What we can do instead is use the top-level function `pyro.param` to specify a *family* of guides indexed by named parameters, and search for the member of that family that is the best approximation according to some loss function. This approach to approximate posterior inference is called *variational inference*.
#
# `pyro.param` is a frontend for Pyro's key-value *parameter store*, which is described in more detail in the documentation. Like `pyro.sample`, `pyro.param` is always called with a name as its first argument. The first time `pyro.param` is called with a particular name, it stores its argument in the parameter store and then returns that value. After that, when it is called with that name, it returns the value from the parameter store regardless of any other arguments. It is similar to `simple_param_store.setdefault` here, but with some additional tracking and management functionality.
#
# ```python
# simple_param_store = {}
# a = simple_param_store.setdefault("a", torch.randn(1))
# ```
#
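# The store-then-return behavior of `setdefault` can be seen with a plain dict (a sketch of the semantics only, not Pyro's actual parameter store):

```python
# Plain-dict sketch of the parameter-store semantics described above
simple_param_store = {}
first = simple_param_store.setdefault("a", 1.0)    # "a" absent: stores 1.0 and returns it
second = simple_param_store.setdefault("a", 99.0)  # "a" present: the later default is ignored
print(first, second)  # 1.0 1.0
```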
# For example, we can parametrize `a` and `b` in `scale_posterior_guide` instead of specifying them by hand:
def scale_parametrized_guide(guess):
a = pyro.param("a", torch.tensor(guess))
b = pyro.param("b", torch.tensor(1.))
return pyro.sample("weight", dist.Normal(a, torch.abs(b)))
# As an aside, note that in `scale_parametrized_guide`, we had to apply `torch.abs` to parameter `b` because the standard deviation of a normal distribution has to be positive; similar restrictions also apply to parameters of many other distributions. The PyTorch distributions library, which Pyro is built on, includes a [constraints module](https://pytorch.org/docs/master/distributions.html#module-torch.distributions.constraints) for enforcing such restrictions, and applying constraints to Pyro parameters is as easy as passing the relevant `constraint` object to `pyro.param`:
# +
from torch.distributions import constraints
def scale_parametrized_guide_constrained(guess):
a = pyro.param("a", torch.tensor(guess))
b = pyro.param("b", torch.tensor(1.), constraint=constraints.positive)
return pyro.sample("weight", dist.Normal(a, b)) # no more torch.abs
# -
# Pyro is built to enable *stochastic variational inference*, a powerful and widely applicable class of variational inference algorithms with three key characteristics:
#
# 1. Parameters are always real-valued tensors
# 2. We compute Monte Carlo estimates of a loss function from samples of execution histories of the model and guide
# 3. We use stochastic gradient descent to search for the optimal parameters.
#
# Combining stochastic gradient descent with PyTorch's GPU-accelerated tensor math and automatic differentiation allows us to scale variational inference to very high-dimensional parameter spaces and massive datasets.
#
# Pyro's SVI functionality is described in detail in the [SVI tutorial](svi_part_i.ipynb). Here is a very simple example applying it to `scale`:
# +
guess = torch.tensor(8.5)
pyro.clear_param_store()
svi = pyro.infer.SVI(model=conditioned_scale,
guide=scale_parametrized_guide,
optim=pyro.optim.SGD({"lr": 0.001, "momentum":0.1}),
loss=pyro.infer.Trace_ELBO())
losses, a,b = [], [], []
num_steps = 2500
for t in range(num_steps):
losses.append(svi.step(guess))
a.append(pyro.param("a").item())
b.append(pyro.param("b").item())
plt.plot(losses)
plt.title("ELBO")
plt.xlabel("step")
plt.ylabel("loss");
print('a = ', pyro.param("a").item())
print('b = ', pyro.param("b").item())
# +
plt.subplot(1,2,1)
plt.plot([0,num_steps],[9.14,9.14], 'k:')
plt.plot(a)
plt.ylabel('a')
plt.subplot(1,2,2)
plt.ylabel('b')
plt.plot([0,num_steps],[0.6,0.6], 'k:')
plt.plot(b)
plt.tight_layout()
# -
# **Note that SVI obtains parameters very close to the true parameters of the desired conditional distribution. This is to be expected as our guide is from the same family.**
#
# Note that optimization will update the values of the guide parameters in the parameter store, so that once we find good parameter values, we can use samples from the guide as posterior samples from downstream tasks.
#
#
# ## Next Steps
#
# In the [Variational Autoencoder tutorial](vae.ipynb), we'll see how models like `scale` can be augmented with deep neural networks and use stochastic variational inference to build a generative model of images.
|
tutorial/source/intro_part_ii.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Imports
# %load_ext autoreload
# %autoreload 1
from utils import *
from data_preprocess import *
# %aimport utils
# %aimport data_preprocess
# -
# ## Detect faces from source images and save them to one folder
# +
path_out = 'test'
path_to_annotations = 'test/annotations.csv'
predictor_path = 'dlib_landmarks_predictor/shape_predictor_68_face_landmarks.dat'
# process_folder('archive/source/faces_pytorch/images/', path_out, path_to_annotations, predictor_path)
# process_folder('example_data/images/', path_out, path_to_annotations, predictor_path)
# process_folder('faces_pytorch/images/', path_out, path_to_annotations, predictor_path)
# -
# ## Manually label faces extracted from source images
label_images('data/images/', 'data/annotations.csv')
# ## Explore the dataset
imgs = ImageList('data/images/', 'data/annotations.csv', default_type='numpy')
imgs.viewer()
# ## Distribute images to class folders, according to labels
# Categorical classes:
distribute_by_class('./data/images/', './data/datasets/categorical/', './data/annotations.csv', mode='categorical')
# Binary classes for 2 problems
distribute_by_class('./data/images/', './data/datasets/binary', './data/annotations.csv', mode='binary')
|
preprocess_and_label.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# + [markdown] nbpresent={"id": "37bafa9d-bd32-4fb6-ac83-4899d86e02af"}
# ### Val Roseg Water Supply
# The water supply to the fountain was intermittent, causing long melt and short freezing periods. In addition, freezing and melting inside the supply pipeline made the fountain's water discharge vary a lot.
# + nbpresent={"id": "d60dcd45-0dfd-4644-8476-0f5848851c2e"}
# -*- coding: utf-8 -*-
from __future__ import print_function
from __future__ import division
import datetime
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
# + nbpresent={"id": "91d63ca3-e029-4e87-b4af-ef7be120d2a7"}
import pandas as pd
import numpy as np
from datetime import timedelta,date, time
from matplotlib import pyplot as plt
def GetTime(sec):
sec = timedelta(seconds=sec)
    d = datetime.datetime(1, 1, 1) + sec
print("DAYS:HOURS:MIN:SEC")
print("%d:%d:%d:%d" % (d.day-1, d.hour, d.minute, d.second))
# + nbpresent={"id": "c3242a97-af9e-4b02-b714-9ba30e861f57"}
df = pd.read_csv('../data/interim/roseg_measurements.csv', parse_dates=['time'])
del df['doy']
df.head()
# + [markdown] nbpresent={"id": "d63db050-0cd2-47da-bef7-27146090e1a5"}
# ### Estimating total water consumption
# Assuming constant discharge of 1 litre/s or $3.6 m^3/h$
# -
df['delta_time'] = (df['time']-df['time'].shift()).fillna(0)
df['delta_height'] = (df['height']-df['height'].shift()).fillna(0)
df['growth_rate'] = (df['delta_height']*100)/(df['delta_time'].apply(lambda x: x.total_seconds()/3600))
df['delta_time']=df['delta_time'].apply(lambda x: x.total_seconds())
df['water_used']=df['delta_time']*df['water']
print('The fountain worked for approximately %d days between 17 November 2016 and 7 April 2017 (a 141-day span)' % (df['water_used'].sum()//3600//24))
print(' %d litres of water used' % (df['water_used'].sum()))
df['water_used']=df['delta_time']*df['water']/3600
df[['water_used']].plot(figsize=(12,8), grid=True)
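# The shift-based delta computation above can be sketched on toy measurements (the timestamps and heights below are made up for illustration):

```python
import pandas as pd

# Hypothetical ice-height measurements at irregular times
toy = pd.DataFrame({
    "time": pd.to_datetime(["2016-11-17 00:00", "2016-11-17 04:00", "2016-11-17 12:00"]),
    "height": [0.0, 0.5, 1.5],  # metres (made-up numbers)
})
toy["delta_time"] = (toy["time"] - toy["time"].shift()).fillna(pd.Timedelta(0))
toy["delta_height"] = (toy["height"] - toy["height"].shift()).fillna(0)
hours = toy["delta_time"].apply(lambda x: x.total_seconds() / 3600)
toy["growth_rate"] = (toy["delta_height"] * 100) / hours  # cm per hour
print(toy["growth_rate"].tolist()[1:])  # [12.5, 12.5]
```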
|
notebooks/.ipynb_checkpoints/Water_IO-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!pip install python-dotenv
# +
# Import dependencies
import os
import pandas as pd
from sqlalchemy import create_engine
import psycopg2
import matplotlib.pyplot as plt
import numpy as np
from dotenv import load_dotenv
import seaborn as sns
# %matplotlib inline
load_dotenv()
# Set password variable
db_password = os.environ.get('db_password')
# -
# Create a connection to the Salaries database
salaries_engine = create_engine(f'postgresql://postgres:{db_password}@localhost:5432/salaries')
connection = salaries_engine.connect()
# Query All Records in the Salary Table
salary_db = pd.read_sql("SELECT * FROM salaries", connection)
salary_db.head()
# Create a connection to the Titles database
titles_engine = create_engine(f'postgresql://postgres:{db_password}@localhost:5432/titles')
connection_two = titles_engine.connect()
# Query All Records in the Titles Table
titles_db = pd.read_sql("SELECT * FROM titles", connection_two)
titles_db.head()
# Create a connection to the Employees database
employees_engine = create_engine(f'postgresql://postgres:{db_password}@localhost:5432/employees')
connection_two = employees_engine.connect()
# Query All Records in the Employees Table
employees_db = pd.read_sql("SELECT * FROM employees", connection_two)
employees_db.head()
# Merge employees and salary table
employees_salary_merged = pd.merge(employees_db,salary_db, how="inner", on="emp_no")
employees_salary_merged
# Rename "emp_title_id" in "employees_salary_merged" to "title_id" to match the titles table
renamed_data = employees_salary_merged.rename(columns={"emp_title_id": "title_id"})
renamed_data.head()
# Merge 'renamed_data' with titles table
title_salary_merged = pd.merge(renamed_data, titles_db, how="inner", on="title_id")
title_salary_merged
# +
# Create a histogram to visualize the most common salary ranges for employee
ax = title_salary_merged.salary.hist(by=title_salary_merged.title,figsize=(15,13), color='green')
ax = ax[0]
for x in ax:
# Set x-axis label
x.set_xlabel("Salary", labelpad=10, size=12)
# Set y-axis label
x.set_ylabel("Employees Frequency", labelpad=10, size=12)
# -
# Find the average salary by title
avg_salary_by_title = title_salary_merged.groupby('title')['salary'].mean()
# Generate the bar plot of average salary by title
avg_salary_by_title.plot(kind='bar', color="green", title="Average Salary by Title")
plt.ylabel("Average Salary")
plt.tight_layout()
plt.show()
|
bonus_employees.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="aaveNCjDcbtD" colab_type="text"
# # "Type hinting enum params from within a notebook"
#
# - toc: true
# - badges: true
# - comments: true
# - categories: [Misc]
# + id="SGCVGPXacFrZ" colab_type="code" colab={}
#hide
from IPython.display import Image as IPImage
def url_image(url):
display(IPImage(url=url))
def blog_image(fn):
    url_image("https://joedockrill.github.io/blog/images/" + fn)
def local_image(fn):
display(IPImage(filename=fn))
# + id="SibIY17ic51E" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 151} outputId="32aa108b-33b8-4938-9eed-90225acf6767"
#hide_input
local_image("type_hint.jpg")
# + [markdown] id="c1EtMtagdOu4" colab_type="text"
# What is that meant to look like? Can I not do something about that?
|
_notebooks/2020-08-03-Type-hinting-enum-params-within-a-notebook.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Historical Tonnage List API Examples
# ## Setup
# Install the Signal Ocean SDK:
# ```
# pip install signal-ocean
# ```
# Set your subscription key acquired here: https://apis.signalocean.com/profile
signal_ocean_api_key = 'NotValid' #replace with your subscription key
# ## Retrieving a historical tonnage list
# First, we need to determine the parameters of the **historical tonnage list** (**HTL**). In order to fetch an HTL, we will need to specify:
# - a loading port,
# - a vessel class,
# - a time frame.
#
# Ports and vessel classes can be retrieved through their corresponding APIs via the `PortAPI` and `VesselClassAPI` classes:
# +
from signal_ocean import PortAPI, VesselClassAPI, Connection
connection = Connection(signal_ocean_api_key)
port_api = PortAPI(connection)
vessel_class_api = VesselClassAPI(connection)
vessel_class_api.get_vessel_classes()
# -
# Ports can be looked up by their name using the `PortFilter`:
# +
from signal_ocean import PortFilter
port_api.get_ports(PortFilter(name_like='rot'))
# -
# And so can vessel classes with the use of the `VesselClassFilter`:
# +
from signal_ocean import VesselClassFilter
vessel_class_api.get_vessel_classes(VesselClassFilter(name_like='MAX'))
# -
# Note that the search is case-insensitive and does not require specifying exact names.
#
# We will look for Aframax vessels in Ceyhan, 6 days forward, for the last 90 days:
# +
from datetime import date, timedelta,time
vessel_class = vessel_class_api.get_vessel_classes(VesselClassFilter(name_like='aframax'))[0]
port = port_api.get_ports(PortFilter(name_like='ceyhan'))[0]
days_forward = 6
today = date.today()
start_date = today - timedelta(days=90)
# -
# With the parameters above, we can now call the API:
# +
from signal_ocean.historical_tonnage_list import HistoricalTonnageListAPI
htl_api = HistoricalTonnageListAPI(connection)
htl = htl_api.get_historical_tonnage_list(
port,
vessel_class,
days_forward,
start_date,
end_date=today
)
# -
# The resulting historical tonnage list is a Python object that contains a collection of tonnage lists, each of which has a timestamp and a collection of vessel data. The tonnage lists are ordered by date in descending order:
todays_tl = htl[0]
print('Date:', todays_tl.date)
print('Vessel count:', len(todays_tl.vessels))
print('Example vessel:', todays_tl.vessels[0])
# The result can also be converted into a Pandas data frame:
data_frame = htl.to_data_frame()
data_frame
# ## Example 1 - Plotting a supply trend
# The data frame format makes it very easy to generate a supply trend plot.
#
# We'll generate a supply trend from the beginning of the year, but we'll also filter the vessel list by looking for vessels that:
# - are pushed,
# - have a market deployment type of "Relet" or "Spot",
# - their commercial status is available, cancelled or failed,
# - are crude oil tankers (their vessel subclass is "Dirty"),
# - their AIS information is no older than 5 days.
#
# Filtering can be achieved by creating an instance of a `VesselFilter` and passing it to the `get_historical_tonnage_list` method. A `VesselFilter` meeting the above criteria will look as follows:
# +
from signal_ocean.historical_tonnage_list import VesselFilter, PushType, MarketDeployment, CommercialStatus, VesselSubclass
vessel_filter = VesselFilter(
push_types=[PushType.PUSHED],
market_deployments=[MarketDeployment.RELET, MarketDeployment.SPOT],
commercial_statuses=[CommercialStatus.AVAILABLE, CommercialStatus.CANCELLED, CommercialStatus.FAILED],
vessel_subclass=VesselSubclass.DIRTY,
latest_ais_since=5
)
# -
# Note the usage of the `PushType`, `MarketDeployment`, `CommercialStatus`, and `VesselSubclass`. These are enum-like classes that contain constants for all the possible values for a given `VesselFilter` parameter. To list the available values for any of the classes, just invoke `list()` on the class:
list(CommercialStatus)
# You can use these values directly or use a corresponding class member:
CommercialStatus.ON_SUBS == 'On Subs'
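# Such enum-like string constants can be sketched with Python's standard `enum` module (a toy stand-in for illustration, not the SDK's actual implementation):

```python
from enum import Enum

class Status(str, Enum):
    """Toy stand-in for an enum-like class such as CommercialStatus."""
    AVAILABLE = "Available"
    ON_SUBS = "On Subs"
    FAILED = "Failed"

print(list(Status))                 # lists all members, like list(CommercialStatus)
print(Status.ON_SUBS == "On Subs")  # True: members compare equal to their string values
```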
# Let's get the HTL for our filter:
# +
beginning_of_year = date(today.year, 1, 1)
htl_for_supply_trend = htl_api.get_historical_tonnage_list(
port,
vessel_class,
days_forward,
    beginning_of_year,
end_date=today,
vessel_filter=vessel_filter,
time=time(hour=6)
)
supply_trend_data_frame = htl_for_supply_trend.to_data_frame()
supply_trend_data_frame
# -
# Now, we can generate the plot:
# +
from signal_ocean.historical_tonnage_list import IndexLevel
supply_trend = supply_trend_data_frame.groupby(IndexLevel.DATE, sort=True).size()
plot = supply_trend.plot()
plot.set_ylabel('Vessel count')
plot
# -
# ## Example 2 - Generating an Excel sheet
# The data frame can be easily saved as an Excel file by using Pandas's built-in `to_excel()` function.
#
# Before we do that, we need to remove all the time zone information from all the timestamps in the data frame. This is because Excel does not support storing time zone information along with timestamps. However, Signal Ocean's SDK always provides time zone information to make all timestamp-based computation unambiguous.
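# Time-zone stripping can also be illustrated on a toy series with pandas's `tz_localize(None)` (a sketch with synthetic timestamps, not the SDK's columns):

```python
import pandas as pd

# A tz-aware timestamp series, like the ones in the SDK's data frame
aware = pd.Series(pd.to_datetime(["2021-01-01 06:00", "2021-01-02 06:00"]).tz_localize("UTC"))
naive = aware.dt.tz_localize(None)  # drop the time zone, keep the wall-clock time
print(aware.dt.tz is not None, naive.dt.tz is None)  # True True
```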
# +
from signal_ocean.historical_tonnage_list import Column
without_time_zones = (
supply_trend_data_frame
.reset_index()
.astype({ IndexLevel.DATE: 'datetime64[ns]', Column.OPEN_DATE: 'datetime64[ns]', Column.ETA: 'datetime64[ns]', Column.LATEST_AIS: 'datetime64[ns]'})
.set_index([IndexLevel.DATE, IndexLevel.IMO])
)
# -
# Now, we can generate the Excel file:
without_time_zones.to_excel('htl.xlsx')
|
docs/examples/jupyter/HTL API/Historical Tonnage List API.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import matplotlib.pyplot as plt
from cylinder import Cylinder
from simulation import simulate, cumulative_function_graph
rounds = 10 ** 5
radius = 1
height = 2
body = Cylinder(radius, height)
simulation = simulate(body, rounds)
x_points, y_points = cumulative_function_graph(body, simulation)
plt.plot(x_points, y_points)
|
cylinder simulation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Analyzing CIA Factbook Data Using SQLite and Python
#
# In this project we analyze the CIA World Factbook, which contains statistics about every country on Earth. We will answer the following questions:
#
# - What are some general summary statistics?
# - What are the most common birth rates, death rates, and populations?
# - Which countries have the highest population density?
# - Which country has the highest water-to-land ratio?
#
# We will use SQLite and Python.
#
# First, we connect to the database and list the tables it contains.
# +
import sqlite3
import pandas as pd
conn = sqlite3.connect("factbook.db")
q= "SELECT * FROM sqlite_master WHERE type='table';"
pd.read_sql_query(q, conn)
# -
# Let's explore the first 5 rows of the `facts` table:
# +
q = "select * from facts limit 5"
pd.read_sql_query(q, conn)
# -
# ## Summary statistics
#
# Let's calculate some summary statistics for this table to see if anything stands out.
q ="select min(population), max(population), min(population_growth), max(population_growth) from facts"
pd.read_sql_query(q, conn)
# The results above show a row with a population of 0 and a row with more than 7.2 billion. Let's investigate these rows, starting with the 7.2 billion one.
q = " select * from facts where population == (select max(population) from facts)"
pd.read_sql_query(q, conn)
# The name for this row is "World", and its population is that of the whole world.
#
q = " select * from facts where population == (select min(population) from facts)"
pd.read_sql_query(q, conn)
# The zero population belongs to Antarctica, which makes sense.
#
# ## Histograms for population, population_growth, birth_rate, death_rate
#
# In this section we select only the `population`, `population_growth`, `birth_rate`, and `death_rate` columns from the database and plot a histogram of each.
# +
import matplotlib.pyplot as plt
import seaborn as sns
fig = plt.figure(figsize=(15, 15))
ax1 = fig.add_subplot(2, 2, 1)
q = '''select population, population_growth, birth_rate, death_rate from facts
where (population != (select min(population) from facts)) and (population != (select max(population) from facts))
'''
pd.read_sql_query(q, conn).hist(ax = ax1, grid = False)
# -
# The histograms above show that most countries have a birth rate between 7 and 25 (births per 1,000 people), with none below 7, and a death rate between 5 and 9. For population, only a couple of rows fall between 1.2 and 1.4 billion (China and India), there are no countries between 600 million and 1.2 billion, and most countries have fewer than 100 million people.
#
# ## Highest population density
#
# Population density is the ratio of population to land area. Let's calculate this ratio for our dataset.
#
# +
q = ''' select cast(population as float) / cast(area_land as float) as density from facts
where (population != (select min(population) from facts)) and (population != (select max(population) from facts))
order by density desc'''
pd.read_sql_query(q, conn).hist(grid = False, bins = 5)
# -
# Most countries have a density below 5,000. Let's find the top 20 countries by density.
# +
q = ''' select name, cast(population as float) / cast(area_land as float) as density from facts
where (population != (select min(population) from facts)) and (population != (select max(population) from facts))
order by density desc
limit 20
'''
pd.read_sql_query(q, conn)
# -
# The table above lists the 20 highest-density countries with their names and density values.
#
# ## Water-to-land ratio
#
# We calculate the water-to-land ratio for each country by dividing `area_water` by `area_land`, as in the code below.
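# The CAST-to-float division pattern can be checked on a tiny in-memory table (toy data, not the Factbook):

```python
import sqlite3

conn_demo = sqlite3.connect(":memory:")
conn_demo.execute("CREATE TABLE demo (name TEXT, area_land INTEGER, area_water INTEGER)")
conn_demo.executemany("INSERT INTO demo VALUES (?, ?, ?)",
                      [("A", 100, 150), ("B", 100, 20)])
rows = conn_demo.execute(
    "SELECT name, CAST(area_water AS FLOAT) / CAST(area_land AS FLOAT) AS ratio "
    "FROM demo ORDER BY ratio DESC").fetchall()
print(rows)  # [('A', 1.5), ('B', 0.2)]
```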
# +
q = ''' select name, cast(area_water as float) / cast(area_land as float) as water_to_land from facts
where (population != (select min(population) from facts)) and (population != (select max(population) from facts))
order by water_to_land desc
limit 20
'''
pd.read_sql_query(q, conn)
# -
# The result for the top 20 is shown in the table above. The Virgin Islands is the only country in the world with more water than land, since its ratio is greater than 1.
#
# ## Conclusion
#
# We analyzed the CIA Factbook and found that:
#
# - most countries have a birth rate between 7 and 25 (per 1,000 people)
#
# - most countries have a death rate between 5 and 9
#
# - most countries have fewer than 100 million people
#
# - most countries have a density below 5,000
#
# - Macau has the highest density (21,168.96)
#
# - the Virgin Islands is the only country in the world with more water than land
#
|
CIA_factbook_analyz.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + language="html"
# <iframe src="//player.bilibili.com/player.html?aid=45030628&cid=78866092&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true" width=100% height=680> </iframe>
# + [markdown] slideshow={"slide_type": "slide"}
# # Integrated Cloud Courseware Demo
# ## Using Jupyter Notebook to integrate resources for teaching, by David
# + hide_input=true init_cell=true slideshow={"slide_type": "skip"}
from videoref import *
# + [markdown] slideshow={"slide_type": "slide"}
# # Python Data Structures
# - Video sourced from ALIYUN OSS
# + hide_input=false
webm
# + [markdown] hide_input=true slideshow={"slide_type": "subslide"}
# <video src="http://ml-course.oss-cn-beijing.aliyuncs.com/1.webm" width="100%" height="100%" controls="controls"></video>
# -
# <video src="1080.webm" width="100%" height="100%" controls="controls"></video>
# OGG
# <video src="http://ml-course.oss-cn-beijing.aliyuncs.com/2.Ogg" width="100%" height="100%" controls="controls"></video>
# <video src="1.mp4" width="100%" height="100%" controls="controls"></video>
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Now try it along with the video
# + [markdown] slideshow={"slide_type": "fragment"}
# - Task 1: create a string object named str1 with the value "i love china" and display it
# + slideshow={"slide_type": "fragment"}
str1 = 'i love china'
print(str1)
# + [markdown] slideshow={"slide_type": "fragment"}
# - Task 2: use slicing to access elements 3 through 7 of the string
# + slideshow={"slide_type": "fragment"}
str1[3:7]
# + [markdown] slideshow={"slide_type": "slide"}
# ## Follow the video for a deeper look at Python lists
# - Video from Bilibili
# + hide_input=false init_cell=true slideshow={"slide_type": "subslide"}
display(a1)
# + [markdown] slideshow={"slide_type": "fragment"}
# - Task 1: use a list comprehension to create a list from 0 to 100
# + slideshow={"slide_type": "fragment"}
l1 = [item for item in range(101)]
l1
# + [markdown] slideshow={"slide_type": "slide"}
# # Finally, let's wrap up and relax to end today's session
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Today we covered Python data structures, including lists
# + hide_input=false init_cell=true slideshow={"slide_type": "subslide"}
display(a2)
# + [markdown] slideshow={"slide_type": "slide"}
# # Continue to the next lesson...
|
Video.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:tf2]
# language: python
# name: conda-env-tf2-py
# ---
# # Accessing, Initializing, and Sharing Model Parameters
#
# In the ["Concise Implementation of Linear Regression"] section, we used the `init` module to initialize all of a model's parameters, and introduced a simple way to access them. This section takes a deeper look at how to access and initialize model parameters, and how to share one set of parameters across multiple layers.
#
# We first define a multilayer perceptron with a single hidden layer, as in the previous section. We initialize its parameters in the default way as before and run one forward computation.
import tensorflow as tf
import numpy as np
print(tf.__version__)
# +
net = tf.keras.models.Sequential()
net.add(tf.keras.layers.Flatten())
net.add(tf.keras.layers.Dense(256,activation=tf.nn.relu))
net.add(tf.keras.layers.Dense(10))
X = tf.random.uniform((2,20))
Y = net(X)
Y
# -
# ## 4.2.1 Accessing Model Parameters
#
# For a network built with the `Sequential` class, we can access the weights of any layer through the `weights` attribute. Recall the inheritance relationship between `Sequential` and `tf.keras.Model` from the previous section: for a parameter-carrying layer inside a `Sequential` instance, the `weights` attribute of `tf.keras.Model` exposes all of that layer's parameters. Below, we access all parameters of the hidden layer of the multilayer perceptron `net`. Index 0 refers to the hidden layer, the first layer added to the `Sequential` instance.
net.weights[0], type(net.weights[0])
# ## 4.2.2 Initializing Parameters
#
# The ["Numerical Stability and Model Initialization"] section described the default initialization: weight parameters are drawn uniformly from [-0.07, 0.07] and bias parameters are all 0. We often need other initialization schemes. In the example below, we initialize the weights as normal random numbers with mean 0 and standard deviation 0.01, and still zero out the biases.
# +
class Linear(tf.keras.Model):
def __init__(self):
super().__init__()
self.d1 = tf.keras.layers.Dense(
units=10,
activation=None,
kernel_initializer=tf.random_normal_initializer(mean=0,stddev=0.01),
bias_initializer=tf.zeros_initializer()
)
self.d2 = tf.keras.layers.Dense(
units=1,
activation=None,
kernel_initializer=tf.ones_initializer(),
bias_initializer=tf.ones_initializer()
)
def call(self, input):
output = self.d1(input)
output = self.d2(output)
return output
net = Linear()
net(X)
net.get_weights()
# -
# ## 4.2.3 Custom Initializers
#
# We can implement custom initialization using the classes in `tf.keras.initializers`.
# +
def my_init():
return tf.keras.initializers.Ones()
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(64, kernel_initializer=my_init()))
Y = model(X)
model.weights[0]
|
code/chapter04_DL-computation/4.2_parameters.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Finding the Authors of a Terrorist Attack
# # Table of contents <a name="TOC"></a>
# 1. [Data acquisition](#data-aquisition)
# - [Data exploration](#data-exploration)
# 1. [Attacks per organization](#attacks-per-organization)
# - [Terrorist Attacks through time](#attacks-through-time)
# - [Similarity within organization](#org-similarity)
# - [Building the feature graph](#feature-graph)
# - [Finding organizations responsible for an attack](#org-predict)
# 1. [Using PCA and K-Means](#pca-kmeans)
# - [Using Spectral Embedding and K-Means](#spectral-kmeans)
# - [Using Multiclass label classifier (OneVersusRest)](#onevsrest)
# # Data acquisition <a name="data-aquisition"></a>
# [Go back to the top](#TOC)
# The data we analyse are given by the NTDS course. They can be downloaded from [here](https://linqs-data.soe.ucsc.edu/public/lbc/TerrorAttack.tgz).
# +
# utility imports
import pandas as pd
import numpy as np
from collections import Counter
from scipy.spatial.distance import pdist, squareform
from scipy import sparse
# ml imports
from sklearn.model_selection import cross_val_score, KFold
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn import preprocessing, decomposition
from sklearn.cluster import KMeans
# visualization imports
import networkx as nx
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
# -
PATH = "TerrorAttack/"
SEED = 0
# Load data
edges_org = pd.read_csv(PATH +'terrorist_attack_loc_org.edges', sep=' ', header=None)
edges = pd.read_csv(PATH +'terrorist_attack_loc.edges', sep=' ', header=None)
labels = pd.read_csv(PATH +'terrorist_attack.labels', sep=' ', header=None)
nodes = pd.read_csv(PATH +'terrorist_attack.nodes', sep='\t', header=None)
n_nodes = nodes.shape[0]
# # Data exploration and cleaning <a name="data-exploration"></a>
# [Go back to the top](#TOC)
nodes.head()
nodes[0][42]
# The nodes are identified by a web link. However, the link itself carries information: the name of the organization appears after the `#`, and the date of the attack appears at the end of the link.
#
# We will extract this information and construct a new data frame.
# +
# extract information of date and organization from the link
nodes_info = nodes[0].apply(lambda x : (x.split("#"))[1])
dates = nodes_info.apply(lambda x : x[-8:])
dates = pd.to_datetime(dates, format='%Y%m%d', errors='coerce')
organizations = nodes_info.apply(lambda x : x[:-9])
attacks_dict = {"organization": organizations, "date": dates}
attacks = pd.DataFrame(attacks_dict)
attacks.head()
# -
# ## Attacks per organization <a name="attacks-per-organization"></a>
# [Go back to the top](#TOC)
# We observe that some organizations are unknown. Let's count how many values are unknown.
attacks.organization.value_counts().head()
# That is quite a lot. We may be able to predict the responsible organization for those unknown authors. First, we check the proportion of attacks committed by known organizations with at least 10 attacks: if an organization has only a few rows in the `nodes` dataset, it will be difficult to extract information from so little data.
ATK_THRESHOLD = 10
attacks.organization.replace('', 'Unknown', inplace=True)
# +
attack_per_org = attacks.organization.value_counts()[1:]
famous_orgs = attack_per_org[attack_per_org >= ATK_THRESHOLD].index
num_attacks = attack_per_org.sum()
prop_freq_org = attack_per_org[famous_orgs].sum() / num_attacks
print("{:.2%} of known attacks were committed by frequent organizations.".format(prop_freq_org))
# -
# That proportion seems high enough to make prediction worthwhile.
# Here are the main organizations in the dataset:
attack_per_org[famous_orgs][::-1].plot.barh(color='steelblue', figsize=(6,6),
title='Attacks per organization');
# +
# concatenate features into the dataframe of attacks
attacks = pd.concat([attacks, nodes.iloc[:, 1:]], axis=1)
# get only the type of attack from last column
attacks.iloc[:, -1] = nodes.iloc[:, -1].apply(lambda x: x.split('#')[1])
attacks.head()
# -
# ## Terrorist Attacks through time<a name="attacks-through-time"></a>
# [Go back to the top](#TOC)
# We will now also have a look on the number of attacks w.r.t time.
dates = attacks.date.dropna()
# +
attack_year = dates.apply(lambda d: d.year)
year_min = attack_year.min()
year_max = attack_year.max()
print("Our data contains attacks start from year {} till {}"
.format(year_min, year_max))
# -
sns.distplot(attack_year, bins=year_max - year_min + 1)
plt.title('Histogram of attacks per year')
plt.xlim([year_min, year_max]);
# +
attack_month = dates.apply(lambda d: d.month)
month_occurences = attack_month.value_counts().sort_index()
month_occurences.index = ['January', 'February', 'Mars', 'April', 'May', 'June',
'July', 'August', 'September', 'October', 'November', 'December']
month_occurences.plot.bar(width=0.9, color='steelblue', title='Number of attacks per month', rot=30);
# -
# ## Similarity within organization<a name="org-similarity"></a>
# [Go back to the top](#TOC)
# To see whether it is relevant to try to predict the organizations from the features, we check whether attacks of the same organization are more similar in feature space than attacks across organizations.
#Transform the labels into features with dummy variable encoding, also dismiss the labels
features = pd.get_dummies(nodes.iloc[:, 1:])
dot_products = features@features.T
norms = np.sqrt(np.diag(dot_products))
sim_matrix = dot_products / np.outer(norms, norms)
# +
diffs = []
for i in range(n_nodes):
org = attacks['organization'][i]
sim = sim_matrix[i]
    if org != 'Unknown' and attack_per_org[org] >= ATK_THRESHOLD:
org_indices = attacks[attacks.organization == org].index
diffs += [sim[org_indices].mean() - sim.mean()]
# -
fig = plt.figure(figsize=(8,4))
plt.hist(diffs, bins=25)
plt.xlim(-max(diffs), max(diffs))
plt.ylim((0, 50))
plt.vlines(0, 0, 50)
plt.title('Global similarity vs organization similarity', size=16);
# ## Building a feature graph<a name="feature-graph"></a>
# [Go back to the top](#TOC)
# +
#creating the adjacency matrix for our feature graph
#pdist computes the pairwise euclidean distance
distances = pdist(features)
kernel_width = distances.mean()
#Gaussian function
weights = np.exp(-distances**2 / kernel_width**2)
features_adjacency = squareform(weights)
# -
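# On toy data, the `pdist` → Gaussian kernel → `squareform` pipeline above can be sanity-checked: distances of zero map to weight 1, larger distances decay toward 0, and the resulting adjacency is symmetric with a zero diagonal (a minimal sketch with made-up vectors):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# three toy feature vectors
toy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])

d = pdist(toy)                 # condensed pairwise Euclidean distances
sigma = d.mean()               # kernel width, chosen as in the cell above
w = np.exp(-d**2 / sigma**2)   # Gaussian kernel weights in (0, 1]
adj = squareform(w)            # symmetric adjacency matrix, zero diagonal

print(adj.shape)  # (3, 3)
```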
# put the diagonal values to 0
features_adjacency[(range(n_nodes), range(n_nodes))] = 0
plt.hist(features_adjacency.ravel(), bins=30);
# sparsify the matrix for visualization
sparse_f_adjacency = features_adjacency.copy()
sparse_f_adjacency[sparse_f_adjacency < 0.7] = 0
sparse_graph = nx.from_numpy_array(sparse_f_adjacency)
# Save the graph to use it in gephi
nx.write_gexf(sparse_graph, 'feature_graph.gexf')
# # Finding organisations responsible for an attack<a name="org-predict"></a>
# [Go back to the top](#TOC)
# Our goal is to see whether we can predict who is behind an attack based only on the data we have. For the first two approaches, we only use attacks where the terrorist organization is known, and we sub-sample the data to keep just the top 3 organizations with the most attacks, because the results are easier to visualize.
# +
# create a mapping between the organizations and labels
idx_to_org = pd.Series(famous_orgs, name='organization')
org_to_idx = idx_to_org.reset_index().set_index('organization')['index']
# organizations with more than ATK_THRESHOLD attacks
X = features[attacks.organization.apply(lambda org: org in famous_orgs)]
y = attacks.query('organization in @famous_orgs').organization.apply(lambda x: org_to_idx[x])
# top 3 organizations
top3_orgs = attack_per_org.index[:3]
top3_orgs_idx = attacks.query('organization in @top3_orgs').index
X_top3 = X.loc[top3_orgs_idx]
y_top3 = y[top3_orgs_idx]
# -
# ## Using PCA and K-Means <a name="pca-kmeans"></a>
# [Go back to the top](#TOC)
# Here, we will use Principal Component Analysis to reduce our features from a very high dimension (113) to a 2-dimensional space. This lets us embed the data in a plot.
features_pca = decomposition.PCA(n_components=2).fit_transform(X_top3)
plt.scatter(features_pca[:, 0], features_pca[:, 1]);
plt.title('PCA embedding', size=16);
# Now we run K-Means to compute 3 clusters:
H = features_pca
clusters3 = KMeans(n_clusters=3, random_state=0).fit_predict(H)
plt.scatter(features_pca[:, 0], features_pca[:, 1], c=clusters3, cmap='brg', alpha=0.5);
plt.title('K-means cluster assignment PCA', size=16);
# Now we need to compare these clusters with the ground-truth labels
color_map = {0: 'red', 1: 'blue', 2: 'green'}
colors = [color_map[n] for n in y_top3]
plt.scatter(features_pca[:, 0], features_pca[:, 1], c=colors, cmap='brg', alpha=0.5);
plt.title('PCA embedding with ground truth labels', size=16);
# We can compute the accuracy of our prediction (in percent)
translate = {0:1, 1:0, 2:2}
labels = np.vectorize(translate.get)(y_top3)
((labels == clusters3).sum() / labels.shape[0])*100
# 71 percent! This is not that bad, but let's see if we can do better with spectral embedding
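# Rather than hand-picking a `translate` dict, the cluster-to-label matching can be found automatically with the Hungarian algorithm on the contingency table; `scipy.optimize.linear_sum_assignment` picks the assignment that maximizes agreement (a sketch on toy labels, not the notebook's data):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_mapping(true_labels, cluster_labels, k):
    """Return the cluster -> label mapping that maximizes agreement."""
    contingency = np.zeros((k, k), dtype=int)
    for t, c in zip(true_labels, cluster_labels):
        contingency[c, t] += 1
    rows, cols = linear_sum_assignment(-contingency)  # negate to maximize
    return {int(r): int(c) for r, c in zip(rows, cols)}

# toy example: the clusters are just a relabeling of the truth
truth    = np.array([0, 0, 1, 1, 2, 2])
clusters = np.array([1, 1, 0, 0, 2, 2])
mapping = best_mapping(truth, clusters, k=3)
print(mapping)  # {0: 1, 1: 0, 2: 2}
```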
# ## Using Spectral Embedding and K-Means <a name="spectral-kmeans"></a>
# [Go back to the top](#TOC)
# We restrict our feature adjacency to the top 3 organizations and build the normalized laplacian out of it
# +
f_adj_top3 = features_adjacency[top3_orgs_idx][:, top3_orgs_idx]
degrees_top3 = f_adj_top3.sum(axis=0)
# Combinatorial Laplacian.
laplacian = np.diag(degrees_top3) - f_adj_top3
# Normalized Laplacian.
deg_inv = np.diag(1 / np.sqrt(degrees_top3))
laplacian = deg_inv @ laplacian @ deg_inv
laplacian = sparse.csr_matrix(laplacian)
# -
#
# Now we compute the eigenvalue decomposition to be able to embed the graph in 2D.
eigenvalues, eigenvectors = sparse.linalg.eigsh(laplacian, k=3, which='SM', v0=np.ones(laplacian.shape[0]))
sortID = np.argsort(eigenvalues)
eigenvalues = eigenvalues[sortID]
eigenvectors = eigenvectors[:,sortID]
proj = eigenvectors[:,1:3]
plt.scatter(proj[:,0],proj[:,1])
plt.title('Spectral graph embedding', size=16);
# Again, we run K-means on it
# +
H = eigenvectors[:,1:3];
spect_kmeans = KMeans(n_clusters=3, random_state=0).fit_predict(H)
#to match the number of the cluster with the number of the true label
spect_kmeans = (spect_kmeans + 1) % 3
# -
# Cluster that k-means gives us:
# +
plt.scatter(proj[:,0],proj[:,1], c=spect_kmeans, cmap='brg', alpha=0.5)
plt.title('K-means cluster assignment', size=16);
# -
# For k=3
new_order3 = np.array([], dtype=int)
for i in range(3):
    new_order3 = np.append(new_order3, np.where(clusters3 == i))
# Now we compare it with our real labels:
color_map = {0: 'green', 1: 'red', 2: 'blue'}
colors = [color_map[n] for n in y_top3]
plt.scatter(proj[:,0],proj[:,1], c=colors, cmap='brg', alpha=0.5)
plt.title('Ground truth assigment', size=16);
# And we compute the accuracy:
translate = {0:2, 1:1, 2:0}
labels = np.vectorize(translate.get)(y_top3)
((labels == spect_kmeans).sum() / labels.shape[0])*100
# Our accuracy this time is 88%.
# ## Using Multiclass label classifier (OneVersusRest) <a name="onevsrest"></a>
# [Go back to the top](#TOC)
# #### Cross-validation
# +
correct = Counter()
total = Counter()
for train_idx, test_idx in KFold(4, shuffle=True, random_state=SEED).split(X):
# split the data
X_train = X.iloc[train_idx]
y_train = y.iloc[train_idx]
X_test = X.iloc[test_idx]
y_test = y.iloc[test_idx]
# fit the model
model = OneVsRestClassifier(LinearSVC(random_state=SEED))
model.fit(X_train, y_train)
# predict
y_pred = model.predict(X_test)
for i in range(len(y_pred)):
y_p = y_pred[i]
y_t = y_test.iloc[i]
total[y_t] += 1
if y_p == y_t:
correct[y_t] += 1
# -
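# The per-class `correct`/`total` bookkeeping in the loop above is equivalent to reading the diagonal and row sums of a confusion matrix; a minimal numpy sketch on hypothetical labels and predictions:

```python
import numpy as np

def per_class_counts(y_true, y_pred, n_classes):
    """Confusion matrix, plus correct and total counts per true class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    correct = np.diag(cm)     # hits on the diagonal, per class
    total = cm.sum(axis=1)    # samples per true class
    return cm, correct, total

y_true = [0, 0, 1, 1, 2]
y_pred = [0, 1, 1, 1, 2]
cm, correct, total = per_class_counts(y_true, y_pred, 3)
print(correct, total)  # [1 2 1] [2 2 1]
```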
prediction_comparision = pd.DataFrame([correct, total]).T.fillna(0)
prediction_comparision.columns = ['correct', 'total']
prediction_comparision.index = famous_orgs
correctly_predicted = prediction_comparision.correct.sum()
print('With %d correct predictions from a total of %d samples, we obtain a success rate of %.3f%%!'
% (correctly_predicted, y.shape[0], 100 * correctly_predicted / y.shape[0]))
prediction_comparision.sort_values('total').plot.barh(figsize=(6, 8), color=['limegreen', 'steelblue'], width=0.65, alpha=1)
plt.yticks(size=12)
plt.title('Amount of correctly predicted samples per organization', size=14);
# For most of the top 19 organizations, the predictions we obtain are very accurate! We observe, however, that our model has more trouble predicting organizations with few attacks, because of the correspondingly small amount of training data for those organizations.
(prediction_comparision.correct / prediction_comparision.total).sort_values().plot.bar(figsize=(12,5))
plt.title('Ratio of correctly predicted samples per organization', size= 18);
# ### Predict the unknown organizations
# +
X_unknown = features[attacks.organization.apply(lambda x: x == 'Unknown')]
model = OneVsRestClassifier(LinearSVC(random_state=SEED))
model.fit(X, y)
# predict
y_pred = model.predict(X_unknown)
y_pred_orgs = idx_to_org[y_pred]
# -
y_pred_orgs.value_counts().iloc[::-1].plot.barh(figsize=(6, 6), color='royalblue', width=0.7)
plt.title('Number of predictions per organization', size=16)
plt.yticks(size=13)
plt.xticks(size=12);
# [Go back to the top](#TOC)
|
code.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ##### Import Packages
import matplotlib.pyplot as plt
import pandas as pd
import pylab as pl
import numpy as np
# %matplotlib inline
# ##### Read Data
# +
df = pd.read_csv("FuelConsumption.csv")
# take a look at the dataset
df.head()
# -
# ##### Data Exploration
#
# Let's first do a descriptive exploration of our data.
# summarize the data
df.describe()
cdf = df[['ENGINESIZE','CYLINDERS','FUELCONSUMPTION_COMB','CO2EMISSIONS']]
cdf.head(9)
# We'll plot each of these features:
viz = cdf[['CYLINDERS','ENGINESIZE','CO2EMISSIONS','FUELCONSUMPTION_COMB']]
viz.hist()
plt.show()
# Let's plot each of these features against the emissions, to see how linear their relationship is:
plt.scatter(cdf.FUELCONSUMPTION_COMB, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("FUELCONSUMPTION_COMB")
plt.ylabel("Emission")
plt.show()
plt.scatter(cdf.ENGINESIZE, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
# plot CYLINDERS vs the emissions, to see how linear their relationship is:
plt.scatter(cdf.CYLINDERS, cdf.CO2EMISSIONS, color='blue')
plt.xlabel("Cylinders")
plt.ylabel("Emission")
plt.show()
# ##### Creating train and test datasets
#
# Train/test split involves splitting the dataset into training and testing sets that are mutually exclusive. You then train with the training set and test with the testing set. This provides a more accurate evaluation of out-of-sample accuracy, because the testing set is not part of the data used to train the model, which makes it more realistic for real-world problems.
#
# Since we know the true outcome of each data point in the testing set, it is great to test with! And because this data has not been used to train the model, the model has no knowledge of the outcome of these points. So, in essence, it is truly out-of-sample testing.
msk = np.random.rand(len(df)) < 0.8
train = cdf[msk]
test = cdf[~msk]
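# The random mask above yields an approximately 80/20 split, and the two subsets are disjoint and together cover all rows; a quick check on synthetic indices (using a seeded generator so the check is reproducible):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
msk = rng.random(n) < 0.8           # same idea as np.random.rand(len(df)) < 0.8

train_idx = np.flatnonzero(msk)     # rows selected for training
test_idx = np.flatnonzero(~msk)     # the complementary rows

print(len(train_idx) / n)                    # close to 0.8
print(len(train_idx) + len(test_idx) == n)   # True: the split covers everything
```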
# ##### Simple Regression Model
#
# Linear regression fits a linear model with coefficients B = (B1, ..., Bn) that minimizes the residual sum of squares between the observed targets y in the dataset and the targets predicted by the linear approximation of the independent variables x.
#
# ##### Train data distribution
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.xlabel("Engine size")
plt.ylabel("Emission")
plt.show()
# ##### Modeling
# Using sklearn package to model data.
from sklearn import linear_model
regr = linear_model.LinearRegression()
train_x = np.asanyarray(train[['ENGINESIZE']])
train_y = np.asanyarray(train[['CO2EMISSIONS']])
regr.fit(train_x, train_y)
# The coefficients
print('Coefficients: ', regr.coef_)
print('Intercept: ', regr.intercept_)
# __Coefficient__ and __Intercept__ in the simple linear regression, are the parameters of the fit line. Given that it is a simple linear regression, with only 2 parameters, and knowing that the parameters are the intercept and slope of the line, sklearn can estimate them directly from our data. Notice that all of the data must be available to traverse and calculate the parameters.
# ##### Plot outputs
#
# We can plot the fit line over the data:
plt.scatter(train.ENGINESIZE, train.CO2EMISSIONS, color='blue')
plt.plot(train_x, regr.coef_[0][0]*train_x + regr.intercept_[0], '-r')
plt.xlabel("Engine size")
plt.ylabel("Emission")
# ##### Evaluation
#
# We compare the actual and predicted values to measure the accuracy of a regression model. Evaluation metrics play a key role in the development of a model, as they provide insight into areas that require improvement.
#
# There are different model evaluation metrics; let's use MSE here to calculate the accuracy of our model on the test set:
# - Mean Absolute Error (MAE): the mean of the absolute value of the errors. This is the easiest metric to understand, since it is just the average error.
# - Mean Squared Error (MSE): the mean of the squared errors. It is more popular than MAE because the focus is geared more towards large errors: the squared term penalizes larger errors much more heavily than smaller ones.
# - Root Mean Squared Error (RMSE): the square root of the MSE, expressed in the same units as the target.
# - R-squared is not an error metric, but a popular measure of model accuracy. It represents how close the data are to the fitted regression line. The higher the R-squared, the better the model fits the data. The best possible score is 1.0, and it can be negative (because the model can be arbitrarily worse).
#
# +
from sklearn.metrics import r2_score
test_x = np.asanyarray(test[['ENGINESIZE']])
test_y = np.asanyarray(test[['CO2EMISSIONS']])
test_y_ = regr.predict(test_x)
print("Mean absolute error: %.2f" % np.mean(np.absolute(test_y_ - test_y)))
print("Residual sum of squares (MSE): %.2f" % np.mean((test_y_ - test_y) ** 2))
print("R2-score: %.2f" % r2_score(test_y, test_y_))
# -
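# RMSE is listed above but not computed in the cell; it is simply the square root of the MSE, which puts the error back in the units of the target variable. A sketch on hypothetical emission values:

```python
import numpy as np

# hypothetical actual and predicted CO2 emissions
y_true = np.array([200.0, 250.0, 300.0])
y_pred = np.array([210.0, 240.0, 330.0])

mse = np.mean((y_pred - y_true) ** 2)   # mean squared error
rmse = np.sqrt(mse)                     # back in the target's units
print(round(rmse, 2))  # 19.15
```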
|
Simple Linear Regression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
# +
correl_time_local = np.loadtxt('Mean_Correlation_Time_Local_Update.dat')
correl_time_wolff = np.loadtxt('Mean_Correlation_Time_Wolff_Update.dat')
correl_time_SLMC = np.loadtxt('Mean_Correlation_Time_SLMC_Update.dat')
correl_time_RSLMC = np.loadtxt('Mean_Correlation_Time_RSLMC_Update.dat')
size_local = np.loadtxt('Correlation_Time_Size_Local_Update.dat')
size_wolff = np.loadtxt('Correlation_Time_Size_Wolff_Update.dat')
size_SLMC = np.loadtxt('Correlation_Time_Size_SLMC_Update.dat')
size_RSLMC = np.loadtxt('Correlation_Time_Size_RSLMC_Update.dat')
# +
# plot correlation time vs size
def func(x,a,b): #fitting function
return a*x**(b)
# Local Correlation
a1, b1 = optimize.curve_fit(func, size_local, correl_time_local)[0]
print('Fitting coefficient(Local):',a1,b1)
x1 = np.arange(5, 70, 0.1)
y1 = a1*x1**b1
# Wolff Correlation
a2, b2 = optimize.curve_fit(func, size_wolff, correl_time_wolff)[0]
print('Fitting coefficient(Wolff):',a2,b2)
x2 = np.arange(5, 70, 0.1)
y2 = a2*x2**b2
# SLMC Correlation
a3, b3 = optimize.curve_fit(func, size_SLMC, correl_time_SLMC)[0]
print('Fitting coefficient(SLMC):',a3,b3)
x3 = np.arange(5, 70, 0.1)
y3 = a3*x3**b3
# RSLMC Correlation
correl_time_RSLMC_change = []
restriction = [10, 15, 25, 35, 40, 40]
# Correct the scale
for i in range(len(correl_time_RSLMC)):
correl_time_RSLMC_change.append(correl_time_RSLMC[i]/(size_RSLMC[i]**2/((restriction[i]*2)**2/2)))
a4, b4 = optimize.curve_fit(func, size_RSLMC, correl_time_RSLMC_change)[0]
print('Fitting coefficient(RSLMC):',a4,b4)
x4 = np.arange(5, 130, 0.1)
y4 = a4*x4**b4
plt.figure()
plt.scatter(size_local[:], correl_time_local[:], 25, "red")
plt.plot(x1, y1, "red", label = 'Local')
plt.scatter(size_wolff[:], correl_time_wolff[:], 25, "blue")
plt.plot(x2, y2, "blue", label = 'Wolff')
plt.scatter(size_SLMC[:], correl_time_SLMC[:], 25, "green")
plt.plot(x3, y3, "green", label = 'SLMC')
plt.scatter(size_RSLMC[:], correl_time_RSLMC_change[:], 25, "black")
plt.plot(x4, y4, "black", label = 'RSLMC')
plt.legend()
plt.title("Correlation Time vs Size", fontsize=25)
plt.xlabel("$Size$", fontsize=20)
plt.ylabel("Correlation time", fontsize=20)
# \textasciitilde is a text-mode command and is invalid in matplotlib mathtext; \sim is the math tilde
plt.text(40,300,r'$\tau_L \sim L^{2.1}$',fontsize=12,verticalalignment="top",horizontalalignment="right")
plt.text(60,200,r'$\tau_W \sim L^{1.9}$',fontsize=12,verticalalignment="top",horizontalalignment="right")
plt.text(100,250,r'$\tau_R \sim L^{1.4}$',fontsize=12,verticalalignment="top",horizontalalignment="right")
plt.text(90,50,r'$\tau_S \sim L^{1.8}$',fontsize=12,verticalalignment="top",horizontalalignment="right")
plt.tight_layout()
plt.savefig('Correlation_time_vs_Size.png')
plt.show()
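# As a sanity check on the fitting step above, `curve_fit` recovers the parameters of the power law essentially exactly on clean synthetic data (hypothetical true values a=2, b=1.5, not taken from the measured correlation times):

```python
import numpy as np
from scipy import optimize

def func(x, a, b):  # same power-law model as above
    return a * x**b

x = np.linspace(5, 70, 50)
y = 2.0 * x**1.5                 # noiseless synthetic data
(a_fit, b_fit), _ = optimize.curve_fit(func, x, y, p0=(1.0, 1.0))
print(a_fit, b_fit)              # close to 2.0 and 1.5
```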
# +
# L = 25
# plot autocorrelation function obtained from different update algorithm
Local = np.loadtxt('Local')
Wolff = np.loadtxt('Wolff')
SLMC = np.loadtxt('SLMC')
RSLMC = np.loadtxt('RSLMC')
n_fit_pts = 50
xr = np.arange(n_fit_pts, dtype=float)
# fit autocorrelation function
f = lambda x, a, b: a*np.exp(-x/float(b))
a1, b1 = optimize.curve_fit(f, xr, Local[0:n_fit_pts], p0=(1000,1))[0]
print("Local: Autocorrelation time =", b1)
plt.plot(np.abs(Local), '-bo', lw=1, alpha=0.5)
plt.plot(xr, (f(xr, a1, b1)), 'b-', lw=2, label='Local')
#plt.plot([0,300], [0,0], 'b--', lw=2)
plt.legend()
plt.title("Autocorrelation function", fontsize=25)
plt.xlabel("$t$", fontsize=20)
plt.ylabel(r"$\mathcal{C}(t)$", fontsize=20)
plt.xlim(0, n_fit_pts+10)
n_fit_pts = 15
xr = np.arange(n_fit_pts, dtype=float)
a2, b2 = optimize.curve_fit(f, xr, Wolff[0:n_fit_pts], p0=(1000,1))[0]
print("Wolff: Autocorrelation time =", b2)
plt.plot(np.abs(Wolff), '-ro', lw=1, alpha=0.5)
plt.plot(xr, (f(xr, a2, b2)), 'r-', lw=2, label='Wolff')
#plt.plot([0,300], [0,0], 'r--', lw=2)
plt.legend()
plt.title("Autocorrelation function", fontsize=25)
plt.xlabel("$t$", fontsize=20)
plt.ylabel(r"$\mathcal{C}(t)$", fontsize=20)
plt.xlim(0, n_fit_pts+10)
n_fit_pts = 20
xr = np.arange(n_fit_pts, dtype=float)
a3, b3 = optimize.curve_fit(f, xr, SLMC[0:n_fit_pts], p0=(1000,1))[0]
print("SLMC: Autocorrelation time =", b3)
plt.plot(np.abs(SLMC), '-go', lw=1, alpha=0.5)
plt.plot(xr, (f(xr, a3, b3)), 'g-', lw=2, label='SLMC')
#plt.plot([0,300], [0,0], 'g--', lw=2)
plt.legend()
plt.title("Autocorrelation function", fontsize=25)
plt.xlabel("$t$", fontsize=20)
plt.ylabel(r"$\mathcal{C}(t)$", fontsize=20)
plt.xlim(0, n_fit_pts+10)
plt.plot([0,300], [0,0], '--', lw=2)
plt.savefig('Autocorrel_fitting(L=25).png')
# -
|
Testing/Autocorrelation_function/Autocorrelation_function_plot.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# name: python2
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Emekaborisama/logothings/blob/master/Dash.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="_DQNdUBmWef6" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 238} outputId="149bb315-1617-4c5c-f587-2a1d27938d60"
# !wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
# !unzip ngrok-stable-linux-amd64.zip
# + id="Un6LE1oEWnwl" colab_type="code" colab={}
get_ipython().system_raw('./ngrok http 8050 &')
# + id="Zwkazfx7Wr87" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="45038ebb-bbee-4930-b455-095731e78f09"
# ! curl -s http://localhost:4040/api/tunnels | python3 -c \
# "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
# + id="EqVt2GA9WyUF" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1890} outputId="1178ce80-fcad-482e-921e-738fbf4ea0aa"
# !pip install dash==0.31.1 # The core dash backend
# !pip install dash-html-components==0.13.2 # HTML components
# !pip install dash-core-components==0.39.0 # Supercharged components
# !pip install dash-table==3.1.7
# + id="QpqWMhgJZF-1" colab_type="code" colab={}
# -*- coding: utf-8 -*-
# + id="4jkSlKhxW73n" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="849cac5f-d591-4c2d-ae81-9be1ae07e614"
# %%writefile my_app1.py
# -*- coding: utf-8 -*-
import dash
import dash_core_components as dcc
import dash_html_components as html
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.layout = html.Div(children=[
html.H1(children='Hello Dash'),
html.Div(children='''
Dash: A web application framework for Python.
'''),
dcc.Graph(
id='example-graph',
figure={
'data': [
{'x': [1, 2, 3], 'y': [4, 1, 2], 'type': 'bar', 'name': 'SF'},
{'x': [1, 2, 3], 'y': [2, 4, 5], 'type': 'bar', 'name': u'Montréal'},
],
'layout': {
'title': 'Dash Data Visualization'
}
}
)
])
if __name__ == '__main__':
app.run_server(debug=True)
# + id="gCRC43E0XLPZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 201} outputId="b91ed0e3-3821-49e3-f6d0-cc8b93763f5b"
# !python my_app1.py
# + id="OR64oAq5XMcs" colab_type="code" colab={}
|
Dash.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Import Packages
import pandas as pd
import os
import numpy as np
import re
import string
from urllib.request import urlopen
from bs4 import BeautifulSoup
import requests
from selenium import webdriver
import math
from selenium.common.exceptions import NoSuchElementException
import googlemaps
import json
# ### Load Webdriver
EXE_PATH = r"C:\Users\nikhi\Downloads\chromedriver_win32\chromedriver.exe"
browser = webdriver.Chrome(executable_path=EXE_PATH)
# ### Open Delhi Metro Website
base = "https://www.delhimetrorail.com/"
browser.get(base)
browser.implicitly_wait(10)
browser.find_element_by_id("FromStation").click()
html = browser.page_source
soup = BeautifulSoup(html)
# ### Find all the different lines (routes)
line_results = browser.find_elements_by_class_name('popup-result-line')
lines = soup.find_all('div', class_ = 'popup-result-line')
# ### Scrape Data for All Stations on all the lines
stations_prev = ''
dic = {}
count = 0
for i in range(len(line_results)-1):
line_name = lines[i].text
line_results[i].click()
    # wait until the station popup has refreshed with the newly selected line
    while True:
        html = browser.page_source
        soup = BeautifulSoup(html)
        if str(soup.find('div', class_ = 'sub-popup-result').find('div', class_ = 'layout')) != stations_prev:
            break
print(line_name)
stations = soup.find('div', class_ = 'sub-popup-result').find('div', class_ = 'layout').find_all('a', class_ = 'clearfix')
for station in stations:
dic[count] = {"Line": line_name}
dic[count]['n'] = station.find('div', class_ = 'sub-result-left').text
dic[count]['name'] = station.find('div', class_ = 'sub-result-name').text
dic[count]['disabled friendly'] = 1 if "Divyang" in str(station.find('div', class_ = 'sub-result-list')) else 0
dic[count]['parking available'] = 1 if "Parking" in str(station.find('div', class_ = 'sub-result-list')) else 0
dic[count]['elevator available'] = 1 if "Lift" in str(station.find('div', class_ = 'sub-result-list')) else 0
count+=1
stations_prev = str(soup.find('div', class_ = 'sub-popup-result').find('div', class_ = 'layout'))
pd.DataFrame.from_dict(dic).T
# ### Geocode Stations using Google Maps API
# +
g_API = 'api_key'
gmaps_key = googlemaps.Client(key=g_API)
for i in range(len(dic)):
search_term = "Delhi Metro " + dic[i]['Line'][:-2] + " " + dic[i]['name']
geocode_obj = gmaps_key.geocode(search_term)
dic[i]['map_info'] = geocode_obj
# -
for i in range(len(dic)):
if len(dic[i]['map_info']) > 0:
lat = dic[i]['map_info'][0]['geometry']['location']['lat']
lon = dic[i]['map_info'][0]['geometry']['location']['lng']
dic[i]['lat'] = lat
dic[i]['long'] = lon
else:
print(dic[i])
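# The geocode response is a list of result dicts; the coordinate extraction above can be sketched on a hypothetical, trimmed-down response (the field names follow the Google geocoding response structure, and the coordinates here are made up):

```python
# hypothetical minimal geocode result for one station
geocode_obj = [
    {"geometry": {"location": {"lat": 28.6430, "lng": 77.2190}}}
]

# same extraction logic as the loop above, guarded against empty results
if len(geocode_obj) > 0:
    lat = geocode_obj[0]["geometry"]["location"]["lat"]
    lon = geocode_obj[0]["geometry"]["location"]["lng"]
else:
    lat = lon = None

print(lat, lon)
```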
df = pd.DataFrame.from_dict(dic).T
df.drop(columns=['map_info']).to_excel(r"C:\Users\nikhi\Downloads\Delhi_metro.xlsx")
# +
for i in range(len(dic)):
dic[i].pop('map_info')
with open(r"C:\Users\nikhi\Downloads\Delhi_metro.json", "w") as outfile:
json.dump(dic, outfile,indent = 4)
# -
|
Delhi Metro.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Topics covered in this notebook:
# 1. What are Decision Trees?
# 2. What is Information Entropy?
# 3. What is Information Gain?
# 4. How do we choose best split?
# 5. What makes a good tree?
# 6. Decision Trees vs KNN.
# 7. Applications.
# 8. References.
# # 1. Decision Trees:
#
# Simple in concept, complicated to implement.
#
# 1. Pick an attribute, do a simple test.
# 2. Conditioned on choice, pick another attribute, do another test.
# 3. In the leaves, assign a class with majority vote.
# 4. Do the other branches as well.
# 5. Gives axis aligned decision boundaries.
#
# Basically: A bunch of nested if-statements.
#
# Example - Spam Classifier:
#
#     if doc contains 'money':
#         if doc contains 'free':
#             return True
#         else:
#             return False
#     else:
#         return False
#
# 1. One key feature is that we look at one attribute at a time.
# 1. Each condition checks only one column of X.
# 2. Attributes = 'Input Features'.
# 3. Ex:
# 1. If (height < 5): Go to left node Else: Go to right node.
#
# 2. What does it tell us about the geometry of the problem?
# 1. Splits are always orthogonal to the axes.
# 2. Whereas discriminating line can be at an angle - Ex. Linear Classifier.
# 3. Can still get a highly non-linear boundary - if we split multiple times, splits at each level.
#
# 3. Recursiveness: because it's a tree!
# 1. Each node is a TreeNode object.
# 2. But its children are also TreeNode objects.
# 3. Leaf nodes have no children.
# 4. Leaf nodes are where we make predictions.
# 5. It then bubbles back up to the root node.
#
# What makes this ML?
#
# 1. It's how we choose the conditions.
# 1. Based on Information theory.
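# The nested-if view above can be written directly as Python; a toy sketch of the spam example (a hand-written stand-in, not a learned tree):

```python
def is_spam(doc: str) -> bool:
    """Toy 'decision tree' for the spam example: two nested attribute tests."""
    if "money" in doc:        # root node: test one attribute
        if "free" in doc:     # child node: test a second attribute
            return True       # leaf: predict spam
        return False          # leaf: predict not spam
    return False              # leaf: predict not spam

print(is_spam("get free money now"))  # True
print(is_spam("money transfer"))      # False
```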
# ## 2. Information Entropy:
#
# High level: We want to choose a split that maximizes the reduction in uncertainty.<br>
#
# Example: Going from 50% certain to 100% certain is better than going from 50% to 75%
#
# 1. Related to variance.
# 1. Wide variance: More uncertainty.
# 2. Slim variance: Less uncertainty. <img src="Images/Variance.png" alt="Drawing" style="width: 500px;"/>
#
# 2. Entropy:
# 1. It is always positive. Since p is between 0-1 and negative log is also positive.
# 2. Entropy - We always mean log base 2 implicitly.
#
# $\qquad H(X) = -\displaystyle\sum_{x} p(x)\log p(x).$
#
# ### Binary Random variable:
# Example:<br>
# P(X = 1) = p<br>
# P(X = 0) = 1 - p<br>
#
# $$Entropy\;H(p) = -p\log(p) - (1-p)\log(1-p)$$<br>
# What value of p maximizes H? Solve dH/dp = 0 for p.<br>
# Answer: p = 0.5
#
# <img src="Images/EntropyVsP.png" alt="Drawing" style="width: 250px;"/>
#
# 1. If p = 0.5, there is no possible way to make a good prediction; we'll always have a 50% chance of being wrong.
# 2. If p = 0.8, then we should always predict 1 because that gives us the best chance of being correct.
# 3. Entropy is a measure of how much information we get from finding out the value of the random variable.
# 1. If we flip a coin with p = 0.8 and we get heads (1), we don't gain much information; we were already 80% certain.
# 2. If we flip a coin with p = 0.5 and we get heads, we gain the maximum amount of information we could have.
# 3. Prior to knowing this, we were maximally clueless about the value we would get.
# 4. In general, uniform distribution yields maximum entropy.
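# The binary entropy curve described above can be checked numerically: H(p) peaks at p = 0.5, where it equals 1 bit (a minimal sketch):

```python
import numpy as np

def binary_entropy(p):
    """H(p) in bits; defined as 0 at p = 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

ps = np.linspace(0.01, 0.99, 99)        # grid of probabilities
hs = [binary_entropy(p) for p in ps]
print(binary_entropy(0.5))              # 1.0 bit at the peak
print(ps[int(np.argmax(hs))])           # the maximizer, 0.5
```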
# ## 3. Information Gain:
#
# What is Information Gain?
# 1. Suppose we have labels 0,0,1,1.
# 2. H(Y) = 1.
# 3. Suppose we have an attribute that splits the data perfectly.
# 1. Ex: if X > 0, Y = 0 ,if X < 0, Y = 1.
# 2. Then we have the left nodes: Y_left = {0,0}.
# 3. Right nodes: Y_right = {1,1}.
# 4. The entropy for each subset of data is 0.
# 5. Information Gain, IG(Y|Split on X) = H(Y) - 0.5*H(Y_left) - 0.5*H(Y_right) = 1 - 0 - 0 = 1.
# 1. 0.5 indicates half the data went to the left and half to the right.
# 2. Ensures IG>=0.
#
# Another Example:
# 
# 1. H(Y) = 1.
# 2. Split X1:
# 1. X1 = 1: H(Y|X1=1) = - (3/4)log2(3/4) - (1/4)log2(1/4) = 0.811.
# 2. X1 = 2: H(Y|X1=2) = - (2/6)log2(2/6) - (4/6)log2(4/6) = 0.918.
# 3. IG = 1 - (4/10) * 0.811 - (6/10) * 0.918 = 0.1248.
# 3. Split X2:
# 1. X2 = 5: H(Y|X2=5) = 1.
# 2. X2 = 10: H(Y|X2=10) = 1.
# 3. IG = 1 - (4/10) * 1 - (6/10) * 1 = 0.
#
# Since splitting across X1 gives maximum IG, we should split on X1 first.
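# The worked example above can be checked numerically. This sketch takes class counts for the parent and for each child of a split; the counts below are read off the example (a 5/5 parent, X1 giving (3,1) and (2,4) children, X2 giving 50/50 children):

```python
import numpy as np

def entropy_counts(counts):
    """Entropy (bits) of a label distribution given raw class counts."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def info_gain(parent_counts, child_counts):
    """IG = H(parent) - sum_k (n_k / n) * H(child_k)."""
    n = sum(sum(c) for c in child_counts)
    weighted = sum(sum(c) / n * entropy_counts(c) for c in child_counts)
    return entropy_counts(parent_counts) - weighted

print(info_gain([5, 5], [[3, 1], [2, 4]]))  # ~0.125, the IG of splitting on X1
print(info_gain([5, 5], [[2, 2], [3, 3]]))  # 0.0, the IG of splitting on X2
```

# The X1 value agrees with the hand computation above (0.1248, up to rounding of the child entropies).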
# ## 4. Choosing the best split:
# 1. In the example above X1 & X2 had only 2 values.
# 2. But in datasets like MNIST, the data is continuous. How do we pick the best split?
# 1. Continuous data -> Infinite places to split.
# 2. Need some way to limit the number of candidate splits.
#
# Rules:<br>
#
# X : 0 1 2 3<br>
# Y : 1 0 1 0<br>
#
# We only need to consider the midpoint between any two sorted X's.<br>
# Split between 1,2 at 1.5 -> Entropy = 1<br>
# Split between 1,2 at 1.75 -> Entropy still = 1<br>
#
# Only need to consider boundaries between differing labels:<br>
#
# Y: 1,1,0,0,0,0,0,0,1,1,1,1,1,1,0,0
#
# Split at the middle -> total entropy = 2 * (-(6/8)log2(6/8) - (2/8)log2(2/8)) = 1.62.<br>
# Move one left -> total entropy = 1.78.<br>
# Move one more left -> total entropy = 1.89.<br>
#
# Further from boundary -> higher entropy -> lower information gain.
#
#
# ### Best Split Algorithm:
# 1. Sort X's for current column in order, sort Y in the corresponding way.
# 2. Find all the boundary points where Y changes from one value to another.
# 3. Calculate information gain when splitting at each boundary.
# 4. Keep the split that yields max. information gain.
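# The four steps above can be sketched for a single feature column, assuming binary integer labels:

```python
import numpy as np

def best_split(x, y):
    """Best midpoint split of one feature column (binary integer labels).

    Follows the steps above: sort by x, consider only boundaries where y
    changes, and keep the midpoint with maximum information gain.
    """
    order = np.argsort(x)
    x, y = np.asarray(x)[order], np.asarray(y)[order]

    def H(labels):
        if len(labels) == 0:
            return 0.0
        p = np.bincount(labels) / len(labels)
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    best_t, best_ig = None, -1.0
    for i in range(1, len(x)):
        if y[i] == y[i - 1]:  # skip: no label boundary here
            continue
        t = (x[i - 1] + x[i]) / 2  # midpoint between two sorted x's
        ig = H(y) - i / len(y) * H(y[:i]) - (len(y) - i) / len(y) * H(y[i:])
        if ig > best_ig:
            best_t, best_ig = t, ig
    return best_t, best_ig

# Perfectly separable toy data: all the gain sits at the single boundary
print(best_split([0, 1, 2, 3, 4, 5], [0, 0, 0, 1, 1, 1]))  # threshold 2.5, IG 1.0
```

# A full tree builder would run this over every column and recurse on the winner.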
# ## 5. What makes a good tree?
# 1. Not too small: need to handle important but possibly subtle distinctions in data.
# 2. Not too big:
# 1. Computational efficiency - avoid redundant attributes.
# 2. Avoid over-fitting training examples.
# 3. **Occam's Razor**: Find the simplest hypothesis (smallest tree) that fits the observations.
# 4. Inductive bias: small trees with informative nodes near the root.
# 5. In practice, one often regularizes the construction process to try to get small but highly informative trees.
#
# ### Problems:
# 1. Exponentially less data at lower nodes.
# 2. Too big a tree can overfit the data.
# 3. Greedy algorithms don't necessarily yield the global optimum.
# 4. Not well suited for continuous attributes.
# 5. Bad on parity (XOR) & majority functions.
# ## 6. Decision Trees vs. KNN:
# 1. Decision boundaries are axis-aligned and tree-structured, whereas in KNN they are piecewise linear.
# 2. Test complexity depends on the attributes & splits, whereas KNN is non-parametric.
# ## 7. Applications:
# 1. Used in XBOX to classify body parts. Depth image -> body parts -> 3D joint proposals.
# 2. Flight simulators: 20 state variables, 90K examples based on expert pilot's actions; auto-pilot tree.
# 3. Yahoo's ranking challenge.
# ## 8. References:
# 1. An Introduction to Statistical Learning Textbook by <NAME>, <NAME>, <NAME> and <NAME>.
# 2. University of Michigan EECS 445 - Machine Learning Course (https://github.com/eecs445-f16/umich-eecs445-f16).<br>
# 3. University of Toronto CSC 411 - Intro. to Machine Learning (http://www.cs.toronto.edu/~urtasun/courses/CSC411_Fall16/CSC411_Fall16.html).<br>
# 4. Stanford CS109 - Intro. to probability for computer scientists (https://web.stanford.edu/class/archive/cs/cs109/cs109.1166/). <br>
# 5. Few online courses on Udemy, Coursera etc.
|
5.Decision Trees/0.Theory/Decision Trees.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# Imports
import dask.dataframe as dd
from dask.distributed import Client
# +
# Create a distributed client for parallel computation
client = Client(n_workers=16) # Since we are using AWS SageMaker with 16 cores
youtube_df = dd.read_csv('/home/ec2-user/SageMaker/Youtube*.csv')
# -
# Check the number of columns and rows in the dataframe
print(f"Number of columns in dataframe is {len(youtube_df.columns)}")
print(f"Number of rows in dataframe is {len(youtube_df)}")
youtube_df.head()
(spam, notspam) = youtube_df.CLASS.value_counts().compute()
print(f"spam count is {spam} and legitimate comments count is {notspam}")
# Separating out data with respect to spam and nonspam
spamdf = youtube_df[youtube_df.CLASS == 1].compute()
nonspamdf = youtube_df[youtube_df.CLASS == 0].compute()
# +
# Computing the count of 'check' in spam and nonspam data
checkwordcount_spam = 0
for comment in spamdf.CONTENT:
if "check" in comment.lower():
checkwordcount_spam += 1
print(f"Number of comments containing 'check' in spams is {checkwordcount_spam}")
checkwordcount_nonspam = 0
for comment in nonspamdf.CONTENT:
if "check" in comment.lower():
checkwordcount_nonspam += 1
print(f"Number of comments containing 'check' in legitimate comments is {checkwordcount_nonspam}")
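# The two counting loops above can also be written with pandas' vectorized string methods. A small sketch on a made-up frame — the CONTENT/CLASS column names follow the dataset above, but the sample comments are invented:

```python
import pandas as pd

# Hypothetical stand-in for the YouTube comments frame
df = pd.DataFrame({
    "CONTENT": ["Check out my channel!", "great video", "please CHECK my mix", "nice"],
    "CLASS":   [1, 0, 1, 0],
})

has_check = df["CONTENT"].str.lower().str.contains("check")
counts = df[has_check].groupby("CLASS").size()
print(counts)  # here: CLASS 1 -> 2 spam comments contain 'check', CLASS 0 -> none
```

# On a Dask dataframe the same `.str.lower().str.contains(...)` chain works lazily, followed by `.compute()`.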
# +
# To score a 3, do extra work, such as creating the Dask Distributed Client, or creating a visualization with this dataset.
# I have added below code to complete this stretch goal already.
# client = Client(n_workers=16) # Since we are using AWS SageMaker with 16 cores
# -
# ### Big data options
#
# I would like to first discuss the platform.
#
# **AWS Sagemaker:**
#
# AWS Sagemaker is a powerful platform for working with big datasets that need very high computational resources (CPU and RAM specifically). Our existing PCs/laptops have limitations in computational resources. For example, my laptop has an 8-core CPU and 32 GB of RAM. If the data to be processed is huge and takes a lot of time to process, say 10 GB of data, then the capacity of my existing laptop is not sufficient. (As per guidelines, usually about 10x the data's size in capacity is needed.)
#
# In this case, I can make use of AWS Sagemaker which gives me an option to **Scale-Up** my capacity and work with huge data sets. Advantage of this approach is that there is no need for actually worrying about partitioning the data and combining it together later since we are running on a single system.
#
# **AWS EMR/Databricks**
#
# As an alternative to **Scale-Up**, which usually needs a bigger and more powerful computational resource on a single node, we can use the approach of **Scale-Out** using multiple nodes with comparable computational resources. The major disadvantage of the **Scale-Up** approach is that there are physical and economic limits on how much we can scale up computationally. In simpler words, there are limits on how big supercomputers can be, and they are also very expensive.
#
# So with the **Scale-Out** approach, we can have nodes with lower computational resources but use them together in a managed way to distribute the work among them logically and process the data. The technologies required for such distributed computing (like Hadoop, Spark, ...) exist and can be used. So here, instead of going for the expensive AWS Sagemaker option, we can use nodes of lower capacity via AWS EMR/Databricks. A key disadvantage of this approach is that fault tolerance/redundancy needs to be considered. Also, distributing the computation and managing the tasks adds overhead.
#
#
# Next, let us discuss the libraries.
#
# **Numba**
#
# Numba is a just-in-time compiler for Python that works best on code using NumPy arrays, functions, and loops. Numba optimizes the execution of instructions through just-in-time compilation and can work with existing Python code.
#
# However, Numba is best suited to NumPy-based code and has limitations with other libraries like Pandas.
# Ideal as an initial step of **Scale-Up** optimization.
#
# **Dask**
#
# Dask is a flexible library for parallel computing in Python.
#
# Dask is composed of two parts:
#
# 1. Dynamic task scheduling optimized for computation and for interactive computational workloads.
# 2. "Big Data" collections like parallel arrays, dataframes, and lists that extend common interfaces like NumPy, Pandas, or Python iterators to larger-than-memory or distributed environments. These parallel collections run on top of dynamic task schedulers.
#
# Ideal for **Scale-Up** optimization and also for **Scale-Out** approaches where we can distribute the load across multiple nodes. However, for better **Scale-Out** optimization, MapReduce/Spark would be the better alternative.
#
# The interfaces are very similar to Pandas and scikit-learn and need minimal code changes.
#
# **MapReduce/Spark**
#
# As mentioned above, both MapReduce and Spark were developed to support **Scale-Out** computation and are best suited for big data processing, which usually spans multiple nodes. The underlying computational overhead, like distribution of data, fault tolerance, redundancy, and consolidation, is handled by these libraries.
#
# To help development, many of the common constructs from existing technologies are supported by these libraries. For example, Spark supports development in Scala, Python, and Java. It also supports programming constructs similar to DataFrames and SQL.
#
# Lastly let us consider the languages.
#
# **Python**
#
# Python is a multipurpose programming language which is also extensively used in data science. When considering **Scale-Up or Scale-Out** approaches, Python has good libraries developed for both cases.
#
# Numba and Dask can be used to optimize Python code on single nodes with just-in-time compilation and parallel processing.
#
# Similarly, technologies like Spark provide an interface for developing code in Python and running on top of Spark (PySpark), though it is not as optimized as Scala on Spark.
#
# **SQL**
#
# SQL is similar to Python here. Spark supports interacting with data using SQL queries, which is a very convenient way. In fact, native DataFrame and SQL-based approaches are evaluated similarly by the Spark platform.
#
# **Scala**
#
# Scala is the first-class citizen in the world of Spark; in fact, Spark is developed in Scala. So many of the interfaces in Spark are optimized for the Scala language. Scala supports a functional programming approach, which is very useful for the **Scale-Out** approach.
#
# **Java**
#
# Java support on Spark is similar to that for Python.
#
# **My Preference**
#
# I would initially use Python (NumPy/Pandas/scikit-learn) on a single-node system for small/medium datasets. If the computational resources needed increase for a large dataset, I would optimize the Python code using Numba or Dask. Finally, in case Scale-Out is needed, I would develop the code using Scala/PySpark on Spark.
|
DS-Unit-3-Sprint-3-Big-Data-Sprint-Challenge.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import os
import numpy as np
import signatureanalyzer as sa
# ---
# # Extracting Mutational Signatures
#
# Extracts mutational signatures derived from WES samples in [Bustoros et al (2020)](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7367550/) and compares them to SMM subtypes.
#
# For details on how to extract mutational signatures, please see https://github.com/getzlab/getzlab-SignatureAnalyzer.
#
# **Author**: [<NAME>](<EMAIL>)
DATA_DIR = "../data/raw"
NMF_FILE = os.path.join(DATA_DIR, "SMM_signatureanalyzer.h5")
samples_df = pd.read_csv(os.path.join(DATA_DIR, "SMM_dbgap_paths.tsv"), sep='\t')
clust_df = pd.read_csv(os.path.join("../Fig1/supplement", "table4_sample_cluster_id.tsv"), sep='\t', index_col=0)
# +
Hraw = pd.read_hdf(NMF_FILE, "Hraw")
Hraw = Hraw.join(samples_df.set_index("fc_tumor")['id']).dropna(subset=['id']).set_index("id")
Wraw = pd.read_hdf(NMF_FILE, "Wraw").iloc[:,:-1]
# Renormalize
W_weight = np.sum(Wraw, axis=0)
W = Wraw / W_weight
H = W_weight[:, np.newaxis]*Hraw.T
H = H.T
H = H.join(clust_df['consensus_nmf'])
# Rename
hd = {
"SBS12":"SBS5",
"SBS6":"SBS1",
"SBS13":"SBS13",
"SBS2":"SBS85",
"SBS42":"SBS2",
"SBS41":"SBS84"
}
# Mapping Appropriate Signatures
H.columns = [x.split("-")[-1] for x in H.columns]
H = H.rename(columns=hd)
hd = {
"SBS5":"Aging-5",
"SBS1":"Aging-1",
"SBS13":"APOBEC activity-13",
"SBS85":"AID-85",
"SBS2":"APOBEC activity-2",
"SBS84":"AID-84"
}
H = H.rename(columns=hd)
H['APOBEC activity-2_13'] = H['APOBEC activity-13'] + H['APOBEC activity-2']
H['Aging-1_5'] = H['Aging-1'] + H['Aging-5']
H['AID-84_85'] = H['AID-84'] + H['AID-85']
H.to_csv("supplement/table8_mut_sigs.tsv", sep='\t')
# -
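# The renormalization step above rescales each signature (column of W) to sum to one and moves the scale into the activities H. A small NumPy sketch with random stand-in matrices (shapes are illustrative; 96 mimics the SBS trinucleotide contexts) showing the product W·H is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
W_raw = rng.random((96, 4))   # contexts x signatures
H_raw = rng.random((4, 20))   # signatures x samples

w = W_raw.sum(axis=0)         # per-signature weight
W = W_raw / w                 # each column of W now sums to 1
H = w[:, np.newaxis] * H_raw  # weight moved into the activities

assert np.allclose(W @ H, W_raw @ H_raw)  # reconstruction is unchanged
print(W.sum(axis=0))  # [1. 1. 1. 1.]
```

# This is why the cell above can renormalize freely: only the W/H scaling convention changes, not the fitted model.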
H = pd.read_hdf(NMF_FILE, "H")
H = H.join(samples_df.set_index("fc_tumor")['id']).dropna(subset=['id']).set_index("id")
W = pd.read_hdf(NMF_FILE, "W")
_ = sa.pl.signature_barplot(W, np.sum(H.iloc[:,:6]))
|
Fig1/3a_mutational_signatures.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/joselvira/BiomecanicaPython/blob/master/Notebooks/Relacion_Posicion_Velocidad_Aceleracion.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="MMCrZZVvBMTh"
# # Relationship between position, velocity and acceleration
#
# <NAME> ([link to more resources on GitHub](https://github.com/joselvira/BiomecanicaPython))
#
# Last modified 03/08/2021
#
# To make use of the interactive parts, press the *Open in Colab* button.
# + [markdown] id="zNqtCv3qUv6d"
# The movement of any object can be described through kinematic variables, among them **position, velocity and acceleration**.
# Each of these three variables provides a specific kind of information about the movement, and at the same time they are intimately related to one another.
#
# The relationship between position, velocity and acceleration is established by how they behave over time. In fact, for a moving object, if we know one of the three we can compute the other two through the mathematical operations of differentiation and integration, as shown in the figure.
# + colab={"base_uri": "https://localhost:8080/", "height": 421} id="lpKQjXi_Gd1W" outputId="d7b80262-5423-440f-b063-7d08dd162207"
from IPython.display import Image
Image(url="https://github.com/joselvira/BiomecanicaPython/raw/master/Imagenes/Relacion-Pos-Vel-Acel.png", height=400)
# + [markdown] id="2Q5pSAaxIsjG"
# To climb a step up the ladder you differentiate, while to go down, you integrate. For example, if we have position data for a football player moving around the pitch, differentiating those data gives the velocity. If we differentiate the velocity data we just obtained, we get the acceleration. And if we integrated them, we would recover first the velocity, and then the position we started with.
#
# This is because **velocity measures how position changes over time**, i.e. *the slope* in a position/time graph, and that is precisely what the derivative computes. The same relationship that holds between velocity and position holds between acceleration and velocity, because **acceleration measures how velocity changes over time**.
#
# Conversely, the integral is the inverse mathematical operation of the derivative (it represents the area under a curve). For example, the integral of a set of velocity data changing over time graphically represents the area under the velocity/time graph, and that is exactly the position.
#
# Let's see it with an example. Below is one way to model the velocity/time curve of a 100 m race.
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="PdCgT3RULiNt" outputId="cc137307-3e9a-46a0-a016-10e0dfb480cc"
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
# Curve similar to the velocity in a 100 m race
dt = 0.01
t = np.arange(0, 10, dt)  # array of time values over 10 s, at 0.01 s intervals
p1 = 11   # maximum velocity
p2 = 0.9  # +- acceleration
# Model the velocity
v = -p1*np.e**(-p2*t) + p1  # p1 is added so the curve rises to the maximum-velocity offset
# Create the plot
plt.plot(t, v)
plt.xlabel('time (s)')
plt.ylabel('velocity (m/s)')
plt.show()
# + [markdown] id="bk9xs1DWObBq"
# To obtain the acceleration of the race, we simply differentiate the velocity variable.
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="5JS0olXwNn5K" outputId="7045c3bc-b7b9-4b18-f396-cddb187c5485"
a = np.gradient(v)/dt
# Create the plot
plt.plot(t, a)
plt.xlabel('time (s)')
plt.ylabel('acceleration (m/$s^2$)')
plt.show()
# + [markdown] id="hrTXNioyOjLy"
# And to obtain the position, we simply integrate the velocity.
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="F55VgreNOoZi" outputId="2460245b-1fc8-4663-ae69-5ec316c831a2"
import scipy.integrate
p = scipy.integrate.cumtrapz(v, t, initial=0)
# Create the plot
plt.plot(t, p)
plt.xlabel('time (s)')
plt.ylabel('position (m)')
plt.show()
# + [markdown] id="K1SLJM49QxJ9"
# And what happens if we integrate the velocity and then differentiate the result?
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="puFARmOhQ_Uy" outputId="c9a81b54-32bd-4c43-8b63-f248376f7771"
# Integral of the velocity
integralV = scipy.integrate.cumtrapz(v, t, initial=0)
# Differentiate the result of the integral
derivada_integralV = np.gradient(integralV)/dt
# Plot the original velocity and the integrated-then-differentiated velocity
plt.plot(t, v, lw=4, color='blue', label='original velocity')
plt.plot(t, derivada_integralV, color='red', label='integrated and differentiated velocity')
plt.xlabel('time (s)')
plt.legend()
plt.show()
# + [markdown] id="bAW_iGjdSR-r"
# THE SAME RESULT! This is because integrating and differentiating are opposite operations, just like adding and subtracting, or multiplying and dividing. If you perform an operation and then its opposite, you get back the original result.
# + [markdown] id="wUzoPoY-Spar"
# # Random graph generator
#
# Below you can generate randomly created movements, for which the position, velocity and acceleration graphs are drawn so that you can observe the relationships between the three variables.
#
# The critical things to look at are what happens to the other variables when one of them:
#
# * Crosses zero.
# * Increases.
# * Decreases.
# * Is at a local maximum.
# * Is at a local minimum.
# + [markdown] id="CJrCLh4WC4U9"
# To create a new position graph, press the "Play" button in the upper-left part of the next cell.
#
# From then on, each time you run the cell a random position curve is created. Try to analyse that curve and identify what the sign of the velocity and the acceleration would be in each part of the movement.
#
# Once you have it clear and want to check it, press the "Play" button of cell 2). The Position, Velocity and Acceleration graphs of the same movement will be created simultaneously.
# The green vertical lines indicate where the velocity crosses zero, which coincides with peaks or valleys of the position.
#
# The blue vertical lines indicate where the acceleration crosses zero, which coincides with peaks or valleys of the velocity.
#
# You can save the graphs by right-clicking on the graph and selecting "Save image as...".
# + id="Aw5oux_-4qQF" colab={"base_uri": "https://localhost:8080/", "height": 297} cellView="form" outputId="b6da22c5-42bc-4f62-d683-684c3f3132bb"
#@title 1) Press the "Play" button just to the left to create a random position graph.
import sys
# The first time this runs, load the libraries and install detecta
if not 'detecta' in sys.modules:
# !pip install detecta
from detecta import detect_onset
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate
#import os
#import time
#sys.path.insert(1, r'J:\Programacion\Python\Mios\Functions') # add to pythonpath
#from tnorm import tnorm #para normalizar series de datos
#from detect_onset import detect_onset
# =============================================================================
# Define flags
# =============================================================================
bGraficasPruebas = False
bGraficaCompleta = True  # create the figure with all the clean graphs
bDatosInicialesAMano = False  # if False, use random initial points
bCalculoInterpolando = False  # interpolating requires equal spacing on the X axis; if False, fit a polynomial of the indicated degree
# =============================================================================
numGraficas = 1
numDatos = 1000
nExtrapolacion = 10  # X points before and after the visible window
rangoY = 100  # range of the variable on the Y axis
gradoPolin = 5
rangoDatosInicial = [7,12]
colorVar = [[1,0,0], [0,1,0], [0,0,1]]  # in order P, V, A
# =============================================================================
# Start processing all the graphs
# =============================================================================
for nGraf in range(numGraficas):
if bDatosInicialesAMano:
        # X and Y data entered by hand
data = np.array([
[-nExtrapolacion, 85.91247806],
[185, 150],
[207, 189.73686096],
[304, 124.91312292],
[367, 42.68889048],
[468, 74.26467954],
[numDatos+nExtrapolacion, 74.26467954],
])
x = data[:,0]
y = data[:,1]
    # Random key-point data otherwise
else:
#np.random.seed(1234)
n = np.random.randint(rangoDatosInicial[0], rangoDatosInicial[1])
#x = np.arange(0,100, 10)
x = np.linspace(-nExtrapolacion,numDatos+nExtrapolacion,n)
y = np.random.rand(len(x))*rangoY-rangoY/2
    # Time variable
t = np.linspace(min(x), max(x), numDatos+2*nExtrapolacion)
dt = t[1]-t[0]
#################################
    # Compute displacement
if bCalculoInterpolando:
        #P, tn, indie = tnorm(y, k=4, step=-(numDatos+2*nExtrapolacion), smooth=0, show=bGraficasPruebas)
        interp = interpolate.interp1d(x, y, kind='cubic')  # other interpolation options: 'nearest', 'zero', 'slinear', 'quadratic'
        P = interp(t)
# spline = interpolate.splrep(x, y)
# D = interpolate.splev(t, spline, der=0)
else:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
polynomial_features = PolynomialFeatures(degree=gradoPolin, include_bias=False)
linear_regression = LinearRegression()
pipeline = Pipeline([("polynomial_features", polynomial_features), ("linear_regression", linear_regression)])
pipeline.fit(x[:, np.newaxis], y)
P = pipeline.predict(t[:, np.newaxis])
        P_predict = pipeline.predict(x[:, np.newaxis])  # evaluate the fit at the X positions
if bGraficasPruebas:
plt.plot(t,P)
#plt.plot(x,P_predict, 'ro')
plt.plot(x, y, 'bo')
plt.show()
#################################
    # Compute velocity
    V = np.gradient(P) / dt
    # Compute acceleration
    A = np.gradient(V) / dt
######################################
    # Find zero crossings of the velocity
    indV = detect_onset(V, 0.0, n_above=2, n_below=0, show=False)
    indV = indV.flatten()-nExtrapolacion  # subtract the margin added so the curve starts already in motion
    # Find zero crossings of the acceleration
    indA = detect_onset(A, 0.0, n_above=2, n_below=0, show=False)
    indA = indA.flatten()-nExtrapolacion
# =============================================================================
    # # %% Test figures
# =============================================================================
    if bGraficasPruebas:
        fig, ax = plt.subplots(figsize=(6, 4))
        plt.plot(x, y, 'bo')
        plt.plot(t, P, c=colorVar[0], lw=2, label='P')
        ax2 = ax.twinx()  # create the second axis
        ax2.plot(t, V, c=colorVar[1], lw=2, label='V')
        ax2.plot(t, A*10, c=colorVar[2], lw=2, label='A')
        plt.xlabel("time", fontsize=15)
        plt.xlim((-nExtrapolacion, numDatos+nExtrapolacion))
        plt.legend(loc='best')
        plt.show()
# =============================================================================
    # # %% clean figure
# =============================================================================
if bGraficaCompleta:
        # Figure with the position graph
        fig, ax = plt.subplots(figsize=(8, 4), sharex=True)#, dpi=200)
        #ax.plot(x, y, 'ro', lw=2)
        ax.plot(t, P, c=colorVar[0], lw=2.5)
        ax.axhline(y=0.0, color='k', lw=1, zorder=1)
        ax.set_ylabel('Position', fontsize=14)
        ax.set_xlabel('Time', fontsize=14)
        plt.xlim((0, numDatos))
        # replace the X-axis tick labels so they run from 0 to 10
        plt.xticks(np.linspace(0, 1000, 10), np.round(np.linspace(0, 10, 10),0))
        plt.tight_layout()
        plt.show()
# + id="FeY9rm5R8Twl" colab={"base_uri": "https://localhost:8080/", "height": 585} cellView="form" outputId="2839b7d6-3910-4093-dd60-e1e49f6c5f64"
#@title 2) Press the "Play" button just to the left to see the velocity and acceleration curves associated with the same position graph.
# =============================================================================
# # %% clean figure
# =============================================================================
# Figure with the three graphs together
fig, ax = plt.subplots(3,1,figsize=(8, 8), sharex=True)#, dpi=200)
#ax[0].plot(x, y, 'ro', lw=2)
ax[0].plot(t, P, c=colorVar[0], lw=2.5)
ax[0].axhline(y=0.0, color='k', lw=1, zorder=1)
ax[0].set_ylabel('Position', fontsize=14)
ax[1].plot(t, V, c=colorVar[1], lw=2.5)
ax[1].axhline(y=0.0, color='k', lw=1, zorder=1)
ax[1].set_ylabel('Velocity', fontsize=14)
ax[2].plot(t, A, c=colorVar[2], lw=2.5)
ax[2].axhline(y=0.0, color='k', lw=1, zorder=1)
ax[2].set_ylabel('Acceleration', fontsize=14)
ax[2].set_xlabel('Time', fontsize=14)
# Draw division lines at the velocity's zero crossings
for i in indV[(indV>0) & (indV<numDatos-nExtrapolacion)]: # draw only the ones inside the graph
    ax[2].axvline(x=i, ymin=0, ymax=3.115, c=colorVar[1], ls='--', linewidth=1.5, alpha=0.6, dash_capstyle='round', dashes=(5, 6), zorder=0, clip_on=False)
for i in indA[(indA>0) & (indA<numDatos-nExtrapolacion)]:
    ax[2].axvline(x=i, ymin=0, ymax=3.115, c=colorVar[2], ls='--', linewidth=1.5, alpha=0.6, dash_capstyle='round', dashes=(5, 6), zorder=0, clip_on=False)
plt.xlim((0, numDatos))
# replace the X-axis tick labels so they run from 0 to 10
plt.xticks(np.linspace(0, 1000, 10), np.round(np.linspace(0, 10, 10),0))
plt.tight_layout()
plt.show()
|
Notebooks/Relacion_Posicion_Velocidad_Aceleracion.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import toml
import os
import pandas as pd
import cv2
import torch
dict_toml = toml.load(open('../config.toml'))
image_path = dict_toml["path"]["dataset"]["image"]
# +
#image_df = pd.read_table("../../list_attr_celeba.txt", header=0,sep=" ", index_col=0)
# +
#image_df["Male"]["000001.jpg"]
# +
f = open('../not_detected.txt', 'r')
datalist = f.read().splitlines()
f.close()
# +
#data = [not i in datalist for i in image_df.index]
# +
#image_df[data]["Male"]
# +
#os.listdir("../../voice/train/F/")
# -
"""
for i in os.listdir("../../voice/train/F/VCC2SF1"):
a = pd.read_pickle("../../voice/train/F/VCC2SF1/"+i)
print(a.shape)
"""
# # dataloader
# ## transform
# ### voice
class VoiceTrans(object):
def __init__(self, maxi, mini):
self.maxi = maxi
self.mini = mini
    def norm_voice(self, array):
        # return a new array: in-place ops here would re-normalize the
        # dataset's cached arrays on every __getitem__ call
        return (array - self.mini) / self.maxi
def cut(self, voice):
return voice[:, :voice.shape[1]-voice.shape[1]%4]
def __call__(self, sample):
return self.cut(self.norm_voice(sample))
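# A quick sanity check of the transform on a toy array (shapes and values are invented to mimic a spectrogram; the class is restated compactly here, with a non-mutating normalization so cached arrays are left untouched):

```python
import numpy as np

class VoiceTrans:
    """Compact restatement of the transform above (non-mutating)."""
    def __init__(self, maxi, mini):
        self.maxi, self.mini = maxi, mini
    def norm_voice(self, array):
        return (array - self.mini) / self.maxi
    def cut(self, voice):
        # trim trailing frames so the width is divisible by 4
        return voice[:, :voice.shape[1] - voice.shape[1] % 4]
    def __call__(self, sample):
        return self.cut(self.norm_voice(sample))

# toy 2-D "spectrogram": 3 bins x 10 frames
sample = np.arange(30, dtype=float).reshape(3, 10) - 4.0
out = VoiceTrans(maxi=sample.max(), mini=sample.min())(sample)
print(out.shape)  # (3, 8): width trimmed to a multiple of 4
print(out.min())  # 0.0 after shifting by mini
```

# Note the normalization divides by maxi rather than (maxi - mini), matching the cell above, so the output range is not exactly [0, 1].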
voice = pd.read_pickle("../../voice/train/VCC2SF1/10001.pkl")
# +
#t = VoiceTrans()
#t(voice).shape
# -
# ### image
class ImageTrans(object):
def __init__(self):
pass
def norm_image(self, array):
return array/255
def __call__(self, sample):
return self.norm_image(sample).T
# +
#image = cv2.imread("../../image/000001.jpg")
# +
#a = ImageTrans()
#a(image).shape
# -
# # dataset
import pathlib
import itertools
import pandas as pd
import numpy as np
import torch
# ### voice
class VoiceDataset(torch.utils.data.Dataset):
def __init__(self, path, train, trans):
p = pathlib.Path(path)
if train:
self.path = p / "train"
else:
self.path = p / "eval"
self.trans = trans
self.dir = [i for i in self.path.iterdir() if i.is_dir()]
self.file = list(itertools.chain.from_iterable([[j for j in i.iterdir()] for i in self.dir]))
self.data = [pd.read_pickle(i) for i in self.file]
self.label = [1 if "M" == str(i.parent)[-2] else -1 for i in self.file]
self.datanum = len(self.data)
def __len__(self):
return self.datanum
def __getitem__(self, idx):
out_data = self.data[idx]
out_label = self.label[idx]
if self.trans:
k = np.concatenate(self.data,axis=1)
maxi = k.max()
mini = k.min()
self.transform = VoiceTrans(maxi, mini)
out_data = torch.tensor([[self.transform(out_data)]], dtype=torch.float32)
return out_data, out_label
c = VoiceDataset(path="../../voice", train=1, trans=1)
c.__getitem__(0)[1]
# ### face
# +
import random
class ImageDataset(torch.utils.data.Dataset):
def __init__(self, path, trans):
p = pathlib.Path(path)
if trans:
self.transform = ImageTrans()
self.dir = p
self.file = [i for i in self.dir.iterdir() if i.suffix == ".jpg"]
if self.transform:
self.data = [self.transform(cv2.imread(str(i))) for i in self.file]
else:
self.data = [cv2.imread(str(i)) for i in self.file]
image_df = pd.read_table("../../list_attr_celeba.txt", header=0,sep=" ", index_col=0)
self.label = [image_df["Male"][i.name] for i in self.file]
"""
for i in self.file:
try:
image_df["Male"][i.name]
except:
print(i.name)
"""
self.datanum = len(self.data)
def __len__(self):
return self.datanum
def __getitem__(self, idx):
out_data = self.data[idx]
out_label = self.label[idx]
return out_data, out_label
def sample_label(self, male, num):
l = random.sample([i for i, x in enumerate(self.label) if x==male], num)
return l
def sample_data(self, male, num):
label = self.sample_label(male, num)
data = self.data
print(data[0].shape)
return torch.tensor([data[i] for i in label])
# +
#f = ImageDataset("../../image", trans=1)
# -
# ## pair
# +
from random import sample
class PairDataset(torch.utils.data.Dataset):
voice_path: pathlib.PosixPath
def __init__(self, voice_path, train, image_path):
        # define paths
p = pathlib.Path(voice_path)
if train:
self.voice_path = p / "train"
else:
self.voice_path = p / "eval"
self.image_path = pathlib.Path(image_path)
voice_dir = [i for i in self.voice_path.iterdir() if i.is_dir()]
voice_file = list(itertools.chain.from_iterable([[j for j in i.iterdir()] for i in voice_dir]))
voice_data = [pd.read_pickle(i) for i in voice_file]
self.voice_label = [1 if "M" == str(i.parent)[-2] else -1 for i in voice_file]
k = np.concatenate(voice_data,axis=1)
self.voice_transform = VoiceTrans(k.max(), k.min())
self.voice_data = [torch.tensor([[self.voice_transform(i)]], dtype=torch.float32) for i in voice_data]
image_dir = self.image_path
image_file = [i for i in image_dir.iterdir() if i.suffix == ".jpg"]
image_df = pd.read_table(self.image_path / "list_attr_celeba.txt", header=0,sep=" ", index_col=0)
self.image_label = [image_df["Male"][i.name] for i in image_file]
self.image_label_male = [i for i, x in enumerate(self.image_label) if x == 1]
self.image_label_female= [i for i, x in enumerate(self.image_label) if x == -1]
self.image_transform = ImageTrans()
self.image_data = [torch.tensor(self.image_transform(cv2.imread(str(i))), dtype=torch.float32) for i in image_file]
self.datanum = len(self.voice_data)
def __len__(self):
return self.datanum
def __getitem__(self, idx):
out_voice_data = self.voice_data[idx]
out_label = self.voice_label[idx]
k = 4
if out_label == 1:
c = sample(self.image_label_male, k)
else:
c = sample(self.image_label_female, k)
out_image_data = torch.stack([self.image_data[i] for i in c])
return out_voice_data, out_image_data, out_label
# -
c = PairDataset(voice_path="../../voice", train=True, image_path="../../image")
d = PairDataset(voice_path="../../voice", train=False, image_path="../../image")
type(pathlib.Path())
# ## dataloader
def collate_fn(batch):
    # batch is a list of the Dataset's return values (tuples)
voices, images = [], []
for voice, image, label in batch:
voices.append(voice)
images.append(image)
#labels.append(label)
    # labels are left as a plain list of tensors (unused here)
return voices, images
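The unzipping that `collate_fn` performs can be seen on plain Python values; this stand-in mirrors the function above with strings replacing the voice/image tensors, so the sketch runs without torch:

```python
# Minimal stand-in for the collate_fn above: strings replace tensors.
def collate_pairs(batch):
    voices, images = [], []
    for voice, image, _label in batch:
        voices.append(voice)
        images.append(image)
    return voices, images

batch = [("v0", "i0", 1), ("v1", "i1", -1), ("v2", "i2", 1)]
voices, images = collate_pairs(batch)
print(voices)  # ['v0', 'v1', 'v2']
print(images)  # ['i0', 'i1', 'i2']
```

The custom collate is needed here because the voice tensors vary in length, so the default collate (which stacks everything into one tensor) would fail.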
batch_size = 64
trainloader = torch.utils.data.DataLoader(c, batch_size = batch_size, shuffle = False, num_workers = 0, collate_fn = collate_fn)
testloader = torch.utils.data.DataLoader(d, batch_size = batch_size, shuffle = False, num_workers = 0, collate_fn = collate_fn)
for voice, image in trainloader:
break
print(voice[0].shape)
print(image[0].shape)
from cmvc import *
net = Net()
from torch import optim
net.loss(voice[0], image[0])
def train_net(net, train_loader,
optimizer_cls=optim.Adam,
n_iter=10, device="cpu"):
train_losses = []
optimizer = optimizer_cls(net.parameters(), lr=0.001)
for epoch in range(n_iter):
running_loss = 0.0
net.train()
for i, (xx, yy) in enumerate(train_loader):
optimizer.zero_grad()
losses = torch.zeros(1)
for batch in range(len(xx)):
loss = net.loss(xx[batch], yy[batch])
losses += loss
losses.backward()
optimizer.step()
print(losses.item(), flush=True)
running_loss += losses.item()
        train_losses.append(running_loss / (i + 1))  # i is the last batch index, so divide by i + 1
print(epoch, train_losses[-1], flush=True)
net.to("cpu")
train_net(net, trainloader)
# cmvc/learning/dataloader.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Train Model with GPU (and CPU*)
# CPU is still used to store variables that we are learning (`W` and `b`). This allows the GPU to focus on compute vs. storage.
# +
import tensorflow as tf
from tensorflow.python.client import timeline
import pylab
import numpy as np
import os
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
tf.logging.set_verbosity(tf.logging.INFO)
# -
# ## Reset TensorFlow Graph
# Useful in Jupyter Notebooks
tf.reset_default_graph()
# ## Create TensorFlow Session
# +
config = tf.ConfigProto(
log_device_placement=True,
)
config.gpu_options.allow_growth=True
config.gpu_options.per_process_gpu_memory_fraction=0.4
print(config)
sess = tf.Session(config=config)
print(sess)
# -
# ## Generate Model Version (current timestamp)
# +
from datetime import datetime
version = int(datetime.now().strftime("%s"))
# -
# ### Load Model Training and Test/Validation Data
#
num_samples = 100000
# +
x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)
pylab.plot(x_train, y_train, '.')
# +
x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)
pylab.plot(x_test, y_test, '.')
# +
with tf.device("/cpu:0"):
W = tf.get_variable(shape=[], name='weights')
print(W)
b = tf.get_variable(shape=[], name='bias')
print(b)
with tf.device("/gpu:0"):
x_observed = tf.placeholder(shape=[None],
dtype=tf.float32,
name='x_observed')
print(x_observed)
y_pred = W * x_observed + b
print(y_pred)
# +
learning_rate = 0.025
with tf.device("/gpu:0"):
y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
print(y_observed)
loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer_op.minimize(loss_op)
print("Loss Scalar: ", loss_op)
print("Optimizer Op: ", optimizer_op)
print("Train Op: ", train_op)
# -
# ## Randomly Initialize Variables (Weights and Bias)
# The goal is to learn more accurate Weights and Bias during training.
with tf.device("/cpu:0"):
init_op = tf.global_variables_initializer()
print(init_op)
sess.run(init_op)
print("Initial random W: %f" % sess.run(W))
print("Initial random b: %f" % sess.run(b))
# ## View Accuracy of Pre-Training, Initial Random Variables
# We want this to be close to 0, but it's relatively far away. This is why we train!
def test(x, y):
return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})
test(x_test, y_test)
# ## Setup Loss Summary Operations for Tensorboard
loss_summary_scalar_op = tf.summary.scalar('loss', loss_op)
loss_summary_merge_all_op = tf.summary.merge_all()
# +
train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/gpu/%s/train' % version,
graph=tf.get_default_graph())
test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/gpu/%s/test' % version,
graph=tf.get_default_graph())
# -
# ## Train Model
# +
# %%time
with tf.device("/gpu:0"):
run_metadata = tf.RunMetadata()
max_steps = 401
for step in range(max_steps):
if (step < max_steps - 1):
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})
else:
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train},
options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE),
run_metadata=run_metadata)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline-gpu.json', 'w') as trace_file:
trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
if step % 10 == 0:
print(step, sess.run([W, b]))
train_summary_writer.add_summary(train_summary_log, step)
train_summary_writer.flush()
test_summary_writer.add_summary(test_summary_log, step)
test_summary_writer.flush()
# -
pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train, sess.run(y_pred,
feed_dict={x_observed: x_train,
y_observed: y_train}),
".",
label="predicted")
pylab.legend()
pylab.ylim(0, 1.0)
# ## View Loss Summaries in Tensorboard
# Navigate to the `Scalars` and `Graphs` tab at this URL:
#
# http://[ip-address]:6006
# ## Save Graph For Optimization
# We will use this later.
# +
import os
optimize_me_parent_path = '/root/models/optimize_me/linear/gpu'
saver = tf.train.Saver()
os.system('rm -rf %s' % optimize_me_parent_path)
os.makedirs(optimize_me_parent_path)
unoptimized_model_graph_path = '%s/unoptimized_gpu.pb' % optimize_me_parent_path
print(unoptimized_model_graph_path)
tf.train.write_graph(sess.graph_def,
'.',
unoptimized_model_graph_path,
as_text=False)
model_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path
saver.save(sess,
save_path=model_checkpoint_path)
print(model_checkpoint_path)
# -
print(optimize_me_parent_path)
os.listdir(optimize_me_parent_path)
sess.close()
# ## Show Graph
# + language="bash"
#
# summarize_graph --in_graph=/root/models/optimize_me/linear/gpu/unoptimized_gpu.pb
# +
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2
def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
graph = graph_pb2.GraphDef()
with open(input_graph, "rb") as fh:
if is_input_graph_binary:
graph.ParseFromString(fh.read())
else:
text_format.Merge(fh.read(), graph)
with open(output_dot, "wt") as fh:
print("digraph graphname {", file=fh)
for node in graph.node:
output_name = node.name
print(" \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
for input_full_name in node.input:
parts = input_full_name.split(":")
input_name = re.sub(r"^\^", "", parts[0])
print(" \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
print("}", file=fh)
print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))
# -
input_graph='/root/models/optimize_me/linear/gpu/unoptimized_gpu.pb'
output_dot='/root/notebooks/unoptimized_gpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)
# + language="bash"
#
# dot -T png /root/notebooks/unoptimized_gpu.dot \
# -o /root/notebooks/unoptimized_gpu.png > /tmp/a.out
# +
from IPython.display import Image
Image('/root/notebooks/unoptimized_gpu.png', width=1024, height=768)
# -
# gpu.ml/notebooks/03a_Train_Model_GPU.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Problem Statement:
#
# Prepare a classification model using Naive Bayes
# for salary data
#
# Data Description:
#
# age -- age of a person
# workclass -- A work class is a grouping of work
# education -- Education of an individuals
# maritalstatus -- Marital status of an individulas
# occupation -- occupation of an individuals
# relationship --
# race -- Race of an Individual
# sex -- Gender of an Individual
# capitalgain -- profit received from the sale of an investment
# capitalloss -- A decrease in the value of a capital asset
# hoursperweek -- number of hours work per week
# native -- Native of an individual
# Salary -- salary of an individual
#
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import classification_report
from sklearn.naive_bayes import MultinomialNB as MB
from sklearn.naive_bayes import GaussianNB as GB
from sklearn.naive_bayes import BernoulliNB as BN
# Importing the training dataset
salary_train = pd.read_csv("G:/data sceince/Assignments/Naive Bayes/SalaryData_Train.csv")
salary_train.head()
salary_train.shape
# Importing the test dataset
salary_test = pd.read_csv("G:/data sceince/Assignments/Naive Bayes/SalaryData_Test.csv")
salary_test.head()
salary_test.shape
#checking for overall info. of training dataset like null values and data types
salary_train.info()
#checking for overall info. of test dataset like null values and data types
salary_test.info()
# +
#Converting the Y variable into labels
label_encoder = LabelEncoder()
salary_test['Salary'] = label_encoder.fit_transform(salary_test.Salary)
salary_train['Salary'] = label_encoder.fit_transform(salary_train.Salary)
# +
# Checking whether the class(y)variable is balanced or not.
import seaborn as sns
sns.countplot(x = 'Salary', data = salary_test )
# -
# Salary <=50K is encoded as class 0 and Salary >50K as class 1
# +
# converting the categorical columns into dummy variables
salary_train = pd.get_dummies(salary_train)
salary_test = pd.get_dummies(salary_test)
# -
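One caveat with calling `pd.get_dummies` separately on the train and test frames: a category that appears in only one split produces mismatched dummy columns. A sketch of guarding against this with `DataFrame.align` (toy frames for illustration, not the actual salary data):

```python
import pandas as pd

# Toy frames: "Gov" appears only in train, so the dummy columns differ.
train = pd.get_dummies(pd.DataFrame({"workclass": ["Private", "Gov"]}))
test = pd.get_dummies(pd.DataFrame({"workclass": ["Private", "Private"]}))

# Left-align test to train's columns, filling the missing dummies with 0.
train, test = train.align(test, join="left", axis=1, fill_value=0)
print(list(train.columns) == list(test.columns))  # True
```

In this assignment the two CSVs happen to share categories, so the column sets match, but the check is cheap insurance.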
salary_train.dtypes
# +
# assigning the training data to x_train and y_train and test data to x_test and y_test
x_train_std = salary_train.drop('Salary', axis = 1)
y_train = salary_train['Salary']
x_test_std = salary_test.drop('Salary', axis = 1)
y_test = salary_test['Salary']
# -
# ### Building the model
# #### Multinomial Naive Bayes
classifier_MB = MB()
classifier_MB.fit(x_train_std,y_train)
train_predict1 = classifier_MB.predict(x_train_std)
accuracy_train = np.mean(train_predict1==y_train)
accuracy_train*100
test_predict1 = classifier_MB.predict(x_test_std)
accuracy_test = np.mean(test_predict1==y_test)
accuracy_test*100
print(classification_report(test_predict1, y_test))
# #### Gaussian Naive Bayes
classifier_GB = GB()
classifier_GB.fit(x_train_std,y_train)
train_predict2 = classifier_GB.predict(x_train_std)
accuracy_train = np.mean(train_predict2==y_train)
# Training Accuracy
accuracy_train*100
test_predict2 = classifier_GB.predict(x_test_std)
accuracy_test = np.mean(test_predict2==y_test)
# Test accuracy
accuracy_test*100
from sklearn.metrics import classification_report
print(classification_report(test_predict2, y_test))
# #### Bernolli Naive Bayes
classifier_BN = BN(binarize = True)
classifier_BN.fit(x_train_std,y_train)
train_predict3 = classifier_BN.predict(x_train_std)
accuracy_train = np.mean(train_predict3 == y_train)
# Training accuracy
accuracy_train*100
test_predict3 = classifier_BN.predict(x_test_std)
accuracy_test = np.mean(test_predict3==y_test)
# Test accuracy
accuracy_test*100
print(classification_report(test_predict3, y_test))
# Inference: since we are classifying numeric data, Gaussian Naive Bayes is the natural fit, and the accuracy results above also show that Gaussian Naive Bayes performs best.
# Assign_Naive_Bayes_salary.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# # Stratify
#
# ## Vertical interpolation for numerical weather prediction (NWP) model data
#
# Whilst this is not the only use for the ``stratify`` package, NWP vertical interpolation was the motivating use case for the package's creation.
#
# In its simplest form, vertical interpolation amounts to a 1d interpolation at each grid-point. Whilst some of the interpolators offer more sophistication, ``stratify`` can be seen as an optimisation that vectorizes these interpolations beyond naïve nested for-loops.
#
# #### Data setup
# In order to setup the problem, let's manufacture some reasonably realistic NWP data.
#
# First, let's randomly generate some orography (or, if this were an ocean model, bathymetry) that we can use for our model:
# -
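For orientation, the naïve nested-loop version of a linear vertical interpolation (the thing ``stratify`` replaces with a vectorized implementation) might look like the sketch below; the shapes and NaN handling are assumptions for illustration, not ``stratify``'s actual code:

```python
import numpy as np

def interpolate_columns(z_target, z_src, data):
    """Linearly interpolate each (y, x) column of data from z_src onto z_target.

    z_src and data have shape (nz, ny, nx); z_target is 1d, and z_src must be
    increasing along axis 0. Points outside the source range become NaN,
    mimicking stratify's default linear behaviour.
    """
    _, ny, nx = data.shape
    out = np.full((len(z_target), ny, nx), np.nan)
    for j in range(ny):
        for i in range(nx):
            out[:, j, i] = np.interp(z_target, z_src[:, j, i], data[:, j, i],
                                     left=np.nan, right=np.nan)
    return out
```

The double for-loop over grid-points is exactly the cost that grows painful at real model resolutions.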
# %matplotlib inline
# + deletable=true editable=true
import numpy as np
nx, ny = 6, 3
np.random.seed(0)
orography = np.random.normal(1000, 600, size=(ny, nx)) - 400
sea_level_temp = np.random.normal(290, 5, size=(ny, nx))
# + deletable=true editable=true
# Now visualise:
import matplotlib.pyplot as plt
plt.set_cmap('viridis')
plt.subplot(1, 2, 1)
plt.pcolormesh(orography)
cbar = plt.colorbar(orientation='horizontal',
label='Orography (m)')
# Reduce the maximum number of ticks to 5.
cbar.ax.xaxis.get_major_locator().nbins = 5
plt.subplot(1, 2, 2)
plt.pcolormesh(sea_level_temp)
cbar = plt.colorbar(orientation='horizontal',
label='Sea level temperature (K)')
# Reduce the maximum number of ticks to 5.
cbar.ax.xaxis.get_major_locator().nbins = 5
plt.show()
# + [markdown] deletable=true editable=true
# Next, let's define a vertical coordinate system that minimises missing data values, and gives good resolution at the (orographic) surface.
#
# To achieve this we invent a scheme where the "bottom" of the model closely follows the orography/bathymetry, and as we reach the "top" of the model we get levels of approximately constant height.
# + deletable=true editable=true
nz = 9
model_levels = np.arange(nz)
model_top = 5000 # m
# The proportion of orographic influence on the model altitude. In this case,
# we define this as a log progression from full influence to no influence.
sigma = 1.1 - np.logspace(-1, np.log10(1.1), nz)
# Broadcast sigma so that when we multiply the orography we get a 3D array of z, y, x.
sigma = sigma[:, np.newaxis, np.newaxis]
# Combine sigma with the orography and model top value to
# produce 3d (z, y, x) altitude data for our "model levels".
altitude = (orography * sigma) + (model_top * (1 - sigma))
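A quick sanity check of the hybrid-coordinate formula above: sigma = 1 recovers the orography, sigma = 0 recovers the model top (the scalar heights here are made up):

```python
import numpy as np

orog, top = 250.0, 5000.0  # hypothetical surface height and model top (m)
for sigma, expected in [(1.0, orog), (0.0, top), (0.5, (orog + top) / 2)]:
    assert np.isclose(orog * sigma + top * (1 - sigma), expected)
print("hybrid coordinate endpoints behave as expected")
```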
# + [markdown] deletable=true editable=true
# Our new 3d array now represents altitude (height above *sea* surface) at each of our "model levels".
# Let's look at a cross-section of the data to see how these levels behave:
# + deletable=true editable=true
plt.figure(figsize=(8, 6))
plt.fill_between(np.arange(6), np.zeros(6), orography[1, :],
color='green', linewidth=2, label='Orography')
plt.plot(np.zeros(nx),
color='blue', linewidth=1.2,
label='Sea level')
for i in range(9):
plt.plot(altitude[i, 1, :], color='gray', linestyle='--',
label='Model levels' if i == 0 else None)
plt.ylabel('altitude / m')
plt.margins(0.1)
plt.legend()
plt.show()
# + [markdown] deletable=true editable=true
# To recap, we now have a model vertical coordinate system that maximises the number of grid-point locations close to the orography. In addition, we have a 3d array of "altitudes" so that we can relate any phenomenon measured on this grid to useful vertical coordinate information.
# + [markdown] deletable=true editable=true
# Let's now define the temperature at each of our x, y, z points. We use the [International Standard Atmosphere lapse rate](https://en.wikipedia.org/wiki/International_Standard_Atmosphere) of $ -6.5\ ^{\circ}C\ /\ km $ combined with our sea level standard temperature as an approximate model for our temperature profile.
# + deletable=true editable=true
lapse = -6.5 / 1000 # degC / m
temperature = sea_level_temp + lapse * altitude
# + deletable=true editable=true
from matplotlib.colors import LogNorm
fig = plt.figure(figsize=(8, 6))
norm = plt.Normalize(vmin=temperature.min(), vmax=temperature.max())
for i in range(nz):
plt.subplot(3, 3, i + 1)
qm = plt.pcolormesh(temperature[i], cmap='viridis', norm=norm)
plt.subplots_adjust(right=0.84, wspace=0.3, hspace=0.3)
cax = plt.axes([0.85, 0.1, 0.03, 0.8])
plt.colorbar(cax=cax)
plt.suptitle('Temperature (K) at each "model level"')
plt.show()
# + [markdown] deletable=true editable=true
# #### Restratification / vertical interpolation
#
# Our data is in the form:
#
# * 1d "model level" vertical coordinate (z axis)
# * 2 x 1d horizontal coordinates (x, y)
# * 3d "altitude" variable (x, y, z)
# * 3d "temperature" variable (x, y, z)
#
# Suppose we now want to change the vertical coordinate system of our variables so that they are on levels of constant altitude, not levels of constant "model levels":
# + deletable=true editable=true
target_altitudes = np.linspace(700, 5500, 5) # m
# + [markdown] deletable=true editable=true
# If we visualise this, we can see that we need to consider the behaviour for a number of situations, including what should happen when we are sampling *below the orography*, and when we are *above the model top*.
# + deletable=true editable=true
plt.figure(figsize=(8, 6))
plt.fill_between(np.arange(6), np.zeros(6), orography[1, :],
color='green', linewidth=2, label='Orography')
for i in range(9):
plt.plot(altitude[i, 1, :],
color='gray', lw=1.2,
label=None if i > 0 else 'Source levels \n(model levels)')
for i, target in enumerate(target_altitudes):
plt.plot(np.repeat(target, 6),
color='gray', linestyle='--', lw=1.4, alpha=0.6,
label=None if i > 0 else 'Target levels \n(altitude)')
plt.ylabel('height / m')
plt.margins(top=0.1)
plt.legend()
plt.savefig('summary.png')
plt.show()
# + [markdown] deletable=true editable=true
# The default behaviour depends on the scheme, but for linear interpolation we receive NaNs both below the orography and above the model top:
# + deletable=true editable=true
import stratify
target_nz = 20
target_altitudes = np.linspace(400, 5200, target_nz) # m
new_temperature = stratify.interpolate(target_altitudes, altitude, temperature,
axis=0)
# + [markdown] deletable=true editable=true
# With some work, we can visualise this result to compare a cross-section before and after. In particular this will allow us to see precisely what the interpolator has done at the extremes of our target levels:
# + deletable=true editable=true
plt.figure(figsize=(8, 6))
ax1 = plt.subplot(1, 2, 1)
plt.fill_between(np.arange(6), np.zeros(6), orography[1, :],
color='green', linewidth=2, label='Orography')
cs = plt.contourf(np.tile(np.arange(6), nz).reshape(nz, 6),
altitude[:, 1],
temperature[:, 1])
plt.scatter(np.tile(np.arange(6), nz).reshape(nz, 6),
altitude[:, 1],
c=temperature[:, 1])
plt.subplot(1, 2, 2, sharey=ax1)
plt.fill_between(np.arange(6), np.zeros(6), orography[1, :],
color='green', linewidth=2, label='Orography')
plt.contourf(np.arange(6), target_altitudes,
np.ma.masked_invalid(new_temperature[:, 1]),
cmap=cs.cmap, norm=cs.norm)
plt.scatter(np.tile(np.arange(nx), target_nz).reshape(target_nz, nx),
np.repeat(target_altitudes, nx).reshape(target_nz, nx),
c=new_temperature[:, 1])
plt.scatter(np.tile(np.arange(nx), target_nz).reshape(target_nz, nx),
np.repeat(target_altitudes, nx).reshape(target_nz, nx),
s=np.isnan(new_temperature[:, 1]) * 15, marker='x')
plt.suptitle('Temperature cross-section before and after restratification')
plt.show()
# index.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Evaluation
# ## Imports
# + pycharm={"name": "#%%\n"}
import os
from pathlib import Path
import codecs
import re
from math import sqrt
from statistics import mean, stdev
import pandas as pd
from scipy.stats import ttest_ind
from tabulate import tabulate
pd.set_option('display.max_rows', 5)
# + [markdown] pycharm={"name": "#%% md\n"}
# ## Workspace Directories
# +
TIMESTAMP = "2020-05-06-04-13"
NOTEBOOKS_DIR = Path(os.getcwd())
WORKSPACE_DIR = NOTEBOOKS_DIR.parent
DATA_DIR = WORKSPACE_DIR / "data"
EVALUATION_DIR = (DATA_DIR / "experiments" / TIMESTAMP).resolve(True)
OUTPUT_DIR = (DATA_DIR / "figures")
OUTPUT_DIR.mkdir(exist_ok=True)
OUTPUT_DIR
# -
# ## Name mappings
# +
evaluation_names = {
"ndcg@10-per-topic": r"\ndcg{10}~Performance",
"ndcg@20-per-topic": r"\ndcg{20}~Performance",
"ndcg-per-topic": r"\ndcg{}~Performance",
"map-per-topic": r"\map{}~Performance",
"first-wikipedia-rank-per-topic": r"Mean First Rank of Wikipedia Documents",
"first-irrelevant-wikipedia-rank-per-topic": r"Mean First Rank of Irrelevant Wikipedia Documents",
"first-duplicate-rank-per-topic": r"Mean First Rank of Wikipedia Documents",
"first-irrelevant-duplicate-rank-per-topic": r"Mean First Rank of Irrelevant Wikipedia Documents",
"domain-fairness-per-topic": r"Fairness of Exposure Across Domains"
}
corpus_names = {
"clueweb09": r"ClueWeb~09",
"gov2": r"GOV2"
}
run_sampling_names = {
"identity": r"Duplicates Unmodified",
"duplicates-irrelevant": r"Duplicates Irrelevant",
"remove-duplicates": r"Duplicates Removed"
}
ranker_names = {
"bm25": r"BM25",
"ada-rank": r"AdaRank",
"coordinate-ascent": r"Coor.~Ascent",
"lambda-mart": r"LambdaMART",
"list-net": r"ListNET",
"rank-boost": r"RankBoost",
"linear-regression": r"Regression"
}
sampling_names = {
("identity", "identity", "identity"): r"100\,\%",
("no-wikipedia-redundancy", "identity", "identity"): r"0\,\%",
("filter-canonical", "identity", "identity"): r"0\,\%",
# ("identity", "identity", "novelty-relevance-feedback-null"): r"NOV\textsubscript{0}",
# ("identity", "identity", "novelty-relevance-feedback-null-novelty-feature"): r"NOV\textsubscript{0,F}",
# ("identity", "identity", "novelty-relevance-feedback-scale"): r"NOV\textsubscript{S}",
("identity", "identity", "novelty-relevance-feedback-scale-novelty-feature"): r"NOV\textsubscript{S,F}",
}
split_names = {
"most-redundant-training": r"Worst-Case Scenario",
# "3-fold-cross-validation-1": r"3-Fold Cross Validation",
# "3-fold-cross-validation-2": r"3-Fold Cross Validation",
# "3-fold-cross-validation-3": r"3-Fold Cross Validation",
"5-fold-cross-validation-1": r"5-Fold Cross Validation",
"5-fold-cross-validation-2": r"5-Fold Cross Validation",
"5-fold-cross-validation-3": r"5-Fold Cross Validation",
"5-fold-cross-validation-4": r"5-Fold Cross Validation",
"5-fold-cross-validation-5": r"5-Fold Cross Validation",
"clueweb09-mostredundanttraining": r"Worst-Case Scenario",
"clueweb09-fold1": r"5-Fold Cross Validation",
"clueweb09-fold2": r"5-Fold Cross Validation",
"clueweb09-fold3": r"5-Fold Cross Validation",
"clueweb09-fold4": r"5-Fold Cross Validation",
"clueweb09-fold5": r"5-Fold Cross Validation",
"letor-trec-millionquery2007-fold-1": r"5-Fold Cross Validation MQ\,2007",
"letor-trec-millionquery2007-fold-2": r"5-Fold Cross Validation MQ\,2007",
"letor-trec-millionquery2007-fold-3": r"5-Fold Cross Validation MQ\,2007",
"letor-trec-millionquery2007-fold-4": r"5-Fold Cross Validation MQ\,2007",
"letor-trec-millionquery2007-fold-5": r"5-Fold Cross Validation MQ\,2007",
"letor-trec-millionquery2008-fold-1": r"5-Fold Cross Validation MQ\,2008",
"letor-trec-millionquery2008-fold-2": r"5-Fold Cross Validation MQ\,2008",
"letor-trec-millionquery2008-fold-3": r"5-Fold Cross Validation MQ\,2008",
"letor-trec-millionquery2008-fold-4": r"5-Fold Cross Validation MQ\,2008",
"letor-trec-millionquery2008-fold-5": r"5-Fold Cross Validation MQ\,2008",
"trec-millionquery2007-fold1": r"5-Fold Cross Validation",
"trec-millionquery2007-fold2": r"5-Fold Cross Validation",
"trec-millionquery2007-fold3": r"5-Fold Cross Validation",
"trec-millionquery2007-fold4": r"5-Fold Cross Validation",
"trec-millionquery2007-fold5": r"5-Fold Cross Validation",
"trec-millionquery2008-fold1": r"5-Fold Cross Validation",
"trec-millionquery2008-fold2": r"5-Fold Cross Validation",
"trec-millionquery2008-fold3": r"5-Fold Cross Validation",
"trec-millionquery2008-fold4": r"5-Fold Cross Validation",
"trec-millionquery2008-fold5": r"5-Fold Cross Validation"
# "trec-millionquery2007-fold1": r"5-Fold Cross Validation MQ\,2007",
# "trec-millionquery2007-fold2": r"5-Fold Cross Validation MQ\,2007",
# "trec-millionquery2007-fold3": r"5-Fold Cross Validation MQ\,2007",
# "trec-millionquery2007-fold4": r"5-Fold Cross Validation MQ\,2007",
# "trec-millionquery2007-fold5": r"5-Fold Cross Validation MQ\,2007",
# "trec-millionquery2008-fold1": r"5-Fold Cross Validation MQ\,2008",
# "trec-millionquery2008-fold2": r"5-Fold Cross Validation MQ\,2008",
# "trec-millionquery2008-fold3": r"5-Fold Cross Validation MQ\,2008",
# "trec-millionquery2008-fold4": r"5-Fold Cross Validation MQ\,2008",
# "trec-millionquery2008-fold5": r"5-Fold Cross Validation MQ\,2008"
}
evaluations = list(evaluation_names.keys())
corpora = list(corpus_names.keys())
evaluation_filter_metrics = {
"ndcg@10-per-topic": "ndcg@10",
}
evaluation_filter_metrics = { e : evaluation_filter_metrics.get(e, "ndcg@20") for e in evaluations }
# -
# ## Configuration
# + pycharm={"name": "#%%\n"}
baseline_ranker = "BM25"
baseline_sampling = sampling_names[("identity", "identity", "identity")]
# -
# ## Parse evaluation data frame
# + pycharm={"name": "#%%\n"}
# Read from JSON-Lines file.
def get_evaluation_raw(name):
file = EVALUATION_DIR / ("evaluation-of-experiments-" + name + ".jsonl")
return pd.read_json(file.open(), lines=True)
# Only print for debugging.
get_evaluation_raw(evaluations[0])
# + pycharm={"name": "#%%\n"}
def get_evaluation(evaluation_name, corpus=None):
evaluation = get_evaluation_raw(evaluation_name)
# Drop training set results.
evaluation = evaluation.drop(columns=["train-set-result"])
# Drop evaluation column.
evaluation = evaluation.drop(columns=["evaluation"])
# Drop trial column.
evaluation = evaluation.drop(columns=["trial"])
# Filter corpus.
if corpus:
evaluation = evaluation[evaluation["corpus"] == corpus]\
.drop(columns=["corpus"])
# Filter models with metric.
filter_metric = evaluation_filter_metrics[evaluation_name]
evaluation = evaluation[evaluation["metric"] == filter_metric]\
.drop(columns=["metric"])
# Merge samplings into one column.
evaluation["sampling"] = evaluation[["underSampling","overSampling","featureMutation"]]\
.aggregate(tuple, axis=1)
evaluation = evaluation.drop(columns=["underSampling","overSampling","featureMutation"])
return evaluation
# Only print for debugging.
get_evaluation(evaluations[0], corpora[0])
# + pycharm={"name": "#%%\n"}
def get_evaluation_labeled(evaluation_name, corpus=None):
evaluation = get_evaluation(evaluation_name, corpus)
# Map names.
if "corpus" in evaluation.columns:
evaluation["corpus"] = evaluation["corpus"].map(lambda split : corpus_names.get(split, ""))
evaluation["trainTestSplit"] = evaluation["trainTestSplit"].map(lambda split : split_names.get(split, ""))
evaluation["ranker"] = evaluation["ranker"].map(lambda ranker : ranker_names.get(ranker, ""))
evaluation["runSampling"] = evaluation["runSampling"].map(lambda run_sampling : run_sampling_names.get(run_sampling, ""))
evaluation["sampling"] = evaluation["sampling"].map(lambda sampling : sampling_names.get(sampling, ""))
# Filter empty (ignored) names.
if "corpus" in evaluation.columns:
evaluation=evaluation[evaluation["corpus"] != ""]
evaluation=evaluation[evaluation["trainTestSplit"] != ""]
evaluation=evaluation[evaluation["ranker"] != ""]
evaluation=evaluation[evaluation["runSampling"] != ""]
evaluation=evaluation[evaluation["sampling"] != ""]
return evaluation
# Only print for debugging.
get_evaluation_labeled(evaluations[0], corpora[0])
# + pycharm={"name": "#%%\n"}
def categorical_type(categories):
categories = list(categories)
categories = sorted(set(categories), key=categories.index)
return pd.api.types.CategoricalDtype(categories=categories, ordered=True)
# Categories:
corpus_categorical_type = categorical_type(corpus_names.values())
split_categorical_type = categorical_type(split_names.values())
ranker_categorical_type = categorical_type(ranker_names.values())
run_sampling_categorical_type = categorical_type(run_sampling_names.values())
sampling_categorical_type = categorical_type(sampling_names.values())
def get_evaluation_aggregated(evaluation_name, corpus=None):
evaluation = get_evaluation_labeled(evaluation_name, corpus)
# Make types categorical.
types = {
"trainTestSplit": split_categorical_type,
"ranker": ranker_categorical_type,
"runSampling": run_sampling_categorical_type,
"sampling": sampling_categorical_type
}
if "corpus" in evaluation.columns:
types.update({"corpus" : corpus_categorical_type})
evaluation = evaluation.astype(types)
# Sort.
sort_cols = ["trainTestSplit", "ranker", "runSampling", "sampling"]
if "corpus" in evaluation.columns:
sort_cols.insert(0, "corpus")
evaluation = evaluation.sort_values(by=sort_cols)
# Aggregate trials.
evaluation = evaluation.groupby(sort_cols)\
.aggregate(lambda lists : [item for sublist in lists for item in sublist])\
.dropna()\
.reset_index()
return evaluation
# Only print for debugging.
get_evaluation_aggregated(evaluations[0], corpora[0])
# -
# ## Statistic utils
# + pycharm={"name": "#%%\n"}
MAX_P_VALUE = 0.05
def significantly_better(compare, baseline):
test = ttest_ind(compare,baseline)
return test.statistic > 0 and test.pvalue <= MAX_P_VALUE
def cohens_d(compare, baseline):
return (mean(compare) - mean(baseline)) / (sqrt((stdev(compare) ** 2 + stdev(baseline) ** 2) / 2))
# -
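To exercise the two helpers, here is a standalone sketch (the functions are re-stated so the snippet runs on its own; the score lists are invented):

```python
from math import sqrt
from statistics import mean, stdev
from scipy.stats import ttest_ind

MAX_P_VALUE = 0.05

def significantly_better(compare, baseline):
    test = ttest_ind(compare, baseline)
    return test.statistic > 0 and test.pvalue <= MAX_P_VALUE

def cohens_d(compare, baseline):
    return (mean(compare) - mean(baseline)) / sqrt((stdev(compare) ** 2 + stdev(baseline) ** 2) / 2)

# Hypothetical per-topic scores, purely for illustration.
baseline_scores = [0.40, 0.42, 0.38, 0.41, 0.39]
compare_scores = [0.50, 0.52, 0.48, 0.51, 0.49]
print(significantly_better(compare_scores, baseline_scores))  # True
print(round(cohens_d(compare_scores, baseline_scores), 2))
```

Note that `cohens_d` uses the simple pooled-variance form with sample standard deviations, which matches the definition in the cell above.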
# ## Generate LaTeX table from data frame
# +
def empty_columns(n):
return [""] * n
def table(name, corpus=None, decimals=3):
evaluation = get_evaluation_aggregated(name, corpus)
rankers = evaluation["ranker"].unique()
run_samplings = evaluation["runSampling"].unique()
samplings = evaluation["sampling"].unique()
def table_head():
if not corpus:
head = ["Corpus", "Split", "Algorithm"]
else:
head = ["Split", "Algorithm"]
head.append(evaluation_names[name])
head += empty_columns(len(samplings) * len(run_samplings) - 1)
head = list(map(lambda item : r"\textbf{" + item + r"}" if len(item) > 0 else item, head))
return head
def table_subhead():
head = empty_columns(3 if not corpus else 2)
for run_sampling in run_samplings:
head.append(run_sampling)
head += empty_columns(len(samplings) - 1)
return head
def table_subsubhead():
head = empty_columns(3 if not corpus else 2)
for _ in run_samplings:
for sampling in samplings:
head.append(sampling)
return head
def table_cell(baseline, compare):
column = r"\("
significant = significantly_better(compare, baseline)
if significant:
column += r"\mathbf{"
column += ("{:." + str(decimals) + "f}").format(mean(compare))
d = cohens_d(compare, baseline)
if d > 0:
column += r"\updiff{"
column += "{:.1f}".format(d)
column += r"}"
elif d < 0:
column += r"\downdiff{"
column += "{:.1f}".format(-d)
column += r"}"
else:
column += r"\nodiff{"
column += "{:.1f}".format(d)
column += r"}"
if significant:
column += r"}"
column += r"\)"
return column
def table_row(split, split_tex, ranker, row_corpus=None):
if row_corpus:
row = [row_corpus, split_tex, ranker]
else:
row = [split_tex, ranker]
for run_sampling in run_samplings:
df = evaluation
if row_corpus:
df = df[df["corpus"] == row_corpus]
df = df[df["trainTestSplit"] == split]
df = df[df["ranker"] == ranker]
df = df[df["runSampling"] == run_sampling]
if row_corpus:
drop_columns = ["corpus", "trainTestSplit", "ranker", "runSampling"]
else:
drop_columns = ["trainTestSplit", "ranker", "runSampling"]
df = df.drop(columns=drop_columns)
baseline_result = df[df["sampling"] == baseline_sampling]["test-set-result"].iloc[0]
row.append(r"\(" + ("{:." + str(decimals) + "f}").format(mean(baseline_result)) + r"\)")
for sampling in samplings:
if sampling != baseline_sampling:
if ranker == baseline_ranker:
# We don't see sampling differences in BM25 Ranking,
# as those don't depend on training data.
# Therefore hide all except the first.
row.append(r"---")
else:
compare_result = df[df["sampling"] == sampling]["test-set-result"].iloc[0]
row.append(table_cell(baseline_result, compare_result))
return row
def table_rows():
def split_rotated(split_name, num_rankers):
return r"\multirow{" + str(num_rankers) +\
r"}{*}{\rotatebox[origin=c]{90}{\parbox[c]{" +\
str(num_rankers + 1) +\
r"em}{\centering \textbf{" + split_name + "}}}}"
rows = []
if not corpus:
for corp in evaluation["corpus"].unique():
corpus_df = evaluation[evaluation["corpus"] == corp]
for split in corpus_df["trainTestSplit"].unique():
split_tex = split_rotated(split, len(rankers))
for ranker in rankers:
rows.append(table_row(split, split_tex, ranker, corp))
split_tex = ""
else:
for split in evaluation["trainTestSplit"].unique():
split_tex = split_rotated(split, len(rankers))
for ranker in rankers:
rows.append(table_row(split, split_tex, ranker))
split_tex = ""
return rows
table_data = [
table_head(),
table_subhead(),
table_subsubhead()
] + table_rows()
return tabulate(table_data, tablefmt="latex_raw")
def write_table(evaluation, corpus=None, decimals=3):
    file_name = OUTPUT_DIR / (((corpus + "-") if corpus else "") + evaluation + ".tex")
with codecs.open(file_name, 'w', 'utf-8') as file:
content = table(evaluation, corpus, decimals)
content = re.sub(r"\s+&\s+", " & ",content)
content = re.sub(r"\s+\\\\", r" \\\\",content)
file.write(r"\documentclass[preview]{standalone}" + "\n" +\
r"\usepackage{amsmath}" + "\n" +\
r"\usepackage{graphicx}" + "\n" +\
r"\newcommand{\ndcg}[1]{nDCG\def\tempndcg{#1}\ifx\tempndcg\empty\else{@}\tempndcg\fi}" + "\n" +\
r"\newcommand{\map}{MAP}" + "\n" +\
r"\newcommand{\updiff}[1]{^{\text{↑}#1}}" + "\n" +\
r"\newcommand{\downdiff}[1]{^{\text{↓}#1}}" + "\n" +\
r"\newcommand{\nodiff}[1]{^{\text{=}#1}}" + "\n" +\
r"\begin{document}" + "\n")
file.write(content)
file.write(r"\end{document}")
# -
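The per-cell LaTeX logic inside `table_cell` can be restated as a small standalone helper; this sketch takes the mean, effect size, and significance flag as precomputed inputs:

```python
def format_cell(mean_value, d, significant, decimals=3):
    """Render a LaTeX math cell: bold if significant, effect size as superscript."""
    body = ("{:." + str(decimals) + "f}").format(mean_value)
    arrow = r"\updiff" if d > 0 else (r"\downdiff" if d < 0 else r"\nodiff")
    body += arrow + "{" + "{:.1f}".format(abs(d)) + "}"
    if significant:
        body = r"\mathbf{" + body + "}"
    return r"\(" + body + r"\)"

print(format_cell(0.472, 1.3, True))   # bolded, with an up-arrow superscript
print(format_cell(0.300, -0.5, False))
```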
# ## Generate tables
# write_table("domain-fairness-per-topic")
write_table("domain-fairness-per-topic", corpus="gov2")
write_table("domain-fairness-per-topic", corpus="clueweb09")
# write_table("map-per-topic")
# write_table("map-per-topic", corpus="gov2")
# write_table("map-per-topic", corpus="clueweb09")
# write_table("ndcg@10-per-topic")
write_table("ndcg@10-per-topic", corpus="gov2")
write_table("ndcg@10-per-topic", corpus="clueweb09")
# write_table("ndcg@20-per-topic")
write_table("ndcg@20-per-topic", corpus="gov2")
write_table("ndcg@20-per-topic", corpus="clueweb09")
# write_table("first-wikipedia-rank-per-topic", decimals=0, corpus="clueweb09")
write_table("first-irrelevant-wikipedia-rank-per-topic", decimals=0, corpus="clueweb09")
# write_table("first-duplicate-rank-per-topic", decimals=0)
# write_table("first-duplicate-rank-per-topic", decimals=0, corpus="gov2")
# write_table("first-duplicate-rank-per-topic", decimals=0, corpus="clueweb09")
# write_table("first-irrelevant-duplicate-rank-per-topic", decimals=0)
write_table("first-irrelevant-duplicate-rank-per-topic", decimals=0, corpus="gov2")
write_table("first-irrelevant-duplicate-rank-per-topic", decimals=0, corpus="clueweb09")
|
notebooks/evaluation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# Title: Books on Tape - Author Classification from Dante and Shakespeare
#
# Given a line from either Shakespeare or Dante, we will train a machine learning model that will accurately predict the author.
#
# We will investigate three candidate models and compare their performance: Amazon’s BlazingText, Naive Bayes, and KNeighbors.
#
# We have chosen two scenes from the Merchant of Venice by <NAME> and two scenes from the Divine Comedy by <NAME>. These scenes will be transcribed using Amazon Transcribe. Word transcription data will then be processed and cleaned. Transcriptions with low confidence will be evaluated for removal. Data will then be explored and formatted.
#
# Formatted data will be used to train models and features selected. A second set of data will be chosen and processed for validation. Each model’s performance will be evaluated and compared.
#
# Questions and interesting thoughts we are now considering:
# 1. How well can Amazon Transcribe transcribe the unique language of Shakespeare? Will transcriptions with low confidence turn out to be revealing language for classifying authors?
# 2. Shakespeare is written in iambic pentameter. Dante is originally written in Terza Rima and often translated into iambic pentameter. Is meter a revealing feature for author classification? Can meter be used as a feature?
# 3. Features of interest - words common by one author or another; uniqueness as a feature; clusters of words; semantics as a feature - Dante will have a lot of “fiery, hell, burning”
from __future__ import print_function  # must precede all other statements in the cell
import sagemaker
from sagemaker import get_execution_role
import json
import boto3
import time
#transcribe chapters function
def transcribe_chapters(job_name, job_uri):
transcribe = boto3.client('transcribe')
output_bucket = bucket
transcribe.start_transcription_job(
TranscriptionJobName=job_name,
Media={'MediaFileUri': job_uri},
MediaFormat='mp3',
LanguageCode='en-US',
OutputBucketName=output_bucket
)
while True:
status = transcribe.get_transcription_job(TranscriptionJobName=job_name)
if status['TranscriptionJob']['TranscriptionJobStatus'] in ['COMPLETED', 'FAILED']:
break
print("Not ready yet...")
time.sleep(5)
print(status)
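The loop above follows a generic poll-until-terminal pattern; a sketch decoupled from `boto3`, with a `fetch_status` stand-in for `get_transcription_job` and the sleep made configurable so it can run instantly:

```python
import time

def wait_until_done(fetch_status, terminal=('COMPLETED', 'FAILED'),
                    max_polls=100, delay=0):
    """Poll fetch_status() until it returns a terminal state."""
    for _ in range(max_polls):
        status = fetch_status()
        if status in terminal:
            return status
        time.sleep(delay)
    raise TimeoutError('job did not reach a terminal state')

statuses = iter(['IN_PROGRESS', 'IN_PROGRESS', 'COMPLETED'])
print(wait_until_done(lambda: next(statuses)))
```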
# !aws transcribe list-transcription-jobs
# !aws transcribe delete-transcription-job --transcription-job-name $job_name
# +
#set session variables
sess = sagemaker.Session()
role = get_execution_role()
print(role) # This is the role that SageMaker would use to leverage AWS resources (S3, CloudWatch) on your behalf
bucket = 'crazycurlygirlbucket311' # Replace with your own bucket name if needed
print(bucket)
prefix = 'BookProphet' #Replace with the prefix under which you want to store the data if needed
# -
# get the files
# !wget https://etc.usf.edu/lit2go/audio/mp3/the-merchant-of-venice-005-merchant-of-venice-act-2-scene-1.589.mp3
# !wget https://etc.usf.edu/lit2go/audio/mp3/the-merchant-of-venice-014-merchant-of-venice-act-3-scene-1.600.mp3
# !wget http://www.archive.org/download/divine_comedy_librivox/divinecomedy_longfellow_05_dante.mp3
# !wget http://www.archive.org/download/divine_comedy_librivox/divinecomedy_longfellow_10_dante.mp3
# +
# save MP3 files to S3
MP3Location = prefix + '/MP3Files'
sess.upload_data(path='the-merchant-of-venice-005-merchant-of-venice-act-2-scene-1.589.mp3', bucket=bucket, key_prefix=MP3Location)
sess.upload_data(path='the-merchant-of-venice-014-merchant-of-venice-act-3-scene-1.600.mp3', bucket=bucket, key_prefix=MP3Location)
sess.upload_data(path='divinecomedy_longfellow_05_dante.mp3', bucket=bucket, key_prefix=MP3Location)
sess.upload_data(path='divinecomedy_longfellow_10_dante.mp3', bucket=bucket, key_prefix=MP3Location)
# -
#create dictionary of job names and uri
chapters = {
"merchant1": "s3://crazycurlygirlbucket311/BookProphet/MP3Files/the-merchant-of-venice-005-merchant-of-venice-act-2-scene-1.589.mp3",
"merchant2": "s3://crazycurlygirlbucket311/BookProphet/MP3Files/the-merchant-of-venice-014-merchant-of-venice-act-3-scene-1.600.mp3",
"divine1": "s3://crazycurlygirlbucket311/BookProphet/MP3Files/divinecomedy_longfellow_05_dante.mp3",
"divine2": "s3://crazycurlygirlbucket311/BookProphet/MP3Files/divinecomedy_longfellow_10_dante.mp3"
}
# transcribe chapters using function
for ch, uri in chapters.items():
transcribe_chapters(ch,uri)
# +
# #copy transcribe results to notebook for future reference
# !aws s3 cp s3://crazycurlygirlbucket311/divine1.json divine1.json
# !aws s3 cp s3://crazycurlygirlbucket311/divine2.json divine2.json
# !aws s3 cp s3://crazycurlygirlbucket311/merchant1.json merchant1.json
# !aws s3 cp s3://crazycurlygirlbucket311/merchant2.json merchant2.json
# -
|
bookprophet/.ipynb_checkpoints/dataextraction-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # ONLINE RETAIL dataset
# <NAME>
# 11603290
# Lovely Professional University
# # Libraries used
import math
import datetime as dt
import numpy as np
import pandas
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
import seaborn as sns
import missingno as msno
from sklearn import model_selection
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.preprocessing import scale, StandardScaler, normalize
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score
from sklearn.svm import SVC
# # Load Dataset
source="OnlineRetail.xlsx"
dataset=pd.read_excel(source)
dataset.head(5)
# # Data Insight
dataset_copy = dataset.copy()  # keep an untouched copy of the raw data
dataset.shape
dataset.info()
# # Missing data
msno.matrix(dataset)
# # Cleaning data
dataset=dataset.loc[pd.isnull(dataset.CustomerID) == False]
dataset.info()
msno.matrix(dataset)
dataset.describe(include='all')
#remove canceled orders
dataset=dataset[dataset['Quantity']>0]
#remove rows where customerID are NA
dataset.dropna(subset=['CustomerID'],how='all',inplace=True)
dataset.shape
# +
#Summary
# -
#exploring the unique values of each attribute
print("Number of transactions: ", dataset['InvoiceNo'].nunique())
print("Number of products bought: ",dataset['StockCode'].nunique())
print("Number of customers:", dataset['CustomerID'].nunique() )
print("Percentage of customers NA: ", round(dataset['CustomerID'].isnull().sum() * 100 / len(dataset),2),"%" )
# # RFM Analysis
# - RECENCY (R): Days since last purchase
# - FREQUENCY (F): Total number of purchases
# - MONETARY VALUE (M): Total money this customer spent.
# ## Recency
# To calculate recency, we need to choose a date point from which we evaluate **how many days ago was the customer's last purchase**.
#last date available in our dataset
dataset['InvoiceDate'].max()
now = dt.date(2011,12,9)
print(now)
#create a new column called date which contains the only the date of invoice
dataset['date'] = dataset['InvoiceDate'].dt.date
dataset.head()
# #### Recency dataset
#group by customers and check the last date of purchase
recency_data = dataset.groupby(by='CustomerID', as_index=False)['date'].max()
recency_data.columns = ['CustomerID','LastPurchaseDate']
recency_data.head()
#calculate recency
recency_data['Recency'] = recency_data['LastPurchaseDate'].apply(lambda x: (now - x).days)
recency_data.head()
#drop LastPurchaseDate as we don't need it anymore
recency_data.drop('LastPurchaseDate',axis=1,inplace=True)
recency_data.head()
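At its core the recency column is just a date difference; a minimal sketch with two hypothetical customers:

```python
import datetime as dt

now = dt.date(2011, 12, 9)  # same reference date as above
last_purchases = {'A': dt.date(2011, 12, 1), 'B': dt.date(2011, 6, 9)}
recency = {cid: (now - d).days for cid, d in last_purchases.items()}
print(recency)  # days since each customer's last purchase
```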
# ## Frequency
# Frequency helps us to know **how many times a customer purchased from us**. To do that we need to check how many invoices are registered by the same customer.
data_copy = dataset.copy()  # real copy, so deduplication does not mutate dataset
# drop duplicate invoices per customer
data_copy.drop_duplicates(subset=['InvoiceNo', 'CustomerID'], keep="first", inplace=True)
#calculate frequency of purchases
frequency_data = data_copy.groupby(by=['CustomerID'], as_index=False)['InvoiceNo'].count()
frequency_data.columns = ['CustomerID','Frequency']
frequency_data.head()
# ## Monetary
#
# The Monetary attribute answers the question: **How much money did the customer spend over time?**
#create column total cost
dataset['TotalCost'] = dataset['Quantity'] * dataset['UnitPrice']
monetary_data = dataset.groupby(by='CustomerID',as_index=False).agg({'TotalCost': 'sum'})
monetary_data.columns = ['CustomerID','Monetary']
monetary_data.head()
# ## Create RFM Table
# Merge recency, frequency, monetary data
rfm_data = recency_data.merge(frequency_data.merge(monetary_data,on='CustomerID'),on='CustomerID')
rfm_data.head()
#use CustomerID as index
rfm_data.set_index('CustomerID',inplace=True)
rfm_data.head()
# ## Applying 80-20 rule
# Pareto’s rule says **80% of the results come from 20% of the causes**.
#
# Similarly, **20% customers contribute to 80% of your total revenue**. Let's verify that because that will help us know which customers to focus on when marketing new products.
#get the 80% of the revenue
pareto_cutoff = rfm_data['Monetary'].sum() * 0.8
print("The 80% of total revenue is: ",round(pareto_cutoff,2))
customers_rank = rfm_data
# Create a new column that is the rank of the value of coverage in ascending order
customers_rank['Rank'] = customers_rank['Monetary'].rank(ascending=0)
#customers_rank.drop('RevenueRank',axis=1,inplace=True)
customers_rank.head()
# ### Top customers
customers_rank.sort_values('Rank',ascending=True)
#get top 20% of the customers
top_20_cutoff = 4339 *0.2
top_20_cutoff
#sum the monetary values over the customer with rank <=867
revenueByTop20 = customers_rank[customers_rank['Rank'] <= 867]['Monetary'].sum()
revenueByTop20
# The top 20% contribute to more than 80% of the revenue.
#
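The 80/20 check above boils down to: sort revenues, take the top 20% of customers, and compare their revenue share with 80%. A toy illustration with made-up revenues:

```python
revenues = sorted([500, 300, 90, 50, 30, 15, 10, 3, 1, 1], reverse=True)
total_revenue = sum(revenues)
top_20_count = max(1, int(len(revenues) * 0.2))
top_share = sum(revenues[:top_20_count]) / total_revenue
print(top_20_count, round(top_share, 2))  # top 2 customers carry 80% here
```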
# ## Customer segmentation based on RFM score
# I will give a score of 1 to 4 for the data in RFM model.
# 4 -> best/highest value
#
# 1 -> lowest/worst value
quantiles = rfm_data.quantile(q=[0.25,0.5,0.75])
quantiles
quantiles.to_dict()
# **Note:** it is clear that:
#
# - Higher Recency is bad.
# - Higher Frequency and Monetary are profitable,
#   and vice versa.
# Arguments (x = value, p = recency, monetary_value, frequency, d = quartiles dict)
def RScore(x,p,d):
if x <= d[p][0.25]:
return 4
elif x <= d[p][0.50]:
return 3
elif x <= d[p][0.75]:
return 2
else:
return 1
# Arguments (x = value, p = recency, monetary_value, frequency, d = quartiles dict)
def FMScore(x,p,d):
if x <= d[p][0.25]:
return 1
elif x <= d[p][0.50]:
return 2
elif x <= d[p][0.75]:
return 3
else:
return 4
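The scoring helpers only need the quartile cut-offs, so they can be checked against a plain dict of hypothetical cut-offs instead of the pandas `quantiles` frame (restating `RScore` verbatim so the sketch is self-contained):

```python
def RScore(x, p, d):
    if x <= d[p][0.25]:
        return 4
    elif x <= d[p][0.50]:
        return 3
    elif x <= d[p][0.75]:
        return 2
    else:
        return 1

# Hypothetical recency quartiles, in days since last purchase
demo_quartiles = {'Recency': {0.25: 17, 0.50: 50, 0.75: 142}}
print(RScore(10, 'Recency', demo_quartiles))   # very recent customer -> 4
print(RScore(200, 'Recency', demo_quartiles))  # long-inactive customer -> 1
```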
#create rfm segmentation table
rfm_segmentation = rfm_data
rfm_segmentation['R_Quartile'] = rfm_segmentation['Recency'].apply(RScore, args=('Recency',quantiles,))
rfm_segmentation['F_Quartile'] = rfm_segmentation['Frequency'].apply(FMScore, args=('Frequency',quantiles,))
rfm_segmentation['M_Quartile'] = rfm_segmentation['Monetary'].apply(FMScore, args=('Monetary',quantiles,))
rfm_segmentation.head()
#Combine the RFM scores
rfm_segmentation['RFMScore'] = rfm_segmentation.R_Quartile.map(str) + rfm_segmentation.F_Quartile.map(str) + rfm_segmentation.M_Quartile.map(str)
rfm_segmentation.head()
# Best Recency score = 4: most recent purchase. Best Frequency score = 4: most purchases. Best Monetary score = 4: spent the most.
#top 10 customers
rfm_segmentation[rfm_segmentation['RFMScore']=='444'].sort_values('Monetary', ascending=False).head(10)
#Classification based on these scores
print("Best Customers: ",len(rfm_segmentation[rfm_segmentation['RFMScore']=='444']))
print('Loyal Customers: ',len(rfm_segmentation[rfm_segmentation['F_Quartile']==4]))
print("Big Spenders: ",len(rfm_segmentation[rfm_segmentation['M_Quartile']==4]))
print('Almost Lost: ', len(rfm_segmentation[rfm_segmentation['RFMScore']=='244']))
print('Lost Customers: ',len(rfm_segmentation[rfm_segmentation['RFMScore']=='144']))
print('Lost Cheap Customers: ',len(rfm_segmentation[rfm_segmentation['RFMScore']=='111']))
# Now that we know our customer segments, we can choose how to target or deal with each segment.
#
# For example:
#
# **Best Customers**: Reward them. They can be early adopters of new products. Suggest "Refer a friend" promotions.
#
# **At Risk**: Send them personalized emails to encourage them to shop.
#
# reference: https://searchsalesforce.techtarget.com/definition/customer-segmentation
#
# ## Final data for prediction
# Add a column called **CustomerClass** with values ('Best', 'Loyal', 'BigSpender', 'AlmostLost', 'Lost', 'LostCheap')
#
def classifier(CustomerID , RFMScore, F_Quartile, M_Quartile, data):
if(data[RFMScore][CustomerID]=='444'):
return 'Best'
elif(data[F_Quartile][CustomerID]==4):
return 'Loyal'
elif(data[M_Quartile][CustomerID]==4):
return 'BigSpenders'
elif(data[RFMScore][CustomerID]=='244'):
return 'AlmostLost'
elif(data[RFMScore][CustomerID]=='144'):
return 'Lost'
elif(data[RFMScore][CustomerID]=='111'):
return 'LostCheap'
else:
return 'Others'
rfm_data.head()
copy=rfm_data
copy['CustomerID']=copy.index
copy['CustomerClass']=copy['CustomerID'].apply(classifier, args=('RFMScore','F_Quartile', 'M_Quartile', rfm_data))
copy.head(10)
copy.drop('CustomerID',axis=1,inplace=True)
copy.head()
final1 = rfm_data.copy(deep=True)
final2 = final1.copy(deep=True)
final2.drop('R_Quartile',axis=1,inplace=True)
final2.drop('F_Quartile',axis=1,inplace=True)
final2.drop('M_Quartile',axis=1,inplace=True)
final2.drop('RFMScore',axis=1,inplace=True)
final1.head(3)
final2.drop('Rank',axis=1,inplace=True)
final2.head(3)
# ## Final Datasets:
# final1, final2
# # Using SVM
final2['CustomerClass'].head()
final2.drop('CustomerClass',axis=1,inplace=True)
final2.corr()
final2['Class']=final1['CustomerClass']
final2.head()
final2.corr()
sns.heatmap(final2.corr())
sns.set_style("whitegrid")
sns.FacetGrid(final2, hue="Class", height=4).map(plt.scatter, "Recency", "Monetary").add_legend()
plt.show()
scatter_matrix(final2, alpha = 0.3, figsize = (21,10), diagonal = 'kde');
final2_r_log = np.log(final2['Recency']+0.1)
final2_f_log = np.log(final2['Frequency'])
final2_m_log = np.log(final2['Monetary']+0.1)
final2_c_log = final2['Class']
log_data=pd.DataFrame({'Monetary': final2_m_log,'Recency': final2_r_log,'Frequency': final2_f_log})
log_data['Class']=final2['Class']
log_data.head()
scatter_matrix(log_data, alpha = 0.3, figsize = (21,10), diagonal = 'kde');
array=log_data.values
X = array[:,0:3]  # all three RFM features: Monetary, Recency, Frequency
Y = array[:,3]
validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)
# ## Running SVM for different values of C
c = 0.5
#while(c<=10):
svm=SVC(kernel='rbf', C = c)
svm.fit(X_train, Y_train)
predictions = svm.predict(X_validation)
correct=0.0
for i in range(len(predictions)):
if(predictions[i]==Y_validation[i]):
correct+=1
accuracy=correct/len(predictions)
msg="C= %.1f -> accuracy = %f" % (c,accuracy)
print(msg)
#c+=0.1
svm=SVC(kernel='linear', C = c)
svm.fit(X_train, Y_train)
predictions = svm.predict(X_validation)
correct=0.0
for i in range(len(predictions)):
if(predictions[i]==Y_validation[i]):
correct+=1
accuracy=correct/len(predictions)
msg="C= %.1f -> accuracy = %f" % (c,accuracy)
print(msg)
|
script.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
df=pd.read_csv('car data.csv')
df.shape
print(df['Seller_Type'].unique())
print(df['Fuel_Type'].unique())
print(df['Transmission'].unique())
print(df['Owner'].unique())
##check missing values
df.isnull().sum()
df.describe()
final_dataset=df[['Year','Selling_Price','Present_Price','Kms_Driven','Fuel_Type','Seller_Type','Transmission','Owner']].copy()
final_dataset.head()
final_dataset['Current Year']=2020
final_dataset.head()
final_dataset['no_year']=final_dataset['Current Year']- final_dataset['Year']
final_dataset.head()
final_dataset.drop(['Year'],axis=1,inplace=True)
final_dataset.head()
final_dataset=pd.get_dummies(final_dataset,drop_first=True)
final_dataset.head()
final_dataset=final_dataset.drop(['Current Year'],axis=1)
final_dataset.head()
final_dataset.corr()
import seaborn as sns
sns.pairplot(final_dataset)
X=final_dataset.iloc[:,1:]
y=final_dataset.iloc[:,0]
X['Owner'].unique()
X.head()
y.head()
# +
### Feature Importance
from sklearn.ensemble import ExtraTreesRegressor
import matplotlib.pyplot as plt
model = ExtraTreesRegressor()
model.fit(X,y)
# -
print(model.feature_importances_)
#plot graph of feature importances for better visualization
feat_importances = pd.Series(model.feature_importances_, index=X.columns)
feat_importances.nlargest(5).plot(kind='barh')
plt.show()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
from sklearn import ensemble
clf = ensemble.GradientBoostingRegressor(n_estimators = 400, max_depth = 5, min_samples_split = 2,
learning_rate = 0.1, loss = 'ls')
clf.fit(X_train, y_train)
clf.score(X_test,y_test)
predictions=clf.predict(X_test)
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
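The three error metrics printed above are simple averages and can be computed by hand; a sketch with made-up values:

```python
from math import sqrt

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

actual = [3.0, 5.0, 2.0]
predicted = [2.5, 5.5, 2.0]
print('MAE:', mae(actual, predicted))
print('MSE:', mse(actual, predicted))
print('RMSE:', sqrt(mse(actual, predicted)))
```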
# +
# Import XGBoost Regressor
from xgboost import XGBRegressor
#Create a XGBoost Regressor
reg = XGBRegressor()
# Train the model using the training sets
reg.fit(X_train, y_train)
# -
# Model prediction on test data
y_predict = reg.predict(X_test)
reg.score(X_test,y_test)
print('MAE:', metrics.mean_absolute_error(y_test, y_predict))
print('MSE:', metrics.mean_squared_error(y_test, y_predict))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, y_predict)))
from sklearn.ensemble import RandomForestRegressor
regressor=RandomForestRegressor()
n_estimators = [int(x) for x in np.linspace(start = 100, stop = 1200, num = 12)]
print(n_estimators)
from sklearn.model_selection import RandomizedSearchCV
# +
#Randomized Search CV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 100, stop = 1200, num = 12)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(5, 30, num = 6)]
# max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10, 15, 100]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 5, 10]
# +
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf}
print(random_grid)
# -
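What `RandomizedSearchCV` does with this grid is, at its core, drawing `n_iter` random combinations; a toy sketch of just the sampling step (not the scikit-learn internals):

```python
import random

random.seed(42)  # reproducible draws
grid = {'n_estimators': [100, 300, 600], 'max_depth': [5, 10, 20]}

def sample_params(grid, n_iter):
    return [{k: random.choice(v) for k, v in grid.items()} for _ in range(n_iter)]

candidates = sample_params(grid, 3)
print(len(candidates))
print(all(c['n_estimators'] in grid['n_estimators'] for c in candidates))
```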
# Use the random grid to search for best hyperparameters
# First create the base model to tune
rf = RandomForestRegressor()
# Random search of parameters, using 5-fold cross validation,
# across 10 randomly drawn combinations
rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid,scoring='neg_mean_squared_error', n_iter = 10, cv = 5, verbose=2, random_state=42, n_jobs = 1)
rf_random.fit(X_train,y_train)
rf_random.best_params_
rf_random.best_score_
rf_random.score(X_test, y_test)
predictions=rf_random.predict(X_test)
sns.distplot(y_test-predictions)
plt.scatter(y_test,predictions)
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
# +
import pickle
# open a file where you want to store the model
file = open('random_forest_regression_model.pkl', 'wb')
# dump information to that file
pickle.dump(rf_random, file)
# -
|
car.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
def comparison_plot2D(
u, f, # Function expressions in x and y
value=0.5, # x or y equals this value
variation='y', # independent variable
n=100, # no of intervals in plot
tol=1E-8, # tolerance for points inside the domain
plottitle='', # heading in plot
filename='tmp', # stem of filename
):
"""
Plot u and f along a line in x or y dir with n intervals
and a tolerance of tol for points inside the domain.
"""
v = np.linspace(-1+tol, 1-tol, n+1)
# Compute points along specified line:
points = np.array([(value, v_)
if variation == 'y' else (v_, value)
for v_ in v])
u_values = [u(point) for point in points] # eval. Function
f_values = [f(point) for point in points]
plt.figure()
plt.plot(v, u_values, 'r-', v, f_values, 'b--')
plt.legend(['u', 'f'], loc='upper left')
if variation == 'y':
plt.xlabel('y'); plt.ylabel('u, f')
else:
plt.xlabel('x'); plt.ylabel('u, f')
plt.title(plottitle)
plt.savefig(filename + '.pdf')
plt.savefig(filename + '.png')
import fenics as fe
import sympy as sym
x, y = sym.symbols('x[0] x[1]')
def problem(f, nx=8, ny=8, degrees=[1,2]):
"""
Plot u along x=const or y=const for Lagrange elements,
of given degrees, on a nx times ny mesh. f is a SymPy expression.
"""
f = sym.printing.ccode(f)
f = fe.Expression(f, degree=2)
    mesh = fe.RectangleMesh(
        fe.Point(-1, 0), fe.Point(1, 2), nx, ny)
for degree in degrees:
if degree == 0:
# The P0 element is specified like this in FEniCS
V = fe.FunctionSpace(mesh, 'DG', 0)
else:
# The Lagrange Pd family of elements, d=1,2,3,...
V = fe.FunctionSpace(mesh, 'P', degree)
u = fe.project(f, V)
u_error = fe.errornorm(f, u, 'L2')
        print('||u-f||=%g (P%d elements)' % (u_error, degree))
comparison_plot2D(
u, f,
n=50,
value=0.4, variation='x',
plottitle='Approximation by P%d elements' % degree,
filename='approx_fenics_by_P%d' % degree,
tol=1E-3)
#fe.plot(u, title='Approx by P%d' % degree)
if __name__ == '__main__':
# x and y are global SymPy variables
    #f = 2*x*y - x**16  # alternative, higher-degree test function
    f = 2*x*y - x**2
problem(f, nx=2, ny=2, degrees=[0, 1, 2])
plt.show()
# -
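As a rough 1D analogue of what `fe.errornorm` measures above (an assumption for illustration, not the FEniCS internals): interpolate `f(x) = x**2` with piecewise-linear "elements" and check that refining the mesh shrinks the L2 error.

```python
import numpy as np

def l2_error(f, n_elements, n_quad=10001):
    nodes = np.linspace(-1.0, 1.0, n_elements + 1)
    xq = np.linspace(-1.0, 1.0, n_quad)
    interp = np.interp(xq, nodes, f(nodes))  # P1-like interpolant
    # crude uniform-grid approximation of the L2 norm over [-1, 1]
    return float(np.sqrt(np.mean((f(xq) - interp) ** 2) * 2.0))

f_test = lambda x: x ** 2
print(l2_error(f_test, 8) < l2_error(f_test, 4))  # finer mesh, smaller error
```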
|
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/EXAMPLES/FINITE_ELEMENTS/INTRO/SRC/02_APPROX_FENICS.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
import tensorflow as tf
from tasks.tasks import SvAgreementLM, WordSvAgreementLM, WordSvAgreementVP
from tf2_models.lm_transformer import LmGPT2, ClassifierGPT2
from util.config_util import get_model_params, get_task_params, get_train_params
from tf2_models.lm_lstm import LmLSTM, LmLSTMSharedEmb, ClassifierLSTM
from tf2_models.trainer import Trainer
from absl import app
from absl import flags
from util import constants
from collections import Counter
from tqdm import tqdm
import numpy as np
from tf2_models.metrics import *
MODELS = {"lm_lstm": LmLSTM,
"lm_gpt2": LmGPT2,
"lm_lstm_shared_emb": LmLSTMSharedEmb,
'cl_gpt2': ClassifierGPT2,
'cl_lstm': ClassifierLSTM}
# +
log_dir = "../logs"
chkpt_dir = "../tf_ckpts"
exp_name = "nol2_batchsumloss"
task = WordSvAgreementLM(task_params=get_task_params(),data_dir='../data')
# +
model_config = 'lstm_drop30_v2'
model_name = 'lm_lstm_shared_emb'
train_config ='radam_fast'
# Create the Model
model_params = get_model_params(task,model_name, model_config)
print("model_params: ", model_params.__dict__)
model = MODELS[model_name](hparams=model_params)
trainer_params = get_train_params(train_config)
log_dir = os.path.join(log_dir,task.name, model.model_name+"_"+str(model_config)+"_"+str(trainer_params.learning_rate)+"_"+exp_name)
ckpt_dir = os.path.join(chkpt_dir,task.name, model.model_name+"_"+str(model_config)+"_"+str(trainer_params.learning_rate)+"_"+exp_name)
print(log_dir)
trainer = Trainer(task=task,
model=model,
                  train_params=trainer_params,
log_dir=log_dir,
ckpt_dir=ckpt_dir)
trainer.restore()
# -
trainer.model.evaluate(task.test_dataset, steps=100)
for x,y in task.test_dataset:
print(len(x))
mask = tf.cast(y > 0, dtype=tf.float32)
logits = model(x)
print(logits.shape)
logits = tf.reshape(logits, (-1, logits.shape[-1]))
targets = tf.reshape(y, (-1, 1))
mask = tf.reshape(mask, (-1, 1))
correct = tf.cast(tf.argmax(model(x), axis=-1) == y, dtype=tf.float32)
print(logits.shape)
print(targets.shape)
print(model.loss)
print(model.loss(y_pred=logits, y_true=targets))
print(tf.reduce_sum(masked_sequence_loss(y_pred=logits, y_true=targets)))
break
def gen_inflect_from_vocab(vocab_file, freq_threshold=1000):
vbp = {}
vbz = {}
nn = {}
nns = {}
from_pos = {'NNS': nns, 'NN': nn, 'VBP': vbp, 'VBZ': vbz}
for line in open(vocab_file):
if line.startswith(' '): # empty string token
continue
word, pos, count = line.strip().split()
count = int(count)
if len(word) > 1 and pos in from_pos and count >= freq_threshold:
from_pos[pos][word] = count
verb_infl = {'VBP': 'VBZ', 'VBZ': 'VBP'}
for word, count in vbz.items():
candidate = infl_eng.plural_verb(word)
if candidate in vbp:
verb_infl[candidate] = word
verb_infl[word] = candidate
noun_infl = {'NN': 'NNS', 'NNS': 'NN'}
for word, count in nn.items():
candidate = infl_eng.plural_noun(word)
if candidate in nns:
noun_infl[candidate] = word
noun_infl[word] = candidate
return verb_infl, noun_infl
# +
from util import inflect
infl_eng = inflect.engine()
dependency_fields = ['sentence', 'orig_sentence', 'pos_sentence',
'subj', 'verb', 'subj_pos', 'has_rel', 'has_nsubj',
'verb_pos', 'subj_index', 'verb_index', 'n_intervening',
'last_intervening', 'n_diff_intervening', 'distance',
'max_depth', 'all_nouns', 'nouns_up_to_verb']
verb_infl, noun_infl = gen_inflect_from_vocab('wiki.vocab')
# +
distance_hits = Counter()
distance_total = Counter()
diff_hits = Counter()
diff_total = Counter()
test_data = task.databuilder.as_dataset(split='test', batch_size=1000)
e = 0
for example in tqdm(test_data):
    e += 1
    encoded_sentences = example['sentence']
    s_shape = tf.shape(encoded_sentences)
    batch_size, length = s_shape[0], s_shape[1]
    bos = tf.ones((batch_size, 1), dtype=tf.int64) * task.databuilder.sentence_encoder().encode(constants.bos)
    eos = tf.ones((batch_size, 1), dtype=tf.int64) * task.databuilder.sentence_encoder().encode(constants.eos)
    encoded_sentences = tf.concat([bos, encoded_sentences, eos], axis=1)
    actual_verbs = example['verb']
    inflected_verbs = [verb_infl[v.decode("utf-8")] for v in actual_verbs.numpy()]
    verb_indexes = example['verb_position'] - 1
    distances = example['distance'].numpy()
    nz = example['n_intervening'].numpy()
    n_diffs = example['n_diff_intervening'].numpy()
    sentence = task.databuilder.sentence_encoder().decode(encoded_sentences[0])
    actual_verb_indexes = [task.databuilder.sentence_encoder().encode(v)[0] for v in actual_verbs.numpy()]
    inflected_verb_indexes = [task.databuilder.sentence_encoder().encode(v)[0] for v in inflected_verbs]
    scores = model(encoded_sentences)
    actual_batch_indexes = [(i, verb_indexes[i], actual_verb_indexes[i]) for i in range(len(verb_indexes))]
    actual_scores = tf.compat.v2.gather_nd(scores, actual_batch_indexes)
    inflected_batch_indexes = [(i, verb_indexes[i], inflected_verb_indexes[i]) for i in range(len(verb_indexes))]
    inflected_scores = tf.compat.v2.gather_nd(scores, inflected_batch_indexes)
    corrects = actual_scores > inflected_scores
    for i, c in enumerate(corrects):
        if verb_indexes[i] == 10035:
            continue
        if nz[i] > 4 or distances[i] > 16:
            continue
        distance_total[distances[i]] += 1
        distance_hits[distances[i]] += int(c)
        if nz[i] == n_diffs[i]:
            n = nz[i]
            diff_total[n] += 1
            diff_hits[n] += int(c)
# +
dis_acc = np.zeros(17)
dif_acc = np.zeros(5)
print('Accuracy by distance')
for k in sorted(distance_hits.keys()):
    v = distance_hits[k]
    acc = v / distance_total[k]
    dis_acc[k] = acc
    print("%d | %.2f" % (k, acc), distance_total[k])
print('Accuracy by intervenings')
for k in sorted(diff_hits.keys()):
    v = diff_hits[k]
    acc = v * 1. / diff_total[k]
    print("%d | %.2f" % (k, acc), diff_total[k])
    dif_acc[k] = acc
stats = {'distance': dis_acc, 'intervenings': dif_acc}
# -
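The bucketed accuracies above follow a simple hits/total pattern over `Counter`s; a minimal stand-alone version with made-up counts:

```python
from collections import Counter

hits = Counter({1: 9, 2: 4, 3: 0})    # correct predictions per bucket (hypothetical)
total = Counter({1: 10, 2: 8, 3: 5})  # all predictions per bucket

# Accuracy per bucket, keyed by bucket id
acc = {k: hits[k] / total[k] for k in sorted(total)}
print(acc)  # {1: 0.9, 2: 0.5, 3: 0.0}
```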
task.databuilder.sentence_encoder().encode("unk")
# +
import matplotlib.pyplot as plt
plt.style.use('classic')
# %matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
sns.set()
d = dis_acc
lists = sorted(enumerate(d))  # dis_acc is an ndarray (no .items()), so pair each distance index with its accuracy
x, y = zip(*lists)  # unpack a list of pairs into two tuples
plt.plot(x, y)
# -
|
notebooks/eval_sv_agreement.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ObserveRTC/sample-reports/blob/main/Observer_BigQuery_Demo_Report.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="_fj5lTVymC8R"
# # ObserveRTC Demo Reports
#
# This notebook contains sample queries based on the default ObserveRTC Scheme v1.
#
# Colab uses BigQuery magics for simple querying from a cell.
#
# Modify these reports to suit your own needs
# + [markdown] id="0_ZoeLtAs_fr"
# # Global Report Setup
# + [markdown] id="zNCqGs352HUn"
# Run this first to Authenticate!
# + colab={"base_uri": "https://localhost:8080/"} id="SprBEOnU2E7A" outputId="d554989c-319b-433a-a942-e1c138274f58"
from google.colab import auth
auth.authenticate_user()
print('Authenticated')
# + [markdown] id="qAMlx3Sl2I2G"
# ## Styling fixes
# + id="SZEe0MhE5fDO" colab={"base_uri": "https://localhost:8080/"} outputId="d7beb57b-6396-4c82-933b-460d12bf9284"
# Load this for filterable tables
# %load_ext google.colab.data_table
# + [markdown] id="A15bAlZjKNhs"
# Do this for darkmode only
# + [markdown] id="RG8lKL-inlRb"
#
# + id="knpt6Ut41L_M"
# If you use darkmode in colab do this to see Matlab's plots
# change to a different style as needed: https://matplotlib.org/3.1.1/gallery/index.html?highlight=style%20sheet#style-sheets
from matplotlib import style
style.use('dark_background')
# + [markdown] id="bz7AU-a3nu6H"
# ## Global report parameters
# + [markdown] id="keqikslJ7Yjl"
# Setup some parameters for all the reports. All of the following queries rely on these variables, so run this first.
# + id="yFYO_7Q9KZ6m"
# Specify specific project parameters
project = "observertc"
dataset = "demo_v1"
# + id="P3Q5LJCYnyvK"
# + id="LUtMIa-UJz1P"
# set the default dataset
from google.cloud.bigquery import magics
magics.context.default_query_job_config.default_dataset = project + "." + dataset
# + id="tc6M0IYkJrGh"
# Specify date ranges
# ToDo: use forms for this
from datetime import date, timedelta
today = date.today()
yesterday = today - timedelta(days = 1)
last_seven = today - timedelta(days = 7)
last_thirty = today - timedelta(days = 30)
# + id="UURIhDNUzLVP" colab={"base_uri": "https://localhost:8080/"} outputId="b71488b7-f464-48d5-e659-c3714966dbde"
# Build the parameters
params = {
    'startDate': last_thirty.strftime("%Y-%m-%d"),
    'endDate': yesterday.strftime("%Y-%m-%d")
}
params
# + [markdown] id="Eo8JzkHhPgx1"
# # Usage Analysis
# + [markdown] id="O923POWz1auU"
# ### Completed calls summaries by service & mediaUnit per day
# This aggregate table is taken from the `FinishedCalls` and `DetachedPeerConnections` tables. Aggregating this data saves on query costs.
#
# + colab={"base_uri": "https://localhost:8080/"} id="XekyIj7ytC7q" outputId="c47933e8-9bb9-4546-892d-373157cb07b3"
# %%bigquery df --project $project --verbose --params $params
SELECT
f.serviceName,
mediaUnitID,
COUNT( DISTINCT f.callUUID) AS calls,
COUNT( DISTINCT d.browserID) as users, #if userID is populated use that instead
COUNT( DISTINCT d.peerConnectionUUID) as pcs,
COUNT( DISTINCT d.peerConnectionUUID) / COUNT( DISTINCT f.callUUID) as pcPerCall,
EXTRACT(MONTH FROM f.timestamp ) AS Month,
EXTRACT(DAY FROM f.timestamp) AS Day,
CONCAT(
LPAD( CAST(EXTRACT(MONTH FROM f.timestamp ) AS STRING), 2, '0'),
"-",
LPAD( CAST(EXTRACT(DAY FROM f.timestamp ) AS STRING), 2, '0')
) AS moDay
FROM `FinishedCalls` AS f
INNER JOIN `DetachedPeerConnections` AS d ON f.callUUID = d.callUUID
WHERE
f.timestamp >= TIMESTAMP(@startDate) AND
f.timestamp <= TIMESTAMP(@endDate) AND
f.callName <> ""
GROUP BY
f.serviceName,
mediaUnitID,
moDay,
Month,
Day
ORDER BY
moDay
# + colab={"base_uri": "https://localhost:8080/"} id="-fpidub_1l7O" outputId="d0f1fd42-3543-404f-d127-c01775c4241e"
# Run `df` in any cell by itself if you want to see the raw data table
# make sure to run `%load_ext google.colab.data_table` above if you want to make the table display filterable
df
# + [markdown] id="Iz5YlLyZ-oyR"
# ### Usage summary chart
# + [markdown] id="xIfnQBlhvCn5"
# The chart below is helpful for data checks and for spotting anomalies across the metrics. See the charts further down for views that do not mix units.
#
# ---
#
#
# + colab={"base_uri": "https://localhost:8080/"} id="t01mx6Kr-oQp" outputId="a709810b-b100-4812-dcb4-cf4a135955e0"
df.plot.line(x="moDay", figsize=(20, 7), grid=True, title="Finished call metrics per Day")
# + [markdown] id="bz_3FCYm2Enl"
# ## Calls per day per server
# + colab={"base_uri": "https://localhost:8080/"} id="uA2Ynd9c2ypC" outputId="6551e645-a5a6-4e47-afe0-c1ca458a3da7"
pivot_table = df.pivot(index="moDay", columns="mediaUnitID", values="calls")
pivot_table.plot.bar(stacked=False, figsize=(20, 7), grid=True, title="Calls per Server per Day")
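The pivot-then-plot pattern used throughout these report cells can be illustrated with a tiny stand-in DataFrame (hypothetical data, same column names as the query results):

```python
import pandas as pd

df = pd.DataFrame({
    "moDay": ["01-01", "01-01", "01-02", "01-02"],
    "mediaUnitID": ["east", "west", "east", "west"],
    "calls": [4, 2, 5, 3],
})

# One row per day, one column per server — ready for a grouped bar plot
pivot_table = df.pivot(index="moDay", columns="mediaUnitID", values="calls")
print(pivot_table.loc["01-02", "west"])  # 3
```

Note that `DataFrame.pivot` requires each (index, columns) pair to be unique, which the GROUP BY in the query guarantees.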
# + id="peeRJx1N9qbu" colab={"base_uri": "https://localhost:8080/"} outputId="9a6b871f-cf44-432c-ac76-25dc8017f00b"
# run this to see the table
pivot_table
# + [markdown] id="XyM8E19h0w5z"
# ### Unique Users per day
# + colab={"base_uri": "https://localhost:8080/"} id="S9UbDMuy9qGK" outputId="93218418-6ae0-4a51-9b81-c528439e5b20"
pivot_table = df.pivot(index="moDay", columns="mediaUnitID", values="users")
pivot_table.plot.bar(stacked=False, figsize=(20, 7), grid=True, title="Users per Server per Day")
# + [markdown] id="v79Z65u5Rz6o"
# ## PeerConnections per Call
# + colab={"base_uri": "https://localhost:8080/", "height": 494} id="35-3Plxt-I3p" outputId="507ad9e3-5766-4136-bfab-34ee2aa2580c"
# use pandas to filter the data some more; use `dff` instead of `df` for filters
#dff = df.query("mediaUnitID == 'staging' or mediaUnitID == 'meet-us-east' or mediaUnitID == 'meet-us-west'")
pivot_table = df.pivot(index="moDay", columns="mediaUnitID", values="pcPerCall")
pivot_table.plot.bar(figsize=(20, 7), grid=True, title="PeerConnections per Call per Day")
# + [markdown] id="MroyiDQ-dOe8"
# ## Users Per Call
# + [markdown] id="-NxOqdF64QjL"
# Build a per-call summary and then find the max & average number of distinct browser IDs per call with more than 1 participant
# + colab={"base_uri": "https://localhost:8080/"} id="EB0rXcrSvU-k" outputId="3c571109-3390-4bf1-b687-d1cb1fe6f938"
# %%bigquery df --project $project --verbose --params $params
SELECT
serviceName,
MAX(users) AS maxUsersPerCall,
AVG(users) as avgUsersPerCall,
Month,
Day,
CONCAT(
LPAD( CAST(Month as STRING), 2, '0'),
"-",
LPAD( CAST(Day as STRING), 2, '0') ) AS moDay
FROM (
SELECT
serviceName,
COUNT(DISTINCT browserID) AS users,
EXTRACT(MONTH FROM timestamp ) AS Month,
EXTRACT(DAY FROM timestamp) AS Day
FROM
`DetachedPeerConnections`
WHERE
# serviceName = @serviceName AND
timestamp >= TIMESTAMP(@startDate) AND
timestamp <= TIMESTAMP(@endDate)
GROUP BY
serviceName,
callUUID,
Month,
Day
# Ignore 1-person callIDs
HAVING users > 1
)
GROUP BY serviceName, moDay, Month, Day
ORDER BY moDay
# + colab={"base_uri": "https://localhost:8080/", "height": 623} id="AFme0wKwuEQ-" outputId="95859eb6-995c-4b0e-a99a-8922a8b2ea26"
df
# + colab={"base_uri": "https://localhost:8080/", "height": 475} id="Wuly62GTdWGX" outputId="f0cc6362-8edd-4d69-9831-3b1185ef71ea"
# use pandas to filter the data some more; use `dff` instead of `df` for filters
# dff = df.query("mediaUnitID == 'meet1' or mediaUnitID == 'meet-us-east' or mediaUnitID == 'meet-us-west'")
pivot_table = df.pivot(index="moDay", columns="serviceName", values=["avgUsersPerCall", "maxUsersPerCall"])
pivot_table.plot(figsize=(20, 7), grid=True, title="Users per Call per Day")
# + [markdown] id="CtBcYj4fouwy"
# # ToDo: Concurrent reports
# These need to be added back to the BigQuery scheme.
# See Grafana for this data for now.
# + [markdown] id="78sZN7HRPGe6"
# ## Peak concurrent Streams per mediaUnit
#
# + id="pL1Tdw7DPF_1"
# %%bigquery df --project $project --verbose --params $params
SELECT
EXTRACT(MONTH FROM timestamp) AS Month,
EXTRACT(DAY FROM timestamp) AS Day,
CONCAT(
LPAD( CAST(EXTRACT(MONTH FROM timestamp ) AS STRING), 2, '0'),
"-",
LPAD( CAST(EXTRACT(DAY FROM timestamp ) AS STRING), 2, '0')
) AS moDay,
mediaUnitID,
MAX(peerConnectionsNum) as maxPCs
FROM `ConcurrentStreams`
WHERE
timestamp >= TIMESTAMP(@startDate) AND
timestamp <= TIMESTAMP(@endDate)
GROUP BY moDay, Month, Day, mediaUnitID
ORDER by moDay ASC
# + id="TCqvz5MkSswM" colab={"base_uri": "https://localhost:8080/", "height": 623} outputId="940f7ec3-8a34-4ff4-f3a0-5b825a54b38a"
df
# + colab={"base_uri": "https://localhost:8080/", "height": 495} id="fEF5fc8STCb6" outputId="2ba6fbfd-7aac-4aa4-d479-629f4de0c58a"
# use pandas to filter the data some more
dff = df.query("mediaUnitID == 'meet1' or mediaUnitID == 'meet-us-east' or mediaUnitID == 'meet-us-west'")
pivot_table = dff.pivot(index="moDay", columns="mediaUnitID", values="maxPCs")
pivot_table.plot.bar(figsize=(20, 7), grid=True, title="Peak PeerConnections per MediaUnit per Day")
# + [markdown] id="eRqGpIAPQBKM"
# # Quality Metrics
# + [markdown] id="dcbNimuv1vSD"
# ## Average RTT per Server
# A quick analysis of RTT values.
# + colab={"base_uri": "https://localhost:8080/"} id="xfCBG54p10SV" outputId="4e2112ae-db12-49cc-9950-aed53819405d"
# %%bigquery df --project $project --verbose --params $params
SELECT
EXTRACT(MONTH FROM timestamp) AS Month,
EXTRACT(DAY FROM timestamp) AS Day,
CONCAT(
LPAD( CAST(EXTRACT(MONTH FROM timestamp ) AS STRING), 2, '0'),
"-",
LPAD( CAST(EXTRACT(DAY FROM timestamp) AS STRING), 2, '0')
) AS moDay,
r.mediaUnitID,
AVG(roundTripTime) as avgRTT
FROM `RemoteInboundRTPSamples` r
# Do something like the below to only check peerConnections used in actual calls
# INNER JOIN `MyNewWebRTC.JoinedPeerConnections` p ON p.peerConnectionUUID = r.peerConnectionUUID
# INNER JOIN `MyNewWebRTC.FinishedCalls` f ON f.callUUID = p.callUUID
WHERE
timestamp >= TIMESTAMP(@startDate) AND
timestamp <= TIMESTAMP(@endDate) AND
roundTripTime > 0 and roundTripTime < 2 # should I cap this?
GROUP BY moDay, mediaUnitID, Month, Day
ORDER by moDay ASC
# + colab={"base_uri": "https://localhost:8080/", "height": 475} id="IEJp4GYiILPI" outputId="43b4da39-ce44-44af-a878-e5905c05b4e1"
# extra filter
# dff = df.query("mediaUnitID == 'meet1' or mediaUnitID == 'meet-us-east' or mediaUnitID == 'meet-us-west'")
pivot = df.pivot(index="moDay", columns='mediaUnitID', values='avgRTT')
pivot.plot(figsize=(20, 7), grid=True, title="Average RTT per Day")
# + [markdown] id="VGS6HTdHQWPM"
# ## Top quality limiting reasons
# + [markdown] id="I-Fk2gRuQo6J"
# ***WARNING: this is usually an expensive report***
# + colab={"base_uri": "https://localhost:8080/"} id="KBgCVYz_QePB" outputId="e2da394a-3487-4de5-a0b1-981007341405"
# %%bigquery df --project $project --verbose --params $params
SELECT
# serviceName,
mediaUnitId,
qualityLimitationReason,
SUM(qty) as qty,
Month,
Day,
CONCAT(
LPAD( CAST(Month as STRING), 2, '0'),
"-",
LPAD( CAST(Day as STRING), 2, '0') ) AS moDay
FROM (
SELECT
# serviceName,
mediaUnitId,
qualityLimitationReason,
COUNT(*) as qty,
EXTRACT(MONTH FROM timestamp ) AS Month,
EXTRACT(DAY FROM timestamp) AS Day
FROM
`OutboundRTPSamples`
WHERE
# serviceName = @serviceName AND
timestamp >= TIMESTAMP(@startDate) AND
timestamp <= TIMESTAMP(@endDate)
GROUP BY
# serviceName,
mediaUnitId,
qualityLimitationReason,
Month,
Day
) t
GROUP BY
# serviceName,
mediaUnitId, qualityLimitationReason, moDay, Month, Day
ORDER BY moDay
# + colab={"base_uri": "https://localhost:8080/", "height": 623} id="G47OCs0EVs03" outputId="9259d330-78cf-4779-b533-045bf36e9185"
df
# + colab={"base_uri": "https://localhost:8080/", "height": 475} id="kZGqvm1kTcRo" outputId="0e0a67f7-68a4-434f-d88d-3d68409e3831"
# ToDo: redo as a % of group per day, remove NULL & NONE
#dff = df.query("qualityLimitationReason != 'NULL' and qualityLimitationReason != 'NONE'")
pivot = df.pivot_table(index="moDay", columns=["mediaUnitId", "qualityLimitationReason"], values="qty", aggfunc="mean")
pivot.plot(figsize=(20, 7), grid=True, title="Quality Limitation Reasons by Day")
|
Observer_BigQuery_Demo_Report.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data preparation
# +
import pandas as pd
import numpy as np
train = pd.read_table('../datasets/ml-100k/u1.base',
sep='\t', header=None).iloc[:, :3].values
test = pd.read_table('../datasets/ml-100k/u1.test',
sep='\t', header=None).iloc[:, :3].values
n_users, n_items = 943+1, 1682+1  # ids in the data start at 1
n_samples = train.shape[0]
print(train.shape, test.shape)
# -
# # Model basics
# ## Parameter setup
# +
k = 20  # number of latent factors
glob_mean = np.mean(train[:, 2])  # global mean rating
bi = np.zeros(n_items)
bu = np.zeros(n_users)
qi = np.zeros((n_items, k))
pu = np.zeros((n_users, k))
# lookup dictionaries, to avoid building a large sparse matrix
item_user_dict = dict()
user_item_dict = dict()
# -
for sample in train:
    user_id, item_id, rating = sample
    item_user_dict.setdefault(item_id, {})
    user_item_dict.setdefault(user_id, {})
    item_user_dict[item_id][user_id] = rating
    user_item_dict[user_id][item_id] = rating
# ## Training code
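The loop below runs plain SGD on the regularized squared error of the biased matrix-factorization model; written out (with learning rate η = `lr`, regularization weight α = `alpha`, and global mean μ = `glob_mean`):

```latex
\hat{r}_{ui} = \mu + b_u + b_i + p_u^{\top} q_i , \qquad
e_{ui} = r_{ui} - \hat{r}_{ui}
\\
b_u \leftarrow b_u + \eta \,(e_{ui} - \alpha\, b_u) , \qquad
b_i \leftarrow b_i + \eta \,(e_{ui} - \alpha\, b_i)
\\
q_i \leftarrow q_i + \eta \,(e_{ui}\, p_u - \alpha\, q_i) , \qquad
p_u \leftarrow p_u + \eta \,(e_{ui}\, q_i - \alpha\, p_u)
```

The factor updates should use each other's pre-update values, which is why the code stashes the old `qi[item_id]` in `tmp` before updating it.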
# +
max_iter = 20  # number of iterations
lr = 0.01  # learning rate
alpha = 0.1  # regularization coefficient
for epoch in range(max_iter):
    MSE = 0
    random_idxs = np.random.permutation(n_samples)
    for idx in random_idxs:
        user_id, item_id, rating = train[idx]
        y_pred = glob_mean + bi[item_id] + bu[user_id] + \
            np.dot(pu[user_id], qi[item_id].T)
        err = rating - y_pred
        MSE += err**2
        bu[user_id] += lr*(err - alpha*bu[user_id])
        bi[item_id] += lr*(err - alpha*bi[item_id])
        tmp = qi[item_id]
        qi[item_id] += lr*(err*pu[user_id] - alpha*qi[item_id])
        pu[user_id] += lr*(err*tmp - alpha*pu[user_id])
    MSE /= n_samples
    print(epoch, MSE)
# -
# ## Test error
# +
Y_pred = list()
test_mse = 0
for sample in test:
    user_id, item_id, rating = sample
    y_pred = glob_mean + bi[item_id] + bu[user_id] + \
        np.dot(pu[user_id], qi[item_id].T)
    test_mse += (rating - y_pred)**2
test_mse /= len(test)
print(test_mse)
# -
|
recommend/LFM.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.9 64-bit
# language: python
# name: python3
# ---
# +
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
# Ensure we begin with defaults before making modifications
mpl.rcdefaults()
# Set the color palette with seaborn
sns.set_palette("Set1")
my_palette = sns.color_palette("Set1")
# Set up general parameters for plotting
mpl.rcParams['font.family'] = 'sans-serif'
mpl.rcParams['font.size'] = 8
mpl.rcParams['font.weight'] = 'light'
mpl.rcParams['font.sans-serif'] = 'Gill Sans Nova'
mpl.rcParams['figure.figsize'] = (3.375,2.25) #inches
mpl.rcParams['axes.spines.right'] = False
mpl.rcParams['axes.spines.top'] = False
mpl.rcParams['axes.linewidth'] = 0.5
mpl.rcParams['lines.linewidth'] = 0.5
mpl.rcParams['lines.markersize'] = 1
mpl.rcParams['axes.unicode_minus'] = True
mpl.rcParams['xtick.major.size'] = 3
mpl.rcParams['xtick.major.width'] = 0.5
mpl.rcParams['ytick.major.size'] = 3
mpl.rcParams['ytick.major.width'] = 0.5
mpl.rcParams['savefig.transparent'] = False
mpl.rcParams['legend.frameon'] = False
mpl.rcParams['savefig.format'] = 'pdf'
# +
lamb = 1
delta = 0.0001
plt.clf()
xs = np.arange(-lamb, 2*lamb, 0.01)
def lorentzian(x, delta):
    return delta / np.pi / (x**2 + delta**2)
def heavisidetheta(x):
    if x > 0:
        return 1
    else:
        return 0
def f(x, lamb, delta):
    # use the x argument rather than the module-level xs
    return lorentzian(x + 0.5*lamb, delta) - (0.5*lorentzian(x, delta)
        + 0.5*lorentzian(np.sqrt(np.abs(x)), lamb)/np.sqrt(np.abs(x)))*np.heaviside(x, 1)
plt.plot(xs, f(xs, lamb, delta))
plt.ylim(-1,1)
plt.ylabel("Δρ(E)")
plt.xlabel("E")
plt.savefig("Delta_rho.png", bbox_inches='tight')
plt.show()
# +
xs = np.arange(-3,3,0.01)
plt.clf()
plt.plot(xs, -lorentzian(xs,0.001), label = "δ-potential")
plt.plot(xs, np.exp(-np.abs(xs)), c = "gray", label = "Bound state")
plt.legend()
plt.ylim(-1,1)
plt.xlabel("mλx")
plt.savefig("pot_and_boundstate.png", bbox_inches='tight')
plt.show()
# -
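As a sanity check on the Lorentzian used above as a δ-function approximation: its integral over the real line is 1 for any width δ, so a numerical sum over a wide window should come out close to 1 (a quick standalone check, not part of the figures):

```python
import numpy as np

def lorentzian(x, delta):
    return delta / np.pi / (x**2 + delta**2)

dx = 0.001
xs = np.arange(-100, 100, dx)
integral = np.sum(lorentzian(xs, 0.01)) * dx  # simple Riemann sum
print(round(integral, 2))  # 1.0
```

The grid spacing has to resolve the peak (width ~δ), which is why `dx` is ten times smaller than the δ used here.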
|
assets/img/bound-state-figs.ipynb
|